= Seven states of randomness =
The seven states of randomness in probability theory, fractals and risk analysis are extensions of the concept of randomness as modeled by the normal distribution. These seven states were first introduced by Benoît Mandelbrot in his 1997 book Fractals and Scaling in Finance, which applied fractal analysis to the study of risk and randomness. The classification builds upon the three main states of randomness: mild, slow, and wild. The importance of the seven-states classification for mathematical finance is that methods such as the Markowitz mean-variance portfolio and the Black–Scholes model may be invalidated as the tails of the distribution of returns are fattened: the former relies on finite standard deviation (volatility) and stability of correlation, while the latter is constructed upon Brownian motion.

== History ==

These seven states build on earlier work of Mandelbrot in 1963: "The variations of certain speculative prices" and "New methods in statistical economics", in which he argued that most statistical models approached only a first stage of dealing with indeterminism in science and ignored many aspects of real-world turbulence, in particular most cases of financial modeling. He presented this view at the International Congress for Logic (1964) in an address titled "The Epistemology of Chance in Certain Newer Sciences". Intuitively speaking, Mandelbrot argued that the traditional normal distribution does not properly capture empirical, "real world" distributions, and that there are other forms of randomness that can be used to model extreme changes in risk. He observed that randomness can become quite "wild" if the requirements of finite mean and variance are abandoned. Wild randomness corresponds to situations in which a single observation, or a particular outcome, can impact the total in a very disproportionate way.
The classification was formally introduced in his 1997 book Fractals and Scaling in Finance as a way to bring insight into the three main states of randomness: mild, slow, and wild.

Given N addends, portioning concerns the relative contribution of the addends to their sum. By even portioning, Mandelbrot meant that the addends were of the same order of magnitude; otherwise he considered the portioning to be concentrated. Given the moment of order q of a random variable, Mandelbrot called the qth root of that moment the scale factor (of order q). The seven states are:
* Proper mild randomness: short-run portioning is even for N = 2, e.g. the normal distribution
* Borderline mild randomness: short-run portioning is concentrated for N = 2, but eventually becomes even as N grows, e.g. the exponential distribution with rate λ = 1 (and so with expected value 1/λ = 1)
* Slow randomness with finite delocalized moments: the scale factor increases faster than q but no faster than the wth root of q, w < 1
* Slow randomness with finite and localized moments: the scale factor increases faster than any power of q but remains finite, e.g. the lognormal distribution and, importantly, the bounded uniform distribution (which, having a finite scale factor for all q by construction, cannot be pre-wild)
* Pre-wild randomness: the scale factor becomes infinite for q > 2, e.g. the Pareto distribution with α = 2.5
* Wild randomness: infinite second moment, but finite moment of some positive order q, e.g. the Pareto distribution with α ≤ 2
* Extreme randomness: all moments are infinite, e.g. the log-Cauchy distribution

Wild randomness has applications outside financial markets, e.g. it has been used in the analysis of turbulent situations such as wild forest fires.
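The scale-factor criterion can be checked directly for the Pareto example above. The sketch below is a minimal illustration, not from the source: it assumes a Pareto distribution with minimum 1 and tail index α, whose moment of order q has the closed form E[U^q] = α/(α − q) for q < α and diverges otherwise, so the scale factor (E[U^q])^(1/q) blows up exactly where pre-wild behaviour begins.

```python
import math

def pareto_scale_factor(q: float, alpha: float) -> float:
    """Scale factor (E[U^q])**(1/q) for a Pareto(alpha) variable on [1, inf).

    E[U^q] = alpha / (alpha - q) when q < alpha, and diverges otherwise.
    """
    if q >= alpha:
        return math.inf  # the moment of order q does not exist
    return (alpha / (alpha - q)) ** (1.0 / q)

# Pre-wild randomness: alpha = 2.5 gives a finite scale factor for q = 2
# but an infinite one for q = 3 (indeed for any q > 2.5).
print(pareto_scale_factor(2.0, 2.5))  # finite
print(pareto_scale_factor(3.0, 2.5))  # inf

# Wild randomness: alpha = 2 already has an infinite second moment.
print(pareto_scale_factor(2.0, 2.0))  # inf
```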
Using elements of this distinction, in March 2006 (before the 2008 financial crisis, and four years before the 2010 Flash Crash, during which the Dow Jones Industrial Average had a 1,000-point intraday swing within minutes), Mandelbrot and Nassim Taleb published an article in the Financial Times arguing that the traditional "bell curves" that have been in use for over a century are inadequate for measuring risk in financial markets, given that such curves disregard the possibility of sharp jumps or discontinuities. Contrasting this approach with traditional approaches based on random walks, they stated:

We live in a world primarily driven by random jumps, and tools designed for random walks address the wrong problem.

Mandelbrot and Taleb pointed out that although one can assume that the odds of finding a person who is several miles tall are extremely low, comparably extreme observations cannot be excluded in other areas of application. They argued that while traditional bell curves may provide a satisfactory representation of height and weight in the population, they do not provide a suitable modeling mechanism for market risks or returns, where just ten trading days represent 63 per cent of the returns between 1956 and 2006.

== Definitions ==

=== Doubling convolution ===

If the probability density of U = U′ + U″ is denoted p₂(u), then it can be obtained by the doubling convolution

p₂(x) = ∫ p(u) p(x − u) du
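As a quick numerical illustration (not from the source), the doubling convolution of a standard normal density can be computed on a grid; the result should match the N(0, 2) density, since the sum of two independent standard normals is normal with variance 2. The grid spacing and range below are arbitrary choices.

```python
import math

def normal_pdf(x: float, var: float = 1.0) -> float:
    """Density of a centred normal distribution with variance var."""
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# Discretize p(u) and form p2(x) = ∫ p(u) p(x - u) du as a Riemann sum
# (the doubling convolution).
du = 0.01
grid = [i * du for i in range(-800, 801)]  # covers [-8, 8]

def p2(x: float) -> float:
    return sum(normal_pdf(u) * normal_pdf(x - u) for u in grid) * du

# The convolution of two N(0,1) densities is the N(0,2) density.
for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(p2(x) - normal_pdf(x, var=2.0)) < 1e-6
print("doubling convolution matches N(0, 2)")
```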
=== Short run portioning ratio ===

When u is known, the conditional probability density of u′ is given by the portioning ratio:

p(u′) p(u − u′) / p₂(u)

=== Concentration in mode ===

In many important cases, the maximum of p(u′) p(u − u′) occurs near u′ = u/2, or near u′ = 0 and u′ = u. Take the logarithm of p(u′) p(u − u′) and write:

Δ(u) = 2 log p(u/2) − [log p(0) + log p(u)]

* If log p(u) is cap-convex, the portioning ratio is maximal for u′ = u/2
* If log p(u) is straight, the portioning ratio is constant
* If log p(u) is cup-convex, the portioning ratio is minimal for u′ = u/2

=== Concentration in probability ===

Splitting the doubling convolution into three parts gives:

p₂(u) = ∫₀ᵘ p(u′) p(u − u′) du′ = {∫₀^ũ + ∫_ũ^(u−ũ) + ∫_(u−ũ)^u} p(u′) p(u − u′) du′ = I_L + I₀ + I_R

p(u) is short-run concentrated in probability if it is possible to select ũ(u) so that the middle interval (ũ, u − ũ) has the following two properties as u → ∞:
* I₀/p₂(u) → 0
* (u − 2ũ)/u does not → 0

=== Localized and delocalized moments ===

Consider the formula E[U^q] = ∫₀^∞ u^q p(u) du. If p(u) is the scaling distribution, the integrand is maximal at 0 and
∞; in other cases the integrand may have a sharp global maximum at some value ũ_q defined by the following equation:

0 = d/du (q log u + log p(u)) = q/u − |d log p(u)/du|

One must also know u^q p(u) in the neighborhood of ũ_q. The function u^q p(u) often admits a "Gaussian" approximation given by:

log[u^q p(u)] = log p(u) + q log u = constant − (u − ũ_q)² / (2σ̃_q²)

When u^q p(u) is well approximated by a Gaussian density, the bulk of E[U^q] originates in the "q-interval" defined as [ũ_q − σ̃_q, ũ_q + σ̃_q]. The Gaussian q-intervals greatly overlap for all values of σ; the Gaussian moments are called delocalized. The lognormal's q-intervals are uniformly spaced and their width is independent of q; therefore, if the lognormal is sufficiently skew, the q-interval and (q + 1)-interval do not overlap. The lognormal moments are called uniformly localized. In other cases, neighboring q-intervals cease to overlap for sufficiently high q; such moments are called asymptotically localized.

== See also ==
History of randomness
Random sequence
Fat-tailed distribution
Heavy-tailed distribution
Daubechies wavelet, for a system based on infinite moments (chaotic waves)

== References ==
= Seventh power =
In arithmetic and algebra, the seventh power of a number n is the result of multiplying seven instances of n together. So:

n⁷ = n × n × n × n × n × n × n.

Seventh powers are also formed by multiplying a number by its sixth power, the square of a number by its fifth power, or the cube of a number by its fourth power. The sequence of seventh powers of integers is:

0, 1, 128, 2187, 16384, 78125, 279936, 823543, 2097152, 4782969, 10000000, 19487171, 35831808, 62748517, 105413504, 170859375, 268435456, 410338673, 612220032, 893871739, 1280000000, 1801088541, 2494357888, 3404825447, 4586471424, 6103515625, 8031810176, ... (sequence A001015 in the OEIS)

In the archaic notation of Robert Recorde, the seventh power of a number was called the "second sursolid".

== Properties ==

Leonard Eugene Dickson studied generalizations of Waring's problem for seventh powers, showing that every non-negative integer can be represented as a sum of at most 258 non-negative seventh powers (1⁷ = 1 and 2⁷ = 128). All but finitely many positive integers can be expressed more simply as the sum of at most 46 seventh powers. If powers of negative integers are allowed, only 12 powers are required.

The smallest number that can be represented in two different ways as a sum of four positive seventh powers is 2056364173794800. The smallest seventh power that can be represented as a sum of eight distinct seventh powers is:

102⁷ = 12⁷ + 35⁷ + 53⁷ + 58⁷ + 64⁷ + 83⁷ + 85⁷ + 90⁷.

The two known examples of a seventh power expressible as the sum of seven seventh powers are

568⁷ = 127⁷ + 258⁷ + 266⁷ + 413⁷ + 430⁷ + 439⁷ + 525⁷ (M. Dodrill, 1999)

and

626⁷ = 625⁷ + 309⁷ + 258⁷ + 255⁷ + 158⁷ + 148⁷ + 91⁷ (Maurice Blondot, 14 November 2000);

any example with fewer terms in the sum would be a counterexample to Euler's sum of powers conjecture, which is currently only known to be false for the powers 4 and 5.

== See also ==
Eighth power
Sixth power
Fifth power (algebra)
Fourth power
Cube (algebra)
Square (algebra)

== References ==
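The seventh-power identities quoted above can be verified with exact integer arithmetic; the following quick check is illustrative and not part of the article.

```python
# Verify the quoted seventh-power identities using exact integers.
def sum7(terms):
    """Sum of the seventh powers of the given integers."""
    return sum(t ** 7 for t in terms)

assert 102 ** 7 == sum7([12, 35, 53, 58, 64, 83, 85, 90])
assert 568 ** 7 == sum7([127, 258, 266, 413, 430, 439, 525])
assert 626 ** 7 == sum7([625, 309, 258, 255, 158, 148, 91])
print("all identities check out")
```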
= Sexagesimal =
Sexagesimal, also known as base 60, is a numeral system with sixty as its base. It originated with the ancient Sumerians in the 3rd millennium BC, was passed down to the ancient Babylonians, and is still used—in a modified form—for measuring time, angles, and geographic coordinates. The number 60, a superior highly composite number, has twelve divisors, namely 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60, of which 2, 3, and 5 are prime numbers. With so many factors, many fractions involving sexagesimal numbers are simplified. For example, one hour can be divided evenly into sections of 30 minutes, 20 minutes, 15 minutes, 12 minutes, 10 minutes, 6 minutes, 5 minutes, 4 minutes, 3 minutes, 2 minutes, and 1 minute. 60 is the smallest number that is divisible by every number from 1 to 6; that is, it is the lowest common multiple of 1, 2, 3, 4, 5, and 6. In this article, all sexagesimal digits are represented as decimal numbers, except where otherwise noted. For example, the largest sexagesimal digit is "59". == Origin == According to Otto Neugebauer, the origins of sexagesimal are not as simple, consistent, or singular in time as they are often portrayed. Throughout their many centuries of use, which continues today for specialized topics such as time, angles, and astronomical coordinate systems, sexagesimal notations have always contained a strong undercurrent of decimal notation, such as in how sexagesimal digits are written. Their use has also always included (and continues to include) inconsistencies in where and how various bases are used to represent numbers even within a single text. The most powerful driver for rigorous, fully self-consistent use of sexagesimal has always been its mathematical advantages for writing and calculating fractions. In ancient texts this shows up in the fact that sexagesimal is used most uniformly and consistently in mathematical tables of data. 
Another practical factor that helped expand the use of sexagesimal in the past, even if less consistently than in mathematical tables, was its decided advantages to merchants and buyers for making everyday financial transactions easier when they involved bargaining for and dividing up larger quantities of goods. In the late 3rd millennium BC, Sumerian/Akkadian units of weight included the kakkaru (talent, approximately 30 kg) divided into 60 manû (mina), which was further subdivided into 60 šiqlu (shekel); the descendants of these units persisted for millennia, though the Greeks later coerced this relationship into the more base-10-compatible ratio of a shekel being one 50th of a mina.

Apart from mathematical tables, the inconsistencies in how numbers were represented within most texts extended all the way down to the most basic cuneiform symbols used to represent numeric quantities. For example, the cuneiform symbol for 1 was an ellipse made by applying the rounded end of the stylus at an angle to the clay, while the sexagesimal symbol for 60 was a larger oval or "big 1". But within the same texts in which these symbols were used, the number 10 was represented as a circle made by applying the rounded end of the stylus perpendicular to the clay, and a larger circle or "big 10" was used to represent 100. Such multi-base numeric quantity symbols could be mixed with each other and with abbreviations, even within a single number. The details and even the magnitudes implied (since zero was not used consistently) were idiomatic to the particular time periods, cultures, and quantities or concepts being represented. In modern times there is the recent innovation of adding decimal fractions to sexagesimal astronomical coordinates.

== Usage ==

=== Babylonian mathematics ===

The sexagesimal system as used in ancient Mesopotamia was not a pure base-60 system, in the sense that it did not use 60 distinct symbols for its digits.
Instead, the cuneiform digits used ten as a sub-base in the fashion of a sign-value notation: a sexagesimal digit was composed of a group of narrow, wedge-shaped marks representing units up to nine and a group of wide, wedge-shaped marks representing up to five tens. The value of the digit was the sum of the values of its component parts. Numbers larger than 59 were indicated by multiple symbol blocks of this form in place value notation. Because there was no symbol for zero it is not always immediately obvious how a number should be interpreted, and its true value must sometimes have been determined by its context; for example, the symbols for 1 and 60 are identical. Later Babylonian texts used a placeholder symbol to represent zero, but only in medial positions, not on the right-hand side of the number, as in numbers like 13200.

=== Other historical usages ===

In the Chinese calendar, a system is commonly used in which days or years are named by positions in a sequence of ten stems and in another sequence of 12 branches. The same stem and branch repeat every 60 steps through this cycle. Book VIII of Plato's Republic involves an allegory of marriage centered on the number 60⁴ = 12960000 and its divisors. This number has the particularly simple sexagesimal representation 1,0,0,0,0. Later scholars have invoked both Babylonian mathematics and music theory in an attempt to explain this passage. Ptolemy's Almagest, a treatise on mathematical astronomy written in the second century AD, uses base 60 to express the fractional parts of numbers. In particular, his table of chords, which was essentially the only extensive trigonometric table for more than a millennium, has fractional parts of a degree in base 60, and was practically equivalent to a modern-day table of values of the sine function. Medieval astronomers also used sexagesimal numbers to note time.
Al-Biruni first subdivided the hour sexagesimally into minutes, seconds, thirds and fourths in 1000 while discussing Jewish months. Around 1235 John of Sacrobosco continued this tradition, although Nothaft thought Sacrobosco was the first to do so. The Parisian version of the Alfonsine tables (ca. 1320) used the day as the basic unit of time, recording multiples and fractions of a day in base-60 notation. The sexagesimal number system continued to be frequently used by European astronomers for performing calculations as late as 1671. For instance, Jost Bürgi in Fundamentum Astronomiae (presented to Emperor Rudolf II in 1592), his colleague Ursus in Fundamentum Astronomicum, and possibly also Henry Briggs, used multiplication tables based on the sexagesimal system in the late 16th century to calculate sines. In the late 18th and early 19th centuries, Tamil astronomers made astronomical calculations, reckoning with shells and using a mixture of decimal and sexagesimal notations developed by Hellenistic astronomers. Base-60 number systems have also been used in some other cultures that are unrelated to the Sumerians, for example by the Ekari people of Western New Guinea.

=== Modern usage ===

Modern uses for the sexagesimal system include measuring angles, geographic coordinates, electronic navigation, and time. One hour of time is divided into 60 minutes, and one minute is divided into 60 seconds. Thus, a measurement of time such as 3:23:17 (3 hours, 23 minutes, and 17 seconds) can be interpreted as a whole sexagesimal number (no sexagesimal point), meaning 3 × 60² + 23 × 60¹ + 17 × 60⁰ seconds. However, each of the three sexagesimal digits in this number (3, 23, and 17) is written using the decimal system. Similarly, the practical unit of angular measure is the degree, of which there are 360 (six sixties) in a circle. There are 60 minutes of arc in a degree, and 60 arcseconds in a minute.
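The interpretation of a time stamp as a whole sexagesimal number can be sketched as follows; this is an illustrative helper, not from the source.

```python
def sexagesimal_to_int(digits):
    """Interpret a list of base-60 digits (most significant first) as an integer."""
    value = 0
    for d in digits:
        assert 0 <= d < 60, "each sexagesimal digit must be in 0..59"
        value = value * 60 + d
    return value

# 3:23:17 (3 hours, 23 minutes, 17 seconds) read as a whole sexagesimal
# number: 3*60**2 + 23*60 + 17 seconds.
print(sexagesimal_to_int([3, 23, 17]))  # 12197
```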
==== YAML ====

In version 1.1 of the YAML data storage format, sexagesimals are supported for plain scalars, and formally specified both for integers and floating-point numbers. This has led to confusion: for example, some MAC addresses would be recognised as sexagesimals and loaded as integers, while others were not and were loaded as strings. In YAML 1.2 support for sexagesimals was dropped.

== Notations ==

In Hellenistic Greek astronomical texts, such as the writings of Ptolemy, sexagesimal numbers were written using Greek alphabetic numerals, with each sexagesimal digit being treated as a distinct number. Hellenistic astronomers adopted a new symbol for zero, —°, which morphed over the centuries into other forms, including the Greek letter omicron, ο, normally meaning 70, but permissible in a sexagesimal system where the maximum value in any position is 59. The Greeks limited their use of sexagesimal numbers to the fractional part of a number. In medieval Latin texts, sexagesimal numbers were written using Arabic numerals; the different levels of fractions were denoted minuta (i.e., fraction), minuta secunda, minuta tertia, etc. By the 17th century it became common to denote the integer part of sexagesimal numbers by a superscripted zero, and the various fractional parts by one or more accent marks. John Wallis, in his Mathesis universalis, generalized this notation to include higher multiples of 60, giving as an example the number 49‵‵‵‵36‵‵‵25‵‵15‵1°15′2″36‴49⁗, where the numbers to the left are multiplied by higher powers of 60, the numbers to the right are divided by powers of 60, and the number marked with the superscripted zero is multiplied by 1. This notation leads to the modern signs for degrees, minutes, and seconds.
The same minute and second nomenclature is also used for units of time, and the modern notation for time with hours, minutes, and seconds written in decimal and separated from each other by colons may be interpreted as a form of sexagesimal notation. In some usage systems, each position past the sexagesimal point was numbered, using Latin or French roots: prime or primus, seconde or secundus, tierce, quatre, quinte, etc. To this day we call the second-order part of an hour or of a degree a "second". Until at least the 18th century, 1/60 of a second was called a "tierce" or "third".

In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integer and fractional portions of the number and using a comma (,) to separate the positions within each portion. For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days. This notation is used in this article.

== Fractions and irrational numbers ==

=== Fractions ===

In the sexagesimal system, any fraction in which the denominator is a regular number (having only 2, 3, and 5 in its prime factorization) may be expressed exactly; all fractions of this type in which the denominator is less than or equal to 60 can be written out in full. However, numbers that are not regular form more complicated repeating fractions.
For example:
* 1⁄7 = 0;8,34,17 (the sequence of sexagesimal digits 8,34,17 repeats infinitely many times; the expansions below repeat similarly)
* 1⁄11 = 0;5,27,16,21,49
* 1⁄13 = 0;4,36,55,23
* 1⁄14 = 0;4,17,8,34
* 1⁄17 = 0;3,31,45,52,56,28,14,7
* 1⁄19 = 0;3,9,28,25,15,47,22,6,18,56,50,31,34,44,12,37,53,41
* 1⁄59 = 0;1
* 1⁄61 = 0;0,59

The fact that the two numbers adjacent to sixty, 59 and 61, are both prime implies that fractions repeating with a period of one or two sexagesimal digits can only have regular-number multiples of 59 or 61 as their denominators, and that other non-regular numbers have fractions that repeat with a longer period.

=== Irrational numbers ===

The representations of irrational numbers in any positional number system (including decimal and sexagesimal) neither terminate nor repeat. The square root of 2, the length of the diagonal of a unit square, was approximated by the Babylonians of the Old Babylonian Period (1900 BC – 1650 BC) as

1;24,51,10 = 1 + 24/60 + 51/60² + 10/60³ = 30547/21600 ≈ 1.41421296...

Because √2 ≈ 1.41421356... is an irrational number, it cannot be expressed exactly in sexagesimal (or indeed in any integer-base system), but its sexagesimal expansion does begin 1;24,51,10,7,46,6,4,44... (OEIS: A070197)

The value of π as used by the Greek mathematician and scientist Ptolemy was 3;8,30 = 3 + 8/60 + 30/60² = 377/120 ≈ 3.141666....

Jamshīd al-Kāshī, a 15th-century Persian mathematician, calculated 2π as a sexagesimal expression to its correct value when rounded to nine subdigits (thus to 1/60⁹); his value for 2π was 6;16,59,28,1,34,51,46,14,50. Like √2 above, 2π is an irrational number and cannot be expressed exactly in sexagesimal. Its sexagesimal expansion begins 6;16,59,28,1,34,51,46,14,49,55,12,35...
(OEIS: A091649)

== See also ==
Clock
Latitude
Trigonometry

== References ==

== Further reading ==
Ifrah, Georges (1999), The Universal History of Numbers: From Prehistory to the Invention of the Computer, Wiley, ISBN 0-471-37568-3.
Nissen, Hans J.; Damerow, P.; Englund, R. (1993), Archaic Bookkeeping, University of Chicago Press, ISBN 0-226-58659-6.

== External links ==
"Facts on the Calculation of Degrees and Minutes" is an Arabic-language book by Sibṭ al-Māridīnī, Badr al-Dīn Muḥammad ibn Muḥammad (b. 1423). This work offers a very detailed treatment of sexagesimal mathematics and includes what appears to be the first mention of the periodicity of sexagesimal fractions.
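The fraction and approximation claims in this article are easy to reproduce. The sketch below is illustrative, not from the source: it expands a rational number into base-60 digits by long division, and parses Neugebauer's semicolon/comma notation (described above) back into an exact fraction.

```python
from fractions import Fraction

def sexagesimal_digits(numer, denom, places):
    """First `places` base-60 digits of numer/denom after the sexagesimal point."""
    digits = []
    rem = numer % denom
    for _ in range(places):
        rem *= 60
        digits.append(rem // denom)
        rem %= denom
    return digits

def from_neugebauer(s):
    """Parse Neugebauer notation, e.g. '1;24,51,10', into an exact Fraction."""
    whole, _, frac = s.partition(";")
    value = Fraction(int(whole))
    for i, d in enumerate(frac.split(","), start=1):
        value += Fraction(int(d), 60 ** i)
    return value

# 1/7 = 0;8,34,17 with the block 8,34,17 repeating:
print(sexagesimal_digits(1, 7, 6))  # [8, 34, 17, 8, 34, 17]

# The Old Babylonian approximation to the square root of 2:
approx = from_neugebauer("1;24,51,10")
print(approx)         # 30547/21600
print(float(approx))  # ≈ 1.41421296...
```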
= Seán Dineen =
Seán Dineen (12 February 1944 – 18 January 2024) was an Irish mathematician specialising in complex analysis. His academic career was spent, in the main, at University College Dublin (UCD), where he was Professor of Mathematics, serving as Head of Department and as Head of the School of Mathematical Sciences before retiring in 2009. Dineen died on 18 January 2024, at the age of 79.

== Education ==

Seán Dineen was born in Clonakilty, Co. Cork, Ireland on 12 February 1944. He attended St Mary's, the first secondary school for boys in Clonakilty, which his parents Jerry (Jeremiah) and Margaret Dineen had founded in 1938. His father died in 1953, and the school was subsequently run by his mother. He entered University College Cork (UCC) in 1961 to study mathematics, graduating with an honours BSc in mathematics in 1964. While at UCC, he was involved in setting up the student mathematics society there. His tutors and lecturers included Finbarr Holland, Michael Mortell, Tadhg Carey, Paddy Kennedy, Paddy Barry and Siobhán O'Shea (later Siobhán Vernon). He completed his MSc there in 1965, and was awarded a National University of Ireland Travelling Studentship. Dineen was the first student of pure mathematics from UCC to travel to the USA for his doctorate; he did his coursework at the University of Maryland. His official supervisor there was John Horvath, but his PhD research was carried out in Rio de Janeiro at the Instituto Nacional de Matemática Pura e Aplicada (IMPA) under the supervision of Leopoldo Nachbin. He completed his thesis, "Holomorphy Types on a Banach Space", in 1970.

== UCD Career ==

Dineen spent the year 1969–1970 at Johns Hopkins as an instructor before returning to Ireland. After two years at the Dublin Institute for Advanced Studies (DIAS), he secured a position at University College Dublin. Seven years later, in 1979, he was appointed to the professorship and chair of mathematics vacated by J. R. Timoney.
He spent the rest of his career there, formally retiring in 2009.

== Mathematics ==

Dineen's work was principally in the area of infinite-dimensional complex analysis and the topological structure of spaces of holomorphic functions. He later worked on bounded symmetric domains and spectral theory, among other topics. He has said: "If you want to stay active as a research mathematician, you have to reinvent yourself regularly". His academic footprint includes 10 books and monographs, over 100 peer-reviewed research articles, over 4000 citations, 11 PhD students, over 40 collaborators, and the organisation of numerous mathematical conferences and meetings. In 1987 he was elected to the Royal Irish Academy.

== Selected papers ==
* Dineen, Seán "The second dual of a JB∗ triple system". Complex analysis, functional analysis and approximation theory (Campinas, 1984), 67–69, North-Holland Math. Stud., 125, Notas Mat., 110, North-Holland, Amsterdam, 1986.
* Dineen, Seán "Complete holomorphic vector fields on the second dual of a Banach space". Math. Scand. 59 (1986), no. 1, 131–142.
* Dineen, Seán "Holomorphy types on a Banach space". Studia Math. 39 (1971), 241–288.
* Alencar, Raymundo; Aron, Richard M.; Dineen, Seán "A reflexive space of holomorphic functions in infinitely many variables". Proc. Amer. Math. Soc. 90 (1984), no. 3, 407–411.
* Dineen, Seán; Timoney, Richard M. "On a problem of H. Bohr". Bull. Soc. Roy. Sci. Liège 60 (1991), no. 6, 401–404.
* Dineen, Seán; Timoney, Richard M.; Vigué, Jean-Pierre "Pseudodistances invariantes sur les domaines d'un espace localement convexe". Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 12 (1985), no. 4, 515–529.
* Dineen, Seán; Timoney, Richard M. "Absolute bases, tensor products and a theorem of Bohr". Studia Math. 94 (1989), no. 3, 227–234.
* Dineen, Seán; Mellon, Pauline "Holomorphic functions on symmetric Banach manifolds of compact type are constant". Math. Z. 229 (1998), no. 4, 753–765.
* Dineen, Seán; Mackey, Michael; Mellon, Pauline "The density property for JB∗-triples". Studia Math. 137 (1999), no. 2, 143–160.
* Dineen, Seán; Patyi, Imre; Venkova, Milena "Inverses depending holomorphically on a parameter in a Banach space". J. Funct. Anal. 237 (2006), no. 1, 338–349.
* Dineen, Seán; Mujica, Jorge "A monomial basis for the holomorphic functions on c₀". Proc. Amer. Math. Soc. 141 (2013), no. 5, 1663–1672.
* Dineen, Seán; Harte, Robin E. "Banach-valued axiomatic spectra". Studia Math. 175 (2006), no. 3, 213–232.
* Dineen, Seán; Galindo, Pablo; García, Domingo; Maestre, Manuel "Linearization of holomorphic mappings on fully nuclear spaces with a basis". Glasgow Math. J. 36 (1994), no. 2, 201–208.

== Selected books ==
* Analysis, a Gateway to Understanding. World Scientific, 2012. 320 pp.
* Probability Theory in Finance. A Mathematical Guide to the Black-Scholes Formula. Second edition. Graduate Studies in Mathematics, 70. American Mathematical Society, Providence, RI, 2013. xiv+305 pp.
* Probability Theory in Finance. A Mathematical Guide to the Black-Scholes Formula. Graduate Studies in Mathematics, 70. American Mathematical Society, Providence, RI, 2005. xiv+294 pp.
* Complex Analysis on Infinite-Dimensional Spaces. Springer Monographs in Mathematics. Springer-Verlag London, Ltd., London, 1999. xvi+543 pp.
* Multivariate Calculus and Geometry. Springer Undergraduate Mathematics Series. Springer-Verlag London, Ltd., London, 1998. xii+262 pp. Third edition 2014. xiv+257 pp.
* Functions of Two Variables. Chapman and Hall Mathematics Series. Chapman & Hall, London, 1995. x+189 pp. Second edition, Chapman & Hall/CRC, Boca Raton, FL, 2000. xii+191 pp.
* The Schwarz Lemma. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1989. x+248 pp.

== References ==

== External links ==
Seán Dineen at the Mathematics Genealogy Project
Obituary in Irish Maths Society Bulletin
= Shahar Mozes =
Shahar Mozes (Hebrew: שחר מוזס) is an Israeli mathematician. Mozes received his doctorate from the Hebrew University of Jerusalem in 1991, with the thesis Actions of Cartan subgroups written under the supervision of Hillel Fürstenberg. At the Hebrew University of Jerusalem, Mozes became a senior lecturer in 1993, an associate professor in 1996, and a full professor in 2002. Mozes does research on Lie groups and discrete subgroups of Lie groups, geometric group theory, ergodic theory, and aperiodic tilings. His collaborators include Jean Bourgain, Alex Eskin, Elon Lindenstrauss, Gregory Margulis, and Hee Oh.

In 2000 Mozes received the Erdős Prize. In 1998 he was an invited speaker at the International Congress of Mathematicians (ICM) in Berlin, with the talk Products of trees, lattices and simple groups. He was a plenary speaker at the ICM satellite conference "Geometry, Topology and Dynamics in Negative Curvature", held at the Raman Research Institute of the International Centre for Theoretical Sciences (ICTS) from August 2 to August 7, 2010.

== Selected publications ==
* Mozes, Shahar (1989). "Tilings, substitution systems and dynamical systems generated by them". Journal d'Analyse Mathématique. 53 (1): 139–186. doi:10.1007/BF02793412. ISSN 0021-7670.
* Mozes, Shahar (1990). "Reflection processes on graphs and Weyl groups". Journal of Combinatorial Theory, Series A. 53 (1): 128–142. doi:10.1016/0097-3165(90)90024-Q. ISSN 0097-3165.
* Mozes, Shahar (1992). "Mixing of all orders of Lie groups actions". Inventiones Mathematicae. 107 (1): 235–241. Bibcode:1992InMat.107..235M. doi:10.1007/BF01231889. ISSN 0020-9910.
* Lubotzky, Alexander; Mozes, S.; Raghunathan, M. S. (1993). "Cyclic subgroups of exponential growth and metrics on discrete groups". C. R. Acad. Sci. Paris, Série I. 317: 735.
* Mozes, Shahar (1995). "Epimorphic subgroups and invariant measures". Ergodic Theory and Dynamical Systems. 15 (6): 1207–1210. doi:10.1017/S0143385700009871. ISSN 0143-3857.
* Burger, M.; Mozes, S. (1996). "CAT(-1)-Spaces, Divergence Groups and their Commensurators". Journal of the American Mathematical Society. 9 (1): 57–93. doi:10.1090/S0894-0347-96-00196-8. JSTOR 2152840.
* Eskin, Alex; Mozes, Shahar; Shah, Nimish (1996). "Unipotent Flows and Counting Lattice Points on Homogeneous Varieties". The Annals of Mathematics. 143 (2): 253. doi:10.2307/2118644. JSTOR 2118644. S2CID 18628583.
* Eskin, A.; Mozes, S.; Shah, N. (1997). "Non-divergence of translates of certain algebraic measures". Geometric and Functional Analysis. 7: 48–80. doi:10.1007/PL00001616.
* Burger, Marc; Mozes, Shahar (1997). "Finitely presented products of trees". C. R. Acad. Sci. Paris, Série I. 324: 747–752. doi:10.1016/S0764-4442(97)86938-8.
* Mozes, S. (1997). "Aperiodic tilings". Inventiones Mathematicae. 128 (3): 603–611. Bibcode:1997InMat.128..603M. doi:10.1007/s002220050153.
* Eskin, A.; Margulis, G.; Mozes, S. (1998). "Upper bounds and asymptotics in a quantitative version of the Oppenheim conjecture". Annals of Mathematics. 147 (1): 93–141. doi:10.2307/120984. JSTOR 120984.
* Lubotzky, Alexander; Mozes, Shahar; Raghunathan, M. S. (2000). "The word and Riemannian metrics on lattices of semisimple groups". Publications Mathématiques de l'IHÉS. 91: 5–53. doi:10.1007/BF02698740.
* Burger, Marc; Mozes, Shahar (2000). "Groups acting on trees: from local to global structure". Publications Mathématiques de l'IHÉS. 92: 113–150. doi:10.1007/BF02698915.
* Burger, Marc; Mozes, Shahar (2000). "Lattices in product of trees". Publications Mathématiques de l'IHÉS. 92: 151–194. doi:10.1007/BF02698916.
* Eskin, Alex; Mozes, Shahar; Oh, Hee (2002). "Uniform exponential growth for linear groups". International Mathematics Research Notices. 2002 (31): 1675–1683. arXiv:math/0108157. doi:10.1155/S1073792802108099. ISSN 1073-7928.
* Glasner, Yair; Mozes, Shahar (2005). "Automata and Square Complexes". Geometriae Dedicata. 111: 43–64. arXiv:math/0306259. doi:10.1007/s10711-004-1815-2.
Druţu, Cornelia; Mozes, Shahar; Sapir, Mark (2009). "Divergence in lattices in semisimple Lie groups and graphs of groups". Transactions of the American Mathematical Society. 362 (5): 2451–2505. arXiv:0801.4141. doi:10.1090/S0002-9947-09-04882-X. Bourgain, Jean; Furman, Alex; Lindenstrauss, Elon; Mozes, Shahar (2011). "Stationary measures and equidistribution for orbits of nonabelian semigroups on the torus". Journal of the American Mathematical Society. 24: 231–280. doi:10.1090/S0894-0347-2010-00674-1. == References == == External links == "Plenary lecture 9 by Shahar Mozes". YouTube. International Centre for Theoretical Sciences. 2 November 2016. (ICTS Conference, August 2010 — Mozes describes joint work with Bourgain, Furman, and Lindenstrauss.)
Wikipedia:Shams al-Din al-Samarqandi#0
Shams al-Din (IPA: /ʃamsaddiːn/) (Arabic: شمس الدين, lit. "sun of the faith") is an Arabic personal name or title. Notable persons with this name are: == 10th–13th century == Shams al-Din Altınapa, Seljuk atabeg Muhammad ibn Ahmad Shams al-Din al-Maqdisi (c. 945–1000), Arab geographer Shams al-Din Ibn Fallus (1194–1240), Arab Egyptian mathematician Shams al-Din Muhammad bin Ali, or Suzani Samarqandi (died 1166), Persian poet Shams al-Din Ildeniz (died c. 1175), atabeg of Azerbaijan Shams al-Din Muhammad ibn al-Muqaddam (died 1188), Zengid governor of Damascus and Ayyubid emir of Baalbek Shams-ud-din Iltutmish (1192–1236), Muslim Turkic sultan of Delhi Shamsuddin Sabzwari (died 1247), Sufi missionary in southern Punjab Shams al-Din Muhammad, or Shams Tabrizi (1185–1248), Persian Sufi mystic Shams al-Din Lu'lu' al-Amini (died 1251), regent of Aleppo Shams al-Din 'Ali ibn Mas'ud (died 1255), Mihrabanid malik of Sistan Ajall Shams al-Din Omar (1211–1279), provincial governor of Yunnan Shams al-Dīn Abū Al-ʿAbbās Aḥmad Ibn Muḥammad Ibn Khallikān (1211–1282), Iraqi Shafi'i Islamic scholar Shams al-Din Juvayni (died 1285), vizier and sahib-divan under three Mongol Ilkhans Shams al-Din Muhammad ibn Mahmud al-Shahrazuri (died c. 1290), Kurdish physician and historian == 14th–17th century == Shams al-Din Muhammad (1257–1310), imam of the Nizari Isma'ili community Shams al-Din al-Samarqandi (c. 1250 – c. 1310), astronomer and mathematician from Samarkand Shamsuddin Firuz Shah (died 1322), sultan of the Bengali kingdom of Lakhnauti Shams al-Din al-Ansari al-Dimashqi (1256–1327), Arab geographer Shams al-Din Abu’Abdallah Muhammad ibn’Abdallah ibn Muhammad ibn Ibrahim ibn Muhammad ibn Yusuf Lawati al-Tanji Ibn Battuta (1304–1368), explorer Shams-ud-Din Shah Mir (1300–1342), ruler of Kashmir Ali Shams al-Din I (died 1348), leader of the Tayyibi Isma'ili community Shams al-Din ibn Fazl Allah (died c. 1348), leader of the Sarbadars of Sabzewar Khwaja Shams al-Din 'Ali (died c. 
1352), leader of the Sarbadars of Sabzewar Shams ud din, or Shams Tabraiz (missionary) (died 1356), Ismaili saint in India Shamsuddin Ilyas Shah (died 1358), sultan of Bengal Shams al-Din Ibn Muflih (1308-1361), authority on Hanbali Law Shams ud-Din Amir Kulal (1278-1370), tribal head, scholar and religious figure in Turkistan Shams al-Din Abu Abd Allah al-Khalili (1320–1380), Syrian astronomer Shams al-Din al-Kirmani (1317-1384), Sunni scholar Khwaja Shams al-Din Muhammad Hafez-e Shirazi (1315–1390), Persian lyric poet Ali Shams al-Din II (died 1428), leader of the Tayyibi Isma'ili community Shams al-Din al-Fanari (1350–1431), Turkish logician, Islamic theologian, and Islamic legal academic Shamsuddin Ahmad Shah (1419-1436), ruler of Bengal Shams al-Din 'Ali ibn Qutb al-Din (c. 1387 – c. 1438), Mihrabanid malik of Sistan Shamsuddin Yusuf Shah (died 1481), ruler of Bengal Shamsuddin Muhammad Shah III (died 1482), sultan of Bahmani Shams ad-Din ibn Muhammad (died 1487), sultan of Adal Shamsuddin Muzaffar Shah (died 1494), Abyssinian sultan of Bengal Shams al-Din Muhammad (died c. 
1494), Mihrabanid malik of Sistan Shams al-Din Muhammad ibn `Abd al-Rahman al-Sakhawi (1428–1497), Egyptian Islamic scholar Mir Shams-ud-Din Araqi (1440–1515), Sufi Shi'a missionary in Kashmir Ali Shams al-Din III (died 1527), leader of the Tayyibi Isma'ili community Shamsuddin Muhammad Khan Sur Shah Ghazi (died 1555), Sultan of Bengal Shamsuddin Muhammad Ataga Khan (died 1562), minister in the Mughal court Shams al-Din al-Ramli (1513–1596), Egyptian Shafi'i scholar Khawaja Shamsuddin Khawafi (died 1600), minister to Emperor Akbar == 18th century–present == Shamseddin Amir-Alaei (1900–1994), Iranian politician and diplomat Muhammad Shamsuddeen III (1879–1935), Sultan of the Maldives Şemsettin Günaltay (1883–1961), prime minister of Turkey Şemsettin Mardin, Turkish diplomat Abul Kalam Shamsuddin (1897–1978), Bangladeshi journalist and politician Abu Jafar Shamsuddin (1911–1989), Bangladeshi author and novelist Shamsuddin Ahmed (1920–1971), Bangladeshi surgeon Khwaja Shams-ud-Din (1922–1999), Prime Minister of Jammu and Kashmir Shamsuddin Abul Kalam (1926–1997), Bangladeshi author and novelist Shamsuddin Qasemi (1935–1996), Bangladeshi Islamic scholar and politician Nasri Shamseddine (1927–1983), Lebanese singer and actor A. T. M. 
Shamsuddin (1927-2009), Bangladeshi author Khawaja Shamsuddin Azeemi (1927-2025), patriarch of the Sufi Order of Azeemia Mohammad Mehdi Shamseddine (1936–2001), Lebanese Twelver Shia Islamic scholar Shamsodin Vaezi (born 1936), Iraqi Twelver Shi'a Marja Abdul Aziz Shamsuddin (1938-2020), Malaysian politician Samsuddin Ahmed (1945–2020), Bangladeshi politician Shamsuddeen Usman (born 1949), Nigerian politician Semezdin Mehmedinović (born 1960), Bosnian writer Chettithody Shamshuddin (born 1970), Indian cricket umpire Mohammad Shamsuddin (born 1983), Bangladeshi sprinter Shamsuddin Amiri (born 1985), Afghan footballer Şemseddin Sami Efendi, pen-name of Sami Frashëri (1850–1904), Albanian writer, philosopher and playwright Ashari Samsudin (born 1985), Malaysian footballer Shamsuddin Ahmed (died 2020), Bangladeshi engineer and former MP AHM Shamsuddin Chowdhury Manik, Bangladeshi Supreme Court judge Shamsuddin Ahmed, Bangladeshi politician Ali Chamseddine (born 1953), Lebanese physicist Muhammad Ali Chamseddine (1942–2022), Lebanese poet and writer Chems-Eddine Hafiz (born 1954), Franco-Algerian lawyer Chems-Eddine Chitour (born 1944), Algerian scholar Chamseddine Rahmani (born 1990), Algerian footballer Chamseddine Harrag (born 1992), Algerian footballer Chamseddine Dhaouadi (born 1987), Tunisian retired footballer Chemseddine Chtibi (born 1982), Moroccan footballer Chemseddine Nessakh (born 1988), Algerian footballer == References ==
Wikipedia:Sharif Muhammad Azizul Haque#0
Sharif Muhammad Azizul Haque (also known as S. M. Azizul Haque) (February 1, 1924 – April 13, 2016) was a Bangladeshi professor and mathematician. He served as the head of the Department of Mathematics and the Dean of the Faculty of Science at Dhaka University. Following Bangladesh's independence, he was one of the 12 Bangladeshi scientists who played a key role in establishing the Bangladesh Academy of Sciences. Additionally, he served as the Chairman of the General Insurance Corporation and as the founding president of the Bangladesh Mathematical Society. == Early life == Azizul Haque was born on February 1, 1924, in the village of Astail in Udaypur Union, Mollahat Upazila, Bagerhat District of present-day Bangladesh. His father was Abdul Wajid Sharif, and his mother was Matiunnesa. He completed his primary education at Mollahat English Middle School, Ikhri Katenga English School in Khulna, and Wajid Memorial High School in Mollahat. In 1940, he passed his secondary education with first division under the University of Calcutta. He completed his higher secondary studies in 1942 at Presidency College. Subsequently, he earned his bachelor's and master's degrees in applied mathematics from the University of Calcutta. He achieved first division in almost all his academic examinations. == Career == After completing his education, Azizul Haque joined as a Customs Appraiser in Kolkata in 1947. Following the Partition of India, he worked in the same position in Chattogram and Khulna during the latter part of that year. In January 1948, he joined the Mathematics Department of Dhaka University as a lecturer and, in August of the same year, went to Imperial College London on a government scholarship for a Ph.D. On November 1, 1950, he was promoted to Reader (Associate Professor) at Dhaka University. In 1952, he participated in the "Foreign Student Summer Project" at the Massachusetts Institute of Technology. 
In 1954, he worked as an advisor for a UNESCO symposium held in Tokyo. From 1957 to 1958, he served as a research officer in the Mathematics Department of Manchester University under the Nuffield Foundation Fellowship. Between 1962 and 1963, he was a professor at Florida State University, and from 1963 to 1964, he taught at the University of Hawaii. At Dhaka University, he was promoted to professor in 1962 but had already been serving as the head of the department since 1954. Except for his time abroad, he remained the head of the Mathematics Department from 1954 to 1975. In 1965, he served as the chairperson of the Physics, Mathematics, Statistics, Astronomy, and Meteorology Division of the Pakistan Association for the Advancement of Science. From 1966 to 1972, Azizul Haque was the provost of Jahurul Haq Hall (formerly Iqbal Hall) at Dhaka University. He also served as the Dean of the Faculty of Science from 1972 to 1973. He retired from Dhaka University in 1982. Outside of teaching, he was also an actuary. He was affiliated with the Institute of Actuaries in London and worked as an insurance consultant in both Pakistan and Bangladesh. When the insurance corporation was established in Bangladesh, he was appointed Chairman of the General Insurance Corporation. He held this position from 1973 to 1976. In 1972, he co-founded the Bangladesh Mathematical Society and served as its president. == Honors == Azizul Haque was elected a Fellow of the Pakistan Academy of Sciences in 1970 and became a founding Fellow of the Bangladesh Academy of Sciences in 1973. == Death == Azizul Haque died on April 13, 2016, at the age of 92. == References ==
Wikipedia:Shaul Foguel#0
Shaul Reuven Foguel (Hebrew: שאול פוגל; December 5, 1931 – December 19, 2020) was an Israeli mathematician. Foguel was born into one of the founding families of the city of Tel Aviv; his mother, Dora Malkin, was a direct descendant of Saul Wahl. He received his B.S. and M.S. in Mathematics from the Hebrew University of Jerusalem and his PhD in Mathematics from Yale University in 1958. He wrote his dissertation, "Studies in Spectral Operators and the Basis Problem", under Nelson Dunford. Foguel was Professor Emeritus of the Hebrew University of Jerusalem and a supporter of the Israeli Left. He ran in the 1969 Knesset elections on the Peace List along with Gadi Yatziv, although it failed to win a seat. He spent his retirement in New York City, where his two sons, Professor Tuval Foguel of Adelphi University and Sy Foguel, the CEO of Berkshire Hathaway GUARD Insurance Companies, live. == Publications == Shaul R. Foguel "Selected topics in the study of Markov operators" Dept. of Mathematics, University of North Carolina at Chapel Hill, 1980 Shaul R. Foguel "The Ergodic theory of Markov processes" Van Nostrand Reinhold Co., 1969 == Selected articles == Foguel, Shaul R. (1983). "A generalized 0–2 law". Israel Journal of Mathematics. 45 (2): 219–224. doi:10.1007/BF02774018. Foguel, Shaul R. (1979). "Harris operators". Israel Journal of Mathematics. 33 (3): 281–309. doi:10.1007/BF02762166. Foguel, Shaul R. (1979). "Shlomo Horowitz 1938–1978". Israel Journal of Mathematics. 33 (3): 175–176. doi:10.1007/BF02762158. Foguel, Shaul R. (1976). "More on the "Zero-Two" Law". Proceedings of the American Mathematical Society. 61 (2): 262–264. doi:10.1090/S0002-9939-1976-0428076-2. == References ==
Wikipedia:Shavkat Ayupov#0
Shavkat Abdullayevich Ayupov (Russian: Шавкат Абдуллаевич Аюпов; born September 14, 1952, in Tashkent) is a Soviet and Uzbek scientist in the field of mathematics. He is an Academician of the Uzbekistan Academy of Sciences (1995). He is also a Senator in the Senate of the Oliy Majlis of the Republic of Uzbekistan (2020). He was awarded the title of Hero of Uzbekistan in 2021, and he holds the title of Distinguished Scientist of the Republic of Uzbekistan (2011). He has been President of the Academy of Sciences of Uzbekistan since December 16, 2024. == Biography == He was born into an intellectual family. His father, Abdulla Talipovich Ayupov, was a participant in the Great Patriotic War and headed the Department of Philosophy at Tashkent University. His mother, Marguba Khamidova, was a doctor who worked as a therapist at the 4th City Clinical Hospital. He graduated from Tashkent University in 1974 and was a student of Academician T.A. Sarymsakov. He earned a Candidate of Physical and Mathematical Sciences degree (1977) and a Doctor of Physical and Mathematical Sciences degree (1983). In 1989 he was awarded the Lenin Komsomol Prize for the work "Research on Operator Algebras and Non-Commutative Integration", as part of an authorial team that included Musulmonkul Abdullaevich Berdikulov and Shukhrat Muttalibovich Usmanov, research fellows at the Institute of Mathematics named after V.I. Romanovsky at the Academy of Sciences of the Uzbek SSR; Rustambai Zairovich Abdullaev, an assistant at the Tashkent State Pedagogical Institute named after V.I. Lenin; and Oleg Yevgenyevich Tikhonov, an assistant, and Nikolai Vasilievich Trunov, an associate professor, at the V.I. Ulyanov Kazan State University. He became an Academician of the World Academy of Sciences in 2003. From 2008 to 2013, he was an associate member of the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste, Italy. 
He became a member of the Senate of the Oliy Majlis of the Republic of Uzbekistan in 2020. In 1992, he became the head of the Institute of Mathematics named after V.I. Romanovsky at the Academy of Sciences of the Republic of Uzbekistan. In 1994, he embarked on a lengthy assignment at the University of Louis Pasteur in Strasbourg, France, where he conducted joint research with Professor J.-L. Lode. He teaches at the National University of Uzbekistan and is a professor of the Department of Algebra and Functional Analysis. == Awards == "Oʻzbekiston Qahramon" (Hero of Uzbekistan, August 24, 2021) "Mehnat shuhrati ordeni" (Order of "Mekhnat Shukhrati") (2003) "Shuhrat" medali (Medal "Shukhrat" 1996) "Oʻzbekiston fan arbobi" (Distinguished Scientist of the Republic of Uzbekistan, 2011) Recipient of the State Prize of the 1st Degree in the field of science and technology (2017) == Literature == National Encyclopedia of Uzbekistan. The first volume. Tashkent, 2000 == References ==
Wikipedia:Shayle R. Searle#0
Shayle Robert Searle PhD (26 April 1928 – 18 February 2013) was a New Zealand mathematician who was professor emeritus of biological statistics at Cornell University. He was a leader in the field of linear and mixed models in statistics, and published widely on the topics of linear models, mixed models, and variance component estimation. Searle was one of the first statisticians to use matrix algebra in statistical methodology, and was an early proponent of the use of applied statistical techniques in animal breeding. He died at his home in Ithaca, New York. == Education == BA – Victoria University of Wellington – 1949 MSc – Victoria University of Wellington – 1950 PhD – Cornell University – 1958 DSc (h.c.) – Victoria University of Wellington – 2005 == Employment == Research statistician – New Zealand Dairy Board – 1953 to 1955, 1959 to 1962 Statistician – University Computing Center, Cornell University – 1962 to 1965 Professor of biological statistics – Cornell University – 1965 to 1996 == Honours == Winner, Humboldt Research Award of the Alexander von Humboldt Foundation Fellow, American Statistical Association Fellow, Royal Statistical Society Honorary Fellow, Royal Society of New Zealand == Bibliography == === Books === Shayle R. Searle (2009). The Collected Works of Shayle R. Searle. New York: Wiley. ISBN 978-0-470-55606-1. Neuhaus, John William; McCulloch, Charles E.; Shayle R. Searle (2008). Generalized, Linear, and Mixed Models (Wiley Series in Probability and Statistics) (2nd ed.). New York: Wiley-Interscience. ISBN 978-0-470-07371-1. Shayle R. Searle (2006). Linear Models for Unbalanced Data (Wiley Series in Probability and Statistics). New York: Wiley-Interscience. ISBN 0-470-04004-1. McCulloch, Charles E.; Shayle R. Searle; Casella, George (2006). Variance Components (Wiley Series in Probability and Statistics). New York: Wiley-Interscience. ISBN 0-470-00959-4. Shayle R. Searle (2006). 
Matrix Algebra Useful for Statistics (Wiley Series in Probability and Statistics). New York: Wiley-Interscience. ISBN 0-470-00961-6. McCulloch, Charles E.; Searle, Shayle R. (2001). Generalized, Linear, and Mixed Models (1st ed.). Chichester: John Wiley & Sons. ISBN 0-471-19364-X. Willett, Lois Schertz; Searle, S. R. (2001). Matrix algebra for applied economics. Chichester: John Wiley & Sons. ISBN 0-471-32207-5. Searle, S. R. (1971). Linear models. New York: Wiley. ISBN 0-471-18499-3. Hausman, Warren H.; Searle, S. R. (1970). Matrix algebra for business and economics. New York: Wiley-Interscience. ISBN 0-471-76941-X. Shayle R. Searle (1966). Matrix Algebra for the Biological Sciences (Series on Quantitative Methods for Biologists & Medical Scientists). John Wiley & Sons Inc. ISBN 0-471-76930-4. === Selected journal articles === == References == == Further reading == Shayle Searle (1968). "Oral and Personal Histories of Computing at Cornell". Office of Information Technologies, Cornell University. Searle, Shayle R. (2005). "Recollections from a 50-year random walk midst matrices, statistics and computing". Research Letters in the Information and Mathematical Sciences. 8: 45–52. ISSN 1175-2777. Archived from the original on 14 October 2008. Martin T. Wells (2009). "A Conversation with Shayle R. Searle". Statistical Science. 24 (2): 244–254. arXiv:1001.3272. doi:10.1214/08-STS259. S2CID 62162161. Hunter, Jeffrey (2015). "Shayle R. Searle: Pioneer in Linear Modelling". Australian & New Zealand Journal of Statistics. 57: 1–14. doi:10.1111/anzs.12107. == External links == Shayle R. Searle at the Mathematics Genealogy Project
Wikipedia:Shear mapping#0
In plane geometry, a shear mapping is an affine transformation that displaces each point in a fixed direction by an amount proportional to its signed distance from a given line parallel to that direction. This type of mapping is also called shear transformation, transvection, or just shearing. The transformations can be applied with a shear matrix or transvection, an elementary matrix that represents the addition of a multiple of one row or column to another. Such a matrix may be derived by taking the identity matrix and replacing one of the zero elements with a non-zero value. An example is the linear map that takes any point with coordinates ( x , y ) {\displaystyle (x,y)} to the point ( x + 2 y , y ) {\displaystyle (x+2y,y)} . In this case, the displacement is horizontal by a factor of 2 where the fixed line is the x-axis, and the signed distance is the y-coordinate. Note that points on opposite sides of the reference line are displaced in opposite directions. Shear mappings must not be confused with rotations. Applying a shear map to a set of points of the plane will change all angles between them (except straight angles), and the length of any line segment that is not parallel to the direction of displacement. Therefore, it will usually distort the shape of a geometric figure, for example turning squares into parallelograms, and circles into ellipses. However a shearing does preserve the area of geometric figures and the alignment and relative distances of collinear points. A shear mapping is the main difference between the upright and slanted (or italic) styles of letters. The same definition is used in three-dimensional geometry, except that the distance is measured from a fixed plane. A three-dimensional shearing transformation preserves the volume of solid figures, but changes areas of plane figures (except those that are parallel to the displacement). 
This transformation is used to describe laminar flow of a fluid between plates, one moving in a plane above and parallel to the first. In the general n-dimensional Cartesian space ⁠ R n , {\displaystyle \mathbb {R} ^{n},} ⁠ the distance is measured from a fixed hyperplane parallel to the direction of displacement. This geometric transformation is a linear transformation of ⁠ R n {\displaystyle \mathbb {R} ^{n}} ⁠ that preserves the n-dimensional measure (hypervolume) of any set. == Definition == === Horizontal and vertical shear of the plane === In the plane R 2 = R × R {\displaystyle \mathbb {R} ^{2}=\mathbb {R} \times \mathbb {R} } , a horizontal shear (or shear parallel to the x-axis) is a function that takes a generic point with coordinates ( x , y ) {\displaystyle (x,y)} to the point ( x + m y , y ) {\displaystyle (x+my,y)} , where m is a fixed parameter, called the shear factor. The effect of this mapping is to displace every point horizontally by an amount proportional to its y-coordinate. Any point above the x-axis is displaced to the right (increasing x) if m > 0, and to the left if m < 0. Points below the x-axis move in the opposite direction, while points on the axis stay fixed. Straight lines parallel to the x-axis remain where they are, while all other lines are turned (by various angles) about the point where they cross the x-axis. Vertical lines, in particular, become oblique lines with slope 1 m . {\displaystyle {\tfrac {1}{m}}.} Therefore, the shear factor m is the cotangent of the shear angle φ {\displaystyle \varphi } between the former verticals and the x-axis. In the example on the right the square is tilted by 30°, so the shear angle is 60°. If the coordinates of a point are written as a column vector (a 2×1 matrix), the shear mapping can be written as multiplication by a 2×2 matrix: ( x ′ y ′ ) = ( x + m y y ) = ( 1 m 0 1 ) ( x y ) . 
{\displaystyle {\begin{pmatrix}x^{\prime }\\y^{\prime }\end{pmatrix}}={\begin{pmatrix}x+my\\y\end{pmatrix}}={\begin{pmatrix}1&m\\0&1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.} A vertical shear (or shear parallel to the y-axis) of lines is similar, except that the roles of x and y are swapped. It corresponds to multiplying the coordinate vector by the transposed matrix: ( x ′ y ′ ) = ( x m x + y ) = ( 1 0 m 1 ) ( x y ) . {\displaystyle {\begin{pmatrix}x^{\prime }\\y^{\prime }\end{pmatrix}}={\begin{pmatrix}x\\mx+y\end{pmatrix}}={\begin{pmatrix}1&0\\m&1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.} The vertical shear displaces points to the right of the y-axis up or down, depending on the sign of m. It leaves vertical lines invariant, but tilts all other lines about the point where they meet the y-axis. Horizontal lines, in particular, get tilted by the shear angle φ {\displaystyle \varphi } to become lines with slope m. ==== Composition ==== Two or more shear transformations can be combined. If two shear matrices are ( 1 λ 0 1 ) {\textstyle {\begin{pmatrix}1&\lambda \\0&1\end{pmatrix}}} and ( 1 0 μ 1 ) {\textstyle {\begin{pmatrix}1&0\\\mu &1\end{pmatrix}}} then their composition matrix is ( 1 λ 0 1 ) ( 1 0 μ 1 ) = ( 1 + λ μ λ μ 1 ) , {\displaystyle {\begin{pmatrix}1&\lambda \\0&1\end{pmatrix}}{\begin{pmatrix}1&0\\\mu &1\end{pmatrix}}={\begin{pmatrix}1+\lambda \mu &\lambda \\\mu &1\end{pmatrix}},} which also has determinant 1, so that area is preserved. In particular, if λ = μ {\displaystyle \lambda =\mu } , we have ( 1 + λ 2 λ λ 1 ) , {\displaystyle {\begin{pmatrix}1+\lambda ^{2}&\lambda \\\lambda &1\end{pmatrix}},} which is a positive definite matrix. === Higher dimensions === A typical shear matrix is of the form S = ( 1 0 0 λ 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 ) . 
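As a concrete check of these formulas, the horizontal and vertical shear matrices and their composition can be written out numerically (a sketch using NumPy; the factors λ = 2 and μ = 3 are arbitrary values chosen for illustration):

```python
import numpy as np

# Horizontal shear with factor m = 2: (x, y) -> (x + 2y, y)
H = np.array([[1.0, 2.0],
              [0.0, 1.0]])
# Vertical shear with factor m = 3: (x, y) -> (x, y + 3x)
V = np.array([[1.0, 0.0],
              [3.0, 1.0]])

# The point (1, 1) is carried to (3, 1) by the horizontal shear.
print(H @ np.array([1.0, 1.0]))   # [3. 1.]

# The composition matches the formula [[1 + λμ, λ], [μ, 1]] with λ = 2, μ = 3,
# and its determinant is 1, so area is preserved.
C = H @ V
print(C)                          # [[7. 2.]
                                  #  [3. 1.]]
print(np.linalg.det(C))           # 1.0 (up to floating-point rounding)
```

The determinant check reflects the area-preservation property stated above: each factor has determinant 1, so any product of shears does too.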
{\displaystyle S={\begin{pmatrix}1&0&0&\lambda &0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{pmatrix}}.} This matrix shears parallel to the x axis in the direction of the fourth dimension of the underlying vector space. A shear parallel to the x axis results in x ′ = x + λ y {\displaystyle x'=x+\lambda y} and y ′ = y {\displaystyle y'=y} . In matrix form: ( x ′ y ′ ) = ( 1 λ 0 1 ) ( x y ) . {\displaystyle {\begin{pmatrix}x'\\y'\end{pmatrix}}={\begin{pmatrix}1&\lambda \\0&1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.} Similarly, a shear parallel to the y axis has x ′ = x {\displaystyle x'=x} and y ′ = y + λ x {\displaystyle y'=y+\lambda x} . In matrix form: ( x ′ y ′ ) = ( 1 0 λ 1 ) ( x y ) . {\displaystyle {\begin{pmatrix}x'\\y'\end{pmatrix}}={\begin{pmatrix}1&0\\\lambda &1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.} In 3D space, this matrix shears the YZ plane into the diagonal plane passing through these three points: ( 0 , 0 , 0 ) {\displaystyle (0,0,0)} ( λ , 1 , 0 ) {\displaystyle (\lambda ,1,0)} ( μ , 0 , 1 ) {\displaystyle (\mu ,0,1)} S = ( 1 λ μ 0 1 0 0 0 1 ) . {\displaystyle S={\begin{pmatrix}1&\lambda &\mu \\0&1&0\\0&0&1\end{pmatrix}}.} The determinant will always be 1: no matter where the shear element is placed, it is a member of a skew-diagonal that also contains zero elements (as all skew-diagonals have length at least two), so its product remains zero and does not contribute to the determinant. Thus every shear matrix has an inverse, and the inverse is simply a shear matrix with the shear element negated, representing a shear transformation in the opposite direction. In fact, this is part of an easily derived more general result: if S is a shear matrix with shear element λ, then the nth power of S is a shear matrix whose shear element is simply nλ. Hence, raising a shear matrix to a power n multiplies its shear factor by n. 
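The power and inverse rules can be verified directly (a sketch using NumPy; λ = 2.5 and n = 4 are arbitrary illustrative values):

```python
import numpy as np

lam = 2.5
S = np.array([[1.0, lam],
              [0.0, 1.0]])

# The nth power of S is a shear whose element is n*λ ...
S4 = np.linalg.matrix_power(S, 4)
print(S4[0, 1])                    # 10.0  (= 4 * 2.5)

# ... and the inverse is the shear in the opposite direction, element -λ.
S_inv = np.linalg.inv(S)
print(S_inv[0, 1])                 # -2.5
print(np.allclose(S @ S_inv, np.eye(2)))   # True
```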
==== Properties ==== If S is an n × n shear matrix, then: S has rank n and therefore is invertible 1 is the only eigenvalue of S, so det S = 1 and tr S = n the eigenspace of S (associated with the eigenvalue 1) has n − 1 dimensions. S is defective S is asymmetric S may be made into a block matrix by at most 1 column interchange and 1 row interchange operation the area, volume, or any higher order interior capacity of a polytope is invariant under the shear transformation of the polytope's vertices. === General shear mappings === For a vector space V and subspace W, a shear fixing W translates all vectors in a direction parallel to W. To be more precise, if V is the direct sum of W and W′, and we write vectors as v = w + w ′ {\displaystyle v=w+w'} correspondingly, the typical shear L fixing W is L ( v ) = ( L w + L w ′ ) = ( w + M w ′ ) + w ′ , {\displaystyle L(v)=(Lw+Lw')=(w+Mw')+w',} where M is a linear mapping from W′ into W. Therefore in block matrix terms L can be represented as ( I M 0 I ) . {\displaystyle {\begin{pmatrix}I&M\\0&I\end{pmatrix}}.} == Applications == The following applications of shear mapping were noted by William Kingdon Clifford: "A succession of shears will enable us to reduce any figure bounded by straight lines to a triangle of equal area." "... we may shear any triangle into a right-angled triangle, and this will not alter its area. Thus the area of any triangle is half the area of the rectangle on the same base and with height equal to the perpendicular on the base from the opposite angle." The area-preserving property of a shear mapping can be used for results involving area. For instance, the Pythagorean theorem has been illustrated with shear mapping as well as the related geometric mean theorem. Shear matrices are often used in computer graphics. An algorithm due to Alan W. Paeth uses a sequence of three shear mappings (horizontal, vertical, then horizontal again) to rotate a digital image by an arbitrary angle. 
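Paeth's three-shear rotation rests on the identity that a rotation by θ factors as a horizontal shear by −tan(θ/2), a vertical shear by sin θ, and the same horizontal shear again. A minimal sketch of that identity in NumPy (the function names are illustrative, not taken from Paeth's paper):

```python
import numpy as np

def shear_x(m):
    """Horizontal shear: (x, y) -> (x + m*y, y)."""
    return np.array([[1.0, m], [0.0, 1.0]])

def shear_y(m):
    """Vertical shear: (x, y) -> (x, y + m*x)."""
    return np.array([[1.0, 0.0], [m, 1.0]])

def rotation_from_shears(theta):
    """Compose shear_x, shear_y, shear_x into a rotation by theta."""
    a = -np.tan(theta / 2.0)   # horizontal shear factor
    b = np.sin(theta)          # vertical shear factor
    return shear_x(a) @ shear_y(b) @ shear_x(a)

theta = np.radians(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(rotation_from_shears(theta), R))   # True
```

Since each factor changes only one coordinate, an image implementation can apply each shear one row or one column at a time.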
The algorithm is very simple to implement, and very efficient, since each step processes only one column or one row of pixels at a time. In typography, normal text transformed by a shear mapping results in oblique type. In pre-Einsteinian Galilean relativity, transformations between frames of reference are shear mappings called Galilean transformations. These are also sometimes seen when describing moving reference frames relative to a "preferred" frame, sometimes referred to as absolute time and space. == See also == Transformation matrix == References == == Bibliography == Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1991), Computer Graphics: Principles and Practice (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-12110-7
Wikipedia:Shear matrix#0
In plane geometry, a shear mapping is an affine transformation that displaces each point in a fixed direction by an amount proportional to its signed distance from a given line parallel to that direction. This type of mapping is also called shear transformation, transvection, or just shearing. The transformations can be applied with a shear matrix or transvection, an elementary matrix that represents the addition of a multiple of one row or column to another. Such a matrix may be derived by taking the identity matrix and replacing one of the zero elements with a non-zero value. An example is the linear map that takes any point with coordinates ( x , y ) {\displaystyle (x,y)} to the point ( x + 2 y , y ) {\displaystyle (x+2y,y)} . In this case, the displacement is horizontal by a factor of 2 where the fixed line is the x-axis, and the signed distance is the y-coordinate. Note that points on opposite sides of the reference line are displaced in opposite directions. Shear mappings must not be confused with rotations. Applying a shear map to a set of points of the plane will change all angles between them (except straight angles), and the length of any line segment that is not parallel to the direction of displacement. Therefore, it will usually distort the shape of a geometric figure, for example turning squares into parallelograms, and circles into ellipses. However a shearing does preserve the area of geometric figures and the alignment and relative distances of collinear points. A shear mapping is the main difference between the upright and slanted (or italic) styles of letters. The same definition is used in three-dimensional geometry, except that the distance is measured from a fixed plane. A three-dimensional shearing transformation preserves the volume of solid figures, but changes areas of plane figures (except those that are parallel to the displacement). 
This transformation is used to describe laminar flow of a fluid between plates, one moving in a plane above and parallel to the first. In the general n-dimensional Cartesian space ⁠ R n , {\displaystyle \mathbb {R} ^{n},} ⁠ the distance is measured from a fixed hyperplane parallel to the direction of displacement. This geometric transformation is a linear transformation of ⁠ R n {\displaystyle \mathbb {R} ^{n}} ⁠ that preserves the n-dimensional measure (hypervolume) of any set. == Definition == === Horizontal and vertical shear of the plane === In the plane R 2 = R × R {\displaystyle \mathbb {R} ^{2}=\mathbb {R} \times \mathbb {R} } , a horizontal shear (or shear parallel to the x-axis) is a function that takes a generic point with coordinates ( x , y ) {\displaystyle (x,y)} to the point ( x + m y , y ) {\displaystyle (x+my,y)} , where m is a fixed parameter, called the shear factor. The effect of this mapping is to displace every point horizontally by an amount proportional to its y-coordinate. Any point above the x-axis is displaced to the right (increasing x) if m > 0, and to the left if m < 0. Points below the x-axis move in the opposite direction, while points on the axis stay fixed. Straight lines parallel to the x-axis remain where they are, while all other lines are turned (by various angles) about the point where they cross the x-axis. Vertical lines, in particular, become oblique lines with slope 1 m . {\displaystyle {\tfrac {1}{m}}.} Therefore, the shear factor m is the cotangent of the shear angle φ {\displaystyle \varphi } between the former verticals and the x-axis. In the example on the right the square is tilted by 30°, so the shear angle is 60°. If the coordinates of a point are written as a column vector (a 2×1 matrix), the shear mapping can be written as multiplication by a 2×2 matrix: ( x ′ y ′ ) = ( x + m y y ) = ( 1 m 0 1 ) ( x y ) . 
{\displaystyle {\begin{pmatrix}x^{\prime }\\y^{\prime }\end{pmatrix}}={\begin{pmatrix}x+my\\y\end{pmatrix}}={\begin{pmatrix}1&m\\0&1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.} A vertical shear (or shear parallel to the y-axis) of lines is similar, except that the roles of x and y are swapped. It corresponds to multiplying the coordinate vector by the transposed matrix: ( x ′ y ′ ) = ( x m x + y ) = ( 1 0 m 1 ) ( x y ) . {\displaystyle {\begin{pmatrix}x^{\prime }\\y^{\prime }\end{pmatrix}}={\begin{pmatrix}x\\mx+y\end{pmatrix}}={\begin{pmatrix}1&0\\m&1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.} The vertical shear displaces points to the right of the y-axis up or down, depending on the sign of m. It leaves vertical lines invariant, but tilts all other lines about the point where they meet the y-axis. Horizontal lines, in particular, get tilted by the shear angle φ {\displaystyle \varphi } to become lines with slope m. ==== Composition ==== Two or more shear transformations can be combined. If two shear matrices are ( 1 λ 0 1 ) {\textstyle {\begin{pmatrix}1&\lambda \\0&1\end{pmatrix}}} and ( 1 0 μ 1 ) {\textstyle {\begin{pmatrix}1&0\\\mu &1\end{pmatrix}}} then their composition matrix is ( 1 λ 0 1 ) ( 1 0 μ 1 ) = ( 1 + λ μ λ μ 1 ) , {\displaystyle {\begin{pmatrix}1&\lambda \\0&1\end{pmatrix}}{\begin{pmatrix}1&0\\\mu &1\end{pmatrix}}={\begin{pmatrix}1+\lambda \mu &\lambda \\\mu &1\end{pmatrix}},} which also has determinant 1, so that area is preserved. In particular, if λ = μ {\displaystyle \lambda =\mu } , we have ( 1 + λ 2 λ λ 1 ) , {\displaystyle {\begin{pmatrix}1+\lambda ^{2}&\lambda \\\lambda &1\end{pmatrix}},} which is a positive definite matrix. === Higher dimensions === A typical shear matrix is of the form S = ( 1 0 0 λ 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 ) . 
{\displaystyle S={\begin{pmatrix}1&0&0&\lambda &0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{pmatrix}}.} This matrix shears parallel to the x axis in the direction of the fourth dimension of the underlying vector space. A shear parallel to the x axis results in x ′ = x + λ y {\displaystyle x'=x+\lambda y} and y ′ = y {\displaystyle y'=y} . In matrix form: ( x ′ y ′ ) = ( 1 λ 0 1 ) ( x y ) . {\displaystyle {\begin{pmatrix}x'\\y'\end{pmatrix}}={\begin{pmatrix}1&\lambda \\0&1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.} Similarly, a shear parallel to the y axis has x ′ = x {\displaystyle x'=x} and y ′ = y + λ x {\displaystyle y'=y+\lambda x} . In matrix form: ( x ′ y ′ ) = ( 1 0 λ 1 ) ( x y ) . {\displaystyle {\begin{pmatrix}x'\\y'\end{pmatrix}}={\begin{pmatrix}1&0\\\lambda &1\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}.} In 3D space, this matrix shears the YZ plane into the diagonal plane passing through the three points: ( 0 , 0 , 0 ) {\displaystyle (0,0,0)} ( λ , 1 , 0 ) {\displaystyle (\lambda ,1,0)} ( μ , 0 , 1 ) {\displaystyle (\mu ,0,1)} S = ( 1 λ μ 0 1 0 0 0 1 ) . {\displaystyle S={\begin{pmatrix}1&\lambda &\mu \\0&1&0\\0&0&1\end{pmatrix}}.} The determinant will always be 1, as no matter where the shear element is placed, it will be a member of a skew-diagonal that also contains zero elements (as all skew-diagonals have length at least two), hence its product will remain zero and will not contribute to the determinant. Thus every shear matrix has an inverse, and the inverse is simply a shear matrix with the shear element negated, representing a shear transformation in the opposite direction. In fact, this is part of an easily derived more general result: if S is a shear matrix with shear element λ, then Sn is a shear matrix whose shear element is simply nλ. Hence, raising a shear matrix to a power n multiplies its shear factor by n. 
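The 2×2 shear matrices and the power rule above are easy to check numerically. The following sketch (using NumPy purely as an illustration; the choice of library and of the shear factor m = 2 is not from the source) applies a horizontal shear to a point and verifies the determinant, inverse, and S^n properties:

```python
import numpy as np

m = 2.0  # shear factor (illustrative choice)

# Horizontal shear: (x, y) -> (x + m*y, y)
S = np.array([[1.0, m],
              [0.0, 1.0]])

p = np.array([1.0, 3.0])
print(S @ p)                         # (1, 3) maps to (7, 3)

# Area is preserved: det S = 1
print(np.linalg.det(S))              # 1.0

# The inverse is a shear with the shear element negated
print(np.linalg.inv(S))              # [[1, -m], [0, 1]]

# Raising S to the power n multiplies the shear factor by n
n = 5
print(np.linalg.matrix_power(S, n))  # [[1, n*m], [0, 1]]
```

The same checks go through for the vertical shear, with the roles of the two coordinates swapped.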
==== Properties ==== If S is an n × n shear matrix, then: S has rank n and therefore is invertible 1 is the only eigenvalue of S, so det S = 1 and tr S = n the eigenspace of S (associated with the eigenvalue 1) has n − 1 dimensions. S is defective S is asymmetric S may be made into a block matrix by at most 1 column interchange and 1 row interchange operation the area, volume, or any higher order interior capacity of a polytope is invariant under the shear transformation of the polytope's vertices. === General shear mappings === For a vector space V and subspace W, a shear fixing W translates all vectors in a direction parallel to W. To be more precise, if V is the direct sum of W and W′, and we write vectors as v = w + w ′ {\displaystyle v=w+w'} correspondingly, the typical shear L fixing W is L ( v ) = ( L w + L w ′ ) = ( w + M w ′ ) + w ′ , {\displaystyle L(v)=(Lw+Lw')=(w+Mw')+w',} where M is a linear mapping from W′ into W. Therefore in block matrix terms L can be represented as ( I M 0 I ) . {\displaystyle {\begin{pmatrix}I&M\\0&I\end{pmatrix}}.} == Applications == The following applications of shear mapping were noted by William Kingdon Clifford: "A succession of shears will enable us to reduce any figure bounded by straight lines to a triangle of equal area." "... we may shear any triangle into a right-angled triangle, and this will not alter its area. Thus the area of any triangle is half the area of the rectangle on the same base and with height equal to the perpendicular on the base from the opposite angle." The area-preserving property of a shear mapping can be used for results involving area. For instance, the Pythagorean theorem has been illustrated with shear mapping as well as the related geometric mean theorem. Shear matrices are often used in computer graphics. An algorithm due to Alan W. Paeth uses a sequence of three shear mappings (horizontal, vertical, then horizontal again) to rotate a digital image by an arbitrary angle. 
The algorithm is very simple to implement, and very efficient, since each step processes only one column or one row of pixels at a time. In typography, normal text transformed by a shear mapping results in oblique type. In pre-Einsteinian Galilean relativity, transformations between frames of reference are shear mappings called Galilean transformations. These are also sometimes seen when describing moving reference frames relative to a "preferred" frame, sometimes referred to as absolute time and space. == See also == Transformation matrix == References == == Bibliography == Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1991), Computer Graphics: Principles and Practice (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-12110-7
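Paeth's rotation-by-shears rests on the identity R(theta) = H(-tan(theta/2)) V(sin theta) H(-tan(theta/2)), where H and V are the horizontal and vertical shear matrices defined earlier. A minimal numerical check (the sign convention here is one common choice, an assumption rather than something fixed by the text):

```python
import numpy as np

def hshear(m):
    """Horizontal shear: (x, y) -> (x + m*y, y)."""
    return np.array([[1.0, m], [0.0, 1.0]])

def vshear(m):
    """Vertical shear: (x, y) -> (x, m*x + y)."""
    return np.array([[1.0, 0.0], [m, 1.0]])

theta = np.radians(30)

# Paeth's decomposition: horizontal, vertical, then horizontal shear again.
paeth = (hshear(-np.tan(theta / 2))
         @ vshear(np.sin(theta))
         @ hshear(-np.tan(theta / 2)))

rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

print(np.allclose(paeth, rotation))  # True
```

In the image-processing setting, each of the three shears moves whole rows or columns of pixels, which is why the algorithm touches only one row or column at a time.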
Wikipedia:Sheila Oates Williams#0
Sheila Oates Williams (1939 – 12 August 2024, also published as Sheila Oates and Sheila Oates Macdonald) was a British and Australian mathematician specializing in abstract algebra. She was the namesake of the Oates–Powell theorem in group theory, and a winner of the B. H. Neumann Award. == Education and career == Sheila Oates was originally from Cornwall, where her father was a primary school headmaster in Tintagel. She was educated at Sir James Smith's Grammar School, and inspired to become a mathematician by a teacher there, Alfred Hooper. She read mathematics at St Hugh's College, Oxford, with Ida Busbridge as her tutor, and continued at Oxford as a doctoral student of Graham Higman. She completed her doctorate (D.Phil.) in 1963. She became a lecturer and fellow at St Hilda's College, Oxford, before moving to Australia in 1965. In 1966, she took a position as senior lecturer at the University of Newcastle and later moved to the University of Queensland as reader. She retired in 1997. == Contributions == As a student at Oxford, with Martin B. Powell, another student of Higman, she proved the Oates–Powell theorem. This is an analogue for group theory of Hilbert's basis theorem, and states that all finite groups have a finite system of axioms from which can be derived all equations that are true of the group. That is, every finite group is finitely based. As well as for her research, Williams was known for her work setting Australian mathematics competitions, including the International Mathematical Olympiad in 1988 and the Australian Mathematics Competition. She also participated several times in the Australian edition of the Mastermind television quiz show. == Recognition == Williams was a 2002 recipient of the B. H. Neumann Award for Excellence in Mathematics Enrichment of the Australian Maths Trust. == References ==
Wikipedia:Shekel function#0
The Shekel function, also known as Shekel's foxholes, is a multidimensional, multimodal, continuous, deterministic function commonly used as a test function for optimization techniques. The mathematical form of a function in n {\displaystyle n} dimensions with m {\displaystyle m} maxima is: f ( x → ) = ∑ i = 1 m ( c i + ∑ j = 1 n ( x j − a j i ) 2 ) − 1 {\displaystyle f({\vec {x}})=\sum _{i=1}^{m}\;\left(c_{i}+\sum \limits _{j=1}^{n}(x_{j}-a_{ji})^{2}\right)^{-1}} or, similarly, f ( x 1 , x 2 , . . . , x n − 1 , x n ) = ∑ i = 1 m ( c i + ∑ j = 1 n ( x j − a i j ) 2 ) − 1 {\displaystyle f(x_{1},x_{2},...,x_{n-1},x_{n})=\sum _{i=1}^{m}\;\left(c_{i}+\sum \limits _{j=1}^{n}(x_{j}-a_{ij})^{2}\right)^{-1}} == Global minima == Numerically certified global minima and the corresponding solutions were obtained using interval methods for up to n = 10 {\displaystyle n=10} . == See also == Test functions for optimization == References == == Further reading == Shekel, J. 1971. "Test Functions for Multimodal Search Techniques." Fifth Annual Princeton Conference on Information Science and Systems.
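The definition translates directly into code. In this sketch the coefficient matrix a and vector c are the widely used Shekel test values for n = 4, m = 5; they are an illustrative assumption, since the definition itself leaves a and c free:

```python
import numpy as np

def shekel(x, a, c):
    """Shekel function: sum_i 1 / (c_i + ||x - a_i||^2),
    where a_i is the i-th column of a."""
    x = np.asarray(x, dtype=float)
    return sum(1.0 / (c[i] + np.sum((x - a[:, i]) ** 2))
               for i in range(len(c)))

# A common choice of coefficients for n = 4, m = 5 (assumed, not
# fixed by the definition). Each column of a marks one "foxhole".
a = np.array([[4.0, 1.0, 8.0, 6.0, 3.0],
              [4.0, 1.0, 8.0, 6.0, 7.0],
              [4.0, 1.0, 8.0, 6.0, 3.0],
              [4.0, 1.0, 8.0, 6.0, 7.0]])
c = np.array([0.1, 0.2, 0.2, 0.4, 0.4])

# Near the column a_1 = (4, 4, 4, 4) the first term alone contributes
# about 1/c_1 = 10, so the function has a maximum there.
print(shekel([4.0, 4.0, 4.0, 4.0], a, c))  # ≈ 10.1532
```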
Wikipedia:Sherman–Morrison formula#0
In linear algebra, the Sherman–Morrison formula, named after Jack Sherman and Winifred J. Morrison, computes the inverse of a "rank-1 update" to a matrix whose inverse has previously been computed. That is, given an invertible matrix A {\displaystyle A} and the outer product u v T {\displaystyle uv^{\textsf {T}}} of vectors u {\displaystyle u} and v , {\displaystyle v,} the formula cheaply computes an updated matrix inverse ( A + u v T ) ) − 1 . {\textstyle \left(A+uv^{\textsf {T}}\right){\vphantom {)}}^{\!-1}.} The Sherman–Morrison formula is a special case of the Woodbury formula. Though named after Sherman and Morrison, it appeared already in earlier publications. == Statement == Suppose A ∈ R n × n {\displaystyle A\in \mathbb {R} ^{n\times n}} is an invertible square matrix and u , v ∈ R n {\displaystyle u,v\in \mathbb {R} ^{n}} are column vectors. Then A + u v T {\displaystyle A+uv^{\textsf {T}}} is invertible if and only if 1 + v T A − 1 u ≠ 0 {\displaystyle 1+v^{\textsf {T}}A^{-1}u\neq 0} . In this case, ( A + u v T ) − 1 = A − 1 − A − 1 u v T A − 1 1 + v T A − 1 u . {\displaystyle \left(A+uv^{\textsf {T}}\right)^{-1}=A^{-1}-{A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}.} Here, u v T {\displaystyle uv^{\textsf {T}}} is the outer product of two vectors u {\displaystyle u} and v {\displaystyle v} . The general form shown here is the one published by Bartlett. == Proof == ( ⇐ {\displaystyle \Leftarrow } ) To prove that the backward direction 1 + v T A − 1 u ≠ 0 ⇒ A + u v T {\displaystyle 1+v^{\textsf {T}}A^{-1}u\neq 0\Rightarrow A+uv^{\textsf {T}}} is invertible with inverse given as above) is true, we verify the properties of the inverse. A matrix Y {\displaystyle Y} (in this case the right-hand side of the Sherman–Morrison formula) is the inverse of a matrix X {\displaystyle X} (in this case A + u v T {\displaystyle A+uv^{\textsf {T}}} ) if and only if X Y = Y X = I {\displaystyle XY=YX=I} . 
We first verify that the right hand side ( Y {\displaystyle Y} ) satisfies X Y = I {\displaystyle XY=I} . X Y = ( A + u v T ) ( A − 1 − A − 1 u v T A − 1 1 + v T A − 1 u ) = A A − 1 + u v T A − 1 − A A − 1 u v T A − 1 + u v T A − 1 u v T A − 1 1 + v T A − 1 u = I + u v T A − 1 − u v T A − 1 + u v T A − 1 u v T A − 1 1 + v T A − 1 u = I + u v T A − 1 − u ( 1 + v T A − 1 u ) v T A − 1 1 + v T A − 1 u = I + u v T A − 1 − u v T A − 1 = I {\displaystyle {\begin{aligned}XY&=\left(A+uv^{\textsf {T}}\right)\left(A^{-1}-{A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\right)\\[6pt]&=AA^{-1}+uv^{\textsf {T}}A^{-1}-{AA^{-1}uv^{\textsf {T}}A^{-1}+uv^{\textsf {T}}A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\\[6pt]&=I+uv^{\textsf {T}}A^{-1}-{uv^{\textsf {T}}A^{-1}+uv^{\textsf {T}}A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\\[6pt]&=I+uv^{\textsf {T}}A^{-1}-{u\left(1+v^{\textsf {T}}A^{-1}u\right)v^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\\[6pt]&=I+uv^{\textsf {T}}A^{-1}-uv^{\textsf {T}}A^{-1}\\[6pt]&=I\end{aligned}}} To end the proof of this direction, we need to show that Y X = I {\displaystyle YX=I} in a similar way as above: Y X = ( A − 1 − A − 1 u v T A − 1 1 + v T A − 1 u ) ( A + u v T ) = I . {\displaystyle YX=\left(A^{-1}-{A^{-1}uv^{\textsf {T}}A^{-1} \over 1+v^{\textsf {T}}A^{-1}u}\right)(A+uv^{\textsf {T}})=I.} (In fact, the last step can be avoided since for square matrices X {\displaystyle X} and Y {\displaystyle Y} , X Y = I {\displaystyle XY=I} is equivalent to Y X = I {\displaystyle YX=I} .) ( ⇒ {\displaystyle \Rightarrow } ) Reciprocally, if 1 + v T A − 1 u = 0 {\displaystyle 1+v^{\textsf {T}}A^{-1}u=0} , then via the matrix determinant lemma, det ( A + u v T ) = ( 1 + v T A − 1 u ) det ( A ) = 0 {\displaystyle \det \!\left(A+uv^{\textsf {T}}\right)=(1+v^{\textsf {T}}A^{-1}u)\det(A)=0} , so ( A + u v T ) {\displaystyle \left(A+uv^{\textsf {T}}\right)} is not invertible. 
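The statement can also be checked numerically. This sketch (NumPy, as an illustration; the random well-conditioned A is an assumed example) applies the formula and compares the result against a direct inversion of A + uv^T:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # comfortably invertible
u = rng.standard_normal(n)
v = rng.standard_normal(n)

A_inv = np.linalg.inv(A)
denom = 1.0 + v @ A_inv @ u
assert denom != 0  # the invertibility condition from the statement

# Sherman-Morrison update: no second matrix inversion is needed.
updated_inv = A_inv - np.outer(A_inv @ u, v @ A_inv) / denom

print(np.allclose(updated_inv, np.linalg.inv(A + np.outer(u, v))))  # True
```

The update costs only a few matrix-vector products and one outer product, which is the O(n^2) saving discussed in the application section below.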
== Application == If the inverse of A {\displaystyle A} is already known, the formula provides a numerically cheap way to compute the inverse of A {\displaystyle A} corrected by the matrix u v T {\displaystyle uv^{\textsf {T}}} (depending on the point of view, the correction may be seen as a perturbation or as a rank-1 update). The computation is relatively cheap because the inverse of A + u v T {\displaystyle A+uv^{\textsf {T}}} does not have to be computed from scratch (which in general is expensive), but can be computed by correcting (or perturbing) A − 1 {\displaystyle A^{-1}} . Using unit columns (columns from the identity matrix) for u {\displaystyle u} or v {\displaystyle v} , individual columns or rows of A {\displaystyle A} may be manipulated and a correspondingly updated inverse computed relatively cheaply in this way. In the general case, where A − 1 {\displaystyle A^{-1}} is an n {\displaystyle n} -by- n {\displaystyle n} matrix and u {\displaystyle u} and v {\displaystyle v} are arbitrary vectors of dimension n {\displaystyle n} , the whole matrix is updated and the computation takes 3 n 2 {\displaystyle 3n^{2}} scalar multiplications. If u {\displaystyle u} is a unit column, the computation takes only 2 n 2 {\displaystyle 2n^{2}} scalar multiplications. The same goes if v {\displaystyle v} is a unit column. If both u {\displaystyle u} and v {\displaystyle v} are unit columns, the computation takes only n 2 {\displaystyle n^{2}} scalar multiplications. This formula also has application in theoretical physics. Namely, in quantum field theory, one uses this formula to calculate the propagator of a spin-1 field. The inverse propagator (as it appears in the Lagrangian) has the form A + u v T {\displaystyle A+uv^{\textsf {T}}} . 
One uses the Sherman–Morrison formula to calculate the inverse (satisfying certain time-ordering boundary conditions) of the inverse propagator—or simply the (Feynman) propagator—which is needed to perform any perturbative calculation involving the spin-1 field. One of the issues with the formula is that little is known about its numerical stability. There are no published results concerning its error bounds. Anecdotal evidence suggests that the Woodbury matrix identity (a generalization of the Sherman–Morrison formula) may diverge even for seemingly benign examples (when both the original and modified matrices are well-conditioned). == Alternative verification == Following is an alternate verification of the Sherman–Morrison formula using the easily verifiable identity ( I + w v T ) − 1 = I − w v T 1 + v T w {\displaystyle \left(I+wv^{\textsf {T}}\right)^{-1}=I-{\frac {wv^{\textsf {T}}}{1+v^{\textsf {T}}w}}} . Let u = A w , and A + u v T = A ( I + w v T ) , {\displaystyle u=Aw,\quad {\text{and}}\quad A+uv^{\textsf {T}}=A\left(I+wv^{\textsf {T}}\right),} then ( A + u v T ) − 1 = ( I + w v T ) − 1 A − 1 = ( I − w v T 1 + v T w ) A − 1 {\displaystyle \left(A+uv^{\textsf {T}}\right)^{-1}=\left(I+wv^{\textsf {T}}\right)^{-1}A^{-1}=\left(I-{\frac {wv^{\textsf {T}}}{1+v^{\textsf {T}}w}}\right)A^{-1}} . 
Substituting w = A − 1 u {\displaystyle w=A^{-1}u} gives ( A + u v T ) − 1 = ( I − A − 1 u v T 1 + v T A − 1 u ) A − 1 = A − 1 − A − 1 u v T A − 1 1 + v T A − 1 u {\displaystyle \left(A+uv^{\textsf {T}}\right)^{-1}=\left(I-{\frac {A^{-1}uv^{\textsf {T}}}{1+v^{\textsf {T}}A^{-1}u}}\right)A^{-1}=A^{-1}-{\frac {A^{-1}uv^{\textsf {T}}A^{-1}}{1+v^{\textsf {T}}A^{-1}u}}} == Generalization (Woodbury matrix identity) == Given a square invertible n × n {\displaystyle n\times n} matrix A {\displaystyle A} , an n × k {\displaystyle n\times k} matrix U {\displaystyle U} , and a k × n {\displaystyle k\times n} matrix V {\displaystyle V} , let B {\displaystyle B} be an n × n {\displaystyle n\times n} matrix such that B = A + U V {\displaystyle B=A+UV} . Then, assuming ( I k + V A − 1 U ) {\displaystyle \left(I_{k}+VA^{-1}U\right)} is invertible, we have B − 1 = A − 1 − A − 1 U ( I k + V A − 1 U ) − 1 V A − 1 . {\displaystyle B^{-1}=A^{-1}-A^{-1}U\left(I_{k}+VA^{-1}U\right)^{-1}VA^{-1}.} == See also == The matrix determinant lemma performs a rank-1 update to a determinant. Woodbury matrix identity Quasi-Newton method Binomial inverse theorem Bunch–Nielsen–Sorensen formula Maxwell stress tensor contains an application of the Sherman–Morrison formula. == References == == External links == Weisstein, Eric W. "Sherman–Morrison formula". MathWorld.
Wikipedia:Sherry Li#0
Xiaoye Sherry Li is a researcher in numerical methods at the Lawrence Berkeley National Laboratory, where she works as a senior scientist. She is responsible there for the SuperLU package, a high-performance parallel system for solving sparse systems of linear equations by using their LU decomposition. At the Lawrence Berkeley National Laboratory, she heads the Scalable Solvers Group. == Education == Li graduated from Tsinghua University in 1986, with a bachelor's degree in computer science. She moved to the United States for graduate study, earning a master's degree from Pennsylvania State University in 1990 and a Ph.D. in computer science from the University of California, Berkeley in 1996. Her doctoral dissertation, Sparse Gaussian Elimination on High Performance Computers, was supervised by James Demmel. == Recognition == In 2016, she was elected as a SIAM Fellow "for advances in the development of fast and scalable sparse matrix algorithms and fostering their use in large-scale scientific and engineering applications". With Piyush Sao and Richard Vuduc she was awarded the 2022 SIAM Activity Group on Supercomputing Best Paper Prize. == References == == External links == Home page Sherry Li publications indexed by Google Scholar
Wikipedia:Shift theorem#0
In mathematics, the (exponential) shift theorem is a theorem about polynomial differential operators (D-operators) and exponential functions. It permits one to eliminate, in certain cases, the exponential from under the D-operators. == Statement == The theorem states that, if P(D) is a polynomial of the D-operator, then, for any sufficiently differentiable function y, P ( D ) ( e a x y ) ≡ e a x P ( D + a ) y . {\displaystyle P(D)(e^{ax}y)\equiv e^{ax}P(D+a)y.} To prove the result, proceed by induction. Note that only the special case P ( D ) = D n {\displaystyle P(D)=D^{n}} needs to be proved, since the general result then follows by linearity of D-operators. The result is clearly true for n = 1 since D ( e a x y ) = e a x ( D + a ) y . {\displaystyle D(e^{ax}y)=e^{ax}(D+a)y.} Now suppose the result true for n = k, that is, D k ( e a x y ) = e a x ( D + a ) k y . {\displaystyle D^{k}(e^{ax}y)=e^{ax}(D+a)^{k}y.} Then, D k + 1 ( e a x y ) ≡ d d x { e a x ( D + a ) k y } = e a x d d x { ( D + a ) k y } + a e a x { ( D + a ) k y } = e a x { ( d d x + a ) ( D + a ) k y } = e a x ( D + a ) k + 1 y . {\displaystyle {\begin{aligned}D^{k+1}(e^{ax}y)&\equiv {\frac {d}{dx}}\left\{e^{ax}\left(D+a\right)^{k}y\right\}\\&{}=e^{ax}{\frac {d}{dx}}\left\{\left(D+a\right)^{k}y\right\}+ae^{ax}\left\{\left(D+a\right)^{k}y\right\}\\&{}=e^{ax}\left\{\left({\frac {d}{dx}}+a\right)\left(D+a\right)^{k}y\right\}\\&{}=e^{ax}(D+a)^{k+1}y.\end{aligned}}} This completes the proof. The shift theorem can be applied equally well to inverse operators: 1 P ( D ) ( e a x y ) = e a x 1 P ( D + a ) y . {\displaystyle {\frac {1}{P(D)}}(e^{ax}y)=e^{ax}{\frac {1}{P(D+a)}}y.} == Related == There is a similar version of the shift theorem for Laplace transforms ( t < a {\displaystyle t<a} ): e − a s L { f ( t ) } = L { f ( t − a ) } . 
{\displaystyle e^{-as}{\mathcal {L}}\{f(t)\}={\mathcal {L}}\{f(t-a)\}.} == Examples == The exponential shift theorem can be used to speed the calculation of higher derivatives of functions that is given by the product of an exponential and another function. For instance, if f ( x ) = sin ⁡ ( x ) e x {\displaystyle f(x)=\sin(x)e^{x}} , one has that D 3 f = D 3 ( e x sin ⁡ ( x ) ) = e x ( D + 1 ) 3 sin ⁡ ( x ) = e x ( D 3 + 3 D 2 + 3 D + 1 ) sin ⁡ ( x ) = e x ( − cos ⁡ ( x ) − 3 sin ⁡ ( x ) + 3 cos ⁡ ( x ) + sin ⁡ ( x ) ) {\displaystyle {\begin{aligned}D^{3}f&=D^{3}(e^{x}\sin(x))=e^{x}(D+1)^{3}\sin(x)\\&=e^{x}\left(D^{3}+3D^{2}+3D+1\right)\sin(x)\\&=e^{x}\left(-\cos(x)-3\sin(x)+3\cos(x)+\sin(x)\right)\end{aligned}}} Another application of the exponential shift theorem is to solve linear differential equations whose characteristic polynomial has repeated roots. == Notes == == References == Morris, Tenenbaum; Pollard, Harry (1985). Ordinary differential equations : an elementary textbook for students of mathematics, engineering, and the sciences. New York: Dover Publications. ISBN 0486649407. OCLC 12188701.
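The worked example above can be reproduced with a computer algebra system. This sketch (using SymPy, an assumed choice of tool) compares D^3(e^x sin x) computed directly with e^x (D + 1)^3 sin x from the shift theorem:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) * sp.sin(x)

# Left side: apply D^3 directly.
direct = sp.diff(f, x, 3)

# Right side of the shift theorem: e^x * (D + 1)^3 sin(x),
# expanding (D + 1)^3 = D^3 + 3 D^2 + 3 D + 1.
s = sp.sin(x)
shifted = sp.exp(x) * (sp.diff(s, x, 3) + 3 * sp.diff(s, x, 2)
                       + 3 * sp.diff(s, x) + s)

print(sp.simplify(direct - shifted))  # 0, so the two sides agree
```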
Wikipedia:Shihoko Ishii#0
Shihoko Ishii (Japanese: 石井志保子, born 1950) is a Japanese mathematician and professor at the University of Tokyo. Her research area is algebraic geometry. == Education == Ishii received her bachelor's degree from Tokyo Women's Christian University in 1973 and her master's degree from Waseda University in 1975. She later earned her PhD from Tokyo Metropolitan University in 1983. == Research == Ishii's research focuses on singularity theory. She studies arc spaces, a mathematical concept related to jets: arc spaces are varieties encapsulating information about curves on another variety. == Awards and honours == Ishii received the Saruhashi Prize for accomplishments by a Japanese woman researcher in the natural sciences in 1995. As a postdoc, Ishii was inspired by reading a profile of Fumiko Yonezawa, a physicist and former winner of the Saruhashi prize. Ishii received the Algebra Prize from the Mathematical Society of Japan in 2011. In 2022, she became a laureate of the Asian Scientist 100 by the Asian Scientist. == References ==
Wikipedia:Shimshon Amitsur#0
Shimshon Avraham Amitsur (born Kaplan; Hebrew: שמשון אברהם עמיצור; August 26, 1921 – September 5, 1994) was an Israeli mathematician. He is best known for his work in ring theory, in particular PI rings, an area of abstract algebra. == Biography == Amitsur was born in Jerusalem and studied at the Hebrew University under the supervision of Jacob Levitzki. His studies were repeatedly interrupted, first by World War II and then by the 1948 Arab–Israeli War. He received his M.Sc. degree in 1946, and his Ph.D. in 1950. Later, for his joint work with Levitzki, he received the first Israel Prize in Exact Sciences. He worked at the Hebrew University until his retirement in 1989. Amitsur was a visiting scholar at the Institute for Advanced Study from 1952 to 1954. He was an Invited Speaker at the ICM in 1970 in Nice. He was a member of the Israel Academy of Sciences, where he headed the Experimental Science Section. He was one of the founding editors of the Israel Journal of Mathematics, and the mathematical editor of the Hebrew Encyclopedia. Amitsur received a number of awards, including an honorary doctorate from Ben-Gurion University in 1990. His students included Avinoam Mann, Amitai Regev, Eliyahu Rips and Aner Shalev. == Awards == Amitsur and Jacob Levitzki were each awarded the Israel Prize in exact sciences, in 1953, its inaugural year. == See also == Amitsur–Levitzki theorem List of Israel Prize recipients == Publications == Amitsur, A. S.; Levitzki, Jakob (1950), "Minimal identities for algebras", Proceedings of the American Mathematical Society, 1 (4): 449–463, doi:10.1090/S0002-9939-1950-0036751-9, ISSN 0002-9939, JSTOR 2032312, MR 0036751 Amitsur, S. A. (2001), Mann, Avinoam; Regev, Amitai; Rowen, Louis; Saltman, David J.; Small, Lance W. (eds.), Selected papers of S. A. Amitsur with commentary. Part 1, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2924-0, MR 1866636 Amitsur, S. A. 
(2001), Mann, Avinoam; Regev, Amitai; Rowen, Louis; Saltman, David J.; Small, Lance W. (eds.), Selected papers of S. A. Amitsur with commentary. Part 2, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2925-7, MR 1866637 == References == "Shimshon Avraham Amitsur (1921 — 1994)", by A. Mann, Israel Journal of Mathematics, Vol. 96 (December 1996), ix - xxvii. Formanek, Edward (2003), "Review of Selected papers of S. A. Amitsur", Bulletin of the American Mathematical Society, 40: 131–135, doi:10.1090/s0273-0979-02-00960-6, ISSN 0002-9904 == External links == Shimshon Amitsur at the Mathematics Genealogy Project O'Connor, John J.; Robertson, Edmund F., "Shimshon Amitsur", MacTutor History of Mathematics Archive, University of St Andrews Colleagues, students and family sharing personal experiences with Shimshon Amitsur. Video recording from the 27th Amitsur Memorial Symposium 2020.
Wikipedia:Shiri Artstein#0
Shiri Artstein-Avidan (Hebrew: שירי ארטשטיין-אבידן; born 28 September 1978) is an Israeli mathematician who in 2015 won the Erdős Prize. She specializes in convex geometry and asymptotic geometric analysis, and is a professor of mathematics at Tel Aviv University. == Education and career == Artstein was born in Jerusalem, the daughter of mathematician Zvi Artstein, best known for his proof of Artstein's theorem. She graduated summa cum laude from Tel Aviv University in 2000, with a bachelor's degree in mathematics, and completed her PhD at Tel Aviv University in 2004 under the supervision of Vitali Milman, with a dissertation on Entropy Methods. She worked from 2004 to 2006 as a Veblen Research Instructor in Mathematics at Princeton University and as a researcher at the Institute for Advanced Study before returning to Tel Aviv as a faculty member in 2006. == Recognition == Artstein won the Haim Nessyahu Prize in Mathematics, an annual dissertation award of the Israel Mathematical Union, in 2006. In 2008 she won the Krill Prize for Excellence in Scientific Research, from the Wolf Foundation. In 2015 she won the Anna and Lajos Erdős Prize in Mathematics. The award cited her "solution of Shannon's long standing problem on monotonicity of entropy (with K. Ball, F. Barthe and A. Naor), profound and unexpected development of the concept of duality, Legendre and Fourier transform from axiomatic viewpoint (with V. Milman) and discovery of an astonishing link between Mahler's conjecture in convexity theory and an isoperimetric-type inequality involving symplectic capacities (with R. Karasev and Y. Ostrover)". == Selected publications == Artstein-Avidan, Shiri; Giannopoulos, Apostolos; Milman, Vitali (17 June 2015). "Asymptotic Geometric Analysis, Part I". Mathematical Surveys and Monographs. Providence, Rhode Island: American Mathematical Society. doi:10.1090/surv/202. ISBN 978-1-4704-2193-9. ISSN 0076-5376. 
Her research publications include: Artstein, Shiri; Ball, Keith M.; Barthe, Franck; Naor, Assaf (2004), "Solution of Shannon's problem on the monotonicity of entropy", Journal of the American Mathematical Society, 17 (4): 975–982, doi:10.1090/S0894-0347-04-00459-X, MR 2083473 Artstein, S.; Milman, V.; Szarek, S. J. (2004), "Duality of metric entropy", Annals of Mathematics, Second Series, 159 (3): 1313–1328, doi:10.4007/annals.2004.159.1313, MR 2113023 Artstein-Avidan, S.; Klartag, B.; Milman, V. (2004), "The Santaló point of a function, and a functional form of the Santaló inequality", Mathematika, 51 (1–2): 33–48, doi:10.1112/S0025579300015497, MR 2220210 Artstein-Avidan, Shiri; Milman, Vitali (2009), "The concept of duality in convex analysis, and the characterization of the Legendre transform", Annals of Mathematics, Second Series, 169 (2): 661–674, doi:10.4007/annals.2009.169.661, MR 2480615 Artstein-Avidan, Shiri; Karasev, Roman; Ostrover, Yaron (2014), "From symplectic measurements to the Mahler conjecture", Duke Mathematical Journal, 163 (11): 2003–2022, arXiv:1303.4197, doi:10.1215/00127094-2794999, MR 3263026, S2CID 43483415 == References == == External links == Home page
Wikipedia:Shisanji Hokari#0
Dr. Shisanji Hokari (穂刈 四三二, Hokari Shisanji, 28 March 1908 – 2 January 2004) was a Japanese mathematician. He was admitted to the American Mathematical Society in 1966. He was a professor emeritus of Tokyo Metropolitan University and the president of Josai University. == References ==
Wikipedia:Shmuel Agmon#0
Shmuel Agmon (Hebrew: שמואל אגמון; 2 February 1922 – 21 March 2025) was an Israeli mathematician who was known for his work in analysis and partial differential equations. == Biography == Shmuel Agmon was born in Tel Aviv to writer Nathan Agmon and Chaya Gutman, and spent the first years of his life in Nazareth. A member of the HaMahanot HaOlim youth movement, Agmon studied at the Gymnasia Rehavia and joined a hakhshara program at Kibbutz Na'an after graduating from high school. He began his studies in mathematics at the Hebrew University of Jerusalem in 1940 but enlisted in the Jewish Brigade of the British Army before graduating. He served for four years in Cyprus, Italy and Belgium during World War II. After his discharge, he completed his undergraduate and master's degrees at the Hebrew University and went to France for further studies. He obtained a PhD from Paris-Sorbonne University in 1949, under the supervision of Szolem Mandelbrojt. He returned to Jerusalem after working as a visiting scholar at Rice University from 1950 to 1952, and was appointed full professor at the Hebrew University in 1959. Agmon died on 21 March 2025, at the age of 103. == Work == Agmon's contributions to partial differential equations include Agmon's method for proving exponential decay of eigenfunctions for elliptic operators. == Awards == Agmon was awarded the 1991 Israel Prize in mathematics. He received the 2007 EMET Prize "for paving new paths in the study of partial-elliptical differential equations and their problematic language and for advancing the knowledge in the field, as well as his essential contribution to the development of the Spectral Theory and the Distribution Theory of Schrödinger Operators." He has also received the Weizmann Prize and the Rothschild Prize. In 2012, he became a fellow of the American Mathematical Society. == Selected works == Agmon, Shmuel (1951). "Functions of exponential type in an angle and singularities of Taylor series". Trans. Amer. Math. 
Soc. 70 (3): 492–508. doi:10.1090/s0002-9947-1951-0041222-5. MR 0041222. with Lipman Bers: Agmon, Shmuel; Bers, Lipman (1952). "The expansion theorem for pseudo-analytic functions". Proc. Amer. Math. Soc. 3 (5): 757–764. doi:10.1090/s0002-9939-1952-0057349-4. MR 0057349. Agmon, Shmuel (1953). "Complex variable Tauberians". Trans. Amer. Math. Soc. 74 (3): 444–481. doi:10.1090/s0002-9947-1953-0054079-5. MR 0054079. Agmon, Shmuel (1960). "Maximum theorems for solutions of higher order elliptic equations". Bull. Amer. Math. Soc. 66 (2): 77–80. doi:10.1090/s0002-9904-1960-10402-8. MR 0124618. Lectures on elliptic boundary value problems. Van Nostrand. 1965. iii+291 p.; 2nd edition. AMS Chelsea Pub. 2010. ISBN 978-0-8218-4910-1. Unicité et convexité dans les problèmes différentiels. Presses de l'Université de Montréal. 1966. 156 p. Spectral properties of Schrödinger operators and scattering theory. Scuola normale superiore di Pisa. 1975. Lectures on exponential decay of solutions of second-order elliptic equations: bounds on eigenfunctions of N-body Schrödinger operators. Princeton University Press. 1982. ISBN 978-0-691-08318-6. == See also == Agmon's inequality == External links == Shmuel Agmon at the Mathematics Genealogy Project == References ==
Wikipedia:Shmuel Friedland#0
Shmuel Friedland (Hebrew: שמואל פרידלנד; born 1944 in Tashkent, Uzbek Soviet Socialist Republic) is an Israeli-American mathematician. Friedland studied at the Technion – Israel Institute of Technology, graduating in 1967 with a bachelor's degree and in 1971 with a doctorate of science under the supervision of Binjamin Schwarz. As a postdoc, Friedland spent 1972/73 at the Weizmann Institute, 1973/74 at Stanford University, and 1974/75 at the Institute for Advanced Study. He then taught at the Hebrew University of Jerusalem, where he became a full professor in 1982. In 1985 he became a professor at the University of Illinois at Chicago. Besides linear algebra (matrix theory), Friedland does research in a wide variety of areas of mathematics, including complex dynamics and applied mathematics. With Elizabeth Gross, he proved a set-theoretic version of the salmon conjecture posed by Elizabeth S. Allman. With Miroslav Fiedler and Israel Gohberg, Friedland shared in the first Hans Schneider Prize, awarded by the International Linear Algebra Society in 1993. He was elected a Fellow of the American Mathematical Society (Class of 2019). Also, he was selected as a 2021 SIAM Fellow, "for deep and varied contributions to mathematics, especially linear algebra, matrix theory, and matrix computations". == Selected publications == "Nonoscillation and integral inequalities", Bull. Amer. Math. Soc., vol. 80, 1974, pp. 715–717. doi:10.1090/S0002-9904-1974-13565-2 with Samuel Karlin: "Some inequalities for the spectral radius of nonnegative matrices and applications", Duke Mathematical Journal, vol. 42, 1975, pp. 459–490. (subscription required) Nonoscillation, disconjugacy and integral inequalities, Memoirs Amer. Math. Soc. 176, 1976 with Walter K. Hayman: "Eigenvalue inequalities for the Dirichlet problem on spheres and the growth of subharmonic functions", Commentarii Mathematici Helvetici 51, no. 1 (1976): 133–161. 
doi:10.1007/BF02568147 "On an inverse problem for nonnegative and eventually nonnegative matrices", Israel Journal of Mathematics, vol. 29, no. 1, 1978, 43–60. doi:10.1007/BF02760401 "A lower bound for the permanent of doubly stochastic matrices", Annals of Mathematics, vol. 110, 1979, pp. 167–176. JSTOR 1971250 with Nimrod Moiseyev: "Association of resonance states with the incomplete spectrum of finite complex-scaled Hamiltonian matrices", Physical Review A, vol. 22, no. 2, 1980, 618–624. doi:10.1103/PhysRevA.22.618 "Convex spectral functions", Linear and Multilinear Algebra, vol. 9, no. 4, 1981, 299–316. doi:10.1080/03081088108817381 with Carl R. de Boor and Allan Pincus: "Inverses of infinite sign regular matrices", Trans. Amer. Math. Soc., vol. 274, 1982, pp. 59–68. doi:10.1090/S0002-9947-1982-0670918-7 "Simultaneous similarity of matrices", Bull. Amer. Math. Soc., vol. 8, 1983, pp. 93–95. doi:10.1090/S0273-0979-1983-15094-2 "Simultaneous similarity of matrices", Advances in Mathematics, vol. 50, 1983, pp. 189–265. doi:10.1016/0001-8708(83)90044-0 with Joel W. Robbin and John H. Sylvester: "On the crossing rule", Communications in Pure and Applied Mathematics, vol. 37, 1984, pp. 19–37. doi:10.1002/cpa.3160370104 with Noga Alon and Gil Kalai: "Regular subgraphs of almost regular graphs", Journal of Combinatorial Theory, Series B, vol. 37, no. 1, 1984, 79–91. doi:10.1016/0095-8956(84)90047-9 with John Willard Milnor: "Dynamical properties of plane polynomial automorphisms", Journal of Ergodic Theory & Dynamical Systems, vol. 9, 1989, pp. 67–99. doi:10.1017/S014338570000482X "Entropy of polynomial and rational maps", Annals of Mathematics, vol. 133, 1991, pp. 359–368. JSTOR 2944341 with Sa'ar Hersonsky: "Jorgensen's inequality for discrete groups in normed algebras", Duke Mathematical Journal, vol. 69, 1993, pp. 593–614. (subscription required) with Vlad Gheorghiu and Gilad Gour: "Universal uncertainty relations", Physical Review Letters, vol. 111, 2013, p. 
230401 doi:10.1103/PhysRevLett.111.230401 with Stéphane Gaubert and Lixing Han: "Perron–Frobenius theorem for nonnegative multilinear forms and extensions", Linear Algebra and its Applications, vol. 438, no. 2, 2013, pp. 738–749. doi:10.1016/j.laa.2011.02.042 with Giorgio Ottaviani: "The number of singular vector tuples and uniqueness of best rank one approximation of tensors", Foundations of Computational Mathematics, vol. 14, 2014, pp. 1209–1242. doi:10.1007/s10208-014-9194-z Matrices: Algebra, Analysis and Applications, World Scientific 2015 with Lek-Heng Lim: "The computational complexity of duality", SIAM Journal on Optimization, vol. 26, no. 4, 2016, 2378–2393. doi:10.1137/16M105887X with Jinjie Zhang and Lek-Heng Lim: "Grothendieck constant is norm of Strassen matrix multiplication tensor", arXiv preprint arXiv:1711.04427, 2017. (See Grothendieck inequality.) with Mohsen Aliabadi, Linear Algebra and Matrices, SIAM 2018 with Mohsen Aliabadi, Analysis and Probability on Graphs, De Gruyter, 2025 == References == == External links == Interview of Shmuel Friedland by Lek-Heng Lim for IMAGE, The Bulletin of the International Linear Algebra Society, Fall 2017
Wikipedia:Shmuel Gal#0
Shmuel Gal (Hebrew: שמואל גל; born 1940) is a mathematician and professor of statistics at the University of Haifa in Israel. He devised Gal's accurate tables method for the computer evaluation of elementary functions. In 1993, with Zvi Yehudai, he developed a new sorting algorithm which is used by IBM. Gal has solved the Princess and monster game and made several significant contributions to the area of search games. He has been working on rendezvous problems with his collaborators Steve Alpern, Vic Baston, and John Howard. Gal received a Ph.D. in mathematics from the Hebrew University of Jerusalem. His thesis advisor was Aryeh Dvoretzky. == References == == External links == Shmuel Gal's homepage: https://sites.google.com/edu.haifa.ac.il/sgal/home
Wikipedia:Shmuel Onn#0
Shmuel Onn (Hebrew: שמואל און; born 1960) is a mathematician, Professor of Operations Research and Dresner Chair at the Technion - Israel Institute of Technology. He is known for his contributions to integer programming and nonlinear combinatorial optimization. == Education == Shmuel Onn did his elementary education in Kadoorie. He received his B.Sc. (Cum Laude) in Electrical Engineering from Technion in 1980, and following his obligatory service in the Navy, received his M.Sc. from Technion in 1987. Onn obtained his Ph.D. in operations research from Cornell University, with minors in applied mathematics and computer science, in 1992. His thesis, "Discrete Geometry, Group Representations and Combinatorial Optimization: an Interplay", was advised by Louis J. Billera, Bernd Sturmfels, and Leslie E. Trotter Jr. During 1992–1993 he was a postdoctoral fellow at DIMACS, and during 1993–1994 he was an Alexander von Humboldt postdoctoral fellow at the University of Passau, Germany. == Career == In 1994 Onn joined the Faculty of Data and Decision Sciences of Technion, where he is currently Professor and Dresner Chair. He was also a Visiting Professor and Nachdiplom Lecturer at the Institute for Mathematical Research, ETH Zürich in 2009, and a visiting professor at the Mathematics Department of the University of California, Davis (2001–2002). Professor Onn has also been a long-term visitor at various mathematical research institutes including Mittag-Leffler in Stockholm, MSRI in Berkeley, and Oberwolfach in Germany. He also served as Associate Editor for Mathematics of Operations Research in 2010–2016 and Associate Editor for Discrete Optimization in 2004–2010. Onn advised several students and postdoctoral researchers who proceeded to pursue academic careers, including Antoine Deza, Sharon Aviran, Tal Raviv, Nir Halman, and Martin Koutecký. == Research == Shmuel Onn is known for his contributions to integer programming and nonlinear combinatorial optimization. 
In particular, he developed an algorithmic theory of linear and nonlinear integer programming in variable dimension using Graver bases. This work introduced the theory of block-structured and n-fold integer programming, and the broader theory of sparse and bounded tree-depth integer programming, shown to be fixed-parameter tractable. These theories were followed up by other authors, and have applications in a variety of areas. Some other contributions of Onn include a framework that uses edge-directions for solving convex multi-criteria combinatorial optimization problems and its applications, a universality theorem showing that every integer program is one over slim three-dimensional tables, the settling of the complexity of hypergraph degree sequences, and the introduction of colorful linear programming. == Honors and awards == 2010, INFORMS Computing Society (ICS) Prize. 2009, Nachdiplom Lecturer, Institute for Mathematical Research, ETH Zürich. == Books == Nonlinear discrete optimization: An algorithmic theory. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich, 2010. == Personal life == Shmuel is married to Ruth. They have two children, Amos and Naomi, and live in Haifa. == External links == Shmuel Onn (personal page) Archived 2020-07-02 at the Wayback Machine, Technion Shmuel Onn, Technion Video Lecture Series on Nonlinear Discrete Optimization at MSRI, Berkeley == References ==
Wikipedia:Shortlex order#0
In mathematics, and particularly in the theory of formal languages, shortlex is a total ordering for finite sequences of objects that can themselves be totally ordered. In the shortlex ordering, sequences are primarily sorted by cardinality (length) with the shortest sequences first, and sequences of the same length are sorted into lexicographical order. Shortlex ordering is also called radix, length-lexicographic, military, or genealogical ordering. In the context of strings on a totally ordered alphabet, the shortlex order is identical to the lexicographical order, except that shorter strings precede longer strings. For example, the shortlex order of the set of strings on the English alphabet (in its usual order) is [ε, a, b, c, ..., z, aa, ab, ac, ..., zz, aaa, aab, aac, ..., zzz, ...], where ε denotes the empty string. The strings in this ordering over a fixed finite alphabet can be placed into one-to-one order-preserving correspondence with the natural numbers, giving the bijective numeration system for representing numbers. The shortlex ordering is also important in the theory of automatic groups. == See also == Graded lexicographic order Level order == References ==
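As an illustrative sketch (not part of the original article), the shortlex order can be obtained in Python by sorting on the pair (length, string), and the order-preserving correspondence with the natural numbers described above is exactly bijective base-k numeration:

```python
def shortlex_key(s):
    """Key for sorting strings in shortlex order: first by length, then lexicographically."""
    return (len(s), s)

words = ["b", "aa", "a", "ab", ""]
print(sorted(words, key=shortlex_key))  # ['', 'a', 'b', 'aa', 'ab']

def shortlex_rank(s, alphabet="ab"):
    """Position of s in the shortlex enumeration of all strings over the alphabet
    (the empty string has rank 0); this is bijective base-k numeration."""
    k = len(alphabet)
    rank = 0
    for ch in s:
        rank = rank * k + alphabet.index(ch) + 1
    return rank

# Over {a, b} the shortlex enumeration begins '', 'a', 'b', 'aa', 'ab', 'ba', ...
assert shortlex_rank("") == 0
assert shortlex_rank("aa") == 3
```

The function names here are hypothetical; the point is only that the tuple (length, string) realizes the ordering, and that ranking a string in this ordering recovers the bijective numeration system mentioned above.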
Wikipedia:Shulba Sutras#0
The Shulva Sutras or Śulbasūtras (Sanskrit: शुल्बसूत्र; śulba: "string, cord, rope") are sutra texts belonging to the Śrauta ritual and containing geometry related to fire-altar construction. == Purpose and origins == The Shulba Sutras are part of the larger corpus of texts called the Shrauta Sutras, considered to be appendices to the Vedas. They are the only sources of knowledge of Indian mathematics from the Vedic period. Unique Vedi (fire-altar) shapes were associated with unique gifts from the Gods. For instance, "he who desires heaven is to construct a fire-altar in the form of a falcon"; "a fire-altar in the form of a tortoise is to be constructed by one desiring to win the world of Brahman" and "those who wish to destroy existing and future enemies should construct a fire-altar in the form of a rhombus". The four major Shulba Sutras, which are mathematically the most significant, are those attributed to Baudhayana, Manava, Apastamba and Katyayana. Their language is late Vedic Sanskrit, pointing to a composition roughly during the 1st millennium BCE. The oldest is the sutra attributed to Baudhayana, possibly compiled around 800 BCE to 500 BCE. Pingree says that the Apastamba is likely the next oldest; he places the Katyayana and the Manava third and fourth chronologically, on the basis of apparent borrowings. According to mathematical historian Kim Plofker, the Katyayana was composed after "the great grammatical codification of Sanskrit by Pāṇini in probably the mid-fourth century BCE", but she places the Manava in the same period as the Baudhayana. With regard to the composition of Vedic texts, Plofker writes: "The Vedic veneration of Sanskrit as a sacred speech, whose divinely revealed texts were meant to be recited, heard, and memorized rather than transmitted in writing, helped shape Sanskrit literature in general. ... 
Thus texts were composed in formats that could be easily memorized: either condensed prose aphorisms (sūtras, a word later applied to mean a rule or algorithm in general) or verse, particularly in the Classical period. Naturally, ease of memorization sometimes interfered with ease of comprehension. As a result, most treatises were supplemented by one or more prose commentaries ..." There are multiple commentaries for each of the Shulba Sutras, but these were written long after the original works. The commentary of Sundararāja on the Apastamba, for example, comes from the late 15th century CE and the commentary of Dvārakãnātha on the Baudhayana appears to borrow from Sundararāja. According to philosopher Frits Staal, certain aspects of the tradition described in the Shulba Sutras would have been "transmitted orally", and he points to places in southern India where the fire-altar ritual is still practiced and an oral tradition preserved. The fire-altar tradition largely died out in India, however, and Plofker warns that those pockets where the practice remains may reflect a later Vedic revival rather than an unbroken tradition. Archaeological evidence of the altar constructions described in the Shulba Sutras is sparse. A large falcon-shaped fire altar (śyenaciti), dating to the second century BCE, was found in the 1957–59 excavations by G. R. Sharma at Kausambi, but this altar does not conform to the dimensions prescribed by the Shulba Sutras. The content of the Shulba Sutras is likely older than the works themselves. The Satapatha Brahmana and the Taittiriya Samhita, whose contents date to the late second millennium or early first millennium BCE, describe altars whose dimensions appear to be based on the right triangle with legs of 15 pada and 36 pada, one of the triangles listed in the Baudhayana Shulba Sutra. The origin of the mathematics in the Shulba Sutras is not known. 
It is possible, as proposed by mathematical historian Radha Charan Gupta, that the geometry was developed to meet the needs of ritual. Some scholars go farther: Staal hypothesizes a common ritual origin for Indian and Greek geometry, citing similar interest and approach to doubling and other geometric transformation problems. Seidenberg, followed by Bartel Leendert van der Waerden, sees a ritual origin for mathematics more broadly, postulating that the major advances, such as discovery of the Pythagorean theorem, occurred in only one place, and diffused from there to the rest of the world. Van der Waerden notes that the authors of the Shulba Sutras lived before 600 BCE and could not have been influenced by Greek geometry. The historian Carl Benjamin Boyer mentions Old Babylonian mathematics (c. 2000 BCE–1600 BCE) as a possible origin, pointing to the c. 1800 BCE tablet Plimpton 322 with its table of triplets, but he also states that the Shulba Sutras contain a formula not found in Babylonian sources. Abraham Seidenberg argues that either "Old Babylonia got the theorem of Pythagoras from India or that Old Babylonia and India got it from a third source". Seidenberg suggests that this source might be Sumerian and may predate 1700 BC. In contrast, Pingree cautions that "it would be a mistake to see in [the altar builders'] works the unique origin of geometry; others in India and elsewhere, whether in response to practical or theoretical problems, may well have advanced as far without their solutions having been committed to memory or eventually transcribed in manuscripts." Plofker also raises the possibility that "existing geometric knowledge [was] consciously incorporated into ritual practice". 
== List of Shulba Sutras == Apastamba Baudhayana Manava Katyayana Maitrayaniya (somewhat similar to Manava text) Varaha (in manuscript) Vadhula (in manuscript) Hiranyakeshin (similar to Apastamba Shulba Sutras) == Mathematics == === Pythagorean theorem and Pythagorean triples === The sutras contain statements of the Pythagorean theorem, both in the case of an isosceles right triangle and in the general case, as well as lists of Pythagorean triples. In Baudhayana, for example, the rules are given as follows: 1.9. The diagonal of a square produces double the area [of the square].[...] 1.12. The areas [of the squares] produced separately by the lengths of the breadth of a rectangle together equal the area [of the square] produced by the diagonal. 1.13. This is observed in rectangles having sides 3 and 4, 12 and 5, 15 and 8, 7 and 24, 12 and 35, 15 and 36. Similarly, Apastamba's rules for constructing right angles in fire-altars use the following Pythagorean triples: (3, 4, 5), (5, 12, 13), (8, 15, 17), and (12, 35, 37). In addition, the sutras describe procedures for constructing a square with area equal either to the sum or to the difference of two given squares. Both constructions proceed by letting the largest of the squares be the square on the diagonal of a rectangle, and letting the two smaller squares be the squares on the sides of that rectangle. The assertion that each procedure produces a square of the desired area is equivalent to the statement of the Pythagorean theorem. Another construction produces a square with area equal to that of a given rectangle. The procedure is to cut a rectangular piece from the end of the rectangle and to paste it to the side so as to form a gnomon of area equal to the original rectangle. Since a gnomon is the difference of two squares, the problem can be completed using one of the previous constructions. 
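The rules above can be checked numerically; the following sketch (an illustration, not from the sutras themselves) verifies the listed triples and the sum-of-two-squares construction via the diagonal of a rectangle:

```python
import math

# Triples used in Apastamba's right-angle constructions satisfy a^2 + b^2 = c^2.
triples = [(3, 4, 5), (5, 12, 13), (8, 15, 17), (12, 35, 37)]
for a, b, c in triples:
    assert a * a + b * b == c * c

# Baudhayana 1.12: the square on the diagonal of a rectangle equals the sum of
# the squares on its sides -- so a square equal to the sum of two given squares
# has the rectangle's diagonal as its side.
side1, side2 = 3.0, 4.0
diagonal = math.hypot(side1, side2)
assert math.isclose(diagonal ** 2, side1 ** 2 + side2 ** 2)
print(diagonal)  # 5.0
```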
=== Geometry === The Baudhayana Shulba sutra gives the construction of geometric shapes such as squares and rectangles. It also gives, sometimes approximate, geometric area-preserving transformations from one geometric shape to another. These include transforming a square into a rectangle, an isosceles trapezium, an isosceles triangle, a rhombus, and a circle, and transforming a circle into a square. In these texts approximations, such as the transformation of a circle into a square, appear side by side with more accurate statements. As an example, the statement of circling the square is given in Baudhayana as: 2.9. If it is desired to transform a square into a circle, [a cord of length] half the diagonal [of the square] is stretched from the centre to the east [a part of it lying outside the eastern side of the square]; with one-third [of the part lying outside] added to the remainder [of the half diagonal], the [required] circle is drawn. and the statement of squaring the circle is given as: 2.10. To transform a circle into a square, the diameter is divided into eight parts; one [such] part after being divided into twenty-nine parts is reduced by twenty-eight of them and further by the sixth [of the part left] less the eighth [of the sixth part].2.11. Alternatively, divide [the diameter] into fifteen parts and reduce it by two of them; this gives the approximate side of the square [desired]. The constructions in 2.9 and 2.10 give a value of π as 3.088, while the construction in 2.11 gives π as 3.004. === Square roots === Altar construction also led to an estimation of the square root of 2 as found in three of the sutras. In the Baudhayana sutra it appears as: 2.12. The measure is to be increased by its third and this [third] again by its own fourth less the thirty-fourth part [of that fourth]; this is [the value of] the diagonal of a square [whose side is the measure]. 
which leads to the value of the square root of two as: {\displaystyle {\sqrt {2}}\approx 1+{\frac {1}{3}}+{\frac {1}{3\cdot 4}}-{\frac {1}{3\cdot 4\cdot 34}}={\frac {577}{408}}=1.4142...} Indeed, an early method for calculating square roots can be found in some Sutras; the method involves the recursive formula {\displaystyle {\sqrt {x}}\approx {\sqrt {x-1}}+{\frac {1}{2\cdot {\sqrt {x-1}}}}} for large values of x, which bases itself on the non-recursive identity {\displaystyle {\sqrt {a^{2}+r}}\approx a+{\frac {r}{2\cdot a}}} for values of r extremely small relative to a. It has also been suggested, for example by Bürk, that this approximation of √2 implies knowledge that √2 is irrational. In his translation of Euclid's Elements, Heath outlines a number of milestones necessary for irrationality to be considered to have been discovered, and points out the lack of evidence that Indian mathematics had achieved those milestones in the era of the Shulba Sutras. == See also == Kalpa (Vedanga) == Citations and footnotes == == References == Boyer, Carl B. (1991). A History of Mathematics (Second ed.). John Wiley & Sons. ISBN 0-471-54397-7. Bürk, Albert (1901). "Das Āpastamba-Śulba-Sūtra, herausgegeben, übersetzt und mit einer Einleitung versehen". Zeitschrift der Deutschen Morgenländischen Gesellschaft (in German). 55: 543–591. Delire, Jean Michele (2009). "Chronological inferences from a comparison between commentaries on different Śulbasūtras". In Wujastyk, Dominik (ed.). Mathematics and Medicine in Sanskrit. pp. 37–62. Bryant, Edwin (2001). The Quest for the Origins of Vedic Culture: The Indo-Aryan Migration Debate. Oxford University Press. ISBN 9780195137774. Cooke, Roger (2005) [First published 1997]. The History of Mathematics: A Brief Course. Wiley-Interscience. ISBN 0-471-44459-6. Datta, Bibhutibhushan (1932). The Science of the Sulba. 
A study in early Hindu geometry. University of Calcutta. Gupta, R.C. (1997). "Baudhāyana". In Selin, Helaine (ed.). Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures. Springer. ISBN 978-0-7923-4066-9. Heath, Sir Thomas L. (1925) [1908]. The Thirteen Books of Euclid's Elements, Translated from the Text of Heiberg, with Introduction and Commentary. Vol. I (2 ed.). New York: Dover. Pingree, David (1981), Gonda, Jan (ed.), Jyotiḥśāstra : astral and mathematical literature, A history of Indian literature, vol. VI, Scientific and technical literature Plofker, Kim (2007). "Mathematics in India". In Katz, Victor J (ed.). The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook. Princeton University Press. ISBN 978-0-691-11485-9. Plofker, Kim (2009). Mathematics in India. Princeton University Press. ISBN 9780691120676. Sarma, K.V. (1997). "Sulbasutras". In Selin, Helaine (ed.). Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures. Springer. ISBN 978-0-7923-4066-9. Seidenberg, A. (1978). "The origin of mathematics". Archive for History of Exact Sciences. 18 (4): 301–342. doi:10.1007/BF00348435. S2CID 118671661. Seidenberg, A. (1983). "The Geometry of the Vedic Rituals". In Staal, Frits (ed.). Agni: The Vedic Ritual of the Fire Altar. Berkeley: Asian Humanities Press. Staal, Frits (1999). "Greek and Vedic Geometry". Journal of Indian Philosophy. 27: 105–127. doi:10.1023/A:1004364417713. S2CID 16466375. Thibaut, George (1875). "On the Śulvasútras". The Journal of the Asiatic Society of Bengal. 44: 227–275. van der Waerden, Bartel Leendert (1983). Geometry and Algebra in Ancient Civilizations. Springer-Verlag. ISBN 9783642617812. == Translations == "The Śulvasútra of Baudháyana, with the commentary by Dvárakánáthayajvan", by George Thibaut, was published in a series of issues of The Pandit. A Monthly Journal, of the Benares College, devoted to Sanskrit Literature. 
Note that the commentary is left untranslated. (1875) 9 (108): 292–298 (1875–1876) 10 (109): 17–22, (110): 44–50, (111): 72–74, (114): 139–146, (115): 166–170, (116): 186–194, (117): 209–218 (new series) (1876–1877) 1 (5): 316–322, (9): 556–578, (10): 626–642, (11): 692–706, (12): 761–770 "Kátyáyana's Śulbapariśishta with the Commentary by Ráma, Son of Súryadása", by George Thibaut, was published in a series of issues of The Pandit. A Monthly Journal, of the Benares College, devoted to Sanskrit Literature. Note that the commentary is left untranslated. (new series) (1882) 4 (1–4): 94–103, (5–8): 328–339, (9–10): 382–389, (9–10): 487–491 Bürk, Albert (1902). "Das Āpastamba-Śulba-Sūtra, herausgegeben, übersetzt und mit einer Einleitung versehen". Zeitschrift der Deutschen Morgenländischen Gesellschaft (in German). 56: 327–391. Transcription and analysis in Bürk (1901). Sen, S.N.; Bag, A.K. (1983). The Śulba Sūtras of Baudhāyana, Āpastamba, Kātyāyana and Mānava with Text, English Translation and Commentary. New Delhi: Indian National Science Academy.
Wikipedia:Shushu Jiyi#0
Shushu Jiyi (數術記遺; translated as Notes on Traditions of Arithmetic Methods, Memoir on the Methods of Numbering or Notes on Traditions of Arithmetic Method) is a Chinese mathematical treatise written by the Eastern Han dynasty mathematician Xu Yue. The text received a subsequent commentary by Zhen Luan in the 6th century. == Description == The text mentions 14 methods of calculation, and it was selected to become one of the Ten Computational Canons in the 11th century during the Song dynasty, replacing the Zhui Shu (Method of Interpolation) by Zu Chongzhi. The earliest surviving printed edition of the text is a Southern Song printed copy from 1212, now preserved in the Peking University Library. == References ==
Wikipedia:Cis (mathematics)#0
cis is a mathematical notation defined by cis x = cos x + i sin x, where cos is the cosine function, i is the imaginary unit and sin is the sine function. x is the argument of the complex number (the angle between the line to the point and the x-axis in polar form). The notation is less commonly used in mathematics than Euler's formula, eix, which offers an even shorter notation for cos x + i sin x, but cis(x) is widely used as a name for this function in software libraries. == Overview == The cis notation is a shorthand for the combination of functions on the right-hand side of Euler's formula: {\displaystyle e^{ix}=\cos x+i\sin x,} where i2 = −1. So, {\displaystyle \operatorname {cis} x=\cos x+i\sin x,} i.e. "cis" is an acronym for "Cos i Sin". It connects trigonometric functions with exponential functions in the complex plane via Euler's formula. While the domain of definition is usually x ∈ ℝ, complex values z ∈ ℂ are possible as well: {\displaystyle \operatorname {cis} z=\cos z+i\sin z,} so the cis function can be used to extend Euler's formula to a more general complex version. The function is mostly used as a convenient shorthand notation to simplify some expressions, for example in conjunction with Fourier and Hartley transforms, or when exponential functions shouldn't be used for some reason in math education. In information technology, the function sees dedicated support in various high-performance math libraries (such as Intel's Math Kernel Library (MKL) or MathCW), available for many compilers and programming languages (including C, C++, Common Lisp, D, Haskell, Julia, and Rust). Depending on the platform, the fused operation is about twice as fast as calling the sine and cosine functions individually. 
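As a minimal sketch of the definition (using Python's built-in complex arithmetic rather than any of the specialized libraries named above):

```python
import cmath
import math

def cis(x):
    """cis x = cos x + i*sin x; for real x this equals exp(i*x)."""
    return complex(math.cos(x), math.sin(x))

x = 0.75
# Agreement with Euler's formula:
assert cmath.isclose(cis(x), cmath.exp(1j * x))
# cis maps a real angle to the point on the unit circle at that angle:
assert math.isclose(abs(cis(x)), 1.0)
```

Dedicated library versions compute the cosine and sine in one fused call; this sketch only demonstrates the mathematical identity, not that performance benefit.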
== Mathematical identities == === Derivative === {\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {cis} z=i\operatorname {cis} z=ie^{iz}} === Integral === {\displaystyle \int \operatorname {cis} z\,\mathrm {d} z=-i\operatorname {cis} z=-ie^{iz}} === Other properties === These follow directly from Euler's formula. {\displaystyle \cos(x)={\frac {\operatorname {cis} (x)+\operatorname {cis} (-x)}{2}}={\frac {e^{ix}+e^{-ix}}{2}}} {\displaystyle \sin(x)={\frac {\operatorname {cis} (x)-\operatorname {cis} (-x)}{2i}}={\frac {e^{ix}-e^{-ix}}{2i}}} {\displaystyle \operatorname {cis} (x+y)=\operatorname {cis} x\,\operatorname {cis} y} {\displaystyle \operatorname {cis} (x-y)={\operatorname {cis} x \over \operatorname {cis} y}} The identities above hold if x and y are any complex numbers. If x and y are real, then {\displaystyle |\operatorname {cis} x-\operatorname {cis} y|\leq |x-y|.} == History == The cis notation was coined by William Rowan Hamilton in Elements of Quaternions (1866) and subsequently used by Irving Stringham (who also called it "sector of x") in works such as Uniplanar Algebra (1893), by James Harkness and Frank Morley in their Introduction to the Theory of Analytic Functions (1898), and by George Ashley Campbell (who also referred to it as "cisoidal oscillation") in his works on transmission lines (1901) and Fourier integrals (1928). In 1942, inspired by the cis notation, Ralph V. L. Hartley introduced the cas (for cosine-and-sine) function for the real-valued Hartley kernel, a now-established shortcut in conjunction with Hartley transforms: 
{\displaystyle \operatorname {cas} x=\cos x+\sin x.} == Motivation == The cis notation is sometimes used to emphasize one method of viewing and dealing with a problem over another. The mathematics of trigonometry and exponentials are related but not exactly the same; exponential notation emphasizes the whole, whereas the cis x and cos x + i sin x notations emphasize the parts. This can be rhetorically useful to mathematicians and engineers when discussing this function, and further serves as a mnemonic (for cos + i sin). The cis notation is convenient for math students whose knowledge of trigonometry and complex numbers permits this notation, but whose conceptual understanding does not yet permit the notation eix. The usual proof that cis x = eix requires calculus, which the student may not have studied before encountering the expression cos x + i sin x. This notation was more common when typewriters were used to convey mathematical expressions. == See also == De Moivre's formula Euler's formula Complex number Ptolemy's theorem Phasor Versor == Notes == == References ==
Wikipedia:Siddhānta Shiromani#0
Siddhānta Śiromaṇi (Sanskrit: सिद्धान्त शिरोमणि [siddʱɑn̪t̪ᵊ ɕɪɾoməɳiː] for "Crown of treatises") is the major treatise of the Indian mathematician Bhāskara II. He wrote the Siddhānta Śiromaṇi in 1150, when he was 36 years old. The work is composed in the Sanskrit language in 1450 verses. == Parts == === Līlāvatī === The name of the book comes from his daughter, Līlāvatī. It is the first volume of the Siddhānta Śiromaṇi. The book contains thirteen chapters, 278 verses, mainly arithmetic and measurement. === Beejagaṇita === It is the second volume of the Siddhānta Śiromaṇi. It is divided into six parts, contains 213 verses and is devoted to algebra. === Gaṇitādhyāya and Golādhyāya === The Gaṇitādhyāya and Golādhyāya of the Siddhānta Śiromaṇi are devoted to astronomy. All put together there are about 900 verses (Gaṇitādhyāya has 451 and Golādhyāya has 501 verses). == Translations == In 1797, Safdar Ali Khan of Hyderabad translated the Siddhanta Shiromani into Persian as Zij-i Sarumani. The translation is now a lost work, and is known only from a mention in Khan's other work, the Zij-i Safdari. == References == === Bibliography === == External links == Scan of reprint A 1917 edition
Wikipedia:Siegel disc#0
A Siegel disc or Siegel disk is a connected component in the Fatou set where the dynamics is analytically conjugate to an irrational rotation. == Description == Given a holomorphic endomorphism f : S → S {\displaystyle f:S\to S} on a Riemann surface S {\displaystyle S} we consider the dynamical system generated by the iterates of f {\displaystyle f} denoted by f n = f ∘ ⋯ ( n ) ∘ f {\displaystyle f^{n}=f\circ {\stackrel {\left(n\right)}{\cdots }}\circ f} . We then define the orbit O + ( z 0 ) {\displaystyle {\mathcal {O}}^{+}(z_{0})} of z 0 {\displaystyle z_{0}} as the set of forward iterates of z 0 {\displaystyle z_{0}} . We are interested in the asymptotic behavior of the orbits in S {\displaystyle S} (which will usually be C {\displaystyle \mathbb {C} } , the complex plane or C ^ = C ∪ { ∞ } {\displaystyle \mathbb {\hat {C}} =\mathbb {C} \cup \{\infty \}} , the Riemann sphere), and we call S {\displaystyle S} the phase plane or dynamical plane. One possible asymptotic behavior for a point z 0 {\displaystyle z_{0}} is to be a fixed point, or in general a periodic point. In this last case f p ( z 0 ) = z 0 {\displaystyle f^{p}(z_{0})=z_{0}} where p {\displaystyle p} is the period and p = 1 {\displaystyle p=1} means z 0 {\displaystyle z_{0}} is a fixed point. We can then define the multiplier of the orbit as ρ = ( f p ) ′ ( z 0 ) {\displaystyle \rho =(f^{p})'(z_{0})} and this enables us to classify periodic orbits as attracting if | ρ | < 1 {\displaystyle |\rho |<1} (superattracting if | ρ | = 0 {\displaystyle |\rho |=0} ), repelling if | ρ | > 1 {\displaystyle |\rho |>1} and indifferent if | ρ | = 1 {\displaystyle |\rho |=1} . Indifferent periodic orbits can be either rationally indifferent or irrationally indifferent, depending on whether ρ n = 1 {\displaystyle \rho ^{n}=1} for some n ∈ Z {\displaystyle n\in \mathbb {Z} } or ρ n ≠ 1 {\displaystyle \rho ^{n}\neq 1} for all n ∈ Z {\displaystyle n\in \mathbb {Z} } , respectively.
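The classification of a fixed point by its multiplier can be sketched in a few lines of code. The following Python fragment is an illustrative sketch only; the example map f(z) = z² and all helper names are our own choices, not from the article:

```python
# Illustrative sketch: classify a fixed point z0 of a holomorphic map by the
# modulus of its multiplier rho = f'(z0).  The map and names below are
# assumptions made for the example.
import cmath

def classify(rho, tol=1e-12):
    """Classify a fixed point from its multiplier rho."""
    m = abs(rho)
    if m < tol:
        return "superattracting"
    if m < 1 - tol:
        return "attracting"
    if m > 1 + tol:
        return "repelling"
    return "indifferent"

# Example: f(z) = z^2 has fixed points 0 and 1, with derivative f'(z) = 2z.
dfdz = lambda z: 2 * z
print(classify(dfdz(0)))   # superattracting
print(classify(dfdz(1)))   # repelling

# A rotation f(z) = e^{2*pi*i*alpha} z has an indifferent fixed point at 0.
alpha = (5 ** 0.5 - 1) / 2                          # an irrational rotation number
print(classify(cmath.exp(2j * cmath.pi * alpha)))   # indifferent
```

For an indifferent fixed point, it is the arithmetic nature of the rotation number α in ρ = e^{2πiα} that decides between the rationally and irrationally indifferent cases.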
Siegel discs are one of the possible cases of connected components in the Fatou set (the complementary set of the Julia set), according to the Classification of Fatou components, and can occur around irrationally indifferent periodic points. The Fatou set is, roughly, the set of points where the iterates behave similarly to their neighbours (they form a normal family). Siegel discs correspond to points where the dynamics of f {\displaystyle f} are analytically conjugate to an irrational rotation of the complex unit disc. == Name == The Siegel disc is named in honor of Carl Ludwig Siegel. == Gallery == == Formal definition == Let f : S → S {\displaystyle f\colon S\to S} be a holomorphic endomorphism where S {\displaystyle S} is a Riemann surface, and let U be a connected component of the Fatou set F ( f ) {\displaystyle {\mathcal {F}}(f)} . We say U is a Siegel disc of f around the point z 0 {\displaystyle z_{0}} if there exists a biholomorphism ϕ : U → D {\displaystyle \phi :U\to \mathbb {D} } where D {\displaystyle \mathbb {D} } is the unit disc and such that ϕ ( f n ( ϕ − 1 ( z ) ) ) = e 2 π i α n z {\displaystyle \phi (f^{n}(\phi ^{-1}(z)))=e^{2\pi i\alpha n}z} for some α ∈ R ∖ Q {\displaystyle \alpha \in \mathbb {R} \backslash \mathbb {Q} } and ϕ ( z 0 ) = 0 {\displaystyle \phi (z_{0})=0} . Siegel's theorem proves the existence of Siegel discs for irrational numbers satisfying a strong irrationality condition (a Diophantine condition), thus settling a problem that had remained open since Fatou conjectured his theorem on the Classification of Fatou components. Later Alexander D. Brjuno improved this condition on the irrationality, enlarging it to the Brjuno numbers. This is part of the result from the Classification of Fatou components. == See also == Douady rabbit Herman ring == References == Siegel disks at Scholarpedia
Wikipedia:Sigmundur Gudmundsson#0
Sigmundur Gudmundsson (born 1960) is an Icelandic-Swedish mathematician working at Lund University in the fields of differential geometry and global analysis. He is mainly interested in the geometric aspects of harmonic maps and their derivatives, such as harmonic morphisms and p-harmonic functions. His work is partially devoted to the existence theory of complex-valued harmonic morphisms and p-harmonic functions from Riemannian homogeneous spaces of various types, such as symmetric spaces and semisimple, solvable and nilpotent Lie groups. Gudmundsson earned his Ph.D. from the University of Leeds in 1992, under the supervision of John C. Wood. Gudmundsson is the founder of the website Nordic-Math-Job advertising vacant academic positions in the Nordic university departments of Mathematics and Statistics. This started off in 1997 as a one-man show, but is now supported by the mathematical societies in the Nordic countries and the National Committee for Mathematics of The Royal Swedish Academy of Sciences. == Publications == An Introduction to Gaussian Geometry, Lund University (2025). An Introduction to Riemannian Geometry, Lund University (2025). Research Papers == References == == External links == Home Page at Lund University Profile at Zentralblatt MATH Profile at Google Scholar Nordic-Math-Job - Established on 14 February 1997
Wikipedia:Signal-flow graph#0
A signal-flow graph or signal-flowgraph (SFG), invented by Claude Shannon, but often called a Mason graph after Samuel Jefferson Mason who coined the term, is a specialized flow graph, a directed graph in which nodes represent system variables, and branches (edges, arcs, or arrows) represent functional connections between pairs of nodes. Thus, signal-flow graph theory builds on that of directed graphs (also called digraphs), which also includes the theory of oriented graphs. This mathematical theory of digraphs exists, of course, quite apart from its applications. SFGs are most commonly used to represent signal flow in a physical system and its controller(s), forming a cyber-physical system. Among their other uses are the representation of signal flow in various electronic networks and amplifiers, digital filters, state-variable filters and some other types of analog filters. In nearly all literature, a signal-flow graph is associated with a set of linear equations. == History == Wai-Kai Chen wrote: "The concept of a signal-flow graph was originally worked out by Shannon [1942] in dealing with analog computers. The greatest credit for the formulation of signal-flow graphs is normally extended to Mason [1953], [1956]. He showed how to use the signal-flow graph technique to solve some difficult electronic problems in a relatively simple manner. The term signal flow graph was used because of its original application to electronic problems and the association with electronic signals and flowcharts of the systems under study." Lorens wrote: "Previous to Mason's work, C. E. Shannon worked out a number of the properties of what are now known as flow graphs. Unfortunately, the paper originally had a restricted classification and very few people had access to the material." "The rules for the evaluation of the graph determinant of a Mason Graph were first given and proven by Shannon [1942] using mathematical induction.
His work remained essentially unknown even after Mason published his classical work in 1953. Three years later, Mason [1956] rediscovered the rules and proved them by considering the value of a determinant and how it changes as variables are added to the graph. [...]" == Domain of application == Robichaud et al. identify the domain of application of SFGs as follows: "All the physical systems analogous to these networks [constructed of ideal transformers, active elements and gyrators] constitute the domain of application of the techniques developed [here]. Trent has shown that all the physical systems which satisfy the following conditions fall into this category. The finite lumped system is composed of a number of simple parts, each of which has known dynamical properties which can be defined by equations using two types of scalar variables and parameters of the system. Variables of the first type represent quantities which can be measured, at least conceptually, by attaching an indicating instrument to two connection points of the element. Variables of the second type characterize quantities which can be measured by connecting a meter in series with the element. Relative velocities and positions, pressure differentials and voltages are typical quantities of the first class, whereas electric currents, forces, rates of heat flow, are variables of the second type. Firestone has been the first to distinguish these two types of variables with the names across variables and through variables. Variables of the first type must obey a mesh law, analogous to Kirchhoff's voltage law, whereas variables of the second type must satisfy an incidence law analogous to Kirchhoff's current law. Physical dimensions of appropriate products of the variables of the two types must be consistent. For the systems in which these conditions are satisfied, it is possible to draw a linear graph isomorphic with the dynamical properties of the system as described by the chosen variables. 
The techniques [...] can be applied directly to these linear graphs as well as to electrical networks, to obtain a signal flow graph of the system." == Basic flow graph concepts == The following illustration and its meaning were introduced by Mason to illustrate basic concepts: In the simple flow graphs of the figure, a functional dependence of a node is indicated by an incoming arrow, and the node originating this influence is at the tail of that arrow. In its most general form, the signal flow graph indicates by incoming arrows only those nodes that influence the processing at the receiving node; at each node, i, the incoming variables are processed according to a function associated with that node, say Fi. The flowgraph in (a) represents a set of explicit relationships: x 1 = an independent variable x 2 = F 2 ( x 1 , x 3 ) x 3 = F 3 ( x 1 , x 2 , x 3 ) {\displaystyle {\begin{aligned}x_{\mathrm {1} }&={\text{an independent variable}}\\x_{\mathrm {2} }&=F_{2}(x_{\mathrm {1} },x_{\mathrm {3} })\\x_{\mathrm {3} }&=F_{3}(x_{\mathrm {1} },x_{\mathrm {2} },x_{\mathrm {3} })\\\end{aligned}}} Node x1 is an isolated node because no arrow is incoming; the equations for x2 and x3 have the graphs shown in parts (b) and (c) of the figure. These relationships define for every node a function that processes the input signals it receives. Each non-source node combines the input signals in some manner, and broadcasts a resulting signal along each outgoing branch. "A flow graph, as defined originally by Mason, implies a set of functional relations, linear or not." However, the commonly used Mason graph is more restricted, assuming that each node simply sums its incoming arrows, and that each branch involves only its initiating node.
Thus, in this more restrictive approach, the node x1 is unaffected while: x 2 = f 21 ( x 1 ) + f 23 ( x 3 ) {\displaystyle x_{2}=f_{21}(x_{1})+f_{23}(x_{3})} x 3 = f 31 ( x 1 ) + f 32 ( x 2 ) + f 33 ( x 3 ) , {\displaystyle x_{3}=f_{31}(x_{1})+f_{32}(x_{2})+f_{33}(x_{3})\ ,} and now the functions fij can be associated with the signal-flow branches ij joining the pair of nodes xi, xj, rather than having general relationships associated with each node. A contribution by a node to itself like f33 for x3 is called a self-loop. Frequently these functions are simply multiplicative factors (often called transmittances or gains), for example, fij(xj)=cijxj, where c is a scalar, but possibly a function of some parameter like the Laplace transform variable s. Signal-flow graphs are very often used with Laplace-transformed signals, because then they represent systems of linear differential equations. In this case the transmittance, c(s), is often called a transfer function. === Choosing the variables === In general, there are several ways of choosing the variables in a complex system. Corresponding to each choice, a system of equations can be written and each system of equations can be represented in a graph. This formulation of the equations becomes direct and automatic if one has at one's disposal techniques which permit the drawing of a graph directly from the schematic diagram of the system under study. The structure of the graphs thus obtained is related in a simple manner to the topology of the schematic diagram, and it becomes unnecessary to consider the equations, even implicitly, to obtain the graph. In some cases, one has simply to imagine the flow graph in the schematic diagram and the desired answers can be obtained without even drawing the flow graph. === Non-uniqueness === Robichaud et al.
wrote: "The signal flow graph contains the same information as the equations from which it is derived; but there does not exist a one-to-one correspondence between the graph and the system of equations. One system will give different graphs according to the order in which the equations are used to define the variable written on the left-hand side." If all equations relate all dependent variables, then there are n! possible SFGs to choose from. == Linear signal-flow graphs == Linear signal-flow graph (SFG) methods apply only to linear time-invariant systems, as studied by their associated theory. When modeling a system of interest, the first step is often to determine the equations representing the system's operation without assigning causes and effects (this is called acausal modeling). A SFG is then derived from this system of equations. A linear SFG consists of nodes indicated by dots and weighted directional branches indicated by arrows. The nodes are the variables of the equations and the branch weights are the coefficients. Signals may only traverse a branch in the direction indicated by its arrow. The elements of a SFG can only represent the operations of multiplication by a coefficient and addition, which are sufficient to represent the constrained equations. When a signal traverses a branch in its indicated direction, the signal is multiplied by the weight of the branch. When two or more branches direct into the same node, their outputs are added. For systems described by linear algebraic or differential equations, the signal-flow graph is mathematically equivalent to the system of equations describing the system, and the equations governing the nodes are discovered for each node by summing incoming branches to that node.
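The node rule just described, each node equaling the weighted sum of its incoming branches, can be sketched numerically. A minimal Python sketch; the gains and the two-equation system are our own illustration, not from the article:

```python
# Minimal sketch of the node-summation rule for a linear SFG with one
# source node x1 and two dependent nodes.  The node equations
#   x2 = f21*x1 + f23*x3
#   x3 = f31*x1 + f32*x2 + f33*x3
# are iterated to a fixed point; this converges because the loop gains
# chosen here are small.  All values are our own illustrative choices.

f21, f23 = 0.5, 0.2
f31, f32, f33 = 0.1, 0.3, 0.4
x1 = 1.0                      # independent (source) node

x2 = x3 = 0.0
for _ in range(200):          # iterate the node equations to convergence
    x2, x3 = f21*x1 + f23*x3, f31*x1 + f32*x2 + f33*x3

# At the fixed point, eliminating the self-loop f33 gives the check:
#   x3 = (f31*x1 + f32*x2) / (1 - f33)
assert abs(x3 - (f31*x1 + f32*x2) / (1 - f33)) < 1e-9
print(round(x2, 6), round(x3, 6))   # 0.592593 0.462963
```

Direct elimination (or Mason's rule) gives the same values in closed form; the iteration is only meant to show that the graph and the equations carry identical information.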
These incoming branches convey the contributions of the other nodes, expressed as the connected node value multiplied by the weight of the connecting branch, usually a real number or function of some parameter (for example a Laplace transform variable s). For linear active networks, Choma writes: "By a 'signal flow representation' [or 'graph', as it is commonly referred to] we mean a diagram that, by displaying the algebraic relationships among relevant branch variables of network, paints an unambiguous picture of the way an applied input signal ‘flows’ from input-to-output ... ports." A motivation for a SFG analysis is described by Chen: "The analysis of a linear system reduces ultimately to the solution of a system of linear algebraic equations. As an alternative to conventional algebraic methods of solving the system, it is possible to obtain a solution by considering the properties of certain directed graphs associated with the system." [See subsection: Solving linear equations.] "The unknowns of the equations correspond to the nodes of the graph, while the linear relations between them appear in the form of directed edges connecting the nodes. ...The associated directed graphs in many cases can be set up directly by inspection of the physical system without the necessity of first formulating the associated equations..." === Basic components === A linear signal flow graph is related to a system of linear equations of the following form: x j = ∑ k = 1 N t j k x k {\displaystyle {\begin{aligned}x_{\mathrm {j} }&=\sum _{\mathrm {k} =1}^{\mathrm {N} }t_{\mathrm {jk} }x_{\mathrm {k} }\end{aligned}}} where t j k {\displaystyle t_{jk}} = transmittance (or gain) from x k {\displaystyle x_{k}} to x j {\displaystyle x_{j}} . The figure to the right depicts various elements and constructs of a signal flow graph (SFG). Exhibit (a) is a node. In this case, the node is labeled x {\displaystyle x} . A node is a vertex representing a variable or signal.
A source node has only outgoing branches (represents an independent variable). As a special case, an input node is characterized by having one or more attached arrows pointing away from the node and no arrows pointing into the node. Any open, complete SFG will have at least one input node. An output or sink node has only incoming branches (represents a dependent variable). Although any node can be an output, explicit output nodes are often used to provide clarity. Explicit output nodes are characterized by having one or more attached arrows pointing into the node and no arrows pointing away from the node. Explicit output nodes are not required. A mixed node has both incoming and outgoing branches. Exhibit (b) is a branch with a multiplicative gain of m {\displaystyle m} . The meaning is that the output, at the tip of the arrow, is m {\displaystyle m} times the input at the tail of the arrow. The gain can be a simple constant or a function (for example: a function of some transform variable such as s {\displaystyle s} , ω {\displaystyle \omega } , or z {\displaystyle z} , for Laplace, Fourier or Z-transform relationships). Exhibit (c) is a branch with a multiplicative gain of one. When the gain is omitted, it is assumed to be unity. Exhibit (d) V i n {\displaystyle V_{in}} is an input node. In this case, V i n {\displaystyle V_{in}} is multiplied by the gain m {\displaystyle m} . Exhibit (e) I o u t {\displaystyle I_{out}} is an explicit output node; the incoming edge has a gain of m {\displaystyle m} . Exhibit (f) depicts addition. When two or more arrows point into a node, the signals carried by the edges are added. Exhibit (g) depicts a simple loop. The loop gain is A × m {\displaystyle A\times m} . Exhibit (h) depicts the expression Z = a X + b Y {\displaystyle Z=aX+bY} . Terms used in linear SFG theory also include: Path. A path is a continuous set of branches traversed in the direction indicated by the branch arrows. Open path. 
If no node is re-visited, the path is open. Forward path. A path from an input node (source) to an output node (sink) that does not re-visit any node. Path gain: the product of the gains of all the branches in the path. Loop. A closed path (it originates and ends on the same node, and no node is touched more than once). Loop gain: the product of the gains of all the branches in the loop. Non-touching loops. Non-touching loops have no common nodes. Graph reduction. Removal of one or more nodes from a graph using graph transformations. Residual node. In any contemplated process of graph reduction, the nodes to be retained in the new graph are called residual nodes. Splitting a node. Splitting a node corresponds to replacing it with two half nodes, one being a sink and the other a source. Index: The index of a graph is the minimum number of nodes which have to be split in order to remove all the loops in a graph. Index node. The nodes that are split to determine the index of a graph are called index nodes, and in general they are not unique. === Systematic reduction to sources and sinks === A signal-flow graph may be simplified by graph transformation rules. These simplification rules are also referred to as signal-flow graph algebra. The purpose of this reduction is to relate the dependent variables of interest (residual nodes, sinks) to their independent variables (sources). The systematic reduction of a linear signal-flow graph is a graphical method equivalent to the Gauss-Jordan elimination method for solving linear equations. The rules presented below may be applied over and over until the signal flow graph is reduced to its "minimal residual form". Further reduction can require loop elimination or the use of a "reduction formula" with the goal to directly connect sink nodes representing the dependent variables to the source nodes representing the independent variables.
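In a linear SFG these transformation rules amount to simple arithmetic on branch gains: parallel edges add, series edges multiply, and absorbing a self-loop of gain g divides the incoming gains by (1 − g). A minimal Python sketch; the function names and numbers are our own illustration:

```python
# Sketch of the elementary gain arithmetic behind SFG reduction rules.
# Function names and example values are our own, not from the article.

def parallel(*gains):
    """Parallel edges combine by summing their gains."""
    return sum(gains)

def series(*gains):
    """A chain of series edges combines by multiplying the gains."""
    g = 1.0
    for gain in gains:
        g *= gain
    return g

def absorb_self_loop(incoming_gain, loop_gain):
    """Eliminating a self-loop of gain g divides each incoming gain by (1 - g)."""
    return incoming_gain / (1.0 - loop_gain)

print(parallel(0.5, 0.25, 0.25))   # 1.0
print(series(2, 3, 0.5))           # 3.0
print(absorb_self_loop(0.6, 0.4))  # 0.6 / (1 - 0.4) = 1.0
```

Repeatedly applying these three operations is what reduces a graph toward its minimal residual form; the order of application does not change the final input-to-output gains.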
By these means, any signal-flow graph can be simplified by successively removing internal nodes until only the input and output and index nodes remain. Robichaud described this process of systematic flow-graph reduction: The reduction of a graph proceeds by the elimination of certain nodes to obtain a residual graph showing only the variables of interest. This elimination of nodes is called "node absorption". This method is close to the familiar process of successive eliminations of undesired variables in a system of equations. One can eliminate a variable by removing the corresponding node in the graph. If one reduces the graph sufficiently, it is possible to obtain the solution for any variable and this is the objective which will be kept in mind in this description of the different methods of reduction of the graph. In practice, however, the techniques of reduction will be used solely to transform the graph to a residual graph expressing some fundamental relationships. Complete solutions will be more easily obtained by application of Mason's rule. The graph itself programs the reduction process. Indeed a simple inspection of the graph readily suggests the different steps of the reduction which are carried out by elementary transformations, by loop elimination, or by the use of a reduction formula. For digitally reducing a flow graph using an algorithm, Robichaud extends the notion of a simple flow graph to a generalized flow graph: Before describing the process of reduction...the correspondence between the graph and a system of linear equations ... must be generalized...The generalized graphs will represent some operational relationships between groups of variables...To each branch of the generalized graph is associated a matrix giving the relationships between the variables represented by the nodes at the extremities of that branch... The elementary transformations [defined by Robichaud in his Figure 7.2, p. 
184] and the loop reduction permit the elimination of any node j of the graph by the reduction formula:[described in Robichaud's Equation 7-1]. With the reduction formula, it is always possible to reduce a graph of any order... [After reduction] the final graph will be a cascade graph in which the variables of the sink nodes are explicitly expressed as functions of the sources. This is the only method for reducing the generalized graph since Mason's rule is obviously inapplicable. The definition of an elementary transformation varies from author to author: Some authors only consider as elementary transformations the summation of parallel-edge gains and the multiplication of series-edge gains, but not the elimination of self-loops Other authors consider the elimination of a self-loop as an elementary transformation Parallel edges. Replace parallel edges with a single edge having a gain equal to the sum of original gains. The graph on the left has parallel edges between nodes. On the right, these parallel edges have been replaced with a single edge having a gain equal to the sum of the gains on each original edge. The equations corresponding to the reduction between N and node I1 are: N = I 1 f 1 + I 1 f 2 + I 1 f 3 + . . . N = I 1 ( f 1 + f 2 + f 3 ) + . . . {\displaystyle {\begin{aligned}N&=I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {1} }f_{\mathrm {2} }+I_{\mathrm {1} }f_{\mathrm {3} }+...\\N&=I_{\mathrm {1} }(f_{\mathrm {1} }+f_{\mathrm {2} }+f_{\mathrm {3} })+...\\\end{aligned}}} Outflowing edges. Replace outflowing edges with edges directly flowing from the node's sources. The graph on the left has an intermediate node N between nodes from which it has inflows, and nodes to which it flows out. The graph on the right shows direct flows between these node sets, without transiting via N. For the sake of simplicity, N and its inflows are not represented. The outflows from N are eliminated. 
The equations corresponding to the reduction directly relating N's input signals to its output signals are: N = I 1 f 1 + I 2 f 2 + I 3 f 3 O 1 = g 1 N O 2 = g 2 N O 3 = g 3 N O 1 = g 1 ( I 1 f 1 + I 2 f 2 + I 3 f 3 ) O 2 = g 2 ( I 1 f 1 + I 2 f 2 + I 3 f 3 ) O 3 = g 3 ( I 1 f 1 + I 2 f 2 + I 3 f 3 ) O 1 = I 1 f 1 g 1 + I 2 f 2 g 1 + I 3 f 3 g 1 O 2 = I 1 f 1 g 2 + I 2 f 2 g 2 + I 3 f 3 g 2 O 3 = I 1 f 1 g 3 + I 2 f 2 g 3 + I 3 f 3 g 3 {\displaystyle {\begin{aligned}N&=I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} }\\O_{\mathrm {1} }&=g_{\mathrm {1} }N\\O_{\mathrm {2} }&=g_{\mathrm {2} }N\\O_{\mathrm {3} }&=g_{\mathrm {3} }N\\O_{\mathrm {1} }&=g_{\mathrm {1} }(I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} })\\O_{\mathrm {2} }&=g_{\mathrm {2} }(I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} })\\O_{\mathrm {3} }&=g_{\mathrm {3} }(I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} })\\O_{\mathrm {1} }&=I_{\mathrm {1} }f_{\mathrm {1} }g_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }g_{\mathrm {1} }+I_{\mathrm {3} }f_{\mathrm {3} }g_{\mathrm {1} }\\O_{\mathrm {2} }&=I_{\mathrm {1} }f_{\mathrm {1} }g_{\mathrm {2} }+I_{\mathrm {2} }f_{\mathrm {2} }g_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} }g_{\mathrm {2} }\\O_{\mathrm {3} }&=I_{\mathrm {1} }f_{\mathrm {1} }g_{\mathrm {3} }+I_{\mathrm {2} }f_{\mathrm {2} }g_{\mathrm {3} }+I_{\mathrm {3} }f_{\mathrm {3} }g_{\mathrm {3} }\\\end{aligned}}} Zero-signal nodes. Eliminate outflowing edges from a node determined to have a value of zero. If the value of a node is zero, its outflowing edges can be eliminated. Nodes without outflows. Eliminate a node without outflows. In this case, N is not a variable of interest, and it has no outgoing edges; therefore, N, and its inflowing edges, can be eliminated. Self-looping edge. 
Replace looping edges by adjusting the gains on the incoming edges. The graph on the left has a looping edge at node N, with a gain of g. On the right, the looping edge has been eliminated, and all inflowing edges have their gain divided by (1-g). The equation corresponding to the reduction between N and its input signals is: N = ( I 1 f 1 + I 2 f 2 + I 3 f 3 ) / ( 1 − g ) {\displaystyle N={\frac {I_{\mathrm {1} }f_{\mathrm {1} }+I_{\mathrm {2} }f_{\mathrm {2} }+I_{\mathrm {3} }f_{\mathrm {3} }}{1-g}}} ==== Implementations ==== The above procedure for building the SFG from an acausal system of equations and for solving the SFG's gains has been implemented as an add-on to MATHLAB 68, an on-line system providing machine aid for the mechanical symbolic processes encountered in analysis. === Solving linear equations === Signal flow graphs can be used to solve sets of simultaneous linear equations. The set of equations must be consistent and all equations must be linearly independent. ==== Putting the equations in "standard form" ==== For M equations with N unknowns where each yj is a known value and each xj is an unknown value, there is an equation for each known of the following form. ∑ k = 1 N c j k x k = y j {\displaystyle {\begin{aligned}\sum _{\mathrm {k} =1}^{\mathrm {N} }c_{\mathrm {jk} }x_{\mathrm {k} }&=y_{\mathrm {j} }\end{aligned}}} ; the usual form for simultaneous linear equations with 1 ≤ j ≤ M Although it is feasible, particularly for simple cases, to establish a signal flow graph using the equations in this form, some rearrangement allows a general procedure that works easily for any set of equations, as is now presented.
To proceed, first the equations are rewritten as ∑ k = 1 N c j k x k − y j = 0 {\displaystyle {\begin{aligned}\sum _{\mathrm {k} =1}^{\mathrm {N} }c_{\mathrm {jk} }x_{\mathrm {k} }-y_{\mathrm {j} }&=0\end{aligned}}} and further rewritten as ∑ k = 1 N c j k x k + x j − y j = x j {\displaystyle {\begin{aligned}\sum _{\mathrm {k=1} }^{\mathrm {N} }c_{\mathrm {jk} }x_{\mathrm {k} }+x_{\mathrm {j} }-y_{\mathrm {j} }&=x_{\mathrm {j} }\end{aligned}}} and finally rewritten as ∑ k = 1 N ( c j k + δ j k ) x k − y j = x j {\displaystyle {\begin{aligned}\sum _{\mathrm {k=1} }^{\mathrm {N} }(c_{\mathrm {jk} }+\delta _{\mathrm {jk} })x_{\mathrm {k} }-y_{\mathrm {j} }&=x_{\mathrm {j} }\end{aligned}}} ; form suitable to be expressed as a signal flow graph. where δjk is the Kronecker delta. The signal-flow graph is now arranged by selecting one of these equations and addressing the node on the right-hand side. This node connects to itself by a branch whose weight includes a '+1', making a self-loop in the flow graph. The other terms in that equation connect this node first to the source in this equation and then to all the other branches incident on this node. Every equation is treated this way, and then each incident branch is joined to its respective emanating node. For example, the case of three variables is shown in the figure, and the first equation is: x 1 = ( c 11 + 1 ) x 1 + c 12 x 2 + c 13 x 3 − y 1 , {\displaystyle x_{1}=\left(c_{11}+1\right)x_{1}+c_{12}x_{2}+c_{13}x_{3}-y_{1}\ ,} where the right side of this equation is the sum of the weighted arrows incident on node x1. As there is a basic symmetry in the treatment of every node, a simple starting point is an arrangement of nodes with each node at one vertex of a regular polygon. When expressed using the general coefficients {cin}, the environment of each node is then just like all the rest apart from a permutation of indices.
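The final rewriting only adds x_j to both sides, so the signal-flow-graph form holds exactly when the original equations Σ c_jk x_k = y_j do. A short Python check with an arbitrary 3×3 example (the matrix and solution vector are our own illustration):

```python
# Verify that the SFG "standard form"  x_j = sum_k (c_jk + delta_jk) x_k - y_j
# is equivalent to  C x = y.  The concrete numbers are our own illustration.

C = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
x = [1.0, 1.0, 1.0]
y = [sum(C[j][k] * x[k] for k in range(3)) for j in range(3)]  # y = C x

for j in range(3):
    # SFG node equation: x_j = sum_k (c_jk + delta_jk) x_k - y_j
    rhs = sum((C[j][k] + (1.0 if j == k else 0.0)) * x[k] for k in range(3)) - y[j]
    assert abs(rhs - x[j]) < 1e-12

print("standard form consistent with C x = y")
```

Algebraically, the right-hand side is (Cx)_j + x_j − y_j, which equals x_j precisely when (Cx)_j = y_j, so the self-loop weight c_jj + 1 is just the bookkeeping for the added x_j term.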
Such an implementation for a set of three simultaneous equations is seen in the figure. Often the known values yj are taken as the primary causes and the unknown values xj as the effects, but regardless of this interpretation, the last form for the set of equations can be represented as a signal-flow graph. This point is discussed further in the subsection Interpreting 'causality'. ==== Applying Mason's gain formula ==== In the most general case, the values for all the xk variables can be calculated by computing Mason's gain formula for the path from each yj to each xk and using superposition. x k = ∑ j = 1 M ( G k j ) y j {\displaystyle {\begin{aligned}x_{\mathrm {k} }&=\sum _{\mathrm {j} =1}^{\mathrm {M} }(G_{\mathrm {kj} })y_{\mathrm {j} }\end{aligned}}} where Gkj = the sum of Mason's gain formula computed for all the paths from input yj to variable xk. In general, there are N-1 paths from yj to variable xk so the computational effort to calculate Gkj is proportional to N-1. Since there are M values of yj, Gkj must be computed M times for a single value of xk. The computational effort to calculate a single xk variable is proportional to (N-1)(M). The effort to compute all the xk variables is proportional to (N)(N-1)(M). If there are N equations and N unknowns, then the computation effort is on the order of N3. == Relation to block diagrams == For some authors, a linear signal-flow graph is more constrained than a block diagram, in that the SFG rigorously describes linear algebraic equations represented by a directed graph. For other authors, linear block diagrams and linear signal-flow graphs are equivalent ways of depicting a system, and either can be used to solve for the gain. A tabulation of the comparison between block diagrams and signal-flow graphs is provided by Bakshi & Bakshi, and another tabulation by Kumar. According to Barker et al.: "The signal flow graph is the most convenient method for representing a dynamic system.
The topology of the graph is compact and the rules for manipulating it are easier to program than the corresponding rules that apply to block diagrams." In the figure, a simple block diagram for a feedback system is shown with two possible interpretations as a signal-flow graph. The input R(s) is the Laplace-transformed input signal; it is shown as a source node in the signal-flow graph (a source node has no input edges). The output signal C(s) is the Laplace-transformed output variable. It is represented as a sink node in the flow diagram (a sink has no output edges). G(s) and H(s) are transfer functions, with H(s) serving to feed back a modified version of the output to the input, B(s). The two flow graph representations are equivalent. == Interpreting 'causality' == The term "cause and effect" was applied by Mason to SFGs: "The process of constructing a graph is one of tracing a succession of cause and effects through the physical system. One variable is expressed as an explicit effect due to certain causes; they in turn, are recognized as effects due to still other causes." — S.J. Mason: Section IV: Illustrative applications of flow graph technique and has been repeated by many later authors: "The signal flow graph is another visual tool for representing causal relationships between components of the system. It is a simplified version of a block diagram introduced by S.J. Mason as a cause-and-effect representation of linear systems." — Arthur G.O. Mutambara: Design and Analysis of Control Systems, p.238 However, Mason's paper is concerned to show in great detail how a set of equations is connected to an SFG, an emphasis unrelated to intuitive notions of "cause and effect". Intuitions can be helpful for arriving at an SFG or for gaining insight from an SFG, but are inessential to the SFG. 
The essential connection of the SFG is to its own set of equations, as described, for example, by Ogata: "A signal-flow graph is a diagram that represents a set of simultaneous algebraic equations. When applying the signal flow graph method to analysis of control systems, we must first transform linear differential equations into algebraic equations in [the Laplace transform variable] s." — Katsuhiko Ogata: Modern Control Engineering, p. 104 There is no reference to "cause and effect" here, and as said by Borutzky: "Like block diagrams, signal flow graphs represent the computational, not the physical structure of a system." — Wolfgang Borutzky, Bond Graph Methodology, p. 10 The term "cause and effect" may be misinterpreted as it applies to the SFG, and taken incorrectly to suggest a system view of causality, rather than a computationally based meaning. To keep discussion clear, it may be advisable to use the term "computational causality", as is suggested for bond graphs: "Bond-graph literature uses the term computational causality, indicating the order of calculation in a simulation, in order to avoid any interpretation in the sense of intuitive causality." The term "computational causality" is explained using the example of current and voltage in a resistor: "The computational causality of physical laws can therefore not be predetermined, but depends upon the particular use of that law. We cannot conclude whether it is the current flowing through a resistor that causes a voltage drop, or whether it is the difference in potentials at the two ends of the resistor that cause current to flow. Physically these are simply two concurrent aspects of one and the same physical phenomenon. Computationally, we may have to assume at times one position, and at other times the other." — François Cellier & Ernesto Kofman: §1.5 Simulation software today and tomorrow, p. 15 A computer program or algorithm can be arranged to solve a set of equations using various strategies. 
They differ in which variables they choose to express in terms of the others; these algorithmic decisions, which concern only solution strategy, make the variables solved for earlier in the solution the "effects", determined by the remaining variables, which then act as "causes" in the sense of "computational causality". Using this terminology, it is computational causality, not system causality, that is relevant to the SFG. There exists a wide-ranging philosophical debate, not concerned specifically with the SFG, over connections between computational causality and system causality. == Signal-flow graphs for analysis and design == Signal-flow graphs can be used for analysis, that is, for understanding a model of an existing system, or for synthesis, that is, for determining the properties of a design alternative. === Signal-flow graphs for dynamic systems analysis === When building a model of a dynamic system, a list of steps is provided by Dorf & Bishop:
1. Define the system and its components.
2. Formulate the mathematical model and list the needed assumptions.
3. Write the differential equations describing the model.
4. Solve the equations for the desired output variables.
5. Examine the solutions and the assumptions.
6. If needed, reanalyze or redesign the system.
—RC Dorf and RH Bishop, Modern Control Systems, Chapter 2, p. 2
In this workflow, equations of the physical system's mathematical model are used to derive the signal-flow graph equations. === Signal-flow graphs for design synthesis === Signal-flow graphs have been used in Design Space Exploration (DSE), as an intermediate representation towards a physical implementation. The DSE process seeks a suitable solution among different alternatives. In contrast with the typical analysis workflow, where a system of interest is first modeled with the physical equations of its components, the specification for synthesizing a design could be a desired transfer function. 
For example, different strategies would create different signal-flow graphs, from which implementations are derived. Another example uses an annotated SFG as an expression of the continuous-time behavior, as input to an architecture generator. == Shannon and Shannon-Happ formulas == Shannon's formula is an analytic expression for calculating the gain of an interconnected set of amplifiers in an analog computer. During World War II, while investigating the functional operation of an analog computer, Claude Shannon developed his formula. Because of wartime restrictions, Shannon's work was not published at that time, and, in 1952, Mason rediscovered the same formula. William W. Happ generalized the Shannon formula for topologically closed systems. The Shannon-Happ formula can be used for deriving transfer functions, sensitivities, and error functions. For a consistent set of linear unilateral relations, the Shannon-Happ formula expresses the solution using direct substitution (non-iterative). NASA's electrical circuit software NASAP is based on the Shannon-Happ formula. == Linear signal-flow graph examples == === Simple voltage amplifier === The amplification of a signal V1 by an amplifier with gain a12 is described mathematically by V 2 = a 12 V 1 . {\displaystyle V_{2}=a_{12}V_{1}\,.} The relationship represented by the signal-flow graph of Figure 1 is that V2 depends on V1, but no dependency of V1 on V2 is implied. See Kuo, page 57. === Ideal negative feedback amplifier === A possible SFG for the asymptotic gain model for a negative feedback amplifier is shown in Figure 3, and leads to the equation for the gain of this amplifier as G = y 2 x 1 {\displaystyle G={\frac {y_{2}}{x_{1}}}} = G ∞ ( T T + 1 ) + G 0 ( 1 T + 1 ) . 
{\displaystyle =G_{\infty }\left({\frac {T}{T+1}}\right)+G_{0}\left({\frac {1}{T+1}}\right)\ .} The interpretation of the parameters is as follows: T = return ratio, G∞ = direct amplifier gain, G0 = feedforward (indicating the possible bilateral nature of the feedback, possibly deliberate as in the case of feedforward compensation). Figure 3 has the interesting aspect that it resembles Figure 2 for the two-port network with the addition of the extra feedback relation x2 = T y1. From this gain expression an interpretation of the parameters G0 and G∞ is evident, namely: G ∞ = lim T → ∞ G ; G 0 = lim T → 0 G . {\displaystyle G_{\infty }=\lim _{T\to \infty }G\ ;\ G_{0}=\lim _{T\to 0}G\ .} There are many possible SFG's associated with any particular gain relation. Figure 4 shows another SFG for the asymptotic gain model that can be easier to interpret in terms of a circuit. In this graph, parameter β is interpreted as a feedback factor and A as a "control parameter", possibly related to a dependent source in the circuit. Using this graph, the gain is G = y 2 x 1 {\displaystyle G={\frac {y_{2}}{x_{1}}}} = G 0 + A 1 − β A . {\displaystyle =G_{0}+{\frac {A}{1-\beta A}}\ .} To connect to the asymptotic gain model, parameters A and β cannot be arbitrary circuit parameters, but must relate to the return ratio T by: T = − β A , {\displaystyle T=-\beta A\ ,} and to the asymptotic gain as: G ∞ = lim T → ∞ G = G 0 − 1 β . {\displaystyle G_{\infty }=\lim _{T\to \infty }G=G_{0}-{\frac {1}{\beta }}\ .} Substituting these results into the gain expression, G = G 0 + 1 β − T 1 + T {\displaystyle G=G_{0}+{\frac {1}{\beta }}{\frac {-T}{1+T}}} = G 0 + ( G 0 − G ∞ ) − T 1 + T {\displaystyle =G_{0}+(G_{0}-G_{\infty }){\frac {-T}{1+T}}} = G ∞ T 1 + T + G 0 1 1 + T , {\displaystyle =G_{\infty }{\frac {T}{1+T}}+G_{0}{\frac {1}{1+T}}\ ,} which is the formula of the asymptotic gain model. 
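The closing identity can be spot-checked numerically. The following Python sketch is not part of the source; the parameter values are arbitrary illustrative choices, used only to confirm that the Figure 4 gain expression and the asymptotic gain model form agree:

```python
# Compare G = G0 + A/(1 - beta*A) (read off the SFG of Figure 4) with the
# asymptotic gain model G = G_inf*T/(1+T) + G0/(1+T), where the text sets
# T = -beta*A and G_inf = G0 - 1/beta.

def gain_sfg(A, beta, G0):
    """Gain obtained directly from the Figure 4 signal-flow graph."""
    return G0 + A / (1 - beta * A)

def gain_asymptotic_model(A, beta, G0):
    """Gain rewritten in asymptotic-gain-model form."""
    T = -beta * A              # return ratio
    G_inf = G0 - 1.0 / beta    # asymptotic gain
    return G_inf * T / (1 + T) + G0 / (1 + T)

# Hypothetical parameter values (not from the source)
for A, beta, G0 in [(1000.0, -0.1, 0.5), (50.0, -0.02, 2.0)]:
    assert abs(gain_sfg(A, beta, G0) - gain_asymptotic_model(A, beta, G0)) < 1e-9
```

Both forms agree to rounding error, as the algebraic substitution predicts.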
=== Electrical circuit containing a two-port network === The figure to the right depicts a circuit that contains a y-parameter two-port network. Vin is the input of the circuit and V2 is the output. The two-port equations impose a set of linear constraints between its port voltages and currents. The terminal equations impose other constraints. All these constraints are represented in the SFG (Signal Flow Graph) below the circuit. There is only one path from input to output, which is shown in a different color and has a (voltage) gain of -RLy21. There are also three loops: -Riny11, -RLy22, Riny21RLy12. Sometimes a loop indicates intentional feedback but it can also indicate a constraint on the relationship of two variables. For example, the equation that describes a resistor says that the ratio of the voltage across the resistor to the current through the resistor is a constant which is called the resistance. This can be interpreted as meaning that the voltage is the input and the current the output, that the current is the input and the voltage the output, or merely that the voltage and current have a linear relationship. Virtually all passive two-terminal devices in a circuit will show up in the SFG as a loop. The SFG and the schematic depict the same circuit, but the schematic also suggests the circuit's purpose. Compared to the schematic, the SFG is awkward but it does have the advantage that the input to output gain can be written down by inspection using Mason's rule. === Mechatronics: Position servo with multi-loop feedback === This example is representative of an SFG (signal-flow graph) used to represent a servo control system and illustrates several features of SFGs. Some of the loops (loop 3, loop 4 and loop 5) are extrinsic intentionally designed feedback loops. These are shown with dotted lines. There are also intrinsic loops (loop 0, loop 1, loop 2) that are not intentional feedback loops, although they can be analyzed as though they were. 
These loops are shown with solid lines. Loop 3 and loop 4 are also known as minor loops because they are inside a larger loop. The forward path begins with θC, the desired position command. This is multiplied by KP which could be a constant or a function of frequency. KP incorporates the conversion gain of the DAC and any filtering on the DAC output. The output of KP is the velocity command VωC which is multiplied by KV which can be a constant or a function of frequency. The output of KV is the current command, VIC which is multiplied by KC which can be a constant or a function of frequency. The output of KC is the amplifier output voltage, VA. The current, IM, through the motor winding is the integral of the voltage applied to the inductance. The motor produces a torque, T, proportional to IM. Permanent magnet motors tend to have a linear current to torque function. The conversion constant of current to torque is KM. The torque, T, divided by the load moment of inertia, M, is the acceleration, α, which is integrated to give the load velocity ω which is integrated to produce the load position, θLC. The forward path of loop 0 asserts that acceleration is proportional to torque and the velocity is the time integral of acceleration. The backward path says that as the speed increases there is a friction or drag that counteracts the torque. Torque on the load decreases proportionately to the load velocity until the point is reached that all the torque is used to overcome friction and the acceleration drops to zero. Loop 0 is intrinsic. Loop 1 represents the interaction of an inductor's current with its internal and external series resistance. The current through an inductance is the time integral of the voltage across the inductance. When a voltage is first applied, all of it appears across the inductor. This is shown by the forward path through 1 s L M {\displaystyle {\frac {1}{s\mathrm {L} _{\mathrm {M} }}}\,} . 
As the current increases, voltage is dropped across the inductor internal resistance RM and the external resistance RS. This reduces the voltage across the inductor and is represented by the feedback path -(RM + RS). The current continues to increase but at a steadily decreasing rate until the current reaches the point at which all the voltage is dropped across (RM + RS). Loop 1 is intrinsic. Loop 2 expresses the effect of the motor back EMF. Whenever a permanent magnet motor rotates, it acts like a generator and produces a voltage in its windings. It does not matter whether the rotation is caused by a torque applied to the drive shaft or by current applied to the windings. This voltage is referred to as back EMF. The conversion gain of rotational velocity to back EMF is GM. The polarity of the back EMF is such that it diminishes the voltage across the winding inductance. Loop 2 is intrinsic. Loop 3 is extrinsic. The current in the motor winding passes through a sense resistor. The voltage, VIM, developed across the sense resistor is fed back to the negative terminal of the power amplifier KC. This feedback causes the voltage amplifier to act like a voltage controlled current source. Since the motor torque is proportional to motor current, the sub-system VIC to the output torque acts like a voltage controlled torque source. This sub-system may be referred to as the "current loop" or "torque loop". Loop 3 effectively diminishes the effects of loop 1 and loop 2. Loop 4 is extrinsic. A tachometer (actually a low power dc generator) produces an output voltage VωM that is proportional to its angular velocity. This voltage is fed to the negative input of KV. This feedback causes the sub-system from VωC to the load angular velocity to act like a voltage controlled velocity source. This sub-system may be referred to as the "velocity loop". Loop 4 effectively diminishes the effects of loop 0 and loop 3. Loop 5 is extrinsic. This is the overall position feedback loop. 
The feedback comes from an angle encoder that produces a digital output. The output position is subtracted from the desired position by digital hardware which drives a DAC which drives KP. In the SFG, the conversion gain of the DAC is incorporated into KP. See Mason's rule for development of Mason's Gain Formula for this example. == Terminology and classification of signal-flow graphs == There is some confusion in the literature about what a signal-flow graph is; Henry Paynter, inventor of bond graphs, writes: "But much of the decline of signal-flow graphs [...] is due in part to the mistaken notion that the branches must be linear and the nodes must be summative. Neither assumption was embraced by Mason, himself !" === Standards covering signal-flow graphs === IEEE Std 155-1960, IEEE Standards on Circuits: Definitions of Terms for Linear Signal Flow Graphs, 1960. This IEEE standard defines a signal-flow graph as a network of directed branches representing dependent and independent signals as nodes. Incoming branches carry branch signals to the dependent node signals. A dependent node signal is the algebraic sum of the incoming branch signals at that node, i.e. nodes are summative. === State transition signal-flow graph === A state transition SFG or state diagram is a simulation diagram for a system of equations, including the initial conditions of the states. === Closed flowgraph === Closed flowgraphs describe closed systems and have been utilized to provide a rigorous theoretical basis for topological techniques of circuit analysis. Terminology for closed flowgraph theory includes: Contributive node. Summing point for two or more incoming signals resulting in only one outgoing signal. Distributive node. Sampling point for two or more outgoing signals resulting from only one incoming signal. Compound node. Contraction of a contributive node and a distributive node. Strictly dependent & strictly independent node. 
A strictly independent node represents an independent source; a strictly dependent node represents a meter. Open & Closed Flowgraphs. An open flowgraph contains strictly dependent or strictly independent nodes; otherwise it is a closed flowgraph. == Nonlinear flow graphs == Mason introduced both nonlinear and linear flow graphs. To clarify this point, Mason wrote: "A linear flow graph is one whose associated equations are linear." === Examples of nonlinear branch functions === If we denote by xj the signal at node j, the following are examples of node functions that do not pertain to a linear time-invariant system: x j = x k × x l x k = a b s ( x j ) x l = log ⁡ ( x k ) x m = t × x j , where t represents time {\displaystyle {\begin{aligned}x_{\mathrm {j} }&=x_{\mathrm {k} }\times x_{\mathrm {l} }\\x_{\mathrm {k} }&=abs(x_{\mathrm {j} })\\x_{\mathrm {l} }&=\log(x_{\mathrm {k} })\\x_{\mathrm {m} }&=t\times x_{\mathrm {j} }{\text{, where }}t{\text{ represents time}}\\\end{aligned}}} === Examples of nonlinear signal-flow graph models === Although they generally can't be transformed between time domain and frequency domain representations for classical control theory analysis, nonlinear signal-flow graphs can be found in electrical engineering literature. Nonlinear signal-flow graphs can also be found in life sciences, for example, Dr Arthur Guyton's model of the cardiovascular system. == Applications of SFG techniques in various fields of science == Electronic circuits Characterizing sequential circuits of the Moore and Mealy type, obtaining regular expressions from state diagrams. Synthesis of non-linear data converters Control and network theory Stochastic signal processing. 
Reliability of electronic systems Physiology and biophysics Cardiac output regulation Simulation Simulation on analog computers Neuroscience and Combinatorics Study of Polychrony == See also == Asymptotic gain model Bond graphs Coates graph Control Systems/Signal Flow Diagrams in the Control Systems Wikibook Flow graph (mathematics) Leapfrog filter for an example of filter design using a signal flow graph Mason's gain formula Minor loop feedback Noncommutative signal-flow graph == Notes == == References == Ernest J. Henley & R. A. Williams (1973). Graph theory in modern engineering; computer aided design, control, optimization, reliability analysis. Academic Press. ISBN 978-0-08-095607-7. Book almost entirely devoted to this topic. Kuo, Benjamin C. (1967), Automatic Control Systems, Prentice Hall Robichaud, Louis P.A.; Maurice Boisvert; Jean Robert (1962). Signal flow graphs and applications. Prentice-Hall electrical engineering series. Englewood Cliffs, N.J.: Prentice Hall. pp. xiv, 214 p. Deo, Narsingh (1974), Graph Theory with Applications to Engineering and Computer Science, PHI Learning Pvt. Ltd., p. 418, ISBN 978-81-203-0145-0 K. Thulasiraman; M. N. S. Swamy (2011). "§6.11 The Coates and Mason graphs". Graphs: Theory and algorithms. John Wiley & Sons. pp. 163 ff. ISBN 9781118030257. Ogata, Katsuhiko (2002). "Section 3-9 Signal Flow Graphs". Modern Control Engineering 4th Edition. Prentice-Hall. ISBN 978-0-13-043245-2. Phang, Khoman (2000-12-14). "2.5 An overview of Signal-flow graphs" (PDF). CMOS Optical Preamplifier Design Using Graphical Circuit Analysis (Thesis). Department of Electrical and Computer Engineering, University of Toronto. == Further reading == Wai-Kai Chen (1976). Applied Graph Theory. North Holland Publishing Company. ISBN 978-0720423624. Chapter 3 for the essentials, but applications are scattered throughout the book. Wai-Kai Chen (May 1964). "Some applications of linear graphs". Contract DA-28-043-AMC-00073 (E). 
Coordinated Science Laboratory, University of Illinois, Urbana. Archived from the original on January 10, 2015. K. Thulasiraman & M. N. S. Swamy (1992). Graphs: Theory and Algorithms. John Wiley & Sons. 6.10-6.11 for the essential mathematical idea. ISBN 978-0-471-51356-8. Shu-Park Chan (2006). "Graph theory". In Richard C. Dorf (ed.). Circuits, Signals, and Speech and Image Processing (3rd ed.). CRC Press. § 3.6. ISBN 978-1-4200-0308-6. Compares Mason and Coates graph approaches with Maxwell's k-tree approach. RF Hoskins (2014). "Flow-graph and signal flow-graph analysis of linear systems". In SR Deards (ed.). Recent Developments in Network Theory: Proceedings of the Symposium Held at the College of Aeronautics, Cranfield, September 1961. Elsevier. ISBN 9781483223568. A comparison of the utility of the Coates flow graph and the Mason flow graph. == External links == M. L. Edwards: S-parameters, signal flow graphs, and other matrix representations H Schmid: Signal-Flow Graphs in 12 Short Lessons Control Systems/Signal Flow Diagrams at Wikibooks Media related to Signal flow graphs at Wikimedia Commons
Wikipedia:Signomial#0
A signomial is an algebraic function of one or more independent variables. It is perhaps most easily thought of as an algebraic extension of multivariable polynomials—an extension that permits exponents to be arbitrary real numbers (rather than just non-negative integers) while requiring the independent variables to be strictly positive (so that division by zero and other inappropriate algebraic operations are not encountered). Formally, a signomial is a function with domain R > 0 n {\displaystyle \mathbb {R} _{>0}^{n}} which takes values f ( x 1 , x 2 , … , x n ) = ∑ i = 1 M ( c i ∏ j = 1 n x j a i j ) {\displaystyle f(x_{1},x_{2},\dots ,x_{n})=\sum _{i=1}^{M}\left(c_{i}\prod _{j=1}^{n}x_{j}^{a_{ij}}\right)} where the coefficients c i {\displaystyle c_{i}} and the exponents a i j {\displaystyle a_{ij}} are real numbers. Signomials are closed under addition, subtraction, multiplication, and scaling. If we restrict all c i {\displaystyle c_{i}} to be positive, then the function f is a posynomial. Consequently, each signomial is either a posynomial, the negative of a posynomial, or the difference of two posynomials. If, in addition, all exponents a i j {\displaystyle a_{ij}} are non-negative integers, then the signomial becomes a polynomial whose domain is the positive orthant. For example, f ( x 1 , x 2 , x 3 ) = 2.7 x 1 2 x 2 − 1 / 3 x 3 0.7 − 2 x 1 − 4 x 3 2 / 5 {\displaystyle f(x_{1},x_{2},x_{3})=2.7x_{1}^{2}x_{2}^{-1/3}x_{3}^{0.7}-2x_{1}^{-4}x_{3}^{2/5}} is a signomial. The term "signomial" was introduced by Richard J. Duffin and Elmor L. Peterson in their seminal joint work on general algebraic optimization—published in the late 1960s and early 1970s. A recent introductory exposition involves optimization problems. 
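The defining sum lends itself to direct computation. Below is a minimal Python sketch (the helper name `signomial` is an illustrative choice, not from the source) that builds a signomial from its coefficient and exponent data and evaluates the example above:

```python
import math

def signomial(coeffs, exponents):
    """Return f(x1,...,xn) = sum_i c_i * prod_j x_j**a_ij, defined for x_j > 0."""
    def f(*x):
        assert all(xj > 0 for xj in x), "signomials are defined on the positive orthant"
        return sum(c * math.prod(xj ** a for xj, a in zip(x, row))
                   for c, row in zip(coeffs, exponents))
    return f

# The example signomial f(x1,x2,x3) = 2.7*x1^2*x2^(-1/3)*x3^0.7 - 2*x1^(-4)*x3^(2/5)
f = signomial(coeffs=[2.7, -2.0],
              exponents=[[2, -1/3, 0.7],
                         [-4, 0, 2/5]])

print(f(1.0, 1.0, 1.0))  # 2.7 - 2.0, i.e. 0.7 up to rounding, since 1**a == 1
```

Restricting `coeffs` to positive values would make the same construction a posynomial evaluator.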
Nonlinear optimization problems with constraints and/or objectives defined by signomials are harder to solve than those defined by only posynomials, because (unlike posynomials) signomials cannot necessarily be made convex by applying a logarithmic change of variables. Nevertheless, signomial optimization problems often provide a much more accurate mathematical representation of real-world nonlinear optimization problems. == See also == Posynomial Geometric programming == References == == External links == S. Boyd, S. J. Kim, L. Vandenberghe, and A. Hassibi, A Tutorial on Geometric Programming
Wikipedia:Sigurður Helgason (mathematician)#0
Sigurdur Helgason (Icelandic: Sigurður Helgason; 30 September 1927 – 3 December 2023) was an Icelandic mathematician whose research was devoted to the geometry and analysis on symmetric spaces. In particular, he used new integral geometric methods to establish fundamental existence theorems for differential equations on symmetric spaces as well as some new results on the representations of their isometry groups. He also introduced a Fourier transform on these spaces and proved the principal theorems for this transform, the inversion formula, the Plancherel theorem and the analog of the Paley–Wiener theorem. == Biography == Sigurdur Helgason was born in Akureyri, Iceland on 30 September 1927. In 1954, he earned a PhD from Princeton University under Salomon Bochner. Helgason became a professor of mathematics at the Massachusetts Institute of Technology in 1965, and he retired from the faculty in 2014. Helgason received the Børge Jessen Diploma Award of the Danish Mathematical Society in 1982, and the Grand Knight's Cross (Stórriddarakross) of the Icelandic Order of the Falcon in 1991. He was the winner of the 1988 Leroy P. Steele Prize for Seminal Contributions for his books Groups and Geometric Analysis and Differential Geometry, Lie Groups and Symmetric Spaces. This was followed by the 2008 book Geometric Analysis on Symmetric Spaces. On 31 May 1996, Helgason received an honorary doctorate from the Faculty of Science and Technology at Uppsala University, Sweden. Helgason was elected a member of the Icelandic Academy of Sciences in 1960, the American Academy of Arts and Sciences in 1970, and the Royal Danish Academy of Sciences and Letters in 1972. In 2013, he became a fellow of the American Mathematical Society. He was made an honorary member of the Icelandic Mathematical Society when he turned 70, and a symposium was held in his honor. Helgason died on 3 December 2023, at the age of 96. == Selected works == === Articles === Helgason, S. (1954). 
"The derived algebra of a Banach algebra". Proceedings of the National Academy of Sciences of the United States of America. 40 (10): 994–995. Bibcode:1954PNAS...40..994H. doi:10.1073/pnas.40.10.994. PMC 534208. PMID 16589593. Helgason, Sigurdur (1957). "Topologies of group algebras and a theorem of Littlewood". Transactions of the American Mathematical Society. 86 (2): 269–283. doi:10.1090/S0002-9947-1957-0095428-5. hdl:1721.1/26691. MR 0095428. Helgason, Sigurdur (1958). "Lacunary Fourier series on noncommutative groups". Proceedings of the American Mathematical Society. 9 (5): 782–790. doi:10.1090/S0002-9939-1958-0100234-5. MR 0100234. Helgason, Sigurdur (1958). "On Riemannian curvature of homogeneous spaces". Proceedings of the American Mathematical Society. 9 (6): 831–838. doi:10.1090/S0002-9939-1958-0108811-2. MR 0108811. Helgason, S. (1962). "Some results on invariant theory". Bulletin of the American Mathematical Society. 68 (4): 367–371. doi:10.1090/S0002-9904-1962-10812-X. hdl:1721.1/26682. MR 0166303. Helgason, S. (1963). "Fundamental solutions to invariant differential operators on symmetric spaces". Bulletin of the American Mathematical Society. 69 (6): 778–781. doi:10.1090/S0002-9904-1963-11029-0. hdl:1721.1/26683. MR 0156919. Helgason, S. (1963). "Duality and Radon transforms for symmetric spaces". Bulletin of the American Mathematical Society. 69 (6): 782–788. doi:10.1090/S0002-9904-1963-11030-7. hdl:1721.1/26684. MR 0158408. Helgason, Sigurdur (1964). "A duality in integral geometry; some generalizations of the Radon transform". Bulletin of the American Mathematical Society. 70 (4): 435–446. doi:10.1090/S0002-9904-1964-11147-2. hdl:1721.1/26685. MR 0166795. Helgason, S. (1965). "Radon–Fourier transforms on symmetric spaces and related group representations". Bulletin of the American Mathematical Society. 71 (5): 757–763. doi:10.1090/S0002-9904-1965-11380-5. hdl:1721.1/26686. MR 0179295. Helgason, Sigurdur; Korányi, Ádám (1968). 
"A Fatou-type theorem for harmonic functions on symmetric spaces". Bulletin of the American Mathematical Society. 74 (2): 258–263. doi:10.1090/S0002-9904-1968-11912-3. hdl:1721.1/26687. MR 0229179. Helgason, Sigurdur (1969). "Applications of the Radon transform to representations of semisimple Lie groups". Proceedings of the National Academy of Sciences of the United States of America. 63 (3): 643–647. Bibcode:1969PNAS...63..643H. doi:10.1073/pnas.63.3.643. PMC 223499. PMID 16591772. Helgason, Sigurdur (1973). "Paley-Wiener theorems and surjectivity of invariant differential operators on symmetric spaces and Lie groups". Bulletin of the American Mathematical Society. 79 (1): 129–132. doi:10.1090/S0002-9904-1973-13127-1. hdl:1721.1/26688. MR 0312158. Helgason, Sigurdur (1977). "Invariant differential equations and homogeneous manifolds" (PDF). Bulletin of the American Mathematical Society. 83 (5): 751–774. doi:10.1090/S0002-9904-1977-14317-6. MR 0445235. === Books === Differential geometry and symmetric spaces. Academic Press 1962, AMS 2001 Analysis on Lie groups and homogeneous spaces. AMS 1972 Differential geometry, Lie groups and symmetric spaces. Academic Press 1978, 7th edn. 1995 The Radon Transform. Birkhäuser, 1980, 2nd edn. 1999 Topics in harmonic analysis on homogeneous spaces. Birkhäuser 1981 Groups and geometric analysis: integral geometry, invariant differential operators and spherical functions. Academic Press 1984, AMS 1994 Geometric analysis on symmetric spaces. AMS 1994, 2nd. edn. 2008 == References == == Sources == "Curriculum vitae". Massachusetts Institute of Technology. 2005. "Sigurdur Helgason". Massachusetts Institute of Technology. 
== External links == Sigurdur Helgason at the Mathematics Genealogy Project Sigurdur Helgason – Publications – MIT Mathematics Gonzalez, Fulton; Ólafsson, Gestur; Anker, Jean-Philippe; Van Den Ban, Erik; Dadok, Jiri; Grinberg, Eric; Koranyi, Adam; Kuchment, Peter; Natterer, Frank; Ørsted, Bent; Orloff, Jeremy; Oshima, Toshio; Quinto, Todd; Rouvière, François; Rubin, Boris; Schlichtkrull, Henrik; Sigurðsson, Ragnar; Stanton, Robert J.; Vogan, David (November 2024). "Remembering Sigurður Helgason (1927–2023)" (PDF). Notices of the American Mathematical Society. 71 (10): 1349–1361. doi:10.1090/noti3049.
Wikipedia:Sigve Tjøtta#0
Sigve Tjøtta (1 March 1930 – 28 August 2023) was a Norwegian mathematician. == Early life == He was born in Klepp. He took the cand.real. degree in 1954 and the dr.philos. degree in 1960, both at the University of Oslo. His doctoral thesis was On Some Non-linear Effects in Sound Fields, with Special Emphasis on the Generation of Vorticity and the Formation of Streaming Patterns. He worked as a research assistant in Oslo from 1954 to 1956, 1957 to 1958 and 1959 to 1960. In between he studied at Brown University from 1956 to 1957 and at the Max Planck Institute. Among his advisors was Johan Peter Holtsmark. == Career == He was appointed docent at the University of Bergen in 1960, and was promoted to professor in 1963. He succeeded Oddvar Bjørgum, and had responsibility for the university's education in applied mathematics. His fields of research included plasma, nonlinear acoustics, hydroacoustics and acoustic streaming. He was also the dean of the Faculty of Mathematics and Natural Sciences from 1975 to 1977, and held positions in NAVF, NTVF, and in the national committee of the International Union of Theoretical and Applied Mechanics. He was also a visiting scholar at the University of Texas. == Personal life == Tjøtta married Jacqueline Naze, a colleague, in 1964—they barely survived a car crash sustained on their honeymoon. They subsequently did extensive research together. After his retirement from the professorship, the couple moved to Oslo. Tjøtta's most prominent hobby was long-distance running. He discovered his talent during a stay in the United States, where jogging was popular. He ran the marathon in 3:17.02 hours at the age of 66, and the half marathon in 1:43.09 hours at the age of 75. He died in August 2023, aged 93. == Honors == Together with his wife, Tjøtta won a prize in underwater acoustics from the French Academy of Sciences. He was a fellow of the Acoustical Society of America, and a member of the Norwegian Academy of Science and Letters. 
In 2002 he was decorated as a Knight, First Class of the Royal Norwegian Order of St. Olav. == References ==
Wikipedia:Silverman–Toeplitz theorem#0
In mathematics, the Silverman–Toeplitz theorem, first proved by Otto Toeplitz, is a result in series summability theory characterizing matrix summability methods that are regular. A regular matrix summability method is a linear sequence transformation that preserves the limits of convergent sequences. The linear sequence transformation can be applied to the divergent sequences of partial sums of divergent series to give those series generalized sums. An infinite matrix ( a i , j ) i , j ∈ N {\displaystyle (a_{i,j})_{i,j\in \mathbb {N} }} with complex-valued entries defines a regular matrix summability method if and only if it satisfies all of the following properties: lim i → ∞ a i , j = 0 j ∈ N (Every column sequence converges to 0.) lim i → ∞ ∑ j = 0 ∞ a i , j = 1 (The row sums converge to 1.) sup i ∑ j = 0 ∞ | a i , j | < ∞ (The absolute row sums are bounded.) {\displaystyle {\begin{aligned}&\lim _{i\to \infty }a_{i,j}=0\quad j\in \mathbb {N} &&{\text{(Every column sequence converges to 0.)}}\\[3pt]&\lim _{i\to \infty }\sum _{j=0}^{\infty }a_{i,j}=1&&{\text{(The row sums converge to 1.)}}\\[3pt]&\sup _{i}\sum _{j=0}^{\infty }\vert a_{i,j}\vert <\infty &&{\text{(The absolute row sums are bounded.)}}\end{aligned}}} An example is Cesàro summation, a matrix summability method with a m n = { 1 m n ≤ m 0 n > m = ( 1 0 0 0 0 ⋯ 1 2 1 2 0 0 0 ⋯ 1 3 1 3 1 3 0 0 ⋯ 1 4 1 4 1 4 1 4 0 ⋯ 1 5 1 5 1 5 1 5 1 5 ⋯ ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ) . 
{\displaystyle a_{mn}={\begin{cases}{\frac {1}{m}}&n\leq m\\0&n>m\end{cases}}={\begin{pmatrix}1&0&0&0&0&\cdots \\{\frac {1}{2}}&{\frac {1}{2}}&0&0&0&\cdots \\{\frac {1}{3}}&{\frac {1}{3}}&{\frac {1}{3}}&0&0&\cdots \\{\frac {1}{4}}&{\frac {1}{4}}&{\frac {1}{4}}&{\frac {1}{4}}&0&\cdots \\{\frac {1}{5}}&{\frac {1}{5}}&{\frac {1}{5}}&{\frac {1}{5}}&{\frac {1}{5}}&\cdots \\\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\\end{pmatrix}}.} == Formal statement == Let the aforementioned infinite matrix ( a i , j ) i , j ∈ N {\displaystyle (a_{i,j})_{i,j\in \mathbb {N} }} of complex elements satisfy the following conditions: lim i → ∞ a i , j = 0 {\displaystyle \lim _{i\to \infty }a_{i,j}=0} for every fixed j ∈ N {\displaystyle j\in \mathbb {N} } . sup i ∈ N ∑ j = 1 i | a i , j | < ∞ {\displaystyle \sup _{i\in \mathbb {N} }\sum _{j=1}^{i}\vert a_{i,j}\vert <\infty } ; and z n {\displaystyle z_{n}} be a sequence of complex numbers that converges to lim n → ∞ z n = z ∞ {\displaystyle \lim _{n\to \infty }z_{n}=z_{\infty }} . We denote S n {\displaystyle S_{n}} as the weighted sum sequence: S n = ∑ m = 1 n ( a n , m z n ) {\displaystyle S_{n}=\sum _{m=1}^{n}{\left(a_{n,m}z_{n}\right)}} . Then the following results hold: If lim n → ∞ z n = z ∞ = 0 {\displaystyle \lim _{n\to \infty }z_{n}=z_{\infty }=0} , then lim n → ∞ S n = 0 {\displaystyle \lim _{n\to \infty }{S_{n}}=0} . If lim n → ∞ z n = z ∞ ≠ 0 {\displaystyle \lim _{n\to \infty }z_{n}=z_{\infty }\neq 0} and lim i → ∞ ∑ j = 1 i a i , j = 1 {\displaystyle \lim _{i\to \infty }\sum _{j=1}^{i}a_{i,j}=1} , then lim n → ∞ S n = z ∞ {\displaystyle \lim _{n\to \infty }{S_{n}}=z_{\infty }} . == Proof == === Proving 1. 
=== For a fixed j ∈ N {\displaystyle j\in \mathbb {N} } the complex sequences z n {\displaystyle z_{n}} , S n {\displaystyle S_{n}} and a i , j {\displaystyle a_{i,j}} approach zero if and only if the real-valued sequences | z n | {\displaystyle \left|z_{n}\right|} , | S n | {\displaystyle \left|S_{n}\right|} and | a i , j | {\displaystyle \left|a_{i,j}\right|} approach zero, respectively. We also introduce M = sup i ∈ N ∑ j = 1 i | a i , j | > 0 {\displaystyle M=\sup _{i\in \mathbb {N} }\sum _{j=1}^{i}\vert a_{i,j}\vert >0} . Since | z n | → 0 {\displaystyle \left|z_{n}\right|\to 0} , for an arbitrarily chosen ε > 0 {\displaystyle \varepsilon >0} there exists N ε = N ε ( ε ) {\displaystyle N_{\varepsilon }=N_{\varepsilon }\left(\varepsilon \right)} such that for every n > N ε ( ε ) {\displaystyle n>N_{\varepsilon }\left(\varepsilon \right)} we have | z n | < ε 2 M {\displaystyle \left|z_{n}\right|<{\frac {\varepsilon }{2M}}} . Next, for some N a = N a ( ε ) > N ε ( ε ) {\displaystyle N_{a}=N_{a}\left(\varepsilon \right)>N_{\varepsilon }\left(\varepsilon \right)} it holds that | a n , m | < M N ε {\displaystyle \left|a_{n,m}\right|<{\frac {M}{N_{\varepsilon }}}} for every n > N a ( ε ) {\displaystyle n>N_{a}\left(\varepsilon \right)} and 1 ⩽ m ⩽ n {\displaystyle 1\leqslant m\leqslant n} . 
Therefore, for every n > N a ( ε ) {\displaystyle n>N_{a}\left(\varepsilon \right)} | S n | = | ∑ m = 1 n ( a n , m z n ) | ⩽ ∑ m = 1 n ( | a n , m | ⋅ | z n | ) = ∑ m = 1 N ε ( | a n , m | ⋅ | z n | ) + ∑ m = N ε n ( | a n , m | ⋅ | z n | ) < < N ε ⋅ M N ε ⋅ ε 2 M + ε 2 M ∑ m = N ε n | a n , m | ⩽ ε 2 + ε 2 M ∑ m = 1 n | a n , m | ⩽ ε 2 + ε 2 M ⋅ M = ε {\displaystyle {\begin{aligned}&\left|S_{n}\right|=\left|\sum _{m=1}^{n}\left(a_{n,m}z_{n}\right)\right|\leqslant \sum _{m=1}^{n}\left(\left|a_{n,m}\right|\cdot \left|z_{n}\right|\right)=\sum _{m=1}^{N_{\varepsilon }}\left(\left|a_{n,m}\right|\cdot \left|z_{n}\right|\right)+\sum _{m=N_{\varepsilon }}^{n}\left(\left|a_{n,m}\right|\cdot \left|z_{n}\right|\right)<\\&<N_{\varepsilon }\cdot {\frac {M}{N_{\varepsilon }}}\cdot {\frac {\varepsilon }{2M}}+{\frac {\varepsilon }{2M}}\sum _{m=N_{\varepsilon }}^{n}\left|a_{n,m}\right|\leqslant {\frac {\varepsilon }{2}}+{\frac {\varepsilon }{2M}}\sum _{m=1}^{n}\left|a_{n,m}\right|\leqslant {\frac {\varepsilon }{2}}+{\frac {\varepsilon }{2M}}\cdot M=\varepsilon \end{aligned}}} which means that both sequences | S n | {\displaystyle \left|S_{n}\right|} and S n {\displaystyle S_{n}} converge to zero. === Proving 2. === Note that lim n → ∞ ( z n − z ∞ ) = 0 {\displaystyle \lim _{n\to \infty }\left(z_{n}-z_{\infty }\right)=0} . Applying the already proven statement yields lim n → ∞ ∑ m = 1 n ( a n , m ( z n − z ∞ ) ) = 0 {\displaystyle \lim _{n\to \infty }\sum _{m=1}^{n}{\big (}a_{n,m}\left(z_{n}-z_{\infty }\right){\big )}=0} . 
Finally, lim n → ∞ S n = lim n → ∞ ∑ m = 1 n ( a n , m z n ) = lim n → ∞ ∑ m = 1 n ( a n , m ( z n − z ∞ ) ) + z ∞ lim n → ∞ ∑ m = 1 n ( a n , m ) = 0 + z ∞ ⋅ 1 = z ∞ {\displaystyle \lim _{n\to \infty }S_{n}=\lim _{n\to \infty }\sum _{m=1}^{n}{\big (}a_{n,m}z_{n}{\big )}=\lim _{n\to \infty }\sum _{m=1}^{n}{\big (}a_{n,m}\left(z_{n}-z_{\infty }\right){\big )}+z_{\infty }\lim _{n\to \infty }\sum _{m=1}^{n}{\big (}a_{n,m}{\big )}=0+z_{\infty }\cdot 1=z_{\infty }} , which completes the proof. == References == === Citations === === Further reading === Toeplitz, Otto (1911) "Über allgemeine lineare Mittelbildungen." Prace mat.-fiz., 22, 113–118 (the original paper in German) Silverman, Louis Lazarus (1913) "On the definition of the sum of a divergent series." University of Missouri Studies, Math. Series I, 1–96 Hardy, G. H. (1949), Divergent Series, Oxford: Clarendon Press, 43-48. Boos, Johann (2000). Classical and modern methods in summability. New York: Oxford University Press. ISBN 019850165X.
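The Cesàro matrix given earlier satisfies all three regularity conditions, and its behaviour can be checked numerically: applied to a convergent sequence it reproduces the limit, and applied to the partial sums of Grandi's series 1 − 1 + 1 − 1 + ... it yields the generalized sum 1/2. A minimal sketch in Python (the particular test sequences and tolerances are illustrative choices, not part of the theorem):

```python
# Cesàro summation as a regular matrix method: row i of the matrix has
# entries 1/(i+1) in its first i+1 columns, so the i-th transformed value
# is simply the average of the first i+1 terms of the input sequence.

def cesaro_transform(z):
    """Apply the Cesàro matrix to the finite prefix z, returning all running averages."""
    out = []
    total = 0.0
    for i, value in enumerate(z):
        total += value
        out.append(total / (i + 1))
    return out

# A convergent sequence: z_n = 1 + 1/(n+1) converges to 1.
z = [1 + 1 / (n + 1) for n in range(10000)]
averages = cesaro_transform(z)
print(abs(averages[-1] - 1) < 1e-2)  # the averages approach the same limit 1

# Partial sums of the divergent series 1 - 1 + 1 - 1 + ... are 1, 0, 1, 0, ...
# The regular method assigns this series the generalized sum 1/2.
partial_sums = [(n + 1) % 2 for n in range(10000)]
print(abs(cesaro_transform(partial_sums)[-1] - 0.5) < 1e-3)
```

Both checks print True: preserving the limit of every convergent sequence is exactly the regularity property the theorem characterizes.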
Wikipedia:Silvio Ballarin#0
Silvio Ballarin (1901 – 1969) was a Dalmatian Italian mathematician and university professor. He was born in Zara (today Zadar) in 1901, which at the time was still part of Austria-Hungary. He graduated in mathematics from the University of Bologna in 1924. Ballarin taught topography at the University of Pisa starting from 1950. He was principally interested in gravimetry and geodesy. Ballarin performed many gravimetric measurements in Italy. == References == == Bibliography == Gero Geri - Brunetto Palla. BALLARIN, Silvio. Dizionario Biografico degli Italiani - Volume 34 (1988). Attilio Selvini (2013). Appunti per una storia della topografia in Italia nel XX secolo. Maggioli Editore, 2013. pp. 22–25. ISBN 978-88-387-6211-6.
Wikipedia:Similarity invariance#0
In linear algebra, similarity invariance is a property exhibited by a function whose value is unchanged under similarities of its domain. That is, f {\displaystyle f} is invariant under similarities if f ( A ) = f ( B − 1 A B ) {\displaystyle f(A)=f(B^{-1}AB)} where B − 1 A B {\displaystyle B^{-1}AB} is a matrix similar to A. Examples of such functions include the trace, determinant, characteristic polynomial, and the minimal polynomial. A more colloquial phrase that means the same thing as similarity invariance is "basis independence", since a matrix can be regarded as a linear operator, written in a certain basis, and the same operator in a new basis is related to one in the old basis by the conjugation B − 1 A B {\displaystyle B^{-1}AB} , where B {\displaystyle B} is the transformation matrix to the new basis. == See also == Invariant (mathematics) Gauge invariance Trace diagram
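The invariance of the trace and determinant can be illustrated with a small exact computation; a sketch in Python (the particular 2×2 matrices A and B are arbitrary choices, with B invertible):

```python
from fractions import Fraction

def mat_mul(X, Y):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[X[i][0] * Y[0][j] + X[i][1] * Y[1][j] for j in range(2)]
            for i in range(2)]

def mat_inv(X):
    """Exact inverse of a 2x2 matrix with nonzero determinant."""
    d = Fraction(1, X[0][0] * X[1][1] - X[0][1] * X[1][0])
    return [[ X[1][1] * d, -X[0][1] * d],
            [-X[1][0] * d,  X[0][0] * d]]

def trace(X):
    return X[0][0] + X[1][1]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[2, 1], [0, 3]]                    # the linear operator
B = [[1, 1], [1, 2]]                    # change-of-basis matrix
C = mat_mul(mat_inv(B), mat_mul(A, B))  # B^-1 A B, a matrix similar to A

print(trace(C) == trace(A))  # True: trace is similarity-invariant
print(det(C) == det(A))      # True: determinant is similarity-invariant
```

Here C = [[3, 2], [0, 2]] differs from A entry by entry, yet trace (5) and determinant (6) agree, as they must for any similarity-invariant function.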
Wikipedia:Simion Stoilow#0
Simion Stoilow or Stoilov (14 September [O.S. 2 September] 1887 – 4 April 1961) was a Romanian mathematician, creator of the Romanian school of complex analysis, and author of over 100 publications. == Biography == He was born in Bucharest, and grew up in Craiova. His father, Colonel Simion Stoilow, fought at the Battle of Smârdan in the Romanian War of Independence. After studying at the Obedeanu elementary school and the Carol I High School, Stoilow went in 1907 to the University of Paris, where he earned a B.S. degree in 1910 and a Ph.D. in Mathematics in 1916. His doctoral dissertation was written under the direction of Émile Picard. He returned to Romania in 1916 to fight in the Romanian Campaign of World War I, first in Dobrudja, then in Moldavia. After the war, he became professor of mathematics at the University of Iași (1919–1921) and the University of Cernăuți (1921–1939). He was an Invited Speaker of the International Congress of Mathematicians in 1920 at Strasbourg, in 1928 at Bologna, and in 1936 at Oslo. In 1928 he was awarded the Legion of Honour, Officer rank. In 1939 he moved to Bucharest, working first at the Polytechnic University of Bucharest, and from 1941 at the University of Bucharest, serving as rector from 1944 to 1946 and as dean of the Faculties of Mathematics and Physics from 1948 to 1951. From 1946 to 1948, he served as Romanian ambassador to France. In 1946 he was a member of the Romanian delegation at the Paris Peace Conference, headed by Gheorghe Tătărescu. In July 1947 he organized at Club de Chaillot the exhibit "L'art français au secours des enfants roumains"; Constantin Brâncuși participated, Tristan Tzara and Jean Cassou wrote the preface to the catalogue. In 1946 he was awarded the Order of the Star of Romania, Grand Officer rank and in 1948, the Order of the Star of the Romanian People's Republic, Second class. 
Stoilow was elected corresponding member of the Romanian Academy in 1936, and full member in 1945, and later became president of the Physics and Mathematics section of the Academy. In 1949 he was the founding director of the Institute of Mathematics of the Romanian Academy, serving in that capacity until he died. Among his students at the University of Bucharest and at the Institute were Cabiria Andreian Cazacu, Romulus Cristescu, Martin Jurchescu, Ionel Bucur, and Aristide Deleanu, as well as Nicolae Boboc, Corneliu Constantinescu, and Aurel Cornea. Some of the first Romanian topologists who obtained their candidate’s theses were Stoilow’s students Tudor Ganea, Israel Berstein, Aristide Deleanu, Valentin Poénaru, and Kostake Teleman. In 1952, Stoilow was awarded the Order of the Star of the Romanian People's Republic, First class. Stoilow died in Bucharest in 1961 of a brain stroke. He was cremated at the Cenușa crematorium. Prior to the Romanian Revolution of 1989, his funeral urn was maintained in a crypt at the Carol Park Mausoleum. == Legacy == The Institute of Mathematics of the Romanian Academy (closed in 1975 by a decree of Nicolae Ceaușescu, reopened in the immediate aftermath of the 1989 Revolution), is now named after him. The Simion Stoilow Prize is awarded every year by the Romanian Academy. == Work == Siméon Stoilow (1916). Sur une classe de fonctions de deux variables définies par les équations linéaires aux dérivées partielles (PDF) (Thesis). Vol. VI u. 84 S. 4. Paris: Gauthier-Villars. JFM 46.1481.03. Siméon Stoïlow (1919). "Sur les singularités mobiles des intégrales des équations linéaires aux dérivées partielles et sur leur intégrale générale" (PDF). Annales Scientifiques de l'École Normale Supérieure. (3). 36: 235–262. doi:10.24033/asens.717. JFM 47.0946.01. Simion Stoïlow, "Leçons sur les principes topologiques de la théorie des fonctions analytiques", Gauthier-Villars, Paris, 1956. 
MR0082545 Simion Stoïlow, "Œuvre mathématique", Éditions de l'Académie de la République Populaire Roumaine, Bucharest, 1964. MR0168435 == References == Cabiria Andreian Cazacu, "Sur l'œuvre mathématique de Simion Stoïlow", pp. 8–21, Lecture Notes in Mathematics, vol. 1013, Springer-Verlag, Berlin, 1983. MR0738079 "Analysis and Topology: A Volume Dedicated to the Memory of S. Stoilow", edited by Cabiria Andreian Cazacu, Olli Lehto, and Themistocles M. Rassias, World Scientific Publishers, 1998. ISBN 981-02-2761-2 == External links == (in Romanian) Short biography (in Romanian) Short biography, at Obedeanu school
Wikipedia:Simon Plouffe#0
Simon Plouffe (born June 11, 1956) is a Canadian mathematician who, in 1995, discovered the Bailey–Borwein–Plouffe formula (BBP algorithm), which permits the computation of the nth binary digit of π. In 2022 he found a formula that allows the extraction of the nth decimal digit of π. He was born in Saint-Jovite, Quebec. Later in 1995, he co-authored The Encyclopedia of Integer Sequences, which became the website On-Line Encyclopedia of Integer Sequences. In 1975, Plouffe broke the world record for memorizing digits of π by reciting 4096 digits, a record which stood until 1977. == See also == Fabrice Bellard, who in 1997 discovered a faster formula for computing the nth binary digit of π. PiHex == Notes == == External links == Works by Simon Plouffe at Project Gutenberg Works by or about Simon Plouffe at the Internet Archive Plouffe website (in French) Simon Plouffe at the Mathematics Genealogy Project N. J. A. Sloane and S. Plouffe, The Encyclopedia of Integer Sequences, Academic Press, San Diego, 1995, 587 pp. ISBN 0-12-558630-2.
Wikipedia:Simon von Stampfer#0
Simon Ritter von Stampfer (26 October 1792 (according to other sources 1790), Windisch-Mattrai, Archbishopric of Salzburg, today called Matrei in Osttirol, Tyrol – 10 November 1864, Vienna) was an Austrian mathematician, surveyor and inventor. His most famous invention is the stroboscopic disc, which has a claim to be the first device to show moving images. Almost simultaneously, a similar device, the phenakistiscope, was developed in Belgium. == Life == === Youth and education === Simon Ritter von Stampfer was born in Matrei in Osttirol as the first son of Bartlmä Stampfer, a weaver. From 1801 he attended the local school, and in 1804 he moved to the Franciscan Gymnasium in Lienz, where he studied until 1807. From there he went to the Lyceum in Salzburg to study philosophy; however, he was not formally assessed. In 1814, in Munich, he passed the state examination and applied there as a teacher. He chose, however, to stay in Salzburg, where he was an assistant teacher in mathematics, natural history, physics and Greek at the high school. He then moved to the Lyceum, where he taught elementary mathematics, physics and applied mathematics. In 1819 he was also appointed a professor. In his spare time he made geodetic measurements, astronomical observations, experiments on the propagation speed of sound at different heights, and measurements using the barometer. Stampfer was often to be seen in the Benedictine Monastery of Kremsmünster, which had numerous pieces of astronomical equipment available. In 1822, von Stampfer married Johanna Wagner. They had a daughter in 1824 (Maria Aloysia Johanna) and a son in 1825 (Anton Josef Simon). === First scientific and teaching work === After several unsuccessful applications, including one in Innsbruck, Stampfer was finally promoted to full professor of pure mathematics in Salzburg. He was subsequently appointed to the Chair of Practical Geometry at the Polytechnic Institute in Vienna. 
He settled there in December 1825 to replace Franz Josef von Gerstner. He now taught practical geometry, but was also employed as a physicist and astronomer. He produced a method for the computation of solar eclipses. In his astronomical work he was concerned with lenses, their accuracy and their distortion, which led him to the field of optical illusions. In 1828, he developed test methods for telescopes and measurement methods to determine the radius of curvature ("Krümmungshalbmesser") of lenses and the refractive and dispersive properties of the glass. For his work on the theoretical foundations of the production of high-quality optics, he turned to the achromatic Fraunhofer lens. His most famous students include Christian Doppler, known for his work on the Doppler effect. === Development of "stroboscopic discs" === In 1832, Stampfer became aware, through the Journal of Physics and Mathematics, of experiments by the British physicist Michael Faraday on the optical illusion caused by rapidly rotating gears, in which the human eye could not follow the movement of the gear. He was so impressed that he conducted similar experiments with intermittent views through the openings between the teeth of slotted cardboard wheels. From these experiments he eventually developed his Stroboscopische Scheiben (optische Zauberscheiben) (stroboscopic discs, or optical magic discs, or simply the stroboscope), coining the term as a combination of the Ancient Greek words στρόβος (strobos), meaning "whirling", and σκοπεῖν (skopein), meaning "to look at". In a pamphlet published in July 1833, Stampfer mentioned that the sequence of images could be placed on a disc, on a cylinder (much like the zoetrope, introduced in 1866), or, for longer scenes, on a looped strip of paper or canvas stretched around two parallel rollers (somewhat similar to film on spools). 
A disc with pictures could be viewed through a slotted disc on the other side of an axis, but Stampfer found it simpler to spin a single disc carrying both slots and pictures in front of a mirror. He also suggested covering up the view of all but one of the moving figures with a cut-out sheet of cardboard and painting theatrical coulisses and backdrops around the cut-out part (somewhat similar to the later Praxinoscope-Theatre). The patent for the invention also mentions the option of transparent versions. Stampfer and lithographer Mathias Trentsensky chose to publish the invention in the shape of a disc to be viewed in a mirror. Belgian scientist Joseph Antoine Ferdinand Plateau had been developing a very similar device for some time and finally published a description of what would later be named the Fantascope or Phénakisticope in January 1833 in a Belgian scientific periodical, illustrated with a plate of the device. Plateau mentioned in 1836 that he thought it difficult to state the exact time when he got the idea, but he believed he was first able to successfully assemble his invention in December. He said he trusted Stampfer's assertion to have started his experiments at the same time, which soon resulted in the discovery of the stroboscopic animation principle. Both Stampfer and Plateau have a claim to be the founding father of cinema; most often cited with this honour, however, is Plateau. Stampfer received the imperial privilege No. 1920 for his invention on 7 May 1833: No. 1920. S. Stampfer, professor at the Imperial Polytechnic Institute in Vienna (Wieden, No. 
64), and Mathias Trentsensky; for the invention of drawing figures, coloured shapes and images of any kind, according to mathematical and physical laws, in such a way that, when they are passed before the eye with due speed by some mechanism while the line of sight is repeatedly interrupted, they present to the eye varied optical illusions of connected movements and actions. The simplest way is to draw these images on discs of cardboard or any other suitable material, around whose periphery viewing slots are cut. When such a disc is spun rapidly about its axis opposite a mirror, the eye looking through the slots plainly sees the lively pictures in the mirror, and in this way not only machine movements of any kind, such as wheels and hammer-works, rolling carts and rising balloons, but also the various actions and movements of people and animals can be depicted in a surprising manner. Compound actions, such as theatrical scenes or workshops in operation, can likewise be represented on the same principles by other mechanical devices, using either transparent or ordinarily drawn pictures. Valid for two years, from 7 May. (Jb. Polytechn. Inst., Vol. 19, 406f., cited in [1]) The device was developed by the Viennese art dealers Trentsensky & Vieweg and commercially marketed. The first edition was published in May 1833 and was soon sold out, so that in July a second, improved edition appeared. His "stroboscopic discs" became known outside of Austria, and it was from this that the term "stroboscopic effect" arose. == References == == Sources == Franz Allmer: Simon Stampfer 1790–1864. Picture of a life. In: Communications of the Geodetic Institute of the Technical University of Graz, No. 82, Graz 1996 William Formann: Austrian pioneers of cinematography. Bergland Verlag, Wien 1966, pp. 10–18 Peter Schuster and Christian Strasser: Simon Stampfer 1790–1864. From the magic disc to film (publication series of the press office, Special Publications Series No. 
142), Salzburg 1998 == External links == Simon Stampfer's stroboscopic discs at the Academic Gymnasium Salzburg Simon von Stampfer (GND 119533049) in the German National Library catalogue Simon Stampfer: scholar scientist inventor Simon Stampfer's stroboscopic discs (Object of the month from the Museum of Sternwarte Kremsmünster, August 2001) Simon von Stampfer in Austria-Forum (in German) Introduction to Animation (by Sandro Corsaro, 2003; PDF file, 112 KB)
Wikipedia:Simone Gutt#0
Simone Gutt (born 1956) is a Belgian mathematician specializing in differential geometry. She is a professor of mathematics at the Université libre de Bruxelles. == Education and career == Gutt was born on 13 July 1956 in Uccle, near Brussels. She completed her doctorate in 1980 at the Université libre de Bruxelles; her dissertation, Déformations formelles de l'algèbre des fonctions différentiables sur une variété symplectique, was jointly supervised by Michel Cahen and Moshé Flato. She was a researcher for the National Fund for Scientific Research from 1981 until 1991, and became a professor at the Université libre de Bruxelles in 1992. == Recognition == Gutt was the 1998 winner of the quadrennial Francois Deruyts Prize in geometry of the Royal Academies for Science and the Arts of Belgium. She was elected to the Royal Academy of Science, Letters and Fine Arts of Belgium in 2004. == References == == External links == Home page Simone Gutt publications indexed by Google Scholar
Wikipedia:Simple (abstract algebra)#0
In mathematics, the term simple is used to describe an algebraic structure which in some sense cannot be divided by a smaller structure of the same type. Put another way, an algebraic structure is simple if the kernel of every homomorphism is either the whole structure or a single element. Some examples are: A group is called a simple group if it does not contain a nontrivial proper normal subgroup. A ring is called a simple ring if it does not contain a nontrivial two-sided ideal. A module is called a simple module if it does not contain a nontrivial submodule. An algebra is called a simple algebra if it does not contain a nontrivial two-sided ideal. The general pattern is that the structure admits no nontrivial congruence relations. The term is used differently in semigroup theory. A semigroup is said to be simple if it has no nontrivial ideals, or equivalently, if Green's relation J is the universal relation. Not every congruence on a semigroup is associated with an ideal, so a simple semigroup may have nontrivial congruences. A semigroup with no nontrivial congruences is called congruence simple. == See also == Semisimple Simple algebra (universal algebra)
Wikipedia:Simple continued fraction#0
A simple or regular continued fraction is a continued fraction with numerators all equal one, and denominators built from a sequence { a i } {\displaystyle \{a_{i}\}} of integer numbers. The sequence can be finite or infinite, resulting in a finite (or terminated) continued fraction like a 0 + 1 a 1 + 1 a 2 + 1 ⋱ + 1 a n {\displaystyle a_{0}+{\cfrac {1}{a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{\ddots +{\cfrac {1}{a_{n}}}}}}}}}} or an infinite continued fraction like a 0 + 1 a 1 + 1 a 2 + 1 ⋱ {\displaystyle a_{0}+{\cfrac {1}{a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{\ddots }}}}}}} Typically, such a continued fraction is obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing this other number as the sum of its integer part and another reciprocal, and so on. In the finite case, the iteration/recursion is stopped after finitely many steps by using an integer in lieu of another continued fraction. In contrast, an infinite continued fraction is an infinite expression. In either case, all integers in the sequence, other than the first, must be positive. The integers a i {\displaystyle a_{i}} are called the coefficients or terms of the continued fraction. Simple continued fractions have a number of remarkable properties related to the Euclidean algorithm for integers or real numbers. Every rational number ⁠ p {\displaystyle p} / q {\displaystyle q} ⁠ has two closely related expressions as a finite continued fraction, whose coefficients ai can be determined by applying the Euclidean algorithm to ( p , q ) {\displaystyle (p,q)} . The numerical value of an infinite continued fraction is irrational; it is defined from its infinite sequence of integers as the limit of a sequence of values for finite continued fractions. Each finite continued fraction of the sequence is obtained by using a finite prefix of the infinite continued fraction's defining sequence of integers. 
Moreover, every irrational number α {\displaystyle \alpha } is the value of a unique infinite regular continued fraction, whose coefficients can be found using the non-terminating version of the Euclidean algorithm applied to the incommensurable values α {\displaystyle \alpha } and 1. This way of expressing real numbers (rational and irrational) is called their continued fraction representation. == Motivation and notation == Consider, for example, the rational number 415/93, which is around 4.4624. As a first approximation, start with 4, which is the integer part; 415/93 = 4 + 43/93. The fractional part is the reciprocal of 93/43, which is about 2.1628. Use the integer part, 2, as an approximation for the reciprocal to obtain a second approximation of 4 + 1/2 = 4.5. Now, 93/43 = 2 + 7/43; the remaining fractional part, 7/43, is the reciprocal of 43/7, and 43/7 is around 6.1429. Use 6 as an approximation for this to obtain 2 + 1/6 as an approximation for 93/43 and 4 + 1/(2 + 1/6), about 4.4615, as the third approximation. Further, 43/7 = 6 + 1/7. Finally, the fractional part, 1/7, is the reciprocal of 7, so its approximation in this scheme, 7, is exact (7/1 = 7 + 0/1) and produces the exact expression 4 + 1 2 + 1 6 + 1 7 {\displaystyle 4+{\cfrac {1}{2+{\cfrac {1}{6+{\cfrac {1}{7}}}}}}} for 415/93. That expression is called the continued fraction representation of 415/93. This can be represented by the abbreviated notation 415/93 = [4; 2, 6, 7]. It is customary to place a semicolon after the first number to indicate that it is the whole part. Some older textbooks use all commas in the (n + 1)-tuple, for example, [4, 2, 6, 7]. If the starting number is rational, then this process exactly parallels the Euclidean algorithm applied to the numerator and denominator of the number. In particular, it must terminate and produce a finite continued fraction representation of the number. 
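The parallel with the Euclidean algorithm can be made concrete: each quotient produced by integer division is the next coefficient. A minimal Python sketch (the function name continued_fraction is an illustrative choice):

```python
def continued_fraction(p, q):
    """Continued fraction coefficients of p/q, computed by the Euclidean algorithm."""
    coefficients = []
    while q != 0:
        a, remainder = divmod(p, q)  # p/q = a + remainder/q
        coefficients.append(a)
        p, q = q, remainder          # continue with the reciprocal q/remainder
    return coefficients

print(continued_fraction(415, 93))  # [4, 2, 6, 7], i.e. 415/93 = [4; 2, 6, 7]
```

The successive quotients 4, 2, 6, 7 are exactly the ones produced by running the Euclidean algorithm on (415, 93).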
The sequence of integers that occur in this representation is the sequence of successive quotients computed by the Euclidean algorithm. If the starting number is irrational, then the process continues indefinitely. This produces a sequence of approximations, all of which are rational numbers, and these converge to the starting number as a limit. This is the (infinite) continued fraction representation of the number. Examples of continued fraction representations of irrational numbers are: √19 = [4;2,1,3,1,2,8,2,1,3,1,2,8,...] (sequence A010124 in the OEIS). The pattern repeats indefinitely with a period of 6. e = [2;1,2,1,1,4,1,1,6,1,1,8,...] (sequence A003417 in the OEIS). The pattern repeats indefinitely with a period of 3 except that 2 is added to one of the terms in each cycle. π = [3;7,15,1,292,1,1,1,2,1,3,1,...] (sequence A001203 in the OEIS). No pattern has ever been found in this representation. φ = [1;1,1,1,1,1,1,1,1,1,1,1,...] (sequence A000012 in the OEIS). The golden ratio, the irrational number that is the "most difficult" to approximate rationally (see § A property of the golden ratio φ below). γ = [0;1,1,2,1,2,1,4,3,13,5,1,...] (sequence A002852 in the OEIS). The Euler–Mascheroni constant, which is expected but not known to be irrational, and whose continued fraction has no apparent pattern. Continued fractions are, in some ways, more "mathematically natural" representations of a real number than other representations such as decimal representations, and they have several desirable properties: The continued fraction representation for a real number is finite if and only if it is a rational number. In contrast, the decimal representation of a rational number may be finite, for example ⁠137/1600⁠ = 0.085625, or infinite with a repeating cycle, for example ⁠4/27⁠ = 0.148148148148... Every rational number has an essentially unique simple continued fraction representation. Each rational can be represented in exactly two ways, since [a0;a1,... 
an−1,an] = [a0;a1,... an−1,(an−1),1]. Usually the first, shorter one is chosen as the canonical representation. The simple continued fraction representation of an irrational number is unique. (However, additional representations are possible when using generalized continued fractions; see below.) The real numbers whose continued fraction eventually repeats are precisely the quadratic irrationals. For example, the repeating continued fraction [1;1,1,1,...] is the golden ratio, and the repeating continued fraction [1;2,2,2,...] is the square root of 2. In contrast, the decimal representations of quadratic irrationals are apparently random. The square roots of all (positive) integers that are not perfect squares are quadratic irrationals, and hence have unique periodic continued fraction representations. The successive approximations generated in finding the continued fraction representation of a number, that is, by truncating the continued fraction representation, are in a certain sense (described below) the "best possible". == Formulation == A continued fraction in canonical form is an expression of the form a 0 + 1 a 1 + 1 a 2 + 1 a 3 + 1 1 ⋱ {\displaystyle a_{0}+{\cfrac {1}{a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\vphantom {\cfrac {1}{1}}}{_{\ddots }}}}}}}}} where the ai are integers, called the coefficients or terms of the continued fraction. When the expression contains finitely many terms, it is called a finite continued fraction. When the expression contains infinitely many terms, it is called an infinite continued fraction. When the terms eventually repeat from some point onwards, the continued fraction is called periodic. 
For simple continued fractions of the form r = a 0 + 1 a 1 + 1 a 2 + 1 a 3 + 1 1 ⋱ {\displaystyle r=a_{0}+{\cfrac {1}{a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\vphantom {\cfrac {1}{1}}}{_{\ddots }}}}}}}}} the a n {\displaystyle a_{n}} term can be calculated from the following recursive sequence: f n + 1 = 1 f n − ⌊ f n ⌋ {\displaystyle f_{n+1}={\frac {1}{f_{n}-\lfloor f_{n}\rfloor }}} where f 0 = r {\displaystyle f_{0}=r} and a n = ⌊ f n ⌋ {\displaystyle a_{n}=\left\lfloor f_{n}\right\rfloor } ; the sequence stops when f n = ⌊ f n ⌋ {\displaystyle f_{n}=\lfloor f_{n}\rfloor } , that is, when f n {\displaystyle f_{n}} is an integer. === Notations === Consider a continued fraction expressed as x = a 0 + 1 a 1 + 1 a 2 + 1 a 3 + 1 a 4 {\displaystyle x=a_{0}+{\cfrac {1}{a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\cfrac {1}{a_{4}}}}}}}}}} Because such a continued fraction expression may take a significant amount of vertical space, a number of methods have been tried to shrink it. 
Gottfried Leibniz sometimes used the notation x = a 0 + 1 a 1 + 1 a 2 + 1 a 3 + 1 a 4 , {\displaystyle {\begin{aligned}x=a_{0}+{\dfrac {1}{a_{1}}}{{} \atop +}\\[28mu]\ \end{aligned}}\!{\begin{aligned}{\dfrac {1}{a_{2}}}{{} \atop +}\\[2mu]\ \end{aligned}}\!{\begin{aligned}{\dfrac {1}{a_{3}}}{{} \atop +}\end{aligned}}\!{\begin{aligned}\\[2mu]{\dfrac {1}{a_{4}}},\end{aligned}}} and later the same idea was taken even further with the nested fraction bars drawn aligned, for example by Alfred Pringsheim as x = a 0 + | 1 a 1 | + | 1 a 2 | + | 1 a 3 | + | 1 a 4 | , {\displaystyle x=a_{0}+{{} \atop {{\big |}\!}}\!{\frac {1}{\,a_{1}}}\!{{\!{\big |}} \atop {}}+{{} \atop {{\big |}\!}}\!{\frac {1}{\,a_{2}}}\!{{\!{\big |}} \atop {}}+{{} \atop {{\big |}\!}}\!{\frac {1}{\,a_{3}}}\!{{\!{\big |}} \atop {}}+{{} \atop {{\big |}\!}}\!{\frac {1}{\,a_{4}}}\!{{\!{\big |}} \atop {}},} or in more common related notations as x = a 0 + 1 a 1 + 1 a 2 + 1 a 3 + 1 a 4 {\displaystyle x=a_{0}+{1 \over a_{1}+}\,{1 \over a_{2}+}\,{1 \over a_{3}+}\,{1 \over a_{4}}} or x = a 0 + 1 a 1 + 1 a 2 + 1 a 3 + 1 a 4 . {\displaystyle x=a_{0}+{1 \over a_{1}}{{} \atop +}{1 \over a_{2}}{{} \atop +}{1 \over a_{3}}{{} \atop +}{1 \over a_{4}}.} Carl Friedrich Gauss used a notation reminiscent of summation notation, x = a 0 + K 4 i = 1 1 a i , {\displaystyle x=a_{0}+{\underset {i=1}{\overset {4}{\mathrm {K} }}}~{\frac {1}{a_{i}}},} or in cases where the numerator is always 1, eliminated the fraction bars altogether, writing a list-style x = [ a 0 ; a 1 , a 2 , a 3 , a 4 ] . {\displaystyle x=[a_{0};a_{1},a_{2},a_{3},a_{4}].} Sometimes list-style notation uses angle brackets instead, x = ⟨ a 0 ; a 1 , a 2 , a 3 , a 4 ⟩ . {\displaystyle x=\left\langle a_{0};a_{1},a_{2},a_{3},a_{4}\right\rangle .} The semicolon in the square and angle bracket notations is sometimes replaced by a comma. 
One may also define infinite simple continued fractions as limits: [ a 0 ; a 1 , a 2 , a 3 , … ] = lim n → ∞ [ a 0 ; a 1 , a 2 , … , a n ] . {\displaystyle [a_{0};a_{1},a_{2},a_{3},\,\ldots \,]=\lim _{n\to \infty }\,[a_{0};a_{1},a_{2},\,\ldots ,a_{n}].} This limit exists for any choice of a 0 {\displaystyle a_{0}} and positive integers a 1 , a 2 , … {\displaystyle a_{1},a_{2},\ldots } . == Calculating continued fraction representations == Consider a real number ⁠ r {\displaystyle r} ⁠. Let i = ⌊ r ⌋ {\displaystyle i=\lfloor r\rfloor } and let ⁠ f = r − i {\displaystyle f=r-i} ⁠. When ⁠ f ≠ 0 {\displaystyle f\neq 0} ⁠, the continued fraction representation of r {\displaystyle r} is ⁠ [ i ; a 1 , a 2 , … ] {\displaystyle [i;a_{1},a_{2},\ldots ]} ⁠, where [ a 1 ; a 2 , … ] {\displaystyle [a_{1};a_{2},\ldots ]} is the continued fraction representation of ⁠ 1 / f {\displaystyle 1/f} ⁠. When ⁠ r ≥ 0 {\displaystyle r\geq 0} ⁠, then i {\displaystyle i} is the integer part of r {\displaystyle r} , and f {\displaystyle f} is the fractional part of ⁠ r {\displaystyle r} ⁠. In order to calculate a continued fraction representation of a number r {\displaystyle r} , write down the floor of r {\displaystyle r} . Subtract this value from r {\displaystyle r} . If the difference is 0, stop; otherwise find the reciprocal of the difference and repeat. The procedure will halt if and only if r {\displaystyle r} is rational. This process can be efficiently implemented using the Euclidean algorithm when the number is rational. The table below shows an implementation of this procedure for the number ⁠ 3.245 = 649 / 200 {\displaystyle 3.245=649/200} ⁠: The continued fraction for ⁠ 3.245 {\displaystyle 3.245} ⁠ is thus [ 3 ; 4 , 12 , 4 ] , {\displaystyle [3;4,12,4],} or, expanded: 649 200 = 3 + 1 4 + 1 12 + 1 4 . 
{\displaystyle {\frac {649}{200}}=3+{\cfrac {1}{4+{\cfrac {1}{12+{\cfrac {1}{4}}}}}}.} == Reciprocals == The continued fraction representations of a positive rational number and its reciprocal are identical except for a shift one place left or right depending on whether the number is less than or greater than one respectively. In other words, the numbers represented by [ a 0 ; a 1 , a 2 , … , a n ] {\displaystyle [a_{0};a_{1},a_{2},\ldots ,a_{n}]} and [ 0 ; a 0 , a 1 , … , a n ] {\displaystyle [0;a_{0},a_{1},\ldots ,a_{n}]} are reciprocals. For instance if a {\displaystyle a} is an integer and x < 1 {\displaystyle x<1} then x = 0 + 1 a + 1 b {\displaystyle x=0+{\frac {1}{a+{\frac {1}{b}}}}} and 1 x = a + 1 b {\displaystyle {\frac {1}{x}}=a+{\frac {1}{b}}} . If x > 1 {\displaystyle x>1} then x = a + 1 b {\displaystyle x=a+{\frac {1}{b}}} and 1 x = 0 + 1 a + 1 b {\displaystyle {\frac {1}{x}}=0+{\frac {1}{a+{\frac {1}{b}}}}} . The last number that generates the remainder of the continued fraction is the same for both x {\displaystyle x} and its reciprocal. For example, 2.25 = 9 4 = [ 2 ; 4 ] {\displaystyle 2.25={\frac {9}{4}}=[2;4]} and 1 2.25 = 4 9 = [ 0 ; 2 , 4 ] {\displaystyle {\frac {1}{2.25}}={\frac {4}{9}}=[0;2,4]} . == Finite continued fractions == Every finite continued fraction represents a rational number, and every rational number can be represented in precisely two different ways as a finite continued fraction, with the conditions that the first coefficient is an integer and the other coefficients are positive integers. These two representations agree except in their final terms. In the longer representation the final term in the continued fraction is 1; the shorter representation drops the final 1, but increases the new final term by 1. The final element in the short representation is therefore always greater than 1, if present. In symbols: [a0; a1, a2, ..., an − 1, an, 1] = [a0; a1, a2, ..., an − 1, an + 1]. [a0; 1] = [a0 + 1]. 
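The floor, subtract, reciprocate procedure described above, together with the reciprocal shift, can be sketched with exact rational arithmetic (an illustrative sketch; the function name is ours, not part of the article):

```python
from fractions import Fraction
from math import floor

def continued_fraction(r: Fraction) -> list[int]:
    """Simple continued fraction of a rational: write down the floor,
    subtract it, and repeat on the reciprocal of the difference."""
    terms = []
    while True:
        i = floor(r)
        terms.append(i)
        f = r - i
        if f == 0:        # rational inputs always terminate here
            return terms
        r = 1 / f         # Fraction division stays exact

print(continued_fraction(Fraction(649, 200)))   # [3, 4, 12, 4]
print(continued_fraction(Fraction(200, 649)))   # [0, 3, 4, 12, 4]
```

The second call illustrates the reciprocal property: the expansion of 200/649 is that of 649/200 shifted one place, since 649/200 > 1.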
== Infinite continued fractions and convergents == Every infinite continued fraction is irrational, and every irrational number can be represented in precisely one way as an infinite continued fraction. An infinite continued fraction representation for an irrational number is useful because its initial segments provide rational approximations to the number. These rational numbers are called the convergents of the continued fraction. The larger a term is in the continued fraction, the closer the corresponding convergent is to the irrational number being approximated. Numbers like π have occasional large terms in their continued fraction, which makes them easy to approximate with rational numbers. Other numbers like e have only small terms early in their continued fraction, which makes them more difficult to approximate rationally. The golden ratio φ has terms equal to 1 everywhere—the smallest values possible—which makes φ the most difficult number to approximate rationally. In this sense, therefore, it is the "most irrational" of all irrational numbers. Even-numbered convergents are smaller than the original number, while odd-numbered ones are larger. For a continued fraction [a0; a1, a2, ...], the first four convergents (numbered 0 through 3) are a 0 1 , a 1 a 0 + 1 a 1 , a 2 ( a 1 a 0 + 1 ) + a 0 a 2 a 1 + 1 , a 3 ( a 2 ( a 1 a 0 + 1 ) + a 0 ) + ( a 1 a 0 + 1 ) a 3 ( a 2 a 1 + 1 ) + a 1 . {\displaystyle {\frac {a_{0}}{1}},\,{\frac {a_{1}a_{0}+1}{a_{1}}},\,{\frac {a_{2}(a_{1}a_{0}+1)+a_{0}}{a_{2}a_{1}+1}},\,{\frac {a_{3}{\bigl (}a_{2}(a_{1}a_{0}+1)+a_{0}{\bigr )}+(a_{1}a_{0}+1)}{a_{3}(a_{2}a_{1}+1)+a_{1}}}.} The numerator of the third convergent is formed by multiplying the numerator of the second convergent by the third coefficient, and adding the numerator of the first convergent. The denominators are formed similarly. 
Therefore, each convergent can be expressed explicitly in terms of the continued fraction as the ratio of certain multivariate polynomials called continuants. If successive convergents are found, with numerators h1, h2, ... and denominators k1, k2, ... then the relevant recursive relation is that of Gaussian brackets: h n = a n h n − 1 + h n − 2 , k n = a n k n − 1 + k n − 2 . {\displaystyle {\begin{aligned}h_{n}&=a_{n}h_{n-1}+h_{n-2},\\[3mu]k_{n}&=a_{n}k_{n-1}+k_{n-2}.\end{aligned}}} The successive convergents are given by the formula h n k n = a n h n − 1 + h n − 2 a n k n − 1 + k n − 2 . {\displaystyle {\frac {h_{n}}{k_{n}}}={\frac {a_{n}h_{n-1}+h_{n-2}}{a_{n}k_{n-1}+k_{n-2}}}.} Thus to incorporate a new term into a rational approximation, only the two previous convergents are necessary. The initial "convergents" (required for the first two terms) are 0⁄1 and 1⁄0. For example, the convergents for [0;1,5,2,2] are 0⁄1, 1⁄1, 5⁄6, 11⁄13, and 27⁄32. When using the Babylonian method to generate successive approximations to the square root of an integer, if one starts with the lowest integer as first approximant, the rationals generated all appear in the list of convergents for the continued fraction. Specifically, the approximants will appear on the convergents list in positions 0, 1, 3, 7, 15, ... , 2k−1, ... For example, the continued fraction expansion for 3 {\displaystyle {\sqrt {3}}} is [1; 1, 2, 1, 2, 1, 2, 1, 2, ...]. Comparing the convergents with the approximants derived from the Babylonian method: x0 = 1 = ⁠1/1⁠ x1 = ⁠1/2⁠(1 + ⁠3/1⁠) = ⁠2/1⁠ = 2 x2 = ⁠1/2⁠(2 + ⁠3/2⁠) = ⁠7/4⁠ x3 = ⁠1/2⁠(⁠7/4⁠ + ⁠3/⁠7/4⁠⁠) = ⁠97/56⁠ === Properties === The Baire space is a topological space on infinite sequences of natural numbers. The infinite continued fraction provides a homeomorphism from the Baire space to the space of irrational real numbers (with the subspace topology inherited from the usual topology on the reals). 
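The recurrence hn = an hn−1 + hn−2, kn = an kn−1 + kn−2, seeded with the initial "convergents" 0⁄1 and 1⁄0, can be sketched as follows (an illustrative sketch; the function name is ours):

```python
from fractions import Fraction

def convergents(terms):
    """Yield the successive convergents h_n/k_n of a continued fraction,
    using h_n = a_n*h_(n-1) + h_(n-2) and k_n = a_n*k_(n-1) + k_(n-2),
    seeded with the initial "convergents" 0/1 and 1/0."""
    h_prev, h = 0, 1   # h_(-2) = 0, h_(-1) = 1
    k_prev, k = 1, 0   # k_(-2) = 1, k_(-1) = 0
    for a in terms:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        yield Fraction(h, k)

print([str(c) for c in convergents([0, 1, 5, 2, 2])])
# ['0', '1', '5/6', '11/13', '27/32']
```

Note that only the two previous numerator/denominator pairs are kept, matching the observation that incorporating a new term needs nothing more.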
The infinite continued fraction also provides a map between the quadratic irrationals and the dyadic rationals, and from other irrationals to the set of infinite strings of binary numbers (i.e. the Cantor set); this map is called the Minkowski question-mark function. The mapping has interesting self-similar fractal properties; these are given by the modular group, which is the subgroup of Möbius transformations having integer values in the transform. Roughly speaking, continued fraction convergents can be taken to be Möbius transformations acting on the (hyperbolic) upper half-plane; this is what leads to the fractal self-symmetry. The limit probability distribution of the coefficients in the continued fraction expansion of a random variable uniformly distributed in (0, 1) is the Gauss–Kuzmin distribution. === Some useful theorems === If a 0 , {\displaystyle \ a_{0}\ ,} a 1 , {\displaystyle a_{1}\ ,} a 2 , {\displaystyle a_{2}\ ,} … {\displaystyle \ \ldots \ } is an infinite sequence of positive integers, define the sequences h n {\displaystyle \ h_{n}\ } and k n {\displaystyle \ k_{n}\ } recursively: Theorem 1. For any positive real number x {\displaystyle \ x\ } [ a 0 ; a 1 , … , a n − 1 , x ] = x h n − 1 + h n − 2 x k n − 1 + k n − 2 , [ a 0 ; a 1 , … , a n − 1 + x ] = h n − 1 + x h n − 2 k n − 1 + x k n − 2 {\displaystyle \left[\ a_{0};\ a_{1},\ \dots ,a_{n-1},x\ \right]={\frac {x\ h_{n-1}+h_{n-2}}{\ x\ k_{n-1}+k_{n-2}\ }},\quad \left[\ a_{0};\ a_{1},\ \dots ,a_{n-1}+x\ \right]={\frac {h_{n-1}+xh_{n-2}}{\ k_{n-1}+xk_{n-2}\ }}} Theorem 2. The convergents of [ a 0 ; {\displaystyle \ [\ a_{0}\ ;} a 1 , {\displaystyle a_{1}\ ,} a 2 , {\displaystyle a_{2}\ ,} … ] {\displaystyle \ldots \ ]\ } are given by [ a 0 ; a 1 , … , a n ] = h n k n . 
{\displaystyle \left[\ a_{0};\ a_{1},\ \dots ,a_{n}\ \right]={\frac {h_{n}}{\ k_{n}\ }}~.} or in matrix form, [ h n h n − 1 k n k n − 1 ] = [ a 0 1 1 0 ] ⋯ [ a n 1 1 0 ] {\displaystyle {\begin{bmatrix}h_{n}&h_{n-1}\\k_{n}&k_{n-1}\end{bmatrix}}={\begin{bmatrix}a_{0}&1\\1&0\end{bmatrix}}\cdots {\begin{bmatrix}a_{n}&1\\1&0\end{bmatrix}}} Theorem 3. If the n {\displaystyle \ n} th convergent to a continued fraction is h n k n , {\displaystyle \ {\frac {h_{n}}{k_{n}}}\ ,} then k n h n − 1 − k n − 1 h n = ( − 1 ) n , {\displaystyle k_{n}\ h_{n-1}-k_{n-1}\ h_{n}=(-1)^{n}\ ,} or equivalently h n k n − h n − 1 k n − 1 = ( − 1 ) n + 1 k n − 1 k n . {\displaystyle {\frac {h_{n}}{\ k_{n}\ }}-{\frac {h_{n-1}}{\ k_{n-1}\ }}={\frac {(-1)^{n+1}}{\ k_{n-1}\ k_{n}\ }}~.} Corollary 1: Each convergent is in its lowest terms (for if h n {\displaystyle \ h_{n}\ } and k n {\displaystyle \ k_{n}\ } had a nontrivial common divisor it would divide k n h n − 1 − k n − 1 h n , {\displaystyle \ k_{n}\ h_{n-1}-k_{n-1}\ h_{n}\ ,} which is impossible). Corollary 2: The difference between successive convergents is a fraction whose numerator is unity: h n k n − h n − 1 k n − 1 = h n k n − 1 − k n h n − 1 k n k n − 1 = ( − 1 ) n + 1 k n k n − 1 . {\displaystyle {\frac {h_{n}}{k_{n}}}-{\frac {h_{n-1}}{k_{n-1}}}={\frac {\ h_{n}\ k_{n-1}-k_{n}\ h_{n-1}\ }{\ k_{n}\ k_{n-1}\ }}={\frac {(-1)^{n+1}}{\ k_{n}\ k_{n-1}\ }}~.} Corollary 3: The continued fraction is equivalent to a series of alternating terms: a 0 + ∑ n = 0 ∞ ( − 1 ) n k n k n + 1 . 
{\displaystyle a_{0}+\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{\ k_{n}\ k_{n+1}\ }}~.} Corollary 4: The matrix [ h n h n − 1 k n k n − 1 ] = [ a 0 1 1 0 ] ⋯ [ a n 1 1 0 ] {\displaystyle {\begin{bmatrix}h_{n}&h_{n-1}\\k_{n}&k_{n-1}\end{bmatrix}}={\begin{bmatrix}a_{0}&1\\1&0\end{bmatrix}}\cdots {\begin{bmatrix}a_{n}&1\\1&0\end{bmatrix}}} has determinant ( − 1 ) n + 1 {\displaystyle (-1)^{n+1}} , and thus belongs to the group of 2 × 2 {\displaystyle \ 2\times 2\ } unimodular matrices G L ( 2 , Z ) . {\displaystyle \ \mathrm {GL} (2,\mathbb {Z} )~.} Corollary 5: The matrix [ h n h n − 2 k n k n − 2 ] = [ h n − 1 h n − 2 k n − 1 k n − 2 ] [ a n 0 1 1 ] {\displaystyle {\begin{bmatrix}h_{n}&h_{n-2}\\k_{n}&k_{n-2}\end{bmatrix}}={\begin{bmatrix}h_{n-1}&h_{n-2}\\k_{n-1}&k_{n-2}\end{bmatrix}}{\begin{bmatrix}a_{n}&0\\1&1\end{bmatrix}}} has determinant ( − 1 ) n a n {\displaystyle (-1)^{n}a_{n}} , or equivalently, h n k n − h n − 2 k n − 2 = ( − 1 ) n k n − 2 k n a n {\displaystyle {\frac {h_{n}}{\ k_{n}\ }}-{\frac {h_{n-2}}{\ k_{n-2}\ }}={\frac {(-1)^{n}}{\ k_{n-2}\ k_{n}\ }}a_{n}} meaning that the odd terms monotonically decrease, while the even terms monotonically increase. Corollary 6: The denominator sequence k 0 , k 1 , k 2 , … {\displaystyle k_{0},k_{1},k_{2},\dots } satisfies the recurrence relation k − 1 = 0 , k 0 = 1 , k n = k n − 1 a n + k n − 2 {\displaystyle k_{-1}=0,k_{0}=1,k_{n}=k_{n-1}a_{n}+k_{n-2}} , and grows at least as fast as the Fibonacci sequence, which itself grows like O ( ϕ n ) {\displaystyle O(\phi ^{n})} where ϕ = 1.618 … {\displaystyle \phi =1.618\dots } is the golden ratio. Theorem 4. Each ( s {\displaystyle \ s} th) convergent is nearer to a subsequent ( n {\displaystyle \ n} th) convergent than any preceding ( r {\displaystyle \ r} th) convergent is. 
In symbols, if the n {\displaystyle \ n} th convergent is taken to be [ a 0 ; a 1 , … , a n ] = x n , {\displaystyle \ \left[\ a_{0};\ a_{1},\ \ldots ,\ a_{n}\ \right]=x_{n}\ ,} then | x r − x n | > | x s − x n | {\displaystyle \left|\ x_{r}-x_{n}\ \right|>\left|\ x_{s}-x_{n}\ \right|} for all r < s < n . {\displaystyle \ r<s<n~.} Corollary 1: The even convergents (before the n {\displaystyle \ n} th) continually increase, but are always less than x n . {\displaystyle \ x_{n}~.} Corollary 2: The odd convergents (before the n {\displaystyle \ n} th) continually decrease, but are always greater than x n . {\displaystyle \ x_{n}~.} Theorem 5. 1 k n ( k n + 1 + k n ) < | x − h n k n | < 1 k n k n + 1 . {\displaystyle {\frac {1}{\ k_{n}\ (k_{n+1}+k_{n})\ }}<\left|\ x-{\frac {h_{n}}{\ k_{n}\ }}\ \right|<{\frac {1}{\ k_{n}\ k_{n+1}\ }}~.} Corollary 1: A convergent is nearer to the limit of the continued fraction than any fraction whose denominator is less than that of the convergent. Corollary 2: A convergent obtained by terminating the continued fraction just before a large term is a close approximation to the limit of the continued fraction.Theorem 6: Consider the set of all open intervals with end-points [ 0 ; a 1 , … , a n ] , [ 0 ; a 1 , … , a n + 1 ] {\displaystyle [0;a_{1},\dots ,a_{n}],[0;a_{1},\dots ,a_{n}+1]} . Denote it as C {\displaystyle {\mathcal {C}}} . Any open subset of [ 0 , 1 ] ∖ Q {\displaystyle [0,1]\setminus \mathbb {Q} } is a disjoint union of sets from C {\displaystyle {\mathcal {C}}} .Corollary: The infinite continued fraction provides a homeomorphism from the Baire space to [ 0 , 1 ] ∖ Q {\displaystyle [0,1]\setminus \mathbb {Q} } . 
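Theorems 2 and 3 can be checked numerically by multiplying the 2 × 2 matrices [[a, 1], [1, 0]] and testing the determinant identity (a sketch under our own naming; for m terms a0, ..., am−1 the convergent index is n = m − 1):

```python
def cf_matrices(terms):
    """Product of the 2x2 matrices [[a,1],[1,0]] from Theorem 2.
    First column of the product is (h_n, k_n), second is (h_(n-1), k_(n-1))."""
    h, h1, k, k1 = 1, 0, 0, 1   # start from the identity matrix
    for a in terms:
        h, h1 = a * h + h1, h   # multiply on the right by [[a,1],[1,0]]
        k, k1 = a * k + k1, k
    return h, h1, k, k1

# Theorem 3: k_n h_(n-1) - k_(n-1) h_n = (-1)^n, with n = m - 1 for m terms.
terms = [3, 7, 15, 1, 292]
for m in range(1, len(terms) + 1):
    h, h1, k, k1 = cf_matrices(terms[:m])
    assert k * h1 - k1 * h == (-1) ** (m - 1)

print(f"{h}/{k}")   # 103993/33102, the convergent of [3;7,15,1,292]
```

Corollary 1 (each convergent is in lowest terms) follows immediately here: any common divisor of h and k would have to divide the determinant ±1.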
== Semiconvergents == If h n − 1 k n − 1 , h n k n {\displaystyle {\frac {h_{n-1}}{k_{n-1}}},{\frac {h_{n}}{k_{n}}}} are consecutive convergents, then any fractions of the form h n − 1 + m h n k n − 1 + m k n , {\displaystyle {\frac {h_{n-1}+mh_{n}}{k_{n-1}+mk_{n}}},} where m {\displaystyle m} is an integer such that 0 ≤ m ≤ a n + 1 {\displaystyle 0\leq m\leq a_{n+1}} , are called semiconvergents, secondary convergents, or intermediate fractions. The ( m + 1 ) {\displaystyle (m+1)} -st semiconvergent equals the mediant of the m {\displaystyle m} -th one and the convergent h n k n {\displaystyle {\tfrac {h_{n}}{k_{n}}}} . Sometimes the term is taken to mean that being a semiconvergent excludes the possibility of being a convergent (i.e., 0 < m < a n + 1 {\displaystyle 0<m<a_{n+1}} ), rather than that a convergent is a kind of semiconvergent. It follows that semiconvergents represent a monotonic sequence of fractions between the convergents h n − 1 k n − 1 {\displaystyle {\tfrac {h_{n-1}}{k_{n-1}}}} (corresponding to m = 0 {\displaystyle m=0} ) and h n + 1 k n + 1 {\displaystyle {\tfrac {h_{n+1}}{k_{n+1}}}} (corresponding to m = a n + 1 {\displaystyle m=a_{n+1}} ). The consecutive semiconvergents a b {\displaystyle {\tfrac {a}{b}}} and c d {\displaystyle {\tfrac {c}{d}}} satisfy the property a d − b c = ± 1 {\displaystyle ad-bc=\pm 1} . If a rational approximation p q {\displaystyle {\tfrac {p}{q}}} to a real number x {\displaystyle x} is such that the value | x − p q | {\displaystyle \left|x-{\tfrac {p}{q}}\right|} is smaller than that of any approximation with a smaller denominator, then p q {\displaystyle {\tfrac {p}{q}}} is a semiconvergent of the continued fraction expansion of x {\displaystyle x} . The converse is not true, however. == Best rational approximations == One can choose to define a best rational approximation to a real number x as a rational number ⁠n/d⁠, d > 0, that is closer to x than any approximation with a smaller or equal denominator. 
The simple continued fraction for x can be used to generate all of the best rational approximations for x by applying these three rules: (1) truncate the continued fraction, and reduce its last term by a chosen amount (possibly zero); (2) the reduced term cannot have less than half its original value; (3) if the final term is even, half its value is admissible only if the corresponding semiconvergent is better than the previous convergent (see below). For example, 0.84375 has continued fraction [0;1,5,2,2]. Its best rational approximations are 1, ⁠3/4⁠, ⁠4/5⁠, ⁠5/6⁠, ⁠11/13⁠, ⁠16/19⁠, and ⁠27/32⁠. The strictly monotonic increase in the denominators as additional terms are included permits an algorithm to impose a limit, either on size of denominator or closeness of approximation. The "half rule" mentioned above requires that when ak is even, the halved term ak/2 is admissible if and only if |x − [a0 ; a1, ..., ak − 1]| > |x − [a0 ; a1, ..., ak − 1, ak/2]|. This is equivalent to: [ak; ak − 1, ..., a1] > [ak; ak + 1, ...]. The convergents to x are "best approximations" in a much stronger sense than the one defined above. Namely, n/d is a convergent for x if and only if |dx − n| has the smallest value among the analogous expressions for all rational approximations m/c with c ≤ d; that is, we have |dx − n| < |cx − m| so long as c < d. (Note also that |dkx − nk| → 0 as k → ∞.) === Best rational within an interval === A rational that falls within the interval (x, y), for 0 < x < y, can be found with the continued fractions for x and y. When both x and y are irrational and x = [a0; a1, a2, ..., ak − 1, ak, ak + 1, ...] y = [a0; a1, a2, ..., ak − 1, bk, bk + 1, ...] where x and y have identical continued fraction expansions up through ak−1, a rational that falls within the interval (x, y) is given by the finite continued fraction, z(x,y) = [a0; a1, a2, ..., ak − 1, min(ak, bk) + 1] This rational will be best in the sense that no other rational in (x, y) will have a smaller numerator or a smaller denominator. 
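The definition of a best rational approximation can be checked directly by brute force, without the continued fraction machinery: for each denominator in turn, keep the nearest fraction whenever it is strictly closer than everything seen so far (an illustrative sketch; the function name is ours):

```python
from fractions import Fraction

def best_approximations(x: Fraction, max_den: int):
    """Best rational approximations of x, by the definition: n/d qualifies
    when it is strictly closer to x than any fraction with denominator <= d."""
    best, err = [], None
    for d in range(1, max_den + 1):
        n = round(x * d)                 # nearest numerator for denominator d
        e = abs(x - Fraction(n, d))
        if err is None or e < err:       # strictly closer than all before
            best.append(Fraction(n, d))
            err = e
    return best

print([str(f) for f in best_approximations(Fraction(27, 32), 32)])
# ['1', '3/4', '4/5', '5/6', '11/13', '16/19', '27/32']
```

Note this is the naive O(max_den) check of the definition, not the three-rule truncation algorithm, so it is useful mainly as a cross-check on small examples.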
If x is rational, it will have two continued fraction representations that are finite, x1 and x2, and similarly a rational y will have two representations, y1 and y2. The coefficients beyond the last in any of these representations should be interpreted as +∞; and the best rational will be one of z(x1, y1), z(x1, y2), z(x2, y1), or z(x2, y2). For example, the decimal representation 3.1416 could be rounded from any number in the interval [3.14155, 3.14165). The continued fraction representations of 3.14155 and 3.14165 are 3.14155 = [3; 7, 15, 2, 7, 1, 4, 1, 1] = [3; 7, 15, 2, 7, 1, 4, 2] 3.14165 = [3; 7, 16, 1, 3, 4, 2, 3, 1] = [3; 7, 16, 1, 3, 4, 2, 4] and the best rational between these two is [3; 7, 16] = ⁠355/113⁠ = 3.1415929.... Thus, ⁠355/113⁠ is the best rational number corresponding to the rounded decimal number 3.1416, in the sense that no other rational number that would be rounded to 3.1416 will have a smaller numerator or a smaller denominator. === Interval for a convergent === A rational number, which can be expressed as finite continued fraction in two ways, z = [a0; a1, ..., ak − 1, ak, 1] = [a0; a1, ..., ak − 1, ak + 1] = ⁠pk/qk⁠ will be one of the convergents for the continued fraction expansion of a number, if and only if the number is strictly between (see this proof) x = [a0; a1, ..., ak − 1, ak, 2] = ⁠2pk - pk-1/2qk - qk-1⁠ and y = [a0; a1, ..., ak − 1, ak + 2] = ⁠pk + pk-1/qk + qk-1⁠ The numbers x and y are formed by incrementing the last coefficient in the two representations for z. It is the case that x < y when k is even, and x > y when k is odd. 
For example, the number ⁠355/113⁠ (Zu's fraction) has the continued fraction representations ⁠355/113⁠ = [3; 7, 15, 1] = [3; 7, 16] and thus ⁠355/113⁠ is a convergent of any number strictly between [3; 7, 15, 2] = ⁠688/219⁠ and [3; 7, 17] = ⁠377/120⁠. === Legendre's theorem on continued fractions === In his Essai sur la théorie des nombres (1798), Adrien-Marie Legendre derives a necessary and sufficient condition for a rational number to be a convergent of the continued fraction of a given real number. A consequence of this criterion, often called Legendre's theorem within the study of continued fractions, is as follows: Theorem. If α is a real number and p, q are positive integers such that | α − p q | < 1 2 q 2 {\displaystyle \left|\alpha -{\frac {p}{q}}\right|<{\frac {1}{2q^{2}}}} , then p/q is a convergent of the continued fraction of α. This theorem forms the basis for Wiener's attack, a polynomial-time exploit of the RSA cryptographic protocol that can occur for an injudicious choice of public and private keys (specifically, this attack succeeds if the prime factors of the public key n = pq satisfy p < q < 2p and the private key d is less than (1/3)n1/4). == Comparison == Consider x = [a0; a1, ...] and y = [b0; b1, ...]. If k is the smallest index for which ak is unequal to bk then x < y if (−1)k(ak − bk) < 0 and y < x otherwise. If there is no such k, but one expansion is shorter than the other, say x = [a0; a1, ..., an] and y = [b0; b1, ..., bn, bn + 1, ...] with ai = bi for 0 ≤ i ≤ n, then x < y if n is even and y < x if n is odd. == Continued fraction expansion of π and its convergents == To calculate the convergents of π we may set a0 = ⌊π⌋ = 3, define u1 = ⁠1/π − 3⁠ ≈ 7.0625 and a1 = ⌊u1⌋ = 7, u2 = ⁠1/u1 − 7⁠ ≈ 15.9966 and a2 = ⌊u2⌋ = 15, u3 = ⁠1/u2 − 15⁠ ≈ 1.0034. Continuing like this, one can determine the infinite continued fraction of π as [3;7,15,1,292,1,1,...] (sequence A001203 in the OEIS). 
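The iteration just described for π can be carried out exactly on a rational stand-in for π (a sketch; `Fraction(pi)` is the exact value of the double-precision float, so only the first dozen or so terms are trustworthy approximations of π's true expansion):

```python
from fractions import Fraction
from math import pi, floor

def cf_terms(r: Fraction, count: int):
    """First `count` terms of the simple continued fraction of r,
    via the floor-and-reciprocate iteration a_k = floor(u_k), u_(k+1) = 1/(u_k - a_k)."""
    terms = []
    for _ in range(count):
        i = floor(r)
        terms.append(i)
        if r == i:          # expansion terminated (r was rational all along)
            break
        r = 1 / (r - i)
    return terms

print(cf_terms(Fraction(pi), 8))   # [3, 7, 15, 1, 292, 1, 1, 1]
```

Because the float differs from π by only about 1.2e-16, its expansion agrees with [3;7,15,1,292,1,1,1,2,...] well past the eight terms shown.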
The fourth convergent of π is [3;7,15,1] = ⁠355/113⁠ = 3.14159292035..., sometimes called Milü, which is fairly close to the true value of π. Let us suppose that the quotients found are, as above, [3;7,15,1]. The following is a rule by which we can write down at once the convergent fractions which result from these quotients without developing the continued fraction. The first quotient, supposed divided by unity, will give the first fraction, which will be too small, namely, ⁠3/1⁠. Then, multiplying the numerator and denominator of this fraction by the second quotient and adding unity to the numerator, we shall have the second fraction, ⁠22/7⁠, which will be too large. Multiplying in like manner the numerator and denominator of this fraction by the third quotient, and adding to the numerator the numerator of the preceding fraction, and to the denominator the denominator of the preceding fraction, we shall have the third fraction, which will be too small. Thus, the third quotient being 15, we have for our numerator (22 × 15 = 330) + 3 = 333, and for our denominator, (7 × 15 = 105) + 1 = 106. The third convergent, therefore, is ⁠333/106⁠. We proceed in the same manner for the fourth convergent. The fourth quotient being 1, we say 333 times 1 is 333, and this plus 22, the numerator of the fraction preceding, is 355; similarly, 106 times 1 is 106, and this plus 7 is 113. In this manner, by employing the four quotients [3;7,15,1], we obtain the four fractions: ⁠3/1⁠, ⁠22/7⁠, ⁠333/106⁠, ⁠355/113⁠, .... 
To sum up, the pattern is Numerator i = Numerator ( i − 1 ) ⋅ Quotient i + Numerator ( i − 2 ) {\displaystyle {\text{Numerator}}_{i}={\text{Numerator}}_{(i-1)}\cdot {\text{Quotient}}_{i}+{\text{Numerator}}_{(i-2)}} Denominator i = Denominator ( i − 1 ) ⋅ Quotient i + Denominator ( i − 2 ) {\displaystyle {\text{Denominator}}_{i}={\text{Denominator}}_{(i-1)}\cdot {\text{Quotient}}_{i}+{\text{Denominator}}_{(i-2)}} These convergents are alternately smaller and larger than the true value of π, and approach nearer and nearer to π. The difference between a given convergent and π is less than the reciprocal of the product of the denominators of that convergent and the next convergent. For example, the fraction ⁠22/7⁠ is greater than π, but ⁠22/7⁠ − π is less than ⁠1/7 × 106⁠ = ⁠1/742⁠ (in fact, ⁠22/7⁠ − π is just more than ⁠1/791⁠ = ⁠1/7 × 113⁠). The demonstration of the foregoing properties is deduced from the fact that if we seek the difference between one of the convergent fractions and the next adjacent to it we shall obtain a fraction of which the numerator is always unity and the denominator the product of the two denominators. Thus the difference between ⁠22/7⁠ and ⁠3/1⁠ is ⁠1/7⁠, in excess; between ⁠333/106⁠ and ⁠22/7⁠, ⁠1/742⁠, in deficit; between ⁠355/113⁠ and ⁠333/106⁠, ⁠1/11978⁠, in excess; and so on. The result being, that by employing this series of differences we can express in another and very simple manner the fractions with which we are here concerned, by means of a second series of fractions of which the numerators are all unity and the denominators successively be the product of every two adjacent denominators. Instead of the fractions written above, we have thus the series: ⁠3/1⁠ + ⁠1/1 × 7⁠ − ⁠1/7 × 106⁠ + ⁠1/106 × 113⁠ − ... 
The first term, as we see, is the first fraction; the first and second together give the second fraction, ⁠22/7⁠; the first, the second and the third give the third fraction ⁠333/106⁠, and so on with the rest; the result being that the series entire is equivalent to the original value. == Non-simple continued fraction == A non-simple continued fraction is an expression of the form x = b 0 + a 1 b 1 + a 2 b 2 + a 3 b 3 + a 4 b 4 + ⋱ {\displaystyle x=b_{0}+{\cfrac {a_{1}}{b_{1}+{\cfrac {a_{2}}{b_{2}+{\cfrac {a_{3}}{b_{3}+{\cfrac {a_{4}}{b_{4}+\ddots \,}}}}}}}}} where the an (n > 0) are the partial numerators, the bn are the partial denominators, and the leading term b0 is called the integer part of the continued fraction. To illustrate the use of non-simple continued fractions, consider the following example. The sequence of partial denominators of the simple continued fraction of π does not show any obvious pattern: π = [ 3 ; 7 , 15 , 1 , 292 , 1 , 1 , 1 , 2 , 1 , 3 , 1 , … ] {\displaystyle \pi =[3;7,15,1,292,1,1,1,2,1,3,1,\ldots ]} or π = 3 + 1 7 + 1 15 + 1 1 + 1 292 + 1 1 + 1 1 + 1 1 + 1 2 + 1 1 + 1 3 + 1 1 + ⋱ {\displaystyle \pi =3+{\cfrac {1}{7+{\cfrac {1}{15+{\cfrac {1}{1+{\cfrac {1}{292+{\cfrac {1}{1+{\cfrac {1}{1+{\cfrac {1}{1+{\cfrac {1}{2+{\cfrac {1}{1+{\cfrac {1}{3+{\cfrac {1}{1+\ddots }}}}}}}}}}}}}}}}}}}}}}} However, several non-simple continued fractions for π have a perfectly regular structure, such as: π = 4 1 + 1 2 2 + 3 2 2 + 5 2 2 + 7 2 2 + 9 2 2 + ⋱ = 4 1 + 1 2 3 + 2 2 5 + 3 2 7 + 4 2 9 + ⋱ = 3 + 1 2 6 + 3 2 6 + 5 2 6 + 7 2 6 + 9 2 6 + ⋱ {\displaystyle \pi ={\cfrac {4}{1+{\cfrac {1^{2}}{2+{\cfrac {3^{2}}{2+{\cfrac {5^{2}}{2+{\cfrac {7^{2}}{2+{\cfrac {9^{2}}{2+\ddots }}}}}}}}}}}}={\cfrac {4}{1+{\cfrac {1^{2}}{3+{\cfrac {2^{2}}{5+{\cfrac {3^{2}}{7+{\cfrac {4^{2}}{9+\ddots }}}}}}}}}}=3+{\cfrac {1^{2}}{6+{\cfrac {3^{2}}{6+{\cfrac {5^{2}}{6+{\cfrac {7^{2}}{6+{\cfrac {9^{2}}{6+\ddots }}}}}}}}}}} π = 2 + 2 1 + 1 1 / 2 + 1 1 / 3 + 1 1 / 4 + ⋱ = 2 + 2 1 + 1 
⋅ 2 1 + 2 ⋅ 3 1 + 3 ⋅ 4 1 + ⋱ {\displaystyle \displaystyle \pi =2+{\cfrac {2}{1+{\cfrac {1}{1/2+{\cfrac {1}{1/3+{\cfrac {1}{1/4+\ddots }}}}}}}}=2+{\cfrac {2}{1+{\cfrac {1\cdot 2}{1+{\cfrac {2\cdot 3}{1+{\cfrac {3\cdot 4}{1+\ddots }}}}}}}}} π = 2 + 4 3 + 1 ⋅ 3 4 + 3 ⋅ 5 4 + 5 ⋅ 7 4 + ⋱ {\displaystyle \displaystyle \pi =2+{\cfrac {4}{3+{\cfrac {1\cdot 3}{4+{\cfrac {3\cdot 5}{4+{\cfrac {5\cdot 7}{4+\ddots }}}}}}}}} The first two of these are special cases of the arctangent function with π = 4 arctan (1) and the fourth and fifth one can be derived using the Wallis product. π = 3 + 1 6 + 1 3 + 2 3 6 ⋅ 1 2 + 1 2 1 3 + 2 3 + 3 3 + 4 3 6 ⋅ 2 2 + 2 2 1 3 + 2 3 + 3 3 + 4 3 + 5 3 + 6 3 6 ⋅ 3 2 + 3 2 1 3 + 2 3 + 3 3 + 4 3 + 5 3 + 6 3 + 7 3 + 8 3 6 ⋅ 4 2 + ⋱ {\displaystyle \pi =3+{\cfrac {1}{6+{\cfrac {1^{3}+2^{3}}{6\cdot 1^{2}+1^{2}{\cfrac {1^{3}+2^{3}+3^{3}+4^{3}}{6\cdot 2^{2}+2^{2}{\cfrac {1^{3}+2^{3}+3^{3}+4^{3}+5^{3}+6^{3}}{6\cdot 3^{2}+3^{2}{\cfrac {1^{3}+2^{3}+3^{3}+4^{3}+5^{3}+6^{3}+7^{3}+8^{3}}{6\cdot 4^{2}+\ddots }}}}}}}}}}} The continued fraction of π {\displaystyle \pi } above consisting of cubes uses the Nilakantha series and an exploit from Leonhard Euler. == Other continued fraction expansions == === Periodic continued fractions === The numbers with periodic continued fraction expansion are precisely the irrational solutions of quadratic equations with rational coefficients; rational solutions have finite continued fraction expansions as previously stated. The simplest examples are the golden ratio φ = [1;1,1,1,1,1,...] and √2 = [1;2,2,2,2,...], while √14 = [3;1,2,1,6,1,2,1,6...] and √42 = [6;2,12,2,12,2,12...]. All irrational square roots of integers have a special form for the period; a symmetrical string, like the empty string (for √2) or 1,2,1 (for √14), followed by the double of the leading integer. 
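The periodic expansion of a square root, including the symmetric period ending in twice the leading integer, can be computed with the standard integer recurrence for quadratic surds, entirely without floating point (an illustrative sketch; the function name is ours):

```python
from math import isqrt

def sqrt_cf(n: int, count: int):
    """First `count` continued fraction terms of sqrt(n), for n not a
    perfect square, via the recurrence m' = d*a - m, d' = (n - m'^2)/d,
    a' = (a0 + m') // d' with m=0, d=1, a0 = isqrt(n)."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0
    terms = [a0]
    while len(terms) < count:
        m = d * a - m
        d = (n - m * m) // d    # division is always exact here
        a = (a0 + m) // d
        terms.append(a)
    return terms

print(sqrt_cf(2, 5))    # [1, 2, 2, 2, 2]
print(sqrt_cf(14, 9))   # [3, 1, 2, 1, 6, 1, 2, 1, 6]
```

The output for √14 shows the symmetric string 1, 2, 1 followed by 6, the double of the leading integer 3, and then the period repeating.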
=== A property of the golden ratio φ === Because the continued fraction expansion for φ doesn't use any integers greater than 1, φ is one of the most "difficult" real numbers to approximate with rational numbers. Hurwitz's theorem states that any irrational number k can be approximated by infinitely many rational ⁠m/n⁠ with | k − m n | < 1 n 2 5 . {\displaystyle \left|k-{m \over n}\right|<{1 \over n^{2}{\sqrt {5}}}.} While virtually all real numbers k will eventually have infinitely many convergents ⁠m/n⁠ whose distance from k is significantly smaller than this limit, the convergents for φ (i.e., the numbers ⁠5/3⁠, ⁠8/5⁠, ⁠13/8⁠, ⁠21/13⁠, etc.) consistently "toe the boundary", keeping a distance of almost exactly 1 n 2 5 {\displaystyle {\scriptstyle {1 \over n^{2}{\sqrt {5}}}}} away from φ, thus never producing an approximation nearly as impressive as, for example, ⁠355/113⁠ for π. It can also be shown that every real number of the form ⁠a + bφ/c + dφ⁠, where a, b, c, and d are integers such that a d − b c = ±1, shares this property with the golden ratio φ; and that all other real numbers can be more closely approximated. === Regular patterns in continued fractions === While there is no discernible pattern in the simple continued fraction expansion of π, there is one for e, the base of the natural logarithm: e = e 1 = [ 2 ; 1 , 2 , 1 , 1 , 4 , 1 , 1 , 6 , 1 , 1 , 8 , 1 , 1 , 10 , 1 , 1 , 12 , 1 , 1 , … ] , {\displaystyle e=e^{1}=[2;1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,1,1,\dots ],} which is a special case of this general expression for positive integer n: e 1 / n = [ 1 ; n − 1 , 1 , 1 , 3 n − 1 , 1 , 1 , 5 n − 1 , 1 , 1 , 7 n − 1 , 1 , 1 , … ] . 
{\displaystyle e^{1/n}=[1;n-1,1,1,3n-1,1,1,5n-1,1,1,7n-1,1,1,\dots ]\,\!.} Another, more complex pattern appears in this continued fraction expansion for positive odd n: e 2 / n = [ 1 ; n − 1 2 , 6 n , 5 n − 1 2 , 1 , 1 , 7 n − 1 2 , 18 n , 11 n − 1 2 , 1 , 1 , 13 n − 1 2 , 30 n , 17 n − 1 2 , 1 , 1 , … ] , {\displaystyle e^{2/n}=\left[1;{\frac {n-1}{2}},6n,{\frac {5n-1}{2}},1,1,{\frac {7n-1}{2}},18n,{\frac {11n-1}{2}},1,1,{\frac {13n-1}{2}},30n,{\frac {17n-1}{2}},1,1,\dots \right]\,\!,} with a special case for n = 1: e 2 = [ 7 ; 2 , 1 , 1 , 3 , 18 , 5 , 1 , 1 , 6 , 30 , 8 , 1 , 1 , 9 , 42 , 11 , 1 , 1 , 12 , 54 , 14 , 1 , 1 … , 3 k , 12 k + 6 , 3 k + 2 , 1 , 1 … ] . {\displaystyle e^{2}=[7;2,1,1,3,18,5,1,1,6,30,8,1,1,9,42,11,1,1,12,54,14,1,1\dots ,3k,12k+6,3k+2,1,1\dots ]\,\!.} Other continued fractions of this sort are tanh ⁡ ( 1 / n ) = [ 0 ; n , 3 n , 5 n , 7 n , 9 n , 11 n , 13 n , 15 n , 17 n , 19 n , … ] {\displaystyle \tanh(1/n)=[0;n,3n,5n,7n,9n,11n,13n,15n,17n,19n,\dots ]} where n is a positive integer; also, for integer n: tan ⁡ ( 1 / n ) = [ 0 ; n − 1 , 1 , 3 n − 2 , 1 , 5 n − 2 , 1 , 7 n − 2 , 1 , 9 n − 2 , 1 , … ] , {\displaystyle \tan(1/n)=[0;n-1,1,3n-2,1,5n-2,1,7n-2,1,9n-2,1,\dots ]\,\!,} with a special case for n = 1: tan ⁡ ( 1 ) = [ 1 ; 1 , 1 , 3 , 1 , 5 , 1 , 7 , 1 , 9 , 1 , 11 , 1 , 13 , 1 , 15 , 1 , 17 , 1 , 19 , 1 , … ] . {\displaystyle \tan(1)=[1;1,1,3,1,5,1,7,1,9,1,11,1,13,1,15,1,17,1,19,1,\dots ]\,\!.} If In(x) is the modified, or hyperbolic, Bessel function of the first kind, we may define a function on the rationals ⁠p/q⁠ by S ( p / q ) = I p / q ( 2 / q ) I 1 + p / q ( 2 / q ) , {\displaystyle S(p/q)={\frac {I_{p/q}(2/q)}{I_{1+p/q}(2/q)}},} which is defined for all rational numbers, with p and q in lowest terms. 
Then for all nonnegative rationals, we have S ( p / q ) = [ p + q ; p + 2 q , p + 3 q , p + 4 q , … ] , {\displaystyle S(p/q)=[p+q;p+2q,p+3q,p+4q,\dots ],} with similar formulas for negative rationals; in particular we have S ( 0 ) = S ( 0 / 1 ) = [ 1 ; 2 , 3 , 4 , 5 , 6 , 7 , … ] . {\displaystyle S(0)=S(0/1)=[1;2,3,4,5,6,7,\dots ].} Many of the formulas can be proved using Gauss's continued fraction. === Typical continued fractions === Most irrational numbers do not have any periodic or regular behavior in their continued fraction expansion. Nevertheless, almost all numbers on the unit interval have the same limiting behavior. The arithmetic average diverges: lim n → ∞ 1 n ∑ k = 1 n a k = + ∞ {\displaystyle \lim _{n\to \infty }{\frac {1}{n}}\sum _{k=1}^{n}a_{k}=+\infty } , and so the coefficients grow arbitrarily large: lim sup n a n = + ∞ {\displaystyle \limsup _{n}a_{n}=+\infty } . In particular, this implies that almost all numbers are well-approximable, in the sense that lim inf n → ∞ | x − p n q n | q n 2 = 0 {\displaystyle \liminf _{n\to \infty }\left|x-{\frac {p_{n}}{q_{n}}}\right|q_{n}^{2}=0} Khinchin proved that the geometric mean of ai tends to a constant (known as Khinchin's constant): lim n → ∞ ( a 1 a 2 . . .
a n ) 1 / n = K 0 = 2.6854520010 … {\displaystyle \lim _{n\rightarrow \infty }\left(a_{1}a_{2}...a_{n}\right)^{1/n}=K_{0}=2.6854520010\dots } Paul Lévy proved that the nth root of the denominator of the nth convergent converges to Lévy's constant lim n → ∞ q n 1 / n = e π 2 / ( 12 ln ⁡ 2 ) = 3.2758 … {\displaystyle \lim _{n\rightarrow \infty }q_{n}^{1/n}=e^{\pi ^{2}/(12\ln 2)}=3.2758\ldots } Lochs' theorem states that the convergents converge exponentially at the rate of lim n → ∞ 1 n ln ⁡ | x − p n q n | = − π 2 6 ln ⁡ 2 {\displaystyle \lim _{n\to \infty }{\frac {1}{n}}\ln \left|x-{\frac {p_{n}}{q_{n}}}\right|=-{\frac {\pi ^{2}}{6\ln 2}}} == Applications == === Pell's equation === Continued fractions play an essential role in the solution of Pell's equation. For example, for positive integers p and q, and non-square n, it is true that if p2 − nq2 = ±1, then ⁠p/q⁠ is a convergent of the regular continued fraction for √n. The converse holds if the period of the regular continued fraction for √n is 1, and in general the period describes which convergents give solutions to Pell's equation. === Dynamical systems === Continued fractions also play a role in the study of dynamical systems, where they tie together the Farey fractions which are seen in the Mandelbrot set with Minkowski's question-mark function and the modular group Gamma. The backwards shift operator for continued fractions is the map h(x) = 1/x − ⌊1/x⌋ called the Gauss map, which lops off digits of a continued fraction expansion: h([0; a1, a2, a3, ...]) = [0; a2, a3, ...]. The transfer operator of this map is called the Gauss–Kuzmin–Wirsing operator. The distribution of the digits in continued fractions is given by the zeroth eigenvector of this operator, and is called the Gauss–Kuzmin distribution.
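The digit-lopping behavior of the Gauss map can be demonstrated directly. The following is a minimal Python sketch (the function name is ours): applying the map h(x) = 1/x − ⌊1/x⌋ to the fractional part of a number reads off one digit of its continued fraction expansion per step, and using exact rational arithmetic guarantees the iteration terminates for rational inputs.

```python
from fractions import Fraction

def gauss_map_digits(x):
    """Read off the continued fraction digits [a0; a1, a2, ...] of a
    rational x by repeatedly applying the Gauss map h(x) = 1/x - floor(1/x)."""
    a0 = x.numerator // x.denominator
    digits = [a0]
    x = x - a0                          # fractional part, in [0, 1)
    while x != 0:
        y = 1 / x
        a = y.numerator // y.denominator
        digits.append(a)                # h "lops off" this digit ...
        x = y - a                       # ... and keeps the fractional part
    return digits

print(gauss_map_digits(Fraction(649, 200)))  # [3, 4, 12, 4]
print(gauss_map_digits(Fraction(415, 93)))   # [4, 2, 6, 7]
```

Floating-point input would work for a few digits but quickly loses precision, which is why the sketch sticks to `Fraction`.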
== History == 300 BCE Euclid's Elements contains an algorithm for the greatest common divisor, whose modern version generates a continued fraction as the sequence of quotients of successive Euclidean divisions that occur in it. 499 The Aryabhatiya contains the solution of indeterminate equations using continued fractions 1572 Rafael Bombelli, L'Algebra Opera – method for the extraction of square roots which is related to continued fractions 1613 Pietro Cataldi, Trattato del modo brevissimo di trovar la radice quadra delli numeri – first notation for continued fractions Cataldi represented a continued fraction as a 0 {\displaystyle a_{0}} & n 1 d 1 ⋅ {\displaystyle {\frac {n_{1}}{d_{1}\cdot }}} & n 2 d 2 ⋅ {\displaystyle {\frac {n_{2}}{d_{2}\cdot }}} & n 3 d 3 ⋅ {\displaystyle {\frac {n_{3}}{d_{3}\cdot }}} with the dots indicating where the following fractions went. 1695 John Wallis, Opera Mathematica – introduction of the term "continued fraction" 1737 Leonhard Euler, De fractionibus continuis dissertatio – Provided the first then-comprehensive account of the properties of continued fractions, and included the first proof that the number e is irrational. 1748 Euler, Introductio in analysin infinitorum. Vol. I, Chapter 18 – proved the equivalence of a certain form of continued fraction and a generalized infinite series, proved that every rational number can be written as a finite continued fraction, and proved that the continued fraction of an irrational number is infinite. 1761 Johann Lambert – gave the first proof of the irrationality of π using a continued fraction for tan(x). 1768 Joseph-Louis Lagrange – provided the general solution to Pell's equation using continued fractions similar to Bombelli's 1770 Lagrange – proved that quadratic irrationals expand to periodic continued fractions. 1813 Carl Friedrich Gauss, Werke, Vol. 3, pp. 
134–138 – derived a very general complex-valued continued fraction via a clever identity involving the hypergeometric function 1892 Henri Padé defined Padé approximant 1972 Bill Gosper – First exact algorithms for continued fraction arithmetic. == See also == Complete quotient Computing continued fractions of square roots – Algorithms for calculating square roots Egyptian fraction – Finite sum of distinct unit fractions Engel expansion – decomposition of a positive real number into a series of unit fractions, each an integer multiple of the next one Euler's continued fraction formula – Connects a very general infinite series with an infinite continued fraction. Iterated binary operation – Repeated application of an operation to a sequence Klein polyhedron – Concept in the geometry of numbers Mathematical constants by continued fraction representation Restricted partial quotients – Analytic series Stern–Brocot tree – Ordered binary tree of rational numbers == Notes == == References == Bunder, Martin W.; Tonien, Joseph (2017). "Closed form expressions for two harmonic continued fractions". The Mathematical Gazette. 101 (552): 439–448. doi:10.1017/mag.2017.125. S2CID 125489697. Chen, Chen-Fan; Shieh, Leang-San (1969). "Continued fraction inversion by Routh's Algorithm". IEEE Trans. Circuit Theory. 16 (2): 197–202. doi:10.1109/TCT.1969.1082925. Collins, Darren C. (2001). "Continued Fractions" (PDF). MIT Undergraduate Journal of Mathematics. Archived from the original (PDF) on 2001-11-20. Cuyt, A.; Brevik Petersen, V.; Verdonk, B.; Waadeland, H.; Jones, W. B. (2008). Handbook of Continued fractions for Special functions. Springer Verlag. ISBN 978-1-4020-6948-2. Encyclopædia Britannica (2013). "Continued fraction – mathematics". Retrieved 26 April 2022. Euler, Leonhard (1748). "E101 – Introductio in analysin infinitorum, volume 1". The Euler Archive.
Retrieved 26 April 2022. Foster, Tony (22 June 2015). "Theorem of the Day: Theorem no. 203" (PDF). Robin Whitty. Archived (PDF) from the original on 2013-12-11. Retrieved 26 April 2022. Gragg, William B. (1974). "Matrix interpretations and applications of the continued fraction algorithm". Rocky Mountain J. Math. 4 (2): 213. doi:10.1216/RMJ-1974-4-2-213. S2CID 121378061. Hardy, Godfrey H.; Wright, Edward M. (December 2008) [1979]. An Introduction to the Theory of Numbers (6 ed.). Oxford University Press. ISBN 9780199219865. Heilermann, J. B. H. (1846). "Ueber die Verwandlung von Reihen in Kettenbrüche". Journal für die reine und angewandte Mathematik. 33: 174–188. Jones, William B.; Thron, W. J. (1980). Continued Fractions: Analytic Theory and Applications. Encyclopedia of Mathematics and its Applications. Vol. 11. Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 0-201-13510-8. Khinchin, A. Ya. (1964) [Originally published in Russian, 1935]. Continued Fractions. University of Chicago Press. ISBN 0-486-69630-8. Long, Calvin T. (December 1972). Elementary Introduction to Number Theory (2nd ed.). Lexington: D. C. Heath and Company. ISBN 9780669627039. Magnus, Arne (1962). "Continued fractions associated with the Padé Table". Math. Z. 78: 361–374. doi:10.1007/BF01195180. S2CID 120535167. Niven, Ivan; Zuckerman, Herbert S.; Montgomery, Hugh L. (1991). An introduction to the theory of numbers (Fifth ed.). New York: Wiley. ISBN 0-471-62546-9. Perron, Oskar (1950). Die Lehre von den Kettenbrüchen. New York, NY: Chelsea Publishing Company. Pettofrezzo, Anthony J.; Byrkit, Donald R. (December 1970). Elements of Number Theory. Englewood Cliffs: Prentice Hall. ISBN 9780132683005. Rieger, Georg Johann (1982). "A new approach to the real numbers (motivated by continued fractions)" (PDF). Abhandlungen der Braunschweigischen Wissenschaftlichen Gesellschaft. 33: 205–217. Archived (PDF) from the original on 2020-12-10.
Rockett, Andrew M.; Szüsz, Peter (1992). Continued Fractions. World Scientific Press. ISBN 981-02-1047-7. Sandifer, C. Edward (2006). "Chapter 32: Who proved e is irrational?". How Euler Did It (PDF). Mathematical Association of America. pp. 185–190. ISBN 978-0-88385-563-8. LCCN 2007927658. Archived (PDF) from the original on 2014-03-19. Scheinerman, Ed; Pickett, Thomas J.; Coleman, Ann (2008). "Another Continued Fraction for π". The American Mathematical Monthly. 115 (10): 930–933. doi:10.1080/00029890.2008.11920610. JSTOR 27642639. S2CID 11914017. Shoemake, Ken (1995). "I.4: Rational Approximation". In Paeth, Alan W. (ed.). Graphics Gems V (IBM Version). San Diego, California: Academic Press. pp. 25–31. ISBN 0-12-543455-3. Siebeck, H. (1846). "Ueber periodische Kettenbrüche". Journal für die reine und angewandte Mathematik. 33: 68–70. Thill, M. (2008). "A more precise rounding algorithm for rational numbers". Computing. 82 (2–3): 189–198. doi:10.1007/s00607-008-0006-7. S2CID 45166490. Thurston, Ben (2012). "Estimating square roots, generalized continued fraction expression for every square root". The Ben Paul Thurston Blog. Retrieved 26 April 2022. Wall, Hubert Stanley (1948). Analytic Theory of Continued Fractions. D. Van Nostrand Company, Inc. ISBN 0-8284-0207-8. Weisstein, Eric Wolfgang (2022). MathWorld (ed.). "Periodic Continued Fraction". Retrieved 26 April 2022. == External links == "Continued fraction", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Knott, Ron (2018). "Continued fractions (An online Combined Continued Fraction Calculator is available)". Retrieved 26 April 2022. Linas Vepstas Continued Fractions and Gaps (2004) reviews chaotic structures in continued fractions. Continued Fractions on the Stern-Brocot Tree at cut-the-knot The Antikythera Mechanism I: Gear ratios and continued fractions Archived 2009-05-04 at the Wayback Machine Continued fraction calculator, WIMS.
Continued Fraction Arithmetic Gosper's first continued fractions paper, unpublished. Cached on the Internet Archive's Wayback Machine Weisstein, Eric W. "Continued Fraction". MathWorld. Continued Fractions by Stephen Wolfram and Continued Fraction Approximations of the Tangent Function by Michael Trott, Wolfram Demonstrations Project. OEIS sequence A133593 ("Exact" continued fraction for pi) A view into "fractional interpolation" of a continued fraction {1; 1, 1, 1, ...} Best rational approximation through continued fractions CONTINUED FRACTIONS by C. D. Olds
Wikipedia:Simplicial Lie algebra#0
In algebra, a simplicial Lie algebra is a simplicial object in the category of Lie algebras. In particular, it is a simplicial abelian group, and thus is subject to the Dold–Kan correspondence. == See also == Differential graded Lie algebra == References == Quillen, Daniel (September 1969). "Rational homotopy theory". Annals of Mathematics. 2. 90 (2): 205–295. doi:10.2307/1970725. JSTOR 1970725. == External links == https://ncatlab.org/nlab/show/simplicial+Lie+algebra
Wikipedia:Sims conjecture#0
In mathematics, the Sims conjecture is a result in group theory, originally proposed by Charles Sims. He conjectured that if G {\displaystyle G} is a primitive permutation group on a finite set S {\displaystyle S} and G α {\displaystyle G_{\alpha }} denotes the stabilizer of the point α {\displaystyle \alpha } in S {\displaystyle S} , then there exists an integer-valued function f {\displaystyle f} such that f ( d ) ≥ | G α | {\displaystyle f(d)\geq |G_{\alpha }|} for d {\displaystyle d} the length of any orbit of G α {\displaystyle G_{\alpha }} in the set S ∖ { α } {\displaystyle S\setminus \{\alpha \}} . The conjecture was proven by Peter Cameron, Cheryl Praeger, Jan Saxl, and Gary Seitz using the classification of finite simple groups, in particular the fact that only finitely many isomorphism types of sporadic groups exist. Thus, in a primitive permutation group with "large" stabilizers, these stabilizers cannot have any small orbit. A consequence of their proof is that there exist only finitely many connected distance-transitive graphs having degree greater than 2. == References ==
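In symbols, the statement can be written as follows (a paraphrase of the description above, not a quotation of the original paper):

```latex
\textbf{Theorem (Cameron, Praeger, Saxl, Seitz).}
There is a function $f\colon \mathbb{N} \to \mathbb{N}$ such that,
whenever $G$ is a primitive permutation group on a finite set $S$,
$\alpha \in S$, and some orbit of the point stabilizer $G_{\alpha}$
on $S \setminus \{\alpha\}$ has length $d$, then
$|G_{\alpha}| \le f(d)$.
```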
Wikipedia:Sina Greenwood#0
Sina Ruth Greenwood is a New Zealand mathematician whose interests include continuum theory, discrete dynamical systems, inverse limits, set-valued analysis, and Volterra spaces. She is an associate professor of mathematics and Associate Dean Pacific in the faculty of science at the University of Auckland. == Education and career == Greenwood's parents emigrated from Samoa to Whanganui in New Zealand, shortly before Greenwood was born; they moved from there to Auckland when she was a child. She earned a bachelor's degree at the University of Auckland, and after some time in Australia became a secondary school teacher in Auckland. Returning to the University of Auckland for graduate study in mathematics, she earned a master's degree and then completed her PhD in 1999, under the joint supervision of David Gauld and David W. Mcintyre. Her dissertation was Nonmetrisable Manifolds. She and three other students who finished their doctorates at the same time became the first topologists to earn a doctorate at Auckland. After postdoctoral research, funded by a New Zealand Science and Technology Post-Doctoral Fellowship, she obtained a permanent position at the University of Auckland as a lecturer in 2004, later becoming an associate professor. Beyond mathematics, her work at the university has also included advocating for the interests of Pasifika and Māori students. == Recognition == Greenwood is a Fellow of the New Zealand Mathematical Society, elected in 2018. == References ==
Wikipedia:Singular matrix#0
In linear algebra, an invertible matrix (also called non-singular, non-degenerate, or regular) is a square matrix that has an inverse. In other words, if some other matrix is multiplied by the invertible matrix, the result can be multiplied by the inverse to undo the operation. An invertible matrix multiplied by its inverse yields the identity matrix. Invertible matrices are the same size as their inverse. == Definition == An n-by-n square matrix A is called invertible if there exists an n-by-n square matrix B such that A B = B A = I n , {\displaystyle \mathbf {AB} =\mathbf {BA} =\mathbf {I} _{n},} where In denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by A−1. Matrix inversion is the process of finding the matrix which, when multiplied by the original matrix, gives the identity matrix. Over a field, a square matrix that is not invertible is called singular or degenerate. A square matrix with entries in a field is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any bounded region on the number line or complex plane, the probability that the matrix is singular is 0; that is, it will "almost never" be singular. Non-square matrices, i.e. m-by-n matrices for which m ≠ n, do not have an inverse. However, in some cases such a matrix may have a left inverse or right inverse. If A is m-by-n and the rank of A is equal to n (where n ≤ m), then A has a left inverse: an n-by-m matrix B such that BA = In. If A has rank m (where m ≤ n), then it has a right inverse: an n-by-m matrix B such that AB = Im. While the most common case is that of matrices over the real or complex numbers, all of these definitions can be given for matrices over any algebraic structure equipped with addition and multiplication (i.e. rings).
However, in the case of a ring being commutative, the condition for a square matrix to be invertible is that its determinant is invertible in the ring, which in general is a stricter requirement than it being nonzero. For a noncommutative ring, the usual determinant is not defined. The conditions for existence of left-inverse or right-inverse are more complicated, since a notion of rank does not exist over rings. The set of n × n invertible matrices together with the operation of matrix multiplication and entries from ring R form a group, the general linear group of degree n, denoted GLn(R). == Properties == === Invertible matrix theorem === Let A be a square n-by-n matrix over a field K (e.g., the field ⁠ R {\displaystyle \mathbb {R} } ⁠ of real numbers). The following statements are equivalent, i.e., they are either all true or all false for any given matrix: A is invertible, i.e. it has an inverse under matrix multiplication, i.e., there exists a B such that AB = In = BA. (In that statement, "invertible" can equivalently be replaced with "left-invertible" or "right-invertible" in which one-sided inverses are considered.) The linear transformation mapping x to Ax is invertible, i.e., it has an inverse under function composition. (There, again, "invertible" can equivalently be replaced with either "left-invertible" or "right-invertible".) The transpose AT is an invertible matrix. A is row-equivalent to the n-by-n identity matrix In. A is column-equivalent to the n-by-n identity matrix In. A has n pivot positions. A has full rank: rank A = n. A has a trivial kernel: ker(A) = {0}. The linear transformation mapping x to Ax is bijective; that is, the equation Ax = b has exactly one solution for each b in Kn. (There, "bijective" can equivalently be replaced with "injective" or "surjective".) The columns of A form a basis of Kn. 
(In this statement, "basis" can equivalently be replaced with either "linearly independent set" or "spanning set") The rows of A form a basis of Kn. (Similarly, here, "basis" can equivalently be replaced with either "linearly independent set" or "spanning set") The determinant of A is nonzero: det A ≠ 0. In general, a square matrix over a commutative ring is invertible if and only if its determinant is a unit (i.e. multiplicatively invertible element) of that ring. The number 0 is not an eigenvalue of A. (More generally, a number λ {\displaystyle \lambda } is an eigenvalue of A if the matrix A − λ I {\displaystyle \mathbf {A} -\lambda \mathbf {I} } is singular, where I is the identity matrix.) The matrix A can be expressed as a finite product of elementary matrices. === Other properties === Furthermore, the following properties hold for an invertible matrix A: ( A − 1 ) − 1 = A {\displaystyle (\mathbf {A} ^{-1})^{-1}=\mathbf {A} } ( k A ) − 1 = k − 1 A − 1 {\displaystyle (k\mathbf {A} )^{-1}=k^{-1}\mathbf {A} ^{-1}} for nonzero scalar k ( A x ) + = x + A − 1 {\displaystyle (\mathbf {Ax} )^{+}=\mathbf {x} ^{+}\mathbf {A} ^{-1}} if A has orthonormal columns, where + denotes the Moore–Penrose inverse and x is a vector ( A T ) − 1 = ( A − 1 ) T {\displaystyle (\mathbf {A} ^{\mathrm {T} })^{-1}=(\mathbf {A} ^{-1})^{\mathrm {T} }} For any invertible n-by-n matrices A and B, ( A B ) − 1 = B − 1 A − 1 . {\displaystyle (\mathbf {AB} )^{-1}=\mathbf {B} ^{-1}\mathbf {A} ^{-1}.} More generally, if A 1 , … , A k {\displaystyle \mathbf {A} _{1},\dots ,\mathbf {A} _{k}} are invertible n-by-n matrices, then ( A 1 A 2 ⋯ A k − 1 A k ) − 1 = A k − 1 A k − 1 − 1 ⋯ A 2 − 1 A 1 − 1 . {\displaystyle (\mathbf {A} _{1}\mathbf {A} _{2}\cdots \mathbf {A} _{k-1}\mathbf {A} _{k})^{-1}=\mathbf {A} _{k}^{-1}\mathbf {A} _{k-1}^{-1}\cdots \mathbf {A} _{2}^{-1}\mathbf {A} _{1}^{-1}.} det A − 1 = ( det A ) − 1 . 
{\displaystyle \det \mathbf {A} ^{-1}=(\det \mathbf {A} )^{-1}.} The rows of the inverse matrix V of a matrix U are orthonormal to the columns of U (and vice versa interchanging rows for columns). To see this, suppose that UV = VU = I where the rows of V are denoted as v i T {\displaystyle v_{i}^{\mathrm {T} }} and the columns of U as u j {\displaystyle u_{j}} for 1 ≤ i , j ≤ n . {\displaystyle 1\leq i,j\leq n.} Then clearly, the Euclidean inner product of any two v i T u j = δ i , j . {\displaystyle v_{i}^{\mathrm {T} }u_{j}=\delta _{i,j}.} This property can also be useful in constructing the inverse of a square matrix in some instances, where a set of orthogonal vectors (but not necessarily orthonormal vectors) to the columns of U are known. In which case, one can apply the iterative Gram–Schmidt process to this initial set to determine the rows of the inverse V. A matrix that is its own inverse (i.e., a matrix A such that A = A−1 and consequently A2 = I) is called an involutory matrix. === In relation to its adjugate === The adjugate of a matrix A can be used to find the inverse of A as follows: If A is an invertible matrix, then A − 1 = 1 det ( A ) adj ⁡ ( A ) . {\displaystyle \mathbf {A} ^{-1}={\frac {1}{\det(\mathbf {A} )}}\operatorname {adj} (\mathbf {A} ).} === In relation to the identity matrix === It follows from the associativity of matrix multiplication that if A B = I {\displaystyle \mathbf {AB} =\mathbf {I} \ } for finite square matrices A and B, then also B A = I {\displaystyle \mathbf {BA} =\mathbf {I} \ } === Density === Over the field of real numbers, the set of singular n-by-n matrices, considered as a subset of ⁠ R n × n , {\displaystyle \mathbb {R} ^{n\times n},} ⁠ is a null set, that is, has Lebesgue measure zero. That is true because singular matrices are the roots of the determinant function. It is a continuous function because it is a polynomial in the entries of the matrix. 
Thus in the language of measure theory, almost all n-by-n matrices are invertible. Furthermore, the set of n-by-n invertible matrices is open and dense in the topological space of all n-by-n matrices. Equivalently, the set of singular matrices is closed and nowhere dense in the space of n-by-n matrices. In practice, however, non-invertible matrices may be encountered. In numerical calculations, matrices that are invertible but close to a non-invertible matrix may still be problematic and are said to be ill-conditioned. == Examples == This example with rank of n − 1 is a non-invertible matrix: A = ( 2 4 2 4 ) . {\displaystyle \mathbf {A} ={\begin{pmatrix}2&4\\2&4\end{pmatrix}}.} We can see the rank of this 2-by-2 matrix is 1, which is n − 1 ≠ n, so it is non-invertible. Consider the following 2-by-2 matrix: B = ( − 1 3 2 1 − 1 ) . {\displaystyle \mathbf {B} ={\begin{pmatrix}-1&{\tfrac {3}{2}}\\1&-1\end{pmatrix}}.} The matrix B {\displaystyle \mathbf {B} } is invertible. To check this, one can compute that det B = − 1 2 {\textstyle \det \mathbf {B} =-{\frac {1}{2}}} , which is non-zero. As an example of a non-invertible, or singular, matrix, consider: C = ( − 1 3 2 2 3 − 1 ) . {\displaystyle \mathbf {C} ={\begin{pmatrix}-1&{\tfrac {3}{2}}\\{\tfrac {2}{3}}&-1\end{pmatrix}}.} The determinant of C {\displaystyle \mathbf {C} } is 0, which is a necessary and sufficient condition for a matrix to be non-invertible. == Methods of matrix inversion == === Gaussian elimination === Gaussian elimination is a useful and easy way to compute the inverse of a matrix. To compute a matrix inverse using this method, an augmented matrix is first created with the left side being the matrix to invert and the right side being the identity matrix. Then, Gaussian elimination is used to convert the left side into the identity matrix, which causes the right side to become the inverse of the input matrix. For example, take the following matrix: A = ( − 1 3 2 1 − 1 ) . 
{\displaystyle \mathbf {A} ={\begin{pmatrix}-1&{\tfrac {3}{2}}\\1&-1\end{pmatrix}}.} The first step to compute its inverse is to create the augmented matrix ( − 1 3 2 1 0 1 − 1 0 1 ) . {\displaystyle \left(\!\!{\begin{array}{cc|cc}-1&{\tfrac {3}{2}}&1&0\\1&-1&0&1\end{array}}\!\!\right).} Call the first row of this matrix R 1 {\displaystyle R_{1}} and the second row R 2 {\displaystyle R_{2}} . Then, add row 1 to row 2 ( R 1 + R 2 → R 2 ) . {\displaystyle (R_{1}+R_{2}\to R_{2}).} This yields ( − 1 3 2 1 0 0 1 2 1 1 ) . {\displaystyle \left(\!\!{\begin{array}{cc|cc}-1&{\tfrac {3}{2}}&1&0\\0&{\tfrac {1}{2}}&1&1\end{array}}\!\!\right).} Next, subtract row 2, multiplied by 3, from row 1 ( R 1 − 3 R 2 → R 1 ) , {\displaystyle (R_{1}-3\,R_{2}\to R_{1}),} which yields ( − 1 0 − 2 − 3 0 1 2 1 1 ) . {\displaystyle \left(\!\!{\begin{array}{cc|cc}-1&0&-2&-3\\0&{\tfrac {1}{2}}&1&1\end{array}}\!\!\right).} Finally, multiply row 1 by −1 ( − R 1 → R 1 ) {\displaystyle (-R_{1}\to R_{1})} and row 2 by 2 ( 2 R 2 → R 2 ) . {\displaystyle (2\,R_{2}\to R_{2}).} This yields the identity matrix on the left side and the inverse matrix on the right: ( 1 0 2 3 0 1 2 2 ) . {\displaystyle \left(\!\!{\begin{array}{cc|cc}1&0&2&3\\0&1&2&2\end{array}}\!\!\right).} Thus, A − 1 = ( 2 3 2 2 ) . {\displaystyle \mathbf {A} ^{-1}={\begin{pmatrix}2&3\\2&2\end{pmatrix}}.} It works because the process of Gaussian elimination can be viewed as a sequence of applying left matrix multiplication using elementary row operations using elementary matrices ( E n {\displaystyle \mathbf {E} _{n}} ), such as E n E n − 1 ⋯ E 2 E 1 A = I . {\displaystyle \mathbf {E} _{n}\mathbf {E} _{n-1}\cdots \mathbf {E} _{2}\mathbf {E} _{1}\mathbf {A} =\mathbf {I} .} Applying right-multiplication using A − 1 , {\displaystyle \mathbf {A} ^{-1},} we get E n E n − 1 ⋯ E 2 E 1 I = I A − 1 . 
{\displaystyle \mathbf {E} _{n}\mathbf {E} _{n-1}\cdots \mathbf {E} _{2}\mathbf {E} _{1}\mathbf {I} =\mathbf {I} \mathbf {A} ^{-1}.} And the right side I A − 1 = A − 1 , {\displaystyle \mathbf {I} \mathbf {A} ^{-1}=\mathbf {A} ^{-1},} which is the inverse we want. To obtain E n E n − 1 ⋯ E 2 E 1 I , {\displaystyle \mathbf {E} _{n}\mathbf {E} _{n-1}\cdots \mathbf {E} _{2}\mathbf {E} _{1}\mathbf {I} ,} we create the augmented matrix by combining A with I and applying Gaussian elimination. The two portions are transformed by the same sequence of elementary row operations, so when the left portion becomes I, the right portion will have become A−1. === Newton's method === A generalization of Newton's method as used for a multiplicative inverse algorithm may be convenient if a suitable starting seed can be found: X k + 1 = 2 X k − X k A X k . {\displaystyle X_{k+1}=2X_{k}-X_{k}AX_{k}.} Victor Pan and John Reif have done work that includes ways of generating a starting seed. Newton's method is particularly useful when dealing with families of related matrices: a good starting point for refining an approximation of a new inverse can be the already obtained inverse of a previous matrix that nearly matches the current matrix, as in the pair of sequences of inverse matrices used in obtaining matrix square roots by Denman–Beavers iteration. This may need more than one pass of the iteration at each new matrix if the matrices are not close enough together for a single pass to suffice. Newton's method is also useful for "touch up" corrections to the Gauss–Jordan algorithm when it has been contaminated by small errors from imperfect computer arithmetic.
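The iteration above can be sketched in a few lines of pure Python. Note that the seed X0 = I/4 below (the identity scaled by the reciprocal of the row-sum norm of this particular matrix) is an ad-hoc choice that happens to work for this positive-definite example, not a general-purpose recipe:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def newton_inverse(A, X, steps=20):
    """Iterate X_{k+1} = 2 X_k - X_k A X_k, which converges (quadratically)
    to A^{-1} when the spectral radius of I - X_0 A is below 1."""
    n = len(A)
    for _ in range(steps):
        XAX = matmul(matmul(X, A), X)
        X = [[2 * X[i][j] - XAX[i][j] for j in range(n)] for i in range(n)]
    return X

A = [[2.0, 1.0], [1.0, 3.0]]
X0 = [[0.25, 0.0], [0.0, 0.25]]     # ad-hoc seed for this matrix
X = newton_inverse(A, X0)
print([[round(v, 6) for v in row] for row in X])  # [[0.6, -0.2], [-0.2, 0.4]]
```

Here A has determinant 5, so A−1 = (1/5)[[3, −1], [−1, 2]], matching the printed result; each iteration roughly doubles the number of correct digits.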
=== Cayley–Hamilton method === The Cayley–Hamilton theorem allows the inverse of A to be expressed in terms of det(A), traces and powers of A: A − 1 = 1 det ( A ) ∑ s = 0 n − 1 A s ∑ k 1 , k 2 , … , k n − 1 ∏ l = 1 n − 1 ( − 1 ) k l + 1 l k l k l ! tr ⁡ ( A l ) k l , {\displaystyle \mathbf {A} ^{-1}={\frac {1}{\det(\mathbf {A} )}}\sum _{s=0}^{n-1}\mathbf {A} ^{s}\sum _{k_{1},k_{2},\ldots ,k_{n-1}}\prod _{l=1}^{n-1}{\frac {(-1)^{k_{l}+1}}{l^{k_{l}}k_{l}!}}\operatorname {tr} \left(\mathbf {A} ^{l}\right)^{k_{l}},} where n is size of A, and tr(A) is the trace of matrix A given by the sum of the main diagonal. The sum is taken over s and the sets of all k l ≥ 0 {\displaystyle k_{l}\geq 0} satisfying the linear Diophantine equation s + ∑ l = 1 n − 1 l k l = n − 1. {\displaystyle s+\sum _{l=1}^{n-1}lk_{l}=n-1.} The formula can be rewritten in terms of complete Bell polynomials of arguments t l = − ( l − 1 ) ! tr ⁡ ( A l ) {\displaystyle t_{l}=-(l-1)!\operatorname {tr} \left(A^{l}\right)} as A − 1 = 1 det ( A ) ∑ s = 1 n A s − 1 ( − 1 ) n − 1 ( n − s ) ! B n − s ( t 1 , t 2 , … , t n − s ) . {\displaystyle \mathbf {A} ^{-1}={\frac {1}{\det(\mathbf {A} )}}\sum _{s=1}^{n}\mathbf {A} ^{s-1}{\frac {(-1)^{n-1}}{(n-s)!}}B_{n-s}(t_{1},t_{2},\ldots ,t_{n-s}).} That is described in more detail under Cayley–Hamilton method. === Eigendecomposition === If matrix A can be eigendecomposed, and if none of its eigenvalues are zero, then A is invertible and its inverse is given by A − 1 = Q Λ − 1 Q − 1 , {\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1},} where Q is the square (N × N) matrix whose ith column is the eigenvector q i {\displaystyle q_{i}} of A, and Λ is the diagonal matrix whose diagonal entries are the corresponding eigenvalues, that is, Λ i i = λ i . {\displaystyle \Lambda _{ii}=\lambda _{i}.} If A is symmetric, Q is guaranteed to be an orthogonal matrix, therefore Q − 1 = Q T . 
{\displaystyle \mathbf {Q} ^{-1}=\mathbf {Q} ^{\mathrm {T} }.} Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate: [ Λ − 1 ] i i = 1 λ i . {\displaystyle \left[\Lambda ^{-1}\right]_{ii}={\frac {1}{\lambda _{i}}}.} === Cholesky decomposition === If matrix A is positive definite, then its inverse can be obtained as A − 1 = ( L ∗ ) − 1 L − 1 , {\displaystyle \mathbf {A} ^{-1}=\left(\mathbf {L} ^{*}\right)^{-1}\mathbf {L} ^{-1},} where L is the lower triangular Cholesky decomposition of A, and L* denotes the conjugate transpose of L. === Analytic solution === Writing the transpose of the matrix of cofactors, known as an adjugate matrix, may also be an efficient way to calculate the inverse of small matrices, but the recursive method is inefficient for large matrices. To determine the inverse, we calculate a matrix of cofactors: A − 1 = 1 | A | C T = 1 | A | ( C 11 C 21 ⋯ C n 1 C 12 C 22 ⋯ C n 2 ⋮ ⋮ ⋱ ⋮ C 1 n C 2 n ⋯ C n n ) {\displaystyle \mathbf {A} ^{-1}={1 \over {\begin{vmatrix}\mathbf {A} \end{vmatrix}}}\mathbf {C} ^{\mathrm {T} }={1 \over {\begin{vmatrix}\mathbf {A} \end{vmatrix}}}{\begin{pmatrix}\mathbf {C} _{11}&\mathbf {C} _{21}&\cdots &\mathbf {C} _{n1}\\\mathbf {C} _{12}&\mathbf {C} _{22}&\cdots &\mathbf {C} _{n2}\\\vdots &\vdots &\ddots &\vdots \\\mathbf {C} _{1n}&\mathbf {C} _{2n}&\cdots &\mathbf {C} _{nn}\\\end{pmatrix}}} so that ( A − 1 ) i j = 1 | A | ( C T ) i j = 1 | A | ( C j i ) {\displaystyle \left(\mathbf {A} ^{-1}\right)_{ij}={1 \over {\begin{vmatrix}\mathbf {A} \end{vmatrix}}}\left(\mathbf {C} ^{\mathrm {T} }\right)_{ij}={1 \over {\begin{vmatrix}\mathbf {A} \end{vmatrix}}}\left(\mathbf {C} _{ji}\right)} where |A| is the determinant of A, C is the matrix of cofactors, and CT represents the matrix transpose. ==== Inversion of 2 × 2 matrices ==== The cofactor equation listed above yields the following result for 2 × 2 matrices. 
Inversion of these matrices can be done as follows: A − 1 = [ a b c d ] − 1 = 1 det A [ d − b − c a ] = 1 a d − b c [ d − b − c a ] . {\displaystyle \mathbf {A} ^{-1}={\begin{bmatrix}a&b\\c&d\\\end{bmatrix}}^{-1}={\frac {1}{\det \mathbf {A} }}{\begin{bmatrix}\,\,\,d&\!\!-b\\-c&\,a\\\end{bmatrix}}={\frac {1}{ad-bc}}{\begin{bmatrix}\,\,\,d&\!\!-b\\-c&\,a\\\end{bmatrix}}.} This is possible because 1/(ad − bc) is the reciprocal of the determinant of the matrix in question, and the same strategy could be used for other matrix sizes. The Cayley–Hamilton method gives A − 1 = 1 det A [ ( tr ⁡ A ) I − A ] . {\displaystyle \mathbf {A} ^{-1}={\frac {1}{\det \mathbf {A} }}\left[\left(\operatorname {tr} \mathbf {A} \right)\mathbf {I} -\mathbf {A} \right].} ==== Inversion of 3 × 3 matrices ==== A computationally efficient 3 × 3 matrix inversion is given by A − 1 = [ a b c d e f g h i ] − 1 = 1 det ( A ) [ A B C D E F G H I ] T = 1 det ( A ) [ A D G B E H C F I ] {\displaystyle \mathbf {A} ^{-1}={\begin{bmatrix}a&b&c\\d&e&f\\g&h&i\\\end{bmatrix}}^{-1}={\frac {1}{\det(\mathbf {A} )}}{\begin{bmatrix}\,A&\,B&\,C\\\,D&\,E&\,F\\\,G&\,H&\,I\\\end{bmatrix}}^{\mathrm {T} }={\frac {1}{\det(\mathbf {A} )}}{\begin{bmatrix}\,A&\,D&\,G\\\,B&\,E&\,H\\\,C&\,F&\,I\\\end{bmatrix}}} (where the scalar A is not to be confused with the matrix A). If the determinant is non-zero, the matrix is invertible, with the entries of the intermediary matrix on the right side above given by A = ( e i − f h ) , D = − ( b i − c h ) , G = ( b f − c e ) , B = − ( d i − f g ) , E = ( a i − c g ) , H = − ( a f − c d ) , C = ( d h − e g ) , F = − ( a h − b g ) , I = ( a e − b d ) . 
{\displaystyle {\begin{alignedat}{6}A&={}&(ei-fh),&\quad &D&={}&-(bi-ch),&\quad &G&={}&(bf-ce),\\B&={}&-(di-fg),&\quad &E&={}&(ai-cg),&\quad &H&={}&-(af-cd),\\C&={}&(dh-eg),&\quad &F&={}&-(ah-bg),&\quad &I&={}&(ae-bd).\\\end{alignedat}}} The determinant of A can be computed by applying the rule of Sarrus as follows: det ( A ) = a A + b B + c C . {\displaystyle \det(\mathbf {A} )=aA+bB+cC.} The Cayley–Hamilton decomposition gives A − 1 = 1 det ( A ) ( 1 2 [ ( tr ⁡ A ) 2 − tr ⁡ ( A 2 ) ] I − A tr ⁡ A + A 2 ) . {\displaystyle \mathbf {A} ^{-1}={\frac {1}{\det(\mathbf {A} )}}\left({\tfrac {1}{2}}\left[(\operatorname {tr} \mathbf {A} )^{2}-\operatorname {tr} (\mathbf {A} ^{2})\right]\mathbf {I} -\mathbf {A} \operatorname {tr} \mathbf {A} +\mathbf {A} ^{2}\right).} The general 3 × 3 inverse can be expressed concisely in terms of the cross product and triple product. If a matrix A = [ x 0 x 1 x 2 ] {\displaystyle \mathbf {A} ={\begin{bmatrix}\mathbf {x} _{0}&\mathbf {x} _{1}&\mathbf {x} _{2}\end{bmatrix}}} (consisting of three column vectors, x 0 {\displaystyle \mathbf {x} _{0}} , x 1 {\displaystyle \mathbf {x} _{1}} , and x 2 {\displaystyle \mathbf {x} _{2}} ) is invertible, its inverse is given by A − 1 = 1 det ( A ) [ ( x 1 × x 2 ) T ( x 2 × x 0 ) T ( x 0 × x 1 ) T ] . {\displaystyle \mathbf {A} ^{-1}={\frac {1}{\det(\mathbf {A} )}}{\begin{bmatrix}{(\mathbf {x} _{1}\times \mathbf {x} _{2})}^{\mathrm {T} }\\{(\mathbf {x} _{2}\times \mathbf {x} _{0})}^{\mathrm {T} }\\{(\mathbf {x} _{0}\times \mathbf {x} _{1})}^{\mathrm {T} }\end{bmatrix}}.} The determinant of A, det(A), is equal to the triple product of x0, x1, and x2—the volume of the parallelepiped formed by the rows or columns: det ( A ) = x 0 ⋅ ( x 1 × x 2 ) . 
{\displaystyle \det(\mathbf {A} )=\mathbf {x} _{0}\cdot (\mathbf {x} _{1}\times \mathbf {x} _{2}).} The correctness of the formula can be checked by using cross- and triple-product properties and by noting that for groups, left and right inverses always coincide. Intuitively, because of the cross products, each row of A–1 is orthogonal to the non-corresponding two columns of A (causing the off-diagonal terms of I = A − 1 A {\displaystyle \mathbf {I} =\mathbf {A} ^{-1}\mathbf {A} } to be zero). Dividing by det ( A ) = x 0 ⋅ ( x 1 × x 2 ) {\displaystyle \det(\mathbf {A} )=\mathbf {x} _{0}\cdot (\mathbf {x} _{1}\times \mathbf {x} _{2})} causes the diagonal entries of I = A−1A to be unity. For example, the first diagonal is: 1 = 1 x 0 ⋅ ( x 1 × x 2 ) x 0 ⋅ ( x 1 × x 2 ) . {\displaystyle 1={\frac {1}{\mathbf {x_{0}} \cdot (\mathbf {x} _{1}\times \mathbf {x} _{2})}}\mathbf {x_{0}} \cdot (\mathbf {x} _{1}\times \mathbf {x} _{2}).} ==== Inversion of 4 × 4 matrices ==== With increasing dimension, expressions for the inverse of A get complicated. For n = 4, the Cayley–Hamilton method leads to an expression that is still tractable: A − 1 = 1 det ( A ) ( 1 6 ( ( tr ⁡ A ) 3 − 3 tr ⁡ A tr ⁡ ( A 2 ) + 2 tr ⁡ ( A 3 ) ) I − 1 2 A ( ( tr ⁡ A ) 2 − tr ⁡ ( A 2 ) ) + A 2 tr ⁡ A − A 3 ) . 
{\displaystyle {\begin{aligned}\mathbf {A} ^{-1}={\frac {1}{\det(\mathbf {A} )}}{\Bigl (}&{\tfrac {1}{6}}{\bigl (}(\operatorname {tr} \mathbf {A} )^{3}-3\operatorname {tr} \mathbf {A} \operatorname {tr} (\mathbf {A} ^{2})+2\operatorname {tr} (\mathbf {A} ^{3}){\bigr )}\mathbf {I} \\[-3mu]&\ \ \ -{\tfrac {1}{2}}\mathbf {A} {\bigl (}(\operatorname {tr} \mathbf {A} )^{2}-\operatorname {tr} (\mathbf {A} ^{2}){\bigr )}+\mathbf {A} ^{2}\operatorname {tr} \mathbf {A} -\mathbf {A} ^{3}{\Bigr )}.\end{aligned}}} === Blockwise inversion === Let M = [ A B C D ] {\displaystyle \mathbf {M} ={\begin{bmatrix}\mathbf {A} &\mathbf {B} \\\mathbf {C} &\mathbf {D} \end{bmatrix}}} where A, B, C and D are matrix sub-blocks of arbitrary size and M / A := D − C A − 1 B {\displaystyle \mathbf {M} /\mathbf {A} :=\mathbf {D} -\mathbf {C} \mathbf {A} ^{-1}\mathbf {B} } is the Schur complement of A. (A must be square, so that it can be inverted. Furthermore, A and D − CA−1B must be nonsingular.) Matrices can also be inverted blockwise by using the analytic inversion formula: {\displaystyle \mathbf {M} ^{-1}={\begin{bmatrix}\mathbf {A} ^{-1}+\mathbf {A} ^{-1}\mathbf {B} (\mathbf {M} /\mathbf {A} )^{-1}\mathbf {C} \mathbf {A} ^{-1}&-\mathbf {A} ^{-1}\mathbf {B} (\mathbf {M} /\mathbf {A} )^{-1}\\-(\mathbf {M} /\mathbf {A} )^{-1}\mathbf {C} \mathbf {A} ^{-1}&(\mathbf {M} /\mathbf {A} )^{-1}\end{bmatrix}}\qquad (1)} The strategy is particularly advantageous if A is diagonal and M / A is a small matrix, since they are the only matrices requiring inversion. This technique was reinvented several times by Hans Boltz (1923), who used it for the inversion of geodetic matrices, and Tadeusz Banachiewicz (1937), who generalized it and proved its correctness. The nullity theorem says that the nullity of A equals the nullity of the sub-block in the lower right of the inverse matrix, and that the nullity of B equals the nullity of the sub-block in the upper right of the inverse matrix. The inversion procedure that led to Equation (1) performed matrix block operations that operated on C and D first. 
Instead, if A and B are operated on first, and provided D and M / D := A − BD−1C are nonsingular, the result is {\displaystyle \mathbf {M} ^{-1}={\begin{bmatrix}(\mathbf {M} /\mathbf {D} )^{-1}&-(\mathbf {M} /\mathbf {D} )^{-1}\mathbf {B} \mathbf {D} ^{-1}\\-\mathbf {D} ^{-1}\mathbf {C} (\mathbf {M} /\mathbf {D} )^{-1}&\mathbf {D} ^{-1}+\mathbf {D} ^{-1}\mathbf {C} (\mathbf {M} /\mathbf {D} )^{-1}\mathbf {B} \mathbf {D} ^{-1}\end{bmatrix}}\qquad (2)} Equating the upper-left sub-matrices of Equations (1) and (2) leads to {\displaystyle \left(\mathbf {A} -\mathbf {B} \mathbf {D} ^{-1}\mathbf {C} \right)^{-1}=\mathbf {A} ^{-1}+\mathbf {A} ^{-1}\mathbf {B} \left(\mathbf {D} -\mathbf {C} \mathbf {A} ^{-1}\mathbf {B} \right)^{-1}\mathbf {C} \mathbf {A} ^{-1}\qquad (3)} where Equation (3) is the Woodbury matrix identity, which is equivalent to the binomial inverse theorem. If A and D are both invertible, then the above two block matrix inverses can be combined to provide the simple factorization {\displaystyle \mathbf {M} ^{-1}={\begin{bmatrix}\left(\mathbf {A} -\mathbf {B} \mathbf {D} ^{-1}\mathbf {C} \right)^{-1}&\mathbf {0} \\\mathbf {0} &\left(\mathbf {D} -\mathbf {C} \mathbf {A} ^{-1}\mathbf {B} \right)^{-1}\end{bmatrix}}{\begin{bmatrix}\mathbf {I} &-\mathbf {B} \mathbf {D} ^{-1}\\-\mathbf {C} \mathbf {A} ^{-1}&\mathbf {I} \end{bmatrix}}.} By the Weinstein–Aronszajn identity, one of the two matrices in the block-diagonal matrix is invertible exactly when the other is. This formula simplifies significantly when the upper right block matrix B is the zero matrix. This formulation is useful when the matrices A and D have relatively simple inverse formulas (or pseudoinverses in the case where the blocks are not all square). In this special case, the block matrix inversion formula stated in full generality above becomes [ A 0 C D ] − 1 = [ A − 1 0 − D − 1 C A − 1 D − 1 ] . {\displaystyle {\begin{bmatrix}\mathbf {A} &\mathbf {0} \\\mathbf {C} &\mathbf {D} \end{bmatrix}}^{-1}={\begin{bmatrix}\mathbf {A} ^{-1}&\mathbf {0} \\-\mathbf {D} ^{-1}\mathbf {CA} ^{-1}&\mathbf {D} ^{-1}\end{bmatrix}}.} If the given invertible matrix is a symmetric matrix with invertible block A, the following block inverse formula holds: {\displaystyle \mathbf {M} ^{-1}={\begin{bmatrix}\mathbf {A} ^{-1}+\mathbf {A} ^{-1}\mathbf {C} ^{T}\mathbf {S} ^{-1}\mathbf {C} \mathbf {A} ^{-1}&-\mathbf {A} ^{-1}\mathbf {C} ^{T}\mathbf {S} ^{-1}\\-\mathbf {S} ^{-1}\mathbf {C} \mathbf {A} ^{-1}&\mathbf {S} ^{-1}\end{bmatrix}}} where S = D − C A − 1 C T {\displaystyle \mathbf {S} =\mathbf {D} -\mathbf {C} \mathbf {A} ^{-1}\mathbf {C} ^{T}} . 
This requires 2 inversions of the half-sized matrices A and S and only 4 multiplications of half-sized matrices, if organized properly: W 1 = C A − 1 , W 2 = W 1 C T = C A − 1 C T , W 3 = S − 1 W 1 = S − 1 C A − 1 , W 4 = W 1 T W 3 = A − 1 C T S − 1 C A − 1 , {\displaystyle {\begin{aligned}\mathbf {W} _{1}&=\mathbf {C} \mathbf {A} ^{-1},\\[3mu]\mathbf {W} _{2}&=\mathbf {W} _{1}\mathbf {C} ^{T}=\mathbf {C} \mathbf {A} ^{-1}\mathbf {C} ^{T},\\[3mu]\mathbf {W} _{3}&=\mathbf {S} ^{-1}\mathbf {W} _{1}=\mathbf {S} ^{-1}\mathbf {C} \mathbf {A} ^{-1},\\[3mu]\mathbf {W} _{4}&=\mathbf {W} _{1}^{T}\mathbf {W} _{3}=\mathbf {A} ^{-1}\mathbf {C} ^{T}\mathbf {S} ^{-1}\mathbf {C} \mathbf {A} ^{-1},\end{aligned}}} together with some additions, subtractions, negations and transpositions of negligible complexity. Any matrix M {\displaystyle \mathbf {M} } has an associated positive semidefinite, symmetric matrix M T M {\displaystyle \mathbf {M} ^{T}\mathbf {M} } , which is invertible (and positive definite) if and only if M {\displaystyle \mathbf {M} } is invertible. By writing M − 1 = ( M T M ) − 1 M T {\displaystyle \mathbf {M} ^{-1}=\left(\mathbf {M} ^{T}\mathbf {M} \right)^{-1}\mathbf {M} ^{T}} matrix inversion can be reduced to inverting symmetric matrices and 2 additional matrix multiplications, because the positive definite matrix M T M {\displaystyle \mathbf {M} ^{T}\mathbf {M} } satisfies the invertibility condition for its left upper block A. Those formulas together allow one to construct a divide and conquer algorithm that uses blockwise inversion of associated symmetric matrices to invert a matrix with the same time complexity as the matrix multiplication algorithm that is used internally. Research into matrix multiplication complexity shows that there exist matrix multiplication algorithms with a complexity of O(n^2.371552) operations, while the best proven lower bound is Ω(n^2 log n). 
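The four-multiplication scheme above can be checked numerically. A minimal sketch, assuming NumPy and an illustrative randomly generated symmetric positive definite matrix split into half-sized blocks:

```python
import numpy as np

# Illustrative data: a symmetric positive definite matrix M, partitioned into
# half-sized blocks A (upper left), C (lower left), D (lower right).
rng = np.random.default_rng(0)
n = 4
G = rng.standard_normal((2 * n, 2 * n))
M = G @ G.T + 2 * n * np.eye(2 * n)      # symmetric, positive definite

A, C, D = M[:n, :n], M[n:, :n], M[n:, n:]

Ainv = np.linalg.inv(A)
W1 = C @ Ainv                  # C A^{-1}
W2 = W1 @ C.T                  # C A^{-1} C^T
S = D - W2                     # Schur complement S = D - C A^{-1} C^T
Sinv = np.linalg.inv(S)
W3 = Sinv @ W1                 # S^{-1} C A^{-1}
W4 = W1.T @ W3                 # A^{-1} C^T S^{-1} C A^{-1}

# Assemble the block inverse of the symmetric matrix M.
Minv = np.block([[Ainv + W4, -W3.T],
                 [-W3,       Sinv]])

assert np.allclose(Minv @ M, np.eye(2 * n))
```

Only A and S are inverted, and only the four products W1 through W4 involve full half-sized matrix multiplications; the assembly consists of additions, negations, and transpositions.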
=== By Neumann series === If a matrix A has the property that lim n → ∞ ( I − A ) n = 0 {\displaystyle \lim _{n\to \infty }(\mathbf {I} -\mathbf {A} )^{n}=0} then A is nonsingular and its inverse may be expressed by a Neumann series: A − 1 = ∑ n = 0 ∞ ( I − A ) n . {\displaystyle \mathbf {A} ^{-1}=\sum _{n=0}^{\infty }(\mathbf {I} -\mathbf {A} )^{n}.} Truncating the sum results in an "approximate" inverse which may be useful as a preconditioner. The computation of a truncated series can be accelerated exponentially, because the Neumann series is a geometric sum. As such, it satisfies ∑ n = 0 2 L − 1 ( I − A ) n = ∏ l = 0 L − 1 ( I + ( I − A ) 2 l ) {\displaystyle \sum _{n=0}^{2^{L}-1}(\mathbf {I} -\mathbf {A} )^{n}=\prod _{l=0}^{L-1}\left(\mathbf {I} +(\mathbf {I} -\mathbf {A} )^{2^{l}}\right)} . Therefore, only 2L − 2 matrix multiplications are needed to compute 2^L terms of the sum. More generally, if A is "near" the invertible matrix X in the sense that lim n → ∞ ( I − X − 1 A ) n = 0 o r lim n → ∞ ( I − A X − 1 ) n = 0 {\displaystyle \lim _{n\to \infty }\left(\mathbf {I} -\mathbf {X} ^{-1}\mathbf {A} \right)^{n}=0\mathrm {~~or~~} \lim _{n\to \infty }\left(\mathbf {I} -\mathbf {A} \mathbf {X} ^{-1}\right)^{n}=0} then A is nonsingular and its inverse is A − 1 = ∑ n = 0 ∞ ( X − 1 ( X − A ) ) n X − 1 . {\displaystyle \mathbf {A} ^{-1}=\sum _{n=0}^{\infty }\left(\mathbf {X} ^{-1}(\mathbf {X} -\mathbf {A} )\right)^{n}\mathbf {X} ^{-1}~.} If it is also the case that A − X has rank 1 then this simplifies to A − 1 = X − 1 − X − 1 ( A − X ) X − 1 1 + tr ⁡ ( X − 1 ( A − X ) ) . 
{\displaystyle \mathbf {A} ^{-1}=\mathbf {X} ^{-1}-{\frac {\mathbf {X} ^{-1}(\mathbf {A} -\mathbf {X} )\mathbf {X} ^{-1}}{1+\operatorname {tr} \left(\mathbf {X} ^{-1}(\mathbf {A} -\mathbf {X} )\right)}}~.} === p-adic approximation === If A is a matrix with integer or rational entries, and we seek a solution in arbitrary-precision rationals, a p-adic approximation method converges to an exact solution in O(n^4 log^2 n), assuming standard O(n^3) matrix multiplication is used. The method relies on solving n linear systems via Dixon's method of p-adic approximation (each in O(n^3 log^2 n)) and is available as such in software specialized in arbitrary-precision matrix operations, for example, in IML. === Reciprocal basis vectors method === Given an n × n square matrix X = [ x i j ] {\displaystyle \mathbf {X} =\left[x^{ij}\right]} , 1 ≤ i , j ≤ n {\displaystyle 1\leq i,j\leq n} , with n rows interpreted as n vectors x i = x i j e j {\displaystyle \mathbf {x} _{i}=x^{ij}\mathbf {e} _{j}} (Einstein summation assumed) where the e j {\displaystyle \mathbf {e} _{j}} are a standard orthonormal basis of Euclidean space R n {\displaystyle \mathbb {R} ^{n}} ( e i = e i , e i ⋅ e j = δ i j {\displaystyle \mathbf {e} _{i}=\mathbf {e} ^{i},\mathbf {e} _{i}\cdot \mathbf {e} ^{j}=\delta _{i}^{j}} ), then using Clifford algebra (or geometric algebra) we compute the reciprocal (sometimes called dual) column vectors: x i = x j i e j = ( − 1 ) i − 1 ( x 1 ∧ ⋯ ∧ ( ) i ∧ ⋯ ∧ x n ) ⋅ ( x 1 ∧ x 2 ∧ ⋯ ∧ x n ) − 1 {\displaystyle \mathbf {x} ^{i}=x_{ji}\mathbf {e} ^{j}=(-1)^{i-1}(\mathbf {x} _{1}\wedge \cdots \wedge ()_{i}\wedge \cdots \wedge \mathbf {x} _{n})\cdot (\mathbf {x} _{1}\wedge \ \mathbf {x} _{2}\wedge \cdots \wedge \mathbf {x} _{n})^{-1}} as the columns of the inverse matrix X − 1 = [ x j i ] . 
{\displaystyle \mathbf {X} ^{-1}=[x_{ji}].} Note that the place " ( ) i {\displaystyle ()_{i}} " indicates that " x i {\displaystyle \mathbf {x} _{i}} " is removed from that place in the above expression for x i {\displaystyle \mathbf {x} ^{i}} . We then have X X − 1 = [ x i ⋅ x j ] = [ δ i j ] = I n {\displaystyle \mathbf {X} \mathbf {X} ^{-1}=\left[\mathbf {x} _{i}\cdot \mathbf {x} ^{j}\right]=\left[\delta _{i}^{j}\right]=\mathbf {I} _{n}} , where δ i j {\displaystyle \delta _{i}^{j}} is the Kronecker delta. We also have X − 1 X = [ ( e i ⋅ x k ) ( e j ⋅ x k ) ] = [ e i ⋅ e j ] = [ δ i j ] = I n {\displaystyle \mathbf {X} ^{-1}\mathbf {X} =\left[\left(\mathbf {e} _{i}\cdot \mathbf {x} ^{k}\right)\left(\mathbf {e} ^{j}\cdot \mathbf {x} _{k}\right)\right]=\left[\mathbf {e} _{i}\cdot \mathbf {e} ^{j}\right]=\left[\delta _{i}^{j}\right]=\mathbf {I} _{n}} , as required. If the vectors x i {\displaystyle \mathbf {x} _{i}} are not linearly independent, then ( x 1 ∧ x 2 ∧ ⋯ ∧ x n ) = 0 {\displaystyle (\mathbf {x} _{1}\wedge \mathbf {x} _{2}\wedge \cdots \wedge \mathbf {x} _{n})=0} and the matrix X {\displaystyle \mathbf {X} } is not invertible (has no inverse). == Derivative of the matrix inverse == Suppose that the invertible matrix A depends on a parameter t. Then the derivative of the inverse of A with respect to t is given by d A − 1 d t = − A − 1 d A d t A − 1 . {\displaystyle {\frac {\mathrm {d} \mathbf {A} ^{-1}}{\mathrm {d} t}}=-\mathbf {A} ^{-1}{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\mathbf {A} ^{-1}.} To derive the above expression for the derivative of the inverse of A, one can differentiate the definition of the matrix inverse A − 1 A = I {\displaystyle \mathbf {A} ^{-1}\mathbf {A} =\mathbf {I} } and then solve for the inverse of A: d ( A − 1 A ) d t = d A − 1 d t A + A − 1 d A d t = d I d t = 0 . 
{\displaystyle {\frac {\mathrm {d} (\mathbf {A} ^{-1}\mathbf {A} )}{\mathrm {d} t}}={\frac {\mathrm {d} \mathbf {A} ^{-1}}{\mathrm {d} t}}\mathbf {A} +\mathbf {A} ^{-1}{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}={\frac {\mathrm {d} \mathbf {I} }{\mathrm {d} t}}=\mathbf {0} .} Subtracting A − 1 d A d t {\displaystyle \mathbf {A} ^{-1}{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}} from both sides of the above and multiplying on the right by A − 1 {\displaystyle \mathbf {A} ^{-1}} gives the correct expression for the derivative of the inverse: d A − 1 d t = − A − 1 d A d t A − 1 . {\displaystyle {\frac {\mathrm {d} \mathbf {A} ^{-1}}{\mathrm {d} t}}=-\mathbf {A} ^{-1}{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\mathbf {A} ^{-1}.} Similarly, if ε {\displaystyle \varepsilon } is a small number then ( A + ε X ) − 1 = A − 1 − ε A − 1 X A − 1 + O ( ε 2 ) . {\displaystyle \left(\mathbf {A} +\varepsilon \mathbf {X} \right)^{-1}=\mathbf {A} ^{-1}-\varepsilon \mathbf {A} ^{-1}\mathbf {X} \mathbf {A} ^{-1}+{\mathcal {O}}(\varepsilon ^{2})\,.} More generally, if d f ( A ) d t = ∑ i g i ( A ) d A d t h i ( A ) , {\displaystyle {\frac {\mathrm {d} f(\mathbf {A} )}{\mathrm {d} t}}=\sum _{i}g_{i}(\mathbf {A} ){\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}h_{i}(\mathbf {A} ),} then, f ( A + ε X ) = f ( A ) + ε ∑ i g i ( A ) X h i ( A ) + O ( ε 2 ) . {\displaystyle f(\mathbf {A} +\varepsilon \mathbf {X} )=f(\mathbf {A} )+\varepsilon \sum _{i}g_{i}(\mathbf {A} )\mathbf {X} h_{i}(\mathbf {A} )+{\mathcal {O}}\left(\varepsilon ^{2}\right).} Given a positive integer n {\displaystyle n} , d A n d t = ∑ i = 1 n A i − 1 d A d t A n − i , d A − n d t = − ∑ i = 1 n A − i d A d t A − ( n + 1 − i ) . 
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} \mathbf {A} ^{n}}{\mathrm {d} t}}&=\sum _{i=1}^{n}\mathbf {A} ^{i-1}{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\mathbf {A} ^{n-i},\\{\frac {\mathrm {d} \mathbf {A} ^{-n}}{\mathrm {d} t}}&=-\sum _{i=1}^{n}\mathbf {A} ^{-i}{\frac {\mathrm {d} \mathbf {A} }{\mathrm {d} t}}\mathbf {A} ^{-(n+1-i)}.\end{aligned}}} Therefore, ( A + ε X ) n = A n + ε ∑ i = 1 n A i − 1 X A n − i + O ( ε 2 ) , ( A + ε X ) − n = A − n − ε ∑ i = 1 n A − i X A − ( n + 1 − i ) + O ( ε 2 ) . {\displaystyle {\begin{aligned}(\mathbf {A} +\varepsilon \mathbf {X} )^{n}&=\mathbf {A} ^{n}+\varepsilon \sum _{i=1}^{n}\mathbf {A} ^{i-1}\mathbf {X} \mathbf {A} ^{n-i}+{\mathcal {O}}\left(\varepsilon ^{2}\right),\\(\mathbf {A} +\varepsilon \mathbf {X} )^{-n}&=\mathbf {A} ^{-n}-\varepsilon \sum _{i=1}^{n}\mathbf {A} ^{-i}\mathbf {X} \mathbf {A} ^{-(n+1-i)}+{\mathcal {O}}\left(\varepsilon ^{2}\right).\end{aligned}}} == Generalized inverses == Some of the properties of inverse matrices are shared by generalized inverses (such as the Moore–Penrose inverse), which can be defined for any m-by-n matrix. == Applications == For most practical applications, it is not necessary to invert a matrix to solve a system of linear equations; however, for a unique solution, it is necessary for the matrix involved to be invertible. Decomposition techniques like LU decomposition are much faster than inversion, and various fast algorithms for special classes of linear systems have also been developed. === Regression/least squares === Although an explicit inverse is not necessary to estimate the vector of unknowns, it is the easiest way to estimate their accuracy and is found in the diagonal of a matrix inverse (the posterior covariance matrix of the vector of unknowns). However, faster algorithms to compute only the diagonal entries of a matrix inverse are known in many cases. 
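The regression point above can be sketched numerically, assuming NumPy and an illustrative randomly generated design matrix: the estimate itself needs no explicit inverse, while the diagonal of (XᵀX)⁻¹ carries the accuracy information.

```python
import numpy as np

# Illustrative least-squares problem: 50 noisy observations, 3 unknowns.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))            # design matrix
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.01 * rng.standard_normal(50)

# Estimating beta does not require an explicit inverse:
beta_solve = np.linalg.solve(X.T @ X, X.T @ y)

# The explicit inverse gives the same estimate, and its diagonal
# (scaled by the noise variance) gives the variance of each unknown.
XtX_inv = np.linalg.inv(X.T @ X)
beta_inv = XtX_inv @ X.T @ y
accuracy_diag = np.diag(XtX_inv)

assert np.allclose(beta_solve, beta_inv)
```

Solving the normal equations directly is both faster and numerically safer than forming the inverse; the inverse is only needed here for its diagonal.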
=== Matrix inverses in real-time simulations === Matrix inversion plays a significant role in computer graphics, particularly in 3D graphics rendering and 3D simulations. Examples include screen-to-world ray casting, world-to-subspace-to-world object transformations, and physical simulations. === Matrix inverses in MIMO wireless communication === Matrix inversion also plays a significant role in the MIMO (Multiple-Input, Multiple-Output) technology in wireless communications. The MIMO system consists of N transmit and M receive antennas. Unique signals, occupying the same frequency band, are sent via N transmit antennas and are received via M receive antennas. The signal arriving at each receive antenna will be a linear combination of the N transmitted signals forming an N × M transmission matrix H. It is crucial for the matrix H to be invertible so that the receiver can figure out the transmitted information. == See also == == References == == Further reading == "Inversion of a matrix", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "28.4: Inverting matrices". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 755–760. ISBN 0-262-03293-7. Bernstein, Dennis S. (2009). Matrix Mathematics: Theory, Facts, and Formulas (2nd ed.). Princeton University Press. ISBN 978-0691140391 – via Google Books. Petersen, Kaare Brandt; Pedersen, Michael Syskind (November 15, 2012). "The Matrix Cookbook" (PDF). pp. 17–23. == External links == Sanderson, Grant (August 15, 2016). "Inverse Matrices, Column Space and Null Space". Essence of Linear Algebra. Archived from the original on 2021-11-03 – via YouTube. Strang, Gilbert. "Linear Algebra Lecture on Inverse Matrices". MIT OpenCourseWare. Moore-Penrose Inverse Matrix
Singular value decomposition
In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix into a rotation, followed by a rescaling, followed by another rotation. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any ⁠ m × n {\displaystyle m\times n} ⁠ matrix. It is related to the polar decomposition. Specifically, the singular value decomposition of an m × n {\displaystyle m\times n} complex matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ is a factorization of the form M = U Σ V ∗ , {\displaystyle \mathbf {M} =\mathbf {U\Sigma V^{*}} ,} where ⁠ U {\displaystyle \mathbf {U} } ⁠ is an ⁠ m × m {\displaystyle m\times m} ⁠ complex unitary matrix, Σ {\displaystyle \mathbf {\Sigma } } is an m × n {\displaystyle m\times n} rectangular diagonal matrix with non-negative real numbers on the diagonal, ⁠ V {\displaystyle \mathbf {V} } ⁠ is an n × n {\displaystyle n\times n} complex unitary matrix, and V ∗ {\displaystyle \mathbf {V} ^{*}} is the conjugate transpose of ⁠ V {\displaystyle \mathbf {V} } ⁠. Such a decomposition always exists for any complex matrix. If ⁠ M {\displaystyle \mathbf {M} } ⁠ is real, then ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V {\displaystyle \mathbf {V} } ⁠ can be guaranteed to be real orthogonal matrices; in such contexts, the SVD is often denoted U Σ V T . {\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{\mathrm {T} }.} The diagonal entries σ i = Σ i i {\displaystyle \sigma _{i}=\Sigma _{ii}} of Σ {\displaystyle \mathbf {\Sigma } } are uniquely determined by ⁠ M {\displaystyle \mathbf {M} } ⁠ and are known as the singular values of ⁠ M {\displaystyle \mathbf {M} } ⁠. The number of non-zero singular values is equal to the rank of ⁠ M {\displaystyle \mathbf {M} } ⁠. The columns of ⁠ U {\displaystyle \mathbf {U} } ⁠ and the columns of ⁠ V {\displaystyle \mathbf {V} } ⁠ are called left-singular vectors and right-singular vectors of ⁠ M {\displaystyle \mathbf {M} } ⁠, respectively. 
They form two sets of orthonormal bases ⁠ u 1 , … , u m {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{m}} ⁠ and ⁠ v 1 , … , v n , {\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{n},} ⁠ and if they are sorted so that the singular values σ i {\displaystyle \sigma _{i}} with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as M = ∑ i = 1 r σ i u i v i ∗ , {\displaystyle \mathbf {M} =\sum _{i=1}^{r}\sigma _{i}\mathbf {u} _{i}\mathbf {v} _{i}^{*},} where r ≤ min { m , n } {\displaystyle r\leq \min\{m,n\}} is the rank of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ The SVD is not unique. However, it is always possible to choose the decomposition such that the singular values Σ i i {\displaystyle \Sigma _{ii}} are in descending order. In this case, Σ {\displaystyle \mathbf {\Sigma } } (but not ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V {\displaystyle \mathbf {V} } ⁠) is uniquely determined by ⁠ M . {\displaystyle \mathbf {M} .} ⁠ The term sometimes refers to the compact SVD, a similar decomposition ⁠ M = U Σ V ∗ {\displaystyle \mathbf {M} =\mathbf {U\Sigma V} ^{*}} ⁠ in which ⁠ Σ {\displaystyle \mathbf {\Sigma } } ⁠ is square diagonal of size ⁠ r × r , {\displaystyle r\times r,} ⁠ where ⁠ r ≤ min { m , n } {\displaystyle r\leq \min\{m,n\}} ⁠ is the rank of ⁠ M , {\displaystyle \mathbf {M} ,} ⁠ and has only the non-zero singular values. In this variant, ⁠ U {\displaystyle \mathbf {U} } ⁠ is an ⁠ m × r {\displaystyle m\times r} ⁠ semi-unitary matrix and V {\displaystyle \mathbf {V} } is an ⁠ n × r {\displaystyle n\times r} ⁠ semi-unitary matrix, such that U ∗ U = V ∗ V = I r . {\displaystyle \mathbf {U} ^{*}\mathbf {U} =\mathbf {V} ^{*}\mathbf {V} =\mathbf {I} _{r}.} Mathematical applications of the SVD include computing the pseudoinverse, matrix approximation, and determining the rank, range, and null space of a matrix. 
The SVD is also extremely useful in many areas of science, engineering, and statistics, such as signal processing, least squares fitting of data, and process control. == Intuitive interpretations == === Rotation, coordinate scaling, and reflection === In the special case when ⁠ M {\displaystyle \mathbf {M} } ⁠ is an ⁠ m × m {\displaystyle m\times m} ⁠ real square matrix, the matrices ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ can be chosen to be real ⁠ m × m {\displaystyle m\times m} ⁠ matrices too. In that case, "unitary" is the same as "orthogonal". Then, interpreting both unitary matrices as well as the diagonal matrix, summarized here as ⁠ A , {\displaystyle \mathbf {A} ,} ⁠ as a linear transformation ⁠ x ↦ A x {\displaystyle \mathbf {x} \mapsto \mathbf {Ax} } ⁠ of the space ⁠ R m , {\displaystyle \mathbf {R} ^{m},} ⁠ the matrices ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ represent rotations or reflections of the space, while ⁠ Σ {\displaystyle \mathbf {\Sigma } } ⁠ represents the scaling of each coordinate ⁠ x i {\displaystyle \mathbf {x} _{i}} ⁠ by the factor ⁠ σ i . {\displaystyle \sigma _{i}.} ⁠ Thus the SVD decomposition breaks down any linear transformation of ⁠ R m {\displaystyle \mathbf {R} ^{m}} ⁠ into a composition of three geometrical transformations: a rotation or reflection (⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠), followed by a coordinate-by-coordinate scaling (⁠ Σ {\displaystyle \mathbf {\Sigma } } ⁠), followed by another rotation or reflection (⁠ U {\displaystyle \mathbf {U} } ⁠). In particular, if ⁠ M {\displaystyle \mathbf {M} } ⁠ has a positive determinant, then ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ can be chosen to be both rotations with reflections, or both rotations without reflections. If the determinant is negative, exactly one of them will have a reflection. 
If the determinant is zero, each can be independently chosen to be of either type. If the matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ is real but not square, namely ⁠ m × n {\displaystyle m\times n} ⁠ with ⁠ m ≠ n , {\displaystyle m\neq n,} ⁠ it can be interpreted as a linear transformation from ⁠ R n {\displaystyle \mathbf {R} ^{n}} ⁠ to ⁠ R m . {\displaystyle \mathbf {R} ^{m}.} ⁠ Then ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ can be chosen to be rotations/reflections of ⁠ R m {\displaystyle \mathbf {R} ^{m}} ⁠ and ⁠ R n , {\displaystyle \mathbf {R} ^{n},} ⁠ respectively; and ⁠ Σ , {\displaystyle \mathbf {\Sigma } ,} ⁠ besides scaling the first ⁠ min { m , n } {\displaystyle \min\{m,n\}} ⁠ coordinates, also extends the vector with zeros, i.e. removes trailing coordinates, so as to turn ⁠ R n {\displaystyle \mathbf {R} ^{n}} ⁠ into ⁠ R m . {\displaystyle \mathbf {R} ^{m}.} ⁠ === Singular values as semiaxes of an ellipse or ellipsoid === As shown in the figure, the singular values can be interpreted as the magnitude of the semiaxes of an ellipse in 2D. This concept can be generalized to ⁠ n {\displaystyle n} ⁠-dimensional Euclidean space, with the singular values of any ⁠ n × n {\displaystyle n\times n} ⁠ square matrix being viewed as the magnitude of the semiaxis of an ⁠ n {\displaystyle n} ⁠-dimensional ellipsoid. Similarly, the singular values of any ⁠ m × n {\displaystyle m\times n} ⁠ matrix can be viewed as the magnitude of the semiaxis of an ⁠ n {\displaystyle n} ⁠-dimensional ellipsoid in ⁠ m {\displaystyle m} ⁠-dimensional space, for example as an ellipse in a (tilted) 2D plane in a 3D space. Singular values encode magnitude of the semiaxis, while singular vectors encode direction. See below for further details. 
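The semiaxis interpretation above can be checked numerically. A small sketch, assuming NumPy and an arbitrary illustrative 2 × 2 matrix: the extreme stretches of the unit circle under the map coincide with the singular values.

```python
import numpy as np

M = np.array([[3.0, 1.0],
              [1.0, 2.0]])                   # illustrative 2x2 matrix
sigma = np.linalg.svd(M, compute_uv=False)   # singular values, descending

# Image of the unit circle under M is an ellipse whose semiaxes have
# lengths sigma[0] (major) and sigma[1] (minor).
theta = np.linspace(0.0, 2.0 * np.pi, 4000)
circle = np.vstack([np.cos(theta), np.sin(theta)])
radii = np.linalg.norm(M @ circle, axis=0)

assert np.isclose(radii.max(), sigma[0], atol=1e-3)
assert np.isclose(radii.min(), sigma[1], atol=1e-3)
```

The product of the semiaxes equals |det M|, consistent with the map scaling areas by the absolute determinant.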
=== The columns of U and V are orthonormal bases === Since ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ are unitary, the columns of each of them form a set of orthonormal vectors, which can be regarded as basis vectors. The matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ maps the basis vector ⁠ V i {\displaystyle \mathbf {V} _{i}} ⁠ to the stretched unit vector ⁠ σ i U i . {\displaystyle \sigma _{i}\mathbf {U} _{i}.} ⁠ By the definition of a unitary matrix, the same is true for their conjugate transposes ⁠ U ∗ {\displaystyle \mathbf {U} ^{*}} ⁠ and ⁠ V , {\displaystyle \mathbf {V} ,} ⁠ except the geometric interpretation of the singular values as stretches is lost. In short, the columns of ⁠ U , {\displaystyle \mathbf {U} ,} ⁠ ⁠ U ∗ , {\displaystyle \mathbf {U} ^{*},} ⁠ ⁠ V , {\displaystyle \mathbf {V} ,} ⁠ and ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ are orthonormal bases. When ⁠ M {\displaystyle \mathbf {M} } ⁠ is a positive-semidefinite Hermitian matrix, ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V {\displaystyle \mathbf {V} } ⁠ are both equal to the unitary matrix used to diagonalize ⁠ M . {\displaystyle \mathbf {M} .} ⁠ However, when ⁠ M {\displaystyle \mathbf {M} } ⁠ is not positive-semidefinite and Hermitian but still diagonalizable, its eigendecomposition and singular value decomposition are distinct. === Relation to the four fundamental subspaces === The first ⁠ r {\displaystyle r} ⁠ columns of ⁠ U {\displaystyle \mathbf {U} } ⁠ are a basis of the column space of ⁠ M {\displaystyle \mathbf {M} } ⁠. The last ⁠ m − r {\displaystyle m-r} ⁠ columns of ⁠ U {\displaystyle \mathbf {U} } ⁠ are a basis of the null space of ⁠ M ∗ {\displaystyle \mathbf {M} ^{*}} ⁠. The first ⁠ r {\displaystyle r} ⁠ columns of ⁠ V {\displaystyle \mathbf {V} } ⁠ are a basis of the column space of ⁠ M ∗ {\displaystyle \mathbf {M} ^{*}} ⁠ (the row space of ⁠ M {\displaystyle \mathbf {M} } ⁠ in the real case). 
The last ⁠ n − r {\displaystyle n-r} ⁠ columns of ⁠ V {\displaystyle \mathbf {V} } ⁠ are a basis of the null space of ⁠ M {\displaystyle \mathbf {M} } ⁠. === Geometric meaning === Because ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V {\displaystyle \mathbf {V} } ⁠ are unitary, we know that the columns ⁠ U 1 , … , U m {\displaystyle \mathbf {U} _{1},\ldots ,\mathbf {U} _{m}} ⁠ of ⁠ U {\displaystyle \mathbf {U} } ⁠ yield an orthonormal basis of ⁠ K m {\displaystyle K^{m}} ⁠ and the columns ⁠ V 1 , … , V n {\displaystyle \mathbf {V} _{1},\ldots ,\mathbf {V} _{n}} ⁠ of ⁠ V {\displaystyle \mathbf {V} } ⁠ yield an orthonormal basis of ⁠ K n {\displaystyle K^{n}} ⁠ (with respect to the standard scalar products on these spaces). The linear transformation T : { K n → K m x ↦ M x {\displaystyle T:\left\{{\begin{aligned}K^{n}&\to K^{m}\\x&\mapsto \mathbf {M} x\end{aligned}}\right.} has a particularly simple description with respect to these orthonormal bases: we have T ( V i ) = σ i U i , i = 1 , … , min ( m , n ) , {\displaystyle T(\mathbf {V} _{i})=\sigma _{i}\mathbf {U} _{i},\qquad i=1,\ldots ,\min(m,n),} where ⁠ σ i {\displaystyle \sigma _{i}} ⁠ is the ⁠ i {\displaystyle i} ⁠-th diagonal entry of ⁠ Σ , {\displaystyle \mathbf {\Sigma } ,} ⁠ and ⁠ T ( V i ) = 0 {\displaystyle T(\mathbf {V} _{i})=0} ⁠ for ⁠ i > min ( m , n ) . {\displaystyle i>\min(m,n).} ⁠ The geometric content of the SVD theorem can thus be summarized as follows: for every linear map ⁠ T : K n → K m {\displaystyle T:K^{n}\to K^{m}} ⁠ one can find orthonormal bases of ⁠ K n {\displaystyle K^{n}} ⁠ and ⁠ K m {\displaystyle K^{m}} ⁠ such that ⁠ T {\displaystyle T} ⁠ maps the ⁠ i {\displaystyle i} ⁠-th basis vector of ⁠ K n {\displaystyle K^{n}} ⁠ to a non-negative multiple of the ⁠ i {\displaystyle i} ⁠-th basis vector of ⁠ K m , {\displaystyle K^{m},} ⁠ and sends the leftover basis vectors to zero. 
With respect to these bases, the map ⁠ T {\displaystyle T} ⁠ is therefore represented by a diagonal matrix with non-negative real diagonal entries. To get a more visual flavor of singular values and SVD factorization – at least when working on real vector spaces – consider the sphere ⁠ S {\displaystyle S} ⁠ of radius one in ⁠ R n . {\displaystyle \mathbf {R} ^{n}.} ⁠ The linear map ⁠ T {\displaystyle T} ⁠ maps this sphere onto an ellipsoid in ⁠ R m . {\displaystyle \mathbf {R} ^{m}.} ⁠ Non-zero singular values are simply the lengths of the semi-axes of this ellipsoid. Especially when ⁠ n = m , {\displaystyle n=m,} ⁠ and all the singular values are distinct and non-zero, the SVD of the linear map ⁠ T {\displaystyle T} ⁠ can be easily analyzed as a succession of three consecutive moves: consider the ellipsoid ⁠ T ( S ) {\displaystyle T(S)} ⁠ and specifically its axes; then consider the directions in ⁠ R n {\displaystyle \mathbf {R} ^{n}} ⁠ sent by ⁠ T {\displaystyle T} ⁠ onto these axes. These directions happen to be mutually orthogonal. Apply first an isometry ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ sending these directions to the coordinate axes of ⁠ R n . {\displaystyle \mathbf {R} ^{n}.} ⁠ On a second move, apply an endomorphism ⁠ D {\displaystyle \mathbf {D} } ⁠ diagonalized along the coordinate axes and stretching or shrinking in each direction, using the semi-axes lengths of ⁠ T ( S ) {\displaystyle T(S)} ⁠ as stretching coefficients. The composition ⁠ D ∘ V ∗ {\displaystyle \mathbf {D} \circ \mathbf {V} ^{*}} ⁠ then sends the unit-sphere onto an ellipsoid isometric to ⁠ T ( S ) . {\displaystyle T(S).} ⁠ To define the third and last move, apply an isometry ⁠ U {\displaystyle \mathbf {U} } ⁠ to this ellipsoid to obtain ⁠ T ( S ) . {\displaystyle T(S).} ⁠ As can be easily checked, the composition ⁠ U ∘ D ∘ V ∗ {\displaystyle \mathbf {U} \circ \mathbf {D} \circ \mathbf {V} ^{*}} ⁠ coincides with ⁠ T . 
{\displaystyle T.} ⁠ == Example == Consider the ⁠ 4 × 5 {\displaystyle 4\times 5} ⁠ matrix M = [ 1 0 0 0 2 0 0 3 0 0 0 0 0 0 0 0 2 0 0 0 ] {\displaystyle \mathbf {M} ={\begin{bmatrix}1&0&0&0&2\\0&0&3&0&0\\0&0&0&0&0\\0&2&0&0&0\end{bmatrix}}} A singular value decomposition of this matrix is given by ⁠ U Σ V ∗ {\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}} ⁠ U = [ 0 − 1 0 0 − 1 0 0 0 0 0 0 − 1 0 0 − 1 0 ] Σ = [ 3 0 0 0 0 0 5 0 0 0 0 0 2 0 0 0 0 0 0 0 ] V ∗ = [ 0 0 − 1 0 0 − 0.2 0 0 0 − 0.8 0 − 1 0 0 0 0 0 0 1 0 − 0.8 0 0 0 0.2 ] {\displaystyle {\begin{aligned}\mathbf {U} &={\begin{bmatrix}\color {Green}0&\color {Blue}-1&\color {Cyan}0&\color {Emerald}0\\\color {Green}-1&\color {Blue}0&\color {Cyan}0&\color {Emerald}0\\\color {Green}0&\color {Blue}0&\color {Cyan}0&\color {Emerald}-1\\\color {Green}0&\color {Blue}0&\color {Cyan}-1&\color {Emerald}0\end{bmatrix}}\\[6pt]\mathbf {\Sigma } &={\begin{bmatrix}3&0&0&0&\color {Gray}{\mathit {0}}\\0&{\sqrt {5}}&0&0&\color {Gray}{\mathit {0}}\\0&0&2&0&\color {Gray}{\mathit {0}}\\0&0&0&\color {Red}\mathbf {0} &\color {Gray}{\mathit {0}}\end{bmatrix}}\\[6pt]\mathbf {V} ^{*}&={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\\\color {Orchid}0&\color {Orchid}0&\color {Orchid}0&\color {Orchid}1&\color {Orchid}0\\\color {Purple}-{\sqrt {0.8}}&\color {Purple}0&\color {Purple}0&\color {Purple}0&\color {Purple}{\sqrt {0.2}}\end{bmatrix}}\end{aligned}}} The scaling matrix ⁠ Σ {\displaystyle \mathbf {\Sigma } } ⁠ is zero outside of the diagonal (grey italics) and one diagonal element is zero (red bold, light blue bold in dark mode). 
Furthermore, because the matrices ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ are unitary, multiplying by their respective conjugate transposes yields identity matrices, as shown below. In this case, because ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ are real valued, each is an orthogonal matrix. U U ∗ = [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] = I 4 V V ∗ = [ 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 ] = I 5 {\displaystyle {\begin{aligned}\mathbf {U} \mathbf {U} ^{*}&={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}=\mathbf {I} _{4}\\[6pt]\mathbf {V} \mathbf {V} ^{*}&={\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{bmatrix}}=\mathbf {I} _{5}\end{aligned}}} This particular singular value decomposition is not unique. For instance, we can keep ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ Σ {\displaystyle \mathbf {\Sigma } } ⁠ the same, but change the last two rows of ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ such that V ∗ = [ 0 0 − 1 0 0 − 0.2 0 0 0 − 0.8 0 − 1 0 0 0 0.4 0 0 0.5 − 0.1 − 0.4 0 0 0.5 0.1 ] {\displaystyle \mathbf {V} ^{*}={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\\\color {Orchid}{\sqrt {0.4}}&\color {Orchid}0&\color {Orchid}0&\color {Orchid}{\sqrt {0.5}}&\color {Orchid}-{\sqrt {0.1}}\\\color {Purple}-{\sqrt {0.4}}&\color {Purple}0&\color {Purple}0&\color {Purple}{\sqrt {0.5}}&\color {Purple}{\sqrt {0.1}}\end{bmatrix}}} and get an equally valid singular value decomposition. As the matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ has rank 3, it has only 3 nonzero singular values. 
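The worked example above, including the non-uniqueness of the last two rows of ⁠V∗⁠, can be checked numerically. The sketch below (using NumPy, whose own SVD routine may pick different signs for the singular vectors) verifies that both quoted choices of ⁠V∗⁠ reconstruct ⁠M⁠ and are unitary:

```python
import numpy as np

# Sketch checking the worked example: both quoted choices of V* give a valid
# decomposition U Sigma V* = M, because the last two rows of V* multiply zero
# columns of Sigma.
M = np.array([[1, 0, 0, 0, 2],
              [0, 0, 3, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 2, 0, 0, 0]], dtype=float)

U = np.array([[ 0., -1.,  0.,  0.],
              [-1.,  0.,  0.,  0.],
              [ 0.,  0.,  0., -1.],
              [ 0.,  0., -1.,  0.]])
S = np.zeros((4, 5))
S[[0, 1, 2], [0, 1, 2]] = [3.0, np.sqrt(5), 2.0]   # singular values 3, sqrt(5), 2, 0

r2, r8 = np.sqrt(0.2), np.sqrt(0.8)
Vh1 = np.array([[ 0.,  0., -1., 0.,  0.],
                [-r2,  0.,  0., 0., -r8],
                [ 0., -1.,  0., 0.,  0.],
                [ 0.,  0.,  0., 1.,  0.],
                [-r8,  0.,  0., 0.,  r2]])

# Alternative last two rows from the text:
r4, r5, r1 = np.sqrt(0.4), np.sqrt(0.5), np.sqrt(0.1)
Vh2 = Vh1.copy()
Vh2[3] = [ r4, 0., 0., r5, -r1]
Vh2[4] = [-r4, 0., 0., r5,  r1]

assert np.allclose(U @ S @ Vh1, M)
assert np.allclose(U @ S @ Vh2, M)
assert np.allclose(Vh1 @ Vh1.T, np.eye(5))   # both V* are unitary
assert np.allclose(Vh2 @ Vh2.T, np.eye(5))
assert np.allclose(np.linalg.svd(M, compute_uv=False), [3, np.sqrt(5), 2, 0])
```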
In taking the product ⁠ U Σ V ∗ {\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}} ⁠, the final column of ⁠ U {\displaystyle \mathbf {U} } ⁠ and the final two rows of ⁠ V ∗ {\displaystyle \mathbf {V^{*}} } ⁠ are multiplied by zero, so have no effect on the matrix product, and can be replaced by any unit vectors which are orthogonal to the first three and to each-other. The compact SVD, ⁠ M = U r Σ r V r ∗ {\displaystyle \mathbf {M} =\mathbf {U} _{r}\mathbf {\Sigma } _{r}\mathbf {V} _{r}^{*}} ⁠, eliminates these superfluous rows, columns, and singular values: U r = [ 0 − 1 0 − 1 0 0 0 0 0 0 0 − 1 ] Σ r = [ 3 0 0 0 5 0 0 0 2 ] V r ∗ = [ 0 0 − 1 0 0 − 0.2 0 0 0 − 0.8 0 − 1 0 0 0 ] {\displaystyle {\begin{aligned}\mathbf {U} _{r}&={\begin{bmatrix}\color {Green}0&\color {Blue}-1&\color {Cyan}0\\\color {Green}-1&\color {Blue}0&\color {Cyan}0\\\color {Green}0&\color {Blue}0&\color {Cyan}0\\\color {Green}0&\color {Blue}0&\color {Cyan}-1\end{bmatrix}}\\[6pt]\mathbf {\Sigma } _{r}&={\begin{bmatrix}3&0&0\\0&{\sqrt {5}}&0\\0&0&2\end{bmatrix}}\\[6pt]\mathbf {V} _{r}^{*}&={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\end{bmatrix}}\end{aligned}}} == SVD and spectral decomposition == === Singular values, singular vectors, and their relation to the SVD === A non-negative real number ⁠ σ {\displaystyle \sigma } ⁠ is a singular value for ⁠ M {\displaystyle \mathbf {M} } ⁠ if and only if there exist unit-length vectors ⁠ u {\displaystyle \mathbf {u} } ⁠ in ⁠ K m {\displaystyle K^{m}} ⁠ and ⁠ v {\displaystyle \mathbf {v} } ⁠ in ⁠ K n {\displaystyle K^{n}} ⁠ such that M v = σ u , M ∗ u = σ v . 
{\displaystyle {\begin{aligned}\mathbf {Mv} &=\sigma \mathbf {u} ,\\[3mu]\mathbf {M} ^{*}\mathbf {u} &=\sigma \mathbf {v} .\end{aligned}}} The vectors ⁠ u {\displaystyle \mathbf {u} } ⁠ and ⁠ v {\displaystyle \mathbf {v} } ⁠ are called left-singular and right-singular vectors for ⁠ σ , {\displaystyle \sigma ,} ⁠ respectively. In any singular value decomposition M = U Σ V ∗ {\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}} the diagonal entries of ⁠ Σ {\displaystyle \mathbf {\Sigma } } ⁠ are equal to the singular values of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ The first ⁠ p = min ( m , n ) {\displaystyle p=\min(m,n)} ⁠ columns of ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V {\displaystyle \mathbf {V} } ⁠ are, respectively, left- and right-singular vectors for the corresponding singular values. Consequently, the above theorem implies that: An ⁠ m × n {\displaystyle m\times n} ⁠ matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ has at most ⁠ p {\displaystyle p} ⁠ distinct singular values. It is always possible to find a unitary basis ⁠ U {\displaystyle \mathbf {U} } ⁠ for ⁠ K m {\displaystyle K^{m}} ⁠ with a subset of basis vectors spanning the left-singular vectors of each singular value of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ It is always possible to find a unitary basis ⁠ V {\displaystyle \mathbf {V} } ⁠ for ⁠ K n {\displaystyle K^{n}} ⁠ with a subset of basis vectors spanning the right-singular vectors of each singular value of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ A singular value for which we can find two left (or right) singular vectors that are linearly independent is called degenerate. If ⁠ u 1 {\displaystyle \mathbf {u} _{1}} ⁠ and ⁠ u 2 {\displaystyle \mathbf {u} _{2}} ⁠ are two left-singular vectors which both correspond to the singular value σ, then any normalized linear combination of the two vectors is also a left-singular vector corresponding to the singular value σ. The similar statement is true for right-singular vectors. 
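The pair of defining relations ⁠Mv = σu⁠ and ⁠M∗u = σv⁠ can be checked directly from a computed decomposition; a minimal sketch on an arbitrary complex matrix (the seed and shape are illustrative only):

```python
import numpy as np

# Sketch: verify M v = sigma u and M* u = sigma v for every singular triplet
# of an arbitrary complex matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

U, s, Vh = np.linalg.svd(M)
V = Vh.conj().T

for i, sigma in enumerate(s):
    u, v = U[:, i], V[:, i]
    assert np.allclose(M @ v, sigma * u)             # M v = sigma u
    assert np.allclose(M.conj().T @ u, sigma * v)    # M* u = sigma v
```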
The number of independent left and right-singular vectors coincides, and these singular vectors appear in the same columns of ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V {\displaystyle \mathbf {V} } ⁠ corresponding to diagonal elements of ⁠ Σ {\displaystyle \mathbf {\Sigma } } ⁠ all with the same value ⁠ σ . {\displaystyle \sigma .} ⁠ As an exception, the left and right-singular vectors of singular value 0 comprise all unit vectors in the cokernel and kernel, respectively, of ⁠ M , {\displaystyle \mathbf {M} ,} ⁠ which by the rank–nullity theorem cannot be the same dimension if ⁠ m ≠ n . {\displaystyle m\neq n.} ⁠ Even if all singular values are nonzero, if ⁠ m > n {\displaystyle m>n} ⁠ then the cokernel is nontrivial, in which case ⁠ U {\displaystyle \mathbf {U} } ⁠ is padded with ⁠ m − n {\displaystyle m-n} ⁠ orthogonal vectors from the cokernel. Conversely, if ⁠ m < n , {\displaystyle m<n,} ⁠ then ⁠ V {\displaystyle \mathbf {V} } ⁠ is padded by ⁠ n − m {\displaystyle n-m} ⁠ orthogonal vectors from the kernel. However, if the singular value of ⁠ 0 {\displaystyle 0} ⁠ exists, the extra columns of ⁠ U {\displaystyle \mathbf {U} } ⁠ or ⁠ V {\displaystyle \mathbf {V} } ⁠ already appear as left or right-singular vectors. Non-degenerate singular values always have unique left- and right-singular vectors, up to multiplication by a unit-phase factor ⁠ e i φ {\displaystyle e^{i\varphi }} ⁠ (for the real case up to a sign). Consequently, if all singular values of a square matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ are non-degenerate and non-zero, then its singular value decomposition is unique, up to multiplication of a column of ⁠ U {\displaystyle \mathbf {U} } ⁠ by a unit-phase factor and simultaneous multiplication of the corresponding column of ⁠ V {\displaystyle \mathbf {V} } ⁠ by the same unit-phase factor. 
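The phase freedom just described is easy to demonstrate numerically; in the real case the unit-phase factor reduces to a sign, and flipping a column of ⁠U⁠ together with the matching column of ⁠V⁠ leaves the product unchanged (a sketch, with an arbitrary matrix):

```python
import numpy as np

# Sketch of the uniqueness statement: flipping the sign (the real case of a
# unit-phase factor) of a column of U together with the corresponding column
# of V leaves U Sigma V* unchanged.
rng = np.random.default_rng(6)
M = rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(M)
U2, Vh2 = U.copy(), Vh.copy()
U2[:, 0] *= -1                  # flip a left-singular vector ...
Vh2[0, :] *= -1                 # ... and the matching right-singular vector

assert np.allclose(U2 @ np.diag(s) @ Vh2, M)
```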
In general, the SVD is unique up to arbitrary unitary transformations applied uniformly to the column vectors of both ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V {\displaystyle \mathbf {V} } ⁠ spanning the subspaces of each singular value, and up to arbitrary unitary transformations on vectors of ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V {\displaystyle \mathbf {V} } ⁠ spanning the kernel and cokernel, respectively, of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ === Relation to eigenvalue decomposition === The singular value decomposition is very general in the sense that it can be applied to any ⁠ m × n {\displaystyle m\times n} ⁠ matrix, whereas eigenvalue decomposition can only be applied to square diagonalizable matrices. Nevertheless, the two decompositions are related. If ⁠ M {\displaystyle \mathbf {M} } ⁠ has SVD ⁠ M = U Σ V ∗ , {\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*},} ⁠ the following two relations hold: M ∗ M = V Σ ∗ U ∗ U Σ V ∗ = V ( Σ ∗ Σ ) V ∗ , M M ∗ = U Σ V ∗ V Σ ∗ U ∗ = U ( Σ Σ ∗ ) U ∗ . {\displaystyle {\begin{aligned}\mathbf {M} ^{*}\mathbf {M} &=\mathbf {V} \mathbf {\Sigma } ^{*}\mathbf {U} ^{*}\,\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}=\mathbf {V} (\mathbf {\Sigma } ^{*}\mathbf {\Sigma } )\mathbf {V} ^{*},\\[3mu]\mathbf {M} \mathbf {M} ^{*}&=\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}\,\mathbf {V} \mathbf {\Sigma } ^{*}\mathbf {U} ^{*}=\mathbf {U} (\mathbf {\Sigma } \mathbf {\Sigma } ^{*})\mathbf {U} ^{*}.\end{aligned}}} The right-hand sides of these relations describe the eigenvalue decompositions of the left-hand sides. Consequently: The columns of ⁠ V {\displaystyle \mathbf {V} } ⁠ (referred to as right-singular vectors) are eigenvectors of ⁠ M ∗ M . {\displaystyle \mathbf {M} ^{*}\mathbf {M} .} ⁠ The columns of ⁠ U {\displaystyle \mathbf {U} } ⁠ (referred to as left-singular vectors) are eigenvectors of ⁠ M M ∗ . 
{\displaystyle \mathbf {M} \mathbf {M} ^{*}.} ⁠ The non-zero elements of ⁠ Σ {\displaystyle \mathbf {\Sigma } } ⁠ (non-zero singular values) are the square roots of the non-zero eigenvalues of ⁠ M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } ⁠ or ⁠ M M ∗ . {\displaystyle \mathbf {M} \mathbf {M} ^{*}.} ⁠ In the special case of ⁠ M {\displaystyle \mathbf {M} } ⁠ being a normal matrix, and thus also square, the spectral theorem ensures that it can be unitarily diagonalized using a basis of eigenvectors, and thus decomposed as ⁠ M = U D U ∗ {\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{*}} ⁠ for some unitary matrix ⁠ U {\displaystyle \mathbf {U} } ⁠ and diagonal matrix ⁠ D {\displaystyle \mathbf {D} } ⁠ with complex elements ⁠ σ i {\displaystyle \sigma _{i}} ⁠ along the diagonal. When ⁠ M {\displaystyle \mathbf {M} } ⁠ is positive semi-definite, ⁠ σ i {\displaystyle \sigma _{i}} ⁠ will be non-negative real numbers so that the decomposition ⁠ M = U D U ∗ {\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{*}} ⁠ is also a singular value decomposition. Otherwise, it can be recast as an SVD by moving the phase ⁠ e i φ {\displaystyle e^{i\varphi }} ⁠ of each ⁠ σ i {\displaystyle \sigma _{i}} ⁠ to either its corresponding ⁠ V i {\displaystyle \mathbf {V} _{i}} ⁠ or ⁠ U i . {\displaystyle \mathbf {U} _{i}.} ⁠ The natural connection of the SVD to non-normal matrices is through the polar decomposition theorem: ⁠ M = S R , {\displaystyle \mathbf {M} =\mathbf {S} \mathbf {R} ,} ⁠ where ⁠ S = U Σ U ∗ {\displaystyle \mathbf {S} =\mathbf {U} \mathbf {\Sigma } \mathbf {U} ^{*}} ⁠ is positive semidefinite and normal, and ⁠ R = U V ∗ {\displaystyle \mathbf {R} =\mathbf {U} \mathbf {V} ^{*}} ⁠ is unitary. 
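The eigenvalue relations above can be checked numerically: the squared singular values of ⁠M⁠ coincide with the eigenvalues of ⁠M∗M⁠, and ⁠MM∗⁠ carries the same non-zero eigenvalues padded with zeros. A sketch on an arbitrary real matrix:

```python
import numpy as np

# Sketch: singular values of M are the square roots of the eigenvalues of
# M* M (and of M M*). The 5x3 matrix here is arbitrary.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 3))

s = np.linalg.svd(M, compute_uv=False)
eig = np.linalg.eigvalsh(M.T @ M)        # eigenvalues of M* M, ascending

assert np.allclose(np.sort(s**2), eig)

# For M M* (5x5) the two extra eigenvalues are zero:
eig2 = np.linalg.eigvalsh(M @ M.T)
assert np.allclose(np.sort(np.concatenate([s**2, [0, 0]])), eig2)
```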
Thus, except for positive semi-definite matrices, the eigenvalue decomposition and SVD of ⁠ M , {\displaystyle \mathbf {M} ,} ⁠ while related, differ: the eigenvalue decomposition is ⁠ M = U D U − 1 , {\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{-1},} ⁠ where ⁠ U {\displaystyle \mathbf {U} } ⁠ is not necessarily unitary and ⁠ D {\displaystyle \mathbf {D} } ⁠ is not necessarily positive semi-definite, while the SVD is ⁠ M = U Σ V ∗ , {\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*},} ⁠ where ⁠ Σ {\displaystyle \mathbf {\Sigma } } ⁠ is diagonal and positive semi-definite, and ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V {\displaystyle \mathbf {V} } ⁠ are unitary matrices that are not necessarily related except through the matrix ⁠ M . {\displaystyle \mathbf {M} .} ⁠ While only non-defective square matrices have an eigenvalue decomposition, any ⁠ m × n {\displaystyle m\times n} ⁠ matrix has a SVD. == Applications of the SVD == === Pseudoinverse === The singular value decomposition can be used for computing the pseudoinverse of a matrix. The pseudoinverse of the matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ with singular value decomposition ⁠ M = U Σ V ∗ {\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}} ⁠ is M + = V Σ + U ∗ , {\displaystyle \mathbf {M} ^{+}=\mathbf {V} {\boldsymbol {\Sigma }}^{+}\mathbf {U} ^{\ast },} where Σ + {\displaystyle {\boldsymbol {\Sigma }}^{+}} is the pseudoinverse of Σ {\displaystyle {\boldsymbol {\Sigma }}} , which is formed by replacing every non-zero diagonal entry by its reciprocal and transposing the resulting matrix. The pseudoinverse is one way to solve linear least squares problems. === Solving homogeneous linear equations === A set of homogeneous linear equations can be written as ⁠ A x = 0 {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {0} } ⁠ for a matrix ⁠ A {\displaystyle \mathbf {A} } ⁠ and vector ⁠ x . 
{\displaystyle \mathbf {x} .} ⁠ A typical situation is that ⁠ A {\displaystyle \mathbf {A} } ⁠ is known and a non-zero ⁠ x {\displaystyle \mathbf {x} } ⁠ is to be determined which satisfies the equation. Such an ⁠ x {\displaystyle \mathbf {x} } ⁠ belongs to ⁠ A {\displaystyle \mathbf {A} } ⁠'s null space and is sometimes called a (right) null vector of ⁠ A . {\displaystyle \mathbf {A} .} ⁠ The vector ⁠ x {\displaystyle \mathbf {x} } ⁠ can be characterized as a right-singular vector corresponding to a singular value of ⁠ A {\displaystyle \mathbf {A} } ⁠ that is zero. This observation means that if ⁠ A {\displaystyle \mathbf {A} } ⁠ is a square matrix and has no vanishing singular value, the equation has no non-zero ⁠ x {\displaystyle \mathbf {x} } ⁠ as a solution. It also means that if there are several vanishing singular values, any linear combination of the corresponding right-singular vectors is a valid solution. Analogously to the definition of a (right) null vector, a non-zero ⁠ x {\displaystyle \mathbf {x} } ⁠ satisfying ⁠ x ∗ A = 0 {\displaystyle \mathbf {x} ^{*}\mathbf {A} =\mathbf {0} } ⁠ with ⁠ x ∗ {\displaystyle \mathbf {x} ^{*}} ⁠ denoting the conjugate transpose of ⁠ x , {\displaystyle \mathbf {x} ,} ⁠ is called a left null vector of ⁠ A . {\displaystyle \mathbf {A} .} ⁠ === Total least squares minimization === A total least squares problem seeks the vector ⁠ x {\displaystyle \mathbf {x} } ⁠ that minimizes the 2-norm of a vector ⁠ A x {\displaystyle \mathbf {A} \mathbf {x} } ⁠ under the constraint ‖ x ‖ = 1. {\displaystyle \|\mathbf {x} \|=1.} The solution turns out to be the right-singular vector of ⁠ A {\displaystyle \mathbf {A} } ⁠ corresponding to the smallest singular value. === Range, null space and rank === Another application of the SVD is that it provides an explicit representation of the range and null space of a matrix ⁠ M . 
{\displaystyle \mathbf {M} .} ⁠ The right-singular vectors corresponding to vanishing singular values of ⁠ M {\displaystyle \mathbf {M} } ⁠ span the null space of ⁠ M {\displaystyle \mathbf {M} } ⁠ and the left-singular vectors corresponding to the non-zero singular values of ⁠ M {\displaystyle \mathbf {M} } ⁠ span the range of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ For example, in the example above the null space is spanned by the last two rows of ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ and the range is spanned by the first three columns of ⁠ U . {\displaystyle \mathbf {U} .} ⁠ As a consequence, the rank of ⁠ M {\displaystyle \mathbf {M} } ⁠ equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ {\displaystyle \mathbf {\Sigma } } . In numerical linear algebra the singular values can be used to determine the effective rank of a matrix, as rounding error may lead to small but non-zero singular values in a rank-deficient matrix. Singular values beyond a significant gap are assumed to be numerically equivalent to zero. === Low-rank matrix approximation === Some practical applications need to solve the problem of approximating a matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ with another matrix M ~ {\displaystyle {\tilde {\mathbf {M} }}} , said to be truncated, which has a specific rank ⁠ r {\displaystyle r} ⁠.
In the case that the approximation is based on minimizing the Frobenius norm of the difference between ⁠ M {\displaystyle \mathbf {M} } ⁠ and ⁠ M ~ {\displaystyle {\tilde {\mathbf {M} }}} ⁠ under the constraint that rank ⁡ ( M ~ ) = r , {\displaystyle \operatorname {rank} {\bigl (}{\tilde {\mathbf {M} }}{\bigr )}=r,} it turns out that the solution is given by the SVD of ⁠ M , {\displaystyle \mathbf {M} ,} ⁠ namely M ~ = U Σ ~ V ∗ , {\displaystyle {\tilde {\mathbf {M} }}=\mathbf {U} {\tilde {\mathbf {\Sigma } }}\mathbf {V} ^{*},} where Σ ~ {\displaystyle {\tilde {\mathbf {\Sigma } }}} is the same matrix as Σ {\displaystyle \mathbf {\Sigma } } except that it contains only the ⁠ r {\displaystyle r} ⁠ largest singular values (the other singular values are replaced by zero). This is known as the Eckart–Young theorem, as it was proved by those two authors in 1936 (although it was later found to have been known to earlier authors; see Stewart 1993). === Image compression === One practical consequence of the low-rank approximation given by SVD is that a greyscale image represented as an m × n {\displaystyle m\times n} matrix A {\displaystyle A} , can be efficiently represented by keeping the first k {\displaystyle k} singular values and corresponding vectors. The truncated decomposition A k = U k Σ k V k T {\displaystyle A_{k}=\mathbf {U} _{k}\mathbf {\Sigma } _{k}\mathbf {V} _{k}^{T}} gives an image which minimizes the Frobenius error compared to the original image. Thus, the task becomes finding a close approximation A k {\displaystyle A_{k}} that balances retaining perceptual fidelity with the number of vectors required to reconstruct the image. Storing A k {\displaystyle A_{k}} requires only k ( n + m + 1 ) {\displaystyle k(n+m+1)} numbers compared to n m {\displaystyle nm} . This same idea extends to color images by applying this operation to each channel or stacking the channels into one matrix. 
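The Eckart–Young optimality and the storage count can both be verified directly: the Frobenius error of the rank-⁠r⁠ truncation equals the square root of the sum of the discarded squared singular values. A sketch on an arbitrary matrix:

```python
import numpy as np

# Sketch of the Eckart-Young theorem: the rank-r SVD truncation has Frobenius
# error sqrt(sum of the discarded squared singular values). Matrix is arbitrary.
rng = np.random.default_rng(3)
M = rng.standard_normal((8, 6))

U, s, Vh = np.linalg.svd(M, full_matrices=False)
r = 2
M_r = U[:, :r] @ np.diag(s[:r]) @ Vh[:r]          # keep the r largest singular values

err = np.linalg.norm(M - M_r, 'fro')
assert np.isclose(err, np.sqrt(np.sum(s[r:] ** 2)))

# Storage cost for the truncated factors: r*(m + n + 1) numbers instead of m*n.
m, n = M.shape
assert r * (m + n + 1) < m * n
```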
Since the singular values of most natural images decay quickly, most of their variance is often captured by a small k {\displaystyle k} . For a 1528 × 1225 greyscale image, we can achieve a relative error of .7 % {\displaystyle .7\%} with as little as k = 100 {\displaystyle k=100} . In practice, however, computing the SVD can be too computationally expensive and the resulting compression is typically less storage efficient than a specialized algorithm such as JPEG. === Separable models === The SVD can be thought of as decomposing a matrix into a weighted, ordered sum of separable matrices. By separable, we mean that a matrix ⁠ A {\displaystyle \mathbf {A} } ⁠ can be written as an outer product of two vectors ⁠ A = u ⊗ v , {\displaystyle \mathbf {A} =\mathbf {u} \otimes \mathbf {v} ,} ⁠ or, in coordinates, ⁠ A i j = u i v j . {\displaystyle A_{ij}=u_{i}v_{j}.} ⁠ Specifically, the matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ can be decomposed as, M = ∑ i A i = ∑ i σ i U i ⊗ V i . {\displaystyle \mathbf {M} =\sum _{i}\mathbf {A} _{i}=\sum _{i}\sigma _{i}\mathbf {U} _{i}\otimes \mathbf {V} _{i}.} Here ⁠ U i {\displaystyle \mathbf {U} _{i}} ⁠ and ⁠ V i {\displaystyle \mathbf {V} _{i}} ⁠ are the ⁠ i {\displaystyle i} ⁠-th columns of the corresponding SVD matrices, ⁠ σ i {\displaystyle \sigma _{i}} ⁠ are the ordered singular values, and each ⁠ A i {\displaystyle \mathbf {A} _{i}} ⁠ is separable. The SVD can be used to find the decomposition of an image processing filter into separable horizontal and vertical filters. Note that the number of non-zero ⁠ σ i {\displaystyle \sigma _{i}} ⁠ is exactly the rank of the matrix. Separable models often arise in biological systems, and the SVD factorization is useful to analyze such systems. For example, some visual area V1 simple cells' receptive fields can be well described by a Gabor filter in the space domain multiplied by a modulation function in the time domain. 
Thus, given a linear filter evaluated through, for example, reverse correlation, one can rearrange the two spatial dimensions into one dimension, thus yielding a two-dimensional filter (space, time) which can be decomposed through SVD. The first column of ⁠ U {\displaystyle \mathbf {U} } ⁠ in the SVD factorization is then a Gabor while the first column of ⁠ V {\displaystyle \mathbf {V} } ⁠ represents the time modulation (or vice versa). One may then define an index of separability α = σ 1 2 ∑ i σ i 2 , {\displaystyle \alpha ={\frac {\sigma _{1}^{2}}{\sum _{i}\sigma _{i}^{2}}},} which is the fraction of the power in the matrix M which is accounted for by the first separable matrix in the decomposition. === Nearest orthogonal matrix === It is possible to use the SVD of a square matrix ⁠ A {\displaystyle \mathbf {A} } ⁠ to determine the orthogonal matrix ⁠ O {\displaystyle \mathbf {O} } ⁠ closest to ⁠ A . {\displaystyle \mathbf {A} .} ⁠ The closeness of fit is measured by the Frobenius norm of ⁠ O − A . {\displaystyle \mathbf {O} -\mathbf {A} .} ⁠ The solution is the product ⁠ U V ∗ . {\displaystyle \mathbf {U} \mathbf {V} ^{*}.} ⁠ This intuitively makes sense because an orthogonal matrix would have the decomposition ⁠ U I V ∗ {\displaystyle \mathbf {U} \mathbf {I} \mathbf {V} ^{*}} ⁠ where ⁠ I {\displaystyle \mathbf {I} } ⁠ is the identity matrix, so that if ⁠ A = U Σ V ∗ {\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}} ⁠ then the product ⁠ A = U V ∗ {\displaystyle \mathbf {A} =\mathbf {U} \mathbf {V} ^{*}} ⁠ amounts to replacing the singular values with ones. Equivalently, the solution is the unitary matrix ⁠ R = U V ∗ {\displaystyle \mathbf {R} =\mathbf {U} \mathbf {V} ^{*}} ⁠ of the Polar Decomposition M = R P = P ′ R {\displaystyle \mathbf {M} =\mathbf {R} \mathbf {P} =\mathbf {P} '\mathbf {R} } in either order of stretch and rotation, as described above. 
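Both the separability index ⁠α⁠ and the nearest orthogonal matrix fall out of a single SVD; the sketch below (arbitrary matrix and seed) checks that ⁠α⁠ is the Frobenius-power fraction of the first rank-1 term and that ⁠UV∗⁠ is orthogonal and at least as close to ⁠A⁠ as the identity:

```python
import numpy as np

# Sketch: separability index alpha = sigma_1^2 / sum_i sigma_i^2, and the
# nearest orthogonal matrix U V*, both from one SVD of an arbitrary matrix.
rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(A)

# Separability index: fraction of the power in the first rank-1 term.
alpha = s[0]**2 / np.sum(s**2)
A1 = s[0] * np.outer(U[:, 0], Vh[0])          # first separable (rank-1) matrix
assert np.isclose(np.linalg.norm(A1, 'fro')**2 / np.linalg.norm(A, 'fro')**2, alpha)

# Nearest orthogonal matrix: replace the singular values with ones.
O = U @ Vh
assert np.allclose(O.T @ O, np.eye(4))        # O is orthogonal
# As the Frobenius minimizer over orthogonal matrices, O is at least as close
# to A as any other orthogonal matrix, e.g. the identity:
assert np.linalg.norm(O - A, 'fro') <= np.linalg.norm(np.eye(4) - A, 'fro') + 1e-12
```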
A similar problem, with interesting applications in shape analysis, is the orthogonal Procrustes problem, which consists of finding an orthogonal matrix ⁠ O {\displaystyle \mathbf {O} } ⁠ which most closely maps ⁠ A {\displaystyle \mathbf {A} } ⁠ to ⁠ B . {\displaystyle \mathbf {B} .} ⁠ Specifically, O = argmin Ω ‖ A Ω − B ‖ F subject to Ω T Ω = I , {\displaystyle \mathbf {O} ={\underset {\Omega }{\operatorname {argmin} }}\|\mathbf {A} {\boldsymbol {\Omega }}-\mathbf {B} \|_{F}\quad {\text{subject to}}\quad {\boldsymbol {\Omega }}^{\operatorname {T} }{\boldsymbol {\Omega }}=\mathbf {I} ,} where ‖ ⋅ ‖ F {\displaystyle \|\cdot \|_{F}} denotes the Frobenius norm. This problem is equivalent to finding the nearest orthogonal matrix to a given matrix M = A T B {\displaystyle \mathbf {M} =\mathbf {A} ^{\operatorname {T} }\mathbf {B} } . === The Kabsch algorithm === The Kabsch algorithm (called Wahba's problem in other fields) uses SVD to compute the optimal rotation (with respect to least-squares minimization) that will align a set of points with a corresponding set of points. It is used, among other applications, to compare the structures of molecules. === Principal component analysis === The SVD can be used to construct the principal components in principal component analysis as follows: Let X ∈ R N × p {\displaystyle \mathbf {X} \in \mathbb {R} ^{N\times p}} be a data matrix where each of the N {\displaystyle N} rows is a (feature-wise) mean-centered observation, each of dimension p {\displaystyle p} . The SVD of X {\displaystyle \mathbf {X} } is: X = V Σ U ∗ {\displaystyle \mathbf {X} =\mathbf {V} {\boldsymbol {\Sigma }}\mathbf {U} ^{\ast }} Under this convention, V Σ {\displaystyle \mathbf {V} {\boldsymbol {\Sigma }}} contains the scores of the rows of X {\displaystyle \mathbf {X} } (i.e. each observation), and U {\displaystyle \mathbf {U} } is the matrix whose columns are principal component loading vectors.
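A minimal sketch of PCA via the SVD of a mean-centered data matrix. Note that the text writes the factorization as ⁠X = VΣU∗⁠, whereas NumPy returns factors with ⁠X = U diag(s) V∗⁠, so the names below are matched to the text's roles (scores and loadings); the synthetic data and seed are arbitrary:

```python
import numpy as np

# Sketch: principal-component scores and loadings from the SVD of a
# mean-centered data matrix (synthetic data with anisotropic variance).
rng = np.random.default_rng(4)
X = rng.standard_normal((100, 3)) * np.array([3.0, 1.0, 0.1])
X = X - X.mean(axis=0)                  # feature-wise mean-centering

F, s, Lh = np.linalg.svd(X, full_matrices=False)
scores = F @ np.diag(s)                 # principal-component scores
loadings = Lh.T                         # columns are loading vectors

# Scores reproduce X in the loading basis, and their covariance is diagonal:
assert np.allclose(scores @ loadings.T, X)
cov = scores.T @ scores / (len(X) - 1)
assert np.allclose(cov, np.diag(np.diag(cov)))
```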
=== Signal processing === The SVD and pseudoinverse have been successfully applied to signal processing, image processing and big data (e.g., in genomic signal processing). === Other examples === The SVD is also applied extensively to the study of linear inverse problems and is useful in the analysis of regularization methods such as that of Tikhonov. It is widely used in statistics, where it is related to principal component analysis and to correspondence analysis, and in signal processing and pattern recognition. It is also used in output-only modal analysis, where the non-scaled mode shapes can be determined from the singular vectors. Yet another usage is latent semantic indexing in natural-language text processing. In general numerical computation involving linear or linearized systems, there is a universal constant that characterizes the regularity or singularity of a problem, which is the system's "condition number" κ := σ max / σ min {\displaystyle \kappa :=\sigma _{\text{max}}/\sigma _{\text{min}}} . It often controls the error rate or convergence rate of a given computational scheme on such systems. The SVD also plays a crucial role in the field of quantum information, in a form often referred to as the Schmidt decomposition. Through it, states of two quantum systems are naturally decomposed, providing a necessary and sufficient condition for them to be entangled: if the rank of the Σ {\displaystyle \mathbf {\Sigma } } matrix is larger than one. One application of SVD to rather large matrices is in numerical weather prediction, where Lanczos methods are used to estimate the most linearly quickly growing few perturbations to the central numerical weather prediction over a given initial forward time period; i.e., the singular vectors corresponding to the largest singular values of the linearized propagator for the global weather over that time interval. The output singular vectors in this case are entire weather systems. 
These perturbations are then run through the full nonlinear model to generate an ensemble forecast, giving a handle on some of the uncertainty that should be allowed for around the current central prediction. SVD has also been applied to reduced order modelling. The aim of reduced order modelling is to reduce the number of degrees of freedom in a complex system which is to be modeled. SVD was coupled with radial basis functions to interpolate solutions to three-dimensional unsteady flow problems. SVD has also been used to improve gravitational waveform modeling by the ground-based gravitational-wave interferometer aLIGO, helping to increase the accuracy and speed of waveform generation to support gravitational-wave searches and to update two different waveform models. Singular value decomposition is used in recommender systems to predict people's item ratings. Distributed algorithms have been developed for calculating the SVD on clusters of commodity machines. Low-rank SVD has been applied for hotspot detection from spatiotemporal data with application to disease outbreak detection. A combination of SVD and higher-order SVD has also been applied for real-time event detection from complex data streams (multivariate data with space and time dimensions) in disease surveillance. In astrodynamics, the SVD and its variants are used as an option to determine suitable maneuver directions for transfer trajectory design and orbital station-keeping. The SVD can also be used to measure the similarity between real-valued matrices. By measuring the angles between the singular vectors, the inherent two-dimensional structure of matrices is accounted for. This method was shown to outperform cosine similarity and the Frobenius norm in most cases, including brain activity measurements from neuroscience experiments.
== Proof of existence == An eigenvalue ⁠ λ {\displaystyle \lambda } ⁠ of a matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ is characterized by the algebraic relation ⁠ M u = λ u . {\displaystyle \mathbf {M} \mathbf {u} =\lambda \mathbf {u} .} ⁠ When ⁠ M {\displaystyle \mathbf {M} } ⁠ is Hermitian, a variational characterization is also available. Let ⁠ M {\displaystyle \mathbf {M} } ⁠ be a real ⁠ n × n {\displaystyle n\times n} ⁠ symmetric matrix. Define f : { R n → R x ↦ x T M x {\displaystyle f:\left\{{\begin{aligned}\mathbb {R} ^{n}&\to \mathbb {R} \\\mathbf {x} &\mapsto \mathbf {x} ^{\operatorname {T} }\mathbf {M} \mathbf {x} \end{aligned}}\right.} By the extreme value theorem, this continuous function attains a maximum at some ⁠ u {\displaystyle \mathbf {u} } ⁠ when restricted to the unit sphere { ‖ x ‖ = 1 } . {\displaystyle \{\|\mathbf {x} \|=1\}.} By the Lagrange multipliers theorem, ⁠ u {\displaystyle \mathbf {u} } ⁠ necessarily satisfies ∇ u T M u − λ ⋅ ∇ u T u = 0 {\displaystyle \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {u} -\lambda \cdot \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {u} =0} for some real number ⁠ λ . {\displaystyle \lambda .} ⁠ The nabla symbol, ⁠ ∇ {\displaystyle \nabla } ⁠, is the del operator (differentiation with respect to ⁠ x {\displaystyle \mathbf {x} } ⁠). Using the symmetry of ⁠ M {\displaystyle \mathbf {M} } ⁠ we obtain ∇ x T M x − λ ⋅ ∇ x T x = 2 ( M − λ I ) x . {\displaystyle \nabla \mathbf {x} ^{\operatorname {T} }\mathbf {M} \mathbf {x} -\lambda \cdot \nabla \mathbf {x} ^{\operatorname {T} }\mathbf {x} =2(\mathbf {M} -\lambda \mathbf {I} )\mathbf {x} .} Therefore ⁠ M u = λ u , {\displaystyle \mathbf {M} \mathbf {u} =\lambda \mathbf {u} ,} ⁠ so ⁠ u {\displaystyle \mathbf {u} } ⁠ is a unit length eigenvector of ⁠ M . 
{\displaystyle \mathbf {M} .} ⁠ For every unit length eigenvector ⁠ v {\displaystyle \mathbf {v} } ⁠ of ⁠ M {\displaystyle \mathbf {M} } ⁠ its eigenvalue is ⁠ f ( v ) , {\displaystyle f(\mathbf {v} ),} ⁠ so ⁠ λ {\displaystyle \lambda } ⁠ is the largest eigenvalue of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ The same calculation performed on the orthogonal complement of ⁠ u {\displaystyle \mathbf {u} } ⁠ gives the next largest eigenvalue and so on. The complex Hermitian case is similar; there ⁠ f ( x ) = x ∗ M x {\displaystyle f(\mathbf {x} )=\mathbf {x} ^{*}\mathbf {M} \mathbf {x} } ⁠ is a real-valued function of ⁠ 2 n {\displaystyle 2n} ⁠ real variables. Singular values are similar in that they can be described algebraically or from variational principles, although, unlike the eigenvalue case, Hermiticity or symmetry of ⁠ M {\displaystyle \mathbf {M} } ⁠ is no longer required. This section gives these two arguments for the existence of the singular value decomposition. === Based on the spectral theorem === Let M {\displaystyle \mathbf {M} } be an ⁠ m × n {\displaystyle m\times n} ⁠ complex matrix. Since M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } is positive semi-definite and Hermitian, by the spectral theorem, there exists an ⁠ n × n {\displaystyle n\times n} ⁠ unitary matrix V {\displaystyle \mathbf {V} } such that V ∗ M ∗ M V = D ¯ = [ D 0 0 0 ] , {\displaystyle \mathbf {V} ^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} ={\bar {\mathbf {D} }}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}},} where D {\displaystyle \mathbf {D} } is diagonal and positive definite, of dimension ℓ × ℓ {\displaystyle \ell \times \ell } , with ℓ {\displaystyle \ell } the number of non-zero eigenvalues of M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } (which can be shown to satisfy ℓ ≤ min ( n , m ) {\displaystyle \ell \leq \min(n,m)} ).
Note that V {\displaystyle \mathbf {V} } is here by definition a matrix whose i {\displaystyle i} -th column is the i {\displaystyle i} -th eigenvector of M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } , corresponding to the eigenvalue D ¯ i i {\displaystyle {\bar {\mathbf {D} }}_{ii}} . Moreover, the j {\displaystyle j} -th column of V {\displaystyle \mathbf {V} } , for j > ℓ {\displaystyle j>\ell } , is an eigenvector of M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } with eigenvalue D ¯ j j = 0 {\displaystyle {\bar {\mathbf {D} }}_{jj}=0} . This can be expressed by writing V {\displaystyle \mathbf {V} } as V = [ V 1 V 2 ] {\displaystyle \mathbf {V} ={\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}} , where the columns of V 1 {\displaystyle \mathbf {V} _{1}} and V 2 {\displaystyle \mathbf {V} _{2}} therefore contain the eigenvectors of M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } corresponding to non-zero and zero eigenvalues, respectively. Using this rewriting of V {\displaystyle \mathbf {V} } , the equation becomes: [ V 1 ∗ V 2 ∗ ] M ∗ M [ V 1 V 2 ] = [ V 1 ∗ M ∗ M V 1 V 1 ∗ M ∗ M V 2 V 2 ∗ M ∗ M V 1 V 2 ∗ M ∗ M V 2 ] = [ D 0 0 0 ] . {\displaystyle {\begin{bmatrix}\mathbf {V} _{1}^{*}\\\mathbf {V} _{2}^{*}\end{bmatrix}}\mathbf {M} ^{*}\mathbf {M} \,{\begin{bmatrix}\mathbf {V} _{1}&\!\!\mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\\\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}}.} This implies that V 1 ∗ M ∗ M V 1 = D , V 2 ∗ M ∗ M V 2 = 0 . 
{\displaystyle \mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}=\mathbf {D} ,\quad \mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}=\mathbf {0} .} Moreover, the second equation implies M V 2 = 0 {\displaystyle \mathbf {M} \mathbf {V} _{2}=\mathbf {0} } . Finally, the unitarity of V {\displaystyle \mathbf {V} } translates, in terms of V 1 {\displaystyle \mathbf {V} _{1}} and V 2 {\displaystyle \mathbf {V} _{2}} , into the following conditions: V 1 ∗ V 1 = I 1 , V 2 ∗ V 2 = I 2 , V 1 V 1 ∗ + V 2 V 2 ∗ = I 12 , {\displaystyle {\begin{aligned}\mathbf {V} _{1}^{*}\mathbf {V} _{1}&=\mathbf {I} _{1},\\\mathbf {V} _{2}^{*}\mathbf {V} _{2}&=\mathbf {I} _{2},\\\mathbf {V} _{1}\mathbf {V} _{1}^{*}+\mathbf {V} _{2}\mathbf {V} _{2}^{*}&=\mathbf {I} _{12},\end{aligned}}} where the subscripts on the identity matrices are used to emphasize that they are of different dimensions. Let us now define U 1 = M V 1 D − 1 2 . {\displaystyle \mathbf {U} _{1}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}.} Then, U 1 D 1 2 V 1 ∗ = M V 1 D − 1 2 D 1 2 V 1 ∗ = M ( I − V 2 V 2 ∗ ) = M − ( M V 2 ) V 2 ∗ = M , {\displaystyle \mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} (\mathbf {I} -\mathbf {V} _{2}\mathbf {V} _{2}^{*})=\mathbf {M} -(\mathbf {M} \mathbf {V} _{2})\mathbf {V} _{2}^{*}=\mathbf {M} ,} since M V 2 = 0 . {\displaystyle \mathbf {M} \mathbf {V} _{2}=\mathbf {0} .} This can also be seen as an immediate consequence of the fact that M V 1 V 1 ∗ = M {\displaystyle \mathbf {M} \mathbf {V} _{1}\mathbf {V} _{1}^{*}=\mathbf {M} } .
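The construction above can be traced numerically. The sketch below is an illustration of my own in NumPy (not part of the original derivation): it builds ⁠ V 1 {\displaystyle \mathbf {V} _{1}} ⁠, ⁠ V 2 {\displaystyle \mathbf {V} _{2}} ⁠ and ⁠ U 1 = M V 1 D − 1 / 2 {\displaystyle \mathbf {U} _{1}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-1/2}} ⁠ for a rank-deficient real matrix and checks that ⁠ M V 2 = 0 {\displaystyle \mathbf {M} \mathbf {V} _{2}=\mathbf {0} } ⁠ and ⁠ U 1 D 1 / 2 V 1 ∗ = M {\displaystyle \mathbf {U} _{1}\mathbf {D} ^{1/2}\mathbf {V} _{1}^{*}=\mathbf {M} } ⁠:

```python
import numpy as np

rng = np.random.default_rng(0)
# A rank-deficient 5x4 real matrix, so that V2 (kernel eigenvectors) is non-trivial.
M = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

# Spectral decomposition of M^T M, ordered by decreasing eigenvalue.
eigvals, V = np.linalg.eigh(M.T @ M)
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]

ell = int(np.sum(eigvals > 1e-10))   # number of non-zero eigenvalues
V1, V2 = V[:, :ell], V[:, ell:]

# V2 spans the kernel of M, so M V2 = 0.
assert np.allclose(M @ V2, 0)

# U1 = M V1 D^{-1/2} has orthonormal columns, and U1 D^{1/2} V1^T recovers M.
U1 = M @ V1 @ np.diag(eigvals[:ell] ** -0.5)
D_half = np.diag(eigvals[:ell] ** 0.5)
assert np.allclose(U1.T @ U1, np.eye(ell))
assert np.allclose(U1 @ D_half @ V1.T, M)
```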
This is equivalent to the observation that if { v i } i = 1 ℓ {\displaystyle \{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }} is the set of eigenvectors of M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } corresponding to non-vanishing eigenvalues { λ i } i = 1 ℓ {\displaystyle \{\lambda _{i}\}_{i=1}^{\ell }} , then { M v i } i = 1 ℓ {\displaystyle \{\mathbf {M} {\boldsymbol {v}}_{i}\}_{i=1}^{\ell }} is a set of orthogonal vectors, and { λ i − 1 / 2 M v i } | i = 1 ℓ {\displaystyle {\bigl \{}\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}{\bigr \}}{\vphantom {|}}_{i=1}^{\ell }} is a (generally not complete) set of orthonormal vectors. This matches the matrix formalism used above, denoting by V 1 {\displaystyle \mathbf {V} _{1}} the matrix whose columns are { v i } i = 1 ℓ {\displaystyle \{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }} , by V 2 {\displaystyle \mathbf {V} _{2}} the matrix whose columns are the eigenvectors of M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } with vanishing eigenvalue, and by U 1 {\displaystyle \mathbf {U} _{1}} the matrix whose columns are the vectors { λ i − 1 / 2 M v i } | i = 1 ℓ {\displaystyle {\bigl \{}\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}{\bigr \}}{\vphantom {|}}_{i=1}^{\ell }} . We see that this is almost the desired result, except that U 1 {\displaystyle \mathbf {U} _{1}} and V 1 {\displaystyle \mathbf {V} _{1}} are in general not unitary, since they might not be square. However, we do know that the number of rows of U 1 {\displaystyle \mathbf {U} _{1}} is no smaller than the number of columns, since the dimension of D {\displaystyle \mathbf {D} } is no greater than m {\displaystyle m} or n {\displaystyle n} .
Also, since U 1 ∗ U 1 = D − 1 2 V 1 ∗ M ∗ M V 1 D − 1 2 = D − 1 2 D D − 1 2 = I 1 , {\displaystyle \mathbf {U} _{1}^{*}\mathbf {U} _{1}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} \mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {I} _{1},} the columns in U 1 {\displaystyle \mathbf {U} _{1}} are orthonormal and can be extended to an orthonormal basis. This means that we can choose U 2 {\displaystyle \mathbf {U} _{2}} such that U = [ U 1 U 2 ] {\displaystyle \mathbf {U} ={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}} is unitary. For ⁠ V 1 {\displaystyle \mathbf {V} _{1}} ⁠ we already have ⁠ V 2 {\displaystyle \mathbf {V} _{2}} ⁠ to make it unitary. Now, define Σ = [ [ D 1 2 0 0 0 ] 0 ] , {\displaystyle \mathbf {\Sigma } ={\begin{bmatrix}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}},} where extra zero rows are added or removed to make the number of zero rows equal the number of columns of ⁠ U 2 , {\displaystyle \mathbf {U} _{2},} ⁠ and hence the overall dimensions of Σ {\displaystyle \mathbf {\Sigma } } equal to m × n {\displaystyle m\times n} . Then [ U 1 U 2 ] [ [ D 1 2 0 0 0 ] 0 ] [ V 1 V 2 ] ∗ = [ U 1 U 2 ] [ D 1 2 V 1 ∗ 0 ] = U 1 D 1 2 V 1 ∗ = M , {\displaystyle {\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}}{\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}^{*}={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}\\0\end{bmatrix}}=\mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} ,} which is the desired result: M = U Σ V ∗ .
{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}.} Notice that the argument could instead begin by diagonalizing ⁠ M M ∗ {\displaystyle \mathbf {M} \mathbf {M} ^{*}} ⁠ rather than ⁠ M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } ⁠ (this shows directly that ⁠ M M ∗ {\displaystyle \mathbf {M} \mathbf {M} ^{*}} ⁠ and ⁠ M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } ⁠ have the same non-zero eigenvalues). === Based on variational characterization === The singular values can also be characterized as the maxima of ⁠ u T M v , {\displaystyle \mathbf {u} ^{\mathrm {T} }\mathbf {M} \mathbf {v} ,} ⁠ considered as a function of ⁠ u {\displaystyle \mathbf {u} } ⁠ and ⁠ v , {\displaystyle \mathbf {v} ,} ⁠ over particular subspaces. The singular vectors are the values of ⁠ u {\displaystyle \mathbf {u} } ⁠ and ⁠ v {\displaystyle \mathbf {v} } ⁠ where these maxima are attained. Let ⁠ M {\displaystyle \mathbf {M} } ⁠ denote an ⁠ m × n {\displaystyle m\times n} ⁠ matrix with real entries. Let ⁠ S k − 1 {\displaystyle S^{k-1}} ⁠ be the unit ( k − 1 ) {\displaystyle (k-1)} -sphere in R k {\displaystyle \mathbb {R} ^{k}} , and define σ ( u , v ) = u T M v , {\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )=\mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {v} ,} u ∈ S m − 1 , {\displaystyle \mathbf {u} \in S^{m-1},} v ∈ S n − 1 . {\displaystyle \mathbf {v} \in S^{n-1}.} Consider the function ⁠ σ {\displaystyle \sigma } ⁠ restricted to ⁠ S m − 1 × S n − 1 . {\displaystyle S^{m-1}\times S^{n-1}.} ⁠ Since both ⁠ S m − 1 {\displaystyle S^{m-1}} ⁠ and ⁠ S n − 1 {\displaystyle S^{n-1}} ⁠ are compact sets, their product is also compact. Furthermore, since ⁠ σ {\displaystyle \sigma } ⁠ is continuous, it attains a largest value for at least one pair of vectors ⁠ u {\displaystyle \mathbf {u} } ⁠ in ⁠ S m − 1 {\displaystyle S^{m-1}} ⁠ and ⁠ v {\displaystyle \mathbf {v} } ⁠ in ⁠ S n − 1 .
{\displaystyle S^{n-1}.} ⁠ This largest value is denoted ⁠ σ 1 {\displaystyle \sigma _{1}} ⁠ and the corresponding vectors are denoted ⁠ u 1 {\displaystyle \mathbf {u} _{1}} ⁠ and ⁠ v 1 . {\displaystyle \mathbf {v} _{1}.} ⁠ Since ⁠ σ 1 {\displaystyle \sigma _{1}} ⁠ is the largest value of ⁠ σ ( u , v ) , {\displaystyle \sigma (\mathbf {u} ,\mathbf {v} ),} ⁠ it must be non-negative. If it were negative, changing the sign of either ⁠ u 1 {\displaystyle \mathbf {u} _{1}} ⁠ or ⁠ v 1 {\displaystyle \mathbf {v} _{1}} ⁠ would make it positive and therefore larger. Statement. ⁠ u 1 {\displaystyle \mathbf {u} _{1}} ⁠ and ⁠ v 1 {\displaystyle \mathbf {v} _{1}} ⁠ are left- and right-singular vectors of ⁠ M {\displaystyle \mathbf {M} } ⁠ with corresponding singular value ⁠ σ 1 . {\displaystyle \sigma _{1}.} ⁠ Proof. As in the eigenvalue case, by assumption the two vectors satisfy the Lagrange multiplier equation: ∇ σ = ∇ u T M v − λ 1 ⋅ ∇ u T u − λ 2 ⋅ ∇ v T v {\displaystyle \nabla \sigma =\nabla \mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {v} -\lambda _{1}\cdot \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {u} -\lambda _{2}\cdot \nabla \mathbf {v} ^{\operatorname {T} }\mathbf {v} } After some algebra, this becomes M v 1 = 2 λ 1 u 1 + 0 , M T u 1 = 0 + 2 λ 2 v 1 . {\displaystyle {\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=2\lambda _{1}\mathbf {u} _{1}+0,\\\mathbf {M} ^{\operatorname {T} }\mathbf {u} _{1}&=0+2\lambda _{2}\mathbf {v} _{1}.\end{aligned}}} Multiplying the first equation from the left by ⁠ u 1 T {\displaystyle \mathbf {u} _{1}^{\textrm {T}}} ⁠ and the second equation from the left by ⁠ v 1 T {\displaystyle \mathbf {v} _{1}^{\textrm {T}}} ⁠ and taking ‖ u ‖ = ‖ v ‖ = 1 {\displaystyle \|\mathbf {u} \|=\|\mathbf {v} \|=1} into account gives σ 1 = 2 λ 1 = 2 λ 2 . {\displaystyle \sigma _{1}=2\lambda _{1}=2\lambda _{2}.} Plugging this into the pair of equations above, we have M v 1 = σ 1 u 1 , M T u 1 = σ 1 v 1 .
{\displaystyle {\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=\sigma _{1}\mathbf {u} _{1},\\\mathbf {M} ^{\operatorname {T} }\mathbf {u} _{1}&=\sigma _{1}\mathbf {v} _{1}.\end{aligned}}} This proves the statement. More singular vectors and singular values can be found by maximizing ⁠ σ ( u , v ) {\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )} ⁠ over normalized ⁠ u {\displaystyle \mathbf {u} } ⁠ and ⁠ v {\displaystyle \mathbf {v} } ⁠ which are orthogonal to ⁠ u 1 {\displaystyle \mathbf {u} _{1}} ⁠ and ⁠ v 1 , {\displaystyle \mathbf {v} _{1},} ⁠ respectively. The passage from real to complex is similar to the eigenvalue case. == Calculating the SVD == === One-sided Jacobi algorithm === The one-sided Jacobi algorithm is an iterative method in which a matrix is transformed into a matrix with orthogonal columns. The elementary iteration is given as a Jacobi rotation, M ← M J ( p , q , θ ) , {\displaystyle M\leftarrow MJ(p,q,\theta ),} where the angle θ {\displaystyle \theta } of the Jacobi rotation matrix J ( p , q , θ ) {\displaystyle J(p,q,\theta )} is chosen such that after the rotation the columns with numbers p {\displaystyle p} and q {\displaystyle q} become orthogonal. The indices ( p , q ) {\displaystyle (p,q)} are swept cyclically, ( p = 1 … m , q = p + 1 … m ) {\displaystyle (p=1\dots m,q=p+1\dots m)} , where m {\displaystyle m} is the number of columns. After the algorithm has converged, the singular value decomposition M = U S V T {\displaystyle M=USV^{T}} is recovered as follows: the matrix V {\displaystyle V} is the accumulation of Jacobi rotation matrices, the matrix U {\displaystyle U} is given by normalising the columns of the transformed matrix M {\displaystyle M} , and the singular values are given as the norms of the columns of the transformed matrix M {\displaystyle M} .
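The sweep just described can be sketched in a few lines. The following is a minimal teaching implementation of my own in Python/NumPy (without the convergence thresholds, pivoting, and scaling safeguards of production codes, and assuming the matrix has full column rank so the column norms are non-zero):

```python
import numpy as np

def one_sided_jacobi_svd(M, tol=1e-12, max_sweeps=30):
    """Orthogonalize pairs of columns of M with Jacobi rotations,
    accumulating the rotations into V; U and the singular values
    are then read off the columns of the transformed matrix."""
    A = M.astype(float).copy()
    m, n = A.shape
    V = np.eye(n)
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                off = max(off, abs(gamma))
                if abs(gamma) < tol:
                    continue
                # Rotation angle chosen so that columns p and q become orthogonal.
                zeta = (beta - alpha) / (2.0 * gamma)
                sign = 1.0 if zeta >= 0 else -1.0
                t = sign / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                J = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ J
                V[:, [p, q]] = V[:, [p, q]] @ J
        if off < tol:
            break
    sigma = np.linalg.norm(A, axis=0)   # singular values = column norms
    U = A / sigma                       # normalized columns (assumes full rank)
    return U, sigma, V

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 4))
U, s, V = one_sided_jacobi_svd(M)
assert np.allclose(U * s @ V.T, M)
assert np.allclose(np.sort(s), np.sort(np.linalg.svd(M, compute_uv=False)))
```

Since every update right-multiplies both the working matrix and ⁠ V {\displaystyle V} ⁠ by the same rotation, the product ⁠ U S V T {\displaystyle USV^{T}} ⁠ reproduces the input exactly at every step, converged or not; convergence is only needed for the columns to become orthogonal.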
=== Two-sided Jacobi algorithm === The two-sided Jacobi SVD algorithm—a generalization of the Jacobi eigenvalue algorithm—is an iterative method in which a square matrix is transformed into a diagonal matrix. If the matrix is not square, the QR decomposition is performed first and then the algorithm is applied to the R {\displaystyle R} matrix. The elementary iteration zeroes a pair of off-diagonal elements by first applying a Givens rotation to symmetrize the pair of elements and then applying a Jacobi transformation to zero them, M ← J T G M J {\displaystyle M\leftarrow J^{T}GMJ} where G {\displaystyle G} is the Givens rotation matrix with the angle chosen such that the given pair of off-diagonal elements become equal after the rotation, and where J {\displaystyle J} is the Jacobi transformation matrix that zeroes these off-diagonal elements. The iterations proceed exactly as in the Jacobi eigenvalue algorithm: by cyclic sweeps over all off-diagonal elements. After the algorithm has converged, the resulting diagonal matrix contains the singular values. The matrices U {\displaystyle U} and V {\displaystyle V} are accumulated as follows: U ← U G T J {\displaystyle U\leftarrow UG^{T}J} , V ← V J {\displaystyle V\leftarrow VJ} . === Numerical approach === The singular value decomposition can be computed using the following observations: The left-singular vectors of ⁠ M {\displaystyle \mathbf {M} } ⁠ are a set of orthonormal eigenvectors of ⁠ M M ∗ {\displaystyle \mathbf {M} \mathbf {M} ^{*}} ⁠. The right-singular vectors of ⁠ M {\displaystyle \mathbf {M} } ⁠ are a set of orthonormal eigenvectors of ⁠ M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } ⁠. The non-zero singular values of ⁠ M {\displaystyle \mathbf {M} } ⁠ (found on the diagonal entries of Σ {\displaystyle \mathbf {\Sigma } } ) are the square roots of the non-zero eigenvalues of both ⁠ M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } ⁠ and ⁠ M M ∗ {\displaystyle \mathbf {M} \mathbf {M} ^{*}} ⁠.
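These observations can be confirmed directly in a few lines (an illustrative check of my own using NumPy, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(M)

# Squared singular values are the non-zero eigenvalues of M^T M and M M^T.
eig_MtM = np.sort(np.linalg.eigvalsh(M.T @ M))[::-1]
eig_MMt = np.sort(np.linalg.eigvalsh(M @ M.T))[::-1]
assert np.allclose(s**2, eig_MtM)
assert np.allclose(s**2, eig_MMt[:3])     # the remaining eigenvalues are zero

# Columns of U (rows of Vh) are eigenvectors of M M^T (resp. M^T M).
for i in range(3):
    assert np.allclose(M @ M.T @ U[:, i], s[i]**2 * U[:, i])
    assert np.allclose(M.T @ M @ Vh[i], s[i]**2 * Vh[i])
```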
The SVD of a matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ is typically computed by a two-step procedure. In the first step, the matrix is reduced to a bidiagonal matrix. This takes order ⁠ O ( m n 2 ) {\displaystyle O(mn^{2})} ⁠ floating-point operations (flop), assuming that ⁠ m ≥ n . {\displaystyle m\geq n.} ⁠ The second step is to compute the SVD of the bidiagonal matrix. This step can only be done with an iterative method (as with eigenvalue algorithms). However, in practice it suffices to compute the SVD up to a certain precision, like the machine epsilon. If this precision is considered constant, then the second step takes ⁠ O ( n ) {\displaystyle O(n)} ⁠ iterations, each costing ⁠ O ( n ) {\displaystyle O(n)} ⁠ flops. Thus, the first step is more expensive, and the overall cost is ⁠ O ( m n 2 ) {\displaystyle O(mn^{2})} ⁠ flops (Trefethen & Bau III 1997, Lecture 31). The first step can be done using Householder reflections for a cost of ⁠ 4 m n 2 − 4 n 3 / 3 {\displaystyle 4mn^{2}-4n^{3}/3} ⁠ flops, assuming that only the singular values are needed and not the singular vectors. If ⁠ m {\displaystyle m} ⁠ is much larger than ⁠ n {\displaystyle n} ⁠ then it is advantageous to first reduce the matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ to a triangular matrix with the QR decomposition and then use Householder reflections to further reduce the matrix to bidiagonal form; the combined cost is ⁠ 2 m n 2 + 2 n 3 {\displaystyle 2mn^{2}+2n^{3}} ⁠ flops (Trefethen & Bau III 1997, Lecture 31). The second step can be done by a variant of the QR algorithm for the computation of eigenvalues, which was first described by Golub & Kahan (1965). The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small (Demmel & Kahan 1990). 
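The first of the two steps, reduction to bidiagonal form, can be sketched with explicit Householder reflections. This is an unblocked teaching version of my own, far from the optimized LAPACK routine, assuming ⁠ m ≥ n {\displaystyle m\geq n} ⁠ and real entries:

```python
import numpy as np

def bidiagonalize(M):
    """Reduce M (m >= n, real) to upper-bidiagonal B = U^T M V by
    alternating left and right Householder reflections."""
    B = M.astype(float).copy()
    m, n = B.shape
    U, V = np.eye(m), np.eye(n)

    def reflector(x):
        # Unit Householder vector v with (I - 2 v v^T) x proportional to e_1.
        v = x.copy()
        v[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
        norm = np.linalg.norm(v)
        return v / norm if norm > 0 else v

    for k in range(n):
        # Zero the subdiagonal entries of column k from the left.
        v = reflector(B[k:, k])
        B[k:, :] -= 2.0 * np.outer(v, v @ B[k:, :])
        U[:, k:] -= 2.0 * np.outer(U[:, k:] @ v, v)
        if k < n - 2:
            # Zero row k beyond the superdiagonal from the right.
            w = reflector(B[k, k + 1:])
            B[:, k + 1:] -= 2.0 * np.outer(B[:, k + 1:] @ w, w)
            V[:, k + 1:] -= 2.0 * np.outer(V[:, k + 1:] @ w, w)
    return U, B, V

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 4))
U, B, V = bidiagonalize(M)

assert np.allclose(U @ B @ V.T, M)
# B is upper bidiagonal: only the diagonal and first superdiagonal survive.
assert np.allclose(np.tril(B, -1), 0) and np.allclose(np.triu(B, 2), 0)
# Orthogonal transformations preserve the singular values.
assert np.allclose(np.linalg.svd(B, compute_uv=False),
                   np.linalg.svd(M, compute_uv=False))
```

The second, iterative step (Golub–Kahan/DBDSQR or divide-and-conquer) would then operate on ⁠ B {\displaystyle B} ⁠ alone.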
Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD routine for the computation of the singular value decomposition. The same algorithm is implemented in the GNU Scientific Library (GSL). The GSL also offers an alternative method that uses a one-sided Jacobi orthogonalization in step 2 (GSL Team 2007). This method computes the SVD of the bidiagonal matrix by solving a sequence of ⁠ 2 × 2 {\displaystyle 2\times 2} ⁠ SVD problems, similar to how the Jacobi eigenvalue algorithm solves a sequence of ⁠ 2 × 2 {\displaystyle 2\times 2} ⁠ eigenvalue problems (Golub & Van Loan 1996, §8.6.3). Yet another method for step 2 uses the idea of divide-and-conquer eigenvalue algorithms (Trefethen & Bau III 1997, Lecture 31). There is an alternative way that does not explicitly use the eigenvalue decomposition. Usually the singular value problem of a matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ is converted into an equivalent symmetric eigenvalue problem such as ⁠ M M ∗ , {\displaystyle \mathbf {M} \mathbf {M} ^{*},} ⁠ ⁠ M ∗ M , {\displaystyle \mathbf {M} ^{*}\mathbf {M} ,} ⁠ or [ 0 M M ∗ 0 ] . {\displaystyle {\begin{bmatrix}\mathbf {0} &\mathbf {M} \\\mathbf {M} ^{*}&\mathbf {0} \end{bmatrix}}.} The approaches that use eigenvalue decompositions are based on the QR algorithm, which is well developed, stable, and fast. Note that the singular values are real and the right- and left-singular vectors are not required to form similarity transformations. One can iteratively alternate between the QR decomposition and the LQ decomposition to find the real diagonal Hermitian matrices. The QR decomposition gives ⁠ M ⇒ Q R {\displaystyle \mathbf {M} \Rightarrow \mathbf {Q} \mathbf {R} } ⁠ and the LQ decomposition of ⁠ R {\displaystyle \mathbf {R} } ⁠ gives ⁠ R ⇒ L P ∗ .
{\displaystyle \mathbf {R} \Rightarrow \mathbf {L} \mathbf {P} ^{*}.} ⁠ Thus, at every iteration, we have ⁠ M ⇒ Q L P ∗ , {\displaystyle \mathbf {M} \Rightarrow \mathbf {Q} \mathbf {L} \mathbf {P} ^{*},} ⁠ update ⁠ M ⇐ L {\displaystyle \mathbf {M} \Leftarrow \mathbf {L} } ⁠ and repeat the orthogonalizations. Eventually, this iteration between QR decomposition and LQ decomposition produces left- and right- unitary singular matrices. This approach cannot readily be accelerated, as the QR algorithm can with spectral shifts or deflation. This is because the shift method is not easily defined without using similarity transformations. However, this iterative approach is very simple to implement, so is a good choice when speed does not matter. This method also provides insight into how purely orthogonal/unitary transformations can obtain the SVD. === Analytic result of 2 × 2 SVD === The singular values of a ⁠ 2 × 2 {\displaystyle 2\times 2} ⁠ matrix can be found analytically. Let the matrix be M = z 0 I + z 1 σ 1 + z 2 σ 2 + z 3 σ 3 {\displaystyle \mathbf {M} =z_{0}\mathbf {I} +z_{1}\sigma _{1}+z_{2}\sigma _{2}+z_{3}\sigma _{3}} where z i ∈ C {\displaystyle z_{i}\in \mathbb {C} } are complex numbers that parameterize the matrix, ⁠ I {\displaystyle \mathbf {I} } ⁠ is the identity matrix, and σ i {\displaystyle \sigma _{i}} denote the Pauli matrices. 
Then its two singular values are given by σ ± = | z 0 | 2 + | z 1 | 2 + | z 2 | 2 + | z 3 | 2 ± ( | z 0 | 2 + | z 1 | 2 + | z 2 | 2 + | z 3 | 2 ) 2 − | z 0 2 − z 1 2 − z 2 2 − z 3 2 | 2 = | z 0 | 2 + | z 1 | 2 + | z 2 | 2 + | z 3 | 2 ± 2 ( Re ⁡ z 0 z 1 ∗ ) 2 + ( Re ⁡ z 0 z 2 ∗ ) 2 + ( Re ⁡ z 0 z 3 ∗ ) 2 + ( Im ⁡ z 1 z 2 ∗ ) 2 + ( Im ⁡ z 2 z 3 ∗ ) 2 + ( Im ⁡ z 3 z 1 ∗ ) 2 {\displaystyle {\begin{aligned}\sigma _{\pm }&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm {\sqrt {{\bigl (}|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}{\bigr )}^{2}-|z_{0}^{2}-z_{1}^{2}-z_{2}^{2}-z_{3}^{2}|^{2}}}}}\\&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm 2{\sqrt {(\operatorname {Re} z_{0}z_{1}^{*})^{2}+(\operatorname {Re} z_{0}z_{2}^{*})^{2}+(\operatorname {Re} z_{0}z_{3}^{*})^{2}+(\operatorname {Im} z_{1}z_{2}^{*})^{2}+(\operatorname {Im} z_{2}z_{3}^{*})^{2}+(\operatorname {Im} z_{3}z_{1}^{*})^{2}}}}}\end{aligned}}} == Reduced SVDs == In applications it is quite unusual for the full SVD, including a full unitary decomposition of the null-space of the matrix, to be required. Instead, it is often sufficient (as well as faster, and more economical for storage) to compute a reduced version of the SVD. 
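The closed-form expression above for the ⁠ 2 × 2 {\displaystyle 2\times 2} ⁠ case can be checked against a general-purpose routine. This is a small numerical verification of my own, using the Pauli-matrix parameterization from the previous section:

```python
import numpy as np

# Pauli-matrix parameterization M = z0 I + z1 s1 + z2 s2 + z3 s3.
I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(4)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
M = z[0] * I2 + z[1] * s1 + z[2] * s2 + z[3] * s3

# Closed-form singular values: note sigma+^2 + sigma-^2 = ||M||_F^2
# and sigma+ sigma- = |det M| = |z0^2 - z1^2 - z2^2 - z3^2|.
a = np.sum(np.abs(z) ** 2)
b = np.abs(z[0]**2 - z[1]**2 - z[2]**2 - z[3]**2) ** 2
sigma_plus = np.sqrt(a + np.sqrt(a**2 - b))
sigma_minus = np.sqrt(a - np.sqrt(a**2 - b))

assert np.allclose([sigma_plus, sigma_minus],
                   np.linalg.svd(M, compute_uv=False))
```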
The following can be distinguished for an ⁠ m × n {\displaystyle m\times n} ⁠ matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ of rank ⁠ r {\displaystyle r} ⁠: === Thin SVD === The thin, or economy-sized, SVD of a matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ is given by M = U k Σ k V k ∗ , {\displaystyle \mathbf {M} =\mathbf {U} _{k}\mathbf {\Sigma } _{k}\mathbf {V} _{k}^{*},} where k = min ( m , n ) , {\displaystyle k=\min(m,n),} the matrices ⁠ U k {\displaystyle \mathbf {U} _{k}} ⁠ and ⁠ V k {\displaystyle \mathbf {V} _{k}} ⁠ contain only the first ⁠ k {\displaystyle k} ⁠ columns of ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V , {\displaystyle \mathbf {V} ,} ⁠ and ⁠ Σ k {\displaystyle \mathbf {\Sigma } _{k}} ⁠ contains only the first ⁠ k {\displaystyle k} ⁠ singular values from ⁠ Σ . {\displaystyle \mathbf {\Sigma } .} ⁠ The matrix ⁠ U k {\displaystyle \mathbf {U} _{k}} ⁠ is thus ⁠ m × k , {\displaystyle m\times k,} ⁠ ⁠ Σ k {\displaystyle \mathbf {\Sigma } _{k}} ⁠ is ⁠ k × k {\displaystyle k\times k} ⁠ diagonal, and ⁠ V k ∗ {\displaystyle \mathbf {V} _{k}^{*}} ⁠ is ⁠ k × n . {\displaystyle k\times n.} ⁠ The thin SVD uses significantly less space and computation time if ⁠ k ≪ max ( m , n ) . {\displaystyle k\ll \max(m,n).} ⁠ The first stage in its calculation will usually be a QR decomposition of ⁠ M , {\displaystyle \mathbf {M} ,} ⁠ which can make for a significantly quicker calculation in this case. === Compact SVD === The compact SVD of a matrix ⁠ M {\displaystyle \mathbf {M} } ⁠ is given by M = U r Σ r V r ∗ . {\displaystyle \mathbf {M} =\mathbf {U} _{r}\mathbf {\Sigma } _{r}\mathbf {V} _{r}^{*}.} Only the ⁠ r {\displaystyle r} ⁠ column vectors of ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ r {\displaystyle r} ⁠ row vectors of ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ corresponding to the non-zero singular values ⁠ Σ r {\displaystyle \mathbf {\Sigma } _{r}} ⁠ are calculated. 
The remaining vectors of ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ are not calculated. This is quicker and more economical than the thin SVD if ⁠ r ≪ min ( m , n ) . {\displaystyle r\ll \min(m,n).} ⁠ The matrix ⁠ U r {\displaystyle \mathbf {U} _{r}} ⁠ is thus ⁠ m × r , {\displaystyle m\times r,} ⁠ ⁠ Σ r {\displaystyle \mathbf {\Sigma } _{r}} ⁠ is ⁠ r × r {\displaystyle r\times r} ⁠ diagonal, and ⁠ V r ∗ {\displaystyle \mathbf {V} _{r}^{*}} ⁠ is ⁠ r × n . {\displaystyle r\times n.} ⁠ === Truncated SVD === In many applications the number ⁠ r {\displaystyle r} ⁠ of the non-zero singular values is large, making even the compact SVD impractical to compute. In such cases, the smallest singular values may need to be truncated to compute only ⁠ t ≪ r {\displaystyle t\ll r} ⁠ non-zero singular values. The truncated SVD is no longer an exact decomposition of the original matrix ⁠ M , {\displaystyle \mathbf {M} ,} ⁠ but rather provides the optimal low-rank matrix approximation ⁠ M ~ {\displaystyle {\tilde {\mathbf {M} }}} ⁠ by a matrix of fixed rank ⁠ t {\displaystyle t} ⁠ M ~ = U t Σ t V t ∗ , {\displaystyle {\tilde {\mathbf {M} }}=\mathbf {U} _{t}\mathbf {\Sigma } _{t}\mathbf {V} _{t}^{*},} where matrix ⁠ U t {\displaystyle \mathbf {U} _{t}} ⁠ is ⁠ m × t , {\displaystyle m\times t,} ⁠ ⁠ Σ t {\displaystyle \mathbf {\Sigma } _{t}} ⁠ is ⁠ t × t {\displaystyle t\times t} ⁠ diagonal, and ⁠ V t ∗ {\displaystyle \mathbf {V} _{t}^{*}} ⁠ is ⁠ t × n . {\displaystyle t\times n.} ⁠ Only the ⁠ t {\displaystyle t} ⁠ column vectors of ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ t {\displaystyle t} ⁠ row vectors of ⁠ V ∗ {\displaystyle \mathbf {V} ^{*}} ⁠ corresponding to the ⁠ t {\displaystyle t} ⁠ largest singular values ⁠ Σ t {\displaystyle \mathbf {\Sigma } _{t}} ⁠ are calculated. This can be much quicker and more economical than the compact SVD if ⁠ t ≪ r , {\displaystyle t\ll r,} ⁠ but requires a completely different toolset of numerical solvers.
In applications that require an approximation to the Moore–Penrose inverse of the matrix ⁠ M , {\displaystyle \mathbf {M} ,} ⁠ the smallest singular values of ⁠ M {\displaystyle \mathbf {M} } ⁠ are of interest, which are more challenging to compute than the largest ones. Truncated SVD is employed in latent semantic indexing. == Norms == === Ky Fan norms === The sum of the ⁠ k {\displaystyle k} ⁠ largest singular values of ⁠ M {\displaystyle \mathbf {M} } ⁠ is a matrix norm, the Ky Fan ⁠ k {\displaystyle k} ⁠-norm of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as the operator norm of ⁠ M {\displaystyle \mathbf {M} } ⁠ as a linear operator with respect to the Euclidean norms of ⁠ K m {\displaystyle K^{m}} ⁠ and ⁠ K n . {\displaystyle K^{n}.} ⁠ In other words, the Ky Fan 1-norm is the operator norm induced by the standard ℓ 2 {\displaystyle \ell ^{2}} Euclidean inner product. For this reason, it is also called the operator 2-norm. One can easily verify the relationship between the Ky Fan 1-norm and singular values. It is true in general, for a bounded operator ⁠ M {\displaystyle \mathbf {M} } ⁠ on (possibly infinite-dimensional) Hilbert spaces, that ‖ M ‖ = ‖ M ∗ M ‖ 1 2 . {\displaystyle \|\mathbf {M} \|=\|\mathbf {M} ^{*}\mathbf {M} \|^{\frac {1}{2}}.} But, in the matrix case, ⁠ ( M ∗ M ) 1 / 2 {\displaystyle (\mathbf {M} ^{*}\mathbf {M} )^{1/2}} ⁠ is a normal matrix, so ‖ M ∗ M ‖ 1 / 2 {\displaystyle \|\mathbf {M} ^{*}\mathbf {M} \|^{1/2}} is the largest eigenvalue of ⁠ ( M ∗ M ) 1 / 2 , {\displaystyle (\mathbf {M} ^{*}\mathbf {M} )^{1/2},} ⁠ i.e. the largest singular value of ⁠ M .
{\displaystyle \mathbf {M} .} ⁠ The last of the Ky Fan norms, the sum of all singular values, is the trace norm (also known as the 'nuclear norm'), defined by ‖ M ‖ = Tr ⁡ ( M ∗ M ) 1 / 2 {\displaystyle \|\mathbf {M} \|=\operatorname {Tr} (\mathbf {M} ^{*}\mathbf {M} )^{1/2}} (the eigenvalues of ⁠ M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } ⁠ are the squares of the singular values). === Hilbert–Schmidt norm === The singular values are related to another norm on the space of operators. Consider the Hilbert–Schmidt inner product on the ⁠ n × n {\displaystyle n\times n} ⁠ matrices, defined by ⟨ M , N ⟩ = tr ⁡ ( N ∗ M ) . {\displaystyle \langle \mathbf {M} ,\mathbf {N} \rangle =\operatorname {tr} \left(\mathbf {N} ^{*}\mathbf {M} \right).} So the induced norm is ‖ M ‖ = ⟨ M , M ⟩ = tr ⁡ ( M ∗ M ) . {\displaystyle \|\mathbf {M} \|={\sqrt {\langle \mathbf {M} ,\mathbf {M} \rangle }}={\sqrt {\operatorname {tr} \left(\mathbf {M} ^{*}\mathbf {M} \right)}}.} Since the trace is invariant under unitary equivalence, this shows ‖ M ‖ = | ∑ i σ i 2 {\displaystyle \|\mathbf {M} \|={\sqrt {{\vphantom {\bigg |}}\sum _{i}\sigma _{i}^{2}}}} where ⁠ σ i {\displaystyle \sigma _{i}} ⁠ are the singular values of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ This is called the Frobenius norm, Schatten 2-norm, or Hilbert–Schmidt norm of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ Direct calculation shows that the Frobenius norm of ⁠ M = ( m i j ) {\displaystyle \mathbf {M} =(m_{ij})} ⁠ coincides with: | ∑ i j | m i j | 2 . {\displaystyle {\sqrt {{\vphantom {\bigg |}}\sum _{ij}|m_{ij}|^{2}}}.} In addition, the Frobenius norm and the trace norm (the nuclear norm) are special cases of the Schatten norm. == Variations and generalizations == === Scale-invariant SVD === The singular values of a matrix ⁠ A {\displaystyle \mathbf {A} } ⁠ are uniquely defined and are invariant with respect to left and/or right unitary transformations of ⁠ A . 
{\displaystyle \mathbf {A} .} ⁠ In other words, the singular values of ⁠ U A V , {\displaystyle \mathbf {U} \mathbf {A} \mathbf {V} ,} ⁠ for unitary matrices ⁠ U {\displaystyle \mathbf {U} } ⁠ and ⁠ V , {\displaystyle \mathbf {V} ,} ⁠ are equal to the singular values of ⁠ A . {\displaystyle \mathbf {A} .} ⁠ This is an important property for applications in which it is necessary to preserve Euclidean distances and invariance with respect to rotations. The Scale-Invariant SVD, or SI-SVD, is analogous to the conventional SVD except that its uniquely-determined singular values are invariant with respect to diagonal transformations of ⁠ A . {\displaystyle \mathbf {A} .} ⁠ In other words, the singular values of ⁠ D A E , {\displaystyle \mathbf {D} \mathbf {A} \mathbf {E} ,} ⁠ for invertible diagonal matrices ⁠ D {\displaystyle \mathbf {D} } ⁠ and ⁠ E , {\displaystyle \mathbf {E} ,} ⁠ are equal to the singular values of ⁠ A . {\displaystyle \mathbf {A} .} ⁠ This is an important property for applications for which invariance to the choice of units on variables (e.g., metric versus imperial units) is needed. === Bounded operators on Hilbert spaces === The factorization ⁠ M = U Σ V ∗ {\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}} ⁠ can be extended to a bounded operator ⁠ M {\displaystyle \mathbf {M} } ⁠ on a separable Hilbert space ⁠ H . {\displaystyle H.} ⁠ Namely, for any bounded operator ⁠ M , {\displaystyle \mathbf {M} ,} ⁠ there exist a partial isometry ⁠ U , {\displaystyle \mathbf {U} ,} ⁠ a unitary ⁠ V , {\displaystyle \mathbf {V} ,} ⁠ a measure space ⁠ ( X , μ ) , {\displaystyle (X,\mu ),} ⁠ and a non-negative measurable ⁠ f {\displaystyle f} ⁠ such that M = U T f V ∗ {\displaystyle \mathbf {M} =\mathbf {U} T_{f}\mathbf {V} ^{*}} where ⁠ T f {\displaystyle T_{f}} ⁠ is the multiplication by ⁠ f {\displaystyle f} ⁠ on ⁠ L 2 ( X , μ ) . 
{\displaystyle L^{2}(X,\mu ).} ⁠ This can be shown by mimicking the linear algebraic argument for the matrix case above. ⁠ V T f V ∗ {\displaystyle \mathbf {V} T_{f}\mathbf {V} ^{*}} ⁠ is the unique positive square root of ⁠ M ∗ M , {\displaystyle \mathbf {M} ^{*}\mathbf {M} ,} ⁠ as given by the Borel functional calculus for self-adjoint operators. The reason why ⁠ U {\displaystyle \mathbf {U} } ⁠ need not be unitary is that, unlike the finite-dimensional case, given an isometry ⁠ U 1 {\displaystyle U_{1}} ⁠ with nontrivial kernel, a suitable ⁠ U 2 {\displaystyle U_{2}} ⁠ may not be found such that [ U 1 U 2 ] {\displaystyle {\begin{bmatrix}U_{1}\\U_{2}\end{bmatrix}}} is a unitary operator. As for matrices, the singular value factorization is equivalent to the polar decomposition for operators: we can simply write M = U V ∗ ⋅ V T f V ∗ {\displaystyle \mathbf {M} =\mathbf {U} \mathbf {V} ^{*}\cdot \mathbf {V} T_{f}\mathbf {V} ^{*}} and notice that ⁠ U V ∗ {\displaystyle \mathbf {U} \mathbf {V} ^{*}} ⁠ is still a partial isometry while ⁠ V T f V ∗ {\displaystyle \mathbf {V} T_{f}\mathbf {V} ^{*}} ⁠ is positive. === Singular values and compact operators === The notion of singular values and left/right-singular vectors can be extended to compact operators on Hilbert spaces, as they have a discrete spectrum. If ⁠ T {\displaystyle T} ⁠ is compact, every non-zero ⁠ λ {\displaystyle \lambda } ⁠ in its spectrum is an eigenvalue. Furthermore, a compact self-adjoint operator can be diagonalized by its eigenvectors. If ⁠ M {\displaystyle \mathbf {M} } ⁠ is compact, so is ⁠ M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } ⁠. Applying the diagonalization result, the unitary image of its positive square root ⁠ T f {\displaystyle T_{f}} ⁠ has a set of orthonormal eigenvectors ⁠ { e i } {\displaystyle \{e_{i}\}} ⁠ corresponding to strictly positive eigenvalues ⁠ { σ i } {\displaystyle \{\sigma _{i}\}} ⁠.
For any ⁠ ψ {\displaystyle \psi } ⁠ in ⁠ H , {\displaystyle H,} ⁠ M ψ = U T f V ∗ ψ = ∑ i ⟨ U T f V ∗ ψ , U e i ⟩ U e i = ∑ i σ i ⟨ ψ , V e i ⟩ U e i , {\displaystyle \mathbf {M} \psi =\mathbf {U} T_{f}\mathbf {V} ^{*}\psi =\sum _{i}\left\langle \mathbf {U} T_{f}\mathbf {V} ^{*}\psi ,\mathbf {U} e_{i}\right\rangle \mathbf {U} e_{i}=\sum _{i}\sigma _{i}\left\langle \psi ,\mathbf {V} e_{i}\right\rangle \mathbf {U} e_{i},} where the series converges in the norm topology on ⁠ H . {\displaystyle H.} ⁠ Notice how this resembles the expression from the finite-dimensional case. The numbers ⁠ σ i {\displaystyle \sigma _{i}} ⁠ are called the singular values of ⁠ M , {\displaystyle \mathbf {M} ,} ⁠ and ⁠ { U e i } {\displaystyle \{\mathbf {U} e_{i}\}} ⁠ (resp. ⁠ { V e i } {\displaystyle \{\mathbf {V} e_{i}\}} ⁠) can be considered the left-singular (resp. right-singular) vectors of ⁠ M . {\displaystyle \mathbf {M} .} ⁠ Compact operators on a Hilbert space are the closure of finite-rank operators in the uniform operator topology. The above series expression gives an explicit such representation. An immediate consequence of this is: Theorem. ⁠ M {\displaystyle \mathbf {M} } ⁠ is compact if and only if ⁠ M ∗ M {\displaystyle \mathbf {M} ^{*}\mathbf {M} } ⁠ is compact. == History == The singular value decomposition was originally developed by differential geometers, who wished to determine whether a real bilinear form could be made equal to another by independent orthogonal transformations of the two spaces it acts on. Eugenio Beltrami and Camille Jordan discovered independently, in 1873 and 1874 respectively, that the singular values of the bilinear forms, represented as a matrix, form a complete set of invariants for bilinear forms under orthogonal substitutions. James Joseph Sylvester also arrived at the singular value decomposition for real square matrices in 1889, apparently independently of both Beltrami and Jordan.
Sylvester called the singular values the canonical multipliers of the matrix ⁠ A . {\displaystyle \mathbf {A} .} ⁠ The fourth mathematician to discover the singular value decomposition independently was Autonne in 1915, who arrived at it via the polar decomposition. The first proof of the singular value decomposition for rectangular and complex matrices seems to be by Carl Eckart and Gale J. Young in 1936; they saw it as a generalization of the principal axis transformation for Hermitian matrices. In 1907, Erhard Schmidt defined an analog of singular values for integral operators (which are compact, under some weak technical assumptions); it seems he was unaware of the parallel work on singular values of finite matrices. This theory was further developed by Émile Picard in 1910, who was the first to call the numbers σ k {\displaystyle \sigma _{k}} singular values (or in French, valeurs singulières). Practical methods for computing the SVD date back to Kogbetliantz in 1954–1955 and Hestenes in 1958, closely resembling the Jacobi eigenvalue algorithm, which uses plane rotations or Givens rotations. However, these were replaced by the method of Gene Golub and William Kahan published in 1965, which uses Householder transformations or reflections. In 1970, Golub and Christian Reinsch published a variant of the Golub/Kahan algorithm that is still the one most used today. == See also == == Notes == == References == Banerjee, Sudipto; Roy, Anindya (2014). Linear Algebra and Matrix Analysis for Statistics. Texts in Statistical Science (1st ed.). Chapman and Hall/CRC. ISBN 978-1420095388. Bisgard, James (2021). Analysis and Linear Algebra: The Singular Value Decomposition and Applications. Student Mathematical Library (1st ed.). AMS. ISBN 978-1-4704-6332-8. Chicco, D; Masseroli, M (2015). "Software suite for gene and protein annotation prediction and similarity search". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 12 (4): 837–843.
doi:10.1109/TCBB.2014.2382127. hdl:11311/959408. PMID 26357324. S2CID 14714823. Trefethen, Lloyd N.; Bau III, David (1997). Numerical linear algebra. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-361-9. Demmel, James; Kahan, William (1990). "Accurate singular values of bidiagonal matrices". SIAM Journal on Scientific and Statistical Computing. 11 (5): 873–912. CiteSeerX 10.1.1.48.3740. doi:10.1137/0911052. Golub, Gene H.; Kahan, William (1965). "Calculating the singular values and pseudo-inverse of a matrix". Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis. 2 (2): 205–224. Bibcode:1965SJNA....2..205G. doi:10.1137/0702016. JSTOR 2949777. Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Johns Hopkins. ISBN 978-0-8018-5414-9. GSL Team (2007). "§14.4 Singular Value Decomposition". GNU Scientific Library. Reference Manual. Halldor, Bjornsson and Venegas, Silvia A. (1997). "A manual for EOF and SVD analyses of climate data". McGill University, CCGCR Report No. 97-1, Montréal, Québec, 52pp. Hansen, P. C. (1987). "The truncated SVD as a method for regularization". BIT. 27 (4): 534–553. doi:10.1007/BF01937276. S2CID 37591557. Horn, Roger A.; Johnson, Charles R. (1985). "Section 7.3". Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6. Horn, Roger A.; Johnson, Charles R. (1991). "Chapter 3". Topics in Matrix Analysis. Cambridge University Press. ISBN 978-0-521-46713-1. Samet, H. (2006). Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann. ISBN 978-0-12-369446-1. Strang G. (1998). "Section 6.7". Introduction to Linear Algebra (3rd ed.). Wellesley-Cambridge Press. ISBN 978-0-9614088-5-5. Stewart, G. W. (1993). "On the Early History of the Singular Value Decomposition". SIAM Review. 35 (4): 551–566. CiteSeerX 10.1.1.23.1831. doi:10.1137/1035134. hdl:1903/566. JSTOR 2132388. Wall, Michael E.; Rechtsteiner, Andreas; Rocha, Luis M. 
(2003). "Singular value decomposition and principal component analysis". In D.P. Berrar; W. Dubitzky; M. Granzow (eds.). A Practical Approach to Microarray Data Analysis. Norwell, MA: Kluwer. pp. 91–109. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 2.6". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. == External links == Online SVD calculator
Wikipedia:Singularity (mathematics)#0
In mathematics, a singularity is a point at which a given mathematical object is not defined, or a point where the mathematical object ceases to be well-behaved in some particular way, such as by lacking differentiability or analyticity. For example, the reciprocal function f ( x ) = 1 / x {\displaystyle f(x)=1/x} has a singularity at x = 0 {\displaystyle x=0} , where the value of the function is not defined, since it involves a division by zero. The absolute value function g ( x ) = | x | {\displaystyle g(x)=|x|} also has a singularity at x = 0 {\displaystyle x=0} , since it is not differentiable there. The algebraic curve defined by { ( x , y ) : y 3 − x 2 = 0 } {\displaystyle \left\{(x,y):y^{3}-x^{2}=0\right\}} in the ( x , y ) {\displaystyle (x,y)} coordinate system has a singularity (called a cusp) at ( 0 , 0 ) {\displaystyle (0,0)} . For singularities in algebraic geometry, see singular point of an algebraic variety. For singularities in differential geometry, see singularity theory. == Real analysis == In real analysis, singularities are either discontinuities, or discontinuities of the derivative (sometimes also discontinuities of higher order derivatives). There are four kinds of discontinuities: type I, which has two subtypes, and type II, which can also be divided into two subtypes (though usually is not). These discontinuities are described in terms of one-sided limits: suppose that f ( x ) {\displaystyle f(x)} is a function of a real argument x {\displaystyle x} ; then for any value c {\displaystyle c} of its argument, the left-handed limit, f ( c − ) {\displaystyle f(c^{-})} , and the right-handed limit, f ( c + ) {\displaystyle f(c^{+})} , are defined by: f ( c − ) = lim x → c f ( x ) {\displaystyle f(c^{-})=\lim _{x\to c}f(x)} , constrained by x < c {\displaystyle x<c} and f ( c + ) = lim x → c f ( x ) {\displaystyle f(c^{+})=\lim _{x\to c}f(x)} , constrained by x > c {\displaystyle x>c} .
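These one-sided limits can be estimated numerically for the examples above. The following pure-Python sketch (illustrative only; the step size h and the probe points are arbitrary choices) examines 1/x, |x| and sin(1/x) near x = 0:

```python
import math

# Estimate the one-sided limits f(c-) and f(c+) by evaluating f a
# small step h away from c on each side. Purely heuristic: the step
# size is an arbitrary assumption of this sketch.
def one_sided(f, c, h=1e-9):
    return f(c - h), f(c + h)

# 1/x at 0: the one-sided values diverge with opposite signs.
left, right = one_sided(lambda x: 1.0 / x, 0.0)
print(left, right)            # very large negative, very large positive

# |x| at 0: both one-sided limits are 0 (the function is continuous),
# but the one-sided difference quotients disagree, which is the lack
# of differentiability mentioned above.
g = abs
print(one_sided(g, 0.0))
dq_left = (g(0.0) - g(-1e-9)) / 1e-9
dq_right = (g(1e-9) - g(0.0)) / 1e-9
print(dq_left, dq_right)      # -1.0 and 1.0

# sin(1/x) at 0: samples ever closer to 0 keep oscillating between
# positive and negative values, so no limit exists.
samples = [math.sin(10.0 ** k) for k in range(3, 9)]
print(min(samples), max(samples))
```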
The value f ( c − ) {\displaystyle f(c^{-})} is the value that the function f ( x ) {\displaystyle f(x)} tends towards as the value x {\displaystyle x} approaches c {\displaystyle c} from below, and the value f ( c + ) {\displaystyle f(c^{+})} is the value that the function f ( x ) {\displaystyle f(x)} tends towards as the value x {\displaystyle x} approaches c {\displaystyle c} from above, regardless of the actual value the function has at the point where x = c {\displaystyle x=c} . There are some functions for which these limits do not exist at all. For example, the function g ( x ) = sin ⁡ ( 1 x ) {\displaystyle g(x)=\sin \left({\frac {1}{x}}\right)} does not tend towards anything as x {\displaystyle x} approaches c = 0 {\displaystyle c=0} . The limits in this case are not infinite, but rather undefined: there is no value that g ( x ) {\displaystyle g(x)} settles on. Borrowing from complex analysis, this is sometimes called an essential singularity. The possible cases at a given value c {\displaystyle c} for the argument are as follows. A point of continuity is a value of c {\displaystyle c} for which f ( c − ) = f ( c ) = f ( c + ) {\displaystyle f(c^{-})=f(c)=f(c^{+})} , as one expects of a continuous function. All the values must be finite. If c {\displaystyle c} is not a point of continuity, then a discontinuity occurs at c {\displaystyle c} . A type I discontinuity occurs when both f ( c − ) {\displaystyle f(c^{-})} and f ( c + ) {\displaystyle f(c^{+})} exist and are finite, but at least one of the following three conditions also applies: f ( c − ) ≠ f ( c + ) {\displaystyle f(c^{-})\neq f(c^{+})} ; f ( x ) {\displaystyle f(x)} is not defined at x = c {\displaystyle x=c} ; or f ( c ) {\displaystyle f(c)} has a defined value, which, however, does not match the value of the two limits.
Type I discontinuities can be further distinguished as being one of the following subtypes: A jump discontinuity occurs when f ( c − ) ≠ f ( c + ) {\displaystyle f(c^{-})\neq f(c^{+})} , regardless of whether f ( c ) {\displaystyle f(c)} is defined, and regardless of its value if it is defined. A removable discontinuity occurs when f ( c − ) = f ( c + ) {\displaystyle f(c^{-})=f(c^{+})} , but f ( c ) {\displaystyle f(c)} is either undefined or, if defined, does not match the common value of the two limits. A type II discontinuity occurs when either f ( c − ) {\displaystyle f(c^{-})} or f ( c + ) {\displaystyle f(c^{+})} does not exist (possibly both). This has two subtypes, which are usually not considered separately: An infinite discontinuity is the special case when either the left-hand or right-hand limit does not exist, specifically because it is infinite, and the other limit is either also infinite, or is some well-defined finite number. In other words, the function has an infinite discontinuity when its graph has a vertical asymptote. An essential singularity is a term borrowed from complex analysis (see below). This is the case when one or the other of the limits f ( c − ) {\displaystyle f(c^{-})} or f ( c + ) {\displaystyle f(c^{+})} does not exist, but not because it is an infinite discontinuity. Essential singularities approach no limit, not even if valid answers are extended to include ± ∞ {\displaystyle \pm \infty } . In real analysis, a singularity or discontinuity is a property of a function alone. Any singularities that may exist in the derivative of a function are considered as belonging to the derivative, not to the original function. === Coordinate singularities === A coordinate singularity occurs when an apparent singularity or discontinuity occurs in one coordinate frame, which can be removed by choosing a different frame.
An example of this is the apparent singularity at the 90 degree latitude in spherical coordinates. An object moving due north (for example, along the line 0 degrees longitude) on the surface of a sphere will suddenly experience an instantaneous change in longitude at the pole (in the case of the example, jumping from longitude 0 to longitude 180 degrees). This discontinuity, however, is only apparent; it is an artifact of the coordinate system chosen, which is singular at the poles. A different coordinate system would eliminate the apparent discontinuity (e.g., by replacing the latitude/longitude representation with an n-vector representation). == Complex analysis == In complex analysis, there are several classes of singularities. These include the isolated singularities, the nonisolated singularities, and the branch points. === Isolated singularities === Suppose that f {\displaystyle f} is a function that is complex differentiable in the complement of a point a {\displaystyle a} in an open subset U {\displaystyle U} of the complex numbers C . {\displaystyle \mathbb {C} .} Then: The point a {\displaystyle a} is a removable singularity of f {\displaystyle f} if there exists a holomorphic function g {\displaystyle g} defined on all of U {\displaystyle U} such that f ( z ) = g ( z ) {\displaystyle f(z)=g(z)} for all z {\displaystyle z} in U ∖ { a } . {\displaystyle U\smallsetminus \{a\}.} The function g {\displaystyle g} is a continuous replacement for the function f . {\displaystyle f.} The point a {\displaystyle a} is a pole or non-essential singularity of f {\displaystyle f} if there exists a holomorphic function g {\displaystyle g} defined on U {\displaystyle U} with g ( a ) {\displaystyle g(a)} nonzero, and a natural number n {\displaystyle n} such that f ( z ) = g ( z ) ( z − a ) n {\displaystyle f(z)={\frac {g(z)}{(z-a)^{n}}}} for all z {\displaystyle z} in U ∖ { a } . 
{\displaystyle U\smallsetminus \{a\}.} The least such number n {\displaystyle n} is called the order of the pole. The derivative at a non-essential singularity itself has a non-essential singularity, with n {\displaystyle n} increased by 1 (except if n {\displaystyle n} is 0 so that the singularity is removable). The point a {\displaystyle a} is an essential singularity of f {\displaystyle f} if it is neither a removable singularity nor a pole. The point a {\displaystyle a} is an essential singularity if and only if the Laurent series has infinitely many powers of negative degree. === Nonisolated singularities === Other than isolated singularities, complex functions of one variable may exhibit other singular behaviour. These are termed nonisolated singularities, of which there are two types: Cluster points: limit points of isolated singularities. If they are all poles, despite admitting Laurent series expansions on each of them, then no such expansion is possible at its limit. Natural boundaries: any non-isolated set (e.g. a curve) on which functions cannot be analytically continued around (or outside them if they are closed curves in the Riemann sphere). === Branch points === Branch points are generally the result of a multi-valued function, such as z {\displaystyle {\sqrt {z}}} or log ⁡ ( z ) , {\displaystyle \log(z),} which are defined within a certain limited domain so that the function can be made single-valued within the domain. The cut is a line or curve excluded from the domain to introduce a technical separation between discontinuous values of the function. When the cut is genuinely required, the function will have distinctly different values on each side of the branch cut. The shape of the branch cut is a matter of choice, even though it must connect two different branch points (such as z = 0 {\displaystyle z=0} and z = ∞ {\displaystyle z=\infty } for log ⁡ ( z ) {\displaystyle \log(z)} ) which are fixed in place. 
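The classification of isolated singularities can be probed numerically by watching how |f| grows on circles shrinking toward the point. The sketch below is a heuristic added for illustration (the radii and sample counts are arbitrary choices, and it is not a proof): the slope of log max|f| against log r is about 0 for a removable singularity and about −n for a pole of order n, while for an essential singularity such as exp(1/z) the growth outpaces every power and the estimate diverges as r shrinks.

```python
import cmath
import math

def max_on_circle(f, r, samples=64):
    # Largest value of |f| sampled on the circle |z| = r.
    return max(abs(f(r * cmath.exp(2j * math.pi * k / samples)))
               for k in range(samples))

def order_estimate(f, r1=1e-2, r2=1e-3):
    # Slope of log max|f| versus log r between two small radii,
    # negated so a pole of order n gives roughly +n and a removable
    # singularity gives roughly 0.
    m1, m2 = max_on_circle(f, r1), max_on_circle(f, r2)
    return -(math.log(m2) - math.log(m1)) / (math.log(r2) - math.log(r1))

print(round(order_estimate(lambda z: cmath.sin(z) / z)))   # 0: removable
print(round(order_estimate(lambda z: 1 / z ** 2)))         # 2: pole of order 2
```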
== Finite-time singularity == A finite-time singularity occurs when one input variable is time, and an output variable increases towards infinity at a finite time. These are important in kinematics and partial differential equations; infinities do not occur physically, but the behavior near the singularity is often of interest. Mathematically, the simplest finite-time singularities are power laws for various exponents of the form x − α , {\displaystyle x^{-\alpha },} of which the simplest is hyperbolic growth, where the exponent is (negative) 1: x − 1 . {\displaystyle x^{-1}.} More precisely, in order to get a singularity at positive time as time advances (so the output grows to infinity), one instead uses ( t 0 − t ) − α {\displaystyle (t_{0}-t)^{-\alpha }} (using t for time, reversing direction to − t {\displaystyle -t} so that time increases to infinity, and shifting the singularity forward from 0 to a fixed time t 0 {\displaystyle t_{0}} ). An example would be the bouncing motion of an inelastic ball on a plane. If idealized motion is considered, in which the same fraction of kinetic energy is lost on each bounce, the frequency of bounces becomes infinite, as the ball comes to rest in a finite time. Other examples of finite-time singularities include the various forms of the Painlevé paradox (for example, the tendency of chalk to skip when dragged across a blackboard), and how the precession rate of a coin spun on a flat surface accelerates towards infinity before abruptly stopping (as studied using the Euler's Disk toy). Hypothetical examples include Heinz von Foerster's facetious "Doomsday's equation" (simplistic models yield infinite human population in finite time). == Algebraic geometry and commutative algebra == In algebraic geometry, a singularity of an algebraic variety is a point of the variety where the tangent space may not be regularly defined. The simplest examples of singularities are curves that cross themselves.
But there are other types of singularities, like cusps. For example, the equation y^2 − x^3 = 0 defines a curve that has a cusp at the origin x = y = 0. One could define the x-axis as a tangent at this point, but this definition cannot be the same as the definition at other points. In fact, in this case, the x-axis is a "double tangent." For affine and projective varieties, the singularities are the points where the Jacobian matrix has a rank which is lower than at other points of the variety. An equivalent definition in terms of commutative algebra may be given, which extends to abstract varieties and schemes: A point is singular if the local ring at this point is not a regular local ring. == See also == Catastrophe theory Defined and undefined Degeneracy (mathematics) Hyperbolic growth Movable singularity Pathological (mathematics) Regular singularity Singular solution == References ==
Wikipedia:Singularity spectrum#0
In time series analysis, singular spectrum analysis (SSA) is a nonparametric spectral estimation method. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. Its roots lie in the classical Karhunen (1946)–Loève (1945, 1978) spectral decomposition of time series and random fields and in the Mañé (1981)–Takens (1981) embedding theorem. SSA can be an aid in the decomposition of time series into a sum of components, each having a meaningful interpretation. The name "singular spectrum analysis" relates to the spectrum of eigenvalues in a singular value decomposition of a covariance matrix, and not directly to a frequency domain decomposition. == Brief history == The origins of SSA and, more generally, of subspace-based methods for signal processing, go back to the eighteenth century (Prony's method). A key development was the formulation of the spectral decomposition of the covariance operator of stochastic processes by Kari Karhunen and Michel Loève in the late 1940s (Loève, 1945; Karhunen, 1947). Broomhead and King (1986a, b) and Fraedrich (1986) proposed to use SSA and multichannel SSA (M-SSA) in the context of nonlinear dynamics for the purpose of reconstructing the attractor of a system from measured time series. These authors provided an extension and a more robust application of the idea of reconstructing dynamics from a single time series based on the embedding theorem. Several other authors had already applied simple versions of M-SSA to meteorological and ecological data sets (Colebrook, 1978; Barnett and Hasselmann, 1979; Weare and Nasstrom, 1982). Ghil, Vautard and their colleagues (Vautard and Ghil, 1989; Ghil and Vautard, 1991; Vautard et al., 1992; Ghil et al., 2002) noticed the analogy between the trajectory matrix of Broomhead and King, on the one hand, and the Karhunen–Loeve decomposition (Principal component analysis in the time domain), on the other. 
Thus, SSA can be used as a time-and-frequency domain method for time series analysis — independently from attractor reconstruction and including cases in which the latter may fail. The survey paper of Ghil et al. (2002) is the basis of the § Methodology section of this article. A crucial result of the work of these authors is that SSA can robustly recover the "skeleton" of an attractor, including in the presence of noise. This skeleton is formed by the least unstable periodic orbits, which can be identified in the eigenvalue spectra of SSA and M-SSA. The identification and detailed description of these orbits can provide highly useful pointers to the underlying nonlinear dynamics. The so-called ‘Caterpillar’ methodology is a version of SSA that was developed in the former Soviet Union, independently of the mainstream SSA work in the West. This methodology became known in the rest of the world more recently (Danilov and Zhigljavsky, Eds., 1997; Golyandina et al., 2001; Zhigljavsky, Ed., 2010; Golyandina and Zhigljavsky, 2013; Golyandina et al., 2018). ‘Caterpillar-SSA’ emphasizes the concept of separability, a concept that leads, for example, to specific recommendations concerning the choice of SSA parameters. This method is thoroughly described in § SSA as a model-free tool of this article. == Methodology == In practice, SSA is a nonparametric spectral estimation method based on embedding a time series { X ( t ) : t = 1 , … , N } {\displaystyle \{X(t):t=1,\ldots ,N\}} in a vector space of dimension M {\displaystyle M} . SSA proceeds by diagonalizing the M × M {\displaystyle M\times M} lag-covariance matrix C X {\displaystyle {\textbf {C}}_{X}} of X ( t ) {\displaystyle X(t)} to obtain spectral information on the time series, assumed to be stationary in the weak sense. 
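As a concrete sketch of this lag-covariance construction, the following pure-Python fragment (illustrative only; the toy sinusoid, series length and window width are arbitrary assumptions) builds the M × M matrix both from lagged products, giving a Toeplitz matrix, and from a trajectory matrix of M lag-shifted copies of the series:

```python
import math

# Two standard estimates of the lag-covariance matrix C_X of a series:
#  - the Toeplitz estimate, whose entries depend only on the lag |i - j|;
#  - the estimate C = D^T D / N' built from the N' x M trajectory
#    matrix D of M lag-shifted copies, with N' = N - M + 1.

def toeplitz_cov(x, M):
    N = len(x)
    c = [sum(x[t] * x[t + m] for t in range(N - m)) / (N - m)
         for m in range(M)]
    return [[c[abs(i - j)] for j in range(M)] for i in range(M)]

def trajectory_cov(x, M):
    N = len(x)
    n_prime = N - M + 1
    D = [[x[t + j] for j in range(M)] for t in range(n_prime)]
    return [[sum(row[i] * row[j] for row in D) / n_prime
             for j in range(M)] for i in range(M)]

x = [math.sin(2 * math.pi * t / 11) for t in range(500)]   # toy series
C1 = toeplitz_cov(x, 4)
C2 = trajectory_cov(x, 4)

# The two estimators treat the series ends differently, but for a long
# series their entries nearly coincide.
print(C1[0][0], C2[0][0])
```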
The matrix C X {\displaystyle {\textbf {C}}_{X}} can be estimated directly from the data as a Toeplitz matrix with constant diagonals (Vautard and Ghil, 1989), i.e., its entries c i j {\displaystyle c_{ij}} depend only on the lag | i − j | {\displaystyle |i-j|} : c i j = 1 N − | i − j | ∑ t = 1 N − | i − j | X ( t ) X ( t + | i − j | ) . {\displaystyle c_{ij}={\frac {1}{N-|i-j|}}\sum _{t=1}^{N-|i-j|}X(t)X(t+|i-j|).} An alternative way to compute C X {\displaystyle {\textbf {C}}_{X}} , is by using the N ′ × M {\displaystyle N'\times M} "trajectory matrix" D {\displaystyle {\textbf {D}}} that is formed by M {\displaystyle M} lag-shifted copies of X ( t ) {\displaystyle {\it {X(t)}}} , which are N ′ = N − M + 1 {\displaystyle N'=N-M+1} long; then C X = 1 N ′ D t D . {\displaystyle {\textbf {C}}_{X}={\frac {1}{N'}}{\textbf {D}}^{\rm {t}}{\textbf {D}}.} The M {\displaystyle M} eigenvectors E k {\displaystyle {\textbf {E}}_{k}} of the lag-covariance matrix C X {\displaystyle {\textbf {C}}_{X}} are called temporal empirical orthogonal functions (EOFs). The eigenvalues λ k {\displaystyle \lambda _{k}} of C X {\displaystyle {\textbf {C}}_{X}} account for the partial variance in the direction E k {\displaystyle {\textbf {E}}_{k}} and the sum of the eigenvalues, i.e., the trace of C X {\displaystyle {\textbf {C}}_{X}} , gives the total variance of the original time series X ( t ) {\displaystyle X(t)} . The name of the method derives from the singular values λ k 1 / 2 {\displaystyle \lambda _{k}^{1/2}} of C X . {\displaystyle {\textbf {C}}_{X}.} === Decomposition and reconstruction === Projecting the time series onto each EOF yields the corresponding temporal principal components (PCs) A k {\displaystyle {\textbf {A}}_{k}} : A k ( t ) = ∑ j = 1 M X ( t + j − 1 ) E k ( j ) . 
{\displaystyle A_{k}(t)=\sum _{j=1}^{M}X(t+j-1)E_{k}(j).} An oscillatory mode is characterized by a pair of nearly equal SSA eigenvalues and associated PCs that are in approximate phase quadrature (Ghil et al., 2002). Such a pair can efficiently represent a nonlinear, anharmonic oscillation. This is because a single pair of data-adaptive SSA eigenmodes often captures the basic periodicity of an oscillatory mode better than methods with fixed basis functions, such as the sines and cosines used in the Fourier transform. The window width M {\displaystyle M} determines the longest periodicity captured by SSA. Signal-to-noise separation can be obtained by merely inspecting the slope break in a "scree diagram" of eigenvalues λ k {\displaystyle \lambda _{k}} or singular values λ k 1 / 2 {\displaystyle \lambda _{k}^{1/2}} vs. k {\displaystyle k} . The point k ∗ = S {\displaystyle k^{*}=S} at which this break occurs should not be confused with a "dimension" D {\displaystyle D} of the underlying deterministic dynamics (Vautard and Ghil, 1989). A Monte-Carlo test (Allen and Smith, 1996; Allen and Robertson, 1996; Groth and Ghil, 2015) can be applied to ascertain the statistical significance of the oscillatory pairs detected by SSA. The entire time series or parts of it that correspond to trends, oscillatory modes or noise can be reconstructed by using linear combinations of the PCs and EOFs, which provide the reconstructed components (RCs) R K {\displaystyle {\textbf {R}}_{K}} : R K ( t ) = 1 M t ∑ k ∈ K ∑ j = L t U t A k ( t − j + 1 ) E k ( j ) ; {\displaystyle R_{K}(t)={\frac {1}{M_{t}}}\sum _{k\in {\textit {K}}}\sum _{j={L_{t}}}^{U_{t}}A_{k}(t-j+1)E_{k}(j);} here K {\displaystyle K} is the set of EOFs on which the reconstruction is based.
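A minimal illustration of the EOF/PC machinery on a toy series (pure Python, with power iteration standing in for a full eigendecomposition; the sinusoid, window width and iteration count are arbitrary assumptions of this sketch) extracts the leading EOF of the lag-covariance matrix and projects the series onto it to obtain the leading PC:

```python
import math

def lag_cov(x, M):
    # Toeplitz lag-covariance estimate: entries depend only on |i - j|.
    N = len(x)
    c = [sum(x[t] * x[t + m] for t in range(N - m)) / (N - m)
         for m in range(M)]
    return [[c[abs(i - j)] for j in range(M)] for i in range(M)]

def leading_eigenpair(C, iters=1000):
    # Power iteration for the leading eigenvector of a symmetric
    # positive semidefinite matrix; lam is the Rayleigh quotient.
    M = len(C)
    v = [1.0] * M
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(M)) for i in range(M)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / norm for wi in w]
    lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(M)) for i in range(M))
    return lam, v

window = 20
x = [math.sin(2 * math.pi * t / 12) for t in range(400)]
lam, eof = leading_eigenpair(lag_cov(x, window))

# Leading PC: A_1(t) = sum_j x(t + j - 1) E_1(j). It oscillates with
# the same 12-sample period as the input series.
pc = [sum(x[t + j] * eof[j] for j in range(window))
      for t in range(len(x) - window + 1)]
print(lam)
print(pc[0], pc[12])
```

For a pure sinusoid the leading two eigenvalues are nearly equal, which is exactly the "oscillatory pair" signature described above.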
The values of the normalization factor M t {\displaystyle M_{t}} , as well as of the lower and upper bound of summation L t {\displaystyle L_{t}} and U t {\displaystyle U_{t}} , differ between the central part of the time series and the vicinity of its endpoints (Ghil et al., 2002). === Multivariate extension === Multi-channel SSA (or M-SSA) is a natural extension of SSA to an L {\displaystyle L} -channel time series of vectors or maps with N {\displaystyle N} data points { X l ( t ) : l = 1 , … , L ; t = 1 , … , N } {\displaystyle \{X_{l}(t):l=1,\dots ,L;t=1,\dots ,N\}} . In the meteorological literature, extended EOF (EEOF) analysis is often assumed to be synonymous with M-SSA. The two methods are both extensions of classical principal component analysis (PCA) but they differ in emphasis: EEOF analysis typically utilizes a number L {\displaystyle L} of spatial channels much greater than the number M {\displaystyle M} of temporal lags, thus limiting the temporal and spectral information. In M-SSA, on the other hand, one usually chooses L ≤ M {\displaystyle L\leq M} . Often M-SSA is applied to a few leading PCs of the spatial data, with M {\displaystyle M} chosen large enough to extract detailed temporal and spectral information from the multivariate time series (Ghil et al., 2002). However, Groth and Ghil (2015) have demonstrated possible negative effects of this variance compression on the detection rate of weak signals when the number L {\displaystyle L} of retained PCs becomes too small. This practice can further affect negatively the judicious reconstruction of the spatio-temporal patterns of such weak signals, and Groth et al. (2016) recommend retaining a maximum number of PCs, i.e., L = N {\displaystyle L=N} . Groth and Ghil (2011) have demonstrated that a classical M-SSA analysis suffers from a degeneracy problem, namely the EOFs do not separate well between distinct oscillations when the corresponding eigenvalues are similar in size. 
This problem is a shortcoming of principal component analysis in general, not just of M-SSA in particular. In order to reduce mixture effects and to improve the physical interpretation, Groth and Ghil (2011) have proposed a subsequent VARIMAX rotation of the spatio-temporal EOFs (ST-EOFs) of the M-SSA. To avoid a loss of spectral properties (Plaut and Vautard 1994), they have introduced a slight modification of the common VARIMAX rotation that does take the spatio-temporal structure of ST-EOFs into account. Alternatively, a closed matrix formulation of the algorithm for the simultaneous rotation of the EOFs by iterative SVD decompositions has been proposed (Portes and Aguirre, 2016). M-SSA has two forecasting approaches known as recurrent and vector. The discrepancies between these two approaches are attributable to the organization of the single trajectory matrix X {\displaystyle {\textbf {X}}} of each series into the block trajectory matrix in the multivariate case. Two trajectory matrices can be organized as either vertical (VMSSA) or horizontal (HMSSA) as was recently introduced in Hassani and Mahmoudvand (2013), and it was shown that these constructions lead to better forecasts. Accordingly, we have four different forecasting algorithms that can be exploited in this version of MSSA (Hassani and Mahmoudvand, 2013). === Prediction === In this subsection, we focus on phenomena that exhibit a significant oscillatory component: repetition increases understanding and hence confidence in a prediction method that is closely connected with such understanding. Singular spectrum analysis (SSA) and the maximum entropy method (MEM) have been combined to predict a variety of phenomena in meteorology, oceanography and climate dynamics (Ghil et al., 2002, and references therein). First, the “noise” is filtered out by projecting the time series onto a subset of leading EOFs obtained by SSA; the selected subset should include statistically significant, oscillatory modes. 
Experience shows that this approach works best when the partial variance associated with the pairs of RCs that capture these modes is large (Ghil and Jiang, 1998). The prefiltered RCs are then extrapolated by least-square fitting to an autoregressive model A R [ p ] {\displaystyle AR[p]} , whose coefficients give the MEM spectrum of the remaining “signal”. Finally, the extended RCs are used in the SSA reconstruction process to produce the forecast values. The reason why this approach – via SSA prefiltering, AR extrapolation of the RCs, and SSA reconstruction – works better than the customary AR-based prediction is explained by the fact that the individual RCs are narrow-band signals, unlike the original, noisy time series X ( t ) {\displaystyle X(t)} (Penland et al., 1991; Keppenne and Ghil, 1993). In fact, the optimal order p obtained for the individual RCs is considerably lower than the one given by the standard Akaike information criterion (AIC) or similar ones. === Spatio-temporal gap filling === The gap-filling version of SSA can be used to analyze data sets that are unevenly sampled or contain missing data (Kondrashov and Ghil, 2006; Kondrashov et al. 2010). For a univariate time series, the SSA gap filling procedure utilizes temporal correlations to fill in the missing points. For a multivariate data set, gap filling by M-SSA takes advantage of both spatial and temporal correlations. In either case: (i) estimates of missing data points are produced iteratively, and are then used to compute a self-consistent lag-covariance matrix C X {\displaystyle {\textbf {C}}_{X}} and its EOFs E k {\displaystyle {\textbf {E}}_{k}} ; and (ii) cross-validation is used to optimize the window width M {\displaystyle M} and the number of leading SSA modes to fill the gaps with the iteratively estimated "signal," while the noise is discarded. 
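The AR-extrapolation step used in the SSA–MEM prediction scheme can be sketched in isolation. In the fragment below (illustrative only; a noiseless sinusoid stands in for a narrow-band RC, and the series length and horizon are arbitrary), an AR(2) model is fitted by least squares and then extrapolated. A pure sinusoid obeys the recurrence x_t = 2 cos(ω) x_{t−1} − x_{t−2} exactly, so the fit recovers those coefficients and the forecast tracks the true continuation:

```python
import math

# Fit x_t = a1 * x_{t-1} + a2 * x_{t-2} by least squares, then
# extrapolate. For x(t) = sin(omega * t), the exact coefficients are
# a1 = 2 cos(omega) and a2 = -1.
omega = 2 * math.pi / 12
x = [math.sin(omega * t) for t in range(100)]

# 2x2 normal equations, solved in closed form.
s11 = sum(x[t - 1] * x[t - 1] for t in range(2, 100))
s12 = sum(x[t - 1] * x[t - 2] for t in range(2, 100))
s22 = sum(x[t - 2] * x[t - 2] for t in range(2, 100))
b1 = sum(x[t] * x[t - 1] for t in range(2, 100))
b2 = sum(x[t] * x[t - 2] for t in range(2, 100))
det = s11 * s22 - s12 * s12
a1 = (b1 * s22 - b2 * s12) / det
a2 = (s11 * b2 - s12 * b1) / det
print(a1, 2 * math.cos(omega))   # recovered vs. exact coefficient
print(a2)                        # close to -1

# Extrapolate 20 steps ahead and compare with the true continuation.
y = x[-2:]
for t in range(100, 120):
    y.append(a1 * y[-1] + a2 * y[-2])
err = max(abs(y[2 + k] - math.sin(omega * (100 + k))) for k in range(20))
print(err)
```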
== As a model-free tool == The areas where SSA can be applied are very broad: climatology, marine science, geophysics, engineering, image processing, medicine, econometrics among them. Hence different modifications of SSA have been proposed and different methodologies of SSA are used in practical applications such as trend extraction, periodicity detection, seasonal adjustment, smoothing, noise reduction (Golyandina, et al, 2001). === Basic SSA === SSA can be used as a model-free technique so that it can be applied to arbitrary time series including non-stationary time series. The basic aim of SSA is to decompose the time series into the sum of interpretable components such as trend, periodic components and noise with no a-priori assumptions about the parametric form of these components. Consider a real-valued time series X = ( x 1 , … , x N ) {\displaystyle \mathbb {X} =(x_{1},\ldots ,x_{N})} of length N {\displaystyle N} . Let L {\displaystyle L} ( 1 < L < N ) {\displaystyle \ (1<L<N)} be some integer called the window length and K = N − L + 1 {\displaystyle K=N-L+1} . ==== Main algorithm ==== 1st step: Embedding. Form the trajectory matrix of the series X {\displaystyle \mathbb {X} } , which is the L × K {\displaystyle L\!\times \!K} matrix X = [ X 1 : … : X K ] = ( x i j ) i , j = 1 L , K = [ x 1 x 2 x 3 … x K x 2 x 3 x 4 … x K + 1 x 3 x 4 x 5 … x K + 2 ⋮ ⋮ ⋮ ⋱ ⋮ x L x L + 1 x L + 2 … x N ] {\displaystyle \mathbf {X} =[X_{1}:\ldots :X_{K}]=(x_{ij})_{i,j=1}^{L,K}={\begin{bmatrix}x_{1}&x_{2}&x_{3}&\ldots &x_{K}\\x_{2}&x_{3}&x_{4}&\ldots &x_{K+1}\\x_{3}&x_{4}&x_{5}&\ldots &x_{K+2}\\\vdots &\vdots &\vdots &\ddots &\vdots \\x_{L}&x_{L+1}&x_{L+2}&\ldots &x_{N}\\\end{bmatrix}}} where X i = ( x i , … , x i + L − 1 ) T ( 1 ≤ i ≤ K ) {\displaystyle X_{i}=(x_{i},\ldots ,x_{i+L-1})^{\mathrm {T} }\;\quad (1\leq i\leq K)} are lagged vectors of size L {\displaystyle L} . 
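In code, the embedding step amounts to stacking the K lagged vectors as columns (a sketch assuming NumPy; the series and window length are illustrative):

```python
import numpy as np

def trajectory_matrix(x, L):
    """Form the L x K trajectory matrix of a series x, with K = N - L + 1."""
    N = len(x)
    K = N - L + 1
    # Column i (0-based) holds the lagged vector (x_{i+1}, ..., x_{i+L})
    # in the article's 1-based notation.
    return np.column_stack([x[i:i + L] for i in range(K)])

x = np.arange(1.0, 11.0)          # toy series, N = 10
X = trajectory_matrix(x, L=4)     # a 4 x 7 Hankel matrix
```

The Hankel property is visible in the result: every anti-diagonal of `X` is constant.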
The matrix X {\displaystyle \mathbf {X} } is a Hankel matrix which means that X {\displaystyle \mathbf {X} } has equal elements x i j {\displaystyle x_{ij}} on the anti-diagonals i + j = c o n s t {\displaystyle i+j=\,{\rm {const}}} . 2nd step: Singular Value Decomposition (SVD). Perform the singular value decomposition (SVD) of the trajectory matrix X {\displaystyle \mathbf {X} } . Set S = X X T {\displaystyle \mathbf {S} =\mathbf {X} \mathbf {X} ^{\mathrm {T} }} and denote by λ 1 , … , λ L {\displaystyle \lambda _{1},\ldots ,\lambda _{L}} the eigenvalues of S {\displaystyle \mathbf {S} } taken in the decreasing order of magnitude ( λ 1 ≥ … ≥ λ L ≥ 0 {\displaystyle \lambda _{1}\geq \ldots \geq \lambda _{L}\geq 0} ) and by U 1 , … , U L {\displaystyle U_{1},\ldots ,U_{L}} the orthonormal system of the eigenvectors of the matrix S {\displaystyle \mathbf {S} } corresponding to these eigenvalues. Set d = r a n k ⁡ X = max { i , such that λ i > 0 } {\displaystyle d=\mathop {\mathrm {rank} } \mathbf {X} =\max\{i,\ {\mbox{such that}}\ \lambda _{i}>0\}} (note that d = L {\displaystyle d=L} for a typical real-life series) and V i = X T U i / λ i {\displaystyle V_{i}=\mathbf {X} ^{\mathrm {T} }U_{i}/{\sqrt {\lambda _{i}}}} ( i = 1 , … , d ) {\displaystyle (i=1,\ldots ,d)} . In this notation, the SVD of the trajectory matrix X {\displaystyle \mathbf {X} } can be written as X = X 1 + … + X d , {\displaystyle \mathbf {X} =\mathbf {X} _{1}+\ldots +\mathbf {X} _{d},} where X i = λ i U i V i T {\displaystyle \mathbf {X} _{i}={\sqrt {\lambda _{i}}}U_{i}V_{i}^{\mathrm {T} }} are matrices having rank 1; these are called elementary matrices. The collection ( λ i , U i , V i ) {\displaystyle ({\sqrt {\lambda _{i}}},U_{i},V_{i})} will be called the i {\displaystyle i} th eigentriple (abbreviated as ET) of the SVD. 
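This step can be sketched as follows (assuming NumPy; `numpy.linalg.svd` is applied to the trajectory matrix directly, which yields the same eigentriples as the eigendecomposition of S = XX^T):

```python
import numpy as np

def eigentriples(X):
    """Return the eigentriples (sqrt(lambda_i), U_i, V_i) of the matrix X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = int(np.sum(s > 1e-10 * s[0]))           # numerical rank d of X
    return [(s[i], U[:, i], Vt[i, :]) for i in range(d)]

n = np.arange(50)
x = np.sin(2 * np.pi * 0.1 * n)                 # toy sinusoidal series
L = 20
X = np.column_stack([x[i:i + L] for i in range(len(x) - L + 1)])
ets = eigentriples(X)
# X equals the sum of the rank-one elementary matrices sqrt(lambda_i) U_i V_i^T.
X_sum = sum(s * np.outer(u, v) for s, u, v in ets)
```

For a pure sinusoid the trajectory matrix has rank 2, so exactly two eigentriples restore it.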
Vectors U i {\displaystyle U_{i}} are the left singular vectors of the matrix X {\displaystyle \mathbf {X} } , numbers λ i {\displaystyle {\sqrt {\lambda _{i}}}} are the singular values and provide the singular spectrum of X {\displaystyle \mathbf {X} } ; this gives the name to SSA. Vectors λ i V i = X T U i {\displaystyle {\sqrt {\lambda _{i}}}V_{i}=\mathbf {X} ^{\mathrm {T} }U_{i}} are called vectors of principal components (PCs). 3rd step: Eigentriple grouping. Partition the set of indices { 1 , … , d } {\displaystyle \{1,\ldots ,d\}} into m {\displaystyle m} disjoint subsets I 1 , … , I m {\displaystyle I_{1},\ldots ,I_{m}} . Let I = { i 1 , … , i p } {\displaystyle I=\{i_{1},\ldots ,i_{p}\}} . Then the resultant matrix X I {\displaystyle \mathbf {X} _{I}} corresponding to the group I {\displaystyle I} is defined as X I = X i 1 + … + X i p {\displaystyle \mathbf {X} _{I}=\mathbf {X} _{i_{1}}+\ldots +\mathbf {X} _{i_{p}}} . The resultant matrices are computed for the groups I = I 1 , … , I m {\displaystyle I=I_{1},\ldots ,I_{m}} and the grouped SVD expansion of X {\displaystyle \mathbf {X} } can now be written as X = X I 1 + … + X I m . {\displaystyle \mathbf {X} =\mathbf {X} _{I_{1}}+\ldots +\mathbf {X} _{I_{m}}.} 4th step: Diagonal averaging. Each matrix X I j {\displaystyle \mathbf {X} _{I_{j}}} of the grouped decomposition is hankelized and then the obtained Hankel matrix is transformed into a new series of length N {\displaystyle N} using the one-to-one correspondence between Hankel matrices and time series. Diagonal averaging applied to a resultant matrix X I k {\displaystyle \mathbf {X} _{I_{k}}} produces a reconstructed series X ~ ( k ) = ( x ~ 1 ( k ) , … , x ~ N ( k ) ) {\displaystyle {\widetilde {\mathbb {X} }}^{(k)}=({\widetilde {x}}_{1}^{(k)},\ldots ,{\widetilde {x}}_{N}^{(k)})} . 
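Diagonal averaging can be sketched as follows (assuming NumPy); applied to a matrix that is already Hankel, it simply returns the underlying series:

```python
import numpy as np

def diagonal_averaging(Y):
    """Hankelize an L x K matrix into a series of length N = L + K - 1
    by averaging the entries on each anti-diagonal i + j = const."""
    L, K = Y.shape
    N = L + K - 1
    out, cnt = np.zeros(N), np.zeros(N)
    for i in range(L):
        out[i:i + K] += Y[i]    # spread row i along its anti-diagonals
        cnt[i:i + K] += 1       # number of entries on each anti-diagonal
    return out / cnt

# A Hankel matrix is mapped back to its series exactly.
Y = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0]])
series = diagonal_averaging(Y)
```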
In this way, the initial series x 1 , … , x N {\displaystyle x_{1},\ldots ,x_{N}} is decomposed into a sum of m {\displaystyle m} reconstructed subseries: x n = ∑ k = 1 m x ~ n ( k ) ( n = 1 , 2 , … , N ) . {\displaystyle x_{n}=\sum \limits _{k=1}^{m}{\widetilde {x}}_{n}^{(k)}\ \ (n=1,2,\ldots ,N).} This decomposition is the main result of the SSA algorithm. The decomposition is meaningful if each reconstructed subseries could be classified as a part of either trend or some periodic component or noise. ==== Theory of SSA separability ==== The two main questions which the theory of SSA attempts to answer are: (a) what time series components can be separated by SSA, and (b) how to choose the window length L {\displaystyle L} and make proper grouping for extraction of a desirable component. Many theoretical results can be found in Golyandina et al. (2001, Ch. 1 and 6). Trend (which is defined as a slowly varying component of the time series), periodic components and noise are asymptotically separable as N → ∞ {\displaystyle N\rightarrow \infty } . In practice N {\displaystyle N} is fixed and one is interested in approximate separability between time series components. A number of indicators of approximate separability can be used, see Golyandina et al. (2001, Ch. 1). The window length L {\displaystyle L} determines the resolution of the method: larger values of L {\displaystyle L} provide more refined decomposition into elementary components and therefore better separability. The window length L {\displaystyle L} determines the longest periodicity captured by SSA. Trends can be extracted by grouping of eigentriples with slowly varying eigenvectors. A sinusoid with frequency smaller than 0.5 produces two approximately equal eigenvalues and two sine-wave eigenvectors with the same frequencies and π / 2 {\displaystyle \pi /2} -shifted phases. 
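Putting the four steps together, the following sketch (assuming NumPy; the exponential trend and sine wave are illustrative toy components) separates a slowly varying trend from a periodic component; in this toy example the leading eigentriple captures the trend, and the next two, with nearly equal eigenvalues, capture the sine wave:

```python
import numpy as np

def ssa_decompose(x, L, groups):
    """Basic SSA: one reconstructed series per group of eigentriple indices."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # embedding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)       # SVD
    series = []
    for I in groups:                                       # grouping
        Y = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in I)
        out, cnt = np.zeros(N), np.zeros(N)
        for i in range(L):                                 # diagonal averaging
            out[i:i + K] += Y[i]
            cnt[i:i + K] += 1
        series.append(out / cnt)
    return series

t = np.arange(200)
trend = np.exp(0.005 * t)                 # slowly varying component
seasonal = np.sin(2 * np.pi * t / 12)     # periodic component
x = trend + seasonal
rec_trend, rec_seasonal = ssa_decompose(x, L=60, groups=[[0], [1, 2]])
```

Because the toy series has exact rank 3, the two grouped reconstructions sum back to the original series up to numerical error.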
Separation of two time series components can be considered as extraction of one component in the presence of perturbation by the other component. SSA perturbation theory is developed in Nekrutkin (2010) and Hassani et al. (2011). === Forecasting by SSA === If for some series X {\displaystyle \mathbb {X} } the SVD step in Basic SSA gives d < L {\displaystyle d<L} , then this series is called time series of rank d {\displaystyle d} (Golyandina et al., 2001, Ch.5). The subspace spanned by the d {\displaystyle d} leading eigenvectors is called signal subspace. This subspace is used for estimating the signal parameters in signal processing, e.g. ESPRIT for high-resolution frequency estimation. Also, this subspace determines the linear homogeneous recurrence relation (LRR) governing the series, which can be used for forecasting. Continuation of the series by the LRR is similar to forward linear prediction in signal processing. Let the series be governed by the minimal LRR x n = ∑ k = 1 d b k x n − k {\displaystyle x_{n}=\sum _{k=1}^{d}b_{k}x_{n-k}} . Let us choose L > d {\displaystyle L>d} , U 1 , … , U d {\displaystyle U_{1},\ldots ,U_{d}} be the eigenvectors (left singular vectors of the L {\displaystyle L} -trajectory matrix), which are provided by the SVD step of SSA. Then this series is governed by an LRR x n = ∑ k = 1 L − 1 a k x n − k {\displaystyle x_{n}=\sum _{k=1}^{L-1}a_{k}x_{n-k}} , where ( a L − 1 , … , a 1 ) T {\displaystyle (a_{L-1},\ldots ,a_{1})^{\mathrm {T} }} are expressed through U 1 , … , U d {\displaystyle U_{1},\ldots ,U_{d}} (Golyandina et al., 2001, Ch.5), and can be continued by the same LRR. This provides the basis for SSA recurrent and vector forecasting algorithms (Golyandina et al., 2001, Ch.2). In practice, the signal is corrupted by a perturbation, e.g., by noise, and its subspace is estimated by SSA approximately. 
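A sketch of the recurrent forecasting algorithm (assuming NumPy; the coefficient formula expresses the LRR through the last coordinates of the r leading eigenvectors, following the construction described above):

```python
import numpy as np

def ssa_forecast(x, L, r, steps):
    """Recurrent SSA forecast: continue x by `steps` points using the LRR
    derived from the r leading eigenvectors of the trajectory matrix."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U = np.linalg.svd(X, full_matrices=False)[0][:, :r]
    pi = U[-1, :]                     # last coordinates of the eigenvectors
    nu2 = pi @ pi                     # verticality coefficient, must be < 1
    R = (U[:-1, :] @ pi) / (1 - nu2)  # LRR coefficients (a_{L-1}, ..., a_1)
    y = list(x)
    for _ in range(steps):
        y.append(R @ np.asarray(y[-(L - 1):]))
    return np.asarray(y[N:])

n = np.arange(60)
x = np.sin(2 * np.pi * n / 12)        # rank-2 series, exactly governed by an LRR
forecast = ssa_forecast(x[:48], L=12, r=2, steps=12)
```

For a noiseless series that exactly satisfies an LRR, this continuation reproduces the series; with noise, the estimated subspace and hence the forecast are only approximate.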
Thus, SSA forecasting can be applied for forecasting of a time series component that is approximately governed by an LRR and is approximately separated from the residual. === Multivariate extension === Multi-channel, Multivariate SSA (or M-SSA) is a natural extension of SSA for analyzing multivariate time series, where the size of different univariate series does not have to be the same. The trajectory matrix of a multi-channel time series consists of linked trajectory matrices of the separate time series. The rest of the algorithm is the same as in the univariate case. A system of series can be forecast analogously to the SSA recurrent and vector algorithms (Golyandina and Stepanov, 2005). MSSA has many applications. It is especially popular in analyzing and forecasting economic and financial time series with short and long series lengths (Patterson et al., 2011, Hassani et al., 2012, Hassani and Mahmoudvand, 2013). Another multivariate extension is 2D-SSA, which can be applied to two-dimensional data like digital images (Golyandina and Usevich, 2010). The analogue of the trajectory matrix is constructed by moving 2D windows of size L x × L y {\displaystyle L_{x}\times L_{y}} . ==== MSSA and causality ==== A question that frequently arises in time series analysis is whether one economic variable can help in predicting another economic variable. One way to address this question was proposed by Granger (1969), in which he formalized the causality concept. A comprehensive causality test based on MSSA has recently been introduced for causality measurement. The test is based on the forecasting accuracy and predictability of the direction of change of the MSSA algorithms (Hassani et al., 2011 and Hassani et al., 2012). ==== MSSA and EMH ==== The MSSA forecasting results can be used in examining the efficient-market hypothesis (EMH) controversy.
The EMH suggests that the information contained in the price series of an asset is reflected “instantly, fully, and perpetually” in the asset’s current price. Since the price series and the information contained in it are available to all market participants, no one can benefit by attempting to take advantage of the information contained in the price history of an asset by trading in the markets. This is evaluated using two series with different lengths in a multivariate system in SSA analysis (Hassani et al. 2010). ==== MSSA, SSA and business cycles ==== Business cycles play a key role in macroeconomics and are of interest to a variety of players in the economy, including central banks, policy-makers, and financial intermediaries. MSSA-based methods for tracking business cycles have been recently introduced, and have been shown to allow for a reliable assessment of the cyclical position of the economy in real time (de Carvalho et al., 2012 and de Carvalho and Rua, 2017). ==== MSSA, SSA and unit root ==== SSA's applicability to any kind of stationary or deterministically trending series has been extended to the case of a series with a stochastic trend, also known as a series with a unit root. In Hassani and Thomakos (2010) and Thomakos (2010) the basic theory on the properties and application of SSA in the case of series with a unit root is given, along with several examples. It is shown that SSA in such series produces a special kind of filter, whose form and spectral properties are derived, and that forecasting the single reconstructed component reduces to a moving average. SSA applied to series with a unit root thus provides an 'optimizing' non-parametric framework for smoothing such series. This line of work is also extended to the case of two series, both of which have a unit root but are cointegrated. The application of SSA in this bivariate framework produces a smoothed series of the common root component.
=== Gap-filling === The gap-filling versions of SSA can be used to analyze data sets that are unevenly sampled or contain missing data (Schoellhamer, 2001; Golyandina and Osipov, 2007). Schoellhamer (2001) shows that the straightforward idea of formally calculating approximate inner products, omitting unknown terms, is workable for long stationary time series. Golyandina and Osipov (2007) use the idea of filling in missing entries in vectors taken from the given subspace. Recurrent and vector SSA forecasting can be considered as particular cases of the filling-in algorithms described in the paper. === Detection of structural changes === SSA can be effectively used as a non-parametric method of time series monitoring and change detection. To do that, SSA performs subspace tracking in the following way. SSA is applied sequentially to the initial parts of the series, constructs the corresponding signal subspaces and checks the distances between these subspaces and the lagged vectors formed from the few most recent observations. If these distances become too large, a structural change is suspected to have occurred in the series (Golyandina et al., 2001, Ch.3; Moskvina and Zhigljavsky, 2003). In this way, SSA can be used for change detection not only in trends but also in the variability of the series, in the mechanism that determines dependence between different series and even in the noise structure. The method has proved to be useful in different engineering problems (e.g. Mohammad and Nishida (2011) in robotics), and has been extended to the multivariate case with corresponding analysis of detection delay and false positive rate. == Relation between SSA and other methods == Autoregression A typical model for SSA is x n = s n + e n {\displaystyle x_{n}=s_{n}+e_{n}} , where s n = ∑ k = 1 r a k s n − k {\displaystyle s_{n}=\sum _{k=1}^{r}a_{k}s_{n-k}} (signal satisfying an LRR) and e n {\displaystyle e_{n}} is noise.
The model of AR is x n = ∑ k = 1 r a k x n − k + e n {\displaystyle x_{n}=\sum _{k=1}^{r}a_{k}x_{n-k}+e_{n}} . Although these two models look similar, they are very different. SSA considers AR as a noise component only. AR(1), which is red noise, is a typical model of noise for Monte Carlo SSA (Allen and Smith, 1996). Spectral Fourier Analysis In contrast with Fourier analysis, which uses a fixed basis of sine and cosine functions, SSA uses an adaptive basis generated by the time series itself. As a result, the underlying model in SSA is more general and SSA can extract amplitude-modulated sine wave components with frequencies different from k / N {\displaystyle k/N} . SSA-related methods like ESPRIT can estimate frequencies with higher resolution than spectral Fourier analysis. Linear Recurrence Relations Let the signal be modeled by a series which satisfies a linear recurrence relation s n = ∑ k = 1 r a k s n − k {\displaystyle s_{n}=\sum _{k=1}^{r}a_{k}s_{n-k}} ; that is, a series that can be represented as sums of products of exponential, polynomial and sine wave functions. This includes the sum of damped sinusoids model, whose complex-valued form is s n = ∑ k C k ρ k n e i 2 π ω k n {\displaystyle s_{n}=\sum _{k}C_{k}\rho _{k}^{n}e^{i2\pi \omega _{k}n}} . SSA-related methods allow estimation of the frequencies ω k {\displaystyle \omega _{k}} and exponential factors ρ k {\displaystyle \rho _{k}} (Golyandina and Zhigljavsky, 2013, Sect 3.8). The coefficients C k {\displaystyle C_{k}} can be estimated by the least squares method. An extension of the model, where C k {\displaystyle C_{k}} are replaced by polynomials of n {\displaystyle n} , can also be considered within the SSA-related methods (Badeau et al., 2008). Signal Subspace methods SSA can be considered as a subspace-based method, since it allows estimation of the signal subspace of dimension r {\displaystyle r} by s p a n ⁡ ( U 1 , … , U r ) {\displaystyle \mathop {\mathrm {span} } (U_{1},\ldots ,U_{r})} .
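A minimal sketch of such subspace-based frequency and damping estimation, in the spirit of least-squares ESPRIT (assuming NumPy; the damped sinusoid is illustrative): the shift-invariance of the signal subspace gives a small matrix whose eigenvalues are μ_k = ρ_k e^{i 2π ω_k}:

```python
import numpy as np

def esprit(x, L, r):
    """Estimate damping factors rho_k and frequencies omega_k from the
    r-dimensional signal subspace (least-squares ESPRIT)."""
    K = len(x) - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U = np.linalg.svd(X, full_matrices=False)[0][:, :r]
    # Shift invariance of the signal subspace: U[1:] ~= U[:-1] @ Z.
    Z = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
    mu = np.linalg.eigvals(Z)         # mu_k = rho_k * exp(i 2 pi omega_k)
    return np.abs(mu), np.angle(mu) / (2 * np.pi)

n = np.arange(100)
x = 0.99 ** n * np.sin(2 * np.pi * 0.1 * n)    # damped sinusoid
rho, omega = esprit(x, L=30, r=2)
```

A real damped sinusoid yields a conjugate pair of eigenvalues, so the frequencies come out as ±ω with equal damping factors.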
State Space Models The main model behind SSA is x n = s n + e n {\displaystyle x_{n}=s_{n}+e_{n}} , where s n = ∑ k = 1 r a k s n − k {\displaystyle s_{n}=\sum _{k=1}^{r}a_{k}s_{n-k}} and e n {\displaystyle e_{n}} is noise. Formally, this model belongs to the general class of state space models. The specifics of SSA lie in the facts that parameter estimation is a problem of secondary importance in SSA, and that the data analysis procedures in SSA are nonlinear, as they are based on the SVD of either the trajectory matrix or the lag-covariance matrix. Regression SSA is able to extract polynomial and exponential trends. However, unlike regression, SSA does not assume any parametric model, which may give a significant advantage when exploratory data analysis is performed with no obvious model in hand (Golyandina et al., 2001, Ch.1). Linear Filters The reconstruction of the series by SSA can be considered as adaptive linear filtration. If the window length L {\displaystyle L} is small, then each eigenvector U i = ( u 1 , … , u L ) T {\displaystyle U_{i}=(u_{1},\ldots ,u_{L})^{\mathrm {T} }} generates a linear filter of width 2 L − 1 {\displaystyle 2L-1} for reconstruction of the middle of the series x ~ s {\displaystyle {\widetilde {x}}_{s}} , L ≤ s ≤ K {\displaystyle L\leq s\leq K} . The filtration is non-causal. However, the so-called Last-point SSA can be used as a causal filter (Golyandina and Zhigljavsky 2013, Sect. 3.9). Density Estimation Since SSA can be used as a method of data smoothing, it can be used as a method of non-parametric density estimation (Golyandina et al., 2012). == See also == Multitaper method Short-time Fourier transform Spectral density estimation == References == Akaike, H. (1969): "Fitting autoregressive models for prediction," Ann. Inst. Stat. Math., 21, 243–247. Allen, M.R., and A.W. Robertson (1996): "Distinguishing modulated oscillations from coloured noise in multivariate datasets", Clim. Dyn., 12, 775–784. Allen, M.R. and L.A.
Smith (1996) "Monte Carlo SSA: detecting irregular oscillations in the presence of colored noise". Journal of Climate, 9 (12), 3373–3404. Badeau, R., G. Richard, and B. David (2008): "Performance of ESPRIT for Estimating Mixtures of Complex Exponentials Modulated by Polynomials". IEEE Transactions on Signal Processing, 56(2), 492–504. Barnett, T. P., and K. Hasselmann (1979): "Techniques of linear prediction, with application to oceanic and atmospheric fields in the tropical Pacific," Rev. Geophys., 17, 949–968. Bozzo, E., R. Carniel and D. Fasino (2010): "Relationship between singular spectrum analysis and Fourier analysis: Theory and application to the monitoring of volcanic activity", Comput. Math. Appl. 60(3), 812–820. Broomhead, D.S., and G.P. King (1986a): "Extracting qualitative dynamics from experimental data", Physica D, 20, 217–236. Broomhead, D.S., and G. P. King (1986b): "On the qualitative analysis of experimental dynamical systems". Nonlinear Phenomena and Chaos, Sarkar S (Ed.), Adam Hilger, Bristol, 113–144. Colebrook, J. M. (1978): "Continuous plankton records: Zooplankton and environment, Northeast Atlantic and North Sea," Oceanol. Acta, 1, 9–23. Danilov, D. and Zhigljavsky, A. (Eds.) (1997): Principal Components of Time Series: the Caterpillar method, University of St. Petersburg Press. (In Russian.) de Carvalho, M., Rodrigues, P. C. and Rua, A. (2012): "Tracking the US business cycle with a singular spectrum analysis". Econ. Lett., 114, 32–35. de Carvalho, M., and Rua, A. (2017): "Real-time nowcasting the US output gap: Singular spectrum analysis at work". Int. J. Forecasting, 33, 185–198. Elsner, J.B. and Tsonis, A.A. (1996): Singular Spectrum Analysis. A New Tool in Time Series Analysis, Plenum Press. Fraedrich, K. (1986) "Estimating dimensions of weather and climate attractors". J. Atmos. Sci.
43, 419–432. Ghil, M., and R. Vautard (1991): "Interdecadal oscillations and the warming trend in global temperature time series", Nature, 350, 324–327. Ghil, M. and Jiang, N. (1998): "Recent forecast skill for the El Niño/Southern Oscillation ", Geophys. Res. Lett., 25, 171–174, 1998. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1–3.41. Golyandina, N., A. Korobeynikov and A. Zhigljavsky (2018): Singular Spectrum Analysis with R. Springer Verlag. ISBN 3662573784. Golyandina, N., V. Nekrutkin and A. Zhigljavsky (2001): Analysis of Time Series Structure: SSA and related techniques. Chapman and Hall/CRC. ISBN 1-58488-194-1. Golyandina, N., and E. Osipov (2007) "The ‘Caterpillar’-SSA method for analysis of time series with missing values", J. Stat. Plan. Inference 137(8), 2642–2653. Golyandina, N., A. Pepelyshev and A. Steland (2012): "New approaches to nonparametric density estimation and selection of smoothing parameters", Comput. Stat. Data Anal. 56(7), 2206–2218. Golyandina, N. and D. Stepanov (2005): "SSA-based approaches to analysis and forecast of multidimensional time series". In: Proceedings of the 5th St.Petersburg Workshop on Simulation, June 26-July 2, 2005, St. Petersburg State University, St. Petersburg, pp. 293–298. Golyandina, N. and K. Usevich (2010): "2D-extension of Singular Spectrum Analysis: algorithm and elements of theory". In: Matrix Methods: Theory, Algorithms and Applications (Eds. V.Olshevsky and E.Tyrtyshnikov). World Scientific Publishing, 449–473. Golyandina, N., and A. Zhigljavsky (2013) Singular Spectrum Analysis for time series. Springer Briefs in Statistics, Springer, ISBN 978-3-642-34912-6. Groth, A., Feliks, Y., Kondrashov, D., and Ghil, M. (2016): "Interannual variability in the North Atlantic ocean's temperature field and its association with the wind stress forcing", Journal of Climate, doi:10.1175/jcli-d-16-0370.1. 
Groth, A. and M. Ghil (2011): "Multivariate singular spectrum analysis and the road to phase synchronization", Physical Review E 84, 036206, doi:10.1103/PhysRevE.84.036206. Groth, A. and M. Ghil (2015): "Monte Carlo Singular Spectrum Analysis (SSA) revisited: Detecting oscillator clusters in multivariate datasets", Journal of Climate, 28, 7873–7893, doi:10.1175/JCLI-D-15-0100.1. Harris, T. and H. Yan (2010): "Filtering and frequency interpretations of singular spectrum analysis". Physica D 239, 1958–1967. Hassani, H. and D. Thomakos (2010): "A Review on Singular Spectrum Analysis for Economic and Financial Time Series". Statistics and Its Interface 3(3), 377–397. Hassani, H., A. Soofi and A. Zhigljavsky (2011): "Predicting Daily Exchange Rate with Singular Spectrum Analysis". Nonlinear Analysis: Real World Applications 11, 2023–2034. Hassani, H., Z. Xu and A. Zhigljavsky (2011): "Singular spectrum analysis based on the perturbation theory". Nonlinear Analysis: Real World Applications 12 (5), 2752–2766. Hassani, H., S. Heravi and A. Zhigljavsky (2012): "Forecasting UK industrial production with multivariate singular spectrum analysis". Journal of Forecasting 10.1002/for.2244 Hassani, H., A. Zhigljavsky, K. Patterson and A. Soofi (2011): "A comprehensive causality test based on the singular spectrum analysis". In: Illari, P.M., Russo, F., Williamson, J. (eds.) Causality in Science, 1st edn., p. 379. Oxford University Press, London. Hassani, H., and Mahmoudvand, R. (2013). Multivariate Singular Spectrum Analysis: A General View and New Vector Forecasting Approach. International Journal of Energy and Statistics 1(1), 55–83. Keppenne, C. L. and M. Ghil (1993): "Adaptive filtering and prediction of noisy multivariate signals: An application to subannual variability in atmospheric angular momentum," Intl. J. Bifurcation & Chaos, 3, 625–634. Kondrashov, D., and M. Ghil (2006): "Spatio-temporal filling of missing points in geophysical data sets", Nonlin.
Processes Geophys., 13, 151–159. Kondrashov, D., Y. Shprits, M. Ghil, 2010: " Gap Filling of Solar Wind Data by Singular Spectrum Analysis," Geophys. Res. Lett, 37, L15101, Mohammad, Y., and T. Nishida (2011) "On comparing SSA-based change point discovery algorithms". IEEE SII, 938–945. Moskvina, V., and A. Zhigljavsky (2003) "An algorithm based on singular spectrum analysis for change-point detection". Commun Stat Simul Comput 32, 319–352. Nekrutkin, V. (2010) "Perturbation expansions of signal subspaces for long signals". J. Stat. Interface 3, 297–319. Patterson, K., H. Hassani, S. Heravi and A. Zhigljavsky (2011) "Multivariate singular spectrum analysis for forecasting revisions to real-time data". Journal of Applied Statistics 38 (10), 2183-2211. Penland, C., Ghil, M., and Weickmann, K. M. (1991): "Adaptive filtering and maximum entropy spectra, with application to changes in atmospheric angular momentum," J. Geophys. Res., 96, 22659–22671. Pietilä, A., M. El-Segaier, R. Vigário and E. Pesonen (2006) "Blind source separation of cardiac murmurs from heart recordings". In: Rosca J, et al. (eds) Independent Component Analysis and Blind Signal Separation, Lecture Notes in Computer Science, vol 3889, Springer, pp 470–477. Portes, L. L. and Aguirre, L. A. (2016): "Matrix formulation and singular-value decomposition algorithm for structured varimax rotation in multivariate singular spectrum analysis", Physical Review E, 93, 052216, doi:10.1103/PhysRevE.93.052216. de Prony, G. (1795) "Essai expérimental et analytique sur les lois de la dilatabilité des fluides élastiques et sur celles de la force expansive de la vapeur de l’eau et la vapeur de l’alkool à différentes températures". J. de l’Ecole Polytechnique, 1(2), 24–76. Sanei, S., and H. Hassani (2015) Singular Spectrum Analysis of Biomedical Signals. CRC Press, ISBN 9781466589278 - CAT# K20398. Schoellhamer, D. (2001) "Singular spectrum analysis for time series with missing data". Geophys. Res. Lett. 
28(16), 3187–3190. Thomakos, D. (2010) "Median Unbiased Optimal Smoothing and Trend Extraction". Journal of Modern Applied Statistical Methods 9, 144–159. Vautard, R., and M. Ghil (1989): "Singular spectrum analysis in nonlinear dynamics, with applications to paleoclimatic time series", Physica D, 35, 395–424. Vautard, R., Yiou, P., and M. Ghil (1992): "Singular-spectrum analysis: A toolkit for short, noisy chaotic signals", Physica D, 58, 95–126. Weare, B. C., and J. N. Nasstrom (1982): "Examples of extended empirical orthogonal function analyses," Mon. Weather Rev., 110, 784–812. Zhigljavsky, A. (Guest Editor) (2010) "Special issue on theory and practice in singular spectrum analysis of time series". Stat. Interface 3(3) == External links == kSpectra Toolkit for Mac OS X from SpectraWorks. Caterpillar-SSA Papers and software from Gistat Group. Efficient implementation of SSA in R Examples in R with the Rssa package Applied SSA in R SSA and Phase Synchronisation in R Multivariate singular spectrum filter for tracking business cycles Singular Spectrum Analysis Excel Demo With VBA Singular Spectrum Analysis tutorial with Matlab Multichannel Singular Spectrum Analysis tutorial with Matlab Singular Spectrum Analysis in Julia
Wikipedia:Sinān ibn al-Fatḥ#0
Sinān ibn al-Fatḥ was an Arab mathematician from Ḥarrān, who probably lived in the first half of the 10th century. Ibn an-Nadīm lists the following works of his: Kitāb at-Taḫt fi l-ḥisāb al-hindī ("Book of the Table on the Indian Calculation") Kitāb al-Ğamʿ wa-t-tafrīq ("Book of Addition and Subtraction") Kitāb Šarḥ al-Ğamʿ wa-t-tafrīq ("Commentary on the Book of Addition and Subtraction") Kitāb Ḥisāb al-mukaʿʿabāt ("Book on the Cubic Calculation") Kitāb Šarḥ al-ğabr wa-l-muqābala li-l-Ḫwārizmī ("Commentary on the Book of Balancing and Restoration by al-Ḫwārizmī") == References ==
Wikipedia:Sir Isaac Newton Sixth Form#0
Sir Isaac Newton Sixth Form is a specialist maths and science sixth form with free school status located in Norwich, owned by the Inspiration Trust. It has the capacity for 480 students aged 16–19. It specialises in mathematics and science. == History == Prior to becoming a sixth form college, the building functioned as a fire station serving the central Norwich area until August 2011, when it closed down. Two years later the sixth form was created within the empty building, with various additions made to the existing structure. The sixth form was ranked the 7th best state sixth form in England by The Times in 2022. == Curriculum == At Sir Isaac Newton Sixth Form, students can study a choice of maths, further maths, core maths, biology, chemistry, physics, computer science, environmental science or psychology. Additionally, students can also study any of the subjects on offer at the partner free school Jane Austen College, also located in Norwich and specialising in humanities, arts and English. == References == == External links == Official website Ofsted reports
Wikipedia:Siraj al-Din al-Sajawandi#0
Sirāj ud-Dīn Muhammad ibn Muhammad ibn 'Abd ur-Rashīd Sajāwandī (Persian: محمد ابن محمد ابن عبدالرشید سجاوندی), also known as Abū Tāhir Muhammad al-Sajāwandī al-Hanafī (Arabic: ابی طاهر محمد السجاوندي الحنفي) and by the honorific Sirāj ud-Dīn (سراج الدین, "lamp of the faith") (died c. 1203 CE or 600 AH), was a 12th-century Hanafi scholar of Islamic inheritance jurisprudence, mathematics, astrology and geography. He is primarily known for his work Kitāb al-Farāʼiḍ al-Sirājīyah (Arabic: کتاب الفرائض السراجیه), commonly known simply as "the Sirājīyah", which is a principal work on Hanafi inheritance law. The work was translated into English by Sir William Jones in 1792 for subsequent use in the courts of British India. He was the grand-nephew of qari Muhammad ibn Tayfour Sajawandi. He lies buried in the Ziārat-e Hazrat-o 'Āshiqān wa Ārifān in Sajawand. == Name == His full name is Sirāj ud-Dīn Abū Tāhir Muḥammad Ibn Muhammad ibn 'Abd ur-Rashīd ibn Tayfoūr Sajāwandī (Persian: سراج الدین محمد سجاوندی). His nasab, Ibn Muhammad ibn 'Abd ur-Rashīd ibn Tayfoūr, refers to him being the "son of Muhammad, son of 'Abd ur-Rashīd, son of Tayfour". Sajāwandī is his nisbah, meaning "from Sajawand". He is also known by the teknonym Abū Tāhir, meaning "father of Tahir". == Works == Kitāb al-Farāʼiẓ al-Sirājīyah (The Sirajite Book of Inheritance Laws, کتاب الفرائض السراجیه) a.k.a. al-Sirājīyah ("The Sirajite") al-Tajnīs Fī al-Hasāb (The Analogy for the Calculations, کتاب التجنیس فی الحساب) Resālat Fī al-Jabr wa al-Muqābilah (Treatise on Algebra, رسالة فی الجبر و المقابله) == References == == External links == Al Sirajiyyah: Or the Mahommedan Law of Inheritance. Jones, William (Calcutta, 1792)
Wikipedia:Sivaguru S. Sritharan#0
Sivaguru S. Sritharan (also known as S. S. Sritharan) is an American aerodynamicist and mathematician. Sritharan served in civilian universities such as the University of Southern California and the University of Wyoming as a faculty member and head of department, in the Department of Defense (U. S. Navy and U. S. Air Force) in various capacities ranging from scientist to leadership roles, and held visiting positions at several international institutions. He served as the vice chancellor at the Ramaiah University of Applied Sciences in Bengaluru, India. == Education == Sritharan had his high schooling at Jaffna Central College. He then joined the University of Sri Lanka (Peradeniya) and obtained a BSc (Honors) degree in mechanical engineering. He obtained a Master of Science degree in aeronautics and astronautics from the University of Washington and a master's degree and Ph.D. in applied mathematics from the University of Arizona. == Career == Sritharan served as the first provost and vice chancellor of the Air Force Institute of Technology at Dayton, Ohio and as the dean of the Graduate School of Engineering and Applied Sciences at the Naval Postgraduate School, Monterey, California. He was a professor and head of the Department of Mathematics at the University of Wyoming and head of the Science and Technology Branch at the Naval Information Warfare Systems Command in San Diego. == Contributions == Sritharan is known for his research contributions in rigorous mathematical theory, optimal control and stochastic analysis of fluid mechanics and magneto-hydrodynamics. His notable contributions include: 1. Developing the dynamic programming method for the equations of fluid dynamics. This subject is closely related to reinforcement learning in the language of machine learning. 2. The first complete proof of Pontryagin's maximum principle for fluid dynamic equations with state constraints, as a joint work with UCLA mathematician Hector O. Fattorini. 3.
Developing robust (H-infinity) control theory for fluid dynamics, as a joint work with Romanian mathematician Viorel P. Barbu. 4. First rigorous theory establishing a direct stochastic analogue of the famous Jacques-Louis Lions and G. Prodi (1959) existence and uniqueness theorem for the two-dimensional Navier-Stokes equation, as a joint work with J. L. Menaldi, utilizing a subtle local monotonicity property. 5. Proving the Large Deviation Principle for the stochastic Navier-Stokes equation, as a joint work with P. Sundar, to estimate the probability of rare events. == Bibliography == Sritharan, S.S. (2019), Invariant Manifold Theory for Hydrodynamic Transition, Courier Dover Publications, ISBN 9780486828282 Sritharan, S.S. (1998), Optimal Control of Viscous Flow, SIAM, ISBN 9780898714067 == References ==
Wikipedia:Sixth Term Examination Paper#0
The Sixth Term Examination Papers in Mathematics, often referred to as STEP, is currently a university admissions test for undergraduate courses with significant mathematical content - most notably for Mathematics at the University of Cambridge. Starting from 2024, STEP will be administered by OCR, replacing CAAT, which was responsible for administering STEP in previous years. Because it is sat after the reply date for universities in the UK, STEP is typically taken as part of a conditional offer for an undergraduate place. There are also a small number of candidates who sit STEP as a challenge. The papers are designed to test candidates' ability to answer questions similar in style to those of undergraduate Mathematics. The official users of STEP in Mathematics at present are the University of Cambridge, Imperial College London, and the University of Warwick. Since the 2025 entry application cycle, the STEP exams have been superseded by the TMUA exam at Imperial College London and the University of Warwick. Candidates applying to study mathematics at the University of Cambridge are almost always required to take STEP as part of the terms of their conditional offer. In addition, other courses at Cambridge with a large mathematics component, such as Economics and Engineering, occasionally require STEP. Candidates applying to study Mathematics or closely related subjects at the University of Warwick can take STEP as part of their offer. Imperial College London may require it for Computing applicants as well as Mathematics applicants who either did not take MAT or achieved a borderline score in it. A typical STEP offer for a candidate applying to read mathematics at the University of Cambridge would be at least a grade 1 in both STEP 2 and STEP 3, though - depending on individual circumstances - some colleges may only require a grade 1 in either STEP. 
Candidates applying to the University of Warwick to read mathematics, or joint subjects such as MORSE, can use a grade 2 from either STEP as part of their offer. Imperial typically requires a grade 2 in STEP 2 and/or STEP 3. == History == Before 2003, STEP was available for a wide range of subjects. In 1989, for instance, the full list of subjects offered was: Biology, Chemistry, Economics, English Literature, French, General Studies, Geography, Geology, German, Greek, History, Italian, Latin, Mathematics, Further Mathematics, Music, Physics, Religious Studies, Russian, and Spanish. The STEP papers in Mathematics are the only ones now in use. Two STEP Mathematics papers (three prior to the discontinuation of STEP 1) are set each year, and both are sat during the school summer examination cycle (usually in June). == Format == Until 2019, there were three STEPs: STEP 1, STEP 2 and STEP 3. Since the academic year 2019/20, STEP 1 has been phased out. There was no STEP 1 set in 2020 due to the COVID-19 pandemic, and it was later announced that from 2021, STEP 1 would no longer be set, with only STEP 2 and STEP 3 being available. The last STEP 1 was held in 2019. Candidates may enter for as many papers as they wish, although this is often dictated by the STEP offers they hold. Each paper offers a selection of questions and there is no restriction on which can be answered. For each paper, candidates have three hours to complete their solutions. Whilst students are permitted to answer as many questions as they choose, they are advised to attempt no more than six, and their final grade is based on their six best question solutions. Each question is worth 20 marks, and so the maximum a candidate can score is 120. For examinations up to and including the 2018 papers, the specification for STEP 1 and STEP 2 was based on Mathematics A Level content while the syllabus for STEP 3 was based on Further Mathematics A Level. The questions on STEP 2 and 3 were of about the same difficulty. 
Both STEP 2 and STEP 3 were harder than STEP 1. For the 2019 examinations onwards, the specifications have been updated to reflect the reforms in A Level Mathematics and Further Mathematics; in addition, the number of questions in each paper has been reduced. Specifically: The STEP 1 specification was based on A Level Mathematics, with some additions and modifications. The paper comprised 11 questions: 8 pure, and 3 further questions on mechanics and probability/statistics, with at least one question of the 3 on mechanics and at least one on probability/statistics. The June 2019 paper was the only STEP 1 paper to be sat under the new syllabus before STEP 1 was retired. The STEP 2 specification is based on A Level Mathematics and AS Level Further Mathematics, with some additions and modifications. The paper comprises 12 questions: 8 pure, 2 mechanics, and 2 probability/statistics. The STEP 3 specification is based on A Level Mathematics and A Level Further Mathematics, with some additions and modifications. The paper comprises 12 questions: 8 pure, 2 mechanics, and 2 probability/statistics. == Practicalities == Since June 2009, graph paper has not been allowed in STEP as the test requires only sketches, not detailed graphs. Instead, all graphs should be sketched inside the answer booklets provided as part of a candidate's solution. Since June 2018, the format of the answer booklet for the STEP Mathematics examinations has been updated to ensure that the paper is fully anonymised before it is marked. Candidates are issued with a 44-page booklet, of which 40 pages are available for writing out solutions and for rough work. Only one booklet per candidate is allowed unless a further booklet is required and has been formally requested as a result of specific access arrangements. Candidates are advised to write their answers in black ink and draw pictures in pencil, although some flexibility is permitted with this. 
Candidates should not use green or red pen at any stage. Calculators may not be used during STEP. Rulers, protractors, and compasses can be taken into the examination. Candidates who do not have English as a first language were allowed to use bilingual dictionaries until 2023; these are no longer permitted. A formulae booklet was available to all candidates for all examinations up to and including those in 2018. As of 2019, candidates are no longer issued with a formulae booklet; instead they are expected to recall, or know how to derive quickly, standard formulae. All the required standard formulae are given in an appendix to the new specification. == Marking == STEP is marked by teams of mathematicians specially trained for the purpose. All the markers have Mathematics degrees and most are reading for PhDs at Cambridge. Each question is marked by a small team who coordinate to ensure their question is marked fairly and that all correct solutions are given appropriate marks. Markers are closely supervised by a team of marking supervisors, usually senior teachers, who are responsible for the mark scheme, and by a senior Mathematics assessment expert. All non-crossed-out work is assessed and a candidate's final score is based on their six highest-scoring solutions. All papers are checked at least twice to ensure that all of a candidate's non-crossed-out solutions have been assessed. A candidate's marks are then independently entered twice into a database to ensure that there are no clerical errors in mark recording. The mark data is then checked a further time by a small team who hand-check a random selection of scripts. == Scoring == There are five possible grades awarded. From best to worst, these are 'S' (Outstanding), '1', '2', '3', and 'U' (Unclassified). The rule of thumb is that four good answers (to a reasonable level of completion) will gain a grade 1; more may gain an S, and fewer will gain a correspondingly lower grade. 
The grade boundaries shift from year to year, and the boundaries for STEP 3 are generally lower than those for STEP 2. All STEP questions are marked out of 20. The mark scheme for each question is designed to reward candidates who make good progress towards a solution. A candidate reaching the correct answer will receive full marks, regardless of the method used to answer the question. All the questions that are attempted by a student and not crossed out will be assessed. However, only the six best answers will be used in the calculation of the final grade for the paper, giving a total maximum mark of 120. == Timing and results == STEP is normally sat at a candidate's school or college. Alternatively, the test can be taken at any test centre authorised to run STEP. Entries for STEP are typically accepted from the start of March until the end of April, with late entries (subject to late entry fees) accepted until mid-May. STEP is taken in mid-to-late June, with online results available in mid-August, issued on the same date as A Level results, typically at midnight. == Usage == There is some variation in how institutions make use of the results – candidates can contact the relevant institution(s) for more information. However, STEP is typically taken post-interview and the results used to supplement candidates' exam results. For applicants to the University of Cambridge, candidates' scripts are made available to admissions officers. This enables officers to make judgements on the basis of candidates' actual work, rather than on just their marks or grade. == Preparation == STEP does not require much extra knowledge, as the papers are designed to test skills and knowledge of topics within the A Level syllabus; however, preparation is advised, as the questions are significantly more challenging than those found in standard A Level examinations. Ideally, students should begin preparation from the summer preceding the academic year of the STEP series they intend to sit. 
Practice materials, including past papers, example solutions, and a STEP formula booklet, are available for free from the Cambridge Assessment Admissions Testing website. The STEP support programme provides modules for individual additional study, along with hints and solutions. Furthermore, the book "Advanced Problems in Mathematics: Preparing for University" by Stephen Siklos, a former paper-setter for STEP, has been specifically written for students preparing for it. == See also == Oxford, Cambridge and RSA Examinations University admissions tests in the United Kingdom Cambridge Mathematical Tripos == References == == External links == Official website University of Cambridge STEP Support Programme STEP Database
Wikipedia:Skew-Hamiltonian matrix#0
In mathematics, a Hamiltonian matrix is a 2n-by-2n matrix A such that JA is symmetric, where J is the skew-symmetric matrix J = [ 0 n I n − I n 0 n ] {\displaystyle J={\begin{bmatrix}0_{n}&I_{n}\\-I_{n}&0_{n}\\\end{bmatrix}}} and In is the n-by-n identity matrix. In other words, A is Hamiltonian if and only if (JA)T = JA where ()T denotes the transpose. (Not to be confused with Hamiltonian (quantum mechanics)) == Properties == Suppose that the 2n-by-2n matrix A is written as the block matrix A = [ a b c d ] {\displaystyle A={\begin{bmatrix}a&b\\c&d\end{bmatrix}}} where a, b, c, and d are n-by-n matrices. Then the condition that A be Hamiltonian is equivalent to requiring that the matrices b and c are symmetric, and that a + dT = 0. Another equivalent condition is that A is of the form A = JS with S symmetric.: 34 It follows easily from the definition that the transpose of a Hamiltonian matrix is Hamiltonian. Furthermore, the sum (and any linear combination) of two Hamiltonian matrices is again Hamiltonian, as is their commutator. It follows that the space of all Hamiltonian matrices is a Lie algebra, denoted sp(2n). The dimension of sp(2n) is 2n² + n. The corresponding Lie group is the symplectic group Sp(2n). This group consists of the symplectic matrices, those matrices A which satisfy ATJA = J. Thus, the matrix exponential of a Hamiltonian matrix is symplectic. However the logarithm of a symplectic matrix is not necessarily Hamiltonian because the exponential map from the Lie algebra to the group is not surjective.: 34–36 The characteristic polynomial of a real Hamiltonian matrix is even. Thus, if a Hamiltonian matrix has λ as an eigenvalue, then −λ, λ* and −λ* are also eigenvalues.: 45 It follows that the trace of a Hamiltonian matrix is zero. The square of a Hamiltonian matrix is skew-Hamiltonian (a matrix A is skew-Hamiltonian if (JA)T = −JA). Conversely, every skew-Hamiltonian matrix arises as the square of a Hamiltonian matrix. 
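The block characterization and the spectral properties above lend themselves to a quick numerical sanity check. The following sketch (an illustration added here, not part of the article; the random seed and tolerances are arbitrary choices) builds a Hamiltonian matrix from blocks with b and c symmetric and verifies the defining relation, the zero trace, the skew-Hamiltonian square, and the symmetry of the spectrum:

```python
import numpy as np

n = 2
rng = np.random.default_rng(0)

# Block construction from the text: A = [[a, b], [c, -a^T]] with b and c
# symmetric is exactly the Hamiltonian condition.
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))
b = b + b.T
c = rng.standard_normal((n, n))
c = c + c.T
A = np.block([[a, b], [c, -a.T]])

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# Defining property: JA is symmetric, i.e. (JA)^T = JA.
assert np.allclose((J @ A).T, J @ A)

# The trace of a Hamiltonian matrix is zero.
assert abs(np.trace(A)) < 1e-12

# The square of a Hamiltonian matrix is skew-Hamiltonian: (J A^2)^T = -J A^2.
A2 = A @ A
assert np.allclose((J @ A2).T, -(J @ A2))

# The spectrum is symmetric: if lam is an eigenvalue, so is -lam
# (the characteristic polynomial is even).
ev = np.linalg.eigvals(A)
for lam in ev:
    assert np.min(np.abs(ev + lam)) < 1e-8
```

The same construction works for any n; only the symmetry of the off-diagonal blocks is essential.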
== Extension to complex matrices == As for symplectic matrices, the definition for Hamiltonian matrices can be extended to complex matrices in two ways. One possibility is to say that a matrix A is Hamiltonian if (JA)T = JA, as above. Another possibility is to use the condition (JA)* = JA where the superscript asterisk ((⋅)*) denotes the conjugate transpose. == Hamiltonian operators == Let V be a vector space, equipped with a symplectic form Ω. A linear map A : V ↦ V {\displaystyle A:\;V\mapsto V} is called a Hamiltonian operator with respect to Ω if the form x , y ↦ Ω ( A ( x ) , y ) {\displaystyle x,y\mapsto \Omega (A(x),y)} is symmetric. Equivalently, it should satisfy Ω ( A ( x ) , y ) = − Ω ( x , A ( y ) ) {\displaystyle \Omega (A(x),y)=-\Omega (x,A(y))} Choose a basis e1, …, e2n in V, such that Ω is written as ∑ i e i ∧ e n + i {\textstyle \sum _{i}e_{i}\wedge e_{n+i}} . A linear operator is Hamiltonian with respect to Ω if and only if its matrix in this basis is Hamiltonian. == References ==
Wikipedia:Skew-Hermitian matrix#0
In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or anti-Hermitian if its conjugate transpose is the negative of the original matrix. That is, the matrix A {\displaystyle A} is skew-Hermitian if it satisfies the relation A H = − A {\displaystyle A^{\textsf {H}}=-A} , where A H {\displaystyle A^{\textsf {H}}} denotes the conjugate transpose of the matrix A {\displaystyle A} . In component form, this means that a i j = − a j i ¯ {\displaystyle a_{ij}=-{\overline {a_{ji}}}} for all indices i {\displaystyle i} and j {\displaystyle j} , where a i j {\displaystyle a_{ij}} is the element in the i {\displaystyle i} -th row and j {\displaystyle j} -th column of A {\displaystyle A} , and the overline denotes complex conjugation. Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices, or as the matrix analogue of the purely imaginary numbers. The set of all skew-Hermitian n × n {\displaystyle n\times n} matrices forms the u ( n ) {\displaystyle u(n)} Lie algebra, which corresponds to the Lie group U(n). The concept can be generalized to include linear transformations of any complex vector space with a sesquilinear norm. Note that the adjoint of an operator depends on the scalar product considered on the n {\displaystyle n} dimensional complex or real space K n {\displaystyle K^{n}} . If ( ⋅ ∣ ⋅ ) {\displaystyle (\cdot \mid \cdot )} denotes the scalar product on K n {\displaystyle K^{n}} , then saying A {\displaystyle A} is skew-adjoint means that for all u , v ∈ K n {\displaystyle \mathbf {u} ,\mathbf {v} \in K^{n}} one has ( A u ∣ v ) = − ( u ∣ A v ) {\displaystyle (A\mathbf {u} \mid \mathbf {v} )=-(\mathbf {u} \mid A\mathbf {v} )} . Imaginary numbers can be thought of as skew-adjoint (since they are like 1 × 1 {\displaystyle 1\times 1} matrices), whereas real numbers correspond to self-adjoint operators. 
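These defining properties can be checked numerically. The sketch below (illustrative only, not from the article; the random matrix is an arbitrary choice) forms the skew-Hermitian part of a complex matrix and verifies the defining relation, the purely imaginary diagonal and eigenvalues, and the unitarity of the matrix exponential, computed here through the Hermitian matrix iA:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Skew-Hermitian part of an arbitrary complex matrix: A = (M - M^H) / 2.
A = (M - M.conj().T) / 2

# Defining relation A^H = -A, i.e. a_ij = -conj(a_ji) componentwise.
assert np.allclose(A.conj().T, -A)

# Diagonal entries are purely imaginary.
assert np.allclose(A.diagonal().real, 0)

# iA is Hermitian, so A = -i(iA) has purely imaginary eigenvalues.
H = 1j * A
assert np.allclose(H, H.conj().T)
lam, V = np.linalg.eigh(H)          # real eigenvalues lam, unitary V
assert np.allclose(np.linalg.eigvals(A).real, 0, atol=1e-10)

# exp(A) = V diag(exp(-i lam)) V^H is unitary.
U = V @ np.diag(np.exp(-1j * lam)) @ V.conj().T
assert np.allclose(U @ U.conj().T, np.eye(n))
```

Computing the exponential through the eigendecomposition of iA works because skew-Hermitian matrices are normal, so they are unitarily diagonalizable.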
== Example == For example, the following matrix is skew-Hermitian A = [ − i + 2 + i − 2 + i 0 ] {\displaystyle A={\begin{bmatrix}-i&+2+i\\-2+i&0\end{bmatrix}}} because − A = [ i − 2 − i 2 − i 0 ] = [ − i ¯ − 2 + i ¯ 2 + i ¯ 0 ¯ ] = [ − i ¯ 2 + i ¯ − 2 + i ¯ 0 ¯ ] T = A H {\displaystyle -A={\begin{bmatrix}i&-2-i\\2-i&0\end{bmatrix}}={\begin{bmatrix}{\overline {-i}}&{\overline {-2+i}}\\{\overline {2+i}}&{\overline {0}}\end{bmatrix}}={\begin{bmatrix}{\overline {-i}}&{\overline {2+i}}\\{\overline {-2+i}}&{\overline {0}}\end{bmatrix}}^{\mathsf {T}}=A^{\mathsf {H}}} == Properties == The eigenvalues of a skew-Hermitian matrix are all purely imaginary (and possibly zero). Furthermore, skew-Hermitian matrices are normal. Hence they are diagonalizable and their eigenvectors for distinct eigenvalues must be orthogonal. All entries on the main diagonal of a skew-Hermitian matrix have to be pure imaginary; i.e., on the imaginary axis (the number zero is also considered purely imaginary). If A {\displaystyle A} and B {\displaystyle B} are skew-Hermitian, then ⁠ a A + b B {\displaystyle aA+bB} ⁠ is skew-Hermitian for all real scalars a {\displaystyle a} and b {\displaystyle b} . A {\displaystyle A} is skew-Hermitian if and only if i A {\displaystyle iA} (or equivalently, − i A {\displaystyle -iA} ) is Hermitian. A {\displaystyle A} is skew-Hermitian if and only if the real part ℜ ( A ) {\displaystyle \Re {(A)}} is skew-symmetric and the imaginary part ℑ ( A ) {\displaystyle \Im {(A)}} is symmetric. If A {\displaystyle A} is skew-Hermitian, then A k {\displaystyle A^{k}} is Hermitian if k {\displaystyle k} is an even integer and skew-Hermitian if k {\displaystyle k} is an odd integer. A {\displaystyle A} is skew-Hermitian if and only if x H A y = − y H A x ¯ {\displaystyle \mathbf {x} ^{\mathsf {H}}A\mathbf {y} =-{\overline {\mathbf {y} ^{\mathsf {H}}A\mathbf {x} }}} for all vectors x , y {\displaystyle \mathbf {x} ,\mathbf {y} } . 
If A {\displaystyle A} is skew-Hermitian, then the matrix exponential e A {\displaystyle e^{A}} is unitary. The space of skew-Hermitian matrices forms the Lie algebra u ( n ) {\displaystyle u(n)} of the Lie group U ( n ) {\displaystyle U(n)} . == Decomposition into Hermitian and skew-Hermitian == The sum of a square matrix and its conjugate transpose ( A + A H ) {\displaystyle \left(A+A^{\mathsf {H}}\right)} is Hermitian. The difference of a square matrix and its conjugate transpose ( A − A H ) {\displaystyle \left(A-A^{\mathsf {H}}\right)} is skew-Hermitian. This implies that the commutator of two Hermitian matrices is skew-Hermitian. An arbitrary square matrix C {\displaystyle C} can be written as the sum of a Hermitian matrix A {\displaystyle A} and a skew-Hermitian matrix B {\displaystyle B} : C = A + B with A = 1 2 ( C + C H ) and B = 1 2 ( C − C H ) {\displaystyle C=A+B\quad {\mbox{with}}\quad A={\frac {1}{2}}\left(C+C^{\mathsf {H}}\right)\quad {\mbox{and}}\quad B={\frac {1}{2}}\left(C-C^{\mathsf {H}}\right)} == See also == Bivector (complex) Hermitian matrix Normal matrix Skew-symmetric matrix Unitary matrix == Notes == == References == Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6. Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra, SIAM, ISBN 978-0-89871-454-8.
Wikipedia:Sklyanin algebra#0
In mathematics, specifically the field of algebra, Sklyanin algebras are a class of noncommutative algebra named after Evgeny Sklyanin. This class of algebras was first studied in the classification of Artin-Schelter regular algebras of global dimension 3 in the 1980s. Sklyanin algebras can be grouped into two different types, the non-degenerate Sklyanin algebras and the degenerate Sklyanin algebras, which have very different properties. A need to understand the non-degenerate Sklyanin algebras better has led to the development of the study of point modules in noncommutative geometry. == Formal definition == Let k {\displaystyle k} be a field with a primitive cube root of unity. Let D {\displaystyle {\mathfrak {D}}} be the following subset of the projective plane P k 2 {\displaystyle {\textbf {P}}_{k}^{2}} : D = { [ 1 : 0 : 0 ] , [ 0 : 1 : 0 ] , [ 0 : 0 : 1 ] } ⊔ { [ a : b : c ] | a 3 = b 3 = c 3 } . {\displaystyle {\mathfrak {D}}=\{[1:0:0],[0:1:0],[0:0:1]\}\sqcup \{[a:b:c]{\big |}a^{3}=b^{3}=c^{3}\}.} Each point [ a : b : c ] ∈ P k 2 {\displaystyle [a:b:c]\in {\textbf {P}}_{k}^{2}} gives rise to a (quadratic 3-dimensional) Sklyanin algebra, S a , b , c = k ⟨ x , y , z ⟩ / ( f 1 , f 2 , f 3 ) , {\displaystyle S_{a,b,c}=k\langle x,y,z\rangle /(f_{1},f_{2},f_{3}),} where, f 1 = a y z + b z y + c x 2 , f 2 = a z x + b x z + c y 2 , f 3 = a x y + b y x + c z 2 . {\displaystyle f_{1}=ayz+bzy+cx^{2},\quad f_{2}=azx+bxz+cy^{2},\quad f_{3}=axy+byx+cz^{2}.} Whenever [ a : b : c ] ∈ D {\displaystyle [a:b:c]\in {\mathfrak {D}}} we call S a , b , c {\displaystyle S_{a,b,c}} a degenerate Sklyanin algebra and whenever [ a : b : c ] ∈ P 2 ∖ D {\displaystyle [a:b:c]\in {\textbf {P}}^{2}\setminus {\mathfrak {D}}} we say the algebra is non-degenerate. == Properties == The non-degenerate case shares many properties with the commutative polynomial ring k [ x , y , z ] {\displaystyle k[x,y,z]} , whereas the degenerate case enjoys almost none of these properties. 
Generally the non-degenerate Sklyanin algebras are more challenging to understand than their degenerate counterparts. === Properties of degenerate Sklyanin algebras === Let S deg {\displaystyle S_{\text{deg}}} be a degenerate Sklyanin algebra. S deg {\displaystyle S_{\text{deg}}} contains non-zero zero divisors. The Hilbert series of S deg {\displaystyle S_{\text{deg}}} is H S deg = 1 + t 1 − 2 t {\displaystyle H_{S_{\text{deg}}}={\frac {1+t}{1-2t}}} . Degenerate Sklyanin algebras have infinite Gelfand–Kirillov dimension. S deg {\displaystyle S_{\text{deg}}} is neither left nor right Noetherian. S deg {\displaystyle S_{\text{deg}}} is a Koszul algebra. Degenerate Sklyanin algebras have infinite global dimension. === Properties of non-degenerate Sklyanin algebras === Let S {\displaystyle S} be a non-degenerate Sklyanin algebra. S {\displaystyle S} contains no non-zero zero divisors. The Hilbert series of S {\displaystyle S} is H S = 1 ( 1 − t ) 3 {\displaystyle H_{S}={\frac {1}{(1-t)^{3}}}} . Non-degenerate Sklyanin algebras are Noetherian. S {\displaystyle S} is Koszul. Non-degenerate Sklyanin algebras are Artin-Schelter regular. Therefore, they have global dimension 3 and Gelfand–Kirillov dimension 3. There exists a normal central element in every non-degenerate Sklyanin algebra. == Examples == === Degenerate Sklyanin algebras === The subset D {\displaystyle {\mathfrak {D}}} consists of 12 points on the projective plane, which give rise to 12 expressions of degenerate Sklyanin algebras. However, some of these are isomorphic and there exists a classification of degenerate Sklyanin algebras into two different cases. Let S deg = S a , b , c {\displaystyle S_{\text{deg}}=S_{a,b,c}} be a degenerate Sklyanin algebra. 
If a = b {\displaystyle a=b} then S deg {\displaystyle S_{\text{deg}}} is isomorphic to k ⟨ x , y , z ⟩ / ( x 2 , y 2 , z 2 ) {\displaystyle k\langle x,y,z\rangle /(x^{2},y^{2},z^{2})} , which is the Sklyanin algebra corresponding to the point [ 0 : 0 : 1 ] ∈ D {\displaystyle [0:0:1]\in {\mathfrak {D}}} . If a ≠ b {\displaystyle a\neq b} then S deg {\displaystyle S_{\text{deg}}} is isomorphic to k ⟨ x , y , z ⟩ / ( x y , y x , z x ) {\displaystyle k\langle x,y,z\rangle /(xy,yx,zx)} , which is the Sklyanin algebra corresponding to the point [ 1 : 0 : 0 ] ∈ D {\displaystyle [1:0:0]\in {\mathfrak {D}}} . These two cases are Zhang twists of each other and therefore have many properties in common. === Non-degenerate Sklyanin algebras === The commutative polynomial ring k [ x , y , z ] {\displaystyle k[x,y,z]} is isomorphic to the non-degenerate Sklyanin algebra S 1 , − 1 , 0 = k ⟨ x , y , z ⟩ / ( x y − y x , y z − z y , z x − x z ) {\displaystyle S_{1,-1,0}=k\langle x,y,z\rangle /(xy-yx,yz-zy,zx-xz)} and is therefore an example of a non-degenerate Sklyanin algebra. == Point modules == The study of point modules is a useful tool which can be used much more widely than just for Sklyanin algebras. Point modules are a way of finding projective geometry in the underlying structure of noncommutative graded rings. Originally, the study of point modules was applied to show some of the properties of non-degenerate Sklyanin algebras. For example to find their Hilbert series and determine that non-degenerate Sklyanin algebras do not contain zero divisors. === Non-degenerate Sklyanin algebras === Whenever a b c ≠ 0 {\displaystyle abc\neq 0} and ( a 3 + b 3 + c 3 3 a b c ) 3 ≠ 1 {\displaystyle \left({\frac {a^{3}+b^{3}+c^{3}}{3abc}}\right)^{3}\neq 1} in the definition of a non-degenerate Sklyanin algebra S = S a , b , c {\displaystyle S=S_{a,b,c}} , the point modules of S {\displaystyle S} are parametrised by an elliptic curve. 
If the parameters a , b , c {\displaystyle a,b,c} do not satisfy those constraints, the point modules of any non-degenerate Sklyanin algebra are still parametrised by a closed projective variety on the projective plane. If S {\displaystyle S} is a Sklyanin algebra whose point modules are parametrised by an elliptic curve, then there exists an element g ∈ S {\displaystyle g\in S} which annihilates all point modules i.e. M g = 0 {\displaystyle Mg=0} for all point modules M {\displaystyle M} of S {\displaystyle S} . === Degenerate Sklyanin algebras === The point modules of degenerate Sklyanin algebras are not parametrised by a projective variety. == References ==
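The defining relations of S_{a,b,c} can be written down symbolically. The small sketch below (an illustration added here, not part of the article) uses sympy's noncommutative symbols to check two special cases stated above: at [1 : −1 : 0] the relations reduce to the three commutators, recovering the commutative polynomial ring, while at the degenerate point [0 : 0 : 1] they reduce to x², y², z²:

```python
import sympy as sp

# Noncommutative generators of the free algebra k<x, y, z>.
x, y, z = sp.symbols('x y z', commutative=False)

def relations(a, b, c):
    """The defining relations f1, f2, f3 of the Sklyanin algebra S_{a,b,c}."""
    f1 = a*y*z + b*z*y + c*x**2
    f2 = a*z*x + b*x*z + c*y**2
    f3 = a*x*y + b*y*x + c*z**2
    return f1, f2, f3

# At [a : b : c] = [1 : -1 : 0] the relations are the three commutators,
# so S_{1,-1,0} is the commutative polynomial ring k[x, y, z].
f1, f2, f3 = relations(1, -1, 0)
assert f1 == y*z - z*y and f2 == z*x - x*z and f3 == x*y - y*x

# At the degenerate point [0 : 0 : 1] they reduce to x^2, y^2, z^2.
g1, g2, g3 = relations(0, 0, 1)
assert (g1, g2, g3) == (x**2, y**2, z**2)
```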
Wikipedia:Slavik Vlado Jablan#0
Slavik Vlado Jablan (Serbian: Славик Владо Јаблан; 10 June 1952 – 26 February 2015) was a Serbian mathematician and crystallographer. Jablan is known for his contributions to antisymmetry, knot theory, the theory of symmetry and ornament, and ethnomathematics. == Career == Jablan was born on 10 June 1952 in Sarajevo. Jablan graduated in mathematics from the University of Belgrade (1977), where he also gained his M.A. degree (1981) and Ph.D. degree (1984) with the dissertation Theory of Simple and Multiple Antisymmetry in E2 and E2\{O}. He was a Fulbright scholar in 2003/4. Jablan was a professor of geometry at the University of Niš until 1999; subsequently he was a researcher at the Mathematical Institute of the Serbian Academy of Sciences and Arts. Jablan established the online journal VisMath in 2005 and was its editor from its inception until 2014. He joined the editorial board of the journal Symmetry in 2009 and was editor-in-chief from 2012 until 2015. After his death the journal printed a 14-page obituary. Journal of Knot Theory and Its Ramifications printed a special issue in his memory in 2016. == Works == Books published by Jablan: Theory of symmetry and ornament (1995) Symmetry, ornament and modularity (2002) LinKnot: knot theory by computer (2007) Jablan published 65 academic papers. 
Selected papers available in English: Antisymmetry and coloured symmetry: Groups of conformal antisymmetry and complex antisymmetry In E2\{0} (1985) A new method of generating plane groups of simple and multiple antisymmetry (1986) Enantiomorphism of antisymmetric figures (1986) Colored antisymmetry (1992) Farbgruppen and their place in the history of colored symmetry (2007) Knot theory: Nonplanar graphs derived from Gauss codes of virtual knots and links (2011) Knots in art (2012) Delta diagrams (2016) Ornament and ethnomathematics: Antisymmetry and modularity in ornamental art (2001) Elementary constructions of Persian mosaics (2006) Knots and links in architecture (2012) == References ==
Wikipedia:Slim lattice#0
In lattice theory, a mathematical discipline, a finite lattice is slim if no three join-irreducible elements form an antichain. Every slim lattice is planar. A finite planar semimodular lattice is slim if and only if it contains no cover-preserving diamond sublattice M3 (this is the original definition of a slim lattice due to George Grätzer and Edward Knapp). == Notes == == References == Grätzer, George (2016). The congruences of a finite lattice. A "proof-by-picture" approach (2nd ed.). Cham, Switzerland: Birkhäuser/Springer. doi:10.1007/978-3-319-38798-7. ISBN 978-3-319-38796-3. MR 3495851. Grätzer, George; Knapp, Edward (2007). "Notes on planar semimodular lattices. I. Construction". Acta Sci. Math. (Szeged). 73 (3–4): 445–462. arXiv:0705.3366. MR 2380059. Czédli, Gábor; Schmidt, E. Tamás (2012). "Slim semimodular lattices. I. A visual approach" (PDF). Order. 29 (3): 481–497. doi:10.1007/s11083-011-9215-3. MR 2979644. S2CID 11481489.
Wikipedia:Slowly varying envelope approximation#0
In physics, slowly varying envelope approximation (SVEA, sometimes also called slowly varying amplitude approximation or SVAA) is the assumption that the envelope of a forward-travelling wave pulse varies slowly in time and space compared to a period or wavelength. This requires the spectrum of the signal to be narrow-banded—hence it is also referred to as the narrow-band approximation. The slowly varying envelope approximation is often used because the resulting equations are in many cases easier to solve than the original equations, reducing the order of—all or some of—the highest-order partial derivatives. But the validity of the assumptions which are made needs to be justified. == Example == For example, consider the electromagnetic wave equation: ∇ 2 E − 1 c 2 ∂ 2 E ∂ t 2 = 0 , {\displaystyle \nabla ^{2}E-{\frac {1}{c^{2}}}{\frac {\partial ^{2}E}{\partial t^{2}}}=0\,,} where c = 1 μ 0 ε 0 . {\displaystyle c={\frac {1}{\sqrt {\mu _{0}\varepsilon _{0}}}}~.} If k0 and ω0 are the wave number and angular frequency of the (characteristic) carrier wave for the signal E(r,t), the following representation is useful: E ( r , t ) = Re ⁡ [ E 0 ( r , t ) e i ( k 0 ⋅ r − ω 0 t ) ] , {\displaystyle E(\mathbf {r} ,t)=\operatorname {Re} \left[E_{0}(\mathbf {r} ,t)\,e^{i(\mathbf {k} _{0}\cdot \mathbf {r} -\omega _{0}t)}\right],} where Re ⁡ [ ⋅ ] {\displaystyle \operatorname {Re} [\,\cdot \,]} denotes the real part of the quantity between brackets, and i 2 ≡ − 1. {\displaystyle i^{2}\equiv -1.} In the slowly varying envelope approximation (SVEA) it is assumed that the complex amplitude E0(r, t) only varies slowly with r and t. This inherently implies that E(r, t) represents waves propagating forward, predominantly in the k0 direction. 
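The effect of this carrier-plus-envelope substitution can be verified symbolically. The sketch below (an illustration using sympy, restricted to one spatial dimension z along k0; not part of the article) expands the wave equation for E = E0 e^{i(k0 z − ω0 t)}, imposes the dispersion relation k0 = ω0/c, and isolates exactly the first-order terms that survive once the second derivatives of the envelope are neglected:

```python
import sympy as sp

z, t = sp.symbols('z t', real=True)
k0, w0, c = sp.symbols('k_0 omega_0 c', positive=True)
E0 = sp.Function('E_0')(z, t)           # slowly varying complex envelope

carrier = sp.exp(sp.I * (k0 * z - w0 * t))
E = E0 * carrier

# One-dimensional wave equation: E_zz - E_tt / c^2 = 0.
wave = sp.diff(E, z, 2) - sp.diff(E, t, 2) / c**2

# Divide out the carrier and impose the dispersion relation k0 = w0 / c.
residual = sp.expand(wave / carrier).subs(k0, w0 / c)

# Separate the second-order envelope derivatives that the SVEA neglects.
second_order = sp.diff(E0, z, 2) - sp.diff(E0, t, 2) / c**2
svea_terms = sp.simplify(residual - second_order)

# What survives is the first-order SVEA equation
#   2 i k0 dE0/dz + (2 i w0 / c^2) dE0/dt = 0   (with k0 = w0/c here).
expected = 2*sp.I*(w0/c)*sp.diff(E0, z) + (2*sp.I*w0/c**2)*sp.diff(E0, t)
assert sp.simplify(svea_terms - expected) == 0
```

Dropping `second_order` from `residual` is precisely the derivative-neglect step described next, and yields the first-order envelope equation derived below.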
As a result of the slow variation of E0(r, t), when taking derivatives, the highest-order derivatives may be neglected: | ∇ 2 E 0 | ≪ | k 0 ⋅ ∇ E 0 | {\displaystyle \left|\nabla ^{2}E_{0}\right|\ll \left|\mathbf {k} _{0}\cdot \nabla E_{0}\right|} and | ∂ 2 E 0 ∂ t 2 | ≪ | ω 0 ∂ E 0 ∂ t | , {\displaystyle \left|{\frac {\partial ^{2}E_{0}}{\partial t^{2}}}\right|\ll \left|\omega _{0}\,{\frac {\partial E_{0}}{\partial t}}\right|,} with k 0 ≡ | k 0 | . {\displaystyle k_{0}\equiv \left|\mathbf {k} _{0}\right|.} === Full approximation === Consequently, the wave equation is approximated in the SVEA as: 2 i k 0 ⋅ ∇ E 0 + 2 i ω 0 c 2 ∂ E 0 ∂ t − ( k 0 2 − ω 0 2 c 2 ) E 0 = 0 . {\displaystyle 2i\mathbf {k} _{0}\cdot \nabla E_{0}+{\frac {2i\omega _{0}}{c^{2}}}{\frac {\partial E_{0}}{\partial t}}-\left(k_{0}^{2}-{\frac {\omega _{0}^{2}}{c^{2}}}\right)E_{0}=0~.} It is convenient to choose k0 and ω0 such that they satisfy the dispersion relation: k 0 2 − ω 0 2 c 2 = 0 . {\displaystyle k_{0}^{2}-{\frac {\omega _{0}^{2}}{c^{2}}}=0~.} This gives the following approximation to the wave equation, as a result of the slowly varying envelope approximation: k 0 ⋅ ∇ E 0 + ω 0 c 2 ∂ E 0 ∂ t = 0 . {\displaystyle \mathbf {k} _{0}\cdot \nabla E_{0}+{\frac {\omega _{0}}{c^{2}}}\,{\frac {\partial E_{0}}{\partial t}}=0~.} This is a hyperbolic partial differential equation, like the original wave equation, but now of first-order instead of second-order. It is valid for coherent forward-propagating waves in directions near the k0-direction. The space and time scales over which E0 varies are generally much longer than the spatial wavelength and temporal period of the carrier wave. A numerical solution of the envelope equation thus can use much larger space and time steps, resulting in significantly less computational effort. === Parabolic approximation === Assume wave propagation is dominantly in the z-direction, and k0 is taken in this direction. 
The SVEA is only applied to the second-order spatial derivatives in the z-direction and time. If Δ ⊥ ≡ ∂ 2 / ∂ x 2 + ∂ 2 / ∂ y 2 {\displaystyle \Delta _{\perp }\equiv \partial ^{2}/\partial x^{2}+\partial ^{2}/\partial y^{2}} is the Laplace operator in the x×y plane, the result is: k 0 ∂ E 0 ∂ z + ω 0 c 2 ∂ E 0 ∂ t − 1 2 i Δ ⊥ E 0 = 0 . {\displaystyle k_{0}{\frac {\partial E_{0}}{\partial z}}+{\frac {\omega _{0}}{c^{2}}}{\frac {\partial E_{0}}{\partial t}}-{\frac {1}{2}}\,i\,\Delta _{\perp }E_{0}=0~.} This is a parabolic partial differential equation. This equation has enhanced validity as compared to the full SVEA: It represents waves propagating in directions significantly different from the z-direction. === Alternative limit of validity === In the one-dimensional case, another sufficient condition for the SVEA validity is ℓ g ≫ λ {\displaystyle \ell _{\mathsf {g}}\gg \lambda } and ℓ p ≫ λ ( 1 − v c ) , {\displaystyle \ell _{\mathsf {p}}\gg \lambda \left(1-{\frac {v}{c}}\right)\,,} with λ = 2 π k 0 , {\displaystyle \lambda ={\frac {2\pi }{k_{0}}}\,,} where ℓ g {\displaystyle \ell _{\mathsf {g}}} is the length over which the radiation pulse is amplified, ℓ p {\displaystyle \ell _{\mathsf {p}}} is the pulse width and v {\displaystyle v} is the group velocity of the radiating system. These conditions are much less restrictive in the relativistic limit where v c {\displaystyle {\frac {v}{c}}} is close to 1, as in a free-electron laser, compared to the usual conditions required for the SVEA validity. == See also == Ultrashort pulse WKB approximation == References ==
Wikipedia:Smith's Prize#0
Smith's Prize was the name of each of two prizes awarded annually to two research students in mathematics and theoretical physics at the University of Cambridge from 1769. Following the reorganization in 1998, they are now awarded under the names Smith-Knight Prize and Rayleigh-Knight Prize. == History == The Smith Prize fund was founded by bequest of Robert Smith upon his death in 1768, having by his will left £3,500 of South Sea Company stock to the University. Every year two or more junior Bachelor of Arts students who had made the greatest progress in mathematics and natural philosophy were to be awarded a prize from the fund. The prize was awarded every year from 1769 to 1998 except 1917. From 1769 to 1885, the prize was awarded for the best performance in a series of examinations. In 1854 George Stokes included an examination question on a particular theorem that William Thomson had written to him about, which is now known as Stokes' theorem. T. W. Körner notes: "Only a small number of students took the Smith's prize examination in the nineteenth century. When Karl Pearson took the examination in 1879, the examiners were Stokes, Maxwell, Cayley, and Todhunter and the examinees went on each occasion to an examiner's dwelling, did a morning paper, had lunch there and continued their work on the paper in the afternoon." In 1885, the examination was renamed Part III (now known as the Master of Advanced Study in Mathematics for students who studied outside of Cambridge before taking it) and the prize was awarded for the best submitted essay rather than examination performance. According to Barrow-Green: "By fostering an interest in the study of applied mathematics, the competition contributed towards the success in mathematical physics that was to become the hallmark of Cambridge mathematics during the second half of the nineteenth century."
In the twentieth century, the competition stimulated postgraduate research in mathematics in Cambridge and played a significant role by providing a springboard for graduates considering an academic career. The majority of prize-winners have gone on to become professional mathematicians or physicists. The Rayleigh Prize was an additional prize, which was awarded for the first time in 1911. The Smith's and Rayleigh prizes were only available to Cambridge graduate students who had been undergraduates at Cambridge. The J.T. Knight Prize was established in 1974 for Cambridge graduates who had been undergraduates at other universities. The prize commemorates J.T. Knight (1942–1970), who had been an undergraduate student at Glasgow and a graduate student at Cambridge. He was killed in a motor car accident in Ireland in April 1970. === Value of the prizes === Originally, in 1769, the prizes were worth £25 each and remained at that level for 100 years. In 1867, they fell to £23 and in 1915 were still reported to be worth that amount. By 1930, the value had risen to about £30, and by 1940, the value had risen by a further one pound to £31. By 1998, a Smith's Prize was worth around £250. In 2007, the value of the three prize funds was roughly £175,000. === Reorganization of prizes === In 1998 the Smith Prize, Rayleigh Prize and J. T. Knight Prize were replaced by the Smith-Knight Prize and Rayleigh-Knight Prize, the standard for the former being higher than that required for the latter. == Smith's Prize recipients == For the period up to 1940 a complete list is given in Barrow-Green (1999) including titles of prize essays from 1889 to 1940. The following includes a selection from this list. === Awarded for examination performance === === Awarded for essay === == Rayleigh Prize recipients == A more complete list of Rayleigh prize recipients is given in Appendix 1 ("List of Prize Winners and their Essays 1885–1940"). 1913 Ralph H.
Fowler 1923 Edward Collingwood 1927 William McCrea 1930 Harold Davenport 1937 David Stanley Evans 1951 Gabriel Andrew Dirac 1980 David Benson 1982 Susan Stepney 1994 Group 4: J.D. King, A.P. Martin. Group 5: K.M. Croudace, J.R. Elliot. 1998 P. Bolchover, O. T. Johnson, R. W. Verrill, R. Bhattacharyya, U. A. Salam, S. A. Wright and T. J. Hunt == J. T. Knight Prize recipients == 1974 Cameron Leigh Stewart Allan J. Clarke 1975 Frank Kelly and Ian Sobey 1976 Trevor McDougall 1977 Gerard Murphy 1981 Bruce Allen and Philip K. Pollett 1983 Ya-xiang Yuan 1985 Reinhard Diestel 1987 Qin Sheng (mathematician) 1988 Somak Raychaudhury 1990 Darryn W. Waugh 1991 Renzo L. Ricca 1992 Grant Lythe, Christophe Pichon, Henrik O. Rasmussen 1993 Anastasios Christou Petkou 1994 Group 1: M. Gaberdiel, Y. Liu. Group 3: H.A. Chamblin. Group 4: P.P. Avelino, S.G. Lack, A.L. Sydenham. Group 5: S. Keras, U. Meyer, G.M. Pritchard, H. Ramanathan, K. Strobl. Group 6: A.O. Bender, V. Toledano Laredo. 1996 Conor Houghton, Thomas Manke 1997 Arno Schindlmayr 1998 A. Bejancu, G. M. Keith, J. Sawon, D. R. Brecher, T. S. H. Leinster, S. Slijepcevic, K. K. Damodaran, A. R. Mohebalhojeh, C. T. Snydal, F. De Rooij, O. Pikhurko, David K. H. Tan, P. R. Hiemer, T. Prestidge, F. Wagner, Viet Ha Hoàng, A. W. Rempel and Jium-Huei Proty Wu == Smith–Knight Prize recipients == 1999 D. W. Essex, H. S. Reall, A. Saikia, A. C. Faul, Duncan C. Richer, M. J. Vartiainen, T. A. Fisher, J. Rosenzweig, J. Wierzba and J. B. Gutowski 2001 B. J. Green, T A. Mennim, A. Mijatovic, F. A. Dolan, Paul D. Metcalfe and S. R. Tod 2002 Konstantin Ardakov, Edward Crane and Simon Wadsley 2004 Neil Roxburgh 2005 David Conlon 2008 Miguel Paulos 2009 Olga Goulko 2010 Miguel Custódio 2011 Ioan Manolescu 2014 Bhargav P. 
Narayanan 2018 Theodor Bjorkmo, Muntazir Abidi, Amelia Drew, Leong Khim Wong 2020 Jef Laga, Kasia Warburton, Daniel Zhang, Shayan Iranipour 2021 David Gwilym Baker, Hannah Banks, Jason Joykutty, Andreas Schachner, Mohammed Rifath Khan Shafi == Rayleigh–Knight Prize recipients == 1999 C. D. Bloor, R. Oeckl, J. Y. Whiston, Y-C. Chen, P. L. Rendon, C. Wunderer, J. H. P. Dawes, D. M. Rodgers, H-M. Gutmann and A. N. Ross 2001 A. F. R. Bain, S. Khan, S. Schafer-Nameki, N. R. Farr, J. Niesen, J. H. Siggers, M. Fayers, D. Oriti, M. J. Tildesley, J. R. Gair, M. R. E. H. Pickles, A. J. Tolley, S. R. Hodges, R. Portugues, C. Voll, M. Kampp, P. J. P. Roche and B. M. J. B. Walker 2004 Oliver Rinne 2005 Guillaume Pierre Bascoul and Giuseppe Di Graziano 2007 Anders Hansen and Vladimir Lazić == See also == List of mathematics awards == References ==
Wikipedia:Smith–Volterra–Cantor set#0
In mathematics, the Smith–Volterra–Cantor set (SVC), ε-Cantor set, or fat Cantor set is an example of a set of points on the real line that is nowhere dense (in particular it contains no intervals), yet has positive measure. The Smith–Volterra–Cantor set is named after the mathematicians Henry Smith, Vito Volterra and Georg Cantor. In an 1875 paper, Smith discussed a nowhere-dense set of positive measure on the real line, and Volterra introduced a similar example in 1881. The Cantor set as we know it today followed in 1883. The Smith–Volterra–Cantor set is topologically equivalent to the middle-thirds Cantor set. == Construction == Similar to the construction of the Cantor set, the Smith–Volterra–Cantor set is constructed by removing certain intervals from the unit interval [ 0 , 1 ] . {\displaystyle [0,1].} The process begins by removing the middle 1/4 from the interval [ 0 , 1 ] {\displaystyle [0,1]} (the same as removing 1/8 on either side of the middle point at 1/2) so the remaining set is [ 0 , 3 8 ] ∪ [ 5 8 , 1 ] . {\displaystyle \left[0,{\tfrac {3}{8}}\right]\cup \left[{\tfrac {5}{8}},1\right].} The following steps consist of removing subintervals of width 1 / 4 n {\displaystyle 1/4^{n}} from the middle of each of the 2 n − 1 {\displaystyle 2^{n-1}} remaining intervals. So for the second step the intervals ( 5 / 32 , 7 / 32 ) {\displaystyle (5/32,7/32)} and ( 25 / 32 , 27 / 32 ) {\displaystyle (25/32,27/32)} are removed, leaving [ 0 , 5 32 ] ∪ [ 7 32 , 3 8 ] ∪ [ 5 8 , 25 32 ] ∪ [ 27 32 , 1 ] . {\displaystyle \left[0,{\tfrac {5}{32}}\right]\cup \left[{\tfrac {7}{32}},{\tfrac {3}{8}}\right]\cup \left[{\tfrac {5}{8}},{\tfrac {25}{32}}\right]\cup \left[{\tfrac {27}{32}},1\right].} Continuing indefinitely with this removal, the Smith–Volterra–Cantor set is then the set of points that are never removed. The image below shows the initial set and five iterations of this process. 
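The construction above is straightforward to carry out exactly with rational arithmetic. The following Python sketch (an illustration, not part of the source) reproduces the first two iterates shown above and the total remaining length approaching 1/2.

```python
from fractions import Fraction

def svc_intervals(steps):
    """Intervals remaining after `steps` removal steps of the
    Smith-Volterra-Cantor construction: at step n, an open middle
    interval of width 1/4**n is removed from each of the 2**(n-1)
    remaining closed intervals."""
    intervals = [(Fraction(0), Fraction(1))]
    for n in range(1, steps + 1):
        w = Fraction(1, 4**n)          # width removed from each interval
        new = []
        for a, b in intervals:
            mid = (a + b) / 2
            new.append((a, mid - w / 2))
            new.append((mid + w / 2, b))
        intervals = new
    return intervals

# After two steps this reproduces the set in the text:
# [0, 5/32] U [7/32, 3/8] U [5/8, 25/32] U [27/32, 1]
print(svc_intervals(2))

# The total remaining length tends to 1/2, the set's positive measure:
print(float(sum(b - a for a, b in svc_intervals(15))))   # ~0.50002
```

After k steps the remaining length is exactly 1/2 + 2^(-(k+1)), which makes the convergence to measure 1/2 easy to verify.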
Each subsequent iterate in the Smith–Volterra–Cantor set's construction removes proportionally less from the remaining intervals. This stands in contrast to the Cantor set, where the proportion removed from each interval remains constant. Thus, the Smith–Volterra–Cantor set has positive measure while the Cantor set has zero measure. == Properties == By construction, the Smith–Volterra–Cantor set contains no intervals and therefore has empty interior. It is also the intersection of a sequence of closed sets, which means that it is closed. During the process, intervals of total length ∑ n = 0 ∞ 2 n 2 2 n + 2 = 1 4 + 1 8 + 1 16 + ⋯ = 1 2 {\displaystyle \sum _{n=0}^{\infty }{\frac {2^{n}}{2^{2n+2}}}={\frac {1}{4}}+{\frac {1}{8}}+{\frac {1}{16}}+\cdots ={\frac {1}{2}}\,} are removed from [ 0 , 1 ] , {\displaystyle [0,1],} showing that the set of the remaining points has a positive measure of 1/2. This makes the Smith–Volterra–Cantor set an example of a closed set whose boundary has positive Lebesgue measure. == Other fat Cantor sets == In general, one can remove r n {\displaystyle r_{n}} from each remaining subinterval at the n {\displaystyle n} th step of the algorithm, and end up with a Cantor-like set. The resulting set will have positive measure if and only if the sum of the sequence is less than the measure of the initial interval. For instance, suppose the middle intervals of length a n {\displaystyle a^{n}} are removed from [ 0 , 1 ] {\displaystyle [0,1]} for each n {\displaystyle n} th iteration, for some 0 ≤ a ≤ 1 3 . 
{\displaystyle 0\leq a\leq {\dfrac {1}{3}}.} Then, the resulting set has Lebesgue measure 1 − ∑ n = 0 ∞ 2 n a n + 1 = 1 − a ∑ n = 0 ∞ ( 2 a ) n = 1 − a 1 1 − 2 a = 1 − 3 a 1 − 2 a {\displaystyle {\begin{aligned}1-\sum _{n=0}^{\infty }2^{n}a^{n+1}&=1-a\sum _{n=0}^{\infty }(2a)^{n}\\[5pt]&=1-a{\frac {1}{1-2a}}\\[5pt]&={\frac {1-3a}{1-2a}}\end{aligned}}} which goes from 0 {\displaystyle 0} to 1 {\displaystyle 1} as a {\displaystyle a} goes from 1 / 3 {\displaystyle 1/3} to 0. {\displaystyle 0.} ( a > 1 / 3 {\displaystyle a>1/3} is impossible in this construction.) Cartesian products of Smith–Volterra–Cantor sets can be used to find totally disconnected sets in higher dimensions with nonzero measure. By applying the Denjoy–Riesz theorem to a two-dimensional set of this type, it is possible to find an Osgood curve, a Jordan curve such that the points on the curve have positive area. == See also == The Smith–Volterra–Cantor set is used in the construction of Volterra's function (see external link). The Smith–Volterra–Cantor set is an example of a compact set that is not Jordan measurable, see Jordan measure#Extension to more complicated sets. The indicator function of the Smith–Volterra–Cantor set is an example of a bounded function that is not Riemann integrable on (0,1) and moreover, is not equal almost everywhere to a Riemann integrable function, see Riemann integral#Examples. List of topologies – List of concrete topologies and topological spaces == References ==
Wikipedia:Smooth algebra#0
In algebraic geometry, a smooth scheme over a field is a scheme which is well approximated by affine space near any point. Smoothness is one way of making precise the notion of a scheme with no singular points. A special case is the notion of a smooth variety over a field. Smooth schemes play the role in algebraic geometry of manifolds in topology. == Definition == First, let X be an affine scheme of finite type over a field k. Equivalently, X has a closed immersion into affine space An over k for some natural number n. Then X is the closed subscheme defined by some equations g1 = 0, ..., gr = 0, where each gi is in the polynomial ring k[x1,..., xn]. The affine scheme X is smooth of dimension m over k if X has dimension at least m in a neighborhood of each point, and the matrix of derivatives (∂gi/∂xj) has rank at least n−m everywhere on X. (It follows that X has dimension equal to m in a neighborhood of each point.) Smoothness is independent of the choice of immersion of X into affine space. The condition on the matrix of derivatives is understood to mean that the closed subset of X where all (n−m) × (n − m) minors of the matrix of derivatives are zero is the empty set. Equivalently, the ideal in the polynomial ring generated by all gi and all those minors is the whole polynomial ring. In geometric terms, the matrix of derivatives (∂gi/∂xj) at a point p in X gives a linear map Fn → Fr, where F is the residue field of p. The kernel of this map is called the Zariski tangent space of X at p. Smoothness of X means that the dimension of the Zariski tangent space is equal to the dimension of X near each point; at a singular point, the Zariski tangent space would be bigger. More generally, a scheme X over a field k is smooth over k if each point of X has an open neighborhood which is a smooth affine scheme of some dimension over k. In particular, a smooth scheme over k is locally of finite type. 
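The Jacobian criterion described here can be checked mechanically. As a minimal sketch in plain Python (with the partial derivatives written out by hand, and the rank of a 1×2 matrix computed trivially), consider the cuspidal cubic x² = y³ that appears in the Examples section: it is a curve (m = 1) in A² (n = 2), so smoothness requires the matrix of derivatives to have rank at least n − m = 1 at every point of X.

```python
# Jacobian criterion for X : g = x^2 - y^3 = 0 in the affine plane A^2.

def g(x, y):
    return x**2 - y**3

def jacobian(x, y):
    # (dg/dx, dg/dy), computed by hand for this particular g
    return (2 * x, -3 * y**2)

def rank_at(x, y):
    # a 1x2 matrix has rank 1 unless both entries vanish
    return 1 if any(v != 0 for v in jacobian(x, y)) else 0

assert g(1, 1) == 0 and rank_at(1, 1) == 1   # (1,1) is a smooth point of X
assert g(0, 0) == 0 and rank_at(0, 0) == 0   # the origin is singular (the cusp)
```

The rank drop at the origin is exactly the statement that the Zariski tangent space there is two-dimensional, larger than the dimension of the curve.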
There is a more general notion of a smooth morphism of schemes, which is roughly a morphism with smooth fibers. In particular, a scheme X is smooth over a field k if and only if the morphism X → Spec k is smooth. == Properties == A smooth scheme over a field is regular and hence normal. In particular, a smooth scheme over a field is reduced. Define a variety over a field k to be an integral separated scheme of finite type over k. Then any smooth separated scheme of finite type over k is a finite disjoint union of smooth varieties over k. For a smooth variety X over the complex numbers, the space X(C) of complex points of X is a complex manifold, using the classical (Euclidean) topology. Likewise, for a smooth variety X over the real numbers, the space X(R) of real points is a real manifold, possibly empty. For any scheme X that is locally of finite type over a field k, there is a coherent sheaf Ω1 of differentials on X. The scheme X is smooth over k if and only if Ω1 is a vector bundle of rank equal to the dimension of X near each point. In that case, Ω1 is called the cotangent bundle of X. The tangent bundle of a smooth scheme over k can be defined as the dual bundle, TX = (Ω1)*. Smoothness is a geometric property, meaning that for any field extension E of k, a scheme X is smooth over k if and only if the scheme XE := X ×Spec k Spec E is smooth over E. For a perfect field k, a scheme X is smooth over k if and only if X is locally of finite type over k and X is regular. == Generic smoothness == A scheme X is said to be generically smooth of dimension n over k if X contains an open dense subset that is smooth of dimension n over k. Every variety over a perfect field (in particular an algebraically closed field) is generically smooth. == Examples == Affine space and projective space are smooth schemes over a field k. An example of a smooth hypersurface in projective space Pn over k is the Fermat hypersurface x0d + ... 
+ xnd = 0, for any positive integer d that is invertible in k. An example of a singular (non-smooth) scheme over a field k is the closed subscheme x2 = 0 in the affine line A1 over k. An example of a singular (non-smooth) variety over k is the cuspidal cubic curve x2 = y3 in the affine plane A2, which is smooth outside the origin (x,y) = (0,0). A 0-dimensional variety X over a field k is of the form X = Spec E, where E is a finite extension field of k. The variety X is smooth over k if and only if E is a separable extension of k. Thus, if E is not separable over k, then X is a regular scheme but is not smooth over k. For example, let k be the field of rational functions Fp(t) for a prime number p, and let E = Fp(t1/p); then Spec E is a variety of dimension 0 over k which is a regular scheme, but not smooth over k. Schubert varieties are in general not smooth. == Notes == == References == D. Gaitsgory's notes on flatness and smoothness at http://www.math.harvard.edu/~gaitsgde/Schemes_2009/BR/SmoothMaps.pdf Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157 Matsumura, Hideyuki (1989), Commutative Ring Theory, Cambridge Studies in Advanced Mathematics (2nd ed.), Cambridge University Press, ISBN 978-0-521-36764-6, MR 1011461 == See also == Étale morphism Dimension of an algebraic variety Glossary of scheme theory Smooth completion
Wikipedia:Snezhana Abarzhi#0
Snezhana I. Abarzhi (Russian: Снежана Ивановна Абаржи, also known as Snejana I. Abarji) is an applied mathematician and theoretical physicist specializing in the dynamics of fluids and plasmas and their applications in nature and technology. Her research has revealed that instabilities elucidate the dynamics of supernova blasts, and that supernovae explode more slowly and less turbulently than previously thought, changing the understanding of the mechanisms by which heavy atomic nuclei are formed in these explosions. Her works have found the mechanism of interface stabilization, the special self-similar class in interfacial mixing, and the fundamentals of Rayleigh-Taylor instabilities. == Education and career == Snezhana Abarzhi was born and raised in the former Soviet Union; there are no publicly available records of her exact date of birth or of her early years. She pursued her higher education at the Moscow Institute of Physics and Technology (MIPT), regarded as one of the most prestigious and rigorous technical universities in Russia and often compared to institutions such as MIT. After completing her bachelor's degree there, she earned a master's degree in physics and applied mathematics, summa cum laude, in 1990. She completed her doctorate in 1994 through the Landau Institute for Theoretical Physics and Kapitza Institute for Physical Problems of the Russian Academy of Sciences, supervised by Sergei I. Anisimov. Abarzhi held a position as a researcher for the Russian Academy of Sciences from 1994 to 1997 (on leave in 1997-2004). She came to the US in 1997 as a visiting professor at the University of North Carolina in Chapel Hill, and then in 1998 became an Alexander von Humboldt Fellow at the University of Bayreuth in Germany. In 1999 she took a research position at Stony Brook University.
In 2002 she briefly moved to a research professorship at Osaka University before returning to the US as a senior fellow in the Center for Turbulence Research at Stanford University. In 2005 she became a research faculty member at the University of Chicago and in 2006 she added a regular-rank faculty position as an associate professor at the Illinois Institute of Technology. She also worked at Carnegie Mellon University from 2013 to 2016 before moving to the University of Western Australia as professor and chair of applied mathematics. Abarzhi has served on the Committee on Scientific Publications of the American Physical Society, and has organized conferences and programs on topics including the non-equilibrium dynamics of interfaces and turbulent mixing. In 2020 Abarzhi was named a Fellow of the American Physical Society (APS), following a nomination from the APS Division of Fluid Dynamics, "for deep and abiding work on the Rayleigh-Taylor and related instabilities, and for sustained leadership in that community". In 2023, Abarzhi published "Invariant forms and control dimensional parameters in complexity quantification" through the Department of Mathematics and Statistics of the University of Western Australia in Perth. In this work she discusses non-equilibrium dynamics and its importance in natural and technological processes.
The paper covers, among other topics, real-world systems exhibiting universal behaviors and symmetries, scaling laws and spectral behaviors that help link these phenomena, and symmetry and scaling principles that provide insight into their theoretical differences. == Selected publications == Abarzhi, Snezhana I.; Bhowmick, Aklant K.; Naveh, Annie; Pandian, Arun; Swisher, Nora C.; Stellingwerf, Robert F.; Arnett, W. David (November 2018), "Supernova, nuclear synthesis, fluid instabilities, and interfacial mixing", Proceedings of the National Academy of Sciences, 116 (37): 18184–18192, doi:10.1073/pnas.1714502115, PMC 6744890, PMID 30478062 Abarzhi, Snezhana I.; Ilyin, Daniil V.; Goddard, William A.; Anisimov, Sergei I. (August 2018), "Interface dynamics: Mechanisms of stabilization and destabilization and structure of flow fields", Proceedings of the National Academy of Sciences, 116 (37): 18218–18226, Bibcode:2019PNAS..11618218A, doi:10.1073/pnas.1714500115, PMC 6744915, PMID 30082395 Abarzhi, Snezhana I. (April 2010), "Review of theoretical modelling approaches of Rayleigh–Taylor instabilities and turbulent mixing", Philosophical Transactions of the Royal Society A, 368 (1916): 1809–1828, Bibcode:2010RSPTA.368.1809A, doi:10.1098/rsta.2010.0020, PMID 20211884, S2CID 38628393 Abarzhi, Snezhana I. (July 1998), "Stable steady flows in Rayleigh–Taylor instability", Physical Review Letters, 81 (2): 337–340, Bibcode:1998PhRvL..81..337A, doi:10.1103/physrevlett.81.337 Abarzhi SI, Hill DL, Williams KC, Li JT, Remington BA, Arnett WD 2023 Fluid dynamics mathematical aspects of supernova remnants. Phys. Fluids 35, 034106. doi:10.1063/5.0123930 Abarzhi SI, Sreenivasan KR 2022 Self-similar Rayleigh-Taylor mixing with accelerations varying in time and space. Proc. Natl. Acad. Sci. USA 119, e2118589119.
doi:10.1073/pnas.2118589119 Ilyin DV, Abarzhi SI 2022 Interface dynamics under thermal heat flux, inertial stabilization and destabilizing acceleration. Springer Nat. Appl. Sci. 4, 197. doi:10.1007/s42452-022-05000-4 Meshkov EE, Abarzhi SI 2019 On Rayleigh-Taylor interfacial mixing. Fluid Dyn. Res. 51, 065502. doi:10.1088/1873-7005/ab3e83, arXiv:1901.04578 == References ==
Wikipedia:Soboleva modified hyperbolic tangent#0
The Soboleva modified hyperbolic tangent, also known as (parametric) Soboleva modified hyperbolic tangent activation function ([P]SMHTAF), is a special S-shaped function based on the hyperbolic tangent, given by smht ⁡ ( x ) ≐ e a x − e − b x e c x + e − d x . {\displaystyle \operatorname {smht} (x)\doteq {\frac {e^{ax}-e^{-bx}}{e^{cx}+e^{-dx}}}\,.} == History == This function was originally proposed as "modified hyperbolic tangent" by Ukrainian scientist Elena V. Soboleva (Елена В. Соболева) as a utility function for multi-objective optimization and choice modelling in decision-making. == Practical usage == The function has since been introduced into neural network theory and practice. It was also used in economics for modelling consumption and investment, to approximate current-voltage characteristics of field-effect transistors and light-emitting diodes, to design antenna feeders, and analyze plasma temperatures and densities in the divertor region of fusion reactors. == Sensitivity to parameters == The derivative of the function is given by: smht ′ ⁡ ( x ) ≐ a e a x + b e − b x e c x + e − d x − smht ⁡ ( x ) c e c x − d e − d x e c x + e − d x {\displaystyle \operatorname {smht} '(x)\doteq {\frac {ae^{ax}+be^{-bx}}{e^{cx}+e^{-dx}}}-\operatorname {smht} (x){\frac {ce^{cx}-de^{-dx}}{e^{cx}+e^{-dx}}}} The following conditions keep the function bounded on the y-axis: a ≤ c, b ≤ d. A family of recurrence-generated parametric Soboleva modified hyperbolic tangent activation functions (NPSMHTAF, FPSMHTAF) was studied with parameters a = c and b = d. In this case, the function is not sensitive to swapping the left- and right-side parameters. The function is sensitive to the ratio of the denominator coefficients and is often used without coefficients in the numerator. With parameters a = b = c = d = 1 the modified hyperbolic tangent function reduces to the conventional tanh(x) function, whereas for a = b = 1 and c = d = 0, the term becomes equal to sinh(x).
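A minimal Python sketch, assuming the standard closed form smht(x) = (e^(ax) − e^(−bx)) / (e^(cx) + e^(−dx)), which is consistent with the derivative formula in this article; it checks the tanh and sinh special cases stated here.

```python
import math

def smht(x, a=1.0, b=1.0, c=1.0, d=1.0):
    """Soboleva modified hyperbolic tangent,
    smht(x) = (exp(a*x) - exp(-b*x)) / (exp(c*x) + exp(-d*x)).
    Boundedness on the y-axis requires a <= c and b <= d."""
    return (math.exp(a * x) - math.exp(-b * x)) / (math.exp(c * x) + math.exp(-d * x))

# a = b = c = d = 1 recovers the ordinary hyperbolic tangent:
assert all(abs(smht(t) - math.tanh(t)) < 1e-12 for t in (-2.0, 0.0, 0.5, 3.0))

# a = b = 1, c = d = 0 leaves (e^x - e^-x)/2, i.e. sinh:
assert abs(smht(1.5, c=0.0, d=0.0) - math.sinh(1.5)) < 1e-12
```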
== See also == Activation function e (mathematical constant) Equal incircles theorem, based on sinh Hausdorff distance Inverse hyperbolic functions List of integrals of hyperbolic functions Poinsot's spirals Sigmoid function == Notes == == References == == Further reading == Iliev, Anton; Kyurkchiev, Nikolay; Markov, Svetoslav (2017). "A Note on the New Activation Function of Gompertz Type". Biomath Communications. 4 (2). Faculty of Mathematics and Informatics, University of Plovdiv "Paisii Hilendarski", Plovdiv, Bulgaria / Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Sofia, Bulgaria: Biomath Forum (BF). doi:10.11145/bmc.2017.10.201. ISSN 2367-5233. Archived from the original on 2020-06-20. Retrieved 2020-06-19. (20 pages)
Wikipedia:Société mathématique de France#0
The Société Mathématique de France (SMF) is the main professional society of French mathematicians. The society was founded in 1872 by Émile Lemoine and is one of the oldest mathematical societies in existence. It publishes several academic journals: Annales Scientifiques de l'École Normale Supérieure, Astérisque, Bulletin de la Société Mathématique de France, Gazette des mathématiciens, Mémoires de la Société Mathématique de France, Panoramas et Synthèses, and Revue d'histoire des mathématiques. == List of presidents == == See also == European Mathematical Society Centre International de Rencontres Mathématiques List of mathematical societies == References == == External links == Webpage of the society
Wikipedia:Softmax function#0
The softmax function, also known as softargmax or normalized exponential function, converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and is used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes. == Definition == The softmax function takes as input a vector z of K real numbers, and normalizes it into a probability distribution consisting of K probabilities proportional to the exponentials of the input numbers. That is, prior to applying softmax, some vector components could be negative, or greater than one, and might not sum to 1; but after applying softmax, each component will be in the interval ( 0 , 1 ) {\displaystyle (0,1)} , and the components will add up to 1, so that they can be interpreted as probabilities. Furthermore, the larger input components will correspond to larger probabilities. Formally, the standard (unit) softmax function σ : R K → ( 0 , 1 ) K {\displaystyle \sigma \colon \mathbb {R} ^{K}\to (0,1)^{K}} , where K > 1 {\displaystyle K>1} , takes a vector z = ( z 1 , … , z K ) ∈ R K {\displaystyle \mathbf {z} =(z_{1},\dotsc ,z_{K})\in \mathbb {R} ^{K}} and computes each component of vector σ ( z ) ∈ ( 0 , 1 ) K {\displaystyle \sigma (\mathbf {z} )\in (0,1)^{K}} with σ ( z ) i = e z i ∑ j = 1 K e z j . {\displaystyle \sigma (\mathbf {z} )_{i}={\frac {e^{z_{i}}}{\sum _{j=1}^{K}e^{z_{j}}}}\,.} In words, the softmax applies the standard exponential function to each element z i {\displaystyle z_{i}} of the input vector z {\displaystyle \mathbf {z} } (consisting of K {\displaystyle K} real numbers), and normalizes these values by dividing by the sum of all these exponentials.
The normalization ensures that the sum of the components of the output vector σ ( z ) {\displaystyle \sigma (\mathbf {z} )} is 1. The term "softmax" derives from the amplifying effects of the exponential on any maxima in the input vector. For example, the standard softmax of ( 1 , 2 , 8 ) {\displaystyle (1,2,8)} is approximately ( 0.001 , 0.002 , 0.997 ) {\displaystyle (0.001,0.002,0.997)} , which amounts to assigning almost all of the total unit weight in the result to the position of the vector's maximal element (of 8). In general, instead of e a different base b > 0 can be used. As above, if b > 1 then larger input components will result in larger output probabilities, and increasing the value of b will create probability distributions that are more concentrated around the positions of the largest input values. Conversely, if 0 < b < 1 then smaller input components will result in larger output probabilities, and decreasing the value of b will create probability distributions that are more concentrated around the positions of the smallest input values. Writing b = e β {\displaystyle b=e^{\beta }} or b = e − β {\displaystyle b=e^{-\beta }} (for real β) yields the expressions: σ ( z ) i = e β z i ∑ j = 1 K e β z j or σ ( z ) i = e − β z i ∑ j = 1 K e − β z j for i = 1 , … , K . {\displaystyle \sigma (\mathbf {z} )_{i}={\frac {e^{\beta z_{i}}}{\sum _{j=1}^{K}e^{\beta z_{j}}}}{\text{ or }}\sigma (\mathbf {z} )_{i}={\frac {e^{-\beta z_{i}}}{\sum _{j=1}^{K}e^{-\beta z_{j}}}}{\text{ for }}i=1,\dotsc ,K.} A value proportional to the reciprocal of β is sometimes referred to as the temperature: β = 1 / k T {\textstyle \beta =1/kT} , where k is typically 1 or the Boltzmann constant and T is the temperature. A higher temperature results in a more uniform output distribution (i.e. with higher entropy; it is "more random"), while a lower temperature results in a sharper output distribution, with one value dominating. 
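The definition and the effect of the parameter β can be sketched in a few lines of Python (an illustration, not from the article). Subtracting max(z) before exponentiating is a standard numerical-stability device; because softmax is invariant under shifting all inputs by a constant, it leaves the result unchanged while avoiding overflow.

```python
import math

def softmax(z, beta=1.0):
    """Softmax with inverse temperature beta (T = 1/beta).

    Subtracting max(z) before exponentiating leaves the result
    unchanged (shift invariance) but avoids overflow in exp."""
    m = max(z)
    exps = [math.exp(beta * (v - m)) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

p = softmax([1, 2, 8])
print([round(v, 3) for v in p])   # → [0.001, 0.002, 0.997], as in the text

# Lower temperature (larger beta) sharpens the distribution toward the
# maximum; higher temperature (smaller beta) flattens it toward uniform:
assert max(softmax([1, 2, 8], beta=5.0)) > max(p)
assert max(softmax([1, 2, 8], beta=0.1)) < max(p)
```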
In some fields, the base is fixed, corresponding to a fixed scale, while in others the parameter β (or T) is varied. == Interpretations == === Smooth arg max === The Softmax function is a smooth approximation to the arg max function: the function whose value is the index of a vector's largest element. The name "softmax" may be misleading. Softmax is not a smooth maximum (that is, a smooth approximation to the maximum function). The term "softmax" is also used for the closely related LogSumExp function, which is a smooth maximum. For this reason, some prefer the more accurate term "softargmax", though the term "softmax" is conventional in machine learning. This section uses the term "softargmax" for clarity. Formally, instead of considering the arg max as a function with categorical output 1 , … , n {\displaystyle 1,\dots ,n} (corresponding to the index), consider the arg max function with one-hot representation of the output (assuming there is a unique maximum arg): a r g m a x ⁡ ( z 1 , … , z n ) = ( y 1 , … , y n ) = ( 0 , … , 0 , 1 , 0 , … , 0 ) , {\displaystyle \operatorname {arg\,max} (z_{1},\,\dots ,\,z_{n})=(y_{1},\,\dots ,\,y_{n})=(0,\,\dots ,\,0,\,1,\,0,\,\dots ,\,0),} where the output coordinate y i = 1 {\displaystyle y_{i}=1} if and only if i {\displaystyle i} is the arg max of ( z 1 , … , z n ) {\displaystyle (z_{1},\dots ,z_{n})} , meaning z i {\displaystyle z_{i}} is the unique maximum value of ( z 1 , … , z n ) {\displaystyle (z_{1},\,\dots ,\,z_{n})} . For example, in this encoding a r g m a x ⁡ ( 1 , 5 , 10 ) = ( 0 , 0 , 1 ) , {\displaystyle \operatorname {arg\,max} (1,5,10)=(0,0,1),} since the third argument is the maximum. This can be generalized to multiple arg max values (multiple equal z i {\displaystyle z_{i}} being the maximum) by dividing the 1 between all max args; formally 1/k where k is the number of arguments assuming the maximum. 
For example, a r g m a x ⁡ ( 1 , 5 , 5 ) = ( 0 , 1 / 2 , 1 / 2 ) , {\displaystyle \operatorname {arg\,max} (1,\,5,\,5)=(0,\,1/2,\,1/2),} since the second and third argument are both the maximum. In case all arguments are equal, this is simply a r g m a x ⁡ ( z , … , z ) = ( 1 / n , … , 1 / n ) . {\displaystyle \operatorname {arg\,max} (z,\dots ,z)=(1/n,\dots ,1/n).} Points z with multiple arg max values are singular points (or singularities, and form the singular set) – these are the points where arg max is discontinuous (with a jump discontinuity) – while points with a single arg max are known as non-singular or regular points. With the last expression given in the introduction, softargmax is now a smooth approximation of arg max: as ⁠ β → ∞ {\displaystyle \beta \to \infty } ⁠, softargmax converges to arg max. There are various notions of convergence of a function; softargmax converges to arg max pointwise, meaning for each fixed input z as ⁠ β → ∞ {\displaystyle \beta \to \infty } ⁠, σ β ( z ) → a r g m a x ⁡ ( z ) . {\displaystyle \sigma _{\beta }(\mathbf {z} )\to \operatorname {arg\,max} (\mathbf {z} ).} However, softargmax does not converge uniformly to arg max, meaning intuitively that different points converge at different rates, and may converge arbitrarily slowly. In fact, softargmax is continuous, but arg max is not continuous at the singular set where two coordinates are equal, while the uniform limit of continuous functions is continuous. The reason it fails to converge uniformly is that for inputs where two coordinates are almost equal (and one is the maximum), the arg max is the index of one or the other, so a small change in input yields a large change in output. 
For example, σ β ( 1 , 1.0001 ) → ( 0 , 1 ) , {\displaystyle \sigma _{\beta }(1,\,1.0001)\to (0,1),} but σ β ( 1 , 0.9999 ) → ( 1 , 0 ) , {\displaystyle \sigma _{\beta }(1,\,0.9999)\to (1,\,0),} and σ β ( 1 , 1 ) = 1 / 2 {\displaystyle \sigma _{\beta }(1,\,1)=1/2} for all inputs: the closer the points are to the singular set ( x , x ) {\displaystyle (x,x)} , the slower they converge. However, softargmax does converge compactly on the non-singular set. Conversely, as ⁠ β → − ∞ {\displaystyle \beta \to -\infty } ⁠, softargmax converges to arg min in the same way, where here the singular set is points with two arg min values. In the language of tropical analysis, the softmax is a deformation or "quantization" of arg max and arg min, corresponding to using the log semiring instead of the max-plus semiring (respectively min-plus semiring), and recovering the arg max or arg min by taking the limit is called "tropicalization" or "dequantization". It is also the case that, for any fixed β, if one input ⁠ z i {\displaystyle z_{i}} ⁠ is much larger than the others relative to the temperature, T = 1 / β {\displaystyle T=1/\beta } , the output is approximately the arg max. For example, a difference of 10 is large relative to a temperature of 1: σ ( 0 , 10 ) := σ 1 ( 0 , 10 ) = ( 1 / ( 1 + e 10 ) , e 10 / ( 1 + e 10 ) ) ≈ ( 0.00005 , 0.99995 ) {\displaystyle \sigma (0,\,10):=\sigma _{1}(0,\,10)=\left(1/\left(1+e^{10}\right),\,e^{10}/\left(1+e^{10}\right)\right)\approx (0.00005,\,0.99995)} However, if the difference is small relative to the temperature, the value is not close to the arg max. For example, a difference of 10 is small relative to a temperature of 100: σ 1 / 100 ( 0 , 10 ) = ( 1 / ( 1 + e 1 / 10 ) , e 1 / 10 / ( 1 + e 1 / 10 ) ) ≈ ( 0.475 , 0.525 ) . 
{\displaystyle \sigma _{1/100}(0,\,10)=\left(1/\left(1+e^{1/10}\right),\,e^{1/10}/\left(1+e^{1/10}\right)\right)\approx (0.475,\,0.525).} As ⁠ β → ∞ {\displaystyle \beta \to \infty } ⁠, temperature goes to zero, T = 1 / β → 0 {\displaystyle T=1/\beta \to 0} , so eventually all differences become large (relative to a shrinking temperature), which gives another interpretation for the limit behavior. === Statistical mechanics === In statistical mechanics, the softargmax function is known as the Boltzmann distribution (or Gibbs distribution):: 7 the index set 1 , … , k {\displaystyle {1,\,\dots ,\,k}} are the microstates of the system; the inputs z i {\displaystyle z_{i}} are the energies of that state; the denominator is known as the partition function, often denoted by Z; and the factor β is called the coldness (or thermodynamic beta, or inverse temperature). == Applications == The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression),: 206–209 multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks. 
Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the jth class given a sample vector x and a weighting vector w is: P ( y = j ∣ x ) = e x T w j ∑ k = 1 K e x T w k {\displaystyle P(y=j\mid \mathbf {x} )={\frac {e^{\mathbf {x} ^{\mathsf {T}}\mathbf {w} _{j}}}{\sum _{k=1}^{K}e^{\mathbf {x} ^{\mathsf {T}}\mathbf {w} _{k}}}}} This can be seen as the composition of K linear functions x ↦ x T w 1 , … , x ↦ x T w K {\displaystyle \mathbf {x} \mapsto \mathbf {x} ^{\mathsf {T}}\mathbf {w} _{1},\ldots ,\mathbf {x} \mapsto \mathbf {x} ^{\mathsf {T}}\mathbf {w} _{K}} and the softmax function (where x T w {\displaystyle \mathbf {x} ^{\mathsf {T}}\mathbf {w} } denotes the inner product of x {\displaystyle \mathbf {x} } and w {\displaystyle \mathbf {w} } ). The operation is equivalent to applying a linear operator defined by w {\displaystyle \mathbf {w} } to vectors x {\displaystyle \mathbf {x} } , thus transforming the original, probably highly-dimensional, input to vectors in a K-dimensional space R K {\displaystyle \mathbb {R} ^{K}} . === Neural networks === The standard softmax function is often used in the final layer of a neural network-based classifier. Such networks are commonly trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression. Since the function maps a vector and a specific index i {\displaystyle i} to a real value, the derivative needs to take the index into account: ∂ ∂ q k σ ( q , i ) = σ ( q , i ) ( δ i k − σ ( q , k ) ) . {\displaystyle {\frac {\partial }{\partial q_{k}}}\sigma ({\textbf {q}},i)=\sigma ({\textbf {q}},i)(\delta _{ik}-\sigma ({\textbf {q}},k)).} This expression is symmetrical in the indexes i , k {\displaystyle i,k} and thus may also be expressed as ∂ ∂ q k σ ( q , i ) = σ ( q , k ) ( δ i k − σ ( q , i ) ) . 
{\displaystyle {\frac {\partial }{\partial q_{k}}}\sigma ({\textbf {q}},i)=\sigma ({\textbf {q}},k)(\delta _{ik}-\sigma ({\textbf {q}},i)).} Here, the Kronecker delta is used for simplicity (cf. the derivative of a sigmoid function, being expressed via the function itself). To ensure stable numerical computations subtracting the maximum value from the input vector is common. This approach, while not altering the output or the derivative theoretically, enhances stability by directly controlling the maximum exponent value computed. If the function is scaled with the parameter β {\displaystyle \beta } , then these expressions must be multiplied by β {\displaystyle \beta } . See multinomial logit for a probability model which uses the softmax activation function. === Reinforcement learning === In the field of reinforcement learning, a softmax function can be used to convert values into action probabilities. The function commonly used is: P t ( a ) = exp ⁡ ( q t ( a ) / τ ) ∑ i = 1 n exp ⁡ ( q t ( i ) / τ ) , {\displaystyle P_{t}(a)={\frac {\exp(q_{t}(a)/\tau )}{\sum _{i=1}^{n}\exp(q_{t}(i)/\tau )}}{\text{,}}} where the action value q t ( a ) {\displaystyle q_{t}(a)} corresponds to the expected reward of following action a and τ {\displaystyle \tau } is called a temperature parameter (in allusion to statistical mechanics). For high temperatures ( τ → ∞ {\displaystyle \tau \to \infty } ), all actions have nearly the same probability and the lower the temperature, the more expected rewards affect the probability. For a low temperature ( τ → 0 + {\displaystyle \tau \to 0^{+}} ), the probability of the action with the highest expected reward tends to 1. == Computational complexity and remedies == In neural network applications, the number K of possible outcomes is often large, e.g. in case of neural language models that predict the most likely outcome out of a vocabulary which might contain millions of possible words. 
This can make the calculations for the softmax layer (i.e. the matrix multiplications to determine the z i {\displaystyle z_{i}} , followed by the application of the softmax function itself) computationally expensive. What's more, the gradient descent backpropagation method for training such a neural network involves calculating the softmax for every training example, and the number of training examples can also become large. The computational effort for the softmax became a major limiting factor in the development of larger neural language models, motivating various remedies to reduce training times. Approaches that reorganize the softmax layer for more efficient calculation include the hierarchical softmax and the differentiated softmax. The hierarchical softmax (introduced by Morin and Bengio in 2005) uses a binary tree structure where the outcomes (vocabulary words) are the leaves and the intermediate nodes are suitably selected "classes" of outcomes, forming latent variables. The desired probability (softmax value) of a leaf (outcome) can then be calculated as the product of the probabilities of all nodes on the path from the root to that leaf. Ideally, when the tree is balanced, this would reduce the computational complexity from O ( K ) {\displaystyle O(K)} to O ( log 2 ⁡ K ) {\displaystyle O(\log _{2}K)} . In practice, results depend on choosing a good strategy for clustering the outcomes into classes. A Huffman tree was used for this in Google's word2vec models (introduced in 2013) to achieve scalability. A second kind of remedies is based on approximating the softmax (during training) with modified loss functions that avoid the calculation of the full normalization factor. These include methods that restrict the normalization sum to a sample of outcomes (e.g. Importance Sampling, Target Sampling). == Numerical algorithms == The standard softmax is numerically unstable because of large exponentiations. 
The safe softmax method calculates instead σ ( z ) i = e β ( z i − m ) ∑ j = 1 K e β ( z j − m ) {\displaystyle \sigma (\mathbf {z} )_{i}={\frac {e^{\beta (z_{i}-m)}}{\sum _{j=1}^{K}e^{\beta (z_{j}-m)}}}} where m = max i z i {\displaystyle m=\max _{i}z_{i}} is the largest factor involved. Subtracting by it guarantees that the exponentiations result in at most 1. The attention mechanism in Transformers takes three arguments: a "query vector" q {\displaystyle q} , a list of "key vectors" k 1 , … , k N {\displaystyle k_{1},\dots ,k_{N}} , and a list of "value vectors" v 1 , … , v N {\displaystyle v_{1},\dots ,v_{N}} , and outputs a softmax-weighted sum over value vectors: o = ∑ i = 1 N e q T k i − m ∑ j = 1 N e q T k j − m v i {\displaystyle o=\sum _{i=1}^{N}{\frac {e^{q^{T}k_{i}-m}}{\sum _{j=1}^{N}e^{q^{T}k_{j}-m}}}v_{i}} The standard softmax method involves several loops over the inputs, which would be bottlenecked by memory bandwidth. The FlashAttention method is a communication-avoiding algorithm that fuses these operations into a single loop, increasing the arithmetic intensity. It is an online algorithm that computes the following quantities: z i = q T k i m i = max ( z 1 , … , z i ) = max ( m i − 1 , z i ) l i = e z 1 − m i + ⋯ + e z i − m i = e m i − 1 − m i l i − 1 + e z i − m i o i = e z 1 − m i v 1 + ⋯ + e z i − m i v i = e m i − 1 − m i o i − 1 + e z i − m i v i {\displaystyle {\begin{aligned}z_{i}&=q^{T}k_{i}&\\m_{i}&=\max(z_{1},\dots ,z_{i})&=&\max(m_{i-1},z_{i})\\l_{i}&=e^{z_{1}-m_{i}}+\dots +e^{z_{i}-m_{i}}&=&e^{m_{i-1}-m_{i}}l_{i-1}+e^{z_{i}-m_{i}}\\o_{i}&=e^{z_{1}-m_{i}}v_{1}+\dots +e^{z_{i}-m_{i}}v_{i}&=&e^{m_{i-1}-m_{i}}o_{i-1}+e^{z_{i}-m_{i}}v_{i}\end{aligned}}} and returns o N / l N {\displaystyle o_{N}/l_{N}} . In practice, FlashAttention operates over multiple queries and keys per loop iteration, in a similar way as blocked matrix multiplication. 
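The running quantities m, l, and o above can be sketched as a single-pass loop. This is a simplified illustration of the online-softmax idea; real FlashAttention kernels process blocks of queries and keys at once and manage GPU memory explicitly:

```python
import numpy as np

def online_softmax_attention(q, ks, vs):
    """Softmax-weighted sum over value vectors in one pass, using the
    running-max recurrences for m_i, l_i, o_i (finite scores assumed)."""
    m = -np.inf                              # running maximum of z_i = q^T k_i
    l = 0.0                                  # running normalizer sum_i exp(z_i - m)
    o = np.zeros_like(vs[0], dtype=float)    # running unnormalized output
    for k, v in zip(ks, vs):
        z = float(q @ k)
        m_new = max(m, z)
        scale = np.exp(m - m_new)            # exp(-inf) = 0.0 on the first step
        l = scale * l + np.exp(z - m_new)
        o = scale * o + np.exp(z - m_new) * v
        m = m_new
    return o / l

# Check against the two-pass "safe softmax" computation:
rng = np.random.default_rng(0)
q = rng.normal(size=4)
ks = rng.normal(size=(8, 4))
vs = rng.normal(size=(8, 3))
z = ks @ q
w = np.exp(z - z.max())
w = w / w.sum()
assert np.allclose(online_softmax_attention(q, ks, vs), w @ vs)
```

The key design point is that each step rescales the previous partial sums by exp(m_{i-1} − m_i), so no exponent larger than 0 is ever evaluated.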
If backpropagation is needed, then the output vectors and the intermediate arrays [ m 1 , … , m N ] , [ l 1 , … , l N ] {\displaystyle [m_{1},\dots ,m_{N}],[l_{1},\dots ,l_{N}]} are cached, and during the backward pass, attention matrices are rematerialized from these, making it a form of gradient checkpointing. == Mathematical properties == Geometrically the softmax function maps the vector space R K {\displaystyle \mathbb {R} ^{K}} to the interior of the standard ( K − 1 ) {\displaystyle (K-1)} -simplex, cutting the dimension by one (the range is a ( K − 1 ) {\displaystyle (K-1)} -dimensional simplex in K {\displaystyle K} -dimensional space), due to the linear constraint that all outputs sum to 1, meaning the range lies on a hyperplane. Along the main diagonal ( x , x , … , x ) , {\displaystyle (x,\,x,\,\dots ,\,x),} softmax is just the uniform distribution on outputs, ( 1 / n , … , 1 / n ) {\displaystyle (1/n,\dots ,1/n)} : equal scores yield equal probabilities. More generally, softmax is invariant under translation by the same value in each coordinate: adding c = ( c , … , c ) {\displaystyle \mathbf {c} =(c,\,\dots ,\,c)} to the inputs z {\displaystyle \mathbf {z} } yields σ ( z + c ) = σ ( z ) {\displaystyle \sigma (\mathbf {z} +\mathbf {c} )=\sigma (\mathbf {z} )} , because it multiplies each exponent by the same factor, e c {\displaystyle e^{c}} (because e z i + c = e z i ⋅ e c {\displaystyle e^{z_{i}+c}=e^{z_{i}}\cdot e^{c}} ), so the ratios do not change: σ ( z + c ) j = e z j + c ∑ k = 1 K e z k + c = e z j ⋅ e c ∑ k = 1 K e z k ⋅ e c = σ ( z ) j . {\displaystyle \sigma (\mathbf {z} +\mathbf {c} )_{j}={\frac {e^{z_{j}+c}}{\sum _{k=1}^{K}e^{z_{k}+c}}}={\frac {e^{z_{j}}\cdot e^{c}}{\sum _{k=1}^{K}e^{z_{k}}\cdot e^{c}}}=\sigma (\mathbf {z} )_{j}.} Geometrically, softmax is constant along diagonals: this is the dimension that is eliminated, and corresponds to the softmax output being independent of a translation in the input scores (a choice of 0 score). 
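The translation invariance, and the lack of scale invariance, can be checked numerically. A minimal sketch, with the softmax helper defined inline:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # shifting by the max leaves the output unchanged
    return e / e.sum()

z = np.array([0.5, -1.2, 3.0])

# Translation invariance: adding the same constant to every score
assert np.allclose(softmax(z), softmax(z + 7.3))

# But no scale invariance: doubling the scores sharpens the distribution
assert float(softmax(2 * z).max()) > float(softmax(z).max())
```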
One can normalize input scores by assuming that the sum is zero (subtract the average: c {\displaystyle \mathbf {c} } where c = 1 n ∑ z i {\textstyle c={\frac {1}{n}}\sum z_{i}} ), and then the softmax takes the hyperplane of points that sum to zero, ∑ z i = 0 {\textstyle \sum z_{i}=0} , to the open simplex of positive values that sum to 1 ∑ σ ( z ) i = 1 {\textstyle \sum \sigma (\mathbf {z} )_{i}=1} , analogously to how the exponent takes 0 to 1, e 0 = 1 {\displaystyle e^{0}=1} and is positive. By contrast, softmax is not invariant under scaling. For instance, σ ( ( 0 , 1 ) ) = ( 1 / ( 1 + e ) , e / ( 1 + e ) ) {\displaystyle \sigma {\bigl (}(0,\,1){\bigr )}={\bigl (}1/(1+e),\,e/(1+e){\bigr )}} but σ ( ( 0 , 2 ) ) = ( 1 / ( 1 + e 2 ) , e 2 / ( 1 + e 2 ) ) . {\displaystyle \sigma {\bigl (}(0,2){\bigr )}={\bigl (}1/\left(1+e^{2}\right),\,e^{2}/\left(1+e^{2}\right){\bigr )}.} The standard logistic function is the special case for a 1-dimensional axis in 2-dimensional space, say the x-axis in the (x, y) plane. One variable is fixed at 0 (say z 2 = 0 {\displaystyle z_{2}=0} ), so e 0 = 1 {\displaystyle e^{0}=1} , and the other variable can vary, denote it z 1 = x {\displaystyle z_{1}=x} , so e z 1 / ∑ k = 1 2 e z k = e x / ( e x + 1 ) , {\textstyle e^{z_{1}}/\sum _{k=1}^{2}e^{z_{k}}=e^{x}/\left(e^{x}+1\right),} the standard logistic function, and e z 2 / ∑ k = 1 2 e z k = 1 / ( e x + 1 ) , {\textstyle e^{z_{2}}/\sum _{k=1}^{2}e^{z_{k}}=1/\left(e^{x}+1\right),} its complement (meaning they add up to 1). The 1-dimensional input could alternatively be expressed as the line ( x / 2 , − x / 2 ) {\displaystyle (x/2,\,-x/2)} , with outputs e x / 2 / ( e x / 2 + e − x / 2 ) = e x / ( e x + 1 ) {\displaystyle e^{x/2}/\left(e^{x/2}+e^{-x/2}\right)=e^{x}/\left(e^{x}+1\right)} and e − x / 2 / ( e x / 2 + e − x / 2 ) = 1 / ( e x + 1 ) . 
{\displaystyle e^{-x/2}/\left(e^{x/2}+e^{-x/2}\right)=1/\left(e^{x}+1\right).} === Gradients === The softmax function is also the gradient of the LogSumExp function: ∂ ∂ z i LSE ⁡ ( z ) = exp ⁡ z i ∑ j = 1 K exp ⁡ z j = σ ( z ) i , for i = 1 , … , K , z = ( z 1 , … , z K ) ∈ R K , {\displaystyle {\frac {\partial }{\partial z_{i}}}\operatorname {LSE} (\mathbf {z} )={\frac {\exp z_{i}}{\sum _{j=1}^{K}\exp z_{j}}}=\sigma (\mathbf {z} )_{i},\quad {\text{ for }}i=1,\dotsc ,K,\quad \mathbf {z} =(z_{1},\,\dotsc ,\,z_{K})\in \mathbb {R} ^{K},} where the LogSumExp function is defined as LSE ⁡ ( z 1 , … , z n ) = log ⁡ ( exp ⁡ ( z 1 ) + ⋯ + exp ⁡ ( z n ) ) {\displaystyle \operatorname {LSE} (z_{1},\,\dots ,\,z_{n})=\log \left(\exp(z_{1})+\cdots +\exp(z_{n})\right)} . The gradient of softmax is thus ∂ z j σ i = σ i ( δ i j − σ j ) {\displaystyle \partial _{z_{j}}\sigma _{i}=\sigma _{i}(\delta _{ij}-\sigma _{j})} . == History == The softmax function was used in statistical mechanics as the Boltzmann distribution in the foundational paper Boltzmann (1868), formalized and popularized in the influential textbook Gibbs (1902). The use of the softmax in decision theory is credited to R. Duncan Luce,: 1 who used the axiom of independence of irrelevant alternatives in rational choice theory to deduce the softmax in Luce's choice axiom for relative preferences. In machine learning, the term "softmax" is credited to John S. Bridle in two 1989 conference papers, Bridle (1990a):: 1 and Bridle (1990b): We are concerned with feed-forward non-linear networks (multi-layer perceptrons, or MLPs) with multiple outputs. We wish to treat the outputs of the network as probabilities of alternatives (e.g. pattern classes), conditioned on the inputs. We look for appropriate output non-linearities and for appropriate criteria for adaptation of the parameters of the network (e.g. weights). 
We explain two modifications: probability scoring, which is an alternative to squared error minimisation, and a normalised exponential (softmax) multi-input generalisation of the logistic non-linearity.: 227 For any input, the outputs must all be positive and they must sum to unity. ... Given a set of unconstrained values, V j ( x ) {\displaystyle V_{j}(x)} , we can ensure both conditions by using a Normalised Exponential transformation: Q j ( x ) = e V j ( x ) / ∑ k e V k ( x ) {\displaystyle Q_{j}(x)=\left.e^{V_{j}(x)}\right/\sum _{k}e^{V_{k}(x)}} This transformation can be considered a multi-input generalisation of the logistic, operating on the whole output layer. It preserves the rank order of its input values, and is a differentiable generalisation of the 'winner-take-all' operation of picking the maximum value. For this reason we like to refer to it as softmax.: 213 == Example == With an input of (1, 2, 3, 4, 1, 2, 3), the softmax is approximately (0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175). The output has most of its weight where the "4" was in the original input. This is what the function is normally used for: to highlight the largest values and suppress values which are significantly below the maximum value. But note: a change of temperature changes the output. When the temperature is multiplied by 10, the inputs are effectively (0.1, 0.2, 0.3, 0.4, 0.1, 0.2, 0.3) and the softmax is approximately (0.125, 0.138, 0.153, 0.169, 0.125, 0.138, 0.153). This shows that high temperatures de-emphasize the maximum value. == Alternatives == The softmax function generates probability predictions densely distributed over its support. Other functions like sparsemax or α-entmax can be used when sparse probability predictions are desired. The Gumbel-softmax reparametrization trick can also be used when sampling from a discrete distribution needs to be mimicked in a differentiable manner. 
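As a sketch, the numbers quoted in the Example section can be reproduced with NumPy:

```python
import numpy as np

z = np.array([1, 2, 3, 4, 1, 2, 3], dtype=float)

p = np.exp(z) / np.exp(z).sum()
print(np.round(p, 3))      # [0.024 0.064 0.175 0.475 0.024 0.064 0.175]

# Raising the temperature by a factor of 10 (dividing the inputs by 10)
# flattens the distribution:
p_hot = np.exp(z / 10) / np.exp(z / 10).sum()
print(np.round(p_hot, 3))  # [0.125 0.138 0.153 0.169 0.125 0.138 0.153]
```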
== See also == Softplus Multinomial logistic regression Dirichlet distribution – an alternative way to sample categorical distributions Partition function Exponential tilting – a generalization of Softmax to more general probability distributions == Notes == == References ==
Sofya Vasilyevna Kovalevskaya (Russian: Софья Васильевна Ковалевская; born Korvin-Krukovskaya; 15 January [O.S. 3 January] 1850 – 10 February 1891) was a Russian mathematician who made noteworthy contributions to analysis, partial differential equations and mechanics. She was a pioneer for women in mathematics around the world – the first woman to earn a doctorate (in the modern sense) in mathematics, the first woman appointed to a full professorship in northern Europe and one of the first women to work for a scientific journal as an editor. According to historian of science Ann Hibner Koblitz, Kovalevskaya was "the greatest known woman scientist before the twentieth century".: 255 Historian of mathematics Roger Cooke writes: ... the more I reflect on her life and consider the magnitude of her achievements, set against the weight of the obstacles she had to overcome, the more I admire her. For me she has taken on a heroic stature achieved by very few other people in history. To venture, as she did, into academia, a world almost no woman had yet explored, and to be consequently the object of curious scrutiny, while a doubting society looked on, half-expecting her to fail, took tremendous courage and determination. To achieve, as she did, at least two major results of lasting value to scholarship, is evidence of a considerable talent, developed through iron discipline.: 1 Her sister was the socialist Anne Jaclard. There are several alternative transliterations of her name. She herself used Sophie Kowalevski (or occasionally Kowalevsky) in her academic publications. In Sweden she was known as Sonja Kovalevsky; Sonja (Russian Соня) is her Russian nickname. == Background and early education == Sofya Kovalevskaya (née Korvin-Krukovskaya) was born in Moscow, the second of three children. 
Her father, Lieutenant General Vasily Vasilyevich Korvin-Krukovsky, served in the Imperial Russian Army as head of the Moscow Artillery before retiring to Polibino, his family estate in Pskov Oblast in 1858, when Kovalevskaya was eight years old. He was a member of the minor Russian nobility, of mixed Belarusian–Polish descent (Polish on his father's side), with possible partial ancestry from the royal Corvin family of Hungary, and served as Marshal of Nobility for Vitebsk province. (There may also have been some Romani ancestry on the father's side.) Her mother, Yelizaveta Fedorovna von Schubert (1820–1879), descended from a family of German immigrants to St. Petersburg who lived on Vasilievsky Island. Her maternal great-grandfather was the astronomer and geographer Friedrich Theodor von Schubert (1758–1825), who emigrated to Russia from Germany around 1785. He became a full member of the St. Petersburg Academy of Science and head of its astronomical observatory. His son, Kovalevskaya's maternal grandfather, was General Theodor Friedrich von Schubert (1789–1865), who was head of the military topographic service, and an honorary member of the Russian Academy of Sciences, as well as Director of the Kunstkamera museum. Kovalevskaya's parents provided her with a good early education. At various times, her governesses were native speakers of English, French, and German. When she was 11 years old, she was intrigued by a foretaste of what she was to learn later in her lessons in calculus; the wall of her room had been papered with pages from lecture notes by Ostrogradsky, left over from her father's student days. She was tutored privately in elementary mathematics by Iosif Ignatevich Malevich. The physicist Nikolai Nikanorovich Tyrtov noted her unusual aptitude when she managed to understand his textbook by discovering for herself an approximate construction of trigonometric functions which she had not yet encountered in her studies. 
Tyrtov called her a "new Pascal" and suggested she be given a chance to pursue further studies under the tutelage of A. N. Strannoliubskii. In 1866–67 she spent much of the winter with her family in St. Petersburg, where she was provided private tutoring by Strannoliubskii, a well-known advocate of higher education for women, who taught her calculus. During that same period, the son of a local priest introduced her sister Anna to progressive ideas influenced by the radical movement of the 1860s, providing her with copies of radical journals of the time discussing Russian nihilism. Although the word nihilist (нигилист) often was used in a negative sense, it did not have that meaning for the young Russians of the 1860s (шестидесятники): After the famous writer Ivan Turgenev used the word nihilist to refer to Bazarov, the young hero of his 1862 novel Fathers and Children, a certain segment of the "new people" adopted that name as well, despite its negative connotations in most quarters.... For the nihilists, science appeared to be the most effective means of helping the mass of people to a better life. Science pushed back the barriers of religion and superstition, and "proved" through the theory of evolution that (peaceful) social revolutions were the way of nature. For the early nihilists, science was virtually synonymous with truth, progress and radicalism; thus, the pursuit of a scientific career was viewed in no way as a hindrance to social activism. In fact, it was seen as a positive boost to progressive forces, an active blow against backwardness.: 2–4 Despite her obvious talent for mathematics, she could not complete her education in Russia. At that time, women were not allowed to attend universities in Russia and most other countries. In order to study abroad, Kovalevskaya needed written permission from her father (or husband). 
Accordingly, in 1868 she contracted a "fictitious marriage" with Vladimir Kovalevskij, a young paleontology student, book publisher and radical, who was the first to translate and publish the works of Charles Darwin in Russia. They moved from Russia to Germany in 1869, after a brief stay in Vienna, in order to pursue advanced studies. == Student years == In April 1869, following Sofia's and Vladimir's brief stay in Vienna, where she attended lectures in physics at the university, they moved to Heidelberg. Through great efforts, she obtained permission to audit classes with the professors' approval at the University of Heidelberg. There she attended courses in physics and mathematics under such teachers as Hermann von Helmholtz, Gustav Kirchhoff and Robert Bunsen.: 87–89 Vladimir, meanwhile, went on to the University of Jena to pursue a doctorate in paleontology. In October 1869, shortly after attending courses in Heidelberg, she visited London with Vladimir, who spent time with his colleagues Thomas Huxley and Charles Darwin, while she was invited to attend George Eliot's Sunday salons. There, at age nineteen, she met Herbert Spencer and was led into a debate, at Eliot's instigation, on "woman's capacity for abstract thought". Although there is no record of the details of their conversation, she had just completed a lecture course in Heidelberg on mechanics, and she may just possibly have made mention of the Euler equations governing the motion of a rigid body (see following section). George Eliot was writing Middlemarch at the time, in which one finds the remarkable sentence: "In short, woman was a problem which, since Mr. Brooke's mind felt blank before it, could hardly be less complicated than the revolutions of an irregular solid." This was well before Kovalevskaya's notable contribution of the "Kovalevskaya top" to the brief list of known examples of integrable rigid body motion (see following section). 
In October 1870, Kovalevskaya moved to Berlin, where she began to take private lessons with Karl Weierstrass, since the university would not allow her even to audit classes. He was very impressed with her mathematical skills, and over the subsequent three years taught her the same material that comprised his lectures at the university. In 1871 she briefly traveled to Paris together with Vladimir in order to help in the Paris Commune, where Kovalevskaya attended the injured and her sister Anyuta was active in the Commune.: 104–106 With the fall of the Commune, however, both Anyuta and her common law husband Victor Jaclard, who was leader of the Montmartre contingent of the National Guard and a prominent Blanquiste, were arrested. Although Anyuta managed to escape to London, Jaclard was sentenced to execution. However, with the assistance of Sofia's and Anyuta's father General Krukovsky, who had come urgently to Paris to help Anyuta and who wrote to Adolphe Thiers asking for clemency, Jaclard was saved.: 107–108 Kovalevskaya returned to Berlin and continued her studies with Weierstrass for three more years. In 1874 she presented three papers—on partial differential equations, on the dynamics of Saturn's rings, and on elliptic integrals—to the University of Göttingen as her doctoral dissertation. With the support of Weierstrass, this earned her a doctorate in mathematics summa cum laude, after Weierstrass succeeded in having her exempted from the usual oral examinations. Kovalevskaya thereby became the first woman to have been awarded a doctorate (in the modern sense of the word) in mathematics. Her paper on partial differential equations contains what is now commonly known as the Cauchy–Kovalevskaya theorem, which proves the existence and analyticity of local solutions to such equations under suitably defined initial/boundary conditions. 
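In modern notation, a standard first-order statement of the theorem can be written as follows (this formulation is a sketch, not quoted from her dissertation; F and φ are generic symbols):

```latex
% Cauchy–Kovalevskaya theorem (first-order form): if F and \varphi are
% real-analytic near the origin, then the Cauchy problem
\[
  \frac{\partial u}{\partial t}
    = F\!\left(t, x, u, \frac{\partial u}{\partial x}\right),
  \qquad u(0, x) = \varphi(x),
\]
% has a unique solution u(t, x) that is real-analytic in some
% neighborhood of the origin.
```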
== Last years in Germany and Sweden == In 1874, Kovalevskaya and her husband Vladimir returned to Russia, but Vladimir failed to secure a professorship because of his radical beliefs. (Kovalevskaya never would have been considered for such a position because of her gender.) During this time they tried a variety of schemes to support themselves, including real estate development and involvement with an oil company. But in the late 1870s they faced serious financial problems, leading to bankruptcy. In 1875, for some unknown reason, perhaps the death of her father, Sofia and Vladimir decided to spend several years together as an actual married couple. Three years later their daughter, Sofia (called "Fufa"), was born. After almost two years devoted to raising her daughter, Kovalevskaya put Fufa under the care of relatives and friends, resumed her work in mathematics, and left Vladimir for what would be the last time. Vladimir, who had always suffered severe mood swings, became more unstable. In 1883, faced with worsening mood swings and the possibility of being prosecuted for his role in a stock swindle, Vladimir committed suicide. That year, with the help of the mathematician Gösta Mittag-Leffler, whom she had known as a fellow student of Weierstrass, Kovalevskaya was able to secure a position as a privat-docent at Stockholm University in Sweden. Kovalevskaya met Mittag-Leffler's sister, the actress, novelist, and playwright Anne Charlotte Edgren-Leffler. Until Kovalevskaya's death the two women shared a close friendship. In 1884 Kovalevskaya was appointed to a five-year position as Extraordinary Professor (assistant professor in modern terminology) and became an editor of Acta Mathematica. In 1888 she won the Prix Bordin of the French Academy of Science, for her work "Mémoire sur un cas particulier du problème de la rotation d'un corps pesant autour d'un point fixe, où l'intégration s'effectue à l'aide des fonctions ultraelliptiques du temps". 
Her submission featured the celebrated discovery of what is now known as the "Kovalevskaya top", which was subsequently shown to be the only case of rigid body motion that is "completely integrable" other than the tops of Euler and Lagrange. In 1889 Kovalevskaya was appointed Ordinary Professor (full professor) at Stockholm University, the first woman in Europe in modern times to hold such a position.: 218 After much lobbying on her behalf (and a change in the academy's rules) she was made a Corresponding Member of the Russian Academy of Sciences, but she was never offered a professorship in Russia. Kovalevskaya, who was involved in the progressive political and feminist currents of late nineteenth-century Russian nihilism, wrote several non-mathematical works as well, including a memoir, A Russian Childhood, two plays (in collaboration with Duchess Anne Charlotte Edgren-Leffler) and a partly autobiographical novel, Nihilist Girl (1890). In 1889, Kovalevskaya fell in love with Maxim Kovalevsky, a distant relation of her deceased husband, but insisted on not marrying him because she would not be able to settle down and live with him.: 18 Kovalevskaya died of flu complicated by pneumonia in 1891 at age forty-one, after returning from a vacation in Nice with Maxim.: 231 She is buried in Solna, Sweden, at Norra begravningsplatsen. Kovalevskaya's mathematical results, such as the Cauchy–Kowalevski theorem, and her pioneering role as a female mathematician in an almost exclusively male-dominated field, have made her the subject of several books, including a biography by Ann Hibner Koblitz, a biography in Russian by Polubarinova-Kochina (translated into English by M. Burov with the title Love and Mathematics: Sofya Kovalevskaya, Mir Publishers, 1985), and a book about her mathematics by R. Cooke. 
== Tributes == Sonya Kovalevsky High School Mathematics Day is a grant-making program of the Association for Women in Mathematics (AWM), funding workshops across the United States which encourage girls to explore mathematics. While the AWM currently does not have grant money to support this program, multiple universities continue the program with their own funding. The Kovalevsky Lecture is sponsored annually by the AWM and the Society for Industrial and Applied Mathematics, and is intended to highlight significant contributions of women in the fields of applied or computational mathematics. The Kovalevskaia Fund, founded in 1985 with the purpose of supporting women in science in developing countries, was named in her honor. The lunar crater Kovalevskaya is named in her honor. A gymnasium in Velikiye Luki and a progymnasium in Vilnius are named after Sofya Kovalevskaya. The Alexander von Humboldt Foundation of Germany bestows a biennial Sofia Kovalevskaya Award to promising young researchers. Saint Petersburg, Moscow, and Stockholm have streets named in honor of Kovalevskaya. On 30 June 2021, a satellite named after her (ÑuSat 22 or "Sofya", COSPAR 2021-059AS) was launched into space as part of the Satellogic Aleph-1 constellation. == In film == Kovalevskaya has been the subject of three film and TV biographies. Sofya Kovalevskaya (1956) directed by Iosef Shapiro, starring Yelena Yunger, Lev Kolesov and Tatyana Sezenyevskaya. Berget på månens baksida ("A Hill on the Dark Side of the Moon") (1983) directed by Lennart Hjulström, starring Gunilla Nyroos as Sofja Kovalewsky and Bibi Andersson as Anne Charlotte Edgren-Leffler, Duchess of Cajanello, and sister to Gösta Mittag-Leffler. Sofya Kovalevskaya (1985 TV) directed by Azerbaijani director Ayan Shakhmaliyeva, starring Yelena Safonova as Sofia. == In fiction == Little Sparrow: A Portrait of Sophia Kovalevsky (1983), Don H.
Kennedy, Ohio University Press, Athens, Ohio ISBN 0821406922 LCCN 82-12405 Beyond the Limit: The Dream of Sofya Kovalevskaya (2002), ISBN 0765302330 LCCN 2002-24363, a biographical novel by mathematician and educator Joan Spicci, published by Tom Doherty Associates, LLC, is an historically accurate portrayal of her early married years and quest for an education. It is based in part on 88 of Kovalevskaya's letters, which the author translated from Russian to English. 2021 ebook edition Against the Day, a 2006 novel by Thomas Pynchon, was speculated before release to be based on the life of Kovalevskaya, but in the finished novel she appears as a minor character. "Too Much Happiness" (2009), a short story by Alice Munro published in the August 2009 issue of Harper's Magazine, features Kovalevskaya as a main character. It was later published in a collection of the same name. == See also == Cauchy–Kowalevski theorem Kowalevski top Timeline of women in science Timeline of women in mathematics == Selected publications == Kowalevski, Sophie (1875), "Zur Theorie der partiellen Differentialgleichung", Journal für die reine und angewandte Mathematik, 80: 1–32 (The surname given in the paper is "von Kowalevsky".) Kowalevski, Sophie (1884), "Über die Reduction einer bestimmten Klasse Abel'scher Integrale 3ten Ranges auf elliptische Integrale" (PDF), Acta Mathematica, 4 (1): 393–414, doi:10.1007/BF02418424 Kowalevski, Sophie (1885), "Über die Brechung des Lichtes In Cristallinischen Mitteln" (PDF), Acta Mathematica, 6 (1): 249–304, doi:10.1007/BF02400418 Kowalevski, Sophie (1889), "Sur le probleme de la rotation d'un corps solide autour d'un point fixe" (PDF), Acta Mathematica, 12 (1): 177–232, doi:10.1007/BF02592182 Kowalevski, Sophie (1890), "Sur une propriété du système d'équations différentielles qui définit la rotation d'un corps solide autour d'un point fixe", Acta Mathematica, 14 (1): 81–93, doi:10.1007/BF02413316 Kowalevski, Sophie (1891), "Sur un théorème de M.
Bruns", Acta Mathematica, 15 (1): 45–52, doi:10.1007/BF02392602, S2CID 124051110 Kovalevskaya, Sofia (2021). Mathematician with the Soul of a Poet: Poems and Plays of Sofia Kovalevskaya. Translated by Coleman, Sandra DeLozier. Bohannon Hall Press. ISBN 979-8985029802. == Novel == Nihilist Girl, translated by Natasha Kolchevska with Mary Zirin; introduction by Natasha Kolchevska. Modern Language Association of America (2001) ISBN 0-87352-790-9 == References == == Further reading == Cooke, Roger (1984). The Mathematics of Sonya Kovalevskaya (Springer-Verlag) ISBN 0-387-96030-9 Kennedy, Don H. (1983). Little Sparrow, a Portrait of Sofia Kovalevsky. Athens: Ohio University Press. ISBN 0-8214-0692-2 Koblitz, Ann Hibner (1993). A Convergence of Lives: Sofia Kovalevskaia – Scientist, Writer, Revolutionary. Lives of women in science, 99-2518221-2 (2., revised ed.). New Brunswick, N.J.: Rutgers Univ. P. ISBN 0-8135-1962-4 Koblitz, Ann Hibner (1987). "Sofia Vasilevna Kovalevskaia" in Grinstein, Louise S.; Campbell, Paul J., eds. (1987), Women of Mathematics: A Bio-Bibliographic Sourcebook, Greenwood Press, New York, ISBN 978-0-313-24849-8 Porter, Cathy (1976). "Into Exile". Fathers and Daughters: Russian Women in Revolution. London: Virago Press. pp. 116–174. ISBN 0-704-32802-X. OCLC 2288139. The Legacy of Sonya Kovalevskaya: proceedings of a symposium sponsored by the Association for Women in Mathematics and the Mary Ingraham Bunting Institute, held October 25–28, 1985. Contemporary mathematics, 0271–4132; 64. Providence, R.I.: American Mathematical Society. 1987. ISBN 0-8218-5067-9 Sophie (Sonja) Vasiljevna Kovalevsky at Svenskt kvinnobiografiskt lexikon This article incorporates material from Sofia Kovalevskaya on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
== External links == "Sofia Kovalevskaya", Biographies of Women Mathematicians, Agnes Scott College O'Connor, John J.; Robertson, Edmund F., "Sofya Kovalevskaya", MacTutor History of Mathematics Archive, University of St Andrews Women's History – Sofia Kovalevskaya Archived 2009-02-09 at the Wayback Machine Brief biography of Sofia Kovalevskaya by Yuriy Belits. University of Colorado at Denver, March 17, 2005. Biography (in Russian) Association for Women in Mathematics Archived 2011-05-05 at the Wayback Machine Sof'i Kovalevskoy street, Saint Petersburg (OpenStreetMap) Sof'i Kovalevskoy street, Moscow (OpenStreetMap)
Wikipedia:Sofía Nieto#0
Elena Sofía Nieto Monje (Alcorcón; August 16, 1984), known as Sofía Nieto, is a Spanish actress and mathematician known especially for having worked in series such as Aquí no hay quien viva and La que se avecina. == Biography == She began her career as an actress at the age of 16, taking her first steps in the world of theatre. In 2003 she was chosen to play Natalia Cuesta, daughter of Juan Cuesta (Jose Luis Gil), in the Spanish TV series Aquí no hay quien viva, a role that made her well known among the public, as it did the rest of her co-stars. She acted in the series until the end of its broadcast on 6 July 2006. After Telecinco bought the rights to the Antena 3 series, she and most of the actors of Aquí no hay quien viva moved in January to the new series, La que se avecina, currently broadcast on Telecinco, where she went on to play Sandra, an intern at the hairdresser's of Araceli Madariaga. When Araceli Madariaga (Isabel Ordaz) divorced Enrique Pastor (Jose Luis Gil), Araceli turned the hairdresser's into a bar and Sandra became a waitress. She left the series in the second season. She has also filmed a TV movie called Bichos raros. She has participated in plays such as the Entremeses of Miguel de Cervantes and Amor de don Perlimplín con Belisa en su jardín by Federico García Lorca. == Education == In 2002, at the age of 18, she was awarded first prize in the XV Chemistry Olympiad of the Universidad Rey Juan Carlos. She studied for a bachelor's degree in mathematics at the Universidad Autónoma de Madrid, where she graduated with the Premio Extraordinario de Licenciatura in 2007. In 2006, she took part in the XXV Congreso Internacional de Matemáticos, in Madrid, and in the III Curso Internacional de Análisis Matemático de Andalucía. She later received her PhD in mathematics from the Universidad Autónoma de Madrid, and was an assistant professor at that university between 2010 and 2016.
== Television career == In Aquí no hay quien viva and La que se avecina she played the characters Natalia Cuesta and Sandra Espinosa, respectively. Natalia Cuesta in Aquí no hay quien viva (2003–2006) (91 episodes). She starts in the series as an intelligent, manipulative, disobedient, promiscuous and partying girl, always rebelling against her parents (especially her mother, Paloma, with whom she had a very bad relationship). Throughout the series we see her with a disc-jockey boyfriend, sharing a flat with some Rastafari stoners, flirting with neighbours like Roberto, Carlos or Fernando, and even pregnant by artificial insemination, although the biological parents end up backing out and she keeps the baby. She ends up dating Yago, Lucía's ex-boyfriend. After having her baby she suffers from post-partum depression. In the last episode of the series she travels with Yago to Cuba, leaving her daughter with Juan and Isabel. Sandra Espinosa in La que se avecina (2007–2008) (26 episodes). She is a thoroughly neurotic, nervous, shy girl who is constantly upset by any problem she has at work. She works as a hairdressing intern alongside an Argentinian, Fabio, who treats her like a student, under the orders of a somewhat irritable boss. Later she works as a waitress in the bar where the hairdresser's shop used to be. == References ==
Wikipedia:Solomon Marcus#0
Solomon Marcus (Romanian pronunciation: [ˈsolomon ˈmarkus]; 1 March 1925 – 17 March 2016) was a Romanian mathematician, member of the Mathematical Section of the Romanian Academy (full member from 2001) and emeritus professor of the University of Bucharest's Faculty of Mathematics. His main research was in the fields of mathematical analysis, mathematical and computational linguistics and computer science. He also published numerous papers on various cultural topics: poetics, linguistics, semiotics, philosophy, and history of science and education. == Early life and education == He was born in Bacău, Romania, to Sima and Alter Marcus, a Jewish family of tailors. From an early age he had to live through dictatorships, war, infringements on free speech and free thinking as well as anti-Semitism. At the age of 16 or 17 he started tutoring younger pupils in order to help his family financially. He graduated from Ferdinand I High School in 1944, and completed his studies at the University of Bucharest's Faculty of Science, Department of Mathematics, in 1949. He continued tutoring throughout college and later recounted in an interview that he had to endure hunger during those years and that until the age of 20 he only wore hand-me-downs from his older brothers. == Academic career == Marcus obtained his PhD in Mathematics in 1956, with a thesis on monotonic functions of two variables, written under the direction of Miron Nicolescu. He was appointed Lecturer in 1955, Associate Professor in 1964, and became a Professor in 1966 (Emeritus in 1991). Marcus has contributed to the following areas: mathematical analysis, set theory, measure and integration theory, and topology; theoretical computer science; linguistics; poetics and the theory of literature; semiotics; cultural anthropology; history and philosophy of science; and education.
== Publications by and on Marcus == Marcus published about 50 books, which have been translated into English, French, German, Italian, Spanish, Russian, Greek, Hungarian, Czech and Serbo-Croatian, and about 400 research articles in specialized journals in almost all European countries, in the United States, Canada, South America, Japan, India, and New Zealand, among others; he is cited by more than a thousand authors, including mathematicians, computer scientists, linguists, literary researchers, semioticians, anthropologists and philosophers. He is recognised as one of the initiators of mathematical linguistics and of mathematical poetics, and has been a member of the editorial boards of dozens of international scientific journals covering all his domains of interest. Marcus is featured in the 1999 book People and Ideas in Theoretical Computer Science and in the 2015 book The Human Face of Computing. A collection of his papers in English, followed by some interviews and a brief autobiography, was published in 2007 as Words and Languages Everywhere. The book Meetings with Solomon Marcus (Spandugino Publishing House, Bucharest, Romania, 2010, 1500 pages), edited by Lavinia Spandonide and Gheorghe Păun for Marcus' 85th birthday, includes recollections by several hundred people from a large variety of scientific and cultural fields, and from 25 countries. It also contains a longer autobiography. == Death == Marcus died of a cardiac infection at the Fundeni Clinical Institute in Bucharest after a short stay at Elias Hospital in Bucharest. == Honours == National Order of Faithful Service in the rank of Grand Officer, 2011. Order of the Star of Romania (Romania's highest civil Order) in the rank of Commander, 2015. Romanian Royal Family: Knight of the Royal Decoration of Nihil Sine Deo. == Notes == == References == Alexandra Bellow, Cristian S. Calude. Solomon Marcus (1925–2016), Notices of the American Mathematical Society 64,10 (2017), 1216. G. Păun, I. Petre, G. Rozenberg and A.
Salomaa (eds.). At the intersection of computer science with biology, chemistry and physics – In Memory of Solomon Marcus, Theoretical Computer Science 701 (2017), 1–234. Global Perspectives on Science and Spirituality (GPSS) Publication list on his web page, at the "Simion Stoilow" Institute of Mathematics of the Romanian Academy International Journal of Computers, Communications & Control, Vol.I (2006), No.1, pp. 73–79, "Grigore C. Moisil: A Life Becoming a Myth", by Solomon Marcus, Editor's note about the author (p. 79) Marcus' articles on semiotics at Potlatch == External links == Solomon Marcus at the University of Bucharest
Wikipedia:Solomon Mikhlin#0
Solomon Grigor'evich Mikhlin (Russian: Соломо́н Григо́рьевич Ми́хлин, real name Zalman Girshevich Mikhlin) (the family name is also transliterated as Mihlin or Michlin) (23 April 1908 – 29 August 1990) was a Soviet mathematician who worked in the fields of linear elasticity, singular integrals and numerical analysis: he is best known for the introduction of the symbol of a singular integral operator, which eventually led to the foundation and development of the theory of pseudodifferential operators. == Biography == He was born in Kholmech, Rechytsa District, Minsk Governorate (in present-day Belarus) on 23 April 1908; Mikhlin (1968) himself states in his resume that his father was a merchant, but this assertion could be untrue since, in that period, people sometimes lied about the profession of their parents in order to overcome political limitations on access to higher education. According to a different version, his father was a melamed, a teacher at a primary religious school (kheder), and the family was of modest means: according to the same source, Zalman was the youngest of five children. His first wife was Victoria Isaevna Libina: Mikhlin's book (Mikhlin 1965) is dedicated to her memory. She died of peritonitis in 1961 during a boat trip on the Volga. In 1940 they adopted a son, Grigory Zalmanovich Mikhlin, who later emigrated to Haifa, Israel. His second wife was Eugenia Yakovlevna Rubinova, born in 1918, who was his companion for the rest of his life. === Education and academic career === He graduated from a secondary school in Gomel in 1923 and entered the State Herzen Pedagogical Institute in 1925. In 1927 he was transferred to the Department of Mathematics and Mechanics of Leningrad State University as a second-year student, passing all the exams of the first year without attending lectures. Among his university professors were Nikolai Maximovich Günther and Vladimir Ivanovich Smirnov.
The latter became his master's thesis supervisor: the topic of the thesis was the convergence of double series, and it was defended in 1929. Sergei Lvovich Sobolev studied in the same class as Mikhlin. In 1930 he started his teaching career, working in some Leningrad institutes for short periods, as Mikhlin himself records in the document (Mikhlin 1968). In 1932 he got a position at the Seismological Institute of the USSR Academy of Sciences, where he worked until 1941: in 1935 he obtained the degree of Doctor of Sciences in Mathematics and Physics, without having to earn the Candidate of Sciences degree, and finally in 1937 he was promoted to the rank of professor. During World War II he was a professor at the Kazakh University in Alma Ata. From 1944, Mikhlin was a professor at Leningrad State University. From 1964 to 1986 he headed the Laboratory of Numerical Methods at the Research Institute of Mathematics and Mechanics of the same university: from 1986 until his death he was a senior researcher at that laboratory. === Honours === He received the Order of the Badge of Honour (Russian: Орден Знак Почёта) in 1961: the names of the recipients of this prize were usually published in newspapers. He was awarded the laurea honoris causa by the Karl-Marx-Stadt (now Chemnitz) Polytechnic in 1968, and was elected a member of the German Academy of Sciences Leopoldina in 1970 and of the Accademia Nazionale dei Lincei in 1981. As Fichera (1994, p. 51) states, in his own country he did not receive honours comparable to his scientific stature, mainly because of the racial policy of the communist regime, briefly described in the following section. ==== Influence of communist antisemitism ==== He lived through one of the most difficult periods of contemporary Russian history. The state of the mathematical sciences during this period is well described by Lorentz (2002): the rise of Marxist ideology in Soviet universities and the Academy was one of the main themes of that period.
Local administrators and communist party functionaries interfered with scientists on either ethnic or ideological grounds. As a matter of fact, during the war and during the creation of a new academic system, Mikhlin did not experience the same difficulties as younger Soviet scientists of Jewish origin: for example, he was included in the Soviet delegation to the International Congress of Mathematicians in Edinburgh in 1958. However, Fichera (1994, pp. 56–60), examining the life of Mikhlin, finds it surprisingly similar to the life of Vito Volterra under the fascist regime. He notes that antisemitism in communist countries took different forms from its Nazi counterpart: the communist regime did not aim at the outright murder of Jews, but imposed on them a number of restrictions, sometimes very cruel ones, in order to make their lives difficult. During the period from 1963 to 1981, he met Mikhlin while attending several conferences in the Soviet Union, and realised that he was in a state of isolation, almost marginalized within his own community: Fichera describes several episodes revealing this fact. Perhaps the most illuminating one is the election of Mikhlin as a member of the Accademia Nazionale dei Lincei: in June 1981, Solomon G. Mikhlin was elected Foreign Member of the class of mathematical and physical sciences of the Lincei. He was first proposed as a winner of the Antonio Feltrinelli Prize, but the almost certain confiscation of the prize by the Soviet authorities induced the Lincei members to elect him as a member instead: they decided to honour him in a way that no political authority could take away. However, Mikhlin was not allowed to visit Italy by the Soviet authorities, so Fichera and his wife brought the tiny golden lynx, the symbol of Lincei membership, directly to Mikhlin's apartment in Leningrad on 17 October 1981: the only guests at that "ceremony" were Vladimir Maz'ya and his wife Tatyana Shaposhnikova.
They just have power, but we have theorems. Therefore we are stronger! === Death === According to Fichera (1994, pp. 60–61), who refers to a conversation with Mark Vishik and Olga Oleinik, on 29 August 1990 Mikhlin left home to buy medicines for his wife Eugenia. On public transport, he suffered a fatal stroke. He had no documents with him, and was therefore identified only some time after his death: this may be the cause of the differing death dates reported in several biographies and obituary notices. Fichera also writes that Mikhlin's wife Eugenia survived him by only a few months. == Work == === Research activity === He was the author of monographs and textbooks which became classics for their style. His research was devoted mainly to the following fields. ==== Elasticity theory and boundary value problems ==== In mathematical elasticity theory, Mikhlin was concerned with three themes: the plane problem (mainly from 1932 to 1935), the theory of shells (from 1954) and the Cosserat spectrum (from 1967 to 1973). Dealing with the plane elasticity problem, he proposed two methods for its solution in multiply connected domains. The first one is based upon the so-called complex Green's function and the reduction of the related boundary value problem to integral equations. The second method is a certain generalization of the classical Schwarz algorithm for the solution of the Dirichlet problem in a given domain by splitting it into simpler problems in smaller domains whose union is the original one. Mikhlin studied its convergence and gave applications to special applied problems. He proved existence theorems for the fundamental problems of plane elasticity involving inhomogeneous anisotropic media: these results are collected in the book (Mikhlin 1957). Concerning the theory of shells, there are several articles by Mikhlin dealing with it.
He studied the error of the approximate solution for shells similar to plane plates, and found that this error is small for the so-called purely rotational state of stress. As a result of his study of this problem, Mikhlin also gave a new (invariant) form of the basic equations of the theory. He also proved a theorem on perturbations of positive operators in a Hilbert space which allowed him to obtain an error estimate for the problem of approximating a sloping shell by a plane plate. Mikhlin also studied the spectrum of the operator pencil of the classical linear elastostatic operator or Navier–Cauchy operator A(ω)u = Δ₂u + ω∇(∇ ⋅ u), where u is the displacement vector, Δ₂ is the vector Laplacian, ∇ is the gradient, ∇⋅ is the divergence and ω is a Cosserat eigenvalue. The full description of the spectrum and the proof of the completeness of the system of eigenfunctions are also due to Mikhlin, and partly to V.G. Maz'ya in their only joint work. ==== Singular integrals and Fourier multipliers ==== He is one of the founders of the multi-dimensional theory of singular integrals, jointly with Francesco Tricomi and Georges Giraud, and also one of its main contributors.
By a singular integral we mean an integral operator of the form Au = v(x) = ∫ℝⁿ (f(x, θ)/rⁿ) u(y) dy, where x ∈ ℝⁿ is a point in n-dimensional Euclidean space, r = |y − x|, and θ = (y − x)/r are the hyperspherical coordinates (the polar coordinates or the spherical coordinates, respectively, when n = 2 or n = 3) of the point y with respect to the point x. Such operators are called singular since the singularity of the kernel of the operator is so strong that the integral does not exist in the ordinary sense, but only in the sense of the Cauchy principal value. Mikhlin was the first to develop a theory of singular integral equations as a theory of operator equations in function spaces. In the papers (Mikhlin 1936a) and (Mikhlin 1936b) he found a rule for the composition of double singular integrals (i.e. in 2-dimensional Euclidean space) and introduced the very important notion of the symbol of a singular integral. This enabled him to show that the algebra of bounded singular integral operators is isomorphic to the algebra of either scalar or matrix-valued functions. He proved the Fredholm theorems for singular integral equations and systems of such equations under the hypothesis of non-degeneracy of the symbol: he also proved that the index of a single singular integral equation in Euclidean space is zero. In 1961 Mikhlin developed a theory of multidimensional singular integral equations on Lipschitz spaces.
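The phrase "in the sense of the Cauchy principal value" can be made concrete with a small numerical experiment. The following sketch is illustrative only and is not taken from Mikhlin's work; the function name pv_integral is hypothetical. It computes a one-dimensional principal value by pairing the points x and −x, so that the odd singular part of the integrand cancels exactly:

```python
import math

def pv_integral(f, a, b, n=100000, delta=1e-8):
    """Cauchy principal value of the integral of f over [a, b] for an
    integrand with a single non-integrable singularity at x = 0 (a < 0 < b).
    Symmetric points x and -x are paired so the odd singular part cancels."""
    assert a < 0 < b
    m = min(-a, b)
    # Symmetric core (-m, m): integrate g(x) = f(x) + f(-x) on [delta, m].
    # For an odd singularity (like 1/x) g is bounded, so the trapezoid rule applies.
    h = (m - delta) / n
    g = [f(delta + i * h) + f(-(delta + i * h)) for i in range(n + 1)]
    core = h * (sum(g) - 0.5 * (g[0] + g[-1]))
    # Remaining regular piece, away from the singularity.
    lo, hi = (m, b) if b > m else (a, -m)
    if hi > lo:
        h2 = (hi - lo) / n
        fv = [f(lo + i * h2) for i in range(n + 1)]
        rest = h2 * (sum(fv) - 0.5 * (fv[0] + fv[-1]))
    else:
        rest = 0.0
    return core + rest

# p.v. of 1/x over [-1, 2]: the part on [-1, 1] cancels, leaving ln 2.
print(pv_integral(lambda x: 1.0 / x, -1.0, 2.0))  # ≈ 0.693147 (ln 2)
```

For f(x) = 1/x on [−1, 2] the symmetric part cancels identically, and the principal value reduces to the ordinary integral of 1/x over [1, 2], namely ln 2.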
These spaces are widely used in the theory of one-dimensional singular integral equations; however, the direct extension of the related theory to the multidimensional case meets some technical difficulties, and Mikhlin suggested another approach to this problem. Precisely, he obtained the basic properties of this kind of singular integral equations as a by-product of the Lp-space theory of these equations. Mikhlin also proved a now classical theorem on multipliers of the Fourier transform in Lp-spaces, based on an analogous theorem of Józef Marcinkiewicz on Fourier series. A complete collection of his results in this field up to 1965, as well as the contributions of other mathematicians like Tricomi, Giraud, Calderón and Zygmund, is contained in the monograph (Mikhlin 1965). A synthesis of the theories of singular integrals and linear partial differential operators was accomplished, in the mid 1960s, by the theory of pseudodifferential operators: Joseph J. Kohn, Louis Nirenberg, Lars Hörmander and others carried out this synthesis, but the theory owes its rise to the discoveries of Mikhlin, as is universally acknowledged. This theory has numerous applications to mathematical physics. Mikhlin's multiplier theorem is widely used in different branches of mathematical analysis, particularly in the theory of differential equations. The analysis of Fourier multipliers was later advanced by Lars Hörmander, Walter Littman, Elias Stein, Charles Fefferman and others. ==== Partial differential equations ==== In four papers, published in the period 1940–1942, Mikhlin applied the method of potentials to the mixed problem for the wave equation. In particular, he solved the mixed problem for the wave equation in two space dimensions in the half-plane by reducing it to a planar Abel integral equation.
For plane domains with a sufficiently smooth curvilinear boundary he reduced the problem to an integro-differential equation, which he was also able to solve when the boundary of the given domain is analytic. In 1951 Mikhlin proved the convergence of the Schwarz alternating method for second order elliptic equations. He also applied the methods of functional analysis, at the same time as Mark Vishik but independently of him, to the investigation of boundary value problems for degenerate second order elliptic partial differential equations. ==== Numerical mathematics ==== His work in this field can be divided into several branches: in the following text, four main branches are described, and a sketch of his last researches is also given. The papers within the first branch are summarized in the monograph (Mikhlin 1964), which contains the study of the convergence of variational methods for problems connected with positive operators, in particular, for some problems of mathematical physics. Both "a priori" and "a posteriori" estimates of the errors concerning the approximation given by these methods are proved. The second branch deals with the notion of stability of a numerical process introduced by Mikhlin himself. When applied to the variational method, this notion enabled him to state necessary and sufficient conditions for minimizing errors in the solution of the given problem when the error arising in the numerical construction of the algebraic system resulting from the application of the method itself is sufficiently small, no matter how large the system's order is. The third branch is the study of variational-difference and finite element methods. Mikhlin studied the completeness of the coordinate functions used in these methods in the Sobolev space W1,p, deriving the order of approximation as a function of the smoothness properties of the functions to be approximated.
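The Schwarz alternating method mentioned above can be illustrated on a toy one-dimensional problem: u'' = 0 on (0, 1) with u(0) = 0 and u(1) = 1, split into the overlapping subdomains (0, 0.6) and (0.4, 1). Since every solution of u'' = 0 with Dirichlet data is a linear function, each subdomain solve is just linear interpolation, and only the values at the two interface points need to be tracked. This is an illustrative sketch under those assumptions, not code from any of the works cited:

```python
def schwarz_1d(a=0.4, b=0.6, left_bc=0.0, right_bc=1.0, iters=50):
    """Alternating Schwarz iteration for u'' = 0 on (0, 1) with
    u(0) = left_bc, u(1) = right_bc, on overlapping subdomains (0, b), (a, 1).
    Each Dirichlet subdomain solve is linear, so only the values at the
    interface points x = a and x = b are tracked."""
    u_a, u_b = 0.0, 0.0  # initial guess at the interface points
    for _ in range(iters):
        # Solve on (0, b): linear from (0, left_bc) to (b, u_b); evaluate at x = a.
        u_a = left_bc + (u_b - left_bc) * (a / b)
        # Solve on (a, 1): linear from (a, u_a) to (1, right_bc); evaluate at x = b.
        u_b = u_a + (right_bc - u_a) * ((b - a) / (1.0 - a))
    return u_a, u_b
```

The iterates contract geometrically (with factor 4/9 for this overlap) toward the interface values 0.4 and 0.6 of the exact solution u(x) = x.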
He also characterized the class of coordinate functions which give the best order of approximation, and studied the stability of the variational-difference process and the growth of the condition number of the variational-difference matrix. Mikhlin also studied the finite element approximation in weighted Sobolev spaces related to the numerical solution of degenerate elliptic equations. He found the optimal order of approximation for some methods of solution of variational inequalities. The fourth branch of his research in numerical mathematics is a method for the solution of Fredholm integral equations which he called the resolvent method: its essence relies on the possibility of substituting the kernel of the integral operator by its variational-difference approximation, so that the resolvent of the new kernel can be expressed by simple recurrence relations. This eliminates the need to construct and solve large systems of equations. During his last years, Mikhlin contributed to the theory of errors in numerical processes, proposing the following classification of errors. Approximation error: the error due to the replacement of an exact problem by an approximating one. Perturbation error: the error due to inaccuracies in the computation of the data of the approximating problem. Algorithm error: the intrinsic error of the algorithm used for the solution of the approximating problem. Rounding error: the error due to the limits of computer arithmetic. This classification is useful since it enables one to develop computational methods adjusted in order to diminish the errors of each particular type, following the divide et impera (divide and rule) principle. === Teaching activity === He was the doctoral advisor of Tatyana O. Shaposhnikova. He was also a mentor and friend of Vladimir Maz'ya: he was never his official supervisor, but his friendship with the young undergraduate Maz'ya had a great influence on shaping his mathematical style.
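Two entries of this classification, the approximation error and the perturbation error, can be observed in a toy experiment with the composite trapezoid rule for ∫₀¹ sin x dx = 1 − cos 1. This is an illustrative sketch, not an example from Mikhlin's book, and the variable names are made up:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule for the integral of f over [a, b] with n panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

exact = 1.0 - math.cos(1.0)  # integral of sin over [0, 1]

# Approximation error: the exact integral is replaced by the trapezoid sum;
# this error shrinks like h^2 as the grid is refined.
e10 = abs(trapezoid(math.sin, 0.0, 1.0, 10) - exact)
e100 = abs(trapezoid(math.sin, 0.0, 1.0, 100) - exact)

# Perturbation error: a systematic 1e-6 inaccuracy in the data puts a floor
# under the total error that no amount of grid refinement can remove.
e_pert = abs(trapezoid(lambda x: math.sin(x) + 1e-6, 0.0, 1.0, 10000) - exact)
```

Refining the grid shrinks the approximation error quadratically, while the 10⁻⁶ perturbation of the data leaves a 10⁻⁶ error floor; separating the two sources is exactly what the classification above is for.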
== Selected publications == === Books === Mikhlin, S.G. (1957), Integral equations and their applications to certain problems in mechanics, mathematical physics and technology, International Series of Monographs in Pure and Applied Mathematics, vol. 5, Oxford–London–Edinburgh–New York–Paris–Frankfurt: Pergamon Press, pp. XII+338, Zbl 0077.09903. Mikhlin's book summarizing his results on the plane elasticity problem: according to Fichera (1994, pp. 55–56) this is a widely known monograph in the theory of integral equations. Mikhlin, S.G. (1964), Variational methods in mathematical physics, International Series of Monographs in Pure and Applied Mathematics, vol. 50, Oxford–London–Edinburgh–New York–Paris–Frankfurt: Pergamon Press, pp. XXXII+584, Zbl 0119.19002. Mikhlin, S.G. (1965), Multidimensional singular integrals and integral equations, International Series of Monographs in Pure and Applied Mathematics, vol. 83, Oxford–London–Edinburgh–New York–Paris–Frankfurt: Pergamon Press, pp. XII+255, MR 0185399, Zbl 0129.07701. A masterpiece in the multidimensional theory of singular integrals and singular integral equations summarizing all the results from the beginning to the year of publication, and also sketching the history of the subject. Mikhlin, Solomon G.; Prössdorf, Siegfried (1986), Singular Integral Operators, Berlin–Heidelberg–New York: Springer Verlag, p. 528, ISBN 978-3-540-15967-4, MR 0867687, Zbl 0612.47024. Mikhlin, S.G. (1991), Error analysis in numerical processes, Pure and Applied Mathematics. A Wiley-Interscience Series of Text Monographs & Tracts, vol. 1237, Chichester: John Wiley & Sons, p. 283, ISBN 978-0-471-92133-2, MR 1129889, Zbl 0786.65038. This book summarizes the contributions of Mikhlin and of the former Soviet school of numerical analysis to the problem of error analysis in numerical solutions of various kind of equations: it was also reviewed by Stummel (1993, pp. 204–206) for the Bulletin of the American Mathematical Society.
Mikhlin, Solomon G.; Morozov, Nikita Fedorovich; Paukshto, Michael V. (1995), The integral equations of the theory of elasticity, Teubner-Texte zur Mathematik, vol. 135, Leipzig: Teubner Verlag, p. 375, doi:10.1007/978-3-663-11626-4, ISBN 3-8154-2060-1, MR 1314625, Zbl 0817.45004. === Papers === Michlin, S.G. (1932), "Sur la convergence uniforme des séries de fonctions analytiques", Matematicheskii Sbornik (in French), 39 (3): 88–96, JFM 58.0302.03, Zbl 0006.31701. Mikhlin, Solomon G. (1936a), "Équations intégrales singulières à deux variables indépendantes", Recueil Mathématique (Matematicheskii Sbornik), New Series (in Russian), 1(43) (4): 535–552, Zbl 0016.02902. The paper, with French title and abstract, where Solomon Mikhlin introduces the symbol of a singular integral operator as a means to calculate the composition of such operators and to solve singular integral equations: the integral operators considered here are defined by integration over the whole n-dimensional (for n = 2) Euclidean space. Mikhlin, Solomon G. (1936b), "Complément à l'article "Équations intégrales singulières à deux variables indépendantes"", Recueil Mathématique (Matematicheskii Sbornik), New Series (in Russian), 1(43) (6): 963–964, JFM 62.1251.02. In this paper, with French title and abstract, Solomon Mikhlin extends the definition of the symbol of a singular integral operator introduced in the earlier paper (Mikhlin 1936a) to integral operators defined by integration over a (n − 1)-dimensional closed manifold (for n = 3) in n-dimensional Euclidean space. Mikhlin, Solomon G. (1948), "Singular integral equations", Uspekhi Matematicheskikh Nauk (in Russian), 3 (25): 29–112, MR 0027429. Mikhlin, S.G. (1951), "On the Schwarz algorithm", Doklady Akademii Nauk SSSR, novaya Seriya (in Russian), 77: 569–571, Zbl 0054.04204. Mikhlin, Solomon G. 
(1952a), "An estimate of the error of approximating elastic shells by plane plates", Prikladnaya Matematika i Mekhanika (in Russian), 16 (4): 399–418, Zbl 0048.42304. Mikhlin, Solomon G. (1952b), "A theorem in operator theory and its application to the theory of elastic shells", Doklady Akademii Nauk SSSR, novaya Seriya (in Russian), 84: 909–912, Zbl 0048.42401. Mikhlin, Solomon G. (1956a), "The theory of multidimensional singular integral equations", Vestnik Leningradskogo Universiteta, Seriya Matematika, Mekhanika, Astronomija (in Russian), 11 (1): 3–24, Zbl 0075.11402. Mikhlin, Solomon G. (1956b), "On the multipliers of Fourier integrals", Doklady Akademii Nauk SSSR, New Series (in Russian), 109: 701–703, Zbl 0073.08402. Mikhlin, Solomon G. (1966), "On Cosserat functions", Probl. Mat. Analiza, kraevye Zadachi integral'nye Uravenya (in Russian), Leningrad, pp. 59–69, Zbl 0166.37505. Mikhlin, Solomon G. (1973), "The spectrum of a family of operators in the theory of elasticity", Uspekhi Matematicheskikh Nauk (in Russian), 28 (3(171)): 43–82, MR 0415422, Zbl 0291.35065. Mikhlin, S.G. (1974), "On a method for the approximate solution of integral equations", Vestn. Leningr. Univ., Ser. Mat. Mekh. Astron. (in Russian), 13 (3): 26–33, Zbl 0308.45014. == See also == Linear elasticity Mikhlin multiplier theorem Multiplier (Fourier analysis) Singular integrals Singular integral equations == Notes == == References == === Biographical and general references === Aleksandrov, P. S.; Kurosh, A. G. (1959), "International Congress of Mathematicians in Edinburg", Uspekhi Matematicheskikh Nauk (in Russian), 14 (1(142)): 249–253. Babich, Vasilii Mikhailovich; Bakelman, Ilya Yakovlevich; Koshelev, Alexander Ivanovich; Maz'ya, Vladimir Gilelevich (1968), "Solomon Grigor'evich Mikhlin (on the sixtieth anniversary of his birth)", Uspekhi Matematicheskikh Nauk (in Russian), 23 (4(142)): 269–272, MR 0228313, Zbl 0157.01202. 
Bakelman, Ilya Yakovlevich; Birman, Mikhail Shlemovich; Ladyzhenskaya, Olga Aleksandrovna (1958), "Solomon Grigor'evich Mikhlin (on the fiftieth anniversary of his birth)", Uspekhi Matematicheskikh Nauk (in Russian), 13 (5(83)): 215–221, Zbl 0085.00701. Dem'yanovich, Yuri Kazimirovich; Il'in, Valentin Petrovich; Koshelev, Alexander Ivanovich; Oleinik, Olga Arsen'evna; Sobolev, Sergei L'vovich (1988), "Solomon Grigor'evich Mikhlin (on his eightieth birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 43 (4(262)): 239–240, Bibcode:1988RuMaS..43..249D, doi:10.1070/RM1988v043n04ABEH001906, MR 0228313, S2CID 250917521, Zbl 0157.01202. Fichera, Gaetano (1994), "Solomon G. Mikhlin (1908–1990)", Atti della Accademia Nazionale dei Lincei, Rendiconti Lincei, Matematica e Applicazioni, Serie XI (in Italian), 5 (1): 49–61, Zbl 0852.01034. A detailed commemorative paper, referencing the works of Bakelman, Birman & Ladyzhenskaya (1958), Babich et al. (1968) and Dem'yanovich et al. (1988) for the bibliographical details. Fichera, G.; Maz'ya, V. (1978), "In honor of professor Solomon G. Mikhlin on the occasion of his seventieth birthday", Applicable Analysis, 7 (3): 167–170, doi:10.1080/00036817808839188, Zbl 0378.01018. A short survey of the work of Mikhlin by a friend and pupil: not as complete as the commemorative paper (Fichera 1994), but very useful for the English-speaking reader. Kantorovich, Leonid Vital'evich; Koshelev, Alexander Ivanovich; Oleinik, Olga Arsen'evna; Sobolev, Sergei L'vovich (1978), "Solomon Grigor'evich Mikhlin (on his seventieth birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 33 (2(200)): 213–216, Bibcode:1978RuMaS..33..209K, doi:10.1070/RM1978v033n02ABEH002313, MR 0495520, S2CID 250776686, Zbl 0378.01017. Lorentz, G.G. (2002), "Mathematics and politics in the Soviet Union from 1928 to 1953", Journal of Approximation Theory, 116 (2): 169–223, doi:10.1006/jath.2002.3670, MR 1911079, Zbl 1006.01009. 
See also the final version available from the "George Lorentz" section of the Approximation Theory web page at the Mathematics Department of the Ohio State University (retrieved on 25 October 2009). Maz'ya, Vladimir (2000), "In memory of Gaetano Fichera" (PDF), in Ricci, Paolo Emilio (ed.), Problemi attuali dell'analisi e della fisica matematica. Atti del II simposio internazionale (Taormina, 15–17 ottobre 1998). Dedicato alla memoria del Prof. Gaetano Fichera., Roma: Aracne Editrice, pp. 1–4, Zbl 0977.01027. Some vivid recollections about Gaetano Fichera by his colleague and friend Vladimir Gilelevich Maz'ya: there is a short description of the "ceremony" for the election of Mikhlin as a foreign member of the Accademia Nazionale dei Lincei. Maz'ya, Vladimir G. (2014), Differential equations of my young years, Basel: Birkhäuser Verlag, pp. xiii+191, ISBN 978-3-319-01808-9, MR 3288312, Zbl 1303.01002. Solomon Grigor'evich Mikhlin's entry at the Russian Wikipedia, retrieved 28 May 2010. Mikhlin, Solomon G. (7 September 1968), ЛИЧНЫЙ ЛИСТОК ПО УЧЕТУ КАДРОВ [Formation record list] (in Russian), USSR, pp. 1–5. An official resume written by Mikhlin himself to be used by the public authorities in the former Soviet Union: it contains very useful (if not unique) information about his early career and education. === Scientific references === Bochner, Salomon (1 December 1951), "Theta Relations with Spherical Harmonics", PNAS, 37 (12): 804–808, Bibcode:1951PNAS...37..804B, doi:10.1073/pnas.37.12.804, PMC 1063475, PMID 16589032, Zbl 0044.07501. Kozhevnikov, Alexander (1999), "A history of the Cosserat spectrum", in Rossman, Jürgen; Takáč, Peter; Günther, Wildenhain (eds.), The Maz'ya anniversary collection. Vol. 1: On Maz'ya's work in functional analysis, partial differential equations and applications. Based on talks given at the conference, Rostock, Germany, August 31 – September 4, 1998, Operator Theory. Advances and Applications, vol. 109, Basel: Birkhäuser Verlag, pp. 
223–234, ISBN 978-3-7643-6201-0, Zbl 0936.35118. Stummel, F. (1993), "Review: Error analysis in numerical processes, by Solomon G. Mikhlin", Bulletin of the American Mathematical Society, 28 (1): 204–206, doi:10.1090/s0273-0979-1993-00357-4. == External links == Maz'ya, Vladimir G.; Shaposhnikova, Tatyana O.; Tampieri, Daniele (March 2011), "Solomon Grigoryevich Mikhlin", in O'Connor, John J.; Robertson, Edmund F. (eds.), MacTutor History of Mathematics Archive, University of St Andrews Solomon G. Mikhlin at the Mathematics Genealogy Project. St. Petersburg Mathematical Society (2006), Solomon Grigor'evich Mikhlin, retrieved 13 November 2009. Memorial page at the St. Petersburg Mathematical Pantheon.
Wikipedia:Solution in radicals#0
A solution in radicals or algebraic solution is an expression of a solution of a polynomial equation that is algebraic, that is, relies only on addition, subtraction, multiplication, division, raising to integer powers, and extraction of nth roots (square roots, cube roots, etc.). A well-known example is the quadratic formula x = − b ± b 2 − 4 a c 2 a , {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac\ }}}{2a}},} which expresses the solutions of the quadratic equation a x 2 + b x + c = 0. {\displaystyle ax^{2}+bx+c=0.} There exist algebraic solutions for cubic equations and quartic equations, which are more complicated than the quadratic formula. The Abel–Ruffini theorem and, more generally, Galois theory state that some quintic equations, such as x 5 − x + 1 = 0 , {\displaystyle x^{5}-x+1=0,} do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation x 10 = 2 {\displaystyle x^{10}=2} can be solved as x = ± 2 10 . {\displaystyle x=\pm {\sqrt[{10}]{2}}.} The eight other solutions are nonreal complex numbers, which are also algebraic and have the form x = ± r 2 10 , {\displaystyle x=\pm r{\sqrt[{10}]{2}},} where r is a fifth root of unity, which can be expressed with two nested square roots. See also Quintic function § Other solvable quintics for various other examples in degree 5. Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result. == See also == Radical symbol Solvable quintics Solvable sextics Solvable septics == References ==
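The description of the solutions of x10 = 2 can be checked numerically. The sketch below is an added illustration, not part of the article, and is a floating-point check rather than an algebraic proof: it verifies that the ten complex roots are exactly ±r·2^(1/10) with r a fifth root of unity.

```python
import cmath

# The ten solutions of x**10 == 2 are 2**(1/10) times the tenth roots of
# unity; each equals +/- r * 2**(1/10) for some fifth root of unity r.
tenth_root = 2 ** 0.1
roots = [tenth_root * cmath.exp(2j * cmath.pi * k / 10) for k in range(10)]

# Each root satisfies the original equation up to rounding error.
assert all(abs(x ** 10 - 2) < 1e-9 for x in roots)

# Each root matches s * r * 2**(1/10) with s = +/-1 and r a fifth root of unity.
fifth_roots = [cmath.exp(2j * cmath.pi * m / 5) for m in range(5)]
for x in roots:
    assert min(abs(x - s * r * tenth_root)
               for s in (1, -1) for r in fifth_roots) < 1e-12
```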
Wikipedia:Solving quadratic equations with continued fractions#0
In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is a x 2 + b x + c = 0 , {\displaystyle ax^{2}+bx+c=0,} where a ≠ 0. The quadratic equation on a number x {\displaystyle x} can be solved using the well-known quadratic formula, which can be derived by completing the square. That formula always gives the roots of the quadratic equation, but the solutions are expressed in a form that often involves a quadratic irrational number, which is an algebraic fraction that can be evaluated as a decimal fraction only by applying an additional root extraction algorithm. If the roots are real, there is an alternative technique that obtains a rational approximation to one of the roots by manipulating the equation directly. The method works in many cases, and long ago it stimulated further development of the analytical theory of continued fractions. == Simple example == Here is a simple example to illustrate the solution of a quadratic equation using continued fractions. We begin with the equation x 2 = 2 {\displaystyle x^{2}=2} and manipulate it directly. Subtracting one from both sides we obtain x 2 − 1 = 1. {\displaystyle x^{2}-1=1.} This is easily factored into ( x + 1 ) ( x − 1 ) = 1 {\displaystyle (x+1)(x-1)=1} from which we obtain ( x − 1 ) = 1 1 + x {\displaystyle (x-1)={\frac {1}{1+x}}} and finally x = 1 + 1 1 + x . {\displaystyle x=1+{\frac {1}{1+x}}.} Now comes the crucial step. We substitute this expression for x back into itself, recursively, to obtain x = 1 + 1 1 + ( 1 + 1 1 + x ) = 1 + 1 2 + 1 1 + x . {\displaystyle x=1+{\cfrac {1}{1+\left(1+{\cfrac {1}{1+x}}\right)}}=1+{\cfrac {1}{2+{\cfrac {1}{1+x}}}}.} But now we can make the same recursive substitution again, and again, and again, pushing the unknown quantity x as far down and to the right as we please, and obtaining in the limit the infinite simple continued fraction x = 1 + 1 2 + 1 2 + 1 2 + 1 2 + 1 2 + ⋱ = 2 . 
{\displaystyle x=1+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+{\cfrac {1}{2+\ddots }}}}}}}}}}={\sqrt {2}}.} By applying the fundamental recurrence formulas we may easily compute the successive convergents of this continued fraction to be 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, ..., where each successive convergent is formed by taking the numerator plus the denominator of the preceding term as the denominator in the next term, then adding in the preceding denominator to form the new numerator. This sequence of denominators is a particular Lucas sequence known as the Pell numbers. == Algebraic explanation == We can gain further insight into this simple example by considering the successive powers of ω = 2 − 1. {\displaystyle \omega ={\sqrt {2}}-1.} That sequence of successive powers is given by ω 2 = 3 − 2 2 , ω 3 = 5 2 − 7 , ω 4 = 17 − 12 2 , ω 5 = 29 2 − 41 , ω 6 = 99 − 70 2 , ω 7 = 169 2 − 239 , {\displaystyle {\begin{aligned}\omega ^{2}&=3-2{\sqrt {2}},&\omega ^{3}&=5{\sqrt {2}}-7,&\omega ^{4}&=17-12{\sqrt {2}},\\\omega ^{5}&=29{\sqrt {2}}-41,&\omega ^{6}&=99-70{\sqrt {2}},&\omega ^{7}&=169{\sqrt {2}}-239,\,\end{aligned}}} and so forth. Notice how the fractions derived as successive approximants to √2 appear in this geometric progression. Since 0 < ω < 1, the sequence {ωn} clearly tends toward zero, by well-known properties of the positive real numbers. This fact can be used to prove, rigorously, that the convergents discussed in the simple example above do in fact converge to √2, in the limit. We can also find these numerators and denominators appearing in the successive powers of ω − 1 = 2 + 1. {\displaystyle \omega ^{-1}={\sqrt {2}}+1.} The sequence of successive powers {ω−n} does not approach zero; it grows without limit instead. But it can still be used to obtain the convergents in our simple example. 
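The rule for building each convergent from its predecessor is an instance of the fundamental recurrence relations, which for the continued fraction [1; 2, 2, 2, ...] reduce to h_n = 2h_{n−1} + h_{n−2} for numerators and k_n = 2k_{n−1} + k_{n−2} for denominators. A short sketch (an added illustration, not from the article):

```python
from fractions import Fraction
import math

# Fundamental recurrence for the convergents of [1; 2, 2, 2, ...]:
# both numerators h_n and denominators k_n satisfy x_n = 2*x_{n-1} + x_{n-2}.
h = [1, 3]   # numerators:   1, 3, 7, 17, 41, 99, 239, ...
k = [1, 2]   # denominators: 1, 2, 5, 12, 29, 70, 169, ... (the Pell numbers)
for _ in range(20):
    h.append(2 * h[-1] + h[-2])
    k.append(2 * k[-1] + k[-2])

convergents = [Fraction(a, b) for a, b in zip(h, k)]
assert convergents[:4] == [Fraction(1), Fraction(3, 2),
                           Fraction(7, 5), Fraction(17, 12)]

# The convergents approach sqrt(2).
assert abs(float(convergents[-1]) - math.sqrt(2)) < 1e-12
```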
Notice also that the set obtained by forming all the combinations a + b√2, where a and b are integers, is an example of an object known in abstract algebra as a ring, and more specifically as an integral domain. The number ω is a unit in that integral domain. See also algebraic number field. == General quadratic equation == Continued fractions are most conveniently applied to solve the general quadratic equation expressed in the form of a monic polynomial x 2 + b x + c = 0 {\displaystyle x^{2}+bx+c=0} which can always be obtained by dividing the original equation by its leading coefficient. Starting from this monic equation we see that x 2 + b x = − c x + b = − c x x = − b − c x {\displaystyle {\begin{aligned}x^{2}+bx&=-c\\x+b&={\frac {-c}{x}}\\x&=-b-{\frac {c}{x}}\,\end{aligned}}} But now we can apply the last equation to itself recursively to obtain x = − b − c − b − c − b − c − b − c − b − ⋱ {\displaystyle x=-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-\ddots \,}}}}}}}}} If this infinite continued fraction converges at all, it must converge to one of the roots of the monic polynomial x2 + bx + c = 0. Unfortunately, this particular continued fraction does not converge to a finite number in every case. We can easily see that this is so by considering the quadratic formula and a monic polynomial with real coefficients. If the discriminant of such a polynomial is negative, then both roots of the quadratic equation have imaginary parts. In particular, if b and c are real numbers and b2 − 4c < 0, all the convergents of this continued fraction "solution" will be real numbers, and they cannot possibly converge to a root of the form u + iv (where v ≠ 0), which does not lie on the real number line. 
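The convergence behaviour just described can be observed directly by iterating the tail of the continued fraction, x ← −b − c/x. The sketch below is an added illustration; the coefficients b = −3, c = 2 are chosen arbitrarily so that the roots are 1 and 2 and the discriminant is positive.

```python
# Iterate the tail of  x = -b - c/(-b - c/(...))  for the monic quadratic
# x**2 + b*x + c = 0.  With b = -3, c = 2 the roots are 1 and 2; the
# discriminant b**2 - 4*c = 1 is positive, so the iteration converges to
# the root of larger absolute value.
b, c = -3.0, 2.0
x = -b                      # first convergent (fraction truncated after -b)
for _ in range(60):
    x = -b - c / x

assert abs(x - 2.0) < 1e-12            # the larger root, not 1.0
assert abs(x * x + b * x + c) < 1e-12  # it satisfies the quadratic
```

With a negative discriminant the same real iteration never settles, matching the divergence-by-oscillation described above.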
== General theorem == By applying a result obtained by Euler in 1748 it can be shown that the continued fraction solution to the general monic quadratic equation with real coefficients x 2 + b x + c = 0 {\displaystyle x^{2}+bx+c=0} given by x = − b − c − b − c − b − c − b − c − b − ⋱ {\displaystyle x=-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-\ddots \,}}}}}}}}} either converges or diverges depending on both the coefficient b and the value of the discriminant, b2 − 4c. If b = 0 the general continued fraction solution is totally divergent; the convergents alternate between 0 and ∞ {\displaystyle \infty } . If b ≠ 0 we distinguish three cases. If the discriminant is negative, the fraction diverges by oscillation, which means that its convergents wander around in a regular or even chaotic fashion, never approaching a finite limit. If the discriminant is zero the fraction converges to the single root of multiplicity two. If the discriminant is positive the equation has two real roots, and the continued fraction converges to the larger (in absolute value) of these. The rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges. When the monic quadratic equation with real coefficients is of the form x2 = c, the general solution described above is useless because division by zero is not well defined. As long as c is positive, though, it is always possible to transform the equation by subtracting a perfect square from both sides and proceeding along the lines illustrated with √2 above. In symbols, if x 2 = c ( c > 0 ) {\displaystyle x^{2}=c\qquad (c>0)} just choose some positive real number p such that p 2 < c . 
{\displaystyle p^{2}<c.} Then by direct manipulation we obtain x 2 − p 2 = c − p 2 ( x + p ) ( x − p ) = c − p 2 x − p = c − p 2 p + x x = p + c − p 2 p + x = p + c − p 2 p + ( p + c − p 2 p + x ) = p + c − p 2 2 p + c − p 2 2 p + c − p 2 2 p + ⋱ {\displaystyle {\begin{aligned}x^{2}-p^{2}&=c-p^{2}\\(x+p)(x-p)&=c-p^{2}\\x-p&={\frac {c-p^{2}}{p+x}}\\x&=p+{\frac {c-p^{2}}{p+x}}\\&=p+{\cfrac {c-p^{2}}{p+\left(p+{\cfrac {c-p^{2}}{p+x}}\right)}}&=p+{\cfrac {c-p^{2}}{2p+{\cfrac {c-p^{2}}{2p+{\cfrac {c-p^{2}}{2p+\ddots \,}}}}}}\,\end{aligned}}} and this transformed continued fraction must converge because all the partial numerators and partial denominators are positive real numbers. == Complex coefficients == By the fundamental theorem of algebra, if the monic polynomial equation x2 + bx + c = 0 has complex coefficients, it must have two (not necessarily distinct) complex roots. Unfortunately, the discriminant b2 − 4c is not as useful in this situation, because it may be a complex number. Still, a modified version of the general theorem can be proved. The continued fraction solution to the general monic quadratic equation with complex coefficients x 2 + b x + c = 0 ( b ≠ 0 ) {\displaystyle x^{2}+bx+c=0\qquad (b\neq 0)} given by x = − b − c − b − c − b − c − b − c − b − ⋱ {\displaystyle x=-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-{\cfrac {c}{-b-\ddots \,}}}}}}}}} converges or not depending on the value of the discriminant, b2 − 4c, and on the relative magnitude of its two roots. Denoting the two roots by r1 and r2 we distinguish three cases. If the discriminant is zero the fraction converges to the single root of multiplicity two. If the discriminant is not zero, and |r1| ≠ |r2|, the continued fraction converges to the root of maximum modulus (i.e., to the root with the greater absolute value). If the discriminant is not zero, and |r1| = |r2|, the continued fraction diverges by oscillation. 
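The transformed fraction can be evaluated by iterating its tail, x ← p + (c − p²)/(p + x). A short sketch (an added illustration; the values of c and p are arbitrary choices satisfying p² < c):

```python
import math

# For x**2 = c with c > 0, pick any p with 0 < p and p**2 < c, then iterate
# the tail of  x = p + (c - p**2)/(p + x);  repeated substitution gives the
# continued fraction  p + (c - p**2)/(2p + (c - p**2)/(2p + ...)).
def cf_sqrt(c, p, n_terms=60):
    x = p
    for _ in range(n_terms):
        x = p + (c - p * p) / (p + x)
    return x

assert abs(cf_sqrt(2.0, 1.0) - math.sqrt(2)) < 1e-12
assert abs(cf_sqrt(5.0, 2.0) - math.sqrt(5)) < 1e-12
# A cruder choice of p still converges, just more slowly.
assert abs(cf_sqrt(5.0, 1.0, 200) - math.sqrt(5)) < 1e-9
```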
In case 2, the rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges. This general solution of monic quadratic equations with complex coefficients is usually not very useful for obtaining rational approximations to the roots, because the criteria are circular (that is, the relative magnitudes of the two roots must be known before we can conclude that the fraction converges, in most cases). But this solution does find useful applications in the further analysis of the convergence problem for continued fractions with complex elements. == See also == Lucas sequence Methods of computing square roots Pell's equation == References == H. S. Wall, Analytic Theory of Continued Fractions, D. Van Nostrand Company, Inc., 1948 ISBN 0-8284-0207-8
Wikipedia:Sommerfeld identity#0
The Sommerfeld identity is a mathematical identity, due to Arnold Sommerfeld, used in the theory of propagation of waves, e i k R R = ∫ 0 ∞ I 0 ( λ r ) e − μ | z | λ d λ μ {\displaystyle {\frac {e^{ikR}}{R}}=\int \limits _{0}^{\infty }I_{0}(\lambda r)e^{-\mu \left|z\right|}{\frac {\lambda d\lambda }{\mu }}} where μ = λ 2 − k 2 {\displaystyle \mu ={\sqrt {\lambda ^{2}-k^{2}}}} is to be taken with positive real part, to ensure the convergence of the integral and its vanishing in the limit z → ± ∞ {\displaystyle z\rightarrow \pm \infty } and R 2 = r 2 + z 2 {\displaystyle R^{2}=r^{2}+z^{2}} . Here, R {\displaystyle R} is the distance from the origin while r {\displaystyle r} is the distance from the central axis of a cylinder as in the ( r , ϕ , z ) {\displaystyle (r,\phi ,z)} cylindrical coordinate system. Here the notation for Bessel functions follows the German convention, to be consistent with the original notation used by Sommerfeld. The function I 0 ( z ) {\displaystyle I_{0}(z)} is the zeroth-order Bessel function of the first kind, better known by the notation I 0 ( z ) = J 0 ( i z ) {\displaystyle I_{0}(z)=J_{0}(iz)} in English literature. This identity is known as the Sommerfeld identity. In alternative notation, the Sommerfeld identity can be more easily seen as an expansion of a spherical wave in terms of cylindrically-symmetric waves: e i k 0 r r = i ∫ 0 ∞ d k ρ k ρ k z J 0 ( k ρ ρ ) e i k z | z | {\displaystyle {\frac {e^{ik_{0}r}}{r}}=i\int \limits _{0}^{\infty }{dk_{\rho }{\frac {k_{\rho }}{k_{z}}}J_{0}(k_{\rho }\rho )e^{ik_{z}\left|z\right|}}} where k z = ( k 0 2 − k ρ 2 ) 1 / 2 {\displaystyle k_{z}=(k_{0}^{2}-k_{\rho }^{2})^{1/2}} The notation used here is different from that above: r {\displaystyle r} is now the distance from the origin and ρ {\displaystyle \rho } is the radial distance in a cylindrical coordinate system defined as ( ρ , ϕ , z ) {\displaystyle (\rho ,\phi ,z)} . 
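The second form of the identity lends itself to a direct numerical check. The sketch below is an added illustration assuming SciPy; the parameters k0 = 1, ρ = 0.5, z = 1 are arbitrary. It splits the integration at the branch point k_ρ = k0, where 1/k_z has an integrable singularity, and truncates the exponentially decaying evanescent tail.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Check  e^{i k0 r}/r = i * Int_0^inf (k_rho/k_z) J0(k_rho*rho) e^{i k_z |z|} dk_rho
# with k_z = sqrt(k0**2 - k_rho**2) on the branch with non-negative imaginary
# part, so that the integrand decays for k_rho > k0 (evanescent waves).
k0, rho, z = 1.0, 0.5, 1.0
r = np.hypot(rho, z)
lhs = np.exp(1j * k0 * r) / r

def integrand(k_rho):
    k_z = np.sqrt(complex(k0**2 - k_rho**2))  # principal sqrt: Im(k_z) >= 0
    return 1j * (k_rho / k_z) * j0(k_rho * rho) * np.exp(1j * k_z * abs(z))

def complex_quad(f, a, b):
    re, _ = quad(lambda t: f(t).real, a, b)
    im, _ = quad(lambda t: f(t).imag, a, b)
    return re + 1j * im

# Split at the branch point k_rho = k0 (integrable 1/sqrt singularity); for
# z != 0 the tail decays exponentially, so a cutoff of 40 is ample here.
rhs = complex_quad(integrand, 0.0, k0) + complex_quad(integrand, k0, 40.0)
assert abs(lhs - rhs) < 1e-5
```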
The physical interpretation is that a spherical wave can be expanded into a summation of cylindrical waves in ρ {\displaystyle \rho } direction, multiplied by a two-sided plane wave in the z {\displaystyle z} direction; see the Jacobi-Anger expansion. The summation has to be taken over all the wavenumbers k ρ {\displaystyle k_{\rho }} . The Sommerfeld identity is closely related to the two-dimensional Fourier transform with cylindrical symmetry, i.e., the Hankel transform. It is found by transforming the spherical wave along the in-plane coordinates ( x {\displaystyle x} , y {\displaystyle y} , or ρ {\displaystyle \rho } , ϕ {\displaystyle \phi } ) but not transforming along the height coordinate z {\displaystyle z} . == Notes == == References == Sommerfeld, Arnold (1964). Partial Differential Equations in Physics. New York: Academic Press. ISBN 9780126546583. Chew, Weng Cho (1990). Waves and Fields in Inhomogeneous Media. New York: Van Nostrand Reinhold. ISBN 9780780347496.
Wikipedia:Somos' quadratic recurrence constant#0
In mathematical analysis and number theory, Somos' quadratic recurrence constant or simply Somos' constant is a constant defined as an expression of infinitely many nested square roots. It arises when studying the asymptotic behaviour of a certain sequence and also in connection to the binary representations of real numbers between zero and one. The constant is named after Michael Somos. It is defined by: σ = 1 2 3 4 5 ⋯ {\displaystyle \sigma ={\sqrt {1{\sqrt {2{\sqrt {3{\sqrt {4{\sqrt {5\cdots }}}}}}}}}}} which gives a numerical value of approximately: σ = 1.661687949633594121295 … {\displaystyle \sigma =1.661687949633594121295\dots \;} (sequence A112302 in the OEIS). == Sums and products == Somos' constant can be alternatively defined via the following infinite product: σ = ∏ k = 1 ∞ k 1 / 2 k = 1 1 / 2 2 1 / 4 3 1 / 8 4 1 / 16 … {\displaystyle \sigma =\prod _{k=1}^{\infty }k^{1/2^{k}}=1^{1/2}\;2^{1/4}\;3^{1/8}\;4^{1/16}\dots } This can be easily rewritten into the far more quickly converging product representation σ = ( 2 1 ) 1 / 2 ( 3 2 ) 1 / 4 ( 4 3 ) 1 / 8 ( 5 4 ) 1 / 16 … {\displaystyle \sigma =\left({\frac {2}{1}}\right)^{1/2}\left({\frac {3}{2}}\right)^{1/4}\left({\frac {4}{3}}\right)^{1/8}\left({\frac {5}{4}}\right)^{1/16}\dots } which can then be compactly represented in infinite product form by: σ = ∏ k = 1 ∞ ( 1 + 1 k ) 1 / 2 k {\displaystyle \sigma =\prod _{k=1}^{\infty }\left(1+{\frac {1}{k}}\right)^{1/2^{k}}} Another product representation is given by: σ = ∏ n = 1 ∞ ∏ k = 0 n ( k + 1 ) ( − 1 ) k + n ( n k ) {\displaystyle \sigma =\prod _{n=1}^{\infty }\prod _{k=0}^{n}(k+1)^{(-1)^{k+n}{\binom {n}{k}}}} Expressions for ln ⁡ σ {\displaystyle \ln \sigma } (sequence A114124 in the OEIS) include: ln ⁡ σ = ∑ k = 1 ∞ ln ⁡ k 2 k {\displaystyle \ln \sigma =\sum _{k=1}^{\infty }{\frac {\ln k}{2^{k}}}} ln ⁡ σ = ∑ k = 1 ∞ ( − 1 ) k + 1 k Li k ( 1 2 ) {\displaystyle \ln \sigma =\sum _{k=1}^{\infty }{\frac {(-1)^{k+1}}{k}}{\text{Li}}_{k}\left({\tfrac {1}{2}}\right)} 
ln ⁡ σ 2 = ∑ k = 1 ∞ 1 2 k ( ln ⁡ ( 1 + 1 k ) − 1 k ) {\displaystyle \ln {\frac {\sigma }{2}}=\sum _{k=1}^{\infty }{\frac {1}{2^{k}}}\left(\ln \left(1+{\frac {1}{k}}\right)-{\frac {1}{k}}\right)} == Integrals == Integrals for ln ⁡ σ {\displaystyle \ln \sigma } are given by: ln ⁡ σ = ∫ 0 1 1 − x ( x − 2 ) ln ⁡ x d x {\displaystyle \ln \sigma =\int _{0}^{1}{\frac {1-x}{(x-2)\ln x}}dx} ln ⁡ σ = ∫ 0 1 ∫ 0 1 − x ( 2 − x y ) ln ⁡ ( x y ) d x d y {\displaystyle \ln \sigma =\int _{0}^{1}\int _{0}^{1}{\frac {-x}{(2-xy)\ln(xy)}}dxdy} == Other formulas == The constant σ {\displaystyle \sigma } arises when studying the asymptotic behaviour of the sequence g 0 = 1 {\displaystyle g_{0}=1} g n = n g n − 1 2 , n ≥ 1 {\displaystyle g_{n}=ng_{n-1}^{2},\qquad n\geq 1} with first few terms 1, 1, 2, 12, 576, 1658880, ... (sequence A052129 in the OEIS). This sequence can be shown to have asymptotic behaviour as follows: g n ∼ σ 2 n ( n + 2 − n − 1 + 4 n − 2 − 21 n − 3 + 138 n − 4 + O ( n − 5 ) ) − 1 {\displaystyle g_{n}\sim {\sigma ^{2^{n}}}\left(n+2-n^{-1}+4n^{-2}-21n^{-3}+138n^{-4}+O(n^{-5})\right)^{-1}} Guillera and Sondow give a representation in terms of the derivative of the Lerch transcendent Φ ( z , s , q ) {\displaystyle \Phi (z,s,q)} : ln ⁡ σ = − 1 2 ∂ Φ ∂ s ( 1 / 2 , 0 , 1 ) {\displaystyle \ln \sigma =-{\frac {1}{2}}{\frac {\partial \Phi }{\partial s}}\!\left(1/2,0,1\right)} If one defines the Euler-constant function (which gives Euler's constant for z = 1 {\displaystyle z=1} ) as: γ ( z ) = ∑ n = 1 ∞ z n − 1 ( 1 n − ln ⁡ ( n + 1 n ) ) {\displaystyle \gamma (z)=\sum _{n=1}^{\infty }z^{n-1}\left({\frac {1}{n}}-\ln \left({\frac {n+1}{n}}\right)\right)} one has: γ ( 1 2 ) = 2 ln ⁡ 2 σ {\displaystyle \gamma ({\tfrac {1}{2}})=2\ln {\frac {2}{\sigma }}} == Universality == One may define a "continued binary expansion" for all real numbers in the set ( 0 , 1 ] {\displaystyle (0,1]} , similarly to the decimal expansion or simple continued fraction expansion. 
This is done by considering the unique base-2 representation for a number x ∈ ( 0 , 1 ] {\displaystyle x\in (0,1]} which does not contain an infinite tail of 0's (for example write one half as 0.01111... 2 {\displaystyle 0.01111..._{2}} instead of 0.1 2 {\displaystyle 0.1_{2}} ). Then define a sequence ( a k ) ⊆ N {\displaystyle (a_{k})\subseteq \mathbb {N} } which gives the difference in positions of the 1's in this base-2 representation. This expansion for x {\displaystyle x} is now given by: x = ⟨ a 1 , a 2 , a 3 , . . . ⟩ {\displaystyle x=\langle a_{1},a_{2},a_{3},...\rangle } For example, for the fractional part of π we have: { π } = 0.14159 26535 89793... = 0.00100 10000 11111... 2 {\displaystyle \{\pi \}=0.14159\,26535\,89793...=0.00100\,10000\,11111..._{2}} (sequence A004601 in the OEIS) The first 1 occurs on position 3 after the radix point. The next 1 appears three places after the first one, the third 1 appears five places after the second one, etc. By continuing in this manner, we obtain: π − 3 = ⟨ 3 , 3 , 5 , 1 , 1 , 1 , 1... ⟩ {\displaystyle \pi -3=\langle 3,3,5,1,1,1,1...\rangle } (sequence A320298 in the OEIS) This gives a bijective map ( 0 , 1 ] ↦ N N {\displaystyle (0,1]\mapsto \mathbb {N} ^{\mathbb {N} }} , such that for every real number x ∈ ( 0 , 1 ] {\displaystyle x\in (0,1]} we can uniquely write: x = ⟨ a 1 , a 2 , a 3 , . . . ⟩ :⇔ x = ∑ k = 1 ∞ 2 − ( a 1 + . . . + a k ) {\displaystyle x=\langle a_{1},a_{2},a_{3},...\rangle :\Leftrightarrow x=\sum _{k=1}^{\infty }2^{-(a_{1}+...+a_{k})}} It can now be proven that for almost all numbers x ∈ ( 0 , 1 ] {\displaystyle x\in (0,1]} the limit of the geometric mean of the terms a k {\displaystyle a_{k}} converges to Somos' constant. That is, for almost all numbers in that interval we have: σ = lim n → ∞ a 1 a 2 . . . 
a n n {\displaystyle \sigma =\lim _{n\to \infty }{\sqrt[{n}]{a_{1}a_{2}...a_{n}}}} Somos' constant is universal for the "continued binary expansion" of numbers x ∈ ( 0 , 1 ] {\displaystyle x\in (0,1]} in the same sense that Khinchin's constant is universal for the simple continued fraction expansions of numbers x ∈ R {\displaystyle x\in \mathbb {R} } . == Generalizations == The generalized Somos' constants may be given by: σ t = ∏ k = 1 ∞ k 1 / t k = 1 1 / t 2 1 / t 2 3 1 / t 3 4 1 / t 4 … {\displaystyle \sigma _{t}=\prod _{k=1}^{\infty }k^{1/t^{k}}=1^{1/t}\;2^{1/t^{2}}\;3^{1/t^{3}}\;4^{1/t^{4}}\dots } for t > 1 {\displaystyle t>1} . The following series holds: ln ⁡ σ t = ∑ k = 1 ∞ ln ⁡ k t k {\displaystyle \ln \sigma _{t}=\sum _{k=1}^{\infty }{\frac {\ln k}{t^{k}}}} We also have a connection to the Euler-constant function: γ ( 1 t ) = t ln ⁡ ( t ( t − 1 ) σ t t − 1 ) {\displaystyle \gamma ({\tfrac {1}{t}})=t\ln \left({\frac {t}{(t-1)\sigma _{t}^{t-1}}}\right)} and the following limit, where γ {\displaystyle \gamma } is Euler's constant: lim t → 0 + t σ t + 1 t = e − γ {\displaystyle \lim _{t\to 0^{+}}t\sigma _{t+1}^{t}=e^{-\gamma }} == See also == Euler's constant Khinchin's constant Binary number Ergodic theory List of mathematical constants == References ==
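The product representation and the sequence g_n discussed above can be checked in a few lines of code (an added illustration, not part of the article; the asserted digits are those quoted above):

```python
# Somos' constant via the quickly converging product  prod_k (1 + 1/k)**(1/2**k);
# sixty factors are far more than double precision requires.
sigma = 1.0
for k in range(1, 61):
    sigma *= (1.0 + 1.0 / k) ** (0.5 ** k)
assert abs(sigma - 1.661687949633594) < 1e-12

# The sequence g_0 = 1, g_n = n * g_{n-1}**2 whose growth sigma governs.
g = [1]
for n in range(1, 11):
    g.append(n * g[-1] ** 2)
assert g[:6] == [1, 1, 2, 12, 576, 1658880]

# The leading-order asymptotics  g_n ~ sigma**(2**n) / (n + 2 - ...)  imply
# that g_n**(1/2**n) approaches sigma (slowly, via the correction factor).
assert abs(g[10] ** (1 / 2 ** 10) - sigma) < 0.01
```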
Wikipedia:Sonja Lyttkens#0
Sonja Lyttkens (26 August 1919 – 18 December 2014) was a Swedish mathematician, the third woman to earn a mathematics doctorate in Sweden and the first of these women to obtain a permanent university position in mathematics. She is also known for her work to make academia less hostile to women, and for pointing out that the Swedish taxation system of the time, which provided an income deduction for husbands of non-working wives, pressured women even in low-income families not to work. Her observations helped push Sweden into taxing married people separately from their spouses. == Education and career == Lyttkens grew up in Halmstad and Karlskrona, and moved to Kalmar in 1930. She moved again to Uppsala in 1937 to study mathematics, but her studies were interrupted by marriage and children. She earned a licentiate in 1951, and completed her Ph.D. at Uppsala University in 1956. Her dissertation, The Remainder In Tauberian Theorems, concerned Tauberian theorems and was jointly supervised by Arne Beurling and Lennart Carleson. She was the third woman to earn a doctorate in mathematics in Sweden, after Louise Petrén-Overton in 1911 and Ingrid Lindström in 1947. Although Sofya Kovalevskaya had become a full professor of mathematics at a private university in Stockholm in 1884, women were forbidden from holding public university positions in Sweden until 1925, and both Petrén and Lindström became schoolteachers. Lyttkens obtained a permanent position as a senior lecturer at Uppsala University in 1963, and in 1970 she became the university's first female inspektor (an honorary chair of a student union) for the Kalmar nation. She retired in 1984. == Personal life == Lyttkens was the daughter of Swedish sculptor Anna Petrus and her husband, physician Harald Lyttkens. Two of her children, Ulla Lyttkens and Harald Hamrell, became film actors and directors. 
As well as working in mathematics, Lyttkens also painted watercolors before and after her retirement, and had several exhibitions of her paintings. == References == == Further reading == Sonja Lyttkens at Svenskt kvinnobiografiskt lexikon
Wikipedia:Sonya Christian#0
Sonya Christian is an Indian-American academic administrator and former professor who is the current 11th Chancellor of the California Community Colleges. She previously served as Chancellor of the Kern Community College District from 2021 to 2023 and as the 10th President of Bakersfield College from 2013 to 2021. == Education and career == Christian received a bachelor of science degree from the University of Kerala in Kerala, India. She earned her master of science in applied mathematics from the University of Southern California. She earned her EdD from the University of California, Los Angeles. In 1991, she began her career at Bakersfield College as a mathematics professor. During her 12 years at Bakersfield College, she also served as division chair, then Dean of Science, Engineering, Allied Health, and Mathematics. In 2003, Christian was named Associate Vice President for Instruction at Lane Community College in Eugene, Oregon. She later became Vice President of Academic and Student Affairs and Chief Academic Officer. She returned to Bakersfield College in 2013 after being named Bakersfield College's 10th president. During her tenure, she focused her work on student success with equity. In 2015, those efforts were rewarded when Bakersfield College's "Making it Happen" program was named a 2015 Exemplary Program by the Board of Governors of California Community Colleges. The California Community Colleges honored Bakersfield College with the 2018 Chancellor's Student Success Award for its work with High-Touch, High-Tech Transfer Pathways. That program was also honored in 2019, when Bakersfield College was named a 2019 Innovation of the Year Award winner. Also in 2019, Bakersfield College was named as a recipient of the Council for Higher Education Accreditation International Quality Group's CIQG Quality Award for the college's work in improving student outcomes.
Christian also served as Chair of the Accrediting Commission for Community and Junior Colleges (2020–2022). She was appointed by the Governor to the Student Centered Funding Formula Oversight Committee, where she served from 2018 to 2022. She currently serves on the boards of the Campaign for College Opportunity and the Equity in Policy Implementation Board, and is the Chair of the California Community Colleges' Women's Caucus. On April 19, 2021, the Kern Community College District Board of Trustees announced that Christian would become the district's chancellor, beginning her term in July 2021. == Awards and accolades == Christian received the District 6 Pacesetter of the Year Award from the National Council for Marketing & Public Relations. Assemblyman Rudy Salas nominated Christian as 2016 Woman of the Year for CA-32. In 2018, the Delano Chamber of Commerce selected Christian as co-recipient of the annual Educational Award, which she shared with Assemblymember Rudy Salas. In 2019, Christian was named Woman of the Year by the Kern County Hispanic Chamber of Commerce. == References ==
Wikipedia:Sophie Dabo-Niang#0
Sophie Dabo-Niang (née Dabo) is a Senegalese and French mathematician, statistician, and professor who has done outreach to increase the status of African mathematicians. == Biography == === Early life === Dabo-Niang was encouraged to pursue mathematics by her parents and her teachers, and knew early in high school that she wanted to study mathematics. === Education === She earned her PhD in 2002 from the Pierre and Marie Curie University in Paris. She enjoys passing on her passion for mathematics to her students. === Marriage and children === As of 2016, Dabo-Niang is married. She had three children between starting her master's degree and finishing her doctoral thesis, and has four children in total. She has said that balancing parenting and her mathematics career has been a challenge, and she credits her persistence to her desire to succeed and the support of her husband. == Mathematical work == Dabo-Niang has published articles on functional statistics, nonparametric and semi-parametric estimation for weakly dependent processes, spatial statistics, and mathematical epidemiology. She serves as an editor of the journal Revista Colombiana de Estadística and is on the scientific committee of the Centre International de Mathématiques Pures et Appliquées (CIMPA). === Professorship and developing country outreach === Dabo-Niang has supervised the doctoral theses of several students in Africa. As of January 2021 she is a full professor at the University of Lille and is supervising and co-supervising multiple African students. She has taught master's-level statistics courses, including in Senegal. She introduced the subfield of spatial statistics to a university in Dakar, Senegal, and supervised the first Senegalese and Mauritanian doctoral students focusing on the field. She often participates on thesis juries in Africa. Dabo-Niang has coordinated scientific events in Africa.
In Senegal, she coordinated a CIMPA event and an event to encourage young girls in the mathematical sciences. She serves as the chair of the Developing Countries Committee for the European Mathematical Society. === Selected publications === ==== Books ==== Functional and Operatorial Statistics. Contributions to Statistics. Sophie Dabo-Niang, Frédéric Ferraty (eds.). Physica-Verlag Heidelberg. 2008. ISBN 978-3-7908-2061-4. Mathematical Modeling of Random and Deterministic Phenomena. Solym Mawaki Mamou-Abi, Sophie Dabo-Niang, Jean-Jacques Salone (eds.). ISTE, Wiley. 2020-02-01. ISBN 978-1-78630-454-4. ==== Articles ==== Dabo-Niang, Sophie; Rhomari, Noureddine (2003-01-01). "Estimation non paramétrique de la régression avec variable explicative dans un espace métrique" [Nonparametric regression estimation with an explanatory variable taking values in a metric space]. Comptes Rendus Mathematique. 336 (1): 75–80. doi:10.1016/S1631-073X(02)00012-2. ISSN 1631-073X. S2CID 122151527. Dabo-Niang, Sophie (2007). "Kernel Regression Estimation for Continuous Spatial Processes". Mathematical Methods of Statistics. 16 (4): 298–317. doi:10.3103/S1066530707040023. S2CID 121410227. Dabo-Niang, Sophie; Ferraty, Frédéric; Vieu, Philippe (2007-06-15). "On the using of modal curves for radar waveforms classification". Computational Statistics & Data Analysis. 51 (10): 4878–4890. doi:10.1016/j.csda.2006.07.012. ISSN 0167-9473. Chebana, Fateh; Dabo‐Niang, Sophie; Ouarda, Taha B. M. J. (2012). "Exploratory functional flood frequency analysis and outlier detection". Water Resources Research. 48 (4). Bibcode:2012WRR....48.4514C. doi:10.1029/2011WR011040. ISSN 1944-7973. Dabo-Niang, Sophie; Yao, Anne-Françoise (2013-01-01). "Kernel spatial density estimation in infinite dimension space". Metrika. 76 (1): 19–52. doi:10.1007/s00184-011-0374-4. ISSN 1435-926X. S2CID 121408701.
== Honours, decorations, awards and distinctions == The African Women in Mathematics Association has profiled Dabo-Niang. She was honored by Femmes et Mathématiques in 2015. == See also == Female education in STEM == References == == External links == personal website podcast recording of her workshop on "Functional estimation in high dimensional data : Application to classification" Sophie Dabo-Niang publications indexed by Google Scholar
Wikipedia:Sophie Piccard#0
Sophie Piccard (1904–1990) was a Russian-Swiss mathematician who became the first female full professor (professor ordinarius) in Switzerland. Her research concerned set theory, group theory, linear algebra, and the history of mathematics. == Early life and education == Piccard was born on September 27, 1904, in Saint Petersburg, to a French Huguenot mother and a Swiss father. She earned a diploma in Smolensk in 1925, where her father, Eugène-Ferdinand Piccard, was a university professor and her mother a language teacher at the lycée. Soon afterwards she moved to Switzerland with her parents, escaping the unrest in Russia that her mother, Eulalie Piccard, would become known for writing about. Sophie Piccard's Russian degree was worthless in Switzerland, and she earned another from the University of Lausanne in 1927, going on to complete a doctorate there in 1929 under the supervision of Dmitry Mirimanoff. == Career and later life == She worked outside of mathematics until 1936, when she began teaching part-time at the University of Neuchâtel as an assistant to Rudolf Gaberel. Gaberel died in 1938 and she inherited his position, becoming a professor extraordinarius (associate professor); she was promoted to professor ordinarius in 1943, as the chair of higher geometry and probability theory at Neuchâtel. Piccard died on January 6, 1990, in Fribourg. == Contributions == Piccard was an invited speaker at the International Congress of Mathematicians in 1932 and again in 1936. In 1939 she published the book Sur les ensembles de distances des ensembles de points d'un espace euclidien [On the distance sets of point sets in a Euclidean space] (Mémoires de l'Université de Neuchâtel 13, Paris, France: Librairie Gauthier-Villars et Cie., 1939). Its subject was the sets of distances that a collection of points in a Euclidean space might determine. This book included early research on Golomb rulers, finite sets of integer points in a one-dimensional space with the property that their pairwise distances are all distinct.
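The defining property of a Golomb ruler is easy to verify computationally. The sketch below checks two six-mark rulers that are standard examples in the later literature on this question (the specific rulers are an assumption, not taken from this article): both are Golomb rulers with identical distance sets, yet they are not congruent to one another.

```python
from itertools import combinations

def distance_set(marks):
    """All pairwise distances between integer marks on a line, as a sorted list."""
    return sorted(abs(a - b) for a, b in combinations(marks, 2))

def is_golomb(marks):
    """A Golomb ruler has all pairwise distances distinct."""
    d = distance_set(marks)
    return len(d) == len(set(d))

def congruent(a, b):
    """Rulers are congruent if one maps onto the other by translation and/or reflection."""
    norm = lambda m: tuple(x - min(m) for x in sorted(m))
    refl = lambda m: tuple(max(m) - x for x in sorted(m, reverse=True))
    return norm(a) == norm(b) or norm(a) == refl(b)

# Two six-mark rulers often cited as sharing a distance set without being congruent:
A = [0, 1, 4, 10, 12, 17]
B = [0, 1, 8, 11, 13, 17]
print(is_golomb(A), is_golomb(B), distance_set(A) == distance_set(B), congruent(A, B))
# → True True True False
```

Both rulers realize the distance set {1, ..., 13, 16, 17}, so any theorem forcing congruence from equal distance sets must exclude pairs like this one.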
She published a theorem claiming that every two Golomb rulers with the same distance set must be congruent to each other; this turned out to be false for certain sets of six points, but true otherwise. == References == == Further reading == Schumacher, Mireille, "Des mathématiciennes en Suisse", Tangente (in French), 157: 38–39.
Wikipedia:Sorin Popa#0
Sorin Teodor Popa (born 24 March 1953) is a Romanian American mathematician working on operator algebras. He is a professor at the University of California, Los Angeles. He was elected a Member of the National Academy of Sciences in 2025. == Biography == Popa earned his PhD from the University of Bucharest in 1983 under the supervision of Dan-Virgil Voiculescu, with thesis Studiul unor clase de subalgebre ale C ∗ {\displaystyle C^{*}} -algebrelor. He has advised 15 doctoral students at UCLA, including Adrian Ioana. == Honors and awards == In 1990, Popa was an invited speaker at the International Congress of Mathematicians (ICM) in Kyoto, where he gave a talk on "Subfactors and Classifications in von Neumann algebras". He was a Guggenheim Fellow in 1995. In 2006, he gave a plenary lecture at the ICM in Madrid on "Deformation and Rigidity for group actions and Von Neumann Algebras". In 2009, he was awarded the Ostrowski Prize, and in 2010 the E. H. Moore Prize. He is one of the inaugural fellows of the American Mathematical Society. In 2013, he was elected to the American Academy of Arts and Sciences. == Selected publications == Pimsner, Mihai; Popa, Sorin (1986). "Entropy and index for subfactors". Annales Scientifiques de l'École Normale Supérieure. 19 (1): 57–106. doi:10.24033/asens.1504. MR 0860811. Popa, Sorin (1994). "Classification of amenable subfactors of type II". Acta Mathematica. 172 (2): 163–255. doi:10.1007/BF02392646. MR 1278111. S2CID 123256227. Popa, Sorin (1995). Classification of subfactors and their endomorphisms. CBMS Regional Conference Series in Mathematics. Vol. 86. Providence, Rhode Island: American Mathematical Society. doi:10.1090/cbms/086. ISBN 978-0-8218-0321-9. MR 1339767. Popa, Sorin (2006). "On a class of type II 1 factors with Betti numbers invariants". Annals of Mathematics. 163 (3): 809–899. arXiv:math/0209130. doi:10.4007/annals.2006.163.809. MR 2215135. S2CID 119174749. Popa, Sorin (2008). 
"On the superrigidity of malleable actions with spectral gap". Journal of the American Mathematical Society. 21 (4): 981–1000. arXiv:math/0608429. Bibcode:2008JAMS...21..981P. doi:10.1090/S0894-0347-07-00578-4. MR 2425177. S2CID 1298719. == References == == External links == Homepage of Sorin Popa at the University of California, Los Angeles UCLA – Sorin Popa elected into the American Academy of Arts and Sciences
Wikipedia:Sotero Prieto Rodríguez#0
Sotero Prieto Rodríguez (December 25, 1884 – May 22, 1935) was a Mexican mathematician who taught at the National Autonomous University of Mexico. Among his students were physicist Manuel Sandoval Vallarta, physicist and mathematician Carlos Graef Fernández, and engineer and Rector of UNAM Nabor Carrillo Flores. == Early life == Sotero Prieto Rodríguez was the son of the mining engineer and mathematics teacher Raúl Prieto González Bango and Teresa Rodríguez de Prieto. He was a cousin of Isabel Prieto de Landázuri, a distinguished poet considered the first Mexican Romantic. In 1897, at twelve years of age, Prieto arrived in Mexico City and began his preparatory studies in the Instituto Colón de don Toribio Soto, finishing them at the Escuela Nacional Preparatoria in 1901. In 1902 he was accepted as a student in the Escuela Nacional de Ingenieros, where he studied civil engineering, completing the coursework in 1906, although he never received the corresponding degree. == Career == While still very young, he began teaching and carrying out mathematical studies. He notably influenced the change and progress of mathematical research in Mexico through the then new generation of engineers and students of the exact sciences at the National Autonomous University of Mexico. He was a teacher of Manuel Sandoval Vallarta in the Escuela Nacional Preparatoria and of Alberto Barajas Celis, Carlos Graef Fernández and Nabor Carrillo Flores in the Escuela Nacional de Ingenieros, currently the Facultad de Ingeniería. In 1932, he established the Mathematics Section of the Sociedad Científica "Antonio Alzate", currently the Academia Nacional de Ciencias de México, where his students presented the results of their research.
== Death == According to people close to him, Prieto had remarked that if he passed fifty years of age without having achieved some great discovery in his specialty, he would commit suicide, a statement which no one took seriously. At midday on May 22, 1935, in house number 2 of Génova Street, Mexico City, while he was alone, he fulfilled the promise he had made to himself. According to his family, however, the reasons for his suicide were different. == References == == Bibliography == Vasconcelos, José. La Raza Cósmica. México: Editorial Botas, S.A., 1926. p. 156. Armendáriz, Antonio. Hermandad Pitagórica. México: Novedades, March 21, 1987, editorial page.
Wikipedia:Space-filling tree#0
Space-filling trees are geometric constructions that are analogous to space-filling curves, but have a branching, tree-like structure and are rooted. A space-filling tree is defined by an incremental process that results in a tree for which every point in the space has a finite-length path that converges to it. In contrast to space-filling curves, individual paths in the tree are short, allowing any part of the space to be quickly reached from the root. The simplest examples of space-filling trees have a regular, self-similar, fractal structure, but can be generalized to non-regular and even randomized/Monte-Carlo variants (see Rapidly exploring random tree). Space-filling trees have interesting parallels in nature, including fluid distribution systems, vascular networks, and fractal plant growth, and many interesting connections to L-systems in computer science. == Definition == A space-filling tree is defined by an iterative process whereby a single point in a continuous space is connected via a continuous path to every other point in the space by a path of finite length, and for every point in the space, there is at least one path that converges to it. The concept of a "space-filling tree" in this sense was described in Chapter 15 of Mandelbrot's influential book The Fractal Geometry of Nature (1982). The concept was made more rigorous and given the name "space-filling tree" in a 2009 tech report that defines "space-filling" and "tree" differently than their traditional definitions in mathematics. As explained in the space-filling curve article, in 1890, Peano found the first space-filling curve, and by Jordan's 1887 definition, which is now standard, a curve is a single function, not a sequence of functions. The curve is "space filling" because it is "a curve whose range contains the entire 2-dimensional unit square" (as explained in the first sentence of space-filling curve). 
In contrast, a space-filling tree, as defined in the tech report, is not a single tree. It is only a sequence of trees. The paper says "A space-filling tree is actually defined as an infinite sequence of trees". It defines T square {\displaystyle T_{\text{square}}} as a "sequence of trees", then states " T square {\displaystyle T_{\text{square}}} is a space-filling tree". It is not space-filling in the standard sense of including the entire 2-dimensional unit square. Instead, the paper defines it as having trees in the sequence coming arbitrarily close to every point. It states "A tree sequence T is called 'space filling' in a space X if for every x ∈ X, there exists a path in the tree that starts at the root and converges to x.". The standard term for this concept is that it includes a set of points that is dense everywhere in the unit square. == Examples == The simplest example of a space-filling tree is one that fills a square planar region. The images illustrate the construction for the planar region [ 0 , 1 ] 2 ⊂ R 2 {\displaystyle [0,1]^{2}\subset \mathbb {R} ^{2}} . At each iteration, additional branches are added to the existing trees. Space-filling trees can also be defined for a variety of other shapes and volumes. Below is the subdivision scheme used to define a space-filling for a triangular region. At each iteration, additional branches are added to the existing trees connecting the center of each triangle to the centers of the four subtriangles. The first six iterations of the triangle space-filling tree are illustrated below: Space-filling trees can also be constructed in higher dimensions. The simplest examples are cubes in R 3 {\displaystyle \mathbb {R} ^{3}} and hypercubes in R n {\displaystyle \mathbb {R} ^{n}} . A similar sequence of iterations used for the square space-filling tree can be used for hypercubes. 
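The square construction described above can be sketched programmatically: at each iteration, the center of every current cell is joined to the centers of its four subcells. The function below is an illustrative sketch (names and the [0,1]² normalization are assumptions consistent with the text, not code from any cited source):

```python
def square_tree_segments(iterations, center=(0.5, 0.5), half=0.5):
    """Generate the branch segments of the square space-filling tree on [0,1]^2.

    Each cell contributes four segments, from its center to the centers of
    its four subcells; iteration k therefore adds 4**k new branches."""
    segments = []
    cells = [(center, half)]  # (cell center, cell half-width)
    for _ in range(iterations):
        next_cells = []
        for (cx, cy), h in cells:
            for dx in (-h / 2, h / 2):
                for dy in (-h / 2, h / 2):
                    child = (cx + dx, cy + dy)
                    segments.append(((cx, cy), child))
                    next_cells.append((child, h / 2))
        cells = next_cells
    return segments

# n iterations give 4 + 16 + ... + 4^n = (4^(n+1) - 4) / 3 segments.
print(len(square_tree_segments(3)))  # 84 segments after three iterations
```

Branch lengths halve at each level, so every root-to-leaf path has length bounded by a geometric series; this is the sense in which paths in the tree stay short while the branch tips come arbitrarily close to every point of the square.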
The third iteration of such a space-filling tree in R 3 {\displaystyle \mathbb {R} ^{3}} is illustrated below: == See also == H tree Space-filling curve Rapidly exploring random tree (RRTs) Binary space partitioning == References ==
Wikipedia:Special cases of Apollonius' problem#0
In Euclidean geometry, Apollonius' problem is to construct all the circles that are tangent to three given circles. Special cases of Apollonius' problem are those in which at least one of the given circles is a point or line, i.e., is a circle of zero or infinite radius. The nine types of such limiting cases of Apollonius' problem are to construct the circles tangent to: three points (denoted PPP, generally 1 solution) three lines (denoted LLL, generally 4 solutions) one line and two points (denoted LPP, generally 2 solutions) two lines and a point (denoted LLP, generally 2 solutions) one circle and two points (denoted CPP, generally 2 solutions) one circle, one line, and a point (denoted CLP, generally 4 solutions) two circles and a point (denoted CCP, generally 4 solutions) one circle and two lines (denoted CLL, generally 8 solutions) two circles and a line (denoted CCL, generally 8 solutions) In a different type of limiting case, the three given geometrical elements may have a special arrangement, such as constructing a circle tangent to two parallel lines and one circle. == Historical introduction == Like most branches of mathematics, Euclidean geometry is concerned with proofs of general truths from a minimum of postulates. For example, a simple proof would show that at least two angles of an isosceles triangle are equal. One important type of proof in Euclidean geometry is to show that a geometrical object can be constructed with a compass and an unmarked straightedge; an object can be constructed if and only if its defining lengths can be expressed from the given lengths using only addition, subtraction, multiplication, division, and square roots. Therefore, it is important to determine whether an object can be constructed with compass and straightedge and, if so, how it may be constructed. Euclid developed numerous constructions with compass and straightedge. Examples include: regular polygons such as the pentagon and hexagon, a line parallel to another that passes through a given point, etc.
Many rose windows in Gothic Cathedrals, as well as some Celtic knots, can be designed using only Euclidean constructions. However, some geometrical constructions are not possible with those tools, including the heptagon and trisecting an angle. Apollonius contributed many constructions, namely, finding the circles that are tangent to three geometrical elements simultaneously, where the "elements" may be a point, line or circle. == Rules of Euclidean constructions == In Euclidean constructions, five operations are allowed: Draw a line through two points Draw a circle through a point with a given center Find the intersection point of two lines Find the intersection points of two circles Find the intersection points of a line and a circle The initial elements in a geometric construction are called the "givens", such as a given point, a given line or a given circle. === Example 1: Perpendicular bisector === To construct the perpendicular bisector of the line segment between two points requires two circles, each centered on an endpoint and passing through the other endpoint (operation 2). The intersection points of these two circles (operation 4) are equidistant from the endpoints. The line through them (operation 1) is the perpendicular bisector. === Example 2: Angle bisector === To generate the line that bisects the angle between two given rays requires a circle of arbitrary radius centered on the intersection point P of the two lines (2). The intersection points of this circle with the two given lines (5) are T1 and T2. Two circles of the same radius, centered on T1 and T2, intersect at points P and Q. The line through P and Q (1) is an angle bisector. Rays have one angle bisector; lines have two, perpendicular to one another. == Preliminary results == A few basic results are helpful in solving special cases of Apollonius' problem. Note that a line and a point can be thought of as circles of infinitely large and infinitely small radius, respectively. 
A circle is tangent to a point if it passes through the point, and tangent to a line if they intersect at a single point P, or equivalently if the line is perpendicular to the radius drawn from the circle's center to P. Circles tangent to two given points must have their centers on the perpendicular bisector of those points. Circles tangent to two given lines must have their centers on one of the angle bisectors of those lines. To draw the tangent lines to a circle from a given point, draw the circle centered on the midpoint between the circle's center and the given point and passing through both; its intersections with the given circle are the points of tangency. The power of a point theorem, together with the harmonic mean, is used in several of the constructions below. The radical axis of two circles is the set of points with equal tangent lengths to the two circles, or more generally, equal power. Under inversion, circles may be transformed into lines or into other circles. If two circles are internally tangent, they remain so if their radii are increased or decreased by the same amount. Conversely, if two circles are externally tangent, they remain so if their radii are changed by the same amount in opposite directions, one increasing and the other decreasing. == Types of solutions == === Type 1: Three points === PPP problems generally have a single solution. As shown above, if a circle passes through two given points P1 and P2, its center must lie somewhere on the perpendicular bisector line of the two points. Therefore, if the solution circle passes through three given points P1, P2 and P3, its center must lie on the perpendicular bisectors of {\displaystyle {\overline {\mathbf {P} _{1}\mathbf {P} _{2}}}}, {\displaystyle {\overline {\mathbf {P} _{1}\mathbf {P} _{3}}}} and {\displaystyle {\overline {\mathbf {P} _{2}\mathbf {P} _{3}}}}. Provided the three points are not collinear, these bisectors intersect in a single point, which is the center of the solution circle. The radius of the solution circle is the distance from that center to any one of the three given points. === Type 2: Three lines === LLL problems generally have 4 solutions.
As shown above, if a circle is tangent to two given lines, its center must lie on one of the two lines that bisect the angle between the two given lines. Therefore, if a circle is tangent to three given lines L1, L2, and L3, its center C must be located at the intersection of the bisecting lines of the three given lines. In general, there are four such points, giving four different solutions for the LLL Apollonius problem. The radius of each solution is determined by finding a point of tangency T, which may be done by choosing one of the three intersection points P between the given lines; and drawing a circle centered on the midpoint of C and P of diameter equal to the distance between C and P. The intersections of that circle with the intersecting given lines are the two points of tangency. === Type 3: One point, two lines === PLL problems generally have 2 solutions. As shown above, if a circle is tangent to two given lines, its center must lie on one of the two lines that bisect the angle between the two given lines. By symmetry, if such a circle passes through a given point P, it must also pass through a point Q that is the "mirror image" of P about the angle bisector. The two solution circles pass through both P and Q, and their radical axis is the line connecting those two points. Consider point G at which the radical axis intersects one of the two given lines. 
Since every point on the radical axis has the same power with respect to each circle, the distances {\displaystyle {\overline {\mathbf {GT} _{1}}}} and {\displaystyle {\overline {\mathbf {GT} _{2}}}} to the solution tangent points T1 and T2 are equal to each other, and their common square equals the product {\displaystyle {\overline {\mathbf {GP} }}\cdot {\overline {\mathbf {GQ} }}={\overline {\mathbf {GT} _{1}}}\cdot {\overline {\mathbf {GT} _{1}}}={\overline {\mathbf {GT} _{2}}}\cdot {\overline {\mathbf {GT} _{2}}}} Thus, both distances are equal to the geometric mean of {\displaystyle {\overline {\mathbf {GP} }}} and {\displaystyle {\overline {\mathbf {GQ} }}}. From G and this distance, the tangent points T1 and T2 can be found. Then, the two solution circles are the circles that pass through the three points (P, Q, T1) and (P, Q, T2), respectively. === Type 4: Two points, one line === PPL problems generally have 2 solutions. If a line m drawn through the given points P and Q is parallel to the given line l, the tangent point T of the circle with l is located at the intersection of the perpendicular bisector of {\displaystyle {\overline {PQ}}} with l. In that case, the sole solution circle is the circle that passes through the three points P, Q and T. If the line m is not parallel to the given line l, then it intersects l at a point G. By the power of a point theorem, the distance from G to a tangent point T must equal the geometric mean: {\displaystyle {\overline {\mathbf {GT} }}\cdot {\overline {\mathbf {GT} }}={\overline {\mathbf {GP} }}\cdot {\overline {\mathbf {GQ} }}} Two points on the given line l are located at this distance {\displaystyle {\overline {\mathbf {GT} }}} from the point G, and may be denoted T1 and T2.
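In coordinates, this geometric-mean recipe for the two-points-one-line case is short to carry out. The sketch below (illustrative names; the given line is normalized to the x-axis, and line PQ is assumed not parallel to it) locates G, finds the tangent points, and finishes with a circle through three points, which is itself the Type 1 (PPP) construction:

```python
import math

def circle_through(p1, p2, p3):
    """Circle through three non-collinear points (the PPP case), found by
    intersecting perpendicular bisectors in closed form; returns (center, radius)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

def ppl_circles(p, q):
    """Two given points P, Q and one given line, taken as the x-axis.

    G is where line PQ meets the axis; by the power of a point,
    GT = sqrt(GP * GQ), giving tangent points T1, T2 and one circle
    through P, Q and each of them.  Assumes PQ is not parallel to the axis."""
    (px, py), (qx, qy) = p, q
    gx = px - py * (qx - px) / (qy - py)  # intersection of line PQ with y = 0
    gt = math.sqrt(math.hypot(px - gx, py) * math.hypot(qx - gx, qy))
    return [circle_through(p, q, (gx + s * gt, 0.0)) for s in (-1.0, 1.0)]

for (cx, cy), r in ppl_circles((0.0, 2.0), (0.0, 1.0)):
    print(abs(cy) - r)  # ~0 for both circles: each is tangent to the x-axis
```

Tangency to the line shows up numerically as |cy| = r, i.e. the center sits exactly one radius away from the axis, while both given points lie on each circle.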
The two solution circles are the circles that pass through the three points (P, Q, T1) and (P, Q, T2), respectively. ==== Compass and straightedge construction ==== The two circles in the Two points, one line problem where the line through P and Q is not parallel to the given line l, can be constructed with compass and straightedge by: Draw the line m through the given points P and Q . The point G is where the lines l and m intersect Draw circle C that has PQ as diameter. Draw one of the tangents from G to circle C. point A is where the tangent and the circle touch. Draw circle D with center G through A. Circle D cuts line l at the points T1 and T2. One of the required circles is the circle through P, Q and T1. The other circle is the circle through P, Q and T2. The fastest construction (if intersections of l with both (PQ) and the central perpendicular to [PQ] are available; based on Gergonne’s approach). Draw a line m through P and Q intersecting l at G. Draw a perpendicular n through the middle of [PQ] intersecting l at O. Draw a circle w centered at O with radius |OP|=|OQ|. Draw a circle W with [OG] as a diameter intersecting w at M1 and M2. Draw a circle v centered at G with radius |GM1|=|GM2| intersecting l at T1 and T2. The circles passing through P, Q, T1 and P, Q, T2 are solutions. The universal construction (if intersections of l with either (PQ) or the central perpendicular to [PQ] are unavailable or do not exist). Draw a perpendicular n through the middle of [PQ] (point R). Draw a perpendicular k to l through P or Q intersecting l at K. Draw a circle w centered at R with radius |RK|. Draw two lines n1 and n2 passing through P and Q parallel to n and intersecting w at points A1, A2 and B1, B2, respectively. Draw two lines (A1B1) and (A2B2) intersecting l at T1 and T2, respectively. The circles passing through P, Q, T1 and P, Q, T2 are solutions. === Type 5: One circle, two points === CPP problems generally have 2 solutions. 
Consider a circle centered on one given point P that passes through the second point, Q. Since the solution circle must pass through P, inversion in this circle transforms the solution circle into a line lambda. The same inversion transforms Q into itself, and (in general) the given circle C into another circle c. Thus, the problem becomes that of finding a solution line that passes through Q and is tangent to c, which was solved above; there are two such lines. Re-inversion produces the two corresponding solution circles of the original problem. === Type 6: One circle, one line, one point === CLP problems generally have 4 solutions. The solution of this special case is similar to that of the CPP Apollonius solution. Draw a circle centered on the given point P; since the solution circle must pass through P, inversion in this circle transforms the solution circle into a line lambda. In general, the same inversion transforms the given line L and given circle C into two new circles, c1 and c2. Thus, the problem becomes that of finding a solution line tangent to the two inverted circles, which was solved above. There are four such lines, and re-inversion transforms them into the four solution circles of the Apollonius problem. === Type 7: Two circles, one point === CCP problems generally have 4 solutions. The solution of this special case is similar to that of CPP. Draw a circle centered on the given point P; since the solution circle must pass through P, inversion in this circle transforms the solution circle into a line lambda. In general, the same inversion transforms the given circle C1 and C2 into two new circles, c1 and c2. Thus, the problem becomes that of finding a solution line tangent to the two inverted circles, which was solved above. There are four such lines, and re-inversion transforms them into the four solution circles of the original Apollonius problem. === Type 8: One circle, two lines === CLL problems generally have 8 solutions. 
This special case is solved most easily using scaling. The given circle is shrunk to a point, and the radius of the solution circle is either decreased by the same amount (if the solution is internally tangent) or increased (if it is externally tangent). Depending on whether the radius of the solution circle is increased or decreased, the two given lines are displaced parallel to themselves by the same amount, according to the quadrant in which the center of the solution circle falls. This shrinking of the given circle to a point reduces the problem to the PLL problem, solved above. In general, there are two such solutions per quadrant, giving eight solutions in all. === Type 9: Two circles, one line === CCL problems generally have 8 solutions. The solution of this special case is similar to that of CLL. The smaller circle is shrunk to a point, while the radii of the larger given circle and of any solution circle are adjusted, and the line is displaced parallel to itself, according to whether they are internally or externally tangent to the smaller circle. This reduces the problem to CLP. Each CLP problem has four solutions, as described above, and there are two such problems, depending on whether the solution circle is internally or externally tangent to the smaller circle. == Special cases with no solutions == An Apollonius problem is impossible if the given circles are nested, i.e., if one given circle is completely enclosed within a second and the remaining circle is completely excluded. This follows because any solution circle would have to cross over the middle circle to move from its tangency to the inner circle to its tangency with the outer circle. This general result has several special cases when the given circles are shrunk to points (zero radius) or expanded to straight lines (infinite radius).
For example, the CCL problem has zero solutions if the two circles are on opposite sides of the line since, in that case, any solution circle would have to cross the given line non-tangentially to go from the tangent point of one circle to that of the other. == See also == Problem of Apollonius Compass and straightedge constructions
Wikipedia:Special linear group#0
In mathematics, the special linear group SL ⁡ ( n , R ) {\displaystyle \operatorname {SL} (n,R)} of degree n {\displaystyle n} over a commutative ring R {\displaystyle R} is the set of n × n {\displaystyle n\times n} matrices with determinant 1 {\displaystyle 1} , with the group operations of ordinary matrix multiplication and matrix inversion. This is the normal subgroup of the general linear group given by the kernel of the determinant det : GL ⁡ ( n , R ) → R × , {\displaystyle \det \colon \operatorname {GL} (n,R)\to R^{\times },} where R × {\displaystyle R^{\times }} is the multiplicative group of R {\displaystyle R} (that is, R {\displaystyle R} excluding 0 {\displaystyle 0} when R {\displaystyle R} is a field). These elements are "special" in that they form an algebraic subvariety of the general linear group – they satisfy a polynomial equation (since the determinant is polynomial in the entries). When R {\displaystyle R} is the finite field of order q {\displaystyle q} , the notation SL ⁡ ( n , q ) {\displaystyle \operatorname {SL} (n,q)} is sometimes used. == Geometric interpretation == The special linear group SL ⁡ ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} can be characterized as the group of volume and orientation preserving linear transformations of R n {\displaystyle \mathbb {R} ^{n}} . This corresponds to the interpretation of the determinant as measuring change in volume and orientation. == Lie subgroup == When F {\displaystyle F} is R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } , SL ⁡ ( n , F ) {\displaystyle \operatorname {SL} (n,F)} is a Lie subgroup of GL ⁡ ( n , F ) {\displaystyle \operatorname {GL} (n,F)} of dimension n 2 − 1 {\displaystyle n^{2}-1} . The Lie algebra s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} of SL ⁡ ( n , F ) {\displaystyle \operatorname {SL} (n,F)} consists of all n × n {\displaystyle n\times n} matrices over F {\displaystyle F} with vanishing trace.
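The link between the traceless Lie algebra and the determinant-one group can be checked numerically via the identity det(exp A) = exp(tr A). A minimal sketch, assuming a made-up traceless matrix and a simple series-based matrix exponential (neither is from the article):

```python
import numpy as np

def expm(a, terms=60):
    """Matrix exponential via its Taylor series (adequate for small norms)."""
    result = np.eye(a.shape[0])
    term = np.eye(a.shape[0])
    for k in range(1, terms):
        term = term @ a / k
        result = result + term
    return result

# A traceless 3x3 matrix, i.e., an element of sl(3, R) (illustrative values).
A = np.array([[ 0.3,  1.0, -0.2],
              [ 0.5, -0.7,  0.4],
              [-0.1,  0.2,  0.4]])
assert abs(np.trace(A)) < 1e-12

g = expm(A)
# det(exp A) = exp(tr A) = 1, so g lies in SL(3, R).
assert abs(np.linalg.det(g) - 1.0) < 1e-10
```

Determinants multiply, so products and inverses of such matrices stay in the group.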
The Lie bracket is given by the commutator. == Topology == Any invertible matrix can be uniquely represented according to the polar decomposition as the product of a unitary matrix and a Hermitian matrix with positive eigenvalues. The determinant of the unitary matrix is on the unit circle, while that of the Hermitian matrix is real and positive. Since in the case of a matrix from the special linear group the product of these two determinants must be 1, then each of them must be 1. Therefore, a special linear matrix can be written as the product of a special unitary matrix (or special orthogonal matrix in the real case) and a positive definite Hermitian matrix (or symmetric matrix in the real case) having determinant 1. It follows that the topology of the group SL ⁡ ( n , C ) {\displaystyle \operatorname {SL} (n,\mathbb {C} )} is the product of the topology of SU ⁡ ( n ) {\displaystyle \operatorname {SU} (n)} and the topology of the group of Hermitian matrices of unit determinant with positive eigenvalues. A Hermitian matrix of unit determinant and having positive eigenvalues can be uniquely expressed as the exponential of a traceless Hermitian matrix, and therefore the topology of this is that of ( n 2 − 1 ) {\displaystyle (n^{2}-1)} -dimensional Euclidean space. Since SU ⁡ ( n ) {\displaystyle \operatorname {SU} (n)} is simply connected, then SL ⁡ ( n , C ) {\displaystyle \operatorname {SL} (n,\mathbb {C} )} is also simply connected, for all n ≥ 2 {\displaystyle n\geq 2} . The topology of SL ⁡ ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} is the product of the topology of SO(n) and the topology of the group of symmetric matrices with positive eigenvalues and unit determinant. Since the latter matrices can be uniquely expressed as the exponential of symmetric traceless matrices, then this latter topology is that of (n + 2)(n − 1)/2-dimensional Euclidean space. 
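The real-case polar decomposition described above can be illustrated numerically: the two factors of a determinant-one matrix are a special orthogonal matrix and a positive-definite symmetric matrix of unit determinant. A sketch via the SVD (the sample matrix is a made-up example):

```python
import numpy as np

# A made-up matrix, rescaled to determinant 1 (an element of SL(3, R)).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
M = M / np.cbrt(np.linalg.det(M))
assert abs(np.linalg.det(M) - 1.0) < 1e-12

# Polar decomposition via the SVD: M = R @ P with R orthogonal and
# P symmetric positive definite.
U, s, Vt = np.linalg.svd(M)
R = U @ Vt
P = Vt.T @ np.diag(s) @ Vt
assert np.allclose(R @ P, M)

# Both factors have determinant 1: R lies in SO(3), and P is a
# positive-definite symmetric matrix of unit determinant.
assert abs(np.linalg.det(R) - 1.0) < 1e-9
assert abs(np.linalg.det(P) - 1.0) < 1e-9
assert np.allclose(P, P.T) and np.all(np.linalg.eigvalsh(P) > 0)
```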
Thus, the group SL ⁡ ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} has the same fundamental group as SO ⁡ ( n ) {\displaystyle \operatorname {SO} (n)} ; that is, Z {\displaystyle \mathbb {Z} } for n = 2 {\displaystyle n=2} and Z 2 {\displaystyle \mathbb {Z} _{2}} for n > 2 {\displaystyle n>2} . In particular this means that SL ⁡ ( n , R ) {\displaystyle \operatorname {SL} (n,\mathbb {R} )} , unlike SL ⁡ ( n , C ) {\displaystyle \operatorname {SL} (n,\mathbb {C} )} , is not simply connected, for n > 1 {\displaystyle n>1} . == Relations to other subgroups of GL(n, A) == Two related subgroups, which in some cases coincide with SL {\displaystyle \operatorname {SL} } , and in other cases are accidentally conflated with SL {\displaystyle \operatorname {SL} } , are the commutator subgroup of GL {\displaystyle \operatorname {GL} } , and the group generated by transvections. These are both subgroups of SL {\displaystyle \operatorname {SL} } (transvections have determinant 1, and det is a map to an abelian group, so [ GL , GL ] < SL {\displaystyle [\operatorname {GL} ,\operatorname {GL} ]<\operatorname {SL} } ), but in general do not coincide with it. The group generated by transvections is denoted E ⁡ ( n , A ) {\displaystyle \operatorname {E} (n,A)} (for elementary matrices) or TV ⁡ ( n , A ) {\displaystyle \operatorname {TV} (n,A)} . By the second Steinberg relation, for n ≥ 3 {\displaystyle n\geq 3} , transvections are commutators, so for n ≥ 3 {\displaystyle n\geq 3} , E ⁡ ( n , A ) < [ GL ⁡ ( n , A ) , GL ⁡ ( n , A ) ] {\displaystyle \operatorname {E} (n,A)<[\operatorname {GL} (n,A),\operatorname {GL} (n,A)]} . For n = 2 {\displaystyle n=2} , transvections need not be commutators (of 2 × 2 {\displaystyle 2\times 2} matrices), as seen for example when A {\displaystyle A} is F 2 {\displaystyle \mathbb {F} _{2}} , the field of two elements. 
In that case A 3 ≅ [ GL ⁡ ( 2 , F 2 ) , GL ⁡ ( 2 , F 2 ) ] < E ⁡ ( 2 , F 2 ) = SL ⁡ ( 2 , F 2 ) = GL ⁡ ( 2 , F 2 ) ≅ S 3 , {\displaystyle A_{3}\cong [\operatorname {GL} (2,\mathbb {F} _{2}),\operatorname {GL} (2,\mathbb {F} _{2})]<\operatorname {E} (2,\mathbb {F} _{2})=\operatorname {SL} (2,\mathbb {F} _{2})=\operatorname {GL} (2,\mathbb {F} _{2})\cong S_{3},} where A 3 {\displaystyle A_{3}} and S 3 {\displaystyle S_{3}} respectively denote the alternating and symmetric group on 3 letters. However, if A {\displaystyle A} is a field with more than 2 elements, then E(2, A) = [GL(2, A), GL(2, A)], and if A {\displaystyle A} is a field with more than 3 elements, E(2, A) = [SL(2, A), SL(2, A)]. In some circumstances these coincide: the special linear group over a field or a Euclidean domain is generated by transvections, and the stable special linear group over a Dedekind domain is generated by transvections. For more general rings the stable difference is measured by the special Whitehead group S K 1 ( A ) = SL ⁡ ( A ) / E ⁡ ( A ) {\displaystyle SK_{1}(A)=\operatorname {SL} (A)/\operatorname {E} (A)} , where SL ⁡ ( A ) {\displaystyle \operatorname {SL} (A)} and E ⁡ ( A ) {\displaystyle \operatorname {E} (A)} are the stable groups of the special linear group and elementary matrices. == Generators and relations == If working over a ring where SL {\displaystyle \operatorname {SL} } is generated by transvections (such as a field or Euclidean domain), one can give a presentation of SL {\displaystyle \operatorname {SL} } using transvections with some relations. Transvections satisfy the Steinberg relations, but these are not sufficient: the resulting group is the Steinberg group, which is not the special linear group, but rather the universal central extension of the commutator subgroup of GL {\displaystyle \operatorname {GL} } . 
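Returning to the exceptional case A = F2 discussed above: it is small enough to check exhaustively. Over F2 every invertible matrix has determinant 1, so GL(2, F2) = SL(2, F2), a non-abelian group of order 6 whose element orders are 1, 2 and 3, as in S3. A brute-force sketch:

```python
import itertools
import numpy as np

# Enumerate all 2x2 matrices over F2 and keep the invertible ones.
mats = []
for entries in itertools.product([0, 1], repeat=4):
    m = np.array(entries).reshape(2, 2)
    if (m[0, 0] * m[1, 1] - m[0, 1] * m[1, 0]) % 2 == 1:
        mats.append(m)

# |GL(2, F2)| = (2^2 - 1)(2^2 - 2) = 6, the order of S3.
assert len(mats) == 6

def mul(a, b):
    """Matrix multiplication over F2."""
    return (a @ b) % 2

# Element orders are exactly {1, 2, 3}, matching S3.
orders = set()
for m in mats:
    p, k = m.copy(), 1
    while not np.array_equal(p, np.eye(2, dtype=int)):
        p, k = mul(p, m), k + 1
    orders.add(k)
assert orders == {1, 2, 3}

# Like S3, the group is non-abelian.
assert any(not np.array_equal(mul(a, b), mul(b, a))
           for a, b in itertools.product(mats, repeat=2))
```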
A sufficient set of relations for SL(n, Z) for n ≥ 3 is given by two of the Steinberg relations, plus a third relation (Conder, Robertson & Williams 1992, p. 19). Let Tij := eij(1) be the elementary matrix with 1's on the diagonal and in the ij position, and 0's elsewhere (and i ≠ j). Then [ T i j , T j k ] = T i k for i ≠ k [ T i j , T k ℓ ] = 1 for i ≠ ℓ , j ≠ k ( T 12 T 21 − 1 T 12 ) 4 = 1 {\displaystyle {\begin{aligned}\left[T_{ij},T_{jk}\right]&=T_{ik}&&{\text{for }}i\neq k\\[4pt]\left[T_{ij},T_{k\ell }\right]&=\mathbf {1} &&{\text{for }}i\neq \ell ,j\neq k\\[4pt]\left(T_{12}T_{21}^{-1}T_{12}\right)^{4}&=\mathbf {1} \end{aligned}}} are a complete set of relations for SL(n, Z), n ≥ 3. == SL±(n,F) == In characteristic other than 2, the set of matrices with determinant ±1 form another subgroup of GL, with SL as an index 2 subgroup (necessarily normal); in characteristic 2 this is the same as SL. This forms a short exact sequence of groups: 1 → SL ⁡ ( n , F ) → SL ± ⁡ ( n , F ) → { ± 1 } → 1. {\displaystyle 1\to \operatorname {SL} (n,F)\to \operatorname {SL} ^{\pm }(n,F)\to \{\pm 1\}\to 1.} This sequence splits by taking any matrix with determinant −1, for example the diagonal matrix ( − 1 , 1 , … , 1 ) . {\displaystyle (-1,1,\dots ,1).} If n = 2 k + 1 {\displaystyle n=2k+1} is odd, the negative identity matrix − I {\displaystyle -I} is in SL±(n,F) but not in SL(n,F) and thus the group splits as an internal direct product SL ± ⁡ ( 2 k + 1 , F ) ≅ SL ⁡ ( 2 k + 1 , F ) × { ± I } {\displaystyle \operatorname {SL} ^{\pm }(2k+1,F)\cong \operatorname {SL} (2k+1,F)\times \{\pm I\}} . However, if n = 2 k {\displaystyle n=2k} is even, − I {\displaystyle -I} is already in SL(n,F) , SL± does not split, and in general is a non-trivial group extension. Over the real numbers, SL±(n, R) has two connected components, corresponding to SL(n, R) and another component, which are isomorphic with identification depending on a choice of point (matrix with determinant −1). 
In odd dimension these are naturally identified by − I {\displaystyle -I} , but in even dimension there is no one natural identification. == Structure of GL(n,F) == The group GL ⁡ ( n , F ) {\displaystyle \operatorname {GL} (n,F)} splits over its determinant (we use F × = GL ⁡ ( 1 , F ) → GL ⁡ ( n , F ) {\displaystyle F^{\times }=\operatorname {GL} (1,F)\to \operatorname {GL} (n,F)} as the monomorphism from F × {\displaystyle F^{\times }} to GL ⁡ ( n , F ) {\displaystyle \operatorname {GL} (n,F)} , see semidirect product), and therefore GL ⁡ ( n , F ) {\displaystyle \operatorname {GL} (n,F)} can be written as a semidirect product of SL ⁡ ( n , F ) {\displaystyle \operatorname {SL} (n,F)} by F × {\displaystyle F^{\times }} : GL ⁡ ( n , F ) = SL ⁡ ( n , F ) ⋊ F × {\displaystyle \operatorname {GL} (n,F)=\operatorname {SL} (n,F)\rtimes F^{\times }} . == See also == SL(2, R) SL(2, C) Modular group (PSL(2, Z)) Projective linear group Conformal map Representations of classical Lie groups == References == Conder, Marston; Robertson, Edmund; Williams, Peter (1992), "Presentations for 3-dimensional special linear groups over integer rings", Proceedings of the American Mathematical Society, 115 (1), American Mathematical Society: 19–26, doi:10.2307/2159559, JSTOR 2159559, MR 1079696 Hall, Brian C. (2015), Lie groups, Lie algebras, and representations: An elementary introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer
Wikipedia:Spectral clustering#0
In multivariate statistics, spectral clustering techniques make use of the spectrum (eigenvalues) of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset. In application to image segmentation, spectral clustering is known as segmentation-based object categorization. == Definitions == Given an enumerated set of data points, the similarity matrix may be defined as a symmetric matrix A {\displaystyle A} , where A i j ≥ 0 {\displaystyle A_{ij}\geq 0} represents a measure of the similarity between data points with indices i {\displaystyle i} and j {\displaystyle j} . The general approach to spectral clustering is to use a standard clustering method (there are many such methods; k-means is discussed below) on relevant eigenvectors of a Laplacian matrix of A {\displaystyle A} . There are many different ways to define a Laplacian, which have different mathematical interpretations, and so the clustering will also have different interpretations. The relevant eigenvectors are the ones that correspond to the several smallest eigenvalues of the Laplacian, except for the smallest eigenvalue, which has a value of 0. For computational efficiency, these eigenvectors are often computed as the eigenvectors corresponding to the largest several eigenvalues of a function of the Laplacian. === Laplacian matrix === Spectral clustering is well known to relate to partitioning of a mass-spring system, where each mass is associated with a data point and each spring stiffness corresponds to a weight of an edge describing a similarity of the two related data points, as in the spring system.
Specifically, the classical reference explains that the eigenvalue problem describing transversal vibration modes of a mass-spring system is exactly the same as the eigenvalue problem for the graph Laplacian matrix defined as L := D − A {\displaystyle L:=D-A} , where D {\displaystyle D} is the diagonal matrix D i i = ∑ j A i j , {\displaystyle D_{ii}=\sum _{j}A_{ij},} and A is the adjacency matrix. The masses that are tightly connected by the springs in the mass-spring system evidently move together from the equilibrium position in low-frequency vibration modes, so that the components of the eigenvectors corresponding to the smallest eigenvalues of the graph Laplacian can be used for meaningful clustering of the masses. For example, assuming that all the springs and the masses are identical in the 2-dimensional spring system pictured, one would intuitively expect that the most loosely connected masses on the right-hand side of the system would move with the largest amplitude and in the opposite direction to the rest of the masses when the system is shaken — and this expectation will be confirmed by analyzing components of the eigenvectors of the graph Laplacian corresponding to the smallest eigenvalues, i.e., the smallest vibration frequencies. === Laplacian matrix normalization === The goal of normalization is to make the diagonal entries of the Laplacian matrix all equal to one, scaling the off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree either because of a small number of connected edges with large weights or because of a large number of connected edges with unit weights. A popular normalized spectral clustering technique is the normalized cuts algorithm or Shi–Malik algorithm introduced by Jianbo Shi and Jitendra Malik, commonly used for image segmentation.
It partitions points into two sets ( B 1 , B 2 ) {\displaystyle (B_{1},B_{2})} based on the eigenvector v {\displaystyle v} corresponding to the second-smallest eigenvalue of the symmetric normalized Laplacian defined as L norm := I − D − 1 / 2 A D − 1 / 2 . {\displaystyle L^{\text{norm}}:=I-D^{-1/2}AD^{-1/2}.} The vector v {\displaystyle v} is also the eigenvector corresponding to the second-largest eigenvalue of the symmetrically normalized adjacency matrix D − 1 / 2 A D − 1 / 2 . {\displaystyle D^{-1/2}AD^{-1/2}.} The random walk (or left) normalized Laplacian is defined as L rw := D − 1 L = I − D − 1 A {\displaystyle L^{\text{rw}}:=D^{-1}L=I-D^{-1}A} and can also be used for spectral clustering. A mathematically equivalent algorithm takes the eigenvector u {\displaystyle u} corresponding to the largest eigenvalue of the random walk normalized adjacency matrix P = D − 1 A {\displaystyle P=D^{-1}A} . The eigenvector v {\displaystyle v} of the symmetrically normalized Laplacian and the eigenvector u {\displaystyle u} of the left normalized Laplacian are related by the identity D − 1 / 2 v = u . {\displaystyle D^{-1/2}v=u.} === Cluster analysis via Spectral Embedding === Knowing the n {\displaystyle n} -by- k {\displaystyle k} matrix V {\displaystyle V} of selected eigenvectors, mapping — called spectral embedding — of the original n {\displaystyle n} data points is performed to a k {\displaystyle k} -dimensional vector space using the rows of V {\displaystyle V} . Now the analysis is reduced to clustering vectors with k {\displaystyle k} components, which may be done in various ways. In the simplest case k = 1 {\displaystyle k=1} , the selected single eigenvector v {\displaystyle v} , called the Fiedler vector, corresponds to the second smallest eigenvalue. 
Using the components of v , {\displaystyle v,} one can place all points whose component in v {\displaystyle v} is positive in the set B + {\displaystyle B_{+}} and the rest in B − {\displaystyle B_{-}} , thus bi-partitioning the graph and labeling the data points with two labels. This sign-based approach follows the intuitive explanation of spectral clustering via the mass-spring model — in the low frequency vibration mode that the Fiedler vector v {\displaystyle v} represents, one cluster data points identified with mutually strongly connected masses would move together in one direction, while in the complement cluster data points identified with remaining masses would move together in the opposite direction. The algorithm can be used for hierarchical clustering by repeatedly partitioning the subsets in the same fashion. In the general case k > 1 {\displaystyle k>1} , any vector clustering technique can be used, e.g., DBSCAN. == Algorithms == Basic Algorithm Calculate the Laplacian L {\displaystyle L} (or the normalized Laplacian) Calculate the first k {\displaystyle k} eigenvectors (the eigenvectors corresponding to the k {\displaystyle k} smallest eigenvalues of L {\displaystyle L} ) Consider the matrix formed by the first k {\displaystyle k} eigenvectors; the l {\displaystyle l} -th row defines the features of graph node l {\displaystyle l} Cluster the graph nodes based on these features (e.g., using k-means clustering) If the similarity matrix A {\displaystyle A} has not already been explicitly constructed, the efficiency of spectral clustering may be improved if the solution to the corresponding eigenvalue problem is performed in a matrix-free fashion (without explicitly manipulating or even computing the similarity matrix), as in the Lanczos algorithm. For large-sized graphs, the second eigenvalue of the (normalized) graph Laplacian matrix is often ill-conditioned, leading to slow convergence of iterative eigenvalue solvers. 
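The sign-based bipartition just described, i.e., the k = 1 case of the basic algorithm, can be sketched end to end on a toy graph of two cliques joined by one weak edge (the graph and edge weights are illustrative assumptions):

```python
import numpy as np

# Two 4-node cliques joined by one weak edge (illustrative toy data).
n = 8
A = np.zeros((n, n))
for clique in (range(0, 4), range(4, 8)):
    for i in clique:
        for j in clique:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 0.1                  # weak bridge between the clusters

L = np.diag(A.sum(axis=1)) - A           # unnormalized graph Laplacian
vals, vecs = np.linalg.eigh(L)           # eigh: ascending eigenvalues
fiedler = vecs[:, 1]                     # eigenvector of 2nd-smallest eigenvalue

# Sign-based bipartition from the Fiedler vector.
labels = (fiedler > 0).astype(int)

# The two cliques are recovered exactly (up to a swap of the two labels).
assert len(set(labels[:4].tolist())) == 1
assert len(set(labels[4:].tolist())) == 1
assert labels[0] != labels[7]
```

For k > 1 the last step would instead run k-means on the rows of the first k eigenvectors.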
Preconditioning is a key technology accelerating the convergence, e.g., in the matrix-free LOBPCG method. Spectral clustering has been successfully applied on large graphs by first identifying their community structure, and then clustering communities. Spectral clustering is closely related to nonlinear dimensionality reduction, and dimension reduction techniques such as locally-linear embedding can be used to reduce errors from noise or outliers. == Costs == Denoting the number of the data points by n {\displaystyle n} , it is important to estimate the memory footprint and compute time, or number of arithmetic operations (AO) performed, as a function of n {\displaystyle n} . Whatever spectral clustering algorithm is used, the two main costly items are the construction of the graph Laplacian and determining its k {\displaystyle k} eigenvectors for the spectral embedding. The last step — determining the labels from the n {\displaystyle n} -by- k {\displaystyle k} matrix of eigenvectors — is typically the least expensive, requiring only k n {\displaystyle kn} AO and creating just an n {\displaystyle n} -by- 1 {\displaystyle 1} vector of the labels in memory. The need to construct the graph Laplacian is common for all distance- or correlation-based clustering methods. Computing the eigenvectors is specific to spectral clustering only. === Constructing graph Laplacian === The graph Laplacian can be and commonly is constructed from the adjacency matrix. The construction can be performed matrix-free, i.e., without explicitly forming the graph Laplacian matrix, at no AO cost. It can also be performed in-place of the adjacency matrix without increasing the memory footprint. Either way, the cost of constructing the graph Laplacian is essentially determined by the cost of constructing the n {\displaystyle n} -by- n {\displaystyle n} graph adjacency matrix.
Moreover, a normalized Laplacian has exactly the same eigenvectors as the normalized adjacency matrix, but with the order of the eigenvalues reversed. Thus, instead of computing the eigenvectors corresponding to the smallest eigenvalues of the normalized Laplacian, one can equivalently compute the eigenvectors corresponding to the largest eigenvalues of the normalized adjacency matrix, without even talking about the Laplacian matrix. Naive constructions of the graph adjacency matrix, e.g., using the RBF kernel, make it dense, thus requiring n 2 {\displaystyle n^{2}} memory and n 2 {\displaystyle n^{2}} AO to determine each of the n 2 {\displaystyle n^{2}} entries of the matrix. Nystrom method can be used to approximate the similarity matrix, but the approximate matrix is not elementwise positive, i.e. cannot be interpreted as a distance-based similarity. Algorithms to construct the graph adjacency matrix as a sparse matrix are typically based on a nearest neighbor search, which estimate or sample a neighborhood of a given data point for nearest neighbors, and compute non-zero entries of the adjacency matrix by comparing only pairs of the neighbors. The number of the selected nearest neighbors thus determines the number of non-zero entries, and is often fixed so that the memory footprint of the n {\displaystyle n} -by- n {\displaystyle n} graph adjacency matrix is only O ( n ) {\displaystyle O(n)} , only O ( n ) {\displaystyle O(n)} sequential arithmetic operations are needed to compute the O ( n ) {\displaystyle O(n)} non-zero entries, and the calculations can be trivially run in parallel. 
=== Computing eigenvectors === The cost of computing the n {\displaystyle n} -by- k {\displaystyle k} (with k ≪ n {\displaystyle k\ll n} ) matrix of selected eigenvectors of the graph Laplacian is normally proportional to the cost of multiplication of the n {\displaystyle n} -by- n {\displaystyle n} graph Laplacian matrix by a vector, which varies greatly depending on whether the graph Laplacian matrix is dense or sparse. For the dense case the cost thus is O ( n 2 ) {\displaystyle O(n^{2})} . The cost O ( n 3 ) {\displaystyle O(n^{3})} commonly cited in the literature comes from choosing k = n {\displaystyle k=n} and is misleading, since, e.g., in a hierarchical spectral clustering k = 1 {\displaystyle k=1} as determined by the Fiedler vector. In the sparse case of the n {\displaystyle n} -by- n {\displaystyle n} graph Laplacian matrix with O ( n ) {\displaystyle O(n)} non-zero entries, the cost of the matrix-vector product and thus of computing the n {\displaystyle n} -by- k {\displaystyle k} (with k ≪ n {\displaystyle k\ll n} ) matrix of selected eigenvectors is O ( n ) {\displaystyle O(n)} , with the memory footprint also only O ( n ) {\displaystyle O(n)} — both are optimal lower bounds on the complexity of clustering n {\displaystyle n} data points. Moreover, matrix-free eigenvalue solvers such as LOBPCG can efficiently run in parallel, e.g., on multiple GPUs with distributed memory, resulting not only in high quality clusters, which spectral clustering is famous for, but also in top performance. == Software == Free software implementing spectral clustering is available in large open source projects like scikit-learn using LOBPCG with multigrid preconditioning or ARPACK, MLlib for pseudo-eigenvector clustering using the power iteration method, and R. == Relationship with other clustering methods == The ideas behind spectral clustering may not be immediately obvious. It may be useful to highlight relationships with other methods.
In particular, it can be described in the context of kernel clustering methods, which reveals several similarities with other approaches. === Relationship with k-means === Spectral clustering is closely related to the k-means algorithm, especially in how cluster assignments are ultimately made. Although the two methods differ fundamentally in their initial formulations—spectral clustering being graph-based and k-means being centroid-based—the connection becomes clear when spectral clustering is viewed through the lens of kernel methods. In particular, weighted kernel k-means provides a key theoretical bridge between the two. Kernel k-means is a generalization of the standard k-means algorithm, where data is implicitly mapped into a high-dimensional feature space through a kernel function, and clustering is performed in that space. Spectral clustering, especially the normalized versions, performs a similar operation by mapping the input data (or graph nodes) to a lower-dimensional space defined by the eigenvectors of the graph Laplacian. These eigenvectors correspond to the solution of a relaxation of the normalized cut or other graph partitioning objectives. Mathematically, the objective function minimized by spectral clustering can be shown to be equivalent to the objective function of weighted kernel k-means in this transformed space. This was formally established in work demonstrating that normalized cuts are equivalent to a weighted version of kernel k-means applied to the rows of the normalized Laplacian’s eigenvector matrix. Because of this equivalence, spectral clustering can be viewed as performing kernel k-means in the eigenspace defined by the graph Laplacian. This theoretical insight has practical implications: the final clustering step in spectral clustering typically involves running the standard k-means algorithm on the rows of the matrix formed by the first k eigenvectors of the Laplacian.
These rows can be thought of as embedding each data point or node in a low-dimensional space where the clusters are more well-separated and hence, easier for k-means to detect. Additionally, multi-level methods have been developed to directly optimize this shared objective function. These methods work by iteratively coarsening the graph to reduce problem size, solving the problem on a coarse graph, and then refining the solution on successively finer graphs. This leads to more efficient optimization for large-scale problems, while still capturing the global structure preserved by the spectral embedding. === Relationship to DBSCAN === Spectral clustering is also conceptually related to DBSCAN (Density-Based Spatial Clustering of Applications with Noise), particularly in the special case where the spectral method is used to identify connected graph components of a graph. In this trivial case—where the goal is to identify subsets of nodes with no interconnecting edges between them—the spectral method effectively reduces to a connectivity-based clustering approach, much like DBSCAN. DBSCAN operates by identifying density-connected regions in the input space: points that are reachable from one another via a sequence of neighboring points within a specified radius (ε), and containing a minimum number of points (minPts). The algorithm excels at discovering clusters of arbitrary shape and separating out noise without needing to specify the number of clusters in advance. In spectral clustering, when the similarity graph is constructed using a hard connectivity criterion (i.e., binary adjacency based on whether two nodes are within a threshold distance), and no normalization is applied to the Laplacian, the resulting eigenstructure of the graph Laplacian directly reveals disconnected components of the graph. This mirrors DBSCAN's ability to isolate density-connected components. 
The eigenvectors of the unnormalized Laplacian with eigenvalue zero correspond to these components, with one eigenvector per connected region. This connection is most apparent when spectral clustering is used not to optimize a soft partition (like minimizing the normalized cut), but to identify exact connected components—which corresponds to the most extreme form of “density-based” clustering, where only directly or transitively connected nodes are grouped together. Therefore, spectral clustering in this regime behaves like a spectral version of DBSCAN, especially in sparse graphs or when constructing ε-neighborhood graphs. While DBSCAN operates directly in the data space using density estimates, spectral clustering transforms the data into an eigenspace where global structure and connectivity are emphasized. Both methods are non-parametric in spirit, and neither assumes convex cluster shapes, which further supports their conceptual alignment. == Measures to compare clusterings == Ravi Kannan, Santosh Vempala and Adrian Vetta proposed a bicriteria measure to define the quality of a given clustering. They said that a clustering was an (α, ε)-clustering if the conductance of each cluster (in the clustering) was at least α and the weight of the inter-cluster edges was at most an ε fraction of the total weight of all the edges in the graph. They also considered two approximation algorithms in the same paper. == History and related literatures == Spectral clustering has a long history. Spectral clustering as a machine learning method was popularized by Shi & Malik and Ng, Jordan, & Weiss. Ideas and network measures related to spectral clustering also play an important role in a number of applications apparently different from clustering problems. For instance, networks with stronger spectral partitions take longer to converge in opinion-updating models used in sociology and economics.
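The connected-components special case described in the DBSCAN comparison above is easy to verify numerically: the multiplicity of the Laplacian eigenvalue 0 equals the number of connected components, which is exactly what a pure connectivity-based method would report. A sketch on a made-up graph with three components:

```python
import numpy as np

# A made-up graph with three connected components: a triangle and two edges.
n = 7
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (5, 6)]:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A           # unnormalized graph Laplacian
vals = np.linalg.eigvalsh(L)

# Multiplicity of the eigenvalue 0 equals the number of connected components.
n_components = int((np.abs(vals) < 1e-10).sum())
assert n_components == 3
```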
== See also == Affinity propagation Kernel principal component analysis Cluster analysis Spectral graph theory == References ==
Wikipedia:Spectral graph theory#0
In mathematics, spectral graph theory is the study of the properties of a graph in relationship to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. The adjacency matrix of a simple undirected graph is a real symmetric matrix and is therefore orthogonally diagonalizable; its eigenvalues are real algebraic integers. While the adjacency matrix depends on the vertex labeling, its spectrum is a graph invariant, although not a complete one. Spectral graph theory is also concerned with graph parameters that are defined via multiplicities of eigenvalues of matrices associated to the graph, such as the Colin de Verdière number. == Cospectral graphs == Two graphs are called cospectral or isospectral if the adjacency matrices of the graphs are isospectral, that is, if the adjacency matrices have equal multisets of eigenvalues. Cospectral graphs need not be isomorphic, but isomorphic graphs are always cospectral. === Graphs determined by their spectrum === A graph G {\displaystyle G} is said to be determined by its spectrum if any other graph with the same spectrum as G {\displaystyle G} is isomorphic to G {\displaystyle G} . Some first examples of families of graphs that are determined by their spectrum include: The complete graphs. The finite starlike trees. === Cospectral mates === A pair of graphs are said to be cospectral mates if they have the same spectrum, but are non-isomorphic. The smallest pair of cospectral mates is {K1,4, C4 ∪ K1}, comprising the 5-vertex star and the graph union of the 4-vertex cycle and the single-vertex graph. The first example of cospectral graphs was reported by Collatz and Sinogowitz in 1957. The smallest pair of polyhedral cospectral mates are enneahedra with eight vertices each. 
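The smallest pair of cospectral mates mentioned above is easy to verify numerically. This sketch (using one arbitrary vertex labelling for each graph, NumPy assumed) confirms that K1,4 and C4 ∪ K1 share the adjacency spectrum {−2, 0, 0, 0, 2} while being non-isomorphic:

```python
import numpy as np

# Star K_{1,4}: centre 0 joined to leaves 1..4.
star = np.zeros((5, 5))
for leaf in range(1, 5):
    star[0, leaf] = star[leaf, 0] = 1.0

# C_4 (cycle on 0..3) together with the isolated vertex 4.
cyc = np.zeros((5, 5))
for i in range(4):
    cyc[i, (i + 1) % 4] = cyc[(i + 1) % 4, i] = 1.0

# Equal adjacency spectra, yet the graphs differ: the star has a
# degree-4 vertex, the cycle-plus-point does not.
spec_star = np.round(np.sort(np.linalg.eigvalsh(star)), 8).tolist()
spec_cyc = np.round(np.sort(np.linalg.eigvalsh(cyc)), 8).tolist()
```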
=== Finding cospectral graphs === Almost all trees are cospectral, i.e., as the number of vertices grows, the fraction of trees for which there exists a cospectral tree goes to 1. A pair of regular graphs are cospectral if and only if their complements are cospectral. A pair of distance-regular graphs are cospectral if and only if they have the same intersection array. Cospectral graphs can also be constructed by means of the Sunada method. Another important source of cospectral graphs is provided by the point-collinearity graphs and the line-intersection graphs of point-line geometries. These graphs are always cospectral but are often non-isomorphic. == Cheeger inequality == The famous Cheeger's inequality from Riemannian geometry has a discrete analogue involving the Laplacian matrix; this is perhaps the most important theorem in spectral graph theory and one of the most useful facts in algorithmic applications. It approximates the sparsest cut of a graph through the second eigenvalue of its Laplacian. === Cheeger constant === The Cheeger constant (also Cheeger number or isoperimetric number) of a graph is a numerical measure of whether or not a graph has a "bottleneck". The Cheeger constant as a measure of "bottleneckedness" is of great interest in many areas: for example, constructing well-connected networks of computers, card shuffling, and low-dimensional topology (in particular, the study of hyperbolic 3-manifolds). More formally, the Cheeger constant h(G) of a graph G on n vertices is defined as h ( G ) = min 0 < | S | ≤ n 2 | ∂ ( S ) | | S | , {\displaystyle h(G)=\min _{0<|S|\leq {\frac {n}{2}}}{\frac {|\partial (S)|}{|S|}},} where the minimum is over all nonempty sets S of at most n/2 vertices and ∂(S) is the edge boundary of S, i.e., the set of edges with exactly one endpoint in S. === Cheeger inequality === When the graph G is d-regular, there is a relationship between h(G) and the spectral gap d − λ2 of G.
An inequality due to Dodziuk and independently Alon and Milman states that 1 2 ( d − λ 2 ) ≤ h ( G ) ≤ 2 d ( d − λ 2 ) . {\displaystyle {\frac {1}{2}}(d-\lambda _{2})\leq h(G)\leq {\sqrt {2d(d-\lambda _{2})}}.} This inequality is closely related to the Cheeger bound for Markov chains and can be seen as a discrete version of Cheeger's inequality in Riemannian geometry. For general connected graphs that are not necessarily regular, an alternative inequality is given by Chung: 1 2 λ ≤ h ( G ) ≤ 2 λ , {\displaystyle {\frac {1}{2}}{\lambda }\leq {\mathbf {h} }(G)\leq {\sqrt {2\lambda }},} where λ {\displaystyle \lambda } is the least nontrivial eigenvalue of the normalized Laplacian, and h ( G ) {\displaystyle {\mathbf {h} }(G)} is the (normalized) Cheeger constant h ( G ) = min ∅ ≠ S ⊂ V ( G ) | ∂ ( S ) | min ( v o l ( S ) , v o l ( S ¯ ) ) {\displaystyle {\mathbf {h} }(G)=\min _{\emptyset \not =S\subset V(G)}{\frac {|\partial (S)|}{\min({\mathrm {vol} }(S),{\mathrm {vol} }({\bar {S}}))}}} where v o l ( Y ) {\displaystyle {\mathrm {vol} }(Y)} is the sum of degrees of vertices in Y {\displaystyle Y} . == Hoffman–Delsarte inequality == There is an eigenvalue bound for independent sets in regular graphs, originally due to Alan J. Hoffman and Philippe Delsarte. Suppose that G {\displaystyle G} is a k {\displaystyle k} -regular graph on n {\displaystyle n} vertices with least eigenvalue λ m i n {\displaystyle \lambda _{\mathrm {min} }} . Then: α ( G ) ≤ n 1 − k λ m i n {\displaystyle \alpha (G)\leq {\frac {n}{1-{\frac {k}{\lambda _{\mathrm {min} }}}}}} where α ( G ) {\displaystyle \alpha (G)} denotes its independence number. This bound has been applied to establish e.g. algebraic proofs of the Erdős–Ko–Rado theorem and its analogue for intersecting families of subspaces over finite fields.
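As a concrete check of the d-regular Cheeger inequality, the following sketch brute-forces the Cheeger constant of the 4-cycle (an arbitrary small 2-regular example, NumPy assumed) and verifies both bounds:

```python
import numpy as np
from itertools import combinations
from math import sqrt

# 4-cycle C_4, a 2-regular graph.
n, d = 4, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

def cheeger(A, n):
    """Brute-force h(G): minimize |boundary(S)| / |S| over 0 < |S| <= n/2."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            inside = set(S)
            boundary = sum(1 for i in S for j in range(n)
                           if A[i, j] and j not in inside)
            best = min(best, boundary / k)
    return best

h = cheeger(A, n)                        # equals 1 for C_4
lam2 = np.linalg.eigvalsh(A)[-2]         # second-largest adjacency eigenvalue
gap = d - lam2                           # spectral gap d - λ2

lower_ok = gap / 2 <= h + 1e-9           # (d - λ2)/2 <= h(G)
upper_ok = h <= sqrt(2 * d * gap) + 1e-9 # h(G) <= sqrt(2 d (d - λ2))
```

For C4 the adjacency spectrum is {2, 0, 0, −2}, so the gap is 2 and the left-hand bound is attained with equality.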
For general graphs which are not necessarily regular, a similar upper bound for the independence number can be derived by using the maximum eigenvalue λ m a x ′ {\displaystyle \lambda '_{max}} of the normalized Laplacian of G {\displaystyle G} : α ( G ) ≤ n ( 1 − 1 λ m a x ′ ) m a x d e g m i n d e g {\displaystyle \alpha (G)\leq n(1-{\frac {1}{\lambda '_{\mathrm {max} }}}){\frac {\mathrm {maxdeg} }{\mathrm {mindeg} }}} where m a x d e g {\displaystyle {\mathrm {maxdeg} }} and m i n d e g {\displaystyle {\mathrm {mindeg} }} denote the maximum and minimum degree in G {\displaystyle G} , respectively. This is a consequence of a more general inequality (pp. 109 in ): v o l ( X ) ≤ ( 1 − 1 λ m a x ′ ) v o l ( V ( G ) ) {\displaystyle {\mathrm {vol} }(X)\leq (1-{\frac {1}{\lambda '_{\mathrm {max} }}}){\mathrm {vol} }(V(G))} where X {\displaystyle X} is an independent set of vertices and v o l ( Y ) {\displaystyle {\mathrm {vol} }(Y)} denotes the sum of degrees of vertices in Y {\displaystyle Y} . == Historical outline == Spectral graph theory emerged in the 1950s and 1960s. Besides graph theoretic research on the relationship between structural and spectral properties of graphs, another major source was research in quantum chemistry, but the connections between these two lines of work were not discovered until much later. The 1980 monograph Spectra of Graphs by Cvetković, Doob, and Sachs summarised nearly all research to date in the area. In 1988 it was updated by the survey Recent Results in the Theory of Graph Spectra. The 3rd edition of Spectra of Graphs (1995) contains a summary of the further recent contributions to the subject. Discrete geometric analysis created and developed by Toshikazu Sunada in the 2000s deals with spectral graph theory in terms of discrete Laplacians associated with weighted graphs, and finds application in various fields, including shape analysis.
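Returning to the Hoffman–Delsarte bound: it is tight for the Petersen graph, where k = 3, n = 10 and λmin = −2, so the bound gives α(G) ≤ 10/(1 + 3/2) = 4, and indeed α = 4. A brute-force sketch, using one standard labelling of the graph (NumPy assumed):

```python
import numpy as np
from itertools import combinations

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
edges = ([(i, (i + 1) % 5) for i in range(5)] +
         [(5 + i, 5 + (i + 2) % 5) for i in range(5)] +
         [(i, i + 5) for i in range(5)])
n, k = 10, 3
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

lam_min = float(np.min(np.linalg.eigvalsh(A)))   # -2 for the Petersen graph

def independence_number(A, n):
    """Brute-force search for the largest independent set, top size first."""
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            if all(A[i, j] == 0 for i, j in combinations(S, 2)):
                return size
    return 0

alpha = independence_number(A, n)
hoffman_bound = n / (1 - k / lam_min)            # 10 / (1 + 3/2) = 4
```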
In recent years, spectral graph theory has expanded to vertex-varying graphs, which are often encountered in real-life applications. == See also == Strongly regular graph Algebraic connectivity Algebraic graph theory Spectral clustering Spectral shape analysis Estrada index Lovász theta Expander graph == References == Alon; Spencer (2011), The probabilistic method, Wiley. Brouwer, Andries; Haemers, Willem H. (2011), Spectra of Graphs (PDF), Springer Hoory; Linial; Wigderson (2006), Expander graphs and their applications (PDF) Chung, Fan (1997). American Mathematical Society (ed.). Spectral Graph Theory. Providence, R. I. ISBN 0821803158. MR 1421568 [first 4 chapters are available in the website] Schwenk, A. J. (1973). "Almost All Trees are Cospectral". In Harary, Frank (ed.). New Directions in the Theory of Graphs. New York: Academic Press. ISBN 012324255X. OCLC 890297242. Bogdan, Nica (2018). "A Brief Introduction to Spectral Graph Theory". Zurich: EMS Press. ISBN 978-3-03719-188-0. Pavel Kurasov (2024), Spectral Geometry of Graphs, Springer (Birkhauser), Open Access (CC 4.0). == External links == Spielman, Daniel (2011). "Spectral Graph Theory" (PDF). [chapter from Combinatorial Scientific Computing] Spielman, Daniel (2007). "Spectral Graph Theory and its Applications". [presented at FOCS 2007 Conference] Spielman, Daniel (2004). "Spectral Graph Theory and its Applications". [course page and lecture notes]
Wikipedia:Spectral theorem#0
In mathematics, spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces. It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations. The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter. == Mathematical background == The name spectral theory was introduced by David Hilbert in his original formulation of Hilbert space theory, which was cast in terms of quadratic forms in infinitely many variables. The original spectral theorem was therefore conceived as a version of the theorem on principal axes of an ellipsoid, in an infinite-dimensional setting. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous. Hilbert himself was surprised by the unexpected application of this theory, noting that "I developed my theory of infinitely many variables from purely mathematical interests, and even called it 'spectral analysis' without any presentiment that it would later find application to the actual spectrum of physics." There have been three main ways to formulate spectral theory, each of which finds use in different domains. After Hilbert's initial formulation, the later development of abstract Hilbert spaces and the spectral theory of single normal operators on them were well suited to the requirements of physics, exemplified by the work of von Neumann. The further theory built on this to address Banach algebras in general. This development leads to the Gelfand representation, which covers the commutative case, and further into non-commutative harmonic analysis. The difference can be seen in making the connection with Fourier analysis.
The Fourier transform on the real line is in one sense the spectral theory of differentiation as a differential operator. But for that to cover the phenomena one has already to deal with generalized eigenfunctions (for example, by means of a rigged Hilbert space). On the other hand, it is simple to construct a group algebra, the spectrum of which captures the Fourier transform's basic properties, and this is carried out by means of Pontryagin duality. One can also study the spectral properties of operators on Banach spaces. For example, compact operators on Banach spaces have many spectral properties similar to that of matrices. == Physical background == The background in the physics of vibrations has been explained in this way: Spectral theory is connected with the investigation of localized vibrations of a variety of different objects, from atoms and molecules in chemistry to obstacles in acoustic waveguides. These vibrations have frequencies, and the issue is to decide when such localized vibrations occur, and how to go about computing the frequencies. This is a very complicated problem since every object has not only a fundamental tone but also a complicated series of overtones, which vary radically from one body to another. Such physical ideas have nothing to do with the mathematical theory on a technical level, but there are examples of indirect involvement (see for example Mark Kac's question Can you hear the shape of a drum?). Hilbert's adoption of the term "spectrum" has been attributed to an 1897 paper of Wilhelm Wirtinger on Hill differential equation (by Jean Dieudonné), and it was taken up by his students during the first decade of the twentieth century, among them Erhard Schmidt and Hermann Weyl. The conceptual basis for Hilbert space was developed from Hilbert's ideas by Erhard Schmidt and Frigyes Riesz. 
It was almost twenty years later, when quantum mechanics was formulated in terms of the Schrödinger equation, that the connection was made to atomic spectra; a connection with the mathematical physics of vibration had been suspected before, as remarked by Henri Poincaré, but rejected for simple quantitative reasons, absent an explanation of the Balmer series. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous, rather than being an object of Hilbert's spectral theory. == A definition of spectrum == Consider a bounded linear transformation T defined everywhere over a general Banach space. We form the transformation: R ζ = ( ζ I − T ) − 1 . {\displaystyle R_{\zeta }=\left(\zeta I-T\right)^{-1}.} Here I is the identity operator and ζ is a complex number. The inverse of an operator T, that is T−1, is defined by: T T − 1 = T − 1 T = I . {\displaystyle TT^{-1}=T^{-1}T=I.} If the inverse exists, T is called regular. If it does not exist, T is called singular. With these definitions, the resolvent set of T is the set of all complex numbers ζ such that Rζ exists and is bounded. This set often is denoted as ρ(T). The spectrum of T is the set of all complex numbers ζ such that Rζ fails to exist or is unbounded. Often the spectrum of T is denoted by σ(T). The function Rζ for all ζ in ρ(T) (that is, wherever Rζ exists as a bounded operator) is called the resolvent of T. The spectrum of T is therefore the complement of the resolvent set of T in the complex plane. Every eigenvalue of T belongs to σ(T), but σ(T) may contain non-eigenvalues. This definition applies to a Banach space, but of course other types of space exist as well; for example, topological vector spaces include Banach spaces, but can be more general. On the other hand, Banach spaces include Hilbert spaces, and it is these spaces that find the greatest application and the richest theoretical results. 
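In finite dimensions this definition reduces to the familiar picture: the spectrum of a matrix T is exactly its set of eigenvalues, and the resolvent exists (and is automatically bounded) for every other ζ. A small sketch, using an arbitrary 2×2 example and NumPy:

```python
import numpy as np

# For a matrix T, ζ is in the spectrum exactly when ζI - T is singular;
# elsewhere the resolvent R_ζ = (ζI - T)^{-1} exists.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # eigenvalues 2 and 3
I2 = np.eye(2)

spectrum = sorted(np.round(np.linalg.eigvals(T).real, 9).tolist())

# ζ = 2 lies in σ(T): ζI - T is singular.
in_spectrum = abs(np.linalg.det(2.0 * I2 - T)) < 1e-12

# ζ = 5 lies in the resolvent set ρ(T): R_5 exists and inverts 5I - T.
R5 = np.linalg.inv(5.0 * I2 - T)
resolvent_ok = np.allclose((5.0 * I2 - T) @ R5, I2)
```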
With suitable restrictions, much can be said about the structure of the spectra of transformations in a Hilbert space. In particular, for self-adjoint operators, the spectrum lies on the real line and (in general) is a spectral combination of a point spectrum of discrete eigenvalues and a continuous spectrum. == Spectral theory briefly == In functional analysis and linear algebra the spectral theorem establishes conditions under which an operator can be expressed in simple form as a sum of simpler operators. As a full rigorous presentation is not appropriate for this article, we take an approach that avoids much of the rigor and satisfaction of a formal treatment with the aim of being more comprehensible to a non-specialist. This topic is easiest to describe by introducing the bra–ket notation of Dirac for operators. As an example, a very particular linear operator L might be written as a dyadic product: L = | k 1 ⟩ ⟨ b 1 | , {\displaystyle L=|k_{1}\rangle \langle b_{1}|,} in terms of the "bra" ⟨b1| and the "ket" |k1⟩. A function f is described by a ket as |f ⟩. The function f(x) defined on the coordinates ( x 1 , x 2 , x 3 , … ) {\displaystyle (x_{1},x_{2},x_{3},\dots )} is denoted as f ( x ) = ⟨ x | f ⟩ {\displaystyle f(x)=\langle x|f\rangle } and the magnitude of f by ‖ f ‖ 2 = ⟨ f | f ⟩ = ∫ ⟨ f | x ⟩ ⟨ x | f ⟩ d x = ∫ f ∗ ( x ) f ( x ) d x {\displaystyle \|f\|^{2}=\langle f|f\rangle =\int \langle f|x\rangle \langle x|f\rangle \,dx=\int f^{*}(x)f(x)\,dx} where the notation (*) denotes a complex conjugate. This inner product choice defines a very specific inner product space, restricting the generality of the arguments that follow. 
The effect of L upon a function f is then described as: L | f ⟩ = | k 1 ⟩ ⟨ b 1 | f ⟩ {\displaystyle L|f\rangle =|k_{1}\rangle \langle b_{1}|f\rangle } expressing the result that the effect of L on f is to produce a new function | k 1 ⟩ {\displaystyle |k_{1}\rangle } multiplied by the inner product represented by ⟨ b 1 | f ⟩ {\displaystyle \langle b_{1}|f\rangle } . A more general linear operator L might be expressed as: L = λ 1 | e 1 ⟩ ⟨ f 1 | + λ 2 | e 2 ⟩ ⟨ f 2 | + λ 3 | e 3 ⟩ ⟨ f 3 | + … , {\displaystyle L=\lambda _{1}|e_{1}\rangle \langle f_{1}|+\lambda _{2}|e_{2}\rangle \langle f_{2}|+\lambda _{3}|e_{3}\rangle \langle f_{3}|+\dots ,} where the { λ i } {\displaystyle \{\,\lambda _{i}\,\}} are scalars and the { | e i ⟩ } {\displaystyle \{\,|e_{i}\rangle \,\}} are a basis and the { ⟨ f i | } {\displaystyle \{\,\langle f_{i}|\,\}} a reciprocal basis for the space. The relation between the basis and the reciprocal basis is described, in part, by: ⟨ f i | e j ⟩ = δ i j {\displaystyle \langle f_{i}|e_{j}\rangle =\delta _{ij}} If such a formalism applies, the { λ i } {\displaystyle \{\,\lambda _{i}\,\}} are eigenvalues of L and the functions { | e i ⟩ } {\displaystyle \{\,|e_{i}\rangle \,\}} are eigenfunctions of L. The eigenvalues are in the spectrum of L. Some natural questions are: under what circumstances does this formalism work, and for what operators L are expansions in series of other operators like this possible? Can any function f be expressed in terms of the eigenfunctions (are they a Schauder basis) and under what circumstances does a point spectrum or a continuous spectrum arise? How do the formalisms for infinite-dimensional spaces and finite-dimensional spaces differ, or do they differ? Can these ideas be extended to a broader class of spaces? Answering such questions is the realm of spectral theory and requires considerable background in functional analysis and matrix algebra. 
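The dyad L = |k1⟩⟨b1| has a direct finite-dimensional analogue as an outer product. A sketch with arbitrary real vectors (so complex conjugation can be omitted), showing that L|f⟩ is |k1⟩ scaled by the inner product ⟨b1|f⟩:

```python
import numpy as np

# Dyad L = |k1><b1| as an outer product.
k1 = np.array([1.0, 0.0, 2.0])   # the "ket" |k1>, arbitrary choice
b1 = np.array([0.0, 1.0, 1.0])   # the "bra" <b1|, arbitrary choice
L = np.outer(k1, b1)

f = np.array([3.0, 4.0, 5.0])
inner = float(b1 @ f)            # <b1|f> = 9
Lf = L @ f                       # L|f> = |k1><b1|f> = 9 * k1
```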
== Resolution of the identity == This section continues in the rough and ready manner of the above section using the bra–ket notation, and glossing over the many important details of a rigorous treatment. A rigorous mathematical treatment may be found in various references. In particular, the dimension n of the space will be finite. Using the bra–ket notation of the above section, the identity operator may be written as: I = ∑ i = 1 n | e i ⟩ ⟨ f i | {\displaystyle I=\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|} where it is supposed as above that { | e i ⟩ } {\displaystyle \{|e_{i}\rangle \}} are a basis and the { ⟨ f i | } {\displaystyle \{\langle f_{i}|\}} a reciprocal basis for the space satisfying the relation: ⟨ f i | e j ⟩ = δ i j . {\displaystyle \langle f_{i}|e_{j}\rangle =\delta _{ij}.} This expression of the identity operation is called a representation or a resolution of the identity. This formal representation satisfies the basic property of the identity: I k = I {\displaystyle I^{k}=I} valid for every positive integer k. Applying the resolution of the identity to any function in the space | ψ ⟩ {\displaystyle |\psi \rangle } , one obtains: I | ψ ⟩ = | ψ ⟩ = ∑ i = 1 n | e i ⟩ ⟨ f i | ψ ⟩ = ∑ i = 1 n c i | e i ⟩ {\displaystyle I|\psi \rangle =|\psi \rangle =\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|\psi \rangle =\sum _{i=1}^{n}c_{i}|e_{i}\rangle } which is the generalized Fourier expansion of ψ in terms of the basis functions { ei }. Here c i = ⟨ f i | ψ ⟩ {\displaystyle c_{i}=\langle f_{i}|\psi \rangle } . 
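In finite dimensions the resolution of the identity can be exhibited concretely: for a diagonalizable matrix, the columns of the eigenvector matrix supply the basis kets |ei⟩ and the rows of its inverse supply the reciprocal bras ⟨fi|. A sketch on an arbitrary non-symmetric 2×2 example (NumPy assumed), which also checks the dyad expansion L = Σ λi |ei⟩⟨fi|:

```python
import numpy as np

# Arbitrary diagonalizable (non-symmetric) 2x2 example.
Lmat = np.array([[2.0, 1.0],
                 [0.0, 3.0]])
lam, E = np.linalg.eig(Lmat)     # columns of E: basis kets |e_i>
F = np.linalg.inv(E)             # rows of F: reciprocal bras <f_i|

# Biorthogonality <f_i|e_j> = δ_ij.
biorthogonal = np.allclose(F @ E, np.eye(2))

# Resolution of the identity I = Σ |e_i><f_i| and the dyad expansion
# L = Σ λ_i |e_i><f_i|.
I_rebuilt = sum(np.outer(E[:, i], F[i, :]) for i in range(2))
L_rebuilt = sum(lam[i] * np.outer(E[:, i], F[i, :]) for i in range(2))
```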
Given some operator equation of the form: O | ψ ⟩ = | h ⟩ {\displaystyle O|\psi \rangle =|h\rangle } with h in the space, this equation can be solved in the above basis through the formal manipulations: O | ψ ⟩ = ∑ i = 1 n c i ( O | e i ⟩ ) = ∑ i = 1 n | e i ⟩ ⟨ f i | h ⟩ , {\displaystyle O|\psi \rangle =\sum _{i=1}^{n}c_{i}\left(O|e_{i}\rangle \right)=\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|h\rangle ,} ⟨ f j | O | ψ ⟩ = ∑ i = 1 n c i ⟨ f j | O | e i ⟩ = ∑ i = 1 n ⟨ f j | e i ⟩ ⟨ f i | h ⟩ = ⟨ f j | h ⟩ , ∀ j {\displaystyle \langle f_{j}|O|\psi \rangle =\sum _{i=1}^{n}c_{i}\langle f_{j}|O|e_{i}\rangle =\sum _{i=1}^{n}\langle f_{j}|e_{i}\rangle \langle f_{i}|h\rangle =\langle f_{j}|h\rangle ,\quad \forall j} which converts the operator equation to a matrix equation determining the unknown coefficients cj in terms of the generalized Fourier coefficients ⟨ f j | h ⟩ {\displaystyle \langle f_{j}|h\rangle } of h and the matrix elements O j i = ⟨ f j | O | e i ⟩ {\displaystyle O_{ji}=\langle f_{j}|O|e_{i}\rangle } of the operator O. The role of spectral theory arises in establishing the nature and existence of the basis and the reciprocal basis. In particular, the basis might consist of the eigenfunctions of some linear operator L: L | e i ⟩ = λ i | e i ⟩ ; {\displaystyle L|e_{i}\rangle =\lambda _{i}|e_{i}\rangle \,;} with the { λi } the eigenvalues of L from the spectrum of L. Then the resolution of the identity above provides the dyad expansion of L: L I = L = ∑ i = 1 n L | e i ⟩ ⟨ f i | = ∑ i = 1 n λ i | e i ⟩ ⟨ f i | . {\displaystyle LI=L=\sum _{i=1}^{n}L|e_{i}\rangle \langle f_{i}|=\sum _{i=1}^{n}\lambda _{i}|e_{i}\rangle \langle f_{i}|.} == Resolvent operator == Using spectral theory, the resolvent operator R: R = ( λ I − L ) − 1 , {\displaystyle R=(\lambda I-L)^{-1},\,} can be evaluated in terms of the eigenfunctions and eigenvalues of L, and the Green's function corresponding to L can be found. 
Applying R to some arbitrary function in the space, say φ {\displaystyle \varphi } , R | φ ⟩ = ( λ I − L ) − 1 | φ ⟩ = ∑ i = 1 n 1 λ − λ i | e i ⟩ ⟨ f i | φ ⟩ . {\displaystyle R|\varphi \rangle =(\lambda I-L)^{-1}|\varphi \rangle =\sum _{i=1}^{n}{\frac {1}{\lambda -\lambda _{i}}}|e_{i}\rangle \langle f_{i}|\varphi \rangle .} This function has poles in the complex λ-plane at each eigenvalue of L. Thus, using the calculus of residues: 1 2 π i ∮ C R | φ ⟩ d λ = − ∑ i = 1 n | e i ⟩ ⟨ f i | φ ⟩ = − | φ ⟩ , {\displaystyle {\frac {1}{2\pi i}}\oint _{C}R|\varphi \rangle d\lambda =-\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|\varphi \rangle =-|\varphi \rangle ,} where the line integral is over a contour C that includes all the eigenvalues of L. Suppose our functions are defined over some coordinates {xj}, that is: ⟨ x , φ ⟩ = φ ( x 1 , x 2 , . . . ) . {\displaystyle \langle x,\varphi \rangle =\varphi (x_{1},x_{2},...).} Introducing the notation ⟨ x , y ⟩ = δ ( x − y ) , {\displaystyle \langle x,y\rangle =\delta (x-y),} where δ(x − y) = δ(x1 − y1, x2 − y2, x3 − y3, ...) is the Dirac delta function, we can write ⟨ x , φ ⟩ = ∫ ⟨ x , y ⟩ ⟨ y , φ ⟩ d y . 
{\displaystyle \langle x,\varphi \rangle =\int \langle x,y\rangle \langle y,\varphi \rangle dy.} Then: ⟨ x , 1 2 π i ∮ C φ λ I − L d λ ⟩ = 1 2 π i ∮ C d λ ⟨ x , φ λ I − L ⟩ = 1 2 π i ∮ C d λ ∫ d y ⟨ x , y λ I − L ⟩ ⟨ y , φ ⟩ {\displaystyle {\begin{aligned}\left\langle x,{\frac {1}{2\pi i}}\oint _{C}{\frac {\varphi }{\lambda I-L}}d\lambda \right\rangle &={\frac {1}{2\pi i}}\oint _{C}d\lambda \left\langle x,{\frac {\varphi }{\lambda I-L}}\right\rangle \\&={\frac {1}{2\pi i}}\oint _{C}d\lambda \int dy\left\langle x,{\frac {y}{\lambda I-L}}\right\rangle \langle y,\varphi \rangle \end{aligned}}} The function G(x, y; λ) defined by: G ( x , y ; λ ) = ⟨ x , y λ I − L ⟩ = ∑ i = 1 n ∑ j = 1 n ⟨ x , e i ⟩ ⟨ f i , e j λ I − L ⟩ ⟨ f j , y ⟩ = ∑ i = 1 n ⟨ x , e i ⟩ ⟨ f i , y ⟩ λ − λ i = ∑ i = 1 n e i ( x ) f i ∗ ( y ) λ − λ i , {\displaystyle {\begin{aligned}G(x,y;\lambda )&=\left\langle x,{\frac {y}{\lambda I-L}}\right\rangle \\&=\sum _{i=1}^{n}\sum _{j=1}^{n}\langle x,e_{i}\rangle \left\langle f_{i},{\frac {e_{j}}{\lambda I-L}}\right\rangle \langle f_{j},y\rangle \\&=\sum _{i=1}^{n}{\frac {\langle x,e_{i}\rangle \langle f_{i},y\rangle }{\lambda -\lambda _{i}}}\\&=\sum _{i=1}^{n}{\frac {e_{i}(x)f_{i}^{*}(y)}{\lambda -\lambda _{i}}},\end{aligned}}} is called the Green's function for operator L, and satisfies: 1 2 π i ∮ C G ( x , y ; λ ) d λ = − ∑ i = 1 n ⟨ x , e i ⟩ ⟨ f i , y ⟩ = − ⟨ x , y ⟩ = − δ ( x − y ) . {\displaystyle {\frac {1}{2\pi i}}\oint _{C}G(x,y;\lambda )\,d\lambda =-\sum _{i=1}^{n}\langle x,e_{i}\rangle \langle f_{i},y\rangle =-\langle x,y\rangle =-\delta (x-y).} == Operator equations == Consider the operator equation: ( O − λ I ) | ψ ⟩ = | h ⟩ ; {\displaystyle (O-\lambda I)|\psi \rangle =|h\rangle ;} in terms of coordinates: ∫ ⟨ x , ( O − λ I ) y ⟩ ⟨ y , ψ ⟩ d y = h ( x ) . {\displaystyle \int \langle x,(O-\lambda I)y\rangle \langle y,\psi \rangle \,dy=h(x).} A particular case is λ = 0. 
The Green's function of the previous section is: ⟨ y , G ( λ ) z ⟩ = ⟨ y , ( O − λ I ) − 1 z ⟩ = G ( y , z ; λ ) , {\displaystyle \langle y,G(\lambda )z\rangle =\left\langle y,(O-\lambda I)^{-1}z\right\rangle =G(y,z;\lambda ),} and satisfies: ∫ ⟨ x , ( O − λ I ) y ⟩ ⟨ y , G ( λ ) z ⟩ d y = ∫ ⟨ x , ( O − λ I ) y ⟩ ⟨ y , ( O − λ I ) − 1 z ⟩ d y = ⟨ x , z ⟩ = δ ( x − z ) . {\displaystyle \int \langle x,(O-\lambda I)y\rangle \langle y,G(\lambda )z\rangle \,dy=\int \langle x,(O-\lambda I)y\rangle \left\langle y,(O-\lambda I)^{-1}z\right\rangle \,dy=\langle x,z\rangle =\delta (x-z).} Using this Green's function property: ∫ ⟨ x , ( O − λ I ) y ⟩ G ( y , z ; λ ) d y = δ ( x − z ) . {\displaystyle \int \langle x,(O-\lambda I)y\rangle G(y,z;\lambda )\,dy=\delta (x-z).} Then, multiplying both sides of this equation by h(z) and integrating: ∫ d z h ( z ) ∫ d y ⟨ x , ( O − λ I ) y ⟩ G ( y , z ; λ ) = ∫ d y ⟨ x , ( O − λ I ) y ⟩ ∫ d z h ( z ) G ( y , z ; λ ) = h ( x ) , {\displaystyle \int dz\,h(z)\int dy\,\langle x,(O-\lambda I)y\rangle G(y,z;\lambda )=\int dy\,\langle x,(O-\lambda I)y\rangle \int dz\,h(z)G(y,z;\lambda )=h(x),} which suggests the solution is: ψ ( x ) = ∫ h ( z ) G ( x , z ; λ ) d z . {\displaystyle \psi (x)=\int h(z)G(x,z;\lambda )\,dz.} That is, the function ψ(x) satisfying the operator equation is found if we can find the spectrum of O, and construct G, for example by using: G ( x , z ; λ ) = ∑ i = 1 n e i ( x ) f i ∗ ( z ) λ − λ i . {\displaystyle G(x,z;\lambda )=\sum _{i=1}^{n}{\frac {e_{i}(x)f_{i}^{*}(z)}{\lambda -\lambda _{i}}}.} There are many other ways to find G, of course. See the articles on Green's functions and on Fredholm integral equations. It must be kept in mind that the above mathematics is purely formal, and a rigorous treatment involves some pretty sophisticated mathematics, including a good background knowledge of functional analysis, Hilbert spaces, distributions and so forth. Consult these articles and the references for more detail. 
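A finite-dimensional sketch of this recipe, with an arbitrary symmetric matrix and an arbitrary λ outside its spectrum. Note the sign convention: the sketch takes G = (O − λI)^{-1}, so its eigen-expansion carries 1/(λi − λ) rather than 1/(λ − λi); since the matrix is symmetric, the reciprocal basis coincides with the eigenvector basis.

```python
import numpy as np

# Solve (O - λI)ψ = h via G = (O - λI)^{-1} built from eigen-data.
Omat = np.array([[2.0, 1.0],
                 [1.0, 2.0]])      # symmetric, eigenvalues 1 and 3
lam = 0.5                          # λ avoids the spectrum, so G exists

eigvals, V = np.linalg.eigh(Omat)  # orthonormal eigenvectors: f_i = e_i here
G = sum(np.outer(V[:, i], V[:, i]) / (eigvals[i] - lam) for i in range(2))

h = np.array([1.0, -1.0])
psi = G @ h                        # candidate solution ψ = G h
solves = np.allclose((Omat - lam * np.eye(2)) @ psi, h)
```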
== Spectral theorem and Rayleigh quotient == Optimization problems may be the most useful examples about the combinatorial significance of the eigenvalues and eigenvectors in symmetric matrices, especially for the Rayleigh quotient with respect to a matrix M. Theorem Let M be a symmetric matrix and let x be the non-zero vector that maximizes the Rayleigh quotient with respect to M. Then, x is an eigenvector of M with eigenvalue equal to the Rayleigh quotient. Moreover, this eigenvalue is the largest eigenvalue of M. Proof Assume the spectral theorem. Let the eigenvalues of M be λ 1 ≤ λ 2 ≤ ⋯ ≤ λ n {\displaystyle \lambda _{1}\leq \lambda _{2}\leq \cdots \leq \lambda _{n}} . Since the { v i } {\displaystyle \{v_{i}\}} form an orthonormal basis, any vector x can be expressed in this basis as x = ∑ i v i T x v i {\displaystyle x=\sum _{i}v_{i}^{T}xv_{i}} The way to prove this formula is pretty easy. Namely, v j T ∑ i v i T x v i = ∑ i v i T x v j T v i = ( v j T x ) v j T v j = v j T x {\displaystyle {\begin{aligned}v_{j}^{T}\sum _{i}v_{i}^{T}xv_{i}={}&\sum _{i}v_{i}^{T}xv_{j}^{T}v_{i}\\[4pt]={}&(v_{j}^{T}x)v_{j}^{T}v_{j}\\[4pt]={}&v_{j}^{T}x\end{aligned}}} evaluate the Rayleigh quotient with respect to x: x T M x = ( ∑ i ( v i T x ) v i ) T M ( ∑ j ( v j T x ) v j ) = ( ∑ i ( v i T x ) v i T ) ( ∑ j ( v j T x ) v j λ j ) = ∑ i , j ( v i T x ) v i T ( v j T x ) v j λ j = ∑ j ( v j T x ) ( v j T x ) λ j = ∑ j ( v j T x ) 2 λ j ≤ λ n ∑ j ( v j T x ) 2 = λ n x T x , {\displaystyle {\begin{aligned}x^{T}Mx={}&\left(\sum _{i}(v_{i}^{T}x)v_{i}\right)^{T}M\left(\sum _{j}(v_{j}^{T}x)v_{j}\right)\\[4pt]={}&\left(\sum _{i}(v_{i}^{T}x)v_{i}^{T}\right)\left(\sum _{j}(v_{j}^{T}x)v_{j}\lambda _{j}\right)\\[4pt]={}&\sum _{i,j}(v_{i}^{T}x)v_{i}^{T}(v_{j}^{T}x)v_{j}\lambda _{j}\\[4pt]={}&\sum _{j}(v_{j}^{T}x)(v_{j}^{T}x)\lambda _{j}\\[4pt]={}&\sum _{j}(v_{j}^{T}x)^{2}\lambda _{j}\leq \lambda _{n}\sum _{j}(v_{j}^{T}x)^{2}\\[4pt]={}&\lambda _{n}x^{T}x,\end{aligned}}} where we used 
Parseval's identity in the last line. Finally we obtain that x T M x x T x ≤ λ n {\displaystyle {\frac {x^{T}Mx}{x^{T}x}}\leq \lambda _{n}} so the Rayleigh quotient is never greater than λ n {\displaystyle \lambda _{n}} . == See also == Functions of operators, Operator theory Lax pairs Least-squares spectral analysis Riesz projector Self-adjoint operator Spectrum (functional analysis), Resolvent formalism, Decomposition of spectrum (functional analysis) Spectral radius, Spectrum of an operator, Spectral theorem Spectral theory of compact operators Spectral theory of normal C*-algebras Sturm–Liouville theory, Integral equations, Fredholm theory Compact operators, Isospectral operators, Completeness Spectral geometry Spectral graph theory List of functional analysis topics == Notes == == References == Edward Brian Davies (1996). Spectral Theory and Differential Operators; Volume 42 in the Cambridge Studies in Advanced Mathematics. Cambridge University Press. ISBN 0-521-58710-7. Dunford, Nelson; Schwartz, Jacob T (1988). Linear Operators, Spectral Theory, Self Adjoint Operators in Hilbert Space (Part 2) (Paperback reprint of 1967 ed.). Wiley. ISBN 0-471-60847-5. Dunford, Nelson; Schwartz, Jacob T (1988). Linear Operators, Spectral Operators (Part 3) (Paperback reprint of 1971 ed.). Wiley. ISBN 0-471-60846-7. Sadri Hassani (1999). "Chapter 4: Spectral decomposition". Mathematical Physics: a Modern Introduction to its Foundations. Springer. ISBN 0-387-98579-4. "Spectral theory of linear operators", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Shmuel Kantorovitz (1983). Spectral Theory of Banach Space Operators. Springer. Arch W. Naylor, George R. Sell (2000). "Chapter 5, Part B: The Spectrum". Linear Operator Theory in Engineering and Science; Volume 40 of Applied mathematical sciences. Springer. p. 411. ISBN 0-387-95001-X. Gerald Teschl (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. American Mathematical Society.
ISBN 978-0-8218-4660-5. Valter Moretti (2017). Spectral Theory and Quantum Mechanics; Mathematical Foundations of Quantum Theories, Symmetries and Introduction to the Algebraic Formulation 2nd Edition. Springer. ISBN 978-3-319-70705-1. == External links == Evans M. Harrell II: A Short History of Operator Theory Gregory H. Moore (1995). "The axiomatization of linear algebra: 1875-1940". Historia Mathematica. 22 (3): 262–303. doi:10.1006/hmat.1995.1025. Steen, L. A. (April 1973). "Highlights in the History of Spectral Theory". The American Mathematical Monthly. 80 (4): 359–381. doi:10.2307/2319079. JSTOR 2319079.
Wikipedia:Spectral theory#0
In mathematics, spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces. It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations. The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter. == Mathematical background == The name spectral theory was introduced by David Hilbert in his original formulation of Hilbert space theory, which was cast in terms of quadratic forms in infinitely many variables. The original spectral theorem was therefore conceived as a version of the theorem on principal axes of an ellipsoid, in an infinite-dimensional setting. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous. Hilbert himself was surprised by the unexpected application of this theory, noting that "I developed my theory of infinitely many variables from purely mathematical interests, and even called it 'spectral analysis' without any presentiment that it would later find application to the actual spectrum of physics." There have been three main ways to formulate spectral theory, each of which find use in different domains. After Hilbert's initial formulation, the later development of abstract Hilbert spaces and the spectral theory of single normal operators on them were well suited to the requirements of physics, exemplified by the work of von Neumann. The further theory built on this to address Banach algebras in general. This development leads to the Gelfand representation, which covers the commutative case, and further into non-commutative harmonic analysis. The difference can be seen in making the connection with Fourier analysis. 
The Fourier transform on the real line is in one sense the spectral theory of differentiation as a differential operator. But to cover the phenomena, one already has to deal with generalized eigenfunctions (for example, by means of a rigged Hilbert space). On the other hand, it is simple to construct a group algebra, the spectrum of which captures the Fourier transform's basic properties, and this is carried out by means of Pontryagin duality. One can also study the spectral properties of operators on Banach spaces. For example, compact operators on Banach spaces have many spectral properties similar to those of matrices. == Physical background == The background in the physics of vibrations has been explained in this way: Spectral theory is connected with the investigation of localized vibrations of a variety of different objects, from atoms and molecules in chemistry to obstacles in acoustic waveguides. These vibrations have frequencies, and the issue is to decide when such localized vibrations occur, and how to go about computing the frequencies. This is a very complicated problem since every object has not only a fundamental tone but also a complicated series of overtones, which vary radically from one body to another. Such physical ideas have nothing to do with the mathematical theory on a technical level, but there are examples of indirect involvement (see for example Mark Kac's question Can you hear the shape of a drum?). Hilbert's adoption of the term "spectrum" has been attributed (by Jean Dieudonné) to an 1897 paper of Wilhelm Wirtinger on the Hill differential equation, and it was taken up by his students during the first decade of the twentieth century, among them Erhard Schmidt and Hermann Weyl. The conceptual basis for Hilbert space was developed from Hilbert's ideas by Erhard Schmidt and Frigyes Riesz. 
It was almost twenty years later, when quantum mechanics was formulated in terms of the Schrödinger equation, that the connection was made to atomic spectra; a connection with the mathematical physics of vibration had been suspected before, as remarked by Henri Poincaré, but rejected for simple quantitative reasons, absent an explanation of the Balmer series. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous, rather than being an object of Hilbert's spectral theory. == A definition of spectrum == Consider a bounded linear transformation T defined everywhere over a general Banach space. We form the transformation: R ζ = ( ζ I − T ) − 1 . {\displaystyle R_{\zeta }=\left(\zeta I-T\right)^{-1}.} Here I is the identity operator and ζ is a complex number. The inverse of an operator T, that is T−1, is defined by: T T − 1 = T − 1 T = I . {\displaystyle TT^{-1}=T^{-1}T=I.} If the inverse exists, T is called regular. If it does not exist, T is called singular. With these definitions, the resolvent set of T is the set of all complex numbers ζ such that Rζ exists and is bounded. This set often is denoted as ρ(T). The spectrum of T is the set of all complex numbers ζ such that Rζ fails to exist or is unbounded. Often the spectrum of T is denoted by σ(T). The function Rζ for all ζ in ρ(T) (that is, wherever Rζ exists as a bounded operator) is called the resolvent of T. The spectrum of T is therefore the complement of the resolvent set of T in the complex plane. Every eigenvalue of T belongs to σ(T), but σ(T) may contain non-eigenvalues. This definition applies to a Banach space, but of course other types of space exist as well; for example, topological vector spaces include Banach spaces, but can be more general. On the other hand, Banach spaces include Hilbert spaces, and it is these spaces that find the greatest application and the richest theoretical results. 
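As a concrete finite-dimensional illustration (a sketch, not part of the article's formal development): for a matrix acting on a finite-dimensional space, the spectrum is exactly the set of eigenvalues, and the resolvent Rζ = (ζI − T)⁻¹ exists and is bounded precisely for ζ in the resolvent set ρ(T). Numerically, the norm of Rζ blows up as ζ approaches a point of σ(T). The matrix chosen here is purely illustrative.

```python
import numpy as np

# An upper-triangular matrix, so its spectrum can be read off the
# diagonal: sigma(T) = {1, 5}.
T = np.array([[1.0, 2.0],
              [0.0, 5.0]])
I = np.eye(2)

def resolvent(zeta):
    """R_zeta = (zeta*I - T)^(-1); exists for zeta outside sigma(T)."""
    return np.linalg.inv(zeta * I - T)

spectrum = np.linalg.eigvals(T)

# zeta = 0 is in the resolvent set rho(T): the resolvent has modest norm.
norm_far = np.linalg.norm(resolvent(0.0))

# As zeta approaches the eigenvalue 1, ||R_zeta|| diverges, signalling
# that 1 belongs to the spectrum sigma(T).
norm_near = np.linalg.norm(resolvent(1.0 + 1e-8))

assert sorted(np.round(spectrum.real).tolist()) == [1.0, 5.0]
assert norm_near > 1e6 > norm_far
```

In infinite dimensions the spectrum can be strictly larger than the set of eigenvalues, which is why the article's definition is phrased in terms of the resolvent rather than eigenvectors.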
With suitable restrictions, much can be said about the structure of the spectra of transformations in a Hilbert space. In particular, for self-adjoint operators, the spectrum lies on the real line and (in general) is a spectral combination of a point spectrum of discrete eigenvalues and a continuous spectrum. == Spectral theory briefly == In functional analysis and linear algebra the spectral theorem establishes conditions under which an operator can be expressed in simple form as a sum of simpler operators. As a full rigorous presentation is not appropriate for this article, we take an approach that avoids much of the rigor and satisfaction of a formal treatment with the aim of being more comprehensible to a non-specialist. This topic is easiest to describe by introducing the bra–ket notation of Dirac for operators. As an example, a very particular linear operator L might be written as a dyadic product: L = | k 1 ⟩ ⟨ b 1 | , {\displaystyle L=|k_{1}\rangle \langle b_{1}|,} in terms of the "bra" ⟨b1| and the "ket" |k1⟩. A function f is described by a ket as |f ⟩. The function f(x) defined on the coordinates ( x 1 , x 2 , x 3 , … ) {\displaystyle (x_{1},x_{2},x_{3},\dots )} is denoted as f ( x ) = ⟨ x | f ⟩ {\displaystyle f(x)=\langle x|f\rangle } and the magnitude of f by ‖ f ‖ 2 = ⟨ f | f ⟩ = ∫ ⟨ f | x ⟩ ⟨ x | f ⟩ d x = ∫ f ∗ ( x ) f ( x ) d x {\displaystyle \|f\|^{2}=\langle f|f\rangle =\int \langle f|x\rangle \langle x|f\rangle \,dx=\int f^{*}(x)f(x)\,dx} where the notation (*) denotes a complex conjugate. This inner product choice defines a very specific inner product space, restricting the generality of the arguments that follow. 
The effect of L upon a function f is then described as: L | f ⟩ = | k 1 ⟩ ⟨ b 1 | f ⟩ {\displaystyle L|f\rangle =|k_{1}\rangle \langle b_{1}|f\rangle } expressing the result that the effect of L on f is to produce a new function | k 1 ⟩ {\displaystyle |k_{1}\rangle } multiplied by the inner product represented by ⟨ b 1 | f ⟩ {\displaystyle \langle b_{1}|f\rangle } . A more general linear operator L might be expressed as: L = λ 1 | e 1 ⟩ ⟨ f 1 | + λ 2 | e 2 ⟩ ⟨ f 2 | + λ 3 | e 3 ⟩ ⟨ f 3 | + … , {\displaystyle L=\lambda _{1}|e_{1}\rangle \langle f_{1}|+\lambda _{2}|e_{2}\rangle \langle f_{2}|+\lambda _{3}|e_{3}\rangle \langle f_{3}|+\dots ,} where the { λ i } {\displaystyle \{\,\lambda _{i}\,\}} are scalars and the { | e i ⟩ } {\displaystyle \{\,|e_{i}\rangle \,\}} are a basis and the { ⟨ f i | } {\displaystyle \{\,\langle f_{i}|\,\}} a reciprocal basis for the space. The relation between the basis and the reciprocal basis is described, in part, by: ⟨ f i | e j ⟩ = δ i j {\displaystyle \langle f_{i}|e_{j}\rangle =\delta _{ij}} If such a formalism applies, the { λ i } {\displaystyle \{\,\lambda _{i}\,\}} are eigenvalues of L and the functions { | e i ⟩ } {\displaystyle \{\,|e_{i}\rangle \,\}} are eigenfunctions of L. The eigenvalues are in the spectrum of L. Some natural questions are: under what circumstances does this formalism work, and for what operators L are expansions in series of other operators like this possible? Can any function f be expressed in terms of the eigenfunctions (are they a Schauder basis) and under what circumstances does a point spectrum or a continuous spectrum arise? How do the formalisms for infinite-dimensional spaces and finite-dimensional spaces differ, or do they differ? Can these ideas be extended to a broader class of spaces? Answering such questions is the realm of spectral theory and requires considerable background in functional analysis and matrix algebra. 
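The expansion L = Σᵢ λᵢ |eᵢ⟩⟨fᵢ| with ⟨fᵢ|eⱼ⟩ = δᵢⱼ can be checked numerically in the finite-dimensional case, where it is just the eigendecomposition of a diagonalizable (not necessarily symmetric) matrix: the columns of the eigenvector matrix give the basis {|eᵢ⟩} and the rows of its inverse give the reciprocal basis {⟨fᵢ|}. A hypothetical NumPy sketch (the matrix is arbitrary):

```python
import numpy as np

# A diagonalizable, non-symmetric matrix with distinct real eigenvalues.
L = np.array([[2.0, 1.0],
              [0.0, 3.0]])

lam, V = np.linalg.eig(L)   # columns of V are right eigenvectors |e_i>
F = np.linalg.inv(V)        # row i of F is the reciprocal vector <f_i|

# Biorthogonality <f_i|e_j> = delta_ij:
assert np.allclose(F @ V, np.eye(2))

# Dyad expansion L = sum_i lam_i |e_i><f_i|:
L_expanded = sum(lam[i] * np.outer(V[:, i], F[i, :]) for i in range(2))
assert np.allclose(L_expanded, L)
```

For a symmetric (or normal) matrix the reciprocal basis coincides with the conjugate transpose of the eigenvector basis, recovering the familiar orthonormal eigendecomposition.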
== Resolution of the identity == This section continues in the rough and ready manner of the above section using the bra–ket notation, and glossing over the many important details of a rigorous treatment. A rigorous mathematical treatment may be found in various references. In particular, the dimension n of the space will be finite. Using the bra–ket notation of the above section, the identity operator may be written as: I = ∑ i = 1 n | e i ⟩ ⟨ f i | {\displaystyle I=\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|} where it is supposed as above that { | e i ⟩ } {\displaystyle \{|e_{i}\rangle \}} are a basis and the { ⟨ f i | } {\displaystyle \{\langle f_{i}|\}} a reciprocal basis for the space satisfying the relation: ⟨ f i | e j ⟩ = δ i j . {\displaystyle \langle f_{i}|e_{j}\rangle =\delta _{ij}.} This expression of the identity operation is called a representation or a resolution of the identity. This formal representation satisfies the basic property of the identity: I k = I {\displaystyle I^{k}=I} valid for every positive integer k. Applying the resolution of the identity to any function in the space | ψ ⟩ {\displaystyle |\psi \rangle } , one obtains: I | ψ ⟩ = | ψ ⟩ = ∑ i = 1 n | e i ⟩ ⟨ f i | ψ ⟩ = ∑ i = 1 n c i | e i ⟩ {\displaystyle I|\psi \rangle =|\psi \rangle =\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|\psi \rangle =\sum _{i=1}^{n}c_{i}|e_{i}\rangle } which is the generalized Fourier expansion of ψ in terms of the basis functions { ei }. Here c i = ⟨ f i | ψ ⟩ {\displaystyle c_{i}=\langle f_{i}|\psi \rangle } . 
Given some operator equation of the form: O | ψ ⟩ = | h ⟩ {\displaystyle O|\psi \rangle =|h\rangle } with h in the space, this equation can be solved in the above basis through the formal manipulations: O | ψ ⟩ = ∑ i = 1 n c i ( O | e i ⟩ ) = ∑ i = 1 n | e i ⟩ ⟨ f i | h ⟩ , {\displaystyle O|\psi \rangle =\sum _{i=1}^{n}c_{i}\left(O|e_{i}\rangle \right)=\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|h\rangle ,} ⟨ f j | O | ψ ⟩ = ∑ i = 1 n c i ⟨ f j | O | e i ⟩ = ∑ i = 1 n ⟨ f j | e i ⟩ ⟨ f i | h ⟩ = ⟨ f j | h ⟩ , ∀ j {\displaystyle \langle f_{j}|O|\psi \rangle =\sum _{i=1}^{n}c_{i}\langle f_{j}|O|e_{i}\rangle =\sum _{i=1}^{n}\langle f_{j}|e_{i}\rangle \langle f_{i}|h\rangle =\langle f_{j}|h\rangle ,\quad \forall j} which converts the operator equation to a matrix equation determining the unknown coefficients cj in terms of the generalized Fourier coefficients ⟨ f j | h ⟩ {\displaystyle \langle f_{j}|h\rangle } of h and the matrix elements O j i = ⟨ f j | O | e i ⟩ {\displaystyle O_{ji}=\langle f_{j}|O|e_{i}\rangle } of the operator O. The role of spectral theory arises in establishing the nature and existence of the basis and the reciprocal basis. In particular, the basis might consist of the eigenfunctions of some linear operator L: L | e i ⟩ = λ i | e i ⟩ ; {\displaystyle L|e_{i}\rangle =\lambda _{i}|e_{i}\rangle \,;} with the { λi } the eigenvalues of L from the spectrum of L. Then the resolution of the identity above provides the dyad expansion of L: L I = L = ∑ i = 1 n L | e i ⟩ ⟨ f i | = ∑ i = 1 n λ i | e i ⟩ ⟨ f i | . {\displaystyle LI=L=\sum _{i=1}^{n}L|e_{i}\rangle \langle f_{i}|=\sum _{i=1}^{n}\lambda _{i}|e_{i}\rangle \langle f_{i}|.} == Resolvent operator == Using spectral theory, the resolvent operator R: R = ( λ I − L ) − 1 , {\displaystyle R=(\lambda I-L)^{-1},\,} can be evaluated in terms of the eigenfunctions and eigenvalues of L, and the Green's function corresponding to L can be found. 
Applying R to some arbitrary function in the space, say φ {\displaystyle \varphi } , R | φ ⟩ = ( λ I − L ) − 1 | φ ⟩ = ∑ i = 1 n 1 λ − λ i | e i ⟩ ⟨ f i | φ ⟩ . {\displaystyle R|\varphi \rangle =(\lambda I-L)^{-1}|\varphi \rangle =\sum _{i=1}^{n}{\frac {1}{\lambda -\lambda _{i}}}|e_{i}\rangle \langle f_{i}|\varphi \rangle .} This function has poles in the complex λ-plane at each eigenvalue of L. Thus, using the calculus of residues: 1 2 π i ∮ C R | φ ⟩ d λ = − ∑ i = 1 n | e i ⟩ ⟨ f i | φ ⟩ = − | φ ⟩ , {\displaystyle {\frac {1}{2\pi i}}\oint _{C}R|\varphi \rangle d\lambda =-\sum _{i=1}^{n}|e_{i}\rangle \langle f_{i}|\varphi \rangle =-|\varphi \rangle ,} where the line integral is over a contour C that includes all the eigenvalues of L. Suppose our functions are defined over some coordinates {xj}, that is: ⟨ x , φ ⟩ = φ ( x 1 , x 2 , . . . ) . {\displaystyle \langle x,\varphi \rangle =\varphi (x_{1},x_{2},...).} Introducing the notation ⟨ x , y ⟩ = δ ( x − y ) , {\displaystyle \langle x,y\rangle =\delta (x-y),} where δ(x − y) = δ(x1 − y1, x2 − y2, x3 − y3, ...) is the Dirac delta function, we can write ⟨ x , φ ⟩ = ∫ ⟨ x , y ⟩ ⟨ y , φ ⟩ d y . 
{\displaystyle \langle x,\varphi \rangle =\int \langle x,y\rangle \langle y,\varphi \rangle dy.} Then: ⟨ x , 1 2 π i ∮ C φ λ I − L d λ ⟩ = 1 2 π i ∮ C d λ ⟨ x , φ λ I − L ⟩ = 1 2 π i ∮ C d λ ∫ d y ⟨ x , y λ I − L ⟩ ⟨ y , φ ⟩ {\displaystyle {\begin{aligned}\left\langle x,{\frac {1}{2\pi i}}\oint _{C}{\frac {\varphi }{\lambda I-L}}d\lambda \right\rangle &={\frac {1}{2\pi i}}\oint _{C}d\lambda \left\langle x,{\frac {\varphi }{\lambda I-L}}\right\rangle \\&={\frac {1}{2\pi i}}\oint _{C}d\lambda \int dy\left\langle x,{\frac {y}{\lambda I-L}}\right\rangle \langle y,\varphi \rangle \end{aligned}}} The function G(x, y; λ) defined by: G ( x , y ; λ ) = ⟨ x , y λ I − L ⟩ = ∑ i = 1 n ∑ j = 1 n ⟨ x , e i ⟩ ⟨ f i , e j λ I − L ⟩ ⟨ f j , y ⟩ = ∑ i = 1 n ⟨ x , e i ⟩ ⟨ f i , y ⟩ λ − λ i = ∑ i = 1 n e i ( x ) f i ∗ ( y ) λ − λ i , {\displaystyle {\begin{aligned}G(x,y;\lambda )&=\left\langle x,{\frac {y}{\lambda I-L}}\right\rangle \\&=\sum _{i=1}^{n}\sum _{j=1}^{n}\langle x,e_{i}\rangle \left\langle f_{i},{\frac {e_{j}}{\lambda I-L}}\right\rangle \langle f_{j},y\rangle \\&=\sum _{i=1}^{n}{\frac {\langle x,e_{i}\rangle \langle f_{i},y\rangle }{\lambda -\lambda _{i}}}\\&=\sum _{i=1}^{n}{\frac {e_{i}(x)f_{i}^{*}(y)}{\lambda -\lambda _{i}}},\end{aligned}}} is called the Green's function for operator L, and satisfies: 1 2 π i ∮ C G ( x , y ; λ ) d λ = − ∑ i = 1 n ⟨ x , e i ⟩ ⟨ f i , y ⟩ = − ⟨ x , y ⟩ = − δ ( x − y ) . {\displaystyle {\frac {1}{2\pi i}}\oint _{C}G(x,y;\lambda )\,d\lambda =-\sum _{i=1}^{n}\langle x,e_{i}\rangle \langle f_{i},y\rangle =-\langle x,y\rangle =-\delta (x-y).} == Operator equations == Consider the operator equation: ( O − λ I ) | ψ ⟩ = | h ⟩ ; {\displaystyle (O-\lambda I)|\psi \rangle =|h\rangle ;} in terms of coordinates: ∫ ⟨ x , ( O − λ I ) y ⟩ ⟨ y , ψ ⟩ d y = h ( x ) . {\displaystyle \int \langle x,(O-\lambda I)y\rangle \langle y,\psi \rangle \,dy=h(x).} A particular case is λ = 0. 
The Green's function of the previous section is: ⟨ y , G ( λ ) z ⟩ = ⟨ y , ( O − λ I ) − 1 z ⟩ = G ( y , z ; λ ) , {\displaystyle \langle y,G(\lambda )z\rangle =\left\langle y,(O-\lambda I)^{-1}z\right\rangle =G(y,z;\lambda ),} and satisfies: ∫ ⟨ x , ( O − λ I ) y ⟩ ⟨ y , G ( λ ) z ⟩ d y = ∫ ⟨ x , ( O − λ I ) y ⟩ ⟨ y , ( O − λ I ) − 1 z ⟩ d y = ⟨ x , z ⟩ = δ ( x − z ) . {\displaystyle \int \langle x,(O-\lambda I)y\rangle \langle y,G(\lambda )z\rangle \,dy=\int \langle x,(O-\lambda I)y\rangle \left\langle y,(O-\lambda I)^{-1}z\right\rangle \,dy=\langle x,z\rangle =\delta (x-z).} Using this Green's function property: ∫ ⟨ x , ( O − λ I ) y ⟩ G ( y , z ; λ ) d y = δ ( x − z ) . {\displaystyle \int \langle x,(O-\lambda I)y\rangle G(y,z;\lambda )\,dy=\delta (x-z).} Then, multiplying both sides of this equation by h(z) and integrating: ∫ d z h ( z ) ∫ d y ⟨ x , ( O − λ I ) y ⟩ G ( y , z ; λ ) = ∫ d y ⟨ x , ( O − λ I ) y ⟩ ∫ d z h ( z ) G ( y , z ; λ ) = h ( x ) , {\displaystyle \int dz\,h(z)\int dy\,\langle x,(O-\lambda I)y\rangle G(y,z;\lambda )=\int dy\,\langle x,(O-\lambda I)y\rangle \int dz\,h(z)G(y,z;\lambda )=h(x),} which suggests the solution is: ψ ( x ) = ∫ h ( z ) G ( x , z ; λ ) d z . {\displaystyle \psi (x)=\int h(z)G(x,z;\lambda )\,dz.} That is, the function ψ(x) satisfying the operator equation is found if we can find the spectrum of O, and construct G, for example by using: G ( x , z ; λ ) = ∑ i = 1 n e i ( x ) f i ∗ ( z ) λ − λ i . {\displaystyle G(x,z;\lambda )=\sum _{i=1}^{n}{\frac {e_{i}(x)f_{i}^{*}(z)}{\lambda -\lambda _{i}}}.} There are many other ways to find G, of course. See the articles on Green's functions and on Fredholm integral equations. It must be kept in mind that the above mathematics is purely formal, and a rigorous treatment involves some pretty sophisticated mathematics, including a good background knowledge of functional analysis, Hilbert spaces, distributions and so forth. Consult these articles and the references for more detail. 
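A finite-dimensional sketch of this construction (an assumed illustration, not the article's own computation): for a diagonalizable matrix O with right eigenvectors eᵢ (columns of V) and reciprocal vectors fᵢ (rows of V⁻¹), the matrix G(λ) = (O − λI)⁻¹ plays the role of the Green's function, has poles at the eigenvalues, and ψ = G(λ)h solves (O − λI)ψ = h. Note the sign convention: expanding (O − λI)⁻¹ spectrally gives denominators λᵢ − λ, the negative of the denominators appearing in the resolvent (λI − L)⁻¹ of the previous section.

```python
import numpy as np

O = np.array([[4.0, 1.0],
              [0.0, 2.0]])      # arbitrary diagonalizable example
lam_i, V = np.linalg.eig(O)     # eigenvalues and right eigenvectors
F = np.linalg.inv(V)            # reciprocal basis (rows)

def greens(lam):
    """Spectral expansion of (O - lam*I)^(-1): sum of dyads over
    (lam_i - lam)."""
    return sum(np.outer(V[:, i], F[i, :]) / (lam_i[i] - lam)
               for i in range(len(lam_i)))

lam = 1.0                        # not an eigenvalue of O
G = greens(lam)

# The spectral expansion agrees with the direct matrix inverse:
assert np.allclose(G, np.linalg.inv(O - lam * np.eye(2)))

# psi = G h solves the operator equation (O - lam*I) psi = h:
h = np.array([1.0, -1.0])
psi = G @ h
assert np.allclose((O - lam * np.eye(2)) @ psi, h)
```

In the continuous setting the sum over dyads becomes the kernel G(x, z; λ) and the matrix-vector product becomes the integral ψ(x) = ∫ h(z)G(x, z; λ) dz, exactly as in the formulas above.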
== Spectral theorem and Rayleigh quotient == Optimization problems are among the most useful examples of the combinatorial significance of the eigenvalues and eigenvectors of symmetric matrices, especially for the Rayleigh quotient with respect to a matrix M. Theorem Let M be a symmetric matrix and let x be a non-zero vector that maximizes the Rayleigh quotient with respect to M. Then, x is an eigenvector of M with eigenvalue equal to the Rayleigh quotient. Moreover, this eigenvalue is the largest eigenvalue of M. Proof Assume the spectral theorem. Let the eigenvalues of M be λ 1 ≤ λ 2 ≤ ⋯ ≤ λ n {\displaystyle \lambda _{1}\leq \lambda _{2}\leq \cdots \leq \lambda _{n}} , with corresponding orthonormal eigenvectors v 1 , … , v n {\displaystyle v_{1},\dots ,v_{n}} . Since the { v i } {\displaystyle \{v_{i}\}} form an orthonormal basis, any vector x can be expressed in this basis as x = ∑ i v i T x v i {\displaystyle x=\sum _{i}v_{i}^{T}xv_{i}} To verify this formula, note that v j T ∑ i v i T x v i = ∑ i v i T x v j T v i = ( v j T x ) v j T v j = v j T x {\displaystyle {\begin{aligned}v_{j}^{T}\sum _{i}v_{i}^{T}xv_{i}={}&\sum _{i}v_{i}^{T}xv_{j}^{T}v_{i}\\[4pt]={}&(v_{j}^{T}x)v_{j}^{T}v_{j}\\[4pt]={}&v_{j}^{T}x\end{aligned}}} Now evaluate the Rayleigh quotient with respect to x: x T M x = ( ∑ i ( v i T x ) v i ) T M ( ∑ j ( v j T x ) v j ) = ( ∑ i ( v i T x ) v i T ) ( ∑ j ( v j T x ) v j λ j ) = ∑ i , j ( v i T x ) v i T ( v j T x ) v j λ j = ∑ j ( v j T x ) ( v j T x ) λ j = ∑ j ( v j T x ) 2 λ j ≤ λ n ∑ j ( v j T x ) 2 = λ n x T x , {\displaystyle {\begin{aligned}x^{T}Mx={}&\left(\sum _{i}(v_{i}^{T}x)v_{i}\right)^{T}M\left(\sum _{j}(v_{j}^{T}x)v_{j}\right)\\[4pt]={}&\left(\sum _{i}(v_{i}^{T}x)v_{i}^{T}\right)\left(\sum _{j}(v_{j}^{T}x)v_{j}\lambda _{j}\right)\\[4pt]={}&\sum _{i,j}(v_{i}^{T}x)v_{i}^{T}(v_{j}^{T}x)v_{j}\lambda _{j}\\[4pt]={}&\sum _{j}(v_{j}^{T}x)(v_{j}^{T}x)\lambda _{j}\\[4pt]={}&\sum _{j}(v_{j}^{T}x)^{2}\lambda _{j}\leq \lambda _{n}\sum _{j}(v_{j}^{T}x)^{2}\\[4pt]={}&\lambda _{n}x^{T}x,\end{aligned}}} where we used 
Parseval's identity in the last line. Finally we obtain that x T M x x T x ≤ λ n {\displaystyle {\frac {x^{T}Mx}{x^{T}x}}\leq \lambda _{n}} so the Rayleigh quotient is always at most λ n {\displaystyle \lambda _{n}} . == See also == Functions of operators, Operator theory Lax pairs Least-squares spectral analysis Riesz projector Self-adjoint operator Spectrum (functional analysis), Resolvent formalism, Decomposition of spectrum (functional analysis) Spectral radius, Spectrum of an operator, Spectral theorem Spectral theory of compact operators Spectral theory of normal C*-algebras Sturm–Liouville theory, Integral equations, Fredholm theory Compact operators, Isospectral operators, Completeness Spectral geometry Spectral graph theory List of functional analysis topics == Notes == == References == Edward Brian Davies (1996). Spectral Theory and Differential Operators; Volume 42 in the Cambridge Studies in Advanced Mathematics. Cambridge University Press. ISBN 0-521-58710-7. Dunford, Nelson; Schwartz, Jacob T (1988). Linear Operators, Spectral Theory, Self Adjoint Operators in Hilbert Space (Part 2) (Paperback reprint of 1967 ed.). Wiley. ISBN 0-471-60847-5. Dunford, Nelson; Schwartz, Jacob T (1988). Linear Operators, Spectral Operators (Part 3) (Paperback reprint of 1971 ed.). Wiley. ISBN 0-471-60846-7. Sadri Hassani (1999). "Chapter 4: Spectral decomposition". Mathematical Physics: a Modern Introduction to its Foundations. Springer. ISBN 0-387-98579-4. "Spectral theory of linear operators", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Shmuel Kantorovitz (1983). Spectral Theory of Banach Space Operators. Springer. Arch W. Naylor, George R. Sell (2000). "Chapter 5, Part B: The Spectrum". Linear Operator Theory in Engineering and Science; Volume 40 of Applied mathematical sciences. Springer. p. 411. ISBN 0-387-95001-X. Gerald Teschl (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. American Mathematical Society. 
ISBN 978-0-8218-4660-5. Valter Moretti (2017). Spectral Theory and Quantum Mechanics; Mathematical Foundations of Quantum Theories, Symmetries and Introduction to the Algebraic Formulation 2nd Edition. Springer. ISBN 978-3-319-70705-1. == External links == Evans M. Harrell II: A Short History of Operator Theory Gregory H. Moore (1995). "The axiomatization of linear algebra: 1875-1940". Historia Mathematica. 22 (3): 262–303. doi:10.1006/hmat.1995.1025. Steen, L. A. (April 1973). "Highlights in the History of Spectral Theory". The American Mathematical Monthly. 80 (4): 359–381. doi:10.2307/2319079. JSTOR 2319079.
Wikipedia:Spherical basis#0
In pure and applied mathematics, particularly quantum mechanics and computer graphics and their applications, a spherical basis is the basis used to express spherical tensors. The spherical basis closely relates to the description of angular momentum in quantum mechanics and spherical harmonic functions. While spherical polar coordinates are one orthogonal coordinate system for expressing vectors and tensors using polar and azimuthal angles and radial distance, the spherical basis is constructed from the standard basis and uses complex numbers. == In three dimensions == A vector A in 3D Euclidean space R3 can be expressed in the familiar Cartesian coordinate system in the standard basis ex, ey, ez, and coordinates Ax, Ay, Az: A = A x e x + A y e y + A z e z {\displaystyle \mathbf {A} =A_{x}\mathbf {e} _{x}+A_{y}\mathbf {e} _{y}+A_{z}\mathbf {e} _{z}} or any other coordinate system with associated basis set of vectors. From this extend the scalars to allow multiplication by complex numbers, so that we are now working in C 3 {\displaystyle \mathbb {C} ^{3}} rather than R 3 {\displaystyle \mathbb {R} ^{3}} . === Basis definition === In the spherical basis, denoted e+, e−, e0, with associated coordinates A+, A−, A0, the vector A is: A = A + e + + A − e − + A 0 e 0 {\displaystyle \mathbf {A} =A_{+}\mathbf {e} _{+}+A_{-}\mathbf {e} _{-}+A_{0}\mathbf {e} _{0}} where the spherical basis vectors can be defined in terms of the Cartesian basis using complex-valued coefficients in the xy plane: e + = − 1 2 ( e x + i e y ) , e − = + 1 2 ( e x − i e y ) {\displaystyle \mathbf {e} _{+}=-{\frac {1}{\sqrt {2}}}\left(\mathbf {e} _{x}+i\mathbf {e} _{y}\right)\,,\quad \mathbf {e} _{-}=+{\frac {1}{\sqrt {2}}}\left(\mathbf {e} _{x}-i\mathbf {e} _{y}\right)} in which i {\displaystyle i} denotes the imaginary unit, and one normal to the plane in the z direction: e 0 = e z {\displaystyle \mathbf {e} _{0}=\mathbf {e} _{z}} The inverse relations are: e x = 1 2 ( e − − e + ) , e y = i 2 ( e + + e − ) , e z = e 0 {\displaystyle \mathbf {e} _{x}={\frac {1}{\sqrt {2}}}\left(\mathbf {e} _{-}-\mathbf {e} _{+}\right)\,,\quad \mathbf {e} _{y}={\frac {i}{\sqrt {2}}}\left(\mathbf {e} _{+}+\mathbf {e} _{-}\right)\,,\quad \mathbf {e} _{z}=\mathbf {e} _{0}} === Commutator definition === While giving a basis in a 3-dimensional space is a valid definition for a spherical tensor, it only covers the case when the rank k {\displaystyle k} is 1. For higher ranks, one may use either the commutator or the rotation definition of a spherical tensor. 
The commutator definition is given below; any operator T q ( k ) {\displaystyle T_{q}^{(k)}} that satisfies the following relations is a spherical tensor: [ J ± , T q ( k ) ] = ℏ ( k ∓ q ) ( k ± q + 1 ) T q ± 1 ( k ) {\displaystyle [J_{\pm },T_{q}^{(k)}]=\hbar {\sqrt {(k\mp q)(k\pm q+1)}}T_{q\pm 1}^{(k)}} [ J z , T q ( k ) ] = ℏ q T q ( k ) {\displaystyle [J_{z},T_{q}^{(k)}]=\hbar qT_{q}^{(k)}} === Rotation definition === Analogously to how the spherical harmonics transform under a rotation, a general spherical tensor transforms as follows, when the states transform under the unitary Wigner D-matrix D ( R ) {\displaystyle {\mathcal {D}}(R)} , where R is a (3×3 rotation) group element in SO(3). That is, these matrices represent the rotation group elements. With the help of its Lie algebra, one can show these two definitions are equivalent. D ( R ) T q ( k ) D † ( R ) = ∑ q ′ = − k k T q ′ ( k ) D q ′ q ( k ) {\displaystyle {\mathcal {D}}(R)T_{q}^{(k)}{\mathcal {D}}^{\dagger }(R)=\sum _{q'=-k}^{k}T_{q'}^{(k)}{\mathcal {D}}_{q'q}^{(k)}} === Coordinate vectors === For the spherical basis, the coordinates are complex-valued numbers A+, A0, A−, and can be found by substitution of (3B) into (1), or directly calculated from the inner product ⟨, ⟩ (5): A + = − 1 2 ( A x − i A y ) , A − = + 1 2 ( A x + i A y ) {\displaystyle A_{+}=-{\frac {1}{\sqrt {2}}}\left(A_{x}-iA_{y}\right)\,,\quad A_{-}=+{\frac {1}{\sqrt {2}}}\left(A_{x}+iA_{y}\right)} A 0 = ⟨ e 0 , A ⟩ = ⟨ e z , A ⟩ = A z {\displaystyle A_{0}=\left\langle \mathbf {e} _{0},\mathbf {A} \right\rangle =\left\langle \mathbf {e} _{z},\mathbf {A} \right\rangle =A_{z}} with inverse relations: A x = 1 2 ( A − − A + ) , A y = − i 2 ( A + + A − ) , A z = A 0 {\displaystyle A_{x}={\frac {1}{\sqrt {2}}}\left(A_{-}-A_{+}\right)\,,\quad A_{y}=-{\frac {i}{\sqrt {2}}}\left(A_{+}+A_{-}\right)\,,\quad A_{z}=A_{0}} In general, for two vectors with complex coefficients in the same real-valued orthonormal basis ei, with the property ei·ej = δij, the inner product is: ⟨ A , B ⟩ = ∑ j A j B j ∗ {\displaystyle \left\langle \mathbf {A} ,\mathbf {B} \right\rangle =\sum _{j}A_{j}B_{j}^{*}} where · is the usual dot product and the complex conjugate * must be used to keep the magnitude (or "norm") of the vector positive definite. 
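These relations can be checked numerically. The sketch below (an illustration, not part of the article) writes the spherical basis vectors in Cartesian components consistent with the change-of-basis matrix U of the next section, and verifies orthonormality under the inner product ⟨a, b⟩ = Σⱼ aⱼ bⱼ* as well as the unitarity of U.

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
e_plus  = np.array([-s, -1j * s, 0.0])   # e_+ = -(e_x + i e_y)/sqrt(2)
e_minus = np.array([ s, -1j * s, 0.0])   # e_- = +(e_x - i e_y)/sqrt(2)
e_zero  = np.array([0.0, 0.0, 1.0])      # e_0 = e_z

def inner(a, b):
    """Inner product <a, b> = sum_j a_j * conj(b_j)."""
    return np.sum(a * np.conj(b))

# Each basis vector is a unit vector...
for e in (e_plus, e_minus, e_zero):
    assert np.isclose(inner(e, e), 1.0)

# ...and the three are mutually orthogonal:
assert np.isclose(inner(e_plus, e_minus), 0.0)
assert np.isclose(inner(e_minus, e_zero), 0.0)
assert np.isclose(inner(e_zero, e_plus), 0.0)

# Stacking the basis vectors as rows gives the change-of-basis matrix U,
# which is unitary: U U^dagger = I.
U = np.vstack([e_plus, e_minus, e_zero])
assert np.allclose(U @ U.conj().T, np.eye(3))
```

The 1/√2 factors are exactly the normalizing constants discussed in the orthonormality section below.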
== Properties (three dimensions) == === Orthonormality === The spherical basis is an orthonormal basis, since the inner product ⟨, ⟩ (5) of every pair vanishes meaning the basis vectors are all mutually orthogonal: ⟨ e + , e − ⟩ = ⟨ e − , e 0 ⟩ = ⟨ e 0 , e + ⟩ = 0 {\displaystyle \left\langle \mathbf {e} _{+},\mathbf {e} _{-}\right\rangle =\left\langle \mathbf {e} _{-},\mathbf {e} _{0}\right\rangle =\left\langle \mathbf {e} _{0},\mathbf {e} _{+}\right\rangle =0} and each basis vector is a unit vector: ⟨ e + , e + ⟩ = ⟨ e − , e − ⟩ = ⟨ e 0 , e 0 ⟩ = 1 {\displaystyle \left\langle \mathbf {e} _{+},\mathbf {e} _{+}\right\rangle =\left\langle \mathbf {e} _{-},\mathbf {e} _{-}\right\rangle =\left\langle \mathbf {e} _{0},\mathbf {e} _{0}\right\rangle =1} hence the need for the normalizing factors of 1 / 2 {\displaystyle 1/\!{\sqrt {2}}} . === Change of basis matrix === The defining relations (3A) can be summarized by a transformation matrix U: ( e + e − e 0 ) = U ( e x e y e z ) , U = ( − 1 2 − i 2 0 + 1 2 − i 2 0 0 0 1 ) , {\displaystyle {\begin{pmatrix}\mathbf {e} _{+}\\\mathbf {e} _{-}\\\mathbf {e} _{0}\end{pmatrix}}=\mathbf {U} {\begin{pmatrix}\mathbf {e} _{x}\\\mathbf {e} _{y}\\\mathbf {e} _{z}\end{pmatrix}}\,,\quad \mathbf {U} ={\begin{pmatrix}-{\frac {1}{\sqrt {2}}}&-{\frac {i}{\sqrt {2}}}&0\\+{\frac {1}{\sqrt {2}}}&-{\frac {i}{\sqrt {2}}}&0\\0&0&1\end{pmatrix}}\,,} with inverse: ( e x e y e z ) = U − 1 ( e + e − e 0 ) , U − 1 = ( − 1 2 + 1 2 0 + i 2 + i 2 0 0 0 1 ) . 
{\displaystyle {\begin{pmatrix}\mathbf {e} _{x}\\\mathbf {e} _{y}\\\mathbf {e} _{z}\end{pmatrix}}=\mathbf {U} ^{-1}{\begin{pmatrix}\mathbf {e} _{+}\\\mathbf {e} _{-}\\\mathbf {e} _{0}\end{pmatrix}}\,,\quad \mathbf {U} ^{-1}={\begin{pmatrix}-{\frac {1}{\sqrt {2}}}&+{\frac {1}{\sqrt {2}}}&0\\+{\frac {i}{\sqrt {2}}}&+{\frac {i}{\sqrt {2}}}&0\\0&0&1\end{pmatrix}}\,.} It can be seen that U is a unitary matrix, in other words its Hermitian conjugate U† (complex conjugate and matrix transpose) is also the inverse matrix U−1. For the coordinates: ( A + A − A 0 ) = U ∗ ( A x A y A z ) , U ∗ = ( − 1 2 + i 2 0 + 1 2 + i 2 0 0 0 1 ) , {\displaystyle {\begin{pmatrix}A_{+}\\A_{-}\\A_{0}\end{pmatrix}}=\mathbf {U} ^{\mathrm {*} }{\begin{pmatrix}A_{x}\\A_{y}\\A_{z}\end{pmatrix}}\,,\quad \mathbf {U} ^{\mathrm {*} }={\begin{pmatrix}-{\frac {1}{\sqrt {2}}}&+{\frac {i}{\sqrt {2}}}&0\\+{\frac {1}{\sqrt {2}}}&+{\frac {i}{\sqrt {2}}}&0\\0&0&1\end{pmatrix}}\,,} and inverse: ( A x A y A z ) = ( U ∗ ) − 1 ( A + A − A 0 ) , ( U ∗ ) − 1 = ( − 1 2 + 1 2 0 − i 2 − i 2 0 0 0 1 ) . 
{\displaystyle {\begin{pmatrix}A_{x}\\A_{y}\\A_{z}\end{pmatrix}}=(\mathbf {U} ^{\mathrm {*} })^{-1}{\begin{pmatrix}A_{+}\\A_{-}\\A_{0}\end{pmatrix}}\,,\quad (\mathbf {U} ^{\mathrm {*} })^{-1}={\begin{pmatrix}-{\frac {1}{\sqrt {2}}}&+{\frac {1}{\sqrt {2}}}&0\\-{\frac {i}{\sqrt {2}}}&-{\frac {i}{\sqrt {2}}}&0\\0&0&1\end{pmatrix}}\,.} === Cross products === Taking cross products of the spherical basis vectors, we find an obvious relation: e q × e q = 0 {\displaystyle \mathbf {e} _{q}\times \mathbf {e} _{q}={\boldsymbol {0}}} where q is a placeholder for +, −, 0, and two less obvious relations: e ± × e ∓ = ± i e 0 {\displaystyle \mathbf {e} _{\pm }\times \mathbf {e} _{\mp }=\pm i\mathbf {e} _{0}} e ± × e 0 = ± i e ± {\displaystyle \mathbf {e} _{\pm }\times \mathbf {e} _{0}=\pm i\mathbf {e} _{\pm }} === Inner product in the spherical basis === The inner product between two vectors A and B in the spherical basis follows from the above definition of the inner product: ⟨ A , B ⟩ = A + B + ⋆ + A − B − ⋆ + A 0 B 0 ⋆ {\displaystyle \left\langle \mathbf {A} ,\mathbf {B} \right\rangle =A_{+}B_{+}^{\star }+A_{-}B_{-}^{\star }+A_{0}B_{0}^{\star }} == See also == Wigner–Eckart theorem Wigner D matrix 3D rotation group == References == === General === S. S. M. Wong (2008). Introductory Nuclear Physics (2nd ed.). John Wiley & Sons. ISBN 978-35-276-179-13. == External links ==
Wikipedia:Spherical geometry#0
Spherical geometry or spherics (from Ancient Greek σφαιρικά) is the geometry of the two-dimensional surface of a sphere or the n-dimensional surface of higher dimensional spheres. Long studied for its practical applications to astronomy, navigation, and geodesy, spherical geometry and the metrical tools of spherical trigonometry are in many respects analogous to Euclidean plane geometry and trigonometry, but also have some important differences. The sphere can be studied either extrinsically as a surface embedded in 3-dimensional Euclidean space (part of the study of solid geometry), or intrinsically using methods that only involve the surface itself without reference to any surrounding space. == Principles == In plane (Euclidean) geometry, the basic concepts are points and (straight) lines. In spherical geometry, the basic concepts are point and great circle. However, two great circles on a sphere intersect in two antipodal points, unlike coplanar lines in elliptic geometry. In the extrinsic 3-dimensional picture, a great circle is the intersection of the sphere with any plane through the center. In the intrinsic approach, a great circle is a geodesic: a shortest path between any two of its points, provided they are close enough. Or, in the (also intrinsic) axiomatic approach analogous to Euclid's axioms of plane geometry, "great circle" is simply an undefined term, together with postulates stipulating the basic relationships between great circles and the also-undefined "points". This is the same as Euclid's method of treating point and line as undefined primitive notions and axiomatizing their relationships. Great circles in many ways play the same logical role in spherical geometry as lines in Euclidean geometry, e.g., as the sides of (spherical) triangles. 
This is more than an analogy; spherical and plane geometry and others can all be unified under the umbrella of geometry built from distance measurement, where "lines" are defined to mean shortest paths (geodesics). Many statements about the geometry of points and such "lines" are equally true in all those geometries provided lines are defined that way, and the theory can be readily extended to higher dimensions. Nevertheless, because its applications and pedagogy are tied to solid geometry, and because the generalization loses some important properties of lines in the plane, spherical geometry ordinarily does not use the term "line" at all to refer to anything on the sphere itself. If developed as a part of solid geometry, use is made of points, straight lines and planes (in the Euclidean sense) in the surrounding space. In spherical geometry, angles are defined between great circles, resulting in a spherical trigonometry that differs from ordinary trigonometry in many respects; for example, the sum of the interior angles of a spherical triangle exceeds 180 degrees. == Relation to similar geometries == Because a sphere and a plane differ geometrically, (intrinsic) spherical geometry has some features of a non-Euclidean geometry and is sometimes described as being one. However, spherical geometry was not considered a full-fledged non-Euclidean geometry sufficient to resolve the ancient problem of whether the parallel postulate is a logical consequence of the rest of Euclid's axioms of plane geometry, because it requires another axiom to be modified. The resolution was found instead in elliptic geometry, to which spherical geometry is closely related, and hyperbolic geometry; each of these new geometries makes a different change to the parallel postulate. The principles of any of these geometries can be extended to any number of dimensions. 
An important geometry related to that of the sphere is that of the real projective plane; it is obtained by identifying antipodal points (pairs of opposite points) on the sphere. Locally, the projective plane has all the properties of spherical geometry, but it has different global properties. In particular, it is non-orientable, or one-sided, and unlike the sphere it cannot be drawn as a surface in 3-dimensional space without intersecting itself. Concepts of spherical geometry may also be applied to the oblong sphere, though certain formulas must be slightly modified. == History == === Greek antiquity === The earliest mathematical work of antiquity to come down to our time is On the rotating sphere (Περὶ κινουμένης σφαίρας, Peri kinoumenes sphairas) by Autolycus of Pitane, who lived at the end of the fourth century BC. Spherical trigonometry was studied by early Greek mathematicians such as Theodosius of Bithynia, a Greek astronomer and mathematician who wrote Spherics, a book on the geometry of the sphere, and Menelaus of Alexandria, who wrote a book on spherical trigonometry called Sphaerica and developed Menelaus' theorem. === Islamic world === The Book of Unknown Arcs of a Sphere written by the Islamic mathematician Al-Jayyani is considered to be the first treatise on spherical trigonometry. The book contains formulae for right-angled triangles, the general law of sines, and the solution of a spherical triangle by means of the polar triangle. The book On Triangles by Regiomontanus, written around 1463, is the first pure trigonometrical work in Europe. However, Gerolamo Cardano noted a century later that much of its material on spherical trigonometry was taken from the twelfth-century work of the Andalusi scholar Jabir ibn Aflah. === Euler's work === Leonhard Euler published a series of important memoirs on spherical geometry: L. 
Euler, Principes de la trigonométrie sphérique tirés de la méthode des plus grands et des plus petits, Mémoires de l'Académie des Sciences de Berlin 9 (1753), 1755, p. 233–257; Opera Omnia, Series 1, vol. XXVII, p. 277–308. L. Euler, Eléments de la trigonométrie sphéroïdique tirés de la méthode des plus grands et des plus petits, Mémoires de l'Académie des Sciences de Berlin 9 (1754), 1755, p. 258–293; Opera Omnia, Series 1, vol. XXVII, p. 309–339. L. Euler, De curva rectificabili in superficie sphaerica, Novi Commentarii academiae scientiarum Petropolitanae 15, 1771, pp. 195–216; Opera Omnia, Series 1, Volume 28, pp. 142–160. L. Euler, De mensura angulorum solidorum, Acta academiae scientiarum imperialis Petropolitinae 2, 1781, p. 31–54; Opera Omnia, Series 1, vol. XXVI, p. 204–223. L. Euler, Problematis cuiusdam Pappi Alexandrini constructio, Acta academiae scientiarum imperialis Petropolitinae 4, 1783, p. 91–96; Opera Omnia, Series 1, vol. XXVI, p. 237–242. L. Euler, Geometrica et sphaerica quaedam, Mémoires de l'Académie des Sciences de Saint-Pétersbourg 5, 1815, p. 96–114; Opera Omnia, Series 1, vol. XXVI, p. 344–358. L. Euler, Trigonometria sphaerica universa, ex primis principiis breviter et dilucide derivata, Acta academiae scientiarum imperialis Petropolitinae 3, 1782, p. 72–86; Opera Omnia, Series 1, vol. XXVI, p. 224–236. L. Euler, Variae speculationes super area triangulorum sphaericorum, Nova Acta academiae scientiarum imperialis Petropolitinae 10, 1797, p. 47–62; Opera Omnia, Series 1, vol. XXIX, p. 253–266. == Properties == Spherical geometry has the following properties: Any two great circles intersect in two diametrically opposite points, called antipodal points. Any two points that are not antipodal points determine a unique great circle. There is a natural unit of angle measurement (based on a revolution), a natural unit of length (based on the circumference of a great circle) and a natural unit of area (based on the area of the sphere). 
Each great circle is associated with a pair of antipodal points, called its poles, which are the common intersections of the set of great circles perpendicular to it. This shows that a great circle is, with respect to distance measurement on the surface of the sphere, a circle: the locus of points all at a specific distance from a center. Each point is associated with a unique great circle, called the polar circle of the point, which is the great circle on the plane through the centre of the sphere and perpendicular to the diameter of the sphere through the given point. Because a pair of non-antipodal points determines two arcs on the great circle through them, three non-collinear points do not determine a unique triangle. However, if we only consider triangles whose sides are minor arcs of great circles, we have the following properties: The angle sum of a triangle is greater than 180° and less than 540°. The area of a triangle is proportional to the excess of its angle sum over 180°. Two triangles with the same angle sum are equal in area. There is an upper bound for the area of triangles. The composition (product) of two reflections across great circles may be considered as a rotation about either of the points of intersection of their axes. Two triangles are congruent if and only if they correspond under a finite product of such reflections. Two triangles with corresponding angles equal are congruent (i.e., all similar triangles are congruent). == Relation to Euclid's postulates == If "line" is taken to mean great circle, spherical geometry only obeys two of Euclid's five postulates: the second postulate ("to produce [extend] a finite straight line continuously in a straight line") and the fourth postulate ("that all right angles are equal to one another"). However, it violates the other three. 
Contrary to the first postulate ("that between any two points, there is a unique line segment joining them"), there is not a unique shortest route between any two points (antipodal points such as the north and south poles on a spherical globe are counterexamples); contrary to the third postulate, a sphere does not contain circles of arbitrarily great radius; and contrary to the fifth (parallel) postulate, there is no point through which a line can be drawn that never intersects a given line. A statement that is equivalent to the parallel postulate is that there exists a triangle whose angles add up to 180°. Since spherical geometry violates the parallel postulate, there exists no such triangle on the surface of a sphere. The sum of the angles of a triangle on a sphere is 180°(1 + 4f), where f is the fraction of the sphere's surface that is enclosed by the triangle. For any positive value of f, this exceeds 180°. == See also == Spherical astronomy Spherical conic Spherical distance Spherical polyhedron Spherics Half-side formula Lénárt sphere Versor == Notes == == References == == Further reading == Meserve, Bruce E. (1983) [1959], Fundamental Concepts of Geometry, Dover, ISBN 0-486-63415-9 Papadopoulos, Athanase (2015), Euler, la géométrie sphérique et le calcul des variations. In: Leonhard Euler : Mathématicien, physicien et théoricien de la musique (dir. X. Hascher et A. Papadopoulos), CNRS Editions, Paris, ISBN 978-2-271-08331-9 Van Brummelen, Glen (2013). Heavenly Mathematics: The Forgotten Art of Spherical Trigonometry. Princeton University Press. ISBN 9780691148922. Retrieved 31 December 2014. Roshdi Rashed and Athanase Papadopoulos (2017) Menelaus' Spherics: Early Translation and al-Mahani's/al-Harawi's version. 
Critical edition of Menelaus' Spherics from the Arabic manuscripts, with historical and mathematical commentaries, De Gruyter Series: Scientia Graeco-Arabica 21 ISBN 978-3-11-057142-4 == External links == The Geometry of the Sphere Archived 2011-06-21 at the Wayback Machine Rice University Weisstein, Eric W. "Spherical Geometry". MathWorld. Sphaerica - geometry software for constructing on the sphere
Wikipedia:Spherically complete field#0
In mathematics, a field K with an absolute value is called spherically complete if the intersection of every decreasing sequence of balls (in the sense of the metric induced by the absolute value) is nonempty: B1 ⊇ B2 ⊇ ⋯ ⇒ ⋂n∈N Bn ≠ ∅. The definition can also be adapted to a field K with a valuation v taking values in an arbitrary ordered abelian group: (K, v) is spherically complete if every collection of balls that is totally ordered by inclusion has a nonempty intersection. Spherically complete fields are important in nonarchimedean functional analysis, since many results analogous to theorems of classical functional analysis require the base field to be spherically complete. == Examples == Any locally compact field is spherically complete. This includes, in particular, the fields Qp of p-adic numbers, and any of their finite extensions. Every spherically complete field is complete. On the other hand, Cp, the completion of the algebraic closure of Qp, is not spherically complete. Any field of Hahn series is spherically complete. == References ==
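The defining condition can be illustrated numerically with the 2-adic absolute value on the integers, where a closed ball of radius 2⁻ⁿ around a is the congruence class {x : x ≡ a (mod 2ⁿ)}. The following is a toy sketch only (a finite truncation of the 2-adic integers; `ball` and `LIMIT` are names invented for the demo), not a proof of spherical completeness:

```python
# Toy illustration: a decreasing chain of 2-adic balls has nonempty
# intersection.  We use the balls around a_n = 2^n - 1, the partial
# expansions of the 2-adic number ...111 (i.e. -1), and truncate the
# 2-adic integers to a finite set of residues.

LIMIT = 1 << 12  # work with residues {0, ..., 4095} for the demonstration

def ball(a, n):
    """Residues below LIMIT lying in the 2-adic ball {x : x ≡ a (mod 2^n)}."""
    return {x for x in range(LIMIT) if (x - a) % (1 << n) == 0}

balls = [ball((1 << n) - 1, n) for n in range(1, 11)]

# The chain is decreasing: B_1 ⊇ B_2 ⊇ ... ⊇ B_10,
assert all(later <= earlier for earlier, later in zip(balls, balls[1:]))
# and its intersection is nonempty; it contains 2^10 - 1 = 1023.
common = set.intersection(*balls)
assert 1023 in common
```

In the full field Q2 the intersection of this infinite chain contains the 2-adic limit −1; spherical completeness asserts the analogous nonemptiness for every chain of balls, not just chains with compatible centers.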
Wikipedia:Spherics#0
The Spherics (Greek: τὰ σφαιρικά, tà sphairiká) is a three-volume treatise on spherical geometry written by the Hellenistic mathematician Theodosius of Bithynia in the 2nd or 1st century BC. Book I and the first half of Book II establish basic geometric constructions needed for spherical geometry using the tools of Euclidean solid geometry, while the second half of Book II and Book III contain propositions relevant to astronomy as modeled by the celestial sphere. Primarily consisting of theorems which were known at least informally a couple centuries earlier, the Spherics was a foundational treatise for geometers and astronomers from its origin until the 19th century. It was continuously studied and copied in Greek manuscript for more than a millennium. It was translated into Arabic in the 9th century during the Islamic Golden Age, and thence translated into Latin in 12th century Iberia, though the text and diagrams were somewhat corrupted. In the 16th century printed editions in Greek were published along with better translations into Latin. == History == Several of the definitions and theorems in the Spherics were used without mention in Euclid's Phenomena and two extant works by Autolycus concerning motions of the celestial sphere, all written about two centuries before Theodosius. It has been speculated that this tradition of Greek "spherics" – founded in the axiomatic system and using the methods of proof of solid geometry exemplified by Euclid's Elements but extended with additional definitions relevant to the sphere – may have originated in a now-unknown work by Eudoxus, who probably established a two-sphere model of the cosmos (spherical Earth and celestial sphere) sometime between 370–340 BC. The Spherics is a supplement to the Elements, and takes its content for granted as a prerequisite. 
The Spherics follows the general presentation style of the Elements, with definitions followed by a list of theorems (propositions), each of which is first stated abstractly as prose, then restated with points lettered for the proof. It analyses spherical circles as flat circles lying in planes intersecting the sphere and provides geometric constructions for various configurations of spherical circles. Spherical distances and radii are treated as Euclidean distances in the surrounding 3-dimensional space. The relationship between planes is described in terms of dihedral angle. As in the Elements, there is no concept of angle measure or trigonometry per se. This approach differs from other quantitative methods of Greek astronomy such as the analemma (orthographic projection), stereographic projection, or trigonometry (a fledgling subject introduced by Theodosius' contemporary Hipparchus). It also differs from the approach taken in Menelaus' Spherics, a treatise of the same title written 3 centuries later, which treats the geometry of the sphere intrinsically, analyzing the inherent structure of the spherical surface and circles drawn on it rather than primarily treating it as a surface embedded in three-dimensional space. In late antiquity, the Spherics was part of a collection of treatises now called the Little Astronomy, an assortment of shorter works on geometry and astronomy building on Euclid's Elements. Other works in the collection included Aristarchus' On the Sizes and Distances, Autolycus' On Rising and Settings and On the Moving Sphere, Euclid's Catoptrics, Data, Optics, and Phenomena, Hypsicles' On Ascensions, Theodosius' On Geographic Places and On Days and Nights, and Menelaus' Spherics. Often several of these were bound together in a single volume. 
During the Islamic Golden Age, the books in the collection were translated into Arabic, and with the addition of a few new works, were known as the Middle Books, intended to fit between the Elements and Ptolemy's Almagest. Authoritative critical editions of the Greek text, compiled from several manuscripts, were made by Heiberg (1927) and Czinczenheim (2000). Sidoli & Thomas (2023) is an English translation by modern scholars. == Editions and translations == partial edition in: Valla, Giorgio, ed. (1501). "De sphaericis (book XII, chapter V)". De fugiendis et expetendis rebus (in Latin). Vol. 1. Venetiis: Aldus Manutius. Sphera mundi noviter recognita cum commentariis et authoribus in hoc volumine contentis, videlicet [...] Theodosii de Spheris [...] (in Latin). Venetiis. 1518. Vögelin, Johannes, ed. (1529). Theodosii de Sphaericis libri tres (in Latin). Vienna: Joannes Singrenius. Maurolico, Francesco, ed. (1558). Theodosii sphaericorum elementorum libri III, ex traditione Maurolyci Messanensis mathematici (in Latin). Messina: Petrus Spira mense Augusto. Péna, Jean, ed. (1558). Theodosij Tripolitae Sphaericorum, libri tres (in Greek and Latin). Paris: Andreas Wechelus. Dasypodius, Conrad, ed. (1573). Sphaericae doctrinae propositiones (in Latin and Greek). Argentorati: Excudebat Christianus Mylius. Clavius, Christopher, ed. (1586). Theodosii Tripolitae Sphaericorum Libri III (in Latin). Rome: Ex Typographia Dominici Basae. Henrion, Denis, ed. (1615). Les trois livres des Élémens spériques de Théodose Tripolitain (in French). Paris: Chez Abraham Pacard. Hérigone, Pierre, ed. (1637). "Theodosii Sphaerica = Sphériques de Théodose". Cursus mathematicus = Cours mathématique (in Latin and French). Vol. 5. Parisiis: Henry le Gras. pp. 218–329. Barrow, Isaac, ed. (1675). Theodosii Sphaerica: Methodo Nova Illustrata, & Succinctè Demonstrata (in Latin). London: Guil. Godbid. Hunt, Joseph, ed. (1707). Theodosiou Sphairikōn biblia 3. 
Theodosii Sphaericorum libri tres (in Greek and Latin). Oxford: H. Clements. Stone, Edmund, ed. (1721). Clavius's Commentary on the Sphericks of Theodosius Tripolitae: or, Spherical Elements. London: J. Senex. Nizze, Ernst, ed. (1826). Die Sphärik des Theodosios (in German). Stralsund. Nizze, Ernst, ed. (1852). Theodosii Tripolitae Sphaericorum Libros Tres (in Greek and Latin). Berlin: Georgii Reimer. Heiberg, Johan Ludvig, ed. (1927). Theodosius. Sphaerica (in Greek and Latin). Berlin: Weidmannsche Buchhandlung. Ver Eecke, Paul, ed. (1927). Les sphériques de Théodose de Tripoli (in French). Bruges: Brouwer et Cie. Czwalina, Arthur, ed. (1931). Autolykos: Rotierende kugel und Aufgang und untergang der gestirne. Theodosios von Tripolis: Sphaerik. Übersetzt und mit anmerkungen versehen (in German). Leipzig: Akademische verlagsgesellschaft m. b. h. Naṣīr al-Dīn al-Ṭūsī, ed. (1939). Kitāb al-ukar li-Thāʾudhūsiyūs: Taḥrīr al-alāma al-faylasūf al-H̱awāǧah Naṣīr al-Dīn Muḥammad ibn Muḥammad ibn al-Ḥasan al-Ṭūsī كتاب الاكر لثاوذوسيوس: تحرير العلامة الفيلسوف الخوا نصير الدين محمد بن محمد بن الحسن الطوصي (in Arabic). Ḥaydarʾābād: Dāʾirat al-maʿārif al-ʿuthmānīyyah. Martin, Thomas J., ed. (1975). The Arabic Translation of Theodosius's Sphaerica (PhD thesis) (in Arabic and English). University of St. Andrews. Czinczenheim, Claire, ed. (2000). Édition, traduction et commentaire des Sphériques de Théodose (PhD thesis) (in Greek and French). Université de Paris IV, Paris-Sorbonne. Spandagos, Vangelēs, ed. (2000). Ta Sphairika tu Theodosiu tu Tripolitu Τα Σφαιρικα του Θεοδοσιου του Τριπολιτου (in Greek). Athens: Aithra. ISBN 9789607007889. Kunitzsch, Paul; Lorch, Richard, eds. (2010). Theodosius, "Sphaerica": Arabic and Medieval Latin Translations (in Arabic, Latin, and English). Stuttgart: Franz Steiner. ISBN 9783515092883. Sidoli, Nathan; Thomas, Robert Spencer David, eds. (2023). The Spherics of Theodosios. London: Routledge. doi:10.4324/9781003142164. ISBN 9780367557300. 
== Notes == == References == Lorch, Richard (1996). "The transmission of Theodosius' Sphaerica". In Folkerts, Menso (ed.). Mathematische Probleme im Mittelalter: Der lateinische und arabische Sprachbereich. Wiesbaden: Harrassowitz. pp. 159–184. Malpangotto, Michela (2010). "Graphical Choices and Geometrical Thought in the Transmission of Theodosius' Spherics from Antiquity to the Renaissance". Archive for History of Exact Sciences. 64 (1): 75–112. doi:10.1007/s00407-009-0054-1. JSTOR 41342412. Sidoli, Nathan; Saito, Ken (2009). "The role of geometrical construction in Theodosius's Spherics" (PDF). Archive for History of Exact Sciences. 63 (6): 581–609. doi:10.1007/s00407-009-0045-2. JSTOR 41134325. Sidoli, Nathan; Kusuba, Takanori (2017). "Naṣīr al-Dīn al-Ṭūsī's revision of Theodosius's Spherics" (PDF). In Iqbal, Muzaffar (ed.). New Perspectives on the History of Islamic Science. Vol. 3. Routledge. pp. 355–392. doi:10.4324/9781315248011-18. Thomas, Robert S.D. (2013). "Acts of Geometrical Construction in the Spherics of Theodosios". From Alexandria, Through Baghdad. Springer. pp. 227–237. doi:10.1007/978-3-642-36736-6_11. Thomas, Robert S.D. (2018). "The definitions and theorems of The Spherics of Theodosios". In Sidoli, Nathan; Brummelen, Glen Van (eds.). Research in History and Philosophy of Mathematics. CSHPM Annual Meeting, Toronto, Ontario, May 28–30 2017. Springer. pp. 1–21. doi:10.1007/978-3-642-36736-6_11. Thomas, Robert S.D. (2018). "An Appreciation of the First Book of Spherics". Mathematics Magazine. 91 (1): 3–15. doi:10.1080/0025570X.2017.1404798. JSTOR 48664899.
Wikipedia:Spherics (Menelaus)#0
Wikipedia:Sphuṭacandrāpti#0
Sphuṭacandrāpti (Computation of True Moon) is a treatise in Sanskrit composed by the fourteenth-century CE Kerala astronomer-mathematician Sangamagrama Madhava. The treatise enunciates a method for the computation of the position of the moon at intervals of 40 minutes each throughout the day. This is one of only two works of Madhava that have survived to modern times, the other one being Veṇvāroha. However, both Sphuṭacandrāpti and Veṇvāroha have more or less the same contents, that of the latter being apparently a more refined version of that of the former. == Critical editions == K. V. Sarma, while working at the Vishveshvaranand Institute of Sanskrit and Indological Studies, Hoshiarpur, brought out a critical edition of the treatise in 1973, with an introduction, translation and notes. == See also == Indian mathematics Indian mathematicians Kerala school of astronomy and mathematics == References == == External links == Wikimedia Commons has a file available for Sphuṭacandrāpti.
Wikipedia:Spillover (experiment)#0
In experiments, a spillover is an indirect effect on a subject not directly treated by the experiment. These effects are useful for policy analysis but complicate the statistical analysis of experiments. Analysis of spillover effects involves relaxing the non-interference assumption, or SUTVA (Stable Unit Treatment Value Assumption). This assumption requires that subject i's revelation of its potential outcomes depends only on its own treatment status, and is unaffected by another subject j's treatment status. In ordinary settings where the researcher seeks to estimate the average treatment effect (ATE), violation of the non-interference assumption means that traditional estimators for the ATE, such as difference-in-means, may be biased. However, there are many real-world instances where a unit's revelation of potential outcomes depends on another unit's treatment assignment, and analyzing these effects may be just as important as analyzing the direct effect of treatment. One solution to this problem is to redefine the causal estimand of interest by redefining a subject's potential outcomes in terms of one's own treatment status and related subjects' treatment status. The researcher can then analyze various estimands of interest separately. One important assumption here is that this process captures all patterns of spillovers, and that there are no unmodeled spillovers remaining (e.g., spillovers occur within a two-person household but not beyond). Once the potential outcomes are redefined, the rest of the statistical analysis involves modeling the probabilities of being exposed to treatment given some schedule of treatment assignment, and using inverse probability weighting (IPW) to produce unbiased (or asymptotically unbiased) estimates of the estimand of interest. == Examples of spillover effects == Spillover effects can occur in a variety of different ways. 
Common applications include the analysis of social network spillovers and geographic spillovers. Examples include the following: Communication: An intervention that conveys information about a technology or product can influence the take-up decisions of others in their network if it diffuses beyond the initial user. Competition: Job placement assistance for young job seekers may influence the job market prospects of individuals who did not receive the training but are competing for the same jobs. Contagion: Receiving deworming drugs can decrease others' likelihood of contracting the disease. Deterrence: Information about government audits in specific municipalities can spread to nearby municipalities. Displacement: A hotspot policing intervention that increases policing presence on a given street can lead to the displacement of crime onto nearby untreated streets. Reallocation of resources: A hotspot policing intervention that increases policing presence on a given street can decrease police presence on nearby streets. Social comparison: A program that randomizes individuals to receive a voucher to move to a new neighborhood can additionally influence the control group's beliefs about their housing conditions. In such examples, treatment in a randomized-control trial can have a direct effect on those who receive the intervention and also a spillover effect on those who were not directly treated. == Statistical issues == Estimating spillover effects in experiments introduces three statistical issues that researchers must take into account. === Relaxing the non-interference assumption === One key assumption for unbiased inference is the non-interference assumption, which posits that an individual's potential outcomes are only revealed by their own treatment assignment and not the treatment assignment of others. This assumption has also been called the Individualistic Treatment Response or the stable unit treatment value assumption. 
Non-interference is violated when subjects can communicate with each other about their treatments, decisions, or experiences, thereby influencing each other's potential outcomes. If the non-interference assumption does not hold, units no longer have just two potential outcomes (treated and control), but a variety of other potential outcomes that depend on other units’ treatment assignments, which complicates the estimation of the average treatment effect. Estimating spillover effects requires relaxing the non-interference assumption. This is because a unit's outcomes depend not only on its treatment assignment but also on the treatment assignment of its neighbors. The researcher must posit a set of potential outcomes that limit the type of interference. As an example, consider an experiment that sends out political information to undergraduate students to increase their political participation. If the study population consists of all students living with a roommate in a college dormitory, one can imagine four sets of potential outcomes, depending on whether the student or their partner received the information (assume no spillover outside of each two-person room): Y0,0 refers to an individual's potential outcomes when they are not treated (0) and neither was their roommate (0). Y0,1 refers to an individual's potential outcome when they are not treated (0) but their roommate was treated (1). Y1,0 refers to an individual's potential outcome when they are treated (1) but their roommate was not treated (0). Y1,1 refers to an individual's potential outcome when they are treated (1) and their roommate was treated (1). Now an individual's outcomes are influenced by both whether they received the treatment and whether their roommate received the treatment. We can estimate one type of spillover effect by looking at how one's outcomes change depending on whether their roommate received the treatment or not, given the individual did not receive treatment directly. 
This would be captured by the difference Y0,1 − Y0,0. Similarly, we can measure how one's outcomes change depending on their roommate's treatment status when the individual themselves is treated; this amounts to taking the difference Y1,1 − Y1,0. While researchers typically embrace experiments because they require less demanding assumptions, spillovers can be “unlimited in extent and impossible to specify in form.” The researcher must make specific assumptions about which types of spillovers are operative. One can relax the non-interference assumption in various ways, depending on how spillovers are thought to occur in a given setting. One way to model spillover effects is with a binary indicator for whether an immediate neighbor was also treated, as in the example above. One can also posit spillover effects that depend on the number of immediate neighbors that were also treated, also known as k-level effects. === Using randomization inference for hypothesis testing === In experimental settings where treatment is randomized, we can use randomization inference to test for the existence of spillover effects. The key advantage of this approach is that randomization inference is finite-sample valid, without requiring correct model specification or normal asymptotics. To be specific, consider the aforementioned example experiment in college dorm rooms, and suppose we want to test: H 0 : Y 0 , 1 = Y 0 , 0 {\displaystyle H_{0}:Y_{0,1}=Y_{0,0}} This hypothesis posits that there is no spillover effect on students who do not receive the information (i.e., students who are in the control condition). Rejecting this hypothesis implies that even when students do not receive the information directly, they may still receive it indirectly from treated roommates; hence, there is a spillover effect. To test a hypothesis like H 0 {\displaystyle H_{0}} we can apply a conditional Fisher randomization test.
Let R i j = 1 {\displaystyle R_{ij}=1} be an indicator denoting that students i , j {\displaystyle i,j} are roommates, where we assume for simplicity that each student has exactly one roommate. Suppose this is a completely randomized design and let D i {\displaystyle D_{i}} denote the binary treatment of student i {\displaystyle i} . Then:
1. Define I 1 = { i : ∑ j R i j ( 1 − D i ) D j = 1 } {\displaystyle I_{1}=\{i:\sum _{j}R_{ij}(1-D_{i})D_{j}=1\}} and I 0 = { i : ∑ j R i j ( 1 − D i ) ( 1 − D j ) = 1 } {\displaystyle I_{0}=\{i:\sum _{j}R_{ij}(1-D_{i})(1-D_{j})=1\}} .
2. Calculate an estimate of the spillover effect: T ( I 1 , I 0 ) = | ∑ i ∈ I 1 Y i | I 1 | − ∑ i ∈ I 0 Y i | I 0 | | {\displaystyle T(I_{1},I_{0})={\big |}{\frac {\sum _{i\in I_{1}}Y_{i}}{|I_{1}|}}-{\frac {\sum _{i\in I_{0}}Y_{i}}{|I_{0}|}}{\big |}} . This is the test statistic.
3. For l = 1 , 2 , … , L {\displaystyle l=1,2,\ldots ,L} : randomly shuffle units between I 1 , I 0 {\displaystyle I_{1},I_{0}} , producing new randomized sets I 1 ( l ) , I 0 ( l ) {\displaystyle I_{1}^{(l)},I_{0}^{(l)}} akin to the permutation test, and recalculate the test statistic T ( l ) = T ( I 1 ( l ) , I 0 ( l ) ) {\displaystyle T^{(l)}=T(I_{1}^{(l)},I_{0}^{(l)})} .
4. Calculate the randomization p-value: p v a l = 1 L + 1 [ 1 + ∑ l 1 ( T ( l ) > T o b s ) ] . {\displaystyle \mathrm {pval} ={\frac {1}{L+1}}[1+\sum _{l}1(T^{(l)}>T^{obs})].}
To explain this procedure, in Step 1 we define the sub-populations of interest: I 1 {\displaystyle I_{1}} is the set of students who are in control but whose roommate is treated, and I 0 {\displaystyle I_{0}} is the set of students in control whose roommates are also in control. These are known as "focal units". In Step 2, we define an estimate of the spillover effect as Y ¯ 0 , 1 − Y ¯ 0 , 0 {\displaystyle {\bar {Y}}_{0,1}-{\bar {Y}}_{0,0}} , the difference in outcomes between populations I 1 , I 0 {\displaystyle I_{1},I_{0}} .
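The four steps can be sketched in code on simulated data; the room count, the outcome model, and the 0.5 spillover effect are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 100 dorm rooms, each with exactly two roommates.
n_rooms = 100
D = rng.binomial(1, 0.5, size=(n_rooms, 2))        # treatment of each roommate
# Simulated outcomes: untreated students whose roommate is treated get an
# (assumed, illustrative) spillover effect of 0.5.
Y = rng.normal(size=(n_rooms, 2)) + 0.5 * D[:, ::-1] * (1 - D)

# Step 1: focal units -- students in control, split by roommate's treatment.
control = D == 0
partner_treated = D[:, ::-1] == 1
Y1 = Y[control & partner_treated]                  # exposure (0,1)
Y0 = Y[control & ~partner_treated]                 # exposure (0,0)

# Step 2: observed test statistic (absolute difference in means).
def stat(a, b):
    return abs(a.mean() - b.mean())

T_obs = stat(Y1, Y0)

# Step 3: permute the exposure labels among focal units, holding outcomes fixed.
pooled = np.concatenate([Y1, Y0])
n1 = len(Y1)
L = 2000
T_perm = np.empty(L)
for l in range(L):
    perm = rng.permutation(pooled)
    T_perm[l] = stat(perm[:n1], perm[n1:])

# Step 4: randomization p-value with the 1/(L+1) finite-sample correction.
pval = (1 + np.sum(T_perm > T_obs)) / (L + 1)
print(round(T_obs, 3), round(pval, 4))
```

Because the permutation respects the actual randomization among focal units, the resulting p-value is valid in finite samples regardless of the outcome distribution.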
Crucially, in randomization inference, we don't need to derive the sampling distribution of this estimator. The validity of the procedure stems from Step 3 where we resample treatment according to the true experimental variation (here, simply permuting the "exposures" 01 and 00) while keeping the outcomes fixed under the null. Finally, in Step 4 we calculate the randomization p-value. The 1 / ( L + 1 ) {\displaystyle 1/(L+1)} term is a finite-sample correction to avoid issues with repeated test statistic values. As mentioned before, the randomization p-value is valid for any finite sample size and does not rely on correct model specification. This randomization procedure can be extended to arbitrary designs and more general definitions of spillover effects, although care must be taken to properly account for the interference structure between all pairs of units. The above procedure can also be used to obtain an interval estimate of a constant spillover effect through test inversion. Moreover, the same procedure could be modified for testing whether the "average" spillover effect is zero by using an appropriately studentized test statistic in Step 2. ==== Exposure mappings ==== The next step after redefining the causal estimand of interest is to characterize the probability of spillover exposure for each subject in the analysis, given some vector of treatment assignment. Aronow and Samii (2017) present a method for obtaining a matrix of exposure probabilities for each unit in the analysis. First, define a diagonal matrix with a vector of treatment assignment probabilities P = diag ⁡ ( p z 1 , p z 2 , … , p z | Ω | ) . {\displaystyle \mathbf {P} =\operatorname {diag} \left(p_{\mathbf {z} _{1}},p_{\mathbf {z} _{2}},\dots ,p_{\mathbf {z} _{|\Omega |}}\right).} Second, define an indicator matrix I {\displaystyle \mathbf {I} } of whether the unit is exposed to spillover or not. 
This is done by using an adjacency matrix as shown on the right, where information regarding a network can be transformed into an indicator matrix. This resulting indicator matrix will contain values of d k {\displaystyle d_{k}} , the realized values of a random binary variable D i = f ( Z , θ i ) {\displaystyle D_{i}=f\left(\mathbf {Z} ,\theta _{i}\right)} , indicating whether that unit has been exposed to spillover or not. Third, obtain the sandwich product I k P I k ′ {\displaystyle \mathbf {I} _{k}\mathbf {P} \mathbf {I} _{k}^{\prime }} , an N × N matrix which contains two elements: the individual probability of exposure π i ( d k ) {\displaystyle \pi _{i}\left(d_{k}\right)} on the diagonal, and the joint exposure probabilities π i j ( d k ) {\displaystyle \pi _{ij}\left(d_{k}\right)} on the off-diagonals: I k P I k ′ = [ π 1 ( d k ) π 12 ( d k ) ⋯ π 1 N ( d k ) π 21 ( d k ) π 2 ( d k ) ⋯ π 2 N ( d k ) ⋮ ⋮ ⋱ π N 1 ( d k ) π N 2 ( d k ) π N ( d k ) ] {\displaystyle \mathbf {I} _{k}\mathbf {P} \mathbf {I} _{k}^{\prime }=\left[{\begin{array}{cccc}{\pi _{1}(d_{k})}&\pi _{12}(d_{k})&\cdots &\pi _{1N}(d_{k})\\\pi _{21}(d_{k})&\pi _{2}(d_{k})&\cdots &\pi _{2N}(d_{k})\\\vdots &\vdots &\ddots &\\\pi _{N1}(d_{k})&\pi _{N2}(d_{k})&{}&\pi _{N}(d_{k})\end{array}}\right]} In a similar fashion, the joint probability of exposure of i being in exposure condition d k {\displaystyle d_{k}} and j being in a different exposure condition d l {\displaystyle d_{l}} can be obtained by calculating I k P I l ′ {\displaystyle \mathbf {I} _{k}\mathbf {P} \mathbf {I} _{l}^{\prime }} : I k P I l ′ = [ 0 π 12 ( d k , d l ) … π 1 N ( d k , d l ) π 21 ( d k , d l ) 0 … π 2 N ( d k , d l ) ⋮ ⋮ ⋱ π N 1 ( d k , d l ) π N 2 ( d k , d l ) 0 ] {\displaystyle \mathbf {I} _{k}\mathbf {P} \mathbf {I} _{l}^{\prime }=\left[{\begin{array}{c c c c }{0}&{\pi _{12}\left(d_{k},d_{l}\right)}&{\dots }&{\pi _{1N}\left(d_{k},d_{l}\right)}\\{\pi _{21}\left(d_{k},d_{l}\right)}&{0}&{\ldots }&{\pi _{2N}\left(d_{k},d_{l}\right)}\\{\vdots }&{\vdots }&{\ddots }&{}\\\pi _{N1}(d_{k},d_{l})&\pi _{N2}(d_{k},d_{l})&&0\end{array}}\right]} Notice that the diagonals on the second matrix are 0 because a subject cannot be simultaneously exposed to two different exposure conditions at once, in the same way that a subject cannot reveal two different potential outcomes at once. The obtained exposure probabilities π {\displaystyle \pi } can then be used for inverse probability weighting (IPW, described below), in an estimator such as the Horvitz–Thompson estimator. One important caveat is that this procedure excludes all units whose probability of exposure is zero (e.g., a unit that is not connected to any other units), since these numbers end up in the denominator of the IPW regression. === Need for inverse probability weights === Estimating spillover effects requires additional care: although treatment is directly assigned, spillover status is indirectly assigned and can lead to differential probabilities of spillover assignment for units. For example, a subject with 10 friend connections is more likely to be indirectly exposed to treatment than a subject with just one friend connection. Not accounting for varying probabilities of spillover exposure can bias estimates of the average spillover effect. Figure 1 displays an example where units have varying probabilities of being assigned to the spillover condition. Subfigure A displays a network of 25 nodes where the units in green are eligible to receive treatment. Spillovers are defined as sharing at least one edge with a treated unit. For example, if node 16 is treated, nodes 11, 17, and 21 would be classified as spillover units. Suppose three of these six green units are selected randomly to be treated, so that ( 6 3 ) = 20 {\displaystyle {\binom {6}{3}}=20} different sets of treatment assignments are possible. In this case, subfigure B displays each node's probability of being assigned to the spillover condition.
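Exposure probabilities of this kind can be computed exactly by enumerating all possible treatment assignments. A sketch on a small hypothetical graph (the node labels and edges below are illustrative, not the actual network of Figure 1):

```python
from itertools import combinations

# Hypothetical network as an adjacency list: nodes 0-5 are eligible for
# treatment; the values are their (ineligible) neighbours.
edges = {
    0: [6, 7], 1: [7, 8], 2: [8, 9],
    3: [6, 9], 4: [9, 10], 5: [10, 11],
}
eligible = list(edges)
nodes = sorted({n for nbrs in edges.values() for n in nbrs})

# Enumerate all C(6,3) = 20 equally likely assignments of 3 treated units,
# and count how often each ineligible node shares an edge with a treated unit.
assignments = list(combinations(eligible, 3))
prob_spillover = {}
for v in nodes:
    neighbours = {u for u, nbrs in edges.items() if v in nbrs}
    exposed = sum(1 for a in assignments if neighbours & set(a))
    prob_spillover[v] = exposed / len(assignments)

# Inverse-probability weights in the style of Horvitz-Thompson; units with
# zero exposure probability would have to be dropped.
weights = {v: 1 / p for v, p in prob_spillover.items() if p > 0}
print(prob_spillover)
```

In this toy graph, a node adjacent to three eligible units is exposed in 19 of the 20 assignments (probability 0.95), while a node with a single eligible neighbour is exposed in only 10 of 20 (probability 0.5), mirroring the disparity described in the text.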
Node 3 is assigned to spillover in 95% of the randomizations because it shares edges with three of the six treatment-eligible units. This node will be a control node in only 5% of randomizations: that is, when the three treated nodes are 14, 16, and 18. Meanwhile, node 15 is assigned to spillover only 50% of the time: if node 14 is not directly treated, node 15 will not be assigned to spillover. ==== Using inverse probability weights ==== When analyzing experiments with varying probabilities of assignment, special precautions should be taken. These differences in assignment probabilities may be neutralized by inverse-probability-weighted (IPW) regression, where each observation is weighted by the inverse of its likelihood of being assigned to the treatment condition observed, using the Horvitz–Thompson estimator. This approach addresses the bias that might arise if potential outcomes were systematically related to assignment probabilities. The downside of this estimator is that it may be fraught with sampling variability if some observations are accorded large weights (i.e., a unit with a low probability of being in the spillover condition is assigned to it by chance). ==== Regression approaches ==== In non-experimental settings, estimating the variability of a spillover effect creates additional difficulty. When the research study has a fixed unit of clustering, such as a school or household, researchers can use traditional standard error adjustment tools like cluster-robust standard errors, which allow for correlations in error terms within clusters but not across them. In other settings, however, there is no fixed unit of clustering. In order to conduct hypothesis testing in these settings, the use of randomization inference is recommended.
This technique allows one to generate p-values and confidence intervals even when spillovers do not adhere to a fixed unit of clustering but nearby units tend to be assigned to similar spillover conditions, as in the case of fuzzy clustering. == See also == Social multiplier effect == References ==
Wikipedia:Spinors in three dimensions#0
In mathematics, the spinor concept as specialised to three dimensions can be treated by means of the traditional notions of dot product and cross product. This is part of the detailed algebraic discussion of the rotation group SO(3). == Formulation == The association of a spinor with a 2×2 complex traceless Hermitian matrix was formulated by Élie Cartan. In detail, given a vector x = (x1, x2, x3) of real (or complex) numbers, one can associate the complex matrix x → → X = ( x 3 x 1 − i x 2 x 1 + i x 2 − x 3 ) . {\displaystyle {\vec {x}}\rightarrow X\ =\left({\begin{matrix}x_{3}&x_{1}-ix_{2}\\x_{1}+ix_{2}&-x_{3}\end{matrix}}\right).} In physics, this is often written as a dot product X ≡ σ → ⋅ x → {\displaystyle X\equiv {\vec {\sigma }}\cdot {\vec {x}}} , where σ → ≡ ( σ 1 , σ 2 , σ 3 ) {\displaystyle {\vec {\sigma }}\equiv (\sigma _{1},\sigma _{2},\sigma _{3})} is the vector form of Pauli matrices. Matrices of this form have the following properties, which relate them intrinsically to the geometry of 3-space: det X = − | x → | 2 {\displaystyle \det X=-|{\vec {x}}|^{2}} , where det {\displaystyle \det } denotes the determinant. X 2 = | x → | 2 I {\displaystyle X^{2}=|{\vec {x}}|^{2}I} , where I is the identity matrix. 1 2 ( X Y + Y X ) = ( x → ⋅ y → ) I {\displaystyle {\frac {1}{2}}(XY+YX)=({\vec {x}}\cdot {\vec {y}})I} : 43 1 2 ( X Y − Y X ) = i Z {\displaystyle {\frac {1}{2}}(XY-YX)=iZ} where Z is the matrix associated to the cross product z → = x → × y → {\displaystyle {\vec {z}}={\vec {x}}\times {\vec {y}}} . If u → {\displaystyle {\vec {u}}} is a unit vector, then − U X U {\displaystyle -UXU} is the matrix associated with the vector that results from reflecting x → {\displaystyle {\vec {x}}} in the plane orthogonal to u → {\displaystyle {\vec {u}}} . The last property can be used to simplify rotational operations. It is an elementary fact from linear algebra that any rotation in 3-space factors as a composition of two reflections. 
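These algebraic identities can be verified numerically; a sketch using NumPy with arbitrary test vectors:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def mat(v):
    """Matrix X associated with the vector v = (x1, x2, x3)."""
    return v[0] * s1 + v[1] * s2 + v[2] * s3

x = np.array([1.0, 2.0, 3.0])      # arbitrary test vectors
y = np.array([-1.0, 0.5, 2.0])
X, Y = mat(x), mat(y)
I = np.eye(2)

# det X = -|x|^2
assert np.isclose(np.linalg.det(X), -x @ x)
# X^2 = |x|^2 I
assert np.allclose(X @ X, (x @ x) * I)
# (XY + YX)/2 = (x . y) I
assert np.allclose((X @ Y + Y @ X) / 2, (x @ y) * I)
# (XY - YX)/2 = i Z with z = x × y
assert np.allclose((X @ Y - Y @ X) / 2, 1j * mat(np.cross(x, y)))
# -UXU reflects x in the plane orthogonal to the unit vector u
u = np.array([0.0, 0.0, 1.0])
U = mat(u)
reflected = x - 2 * (x @ u) * u
assert np.allclose(-U @ X @ U, mat(reflected))
print("all identities verified")
```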
(More generally, any orientation-reversing orthogonal transformation is either a reflection or the product of three reflections.) Thus if R is a rotation which decomposes as the reflection in the plane perpendicular to a unit vector u → 1 {\displaystyle {\vec {u}}_{1}} followed by the reflection in the plane perpendicular to u → 2 {\displaystyle {\vec {u}}_{2}} , then the matrix U 2 U 1 X U 1 U 2 {\displaystyle U_{2}U_{1}XU_{1}U_{2}} represents the rotation of the vector x → {\displaystyle {\vec {x}}} through R. Having effectively encoded all the rotational linear geometry of 3-space into a set of complex 2×2 matrices, it is natural to ask what role, if any, the 2×1 matrices (i.e., the column vectors) play. Provisionally, a spinor is a column vector ξ = [ ξ 1 ξ 2 ] , {\displaystyle \xi =\left[{\begin{matrix}\xi _{1}\\\xi _{2}\end{matrix}}\right],} with complex entries ξ1 and ξ2. The space of spinors is evidently acted upon by complex 2×2 matrices. As shown above, the product of two reflections in a pair of unit vectors defines a 2×2 matrix whose action on Euclidean vectors is a rotation. So there is an action of rotations on spinors. However, there is one important caveat: the factorization of a rotation is not unique. Clearly, if X ↦ R X R − 1 {\displaystyle X\mapsto RXR^{-1}} is a representation of a rotation, then replacing R by −R will yield the same rotation. In fact, one can easily show that this is the only ambiguity that arises. Thus the action of a rotation on a spinor is always double-valued. == History == There were some precursors to Cartan's work with 2×2 complex matrices: Wolfgang Pauli had used these matrices so intensively that elements of a certain basis of a four-dimensional subspace are called Pauli matrices σi, so that the Hermitian matrix is written as a Pauli vector x → ⋅ σ → .
{\displaystyle {\vec {x}}\cdot {\vec {\sigma }}.} In the mid-19th century the algebraic operations of this algebra of four complex dimensions were studied as biquaternions. Michael Stone and Paul Goldbart, in Mathematics for Physics, contest this, saying, "The spin representations were discovered by Élie Cartan in 1913, some years before they were needed in physics." == Formulation using isotropic vectors == Spinors can be constructed directly from isotropic vectors in 3-space without using the quaternionic construction. To motivate this introduction of spinors, suppose that X is a matrix representing a vector x in complex 3-space. Suppose further that x is isotropic: i.e., x ⋅ x = x 1 2 + x 2 2 + x 3 2 = 0. {\displaystyle {\mathbf {x} }\cdot {\mathbf {x} }=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=0.} Then, since the determinant of X is zero, there is a proportionality between its rows or columns. Thus the matrix may be written as an outer product of two complex 2-vectors: X = 2 [ ξ 1 ξ 2 ] [ − ξ 2 ξ 1 ] . {\displaystyle X=2\left[{\begin{matrix}\xi _{1}\\\xi _{2}\end{matrix}}\right]\left[{\begin{matrix}-\xi _{2}&\xi _{1}\end{matrix}}\right].} This factorization yields an overdetermined system of equations in the coordinates of the vector x: x 1 = ξ 1 2 − ξ 2 2 , x 2 = i ( ξ 1 2 + ξ 2 2 ) , x 3 = − 2 ξ 1 ξ 2 {\displaystyle x_{1}=\xi _{1}^{2}-\xi _{2}^{2},\quad x_{2}=i\left(\xi _{1}^{2}+\xi _{2}^{2}\right),\quad x_{3}=-2\xi _{1}\xi _{2}} (1) subject to the constraint x 1 2 + x 2 2 + x 3 2 = 0. {\displaystyle x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=0.} (2) This system admits the solutions ξ 1 = ± x 1 − i x 2 2 , ξ 2 = ∓ − x 1 − i x 2 2 . {\displaystyle \xi _{1}=\pm {\sqrt {\frac {x_{1}-ix_{2}}{2}}},\quad \xi _{2}=\mp {\sqrt {\frac {-x_{1}-ix_{2}}{2}}}.} (3) Either choice of sign solves the system (1). Thus a spinor may be viewed as an isotropic vector, along with a choice of sign. Note that because of the logarithmic branching of the square root, it is impossible to choose a sign consistently so that (3) varies continuously along a full rotation among the coordinates x. In spite of this ambiguity of the representation of a rotation on a spinor, the rotations do act unambiguously by a fractional linear transformation on the ratio ξ1:ξ2, since one choice of sign in the solution (3) forces the choice of the second sign. In particular, the space of spinors is a projective representation of the orthogonal group.
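A quick numerical check of this factorization, starting from an arbitrary spinor (the component values below are chosen only for illustration):

```python
import numpy as np

# Take an arbitrary spinor and build the rank-one matrix of the factorization
# X = 2 [xi1; xi2] [-xi2, xi1] given in the text.
xi1, xi2 = 0.3 + 0.7j, -1.1 + 0.2j
X = 2 * np.outer([xi1, xi2], [-xi2, xi1])

# Read off the vector x from X = [[x3, x1 - i x2], [x1 + i x2, -x3]].
x3 = X[0, 0]
x1 = (X[0, 1] + X[1, 0]) / 2
x2 = (X[1, 0] - X[0, 1]) / (2j)

# The resulting complex vector is automatically isotropic: x . x = 0.
assert np.isclose(x1**2 + x2**2 + x3**2, 0)
# And X is traceless with zero determinant, as required of a rank-one
# matrix associated to an isotropic vector.
assert np.isclose(np.trace(X), 0)
assert np.isclose(np.linalg.det(X), 0)
print(x1, x2, x3)
```

Every spinor thus determines an isotropic vector, while recovering the spinor from the vector involves the sign ambiguity discussed above.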
As a consequence of this point of view, spinors may be regarded as a kind of "square root" of isotropic vectors. Specifically, introducing the matrix C = ( 0 1 − 1 0 ) , {\displaystyle C=\left({\begin{matrix}0&1\\-1&0\end{matrix}}\right),} the system (1) is equivalent to solving X = 2 ξ tξ C for the undetermined spinor ξ. A fortiori, if the roles of ξ and x are now reversed, the form Q(ξ) = x defines, for each spinor ξ, a vector x quadratically in the components of ξ. If this quadratic form is polarized, it determines a bilinear vector-valued form on spinors Q(μ, ξ). This bilinear form then transforms tensorially under a reflection or a rotation. == Reality == The above considerations apply equally well whether the original Euclidean space under consideration is real or complex. When the space is real, however, spinors possess some additional structure which in turn facilitates a complete description of the representation of the rotation group. Suppose, for simplicity, that the inner product on 3-space has positive-definite signature: x ⋅ x = x 1 2 + x 2 2 + x 3 2 . {\displaystyle {\mathbf {x} }\cdot {\mathbf {x} }=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}.} (4) With this convention, real vectors correspond to Hermitian matrices. Furthermore, real rotations preserving the form (4) correspond (in the double-valued sense) to unitary matrices of determinant one. In modern terms, this presents the special unitary group SU(2) as a double cover of SO(3). As a consequence, the spinor Hermitian product ⟨ μ | ξ ⟩ = μ ¯ 1 ξ 1 + μ ¯ 2 ξ 2 {\displaystyle \langle \mu |\xi \rangle ={\bar {\mu }}_{1}\xi _{1}+{\bar {\mu }}_{2}\xi _{2}} (5) is preserved by all rotations, and therefore is canonical. If, however, the signature of the inner product on 3-space is indefinite (i.e., non-degenerate, but also not positive definite), then the foregoing analysis must be adjusted to reflect this. Suppose then that the length form on 3-space is given by: x ⋅ x = x 1 2 − x 2 2 + x 3 2 . {\displaystyle {\mathbf {x} }\cdot {\mathbf {x} }=x_{1}^{2}-x_{2}^{2}+x_{3}^{2}.} (4′) Then the construction of spinors of the preceding sections proceeds, but with x 2 {\displaystyle x_{2}} replacing i x 2 {\displaystyle ix_{2}} in all the formulas.
With this new convention, the matrix associated to a real vector ( x 1 , x 2 , x 3 ) {\displaystyle (x_{1},x_{2},x_{3})} is itself real: ( x 3 x 1 − x 2 x 1 + x 2 − x 3 ) {\displaystyle \left({\begin{matrix}x_{3}&x_{1}-x_{2}\\x_{1}+x_{2}&-x_{3}\end{matrix}}\right)} . The form (5) is no longer invariant under a real rotation (or reversal), since the group stabilizing (4′) is now a Lorentz group O(2,1). Instead, the anti-Hermitian form ⟨ μ | ξ ⟩ = μ ¯ 1 ξ 2 − μ ¯ 2 ξ 1 {\displaystyle \langle \mu |\xi \rangle ={\bar {\mu }}_{1}\xi _{2}-{\bar {\mu }}_{2}\xi _{1}} defines the appropriate notion of inner product for spinors in this metric signature. This form is invariant under transformations in the connected component of the identity of O(2,1). In either case, the quartic form ⟨ μ | ξ ⟩ 2 = length ( Q ( μ ¯ , ξ ) ) 2 {\displaystyle \langle \mu |\xi \rangle ^{2}={\hbox{length}}\left(Q({\bar {\mu }},\xi )\right)^{2}} is fully invariant under O(3) (or O(2,1), respectively), where Q is the vector-valued bilinear form described in the previous section. The fact that this is a quartic invariant, rather than quadratic, has an important consequence. If one confines attention to the group of special orthogonal transformations, then it is possible unambiguously to take the square root of this form and obtain an identification of spinors with their duals. In the language of representation theory, this implies that there is only one irreducible spin representation of SO(3) (or SO(2,1)) up to isomorphism. If, however, reversals (e.g., reflections in a plane) are also allowed, then it is no longer possible to identify spinors with their duals owing to a change of sign on the application of a reflection. Thus there are two irreducible spin representations of O(3) (or O(2,1)), sometimes called the pin representations. == Reality structures == The differences between these two signatures can be codified by the notion of a reality structure on the space of spinors. 
Informally, this is a prescription for taking a complex conjugate of a spinor, but in such a way that it may not correspond to the usual conjugate of the components of a spinor. Specifically, a reality structure is specified by a Hermitian 2 × 2 matrix K whose product with itself is the identity matrix: K2 = Id. The conjugate of a spinor with respect to a reality structure K is defined by ξ ∗ = K ξ ¯ . {\displaystyle \xi ^{*}=K{\bar {\xi }}.} The particular form of the inner product on vectors (e.g., (4) or (4′)) determines a reality structure (up to a factor of −1) by requiring X ¯ = K X K {\displaystyle {\bar {X}}=KXK\,} , whenever X is a matrix associated to a real vector. Thus K = i C is the reality structure in Euclidean signature (4), and K = Id is that for signature (4′). With a reality structure in hand, one has the following results: X is the matrix associated to a real vector if, and only if, X ¯ = K X K . {\displaystyle {\bar {X}}=KXK\,.} If μ and ξ are spinors, then the inner product ⟨ μ | ξ ⟩ = i t μ ∗ C ξ {\displaystyle \langle \mu |\xi \rangle =i\,^{t}\mu ^{*}C\xi } determines a Hermitian form which is invariant under proper orthogonal transformations. == Examples in physics == === Spinors of the Pauli spin matrices === Often, the first example of spinors that a student of physics encounters is the 2×1 spinors used in Pauli's theory of electron spin. The Pauli matrices are a vector of three 2×2 matrices that are used as spin operators. Given a unit vector in 3 dimensions, for example (a, b, c), one takes a dot product with the Pauli spin matrices to obtain a spin matrix for spin in the direction of the unit vector. The eigenvectors of that spin matrix are the spinors for spin-1/2 oriented in the direction given by the vector. Example: u = (0.8, -0.6, 0) is a unit vector.
Dotting this with the Pauli spin matrices gives the matrix: S u = ( 0.8 , − 0.6 , 0.0 ) ⋅ σ → = 0.8 σ 1 − 0.6 σ 2 + 0.0 σ 3 = [ 0.0 0.8 + 0.6 i 0.8 − 0.6 i 0.0 ] {\displaystyle S_{u}=(0.8,-0.6,0.0)\cdot {\vec {\sigma }}=0.8\sigma _{1}-0.6\sigma _{2}+0.0\sigma _{3}={\begin{bmatrix}0.0&0.8+0.6i\\0.8-0.6i&0.0\end{bmatrix}}} The eigenvectors may be found by the usual methods of linear algebra, but a convenient trick is to note that a Pauli spin matrix is an involutory matrix, that is, the square of the above matrix is the identity matrix. Thus a (matrix) solution to the eigenvector problem with eigenvalues of ±1 is simply 1 ± Su. That is, S u ( 1 ± S u ) = ± 1 ( 1 ± S u ) {\displaystyle S_{u}(1\pm S_{u})=\pm 1(1\pm S_{u})} One can then choose either of the columns of the eigenvector matrix as the vector solution, provided that the column chosen is not zero. Taking the first column of the above, eigenvector solutions for the two eigenvalues are: [ 1.0 + ( 0.0 ) 0.0 + ( 0.8 − 0.6 i ) ] , [ 1.0 − ( 0.0 ) 0.0 − ( 0.8 − 0.6 i ) ] {\displaystyle {\begin{bmatrix}1.0+(0.0)\\0.0+(0.8-0.6i)\end{bmatrix}},{\begin{bmatrix}1.0-(0.0)\\0.0-(0.8-0.6i)\end{bmatrix}}} The trick used to find the eigenvectors is related to the concept of ideals, that is, the matrix eigenvectors (1 ± Su)/2 are projection operators or idempotents and therefore each generates an ideal in the Pauli algebra. The same trick works in any Clifford algebra, in particular the Dirac algebra that is discussed below. These projection operators are also seen in density matrix theory where they are examples of pure density matrices. More generally, the projection operator for spin in the (a, b, c) direction is given by 1 2 [ 1 + c a − i b a + i b 1 − c ] {\displaystyle {\frac {1}{2}}{\begin{bmatrix}1+c&a-ib\\a+ib&1-c\end{bmatrix}}} and any non zero column can be taken as the projection operator. 
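The involution trick above can be checked numerically for the worked example u = (0.8, −0.6, 0); a sketch using NumPy:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

a, b, c = 0.8, -0.6, 0.0           # the unit vector from the text's example
Su = a * s1 + b * s2 + c * s3      # spin matrix in the (a, b, c) direction

I = np.eye(2)
# Involution: Su squared is the identity matrix.
assert np.allclose(Su @ Su, I)

# The trick: columns of I + Su are eigenvectors with eigenvalue +1,
# and columns of I - Su are eigenvectors with eigenvalue -1.
for sign in (+1, -1):
    v = (I + sign * Su)[:, 0]      # first (nonzero) column
    assert np.allclose(Su @ v, sign * v)

# (I + Su)/2 is an idempotent projection operator (a pure density matrix).
P = (I + Su) / 2
assert np.allclose(P @ P, P)
print("eigenvector trick verified for u = (0.8, -0.6, 0)")
```

The first column of I + Su is [1, 0.8 − 0.6i], matching the eigenvector listed in the text.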
While the two columns appear different, one can use a2 + b2 + c2 = 1 to show that they are multiples (possibly zero) of the same spinor. === General remarks === In atomic physics and quantum mechanics, the property of spin plays a major role. In addition to their other properties, all particles possess a non-classical property called spin, a kind of intrinsic angular momentum that has no correspondence at all in conventional physics. In the position representation, instead of a wavefunction without spin, ψ = ψ(r), one has with spin: ψ = ψ(r, σ), where σ takes the following discrete set of values: σ = − S ⋅ ℏ , − ( S − 1 ) ⋅ ℏ , . . . , + ( S − 1 ) ⋅ ℏ , + S ⋅ ℏ {\displaystyle \sigma =-S\cdot \hbar ,-(S-1)\cdot \hbar ,...,+(S-1)\cdot \hbar ,+S\cdot \hbar } . The total angular momentum operator, J → {\displaystyle {\vec {\mathbb {J} }}} , of a particle is the sum of the orbital angular momentum (for which only integer values are allowed) and the intrinsic part, the spin. One distinguishes bosons (S = 0, 1, 2, ...) and fermions (S = 1/2, 3/2, 5/2, ...). == See also == Bloch sphere Pauli equation == References ==
Wikipedia:Spiral of Theodorus#0
In geometry, the spiral of Theodorus (also called the square root spiral, Pythagorean spiral, or Pythagoras's snail) is a spiral composed of right triangles, placed edge-to-edge. It was named after Theodorus of Cyrene. == Construction == The spiral is started with an isosceles right triangle, with each leg having unit length. Another right triangle (which is the only automedian right triangle) is formed, with one leg being the hypotenuse of the prior right triangle (with length the square root of 2) and the other leg having length of 1; the length of the hypotenuse of this second right triangle is the square root of 3. The process then repeats; the n {\displaystyle n} th triangle in the sequence is a right triangle with the side lengths n {\displaystyle {\sqrt {n}}} and 1, and with hypotenuse n + 1 {\displaystyle {\sqrt {n+1}}} . For example, the 16th triangle has sides measuring 4 = 16 {\displaystyle 4={\sqrt {16}}} , 1 and hypotenuse of 17 {\displaystyle {\sqrt {17}}} . == History and uses == Although all of Theodorus' work has been lost, Plato put Theodorus into his dialogue Theaetetus, which tells of his work. It is assumed that Theodorus had proved that all of the square roots of non-square integers from 3 to 17 are irrational by means of the Spiral of Theodorus. Plato does not attribute the irrationality of the square root of 2 to Theodorus, because it was well known before him. Theodorus and Theaetetus split the rational numbers and irrational numbers into different categories. == Hypotenuse == Each of the triangles' hypotenuses h n {\displaystyle h_{n}} gives the square root of the corresponding natural number, with h 1 = 2 {\displaystyle h_{1}={\sqrt {2}}} . Plato, tutored by Theodorus, questioned why Theodorus stopped at 17 {\displaystyle {\sqrt {17}}} . The reason is commonly believed to be that the 17 {\displaystyle {\sqrt {17}}} hypotenuse belongs to the last triangle that does not overlap the figure. 
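The construction can be sketched in a few lines of code, representing each vertex as a complex number (the loop bound of 20 triangles is an arbitrary choice):

```python
import math

# Build the spiral vertices as complex numbers: start at 1, and at each step
# attach a unit leg perpendicular to the current hypotenuse (radius).
z = complex(1, 0)
hyp = [abs(z)]
for n in range(1, 20):
    z += 1j * z / abs(z)          # unit leg, rotated 90 degrees from the radius
    hyp.append(abs(z))

# The n-th hypotenuse has length sqrt(n): each step adds 1 to the squared radius.
for n, h in enumerate(hyp, start=1):
    assert math.isclose(h, math.sqrt(n))

print([round(h, 3) for h in hyp[:5]])   # → [1.0, 1.414, 1.732, 2.0, 2.236]
```

Since the added leg is orthogonal to the radius, |z|² grows by exactly 1 per step, which is precisely the Pythagorean recurrence of the construction.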
=== Overlapping === In 1958, Erich Teuffel proved that two hypotenuses will never overlap, regardless of how far the spiral is continued. Also, if the sides of unit length are extended into a line, they will never pass through any of the other vertices of the total figure. == Extension == Theodorus stopped his spiral at the triangle with a hypotenuse of 17 {\displaystyle {\sqrt {17}}} . If the spiral is continued to infinitely many triangles, many more interesting characteristics are found. === Growth rate === ==== Angle ==== If φ n {\displaystyle \varphi _{n}} is the angle of the n {\displaystyle n} th triangle (or spiral segment), then: tan ⁡ ( φ n ) = 1 n . {\displaystyle \tan \left(\varphi _{n}\right)={\frac {1}{\sqrt {n}}}.} Therefore, the growth of the angle φ n {\displaystyle \varphi _{n}} of the next triangle n {\displaystyle n} is: φ n = arctan ⁡ ( 1 n ) . {\displaystyle \varphi _{n}=\arctan \left({\frac {1}{\sqrt {n}}}\right).} The sum of the angles of the first k {\displaystyle k} triangles is called the total angle φ ( k ) {\displaystyle \varphi (k)} for the k {\displaystyle k} th triangle. It grows proportionally to the square root of k {\displaystyle k} , with a bounded correction term c 2 {\displaystyle c_{2}} : φ ( k ) = ∑ n = 1 k φ n = 2 k + c 2 ( k ) {\displaystyle \varphi \left(k\right)=\sum _{n=1}^{k}\varphi _{n}=2{\sqrt {k}}+c_{2}(k)} where lim k → ∞ c 2 ( k ) = − 2.157782996659 … {\displaystyle \lim _{k\to \infty }c_{2}(k)=-2.157782996659\ldots } (OEIS: A105459). ==== Radius ==== The growth of the radius of the spiral at a certain triangle n {\displaystyle n} is Δ r = n + 1 − n . {\displaystyle \Delta r={\sqrt {n+1}}-{\sqrt {n}}.} === Archimedean spiral === The Spiral of Theodorus approximates the Archimedean spiral.
Just as the distance between two windings of the Archimedean spiral equals the mathematical constant π {\displaystyle \pi } , the distance between two consecutive windings of the spiral of Theodorus quickly approaches π {\displaystyle \pi } as the number of windings grows: after only the fifth winding, the distance is a 99.97% accurate approximation to π {\displaystyle \pi } . == Continuous curve == The question of how to interpolate the discrete points of the spiral of Theodorus by a smooth curve was proposed and answered by Philip J. Davis in 2001 by analogy with Euler's formula for the gamma function as an interpolant for the factorial function. Davis found the function T ( x ) = ∏ k = 1 ∞ 1 + i / k 1 + i / x + k ( − 1 < x < ∞ ) {\displaystyle T(x)=\prod _{k=1}^{\infty }{\frac {1+i/{\sqrt {k}}}{1+i/{\sqrt {x+k}}}}\qquad (-1<x<\infty )} which was further studied by his student Leader and by Iserles. This function can be characterized axiomatically as the unique function that satisfies the functional equation f ( x + 1 ) = ( 1 + i x + 1 ) ⋅ f ( x ) , {\displaystyle f(x+1)=\left(1+{\frac {i}{\sqrt {x+1}}}\right)\cdot f(x),} the initial condition f ( 0 ) = 1 , {\displaystyle f(0)=1,} and monotonicity in both argument and modulus. An analytic continuation of Davis' continuous form of the Spiral of Theodorus extends in the opposite direction from the origin. In the figure, the nodes of the original (discrete) Theodorus spiral are shown as small green circles. The blue ones are those added in the opposite direction of the spiral. Only nodes n {\displaystyle n} with an integer value of the polar radius r n = ± | n | {\displaystyle r_{n}=\pm {\sqrt {|n|}}} are numbered in the figure. The dashed circle at the coordinate origin O {\displaystyle O} is the circle of curvature at O {\displaystyle O} .
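The convergence of the winding distance to π can be checked numerically; a sketch (the cutoff of 200,000 triangles is an arbitrary choice):

```python
import math

# Walk the spiral, accumulating the total angle phi(k) = sum arctan(1/sqrt(n)),
# and record the radius sqrt(n + 1) each time a full turn of 2*pi completes.
angle = 0.0
crossings = []
for n in range(1, 200_000):
    angle += math.atan(1 / math.sqrt(n))
    if angle >= 2 * math.pi * (len(crossings) + 1):
        crossings.append(math.sqrt(n + 1))

# Distances between consecutive windings approach pi.
gaps = [b - a for a, b in zip(crossings, crossings[1:])]
print([round(g, 4) for g in gaps[:5]], round(gaps[-1], 6))
```

Because the total angle grows like 2√k, the radius gained over one additional turn of 2π tends to exactly π, up to the discreteness of the triangles.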
== See also == Fermat's spiral List of spirals == References == == Further reading == Davis, P. J. (2001), Spirals from Theodorus to Chaos, A K Peters/CRC Press Gronau, Detlef (March 2004), "The Spiral of Theodorus", The American Mathematical Monthly, 111 (3): 230–237, doi:10.2307/4145130, JSTOR 4145130 Heuvers, J.; Moak, D. S.; Boursaw, B (2000), "The functional equation of the square root spiral", in T. M. Rassias (ed.), Functional Equations and Inequalities, pp. 111–117 Waldvogel, Jörg (2009), Analytic Continuation of the Theodorus Spiral (PDF)