Wikipedia:Ruth Moufang
Ruth Moufang (10 January 1905 – 26 November 1977) was a German mathematician.

== Biography ==
She was born to German chemist Eduard Moufang and Else Fecht Moufang. Eduard Moufang was the son of Friedrich Carl Moufang (1848–1885) from Mainz and Elisabeth von Moers from Mainz. Ruth Moufang's mother, Else Fecht, was the daughter of Alexander Fecht (1848–1913) from Kehl and Ella Scholtz (1847–1921). Ruth was the younger of her parents' two daughters; her elder sister was named Erica.

== Education and career ==
She studied mathematics at the University of Frankfurt. In 1931 she received her Ph.D. on projective geometry under the direction of Max Dehn, and in 1932 she spent a fellowship year in Rome. After her year in Rome, she returned to Germany to lecture at the University of Königsberg and the University of Frankfurt. Denied permission to teach by the minister of education of Nazi Germany, she worked in research and development at Krupp (battleships, U-boats, tanks, howitzers, guns, etc.), where she became the first German woman with a doctorate to be employed as an industrial mathematician. At the end of World War II she was leading the Department of Applied Mathematics at the Krupp arms works. In 1946 she was finally allowed to accept a teaching position at the University of Frankfurt, and in 1957 she became the first woman professor at the university.

== Research ==
Moufang's research in projective geometry built upon the work of David Hilbert. She was responsible for ground-breaking work on non-associative algebraic structures, including the Moufang loops named after her. In 1933, Moufang showed that Desargues's theorem does not hold in the Cayley plane. The Cayley plane uses octonion coordinates, which do not satisfy the associative law. Such connections between geometry and algebra had previously been noted by Karl von Staudt and David Hilbert. Ruth Moufang thus initiated a new branch of geometry called Moufang planes. She published seven papers on this topic: Zur Struktur der projectiven Geometrie der Ebene (1931); Die Einführung in der ebenen Geometrie mit Hilfe des Satzes vom vollständigen Vierseit (1931); Die Schnittpunktssätze des projektiven speziellen Fünfecksnetzes in ihrer Abhängigkeit voneinander (1932); Ein Satz über die Schnittpunktsätze des allgemeinen Fünfecksnetzes (1932); Die Desarguesschen Sätze von Rang 10 (1933); Alternativkörper und der Satz vom vollständigen Vierseit (D9) (1934); and Zur Struktur von Alternativkörpern (1934). Moufang published only one paper on group theory, Einige Untersuchungen über geordnete Schiefkörper, which appeared in print in 1937.

== References ==
O'Connor, John J.; Robertson, Edmund F., "Ruth Moufang", MacTutor History of Mathematics Archive, University of St Andrews
Ruth Moufang at the Mathematics Genealogy Project
"Ruth Moufang", Biographies of Women Mathematicians, Agnes Scott College
Bhama Srinivasan (1984), "Ruth Moufang, 1905–1977", Mathematical Intelligencer 6 (2): 51–55.
Wikipedia:S-procedure
The S-procedure or S-lemma is a mathematical result that gives conditions under which a particular quadratic inequality is a consequence of another quadratic inequality. The S-procedure was developed independently in a number of different contexts and has applications in control theory, linear algebra and mathematical optimization.

== Statement of the S-procedure ==
Let $F_1$ and $F_2$ be symmetric matrices, $g_1$ and $g_2$ be vectors and $h_1$ and $h_2$ be real numbers. Assume that there is some $x_0$ such that the strict inequality
$$x_0^T F_1 x_0 + 2 g_1^T x_0 + h_1 < 0$$
holds. Then the implication
$$x^T F_1 x + 2 g_1^T x + h_1 \leq 0 \implies x^T F_2 x + 2 g_2^T x + h_2 \leq 0$$
holds if and only if there exists some nonnegative number $\lambda$ such that
$$\lambda \begin{bmatrix} F_1 & g_1 \\ g_1^T & h_1 \end{bmatrix} - \begin{bmatrix} F_2 & g_2 \\ g_2^T & h_2 \end{bmatrix}$$
is positive semidefinite.

== See also ==
Linear matrix inequality
Finsler's lemma
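The block-matrix condition above can be checked numerically. The following is a minimal sketch (the function name and tolerance are illustrative, not from any standard library) that assembles the block matrix for a given λ ≥ 0 and tests positive semidefiniteness via its eigenvalues:

```python
import numpy as np

def s_procedure_certificate(F1, g1, h1, F2, g2, h2, lam):
    """Build M(lam) = lam*[[F1, g1], [g1^T, h1]] - [[F2, g2], [g2^T, h2]]
    and report whether it is positive semidefinite, i.e. whether lam
    certifies the implication in the S-procedure."""
    A1 = np.vstack([np.hstack([F1, g1]), np.hstack([g1.T, h1])])
    A2 = np.vstack([np.hstack([F2, g2]), np.hstack([g2.T, h2])])
    M = lam * A1 - A2
    # PSD check with a small tolerance for floating-point round-off.
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))

# Example with n = 1: x^2 - 1 <= 0 implies x^2 - 4 <= 0.
F1, g1, h1 = np.array([[1.0]]), np.array([[0.0]]), np.array([[-1.0]])
F2, g2, h2 = np.array([[1.0]]), np.array([[0.0]]), np.array([[-4.0]])
print(s_procedure_certificate(F1, g1, h1, F2, g2, h2, lam=1.0))  # True
```

With λ = 1 the certificate matrix is diag(0, 3), which is positive semidefinite, so the implication is certified; with λ = 0 the matrix is diag(−1, 4) and the check fails, as expected.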
Wikipedia:S. R. Srinivasa Varadhan
Sathamangalam Ranga Iyengar Srinivasa Varadhan (born 2 January 1940) is an Indian American mathematician. He is known for his fundamental contributions to probability theory and in particular for creating a unified theory of large deviations. He is regarded as one of the fundamental contributors to the theory of diffusion processes, with an orientation towards the refinement and further development of Itô's stochastic calculus. In 2007, he became the first Asian to win the Abel Prize.

== Early life and education ==
Varadhan was born into a Hindu Tamil Brahmin Iyengar family in 1940 in Chennai (then Madras). In 1953 his family moved to Kolkata, and he grew up in Chennai and Kolkata. Varadhan received his undergraduate degree in 1959 and his postgraduate degree in 1960 from Presidency College, Chennai. He received his doctorate from the Indian Statistical Institute in 1963 under C. R. Rao, who arranged for Andrey Kolmogorov to be present at Varadhan's thesis defence. He was one of the "famous four" at ISI during 1956–1963, the others being R. Ranga Rao, K. R. Parthasarathy, and Veeravalli S. Varadarajan.

== Career ==
Since 1963 he has worked at the Courant Institute of Mathematical Sciences at New York University, where he was at first a postdoctoral fellow (1963–66), strongly recommended by Monroe D. Donsker. Here he met Daniel Stroock, who became a close colleague and co-author. In an article in the Notices of the American Mathematical Society, Stroock recalls these early years:

"Varadhan, whom everyone calls Raghu, came to these shores from his native India in the fall of 1963. He arrived by plane at Idlewild Airport and proceeded to Manhattan by bus. His destination was that famous institution with the modest name, The Courant Institute of Mathematical Sciences, where he had been given a postdoctoral fellowship. Varadhan was assigned to one of the many windowless offices in the Courant building, which used to be a hat factory. Yet despite the somewhat humble surroundings, from these offices flowed a remarkably large fraction of the post-war mathematics of which America is justly proud."

Varadhan is currently a professor at the Courant Institute. He is known for his work with Daniel W. Stroock on diffusion processes, and for his work on large deviations with Monroe D. Donsker. He has chaired the Mathematical Sciences jury for the Infosys Prize since 2009 and was its chief guest in 2020. His son, Ashok Varadhan, is an executive at the financial firm Goldman Sachs.

== Awards and honours ==
Varadhan's awards and honours include the National Medal of Science (2010), presented by President Barack Obama, "the highest honour bestowed by the United States government on scientists, engineers and inventors". He also received the Birkhoff Prize (1994), the Margaret and Herman Sokol Award of the Faculty of Arts and Sciences, New York University (1995), and the Leroy P. Steele Prize for Seminal Contribution to Research (1996) from the American Mathematical Society, awarded for his work with Daniel W. Stroock on diffusion processes. He was awarded the Abel Prize in 2007 for his work on large deviations with Monroe D. Donsker. In 2008 the Government of India awarded him the Padma Bhushan, and in 2023 he was awarded India's second-highest civilian honour, the Padma Vibhushan. He also holds two honorary degrees, from Université Pierre et Marie Curie in Paris (2003) and from the Indian Statistical Institute in Kolkata, India (2004). Varadhan is a member of the US National Academy of Sciences (1995) and the Norwegian Academy of Science and Letters (2009). He was elected a Fellow of the American Academy of Arts and Sciences (1988), the Third World Academy of Sciences (1988), the Institute of Mathematical Statistics (1991), the Royal Society (1998), the Indian Academy of Sciences (2004), the Society for Industrial and Applied Mathematics (2009), and the American Mathematical Society (2012).

== Selected publications ==
Convolution Properties of Distributions on Topological Groups. Dissertation, Indian Statistical Institute, 1963.
Varadhan, S. R. S. (1966). "Asymptotic probabilities and differential equations". Communications on Pure and Applied Mathematics. 19 (3): 261–286. doi:10.1002/cpa.3160190303.
Stroock, D. W.; Varadhan, S. R. S. (1972). "On the support of diffusion processes with applications to the strong maximum principle". Proc. of the Sixth Berkeley Symposium on Mathematical Statistics and Probability. 3: 333–359.
Donsker, M. D.; Varadhan, S. R. S. (1975). "On a variational formula for the principal eigenvalues for operators with maximum principle". Proc Natl Acad Sci USA. 72 (3): 780–783. doi:10.1073/pnas.72.3.780. PMC 432403. PMID 16592231.
(with M. D. Donsker) Asymptotic evaluation of certain Markov process expectations for large time. I, Communications on Pure and Applied Mathematics 28 (1975), pp. 1–47; part II, 28 (1975), pp. 279–301; part III, 29 (1976), pp. 389–461; part IV, 36 (1983), pp. 183–212.
Varadhan, S. R. S. (2003). "Stochastic analysis and applications". Bull. Amer. Math. Soc. 40 (1): 89–97. doi:10.1090/s0273-0979-02-00968-0. MR 1943135.

== See also ==
Varadhan's lemma

== External links ==
S. R. Srinivasa Varadhan, home page at the Courant Institute
O'Connor, John J.; Robertson, Edmund F., "S. R. Srinivasa Varadhan", MacTutor History of Mathematics Archive, University of St Andrews
S. R. Srinivasa Varadhan at the Mathematics Genealogy Project
Wikipedia:S. Srisatkunarajah
Professor Sivakolundu Srisatkunarajah (Tamil: சிவக்கொழுந்து சிறிசற்குணராஜா) is a Sri Lankan Tamil mathematician, academic and current vice-chancellor of the University of Jaffna.

== Early life ==
Srisatkunarajah was educated at Hartley College. After school he joined the University of Jaffna in 1979, graduating in 1983 with a B.Sc. honours degree in mathematics. He received a Ph.D. degree from Heriot-Watt University in 1988 after producing a thesis titled On the asymptotics of the heat equation for polygonal domains. He also has a postgraduate diploma in education from the Open University of Sri Lanka (2004). Srisatkunarajah holds dual Australian and Sri Lankan citizenship.

== Career ==
Srisatkunarajah was head of the University of Jaffna's Department of Mathematics and Statistics from 2009 to 2012. He became a professor of mathematics in 2010. He served as dean of the Faculty of Science between July 2013 and July 2016 and dean of the Faculty of Technology from October 2016 to September 2017. In February 2017 the university's council nominated Srisatkunarajah along with T. Velnampy and R. Vigneswaran to be the university's new vice-chancellor. However, in April 2017 President Maithripala Sirisena chose Vigneswaran to be the new vice-chancellor. In August 2020 the university's council nominated Srisatkunarajah along with K. Mikunthan and T. Velnampy to be the university's new vice-chancellor, and Srisatkunarajah was chosen by President Gotabaya Rajapaksa. He assumed duties on 28 August 2020.

== External links ==
Sivakolundu Srisatkunarajah publications indexed by Google Scholar
Wikipedia:Sabine Bögli
Sabine Bögli (also published as Boegli) is a Swiss mathematician specialising in mathematical analysis, including the spectral theory of non-self-adjoint Schrödinger operators and their applications in mathematical physics. Her research has resolved a decades-old dispute over the location of autoionizing resonances in atoms and molecules, answered a longstanding open question on the accumulation of eigenvalues of Schrödinger operators, and disproved a conjecture of Laptev and Safronov relating the magnitude of these eigenvalues to the norm of the potential. She works in England as an associate professor at Durham University.

== Education and career ==
Bögli is Swiss, and speaks Swiss German natively. After secondary education at the Gymnasium Biel-Seeland in Biel/Bienne, she studied mathematics at the University of Bern, receiving a bachelor's degree in 2010, a master's degree in 2012, and a Ph.D. in 2014. Her doctoral dissertation, Spectral approximation for linear operators and applications, was supervised by Christiane Tretter. After postdoctoral research at the University of Bern, Cardiff University, Imperial College London, and Ludwig Maximilian University of Munich, and a Chapman Fellowship at Imperial College London, she joined Durham University as an assistant professor in 2019. She was promoted to associate professor in 2023.

== Recognition ==
Bögli was a 2024 recipient of the Whitehead Prize of the London Mathematical Society, given for her research on Schrödinger operators.

== External links ==
Home page
Wikipedia:Sacred Mathematics
Sacred Mathematics: Japanese Temple Geometry is a book on Sangaku, geometry problems presented on wooden tablets as temple offerings in the Edo period of Japan. It was written by Fukagawa Hidetoshi and Tony Rothman and published in 2008 by the Princeton University Press. It won the PROSE Award of the Association of American Publishers in 2008 as the best book in mathematics for that year.

== Topics ==
The book begins with an introduction to Japanese culture and how that culture led to the production of Sangaku tablets depicting geometry problems, their presentation as votive offerings at temples, and their display at the temples. It also includes a chapter on the Chinese origins of Japanese mathematics and a chapter of biographies of Japanese mathematicians of the time. The Sangaku tablets illustrate theorems in Euclidean geometry, typically involving circles or ellipses, often with a brief textual explanation. They are presented as puzzles for the viewer to prove, and in many cases the proofs require advanced mathematics. In some cases booklets providing a solution were included separately, but in many cases the original solution has been lost or was never provided. The book's main content is the depiction, explanation, and solution of over 100 of these Sangaku puzzles, ranked by their difficulty and selected from over 1800 catalogued Sangaku and over 800 surviving examples. The solutions given use modern mathematical techniques where appropriate rather than attempting to model how the problems would originally have been solved. Also included is a translation of the travel diary of Japanese mathematician Yamaguchi Kanzan (or Kazu), who visited many of the temples where these tablets were displayed and in doing so built a collection of problems from them. The final three chapters provide a scholarly appraisal of precedence in mathematical discoveries between Japan and the west, and an explanation of the techniques that would have been available to Japanese problem-solvers of the time, in particular discussing how they would have solved problems that in western mathematics would have been solved using calculus or inversive geometry.

== Audience and reception ==
Sacred Mathematics can be read by historians of mathematics, professional mathematicians, "people who are simply interested in geometry", and "anyone who likes mathematics", and the puzzles it presents also span a wide range of expertise. Readers are not expected to already have a background in Japanese culture and history. The book is heavily illustrated, with many color photographs, also making it suitable as a mathematical coffee table book despite the depth of the mathematics it discusses. Reviewer Paul J. Campbell calls this book "the most thorough account of Japanese temple geometry available", reviewer Jean-Claude Martzloff calls it "exquisite, artfull, well-thought, and particularly well-documented", reviewer Frank J. Swetz calls it "a well-crafted work that combines mathematics, history, and cultural considerations into an intriguing narrative", and reviewer Noel J. Pinnington calls it "excellent and well-thought-out". However, Pinnington points out that it lacks the citations and bibliography that would be necessary in a work of serious historical scholarship. Reviewer Peter Lu also criticizes the book's review of Japanese culture as superficial and romanticized, based on the oversimplification that the culture was born out of Japan's isolation and uninfluenced by the later mathematics of the west.

== Related works ==
This is the third English-language book on Japanese mathematics from Fukagawa; the first two were Japanese Temple Geometry Problems (with Daniel Pedoe, 1989) and Traditional Japanese Mathematics Problems from the 18th and 19th Centuries (with John Rigby, 2002). Sacred Mathematics expands on a 1998 article on Sangaku by Fukagawa and Rothman in Scientific American.
Wikipedia:Saddlepoint approximation method
The saddlepoint approximation method, initially proposed by Daniels (1954), is a specific example of the mathematical saddlepoint technique applied to statistics, in particular to the distribution of the sum of $N$ independent random variables. It provides a highly accurate approximation formula for the probability density function (PDF) or probability mass function of a distribution, based on the moment generating function. There is also a formula for the CDF of the distribution, proposed by Lugannani and Rice (1980).

== Definition ==
If the moment generating function of a random variable $X = \sum_{i=1}^{N} X_i$ is written as
$$M(t) = E\left[e^{tX}\right] = E\left[e^{t \sum_{i=1}^{N} X_i}\right]$$
and the cumulant generating function as
$$K(t) = \log(M(t)) = \sum_{i=1}^{N} \log E\left[e^{t X_i}\right],$$
then the saddlepoint approximation to the PDF of the distribution of $X$ is defined as
$$\hat{f}_X(x) = \frac{1}{\sqrt{2\pi K''(\hat{s})}} \exp\left(K(\hat{s}) - \hat{s}x\right)\left(1 + \mathcal{R}\right),$$
where $\mathcal{R}$ contains higher-order terms that refine the approximation, and the saddlepoint approximation to the CDF is defined as
$$\hat{F}_X(x) = \begin{cases} \Phi(\hat{w}) + \phi(\hat{w})\left(\dfrac{1}{\hat{w}} - \dfrac{1}{\hat{u}}\right) & \text{for } x \neq \mu, \\[1ex] \dfrac{1}{2} + \dfrac{K'''(0)}{6\sqrt{2\pi}\,K''(0)^{3/2}} & \text{for } x = \mu, \end{cases}$$
where $\hat{s}$ is the solution to $K'(\hat{s}) = x$,
$$\hat{w} = \operatorname{sgn}(\hat{s})\sqrt{2\left(\hat{s}x - K(\hat{s})\right)}, \qquad \hat{u} = \hat{s}\sqrt{K''(\hat{s})},$$
$\Phi(t)$ and $\phi(t)$ are the cumulative distribution function and the probability density function of the standard normal distribution, respectively, and $\mu$ is the mean of the random variable $X$:
$$\mu \triangleq E[X] = \int_{-\infty}^{+\infty} x f_X(x)\,\mathrm{d}x = \sum_{i=1}^{N} E[X_i] = \sum_{i=1}^{N} \int_{-\infty}^{+\infty} x_i f_{X_i}(x_i)\,\mathrm{d}x_i.$$
When the distribution is that of a sample mean, Lugannani and Rice's saddlepoint expansion for the cumulative distribution function $F(x)$ may be differentiated to obtain Daniels' saddlepoint expansion for the probability density function $f(x)$ (Routledge and Tsao, 1997). This result establishes the derivative of a truncated Lugannani and Rice series as an alternative asymptotic approximation for the density function $f(x)$. Unlike the original saddlepoint approximation for $f(x)$, this alternative approximation in general does not need to be renormalized.

== References ==
Butler, Ronald W. (2007), Saddlepoint Approximations with Applications, Cambridge: Cambridge University Press, ISBN 9780521872508
Daniels, H. E. (1954), "Saddlepoint Approximations in Statistics", The Annals of Mathematical Statistics, 25 (4): 631–650, doi:10.1214/aoms/1177728652
Daniels, H. E. (1980), "Exact Saddlepoint Approximations", Biometrika, 67 (1): 59–63, doi:10.1093/biomet/67.1.59, JSTOR 2335316
Lugannani, R.; Rice, S. (1980), "Saddle Point Approximation for the Distribution of the Sum of Independent Random Variables", Advances in Applied Probability, 12 (2): 475–490, doi:10.2307/1426607, JSTOR 1426607
Reid, N. (1988), "Saddlepoint Methods and Statistical Inference", Statistical Science, 3 (2): 213–227, doi:10.1214/ss/1177012906
Routledge, R. D.; Tsao, M. (1997), "On the relationship between two asymptotic expansions for the distribution of sample mean and its applications", Annals of Statistics, 25 (5): 2200–2209, doi:10.1214/aos/1069362394
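As a numerical illustration of the leading-order PDF formula (ignoring the higher-order term $\mathcal{R}$), the following sketch applies the saddlepoint approximation to a sum of $N$ independent Exp(1) variables, whose exact density is Gamma($N$, 1). The function and variable names are illustrative, not from any standard library:

```python
import math

def saddlepoint_pdf(K, Kpp, x, s_hat):
    """Leading-order saddlepoint density exp(K(s)-s*x)/sqrt(2*pi*K''(s)),
    with s_hat solving K'(s) = x (here supplied in closed form)."""
    return math.exp(K(s_hat) - s_hat * x) / math.sqrt(2 * math.pi * Kpp(s_hat))

# Sum of N independent Exp(1) variables: K(t) = -N*log(1 - t) for t < 1.
N = 10
K = lambda t: -N * math.log(1.0 - t)
Kpp = lambda t: N / (1.0 - t) ** 2     # K''(t); K'(t) = N/(1-t)

x = 12.0
s_hat = 1.0 - N / x                    # closed-form solution of K'(s) = x
approx = saddlepoint_pdf(K, Kpp, x, s_hat)
exact = x ** (N - 1) * math.exp(-x) / math.factorial(N - 1)  # Gamma(N,1) pdf
print(approx, exact)
```

For N = 10 and x = 12 the unrenormalized approximation agrees with the exact gamma density to within about 1%, consistent with the method's well-known accuracy for gamma-type sums.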
Wikipedia:Sadleirian Professor of Pure Mathematics
The Sadleirian Professorship of Pure Mathematics, originally spelled in the statutes and for the first two professors as Sadlerian, is a professorship in pure mathematics within the Department of Pure Mathematics and Mathematical Statistics (DPMMS) at the University of Cambridge. It was founded on a bequest from Lady Mary Sadleir for lectureships "for the full and clear explication and teaching that part of mathematical knowledge commonly called algebra". She died in 1706 and lectures began in 1710, but eventually these failed to attract undergraduates. In 1860 the foundation was used to establish the professorship. On 10 June 1863 Arthur Cayley was elected with the statutory duty "to explain and teach the principles of pure mathematics, and to apply himself to the advancement of that science." The stipend attached to the professorship was modest, although it improved in the course of subsequent legislation.

== List of Sadlerian Lecturers of Pure Mathematics ==
1746–1769 William Ludlam
1826–1835 Lawrence Stephenson

== List of Sadleirian Lecturers of Pure Mathematics ==
1845–1847 Arthur Scratchley
1847–1857 George Ferns Reyner
1851 Stephen Hanson
1855–1858 William Charles Green
1857–1864 John Robert Lunn

== List of Sadleirian Professors of Pure Mathematics ==
1863–1895 Arthur Cayley
1895–1910 Andrew Russell Forsyth
1910–1931 E. W. Hobson
1931–1942 G. H. Hardy
1945–1953 Louis Mordell
1953–1967 Philip Hall
1967–1986 J. W. S. Cassels
1986–2012 John H. Coates
2013–2014 Vladimir Markovic
2017–2021 Emmanuel Breuillard
2024– Oscar Randal-Williams

== Sources ==
Obituary Notices of Fellows Deceased (1895). Proceedings of the Royal Society of London, 58: i–lx. https://www.jstor.org/stable/115800 (obituary of Arthur Cayley written by Andrew Forsyth).
University of Cambridge DPMMS: https://web.archive.org/web/20160624155328/http://www.admin.cam.ac.uk/offices/academic/secretary/professorships/sadleirian.pdf
Wikipedia:Sadratnamala
Sadratnamala is an astronomical-mathematical treatise in Sanskrit written by Sankara Varman, an astronomer-mathematician of the Kerala school of mathematics, in 1819. Even though the book was written at a time when western mathematics and astronomy had been introduced in India, it is composed purely in the traditional style followed by the mathematicians of the Kerala school. Sankara Varman also wrote a detailed commentary on the book in Malayalam. Sadratnamala is one of the books cited in C. M. Whish's paper on the achievements of the Kerala school of mathematics. This paper, published in the Transactions of the Royal Asiatic Society of Great Britain and Ireland in 1834, was the first ever attempt to bring the accomplishments of Keralese mathematicians to the attention of Western mathematical scholarship. Whish wrote in his paper thus: "The author of Sadratnamalah is SANCARA VARMA, the younger brother of the present Raja of Cadattanada near Tellicherry, a very intelligent man and acute mathematician. This work, which is a complete system of Hindu astronomy, is comprehended in two hundred and eleven verses of different measures, and abounds with fluxional forms and series, to be found in no work of foreign or other Indian countries."

== Synopsis of the book ==
The book contains 212 verses divided into six chapters, called prakarana-s.
Chapter 1: Gives the names of numerals; defines the eight operations of addition, subtraction, multiplication, division, squaring, extracting square root, cubing, and extracting cube root.
Chapter 2: Lists the different measures, namely, the measures of time, angles, lunar days, planets and stars, almanacs, length, grain weight, money and the directions.
Chapter 3: Defines the rule of three and syllabic enumeration; explains methods for the computation of the elements of the almanac, namely, mean and true sun, moon and planets, lunar day, yoga and karana; gives methods for determining the time elapsed after sunrise and after sunset.
Chapter 4: Deals with arcs and sines and their application in astronomical measurements and computations.
Chapter 5: Deals with computations relating to the shadow, eclipse, vyatipata, retrograde motion of the planets and apses of the moon.
Chapter 6: Explains the necessity of periodic revision of astronomical constants; gives a full description of parahita-karana.

== Sankara Varman (1774–1839) ==
Sankara Varman, author of Sadratnamala, was born as a younger prince in the principality of Katathanad in the North Malabar region of Kerala. To the local people he was known as Appu Thampuran. The date of birth of Sankara Varman is still uncertain; there are some strong arguments in favour of the year 1774 CE. Sankara Varman died in 1839 CE.
Wikipedia:Salamis Tablet
The Salamis Tablet is a marble counting board (an early counting device) dating from around 300 BC that was discovered on the island of Salamis in 1846. A precursor to the abacus, it is thought to represent an ancient Greek means of performing the mathematical calculations common in the ancient world. Pebbles (Latin: calculi) were placed at various locations and could be moved as calculations were performed. The marble tablet measures approximately 150 × 75 × 4.5 cm.

== Discovery ==
Originally thought to be a gaming board, the slab of white marble is currently at the Epigraphical Museum in Athens.

== Description ==
Five groups of markings appear on the tablet. The three sets of Greek symbols arranged along the left, right and bottom edges of the tablet are numbers from the acrophonic system. In the center of the tablet is a set of five parallel lines equally divided by a vertical line, capped with a semicircle at the intersection of the bottom-most horizontal line and the single vertical line. Below a wide horizontal crack is another group of eleven parallel lines. These are divided into two sections by a line perpendicular to them, but with the semicircle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect the vertical line.

== Numerical representations ==
As with other counting boards and abaci, each counter represents one unit of a magnitude determined by its position. The precise interpretation of the counters and the methods used with the tablet are unknown, but use may have been similar to that of medieval European counting boards, in which counters on the lines represented powers of ten and counters between the lines represented 5 times the previous line.
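Since the tablet's precise method of use is unknown, the following is only a speculative sketch of the medieval-style reading described above; the function name and the encoding of the board layout are illustrative:

```python
def board_value(on_lines, between_lines):
    """Total value of a counting-board configuration under a medieval-style
    reading: a counter on line k (counted from the bottom, starting at 0)
    is worth 10**k, and a counter in the space above line k is worth
    5 * 10**k."""
    total = 0
    for k, count in enumerate(on_lines):
        total += count * 10 ** k          # counters on the lines
    for k, count in enumerate(between_lines):
        total += count * 5 * 10 ** k      # counters between the lines
    return total

# 2 counters on the units line, 1 in the "five" space above it,
# 3 on the tens line: 2*1 + 1*5 + 3*10 = 37
print(board_value(on_lines=[2, 3], between_lines=[1]))  # 37
```

Under this reading, addition is simply pooling the counters of two configurations and then exchanging groups of counters for higher denominations (two "fives" for a ten, five "tens" for a "fifty", and so on).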
Retired engineer and high school teacher Stephen Stephenson (1942–2022) speculated that counters placed on either side of the dividing line might represent positive and negative quantities, and that the smaller area at the "top" of the tablet might represent the exponent of a floating-point number, with the larger area at the "bottom" representing the mantissa.

== Calculations ==
On this board, physical markers (indicators) were placed on the various rows or columns that represented different values. The indicators were not physically attached to the board. Greek numbers are represented on the tablet. Number systems were already in written use in the Ionian period, having become necessary because of expanding commercial activity. Two different number systems were developed: the older Attic or Herodianic system and the younger Milesian system. The two number systems differed in their use: the Attic system predominantly served commercial life, for recording sums of money and quantities of goods and for labelling the columns of the abacus, but was unsuitable for written calculations. The Milesian number system, which likewise assigned numbers to letters of the alphabet, was better suited for scientific mathematics; Archimedes and Diophantus, for example, used the Milesian system. The Greek writer Herodotus (485–425 BC) reports from his travels through Egypt that the Egyptians calculated from right to left, contrary to the Greek custom of left to right. This may refer to moving pebbles on the counting board.

== See also ==
Roman abacus

== References ==
Bradshaw, Gillian (2000), The Sand-Reckoner, Forge, ISBN 0312875819
Menninger, Karl (1969) [1st German ed. 1934], "The Abacus", Number Words and Number Symbols, MIT Press, pp. 295–388, ISBN 0262130408

== External links ==
Fernandes, Luis, "Abacus History, The Salamis Tablet", The Art of Calculating With Beads, retrieved Aug 12, 2013
Puhle, Jens, The Salamis Tablet: Calculating on a Counting Board (video), via YouTube, a speculative reconstruction of how the Salamis Tablet might be used as a counting board
Wikipedia:Salem Hanna Khamis
Salem Hanna Khamis (Arabic: سالم حنا خميس) (November 22, 1919 – June 16, 2005) was a Palestinian economic statistician for the United Nations Food and Agriculture Organization who helped formalise the Geary-Khamis method of computing purchasing power parity of currencies. == Education and early career == Son of Hanna and Jamileh a Christian Palestinian family, Salem Khamis was born on November 22, 1919, in Reineh village, Palestine. He finished high school in 1938 with distinction at the Arab College in Jerusalem. He received a British Mandate scholarship for studying at the American University of Beirut (AUB), where he received in 1941 a BA degree in Mathematics (major) and Physics (minor), and in 1942 an MA in Physics. From 1942-1943, Salem taught at the Akka High School in Acre and St Lucas High School in Haifa. In 1943 he was appointed a lecturer in the Mathematics Department at AUB. In 1945 he received a British Council scholarship for a PhD at University College London. He defended his thesis during the 1948 Arab-Israeli War and Palestinian exodus (Nakba), and received the PhD title in 1950. In 1948 he was refused entrance to the new State of Israel, in whose territory Reineh now lay. Instead, he moved to Aleppo, Syria, where he lectured Applied Mathematics in the Engineering College of Syria University (now University of Damascus), and was appointed head of the Mathematics department. == United Nations == In 1949 he married Mary Guy and they had four children: Thea, Hanna, Christopher and Tareq. He accepted an invitation from the United Nations to work in its Statistical Office in Lake Success (1949-1950) then New York (1950-1953). In addition to his position in the United Nations, he became part-time visiting lecturer in the Mathematical Statistics Department at Columbia University. Salem finally visited his home in Israel in 1952. In 1953 he returned to AUB as Associate Professor of Economics. 
Between 1955 and 1958 he served as Professor and Chairman of the Mathematics Department. Between 1958 and 1963 he was United Nations Food and Agriculture Organization Regional Statistician for the Near East (duty stations: Cairo, United Arab Republic 1958-1960, and Rome, Italy 1960-1963). Between 1961 and 1970 Salem was Chief of the FAO Trade Prices Branch in Rome. Between 1970 and 1972 he was Director and UN Project Manager at the Institute of Statistics and Applied Economics, Makerere University, Kampala, Uganda. In 1972 he returned to Rome to become Head of the FAO Methodology Group – Statistical Development Service until 1974, and Chief of the Service from 1975 to 1981. In parallel (1976-1978), he acted in Baghdad as UN Project Manager/Chief Advisor to the Arab Institute for Training and Research in Statistics. In 1981 Salem resigned from his position at FAO and moved to England, where his children lived. However, he continued to work as an expert and head of scientific missions for the UN, and consulted for various countries and organisations: 1981-1987: Statistics Advisor, Arab Fund for Social Economic Development, Kuwait 1982: Evaluation and development of the statistical activities of the Palestine Liberation Organization (PLO) 1986: Jordan Government Consultant, Rural Research Centre, An-Najah National University, Nablus, West Bank Statistics Advisor for Sri Lanka, Libya, Sudan and others Salem Khamis died on June 16, 2005, at his residence in Hemel Hempstead, England. == Scientific contributions == Khamis contributed scientific research papers in statistics and mathematics. In particular, he contributed to sampling theory and the tabulation of the incomplete gamma function, for which he wrote the book “Tables of the Incomplete Gamma Function Ratio”. He contributed to the field of index number theory through a series of papers starting in 1972. 
He developed the Geary-Khamis method of computing purchasing power parity of currencies and the Geary-Khamis dollar used to compare different economies, work that is “indelibly imprinted on all the recent work on international comparisons of prices, real incomes, output and productivity” (Rao, 2005). == References == == Sources == CV by Salem Hanna Khamis Rao, Prasada (15 July 2005). "Salem Khamis, Statistician who formulated PPPs". The Independent Obituaries. Archived from the original on 30 September 2007. Retrieved 20 February 2008. "The Dr Salem H Khamis Scholarship". Friends of Birzeit University. Retrieved 28 May 2024.
|
Wikipedia:Salih Zeki#0
|
Salih Zeki Bey (1864, Istanbul – 1921, Istanbul) was an Ottoman mathematician, astronomer and the founder of the mathematics, physics, and astronomy departments of Istanbul University. He was sent by the Post and Telegraph Ministry to study electrical engineering at the École Polytechnique in Paris. He returned to Istanbul in 1887 and started working at the Ministry as an electrical engineer and inspector. He was appointed as the director of the state observatory (Ottoman Turkish: رصدخانهيي امیره, romanized: Rasathâne-i Âmire) (now Kandilli Observatory) after Coumbary in 1895. In 1912, he became Under Secretary of the Ministry of Education and in 1913 the president of Istanbul University. In 1917, he resigned as the president but continued teaching at the university in the Faculty of Sciences until his death. == Works == === Astronomy === New Cosmography Abridged Cosmography === Physics === Hikmet-i Tabiiyye Mebhas-ı Elektrik-i Miknatisi Mebhas-ı Hararet-i Harekiye === History of science === Asar-ı Bakiye === Mathematics === Kamus-i Riyaziyat Hendese-i Tahliliye Hesab-i Ihtimali == References == Hüseyin Gazi Topdemir, "Salih Zeki" in The Biographical Encyclopedia of Astronomers, Thomas Hockey, Virginia Trimble, Thomas R. Williams, Katherine Bracher, Richard A. Jarrell, Jordan D. Marché, II, F. Jamil Ragep, JoAnn Palmeri, Marvin Bolt (Eds.), Springer Science+Business Media, LLC, 2007, pp. 1007-1008. ISBN 978-0-387-31022-0 Salih Zeki Özel Sayısı (=Osmanlı Bilimi Araştırmaları, vol. 7, no. 1), 2005. (Special issue on Salih Zeki of the journal 'Studies in Ottoman Science', in Turkish.)
|
Wikipedia:Sally Cockburn#0
|
Sally Patricia Cockburn (born 1960) is a mathematician whose research ranges from algebraic topology and set theory to geometric graph theory and combinatorial optimization. A Canadian immigrant to the US, she is William R. Kenan Jr. Professor of Mathematics at Hamilton College, and former chair of the mathematics department at Hamilton. == Education and career == Cockburn is originally from Ottawa. She earned a bachelor's and master's degree from Queen's University in Ontario, in 1982 and 1984 respectively, and also has a master's degree from the University of Ottawa. She completed her Ph.D. in algebraic topology in 1991 from Yale University. Her dissertation, The Gamma-Filtration on Extra-Special P-Groups, was supervised by Ronnie Lee. She joined the Hamilton College faculty in 1991, and was promoted to full professor in 2014. At Hamilton, she has also served as the coach for the college's squash team. == Recognition == Cockburn won the 2014 Carl B. Allendoerfer Award of the Mathematical Association of America with Joshua Lesperance for their joint work, "deranged socks", on a variation of the problem of counting derangements. == References ==
|
Wikipedia:Salomon Bochner#0
|
Salomon Bochner (20 August 1899 – 2 May 1982) was a Galician-born mathematician, known for work in mathematical analysis, probability theory and differential geometry. == Life == He was born into a Jewish family in Podgórze (near Kraków), then Austria-Hungary, now Poland. Fearful of a Russian invasion in Galicia at the beginning of World War I in 1914, his family moved to Germany, seeking greater security. Bochner was educated at a Berlin gymnasium (secondary school), and then at the University of Berlin. There, he was a student of Erhard Schmidt, writing a dissertation involving what would later be called the Bergman kernel. Shortly after this, he left the academy to help his family during the escalating inflation. After returning to mathematical research, he lectured at the University of Munich from 1924 to 1933. His academic career in Germany ended after the Nazis came to power in 1933, and he left for a position at Princeton University. He was a visiting scholar at the Institute for Advanced Study from 1945 to 1948. He was appointed as Henry Burchard Fine Professor in 1959, retiring in 1968. Although he was seventy years old when he retired from Princeton, Bochner was appointed as Edgar Odell Lovett Professor of Mathematics at Rice University and went on to hold this chair until his death in 1982. He became Head of Department at Rice in 1969 and held this position until 1976. He died in Houston, Texas. He was an Orthodox Jew. == Mathematical work == In 1925 he started work in the area of almost periodic functions, simplifying the approach of Harald Bohr by use of compactness and approximate identity arguments. In 1933 he defined the Bochner integral, as it is now called, for vector-valued functions. Bochner's theorem on Fourier transforms appeared in a 1932 book. His techniques came into their own as Pontryagin duality and then the representation theory of locally compact groups developed in the following years. 
Subsequently, he worked on multiple Fourier series, posing the question of the Bochner–Riesz means. This led to results on how the Fourier transform on Euclidean space behaves under rotations. In differential geometry, he published Bochner's formula on curvature in 1946. Joint work with Kentaro Yano (1912–1993) led to the 1953 book Curvature and Betti Numbers. It had consequences for the Kodaira vanishing theory, representation theory, and spin manifolds. Bochner also worked on several complex variables (the Bochner–Martinelli formula and the book Several Complex Variables from 1948 with W. T. Martin). == Selected publications == Bochner, S. (1932). Vorlesungen über Fouriersche Integrale. Leipzig: Akademische Verlagsgesellschaft m.b.H. Bochner, S. (1948). Vorlesungen über Fouriersche Integrale. New York: Chelsea Pub. Co. Bochner, S. (1959). Lectures on Fourier integrals; with an author's supplement on monotonic functions, Stieltjes integrals, and harmonic analysis. Translated from the original by Morris Tenenbaum and Harry Pollard. Princeton, N.J.: Princeton University Press. Bochner, S. (1938). Lectures on commutative algebra. Ann Arbor, Mich.: Planographed by Edwards Brothers, Inc.; lectures given in 1937-1938, notes by J. W. Tukey, J. Giese, and V. Martin. Bochner, S.; Martin, William Ted (1948). Several complex variables. Princeton: Princeton Univ. Press. Bochner, S.; Chandrasekharan, K. (1949). Fourier transforms. Princeton: Princeton Univ. Press. 2016 reprint Yano, K.; Bochner, S. (1953). Curvature and Betti numbers. Princeton: Princeton University Press. ISBN 0691095833. Bochner, S. (1955). Harmonic Analysis and the Theory of Probability. University of California Press. 2013 reprint Bochner, S. (1966). Role of mathematics in the rise of science. Princeton, N.J.: Princeton University Press. 2014 reprint Bochner, S. (1969). 
Selected mathematical papers of Salomon Bochner. New York: W. A. Benjamin. Bochner, S. (1969). Eclosion and synthesis; perspectives on the history of knowledge. New York: W. A. Benjamin. Bochner, S. (1979). Einstein between centuries. Houston, Texas: William Marsh Rice University. Bochner, Salomon (1992), Gunning, Robert C. (ed.), Collected papers. Part 1, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0174-1, MR 1151390 Bochner, Salomon (1992), Gunning, Robert C. (ed.), Collected papers. Part 2, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0175-8, MR 1151391 Bochner, Salomon (1992), Gunning, Robert C. (ed.), Collected papers. Part 3, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0176-5, MR 1151392 Bochner, Salomon (1992), Gunning, Robert C. (ed.), Collected papers. Part 4, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0177-2, MR 1151393 == See also == Bochner almost periodic functions Bochner–Kodaira–Nakano identity Bochner Laplacian Bochner measurable function == References == == External links == O'Connor, John J.; Robertson, Edmund F., "Salomon Bochner", MacTutor History of Mathematics Archive, University of St Andrews Salomon Bochner at the Mathematics Genealogy Project National Academy of Sciences Biographical Memoir Salomon Bochner at Find a Grave
|
Wikipedia:Sammon mapping#0
|
Sammon mapping or Sammon projection is an algorithm that maps a high-dimensional space to a space of lower dimensionality (see multidimensional scaling) by trying to preserve the structure of inter-point distances in high-dimensional space in the lower-dimension projection. It is particularly suited for use in exploratory data analysis. The method was proposed by John W. Sammon in 1969. It is considered a non-linear approach, as the mapping cannot be represented as a linear combination of the original variables, as is possible in techniques such as principal component analysis; this also makes it more difficult to use for classification applications. Denote the distance between the ith and jth objects in the original space by d_{ij}^{*}, and the distance between their projections by d_{ij}. Sammon's mapping aims to minimize the following error function, which is often referred to as Sammon's stress or Sammon's error: E = \frac{1}{\sum_{i<j} d_{ij}^{*}} \sum_{i<j} \frac{(d_{ij}^{*}-d_{ij})^{2}}{d_{ij}^{*}}. The minimization can be performed either by gradient descent, as proposed initially, or by other means, usually involving iterative methods. The number of iterations needs to be experimentally determined and convergent solutions are not always guaranteed. Many implementations prefer to use the first principal components as a starting configuration. The Sammon mapping has been one of the most successful nonlinear metric multidimensional scaling methods since its advent in 1969, but effort has been focused on algorithm improvement rather than on the form of the stress function. The performance of the Sammon mapping has been improved by extending its stress function using left Bregman divergence and right Bregman divergence. 
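The gradient-descent minimization of Sammon's stress can be sketched in Python with NumPy. This is a minimal illustration, not a reference implementation: it uses a random starting configuration (rather than the principal-component start many implementations prefer), and the function names, step size, and iteration count are illustrative choices.

```python
import numpy as np

def pairwise_dists(X):
    # Euclidean distance matrix for the rows of X.
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def sammon_stress(D_star, D):
    # E = (1 / sum_{i<j} d*_ij) * sum_{i<j} (d*_ij - d_ij)^2 / d*_ij
    iu = np.triu_indices_from(D_star, k=1)
    d_star, d = D_star[iu], D[iu]
    return ((d_star - d) ** 2 / d_star).sum() / d_star.sum()

def sammon(X, n_components=2, n_iter=500, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    D_star = pairwise_dists(X)
    c = np.triu(D_star, k=1).sum()  # normalizing constant sum_{i<j} d*_ij
    Y = rng.normal(scale=1e-2, size=(X.shape[0], n_components))
    for _ in range(n_iter):
        D = pairwise_dists(Y)
        # Guard the diagonals so the divisions below are well defined.
        np.fill_diagonal(D, 1.0)
        Ds = D_star.copy()
        np.fill_diagonal(Ds, 1.0)
        # dE/dY_i = -(2/c) * sum_j (d*_ij - d_ij) / (d*_ij d_ij) * (Y_i - Y_j)
        W = (Ds - D) / (Ds * D)
        np.fill_diagonal(W, 0.0)
        grad = -(2.0 / c) * (W[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(axis=1)
        Y -= lr * grad
    return Y, sammon_stress(D_star, pairwise_dists(Y))
```

As the text notes, convergence is not guaranteed in general; in practice the stress is monitored across iterations and the step size adjusted when it fails to decrease.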
== See also == Prefrontal cortex basal ganglia working memory State–action–reward–state–action Constructing skill trees == References == == External links == HiSee – an open-source visualizer for high dimensional data A C# based program with code on CodeProject. Matlab code and method introduction
|
Wikipedia:Samson Shatashvili#0
|
Samson Lulievich Shatashvili (Georgian: სამსონ შათაშვილი; Russian: Самсон Лулиевич Шаташвили, born February 1960) is a theoretical and mathematical physicist who has been working at Trinity College Dublin, Ireland, since 2002. He holds the Trinity College Dublin Chair of Natural Philosophy and is the director of the Hamilton Mathematics Institute. He is also affiliated with the Institut des Hautes Études Scientifiques (IHÉS), where he held the Louis Michel Chair from 2003 to 2013 and the Israel Gelfand Chair from 2014 to 2019. Prior to moving to Trinity College, he was a professor of physics at Yale University from 1994. == Background == Shatashvili received his PhD in 1984 at the Steklov Institute of Mathematics in Saint Petersburg under the supervision of Ludwig Faddeev (and Vladimir Korepin). His thesis, on gauge theories, was titled "Modern Problems in Gauge Theories". In 1989 he received his D.S. degree (doctor of science, the second doctoral degree in Russia), also at the Steklov Institute of Mathematics in Saint Petersburg. == Contributions and awards == Shatashvili has made several discoveries in the fields of theoretical and mathematical physics. He is mostly known for his work with Ludwig Faddeev on quantum anomalies, with Anton Alekseev on geometric methods in two-dimensional conformal field theories, for his work on background independent open string field theory, with Cumrun Vafa on superstrings and manifolds of exceptional holonomy, with Anton Gerasimov on tachyon condensation, with Andrei Losev, Nikita Nekrasov and Greg Moore on instantons and supersymmetric gauge theories, as well as for his work with Nikita Nekrasov on quantum integrable systems. In particular, Shatashvili and Nikita Nekrasov discovered the gauge/Bethe correspondence. In 1995 he received an Outstanding Junior Investigator Award of the Department of Energy (DOE) and an NSF Career Award, and from 1996 to 2000 he was a Sloan Fellow. 
Shatashvili is a member of the Royal Irish Academy and the recipient of the 2010 Royal Irish Academy Gold Medal as well as the Ivane Javakhishvili State Medal, Georgia. In 2009 he was a plenary speaker at the International Congress on Mathematical Physics in Prague, and in 2014 he was an invited speaker at the International Congress of Mathematicians in Seoul (speaking on "Gauge theory angle at quantum integrability"). In 2025, he won the Dannie Heineman Prize for Mathematical Physics for "clever use of various techniques in studying symmetry in quantum field theory". == References == == External links == Videos of Samson Shatashvili in the AV-Portal of the German National Library of Science and Technology
|
Wikipedia:Samuel Beatty (mathematician)#0
|
Samuel Beatty (1881–1970) was dean of the Faculty of Mathematics at the University of Toronto, taking the position in 1934. == Early life == Beatty was born in 1881. In 1915, he graduated from the University of Toronto with a PhD and a dissertation entitled Extensions of Results Concerning the Derivatives of an Algebraic Function of a Complex Variable, with the help of his adviser, John Charles Fields. He was the first person to receive a PhD in mathematics from a Canadian university. In 1925 he was elected a Fellow of the Royal Society of Canada. In 1926, he published a problem in the American Mathematical Monthly, which formed the genesis for the Beatty sequence. == University of Toronto == Beatty was dean of the Faculty of Mathematics at the University of Toronto, taking the position in 1934. The famous mathematician Richard Brauer was recruited by Beatty in 1935. He invited Harold Scott MacDonald Coxeter to the University of Toronto with a position as an assistant professor, which Coxeter took; he remained in Toronto for the rest of his life. In June 1939, he was one of the founding members of the Committee of Teaching Staff. Beatty was appointed the 21st Chancellor of the University of Toronto in 1953, holding the position until 1959. He was associated with the university from 1911 to 1952, and a scholarship was established in his honor. He died in 1970. In a very real sense he guided Canadian mathematics from the isolation of the 19th century to a significant role in the 20th century. In an era when extremely few women received PhDs in mathematics, Beatty supervised the mathematical PhDs of Mary Fisher and Muriel Kennett Wales. Nobel Prize in Chemistry winner Walter Kohn, a student at the university while Beatty was a dean, expressed his appreciation in 1998 to the dean when accepting the prize for his development of the density functional theory. 
In 1942, when Kohn could not access the university's chemistry buildings during World War II because of his German nationality, Beatty had helped him enroll in the Mathematics Department at the university. == Canadian Mathematical Society == Beatty was one of the founders of the Canadian Mathematical Congress and was elected to serve as the first president of the congress in 1945. Under his presidency, the Canadian Mathematical Congress began to promote mathematical development in Canada. In 1978, after Beatty's death, the Canadian Mathematical Congress was renamed the Canadian Mathematical Society to avoid confusion with the quadrennial international mathematical congresses. == References == Overview of the Canadian Mathematical Society http://cms.math.ca/Docs/cms-eng.html == External links == Works by or about Samuel Beatty at the Internet Archive O'Connor, John J.; Robertson, Edmund F., "Samuel Beatty", MacTutor History of Mathematics Archive, University of St Andrews Archival papers held by the University of Toronto Archives and Record Management Services.
|
Wikipedia:Samuel Gitler Hammer#0
|
Samuel Carlos Gitler Hammer (July 14, 1933 – September 9, 2014) was a Mexican mathematician. He was an expert in Yang–Mills theory and is known for the Brown–Gitler spectrum. Born to a Jewish family in Mexico City, Gitler studied civil engineering at the National Autonomous University of Mexico, graduating in 1956. He then did his graduate studies in mathematics at Princeton University with Norman Steenrod, earning a doctorate in 1960. He taught briefly at Brandeis University and then returned to Mexico, where he was one of the founders of the mathematics department of CINVESTAV. Gitler was president of the Mexican Mathematical Society from 1967 to 1969, and chair at CINVESTAV from 1973 to 1981. In the late 1980s he moved to the University of Rochester, where he chaired the mathematics department. After retiring from Rochester in 2000, he returned to CINVESTAV. Gitler won Mexico's National Prize for Science in 1976. In 1986 he became a member of the Colegio Nacional. In 2012 he became a fellow of the American Mathematical Society. == Publications == Brown, Edgar H. Jr.; Gitler, Samuel (1973). "A spectrum whose cohomology is a certain cyclic module over the Steenrod algebra". Topology. 12 (3): 283–295. doi:10.1016/0040-9383(73)90014-1. MR 0391071. Cohen, Frederick R.; Gitler, Samuel (2002). "On loop spaces of configuration spaces". Transactions of the American Mathematical Society. 354 (5): 1705–1748. doi:10.1090/S0002-9947-02-02948-3. MR 1881013. Bahri, Anthony; Bendersky, Martin; Cohen, Frederick R.; Gitler, Samuel (2010). "The polyhedral product functor: a method of decomposition for moment-angle complexes, arrangements and related spaces". Advances in Mathematics. 225 (3): 1634–1668. doi:10.1016/j.aim.2010.03.026. MR 2673742. S2CID 16069626. Gitler, Samuel; López de Medrano, Santiago (2013). "Intersections of quadrics, moment-angle manifolds and connected sums". Geometry & Topology. 17 (3): 1497–1534. arXiv:0901.2580. doi:10.2140/gt.2013.17.1497. MR 3073929. 
S2CID 18250349. == References == == External links == "A conference at Princeton in commemoration of Sam Gitler". March 18–21, 2015. Retrieved 2018-04-01.
|
Wikipedia:Samuelson–Berkowitz algorithm#0
|
In mathematics, the Samuelson–Berkowitz algorithm efficiently computes the characteristic polynomial of an n × n matrix whose entries may be elements of any unital commutative ring. Unlike the Faddeev–LeVerrier algorithm, it performs no divisions, so it may be applied to a wider range of algebraic structures. == Description of the algorithm == The Samuelson–Berkowitz algorithm applied to a matrix A produces a vector whose entries are the coefficients of the characteristic polynomial of A. It computes this coefficient vector recursively as the product of a Toeplitz matrix and the coefficient vector of an (n − 1) × (n − 1) principal submatrix. Let A_0 be an n × n matrix partitioned so that A_0 = \left[{\begin{array}{c|c}a_{1,1}&R\\\hline C&A_{1}\end{array}}\right]. The first principal submatrix of A_0 is the (n − 1) × (n − 1) matrix A_1. 
Associate with A_0 the (n + 1) × n Toeplitz matrix T_0 defined by T_0 = \left[{\begin{array}{c}1\\-a_{1,1}\end{array}}\right] if A_0 is 1 × 1, T_0 = \left[{\begin{array}{cc}1&0\\-a_{1,1}&1\\-RC&-a_{1,1}\end{array}}\right] if A_0 is 2 × 2, and in general T_0 = \left[{\begin{array}{ccccc}1&0&0&0&\cdots \\-a_{1,1}&1&0&0&\cdots \\-RC&-a_{1,1}&1&0&\cdots \\-RA_{1}C&-RC&-a_{1,1}&1&\cdots \\-RA_{1}^{2}C&-RA_{1}C&-RC&-a_{1,1}&\cdots \\\vdots &\vdots &\vdots &\vdots &\ddots \end{array}}\right]. That is, all superdiagonals of T_0 consist of zeros, the main diagonal consists of ones, the first subdiagonal consists of -a_{1,1}, and the kth subdiagonal consists of -RA_{1}^{k-2}C. The algorithm is then applied recursively to A_1, producing the Toeplitz matrix T_1 times the characteristic polynomial of A_2, etc. Finally, the characteristic polynomial of the 1 × 1 matrix A_{n-1} is simply T_{n-1}. The Samuelson–Berkowitz algorithm then states that the vector v defined by v = T_0 T_1 T_2 \cdots T_{n-1} contains the coefficients of the characteristic polynomial of A_0. Because each of the T_i may be computed independently, the algorithm is highly parallelizable. == References == Berkowitz, Stuart J. (30 March 1984). 
"On computing the determinant in small parallel time using a small number of processors". Information Processing Letters. 18 (3): 147–150. doi:10.1016/0020-0190(84)90018-8. Soltys, Michael; Cook, Stephen (December 2004). "The Proof Complexity of Linear Algebra" (PDF). Annals of Pure and Applied Logic. 130 (1–3): 277–323. CiteSeerX 10.1.1.308.6521. doi:10.1016/j.apal.2003.10.018. Kerber, Michael (May 2006). Division-Free computation of sub-resultants using Bezout matrices (PS) (Technical report). Saarbrucken: Max-Planck-Institut für Informatik. Tech. Report MPI-I-2006-1-006.
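The recursion described above can be sketched in pure Python using only additions and multiplications (no divisions), so it works for integer matrices exactly; the function name and loop structure are illustrative, not taken from the references.

```python
def berkowitz_char_poly(A):
    """Return the coefficients of det(xI - A), highest degree first,
    using only ring operations, in the Samuelson-Berkowitz style."""
    n = len(A)
    v = [1]  # characteristic polynomial of the empty matrix
    # Work up from the trailing 1x1 submatrix to the full matrix,
    # multiplying the coefficient vector by the Toeplitz matrix T_k.
    for k in range(n - 1, -1, -1):
        a = A[k][k]
        R = [A[k][j] for j in range(k + 1, n)]   # row to the right of a
        C = [A[i][k] for i in range(k + 1, n)]   # column below a
        A1 = [row[k + 1:] for row in A[k + 1:]]  # trailing submatrix
        m = n - k
        # First column of T_k: 1, -a, -R C, -R A1 C, -R A1^2 C, ...
        col = [1, -a]
        w = C[:]
        for _ in range(m - 1):
            col.append(-sum(r * x for r, x in zip(R, w)))
            w = [sum(A1[i][j] * w[j] for j in range(len(w))) for i in range(len(w))]
        # v <- T_k v, where T_k is the (m+1) x m lower-triangular
        # Toeplitz matrix whose first column is `col`.
        v = [sum(col[i - j] * v[j] for j in range(min(i + 1, m))) for i in range(m + 1)]
    return v
```

For example, berkowitz_char_poly([[2, 1], [1, 2]]) returns [1, -4, 3], the coefficients of x² − 4x + 3.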
|
Wikipedia:Sand table#0
|
A sand table uses constrained sand for modelling or educational purposes. The original version of a sand table may be the abax used by early Greek students. In the modern era, one common use for a sand table is to make terrain models for military planning and wargaming. == Abax == An abax was a table covered with sand commonly used by students, particularly in Greece, to perform studies such as writing, geometry, and calculations. The abax was the predecessor to the abacus. Objects, such as stones, were added for counting and then columns for place-valued arithmetic. The demarcation between an abax and an abacus seems to be poorly defined in history; moreover, modern definitions of the word abacus universally describe it as a frame with rods and beads and, in general, do not include the definition of "sand table". The sand table may well have been the predecessor to some board games. ("The word abax, or abacus, is used both for the reckoning-board with its counters and the play-board with its pieces, ..."). Abax is from the old Greek for "sand table". == Ghubar == An Arabic word for sand (or dust) is ghubar (or gubar), and Western numerals (the decimal digits 0–9) are derived from the style of digits written on ghubar tables in North-West Africa and Iberia, also described as the 'West Arabic' or 'gubar' style. == Military use == Sand tables have been used for military planning and wargaming for many years as a field expedient, small-scale map, and in training for military actions. In 1890 a Sand table room was built at the Royal Military College of Canada for use in teaching cadets military tactics; this replaced the old sand table room in a pre-college building, in which the weight of the sand had damaged the floor. The use of sand tables increasingly fell out of favour with improved maps, aerial and satellite photography, and later, with digital terrain simulations. 
More modern sand tables have incorporated augmented reality, such as the Augmented Reality Sandtable (ARES) developed by the Army Research Laboratory. Today, virtual and conventional sand tables are used in operations training. In 1991, "Special Forces teams discovered an elaborate sand-table model of the Iraqi military plan for the defense of Kuwait City. Four huge red arrows from the sea pointed at the coastline of Kuwait City and the huge defensive effort positioned there. Small fences of concertina wire marked the shoreline and models of artillery pieces lined the shore area. Throughout the city were plastic models of other artillery and air defense positions, while thin, red-painted strips of board designated supply routes and main highways." In 2006, Google Earth users looking at satellite photography of China found a "sand table" scale model several kilometres across, strikingly reminiscent of a mountainous region (Aksai Chin) which China occupies militarily in a disputed zone with India, 2400 km from the model's location. Speculation has been rife that the terrain model is used for military familiarisation exercises. == Education == A sand table is a device useful for teaching in the early grades and for special needs children. == See also == Sandcastle == References and notes == Ifrah, Georges (2000). The Universal History of Numbers: from prehistory to the invention of the computer. New York et al.: John Wiley & Sons, Inc. 633 pages. ISBN 0-471-37568-3. Raines, Shirley; Robert J. Canady (1992). Story Stretchers for the Primary Grades: Activities to Expand Children's Favorite Books. Mt. Rainier, Md.: Gryphon House. p. 101. ISBN 0-87659-157-8. Smith, David Eugene (1958). History of Mathematics. Vol. 2. Courier Dover. ISBN 0-486-20430-8. Taylor, E. B., LL.D (1879), "The History of Games", Fortnightly Review republished in The Eclectic Magazine, New York, W. H. Bidwell, ed., pp. 21–30 Wagner, Sheila (1999). 
Inclusive Programming for Elementary Students With Autism. Arlington, TX: Future Horizons, Inc. ISBN 1-885477-54-6. == External links == The History of Computing Project "abacus." The American Heritage Dictionary of the English Language, Fourth Edition. Houghton Mifflin Company, 2004. via Dictionary.com Retrieved 28 August 2007. Sand table in "TACTICS, The Practical Art of Leading Troops in War" 1922
|
Wikipedia:Sandra Di Rocco#0
|
Sandra Di Rocco (born 1967) is an Italian mathematician specializing in algebraic geometry. She works in Sweden as a professor of mathematics and dean of the faculty of engineering science at KTH Royal Institute of Technology, and chairs the Activity Group on Algebraic Geometry of the Society for Industrial and Applied Mathematics. == Education == Di Rocco earned a laurea from the University of L'Aquila in 1992, and completed her Ph.D. in mathematics in 1996 at University of Notre Dame in the US, supervised by Andrew J. Sommese. == Career == After postdoctoral research at the Mittag-Leffler Institute in Sweden and the Max Planck Institute for Mathematics in Germany, and short stints as an assistant professor at Yale University and the University of Minnesota, Di Rocco became an associate professor at KTH in 2003. She was named full professor in 2010, served as department chair from 2012 to 2019, and became dean in 2020. Di Rocco was elected member of the Royal Swedish Academy of Engineering Sciences in 2021. == Service == Di Rocco was elected as chair of the Activity Group on Algebraic Geometry (SIAG-AG) of the Society for Industrial and Applied Mathematics (SIAM) in 2020. == References == == External links == Home page Sandra Di Rocco publications indexed by Google Scholar
|
Wikipedia:Sandrine Péché#0
|
Sandrine Péché (born 1977) is a French mathematician who works as a professor in the Laboratoire de Probabilités, Statistique et Modélisation of Paris Diderot University. Her research concerns probability theory, mathematical physics, and the theory and applications of random matrices. After studying at the École normale supérieure de Cachan, Péché earned a Ph.D. from the École Polytechnique Fédérale de Lausanne in Switzerland, in 2002, under the supervision of Gérard Ben Arous. She taught at the University of Grenoble before moving to Paris Diderot in 2011. She served as the editor-in-chief of Electronic Communications in Probability from 2015 to 2017. She was an invited speaker at the International Congress of Mathematicians in 2014. == References ==
|
Wikipedia:Sankara Variar#0
|
Sankara Variyar (IAST: Śaṅkara Vāriyar; c. 1500 – c. 1560) was an astronomer-mathematician of the Kerala school of astronomy and mathematics. His family were employed as temple-assistants in the temple at Tṛkkuṭaveli near modern Ottapalam. == Mathematical lineage == He was taught mainly by Nilakantha Somayaji (1444–1544), the author of the Tantrasamgraha, and Jyesthadeva (1500–1575), the author of Yuktibhāṣā. Other teachers of Shankara include Netranarayana, the patron of Nilakantha Somayaji, and Chitrabhanu, the author of an astronomical treatise dated to 1530 and of a small work with solutions and proofs for algebraic equations. == Works == The known works of Shankara Variyar are the following: Yukti-dipika - an extensive commentary in verse on Tantrasamgraha based on Yuktibhāṣā. Laghu-vivrti - a short commentary in prose on Tantrasamgraha. Kriya-kramakari - a lengthy prose commentary on Lilavati of Bhaskara II. An astronomical commentary dated 1529 CE. An astronomical handbook completed around 1554 CE. == See also == List of astronomers and mathematicians of the Kerala school == References == K. V. Sarma (1997), "Sankara Variar", Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures edited by Helaine Selin, Springer, ISBN 978-0-7923-4066-9
|
Wikipedia:Sankara Varman#0
|
Sankara Varman (1774–1839) was an astronomer-mathematician belonging to the Kerala school of astronomy and mathematics. He is best known as the author of Sadratnamala, a treatise on astronomy and mathematics, composed in 1819. Sankara Varman is considered the last notable figure in the long line of illustrious astronomers and mathematicians in the Kerala school of astronomy and mathematics beginning with Madhava of Sangamagrama. Sadratnamala was composed in the traditional style followed by members of the Kerala school at a time when India had been introduced to the Western style of mathematics and of writing books in mathematics. One of Varman's contributions to mathematics was his computation of the value of the mathematical constant π correct to 17 decimal places. == Biographical sketch == Varman was born as a younger prince in the principality of Katattanad, in North Malabar, Kerala, in the year 1774. He had two elder brothers, the eldest being Raja Udaya Varma, the ruler of the principality, and the second being Rama Varma, the crown prince. Local people referred to Varman as Appu Thampuran. Katattanad principality was under the suzerainty of the Zamorin of Calicut. The family deity of Sankara Varman was Goddess Parvati installed in the temple of Lokamalayarkavu, and he was also a devotee of Lord Krishna. Varman completed the composition of Sadratnamala in 1819. He also wrote a Malayalam commentary on Sadratnamala. Two versions of the text of Sadratnamala and their commentaries have been discovered. A critical examination of the manuscripts suggests that both were written by Varman, one being a revision of the other. During the invasions of Malabar by Hyder Ali and Tipu Sultan between 1766 and 1781, many people, including members of royal families, fled Malabar and took shelter in Travancore. This brought the principality of Katattanad into close contact with the Travancore Royal Family. Varman was especially attached to Maharaja Swati Tirunal (1813–47). 
Varman was an astute astrologer. There is a legend that he predicted the time of his own death as being in 1839, the year in which he died. == Whish's acquaintance == Varman was a close acquaintance of C. M. Whish, a civil servant of the East India Company attached to its Madras establishment. Whish spoke of him and his work thus: "The author of Sadratnamalah is SANCARA VARMA, the younger brother of the present Raja of Cadattanada near Tellicherry, a very intelligent man and acute mathematician. This work, which is a complete system of Hindu astronomy, is comprehended in two hundred and eleven verses of different measures, and abounds with fluxional forms and series, to be found in no work of foreign or other Indian countries." Whish was the first to bring the achievements of the Kerala school of astronomy and mathematics to the notice of Western mathematical scholarship. == See also == List of astronomers and mathematicians of the Kerala school == References ==
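Varman's 17-decimal value of π can be put in context with the series methods of the Kerala school. The sketch below is illustrative only, assuming nothing about Varman's own procedure; it uses the well-known Madhava series π = √12 · Σ (−1/3)ⁿ/(2n + 1), which converges by a factor of 3 per term:

```python
from decimal import Decimal, getcontext

# Madhava series of the Kerala school (an illustration; the article does not
# say this is how Varman computed his value):
#   pi = sqrt(12) * sum_{n>=0} (-1/3)^n / (2n + 1)
getcontext().prec = 30
total = Decimal(0)
for n in range(60):  # terms shrink by a factor of 3 each step
    total += Decimal(1) / ((Decimal(-3) ** n) * (2 * n + 1))
pi = Decimal(12).sqrt() * total

print(str(pi)[:19])  # 3.14159265358979323 -> first 17 decimal places of pi
```

Sixty terms already give far more than 17 correct decimals; the historical achievement was obtaining such accuracy by hand with cleverly chosen correction terms.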
|
Wikipedia:Saqqara ostracon#0
|
The Saqqara ostracon is an ostracon, an Egyptian antiquity dating to the period of Djoser (c. 2650 BC). == Excavation == It was excavated around 1925 in Djoser's Pyramid in Saqqara, Egypt. == Description == It is an apparently complete flake made of limestone, measuring 15 × 17.5 × 5 cm. In a few places, small portions of the surface seem to have been scaled away. It appears to date to the period of Djoser (c. 2650 BC). == Units written about == The ostracon mentions several units: Cubits Palms Fingers == The curve == The curve drawn on it appears to be a catenary. == See also == Ancient Egyptian units of measurement Ancient Egyptian mathematics Ancient Egyptian technology == References ==
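The units listed above fit the standard Egyptian length scheme. A minimal sketch follows; the relations used (1 royal cubit = 7 palms, 1 palm = 4 fingers) are conventional Egyptological values, not text from the ostracon itself:

```python
# Conventional ancient Egyptian length relations (an assumption of this
# sketch, not data from the ostracon): 1 royal cubit = 7 palms = 28 fingers.
PALMS_PER_CUBIT = 7
FINGERS_PER_PALM = 4

def cubits_to_fingers(cubits):
    """Convert royal cubits to fingers."""
    return cubits * PALMS_PER_CUBIT * FINGERS_PER_PALM

print(cubits_to_fingers(1))  # 28
print(cubits_to_fingers(3))  # 84
```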
|
Wikipedia:Sara Lombardo#0
|
Sara Lombardo is an Italian applied mathematician whose research topics include nonlinear dynamics, rogue waves and solitons, integrable systems, and automorphic Lie algebras. She is Executive Dean of the School of Mathematical & Computer Sciences at Heriot-Watt University. Previously she was professor of mathematics at Loughborough University, and associate dean with teaching responsibilities. == Education and career == Lombardo studied mathematical physics at Sapienza University of Rome, earning a laurea in physics in 2000. She completed her PhD at the University of Leeds in 2004; her dissertation, Reductions of integrable equations and automorphic Lie algebras, was supervised by Alexander V. Mikhailov. After postdoctoral research positions at the University of Leeds, University of Kent, Manchester University, and Vrije Universiteit Amsterdam, she joined the academic staff at Northumbria University in 2011, becoming head of mathematics there. She moved to Loughborough University in 2017 and to Heriot-Watt in 2022. == Recognition == Lombardo is a Fellow of the Higher Education Academy and a Fellow of the Institute of Mathematics and its Applications. She was one of the 2020 winners of the Suffrage Science award in maths and computing. == References == == External links == Sara Lombardo publications indexed by Google Scholar
|
Wikipedia:Saradaranjan Ray#0
|
Saradaranjan Ray (26 May 1858 – 30 October 1925) was a Bengali teacher of mathematics and Sanskrit who worked at Aligarh University and at Calcutta. He was also a cricket enthusiast and promoter who has been called the "W.G. Grace of India" and the father of cricket in Bengal. He founded "The Town Club", a cricket club in Calcutta that played against European teams in the Eden Gardens from 1895. He was the elder brother of Upendrakishore Ray Chowdhury and hence a paternal great-uncle of Satyajit Ray. == Life == Saradaranjan was one of five siblings born to Kalinath and Joytara, who came from a wealthy Kishoreganj family. Kalinath (d. 1879), also called Shyamsundar Munshi, knew Persian, Arabic, and Sanskrit, and served as an assistant to the deputy magistrate of Mymensingh. Saradaranjan was educated in Dhaka, where he took an interest in cricket and, along with his brothers Kamadaranjan (Upendrakishore), Muktidaranjan, Kuladaranjan and Pramodaranjan, founded the Dhaka College Cricket Club. He obtained a BA in 1878. He then obtained an MA from Calcutta in 1879 and joined the Aligarh Anglo-Oriental College as a mathematics teacher. He also taught Sanskrit. Ray was known to be physically violent and temperamental. On one occasion his son brought home a goat that disturbed him with its bleating, causing him to beat the goat to death. On another occasion, an English soldier in a train annoyed him by putting his leg up on the seat next to him. After the man refused to heed his requests, he reportedly grabbed the man and pulled him onto the floor. Ray moved from Aligarh to Berhampore with a teaching job and then went to Dhaka College for a brief stint before moving to Cuttack. He was then invited by Ishwarchandra Vidyasagar to the Metropolitan Institution, which he joined in 1888, becoming its vice principal in 1892 and principal from 1909 until his death. Ray encouraged the footballer Gostha Pal in cricket. 
== Publishing and sports goods == After the death of Vidyasagar in 1891, the Metropolitan Institution ran into financial difficulties and Ray did not have a salary. He sought income by writing books, mainly commentaries in English on various Sanskrit works. He also conducted tutorials from 1895, charging 100 to 200 rupees per week. He established a printing and sports goods company, S. Ray and Company, in 1898 along with his brother-in-law, the renowned entrepreneur Hemendra Mohan Bose, through which he published the first cricket manual in Bengali in 1899. The company was quite well known in the period, and he designed a cricket bat that won a medal in the Indian Industrial and Agricultural Exhibition at Calcutta in 1906. He took a keen interest in fishing, designing baitless hooks and other gear which he sold through his company. Apart from cricket, he also took an interest in football, serving as the first president of the East Bengal Football Club. == References == == External links == Ray, Saradaranjan (1918) Kalidasa's Kumarasambhavam. Calcutta: S. Ray and Co. Kalidasa's Abhijnana Sakuntalam (1908) Siddhanta Kaumudi
|
Wikipedia:Sarah B. Hart#0
|
Sarah B. Hart is a British mathematician specialising in group theory. She is a former professor of mathematics at Birkbeck, University of London, where she was the Head of Mathematics and Statistics until 2022. As of 2025, she is the Acting Provost of Gresham College. She was previously the Gresham Professor of Geometry from 2020 to 2024, the first woman to hold this position "since the chair was established in 1597". Hart is a keen expositor of mathematics: she has written about the mathematics of Moby-Dick, and her work has been featured on websites such as 'Theorem of the Day'. == Education == While still in secondary school, Hart published an exploration (undertaken with her sister) into extending Euler's polyhedral formula to four dimensions. Hart read mathematics as an undergraduate at Balliol College, Oxford, and has an MSc in Mathematics from the University of Manchester. Her doctorate, from the University of Manchester Institute of Science and Technology (UMIST), addressed Coxeter Groups: Conjugacy Classes and Relative Dominance, under the supervision of Peter Rowley. == Later career == She remained in Manchester on an EPSRC research fellowship and then a temporary teaching position before obtaining a position as lecturer at Birkbeck in 2004. She was promoted to professor in 2013 and became head of the Department of Economics, Mathematics and Statistics in 2016. She was also president of the British Society for the History of Mathematics from 2020 to 2023. == Bibliography == Hart, Sarah, "Once Upon a Prime: The Wondrous Connections Between Mathematics and Literature", HarperCollins Publishers UK, retrieved 2023-04-13 Winner of the Euler Book Prize. == References ==
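The polyhedral-formula exploration mentioned above can be checked numerically. A minimal sketch, using the standard face counts for the cube and the tesseract (this is not the Harts' own computation): Euler's formula V − E + F = 2 for convex polyhedra extends in four dimensions to V − E + F − C = 0, where C counts the three-dimensional cells:

```python
# Euler's polyhedral formula for the cube: V - E + F = 2.
cube = dict(V=8, E=12, F=6)
print(cube["V"] - cube["E"] + cube["F"])  # 2

# Four-dimensional analogue for the tesseract (16 vertices, 32 edges,
# 24 square faces, 8 cubical cells): V - E + F - C = 0.
tesseract = dict(V=16, E=32, F=24, C=8)
print(tesseract["V"] - tesseract["E"] + tesseract["F"] - tesseract["C"])  # 0
```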
|
Wikipedia:Sarah Glaz#0
|
Sarah Glaz (born 1947) is a mathematician and mathematical poet. Her research specialty is commutative algebra; she is a professor emeritus of mathematics at the University of Connecticut. == Education and career == Glaz was born in Bucharest, Romania, and earned a bachelor's degree in 1972 at Tel Aviv University, Israel. She came to the US for her graduate education in mathematics, completing a Ph.D. in 1977 at Rutgers University. Her dissertation, Finiteness and Differential Properties of Ideals, was supervised by Wolmer Vasconcelos. After postdoctoral research at Case Western Reserve University, Glaz became an assistant professor at Wesleyan University in 1980. She moved to George Mason University in 1988, and again to the University of Connecticut in 1989. She retired as a professor emeritus in 2017. == Books == Glaz is the author of a book on commutative algebra, Commutative Coherent Rings (Lecture Notes in Mathematics 1371, Springer, 1989). She is an editor of several other books on commutative algebra. In 2017 she published a book of her mathematical poetry named after a poem by Pablo Neruda, Ode to Numbers (Antrim House, 2017). Her book was a finalist for the 2018 Next Generation Indie Book Awards. She is also the editor of an anthology of mathematical poems, Strange Attractors: Poems of Love and Mathematics (with JoAnne Growney, AK Peters/CRC Press, 2008), and has published translations of poems into English from Romanian, Portuguese, German, Sanskrit, Sumerian, and Russian. == References == == External links == Home page Sarah Glaz publications indexed by Google Scholar
|
Wikipedia:Sarah Koch#0
|
Sarah Colleen Koch (born 1979) is an American mathematician, the Arthur F. Thurnau Professor of Mathematics at the University of Michigan. Her research interests include complex analysis, complex dynamics, and Teichmüller theory. == Education and career == Koch was born and educated in Concord, New Hampshire, with summers in Wilmington, Vermont. She went to the Rensselaer Polytechnic Institute, initially studying chemistry but soon switching to mathematics; she graduated in 2001. Next, she went to Cornell University for graduate study in mathematics, earning a master's degree in 2005, and completed her studies with a double Ph.D., supervised by John H. Hubbard: a doctorate from the University of Provence in 2007 with the dissertation La Théorie de Teichmüller et ses applications aux endomorphismes de P n {\displaystyle \mathbb {P} ^{n}} , and a doctorate from Cornell in 2008 with the dissertation A New Link between Teichmüller Theory and Complex Dynamics. She held National Science Foundation postdoctoral fellowships at the University of Warwick and Harvard University; at Harvard, her postdoctoral mentor was Curtis T. McMullen. She stayed at Harvard as Benjamin Peirce Assistant Professor from 2010 to 2013. She moved to the University of Michigan in 2013, became an associate professor in 2016, and was promoted to full professor in 2021, at the same time being named Arthur F. Thurnau Professor. == Recognition == The University of Michigan gave Koch the Class of 1923 Memorial Teaching Award in 2016 and the Harold R. Johnson Diversity Service Award in 2020. Koch was the recipient of the 2021 Distinguished University Teaching of Mathematics Award of the Michigan Section of the Mathematical Association of America. 
She was a 2023 recipient of one of the Deborah and Franklin Haimo Awards for Distinguished College or University Teaching of Mathematics, recognizing her classroom teaching, her support of mathematics students from underrepresented groups, and her efforts to bring mathematics to middle schoolers from underserved African-American communities in Ypsilanti, Michigan. == References == == External links == Home page
|
Wikipedia:Sarah L. Waters#0
|
Sarah Louise Waters is a British applied mathematician whose research interests include biological fluid mechanics, tissue engineering, and their applications in medicine. She is a professor of applied mathematics in the Mathematical Institute at the University of Oxford, a Fellow of St Anne's College, Oxford, and a Royal Society Leverhulme Trust Senior Research Fellow. Waters completed her Ph.D. at the University of Leeds in 1996. Her dissertation, Coronary artery haemodynamics: pulsatile flow in a tube of time-dependent curvature, was supervised by Tim Pedley. She was named a professor at Oxford in 2014. In 2012, she won a Whitehead Prize "for her contributions to the fields of physiological fluid mechanics and the biomechanics of artificially engineered tissues". In 2019, Waters was elected a fellow of the American Physical Society. == References == == External links == Home page
|
Wikipedia:Sarason interpolation theorem#0
|
In mathematics, specifically complex analysis, the Sarason interpolation theorem, introduced by Sarason (1967), is a generalization of the Carathéodory interpolation theorem and of Nevanlinna–Pick interpolation. == References == Sarason, Donald (1967). "Generalized interpolation in H∞". Transactions of the American Mathematical Society. 127 (2): 179–203. doi:10.2307/1994641. ISSN 0002-9947. JSTOR 1994641. MR 0208383.
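The Nevanlinna–Pick problem that Sarason's theorem generalizes has a concrete solvability test: data z_i ↦ w_i in the unit disc admits an interpolating H∞ function of norm at most 1 if and only if the Pick matrix [(1 − w_i w̄_j)/(1 − z_i z̄_j)] is positive semidefinite. A small numerical sketch (the sample data here is chosen for illustration; f(z) = z² interpolates it):

```python
import numpy as np

# Illustrative interpolation data in the unit disc: 0 -> 0 and 0.5 -> 0.25.
z = np.array([0.0, 0.5])
w = np.array([0.0, 0.25])

# Pick matrix P[i, j] = (1 - w_i * conj(w_j)) / (1 - z_i * conj(z_j)).
pick = (1 - np.outer(w, w.conj())) / (1 - np.outer(z, z.conj()))
eigs = np.linalg.eigvalsh(pick)  # eigenvalues in ascending order

# Positive semidefinite (up to rounding) => the data is interpolable.
print(eigs.min() >= -1e-12)  # True
```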
|
Wikipedia:Sard's theorem#0
|
In mathematics, Sard's theorem, also known as Sard's lemma or the Morse–Sard theorem, is a result in mathematical analysis that asserts that the set of critical values (that is, the image of the set of critical points) of a smooth function f from one Euclidean space or manifold to another is a null set, i.e., it has Lebesgue measure 0. This makes the set of critical values "small" in the sense of a generic property. The theorem is named for Anthony Morse and Arthur Sard. == Statement == More explicitly, let f : R n → R m {\displaystyle f\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} ^{m}} be C k {\displaystyle C^{k}} , (that is, k {\displaystyle k} times continuously differentiable), where k ≥ max { n − m + 1 , 1 } {\displaystyle k\geq \max\{n-m+1,1\}} . Let X ⊂ R n {\displaystyle X\subset \mathbb {R} ^{n}} denote the critical set of f , {\displaystyle f,} which is the set of points x ∈ R n {\displaystyle x\in \mathbb {R} ^{n}} at which the Jacobian matrix of f {\displaystyle f} has rank < m {\displaystyle <m} . Then the image f ( X ) {\displaystyle f(X)} has Lebesgue measure 0 in R m {\displaystyle \mathbb {R} ^{m}} . Intuitively speaking, this means that although X {\displaystyle X} may be large, its image must be small in the sense of Lebesgue measure: while f {\displaystyle f} may have many critical points in the domain R n {\displaystyle \mathbb {R} ^{n}} , it must have few critical values in the image R m {\displaystyle \mathbb {R} ^{m}} . More generally, the result also holds for mappings between differentiable manifolds M {\displaystyle M} and N {\displaystyle N} of dimensions m {\displaystyle m} and n {\displaystyle n} , respectively. The critical set X {\displaystyle X} of a C k {\displaystyle C^{k}} function f : N → M {\displaystyle f:N\rightarrow M} consists of those points at which the differential d f : T N → T M {\displaystyle df:TN\rightarrow TM} has rank less than m {\displaystyle m} as a linear transformation. 
If k ≥ max { n − m + 1 , 1 } {\displaystyle k\geq \max\{n-m+1,1\}} , then Sard's theorem asserts that the image of X {\displaystyle X} has measure zero as a subset of M {\displaystyle M} . This formulation of the result follows from the version for Euclidean spaces by taking a countable set of coordinate patches. The conclusion of the theorem is a local statement, since a countable union of sets of measure zero is a set of measure zero, and the property of a subset of a coordinate patch having zero measure is invariant under diffeomorphism. == Variants == There are many variants of this lemma, which plays a basic role in singularity theory among other fields. The case m = 1 {\displaystyle m=1} was proven by Anthony P. Morse in 1939, and the general case by Arthur Sard in 1942. A version for infinite-dimensional Banach manifolds was proven by Stephen Smale. The statement is quite powerful, and the proof involves analysis. In topology it is often quoted — as in the Brouwer fixed-point theorem and some applications in Morse theory — in order to prove the weaker corollary that “a non-constant smooth map has at least one regular value”. In 1965 Sard further generalized his theorem to state that if f : N → M {\displaystyle f:N\rightarrow M} is C k {\displaystyle C^{k}} for k ≥ max { n − m + 1 , 1 } {\displaystyle k\geq \max\{n-m+1,1\}} and if A r ⊆ N {\displaystyle A_{r}\subseteq N} is the set of points x ∈ N {\displaystyle x\in N} such that d f x {\displaystyle df_{x}} has rank strictly less than r {\displaystyle r} , then the r-dimensional Hausdorff measure of f ( A r ) {\displaystyle f(A_{r})} is zero. In particular the Hausdorff dimension of f ( A r ) {\displaystyle f(A_{r})} is at most r. Caveat: The Hausdorff dimension of f ( A r ) {\displaystyle f(A_{r})} can be arbitrarily close to r. == See also == Generic property == References == == Further reading == Hirsch, Morris W. (1976), Differential Topology, New York: Springer, pp. 67–84, ISBN 0-387-90148-5. 
Sternberg, Shlomo (1964), Lectures on Differential Geometry, Englewood Cliffs, NJ: Prentice-Hall, MR 0193578, Zbl 0129.13102.
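The theorem can be made concrete in one dimension. In the following minimal sketch (the function is chosen purely for illustration), f(x) = x³ − 3x has critical set {−1, 1} and critical values {2, −2}, a finite and hence Lebesgue-null set, so a target value drawn at random is almost surely a regular value:

```python
import numpy as np

# f(x) = x^3 - 3x has f'(x) = 3x^2 - 3, which vanishes exactly at x = +/-1.
def f(x):
    return x**3 - 3*x

critical_points = np.array([-1.0, 1.0])
critical_values = f(critical_points)  # {2, -2}: f(-1) = 2, f(1) = -2

# Sard's theorem: the critical values form a measure-zero set, so uniform
# samples from [-3, 3] land on them with probability zero.
rng = np.random.default_rng(0)
samples = rng.uniform(-3, 3, size=100_000)
hits = int(np.isin(samples, critical_values).sum())
print(sorted(critical_values.tolist()), hits)  # [-2.0, 2.0] 0
```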
|
Wikipedia:Satoshi Suzuki (mathematician)#0
|
Satoshi Suzuki (24 June 1930 – 11 August 1991) was a Japanese mathematician, and a professor at Kyoto University. == Academic works == "On m-adic Differentials" is cited by five articles. "Higher differential algebras of discrete valuation rings" is cited by "Regular local rings essentially of finite type over fields of prime characteristic", Mamoru Furuya and Hiroshi Niitsuma, Journal of Algebra 306 (2006) 703–711. Five articles written by Suzuki are cited in the textbook "Homologie des algèbres commutatives". == Life == Satoshi Suzuki was born on 24 June 1930 in Nagoya. He entered Kyoto University in 1949 and studied mathematics as an undergraduate. His adviser in graduate school was Yasuo Akizuki from 1953 to 1958, but he developed his research ideas on his own. He wrote a paper, "Note on the existence of rational points", published in the Proceedings of the Japan Academy in 1958. 1958 - 59 taught at Kyoto Women's University 1959 - 61 taught at Momoyama Gakuin, Sakai, Osaka 1963 associate professor at Kyoto University 1964 submitted his doctoral thesis "Some results on m-adic differentials" to Kyoto University 1965 - 67 Purdue University, West Lafayette, Indiana, USA 1967 - 68 Queen's University, Kingston, Canada 1970 full professor at Kyoto University Paulo Ribenboim wrote about Suzuki that "I was very interested in his work on differentials, especially the higher order differentials and I attended his lectures with profit. At my suggestion, Suzuki wrote up his lecture notes ... ." == Doctoral thesis == Some results on m-adic differentials, Kyoto University, 1964. == Research articles == Note on the existence of rational points, Proceedings of the Japan Academy, 1958. On m-adic Differentials, J. Sci. Hiroshima Univ. Ser. A, Vol.24, No.3, December, 1960. Some results on Hausdorff m-adic modules and m-adic differentials, J.Math. Kyoto Univ. 2-2 (1963) 157-182. 
On torsion of the module of differentials of a locality which is a complete intersection, 1965, cited by 15 including one textbook Homologie des algèbres commutatives. Note on formally projective modules, 1966, cited by 8 On the Flatness of Complete Formally Projective Modules, 1968 JSTOR, cited by 2 including one textbook Homologie des algèbres commutatives. Differential modules and derivations of complete discrete valuation rings, 1969, cited by 4 including one textbook Homologie des algèbres commutatives. Modules of high order differentials of topological rings, 1970, cited by 1. On Neggers' numbers of discrete valuation rings, 1971, cited by 2. Differentials of commutative rings, 1971 - Kingston, Ont., Queen's University, cited by 14 including one textbook Homologie des algèbres commutatives. Corrections and supplements to my paper "Differential modules and derivations of complete discrete valuation rings", 1971 A note on Malliavin's result, 1973 Higher differential algebras of discrete valuation rings, 1974 Higher differential algebras of discrete valuation rings, 1975 Some types of derivations and their applications to field theory, 1981, cited by 16. On extensions of higher derivations for algebraic extensions of fields of positive characteristics, 1989 == Books == Collected Papers of Satoshi Suzuki, 1994 - Kingston, Ont.: Queen's University == References ==
|
Wikipedia:Savilian Professor of Geometry#0
|
The position of Savilian Professor of Geometry was established at the University of Oxford in 1619. It was founded (at the same time as the Savilian Professorship of Astronomy) by Sir Henry Savile, a mathematician and classical scholar who was Warden of Merton College, Oxford, and Provost of Eton College, reacting to what has been described by one 20th-century mathematician as "the wretched state of mathematical studies in England" at that time. He appointed Henry Briggs as the first professor. Edward Titchmarsh (professor 1931–63) said when applying that he was not prepared to lecture on geometry, and the requirement was removed from the duties of the post to enable his appointment, although the title of the chair was not changed. The two Savilian chairs have been linked with professorial fellowships at New College, Oxford, since the late 19th century. Before then, for over 175 years until the middle of the 19th century, the geometry professors had an official residence adjoining the college in New College Lane. There have been 20 professors; Frances Kirwan, the current (as of 2020) and first female holder of the chair, was appointed in 2017. The post has been held by a number of distinguished mathematicians. Briggs helped to develop the common logarithm, described as "one of the most useful systems for mathematics". The third professor, John Wallis, introduced the use of ∞ for infinity, and was regarded as "one of the leading mathematicians of his time". Both Edmond Halley, who successfully predicted the return of the comet named in his honour, and his successor Nathaniel Bliss held the post of Astronomer Royal in addition to the professorship. Stephen Rigaud (professor 1810–27) has been called "the foremost historian of astronomy and mathematics in his generation". The life and work of James Sylvester (professor 1883–94) was commemorated by the Royal Society by the inauguration of the Sylvester Medal; this was won by a later professor, Edward Titchmarsh. 
Two professors, Sylvester and Michael Atiyah (professor 1963–69), have been awarded the Copley Medal of the Royal Society; Atiyah also won the Fields Medal while he was professor. == Foundation and duties == Sir Henry Savile, the Warden of Merton College, Oxford, and Provost of Eton College, was deeply saddened by what the 20th-century mathematician Ida Busbridge has termed "the wretched state of mathematical studies in England", and so founded professorships in geometry and astronomy at the University of Oxford in 1619; both chairs were named after him. He also donated his books to the university's Bodleian Library "for the use chiefly of mathematical readers". He required the professors to be men of good character, at least 26 years old, and to have "imbibed the purer philosophy from the springs of Aristotle and Plato" before acquiring a thorough knowledge of science. The professors could come from any Christian country, but he specified that a professor from England should have a Master of Arts degree as a minimum. He wanted students to be educated in the works of the leading scientists of the ancient world, saying that the professor of geometry should teach Euclid's Elements, Apollonius's Conics, and the works of Archimedes; tuition in trigonometry was to be shared by the two professors. As many students would have had little mathematical knowledge, the professors were also permitted to provide instruction in basic mathematics in English (as opposed to Latin, the language used in education at Oxford at the time). == Appointment == Savile's first choice for the professorship of geometry was Edmund Gunter, Professor of Astronomy at Gresham College, London. It was reported that Gunter demonstrated the use of his sector and quadrant, but Savile regarded this as "showing of tricks" rather than geometry, and instead appointed Henry Briggs, the Gresham Professor of Geometry, in 1619. 
Briggs took up the chair in 1620 at an annual salary of £150 and thus became the first person to hold the first two mathematical chairs established in Britain. Savile reserved to himself the right to appoint the professors during his lifetime. After his death, he provided that vacancies should be filled by a majority of a group of "most distinguished persons": the Archbishop of Canterbury, the Lord Chancellor, the Chancellor of the university, the Bishop of London, the Secretary of State, the Chief Justice of the Common Pleas, the Chief Justice of the King's Bench, the Chief Baron of the Exchequer and the Dean of the Court of Arches. The Vice-Chancellor of the university was to inform the electors of any vacancy, and could be summoned to advise them. The appointment could either be made straight away, or delayed for some months to see whether "any eminent mathematician can be allured" from abroad. As part of reforms of the university in the 19th century, the University of Oxford commissioners laid down new statutes for the chair in 1881. The professor was to "lecture and give instruction in pure and analytical Geometry", and was to be a Fellow of New College. The electors for the professorship were to be the Warden of New College (or a person nominated by the college in his place), the Chancellor of the University of Oxford, the President of the Royal Society, the Sedleian Professor of Natural Philosophy at Oxford, the Sadleirian Professor of Pure Mathematics at the University of Cambridge, a person nominated by the university council and one other nominated by New College. Edward Titchmarsh (professor from 1931 to 1963) said when applying that he was not prepared to lecture on geometry, and the requirement was removed from the duties of the professor to enable his appointment, although the title of the chair was not changed. 
Changes to the university's internal legislation in the 20th and early 21st centuries abolished specific statutes for the duties of, and rules for appointment to, individual chairs such as the Savilian professorships. The University Council is now empowered to make appropriate arrangements for appointments and conditions of service, with the college to which any professorship is allocated (New College in the case of the Savilian chairs) to have two representatives on the board of electors. == Professors' house == John Wallis (professor 1649–1703) rented a house from New College on New College Lane from 1672 until his death in 1703; at some point, it was divided into two houses. Towards the end of his life, David Gregory (the Savilian Professor of Astronomy) lived in the eastern part of the premises: although no lease between Wallis and Gregory survives (if one was ever made between the two friends), Gregory's name appears for the first time in the parish rate-book of 1701. Wallis's son gave the unexpired portion of the lease to the university in 1704 in honour of his father's long tenure of the chair, to provide official residences for the two Savilian professors. New College renewed the lease at a low rent from 1716 and thereafter at intervals until the last renewal in 1814. Records of who lived in each house are not available throughout the period, but surviving documentation shows that the professors often sub-let the houses and for about twenty years in the early 18th century the premises were being used as a lodging house. Stephen Rigaud lived there from 1810 until he became the astronomy professor in 1827; thereafter, Baden Powell lived there with his family. 
The geometry professors were associated with the houses for longer than the astronomy professors: when the Radcliffe Observatory was built in the 1770s, the post of Radcliffe Observer was coupled to the astronomy professorship, and they were provided with a house in that role; thereafter, the university sublet the astronomy professor's house itself. In the early 19th century, New College decided that it wished to use the properties for itself and the lease expired in 1854. == List of professors == == See also == Gresham Professor of Geometry List of professorships at the University of Oxford == Notes == == References ==
|
Wikipedia:Scalar (mathematics)#0
|
A scalar is an element of a field which is used to define a vector space. In linear algebra, real numbers or generally elements of a field are called scalars and relate to vectors in an associated vector space through the operation of scalar multiplication (defined in the vector space), in which a vector can be multiplied by a scalar in the defined way to produce another vector. Generally speaking, a vector space may be defined by using any field instead of real numbers (such as complex numbers). Then scalars of that vector space will be elements of the associated field (such as complex numbers). A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied in the defined way to produce a scalar. A vector space equipped with a scalar product is called an inner product space. A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector. The term scalar is also sometimes used informally to mean a vector, matrix, tensor, or other, usually, "compound" value that is actually reduced to a single component. Thus, for example, the product of a 1 × n matrix and an n × 1 matrix, which is formally a 1 × 1 matrix, is often said to be a scalar. The real component of a quaternion is also called its scalar part. The term scalar matrix is used to denote a matrix of the form kI where k is a scalar and I is the identity matrix. == Etymology == The word scalar derives from the Latin word scalaris, an adjectival form of scala (Latin for "ladder"), from which the English word scale also comes. The first recorded usage of the word "scalar" in mathematics occurs in François Viète's Analytic Art (In artem analyticem isagoge) (1591): Magnitudes that ascend or descend proportionally in keeping with their nature from one kind to another may be called scalar terms. 
(Latin: Magnitudines quae ex genere ad genus sua vi proportionaliter adscendunt vel descendunt, vocentur Scalares.) According to a citation in the Oxford English Dictionary the first recorded usage of the term "scalar" in English came with W. R. Hamilton in 1846, referring to the real part of a quaternion: The algebraically real part may receive, according to the question in which it occurs, all values contained on the one scale of progression of numbers from negative to positive infinity; we shall call it therefore the scalar part. == Definitions and properties == === Scalars of vector spaces === A vector space is defined as a set of vectors (additive abelian group), a set of scalars (field), and a scalar multiplication operation that takes a scalar k and a vector v to form another vector kv. For example, in a coordinate space, the scalar multiplication k ( v 1 , v 2 , … , v n ) {\displaystyle k(v_{1},v_{2},\dots ,v_{n})} yields ( k v 1 , k v 2 , … , k v n ) {\displaystyle (kv_{1},kv_{2},\dots ,kv_{n})} . In a (linear) function space, kf is the function x ↦ k(f(x)). The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields. === Scalars as vector components === According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a field K is isomorphic to the corresponding coordinate vector space where each coordinate consists of elements of K (E.g., coordinates (a1, a2, ..., an) where ai ∈ K and n is the dimension of the vector space in consideration.). For example, every real vector space of dimension n is isomorphic to the n-dimensional real space Rn. === Scalars in normed vector spaces === Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|. 
If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space (or normed linear space). The norm is usually defined to be an element of V's scalar field K, which restricts the latter to fields that support the notion of sign. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded, but the surd field is acceptable. For this reason, not every scalar product space is a normed vector space. === Scalars in modules === When the requirement that the set of scalars form a field is relaxed so that it need only form a ring (so that, for example, the division of scalars need not be defined, or the scalars need not be commutative), the resulting more general algebraic structure is called a module. In this case the "scalars" may be complicated objects. For instance, if R is a ring, the vectors of the product space Rn can be made into a module with the n × n matrices with entries from R as the scalars. Another example comes from manifold theory, where the space of sections of the tangent bundle forms a module over the algebra of real functions on the manifold. === Scaling transformation === The scalar multiplication of vector spaces and modules is a special case of scaling, a kind of linear transformation. == See also == Algebraic structure Scalar (physics) Linear algebra Matrix (mathematics) Row and column vectors Tensor Vector (mathematics and physics) Vector calculus == References == == External links == "Scalar". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. Weisstein, Eric W. "Scalar". MathWorld. Mathwords.com – Scalar
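The defining interaction between scalars and vectors, including the norm property ||kv|| = |k| ||v|| discussed above and the behaviour of a scalar matrix kI, can be sketched in a few lines. This is an illustrative sketch in plain Python; the helper names are not from any particular library:

```python
import math

def scalar_multiply(k, v):
    """Scalar multiplication: scale each component of v by the scalar k."""
    return tuple(k * x for x in v)

def norm(v):
    """Euclidean norm of a coordinate vector."""
    return math.sqrt(sum(x * x for x in v))

v = (3.0, 4.0)   # a vector in R^2 with norm 5
k = -2.0         # a scalar from the field R

kv = scalar_multiply(k, v)
# By definition, multiplying v by k multiplies its norm by |k|.
assert math.isclose(norm(kv), abs(k) * norm(v))

# A scalar matrix kI acts on vectors exactly like the scalar k itself.
I = ((1.0, 0.0), (0.0, 1.0))
kI = tuple(scalar_multiply(k, row) for row in I)
kI_v = tuple(sum(kI[i][j] * v[j] for j in range(2)) for i in range(2))
assert kI_v == kv
```

The same skeleton works over any field supported by the host language's arithmetic, e.g. `complex` scalars in place of floats.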
|
Wikipedia:Scaling and shifting#0
|
In mathematics, a change of variables is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem. Change of variables is an operation that is related to substitution. However these are different operations, as can be seen when considering differentiation (chain rule) or integration (integration by substitution). A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth-degree polynomial: x 6 − 9 x 3 + 8 = 0. {\displaystyle x^{6}-9x^{3}+8=0.} Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem). This particular equation, however, may be written ( x 3 ) 2 − 9 ( x 3 ) + 8 = 0 {\displaystyle (x^{3})^{2}-9(x^{3})+8=0} (this is a simple case of a polynomial decomposition). Thus the equation may be simplified by defining a new variable u = x 3 {\displaystyle u=x^{3}} . Substituting x by u 3 {\displaystyle {\sqrt[{3}]{u}}} into the polynomial gives u 2 − 9 u + 8 = 0 , {\displaystyle u^{2}-9u+8=0,} which is just a quadratic equation with the two solutions: u = 1 and u = 8. {\displaystyle u=1\quad {\text{and}}\quad u=8.} The solutions in terms of the original variable are obtained by substituting x3 back in for u, which gives x 3 = 1 and x 3 = 8. {\displaystyle x^{3}=1\quad {\text{and}}\quad x^{3}=8.} Then, assuming that one is interested only in real solutions, the solutions of the original equation are x = ( 1 ) 1 / 3 = 1 and x = ( 8 ) 1 / 3 = 2. {\displaystyle x=(1)^{1/3}=1\quad {\text{and}}\quad x=(8)^{1/3}=2.} == Simple example == Consider the system of equations x y + x + y = 71 {\displaystyle xy+x+y=71} x 2 y + x y 2 = 880 {\displaystyle x^{2}y+xy^{2}=880} where x {\displaystyle x} and y {\displaystyle y} are positive integers with x > y {\displaystyle x>y} . 
(Source: 1991 AIME) Solving this normally is not very difficult, but it may get a little tedious. However, we can rewrite the second equation as x y ( x + y ) = 880 {\displaystyle xy(x+y)=880} . Making the substitutions s = x + y {\displaystyle s=x+y} and t = x y {\displaystyle t=xy} reduces the system to s + t = 71 , s t = 880 {\displaystyle s+t=71,st=880} . Solving this gives ( s , t ) = ( 16 , 55 ) {\displaystyle (s,t)=(16,55)} and ( s , t ) = ( 55 , 16 ) {\displaystyle (s,t)=(55,16)} . Back-substituting the first ordered pair gives us x + y = 16 , x y = 55 , x > y {\displaystyle x+y=16,xy=55,x>y} , which gives the solution ( x , y ) = ( 11 , 5 ) . {\displaystyle (x,y)=(11,5).} Back-substituting the second ordered pair gives us x + y = 55 , x y = 16 , x > y {\displaystyle x+y=55,xy=16,x>y} , which gives no solutions. Hence the solution that solves the system is ( x , y ) = ( 11 , 5 ) {\displaystyle (x,y)=(11,5)} . == Formal introduction == Let A {\displaystyle A} , B {\displaystyle B} be smooth manifolds and let Φ : A → B {\displaystyle \Phi :A\rightarrow B} be a C r {\displaystyle C^{r}} -diffeomorphism between them, that is: Φ {\displaystyle \Phi } is a r {\displaystyle r} times continuously differentiable, bijective map from A {\displaystyle A} to B {\displaystyle B} with r {\displaystyle r} times continuously differentiable inverse from B {\displaystyle B} to A {\displaystyle A} . Here r {\displaystyle r} may be any natural number (or zero), ∞ {\displaystyle \infty } (smooth) or ω {\displaystyle \omega } (analytic). The map Φ {\displaystyle \Phi } is called a regular coordinate transformation or regular variable substitution, where regular refers to the C r {\displaystyle C^{r}} -ness of Φ {\displaystyle \Phi } . 
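A concrete one-dimensional instance of a regular variable substitution (an illustrative choice, not from the article): Φ(y) = y³ + y has Φ′(y) = 3y² + 1 > 0 everywhere, so it is a smooth bijection of ℝ with a smooth inverse by the inverse function theorem. The sketch below inverts it numerically by bisection:

```python
def phi(y):
    # A C-infinity map R -> R with phi'(y) = 3y^2 + 1 > 0, hence a diffeomorphism.
    return y ** 3 + y

def phi_inverse(x, tol=1e-12):
    """Invert phi by bisection (valid because phi is strictly increasing)."""
    # Bracket the root: |phi(y)| = |y|(y^2 + 1) >= |y|, so +/-(1 + |x|) straddles x.
    lo, hi = -1.0 - abs(x), 1.0 + abs(x)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if phi(mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# phi(2) = 8 + 2 = 10, so the inverse of 10 is 2.
assert abs(phi_inverse(10.0) - 2.0) < 1e-9
# Round trip: substituting y = phi_inverse(x) recovers x.
assert abs(phi(phi_inverse(-3.7)) - (-3.7)) < 1e-9
```

Replacing every occurrence of x by phi(y) in an equation, solving in y, and mapping back through phi_inverse is exactly the substitution pattern described above.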
Usually one will write x = Φ ( y ) {\displaystyle x=\Phi (y)} to indicate the replacement of the variable x {\displaystyle x} by the variable y {\displaystyle y} by substituting the value of Φ {\displaystyle \Phi } at y {\displaystyle y} for every occurrence of x {\displaystyle x} . == Other examples == === Coordinate transformation === Some systems can be more easily solved when switching to polar coordinates. Consider for example the equation U ( x , y ) := ( x 2 + y 2 ) 1 − x 2 x 2 + y 2 = 0. {\displaystyle U(x,y):=(x^{2}+y^{2}){\sqrt {1-{\frac {x^{2}}{x^{2}+y^{2}}}}}=0.} This may be a potential energy function for some physical problem. If one does not immediately see a solution, one might try the substitution ( x , y ) = Φ ( r , θ ) {\displaystyle \displaystyle (x,y)=\Phi (r,\theta )} given by Φ ( r , θ ) = ( r cos ( θ ) , r sin ( θ ) ) . {\displaystyle \displaystyle \Phi (r,\theta )=(r\cos(\theta ),r\sin(\theta )).} Note that if θ {\displaystyle \theta } runs outside a 2 π {\displaystyle 2\pi } -length interval, for example, [ 0 , 2 π ] {\displaystyle [0,2\pi ]} , the map Φ {\displaystyle \Phi } is no longer bijective. Therefore, Φ {\displaystyle \Phi } should be limited to, for example, ( 0 , ∞ ) × [ 0 , 2 π ) {\displaystyle (0,\infty )\times [0,2\pi )} . Notice how r = 0 {\displaystyle r=0} is excluded, for Φ {\displaystyle \Phi } is not bijective at the origin ( θ {\displaystyle \theta } can take any value, the point will be mapped to (0, 0)). Then, replacing all occurrences of the original variables by the new expressions prescribed by Φ {\displaystyle \Phi } and using the identity sin 2 x + cos 2 x = 1 {\displaystyle \sin ^{2}x+\cos ^{2}x=1} , we get V ( r , θ ) = r 2 1 − r 2 cos 2 θ r 2 = r 2 1 − cos 2 θ = r 2 | sin θ | . 
{\displaystyle V(r,\theta )=r^{2}{\sqrt {1-{\frac {r^{2}\cos ^{2}\theta }{r^{2}}}}}=r^{2}{\sqrt {1-\cos ^{2}\theta }}=r^{2}\left|\sin \theta \right|.} Now the solutions can be readily found: sin ( θ ) = 0 {\displaystyle \sin(\theta )=0} , so θ = 0 {\displaystyle \theta =0} or θ = π {\displaystyle \theta =\pi } . Applying the inverse of Φ {\displaystyle \Phi } shows that this is equivalent to y = 0 {\displaystyle y=0} while x ≠ 0 {\displaystyle x\not =0} . Indeed, we see that for y = 0 {\displaystyle y=0} the function vanishes, except for the origin. Note that, had we allowed r = 0 {\displaystyle r=0} , the origin would also have been a solution, though it is not a solution to the original problem. Here the bijectivity of Φ {\displaystyle \Phi } is crucial. The function is always positive (for x , y ∈ R {\displaystyle x,y\in \mathbb {R} } ), hence the absolute values. === Differentiation === The chain rule is used to simplify complicated differentiation. For example, consider the problem of calculating the derivative d d x sin ( x 2 ) . {\displaystyle {\frac {d}{dx}}\sin(x^{2}).} Let y = sin u {\displaystyle y=\sin u} with u = x 2 . {\displaystyle u=x^{2}.} Then: d d x sin ( x 2 ) = d y d x = d y d u d u d x This part is the chain rule. = ( d d u sin u ) ( d d x x 2 ) = ( cos u ) ( 2 x ) = ( cos ( x 2 ) ) ( 2 x ) = 2 x cos ( x 2 ) {\displaystyle {\begin{aligned}{\frac {d}{dx}}\sin(x^{2})&={\frac {dy}{dx}}\\[6pt]&={\frac {dy}{du}}{\frac {du}{dx}}&&{\text{This part is the chain rule.}}\\[6pt]&=\left({\frac {d}{du}}\sin u\right)\left({\frac {d}{dx}}x^{2}\right)\\[6pt]&=(\cos u)(2x)\\&=\left(\cos(x^{2})\right)(2x)\\&=2x\cos(x^{2})\end{aligned}}} === Integration === Difficult integrals may often be evaluated by changing variables; this is enabled by the substitution rule and is analogous to the use of the chain rule above. 
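The chain-rule result d/dx sin(x²) = 2x cos(x²) derived above can be spot-checked numerically; a short sketch (the test point 0.7 is arbitrary):

```python
import math

def f(x):
    return math.sin(x ** 2)

def f_prime(x):
    # Result of the chain rule: (dy/du)(du/dx) = cos(x^2) * 2x.
    return 2.0 * x * math.cos(x ** 2)

x0 = 0.7
h = 1e-5
central_difference = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
# The O(h^2) central difference agrees with the symbolic derivative.
assert abs(central_difference - f_prime(x0)) < 1e-8
```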
Difficult integrals may also be solved by simplifying the integral using a change of variables given by the corresponding Jacobian matrix and determinant. Using the Jacobian determinant and the corresponding change of variable that it gives is the basis of coordinate systems such as polar, cylindrical, and spherical coordinate systems. ==== Change of variables formula in terms of Lebesgue measure ==== The following theorem allows us to relate integrals with respect to Lebesgue measure to an equivalent integral with respect to the pullback measure under a parameterization G. The proof relies on approximation by Jordan content. Suppose that Ω {\displaystyle \Omega } is an open subset of R n {\displaystyle \mathbb {R} ^{n}} and G : Ω → R n {\displaystyle G:\Omega \to \mathbb {R} ^{n}} is a C 1 {\displaystyle C^{1}} diffeomorphism. If f {\displaystyle f} is a Lebesgue measurable function on G ( Ω ) {\displaystyle G(\Omega )} , then f ∘ G {\displaystyle f\circ G} is Lebesgue measurable on Ω {\displaystyle \Omega } . If f ≥ 0 {\displaystyle f\geq 0} or f ∈ L 1 ( G ( Ω ) , m ) , {\displaystyle f\in L^{1}(G(\Omega ),m),} then ∫ G ( Ω ) f ( x ) d x = ∫ Ω f ∘ G ( x ) | det D x G | d x {\displaystyle \int _{G(\Omega )}f(x)dx=\int _{\Omega }f\circ G(x)|{\text{det}}D_{x}G|dx} . If E ⊂ Ω {\displaystyle E\subset \Omega } and E {\displaystyle E} is Lebesgue measurable, then G ( E ) {\displaystyle G(E)} is Lebesgue measurable, and m ( G ( E ) ) = ∫ E | det D x G | d x {\displaystyle m(G(E))=\int _{E}|{\text{det}}D_{x}G|dx} . As a corollary of this theorem, we may compute the Radon–Nikodym derivatives of both the pullback and pushforward measures of m {\displaystyle m} under T {\displaystyle T} . ===== Pullback measure and transformation formula ===== The pullback measure in terms of a transformation T {\displaystyle T} is defined for measurable sets A {\displaystyle A} as ( T ∗ μ ) ( A ) := μ ( T ( A ) ) {\displaystyle (T^{*}\mu )(A):=\mu (T(A))} . 
The change of variables formula for pullback measures is ∫ T ( Ω ) g d μ = ∫ Ω g ∘ T d T ∗ μ {\displaystyle \int _{T(\Omega )}gd\mu =\int _{\Omega }g\circ TdT^{*}\mu } . ===== Pushforward measure and transformation formula ===== The pushforward measure in terms of a transformation T {\displaystyle T} is defined for measurable sets A {\displaystyle A} as ( T ∗ μ ) ( A ) := μ ( T − 1 ( A ) ) {\displaystyle (T_{*}\mu )(A):=\mu (T^{-1}(A))} . The change of variables formula for pushforward measures is ∫ Ω g ∘ T d μ = ∫ T ( Ω ) g d T ∗ μ {\displaystyle \int _{\Omega }g\circ Td\mu =\int _{T(\Omega )}gdT_{*}\mu } . As a corollary of the change of variables formula for Lebesgue measure, we have the Radon–Nikodym derivative of the pullback with respect to Lebesgue measure: d T ∗ m d m ( x ) = | det D x T | {\displaystyle {\frac {dT^{*}m}{dm}}(x)=|{\text{det}}D_{x}T|} and the Radon–Nikodym derivative of the pushforward with respect to Lebesgue measure: d T ∗ m d m ( x ) = | det D x T − 1 | {\displaystyle {\frac {dT_{*}m}{dm}}(x)=|{\text{det}}D_{x}T^{-1}|} From these we may obtain the change of variables formula for the pullback measure: ∫ T ( Ω ) g d m = ∫ Ω g ∘ T d T ∗ m = ∫ Ω g ∘ T | det D x T | d m ( x ) {\displaystyle \int _{T(\Omega )}gdm=\int _{\Omega }g\circ TdT^{*}m=\int _{\Omega }g\circ T|{\text{det}}D_{x}T|dm(x)} and the change of variables formula for the pushforward measure: ∫ Ω g d m = ∫ T ( Ω ) g ∘ T − 1 d T ∗ m = ∫ T ( Ω ) g ∘ T − 1 | det D x T − 1 | d m ( x ) {\displaystyle \int _{\Omega }gdm=\int _{T(\Omega )}g\circ T^{-1}dT_{*}m=\int _{T(\Omega )}g\circ T^{-1}|{\text{det}}D_{x}T^{-1}|dm(x)} === Differential equations === Variable changes for differentiation and integration are taught in elementary calculus, and the steps are rarely carried out in full. The very broad use of variable changes is apparent when considering differential equations, where the independent variables may be changed using the chain rule or the dependent variables are changed, resulting in some differentiation to be carried out. 
Exotic changes, such as the mingling of dependent and independent variables in point and contact transformations, can be very complicated but allow much freedom. Very often, a general form for a change is substituted into a problem and parameters picked along the way to best simplify the problem. === Scaling and shifting === Probably the simplest change is the scaling and shifting of variables, that is, replacing them with new variables that are "stretched" and "moved" by constant amounts. For an nth order derivative, the change simply results in d n y d x n = y scale x scale n d n y ^ d x ^ n {\displaystyle {\frac {d^{n}y}{dx^{n}}}={\frac {y_{\text{scale}}}{x_{\text{scale}}^{n}}}{\frac {d^{n}{\hat {y}}}{d{\hat {x}}^{n}}}} where x = x ^ x scale + x shift {\displaystyle x={\hat {x}}x_{\text{scale}}+x_{\text{shift}}} y = y ^ y scale + y shift . {\displaystyle y={\hat {y}}y_{\text{scale}}+y_{\text{shift}}.} This may be shown readily through the chain rule and linearity of differentiation. This change is very common in practical applications to get physical parameters out of problems; for example, the boundary value problem μ d 2 u d y 2 = d p d x ; u ( 0 ) = u ( L ) = 0 {\displaystyle \mu {\frac {d^{2}u}{dy^{2}}}={\frac {dp}{dx}}\quad ;\quad u(0)=u(L)=0} describes parallel fluid flow between flat solid walls separated by a distance L; μ is the viscosity and d p / d x {\displaystyle dp/dx} the pressure gradient, both constants. By scaling the variables the problem becomes d 2 u ^ d y ^ 2 = 1 ; u ^ ( 0 ) = u ^ ( 1 ) = 0 {\displaystyle {\frac {d^{2}{\hat {u}}}{d{\hat {y}}^{2}}}=1\quad ;\quad {\hat {u}}(0)={\hat {u}}(1)=0} where y = y ^ L and u = u ^ L 2 μ d p d x . {\displaystyle y={\hat {y}}L\qquad {\text{and}}\qquad u={\hat {u}}{\frac {L^{2}}{\mu }}{\frac {dp}{dx}}.} Scaling is useful for many reasons. 
It simplifies analysis both by reducing the number of parameters and by simply making the problem neater. Proper scaling may normalize variables, that is, make them have a sensible unitless range such as 0 to 1. Finally, if a problem mandates numeric solution, the fewer the parameters, the fewer the computations. === Momentum vs. velocity === Consider a system of equations m v ˙ = − ∂ H ∂ x m x ˙ = ∂ H ∂ v {\displaystyle {\begin{aligned}m{\dot {v}}&=-{\frac {\partial H}{\partial x}}\\[5pt]m{\dot {x}}&={\frac {\partial H}{\partial v}}\end{aligned}}} for a given function H ( x , v ) {\displaystyle H(x,v)} . The mass can be eliminated by the (trivial) substitution Φ ( p ) = 1 / m ⋅ p {\displaystyle \Phi (p)=1/m\cdot p} . Clearly this is a bijective map from R {\displaystyle \mathbb {R} } to R {\displaystyle \mathbb {R} } . Under the substitution v = Φ ( p ) {\displaystyle v=\Phi (p)} the system becomes p ˙ = − ∂ H ∂ x x ˙ = ∂ H ∂ p {\displaystyle {\begin{aligned}{\dot {p}}&=-{\frac {\partial H}{\partial x}}\\[5pt]{\dot {x}}&={\frac {\partial H}{\partial p}}\end{aligned}}} === Lagrangian mechanics === Given a force field φ ( t , x , v ) {\displaystyle \varphi (t,x,v)} , Newton's equations of motion are m x ¨ = φ ( t , x , v ) . {\displaystyle m{\ddot {x}}=\varphi (t,x,v).} Lagrange examined how these equations of motion change under an arbitrary substitution of variables x = Ψ ( t , y ) {\displaystyle x=\Psi (t,y)} , v = ∂ Ψ ( t , y ) ∂ t + ∂ Ψ ( t , y ) ∂ y ⋅ w . {\displaystyle v={\frac {\partial \Psi (t,y)}{\partial t}}+{\frac {\partial \Psi (t,y)}{\partial y}}\cdot w.} He found that the equations ∂ L ∂ y = d d t ∂ L ∂ w {\displaystyle {\frac {\partial {L}}{\partial y}}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial {L}}{\partial {w}}}} are equivalent to Newton's equations for the function L = T − V {\displaystyle L=T-V} , where T is the kinetic, and V the potential energy. 
In fact, when the substitution is chosen well (exploiting for example symmetries and constraints of the system) these equations are much easier to solve than Newton's equations in Cartesian coordinates. == See also == Change of variables (PDE) Change of variables for probability densities Substitution property of equality Universal instantiation == References ==
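The dimensionless boundary value problem from the scaling example above, û″ = 1 with û(0) = û(1) = 0, has the closed form û(ŷ) = (ŷ² − ŷ)/2, and the physical velocity profile is recovered by undoing the scaling. A numerical sketch with numpy (the physical parameter values are arbitrary illustrations, not from the article):

```python
import numpy as np

# Solve u_hat'' = 1, u_hat(0) = u_hat(1) = 0 by central finite differences.
N = 101
y_hat = np.linspace(0.0, 1.0, N)
h = y_hat[1] - y_hat[0]
A = (np.diag(-2.0 * np.ones(N - 2))
     + np.diag(np.ones(N - 3), 1)
     + np.diag(np.ones(N - 3), -1))
rhs = np.full(N - 2, h * h)          # u'' = 1 discretized at interior points
u_hat = np.zeros(N)
u_hat[1:-1] = np.linalg.solve(A, rhs)

# The exact dimensionless solution is quadratic, so the scheme is exact at nodes.
exact = (y_hat ** 2 - y_hat) / 2.0
assert np.max(np.abs(u_hat - exact)) < 1e-10

# Undo the scaling with illustrative parameters: gap L [m], viscosity mu [Pa s],
# pressure gradient dp/dx [Pa/m].
L, mu, dpdx = 0.01, 1e-3, -100.0
u = u_hat * L ** 2 / mu * dpdx       # u = u_hat * (L^2 / mu) * dp/dx
# Peak speed of plane Poiseuille flow at mid-gap: -(dp/dx) L^2 / (8 mu).
assert np.isclose(u.max(), -dpdx * L ** 2 / (8 * mu))
```

Note that the solver only ever sees the single parameter-free problem; all physical constants enter through the final rescaling, which is precisely the point of the technique.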
|
Wikipedia:Schilder's theorem#0
|
In mathematics, Schilder's theorem is a generalization of the Laplace method from integrals on R n {\displaystyle \mathbb {R} ^{n}} to functional Wiener integration. The theorem is used in the large deviations theory of stochastic processes. Roughly speaking, out of Schilder's theorem one gets an estimate for the probability that a (scaled-down) sample path of Brownian motion will stray far from the mean path (which is constant with value 0). This statement is made precise using rate functions. Schilder's theorem is generalized by the Freidlin–Wentzell theorem for Itō diffusions. == Statement of the theorem == Let C0 = C0([0, T]; Rd) be the Banach space of continuous functions f : [ 0 , T ] ⟶ R d {\displaystyle f:[0,T]\longrightarrow \mathbf {R} ^{d}} such that f ( 0 ) = 0 {\displaystyle f(0)=0} , equipped with the supremum norm ||⋅||∞ and C 0 ∗ {\displaystyle C_{0}^{\ast }} be the subspace of absolutely continuous functions whose derivative is in L 2 {\displaystyle L^{2}} (the so-called Cameron-Martin space). Define the rate function I ( ω ) = 1 2 ∫ 0 T ‖ ω ˙ ( t ) ‖ 2 d t {\displaystyle I(\omega )={\frac {1}{2}}\int _{0}^{T}\|{\dot {\omega }}(t)\|^{2}\,\mathrm {d} t} on C 0 ∗ {\displaystyle C_{0}^{\ast }} and let F : C 0 → R , G : C 0 → C {\displaystyle F:C_{0}\to \mathbb {R} ,G:C_{0}\to \mathbb {C} } be two given functions, such that S := I + F {\displaystyle S:=I+F} (the "action") has a unique minimum Ω ∈ C 0 ∗ {\displaystyle \Omega \in C_{0}^{\ast }} . 
Then under some differentiability and growth assumptions on F , G {\displaystyle F,G} which are detailed in Schilder 1966, one has lim λ → ∞ E [ exp ( − λ F ( λ − 1 / 2 ω ) ) G ( λ − 1 / 2 ω ) ] exp ( − λ S ( Ω ) ) = G ( Ω ) E [ exp ( − 1 2 ⟨ ω , D ( Ω ) ω ⟩ ) ] {\displaystyle \lim _{\lambda \to \infty }{\frac {\mathbb {E} \left[\exp \left(-\lambda F(\lambda ^{-1/2}\omega )\right)G(\lambda ^{-1/2}\omega )\right]}{\exp \left(-\lambda S(\Omega )\right)}}=G(\Omega )\mathbb {E} \left[\exp \left(-{\frac {1}{2}}\langle \omega ,D(\Omega )\omega \rangle \right)\right]} where E {\displaystyle \mathbb {E} } denotes expectation with respect to the Wiener measure P {\displaystyle \mathbb {P} } on C 0 {\displaystyle C_{0}} and D ( Ω ) {\displaystyle D(\Omega )} is the Hessian of F {\displaystyle F} at the minimum Ω {\displaystyle \Omega } ; ⟨ ω , D ( Ω ) ω ⟩ {\displaystyle \langle \omega ,D(\Omega )\omega \rangle } is meant in the sense of an L 2 ( [ 0 , T ] ) {\displaystyle L^{2}([0,T])} inner product. == Application to large deviations on the Wiener measure == Let B be a standard Brownian motion in d-dimensional Euclidean space Rd starting at the origin, 0 ∈ Rd; let W denote the law of B, i.e. classical Wiener measure. For ε > 0, let Wε denote the law of the rescaled process √εB. Then, on the Banach space C0 = C0([0, T]; Rd) of continuous functions f : [ 0 , T ] ⟶ R d {\displaystyle f:[0,T]\longrightarrow \mathbf {R} ^{d}} such that f ( 0 ) = 0 {\displaystyle f(0)=0} , equipped with the supremum norm ||⋅||∞, the probability measures Wε satisfy the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by I ( ω ) = 1 2 ∫ 0 T | ω ˙ ( t ) | 2 d t {\displaystyle I(\omega )={\frac {1}{2}}\int _{0}^{T}|{\dot {\omega }}(t)|^{2}\,\mathrm {d} t} if ω is absolutely continuous, and I(ω) = +∞ otherwise. 
In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0, lim sup ε ↓ 0 ε log W ε ( F ) ≤ − inf ω ∈ F I ( ω ) {\displaystyle \limsup _{\varepsilon \downarrow 0}\varepsilon \log \mathbf {W} _{\varepsilon }(F)\leq -\inf _{\omega \in F}I(\omega )} and lim inf ε ↓ 0 ε log W ε ( G ) ≥ − inf ω ∈ G I ( ω ) . {\displaystyle \liminf _{\varepsilon \downarrow 0}\varepsilon \log \mathbf {W} _{\varepsilon }(G)\geq -\inf _{\omega \in G}I(\omega ).} == Example == Taking ε = 1/c2, one can use Schilder's theorem to obtain estimates for the probability that a standard Brownian motion B strays further than c from its starting point over the time interval [0, T], i.e. the probability W ( C 0 ∖ B c ( 0 ; ‖ ⋅ ‖ ∞ ) ) ≡ P [ ‖ B ‖ ∞ > c ] , {\displaystyle \mathbf {W} (C_{0}\smallsetminus \mathbf {B} _{c}(0;\|\cdot \|_{\infty }))\equiv \mathbf {P} {\big [}\|B\|_{\infty }>c{\big ]},} as c tends to infinity. Here Bc(0; ||⋅||∞) denotes the open ball of radius c about the zero function in C0, taken with respect to the supremum norm. First note that ‖ B ‖ ∞ > c ⟺ ε B ∈ A := { ω ∈ C 0 ∣ | ω ( t ) | > 1 for some t ∈ [ 0 , T ] } . 
{\displaystyle \|B\|_{\infty }>c\iff {\sqrt {\varepsilon }}B\in A:=\left\{\omega \in C_{0}\mid |\omega (t)|>1{\text{ for some }}t\in [0,T]\right\}.} Since the rate function is continuous on A, Schilder's theorem yields lim c → ∞ log ( P [ ‖ B ‖ ∞ > c ] ) c 2 = lim ε → 0 ε log ( P [ ε B ∈ A ] ) = − inf { 1 2 ∫ 0 T | ω ˙ ( t ) | 2 d t | ω ∈ A } = − 1 2 ∫ 0 T 1 T 2 d t = − 1 2 T , {\displaystyle {\begin{aligned}\lim _{c\to \infty }{\frac {\log \left(\mathbf {P} \left[\|B\|_{\infty }>c\right]\right)}{c^{2}}}&=\lim _{\varepsilon \to 0}\varepsilon \log \left(\mathbf {P} \left[{\sqrt {\varepsilon }}B\in A\right]\right)\\[6pt]&=-\inf \left\{\left.{\frac {1}{2}}\int _{0}^{T}|{\dot {\omega }}(t)|^{2}\,\mathrm {d} t\,\right|\,\omega \in A\right\}\\[6pt]&=-{\frac {1}{2}}\int _{0}^{T}{\frac {1}{T^{2}}}\,\mathrm {d} t\\[6pt]&=-{\frac {1}{2T}},\end{aligned}}} making use of the fact that the infimum over paths in the collection A is attained for ω(t) = t/T . This result can be heuristically interpreted as saying that, for large c and/or large T log ( P [ ‖ B ‖ ∞ > c ] ) c 2 ≈ − 1 2 T or P [ ‖ B ‖ ∞ > c ] ≈ exp ( − c 2 2 T ) . {\displaystyle {\frac {\log \left(\mathbf {P} \left[\|B\|_{\infty }>c\right]\right)}{c^{2}}}\approx -{\frac {1}{2T}}\qquad {\text{or}}\qquad \mathbf {P} \left[\|B\|_{\infty }>c\right]\approx \exp \left(-{\frac {c^{2}}{2T}}\right).} In fact, the above probability can be estimated more precisely: for B a standard Brownian motion in Rn, and any T, c and ε > 0, we have: P [ sup 0 ≤ t ≤ T | ε B t | ≥ c ] ≤ 4 n exp ( − c 2 2 n T ε ) . {\displaystyle \mathbf {P} \left[\sup _{0\leq t\leq T}\left|{\sqrt {\varepsilon }}B_{t}\right|\geq c\right]\leq 4n\exp \left(-{\frac {c^{2}}{2nT\varepsilon }}\right).} == References == Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036. (See theorem 5.2)
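The heuristic P[||B||∞ > c] ≈ exp(−c²/2T) and the explicit tail bound above can be probed by direct simulation. A Monte Carlo sketch with numpy; the path count, step count, and the choice c = T = 1 are arbitrary, and such naive sampling is only informative when the event is not too rare:

```python
import numpy as np

rng = np.random.default_rng(0)
T, c = 1.0, 1.0
n_paths, n_steps = 4000, 400
dt = T / n_steps

# Simulate standard 1-d Brownian paths on [0, T] from independent increments.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)
sup_abs = np.abs(paths).max(axis=1)

p_hat = np.mean(sup_abs > c)
# For c = T = 1 the exact probability is about 0.63; the discrete-time
# estimate is slightly lower because excursions between grid points are missed.
assert 0.5 < p_hat < 0.72

# The explicit bound quoted above (n = 1, epsilon = 1) must hold:
# P[sup |B_t| >= c] <= 4 exp(-c^2 / (2 T)).
assert p_hat <= 4.0 * np.exp(-c ** 2 / (2.0 * T))
```

For large c the event becomes exponentially rare and the large-deviation asymptotics take over; estimating such probabilities by plain Monte Carlo would require importance sampling.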
|
Wikipedia:Schlömilch's series#0
|
Schlömilch's series is a Fourier series type expansion of a twice continuously differentiable function in the interval ( 0 , π ) {\displaystyle (0,\pi )} in terms of the Bessel function of the first kind, named after the German mathematician Oskar Schlömilch, who derived the series in 1857. The real-valued function f ( x ) {\displaystyle f(x)} has the following expansion: f ( x ) = a 0 + ∑ n = 1 ∞ a n J 0 ( n x ) , {\displaystyle f(x)=a_{0}+\sum _{n=1}^{\infty }a_{n}J_{0}(nx),} where a 0 = f ( 0 ) + 1 π ∫ 0 π ∫ 0 π / 2 u f ′ ( u sin θ ) d θ d u , a n = 2 π ∫ 0 π ∫ 0 π / 2 u cos n u f ′ ( u sin θ ) d θ d u . {\displaystyle {\begin{aligned}a_{0}&=f(0)+{\frac {1}{\pi }}\int _{0}^{\pi }\int _{0}^{\pi /2}uf'(u\sin \theta )\ d\theta \ du,\\a_{n}&={\frac {2}{\pi }}\int _{0}^{\pi }\int _{0}^{\pi /2}u\cos nu\ f'(u\sin \theta )\ d\theta \ du.\end{aligned}}} == Examples == Some examples of Schlömilch's series are the following: Null functions in the interval ( 0 , π ) {\displaystyle (0,\pi )} can be expressed by Schlömilch's series, 0 = 1 2 + ∑ n = 1 ∞ ( − 1 ) n J 0 ( n x ) {\displaystyle 0={\frac {1}{2}}+\sum _{n=1}^{\infty }(-1)^{n}J_{0}(nx)} , which cannot be obtained by Fourier series. This is particularly interesting because the null function is represented by a series expansion in which not all the coefficients are zero. The series converges only when 0 < x < π {\displaystyle 0<x<\pi } ; the series oscillates at x = 0 {\displaystyle x=0} and diverges at x = π {\displaystyle x=\pi } . This theorem is generalized so that 0 = 1 2 Γ ( ν + 1 ) + ∑ n = 1 ∞ ( − 1 ) n J ν ( n x ) / ( n x / 2 ) ν {\displaystyle 0={\frac {1}{2\Gamma (\nu +1)}}+\sum _{n=1}^{\infty }(-1)^{n}J_{\nu }(nx)/(nx/2)^{\nu }} when − 1 / 2 < ν ≤ 1 / 2 {\displaystyle -1/2<\nu \leq 1/2} and 0 < x < π {\displaystyle 0<x<\pi } and also when ν > 1 / 2 {\displaystyle \nu >1/2} and 0 < x ≤ π {\displaystyle 0<x\leq \pi } . These properties were identified by Niels Nielsen. x = π 2 4 − 2 ∑ n = 1 , 3 , . . . 
∞ J 0 ( n x ) n 2 , 0 < x < π . {\displaystyle x={\frac {\pi ^{2}}{4}}-2\sum _{n=1,3,...}^{\infty }{\frac {J_{0}(nx)}{n^{2}}},\quad 0<x<\pi .} x 2 = 2 π 2 3 + 8 ∑ n = 1 ∞ ( − 1 ) n n 2 J 0 ( n x ) , − π < x < π . {\displaystyle x^{2}={\frac {2\pi ^{2}}{3}}+8\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}}}J_{0}(nx),\quad -\pi <x<\pi .} 1 x + ∑ m = 1 k 2 x 2 − 4 m 2 π 2 = 1 2 + ∑ n = 1 ∞ J 0 ( n x ) , 2 k π < x < 2 ( k + 1 ) π . {\displaystyle {\frac {1}{x}}+\sum _{m=1}^{k}{\frac {2}{\sqrt {x^{2}-4m^{2}\pi ^{2}}}}={\frac {1}{2}}+\sum _{n=1}^{\infty }J_{0}(nx),\quad 2k\pi <x<2(k+1)\pi .} If ( r , z ) {\displaystyle (r,z)} are the cylindrical polar coordinates, then the series 1 + ∑ n = 1 ∞ e − n z J 0 ( n r ) {\displaystyle 1+\sum _{n=1}^{\infty }e^{-nz}J_{0}(nr)} is a solution of Laplace equation for z > 0 {\displaystyle z>0} . == See also == Kapteyn series == References ==
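The expansion x = π²/4 − 2 Σ_{n odd} J₀(nx)/n² listed above converges fast enough (the terms decay roughly like n^(−5/2)) to check numerically. The sketch below evaluates J₀ from its integral representation J₀(z) = (1/π)∫₀^π cos(z sin θ) dθ by the midpoint rule, rather than assuming a special-function library is available:

```python
import numpy as np

def bessel_j0(z, m=4096):
    """J_0(z) via its integral representation, midpoint rule on [0, pi].

    The integrand is smooth and pi-periodic in theta, so the midpoint rule
    converges very rapidly once m exceeds the oscillation count ~ max|z|.
    """
    z = np.atleast_1d(np.asarray(z, dtype=float))
    theta = (np.arange(m) + 0.5) * np.pi / m
    return np.cos(np.outer(z, np.sin(theta))).mean(axis=1)

x = 1.0                      # any point in (0, pi)
n = np.arange(1, 400, 2)     # odd n = 1, 3, ..., 399
partial_sum = np.pi ** 2 / 4 - 2.0 * np.sum(bessel_j0(n * x) / n ** 2)
# The Schloemilch expansion reproduces x on (0, pi).
assert abs(partial_sum - x) < 1e-3
```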
|
Wikipedia:Schmidt decomposition#0
|
In linear algebra, the Schmidt decomposition (named after its originator Erhard Schmidt) refers to a particular way of expressing a vector in the tensor product of two inner product spaces. It has numerous applications in quantum information theory, for example in entanglement characterization and in state purification, and plasticity. == Theorem == Let H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} be Hilbert spaces of dimensions n and m respectively. Assume n ≥ m {\displaystyle n\geq m} . For any vector w {\displaystyle w} in the tensor product H 1 ⊗ H 2 {\displaystyle H_{1}\otimes H_{2}} , there exist orthonormal sets { u 1 , … , u m } ⊂ H 1 {\displaystyle \{u_{1},\ldots ,u_{m}\}\subset H_{1}} and { v 1 , … , v m } ⊂ H 2 {\displaystyle \{v_{1},\ldots ,v_{m}\}\subset H_{2}} such that w = ∑ i = 1 m α i u i ⊗ v i {\textstyle w=\sum _{i=1}^{m}\alpha _{i}u_{i}\otimes v_{i}} , where the scalars α i {\displaystyle \alpha _{i}} are real, non-negative, and unique up to re-ordering. === Proof === The Schmidt decomposition is essentially a restatement of the singular value decomposition in a different context. Fix orthonormal bases { e 1 , … , e n } ⊂ H 1 {\displaystyle \{e_{1},\ldots ,e_{n}\}\subset H_{1}} and { f 1 , … , f m } ⊂ H 2 {\displaystyle \{f_{1},\ldots ,f_{m}\}\subset H_{2}} . We can identify an elementary tensor e i ⊗ f j {\displaystyle e_{i}\otimes f_{j}} with the matrix e i f j T {\displaystyle e_{i}f_{j}^{\mathsf {T}}} , where f j T {\displaystyle f_{j}^{\mathsf {T}}} is the transpose of f j {\displaystyle f_{j}} . A general element of the tensor product w = ∑ 1 ≤ i ≤ n , 1 ≤ j ≤ m β i j e i ⊗ f j {\displaystyle w=\sum _{1\leq i\leq n,1\leq j\leq m}\beta _{ij}e_{i}\otimes f_{j}} can then be viewed as the n × m matrix M w = ( β i j ) . {\displaystyle \;M_{w}=(\beta _{ij}).} By the singular value decomposition, there exist an n × n unitary U, m × m unitary V, and a positive semidefinite diagonal m × m matrix Σ such that M w = U [ Σ 0 ] V ∗ . 
{\displaystyle M_{w}=U{\begin{bmatrix}\Sigma \\0\end{bmatrix}}V^{*}.} Write U = [ U 1 U 2 ] {\displaystyle U={\begin{bmatrix}U_{1}&U_{2}\end{bmatrix}}} where U 1 {\displaystyle U_{1}} is n × m and we have M w = U 1 Σ V ∗ . {\displaystyle \;M_{w}=U_{1}\Sigma V^{*}.} Let { u 1 , … , u m } {\displaystyle \{u_{1},\ldots ,u_{m}\}} be the m column vectors of U 1 {\displaystyle U_{1}} , { v 1 , … , v m } {\displaystyle \{v_{1},\ldots ,v_{m}\}} the column vectors of V ¯ {\displaystyle {\overline {V}}} , and α 1 , … , α m {\displaystyle \alpha _{1},\ldots ,\alpha _{m}} the diagonal elements of Σ. The previous expression is then M w = ∑ k = 1 m α k u k v k T , {\displaystyle M_{w}=\sum _{k=1}^{m}\alpha _{k}u_{k}v_{k}^{\mathsf {T}},} Then w = ∑ k = 1 m α k u k ⊗ v k , {\displaystyle w=\sum _{k=1}^{m}\alpha _{k}u_{k}\otimes v_{k},} which proves the claim. == Some observations == Some properties of the Schmidt decomposition are of physical interest. === Spectrum of reduced states === Consider a vector w {\displaystyle w} of the tensor product H 1 ⊗ H 2 {\displaystyle H_{1}\otimes H_{2}} in the form of Schmidt decomposition w = ∑ i = 1 m α i u i ⊗ v i . {\displaystyle w=\sum _{i=1}^{m}\alpha _{i}u_{i}\otimes v_{i}.} Form the rank 1 matrix ρ = w w ∗ {\displaystyle \rho =ww^{*}} . Then the partial trace of ρ {\displaystyle \rho } , with respect to either system A or B, is a diagonal matrix whose non-zero diagonal elements are | α i | 2 {\displaystyle |\alpha _{i}|^{2}} . In other words, the Schmidt decomposition shows that the reduced states of ρ {\displaystyle \rho } on either subsystem have the same spectrum. === Schmidt rank and entanglement === The strictly positive values α i {\displaystyle \alpha _{i}} in the Schmidt decomposition of w {\displaystyle w} are its Schmidt coefficients, or Schmidt numbers. The total number of Schmidt coefficients of w {\displaystyle w} , counted with multiplicity, is called its Schmidt rank. 
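Computing a Schmidt decomposition amounts to reshaping the coefficient vector into the matrix M_w and taking its singular value decomposition, exactly as in the proof above. A numpy sketch; the example states are arbitrary illustrations:

```python
import numpy as np

def schmidt_coefficients(w, n, m):
    """Schmidt coefficients of a vector w in C^n tensor C^m (n >= m)."""
    M = np.asarray(w, dtype=complex).reshape(n, m)   # beta_ij as an n x m matrix
    return np.linalg.svd(M, compute_uv=False)        # singular values = alpha_i

# w = 0.8 |0,0> + 0.6 |1,1> in C^2 tensor C^2 (basis order |00>,|01>,|10>,|11>).
w = [0.8, 0.0, 0.0, 0.6]
alphas = schmidt_coefficients(w, 2, 2)
assert np.allclose(alphas, [0.8, 0.6])   # two nonzero coefficients: Schmidt rank 2

# A product state (|0> + |1>)/sqrt(2) tensor |0> has Schmidt rank 1.
s = 1.0 / np.sqrt(2.0)
prod = [s, 0.0, s, 0.0]
assert np.sum(schmidt_coefficients(prod, 2, 2) > 1e-12) == 1
```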
If w {\displaystyle w} can be expressed as a product u ⊗ v {\displaystyle u\otimes v} then w {\displaystyle w} is called a separable state. Otherwise, w {\displaystyle w} is said to be an entangled state. From the Schmidt decomposition, we can see that w {\displaystyle w} is entangled if and only if w {\displaystyle w} has Schmidt rank strictly greater than 1. Therefore, two subsystems that partition a pure state are entangled if and only if their reduced states are mixed states. === Von Neumann entropy === A consequence of the above comments is that, for pure states, the von Neumann entropy of the reduced states is a well-defined measure of entanglement. For the von Neumann entropy of both reduced states of ρ {\displaystyle \rho } is − ∑ i | α i | 2 log ( | α i | 2 ) {\textstyle -\sum _{i}|\alpha _{i}|^{2}\log \left(|\alpha _{i}|^{2}\right)} , and this is zero if and only if ρ {\displaystyle \rho } is a product state (not entangled). == Schmidt-rank vector == The Schmidt rank is defined for bipartite systems, namely quantum states | ψ ⟩ ∈ H A ⊗ H B {\displaystyle |\psi \rangle \in H_{A}\otimes H_{B}} The concept of Schmidt rank can be extended to quantum systems made up of more than two subsystems. 
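As a concrete bipartite illustration of these criteria (the Bell state used here is a standard textbook example, not taken from the article), the Schmidt coefficients of a state immediately give its Schmidt rank and the von Neumann entropy of its reduced states:

```python
import numpy as np

# Bell state (|00> + |11>)/√2 on C^2 ⊗ C^2, written as its 2×2 coefficient matrix
M = np.eye(2) / np.sqrt(2)

alpha = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients
p = alpha**2                                 # common spectrum of both reduced states
rank = np.count_nonzero(alpha > 1e-12)       # Schmidt rank
entropy = -np.sum(p * np.log(p))             # von Neumann entropy of either reduced state

print(rank)      # 2 > 1, so the state is entangled
print(entropy)   # log 2, the maximum for a pair of qubits
```

A product state such as |00⟩ would instead give a single Schmidt coefficient α = 1, rank 1, and entropy 0.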
Consider the tripartite quantum system: | ψ ⟩ ∈ H A ⊗ H B ⊗ H C {\displaystyle |\psi \rangle \in H_{A}\otimes H_{B}\otimes H_{C}} There are three ways to reduce this to a bipartite system by performing the partial trace with respect to H A , H B {\displaystyle H_{A},H_{B}} or H C {\displaystyle H_{C}} { ρ ^ A = T r A ( | ψ ⟩ ⟨ ψ | ) ρ ^ B = T r B ( | ψ ⟩ ⟨ ψ | ) ρ ^ C = T r C ( | ψ ⟩ ⟨ ψ | ) {\displaystyle {\begin{cases}{\hat {\rho }}_{A}=Tr_{A}(|\psi \rangle \langle \psi |)\\{\hat {\rho }}_{B}=Tr_{B}(|\psi \rangle \langle \psi |)\\{\hat {\rho }}_{C}=Tr_{C}(|\psi \rangle \langle \psi |)\end{cases}}} Each of the systems obtained is a bipartite system and therefore can be characterized by one number (its Schmidt rank), respectively r A , r B {\displaystyle r_{A},r_{B}} and r C {\displaystyle r_{C}} . These numbers capture the "amount of entanglement" in the bipartite system when respectively A, B or C are discarded. For these reasons the tripartite system can be described by a vector, namely the Schmidt-rank vector r → = ( r A , r B , r C ) {\displaystyle {\vec {r}}=(r_{A},r_{B},r_{C})} === Multipartite systems === The concept of Schmidt-rank vector can be likewise extended to systems made up of more than three subsystems through the use of tensors. === Example === Take the tripartite quantum state | ψ 4 , 2 , 2 ⟩ = 1 2 ( | 0 , 0 , 0 ⟩ + | 1 , 0 , 1 ⟩ + | 2 , 1 , 0 ⟩ + | 3 , 1 , 1 ⟩ ) {\displaystyle |\psi _{4,2,2}\rangle ={\frac {1}{2}}{\big (}|0,0,0\rangle +|1,0,1\rangle +|2,1,0\rangle +|3,1,1\rangle {\big )}} This kind of system is made possible by encoding the value of a qudit into the orbital angular momentum (OAM) of a photon rather than its spin, since the latter can only take two values. The Schmidt-rank vector for this quantum state is ( 4 , 2 , 2 ) {\displaystyle (4,2,2)} . == See also == Singular value decomposition Purification of quantum state == References == == Further reading == Pathak, Anirban (2013). 
Elements of Quantum Computation and Quantum Communication. London: Taylor & Francis. pp. 92–98. ISBN 978-1-4665-1791-2.
|
Wikipedia:School Mathematics Project#0
|
The School Mathematics Project arose in the United Kingdom as part of the new mathematics educational movement of the 1960s. It is a developer of mathematics textbooks for secondary schools, formerly based in Southampton in the UK. Now generally known as SMP, it began as a research project inspired by a 1961 conference chaired by Bryan Thwaites at the University of Southampton, which itself was precipitated by calls to reform mathematics teaching in the wake of the Sputnik launch by the Soviet Union, the same circumstances that prompted the wider New Math movement. It maintained close ties with the former Collaborative Group for Research in Mathematics Education at the university. Instead of concentrating on 'traditional' areas such as arithmetic and geometry, SMP emphasised subjects such as set theory, graph theory and logic, non-Cartesian co-ordinate systems, matrix mathematics, affine transforms, Euclidean vectors, and non-decimal number systems. == Course books == === SMP, Book 1 === This was published in 1965. It was aimed at entry-level pupils at secondary school, and was the first book in a series of four preparing pupils for the Elementary Mathematics Examination at 'O' level. === SMP, Book 3 === The computer paper tape motif on early educational material reads "THE SCHOOL MATHEMATICS PROJECT DIRECTED BY BRYAN THWAITES". [punched paper tape hole pattern] The code for this tape is introduced in Book 3 as part of the notional computer system now described. == Simpol programming language == The Simpol language was devised by The School Mathematics Project in the 1960s to introduce secondary pupils (typically aged 13) to what was then the novel concept of computer programming.
It runs on the fictitious Simon computer. An interpreter for the Simpol language (that will run on a present-day PC) can be downloaded from the University of Southampton, at their SMP 2.0 website. == Joint Schools Project (JSP) == The Joint Schools Project in West Africa was one of the offshoots of SMP. Its originators were Michael Mitchelmore and Brian Radnor. Starting at Achimota College in 1966, it aimed to introduce SMP ideas within an African curriculum. Later, when Mitchelmore moved to Jamaica, a West Indian version of JSP was developed. == References == == External links == Manning, Godfrey (n.d.). "The simpol interpreter, manual and simpol code can be downloaded here". Simpol – Towards a School Mathematics Project 2.0. Southampton Education School; University of Southampton.
|
Wikipedia:Schreier coset graph#0
|
In the area of mathematics called combinatorial group theory, the Schreier coset graph is a graph associated with a group G, a generating set of G, and a subgroup of G. The Schreier graph encodes the abstract structure of the group modulo an equivalence relation formed by the cosets of the subgroup. The graph is named after Otto Schreier, who used the term "Nebengruppenbild". An equivalent definition was made in an early paper of Todd and Coxeter. == Description == Given a group G, a subgroup H ≤ G, and a generating set S = {si : i in I} of G, the Schreier graph Sch(G, H, S) is a graph whose vertices are the right cosets Hg = {hg : h in H} for g in G and whose edges are of the form (Hg, Hgs) for g in G and s in S. More generally, if X is any G-set, one can define a Schreier graph Sch(G, X, S) of the action of G on X (with respect to the generating set S): its vertices are the elements of X, and its edges are of the form (x, xs) for x in X and s in S. This includes the original Schreier coset graph definition, as H\G is naturally a G-set with respect to multiplication from the right. From an algebraic-topological perspective, the graph Sch(G, X, S) has no distinguished vertex, whereas Sch(G, H, S) has the distinguished vertex H, and is thus a pointed graph. The Cayley graph of the group G itself is the Schreier coset graph for H = {1G} (Gross & Tucker 1987, p. 73). A spanning tree of a Schreier coset graph corresponds to a Schreier transversal, as in Schreier's subgroup lemma (Conder 2003). The book "Categories and Groupoids" listed below relates this to the theory of covering morphisms of groupoids. A subgroup H of a group G determines a covering morphism of groupoids p : K → G {\displaystyle p:K\rightarrow G} and if S is a generating set for G then its inverse image under p is the Schreier graph of (G, S). == Applications == The graph is useful to understand coset enumeration and the Todd–Coxeter algorithm.
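For a small concrete case, the definition can be carried out directly. The following sketch (the group G = S₃, the subgroup H generated by a transposition, and the generating set are illustrative choices) builds the vertices and edges of Sch(G, H, S):

```python
from itertools import permutations

def compose(p, q):
    # (p∘q)(i) = p(q(i)); permutations of {0, 1, 2} stored as tuples of images
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(3)))     # the symmetric group S_3
s = (1, 0, 2)                        # a transposition
t = (1, 2, 0)                        # a 3-cycle; together s and t generate S_3
H = [(0, 1, 2), s]                   # the subgroup generated by s

def coset(g):
    # right coset Hg, represented as a frozenset so equal cosets compare equal
    return frozenset(compose(h, g) for h in H)

vertices = {coset(g) for g in G}
edges = {(coset(g), coset(compose(g, gen)))   # edges (Hg, Hg·gen) for gen in S
         for g in G for gen in (s, t)}

print(len(vertices))                 # the index [G : H] = 3 cosets
```

Taking H to be the trivial subgroup instead recovers the Cayley graph of S₃, as noted above.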
Coset graphs can be used to form large permutation representations of groups and were used by Graham Higman to show that the alternating groups of large enough degree are Hurwitz groups (Conder 2003). Stallings' core graphs are retracts of Schreier graphs of free groups, and are an essential tool for computing with subgroups of a free group. Every vertex-transitive graph is a coset graph. == References == Magnus, W.; Karrass, A.; Solitar, D. (1976), Combinatorial Group Theory, Dover Conder, Marston (2003), "Group actions on graphs, maps and surfaces with maximum symmetry", Groups St. Andrews 2001 in Oxford. Vol. I, London Math. Soc. Lecture Note Ser., vol. 304, Cambridge University Press, pp. 63–91, MR 2051519 Gross, Jonathan L.; Tucker, Thomas W. (1987), Topological graph theory, Wiley-Interscience Series in Discrete Mathematics and Optimization, New York: John Wiley & Sons, ISBN 978-0-471-04926-5, MR 0898434 D'Angeli, Daniele; Donno, Alfredo; Matter, Michel; Nagnibeda, Tatiana, Schreier graphs of the Basilica group Higgins, Philip J. (1971), Categories and Groupoids, van Nostrand, New York, Lecture Notes; republished as TAC Reprint, 2005
|
Wikipedia:Schröder's equation#0
|
Schröder's equation, named after Ernst Schröder, is a functional equation with one independent variable: given the function h, find the function Ψ such that Ψ(h(x)) = s Ψ(x). Schröder's equation is an eigenvalue equation for the composition operator Ch that sends a function f to f(h(.)). If a is a fixed point of h, meaning h(a) = a, then either Ψ(a) = 0 (or ∞) or s = 1. Thus, provided that Ψ(a) is finite and Ψ′(a) does not vanish or diverge, the eigenvalue s is given by s = h′(a). == Functional significance == For a = 0, if h is analytic on the unit disk, fixes 0, and 0 < |h′(0)| < 1, then Gabriel Koenigs showed in 1884 that there is an analytic (non-trivial) Ψ satisfying Schröder's equation. This is one of the first steps in a long line of theorems fruitful for understanding composition operators on analytic function spaces, cf. Koenigs function. Equations such as Schröder's are suitable for encoding self-similarity, and have thus been extensively utilized in studies of nonlinear dynamics (often referred to colloquially as chaos theory). They are also used in studies of turbulence, as well as the renormalization group. An equivalent transpose form of Schröder's equation for the inverse Φ = Ψ−1 of Schröder's conjugacy function is h(Φ(y)) = Φ(sy). The change of variables α(x) = log(Ψ(x))/log(s) (the Abel function) further converts Schröder's equation to the older Abel equation, α(h(x)) = α(x) + 1. Similarly, the change of variables Ψ(x) = log(φ(x)) converts Schröder's equation to Böttcher's equation, φ(h(x)) = (φ(x))^s. Moreover, for the velocity, β(x) = Ψ/Ψ′, Julia's equation, β(f(x)) = f′(x)β(x), holds. The n-th power of a solution of Schröder's equation provides a solution of Schröder's equation with eigenvalue s^n, instead. In the same vein, for an invertible solution Ψ(x) of Schröder's equation, the (non-invertible) function Ψ(x) k(log Ψ(x)) is also a solution, for any periodic function k(x) with period log(s). All solutions of Schröder's equation are related in this manner.
== Solutions == Schröder's equation was solved analytically if a is an attracting (but not superattracting) fixed point, that is 0 < |h′(a)| < 1 by Gabriel Koenigs (1884). In the case of a superattracting fixed point, |h′(a)| = 0, Schröder's equation is unwieldy, and had best be transformed to Böttcher's equation. There are a good number of particular solutions dating back to Schröder's original 1870 paper. The series expansion around a fixed point and the relevant convergence properties of the solution for the resulting orbit and its analyticity properties are cogently summarized by Szekeres. Several of the solutions are furnished in terms of asymptotic series, cf. Carleman matrix. == Applications == It is used to analyse discrete dynamical systems by finding a new coordinate system in which the system (orbit) generated by h(x) looks simpler, a mere dilation. More specifically, a system for which a discrete unit time step amounts to x → h(x), can have its smooth orbit (or flow) reconstructed from the solution of the above Schröder's equation, its conjugacy equation. That is, h(x) = Ψ−1(s Ψ(x)) ≡ h1(x). In general, all of its functional iterates (its regular iteration group, see iterated function) are provided by the orbit for t real — not necessarily positive or integer. (Thus a full continuous group.) The set of hn(x), i.e., of all positive integer iterates of h(x) (semigroup) is called the splinter (or Picard sequence) of h(x). However, all iterates (fractional, infinitesimal, or negative) of h(x) are likewise specified through the coordinate transformation Ψ(x) determined to solve Schröder's equation: a holographic continuous interpolation of the initial discrete recursion x → h(x) has been constructed; in effect, the entire orbit. For instance, the functional square root is h1/2(x) = Ψ−1(s1/2 Ψ(x)), so that h1/2(h1/2(x)) = h(x), and so on. 
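A short numerical check of this construction, using the Beverton–Holt map h(x) = x/(2 − x) with Ψ(x) = x/(1 − x) and eigenvalue s = 1/2 (a case solved in closed form in the examples that follow), confirms that the half-iterate composes to h:

```python
def h(x):
    # Beverton–Holt step; fixed point a = 0 with s = h'(0) = 1/2
    return x / (2 - x)

def psi(x):
    # Schröder conjugacy function Ψ(x) = x/(1 − x)
    return x / (1 - x)

def psi_inv(y):
    return y / (1 + y)

def h_t(x, t):
    # continuous iterate h_t = Ψ⁻¹(s^t Ψ(x)) with s = 1/2
    return psi_inv(0.5 ** t * psi(x))

x = 0.3
print(h_t(h_t(x, 0.5), 0.5), h(x))   # the half-iterate composed with itself reproduces h
```

The same three lines give any real-t iterate: negative t runs the recursion backwards, and t = 1 recovers h itself.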
For example, special cases of the logistic map such as the chaotic case h(x) = 4x(1 − x) were already worked out by Schröder in his original article (p. 306), Ψ(x) = (arcsin √x)^2, s = 4, and hence ht(x) = sin^2(2^t arcsin √x). In fact, this solution is seen to result as motion dictated by a sequence of switchback potentials, V(x) ∝ x(x − 1) (nπ + arcsin √x)^2, a generic feature of continuous iterates effected by Schröder's equation. A nonchaotic case he also illustrated with his method, h(x) = 2x(1 − x), yields Ψ(x) = −1/2 ln(1 − 2x), and hence ht(x) = −1/2((1 − 2x)^(2^t) − 1). Likewise, for the Beverton–Holt model, h(x) = x/(2 − x), one readily finds Ψ(x) = x/(1 − x), so that h t ( x ) = Ψ − 1 ( 2 − t Ψ ( x ) ) = x 2 t + x ( 1 − 2 t ) . {\displaystyle h_{t}(x)=\Psi ^{-1}{\big (}2^{-t}\Psi (x){\big )}={\frac {x}{2^{t}+x(1-2^{t})}}.} == See also == Böttcher's equation == References ==
|
Wikipedia:Schubert polynomial#0
|
In mathematics, Schubert polynomials are generalizations of Schur polynomials that represent cohomology classes of Schubert cycles in flag varieties. They were introduced by Lascoux & Schützenberger (1982) and are named after Hermann Schubert. == Background == Lascoux (1995) described the history of Schubert polynomials. The Schubert polynomials S w {\displaystyle {\mathfrak {S}}_{w}} are polynomials in the variables x 1 , x 2 , … {\displaystyle x_{1},x_{2},\ldots } depending on an element w {\displaystyle w} of the infinite symmetric group S ∞ {\displaystyle S_{\infty }} of all permutations of N {\displaystyle \mathbb {N} } fixing all but a finite number of elements. They form a basis for the polynomial ring Z [ x 1 , x 2 , … ] {\displaystyle \mathbb {Z} [x_{1},x_{2},\ldots ]} in infinitely many variables. The cohomology of the flag manifold Fl ( m ) {\displaystyle {\text{Fl}}(m)} is Z [ x 1 , x 2 , … , x m ] / I , {\displaystyle \mathbb {Z} [x_{1},x_{2},\ldots ,x_{m}]/I,} where I {\displaystyle I} is the ideal generated by homogeneous symmetric functions of positive degree. The Schubert polynomial S w {\displaystyle {\mathfrak {S}}_{w}} is the unique homogeneous polynomial of degree ℓ ( w ) {\displaystyle \ell (w)} representing the Schubert cycle of w {\displaystyle w} in the cohomology of the flag manifold Fl ( m ) {\displaystyle {\text{Fl}}(m)} for all sufficiently large m . 
{\displaystyle m.} == Properties == If w 0 {\displaystyle w_{0}} is the permutation of longest length in S n {\displaystyle S_{n}} then S w 0 = x 1 n − 1 x 2 n − 2 ⋯ x n − 1 1 {\displaystyle {\mathfrak {S}}_{w_{0}}=x_{1}^{n-1}x_{2}^{n-2}\cdots x_{n-1}^{1}} ∂ i S w = S w s i {\displaystyle \partial _{i}{\mathfrak {S}}_{w}={\mathfrak {S}}_{ws_{i}}} if w ( i ) > w ( i + 1 ) {\displaystyle w(i)>w(i+1)} , where s i {\displaystyle s_{i}} is the transposition ( i , i + 1 ) {\displaystyle (i,i+1)} and where ∂ i {\displaystyle \partial _{i}} is the divided difference operator taking P {\displaystyle P} to ( P − s i P ) / ( x i − x i + 1 ) {\displaystyle (P-s_{i}P)/(x_{i}-x_{i+1})} . Schubert polynomials can be calculated recursively from these two properties. In particular, this implies that S w = ∂ w − 1 w 0 x 1 n − 1 x 2 n − 2 ⋯ x n − 1 1 {\displaystyle {\mathfrak {S}}_{w}=\partial _{w^{-1}w_{0}}x_{1}^{n-1}x_{2}^{n-2}\cdots x_{n-1}^{1}} . Other properties are S i d = 1 {\displaystyle {\mathfrak {S}}_{id}=1} If s i {\displaystyle s_{i}} is the transposition ( i , i + 1 ) {\displaystyle (i,i+1)} , then S s i = x 1 + ⋯ + x i {\displaystyle {\mathfrak {S}}_{s_{i}}=x_{1}+\cdots +x_{i}} . If w ( i ) < w ( i + 1 ) {\displaystyle w(i)<w(i+1)} for all i ≠ r {\displaystyle i\neq r} , then S w {\displaystyle {\mathfrak {S}}_{w}} is the Schur polynomial s λ ( x 1 , … , x r ) {\displaystyle s_{\lambda }(x_{1},\ldots ,x_{r})} where λ {\displaystyle \lambda } is the partition ( w ( r ) − r , … , w ( 2 ) − 2 , w ( 1 ) − 1 ) {\displaystyle (w(r)-r,\ldots ,w(2)-2,w(1)-1)} . In particular all Schur polynomials (of a finite number of variables) are Schubert polynomials. Schubert polynomials have positive coefficients. A conjectural rule for their coefficients was put forth by Richard P. Stanley, and proven in two papers, one by Sergey Fomin and Stanley and one by Sara Billey, William Jockusch, and Stanley. 
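The recursion above is straightforward to run in a computer algebra system. A sketch with SymPy for n = 3 (the variable names and the helper function are illustrative):

```python
import sympy as sp

x = sp.symbols('x1:4')   # x1, x2, x3

def dd(p, i):
    # divided difference ∂_i P = (P − s_i P)/(x_i − x_{i+1}),
    # where s_i swaps x_i and x_{i+1} (i is 0-based here)
    swapped = p.subs({x[i]: x[i + 1], x[i + 1]: x[i]}, simultaneous=True)
    return sp.cancel((p - swapped) / (x[i] - x[i + 1]))

# staircase monomial: S_{w_0} = x1^2 x2 for the longest element w_0 of S_3
S_w0 = x[0]**2 * x[1]

print(dd(S_w0, 0))          # ∂_1 S_{w_0} = x1*x2
print(dd(S_w0, 1))          # ∂_2 S_{w_0} = x1**2
print(dd(dd(S_w0, 1), 0))   # x1 + x2, matching the stated formula for S_{s_2}
print(dd(dd(S_w0, 0), 1))   # x1, matching S_{s_1}
```

Each application of ∂_i lowers the degree by one, so repeatedly applying divided differences to the staircase monomial enumerates all Schubert polynomials for S₃, ending at S_id = 1.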
The Schubert polynomials can be seen as a generating function over certain combinatorial objects called pipe dreams or rc-graphs. These are in bijection with reduced Kogan faces, (introduced in the PhD thesis of Mikhail Kogan) which are special faces of the Gelfand-Tsetlin polytope. Schubert polynomials also can be written as a weighted sum of objects called bumpless pipe dreams. As an example S 51423 ( x ) = x 1 x 3 2 x 4 x 2 2 + x 1 2 x 3 x 4 x 2 2 + x 1 2 x 3 2 x 4 x 2 . {\displaystyle {\mathfrak {S}}_{51423}(x)=x_{1}x_{3}^{2}x_{4}x_{2}^{2}+x_{1}^{2}x_{3}x_{4}x_{2}^{2}+x_{1}^{2}x_{3}^{2}x_{4}x_{2}.} == Multiplicative structure constants == Since the Schubert polynomials form a Z {\displaystyle \mathbb {Z} } -basis, there are unique coefficients c β γ α {\displaystyle c_{\beta \gamma }^{\alpha }} such that S β S γ = ∑ α c β γ α S α . {\displaystyle {\mathfrak {S}}_{\beta }{\mathfrak {S}}_{\gamma }=\sum _{\alpha }c_{\beta \gamma }^{\alpha }{\mathfrak {S}}_{\alpha }.} These can be seen as a generalization of the Littlewood−Richardson coefficients described by the Littlewood–Richardson rule. For algebro-geometric reasons (Kleiman's transversality theorem of 1974), these coefficients are non-negative integers and it is an outstanding problem in representation theory and combinatorics to give a combinatorial rule for these numbers. == Double Schubert polynomials == Double Schubert polynomials S w ( x 1 , x 2 , … , y 1 , y 2 , … ) {\displaystyle {\mathfrak {S}}_{w}(x_{1},x_{2},\ldots ,y_{1},y_{2},\ldots )} are polynomials in two infinite sets of variables, parameterized by an element w of the infinite symmetric group, that becomes the usual Schubert polynomials when all the variables y i {\displaystyle y_{i}} are 0 {\displaystyle 0} . 
The double Schubert polynomial S w ( x 1 , x 2 , … , y 1 , y 2 , … ) {\displaystyle {\mathfrak {S}}_{w}(x_{1},x_{2},\ldots ,y_{1},y_{2},\ldots )} are characterized by the properties S w ( x 1 , x 2 , … , y 1 , y 2 , … ) = ∏ i + j ≤ n ( x i − y j ) {\displaystyle {\mathfrak {S}}_{w}(x_{1},x_{2},\ldots ,y_{1},y_{2},\ldots )=\prod \limits _{i+j\leq n}(x_{i}-y_{j})} when w {\displaystyle w} is the permutation on 1 , … , n {\displaystyle 1,\ldots ,n} of longest length. ∂ i S w = S w s i {\displaystyle \partial _{i}{\mathfrak {S}}_{w}={\mathfrak {S}}_{ws_{i}}} if w ( i ) > w ( i + 1 ) . {\displaystyle w(i)>w(i+1).} The double Schubert polynomials can also be defined as S w ( x , y ) = ∑ w = v − 1 u and ℓ ( w ) = ℓ ( u ) + ℓ ( v ) S u ( x ) S v ( − y ) . {\displaystyle {\mathfrak {S}}_{w}(x,y)=\sum _{w=v^{-1}u{\text{ and }}\ell (w)=\ell (u)+\ell (v)}{\mathfrak {S}}_{u}(x){\mathfrak {S}}_{v}(-y).} == Quantum Schubert polynomials == Fomin, Gelfand & Postnikov (1997) introduced quantum Schubert polynomials, that have the same relation to the (small) quantum cohomology of flag manifolds that ordinary Schubert polynomials have to the ordinary cohomology. == Universal Schubert polynomials == Fulton (1999) introduced universal Schubert polynomials, that generalize classical and quantum Schubert polynomials. He also described universal double Schubert polynomials generalizing double Schubert polynomials. == See also == Stanley symmetric function Kostant polynomial Monk's formula gives the product of a linear Schubert polynomial and a Schubert polynomial. nil-Coxeter algebra == References == Bernstein, I. N.; Gelfand, I. M.; Gelfand, S. I. (1973), "Schubert cells, and the cohomology of the spaces G/P", Russian Math. 
Surveys, 28 (3): 1–26, Bibcode:1973RuMaS..28....1B, doi:10.1070/RM1973v028n03ABEH001557, S2CID 800432 Fomin, Sergey; Gelfand, Sergei; Postnikov, Alexander (1997), "Quantum Schubert polynomials", Journal of the American Mathematical Society, 10 (3): 565–596, doi:10.1090/S0894-0347-97-00237-3, ISSN 0894-0347, MR 1431829 Fulton, William (1992), "Flags, Schubert polynomials, degeneracy loci, and determinantal formulas", Duke Mathematical Journal, 65 (3): 381–420, doi:10.1215/S0012-7094-92-06516-1, ISSN 0012-7094, MR 1154177 Fulton, William (1997), Young tableaux, London Mathematical Society Student Texts, vol. 35, Cambridge University Press, ISBN 978-0-521-56144-0, MR 1464693 Fulton, William (1999), "Universal Schubert polynomials", Duke Mathematical Journal, 96 (3): 575–594, arXiv:alg-geom/9702012, doi:10.1215/S0012-7094-99-09618-7, ISSN 0012-7094, MR 1671215, S2CID 10546579 Lascoux, Alain (1995), "Polynômes de Schubert: une approche historique", Discrete Mathematics, 139 (1): 303–317, doi:10.1016/0012-365X(95)93984-D, ISSN 0012-365X, MR 1336845 Lascoux, Alain; Schützenberger, Marcel-Paul (1982), "Polynômes de Schubert", Comptes Rendus de l'Académie des Sciences, Série I, 294 (13): 447–450, ISSN 0249-6291, MR 0660739 Lascoux, Alain; Schützenberger, Marcel-Paul (1985), "Schubert polynomials and the Littlewood-Richardson rule", Letters in Mathematical Physics. A Journal for the Rapid Dissemination of Short Contributions in the Field of Mathematical Physics, 10 (2): 111–124, Bibcode:1985LMaPh..10..111L, doi:10.1007/BF00398147, ISSN 0377-9017, MR 0815233, S2CID 119654656 Macdonald, I. G. (1991), "Schubert polynomials", in Keedwell, A. D. (ed.), Surveys in combinatorics, 1991 (Guildford, 1991), London Math. Soc. Lecture Note Ser., vol. 166, Cambridge University Press, pp. 73–99, ISBN 978-0-521-40766-3, MR 1161461 Macdonald, I.G. (1991b), Notes on Schubert polynomials, Publications du Laboratoire de combinatoire et d'informatique mathématique, vol. 
6, Laboratoire de combinatoire et d'informatique mathématique (LACIM), Université du Québec a Montréal, ISBN 978-2-89276-086-6 Manivel, Laurent (2001) [1998], Symmetric functions, Schubert polynomials and degeneracy loci, SMF/AMS Texts and Monographs, vol. 6, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2154-1, MR 1852463 Sottile, Frank (2001) [1994], "Schubert polynomials", Encyclopedia of Mathematics, EMS Press
|
Wikipedia:Schur algebra#0
|
In mathematics, Schur algebras, named after Issai Schur, are certain finite-dimensional algebras closely associated with Schur–Weyl duality between general linear and symmetric groups. They are used to relate the representation theories of those two groups. Their use was promoted by the influential monograph of J. A. Green first published in 1980. The name "Schur algebra" is due to Green. In the modular case (over infinite fields of positive characteristic) Schur algebras were used by Gordon James and Karin Erdmann to show that the (still open) problems of computing decomposition numbers for general linear groups and symmetric groups are actually equivalent. Schur algebras were used by Friedlander and Suslin to prove finite generation of cohomology of finite group schemes. == Construction == The Schur algebra S k ( n , r ) {\displaystyle S_{k}(n,r)} can be defined for any commutative ring k {\displaystyle k} and integers n , r ≥ 0 {\displaystyle n,r\geq 0} . Consider the algebra k [ x i j ] {\displaystyle k[x_{ij}]} of polynomials (with coefficients in k {\displaystyle k} ) in n 2 {\displaystyle n^{2}} commuting variables x i j {\displaystyle x_{ij}} , 1 ≤ i, j ≤ n {\displaystyle n} . Denote by A k ( n , r ) {\displaystyle A_{k}(n,r)} the homogeneous polynomials of degree r {\displaystyle r} . Elements of A k ( n , r ) {\displaystyle A_{k}(n,r)} are k-linear combinations of monomials formed by multiplying together r {\displaystyle r} of the generators x i j {\displaystyle x_{ij}} (allowing repetition). Thus k [ x i j ] = ⨁ r ≥ 0 A k ( n , r ) . 
{\displaystyle k[x_{ij}]=\bigoplus _{r\geq 0}A_{k}(n,r).} Now, k [ x i j ] {\displaystyle k[x_{ij}]} has a natural coalgebra structure with comultiplication Δ {\displaystyle \Delta } and counit ε {\displaystyle \varepsilon } the algebra homomorphisms given on generators by Δ ( x i j ) = ∑ l x i l ⊗ x l j , ε ( x i j ) = δ i j {\displaystyle \Delta (x_{ij})=\textstyle \sum _{l}x_{il}\otimes x_{lj},\quad \varepsilon (x_{ij})=\delta _{ij}\quad } (Kronecker's delta). Since comultiplication is an algebra homomorphism, k [ x i j ] {\displaystyle k[x_{ij}]} is a bialgebra. One easily checks that A k ( n , r ) {\displaystyle A_{k}(n,r)} is a subcoalgebra of the bialgebra k [ x i j ] {\displaystyle k[x_{ij}]} , for every r ≥ 0. Definition. The Schur algebra (in degree r {\displaystyle r} ) is the algebra S k ( n , r ) = H o m k ( A k ( n , r ) , k ) {\displaystyle S_{k}(n,r)=\mathrm {Hom} _{k}(A_{k}(n,r),k)} . That is, S k ( n , r ) {\displaystyle S_{k}(n,r)} is the linear dual of A k ( n , r ) {\displaystyle A_{k}(n,r)} . It is a general fact that the linear dual of a coalgebra A {\displaystyle A} is an algebra in a natural way, where the multiplication in the algebra is induced by dualizing the comultiplication in the coalgebra. To see this, let Δ ( a ) = ∑ a i ⊗ b i {\displaystyle \Delta (a)=\textstyle \sum a_{i}\otimes b_{i}} and, given linear functionals f {\displaystyle f} , g {\displaystyle g} on A {\displaystyle A} , define their product to be the linear functional given by a ↦ ∑ f ( a i ) g ( b i ) . {\displaystyle \textstyle a\mapsto \sum f(a_{i})g(b_{i}).} The identity element for this multiplication of functionals is the counit in A {\displaystyle A} . == Main properties == One of the most basic properties expresses S k ( n , r ) {\displaystyle S_{k}(n,r)} as a centralizer algebra. Let V = k n {\displaystyle V=k^{n}} be the space of rank n {\displaystyle n} column vectors over k {\displaystyle k} , and form the tensor power V ⊗ r = V ⊗ ⋯ ⊗ V ( r factors ) . 
{\displaystyle V^{\otimes r}=V\otimes \cdots \otimes V\quad (r{\text{ factors}}).} Then the symmetric group S r {\displaystyle {\mathfrak {S}}_{r}} on r {\displaystyle r} letters acts naturally on the tensor space by place permutation, and one has an isomorphism S k ( n , r ) ≅ E n d S r ( V ⊗ r ) . {\displaystyle S_{k}(n,r)\cong \mathrm {End} _{{\mathfrak {S}}_{r}}(V^{\otimes r}).} In other words, S k ( n , r ) {\displaystyle S_{k}(n,r)} may be viewed as the algebra of endomorphisms of tensor space commuting with the action of the symmetric group. S k ( n , r ) {\displaystyle S_{k}(n,r)} is free over k {\displaystyle k} of rank given by the binomial coefficient ( n 2 + r − 1 r ) {\displaystyle {\tbinom {n^{2}+r-1}{r}}} . Various bases of S k ( n , r ) {\displaystyle S_{k}(n,r)} are known, many of which are indexed by pairs of semistandard Young tableaux of shape λ {\displaystyle \lambda } , as λ {\displaystyle \lambda } varies over the set of partitions of r {\displaystyle r} into no more than n {\displaystyle n} parts. In case k is an infinite field, S k ( n , r ) {\displaystyle S_{k}(n,r)} may also be identified with the enveloping algebra (in the sense of H. Weyl) for the action of the general linear group G L n ( k ) {\displaystyle \mathrm {GL} _{n}(k)} acting on V ⊗ r {\displaystyle V^{\otimes r}} (via the diagonal action on tensors, induced from the natural action of G L n ( k ) {\displaystyle \mathrm {GL} _{n}(k)} on V = k n {\displaystyle V=k^{n}} given by matrix multiplication). Schur algebras are "defined over the integers". This means that they satisfy the following change of scalars property: S k ( n , r ) ≅ S Z ( n , r ) ⊗ Z k {\displaystyle S_{k}(n,r)\cong S_{\mathbb {Z} }(n,r)\otimes _{\mathbb {Z} }k} for any commutative ring k {\displaystyle k} . Schur algebras provide natural examples of quasihereditary algebras (as defined by Cline, Parshall, and Scott), and thus have nice homological properties. 
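Both the centralizer description and the rank formula can be checked numerically for small n and r. A sketch for n = r = 2, computing the dimension of the commutant of the symmetric-group action as a nullity (the vec/Kronecker encoding is an implementation choice, not part of the definition):

```python
import numpy as np
from math import comb

n, r = 2, 2
d = n ** r

# the flip operator P on V ⊗ V for V = k^n:  P(e_i ⊗ e_j) = e_j ⊗ e_i
P = np.zeros((d, d))
for i in range(n):
    for j in range(n):
        P[j * n + i, i * n + j] = 1.0

# M commutes with P  ⇔  (I⊗P − Pᵀ⊗I) vec(M) = 0, using vec(AXB) = (Bᵀ⊗A) vec(X)
L = np.kron(np.eye(d), P) - np.kron(P.T, np.eye(d))
commutant_dim = d * d - np.linalg.matrix_rank(L)

print(commutant_dim, comb(n**2 + r - 1, r))   # both give the rank of S_k(2, 2)
```

For n = r = 2 the commutant has dimension 10 = C(5, 2), in agreement with the binomial-coefficient formula above.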
In particular, Schur algebras have finite global dimension. == Generalizations == Generalized Schur algebras (associated to any reductive algebraic group) were introduced by Donkin in the 1980s. These are also quasihereditary. Around the same time, Dipper and James introduced the quantized Schur algebras (or q-Schur algebras for short), which are a type of q-deformation of the classical Schur algebras described above, in which the symmetric group is replaced by the corresponding Hecke algebra and the general linear group by an appropriate quantum group. There are also generalized q-Schur algebras, which are obtained by generalizing the work of Dipper and James in the same way that Donkin generalized the classical Schur algebras. There are further generalizations, such as the affine q-Schur algebras related to affine Kac–Moody Lie algebras and other generalizations, such as the cyclotomic q-Schur algebras related to Ariki-Koike algebras (which are q-deformations of certain complex reflection groups). The study of these various classes of generalizations forms an active area of contemporary research. == References == == Further reading == Stuart Martin, Schur Algebras and Representation Theory, Cambridge University Press 1993. MR2482481, ISBN 978-0-521-10046-5 Andrew Mathas, Iwahori-Hecke algebras and Schur algebras of the symmetric group, University Lecture Series, vol.15, American Mathematical Society, 1999. MR1711316, ISBN 0-8218-1926-7 Hermann Weyl, The Classical Groups. Their Invariants and Representations. Princeton University Press, Princeton, N.J., 1939. MR0000255, ISBN 0-691-05756-7
|
Wikipedia:Schur complement#0
|
The Schur complement is a key tool in the fields of linear algebra, the theory of matrices, numerical analysis, and statistics. It is defined for a block matrix. Suppose p, q are nonnegative integers such that p + q > 0, and suppose A, B, C, D are respectively p × p, p × q, q × p, and q × q matrices of complex numbers. Let M = [ A B C D ] {\displaystyle M={\begin{bmatrix}A&B\\C&D\end{bmatrix}}} so that M is a (p + q) × (p + q) matrix. If D is invertible, then the Schur complement of the block D of the matrix M is the p × p matrix defined by M / D := A − B D − 1 C . {\displaystyle M/D:=A-BD^{-1}C.} If A is invertible, the Schur complement of the block A of the matrix M is the q × q matrix defined by M / A := D − C A − 1 B . {\displaystyle M/A:=D-CA^{-1}B.} In the case that A or D is singular, substituting a generalized inverse for the inverses on M/A and M/D yields the generalized Schur complement. The Schur complement is named after Issai Schur who used it to prove Schur's lemma, although it had been used previously. Emilie Virginia Haynsworth was the first to call it the Schur complement. The Schur complement is sometimes referred to as the Feshbach map after a physicist Herman Feshbach. == Background == The Schur complement arises when performing a block Gaussian elimination on the matrix M. In order to eliminate the elements below the block diagonal, one multiplies the matrix M by a block lower triangular matrix on the right as follows: M = [ A B C D ] → [ A B C D ] [ I p 0 − D − 1 C I q ] = [ A − B D − 1 C B 0 D ] , {\displaystyle {\begin{aligned}&M={\begin{bmatrix}A&B\\C&D\end{bmatrix}}\quad \to \quad {\begin{bmatrix}A&B\\C&D\end{bmatrix}}{\begin{bmatrix}I_{p}&0\\-D^{-1}C&I_{q}\end{bmatrix}}={\begin{bmatrix}A-BD^{-1}C&B\\0&D\end{bmatrix}},\end{aligned}}} where Ip denotes a p×p identity matrix. As a result, the Schur complement M / D = A − B D − 1 C {\displaystyle M/D=A-BD^{-1}C} appears in the upper-left p×p block. 
Continuing the elimination process beyond this point (i.e., performing a block Gauss–Jordan elimination), [ A − B D − 1 C B 0 D ] → [ I p − B D − 1 0 I q ] [ A − B D − 1 C B 0 D ] = [ A − B D − 1 C 0 0 D ] , {\displaystyle {\begin{aligned}&{\begin{bmatrix}A-BD^{-1}C&B\\0&D\end{bmatrix}}\quad \to \quad {\begin{bmatrix}I_{p}&-BD^{-1}\\0&I_{q}\end{bmatrix}}{\begin{bmatrix}A-BD^{-1}C&B\\0&D\end{bmatrix}}={\begin{bmatrix}A-BD^{-1}C&0\\0&D\end{bmatrix}},\end{aligned}}} leads to an LDU decomposition of M, which reads M = [ A B C D ] = [ I p B D − 1 0 I q ] [ A − B D − 1 C 0 0 D ] [ I p 0 D − 1 C I q ] . {\displaystyle {\begin{aligned}M&={\begin{bmatrix}A&B\\C&D\end{bmatrix}}={\begin{bmatrix}I_{p}&BD^{-1}\\0&I_{q}\end{bmatrix}}{\begin{bmatrix}A-BD^{-1}C&0\\0&D\end{bmatrix}}{\begin{bmatrix}I_{p}&0\\D^{-1}C&I_{q}\end{bmatrix}}.\end{aligned}}} Thus, the inverse of M may be expressed involving D−1 and the inverse of Schur's complement, assuming it exists, as M − 1 = [ A B C D ] − 1 = ( [ I p B D − 1 0 I q ] [ A − B D − 1 C 0 0 D ] [ I p 0 D − 1 C I q ] ) − 1 = [ I p 0 − D − 1 C I q ] [ ( A − B D − 1 C ) − 1 0 0 D − 1 ] [ I p − B D − 1 0 I q ] = [ ( A − B D − 1 C ) − 1 − ( A − B D − 1 C ) − 1 B D − 1 − D − 1 C ( A − B D − 1 C ) − 1 D − 1 + D − 1 C ( A − B D − 1 C ) − 1 B D − 1 ] = [ ( M / D ) − 1 − ( M / D ) − 1 B D − 1 − D − 1 C ( M / D ) − 1 D − 1 + D − 1 C ( M / D ) − 1 B D − 1 ] . 
{\displaystyle {\begin{aligned}M^{-1}={\begin{bmatrix}A&B\\C&D\end{bmatrix}}^{-1}={}&\left({\begin{bmatrix}I_{p}&BD^{-1}\\0&I_{q}\end{bmatrix}}{\begin{bmatrix}A-BD^{-1}C&0\\0&D\end{bmatrix}}{\begin{bmatrix}I_{p}&0\\D^{-1}C&I_{q}\end{bmatrix}}\right)^{-1}\\={}&{\begin{bmatrix}I_{p}&0\\-D^{-1}C&I_{q}\end{bmatrix}}{\begin{bmatrix}\left(A-BD^{-1}C\right)^{-1}&0\\0&D^{-1}\end{bmatrix}}{\begin{bmatrix}I_{p}&-BD^{-1}\\0&I_{q}\end{bmatrix}}\\[4pt]={}&{\begin{bmatrix}\left(A-BD^{-1}C\right)^{-1}&-\left(A-BD^{-1}C\right)^{-1}BD^{-1}\\-D^{-1}C\left(A-BD^{-1}C\right)^{-1}&D^{-1}+D^{-1}C\left(A-BD^{-1}C\right)^{-1}BD^{-1}\end{bmatrix}}\\[4pt]={}&{\begin{bmatrix}\left(M/D\right)^{-1}&-\left(M/D\right)^{-1}BD^{-1}\\-D^{-1}C\left(M/D\right)^{-1}&D^{-1}+D^{-1}C\left(M/D\right)^{-1}BD^{-1}\end{bmatrix}}.\end{aligned}}} The above relationship comes from the elimination operations that involve D−1 and M/D. An equivalent derivation can be done with the roles of A and D interchanged. By equating the expressions for M−1 obtained in these two different ways, one can establish the matrix inversion lemma, which relates the two Schur complements of M: M/D and M/A (see "Derivation from LDU decomposition" in Woodbury matrix identity § Alternative proofs). == Properties == If p and q are both 1 (i.e., A, B, C and D are all scalars), we get the familiar formula for the inverse of a 2-by-2 matrix: M − 1 = 1 A D − B C [ D − B − C A ] {\displaystyle M^{-1}={\frac {1}{AD-BC}}\left[{\begin{matrix}D&-B\\-C&A\end{matrix}}\right]} provided that AD − BC is non-zero. 
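The block formula for M−1 in terms of D−1 and (M/D)−1 can likewise be spot-checked numerically. This is an illustrative sketch (not from the article) with arbitrary random blocks, D shifted to keep it well-conditioned:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 2, 2
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 4 * np.eye(q)
M = np.block([[A, B], [C, D]])

Dinv = np.linalg.inv(D)
S = A - B @ Dinv @ C             # the Schur complement M/D
Sinv = np.linalg.inv(S)

# Assemble M^{-1} block by block from D^{-1} and (M/D)^{-1}.
Minv = np.block([
    [Sinv,              -Sinv @ B @ Dinv],
    [-Dinv @ C @ Sinv,   Dinv + Dinv @ C @ Sinv @ B @ Dinv],
])
assert np.allclose(Minv, np.linalg.inv(M))
```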
In general, if A is invertible, then M = [ A B C D ] = [ I p 0 C A − 1 I q ] [ A 0 0 D − C A − 1 B ] [ I p A − 1 B 0 I q ] , M − 1 = [ A − 1 + A − 1 B ( M / A ) − 1 C A − 1 − A − 1 B ( M / A ) − 1 − ( M / A ) − 1 C A − 1 ( M / A ) − 1 ] {\displaystyle {\begin{aligned}M&={\begin{bmatrix}A&B\\C&D\end{bmatrix}}={\begin{bmatrix}I_{p}&0\\CA^{-1}&I_{q}\end{bmatrix}}{\begin{bmatrix}A&0\\0&D-CA^{-1}B\end{bmatrix}}{\begin{bmatrix}I_{p}&A^{-1}B\\0&I_{q}\end{bmatrix}},\\[4pt]M^{-1}&={\begin{bmatrix}A^{-1}+A^{-1}B(M/A)^{-1}CA^{-1}&-A^{-1}B(M/A)^{-1}\\-(M/A)^{-1}CA^{-1}&(M/A)^{-1}\end{bmatrix}}\end{aligned}}} whenever this inverse exists. (Schur's formula) When A, respectively D, is invertible, the determinant of M is also clearly seen to be given by det ( M ) = det ( A ) det ( D − C A − 1 B ) {\displaystyle \det(M)=\det(A)\det \left(D-CA^{-1}B\right)} , respectively det ( M ) = det ( D ) det ( A − B D − 1 C ) {\displaystyle \det(M)=\det(D)\det \left(A-BD^{-1}C\right)} , which generalizes the determinant formula for 2 × 2 matrices. (Guttman rank additivity formula) If D is invertible, then the rank of M is given by rank ( M ) = rank ( D ) + rank ( A − B D − 1 C ) {\displaystyle \operatorname {rank} (M)=\operatorname {rank} (D)+\operatorname {rank} \left(A-BD^{-1}C\right)} (Haynsworth inertia additivity formula) If A is invertible, then the inertia of the block matrix M is equal to the inertia of A plus the inertia of M/A. (Quotient identity) A / B = ( ( A / C ) / ( B / C ) ) {\displaystyle A/B=((A/C)/(B/C))} . The Schur complement of a Laplacian matrix is also a Laplacian matrix. == Application to solving linear equations == The Schur complement arises naturally in solving a system of linear equations such as [ A B C D ] [ x y ] = [ u v ] {\displaystyle {\begin{bmatrix}A&B\\C&D\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}u\\v\end{bmatrix}}} . 
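The Schur determinant formula and the Guttman rank additivity formula above also lend themselves to a quick numerical check (an illustrative NumPy sketch, not from the article, with arbitrary random blocks):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 3, 2
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 3 * np.eye(q)  # keep D invertible
M = np.block([[A, B], [C, D]])
M_over_D = A - B @ np.linalg.inv(D) @ C

# Schur determinant formula: det(M) = det(D) * det(M/D)
assert np.isclose(np.linalg.det(M),
                  np.linalg.det(D) * np.linalg.det(M_over_D))
# Guttman rank additivity: rank(M) = rank(D) + rank(M/D)
assert np.linalg.matrix_rank(M) == q + np.linalg.matrix_rank(M_over_D)
```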
Assuming that the submatrix A {\displaystyle A} is invertible, we can eliminate x {\displaystyle x} from the equations, as follows. x = A − 1 ( u − B y ) . {\displaystyle x=A^{-1}(u-By).} Substituting this expression into the second equation yields ( D − C A − 1 B ) y = v − C A − 1 u . {\displaystyle \left(D-CA^{-1}B\right)y=v-CA^{-1}u.} We refer to this as the reduced equation obtained by eliminating x {\displaystyle x} from the original equation. The matrix appearing in the reduced equation is called the Schur complement of the first block A {\displaystyle A} in M {\displaystyle M} : S = d e f D − C A − 1 B {\displaystyle S\ {\overset {\underset {\mathrm {def} }{}}{=}}\ D-CA^{-1}B} . Solving the reduced equation, we obtain y = S − 1 ( v − C A − 1 u ) . {\displaystyle y=S^{-1}\left(v-CA^{-1}u\right).} Substituting this into the first equation yields x = ( A − 1 + A − 1 B S − 1 C A − 1 ) u − A − 1 B S − 1 v . {\displaystyle x=\left(A^{-1}+A^{-1}BS^{-1}CA^{-1}\right)u-A^{-1}BS^{-1}v.} We can express the above two equations as: [ x y ] = [ A − 1 + A − 1 B S − 1 C A − 1 − A − 1 B S − 1 − S − 1 C A − 1 S − 1 ] [ u v ] . {\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}A^{-1}+A^{-1}BS^{-1}CA^{-1}&-A^{-1}BS^{-1}\\-S^{-1}CA^{-1}&S^{-1}\end{bmatrix}}{\begin{bmatrix}u\\v\end{bmatrix}}.} Therefore, a formulation for the inverse of a block matrix is: [ A B C D ] − 1 = [ A − 1 + A − 1 B S − 1 C A − 1 − A − 1 B S − 1 − S − 1 C A − 1 S − 1 ] = [ I p − A − 1 B I q ] [ A − 1 S − 1 ] [ I p − C A − 1 I q ] . {\displaystyle {\begin{bmatrix}A&B\\C&D\end{bmatrix}}^{-1}={\begin{bmatrix}A^{-1}+A^{-1}BS^{-1}CA^{-1}&-A^{-1}BS^{-1}\\-S^{-1}CA^{-1}&S^{-1}\end{bmatrix}}={\begin{bmatrix}I_{p}&-A^{-1}B\\&I_{q}\end{bmatrix}}{\begin{bmatrix}A^{-1}&\\&S^{-1}\end{bmatrix}}{\begin{bmatrix}I_{p}&\\-CA^{-1}&I_{q}\end{bmatrix}}.} In particular, we see that the Schur complement is the inverse of the 2 , 2 {\displaystyle 2,2} block entry of the inverse of M {\displaystyle M} .
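The elimination procedure above translates directly into code. The following NumPy sketch (illustrative only, with arbitrary random data) solves the block system by first solving the reduced equation for y and then back-substituting for x:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 2, 3
A = rng.standard_normal((p, p)) + 3 * np.eye(p)  # keep A invertible
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q))
u = rng.standard_normal(p)
v = rng.standard_normal(q)

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                      # Schur complement of A in M
y = np.linalg.solve(S, v - C @ Ainv @ u)  # reduced equation for y
x = Ainv @ (u - B @ y)                    # back-substitute for x

# The pair (x, y) solves the original block system.
M = np.block([[A, B], [C, D]])
assert np.allclose(M @ np.concatenate([x, y]), np.concatenate([u, v]))
```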
In practice, one needs A {\displaystyle A} to be well-conditioned in order for this algorithm to be numerically accurate. This method is useful in electrical engineering to reduce the dimension of a network's equations. It is especially useful when element(s) of the output vector are zero. For example, when u {\displaystyle u} or v {\displaystyle v} is zero, we can eliminate the associated rows of the coefficient matrix without any changes to the rest of the output vector. If v {\displaystyle v} is null then the above equation for x {\displaystyle x} reduces to x = ( A − 1 + A − 1 B S − 1 C A − 1 ) u {\displaystyle x=\left(A^{-1}+A^{-1}BS^{-1}CA^{-1}\right)u} , thus reducing the dimension of the coefficient matrix while leaving u {\displaystyle u} unmodified. This is used to advantage in electrical engineering where it is referred to as node elimination or Kron reduction. == Applications to probability theory and statistics == Suppose the random column vectors X, Y live in Rn and Rm respectively, and the vector (X, Y) in Rn + m has a multivariate normal distribution whose covariance is the symmetric positive-definite matrix Σ = [ A B B T C ] , {\displaystyle \Sigma =\left[{\begin{matrix}A&B\\B^{\mathrm {T} }&C\end{matrix}}\right],} where A ∈ R n × n {\textstyle A\in \mathbb {R} ^{n\times n}} is the covariance matrix of X, C ∈ R m × m {\textstyle C\in \mathbb {R} ^{m\times m}} is the covariance matrix of Y and B ∈ R n × m {\textstyle B\in \mathbb {R} ^{n\times m}} is the covariance matrix between X and Y. 
Then the conditional covariance of X given Y is the Schur complement of C in Σ {\textstyle \Sigma } : Cov ( X ∣ Y ) = A − B C − 1 B T E ( X ∣ Y ) = E ( X ) + B C − 1 ( Y − E ( Y ) ) {\displaystyle {\begin{aligned}\operatorname {Cov} (X\mid Y)&=A-BC^{-1}B^{\mathrm {T} }\\\operatorname {E} (X\mid Y)&=\operatorname {E} (X)+BC^{-1}(Y-\operatorname {E} (Y))\end{aligned}}} If we take the matrix Σ {\displaystyle \Sigma } above to be, not a covariance of a random vector, but a sample covariance, then it may have a Wishart distribution. In that case, the Schur complement of C in Σ {\displaystyle \Sigma } also has a Wishart distribution. == Conditions for positive definiteness and semi-definiteness == Let X be a symmetric matrix of real numbers given by X = [ A B B T C ] . {\displaystyle X=\left[{\begin{matrix}A&B\\B^{\mathrm {T} }&C\end{matrix}}\right].} Then If A is invertible, then X is positive definite if and only if A and its complement X/A are both positive definite:: 34 X ≻ 0 ⇔ A ≻ 0 , X / A = C − B T A − 1 B ≻ 0. {\displaystyle X\succ 0\Leftrightarrow A\succ 0,X/A=C-B^{\mathrm {T} }A^{-1}B\succ 0.} If C is invertible, then X is positive definite if and only if C and its complement X/C are both positive definite: X ≻ 0 ⇔ C ≻ 0 , X / C = A − B C − 1 B T ≻ 0. {\displaystyle X\succ 0\Leftrightarrow C\succ 0,X/C=A-BC^{-1}B^{\mathrm {T} }\succ 0.} If A is positive definite, then X is positive semi-definite if and only if the complement X/A is positive semi-definite:: 34 If A ≻ 0 , then X ⪰ 0 ⇔ X / A = C − B T A − 1 B ⪰ 0. {\displaystyle {\text{If }}A\succ 0,{\text{ then }}X\succeq 0\Leftrightarrow X/A=C-B^{\mathrm {T} }A^{-1}B\succeq 0.} If C is positive definite, then X is positive semi-definite if and only if the complement X/C is positive semi-definite: If C ≻ 0 , then X ⪰ 0 ⇔ X / C = A − B C − 1 B T ⪰ 0. 
{\displaystyle {\text{If }}C\succ 0,{\text{ then }}X\succeq 0\Leftrightarrow X/C=A-BC^{-1}B^{\mathrm {T} }\succeq 0.} The first and third statements can be derived by considering the minimizer of the quantity u T A u + 2 v T B T u + v T C v , {\displaystyle u^{\mathrm {T} }Au+2v^{\mathrm {T} }B^{\mathrm {T} }u+v^{\mathrm {T} }Cv,\,} as a function of v (for fixed u). Furthermore, since [ A B B T C ] ≻ 0 ⟺ [ C B T B A ] ≻ 0 {\displaystyle \left[{\begin{matrix}A&B\\B^{\mathrm {T} }&C\end{matrix}}\right]\succ 0\Longleftrightarrow \left[{\begin{matrix}C&B^{\mathrm {T} }\\B&A\end{matrix}}\right]\succ 0} and similarly for positive semi-definite matrices, the second (respectively fourth) statement is immediate from the first (resp. third) statement. There is also a sufficient and necessary condition for the positive semi-definiteness of X in terms of a generalized Schur complement. Precisely, X ⪰ 0 ⇔ A ⪰ 0 , C − B T A g B ⪰ 0 , ( I − A A g ) B = 0 {\displaystyle X\succeq 0\Leftrightarrow A\succeq 0,C-B^{\mathrm {T} }A^{g}B\succeq 0,\left(I-AA^{g}\right)B=0\,} and X ⪰ 0 ⇔ C ⪰ 0 , A − B C g B T ⪰ 0 , ( I − C C g ) B T = 0 , {\displaystyle X\succeq 0\Leftrightarrow C\succeq 0,A-BC^{g}B^{\mathrm {T} }\succeq 0,\left(I-CC^{g}\right)B^{\mathrm {T} }=0,} where A g {\displaystyle A^{g}} denotes a generalized inverse of A {\displaystyle A} . == See also == Woodbury matrix identity Quasi-Newton method Haynsworth inertia additivity formula Gaussian process Total least squares Guyan reduction in computational mechanics == References ==
|
Wikipedia:Schur polynomial#0
|
In mathematics, Schur polynomials, named after Issai Schur, are certain symmetric polynomials in n variables, indexed by partitions, that generalize the elementary symmetric polynomials and the complete homogeneous symmetric polynomials. In representation theory they are the characters of polynomial irreducible representations of the general linear groups. The Schur polynomials form a linear basis for the space of all symmetric polynomials. Any product of Schur polynomials can be written as a linear combination of Schur polynomials with non-negative integral coefficients; the values of these coefficients are given combinatorially by the Littlewood–Richardson rule. More generally, skew Schur polynomials are associated with pairs of partitions and have similar properties to Schur polynomials. == Definition (Jacobi's bialternant formula) == Schur polynomials are indexed by integer partitions. Given a partition λ = (λ1, λ2, ..., λn), where λ1 ≥ λ2 ≥ ... ≥ λn, and each λj is a non-negative integer, the functions a ( λ 1 + n − 1 , λ 2 + n − 2 , … , λ n ) ( x 1 , x 2 , … , x n ) = det [ x 1 λ 1 + n − 1 x 2 λ 1 + n − 1 … x n λ 1 + n − 1 x 1 λ 2 + n − 2 x 2 λ 2 + n − 2 … x n λ 2 + n − 2 ⋮ ⋮ ⋱ ⋮ x 1 λ n x 2 λ n … x n λ n ] {\displaystyle a_{(\lambda _{1}+n-1,\lambda _{2}+n-2,\dots ,\lambda _{n})}(x_{1},x_{2},\dots ,x_{n})=\det \left[{\begin{matrix}x_{1}^{\lambda _{1}+n-1}&x_{2}^{\lambda _{1}+n-1}&\dots &x_{n}^{\lambda _{1}+n-1}\\x_{1}^{\lambda _{2}+n-2}&x_{2}^{\lambda _{2}+n-2}&\dots &x_{n}^{\lambda _{2}+n-2}\\\vdots &\vdots &\ddots &\vdots \\x_{1}^{\lambda _{n}}&x_{2}^{\lambda _{n}}&\dots &x_{n}^{\lambda _{n}}\end{matrix}}\right]} are alternating polynomials by properties of the determinant. A polynomial is alternating if it changes sign under any transposition of the variables.
Since they are alternating, they are all divisible by the Vandermonde determinant a ( n − 1 , n − 2 , … , 0 ) ( x 1 , x 2 , … , x n ) = det [ x 1 n − 1 x 2 n − 1 … x n n − 1 x 1 n − 2 x 2 n − 2 … x n n − 2 ⋮ ⋮ ⋱ ⋮ 1 1 … 1 ] = ∏ 1 ≤ j < k ≤ n ( x j − x k ) . {\displaystyle a_{(n-1,n-2,\dots ,0)}(x_{1},x_{2},\dots ,x_{n})=\det \left[{\begin{matrix}x_{1}^{n-1}&x_{2}^{n-1}&\dots &x_{n}^{n-1}\\x_{1}^{n-2}&x_{2}^{n-2}&\dots &x_{n}^{n-2}\\\vdots &\vdots &\ddots &\vdots \\1&1&\dots &1\end{matrix}}\right]=\prod _{1\leq j<k\leq n}(x_{j}-x_{k}).} The Schur polynomials are defined as the ratio s λ ( x 1 , x 2 , … , x n ) = a ( λ 1 + n − 1 , λ 2 + n − 2 , … , λ n + 0 ) ( x 1 , x 2 , … , x n ) a ( n − 1 , n − 2 , … , 0 ) ( x 1 , x 2 , … , x n ) . {\displaystyle s_{\lambda }(x_{1},x_{2},\dots ,x_{n})={\frac {a_{(\lambda _{1}+n-1,\lambda _{2}+n-2,\dots ,\lambda _{n}+0)}(x_{1},x_{2},\dots ,x_{n})}{a_{(n-1,n-2,\dots ,0)}(x_{1},x_{2},\dots ,x_{n})}}.} This is known as the bialternant formula of Jacobi. It is a special case of the Weyl character formula. This is a symmetric function because the numerator and denominator are both alternating, and a polynomial since all alternating polynomials are divisible by the Vandermonde determinant. == Properties == The degree d Schur polynomials in n variables are a linear basis for the space of homogeneous degree d symmetric polynomials in n variables. For a partition λ = (λ1, λ2, ..., λr) with r ≤ n {\displaystyle r\leq n} , the Schur polynomial is a sum of monomials, s λ ( x 1 , x 2 , … , x n ) = ∑ T x T = ∑ T x 1 t 1 ⋯ x n t n {\displaystyle s_{\lambda }(x_{1},x_{2},\ldots ,x_{n})=\sum _{T}x^{T}=\sum _{T}x_{1}^{t_{1}}\cdots x_{n}^{t_{n}}} where the summation is over all semistandard Young tableaux T of shape λ using the numbers 1, 2, ..., n. The exponents t1, ..., tn give the weight of T, in other words each ti counts the occurrences of the number i in T. 
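Both formulas are easy to check symbolically. The following SymPy sketch (not part of the article) computes s(2,1,1) in three variables from the bialternant formula and compares it with its monomial expansion x1x2x3(x1 + x2 + x3):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = (x1, x2, x3)
lam = (2, 1, 1)
n = 3

# numerator: generalized Vandermonde with exponents lam[i] + n - 1 - i
num = sp.Matrix(n, n, lambda i, j: xs[j] ** (lam[i] + n - 1 - i)).det()
# denominator: the Vandermonde determinant itself
den = sp.Matrix(n, n, lambda i, j: xs[j] ** (n - 1 - i)).det()
s211 = sp.cancel(num / den)  # the division is exact

# agrees with the monomial (tableau) expansion x1*x2*x3*(x1 + x2 + x3)
assert sp.expand(s211 - x1 * x2 * x3 * (x1 + x2 + x3)) == 0
```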
This can be shown to be equivalent to the definition from the first Giambelli formula using the Lindström–Gessel–Viennot lemma (as outlined on that page). Schur polynomials can be expressed as linear combinations of monomial symmetric functions mμ with non-negative integer coefficients Kλμ called Kostka numbers, s λ = ∑ μ K λ μ m μ . {\displaystyle s_{\lambda }=\sum _{\mu }K_{\lambda \mu }m_{\mu }.\ } The Kostka numbers Kλμ are given by the number of semi-standard Young tableaux of shape λ and weight μ. === Jacobi−Trudi identities === The first Jacobi−Trudi formula expresses the Schur polynomial as a determinant in terms of the complete homogeneous symmetric polynomials, s λ = det ( h λ i + j − i ) i , j = 1 l ( λ ) = det [ h λ 1 h λ 1 + 1 … h λ 1 + n − 1 h λ 2 − 1 h λ 2 … h λ 2 + n − 2 ⋮ ⋮ ⋱ ⋮ h λ n − n + 1 h λ n − n + 2 … h λ n ] , {\displaystyle s_{\lambda }=\det(h_{\lambda _{i}+j-i})_{i,j=1}^{l(\lambda )}=\det \left[{\begin{matrix}h_{\lambda _{1}}&h_{\lambda _{1}+1}&\dots &h_{\lambda _{1}+n-1}\\h_{\lambda _{2}-1}&h_{\lambda _{2}}&\dots &h_{\lambda _{2}+n-2}\\\vdots &\vdots &\ddots &\vdots \\h_{\lambda _{n}-n+1}&h_{\lambda _{n}-n+2}&\dots &h_{\lambda _{n}}\end{matrix}}\right],} where hi := s(i). 
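As an illustration (a SymPy sketch, not from the article), the first Jacobi−Trudi formula for λ = (2, 2) in three variables reads s(2,2) = h2h2 − h1h3, which can be verified against the monomial expansion of s(2,2) directly:

```python
import sympy as sp
from itertools import combinations_with_replacement

x = sp.symbols('x1:4')

def h(k):
    # complete homogeneous symmetric polynomial of degree k in three variables
    if k < 0:
        return sp.Integer(0)
    return sp.Add(*[sp.Mul(*c) for c in combinations_with_replacement(x, k)])

# first Jacobi-Trudi determinant for lam = (2, 2): det [[h2, h3], [h1, h2]]
s22 = sp.Matrix([[h(2), h(3)], [h(1), h(2)]]).det()

x1, x2, x3 = x
expected = (x1**2*x2**2 + x1**2*x3**2 + x2**2*x3**2
            + x1**2*x2*x3 + x1*x2**2*x3 + x1*x2*x3**2)
assert sp.expand(s22 - expected) == 0
```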
The second Jacobi-Trudi formula expresses the Schur polynomial as a determinant in terms of the elementary symmetric polynomials, s λ = det ( e λ i ′ + j − i ) i , j = 1 l ( λ ′ ) = det [ e λ 1 ′ e λ 1 ′ + 1 … e λ 1 ′ + l − 1 e λ 2 ′ − 1 e λ 2 ′ … e λ 2 ′ + l − 2 ⋮ ⋮ ⋱ ⋮ e λ l ′ − l + 1 e λ l ′ − l + 2 … e λ l ′ ] , {\displaystyle s_{\lambda }=\det(e_{\lambda '_{i}+j-i})_{i,j=1}^{l(\lambda ')}=\det \left[{\begin{matrix}e_{\lambda '_{1}}&e_{\lambda '_{1}+1}&\dots &e_{\lambda '_{1}+l-1}\\e_{\lambda '_{2}-1}&e_{\lambda '_{2}}&\dots &e_{\lambda '_{2}+l-2}\\\vdots &\vdots &\ddots &\vdots \\e_{\lambda '_{l}-l+1}&e_{\lambda '_{l}-l+2}&\dots &e_{\lambda '_{l}}\end{matrix}}\right],} where ei := s(1i) and λ ′ = ( λ 1 ′ , … , λ l ′ ) {\displaystyle \lambda '=(\lambda '_{1},\ldots ,\lambda '_{l})} is the conjugate partition to λ. In both identities, functions with negative subscripts are defined to be zero. === The Giambelli identity === Another determinantal identity is Giambelli's formula, which expresses the Schur function for an arbitrary partition in terms of those for the hook partitions contained within the Young diagram. In Frobenius' notation, the partition is denoted ( a 1 , … , a r ∣ b 1 , … , b r ) {\displaystyle (a_{1},\ldots ,a_{r}\mid b_{1},\ldots ,b_{r})} where, for each diagonal element in position ii, ai denotes the number of boxes to the right in the same row and bi denotes the number of boxes beneath it in the same column (the arm and leg lengths, respectively). The Giambelli identity expresses the Schur function corresponding to this partition as the determinant s ( a 1 , … , a r ∣ b 1 , … , b r ) = det ( s ( a i ∣ b j ) ) {\displaystyle s_{(a_{1},\ldots ,a_{r}\mid b_{1},\ldots ,b_{r})}=\det(s_{(a_{i}\mid b_{j})})} of those for hook partitions. 
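For example, λ = (2, 2) has Frobenius notation (1, 0 | 1, 0), and the Giambelli identity gives s(2,2) = s(1|1)s(0|0) − s(1|0)s(0|1) = s(2,1)s(1) − s(2)s(1,1). A SymPy sketch (not from the article) checks this in two variables via the bialternant formula:

```python
import sympy as sp

x = sp.symbols('x1 x2')
n = 2

def schur(lam):
    # bialternant formula, padding lam with zeros to length n
    lam = list(lam) + [0] * (n - len(lam))
    num = sp.Matrix(n, n, lambda i, j: x[j] ** (lam[i] + n - 1 - i)).det()
    den = sp.Matrix(n, n, lambda i, j: x[j] ** (n - 1 - i)).det()
    return sp.cancel(num / den)

# Giambelli for (2,2) = (1,0 | 1,0): 2x2 determinant of hook Schur functions,
# with hooks (1|1) = (2,1), (1|0) = (2), (0|1) = (1,1), (0|0) = (1).
giambelli = schur((2, 1)) * schur((1,)) - schur((2,)) * schur((1, 1))
assert sp.expand(giambelli - schur((2, 2))) == 0
```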
=== The Cauchy identity === The Cauchy identity for Schur functions (now in infinitely many variables), and its dual state that ∑ λ s λ ( x ) s λ ( y ) = ∑ λ m λ ( x ) h λ ( y ) = ∏ i , j ( 1 − x i y j ) − 1 , {\displaystyle \sum _{\lambda }s_{\lambda }(x)s_{\lambda }(y)=\sum _{\lambda }m_{\lambda }(x)h_{\lambda }(y)=\prod _{i,j}(1-x_{i}y_{j})^{-1},} and ∑ λ s λ ( x ) s λ ′ ( y ) = ∑ λ m λ ( x ) e λ ( y ) = ∏ i , j ( 1 + x i y j ) , {\displaystyle \sum _{\lambda }s_{\lambda }(x)s_{\lambda '}(y)=\sum _{\lambda }m_{\lambda }(x)e_{\lambda }(y)=\prod _{i,j}(1+x_{i}y_{j}),} where the sum is taken over all partitions λ, and h λ ( x ) {\displaystyle h_{\lambda }(x)} , e λ ( x ) {\displaystyle e_{\lambda }(x)} denote the complete symmetric functions and elementary symmetric functions, respectively. If the sum is taken over products of Schur polynomials in n {\displaystyle n} variables ( x 1 , … , x n ) {\displaystyle (x_{1},\dots ,x_{n})} , the sum includes only partitions of length ℓ ( λ ) ≤ n {\displaystyle \ell (\lambda )\leq n} since otherwise the Schur polynomials vanish. There are many generalizations of these identities to other families of symmetric functions. For example, Macdonald polynomials, Schubert polynomials and Grothendieck polynomials admit Cauchy-like identities. === Further identities === The Schur polynomial can also be computed via a specialization of a formula for Hall–Littlewood polynomials, s λ ( x 1 , … , x n ) = ∑ w ∈ S n / S n λ w ( x λ ∏ λ i > λ j x i x i − x j ) {\displaystyle s_{\lambda }(x_{1},\dotsc ,x_{n})=\sum _{w\in S_{n}/S_{n}^{\lambda }}w\left(x^{\lambda }\prod _{\lambda _{i}>\lambda _{j}}{\frac {x_{i}}{x_{i}-x_{j}}}\right)} where S n λ {\displaystyle S_{n}^{\lambda }} is the subgroup of permutations such that λ w ( i ) = λ i {\displaystyle \lambda _{w(i)}=\lambda _{i}} for all i, and w acts on variables by permuting indices. 
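In finitely many variables, the dual Cauchy identity becomes a finite polynomial identity. The following SymPy sketch (not from the article) checks it for two x-variables and two y-variables, where only the six partitions fitting in a 2 × 2 box contribute:

```python
import sympy as sp

x = sp.symbols('x1 x2')
y = sp.symbols('y1 y2')
n = 2

def schur(lam, v):
    # bialternant formula in two variables, padding lam with zeros
    lam = list(lam) + [0] * (n - len(lam))
    num = sp.Matrix(n, n, lambda i, j: v[j] ** (lam[i] + n - 1 - i)).det()
    den = sp.Matrix(n, n, lambda i, j: v[j] ** (n - 1 - i)).det()
    return sp.cancel(num / den)

# the six partitions fitting in a 2x2 box, paired with their conjugates
pairs = [((), ()), ((1,), (1,)), ((2,), (1, 1)),
         ((1, 1), (2,)), ((2, 1), (2, 1)), ((2, 2), (2, 2))]

lhs = sp.expand(sp.Mul(*[(1 + xi * yj) for xi in x for yj in y]))
rhs = sp.expand(sum(schur(l, x) * schur(lc, y) for l, lc in pairs))
assert sp.expand(lhs - rhs) == 0
```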
=== The Murnaghan−Nakayama rule === The Murnaghan–Nakayama rule expresses a product of a power-sum symmetric function with a Schur polynomial, in terms of Schur polynomials: p r ⋅ s λ = ∑ μ ( − 1 ) h t ( μ / λ ) + 1 s μ {\displaystyle p_{r}\cdot s_{\lambda }=\sum _{\mu }(-1)^{ht(\mu /\lambda )+1}s_{\mu }} where the sum is over all partitions μ such that μ/λ is a rim-hook of size r and ht(μ/λ) is the number of rows in the diagram μ/λ. === The Littlewood–Richardson rule and Pieri's formula === The Littlewood–Richardson coefficients depend on three partitions, say λ , μ , ν {\displaystyle \lambda ,\mu ,\nu } , of which λ {\displaystyle \lambda } and μ {\displaystyle \mu } describe the Schur functions being multiplied, and ν {\displaystyle \nu } gives the Schur function of which this is the coefficient in the linear combination; in other words they are the coefficients c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} such that s λ s μ = ∑ ν c λ , μ ν s ν . {\displaystyle s_{\lambda }s_{\mu }=\sum _{\nu }c_{\lambda ,\mu }^{\nu }s_{\nu }.} The Littlewood–Richardson rule states that c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} is equal to the number of Littlewood–Richardson tableaux of skew shape ν / λ {\displaystyle \nu /\lambda } and of weight μ {\displaystyle \mu } . Pieri's formula is a special case of the Littlewood-Richardson rule, which expresses the product h r s λ {\displaystyle h_{r}s_{\lambda }} in terms of Schur polynomials. The dual version expresses e r s λ {\displaystyle e_{r}s_{\lambda }} in terms of Schur polynomials. === Specializations === Evaluating the Schur polynomial sλ in (1, 1, ..., 1) gives the number of semi-standard Young tableaux of shape λ with entries in 1, 2, ..., n. One can show, by using the Weyl character formula for example, that s λ ( 1 , 1 , … , 1 ) = ∏ 1 ≤ i < j ≤ n λ i − λ j + j − i j − i . 
{\displaystyle s_{\lambda }(1,1,\dots ,1)=\prod _{1\leq i<j\leq n}{\frac {\lambda _{i}-\lambda _{j}+j-i}{j-i}}.} In this formula, λ, the tuple indicating the width of each row of the Young diagram, is implicitly extended with zeros until it has length n. The sum of the elements λi is d. See also the Hook length formula which computes the same quantity for fixed λ. == Example == The following extended example should help clarify these ideas. Consider the case n = 3, d = 4. Using Ferrers diagrams or some other method, we find that there are just four partitions of 4 into at most three parts. We have s ( 2 , 1 , 1 ) ( x 1 , x 2 , x 3 ) = 1 Δ det [ x 1 4 x 2 4 x 3 4 x 1 2 x 2 2 x 3 2 x 1 x 2 x 3 ] = x 1 x 2 x 3 ( x 1 + x 2 + x 3 ) {\displaystyle s_{(2,1,1)}(x_{1},x_{2},x_{3})={\frac {1}{\Delta }}\;\det \left[{\begin{matrix}x_{1}^{4}&x_{2}^{4}&x_{3}^{4}\\x_{1}^{2}&x_{2}^{2}&x_{3}^{2}\\x_{1}&x_{2}&x_{3}\end{matrix}}\right]=x_{1}\,x_{2}\,x_{3}\,(x_{1}+x_{2}+x_{3})} s ( 2 , 2 , 0 ) ( x 1 , x 2 , x 3 ) = 1 Δ det [ x 1 4 x 2 4 x 3 4 x 1 3 x 2 3 x 3 3 1 1 1 ] = x 1 2 x 2 2 + x 1 2 x 3 2 + x 2 2 x 3 2 + x 1 2 x 2 x 3 + x 1 x 2 2 x 3 + x 1 x 2 x 3 2 {\displaystyle s_{(2,2,0)}(x_{1},x_{2},x_{3})={\frac {1}{\Delta }}\;\det \left[{\begin{matrix}x_{1}^{4}&x_{2}^{4}&x_{3}^{4}\\x_{1}^{3}&x_{2}^{3}&x_{3}^{3}\\1&1&1\end{matrix}}\right]=x_{1}^{2}\,x_{2}^{2}+x_{1}^{2}\,x_{3}^{2}+x_{2}^{2}\,x_{3}^{2}+x_{1}^{2}\,x_{2}\,x_{3}+x_{1}\,x_{2}^{2}\,x_{3}+x_{1}\,x_{2}\,x_{3}^{2}} and so on, where Δ {\displaystyle \Delta } is the Vandermonde determinant a ( 2 , 1 , 0 ) ( x 1 , x 2 , x 3 ) {\displaystyle a_{(2,1,0)}(x_{1},x_{2},x_{3})} . Summarizing: s ( 2 , 1 , 1 ) = e 1 e 3 {\displaystyle s_{(2,1,1)}=e_{1}\,e_{3}} s ( 2 , 2 , 0 ) = e 2 2 − e 1 e 3 {\displaystyle s_{(2,2,0)}=e_{2}^{2}-e_{1}\,e_{3}} s ( 3 , 1 , 0 ) = e 1 2 e 2 − e 2 2 − e 1 e 3 {\displaystyle s_{(3,1,0)}=e_{1}^{2}\,e_{2}-e_{2}^{2}-e_{1}\,e_{3}} s ( 4 , 0 , 0 ) = e 1 4 − 3 e 1 2 e 2 + 2 e 1 e 3 + e 2 2 . 
{\displaystyle s_{(4,0,0)}=e_{1}^{4}-3\,e_{1}^{2}\,e_{2}+2\,e_{1}\,e_{3}+e_{2}^{2}.} Every homogeneous degree-four symmetric polynomial in three variables can be expressed as a unique linear combination of these four Schur polynomials, and this combination can again be found using a Gröbner basis for an appropriate elimination order. For example, ϕ ( x 1 , x 2 , x 3 ) = x 1 4 + x 2 4 + x 3 4 {\displaystyle \phi (x_{1},x_{2},x_{3})=x_{1}^{4}+x_{2}^{4}+x_{3}^{4}} is obviously a symmetric polynomial which is homogeneous of degree four, and we have ϕ = s ( 2 , 1 , 1 ) − s ( 3 , 1 , 0 ) + s ( 4 , 0 , 0 ) . {\displaystyle \phi =s_{(2,1,1)}-s_{(3,1,0)}+s_{(4,0,0)}.\,\!} == Relation to representation theory == The Schur polynomials occur in the representation theory of the symmetric groups, general linear groups, and unitary groups. The Weyl character formula implies that the Schur polynomials are the characters of finite-dimensional irreducible representations of the general linear groups, and helps to generalize Schur's work to other compact and semisimple Lie groups. Several expressions arise for this relation, one of the most important being the expansion of the Schur functions sλ in terms of the symmetric power functions p k = ∑ i x i k {\displaystyle p_{k}=\sum _{i}x_{i}^{k}} . If we write χλρ for the character of the representation of the symmetric group indexed by the partition λ evaluated at elements of cycle type indexed by the partition ρ, then s λ = ∑ ν χ ν λ z ν p ν = ∑ ρ = ( 1 r 1 , 2 r 2 , 3 r 3 , … ) χ ρ λ ∏ k p k r k r k ! k r k , {\displaystyle s_{\lambda }=\sum _{\nu }{\frac {\chi _{\nu }^{\lambda }}{z_{\nu }}}p_{\nu }=\sum _{\rho =(1^{r_{1}},2^{r_{2}},3^{r_{3}},\dots )}\chi _{\rho }^{\lambda }\prod _{k}{\frac {p_{k}^{r_{k}}}{r_{k}!k^{r_{k}}}},} where ρ = (1r1, 2r2, 3r3, ...) means that the partition ρ has rk parts of length k. A proof of this can be found in R. Stanley's Enumerative Combinatorics Volume 2, Corollary 7.17.5. 
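The expansion of φ above can be verified symbolically. This SymPy sketch (not part of the article) computes each Schur polynomial from the bialternant formula and checks the stated linear combination:

```python
import sympy as sp

x = sp.symbols('x1:4')
n = 3

def schur(lam):
    # bialternant formula in three variables
    num = sp.Matrix(n, n, lambda i, j: x[j] ** (lam[i] + n - 1 - i)).det()
    den = sp.Matrix(n, n, lambda i, j: x[j] ** (n - 1 - i)).det()
    return sp.cancel(num / den)

phi = x[0]**4 + x[1]**4 + x[2]**4
combo = schur((2, 1, 1)) - schur((3, 1, 0)) + schur((4, 0, 0))
assert sp.expand(phi - combo) == 0
```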
The integers χλρ can be computed using the Murnaghan–Nakayama rule. == Schur positivity == Due to the connection with representation theory, a symmetric function which expands positively in Schur functions is of particular interest. For example, the skew Schur functions expand positively in the ordinary Schur functions, and the coefficients are Littlewood–Richardson coefficients. A special case of this is the expansion of the complete homogeneous symmetric functions hλ in Schur functions. This decomposition reflects how a permutation module is decomposed into irreducible representations. === Methods for proving Schur positivity === There are several approaches to prove Schur positivity of a given symmetric function F. If F is described in a combinatorial manner, a direct approach is to produce a bijection with semi-standard Young tableaux. The Edelman–Greene correspondence and the Robinson–Schensted–Knuth correspondence are examples of such bijections. A bijection with more structure is a proof using so-called crystals. This method can be described as defining a certain graph structure, described with local rules, on the underlying combinatorial objects. A similar idea is the notion of dual equivalence. This approach also uses a graph structure, but on the objects representing the expansion in the fundamental quasisymmetric basis. It is closely related to the RSK correspondence. == Generalizations == === Skew Schur functions === Skew Schur functions sλ/μ depend on two partitions λ and μ, and can be defined by the property ⟨ s λ / μ , s ν ⟩ = ⟨ s λ , s μ s ν ⟩ . {\displaystyle \langle s_{\lambda /\mu },s_{\nu }\rangle =\langle s_{\lambda },s_{\mu }s_{\nu }\rangle .} Here, the inner product is the Hall inner product, for which the Schur polynomials form an orthonormal basis. Similar to the ordinary Schur polynomials, there are numerous ways to compute these.
The corresponding Jacobi-Trudi identities are s λ / μ = det ( h λ i − μ j − i + j ) i , j = 1 l ( λ ) {\displaystyle s_{\lambda /\mu }=\det(h_{\lambda _{i}-\mu _{j}-i+j})_{i,j=1}^{l(\lambda )}} s λ ′ / μ ′ = det ( e λ i − μ j − i + j ) i , j = 1 l ( λ ) {\displaystyle s_{\lambda '/\mu '}=\det(e_{\lambda _{i}-\mu _{j}-i+j})_{i,j=1}^{l(\lambda )}} There is also a combinatorial interpretation of the skew Schur polynomials, namely as a sum over all semi-standard Young tableaux (or column-strict tableaux) of the skew shape λ / μ {\displaystyle \lambda /\mu } . The skew Schur polynomials expand positively in Schur polynomials. A rule for the coefficients is given by the Littlewood-Richardson rule. === Double Schur polynomials === The double Schur polynomials can be seen as a generalization of the shifted Schur polynomials. These polynomials are also closely related to the factorial Schur polynomials. Given a partition λ, and a sequence a1, a2,... one can define the double Schur polynomial sλ(x || a) as s λ ( x | | a ) = ∑ T ∏ α ∈ λ ( x T ( α ) − a T ( α ) − c ( α ) ) {\displaystyle s_{\lambda }(x||a)=\sum _{T}\prod _{\alpha \in \lambda }(x_{T(\alpha )}-a_{T(\alpha )-c(\alpha )})} where the sum is taken over all reverse semi-standard Young tableaux T of shape λ, and integer entries in 1, ..., n. Here T(α) denotes the value in the box α in T and c(α) is the content of the box. A combinatorial rule for the Littlewood-Richardson coefficients (depending on the sequence a) was given by A. I. Molev. In particular, this implies that the shifted Schur polynomials have non-negative Littlewood-Richardson coefficients. The shifted Schur polynomials s*λ(y) can be obtained from the double Schur polynomials by specializing ai = −i and yi = xi + i. The double Schur polynomials are special cases of the double Schubert polynomials. === Factorial Schur polynomials === The factorial Schur polynomials may be defined as follows.
Given a partition λ, and a doubly infinite sequence ...,a−1, a0, a1, ... one can define the factorial Schur polynomial sλ(x|a) as s λ ( x | a ) = ∑ T ∏ α ∈ λ ( x T ( α ) − a T ( α ) + c ( α ) ) {\displaystyle s_{\lambda }(x|a)=\sum _{T}\prod _{\alpha \in \lambda }(x_{T(\alpha )}-a_{T(\alpha )+c(\alpha )})} where the sum is taken over all semi-standard Young tableaux T of shape λ, and integer entries in 1, ..., n. Here T(α) denotes the value in the box α in T and c(α) is the content of the box. There is also a determinant formula, s λ ( x | a ) = det [ ( x j | a ) λ i + n − i ] i , j = 1 l ( λ ) ∏ i < j ( x i − x j ) {\displaystyle s_{\lambda }(x|a)={\frac {\det[(x_{j}|a)^{\lambda _{i}+n-i}]_{i,j=1}^{l(\lambda )}}{\prod _{i<j}(x_{i}-x_{j})}}} where (y|a)k = (y − a1) ... (y − ak). It is clear that if we let ai = 0 for all i, we recover the usual Schur polynomial sλ. The double Schur polynomials and the factorial Schur polynomials in n variables are related via the identity sλ(x||a) = sλ(x|u) where an−i+1 = ui. === Other generalizations === There are numerous generalizations of Schur polynomials: Hall–Littlewood polynomials Shifted Schur polynomials Flagged Schur polynomials Schubert polynomials Stanley symmetric functions (also known as stable Schubert polynomials) Key polynomials (also known as Demazure characters) Quasi-symmetric Schur polynomials Row-strict Schur polynomials Jack polynomials Modular Schur polynomials Loop Schur functions Macdonald polynomials Schur polynomials for the symplectic and orthogonal group. k-Schur functions Grothendieck polynomials (K-theoretical analogue of Schur polynomials) LLT polynomials == See also == Schur functor Littlewood–Richardson rule, where one finds some identities involving Schur polynomials. == References == Macdonald, I. G. (1995). Symmetric functions and Hall polynomials. Oxford Mathematical Monographs (2nd ed.). Oxford University Press. ISBN 978-0-19-853489-1. MR 1354144. Sagan, Bruce E. 
(2001) [1994], "Schur functions in algebraic combinatorics", Encyclopedia of Mathematics, EMS Press Sturmfels, Bernd (1993). Algorithms in Invariant Theory. Springer. ISBN 978-0-387-82445-1. Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
|
Wikipedia:Schwartz–Zippel lemma#0
|
In mathematics, the Schwartz–Zippel lemma (also called the DeMillo–Lipton–Schwartz–Zippel lemma) is a tool commonly used in probabilistic polynomial identity testing. Identity testing is the problem of determining whether a given multivariate polynomial is the 0-polynomial, the polynomial that ignores all its variables and always returns zero. The lemma states that evaluating a nonzero polynomial on inputs chosen randomly from a large-enough set is likely to find an input that produces a nonzero output. It was discovered independently by Jack Schwartz, Richard Zippel, and Richard DeMillo and Richard J. Lipton, although DeMillo and Lipton's version was shown a year prior to Schwartz and Zippel's result. The finite field version of this bound was proved by Øystein Ore in 1922. == Statement and proof of the lemma == Theorem 1 (Schwartz, Zippel). Let P ∈ R [ x 1 , x 2 , … , x n ] {\displaystyle P\in R[x_{1},x_{2},\ldots ,x_{n}]} be a non-zero polynomial of total degree d ≥ 0 over an integral domain R. Let S be a finite subset of R and let r1, r2, ..., rn be selected at random independently and uniformly from S. Then Pr [ P ( r 1 , r 2 , … , r n ) = 0 ] ≤ d | S | . {\displaystyle \Pr[P(r_{1},r_{2},\ldots ,r_{n})=0]\leq {\frac {d}{|S|}}.} Equivalently, the lemma states that for any finite subset S of R, if Z(P) is the zero set of P, then | Z ( P ) ∩ S n | ≤ d ⋅ | S | n − 1 . {\displaystyle |Z(P)\cap S^{n}|\leq d\cdot |S|^{n-1}.} Proof. The proof is by mathematical induction on n. For n = 1, P can have at most d roots, since a nonzero univariate polynomial of degree d over an integral domain has at most d roots. This gives us the base case. Now, assume that the theorem holds for all polynomials in n − 1 variables. We can then consider P to be a polynomial in x1 by writing it as P ( x 1 , … , x n ) = ∑ i = 0 d x 1 i P i ( x 2 , … , x n ) . {\displaystyle P(x_{1},\dots ,x_{n})=\sum _{i=0}^{d}x_{1}^{i}P_{i}(x_{2},\dots ,x_{n}).} Since P is not identically 0, there is some i such that P i {\displaystyle P_{i}} is not identically 0.
Take the largest such i. Then deg P i ≤ d − i {\displaystyle \deg P_{i}\leq d-i} , since the degree of x 1 i P i {\displaystyle x_{1}^{i}P_{i}} is at most d. Now we randomly pick r 2 , … , r n {\displaystyle r_{2},\dots ,r_{n}} from S. By the induction hypothesis, Pr [ P i ( r 2 , … , r n ) = 0 ] ≤ d − i | S | . {\displaystyle \Pr[P_{i}(r_{2},\ldots ,r_{n})=0]\leq {\frac {d-i}{|S|}}.} If P i ( r 2 , … , r n ) ≠ 0 {\displaystyle P_{i}(r_{2},\ldots ,r_{n})\neq 0} , then P ( x 1 , r 2 , … , r n ) {\displaystyle P(x_{1},r_{2},\ldots ,r_{n})} is of degree i (and thus not identically zero) so Pr [ P ( r 1 , r 2 , … , r n ) = 0 | P i ( r 2 , … , r n ) ≠ 0 ] ≤ i | S | . {\displaystyle \Pr[P(r_{1},r_{2},\ldots ,r_{n})=0|P_{i}(r_{2},\ldots ,r_{n})\neq 0]\leq {\frac {i}{|S|}}.} If we denote the event P ( r 1 , r 2 , … , r n ) = 0 {\displaystyle P(r_{1},r_{2},\ldots ,r_{n})=0} by A, the event P i ( r 2 , … , r n ) = 0 {\displaystyle P_{i}(r_{2},\ldots ,r_{n})=0} by B, and the complement of B by B c {\displaystyle B^{c}} , we have Pr [ A ] = Pr [ A ∩ B ] + Pr [ A ∩ B c ] = Pr [ B ] Pr [ A | B ] + Pr [ B c ] Pr [ A | B c ] ≤ Pr [ B ] + Pr [ A | B c ] ≤ d − i | S | + i | S | = d | S | {\displaystyle {\begin{aligned}\Pr[A]&=\Pr[A\cap B]+\Pr[A\cap B^{c}]\\&=\Pr[B]\Pr[A|B]+\Pr[B^{c}]\Pr[A|B^{c}]\\&\leq \Pr[B]+\Pr[A|B^{c}]\\&\leq {\frac {d-i}{|S|}}+{\frac {i}{|S|}}={\frac {d}{|S|}}\end{aligned}}} == Applications == The importance of the Schwartz–Zippel theorem follows from the variety of algorithmic problems that can be reduced to polynomial identity testing. === Zero testing === For example, is ( x 1 + 3 x 2 − x 3 ) ( 3 x 1 + x 4 − 1 ) ⋯ ( x 7 − x 2 ) ≡ 0 ? {\displaystyle (x_{1}+3x_{2}-x_{3})(3x_{1}+x_{4}-1)\cdots (x_{7}-x_{2})\equiv 0\ ?} To solve this, we can multiply it out and check that all the coefficients are 0. However, this takes exponential time.
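The lemma suggests a far cheaper randomized alternative: evaluate the polynomial, treated as a black box, at random points. A minimal sketch (the helper name and the sample set {0, ..., 10^9 − 1} are illustrative choices):

```python
import random

def probably_zero(poly, nvars, degree, trials=20, sample=10**9):
    """Randomized identity test: evaluate the black-box polynomial at
    points drawn uniformly from S = {0, ..., sample - 1}.  Any nonzero
    value proves the polynomial is not identically zero; by the
    Schwartz-Zippel lemma a nonzero polynomial vanishes at a random
    point with probability at most degree / sample, so many all-zero
    trials make "identically zero" overwhelmingly likely."""
    for _ in range(trials):
        point = [random.randrange(sample) for _ in range(nvars)]
        if poly(*point) != 0:
            return False  # certificate: definitely nonzero
    return True           # zero with high probability
```

For example, a shortened three-factor version of the product above is rejected after a single evaluation, while an expression such as (x1 + x2)^2 − x1^2 − 2x1x2 − x2^2 passes every trial.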
In general, a polynomial can be algebraically represented by an arithmetic formula or circuit. === Comparison of two polynomials === Given a pair of polynomials p 1 ( x ) {\displaystyle p_{1}(x)} and p 2 ( x ) {\displaystyle p_{2}(x)} , is p 1 ( x ) ≡ p 2 ( x ) {\displaystyle p_{1}(x)\equiv p_{2}(x)} ? This problem can be solved by reducing it to the problem of polynomial identity testing. It is equivalent to checking if [ p 1 ( x ) − p 2 ( x ) ] ≡ 0. {\displaystyle [p_{1}(x)-p_{2}(x)]\equiv 0.} Hence if we can determine that p ( x ) ≡ 0 , {\displaystyle p(x)\equiv 0,} where p ( x ) = p 1 ( x ) − p 2 ( x ) , {\displaystyle p(x)=p_{1}(x)\;-\;p_{2}(x),} then we can determine whether the two polynomials are equivalent. Comparison of polynomials has applications for branching programs (also called binary decision diagrams). A read-once branching program can be represented by a multilinear polynomial which computes (over any field) on {0,1}-inputs the same Boolean function as the branching program, and two branching programs compute the same function if and only if the corresponding polynomials are equal. Thus, identity of Boolean functions computed by read-once branching programs can be reduced to polynomial identity testing. Comparison of two polynomials (and therefore testing polynomial identities) also has applications in 2D-compression, where the problem of finding the equality of two 2D-texts A and B is reduced to the problem of comparing equality of two polynomials p A ( x , y ) {\displaystyle p_{A}(x,y)} and p B ( x , y ) {\displaystyle p_{B}(x,y)} . === Primality testing === Given n ∈ N {\displaystyle n\in \mathbb {N} } , is n {\displaystyle n} a prime number? A simple randomized algorithm developed by Manindra Agrawal and Somenath Biswas can determine probabilistically whether n {\displaystyle n} is prime and uses polynomial identity testing to do so. 
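Their test rests on the identity (1 + z)^n ≡ 1 + z^n (mod n), stated below, which holds exactly when n is prime. As a naive illustration (a sketch only: it expands every coefficient and so takes time exponential in the bit length of n, which is precisely what Agrawal and Biswas avoid):

```python
from math import comb

def prime_by_identity(n):
    """Brute-force check of (1 + z)^n = 1 + z^n (mod n): expand the left
    side and test that every middle binomial coefficient C(n, k),
    0 < k < n, is divisible by n.  Correct, but exponential in the size
    of n -- the actual algorithm instead reduces the polynomial modulo
    a random monic polynomial of small degree."""
    if n < 2:
        return False
    return all(comb(n, k) % n == 0 for k in range(1, n))
```

For small n this agrees with the usual list of primes: 2, 3, 5, 7, 11, 13, ...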
They propose that all prime numbers n (and only prime numbers) satisfy the following polynomial identity: ( 1 + z ) n = 1 + z n ( mod n ) . {\displaystyle (1+z)^{n}=1+z^{n}({\mbox{mod}}\;n).} This is a consequence of the Frobenius endomorphism. Let P n ( z ) = ( 1 + z ) n − 1 − z n . {\displaystyle {\mathcal {P}}_{n}(z)=(1+z)^{n}-1-z^{n}.} Then P n ( z ) = 0 ( mod n ) {\displaystyle {\mathcal {P}}_{n}(z)=0\;({\mbox{mod}}\;n)} iff n is prime. The proof can be found in [4]. However, since this polynomial has degree n {\displaystyle n} , where n {\displaystyle n} may or may not be a prime, the Schwartz–Zippel method would not work. Agrawal and Biswas use a more sophisticated technique, which divides P n {\displaystyle {\mathcal {P}}_{n}} by a random monic polynomial of small degree. Prime numbers are used in a number of applications such as hash table sizing, pseudorandom number generators and in key generation for cryptography. Therefore, finding very large prime numbers (on the order of (at least) 10 350 ≈ 2 1024 {\displaystyle 10^{350}\approx 2^{1024}} ) becomes very important and efficient primality testing algorithms are required. === Perfect matching === Let G = ( V , E ) {\displaystyle G=(V,E)} be a graph of n vertices where n is even. Does G contain a perfect matching? Theorem 2 (Tutte 1947): A Tutte matrix determinant is not a 0-polynomial if and only if there exists a perfect matching. A subset D of E is called a matching if each vertex in V is incident with at most one edge in D. A matching is perfect if each vertex in V has exactly one edge that is incident to it in D. 
Create a Tutte matrix A in the following way: A = [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a n 1 a n 2 … a n n ] {\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1{\mathit {n}}}\\a_{21}&a_{22}&\cdots &a_{2{\mathit {n}}}\\\vdots &\vdots &\ddots &\vdots \\a_{{\mathit {n}}1}&a_{{\mathit {n}}2}&\ldots &a_{\mathit {nn}}\end{bmatrix}}} where a i j = { x i j if ( i , j ) ∈ E and i < j − x j i if ( i , j ) ∈ E and i > j 0 otherwise . {\displaystyle a_{ij}={\begin{cases}x_{ij}\;\;{\mbox{if}}\;(i,j)\in E{\mbox{ and }}i<j\\-x_{ji}\;\;{\mbox{if}}\;(i,j)\in E{\mbox{ and }}i>j\\0\;\;\;\;{\mbox{otherwise}}.\end{cases}}} The Tutte matrix determinant (in the variables xij, i < j {\displaystyle i<j} ) is then defined as the determinant of this skew-symmetric matrix which coincides with the square of the pfaffian of the matrix A and is non-zero (as polynomial) if and only if a perfect matching exists. One can then use polynomial identity testing to find whether G contains a perfect matching. There exists a deterministic black-box algorithm for graphs with polynomially bounded permanents (Grigoriev & Karpinski 1987). In the special case of a balanced bipartite graph on n = m + m {\displaystyle n=m+m} vertices this matrix takes the form of a block matrix A = ( 0 X − X t 0 ) {\displaystyle A={\begin{pmatrix}0&X\\-X^{t}&0\end{pmatrix}}} if the first m rows (resp. columns) are indexed with the first subset of the bipartition and the last m rows with the complementary subset. In this case the pfaffian coincides with the usual determinant of the m × m matrix X (up to sign). Here X is the Edmonds matrix. === Determinant of a matrix with polynomial entries === Let p ( x 1 , x 2 , … , x n ) {\displaystyle p(x_{1},x_{2},\ldots ,x_{n})} be the determinant of the polynomial matrix. Currently, there is no known sub-exponential time algorithm that can solve this problem deterministically. 
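In the randomized version of this test, one substitutes random field elements for the indeterminates xij and computes an ordinary determinant; the following is a sketch (the prime modulus and trial count are arbitrary illustrative choices):

```python
import random

def det_mod(a, p):
    # Gaussian elimination over GF(p); returns det(a) mod p
    a = [row[:] for row in a]
    n, d = len(a), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if a[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            a[c], a[piv] = a[piv], a[c]
            d = -d
        d = d * a[c][c] % p
        inv = pow(a[c][c], p - 2, p)   # Fermat inverse, p prime
        for r in range(c + 1, n):
            f = a[r][c] * inv % p
            for k in range(c, n):
                a[r][k] = (a[r][k] - f * a[c][k]) % p
    return d % p

def has_perfect_matching(n, edges, trials=5, p=(1 << 61) - 1):
    """Substitute random values for the x_ij of the Tutte matrix and test
    whether the determinant is nonzero mod the prime p.  A nonzero value
    certifies a perfect matching; each trial misses an existing matching
    with probability at most n/p (Schwartz-Zippel: the determinant has
    total degree n)."""
    for _ in range(trials):
        a = [[0] * n for _ in range(n)]
        for i, j in edges:
            v = random.randrange(1, p)
            a[i][j], a[j][i] = v, p - v   # skew-symmetric assignment
        if det_mod(a, p) != 0:
            return True
    return False
```

On a path with four vertices the test reports a matching; on a triangle or a three-edge star it correctly reports none (for those graphs the determinant polynomial is identically zero).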
However, there are randomized polynomial algorithms whose analysis requires a bound on the probability that a non-zero polynomial will have roots at randomly selected test points. == Notes == == References == == External links == The Curious History of the Schwartz–Zippel Lemma, by Richard J. Lipton
|
Wikipedia:Science, Technology, Engineering and Mathematics Network#0
|
The Science, Technology, Engineering and Mathematics Network (STEMNET) is an educational charity in the United Kingdom that seeks to encourage participation at school and college in science and engineering-related subjects (science, technology, engineering, and mathematics) and (eventually) work. == History == It is based at Woolgate Exchange near Moorgate tube station in London and was established in 1996. The chief executive is Kirsten Bodley. The STEMNET offices are housed within the Engineering Council. == Function == Its chief aim is to interest children in science, technology, engineering and mathematics. The hope is that primary school children develop an interest in these subjects, that secondary school pupils then choose science A levels, and that these in turn lead to science careers. It supports the After School Science and Engineering Clubs at schools. There are also nine regional Science Learning Centres. == STEM ambassadors == To promote STEM subjects and encourage young people to take up jobs in these areas, STEMNET has around 30,000 ambassadors across the UK. These come from a wide selection of the STEM industries and include TV personalities like Rob Bell. == Funding == STEMNET used to receive funding from the Department for Education and Skills. Since June 2007, it has received funding from the Department for Children, Schools and Families and the Department for Innovation, Universities and Skills, since STEMNET sits on the chronological dividing point (age 16) between the remits of the two new departments. == See also == The WISE Campaign Engineering and Physical Sciences Research Council National Centre for Excellence in Teaching Mathematics Association for Science Education Glossary of areas of mathematics Glossary of astronomy Glossary of biology Glossary of chemistry Glossary of engineering Glossary of physics == References == == External links == Official website DIUS page STEM Partnerships (extensive background educational information)
|
Wikipedia:Second derivative#0
|
In calculus, the second derivative, or the second-order derivative, of a function f is the derivative of the derivative of f. Informally, the second derivative can be phrased as "the rate of change of the rate of change"; for example, the second derivative of the position of an object with respect to time is the instantaneous acceleration of the object, or the rate at which the velocity of the object is changing with respect to time. In Leibniz notation: a = d v d t = d 2 x d t 2 , {\displaystyle a={\frac {dv}{dt}}={\frac {d^{2}x}{dt^{2}}},} where a is acceleration, v is velocity, t is time, x is position, and d is the instantaneous "delta" or change. The last expression d 2 x d t 2 {\displaystyle {\tfrac {d^{2}x}{dt^{2}}}} is the second derivative of position (x) with respect to time. On the graph of a function, the second derivative corresponds to the curvature or concavity of the graph. The graph of a function with a positive second derivative is upwardly concave, while the graph of a function with a negative second derivative curves in the opposite way. == Second derivative power rule == The power rule for the first derivative, if applied twice, will produce the second derivative power rule as follows: d 2 d x 2 x n = d d x d d x x n = d d x ( n x n − 1 ) = n d d x x n − 1 = n ( n − 1 ) x n − 2 . {\displaystyle {\frac {d^{2}}{dx^{2}}}x^{n}={\frac {d}{dx}}{\frac {d}{dx}}x^{n}={\frac {d}{dx}}\left(nx^{n-1}\right)=n{\frac {d}{dx}}x^{n-1}=n(n-1)x^{n-2}.} == Notation == The second derivative of a function f ( x ) {\displaystyle f(x)} is usually denoted f ″ ( x ) {\displaystyle f''(x)} . That is: f ″ = ( f ′ ) ′ {\displaystyle f''=\left(f'\right)'} When using Leibniz's notation for derivatives, the second derivative of a dependent variable y with respect to an independent variable x is written d 2 y d x 2 . {\displaystyle {\frac {d^{2}y}{dx^{2}}}.} This notation is derived from the following formula: d 2 y d x 2 = d d x ( d y d x ) . 
{\displaystyle {\frac {d^{2}y}{dx^{2}}}\,=\,{\frac {d}{dx}}\left({\frac {dy}{dx}}\right).} == Example == Given the function f ( x ) = x 3 , {\displaystyle f(x)=x^{3},} the derivative of f is the function f ′ ( x ) = 3 x 2 . {\displaystyle f'(x)=3x^{2}.} The second derivative of f is the derivative of f ′ {\displaystyle f'} , namely f ″ ( x ) = 6 x . {\displaystyle f''(x)=6x.} == Relation to the graph == === Concavity === The second derivative of a function f can be used to determine the concavity of the graph of f. A function whose second derivative is positive is said to be concave up (also referred to as convex), meaning that the tangent line near the point where it touches the function will lie below the graph of the function. Similarly, a function whose second derivative is negative will be concave down (sometimes simply called concave), and its tangent line will lie above the graph of the function near the point of contact. === Inflection points === If the second derivative of a function changes sign, the graph of the function will switch from concave down to concave up, or vice versa. A point where this occurs is called an inflection point. Assuming the second derivative is continuous, it must take a value of zero at any inflection point, although not every point where the second derivative is zero is necessarily a point of inflection. === Second derivative test === The relation between the second derivative and the graph can be used to test whether a stationary point for a function (i.e., a point where f ′ ( x ) = 0 {\displaystyle f'(x)=0} ) is a local maximum or a local minimum. Specifically, If f ″ ( x ) < 0 {\displaystyle f''(x)<0} , then f {\displaystyle f} has a local maximum at x {\displaystyle x} . If f ″ ( x ) > 0 {\displaystyle f''(x)>0} , then f {\displaystyle f} has a local minimum at x {\displaystyle x} . 
If f ″ ( x ) = 0 {\displaystyle f''(x)=0} , the second derivative test says nothing about the point x {\displaystyle x} , a possible inflection point. The reason the second derivative produces these results can be seen by way of a real-world analogy. Consider a vehicle that at first is moving forward at a great velocity, but with a negative acceleration. Clearly, the position of the vehicle at the point where the velocity reaches zero will be the maximum distance from the starting position – after this time, the velocity will become negative and the vehicle will reverse. The same is true for the minimum, with a vehicle that at first has a very negative velocity but positive acceleration. == Limit == It is possible to write a single limit for the second derivative: f ″ ( x ) = lim h → 0 f ( x + h ) − 2 f ( x ) + f ( x − h ) h 2 . {\displaystyle f''(x)=\lim _{h\to 0}{\frac {f(x+h)-2f(x)+f(x-h)}{h^{2}}}.} The limit is called the second symmetric derivative. The second symmetric derivative may exist even when the (usual) second derivative does not. The expression on the right can be written as a difference quotient of difference quotients: f ( x + h ) − 2 f ( x ) + f ( x − h ) h 2 = f ( x + h ) − f ( x ) h − f ( x ) − f ( x − h ) h h . {\displaystyle {\frac {f(x+h)-2f(x)+f(x-h)}{h^{2}}}={\frac {{\dfrac {f(x+h)-f(x)}{h}}-{\dfrac {f(x)-f(x-h)}{h}}}{h}}.} This limit can be viewed as a continuous version of the second difference for sequences. However, the existence of the above limit does not mean that the function f {\displaystyle f} has a second derivative. The limit above just gives a possibility for calculating the second derivative—but does not provide a definition. A counterexample is the sign function sgn ( x ) {\displaystyle \operatorname {sgn}(x)} , which is defined as: sgn ( x ) = { − 1 if x < 0 , 0 if x = 0 , 1 if x > 0. 
{\displaystyle \operatorname {sgn}(x)={\begin{cases}-1&{\text{if }}x<0,\\0&{\text{if }}x=0,\\1&{\text{if }}x>0.\end{cases}}} The sign function is not continuous at zero, and therefore the second derivative for x = 0 {\displaystyle x=0} does not exist. But the above limit exists for x = 0 {\displaystyle x=0} : lim h → 0 sgn ( 0 + h ) − 2 sgn ( 0 ) + sgn ( 0 − h ) h 2 = lim h → 0 sgn ( h ) − 2 ⋅ 0 + sgn ( − h ) h 2 = lim h → 0 sgn ( h ) + ( − sgn ( h ) ) h 2 = lim h → 0 0 h 2 = 0. {\displaystyle {\begin{aligned}\lim _{h\to 0}{\frac {\operatorname {sgn}(0+h)-2\operatorname {sgn}(0)+\operatorname {sgn}(0-h)}{h^{2}}}&=\lim _{h\to 0}{\frac {\operatorname {sgn}(h)-2\cdot 0+\operatorname {sgn}(-h)}{h^{2}}}\\&=\lim _{h\to 0}{\frac {\operatorname {sgn}(h)+(-\operatorname {sgn}(h))}{h^{2}}}=\lim _{h\to 0}{\frac {0}{h^{2}}}=0.\end{aligned}}} == Quadratic approximation == Just as the first derivative is related to linear approximations, the second derivative is related to the best quadratic approximation for a function f. This is the quadratic function whose first and second derivatives are the same as those of f at a given point. The formula for the best quadratic approximation to a function f around the point x = a is f ( x ) ≈ f ( a ) + f ′ ( a ) ( x − a ) + 1 2 f ″ ( a ) ( x − a ) 2 . {\displaystyle f(x)\approx f(a)+f'(a)(x-a)+{\tfrac {1}{2}}f''(a)(x-a)^{2}.} This quadratic approximation is the second-order Taylor polynomial for the function centered at x = a. == Eigenvalues and eigenvectors of the second derivative == For many combinations of boundary conditions explicit formulas for eigenvalues and eigenvectors of the second derivative can be obtained. 
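The homogeneous Dirichlet case described next can be checked numerically with the symmetric difference quotient from the Limit section; a small sketch (the step size and test point are arbitrary choices):

```python
import math

def second_diff(f, x, h=1e-4):
    # symmetric second difference quotient from the Limit section
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# homogeneous Dirichlet conditions on [0, L]: the j-th eigenfunction is
# v_j(x) = sqrt(2/L) * sin(j*pi*x/L) with eigenvalue -(j*pi/L)**2
L, j = 2.0, 1
lam = -(j * math.pi / L) ** 2
v = lambda x: math.sqrt(2 / L) * math.sin(j * math.pi * x / L)

# v'' = lam * v at an interior point, up to discretization error
err = abs(second_diff(v, 0.7) - lam * v(0.7))
```

The residual `err` is on the order of the truncation and round-off error of the quotient, far below the magnitudes involved.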
For example, assuming x ∈ [ 0 , L ] {\displaystyle x\in [0,L]} and homogeneous Dirichlet boundary conditions (i.e., v ( 0 ) = v ( L ) = 0 {\displaystyle v(0)=v(L)=0} where v is the eigenvector), the eigenvalues are λ j = − j 2 π 2 L 2 {\displaystyle \lambda _{j}=-{\tfrac {j^{2}\pi ^{2}}{L^{2}}}} and the corresponding eigenvectors (also called eigenfunctions) are v j ( x ) = 2 L sin ( j π x L ) {\displaystyle v_{j}(x)={\sqrt {\tfrac {2}{L}}}\sin \left({\tfrac {j\pi x}{L}}\right)} . Here, v j ″ ( x ) = λ j v j ( x ) {\displaystyle v''_{j}(x)=\lambda _{j}v_{j}(x)} , for j = 1 , … , ∞ {\displaystyle j=1,\ldots ,\infty } . For other well-known cases, see Eigenvalues and eigenvectors of the second derivative. == Generalization to higher dimensions == === The Hessian === The second derivative generalizes to higher dimensions through the notion of second partial derivatives. For a function f: R3 → R, these include the three second-order partials ∂ 2 f ∂ x 2 , ∂ 2 f ∂ y 2 , and ∂ 2 f ∂ z 2 {\displaystyle {\frac {\partial ^{2}f}{\partial x^{2}}},\;{\frac {\partial ^{2}f}{\partial y^{2}}},{\text{ and }}{\frac {\partial ^{2}f}{\partial z^{2}}}} and the mixed partials ∂ 2 f ∂ x ∂ y , ∂ 2 f ∂ x ∂ z , and ∂ 2 f ∂ y ∂ z . {\displaystyle {\frac {\partial ^{2}f}{\partial x\,\partial y}},\;{\frac {\partial ^{2}f}{\partial x\,\partial z}},{\text{ and }}{\frac {\partial ^{2}f}{\partial y\,\partial z}}.}} If the second partial derivatives are continuous, the mixed partials do not depend on the order of differentiation (the symmetry of second derivatives), and these derivatives fit together into a symmetric matrix known as the Hessian. The eigenvalues of this matrix can be used to implement a multivariable analogue of the second derivative test. (See also the second partial derivative test.) === The Laplacian === Another common generalization of the second derivative is the Laplacian. This is the differential operator ∇ 2 {\displaystyle \nabla ^{2}} (or Δ {\displaystyle \Delta } ) defined by ∇ 2 f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 + ∂ 2 f ∂ z 2 .
{\displaystyle \nabla ^{2}f={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}.} The Laplacian of a function is equal to the divergence of the gradient, and the trace of the Hessian matrix. == See also == Chirpyness, second derivative of instantaneous phase Finite difference, used to approximate second derivative Second partial derivative test Symmetry of second derivatives == References == == Further reading == === Print === Anton, Howard; Bivens, Irl; Davis, Stephen (February 2, 2005), Calculus: Early Transcendentals Single and Multivariable (8th ed.), New York: Wiley, ISBN 978-0-471-47244-5 Apostol, Tom M. (June 1967), Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra, vol. 1 (2nd ed.), Wiley, ISBN 978-0-471-00005-1 Apostol, Tom M. (June 1969), Calculus, Vol. 2: Multi-Variable Calculus and Linear Algebra with Applications, vol. 1 (2nd ed.), Wiley, ISBN 978-0-471-00007-5 Eves, Howard (January 2, 1990), An Introduction to the History of Mathematics (6th ed.), Brooks Cole, ISBN 978-0-03-029558-4 Larson, Ron; Hostetler, Robert P.; Edwards, Bruce H. (February 28, 2006), Calculus: Early Transcendental Functions (4th ed.), Houghton Mifflin Company, ISBN 978-0-618-60624-5 Spivak, Michael (September 1994), Calculus (3rd ed.), Publish or Perish, ISBN 978-0-914098-89-8 Stewart, James (December 24, 2002), Calculus (5th ed.), Brooks Cole, ISBN 978-0-534-39339-7 Thompson, Silvanus P. (September 8, 1998), Calculus Made Easy (Revised, Updated, Expanded ed.), New York: St. Martin's Press, ISBN 978-0-312-18548-0 === Online books === Crowell, Benjamin (2003), Calculus Garrett, Paul (2004), Notes on First-Year Calculus Hussain, Faraz (2006), Understanding Calculus Keisler, H. 
Jerome (2000), Elementary Calculus: An Approach Using Infinitesimals Mauch, Sean (2004), Unabridged Version of Sean's Applied Math Book, archived from the original on 2006-04-15 Sloughter, Dan (2000), Difference Equations to Differential Equations Strang, Gilbert (1991), Calculus Stroyan, Keith D. (1997), A Brief Introduction to Infinitesimal Calculus, archived from the original on 2005-09-11 Wikibooks, Calculus == External links == Discrete Second Derivative from Unevenly Spaced Points
|
Wikipedia:Sedleian Professor of Natural Philosophy#0
|
The Sedleian Professor of Natural Philosophy is the name of a chair at the Mathematical Institute of the University of Oxford. == Overview == The Sedleian Chair was founded by Sir William Sedley who, by his will dated 20 October 1618, left the sum of £2,000 to the University of Oxford for purchase of lands for its endowment. Sedley's bequest took effect in 1621 with the purchase of an estate at Waddesdon in Buckinghamshire to produce the necessary income. It is regarded as the oldest of Oxford's scientific chairs. Holders of the Sedleian Professorship have, since the mid 19th century, worked in a range of areas of applied mathematics and mathematical physics. They are simultaneously elected to fellowships at Queen's College, Oxford. The Sedleian Professors in the past century have been Augustus Love (1899-1940), who was distinguished for his work in the mathematical theory of elasticity, Sydney Chapman (1946-1953), who is renowned for his contributions to the kinetic theory of gases and solar-terrestrial physics, George Temple (1953-1968), who made significant contributions to mathematical physics and the theory of generalized functions, Brooke Benjamin (1979-1995), who did highly influential work in the areas of mathematical analysis and fluid mechanics, and Sir John Ball (1996-2019), who is distinguished for his work in the mathematical theory of elasticity, materials science, the calculus of variations, and infinite-dimensional dynamical systems. == List of Sedleian Professors == == Notes == == References == == Bibliography == Oxford Dictionary of National Biography, articles on Lapworth, Edwards, Wallis, Millington, Browne, Hornsby, Cooke, Price, Love, Chapman, Temple, Brook Benjamin.
|
Wikipedia:Segre classification#0
|
The Segre classification is an algebraic classification of rank two symmetric tensors. It was proposed by the Italian mathematician Corrado Segre in 1884. The resulting types are then known as Segre types. It is most commonly applied to the energy–momentum tensor (or the Ricci tensor) and primarily finds application in the classification of exact solutions in general relativity. == See also == Corrado Segre Jordan normal form Petrov classification == References == Stephani, Hans; Kramer, Dietrich; MacCallum, Malcolm; Hoenselaers, Cornelius & Herlt, Eduard (2003). Exact Solutions of Einstein's Field Equations. Cambridge: Cambridge University Press. ISBN 0-521-46136-7. See section 5.1 for the Segre classification. Segre, C. (1884). "Sulla teoria e sulla classificazione delle omografie in uno spazio lineare ad uno numero qualunque di dimensioni". Memorie della R. Accademia dei Lincei. 3a: 127.
|
Wikipedia:Seidel adjacency matrix#0
|
In mathematics, in graph theory, the Seidel adjacency matrix of a simple undirected graph G is a symmetric matrix with a row and column for each vertex, having 0 on the diagonal, −1 for positions whose rows and columns correspond to adjacent vertices, and +1 for positions corresponding to non-adjacent vertices. It is also called the Seidel matrix or – its original name – the (−1,1,0)-adjacency matrix. It can be interpreted as the result of subtracting the adjacency matrix of G from the adjacency matrix of the complement of G. The multiset of eigenvalues of this matrix is called the Seidel spectrum. The Seidel matrix was introduced by J. H. van Lint and Johan Jacob Seidel in 1966 and extensively exploited by Seidel and coauthors. The Seidel matrix of G is also the adjacency matrix of a signed complete graph KG in which the edges of G are negative and the edges not in G are positive. It is also the adjacency matrix of the two-graph associated with G and KG. The eigenvalue properties of the Seidel matrix are valuable in the study of strongly regular graphs. == References == van Lint, J. H., and Seidel, J. J. (1966), Equilateral point sets in elliptic geometry. Indagationes Mathematicae, vol. 28 (= Proc. Kon. Ned. Aka. Wet. Ser. A, vol. 69), pp. 335–348. Seidel, J. J. (1976), A survey of two-graphs. In: Colloquio Internazionale sulle Teorie Combinatorie (Proceedings, Rome, 1973), vol. I, pp. 481–511. Atti dei Convegni Lincei, No. 17. Accademia Nazionale dei Lincei, Rome. Seidel, J. J. (1991), ed. D.G. Corneil and R. Mathon, Geometry and Combinatorics: Selected Works of J. J. Seidel. Boston: Academic Press. Many of the articles involve the Seidel matrix. Seidel, J. J. (1968), Strongly Regular Graphs with (−1,1,0) Adjacency Matrix Having Eigenvalue 3. Linear Algebra and its Applications 1, 281–298.
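The definition translates directly into code; a minimal sketch (using the observation above that the Seidel matrix equals J − I − 2A for a 0/1 adjacency matrix A):

```python
def seidel_matrix(adj):
    """Seidel matrix of a simple graph given its 0/1 adjacency matrix:
    0 on the diagonal, -1 for adjacent pairs, +1 for distinct
    non-adjacent pairs (equivalently J - I - 2A)."""
    n = len(adj)
    return [[0 if i == j else 1 - 2 * adj[i][j] for j in range(n)]
            for i in range(n)]
```

For the three-vertex path this gives rows (0, −1, 1), (−1, 0, −1), (1, −1, 0).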
|
Wikipedia:Seki Takakazu#0
|
Seki Takakazu (関 孝和, c. March 1642 – December 5, 1708), also known as Seki Kōwa (関 孝和), was a mathematician, samurai, and Kofu feudal officer of the early Edo period of Japan. Seki laid foundations for the subsequent development of Japanese mathematics, known as wasan from c. 1870. He has been described as "Japan's Newton". He created a new algebraic notation system and, motivated by astronomical computations, did work on infinitesimal calculus and Diophantine equations. Although he was a contemporary of the German mathematician and philosopher Gottfried Leibniz and the British physicist and mathematician Isaac Newton, Seki's work was independent. His successors later developed a school dominant in Japanese mathematics until the end of the Edo period. While it is not clear how much of the achievements of wasan are Seki's, since many of them appear only in writings of his pupils, some of the results parallel or anticipate those discovered in Europe. For example, he is credited with the discovery of Bernoulli numbers. The resultant and determinant (the first in 1683, the complete version no later than 1710) are attributed to him. Seki also calculated the value of pi correct to the 10th decimal place, using what is now called Aitken's delta-squared process, rediscovered later by Alexander Aitken. Seki was influenced by Japanese mathematics books such as the Jinkōki. == Biography == Not much is known about Seki's personal life. His birthplace has been indicated as either Fujioka in Gunma Prefecture or Edo. Estimates of his birth date range from 1635 to 1643. Takakazu Seki was the second son of Uchiyama Shichibei Eimei, a samurai who served Tokugawa Tadanaga (徳川忠長); his mother was the daughter of Yuasa Yoemon, a servant of Ando Tsushima Mamoru. In the 16th year of Kan'ei (寛永), 1639, Eimei was Tenshuban of Edo Castle (江戸城) and a vassal of Tokugawa Ieyasu (徳川家康).
Seki was born to the Uchiyama clan, a subject of Ko-shu han, and adopted into the Seki family, a subject of the shōgun. In the first year of Hoei (宝永), 1704, Takakazu was hatamoto (旗本). While in Ko-shu han, he was involved in a surveying project to produce a reliable map of his employer's land. He spent many years studying 13th-century Chinese calendars to replace the less accurate one used in Japan at that time. == Career == === Chinese mathematical roots === His mathematics (and wasan as a whole) was based on mathematical knowledge accumulated from the 13th to 15th centuries. The material in these works consisted of algebra with numerical methods, polynomial interpolation and its applications, and indeterminate integer equations. Seki's work was more or less based on and related to these known methods. Chinese algebraists discovered numerical evaluation (Horner's method, re-established by William George Horner in the 19th century) of algebraic equations of arbitrary degree with real coefficients. By using the Pythagorean theorem, they reduced geometric problems to algebra systematically. The number of unknowns in an equation was, however, quite limited. They represented formulas using arrays of numbers; for example, ( a b c ) {\displaystyle (a\ b\ c)} for a x 2 + b x + c {\displaystyle ax^{2}+bx+c} . Later, they developed a method that uses two-dimensional arrays, representing four variables at most, but the scope of this method was limited. Accordingly, a target of Seki and his contemporary Japanese mathematicians was the development of general multivariable algebraic equations and elimination theory. In the Chinese approach to polynomial interpolation, the motivation was to predict the motion of celestial bodies from observed data. The method was also applied to find various mathematical formulas. Seki learned this technique, most likely, through his close examination of Chinese calendars.
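The Horner-style evaluation scheme mentioned above reduces to a single nested recurrence; a minimal sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial with coefficients listed from highest to
    lowest degree, e.g. [2, -3, 1] -> 2x^2 - 3x + 1, via the nested
    form ((...(a_n x + a_(n-1)) x + ...) x + a_0)."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc
```

For instance, horner([2, -3, 1], 4) evaluates 2·4² − 3·4 + 1 = 21 using only additions and multiplications; the Chinese root-extraction procedure builds digit-by-digit approximation on top of this kind of evaluation.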
=== Competing with contemporaries === In 1671, Sawaguchi Kazuyuki (沢口 一之), a pupil of Hashimoto Masakazu (橋本 正数) in Osaka, published Kokon Sanpō Ki (古今算法記), in which he gave the first comprehensive account of Chinese algebra in Japan. He successfully applied it to problems suggested by his contemporaries. Before him, these problems were solved using arithmetical methods. In the end of the book, he challenged other mathematicians with 15 new problems, which require multi-variable algebraic equations. In 1674, Seki published Hatsubi Sanpō (発微算法), giving solutions to all the 15 problems. The method he used is called bōsho-hō. He introduced the use of kanji to represent unknowns and variables in equations. Although it was possible to represent equations of an arbitrary degree (he once treated the 1458th degree) with negative coefficients, there were no symbols corresponding to parentheses, equality, or division. For example, a x + b {\displaystyle ax+b} could also mean a x + b = 0 {\displaystyle ax+b=0} . Later, the system was improved by other mathematicians, and in the end it became as expressive as the ones developed in Europe. In his book of 1674, however, Seki gave only single-variable equations resulting from elimination, but no account of the process at all, nor his new system of algebraic symbols. There were a few errors in the first edition. A mathematician in Hashimoto's school criticized the work, saying "only three out of 15 are correct." In 1678, Tanaka Yoshizane (田中 由真), who was from Hashimoto's school and was active in Kyoto, authored Sanpō Meiki (算法明記), and gave new solutions to Sawaguchi's 15 problems, using his version of multivariable algebra, similar to Seki's. To answer criticism, in 1685, Takebe Katahiro (建部 賢弘), one of Seki's pupils, published Hatsubi Sanpō Genkai (発微算法諺解), notes on Hatsubi Sanpō, in which he showed in detail the process of elimination using algebraic symbols. 
The effect of the introduction of the new symbolism was not restricted to algebra. With it, mathematicians at that time became able to express mathematical results in a more general and abstract way. They concentrated on the study of elimination of variables. === Elimination theory === In 1683, Seki pushed ahead with elimination theory, based on resultants, in the Kaifukudai no Hō (解伏題之法). To express the resultant, he developed the notion of the determinant. While in his manuscript the formula for 5×5 matrices is obviously wrong, being always 0, in his later publication, Taisei Sankei (大成算経), written in 1683–1710 with Katahiro Takebe (建部 賢弘) and his brothers, a correct and general formula (Laplace's formula for the determinant) appears. Tanaka came up with the same idea independently. An indication appeared in his book of 1678: some of the equations obtained after elimination coincide with resultants. In Sanpō Funkai (算法紛解) (1690?), he explicitly described the resultant and applied it to several problems. In 1690, Izeki Tomotoki (井関 知辰), a mathematician active in Osaka but not in Hashimoto's school, published Sanpō Hakki (算法発揮), in which he gave the resultant and Laplace's formula of the determinant for the n×n case. The relationships between these works are not clear. Seki developed his mathematics in competition with mathematicians in Osaka and Kyoto, then the cultural centers of Japan. In comparison with European mathematics, Seki's first manuscript was as early as Leibniz's first commentary on the subject, which treated matrices only up to the 3×3 case. The subject was forgotten in the West until Gabriel Cramer was brought to it in 1750 by the same motivations. Elimination theory equivalent to the wasan form was rediscovered by Étienne Bézout in 1764. Laplace's formula was established no earlier than 1750. With elimination theory in hand, a large part of the problems treated in Seki's time became solvable in principle, given that the Chinese tradition had almost reduced geometry to algebra.
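The determinant-based elimination described above can be illustrated with a small modern sketch (pure Python, our own helper names, nothing resembling Seki's notation): eliminating x from two polynomials by taking the determinant of their Sylvester matrix.

```python
# Eliminate x from p(x, y) = x^2 + y^2 - 1 (a circle) and q(x, y) = x - y
# (a line) via the determinant of their Sylvester matrix in x.
def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def resultant_in_x(y):
    # Sylvester matrix: one row of p's x-coefficients (1, 0, y^2 - 1) and
    # two shifted rows of q's x-coefficients (1, -y).
    return det3([
        [1.0, 0.0, y * y - 1.0],
        [1.0, -y, 0.0],
        [0.0, 1.0, -y],
    ])

# For q = x - y the resultant reduces to p with x replaced by y, i.e.
# 2y^2 - 1; its roots y = ±1/sqrt(2) are where the line meets the circle.
errors = [abs(resultant_in_x(y) - (2 * y * y - 1.0))
          for y in (-2.0, -0.5, 0.0, 0.7, 3.0)]
```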
In practice, the method could founder under huge computational complexity. Yet this theory had a significant influence on the direction of development of wasan. After the elimination is complete, one is left to find numerically the real roots of a single-variable equation. Horner's method, though well known in China, was not transmitted to Japan in its final form. So Seki had to work it out independently. He is sometimes credited with Horner's method, which is not historically correct. He also suggested an improvement to Horner's method: to omit higher order terms after some iterations. This practice happens to be the same as that of the Newton–Raphson method, but with a completely different perspective. Neither he nor his pupils had, strictly speaking, the idea of a derivative. Seki also studied the properties of algebraic equations for assisting in numerical solution. The most notable of these are the conditions for the existence of multiple roots based on the discriminant, which is the resultant of a polynomial and its "derivative". His working definition of "derivative" was the O(h)-term in f(x + h), which was computed by the binomial theorem. He obtained some evaluations of the number of real roots of a polynomial equation. === Calculation of pi === Another of Seki's contributions was the rectification of the circle, i.e., the calculation of pi; he obtained a value for π that was correct to the 10th decimal place, using what is now called Aitken's delta-squared process, rediscovered in the 20th century by Alexander Aitken. == Legacy == The asteroid 7483 Sekitakakazu is named after Seki Takakazu. == Selected works == In a statistical overview derived from writings by and about Seki Takakazu, OCLC/WorldCat encompasses roughly 50+ works in 50+ publications in three languages and 100+ library holdings.
1683 – Kenpu no Hō (驗符之法) OCLC 045626660 1712 – Katsuyō Sanpō (括要算法) OCLC 049703813 Seki Takakazu Zenshū (關孝和全集) OCLC 006343391, collected works == Gallery == == See also == Japanese mathematics Napkin ring problem Sangaku, the custom of presenting mathematical problems, carved in wood tablets, to the public in Shinto shrines Soroban, a Japanese abacus == Sources == == References == Endō Toshisada (1896). History of mathematics in Japan (大日本數學史, Dai Nihon sūgakushi). Tōkyō: _____. OCLC 122770600 Horiuchi, Annick. (1994). Les Mathematiques Japonaises a L'Epoque d'Edo (1600–1868): Une Etude des Travaux de Seki Takakazu (?-1708) et de Takebe Katahiro (1664–1739). Paris: Librairie Philosophique J. Vrin. ISBN 9782711612130; OCLC 318334322 Eves, Howard Whitley. (1990). An Introduction to the History of Mathematics. Philadelphia: Saunders. ISBN 9780030295584; OCLC 20842510 Poole, David. (2005). Linear Algebra: A Modern Introduction. Belmont, California: Thomson Brooks/Cole. ISBN 9780534998455; OCLC 67379937 Restivo, Sal P. (1992). Mathematics in Society and History: Sociological Inquiries. Dordrecht: Kluwer Academic Publishers. ISBN 9780792317654; OCLC 25709270 Sato, Kenichi. (2005). Kinsei Nihon Suugakushi: Seki Takakazu no jitsuzou wo motomete. Tokyo: University of Tokyo Press. ISBN 4-13-061355-3 Selin, Helaine. (1997). Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures. Dordrecht: Kluwer/Springer. ISBN 9780792340669; OCLC 186451909 Smith, David Eugene and Yoshio Mikami. (1914). A History of Japanese Mathematics. Chicago: Open Court Publishing. OCLC 1515528 Alternate online, full-text copy at archive.org, pp. 91–127. == External links == Sugaku-bunka O'Connor, John J.; Robertson, Edmund F., "Takakazu Shinsuke Seki", MacTutor History of Mathematics Archive, University of St Andrews
Selberg's identity
In number theory, Selberg's identity, named after Atle Selberg, is an approximate identity involving logarithms of primes. The identity, discovered jointly by Selberg and Paul Erdős, was used in the first elementary proof of the prime number theorem. == Statement == There are several different but equivalent forms of Selberg's identity. One form is ∑ p < x ( log p ) 2 + ∑ p q < x log p log q = 2 x log x + O ( x ) {\displaystyle \sum _{p<x}(\log p)^{2}+\sum _{pq<x}\log p\log q=2x\log x+O(x)} where the sums are over primes p and q. == Explanation == The strange-looking expression on the left side of Selberg's identity is (up to smaller terms) the sum ∑ n < x c n {\displaystyle \sum _{n<x}c_{n}} where the numbers c n = Λ ( n ) log n + ∑ d | n Λ ( d ) Λ ( n / d ) {\displaystyle c_{n}=\Lambda (n)\log n+\sum _{d\,|\,n}\Lambda (d)\Lambda (n/d)} are the coefficients of the Dirichlet series ζ ′ ′ ( s ) ζ ( s ) = ( ζ ′ ( s ) ζ ( s ) ) ′ + ( ζ ′ ( s ) ζ ( s ) ) 2 = ∑ c n n s . {\displaystyle {\frac {\zeta ^{\prime \prime }(s)}{\zeta (s)}}=\left({\frac {\zeta ^{\prime }(s)}{\zeta (s)}}\right)^{\prime }+\left({\frac {\zeta ^{\prime }(s)}{\zeta (s)}}\right)^{2}=\sum {\frac {c_{n}}{n^{s}}}.} This function has a pole of order 2 at s = 1 with coefficient 2, which gives the dominant term 2x log(x) in the asymptotic expansion of ∑ n < x c n . {\displaystyle \sum _{n<x}c_{n}.} == Another variation of the identity == Selberg's identity sometimes also refers to the following divisor sum identity involving the von Mangoldt function and the Möbius function when n ≥ 1 {\displaystyle n\geq 1} : Λ ( n ) log ( n ) + ∑ d | n Λ ( d ) Λ ( n d ) = ∑ d | n μ ( d ) log 2 ( n d ) .
{\displaystyle \Lambda (n)\log(n)+\sum _{d\,|\,n}\Lambda (d)\Lambda \!\left({\frac {n}{d}}\right)=\sum _{d\,|\,n}\mu (d)\log ^{2}\left({\frac {n}{d}}\right).} This variant of Selberg's identity is proved using the concept of taking derivatives of arithmetic functions defined by f ′ ( n ) = f ( n ) ⋅ log ( n ) {\displaystyle f^{\prime }(n)=f(n)\cdot \log(n)} in Section 2.18 of Apostol's book (see also this link). == References == Pisot, Charles (1949), Démonstration élémentaire du théorème des nombres premiers, Séminaire Bourbaki, vol. 1, MR 1605145 Selberg, Atle (1949), "An elementary proof of the prime-number theorem", Ann. of Math., 2, 50 (2): 305–313, doi:10.2307/1969455, JSTOR 1969455, MR 0029410
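The divisor-sum form of Selberg's identity is exact, so it can be checked numerically term by term. The sketch below (all helper names are our own) computes Λ and μ from trial-division factorizations and confirms the identity for small n.

```python
import math

def factorize(n):
    """Prime factorization of n as a dict {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def mangoldt(n):
    """Von Mangoldt function: log p if n = p^k for a prime p, else 0."""
    f = factorize(n)
    return math.log(next(iter(f))) if len(f) == 1 else 0.0

def moebius(n):
    f = factorize(n)
    if any(e > 1 for e in f.values()):
        return 0
    return -1 if len(f) % 2 else 1

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Check Λ(n)·log n + Σ_{d|n} Λ(d)Λ(n/d) = Σ_{d|n} μ(d)·log²(n/d).
max_err = 0.0
for n in range(2, 500):
    lhs = mangoldt(n) * math.log(n) + sum(
        mangoldt(d) * mangoldt(n // d) for d in divisors(n))
    rhs = sum(moebius(d) * math.log(n // d) ** 2 for d in divisors(n))
    max_err = max(max_err, abs(lhs - rhs))
```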
Self-concordant function
A self-concordant function is a function satisfying a certain differential inequality, which makes it particularly amenable to optimization by Newton's method: Sub.6.2.4.2 A self-concordant barrier is a self-concordant function that is also a barrier function for a particular convex set. Self-concordant barriers are important ingredients in interior point methods for optimization. == Self-concordant functions == === Multivariate self-concordant function === Here is the general definition of a self-concordant function.: Def.2.0.1 Let C be a convex nonempty open set in Rn. Let f be a three-times continuously differentiable function defined on C. We say that f is self-concordant on C if it satisfies the following properties: 1. Barrier property: on any sequence of points in C that converges to a boundary point of C, f converges to ∞. 2. Differential inequality: for every point x in C, and any direction h in Rn, let gh be the function f restricted to the direction h, that is: gh(t) = f(x+t*h). Then the one-dimensional function gh should satisfy the following differential inequality: | g h ‴ ( x ) | ≤ 2 g h ″ ( x ) 3 / 2 {\displaystyle |g_{h}'''(x)|\leq 2g_{h}''(x)^{3/2}} . Equivalently: d d α ∇ 2 f ( x + α y ) | α = 0 ⪯ 2 y T ∇ 2 f ( x ) y ∇ 2 f ( x ) {\displaystyle \left.{\frac {d}{d\alpha }}\nabla ^{2}f(x+\alpha y)\right|_{\alpha =0}\preceq 2{\sqrt {y^{T}\nabla ^{2}f(x)\,y}}\,\nabla ^{2}f(x)} === Univariate self-concordant function === A function f : R → R {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} } is self-concordant on R {\displaystyle \mathbb {R} } if: | f ‴ ( x ) | ≤ 2 f ″ ( x ) 3 / 2 {\displaystyle |f'''(x)|\leq 2f''(x)^{3/2}} Equivalently: if wherever f ″ ( x ) > 0 {\displaystyle f''(x)>0} it satisfies: | d d x 1 f ″ ( x ) | ≤ 1 {\displaystyle \left|{\frac {d}{dx}}{\frac {1}{\sqrt {f''(x)}}}\right|\leq 1} and satisfies f ‴ ( x ) = 0 {\displaystyle f'''(x)=0} elsewhere.
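For f(x) = −log x the univariate inequality in fact holds with equality: f″(x) = 1/x² and f‴(x) = −2/x³, so |f‴(x)| = 2 f″(x)^{3/2}. A quick numerical check (the derivative helpers are coded by hand, not differentiated automatically):

```python
# Verify |f'''(x)| <= 2 * f''(x)**1.5 for f(x) = -log(x) on a grid of x > 0.
def f2(x):  # second derivative of -log(x)
    return 1.0 / x**2

def f3(x):  # third derivative of -log(x)
    return -2.0 / x**3

worst_slack = 0.0
for i in range(1, 1000):
    x = i / 100.0  # x ranges over (0, 10)
    slack = 2.0 * f2(x)**1.5 - abs(f3(x))
    worst_slack = min(worst_slack, slack)  # should stay at (numerically) zero
```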
=== Examples === Linear and convex quadratic functions are self-concordant, since their third derivative is zero. Any function f ( x ) = − log ( − g ( x ) ) − log x {\displaystyle f(x)=-\log(-g(x))-\log x} where g ( x ) {\displaystyle g(x)} is defined and convex for all x > 0 {\displaystyle x>0} and verifies | g ‴ ( x ) | ≤ 3 g ″ ( x ) / x {\displaystyle |g'''(x)|\leq 3g''(x)/x} is self-concordant on its domain, which is { x ∣ x > 0 , g ( x ) < 0 } {\displaystyle \{x\mid x>0,g(x)<0\}} . Some examples are g ( x ) = − x p {\displaystyle g(x)=-x^{p}} for 0 < p ≤ 1 {\displaystyle 0<p\leq 1} g ( x ) = − log x {\displaystyle g(x)=-\log x} g ( x ) = x p {\displaystyle g(x)=x^{p}} for − 1 ≤ p ≤ 0 {\displaystyle -1\leq p\leq 0} g ( x ) = ( a x + b ) 2 / x {\displaystyle g(x)=(ax+b)^{2}/x} For any function g {\displaystyle g} satisfying the conditions, the function g ( x ) + a x 2 + b x + c {\displaystyle g(x)+ax^{2}+bx+c} with a ≥ 0 {\displaystyle a\geq 0} also satisfies the conditions. Some functions that are not self-concordant: f ( x ) = e x {\displaystyle f(x)=e^{x}} f ( x ) = 1 x p , x > 0 , p > 0 {\displaystyle f(x)={\frac {1}{x^{p}}},x>0,p>0} f ( x ) = | x p | , p > 2 {\displaystyle f(x)=|x^{p}|,p>2} == Self-concordant barriers == Here is the general definition of a self-concordant barrier (SCB).: Def.3.1.1 Let C be a convex closed set in Rn with a non-empty interior. Let f be a function from interior(C) to R. Let M>0 be a real parameter. We say that f is an M-self-concordant barrier for C if it satisfies the following: 1. f is a self-concordant function on interior(C). 2. For every point x in interior(C), and any direction h in Rn, let gh be the function f restricted to the direction h, that is: gh(t) = f(x+t*h). Then the one-dimensional function gh should satisfy the following differential inequality: | g h ′ ( x ) | ≤ M 1 / 2 ⋅ g h ″ ( x ) 1 / 2 {\displaystyle |g_{h}'(x)|\leq M^{1/2}\cdot g_{h}''(x)^{1/2}} .
=== Constructing SCBs === Due to the importance of SCBs in interior-point methods, it is important to know how to construct SCBs for various domains. In theory, it can be proved that every closed convex domain in Rn has a self-concordant barrier with parameter O(n). But this “universal barrier” is given by some multivariate integrals, and it is too complicated for actual computations. Hence, the main goal is to construct SCBs that are efficiently computable.: Sec.9.2.3.3 SCBs can be constructed from some basic SCBs that are combined to produce SCBs for more complex domains, using several combination rules. === Basic SCBs === Every constant is a self-concordant barrier for all Rn, with parameter M=0. It is the only self-concordant barrier for the entire space, and the only self-concordant barrier with M < 1.: Example 3.1.1 [Note that linear and quadratic functions are self-concordant functions, but they are not self-concordant barriers]. For the positive half-line R + {\displaystyle \mathbb {R} _{+}} ( x > 0 {\displaystyle x>0} ), f ( x ) = − ln x {\displaystyle f(x)=-\ln x} is a self-concordant barrier with parameter M = 1 {\displaystyle M=1} . This can be proved directly from the definition. === Substitution rule === Let G be a closed convex domain in Rn, and g an M-SCB for G. Let x = Ay+b be an affine mapping from Rk to Rn with its image intersecting the interior of G. Let H be the inverse image of G under the mapping: H = {y in Rk | Ay+b in G}. Let h be the composite function h(y) := g(Ay+b). Then, h is an M-SCB for H.: Prop.3.1.1 For example, take n=1, G the positive half-line, and g ( x ) = − ln x {\displaystyle g(x)=-\ln x} . For any k, let a be a k-element vector and b a scalar. Let H = {y in Rk | aTy+b ≥ 0}, a k-dimensional half-space. By the substitution rule, h ( y ) = − ln ( a T y + b ) {\displaystyle h(y)=-\ln(a^{T}y+b)} is a 1-SCB for H.
A more common format is H = {y in Rk | aTy ≤ b}, for which the SCB is h ( y ) = − ln ( b − a T y ) {\displaystyle h(y)=-\ln(b-a^{T}y)} . The substitution rule can be extended from affine mappings to a certain class of "appropriate" mappings,: Thm.9.1.1 and to quadratic mappings.: Sub.9.3 === Cartesian product rule === For all i in 1,...,m, let Gi be a closed convex domain in Rni, and let gi be an Mi-SCB for Gi. Let G be the cartesian product of all Gi. Let g(x1,...,xm) := sumi gi(xi). Then, g is a SCB for G, with parameter sumi Mi.: Prop.3.1.1 For example, take all Gi to be the positive half-line, so that G is the positive orthant R + m {\displaystyle \mathbb {R} _{+}^{m}} . Then g ( x ) = − ∑ i = 1 m ln x i {\displaystyle g(x)=-\sum _{i=1}^{m}\ln x_{i}} is an m-SCB for G. We can now apply the substitution rule. We get that, for the polytope defined by the linear inequalities ajTx ≤ bj for j in 1,...,m, if it satisfies Slater's condition, then f ( x ) = − ∑ j = 1 m ln ( b j − a j T x ) {\displaystyle f(x)=-\sum _{j=1}^{m}\ln(b_{j}-a_{j}^{T}x)} is an m-SCB. The linear functions b j − a j T x {\displaystyle b_{j}-a_{j}^{T}x} can be replaced by quadratic functions. === Intersection rule === Let G1,...,Gm be closed convex domains in Rn. For each i in 1,...,m, let gi be an Mi-SCB for Gi, and ri a real number. Let G be the intersection of all Gi, and suppose its interior is nonempty. Let g := sumi ri*gi. Then, g is a SCB for G, with parameter sumi ri*Mi.: Prop.3.1.1 Therefore, if G is defined by a list of constraints, we can find a SCB for each constraint separately, and then simply sum them to get a SCB for G. For example, suppose the domain is defined by m linear constraints of the form ajTx ≤ bj, for j in 1,...,m. Then we can use the Intersection rule to construct the m-SCB f ( x ) = − ∑ j = 1 m ln ( b j − a j T x ) {\displaystyle f(x)=-\sum _{j=1}^{m}\ln(b_{j}-a_{j}^{T}x)} (the same one that we previously computed using the Cartesian product rule).
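The polytope barrier is easy to check numerically: restricted to any line x0 + t·h, writing s_j = (a_jᵀh)/(b_j − a_jᵀx0), one gets g′(0) = Σ s_j, g″(0) = Σ s_j², g‴(0) = 2Σ s_j³, and the self-concordance and barrier-parameter inequalities follow from power-mean and Cauchy–Schwarz bounds. A sketch with random data (variable names are our own):

```python
import random

random.seed(0)
m, n = 8, 3

# Random polytope constraints a_j^T x <= b_j, strictly containing x0 = 0.
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]
b = [random.uniform(0.5, 2.0) for _ in range(m)]

x0 = [0.0] * n
h = [random.uniform(-1, 1) for _ in range(n)]

# Restrict the barrier f(x) = -sum ln(b_j - a_j^T x) to the line x0 + t*h
# and evaluate its derivatives at t = 0 in closed form.
s = []
for a_j, b_j in zip(A, b):
    r = b_j - sum(aji * xi for aji, xi in zip(a_j, x0))  # residual, > 0
    c = sum(aji * hi for aji, hi in zip(a_j, h))
    s.append(c / r)

g1 = sum(s)                        # g'(0)
g2 = sum(v * v for v in s)         # g''(0)
g3 = 2.0 * sum(v ** 3 for v in s)  # g'''(0)

self_concordance_holds = abs(g3) <= 2.0 * g2 ** 1.5 + 1e-12
barrier_parameter_is_m = abs(g1) <= (m ** 0.5) * g2 ** 0.5 + 1e-12
```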
=== SCBs for epigraphs === The epigraph of a function f(x) is the area above the graph of the function, that is, { ( x , t ) ∈ R 2 : t ≥ f ( x ) } {\displaystyle \{(x,t)\in \mathbb {R} ^{2}:t\geq f(x)\}} . The epigraph of f is a convex set if and only if f is a convex function. The following theorems present some functions f for which the epigraph has an SCB. Let g(t) be a 3-times continuously-differentiable concave function on t>0, such that t ⋅ | g ‴ ( t ) | / | g ″ ( t ) | {\displaystyle t\cdot |g'''(t)|/|g''(t)|} is bounded by a constant (denoted 3*b) for all t>0. Let G be the 2-dimensional convex domain: G = closure ( { ( x , t ) ∈ R 2 : t > 0 , x ≤ g ( t ) } ) . {\displaystyle G={\text{closure}}(\{(x,t)\in \mathbb {R} ^{2}:t>0,x\leq g(t)\}).} Then, the function f(x,t) = -ln(g(t)-x) - max[1,b^2]*ln(t) is a self-concordant barrier for G, with parameter (1+max[1,b^2]).: Prop.9.2.1 Examples: Let g(t) = t^(1/p), for some p≥1, and b=(2p-1)/(3p). Then G 1 = { ( x , t ) ∈ R 2 : ( x + ) p ≤ t } {\displaystyle G_{1}=\{(x,t)\in \mathbb {R} ^{2}:(x_{+})^{p}\leq t\}} has a 2-SCB. Similarly, G 2 = { ( x , t ) ∈ R 2 : ( [ − x ] + ) p ≤ t } {\displaystyle G_{2}=\{(x,t)\in \mathbb {R} ^{2}:([-x]_{+})^{p}\leq t\}} has a 2-SCB. Using the Intersection rule, we get that G = G 1 ∩ G 2 = { ( x , t ) ∈ R 2 : | x | p ≤ t } {\displaystyle G=G_{1}\cap G_{2}=\{(x,t)\in \mathbb {R} ^{2}:|x|^{p}\leq t\}} has a 4-SCB. Let g(t)=ln(t) and b=2/3. Then G = { ( x , t ) ∈ R 2 : e x ≤ t } {\displaystyle G=\{(x,t)\in \mathbb {R} ^{2}:e^{x}\leq t\}} has a 2-SCB. We can now construct a SCB for the problem of minimizing the p-norm: min x ∑ j = 1 n | v j − x T u j | p {\displaystyle \min _{x}\sum _{j=1}^{n}|v_{j}-x^{T}u_{j}|^{p}} , where vj are constant scalars, uj are constant vectors, and p>0 is a constant.
We first convert it into the minimization of a linear objective: min x ∑ j = 1 n t j {\displaystyle \min _{x}\sum _{j=1}^{n}t_{j}} , with the constraints: t j ≥ | v j − x T u j | p {\displaystyle t_{j}\geq |v_{j}-x^{T}u_{j}|^{p}} for all j in [n]. For each constraint, we have a 4-SCB by the affine substitution rule. Using the Intersection rule, we get a (4n)-SCB for the entire feasible domain. Similarly, let g be a 3-times continuously-differentiable convex function on the ray x>0, such that: x ⋅ | g ‴ ( x ) | / | g ″ ( x ) | ≤ 3 b {\displaystyle x\cdot |g'''(x)|/|g''(x)|\leq 3b} for all x>0. Let G be the 2-dimensional convex domain: closure({ (t,x) in R2: x>0, t ≥ g(x) }). Then, the function f(x,t) = -ln(t-g(x)) - max[1,b^2]*ln(x) is a self-concordant barrier for G, with parameter (1+max[1,b^2]).: Prop.9.2.2 Examples: Let g(x) = x^(−p), for some p>0, and b=(2+p)/3. Then G 1 = { ( x , t ) ∈ R 2 : x − p ≤ t , x ≥ 0 } {\displaystyle G_{1}=\{(x,t)\in \mathbb {R} ^{2}:x^{-p}\leq t,x\geq 0\}} has a 2-SCB. Let g(x)=x ln(x) and b=1/3. Then G = { ( x , t ) ∈ R 2 : x ln x ≤ t , x ≥ 0 } {\displaystyle G=\{(x,t)\in \mathbb {R} ^{2}:x\ln x\leq t,x\geq 0\}} has a 2-SCB. === SCBs for cones === For the second order cone { ( x , y ) ∈ R n − 1 × R ∣ ‖ x ‖ ≤ y } {\displaystyle \{(x,y)\in \mathbb {R} ^{n-1}\times \mathbb {R} \mid \|x\|\leq y\}} , the function f ( x , y ) = − log ( y 2 − x T x ) {\displaystyle f(x,y)=-\log(y^{2}-x^{T}x)} is a self-concordant barrier. For the cone of m×m positive semidefinite symmetric matrices, the function f ( A ) = − log det A {\displaystyle f(A)=-\log \det A} is a self-concordant barrier.
For the quadratic region defined by ϕ ( x ) > 0 {\displaystyle \phi (x)>0} where ϕ ( x ) = α + ⟨ a , x ⟩ − 1 2 ⟨ A x , x ⟩ {\displaystyle \phi (x)=\alpha +\langle a,x\rangle -{\frac {1}{2}}\langle Ax,x\rangle } where A = A T ≥ 0 {\displaystyle A=A^{T}\geq 0} is a positive semi-definite symmetric matrix, the logarithmic barrier f ( x ) = − log ϕ ( x ) {\displaystyle f(x)=-\log \phi (x)} is self-concordant with M = 2 {\displaystyle M=2} . For the exponential cone { ( x , y , z ) ∈ R 3 ∣ y e x / y ≤ z , y > 0 } {\displaystyle \{(x,y,z)\in \mathbb {R} ^{3}\mid ye^{x/y}\leq z,y>0\}} , the function f ( x , y , z ) = − log ( y log ( z / y ) − x ) − log z − log y {\displaystyle f(x,y,z)=-\log(y\log(z/y)-x)-\log z-\log y} is a self-concordant barrier. For the power cone { ( x 1 , x 2 , y ) ∈ R + 2 × R ∣ | y | ≤ x 1 α x 2 1 − α } {\displaystyle \{(x_{1},x_{2},y)\in \mathbb {R} _{+}^{2}\times \mathbb {R} \mid |y|\leq x_{1}^{\alpha }x_{2}^{1-\alpha }\}} , the function f ( x 1 , x 2 , y ) = − log ( x 1 2 α x 2 2 ( 1 − α ) − y 2 ) − log x 1 − log x 2 {\displaystyle f(x_{1},x_{2},y)=-\log(x_{1}^{2\alpha }x_{2}^{2(1-\alpha )}-y^{2})-\log x_{1}-\log x_{2}} is a self-concordant barrier. == History == As mentioned in the "Bibliography Comments" of their 1994 book, self-concordant functions were introduced in 1988 by Yurii Nesterov and further developed with Arkadi Nemirovski.
As they explained, their basic observation was that the Newton method is affine invariant, in the sense that if for a function f ( x ) {\displaystyle f(x)} we have Newton steps x k + 1 = x k − [ f ″ ( x k ) ] − 1 f ′ ( x k ) {\displaystyle x_{k+1}=x_{k}-[f''(x_{k})]^{-1}f'(x_{k})} then for a function ϕ ( y ) = f ( A y ) {\displaystyle \phi (y)=f(Ay)} where A {\displaystyle A} is a non-degenerate linear transformation, starting from y 0 = A − 1 x 0 {\displaystyle y_{0}=A^{-1}x_{0}} we have the Newton steps y k = A − 1 x k {\displaystyle y_{k}=A^{-1}x_{k}} , as can be shown recursively: y k + 1 = y k − [ ϕ ″ ( y k ) ] − 1 ϕ ′ ( y k ) = y k − [ A T f ″ ( A y k ) A ] − 1 A T f ′ ( A y k ) = A − 1 x k − A − 1 [ f ″ ( x k ) ] − 1 f ′ ( x k ) = A − 1 x k + 1 {\displaystyle y_{k+1}=y_{k}-[\phi ''(y_{k})]^{-1}\phi '(y_{k})=y_{k}-[A^{T}f''(Ay_{k})A]^{-1}A^{T}f'(Ay_{k})=A^{-1}x_{k}-A^{-1}[f''(x_{k})]^{-1}f'(x_{k})=A^{-1}x_{k+1}} . However, the standard analysis of the Newton method supposes that the Hessian of f {\displaystyle f} is Lipschitz continuous, that is ‖ f ″ ( x ) − f ″ ( y ) ‖ ≤ M ‖ x − y ‖ {\displaystyle \|f''(x)-f''(y)\|\leq M\|x-y\|} for some constant M {\displaystyle M} . If we suppose that f {\displaystyle f} is 3 times continuously differentiable, then this is equivalent to | ⟨ f ‴ ( x ) [ u ] v , v ⟩ | ≤ M ‖ u ‖ ‖ v ‖ 2 {\displaystyle |\langle f'''(x)[u]v,v\rangle |\leq M\|u\|\|v\|^{2}} for all u , v ∈ R n {\displaystyle u,v\in \mathbb {R} ^{n}} where f ‴ ( x ) [ u ] = lim α → 0 α − 1 [ f ″ ( x + α u ) − f ″ ( x ) ] {\displaystyle f'''(x)[u]=\lim _{\alpha \to 0}\alpha ^{-1}[f''(x+\alpha u)-f''(x)]} . Then the left hand side of the above inequality is invariant under the affine transformation f ( x ) → ϕ ( y ) = f ( A y ) , u → A − 1 u , v → A − 1 v {\displaystyle f(x)\to \phi (y)=f(Ay),u\to A^{-1}u,v\to A^{-1}v} , however the right hand side is not.
The authors note that the right hand side can also be made invariant if we replace the Euclidean metric by the scalar product defined by the Hessian of f {\displaystyle f} defined as ‖ w ‖ f ″ ( x ) = ⟨ f ″ ( x ) w , w ⟩ 1 / 2 {\displaystyle \|w\|_{f''(x)}=\langle f''(x)w,w\rangle ^{1/2}} for w ∈ R n {\displaystyle w\in \mathbb {R} ^{n}} . They then arrive at the definition of a self-concordant function as | ⟨ f ‴ ( x ) [ u ] u , u ⟩ | ≤ M ⟨ f ″ ( x ) u , u ⟩ 3 / 2 {\displaystyle |\langle f'''(x)[u]u,u\rangle |\leq M\langle f''(x)u,u\rangle ^{3/2}} . == Properties == === Linear combination === If f 1 {\displaystyle f_{1}} and f 2 {\displaystyle f_{2}} are self-concordant with constants M 1 {\displaystyle M_{1}} and M 2 {\displaystyle M_{2}} and α , β > 0 {\displaystyle \alpha ,\beta >0} , then α f 1 + β f 2 {\displaystyle \alpha f_{1}+\beta f_{2}} is self-concordant with constant max ( α − 1 / 2 M 1 , β − 1 / 2 M 2 ) {\displaystyle \max(\alpha ^{-1/2}M_{1},\beta ^{-1/2}M_{2})} . === Affine transformation === If f {\displaystyle f} is self-concordant with constant M {\displaystyle M} and A x + b {\displaystyle Ax+b} is an affine transformation of R n {\displaystyle \mathbb {R} ^{n}} , then ϕ ( x ) = f ( A x + b ) {\displaystyle \phi (x)=f(Ax+b)} is also self-concordant with parameter M {\displaystyle M} . === Convex conjugate === If f {\displaystyle f} is self-concordant, then its convex conjugate f ∗ {\displaystyle f^{*}} is also self-concordant. === Non-singular Hessian === If f {\displaystyle f} is self-concordant and the domain of f {\displaystyle f} contains no straight line (infinite in both directions), then f ″ {\displaystyle f''} is non-singular.
Conversely, if for some x {\displaystyle x} in the domain of f {\displaystyle f} and u ∈ R n , u ≠ 0 {\displaystyle u\in \mathbb {R} ^{n},u\neq 0} we have ⟨ f ″ ( x ) u , u ⟩ = 0 {\displaystyle \langle f''(x)u,u\rangle =0} , then ⟨ f ″ ( x + α u ) u , u ⟩ = 0 {\displaystyle \langle f''(x+\alpha u)u,u\rangle =0} for all α {\displaystyle \alpha } for which x + α u {\displaystyle x+\alpha u} is in the domain of f {\displaystyle f} and then f ( x + α u ) {\displaystyle f(x+\alpha u)} is linear and cannot have a maximum, so all of x + α u , α ∈ R {\displaystyle x+\alpha u,\alpha \in \mathbb {R} } is in the domain of f {\displaystyle f} . We note also that f {\displaystyle f} cannot have a minimum inside its domain. == Applications == Among other things, self-concordant functions are useful in the analysis of Newton's method. Self-concordant barrier functions are used to develop the barrier functions used in interior point methods for convex and nonlinear optimization. The usual analysis of the Newton method would not work for barrier functions as their second derivative cannot be Lipschitz continuous, otherwise they would be bounded on any compact subset of R n {\displaystyle \mathbb {R} ^{n}} . Self-concordant barrier functions are a class of functions that can be used as barriers in constrained optimization methods and can be minimized using the Newton algorithm with provable convergence properties analogous to the usual case (but these results are somewhat more difficult to derive). To obtain both of these properties, the usual constant bound on the third derivative of the function (required to get the usual convergence results for the Newton method) is replaced by a bound relative to the Hessian. === Minimizing a self-concordant function === A self-concordant function may be minimized with a modified Newton method where we have a bound on the number of steps required for convergence.
We suppose here that f {\displaystyle f} is a standard self-concordant function, that is it is self-concordant with parameter M = 2 {\displaystyle M=2} . We define the Newton decrement λ f ( x ) {\displaystyle \lambda _{f}(x)} of f {\displaystyle f} at x {\displaystyle x} as the size of the Newton step [ f ″ ( x ) ] − 1 f ′ ( x ) {\displaystyle [f''(x)]^{-1}f'(x)} in the local norm defined by the Hessian of f {\displaystyle f} at x {\displaystyle x} λ f ( x ) = ⟨ f ″ ( x ) [ f ″ ( x ) ] − 1 f ′ ( x ) , [ f ″ ( x ) ] − 1 f ′ ( x ) ⟩ 1 / 2 = ⟨ [ f ″ ( x ) ] − 1 f ′ ( x ) , f ′ ( x ) ⟩ 1 / 2 {\displaystyle \lambda _{f}(x)=\langle f''(x)[f''(x)]^{-1}f'(x),[f''(x)]^{-1}f'(x)\rangle ^{1/2}=\langle [f''(x)]^{-1}f'(x),f'(x)\rangle ^{1/2}} Then for x {\displaystyle x} in the domain of f {\displaystyle f} , if λ f ( x ) < 1 {\displaystyle \lambda _{f}(x)<1} then it is possible to prove that the Newton iterate x + = x − [ f ″ ( x ) ] − 1 f ′ ( x ) {\displaystyle x_{+}=x-[f''(x)]^{-1}f'(x)} will be also in the domain of f {\displaystyle f} . This is because, based on the self-concordance of f {\displaystyle f} , it is possible to give some finite bounds on the value of f ( x + ) {\displaystyle f(x_{+})} . We further have λ f ( x + ) ≤ ( λ f ( x ) 1 − λ f ( x ) ) 2 {\displaystyle \lambda _{f}(x_{+})\leq {\Bigg (}{\frac {\lambda _{f}(x)}{1-\lambda _{f}(x)}}{\Bigg )}^{2}} Then if we have λ f ( x ) < λ ¯ = 3 − 5 2 {\displaystyle \lambda _{f}(x)<{\bar {\lambda }}={\frac {3-{\sqrt {5}}}{2}}} then it is also guaranteed that λ f ( x + ) < λ f ( x ) {\displaystyle \lambda _{f}(x_{+})<\lambda _{f}(x)} , so that we can continue to use the Newton method until convergence. 
Note that for λ f ( x + ) < β {\displaystyle \lambda _{f}(x_{+})<\beta } for some β ∈ ( 0 , λ ¯ ) {\displaystyle \beta \in (0,{\bar {\lambda }})} we have quadratic convergence of λ f {\displaystyle \lambda _{f}} to 0 as λ f ( x + ) ≤ ( 1 − β ) − 2 λ f ( x ) 2 {\displaystyle \lambda _{f}(x_{+})\leq (1-\beta )^{-2}\lambda _{f}(x)^{2}} . This then gives quadratic convergence of f ( x k ) {\displaystyle f(x_{k})} to f ( x ∗ ) {\displaystyle f(x^{*})} and of x {\displaystyle x} to x ∗ {\displaystyle x^{*}} , where x ∗ = arg min f ( x ) {\displaystyle x^{*}=\arg \min f(x)} , by the following theorem. If λ f ( x ) < 1 {\displaystyle \lambda _{f}(x)<1} then ω ( λ f ( x ) ) ≤ f ( x ) − f ( x ∗ ) ≤ ω ∗ ( λ f ( x ) ) {\displaystyle \omega (\lambda _{f}(x))\leq f(x)-f(x^{*})\leq \omega _{*}(\lambda _{f}(x))} ω ′ ( λ f ( x ) ) ≤ ‖ x − x ∗ ‖ x ≤ ω ∗ ′ ( λ f ( x ) ) {\displaystyle \omega '(\lambda _{f}(x))\leq \|x-x^{*}\|_{x}\leq \omega _{*}'(\lambda _{f}(x))} with the following definitions ω ( t ) = t − log ( 1 + t ) {\displaystyle \omega (t)=t-\log(1+t)} ω ∗ ( t ) = − t − log ( 1 − t ) {\displaystyle \omega _{*}(t)=-t-\log(1-t)} ‖ u ‖ x = ⟨ f ″ ( x ) u , u ⟩ 1 / 2 {\displaystyle \|u\|_{x}=\langle f''(x)u,u\rangle ^{1/2}} If we start the Newton method from some x 0 {\displaystyle x_{0}} with λ f ( x 0 ) ≥ λ ¯ {\displaystyle \lambda _{f}(x_{0})\geq {\bar {\lambda }}} then we have to start by using a damped Newton method defined by x k + 1 = x k − 1 1 + λ f ( x k ) [ f ″ ( x k ) ] − 1 f ′ ( x k ) {\displaystyle x_{k+1}=x_{k}-{\frac {1}{1+\lambda _{f}(x_{k})}}[f''(x_{k})]^{-1}f'(x_{k})} For this it can be shown that f ( x k + 1 ) ≤ f ( x k ) − ω ( λ f ( x k ) ) {\displaystyle f(x_{k+1})\leq f(x_{k})-\omega (\lambda _{f}(x_{k}))} with ω {\displaystyle \omega } as defined previously. 
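The damped phase followed by the full Newton phase can be sketched in one dimension for the standard self-concordant function f(x) = x − ln x, whose minimizer is x* = 1. The threshold constant matches λ̄ above; the loop structure and helper names are our own illustrative choices.

```python
import math

# f(x) = x - ln(x) is standard self-concordant (M = 2); minimizer x* = 1.
def f1(x): return 1.0 - 1.0 / x   # f'(x)
def f2(x): return 1.0 / x**2      # f''(x)

LAMBDA_BAR = (3.0 - math.sqrt(5.0)) / 2.0  # ~0.382, quadratic-convergence region

x = 0.05  # start far from the minimizer, but inside the domain x > 0
for _ in range(200):
    newton_step = f1(x) / f2(x)
    lam = abs(f1(x)) / math.sqrt(f2(x))  # Newton decrement (1-D case)
    if lam < 1e-12:
        break
    if lam >= LAMBDA_BAR:
        x = x - newton_step / (1.0 + lam)  # damped Newton phase
    else:
        x = x - newton_step                # full Newton, quadratic phase
```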
Note that ω ( t ) {\displaystyle \omega (t)} is an increasing function for t > 0 {\displaystyle t>0} so that ω ( t ) ≥ ω ( λ ¯ ) {\displaystyle \omega (t)\geq \omega ({\bar {\lambda }})} for any t ≥ λ ¯ {\displaystyle t\geq {\bar {\lambda }}} , so the value of f {\displaystyle f} is guaranteed to decrease by a certain amount in each iteration, which also proves that x k + 1 {\displaystyle x_{k+1}} is in the domain of f {\displaystyle f} . == References ==
Self-similarity
In mathematics, a self-similar object is exactly or approximately similar to a part of itself (i.e., the whole has the same shape as one or more of the parts). Many objects in the real world, such as coastlines, are statistically self-similar: parts of them show the same statistical properties at many scales. Self-similarity is a typical property of fractals. Scale invariance is an exact form of self-similarity where at any magnification there is a smaller piece of the object that is similar to the whole. For instance, a side of the Koch snowflake is both symmetrical and scale-invariant; it can be continually magnified 3× without changing shape. The non-trivial similarity evident in fractals is distinguished by their fine structure, or detail on arbitrarily small scales. As a counterexample, whereas any portion of a straight line may resemble the whole, further detail is not revealed. Peitgen et al. explain the concept as such: If parts of a figure are small replicas of the whole, then the figure is called self-similar.... A figure is strictly self-similar if the figure can be decomposed into parts which are exact replicas of the whole. Any arbitrary part contains an exact replica of the whole figure. Since, mathematically, a fractal may show self-similarity under arbitrary magnification, it is impossible to recreate this physically. Peitgen et al. suggest studying self-similarity using approximations: In order to give an operational meaning to the property of self-similarity, we are necessarily restricted to dealing with finite approximations of the limit figure. This is done using the method which we will call box self-similarity where measurements are made on finite stages of the figure using grids of various sizes. This vocabulary was introduced by Benoit Mandelbrot in 1964. == Self-affinity == In mathematics, self-affinity is a feature of a fractal whose pieces are scaled by different amounts in the x and y directions.
This means that to appreciate the self-similarity of these fractal objects, they have to be rescaled using an anisotropic affine transformation. == Definition == A compact topological space X is self-similar if there exists a finite set S indexing a set of non-surjective homeomorphisms { f s : s ∈ S } {\displaystyle \{f_{s}:s\in S\}} for which X = ⋃ s ∈ S f s ( X ) {\displaystyle X=\bigcup _{s\in S}f_{s}(X)} If X ⊂ Y {\displaystyle X\subset Y} , we call X self-similar if it is the only non-empty subset of Y such that the equation above holds for { f s : s ∈ S } {\displaystyle \{f_{s}:s\in S\}} . We call L = ( X , S , { f s : s ∈ S } ) {\displaystyle {\mathfrak {L}}=(X,S,\{f_{s}:s\in S\})} a self-similar structure. The homeomorphisms may be iterated, resulting in an iterated function system. The composition of functions creates the algebraic structure of a monoid. When the set S has only two elements, the monoid is known as the dyadic monoid. The dyadic monoid can be visualized as an infinite binary tree; more generally, if the set S has p elements, then the monoid may be represented as a p-adic tree. The automorphism group of the dyadic monoid is the modular group; the automorphisms can be pictured as hyperbolic rotations of the binary tree. A more general notion than self-similarity is self-affinity. == Examples == The Mandelbrot set is also self-similar around Misiurewicz points. Self-similarity has important consequences for the design of computer networks, as typical network traffic has self-similar properties. For example, in teletraffic engineering, packet switched data traffic patterns seem to be statistically self-similar. This property means that simple models using a Poisson distribution are inaccurate, and networks designed without taking self-similarity into account are likely to function in unexpected ways. Similarly, stock market movements are described as displaying self-affinity, i.e.
they appear self-similar when transformed via an appropriate affine transformation for the level of detail being shown. Andrew Lo describes stock market log return self-similarity in econometrics. Finite subdivision rules are a powerful technique for building self-similar sets, including the Cantor set and the Sierpinski triangle. Some space-filling curves, such as the Peano curve and Moore curve, also feature properties of self-similarity. === In cybernetics === The viable system model of Stafford Beer is an organizational model with an affine self-similar hierarchy, where a given viable system is one element of the System One of a viable system one recursive level higher up, and for which the elements of its System One are viable systems one recursive level lower down. === In nature === Self-similarity can be found in nature, as well. Plants, such as Romanesco broccoli, exhibit strong self-similarity. === In music === Strict canons display various types and amounts of self-similarity, as do sections of fugues. A Shepard tone is self-similar in the frequency or wavelength domains. The Danish composer Per Nørgård has made use of a self-similar integer sequence named the 'infinity series' in much of his music. In the research field of music information retrieval, self-similarity commonly refers to the fact that music often consists of parts that are repeated in time. In other words, music is self-similar under temporal translation, rather than (or in addition to) under scaling. == See also == == References == == External links == "Copperplate Chevrons" — a self-similar fractal zoom movie "Self-Similarity" — New articles about Self-Similarity. Waltz Algorithm === Self-affinity === Mandelbrot, Benoit B. (1985). "Self-affinity and fractal dimension" (PDF). Physica Scripta. 32 (4): 257–260. Bibcode:1985PhyS...32..257M. doi:10.1088/0031-8949/32/4/001. S2CID 250815596. Sapozhnikov, Victor; Foufoula-Georgiou, Efi (May 1996). "Self-Affinity in Braided Rivers" (PDF).
Water Resources Research. 32 (5): 1429–1439. Bibcode:1996WRR....32.1429S. doi:10.1029/96wr00490. Archived (PDF) from the original on 30 July 2018. Retrieved 30 July 2018. Benoît B. Mandelbrot (2002). Gaussian Self-Affinity and Fractals: Globality, the Earth, 1/F Noise, and R/S. Springer. ISBN 978-0387989938.
|
Wikipedia:Selig Brodetsky#0
|
Selig Brodetsky (Hebrew: אשר זליג ברודצקי, romanized: Asher Zelig Brodetsky; 10 February 1888 – 18 May 1954) was an English mathematician, a member of the World Zionist Executive, the president of the Board of Deputies of British Jews, and the second president of the Hebrew University of Jerusalem. == Background == Brodetsky was born in Olviopol (now Pervomaisk) in the Kherson Governorate of the Russian Empire (present-day Ukraine), the second of 13 children born to Akiva Brodetsky (the beadle of the local synagogue) and Adel (Prober). As a child, he witnessed the murder of his uncle in a pogrom. In 1894, the family followed Akiva to the East End of London, to where he had migrated a year earlier. Brodetsky attended the Jews' Free School, where he excelled at his studies. He was awarded a scholarship, which enabled him to attend the Central Foundation Boys' School of London and subsequently, in 1905, Trinity College, Cambridge. In 1908, he completed his studies with the highest honours, as Senior Wrangler, to the distress of the conservative press, which was forced to recognise that a son of immigrants had surpassed all the local students. The Newton scholarship enabled him to study at Leipzig University, where he was awarded a doctorate in 1913. His dissertation dealt with the gravitational field. In 1919, he married Manya Berenblum, whose family had recently emigrated from Belgium, where her father had been a diamond merchant in Antwerp. They had two children, Paul and Adele, born in 1924 and 1927. == Academic career == In 1914, Brodetsky was appointed a lecturer in applied mathematics at the University of Bristol. During the First World War he was employed as an advisor to the British company developing periscopes for submarines. In 1919, Brodetsky became a lecturer at the University of Leeds. Five years later he was appointed professor of applied mathematics at Leeds, where he remained until 1948. Much of his work concerned aeronautics and the mechanics of aeroplanes.
He was the head of the mathematics department of the University of Leeds from 1946 to 1948. He was active in the Association of University Teachers, serving as president in 1935–1936. In 1949 Brodetsky became the second president of the Hebrew University of Jerusalem, preceded by Sir Leon Simon; he served until 1952 and was followed by Benjamin Mazar (1953 to 1961). He took office at a time when the university was going through a rocky period, eventually having to abandon its campus on Mount Scopus. He attempted to overhaul the structure of the university, but he soon became embroiled in bitter struggles with the University Senate, which interfered in his academic and bureaucratic work. Apparently, Brodetsky thought that he was going to take up a position similar to that of Vice-Chancellor of an English university, but many in Jerusalem saw the position as essentially an honorary one, like the Chancellor of an English university. This struggle affected his health and in 1952 he decided to resign his post and return to England. == Education == Jews' Free School (JFS), London (where there is now a Brodetsky House in his honour) Central Foundation Boys' School, London Trinity College, Cambridge (senior wrangler, 1908) Leipzig University (PhD) == Career == Lecturer in Applied Mathematics, University of Bristol, 1914–1919 Reader, 1920–1924; Professor, 1924–1948, then Emeritus Professor of Applied Mathematics, University of Leeds President of the Hebrew University of Jerusalem and Chairman of its Executive Council, 1949–1951 == Other posts == Member of the Executive, World Zionist Organisation and Jewish Agency for Palestine Honorary President, Zionist Federation of Great Britain and Ireland Honorary President, Maccabi World Union President, Board of Deputies of British Jews (1940–49) He was a Fellow of the Royal Astronomical Society, Royal Aeronautical Society and Institute of Physics. His sister Rachel married Rabbi Solomon Mestel; their son is astronomer and astrophysicist Leon Mestel.
== References == == External links == The personal papers of Selig Brodetsky are kept at the Central Zionist Archives in Jerusalem. The notation of the record group is A82.
|
Wikipedia:Sema Salur#0
|
Sema Salur is a Turkish-American mathematician, currently serving as a Professor of Mathematics at the University of Rochester. She was awarded the Ruth I. Michler Memorial Prize for 2014–2015, a prize intended to give a recently promoted associate professor a year-long fellowship at Cornell University, and has been the recipient of a National Science Foundation Research Award beginning in 2017. She specialises in the "geometry and topology of the moduli spaces of calibrated submanifolds inside Calabi–Yau, G2 and Spin(7) manifolds", which are important to certain aspects of string theory and M-theory in physics, theories that attempt to unite gravity, electromagnetism, and the strong and weak nuclear forces into one coherent Theory of Everything. == Education == 1993: B.S. in Mathematics, Boğaziçi University, Turkey. 2000: PhD in Mathematics, Michigan State University == References ==
|
Wikipedia:Semi-continuity#0
|
In mathematical analysis, semicontinuity (or semi-continuity) is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function f {\displaystyle f} is upper (respectively, lower) semicontinuous at a point x 0 {\displaystyle x_{0}} if, roughly speaking, the function values for arguments near x 0 {\displaystyle x_{0}} are not much higher (respectively, lower) than f ( x 0 ) . {\displaystyle f\left(x_{0}\right).} Briefly, a function on a domain X {\displaystyle X} is lower semi-continuous if its epigraph { ( x , t ) ∈ X × R : t ≥ f ( x ) } {\displaystyle \{(x,t)\in X\times \mathbb {R} :t\geq f(x)\}} is closed in X × R {\displaystyle X\times \mathbb {R} } , and upper semi-continuous if − f {\displaystyle -f} is lower semi-continuous. A function is continuous if and only if it is both upper and lower semicontinuous. If we take a continuous function and increase its value at a certain point x 0 {\displaystyle x_{0}} to f ( x 0 ) + c {\displaystyle f\left(x_{0}\right)+c} for some c > 0 {\displaystyle c>0} , then the result is upper semicontinuous; if we decrease its value to f ( x 0 ) − c {\displaystyle f\left(x_{0}\right)-c} then the result is lower semicontinuous. The notion of upper and lower semicontinuous function was first introduced and studied by René Baire in his thesis in 1899. == Definitions == Assume throughout that X {\displaystyle X} is a topological space and f : X → R ¯ {\displaystyle f:X\to {\overline {\mathbb {R} }}} is a function with values in the extended real numbers R ¯ = R ∪ { − ∞ , ∞ } = [ − ∞ , ∞ ] {\displaystyle {\overline {\mathbb {R} }}=\mathbb {R} \cup \{-\infty ,\infty \}=[-\infty ,\infty ]} . 
=== Upper semicontinuity === A function f : X → R ¯ {\displaystyle f:X\to {\overline {\mathbb {R} }}} is called upper semicontinuous at a point x 0 ∈ X {\displaystyle x_{0}\in X} if for every real y > f ( x 0 ) {\displaystyle y>f\left(x_{0}\right)} there exists a neighborhood U {\displaystyle U} of x 0 {\displaystyle x_{0}} such that f ( x ) < y {\displaystyle f(x)<y} for all x ∈ U {\displaystyle x\in U} . Equivalently, f {\displaystyle f} is upper semicontinuous at x 0 {\displaystyle x_{0}} if and only if lim sup x → x 0 f ( x ) ≤ f ( x 0 ) {\displaystyle \limsup _{x\to x_{0}}f(x)\leq f(x_{0})} where lim sup is the limit superior of the function f {\displaystyle f} at the point x 0 . {\displaystyle x_{0}.} If X {\displaystyle X} is a metric space with distance function d {\displaystyle d} and f ( x 0 ) ∈ R , {\displaystyle f(x_{0})\in \mathbb {R} ,} this can also be restated using an ε {\displaystyle \varepsilon } - δ {\displaystyle \delta } formulation, similar to the definition of continuous function. Namely, for each ε > 0 {\displaystyle \varepsilon >0} there is a δ > 0 {\displaystyle \delta >0} such that f ( x ) < f ( x 0 ) + ε {\displaystyle f(x)<f(x_{0})+\varepsilon } whenever d ( x , x 0 ) < δ . {\displaystyle d(x,x_{0})<\delta .} A function f : X → R ¯ {\displaystyle f:X\to {\overline {\mathbb {R} }}} is called upper semicontinuous if it satisfies any of the following equivalent conditions: (1) The function is upper semicontinuous at every point of its domain. (2) For each y ∈ R {\displaystyle y\in \mathbb {R} } , the set f − 1 ( [ − ∞ , y ) ) = { x ∈ X : f ( x ) < y } {\displaystyle f^{-1}([-\infty ,y))=\{x\in X:f(x)<y\}} is open in X {\displaystyle X} , where [ − ∞ , y ) = { t ∈ R ¯ : t < y } {\displaystyle [-\infty ,y)=\{t\in {\overline {\mathbb {R} }}:t<y\}} . 
(3) For each y ∈ R {\displaystyle y\in \mathbb {R} } , the y {\displaystyle y} -superlevel set f − 1 ( [ y , ∞ ) ) = { x ∈ X : f ( x ) ≥ y } {\displaystyle f^{-1}([y,\infty ))=\{x\in X:f(x)\geq y\}} is closed in X {\displaystyle X} . (4) The hypograph { ( x , t ) ∈ X × R : t ≤ f ( x ) } {\displaystyle \{(x,t)\in X\times \mathbb {R} :t\leq f(x)\}} is closed in X × R {\displaystyle X\times \mathbb {R} } . (5) The function f {\displaystyle f} is continuous when the codomain R ¯ {\displaystyle {\overline {\mathbb {R} }}} is given the left order topology. This is just a restatement of condition (2) since the left order topology is generated by all the intervals [ − ∞ , y ) {\displaystyle [-\infty ,y)} . === Lower semicontinuity === A function f : X → R ¯ {\displaystyle f:X\to {\overline {\mathbb {R} }}} is called lower semicontinuous at a point x 0 ∈ X {\displaystyle x_{0}\in X} if for every real y < f ( x 0 ) {\displaystyle y<f\left(x_{0}\right)} there exists a neighborhood U {\displaystyle U} of x 0 {\displaystyle x_{0}} such that f ( x ) > y {\displaystyle f(x)>y} for all x ∈ U {\displaystyle x\in U} . Equivalently, f {\displaystyle f} is lower semicontinuous at x 0 {\displaystyle x_{0}} if and only if lim inf x → x 0 f ( x ) ≥ f ( x 0 ) {\displaystyle \liminf _{x\to x_{0}}f(x)\geq f(x_{0})} where lim inf {\displaystyle \liminf } is the limit inferior of the function f {\displaystyle f} at point x 0 . {\displaystyle x_{0}.} If X {\displaystyle X} is a metric space with distance function d {\displaystyle d} and f ( x 0 ) ∈ R , {\displaystyle f(x_{0})\in \mathbb {R} ,} this can also be restated as follows: For each ε > 0 {\displaystyle \varepsilon >0} there is a δ > 0 {\displaystyle \delta >0} such that f ( x ) > f ( x 0 ) − ε {\displaystyle f(x)>f(x_{0})-\varepsilon } whenever d ( x , x 0 ) < δ . 
{\displaystyle d(x,x_{0})<\delta .} A function f : X → R ¯ {\displaystyle f:X\to {\overline {\mathbb {R} }}} is called lower semicontinuous if it satisfies any of the following equivalent conditions: (1) The function is lower semicontinuous at every point of its domain. (2) For each y ∈ R {\displaystyle y\in \mathbb {R} } , the set f − 1 ( ( y , ∞ ] ) = { x ∈ X : f ( x ) > y } {\displaystyle f^{-1}((y,\infty ])=\{x\in X:f(x)>y\}} is open in X {\displaystyle X} , where ( y , ∞ ] = { t ∈ R ¯ : t > y } {\displaystyle (y,\infty ]=\{t\in {\overline {\mathbb {R} }}:t>y\}} . (3) For each y ∈ R {\displaystyle y\in \mathbb {R} } , the y {\displaystyle y} -sublevel set f − 1 ( ( − ∞ , y ] ) = { x ∈ X : f ( x ) ≤ y } {\displaystyle f^{-1}((-\infty ,y])=\{x\in X:f(x)\leq y\}} is closed in X {\displaystyle X} . (4) The epigraph { ( x , t ) ∈ X × R : t ≥ f ( x ) } {\displaystyle \{(x,t)\in X\times \mathbb {R} :t\geq f(x)\}} is closed in X × R {\displaystyle X\times \mathbb {R} } .: 207 (5) The function f {\displaystyle f} is continuous when the codomain R ¯ {\displaystyle {\overline {\mathbb {R} }}} is given the right order topology. This is just a restatement of condition (2) since the right order topology is generated by all the intervals ( y , ∞ ] {\displaystyle (y,\infty ]} . == Examples == Consider the function f , {\displaystyle f,} piecewise defined by: f ( x ) = { − 1 if x < 0 , 1 if x ≥ 0 {\displaystyle f(x)={\begin{cases}-1&{\mbox{if }}x<0,\\1&{\mbox{if }}x\geq 0\end{cases}}} This function is upper semicontinuous at x 0 = 0 , {\displaystyle x_{0}=0,} but not lower semicontinuous. The floor function f ( x ) = ⌊ x ⌋ , {\displaystyle f(x)=\lfloor x\rfloor ,} which returns the greatest integer less than or equal to a given real number x , {\displaystyle x,} is everywhere upper semicontinuous. Similarly, the ceiling function f ( x ) = ⌈ x ⌉ {\displaystyle f(x)=\lceil x\rceil } is lower semicontinuous. 
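The limsup and liminf characterizations can be probed numerically for the piecewise step function above. This is a rough sketch, assuming a finite sample of a small punctured neighborhood stands in for the true limit superior and inferior:

```python
# Numerically probe semicontinuity of the step function
#   f(x) = -1 for x < 0,  1 for x >= 0
# at x0 = 0: upper semicontinuity requires limsup_{x -> 0} f(x) <= f(0).

def f(x):
    return -1.0 if x < 0 else 1.0

def approx_limsup(g, x0, radius=1e-6, samples=1000):
    """Crude limsup estimate: max of g over a sampled punctured neighborhood."""
    pts = [x0 + radius * (k / samples) for k in range(-samples, samples + 1) if k != 0]
    return max(g(x) for x in pts)

def approx_liminf(g, x0, radius=1e-6, samples=1000):
    """Crude liminf estimate: min of g over a sampled punctured neighborhood."""
    pts = [x0 + radius * (k / samples) for k in range(-samples, samples + 1) if k != 0]
    return min(g(x) for x in pts)

# limsup f = 1 <= f(0) = 1: upper semicontinuous at 0.
# liminf f = -1 < f(0) = 1: not lower semicontinuous at 0.
assert approx_limsup(f, 0.0) <= f(0.0)
assert approx_liminf(f, 0.0) < f(0.0)
```

The same two helpers applied at a point of continuity would return values on both sides of f(x0) that agree in the limit, matching the fact that continuity is exactly upper plus lower semicontinuity.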
Upper and lower semicontinuity bear no relation to continuity from the left or from the right for functions of a real variable. Semicontinuity is defined in terms of an ordering in the range of the functions, not in the domain. For example, the function f ( x ) = { sin ( 1 / x ) if x ≠ 0 , 1 if x = 0 , {\displaystyle f(x)={\begin{cases}\sin(1/x)&{\mbox{if }}x\neq 0,\\1&{\mbox{if }}x=0,\end{cases}}} is upper semicontinuous at x = 0 {\displaystyle x=0} even though its limits from the left and from the right at zero do not exist. If X = R n {\displaystyle X=\mathbb {R} ^{n}} is a Euclidean space (or more generally, a metric space) and Γ = C ( [ 0 , 1 ] , X ) {\displaystyle \Gamma =C([0,1],X)} is the space of curves in X {\displaystyle X} (with the supremum distance d Γ ( α , β ) = sup { d X ( α ( t ) , β ( t ) ) : t ∈ [ 0 , 1 ] } {\displaystyle d_{\Gamma }(\alpha ,\beta )=\sup\{d_{X}(\alpha (t),\beta (t)):t\in [0,1]\}} ), then the length functional L : Γ → [ 0 , + ∞ ] , {\displaystyle L:\Gamma \to [0,+\infty ],} which assigns to each curve α {\displaystyle \alpha } its length L ( α ) , {\displaystyle L(\alpha ),} is lower semicontinuous. As an example, consider approximating the unit square diagonal by a staircase from below. The staircase always has length 2, while the diagonal line has only length √2 {\displaystyle {\sqrt {2}}} . == Properties == Unless specified otherwise, all functions below are from a topological space X {\displaystyle X} to the extended real numbers R ¯ = [ − ∞ , ∞ ] . {\displaystyle {\overline {\mathbb {R} }}=[-\infty ,\infty ].} Several of the results hold for semicontinuity at a specific point, but for brevity they are only stated for semicontinuity over the whole domain. A function f : X → R ¯ {\displaystyle f:X\to {\overline {\mathbb {R} }}} is continuous if and only if it is both upper and lower semicontinuous.
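The staircase example above can be made concrete: the staircases converge uniformly to the diagonal while their lengths stay at 2, which exceeds the diagonal's length √2; this is consistent with the length functional being lower, but not upper, semicontinuous. A small sketch (the helper functions are illustrative):

```python
import math

def staircase_length(n):
    """Length of the n-step staircase from (0,0) to (1,1) built from
    n horizontal and n vertical segments of length 1/n each."""
    return 2 * n * (1.0 / n)    # always 2, independent of n

def staircase_sup_distance(n):
    """Bound on the supremum distance from the n-step staircase to the
    diagonal y = x: each step deviates by at most 1/n."""
    return 1.0 / n

diagonal_length = math.sqrt(2.0)

# The curves converge (sup distance -> 0), yet the lengths do not converge
# to the length of the limit curve: they stay at 2 >= sqrt(2), as lower
# semicontinuity of the length functional predicts.
assert all(staircase_length(n) == 2.0 for n in (1, 10, 1000))
assert staircase_length(1000) > diagonal_length
assert staircase_sup_distance(1000) < 1e-2
```

Lower semicontinuity guarantees only that the limit curve's length is at most the liminf of the approximating lengths, never that the lengths converge.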
The characteristic function or indicator function of a set A ⊂ X {\displaystyle A\subset X} (defined by 1 A ( x ) = 1 {\displaystyle \mathbf {1} _{A}(x)=1} if x ∈ A {\displaystyle x\in A} and 0 {\displaystyle 0} if x ∉ A {\displaystyle x\notin A} ) is upper semicontinuous if and only if A {\displaystyle A} is a closed set. It is lower semicontinuous if and only if A {\displaystyle A} is an open set. In the field of convex analysis, the characteristic function of a set A ⊂ X {\displaystyle A\subset X} is defined differently, as χ A ( x ) = 0 {\displaystyle \chi _{A}(x)=0} if x ∈ A {\displaystyle x\in A} and χ A ( x ) = ∞ {\displaystyle \chi _{A}(x)=\infty } if x ∉ A {\displaystyle x\notin A} . With that definition, the characteristic function of any closed set is lower semicontinuous, and the characteristic function of any open set is upper semicontinuous. === Binary operations on semicontinuous functions === Let f , g : X → R ¯ {\displaystyle f,g:X\to {\overline {\mathbb {R} }}} . If f {\displaystyle f} and g {\displaystyle g} are lower semicontinuous, then the sum f + g {\displaystyle f+g} is lower semicontinuous (provided the sum is well-defined, i.e., f ( x ) + g ( x ) {\displaystyle f(x)+g(x)} is not the indeterminate form − ∞ + ∞ {\displaystyle -\infty +\infty } ). The same holds for upper semicontinuous functions. If f {\displaystyle f} and g {\displaystyle g} are lower semicontinuous and non-negative, then the product function f g {\displaystyle fg} is lower semicontinuous. The corresponding result holds for upper semicontinuous functions. The function f {\displaystyle f} is lower semicontinuous if and only if − f {\displaystyle -f} is upper semicontinuous. If f {\displaystyle f} and g {\displaystyle g} are upper semicontinuous and f {\displaystyle f} is non-decreasing, then the composition f ∘ g {\displaystyle f\circ g} is upper semicontinuous. 
On the other hand, if f {\displaystyle f} is not non-decreasing, then f ∘ g {\displaystyle f\circ g} may not be upper semicontinuous. For example, take f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined as f ( x ) = − x {\displaystyle f(x)=-x} . Then f {\displaystyle f} is continuous and f ∘ g = − g {\displaystyle f\circ g=-g} , which is not upper semicontinuous unless g {\displaystyle g} is continuous. If f {\displaystyle f} and g {\displaystyle g} are lower semicontinuous, their (pointwise) maximum and minimum (defined by x ↦ max { f ( x ) , g ( x ) } {\displaystyle x\mapsto \max\{f(x),g(x)\}} and x ↦ min { f ( x ) , g ( x ) } {\displaystyle x\mapsto \min\{f(x),g(x)\}} ) are also lower semicontinuous. Consequently, the set of all lower semicontinuous functions from X {\displaystyle X} to R ¯ {\displaystyle {\overline {\mathbb {R} }}} (or to R {\displaystyle \mathbb {R} } ) forms a lattice. The corresponding statements also hold for upper semicontinuous functions. === Optimization of semicontinuous functions === The (pointwise) supremum of an arbitrary family ( f i ) i ∈ I {\displaystyle (f_{i})_{i\in I}} of lower semicontinuous functions f i : X → R ¯ {\displaystyle f_{i}:X\to {\overline {\mathbb {R} }}} (defined by f ( x ) = sup { f i ( x ) : i ∈ I } {\displaystyle f(x)=\sup\{f_{i}(x):i\in I\}} ) is lower semicontinuous. In particular, the limit of a monotone increasing sequence f 1 ≤ f 2 ≤ f 3 ≤ ⋯ {\displaystyle f_{1}\leq f_{2}\leq f_{3}\leq \cdots } of continuous functions is lower semicontinuous. (The Theorem of Baire below provides a partial converse.) The limit function will only be lower semicontinuous in general, not continuous. An example is given by the functions f n ( x ) = 1 − ( 1 − x ) n {\displaystyle f_{n}(x)=1-(1-x)^{n}} defined for x ∈ [ 0 , 1 ] {\displaystyle x\in [0,1]} for n = 1 , 2 , … . {\displaystyle n=1,2,\ldots .} Likewise, the infimum of an arbitrary family of upper semicontinuous functions is upper semicontinuous.
And the limit of a monotone decreasing sequence of continuous functions is upper semicontinuous. If C {\displaystyle C} is a compact space (for instance a closed bounded interval [ a , b ] {\displaystyle [a,b]} ) and f : C → R ¯ {\displaystyle f:C\to {\overline {\mathbb {R} }}} is upper semicontinuous, then f {\displaystyle f} attains a maximum on C . {\displaystyle C.} If f {\displaystyle f} is lower semicontinuous on C , {\displaystyle C,} it attains a minimum on C . {\displaystyle C.} (Proof for the upper semicontinuous case: By condition (5) in the definition, f {\displaystyle f} is continuous when R ¯ {\displaystyle {\overline {\mathbb {R} }}} is given the left order topology. So its image f ( C ) {\displaystyle f(C)} is compact in that topology. And the compact sets in that topology are exactly the sets with a maximum. For an alternative proof, see the article on the extreme value theorem.) === Other properties === (Theorem of Baire) Let X {\displaystyle X} be a metric space. Every lower semicontinuous function f : X → R ¯ {\displaystyle f:X\to {\overline {\mathbb {R} }}} is the limit of a point-wise increasing sequence of extended real-valued continuous functions on X . {\displaystyle X.} In particular, there exists a sequence { f i } {\displaystyle \{f_{i}\}} of continuous functions f i : X → R ¯ {\displaystyle f_{i}:X\to {\overline {\mathbb {R} }}} such that f i ( x ) ≤ f i + 1 ( x ) ∀ x ∈ X , ∀ i = 0 , 1 , 2 , … {\displaystyle f_{i}(x)\leq f_{i+1}(x)\quad \forall x\in X,\ \forall i=0,1,2,\dots } and lim i → ∞ f i ( x ) = f ( x ) ∀ x ∈ X . {\displaystyle \lim _{i\to \infty }f_{i}(x)=f(x)\quad \forall x\in X.} If f {\displaystyle f} does not take the value − ∞ {\displaystyle -\infty } , the continuous functions can be taken to be real-valued. 
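One standard way to realize the increasing continuous approximations in Baire's theorem on a metric space is the Lipschitz regularization f_i(x) = inf_y ( f(y) + i · d(x, y) ): each f_i is i-Lipschitz, lies below f, and increases pointwise to f when f is lower semicontinuous and bounded below. A numerical sketch on a one-dimensional grid (the discretized infimum and the grid bounds are approximations, not the exact construction):

```python
# Baire-type approximation of a lower semicontinuous function:
#   f_i(x) = inf_y ( f(y) + i * |x - y| )
# gives i-Lipschitz continuous functions increasing pointwise to f.

def f(x):
    # indicator of the open set (0, inf): lower semicontinuous
    return 1.0 if x > 0 else 0.0

GRID = [k / 1000.0 for k in range(-2000, 2001)]   # y-samples on [-2, 2]

def baire_approx(i, x):
    """Grid approximation of the i-th Lipschitz regularization of f at x."""
    return min(f(y) + i * abs(x - y) for y in GRID)

# Monotone in i, below f, and converging to f away from the jump:
assert baire_approx(1, 0.5) <= baire_approx(2, 0.5) <= f(0.5)
assert abs(baire_approx(100, 0.5) - f(0.5)) < 0.05
assert baire_approx(100, -0.5) == 0.0
```

Near the jump at 0 the approximations rise more slowly, which is exactly why the pointwise limit is only lower semicontinuous rather than continuous.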
Additionally, every upper semicontinuous function f : X → R ¯ {\displaystyle f:X\to {\overline {\mathbb {R} }}} is the limit of a monotone decreasing sequence of extended real-valued continuous functions on X ; {\displaystyle X;} if f {\displaystyle f} does not take the value ∞ , {\displaystyle \infty ,} the continuous functions can be taken to be real-valued. Any upper semicontinuous function f : X → N {\displaystyle f:X\to \mathbb {N} } on an arbitrary topological space X {\displaystyle X} is locally constant on some dense open subset of X . {\displaystyle X.} If the topological space X {\displaystyle X} is sequential, then f : X → R {\displaystyle f:X\to \mathbb {R} } is upper semi-continuous if and only if it is sequentially upper semi-continuous, that is, if for any x ∈ X {\displaystyle x\in X} and any sequence ( x n ) n ⊂ X {\displaystyle (x_{n})_{n}\subset X} that converges towards x {\displaystyle x} , there holds lim sup n → ∞ f ( x n ) ⩽ f ( x ) {\displaystyle \limsup _{n\to \infty }f(x_{n})\leqslant f(x)} . Equivalently, in a sequential space, f {\displaystyle f} is upper semicontinuous if and only if its superlevel sets { x ∈ X | f ( x ) ⩾ y } {\displaystyle \{\,x\in X\,|\,f(x)\geqslant y\,\}} are sequentially closed for all y ∈ R {\displaystyle y\in \mathbb {R} } . In general, upper semicontinuous functions are sequentially upper semicontinuous, but the converse may be false. == Semicontinuity of set-valued functions == For set-valued functions, several concepts of semicontinuity have been defined, namely upper, lower, outer, and inner semicontinuity, as well as upper and lower hemicontinuity. A set-valued function F {\displaystyle F} from a set A {\displaystyle A} to a set B {\displaystyle B} is written F : A ⇉ B . {\displaystyle F:A\rightrightarrows B.} For each x ∈ A , {\displaystyle x\in A,} the function F {\displaystyle F} defines a set F ( x ) ⊂ B . 
{\displaystyle F(x)\subset B.} The preimage of a set S ⊂ B {\displaystyle S\subset B} under F {\displaystyle F} is defined as F − 1 ( S ) := { x ∈ A : F ( x ) ∩ S ≠ ∅ } . {\displaystyle F^{-1}(S):=\{x\in A:F(x)\cap S\neq \varnothing \}.} That is, F − 1 ( S ) {\displaystyle F^{-1}(S)} is the set that contains every point x {\displaystyle x} in A {\displaystyle A} such that F ( x ) {\displaystyle F(x)} is not disjoint from S {\displaystyle S} . === Upper and lower semicontinuity === A set-valued map F : R m ⇉ R n {\displaystyle F:\mathbb {R} ^{m}\rightrightarrows \mathbb {R} ^{n}} is upper semicontinuous at x ∈ R m {\displaystyle x\in \mathbb {R} ^{m}} if for every open set U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} such that F ( x ) ⊂ U {\displaystyle F(x)\subset U} , there exists a neighborhood V {\displaystyle V} of x {\displaystyle x} such that F ( V ) ⊂ U . {\displaystyle F(V)\subset U.} : Def. 2.1 A set-valued map F : R m ⇉ R n {\displaystyle F:\mathbb {R} ^{m}\rightrightarrows \mathbb {R} ^{n}} is lower semicontinuous at x ∈ R m {\displaystyle x\in \mathbb {R} ^{m}} if for every open set U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} such that x ∈ F − 1 ( U ) , {\displaystyle x\in F^{-1}(U),} there exists a neighborhood V {\displaystyle V} of x {\displaystyle x} such that V ⊂ F − 1 ( U ) . {\displaystyle V\subset F^{-1}(U).} : Def. 2.2 Upper and lower set-valued semicontinuity are also defined more generally for set-valued maps between topological spaces by replacing R m {\displaystyle \mathbb {R} ^{m}} and R n {\displaystyle \mathbb {R} ^{n}} in the above definitions with arbitrary topological spaces. Note that there is no direct correspondence between single-valued and set-valued notions of lower and upper semicontinuity.
An upper semicontinuous single-valued function is not necessarily upper semicontinuous when considered as a set-valued map.: 18 For example, the function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined by f ( x ) = { − 1 if x < 0 , 1 if x ≥ 0 {\displaystyle f(x)={\begin{cases}-1&{\mbox{if }}x<0,\\1&{\mbox{if }}x\geq 0\end{cases}}} is upper semicontinuous in the single-valued sense but the set-valued map x ↦ F ( x ) := { f ( x ) } {\displaystyle x\mapsto F(x):=\{f(x)\}} is not upper semicontinuous in the set-valued sense. === Inner and outer semicontinuity === A set-valued function F : R m ⇉ R n {\displaystyle F:\mathbb {R} ^{m}\rightrightarrows \mathbb {R} ^{n}} is called inner semicontinuous at x {\displaystyle x} if for every y ∈ F ( x ) {\displaystyle y\in F(x)} and every convergent sequence ( x i ) {\displaystyle (x_{i})} in R m {\displaystyle \mathbb {R} ^{m}} such that x i → x {\displaystyle x_{i}\to x} , there exists a sequence ( y i ) {\displaystyle (y_{i})} in R n {\displaystyle \mathbb {R} ^{n}} such that y i → y {\displaystyle y_{i}\to y} and y i ∈ F ( x i ) {\displaystyle y_{i}\in F\left(x_{i}\right)} for all sufficiently large i ∈ N . {\displaystyle i\in \mathbb {N} .} A set-valued function F : R m ⇉ R n {\displaystyle F:\mathbb {R} ^{m}\rightrightarrows \mathbb {R} ^{n}} is called outer semicontinuous at x {\displaystyle x} if for every convergent sequence ( x i ) {\displaystyle (x_{i})} in R m {\displaystyle \mathbb {R} ^{m}} such that x i → x {\displaystyle x_{i}\to x} and every convergent sequence ( y i ) {\displaystyle (y_{i})} in R n {\displaystyle \mathbb {R} ^{n}} such that y i ∈ F ( x i ) {\displaystyle y_{i}\in F(x_{i})} for each i ∈ N , {\displaystyle i\in \mathbb {N} ,} the sequence ( y i ) {\displaystyle (y_{i})} converges to a point in F ( x ) {\displaystyle F(x)} (that is, lim i → ∞ y i ∈ F ( x ) {\displaystyle \lim _{i\to \infty }y_{i}\in F(x)} ).
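The step-function counterexample above can be checked on finitely many sample points: the open set U = (0.5, 1.5) contains F(0) = {1}, yet every neighborhood V of 0 contains negative points with F(x) = {−1}, so F(V) is never contained in U. A sketch (a finite spot check, not a proof over all neighborhoods):

```python
# Single-valued upper semicontinuity does not imply set-valued upper
# semicontinuity: take F(x) = {f(x)} for the step function f.

def f(x):
    return -1 if x < 0 else 1

def F(x):
    return {f(x)}

def in_U(v):
    """Membership test for the open set U = (0.5, 1.5)."""
    return 0.5 < v < 1.5

assert all(in_U(v) for v in F(0))      # F(0) = {1} is contained in U

# but F(V) escapes U for every symmetric neighborhood V = (-eps, eps) of 0:
for eps in (1.0, 0.1, 0.001):
    image = set().union(*(F(x) for x in (-eps / 2, 0, eps / 2)))
    assert not all(in_U(v) for v in image)   # -1 lies in F(V) but not in U
```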
== See also == Directional continuity – Mathematical function with no sudden changesPages displaying short descriptions of redirect targets Katětov–Tong insertion theorem – On existence of a continuous function between semicontinuous upper and lower bounds Hemicontinuity – Semicontinuity for set-valued functions Càdlàg – Right continuous function with left limits Fatou's lemma – Lemma in measure theory == Notes == == References == == Bibliography == Benesova, B.; Kruzik, M. (2017). "Weak Lower Semicontinuity of Integral Functionals and Applications". SIAM Review. 59 (4): 703–766. arXiv:1601.00390. doi:10.1137/16M1060947. S2CID 119668631. Bourbaki, Nicolas (1998). Elements of Mathematics: General Topology, 1–4. Springer. ISBN 0-201-00636-7. Bourbaki, Nicolas (1998). Elements of Mathematics: General Topology, 5–10. Springer. ISBN 3-540-64563-2. Engelking, Ryszard (1989). General Topology. Heldermann Verlag, Berlin. ISBN 3-88538-006-4. Gelbaum, Bernard R.; Olmsted, John M.H. (2003). Counterexamples in analysis. Dover Publications. ISBN 0-486-42875-3. Hyers, Donald H.; Isac, George; Rassias, Themistocles M. (1997). Topics in nonlinear analysis & applications. World Scientific. ISBN 981-02-2534-2. Stromberg, Karl (1981). Introduction to Classical Real Analysis. Wadsworth. ISBN 978-0-534-98012-2. Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240. Zălinescu, Constantin (30 July 2002). Convex Analysis in General Vector Spaces. River Edge, N.J. London: World Scientific Publishing. ISBN 978-981-4488-15-0. MR 1921556. OCLC 285163112 – via Internet Archive.
|
Wikipedia:Semi-differentiability#0
|
In calculus, the notions of one-sided differentiability and semi-differentiability of a real-valued function f of a real variable are weaker than differentiability. Specifically, the function f is said to be right differentiable at a point a if, roughly speaking, a derivative can be defined as the function's argument x moves to a from the right, and left differentiable at a if the derivative can be defined as x moves to a from the left. == One-dimensional case == In mathematics, a left derivative and a right derivative are derivatives (rates of change of a function) defined for movement in one direction only (left or right; that is, to lower or higher values) by the argument of a function. === Definitions === Let f denote a real-valued function defined on a subset I of the real numbers. If a ∈ I is a limit point of I ∩ [a,∞) and the one-sided limit ∂ + f ( a ) := lim x → a + x ∈ I f ( x ) − f ( a ) x − a {\displaystyle \partial _{+}f(a):=\lim _{\scriptstyle x\to a^{+} \atop \scriptstyle x\in I}{\frac {f(x)-f(a)}{x-a}}} exists as a real number, then f is called right differentiable at a and the limit ∂+f(a) is called the right derivative of f at a. If a ∈ I is a limit point of I ∩ (–∞,a] and the one-sided limit ∂ − f ( a ) := lim x → a − x ∈ I f ( x ) − f ( a ) x − a {\displaystyle \partial _{-}f(a):=\lim _{\scriptstyle x\to a^{-} \atop \scriptstyle x\in I}{\frac {f(x)-f(a)}{x-a}}} exists as a real number, then f is called left differentiable at a and the limit ∂–f(a) is called the left derivative of f at a. If a ∈ I is a limit point of I ∩ [a,∞) and I ∩ (–∞,a] and if f is left and right differentiable at a, then f is called semi-differentiable at a. If the left and right derivatives are equal, then they have the same value as the usual ("bidirectional") derivative. 
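The one-sided limits in these definitions can be estimated with one-sided difference quotients. A minimal numerical sketch, assuming a small but finite step h stands in for the limit h → 0+:

```python
def right_derivative(f, a, h=1e-8):
    """One-sided difference quotient approximating the right derivative
    of f at a (the limit as h -> 0+ of (f(a+h) - f(a)) / h)."""
    return (f(a + h) - f(a)) / h

def left_derivative(f, a, h=1e-8):
    """One-sided difference quotient approximating the left derivative
    of f at a (the limit as h -> 0+ of (f(a) - f(a-h)) / h)."""
    return (f(a) - f(a - h)) / h

f = abs   # |x| is semi-differentiable, but not differentiable, at 0

assert abs(right_derivative(f, 0.0) - 1.0) < 1e-6    # right derivative = 1
assert abs(left_derivative(f, 0.0) - (-1.0)) < 1e-6  # left derivative = -1
```

Because the two one-sided values disagree, |x| is semi-differentiable at 0 without being differentiable there, matching the remark about equal one-sided derivatives recovering the usual derivative.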
One can also define a symmetric derivative, which equals the arithmetic mean of the left and right derivatives (when they both exist), so the symmetric derivative may exist when the usual derivative does not. === Remarks and examples === A function is differentiable at an interior point a of its domain if and only if it is semi-differentiable at a and the left derivative is equal to the right derivative. An example of a semi-differentiable function that is not differentiable is the absolute value function f ( x ) = | x | {\displaystyle f(x)=|x|} at a = 0. We easily find ∂ − f ( 0 ) = − 1 , ∂ + f ( 0 ) = 1. {\displaystyle \partial _{-}f(0)=-1,\partial _{+}f(0)=1.} If a function is semi-differentiable at a point a, then it is continuous at a. The indicator function 1[0,∞) is right differentiable at every real a, but discontinuous at zero (note that this indicator function is not left differentiable at zero). === Application === If a real-valued, differentiable function f, defined on an interval I of the real line, has zero derivative everywhere, then it is constant, as an application of the mean value theorem shows. The assumption of differentiability can be weakened to continuity and one-sided differentiability of f; the version for right differentiable functions is given below, and the version for left differentiable functions is analogous. === Differential operators acting to the left or the right === Another common use is to describe derivatives treated as binary operators in infix notation, in which the derivative is applied either to the left or to the right operand. This is useful, for example, when defining generalizations of the Poisson bracket. For a pair of functions f and g, the left and right derivatives are respectively defined as f ∂ x ← g = ∂ f ∂ x ⋅ g {\displaystyle f{\stackrel {\leftarrow }{\partial _{x}}}g={\frac {\partial f}{\partial x}}\cdot g} f ∂ x → g = f ⋅ ∂ g ∂ x . 
{\displaystyle f{\stackrel {\rightarrow }{\partial _{x}}}g=f\cdot {\frac {\partial g}{\partial x}}.} In bra–ket notation, the derivative operator can act on the right operand as the regular derivative or on the left as the negative derivative. == Higher-dimensional case == The above definition can be generalized to real-valued functions f defined on subsets of Rn using a weaker version of the directional derivative. Let a be an interior point of the domain of f. Then f is called semi-differentiable at the point a if for every direction u ∈ Rn the limit ∂ u f ( a ) = lim h → 0 + f ( a + h u ) − f ( a ) h {\displaystyle \partial _{u}f(a)=\lim _{h\to 0^{+}}{\frac {f(a+h\,u)-f(a)}{h}}} with h ∈ R exists as a real number. Semi-differentiability is thus weaker than Gateaux differentiability, for which one takes the limit above with h → 0 without restricting h to only positive values. For example, the function f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)={\sqrt {x^{2}+y^{2}}}} is semi-differentiable at ( 0 , 0 ) {\displaystyle (0,0)} , but not Gateaux differentiable there. Indeed, f ( h x , h y ) = | h | f ( x , y ) and for h ≥ 0 , f ( h x , h y ) = h f ( x , y ) , f ( h x , h y ) / h = f ( x , y ) , {\displaystyle f(hx,hy)=|h|f(x,y){\text{ and for }}h\geq 0,f(hx,hy)=hf(x,y),f(hx,hy)/h=f(x,y),} with a = 0 , u = ( x , y ) , ∂ u f ( 0 ) = f ( x , y ) {\displaystyle a=0,u=(x,y),\partial _{u}f(0)=f(x,y)} . (Note that this generalization is not equivalent to the original definition for n = 1 since the concept of one-sided limit points is replaced with the stronger concept of interior points.) == Properties == Any convex function on a convex open subset of Rn is semi-differentiable. While every semi-differentiable function of one variable is continuous, this is no longer true for several variables. == Generalization == Instead of real-valued functions, one can consider functions taking values in Rn or in a Banach space. 
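The higher-dimensional definition can likewise be checked numerically. The sketch below (the helper name and step size are illustrative choices) estimates the one-sided directional derivative of f(x, y) = √(x² + y²) at the origin, where it equals f(u):

```python
import math

def semi_derivative(f, a, u, h=1e-7):
    """Numerical estimate of the one-sided directional derivative
    lim_{h -> 0+} (f(a + h u) - f(a)) / h."""
    ax, ay = a
    ux, uy = u
    return (f(ax + h * ux, ay + h * uy) - f(ax, ay)) / h

f = lambda x, y: math.hypot(x, y)  # f(x, y) = sqrt(x^2 + y^2)

# At the origin, the one-sided derivative in direction u equals f(u) = |u|,
# and it takes the same value for u and -u.
for u in [(1.0, 0.0), (0.0, 1.0), (3.0, 4.0), (-3.0, -4.0)]:
    print(u, semi_derivative(f, (0.0, 0.0), u))  # close to |u|
```

Since ∂₋ᵤf(0) = |u| rather than −|u|, the map u ↦ ∂ᵤf(0) is not linear in u, which is exactly why f fails to be Gateaux differentiable at the origin.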
== See also == Directional derivative Partial derivative Gradient Gateaux derivative Fréchet derivative Derivative (generalizations) Phase space formulation § Star product Dini derivatives == References == Preda, V.; Chiţescu, I. (1999). "On Constraint Qualification in Multiobjective Optimization Problems: Semidifferentiable Case". J. Optim. Theory Appl. 100 (2): 417–433. doi:10.1023/A:1021794505701. S2CID 119868047.
|
Wikipedia:Semi-elliptic operator#0
|
In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions. Elliptic operators are typical of potential theory, and they appear frequently in electrostatics and continuum mechanics. Elliptic regularity implies that their solutions tend to be smooth functions (if the coefficients in the operator are smooth). Steady-state solutions to hyperbolic and parabolic equations generally solve elliptic equations. == Definitions == Let L {\displaystyle L} be a linear differential operator of order m on a domain Ω {\displaystyle \Omega } in Rn given by L u = ∑ | α | ≤ m a α ( x ) ∂ α u {\displaystyle Lu=\sum _{|\alpha |\leq m}a_{\alpha }(x)\partial ^{\alpha }u} where α = ( α 1 , … , α n ) {\displaystyle \alpha =(\alpha _{1},\dots ,\alpha _{n})} denotes a multi-index, and ∂ α u = ∂ 1 α 1 ⋯ ∂ n α n u {\displaystyle \partial ^{\alpha }u=\partial _{1}^{\alpha _{1}}\cdots \partial _{n}^{\alpha _{n}}u} denotes the partial derivative of order α i {\displaystyle \alpha _{i}} in x i {\displaystyle x_{i}} . Then L {\displaystyle L} is called elliptic if for every x in Ω {\displaystyle \Omega } and every non-zero ξ {\displaystyle \xi } in Rn, ∑ | α | = m a α ( x ) ξ α ≠ 0 , {\displaystyle \sum _{|\alpha |=m}a_{\alpha }(x)\xi ^{\alpha }\neq 0,} where ξ α = ξ 1 α 1 ⋯ ξ n α n {\displaystyle \xi ^{\alpha }=\xi _{1}^{\alpha _{1}}\cdots \xi _{n}^{\alpha _{n}}} . In many applications, this condition is not strong enough, and instead a uniform ellipticity condition may be imposed for operators of order m = 2k: ( − 1 ) k ∑ | α | = 2 k a α ( x ) ξ α > C | ξ | 2 k , {\displaystyle (-1)^{k}\sum _{|\alpha |=2k}a_{\alpha }(x)\xi ^{\alpha }>C|\xi |^{2k},} where C is a positive constant. 
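For a second-order operator in divergence form with a symmetric coefficient matrix A (as in Example 2 below), the principal symbol is the quadratic form ξ ↦ aⁱʲξᵢξⱼ, and the uniform ellipticity constant is governed by the smallest eigenvalue of A. A small numerical illustration in Python; the matrix A here is an arbitrary positive definite sample, not taken from the article:

```python
import numpy as np

# Principal symbol of Lu = -d_i(a^{ij} d_j u):  p(xi) = a^{ij} xi_i xi_j.
# Uniform ellipticity asks p(xi) >= C |xi|^2 for some constant C > 0.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # sample symmetric positive definite coefficients

symbol = lambda xi: xi @ A @ xi

# For symmetric A, the sharp constant C is the smallest eigenvalue of A.
C = np.linalg.eigvalsh(A).min()
print(C > 0)  # True: the operator is uniformly elliptic

# Spot-check the bound on random unit directions:
rng = np.random.default_rng(0)
for _ in range(100):
    xi = rng.normal(size=2)
    xi /= np.linalg.norm(xi)
    assert symbol(xi) >= C - 1e-12
```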
Note that ellipticity only depends on the highest-order terms. A nonlinear operator L ( u ) = F ( x , u , ( ∂ α u ) | α | ≤ m ) {\displaystyle L(u)=F\left(x,u,\left(\partial ^{\alpha }u\right)_{|\alpha |\leq m}\right)} is elliptic if its linearization is; i.e. the first-order Taylor expansion with respect to u and its derivatives about any point is an elliptic operator. Example 1 The negative of the Laplacian in Rd given by − Δ u = − ∑ i = 1 d ∂ i 2 u {\displaystyle -\Delta u=-\sum _{i=1}^{d}\partial _{i}^{2}u} is a uniformly elliptic operator. The Laplace operator occurs frequently in electrostatics. If ρ is the charge density within some region Ω, the potential Φ must satisfy the equation − Δ Φ = 4 π ρ . {\displaystyle -\Delta \Phi =4\pi \rho .} Example 2 Given a matrix-valued function A(x) which is uniformly positive definite for every x, having components aij, the operator L u = − ∂ i ( a i j ( x ) ∂ j u ) + b j ( x ) ∂ j u + c u {\displaystyle Lu=-\partial _{i}\left(a^{ij}(x)\partial _{j}u\right)+b^{j}(x)\partial _{j}u+cu} is elliptic. This is the most general form of a second-order divergence form linear elliptic differential operator. The Laplace operator is obtained by taking A = I. These operators also occur in electrostatics in polarized media. Example 3 For p a non-negative number, the p-Laplacian is a nonlinear elliptic operator defined by L ( u ) = − ∑ i = 1 d ∂ i ( | ∇ u | p − 2 ∂ i u ) . {\displaystyle L(u)=-\sum _{i=1}^{d}\partial _{i}\left(|\nabla u|^{p-2}\partial _{i}u\right).} A similar nonlinear operator occurs in glacier mechanics. The Cauchy stress tensor of ice, according to Glen's flow law, is given by τ i j = B ( ∑ k , l = 1 3 ( ∂ l u k ) 2 ) − 1 3 ⋅ 1 2 ( ∂ j u i + ∂ i u j ) {\displaystyle \tau _{ij}=B\left(\sum _{k,l=1}^{3}\left(\partial _{l}u_{k}\right)^{2}\right)^{-{\frac {1}{3}}}\cdot {\frac {1}{2}}\left(\partial _{j}u_{i}+\partial _{i}u_{j}\right)} for some constant B. 
The velocity of an ice sheet in steady state will then solve the nonlinear elliptic system ∑ j = 1 3 ∂ j τ i j + ρ g i − ∂ i p = Q , {\displaystyle \sum _{j=1}^{3}\partial _{j}\tau _{ij}+\rho g_{i}-\partial _{i}p=Q,} where ρ is the ice density, g is the gravitational acceleration vector, p is the pressure and Q is a forcing term. == Elliptic regularity theorems == Let L be an elliptic operator of order 2k with coefficients having 2k continuous derivatives. The Dirichlet problem for L is to find a function u, given a function f and some appropriate boundary values, such that Lu = f and such that u has the appropriate boundary values and normal derivatives. The existence theory for elliptic operators, using Gårding's inequality, the Lax–Milgram lemma and the Fredholm alternative, gives sufficient conditions for a weak solution u to exist in the Sobolev space Hk. For example, for a second-order elliptic operator as in Example 2: There is a number γ>0 such that for each μ>γ and each f ∈ L 2 ( U ) {\displaystyle f\in L^{2}(U)} , there exists a unique solution u ∈ H 0 1 ( U ) {\displaystyle u\in H_{0}^{1}(U)} of the boundary value problem L u + μ u = f in U , u = 0 on ∂ U {\displaystyle Lu+\mu u=f{\text{ in }}U,u=0{\text{ on }}\partial U} ; this follows from the Lax–Milgram lemma. Either (a) for any f ∈ L 2 ( U ) {\displaystyle f\in L^{2}(U)} , L u = f in U , u = 0 on ∂ U {\displaystyle Lu=f{\text{ in }}U,u=0{\text{ on }}\partial U} (1) has a unique solution, or (b) L u = 0 in U , u = 0 on ∂ U {\displaystyle Lu=0{\text{ in }}U,u=0{\text{ on }}\partial U} has a solution u ≢ 0 {\displaystyle u\not \equiv 0} ; this follows from the properties of compact operators and the Fredholm alternative. This situation is ultimately unsatisfactory, as the weak solution u might not have enough derivatives for the expression Lu to be well-defined in the classical sense. 
The elliptic regularity theorem guarantees that, provided f is square-integrable, u will in fact have 2k square-integrable weak derivatives. In particular, if f is infinitely-often differentiable, then so is u. For L as in Example 2: Interior regularity: If m is a natural number, a i j , b j , c ∈ C m + 1 ( U ) , f ∈ H m ( U ) {\displaystyle a^{ij},b^{j},c\in C^{m+1}(U),f\in H^{m}(U)} (2), and u ∈ H 0 1 ( U ) {\displaystyle u\in H_{0}^{1}(U)} is a weak solution to (1), then for any open set V in U with compact closure, ‖ u ‖ H m + 2 ( V ) ≤ C ( ‖ f ‖ H m ( U ) + ‖ u ‖ L 2 ( U ) ) {\displaystyle \|u\|_{H^{m+2}(V)}\leq C(\|f\|_{H^{m}(U)}+\|u\|_{L^{2}(U)})} (3), where C depends on U, V, L and m; in particular, u ∈ H l o c m + 2 ( U ) {\displaystyle u\in H_{loc}^{m+2}(U)} . This also holds if m is infinity, by the Sobolev embedding theorem. Boundary regularity: (2), together with the assumption that ∂ U {\displaystyle \partial U} is C m + 2 {\displaystyle C^{m+2}} , implies that (3) still holds after replacing V with U, i.e. u ∈ H m + 2 ( U ) {\displaystyle u\in H^{m+2}(U)} ; again this also holds if m is infinity. Any differential operator exhibiting this property is called a hypoelliptic operator; thus, every elliptic operator is hypoelliptic. The property also means that every fundamental solution of an elliptic operator is infinitely differentiable in any neighborhood not containing 0. As an application, suppose a function f {\displaystyle f} satisfies the Cauchy–Riemann equations. Since the Cauchy–Riemann equations form an elliptic operator, it follows that f {\displaystyle f} is smooth. 
Invertibility: For each μ>γ, L + μ I : L 2 ( U ) → L 2 ( U ) {\displaystyle L+\mu I:L^{2}(U)\rightarrow L^{2}(U)} admits a compact inverse. Eigenvalues and eigenvectors: If A is symmetric and b j, c are zero, then (1) the eigenvalues of L are real, positive, countable and unbounded, and (2) there is an orthonormal basis of L2(U) composed of eigenvectors of L. (See Spectral theorem.) Generates a semigroup on L2(U): −L generates a semigroup { S ( t ) ; t ≥ 0 } {\displaystyle \{S(t);t\geq 0\}} of bounded linear operators on L2(U) such that d d t S ( t ) u 0 = − L S ( t ) u 0 , ‖ S ( t ) ‖ ≤ e γ t {\displaystyle {\frac {d}{dt}}S(t)u_{0}=-LS(t)u_{0},\|S(t)\|\leq e^{\gamma t}} in the norm of L2(U), for every u 0 ∈ L 2 ( U ) {\displaystyle u_{0}\in L^{2}(U)} , by the Hille–Yosida theorem. == General definition == Let D {\displaystyle D} be a (possibly nonlinear) differential operator between vector bundles of any rank. Take its principal symbol σ ξ ( D ) {\displaystyle \sigma _{\xi }(D)} with respect to a one-form ξ {\displaystyle \xi } . (Basically, what we are doing is replacing the highest order covariant derivatives ∇ {\displaystyle \nabla } by vector fields ξ {\displaystyle \xi } .) We say D {\displaystyle D} is weakly elliptic if σ ξ ( D ) {\displaystyle \sigma _{\xi }(D)} is a linear isomorphism for every non-zero ξ {\displaystyle \xi } . We say D {\displaystyle D} is (uniformly) strongly elliptic if for some constant c > 0 {\displaystyle c>0} , ( [ σ ξ ( D ) ] ( v ) , v ) ≥ c ‖ v ‖ 2 {\displaystyle \left([\sigma _{\xi }(D)](v),v\right)\geq c\|v\|^{2}} for all ‖ ξ ‖ = 1 {\displaystyle \|\xi \|=1} and all v {\displaystyle v} . The definition of ellipticity in the previous part of the article is strong ellipticity. Here ( ⋅ , ⋅ ) {\displaystyle (\cdot ,\cdot )} is an inner product. Notice that the ξ {\displaystyle \xi } are covector fields or one-forms, but the v {\displaystyle v} are elements of the vector bundle upon which D {\displaystyle D} acts. 
The quintessential example of a (strongly) elliptic operator is the Laplacian (or its negative, depending upon convention). It is not hard to see that D {\displaystyle D} needs to be of even order for strong ellipticity to even be an option. Otherwise, just consider plugging in both ξ {\displaystyle \xi } and its negative. On the other hand, a weakly elliptic first-order operator, such as the Dirac operator can square to become a strongly elliptic operator, such as the Laplacian. The composition of weakly elliptic operators is weakly elliptic. Weak ellipticity is nevertheless strong enough for the Fredholm alternative, Schauder estimates, and the Atiyah–Singer index theorem. On the other hand, we need strong ellipticity for the maximum principle, and to guarantee that the eigenvalues are discrete, and their only limit point is infinity. == See also == Sobolev space Hypoelliptic operator Elliptic partial differential equation Hyperbolic partial differential equation Parabolic partial differential equation Hopf maximum principle Elliptic complex Ultrahyperbolic wave equation Semi-elliptic operator Weyl's lemma == Notes == == References == Evans, L. C. (2010) [1998], Partial differential equations, Graduate Studies in Mathematics, vol. 19 (2nd ed.), Providence, RI: American Mathematical Society, ISBN 978-0-8218-4974-3, MR 2597943 Review: Rauch, J. (2000). "Partial differential equations, by L. C. Evans" (PDF). Journal of the American Mathematical Society. 37 (3): 363–367. doi:10.1090/s0273-0979-00-00868-5. Gilbarg, D.; Trudinger, N. S. (1983) [1977], Elliptic partial differential equations of second order, Grundlehren der Mathematischen Wissenschaften, vol. 224 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-13025-3, MR 0737190 Shubin, M. A. (2001) [1994], "Elliptic operator", Encyclopedia of Mathematics, EMS Press == External links == Linear Elliptic Equations at EqWorld: The World of Mathematical Equations. 
Nonlinear Elliptic Equations at EqWorld: The World of Mathematical Equations.
|
Wikipedia:Semi-simplicity#0
|
In mathematics, semi-simplicity is a widespread concept in disciplines such as linear algebra, abstract algebra, representation theory, category theory, and algebraic geometry. A semi-simple object is one that can be decomposed into a sum of simple objects, and simple objects are those that do not contain non-trivial proper sub-objects. The precise definitions of these words depend on the context. For example, if G is a finite group, then a nontrivial finite-dimensional representation V over a field is said to be simple if the only subrepresentations it contains are either {0} or V (these are also called irreducible representations). Now Maschke's theorem says that any finite-dimensional representation of a finite group is a direct sum of simple representations (provided the characteristic of the base field does not divide the order of the group). So in the case of finite groups with this condition, every finite-dimensional representation is semi-simple. Especially in algebra and representation theory, "semi-simplicity" is also called complete reducibility. For example, Weyl's theorem on complete reducibility says a finite-dimensional representation of a semisimple compact Lie group is semisimple. A square matrix (in other words a linear operator T : V → V {\displaystyle T:V\to V} with V a finite-dimensional vector space) is said to be simple if its only invariant linear subspaces under T are {0} and V. If the field is algebraically closed (such as the complex numbers), then the only simple matrices are of size 1-by-1. A semi-simple matrix is one that is similar to a direct sum of simple matrices; if the field is algebraically closed, this is the same as being diagonalizable. These notions of semi-simplicity can be unified using the language of semi-simple modules, and generalized to semi-simple categories. 
== Introductory example of vector spaces == If one considers all vector spaces (over a field, such as the real numbers), the simple vector spaces are those that contain no proper nontrivial subspaces. Therefore, the one-dimensional vector spaces are the simple ones. So it is a basic result of linear algebra that any finite-dimensional vector space is the direct sum of simple vector spaces; in other words, all finite-dimensional vector spaces are semi-simple. == Semi-simple matrices == A square matrix or, equivalently, a linear operator T on a finite-dimensional vector space V is called semi-simple if every T-invariant subspace has a complementary T-invariant subspace. This is equivalent to the minimal polynomial of T being square-free. For vector spaces over an algebraically closed field F, semi-simplicity of a matrix is equivalent to diagonalizability. This is because such an operator always has an eigenvector; if it is, in addition, semi-simple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and thus by induction is diagonalizable. Conversely, diagonalizable operators are easily seen to be semi-simple, as invariant subspaces are direct sums of eigenspaces, and any eigenbasis for this subspace can be extended to an eigenbasis of the full space. == Semi-simple modules and rings == For a fixed ring R, a nontrivial R-module M is simple if it has no submodules other than 0 and M. An R-module M is semi-simple if every R-submodule of M is an R-module direct summand of M (the trivial module 0 is semi-simple, but not simple). For an R-module M, M is semi-simple if and only if it is the direct sum of simple modules (the trivial module is the empty direct sum). Finally, R is called a semi-simple ring if it is semi-simple as an R-module. As it turns out, this is equivalent to requiring that any finitely generated R-module M is semi-simple. Examples of semi-simple rings include fields and, more generally, finite direct products of fields. 
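The matrix criterion above (over an algebraically closed field, semi-simple means diagonalizable) can be probed numerically by testing whether the eigenvectors span the whole space. The following Python sketch is a floating-point heuristic, not a proof; for borderline cases one would check in exact arithmetic that the minimal polynomial is square-free:

```python
import numpy as np

def is_semisimple(A, tol=1e-10):
    """Heuristic test: over C, a matrix is semi-simple iff it is
    diagonalizable, i.e. its eigenvectors span C^n (numerical rank test)."""
    _, eigenvectors = np.linalg.eig(A)
    return np.linalg.matrix_rank(eigenvectors, tol=tol) == A.shape[0]

rotation = np.array([[0.0, -1.0],
                     [1.0,  0.0]])  # eigenvalues +/-i: diagonalizable over C
jordan = np.array([[1.0, 1.0],
                   [0.0, 1.0]])     # nontrivial Jordan block: not semi-simple
print(is_semisimple(rotation), is_semisimple(jordan))  # True False
```

Note that the rotation matrix is semi-simple but not diagonalizable over the real numbers, illustrating why the equivalence with diagonalizability requires an algebraically closed field.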
For a finite group G, Maschke's theorem asserts that the group ring R[G] over some ring R is semi-simple if and only if R is semi-simple and |G| is invertible in R. Since the theory of modules of R[G] is the same as the representation theory of G on R-modules, this fact is an important dichotomy, which causes modular representation theory, i.e., the case when |G| does divide the characteristic of R, to be more difficult than the case when |G| does not divide the characteristic, in particular if R is a field of characteristic zero. By the Artin–Wedderburn theorem, a unital Artinian ring R is semisimple if and only if it is (isomorphic to) M n 1 ( D 1 ) × M n 2 ( D 2 ) × ⋯ × M n r ( D r ) {\displaystyle M_{n_{1}}(D_{1})\times M_{n_{2}}(D_{2})\times \cdots \times M_{n_{r}}(D_{r})} , where each D i {\displaystyle D_{i}} is a division ring and M n ( D ) {\displaystyle M_{n}(D)} is the ring of n-by-n matrices with entries in D. An operator T is semi-simple in the sense above if and only if the subalgebra F [ T ] ⊆ End F ( V ) {\displaystyle F[T]\subseteq \operatorname {End} _{F}(V)} generated by the powers (i.e., iterations) of T inside the ring of endomorphisms of V is semi-simple. As indicated above, the theory of semi-simple rings is much easier than that of general rings. For example, any short exact sequence 0 → M ′ → M → M ″ → 0 {\displaystyle 0\to M'\to M\to M''\to 0} of modules over a semi-simple ring must split, i.e., M ≅ M ′ ⊕ M ″ {\displaystyle M\cong M'\oplus M''} . From the point of view of homological algebra, this means that there are no non-trivial extensions. The ring Z of integers is not semi-simple: Z is not the direct sum of nZ and Z/n. == Semi-simple categories == Many of the above notions of semi-simplicity are recovered by the concept of a semi-simple category C. Briefly, a category is a collection of objects and maps between such objects, the idea being that the maps between the objects preserve some structure inherent in these objects. 
For example, R-modules and R-linear maps between them form a category, for any ring R. An abelian category C is called semi-simple if there is a collection of simple objects X α ∈ C {\displaystyle X_{\alpha }\in C} , i.e., ones with no subobject other than the zero object 0 and X α {\displaystyle X_{\alpha }} itself, such that any object X is the direct sum (i.e., coproduct or, equivalently, product) of finitely many simple objects. It follows from Schur's lemma that the endomorphism ring End C ( X ) = Hom C ( X , X ) {\displaystyle \operatorname {End} _{C}(X)=\operatorname {Hom} _{C}(X,X)} in a semi-simple category is a product of matrix rings over division rings, i.e., semi-simple. Moreover, a ring R is semi-simple if and only if the category of finitely generated R-modules is semisimple. An example from Hodge theory is the category of polarizable pure Hodge structures, i.e., pure Hodge structures equipped with a suitable positive definite bilinear form. The presence of this so-called polarization causes the category of polarizable Hodge structures to be semi-simple. Another example from algebraic geometry is the category of pure motives of smooth projective varieties over a field k Mot ( k ) ∼ {\displaystyle \operatorname {Mot} (k)_{\sim }} modulo an adequate equivalence relation ∼ {\displaystyle \sim } . As was conjectured by Grothendieck and shown by Jannsen, this category is semi-simple if and only if the equivalence relation is numerical equivalence. This fact is a conceptual cornerstone in the theory of motives. Semisimple abelian categories also arise from a combination of a t-structure and a (suitably related) weight structure on a triangulated category. == Semi-simplicity in representation theory == One can ask whether the category of finite-dimensional representations of a group or a Lie algebra is semisimple, that is, whether every finite-dimensional representation decomposes as a direct sum of irreducible representations. 
The answer, in general, is no. For example, the representation of R {\displaystyle \mathbb {R} } given by Π ( x ) = ( 1 x 0 1 ) {\displaystyle \Pi (x)={\begin{pmatrix}1&x\\0&1\end{pmatrix}}} is not a direct sum of irreducibles. (There is precisely one nontrivial invariant subspace, the span of the first basis element, e 1 {\displaystyle e_{1}} .) On the other hand, if G {\displaystyle G} is compact, then every finite-dimensional representation Π {\displaystyle \Pi } of G {\displaystyle G} admits an inner product with respect to which Π {\displaystyle \Pi } is unitary, showing that Π {\displaystyle \Pi } decomposes as a sum of irreducibles. Similarly, if g {\displaystyle {\mathfrak {g}}} is a complex semisimple Lie algebra, every finite-dimensional representation of g {\displaystyle {\mathfrak {g}}} is a sum of irreducibles. Weyl's original proof of this used the unitarian trick: Every such g {\displaystyle {\mathfrak {g}}} is the complexification of the Lie algebra of a simply connected compact Lie group K {\displaystyle K} . Since K {\displaystyle K} is simply connected, there is a one-to-one correspondence between the finite-dimensional representations of K {\displaystyle K} and of g {\displaystyle {\mathfrak {g}}} . Thus, the just-mentioned result about representations of compact groups applies. It is also possible to prove semisimplicity of representations of g {\displaystyle {\mathfrak {g}}} directly by algebraic means, as in Section 10.3 of Hall's book. See also: Fusion category (which are semisimple). == See also == A semisimple Lie algebra is a Lie algebra that is a direct sum of simple Lie algebras. A semisimple algebraic group is a linear algebraic group whose radical of the identity component is trivial. Semisimple algebra Semisimple representation == References == Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 
222 (2nd ed.), Springer == External links == MathOverflow:Are abelian non-degenerate tensor categories semisimple? Semisimple category at the nLab
|
Wikipedia:Semi-symmetric graph#0
|
In the mathematical field of graph theory, a semi-symmetric graph is an undirected graph that is edge-transitive and regular, but not vertex-transitive. In other words, a graph is semi-symmetric if each vertex has the same number of incident edges, and there is a symmetry taking any of the graph's edges to any other of its edges, but there is some pair of vertices such that no symmetry maps the first into the second. == Properties == A semi-symmetric graph must be bipartite, and its automorphism group must act transitively on each of the two vertex sets of the bipartition (in fact, regularity is not required for this property to hold). For instance, in the diagram of the Folkman graph shown here, green vertices cannot be mapped to red ones by any automorphism, but every two vertices of the same color are symmetric with each other. == History == Semi-symmetric graphs were first studied by E. Dauber, a student of F. Harary, in a paper, no longer available, titled "On line- but not point-symmetric graphs". This was seen by Jon Folkman, whose paper, published in 1967, includes the smallest semi-symmetric graph, now known as the Folkman graph, on 20 vertices. The term "semi-symmetric" was first used by Klin et al. in a paper they published in 1978. == Cubic graphs == The smallest cubic semi-symmetric graph (that is, one in which each vertex is incident to exactly three edges) is the Gray graph on 54 vertices. It was first observed to be semi-symmetric by Bouwer (1968). It was proven to be the smallest cubic semi-symmetric graph by Dragan Marušič and Aleksander Malnič. All the cubic semi-symmetric graphs on up to 10000 vertices are known. According to Conder, Malnič, Marušič and Potočnik, the four smallest possible cubic semi-symmetric graphs after the Gray graph are the Iofinova–Ivanov graph on 110 vertices, the Ljubljana graph on 112 vertices, a graph on 120 vertices with girth 8 and the Tutte 12-cage. 
== References == == External links == Weisstein, Eric W., "Semisymmetric Graph", MathWorld
|
Wikipedia:Semigroup Forum#0
|
Semigroup Forum (print ISSN 0037-1912, electronic ISSN 1432-2137) is a mathematics research journal published by Springer. The journal serves as a platform for the speedy and efficient transmission of information on current research in semigroup theory. Coverage in the journal includes: algebraic semigroups, topological semigroups, partially ordered semigroups, semigroups of measures and harmonic analysis on semigroups, transformation semigroups, and applications of semigroup theory to other disciplines such as ring theory, category theory, automata, and logic. Semigroups of operators were initially considered off-topic, but began being included in the journal in 1985. == Contents == Semigroup Forum features survey and research articles. It also contains research announcements, which describe new results, mostly without proofs, of full length papers appearing elsewhere as well as short notes, which detail such information as new proofs, significant generalizations of known facts, comments on unsolved problems, and historical remarks. In addition, the journal contains research problems; announcements of conferences, seminars, and symposia on semigroup theory; abstracts and bibliographical items; as well as listings of books, papers, and lecture notes of interest. == History == The journal published its first issue in 1970. It is indexed in Science Citation Index Expanded, Journal Citation Reports/Science Edition, SCOPUS, and Zentralblatt Math. "Semigroup Forum was a pioneering journal ... one of the early instances of a highly specialized journal, of which there are now many. Indeed, it was during the 1960s that many of the current specialised journals began to appear, probably in connection with research specialization ...among many other examples, the journals Topology, Journal of Algebra, Journal of Combinatorial Theory, and Journal of Number Theory were launched in 1962, 1964, 1966 and 1969 respectively. 
Semigroup Forum simply followed in this trend, with academic publishers realizing that there was a market for such narrowly focused journals." This journal has been called "in many ways a point of crystallization for semigroup theory and its community", and "an indicator of a field which is mathematically active". == References ==
|
Wikipedia:Semilinear map#0
|
In linear algebra, particularly projective geometry, a semilinear map between vector spaces V and W over a field K is a function that is a linear map "up to a twist", hence semi-linear, where "twist" means "field automorphism of K". Explicitly, it is a function T : V → W that is: additive with respect to vector addition: T ( v + v ′ ) = T ( v ) + T ( v ′ ) {\displaystyle T(v+v')=T(v)+T(v')} there exists a field automorphism θ of K such that T ( λ v ) = θ ( λ ) T ( v ) {\displaystyle T(\lambda v)=\theta (\lambda )T(v)} . If such an automorphism exists and T is nonzero, it is unique, and T is called θ-semilinear. Where the domain and codomain are the same space (i.e. T : V → V), it may be termed a semilinear transformation. The invertible semilinear transforms of a given vector space V (for all choices of field automorphism) form a group, called the general semilinear group and denoted Γ L ( V ) , {\displaystyle \operatorname {\Gamma L} (V),} by analogy with and extending the general linear group. In the special case where the field is the complex numbers C {\displaystyle \mathbb {C} } and the automorphism is complex conjugation, a semilinear map is called an antilinear map. Similar notation (replacing Latin characters with Greek ones) is used for semilinear analogs of more restricted linear transformations; formally, the semidirect product of a linear group with the Galois group of field automorphisms. For example, PΣU is used for the semilinear analogs of the projective special unitary group PSU. Note, however, that these generalized semilinear groups are not well-defined, as pointed out in (Bray, Holt & Roney-Dougal 2009) – isomorphic classical groups G and H (subgroups of SL) may have non-isomorphic semilinear extensions. At the level of semidirect products, this corresponds to different actions of the Galois group on a given abstract group, a semidirect product depending on two groups and an action. 
If the extension is non-unique, there are exactly two semilinear extensions; for example, symplectic groups have a unique semilinear extension, while SU(n, q) has two extensions if n is even and q is odd, and likewise for PSU. == Definition == A map f : V → W for vector spaces V and W over fields K and L respectively is σ-semilinear, or simply semilinear, if there exists a field homomorphism σ : K → L such that for all x, y in V and λ in K it holds that f ( x + y ) = f ( x ) + f ( y ) , {\displaystyle f(x+y)=f(x)+f(y),} f ( λ x ) = σ ( λ ) f ( x ) . {\displaystyle f(\lambda x)=\sigma (\lambda )f(x).} A given embedding σ of a field K in L allows us to identify K with a subfield of L, making a σ-semilinear map a K-linear map under this identification. However, a map that is τ-semilinear for a distinct embedding τ ≠ σ will not be K-linear with respect to the original identification σ, unless f is identically zero. More generally, a map ψ : M → N between a right R-module M and a left S-module N is σ-semilinear if there exists a ring antihomomorphism σ : R → S such that for all x, y in M and λ in R it holds that ψ ( x + y ) = ψ ( x ) + ψ ( y ) , {\displaystyle \psi (x+y)=\psi (x)+\psi (y),} ψ ( x λ ) = σ ( λ ) ψ ( x ) . {\displaystyle \psi (x\lambda )=\sigma (\lambda )\psi (x).} The term semilinear applies for any combination of left and right modules with suitable adjustment of the above expressions, with σ being a homomorphism as needed. The pair (ψ, σ) is referred to as a dimorphism. == Related == === Transpose === Let σ : R → S {\displaystyle \sigma :R\to S} be a ring isomorphism, M {\displaystyle M} a right R {\displaystyle R} -module and N {\displaystyle N} a right S {\displaystyle S} -module, and ψ : M → N {\displaystyle \psi :M\to N} a σ {\displaystyle \sigma } -semilinear map. 
Define the transpose of ψ {\displaystyle \psi } as the mapping t ψ : N ∗ → M ∗ {\displaystyle {}^{t}\psi :N^{*}\to M^{*}} that satisfies ⟨ y , ψ ( x ) ⟩ = σ ( ⟨ t ψ ( y ) , x ⟩ ) for all y ∈ N ∗ , and all x ∈ M . {\displaystyle \langle y,\psi (x)\rangle =\sigma \left(\left\langle {}^{\text{t}}\psi (y),x\right\rangle \right)\quad {\text{ for all }}y\in N^{*},{\text{ and all }}x\in M.} This is a σ − 1 {\displaystyle \sigma ^{-1}} -semilinear map. === Properties === Let σ : R → S {\displaystyle \sigma :R\to S} be a ring isomorphism, M {\displaystyle M} a right R {\displaystyle R} -module and N {\displaystyle N} a right S {\displaystyle S} -module, and ψ : M → N {\displaystyle \psi :M\to N} a σ {\displaystyle \sigma } -semilinear map. The mapping M → R : x ↦ σ − 1 ( ⟨ y , ψ ( x ) ⟩ ) , y ∈ N ∗ {\displaystyle M\to R:x\mapsto \sigma ^{-1}(\langle y,\psi (x)\rangle ),\quad y\in N^{*}} defines an R {\displaystyle R} -linear form. == Examples == Let K = C , V = C n , {\displaystyle K=\mathbf {C} ,V=\mathbf {C} ^{n},} with standard basis e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} . Define the map f : V → V {\displaystyle f\colon V\to V} by f ( ∑ i = 1 n z i e i ) = ∑ i = 1 n z ¯ i e i {\displaystyle f\left(\sum _{i=1}^{n}z_{i}e_{i}\right)=\sum _{i=1}^{n}{\bar {z}}_{i}e_{i}} f is semilinear (with respect to the complex conjugation field automorphism) but not linear. Let K = GF ( q ) {\displaystyle K=\operatorname {GF} (q)} – the Galois field of order q = p i {\displaystyle q=p^{i}} , p the characteristic. Let ℓ θ = ℓ p {\displaystyle \ell ^{\theta }=\ell ^{p}} . By the Freshman's dream it is known that this is a field automorphism. To every linear map f : V → W {\displaystyle f\colon V\to W} between vector spaces V and W over K we can establish a θ {\displaystyle \theta } -semilinear map f ~ ( ∑ i = 1 n ℓ i e i ) := f ( ∑ i = 1 n ℓ i θ e i ) . 
{\displaystyle {\widetilde {f}}\left(\sum _{i=1}^{n}\ell _{i}e_{i}\right):=f\left(\sum _{i=1}^{n}\ell _{i}^{\theta }e_{i}\right).} Indeed, every linear map can be converted into a semilinear map in such a way. This is part of a general observation collected into the following result. Let R {\displaystyle R} be a noncommutative ring, M {\displaystyle M} a left R {\displaystyle R} -module, and α {\displaystyle \alpha } an invertible element of R {\displaystyle R} . Define the map φ : M → M : x ↦ α x {\displaystyle \varphi \colon M\to M\colon x\mapsto \alpha x} , so φ ( λ u ) = α λ u = ( α λ α − 1 ) α u = σ ( λ ) φ ( u ) {\displaystyle \varphi (\lambda u)=\alpha \lambda u=(\alpha \lambda \alpha ^{-1})\alpha u=\sigma (\lambda )\varphi (u)} , where σ {\displaystyle \sigma } is an inner automorphism of R {\displaystyle R} . Thus, the homothety x ↦ α x {\displaystyle x\mapsto \alpha x} need not be a linear map, but is σ {\displaystyle \sigma } -semilinear. == General semilinear group == Given a vector space V, the set of all invertible semilinear transformations V → V (over all field automorphisms) is the group ΓL(V). Given a vector space V over K, ΓL(V) decomposes as the semidirect product Γ L ( V ) = GL ( V ) ⋊ Aut ( K ) , {\displaystyle \operatorname {\Gamma L} (V)=\operatorname {GL} (V)\rtimes \operatorname {Aut} (K),} where Aut(K) is the group of automorphisms of K. Similarly, semilinear transforms of other linear groups can be defined as the semidirect product with the automorphism group, or more intrinsically as the group of semilinear maps of a vector space preserving some properties. We identify Aut(K) with a subgroup of ΓL(V) by fixing a basis B for V and defining the semilinear maps: ∑ b ∈ B ℓ b b ↦ ∑ b ∈ B ℓ b σ b {\displaystyle \sum _{b\in B}\ell _{b}b\mapsto \sum _{b\in B}\ell _{b}^{\sigma }b} for any σ ∈ Aut ( K ) {\displaystyle \sigma \in \operatorname {Aut} (K)} . We shall denote this subgroup by Aut(K)B.
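The Frobenius example above and the Aut(K)B construction just described can be checked exhaustively over a small field. The sketch below builds GF(9) = GF(3)[i]/(i² + 1) by hand (the pair representation and the helper names `gadd`, `gmul`, `frob` are assumptions for illustration only), verifies that θ(x) = x³ is a nontrivial field automorphism, and checks that the basis-wise map Σ ℓ_b b ↦ Σ ℓ_b^θ b on GF(9)² is θ-semilinear:

```python
import itertools

# Arithmetic in GF(9) = GF(3)[i]/(i^2 + 1); an element a + b*i is the pair (a, b).
def gadd(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

def gmul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def frob(x):
    # theta(x) = x^3, the Frobenius automorphism of GF(9)
    return gmul(gmul(x, x), x)

elems = [(a, b) for a in range(3) for b in range(3)]

# theta is a nontrivial field automorphism (additivity is the Freshman's dream):
assert all(frob(gadd(x, y)) == gadd(frob(x), frob(y)) for x in elems for y in elems)
assert all(frob(gmul(x, y)) == gmul(frob(x), frob(y)) for x in elems for y in elems)
assert any(frob(x) != x for x in elems)

# The Aut(K)_B-style map on V = GF(9)^2: apply theta to each coordinate.
def F(v):
    return tuple(frob(z) for z in v)

for lam in elems:
    for v in itertools.product(elems, repeat=2):
        scaled = tuple(gmul(lam, z) for z in v)
        # theta-semilinearity: F(lam v) = theta(lam) F(v)
        assert F(scaled) == tuple(gmul(frob(lam), z) for z in F(v))
```

On GF(9) the Frobenius acts as a + bi ↦ a − bi, so this finite-field map is a direct analogue of complex conjugation.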
We also see these complements to GL(V) in ΓL(V) are acted on regularly by GL(V) as they correspond to a change of basis. === Proof === Every linear map is semilinear, thus GL ( V ) ≤ Γ L ( V ) {\displaystyle \operatorname {GL} (V)\leq \operatorname {\Gamma L} (V)} . Fix a basis B of V. Now given any semilinear map f with respect to a field automorphism σ ∈ Aut(K), then define g : V → V by g ( ∑ b ∈ B ℓ b b ) := ∑ b ∈ B f ( ℓ b σ − 1 b ) = ∑ b ∈ B ℓ b f ( b ) {\displaystyle g\left(\sum _{b\in B}\ell _{b}b\right):=\sum _{b\in B}f\left(\ell _{b}^{\sigma ^{-1}}b\right)=\sum _{b\in B}\ell _{b}f(b)} As f(B) is also a basis of V, it follows that g is simply a basis exchange of V and so linear and invertible: g ∈ GL(V). Set h := f g − 1 {\displaystyle h:=fg^{-1}} . For every v = ∑ b ∈ B ℓ b b {\displaystyle v=\sum _{b\in B}\ell _{b}b} in V, h v = f g − 1 v = ∑ b ∈ B ℓ b σ b {\displaystyle hv=fg^{-1}v=\sum _{b\in B}\ell _{b}^{\sigma }b} thus h is in the Aut(K) subgroup relative to the fixed basis B. This factorization is unique to the fixed basis B. Furthermore, GL(V) is normalized by the action of Aut(K)B, so ΓL(V) = GL(V) ⋊ Aut(K). == Applications == === Projective geometry === The Γ L ( V ) {\displaystyle \operatorname {\Gamma L} (V)} groups extend the typical classical groups in GL(V). The importance in considering such maps follows from the consideration of projective geometry. The induced action of Γ L ( V ) {\displaystyle \operatorname {\Gamma L} (V)} on the associated projective space P(V) yields the projective semilinear group, denoted P Γ L ( V ) {\displaystyle \operatorname {P\Gamma L} (V)} , extending the projective linear group, PGL(V). The projective geometry of a vector space V, denoted PG(V), is the lattice of all subspaces of V. 
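The factorization in the proof above can also be seen numerically in a small case. A minimal sketch, assuming K = C with σ = complex conjugation and an arbitrarily chosen invertible matrix A (both purely illustrative): the linear part g of a semilinear map T is read off from T's values on the standard basis, and T then factors through the coordinatewise automorphism, mirroring ΓL(V) = GL(V) ⋊ Aut(K).

```python
# A semilinear map T = g . sigma_B on C^2, where sigma_B conjugates
# coordinates in the standard basis B and g is multiplication by A.
A = [[1 + 2j, 0 - 1j],
     [3 + 0j, 1 + 1j]]    # invertible, so T lies in GammaL(V)

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def sigma_B(v):
    # the Aut(K)_B element attached to sigma = complex conjugation
    return [z.conjugate() for z in v]

def T(v):
    return matvec(A, sigma_B(v))

# Recover the linear part from T's values on the basis {e1, e2}
# (as in the proof: g agrees with T on basis vectors, since sigma fixes 0 and 1):
cols = [T([1, 0]), T([0, 1])]
g = [[cols[j][i] for j in range(2)] for i in range(2)]

v = [2 - 1j, 0.5 + 3j]
assert matvec(g, sigma_B(v)) == T(v)   # T factors as g composed with sigma_B
```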
Although the typical semilinear map is not a linear map, it does follow that every semilinear map f : V → W {\displaystyle f\colon V\to W} induces an order-preserving map f : PG ( V ) → PG ( W ) {\displaystyle f\colon \operatorname {PG} (V)\to \operatorname {PG} (W)} . That is, every semilinear map induces a projectivity. The converse of this observation (except for the projective line) is the fundamental theorem of projective geometry. Thus semilinear maps are useful because they define the automorphism group of the projective geometry of a vector space. === Mathieu group === The group PΓL(3,4) can be used to construct the Mathieu group M24, which is one of the sporadic simple groups; PΓL(3,4) is a maximal subgroup of M24, and there are many ways to extend it to the full Mathieu group. == See also == Antilinear map Complex conjugate vector space == References == Assmus, E.F.; Key, J.D. (1994), Designs and Their Codes, Cambridge University Press, p. 93, ISBN 0-521-45839-0 Bourbaki, Nicolas (1989) [1970]. Algebra I Chapters 1-3 [Algèbre: Chapitres 1 à 3] (PDF). Éléments de mathématique. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64243-5. OCLC 18588156. Bray, John N.; Holt, Derek F.; Roney-Dougal, Colva M. (2009), "Certain classical groups are not well-defined", Journal of Group Theory, 12 (2): 171–180, doi:10.1515/jgt.2008.069, ISSN 1433-5883, MR 2502211 Faure, Claude-Alain; Frölicher, Alfred (2000), Modern Projective Geometry, Kluwer Academic Publishers, ISBN 0-7923-6525-9 Gruenberg, K.W.; Weir, A.J. (1977), Linear Geometry, Graduate Texts in Mathematics, vol. 49 (1st ed.), Springer-Verlag New York Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. This article incorporates material from semilinear transformation on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
Wikipedia:Seminorm#0
|
In mathematics, particularly in functional analysis, a seminorm is like a norm but need not be positive definite. Seminorms are intimately connected with convex sets: every seminorm is the Minkowski functional of some absorbing disk and, conversely, the Minkowski functional of any such set is a seminorm. A topological vector space is locally convex if and only if its topology is induced by a family of seminorms. == Definition == Let X {\displaystyle X} be a vector space over either the real numbers R {\displaystyle \mathbb {R} } or the complex numbers C . {\displaystyle \mathbb {C} .} A real-valued function p : X → R {\displaystyle p:X\to \mathbb {R} } is called a seminorm if it satisfies the following two conditions: Subadditivity/Triangle inequality: p ( x + y ) ≤ p ( x ) + p ( y ) {\displaystyle p(x+y)\leq p(x)+p(y)} for all x , y ∈ X . {\displaystyle x,y\in X.} Absolute homogeneity: p ( s x ) = | s | p ( x ) {\displaystyle p(sx)=|s|p(x)} for all x ∈ X {\displaystyle x\in X} and all scalars s . {\displaystyle s.} These two conditions imply that p ( 0 ) = 0 {\displaystyle p(0)=0} and that every seminorm p {\displaystyle p} also has the following property: Nonnegativity: p ( x ) ≥ 0 {\displaystyle p(x)\geq 0} for all x ∈ X . {\displaystyle x\in X.} Some authors include non-negativity as part of the definition of "seminorm" (and also sometimes of "norm"), although this is not necessary since it follows from the other two properties. By definition, a norm on X {\displaystyle X} is a seminorm that also separates points, meaning that it has the following additional property: Positive definite/Positive/Point-separating: whenever x ∈ X {\displaystyle x\in X} satisfies p ( x ) = 0 , {\displaystyle p(x)=0,} then x = 0. {\displaystyle x=0.} A seminormed space is a pair ( X , p ) {\displaystyle (X,p)} consisting of a vector space X {\displaystyle X} and a seminorm p {\displaystyle p} on X . 
{\displaystyle X.} If the seminorm p {\displaystyle p} is also a norm then the seminormed space ( X , p ) {\displaystyle (X,p)} is called a normed space. Since absolute homogeneity implies positive homogeneity, every seminorm is a type of function called a sublinear function. A map p : X → R {\displaystyle p:X\to \mathbb {R} } is called a sublinear function if it is subadditive and positive homogeneous. Unlike a seminorm, a sublinear function is not necessarily nonnegative. Sublinear functions are often encountered in the context of the Hahn–Banach theorem. A real-valued function p : X → R {\displaystyle p:X\to \mathbb {R} } is a seminorm if and only if it is a sublinear and balanced function. == Examples == The trivial seminorm on X , {\displaystyle X,} which refers to the constant 0 {\displaystyle 0} map on X , {\displaystyle X,} induces the indiscrete topology on X . {\displaystyle X.} Let μ {\displaystyle \mu } be a measure on a space Ω {\displaystyle \Omega } . For an arbitrary constant c ≥ 1 {\displaystyle c\geq 1} , let X {\displaystyle X} be the set of all functions f : Ω → R {\displaystyle f:\Omega \rightarrow \mathbb {R} } for which ‖ f ‖ c := ( ∫ Ω | f | c d μ ) 1 / c {\displaystyle \lVert f\rVert _{c}:=\left(\int _{\Omega }|f|^{c}\,d\mu \right)^{1/c}} exists and is finite. It can be shown that X {\displaystyle X} is a vector space, and the functional ‖ ⋅ ‖ c {\displaystyle \lVert \cdot \rVert _{c}} is a seminorm on X {\displaystyle X} . However, it is not always a norm (e.g. if Ω = R {\displaystyle \Omega =\mathbb {R} } and μ {\displaystyle \mu } is the Lebesgue measure) because ‖ h ‖ c = 0 {\displaystyle \lVert h\rVert _{c}=0} does not always imply h = 0 {\displaystyle h=0} . To make ‖ ⋅ ‖ c {\displaystyle \lVert \cdot \rVert _{c}} a norm, quotient X {\displaystyle X} by the closed subspace of functions h {\displaystyle h} with ‖ h ‖ c = 0 {\displaystyle \lVert h\rVert _{c}=0} . 
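The same quotient phenomenon appears in a two-dimensional toy case. The sketch below (plain Python, names illustrative) checks the two seminorm axioms for p(x, y) = |x| on R², exhibits a nonzero vector where p vanishes, and identifies the kernel {0} × R as exactly the subspace one would quotient out to obtain a norm:

```python
import itertools
import random

def p(v):
    # p(x, y) = |x|: a seminorm on R^2 that is not a norm,
    # since it vanishes on the whole subspace {(0, y) : y in R}
    return abs(v[0])

random.seed(1)
vecs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(40)]

# Subadditivity: p(u + v) <= p(u) + p(v)  (small tolerance for rounding)
for u, v in itertools.product(vecs, repeat=2):
    assert p((u[0] + v[0], u[1] + v[1])) <= p(u) + p(v) + 1e-12

# Absolute homogeneity: p(s v) = |s| p(v)
for s in (-2.5, 0.0, 3.0):
    for v in vecs:
        assert abs(p((s * v[0], s * v[1])) - abs(s) * p(v)) < 1e-12

# A nonzero vector in ker p, so p does not separate points:
assert p((0.0, 7.0)) == 0.0
```

Quotienting R² by the kernel {0} × R leaves the x-coordinate, on which p induces the usual absolute-value norm.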
The resulting space, L c ( μ ) {\displaystyle L^{c}(\mu )} , has a norm induced by ‖ ⋅ ‖ c {\displaystyle \lVert \cdot \rVert _{c}} . If f {\displaystyle f} is any linear form on a vector space then its absolute value | f | , {\displaystyle |f|,} defined by x ↦ | f ( x ) | , {\displaystyle x\mapsto |f(x)|,} is a seminorm. A sublinear function f : X → R {\displaystyle f:X\to \mathbb {R} } on a real vector space X {\displaystyle X} is a seminorm if and only if it is a symmetric function, meaning that f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} for all x ∈ X . {\displaystyle x\in X.} Every real-valued sublinear function f : X → R {\displaystyle f:X\to \mathbb {R} } on a real vector space X {\displaystyle X} induces a seminorm p : X → R {\displaystyle p:X\to \mathbb {R} } defined by p ( x ) := max { f ( x ) , f ( − x ) } . {\displaystyle p(x):=\max\{f(x),f(-x)\}.} Any finite sum of seminorms is a seminorm. The restriction of a seminorm (respectively, norm) to a vector subspace is once again a seminorm (respectively, norm). If p : X → R {\displaystyle p:X\to \mathbb {R} } and q : Y → R {\displaystyle q:Y\to \mathbb {R} } are seminorms (respectively, norms) on X {\displaystyle X} and Y {\displaystyle Y} then the map r : X × Y → R {\displaystyle r:X\times Y\to \mathbb {R} } defined by r ( x , y ) = p ( x ) + q ( y ) {\displaystyle r(x,y)=p(x)+q(y)} is a seminorm (respectively, a norm) on X × Y . {\displaystyle X\times Y.} In particular, the maps on X × Y {\displaystyle X\times Y} defined by ( x , y ) ↦ p ( x ) {\displaystyle (x,y)\mapsto p(x)} and ( x , y ) ↦ q ( y ) {\displaystyle (x,y)\mapsto q(y)} are both seminorms on X × Y . 
{\displaystyle X\times Y.} If p {\displaystyle p} and q {\displaystyle q} are seminorms on X {\displaystyle X} then so are ( p ∨ q ) ( x ) = max { p ( x ) , q ( x ) } {\displaystyle (p\vee q)(x)=\max\{p(x),q(x)\}} and ( p ∧ q ) ( x ) := inf { p ( y ) + q ( z ) : x = y + z with y , z ∈ X } {\displaystyle (p\wedge q)(x):=\inf\{p(y)+q(z):x=y+z{\text{ with }}y,z\in X\}} where p ∧ q ≤ p {\displaystyle p\wedge q\leq p} and p ∧ q ≤ q . {\displaystyle p\wedge q\leq q.} The space of seminorms on X {\displaystyle X} is generally not a distributive lattice with respect to the above operations. For example, over R 2 {\displaystyle \mathbb {R} ^{2}} , p ( x , y ) := max ( | x | , | y | ) , q ( x , y ) := 2 | x | , r ( x , y ) := 2 | y | {\displaystyle p(x,y):=\max(|x|,|y|),q(x,y):=2|x|,r(x,y):=2|y|} are such that ( ( p ∨ q ) ∧ ( p ∨ r ) ) ( x , y ) = inf { max ( 2 | x 1 | , | y 1 | ) + max ( | x 2 | , 2 | y 2 | ) : x = x 1 + x 2 and y = y 1 + y 2 } {\displaystyle ((p\vee q)\wedge (p\vee r))(x,y)=\inf\{\max(2|x_{1}|,|y_{1}|)+\max(|x_{2}|,2|y_{2}|):x=x_{1}+x_{2}{\text{ and }}y=y_{1}+y_{2}\}} while ( p ∨ q ∧ r ) ( x , y ) := max ( | x | , | y | ) {\displaystyle (p\vee q\wedge r)(x,y):=\max(|x|,|y|)} If L : X → Y {\displaystyle L:X\to Y} is a linear map and q : Y → R {\displaystyle q:Y\to \mathbb {R} } is a seminorm on Y , {\displaystyle Y,} then q ∘ L : X → R {\displaystyle q\circ L:X\to \mathbb {R} } is a seminorm on X . {\displaystyle X.} The seminorm q ∘ L {\displaystyle q\circ L} will be a norm on X {\displaystyle X} if and only if L {\displaystyle L} is injective and the restriction q | L ( X ) {\displaystyle q{\big \vert }_{L(X)}} is a norm on L ( X ) . {\displaystyle L(X).} == Minkowski functionals and seminorms == Seminorms on a vector space X {\displaystyle X} are intimately tied, via Minkowski functionals, to subsets of X {\displaystyle X} that are convex, balanced, and absorbing. 
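Returning to the non-distributivity example above, the claimed gap can be probed numerically. The sketch below approximates the infimal convolution p ∧ q by a coarse grid search (an illustrative approximation, not an exact algorithm); at the point (1, 1) the left-hand side comes out near 4/3, comfortably above (p ∨ (q ∧ r))(1, 1) = max(|1|, |1|) = 1, since q ∧ r is identically zero here:

```python
p = lambda x, y: max(abs(x), abs(y))
q = lambda x, y: 2 * abs(x)
r = lambda x, y: 2 * abs(y)

def inf_conv(f, g, x, y, n=200, R=2.0):
    # (f ^ g)(x, y) = inf over splits (x, y) = (x1, y1) + (x2, y2)
    # of f(x1, y1) + g(x2, y2), approximated on an (n+1)^2 grid over [-R, R]^2
    best = float("inf")
    for i in range(n + 1):
        for j in range(n + 1):
            x1 = -R + 2 * R * i / n
            y1 = -R + 2 * R * j / n
            best = min(best, f(x1, y1) + g(x - x1, y - y1))
    return best

pq = lambda x, y: max(p(x, y), q(x, y))   # p v q
pr = lambda x, y: max(p(x, y), r(x, y))   # p v r

lhs = inf_conv(pq, pr, 1.0, 1.0)   # ((p v q) ^ (p v r))(1, 1), about 4/3
rhs = p(1.0, 1.0)                  # (p v (q ^ r))(1, 1) = 1, since q ^ r = 0
assert rhs + 0.1 < lhs < 1.5       # the two sides genuinely differ
```

The minimizing split sits near ((1/3, 2/3), (2/3, 1/3)), where both summands equal 2/3.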
Given such a subset D {\displaystyle D} of X , {\displaystyle X,} the Minkowski functional of D {\displaystyle D} is a seminorm. Conversely, given a seminorm p {\displaystyle p} on X , {\displaystyle X,} the sets { x ∈ X : p ( x ) < 1 } {\displaystyle \{x\in X:p(x)<1\}} and { x ∈ X : p ( x ) ≤ 1 } {\displaystyle \{x\in X:p(x)\leq 1\}} are convex, balanced, and absorbing and furthermore, the Minkowski functional of these two sets (as well as of any set lying "in between them") is p . {\displaystyle p.} == Algebraic properties == Every seminorm is a sublinear function, and thus satisfies all properties of a sublinear function, including convexity, p ( 0 ) = 0 , {\displaystyle p(0)=0,} and for all vectors x , y ∈ X {\displaystyle x,y\in X} : the reverse triangle inequality: | p ( x ) − p ( y ) | ≤ p ( x − y ) {\displaystyle |p(x)-p(y)|\leq p(x-y)} and also 0 ≤ max { p ( x ) , p ( − x ) } {\textstyle 0\leq \max\{p(x),p(-x)\}} and p ( x ) − p ( y ) ≤ p ( x − y ) . {\displaystyle p(x)-p(y)\leq p(x-y).} For any vector x ∈ X {\displaystyle x\in X} and positive real r > 0 : {\displaystyle r>0:} x + { y ∈ X : p ( y ) < r } = { y ∈ X : p ( x − y ) < r } {\displaystyle x+\{y\in X:p(y)<r\}=\{y\in X:p(x-y)<r\}} and furthermore, { x ∈ X : p ( x ) < r } {\displaystyle \{x\in X:p(x)<r\}} is an absorbing disk in X . {\displaystyle X.} If p {\displaystyle p} is a sublinear function on a real vector space X {\displaystyle X} then there exists a linear functional f {\displaystyle f} on X {\displaystyle X} such that f ≤ p {\displaystyle f\leq p} and furthermore, for any linear functional g {\displaystyle g} on X , {\displaystyle X,} g ≤ p {\displaystyle g\leq p} on X {\displaystyle X} if and only if g − 1 ( 1 ) ∩ { x ∈ X : p ( x ) < 1 } = ∅ . {\displaystyle g^{-1}(1)\cap \{x\in X:p(x)<1\}=\varnothing .} Other properties of seminorms Every seminorm is a balanced function. 
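The correspondence between a seminorm and the gauge of its unit ball, described above, can be checked numerically. The sketch below computes the Minkowski functional of the closed unit ball of the sup-norm on R² by bisection (`gauge` and its tolerance choices are illustrative assumptions, not a standard routine) and recovers the seminorm itself:

```python
def p(v):
    # the sup-norm on R^2, used as a concrete seminorm
    return max(abs(v[0]), abs(v[1]))

def gauge(contains, v, lo=1e-9, hi=1e9, iters=200):
    """Minkowski functional inf{t > 0 : v/t in D} by bisection, assuming
    `contains` is the indicator of a convex absorbing disk D, with
    v/lo outside D and v/hi inside D."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if contains((v[0] / mid, v[1] / mid)):
            hi = mid
        else:
            lo = mid
    return hi

unit_ball = lambda v: p(v) <= 1.0   # closed unit ball {x : p(x) <= 1}

# The gauge of the unit ball recovers the seminorm itself:
for v in [(3.0, 1.0), (-0.5, 2.0), (0.25, 0.25)]:
    assert abs(gauge(unit_ball, v) - p(v)) < 1e-6
```

Replacing the closed ball by the open ball {x : p(x) < 1}, or by any set sandwiched between the two, leaves the computed gauge unchanged, as the text asserts.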
A seminorm p {\displaystyle p} is a norm on X {\displaystyle X} if and only if { x ∈ X : p ( x ) < 1 } {\displaystyle \{x\in X:p(x)<1\}} does not contain a non-trivial vector subspace. If p : X → [ 0 , ∞ ) {\displaystyle p:X\to [0,\infty )} is a seminorm on X {\displaystyle X} then ker p := p − 1 ( 0 ) {\displaystyle \ker p:=p^{-1}(0)} is a vector subspace of X {\displaystyle X} and for every x ∈ X , {\displaystyle x\in X,} p {\displaystyle p} is constant on the set x + ker p = { x + k : p ( k ) = 0 } {\displaystyle x+\ker p=\{x+k:p(k)=0\}} and equal to p ( x ) . {\displaystyle p(x).} Furthermore, for any real r > 0 , {\displaystyle r>0,} r { x ∈ X : p ( x ) < 1 } = { x ∈ X : p ( x ) < r } = { x ∈ X : 1 r p ( x ) < 1 } . {\displaystyle r\{x\in X:p(x)<1\}=\{x\in X:p(x)<r\}=\left\{x\in X:{\tfrac {1}{r}}p(x)<1\right\}.} If D {\displaystyle D} is a set satisfying { x ∈ X : p ( x ) < 1 } ⊆ D ⊆ { x ∈ X : p ( x ) ≤ 1 } {\displaystyle \{x\in X:p(x)<1\}\subseteq D\subseteq \{x\in X:p(x)\leq 1\}} then D {\displaystyle D} is absorbing in X {\displaystyle X} and p = p D {\displaystyle p=p_{D}} where p D {\displaystyle p_{D}} denotes the Minkowski functional associated with D {\displaystyle D} (that is, the gauge of D {\displaystyle D} ). In particular, if D {\displaystyle D} is as above and q {\displaystyle q} is any seminorm on X , {\displaystyle X,} then q = p {\displaystyle q=p} if and only if { x ∈ X : q ( x ) < 1 } ⊆ D ⊆ { x ∈ X : q ( x ) ≤ 1 } . {\displaystyle \{x\in X:q(x)<1\}\subseteq D\subseteq \{x\in X:q(x)\leq 1\}.} If ( X , ‖ ⋅ ‖ ) {\displaystyle (X,\|\,\cdot \,\|)} is a normed space and x , y ∈ X {\displaystyle x,y\in X} then ‖ x − y ‖ = ‖ x − z ‖ + ‖ z − y ‖ {\displaystyle \|x-y\|=\|x-z\|+\|z-y\|} for all z {\displaystyle z} in the interval [ x , y ] . {\displaystyle [x,y].} Every norm is a convex function and consequently, finding a global maximum of a norm-based objective function is sometimes tractable.
=== Relationship to other norm-like concepts === Let p : X → R {\displaystyle p:X\to \mathbb {R} } be a non-negative function. The following are equivalent: p {\displaystyle p} is a seminorm. p {\displaystyle p} is a convex F {\displaystyle F} -seminorm. p {\displaystyle p} is a convex balanced G-seminorm. If any of the above conditions hold, then the following are equivalent: p {\displaystyle p} is a norm; { x ∈ X : p ( x ) < 1 } {\displaystyle \{x\in X:p(x)<1\}} does not contain a non-trivial vector subspace. There exists a norm on X , {\displaystyle X,} with respect to which, { x ∈ X : p ( x ) < 1 } {\displaystyle \{x\in X:p(x)<1\}} is bounded. If p {\displaystyle p} is a sublinear function on a real vector space X {\displaystyle X} then the following are equivalent: p {\displaystyle p} is a linear functional; p ( x ) + p ( − x ) ≤ 0 for every x ∈ X {\displaystyle p(x)+p(-x)\leq 0{\text{ for every }}x\in X} ; p ( x ) + p ( − x ) = 0 for every x ∈ X {\displaystyle p(x)+p(-x)=0{\text{ for every }}x\in X} ; === Inequalities involving seminorms === If p , q : X → [ 0 , ∞ ) {\displaystyle p,q:X\to [0,\infty )} are seminorms on X {\displaystyle X} then: p ≤ q {\displaystyle p\leq q} if and only if q ( x ) ≤ 1 {\displaystyle q(x)\leq 1} implies p ( x ) ≤ 1. {\displaystyle p(x)\leq 1.} If a > 0 {\displaystyle a>0} and b > 0 {\displaystyle b>0} are such that p ( x ) < a {\displaystyle p(x)<a} implies q ( x ) ≤ b , {\displaystyle q(x)\leq b,} then a q ( x ) ≤ b p ( x ) {\displaystyle aq(x)\leq bp(x)} for all x ∈ X . {\displaystyle x\in X.} Suppose a {\displaystyle a} and b {\displaystyle b} are positive real numbers and q , p 1 , … , p n {\displaystyle q,p_{1},\ldots ,p_{n}} are seminorms on X {\displaystyle X} such that for every x ∈ X , {\displaystyle x\in X,} if max { p 1 ( x ) , … , p n ( x ) } < a {\displaystyle \max\{p_{1}(x),\ldots ,p_{n}(x)\}<a} then q ( x ) < b . {\displaystyle q(x)<b.} Then a q ≤ b ( p 1 + ⋯ + p n ) . 
{\displaystyle aq\leq b\left(p_{1}+\cdots +p_{n}\right).} If X {\displaystyle X} is a vector space over the reals and f {\displaystyle f} is a non-zero linear functional on X , {\displaystyle X,} then f ≤ p {\displaystyle f\leq p} if and only if ∅ = f − 1 ( 1 ) ∩ { x ∈ X : p ( x ) < 1 } . {\displaystyle \varnothing =f^{-1}(1)\cap \{x\in X:p(x)<1\}.} If p {\displaystyle p} is a seminorm on X {\displaystyle X} and f {\displaystyle f} is a linear functional on X {\displaystyle X} then: | f | ≤ p {\displaystyle |f|\leq p} on X {\displaystyle X} if and only if Re f ≤ p {\displaystyle \operatorname {Re} f\leq p} on X . {\displaystyle X.} f ≤ p {\displaystyle f\leq p} on X {\displaystyle X} if and only if f − 1 ( 1 ) ∩ { x ∈ X : p ( x ) < 1 } = ∅ . {\displaystyle f^{-1}(1)\cap \{x\in X:p(x)<1\}=\varnothing .} If a > 0 {\displaystyle a>0} and b > 0 {\displaystyle b>0} are such that p ( x ) < a {\displaystyle p(x)<a} implies f ( x ) ≠ b , {\displaystyle f(x)\neq b,} then a | f ( x ) | ≤ b p ( x ) {\displaystyle a|f(x)|\leq bp(x)} for all x ∈ X . {\displaystyle x\in X.} === Hahn–Banach theorem for seminorms === Seminorms offer a particularly clean formulation of the Hahn–Banach theorem: If M {\displaystyle M} is a vector subspace of a seminormed space ( X , p ) {\displaystyle (X,p)} and if f {\displaystyle f} is a continuous linear functional on M , {\displaystyle M,} then f {\displaystyle f} may be extended to a continuous linear functional F {\displaystyle F} on X {\displaystyle X} that has the same norm as f . {\displaystyle f.} A similar extension property also holds for seminorms: if q {\displaystyle q} is a seminorm on X {\displaystyle X} and p {\displaystyle p} is a seminorm on a vector subspace M {\displaystyle M} of X {\displaystyle X} with p ≤ q {\displaystyle p\leq q} on M , {\displaystyle M,} then there exists a seminorm P {\displaystyle P} on X {\displaystyle X} such that P = p {\displaystyle P=p} on M {\displaystyle M} and P ≤ q {\displaystyle P\leq q} on X . {\displaystyle X.} Proof: Let S {\displaystyle S} be the convex hull of { m ∈ M : p ( m ) ≤ 1 } ∪ { x ∈ X : q ( x ) ≤ 1 } . {\displaystyle \{m\in M:p(m)\leq 1\}\cup \{x\in X:q(x)\leq 1\}.} Then S {\displaystyle S} is an absorbing disk in X {\displaystyle X} and so the Minkowski functional P {\displaystyle P} of S {\displaystyle S} is a seminorm on X .
{\displaystyle X.} This seminorm satisfies p = P {\displaystyle p=P} on M {\displaystyle M} and P ≤ q {\displaystyle P\leq q} on X . {\displaystyle X.} ◼ {\displaystyle \blacksquare } == Topologies of seminormed spaces == === Pseudometrics and the induced topology === A seminorm p {\displaystyle p} on X {\displaystyle X} induces a topology, called the seminorm-induced topology, via the canonical translation-invariant pseudometric d p : X × X → R {\displaystyle d_{p}:X\times X\to \mathbb {R} } ; d p ( x , y ) := p ( x − y ) = p ( y − x ) . {\displaystyle d_{p}(x,y):=p(x-y)=p(y-x).} This topology is Hausdorff if and only if d p {\displaystyle d_{p}} is a metric, which occurs if and only if p {\displaystyle p} is a norm. This topology makes X {\displaystyle X} into a locally convex pseudometrizable topological vector space that has a bounded neighborhood of the origin and a neighborhood basis at the origin consisting of the following open balls (or the closed balls) centered at the origin: { x ∈ X : p ( x ) < r } or { x ∈ X : p ( x ) ≤ r } {\displaystyle \{x\in X:p(x)<r\}\quad {\text{ or }}\quad \{x\in X:p(x)\leq r\}} as r > 0 {\displaystyle r>0} ranges over the positive reals. Every seminormed space ( X , p ) {\displaystyle (X,p)} should be assumed to be endowed with this topology unless indicated otherwise. A topological vector space whose topology is induced by some seminorm is called seminormable. Equivalently, every vector space X {\displaystyle X} with seminorm p {\displaystyle p} induces a vector space quotient X / W , {\displaystyle X/W,} where W {\displaystyle W} is the subspace of X {\displaystyle X} consisting of all vectors x ∈ X {\displaystyle x\in X} with p ( x ) = 0. {\displaystyle p(x)=0.} Then X / W {\displaystyle X/W} carries a norm defined by p ( x + W ) = p ( x ) . {\displaystyle p(x+W)=p(x).} The resulting topology, pulled back to X , {\displaystyle X,} is precisely the topology induced by p . 
{\displaystyle p.} Any seminorm-induced topology makes X {\displaystyle X} locally convex, as follows. If p {\displaystyle p} is a seminorm on X {\displaystyle X} and r ∈ R , {\displaystyle r\in \mathbb {R} ,} call the set { x ∈ X : p ( x ) < r } {\displaystyle \{x\in X:p(x)<r\}} the open ball of radius r {\displaystyle r} about the origin; likewise the closed ball of radius r {\displaystyle r} is { x ∈ X : p ( x ) ≤ r } . {\displaystyle \{x\in X:p(x)\leq r\}.} The set of all open (resp. closed) p {\displaystyle p} -balls at the origin forms a neighborhood basis of convex balanced sets that are open (resp. closed) in the p {\displaystyle p} -topology on X . {\displaystyle X.} ==== Stronger, weaker, and equivalent seminorms ==== The notions of stronger and weaker seminorms are akin to the notions of stronger and weaker norms. If p {\displaystyle p} and q {\displaystyle q} are seminorms on X , {\displaystyle X,} then we say that q {\displaystyle q} is stronger than p {\displaystyle p} and that p {\displaystyle p} is weaker than q {\displaystyle q} if any of the following equivalent conditions holds: The topology on X {\displaystyle X} induced by q {\displaystyle q} is finer than the topology induced by p . {\displaystyle p.} If x ∙ = ( x i ) i = 1 ∞ {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }} is a sequence in X , {\displaystyle X,} then q ( x ∙ ) := ( q ( x i ) ) i = 1 ∞ → 0 {\displaystyle q\left(x_{\bullet }\right):=\left(q\left(x_{i}\right)\right)_{i=1}^{\infty }\to 0} in R {\displaystyle \mathbb {R} } implies p ( x ∙ ) → 0 {\displaystyle p\left(x_{\bullet }\right)\to 0} in R . 
{\displaystyle \mathbb {R} .} If x ∙ = ( x i ) i ∈ I {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}} is a net in X , {\displaystyle X,} then q ( x ∙ ) := ( q ( x i ) ) i ∈ I → 0 {\displaystyle q\left(x_{\bullet }\right):=\left(q\left(x_{i}\right)\right)_{i\in I}\to 0} in R {\displaystyle \mathbb {R} } implies p ( x ∙ ) → 0 {\displaystyle p\left(x_{\bullet }\right)\to 0} in R . {\displaystyle \mathbb {R} .} p {\displaystyle p} is bounded on { x ∈ X : q ( x ) < 1 } . {\displaystyle \{x\in X:q(x)<1\}.} If inf { q ( x ) : p ( x ) = 1 , x ∈ X } = 0 {\displaystyle \inf {}\{q(x):p(x)=1,x\in X\}=0} then p ( x ) = 0 {\displaystyle p(x)=0} for all x ∈ X . {\displaystyle x\in X.} There exists a real K > 0 {\displaystyle K>0} such that p ≤ K q {\displaystyle p\leq Kq} on X . {\displaystyle X.} The seminorms p {\displaystyle p} and q {\displaystyle q} are called equivalent if they are both weaker (or both stronger) than each other. This happens if they satisfy any of the following conditions: The topology on X {\displaystyle X} induced by q {\displaystyle q} is the same as the topology induced by p . {\displaystyle p.} q {\displaystyle q} is stronger than p {\displaystyle p} and p {\displaystyle p} is stronger than q . {\displaystyle q.} If x ∙ = ( x i ) i = 1 ∞ {\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }} is a sequence in X {\displaystyle X} then q ( x ∙ ) := ( q ( x i ) ) i = 1 ∞ → 0 {\displaystyle q\left(x_{\bullet }\right):=\left(q\left(x_{i}\right)\right)_{i=1}^{\infty }\to 0} if and only if p ( x ∙ ) → 0. {\displaystyle p\left(x_{\bullet }\right)\to 0.} There exist positive real numbers r > 0 {\displaystyle r>0} and R > 0 {\displaystyle R>0} such that r q ≤ p ≤ R q . {\displaystyle rq\leq p\leq Rq.} === Normability and seminormability === A topological vector space (TVS) is said to be a seminormable space (respectively, a normable space) if its topology is induced by a single seminorm (resp. a single norm). 
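For a classical pair of norms on R² the two-sided estimate r q ≤ p ≤ R q above holds with r = 1 and R = 2, so the 1-norm and the sup-norm are equivalent and induce the same topology. A quick randomized check (illustrative only):

```python
import random

p = lambda v: abs(v[0]) + abs(v[1])          # the 1-norm on R^2
q = lambda v: max(abs(v[0]), abs(v[1]))      # the sup-norm on R^2

random.seed(2)
for _ in range(1000):
    v = (random.uniform(-10, 10), random.uniform(-10, 10))
    # q <= p <= 2q (small tolerance for floating-point rounding)
    assert q(v) <= p(v) <= 2 * q(v) + 1e-12
```

In finite dimensions every two norms are equivalent in this sense; the distinctions drawn in this section only become substantive in infinite-dimensional spaces.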
A TVS is normable if and only if it is seminormable and Hausdorff or, equivalently, if and only if it is seminormable and T1 (because a TVS is Hausdorff if and only if it is a T1 space). A locally bounded topological vector space is a topological vector space that possesses a bounded neighborhood of the origin. Normability of topological vector spaces is characterized by Kolmogorov's normability criterion. A TVS is seminormable if and only if it has a convex bounded neighborhood of the origin. Thus a locally convex TVS is seminormable if and only if it has a non-empty bounded open set. A TVS is normable if and only if it is a T1 space and admits a bounded convex neighborhood of the origin. If X {\displaystyle X} is a Hausdorff locally convex TVS then the following are equivalent: X {\displaystyle X} is normable. X {\displaystyle X} is seminormable. X {\displaystyle X} has a bounded neighborhood of the origin. The strong dual X b ′ {\displaystyle X_{b}^{\prime }} of X {\displaystyle X} is normable. The strong dual X b ′ {\displaystyle X_{b}^{\prime }} of X {\displaystyle X} is metrizable. Furthermore, X {\displaystyle X} is finite dimensional if and only if X σ ′ {\displaystyle X_{\sigma }^{\prime }} is normable (here X σ ′ {\displaystyle X_{\sigma }^{\prime }} denotes X ′ {\displaystyle X^{\prime }} endowed with the weak-* topology). The product of infinitely many seminormable spaces is again seminormable if and only if all but finitely many of these spaces are trivial (that is, 0-dimensional). === Topological properties === If X {\displaystyle X} is a TVS and p {\displaystyle p} is a continuous seminorm on X , {\displaystyle X,} then the closure of { x ∈ X : p ( x ) < r } {\displaystyle \{x\in X:p(x)<r\}} in X {\displaystyle X} is equal to { x ∈ X : p ( x ) ≤ r } .
{\displaystyle \{x\in X:p(x)\leq r\}.} The closure of { 0 } {\displaystyle \{0\}} in a locally convex space X {\displaystyle X} whose topology is defined by a family of continuous seminorms P {\displaystyle {\mathcal {P}}} is equal to ⋂ p ∈ P p − 1 ( 0 ) . {\displaystyle \bigcap _{p\in {\mathcal {P}}}p^{-1}(0).} A subset S {\displaystyle S} in a seminormed space ( X , p ) {\displaystyle (X,p)} is bounded if and only if p ( S ) {\displaystyle p(S)} is bounded. If ( X , p ) {\displaystyle (X,p)} is a seminormed space then the locally convex topology that p {\displaystyle p} induces on X {\displaystyle X} makes X {\displaystyle X} into a pseudometrizable TVS with a canonical pseudometric given by d ( x , y ) := p ( x − y ) {\displaystyle d(x,y):=p(x-y)} for all x , y ∈ X . {\displaystyle x,y\in X.} === Continuity of seminorms === If p {\displaystyle p} is a seminorm on a topological vector space X , {\displaystyle X,} then the following are equivalent: p {\displaystyle p} is continuous. p {\displaystyle p} is continuous at 0; { x ∈ X : p ( x ) < 1 } {\displaystyle \{x\in X:p(x)<1\}} is open in X {\displaystyle X} ; { x ∈ X : p ( x ) ≤ 1 } {\displaystyle \{x\in X:p(x)\leq 1\}} is a closed neighborhood of 0 in X {\displaystyle X} ; p {\displaystyle p} is uniformly continuous on X {\displaystyle X} ; There exists a continuous seminorm q {\displaystyle q} on X {\displaystyle X} such that p ≤ q . {\displaystyle p\leq q.} In particular, if ( X , p ) {\displaystyle (X,p)} is a seminormed space then a seminorm q {\displaystyle q} on X {\displaystyle X} is continuous if and only if q {\displaystyle q} is dominated by a positive scalar multiple of p .
{\displaystyle p.} If X {\displaystyle X} is a real TVS, f {\displaystyle f} is a linear functional on X , {\displaystyle X,} and p {\displaystyle p} is a continuous seminorm (or more generally, a sublinear function) on X , {\displaystyle X,} then f ≤ p {\displaystyle f\leq p} on X {\displaystyle X} implies that f {\displaystyle f} is continuous. === Continuity of linear maps === If F : ( X , p ) → ( Y , q ) {\displaystyle F:(X,p)\to (Y,q)} is a map between seminormed spaces then let ‖ F ‖ p , q := sup { q ( F ( x ) ) : p ( x ) ≤ 1 , x ∈ X } . {\displaystyle \|F\|_{p,q}:=\sup\{q(F(x)):p(x)\leq 1,x\in X\}.} If F : ( X , p ) → ( Y , q ) {\displaystyle F:(X,p)\to (Y,q)} is a linear map between seminormed spaces then the following are equivalent: F {\displaystyle F} is continuous; ‖ F ‖ p , q < ∞ {\displaystyle \|F\|_{p,q}<\infty } ; There exists a real K ≥ 0 {\displaystyle K\geq 0} such that q ∘ F ≤ K p {\displaystyle q\circ F\leq Kp} ; In this case, ‖ F ‖ p , q ≤ K . {\displaystyle \|F\|_{p,q}\leq K.} If F {\displaystyle F} is continuous then q ( F ( x ) ) ≤ ‖ F ‖ p , q p ( x ) {\displaystyle q(F(x))\leq \|F\|_{p,q}p(x)} for all x ∈ X . {\displaystyle x\in X.} The space of all continuous linear maps F : ( X , p ) → ( Y , q ) {\displaystyle F:(X,p)\to (Y,q)} between seminormed spaces is itself a seminormed space under the seminorm ‖ F ‖ p , q . {\displaystyle \|F\|_{p,q}.} This seminorm is a norm if q {\displaystyle q} is a norm. == Generalizations == The concept of norm in composition algebras does not share the usual properties of a norm. A composition algebra ( A , ∗ , N ) {\displaystyle (A,*,N)} consists of an algebra A {\displaystyle A} over a field, an involution ∗ , {\displaystyle \,*,} and a quadratic form N , {\displaystyle N,} which is called the "norm". 
In several cases N {\displaystyle N} is an isotropic quadratic form so that A {\displaystyle A} has at least one null vector, contrary to the separation of points required for the usual norm discussed in this article. An ultraseminorm or a non-Archimedean seminorm is a seminorm p : X → R {\displaystyle p:X\to \mathbb {R} } that also satisfies p ( x + y ) ≤ max { p ( x ) , p ( y ) } for all x , y ∈ X . {\displaystyle p(x+y)\leq \max\{p(x),p(y)\}{\text{ for all }}x,y\in X.} Weakening subadditivity: Quasi-seminorms A map p : X → R {\displaystyle p:X\to \mathbb {R} } is called a quasi-seminorm if it is (absolutely) homogeneous and there exists some b ≥ 1 {\displaystyle b\geq 1} such that p ( x + y ) ≤ b ( p ( x ) + p ( y ) ) for all x , y ∈ X . {\displaystyle p(x+y)\leq b(p(x)+p(y)){\text{ for all }}x,y\in X.} The smallest value of b {\displaystyle b} for which this holds is called the multiplier of p . {\displaystyle p.} A quasi-seminorm that separates points is called a quasi-norm on X . {\displaystyle X.} Weakening homogeneity - k {\displaystyle k} -seminorms A map p : X → R {\displaystyle p:X\to \mathbb {R} } is called a k {\displaystyle k} -seminorm if it is subadditive and there exists a k {\displaystyle k} such that 0 < k ≤ 1 {\displaystyle 0<k\leq 1} and for all x ∈ X {\displaystyle x\in X} and scalars s , {\displaystyle s,} p ( s x ) = | s | k p ( x ) {\displaystyle p(sx)=|s|^{k}p(x)} A k {\displaystyle k} -seminorm that separates points is called a k {\displaystyle k} -norm on X . 
{\displaystyle X.} We have the following relationship between quasi-seminorms and k {\displaystyle k} -seminorms: == See also == Asymmetric norm – Generalization of the concept of a norm Banach space – Normed vector space that is complete Contraction mapping – Function reducing distance between all points Finest locally convex topology – Vector space with a topology defined by convex open sets Hahn–Banach theorem – Theorem on extension of bounded linear functionals Gowers norm – Class of norms in additive combinatorics Locally convex topological vector space – Vector space with a topology defined by convex open sets Mahalanobis distance – Statistical distance measure Matrix norm – Norm on a vector space of matrices Minkowski functional – Function made from a set Norm (mathematics) – Length in a vector space Normed vector space – Vector space on which a distance is defined Relation of norms and metrics – Mathematical space with a notion of distance Sublinear function – Type of function in linear algebra == Notes == === Proofs === == References == Adasch, Norbert; Ernst, Bruno; Keim, Dieter (1978). Topological Vector Spaces: The Theory Without Convexity Conditions. Lecture Notes in Mathematics. Vol. 639. Berlin New York: Springer-Verlag. ISBN 978-3-540-08662-8. OCLC 297140003. Berberian, Sterling K. (1974). Lectures in Functional Analysis and Operator Theory. Graduate Texts in Mathematics. Vol. 15. New York: Springer. ISBN 978-0-387-90081-0. OCLC 878109401. Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190. Conway, John (1990). A course in functional analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. 
ISBN 978-0-387-97245-9. OCLC 21195908. Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138. Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098. Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342. Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370. Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704. Kubrusly, Carlos S. (2011). The Elements of Operator Theory (Second ed.). Boston: Birkhäuser. ISBN 978-0-8176-4998-2. OCLC 710154895. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Prugovečki, Eduard (1981). Quantum mechanics in Hilbert space (2nd ed.). Academic Press. p. 20. ISBN 0-12-566060-X. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Wilansky, Albert (2013). 
Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114. == External links == Sublinear functions The sandwich theorem for sublinear and super linear functionals
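The canonical pseudometric d(x, y) := p(x − y) induced by a seminorm (see "Topological properties" above) can be made concrete with a small sketch. This is my own illustration with a hypothetical seminorm on R², not an example drawn from the references above:

```python
# Sketch (assumed example): the seminorm p(x1, x2) = |x1| on R^2 ignores
# the second coordinate, so its canonical pseudometric d(x, y) = p(x - y)
# can vanish on distinct points -- a pseudometric, not a metric.

def p(x):
    """Seminorm on R^2 that ignores the second coordinate: p(x) = |x[0]|."""
    return abs(x[0])

def d(x, y):
    """Canonical pseudometric induced by the seminorm p."""
    return p((x[0] - y[0], x[1] - y[1]))

x, y = (3.0, -1.0), (-2.0, 5.0)
# Seminorm axioms on sample points: absolute homogeneity and subadditivity.
assert p((-2 * x[0], -2 * x[1])) == 2 * p(x)          # p(sx) = |s| p(x)
assert p((x[0] + y[0], x[1] + y[1])) <= p(x) + p(y)   # triangle inequality

# Distinct points differing only in the second coordinate are at distance 0:
assert d((0.0, 1.0), (0.0, 7.0)) == 0.0
assert d(x, y) == 5.0
```

Because p vanishes on a nontrivial subspace, distinct points can be at distance zero, mirroring the fact that the topology induced by a seminorm that is not a norm is not Hausdorff.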
|
Wikipedia:Semisimple operator#0
|
In mathematics, a linear operator T : V → V on a vector space V is semisimple if every T-invariant subspace has a complementary T-invariant subspace. If T is a semisimple linear operator on V, then V is a semisimple representation of T. Equivalently, a linear operator is semisimple if its minimal polynomial is a product of distinct irreducible polynomials. A linear operator on a finite-dimensional vector space over an algebraically closed field is semisimple if and only if it is diagonalizable. Over a perfect field, the Jordan–Chevalley decomposition expresses an endomorphism x : V → V {\displaystyle x:V\to V} as a sum of a semisimple endomorphism s and a nilpotent endomorphism n such that both s and n are polynomials in x. == Notes == == References == Hoffman, Kenneth; Kunze, Ray (1971). "Semi-Simple operators". Linear algebra (2nd ed.). Englewood Cliffs, N.J.: Prentice-Hall, Inc. MR 0276251. Jacobson, Nathan (1979). Lie algebras. New York. ISBN 0-486-63832-4. OCLC 6499793. Lam, Tsit-Yuen (2001). A first course in noncommutative rings. Graduate texts in mathematics. Vol. 131 (2 ed.). Springer. ISBN 0-387-95183-0.
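The Jordan–Chevalley decomposition mentioned above can be checked by hand on a small matrix. The following is my own illustrative sketch with a hypothetical 2×2 example, not code from the cited references:

```python
# Sketch (assumed example): for the Jordan block x = [[2, 1], [0, 2]] over an
# algebraically closed field, the Jordan-Chevalley decomposition is x = s + n
# with s = diag(2, 2) semisimple (here diagonal) and n nilpotent.

def mat_mul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    """2x2 matrix sum."""
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

x = [[2, 1], [0, 2]]
s = [[2, 0], [0, 2]]   # semisimple (diagonalizable) part
n = [[0, 1], [0, 0]]   # nilpotent part

assert mat_add(s, n) == x                 # x = s + n
assert mat_mul(s, n) == mat_mul(n, s)     # s and n commute
assert mat_mul(n, n) == [[0, 0], [0, 0]]  # n^2 = 0, so n is nilpotent
# x itself is not semisimple: its minimal polynomial is (t - 2)^2,
# which has the repeated irreducible factor (t - 2).
```

Here s = 2·I is even a polynomial in x (a constant one), consistent with the statement that both parts are polynomials in x.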
|
Wikipedia:Semyon Alesker#0
|
Semyon Alesker (Hebrew: סמיון אלסקר; born 1972) is an Israeli mathematician at Tel Aviv University. For his contributions in convex geometry and integral geometry, in particular his work on valuations, he won the EMS Prize in 2000, and the Erdős Prize in 2004. == References == == External links == Semyon Alesker at the Mathematics Genealogy Project Website at Tel Aviv University
|
Wikipedia:Senior Mathematical Challenge#0
|
The United Kingdom Mathematics Trust (UKMT) is a charity founded in 1996 to help with the education of children in mathematics within the UK. == History == The national mathematics competitions had existed prior to the formation of the trust, but the foundation of the UKMT in the summer of 1996 enabled them to be run collectively. The Senior Mathematical Challenge was formerly called the National Mathematics Contest. Founded in 1961, it was run by the Mathematical Association from 1975 until its adoption by the UKMT in 1996. The Junior and Intermediate Mathematical Challenges were the initiative of Tony Gardiner in 1987, and were run by him under the name of the United Kingdom Mathematics Foundation until 1996. In 1995, Gardiner advertised for the formation of a committee and for a host institution that would lead to the establishment of the UKMT, enabling the challenges to be run effectively together under one organization. == Mathematical Challenges == The UKMT runs a series of mathematics challenges to encourage children's interest in mathematics and to develop their skills. The three main challenges are: Junior Mathematical Challenge (UK year 8/S2 and below) Intermediate Mathematical Challenge (UK year 11/S4 and below) Senior Mathematical Challenge (UK year 13/S6 and below) == Certificates == In the Junior and Intermediate Challenges the top-scoring 50% of the entrants receive bronze, silver or gold certificates based on their mark in the paper. In the Senior Mathematical Challenge these certificates are awarded to the top-scoring 66% of the entries. In each case bronze, silver and gold certificates are awarded in the ratio 3 : 2 : 1. So in the Junior and Intermediate Challenges, the Gold award is achieved by the top 8-9% of the entrants, the Silver award by 16-17% of the entrants, and the Bronze award by 25% of the entrants. 
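The certificate percentages quoted above follow by simple arithmetic from the 3 : 2 : 1 ratio. The sketch below is my own illustration (not UKMT code) reproducing them:

```python
# Sketch: splitting the top-scoring fraction of entrants into bronze, silver
# and gold in the ratio 3 : 2 : 1 reproduces the percentages quoted above.

def certificate_shares(top_fraction, ratio=(3, 2, 1)):
    """Return (bronze, silver, gold) shares of all entrants, in percent."""
    total = sum(ratio)
    return tuple(100 * top_fraction * part / total for part in ratio)

# Junior/Intermediate Challenges: certificates for the top 50%.
bronze, silver, gold = certificate_shares(0.5)
assert bronze == 25.0                 # "25% of the entrants"
assert abs(silver - 16.67) < 0.01     # "16-17% of the entrants"
assert abs(gold - 8.33) < 0.01        # "8-9% of the entrants"

# Senior Challenge: certificates for the top 66%, same 3 : 2 : 1 split.
senior = certificate_shares(0.66)
assert abs(senior[0] - 33.0) < 1e-9   # bronze goes to 33% of entrants
```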
In the past, only the top 40% of participants received a certificate in the Junior and Intermediate Challenges, and only the top 60% of participants received a certificate in the Senior Challenge. The ratio of bronze, silver, and gold has not changed, still being 3 : 2 : 1. == Junior Mathematical Challenge == The Junior Mathematical Challenge (JMC) is an introductory challenge for pupils in Year 8 or below (aged 13 or under), taking place in spring each year. This takes the form of twenty-five multiple choice questions to be sat in exam conditions, to be completed within one hour. The first fifteen questions are designed to be easier, and a pupil will gain 5 marks for getting a question in this section correct. Questions 16-20 are more difficult and are worth 6 marks. The last five questions are intended to be the most challenging and so are also worth 6 marks. Questions to which no answer is entered gain (and lose) 0 marks; in recent years there has been no negative marking, so incorrect answers also score 0 marks. The top 40% of students (50% since the 2022 JMC) receive a certificate of one of three levels (Gold, Silver or Bronze) based on their score. === Junior Kangaroo === Over 10,000 participants from the JMC are invited to participate in the Junior Kangaroo. Most of the Junior Kangaroo participants are those who performed well in the JMC; however, the Junior Kangaroo is open to discretionary entries for a fee. Similar to the JMC, the Junior Kangaroo is a 60 minute challenge consisting of 25 multiple-choice problems. Correct answers for Questions 1-15 earn 5 marks, and for Questions 16-25 earn 6 marks. Blank or incorrect answers are marked 0; there is no penalty for wrong answers. The top 25% of participants in the Junior Kangaroo receive a Certificate of Merit. === Junior Mathematical Olympiad === The highest 1200 scorers are also invited to take part in the Junior Mathematical Olympiad (JMO). Like the JMC, the JMO is sat in schools. 
Students are given 120 minutes to complete the JMO. This is also divided into two sections. Part A is composed of 10 questions in which the candidate gives just the answer (not multiple choice), worth 10 marks (1 mark each). Part B consists of 6 questions and encourages students to write out full solutions. Each question in Part B is worth 10 marks and students are encouraged to write complete answers to 2-4 questions rather than hurry through incomplete answers to all 6. If the solution is judged to be incomplete, it is marked on a 0+ basis, maximum 3 marks. If it has an evident logical strategy, it is marked on a 10- basis. The total mark for the whole paper is 70. Everyone who participates in this challenge will gain a certificate (Participation 75%, Distinction 25%); the top 200 or so gaining medals (Gold, Silver, Bronze); with the top fifty winning a book prize. From 2025, this has changed as Part A has been omitted. Part B has stayed the same, though it is no longer called Part B (it is now the only section). This changes the total number of questions to 10 and the marks to 60. However, the time given for the JMO has stayed at 120 minutes. == Intermediate Mathematical Challenge == The Intermediate Mathematical Challenge (IMC) is aimed at school years equivalent to English Years 9-11, taking place in winter each year. Following the same structure as the JMC, this paper presents the student with twenty-five multiple choice questions to be done under exam conditions in one hour. The first fifteen questions are designed to be easier, and a pupil will gain 5 marks for getting a question in this section correct. Questions 16-20 are more difficult and are worth 6 marks, with a penalty of 1 point for a wrong answer, intended to discourage guessing. The last five questions are intended to be the most challenging and so are also worth 6 marks, but with a 2-point penalty for an incorrectly answered question. 
Questions to which no answer is entered will gain (and lose) 0 marks. Again, the top 40% of students taking this challenge get a certificate. There are two follow-on rounds to this competition: The European Kangaroo and the Intermediate Mathematical Olympiad. Additionally, top performers can be selected for the National Mathematics Summer Schools. === Intermediate Mathematical Olympiad === To prevent this getting confused with the International Mathematical Olympiad, this is often abbreviated to the IMOK Olympiad (IMOK = Intermediate Mathematical Olympiad and Kangaroo). The IMOK is sat by the top 500 scorers from each school year in the Intermediate Maths Challenge and consists of three papers, 'Cayley', 'Hamilton' and 'Maclaurin', named after famous mathematicians. The paper the student will undertake depends on the year group that student is in (Cayley for those in year 9 and below, Hamilton for year 10 and Maclaurin for year 11). Each paper contains six questions. Each solution is marked out of 10 on a 0+ and 10- scale; that is to say, if an answer is judged incomplete or unfinished, it is awarded a few marks for progress and relevant observations, whereas if it is presented as complete and correct, marks are deducted for faults, poor reasoning, or unproven assumptions. As a result, it is quite uncommon for an answer to score a middling mark (e.g. 4–6). This makes the maximum mark 60. For a student to get two questions fully correct is considered "very good". All people taking part in this challenge will get a certificate (participation for the bottom 50%, merit for the next 25% and distinction for the top 25%). The mark boundaries for these certificates change every year, but normally around 30 marks will gain a Distinction. Those scoring highly (the top 50) will gain a book prize; again, this changes every year, with 44 marks required in the Maclaurin paper in 2006. 
Also, the top 100 candidates will receive a medal: bronze for Cayley, silver for Hamilton and gold for Maclaurin. === European Kangaroo === The European Kangaroo is a competition which follows the same structure as the AMC (Australian Mathematics Competition). There are twenty-five multiple-choice questions and no penalty marking. This paper is taken throughout Europe by over 3 million pupils from more than 37 countries. Two different Kangaroo papers follow on from the Intermediate Maths Challenge and the next 5500 highest scorers below the Olympiad threshold are invited to take part (both papers are by invitation only). The Grey Kangaroo is sat by students in year 9 and below and the Pink Kangaroo is sat by those in years 10 and 11. The top 25% of scorers in each paper receive a certificate of merit and the rest receive a certificate of participation. All those who sit either Kangaroo also receive a keyfob containing a different mathematical puzzle each year. === National Mathematics Summer Schools === Selected by lottery, 48 of the top 1.5% of scorers in the IMC are invited to participate in one of three week-long National Mathematics Summer Schools in July. The 24 boys and 24 girls, each from a different school across the UK, are offered a range of activities, including daily lectures, designed to go beyond the GCSE syllabus and explore wider and more challenging areas of mathematics. The UKMT aims to "promote mathematical thinking" and "provide an opportunity for participants to meet other students and adults who enjoy mathematics". They were delivered virtually during the COVID-19 pandemic but had reverted to in-person events by 2022. == Senior Mathematical Challenge == The Senior Mathematical Challenge (SMC) takes place in late-autumn each year, and is open to students who are aged 19 or below and are not registered to attend a university. 
The SMC consists of twenty-five multiple choice questions to be answered in 90 minutes. All candidates start with 25 marks, each correct answer is awarded 4 marks and 1 mark is deducted for each incorrect answer. This gives a score between 0 and 125 marks. Unlike the JMC and IMC, the top 66% get one of the three certificates. Further, the top 1000 highest scorers who are eligible to represent the UK at the International Mathematical Olympiad, together with any discretionary and international candidates, are invited to compete in the British Mathematical Olympiad and the next around 6000 highest scorers are invited to sit the Senior Kangaroo. Discretionary candidates are students, entered by their mathematics teachers on payment of a fee, who did not score quite well enough in the SMC but might cope well in the next round. === British Mathematical Olympiad === Round 1 of the Olympiad is a three-and-a-half hour examination including six more difficult, long-answer questions, which serve to test entrants' problem-solving skills. In 2005, a more accessible first question was added to the paper; before this, it only consisted of 5 questions. Approximately the 100 highest-scoring candidates from BMO1 are invited to sit the BMO2, which is the follow-up round that has the same time limit as BMO1, but in which 4 harder questions are posed. The top 24 scoring students from the second round are subsequently invited to a training camp at Trinity College, Cambridge or Oundle School for the first stage of the International Mathematical Olympiad UK team selection. === Senior Kangaroo === The Senior Kangaroo is a one-hour examination to which the next around 6000 highest scorers below the Olympiad threshold are invited. The paper consists of twenty questions, each of which requires a three-digit answer (leading zeros are used if the answer is less than 100, since the paper is marked by machine). 
The top 25% of candidates receive a certificate of merit and the rest receive a certificate of participation. == Team Challenge == The UKMT Team Maths Challenge is an annual event. One team from each participating school, comprising four pupils selected from years 8 and 9 (ages 12–14), competes in a regional round. No more than 2 pupils on a team may be from Year 9. There are over 60 regional competitions in the UK, held between February and May. The winning team in each regional round, as well as a few high-scoring runners-up from throughout the country, are then invited to the National Final in London, usually in late June. There are 4 rounds: Group Questions Cross-Numbers Shuttle (NB: The previous Head-to-Head Round has been replaced with another, similar to the Mini-Relay used in the 2007 and 2008 National Finals.) Relay In the National Final, however, an additional 'Poster Round' is added at the beginning. The poster round is a separate competition; since 2018, however, it has been worth up to six marks towards the main event. Four schools have won the Junior Maths Team competition at least twice: Queen Mary's Grammar School in Walsall, City of London School, St Olave's Grammar School, and Westminster Under School. == Senior Team Challenge == A pilot event for a competition similar to the Team Challenge, aimed at 16- to 18-year-olds, was launched in the autumn of 2007 and has been running ever since. The format is much the same, with a limit of two year 13 (Upper Sixth-Form) pupils per team. Regional finals take place between October and December, with the National Final in early February the following year. == British Mathematical Olympiad Subtrust == For more information see British Mathematical Olympiad Subtrust. 
The British Mathematical Olympiad Subtrust, run by the UKMT, organises the British Mathematical Olympiad, the UK Mathematical Olympiad for Girls, several training camps throughout the year (such as a winter camp in Hungary and an Easter camp at Trinity College, Cambridge), and the training and selection of the IMO team. == See also == European Kangaroo British Mathematical Olympiad International Mathematical Olympiad International Mathematics Competition for University Students == References == == External links == United Kingdom Mathematics Trust website British Mathematical Olympiad Committee site International Mathematics Competition for University Students (IMC) site Junior Mathematical Challenge Sample Paper Intermediate Mathematical Challenge Sample Paper Senior Mathematical Challenge Sample Paper
|
Wikipedia:Sentinus#0
|
Sentinus is an educational charity based in Lisburn, Northern Ireland that provides educational programs for young people interested in science, technology, engineering and mathematics (STEM). == History == Northern Ireland produces around 2,000 qualified IT workers each year; there are around 16,000 IT jobs in the Northern Ireland economy. == Function == It works with EngineeringUK and the Council for the Curriculum, Examinations & Assessment (CCEA). It works with primary and secondary schools in Northern Ireland. It runs summer placements for IT workshops for those of sixth form age (16-18). It offers Robotics Roadshows for primary school children. == Sentinus Young Innovators == Sentinus hosts the annual Big Bang Northern Ireland Fair which incorporates Sentinus Young Innovators. This is a one-day science and engineering project exhibition for post-primary students. It is one of the largest such events in the United Kingdom. In 2019 over 3,000 students participated from 130 schools across both Northern Ireland and the Republic of Ireland. The competition is affiliated with the International Science and Engineering Fair (ISEF) and the Broadcom MASTERS program. The overall winner represents Northern Ireland at the following year's ISEF. === Past Overall Winners === == See also == Discover Science & Engineering, equivalent in the Republic of Ireland Science Week Ireland The Big Bang Fair Young Scientist and Technology Exhibition == References == == External links == Sentinus
|
Wikipedia:Seppo Linnainmaa#0
|
Seppo Ilmari Linnainmaa (born 28 September 1945) is a Finnish mathematician and computer scientist known for creating the modern version of backpropagation. == Biography == He was born in Pori. He received his MSc in 1970 and introduced a reverse mode of automatic differentiation in his MSc thesis. In 1974 he obtained the first doctorate ever awarded in computer science at the University of Helsinki. In 1976, he became Assistant Professor. From 1984 to 1985 he was Visiting Professor at the University of Maryland, USA. From 1986 to 1989 he was Chairman of the Finnish Artificial Intelligence Society. From 1989 to 2007, he was Research Professor at the VTT Technical Research Centre of Finland. He retired in 2007. == Backpropagation == Explicit, efficient error backpropagation in arbitrary, discrete, possibly sparsely connected, neural-network-like networks was first described in Linnainmaa's 1970 master's thesis, albeit without reference to neural networks, when he introduced the reverse mode of automatic differentiation (AD) in order to efficiently compute the derivative of a differentiable composite function that can be represented as a graph, by recursively applying the chain rule to the building blocks of the function. Linnainmaa was the first to publish the method; Gerardi Ostrowski had used it in the context of certain process models in chemical engineering some five years earlier, but did not publish it. == Notes == == External links == Seppo Linnainmaa on LinkedIn
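The reverse mode of automatic differentiation described above can be sketched in a few lines of modern Python. This is my own minimal illustration, not Linnainmaa's original formulation; for clarity it recurses over every path in the graph, whereas efficient implementations sweep the graph once in reverse topological order:

```python
# Minimal sketch of reverse-mode AD: a composite function is recorded as a
# graph of elementary operations, and the chain rule is applied backwards
# through that graph to obtain all partial derivatives.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs (input_node, local_derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def backward(output):
    """Accumulate d(output)/d(node) into node.grad via the chain rule."""
    def visit(node, upstream):
        node.grad += upstream
        for parent, local in node.parents:
            visit(parent, upstream * local)
    visit(output, 1.0)

# f(x, y) = x * y + x, so df/dx = y + 1 and df/dy = x.
x, y = Var(3.0), Var(4.0)
f = x * y + x
backward(f)
assert f.value == 15.0
assert x.grad == 5.0  # y + 1 at y = 4
assert y.grad == 3.0  # x at x = 3
```

A single backward sweep recovers every partial derivative at once, which is the efficiency gain reverse mode offers when a function has many inputs and one output.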
|
Wikipedia:Sequence transformation#0
|
In mathematics, a sequence transformation is an operator acting on a given space of sequences (a sequence space). Sequence transformations include linear mappings, such as discrete convolution with another sequence or resummation of a sequence, and, more generally, nonlinear mappings. They are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods. Classical examples of sequence transformations include the binomial transform, Möbius transform, and Stirling transform. == Definitions == For a given sequence ( s n ) n ∈ N , {\displaystyle (s_{n})_{n\in \mathbb {N} },\,} and a sequence transformation T , {\displaystyle \mathbf {T} ,} the sequence resulting from transformation by T {\displaystyle \mathbf {T} } is T ( ( s n ) ) = ( s n ′ ) n ∈ N , {\displaystyle \mathbf {T} ((s_{n}))=(s'_{n})_{n\in \mathbb {N} },} where the elements of the transformed sequence are usually computed from some finite number of members of the original sequence, for instance s n ′ = T n ( s n , s n + 1 , … , s n + k n ) {\displaystyle s_{n}'=T_{n}(s_{n},s_{n+1},\dots ,s_{n+k_{n}})} for some natural number k n {\displaystyle k_{n}} for each n {\displaystyle n} and a multivariate function T n {\displaystyle T_{n}} of k n + 1 {\displaystyle k_{n}+1} variables for each n . {\displaystyle n.} See for instance the binomial transform and Aitken's delta-squared process. In the simplest case the elements of the sequences, the s n {\displaystyle s_{n}} and s n ′ {\displaystyle s'_{n}} , are real or complex numbers. More generally, they may be elements of some vector space or algebra. 
If the multivariate functions T n {\displaystyle T_{n}} are linear in each of their arguments for each value of n , {\displaystyle n,} for instance if s n ′ = ∑ m = 0 k n c n , m s n + m {\displaystyle s'_{n}=\sum _{m=0}^{k_{n}}c_{n,m}s_{n+m}} for some constants k n {\displaystyle k_{n}} and c n , 0 , … , c n , k n {\displaystyle c_{n,0},\dots ,c_{n,k_{n}}} for each n , {\displaystyle n,} then the sequence transformation T {\displaystyle \mathbf {T} } is called a linear sequence transformation. Sequence transformations that are not linear are called nonlinear sequence transformations. In the context of series acceleration, when the original sequence ( s n ) {\displaystyle (s_{n})} and the transformed sequence ( s n ′ ) {\displaystyle (s'_{n})} share the same limit ℓ {\displaystyle \ell } as n → ∞ , {\displaystyle n\rightarrow \infty ,} the transformed sequence is said to have a faster rate of convergence than the original sequence if lim n → ∞ s n ′ − ℓ s n − ℓ = 0. {\displaystyle \lim _{n\to \infty }{\frac {s'_{n}-\ell }{s_{n}-\ell }}=0.} If the original sequence is divergent, the sequence transformation may act as an extrapolation method to an antilimit ℓ {\displaystyle \ell } . == Examples == The simplest examples of sequence transformations include shifting all elements by an integer k {\displaystyle k} that does not depend on n , {\displaystyle n,} s n ′ = s n + k {\displaystyle s'_{n}=s_{n+k}} if n + k ≥ 0 {\displaystyle n+k\geq 0} and 0 otherwise, and scalar multiplication of the sequence by some constant c {\displaystyle c} that does not depend on n , {\displaystyle n,} s n ′ = c s n . {\displaystyle s'_{n}=cs_{n}.} These are both examples of linear sequence transformations. Less trivial examples include the discrete convolution of sequences with another reference sequence. 
A particularly basic example is the difference operator, which is convolution with the sequence ( − 1 , 1 , 0 , … ) {\displaystyle (-1,1,0,\ldots )} and is a discrete analog of the derivative; technically the shift operator and scalar multiplication can also be written as trivial discrete convolutions. The binomial transform and the Stirling transform are two linear transformations of a more general type. An example of a nonlinear sequence transformation is Aitken's delta-squared process, used to improve the rate of convergence of a slowly convergent sequence. An extended form of this is the Shanks transformation. The Möbius transform is also a nonlinear transformation, only possible for integer sequences. == See also == Aitken's delta-squared process Minimum polynomial extrapolation Richardson extrapolation Series acceleration Steffensen's method == References == Hugh J. Hamilton, "Mertens' Theorem and Sequence Transformations", AMS (1947) == External links == Transformations of Integer Sequences, a subpage of the On-Line Encyclopedia of Integer Sequences
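Aitken's delta-squared process mentioned in the Examples section, defined by s'_n = s_n − (s_{n+1} − s_n)² / (s_{n+2} − 2s_{n+1} + s_n), can be sketched numerically. The following illustration (my own, not drawn from the references) applies it to the partial sums of the slowly convergent Leibniz series for π/4:

```python
# Sketch: Aitken's delta-squared process applied to the Leibniz series
# 1 - 1/3 + 1/5 - ... = pi/4, whose partial sums converge very slowly.
from math import pi

def aitken(s):
    """Apply Aitken's delta-squared process to a list of sequence terms."""
    return [s[n] - (s[n + 1] - s[n]) ** 2
            / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

# Partial sums of the Leibniz series.
partial_sums, total = [], 0.0
for k in range(10):
    total += (-1) ** k / (2 * k + 1)
    partial_sums.append(total)

accelerated = aitken(partial_sums)
# The transformed sequence is far closer to pi/4 than the original one.
assert abs(accelerated[-1] - pi / 4) < abs(partial_sums[-1] - pi / 4) / 10
```

With only ten terms the original partial sums are still off by about 0.02, while the Aitken-transformed sequence agrees with π/4 to several more digits, illustrating the faster rate of convergence defined above.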
|
Wikipedia:Sequential dynamical system#0
|
Sequential dynamical systems (SDSs) are a class of graph dynamical systems. They are discrete dynamical systems which generalize many aspects of, for example, classical cellular automata, and they provide a framework for studying asynchronous processes over graphs. The analysis of SDSs uses techniques from combinatorics, abstract algebra, graph theory, dynamical systems and probability theory. == Definition == An SDS is constructed from the following components: A finite graph Y with vertex set v[Y] = {1,2, ... , n}. Depending on the context the graph can be directed or undirected. A state xi for each vertex i of Y, taken from a finite set K. The system state is the n-tuple x = (x1, x2, ... , xn), and x[i] is the tuple consisting of the states associated to the vertices in the 1-neighborhood of i in Y (in some fixed order). A vertex function fi for each vertex i. The vertex function maps the state of vertex i at time t to the vertex state at time t + 1 based on the states associated to the 1-neighborhood of i in Y. A word w = (w1, w2, ... , wm) over v[Y]. It is convenient to introduce the Y-local maps Fi constructed from the vertex functions by F i ( x ) = ( x 1 , x 2 , … , x i − 1 , f i ( x [ i ] ) , x i + 1 , … , x n ) . {\displaystyle F_{i}(x)=(x_{1},x_{2},\ldots ,x_{i-1},f_{i}(x[i]),x_{i+1},\ldots ,x_{n})\;.} The word w specifies the sequence in which the Y-local maps are composed to derive the sequential dynamical system map F: Kn → Kn as [ F Y , w ] = F w ( m ) ∘ F w ( m − 1 ) ∘ ⋯ ∘ F w ( 2 ) ∘ F w ( 1 ) . {\displaystyle [F_{Y},w]=F_{w(m)}\circ F_{w(m-1)}\circ \cdots \circ F_{w(2)}\circ F_{w(1)}\;.} If the update sequence is a permutation, one frequently speaks of a permutation SDS to emphasize this point. The phase space associated to a sequential dynamical system with map F: Kn → Kn is the finite directed graph with vertex set Kn and directed edges (x, F(x)). 
The structure of the phase space is governed by the properties of the graph Y, the vertex functions (fi)i, and the update sequence w. A large part of SDS research seeks to infer phase space properties based on the structure of the system constituents. == Example == Consider the case where Y is the graph with vertex set {1,2,3} and undirected edges {1,2}, {1,3} and {2,3} (a triangle or 3-circle) with vertex states from K = {0,1}. For the vertex functions use the symmetric boolean function nor : K3 → K defined by nor(x,y,z) = (1+x)(1+y)(1+z) with boolean arithmetic. Thus, the only case in which the function nor returns the value 1 is when all the arguments are 0. Pick w = (1,2,3) as the update sequence. Starting from the initial system state (0,0,0) at time t = 0 one computes the state of vertex 1 at time t = 1 as nor(0,0,0) = 1. The state of vertex 2 at time t = 1 is nor(1,0,0) = 0. Note that the state of vertex 1 at time t = 1 is used immediately. Next one obtains the state of vertex 3 at time t = 1 as nor(1,0,0) = 0. This completes the update sequence, and one concludes that the Nor-SDS map sends the system state (0,0,0) to (1,0,0). The system state (1,0,0) is in turn mapped to (0,1,0) by another application of the SDS map. == See also == Graph dynamical system Boolean network Gene regulatory network Dynamic Bayesian network Petri net == References == Henning S. Mortveit, Christian M. Reidys (2008). An Introduction to Sequential Dynamical Systems. Springer. ISBN 978-0387306544. Predecessor and Permutation Existence Problems for Sequential Dynamical Systems Genetic Sequential Dynamical Systems
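The Nor-SDS example worked out above can be checked mechanically. Below is a minimal sketch in Python; the vertex numbering is shifted to 0, 1, 2, and the function and variable names are illustrative.

```python
def nor3(x, y, z):
    """nor(x, y, z) = (1+x)(1+y)(1+z) over GF(2): equals 1 iff all arguments are 0."""
    return (1 + x) * (1 + y) * (1 + z) % 2

def sds_map(state, word=(0, 1, 2)):
    """Apply the Y-local maps in the order given by the update word.
    Y is the triangle on vertices 0, 1, 2, so every vertex sees all three states."""
    s = list(state)
    for i in word:
        s[i] = nor3(s[0], s[1], s[2])  # the new state of vertex i is used immediately
    return tuple(s)

print(sds_map((0, 0, 0)))  # -> (1, 0, 0), as computed in the article's example
print(sds_map((1, 0, 0)))  # -> (0, 1, 0)
```

Iterating `sds_map` over all 8 system states would draw the full phase space described in the definition.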
|
Wikipedia:Serafim Kalliadasis#0
|
Serafim Kalliadasis is an applied mathematician and chemical engineer working at Imperial College London since 2004. == Career == Serafim Kalliadasis earned a five-year undergraduate degree in chemical engineering at the Polytechnic School of the Aristotle University of Thessaloniki, Greece. He graduated in 1989. In 1990 he started his PhD studies at the University of Notre Dame, USA. His doctoral thesis was in the general area of fluid dynamics and was supervised by Prof. H.-C. Chang. Following his PhD in 1994 he moved to the University of Bristol, UK, as a post-doctoral fellow in applied mathematics. In 1995 he took up his first academic position at the Chemical Engineering Department of the University of Leeds, UK. In 2004 he was appointed to a Readership in Fluid Mechanics at the Department of Chemical Engineering, Imperial College, UK, and in 2010 he was promoted to Professor in Engineering Science & Applied Mathematics. == Research == Serafim Kalliadasis' expertise is at the interface between Applied and Computational Mathematics, Complex Systems and Engineering, covering both fundamentals and applications. He leads the Complex Multiscale Systems Group of Imperial College London. == Distinctions == 2020, Institute of Mathematics and its Applications Fellow. 2019, Institute of Physics Fellow. 2014, American Physical Society Fellow. Citation reads: “For pioneering and rigorous contributions to fundamental fluid dynamics, particularly interfacial flows and dynamics of moving contact lines, statistical mechanics of inhomogeneous liquids, and coarse graining of complex multiscale systems.” 2010–2016, ERC Frontier Research Advanced Investigator Grant holder. 2009, Corporate Member and Fellow of IChemE. 2004–2009, EPSRC Advanced Fellowship. == Selected publications == Carrillo, J.A., Kalliadasis, S., Perez, S.P. & Shu, C.-W. 2020 “Well-balanced finite-volume schemes for hydrodynamic equations with general free energy,” SIAM Multiscale Model. Sim.
18 502–541 Gomes, S.N., Kalliadasis, S., Pavliotis, G.A. & Yatsyshin, P. 2019 “Dynamics of the Desai-Zwanzig model in multiwell and random energy landscapes,” Phys. Rev. E 99 Art. No. 032109 (13 pp) Schmuck, M., Pavliotis, G.A. & Kalliadasis, S. 2019 “Recent advances in the evolution of interfaces: thermodynamics, upscaling, and universality,” Comp. Mater. Sci. 156 441–451 (Special issue following Euromat2017 conference) Yatsyshin, P., Parry, A.O., Rascón, C. & Kalliadasis, S. 2018 “Wetting of a plane with a narrow solvophobic stripe,” Mol. Phys. 116 1990–1997 (Special issue following Thermodynamics 2017 conference) Yatsyshin, P., Durán-Olivencia, M.A. & Kalliadasis, S. 2018 “Microscopic aspects of wetting using classical density functional theory,” J. Phys.-Condens. Matt. 30 Art. No. 274003 (9 pp) (Invited paper, special issue on “Physics of Integrated Microfluidics”) Dallaston, M.C., Fontelos, M.A., Tseluiko, D. & Kalliadasis S. 2018 “Discrete self-similarity in interfacial hydrodynamics and the formation of iterated structures,” Phys. Rev. Lett. 120 Art. No. 034505 (5 pp) Braga, C., Smith, E.R., Nold, A., Sibley, D.N. & Kalliadasis, S. 2018 “The pressure tensor across a liquid-vapour interface,” J. Chem. Phys. 149 Art. No. 044705 (8 pp) Schmuck, M. & Kalliadasis, S. 2017 “Rate of convergence of general phase field equations in strongly heterogeneous media towards their homogenized limit,” SIAM J. Appl. Math. 77 1471–1492 Nold, A., Goddard, B.D., Yatsyshin, P., Savva, N. & Kalliadasis, S. 2017 “Pseudospectral methods for density functional theory in bounded and unbounded domains,” J. Comp. Phys. 334 639–664 Durán-Olivencia, M.A., Yatsyshin, P., Goddard, B.D. & Kalliadasis, S. 2017 “General framework for fluctuating dynamic density functional theory,” New J. Phys. 19 Art. No. 123022 (16 pp) == References ==
|
Wikipedia:Serafino Raffaele Minich#0
|
Serafino Raffaele Minich or Serafin Rafael Minić (8 December 1808 – 29 May 1883) was a Croatian-Italian mathematician. Minić was born in Venice. His father, a sea captain from Prčanj, settled in Venice in the early nineteenth century, and Minić spent his entire life there. After receiving a degree in mathematics at the University of Padua, in 1830 he started working at the University as an assistant, and from 1842 as a lecturer. During his lifetime, he served as the rector of the University of Padua, dean of the Faculty of Arts, dean of the Faculty of Science, and for several years he led the Istituto di scienze, lettere ed arti in Venice. He published more than 60 papers on the theory of differential equations, algebra, mechanics and hydraulics. In 1875–76 he led the project of altering the port on the Lido in Venice and regulating the flow of the river Brenta. He wrote several treatises on Dante, Petrarch and Tasso. In the hall of the University of Padua a memorial was raised in his honor. He died in Venice. == References ==
|
Wikipedia:Serena Dipierro#0
|
Serena Dipierro is an Italian mathematician whose research involves partial differential equations, the regularity of their solutions, their phase transitions, nonlocal operators, and free boundary problems, with applications including population dynamics, quantum mechanics, crystallography, and mathematical finance. She is a professor in the School of Physics, Mathematics and Computing at the University of Western Australia, where she heads the department of mathematics and statistics. == Education and career == After earning a laurea at the University of Bari in 2006, and a master's degree with Lorenzo D'Ambrosio at the same university in 2008, Dipierro finished a Ph.D. in mathematics at the International School for Advanced Studies in Trieste in 2012. Her dissertation, Concentration phenomena for singularly perturbed elliptic problems and related topics, was supervised by Andrea Malchiodi. She was a postdoctoral researcher at the University of Chile and the University of Edinburgh, a Humboldt Fellow, and a faculty member at the University of Melbourne and the University of Milan before taking her present position at the University of Western Australia in 2018. == Book == With María Medina de la Torre and Enrico Valdinoci, Dipierro is a coauthor of the monograph Fractional Elliptic Problems with Critical Growth in the Whole of Rn (arXiv:1506.01748; Edizioni Della Normale, 2017). == Recognition == 2021 Australian Mathematical Society Medal. 2024 Christopher Heyde Medal, Australian Academy of Science. == References == == External links == Serena Dipierro publications indexed by Google Scholar
|
Wikipedia:Sergei Abramov (mathematician)#0
|
Sergei Mikhailovich Abramov (Russian: Сергей Михайлович Абрамов; born 25 March 1957) is a Russian mathematician, Professor, Dr.Sc., Corresponding Member of the Russian Academy of Sciences, Director of the Institute of Program Systems of the Russian Academy of Sciences, and Rector of the University of Pereslavl (2003–2017). He is a specialist in the field of system programming and information technologies (supercomputer systems, telecommunication technologies, theory of constructive metasystems and meta-calculations). == Biography == He graduated from the MSU Faculty of Computational Mathematics and Cybernetics (CMC) in 1980. He defended his thesis "Meta-calculations and their application" for the degree of Doctor of Physical and Mathematical Sciences in 1995. He was awarded the title of Professor in 1996 and elected a Corresponding Member of the Russian Academy of Sciences in 2006. == References == == External links == "Sergey Abramov". Russian Academy of Sciences (in Russian). Retrieved 2018-05-21. "Sergey Abramov". RAS Archive (in Russian). Retrieved 2018-05-21. "Biography Sergey Abramov". BOTIK.RU (in Russian). Retrieved 2018-05-21. Scientific works of Sergei Abramov (in English)
|
Wikipedia:Sergei Aseev#0
|
Sergei Mironovich Aseev (Russian: Сергéй Миро́нович Асéев; born 4 December 1957) is a Russian mathematician, Dr. Sc., Professor, and a Corresponding Member of the Russian Academy of Sciences. He graduated from the MSU Faculty of Computational Mathematics and Cybernetics (CMC) in 1980. He defended his thesis "Extremal problems for differential inclusions with phase constraints" for the degree of Doctor of Physical and Mathematical Sciences in 1998, and was elected a Corresponding Member of the Russian Academy of Sciences in 2008. He is the author of one book and more than 30 scientific articles. His areas of scientific interest are the theory of multivalued mappings, optimal control, and mathematical models in economics. == Literature == Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory (1,500 copies ed.). Moscow: Publishing house of Moscow University. 2010. pp. 272–274. ISBN 978-5-211-05838-5 – via author-compiler Evgeny Grigoriev. == References == == External links == Russian Academy of Sciences (in Russian) MSU CMC (in Russian) Scientific works of Sergei Aseev Scientific works of Sergei Aseev (in English)
|
Wikipedia:Sergei Bernstein#0
|
Sergei Natanovich Bernstein (Ukrainian: Сергі́й Ната́нович Бернште́йн, sometimes Romanized as Bernshtein; 5 March 1880 – 26 October 1968) was a Ukrainian and Soviet mathematician of Jewish origin known for contributions to partial differential equations, differential geometry, probability theory, and approximation theory. == Life == Bernstein was born into the Jewish family of prominent Ukrainian physiologist Nathaniel Bernstein in Odessa. Sergei was brought up in Odessa, but his father died on 4 February 1891, just before Sergei turned eleven. He graduated from high school in 1898. After this, following his mother's wishes, he went with his elder sister to Paris. Bernstein's sister studied biology in Paris and did not return to Ukraine but worked at the Pasteur Institute. After one year studying mathematics at the Sorbonne, Bernstein decided that he would rather become an engineer and entered the École supérieure d'électricité. However, he continued to be interested in mathematics and spent three terms at the University of Göttingen, beginning in the autumn of 1902, where his studies were supervised by David Hilbert. Bernstein returned to Paris and submitted his doctoral dissertation "Sur la nature analytique des solutions des équations aux dérivées partielles du second ordre" to the Sorbonne in the spring of 1904. He returned to Russia in 1905 and taught at Kharkiv University from 1908 to 1933. He was made an ordinary professor in 1920. Bernstein later worked at the Mathematical Institute of the USSR Academy of Sciences in Leningrad, and also taught at the University and Polytechnic Institute. From January 1939, Bernstein also worked at Moscow University. He and his wife were evacuated to Borovoe, Kazakhstan in 1941. From 1943 he worked at the Mathematical Institute in Moscow, and edited Chebyshev’s complete works. In 1947 he was dismissed from the university and became head of the Department of Constructive Function Theory at the Steklov Institute.
He died in Moscow in 1968. == Work == === Partial differential equations === In his doctoral dissertation, submitted in 1904 to the Sorbonne, Bernstein solved Hilbert's nineteenth problem on the analytic solution of elliptic differential equations. His later work was devoted to Dirichlet's boundary problem for non-linear equations of elliptic type, where, in particular, he introduced a priori estimates. === Probability theory === In 1917, Bernstein suggested the first axiomatic foundation of probability theory, based on the underlying algebraic structure. It was later superseded by the measure-theoretic approach of Kolmogorov. In the 1920s, he introduced a method for proving limit theorems for sums of dependent random variables. === Approximation theory === Through his application of Bernstein polynomials, he laid the foundations of constructive function theory, a field studying the connection between smoothness properties of a function and its approximations by polynomials. In particular, he proved the Weierstrass approximation theorem and Bernstein's theorem (approximation theory). Bernstein polynomials also form the mathematical basis for Bézier curves, which later became important in computer graphics. == International Congress of Mathematicians == Bernstein was an invited speaker at the International Congress of Mathematicians (ICM) in Cambridge, England in 1912 and in Bologna in 1928 and a plenary speaker at the 1932 ICM in Zurich. His plenary address Sur les liaisons entre quantités aléatoires was read by Bohuslav Hostinsky.
== Honors and awards == Academician of the Academy of Sciences of the Soviet Union (1929) Member of the German Mathematical Society (1926) Member of the French Mathematical Society (1944) Honorary Doctor of Science of the University of Algiers (1944) Honorary Doctor of Science of the University of Paris (1945) Foreign member of the French Academy of Sciences (1955) Stalin Prize (1942) Order of Lenin (1945) Order of the Red Banner of Labour (1944) == Publications == S. N. Bernstein, Collected Works (Russian): vol. 1, The Constructive Theory of Functions (1905–1930), translated: Atomic Energy Commission, Springfield, Va, 1958 vol. 2, The Constructive Theory of Functions (1931–1953) vol. 3, Differential equations, calculus of variations and geometry (1903–1947) vol. 4, Theory of Probability. Mathematical statistics (1911–1946) S. N. Bernstein, The Theory of Probabilities (Russian), Moscow, Leningrad, 1946 == See also == A priori estimate Bernstein algebra Bernstein's inequality (mathematical analysis) Bernstein inequalities in probability theory Bernstein polynomial Bernstein's problem Bernstein's theorem (approximation theory) Bernstein's theorem on monotone functions Bernstein–von Mises theorem Stone–Weierstrass theorem == Notes == == References == O'Connor, John J.; Robertson, Edmund F., "Sergei Bernstein", MacTutor History of Mathematics Archive, University of St Andrews == External links == Sergei Bernstein at the Mathematics Genealogy Project Sergei Natanovich Bernstein and history of approximation theory from Technion — Israel Institute of Technology Author profile in the database zbMATH
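The Bernstein polynomials named after him give a constructive proof of the Weierstrass approximation theorem: B_n(f)(x) = Σ_k f(k/n) C(n,k) x^k (1−x)^(n−k) converges uniformly to any continuous f on [0,1]. A short numerical sketch; the test function, degrees, and sampling grid are illustrative choices, not taken from Bernstein's work.

```python
from math import comb

def bernstein(f, n, x):
    """Bernstein polynomial B_n(f)(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)."""
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)  # continuous on [0,1] but not differentiable at 0.5
errs = []
for n in (10, 100):
    grid = [i / 100 for i in range(101)]
    errs.append(max(abs(bernstein(f, n, x) - f(x)) for x in grid))
# errs shrinks as n grows, illustrating uniform convergence B_n(f) -> f.
```

The same basis functions C(n,k) x^k (1−x)^(n−k), with control points in place of f(k/n), define Bézier curves.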
|
Wikipedia:Sergei Chernikov#0
|
Sergei Nikolaevich Chernikov (11 May 1912 – 23 January 1987; Russian: Сергей Николаевич Черников) was a Russian mathematician who contributed significantly to the development of infinite group theory and linear inequalities. == Biography == Chernikov was born on 11 May 1912 in Sergiyev Posad, in Moscow Oblast, Russia, to Nikolai Nikolaevich, a priest, and Anna Alekseevna, a housewife. After graduating from secondary school, he worked as a labourer, as a driver, as a book-keeper and as an accountant. Until November 1931 he taught mathematics in a school for workers. From 1930 he was an external student of the Pedagogic Institute of Saratov State University, where he graduated in 1933. He began graduate studies at the Ural Industrial Institute under the external supervision of Alexandr G. Kurosh (of the University of Moscow). A remarkable student, Chernikov was made head of the Ural Mathematics department (1939–1946) immediately after earning his PhD in 1938, even before defending his DSc in 1940. He went on to head mathematical departments at Ural State University (1946–1951), Perm State University (1951–1961), the Steklov Institute of Mathematics (1961–1964), and finally the National Academy of Sciences of Ukraine from 1964 until days before his death in 1987. During his career, he trained more than 40 PhD and 7 DSc students, and published dozens of papers that remained influential 100 years after his birth. == Contributions == Chernikov is credited with introducing a number of fundamental concepts to group theory, including the locally finite group and the nilpotent group. As with many of his other contributions, these allow infinite groups to be studied through partial or local solubility, establishing important early links between finite and infinite group theories. Later in his career, he was hailed as "one of the pioneers of linear programming" for his breakthrough algebraic theory of linear inequalities. == Published works == Chernikov S.N. (1939) Infinite special groups.
Mat. Sbornik 6, 199–214. Chernikov S.N. (1940) Infinite locally soluble groups. Mat. Sbornik 7, 35–61. Chernikov S.N. (1940) To the theory of infinite special groups. Mat. Sbornik 7, 539–548. Chernikov S.N. (1940) On groups with Sylow sets. Mat. Sbornik 8, 377–394. Chernikov S.N. (1943) To the theory of locally soluble groups. Mat. Sbornik 13, 317–333. Chernikov S.N. (1946) Divisible groups possessing an ascending central series. Mat. Sbornik 18, 397–422. Chernikov S.N. (1947) To the theory of finite p-extensions of abelian p-groups. Doklady AN USSR 58, 1287–1289. Kurosh A.G., Chernikov S.N. (1947) Soluble and nilpotent groups. Uspekhi Math. Nauk 2, no. 3, 18–59. Chernikov S.N. (1948) Infinite layer-finite groups. Mat. Sbornik 22, 101–133. Chernikov S.N. (1948) To the theory of divisible groups. Mat. Sbornik 22, 319–348. Chernikov S.N. (1948) A complement to the paper "To the theory of divisible groups". Mat. Sbornik 22, 455–456. Chernikov S.N. (1949) To the theory of torsion-free groups possessing an ascending central series. Uchenye zapiski Ural University 7, 3–21. Chernikov S.N. (1950) On divisible groups with ascending central series. Doklady AN USSR 70, 965–968. Chernikov S.N. (1950) On a centralizer of divisible abelian normal subgroups in infinite periodic groups. Doklady AN USSR 72, 243–246. Chernikov S.N. (1950) Periodic ZA-extensions of divisible groups. Mat. Sbornik 27, 117–128. Chernikov S.N. (1955) On complementability of Sylow p-subgroups in some classes of infinite groups. Mat. Sbornik 37, 557–566. Chernikov S.N. (1957) On groups with finite conjugacy classes. Doklady AN USSR 114, 1177–1179. Chernikov S.N. (1957) On a structure of groups with finite conjugacy classes. Doklady AN SSSR 115, 60–63. Chernikov S.N. (1958) On layer-finite groups. Mat. Sbornik 45, 415–416. Chernikov S.N. (1959) Finiteness conditions in general group theory. Uspekhi Math. Nauk 14, 45–96. Chernikov S.N.
(1960) On infinite locally finite groups with finite Sylow subgroups. Mat. Sbornik 52, 647–652. Chernikov S.N. (1967) Groups with prescribed properties of a system of infinite subgroups. Ukrain. Math. Journal 19, 111–131. Chernikov S.N. (1969) Investigations of groups with prescribed properties of subgroups. Ukrain. Math. Journal 21, 193–209. Chernikov S.N. (1971) On a problem of Schmidt. Ukrain. Math. Journal 23, 598–603. Chernikov S.N. (1971) On groups with restrictions for subgroups. "Groups with restrictions for subgroups", NAUKOVA DUMKA: Kyiv, 17–39. Chernikov S.N. (1975) Groups with a dense system of complemented subgroups. "Some problems of group theory", MATH. INSTITUT: Kyiv, 5–29. Chernikov S.N. (1980) The groups with prescribed properties of systems of subgroups. NAUKA: Moscow. Chernikov S.N. (1980) Infinite groups defined by the properties of a system of infinite subgroups. "VI Symposium on group theory", NAUKOVA DUMKA: Kyiv, 5–22. == References == == External links == Sergei Nikolaevich Chernikov's entry on Math-Net.ru
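Chernikov's algebraic theory of linear inequalities is closely related to elimination methods of Fourier–Motzkin type. The sketch below shows a standard Fourier–Motzkin elimination step, not Chernikov's own formulation; the representation and names are illustrative.

```python
def eliminate(ineqs, j):
    """One Fourier-Motzkin step: eliminate variable j from a system of
    linear inequalities a·x <= b, each given as (coefficient_list, b)."""
    pos = [(a, b) for a, b in ineqs if a[j] > 0]
    neg = [(a, b) for a, b in ineqs if a[j] < 0]
    out = [(a, b) for a, b in ineqs if a[j] == 0]
    for ap, bp in pos:
        for an, bn in neg:
            # add positive multiples of the two inequalities so x_j cancels
            coef = [ap[j] * an[k] - an[j] * ap[k] for k in range(len(ap))]
            out.append((coef, ap[j] * bn - an[j] * bp))
    return out

# x + y <= 4,  x - y <= 2,  -x <= 0; eliminating x leaves  y <= 4  and  -y <= 2
print(eliminate([([1, 1], 4), ([1, -1], 2), ([-1, 0], 0)], 0))
```

Repeating the step for each variable decides feasibility of the whole system, which is why such elimination schemes are regarded as precursors of linear programming.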
|
Wikipedia:Sergei Evdokimov#0
|
Sergei Alekseevich Evdokimov (Russian: Сергей Алексеевич Евдокимов; December 12, 1950 — September 10, 2016) was a Russian mathematician who contributed to the theory of modular forms, computational complexity theory, algebraic combinatorics and p-adic analysis. == Biography == Sergei Evdokimov was born in Leningrad (now Saint Petersburg, Russia), and graduated from Leningrad State University, Dept. of Mathematics and Mechanics, in 1973 (Honours Diploma). During his studies, he attended a seminar on modular forms and started to work in this area under the supervision of professor Anatoli N. Andrianov. After graduation, he continued research in the theory of modular forms, and in 1977 earned his PhD degree (Candidate of Sciences) from the Leningrad Department of the Steklov Mathematical Institute of the USSR Academy of Sciences with the thesis "Euler products for congruence subgroups of the Siegel group of genus". During 1981–1993 he was a senior researcher in the Laboratory of Theory of Algorithms at the Leningrad Institute for Informatics and Automation of the USSR Academy of Sciences. At that time his scientific interests shifted to the computational complexity of algorithms in algebra and number theory. He was an active participant in a seminar on computational complexity headed by Anatol Slissenko and Dima Grigoriev. From 1993 he also began an active collaboration in algebraic combinatorics with Ilia Ponomarenko, which lasted until the end of his life. Many of the results obtained in this collaboration were included in his DSc thesis "Schurity and separability of association schemes", which was defended in 2004. Starting in 2005, he was a leading researcher at the St. Petersburg Department of the Steklov Mathematical Institute of the Russian Academy of Sciences. == Scientific activities == Between 1975 and 1982 Evdokimov published a series of impressive papers on the arithmetic of Siegel modular forms.
His PhD thesis contains very fine arithmetic constructions related to the ray classes of ideals of imaginary quadratic fields. Continuing his research on the theory of modular forms, he found an elegant analytical description of the Maass subspace of Siegel modular forms of genus 2, an explicit formula for the generating Hecke series of the symplectic group of genus 3, and the first explicit formulas for the action of degenerate Hecke operators on the space of theta-series. In the mid-1980s, switching to the computational complexity of algorithms in algebra and number theory, he found a delicate and simple algorithm for the factorization of polynomials over finite fields. The algorithm has quasi-polynomial complexity under the assumption of the generalized Riemann hypothesis. Despite considerable efforts by mathematicians working in the theory of computational complexity, up to the present (2019), his estimate for the complexity of the factorization problem has not been improved. Starting in 1993, Evdokimov worked on problems of algebraic combinatorics. Several profound results were obtained, including the refutation of the Schur-Klin conjecture on Schur rings over a cyclic group, a polynomial-time algorithm for recognizing and testing isomorphism of circulant graphs, and the construction of a theory of multidimensional coherent configurations. The latter provided an algebraic explanation for the fact that the problem of isomorphism of finite graphs cannot be solved using only combinatorial methods. Another series of works was devoted to the isomorphism problem and the algorithmic theory of permutation groups. In particular, a number of algorithms (which have already become classical) for testing graph isomorphism were constructed. In the last years of his life, Evdokimov also became interested in p-adic analysis. Jointly with Sergio Albeverio and Maria Skopina he studied p-adic wavelet bases.
These studies revealed an unexpected and highly nontrivial fact: unlike similar theories in other structures, the standard method in p-adic analysis leads to nothing except the Haar basis. Moreover, any p-adic orthogonal wavelet basis generated by test functions is some modification of the Haar basis. In his last work on this topic, an orthogonal p-adic wavelet basis generated by functions with non-compact support was constructed, while all previously known bases, as well as frames, were generated by the test functions. == Notes == == References == In memory of S.A.Evdokimov (in Russian) Sergei Evdokimov on PDMI (in Russian) Sergei Evdokimov in St. Petersburg Mathematical Society == See also == Evdokimov's algorithm
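Evdokimov's quasi-polynomial factorization algorithm itself is intricate. As a toy illustration of the underlying problem (finding the roots, and hence the linear factors, of a polynomial over a finite field GF(p)), here is a brute-force sketch; this is not his algorithm, and the names are illustrative.

```python
def poly_eval(coeffs, x, p):
    """Evaluate a polynomial (constant term first) at x modulo p, via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def roots_mod_p(coeffs, p):
    """All roots in GF(p); each root r corresponds to a linear factor (x - r)."""
    return [r for r in range(p) if poly_eval(coeffs, r, p) == 0]

print(roots_mod_p([1, 0, 1], 5))  # x^2 + 1 = (x-2)(x-3) over GF(5) -> [2, 3]
print(roots_mod_p([1, 0, 1], 3))  # x^2 + 1 is irreducible over GF(3) -> []
```

This exhaustive search costs on the order of p evaluations; the point of algorithms such as Evdokimov's is to factor in time quasi-polynomial in the degree and log p.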
|
Wikipedia:Sergei Mukhin#0
|
Sergei Mukhin (Russian: Серге́й Ива́нович Му́хин) (born 1959) is a Russian mathematician, Professor, Dr.Sc., and a professor at the Faculty of Computer Science at Moscow State University. He graduated from the MSU Faculty of Computational Mathematics and Cybernetics (CMC) in 1981. Mukhin has worked at Moscow State University since 1984. In 2009, he defended his thesis "Mathematical modeling of hemodynamics" for the degree of Doctor of Physical and Mathematical Sciences. He has authored 3 books and more than 80 scientific articles. == References == == Bibliography == Evgeny Grigoriev (2010). Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory (1,500 copies ed.). Moscow: Publishing house of Moscow University. pp. 114–115. ISBN 978-5-211-05838-5. == External links == Annals of the Moscow University (in Russian) MSU CMC (in Russian) Scientific works of Sergei Mukhin Scientific works of Sergei Mukhin (in English)
|
Wikipedia:Sergei Vasilyevich Kerov#0
|
Sergei Vasilyevich Kerov (Russian: Сергей Васильевич Керов; 21 June 1946 – 30 July 2000) was a Russian mathematician and university professor. His research included operator algebras, combinatorics, probability and representation theory. == Life == Kerov was born in 1946 in Leningrad (now St. Petersburg). His father Vasily Kerov taught analytical chemistry at a university in Leningrad and his mother Marianna Nikolayeva was an expert in seed physiology. Kerov studied at the Saint Petersburg State University. He obtained a PhD in 1975 under the supervision of Anatoly Vershik. He was then a professor at various universities in St. Petersburg, including the Herzen Pedagogical University and the University of Saint Petersburg. From 1993 he did research at the Steklov Institute of Mathematics in St. Petersburg. In 1994 he received a Sc.D. (Doctor of Science) from the Steklov Institute for his work Asymptotic Representation Theory of the Symmetric Group, with Applications to Analysis. From 1995 he was a professor at the University of Saint Petersburg. In 2000 he died of a brain tumor. == Work == A list of Kerov's scientific articles was published in the Journal of Mathematical Sciences. In 1977 he proved, together with Anatoly Vershik, a limit theorem for the Plancherel measure on the symmetric group; the limiting shape is now called the Vershik–Kerov curve. The same result was independently proved by Logan and Shepp, so the shape is also called the Logan–Shepp curve. The result was later improved by Kerov to a central limit theorem. == Publications (Selections) == === Research papers === Kerov, S.V. (1999). "Rooks on ferrers boards and matrix integrals". Journal of Mathematical Sciences. 96 (5). Springer: 3531–3536. doi:10.1007/BF02175831. S. V. Kerov; G. I. Olshanski (1994). "Polynomial functions on the set of Young diagrams". C. R. Acad. Sci. Paris Sér. I. 319: 121–126. Vershik, A.M.; Kerov, S.V. (2007).
"Four drafts on the representation theory of the group of infinite matrices over a finite field". Journal of Mathematical Sciences. 147 (6): 7129–7144. arXiv:0705.3605. doi:10.1007/s10958-007-0535-1. S2CID 16877947. === Books === ==== Translated into English ==== S. V. Kerov (2003). Asymptotic Representation Theory of the Symmetric Group and its Applications in Analysis. American Mathematical Society. ISBN 978-0-8218-3440-4. == References == == External links == Сергей Васильевич Керов on the website of the Saint Petersburg Mathematical Society Publications Sergei Kerov on mathnet.ru Homepage at the Steklov Institute
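The Vershik–Kerov (Logan–Shepp) limit shape implies that the longest row of a Plancherel-random Young diagram of size n, equivalently (via the RSK correspondence) the longest increasing subsequence of a uniformly random permutation of n elements, has length close to 2√n. A Monte Carlo sketch using patience sorting; the sample sizes are illustrative.

```python
import bisect
import math
import random

def lis_length(perm):
    """Length of the longest increasing subsequence, via patience sorting."""
    piles = []
    for x in perm:
        i = bisect.bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

random.seed(0)
n, trials = 10_000, 20
avg = sum(lis_length(random.sample(range(n), n)) for _ in range(trials)) / trials
print(avg / math.sqrt(n))  # close to 2, as the limit shape predicts
```

The observed ratio sits slightly below 2 at finite n, consistent with the known lower-order correction to the asymptotics.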
|
Wikipedia:Sergei Viktorovich Bochkarev#0
|
Sergei (or Sergey) Viktorovich Bochkarev (or Bočkarev) (Russian: Сергей Викторович Бочкарёв; born July 24, 1941, in Kuybyshev, now Samara) is a Soviet and Russian mathematician. == Education and career == He received his undergraduate degree in 1964 from the Moscow Institute of Physics and Technology and his Russian Candidate of Sciences degree (PhD) in 1969 from Moscow State University. His dissertation О рядах Фурье по системе Хаара (On Fourier series in the Haar system) was supervised by Pyotr Lavrentyevich Ulyanov. In 1974 he received his Russian Doctor of Science degree (habilitation) from Moscow State University. Since 1971 he has worked at the Steklov Institute of Mathematics, where he holds the title of leading scientific researcher in the Department of Function Theory. His research deals with harmonic analysis, BMO spaces, Hardy spaces, functional analysis, the construction of orthogonal bases in various function spaces, and exponential sums. In 1977 he was awarded the Salem Prize. In 1978 he was an Invited Speaker, with the talk Метод усреднения в теории ортогональных рядов (The averaging method in the theory of orthogonal series), at the International Congress of Mathematicians in Helsinki. == Selected publications == On a problem of Zygmund, Mathematics of the USSR-Izvestia, vol. 7, no. 3, 1973, p. 629 Existence of a basis in the space of functions analytic in the disk, and some properties of Franklin's system, Math. USSR Sbornik, vol. 24, 1974, pp. 1–16 The method of averaging in the theory of orthogonal series and some questions in the theory of bases, Tr. MIAN SSSR, vol. 146, 1978, pp. 3–87 The method of averaging in the theory of orthogonal series and some questions in the theory of bases, Proc. Steklov Inst. Math., vol. 146, 1980, pp. 1–92 Everywhere divergent Fourier series with respect to the Walsh system and with respect to multiplicative systems, Russian Math. Surveys, vol. 59, 2004, pp.
103–124 Multiplicative Inequalities for the L1 Norm: Applications in Analysis and Number Theory, Proc. Steklov Inst. Math., vol. 255, 2006, pp. 49–64 A Generalization of Kolmogorov's Theorem to Biorthogonal Systems, Proc. Steklov Inst. Math., vol. 260, 2008, pp. 37–49 == References == == External links == "Bochkarev, Sergei Viktorovich". mathnet.ru.
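Bochkarev's early work concerned Fourier series in the Haar system. The sketch below computes Fourier-Haar coefficients numerically; the quadrature rule and test function are illustrative assumptions, not taken from his papers.

```python
def haar(n, k, t):
    """L2-normalized Haar function h_{n,k}, supported on [k/2^n, (k+1)/2^n)."""
    a, mid, b = k / 2 ** n, (k + 0.5) / 2 ** n, (k + 1) / 2 ** n
    if a <= t < mid:
        return 2 ** (n / 2)
    if mid <= t < b:
        return -(2 ** (n / 2))
    return 0.0

def haar_coeff(f, n, k, steps=10_000):
    """Fourier-Haar coefficient <f, h_{n,k}> on [0,1], by the midpoint rule."""
    h = 1 / steps
    return sum(f((j + 0.5) * h) * haar(n, k, (j + 0.5) * h) for j in range(steps)) * h

print(haar_coeff(lambda t: t, 0, 0))  # integral of t on [0,1/2] minus [1/2,1] -> -0.25
```

For f(t) = t the exact coefficient against h_{0,0} is 1/8 − 3/8 = −1/4, which the quadrature reproduces.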
|
Wikipedia:Sergei Vostokov#0
|
Sergei Vladimirovich Vostokov (Russian: Сергей Владимирович Востоков; 13 April 1945 – 7 March 2025) was a Russian mathematician who made major contributions to local number theory. He was a professor at St. Petersburg State University. == Life and work == Vostokov developed an important class of explicit formulas for the Hilbert symbol on local fields, which have found a wide range of applications in number theory. His formulas generalize to formal groups. A generalization of his explicit formula to higher local fields is called the Vostokov symbol. It plays an important role in higher local class field theory. Vostokov died on 7 March 2025, at the age of 79. == Awards == For his 60th birthday, two special volumes of the St. Petersburg Mathematical Society dedicated to Vostokov were published in Russian and English by the American Mathematical Society. In 2014, Vostokov was awarded the Chebyshev Prize. == Bibliography == === Books === Fesenko, Ivan B.; Vostokov, S. V. (17 July 2002). Local Fields and Their Extensions: Second Edition. American Mathematical Society. ISBN 978-0-8218-3259-2. == References == == External links == Sergei Vostokov at the Mathematics Genealogy Project
|
Wikipedia:Sergey Bobkov#0
|
Sergey Bobkov (Russian: Сергей Германович Бобков; born March 15, 1961) is a Russian mathematician. Bobkov is currently a professor at the University of Minnesota, Twin Cities. He was born in Vorkuta (Komi Republic, Russia) and graduated from the Department of Mathematics and Mechanics of Leningrad State University. In 1988 he earned his PhD in mathematics and physics (under the direction of Vladimir N. Sudakov, Steklov Institute of Mathematics), and in 1997 he earned his Doctor of Science degree. During 1998–2000 Bobkov held positions at Syktyvkar State University, Russia. From 1995 to 1996 he was an Alexander von Humboldt Fellow at Bielefeld University, Germany. He spent the summers of 2001 and 2002 as an EPSRC Fellow at Imperial College London, UK. Bobkov was awarded a Simons Fellowship (2012) and a Humboldt Research Award (2014). Bobkov is known for research in mathematics on the border of probability theory, analysis, convex geometry and information theory. He has achieved important results on isoperimetric problems, concentration of measure and other high-dimensional phenomena. Bobkov's inequality is named after him. == References ==
|
Wikipedia:Sergey Fomin#0
|
Sergey Vladimirovich Fomin (Сергей Владимирович Фомин) (born 16 February 1958 in Saint Petersburg, Russia) is a Russian American mathematician who has made important contributions in combinatorics and its relations with algebra, geometry, and representation theory. Together with Andrei Zelevinsky, he introduced cluster algebras. == Biography == Fomin received his M.Sc. in 1979 and his Ph.D. in 1982 from St. Petersburg State University under the direction of Anatoly Vershik and Leonid Osipov. Prior to his appointment at the University of Michigan, he held positions at the Massachusetts Institute of Technology from 1992 to 2000, at the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, and at the Saint Petersburg Electrotechnical University. Sergey Fomin studied at the 45th Physics-Mathematics School and later taught mathematics there. == Research == Fomin's contributions include: the discovery (with A. Zelevinsky) of cluster algebras; work (jointly with A. Berenstein and A. Zelevinsky) on total positivity; and work (with A. Zelevinsky) on the Laurent phenomenon, including its applications to Somos sequences. == Awards and honors == Simons Fellow (2019) Steele Prize for Seminal Contribution to Research (2018). Invited lecture at the International Congress of Mathematicians (Hyderabad, 2010). Robert M. Thrall Collegiate Professor of Mathematics at the University of Michigan. Fellow (2012) of the American Mathematical Society. Elected to the American Academy of Arts and Sciences, 2023. == Selected publications == Fomin, S.; Zelevinsky, A. (2003). "Y-systems and generalized associahedra". Annals of Mathematics. 158 (3): 977–1018. arXiv:math/0505518. doi:10.4007/annals.2003.158.977. S2CID 5153512. Fomin, S.; Zelevinsky, A. (2003). "Cluster algebras II: Finite type classification". Inventiones Mathematicae. 154 (1): 63–121. arXiv:math/0208229. Bibcode:2003InMat.154...63F. doi:10.1007/s00222-003-0302-y. S2CID 14540263.
Fomin, S.; Zelevinsky, A. (2002). "Cluster algebras I: Foundations". Journal of the AMS. 15: 497–529. Fomin, S.; Gelfand, S.; Postnikov, A. (1997). "Quantum Schubert Polynomials". Journal of the AMS. 10: 565–596. == References == == External links == Home page of Sergey Fomin
|
Wikipedia:Sergey Lozhkin#0
|
Sergey Lozhkin (Russian: Ло́жкин Серге́й Андре́евич; born March 29, 1951) is a Russian mathematician, Dr.Sc., and a professor at the Faculty of Computational Mathematics and Cybernetics of Moscow State University. He defended the thesis «Asymptotic estimates of a high degree of accuracy for the complexity of control systems» for the degree of Doctor of Physical and Mathematical Sciences (1998). He is the author of 8 books and more than 80 scientific articles. == References == == Bibliography == Evgeny Grigoriev (2010). Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory (print run of 1,500 copies). Moscow: Publishing house of Moscow University. pp. 376–377. ISBN 978-5-211-05838-5. == External links == MSU CMC (in Russian) Scientific works of Sergey Lozhkin Scientific works of Sergey Lozhkin (in English)
|
Wikipedia:Sergey Mergelyan#0
|
Sergey Mergelyan (Armenian: Սերգեյ Մերգելյան; 19 May 1928 – 20 August 2008) was a Soviet and Armenian mathematician who made major contributions to approximation theory. Modern complex approximation theory is based on Mergelyan's classical work. He was a corresponding member of the Academy of Sciences of the Soviet Union (from 1953) and a member of the Academy of Sciences of the Armenian SSR (from 1956). The surname "Mergelov" given at birth was changed for patriotic reasons to the more Armenian-sounding "Mergelyan" by the mathematician himself before his trip to Moscow. He was a laureate of the Stalin Prize (1952) and the Order of St. Mesrop Mashtots (2008). He was the youngest Doctor of Sciences in the history of the USSR (at the age of 20) and the youngest corresponding member of the Academy of Sciences of the Soviet Union (elected at the age of 24). During his postgraduate studies, the 20-year-old Mergelyan solved one of the fundamental problems of the mathematical theory of functions, which had remained open for more than 70 years. His theorem on the possibility of uniform polynomial approximation of functions of a complex variable is now known as the classical Mergelyan theorem and is included in standard courses on the theory of functions. Although he himself was not a computer designer, Mergelyan was a pioneer of Soviet computational mathematics. == Biography == === Early years === Sergey Mergelyan was born on 19 May 1928 in Simferopol into an Armenian family. His father, Nikita (Mkrtich) Ivanovich Mergelov, was a former private entrepreneur (Nepman); his mother, Lyudmila Ivanovna Vyrodova, was the daughter of the manager of the Azov-Black Sea bank, who was shot in 1918. In 1936 Sergey's father was building a paper mill in Yelets, but soon he and his family were deported to the Siberian settlement of Narym, Tomsk Oblast. In the Siberian frost, Sergey suffered a serious illness and barely survived.
In 1937, mother and son were acquitted by a court decision and returned to Kerch, and in 1938 Lyudmila Ivanovna obtained from the USSR Prosecutor General Andrei Vyshinsky the rehabilitation of her husband. In 1941, as Hitler's armies advanced toward the south, the Mergelov family left Kerch and settled in Yerevan. === Education === Before the war, Mergelyan lived in Russia and studied at a secondary school in Kerch. When his family moved from Kerch to Yerevan at the end of 1941, he found himself in a completely unfamiliar environment and did not know Armenian at all. He studied at the Yerevan school named after Mravyan, and soon excelled through his abilities. In 1943, Mergelyan won first place at the republican physics and mathematics olympiad. In 1944, at age 16, he passed the examinations for grades 9-10 as an external student, graduated from high school and immediately entered the Physics and Mathematics Faculty of Yerevan State University (YSU). He drew attention to himself at the university, where he completed the first two years of coursework in a single year, and soon began attending the lectures of academician Artashes Shahinian, the founder of the Armenian mathematical school. In addition to studying and working in the seminar, Mergelyan taught in the mathematical circle at the Yerevan Palace of Pioneers. There he gave free rein to his imagination, writing puzzles for children, conducting competitions to solve particularly difficult problems, and organizing mathematical games. He completed the five-year university course in three years: in his first year he attended classes for only a few days, then passed the examinations as an external student and immediately moved to the second year, and in 1946 he received his diploma. At the same time, he restored the original surname of his father's line and received the diploma under the name Sergey Nikitovich Mergelyan. After graduating from YSU (1946), Mergelyan entered postgraduate study at the Steklov Institute of Mathematics under Mstislav Vsevolodovich Keldysh.
Despite his colossal workload, Keldysh paid special attention to his new graduate student. They met mainly at Keldysh's house, at 8 or 9 o'clock in the evening, and held long conversations about mathematical problems. Mergelyan wrote his thesis for the degree in physical and mathematical sciences in a year and a half. The defense took place in 1949 and was brilliant: after a session of an hour and a half, the academic council announced that Mergelyan was awarded a doctorate in physics and mathematics. Although Mergelyan had submitted the work as a Candidate of Sciences (Ph.D.) dissertation, all three official opponents (Academician Lavrentyev, Sergey Nikolsky and Corresponding Member Alexander Gelfond) petitioned the Academic Council to award him the Doctor of Science degree instead. The opponents' petition was granted (this required convening the members of the scientific council, which took time), and Mergelyan became the youngest doctor of physical and mathematical sciences in the USSR, at the age of 20. To this day this remains the record for obtaining the highest scientific degree (Doctor of Science) at so young an age in the former USSR and present-day Russia. === Career === Mergelyan graduated from Yerevan University in 1947. From 1945 to 1957 he worked at Yerevan University, and from 1954 to 1958 and from 1964 to 1968 at Lomonosov Moscow State University. When he was 24 he became a corresponding member of the Academy of Sciences of the Soviet Union, yet another absolute record for youth among USSR scientists. He became a symbol of the young scientist in the USSR. Indira Gandhi, among other famous people in the USSR and abroad, was a friend of Mergelyan from the early 1950s; in 1978, after her official visit to Moscow, Gandhi also paid a private visit to Yerevan as a guest of Mergelyan. In 1952 he was awarded the Stalin Prize. Mergelyan was also a talented organizer of science.
He played a leading role in establishing the Yerevan Scientific Research Institute of Mathematical Machines (YerNIIMM), which was founded on 14 July 1956. He became the institute's first director and headed it from 1956 to 1960. The institute soon became popularly known as the "Mergelyan Institute", an unofficial name that is still in use today. In 1961 he returned to the field of pure mathematics, resuming work at the Steklov Mathematical Institute of the Academy of Sciences of the Soviet Union in Moscow. In 1963 he was elected deputy to the Academician-Secretary of the Department of Mathematics of the Academy of Sciences of the Soviet Union (Nikolai Nikolaevich Bogolyubov). In 1964 he was appointed head of the department of complex analysis at the Mathematical Institute, a position he retained until 2002; in the same year he was also reinstated as a professor of the Mechanics and Mathematics Faculty of Moscow State University. In 1968, he again left the professorship at the faculty and engaged only in scientific work. Mergelyan had permission to travel abroad and was often on foreign business trips. In 1970 he gave a presentation as an invited speaker at the International Congress of Mathematicians in Nice. == Scientific works == Mergelyan's main works concern the theory of functions of a complex variable, approximation theory, and the theory of potentials and harmonic functions. In 1951 he formulated and proved the famous result from complex analysis called Mergelyan's theorem. This solved an old classical problem. The theorem completed a long series of studies, begun in 1885, comprising the classical results of Karl Weierstrass, Carl Runge, J. Walsh, Mikhail Lavrentyev, Mstislav Keldysh and others. The new terms "Mergelyan's theorem" and "Mergelyan's sets" found their place in textbooks and monographs on approximation theory.
Several years later he solved another famous problem, the Sergei Bernstein approximation problem. Mergelyan also obtained many important results in other areas of complex analysis, including the theory of pointwise approximation by polynomials. His research included the study of the approximation of continuous functions satisfying smoothness properties on an arbitrary set (1962) and the solution of Bernstein's approximation problem (1963). Mergelyan conducted in-depth studies and obtained valuable results in such areas as best approximation by polynomials on an arbitrary continuum, weighted approximation by polynomials on the real axis, pointwise approximation by polynomials on closed sets of the complex plane, uniform approximation by harmonic functions on compact sets and by entire functions on an unbounded continuum, and uniqueness theorems for harmonic functions. In the theory of differential equations, his results concerned the Cauchy problem and some other questions. Mergelyan's scientific achievements contributed significantly to the formation, development and international recognition of the Armenian mathematical school, as evidenced by the major international conference on the theory of functions organized in Yerevan in 1965 at the initiative and with the active participation of Sergey Mergelyan. Many prominent mathematicians of the world took part in the conference, which promoted international cooperation and the further advancement of the Armenian mathematical school. == Death == Sergey Mergelyan died on 20 August 2008. The farewell ceremony took place on 23 August 2008 at the Glendale Cemetery in California. At the request of the deceased, his ashes were transported to Moscow and buried at Novodevichy Cemetery next to his mother and his wife.
== Awards and prizes == Stalin Prize, 2nd class (1952), for works on the constructive theory of functions, completed by the article "Some Problems in the Constructive Theory of Functions", published in the Proceedings of the Steklov Mathematical Institute of the Academy of Sciences of the Soviet Union (1951) Order of St. Mesrop Mashtots (26.05.2008) – on the occasion of the mathematician's 80th anniversary, the Consul General of Armenia in the USA handed the Order of St. Mesrop Mashtots to the scientist and read a message from the President of Armenia, Serzh Sargsyan. Order of the Red Banner of Labour (17.09.1975) == Works == «Некоторые вопросы конструктивной теории функций» (Some questions of the constructive theory of functions; Труды Математического института АН СССР, т. 3, 1951) «Равномерные приближения функций комплексного переменного» (Uniform approximations of functions of a complex variable; Успехи математических наук, т. 7, вып. 2, 1952), «О полноте систем аналитических функций» (On the completeness of systems of analytic functions; Успехи математических наук, т. 8, вып. 4, 1953) == References == == External links == National Academy of Sciences of Armenia Russian Academy of Sciences A Guide to the Russian Academy of Sciences, Part I, by Jack L. Cross Sergey Mergelyan at the Mathematics Genealogy Project
|
Wikipedia:Sergey Shorgin#0
|
Sergey Shorgin (Russian: Серге́й Я́ковлевич Шорги́н) (born 1952) is a Russian mathematician, Dr.Sc., Professor, a scientist in the field of informatics, as well as a poet and a translator of poetry. == Biography == He graduated from the MSU CMC faculty (1974). He defended his thesis for the degree of candidate of physical and mathematical sciences (1979), and later defended the thesis «Defining Insurance Tariffs: Stochastic Models and Methods of Evaluation» for the degree of Doctor of Physical and Mathematical Sciences. He was awarded the title of professor (2003). He has published more than 100 scientific papers, including articles in leading scientific journals. He is Deputy Director of the Institute of Informatics Problems of the Russian Academy of Sciences (RAS). Since 1999 he has been translating foreign poetry into Russian and writing his own poems. His translations include works of classic and contemporary authors of English, Scottish, American (USA), Australian, Canadian, Polish, Ukrainian, Belarusian, Slovak, and German literature. == References == == External links == Institute of Informatics Problems RAS (in Russian) Scientific works of Sergey Shorgin (in Russian) Scientific works of Sergey Shorgin (in English) Sergey Shorgin on the website «The Age of Translation» (in Russian)
|
Wikipedia:Sergio Albeverio#0
|
Sergio Albeverio (born 17 January 1939) is a Swiss mathematician and mathematical physicist working in numerous fields of mathematics and its applications. In particular he is known for his work in probability theory, analysis (including infinite-dimensional, non-standard, and stochastic analysis), mathematical physics, and in the areas of algebra, geometry, and number theory, as well as in applications ranging from the natural to the socio-economic sciences. He initiated (with Raphael Høegh-Krohn) a systematic mathematical theory of Feynman path integrals and of infinite-dimensional Dirichlet forms and associated stochastic processes (with applications particularly in quantum mechanics, statistical mechanics and quantum field theory). He also made essential contributions to the development of areas such as p-adic functional and stochastic analysis, as well as to the singular perturbation theory for differential operators. Other important contributions concern constructive quantum field theory and the representation theory of infinite-dimensional groups. He also initiated a new approach to the study of galaxy and planet formation inspired by stochastic mechanics. == References == == External links == Homepage of Sergio Albeverio at Bonn University (includes a list of publications) Sergio Albeverio at the Hausdorff Center for Mathematics Sergio Albeverio at MathSciNet Sergio Albeverio at Mathematics Genealogy Project Sergio Albeverio at ResearchGate
|
Wikipedia:Sergiu Hart#0
|
Sergiu Hart (Hebrew: סרג'יו הרט; born 1949) is an Israeli mathematician and economist. He is the Chairperson of the Humanities Division of the Israel Academy of Sciences and Humanities, and past President of the Game Theory Society (2008–2010), Member of Academia Europaea, International Honorary Member of the American Academy of Arts and Sciences, and International Member of the National Academy of Sciences (NAS). He is emeritus professor of mathematics and emeritus professor of economics, and member of the Center for the Study of Rationality, at the Hebrew University of Jerusalem. == Biography == Hart was born in Bucharest, Romania and immigrated to Israel in 1963. He received a B.Sc. in mathematics and statistics (summa cum laude, 1970) and an M.Sc. in mathematics (summa cum laude, 1971) from Tel Aviv University. His M.Sc. thesis was on the subject of "Values of Mixed Games" and was supervised by Robert Aumann, who was also his advisor for his doctoral thesis on "Cooperative Game Theory Models of Economic Equilibrium" (Ph.D., summa cum laude, 1976). He was an assistant professor at the Department of Economics, Department of Operations Research, and Institute for Mathematical Studies in the Social Sciences at Stanford University (1976–1979). From 1979 to 1991 he was at the School of Mathematical Sciences of Tel Aviv University, as a professor from 1985, and was a visiting professor at the Department of Economics of Harvard University (1984–1985 and 1990–1991). Since 1991 he has been a member of the Departments of Economics and Mathematics at the Hebrew University of Jerusalem, where he was the founding director of the Center for the Study of Rationality (1991–1999). == Research contributions == His main area of research is game theory and economic theory, with additional contributions in mathematics, computer science, probability and statistics.
Among his major contributions are studies of strategic foundations of cooperation; strategic use of information in long-term interactions ("repeated games"); adaptive and evolutionary dynamics, particularly with boundedly rational agents; perfect economic competition and its relations to models of fair distribution; riskiness; forecasting and calibration; and mechanism design with multiple goods. Hart edited, with Robert J. Aumann, the first three volumes of the Handbook of Game Theory with Economic Applications (1992, 1994, 2002). == Honors and awards == In 1998, he was awarded the Rothschild Prize in the Social Sciences. In 2018, he was awarded the Israel Prize in Economics and Statistics. In 2020, he was awarded the ACM SIGecom Test of Time Award. In 2006, he was elected Member of the Israel Academy of Sciences and Humanities. Since 2019, he has served as the Chairperson of the Humanities Division of the Israel Academy of Sciences and Humanities. In 2012, he was elected Member of Academia Europaea. In 2016, he was elected International Honorary Member of the American Academy of Arts and Sciences (USA). In 2025, he was elected International Member of the National Academy of Sciences (USA). In 1985, he was elected Fellow of the Econometric Society. In 1999, Charter Member of the Game Theory Society. From 2000 to 2005, Member of the First Council of the Game Theory Society. In 2013, Fellow of the Society for the Advancement of Economic Theory. In 2017, Fellow of the Game Theory Society and Member of the Advisory Board of the Game Theory Society. In 2000, he was invited to give the Cowles Lecture at Yale University. In 2003, he was invited to give the Walras-Bowley Lecture of the Econometric Society. In 2008, Presidential Address, GAMES 2008 - The Third World Congress of the Game Theory Society. In 2009, he was invited to give the Harris Lecture at Harvard University.
In 2011, he was invited to give the Kwan Chao-Chih Distinguished Lecture, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing. In 2012, he was invited to give the Algorithms, Combinatorics, and Optimization (ACO) Distinguished Lecture at Georgia Institute of Technology. In 2013, he was invited to give the Don Patinkin Lecture at the Israeli Economic Association. From 2005 to 2006, he served as President of the Israel Mathematical Union. From 2006 to 2008, he served as Executive Vice President of the Game Theory Society. From 2008 to 2010, he served as President of the Game Theory Society. In 1975, he was awarded the Israel Defense Prize. From 2010 to 2015, he was awarded an ERC (European Research Council) Advanced Investigator Grant. == References == == External links == Hart’s homepage Sergiu Hart at the Mathematics Genealogy Project
|
Wikipedia:Series acceleration#0
|
In mathematics, a series acceleration method is any one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities. == Definition == Given an infinite series with a sequence of partial sums ( S n ) n ∈ N {\displaystyle (S_{n})_{n\in \mathbb {N} }} having a limit lim n → ∞ S n = S , {\displaystyle \lim _{n\to \infty }S_{n}=S,} an accelerated series is an infinite series with a second sequence of partial sums ( S n ′ ) n ∈ N {\displaystyle (S'_{n})_{n\in \mathbb {N} }} which asymptotically converges faster to S {\displaystyle S} than the original sequence of partial sums would: lim n → ∞ S n ′ − S S n − S = 0. {\displaystyle \lim _{n\to \infty }{\frac {S'_{n}-S}{S_{n}-S}}=0.} A series acceleration method is a sequence transformation that transforms the convergent sequences of partial sums of a series into more quickly convergent sequences of partial sums of an accelerated series with the same limit. If a series acceleration method is applied to a divergent series then the proper limit of the series is undefined, but the sequence transformation can still act usefully as an extrapolation method to an antilimit of the series. The mappings from the original to the transformed series may be linear sequence transformations or non-linear sequence transformations. In general, the non-linear sequence transformations tend to be more powerful. == Overview == Two classical techniques for series acceleration are Euler's transformation of series and Kummer's transformation of series. 
A variety of much more rapidly convergent and special-case tools have been developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Katahiro Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon method given by Peter Wynn in 1956; the Levin u-transform; and the Wilf-Zeilberger-Ekhad method or WZ method. For alternating series, several powerful techniques, offering convergence rates from 5.828 − n {\displaystyle 5.828^{-n}} all the way to 17.93 − n {\displaystyle 17.93^{-n}} for a summation of n {\displaystyle n} terms, are described by Cohen et al. == Euler's transform == A basic example of a linear sequence transformation, offering improved convergence, is Euler's transform. It is intended to be applied to an alternating series; it is given by ∑ n = 0 ∞ ( − 1 ) n a n = ∑ n = 0 ∞ ( − 1 ) n ( Δ n a ) 0 2 n + 1 {\displaystyle \sum _{n=0}^{\infty }(-1)^{n}a_{n}=\sum _{n=0}^{\infty }(-1)^{n}{\frac {(\Delta ^{n}a)_{0}}{2^{n+1}}}} where Δ {\displaystyle \Delta } is the forward difference operator, for which one has the formula ( Δ n a ) 0 = ∑ k = 0 n ( − 1 ) k ( n k ) a n − k . {\displaystyle (\Delta ^{n}a)_{0}=\sum _{k=0}^{n}(-1)^{k}{n \choose k}a_{n-k}.} If the original series, on the left hand side, is only slowly converging, the forward differences will tend to become small quite rapidly; the additional power of two further improves the rate at which the right hand side converges. A particularly efficient numerical implementation of the Euler transform is the van Wijngaarden transformation. == Conformal mappings == A series S = ∑ n = 0 ∞ a n {\displaystyle S=\sum _{n=0}^{\infty }a_{n}} can be written as f ( 1 ) {\displaystyle f(1)} , where the function f is defined as f ( z ) = ∑ n = 0 ∞ a n z n . 
{\displaystyle f(z)=\sum _{n=0}^{\infty }a_{n}z^{n}.} The function f ( z ) {\displaystyle f(z)} can have singularities in the complex plane (branch point singularities, poles or essential singularities), which limit the radius of convergence of the series. If the point z = 1 {\displaystyle z=1} is close to or on the boundary of the disk of convergence, the series for S {\displaystyle S} will converge very slowly. One can then improve the convergence of the series by means of a conformal mapping that moves the singularities such that the point that is mapped to z = 1 {\displaystyle z=1} ends up deeper in the new disk of convergence. The conformal transform z = Φ ( w ) {\displaystyle z=\Phi (w)} needs to be chosen such that Φ ( 0 ) = 0 {\displaystyle \Phi (0)=0} , and one usually chooses a function that has a finite derivative at w = 0. One can assume that Φ ( 1 ) = 1 {\displaystyle \Phi (1)=1} without loss of generality, as one can always rescale w to redefine Φ {\displaystyle \Phi } . We then consider the function g ( w ) = f ( Φ ( w ) ) . {\displaystyle g(w)=f(\Phi (w)).} Since Φ ( 1 ) = 1 {\displaystyle \Phi (1)=1} , we have f ( 1 ) = g ( 1 ) {\displaystyle f(1)=g(1)} . We can obtain the series expansion of g ( w ) {\displaystyle g(w)} by putting z = Φ ( w ) {\displaystyle z=\Phi (w)} in the series expansion of f ( z ) {\displaystyle f(z)} because Φ ( 0 ) = 0 {\displaystyle \Phi (0)=0} ; the first n {\displaystyle n} terms of the series expansion for f ( z ) {\displaystyle f(z)} will yield the first n {\displaystyle n} terms of the series expansion for g ( w ) {\displaystyle g(w)} if Φ ′ ( 0 ) ≠ 0 {\displaystyle \Phi '(0)\neq 0} . Putting w = 1 {\displaystyle w=1} in that series expansion will thus yield a series such that if it converges, it will converge to the same value as the original series. 
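As a concrete numerical illustration of Euler's transform described above, the following minimal sketch (in Python; the helper names are illustrative, not from any library) applies the transform to the slowly convergent alternating series ∑ (−1)^n/(n+1) = ln 2:

```python
import math

def a(n):
    # Terms b_n of the alternating series  sum_{n>=0} (-1)^n b_n = ln 2
    return 1.0 / (n + 1)

def forward_difference(a, n):
    # (Delta^n a)_0 = sum_{k=0}^{n} (-1)^k C(n, k) a_{n-k}
    return sum((-1) ** k * math.comb(n, k) * a(n - k) for k in range(n + 1))

def euler_partial_sum(a, terms):
    # Partial sum of the transformed series:
    #   sum_{n=0}^{terms-1} (-1)^n (Delta^n a)_0 / 2^(n+1)
    return sum((-1) ** n * forward_difference(a, n) / 2.0 ** (n + 1)
               for n in range(terms))

direct = sum((-1) ** n * a(n) for n in range(10))  # plain 10-term partial sum
accelerated = euler_partial_sum(a, 10)             # Euler-transformed, 10 terms

print(abs(direct - math.log(2)))       # error of order 1e-2
print(abs(accelerated - math.log(2)))  # error of order 1e-4
```

With ten terms the plain partial sum is correct to only one or two digits, while the transformed sum is correct to roughly four, reflecting the geometric factor 2^(−(n+1)) in the transformed series.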
== Non-linear sequence transformations == Examples of nonlinear sequence transformations are Padé approximants, the Shanks transformation, and Levin-type sequence transformations. Nonlinear sequence transformations in particular often provide powerful numerical methods for the summation of divergent series or asymptotic series that arise for instance in perturbation theory, and therefore may be used as effective extrapolation methods. === Aitken method === A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method, A : S → S ′ = A ( S ) = ( s n ′ ) n ∈ N {\displaystyle \mathbb {A} :S\to S'=\mathbb {A} (S)={(s'_{n})}_{n\in \mathbb {N} }} defined by s n ′ = s n + 2 − ( s n + 2 − s n + 1 ) 2 s n + 2 − 2 s n + 1 + s n . {\displaystyle s'_{n}=s_{n+2}-{\frac {(s_{n+2}-s_{n+1})^{2}}{s_{n+2}-2s_{n+1}+s_{n}}}.} This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error. == See also == Shanks transformation Minimum polynomial extrapolation Van Wijngaarden transformation == References == C. Brezinski and M. Redivo Zaglia, Extrapolation Methods. Theory and Practice, North-Holland, 1991. G. A. Baker Jr. and P. Graves-Morris, Padé Approximants, Cambridge U.P., 1996. Weisstein, Eric W. "Convergence Improvement". MathWorld. Homeier, H. H. H. (2000). "Scalar Levin-type sequence transformations". Journal of Computational and Applied Mathematics. 122 (1–2): 81–147. arXiv:math/0005209. Bibcode:2000JCoAM.122...81H. doi:10.1016/S0377-0427(00)00359-9.
Brezinski, Claude and Redivo-Zaglia, Michela: "The genesis and early developments of Aitken's process, Shanks transformation, the ϵ {\displaystyle \epsilon } -algorithm, and related fixed point methods", Numerical Algorithms, Vol. 80, No. 1 (2019), pp. 11–133. Delahaye, J. P.: "Sequence Transformations", Springer-Verlag, Berlin, ISBN 978-3540152835 (1988). Sidi, Avram: "Vector Extrapolation Methods with Applications", SIAM, ISBN 978-1-61197-495-9 (2017). Brezinski, Claude, Redivo-Zaglia, Michela and Saad, Yousef: "Shanks Sequence Transformations and Anderson Acceleration", SIAM Review, Vol. 60, No. 3 (2018), pp. 646–669. doi:10.1137/17M1120725. Brezinski, Claude: "Reminiscences of Peter Wynn", Numerical Algorithms, Vol. 80 (2019), pp. 5–10. Brezinski, Claude and Redivo-Zaglia, Michela: "Extrapolation and Rational Approximation", Springer, ISBN 978-3-030-58417-7 (2020). == External links == Convergence acceleration of series GNU Scientific Library, Series Acceleration Digital Library of Mathematical Functions
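The Aitken delta-squared process defined above admits an equally short numerical sketch (Python; the Leibniz series for π/4 is chosen here purely as a test case, and iterating the transformation on the transformed sequence is one common way to use it):

```python
import math

def aitken(s):
    # s'_n = s_{n+2} - (s_{n+2} - s_{n+1})**2 / (s_{n+2} - 2*s_{n+1} + s_n)
    return [s[n + 2] - (s[n + 2] - s[n + 1]) ** 2
            / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

# Partial sums of the Leibniz series  pi/4 = 1 - 1/3 + 1/5 - ...
partial = []
total = 0.0
for n in range(12):
    total += (-1) ** n / (2 * n + 1)
    partial.append(total)

once = aitken(partial)   # one application of the transformation
twice = aitken(once)     # the process applied again to the new sequence

print(abs(partial[-1] - math.pi / 4))  # error of order 1e-2
print(abs(twice[-1] - math.pi / 4))    # several orders of magnitude smaller
```

Twelve plain terms give only about two correct digits of π/4, while two applications of the transformation recover several more, at the cost of a few arithmetic operations per term.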
|
Wikipedia:Series expansion#0
|
In mathematics, a series expansion is a technique that expresses a function as an infinite sum, or series, of simpler functions. It is a method for calculating a function that cannot be expressed by just elementary operators (addition, subtraction, multiplication and division). The resulting so-called series often can be limited to a finite number of terms, thus yielding an approximation of the function. The fewer terms of the sequence are used, the simpler this approximation will be. Often, the resulting inaccuracy (i.e., the partial sum of the omitted terms) can be described by an equation involving Big O notation (see also asymptotic expansion). The series expansion on an open interval will also be an approximation for non-analytic functions. == Types of series expansions == There are several kinds of series expansions, listed below. === Taylor series === A Taylor series is a power series based on a function's derivatives at a single point. More specifically, if a function f : U → R {\displaystyle f:U\to \mathbb {R} } is infinitely differentiable around a point x 0 {\displaystyle x_{0}} , then the Taylor series of f around this point is given by ∑ n = 0 ∞ f ( n ) ( x 0 ) n ! ( x − x 0 ) n {\displaystyle \sum _{n=0}^{\infty }{\frac {f^{(n)}(x_{0})}{n!}}(x-x_{0})^{n}} under the convention 0 0 := 1 {\displaystyle 0^{0}:=1} . The Maclaurin series of f is its Taylor series about x 0 = 0 {\displaystyle x_{0}=0} . === Laurent series === A Laurent series is a generalization of the Taylor series, allowing terms with negative exponents; it takes the form ∑ k = − ∞ ∞ c k ( z − a ) k {\textstyle \sum _{k=-\infty }^{\infty }c_{k}(z-a)^{k}} and converges in an annulus. In particular, a Laurent series can be used to examine the behavior of a complex function near a singularity by considering the series expansion on an annulus centered at the singularity. === Dirichlet series === A general Dirichlet series is a series of the form ∑ n = 1 ∞ a n e − λ n s . 
{\textstyle \sum _{n=1}^{\infty }a_{n}e^{-\lambda _{n}s}.} One important special case of this is the ordinary Dirichlet series ∑ n = 1 ∞ a n n s . {\textstyle \sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}}.} Dirichlet series are used extensively in number theory. === Fourier series === A Fourier series is an expansion of periodic functions as a sum of sine and cosine functions. More specifically, the Fourier series of a function f ( x ) {\displaystyle f(x)} of period 2 L {\displaystyle 2L} is given by the expression a 0 + ∑ n = 1 ∞ [ a n cos ( n π x L ) + b n sin ( n π x L ) ] {\displaystyle a_{0}+\sum _{n=1}^{\infty }\left[a_{n}\cos \left({\frac {n\pi x}{L}}\right)+b_{n}\sin \left({\frac {n\pi x}{L}}\right)\right]} where the coefficients are given by the formulae a n := 1 L ∫ − L L f ( x ) cos ( n π x L ) d x , b n := 1 L ∫ − L L f ( x ) sin ( n π x L ) d x . {\displaystyle {\begin{aligned}a_{n}&:={\frac {1}{L}}\int _{-L}^{L}f(x)\cos \left({\frac {n\pi x}{L}}\right)dx,\\b_{n}&:={\frac {1}{L}}\int _{-L}^{L}f(x)\sin \left({\frac {n\pi x}{L}}\right)dx.\end{aligned}}} === Other series === In acoustics, for example, the fundamental tone and the overtones together form an example of a Fourier series. Newtonian series Legendre polynomials: Used in physics to describe an arbitrary electrical field as a superposition of a dipole field, a quadrupole field, an octupole field, etc. Zernike polynomials: Used in optics to calculate aberrations of optical systems. Each term in the series describes a particular type of aberration. The Stirling series Ln Γ ( z ) ∼ ( z − 1 2 ) ln z − z + 1 2 ln ( 2 π ) + ∑ k = 1 ∞ B 2 k 2 k ( 2 k − 1 ) z 2 k − 1 {\displaystyle {\text{Ln}}\Gamma \left(z\right)\sim \left(z-{\tfrac {1}{2}}\right)\ln z-z+{\tfrac {1}{2}}\ln \left(2\pi \right)+\sum _{k=1}^{\infty }{\frac {B_{2k}}{2k(2k-1)z^{2k-1}}}} is an approximation of the log-gamma function. == Examples == The following is the Taylor series of e x {\displaystyle e^{x}} : e x = ∑ n = 0 ∞ x n n ! = 1 + x + x 2 2 + x 3 6 . . . 
{\displaystyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}=1+x+{\frac {x^{2}}{2}}+{\frac {x^{3}}{6}}...} The Dirichlet series of the Riemann zeta function is ζ ( s ) := ∑ n = 1 ∞ 1 n s = 1 1 s + 1 2 s + ⋯ {\displaystyle \zeta (s):=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}}={\frac {1}{1^{s}}}+{\frac {1}{2^{s}}}+\cdots } == References ==
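The truncation behaviour described above can be illustrated numerically. The following sketch in plain Python (the helper name `exp_taylor` is illustrative, not from the article) compares partial sums of the Maclaurin series of e^x against the exact value:

```python
import math

def exp_taylor(x, n_terms):
    """Partial sum of the Maclaurin series of exp: sum of x^n / n! for n < n_terms."""
    return sum(x**n / math.factorial(n) for n in range(n_terms))

x = 1.0
approx_4 = exp_taylor(x, 4)    # 1 + 1 + 1/2 + 1/6, a crude approximation of e
approx_10 = exp_taylor(x, 10)  # ten terms: accurate to roughly 1/10! ~ 3e-7
# More terms shrink the truncation error, consistent with the O(x^n / n!) tail.
assert abs(approx_10 - math.exp(x)) < abs(approx_4 - math.exp(x))
```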
|
Wikipedia:Series multisection#0
|
In mathematics, a multisection of a power series is a new power series composed of equally spaced terms extracted unaltered from the original series. Formally, if one is given a power series ∑ n = − ∞ ∞ a n ⋅ z n {\displaystyle \sum _{n=-\infty }^{\infty }a_{n}\cdot z^{n}} then its multisection is a power series of the form ∑ m = − ∞ ∞ a q m + p ⋅ z q m + p {\displaystyle \sum _{m=-\infty }^{\infty }a_{qm+p}\cdot z^{qm+p}} where p, q are integers, with 0 ≤ p < q. Series multisection represents one of the common transformations of generating functions. == Multisection of analytic functions == A multisection of the series of an analytic function f ( z ) = ∑ n = 0 ∞ a n ⋅ z n {\displaystyle f(z)=\sum _{n=0}^{\infty }a_{n}\cdot z^{n}} has a closed-form expression in terms of the function f ( z ) {\displaystyle f(z)} : ∑ m = 0 ∞ a q m + p ⋅ z q m + p = 1 q ⋅ ∑ k = 0 q − 1 ω − k p ⋅ f ( ω k ⋅ z ) , {\displaystyle \sum _{m=0}^{\infty }a_{qm+p}\cdot z^{qm+p}={\frac {1}{q}}\cdot \sum _{k=0}^{q-1}\omega ^{-kp}\cdot f(\omega ^{k}\cdot z),} where ω = e 2 π i q {\displaystyle \omega =e^{\frac {2\pi i}{q}}} is a primitive q-th root of unity. This expression is often called a root of unity filter. This solution was first discovered by Thomas Simpson. == Examples == === Bisection === In general, the bisections of a series are the even and odd parts of the series. === Geometric series === Consider the geometric series ∑ n = 0 ∞ z n = 1 1 − z for | z | < 1. {\displaystyle \sum _{n=0}^{\infty }z^{n}={\frac {1}{1-z}}\quad {\text{ for }}|z|<1.} By setting z → z q {\displaystyle z\rightarrow z^{q}} in the above series, its multisections are easily seen to be ∑ m = 0 ∞ z q m + p = z p 1 − z q for | z | < 1. {\displaystyle \sum _{m=0}^{\infty }z^{qm+p}={\frac {z^{p}}{1-z^{q}}}\quad {\text{ for }}|z|<1.} Remembering that the sum of the multisections must equal the original series, we recover the familiar identity ∑ p = 0 q − 1 z p = 1 − z q 1 − z . 
{\displaystyle \sum _{p=0}^{q-1}z^{p}={\frac {1-z^{q}}{1-z}}.} === Exponential function === The exponential function e z = ∑ n = 0 ∞ z n n ! {\displaystyle e^{z}=\sum _{n=0}^{\infty }{z^{n} \over n!}} by means of the above formula for analytic functions separates into ∑ m = 0 ∞ z q m + p ( q m + p ) ! = 1 q ⋅ ∑ k = 0 q − 1 ω − k p e ω k z . {\displaystyle \sum _{m=0}^{\infty }{z^{qm+p} \over (qm+p)!}={\frac {1}{q}}\cdot \sum _{k=0}^{q-1}\omega ^{-kp}e^{\omega ^{k}z}.} The bisections are trivially the hyperbolic functions: ∑ m = 0 ∞ z 2 m ( 2 m ) ! = 1 2 ( e z + e − z ) = cosh z {\displaystyle \sum _{m=0}^{\infty }{z^{2m} \over (2m)!}={\frac {1}{2}}\left(e^{z}+e^{-z}\right)=\cosh {z}} ∑ m = 0 ∞ z 2 m + 1 ( 2 m + 1 ) ! = 1 2 ( e z − e − z ) = sinh z . {\displaystyle \sum _{m=0}^{\infty }{z^{2m+1} \over (2m+1)!}={\frac {1}{2}}\left(e^{z}-e^{-z}\right)=\sinh {z}.} Higher order multisections are found by noting that all such series must be real-valued along the real line. By taking the real part and using standard trigonometric identities, the formulas may be written in explicitly real form as ∑ m = 0 ∞ z q m + p ( q m + p ) ! = 1 q ⋅ ∑ k = 0 q − 1 e z cos ( 2 π k / q ) cos ( z sin ( 2 π k q ) − 2 π k p q ) . {\displaystyle \sum _{m=0}^{\infty }{z^{qm+p} \over (qm+p)!}={\frac {1}{q}}\cdot \sum _{k=0}^{q-1}e^{z\cos(2\pi k/q)}\cos {\left(z\sin {\left({\frac {2\pi k}{q}}\right)}-{\frac {2\pi kp}{q}}\right)}.} These can be seen as solutions to the linear differential equation f ( q ) ( z ) = f ( z ) {\displaystyle f^{(q)}(z)=f(z)} with boundary conditions f ( k ) ( 0 ) = δ k , p {\displaystyle f^{(k)}(0)=\delta _{k,p}} , using Kronecker delta notation. In particular, the trisections are ∑ m = 0 ∞ z 3 m ( 3 m ) ! = 1 3 ( e z + 2 e − z / 2 cos 3 z 2 ) {\displaystyle \sum _{m=0}^{\infty }{z^{3m} \over (3m)!}={\frac {1}{3}}\left(e^{z}+2e^{-z/2}\cos {\frac {{\sqrt {3}}z}{2}}\right)} ∑ m = 0 ∞ z 3 m + 1 ( 3 m + 1 ) ! 
= 1 3 ( e z − 2 e − z / 2 cos ( 3 z 2 + π 3 ) ) {\displaystyle \sum _{m=0}^{\infty }{z^{3m+1} \over (3m+1)!}={\frac {1}{3}}\left(e^{z}-2e^{-z/2}\cos {\left({\frac {{\sqrt {3}}z}{2}}+{\frac {\pi }{3}}\right)}\right)} ∑ m = 0 ∞ z 3 m + 2 ( 3 m + 2 ) ! = 1 3 ( e z − 2 e − z / 2 cos ( 3 z 2 − π 3 ) ) , {\displaystyle \sum _{m=0}^{\infty }{z^{3m+2} \over (3m+2)!}={\frac {1}{3}}\left(e^{z}-2e^{-z/2}\cos {\left({\frac {{\sqrt {3}}z}{2}}-{\frac {\pi }{3}}\right)}\right),} and the quadrisections are ∑ m = 0 ∞ z 4 m ( 4 m ) ! = 1 2 ( cosh z + cos z ) {\displaystyle \sum _{m=0}^{\infty }{z^{4m} \over (4m)!}={\frac {1}{2}}\left(\cosh {z}+\cos {z}\right)} ∑ m = 0 ∞ z 4 m + 1 ( 4 m + 1 ) ! = 1 2 ( sinh z + sin z ) {\displaystyle \sum _{m=0}^{\infty }{z^{4m+1} \over (4m+1)!}={\frac {1}{2}}\left(\sinh {z}+\sin {z}\right)} ∑ m = 0 ∞ z 4 m + 2 ( 4 m + 2 ) ! = 1 2 ( cosh z − cos z ) {\displaystyle \sum _{m=0}^{\infty }{z^{4m+2} \over (4m+2)!}={\frac {1}{2}}\left(\cosh {z}-\cos {z}\right)} ∑ m = 0 ∞ z 4 m + 3 ( 4 m + 3 ) ! = 1 2 ( sinh z − sin z ) . {\displaystyle \sum _{m=0}^{\infty }{z^{4m+3} \over (4m+3)!}={\frac {1}{2}}\left(\sinh {z}-\sin {z}\right).} === Binomial series === Multisection of a binomial expansion ( 1 + x ) n = ( n 0 ) x 0 + ( n 1 ) x + ( n 2 ) x 2 + ⋯ {\displaystyle (1+x)^{n}={n \choose 0}x^{0}+{n \choose 1}x+{n \choose 2}x^{2}+\cdots } at x = 1 gives the following identity for the sum of binomial coefficients with step q: ( n p ) + ( n p + q ) + ( n p + 2 q ) + ⋯ = 1 q ⋅ ∑ k = 0 q − 1 ( 2 cos π k q ) n ⋅ cos π ( n − 2 p ) k q . {\displaystyle {n \choose p}+{n \choose p+q}+{n \choose p+2q}+\cdots ={\frac {1}{q}}\cdot \sum _{k=0}^{q-1}\left(2\cos {\frac {\pi k}{q}}\right)^{n}\cdot \cos {\frac {\pi (n-2p)k}{q}}.} == Applications == Series multisection converts an infinite sum into a finite sum. 
It is used, for example, in a key step of a standard proof of Gauss's digamma theorem, which gives a closed-form expression for the digamma function evaluated at rational values p/q. == References == Weisstein, Eric W. "Series Multisection". MathWorld. Somos, Michael. A Multisection of q-Series, 2006. Riordan, John (1968). Combinatorial Identities. New York: John Wiley and Sons.
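The root-of-unity filter stated above is easy to check numerically. The following sketch in plain Python (the function name `multisection` is illustrative) recovers the bisections and a trisection of the exponential function and compares them with the closed forms given earlier:

```python
import cmath
import math

def multisection(f, z, q, p):
    """Root-of-unity filter: sums the terms a_{qm+p} z^{qm+p} of f's power series,
    computed as (1/q) * sum over k of w^{-k p} * f(w^k z) with w = exp(2*pi*i/q)."""
    w = cmath.exp(2j * cmath.pi / q)
    return sum(w**(-k * p) * f(w**k * z) for k in range(q)) / q

z = 0.7
# Bisections of exp are the hyperbolic functions.
assert abs(multisection(cmath.exp, z, 2, 0) - math.cosh(z)) < 1e-12
assert abs(multisection(cmath.exp, z, 2, 1) - math.sinh(z)) < 1e-12
# Trisection with p = 0 matches (e^z + 2 e^{-z/2} cos(sqrt(3) z / 2)) / 3.
tri = (math.exp(z) + 2 * math.exp(-z / 2) * math.cos(math.sqrt(3) * z / 2)) / 3
assert abs(multisection(cmath.exp, z, 3, 0) - tri) < 1e-12
```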
|
Wikipedia:Serre's theorem on a semisimple Lie algebra#0
|
In mathematics, a Lie algebra is semisimple if it is a direct sum of simple Lie algebras. (A simple Lie algebra is a non-abelian Lie algebra without any non-zero proper ideals.) Throughout the article, unless otherwise stated, a Lie algebra is a finite-dimensional Lie algebra over a field of characteristic 0. For such a Lie algebra g {\displaystyle {\mathfrak {g}}} , if nonzero, the following conditions are equivalent: g {\displaystyle {\mathfrak {g}}} is semisimple; the Killing form κ ( x , y ) = tr ( ad ( x ) ad ( y ) ) {\displaystyle \kappa (x,y)=\operatorname {tr} (\operatorname {ad} (x)\operatorname {ad} (y))} is non-degenerate; g {\displaystyle {\mathfrak {g}}} has no non-zero abelian ideals; g {\displaystyle {\mathfrak {g}}} has no non-zero solvable ideals; the radical (maximal solvable ideal) of g {\displaystyle {\mathfrak {g}}} is zero. == Significance == The significance of semisimplicity comes firstly from the Levi decomposition, which states that every finite dimensional Lie algebra is the semidirect product of a solvable ideal (its radical) and a semisimple algebra. In particular, there is no nonzero Lie algebra that is both solvable and semisimple. Semisimple Lie algebras have a very elegant classification, in stark contrast to solvable Lie algebras. Semisimple Lie algebras over an algebraically closed field of characteristic zero are completely classified by their root system, which are in turn classified by Dynkin diagrams. Semisimple algebras over non-algebraically closed fields can be understood in terms of those over the algebraic closure, though the classification is somewhat more intricate; see real form for the case of real semisimple Lie algebras, which were classified by Élie Cartan. Further, the representation theory of semisimple Lie algebras is much cleaner than that for general Lie algebras. 
For example, the Jordan decomposition in a semisimple Lie algebra coincides with the Jordan decomposition in its representation; this is not the case for Lie algebras in general. If g {\displaystyle {\mathfrak {g}}} is semisimple, then g = [ g , g ] {\displaystyle {\mathfrak {g}}=[{\mathfrak {g}},{\mathfrak {g}}]} . In particular, every linear semisimple Lie algebra is a subalgebra of s l {\displaystyle {\mathfrak {sl}}} , the special linear Lie algebra. The study of the structure of s l {\displaystyle {\mathfrak {sl}}} constitutes an important part of the representation theory for semisimple Lie algebras. == History == The semisimple Lie algebras over the complex numbers were first classified by Wilhelm Killing (1888–90), though his proof lacked rigor. His proof was made rigorous by Élie Cartan (1894) in his Ph.D. thesis; Cartan also classified the semisimple real Lie algebras. This was subsequently refined, and the present classification by Dynkin diagrams was given by then 22-year-old Eugene Dynkin in 1947. Some minor modifications have been made (notably by J. P. Serre), but the proof is unchanged in its essentials and can be found in any standard reference, such as (Humphreys 1972). == Basic properties == Every ideal, quotient and product of semisimple Lie algebras is again semisimple. The center of a semisimple Lie algebra g {\displaystyle {\mathfrak {g}}} is trivial (since the center is an abelian ideal). In other words, the adjoint representation ad {\displaystyle \operatorname {ad} } is injective. Moreover, the image turns out to be the Lie algebra Der ( g ) {\displaystyle \operatorname {Der} ({\mathfrak {g}})} of derivations on g {\displaystyle {\mathfrak {g}}} . Hence, ad : g → ∼ Der ( g ) {\displaystyle \operatorname {ad} :{\mathfrak {g}}{\overset {\sim }{\to }}\operatorname {Der} ({\mathfrak {g}})} is an isomorphism. (This is a special case of Whitehead's lemma.) 
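The Killing-form criterion for semisimplicity listed among the equivalent conditions above can be made concrete in the smallest example. The following is a minimal sketch in plain Python, assuming the standard basis (e, f, h) of sl_2, that computes κ(x, y) = tr(ad(x) ad(y)) and checks non-degeneracy:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def bracket(A, B):
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

# Standard basis of sl_2: e = E_12, f = E_21, h = diag(1, -1).
e, f, h = [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[1, 0], [0, -1]]
basis = [e, f, h]

def coords(m):
    # A traceless 2x2 matrix [[c, a], [b, -c]] equals a*e + b*f + c*h.
    return [m[0][1], m[1][0], m[0][0]]

def ad(x):
    # Matrix of ad(x) = [x, -] in the basis (e, f, h); columns are coords of [x, b_j].
    cols = [coords(bracket(x, b)) for b in basis]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def killing(x, y):
    # kappa(x, y) = trace(ad(x) ad(y))
    ax, ay = ad(x), ad(y)
    return sum(sum(ax[i][k] * ay[k][i] for k in range(3)) for i in range(3))

# kappa(h, h) = 8 and kappa(e, f) = 4; the Gram matrix in this basis is
# [[0, 4, 0], [4, 0, 0], [0, 0, 8]], which is non-degenerate, so sl_2 is semisimple.
gram = [[killing(a, b) for b in basis] for a in basis]
```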
As the adjoint representation is injective, a semisimple Lie algebra is a linear Lie algebra under the adjoint representation. This may lead to some ambiguity, as every Lie algebra is already linear with respect to some other vector space (Ado's theorem), although not necessarily via the adjoint representation. But in practice, such ambiguity rarely occurs. If g {\displaystyle {\mathfrak {g}}} is a semisimple Lie algebra, then g = [ g , g ] {\displaystyle {\mathfrak {g}}=[{\mathfrak {g}},{\mathfrak {g}}]} (because g / [ g , g ] {\displaystyle {\mathfrak {g}}/[{\mathfrak {g}},{\mathfrak {g}}]} is semisimple and abelian). A finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} over a field k of characteristic zero is semisimple if and only if the base extension g ⊗ k F {\displaystyle {\mathfrak {g}}\otimes _{k}F} is semisimple for each field extension F ⊃ k {\displaystyle F\supset k} . Thus, for example, a finite-dimensional real Lie algebra is semisimple if and only if its complexification is semisimple. == Jordan decomposition == Each endomorphism x of a finite-dimensional vector space over a field of characteristic zero can be decomposed uniquely into a semisimple (i.e., diagonalizable over the algebraic closure) and nilpotent part x = s + n {\displaystyle x=s+n\ } such that s and n commute with each other. Moreover, each of s and n is a polynomial in x. This is the Jordan decomposition of x. The above applies to the adjoint representation ad {\displaystyle \operatorname {ad} } of a semisimple Lie algebra g {\displaystyle {\mathfrak {g}}} . An element x of g {\displaystyle {\mathfrak {g}}} is said to be semisimple (resp. nilpotent) if ad ( x ) {\displaystyle \operatorname {ad} (x)} is a semisimple (resp. nilpotent) operator. 
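For a concrete endomorphism, the semisimple and nilpotent parts can be computed from the Jordan form M = P J P^{-1}: the semisimple part is P (diagonal of J) P^{-1}, and the nilpotent part is the remainder. A minimal sketch using SymPy (the matrix M is an arbitrary illustrative example):

```python
from sympy import Matrix, diag

def jordan_decomposition(M):
    """Split M = S + N with S diagonalizable, N nilpotent, and S N = N S,
    via the Jordan form M = P J P^{-1}."""
    P, J = M.jordan_form()
    S = P * diag(*[J[i, i] for i in range(J.rows)]) * P.inv()
    return S, M - S

M = Matrix([[2, 1], [0, 2]])  # a single Jordan block
S, N = jordan_decomposition(M)
# S = 2*I is semisimple, N = E_12 is nilpotent, and they commute.
assert S * N == N * S
assert N**2 == Matrix.zeros(2)
```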
If x ∈ g {\displaystyle x\in {\mathfrak {g}}} , then the abstract Jordan decomposition states that x can be written uniquely as: x = s + n {\displaystyle x=s+n} where s {\displaystyle s} is semisimple, n {\displaystyle n} is nilpotent and [ s , n ] = 0 {\displaystyle [s,n]=0} . Moreover, if y ∈ g {\displaystyle y\in {\mathfrak {g}}} commutes with x, then it commutes with both s , n {\displaystyle s,n} as well. The abstract Jordan decomposition factors through any representation of g {\displaystyle {\mathfrak {g}}} in the sense that given any representation ρ, ρ ( x ) = ρ ( s ) + ρ ( n ) {\displaystyle \rho (x)=\rho (s)+\rho (n)\,} is the Jordan decomposition of ρ(x) in the endomorphism algebra of the representation space. (This is proved as a consequence of Weyl's complete reducibility theorem; see Weyl's theorem on complete reducibility#Application: preservation of Jordan decomposition.) == Structure == Let g {\displaystyle {\mathfrak {g}}} be a (finite-dimensional) semisimple Lie algebra over an algebraically closed field of characteristic zero. The structure of g {\displaystyle {\mathfrak {g}}} can be described by an adjoint action of a certain distinguished subalgebra on it, a Cartan subalgebra. By definition, a Cartan subalgebra (also called a maximal toral subalgebra) h {\displaystyle {\mathfrak {h}}} of g {\displaystyle {\mathfrak {g}}} is a maximal subalgebra such that, for each h ∈ h {\displaystyle h\in {\mathfrak {h}}} , ad ( h ) {\displaystyle \operatorname {ad} (h)} is diagonalizable. As it turns out, h {\displaystyle {\mathfrak {h}}} is abelian and so all the operators in ad ( h ) {\displaystyle \operatorname {ad} ({\mathfrak {h}})} are simultaneously diagonalizable. 
For each linear functional α {\displaystyle \alpha } of h {\displaystyle {\mathfrak {h}}} , let g α = { x ∈ g | ad ( h ) x := [ h , x ] = α ( h ) x for all h ∈ h } {\displaystyle {\mathfrak {g}}_{\alpha }=\{x\in {\mathfrak {g}}|\operatorname {ad} (h)x:=[h,x]=\alpha (h)x\,{\text{ for all }}h\in {\mathfrak {h}}\}} . (Note that g 0 {\displaystyle {\mathfrak {g}}_{0}} is the centralizer of h {\displaystyle {\mathfrak {h}}} .) Then, writing Φ {\displaystyle \Phi } for the set of nonzero linear functionals α {\displaystyle \alpha } with g α ≠ 0 {\displaystyle {\mathfrak {g}}_{\alpha }\neq 0} , one obtains the root space decomposition g = h ⊕ ⨁ α ∈ Φ g α {\textstyle {\mathfrak {g}}={\mathfrak {h}}\oplus \bigoplus _{\alpha \in \Phi }{\mathfrak {g}}_{\alpha }} , with dim g α = 1 {\displaystyle \dim {\mathfrak {g}}_{\alpha }=1} for each α ∈ Φ {\displaystyle \alpha \in \Phi } . (The most difficult item to show is dim g α = 1 {\displaystyle \dim {\mathfrak {g}}_{\alpha }=1} . The standard proofs all use some facts in the representation theory of s l 2 {\displaystyle {\mathfrak {sl}}_{2}} ; e.g., Serre uses the fact that an s l 2 {\displaystyle {\mathfrak {sl}}_{2}} -module with a primitive element of negative weight is infinite-dimensional, contradicting dim g < ∞ {\displaystyle \dim {\mathfrak {g}}<\infty } .) Let h α ∈ h , e α ∈ g α , f α ∈ g − α {\displaystyle h_{\alpha }\in {\mathfrak {h}},e_{\alpha }\in {\mathfrak {g}}_{\alpha },f_{\alpha }\in {\mathfrak {g}}_{-\alpha }} with the commutation relations [ e α , f α ] = h α , [ h α , e α ] = 2 e α , [ h α , f α ] = − 2 f α {\displaystyle [e_{\alpha },f_{\alpha }]=h_{\alpha },[h_{\alpha },e_{\alpha }]=2e_{\alpha },[h_{\alpha },f_{\alpha }]=-2f_{\alpha }} ; i.e., the h α , e α , f α {\displaystyle h_{\alpha },e_{\alpha },f_{\alpha }} correspond to the standard basis of s l 2 {\displaystyle {\mathfrak {sl}}_{2}} . The linear functionals in Φ {\displaystyle \Phi } are called the roots of g {\displaystyle {\mathfrak {g}}} relative to h {\displaystyle {\mathfrak {h}}} . The roots span h ∗ {\displaystyle {\mathfrak {h}}^{*}} (since if α ( h ) = 0 {\displaystyle \alpha (h)=0} for all α ∈ Φ {\displaystyle \alpha \in \Phi } , then ad ( h ) {\displaystyle \operatorname {ad} (h)} is the zero operator; i.e., h {\displaystyle h} is in the center, which is zero.) 
Moreover, from the representation theory of s l 2 {\displaystyle {\mathfrak {sl}}_{2}} , one deduces the following symmetry and integral properties of Φ {\displaystyle \Phi } : for each α , β ∈ Φ {\displaystyle \alpha ,\beta \in \Phi } , the number β ( h α ) {\displaystyle \beta (h_{\alpha })} is an integer, and s α ( β ) := β − β ( h α ) α {\displaystyle s_{\alpha }(\beta ):=\beta -\beta (h_{\alpha })\alpha } again lies in Φ {\displaystyle \Phi } . Note that s α {\displaystyle s_{\alpha }} has the properties (1) s α ( α ) = − α {\displaystyle s_{\alpha }(\alpha )=-\alpha } and (2) the fixed-point set is { γ ∈ h ∗ | γ ( h α ) = 0 } {\displaystyle \{\gamma \in {\mathfrak {h}}^{*}|\gamma (h_{\alpha })=0\}} , which means that s α {\displaystyle s_{\alpha }} is the reflection with respect to the hyperplane corresponding to α {\displaystyle \alpha } . The above then says that Φ {\displaystyle \Phi } is a root system. It follows from the general theory of a root system that Φ {\displaystyle \Phi } contains a basis α 1 , … , α l {\displaystyle \alpha _{1},\dots ,\alpha _{l}} of h ∗ {\displaystyle {\mathfrak {h}}^{*}} such that each root is a linear combination of α 1 , … , α l {\displaystyle \alpha _{1},\dots ,\alpha _{l}} with integer coefficients of the same sign; the roots α i {\displaystyle \alpha _{i}} are called simple roots. Let e i = e α i {\displaystyle e_{i}=e_{\alpha _{i}}} , etc. Then the 3 l {\displaystyle 3l} elements e i , f i , h i {\displaystyle e_{i},f_{i},h_{i}} (called Chevalley generators) generate g {\displaystyle {\mathfrak {g}}} as a Lie algebra. Moreover, they satisfy the relations (called Serre relations): with a i j = α j ( h i ) {\displaystyle a_{ij}=\alpha _{j}(h_{i})} , [ h i , h j ] = 0 , {\displaystyle [h_{i},h_{j}]=0,} [ e i , f i ] = h i , [ e i , f j ] = 0 , i ≠ j , {\displaystyle [e_{i},f_{i}]=h_{i},[e_{i},f_{j}]=0,i\neq j,} [ h i , e j ] = a i j e j , [ h i , f j ] = − a i j f j , {\displaystyle [h_{i},e_{j}]=a_{ij}e_{j},[h_{i},f_{j}]=-a_{ij}f_{j},} ad ( e i ) − a i j + 1 ( e j ) = ad ( f i ) − a i j + 1 ( f j ) = 0 , i ≠ j {\displaystyle \operatorname {ad} (e_{i})^{-a_{ij}+1}(e_{j})=\operatorname {ad} (f_{i})^{-a_{ij}+1}(f_{j})=0,i\neq j} . 
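The Chevalley generators and Serre relations can be verified by direct computation in sl_3 (type A_2, whose Cartan matrix is [[2, −1], [−1, 2]]). A minimal sketch in plain Python using matrix units:

```python
def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def br(A, B):
    # Commutator [A, B] = AB - BA
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))] for i in range(len(A))]

def E(i, j):
    # 3x3 matrix unit with 1 in row i, column j
    return [[1 if (r, c) == (i, j) else 0 for c in range(3)] for r in range(3)]

# Chevalley generators of sl_3 and the type-A_2 Cartan matrix.
e1, e2, f1, f2 = E(0, 1), E(1, 2), E(1, 0), E(2, 1)
h1 = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
h2 = [[0, 0, 0], [0, 1, 0], [0, 0, -1]]
zero = [[0] * 3 for _ in range(3)]

assert br(h1, h2) == zero                         # [h_i, h_j] = 0
assert br(e1, f1) == h1 and br(e2, f2) == h2      # [e_i, f_i] = h_i
assert br(e1, f2) == zero and br(e2, f1) == zero  # [e_i, f_j] = 0 for i != j
# Serre relation with a_12 = -1: ad(e_1)^2 (e_2) = [e_1, [e_1, e_2]] = 0.
assert br(e1, br(e1, e2)) == zero
```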
The converse of this is also true: i.e., the Lie algebra generated by the generators and the relations like the above is a (finite-dimensional) semisimple Lie algebra that has the root space decomposition as above (provided the [ a i j ] 1 ≤ i , j ≤ l {\displaystyle [a_{ij}]_{1\leq i,j\leq l}} is a Cartan matrix). This is a theorem of Serre. In particular, two semisimple Lie algebras are isomorphic if they have the same root system. The implication of the axiomatic nature of a root system and Serre's theorem is that one can enumerate all possible root systems; hence, "all possible" semisimple Lie algebras (finite-dimensional over an algebraically closed field of characteristic zero). The Weyl group is the group of linear transformations of h ∗ ≃ h {\displaystyle {\mathfrak {h}}^{*}\simeq {\mathfrak {h}}} generated by the s α {\displaystyle s_{\alpha }} 's. The Weyl group is an important symmetry of the problem; for example, the weights of any finite-dimensional representation of g {\displaystyle {\mathfrak {g}}} are invariant under the Weyl group. == Example root space decomposition in sln(C) == For g = s l n ( C ) {\displaystyle {\mathfrak {g}}={\mathfrak {sl}}_{n}(\mathbb {C} )} and the Cartan subalgebra h {\displaystyle {\mathfrak {h}}} of diagonal matrices, define λ i ∈ h ∗ {\displaystyle \lambda _{i}\in {\mathfrak {h}}^{*}} by λ i ( d ( a 1 , … , a n ) ) = a i {\displaystyle \lambda _{i}(d(a_{1},\ldots ,a_{n}))=a_{i}} , where d ( a 1 , … , a n ) {\displaystyle d(a_{1},\ldots ,a_{n})} denotes the diagonal matrix with a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} on the diagonal. 
Then the decomposition is given by g = h ⊕ ( ⨁ i ≠ j g λ i − λ j ) {\displaystyle {\mathfrak {g}}={\mathfrak {h}}\oplus \left(\bigoplus _{i\neq j}{\mathfrak {g}}_{\lambda _{i}-\lambda _{j}}\right)} where g λ i − λ j = Span C ( e i j ) {\displaystyle {\mathfrak {g}}_{\lambda _{i}-\lambda _{j}}={\text{Span}}_{\mathbb {C} }(e_{ij})} for the vector e i j {\displaystyle e_{ij}} in s l n ( C ) {\displaystyle {\mathfrak {sl}}_{n}(\mathbb {C} )} with the standard (matrix) basis, meaning e i j {\displaystyle e_{ij}} represents the basis vector in the i {\displaystyle i} -th row and j {\displaystyle j} -th column. This decomposition of g {\displaystyle {\mathfrak {g}}} has an associated root system: Φ = { λ i − λ j : i ≠ j } {\displaystyle \Phi =\{\lambda _{i}-\lambda _{j}:i\neq j\}} === sl2(C) === For example, in s l 2 ( C ) {\displaystyle {\mathfrak {sl}}_{2}(\mathbb {C} )} the decomposition is s l 2 = h ⊕ g λ 1 − λ 2 ⊕ g λ 2 − λ 1 {\displaystyle {\mathfrak {sl}}_{2}={\mathfrak {h}}\oplus {\mathfrak {g}}_{\lambda _{1}-\lambda _{2}}\oplus {\mathfrak {g}}_{\lambda _{2}-\lambda _{1}}} and the associated root system is Φ = { λ 1 − λ 2 , λ 2 − λ 1 } {\displaystyle \Phi =\{\lambda _{1}-\lambda _{2},\lambda _{2}-\lambda _{1}\}} === sl3(C) === In s l 3 ( C ) {\displaystyle {\mathfrak {sl}}_{3}(\mathbb {C} )} the decomposition is s l 3 = h ⊕ g λ 1 − λ 2 ⊕ g λ 1 − λ 3 ⊕ g λ 2 − λ 3 ⊕ g λ 2 − λ 1 ⊕ g λ 3 − λ 1 ⊕ g λ 3 − λ 2 {\displaystyle {\mathfrak {sl}}_{3}={\mathfrak {h}}\oplus {\mathfrak {g}}_{\lambda _{1}-\lambda _{2}}\oplus {\mathfrak {g}}_{\lambda _{1}-\lambda _{3}}\oplus {\mathfrak {g}}_{\lambda _{2}-\lambda _{3}}\oplus {\mathfrak {g}}_{\lambda _{2}-\lambda _{1}}\oplus {\mathfrak {g}}_{\lambda _{3}-\lambda _{1}}\oplus {\mathfrak {g}}_{\lambda _{3}-\lambda _{2}}} and the associated root system is given by Φ = { ± ( λ 1 − λ 2 ) , ± ( λ 1 − λ 3 ) , ± ( λ 2 − λ 3 ) } {\displaystyle \Phi =\{\pm (\lambda _{1}-\lambda _{2}),\pm (\lambda _{1}-\lambda _{3}),\pm (\lambda _{2}-\lambda 
_{3})\}} == Examples == As noted in #Structure, semisimple Lie algebras over C {\displaystyle \mathbb {C} } (or more generally an algebraically closed field of characteristic zero) are classified by the root system associated to their Cartan subalgebras, and the root systems, in turn, are classified by their Dynkin diagrams. Examples of semisimple Lie algebras, the classical Lie algebras, with notation coming from their Dynkin diagrams, are: A n : {\displaystyle A_{n}:} s l n + 1 {\displaystyle {\mathfrak {sl}}_{n+1}} , the special linear Lie algebra. B n : {\displaystyle B_{n}:} s o 2 n + 1 {\displaystyle {\mathfrak {so}}_{2n+1}} , the odd-dimensional special orthogonal Lie algebra. C n : {\displaystyle C_{n}:} s p 2 n {\displaystyle {\mathfrak {sp}}_{2n}} , the symplectic Lie algebra. D n : {\displaystyle D_{n}:} s o 2 n {\displaystyle {\mathfrak {so}}_{2n}} , the even-dimensional special orthogonal Lie algebra ( n > 1 {\displaystyle n>1} ). The restriction n > 1 {\displaystyle n>1} in the D n {\displaystyle D_{n}} family is needed because s o 2 {\displaystyle {\mathfrak {so}}_{2}} is one-dimensional and commutative and therefore not semisimple. These Lie algebras are numbered so that n is the rank. Almost all of these semisimple Lie algebras are actually simple and the members of these families are almost all distinct, except for some collisions in small rank. For example s o 4 ≅ s o 3 ⊕ s o 3 {\displaystyle {\mathfrak {so}}_{4}\cong {\mathfrak {so}}_{3}\oplus {\mathfrak {so}}_{3}} and s p 2 ≅ s o 5 {\displaystyle {\mathfrak {sp}}_{2}\cong {\mathfrak {so}}_{5}} . These four families, together with five exceptions (E6, E7, E8, F4, and G2), are in fact the only simple Lie algebras over the complex numbers. 
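The dimensions of the four classical families follow directly from their matrix descriptions, and the low-rank coincidences mirror the exceptional isomorphisms mentioned above. A small sketch (writing the rank-2 symplectic algebra as sp_4 in the sp_{2n} indexing used for the C_n family):

```python
# Dimensions of the classical Lie algebras of rank n over C:
def dim_A(n): return (n + 1)**2 - 1   # sl_{n+1}
def dim_B(n): return n * (2*n + 1)    # so_{2n+1}
def dim_C(n): return n * (2*n + 1)    # sp_{2n}
def dim_D(n): return n * (2*n - 1)    # so_{2n}

# Rank-1 coincidence: sl_2, so_3 and sp_2 all have dimension 3.
assert dim_A(1) == dim_B(1) == dim_C(1) == 3
# so_4 is isomorphic to so_3 + so_3; the dimensions agree: 6 = 3 + 3.
assert dim_D(2) == 2 * dim_B(1)
# B_2 and C_2 coincide: so_5 and sp_4 both have dimension 10.
assert dim_B(2) == dim_C(2) == 10
```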
== Classification == Every semisimple Lie algebra over an algebraically closed field of characteristic 0 is a direct sum of simple Lie algebras (by definition), and the finite-dimensional simple Lie algebras fall into four families – An, Bn, Cn, and Dn – with five exceptions E6, E7, E8, F4, and G2. Simple Lie algebras are classified by the connected Dynkin diagrams, shown on the right, while semisimple Lie algebras correspond to not necessarily connected Dynkin diagrams, where each component of the diagram corresponds to a summand of the decomposition of the semisimple Lie algebra into simple Lie algebras. The classification proceeds by considering a Cartan subalgebra (see above) and its adjoint action on the Lie algebra. The root system of the action then both determines the original Lie algebra and must have a very constrained form, which can be classified by the Dynkin diagrams. See the section above describing Cartan subalgebras and root systems for more details. The classification is widely considered one of the most elegant results in mathematics – a brief list of axioms yields, via a relatively short proof, a complete but non-trivial classification with surprising structure. This should be compared to the classification of finite simple groups, which is significantly more complicated. The enumeration of the four families is non-redundant and consists only of simple algebras if n ≥ 1 {\displaystyle n\geq 1} for An, n ≥ 2 {\displaystyle n\geq 2} for Bn, n ≥ 3 {\displaystyle n\geq 3} for Cn, and n ≥ 4 {\displaystyle n\geq 4} for Dn. If one starts numbering lower, the enumeration is redundant, and one has exceptional isomorphisms between simple Lie algebras, which are reflected in isomorphisms of Dynkin diagrams; the En can also be extended down, but below E6 they are isomorphic to other, non-exceptional algebras. 
Over a non-algebraically closed field, the classification is more complicated – one classifies simple Lie algebras over the algebraic closure, then for each of these, one classifies simple Lie algebras over the original field which have this form (over the closure). For example, to classify simple real Lie algebras, one classifies real Lie algebras with a given complexification, which are known as real forms of the complex Lie algebra; this can be done by Satake diagrams, which are Dynkin diagrams with additional data ("decorations"). == Representation theory of semisimple Lie algebras == Let g {\displaystyle {\mathfrak {g}}} be a (finite-dimensional) semisimple Lie algebra over an algebraically closed field of characteristic zero. Then, as in #Structure, g = h ⊕ ⨁ α ∈ Φ g α {\textstyle {\mathfrak {g}}={\mathfrak {h}}\oplus \bigoplus _{\alpha \in \Phi }{\mathfrak {g}}_{\alpha }} where Φ {\displaystyle \Phi } is the root system. Choose the simple roots in Φ {\displaystyle \Phi } ; a root α {\displaystyle \alpha } of Φ {\displaystyle \Phi } is then called positive and is denoted by α > 0 {\displaystyle \alpha >0} if it is a linear combination of the simple roots with non-negative integer coefficients. Let b = h ⊕ ⨁ α > 0 g α {\textstyle {\mathfrak {b}}={\mathfrak {h}}\oplus \bigoplus _{\alpha >0}{\mathfrak {g}}_{\alpha }} , which is a maximal solvable subalgebra of g {\displaystyle {\mathfrak {g}}} , the Borel subalgebra. Let V be a (possibly-infinite-dimensional) simple g {\displaystyle {\mathfrak {g}}} -module. If V happens to admit a b {\displaystyle {\mathfrak {b}}} -weight vector v 0 {\displaystyle v_{0}} , then it is unique up to scaling and is called the highest weight vector of V. It is also an h {\displaystyle {\mathfrak {h}}} -weight vector and the h {\displaystyle {\mathfrak {h}}} -weight of v 0 {\displaystyle v_{0}} , a linear functional of h {\displaystyle {\mathfrak {h}}} , is called the highest weight of V. 
The basic yet nontrivial facts then are (1) to each linear functional μ ∈ h ∗ {\displaystyle \mu \in {\mathfrak {h}}^{*}} , there exists a simple g {\displaystyle {\mathfrak {g}}} -module V μ {\displaystyle V^{\mu }} having μ {\displaystyle \mu } as its highest weight and (2) two simple modules having the same highest weight are equivalent. In short, there exists a bijection between h ∗ {\displaystyle {\mathfrak {h}}^{*}} and the set of the equivalence classes of simple g {\displaystyle {\mathfrak {g}}} -modules admitting a Borel-weight vector. For applications, one is often interested in a finite-dimensional simple g {\displaystyle {\mathfrak {g}}} -module (a finite-dimensional irreducible representation). This is especially the case when g {\displaystyle {\mathfrak {g}}} is the Lie algebra of a Lie group (or complexification of such), since, via the Lie correspondence, a Lie algebra representation can be integrated to a Lie group representation when the obstructions are overcome. The next criterion then addresses this need: by the positive Weyl chamber C ⊂ h ∗ {\displaystyle C\subset {\mathfrak {h}}^{*}} , we mean the convex cone C = { μ ∈ h ∗ | μ ( h α ) ≥ 0 for all α > 0 } {\displaystyle C=\{\mu \in {\mathfrak {h}}^{*}|\mu (h_{\alpha })\geq 0{\text{ for all }}\alpha >0\}} where h α ∈ [ g α , g − α ] {\displaystyle h_{\alpha }\in [{\mathfrak {g}}_{\alpha },{\mathfrak {g}}_{-\alpha }]} is a unique vector such that α ( h α ) = 2 {\displaystyle \alpha (h_{\alpha })=2} . The criterion then reads: dim V μ < ∞ {\displaystyle \dim V^{\mu }<\infty } if and only if, for each positive root α > 0 {\displaystyle \alpha >0} , (1) μ ( h α ) {\displaystyle \mu (h_{\alpha })} is an integer and (2) μ {\displaystyle \mu } lies in C {\displaystyle C} . A linear functional μ {\displaystyle \mu } satisfying the above equivalent condition is called a dominant integral weight. 
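For type A_2 (i.e. sl_3), the dominant integral weights are exactly the pairs of non-negative integers (m_1, m_2) in fundamental-weight coordinates, and the Weyl character formula mentioned below specializes to a closed-form dimension. A minimal sketch (the function name is illustrative):

```python
def dim_sl3_irrep(m1, m2):
    """Dimension of the simple sl_3-module with highest weight m1*w1 + m2*w2
    (w_i the fundamental weights), via the Weyl dimension formula specialized
    to type A_2. Valid only for dominant integral weights, i.e. m1, m2 >= 0."""
    assert m1 >= 0 and m2 >= 0, "highest weight must be dominant integral"
    return (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) // 2

assert dim_sl3_irrep(1, 0) == 3  # defining representation of sl_3
assert dim_sl3_irrep(0, 1) == 3  # its dual
assert dim_sl3_irrep(1, 1) == 8  # adjoint representation
```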
Hence, in summary, there exists a bijection between the dominant integral weights and the equivalence classes of finite-dimensional simple g {\displaystyle {\mathfrak {g}}} -modules, the result known as the theorem of the highest weight. The character of a finite-dimensional simple module in turn is computed by the Weyl character formula. The theorem due to Weyl says that, over a field of characteristic zero, every finite-dimensional module of a semisimple Lie algebra g {\displaystyle {\mathfrak {g}}} is completely reducible; i.e., it is a direct sum of simple g {\displaystyle {\mathfrak {g}}} -modules. Hence, the above results then apply to finite-dimensional representations of a semisimple Lie algebra. == Real semisimple Lie algebra == For a semisimple Lie algebra over a field that has characteristic zero but is not algebraically closed, there is no general structure theory like the one for those over an algebraically closed field of characteristic zero. But over the field of real numbers, there are still structure results. Let g {\displaystyle {\mathfrak {g}}} be a finite-dimensional real semisimple Lie algebra and g C = g ⊗ R C {\displaystyle {\mathfrak {g}}^{\mathbb {C} }={\mathfrak {g}}\otimes _{\mathbb {R} }\mathbb {C} } its complexification (which is again semisimple). The real Lie algebra g {\displaystyle {\mathfrak {g}}} is called a real form of g C {\displaystyle {\mathfrak {g}}^{\mathbb {C} }} . A real form is called a compact form if the Killing form on it is negative-definite; it is necessarily the Lie algebra of a compact Lie group (hence, the name). === Compact case === Suppose g {\displaystyle {\mathfrak {g}}} is a compact form and h ⊂ g {\displaystyle {\mathfrak {h}}\subset {\mathfrak {g}}} a maximal abelian subspace. 
One can show (for example, from the fact g {\displaystyle {\mathfrak {g}}} is the Lie algebra of a compact Lie group) that ad ( h ) {\displaystyle \operatorname {ad} ({\mathfrak {h}})} consists of skew-Hermitian matrices, diagonalizable over C {\displaystyle \mathbb {C} } with imaginary eigenvalues. Hence, h C {\displaystyle {\mathfrak {h}}^{\mathbb {C} }} is a Cartan subalgebra of g C {\displaystyle {\mathfrak {g}}^{\mathbb {C} }} and one obtains the root space decomposition (cf. #Structure) g C = h C ⊕ ⨁ α ∈ Φ g α {\displaystyle {\mathfrak {g}}^{\mathbb {C} }={\mathfrak {h}}^{\mathbb {C} }\oplus \bigoplus _{\alpha \in \Phi }{\mathfrak {g}}_{\alpha }} where each α ∈ Φ {\displaystyle \alpha \in \Phi } is real-valued on i h {\displaystyle i{\mathfrak {h}}} ; thus, it can be identified with a real-linear functional on the real vector space i h {\displaystyle i{\mathfrak {h}}} . For example, let g = s u ( n ) {\displaystyle {\mathfrak {g}}={\mathfrak {su}}(n)} and take h ⊂ g {\displaystyle {\mathfrak {h}}\subset {\mathfrak {g}}} to be the subspace of all diagonal matrices. Note g C = s l n C {\displaystyle {\mathfrak {g}}^{\mathbb {C} }={\mathfrak {sl}}_{n}\mathbb {C} } . Let e i {\displaystyle e_{i}} be the linear functional on h C {\displaystyle {\mathfrak {h}}^{\mathbb {C} }} given by e i ( H ) = h i {\displaystyle e_{i}(H)=h_{i}} for H = diag ( h 1 , … , h n ) {\displaystyle H=\operatorname {diag} (h_{1},\dots ,h_{n})} . Then for each H ∈ h C {\displaystyle H\in {\mathfrak {h}}^{\mathbb {C} }} , [ H , E i j ] = ( e i ( H ) − e j ( H ) ) E i j {\displaystyle [H,E_{ij}]=(e_{i}(H)-e_{j}(H))E_{ij}} where E i j {\displaystyle E_{ij}} is the matrix that has 1 on the ( i , j ) {\displaystyle (i,j)} -th spot and zero elsewhere. Hence, each root α {\displaystyle \alpha } is of the form α = e i − e j , i ≠ j {\displaystyle \alpha =e_{i}-e_{j},i\neq j} and the root space decomposition is the decomposition of matrices: g C = h C ⊕ ⨁ i ≠ j C E i j . 
{\displaystyle {\mathfrak {g}}^{\mathbb {C} }={\mathfrak {h}}^{\mathbb {C} }\oplus \bigoplus _{i\neq j}\mathbb {C} E_{ij}.} === Noncompact case === Suppose g {\displaystyle {\mathfrak {g}}} is not necessarily a compact form (i.e., the signature of the Killing form is not all negative). Suppose, moreover, it has a Cartan involution θ {\displaystyle \theta } and let g = k ⊕ p {\displaystyle {\mathfrak {g}}={\mathfrak {k}}\oplus {\mathfrak {p}}} be the eigenspace decomposition of θ {\displaystyle \theta } , where k , p {\displaystyle {\mathfrak {k}},{\mathfrak {p}}} are the eigenspaces for 1 and -1, respectively. For example, if g = s l n R {\displaystyle {\mathfrak {g}}={\mathfrak {sl}}_{n}\mathbb {R} } and θ {\displaystyle \theta } the negative transpose, then k = s o ( n ) {\displaystyle {\mathfrak {k}}={\mathfrak {so}}(n)} . Let a ⊂ p {\displaystyle {\mathfrak {a}}\subset {\mathfrak {p}}} be a maximal abelian subspace. Now, ad ( p ) {\displaystyle \operatorname {ad} ({\mathfrak {p}})} consists of symmetric matrices (with respect to a suitable inner product) and thus the operators in ad ( a ) {\displaystyle \operatorname {ad} ({\mathfrak {a}})} are simultaneously diagonalizable, with real eigenvalues. By repeating the arguments for the algebraically closed base field, one obtains the decomposition (called the restricted root space decomposition): g = g 0 ⊕ ⨁ α ∈ Φ g α {\displaystyle {\mathfrak {g}}={\mathfrak {g}}_{0}\oplus \bigoplus _{\alpha \in \Phi }{\mathfrak {g}}_{\alpha }} where the elements in Φ {\displaystyle \Phi } are called the restricted roots, θ ( g α ) = g − α {\displaystyle \theta ({\mathfrak {g}}_{\alpha })={\mathfrak {g}}_{-\alpha }} for any linear functional α {\displaystyle \alpha } ; in particular, − Φ ⊂ Φ {\displaystyle -\Phi \subset \Phi } , g 0 = a ⊕ Z k ( a ) {\displaystyle {\mathfrak {g}}_{0}={\mathfrak {a}}\oplus Z_{\mathfrak {k}}({\mathfrak {a}})} . 
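The eigenspace decomposition g = k ⊕ p above can be checked numerically for the example g = sl(n, R) with θ(X) = −Xᵀ. A minimal sketch in pure Python (the sample matrix is arbitrary and chosen only for illustration): k consists of the skew-symmetric matrices, i.e. so(n), and p of the symmetric traceless matrices.

```python
# Sketch of the Cartan decomposition for sl(3, R) with θ(X) = -Xᵀ:
# splitting X into a skew-symmetric part (in k = so(3)) and a symmetric
# traceless part (in p), and verifying that θ fixes k and negates p.

def transpose(A):
    return [list(row) for row in zip(*A)]

def theta(X):
    return [[-x for x in row] for row in transpose(X)]

X = [[1.0, 2.0, 0.0],
     [5.0, -3.0, 1.0],
     [4.0, 0.0, 2.0]]          # trace 0, so X lies in sl(3, R)

k_part = [[(X[i][j] - X[j][i]) / 2 for j in range(3)] for i in range(3)]
p_part = [[(X[i][j] + X[j][i]) / 2 for j in range(3)] for i in range(3)]

assert theta(k_part) == k_part                                   # θ fixes k
assert theta(p_part) == [[-x for x in row] for row in p_part]    # θ = -1 on p
assert all(abs(k_part[i][j] + p_part[i][j] - X[i][j]) < 1e-12
           for i in range(3) for j in range(3))                  # X = k + p
```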
Moreover, Φ {\displaystyle \Phi } is a root system, but not necessarily a reduced one (i.e., it can happen that α , 2 α {\displaystyle \alpha ,2\alpha } are both roots). == The case of sl(n,C) == If g = s l ( n , C ) {\displaystyle {\mathfrak {g}}=\mathrm {sl} (n,\mathbb {C} )} , then h {\displaystyle {\mathfrak {h}}} may be taken to be the diagonal subalgebra of g {\displaystyle {\mathfrak {g}}} , consisting of diagonal matrices whose diagonal entries sum to zero. Since h {\displaystyle {\mathfrak {h}}} has dimension n − 1 {\displaystyle n-1} , we see that s l ( n ; C ) {\displaystyle \mathrm {sl} (n;\mathbb {C} )} has rank n − 1 {\displaystyle n-1} . The root vectors X {\displaystyle X} in this case may be taken to be the matrices E i , j {\displaystyle E_{i,j}} with i ≠ j {\displaystyle i\neq j} , where E i , j {\displaystyle E_{i,j}} is the matrix with a 1 in the ( i , j ) {\displaystyle (i,j)} spot and zeros elsewhere. If H {\displaystyle H} is a diagonal matrix with diagonal entries λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} , then we have [ H , E i , j ] = ( λ i − λ j ) E i , j {\displaystyle [H,E_{i,j}]=(\lambda _{i}-\lambda _{j})E_{i,j}} . Thus, the roots for s l ( n , C ) {\displaystyle \mathrm {sl} (n,\mathbb {C} )} are the linear functionals α i , j {\displaystyle \alpha _{i,j}} given by α i , j ( H ) = λ i − λ j {\displaystyle \alpha _{i,j}(H)=\lambda _{i}-\lambda _{j}} . After identifying h {\displaystyle {\mathfrak {h}}} with its dual, the roots become the vectors α i , j := e i − e j {\displaystyle \alpha _{i,j}:=e_{i}-e_{j}} in the space of n {\displaystyle n} -tuples that sum to zero. This is the root system known as A n − 1 {\displaystyle A_{n-1}} in the conventional labeling. The reflection associated to the root α i , j {\displaystyle \alpha _{i,j}} acts on h {\displaystyle {\mathfrak {h}}} by transposing the i {\displaystyle i} and j {\displaystyle j} diagonal entries.
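The bracket computation [H, E_{i,j}] = (λ_i − λ_j) E_{i,j} above can be verified directly; a minimal sketch in pure Python (the particular diagonal entries are an arbitrary choice for illustration):

```python
# Verify [H, E_ij] = (λ_i - λ_j) E_ij for sl(3, C), with H = diag(λ_1, λ_2, λ_3)
# traceless and E_ij the matrix units.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(AB, BA)]

n = 3
lam = [2, 0, -2]               # diagonal entries summing to zero
H = [[lam[i] if i == j else 0 for j in range(n)] for i in range(n)]

for i in range(n):
    for j in range(n):
        if i == j:
            continue
        E = [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]
        expected = [[(lam[i] - lam[j]) * x for x in row] for row in E]
        assert bracket(H, E) == expected
```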
The Weyl group is then just the permutation group on n {\displaystyle n} elements, acting by permuting the diagonal entries of matrices in h {\displaystyle {\mathfrak {h}}} . == Generalizations == Semisimple Lie algebras admit certain generalizations. Firstly, many statements that are true for semisimple Lie algebras are true more generally for reductive Lie algebras. Abstractly, a reductive Lie algebra is one whose adjoint representation is completely reducible, while concretely, a reductive Lie algebra is a direct sum of a semisimple Lie algebra and an abelian Lie algebra; for example, s l n {\displaystyle {\mathfrak {sl}}_{n}} is semisimple, and g l n {\displaystyle {\mathfrak {gl}}_{n}} is reductive. Many properties of semisimple Lie algebras depend only on reducibility. Many properties of complex semisimple/reductive Lie algebras are true not only for semisimple/reductive Lie algebras over algebraically closed fields, but more generally for split semisimple/reductive Lie algebras over other fields: semisimple/reductive Lie algebras over algebraically closed fields are always split, but over other fields this is not always the case. Split Lie algebras have essentially the same representation theory as semisimple Lie algebras over algebraically closed fields, for instance, the splitting Cartan subalgebra playing the same role as the Cartan subalgebra plays over algebraically closed fields. This is the approach followed in (Bourbaki 2005), for instance, which classifies representations of split semisimple/reductive Lie algebras. == Semisimple and reductive groups == A connected Lie group is called semisimple if its Lie algebra is a semisimple Lie algebra, i.e. a direct sum of simple Lie algebras. It is called reductive if its Lie algebra is a direct sum of simple and trivial (one-dimensional) Lie algebras. Reductive groups occur naturally as symmetries of a number of mathematical objects in algebra, geometry, and physics. 
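The reductive example mentioned above can be made concrete: gl(n) splits as the direct sum of the semisimple part sl(n) (traceless matrices) and its one-dimensional center (scalar matrices). A minimal sketch (pure Python, sample matrix chosen only for illustration):

```python
# Exhibit the splitting gl(n) = sl(n) ⊕ (scalar matrices): subtract off
# (trace/n)·I to get the traceless part, keeping the scalar part as center.

def split_gl(X):
    n = len(X)
    t = sum(X[i][i] for i in range(n)) / n            # trace / n
    center = [[t if i == j else 0.0 for j in range(n)] for i in range(n)]
    traceless = [[X[i][j] - center[i][j] for j in range(n)] for i in range(n)]
    return traceless, center

X = [[4.0, 1.0],
     [2.0, -2.0]]
s, z = split_gl(X)
assert abs(s[0][0] + s[1][1]) < 1e-12                 # s is traceless: s in sl(2)
assert z[0][1] == 0.0 and z[0][0] == z[1][1]          # z is a scalar matrix
assert all(abs(s[i][j] + z[i][j] - X[i][j]) < 1e-12
           for i in range(2) for j in range(2))       # X = s + z
```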
For example, the group G L n ( R ) {\displaystyle GL_{n}(\mathbb {R} )} of symmetries of an n-dimensional real vector space (equivalently, the group of invertible matrices) is reductive. == See also == Lie algebra Root system Lie algebra representation Compact group Simple Lie group Borel subalgebra Jacobson–Morozov theorem == References ==
Wikipedia:Sesquilinear form#0
In mathematics, a sesquilinear form is a generalization of a bilinear form that, in turn, is a generalization of the concept of the dot product of Euclidean space. A bilinear form is linear in each of its arguments, but a sesquilinear form allows one of the arguments to be "twisted" in a semilinear manner, thus the name, which originates from the Latin numerical prefix sesqui- meaning "one and a half". The basic concept of the dot product – producing a scalar from a pair of vectors – can be generalized by allowing a broader range of scalar values and, perhaps simultaneously, by widening the definition of a vector. A motivating special case is a sesquilinear form on a complex vector space, V. This is a map V × V → C that is linear in one argument and "twists" the linearity of the other argument by complex conjugation (referred to as being antilinear in the other argument). This case arises naturally in mathematical physics applications. Another important case allows the scalars to come from any field and the twist is provided by a field automorphism. An application in projective geometry requires that the scalars come from a division ring (skew field), K, and this means that the "vectors" should be replaced by elements of a K-module. In a very general setting, sesquilinear forms can be defined over R-modules for arbitrary rings R. == Informal introduction == Sesquilinear forms abstract and generalize the basic notion of a Hermitian form on a complex vector space. Hermitian forms are commonly seen in physics, as the inner product on a complex Hilbert space. In such cases, the standard Hermitian form on Cn is given by ⟨ w , z ⟩ = ∑ i = 1 n w ¯ i z i . {\displaystyle \langle w,z\rangle =\sum _{i=1}^{n}{\overline {w}}_{i}z_{i}.} where w ¯ i {\displaystyle {\overline {w}}_{i}} denotes the complex conjugate of w i . {\displaystyle w_{i}~.} This product may be generalized to situations where one is not working with an orthonormal basis for Cn, or even any basis at all.
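The standard Hermitian form above is easy to probe numerically; a minimal sketch in Python (sample vectors and scalars are arbitrary), checking conjugate-linearity in the first argument, linearity in the second, and Hermitian symmetry:

```python
# The standard Hermitian form <w, z> = sum conj(w_i) z_i on C^n,
# with the physics convention: antilinear in the first slot, linear in
# the second.

def herm(w, z):
    return sum(wi.conjugate() * zi for wi, zi in zip(w, z))

w = [1 + 2j, 3 - 1j]
z = [2 - 1j, 1j]
a, b = 2 + 1j, 1 - 3j

# <a·w, b·z> = conj(a) · b · <w, z>
lhs = herm([a * wi for wi in w], [b * zi for zi in z])
assert abs(lhs - a.conjugate() * b * herm(w, z)) < 1e-12

# Hermitian symmetry: <w, z> = conj(<z, w>), hence <z, z> is real
assert abs(herm(w, z) - herm(z, w).conjugate()) < 1e-12
assert abs(herm(z, z).imag) < 1e-12
```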
By inserting an extra factor of i {\displaystyle i} into the product, one obtains the skew-Hermitian form, defined more precisely, below. There is no particular reason to restrict the definition to the complex numbers; it can be defined for arbitrary rings carrying an antiautomorphism, informally understood to be a generalized concept of "complex conjugation" for the ring. == Convention == Conventions differ as to which argument should be linear. In the commutative case, we shall take the first to be linear, as is common in the mathematical literature, except in the section devoted to sesquilinear forms on complex vector spaces. There we use the other convention and take the first argument to be conjugate-linear (i.e. antilinear) and the second to be linear. This is the convention used mostly by physicists and originates in Dirac's bra–ket notation in quantum mechanics. It is also consistent with the definition of the usual (Euclidean) product of w , z ∈ C n {\displaystyle w,z\in \mathbb {C} ^{n}} as w ∗ z {\displaystyle w^{*}z} . In the more general noncommutative setting, with right modules we take the second argument to be linear and with left modules we take the first argument to be linear. == Complex vector spaces == Assumption: In this section, sesquilinear forms are antilinear in their first argument and linear in their second. Over a complex vector space V {\displaystyle V} a map φ : V × V → C {\displaystyle \varphi :V\times V\to \mathbb {C} } is sesquilinear if φ ( x + y , z + w ) = φ ( x , z ) + φ ( x , w ) + φ ( y , z ) + φ ( y , w ) φ ( a x , b y ) = a ¯ b φ ( x , y ) {\displaystyle {\begin{aligned}&\varphi (x+y,z+w)=\varphi (x,z)+\varphi (x,w)+\varphi (y,z)+\varphi (y,w)\\&\varphi (ax,by)={\overline {a}}b\,\varphi (x,y)\end{aligned}}} for all x , y , z , w ∈ V {\displaystyle x,y,z,w\in V} and all a , b ∈ C . {\displaystyle a,b\in \mathbb {C} .} Here, a ¯ {\displaystyle {\overline {a}}} is the complex conjugate of a scalar a . 
{\displaystyle a.} A complex sesquilinear form can also be viewed as a complex bilinear map V ¯ × V → C {\displaystyle {\overline {V}}\times V\to \mathbb {C} } where V ¯ {\displaystyle {\overline {V}}} is the complex conjugate vector space to V . {\displaystyle V.} By the universal property of tensor products these are in one-to-one correspondence with complex linear maps V ¯ ⊗ V → C . {\displaystyle {\overline {V}}\otimes V\to \mathbb {C} .} For a fixed z ∈ V {\displaystyle z\in V} the map w ↦ φ ( z , w ) {\displaystyle w\mapsto \varphi (z,w)} is a linear functional on V {\displaystyle V} (i.e. an element of the dual space V ∗ {\displaystyle V^{*}} ). Likewise, the map w ↦ φ ( w , z ) {\displaystyle w\mapsto \varphi (w,z)} is a conjugate-linear functional on V . {\displaystyle V.} Given any complex sesquilinear form φ {\displaystyle \varphi } on V {\displaystyle V} we can define a second complex sesquilinear form ψ {\displaystyle \psi } via the conjugate transpose: ψ ( w , z ) = φ ( z , w ) ¯ . {\displaystyle \psi (w,z)={\overline {\varphi (z,w)}}.} In general, ψ {\displaystyle \psi } and φ {\displaystyle \varphi } will be different. If they are the same then φ {\displaystyle \varphi } is said to be Hermitian. If they are negatives of one another, then φ {\displaystyle \varphi } is said to be skew-Hermitian. Every sesquilinear form can be written as a sum of a Hermitian form and a skew-Hermitian form. === Matrix representation === If V {\displaystyle V} is a finite-dimensional complex vector space, then relative to any basis { e i } i {\displaystyle \left\{e_{i}\right\}_{i}} of V , {\displaystyle V,} a sesquilinear form is represented by a matrix A , {\displaystyle A,} and given by φ ( w , z ) = φ ( ∑ i w i e i , ∑ j z j e j ) = ∑ i ∑ j w i ¯ z j φ ( e i , e j ) = w † A z . 
{\displaystyle \varphi (w,z)=\varphi \left(\sum _{i}w_{i}e_{i},\sum _{j}z_{j}e_{j}\right)=\sum _{i}\sum _{j}{\overline {w_{i}}}z_{j}\varphi \left(e_{i},e_{j}\right)=w^{\dagger }Az.} where w † {\displaystyle w^{\dagger }} is the conjugate transpose. The components of the matrix A {\displaystyle A} are given by A i j := φ ( e i , e j ) . {\displaystyle A_{ij}:=\varphi \left(e_{i},e_{j}\right).} === Hermitian form === The term Hermitian form may also refer to a different concept than that explained below: it may refer to a certain differential form on a Hermitian manifold. A complex Hermitian form (also called a symmetric sesquilinear form), is a sesquilinear form h : V × V → C {\displaystyle h:V\times V\to \mathbb {C} } such that h ( w , z ) = h ( z , w ) ¯ . {\displaystyle h(w,z)={\overline {h(z,w)}}.} The standard Hermitian form on C n {\displaystyle \mathbb {C} ^{n}} is given (again, using the "physics" convention of linearity in the second and conjugate linearity in the first variable) by ⟨ w , z ⟩ = ∑ i = 1 n w ¯ i z i . {\displaystyle \langle w,z\rangle =\sum _{i=1}^{n}{\overline {w}}_{i}z_{i}.} More generally, the inner product on any complex Hilbert space is a Hermitian form. A minus sign is introduced in the Hermitian form w w ∗ − z z ∗ {\displaystyle ww^{*}-zz^{*}} to define the group SU(1,1). A vector space with a Hermitian form ( V , h ) {\displaystyle (V,h)} is called a Hermitian space. The matrix representation of a complex Hermitian form is a Hermitian matrix. A complex Hermitian form applied to a single vector | z | h = h ( z , z ) {\displaystyle |z|_{h}=h(z,z)} is always a real number. One can show that a complex sesquilinear form is Hermitian if and only if the associated quadratic form is real for all z ∈ V . 
{\displaystyle z\in V.} === Skew-Hermitian form === A complex skew-Hermitian form (also called an antisymmetric sesquilinear form), is a complex sesquilinear form s : V × V → C {\displaystyle s:V\times V\to \mathbb {C} } such that s ( w , z ) = − s ( z , w ) ¯ . {\displaystyle s(w,z)=-{\overline {s(z,w)}}.} Every complex skew-Hermitian form can be written as the imaginary unit i := − 1 {\displaystyle i:={\sqrt {-1}}} times a Hermitian form. The matrix representation of a complex skew-Hermitian form is a skew-Hermitian matrix. A complex skew-Hermitian form applied to a single vector | z | s = s ( z , z ) {\displaystyle |z|_{s}=s(z,z)} is always a purely imaginary number. == Over a division ring == This section applies unchanged when the division ring K is commutative. More specific terminology then also applies: the division ring is a field, the anti-automorphism is also an automorphism, and the right module is a vector space. The following applies to a left module with suitable reordering of expressions. === Definition === A σ-sesquilinear form over a right K-module M is a bi-additive map φ : M × M → K with an associated anti-automorphism σ of a division ring K such that, for all x, y in M and all α, β in K, φ ( x α , y β ) = σ ( α ) φ ( x , y ) β . {\displaystyle \varphi (x\alpha ,y\beta )=\sigma (\alpha )\,\varphi (x,y)\,\beta .} The associated anti-automorphism σ for any nonzero sesquilinear form φ is uniquely determined by φ. === Orthogonality === Given a sesquilinear form φ over a module M and a subspace (submodule) W of M, the orthogonal complement of W with respect to φ is W ⊥ = { v ∈ M ∣ φ ( v , w ) = 0 , ∀ w ∈ W } . {\displaystyle W^{\perp }=\{\mathbf {v} \in M\mid \varphi (\mathbf {v} ,\mathbf {w} )=0,\ \forall \mathbf {w} \in W\}.} Similarly, x ∈ M is orthogonal to y ∈ M with respect to φ, written x ⊥φ y (or simply x ⊥ y if φ can be inferred from the context), when φ(x, y) = 0. This relation need not be symmetric, i.e. 
x ⊥ y does not imply y ⊥ x (but see § Reflexivity below). === Reflexivity === A sesquilinear form φ is reflexive if, for all x, y in M, φ ( x , y ) = 0 {\displaystyle \varphi (x,y)=0} implies φ ( y , x ) = 0. {\displaystyle \varphi (y,x)=0.} That is, a sesquilinear form is reflexive precisely when the derived orthogonality relation is symmetric. === Hermitian variations === A σ-sesquilinear form φ is called (σ, ε)-Hermitian if there exists ε in K such that, for all x, y in M, φ ( x , y ) = σ ( φ ( y , x ) ) ε . {\displaystyle \varphi (x,y)=\sigma (\varphi (y,x))\,\varepsilon .} If ε = 1, the form is called σ-Hermitian, and if ε = −1, it is called σ-anti-Hermitian. (When σ is implied, respectively simply Hermitian or anti-Hermitian.) For a nonzero (σ, ε)-Hermitian form, it follows that for all α in K, σ ( ε ) = ε − 1 {\displaystyle \sigma (\varepsilon )=\varepsilon ^{-1}} σ ( σ ( α ) ) = ε α ε − 1 . {\displaystyle \sigma (\sigma (\alpha ))=\varepsilon \alpha \varepsilon ^{-1}.} It also follows that φ(x, x) is a fixed point of the map α ↦ σ(α)ε. The fixed points of this map form a subgroup of the additive group of K. A (σ, ε)-Hermitian form is reflexive, and every reflexive σ-sesquilinear form is (σ, ε)-Hermitian for some ε. In the special case that σ is the identity map (i.e., σ = id), K is commutative, φ is a bilinear form and ε2 = 1. Then for ε = 1 the bilinear form is called symmetric, and for ε = −1 is called skew-symmetric. === Example === Let V be the three dimensional vector space over the finite field F = GF(q2), where q is a prime power. With respect to the standard basis we can write x = (x1, x2, x3) and y = (y1, y2, y3) and define the map φ by: φ ( x , y ) = x 1 y 1 q + x 2 y 2 q + x 3 y 3 q . {\displaystyle \varphi (x,y)=x_{1}y_{1}{}^{q}+x_{2}y_{2}{}^{q}+x_{3}y_{3}{}^{q}.} The map σ : t ↦ tq is an involutory automorphism of F. The map φ is then a σ-sesquilinear form. The matrix Mφ associated to this form is the identity matrix. This is a Hermitian form. 
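The example above can be sketched computationally for q = 3, modelling F = GF(9) as F₃[u]/(u² + 1) (this concrete model and the sample vectors are assumptions made only for illustration; u² + 1 is irreducible over F₃). Note that for the form φ(x, y) = Σ xᵢ yᵢ^q as written, the twist σ(t) = t^q lands on the second argument: φ(αx, βy) = α σ(β) φ(x, y).

```python
# A small model of GF(9) = F_3[u]/(u^2 + 1), elements stored as pairs
# (a, b) meaning a + b·u. sigma(t) = t^3 is the Frobenius, an involutory
# automorphism since t^9 = t for every element of GF(9).

P = 3  # q = 3

def add(s, t):
    return ((s[0] + t[0]) % P, (s[1] + t[1]) % P)

def mul(s, t):
    a, b = s
    c, d = t
    return ((a * c - b * d) % P, (a * d + b * c) % P)   # uses u^2 = -1

def power(t, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, t)
    return r

def sigma(t):
    return power(t, P)                                  # Frobenius t -> t^q

ELEMS = [(a, b) for a in range(P) for b in range(P)]
assert all(power(t, P * P) == t for t in ELEMS)         # t^9 = t, so sigma is involutory

def phi(x, y):
    r = (0, 0)
    for xi, yi in zip(x, y):
        r = add(r, mul(xi, sigma(yi)))                  # x_i · y_i^q
    return r

x = [(1, 2), (0, 1), (2, 2)]
y = [(2, 1), (1, 1), (0, 2)]
alpha, beta = (1, 1), (2, 0)
lhs = phi([mul(alpha, xi) for xi in x], [mul(beta, yi) for yi in y])
rhs = mul(mul(alpha, sigma(beta)), phi(x, y))
assert lhs == rhs                                       # twisted homogeneity
```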
== In projective geometry == Assumption: In this section, sesquilinear forms are antilinear (resp. linear) in their second (resp. first) argument. In a projective geometry G, a permutation δ of the subspaces that inverts inclusion, i.e. S ⊆ T ⇒ Tδ ⊆ Sδ for all subspaces S, T of G, is called a correlation. A result of Birkhoff and von Neumann (1936) shows that the correlations of desarguesian projective geometries correspond to the nondegenerate sesquilinear forms on the underlying vector space. A sesquilinear form φ is nondegenerate if φ(x, y) = 0 for all y in V (if and) only if x = 0. To achieve full generality of this statement, and since every desarguesian projective geometry may be coordinatized by a division ring, Reinhold Baer extended the definition of a sesquilinear form to a division ring, which requires replacing vector spaces by R-modules. (In the geometric literature these are still referred to as either left or right vector spaces over skewfields.) == Over arbitrary rings == The specialization of the above section to skewfields was a consequence of the application to projective geometry, and not intrinsic to the nature of sesquilinear forms. Only the minor modifications needed to take into account the non-commutativity of multiplication are required to generalize the arbitrary field version of the definition to arbitrary rings. Let R be a ring, V an R-module and σ an antiautomorphism of R. A map φ : V × V → R is σ-sesquilinear if φ ( x + y , z + w ) = φ ( x , z ) + φ ( x , w ) + φ ( y , z ) + φ ( y , w ) {\displaystyle \varphi (x+y,z+w)=\varphi (x,z)+\varphi (x,w)+\varphi (y,z)+\varphi (y,w)} φ ( c x , d y ) = c φ ( x , y ) σ ( d ) {\displaystyle \varphi (cx,dy)=c\,\varphi (x,y)\,\sigma (d)} for all x, y, z, w in V and all c, d in R. An element x is orthogonal to another element y with respect to the sesquilinear form φ (written x ⊥ y) if φ(x, y) = 0. This relation need not be symmetric, i.e. x ⊥ y does not imply y ⊥ x. 
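The σ-sesquilinear property over a ring can be instantiated with the commutative ring R = C and σ = complex conjugation; a minimal sketch (sample vectors and scalars are arbitrary choices), checking φ(c·x, d·y) = c · φ(x, y) · σ(d):

```python
# The arbitrary-ring definition with R = C, sigma = complex conjugation,
# and phi(x, y) = sum x_i · sigma(y_i): linear in the first slot, twisted
# in the second.

def sigma(t):
    return t.conjugate()

def phi(x, y):
    return sum(xi * sigma(yi) for xi, yi in zip(x, y))

x = [1 + 1j, 2 - 1j]
y = [3j, 1 - 2j]
c, d = 2 - 3j, 1 + 1j

lhs = phi([c * xi for xi in x], [d * yi for yi in y])
rhs = c * phi(x, y) * sigma(d)
assert abs(lhs - rhs) < 1e-12

# for this particular phi the orthogonality relation happens to be
# symmetric, since phi(y, x) = conj(phi(x, y)):
u, v = [1, 1j], [1j, 1]
assert abs(phi(u, v)) < 1e-12 and abs(phi(v, u)) < 1e-12
```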
A sesquilinear form φ : V × V → R is reflexive (or orthosymmetric) if φ(x, y) = 0 implies φ(y, x) = 0 for all x, y in V. A sesquilinear form φ : V × V → R is Hermitian if there exists σ such that φ ( x , y ) = σ ( φ ( y , x ) ) {\displaystyle \varphi (x,y)=\sigma (\varphi (y,x))} for all x, y in V. A Hermitian form is necessarily reflexive, and if it is nonzero, the associated antiautomorphism σ is an involution (i.e. of order 2). Since for an antiautomorphism σ we have σ(st) = σ(t)σ(s) for all s, t in R, if σ = id, then R must be commutative and φ is a bilinear form. In particular, if, in this case, R is a skewfield, then R is a field and V is a vector space with a bilinear form. An antiautomorphism σ : R → R can also be viewed as an isomorphism R → Rop, where Rop is the opposite ring of R, which has the same underlying set and the same addition, but whose multiplication operation (∗) is defined by a ∗ b = ba, where the product on the right is the product in R. It follows from this that a right (left) R-module V can be turned into a left (right) Rop-module, Vo. Thus, the sesquilinear form φ : V × V → R can be viewed as a bilinear form φ′ : V × Vo → R. == See also == *-ring == Notes == == References == Dembowski, Peter (1968), Finite geometries, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 44, Berlin, New York: Springer-Verlag, ISBN 3-540-61786-8, MR 0233275 Gruenberg, K.W.; Weir, A.J. (1977), Linear Geometry (2nd ed.), Springer, ISBN 0-387-90227-9 Jacobson, Nathan J. (2009) [1985], Basic Algebra I (2nd ed.), Dover, ISBN 978-0-486-47189-1 == External links == "Sesquilinear form", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia:Set function#0
In mathematics, especially measure theory, a set function is a function whose domain is a family of subsets of some given set and that (usually) takes its values in the extended real number line R ∪ { ± ∞ } , {\displaystyle \mathbb {R} \cup \{\pm \infty \},} which consists of the real numbers R {\displaystyle \mathbb {R} } and ± ∞ . {\displaystyle \pm \infty .} A set function generally aims to measure subsets in some way. Measures are typical examples of "measuring" set functions. Therefore, the term "set function" is often used to avoid confusion between the mathematical meaning of "measure" and its common language meaning. == Definitions == If F {\displaystyle {\mathcal {F}}} is a family of sets over Ω {\displaystyle \Omega } (meaning that F ⊆ ℘ ( Ω ) {\displaystyle {\mathcal {F}}\subseteq \wp (\Omega )} where ℘ ( Ω ) {\displaystyle \wp (\Omega )} denotes the powerset) then a set function on F {\displaystyle {\mathcal {F}}} is a function μ {\displaystyle \mu } with domain F {\displaystyle {\mathcal {F}}} and codomain [ − ∞ , ∞ ] {\displaystyle [-\infty ,\infty ]} or, sometimes, the codomain is instead some vector space, as with vector measures, complex measures, and projection-valued measures. The domain of a set function may have any number of properties; the commonly encountered properties and categories of families are listed in the table below. In general, it is typically assumed that μ ( E ) + μ ( F ) {\displaystyle \mu (E)+\mu (F)} is always well-defined for all E , F ∈ F , {\displaystyle E,F\in {\mathcal {F}},} or equivalently, that μ {\displaystyle \mu } does not take on both − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } as values. This article will henceforth assume this, although alternatively, all definitions below could instead be qualified by statements such as "whenever the sum/series is defined". 
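As a concrete instance of the definitions, a minimal sketch in Python of the simplest set function, counting measure μ(E) = |E| on the power set of a finite Ω; being non-negative, it never takes both −∞ and +∞, so μ(E) + μ(F) is always well-defined:

```python
# Counting measure as a set function: domain = the power set of a finite
# Omega, values = nonnegative integers (a subset of [0, +inf]).

from itertools import chain, combinations

OMEGA = frozenset({1, 2, 3})

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

FAMILY = powerset(OMEGA)       # the domain family: here all of the power set of Omega

def mu(E):
    return len(E)              # counting measure

assert mu(frozenset()) == 0                    # null empty set
E, G = frozenset({1}), frozenset({2, 3})
assert mu(E | G) == mu(E) + mu(G)              # additive on disjoint sets
assert all(mu(S) >= 0 for S in FAMILY)         # non-negative, so sums are defined
```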
This is sometimes done with subtraction, such as with the following result, which holds whenever μ {\displaystyle \mu } is finitely additive: Set difference formula: μ ( F ) − μ ( E ) = μ ( F ∖ E ) whenever μ ( F ) − μ ( E ) {\displaystyle \mu (F)-\mu (E)=\mu (F\setminus E){\text{ whenever }}\mu (F)-\mu (E)} is defined with E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} satisfying E ⊆ F {\displaystyle E\subseteq F} and F ∖ E ∈ F . {\displaystyle F\setminus E\in {\mathcal {F}}.} Null sets A set F ∈ F {\displaystyle F\in {\mathcal {F}}} is called a null set (with respect to μ {\displaystyle \mu } ) or simply null if μ ( F ) = 0. {\displaystyle \mu (F)=0.} Whenever μ {\displaystyle \mu } is not identically equal to either − ∞ {\displaystyle -\infty } or + ∞ {\displaystyle +\infty } then it is typically also assumed that: null empty set: μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} if ∅ ∈ F . {\displaystyle \varnothing \in {\mathcal {F}}.} Variation and mass The total variation of a set S {\displaystyle S} is | μ | ( S ) = def sup { | μ ( F ) | : F ∈ F and F ⊆ S } {\displaystyle |\mu |(S)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\sup\{|\mu (F)|:F\in {\mathcal {F}}{\text{ and }}F\subseteq S\}} where | ⋅ | {\displaystyle |\,\cdot \,|} denotes the absolute value (or more generally, it denotes the norm or seminorm if μ {\displaystyle \mu } is vector-valued in a (semi)normed space). Assuming that ∪ F = def ⋃ F ∈ F F ∈ F , {\displaystyle \cup {\mathcal {F}}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\textstyle \bigcup \limits _{F\in {\mathcal {F}}}F\in {\mathcal {F}},} then | μ | ( ∪ F ) {\displaystyle |\mu |\left(\cup {\mathcal {F}}\right)} is called the total variation of μ {\displaystyle \mu } and μ ( ∪ F ) {\displaystyle \mu \left(\cup {\mathcal {F}}\right)} is called the mass of μ . 
{\displaystyle \mu .} A set function is called finite if for every F ∈ F , {\displaystyle F\in {\mathcal {F}},} the value μ ( F ) {\displaystyle \mu (F)} is finite (which by definition means that μ ( F ) ≠ ∞ {\displaystyle \mu (F)\neq \infty } and μ ( F ) ≠ − ∞ {\displaystyle \mu (F)\neq -\infty } ; an infinite value is one that is equal to ∞ {\displaystyle \infty } or − ∞ {\displaystyle -\infty } ). Every finite set function must have a finite mass. === Common properties of set functions === A set function μ {\displaystyle \mu } on F {\displaystyle {\mathcal {F}}} is said to be non-negative if it is valued in [ 0 , ∞ ] . {\displaystyle [0,\infty ].} finitely additive if ∑ i = 1 n μ ( F i ) = μ ( ⋃ i = 1 n F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{n}\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcup \limits _{i=1}^{n}F_{i}\right)} for all pairwise disjoint finite sequences F 1 , … , F n ∈ F {\displaystyle F_{1},\ldots ,F_{n}\in {\mathcal {F}}} such that ⋃ i = 1 n F i ∈ F . {\displaystyle \textstyle \bigcup \limits _{i=1}^{n}F_{i}\in {\mathcal {F}}.} If F {\displaystyle {\mathcal {F}}} is closed under binary unions then μ {\displaystyle \mu } is finitely additive if and only if μ ( E ∪ F ) = μ ( E ) + μ ( F ) {\displaystyle \mu (E\cup F)=\mu (E)+\mu (F)} for all disjoint pairs E , F ∈ F . 
{\displaystyle E,F\in {\mathcal {F}}.} If μ {\displaystyle \mu } is finitely additive and if ∅ ∈ F {\displaystyle \varnothing \in {\mathcal {F}}} then taking E := F := ∅ {\displaystyle E:=F:=\varnothing } shows that μ ( ∅ ) = μ ( ∅ ) + μ ( ∅ ) {\displaystyle \mu (\varnothing )=\mu (\varnothing )+\mu (\varnothing )} which is only possible if μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} or μ ( ∅ ) = ± ∞ , {\displaystyle \mu (\varnothing )=\pm \infty ,} where in the latter case, μ ( E ) = μ ( E ∪ ∅ ) = μ ( E ) + μ ( ∅ ) = μ ( E ) + ( ± ∞ ) = ± ∞ {\displaystyle \mu (E)=\mu (E\cup \varnothing )=\mu (E)+\mu (\varnothing )=\mu (E)+(\pm \infty )=\pm \infty } for every E ∈ F {\displaystyle E\in {\mathcal {F}}} (so only the case μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} is useful). countably additive or σ-additive if in addition to being finitely additive, for all pairwise disjoint sequences F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots \,} in F {\displaystyle {\mathcal {F}}} such that ⋃ i = 1 ∞ F i ∈ F , {\displaystyle \textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\in {\mathcal {F}},} all of the following hold: ∑ i = 1 ∞ μ ( F i ) = μ ( ⋃ i = 1 ∞ F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)} The series on the left hand side is defined in the usual way as the limit ∑ i = 1 ∞ μ ( F i ) = def lim n → ∞ μ ( F 1 ) + ⋯ + μ ( F n ) . 
{\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\displaystyle \lim _{n\to \infty }}\mu \left(F_{1}\right)+\cdots +\mu \left(F_{n}\right).} As a consequence, if ρ : N → N {\displaystyle \rho :\mathbb {N} \to \mathbb {N} } is any permutation/bijection then ∑ i = 1 ∞ μ ( F i ) = ∑ i = 1 ∞ μ ( F ρ ( i ) ) ; {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)=\textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{\rho (i)}\right);} this is because ⋃ i = 1 ∞ F i = ⋃ i = 1 ∞ F ρ ( i ) {\displaystyle \textstyle \bigcup \limits _{i=1}^{\infty }F_{i}=\textstyle \bigcup \limits _{i=1}^{\infty }F_{\rho (i)}} and applying this condition (a) twice guarantees that both ∑ i = 1 ∞ μ ( F i ) = μ ( ⋃ i = 1 ∞ F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)} and μ ( ⋃ i = 1 ∞ F ρ ( i ) ) = ∑ i = 1 ∞ μ ( F ρ ( i ) ) {\displaystyle \mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{\rho (i)}\right)=\textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{\rho (i)}\right)} hold. By definition, a convergent series with this property is said to be unconditionally convergent. Stated in plain English, this means that rearranging/relabeling the sets F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots } to the new order F ρ ( 1 ) , F ρ ( 2 ) , … {\displaystyle F_{\rho (1)},F_{\rho (2)},\ldots } does not affect the sum of their measures. This is desirable since just as the union F = def ⋃ i ∈ N F i {\displaystyle F~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\textstyle \bigcup \limits _{i\in \mathbb {N} }F_{i}} does not depend on the order of these sets, the same should be true of the sums μ ( F ) = μ ( F 1 ) + μ ( F 2 ) + ⋯ {\displaystyle \mu (F)=\mu \left(F_{1}\right)+\mu \left(F_{2}\right)+\cdots } and μ ( F ) = μ ( F ρ ( 1 ) ) + μ ( F ρ ( 2 ) ) + ⋯ . 
{\displaystyle \mu (F)=\mu \left(F_{\rho (1)}\right)+\mu \left(F_{\rho (2)}\right)+\cdots \,.} if μ ( ⋃ i = 1 ∞ F i ) {\displaystyle \mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)} is not infinite then this series ∑ i = 1 ∞ μ ( F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)} must also converge absolutely, which by definition means that ∑ i = 1 ∞ | μ ( F i ) | {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\left|\mu \left(F_{i}\right)\right|} must be finite. This is automatically true if μ {\displaystyle \mu } is non-negative (or even just valued in the extended real numbers). As with any convergent series of real numbers, by the Riemann series theorem, the series ∑ i = 1 ∞ μ ( F i ) = lim N → ∞ μ ( F 1 ) + μ ( F 2 ) + ⋯ + μ ( F N ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)={\displaystyle \lim _{N\to \infty }}\mu \left(F_{1}\right)+\mu \left(F_{2}\right)+\cdots +\mu \left(F_{N}\right)} converges absolutely if and only if its sum does not depend on the order of its terms (a property known as unconditional convergence). Since unconditional convergence is guaranteed by (a) above, this condition is automatically true if μ {\displaystyle \mu } is valued in [ − ∞ , ∞ ] . {\displaystyle [-\infty ,\infty ].} if μ ( ⋃ i = 1 ∞ F i ) = ∑ i = 1 ∞ μ ( F i ) {\displaystyle \mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)=\textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)} is infinite then it is also required that the value of at least one of the series ∑ μ ( F i ) > 0 i ∈ N μ ( F i ) and ∑ μ ( F i ) < 0 i ∈ N μ ( F i ) {\displaystyle \textstyle \sum \limits _{\stackrel {i\in \mathbb {N} }{\mu \left(F_{i}\right)>0}}\mu \left(F_{i}\right)\;{\text{ and }}\;\textstyle \sum \limits _{\stackrel {i\in \mathbb {N} }{\mu \left(F_{i}\right)<0}}\mu \left(F_{i}\right)\;} be finite (so that the sum of their values is well-defined). 
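The order-dependence that the absolute/unconditional convergence requirements rule out can be seen numerically (a sketch using the classical example from the Riemann series theorem, not a measure per se): the alternating harmonic series sums to ln 2, but its rearrangement taking two positive terms for each negative one sums to (3/2) ln 2.

```python
# Riemann series theorem in action: a conditionally (but not absolutely)
# convergent series changes its sum under rearrangement.

import math

N = 200000

# natural order: 1 - 1/2 + 1/3 - 1/4 + ...
s_natural = sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# rearranged: 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
s_rearranged = 0.0
odd, even = 1, 2
for _ in range(N // 3):
    s_rearranged += 1 / odd + 1 / (odd + 2) - 1 / even
    odd += 4
    even += 2

assert abs(s_natural - math.log(2)) < 1e-3
assert abs(s_rearranged - 1.5 * math.log(2)) < 1e-3
```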
This is automatically true if μ {\displaystyle \mu } is non-negative. a pre-measure if it is non-negative, countably additive (including finitely additive), and has a null empty set. a measure if it is a pre-measure whose domain is a σ-algebra. That is to say, a measure is a non-negative countably additive set function on a σ-algebra that has a null empty set. a probability measure if it is a measure that has a mass of 1. {\displaystyle 1.} an outer measure if it is non-negative, countably subadditive, has a null empty set, and has the power set ℘ ( Ω ) {\displaystyle \wp (\Omega )} as its domain. Outer measures appear in the Carathéodory's extension theorem and they are often restricted to Carathéodory measurable subsets a signed measure if it is countably additive, has a null empty set, and μ {\displaystyle \mu } does not take on both − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } as values. complete if every subset of every null set is null; explicitly, this means: whenever F ∈ F satisfies μ ( F ) = 0 {\displaystyle F\in {\mathcal {F}}{\text{ satisfies }}\mu (F)=0} and N ⊆ F {\displaystyle N\subseteq F} is any subset of F {\displaystyle F} then N ∈ F {\displaystyle N\in {\mathcal {F}}} and μ ( N ) = 0. {\displaystyle \mu (N)=0.} Unlike many other properties, completeness places requirements on the set domain μ = F {\displaystyle \operatorname {domain} \mu ={\mathcal {F}}} (and not just on μ {\displaystyle \mu } 's values). 𝜎-finite if there exists a sequence F 1 , F 2 , F 3 , … {\displaystyle F_{1},F_{2},F_{3},\ldots \,} in F {\displaystyle {\mathcal {F}}} such that μ ( F i ) {\displaystyle \mu \left(F_{i}\right)} is finite for every index i , {\displaystyle i,} and also ⋃ n = 1 ∞ F n = ⋃ F ∈ F F . 
{\displaystyle \textstyle \bigcup \limits _{n=1}^{\infty }F_{n}=\textstyle \bigcup \limits _{F\in {\mathcal {F}}}F.} decomposable if there exists a subfamily P ⊆ F {\displaystyle {\mathcal {P}}\subseteq {\mathcal {F}}} of pairwise disjoint sets such that μ ( P ) {\displaystyle \mu (P)} is finite for every P ∈ P {\displaystyle P\in {\mathcal {P}}} and also ⋃ P ∈ P P = ⋃ F ∈ F F {\displaystyle \textstyle \bigcup \limits _{P\in {\mathcal {P}}}\,P=\textstyle \bigcup \limits _{F\in {\mathcal {F}}}F} (where F = domain μ {\displaystyle {\mathcal {F}}=\operatorname {domain} \mu } ). Every 𝜎-finite set function is decomposable although not conversely. For example, the counting measure on R {\displaystyle \mathbb {R} } (whose domain is ℘ ( R ) {\displaystyle \wp (\mathbb {R} )} ) is decomposable but not 𝜎-finite. a vector measure if it is a countably additive set function μ : F → X {\displaystyle \mu :{\mathcal {F}}\to X} valued in a topological vector space X {\displaystyle X} (such as a normed space) whose domain is a σ-algebra. If μ {\displaystyle \mu } is valued in a normed space ( X , ‖ ⋅ ‖ ) {\displaystyle (X,\|\cdot \|)} then it is countably additive if and only if for any pairwise disjoint sequence F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots \,} in F , {\displaystyle {\mathcal {F}},} lim n → ∞ ‖ μ ( F 1 ) + ⋯ + μ ( F n ) − μ ( ⋃ i = 1 ∞ F i ) ‖ = 0. {\displaystyle \lim _{n\to \infty }\left\|\mu \left(F_{1}\right)+\cdots +\mu \left(F_{n}\right)-\mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)\right\|=0.} If μ {\displaystyle \mu } is finitely additive and valued in a Banach space then it is countably additive if and only if for any pairwise disjoint sequence F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots \,} in F , {\displaystyle {\mathcal {F}},} lim n → ∞ ‖ μ ( F n ∪ F n + 1 ∪ F n + 2 ∪ ⋯ ) ‖ = 0. 
{\displaystyle \lim _{n\to \infty }\left\|\mu \left(F_{n}\cup F_{n+1}\cup F_{n+2}\cup \cdots \right)\right\|=0.} a complex measure if it is a countably additive complex-valued set function μ : F → C {\displaystyle \mu :{\mathcal {F}}\to \mathbb {C} } whose domain is a σ-algebra. By definition, a complex measure never takes ± ∞ {\displaystyle \pm \infty } as a value and so has a null empty set. a random measure if it is a measure-valued random element. Arbitrary sums As described in this article's section on generalized series, for any family ( r i ) i ∈ I {\displaystyle \left(r_{i}\right)_{i\in I}} of real numbers indexed by an arbitrary indexing set I , {\displaystyle I,} it is possible to define their sum ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} as the limit of the net of finite partial sums F ∈ FiniteSubsets ( I ) ↦ ∑ i ∈ F r i {\displaystyle F\in \operatorname {FiniteSubsets} (I)\mapsto \textstyle \sum \limits _{i\in F}r_{i}} where the domain FiniteSubsets ( I ) {\displaystyle \operatorname {FiniteSubsets} (I)} is directed by ⊆ . {\displaystyle \,\subseteq .\,} Whenever this net converges then its limit is denoted by the symbols ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} while if this net instead diverges to ± ∞ {\displaystyle \pm \infty } then this may be indicated by writing ∑ i ∈ I r i = ± ∞ . {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}=\pm \infty .} Any sum over the empty set is defined to be zero; that is, if I = ∅ {\displaystyle I=\varnothing } then ∑ i ∈ ∅ r i = 0 {\displaystyle \textstyle \sum \limits _{i\in \varnothing }r_{i}=0} by definition. For example, if z i = 0 {\displaystyle z_{i}=0} for every i ∈ I {\displaystyle i\in I} then ∑ i ∈ I z i = 0. {\displaystyle \textstyle \sum \limits _{i\in I}z_{i}=0.} And it can be shown that ∑ i ∈ I r i = ∑ r i = 0 i ∈ I , r i + ∑ r i ≠ 0 i ∈ I , r i = 0 + ∑ r i ≠ 0 i ∈ I , r i = ∑ r i ≠ 0 i ∈ I , r i . 
{\displaystyle \textstyle \sum \limits _{i\in I}r_{i}=\textstyle \sum \limits _{\stackrel {i\in I,}{r_{i}=0}}r_{i}+\textstyle \sum \limits _{\stackrel {i\in I,}{r_{i}\neq 0}}r_{i}=0+\textstyle \sum \limits _{\stackrel {i\in I,}{r_{i}\neq 0}}r_{i}=\textstyle \sum \limits _{\stackrel {i\in I,}{r_{i}\neq 0}}r_{i}.} If I = N {\displaystyle I=\mathbb {N} } then the generalized series ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} converges in R {\displaystyle \mathbb {R} } if and only if ∑ i = 1 ∞ r i {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }r_{i}} converges unconditionally (or equivalently, converges absolutely) in the usual sense. If a generalized series ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} converges in R {\displaystyle \mathbb {R} } then both ∑ r i > 0 i ∈ I r i {\displaystyle \textstyle \sum \limits _{\stackrel {i\in I}{r_{i}>0}}r_{i}} and ∑ r i < 0 i ∈ I r i {\displaystyle \textstyle \sum \limits _{\stackrel {i\in I}{r_{i}<0}}r_{i}} also converge to elements of R {\displaystyle \mathbb {R} } and the set { i ∈ I : r i ≠ 0 } {\displaystyle \left\{i\in I:r_{i}\neq 0\right\}} is necessarily countable (that is, either finite or countably infinite); this remains true if R {\displaystyle \mathbb {R} } is replaced with any normed space. It follows that in order for a generalized series ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} to converge in R {\displaystyle \mathbb {R} } or C , {\displaystyle \mathbb {C} ,} it is necessary that all but at most countably many r i {\displaystyle r_{i}} will be equal to 0 , {\displaystyle 0,} which means that ∑ i ∈ I r i = ∑ r i ≠ 0 i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}~=~\textstyle \sum \limits _{\stackrel {i\in I}{r_{i}\neq 0}}r_{i}} is a sum of at most countably many non-zero terms. 
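Since a convergent generalized sum has at most countably many non-zero terms, it can be computed by summing only those terms, in any order. The following Python sketch illustrates this reduction for a family with finite support; the particular family and the helper names are invented for illustration, and exact rational arithmetic is used so that the sums are exact.

```python
from fractions import Fraction

# A family (r_i) indexed by the large set I = {0, ..., 10**6 - 1},
# with only finitely many non-zero members.
support = {10: Fraction(1, 2), 500: Fraction(1, 3), 999_999: Fraction(1, 6)}

def r(i):
    """The i-th member of the family; zero off the finite support."""
    return support.get(i, Fraction(0))

# The generalized sum reduces to the sum over the non-zero terms, and it
# is unconditional: any ordering of the support yields the same value.
total = sum(r(i) for i in support)
assert total == Fraction(1)
assert sum(r(i) for i in sorted(support, reverse=True)) == total
```

The same reduction is what justifies restricting "countably additive" to countable families in the definitions above.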
Said differently, if { i ∈ I : r i ≠ 0 } {\displaystyle \left\{i\in I:r_{i}\neq 0\right\}} is uncountable then the generalized series ∑ i ∈ I r i {\displaystyle \textstyle \sum \limits _{i\in I}r_{i}} does not converge. In summary, due to the nature of the real numbers and their topology, every generalized series of real numbers (indexed by an arbitrary set) that converges can be reduced to an ordinary absolutely convergent series of countably many real numbers. So in the context of measure theory, there is little benefit gained by considering uncountably many sets and generalized series. In particular, this is why the definition of "countably additive" is rarely extended from countably many sets F 1 , F 2 , … {\displaystyle F_{1},F_{2},\ldots \,} in F {\displaystyle {\mathcal {F}}} (and the usual countable series ∑ i = 1 ∞ μ ( F i ) {\displaystyle \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)} ) to arbitrarily many sets ( F i ) i ∈ I {\displaystyle \left(F_{i}\right)_{i\in I}} (and the generalized series ∑ i ∈ I μ ( F i ) {\displaystyle \textstyle \sum \limits _{i\in I}\mu \left(F_{i}\right)} ). === Inner measures, outer measures, and other properties === A set function μ {\displaystyle \mu } is said to be (or to satisfy): monotone if μ ( E ) ≤ μ ( F ) {\displaystyle \mu (E)\leq \mu (F)} whenever E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} satisfy E ⊆ F . {\displaystyle E\subseteq F.} modular if it satisfies the following condition, known as modularity: μ ( E ∪ F ) + μ ( E ∩ F ) = μ ( E ) + μ ( F ) {\displaystyle \mu (E\cup F)+\mu (E\cap F)=\mu (E)+\mu (F)} for all E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} such that E ∪ F , E ∩ F ∈ F . {\displaystyle E\cup F,E\cap F\in {\mathcal {F}}.} Every finitely additive function on a field of sets is modular. In geometry, a set function valued in some abelian semigroup that possesses this property is known as a valuation.
This geometric definition of "valuation" should not be confused with the stronger non-equivalent measure theoretic definition of "valuation" that is given below. submodular if μ ( E ∪ F ) + μ ( E ∩ F ) ≤ μ ( E ) + μ ( F ) {\displaystyle \mu (E\cup F)+\mu (E\cap F)\leq \mu (E)+\mu (F)} for all E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} such that E ∪ F , E ∩ F ∈ F . {\displaystyle E\cup F,E\cap F\in {\mathcal {F}}.} finitely subadditive if | μ ( F ) | ≤ ∑ i = 1 n | μ ( F i ) | {\displaystyle |\mu (F)|\leq \textstyle \sum \limits _{i=1}^{n}\left|\mu \left(F_{i}\right)\right|} for all finite sequences F , F 1 , … , F n ∈ F {\displaystyle F,F_{1},\ldots ,F_{n}\in {\mathcal {F}}} that satisfy F ⊆ ⋃ i = 1 n F i . {\displaystyle F\;\subseteq \;\textstyle \bigcup \limits _{i=1}^{n}F_{i}.} countably subadditive or σ-subadditive if | μ ( F ) | ≤ ∑ i = 1 ∞ | μ ( F i ) | {\displaystyle |\mu (F)|\leq \textstyle \sum \limits _{i=1}^{\infty }\left|\mu \left(F_{i}\right)\right|} for all sequences F , F 1 , F 2 , F 3 , … {\displaystyle F,F_{1},F_{2},F_{3},\ldots \,} in F {\displaystyle {\mathcal {F}}} that satisfy F ⊆ ⋃ i = 1 ∞ F i . {\displaystyle F\;\subseteq \;\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}.} If F {\displaystyle {\mathcal {F}}} is closed under finite unions then this condition holds if and only if | μ ( F ∪ G ) | ≤ | μ ( F ) | + | μ ( G ) | {\displaystyle |\mu (F\cup G)|\leq |\mu (F)|+|\mu (G)|} for all F , G ∈ F . {\displaystyle F,G\in {\mathcal {F}}.} If μ {\displaystyle \mu } is non-negative then the absolute values may be removed. If μ {\displaystyle \mu } is a measure then this condition holds if and only if μ ( ⋃ i = 1 ∞ F i ) ≤ ∑ i = 1 ∞ μ ( F i ) {\displaystyle \mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)\leq \textstyle \sum \limits _{i=1}^{\infty }\mu \left(F_{i}\right)} for all F 1 , F 2 , F 3 , … {\displaystyle F_{1},F_{2},F_{3},\ldots \,} in F . 
{\displaystyle {\mathcal {F}}.} If μ {\displaystyle \mu } is a probability measure then this inequality is Boole's inequality. If μ {\displaystyle \mu } is countably subadditive and ∅ ∈ F {\displaystyle \varnothing \in {\mathcal {F}}} with μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} then μ {\displaystyle \mu } is finitely subadditive. superadditive if μ ( E ) + μ ( F ) ≤ μ ( E ∪ F ) {\displaystyle \mu (E)+\mu (F)\leq \mu (E\cup F)} whenever E , F ∈ F {\displaystyle E,F\in {\mathcal {F}}} are disjoint with E ∪ F ∈ F . {\displaystyle E\cup F\in {\mathcal {F}}.} continuous from above if lim i → ∞ μ ( F i ) = μ ( ⋂ i = 1 ∞ F i ) {\displaystyle \lim _{i\to \infty }\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcap \limits _{i=1}^{\infty }F_{i}\right)} for all non-increasing sequences of sets F 1 ⊇ F 2 ⊇ F 3 ⋯ {\displaystyle F_{1}\supseteq F_{2}\supseteq F_{3}\cdots \,} in F {\displaystyle {\mathcal {F}}} such that ⋂ i = 1 ∞ F i ∈ F {\displaystyle \textstyle \bigcap \limits _{i=1}^{\infty }F_{i}\in {\mathcal {F}}} with μ ( ⋂ i = 1 ∞ F i ) {\displaystyle \mu \left(\textstyle \bigcap \limits _{i=1}^{\infty }F_{i}\right)} and all μ ( F i ) {\displaystyle \mu \left(F_{i}\right)} finite. Lebesgue measure λ {\displaystyle \lambda } is continuous from above but it would not be if the assumption that all μ ( F i ) {\displaystyle \mu \left(F_{i}\right)} are eventually finite were omitted from the definition, as this example shows: For every integer i , {\displaystyle i,} let F i {\displaystyle F_{i}} be the open interval ( i , ∞ ) {\displaystyle (i,\infty )} so that lim i → ∞ λ ( F i ) = lim i → ∞ ∞ = ∞ ≠ 0 = λ ( ∅ ) = λ ( ⋂ i = 1 ∞ F i ) {\displaystyle \lim _{i\to \infty }\lambda \left(F_{i}\right)=\lim _{i\to \infty }\infty =\infty \neq 0=\lambda (\varnothing )=\lambda \left(\textstyle \bigcap \limits _{i=1}^{\infty }F_{i}\right)} where ⋂ i = 1 ∞ F i = ∅ .
{\displaystyle \textstyle \bigcap \limits _{i=1}^{\infty }F_{i}=\varnothing .} continuous from below if lim i → ∞ μ ( F i ) = μ ( ⋃ i = 1 ∞ F i ) {\displaystyle \lim _{i\to \infty }\mu \left(F_{i}\right)=\mu \left(\textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\right)} for all non-decreasing sequences of sets F 1 ⊆ F 2 ⊆ F 3 ⋯ {\displaystyle F_{1}\subseteq F_{2}\subseteq F_{3}\cdots \,} in F {\displaystyle {\mathcal {F}}} such that ⋃ i = 1 ∞ F i ∈ F . {\displaystyle \textstyle \bigcup \limits _{i=1}^{\infty }F_{i}\in {\mathcal {F}}.} infinity is approached from below if whenever F ∈ F {\displaystyle F\in {\mathcal {F}}} satisfies μ ( F ) = ∞ {\displaystyle \mu (F)=\infty } then for every real r > 0 , {\displaystyle r>0,} there exists some F r ∈ F {\displaystyle F_{r}\in {\mathcal {F}}} such that F r ⊆ F {\displaystyle F_{r}\subseteq F} and r ≤ μ ( F r ) < ∞ . {\displaystyle r\leq \mu \left(F_{r}\right)<\infty .} an outer measure if μ {\displaystyle \mu } is non-negative, countably subadditive, has a null empty set, and has the power set ℘ ( Ω ) {\displaystyle \wp (\Omega )} as its domain. an inner measure if μ {\displaystyle \mu } is non-negative, superadditive, continuous from above, has a null empty set, has the power set ℘ ( Ω ) {\displaystyle \wp (\Omega )} as its domain, and + ∞ {\displaystyle +\infty } is approached from below. atomic if every measurable set of positive measure contains an atom. If a binary operation + {\displaystyle \,+\,} is defined, then a set function μ {\displaystyle \mu } is said to be translation invariant if μ ( ω + F ) = μ ( F ) {\displaystyle \mu (\omega +F)=\mu (F)} for all ω ∈ Ω {\displaystyle \omega \in \Omega } and F ∈ F {\displaystyle F\in {\mathcal {F}}} such that ω + F ∈ F .
{\displaystyle \omega +F\in {\mathcal {F}}.} === Topology related definitions === If τ {\displaystyle \tau } is a topology on Ω {\displaystyle \Omega } then a set function μ {\displaystyle \mu } is said to be: a Borel measure if it is a measure defined on the σ-algebra of all Borel sets, which is the smallest σ-algebra containing all open subsets (that is, containing τ {\displaystyle \tau } ). a Baire measure if it is a measure defined on the σ-algebra of all Baire sets. locally finite if for every point ω ∈ Ω {\displaystyle \omega \in \Omega } there exists some neighborhood U ∈ F ∩ τ {\displaystyle U\in {\mathcal {F}}\cap \tau } of this point such that μ ( U ) {\displaystyle \mu (U)} is finite. If μ {\displaystyle \mu } is a finitely additive, monotone, and locally finite then μ ( K ) {\displaystyle \mu (K)} is necessarily finite for every compact measurable subset K . {\displaystyle K.} τ {\displaystyle \tau } -additive if μ ( ⋃ D ) = sup D ∈ D μ ( D ) {\displaystyle \mu \left({\textstyle \bigcup }\,{\mathcal {D}}\right)=\sup _{D\in {\mathcal {D}}}\mu (D)} whenever D ⊆ τ ∩ F {\displaystyle {\mathcal {D}}\subseteq \tau \cap {\mathcal {F}}} is directed with respect to ⊆ {\displaystyle \,\subseteq \,} and satisfies ⋃ D = def ⋃ D ∈ D D ∈ F . {\displaystyle {\textstyle \bigcup }\,{\mathcal {D}}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\textstyle \bigcup \limits _{D\in {\mathcal {D}}}D\in {\mathcal {F}}.} D {\displaystyle {\mathcal {D}}} is directed with respect to ⊆ {\displaystyle \,\subseteq \,} if and only if it is not empty and for all A , B ∈ D {\displaystyle A,B\in {\mathcal {D}}} there exists some C ∈ D {\displaystyle C\in {\mathcal {D}}} such that A ⊆ C {\displaystyle A\subseteq C} and B ⊆ C . {\displaystyle B\subseteq C.} inner regular or tight if for every F ∈ F , {\displaystyle F\in {\mathcal {F}},} μ ( F ) = sup { μ ( K ) : F ⊇ K with K ∈ F a compact subset of ( Ω , τ ) } . 
{\displaystyle \mu (F)=\sup\{\mu (K):F\supseteq K{\text{ with }}K\in {\mathcal {F}}{\text{ a compact subset of }}(\Omega ,\tau )\}.} outer regular if for every F ∈ F , {\displaystyle F\in {\mathcal {F}},} μ ( F ) = inf { μ ( U ) : F ⊆ U and U ∈ F ∩ τ } . {\displaystyle \mu (F)=\inf\{\mu (U):F\subseteq U{\text{ and }}U\in {\mathcal {F}}\cap \tau \}.} regular if it is both inner regular and outer regular. a Borel regular measure if it is a Borel measure that is also regular. a Radon measure if it is a regular and locally finite measure. strictly positive if every non-empty open subset has (strictly) positive measure. a valuation if it is non-negative, monotone, modular, has a null empty set, and has domain τ . {\displaystyle \tau .} === Relationships between set functions === If μ {\displaystyle \mu } and ν {\displaystyle \nu } are two set functions over Ω , {\displaystyle \Omega ,} then: μ {\displaystyle \mu } is said to be absolutely continuous with respect to ν {\displaystyle \nu } or dominated by ν {\displaystyle \nu } , written μ ≪ ν , {\displaystyle \mu \ll \nu ,} if for every set F {\displaystyle F} that belongs to the domain of both μ {\displaystyle \mu } and ν , {\displaystyle \nu ,} if ν ( F ) = 0 {\displaystyle \nu (F)=0} then μ ( F ) = 0. {\displaystyle \mu (F)=0.} If μ {\displaystyle \mu } and ν {\displaystyle \nu } are σ {\displaystyle \sigma } -finite measures on the same measurable space and if μ ≪ ν , {\displaystyle \mu \ll \nu ,} then the Radon–Nikodym derivative d μ d ν {\displaystyle {\frac {d\mu }{d\nu }}} exists and for every measurable F , {\displaystyle F,} μ ( F ) = ∫ F d μ d ν d ν . {\displaystyle \mu (F)=\int _{F}{\frac {d\mu }{d\nu }}d\nu .} μ {\displaystyle \mu } and ν {\displaystyle \nu } are called equivalent if each one is absolutely continuous with respect to the other. 
μ {\displaystyle \mu } is called a supporting measure of a measure ν {\displaystyle \nu } if μ {\displaystyle \mu } is σ {\displaystyle \sigma } -finite and they are equivalent. μ {\displaystyle \mu } and ν {\displaystyle \nu } are singular, written μ ⊥ ν , {\displaystyle \mu \perp \nu ,} if there exist disjoint sets M {\displaystyle M} and N {\displaystyle N} in the domains of μ {\displaystyle \mu } and ν {\displaystyle \nu } such that M ∪ N = Ω , {\displaystyle M\cup N=\Omega ,} μ ( F ) = 0 {\displaystyle \mu (F)=0} for all F ⊆ M {\displaystyle F\subseteq M} in the domain of μ , {\displaystyle \mu ,} and ν ( F ) = 0 {\displaystyle \nu (F)=0} for all F ⊆ N {\displaystyle F\subseteq N} in the domain of ν . {\displaystyle \nu .} == Examples == Examples of set functions include: The function d ( A ) = lim n → ∞ | A ∩ { 1 , … , n } | n , {\displaystyle d(A)=\lim _{n\to \infty }{\frac {|A\cap \{1,\ldots ,n\}|}{n}},} assigning densities to sufficiently well-behaved subsets A ⊆ { 1 , 2 , 3 , … } , {\displaystyle A\subseteq \{1,2,3,\ldots \},} is a set function. A probability measure assigns a probability to each set in a σ-algebra. Specifically, the probability of the empty set is zero and the probability of the sample space is 1 , {\displaystyle 1,} with other sets given probabilities between 0 {\displaystyle 0} and 1. {\displaystyle 1.} A possibility measure assigns a number between zero and one to each set in the powerset of some given set. See possibility theory. A random set is a set-valued random variable. See the article random compact set. The Jordan measure on R n {\displaystyle \mathbb {R} ^{n}} is a set function defined on the set of all Jordan measurable subsets of R n ; {\displaystyle \mathbb {R} ^{n};} it sends a Jordan measurable set to its Jordan measure. 
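The natural density d ( A ) defined above can be approximated by its finite truncations | A ∩ { 1 , … , n } | / n. The short Python sketch below is purely illustrative (the helper name is invented) and checks that the multiples of 3 have density 1/3.

```python
def partial_density(A, n):
    """The finite truncation |A ∩ {1, ..., n}| / n of the density d(A)."""
    return sum(1 for k in range(1, n + 1) if k in A) / n

# The set of multiples of 3 has natural density 1/3; the truncation
# already equals the limit whenever n is itself a multiple of 3.
multiples_of_3 = {k for k in range(1, 30001) if k % 3 == 0}
assert abs(partial_density(multiples_of_3, 30000) - 1/3) < 1e-12
```

Not every subset of the positive integers has a density: the limit may fail to exist, which is why the article restricts to "sufficiently well-behaved" subsets.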
=== Lebesgue measure === The Lebesgue measure on R {\displaystyle \mathbb {R} } is a set function that assigns a non-negative real number to every set of real numbers that belongs to the Lebesgue σ {\displaystyle \sigma } -algebra. Its definition begins with the set Intervals ( R ) {\displaystyle \operatorname {Intervals} (\mathbb {R} )} of all intervals of real numbers, which is a semialgebra on R . {\displaystyle \mathbb {R} .} The function that assigns to every interval I {\displaystyle I} its length ( I ) {\displaystyle \operatorname {length} (I)} is a finitely additive set function (explicitly, if I {\displaystyle I} has endpoints a ≤ b {\displaystyle a\leq b} then length ( I ) = b − a {\displaystyle \operatorname {length} (I)=b-a} ). This set function can be extended to the Lebesgue outer measure on R , {\displaystyle \mathbb {R} ,} which is the translation-invariant set function λ ∗ : ℘ ( R ) → [ 0 , ∞ ] {\displaystyle \lambda ^{\!*\!}:\wp (\mathbb {R} )\to [0,\infty ]} that sends a subset E ⊆ R {\displaystyle E\subseteq \mathbb {R} } to the infimum λ ∗ ( E ) = inf { ∑ k = 1 ∞ length ( I k ) : ( I k ) k ∈ N is a sequence of open intervals with E ⊆ ⋃ k = 1 ∞ I k } . {\displaystyle \lambda ^{\!*\!}(E)=\inf \left\{\sum _{k=1}^{\infty }\operatorname {length} (I_{k}):{(I_{k})_{k\in \mathbb {N} }}{\text{ is a sequence of open intervals with }}E\subseteq \bigcup _{k=1}^{\infty }I_{k}\right\}.} Lebesgue outer measure is not countably additive (and so is not a measure) although its restriction to the 𝜎-algebra of all subsets M ⊆ R {\displaystyle M\subseteq \mathbb {R} } that satisfy the Carathéodory criterion: λ ∗ ( S ) = λ ∗ ( S ∩ M ) + λ ∗ ( S ∩ M c ) for every S ⊆ R {\displaystyle \lambda ^{\!*\!}(S)=\lambda ^{\!*\!}(S\cap M)+\lambda ^{\!*\!}(S\cap M^{\mathrm {c} })\quad {\text{ for every }}S\subseteq \mathbb {R} } is a measure that is called Lebesgue measure. Vitali sets are examples of non-measurable sets of real numbers.
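One immediate consequence of the covering definition of λ ∗ is that every countable set is Lebesgue-null: covering the k-th point by an open interval of length ε / 2 k gives covers of total length less than ε for every ε > 0. The Python sketch below is illustrative only (the helper name is invented) and uses exact rational arithmetic.

```python
from fractions import Fraction

def cover_length(n_points, eps):
    """Total length of a cover of n_points points by open intervals,
    the k-th point getting an interval of length eps / 2**k."""
    return sum(eps / 2**k for k in range(1, n_points + 1))

# The total length eps * (1 - 2**(-n)) is strictly below eps, so the
# outer measure of the covered set is at most eps for every eps > 0.
for eps in (Fraction(1), Fraction(1, 10), Fraction(1, 10**6)):
    assert cover_length(100, eps) < eps
# Since eps > 0 was arbitrary, the outer measure of any countable set is 0.
```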
==== Infinite-dimensional space ==== As detailed in the article on infinite-dimensional Lebesgue measure, the only locally finite and translation-invariant Borel measure on an infinite-dimensional separable normed space is the trivial measure. However, it is possible to define Gaussian measures on infinite-dimensional topological vector spaces. The structure theorem for Gaussian measures shows that the abstract Wiener space construction is essentially the only way to obtain a strictly positive Gaussian measure on a separable Banach space. === Finitely additive translation-invariant set functions === The only translation-invariant measure on Ω = R {\displaystyle \Omega =\mathbb {R} } with domain ℘ ( R ) {\displaystyle \wp (\mathbb {R} )} that is finite on every compact subset of R {\displaystyle \mathbb {R} } is the trivial set function ℘ ( R ) → [ 0 , ∞ ] {\displaystyle \wp (\mathbb {R} )\to [0,\infty ]} that is identically equal to 0 {\displaystyle 0} (that is, it sends every S ⊆ R {\displaystyle S\subseteq \mathbb {R} } to 0 {\displaystyle 0} ). However, if countable additivity is weakened to finite additivity then a non-trivial set function with these properties does exist; moreover, some are even valued in [ 0 , 1 ] . {\displaystyle [0,1].} In fact, such non-trivial set functions will exist even if R {\displaystyle \mathbb {R} } is replaced by any other abelian group G .
{\displaystyle G.} == Extending set functions == === Extending from semialgebras to algebras === Suppose that μ {\displaystyle \mu } is a set function on a semialgebra F {\displaystyle {\mathcal {F}}} over Ω {\displaystyle \Omega } and let algebra ( F ) := { F 1 ⊔ ⋯ ⊔ F n : n ∈ N and F 1 , … , F n ∈ F are pairwise disjoint } , {\displaystyle \operatorname {algebra} ({\mathcal {F}}):=\left\{F_{1}\sqcup \cdots \sqcup F_{n}:n\in \mathbb {N} {\text{ and }}F_{1},\ldots ,F_{n}\in {\mathcal {F}}{\text{ are pairwise disjoint }}\right\},} which is the algebra on Ω {\displaystyle \Omega } generated by F . {\displaystyle {\mathcal {F}}.} The archetypal example of a semialgebra that is not also an algebra is the family S d := { ∅ } ∪ { ( a 1 , b 1 ] × ⋯ × ( a d , b d ] : − ∞ ≤ a i < b i ≤ ∞ for all i = 1 , … , d } {\displaystyle {\mathcal {S}}_{d}:=\{\varnothing \}\cup \left\{\left(a_{1},b_{1}\right]\times \cdots \times \left(a_{d},b_{d}\right]~:~-\infty \leq a_{i}<b_{i}\leq \infty {\text{ for all }}i=1,\ldots ,d\right\}} on Ω := R d {\displaystyle \Omega :=\mathbb {R} ^{d}} where ( a , b ] := { x ∈ R : a < x ≤ b } {\displaystyle (a,b]:=\{x\in \mathbb {R} :a<x\leq b\}} for all − ∞ ≤ a < b ≤ ∞ . {\displaystyle -\infty \leq a<b\leq \infty .} Importantly, the two non-strict inequalities ≤ {\displaystyle \,\leq \,} in − ∞ ≤ a i < b i ≤ ∞ {\displaystyle -\infty \leq a_{i}<b_{i}\leq \infty } cannot be replaced with strict inequalities < {\displaystyle \,<\,} since semialgebras must contain the whole underlying set R d ; {\displaystyle \mathbb {R} ^{d};} that is, R d ∈ S d {\displaystyle \mathbb {R} ^{d}\in {\mathcal {S}}_{d}} is a requirement of semialgebras (as is ∅ ∈ S d {\displaystyle \varnothing \in {\mathcal {S}}_{d}} ).
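In the one-dimensional case, the interval-length function on the semialgebra of half-open intervals extends additively to the generated algebra of finite disjoint unions. The following Python sketch is illustrative only (the helper names are invented, and the intervals passed in are assumed pairwise disjoint).

```python
def length(interval):
    """mu on the semialgebra of half-open intervals (a, b]."""
    a, b = interval
    return b - a

def extended(disjoint_intervals):
    """The finitely additive extension mu-bar on the generated algebra:
    a disjoint union F_1 ⊔ ... ⊔ F_n is sent to mu(F_1) + ... + mu(F_n).
    Sketch only: the intervals are assumed to be pairwise disjoint."""
    return sum(length(I) for I in disjoint_intervals)

# (0, 1] ⊔ (2, 5] belongs to the generated algebra, not to the
# semialgebra of single intervals; its measure is 1 + 3 = 4.
assert extended([(0, 1), (2, 5)]) == 4
```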
If μ {\displaystyle \mu } is finitely additive then it has a unique extension to a set function μ ¯ {\displaystyle {\overline {\mu }}} on algebra ( F ) {\displaystyle \operatorname {algebra} ({\mathcal {F}})} defined by sending F 1 ⊔ ⋯ ⊔ F n ∈ algebra ( F ) {\displaystyle F_{1}\sqcup \cdots \sqcup F_{n}\in \operatorname {algebra} ({\mathcal {F}})} (where ⊔ {\displaystyle \,\sqcup \,} indicates that these F i ∈ F {\displaystyle F_{i}\in {\mathcal {F}}} are pairwise disjoint) to: μ ¯ ( F 1 ⊔ ⋯ ⊔ F n ) := μ ( F 1 ) + ⋯ + μ ( F n ) . {\displaystyle {\overline {\mu }}\left(F_{1}\sqcup \cdots \sqcup F_{n}\right):=\mu \left(F_{1}\right)+\cdots +\mu \left(F_{n}\right).} This extension μ ¯ {\displaystyle {\overline {\mu }}} will also be finitely additive: for any pairwise disjoint A 1 , … , A n ∈ algebra ( F ) , {\displaystyle A_{1},\ldots ,A_{n}\in \operatorname {algebra} ({\mathcal {F}}),} μ ¯ ( A 1 ∪ ⋯ ∪ A n ) = μ ¯ ( A 1 ) + ⋯ + μ ¯ ( A n ) . {\displaystyle {\overline {\mu }}\left(A_{1}\cup \cdots \cup A_{n}\right)={\overline {\mu }}\left(A_{1}\right)+\cdots +{\overline {\mu }}\left(A_{n}\right).} If in addition μ {\displaystyle \mu } is extended real-valued and monotone (which, in particular, will be the case if μ {\displaystyle \mu } is non-negative) then μ ¯ {\displaystyle {\overline {\mu }}} will be monotone and finitely subadditive: for any A , A 1 , … , A n ∈ algebra ( F ) {\displaystyle A,A_{1},\ldots ,A_{n}\in \operatorname {algebra} ({\mathcal {F}})} such that A ⊆ A 1 ∪ ⋯ ∪ A n , {\displaystyle A\subseteq A_{1}\cup \cdots \cup A_{n},} μ ¯ ( A ) ≤ μ ¯ ( A 1 ) + ⋯ + μ ¯ ( A n ) . 
{\displaystyle {\overline {\mu }}\left(A\right)\leq {\overline {\mu }}\left(A_{1}\right)+\cdots +{\overline {\mu }}\left(A_{n}\right).} === Extending from rings to σ-algebras === If μ : F → [ 0 , ∞ ] {\displaystyle \mu :{\mathcal {F}}\to [0,\infty ]} is a pre-measure on a ring of sets (such as an algebra of sets) F {\displaystyle {\mathcal {F}}} over Ω {\displaystyle \Omega } then μ {\displaystyle \mu } has an extension to a measure μ ¯ : σ ( F ) → [ 0 , ∞ ] {\displaystyle {\overline {\mu }}:\sigma ({\mathcal {F}})\to [0,\infty ]} on the σ-algebra σ ( F ) {\displaystyle \sigma ({\mathcal {F}})} generated by F . {\displaystyle {\mathcal {F}}.} If μ {\displaystyle \mu } is σ-finite then this extension is unique. To define this extension, first extend μ {\displaystyle \mu } to an outer measure μ ∗ {\displaystyle \mu ^{*}} on 2 Ω = ℘ ( Ω ) {\displaystyle 2^{\Omega }=\wp (\Omega )} by μ ∗ ( T ) = inf { ∑ n μ ( S n ) : T ⊆ ∪ n S n with S 1 , S 2 , … ∈ F } {\displaystyle \mu ^{*}(T)=\inf \left\{\sum _{n}\mu \left(S_{n}\right):T\subseteq \cup _{n}S_{n}{\text{ with }}S_{1},S_{2},\ldots \in {\mathcal {F}}\right\}} and then restrict it to the set F M {\displaystyle {\mathcal {F}}_{M}} of μ ∗ {\displaystyle \mu ^{*}} -measurable sets (that is, Carathéodory-measurable sets), which is the set of all M ⊆ Ω {\displaystyle M\subseteq \Omega } such that μ ∗ ( S ) = μ ∗ ( S ∩ M ) + μ ∗ ( S ∩ M c ) for every subset S ⊆ Ω . {\displaystyle \mu ^{*}(S)=\mu ^{*}(S\cap M)+\mu ^{*}(S\cap M^{\mathrm {c} })\quad {\text{ for every subset }}S\subseteq \Omega .} It is a σ {\displaystyle \sigma } -algebra and μ ∗ {\displaystyle \mu ^{*}} is sigma-additive on it, by Caratheodory lemma. 
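The Carathéodory criterion can be explored exhaustively on a small finite set. The Python sketch below is purely illustrative: it uses the outer measure that assigns 0 to the empty set and 1 to every non-empty subset, for which only the trivial subsets turn out to be measurable.

```python
from itertools import combinations

Omega = frozenset({'a', 'b', 'c'})
subsets = [frozenset(c) for r in range(len(Omega) + 1)
           for c in combinations(sorted(Omega), r)]

def mu_star(S):
    """A (non-additive) outer measure: 0 on the empty set, 1 otherwise.
    It is monotone and countably subadditive but not a measure."""
    return 0 if not S else 1

def caratheodory_measurable(M):
    """Check mu*(S) = mu*(S ∩ M) + mu*(S \\ M) for every S ⊆ Omega."""
    return all(mu_star(S) == mu_star(S & M) + mu_star(S - M)
               for S in subsets)

measurable = {M for M in subsets if caratheodory_measurable(M)}
# Only the trivial sets pass (e.g. M = {'a'} fails at S = Omega, where
# 1 != 1 + 1), and the restriction of mu_star to the sigma-algebra
# {∅, Omega} is a (trivial) measure.
assert measurable == {frozenset(), Omega}
```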
=== Restricting outer measures === If μ ∗ : ℘ ( Ω ) → [ 0 , ∞ ] {\displaystyle \mu ^{*}:\wp (\Omega )\to [0,\infty ]} is an outer measure on a set Ω , {\displaystyle \Omega ,} where (by definition) the domain is necessarily the power set ℘ ( Ω ) {\displaystyle \wp (\Omega )} of Ω , {\displaystyle \Omega ,} then a subset M ⊆ Ω {\displaystyle M\subseteq \Omega } is called μ ∗ {\displaystyle \mu ^{*}} –measurable or Carathéodory-measurable if it satisfies the following Carathéodory criterion: μ ∗ ( S ) = μ ∗ ( S ∩ M ) + μ ∗ ( S ∩ M c ) for every subset S ⊆ Ω , {\displaystyle \mu ^{*}(S)=\mu ^{*}(S\cap M)+\mu ^{*}(S\cap M^{\mathrm {c} })\quad {\text{ for every subset }}S\subseteq \Omega ,} where M c := Ω ∖ M {\displaystyle M^{\mathrm {c} }:=\Omega \setminus M} is the complement of M . {\displaystyle M.} The family of all μ ∗ {\displaystyle \mu ^{*}} –measurable subsets is a σ-algebra and the restriction of the outer measure μ ∗ {\displaystyle \mu ^{*}} to this family is a measure. == See also == Absolute continuity (measure theory) – Form of continuity for functions Boolean ring – Algebraic structure in mathematics Cylinder set measure – Way to generate a measure over product spaces Field of sets – Algebraic concept in measure theory, also referred to as an algebra of sets Hadwiger's theorem – Theorem in integral geometry Hahn decomposition theorem – Measurability theorem Invariant measure – Concept in mathematics Lebesgue's decomposition theorem Positive and negative sets Radon–Nikodym theorem – Expressing a measure as an integral of another Riesz–Markov–Kakutani representation theorem – Statement about linear functionals and measures Ring of sets – Family closed under unions and relative complements σ-algebra – Algebraic structure of set algebra Vitali–Hahn–Saks theorem == Notes == == Proofs == == References == Durrett, Richard (2019).
Probability: Theory and Examples (PDF). Cambridge Series in Statistical and Probabilistic Mathematics. Vol. 49 (5th ed.). Cambridge New York, NY: Cambridge University Press. ISBN 978-1-108-47368-2. OCLC 1100115281. Retrieved November 5, 2020. Kolmogorov, Andrey; Fomin, Sergei V. (2012) [1957]. Elements of the Theory of Functions and Functional Analysis. Dover Books on Mathematics. New York: Dover Books. ISBN 978-1-61427-304-2. OCLC 912495626. A. N. Kolmogorov and S. V. Fomin (1975), Introductory Real Analysis, Dover. ISBN 0-486-61226-0 Royden, Halsey; Fitzpatrick, Patrick (15 January 2010). Real Analysis (4 ed.). Boston: Prentice Hall. ISBN 978-0-13-143747-0. OCLC 456836719. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. == Further reading == Sobolev, V.I. (2001) [1994], "Set function", Encyclopedia of Mathematics, EMS Press Regular set function at Encyclopedia of Mathematics
Wikipedia:Setoid#0
In mathematics, a setoid (X, ~) is a set (or type) X equipped with an equivalence relation ~. A setoid may also be called E-set, Bishop set, or extensional set. Setoids are studied especially in proof theory and in type-theoretic foundations of mathematics. Often in mathematics, when one defines an equivalence relation on a set, one immediately forms the quotient set (turning equivalence into equality). In contrast, setoids may be used when a difference between identity and equivalence must be maintained, often with an interpretation of intensional equality (the equality on the original set) and extensional equality (the equivalence relation, or the equality on the quotient set). == Proof theory == In proof theory, particularly the proof theory of constructive mathematics based on the Curry–Howard correspondence, one often identifies a mathematical proposition with its set of proofs (if any). A given proposition may have many proofs, of course; according to the principle of proof irrelevance, normally only the truth of the proposition matters, not which proof was used. However, the Curry–Howard correspondence can turn proofs into algorithms, and differences between algorithms are often important. So proof theorists may prefer to identify a proposition with a setoid of proofs, considering proofs equivalent if they can be converted into one another through beta conversion or the like. == Type theory == In type-theoretic foundations of mathematics, setoids may be used in a type theory that lacks quotient types to model general mathematical sets. For example, in Per Martin-Löf's intuitionistic type theory, there is no type of real numbers, only a type of regular Cauchy sequences of rational numbers. To do real analysis in Martin-Löf's framework, therefore, one must work with a setoid of real numbers, the type of regular Cauchy sequences equipped with the usual notion of equivalence. 
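The difference between intensional and extensional equality can be modelled directly in an ordinary programming language. The following Python sketch is an illustration only, not tied to any particular proof assistant: it represents the setoid of rational numbers as pairs of integers with cross-multiplication as the equivalence relation ~.

```python
from math import gcd

def equiv(x, y):
    """The equivalence relation ~ of the setoid: (p, q) ~ (r, s)
    exactly when p*s == r*q (cross-multiplication)."""
    (p, q), (r, s) = x, y
    return p * s == r * q

half_a = (1, 2)
half_b = (2, 4)

assert half_a != half_b       # not intensionally equal: distinct pairs
assert equiv(half_a, half_b)  # but extensionally equal in the setoid

# A function on the setoid must respect ~ ; for example, reduction to
# lowest terms sends equivalent pairs to the same normal form.
def normalize(x):
    p, q = x
    g = gcd(p, q) * (-1 if q < 0 else 1)
    return (p // g, q // g)

assert normalize(half_a) == normalize(half_b) == (1, 2)
```

Forming the quotient set would identify `half_a` and `half_b` outright; keeping the setoid preserves the distinction between the representatives and their equivalence.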
Predicates and functions of real numbers need to be defined for regular Cauchy sequences and proven to be compatible with the equivalence relation. Typically (although it depends on the type theory used), the axiom of choice will hold for functions between types (intensional functions), but not for functions between setoids (extensional functions). The term "set" is variously used either as a synonym of "type" or as a synonym of "setoid". == Constructive mathematics == In constructive mathematics, one often takes a setoid with an apartness relation instead of an equivalence relation, called a constructive setoid. One sometimes also considers a partial setoid using a partial equivalence relation or partial apartness (see e.g. Barthe et al., section 1). == See also == Groupoid == Notes == == References == Hofmann, Martin (1995), "A simple model for quotient types", Typed lambda calculi and applications (Edinburgh, 1995), Lecture Notes in Comput. Sci., vol. 902, Berlin: Springer, pp. 216–234, CiteSeerX 10.1.1.55.4629, doi:10.1007/BFb0014055, ISBN 978-3-540-59048-4, MR 1477985. Barthe, Gilles; Capretta, Venanzio; Pons, Olivier (2003), "Setoids in type theory" (PDF), Journal of Functional Programming, 13 (2): 261–293, doi:10.1017/S0956796802004501, MR 1985376, S2CID 10069160. == External links == Implementation of setoids in Coq Setoid at the nLab Bishop set at the nLab