Wikipedia:Felix Iversen#0
Felix Christian Herbert Iversen (22 October 1887 – 31 July 1973) was a Finnish mathematician and a pacifist. He was a student of Ernst Lindelöf, and later an associate professor of mathematics at the University of Helsinki. Although he stopped performing serious research in mathematics around 1922, he continued working as a professor until his retirement in 1954 and published a textbook on mathematics in 1950. The Soviet Union awarded Felix Iversen the Stalin Peace Prize in 1954. == References ==
Wikipedia:Felix Tarasenko#0
Felix Petrovich Tarasenko (6 March 1932 – 1 January 2021) was a Russian mathematician. He attended Tomsk State University and was one of the founders of the theory of systems analysis. == Distinctions == Jubilee Medal "In Commemoration of the 100th Anniversary of the Birth of Vladimir Ilyich Lenin" (1969) == References ==
Wikipedia:Feller–Tornier constant#0
In mathematics, the Feller–Tornier constant C_FT is the density of the set of all positive integers that have an even number of distinct prime factors raised to a power larger than one (ignoring any prime factors which appear only to the first power). It is named after William Feller (1906–1970) and Erhard Tornier (1894–1982):

$$C_{\text{FT}} = \frac{1}{2} + \frac{1}{2}\prod_{n=1}^{\infty}\left(1 - \frac{2}{p_n^2}\right) = \frac{1}{2}\left(1 + \prod_{n=1}^{\infty}\left(1 - \frac{2}{p_n^2}\right)\right) = \frac{1}{2}\left(1 + \frac{1}{\zeta(2)}\prod_{n=1}^{\infty}\left(1 - \frac{1}{p_n^2 - 1}\right)\right) = \frac{1}{2} + \frac{3}{\pi^2}\prod_{n=1}^{\infty}\left(1 - \frac{1}{p_n^2 - 1}\right) = 0.66131704946\ldots$$

where p_n denotes the nth prime number. (sequence A065493 in the OEIS) == Omega function == The Big Omega function Ω(x) is the number of prime factors of x counted with multiplicity (see also: prime omega function). The Iverson bracket [P] equals 1 if P is true and 0 if P is false. With these notations,

$$C_{\text{FT}} = \lim_{n\to\infty}\frac{\sum_{k=1}^{n}\left[\Omega(k)\equiv 0 \pmod 2\right]}{n}.$$

== Prime zeta function == The prime zeta function P is given by

$$P(s) = \sum_{p\text{ prime}}\frac{1}{p^s}.$$

The Feller–Tornier constant satisfies

$$C_{\text{FT}} = \frac{1}{2}\left(1 + \exp\left(-\sum_{n=1}^{\infty}\frac{2^n P(2n)}{n}\right)\right).$$

== See also == Riemann zeta function L-function Euler product Twin prime == References ==
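The product over primes converges quickly enough to check the constant numerically. A minimal sketch (not from the article; the truncation limit 10**6 is an arbitrary choice):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def feller_tornier(limit=10**6):
    """C_FT = (1/2) * (1 + prod over primes p of (1 - 2/p^2)), truncated at `limit`."""
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 - 2.0 / (p * p)
    return 0.5 * (1.0 + prod)

c = feller_tornier()
print(c)  # approaches 0.66131704946...
```

The tail of the product beyond 10**6 changes the value by far less than 1e-5, so the truncated product already matches the quoted decimal expansion to several digits.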
Wikipedia:Fenchel's duality theorem#0
In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel. Let f be a proper convex function on ℝⁿ and let g be a proper concave function on ℝⁿ. Then, if regularity conditions are satisfied,

$$\inf_{x}\bigl(f(x)-g(x)\bigr) = \sup_{p}\bigl(g_{*}(p)-f^{*}(p)\bigr),$$

where f* is the convex conjugate of f (also referred to as the Fenchel–Legendre transform) and g_* is the concave conjugate of g. That is,

$$f^{*}(x^{*}) := \sup\left\{\langle x^{*},x\rangle - f(x) \mid x\in\mathbb{R}^{n}\right\},$$
$$g_{*}(x^{*}) := \inf\left\{\langle x^{*},x\rangle - g(x) \mid x\in\mathbb{R}^{n}\right\}.$$

== Mathematical theorem == Let X and Y be Banach spaces, let f : X → ℝ ∪ {+∞} and g : Y → ℝ ∪ {+∞} be convex functions, and let A : X → Y be a bounded linear map. Then the Fenchel problems

$$p^{*} = \inf_{x\in X}\{f(x)+g(Ax)\},\qquad d^{*} = \sup_{y^{*}\in Y^{*}}\{-f^{*}(A^{*}y^{*})-g^{*}(-y^{*})\}$$

satisfy weak duality, i.e. p* ≥ d*. Here f* and g* are the convex conjugates of f and g respectively, and A* is the adjoint operator. The perturbation function for this dual problem is given by F(x, y) = f(x) + g(Ax − y).
Suppose that f, g, and A satisfy either (a) f and g are lower semi-continuous and 0 ∈ core(dom g − A dom f), where core denotes the algebraic interior and dom h, for a function h, denotes the set {z : h(z) < +∞}; or (b) A dom f ∩ cont g ≠ ∅, where cont g denotes the set of points where g is continuous. Then strong duality holds, i.e. p* = d*. If d* ∈ ℝ then the supremum is attained. == One-dimensional illustration == In the following figure, the minimization problem on the left-hand side of the equation is illustrated. One seeks to vary x so that the vertical distance between the convex and concave curves at x is as small as possible. The position of the vertical line in the figure is the (approximate) optimum. The next figure illustrates the maximization problem on the right-hand side of the above equation. Tangents are drawn to each of the two curves such that both tangents have the same slope p. The problem is to adjust p so that the two tangents are as far apart as possible (more precisely, such that the points where they intersect the y-axis are as far from each other as possible). Imagine the two tangents as metal bars with vertical springs between them that push them apart and against the two parabolas, which are fixed in place. Fenchel's theorem states that the two problems have the same solution: the points having the minimum vertical separation are also the tangency points for the maximally separated parallel tangents.
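The one-dimensional illustration can be reproduced numerically. A sketch with a hypothetical pair of parabolas f(x) = x² (proper convex) and g(x) = −(x − 1)² (proper concave), for which both sides of Fenchel's identity come out to 1/2; the grid bounds and resolutions are arbitrary choices:

```python
import numpy as np

f = lambda x: x ** 2            # proper convex
g = lambda x: -(x - 1) ** 2     # proper concave

xs = np.linspace(-5, 5, 200001)
ps = np.linspace(-5, 5, 2001)

primal = np.min(f(xs) - g(xs))  # inf_x (f(x) - g(x))

def f_conj(p):
    """Convex conjugate f*(p) = sup_x (p*x - f(x)), approximated on the grid."""
    return np.max(p * xs - f(xs))

def g_conj(p):
    """Concave conjugate g_*(p) = inf_x (p*x - g(x)), approximated on the grid."""
    return np.min(p * xs - g(xs))

dual = max(g_conj(p) - f_conj(p) for p in ps)  # sup_p (g_*(p) - f*(p))

print(primal, dual)  # both approach 0.5
```

Here the primal optimum is x = 1/2 and the dual optimum is the common slope p = 1, matching the tangent picture described above.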
== See also == Legendre transformation Convex conjugate Moreau's theorem Wolfe duality Werner Fenchel == References == Bauschke, Heinz H.; Combettes, Patrick L. (2017). "Fenchel–Rockafellar Duality". Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer. pp. 247–262. doi:10.1007/978-3-319-48311-5_15. ISBN 978-3-319-48310-8. Rockafellar, Ralph Tyrrell (1996). Convex Analysis. Princeton University Press. p. 327. ISBN 0-691-01586-4.
Wikipedia:Fenchel–Moreau theorem#0
In convex analysis, the Fenchel–Moreau theorem (named after Werner Fenchel and Jean Jacques Moreau), also called the Fenchel biconjugation theorem (or just the biconjugation theorem), gives necessary and sufficient conditions for a function to be equal to its biconjugate. This is in contrast to the general property that f** ≤ f for any function f. The theorem can be seen as a generalization of the bipolar theorem, and it is used in duality theory to prove strong duality (via the perturbation function). == Statement == Let (X, τ) be a Hausdorff locally convex space. For any extended real-valued function f : X → ℝ ∪ {±∞}, it follows that f = f** if and only if one of the following is true: (1) f is a proper, lower semi-continuous, convex function; (2) f ≡ +∞; or (3) f ≡ −∞. == References ==
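The biconjugation statement can be sanity-checked on a grid. A sketch (the test functions and grids are illustrative choices, not from the article): for the convex function f(x) = x² the biconjugate recovers f, while for a nonconvex double well f** is only its convex envelope, so f** ≤ f with a strict gap at the origin.

```python
import numpy as np

xs = np.linspace(-2.0, 2.0, 401)
ps = np.linspace(-50.0, 50.0, 2001)

def biconjugate(fv):
    """f**(x) = sup_p (p*x - f*(p)) with f*(p) = sup_x (p*x - f(x)), on grids."""
    conj = np.array([np.max(p * xs - fv) for p in ps])      # f*
    return np.array([np.max(x * ps - conj) for x in xs])    # f**

convex = xs ** 2
double_well = np.minimum((xs - 1.0) ** 2, (xs + 1.0) ** 2)  # nonconvex

bi_convex = biconjugate(convex)
bi_dw = biconjugate(double_well)

print(np.max(np.abs(bi_convex - convex)))     # tiny: f** = f for convex lsc proper f
print(double_well[200], bi_dw[200])           # about 1.0 vs about 0.0: f** < f at x = 0
```

The bounded slope grid `ps` is wide enough that the suprema are attained inside it for these functions; on an unbounded domain the conjugates would need more care.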
Wikipedia:Fengyan Li#0
Fengyan Li is an applied mathematician. She specializes in numerical analysis and scientific computing, and especially in Galerkin methods for magnetohydrodynamics and related problems in computational fluid dynamics, including Maxwell's equations and Eikonal equations. Educated in China and the US, she works in the US as a professor of applied mathematics at the Rensselaer Polytechnic Institute. == Education and career == Li was a student at Peking University, where she received a bachelor's degree in 1997 and a master's degree in 2000. Next, she came to Brown University for doctoral study in mathematics, and completed her Ph.D. in 2004. Her dissertation, On Locally Divergence-Free Discontinuous Galerkin Methods, was supervised by Chi-Wang Shu. After postdoctoral research at the University of South Carolina, Li joined the Rensselaer Polytechnic Institute in 2006. She received a Sloan Research Fellowship in 2008, and a National Science Foundation CAREER Award in 2009. == Recognition == The Association for Women in Mathematics named Li to their 2025 Class of AWM Fellows, "for her continuous and enduring contribution to the promotion of women in computational mathematics through her service to AWM, mentorship of young women scientists, and development of training opportunities as well as platforms for women to connect such as WINASc". == References == == External links == Fengyan Li publications indexed by Google Scholar
Wikipedia:Ferdinand Augustin Hallerstein#0
Ferdinand Augustin Haller von Hallerstein (Slovene: Ferdinand Avguštin Haller von Hallerstein; 27 August 1703 – 29 October 1774), also known as August Allerstein or by his Chinese name Liu Songling (simplified Chinese: 刘松龄; traditional Chinese: 劉松齡; pinyin: Liú Sōnglíng), was a Jesuit missionary and astronomer from Carniola (then Habsburg monarchy, now Slovenia). He was active in 18th-century China and spent 35 years at the imperial court of the Qianlong Emperor as the head of the Imperial Astronomical Bureau and Board of Mathematics. He created an armillary sphere with rotating rings at the Beijing Observatory and was the first demographer in China to calculate a precise figure for the Chinese population of the time (198,214,553). He also participated in Chinese cartography, serving concurrently as a missionary, "cultural ambassador" and mandarin between 1739 and 1774. == Life and work == Hallerstein was born on 27 August 1703 in Ljubljana (older sources say Mengeš, incorrectly citing the date as 18 August 1703), Carniola (then part of the Habsburg monarchy, now in Slovenia), as a member of the Hungarian branch of the famous Haller von Hallerstein family from Nuremberg, Germany. He was baptized Ferdinandus Augustinus Haller L.B. ab Hallerstein in Ljubljana on 28 August 1703. He spent his youth in Mengeš, where his family owned Ravbar Castle, and studied at the Jesuit college in Ljubljana. He was a member of academies of sciences in several cities, from Germany and Vienna, where he mainly published his scientific work, to Rome and Lisbon, the home of his correspondent and personal friend, the Queen of Portugal. It was from Portugal that he travelled to India as a missionary, where he worked in Goa and Macau, and then continued his travel to Beijing, China.
The former Beijing Astronomical Observatory, now a museum, still hosts the armillary sphere with rotating rings, which was made under Hallerstein's leadership and is considered its most prominent astronomical instrument. His census list and its Chinese translation reached Europe in 1779. The Chinese emperors had objected to census-taking, or at least to census-publication, lest the Chinese might recognize their strength and grow restless. The census confirms the calculations of one of his predecessors, Father Amiot, and affords proof of the progressive increase of the Chinese population: in the 25th year he found 196,837,977 souls, and in the following year, 198,214,624. Hallerstein's census is to be found in "Déscription Générale de la Chine", p. 283. He was buried in the Jesuits' Zhalan Cemetery in Beijing. == Legacy == In Budapest, translations of his letters were published in the 18th century. A part of the Third Conference of the European Society for the History of Science was dedicated to Hallerstein and his transfer of mid-European science to Beijing and back. In recent years he has attracted the attention of Chinese historians as the creator of the most intriguing astronomical instrument at the old Beijing observatory, the spherical astrolabe, a "celestial globe". The asteroid 15071 Hallerstein is named after him. == References ==
Wikipedia:Ferdinand Gonseth#0
Ferdinand Gonseth (1890–1975) was a Swiss mathematician and philosopher. He was born on 22 September 1890 at Sonvilier, the son of Ferdinand Gonseth, a clockmaker, and his wife Marie Bourquin. He studied at La Chaux-de-Fonds, and read physics and mathematics at ETH Zurich, from 1910 to 1914. In 1929 Gonseth succeeded Jérôme Franel as Professor of Higher Mathematics at ETH. In 1947 he founded Dialectica, with Paul Bernays and Gaston Bachelard. In the same year he took the newly created chair of philosophy of science at ETH. Gonseth died on 17 December 1975 at Lausanne. He was noted for his "open philosophy", according to which science and mathematics lacked absolute foundations. See Idoneism. == Notes == == Further reading == Lauener, Henri (1977). "Ferdinand Gonseth 1890-1975". dialectica. 31 (1–2): 113–118. doi:10.1111/j.1746-8361.1977.tb01357.x. ISSN 0012-2017.
Wikipedia:Ferdinando Piretti#0
Ferdinando Piretti (17th century – 18th century) was an Italian mathematician. He lived at the San Vitale monastery in Ravenna and later at the San Benedetto monastery in Ferrara. == Works == Piretti, Ferdinando (1725). Lumi aritmetici. Ferrara: Bernardino Pomatelli. == References ==
Wikipedia:Fernando Q. Gouvêa#0
Fernando Quadros Gouvêa is a Brazilian number theorist and historian of mathematics who won the Lester R. Ford Award of the Mathematical Association of America (MAA) in 1995 for his exposition of Wiles's proof of Fermat's Last Theorem. He also won the Beckenbach Book Prize of the MAA in 2007 for his book with William P. Berlinghoff, Math through the Ages: A Gentle History for Teachers and Others (Oxton House, 2002; 2nd ed., 2014). He is the Carter Professor of Mathematics at Colby College in Waterville, Maine. Gouvêa grew up in São Paulo, the son of a lawyer and banker, and was educated there in an English-language primary school and then at the Colégio Bandeirantes de São Paulo. He earned a bachelor's degree from the University of São Paulo, and then a master's degree in 1981 under the supervision of César Polcino Milies. He moved to Harvard University in 1983 for continuing graduate study in number theory, and completed his doctorate there in 1987; his dissertation, titled Arithmetic of p-adic Modular Forms, was supervised by Barry Mazur. He became a faculty member at the University of São Paulo, took a visiting position at Queen's University in Kingston, Ontario in 1990, and was brought to Colby College by Keith Devlin, who had recently been hired as department chair there. He is the editor of the Carus Mathematical Monographs book series, and of MAA Reviews, an online book review service published by the MAA. == References == == External links == Home page
Wikipedia:Fernando Zalamea#0
Fernando Zalamea Traba (born 28 February 1959 in Bogotá) is a Colombian mathematician, essayist, critic, philosopher and popularizer of mathematics, known for his contributions to the philosophy of mathematics as the creator of the synthetic philosophy of mathematics. He is the author of around twenty books and is one of the world's leading experts on the mathematical and philosophical work of Alexander Grothendieck, as well as on the logical work of Charles S. Peirce. He is currently a full professor in the Department of Mathematics of the National University of Colombia, where he has established a mathematical school, primarily through his ongoing seminar on the epistemology, history and philosophy of mathematics, which he has conducted for eleven years at the university. He is also known for his creative, critical, and constructive teaching of mathematics. Zalamea has supervised approximately 50 thesis projects at the undergraduate, master's and doctoral levels in fields including mathematics, philosophy, logic, category theory, semiology, medicine, and culture. Since 2018, he has been an honorary member of the Colombian Academy of Exact, Physical and Natural Sciences. In 2016, he was recognized as one of the 100 most outstanding contemporary interdisciplinary global minds by "100 Global Minds, the most daring cross-disciplinary thinkers in the world", the only Latin American included in this recognition. == References ==
Wikipedia:Fey Silva Vidal#0
Fey Yamina Silva Vidal (born 1966, Huánuco) is a Peruvian meteorologist, and the first woman in the country to earn a PhD in Physical-Mathematical Sciences. She has led pioneering research on climate variability and the El Niño phenomenon, contributing to improved prediction of climate and its variability, with an emphasis on the Peruvian Andes. Her research has advanced scientific understanding of atmospheric processes and provides essential data for climate change adaptation. She has held several public positions, including serving as Vice Minister of Strategic Development of Natural Resources at the Ministry of Environment (Spanish: Ministerio del Ambiente, MINAM) in 2022, and as Director of the Geophysical Institute of Peru (Spanish: Instituto Geofísico del Perú, IGP). She is currently a lead researcher at the IGP, heading a research initiative focused on understanding the physical, dynamic, and microphysical processes that drive climate variability in the Andes. Her scientific achievements have been recognized by the National Council of Science, Technology, and Innovation (Spanish: Consejo Nacional de Ciencia, Tecnología e Innovación, CONCYTEC). She has been a member of the Pro-Women in Science, Technology, and Innovation (STI) Committee (Spanish: Comité Pro Mujer en CTI) since 2022 and served as its president for the 2023–2024 period. == Biography == === Early life === When she was 12, her family decided to move to Lima, since the Monzón Valley had become a dangerous place due to terrorism and drug trafficking. From an early age, she showed great interest in meteorological phenomena and the effects of the El Niño phenomenon on her country.
=== Studies === After finishing high school, she obtained a scholarship from the National Institute of Scholarships and Educational Credit (Spanish: Instituto Nacional de Becas y Crédito Educativo,INABEC) to study meteorology at the Russian State University of Hydrometeorology, where she also earned her master's degree and a PhD in Physico-Mathematical Sciences in 1992. During her studies, she focused on the physical and dynamic processes of the atmosphere using meteorological models and radars to understand the causes of climate variability and extreme weather events. == Career == After 13 years abroad, she returned to Peru in 1998 as the country's first female meteorologist with a doctorate. She joined Dr. Pablo Lagos at the Geophysical Institute of Peru (IGP) to develop a predictive model for rainfall during the El Niño phenomenon, serving as the principal scientific researcher. Since then, she has worked at the IGP, focusing on evaluating atmospheric conditions in Peru, analyzing the El Niño phenomenon, and studying climate variability in the Andes across various time scales. Her research has also encompassed climate variability, extreme meteorological events, climate change, and atmospheric numerical modeling, leading several significant studies in these areas. Since 2003, she has conducted studies to understand the effects of climate variability in the Peruvian Andes, with an emphasis on the Mantaro River basin and its response to climate change. As part of her research efforts, she established the Atmospheric Microphysics and Radiation Laboratory (LAMAR) at the Huancayo Geophysical Observatory, enhancing the region's capacity to study atmospheric processes and their environmental impacts. In August 2021, Silva became the Head of the Decentralized Office of the Central Macro Region at the National Institute for Research of Glacier and Mountain Ecosystems (Spanish: Instituto Nacional de Investigación en Glaciares y Ecosistemas de Montaña, INAIGEM). 
During her tenure, she conducted research on the effects of climate variability and the melting of Andean glaciers in Peru, focusing on the interaction between the atmosphere and the cryosphere. From March to December 2022, she served as Vice Minister of Strategic Development of Natural Resources at the Ministry of Environment. Following her tenure at MINAM, she returned to the IGP as a principal scientific researcher, a position she continues to hold today. Yamina Silva has over two decades of experience as a university professor specializing in climate change and meteorology. Since 2021, she has taught courses on climate change and risk management in the Master's Program in Water at the Pontifical Catholic University of Peru. == Research == Her research focuses on the atmosphere and hydrosphere, climate modeling, future climate scenarios, and vulnerability and adaptation to climate change. == Awards and recognitions == Yamina Silva became the first Peruvian meteorologist to obtain a PhD in Physical-Mathematical Sciences, awarded by the Institute of Hydrometeorology of Russia. In 2021, CONCYTEC highlighted her career as a scientist and her inclusion in the book Women Scientists of Peru: 24 Stories to Discover. She served as President of the Pro-Women in STI Committee during the 2023–2024 period. == References ==
Wikipedia:Fiber (mathematics)#0
In mathematics, the fiber (US English) or fibre (British English) of an element y under a function f is the preimage of the singleton set {y}, that is, f⁻¹(y) = {x : f(x) = y}. == Properties and applications == === In elementary set theory === If X and Y are the domain and image of f, respectively, then the fibers of f are the sets in {f⁻¹(y) : y ∈ Y} = {{x ∈ X : f(x) = y} : y ∈ Y}, which form a partition of the domain X. Note that y must be restricted to the image set Y of f, since otherwise f⁻¹(y) would be the empty set, which is not allowed in a partition. The fiber containing an element x ∈ X is the set f⁻¹(f(x)). For example, let f be the function from ℝ² to ℝ that sends the point (a, b) to a + b. The fiber of 5 under f is the set of all points on the straight line with equation a + b = 5. The fibers of f are that line and all the straight lines parallel to it, which together form a partition of the plane ℝ². More generally, if f is a linear map from a vector space X to another vector space Y, the fibers of f are affine subspaces of X, namely the translated copies of the null space of f.
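The partition-into-fibers fact is easy to see computationally. A minimal sketch for a finite domain (the function x mod 3 is an arbitrary illustrative choice):

```python
from collections import defaultdict

def fibers(f, domain):
    """Group a finite domain by the value of f: each group is one fiber f^{-1}(y)."""
    parts = defaultdict(set)
    for x in domain:
        parts[f(x)].add(x)
    return dict(parts)

# Fibers of f(x) = x mod 3 on {0, ..., 8}: three fibers {0,3,6}, {1,4,7}, {2,5,8},
# pairwise disjoint with union equal to the whole domain -- a partition.
parts = fibers(lambda x: x % 3, range(9))
print(sorted(parts))  # the keys range over the image of f
```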
If f is a real-valued function of several real variables, the fibers of f are its level sets. If f is also continuous and y ∈ ℝ is in the image of f, the level set f⁻¹(y) will typically be a curve in 2D, a surface in 3D and, more generally, a hypersurface in the domain of f. The fibers of f are the equivalence classes of the equivalence relation ≡_f defined on the domain X by x′ ≡_f x″ if and only if f(x′) = f(x″). === In topology === In point-set topology, one generally considers functions from topological spaces to topological spaces. If f is a continuous function and Y (or, more generally, the image set f(X)) is a T1 space, then every fiber is a closed subset of X. In particular, if f is a local homeomorphism from X to Y, each fiber of f is a discrete subspace of X. A function between topological spaces is called monotone if every fiber is a connected subspace of its domain. A function f : ℝ → ℝ is monotone in this topological sense if and only if it is non-increasing or non-decreasing, which is the usual meaning of "monotone function" in real analysis. A function between topological spaces is (sometimes) called a proper map if every fiber is a compact subspace of its domain. However, many authors use other non-equivalent competing definitions of "proper map", so it is advisable to always check how a particular author defines this term. A continuous closed surjective function whose fibers are all compact is called a perfect map. A fiber bundle is a function f between topological spaces X and Y whose fibers have certain special properties related to the topology of those spaces. === In algebraic geometry === In algebraic geometry, if f : X → Y is a morphism of schemes, the fiber of a point p in Y is the fiber product of schemes X ×_Y Spec k(p), where k(p) is the residue field at p. == See also == == References ==
Wikipedia:Fibonacci word fractal#0
The Fibonacci word fractal is a fractal curve defined on the plane from the Fibonacci word. == Definition == The curve is built iteratively by applying the odd–even drawing rule to the Fibonacci word 0100101001001...: for each digit at position k, if the digit is 0, draw a line segment and then turn 90° to the left when k is even or 90° to the right when k is odd; if the digit is 1, draw a line segment and continue straight. To a Fibonacci word of length F_n (the nth Fibonacci number) is associated a curve 𝓕_n made of F_n segments. The curve takes three different aspects depending on whether n is of the form 3k, 3k + 1, or 3k + 2. == Properties == Some of the Fibonacci word fractal's properties: The curve 𝓕_n contains F_n segments, F_{n−1} right angles and F_{n−2} flat angles. The curve never self-intersects and contains no double points; in the limit, it contains infinitely many points that come asymptotically close to one another. The curve is self-similar at all scales, with reduction ratio 1 + √2; this number, also called the silver ratio, appears in a great number of the properties listed below. The number of self-similarities at level n is one less than a Fibonacci number (more precisely, F_{3n+3} − 1). The curve encloses infinitely many square structures of decreasing sizes in the ratio 1 + √2 (see figure); the number of those square structures is a Fibonacci number.
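The drawing rule can be implemented directly. A short sketch (the turn conventions follow the rule above; the starting direction is an arbitrary choice), checking the stated segment count and absence of double points on a small instance:

```python
def fib_word(n):
    """S1 = "0", S2 = "01", S_n = S_{n-1} + S_{n-2}; returns S_n."""
    a, b = "0", "01"
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, b + a
    return b

def draw(word):
    """Odd-even drawing rule on the integer lattice; returns the visited points."""
    x, y, dx, dy = 0, 0, 1, 0
    pts = [(0, 0)]
    for k, digit in enumerate(word, start=1):
        x, y = x + dx, y + dy          # draw one unit segment
        pts.append((x, y))
        if digit == "0":
            if k % 2 == 0:             # even position: turn 90 degrees left
                dx, dy = -dy, dx
            else:                      # odd position: turn 90 degrees right
                dx, dy = dy, -dx
    return pts

pts = draw(fib_word(6))                # 13-digit word 0100101001001 (F_6 = 13 here)
print(len(pts), len(set(pts)))         # 14 14: 13 segments, all vertices distinct
```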
The curve 𝓕_n can also be constructed in different ways (see gallery below): as an iterated function system of four homotheties of ratio 1/(1 + √2) and one homothety of ratio 1/(1 + √2)²; by joining together the curves 𝓕_{n−1} and 𝓕_{n−2}; by a Lindenmayer system; by an iterated construction of eight square patterns around each square pattern; or by an iterated construction of octagons. The Hausdorff dimension of the Fibonacci word fractal is 3 log φ / log(1 + √2) ≈ 1.6379, where φ = (1 + √5)/2 is the golden ratio. Generalizing to an angle α between 0 and π/2, its Hausdorff dimension is 3 log φ / log(1 + a + √((1 + a)² + 1)), with a = cos α. The Hausdorff dimension of its frontier is log 3 / log(1 + √2) ≈ 1.2465. Exchanging the roles of "0" and "1" in the Fibonacci word, or in the drawing rule, yields a similar curve oriented at 45°. From the Fibonacci word, one can define the "dense Fibonacci word", on an alphabet of 3 letters: 102210221102110211022102211021102110221022102211021... (sequence A143667 in the OEIS). Applying a simpler drawing rule to this word defines an infinite set of variants of the curve, among which a "diagonal variant", a "swastika variant" and a "compact variant". It is conjectured that the Fibonacci word fractal appears for every Sturmian word whose slope, written in continued fraction expansion, ends with an infinite sequence of 1s.
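The two dimension values quoted above can be verified with a few lines:

```python
import math

phi = (1 + math.sqrt(5)) / 2                                # golden ratio
dim_curve = 3 * math.log(phi) / math.log(1 + math.sqrt(2))  # Hausdorff dimension of the curve
dim_frontier = math.log(3) / math.log(1 + math.sqrt(2))     # Hausdorff dimension of its frontier
print(round(dim_curve, 4), round(dim_frontier, 4))          # 1.6379 1.2465
```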
== Gallery == == The Fibonacci tile == The juxtaposition of four 𝓕_{3k} curves allows the construction of a closed curve enclosing a surface of nonzero area. This curve is called a "Fibonacci tile". The Fibonacci tile almost tiles the plane: the juxtaposition of four tiles (see illustration) leaves at the center a free square whose area tends to zero as k tends to infinity. In the limit, the infinite Fibonacci tile tiles the plane. If the tile is enclosed in a square of side 1, then its area tends to 2 − √2 ≈ 0.5858. === Fibonacci snowflake === The Fibonacci snowflake is a Fibonacci tile defined by q_n = q_{n−1} q_{n−2} if n ≡ 2 (mod 3), and q_n = q_{n−1} q̄_{n−2} otherwise, with q_0 = ε and q_1 = R, where L = "turn left" and R = "turn right", and the bar exchanges L and R (so R̄ = L). Several remarkable properties: it is the Fibonacci tile associated with the "diagonal variant" defined previously; it tiles the plane at any order; it tiles the plane by translation in two different ways; its perimeter at order n equals 4F(3n + 1), where F(n) is the nth Fibonacci number; and its area at order n follows the odd-indexed terms of the Pell sequence (defined by P(n) = 2P(n − 1) + P(n − 2)). == See also == Golden ratio Fibonacci number Fibonacci word List of fractals by Hausdorff dimension == References == == External links == "Generate a Fibonacci word fractal", OnlineMathTools.com.
Wikipedia:Field (mathematics)#0
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics. The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and p-adic fields are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements. The theory of fields proves that angle trisection and squaring the circle cannot be done with a compass and straightedge. Galois theory, devoted to understanding the symmetries of field extensions, provides an elegant proof of the Abel–Ruffini theorem that general quintic equations cannot be solved in radicals. Fields serve as foundational notions in several mathematical domains. This includes different branches of mathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Number fields, the siblings of the field of rational numbers, are studied in depth in number theory. Function fields can help describe properties of geometric objects. == Definition == Informally, a field is a set, along with two operations defined on that set: an addition operation a + b and a multiplication operation a ⋅ b, both of which behave similarly as they do for rational numbers and real numbers. 
This includes the existence of an additive inverse −a for all elements a and of a multiplicative inverse b−1 for every nonzero element b. This allows the definition of the so-called inverse operations, subtraction a − b and division a / b, as a − b = a + (−b) and a / b = a ⋅ b−1. Often the product a ⋅ b is represented by juxtaposition, as ab. === Classic definition === Formally, a field is a set F together with two binary operations on F called addition and multiplication. A binary operation on F is a mapping F × F → F, that is, a correspondence that associates with each ordered pair of elements of F a uniquely determined element of F. The result of the addition of a and b is called the sum of a and b, and is denoted a + b. Similarly, the result of the multiplication of a and b is called the product of a and b, and is denoted a ⋅ b. These operations are required to satisfy the following properties, referred to as field axioms. These axioms are required to hold for all elements a, b, c of the field F: Associativity of addition and multiplication: a + (b + c) = (a + b) + c, and a ⋅ (b ⋅ c) = (a ⋅ b) ⋅ c. Commutativity of addition and multiplication: a + b = b + a, and a ⋅ b = b ⋅ a. Additive and multiplicative identity: there exist two distinct elements 0 and 1 in F such that a + 0 = a and a ⋅ 1 = a. Additive inverses: for every a in F, there exists an element in F, denoted −a, called the additive inverse of a, such that a + (−a) = 0. Multiplicative inverses: for every a ≠ 0 in F, there exists an element in F, denoted by a−1 or 1/a, called the multiplicative inverse of a, such that a ⋅ a−1 = 1. Distributivity of multiplication over addition: a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c). 
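On a finite set, the axioms above can be checked by brute force. A minimal sketch, using the standard encoding of the two-element field F2 with addition as XOR and multiplication as AND (the encoding is an illustrative choice, not part of the definition):

```python
from itertools import product

def is_field(elems, add, mul, zero, one):
    """Brute-force check of the field axioms on a finite set."""
    for a, b, c in product(elems, repeat=3):
        if add(a, add(b, c)) != add(add(a, b), c): return False   # + associative
        if mul(a, mul(b, c)) != mul(mul(a, b), c): return False   # * associative
        if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)): return False  # distributivity
    for a, b in product(elems, repeat=2):
        if add(a, b) != add(b, a) or mul(a, b) != mul(b, a): return False  # commutativity
    for a in elems:
        if add(a, zero) != a or mul(a, one) != a: return False    # identities
        if not any(add(a, b) == zero for b in elems): return False  # additive inverse
        if a != zero and not any(mul(a, b) == one for b in elems): return False  # mult. inverse
    return zero != one

print(is_field([0, 1], lambda a, b: a ^ b, lambda a, b: a & b, 0, 1))  # True
```

The same checker reports failure for Z/4Z, where 2 has no multiplicative inverse, anticipating the discussion of finite fields below.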
An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is a group under addition with 0 as the additive identity; the nonzero elements form a group under multiplication with 1 as the multiplicative identity; and multiplication distributes over addition. Even more succinctly: a field is a commutative ring where 0 ≠ 1 and all nonzero elements are invertible under multiplication. === Alternative definition === Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties. Division by zero is, by definition, excluded. In order to avoid existential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and two nullary operations (the constants 0 and 1). These operations are then subject to the conditions above. Avoiding existential quantifiers is important in constructive mathematics and computing. One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants 1 and −1, since 0 = 1 + (−1) and −a = (−1)a. == Examples == === Rational numbers === Rational numbers were widely used long before the elaboration of the concept of a field. They are numbers that can be written as fractions a/b, where a and b are integers, and b ≠ 0. The additive inverse of such a fraction is −a/b, and the multiplicative inverse (provided that a ≠ 0) is b/a, which can be seen as follows: b a ⋅ a b = b a a b = 1. {\displaystyle {\frac {b}{a}}\cdot {\frac {a}{b}}={\frac {ba}{ab}}=1.} The abstractly required field axioms reduce to standard properties of rational numbers.
For example, the law of distributivity can be proven as follows: a b ⋅ ( c d + e f ) = a b ⋅ ( c d ⋅ f f + e f ⋅ d d ) = a b ⋅ ( c f d f + e d f d ) = a b ⋅ c f + e d d f = a ( c f + e d ) b d f = a c f b d f + a e d b d f = a c b d + a e b f = a b ⋅ c d + a b ⋅ e f . {\displaystyle {\begin{aligned}&{\frac {a}{b}}\cdot \left({\frac {c}{d}}+{\frac {e}{f}}\right)\\[6pt]={}&{\frac {a}{b}}\cdot \left({\frac {c}{d}}\cdot {\frac {f}{f}}+{\frac {e}{f}}\cdot {\frac {d}{d}}\right)\\[6pt]={}&{\frac {a}{b}}\cdot \left({\frac {cf}{df}}+{\frac {ed}{fd}}\right)={\frac {a}{b}}\cdot {\frac {cf+ed}{df}}\\[6pt]={}&{\frac {a(cf+ed)}{bdf}}={\frac {acf}{bdf}}+{\frac {aed}{bdf}}={\frac {ac}{bd}}+{\frac {ae}{bf}}\\[6pt]={}&{\frac {a}{b}}\cdot {\frac {c}{d}}+{\frac {a}{b}}\cdot {\frac {e}{f}}.\end{aligned}}} === Real and complex numbers === The real numbers R, with the usual operations of addition and multiplication, also form a field. The complex numbers C consist of expressions a + bi, with a, b real, where i is the imaginary unit, i.e., a (non-real) number satisfying i2 = −1. Addition and multiplication of real numbers are defined in such a way that expressions of this type satisfy all field axioms and thus hold for C. For example, the distributive law enforces (a + bi)(c + di) = ac + bci + adi + bdi2 = (ac − bd) + (bc + ad)i. It is immediate that this is again an expression of the above type, and so the complex numbers form a field. Complex numbers can be geometrically represented as points in the plane, with Cartesian coordinates given by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). 
The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines. === Constructible numbers === In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers. Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field Q of rational numbers. The illustration shows the construction of square roots of constructible numbers, not necessarily contained within Q. Using the labeling in the illustration, construct the segments AB, BD, and a semicircle over AD (center at the midpoint C), which intersects the perpendicular line through B in a point F, at a distance of exactly h = p {\displaystyle h={\sqrt {p}}} from B when BD has length one. Not all real numbers are constructible. It can be shown that 2 3 {\displaystyle {\sqrt[{3}]{2}}} is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks. === A field with four elements === In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements called O, I, A, and B. The notation is chosen such that O plays the role of the additive identity element (denoted 0 in the axioms above), and I is the multiplicative identity (denoted 1 in the axioms above). 
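The construction of √p can be verified with coordinates, following the text's labeling: place A at 0, B at distance p, and D one unit further; the height of the semicircle over AD above B is then exactly √p. A minimal numerical sketch:

```python
import math

def sqrt_by_construction(p):
    """Height above B of the semicircle over AD, with A = 0, B = p, D = p + 1.
    The center C sits at (p + 1)/2 with radius r = (p + 1)/2, and the height h
    satisfies h^2 = r^2 - (p - r)^2, which simplifies to p."""
    r = (p + 1) / 2
    return math.sqrt(r * r - (p - r) ** 2)

print(sqrt_by_construction(2))  # 1.414... = sqrt(2)
```

Algebraically, r² − (p − r)² = (2r − p)·p = p, since 2r − p = 1.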
The field axioms can be verified by using some more field theory, or by direct computation. For example, A ⋅ (B + A) = A ⋅ I = A, which equals A ⋅ B + A ⋅ A = I + B = A, as required by the distributivity. This field is called a finite field or Galois field with four elements, and is denoted F4 or GF(4). The subset consisting of O and I (highlighted in red in the tables at the right) is also a field, known as the binary field F2 or GF(2). == Elementary notions == In this section, F denotes an arbitrary field and a and b are arbitrary elements of F. === Consequences of the definition === One has a ⋅ 0 = 0 and −a = (−1) ⋅ a. In particular, one may deduce the additive inverse of every element as soon as one knows −1. If ab = 0 then a or b must be 0, since, if a ≠ 0, then b = (a−1a)b = a−1(ab) = a−1 ⋅ 0 = 0. This means that every field is an integral domain. In addition, the following properties are true for any elements a and b: −0 = 0; 1−1 = 1; −(−a) = a; (−a) ⋅ b = a ⋅ (−b) = −(a ⋅ b); (a−1)−1 = a if a ≠ 0. === Additive and multiplicative groups of a field === The axioms of a field F imply that it is an abelian group under addition. This group is called the additive group of the field, and is sometimes denoted by (F, +) when denoting it simply as F could be confusing. Similarly, the nonzero elements of F form an abelian group under multiplication, called the multiplicative group, and denoted by ( F ∖ { 0 } , ⋅ ) {\displaystyle (F\smallsetminus \{0\},\cdot )} or just F ∖ { 0 } {\displaystyle F\smallsetminus \{0\}} , or F×. A field may thus be defined as a set F equipped with two operations denoted as an addition and a multiplication such that F is an abelian group under addition, F ∖ { 0 } {\displaystyle F\smallsetminus \{0\}} is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition. Some elementary statements about fields can therefore be obtained by applying general facts of groups.
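The two tables for F4 can be encoded directly and the distributivity instance above checked mechanically. A sketch assuming the standard addition and multiplication tables of GF(4) (characteristic 2, so every element is its own additive inverse):

```python
O, I, A, B = "O", "I", "A", "B"
ELEMS = [O, I, A, B]

# Addition table: x + x = O for all x; O is the identity; I + A = B, etc.
ADD = {(x, x): O for x in ELEMS}
ADD.update({(O, x): x for x in ELEMS})
ADD.update({(x, O): x for x in ELEMS})
for x, y, z in [(I, A, B), (I, B, A), (A, B, I)]:
    ADD[(x, y)] = ADD[(y, x)] = z

# Multiplication table: O absorbs, I is the identity, A*A = B, B*B = A, A*B = I
MUL = {(O, x): O for x in ELEMS}
MUL.update({(x, O): O for x in ELEMS})
MUL.update({(I, x): x for x in ELEMS})
MUL.update({(x, I): x for x in ELEMS})
MUL[(A, A)] = B; MUL[(B, B)] = A; MUL[(A, B)] = MUL[(B, A)] = I

# The distributivity instance from the text: A * (B + A) = A * B + A * A = A
assert MUL[(A, ADD[(B, A)])] == ADD[(MUL[(A, B)], MUL[(A, A)])] == A
```

The same tables show that the nonzero elements {I, A, B} form a group under multiplication, as the next section states in general.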
For example, the additive and multiplicative inverses −a and a−1 are uniquely determined by a. The requirement 1 ≠ 0 is imposed by convention to exclude the trivial ring, which consists of a single element; this guides any choice of the axioms that define fields. Every finite subgroup of the multiplicative group of a field is cyclic (see Root of unity § Cyclic groups). === Characteristic === In addition to the multiplication of two elements of F, it is possible to define the product n ⋅ a of an arbitrary element a of F by a positive integer n to be the n-fold sum a + a + ... + a (which is an element of F). If there is no positive integer n such that n ⋅ 1 = 0, then F is said to have characteristic 0. For example, the field of rational numbers Q has characteristic 0, since n ⋅ 1 = n is never zero for any positive integer n. Otherwise, if there is a positive integer n satisfying this equation, the smallest such positive integer can be shown to be a prime number. It is usually denoted by p, and the field is then said to have characteristic p. For example, the field F4 has characteristic 2 since (in the notation of the above addition table) I + I = O. If F has characteristic p, then p ⋅ a = 0 for all a in F. This implies that (a + b)p = ap + bp, since all other binomial coefficients appearing in the binomial formula are divisible by p. Here, ap := a ⋅ a ⋅ ⋯ ⋅ a (p factors) is the pth power, i.e., the p-fold product of the element a. Therefore, the Frobenius map F → F : x ↦ xp is compatible with the addition in F (and also with the multiplication), and is therefore a field homomorphism. The existence of this homomorphism makes fields in characteristic p quite different from fields of characteristic 0. === Subfields and prime fields === A subfield E of a field F is a subset of F that is a field with respect to the field operations of F. Equivalently E is a subset of F that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element.
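The identity (a + b)^p = a^p + b^p in characteristic p, and the divisibility of the intermediate binomial coefficients by p, can be spot-checked with modular arithmetic; a sketch for p = 7:

```python
from math import comb

p = 7
# Every binomial coefficient C(p, k) with 0 < k < p is divisible by p ...
assert all(comb(p, k) % p == 0 for k in range(1, p))
# ... hence the Frobenius identity (a + b)^p = a^p + b^p holds modulo p
assert all(pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
           for a in range(p) for b in range(p))
```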
This means that 1 ∊ E, that for all a, b ∊ E both a + b and a ⋅ b are in E, and that for all a ≠ 0 in E, both −a and 1/a are in E. Field homomorphisms are maps φ: E → F between two fields such that φ(e1 + e2) = φ(e1) + φ(e2), φ(e1e2) = φ(e1) φ(e2), and φ(1E) = 1F, where e1 and e2 are arbitrary elements of E. All field homomorphisms are injective. If φ is also surjective, it is called an isomorphism (or the fields E and F are called isomorphic). A field is called a prime field if it has no proper (i.e., strictly smaller) subfields. Any field F contains a prime field. If the characteristic of F is p (a prime number), the prime field is isomorphic to the finite field Fp introduced below. Otherwise the prime field is isomorphic to Q. == Finite fields == Finite fields (also called Galois fields) are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory example F4 is a field with four elements. Its subfield F2 is the smallest field, because by definition a field has at least two distinct elements, 0 and 1. The simplest finite fields, with prime order, are most directly accessible using modular arithmetic. For a fixed positive integer n, arithmetic "modulo n" means to work with the numbers Z/nZ = {0, 1, ..., n − 1}. The addition and multiplication on this set are done by performing the operation in question in the set Z of integers, dividing by n and taking the remainder as result. This construction yields a field precisely if n is a prime number. For example, taking the prime n = 2 results in the above-mentioned field F2. For n = 4 and more generally, for any composite number (i.e., any number n which can be expressed as a product n = r ⋅ s of two strictly smaller natural numbers), Z/nZ is not a field: the product of two non-zero elements is zero since r ⋅ s = 0 in Z/nZ, which, as was explained above, prevents Z/nZ from being a field. 
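That Z/nZ is a field precisely when n is prime can be observed by searching for multiplicative inverses; a small sketch:

```python
def inverses_exist(n):
    """True when every nonzero residue class mod n has a multiplicative inverse."""
    return all(any(a * b % n == 1 for b in range(n)) for a in range(1, n))

print(inverses_exist(7))   # True: Z/7Z is the field F7
print(inverses_exist(6))   # False: 2 * 3 = 0 mod 6, so 2 has no inverse
```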
The field Z/pZ with p elements (p being prime) constructed in this way is usually denoted by Fp. Every finite field F has q = pn elements, where p is prime and n ≥ 1. This statement holds since F may be viewed as a vector space over its prime field. The dimension of this vector space is necessarily finite, say n, which implies the asserted statement. A field with q = pn elements can be constructed as the splitting field of the polynomial f(x) = xq − x. Such a splitting field is an extension of Fp in which the polynomial f has q zeros. This means f has as many zeros as possible since the degree of f is q. For q = 22 = 4, it can be checked case by case using the above multiplication table that all four elements of F4 satisfy the equation x4 = x, so they are zeros of f. By contrast, in F2, f has only two zeros (namely 0 and 1), so f does not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic. It is thus customary to speak of the finite field with q elements, denoted by Fq or GF(q). == History == Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations, algebraic number theory, and algebraic geometry. A first step towards the notion of a field was made in 1770 by Joseph-Louis Lagrange, who observed that permuting the zeros x1, x2, x3 of a cubic polynomial in the expression (x1 + ωx2 + ω2x3)3 (with ω being a third root of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method of Scipione del Ferro and François Viète, which proceeds by reducing a cubic equation for an unknown x to a quadratic equation for x3. Together with a similar observation for equations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups. 
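Lagrange's observation can be checked numerically. The sketch below uses the cubic x³ − 2 as a sample (this choice of cubic is illustrative, not from the text) and confirms that the resolvent expression takes only two values over all six permutations of the roots:

```python
import cmath
from itertools import permutations

w = cmath.exp(2j * cmath.pi / 3)                    # a primitive third root of unity
roots = [2 ** (1 / 3) * w ** k for k in range(3)]   # the three roots of x^3 - 2

def resolvent(r1, r2, r3):
    """Lagrange's resolvent (x1 + w x2 + w^2 x3)^3."""
    return (r1 + w * r2 + w * w * r3) ** 3

# Round to absorb floating-point noise, then collect the distinct values
values = {complex(round(resolvent(*p).real, 6), round(resolvent(*p).imag, 6))
          for p in permutations(roots)}
print(len(values))  # 2
```

For this cubic the two values are 0 (even permutations) and 54 (odd permutations), so the resolvent satisfies a quadratic equation, which is the reduction Lagrange explained.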
Vandermonde, also in 1770, and to a fuller extent, Carl Friedrich Gauss, in his Disquisitiones Arithmeticae (1801), studied the equation x^p = 1 for a prime p and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular p-gon can be constructed if p = 2^(2^k) + 1. Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree 5) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824. Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group. In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by Moore (1893). By a field we will mean every infinite system of real or complex numbers so closed in itself and perfect that addition, subtraction, multiplication, and division of any two of these numbers again yields a number of the system. In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q(X).
Examples of transcendental numbers had been known since Joseph Liouville's work in 1844; Charles Hermite (1873) and Ferdinand von Lindemann (1882) later proved the transcendence of e and π, respectively. The first clear definition of an abstract field is due to Weber (1893). In particular, Heinrich Martin Weber's notion included the field Fp. Giuseppe Veronese (1891) studied the field of formal power series, which led Hensel (1904) to introduce the field of p-adic numbers. Steinitz (1910) synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sections Galois theory, Constructing fields and Elementary notions can be found in Steinitz's work. Artin & Schreier (1927) linked the notion of orderings in a field, and thus the area of analysis, to purely algebraic properties. Emil Artin redeveloped Galois theory from 1928 through 1942, eliminating the dependency on the primitive element theorem. == Constructing fields == === Constructing fields from rings === A commutative ring is a set that is equipped with an addition and multiplication operation and satisfies all the axioms of a field, except for the existence of multiplicative inverses a−1. For example, the integers Z form a commutative ring, but not a field: the reciprocal of an integer n is not itself an integer, unless n = ±1. In the hierarchy of algebraic structures, fields can be characterized as the commutative rings R in which every nonzero element is a unit (that is, every nonzero element has a multiplicative inverse). Similarly, fields are the commutative rings with precisely two distinct ideals, (0) and R. Fields are also precisely the commutative rings in which (0) is the only prime ideal.
Given a commutative ring R, there are two ways to construct a field related to R, i.e., two ways of modifying R such that all nonzero elements become invertible: forming the field of fractions, and forming residue fields. The field of fractions of Z is Q, the rationals, while the residue fields of Z are the finite fields Fp. ==== Field of fractions ==== Given an integral domain R, its field of fractions Q(R) is built with the fractions of two elements of R exactly as Q is constructed from the integers. More precisely, the elements of Q(R) are the fractions a/b where a and b are in R, and b ≠ 0. Two fractions a/b and c/d are equal if and only if ad = bc. The operations on fractions work exactly as for rational numbers. For example, a b + c d = a d + b c b d . {\displaystyle {\frac {a}{b}}+{\frac {c}{d}}={\frac {ad+bc}{bd}}.} It is straightforward to show that, if the ring is an integral domain, the set of fractions forms a field. The field F(x) of the rational fractions over a field (or an integral domain) F is the field of fractions of the polynomial ring F[x]. The field F((x)) of Laurent series ∑ i = k ∞ a i x i ( k ∈ Z , a i ∈ F ) {\displaystyle \sum _{i=k}^{\infty }a_{i}x^{i}\ (k\in \mathbb {Z} ,a_{i}\in F)} over a field F is the field of fractions of the ring F[[x]] of formal power series (in which k ≥ 0). Since any Laurent series is a fraction of a power series divided by a power of x (as opposed to an arbitrary power series), the representation of fractions is less important in this situation, though. ==== Residue fields ==== In addition to the field of fractions, which embeds R injectively into a field, a field can be obtained from a commutative ring R by means of a surjective map onto a field F. Any field obtained in this way is a quotient R / m, where m is a maximal ideal of R. If R has only one maximal ideal m, this field is called the residue field of R.
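The addition rule a/b + c/d = (ad + bc)/(bd) is exactly what Python's Fraction type implements (followed by reduction to lowest terms); a one-line sanity check:

```python
from fractions import Fraction

# a/b + c/d = (ad + bc)/(bd), here with a/b = 3/4 and c/d = 5/6
assert Fraction(3, 4) + Fraction(5, 6) == Fraction(3 * 6 + 4 * 5, 4 * 6) == Fraction(19, 12)
```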
The ideal generated by a single polynomial f in the polynomial ring R = E[X] (over a field E) is maximal if and only if f is irreducible over E, i.e., if f cannot be expressed as the product of two polynomials in E[X] of smaller degree. This yields a field F = E[X] / (f(X)). This field F contains an element x (namely the residue class of X) which satisfies the equation f(x) = 0. For example, C is obtained from R by adjoining the imaginary unit symbol i, which satisfies f(i) = 0, where f(X) = X2 + 1. Moreover, f is irreducible over R, which implies that the map that sends a polynomial f(X) ∊ R[X] to f(i) yields an isomorphism R [ X ] / ( X 2 + 1 ) ⟶ ≅ C . {\displaystyle \mathbf {R} [X]/\left(X^{2}+1\right)\ {\stackrel {\cong }{\longrightarrow }}\ \mathbf {C} .} === Constructing fields within a bigger field === Fields can be constructed inside a given bigger container field. Suppose we are given a field E and a field F containing E as a subfield. For any element x of F, there is a smallest subfield of F containing E and x, called the subfield of F generated by x and denoted E(x). The passage from E to E(x) is referred to as adjoining an element to E. More generally, for a subset S ⊂ F, there is a minimal subfield of F containing E and S, denoted by E(S). The compositum of two subfields E and E′ of some field F is the smallest subfield of F containing both E and E′. The compositum can be used to construct the biggest subfield of F satisfying a certain property, for example the biggest subfield of F that is, in the language introduced below, algebraic over E. === Field extensions === The notion of a subfield E ⊂ F can also be regarded from the opposite point of view, by referring to F being a field extension (or just extension) of E, denoted by F / E, and read "F over E". A basic datum of a field extension is its degree [F : E], i.e., the dimension of F as an E-vector space. For a tower of fields E ⊂ F ⊂ G, it satisfies the formula [G : E] = [G : F] [F : E].
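Multiplication in E[X]/(f(X)) amounts to multiplying polynomials and then reducing modulo f. A sketch for f(X) = X² + 1, where the reduction X² → −1 reproduces complex multiplication:

```python
def mul_mod_x2_plus_1(p, q):
    """Multiply a + bX and c + dX, then reduce modulo X^2 + 1 (i.e., X^2 -> -1).
    Pairs (a, b) encode a + bX."""
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

# Matches (1 + 2i)(3 + 4i) = -5 + 10i in C
assert mul_mod_x2_plus_1((1, 2), (3, 4)) == (-5, 10)
# The residue class of X squares to -1, just like the imaginary unit
assert mul_mod_x2_plus_1((0, 1), (0, 1)) == (-1, 0)
```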
Extensions whose degree is finite are referred to as finite extensions. The extensions C / R and F4 / F2 are of degree 2, whereas R / Q is an infinite extension. ==== Algebraic extensions ==== A pivotal notion in the study of field extensions F / E is that of an algebraic element. An element x ∈ F is algebraic over E if it is a root of a polynomial with coefficients in E, that is, if it satisfies a polynomial equation en xn + en−1xn−1 + ⋯ + e1x + e0 = 0, with en, ..., e0 in E, and en ≠ 0. For example, the imaginary unit i in C is algebraic over R, and even over Q, since it satisfies the equation i2 + 1 = 0. A field extension in which every element of F is algebraic over E is called an algebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula. The subfield E(x) generated by an element x, as above, is an algebraic extension of E if and only if x is an algebraic element. That is to say, if x is algebraic, all other elements of E(x) are necessarily algebraic as well. Moreover, the degree of the extension E(x) / E, i.e., the dimension of E(x) as an E-vector space, equals the minimal degree n such that there is a polynomial equation involving x, as above. If this degree is n, then the elements of E(x) have the form ∑ k = 0 n − 1 a k x k , a k ∈ E . {\displaystyle \sum _{k=0}^{n-1}a_{k}x^{k},\ \ a_{k}\in E.} For example, the field Q(i) of Gaussian rationals is the subfield of C consisting of all numbers of the form a + bi where both a and b are rational numbers: summands of the form i2 (and similarly for higher exponents) do not have to be considered here, since a + bi + ci2 can be simplified to a − c + bi. ==== Transcendence bases ==== The above-mentioned field of rational fractions E(X), where X is an indeterminate, is not an algebraic extension of E since there is no polynomial equation with coefficients in E whose zero is X. Elements, such as X, which are not algebraic are called transcendental.
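That Q(i) is a field can be illustrated with exact arithmetic: the inverse of a nonzero a + bi is (a − bi)/(a² + b²), which again has rational coordinates. A sketch using Python's Fraction, where the pair (a, b) encodes a + bi:

```python
from fractions import Fraction

def gauss_mul(z, w):
    """(a + bi)(c + di) = (ac - bd) + (bc + ad)i."""
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def gauss_inv(z):
    """1/(a + bi) = (a - bi)/(a^2 + b^2), defined whenever z != 0."""
    a, b = z
    n = a * a + b * b          # the norm a^2 + b^2, a nonzero rational
    return (a / n, -b / n)

z = (Fraction(2), Fraction(3))                    # 2 + 3i
print(gauss_mul(z, gauss_inv(z)))                 # (Fraction(1, 1), Fraction(0, 1))
```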
Informally speaking, the indeterminate X and its powers do not interact with elements of E. A similar construction can be carried out with a set of indeterminates, instead of just one. Once again, the field extension E(x) / E discussed above is a key example: if x is not algebraic (i.e., x is not a root of a polynomial with coefficients in E), then E(x) is isomorphic to E(X). This isomorphism is obtained by substituting x for X in rational fractions. A subset S of a field F is a transcendence basis if it is algebraically independent (its elements do not satisfy any nontrivial polynomial relation) over E and if F is an algebraic extension of E(S). Any field extension F / E has a transcendence basis. Thus, field extensions can be split into ones of the form E(S) / E (purely transcendental extensions) and algebraic extensions. === Closure operations === A field is algebraically closed if it does not have any strictly bigger algebraic extensions or, equivalently, if any polynomial equation fn xn + fn−1xn−1 + ⋯ + f1x + f0 = 0, with coefficients fn, ..., f0 ∈ F, n > 0, has a solution x ∊ F. By the fundamental theorem of algebra, C is algebraically closed, i.e., any polynomial equation with complex coefficients has a complex solution. The rational and the real numbers are not algebraically closed since the equation x2 + 1 = 0 does not have any rational or real solution. A field containing F is called an algebraic closure of F if it is algebraic over F (roughly speaking, not too big compared to F) and is algebraically closed (big enough to contain solutions of all polynomial equations). By the above, C is an algebraic closure of R. The situation that the algebraic closure is a finite extension of the field F is quite special: by the Artin–Schreier theorem, the degree of this extension is necessarily 2, and F is elementarily equivalent to R. Such fields are also known as real closed fields. Any field F has an algebraic closure, which is moreover unique up to (non-unique) isomorphism.
It is commonly referred to as the algebraic closure and denoted F̄. For example, the algebraic closure Q̄ of Q is called the field of algebraic numbers. The field F̄ is usually rather implicit since its construction requires the ultrafilter lemma, a set-theoretic axiom that is weaker than the axiom of choice. In this regard, the algebraic closure of Fq is exceptionally simple. It is the union of the finite fields containing Fq (the ones of order qn). For any algebraically closed field F of characteristic 0, the algebraic closure of the field F((t)) of Laurent series is the field of Puiseux series, obtained by adjoining roots of t. == Fields with additional structure == Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas. === Ordered fields === A field F is called an ordered field if any two elements can be compared, so that x + y ≥ 0 and xy ≥ 0 whenever x ≥ 0 and y ≥ 0. For example, the real numbers form an ordered field, with the usual ordering ≥. The Artin–Schreier theorem states that a field can be ordered if and only if it is a formally real field, which means that the equation x 1 2 + x 2 2 + ⋯ + x n 2 = 0 {\displaystyle x_{1}^{2}+x_{2}^{2}+\dots +x_{n}^{2}=0} only has the solution x1 = x2 = ⋯ = xn = 0. The set of all possible orders on a fixed field F is in bijection with the set of ring homomorphisms from the Witt ring W(F) of quadratic forms over F to Z. An Archimedean field is an ordered field such that for each element there exists a finite expression 1 + 1 + ⋯ + 1 whose value is greater than that element, that is, there are no infinite elements. Equivalently, the field contains no infinitesimals (positive elements smaller than all positive rational numbers); or, equivalently, the field is isomorphic to a subfield of R. An ordered field is Dedekind-complete if all upper bounds, lower bounds (see Dedekind cut) and limits, which should exist, do exist.
More formally, each bounded subset of F is required to have a least upper bound. Any complete field is necessarily Archimedean, since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence 1/2, 1/3, 1/4, ..., every element of which is greater than every infinitesimal, has no limit. Since every proper subfield of the reals also contains such gaps, R is the unique complete ordered field, up to isomorphism. Several foundational results in calculus follow directly from this characterization of the reals. The hyperreals R* form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers. These are larger, respectively smaller than any real number. The hyperreals form the foundational basis of non-standard analysis. === Topological fields === Another refinement of the notion of a field is a topological field, in which the set F is a topological space, such that all operations of the field (addition, multiplication, the maps a ↦ −a and a ↦ a−1) are continuous maps with respect to the topology of the space. The topology of all the fields discussed below is induced from a metric, i.e., a function d : F × F → R, that measures a distance between any two elements of F. The completion of F is another field in which, informally speaking, the "gaps" in the original field F are filled, if there are any. For example, any irrational number x, such as x = √2, is a "gap" in the rationals Q in the sense that it is a real number that can be approximated arbitrarily closely by rational numbers p/q, in the sense that distance of x and p/q given by the absolute value |x − p/q| is as small as desired. The following table lists some examples of this construction. The fourth column shows an example of a zero sequence, i.e., a sequence whose limit (for n → ∞) is zero. The field Qp is used in number theory and p-adic analysis. 
The algebraic closure Qp carries a unique norm extending the one on Qp, but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field of complex p-adic numbers and is denoted by Cp. ==== Local fields ==== The following topological fields are called local fields: finite extensions of Qp (local fields of characteristic zero) finite extensions of Fp((t)), the field of Laurent series over Fp (local fields of characteristic p). These two types of local fields share some fundamental similarities. In this relation, the elements p ∈ Qp and t ∈ Fp((t)) (referred to as uniformizer) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients in Fp. (However, since the addition in Qp is done using carrying, which is not the case in Fp((t)), these fields are not isomorphic.) The following facts show that this superficial similarity goes much deeper: Any first-order statement that is true for almost all Qp is also true for almost all Fp((t)). An application of this is the Ax–Kochen theorem describing zeros of homogeneous polynomials in Qp. Tamely ramified extensions of both fields are in bijection to one another. Adjoining arbitrary p-power roots of p (in Qp), respectively of t (in Fp((t))), yields (infinite) extensions of these fields known as perfectoid fields. Strikingly, the Galois groups of these two fields are isomorphic, which is the first glimpse of a remarkable parallel between these two fields: Gal ⁡ ( Q p ( p 1 / p ∞ ) ) ≅ Gal ⁡ ( F p ( ( t ) ) ( t 1 / p ∞ ) ) . 
{\displaystyle \operatorname {Gal} \left(\mathbf {Q} _{p}\left(p^{1/p^{\infty }}\right)\right)\cong \operatorname {Gal} \left(\mathbf {F} _{p}((t))\left(t^{1/p^{\infty }}\right)\right).} === Differential fields === Differential fields are fields equipped with a derivation, i.e., a way of taking derivatives of elements of the field. For example, the field R(X), together with the standard derivative of polynomials, forms a differential field. These fields are central to differential Galois theory, a variant of Galois theory dealing with linear differential equations. == Galois theory == Galois theory studies algebraic extensions of a field by studying the symmetry in the arithmetic operations of addition and multiplication. An important notion in this area is that of finite Galois extensions F / E, which are, by definition, those that are separable and normal. The primitive element theorem shows that finite separable extensions are necessarily simple, i.e., of the form F = E[X] / f(X), where f is an irreducible polynomial (as above). For such an extension, being normal and separable means that all zeros of f are contained in F and that f has only simple zeros. The latter condition is always satisfied if E has characteristic 0. For a finite Galois extension, the Galois group Gal(F/E) is the group of field automorphisms of F that are trivial on E (i.e., the bijections σ : F → F that preserve addition and multiplication and that send elements of E to themselves). The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of Gal(F/E) and the set of intermediate extensions of the extension F/E. By means of this correspondence, group-theoretic properties translate into facts about fields.
For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of f cannot be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving n {\displaystyle {\sqrt[{n}]{~}}} . For example, the symmetric group Sn is not solvable for n ≥ 5. Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem: f(X) = X5 − 4X + 2 (and E = Q), f(X) = Xn + an−1Xn−1 + ⋯ + a0 (where f is regarded as a polynomial in E(a0, ..., an−1), for some indeterminates ai, E is any field, and n ≥ 5). The tensor product of fields is not usually a field. For example, a finite extension F / E of degree n is a Galois extension if and only if there is an isomorphism of F-algebras F ⊗E F ≅ Fn. This fact is the beginning of Grothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects. == Invariants of fields == Basic invariants of a field F include the characteristic and the transcendence degree of F over its prime field. The latter is defined as the maximal number of elements in F that are algebraically independent over the prime field. Two algebraically closed fields E and F are isomorphic precisely if these two data agree. This implies that any two uncountable algebraically closed fields of the same cardinality and the same characteristic are isomorphic. For example, the algebraic closure of Qp, the field Cp and the field C are isomorphic (but not isomorphic as topological fields). === Model theory of fields === In model theory, a branch of mathematical logic, two fields E and F are called elementarily equivalent if every mathematical statement that is true for E is also true for F and conversely. The mathematical statements in question are required to be first-order sentences (involving 0, 1, the addition and multiplication).
A typical example, for n > 0, n an integer, is φ(E) = "any polynomial of degree n in E has a zero in E". The set of such formulas for all n expresses that E is algebraically closed. The Lefschetz principle states that C is elementarily equivalent to any algebraically closed field F of characteristic zero. Moreover, any fixed statement φ holds in C if and only if it holds in any algebraically closed field of sufficiently high characteristic. If U is an ultrafilter on a set I, and Fi is a field for every i in I, the ultraproduct of the Fi with respect to U is a field. It is denoted by ulimi→∞ Fi, since it behaves in several ways as a limit of the fields Fi: Łoś's theorem states that any first order statement that holds for all but finitely many Fi also holds for the ultraproduct. Applied to the above sentence φ, this shows that there is an isomorphism ulim p → ∞ ⁡ F ¯ p ≅ C . {\displaystyle \operatorname {ulim} _{p\to \infty }{\overline {\mathbf {F} }}_{p}\cong \mathbf {C} .} The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primes p) ulimp Qp ≅ ulimp Fp((t)). In addition, model theory also studies the logical properties of various other types of fields, such as real closed fields or exponential fields (which are equipped with an exponential function exp : F → F×). === Absolute Galois group === For fields that are not algebraically closed (or not separably closed), the absolute Galois group Gal(F) is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs all finite separable extensions of F. By elementary means, the group Gal(Fq) can be shown to be the Prüfer group, the profinite completion of Z. This statement subsumes the fact that the only algebraic extensions of Fq are the fields Fqn for n > 0, and that the Galois groups of these finite extensions are given by Gal(Fqn / Fq) = Z/nZ.
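The statement Gal(Fqn / Fq) = Z/nZ, generated by the Frobenius map x ↦ xq, can be checked by hand in the smallest nontrivial case. A sketch using a hand-rolled model of F9 (the representation and helper names are our own, not a library API):

```python
# A hand-rolled model of F9 = F3[i]/(i^2 + 1): the pair (a, b) stands for a + b*i.
F9 = [(a, b) for a in range(3) for b in range(3)]

def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def power(x, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

def frob(x):
    return power(x, 3)   # Frobenius x -> x^q with q = 3

# Frobenius is nontrivial and squares to the identity, so it generates
# Gal(F9/F3) = Z/2Z ...
assert all(frob(frob(x)) == x for x in F9)
assert any(frob(x) != x for x in F9)
# ... and its fixed field is exactly the prime field F3 = {(a, 0)}.
assert {x for x in F9 if frob(x) == x} == {(a, 0) for a in range(3)}
```

Here the Frobenius acts as complex conjugation on F3(i), mirroring how Gal(C/R) is generated by conjugation.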
A description in terms of generators and relations is also known for the Galois groups of p-adic number fields (finite extensions of Qp). Representations of Galois groups and of related groups such as the Weil group are fundamental in many branches of arithmetic, such as the Langlands program. The cohomological study of such representations is done using Galois cohomology. For example, the Brauer group, which is classically defined as the group of central simple F-algebras, can be reinterpreted as a Galois cohomology group, namely Br(F) = H2(F, Gm). === K-theory === Milnor K-theory is defined as K n M ( F ) = F × ⊗ ⋯ ⊗ F × / ⟨ x ⊗ ( 1 − x ) ∣ x ∈ F ∖ { 0 , 1 } ⟩ . {\displaystyle K_{n}^{M}(F)=F^{\times }\otimes \cdots \otimes F^{\times }/\left\langle x\otimes (1-x)\mid x\in F\smallsetminus \{0,1\}\right\rangle .} The norm residue isomorphism theorem, proved around 2000 by Vladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism K n M ( F ) / p = H n ( F , μ p ⊗ n ) . {\displaystyle K_{n}^{M}(F)/p=H^{n}(F,\mu _{p}^{\otimes n}).} Algebraic K-theory is related to the group of invertible matrices with coefficients in the given field. For example, the process of taking the determinant of an invertible matrix leads to an isomorphism K1(F) = F×. Matsumoto's theorem shows that K2(F) agrees with K2M(F). In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general. == Applications == === Linear algebra and commutative algebra === If a ≠ 0, then the equation ax = b has a unique solution x in a field F, namely x = a − 1 b . {\displaystyle x=a^{-1}b.} This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis. The theory of modules (the analogue of vector spaces over rings instead of fields) is much more complicated, because the above equation may have several or no solutions.
In particular, systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the especially simple case of the ring Z of the integers. === Finite fields: cryptography and coding theory === A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing an = a ⋅ a ⋅ ⋯ ⋅ a (n factors, for an integer n ≥ 1) in a (large) finite field Fq can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i.e., determining the solution n to an equation an = b. In elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i.e., the solutions of an equation of the form y2 = x3 + ax + b. Finite fields are also used in coding theory and combinatorics. === Geometry: field of functions === Functions on a suitable topological space X into a field F can be added and multiplied pointwise, e.g., the product of two functions is defined by the product of their values within the domain: (f ⋅ g)(x) = f(x) ⋅ g(x). This makes these functions a commutative F-algebra. To obtain a field of functions, one must consider algebras of functions that are integral domains. In this case the ratios of two functions, i.e., expressions of the form f ( x ) g ( x ) , {\displaystyle {\frac {f(x)}{g(x)}},} form a field, called the field of functions. This occurs in two main cases. When X is a complex manifold. In this case, one considers the algebra of holomorphic functions, i.e., complex differentiable functions. Their ratios form the field of meromorphic functions on X. The function field of an algebraic variety X (a geometric object defined as the common zeros of polynomial equations) consists of ratios of regular functions, i.e., ratios of polynomial functions on the variety.
The function field of the n-dimensional space over a field F is F(x1, ..., xn), i.e., the field consisting of ratios of polynomials in n indeterminates. The function field of X is the same as the one of any open dense subvariety. In other words, the function field is insensitive to replacing X by a (slightly) smaller subvariety. The function field is invariant under isomorphism and birational equivalence of varieties. It is therefore an important tool for the study of abstract algebraic varieties and for the classification of algebraic varieties. For example, the dimension, which equals the transcendence degree of F(X), is invariant under birational equivalence. For curves (i.e., the dimension is one), the function field F(X) is very close to X: if X is smooth and proper (the analogue of being compact), X can be reconstructed, up to isomorphism, from its field of functions. In higher dimension the function field remembers less, but still decisive information about X. The study of function fields and their geometric meaning in higher dimensions is referred to as birational geometry. The minimal model program attempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field. === Number theory: global fields === Global fields are in the limelight in algebraic number theory and arithmetic geometry. They are, by definition, number fields (finite extensions of Q) or function fields over Fq (finite extensions of Fq(t)). As for local fields, these two types of fields share several similar features, even though they are of characteristic 0 and positive characteristic, respectively. This function field analogy can help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. 
For example, the Riemann hypothesis concerning the zeros of the Riemann zeta function (open as of 2017) can be regarded as being parallel to the Weil conjectures (proven in 1974 by Pierre Deligne). Cyclotomic fields are among the most intensely studied number fields. They are of the form Q(ζn), where ζn is a primitive nth root of unity, i.e., a complex number ζ that satisfies ζn = 1 and ζm ≠ 1 for all 0 < m < n. For n a regular prime, Kummer used cyclotomic fields to prove Fermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation xn + yn = zn. Local fields are completions of global fields. Ostrowski's theorem asserts that the only completions of Q, a global field, are the local fields Qp and R. Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called the local–global principle. For example, the Hasse–Minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in R and Qp, whose solutions can easily be described. Unlike for local fields, the Galois groups of global fields are not known. Inverse Galois theory studies the (unsolved) problem whether any finite group is the Galois group Gal(F/Q) for some number field F. Class field theory describes the abelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups of global fields. A classical statement, the Kronecker–Weber theorem, describes the maximal abelian extension Qab of Q: it is the field Q(ζn, n ≥ 2) obtained by adjoining all primitive nth roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of Fab of general number fields F. For imaginary quadratic fields, F = Q ( − d ) {\displaystyle F=\mathbf {Q} ({\sqrt {-d}})} , d > 0, the theory of complex multiplication describes Fab using elliptic curves.
For general number fields, no such explicit description is known. == Related notions == Apart from the additional structure that fields may enjoy, fields admit various other related notions. Since in any field 0 ≠ 1, any field has at least two elements. Nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite fields Fp, as p tends to 1. In addition to division rings, there are various other weaker algebraic structures related to fields such as quasifields, near-fields and semifields. There are also proper classes with field structure, which are sometimes called Fields, with a capital 'F'. The surreal numbers form a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. The nimbers, a concept from game theory, form such a Field as well. === Division rings === Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of a division ring or skew field; sometimes associativity is weakened as well. The only division rings that are finite-dimensional R-vector spaces are R itself, C (which is a field), and the quaternions H (in which multiplication is non-commutative). This result is known as the Frobenius theorem. The octonions O, for which multiplication is neither commutative nor associative, form a normed alternative division algebra, but are not a division ring. This fact was proved using methods of algebraic topology in 1958 by Michel Kervaire, Raoul Bott, and John Milnor. Wedderburn's little theorem states that all finite division rings are fields. == Notes == == Citations == == References == == External links ==
Wikipedia:Fielden Professor of Pure Mathematics#0
The Fielden Chair of Pure Mathematics is an endowed professorial position in the School of Mathematics, University of Manchester, England. == History == In 1870 Samuel Fielden, a wealthy mill owner from Todmorden, donated £150 to Owens College (as the Victoria University of Manchester was then called) for the teaching of evening classes and a further £3000 for the development of natural sciences at the college. From 1877 this supported the Fielden Lecturer, subsequently to become the Fielden Reader with the appointment of L. J. Mordell in 1922 and then Fielden Professor in 1923. Alex Wilkie FRS was appointed to the post in 2007. == Holders == Previous holders of the Fielden Chair (and lectureship) are: A. T. Bentley (1876–1880) Lecturer in Pure Mathematics J. E. A. Steggall (1880–1883) Lecturer in Pure Mathematics R. F. Gwyther (1883–1907) Lecturer in Mathematics F. T. Swanwick (1907–1912) Lecturer in Mathematics H. R. Hasse (1912–1918) Lecturer in Mathematics George Henry Livens (1920–1922) Lecturer in Mathematics Louis Mordell (1923–1945) Max Newman (1945–1964) Frank Adams (1964–1971) Ian G. Macdonald (1972–1976) Norman Blackburn (1978–1994) Mark Pollicott (1996–2004) Alex Wilkie (2007–) Radha Kessar (2024–) == Related chairs == The other endowed chairs in mathematics at the University of Manchester are the Beyer Chair of Applied Mathematics, the Sir Horace Lamb Chair and the Richardson Chair of Applied Mathematics. == References ==
Wikipedia:Fierz identity#0
In theoretical physics, a Fierz identity is an identity that allows one to rewrite bilinears of the product of two spinors as a linear combination of products of the bilinears of the individual spinors. It is named after Swiss physicist Markus Fierz. The Fierz identities are also sometimes called the Fierz–Pauli–Kofink identities, as Pauli and Kofink described a general mechanism for producing such identities. There is a version of the Fierz identities for Dirac spinors and there is another version for Weyl spinors. And there are versions for other dimensions besides 3+1 dimensions. Spinor bilinears in arbitrary dimensions are elements of a Clifford algebra; the Fierz identities can be obtained by expressing the Clifford algebra as a quotient of the exterior algebra. When working in 4 spacetime dimensions the bivector ψ χ ¯ {\displaystyle \psi {\bar {\chi }}} may be decomposed in terms of the Dirac matrices that span the space: ψ χ ¯ = 1 4 ( c S 1 + c V μ γ μ + c T μ ν T μ ν + c A μ γ μ γ 5 + c P γ 5 ) {\displaystyle \psi {\bar {\chi }}={\frac {1}{4}}(c_{S}\mathbb {1} +c_{V}^{\mu }\gamma _{\mu }+c_{T}^{\mu \nu }T_{\mu \nu }+c_{A}^{\mu }\gamma _{\mu }\gamma _{5}+c_{P}\gamma _{5})} . The coefficients are c S = ( χ ¯ ψ ) , c V μ = ( χ ¯ γ μ ψ ) , c T μ ν = − ( χ ¯ T μ ν ψ ) , c A μ = − ( χ ¯ γ μ γ 5 ψ ) , c P = ( χ ¯ γ 5 ψ ) {\displaystyle c_{S}=({\bar {\chi }}\psi ),\quad c_{V}^{\mu }=({\bar {\chi }}\gamma ^{\mu }\psi ),\quad c_{T}^{\mu \nu }=-({\bar {\chi }}T^{\mu \nu }\psi ),\quad c_{A}^{\mu }=-({\bar {\chi }}\gamma ^{\mu }\gamma _{5}\psi ),\quad c_{P}=({\bar {\chi }}\gamma _{5}\psi )} and are usually determined by using the orthogonality of the basis under the trace operation. By sandwiching the above decomposition between the desired gamma structures, the identities for the contraction of two Dirac bilinears of the same type can be written with coefficients according to the following table. 
where S = χ ¯ ψ , V = χ ¯ γ μ ψ , T = χ ¯ [ γ μ , γ ν ] ψ / 2 2 , A = χ ¯ γ 5 γ μ ψ , P = χ ¯ γ 5 ψ . {\displaystyle S={\bar {\chi }}\psi ,\quad V={\bar {\chi }}\gamma ^{\mu }\psi ,\quad T={\bar {\chi }}[\gamma ^{\mu },\gamma ^{\nu }]\psi /2{\sqrt {2}},\quad A={\bar {\chi }}\gamma _{5}\gamma ^{\mu }\psi ,\quad P={\bar {\chi }}\gamma _{5}\psi .} The table is symmetric with respect to reflection across the central element. The signs in the table correspond to the case of commuting spinors, otherwise, as is the case of fermions in physics, all coefficients change signs. For example, under the assumption of commuting spinors, the V × V product can be expanded as, ( χ ¯ γ μ ψ ) ( ψ ¯ γ μ χ ) = ( χ ¯ χ ) ( ψ ¯ ψ ) − 1 2 ( χ ¯ γ μ χ ) ( ψ ¯ γ μ ψ ) − 1 2 ( χ ¯ γ μ γ 5 χ ) ( ψ ¯ γ μ γ 5 ψ ) − ( χ ¯ γ 5 χ ) ( ψ ¯ γ 5 ψ ) . {\displaystyle \left({\bar {\chi }}\gamma ^{\mu }\psi \right)\left({\bar {\psi }}\gamma _{\mu }\chi \right)=\left({\bar {\chi }}\chi \right)\left({\bar {\psi }}\psi \right)-{\frac {1}{2}}\left({\bar {\chi }}\gamma ^{\mu }\chi \right)\left({\bar {\psi }}\gamma _{\mu }\psi \right)-{\frac {1}{2}}\left({\bar {\chi }}\gamma ^{\mu }\gamma _{5}\chi \right)\left({\bar {\psi }}\gamma _{\mu }\gamma _{5}\psi \right)-\left({\bar {\chi }}\gamma _{5}\chi \right)\left({\bar {\psi }}\gamma _{5}\psi \right)~.} Combinations of bilinears corresponding to the eigenvectors of the transpose matrix transform to the same combinations with eigenvalues ±1. For example, again for commuting spinors, V×V + A×A, ( χ ¯ γ μ ψ ) ( ψ ¯ γ μ χ ) + ( χ ¯ γ 5 γ μ ψ ) ( ψ ¯ γ 5 γ μ χ ) = − ( ( χ ¯ γ μ χ ) ( ψ ¯ γ μ ψ ) + ( χ ¯ γ 5 γ μ χ ) ( ψ ¯ γ 5 γ μ ψ ) ) . 
{\displaystyle ({\bar {\chi }}\gamma ^{\mu }\psi )({\bar {\psi }}\gamma _{\mu }\chi )+({\bar {\chi }}\gamma _{5}\gamma ^{\mu }\psi )({\bar {\psi }}\gamma _{5}\gamma _{\mu }\chi )=-(~({\bar {\chi }}\gamma ^{\mu }\chi )({\bar {\psi }}\gamma _{\mu }\psi )+({\bar {\chi }}\gamma _{5}\gamma ^{\mu }\chi )({\bar {\psi }}\gamma _{5}\gamma _{\mu }\psi )~)~.} Simplifications arise when the spinors considered are Majorana spinors, or chiral fermions, as then some terms in the expansion can vanish from symmetry reasons. For example, for anticommuting spinors this time, it readily follows from the above that χ ¯ 1 γ μ ( 1 + γ 5 ) ψ 2 ψ ¯ 3 γ μ ( 1 − γ 5 ) χ 4 = − 2 χ ¯ 1 ( 1 − γ 5 ) χ 4 ψ ¯ 3 ( 1 + γ 5 ) ψ 2 . {\displaystyle {\bar {\chi }}_{1}\gamma ^{\mu }(1+\gamma _{5})\psi _{2}{\bar {\psi }}_{3}\gamma _{\mu }(1-\gamma _{5})\chi _{4}=-2{\bar {\chi }}_{1}(1-\gamma _{5})\chi _{4}{\bar {\psi }}_{3}(1+\gamma _{5})\psi _{2}.} == References == A derivation of identities for rewriting any scalar contraction of Dirac bilinears can be found in 29.3.4 of L. B. Okun (1980). Leptons and Quarks. North-Holland. ISBN 978-0-444-86924-1. See also appendix B.1.2 in T. Ortin (2004). Gravity and Strings. Cambridge University Press. ISBN 978-0-521-82475-0. Kennedy, A.D. (1981). "Clifford algebras in 2ω dimensions". Journal of Mathematical Physics. 22 (7): 1330–7. doi:10.1063/1.525069. Pal, Palash B. (2007). "Representation-independent manipulations with Dirac spinors". arXiv:physics/0703214.
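The V × V expansion for commuting spinors quoted above can be checked numerically. The sketch below uses NumPy with the Dirac basis for the gamma matrices and the metric diag(1, −1, −1, −1); these conventions are our assumptions, since the identity itself is basis-independent:

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Dirac-basis gamma matrices and gamma_5; eta is the metric signature.
gammas = [block(I2, Z, Z, -I2)] + [block(Z, s, -s, Z) for s in (sx, sy, sz)]
g5 = 1j * gammas[0] @ gammas[1] @ gammas[2] @ gammas[3]
eta = [1.0, -1.0, -1.0, -1.0]
id4 = np.eye(4, dtype=complex)

def bil(a, M, b):
    """Dirac bilinear (a-bar M b) with a-bar = a^dagger gamma^0."""
    return a.conj() @ gammas[0] @ M @ b

# Random *commuting* (ordinary complex) spinors.
chi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# (chi-bar gamma^mu psi)(psi-bar gamma_mu chi), contracted with eta.
lhs = sum(eta[m] * bil(chi, gammas[m], psi) * bil(psi, gammas[m], chi)
          for m in range(4))

# Fierz-rearranged right-hand side: S - V/2 - A/2 - P.
rhs = (bil(chi, id4, chi) * bil(psi, id4, psi)
       - 0.5 * sum(eta[m] * bil(chi, gammas[m], chi)
                   * bil(psi, gammas[m], psi) for m in range(4))
       - 0.5 * sum(eta[m] * bil(chi, gammas[m] @ g5, chi)
                   * bil(psi, gammas[m] @ g5, psi) for m in range(4))
       - bil(chi, g5, chi) * bil(psi, g5, psi))

assert np.isclose(lhs, rhs)
```

Since the derivation uses only the Clifford relation and the completeness of the sixteen basis matrices, the check succeeds for any choice of complex 4-component vectors.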
Wikipedia:Fifth power (algebra)#0
In arithmetic and algebra, the fifth power or sursolid of a number n is the result of multiplying five instances of n together: n5 = n × n × n × n × n. Fifth powers are also formed by multiplying a number by its fourth power, or the square of a number by its cube. The sequence of fifth powers of integers is: 0, 1, 32, 243, 1024, 3125, 7776, 16807, 32768, 59049, 100000, 161051, 248832, 371293, 537824, 759375, 1048576, 1419857, 1889568, 2476099, 3200000, 4084101, 5153632, 6436343, 7962624, 9765625, ... (sequence A000584 in the OEIS) == Properties == For any integer n, the last decimal digit of n5 is the same as the last (decimal) digit of n, i.e. n ≡ n 5 ( mod 10 ) {\displaystyle n\equiv n^{5}{\pmod {10}}} By the Abel–Ruffini theorem, there is no general algebraic formula (formula expressed in terms of radical expressions) for the solution of polynomial equations containing a fifth power of the unknown as their highest power. This is the lowest power for which this is true. See quintic equation, sextic equation, and septic equation. Along with the fourth power, the fifth power is one of two powers k that can be expressed as the sum of k − 1 other k-th powers, providing counterexamples to Euler's sum of powers conjecture. Specifically, 275 + 845 + 1105 + 1335 = 1445 (Lander & Parkin, 1966) == See also == Eighth power Seventh power Sixth power Fourth power Cube (algebra) Square (algebra) Perfect power == Footnotes == == References == Råde, Lennart; Westergren, Bertil (2000). Springers mathematische Formeln: Taschenbuch für Ingenieure, Naturwissenschaftler, Informatiker, Wirtschaftswissenschaftler (in German) (3 ed.). Springer-Verlag. p. 44. ISBN 3-540-67505-1. Vega, Georg (1783). Logarithmische, trigonometrische, und andere zum Gebrauche der Mathematik eingerichtete Tafeln und Formeln (in German). Vienna: Gedruckt bey Johann Thomas Edlen von Trattnern, kaiferl. königl. Hofbuchdruckern und Buchhändlern. p. 358. 1 32 243 1024. Jahn, Gustav Adolph (1839). 
Tafeln der Quadrat- und Kubikwurzeln aller Zahlen von 1 bis 25500, der Quadratzahlen aller Zahlen von 1 bis 27000 und der Kubikzahlen aller Zahlen von 1 bis 24000 (in German). Leipzig: Verlag von Johann Ambrosius Barth. p. 241. Deza, Elena; Deza, Michel (2012). Figurate Numbers. Singapore: World Scientific Publishing. p. 173. ISBN 978-981-4355-48-3. Rosen, Kenneth H.; Michaels, John G. (2000). Handbook of Discrete and Combinatorial Mathematics. Boca Raton, Florida: CRC Press. p. 159. ISBN 0-8493-0149-1. Prändel, Johann Georg (1815). Arithmetik in weiterer Bedeutung, oder Zahlen- und Buchstabenrechnung in einem Lehrkurse - mit Tabellen über verschiedene Münzsorten, Gewichte und Ellenmaaße und einer kleinen Erdglobuslehre (in German). Munich. p. 264.
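The two properties stated in the section above, the last-digit congruence and the Lander–Parkin counterexample, can be verified directly:

```python
# n^5 ends in the same decimal digit as n, for every integer n.
assert all((n**5 - n) % 10 == 0 for n in range(-1000, 1000))

# Lander and Parkin's 1966 counterexample to Euler's sum of powers
# conjecture: four fifth powers summing to a fifth power.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5
```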
Wikipedia:Filip Rindler#0
Filip Rindler (born August 15, 1984 in Berlin) is an Austrian mathematician. After studying at TU Berlin, he finished his doctorate at the University of Oxford in 2011 under the supervision of Jan Kristensen. Since 2020 he has been a Professor at the University of Warwick. He works on the calculus of variations, partial differential equations and geometric measure theory, as well as mathematical materials science. In 2018 he was awarded the prestigious Whitehead Prize of the London Mathematical Society for "his solutions to fundamental problems on the border between the theory of partial differential equations, calculus of variations and geometric measure theory". He received an ERC Starting Grant in 2017 and an ERC Consolidator Grant in 2024. == Selected publications == De Philippis, Guido; Rindler, Filip (1 November 2016). "On the structure of \mathscr A-free measures and applications". Annals of Mathematics. 184 (3): 1017–1039. arXiv:1601.06543. doi:10.4007/annals.2016.184.3.10. ISSN 0003-486X. S2CID 67844397. Rindler, Filip (2018). Calculus of Variations. Universitext. Springer. == References ==
Wikipedia:Filled Julia set#0
The filled-in Julia set K ( f ) {\displaystyle K(f)} of a polynomial f {\displaystyle f} is the union of the Julia set and its interior; it is the set of non-escaping points. == Formal definition == The filled-in Julia set K ( f ) {\displaystyle K(f)} of a polynomial f {\displaystyle f} is defined as the set of all points z {\displaystyle z} of the dynamical plane that have bounded orbit with respect to f {\displaystyle f} K ( f ) = d e f { z ∈ C : f ( k ) ( z ) ↛ ∞ as k → ∞ } {\displaystyle K(f){\overset {\mathrm {def} }{{}={}}}\left\{z\in \mathbb {C} :f^{(k)}(z)\not \to \infty ~{\text{as}}~k\to \infty \right\}} where: C {\displaystyle \mathbb {C} } is the set of complex numbers f ( k ) ( z ) {\displaystyle f^{(k)}(z)} is the k {\displaystyle k} -fold composition of f {\displaystyle f} with itself = iteration of function f {\displaystyle f} == Relation to the Fatou set == The filled-in Julia set is the (absolute) complement of the attractive basin of infinity. K ( f ) = C ∖ A f ( ∞ ) {\displaystyle K(f)=\mathbb {C} \setminus A_{f}(\infty )} The attractive basin of infinity is one of the components of the Fatou set. A f ( ∞ ) = F ∞ {\displaystyle A_{f}(\infty )=F_{\infty }} In other words, the filled-in Julia set is the complement of the unbounded Fatou component: K ( f ) = F ∞ C . {\displaystyle K(f)=F_{\infty }^{C}.} == Relation between Julia, filled-in Julia set and attractive basin of infinity == The Julia set is the common boundary of the filled-in Julia set and the attractive basin of infinity J ( f ) = ∂ K ( f ) = ∂ A f ( ∞ ) {\displaystyle J(f)=\partial K(f)=\partial A_{f}(\infty )} where: A f ( ∞ ) {\displaystyle A_{f}(\infty )} denotes the attractive basin of infinity = exterior of filled-in Julia set = set of escaping points for f {\displaystyle f} A f ( ∞ ) = d e f { z ∈ C : f ( k ) ( z ) → ∞ a s k → ∞ } .
{\displaystyle A_{f}(\infty )\ {\overset {\underset {\mathrm {def} }{}}{=}}\ \{z\in \mathbb {C} :f^{(k)}(z)\to \infty \ as\ k\to \infty \}.} If the filled-in Julia set has no interior then the Julia set coincides with the filled-in Julia set. This happens when all the critical points of f {\displaystyle f} are pre-periodic. Such critical points are often called Misiurewicz points. == Spine == The most studied polynomials are probably those of the form f ( z ) = z 2 + c {\displaystyle f(z)=z^{2}+c} , which are often denoted by f c {\displaystyle f_{c}} , where c {\displaystyle c} is any complex number. In this case, the spine S c {\displaystyle S_{c}} of the filled Julia set K {\displaystyle K} is defined as the arc between the β {\displaystyle \beta } -fixed point and − β {\displaystyle -\beta } , S c = [ − β , β ] {\displaystyle S_{c}=\left[-\beta ,\beta \right]} with the following properties: the spine lies inside K {\displaystyle K} (this makes sense when K {\displaystyle K} is connected and full); the spine is invariant under 180 degree rotation; the spine is a finite topological tree; the critical point z c r = 0 {\displaystyle z_{cr}=0} always belongs to the spine; the β {\displaystyle \beta } -fixed point is the landing point of the external ray of angle zero R 0 K {\displaystyle {\mathcal {R}}_{0}^{K}} , and − β {\displaystyle -\beta } is the landing point of the external ray R 1 / 2 K {\displaystyle {\mathcal {R}}_{1/2}^{K}} . Algorithms for constructing the spine: a detailed version is described by A. Douady; in a simplified version, connect − β {\displaystyle -\beta } and β {\displaystyle \beta } within K {\displaystyle K} by an arc (when K {\displaystyle K} has empty interior the arc is unique; otherwise take the shortest way that contains 0 {\displaystyle 0} ). The curve R {\displaystyle R} : R = d e f R 1 / 2 ∪ S c ∪ R 0 {\displaystyle R{\overset {\mathrm {def} }{{}={}}}R_{1/2}\cup S_{c}\cup R_{0}} divides the dynamical plane into two components.
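The bounded-orbit definition of K(f_c) translates directly into the escape-time test used to draw these sets. A minimal sketch (the function name and iteration budget are arbitrary choices); radius 2 is a valid escape bound whenever |c| ≤ 2:

```python
def in_filled_julia(z, c, max_iter=200, radius=2.0):
    """Escape-time test for membership of z in K(f_c), where f_c(z) = z^2 + c.

    Once |z| exceeds the escape radius (valid for |c| <= 2), the orbit
    provably tends to infinity, so z lies in the basin A_f(infinity).
    """
    for _ in range(max_iter):
        if abs(z) > radius:
            return False          # orbit escapes: z is outside K(f_c)
        z = z * z + c
    return True                   # orbit stayed bounded for max_iter steps

# c = -1 (the "basilica"): the critical orbit 0 -> -1 -> 0 -> ... is bounded.
assert in_filled_julia(0, -1)
# A point far from the origin escapes immediately.
assert not in_filled_julia(2.5, -1)
```

Points that never escape within the iteration budget are only presumed to lie in K(f_c); increasing max_iter sharpens the picture near the Julia set boundary.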
== Images == == Names == airplane Douady rabbit dragon basilica or San Marco fractal or San Marco dragon cauliflower dendrite Siegel disc == Notes == == References == Peitgen Heinz-Otto, Richter, P.H. : The beauty of fractals: Images of Complex Dynamical Systems. Springer-Verlag 1986. ISBN 978-0-387-15851-8. Bodil Branner : Holomorphic dynamical systems in the complex plane. Department of Mathematics Technical University of Denmark, MAT-Report no. 1996-42.
Wikipedia:Filtration (mathematics)#0
In mathematics, a filtration F {\displaystyle {\mathcal {F}}} is, informally, like a set of ever larger Russian dolls, each one containing the previous ones, where a "doll" is a subobject of an algebraic structure. Formally, a filtration is an indexed family ( S i ) i ∈ I {\displaystyle (S_{i})_{i\in I}} of subobjects of a given algebraic structure S {\displaystyle S} , with the index i {\displaystyle i} running over some totally ordered index set I {\displaystyle I} , subject to the condition that if i ≤ j {\displaystyle i\leq j} in I {\displaystyle I} , then S i ⊆ S j {\displaystyle S_{i}\subseteq S_{j}} . If the index i {\displaystyle i} is the time parameter of some stochastic process, then the filtration can be interpreted as representing all historical but not future information available about the stochastic process, with the algebraic structure S i {\displaystyle S_{i}} gaining in complexity with time. Hence, a process that is adapted to a filtration F {\displaystyle {\mathcal {F}}} is also called non-anticipating, because it cannot "see into the future". Sometimes, as in a filtered algebra, there is instead the requirement that the S i {\displaystyle S_{i}} be subalgebras with respect to some operations (say, vector addition), but not with respect to other operations (say, multiplication) that satisfy only S i ⋅ S j ⊆ S i + j {\displaystyle S_{i}\cdot S_{j}\subseteq S_{i+j}} , where the index set is the natural numbers; this is by analogy with a graded algebra. Sometimes, filtrations are supposed to satisfy the additional requirement that the union of the S i {\displaystyle S_{i}} be the whole S {\displaystyle S} , or (in more general cases, when the notion of union does not make sense) that the canonical homomorphism from the direct limit of the S i {\displaystyle S_{i}} to S {\displaystyle S} is an isomorphism. Whether this requirement is assumed or not usually depends on the author of the text and is often explicitly stated. 
This article does not impose this requirement. There is also the notion of a descending filtration, which is required to satisfy S i ⊇ S j {\displaystyle S_{i}\supseteq S_{j}} in lieu of S i ⊆ S j {\displaystyle S_{i}\subseteq S_{j}} (and, occasionally, ⋂ i ∈ I S i = 0 {\displaystyle \bigcap _{i\in I}S_{i}=0} instead of ⋃ i ∈ I S i = S {\displaystyle \bigcup _{i\in I}S_{i}=S} ). Again, it depends on the context how exactly the word "filtration" is to be understood. Descending filtrations are not to be confused with the dual notion of cofiltrations (which consist of quotient objects rather than subobjects). Filtrations are widely used in abstract algebra, homological algebra (where they are related in an important way to spectral sequences), and in measure theory and probability theory for nested sequences of σ-algebras. In functional analysis and numerical analysis, other terminology is usually used, such as scale of spaces or nested spaces. == Examples == === Sets === Farey Sequence === Algebra === ==== Algebras ==== See: Filtered algebra ==== Groups ==== In algebra, filtrations are ordinarily indexed by N {\displaystyle \mathbb {N} } , the set of natural numbers. A filtration of a group G {\displaystyle G} , is then a nested sequence G n {\displaystyle G_{n}} of normal subgroups of G {\displaystyle G} (that is, for any n {\displaystyle n} we have G n + 1 ⊆ G n {\displaystyle G_{n+1}\subseteq G_{n}} ). Note that this use of the word "filtration" corresponds to our "descending filtration". Given a group G {\displaystyle G} and a filtration G n {\displaystyle G_{n}} , there is a natural way to define a topology on G {\displaystyle G} , said to be associated to the filtration. 
A basis for this topology is the set of all cosets of subgroups appearing in the filtration, that is, a subset of G {\displaystyle G} is defined to be open if it is a union of sets of the form a G n {\displaystyle aG_{n}} , where a ∈ G {\displaystyle a\in G} and n {\displaystyle n} is a natural number. The topology associated to a filtration on a group G {\displaystyle G} makes G {\displaystyle G} into a topological group. The topology associated to a filtration G n {\displaystyle G_{n}} on a group G {\displaystyle G} is Hausdorff if and only if ⋂ G n = { 1 } {\displaystyle \bigcap G_{n}=\{1\}} . If two filtrations G n {\displaystyle G_{n}} and G n ′ {\displaystyle G'_{n}} are defined on a group G {\displaystyle G} , then the identity map from G {\displaystyle G} to G {\displaystyle G} , where the first copy of G {\displaystyle G} is given the G n {\displaystyle G_{n}} -topology and the second the G n ′ {\displaystyle G'_{n}} -topology, is continuous if and only if for any n {\displaystyle n} there is an m {\displaystyle m} such that G m ⊆ G n ′ {\displaystyle G_{m}\subseteq G'_{n}} , that is, if and only if the identity map is continuous at 1. In particular, the two filtrations define the same topology if and only if for any subgroup appearing in one there is a smaller or equal one appearing in the other. ==== Rings and modules: descending filtrations ==== Given a ring R {\displaystyle R} and an R {\displaystyle R} -module M {\displaystyle M} , a descending filtration of M {\displaystyle M} is a decreasing sequence of submodules M n {\displaystyle M_{n}} . This is therefore a special case of the notion for groups, with the additional condition that the subgroups be submodules. The associated topology is defined as for groups. An important special case is known as the I {\displaystyle I} -adic topology (or J {\displaystyle J} -adic, etc.): Let R {\displaystyle R} be a commutative ring, and I {\displaystyle I} an ideal of R {\displaystyle R} . 
Given an R {\displaystyle R} -module M {\displaystyle M} , the sequence I n M {\displaystyle I^{n}M} of submodules of M {\displaystyle M} forms a filtration of M {\displaystyle M} (the I {\displaystyle I} -adic filtration). The I {\displaystyle I} -adic topology on M {\displaystyle M} is then the topology associated to this filtration. If M {\displaystyle M} is just the ring R {\displaystyle R} itself, we have defined the I {\displaystyle I} -adic topology on R {\displaystyle R} . When R {\displaystyle R} is given the I {\displaystyle I} -adic topology, R {\displaystyle R} becomes a topological ring. If an R {\displaystyle R} -module M {\displaystyle M} is then given the I {\displaystyle I} -adic topology, it becomes a topological R {\displaystyle R} -module, relative to the topology given on R {\displaystyle R} . ==== Rings and modules: ascending filtrations ==== Given a ring R {\displaystyle R} and an R {\displaystyle R} -module M {\displaystyle M} , an ascending filtration of M {\displaystyle M} is an increasing sequence of submodules M n {\displaystyle M_{n}} . In particular, if R {\displaystyle R} is a field, then an ascending filtration of the R {\displaystyle R} -vector space M {\displaystyle M} is an increasing sequence of vector subspaces of M {\displaystyle M} . Flags are one important class of such filtrations. ==== Sets ==== A maximal filtration of a set is equivalent to an ordering (a permutation) of the set. For instance, the filtration { 0 } ⊆ { 0 , 1 } ⊆ { 0 , 1 , 2 } {\displaystyle \{0\}\subseteq \{0,1\}\subseteq \{0,1,2\}} corresponds to the ordering ( 0 , 1 , 2 ) {\displaystyle (0,1,2)} . From the point of view of the field with one element, an ordering on a set corresponds to a maximal flag (a filtration on a vector space), considering a set to be a vector space over the field with one element. 
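The correspondence between maximal filtrations of a finite set and orderings can be sketched in code; this is an illustrative sketch (function names are hypothetical), not part of the formal theory:

```python
# A maximal filtration of a finite set, represented as a list of nested
# subsets in which each step adds exactly one element, corresponds to an
# ordering (a permutation) of the set.

def filtration_from_ordering(ordering):
    """Build the maximal filtration corresponding to an ordering."""
    return [frozenset(ordering[: i + 1]) for i in range(len(ordering))]

def ordering_from_filtration(filtration):
    """Recover the ordering: each stage contributes its single new element."""
    ordering, previous = [], frozenset()
    for stage in filtration:
        (new_element,) = stage - previous  # exactly one element is added
        ordering.append(new_element)
        previous = stage
    return ordering

chain = filtration_from_ordering([0, 1, 2])
# chain is {0} ⊆ {0, 1} ⊆ {0, 1, 2}, matching the example in the text
assert ordering_from_filtration(chain) == [0, 1, 2]
```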
=== Measure theory === In measure theory, in particular in martingale theory and the theory of stochastic processes, a filtration is an increasing sequence of σ {\displaystyle \sigma } -algebras on a measurable space. That is, given a measurable space ( Ω , F ) {\displaystyle (\Omega ,{\mathcal {F}})} , a filtration is a sequence of σ {\displaystyle \sigma } -algebras { F t } t ≥ 0 {\displaystyle \{{\mathcal {F}}_{t}\}_{t\geq 0}} with F t ⊆ F {\displaystyle {\mathcal {F}}_{t}\subseteq {\mathcal {F}}} where each t {\displaystyle t} is a non-negative real number and t 1 ≤ t 2 ⟹ F t 1 ⊆ F t 2 . {\displaystyle t_{1}\leq t_{2}\implies {\mathcal {F}}_{t_{1}}\subseteq {\mathcal {F}}_{t_{2}}.} The exact range of the "times" t {\displaystyle t} will usually depend on context: the set of values for t {\displaystyle t} might be discrete or continuous, bounded or unbounded. For example, t ∈ { 0 , 1 , … , N } , N 0 , [ 0 , T ] or [ 0 , + ∞ ) . {\displaystyle t\in \{0,1,\dots ,N\},\mathbb {N} _{0},[0,T]{\mbox{ or }}[0,+\infty ).} Similarly, a filtered probability space (also known as a stochastic basis) ( Ω , F , { F t } t ≥ 0 , P ) {\displaystyle \left(\Omega ,{\mathcal {F}},\left\{{\mathcal {F}}_{t}\right\}_{t\geq 0},\mathbb {P} \right)} , is a probability space equipped with the filtration { F t } t ≥ 0 {\displaystyle \left\{{\mathcal {F}}_{t}\right\}_{t\geq 0}} of its σ {\displaystyle \sigma } -algebra F {\displaystyle {\mathcal {F}}} . A filtered probability space is said to satisfy the usual conditions if it is complete (i.e., F 0 {\displaystyle {\mathcal {F}}_{0}} contains all P {\displaystyle \mathbb {P} } -null sets) and right-continuous (i.e. F t = F t + := ⋂ s > t F s {\displaystyle {\mathcal {F}}_{t}={\mathcal {F}}_{t+}:=\bigcap _{s>t}{\mathcal {F}}_{s}} for all times t {\displaystyle t} ). 
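On a finite sample space, each σ-algebra in a filtration can be identified with the partition of Ω it generates, and the nesting of σ-algebras with successive refinement of partitions. A minimal sketch of this (the coin-flip setting is an illustrative assumption, not taken from the text):

```python
from itertools import product

# Sample space: all sequences of 3 coin flips.
omega = list(product("HT", repeat=3))

def partition_after(n):
    """Partition of omega generated by the first n flips: outcomes sharing
    a length-n prefix lie in the same atom of F_n."""
    atoms = {}
    for outcome in omega:
        atoms.setdefault(outcome[:n], set()).add(outcome)
    return list(atoms.values())

def refines(finer, coarser):
    """Each atom of the finer partition lies inside one atom of the coarser."""
    return all(any(a <= b for b in coarser) for a in finer)

# F_0 ⊆ F_1 ⊆ F_2 ⊆ F_3: later partitions refine earlier ones,
# i.e. the set of answerable questions grows with time.
parts = [partition_after(n) for n in range(4)]
assert all(refines(parts[n + 1], parts[n]) for n in range(3))
assert len(parts[0]) == 1 and len(parts[3]) == 8
```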
It is also useful (in the case of an unbounded index set) to define F ∞ {\displaystyle {\mathcal {F}}_{\infty }} as the σ {\displaystyle \sigma } -algebra generated by the infinite union of the F t {\displaystyle {\mathcal {F}}_{t}} 's, which is contained in F {\displaystyle {\mathcal {F}}} : F ∞ = σ ( ⋃ t ≥ 0 F t ) ⊆ F . {\displaystyle {\mathcal {F}}_{\infty }=\sigma \left(\bigcup _{t\geq 0}{\mathcal {F}}_{t}\right)\subseteq {\mathcal {F}}.} A σ-algebra defines the set of events that can be measured, which in a probability context is equivalent to events that can be discriminated, or "questions that can be answered at time t {\displaystyle t} ". Therefore, a filtration is often used to represent the change in the set of events that can be measured, through gain or loss of information. A typical example is in mathematical finance, where a filtration represents the information available up to and including each time t {\displaystyle t} , and is more and more precise (the set of measurable events is staying the same or increasing) as more information from the evolution of the stock price becomes available. ==== Relation to stopping times: stopping time sigma-algebras ==== Let ( Ω , F , { F t } t ≥ 0 , P ) {\displaystyle \left(\Omega ,{\mathcal {F}},\left\{{\mathcal {F}}_{t}\right\}_{t\geq 0},\mathbb {P} \right)} be a filtered probability space. A random variable τ : Ω → [ 0 , ∞ ] {\displaystyle \tau :\Omega \rightarrow [0,\infty ]} is a stopping time with respect to the filtration { F t } t ≥ 0 {\displaystyle \left\{{\mathcal {F}}_{t}\right\}_{t\geq 0}} , if { τ ≤ t } ∈ F t {\displaystyle \{\tau \leq t\}\in {\mathcal {F}}_{t}} for all t ≥ 0 {\displaystyle t\geq 0} . The stopping time σ {\displaystyle \sigma } -algebra is now defined as F τ := { A ∈ F | ∀ t ≥ 0 : A ∩ { τ ≤ t } ∈ F t } {\displaystyle {\mathcal {F}}_{\tau }:=\{A\in {\mathcal {F}}\vert \forall t\geq 0\colon A\cap \{\tau \leq t\}\in {\mathcal {F}}_{t}\}} . 

It is not difficult to show that F τ {\displaystyle {\mathcal {F}}_{\tau }} is indeed a σ {\displaystyle \sigma } -algebra. The set F τ {\displaystyle {\mathcal {F}}_{\tau }} encodes information up to the random time τ {\displaystyle \tau } in the sense that, if the filtered probability space is interpreted as a random experiment, the maximum information that can be found out about it by repeating the experiment arbitrarily often until the random time τ {\displaystyle \tau } is F τ {\displaystyle {\mathcal {F}}_{\tau }} . In particular, if the underlying probability space is finite (i.e. F {\displaystyle {\mathcal {F}}} is finite), the minimal sets of F τ {\displaystyle {\mathcal {F}}_{\tau }} (with respect to set inclusion) are given by the union over all t ≥ 0 {\displaystyle t\geq 0} of the sets of minimal sets of F t {\displaystyle {\mathcal {F}}_{t}} that lie in { τ = t } {\displaystyle \{\tau =t\}} . It can be shown that τ {\displaystyle \tau } is F τ {\displaystyle {\mathcal {F}}_{\tau }} -measurable. However, simple examples show that, in general, σ ( τ ) ≠ F τ {\displaystyle \sigma (\tau )\neq {\mathcal {F}}_{\tau }} . If τ 1 {\displaystyle \tau _{1}} and τ 2 {\displaystyle \tau _{2}} are stopping times on ( Ω , F , { F t } t ≥ 0 , P ) {\displaystyle \left(\Omega ,{\mathcal {F}},\left\{{\mathcal {F}}_{t}\right\}_{t\geq 0},\mathbb {P} \right)} , and τ 1 ≤ τ 2 {\displaystyle \tau _{1}\leq \tau _{2}} almost surely, then F τ 1 ⊆ F τ 2 . {\displaystyle {\mathcal {F}}_{\tau _{1}}\subseteq {\mathcal {F}}_{\tau _{2}}.} == See also == Natural filtration Filtration (probability theory) Filter (mathematics) == References == Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications. Berlin: Springer. ISBN 978-3-540-04758-2.
Wikipedia:Finitary relation#0
In mathematics, a finitary relation over a sequence of sets X1, ..., Xn is a subset of the Cartesian product X1 × ... × Xn; that is, it is a set of n-tuples (x1, ..., xn), each being a sequence of elements xi in the corresponding Xi. Typically, the relation describes a possible connection between the elements of an n-tuple. For example, the relation "x is divisible by y and z" consists of the set of 3-tuples whose components, when substituted for x, y and z respectively, make the sentence true. The non-negative integer n that gives the number of "places" in the relation is called the arity, adicity or degree of the relation. A relation with n "places" is variously called an n-ary relation, an n-adic relation or a relation of degree n. Relations with a finite number of places are called finitary relations (or simply relations if the context is clear). It is also possible to generalize the concept to infinitary relations with infinite sequences. == Definitions == When two objects, qualities, classes, or attributes, viewed together by the mind, are seen under some connexion, that connexion is called a relation. Definition: An n-ary relation R on sets X1, ..., Xn is given by a subset of the Cartesian product X1 × ... × Xn. Since the definition is predicated on the underlying sets X1, ..., Xn, R may be more formally defined as the (n + 1)-tuple (X1, ..., Xn, G), where G, called the graph of R, is a subset of the Cartesian product X1 × ... × Xn. As is often done in mathematics, the same symbol is used to refer to the mathematical object and an underlying set, so the statement (x1, ..., xn) ∈ R is often used to mean (x1, ..., xn) ∈ G. The statement is read "x1, ..., xn are R-related" and is denoted using prefix notation by Rx1⋯xn and using postfix notation by x1⋯xnR. In the case where R is a binary relation, those statements are also denoted using infix notation by x1Rx2. The following considerations apply: The set Xi is called the ith domain of R.
In the case where R is a binary relation, X1 is also called simply the domain or set of departure of R, and X2 is also called the codomain or set of destination of R. When the elements of Xi are relations, Xi is called a nonsimple domain of R. The set of all xi ∈ Xi such that Rx1⋯xi−1xixi+1⋯xn for at least one (x1, ..., xn) is called the ith domain of definition or active domain of R. In the case where R is a binary relation, its first domain of definition is also called simply the domain of definition or active domain of R, and its second domain of definition is also called the codomain of definition or active codomain of R. When the ith domain of definition of R is equal to Xi, R is said to be total on its ith domain (or on Xi, when this is not ambiguous). In the case where R is a binary relation, when R is total on X1, it is also said to be left-total or serial, and when R is total on X2, it is also said to be right-total or surjective. When ∀x, y ∈ Xi and ∀z ∈ Xj, xRijz ∧ yRijz ⇒ x = y, where i ∈ I, j ∈ J, Rij = πij R, and {I, J} is a partition of {1, ..., n}, R is said to be unique on {Xi}i∈I, and {Xi}i∈J is called a primary key of R. In the case where R is a binary relation, when R is unique on {X1}, it is also said to be left-unique or injective, and when R is unique on {X2}, it is also said to be univalent or right-unique. When all Xi are the same set X, it is simpler to refer to R as an n-ary relation over X, called a homogeneous relation. Without this restriction, R is called a heterogeneous relation. When any of the Xi is empty, the defining Cartesian product is empty, and the only relation over such a sequence of domains is the empty relation R = ∅. Let a Boolean domain B be a two-element set, say, B = {0, 1}, whose elements can be interpreted as logical values, typically 0 = false and 1 = true. The characteristic function of R, denoted by χR, is the Boolean-valued function χR: X1 × ...
× Xn → B, defined by χR((x1, ..., xn)) = 1 if Rx1⋯xn and χR((x1, ..., xn)) = 0 otherwise. In applied mathematics, computer science and statistics, it is common to refer to a Boolean-valued function as an n-ary predicate. From the more abstract viewpoint of formal logic and model theory, the relation R constitutes a logical model or a relational structure, that serves as one of many possible interpretations of some n-ary predicate symbol. Because relations arise in many scientific disciplines, as well as in many branches of mathematics and logic, there is considerable variation in terminology. Aside from the set-theoretic extension of a relational concept or term, the term "relation" can also be used to refer to the corresponding logical entity, either the logical comprehension, which is the totality of intensions or abstract properties shared by all elements in the relation, or else the symbols denoting these elements and intensions. Further, some writers of the latter persuasion introduce terms with more concrete connotations (such as "relational structure" for the set-theoretic extension of a given relational concept). == Specific values of n == === Nullary === Nullary (0-ary) relations count only two members: the empty nullary relation, which never holds, and the universal nullary relation, which always holds. This is because there is only one 0-tuple, the empty tuple (), and there are exactly two subsets of the (singleton) set of all 0-tuples. They are sometimes useful for constructing the base case of an induction argument. === Unary === Unary (1-ary) relations can be viewed as a collection of members (such as the collection of Nobel laureates) having some property (such as that of having been awarded the Nobel Prize). Every nullary function is a unary relation. === Binary === Binary (2-ary) relations are the most commonly studied form of finitary relations. 
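As a concrete sketch of the definitions above, a relation can be stored as its graph, a set of tuples, with the characteristic function as a membership test (the domain and names are illustrative assumptions):

```python
# The ternary relation "x is divisible by y and z" from the introduction,
# over the set {1, ..., 6}, stored as its graph: a set of 3-tuples.
domain = range(1, 7)
R = {(x, y, z) for x in domain for y in domain for z in domain
     if x % y == 0 and x % z == 0}

def chi(x, y, z):
    """Characteristic (Boolean-valued) function of R."""
    return 1 if (x, y, z) in R else 0

assert chi(6, 2, 3) == 1   # 6 is divisible by 2 and by 3
assert chi(6, 4, 3) == 0   # 6 is not divisible by 4

# The first domain of definition (active domain): values of x that occur
# in at least one tuple. Every x qualifies, via the tuple (x, 1, x).
first_active_domain = {x for (x, _, _) in R}
assert first_active_domain == set(range(1, 7))
```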
Homogeneous binary relations (where X1 = X2) include Equality and inequality, denoted by signs such as = and < in statements such as "5 < 12", or Divisibility, denoted by the sign | in statements such as "13 | 143". Heterogeneous binary relations include Set membership, denoted by the sign ∈ in statements such as "1 ∈ N". === Ternary === Ternary (3-ary) relations include, for example, the binary functions, which relate two inputs and the output. All three of the domains of a homogeneous ternary relation are the same set. == Example == Consider the ternary relation R "x thinks that y likes z" over the set of people P = { Alice, Bob, Charles, Denise }, defined by: R = { (Alice, Bob, Denise), (Charles, Alice, Bob), (Charles, Charles, Alice), (Denise, Denise, Denise) }. R can be represented equivalently by the following table:

x	y	z
Alice	Bob	Denise
Charles	Alice	Bob
Charles	Charles	Alice
Denise	Denise	Denise

Here, each row represents a triple of R, that is, each makes a statement of the form "x thinks that y likes z". For instance, the first row states that "Alice thinks that Bob likes Denise". All rows are distinct. The ordering of rows is insignificant but the ordering of columns is significant. The above table is also a simple example of a relational database, a field with theory rooted in relational algebra and applications in data management. Computer scientists, logicians, and mathematicians, however, tend to have different conceptions of what a general relation is and what it consists of. For example, databases are designed to deal with empirical data, which is by definition finite, whereas in mathematics, relations with infinite arity (i.e., infinitary relations) are also considered. == History == The logician Augustus De Morgan, in work published around 1860, was the first to articulate the notion of relation in anything like its present sense. He also stated the first formal results in the theory of relations (on De Morgan and relations, see Merrill 1990).
Charles Peirce, Gottlob Frege, Georg Cantor, Richard Dedekind and others advanced the theory of relations. Many of their ideas, especially on relations called orders, were summarized in The Principles of Mathematics (1903), where Bertrand Russell made free use of these results. In 1970, Edgar Codd proposed a relational model for databases, thus anticipating the development of database management systems. == See also == == References == == Bibliography ==
Wikipedia:Finite difference#0
A finite difference is a mathematical expression of the form f(x + b) − f(x + a). Finite differences (or the associated difference quotients) are often used as approximations of derivatives, such as in numerical differentiation. The difference operator, commonly denoted Δ {\displaystyle \Delta } , is the operator that maps a function f to the function Δ [ f ] {\displaystyle \Delta [f]} defined by Δ [ f ] ( x ) = f ( x + 1 ) − f ( x ) . {\displaystyle \Delta [f](x)=f(x+1)-f(x).} A difference equation is a functional equation that involves the finite difference operator in the same way as a differential equation involves derivatives. There are many similarities between difference equations and differential equations. Certain recurrence relations can be written as difference equations by replacing iteration notation with finite differences. In numerical analysis, finite differences are widely used for approximating derivatives, and the term "finite difference" is often used as an abbreviation of "finite difference approximation of derivatives". Finite differences were introduced by Brook Taylor in 1715 and have also been studied as abstract self-standing mathematical objects in works by George Boole (1860), L. M. Milne-Thomson (1933), and Károly Jordan (1939). Finite differences trace their origins back to one of Jost Bürgi's algorithms (c. 1592) and work by others including Isaac Newton. The formal calculus of finite differences can be viewed as an alternative to the calculus of infinitesimals. == Basic types == Three basic types are commonly considered: forward, backward, and central finite differences. A forward difference, denoted Δ h [ f ] , {\displaystyle \Delta _{h}[f],} of a function f is a function defined as Δ h [ f ] ( x ) = f ( x + h ) − f ( x ) . {\displaystyle \Delta _{h}[f](x)=f(x+h)-f(x).} Depending on the application, the spacing h may be variable or constant. 
When omitted, h is taken to be 1; that is, Δ [ f ] ( x ) = Δ 1 [ f ] ( x ) = f ( x + 1 ) − f ( x ) . {\displaystyle \Delta [f](x)=\Delta _{1}[f](x)=f(x+1)-f(x).} A backward difference uses the function values at x and x − h, instead of the values at x + h and x: ∇ h [ f ] ( x ) = f ( x ) − f ( x − h ) = Δ h [ f ] ( x − h ) . {\displaystyle \nabla _{h}[f](x)=f(x)-f(x-h)=\Delta _{h}[f](x-h).} Finally, the central difference is given by δ h [ f ] ( x ) = f ( x + h 2 ) − f ( x − h 2 ) = Δ h / 2 [ f ] ( x ) + ∇ h / 2 [ f ] ( x ) . {\displaystyle \delta _{h}[f](x)=f(x+{\tfrac {h}{2}})-f(x-{\tfrac {h}{2}})=\Delta _{h/2}[f](x)+\nabla _{h/2}[f](x).} == Relation with derivatives == The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems. The derivative of a function f at a point x is defined by the limit f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h . {\displaystyle f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}.} If h has a fixed (non-zero) value instead of approaching zero, then the right-hand side of the above equation would be written f ( x + h ) − f ( x ) h = Δ h [ f ] ( x ) h . {\displaystyle {\frac {f(x+h)-f(x)}{h}}={\frac {\Delta _{h}[f](x)}{h}}.} Hence, the forward difference divided by h approximates the derivative when h is small. The error in this approximation can be derived from Taylor's theorem. Assuming that f is twice differentiable, we have Δ h [ f ] ( x ) h − f ′ ( x ) = O ( h ) → 0 as h → 0. {\displaystyle {\frac {\Delta _{h}[f](x)}{h}}-f'(x)=O(h)\to 0\quad {\text{as }}h\to 0.} The same formula holds for the backward difference: ∇ h [ f ] ( x ) h − f ′ ( x ) = O ( h ) → 0 as h → 0. {\displaystyle {\frac {\nabla _{h}[f](x)}{h}}-f'(x)=O(h)\to 0\quad {\text{as }}h\to 0.} However, the central (also called centered) difference yields a more accurate approximation.
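The relative accuracy of the three difference quotients can be checked numerically; a minimal sketch, using f = sin as an illustrative choice:

```python
import math

def forward(f, x, h):
    """Forward difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def backward(f, x, h):
    """Backward difference quotient (f(x) - f(x - h)) / h."""
    return (f(x) - f(x - h)) / h

def central(f, x, h):
    """Central difference quotient (f(x + h/2) - f(x - h/2)) / h."""
    return (f(x + h / 2) - f(x - h / 2)) / h

x, h = 1.0, 1e-3
exact = math.cos(x)  # derivative of sin

err_fwd = abs(forward(math.sin, x, h) - exact)   # error of order h
err_bwd = abs(backward(math.sin, x, h) - exact)  # error of order h
err_ctr = abs(central(math.sin, x, h) - exact)   # error of order h^2

# The central quotient is far more accurate for the same h.
assert err_ctr < err_fwd and err_ctr < err_bwd
```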
If f is three times differentiable, δ h [ f ] ( x ) h − f ′ ( x ) = O ( h 2 ) . {\displaystyle {\frac {\delta _{h}[f](x)}{h}}-f'(x)=O\left(h^{2}\right).} The main problem with the central difference method, however, is that oscillating functions can yield zero derivative. If f(nh) = 1 for n odd, and f(nh) = 2 for n even, then f′(nh) = 0 if it is calculated with the central difference scheme. This is particularly troublesome if the domain of f is discrete. See also Symmetric derivative. Authors for whom finite differences mean finite difference approximations define the forward/backward/central differences as the quotients given in this section (instead of employing the definitions given in the previous section). == Higher-order differences == In an analogous way, one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula for f′(x + ⁠h/2⁠) and f′(x − ⁠h/2⁠) and applying a central difference formula for the derivative of f′ at x, we obtain the central difference approximation of the second derivative of f: Second-order central f ″ ( x ) ≈ δ h 2 [ f ] ( x ) h 2 = f ( x + h ) − f ( x ) h − f ( x ) − f ( x − h ) h h = f ( x + h ) − 2 f ( x ) + f ( x − h ) h 2 . {\displaystyle f''(x)\approx {\frac {\delta _{h}^{2}[f](x)}{h^{2}}}={\frac {{\frac {f(x+h)-f(x)}{h}}-{\frac {f(x)-f(x-h)}{h}}}{h}}={\frac {f(x+h)-2f(x)+f(x-h)}{h^{2}}}.} Similarly we can apply other differencing formulas in a recursive manner. Second order forward f ″ ( x ) ≈ Δ h 2 [ f ] ( x ) h 2 = f ( x + 2 h ) − f ( x + h ) h − f ( x + h ) − f ( x ) h h = f ( x + 2 h ) − 2 f ( x + h ) + f ( x ) h 2 .
{\displaystyle f''(x)\approx {\frac {\Delta _{h}^{2}[f](x)}{h^{2}}}={\frac {{\frac {f(x+2h)-f(x+h)}{h}}-{\frac {f(x+h)-f(x)}{h}}}{h}}={\frac {f(x+2h)-2f(x+h)+f(x)}{h^{2}}}.} Second order backward f ″ ( x ) ≈ ∇ h 2 [ f ] ( x ) h 2 = f ( x ) − f ( x − h ) h − f ( x − h ) − f ( x − 2 h ) h h = f ( x ) − 2 f ( x − h ) + f ( x − 2 h ) h 2 . {\displaystyle f''(x)\approx {\frac {\nabla _{h}^{2}[f](x)}{h^{2}}}={\frac {{\frac {f(x)-f(x-h)}{h}}-{\frac {f(x-h)-f(x-2h)}{h}}}{h}}={\frac {f(x)-2f(x-h)+f(x-2h)}{h^{2}}}.} More generally, the n-th order forward, backward, and central differences are given by, respectively, Forward Δ h n [ f ] ( x ) = ∑ i = 0 n ( − 1 ) n − i ( n i ) f ( x + i h ) , {\displaystyle \Delta _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{n-i}{\binom {n}{i}}f{\bigl (}x+ih{\bigr )},} Backward ∇ h n [ f ] ( x ) = ∑ i = 0 n ( − 1 ) i ( n i ) f ( x − i h ) , {\displaystyle \nabla _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{i}{\binom {n}{i}}f(x-ih),} Central δ h n [ f ] ( x ) = ∑ i = 0 n ( − 1 ) i ( n i ) f ( x + ( n 2 − i ) h ) . {\displaystyle \delta _{h}^{n}[f](x)=\sum _{i=0}^{n}(-1)^{i}{\binom {n}{i}}f\left(x+\left({\frac {n}{2}}-i\right)h\right).} These equations use the binomial coefficients after the summation sign, shown as (ni). Each row of Pascal's triangle provides the coefficient for each value of i. Note that the central difference will, for odd n, have h multiplied by non-integers. This is often a problem because it amounts to changing the interval of discretization. The problem may be remedied by substituting the average of δ n [ f ] ( x − h 2 ) {\displaystyle \ \delta ^{n}[f](\ x-{\tfrac {\ h\ }{2}}\ )\ } and δ n [ f ] ( x + h 2 ) . {\displaystyle \ \delta ^{n}[f](\ x+{\tfrac {\ h\ }{2}}\ )~.} Forward differences applied to a sequence are sometimes called the binomial transform of the sequence, and have a number of interesting combinatorial properties. Forward differences may be evaluated using the Nörlund–Rice integral.
The integral representation for these types of series is interesting, because the integral can often be evaluated using asymptotic expansion or saddle-point techniques; by contrast, the forward difference series can be extremely hard to evaluate numerically, because the binomial coefficients grow rapidly for large n. The relationship of these higher-order differences with the respective derivatives is straightforward, d n f d x n ( x ) = Δ h n [ f ] ( x ) h n + O ( h ) = ∇ h n [ f ] ( x ) h n + O ( h ) = δ h n [ f ] ( x ) h n + O ( h 2 ) . {\displaystyle {\frac {d^{n}f}{dx^{n}}}(x)={\frac {\Delta _{h}^{n}[f](x)}{h^{n}}}+O(h)={\frac {\nabla _{h}^{n}[f](x)}{h^{n}}}+O(h)={\frac {\delta _{h}^{n}[f](x)}{h^{n}}}+O\left(h^{2}\right).} Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of order h. However, the combination Δ h [ f ] ( x ) − 1 2 Δ h 2 [ f ] ( x ) h = − f ( x + 2 h ) − 4 f ( x + h ) + 3 f ( x ) 2 h {\displaystyle {\frac {\Delta _{h}[f](x)-{\frac {1}{2}}\Delta _{h}^{2}[f](x)}{h}}=-{\frac {f(x+2h)-4f(x+h)+3f(x)}{2h}}} approximates f′(x) up to a term of order h2. This can be proven by expanding the above expression in Taylor series, or by using the calculus of finite differences, explained below. If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences. == Polynomials == For a given polynomial of degree n ≥ 1, expressed in the function P(x), with real numbers a ≠ 0 and b and lower order terms (if any) marked as l.o.t.: P ( x ) = a x n + b x n − 1 + l . o . t . {\displaystyle P(x)=ax^{n}+bx^{n-1}+l.o.t.} After n pairwise differences, the following result can be achieved, where h ≠ 0 is a real number marking the arithmetic difference: Δ h n [ P ] ( x ) = a h n n ! {\displaystyle \Delta _{h}^{n}[P](x)=ah^{n}n!} Only the coefficient of the highest-order term remains.
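This identity can be checked numerically; a brief sketch (the particular polynomial and step size are illustrative choices):

```python
import math

def nth_difference(f, x, h, n):
    """Apply the forward difference operator Δ_h n times at x."""
    if n == 0:
        return f(x)
    return nth_difference(f, x + h, h, n - 1) - nth_difference(f, x, h, n - 1)

# P(x) = 5x^3 - 2x^2 + 7: degree n = 3, leading coefficient a = 5; take h = 2.
def P(x):
    return 5 * x**3 - 2 * x**2 + 7

a, n, h = 5, 3, 2

# Δ_h^n [P](x) = a * h^n * n! = 5 * 8 * 6 = 240, independently of x.
for x in (0, 1, 10):
    assert nth_difference(P, x, h, n) == a * h**n * math.factorial(n)  # 240
```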
As this result is constant with respect to x, any further pairwise differences will have the value 0. === Inductive proof === ==== Base case ==== Let Q(x) be a polynomial of degree 1: Δ h [ Q ] ( x ) = Q ( x + h ) − Q ( x ) = [ a ( x + h ) + b ] − [ a x + b ] = a h = a h 1 1 ! {\displaystyle \Delta _{h}[Q](x)=Q(x+h)-Q(x)=[a(x+h)+b]-[ax+b]=ah=ah^{1}1!} This proves it for the base case. ==== Inductive step ==== Let R(x) be a polynomial of degree m − 1 where m ≥ 2 and the coefficient of the highest-order term be a ≠ 0. Assuming the following holds true for all polynomials of degree m − 1: Δ h m − 1 [ R ] ( x ) = a h m − 1 ( m − 1 ) ! {\displaystyle \Delta _{h}^{m-1}[R](x)=ah^{m-1}(m-1)!} Let S(x) be a polynomial of degree m. With one pairwise difference: Δ h [ S ] ( x ) = [ a ( x + h ) m + b ( x + h ) m − 1 + l.o.t. ] − [ a x m + b x m − 1 + l.o.t. ] = a h m x m − 1 + l.o.t. = T ( x ) {\displaystyle \Delta _{h}[S](x)=[a(x+h)^{m}+b(x+h)^{m-1}+{\text{l.o.t.}}]-[ax^{m}+bx^{m-1}+{\text{l.o.t.}}]=ahmx^{m-1}+{\text{l.o.t.}}=T(x)} As ahm ≠ 0, this results in a polynomial T(x) of degree m − 1, with ahm as the coefficient of the highest-order term. Given the assumption above and m − 1 pairwise differences (resulting in a total of m pairwise differences for S(x)), it can be found that: Δ h m − 1 [ T ] ( x ) = a h m ⋅ h m − 1 ( m − 1 ) ! = a h m m ! {\displaystyle \Delta _{h}^{m-1}[T](x)=ahm\cdot h^{m-1}(m-1)!=ah^{m}m!} This completes the proof. === Application === This identity can be used to find the lowest-degree polynomial that intercepts a number of points (x, y) where the difference on the x-axis from one point to the next is a constant h ≠ 0. 
For example, given the following points: We can use a differences table, where for all cells to the right of the first y, the following relation to the cells in the column immediately to the left exists for a cell (a + 1, b + 1), with the top-leftmost cell being at coordinate (0, 0): ( a + 1 , b + 1 ) = ( a , b + 1 ) − ( a , b ) {\displaystyle (a+1,b+1)=(a,b+1)-(a,b)} To find the first term, the following table can be used: This arrives at a constant 648. The arithmetic difference is h = 3, as established above. Given the number of pairwise differences needed to reach the constant, it can be surmised that this is a polynomial of degree 3. Thus, using the identity above: 648 = a ⋅ 3 3 ⋅ 3 ! = a ⋅ 27 ⋅ 6 = a ⋅ 162 {\displaystyle 648=a\cdot 3^{3}\cdot 3!=a\cdot 27\cdot 6=a\cdot 162} Solving for a, it is found to have the value 4. Thus, the first term of the polynomial is 4x3. Then, subtracting out the first term, which lowers the polynomial's degree, and finding the finite difference again: Here, the constant is achieved after only two pairwise differences, thus the following result: − 306 = a ⋅ 3 2 ⋅ 2 ! = a ⋅ 18 {\displaystyle -306=a\cdot 3^{2}\cdot 2!=a\cdot 18} Solving for a gives −17, so the polynomial's second term is −17x2. Moving on to the next term, by subtracting out the second term: Thus the constant is achieved after only one pairwise difference: 108 = a ⋅ 3 1 ⋅ 1 ! = a ⋅ 3 {\displaystyle 108=a\cdot 3^{1}\cdot 1!=a\cdot 3} It can be found that a = 36 and thus the third term of the polynomial is 36x. Subtracting out the third term: Without any pairwise differences, it is found that the 4th and final term of the polynomial is the constant −19.
Thus, the lowest-degree polynomial intercepting all the points in the first table is found: 4 x 3 − 17 x 2 + 36 x − 19 {\displaystyle 4x^{3}-17x^{2}+36x-19} == Arbitrarily sized kernels == Using linear algebra one can construct finite difference approximations which utilize an arbitrary number of points to the left and a (possibly different) number of points to the right of the evaluation point, for any order derivative. This involves solving a linear system such that the Taylor expansion of the sum of those points around the evaluation point best approximates the Taylor expansion of the desired derivative. Such formulas can be represented graphically on a hexagonal or diamond-shaped grid. This is useful for differentiating a function on a grid, where, as one approaches the edge of the grid, one must sample fewer and fewer points on one side. Finite difference approximations for non-standard (and even non-integer) stencils given an arbitrary stencil and a desired derivative order may be constructed. === Properties === For all positive k and n Δ k h n ( f , x ) = ∑ i 1 = 0 k − 1 ∑ i 2 = 0 k − 1 ⋯ ∑ i n = 0 k − 1 Δ h n ( f , x + i 1 h + i 2 h + ⋯ + i n h ) . {\displaystyle \Delta _{kh}^{n}(f,x)=\sum \limits _{i_{1}=0}^{k-1}\sum \limits _{i_{2}=0}^{k-1}\cdots \sum \limits _{i_{n}=0}^{k-1}\Delta _{h}^{n}\left(f,x+i_{1}h+i_{2}h+\cdots +i_{n}h\right).} Leibniz rule: Δ h n ( f g , x ) = ∑ k = 0 n ( n k ) Δ h k ( f , x ) Δ h n − k ( g , x + k h ) . {\displaystyle \Delta _{h}^{n}(fg,x)=\sum \limits _{k=0}^{n}{\binom {n}{k}}\Delta _{h}^{k}(f,x)\Delta _{h}^{n-k}(g,x+kh).} == In differential equations == An important application of finite differences is in numerical analysis, especially in numerical differential equations, which aim at the numerical solution of ordinary and partial differential equations. The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them. 
The resulting methods are called finite difference methods. Common applications of the finite difference method are in computational science and engineering disciplines, such as thermal engineering, fluid mechanics, etc. == Newton's series == The Newton series consists of the terms of the Newton forward difference equation, named after Isaac Newton; in essence, it is the Gregory–Newton interpolation formula (named after Isaac Newton and James Gregory), first published by Newton in his Principia Mathematica in 1687, namely the discrete analog of the continuous Taylor expansion, f ( x ) = ∑ k = 0 ∞ Δ k [ f ] ( a ) k ! ( x − a ) k = ∑ k = 0 ∞ ( x − a k ) Δ k [ f ] ( a ) , {\displaystyle f(x)=\sum _{k=0}^{\infty }{\frac {\Delta ^{k}[f](a)}{k!}}\,(x-a)_{k}=\sum _{k=0}^{\infty }{\binom {x-a}{k}}\,\Delta ^{k}[f](a),} which holds for any polynomial function f and for many (but not all) analytic functions. (It does not hold when f is of exponential type π {\displaystyle \pi } . This is easily seen, as the sine function vanishes at integer multiples of π {\displaystyle \pi } ; the corresponding Newton series is identically zero, as all finite differences are zero in this case. Yet clearly, the sine function is not zero.) Here, the expression ( x k ) = ( x ) k k ! {\displaystyle {\binom {x}{k}}={\frac {(x)_{k}}{k!}}} is the binomial coefficient, and ( x ) k = x ( x − 1 ) ( x − 2 ) ⋯ ( x − k + 1 ) {\displaystyle (x)_{k}=x(x-1)(x-2)\cdots (x-k+1)} is the "falling factorial" or "lower factorial", while the empty product (x)0 is defined to be 1. In this particular case, there is an assumption of unit steps for the changes in the values of x, h = 1 of the generalization below. Note the formal correspondence of this result to Taylor's theorem. Historically, this, as well as the Chu–Vandermonde identity, ( x + y ) n = ∑ k = 0 n ( n k ) ( x ) n − k ( y ) k , {\displaystyle (x+y)_{n}=\sum _{k=0}^{n}{\binom {n}{k}}(x)_{n-k}\,(y)_{k},} (following from it, and corresponding to the binomial theorem), are included in the observations that matured to the system of umbral calculus.
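The forward-difference series with unit steps can be evaluated directly from a table of values. A minimal sketch (sample data chosen for illustration): since the data come from a polynomial, the difference rows eventually vanish and the series reproduces the polynomial exactly, even between the sample points.

```python
# Evaluate the Newton forward-difference series with unit steps (h = 1):
#   f(x) = sum_k  Delta^k f(0) * C(x, k)
# For data sampled from a polynomial the series terminates and is exact.

def newton_forward(values, x):
    """Newton series through values[0..] taken at 0, 1, 2, ... (h = 1)."""
    diffs = list(values)
    result, binom = 0.0, 1.0          # binom holds C(x, k), starting at C(x, 0) = 1
    for k in range(len(values)):
        result += diffs[0] * binom
        binom *= (x - k) / (k + 1)    # C(x, k+1) = C(x, k) * (x - k)/(k + 1)
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]   # next difference row
    return result

samples = [n**2 + 1 for n in range(4)]     # f(n) = n^2 + 1 at n = 0, 1, 2, 3
print(newton_forward(samples, 2.5))        # 7.25, i.e. 2.5^2 + 1
```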
Newton series expansions can be superior to Taylor series expansions when applied to discrete quantities like quantum spins (see Holstein–Primakoff transformation), bosonic operator functions or discrete counting statistics. To illustrate how one may use Newton's formula in actual practice, consider the first few terms of doubling the Fibonacci sequence f = 2, 2, 4, ... One can find a polynomial that reproduces these values, by first computing a difference table, and then substituting the differences that correspond to x0 (underlined) into the formula as follows, x f = Δ 0 Δ 1 Δ 2 1 2 _ 0 _ 2 2 2 _ 2 3 4 f ( x ) = Δ 0 ⋅ 1 + Δ 1 ⋅ ( x − x 0 ) 1 1 ! + Δ 2 ⋅ ( x − x 0 ) 2 2 ! ( x 0 = 1 ) = 2 ⋅ 1 + 0 ⋅ x − 1 1 + 2 ⋅ ( x − 1 ) ( x − 2 ) 2 = 2 + ( x − 1 ) ( x − 2 ) {\displaystyle {\begin{matrix}{\begin{array}{|c||c|c|c|}\hline x&f=\Delta ^{0}&\Delta ^{1}&\Delta ^{2}\\\hline 1&{\underline {2}}&&\\&&{\underline {0}}&\\2&2&&{\underline {2}}\\&&2&\\3&4&&\\\hline \end{array}}&\quad {\begin{aligned}f(x)&=\Delta ^{0}\cdot 1+\Delta ^{1}\cdot {\dfrac {(x-x_{0})_{1}}{1!}}+\Delta ^{2}\cdot {\dfrac {(x-x_{0})_{2}}{2!}}\quad (x_{0}=1)\\\\&=2\cdot 1+0\cdot {\dfrac {x-1}{1}}+2\cdot {\dfrac {(x-1)(x-2)}{2}}\\\\&=2+(x-1)(x-2)\\\end{aligned}}\end{matrix}}} For the case of nonuniform steps in the values of x, Newton computes the divided differences, Δ j , 0 = y j , Δ j , k = Δ j + 1 , k − 1 − Δ j , k − 1 x j + k − x j ∋ { k > 0 , j ≤ max ( j ) − k } , Δ 0 k = Δ 0 , k {\displaystyle \Delta _{j,0}=y_{j},\qquad \Delta _{j,k}={\frac {\Delta _{j+1,k-1}-\Delta _{j,k-1}}{x_{j+k}-x_{j}}}\quad \ni \quad \left\{k>0,\;j\leq \max \left(j\right)-k\right\},\qquad \Delta 0_{k}=\Delta _{0,k}} the series of products, P 0 = 1 , P k + 1 = P k ⋅ ( ξ − x k ) , {\displaystyle {P_{0}}=1,\quad \quad P_{k+1}=P_{k}\cdot \left(\xi -x_{k}\right),} and the resulting polynomial is the scalar product, f ( ξ ) = Δ 0 ⋅ P ( ξ ) . 
{\displaystyle f(\xi )=\Delta 0\cdot P\left(\xi \right).} In analysis with p-adic numbers, Mahler's theorem states that the assumption that f is a polynomial function can be weakened all the way to the assumption that f is merely continuous. Carlson's theorem provides necessary and sufficient conditions for a Newton series to be unique, if it exists. However, a Newton series does not, in general, exist. The Newton series, together with the Stirling series and the Selberg series, is a special case of the general difference series, all of which are defined in terms of suitably scaled forward differences. In a compressed and slightly more general form, with equidistant nodes, the formula reads f ( x ) = ∑ k = 0 ( x − a h k ) ∑ j = 0 k ( − 1 ) k − j ( k j ) f ( a + j h ) . {\displaystyle f(x)=\sum _{k=0}{\binom {\frac {x-a}{h}}{k}}\sum _{j=0}^{k}(-1)^{k-j}{\binom {k}{j}}f(a+jh).} == Calculus of finite differences == The forward difference can be considered as an operator, called the difference operator, which maps the function f to Δh[f]. This operator amounts to Δ h = T h − I ⁡ , {\displaystyle \Delta _{h}=\operatorname {T} _{h}-\operatorname {I} \ ,} where Th is the shift operator with step h, defined by Th[f](x) = f(x + h), and I is the identity operator. Finite differences of higher orders can be defined recursively as Δnh ≡ Δh(Δn − 1h). Another equivalent definition is Δnh ≡ [Th − I]n. The difference operator Δh is a linear operator; as such, it satisfies Δh[α f + β g](x) = α Δh[f](x) + β Δh[g](x). It also satisfies a special Leibniz rule: Δ h ⁡ ( f ( x ) g ( x ) ) = ( Δ h ⁡ f ( x ) ) g ( x + h ) + f ( x ) ( Δ h ⁡ g ( x ) ) . {\displaystyle \ \operatorname {\Delta } _{h}{\bigl (}\ f(x)\ g(x)\ {\bigr )}\ =\ {\bigl (}\ \operatorname {\Delta } _{h}f(x)\ {\bigr )}\ g(x+h)\ +\ f(x)\ {\bigl (}\ \operatorname {\Delta } _{h}g(x)\ {\bigr )}~.} Similar Leibniz rules hold for the backward and central differences.
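The special Leibniz rule is an exact identity, not an approximation, which a quick numerical check illustrates (functions, step size, and evaluation point chosen arbitrarily):

```python
# The Leibniz rule Delta_h(f g)(x) = (Delta_h f)(x) g(x+h) + f(x) (Delta_h g)(x)
# holds exactly; check it at one point with arbitrary smooth functions.
import math

h, x = 0.5, 1.2

def delta(f):
    """Forward difference operator Delta_h applied to f."""
    return lambda t: f(t + h) - f(t)

f, g = math.sin, math.exp

lhs = delta(lambda t: f(t) * g(t))(x)
rhs = delta(f)(x) * g(x + h) + f(x) * delta(g)(x)
print(abs(lhs - rhs) < 1e-12)   # True (equal up to floating-point rounding)
```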
Formally applying the Taylor series with respect to h yields the operator equation Δ h = h D + 1 2 ! h 2 D 2 + 1 3 ! h 3 D 3 + ⋯ = e h D − I ⁡ , {\displaystyle \operatorname {\Delta } _{h}=h\operatorname {D} +{\frac {1}{2!}}h^{2}\operatorname {D} ^{2}+{\frac {1}{3!}}h^{3}\operatorname {D} ^{3}+\cdots =e^{h\operatorname {D} }-\operatorname {I} \ ,} where D denotes the conventional, continuous derivative operator, mapping f to its derivative f′. The expansion is valid when both sides act on analytic functions, for sufficiently small h; in the special case that the series of derivatives terminates (when the function operated on is a polynomial) the expression is exact for all finite step sizes h. Thus Th = eh D, and formally inverting the exponential yields h D = ln ⁡ ( 1 + Δ h ) = Δ h − 1 2 Δ h 2 + 1 3 Δ h 3 − ⋯ . {\displaystyle h\operatorname {D} =\ln(1+\Delta _{h})=\Delta _{h}-{\tfrac {1}{2}}\,\Delta _{h}^{2}+{\tfrac {1}{3}}\,\Delta _{h}^{3}-\cdots ~.} This formula holds in the sense that both operators give the same result when applied to a polynomial. Even for analytic functions, the series on the right is not guaranteed to converge; it may be an asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation to f ′(x) mentioned at the end of the section § Higher-order differences. The analogous formulas for the backward and central difference operators are h D = − ln ⁡ ( 1 − ∇ h ) and h D = 2 arsinh ⁡ ( 1 2 δ h ) . {\displaystyle h\operatorname {D} =-\ln(1-\nabla _{h})\quad {\text{ and }}\quad h\operatorname {D} =2\operatorname {arsinh} \left({\tfrac {1}{2}}\,\delta _{h}\right)~.} The calculus of finite differences is related to the umbral calculus of combinatorics.
This remarkably systematic correspondence is due to the identity of the commutators of the umbral quantities to their continuum analogs (h → 0 limits). A large number of formal differential relations of standard calculus involving functions f(x) thus systematically map to umbral finite-difference analogs involving f( x T−1h ). For instance, the umbral analog of a monomial xn is a generalization of the above falling factorial (Pochhammer k-symbol), ( x ) n ≡ ( x T h − 1 ) n = x ( x − h ) ( x − 2 h ) ⋯ ( x − ( n − 1 ) h ) , {\displaystyle \ (x)_{n}\equiv \left(\ x\ \operatorname {T} _{h}^{-1}\right)^{n}=x\left(x-h\right)\left(x-2h\right)\cdots {\bigl (}x-\left(n-1\right)\ h{\bigr )}\ ,} so that Δ h h ( x ) n = n ( x ) n − 1 , {\displaystyle \ {\frac {\Delta _{h}}{h}}(x)_{n}=n\ (x)_{n-1}\ ,} hence the above Newton interpolation formula (by matching coefficients in the expansion of an arbitrary function f(x) in such symbols), and so on. For example, the umbral sine is sin ⁡ ( x T h − 1 ) = x − ( x ) 3 3 ! + ( x ) 5 5 ! − ( x ) 7 7 ! + ⋯ {\displaystyle \ \sin \left(x\ \operatorname {T} _{h}^{-1}\right)=x-{\frac {(x)_{3}}{3!}}+{\frac {(x)_{5}}{5!}}-{\frac {(x)_{7}}{7!}}+\cdots \ } As in the continuum limit, the eigenfunction of Δh/h also happens to be an exponential, Δ h h ( 1 + λ h ) x h = Δ h h e ln ⁡ ( 1 + λ h ) x h = λ e ln ⁡ ( 1 + λ h ) x h , {\displaystyle \ {\frac {\Delta _{h}}{h}}(1+\lambda h)^{\frac {x}{h}}={\frac {\Delta _{h}}{h}}e^{\ln(1+\lambda h){\frac {x}{h}}}=\lambda e^{\ln(1+\lambda h){\frac {x}{h}}}\ ,} and hence Fourier sums of continuum functions are readily, faithfully mapped to umbral Fourier sums, i.e., involving the same Fourier coefficients multiplying these umbral basis exponentials. This umbral exponential thus amounts to the exponential generating function of the Pochhammer symbols.
Thus, for instance, the Dirac delta function maps to its umbral correspondent, the cardinal sine function δ ( x ) ↦ sin ⁡ [ π 2 ( 1 + x h ) ] π ( x + h ) , {\displaystyle \ \delta (x)\mapsto {\frac {\sin \left[{\frac {\pi }{2}}\left(1+{\frac {x}{h}}\right)\right]}{\pi (x+h)}}\ ,} and so forth. Difference equations can often be solved with techniques very similar to those for solving differential equations. The inverse operator of the forward difference operator, so then the umbral integral, is the indefinite sum or antidifference operator. === Rules for calculus of finite difference operators === Analogous to rules for finding the derivative, we have: Constant rule: If c is a constant, then Δ c = 0 {\displaystyle \ \Delta c=0\ } Linearity: If a and b are constants, Δ ( a f + b g ) = a Δ f + b Δ g {\displaystyle \ \Delta (a\ f+b\ g)=a\ \Delta f+b\ \Delta g\ } All of the above rules apply equally well to any difference operator as to Δ, including δ and ∇. Product rule: Δ ( f g ) = f Δ g + g Δ f + Δ f Δ g ∇ ( f g ) = f ∇ g + g ∇ f − ∇ f ∇ g {\displaystyle {\begin{aligned}\ \Delta (fg)&=f\,\Delta g+g\,\Delta f+\Delta f\,\Delta g\\[4pt]\nabla (fg)&=f\,\nabla g+g\,\nabla f-\nabla f\,\nabla g\ \end{aligned}}} Quotient rule: ∇ ( f g ) = ( det [ ∇ f ∇ g f g ] ) / ( g ⋅ det [ g ∇ g 1 1 ] ) {\displaystyle \ \nabla \left({\frac {f}{g}}\right)=\left.\left(\det {\begin{bmatrix}\nabla f&\nabla g\\f&g\end{bmatrix}}\right)\right/\left(g\cdot \det {\begin{bmatrix}g&\nabla g\\1&1\end{bmatrix}}\right)} or ∇ ( f g ) = g ∇ f − f ∇ g g ⋅ ( g − ∇ g ) {\displaystyle \nabla \left({\frac {f}{g}}\right)={\frac {g\,\nabla f-f\,\nabla g}{g\cdot (g-\nabla g)}}\ } Summation rules: ∑ n = a b Δ f ( n ) = f ( b + 1 ) − f ( a ) ∑ n = a b ∇ f ( n ) = f ( b ) − f ( a − 1 ) {\displaystyle {\begin{aligned}\ \sum _{n=a}^{b}\Delta f(n)&=f(b+1)-f(a)\\\sum _{n=a}^{b}\nabla f(n)&=f(b)-f(a-1)\ \end{aligned}}} See references. 
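The summation rules are the discrete analogue of the fundamental theorem of calculus and follow from telescoping; a minimal check with an arbitrary integer sequence:

```python
# The forward summation rule telescopes: sum_{n=a}^{b} Delta f(n) = f(b+1) - f(a).
def f(n):
    return n**3 - 2 * n        # any sequence works; integers keep it exact

a, b = 2, 9
lhs = sum(f(n + 1) - f(n) for n in range(a, b + 1))   # sum of Delta f(n)
print(lhs == f(b + 1) - f(a))  # True
```

The backward rule is verified the same way with ∇f(n) = f(n) − f(n − 1).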
== Generalizations == A generalized finite difference is usually defined as Δ h μ [ f ] ( x ) = ∑ k = 0 N μ k f ( x + k h ) , {\displaystyle \Delta _{h}^{\mu }[f](x)=\sum _{k=0}^{N}\mu _{k}f(x+kh),} where μ = (μ0, …, μN) is its coefficient vector. An infinite difference is a further generalization, where the finite sum above is replaced by an infinite series. Another way of generalizing is to make the coefficients μk depend on the point x: μk = μk(x), giving a weighted finite difference. One may also make the step h depend on the point x: h = h(x). Such generalizations are useful for constructing different moduli of continuity. Generalized differences can be viewed as elements of the polynomial ring R[Th]; they lead to difference algebras. The difference operator generalizes to Möbius inversion over a partially ordered set. As a convolution operator: via the formalism of incidence algebras, difference operators and other Möbius inversions can be represented by convolution with a function on the poset, called the Möbius function μ; for the difference operator, μ is the sequence (1, −1, 0, 0, 0, …). == Multivariate finite differences == Finite differences can be considered in more than one variable. They are analogous to partial derivatives in several variables. Some partial derivative approximations are: f x ( x , y ) ≈ f ( x + h , y ) − f ( x − h , y ) 2 h f y ( x , y ) ≈ f ( x , y + k ) − f ( x , y − k ) 2 k f x x ( x , y ) ≈ f ( x + h , y ) − 2 f ( x , y ) + f ( x − h , y ) h 2 f y y ( x , y ) ≈ f ( x , y + k ) − 2 f ( x , y ) + f ( x , y − k ) k 2 f x y ( x , y ) ≈ f ( x + h , y + k ) − f ( x + h , y − k ) − f ( x − h , y + k ) + f ( x − h , y − k ) 4 h k . 
{\displaystyle {\begin{aligned}f_{x}(x,y)&\approx {\frac {f(x+h,y)-f(x-h,y)}{2h}}\\f_{y}(x,y)&\approx {\frac {f(x,y+k)-f(x,y-k)}{2k}}\\f_{xx}(x,y)&\approx {\frac {f(x+h,y)-2f(x,y)+f(x-h,y)}{h^{2}}}\\f_{yy}(x,y)&\approx {\frac {f(x,y+k)-2f(x,y)+f(x,y-k)}{k^{2}}}\\f_{xy}(x,y)&\approx {\frac {f(x+h,y+k)-f(x+h,y-k)-f(x-h,y+k)+f(x-h,y-k)}{4hk}}.\end{aligned}}} Alternatively, for applications in which the computation of f is the most costly step, and both first and second derivatives must be computed, a more efficient formula for the last case is f x y ( x , y ) ≈ f ( x + h , y + k ) − f ( x + h , y ) − f ( x , y + k ) + 2 f ( x , y ) − f ( x − h , y ) − f ( x , y − k ) + f ( x − h , y − k ) 2 h k , {\displaystyle f_{xy}(x,y)\approx {\frac {f(x+h,y+k)-f(x+h,y)-f(x,y+k)+2f(x,y)-f(x-h,y)-f(x,y-k)+f(x-h,y-k)}{2hk}},} since the only values to compute that are not already needed for the previous four equations are f(x + h, y + k) and f(x − h, y − k). == See also == == References == == External links == "Finite-difference calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Table of useful finite difference formula generated using Mathematica D. Gleich (2005), Finite Calculus: A Tutorial for Solving Nasty Sums Discrete Second Derivative from Unevenly Spaced Points
Wikipedia:Finite difference coefficient#0
In mathematics, to approximate a derivative to an arbitrary order of accuracy, it is possible to use the finite difference. A finite difference can be central, forward or backward. == Central finite difference == This table contains the coefficients of the central differences, for several orders of accuracy and with uniform grid spacing: For example, the third derivative with a second-order accuracy is f ‴ ( x 0 ) ≈ − 1 2 f ( x − 2 ) + f ( x − 1 ) − f ( x + 1 ) + 1 2 f ( x + 2 ) h x 3 + O ( h x 2 ) , {\displaystyle f'''(x_{0})\approx {\frac {-{\frac {1}{2}}f(x_{-2})+f(x_{-1})-f(x_{+1})+{\frac {1}{2}}f(x_{+2})}{h_{x}^{3}}}+O\left(h_{x}^{2}\right),} where h x {\displaystyle h_{x}} represents a uniform grid spacing between each finite difference interval, and x n = x 0 + n h x {\displaystyle x_{n}=x_{0}+nh_{x}} . For the m {\displaystyle m} -th derivative with accuracy n {\displaystyle n} , there are 2 p + 1 = 2 ⌊ m + 1 2 ⌋ − 1 + n {\displaystyle 2p+1=2\left\lfloor {\frac {m+1}{2}}\right\rfloor -1+n} central coefficients a − p , a − p + 1 , . . . , a p − 1 , a p {\displaystyle a_{-p},a_{-p+1},...,a_{p-1},a_{p}} . These are given by the solution of the linear equation system ( 1 1 . . . 1 1 − p − p + 1 . . . p − 1 p ( − p ) 2 ( − p + 1 ) 2 . . . ( p − 1 ) 2 p 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ( − p ) 2 p ( − p + 1 ) 2 p . . . ( p − 1 ) 2 p p 2 p ) ( a − p a − p + 1 a − p + 2 . . . . . . . . . a p ) = ( 0 0 0 . . . m ! . . . 0 ) , {\displaystyle {\begin{pmatrix}1&1&...&1&1\\-p&-p+1&...&p-1&p\\(-p)^{2}&(-p+1)^{2}&...&(p-1)^{2}&p^{2}\\...&...&...&...&...\\...&...&...&...&...\\...&...&...&...&...\\(-p)^{2p}&(-p+1)^{2p}&...&(p-1)^{2p}&p^{2p}\end{pmatrix}}{\begin{pmatrix}a_{-p}\\a_{-p+1}\\a_{-p+2}\\...\\...\\...\\a_{p}\end{pmatrix}}={\begin{pmatrix}0\\0\\0\\...\\m!\\...\\0\end{pmatrix}},} where the only non-zero value on the right hand side is in the ( m + 1 ) {\displaystyle (m+1)} -th row. 
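The linear system above can be solved directly with exact rational arithmetic. A minimal stdlib-only sketch (it takes the derivative order m and the stencil half-width p as inputs, rather than deriving p from the accuracy order):

```python
# Solve the linear system above for central difference coefficients,
# using exact rational Gauss-Jordan elimination. m is the derivative
# order and p the half-width of the stencil {-p, ..., p}.
from fractions import Fraction
from math import factorial

def central_coefficients(m, p):
    """Coefficients a_{-p}, ..., a_p for the m-th derivative."""
    size = 2 * p + 1
    # Row i holds the i-th powers of the stencil offsets; the right-hand
    # side is m! in row m and 0 elsewhere, as in the system above.
    A = [[Fraction(k) ** i for k in range(-p, p + 1)] +
         [Fraction(factorial(m) if i == m else 0)]
         for i in range(size)]
    for col in range(size):                    # Gauss-Jordan elimination
        piv = next(r for r in range(col, size) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(size):
            if r != col and A[r][col] != 0:
                A[r] = [v - A[r][col] * w for v, w in zip(A[r], A[col])]
    return [row[-1] for row in A]

# Second derivative on a 3-point stencil: the familiar coefficients 1, -2, 1.
print(central_coefficients(2, 1))
```

With m = 1, p = 1 the same routine returns −1/2, 0, 1/2, the second-order central first-derivative stencil.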
An open source implementation for calculating finite difference coefficients of arbitrary derivatives and accuracy orders in one dimension is available. Given that the left-hand side matrix J T {\displaystyle \mathbf {J} ^{T}} is a transposed Vandermonde matrix, a rearrangement reveals that the coefficients are in effect computed by fitting a 2 p {\displaystyle 2p} -th order polynomial to a window of 2 p + 1 {\displaystyle 2p+1} points and differentiating it. Consequently, the coefficients can also be computed as the m {\displaystyle m} -th order derivative of a fully determined Savitzky–Golay filter with polynomial degree 2 p {\displaystyle 2p} and a window size of 2 p + 1 {\displaystyle 2p+1} . For this, open source implementations are also available. There are two possible definitions, which differ in the ordering of the coefficients: a filter for filtering via discrete convolution, or one for filtering via a matrix-vector product. The coefficients given in the table above correspond to the latter definition. The theory of Lagrange polynomials provides explicit formulas for the finite difference coefficients. For the first six derivatives we have the following: where H n , m {\displaystyle H_{n,m}} are generalized harmonic numbers. 
== Forward finite difference == This table contains the coefficients of the forward differences, for several orders of accuracy and with uniform grid spacing: For example, the first derivative with a third-order accuracy and the second derivative with a second-order accuracy are f ′ ( x 0 ) ≈ − 11 6 f ( x 0 ) + 3 f ( x + 1 ) − 3 2 f ( x + 2 ) + 1 3 f ( x + 3 ) h x + O ( h x 3 ) , {\displaystyle \displaystyle f'(x_{0})\approx \displaystyle {\frac {-{\frac {11}{6}}f(x_{0})+3f(x_{+1})-{\frac {3}{2}}f(x_{+2})+{\frac {1}{3}}f(x_{+3})}{h_{x}}}+O\left(h_{x}^{3}\right),} f ″ ( x 0 ) ≈ 2 f ( x 0 ) − 5 f ( x + 1 ) + 4 f ( x + 2 ) − f ( x + 3 ) h x 2 + O ( h x 2 ) , {\displaystyle \displaystyle f''(x_{0})\approx \displaystyle {\frac {2f(x_{0})-5f(x_{+1})+4f(x_{+2})-f(x_{+3})}{h_{x}^{2}}}+O\left(h_{x}^{2}\right),} while the corresponding backward approximations are given by f ′ ( x 0 ) ≈ 11 6 f ( x 0 ) − 3 f ( x − 1 ) + 3 2 f ( x − 2 ) − 1 3 f ( x − 3 ) h x + O ( h x 3 ) , {\displaystyle \displaystyle f'(x_{0})\approx \displaystyle {\frac {{\frac {11}{6}}f(x_{0})-3f(x_{-1})+{\frac {3}{2}}f(x_{-2})-{\frac {1}{3}}f(x_{-3})}{h_{x}}}+O\left(h_{x}^{3}\right),} f ″ ( x 0 ) ≈ 2 f ( x 0 ) − 5 f ( x − 1 ) + 4 f ( x − 2 ) − f ( x − 3 ) h x 2 + O ( h x 2 ) , {\displaystyle \displaystyle f''(x_{0})\approx \displaystyle {\frac {2f(x_{0})-5f(x_{-1})+4f(x_{-2})-f(x_{-3})}{h_{x}^{2}}}+O\left(h_{x}^{2}\right),} == Backward finite difference == To get the coefficients of the backward approximations from those of the forward ones, give all odd derivatives listed in the table in the previous section the opposite sign, whereas for even derivatives the signs stay the same. 
The following table illustrates this: == Arbitrary stencil points == For N {\displaystyle \displaystyle N} arbitrary stencil points s {\displaystyle \displaystyle s} and any derivative of order d < N {\displaystyle \displaystyle d<N} up to one less than the number of stencil points, the finite difference coefficients can be obtained by solving the linear equations ( s 1 0 ⋯ s N 0 ⋮ ⋱ ⋮ s 1 N − 1 ⋯ s N N − 1 ) ( a 1 ⋮ a N ) = d ! ( δ 0 , d ⋮ δ i , d ⋮ δ N − 1 , d ) , {\displaystyle {\begin{pmatrix}s_{1}^{0}&\cdots &s_{N}^{0}\\\vdots &\ddots &\vdots \\s_{1}^{N-1}&\cdots &s_{N}^{N-1}\end{pmatrix}}{\begin{pmatrix}a_{1}\\\vdots \\a_{N}\end{pmatrix}}=d!{\begin{pmatrix}\delta _{0,d}\\\vdots \\\delta _{i,d}\\\vdots \\\delta _{N-1,d}\end{pmatrix}},} where δ i , j {\displaystyle \delta _{i,j}} is the Kronecker delta, equal to one if i = j {\displaystyle i=j} , and zero otherwise. Example, for s = [ − 3 , − 2 , − 1 , 0 , 1 ] {\displaystyle s=[-3,-2,-1,0,1]} , order of differentiation d = 4 {\displaystyle d=4} : ( a 1 a 2 a 3 a 4 a 5 ) = ( 1 1 1 1 1 − 3 − 2 − 1 0 1 9 4 1 0 1 − 27 − 8 − 1 0 1 81 16 1 0 1 ) − 1 ( 0 0 0 0 24 ) = ( 1 − 4 6 − 4 1 ) . {\displaystyle {\begin{pmatrix}a_{1}\\a_{2}\\a_{3}\\a_{4}\\a_{5}\end{pmatrix}}={\begin{pmatrix}1&1&1&1&1\\-3&-2&-1&0&1\\9&4&1&0&1\\-27&-8&-1&0&1\\81&16&1&0&1\\\end{pmatrix}}^{-1}{\begin{pmatrix}0\\0\\0\\0\\24\end{pmatrix}}={\begin{pmatrix}1\\-4\\6\\-4\\1\end{pmatrix}}.} The order of accuracy of the approximation takes the usual form O ( h x ( N − d ) ) {\displaystyle O\left(h_{x}^{(N-d)}\right)} (or better in the case of central finite difference). == See also == Finite difference method Finite difference Five-point stencil Numerical differentiation == References ==
Wikipedia:Finite difference method#0
In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time domain (if applicable) are discretized, or broken into a finite number of intervals, and the values of the solution at the end points of the intervals are approximated by solving algebraic equations containing finite differences and values from nearby points. Finite difference methods convert ordinary differential equations (ODE) or partial differential equations (PDE), which may be nonlinear, into a system of linear equations that can be solved by matrix algebra techniques. Modern computers can perform these linear algebra computations efficiently, and this, along with their relative ease of implementation, has led to the widespread use of FDM in modern numerical analysis. Today, FDMs are one of the most common approaches to the numerical solution of PDE, along with finite element methods. == Deriving the difference quotient from Taylor's polynomial == For an n-times differentiable function, by Taylor's theorem the Taylor series expansion is given as f ( x 0 + h ) = f ( x 0 ) + f ′ ( x 0 ) 1 ! h + f ( 2 ) ( x 0 ) 2 ! h 2 + ⋯ + f ( n ) ( x 0 ) n ! h n + R n ( x ) , {\displaystyle f(x_{0}+h)=f(x_{0})+{\frac {f'(x_{0})}{1!}}h+{\frac {f^{(2)}(x_{0})}{2!}}h^{2}+\cdots +{\frac {f^{(n)}(x_{0})}{n!}}h^{n}+R_{n}(x),} where n! denotes the factorial of n, and Rn(x) is a remainder term, denoting the difference between the Taylor polynomial of degree n and the original function. An approximation for the first derivative of the function f can be derived by first truncating the Taylor polynomial plus remainder: f ( x 0 + h ) = f ( x 0 ) + f ′ ( x 0 ) h + R 1 ( x ) . 
{\displaystyle f(x_{0}+h)=f(x_{0})+f'(x_{0})h+R_{1}(x).} Dividing across by h gives: f ( x 0 + h ) h = f ( x 0 ) h + f ′ ( x 0 ) + R 1 ( x ) h {\displaystyle {f(x_{0}+h) \over h}={f(x_{0}) \over h}+f'(x_{0})+{R_{1}(x) \over h}} Solving for f ′ ( x 0 ) {\displaystyle f'(x_{0})} : f ′ ( x 0 ) = f ( x 0 + h ) − f ( x 0 ) h − R 1 ( x ) h . {\displaystyle f'(x_{0})={f(x_{0}+h)-f(x_{0}) \over h}-{R_{1}(x) \over h}.} Assuming that R 1 ( x ) {\displaystyle R_{1}(x)} is sufficiently small, the approximation of the first derivative of f is: f ′ ( x 0 ) ≈ f ( x 0 + h ) − f ( x 0 ) h . {\displaystyle f'(x_{0})\approx {f(x_{0}+h)-f(x_{0}) \over h}.} This is similar to the definition of the derivative, which is: f ′ ( x 0 ) = lim h → 0 f ( x 0 + h ) − f ( x 0 ) h , {\displaystyle f'(x_{0})=\lim _{h\to 0}{\frac {f(x_{0}+h)-f(x_{0})}{h}},} except that the limit towards zero is not taken (the "finite" in the method's name refers to this). == Accuracy and order == The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. The two sources of error in finite difference methods are round-off error, the loss of precision due to computer rounding of decimal quantities, and truncation error or discretization error, the difference between the exact solution of the original differential equation and the exact quantity assuming perfect arithmetic (no round-off). To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain. This is usually done by dividing the domain into a uniform grid. This means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a "time-stepping" manner. An expression of general interest is the local truncation error of a method. Typically expressed using Big-O notation, local truncation error refers to the error from a single application of a method. 
That is, it is the quantity f ′ ( x i ) − f i ′ {\displaystyle f'(x_{i})-f'_{i}} if f ′ ( x i ) {\displaystyle f'(x_{i})} refers to the exact value and f i ′ {\displaystyle f'_{i}} to the numerical approximation. The remainder term of the Taylor polynomial can be used to analyze local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for f ( x 0 + h ) {\displaystyle f(x_{0}+h)} , which is R n ( x 0 + h ) = f ( n + 1 ) ( ξ ) ( n + 1 ) ! ( h ) n + 1 , x 0 < ξ < x 0 + h , {\displaystyle R_{n}(x_{0}+h)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}(h)^{n+1}\,,\quad x_{0}<\xi <x_{0}+h,} the dominant term of the local truncation error can be discovered. For example, again using the forward-difference formula for the first derivative, knowing that f ( x i ) = f ( x 0 + i h ) {\displaystyle f(x_{i})=f(x_{0}+ih)} , f ( x 0 + i h ) = f ( x 0 ) + f ′ ( x 0 ) i h + f ″ ( ξ ) 2 ! ( i h ) 2 , {\displaystyle f(x_{0}+ih)=f(x_{0})+f'(x_{0})ih+{\frac {f''(\xi )}{2!}}(ih)^{2},} and with some algebraic manipulation, this leads to f ( x 0 + i h ) − f ( x 0 ) i h = f ′ ( x 0 ) + f ″ ( ξ ) 2 ! i h , {\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+{\frac {f''(\xi )}{2!}}ih,} and further noting that the quantity on the left is the approximation from the finite difference method and that the quantity on the right is the exact quantity of interest plus a remainder, clearly that remainder is the local truncation error. A final expression of this example and its order is: f ( x 0 + i h ) − f ( x 0 ) i h = f ′ ( x 0 ) + O ( h ) . {\displaystyle {\frac {f(x_{0}+ih)-f(x_{0})}{ih}}=f'(x_{0})+O(h).} In this case, the local truncation error is proportional to the step size. The quality and duration of a simulated FDM solution depend on the choice of discretization equation and the step sizes (time and space steps). The data quality and simulation duration increase significantly with smaller step size. 
Therefore, a reasonable balance between data quality and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice. However, time steps which are too large may create instabilities and affect the data quality. The von Neumann and Courant-Friedrichs-Lewy criteria are often evaluated to determine the numerical model stability. == Example: ordinary differential equation == For example, consider the ordinary differential equation u ′ ( x ) = 3 u ( x ) + 2. {\displaystyle u'(x)=3u(x)+2.} The Euler method for solving this equation uses the finite difference quotient u ( x + h ) − u ( x ) h ≈ u ′ ( x ) {\displaystyle {\frac {u(x+h)-u(x)}{h}}\approx u'(x)} to approximate the differential equation by first substituting it for u'(x) then applying a little algebra (multiplying both sides by h, and then adding u(x) to both sides) to get u ( x + h ) ≈ u ( x ) + h ( 3 u ( x ) + 2 ) . {\displaystyle u(x+h)\approx u(x)+h(3u(x)+2).} The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation. == Example: The heat equation == Consider the normalized heat equation in one dimension, with homogeneous Dirichlet boundary conditions { U t = U x x U ( 0 , t ) = U ( 1 , t ) = 0 (boundary condition) U ( x , 0 ) = U 0 ( x ) (initial condition) {\displaystyle {\begin{cases}U_{t}=U_{xx}\\U(0,t)=U(1,t)=0&{\text{(boundary condition)}}\\U(x,0)=U_{0}(x)&{\text{(initial condition)}}\end{cases}}} One way to numerically solve this equation is to approximate all the derivatives by finite differences. First partition the domain in space using a mesh x 0 , … , x J {\displaystyle x_{0},\dots ,x_{J}} and in time using a mesh t 0 , … , t N {\displaystyle t_{0},\dots ,t_{N}} . Assume a uniform partition both in space and in time, so the difference between two consecutive space points will be h and between two consecutive time points will be k. 
The value u j n {\displaystyle u_{j}^{n}} will represent the numerical approximation of u ( x j , t n ) . {\displaystyle u(x_{j},t_{n}).} === Explicit method === Using a forward difference at time t n {\displaystyle t_{n}} and a second-order central difference for the space derivative at position x j {\displaystyle x_{j}} (FTCS) gives the recurrence equation: u j n + 1 − u j n k = u j + 1 n − 2 u j n + u j − 1 n h 2 . {\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}.} This is an explicit method for solving the one-dimensional heat equation. One can obtain u j n + 1 {\displaystyle u_{j}^{n+1}} from the other values this way: u j n + 1 = ( 1 − 2 r ) u j n + r u j − 1 n + r u j + 1 n {\displaystyle u_{j}^{n+1}=(1-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}} where r = k / h 2 . {\displaystyle r=k/h^{2}.} So, with this recurrence relation, and knowing the values at time n, one can obtain the corresponding values at time n+1. u 0 n {\displaystyle u_{0}^{n}} and u J n {\displaystyle u_{J}^{n}} must be replaced by the boundary conditions; in this example they are both 0. This explicit method is known to be numerically stable and convergent whenever r ≤ 1 / 2 {\displaystyle r\leq 1/2} . The numerical errors are proportional to the time step and the square of the space step: Δ u = O ( k ) + O ( h 2 ) {\displaystyle \Delta u=O(k)+O(h^{2})} === Implicit method === Using the backward difference at time t n + 1 {\displaystyle t_{n+1}} and a second-order central difference for the space derivative at position x j {\displaystyle x_{j}} (the Backward Time, Centered Space method, "BTCS") gives the recurrence equation: u j n + 1 − u j n k = u j + 1 n + 1 − 2 u j n + 1 + u j − 1 n + 1 h 2 . 
{\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}.} This is an implicit method for solving the one-dimensional heat equation. One can obtain u j n + 1 {\displaystyle u_{j}^{n+1}} from solving a system of linear equations: ( 1 + 2 r ) u j n + 1 − r u j − 1 n + 1 − r u j + 1 n + 1 = u j n {\displaystyle (1+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=u_{j}^{n}} The scheme is always numerically stable and convergent but usually more numerically intensive than the explicit method, as it requires solving a system of linear equations at each time step. The errors are linear over the time step and quadratic over the space step: Δ u = O ( k ) + O ( h 2 ) . {\displaystyle \Delta u=O(k)+O(h^{2}).} === Crank–Nicolson method === Finally, using the central difference at time t n + 1 / 2 {\displaystyle t_{n+1/2}} and a second-order central difference for the space derivative at position x j {\displaystyle x_{j}} ("CTCS") gives the recurrence equation: u j n + 1 − u j n k = 1 2 ( u j + 1 n + 1 − 2 u j n + 1 + u j − 1 n + 1 h 2 + u j + 1 n − 2 u j n + u j − 1 n h 2 ) . {\displaystyle {\frac {u_{j}^{n+1}-u_{j}^{n}}{k}}={\frac {1}{2}}\left({\frac {u_{j+1}^{n+1}-2u_{j}^{n+1}+u_{j-1}^{n+1}}{h^{2}}}+{\frac {u_{j+1}^{n}-2u_{j}^{n}+u_{j-1}^{n}}{h^{2}}}\right).} This formula is known as the Crank–Nicolson method. One can obtain u j n + 1 {\displaystyle u_{j}^{n+1}} from solving a system of linear equations: ( 2 + 2 r ) u j n + 1 − r u j − 1 n + 1 − r u j + 1 n + 1 = ( 2 − 2 r ) u j n + r u j − 1 n + r u j + 1 n {\displaystyle (2+2r)u_{j}^{n+1}-ru_{j-1}^{n+1}-ru_{j+1}^{n+1}=(2-2r)u_{j}^{n}+ru_{j-1}^{n}+ru_{j+1}^{n}} The scheme is always numerically stable and convergent but usually more numerically intensive, as it requires solving a system of linear equations at each time step. The errors are quadratic over both the time step and the space step: Δ u = O ( k 2 ) + O ( h 2 ) . 
{\displaystyle \Delta u=O(k^{2})+O(h^{2}).} === Comparison === To summarize, usually the Crank–Nicolson scheme is the most accurate scheme for small time steps. For larger time steps, the implicit scheme works better since it is less computationally demanding. The explicit scheme is the least accurate and can be unstable, but is also the easiest to implement and the least numerically intensive. Here is an example. The figures below present the solutions given by the above methods to approximate the heat equation U t = α U x x , α = 1 π 2 , {\displaystyle U_{t}=\alpha U_{xx},\quad \alpha ={\frac {1}{\pi ^{2}}},} with the boundary condition U ( 0 , t ) = U ( 1 , t ) = 0. {\displaystyle U(0,t)=U(1,t)=0.} The exact solution is U ( x , t ) = 1 π 2 e − t sin ⁡ ( π x ) . {\displaystyle U(x,t)={\frac {1}{\pi ^{2}}}e^{-t}\sin(\pi x).} == Example: The Laplace operator == The (continuous) Laplace operator in n {\displaystyle n} -dimensions is given by Δ u ( x ) = ∑ i = 1 n ∂ i 2 u ( x ) {\displaystyle \Delta u(x)=\sum _{i=1}^{n}\partial _{i}^{2}u(x)} . The discrete Laplace operator Δ h u {\displaystyle \Delta _{h}u} depends on the dimension n {\displaystyle n} . In 1D the Laplace operator is approximated as Δ u ( x ) = u ″ ( x ) ≈ u ( x − h ) − 2 u ( x ) + u ( x + h ) h 2 =: Δ h u ( x ) . {\displaystyle \Delta u(x)=u''(x)\approx {\frac {u(x-h)-2u(x)+u(x+h)}{h^{2}}}=:\Delta _{h}u(x)\,.} This approximation is usually expressed via the following stencil Δ h = 1 h 2 [ 1 − 2 1 ] {\displaystyle \Delta _{h}={\frac {1}{h^{2}}}{\begin{bmatrix}1&-2&1\end{bmatrix}}} which represents a symmetric, tridiagonal matrix. For an equidistant grid one gets a Toeplitz matrix. The 2D case shows all the characteristics of the more general n-dimensional case. 
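The second-order accuracy of the 1D stencil above can be checked numerically. The following sketch uses sin as an arbitrary test function (its exact second derivative is −sin) and confirms that halving h divides the error by roughly four:

```python
import numpy as np

# Numerical check of the O(h^2) accuracy of the 1D stencil
# (u(x-h) - 2u(x) + u(x+h)) / h^2, with sin as an arbitrary test function.
def discrete_laplacian(u, x, h):
    return (u(x - h) - 2.0 * u(x) + u(x + h)) / h**2

x0 = 0.7                                      # arbitrary evaluation point
errors = []
for h in (0.1, 0.05, 0.025):
    approx = discrete_laplacian(np.sin, x0, h)
    errors.append(abs(approx + np.sin(x0)))   # exact value is -sin(x0)

# Halving h should divide the error by about 4, consistent with O(h^2).
ratios = [errors[i] / errors[i + 1] for i in (0, 1)]
```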
Each second partial derivative needs to be approximated similar to the 1D case Δ u ( x , y ) = u x x ( x , y ) + u y y ( x , y ) ≈ u ( x − h , y ) − 2 u ( x , y ) + u ( x + h , y ) h 2 + u ( x , y − h ) − 2 u ( x , y ) + u ( x , y + h ) h 2 = u ( x − h , y ) + u ( x + h , y ) − 4 u ( x , y ) + u ( x , y − h ) + u ( x , y + h ) h 2 =: Δ h u ( x , y ) , {\displaystyle {\begin{aligned}\Delta u(x,y)&=u_{xx}(x,y)+u_{yy}(x,y)\\&\approx {\frac {u(x-h,y)-2u(x,y)+u(x+h,y)}{h^{2}}}+{\frac {u(x,y-h)-2u(x,y)+u(x,y+h)}{h^{2}}}\\&={\frac {u(x-h,y)+u(x+h,y)-4u(x,y)+u(x,y-h)+u(x,y+h)}{h^{2}}}\\&=:\Delta _{h}u(x,y)\,,\end{aligned}}} which is usually given by the following stencil Δ h = 1 h 2 [ 1 1 − 4 1 1 ] . {\displaystyle \Delta _{h}={\frac {1}{h^{2}}}{\begin{bmatrix}&1\\1&-4&1\\&1\end{bmatrix}}\,.} === Consistency === Consistency of the above-mentioned approximation can be shown for highly regular functions, such as u ∈ C 4 ( Ω ) {\displaystyle u\in C^{4}(\Omega )} . The statement is Δ u − Δ h u = O ( h 2 ) . {\displaystyle \Delta u-\Delta _{h}u={\mathcal {O}}(h^{2})\,.} To prove this, one needs to substitute Taylor Series expansions up to order 3 into the discrete Laplace operator. === Properties === ==== Subharmonic ==== Similar to continuous subharmonic functions one can define subharmonic functions for finite-difference approximations u h {\displaystyle u_{h}} − Δ h u h ≤ 0 . {\displaystyle -\Delta _{h}u_{h}\leq 0\,.} ==== Mean value ==== One can define a general stencil of positive type via [ α N α W − α C α E α S ] , α i > 0 , α C = ∑ i ∈ { N , E , S , W } α i . 
{\displaystyle {\begin{bmatrix}&\alpha _{N}\\\alpha _{W}&-\alpha _{C}&\alpha _{E}\\&\alpha _{S}\end{bmatrix}}\,,\quad \alpha _{i}>0\,,\quad \alpha _{C}=\sum _{i\in \{N,E,S,W\}}\alpha _{i}\,.} If u h {\displaystyle u_{h}} is (discrete) subharmonic then the following mean value property holds u h ( x C ) ≤ ∑ i ∈ { N , E , S , W } α i u h ( x i ) ∑ i ∈ { N , E , S , W } α i , {\displaystyle u_{h}(x_{C})\leq {\frac {\sum _{i\in \{N,E,S,W\}}\alpha _{i}u_{h}(x_{i})}{\sum _{i\in \{N,E,S,W\}}\alpha _{i}}}\,,} where the approximation is evaluated on points of the grid, and the stencil is assumed to be of positive type. A similar mean value property also holds for the continuous case. ==== Maximum principle ==== For a (discrete) subharmonic function u h {\displaystyle u_{h}} the following holds max Ω h u h ≤ max ∂ Ω h u h , {\displaystyle \max _{\Omega _{h}}u_{h}\leq \max _{\partial \Omega _{h}}u_{h}\,,} where Ω h , ∂ Ω h {\displaystyle \Omega _{h},\partial \Omega _{h}} are discretizations of the continuous domain Ω {\displaystyle \Omega } , respectively the boundary ∂ Ω {\displaystyle \partial \Omega } . A similar maximum principle also holds for the continuous case. == The SBP-SAT method == The SBP-SAT (summation by parts - simultaneous approximation term) method is a stable and accurate technique for discretizing and imposing boundary conditions of a well-posed partial differential equation using high order finite differences. The method is based on finite differences where the differentiation operators exhibit summation-by-parts properties. Typically, these operators consist of differentiation matrices with central difference stencils in the interior with carefully chosen one-sided boundary stencils designed to mimic integration-by-parts in the discrete setting. Using the SAT technique, the boundary conditions of the PDE are imposed weakly, where the boundary values are "pulled" towards the desired conditions rather than exactly fulfilled. 
If the tuning parameters (inherent to the SAT technique) are chosen properly, the resulting system of ODEs will exhibit energy behavior similar to that of the continuous PDE, i.e. the system has no non-physical energy growth. This guarantees stability if an integration scheme with a stability region that includes parts of the imaginary axis, such as the fourth-order Runge–Kutta method, is used. This makes the SAT technique an attractive method of imposing boundary conditions for higher order finite difference methods, in contrast to, for example, the injection method, which typically will not be stable if high order differentiation operators are used. == See also == == References == == Further reading == K.W. Morton and D.F. Mayers, Numerical Solution of Partial Differential Equations, An Introduction. Cambridge University Press, 2005. Autar Kaw and E. Eric Kalu, Numerical Methods with Applications (2008). Contains a brief, engineering-oriented introduction to FDM (for ODEs) in Chapter 08.07. John Strikwerda (2004). Finite Difference Schemes and Partial Differential Equations (2nd ed.). SIAM. ISBN 978-0-89871-639-9. Smith, G. D. (1985), Numerical Solution of Partial Differential Equations: Finite Difference Methods, 3rd ed., Oxford University Press. Peter Olver (2013). Introduction to Partial Differential Equations. Springer. Chapter 5: Finite differences. ISBN 978-3-319-02099-0. Randall J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations, SIAM, 2007. Sergey Lemeshevsky, Piotr Matus, Dmitriy Poliakov (eds.): "Exact Finite-Difference Schemes", De Gruyter (2016). DOI: https://doi.org/10.1515/9783110491326. Mikhail Shashkov: Conservative Finite-Difference Methods on General Grids, CRC Press, ISBN 0-8493-7375-1 (1996).
Wikipedia:Finite subdivision rule#0
In mathematics, a finite subdivision rule is a recursive way of dividing a polygon or other two-dimensional shape into smaller and smaller pieces. Subdivision rules in a sense are generalizations of regular geometric fractals. Instead of repeating exactly the same design over and over, they have slight variations in each stage, allowing a richer structure while maintaining the elegant style of fractals. Subdivision rules have been used in architecture, biology, and computer science, as well as in the study of hyperbolic manifolds. Substitution tilings are a well-studied type of subdivision rule. == Definition == A subdivision rule takes a tiling of the plane by polygons and turns it into a new tiling by subdividing each polygon into smaller polygons. It is finite if there are only finitely many ways that every polygon can subdivide. Each way of subdividing a tile is called a tile type. Each tile type is represented by a label (usually a letter). Every tile type subdivides into smaller tile types. Each edge also gets subdivided according to finitely many edge types. Finite subdivision rules can only subdivide tilings that are made up of polygons labelled by tile types. Such tilings are called subdivision complexes for the subdivision rule. Given any subdivision complex for a subdivision rule, we can subdivide it over and over again to get a sequence of tilings. For instance, binary subdivision has one tile type and one edge type: Since the only tile type is a quadrilateral, binary subdivision can only subdivide tilings made up of quadrilaterals. This means that the only subdivision complexes are tilings by quadrilaterals. The tiling can be regular, but doesn't have to be: Here we start with a complex made of four quadrilaterals and subdivide it twice. All quadrilaterals are type A tiles. 
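The combinatorial bookkeeping in this example can be sketched in code. The following illustrative model (the dictionary encoding and the tile-type name "A" are our own; geometry and adjacency are not modelled) tracks only the number of tiles of each type — for binary subdivision each quadrilateral produces four smaller quadrilaterals:

```python
from collections import Counter

# Illustrative model of a finite subdivision rule: each tile type maps to the
# list of tile types its subdivision produces.
def subdivide(counts, rule):
    new = Counter()
    for tile_type, n in counts.items():
        for child in rule[tile_type]:
            new[child] += n
    return new

# Binary subdivision: one tile type; each quadrilateral -> 4 quadrilaterals.
rule = {"A": ["A"] * 4}
counts = Counter({"A": 4})        # a complex made of four quadrilaterals
for _ in range(2):                # subdivide it twice, as in the example
    counts = subdivide(counts, rule)
```

After two subdivisions the complex of four type-A quadrilaterals consists of 4 · 4² = 64 tiles.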
== Examples of finite subdivision rules == Barycentric subdivision is an example of a subdivision rule with one edge type (that gets subdivided into two edges) and one tile type (a triangle that gets subdivided into 6 smaller triangles). Any triangulated surface is a barycentric subdivision complex. The Penrose tiling can be generated by a subdivision rule on a set of four tile types (the curved lines in the table below only help to show how the tiles fit together): Certain rational maps give rise to finite subdivision rules. This includes most Lattès maps. Every prime, non-split alternating knot or link complement has a subdivision rule, with some tiles that do not subdivide, corresponding to the boundary of the link complement. The subdivision rules show what the night sky would look like to someone living in a knot complement; because the universe wraps around itself (i.e. is not simply connected), an observer would see the visible universe repeat itself in an infinite pattern. The subdivision rule describes that pattern. The subdivision rule looks different for different geometries. This is a subdivision rule for the trefoil knot, which is not a hyperbolic knot: And this is the subdivision rule for the Borromean rings, which is hyperbolic: In each case, the subdivision rule would act on some tiling of a sphere (i.e. the night sky), but it is easier to just draw a small part of the night sky, corresponding to a single tile being repeatedly subdivided. This is what happens for the trefoil knot: And for the Borromean rings: == Subdivision rules in higher dimensions == Subdivision rules can easily be generalized to other dimensions. For instance, barycentric subdivision is used in all dimensions. Also, binary subdivision can be generalized to other dimensions (where hypercubes get divided by every midplane), as in the proof of the Heine–Borel theorem. == Rigorous definition == A finite subdivision rule R {\displaystyle R} consists of the following. 1. 
A finite 2-dimensional CW complex S R {\displaystyle S_{R}} , called the subdivision complex, with a fixed cell structure such that S R {\displaystyle S_{R}} is the union of its closed 2-cells. We assume that for each closed 2-cell s ~ {\displaystyle {\tilde {s}}} of S R {\displaystyle S_{R}} there is a CW structure s {\displaystyle s} on a closed 2-disk such that s {\displaystyle s} has at least two vertices, the vertices and edges of s {\displaystyle s} are contained in ∂ s {\displaystyle \partial s} , and the characteristic map ψ s : s → S R {\displaystyle \psi _{s}:s\rightarrow S_{R}} which maps onto s ~ {\displaystyle {\tilde {s}}} restricts to a homeomorphism onto each open cell. 2. A finite 2-dimensional CW complex R ( S R ) {\displaystyle R(S_{R})} , which is a subdivision of S R {\displaystyle S_{R}} . 3. A continuous cellular map ϕ R : R ( S R ) → S R {\displaystyle \phi _{R}:R(S_{R})\rightarrow S_{R}} called the subdivision map, whose restriction to every open cell is a homeomorphism onto an open cell. Each CW complex s {\displaystyle s} in the definition above (with its given characteristic map ψ s {\displaystyle \psi _{s}} ) is called a tile type. An R {\displaystyle R} -complex for a subdivision rule R {\displaystyle R} is a 2-dimensional CW complex X {\displaystyle X} which is the union of its closed 2-cells, together with a continuous cellular map f : X → S R {\displaystyle f:X\rightarrow S_{R}} whose restriction to each open cell is a homeomorphism. We can subdivide X {\displaystyle X} into a complex R ( X ) {\displaystyle R(X)} by requiring that the induced map f : R ( X ) → R ( S R ) {\displaystyle f:R(X)\rightarrow R(S_{R})} restricts to a homeomorphism onto each open cell. R ( X ) {\displaystyle R(X)} is again an R {\displaystyle R} -complex with map ϕ R ∘ f : R ( X ) → S R {\displaystyle \phi _{R}\circ f:R(X)\rightarrow S_{R}} . 
By repeating this process, we obtain a sequence of subdivided R {\displaystyle R} -complexes R n ( X ) {\displaystyle R^{n}(X)} with maps ϕ R n ∘ f : R n ( X ) → S R {\displaystyle \phi _{R}^{n}\circ f:R^{n}(X)\rightarrow S_{R}} . Binary subdivision is one example: The subdivision complex can be created by gluing together the opposite edges of the square, making the subdivision complex S R {\displaystyle S_{R}} into a torus. The subdivision map ϕ {\displaystyle \phi } is the doubling map on the torus, wrapping the meridian around itself twice and the longitude around itself twice. This is a four-fold covering map. The plane, tiled by squares, is a subdivision complex for this subdivision rule, with the structure map f : R 2 → R ( S R ) {\displaystyle f:\mathbb {R} ^{2}\rightarrow R(S_{R})} given by the standard covering map. Under subdivision, each square in the plane gets subdivided into squares of one-fourth the size. == Quasi-isometry properties == Subdivision rules can be used to study the quasi-isometry properties of certain spaces. Given a subdivision rule R {\displaystyle R} and subdivision complex X {\displaystyle X} , we can construct a graph called the history graph that records the action of the subdivision rule. The graph consists of the dual graphs of every stage R n ( X ) {\displaystyle R^{n}(X)} , together with edges connecting each tile in R n ( X ) {\displaystyle R^{n}(X)} with its subdivisions in R n + 1 ( X ) {\displaystyle R^{n+1}(X)} . The quasi-isometry properties of the history graph can be studied using subdivision rules. For instance, the history graph is quasi-isometric to hyperbolic space exactly when the subdivision rule is conformal, as described in the combinatorial Riemann mapping theorem. == Applications == Islamic Girih tiles in Islamic architecture are self-similar tilings that can be modeled with finite subdivision rules. In 2007, Peter J. Lu of Harvard University and Professor Paul J. 
Steinhardt of Princeton University published a paper in the journal Science suggesting that girih tilings possessed properties consistent with self-similar fractal quasicrystalline tilings such as Penrose tilings (first presented in 1974, with predecessor works starting in about 1964), predating them by five centuries. Subdivision surfaces in computer graphics use subdivision rules to refine a surface to any given level of precision. These subdivision surfaces (such as the Catmull-Clark subdivision surface) take a polygon mesh (the kind used in 3D animated movies) and refine it to a mesh with more polygons by adding and shifting points according to different recursive formulas. Although many points get shifted in this process, each new mesh is combinatorially a subdivision of the old mesh (meaning that for every edge and vertex of the old mesh, you can identify a corresponding edge and vertex in the new one, plus several more edges and vertices). Subdivision rules were applied by Cannon, Floyd and Parry (2000) to the study of large-scale growth patterns of biological organisms. Cannon, Floyd and Parry produced a mathematical growth model which demonstrated that some systems determined by simple finite subdivision rules can result in objects (in their example, a tree trunk) whose large-scale form oscillates wildly over time, even though the local subdivision laws remain the same. Cannon, Floyd and Parry also applied their model to the analysis of the growth patterns of rat tissue. They suggested that the "negatively curved" (or non-Euclidean) nature of microscopic growth patterns of biological organisms is one of the key reasons why large-scale organisms do not look like crystals or polyhedral shapes but in fact in many cases resemble self-similar fractals. In particular they suggested that such "negatively curved" local structure is manifested in the highly folded and highly connected nature of the brain and the lung tissue. 
== Cannon's conjecture == Cannon, Floyd, and Parry first studied finite subdivision rules as an attempt to prove the following conjecture: Cannon's conjecture: Every Gromov hyperbolic group with a 2-sphere at infinity acts geometrically on hyperbolic 3-space. Here, a geometric action is a cocompact, properly discontinuous action by isometries. This conjecture was partially solved by Grigori Perelman in his proof of the geometrization conjecture, which states (in part) that any Gromov hyperbolic group that is a 3-manifold group must act geometrically on hyperbolic 3-space. However, it still remains to be shown that a Gromov hyperbolic group with a 2-sphere at infinity is a 3-manifold group. Cannon and Swenson showed that a hyperbolic group with a 2-sphere at infinity has an associated subdivision rule. If this subdivision rule is conformal in a certain sense, the group will be a 3-manifold group with the geometry of hyperbolic 3-space. == Combinatorial Riemann mapping theorem == Subdivision rules give a sequence of tilings of a surface, and tilings give an idea of distance, length, and area (by letting each tile have length and area 1). In the limit, the distances that come from these tilings may converge in some sense to an analytic structure on the surface. The Combinatorial Riemann Mapping Theorem gives necessary and sufficient conditions for this to occur. Its statement needs some background. A tiling T {\displaystyle T} of a ring R {\displaystyle R} (i.e., a closed annulus) gives two invariants, M sup ( R , T ) {\displaystyle M_{\sup }(R,T)} and m inf ( R , T ) {\displaystyle m_{\inf }(R,T)} , called approximate moduli. These are similar to the classical modulus of a ring. They are defined by the use of weight functions. A weight function ρ {\displaystyle \rho } assigns a non-negative number called a weight to each tile of T {\displaystyle T} . 
Every path in R {\displaystyle R} can be given a length, defined to be the sum of the weights of all tiles in the path. Define the height H ( ρ ) {\displaystyle H(\rho )} of R {\displaystyle R} under ρ {\displaystyle \rho } to be the infimum of the length of all possible paths connecting the inner boundary of R {\displaystyle R} to the outer boundary. The circumference C ( ρ ) {\displaystyle C(\rho )} of R {\displaystyle R} under ρ {\displaystyle \rho } is the infimum of the length of all possible paths circling the ring (i.e. not nullhomotopic in R). The area A ( ρ ) {\displaystyle A(\rho )} of R {\displaystyle R} under ρ {\displaystyle \rho } is defined to be the sum of the squares of all weights in R {\displaystyle R} . Then define M sup ( R , T ) = sup H ( ρ ) 2 A ( ρ ) , {\displaystyle M_{\sup }(R,T)=\sup {\frac {H(\rho )^{2}}{A(\rho )}},} m inf ( R , T ) = inf A ( ρ ) C ( ρ ) 2 . {\displaystyle m_{\inf }(R,T)=\inf {\frac {A(\rho )}{C(\rho )^{2}}}.} Note that they are invariant under scaling of the metric. A sequence T 1 , T 2 , … {\displaystyle T_{1},T_{2},\ldots } of tilings is conformal ( K {\displaystyle K} ) if mesh approaches 0 and: For each ring R {\displaystyle R} , the approximate moduli M sup ( R , T i ) {\displaystyle M_{\sup }(R,T_{i})} and m inf ( R , T i ) {\displaystyle m_{\inf }(R,T_{i})} , for all i {\displaystyle i} sufficiently large, lie in a single interval of the form [ r , K r ] {\displaystyle [r,Kr]} ; and Given a point x {\displaystyle x} in the surface, a neighborhood N {\displaystyle N} of x {\displaystyle x} , and an integer I {\displaystyle I} , there is a ring R {\displaystyle R} in N ∖ { x } {\displaystyle N\smallsetminus \{x\}} separating x from the complement of N {\displaystyle N} , such that for all large i {\displaystyle i} the approximate moduli of R {\displaystyle R} are all greater than I {\displaystyle I} . 
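For a concrete (illustrative) computation, consider an annulus tiled by an m × n grid of combinatorial squares: m tiles around the ring and n tiles from the inner to the outer boundary. With a constant weight, a shortest inner-to-outer path crosses n tiles and a shortest essential loop crosses m tiles, so H, C and A can be written down directly:

```python
# Illustrative computation of the quantities H^2/A and A/C^2 for an annulus
# tiled by an m-by-n grid of combinatorial squares, under a constant weight w.
def approx_moduli(m, n, w=1.0):
    H = n * w             # a shortest inner-to-outer path crosses n tiles
    C = m * w             # a shortest essential loop crosses m tiles
    A = m * n * w * w     # sum of the squares of all weights
    return H * H / A, A / (C * C)

hi_val, lo_val = approx_moduli(m=8, n=2)   # both equal n/m = 0.25 here
```

For this constant weight both quantities equal n/m, and rescaling the weight leaves them unchanged, illustrating the scale invariance noted above. A single weight function only gives a lower bound for M_sup and an upper bound for m_inf, since those are a supremum and an infimum over all weight functions.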
=== Statement of theorem === If a sequence T 1 , T 2 , … {\displaystyle T_{1},T_{2},\ldots } of tilings of a surface is conformal ( K {\displaystyle K} ) in the above sense, then there is a conformal structure on the surface and a constant K ′ {\displaystyle K'} depending only on K {\displaystyle K} in which the classical moduli and approximate moduli (from T i {\displaystyle T_{i}} for i {\displaystyle i} sufficiently large) of any given annulus are K ′ {\displaystyle K'} -comparable, meaning that they lie in a single interval [ r , K ′ r ] {\displaystyle [r,K'r]} . === Consequences === The Combinatorial Riemann Mapping Theorem implies that a group G {\displaystyle G} acts geometrically on H 3 {\displaystyle \mathbb {H} ^{3}} if and only if it is Gromov hyperbolic, it has a sphere at infinity, and the natural subdivision rule on the sphere gives rise to a sequence of tilings that is conformal in the sense above. Thus, Cannon's conjecture would be true if all such subdivision rules were conformal. == References == == External links == Bill Floyd's research page. This page contains most of the research papers by Cannon, Floyd and Parry on subdivision rules, as well as a gallery of subdivision rules.
Wikipedia:Finite von Neumann algebra#0
In mathematics, a von Neumann algebra or W*-algebra is a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. It is a special type of C*-algebra. Von Neumann algebras were originally introduced by John von Neumann, motivated by his study of single operators, group representations, ergodic theory and quantum mechanics. His double commutant theorem shows that the analytic definition is equivalent to a purely algebraic definition as an algebra of symmetries. Two basic examples of von Neumann algebras are as follows: The ring L ∞ ( R ) {\displaystyle L^{\infty }(\mathbb {R} )} of essentially bounded measurable functions on the real line is a commutative von Neumann algebra, whose elements act as multiplication operators by pointwise multiplication on the Hilbert space L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} of square-integrable functions. The algebra B ( H ) {\displaystyle {\mathcal {B}}({\mathcal {H}})} of all bounded operators on a Hilbert space H {\displaystyle {\mathcal {H}}} is a von Neumann algebra, non-commutative if the Hilbert space has dimension at least 2 {\displaystyle 2} . Von Neumann algebras were first studied by von Neumann (1930) in 1929; he and Francis Murray developed the basic theory, under the original name of rings of operators, in a series of papers written in the 1930s and 1940s (F.J. Murray & J. von Neumann 1936, 1937, 1943; J. von Neumann 1938, 1940, 1943, 1949), reprinted in the collected works of von Neumann (1961). Introductory accounts of von Neumann algebras are given in the online notes of Jones (2003) and Wassermann (1991) and the books by Dixmier (1981), Schwartz (1967), Blackadar (2005) and Sakai (1971). The three volume work by Takesaki (1979) gives an encyclopedic account of the theory. The book by Connes (1994) discusses more advanced topics. == Definitions == There are three common ways to define von Neumann algebras. 
The first and most common way is to define them as weakly closed *-algebras of bounded operators (on a Hilbert space) containing the identity. In this definition the weak (operator) topology can be replaced by many other common topologies including the strong, ultrastrong or ultraweak operator topologies. The *-algebras of bounded operators that are closed in the norm topology are C*-algebras, so in particular any von Neumann algebra is a C*-algebra. The second definition is that a von Neumann algebra is a subalgebra of the bounded operators closed under involution (the *-operation) and equal to its double commutant, or equivalently the commutant of some subalgebra closed under *. The von Neumann double commutant theorem (von Neumann 1930) says that the first two definitions are equivalent. The first two definitions describe a von Neumann algebra concretely as a set of operators acting on some given Hilbert space. Sakai (1971) showed that von Neumann algebras can also be defined abstractly as C*-algebras that have a predual; in other words the von Neumann algebra, considered as a Banach space, is the dual of some other Banach space called the predual. The predual of a von Neumann algebra is in fact unique up to isomorphism. Some authors use "von Neumann algebra" for the algebras together with a Hilbert space action, and "W*-algebra" for the abstract concept, so a von Neumann algebra is a W*-algebra together with a Hilbert space and a suitable faithful unital action on the Hilbert space. The concrete and abstract definitions of a von Neumann algebra are similar to the concrete and abstract definitions of a C*-algebra, which can be defined either as norm-closed *-algebras of operators on a Hilbert space, or as Banach *-algebras such that | | a a ∗ | | = | | a | | | | a ∗ | | {\displaystyle ||aa^{*}||=||a||\ ||a^{*}||} . 
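In finite dimensions every unital *-subalgebra of matrices is automatically weakly closed, so the double commutant theorem can be illustrated directly with matrices. The following numerical sketch (the generator diag(1, 2, 2) and the use of an SVD null-space computation are our own illustrative choices) computes a commutant as the null space of X ↦ SX − XS and checks that the double commutant of a self-adjoint generator has the same dimension, 2, as the unital *-algebra {diag(a, b, b)} it generates:

```python
import numpy as np

def commutant_basis(mats, n, tol=1e-9):
    """Basis (as n x n matrices) of { X : SX = XS for every S in mats }.

    With row-major vectorization, vec(SX - XS) = (S (x) I - I (x) S^T) vec(X),
    so the commutant is the common null space of these Kronecker operators.
    """
    eye = np.eye(n)
    rows = [np.kron(S, eye) - np.kron(eye, S.T) for S in mats]
    stacked = np.vstack(rows)
    _, s, Vh = np.linalg.svd(stacked)
    rank = int(np.sum(s > tol))
    return [v.reshape(n, n) for v in Vh[rank:]]   # null-space vectors

A = np.diag([1.0, 2.0, 2.0])          # self-adjoint generator
comm = commutant_basis([A], 3)        # commutant C ⊕ M_2: dimension 5
bicomm = commutant_basis(comm, 3)     # double commutant {a ⊕ b I_2}: dimension 2
```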
== Terminology == Some of the terminology in von Neumann algebra theory can be confusing, and the terms often have different meanings outside the subject. A factor is a von Neumann algebra with trivial center, i.e. a center consisting only of scalar operators. A finite von Neumann algebra is one which is the direct integral of finite factors (meaning the von Neumann algebra has a faithful normal tracial state τ : M → C {\displaystyle \tau :M\rightarrow \mathbb {C} } ). Similarly, properly infinite von Neumann algebras are the direct integral of properly infinite factors. A von Neumann algebra that acts on a separable Hilbert space is called separable. Note that such algebras are rarely separable in the norm topology. The von Neumann algebra generated by a set of bounded operators on a Hilbert space is the smallest von Neumann algebra containing all those operators. The tensor product of two von Neumann algebras acting on two Hilbert spaces is defined to be the von Neumann algebra generated by their algebraic tensor product, considered as operators on the Hilbert space tensor product of the Hilbert spaces. By forgetting about the topology on a von Neumann algebra, we can consider it a (unital) *-algebra, or just a ring. Von Neumann algebras are semihereditary: every finitely generated submodule of a projective module is itself projective. There have been several attempts to axiomatize the underlying rings of von Neumann algebras, including Baer *-rings and AW*-algebras. The *-algebra of affiliated operators of a finite von Neumann algebra is a von Neumann regular ring. (The von Neumann algebra itself is in general not von Neumann regular.) == Commutative von Neumann algebras == The relationship between commutative von Neumann algebras and measure spaces is analogous to that between commutative C*-algebras and locally compact Hausdorff spaces. 
Every commutative von Neumann algebra is isomorphic to L∞(X) for some measure space (X, μ) and conversely, for every σ-finite measure space X, the *-algebra L∞(X) is a von Neumann algebra. Due to this analogy, the theory of von Neumann algebras has been called noncommutative measure theory, while the theory of C*-algebras is sometimes called noncommutative topology (Connes 1994). == Projections == Operators E in a von Neumann algebra for which E = EE = E* are called projections; they are exactly the operators which give an orthogonal projection of H onto some closed subspace. A subspace of the Hilbert space H is said to belong to the von Neumann algebra M if it is the image of some projection in M. This establishes a 1:1 correspondence between projections of M and subspaces that belong to M. Informally, these are the closed subspaces that can be described using elements of M, or that M "knows" about. It can be shown that the closure of the image of any operator in M and the kernel of any operator in M belong to M. Also, the closure of the image under an operator of M of any subspace belonging to M also belongs to M. (These results are a consequence of the polar decomposition). === Comparison theory of projections === The basic theory of projections was worked out by Murray & von Neumann (1936). Two subspaces belonging to M are called (Murray–von Neumann) equivalent if there is a partial isometry mapping the first isomorphically onto the other that is an element of the von Neumann algebra (informally, if M "knows" that the subspaces are isomorphic). This induces a natural equivalence relation on projections by defining E to be equivalent to F if the corresponding subspaces are equivalent, or in other words if there is a partial isometry of H that maps the image of E isometrically to the image of F and is an element of the von Neumann algebra. Another way of stating this is that E is equivalent to F if E=uu* and F=u*u for some partial isometry u in M. 
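A minimal illustration in M_2, the von Neumann algebra of all bounded operators on a 2-dimensional Hilbert space: the matrix unit u = e_12 is a partial isometry witnessing the Murray–von Neumann equivalence of the two rank-one diagonal projections:

```python
import numpy as np

# In M_2, u = e_12 is a partial isometry, E = uu* and F = u*u are the two
# rank-one diagonal projections, and so E ~ F in the Murray-von Neumann sense.
u = np.array([[0.0, 1.0],
              [0.0, 0.0]])
E = u @ u.conj().T        # projection onto the first coordinate axis
F = u.conj().T @ u        # projection onto the second coordinate axis

assert np.allclose(u @ u.conj().T @ u, u)   # partial-isometry condition u u* u = u
assert np.allclose(E @ E, E) and np.allclose(E.conj().T, E)
assert np.allclose(F @ F, F) and np.allclose(F.conj().T, F)
```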
The equivalence relation ~ thus defined is additive in the following sense: Suppose E1 ~ F1 and E2 ~ F2. If E1 ⊥ E2 and F1 ⊥ F2, then E1 + E2 ~ F1 + F2. Additivity would not generally hold if one were to require unitary equivalence in the definition of ~, i.e. if we say E is equivalent to F if u*Eu = F for some unitary u. The Schröder–Bernstein theorems for operator algebras give a sufficient condition for Murray–von Neumann equivalence. The subspaces belonging to M are partially ordered by inclusion, and this induces a partial order ≤ of projections. There is also a natural partial order on the set of equivalence classes of projections, induced by the partial order ≤ of projections. If M is a factor, ≤ is a total order on equivalence classes of projections, described in the section on traces below. A projection (or subspace belonging to M) E is said to be a finite projection if there is no projection F < E (meaning F ≤ E and F ≠ E) that is equivalent to E. For example, all finite-dimensional projections (or subspaces) are finite (since isometries between Hilbert spaces leave the dimension fixed), but the identity operator on an infinite-dimensional Hilbert space is not finite in the von Neumann algebra of all bounded operators on it, since it is isometrically isomorphic to a proper subspace of itself. However, it is possible for infinite-dimensional subspaces to be finite. Orthogonal projections are noncommutative analogues of indicator functions in L∞(R). L∞(R) is the ||·||∞-closure of the subspace generated by the indicator functions. Similarly, a von Neumann algebra is generated by its projections; this is a consequence of the spectral theorem for self-adjoint operators. The projections of a finite factor form a continuous geometry. == Factors == A von Neumann algebra N whose center consists only of multiples of the identity operator is called a factor. 
As von Neumann (1949) showed, every von Neumann algebra on a separable Hilbert space is isomorphic to a direct integral of factors. This decomposition is essentially unique. Thus, the problem of classifying isomorphism classes of von Neumann algebras on separable Hilbert spaces can be reduced to that of classifying isomorphism classes of factors. Murray & von Neumann (1936) showed that every factor has one of 3 types as described below. The type classification can be extended to von Neumann algebras that are not factors, and a von Neumann algebra is of type X if it can be decomposed as a direct integral of type X factors; for example, every commutative von Neumann algebra has type I1. Every von Neumann algebra can be written uniquely as a sum of von Neumann algebras of types I, II, and III. There are several other ways to divide factors into classes that are sometimes used: A factor is called discrete (or occasionally tame) if it has type I, and continuous (or occasionally wild) if it has type II or III. A factor is called semifinite if it has type I or II, and purely infinite if it has type III. A factor is called finite if the projection 1 is finite and properly infinite otherwise. Factors of types I and II may be either finite or properly infinite, but factors of type III are always properly infinite. === Type I factors === A factor is said to be of type I if there is a minimal projection E ≠ 0, i.e. a projection E such that there is no other projection F with 0 < F < E. Any factor of type I is isomorphic to the von Neumann algebra of all bounded operators on some Hilbert space; since there is one Hilbert space for every cardinal number, isomorphism classes of factors of type I correspond exactly to the cardinal numbers. 
Since many authors consider von Neumann algebras only on separable Hilbert spaces, it is customary to call the bounded operators on a Hilbert space of finite dimension n a factor of type In, and the bounded operators on a separable infinite-dimensional Hilbert space, a factor of type I∞. === Type II factors === A factor is said to be of type II if there are no minimal projections but there are non-zero finite projections. This implies that every projection E can be "halved" in the sense that there are two projections F and G that are Murray–von Neumann equivalent and satisfy E = F + G. If the identity operator in a type II factor is finite, the factor is said to be of type II1; otherwise, it is said to be of type II∞. The best understood factors of type II are the hyperfinite type II1 factor and the hyperfinite type II∞ factor, found by Murray & von Neumann (1936). These are the unique hyperfinite factors of types II1 and II∞; there are an uncountable number of other factors of these types that are the subject of intensive study. Murray & von Neumann (1937) proved the fundamental result that a factor of type II1 has a unique finite tracial state, and the set of traces of projections is [0,1]. A factor of type II∞ has a semifinite trace, unique up to rescaling, and the set of traces of projections is [0,∞]. The set of real numbers λ such that there is an automorphism rescaling the trace by a factor of λ is called the fundamental group of the type II∞ factor. The tensor product of a factor of type II1 and an infinite type I factor has type II∞, and conversely any factor of type II∞ can be constructed like this. The fundamental group of a type II1 factor is defined to be the fundamental group of its tensor product with the infinite (separable) factor of type I. 
For many years it was an open problem to find a type II1 factor whose fundamental group was not the group of positive reals, but Connes then showed that the von Neumann group algebra of a countable discrete group with Kazhdan's property (T) (the trivial representation is isolated in the dual space), such as SL(3,Z), has a countable fundamental group. Subsequently, Sorin Popa showed that the fundamental group can be trivial for certain groups, including the semidirect product of Z2 by SL(2,Z). An example of a type II1 factor is the von Neumann group algebra of a countably infinite discrete group such that every non-trivial conjugacy class is infinite. McDuff (1969) found an uncountable family of such groups with non-isomorphic von Neumann group algebras, thus showing the existence of uncountably many different separable type II1 factors. === Type III factors === Lastly, type III factors are factors that do not contain any nonzero finite projections at all. In their first paper Murray & von Neumann (1936) were unable to decide whether or not they existed; the first examples were later found by von Neumann (1940). Since the identity operator is always infinite in those factors, they were sometimes called type III∞ in the past, but recently that notation has been superseded by the notation IIIλ, where λ is a real number in the interval [0,1]. More precisely, if the Connes spectrum (of its modular group) is 1 then the factor is of type III0, if the Connes spectrum is all integral powers of λ for 0 < λ < 1, then the type is IIIλ, and if the Connes spectrum is all positive reals then the type is III1. (The Connes spectrum is a closed subgroup of the positive reals, so these are the only possibilities.) The only trace on type III factors takes value ∞ on all non-zero positive elements, and any two non-zero projections are equivalent. At one time type III factors were considered to be intractable objects, but Tomita–Takesaki theory has led to a good structure theory.
In particular, any type III factor can be written in a canonical way as the crossed product of a type II∞ factor and the real numbers. == The predual == Any von Neumann algebra M has a predual M∗, which is the Banach space of all ultraweakly continuous linear functionals on M. As the name suggests, M is (as a Banach space) the dual of its predual. The predual is unique in the sense that any other Banach space whose dual is M is canonically isomorphic to M∗. Sakai (1971) showed that the existence of a predual characterizes von Neumann algebras among C* algebras. The definition of the predual given above seems to depend on the choice of Hilbert space that M acts on, as this determines the ultraweak topology. However the predual can also be defined without using the Hilbert space that M acts on, by defining it to be the space generated by all positive normal linear functionals on M. (Here "normal" means that it preserves suprema when applied to increasing nets of self adjoint operators; or equivalently to increasing sequences of projections.) The predual M∗ is a closed subspace of the dual M* (which consists of all norm-continuous linear functionals on M) but is generally smaller. The proof that M∗ is (usually) not the same as M* is nonconstructive and uses the axiom of choice in an essential way; it is very hard to exhibit explicit elements of M* that are not in M∗. For example, exotic positive linear forms on the von Neumann algebra l∞(Z) are given by free ultrafilters; they correspond to exotic *-homomorphisms into C and describe the Stone–Čech compactification of Z. Examples: The predual of the von Neumann algebra L∞(R) of essentially bounded functions on R is the Banach space L1(R) of integrable functions. The dual of L∞(R) is strictly larger than L1(R). For example, a functional on L∞(R) that extends the Dirac measure δ0 on the closed subspace of bounded continuous functions C0b(R) cannot be represented as a function in L1(R).
The predual of the von Neumann algebra B(H) of bounded operators on a Hilbert space H is the Banach space of all trace class operators with the trace norm ||A|| = Tr(|A|). The Banach space of trace class operators is itself the dual of the C*-algebra of compact operators (which is not a von Neumann algebra). == Weights, states, and traces == Weights and their special cases states and traces are discussed in detail in (Takesaki 1979). A weight ω on a von Neumann algebra is a linear map from the set of positive elements (those of the form a*a) to [0,∞]. A positive linear functional is a weight with ω(1) finite (or rather the extension of ω to the whole algebra by linearity). A state is a weight with ω(1) = 1. A trace is a weight with ω(aa*) = ω(a*a) for all a. A tracial state is a trace with ω(1) = 1. Any factor has a trace such that the trace of a non-zero projection is non-zero and the trace of a projection is infinite if and only if the projection is infinite. Such a trace is unique up to rescaling. For factors that are separable or finite, two projections are equivalent if and only if they have the same trace. The type of a factor can be read off from the possible values of this trace over the projections of the factor, as follows: Type In: 0, x, 2x, ..., nx for some positive x (usually normalized to be 1/n or 1). Type I∞: 0, x, 2x, ..., ∞ for some positive x (usually normalized to be 1). Type II1: [0,x] for some positive x (usually normalized to be 1). Type II∞: [0,∞]. Type III: {0,∞}. If a von Neumann algebra acts on a Hilbert space containing a norm 1 vector v, then the functional a → (av,v) is a normal state. This construction can be reversed to give an action on a Hilbert space from a normal state: this is the GNS construction for normal states. == Modules over a factor == Given an abstract separable factor, one can ask for a classification of its modules, meaning the separable Hilbert spaces that it acts on.
The answer is given as follows: every such module H can be given an M-dimension dimM(H) (not its dimension as a complex vector space) such that modules are isomorphic if and only if they have the same M-dimension. The M-dimension is additive, and a module is isomorphic to a subspace of another module if and only if it has smaller or equal M-dimension. A module is called standard if it has a cyclic separating vector. Each factor has a standard representation, which is unique up to isomorphism. The standard representation has an antilinear involution J such that JMJ = M′. For finite factors the standard module is given by the GNS construction applied to the unique normal tracial state and the M-dimension is normalized so that the standard module has M-dimension 1, while for infinite factors the standard module is the module with M-dimension equal to ∞. The possible M-dimensions of modules are given as follows: Type In (n finite): The M-dimension can be any of 0/n, 1/n, 2/n, 3/n, ..., ∞. The standard module has M-dimension 1 (and complex dimension n2.) Type I∞ The M-dimension can be any of 0, 1, 2, 3, ..., ∞. The standard representation of B(H) is H⊗H; its M-dimension is ∞. Type II1: The M-dimension can be anything in [0, ∞]. It is normalized so that the standard module has M-dimension 1. The M-dimension is also called the coupling constant of the module H. Type II∞: The M-dimension can be anything in [0, ∞]. There is in general no canonical way to normalize it; the factor may have outer automorphisms multiplying the M-dimension by constants. The standard representation is the one with M-dimension ∞. Type III: The M-dimension can be 0 or ∞. Any two non-zero modules are isomorphic, and all non-zero modules are standard. 
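The discrete trace values listed above for type In can be verified directly in the matrix algebra Mn(C): with the tracial state tr(a) = Tr(a)/n (the normalization x = 1/n), a rank-k projection has trace k/n. The sketch below (an illustration, with n = 4 chosen arbitrarily) enumerates these values:

```python
import numpy as np

# In the type I_n factor M_n(C) with the tracial state tr(a) = Tr(a)/n,
# a projection of rank k has trace k/n, so the possible trace values of
# projections are exactly 0, 1/n, 2/n, ..., 1.
n = 4
tr = lambda a: np.trace(a).real / n

values = []
for k in range(n + 1):
    # Diagonal projection of rank k.
    E = np.diag([1.0] * k + [0.0] * (n - k))
    assert np.allclose(E, E @ E) and np.allclose(E, E.T)  # E is a projection
    values.append(tr(E))

assert values == [k / n for k in range(n + 1)]
```

The analogous computation in a type II1 factor would produce every value in [0,1], which is precisely what distinguishes the continuous types from type I.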
== Amenable von Neumann algebras == Connes (1976) and others proved that the following conditions on a von Neumann algebra M on a separable Hilbert space H are all equivalent: M is hyperfinite or AFD or approximately finite dimensional or approximately finite: this means the algebra contains an ascending sequence of finite dimensional subalgebras with dense union. (Warning: some authors use "hyperfinite" to mean "AFD and finite".) M is amenable: this means that the derivations of M with values in a normal dual Banach bimodule are all inner. M has Schwartz's property P: for any bounded operator T on H the weak operator closed convex hull of the elements uTu* contains an element commuting with M. M is semidiscrete: this means the identity map from M to M is a weak pointwise limit of completely positive maps of finite rank. M has property E or the Hakeda–Tomiyama extension property: this means that there is a projection of norm 1 from bounded operators on H to M′. M is injective: any completely positive linear map from any self adjoint closed subspace containing 1 of any unital C*-algebra A to M can be extended to a completely positive map from A to M. There is no generally accepted term for the class of algebras above; Connes has suggested that amenable should be the standard term. The amenable factors have been classified: there is a unique one of each of the types In, I∞, II1, II∞, IIIλ, for 0 < λ ≤ 1, and the ones of type III0 correspond to certain ergodic flows. (For type III0 calling this a classification is a little misleading, as it is known that there is no easy way to classify the corresponding ergodic flows.) The ones of type I and II1 were classified by Murray & von Neumann (1943), and the remaining ones were classified by Connes (1976), except for the type III1 case which was completed by Haagerup. All amenable factors can be constructed using the group-measure space construction of Murray and von Neumann for a single ergodic transformation.
In fact they are precisely the factors arising as crossed products by free ergodic actions of Z or Z/nZ on abelian von Neumann algebras L∞(X). Type I factors occur when the measure space X is atomic and the action transitive. When X is diffuse or non-atomic, it is equivalent to [0,1] as a measure space. Type II factors occur when X admits an equivalent finite (II1) or infinite (II∞) measure, invariant under an action of Z. Type III factors occur in the remaining cases where there is no invariant measure, but only an invariant measure class: these factors are called Krieger factors. == Tensor products of von Neumann algebras == The Hilbert space tensor product of two Hilbert spaces is the completion of their algebraic tensor product. One can define a tensor product of von Neumann algebras (a completion of the algebraic tensor product of the algebras considered as rings), which is again a von Neumann algebra, and act on the tensor product of the corresponding Hilbert spaces. The tensor product of two finite algebras is finite, and the tensor product of an infinite algebra and a non-zero algebra is infinite. The type of the tensor product of two von Neumann algebras (I, II, or III) is the maximum of their types. The commutation theorem for tensor products states that ( M ⊗ N ) ′ = M ′ ⊗ N ′ , {\displaystyle (M\otimes N)^{\prime }=M^{\prime }\otimes N^{\prime },} where M′ denotes the commutant of M. The tensor product of an infinite number of von Neumann algebras, if done naively, is usually a ridiculously large non-separable algebra. Instead von Neumann (1938) showed that one should choose a state on each of the von Neumann algebras, use this to define a state on the algebraic tensor product, which can be used to produce a Hilbert space and a (reasonably small) von Neumann algebra. 
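For finite type I factors the von Neumann algebra tensor product reduces to the Kronecker product of matrix algebras, and the key structural fact, that the images of M ⊗ 1 and 1 ⊗ N commute and together generate M ⊗ N, can be checked numerically. A minimal sketch (illustrative only; the dimensions 2 and 3 and the random entries are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# M ⊗ 1 and 1 ⊗ N act on the tensor product C^2 ⊗ C^3 = C^6 via Kronecker
# products.  The two embedded factors commute elementwise, and their
# products give the elementary tensors A ⊗ B.
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

A_op = np.kron(A, np.eye(3))   # A ⊗ 1
B_op = np.kron(np.eye(2), B)   # 1 ⊗ B

assert np.allclose(A_op @ B_op, B_op @ A_op)    # the two factors commute
assert np.allclose(A_op @ B_op, np.kron(A, B))  # (A ⊗ 1)(1 ⊗ B) = A ⊗ B
```

This is the finite-dimensional shadow of the commutation theorem quoted above; the delicate issues (choice of completion, behavior of infinite tensor products) only arise in infinite dimensions.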
Araki & Woods (1968) studied the case where all the factors are finite matrix algebras; these factors are called Araki–Woods factors or ITPFI factors (ITPFI stands for "infinite tensor product of finite type I factors"). The type of the infinite tensor product can vary dramatically as the states are changed; for example, the infinite tensor product of an infinite number of type I2 factors can have any type depending on the choice of states. In particular Powers (1967) found an uncountable family of non-isomorphic hyperfinite type IIIλ factors for 0 < λ < 1, called Powers factors, by taking an infinite tensor product of type I2 factors, each with the state given by: x ↦ T r ( 1 λ + 1 0 0 λ λ + 1 ) x . {\displaystyle x\mapsto {\rm {Tr}}{\begin{pmatrix}{1 \over \lambda +1}&0\\0&{\lambda \over \lambda +1}\\\end{pmatrix}}x.} All hyperfinite von Neumann algebras not of type III0 are isomorphic to Araki–Woods factors, but there are uncountably many of type III0 that are not. == Bimodules and subfactors == A bimodule (or correspondence) is a Hilbert space H with module actions of two commuting von Neumann algebras. Bimodules have a much richer structure than that of modules. Any bimodule over two factors always gives a subfactor since one of the factors is always contained in the commutant of the other. There is also a subtle relative tensor product operation due to Connes on bimodules. The theory of subfactors, initiated by Vaughan Jones, reconciles these two seemingly different points of view. Bimodules are also important for the von Neumann group algebra M of a discrete group Γ. Indeed, if V is any unitary representation of Γ, then, regarding Γ as the diagonal subgroup of Γ × Γ, the corresponding induced representation on l2 (Γ, V) is naturally a bimodule for two commuting copies of M. Important representation theoretic properties of Γ can be formulated entirely in terms of bimodules and therefore make sense for the von Neumann algebra itself. 
For example, Connes and Jones gave a definition of an analogue of Kazhdan's property (T) for von Neumann algebras in this way. == Non-amenable factors == Von Neumann algebras of type I are always amenable, but for the other types there are an uncountable number of different non-amenable factors, which seem very hard to classify, or even distinguish from each other. Nevertheless, Voiculescu has shown that the class of non-amenable factors coming from the group-measure space construction is disjoint from the class coming from group von Neumann algebras of free groups. Later Narutaka Ozawa proved that group von Neumann algebras of hyperbolic groups yield prime type II1 factors, i.e. ones that cannot be factored as tensor products of type II1 factors, a result first proved by Liming Ge for free group factors using Voiculescu's free entropy. Popa's work on fundamental groups of non-amenable factors represents another significant advance. The theory of factors "beyond the hyperfinite" is rapidly expanding at present, with many new and surprising results; it has close links with rigidity phenomena in geometric group theory and ergodic theory. == Examples == The essentially bounded functions on a σ-finite measure space form a commutative (type I1) von Neumann algebra acting on the L2 functions. For certain non-σ-finite measure spaces, usually considered pathological, L∞(X) is not a von Neumann algebra; for example, the σ-algebra of measurable sets might be the countable-cocountable algebra on an uncountable set. A fundamental approximation result is given by the Kaplansky density theorem. The bounded operators on any Hilbert space form a von Neumann algebra, indeed a factor, of type I. If we have any unitary representation of a group G on a Hilbert space H then the bounded operators commuting with G form a von Neumann algebra G′, whose projections correspond exactly to the closed subspaces of H invariant under G.
Equivalent subrepresentations correspond to equivalent projections in G′. The double commutant G′′ of G is also a von Neumann algebra. The von Neumann group algebra of a discrete group G is the algebra of all bounded operators on H = l2(G) commuting with the action of G on H through right multiplication. One can show that this is the von Neumann algebra generated by the operators corresponding to multiplication from the left with an element g ∈ G. It is a factor (of type II1) if every non-trivial conjugacy class of G is infinite (for example, a non-abelian free group), and is the hyperfinite factor of type II1 if in addition G is a union of finite subgroups (for example, the group of all permutations of the integers fixing all but a finite number of elements). The tensor product of two von Neumann algebras, or of a countable number with states, is a von Neumann algebra as described in the section above. The crossed product of a von Neumann algebra by a discrete (or more generally locally compact) group can be defined, and is a von Neumann algebra. Special cases are the group-measure space construction of Murray and von Neumann and Krieger factors. The von Neumann algebras of a measurable equivalence relation and a measurable groupoid can be defined. These examples generalise von Neumann group algebras and the group-measure space construction. == Applications == Von Neumann algebras have found applications in diverse areas of mathematics like knot theory, statistical mechanics, quantum field theory, local quantum physics, free probability, noncommutative geometry, representation theory, differential geometry, and dynamical systems. For instance, C*-algebra provides an alternative axiomatization to probability theory. In this case the method goes by the name of Gelfand–Naimark–Segal construction. 
This is analogous to the two approaches to measure and integration, where one has the choice to construct measures of sets first and define integrals later, or construct integrals first and define set measures as integrals of characteristic functions. == See also == AW*-algebra – algebraic generalization of a W*-algebra Central carrier Tomita–Takesaki theory – Mathematical method in functional analysis == References == Araki, H.; Woods, E. J. (1968), "A classification of factors", Publ. Res. Inst. Math. Sci. Ser. A, 4 (1): 51–130, doi:10.2977/prims/1195195263, MR 0244773 Blackadar, B. (2005), Operator algebras, Springer, ISBN 3-540-28486-9, corrected manuscript (PDF), 2013 Connes, A. (1976), "Classification of Injective Factors", Annals of Mathematics, Second Series, 104 (1): 73–115, doi:10.2307/1971057, JSTOR 1971057 Connes, A. (1994), Non-commutative geometry, Academic Press, ISBN 0-12-185860-X. Dixmier, J. (1981), Von Neumann algebras, North-Holland, ISBN 0-444-86308-7 (A translation of Dixmier, J. (1957), Les algèbres d'opérateurs dans l'espace hilbertien: algèbres de von Neumann, Gauthier-Villars, the first book about von Neumann algebras.) Jones, V.F.R. (2003), von Neumann algebras (PDF); incomplete notes from a course. Kostecki, R.P. (2013), W*-algebras and noncommutative integration, arXiv:1307.4818, Bibcode:2013arXiv1307.4818P. McDuff, Dusa (1969), "Uncountably many II1 factors", Annals of Mathematics, Second Series, 90 (2): 372–377, doi:10.2307/1970730, JSTOR 1970730 Murray, F. J. (2006), "The rings of operators papers", The legacy of John von Neumann (Hempstead, NY, 1988), Proc. Sympos. Pure Math., vol. 50, Providence, RI.: Amer. Math. Soc., pp. 57–60, ISBN 0-8218-4219-6. A historical account of the discovery of von Neumann algebras. Murray, F.J.; von Neumann, J. (1936), "On rings of operators", Annals of Mathematics, Second Series, 37 (1): 116–229, doi:10.2307/1968693, JSTOR 1968693.
This paper gives their basic properties and the division into types I, II, and III, and in particular finds factors not of type I. Murray, F.J.; von Neumann, J. (1937), "On rings of operators II", Trans. Amer. Math. Soc., 41 (2), American Mathematical Society: 208–248, doi:10.2307/1989620, JSTOR 1989620. This is a continuation of the previous paper, that studies properties of the trace of a factor. Murray, F.J.; von Neumann, J. (1943), "On rings of operators IV", Annals of Mathematics, Second Series, 44 (4): 716–808, doi:10.2307/1969107, JSTOR 1969107. This studies when factors are isomorphic, and in particular shows that all approximately finite factors of type II1 are isomorphic. Powers, Robert T. (1967), "Representations of Uniformly Hyperfinite Algebras and Their Associated von Neumann Rings", Annals of Mathematics, Second Series, 86 (1): 138–171, doi:10.2307/1970364, JSTOR 1970364 Sakai, S. (1971), C*-algebras and W*-algebras, Springer, ISBN 3-540-63633-1 Schwartz, Jacob (1967), W-* Algebras, Gordon & Breach Publishing, ISBN 0-677-00670-5 Shtern, A.I. (2001) [1994], "von Neumann algebra", Encyclopedia of Mathematics, EMS Press Takesaki, M. (1979), Theory of Operator Algebras I, II, III, Springer, ISBN 3-540-42248-X von Neumann, J. (1930), "Zur Algebra der Funktionaloperationen und Theorie der normalen Operatoren", Math. Ann., 102 (1): 370–427, Bibcode:1930MatAn.102..685E, doi:10.1007/BF01782352, S2CID 121141866. The original paper on von Neumann algebras. von Neumann, J. (1936), "On a Certain Topology for Rings of Operators", Annals of Mathematics, Second Series, 37 (1): 111–115, doi:10.2307/1968692, JSTOR 1968692. This defines the ultrastrong topology. von Neumann, J. (1938), "On infinite direct products", Compos. Math., 6: 1–77. This discusses infinite tensor products of Hilbert spaces and the algebras acting on them. von Neumann, J. (1940), "On rings of operators III", Annals of Mathematics, Second Series, 41 (1): 94–161, doi:10.2307/1968823, JSTOR 1968823. 
This shows the existence of factors of type III. von Neumann, J. (1943), "On Some Algebraical Properties of Operator Rings", Annals of Mathematics, Second Series, 44 (4): 709–715, doi:10.2307/1969106, JSTOR 1969106. This shows that some apparently topological properties in von Neumann algebras can be defined purely algebraically. von Neumann, J. (1949), "On Rings of Operators. Reduction Theory", Annals of Mathematics, Second Series, 50 (2): 401–485, doi:10.2307/1969463, JSTOR 1969463. This discusses how to write a von Neumann algebra as a sum or integral of factors. von Neumann, John (1961), Taub, A.H. (ed.), Collected Works, Volume III: Rings of Operators, NY: Pergamon Press. Reprints von Neumann's papers on von Neumann algebras. Wassermann, A. J. (1991), Operators on Hilbert space
Wikipedia:Finite-difference frequency-domain method#0
The finite-difference frequency-domain (FDFD) method is a numerical solution method for problems usually in electromagnetism and sometimes in acoustics, based on finite-difference approximations of the derivative operators in the differential equation being solved. While "FDFD" is a generic term describing all frequency-domain finite-difference methods, the title seems to mostly describe the method as applied to scattering problems. The method shares many similarities to the finite-difference time-domain (FDTD) method, so much so that the literature on FDTD can be directly applied. The method works by transforming Maxwell's equations (or other partial differential equation) for sources and fields at a constant frequency into matrix form A x = b {\displaystyle Ax=b} . The matrix A is derived from the wave equation operator, the column vector x contains the field components, and the column vector b describes the source. The method is capable of incorporating anisotropic materials, but off-diagonal components of the tensor require special treatment. Strictly speaking, there are at least two categories of "frequency-domain" problems in electromagnetism. One is to find the response to a current density J with a constant frequency ω, i.e. of the form J ( x ) e i ω t {\displaystyle \mathbf {J} (\mathbf {x} )e^{i\omega t}} , or a similar time-harmonic source. This frequency-domain response problem leads to an A x = b {\displaystyle Ax=b} system of linear equations as described above. An early description of a frequency-domain response FDTD method to solve scattering problems was published by Christ and Hartnagel (1987). Another is to find the normal modes of a structure (e.g. a waveguide) in the absence of sources: in this case the frequency ω is itself a variable, and one obtains an eigenproblem A x = λ x {\displaystyle Ax=\lambda x} (usually, the eigenvalue λ is ω2). 
An early description of an FDTD method to solve electromagnetic eigenproblems was published by Albani and Bernardi (1974). == Implementing the method == The method is typically implemented on a Yee grid, which offers the following benefits: (1) it implicitly satisfies the zero divergence conditions to avoid spurious solutions, (2) it naturally handles physical boundary conditions, and (3) it provides a very elegant and compact way of approximating the curl equations with finite differences. Much of the literature on finite-difference time-domain (FDTD) methods applies to FDFD, particularly topics on how to represent materials and devices on a Yee grid. == Comparison with FDTD and FEM == The FDFD method is very similar to the finite element method (FEM), though there are some major differences. Unlike the FDTD method, there are no time steps that must be computed sequentially, thus making FDFD easier to implement. This might also lead one to imagine that FDFD is less computationally expensive; however, this is not necessarily the case. The FDFD method requires solving a sparse linear system, which even for simple problems can be 20,000 by 20,000 elements or larger, with over a million unknowns. In this respect, the FDFD method is similar to the FEM, which is also usually implemented in the frequency domain. There are efficient numerical solvers available so that matrix inversion—an extremely computationally expensive process—can be avoided. Additionally, model order reduction techniques can be employed to reduce problem size. FDFD, and FDTD for that matter, does not lend itself well to complex geometries or multiscale structures, as the Yee grid is restricted mostly to rectangular structures. This can be circumvented by either using a very fine grid mesh (which increases computational cost), or by approximating the effects with surface boundary conditions.
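The Ax = b formulation described above can be illustrated with a minimal one-dimensional sketch. This is not taken from the literature cited here: it discretizes the scalar Helmholtz equation d²E/dx² + k²E = −b with second-order central differences and crude zero-field (Dirichlet) truncation instead of a PML, and the grid size, wavenumber, and source placement are arbitrary illustrative choices:

```python
import numpy as np

N = 200                       # number of interior grid points
L = 1.0                       # domain length
dx = L / (N + 1)              # uniform grid spacing
k = 2 * np.pi * 5.25          # wavenumber, chosen away from the cavity resonances

# Assemble A: second-difference approximation of d^2/dx^2 plus k^2 on the
# diagonal (a dense matrix here for clarity; real FDFD codes use sparse storage).
main = (-2.0 / dx**2 + k**2) * np.ones(N)
off = (1.0 / dx**2) * np.ones(N - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Assemble b: a point source at the domain center.
b = np.zeros(N)
b[N // 2] = 1.0 / dx

# Solve the frequency-domain system A x = b for the field values.
x = np.linalg.solve(A, b)

# The computed field satisfies the discrete system.
assert np.allclose(A @ x, b)
```

A full electromagnetic FDFD code builds A from the curl equations on a Yee grid and replaces the Dirichlet walls with a PML, but the overall structure, assemble a sparse operator, assemble a source vector, solve once per frequency, is the same.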
Non-uniform gridding can lead to spurious charges at the interface boundary, as the zero divergence conditions are not maintained when the grid is not uniform along an interface boundary. E and H field continuity can be maintained to circumvent this problem by enforcing weak continuity across the interface using basis functions, as is done in FEM. Perfectly matched layer (PML) boundary conditions can also be used to truncate the grid, and avoid meshing empty space. == Susceptance element equivalent circuit == The FDFD equations can be rearranged in such a way as to describe a second-order equivalent circuit, where nodal voltages represent the E field components and branch currents represent the H field components. This equivalent circuit representation can be extremely useful, as techniques from circuit theory can be used to analyze or simplify the problem and can be used as a SPICE-like tool for three-dimensional electromagnetic simulation. This susceptance element equivalent circuit (SEEC) model has the advantages of a reduced number of unknowns, only having to solve for E field components, and second-order model order reduction techniques can be employed. == Applications == The FDFD method has been used to provide full wave simulation for modeling interconnects for various applications in electronic packaging. FDFD has also been used for various scattering problems at optical frequencies. == See also == Finite-difference time-domain method Finite element method == References ==
Wikipedia:Fioralba Cakoni#0
Fioralba Cakoni is an American-Albanian mathematician and an expert on inverse scattering theory. She is a professor of mathematics at Rutgers University. == Education and career == Cakoni earned bachelor's and master's degrees from the University of Tirana in 1987 and 1990 respectively. She completed her Ph.D. in 1996, jointly between the University of Tirana and University of Patras, supervised by George Dassios. Her dissertation was Some Results on the Abstract Wave Equation. Problems of the Scattering Theory in Elasticity and Thermoelasticity in Low-Frequency. She became a lecturer at the University of Tirana and then, from 1998 to 2000, a Humboldt Research Fellow at the University of Stuttgart. She came to the US for additional postdoctoral research at the University of Delaware in 2000, and stayed at Delaware as an assistant professor beginning in 2002. She moved to Rutgers University-New Brunswick in 2015 where she is now distinguished professor of mathematics. She serves on the Scientific Advisory Board for the Institute for Computational and Experimental Research in Mathematics (ICERM). == Books == Cakoni is the author or coauthor of: Qualitative Methods in Inverse Scattering Theory (with David Colton, Springer, 2006) The Linear Sampling Method in Inverse Electromagnetic Scattering (with David Colton and Peter Monk, Society for Industrial and Applied Mathematics, 2011) A Qualitative Approach to Inverse Scattering Theory (with David Colton, Springer, 2014) Inverse Scattering Theory and Transmission Eigenvalues (with David Colton and Houssem Haddar, Society for Industrial and Applied Mathematics, 2016) == Recognition == Cakoni was included in the 2019 class of fellows of the American Mathematical Society "for contributions to analysis of partial differential equations especially in inverse scattering theory". In 2020 Cakoni was elected foreign member of the Academy of Sciences of Albania. She was elected to the 2023 Class of SIAM Fellows. 
== References == == External links == Home page Fioralba Cakoni publications indexed by Google Scholar
Wikipedia:First and second fundamental theorems of invariant theory#0
In algebra, the first and second fundamental theorems of invariant theory concern the generators and relations of the ring of invariants in the ring of polynomial functions for classical groups (roughly, the first concerns the generators and the second the relations). The theorems are among the most important results of invariant theory. Classically the theorems are proved over the complex numbers, but characteristic-free invariant theory extends them to a field of arbitrary characteristic. == First fundamental theorem for GL ⁡ ( V ) {\displaystyle \operatorname {GL} (V)} == The theorem states that the ring of GL ⁡ ( V ) {\displaystyle \operatorname {GL} (V)} -invariant polynomial functions on V ∗ p ⊕ V q {\displaystyle {V^{*}}^{p}\oplus V^{q}} is generated by the functions ⟨ α i | v j ⟩ {\displaystyle \langle \alpha _{i}|v_{j}\rangle } , where α i {\displaystyle \alpha _{i}} are in V ∗ {\displaystyle V^{*}} and v j ∈ V {\displaystyle v_{j}\in V} . == Second fundamental theorem for general linear group == Let V, W be finite-dimensional vector spaces over the complex numbers. Then the only GL ⁡ ( V ) × GL ⁡ ( W ) {\displaystyle \operatorname {GL} (V)\times \operatorname {GL} (W)} -invariant prime ideals in C [ hom ⁡ ( V , W ) ] {\displaystyle \mathbb {C} [\operatorname {hom} (V,W)]} are the determinant ideals I k = C [ hom ⁡ ( V , W ) ] D k {\displaystyle I_{k}=\mathbb {C} [\operatorname {hom} (V,W)]D_{k}} generated by the determinants of all the k × k {\displaystyle k\times k} minors. == Notes == == References == Procesi, Claudio (2007). Lie Groups: An Approach through Invariants and Representations. New York: Springer. ISBN 978-0-387-26040-2. OCLC 191464530. == Further reading == Ch. II, § 4. of E. Arbarello, M. Cornalba, P.A. Griffiths, and J. Harris, Geometry of Algebraic Curves. Vol. I, Grundlehren der Mathematischen Wissenschaften, vol. 267, Springer-Verlag, New York, 1985. MR0770932 Artin, Michael (1999). "Noncommutative Rings" (PDF).
Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Hanspeter Kraft and Claudio Procesi, Classical Invariant Theory, a Primer Weyl, Hermann (1939), The Classical Groups. Their Invariants and Representations, Princeton University Press, ISBN 978-0-691-05756-9, MR 0000255
Wikipedia:Fixed point (mathematics)#0
In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically, for functions, a fixed point is an element that is mapped to itself by the function. Any set of fixed points of a transformation is also an invariant set. == Fixed point of a function == Formally, c is a fixed point of a function f if c belongs to both the domain and the codomain of f, and f(c) = c. In particular, f cannot have any fixed point if its domain is disjoint from its codomain. If f is defined on the real numbers, it corresponds, in graphical terms, to a curve in the Euclidean plane, and each fixed point c corresponds to an intersection of the curve with the line y = x. For example, if f is defined on the real numbers by f ( x ) = x 2 − 3 x + 4 , {\displaystyle f(x)=x^{2}-3x+4,} then 2 is a fixed point of f, because f(2) = 2. Not all functions have fixed points: for example, f(x) = x + 1 has no fixed points, because x + 1 is never equal to x for any real number. == Fixed point iteration == In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. Specifically, given a function f {\displaystyle f} with the same domain and codomain and a point x 0 {\displaystyle x_{0}} in the domain of f {\displaystyle f} , the fixed-point iteration is x n + 1 = f ( x n ) , n = 0 , 1 , 2 , … {\displaystyle x_{n+1}=f(x_{n}),\,n=0,1,2,\dots } which gives rise to the sequence x 0 , x 1 , x 2 , … {\displaystyle x_{0},x_{1},x_{2},\dots } of iterated function applications x 0 , f ( x 0 ) , f ( f ( x 0 ) ) , … {\displaystyle x_{0},f(x_{0}),f(f(x_{0})),\dots } which one hopes converges to a point x {\displaystyle x} . If f {\displaystyle f} is continuous and the sequence converges, then the limit x {\displaystyle x} is a fixed point of f {\displaystyle f} .
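As a concrete illustration, fixed-point iteration can be sketched in a few lines of Python (the function name, tolerance, and iteration cap below are illustrative choices, not part of the article):

```python
import math

def fixed_point_iteration(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until two successive values agree to
    within tol, returning the (approximate) fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# The cosine map is continuous, and its iteration converges from any
# real starting point to the Dottie number, approximately 0.739085.
x = fixed_point_iteration(math.cos, 1.0)
```

Here x satisfies cos(x) = x to within the tolerance, matching the statement above that, for a continuous f, the limit of a convergent iteration is a fixed point.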
The notions of attracting fixed points, repelling fixed points, and periodic points are defined with respect to fixed-point iteration. == Fixed-point theorems == A fixed-point theorem is a result saying that at least one fixed point exists, under some general condition. For example, the Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, fixed-point iteration will always converge to a fixed point. The Brouwer fixed-point theorem (1911) says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point, but it doesn't describe how to find the fixed point. The Lefschetz fixed-point theorem (and the Nielsen fixed-point theorem) from algebraic topology give a way to count fixed points. == Fixed point of a group action == In algebra, for a group G acting on a set X with a group action ⋅ {\displaystyle \cdot } , x in X is said to be a fixed point of g if g ⋅ x = x {\displaystyle g\cdot x=x} . The fixed-point subgroup G f {\displaystyle G^{f}} of an automorphism f of a group G is the subgroup of G: G f = { g ∈ G ∣ f ( g ) = g } . {\displaystyle G^{f}=\{g\in G\mid f(g)=g\}.} Similarly, the fixed-point subring R f {\displaystyle R^{f}} of an automorphism f of a ring R is the subring of the fixed points of f, that is, R f = { r ∈ R ∣ f ( r ) = r } . {\displaystyle R^{f}=\{r\in R\mid f(r)=r\}.} In Galois theory, the set of the fixed points of a set of field automorphisms is a field called the fixed field of the set of automorphisms. == Topological fixed point property == A topological space X {\displaystyle X} is said to have the fixed point property (FPP) if for any continuous function f : X → X {\displaystyle f\colon X\to X} there exists x ∈ X {\displaystyle x\in X} such that f ( x ) = x {\displaystyle f(x)=x} . The FPP is a topological invariant, i.e., it is preserved by any homeomorphism. The FPP is also preserved by any retraction. 
According to the Brouwer fixed-point theorem, every compact and convex subset of a Euclidean space has the FPP. Compactness alone does not imply the FPP, and convexity is not even a topological property, so it makes sense to ask how to topologically characterize the FPP. In 1932 Borsuk asked whether compactness together with contractibility could be a necessary and sufficient condition for the FPP to hold. The problem was open for 20 years until the conjecture was disproved by Kinoshita, who found an example of a compact contractible space without the FPP. == Fixed points of partial orders == In domain theory, the notion and terminology of fixed points is generalized to a partial order. Let ≤ be a partial order over a set X and let f: X → X be a function over X. Then a prefixed point (also spelled pre-fixed point, sometimes shortened to prefixpoint or pre-fixpoint) of f is any p such that f(p) ≤ p. Analogously, a postfixed point of f is any p such that p ≤ f(p). The opposite usage occasionally appears. Malkis justifies the definition presented here as follows: "since f is before the inequality sign in the term f(x) ≤ x, such x is called a prefix point." A fixed point is a point that is both a prefixpoint and a postfixpoint. Prefixpoints and postfixpoints have applications in theoretical computer science. === Least fixed point === In order theory, the least fixed point of a function from a partially ordered set (poset) to itself is the fixed point which is less than each other fixed point, according to the order of the poset. A function need not have a least fixed point, but if it does then the least fixed point is unique. One way to express the Knaster–Tarski theorem is to say that a monotone function on a complete lattice has a least fixed point that coincides with its least prefixpoint (and similarly its greatest fixed point coincides with its greatest postfixpoint). 
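On a finite powerset lattice, the least fixed point of a monotone operator can be computed by iterating from the bottom element (Kleene iteration), which gives a small computational counterpart to the Knaster–Tarski theorem. The graph and operator below are invented for the example:

```python
def least_fixed_point(f, bottom=frozenset()):
    """Compute the least fixed point of a monotone operator f on a finite
    powerset lattice by iterating f from the bottom element until the
    value stabilizes (Kleene iteration)."""
    x = bottom
    while (y := f(x)) != x:
        x = y
    return x

# Monotone operator: the seed vertex 0 together with everything one edge
# away from the current set. Its least fixed point is the set of
# vertices reachable from 0.
edges = {0: {1}, 1: {2}, 2: {2}, 3: {4}}
step = lambda s: frozenset({0}) | frozenset(t for v in s for t in edges.get(v, ()))
reachable = least_fixed_point(step)
```

Here reachable comes out as {0, 1, 2}: it satisfies step(reachable) = reachable, and the iteration reaches it from the bottom element without overshooting any smaller prefixpoint.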
== Fixed-point combinator == In combinatory logic for computer science, a fixed-point combinator is a higher-order function f i x {\displaystyle {\mathsf {fix}}} that returns a fixed point of its argument function, if one exists. Formally, if the function f has one or more fixed points, then f i x ⁡ f = f ( f i x ⁡ f ) . {\displaystyle \operatorname {\mathsf {fix}} f=f(\operatorname {\mathsf {fix}} f).} == Fixed-point logics == In mathematical logic, fixed-point logics are extensions of classical predicate logic that have been introduced to express recursion. Their development has been motivated by descriptive complexity theory and their relationship to database query languages, in particular to Datalog. == Applications == In many fields, equilibria or stability are fundamental concepts that can be described in terms of fixed points. Some examples follow. In projective geometry, a fixed point of a projectivity has been called a double point. In economics, a Nash equilibrium of a game is a fixed point of the game's best response correspondence. John Nash exploited the Kakutani fixed-point theorem for his seminal paper that won him the Nobel prize in economics. In physics, more precisely in the theory of phase transitions, linearization near an unstable fixed point has led to Wilson's Nobel prize-winning work inventing the renormalization group, and to the mathematical explanation of the term "critical phenomenon." Programming language compilers use fixed point computations for program analysis, for example in data-flow analysis, which is often required for code optimization. They are also the core concept used by the generic program analysis method abstract interpretation. In type theory, the fixed-point combinator allows definition of recursive functions in the untyped lambda calculus. The vector of PageRank values of all web pages is the fixed point of a linear transformation derived from the World Wide Web's link structure. 
The stationary distribution of a Markov chain is the fixed point of the one-step transition probability function. Fixed points are used to find formulas for iterated functions. == See also == == Notes == == External links == Yutaka Nishiyama (2012). "An Elegant Solution for Drawing a Fixed Point" (PDF). International Journal of Pure and Applied Mathematics. 78 (3): 363–377.
Wikipedia:Fixed points of isometry groups in Euclidean space#0
A fixed point of an isometry group is a point that is a fixed point for every isometry in the group. For any isometry group in Euclidean space the set of fixed points is either empty or an affine space. For an object, any unique centre and, more generally, any point with unique properties with respect to the object is a fixed point of its symmetry group. In particular this applies to the centroid of a figure, if it exists. In the case of a physical body, if for the symmetry not only the shape but also the density is taken into account, it applies to the centre of mass. If the set of fixed points of the symmetry group of an object is a singleton then the object has a specific centre of symmetry. The centroid and centre of mass, if defined, are this point. Another meaning of "centre of symmetry" is a point with respect to which inversion symmetry applies. Such a point need not be unique; if it is not, there is translational symmetry, hence there are infinitely many such points. On the other hand, in the cases of e.g. C3h and D2 symmetry there is a centre of symmetry in the first sense, but no inversion. If the symmetry group of an object has no fixed points then the object is infinite and its centroid and centre of mass are undefined. If the set of fixed points of the symmetry group of an object is a line or plane then the centroid and centre of mass of the object, if defined, and any other point that has unique properties with respect to the object, are on this line or plane. == 1D == Line: Only the trivial isometry group leaves the whole line fixed. Point: The groups generated by a reflection leave a point fixed. == 2D == Plane: Only the trivial isometry group C1 leaves the whole plane fixed. Line: Cs with respect to any line leaves that line fixed. Point: The point groups in two dimensions with respect to any point leave that point fixed. == 3D == Space: Only the trivial isometry group C1 leaves the whole space fixed.
Plane: Cs with respect to a plane leaves that plane fixed. Line: Isometry groups leaving a line fixed are those whose isometries, in every plane perpendicular to that line, act as a common point group in two dimensions with respect to the point of intersection of the line and the plane. These are: Cn (n > 1) and Cnv (n > 1); cylindrical symmetry without reflection symmetry in a plane perpendicular to the axis; cases in which the symmetry group is an infinite subset of that of cylindrical symmetry. Point: All other point groups in three dimensions. No fixed points: The isometry group contains translations or a screw operation. == Arbitrary dimension == Point: One example of an isometry group, applying in every dimension, is that generated by inversion in a point. An n-dimensional parallelepiped is an example of an object invariant under such an inversion. == References == Slavik V. Jablan, Symmetry, Ornament and Modularity, Volume 30 of K & E Series on Knots and Everything, World Scientific, 2002. ISBN 9812380809
Wikipedia:Fixed-point combinator#0
In combinatory logic for computer science, a fixed-point combinator (or fixpoint combinator): p.26 is a higher-order function (i.e., a function which takes a function as argument) that returns some fixed point (a value that is mapped to itself) of its argument function, if one exists. Formally, if f i x {\displaystyle \mathrm {fix} } is a fixed-point combinator and the function f {\displaystyle f} has one or more fixed points, then f i x f {\displaystyle \mathrm {fix} \ f} is one of these fixed points, i.e., f i x f = f ( f i x f ) . {\displaystyle \mathrm {fix} \ f\ =f\ (\mathrm {fix} \ f).} Fixed-point combinators can be defined in the lambda calculus and in functional programming languages, and provide a means to allow for recursive definitions. == Y combinator in lambda calculus == In the classical untyped lambda calculus, every function has a fixed point. A particular implementation of f i x {\displaystyle \mathrm {fix} } is Haskell Curry's paradoxical combinator Y, given by: 131 Y = λ f . ( λ x . f ( x x ) ) ( λ x . f ( x x ) ) {\displaystyle Y=\lambda f.\ (\lambda x.f\ (x\ x))\ (\lambda x.f\ (x\ x))} (Here using the standard notations and conventions of lambda calculus: Y is a function that takes one argument f and returns the entire expression following the first period; the expression λ x . f ( x x ) {\displaystyle \lambda x.f\ (x\ x)} denotes a function that takes one argument x, thought of as a function, and returns the expression f ( x x ) {\displaystyle f\ (x\ x)} , where ( x x ) {\displaystyle (x\ x)} denotes x applied to itself. Juxtaposition of expressions denotes function application, is left-associative, and has higher precedence than the period.) === Verification === The following calculation verifies that Y g {\displaystyle Yg} is indeed a fixed point of the function g {\displaystyle g} : The lambda term g ( Y g ) {\displaystyle g\ (Y\ g)} may not, in general, β-reduce to the term Y g {\displaystyle Y\ g} . 
However, both terms β-reduce to the same term, as shown. === Uses === Applied to a function with one variable, the Y combinator usually does not terminate. More interesting results are obtained by applying the Y combinator to functions of two or more variables. The added variables may be used as a counter, or index. The resulting function behaves like a while or a for loop in an imperative language. Used in this way, the Y combinator implements simple recursion. The lambda calculus does not allow a function to appear as a term in its own definition as is possible in many programming languages, but a function can be passed as an argument to a higher-order function that applies it in a recursive manner. The Y combinator may also be used in implementing Curry's paradox. The heart of Curry's paradox is that untyped lambda calculus is unsound as a deductive system, and the Y combinator demonstrates this by allowing an anonymous expression to represent zero, or even many values. This is inconsistent in mathematical logic. === Example implementations === An example implementation of Y in the language R is presented below; it can then be used to implement factorial. Y is only needed when function names are absent: substituting all the definitions into one line, so that function names are not required, still works because R uses lazy evaluation. Languages that use strict evaluation, such as Python, C++, and other strict programming languages, can often express Y; however, any such implementation is useless in practice, since it expands indefinitely until terminating via a stack overflow. == Fixed-point combinator == The Y combinator is an implementation of a fixed-point combinator in lambda calculus. Fixed-point combinators may also be easily defined in other functional and imperative languages. The implementation in lambda calculus is more difficult due to limitations in lambda calculus.
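As the paragraph above notes, a literal Y diverges under strict evaluation. One common workaround (a sketch, not taken from the article) is to delay the self-application behind a lambda, here leaning on Python's own named recursion to tie the knot:

```python
def fix(f):
    """A fixed-point combinator for a strict language: the inner lambda
    delays evaluation of fix(f) until an argument is actually supplied,
    so fix(f) behaves extensionally as f(fix(f)) without looping forever."""
    return lambda *args: f(fix(f))(*args)

# The recursive step is written without naming itself; fix supplies the
# recursive reference.
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
```

This is the same eta-expansion idea behind the Z combinator discussed later in the article, except that the combinator itself is defined using the host language's recursion rather than pure lambda terms.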
The fixed-point combinator may be used in a number of different areas: general mathematics, untyped lambda calculus, typed lambda calculus, functional programming, and imperative programming. Fixed-point combinators may be applied to a range of different functions, but normally will not terminate unless there is an extra parameter. When the function to be fixed refers to its parameter, another call to the function is invoked, so the calculation never gets started. Instead, the extra parameter is used to trigger the start of the calculation. The type of the fixed point is the return type of the function being fixed. This may be a real number, a function, or any other type. In the untyped lambda calculus, the function to apply the fixed-point combinator to may be expressed using an encoding, like Church encoding. In this case particular lambda terms (which define functions) are considered as values. "Running" (beta reducing) the fixed-point combinator on the encoding gives a lambda term for the result which may then be interpreted as a fixed-point value. Alternatively, a function may be considered as a lambda term defined purely in lambda calculus. These different approaches affect how a mathematician and a programmer may regard a fixed-point combinator. A mathematician may see the Y combinator applied to a function as being an expression satisfying the fixed-point equation, and therefore a solution. In contrast, a person only wanting to apply a fixed-point combinator to some general programming task may see it only as a means of implementing recursion. === Values and domains === Many functions do not have any fixed points, for instance f : N → N {\displaystyle f:\mathbb {N} \to \mathbb {N} } with f ( n ) = n + 1 {\displaystyle f(n)=n+1} . Using Church encoding, natural numbers can be represented in lambda calculus, and this function f can be defined in lambda calculus. However, its domain will now contain all lambda expressions, not just those representing natural numbers.
The Y combinator, applied to f, will yield a fixed point for f, but this fixed point won't represent a natural number. If trying to compute Y f in an actual programming language, an infinite loop will occur. === Function versus implementation === The fixed-point combinator may be defined in mathematics and then implemented in other languages. General mathematics defines a function based on its extensional properties. That is, two functions are equal if they perform the same mapping. Lambda calculus and programming languages regard function identity as an intensional property. A function's identity is based on its implementation. A lambda calculus function (or term) is an implementation of a mathematical function. In the lambda calculus there are a number of combinators (implementations) that satisfy the mathematical definition of a fixed-point combinator. === Definition of the term "combinator" === Combinatory logic is a theory of higher-order functions. A combinator is a closed lambda expression, meaning that it has no free variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables. == Recursive definitions and fixed-point combinators == Fixed-point combinators can be used to implement recursive definitions of functions. However, they are rarely used in practical programming. Strongly normalizing type systems such as the simply typed lambda calculus disallow non-termination and hence fixed-point combinators often cannot be assigned a type or require complex type system features. Furthermore, fixed-point combinators are often inefficient compared to other strategies for implementing recursion, as they require more function reductions and construct and take apart a tuple for each group of mutually recursive definitions.: page 232 === The factorial function === The factorial function provides a good example of how a fixed-point combinator may be used to define recursive functions.
The standard recursive definition of the factorial function in mathematics can be written as fact ⁡ n = { 1 if n = 0 n × fact ⁡ ( n − 1 ) otherwise. {\displaystyle \operatorname {fact} \ n={\begin{cases}1&{\text{if}}~n=0\\n\times \operatorname {fact} (n-1)&{\text{otherwise.}}\end{cases}}} where n is a non-negative integer. Implementing this in lambda calculus, where integers are represented using Church encoding, encounters the problem that the lambda calculus disallows the name of a function ('fact') to be used in the function's definition. This can be circumvented using a fixed-point combinator fix {\displaystyle {\textsf {fix}}} as follows. Define a function F of two arguments f and n: F f n = ( IsZero ⁡ n ) 1 ( multiply ⁡ n ( f ( pred ⁡ n ) ) ) {\displaystyle F\ f\ n=(\operatorname {IsZero} \ n)\ 1\ (\operatorname {multiply} \ n\ (f\ (\operatorname {pred} \ n)))} (Here ( IsZero ⁡ n ) {\displaystyle (\operatorname {IsZero} \ n)} is a function that takes two arguments and returns its first argument if n=0, and its second argument otherwise; pred ⁡ n {\displaystyle \operatorname {pred} \ n} evaluates to n-1.) Now define fact = fix F {\displaystyle \operatorname {fact} ={\textsf {fix}}\ F} . Then fact {\displaystyle \operatorname {fact} } is a fixed-point of F, which gives fact ⁡ n = F fact ⁡ n = ( IsZero ⁡ n ) 1 ( multiply ⁡ n ( fact ⁡ ( pred ⁡ n ) ) ) {\displaystyle {\begin{aligned}\operatorname {fact} n&=F\ \operatorname {fact} \ n\\&=(\operatorname {IsZero} \ n)\ 1\ (\operatorname {multiply} \ n\ (\operatorname {fact} \ (\operatorname {pred} \ n)))\ \end{aligned}}} as desired. == Fixed-point combinators in lambda calculus == The Y combinator, discovered by Haskell Curry, is defined as Y = λ f . ( λ x . f ( x x ) ) ( λ x . f ( x x ) ) {\displaystyle Y=\lambda f.(\lambda x.f\ (x\ x))\ (\lambda x.f\ (x\ x))} === Other fixed-point combinators === In untyped lambda calculus fixed-point combinators are not especially rare. In fact there are infinitely many of them. 
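One such combinator, the Z combinator (defined later in the article as an eta-expansion of Y), can be transcribed into Python as pure anonymous functions to run the factorial construction from the previous section under strict evaluation. This is a sketch; the names Z, F, and fact are illustrative:

```python
# Z combinator as pure lambda terms: like Y, but the inner
# self-application x(x) is wrapped in "lambda v: ..." so that it is only
# evaluated when an argument arrives (safe under strict evaluation).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# F plays the role of the article's F f n: one step of factorial with
# the recursive call abstracted out as the parameter f.
F = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

fact = Z(F)
```

No name is ever bound recursively here: fact is a fixed point of F constructed entirely from anonymous functions, illustrating how a fixed-point combinator supplies recursion to a language (or calculus) without it.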
In 2005 Mayer Goldberg showed that the set of fixed-point combinators of untyped lambda calculus is recursively enumerable. The Y combinator can be expressed in the SKI-calculus as Y = S ( K ( S I I ) ) ( S ( S ( K S ) K ) ( K ( S I I ) ) ) = S S I ( S ( S ( K S ) K ) ( K ( S I I ) ) ) {\displaystyle {\mathsf {Y=S(K(SII))(S(S(KS)K)(K(SII)))=SSI(S(S(KS)K)(K(SII)))}}} Additional combinators (B, C, K, W system) allow for much shorter encodings. With U = S I I {\displaystyle {\mathsf {U=SII}}} the self-application combinator, since S ( K x ) y z = x ( y z ) = B x y z {\displaystyle {\mathsf {S}}({\mathsf {K}}x)yz=x(yz)={\mathsf {B}}xyz} and S x ( K y ) z = x z y = C x y z {\displaystyle {\mathsf {S}}x({\mathsf {K}}y)z=xzy={\mathsf {C}}xyz} , the above becomes Y = S ( K U ) ( S B ( K U ) ) = B U ( C B U ) ; Y = S S I ( B W B ) {\displaystyle {\mathsf {Y=S(KU)(SB(KU))=BU(CBU)}}\ \ \ ;\ \ {\mathsf {Y=SSI(BWB)}}} The shortest fixed-point combinator in the SK-calculus using S and K combinators only, found by John Tromp, is Y ′ = S S K ( S ( K ( S S ( S ( S S K ) ) ) ) K ) = W C ( S B ( C ( W C ) ) ) {\displaystyle {\mathsf {Y'=SSK(S(K(SS(S(SSK))))K)=WC(SB(C(WC)))}}} although note that it is not in normal form, which is longer. This combinator corresponds to the lambda expression Y ′ = ( λ x y . x y x ) ( λ y x . y ( x y x ) ) {\displaystyle {\mathsf {Y}}'=(\lambda xy.xyx)(\lambda yx.y(xyx))} The following fixed-point combinator is simpler than the Y combinator, and β-reduces into the Y combinator; it is sometimes cited as the Y combinator itself: X = λ f . ( λ x . x x ) ( λ x . f ( x x ) ) ; X f = U ( B f U ) {\displaystyle {\mathsf {X}}=\lambda f.(\lambda x.xx)(\lambda x.f(xx))\ \ \ ;\ \ {\mathsf {Xf=U(BfU)}}} Another common fixed-point combinator is the Turing fixed-point combinator (named after its discoverer, Alan Turing):: 132 Θ = ( λ x y . y ( x x y ) ) ( λ x y . 
y ( x x y ) ) = S I I ( S ( K ( S I ) ) ( S I I ) ) = U ( B ( S I ) U ) {\displaystyle \Theta =(\lambda xy.y(xxy))\ (\lambda xy.y(xxy))={\mathsf {SII(S(K(SI))(SII))=U(B(SI)U)}}} Its advantage over Y {\displaystyle {\mathsf {Y}}} is that Θ f {\displaystyle \Theta \ f} beta-reduces to f ( Θ f ) {\displaystyle f\ (\Theta f)} , whereas Y f {\displaystyle {\mathsf {Y}}\ f} and f ( Y f ) {\displaystyle f\ ({\mathsf {Y}}f)} only beta-reduce to a common term. Θ {\displaystyle \Theta } also has a simple call-by-value form: Θ v = ( λ x y . y ( λ z . x x y z ) ) ( λ x y . y ( λ z . x x y z ) ) {\displaystyle \Theta _{v}=(\lambda xy.y(\lambda z.xxyz))\ (\lambda xy.y(\lambda z.xxyz))} The analog for mutual recursion is a polyvariadic fix-point combinator, which may be denoted Y*. === Strict fixed-point combinator === In a strict programming language the Y combinator will expand until stack overflow, or never halt in case of tail call optimization. The Z combinator will work in strict languages (also called eager languages, where applicative evaluation order is applied). The Z combinator has the next argument defined explicitly, preventing the expansion of Z g {\displaystyle Zg} in the right-hand side of the definition: Z g v = g ( Z g ) v . {\displaystyle Zgv=g(Zg)v\ .} and in lambda calculus it is an eta-expansion of the Y combinator: Z = λ f . ( λ x . f ( λ v . x x v ) ) ( λ x . f ( λ v . x x v ) ) . {\displaystyle Z=\lambda f.(\lambda x.f(\lambda v.xxv))\ (\lambda x.f(\lambda v.xxv))\ .} === Non-standard fixed-point combinators === If F is a fixed-point combinator in untyped lambda calculus, then there is: F = λ x . F x = λ x . x ( F x ) = λ x . x ( x ( F x ) ) = ⋯ {\displaystyle {\mathsf {F}}=\lambda x.Fx=\lambda x.x(Fx)=\lambda x.x(x(Fx))=\cdots } Terms that have the same Böhm tree as a fixed-point combinator, i.e., have the same infinite extension λ x . x ( x ( x ⋯ ) ) {\displaystyle \lambda x.x(x(x\cdots ))} , are called non-standard fixed-point combinators. 
Any fixed-point combinator is also a non-standard one, but not all non-standard fixed-point combinators are fixed-point combinators because some of them fail to satisfy the fixed-point equation that defines the "standard" ones. These combinators are called strictly non-standard fixed-point combinators; an example is the following combinator: N = B U ( B ( B U ) B ) {\displaystyle {\mathsf {N=BU(B(BU)B)}}} where B = λ x y z . x ( y z ) {\displaystyle {\mathsf {B}}=\lambda xyz.x(yz)} U = λ x . x x {\displaystyle {\mathsf {U}}=\lambda x.xx\ } since N = λ x . N x = λ x . x ( N 2 x ) = λ x . x ( x ( x ( N 3 x ) ) ) = λ x . x ( x ( x ( x ( x ( x ( N 4 x ) ) ) ) ) ) = ⋯ {\displaystyle {\mathsf {N}}=\lambda x.Nx=\lambda x.x(N_{2}x)=\lambda x.x(x(x(N_{3}x)))=\lambda x.x(x(x(x(x(x(N_{4}x))))))=\cdots } where N i {\displaystyle {\mathsf {N}}_{i}} are modifications of N {\displaystyle {\mathsf {N}}} created on the fly which add i {\displaystyle i} instances of x {\displaystyle x} at once into the chain while being replaced with N i + 1 {\displaystyle {\mathsf {N}}_{i+1}} . The set of non-standard fixed-point combinators is not recursively enumerable. == Implementation in other languages == The Y combinator is a particular implementation of a fixed-point combinator in lambda calculus. Its structure is determined by the limitations of lambda calculus. It is not necessary or helpful to use this structure in implementing the fixed-point combinator in other languages. Simple examples of fixed-point combinators implemented in some programming paradigms are given below. === Lazy functional implementation === In a language that supports lazy evaluation, as in Haskell, it is possible to define a fixed-point combinator using the defining equation of the fixed-point combinator which is conventionally named fix. Since Haskell has lazy data types, this combinator can also be used to define fixed points of data constructors (and not only to implement recursive functions). 
The definition is given here, followed by some usage examples. In Hackage, the original sample is: === Strict functional implementation === In a strict functional language, as illustrated below with OCaml, the argument to f is expanded beforehand, yielding an infinite call sequence, f ( f . . . ( f ( f i x f ) ) . . . ) x {\displaystyle f\ (f...(f\ ({\mathsf {fix}}\ f))...)\ x} . This may be resolved by defining fix with an extra parameter. In a multi-paradigm functional language (one decorated with imperative features), such as Lisp, Peter Landin suggested the use of a variable assignment to create a fixed-point combinator, as in the below example using Scheme: Using a lambda calculus with axioms for assignment statements, it can be shown that Y! satisfies the same fixed-point law as the call-by-value Y combinator: ( Y ! λ x . e ) e ′ = ( λ x . e ) ( Y ! λ x . e ) e ′ {\displaystyle (Y_{!}\ \lambda x.e)e'=(\lambda x.e)\ (Y_{!}\ \lambda x.e)e'} In more idiomatic modern Scheme usage, this would typically be handled via a letrec expression, as lexical scope was introduced to Lisp in the 1970s: Or without the internal label: === Imperative language implementation === This example is a slightly interpretive implementation of a fixed-point combinator. A class is used to contain the fix function, called fixer. The function to be fixed is contained in a class that inherits from fixer. The fix function accesses the function to be fixed as a virtual function. As for the strict functional definition, fix is explicitly given an extra parameter x, which means that lazy evaluation is not needed. Another example can be shown to demonstrate SKI combinator calculus (with given bird name from combinatory logic) being used to build up Z combinator to achieve tail call-like behavior through trampolining: == Typing == In System F (polymorphic lambda calculus) a polymorphic fixed-point combinator has type ∀a.(a → a) → a where a {\displaystyle a} is a type variable. 
That is, if the type of f i x f {\displaystyle fix\ f} fulfilling the equation f i x f = f ( f i x f ) {\displaystyle fix\ f\ =\ f\ (fix\ f)} is a {\displaystyle a} , – the most general type, – then the type of f {\displaystyle f} is a → a {\displaystyle a\to a} . So then, f i x {\displaystyle fix} takes a function which maps a {\displaystyle a} to a {\displaystyle a} and uses it to return a value of type a {\displaystyle a} . In the simply typed lambda calculus extended with recursive data types, fixed-point operators can be written, but the type of a "useful" fixed-point operator (one whose application always returns) may be restricted. In the simply typed lambda calculus, the fixed-point combinator Y cannot be assigned a type because at some point it would deal with the self-application sub-term x x {\displaystyle x~x} by the application rule: Γ ⊢ x : t 1 → t 2 Γ ⊢ x : t 1 Γ ⊢ x x : t 2 {\displaystyle {\Gamma \vdash x\!:\!t_{1}\to t_{2}\quad \Gamma \vdash x\!:\!t_{1}} \over {\Gamma \vdash x~x\!:\!t_{2}}} where x {\displaystyle x} has the infinite type t 1 = t 1 → t 2 {\displaystyle t_{1}=t_{1}\to t_{2}} . No fixed-point combinator can in fact be typed; in those systems, any support for recursion must be explicitly added to the language. === Type for the Y combinator === In programming languages that support recursive data types, the unbounded recursion in t = t → a {\displaystyle t=t\to a} which creates the infinite type t {\displaystyle t} is broken by marking the t {\displaystyle t} type explicitly as a recursive type R e c a {\displaystyle Rec\ a} , which is defined so as to be isomorphic to (or just to be a synonym of) R e c a → a {\displaystyle Rec\ a\to a} . The R e c a {\displaystyle Rec\ a} type value is created by simply tagging the function value of type R e c a → a {\displaystyle Rec\ a\to a} with the data constructor tag R e c {\displaystyle Rec} (or any other of our choosing). 
For example, the following Haskell code has Rec and app as the names of the two directions of the isomorphism, with types: which lets us write: Or equivalently in OCaml: Alternatively: == General information == Because fixed-point combinators can be used to implement recursion, it is possible to use them to describe specific types of recursive computations, such as those in fixed-point iteration, iterative methods, recursive join in relational databases, data-flow analysis, FIRST and FOLLOW sets of non-terminals in a context-free grammar, transitive closure, and other types of closure operations. A function for which every input is a fixed point is called an identity function. Formally: ∀ x ( f x = x ) {\displaystyle \forall x(f\ x=x)} In contrast to universal quantification over all x {\displaystyle x} , a fixed-point combinator constructs one value that is a fixed point of f {\displaystyle f} . The remarkable property of a fixed-point combinator is that it constructs a fixed point for an arbitrary given function f {\displaystyle f} . Other functions have the special property that, after being applied once, further applications don't have any effect. More formally: ∀ x ( f ( f x ) = f x ) {\displaystyle \forall x(f\ (f\ x)=f\ x)} Such functions are called idempotent (see also Projection (mathematics)). An example of such a function is the function that returns 0 for all even integers, and 1 for all odd integers. In lambda calculus, from a computational point of view, applying a fixed-point combinator to an identity function or an idempotent function typically results in non-terminating computation. For example, obtaining ( Y λ x . x ) = ( λ x . ( λ x . x ) ( x x ) ) ( λ x . ( λ x . x ) ( x x ) ) {\displaystyle (Y\ \lambda x.x)=(\lambda x.(\lambda x.x)(x\ x))\ (\lambda x.(\lambda x.x)(x\ x))} where the resulting term can only reduce to itself and represents an infinite loop. 
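As a concrete illustration of the strict (call-by-value) situation discussed above, a Z-style fixed-point combinator can be sketched in Python, which is also strictly evaluated; the names Z and fact below are illustrative, not from the article:

```python
# Z combinator: a call-by-value fixed-point combinator. The eta-expansion
# (lambda v: x(x)(v)) delays the self-application x(x), so the argument of f
# is not expanded eagerly into an infinite call sequence.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# An anonymous recursive factorial: no function refers to itself by name.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # → 120
```

Dropping the eta-expansion (writing f(x(x)) directly) would recurse forever under strict evaluation, mirroring the infinite call sequence described for the plain Y combinator above.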
Fixed-point combinators do not necessarily exist in more restrictive models of computation. For instance, they do not exist in simply typed lambda calculus. The Y combinator allows recursion to be defined as a set of rewrite rules, without requiring native recursion support in the language. In programming languages that support anonymous functions, fixed-point combinators allow the definition and use of anonymous recursive functions, i.e., without having to bind such functions to identifiers. In this setting, the use of fixed-point combinators is sometimes called anonymous recursion. == See also == Anonymous function Fixed-point iteration Lambda calculus#Recursion and fixed points Lambda lifting Let expression == Notes == == References == Werner Kluge, Abstract computing machines: a lambda calculus perspective, Springer, 2005, ISBN 3-540-21146-2, pp. 73–77 Mayer Goldberg, (2005) On the Recursive Enumerability of Fixed-Point Combinators, BRICS Report RS-05-1, University of Aarhus Matthias Felleisen, A Lecture on the Why of Y. == External links == Recursion Theory and Joy, Manfred von Thun, (2002 or earlier) The Lambda Calculus - notes by Don Blaheta, October 12, 2000 Y Combinator Archived 2009-03-23 at the Wayback Machine "A Use of the Y Combinator in Ruby" "Functional programming in Ada" Rosetta code - Y combinator
Wikipedia:Fixed-point index#0
In mathematics, the Lefschetz fixed-point theorem is a formula that counts the fixed points of a continuous mapping from a compact topological space X {\displaystyle X} to itself by means of traces of the induced mappings on the homology groups of X {\displaystyle X} . It is named after Solomon Lefschetz, who first stated it in 1926. The counting is subject to an imputed multiplicity at a fixed point called the fixed-point index. A weak version of the theorem is enough to show that a mapping without any fixed point must have rather special topological properties (like a rotation of a circle). == Formal statement == For a formal statement of the theorem, let f : X → X {\displaystyle f\colon X\rightarrow X\,} be a continuous map from a compact triangulable space X {\displaystyle X} to itself. Define the Lefschetz number Λ f {\displaystyle \Lambda _{f}} of f {\displaystyle f} by Λ f := ∑ k ≥ 0 ( − 1 ) k t r ( H k ( f , Q ) ) , {\displaystyle \Lambda _{f}:=\sum _{k\geq 0}(-1)^{k}\mathrm {tr} (H_{k}(f,\mathbb {Q} )),} the alternating (finite) sum of the matrix traces of the linear maps induced by f {\displaystyle f} on H k ( X , Q ) {\displaystyle H_{k}(X,\mathbb {Q} )} , the singular homology groups of X {\displaystyle X} with rational coefficients. A simple version of the Lefschetz fixed-point theorem states: if Λ f ≠ 0 {\displaystyle \Lambda _{f}\neq 0\,} then f {\displaystyle f} has at least one fixed point, i.e., there exists at least one x {\displaystyle x} in X {\displaystyle X} such that f ( x ) = x {\displaystyle f(x)=x} . In fact, since the Lefschetz number has been defined at the homology level, the conclusion can be extended to say that any map homotopic to f {\displaystyle f} has a fixed point as well. Note however that the converse is not true in general: Λ f {\displaystyle \Lambda _{f}} may be zero even if f {\displaystyle f} has fixed points, as is the case for the identity map on odd-dimensional spheres. 
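As a worked example (an illustration, not from the article's text): the rational homology of the n-sphere is concentrated in degrees 0 and n, so for a self-map f of S^n of degree d the alternating sum reduces to two terms:

```latex
% Lefschetz number of a degree-d self-map f of the n-sphere:
% f acts by 1 on H_0 and by multiplication by d on H_n.
\Lambda _{f}=\operatorname {tr} \left(H_{0}(f,\mathbb {Q} )\right)+(-1)^{n}\operatorname {tr} \left(H_{n}(f,\mathbb {Q} )\right)=1+(-1)^{n}d.
```

For the identity map (d = 1) on an odd-dimensional sphere this gives Λ = 1 − 1 = 0 even though every point is fixed, which is exactly the failure of the converse noted above; the antipodal map, of degree (−1)^{n+1}, likewise has Λ = 0, consistent with its having no fixed points.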
== Sketch of a proof == First, by applying the simplicial approximation theorem, one shows that if f {\displaystyle f} has no fixed points, then (possibly after subdividing X {\displaystyle X} ) f {\displaystyle f} is homotopic to a fixed-point-free simplicial map (i.e., it sends each simplex to a different simplex). This means that the diagonal values of the matrices of the linear maps induced on the simplicial chain complex of X {\displaystyle X} must all be zero. Then one notes that, in general, the Lefschetz number can also be computed using the alternating sum of the matrix traces of the aforementioned linear maps (this is true for almost exactly the same reason that the Euler characteristic has a definition in terms of homology groups; see below for the relation to the Euler characteristic). In the particular case of a fixed-point-free simplicial map, all of the diagonal values are zero, and thus the traces are all zero. == Lefschetz–Hopf theorem == A stronger form of the theorem, also known as the Lefschetz–Hopf theorem, states that, if f {\displaystyle f} has only finitely many fixed points, then ∑ x ∈ F i x ( f ) i n d ( f , x ) = Λ f , {\displaystyle \sum _{x\in \mathrm {Fix} (f)}\mathrm {ind} (f,x)=\Lambda _{f},} where F i x ( f ) {\displaystyle \mathrm {Fix} (f)} is the set of fixed points of f {\displaystyle f} , and i n d ( f , x ) {\displaystyle \mathrm {ind} (f,x)} denotes the index of the fixed point x {\displaystyle x} . From this theorem one deduces the Poincaré–Hopf theorem for vector fields, since every vector field on a compact differentiable manifold induces a flow φ ( x , t ) {\displaystyle \varphi (x,t)} in a natural way. For every t {\displaystyle t} , the map f t ( x ) = φ ( x , t ) {\displaystyle f_{t}(x)=\varphi (x,t)} is a continuous mapping homotopic to the identity (and thus has the same Lefschetz number), and for small t {\displaystyle t} the indices of its fixed points equal the indices of the zeroes of the vector field. 
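A quick sanity check of the Lefschetz–Hopf formula (an illustration, not from the article): regard S¹ as the unit circle in the complex plane and take f(z) = z^d with d ≥ 2. The fixed points are the d − 1 solutions of z^{d−1} = 1, and in a local coordinate f has derivative d at each of them, so each local index is sign(1 − d) = −1:

```latex
% Lefschetz–Hopf check for f(z) = z^d on the circle, d >= 2:
\sum _{x\in \mathrm {Fix} (f)}\mathrm {ind} (f,x)=(d-1)\cdot (-1)=1-d=\operatorname {tr} \left(H_{0}(f,\mathbb {Q} )\right)-\operatorname {tr} \left(H_{1}(f,\mathbb {Q} )\right)=\Lambda _{f},
```

since f acts by 1 on H₀ and by multiplication by d on H₁.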
== Relation to the Euler characteristic == The Lefschetz number of the identity map on a finite CW complex can be easily computed by realizing that each f ∗ {\displaystyle f_{\ast }} can be thought of as an identity matrix, and so each trace term is simply the dimension of the appropriate homology group. Thus the Lefschetz number of the identity map is equal to the alternating sum of the Betti numbers of the space, which in turn is equal to the Euler characteristic χ ( X ) {\displaystyle \chi (X)} . Thus we have Λ i d = χ ( X ) . {\displaystyle \Lambda _{\mathrm {id} }=\chi (X).\ } == Relation to the Brouwer fixed-point theorem == The Lefschetz fixed-point theorem generalizes the Brouwer fixed-point theorem, which states that every continuous map from the n {\displaystyle n} -dimensional closed unit disk D n {\displaystyle D^{n}} to D n {\displaystyle D^{n}} must have at least one fixed point. This can be seen as follows: D n {\displaystyle D^{n}} is compact and triangulable, all its homology groups except H 0 {\displaystyle H_{0}} are zero, and every continuous map f : D n → D n {\displaystyle f\colon D^{n}\to D^{n}} induces the identity map f ∗ : H 0 ( D n , Q ) → H 0 ( D n , Q ) {\displaystyle f_{*}\colon H_{0}(D^{n},\mathbb {Q} )\to H_{0}(D^{n},\mathbb {Q} )} , whose trace is one; all this together implies that Λ f {\displaystyle \Lambda _{f}} is non-zero for any continuous map f : D n → D n {\displaystyle f\colon D^{n}\to D^{n}} . == Historical context == Lefschetz presented his fixed-point theorem in his 1926 paper. Lefschetz's focus was not on fixed points of maps, but rather on what are now called coincidence points of maps. 
Given two maps f {\displaystyle f} and g {\displaystyle g} from an orientable manifold X {\displaystyle X} to an orientable manifold Y {\displaystyle Y} of the same dimension, the Lefschetz coincidence number of f {\displaystyle f} and g {\displaystyle g} is defined as Λ f , g = ∑ ( − 1 ) k t r ( D X ∘ g ∗ ∘ D Y − 1 ∘ f ∗ ) , {\displaystyle \Lambda _{f,g}=\sum (-1)^{k}\mathrm {tr} (D_{X}\circ g^{*}\circ D_{Y}^{-1}\circ f_{*}),} where f ∗ {\displaystyle f_{*}} is as above, g ∗ {\displaystyle g^{*}} is the homomorphism induced by g {\displaystyle g} on the cohomology groups with rational coefficients, and D X {\displaystyle D_{X}} and D Y {\displaystyle D_{Y}} are the Poincaré duality isomorphisms for X {\displaystyle X} and Y {\displaystyle Y} , respectively. Lefschetz proved that if the coincidence number is nonzero, then f {\displaystyle f} and g {\displaystyle g} have a coincidence point. He noted in his paper that letting X = Y {\displaystyle X=Y} and letting g {\displaystyle g} be the identity map gives a simpler result, which is now known as the fixed-point theorem. == Frobenius == Let X {\displaystyle X} be a variety defined over the finite field k {\displaystyle k} with q {\displaystyle q} elements and let X ¯ {\displaystyle {\bar {X}}} be the base change of X {\displaystyle X} to the algebraic closure of k {\displaystyle k} . The Frobenius endomorphism of X ¯ {\displaystyle {\bar {X}}} (often the geometric Frobenius, or just the Frobenius), denoted by F q {\displaystyle F_{q}} , maps a point with coordinates x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} to the point with coordinates x 1 q , … , x n q {\displaystyle x_{1}^{q},\ldots ,x_{n}^{q}} . Thus the fixed points of F q {\displaystyle F_{q}} are exactly the points of X {\displaystyle X} with coordinates in k {\displaystyle k} ; the set of such points is denoted by X ( k ) {\displaystyle X(k)} . 
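This can be illustrated numerically; the prime and the curve below are arbitrary choices for the sketch, not data from the article. Over F₇, an affine point of y² = x³ + x + 1 has coordinates in F₇ exactly when it is fixed by the Frobenius (x, y) ↦ (x⁷, y⁷), by Fermat's little theorem:

```python
# Count the affine F_p-points of the curve y^2 = x^3 + x + 1 and check that
# they are precisely the points fixed by the p-power Frobenius map.
p = 7

def on_curve(x, y):
    return (y * y - (x ** 3 + x + 1)) % p == 0

# All affine points with coordinates in F_p.
rational = [(x, y) for x in range(p) for y in range(p) if on_curve(x, y)]

# A point with coordinates in F_p is fixed by (x, y) -> (x^p, y^p),
# since a^p = a for every a in F_p (Fermat's little theorem).
fixed = [(x, y) for (x, y) in rational
         if (pow(x, p, p), pow(y, p, p)) == (x, y)]

assert fixed == rational
print(len(rational))  # the number of affine points in X(F_7)
```

The full trace formula would also account for points at infinity and the ℓ-adic cohomology groups, which this brute-force count does not touch.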
The Lefschetz trace formula holds in this context, and reads: # X ( k ) = ∑ i ( − 1 ) i t r ( F q ∗ | H c i ( X ¯ , Q ℓ ) ) . {\displaystyle \#X(k)=\sum _{i}(-1)^{i}\mathrm {tr} (F_{q}^{*}|H_{c}^{i}({\bar {X}},\mathbb {Q} _{\ell })).} This formula involves the trace of the Frobenius on the étale cohomology, with compact supports, of X ¯ {\displaystyle {\bar {X}}} with values in the field of ℓ {\displaystyle \ell } -adic numbers, where ℓ {\displaystyle \ell } is a prime coprime to q {\displaystyle q} . If X {\displaystyle X} is smooth and equidimensional, this formula can be rewritten in terms of the arithmetic Frobenius Φ q {\displaystyle \Phi _{q}} , which acts as the inverse of F q {\displaystyle F_{q}} on cohomology: # X ( k ) = q dim ⁡ X ∑ i ( − 1 ) i t r ( ( Φ q − 1 ) ∗ | H i ( X ¯ , Q ℓ ) ) . {\displaystyle \#X(k)=q^{\dim X}\sum _{i}(-1)^{i}\mathrm {tr} ((\Phi _{q}^{-1})^{*}|H^{i}({\bar {X}},\mathbb {Q} _{\ell })).} This formula involves usual cohomology, rather than cohomology with compact supports. The Lefschetz trace formula can also be generalized to algebraic stacks over finite fields. == See also == Fixed-point theorems Lefschetz zeta function Holomorphic Lefschetz fixed-point formula == References ==
Wikipedia:Fixed-point property#0
A mathematical object X has the fixed-point property if every suitably well-behaved mapping from X to itself has a fixed point. The term is most commonly used to describe topological spaces on which every continuous mapping has a fixed point. But another use is in order theory, where a partially ordered set P is said to have the fixed point property if every increasing function on P has a fixed point. == Definition == Let A be an object in the concrete category C. Then A has the fixed-point property if every morphism (i.e., every function) f : A → A {\displaystyle f:A\to A} has a fixed point. The most common usage is when C = Top is the category of topological spaces. Then a topological space X has the fixed-point property if every continuous map f : X → X {\displaystyle f:X\to X} has a fixed point. == Examples == === Singletons === In the category of sets, the objects with the fixed-point property are precisely the singletons. === The closed interval === The closed interval [0,1] has the fixed point property: Let f: [0,1] → [0,1] be a continuous mapping. If f(0) = 0 or f(1) = 1, then our mapping has a fixed point at 0 or 1. If not, then f(0) > 0 and f(1) − 1 < 0. Thus the function g(x) = f(x) − x is a continuous real valued function which is positive at x = 0 and negative at x = 1. By the intermediate value theorem, there is some point x0 with g(x0) = 0, which is to say that f(x0) − x0 = 0, and so x0 is a fixed point. The open interval does not have the fixed-point property. The mapping f(x) = x2 has no fixed point on the interval (0,1). === The closed disc === The closed interval is a special case of the closed disc, which in any finite dimension has the fixed-point property by the Brouwer fixed-point theorem. == Topology == A retract A of a space X with the fixed-point property also has the fixed-point property. 
This is because if r : X → A {\displaystyle r:X\to A} is a retraction and f : A → A {\displaystyle f:A\to A} is any continuous function, then the composition i ∘ f ∘ r : X → X {\displaystyle i\circ f\circ r:X\to X} (where i : A → X {\displaystyle i:A\to X} is inclusion) has a fixed point. That is, there is x ∈ A {\displaystyle x\in A} such that f ∘ r ( x ) = x {\displaystyle f\circ r(x)=x} . Since x ∈ A {\displaystyle x\in A} we have that r ( x ) = x {\displaystyle r(x)=x} and therefore f ( x ) = x . {\displaystyle f(x)=x.} A topological space has the fixed-point property if and only if its identity map is universal. A product of spaces with the fixed-point property in general fails to have the fixed-point property even if one of the spaces is the closed real interval. The FPP is a topological invariant, i.e. it is preserved by any homeomorphism. The FPP is also preserved by any retraction. According to the Brouwer fixed-point theorem, every compact and convex subset of a Euclidean space has the FPP. More generally, according to the Schauder–Tychonoff fixed point theorem, every compact and convex subset of a locally convex topological vector space has the FPP. Compactness alone does not imply the FPP, and convexity is not even a topological property, so it makes sense to ask how to topologically characterize the FPP. In 1932 Borsuk asked whether compactness together with contractibility could be a sufficient condition for the FPP to hold. The problem was open for 20 years until the conjecture was disproved by Kinoshita, who found an example of a compact contractible space without the FPP. == References == Samuel Eilenberg, Norman Steenrod (1952). Foundations of Algebraic Topology. Princeton University Press. Schröder, Bernd (2002). Ordered Sets. Birkhäuser Boston.
Wikipedia:Fixed-point space#0
In mathematics, a Hausdorff space X is called a fixed-point space if it obeys a fixed-point theorem, according to which every continuous function f : X → X {\displaystyle f:X\rightarrow X} has a fixed point, a point x {\displaystyle x} for which f ( x ) = x {\displaystyle f(x)=x} . For example, the closed unit interval is a fixed point space, as can be proved from the intermediate value theorem. The real line is not a fixed-point space, because the continuous function that adds one to its argument does not have a fixed point. Generalizing the unit interval, by the Brouwer fixed-point theorem, every compact convex set in a Euclidean space is a fixed-point space. The definition of a fixed-point space can also be extended from continuous functions of topological spaces to other classes of maps on other types of space. == References ==
Wikipedia:Flag (linear algebra)#0
In mathematics, particularly in linear algebra, a flag is an increasing sequence of subspaces of a finite-dimensional vector space V. Here "increasing" means each is a proper subspace of the next (see filtration): { 0 } = V 0 ⊂ V 1 ⊂ V 2 ⊂ ⋯ ⊂ V k = V . {\displaystyle \{0\}=V_{0}\subset V_{1}\subset V_{2}\subset \cdots \subset V_{k}=V.} The term flag is motivated by a particular example resembling a flag: the zero point, a line, and a plane correspond to a nail, a staff, and a sheet of fabric. If we write that dimVi = di then we have 0 = d 0 < d 1 < d 2 < ⋯ < d k = n , {\displaystyle 0=d_{0}<d_{1}<d_{2}<\cdots <d_{k}=n,} where n is the dimension of V (assumed to be finite). Hence, we must have k ≤ n. A flag is called a complete flag if di = i for all i, otherwise it is called a partial flag. A partial flag can be obtained from a complete flag by deleting some of the subspaces. Conversely, any partial flag can be completed (in many different ways) by inserting suitable subspaces. The signature of the flag is the sequence (d1, ..., dk). == Bases == An ordered basis for V is said to be adapted to a flag V0 ⊂ V1 ⊂ ... ⊂ Vk if the first di basis vectors form a basis for Vi for each 0 ≤ i ≤ k. Standard arguments from linear algebra can show that any flag has an adapted basis. Any ordered basis gives rise to a complete flag by letting the Vi be the span of the first i basis vectors. For example, the standard flag in Rn is induced from the standard basis (e1, ..., en) where ei denotes the vector with a 1 in the ith entry and 0's elsewhere. Concretely, the standard flag is the sequence of subspaces: 0 < ⟨ e 1 ⟩ < ⟨ e 1 , e 2 ⟩ < ⋯ < ⟨ e 1 , … , e n ⟩ = K n . {\displaystyle 0<\left\langle e_{1}\right\rangle <\left\langle e_{1},e_{2}\right\rangle <\cdots <\left\langle e_{1},\ldots ,e_{n}\right\rangle =K^{n}.} An adapted basis is almost never unique (the counterexamples are trivial); see below. 
A complete flag on an inner product space has an essentially unique orthonormal basis: it is unique up to multiplying each vector by a unit (scalar of unit length, e.g. 1, −1, i). Such a basis can be constructed using the Gram-Schmidt process. The uniqueness up to units follows inductively, by noting that v i {\displaystyle v_{i}} lies in the one-dimensional space V i − 1 ⊥ ∩ V i {\displaystyle V_{i-1}^{\perp }\cap V_{i}} . More abstractly, it is unique up to an action of the maximal torus: the flag corresponds to the Borel group, and the inner product corresponds to the maximal compact subgroup. == Stabilizer == The stabilizer subgroup of the standard flag is the group of invertible upper triangular matrices. More generally, the stabilizer of a flag (the linear operators T {\displaystyle T} on V such that T ( V i ) < V i {\displaystyle T(V_{i})<V_{i}} for all i) is, in matrix terms, the algebra of block upper triangular matrices (with respect to an adapted basis), where the block sizes are d i − d i − 1 {\displaystyle d_{i}-d_{i-1}} . The stabilizer subgroup of a complete flag is the set of invertible upper triangular matrices with respect to any basis adapted to the flag. The subgroup of lower triangular matrices with respect to such a basis depends on that basis, and can therefore not be characterized in terms of the flag only. The stabilizer subgroup of any complete flag is a Borel subgroup (of the general linear group), and the stabilizer of any partial flag is a parabolic subgroup. The stabilizer subgroup of a flag acts simply transitively on adapted bases for the flag, and thus these are not unique unless the stabilizer is trivial. That is a very exceptional circumstance: it happens only for a vector space of dimension 0, or for a vector space over F 2 {\displaystyle \mathbf {F} _{2}} of dimension 1 (precisely the cases where only one basis exists, independently of any flag). 
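The statement that invertible upper triangular matrices stabilize the standard flag can be checked concretely; the following Python sketch (using an arbitrary invertible upper triangular 3×3 matrix, chosen for illustration) verifies T(V_i) ⊆ V_i for the standard flag in K³:

```python
# An invertible upper triangular matrix (nonzero diagonal entries).
T = [[2, 1, 3],
     [0, 1, 4],
     [0, 0, 5]]

def apply(M, v):
    # Matrix-vector product for 3x3 M and length-3 v.
    return [sum(M[r][c] * v[c] for c in range(3)) for r in range(3)]

def in_Vi(v, i):
    # V_i = span(e_1, ..., e_i): coordinates i, ..., 2 must vanish.
    return all(v[j] == 0 for j in range(i, 3))

# Check T(V_i) ⊆ V_i for V_0 ⊂ V_1 ⊂ V_2 ⊂ V_3 on a few sample vectors.
for i in range(4):
    for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1], [3, -2, 7]):
        if in_Vi(v, i):
            assert in_Vi(apply(T, v), i)
print("standard flag is stabilized")
```

Replacing T by a matrix with a nonzero entry below the diagonal makes the assertion fail, since such a matrix moves some V_i out of itself.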
== Subspace nest == In an infinite-dimensional space V, as used in functional analysis, the flag idea generalises to a subspace nest, namely a collection of subspaces of V that is a total order for inclusion and which further is closed under arbitrary intersections and closed linear spans. See nest algebra. == Set-theoretic analogs == From the point of view of the field with one element, a set can be seen as a vector space over the field with one element: this formalizes various analogies between Coxeter groups and algebraic groups. Under this correspondence, an ordering on a set corresponds to a maximal flag: an ordering is equivalent to a maximal filtration of a set. For instance, the filtration (flag) { 0 } ⊂ { 0 , 1 } ⊂ { 0 , 1 , 2 } {\displaystyle \{0\}\subset \{0,1\}\subset \{0,1,2\}} corresponds to the ordering ( 0 , 1 , 2 ) {\displaystyle (0,1,2)} . == See also == Filtration (mathematics) Flag manifold Grassmannian Matroid == References == Shafarevich, I. R.; A. O. Remizov (2012). Linear Algebra and Geometry. Springer. ISBN 978-3-642-30993-9.
Wikipedia:Flat (geometry)#0
In geometry, a flat is an affine subspace, i.e. a subset of an affine space that is itself an affine space. In particular, when the parent space is Euclidean, a flat is a Euclidean subspace that inherits the notion of distance from its parent space. In an n-dimensional space, there are k-flats of every dimension k from 0 to n; flats one dimension lower than the parent space, (n − 1)-flats, are called hyperplanes. The flats in a plane (two-dimensional space) are points, lines, and the plane itself; the flats in three-dimensional space are points, lines, planes, and the space itself. The definition of flat excludes non-straight curves and non-planar surfaces, which are subspaces having different notions of distance: arc length and geodesic length, respectively. Flats occur in linear algebra, as geometric realizations of solution sets of systems of linear equations. A flat is a manifold and an algebraic variety, and is sometimes called a linear manifold or linear variety to distinguish it from other manifolds or varieties. == Descriptions == === By equations === A flat can be described by a system of linear equations. For example, a line in two-dimensional space can be described by a single linear equation involving x and y: 3 x + 5 y = 8. {\displaystyle 3x+5y=8.} In three-dimensional space, a single linear equation involving x, y, and z defines a plane, while a pair of linear equations can be used to describe a line. In general, a linear equation in n variables describes a hyperplane, and a system of linear equations describes the intersection of those hyperplanes. Assuming the equations are consistent and linearly independent, a system of k equations describes a flat of dimension n − k. === Parametric === A flat can also be described by a system of linear parametric equations. 
A line can be described by equations involving one parameter: x = 2 + 3 t , y = − 1 + t z = 3 2 − 4 t {\displaystyle x=2+3t,\;\;\;\;y=-1+t\;\;\;\;z={\frac {3}{2}}-4t} while the description of a plane would require two parameters: x = 5 + 2 t 1 − 3 t 2 , y = − 4 + t 1 + 2 t 2 z = 5 t 1 − 3 t 2 . {\displaystyle x=5+2t_{1}-3t_{2},\;\;\;\;y=-4+t_{1}+2t_{2}\;\;\;\;z=5t_{1}-3t_{2}.\,\!} In general, a parameterization of a flat of dimension k would require k parameters, e.g. t1, …, tk. == Operations and relations on flats == === Intersecting, parallel, and skew flats === An intersection of flats is either a flat or the empty set. If each line from one flat is parallel to some line from another flat, then these two flats are parallel. Two parallel flats of the same dimension either coincide or do not intersect; they can be described by two systems of linear equations which differ only in their right-hand sides. If flats do not intersect, and no line from the first flat is parallel to a line from the second flat, then these are skew flats. This is possible only if the sum of their dimensions is less than the dimension of the ambient space. === Join === For two flats of dimensions k1 and k2 there exists the minimal flat which contains them, of dimension at most k1 + k2 + 1. If two flats intersect, then the dimension of the containing flat equals k1 + k2 minus the dimension of the intersection. === Properties of operations === These two operations (referred to as meet and join) make the set of all flats in the Euclidean n-space a lattice and can build systematic coordinates for flats in any dimension, leading to Grassmann coordinates or dual Grassmann coordinates. For example, a line in three-dimensional space is determined by two distinct points or by two distinct planes. However, the lattice of all flats is not a distributive lattice. If two lines ℓ1 and ℓ2 intersect, then ℓ1 ∩ ℓ2 is a point. 
If p is a point not lying on the same plane, then (ℓ1 ∩ ℓ2) + p = (ℓ1 + p) ∩ (ℓ2 + p), both representing a line. But when ℓ1 and ℓ2 are parallel, this distributivity fails, giving p on the left-hand side and a third parallel line on the right-hand side. == Euclidean geometry == The aforementioned facts do not depend on the structure being that of Euclidean space (namely, involving Euclidean distance) and are correct in any affine space. In a Euclidean space: There is the distance between a flat and a point. (See for example Distance from a point to a plane and Distance from a point to a line.) There is the distance between two flats, equal to 0 if they intersect. (See for example Distance between two parallel lines (in the same plane) and Skew lines § Distance.) There is the angle between two flats, which belongs to the interval [0, π/2] between 0 and the right angle. (See for example Dihedral angle (between two planes). See also Angles between flats.) == See also == Matroid Coplanarity Isometry == Notes == == References == Heinrich Guggenheimer (1977), Applicable Geometry, Krieger, New York, page 7. Stolfi, Jorge (1991), Oriented Projective Geometry, Academic Press, ISBN 978-0-12-672025-9From original Stanford Ph.D. dissertation, Primitives for Computational Geometry, available as DEC SRC Research Report 36 Archived 2021-10-17 at the Wayback Machine. == External links == Weisstein, Eric W. "Hyperplane". MathWorld. Weisstein, Eric W. "Flat". MathWorld.
Wikipedia:Flemming Topsøe#0
Flemming Topsøe (born 25 August 1938) is a Danish mathematician, and is emeritus in the mathematics department of the University of Copenhagen. He is the author of several mathematical science works, among them works about analysis, probability theory and information theory. He was born in Aarhus, son of the engineer Haldor Topsøe (1913–2013) and great-grandson of the crystallographer and chemist Haldor Topsøe (1842–1935). He is the older brother of the engineer Henrik Topsøe (born 1944). Topsøe completed his magister degree in mathematics at Aarhus University in 1962. After spending a year at the University of Cambridge in 1965–1966, he finished his PhD in 1971 at the University of Copenhagen. His thesis was titled Topology and Measure, and was later published by Springer. He led the Danish Mathematical Society from 1978 to 1982 and was the driving force behind Euromath (1983–1998), a large project to extend Internet-based services to mathematical societies in Europe and Russia. He received the Hlavka memorial medal in 1992 and the B. Bolzano honorary medal in 2006 from the Czechoslovak Academy of Sciences for his mathematical contributions. == Books == Topsøe, Flemming (1990). Spontaneous phenomena: a mathematical analysis. Translated by Stillwell, John. Oxford: Elsevier Science. ISBN 978-0-323-16038-4. OCLC 843202655. Topsøe, Flemming (1973). Informationsteori (in Danish). Gyldendal. ISBN 978-87-00-96231-6. OCLC 466446962. Topsøe, Flemming (1970). Topology and measure. Berlin New York: Springer-Verlag. ISBN 978-3-540-36284-5. OCLC 295009759. == References == == External links == Flemming Topsøe's homepage
Wikipedia:Florence Eliza Allen#0
Florence Eliza Allen (1876–1960) was an American mathematician and women's suffrage activist. In 1907 she became the second woman to receive a Ph.D. in mathematics at the University of Wisconsin–Madison, and the fourth Ph.D. overall from that department. == Early life == Florence Eliza Allen was born on October 4, 1876, in Horicon, Wisconsin, to Eliza and Charles Allen. Her father, a lawyer, died in 1890 when Allen was 14 years old. She had an older brother, Charles Allen, who was four years her senior and became a court reporter. Allen's mother died in 1913. Raised in a Protestant household, Allen was an active member of the First Congregational Church in Madison, Wisconsin. == Education == Allen’s academic journey began at the University of Wisconsin-Madison, where she earned her undergraduate degree in mathematics in 1900. She was a distinguished student, inducted into Phi Beta Kappa, and participated in campus life, serving in leadership roles in a literary society promoting fine arts, as well as in the self-government association and the yearbook board. Allen continued her studies at the University of Wisconsin, obtaining her master’s degree in 1901 with a thesis titled "The Abelian integrals of the first kind upon the Riemann’s surface s = ( z − a ) 5 6 ( z − b ) 5 6 ( z − c ) 2 6 {\displaystyle s=(z-a)^{\tfrac {5}{6}}(z-b)^{\tfrac {5}{6}}(z-c)^{\tfrac {2}{6}}} ." == PhD and dissertation == In 1907, Allen made history by becoming the second woman, after Charlotte Elvira Pengra, to receive a PhD in mathematics from the University of Wisconsin-Madison. Her dissertation, titled "The Cyclic Involutions of Third Order Determined by Nets of Curves of Deficiency 0, 1, and 2," was supervised by Linnaeus Wayland Dowling and published in the Quarterly Journal of Mathematics in 1914. This accomplishment marked her as the fourth PhD graduate overall from the university’s mathematics department. “There will always be some women who should go in for a PhD. 
— some because it will be an actual necessity to qualify them for one of the occasional — very occasional — openings in college and university positions, some because of the leisure they may have to follow a congenial pursuit. But on the whole I see no great encouragement to be had from past experiences and observations. I do not believe that there is or will be a great future for any but a few in this field. At present, it seems to me, as I look about this campus, that in all strictly academic fields (not those special to women) that there is a decided drop in the number of women engaged.” == Career == Following her doctorate, Allen remained at the University of Wisconsin, where she continued her academic career. Despite her significant contributions to the field, she faced challenges in professional advancement, possibly due to anti-nepotism policies, as her brother was a prominent faculty member in the university’s botany department. In 1945, after 43 years of service as an instructor, Allen was promoted to assistant professor, a position she held until her retirement in 1947. Her lengthy tenure at the university was marked by dedication to teaching and research. After her dissertation, her notable publications include: "A Certain Class of Transcendental Curves" (1915) Published in the Rendiconti del Circolo Matematico di Palermo "Closure of the Tangential Process on the Rational Plane Cubic" (1927) Published in the American Journal of Mathematics Allen was an active member of the Wisconsin Academy of Sciences, Arts, and Letters and participated in various professional organizations, including the American Mathematical Society and the Mathematical Association of America. 
Her contributions to the field were often not recognized, and according to anecdotal evidence, many of her students were unaware of her doctorate, referring to her simply as “Miss Allen.” == Later life == Outside academia, Allen was listed in the 1914 “Woman’s Who’s Who of America,” where she expressed her support for women’s suffrage. Allen lived with her mother until the latter's death in 1913. Later, she either lived alone or shared her home with roommates. In her later years, she continued to be involved in her community and professional organizations. On December 29, 1960, Allen was admitted to the hospital in Madison, Wisconsin. She died two days later, on December 31, 1960, at the age of 84. She was buried in Oak Hill Cemetery in her hometown of Horicon, Wisconsin. == See also == Lillian Beecroft Vermillion Thisson == References == == Sources == Green, Judy; LaDuke, Jeanne (2009). Pioneering Women in American Mathematics: The Pre-1940 PhD's. American Mathematical Society. ISBN 978-0-8218-4376-5. ProQuest 2148436019. == External links == Dissertation: "The Cyclic Involutions of Third Order Determined by Nets of Curves of Deficiency 0, 1, and 2" Allen, Florence Eliza (December 1915). "A certain class of transcendental curves". Rendiconti del Circolo Matematico di Palermo (1884-1940). 39 (1): 149–152. doi:10.1007/BF03015977. Allen, F. E. (1927). "Closure of the Tangential Process on the Rational Plane Cubic". American Journal of Mathematics. 49 (3): 456–461. doi:10.2307/2370676. JSTOR 2370676. Wisconsin Academy of Sciences, Arts, and Letters
Wikipedia:Florian Luca#0
Florian Luca (born 16 March 1969, in Galați) is a Romanian mathematician who specializes in number theory with emphasis on Diophantine equations, linear recurrences and the distribution of values of arithmetic functions. He has made notable contributions to the proof that irrational automatic numbers are transcendental and the proof of a conjecture of Erdős on the intersection of Euler's totient function and the sum-of-divisors function. Luca graduated with a BS in Mathematics from Alexandru Ioan Cuza University in Iași (1992), and earned a Ph.D. in Mathematics from the University of Alaska Fairbanks (1996). He has held various appointments at Syracuse University, Bielefeld University, the Czech Academy of Sciences, the National Autonomous University of Mexico and the University of the Witwatersrand. Currently he is a professor at Stellenbosch University. He has co-authored over 500 papers in mathematics with more than 200 co-authors. He is a recipient of a 2005 Guggenheim Fellowship (Natural Sciences, Latin America & Caribbean). Luca is an editor-in-chief of Research in Number Theory and INTEGERS: the Electronic Journal of Combinatorial Number Theory, and an editor of the Fibonacci Quarterly. == Selected works == with Boris Adamczewski, Yann Bugeaud: Sur la complexité des nombres algébriques, Comptes Rendus Mathématique. Académie des Sciences. Paris 339 (1), 11–14, 2004 with Kevin Ford, Carl Pomerance: Common values of the arithmetic functions Φ and σ, Bulletin of the London Mathematical Society 42 (3), 478–488, 2010 with Jean-Marie De Koninck: Analytic Number Theory: Exploring the Anatomy of Integers, American Mathematical Society, 2012 Diophantine Equations – Effective Methods for Diophantine Equations, 2009, Online pdf file == References == == External links == Florian Luca at the Mathematics Genealogy Project Florian Luca publications indexed by Google Scholar
Wikipedia:Fluent (mathematics)#0
A fluent is a time-varying quantity or variable. The term was used by Isaac Newton in his early calculus to describe his form of a function. The concept was introduced by Newton in 1665 and detailed in his mathematical treatise, Method of Fluxions. Newton described any variable that changed its value as a fluent – for example, the velocity of a ball thrown in the air. The derivative of a fluent is known as a fluxion, the main focus of Newton's calculus. A fluent can be found from its corresponding fluxion through integration. == See also == == References ==
Wikipedia:Fluxion#0
A fluxion is the instantaneous rate of change, or gradient, of a fluent (a time-varying quantity, or function) at a given point. Fluxions were introduced by Isaac Newton to describe his form of a time derivative (a derivative with respect to time). Newton introduced the concept in 1665 and detailed them in his mathematical treatise, Method of Fluxions. Fluxions and fluents made up Newton's early calculus. == History == Fluxions were central to the Leibniz–Newton calculus controversy: Newton sent a letter to Gottfried Wilhelm Leibniz explaining them, but concealed his method in cipher because of his distrust of Leibniz. He wrote: I cannot proceed with the explanations of the fluxions now, I have preferred to conceal it thus: 6accdæ13eff7i3l9n4o4qrr4s8t12vx. The gibberish string was in fact a hash code (recording the frequency of each letter) of the Latin phrase Data æqvatione qvotcvnqve flventes qvantitates involvente, flvxiones invenire: et vice versa, meaning: "Given an equation that consists of any number of flowing quantities, to find the fluxions: and vice versa". == Example == If the fluent ⁠ y {\displaystyle y} ⁠ is defined as y = t 2 {\displaystyle y=t^{2}} (where ⁠ t {\displaystyle t} ⁠ is time) the fluxion (derivative) at t = 2 {\displaystyle t=2} is: y ˙ = Δ y Δ t = ( 2 + o ) 2 − 2 2 ( 2 + o ) − 2 = 4 + 4 o + o 2 − 4 2 + o − 2 = 4 o + o 2 o {\displaystyle {\dot {y}}={\frac {\Delta y}{\Delta t}}={\frac {(2+o)^{2}-2^{2}}{(2+o)-2}}={\frac {4+4o+o^{2}-4}{2+o-2}}={\frac {4o+o^{2}}{o}}} Here ⁠ o {\displaystyle o} ⁠ is an infinitely small amount of time. The term ⁠ o 2 {\displaystyle o^{2}} ⁠ is an infinitesimal of second order and, according to Newton, may be ignored, because it is negligible compared with the first-order infinitesimal ⁠ o {\displaystyle o} ⁠.
So, the final equation takes the form: y ˙ = Δ y Δ t = 4 o o = 4 {\displaystyle {\dot {y}}={\frac {\Delta y}{\Delta t}}={\frac {4o}{o}}=4} He justified the use of ⁠ o {\displaystyle o} ⁠ as a non-zero quantity by stating that fluxions were a consequence of movement by an object. == Criticism == Bishop George Berkeley, a prominent philosopher of the time, denounced Newton's fluxions in his essay The Analyst, published in 1734. Berkeley refused to believe that they were accurate because of the use of the infinitesimal ⁠ o {\displaystyle o} ⁠. He did not believe it could be ignored and pointed out that if it were zero, the consequence would be division by zero. Berkeley referred to them as "ghosts of departed quantities", a statement which unnerved mathematicians of the time and led to the eventual disuse of infinitesimals in calculus. Towards the end of his life Newton revised his interpretation of ⁠ o {\displaystyle o} ⁠ as infinitely small, preferring to define it as approaching zero, using a definition similar to the modern concept of a limit. He believed this put fluxions back on safe ground. By this time, Leibniz's derivative (and his notation) had largely replaced Newton's fluxions and fluents, and remains in use today. == See also == History of calculus Newton's notation Hyperreal number: A modern formalization of the reals that includes infinity and infinitesimals Nonstandard analysis == References ==
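The example above can be mirrored numerically: for the fluent y = t², the Newton-style quotient ((2 + o)² − 2²)/o equals exactly 4 + o, which approaches the fluxion 4 as o shrinks. A minimal plain-Python sketch (with o as a small finite number standing in for the infinitesimal; the function name is illustrative):

```python
def fluxion_quotient(t, o):
    """Newton-style difference quotient for the fluent y = t^2."""
    return ((t + o) ** 2 - t ** 2) / o

# Each quotient is 4 + o (up to float rounding), tending to the fluxion 4.
for o in (0.1, 0.01, 0.001):
    print(fluxion_quotient(2.0, o))
```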
Wikipedia:Foias constant#0
In mathematical analysis, the Foias constant is a real number named after Ciprian Foias. It is defined in the following way: for every real number x1 > 0, there is a sequence defined by the recurrence relation x n + 1 = ( 1 + 1 x n ) n {\displaystyle x_{n+1}=\left(1+{\frac {1}{x_{n}}}\right)^{n}} for n = 1, 2, 3, .... The Foias constant is the unique choice α such that if x1 = α then the sequence diverges to infinity. For all other values of x1, the sequence also fails to converge, but instead has two accumulation points: 1 and infinity. Numerically, it is α = 1.187452351126501 … {\displaystyle \alpha =1.187452351126501\ldots } . No closed form for the constant is known. When x1 = α then the growth rate of the sequence (xn) is given by the limit lim n → ∞ x n log ⁡ n n = 1 , {\displaystyle \lim _{n\to \infty }x_{n}{\frac {\log n}{n}}=1,} where "log" denotes the natural logarithm. The same methods used in the proof of the uniqueness of the Foias constant may also be applied to other similar recursive sequences. == See also == Mathematical constant == Notes and references == S. R. Finch (2003). Mathematical Constants. Cambridge University Press. p. 430. ISBN 0-521-81805-2. Foias constant.
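The two behaviors described above can be observed numerically. A short sketch (plain Python; the starting value 2 and the iteration counts are illustrative choices, not from the article — and floating-point error eventually knocks the α-orbit off the divergent trajectory, so only moderately many steps are shown):

```python
def iterate(x1, steps):
    """Return [x_1, x_2, ..., x_{steps+1}] for x_{n+1} = (1 + 1/x_n)**n."""
    xs = [x1]
    for n in range(1, steps + 1):
        xs.append((1.0 + 1.0 / xs[-1]) ** n)
    return xs

alpha = 1.187452351126501

# Starting at the Foias constant: steady growth, roughly n / log n.
print(iterate(alpha, 20))

# Starting anywhere else, e.g. x1 = 2: the iterates oscillate between
# values near 1 and very large values (accumulation points 1 and infinity).
osc = iterate(2.0, 40)
print(min(osc[15:]), max(osc[15:]))
```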
Wikipedia:Force chain#0
In the study of the physics of granular materials, a force chain consists of a set of particles within a compressed granular material that are held together and jammed into place by a network of mutual compressive forces. Between these chains are regions of low stress, whose grains are shielded from the effects of the grains above by vaulting and arching. A set of interconnected force chains is known as a force network. Force networks visualise inter-particle forces, which is particularly informative for spherical particle systems. For non-spherical particle systems force chain networks benefit from being supplemented by traction chain networks. Traction chains visualise inter-particle tractions, which give additional insight into inter-particle contact not captured by force chains, in particular, the role of the contact area over which inter-particle forces act. Force networks are an emergent phenomenon, created by the complex interaction of the individual grains of material and the patterns of pressure applied within the material. Force chains can be shown to have fractal properties. Force chains have been investigated both experimentally, through the construction of specially instrumented physical models, and through computer simulation. == References == == External links == Force Chains and Distributions in Bead Packs
Wikipedia:Formal derivative#0
In mathematics, the formal derivative is an operation on elements of a polynomial ring or a ring of formal power series that mimics the form of the derivative from calculus. Though they appear similar, the algebraic advantage of a formal derivative is that it does not rely on the notion of a limit, which is in general impossible to define for a ring. Many of the properties of the derivative are true of the formal derivative, but some, especially those that make numerical statements, are not. Formal differentiation is used in algebra to test for multiple roots of a polynomial. == Definition == Fix a ring R {\displaystyle R} (not necessarily commutative) and let A = R [ x ] {\displaystyle A=R[x]} be the ring of polynomials over R {\displaystyle R} . (If R {\displaystyle R} is not commutative, this is the free algebra over a single indeterminate variable.) Then the formal derivative is an operation on elements of A {\displaystyle A} , where if f ( x ) = a n x n + ⋯ + a 1 x + a 0 , {\displaystyle f(x)\,=\,a_{n}x^{n}+\cdots +a_{1}x+a_{0},} then its formal derivative is f ′ ( x ) = D f ( x ) = n a n x n − 1 + ⋯ + i a i x i − 1 + ⋯ + a 1 . {\displaystyle f'(x)\,=\,Df(x)=na_{n}x^{n-1}+\cdots +ia_{i}x^{i-1}+\cdots +a_{1}.} In the above definition, for any nonnegative integer i {\displaystyle i} and r ∈ R {\displaystyle r\in R} , i r {\displaystyle ir} is defined as usual in a ring: i r = r + r + ⋯ + r ⏟ i times {\displaystyle ir=\underbrace {r+r+\cdots +r} _{i{\text{ times}}}} (with i r = 0 {\displaystyle ir=0} if i = 0 {\displaystyle i=0} ). This definition also works even if R {\displaystyle R} does not have a multiplicative identity (that is, R {\displaystyle R} is a rng). === Alternative axiomatic definition === One may also define the formal derivative axiomatically as the map ( ∗ ) ′ : R [ x ] → R [ x ] {\displaystyle (\ast )^{\prime }\colon R[x]\to R[x]} satisfying the following properties. r ′ = 0 {\displaystyle r'=0} for all r ∈ R ⊂ R [ x ] . 
{\displaystyle r\in R\subset R[x].} The normalization axiom, x ′ = 1. {\displaystyle x'=1.} The map commutes with the addition operation in the polynomial ring, ( a + b ) ′ = a ′ + b ′ . {\displaystyle (a+b)'=a'+b'.} The map satisfies Leibniz's law with respect to the polynomial ring's multiplication operation, ( a ⋅ b ) ′ = a ′ ⋅ b + a ⋅ b ′ . {\displaystyle (a\cdot b)'=a'\cdot b+a\cdot b'.} One may prove that this axiomatic definition yields a well-defined map respecting all of the usual ring axioms. The formula above (i.e. the definition of the formal derivative when the coefficient ring is commutative) is a direct consequence of the aforementioned axioms: ( ∑ i a i x i ) ′ = ∑ i ( a i x i ) ′ = ∑ i ( ( a i ) ′ x i + a i ( x i ) ′ ) = ∑ i ( 0 x i + a i ( ∑ j = 1 i x j − 1 ( x ′ ) x i − j ) ) = ∑ i ∑ j = 1 i a i x i − 1 = ∑ i i a i x i − 1 . {\displaystyle {\begin{aligned}\left(\sum _{i}a_{i}x^{i}\right)'&=\sum _{i}\left(a_{i}x^{i}\right)'\\&=\sum _{i}\left((a_{i})'x^{i}+a_{i}\left(x^{i}\right)'\right)\\&=\sum _{i}\left(0x^{i}+a_{i}\left(\sum _{j=1}^{i}x^{j-1}(x')x^{i-j}\right)\right)\\&=\sum _{i}\sum _{j=1}^{i}a_{i}x^{i-1}\\&=\sum _{i}ia_{i}x^{i-1}.\end{aligned}}} == Properties == It can be verified that: Formal differentiation is linear: for any two polynomials f(x),g(x) in R[x] and elements r,s of R we have ( r ⋅ f + s ⋅ g ) ′ ( x ) = r ⋅ f ′ ( x ) + s ⋅ g ′ ( x ) . {\displaystyle (r\cdot f+s\cdot g)'(x)=r\cdot f'(x)+s\cdot g'(x).} The formal derivative satisfies the product rule: ( f ⋅ g ) ′ ( x ) = f ′ ( x ) ⋅ g ( x ) + f ( x ) ⋅ g ′ ( x ) . {\displaystyle (f\cdot g)'(x)=f'(x)\cdot g(x)+f(x)\cdot g'(x).} Note the order of the factors; when R is not commutative this is important. These two properties make D a derivation on A (see module of relative differential forms for a discussion of a generalization). 
Note that the formal derivative is not a ring homomorphism, because the product rule is different from saying (and it is not the case) that ( f ⋅ g ) ′ = f ′ ⋅ g ′ {\displaystyle (f\cdot g)'=f'\cdot g'} . However, it is a homomorphism (linear map) of R-modules, by the above rules. == Application to finding repeated factors == As in calculus, the derivative detects multiple roots. If R is a field then R[x] is a Euclidean domain, and in this situation we can define multiplicity of roots; for every polynomial f(x) in R[x] and every element r of R, there exists a nonnegative integer mr and a polynomial g(x) such that f ( x ) = ( x − r ) m r g ( x ) {\displaystyle f(x)=(x-r)^{m_{r}}g(x)} where g(r) ≠ 0. mr is the multiplicity of r as a root of f. It follows from the Leibniz rule that in this situation, mr is also the number of differentiations that must be performed on f(x) before r is no longer a root of the resulting polynomial. The utility of this observation is that although in general not every polynomial of degree n in R[x] has n roots counting multiplicity (this is the maximum, by the above theorem), we may pass to field extensions in which this is true (namely, algebraic closures). Once we do, we may uncover a multiple root that was not a root at all simply over R. For example, if R is the finite field with three elements, the polynomial f ( x ) = x 6 + 1 {\displaystyle f(x)\,=\,x^{6}+1} has no roots in R; however, its formal derivative ( f ′ ( x ) = 6 x 5 {\displaystyle f'(x)\,=\,6x^{5}} ) is zero since 3 = 0 in R and in any extension of R, so when we pass to the algebraic closure it has a multiple root that could not have been detected by factorization in R itself. Thus, formal differentiation allows an effective notion of multiplicity. This is important in Galois theory, where the distinction is made between separable field extensions (defined by polynomials with no multiple roots) and inseparable ones. 
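The x⁶ + 1 example over the field with three elements can be checked directly. In the sketch below (an illustrative implementation; the helper names `polymul` and `formal_derivative` are not from the article), a polynomial over Z/pZ is a list of coefficients indexed by degree:

```python
def polymul(a, b, p):
    """Multiply coefficient lists (index = degree), reducing coefficients mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def formal_derivative(coeffs, p):
    """Formal derivative over Z/pZ: send a_i x^i to (i * a_i) x^(i-1)."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]

# f(x) = x^6 + 1 over F_3
f = [1, 0, 0, 0, 0, 0, 1]
print(formal_derivative(f, 3))   # identically zero: 6 = 0 in F_3

# The vanishing derivative signals repeated factors: f = (x^2 + 1)^3 in F_3[x],
# since (x^2 + 1)^3 = x^6 + 3x^4 + 3x^2 + 1 = x^6 + 1 mod 3.
g = [1, 0, 1]                    # x^2 + 1
cube = polymul(polymul(g, g, 3), g, 3)
print(cube)                      # [1, 0, 0, 0, 0, 0, 1]
```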
== Correspondence to analytic derivative == When the ring R of scalars is commutative, there is an alternative and equivalent definition of the formal derivative, which resembles the one seen in differential calculus. The element Y – X of the ring R[X, Y] divides Yn – Xn for any nonnegative integer n, and therefore divides f(Y) – f(X) for any polynomial f in one indeterminate. If the quotient in R[X, Y] is denoted by g, then g ( X , Y ) = f ( Y ) − f ( X ) Y − X . {\displaystyle g(X,Y)={\frac {f(Y)-f(X)}{Y-X}}.} It is then not hard to verify that g(X, X) (in R[X]) coincides with the formal derivative of f as it was defined above. This formulation of the derivative works equally well for a formal power series, as long as the ring of coefficients is commutative. Actually, if the division in this definition is carried out in the class of functions of Y continuous at X, it will recapture the classical definition of the derivative. If it is carried out in the class of functions continuous in both X and Y, we get uniform differentiability, and the function f will be continuously differentiable. Likewise, by choosing different classes of functions (say, the Lipschitz class), we get different flavors of differentiability. In this way, differentiation becomes a part of algebra of functions. == See also == Derivative Euclidean domain Module of relative differential forms Galois theory Formal power series Pincherle derivative == References == == Sources == Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556, Zbl 0984.00001 Michael Livshits, You could simplify calculus, arXiv:0905.3611v1
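The identity g(X, X) = f′(X) from the last section can be verified computationally. The sketch below (illustrative helper names, not from the article) uses the expansion (Yⁿ − Xⁿ)/(Y − X) = Xⁿ⁻¹ + Xⁿ⁻²Y + ⋯ + Yⁿ⁻¹ to evaluate the quotient polynomial exactly, with integer arithmetic:

```python
def f_eval(coeffs, x):
    """Evaluate a polynomial given by its coefficient list (index = degree)."""
    return sum(a * x ** n for n, a in enumerate(coeffs))

def quotient_eval(coeffs, x, y):
    """Evaluate g(X, Y) = sum_n a_n (X^(n-1) + X^(n-2) Y + ... + Y^(n-1)),
    the exact polynomial quotient (f(Y) - f(X)) / (Y - X)."""
    return sum(a * sum(x ** j * y ** (n - 1 - j) for j in range(n))
               for n, a in enumerate(coeffs) if n >= 1)

f = [1, 0, -2, 5]            # f(X) = 5X^3 - 2X^2 + 1
x, y = 3, 7
# Off the diagonal, g agrees with the difference quotient:
print(quotient_eval(f, x, y))    # 375 = (f(7) - f(3)) / (7 - 3)
# On the diagonal Y = X, g gives the formal derivative 15X^2 - 4X:
print(quotient_eval(f, x, x))    # 123 = 15*3**2 - 4*3
```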
Wikipedia:Formal power series#0
In mathematics, a formal series is an infinite sum that is considered independently from any notion of convergence, and can be manipulated with the usual algebraic operations on series (addition, subtraction, multiplication, division, partial sums, etc.). A formal power series is a special kind of formal series, of the form ∑ n = 0 ∞ a n x n = a 0 + a 1 x + a 2 x 2 + ⋯ , {\displaystyle \sum _{n=0}^{\infty }a_{n}x^{n}=a_{0}+a_{1}x+a_{2}x^{2}+\cdots ,} where the a n , {\displaystyle a_{n},} called coefficients, are numbers or, more generally, elements of some ring, and the x n {\displaystyle x^{n}} are formal powers of the symbol x {\displaystyle x} that is called an indeterminate or, commonly, a variable. Hence, power series can be viewed as a generalization of polynomials where the number of terms is allowed to be infinite, and differ from usual power series by the absence of convergence requirements, which implies that a power series may not represent a function of its variable. Formal power series are in one-to-one correspondence with their sequences of coefficients, but the two concepts must not be confused, since the operations that can be applied are different. A formal power series with coefficients in a ring R {\displaystyle R} is called a formal power series over R . {\displaystyle R.} The formal power series over a ring R {\displaystyle R} form a ring, commonly denoted by R [ [ x ] ] . {\displaystyle R[[x]].} (It can be seen as the (x)-adic completion of the polynomial ring R [ x ] , {\displaystyle R[x],} in the same way as the p-adic integers are the p-adic completion of the ring of the integers.) Formal power series in several indeterminates are defined similarly by replacing the powers of a single indeterminate by monomials in several indeterminates. Formal power series are widely used in combinatorics for representing sequences of integers as generating functions.
In this context, a recurrence relation between the elements of a sequence may often be interpreted as a differential equation that the generating function satisfies. This allows using methods of complex analysis for combinatorial problems (see analytic combinatorics). == Introduction == A formal power series can be loosely thought of as an object that is like a polynomial, but with infinitely many terms. Alternatively, for those familiar with power series (or Taylor series), one may think of a formal power series as a power series in which we ignore questions of convergence by not assuming that the variable X denotes any numerical value (not even an unknown value). For example, consider the series A = 1 − 3 X + 5 X 2 − 7 X 3 + 9 X 4 − 11 X 5 + ⋯ . {\displaystyle A=1-3X+5X^{2}-7X^{3}+9X^{4}-11X^{5}+\cdots .} If we studied this as a power series, its properties would include, for example, that its radius of convergence is 1 by the Cauchy–Hadamard theorem. However, as a formal power series, we may ignore this completely; all that is relevant is the sequence of coefficients [1, −3, 5, −7, 9, −11, ...]. In other words, a formal power series is an object that just records a sequence of coefficients. It is perfectly acceptable to consider a formal power series with the factorials [1, 1, 2, 6, 24, 120, 720, 5040, ... ] as coefficients, even though the corresponding power series diverges for any nonzero value of X. Algebra on formal power series is carried out by simply pretending that the series are polynomials. For example, if B = 2 X + 4 X 3 + 6 X 5 + ⋯ , {\displaystyle B=2X+4X^{3}+6X^{5}+\cdots ,} then we add A and B term by term: A + B = 1 − X + 5 X 2 − 3 X 3 + 9 X 4 − 5 X 5 + ⋯ . {\displaystyle A+B=1-X+5X^{2}-3X^{3}+9X^{4}-5X^{5}+\cdots .} We can multiply formal power series, again just by treating them as polynomials (see in particular Cauchy product): A B = 2 X − 6 X 2 + 14 X 3 − 26 X 4 + 44 X 5 + ⋯ . 
{\displaystyle AB=2X-6X^{2}+14X^{3}-26X^{4}+44X^{5}+\cdots .} Notice that each coefficient in the product AB only depends on a finite number of coefficients of A and B. For example, the X5 term is given by 44 X 5 = ( 1 × 6 X 5 ) + ( 5 X 2 × 4 X 3 ) + ( 9 X 4 × 2 X ) . {\displaystyle 44X^{5}=(1\times 6X^{5})+(5X^{2}\times 4X^{3})+(9X^{4}\times 2X).} For this reason, one may multiply formal power series without worrying about the usual questions of absolute, conditional and uniform convergence which arise in dealing with power series in the setting of analysis. Once we have defined multiplication for formal power series, we can define multiplicative inverses as follows. The multiplicative inverse of a formal power series A is a formal power series C such that AC = 1, provided that such a formal power series exists. It turns out that if A has a multiplicative inverse, it is unique, and we denote it by A−1. Now we can define division of formal power series by defining B/A to be the product BA−1, provided that the inverse of A exists. For example, one can use the definition of multiplication above to verify the familiar formula 1 1 + X = 1 − X + X 2 − X 3 + X 4 − X 5 + ⋯ . {\displaystyle {\frac {1}{1+X}}=1-X+X^{2}-X^{3}+X^{4}-X^{5}+\cdots .} An important operation on formal power series is coefficient extraction. In its most basic form, the coefficient extraction operator [ X n ] {\displaystyle [X^{n}]} applied to a formal power series A {\displaystyle A} in one variable extracts the coefficient of the n {\displaystyle n} th power of the variable, so that [ X 2 ] A = 5 {\displaystyle [X^{2}]A=5} and [ X 5 ] A = − 11 {\displaystyle [X^{5}]A=-11} . Other examples include [ X 3 ] ( B ) = 4 , [ X 2 ] ( X + 3 X 2 Y 3 + 10 Y 6 ) = 3 Y 3 , [ X 2 Y 3 ] ( X + 3 X 2 Y 3 + 10 Y 6 ) = 3 , [ X n ] ( 1 1 + X ) = ( − 1 ) n , [ X n ] ( X ( 1 − X ) 2 ) = n . 
{\displaystyle {\begin{aligned}\left[X^{3}\right](B)&=4,\\\left[X^{2}\right](X+3X^{2}Y^{3}+10Y^{6})&=3Y^{3},\\\left[X^{2}Y^{3}\right](X+3X^{2}Y^{3}+10Y^{6})&=3,\\\left[X^{n}\right]\left({\frac {1}{1+X}}\right)&=(-1)^{n},\\\left[X^{n}\right]\left({\frac {X}{(1-X)^{2}}}\right)&=n.\end{aligned}}} Similarly, many other operations that are carried out on polynomials can be extended to the formal power series setting, as explained below. == The ring of formal power series == If one considers the set of all formal power series in X with coefficients in a commutative ring R, the elements of this set collectively constitute another ring which is written R [ [ X ] ] , {\displaystyle R[[X]],} and called the ring of formal power series in the variable X over R. === Definition of the formal power series ring === One can characterize R [ [ X ] ] {\displaystyle R[[X]]} abstractly as the completion of the polynomial ring R [ X ] {\displaystyle R[X]} equipped with a particular metric. This automatically gives R [ [ X ] ] {\displaystyle R[[X]]} the structure of a topological ring (and even of a complete metric space). But the general construction of a completion of a metric space is more involved than what is needed here, and would make formal power series seem more complicated than they are. It is possible to describe R [ [ X ] ] {\displaystyle R[[X]]} more explicitly, and define the ring structure and topological structure separately, as follows. ==== Ring structure ==== As a set, R [ [ X ] ] {\displaystyle R[[X]]} can be constructed as the set R N {\displaystyle R^{\mathbb {N} }} of all infinite sequences of elements of R {\displaystyle R} , indexed by the natural numbers (taken to include 0). 
Designating a sequence whose term at index n {\displaystyle n} is a n {\displaystyle a_{n}} by ( a n ) {\displaystyle (a_{n})} , one defines addition of two such sequences by ( a n ) n ∈ N + ( b n ) n ∈ N = ( a n + b n ) n ∈ N {\displaystyle (a_{n})_{n\in \mathbb {N} }+(b_{n})_{n\in \mathbb {N} }=\left(a_{n}+b_{n}\right)_{n\in \mathbb {N} }} and multiplication by ( a n ) n ∈ N × ( b n ) n ∈ N = ( ∑ k = 0 n a k b n − k ) n ∈ N . {\displaystyle (a_{n})_{n\in \mathbb {N} }\times (b_{n})_{n\in \mathbb {N} }=\left(\sum _{k=0}^{n}a_{k}b_{n-k}\right)_{\!n\in \mathbb {N} }.} This type of product is called the Cauchy product of the two sequences of coefficients, and is a sort of discrete convolution. With these operations, R N {\displaystyle R^{\mathbb {N} }} becomes a commutative ring with zero element ( 0 , 0 , 0 , … ) {\displaystyle (0,0,0,\ldots )} and multiplicative identity ( 1 , 0 , 0 , … ) {\displaystyle (1,0,0,\ldots )} . The product is in fact the same one used to define the product of polynomials in one indeterminate, which suggests using a similar notation. One embeds R {\displaystyle R} into R [ [ X ] ] {\displaystyle R[[X]]} by sending any (constant) a ∈ R {\displaystyle a\in R} to the sequence ( a , 0 , 0 , … ) {\displaystyle (a,0,0,\ldots )} and designates the sequence ( 0 , 1 , 0 , 0 , … ) {\displaystyle (0,1,0,0,\ldots )} by X {\displaystyle X} ; then using the above definitions every sequence with only finitely many nonzero terms can be expressed in terms of these special elements as ( a 0 , a 1 , a 2 , … , a n , 0 , 0 , … ) = a 0 + a 1 X + ⋯ + a n X n = ∑ i = 0 n a i X i ; {\displaystyle (a_{0},a_{1},a_{2},\ldots ,a_{n},0,0,\ldots )=a_{0}+a_{1}X+\cdots +a_{n}X^{n}=\sum _{i=0}^{n}a_{i}X^{i};} these are precisely the polynomials in X {\displaystyle X} . 
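On truncated coefficient sequences, the addition and Cauchy-product rules just defined are a few lines of code. A minimal sketch (illustrative helper names) that reproduces the sums and products of A = 1 − 3X + 5X² − ⋯ and B = 2X + 4X³ + 6X⁵ + ⋯ from the introduction:

```python
def ps_add(a, b):
    """Termwise sum of two truncated coefficient sequences."""
    return [x + y for x, y in zip(a, b)]

def ps_mul(a, b):
    """Cauchy product, truncated to the shorter input length; the coefficient
    of X^i depends only on a_0..a_i and b_0..b_i, so truncation is exact."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(n)]

A = [1, -3, 5, -7, 9, -11]   # 1 - 3X + 5X^2 - 7X^3 + 9X^4 - 11X^5 + ...
B = [0, 2, 0, 4, 0, 6]       # 2X + 4X^3 + 6X^5 + ...
print(ps_add(A, B))          # [1, -1, 5, -3, 9, -5]
print(ps_mul(A, B))          # [0, 2, -6, 14, -26, 44]
```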
Given this, it is quite natural and convenient to designate a general sequence ( a n ) n ∈ N {\displaystyle (a_{n})_{n\in \mathbb {N} }} by the formal expression ∑ i ∈ N a i X i {\displaystyle \textstyle \sum _{i\in \mathbb {N} }a_{i}X^{i}} , even though the latter is not an expression formed by the operations of addition and multiplication defined above (from which only finite sums can be constructed). This notational convention allows reformulation of the above definitions as ( ∑ i ∈ N a i X i ) + ( ∑ i ∈ N b i X i ) = ∑ i ∈ N ( a i + b i ) X i {\displaystyle \left(\sum _{i\in \mathbb {N} }a_{i}X^{i}\right)+\left(\sum _{i\in \mathbb {N} }b_{i}X^{i}\right)=\sum _{i\in \mathbb {N} }(a_{i}+b_{i})X^{i}} and ( ∑ i ∈ N a i X i ) × ( ∑ i ∈ N b i X i ) = ∑ n ∈ N ( ∑ k = 0 n a k b n − k ) X n . {\displaystyle \left(\sum _{i\in \mathbb {N} }a_{i}X^{i}\right)\times \left(\sum _{i\in \mathbb {N} }b_{i}X^{i}\right)=\sum _{n\in \mathbb {N} }\left(\sum _{k=0}^{n}a_{k}b_{n-k}\right)X^{n}.} which is quite convenient, but one must be aware of the distinction between formal summation (a mere convention) and actual addition. ==== Topological structure ==== Having stipulated conventionally that ( a n ) n ∈ N = ∑ i ∈ N a i X i ( 1 ) {\displaystyle (a_{n})_{n\in \mathbb {N} }=\sum _{i\in \mathbb {N} }a_{i}X^{i}\qquad (1)} one would like to interpret the right hand side as a well-defined infinite summation. To that end, a notion of convergence in R N {\displaystyle R^{\mathbb {N} }} is defined and a topology on R N {\displaystyle R^{\mathbb {N} }} is constructed. There are several equivalent ways to define the desired topology. We may give R N {\displaystyle R^{\mathbb {N} }} the product topology, where each copy of R {\displaystyle R} is given the discrete topology. We may give R N {\displaystyle R^{\mathbb {N} }} the I-adic topology, where I = ( X ) {\displaystyle I=(X)} is the ideal generated by X {\displaystyle X} , which consists of all sequences whose first term a 0 {\displaystyle a_{0}} is zero. The desired topology could also be derived from the following metric.
The distance between distinct sequences ( a n ) , ( b n ) ∈ R N , {\displaystyle (a_{n}),(b_{n})\in R^{\mathbb {N} },} is defined to be d ( ( a n ) , ( b n ) ) = 2 − k , {\displaystyle d((a_{n}),(b_{n}))=2^{-k},} where k {\displaystyle k} is the smallest natural number such that a k ≠ b k {\displaystyle a_{k}\neq b_{k}} ; the distance between two equal sequences is of course zero. Informally, two sequences ( a n ) {\displaystyle (a_{n})} and ( b n ) {\displaystyle (b_{n})} become closer and closer if and only if more and more of their terms agree exactly. Formally, the sequence of partial sums of some infinite summation converges if for every fixed power of X {\displaystyle X} the coefficient stabilizes: there is a point beyond which all further partial sums have the same coefficient. This is clearly the case for the right hand side of (1), regardless of the values a n {\displaystyle a_{n}} , since inclusion of the term for i = n {\displaystyle i=n} gives the last (and in fact only) change to the coefficient of X n {\displaystyle X^{n}} . It is also obvious that the limit of the sequence of partial sums is equal to the left hand side. This topological structure, together with the ring operations described above, form a topological ring. This is called the ring of formal power series over R {\displaystyle R} and is denoted by R [ [ X ] ] {\displaystyle R[[X]]} . The topology has the useful property that an infinite summation converges if and only if the sequence of its terms converges to 0, which just means that any fixed power of X {\displaystyle X} occurs in only finitely many terms. The topological structure allows much more flexible usage of infinite summations. 
For instance the rule for multiplication can be restated simply as ( ∑ i ∈ N a i X i ) × ( ∑ i ∈ N b i X i ) = ∑ i , j ∈ N a i b j X i + j , {\displaystyle \left(\sum _{i\in \mathbb {N} }a_{i}X^{i}\right)\times \left(\sum _{i\in \mathbb {N} }b_{i}X^{i}\right)=\sum _{i,j\in \mathbb {N} }a_{i}b_{j}X^{i+j},} since only finitely many terms on the right affect any fixed X n {\displaystyle X^{n}} . Infinite products are also defined by the topological structure; it can be seen that an infinite product converges if and only if the sequence of its factors converges to 1 (in which case the product is nonzero) or infinitely many factors have no constant term (in which case the product is zero). ==== Alternative topologies ==== The above topology is the finest topology for which ∑ i = 0 ∞ a i X i {\displaystyle \sum _{i=0}^{\infty }a_{i}X^{i}} always converges as a summation to the formal power series designated by the same expression, and it often suffices to give a meaning to infinite sums and products, or other kinds of limits that one wishes to use to designate particular formal power series. It can however happen occasionally that one wishes to use a coarser topology, so that certain expressions become convergent that would otherwise diverge. This applies in particular when the base ring R {\displaystyle R} already comes with a topology other than the discrete one, for instance if it is also a ring of formal power series. In the ring of formal power series Z [ [ X ] ] [ [ Y ] ] {\displaystyle \mathbb {Z} [[X]][[Y]]} , the topology of the above construction only relates to the indeterminate Y {\displaystyle Y} , since the topology that was put on Z [ [ X ] ] {\displaystyle \mathbb {Z} [[X]]} has been replaced by the discrete topology when defining the topology of the whole ring.
So ∑ i = 0 ∞ X Y i {\displaystyle \sum _{i=0}^{\infty }XY^{i}} converges (and its sum can be written as X 1 − Y {\displaystyle {\tfrac {X}{1-Y}}} ); however ∑ i = 0 ∞ X i Y {\displaystyle \sum _{i=0}^{\infty }X^{i}Y} would be considered to be divergent, since every term affects the coefficient of Y {\displaystyle Y} . This asymmetry disappears if the power series ring in Y {\displaystyle Y} is given the product topology where each copy of Z [ [ X ] ] {\displaystyle \mathbb {Z} [[X]]} is given its topology as a ring of formal power series rather than the discrete topology. With this topology, a sequence of elements of Z [ [ X ] ] [ [ Y ] ] {\displaystyle \mathbb {Z} [[X]][[Y]]} converges if the coefficient of each power of Y {\displaystyle Y} converges to a formal power series in X {\displaystyle X} , a weaker condition than stabilizing entirely. For instance, with this topology, in the second example given above, the coefficient of Y {\displaystyle Y} converges to 1 1 − X {\displaystyle {\tfrac {1}{1-X}}} , so the whole summation converges to Y 1 − X {\displaystyle {\tfrac {Y}{1-X}}} . This way of defining the topology is in fact the standard one for repeated constructions of rings of formal power series, and gives the same topology as one would get by taking formal power series in all indeterminates at once. In the above example that would mean constructing Z [ [ X , Y ] ] {\displaystyle \mathbb {Z} [[X,Y]]} and here a sequence converges if and only if the coefficient of every monomial X i Y j {\displaystyle X^{i}Y^{j}} stabilizes. This topology, which is also the I {\displaystyle I} -adic topology, where I = ( X , Y ) {\displaystyle I=(X,Y)} is the ideal generated by X {\displaystyle X} and Y {\displaystyle Y} , still enjoys the property that a summation converges if and only if its terms tend to 0. The same principle could be used to make other divergent limits converge. 
For instance in R [ [ X ] ] {\displaystyle \mathbb {R} [[X]]} the limit lim n → ∞ ( 1 + X n ) n {\displaystyle \lim _{n\to \infty }\left(1+{\frac {X}{n}}\right)^{\!n}} does not exist, so in particular it does not converge to exp ⁡ ( X ) = ∑ n ∈ N X n n ! . {\displaystyle \exp(X)=\sum _{n\in \mathbb {N} }{\frac {X^{n}}{n!}}.} This is because for i ≥ 2 {\displaystyle i\geq 2} the coefficient ( n i ) / n i {\displaystyle {\tbinom {n}{i}}/n^{i}} of X i {\displaystyle X^{i}} does not stabilize as n → ∞ {\displaystyle n\to \infty } . It does however converge in the usual topology of R {\displaystyle \mathbb {R} } , and in fact to the coefficient 1 i ! {\displaystyle {\tfrac {1}{i!}}} of exp ⁡ ( X ) {\displaystyle \exp(X)} . Therefore, if one would give R [ [ X ] ] {\displaystyle \mathbb {R} [[X]]} the product topology of R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} where the topology of R {\displaystyle \mathbb {R} } is the usual topology rather than the discrete one, then the above limit would converge to exp ⁡ ( X ) {\displaystyle \exp(X)} . This more permissive approach is not however the standard when considering formal power series, as it would lead to convergence considerations that are as subtle as they are in analysis, while the philosophy of formal power series is on the contrary to make convergence questions as trivial as they can possibly be. With this topology it would not be the case that a summation converges if and only if its terms tend to 0. === Universal property === The ring R [ [ X ] ] {\displaystyle R[[X]]} may be characterized by the following universal property. 
If S {\displaystyle S} is a commutative associative algebra over R {\displaystyle R} , if I {\displaystyle I} is an ideal of S {\displaystyle S} such that the I {\displaystyle I} -adic topology on S {\displaystyle S} is complete, and if x {\displaystyle x} is an element of I {\displaystyle I} , then there is a unique Φ : R [ [ X ] ] → S {\displaystyle \Phi :R[[X]]\to S} with the following properties: Φ {\displaystyle \Phi } is an R {\displaystyle R} -algebra homomorphism Φ {\displaystyle \Phi } is continuous Φ ( X ) = x {\displaystyle \Phi (X)=x} . == Operations on formal power series == One can perform algebraic operations on power series to generate new power series. === Power series raised to powers === For any natural number n, the nth power of a formal power series S is defined recursively by S 1 = S S n = S ⋅ S n − 1 for n > 1. {\displaystyle {\begin{aligned}S^{1}&=S\\S^{n}&=S\cdot S^{n-1}\quad {\text{for }}n>1.\end{aligned}}} If a0 is invertible in the ring of coefficients, one can prove that in the expansion ( ∑ k = 0 ∞ a k X k ) n = ∑ m = 0 ∞ c m X m , {\displaystyle {\Big (}\sum _{k=0}^{\infty }a_{k}X^{k}{\Big )}^{n}=\sum _{m=0}^{\infty }c_{m}X^{m},} the coefficients are given by c 0 = a 0 n {\displaystyle c_{0}=a_{0}^{n}} and c m = 1 m a 0 ∑ k = 1 m ( k n − m + k ) a k c m − k {\displaystyle c_{m}={\frac {1}{ma_{0}}}\sum _{k=1}^{m}(kn-m+k)a_{k}c_{m-k}} for m ≥ 1 {\displaystyle m\geq 1} if m is invertible in the ring of coefficients. In the case of formal power series with complex coefficients, its complex powers are well defined for series f with constant term equal to 1. 
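The recursion above for the coefficients c_m can be carried out directly whenever m and a_0 are invertible, for example over the rationals. A minimal Python sketch (function and variable names are illustrative):

```python
from fractions import Fraction

def series_power(a, n, trunc):
    """Coefficients of (sum a_k X^k)**n up to degree trunc-1, via the recursion
    c_0 = a_0**n,  c_m = (1/(m*a_0)) * sum_{k=1}^{m} (k*n - m + k) * a_k * c_{m-k}.
    Assumes a[0] is nonzero (invertible in the coefficient ring)."""
    a = [Fraction(x) for x in a] + [Fraction(0)] * trunc  # pad with zeros
    c = [a[0] ** n]
    for m in range(1, trunc):
        s = sum((k * n - m + k) * a[k] * c[m - k] for k in range(1, m + 1))
        c.append(s / (m * a[0]))
    return c

# Sanity check against the binomial theorem: (1 + X)**3 = 1 + 3X + 3X^2 + X^3
print([int(c) for c in series_power([1, 1], 3, 5)])  # [1, 3, 3, 1, 0]
```

For a natural number n this agrees with the recursive definition S·S^(n−1) by repeated multiplication, but it needs only O(trunc²) coefficient operations.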
In this case, f α {\displaystyle f^{\alpha }} can be defined either by composition with the binomial series (1 + x)α, or by composition with the exponential and the logarithmic series, f α = exp ⁡ ( α log ⁡ ( f ) ) , {\displaystyle f^{\alpha }=\exp(\alpha \log(f)),} or as the solution of the differential equation (in terms of series) f ( f α ) ′ = α f α f ′ {\displaystyle f(f^{\alpha })'=\alpha f^{\alpha }f'} with constant term 1; the three definitions are equivalent. The exponent rules ( f α ) β = f α β {\displaystyle (f^{\alpha })^{\beta }=f^{\alpha \beta }} and f α g α = ( f g ) α {\displaystyle f^{\alpha }g^{\alpha }=(fg)^{\alpha }} easily follow for formal power series f, g. === Multiplicative inverse === The series A = ∑ n = 0 ∞ a n X n ∈ R [ [ X ] ] {\displaystyle A=\sum _{n=0}^{\infty }a_{n}X^{n}\in R[[X]]} is invertible in R [ [ X ] ] {\displaystyle R[[X]]} if and only if its constant coefficient a 0 {\displaystyle a_{0}} is invertible in R {\displaystyle R} . This condition is necessary, for the following reason: if we suppose that A {\displaystyle A} has an inverse B = b 0 + b 1 x + ⋯ {\displaystyle B=b_{0}+b_{1}x+\cdots } then the constant term a 0 b 0 {\displaystyle a_{0}b_{0}} of A ⋅ B {\displaystyle A\cdot B} is the constant term of the identity series, i.e. it is 1. This condition is also sufficient; we may compute the coefficients of the inverse series B {\displaystyle B} via the explicit recursive formula b 0 = 1 a 0 , b n = − 1 a 0 ∑ i = 1 n a i b n − i , n ≥ 1. {\displaystyle {\begin{aligned}b_{0}&={\frac {1}{a_{0}}},\\b_{n}&=-{\frac {1}{a_{0}}}\sum _{i=1}^{n}a_{i}b_{n-i},\ \ \ n\geq 1.\end{aligned}}} An important special case is that the geometric series formula is valid in R [ [ X ] ] {\displaystyle R[[X]]} : ( 1 − X ) − 1 = ∑ n = 0 ∞ X n . {\displaystyle (1-X)^{-1}=\sum _{n=0}^{\infty }X^{n}.} If R = K {\displaystyle R=K} is a field, then a series is invertible if and only if the constant term is non-zero, i.e. 
if and only if the series is not divisible by X {\displaystyle X} . This means that K [ [ X ] ] {\displaystyle K[[X]]} is a discrete valuation ring with uniformizing parameter X {\displaystyle X} . === Division === The computation of a quotient f / g = h {\displaystyle f/g=h} ∑ n = 0 ∞ b n X n ∑ n = 0 ∞ a n X n = ∑ n = 0 ∞ c n X n , {\displaystyle {\frac {\sum _{n=0}^{\infty }b_{n}X^{n}}{\sum _{n=0}^{\infty }a_{n}X^{n}}}=\sum _{n=0}^{\infty }c_{n}X^{n},} assuming the denominator is invertible (that is, a 0 {\displaystyle a_{0}} is invertible in the ring of scalars), can be performed as a product of f {\displaystyle f} and the inverse of g {\displaystyle g} , or by directly equating the coefficients in f = g h {\displaystyle f=gh} : c n = 1 a 0 ( b n − ∑ k = 1 n a k c n − k ) . {\displaystyle c_{n}={\frac {1}{a_{0}}}\left(b_{n}-\sum _{k=1}^{n}a_{k}c_{n-k}\right).} === Extracting coefficients === The coefficient extraction operator applied to a formal power series f ( X ) = ∑ n = 0 ∞ a n X n {\displaystyle f(X)=\sum _{n=0}^{\infty }a_{n}X^{n}} in X is written [ X m ] f ( X ) {\displaystyle \left[X^{m}\right]f(X)} and extracts the coefficient of Xm, so that [ X m ] f ( X ) = [ X m ] ∑ n = 0 ∞ a n X n = a m . 
{\displaystyle \left[X^{m}\right]f(X)=\left[X^{m}\right]\sum _{n=0}^{\infty }a_{n}X^{n}=a_{m}.} === Composition === Given two formal power series f ( X ) = ∑ n = 1 ∞ a n X n = a 1 X + a 2 X 2 + ⋯ {\displaystyle f(X)=\sum _{n=1}^{\infty }a_{n}X^{n}=a_{1}X+a_{2}X^{2}+\cdots } g ( X ) = ∑ n = 0 ∞ b n X n = b 0 + b 1 X + b 2 X 2 + ⋯ {\displaystyle g(X)=\sum _{n=0}^{\infty }b_{n}X^{n}=b_{0}+b_{1}X+b_{2}X^{2}+\cdots } such that a 0 = 0 , {\displaystyle a_{0}=0,} one may form the composition g ( f ( X ) ) = ∑ n = 0 ∞ b n ( f ( X ) ) n = ∑ n = 0 ∞ c n X n , {\displaystyle g(f(X))=\sum _{n=0}^{\infty }b_{n}(f(X))^{n}=\sum _{n=0}^{\infty }c_{n}X^{n},} where the coefficients cn are determined by "expanding out" the powers of f(X): c n := ∑ k ∈ N , | j | = n b k a j 1 a j 2 ⋯ a j k . {\displaystyle c_{n}:=\sum _{k\in \mathbb {N} ,|j|=n}b_{k}a_{j_{1}}a_{j_{2}}\cdots a_{j_{k}}.} Here the sum is extended over all (k, j) with k ∈ N {\displaystyle k\in \mathbb {N} } and j ∈ N + k {\displaystyle j\in \mathbb {N} _{+}^{k}} with | j | := j 1 + ⋯ + j k = n . {\displaystyle |j|:=j_{1}+\cdots +j_{k}=n.} Since a 0 = 0 , {\displaystyle a_{0}=0,} one must have k ≤ n {\displaystyle k\leq n} and j i ≤ n {\displaystyle j_{i}\leq n} for every i . {\displaystyle i.} This implies that the above sum is finite and that the coefficient c n {\displaystyle c_{n}} is the coefficient of X n {\displaystyle X^{n}} in the polynomial g n ( f n ( X ) ) {\displaystyle g_{n}(f_{n}(X))} , where f n {\displaystyle f_{n}} and g n {\displaystyle g_{n}} are the polynomials obtained by truncating the series at x n , {\displaystyle x^{n},} that is, by removing all terms involving a power of X {\displaystyle X} higher than n . {\displaystyle n.} A more explicit description of these coefficients is provided by Faà di Bruno's formula, at least in the case where the coefficient ring is a field of characteristic 0. 
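Rather than evaluating the multi-index sum directly, a truncated composition can be computed by a Horner-style scheme, as in this Python sketch over the rationals (helper names are illustrative):

```python
from fractions import Fraction
from math import factorial

def mul_trunc(a, b, trunc):
    """Cauchy product of two coefficient lists, truncated to degree trunc-1."""
    c = [Fraction(0)] * trunc
    for i, x in enumerate(a[:trunc]):
        for j, y in enumerate(b[:trunc - i]):
            c[i + j] += x * y
    return c

def compose(g, f, trunc):
    """Coefficients of g(f(X)) up to degree trunc-1; requires f[0] == 0."""
    assert f[0] == 0, "composition needs f to have no constant term"
    result = [Fraction(0)] * trunc
    for b in reversed(g[:trunc]):          # Horner: g(f) = b0 + f*(b1 + f*(...))
        result = mul_trunc(result, f, trunc)
        result[0] += b
    return result

exp_coeffs = [Fraction(1, factorial(n)) for n in range(6)]   # exp(X)
f = [Fraction(0)] + exp_coeffs[1:]                           # exp(X) - 1
print(compose(exp_coeffs, f, 5))   # 1, 1, 1, 5/6, 5/8 as Fractions
```

The printed coefficients 1, 1, 1, 5/6, 5/8 are the first coefficients of the exponential generating function of the Bell numbers; truncating both series at degree n is safe because, with f having no constant term, higher-degree coefficients cannot feed back into lower degrees.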
Composition is only valid when f ( X ) {\displaystyle f(X)} has no constant term, so that each c n {\displaystyle c_{n}} depends on only a finite number of coefficients of f ( X ) {\displaystyle f(X)} and g ( X ) {\displaystyle g(X)} . In other words, the series for g ( f ( X ) ) {\displaystyle g(f(X))} converges in the topology of R [ [ X ] ] {\displaystyle R[[X]]} . ==== Example ==== Assume that the ring R {\displaystyle R} has characteristic 0 and the nonzero integers are invertible in R {\displaystyle R} . If one denotes by exp ⁡ ( X ) {\displaystyle \exp(X)} the formal power series exp ⁡ ( X ) = 1 + X + X 2 2 ! + X 3 3 ! + X 4 4 ! + ⋯ , {\displaystyle \exp(X)=1+X+{\frac {X^{2}}{2!}}+{\frac {X^{3}}{3!}}+{\frac {X^{4}}{4!}}+\cdots ,} then the equality exp ⁡ ( exp ⁡ ( X ) − 1 ) = 1 + X + X 2 + 5 X 3 6 + 5 X 4 8 + ⋯ {\displaystyle \exp(\exp(X)-1)=1+X+X^{2}+{\frac {5X^{3}}{6}}+{\frac {5X^{4}}{8}}+\cdots } makes perfect sense as a formal power series, since the constant coefficient of exp ⁡ ( X ) − 1 {\displaystyle \exp(X)-1} is zero. === Composition inverse === Whenever a formal series f ( X ) = ∑ k f k X k ∈ R [ [ X ] ] {\displaystyle f(X)=\sum _{k}f_{k}X^{k}\in R[[X]]} has f0 = 0 and f1 being an invertible element of R, there exists a series g ( X ) = ∑ k g k X k {\displaystyle g(X)=\sum _{k}g_{k}X^{k}} that is the composition inverse of f {\displaystyle f} , meaning that composing f {\displaystyle f} with g {\displaystyle g} gives the series representing the identity function x = 0 + 1 x + 0 x 2 + 0 x 3 + ⋯ {\displaystyle x=0+1x+0x^{2}+0x^{3}+\cdots } . The coefficients of g {\displaystyle g} may be found recursively by using the above formula for the coefficients of a composition, equating them with those of the composition identity X (that is 1 at degree 1 and 0 at every degree greater than 1). 
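The degree-by-degree solution just described can be sketched in Python as follows (names are illustrative; instead of expanding the coefficient formula symbolically, the sketch recomputes a truncated composition at each step and corrects the new coefficient):

```python
from fractions import Fraction

def mul_trunc(a, b, trunc):
    """Cauchy product of two coefficient lists, truncated to degree trunc-1."""
    c = [Fraction(0)] * trunc
    for i, x in enumerate(a[:trunc]):
        for j, y in enumerate(b[:trunc - i]):
            c[i + j] += x * y
    return c

def compose(g, f, trunc):
    """Coefficients of g(f(X)) up to degree trunc-1, assuming f[0] == 0."""
    result = [Fraction(0)] * trunc
    for b in reversed(g[:trunc]):
        result = mul_trunc(result, f, trunc)
        result[0] += b
    return result

def reversion(f, trunc):
    """Compositional inverse of f (f[0] = 0, f[1] invertible), found degree by degree."""
    f = [Fraction(x) for x in f] + [Fraction(0)] * trunc
    g = [Fraction(0), 1 / f[1]] + [Fraction(0)] * (trunc - 2)
    for k in range(2, trunc):
        # with g_k still 0, the X^k coefficient of f(g) is off by exactly f_1 * g_k
        err = compose(f, g, k + 1)[k]
        g[k] = -err / f[1]
    return g

f = [0, 1, -1]                    # f(X) = X - X^2
g = reversion(f, 6)
print(g)                          # coefficients 0, 1, 1, 2, 5, 14 (Catalan numbers)
print(compose(f, g, 6))           # the identity series: 0, 1, 0, 0, 0, 0
```

Here the inverse of f(X) = X − X² is the series g = X + X² + 2X³ + 5X⁴ + 14X⁵ + ⋯ whose coefficients are the Catalan numbers, since g satisfies g − g² = X.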
In the case when the coefficient ring is a field of characteristic 0, the Lagrange inversion formula (discussed below) provides a powerful tool to compute the coefficients of g, as well as the coefficients of the (multiplicative) powers of g. === Formal differentiation === Given a formal power series f = ∑ n ≥ 0 a n X n ∈ R [ [ X ] ] , {\displaystyle f=\sum _{n\geq 0}a_{n}X^{n}\in R[[X]],} we define its formal derivative, denoted Df or f ′, by D f = f ′ = ∑ n ≥ 1 a n n X n − 1 . {\displaystyle Df=f'=\sum _{n\geq 1}a_{n}nX^{n-1}.} The symbol D is called the formal differentiation operator. This definition simply mimics term-by-term differentiation of a polynomial. This operation is R-linear: D ( a f + b g ) = a ⋅ D f + b ⋅ D g {\displaystyle D(af+bg)=a\cdot Df+b\cdot Dg} for any a, b in R and any f, g in R [ [ X ] ] . {\displaystyle R[[X]].} Additionally, the formal derivative has many of the properties of the usual derivative of calculus. For example, the product rule is valid: D ( f g ) = f ⋅ ( D g ) + ( D f ) ⋅ g , {\displaystyle D(fg)\ =\ f\cdot (Dg)+(Df)\cdot g,} and the chain rule works as well: D ( f ∘ g ) = ( D f ∘ g ) ⋅ D g , {\displaystyle D(f\circ g)=(Df\circ g)\cdot Dg,} whenever the appropriate compositions of series are defined (see above under composition of series). Thus, in these respects formal power series behave like Taylor series. Indeed, for the f defined above, we find that ( D k f ) ( 0 ) = k ! a k , {\displaystyle (D^{k}f)(0)=k!a_{k},} where Dk denotes the kth formal derivative (that is, the result of formally differentiating k times). === Formal antidifferentiation === If R {\displaystyle R} is a ring with characteristic zero and the nonzero integers are invertible in R {\displaystyle R} , then given a formal power series f = ∑ n ≥ 0 a n X n ∈ R [ [ X ] ] , {\displaystyle f=\sum _{n\geq 0}a_{n}X^{n}\in R[[X]],} we define its formal antiderivative or formal indefinite integral by D − 1 f = ∫ f d X = C + ∑ n ≥ 0 a n X n + 1 n + 1 . 
{\displaystyle D^{-1}f=\int f\ dX=C+\sum _{n\geq 0}a_{n}{\frac {X^{n+1}}{n+1}}.} for any constant C ∈ R {\displaystyle C\in R} . This operation is R-linear: D − 1 ( a f + b g ) = a ⋅ D − 1 f + b ⋅ D − 1 g {\displaystyle D^{-1}(af+bg)=a\cdot D^{-1}f+b\cdot D^{-1}g} for any a, b in R and any f, g in R [ [ X ] ] . {\displaystyle R[[X]].} Additionally, the formal antiderivative has many of the properties of the usual antiderivative of calculus. For example, the formal antiderivative is the right inverse of the formal derivative: D ( D − 1 ( f ) ) = f {\displaystyle D(D^{-1}(f))=f} for any f ∈ R [ [ X ] ] {\displaystyle f\in R[[X]]} . == Properties == === Algebraic properties of the formal power series ring === R [ [ X ] ] {\displaystyle R[[X]]} is an associative algebra over R {\displaystyle R} which contains the ring R [ X ] {\displaystyle R[X]} of polynomials over R {\displaystyle R} ; the polynomials correspond to the sequences which end in zeros. The Jacobson radical of R [ [ X ] ] {\displaystyle R[[X]]} is the ideal generated by X {\displaystyle X} and the Jacobson radical of R {\displaystyle R} ; this is implied by the element invertibility criterion discussed above. The maximal ideals of R [ [ X ] ] {\displaystyle R[[X]]} all arise from those in R {\displaystyle R} in the following manner: an ideal M {\displaystyle M} of R [ [ X ] ] {\displaystyle R[[X]]} is maximal if and only if M ∩ R {\displaystyle M\cap R} is a maximal ideal of R {\displaystyle R} and M {\displaystyle M} is generated as an ideal by X {\displaystyle X} and M ∩ R {\displaystyle M\cap R} . 
Several algebraic properties of R {\displaystyle R} are inherited by R [ [ X ] ] {\displaystyle R[[X]]} : if R {\displaystyle R} is a local ring, then so is R [ [ X ] ] {\displaystyle R[[X]]} (with the set of non-units as the unique maximal ideal), if R {\displaystyle R} is Noetherian, then so is R [ [ X ] ] {\displaystyle R[[X]]} (a version of the Hilbert basis theorem), if R {\displaystyle R} is an integral domain, then so is R [ [ X ] ] {\displaystyle R[[X]]} , and if K {\displaystyle K} is a field, then K [ [ X ] ] {\displaystyle K[[X]]} is a discrete valuation ring. === Topological properties of the formal power series ring === The metric space ( R [ [ X ] ] , d ) {\displaystyle (R[[X]],d)} is complete. The ring R [ [ X ] ] {\displaystyle R[[X]]} is compact if and only if R is finite. This follows from Tychonoff's theorem and the characterisation of the topology on R [ [ X ] ] {\displaystyle R[[X]]} as a product topology. === Weierstrass preparation === The ring of formal power series with coefficients in a complete local ring satisfies the Weierstrass preparation theorem. == Applications == Formal power series can be used to solve recurrences occurring in number theory and combinatorics. For an example involving finding a closed form expression for the Fibonacci numbers, see the article on Examples of generating functions. One can use formal power series to prove several relations familiar from analysis in a purely algebraic setting. Consider for instance the following elements of Q [ [ X ] ] {\displaystyle \mathbb {Q} [[X]]} : sin ⁡ ( X ) := ∑ n ≥ 0 ( − 1 ) n ( 2 n + 1 ) ! X 2 n + 1 {\displaystyle \sin(X):=\sum _{n\geq 0}{\frac {(-1)^{n}}{(2n+1)!}}X^{2n+1}} cos ⁡ ( X ) := ∑ n ≥ 0 ( − 1 ) n ( 2 n ) ! 
X 2 n {\displaystyle \cos(X):=\sum _{n\geq 0}{\frac {(-1)^{n}}{(2n)!}}X^{2n}} Then one can show that sin 2 ⁡ ( X ) + cos 2 ⁡ ( X ) = 1 , {\displaystyle \sin ^{2}(X)+\cos ^{2}(X)=1,} ∂ ∂ X sin ⁡ ( X ) = cos ⁡ ( X ) , {\displaystyle {\frac {\partial }{\partial X}}\sin(X)=\cos(X),} sin ⁡ ( X + Y ) = sin ⁡ ( X ) cos ⁡ ( Y ) + cos ⁡ ( X ) sin ⁡ ( Y ) . {\displaystyle \sin(X+Y)=\sin(X)\cos(Y)+\cos(X)\sin(Y).} The last identity is valid in the ring Q [ [ X , Y ] ] . {\displaystyle \mathbb {Q} [[X,Y]].} For K a field, the ring K [ [ X 1 , … , X r ] ] {\displaystyle K[[X_{1},\ldots ,X_{r}]]} is often used as the "standard, most general" complete local ring over K in algebra. == Interpreting formal power series as functions == In mathematical analysis, every convergent power series defines a function with values in the real or complex numbers. Formal power series over certain special rings can also be interpreted as functions, but one has to be careful with the domain and codomain. Let f = ∑ a n X n ∈ R [ [ X ] ] , {\displaystyle f=\sum a_{n}X^{n}\in R[[X]],} and suppose S {\displaystyle S} is a commutative associative algebra over R {\displaystyle R} , I {\displaystyle I} is an ideal in S {\displaystyle S} such that the I-adic topology on S {\displaystyle S} is complete, and x {\displaystyle x} is an element of I {\displaystyle I} . Define: f ( x ) = ∑ n ≥ 0 a n x n . {\displaystyle f(x)=\sum _{n\geq 0}a_{n}x^{n}.} This series is guaranteed to converge in S {\displaystyle S} given the above assumptions on x {\displaystyle x} . Furthermore, we have ( f + g ) ( x ) = f ( x ) + g ( x ) {\displaystyle (f+g)(x)=f(x)+g(x)} and ( f g ) ( x ) = f ( x ) g ( x ) . {\displaystyle (fg)(x)=f(x)g(x).} Unlike in the case of bona fide functions, these formulas are not definitions but have to be proved. 
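Such identities can at least be checked on truncations by direct coefficient arithmetic. The following Python sketch (names illustrative) verifies (fg)(x) = f(x)g(x) for f = 1/(1 − X), g = X/(1 − X)², and x = X², working modulo X⁸:

```python
from fractions import Fraction

N = 8  # work modulo X**N

def mul_trunc(a, b):
    """Truncated Cauchy product of two coefficient lists of length N."""
    c = [Fraction(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def evaluate(f, x):
    """f(x) = sum a_n * x**n mod X**N, for x with zero constant term."""
    assert x[0] == 0
    result = [Fraction(0)] * N
    for a_n in reversed(f):          # Horner scheme
        result = mul_trunc(result, x)
        result[0] += a_n
    return result

f = [Fraction(1)] * N                # 1/(1-X)   = 1 + X + X^2 + ...
g = [Fraction(n) for n in range(N)]  # X/(1-X)^2 = 0 + X + 2X^2 + 3X^3 + ...
x = [Fraction(0), Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 3)  # x = X^2

lhs = evaluate(mul_trunc(f, g), x)   # (f*g)(x)
rhs = mul_trunc(evaluate(f, x), evaluate(g, x))  # f(x)*g(x)
print(lhs == rhs)  # True
```

Because x has zero constant term, a_n·xⁿ has order at least n, so each coefficient of the result depends on only finitely many terms and the truncation at degree N loses nothing below that degree.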
Since the topology on R [ [ X ] ] {\displaystyle R[[X]]} is the ( X ) {\displaystyle (X)} -adic topology and R [ [ X ] ] {\displaystyle R[[X]]} is complete, we can in particular apply power series to other power series, provided that the arguments don't have constant coefficients (so that they belong to the ideal ( X ) {\displaystyle (X)} ): f ( 0 ) {\displaystyle f(0)} , f ( X 2 − X ) {\displaystyle f(X^{2}-X)} and f ( ( 1 − X ) − 1 − 1 ) {\displaystyle f((1-X)^{-1}-1)} are all well defined for any formal power series f ∈ R [ [ X ] ] . {\displaystyle f\in R[[X]].} With this formalism, we can give an explicit formula for the multiplicative inverse of a power series f {\displaystyle f} whose constant coefficient a = f ( 0 ) {\displaystyle a=f(0)} is invertible in R {\displaystyle R} : f − 1 = ∑ n ≥ 0 a − n − 1 ( a − f ) n . {\displaystyle f^{-1}=\sum _{n\geq 0}a^{-n-1}(a-f)^{n}.} If the formal power series g {\displaystyle g} with g ( 0 ) = 0 {\displaystyle g(0)=0} is given implicitly by the equation f ( g ) = X {\displaystyle f(g)=X} where f {\displaystyle f} is a known power series with f ( 0 ) = 0 {\displaystyle f(0)=0} , then the coefficients of g {\displaystyle g} can be explicitly computed using the Lagrange inversion formula. == Generalizations == === Formal Laurent series === The formal Laurent series over a ring R {\displaystyle R} are defined in a similar way to a formal power series, except that we also allow finitely many terms of negative degree. That is, they are the series that can be written as f = ∑ n = N ∞ a n X n {\displaystyle f=\sum _{n=N}^{\infty }a_{n}X^{n}} for some integer N {\displaystyle N} , so that there are only finitely many negative n {\displaystyle n} with a n ≠ 0 {\displaystyle a_{n}\neq 0} . (This is different from the classical Laurent series of complex analysis.) 
For a non-zero formal Laurent series, the minimal integer n {\displaystyle n} such that a n ≠ 0 {\displaystyle a_{n}\neq 0} is called the order of f {\displaystyle f} and is denoted ord ⁡ ( f ) . {\displaystyle \operatorname {ord} (f).} (The order ord(0) of the zero series is + ∞ {\displaystyle +\infty } .) For instance, X − 3 + 1 2 X − 2 + 1 3 X − 1 + 1 4 + 1 5 X + 1 6 X 2 + 1 7 X 3 + 1 8 X 4 + … {\displaystyle X^{-3}+{\frac {1}{2}}X^{-2}+{\frac {1}{3}}X^{-1}+{\frac {1}{4}}+{\frac {1}{5}}X+{\frac {1}{6}}X^{2}+{\frac {1}{7}}X^{3}+{\frac {1}{8}}X^{4}+\dots } is a formal Laurent series of order –3. Multiplication of such series can be defined. Indeed, similarly to the definition for formal power series, the coefficient of X k {\displaystyle X^{k}} of the product of two series with respective sequences of coefficients { a n } {\displaystyle \{a_{n}\}} and { b n } {\displaystyle \{b_{n}\}} is ∑ i ∈ Z a i b k − i . {\displaystyle \sum _{i\in \mathbb {Z} }a_{i}b_{k-i}.} This sum has only finitely many nonzero terms because of the assumed vanishing of coefficients at sufficiently negative indices. The formal Laurent series form the ring of formal Laurent series over R {\displaystyle R} , denoted by R ( ( X ) ) {\displaystyle R((X))} . It is equal to the localization of the ring R [ [ X ] ] {\displaystyle R[[X]]} of formal power series with respect to the set of positive powers of X {\displaystyle X} . If R = K {\displaystyle R=K} is a field, then K ( ( X ) ) {\displaystyle K((X))} is in fact a field, which may alternatively be obtained as the field of fractions of the integral domain K [ [ X ] ] {\displaystyle K[[X]]} . As with R [ [ X ] ] {\displaystyle R[[X]]} , the ring R ( ( X ) ) {\displaystyle R((X))} of formal Laurent series may be endowed with the structure of a topological ring by introducing the metric d ( f , g ) = 2 − ord ⁡ ( f − g ) . 
{\displaystyle d(f,g)=2^{-\operatorname {ord} (f-g)}.} (In particular, ord ⁡ ( 0 ) = + ∞ {\displaystyle \operatorname {ord} (0)=+\infty } implies that d ( f , f ) = 2 − ord ⁡ ( 0 ) = 0 {\displaystyle d(f,f)=2^{-\operatorname {ord} (0)}=0} .) One may define formal differentiation for formal Laurent series in the natural (term-by-term) way. Precisely, the formal derivative of the formal Laurent series f {\displaystyle f} above is f ′ = D f = ∑ n ∈ Z n a n X n − 1 , {\displaystyle f'=Df=\sum _{n\in \mathbb {Z} }na_{n}X^{n-1},} which is again a formal Laurent series. If f {\displaystyle f} is a non-constant formal Laurent series with coefficients in a field of characteristic 0, then one has ord ⁡ ( f ′ ) = ord ⁡ ( f ) − 1. {\displaystyle \operatorname {ord} (f')=\operatorname {ord} (f)-1.} However, in general this is not the case since the factor n {\displaystyle n} for the lowest order term could be equal to 0 in R {\displaystyle R} . ==== Formal residue ==== Assume that K {\displaystyle K} is a field of characteristic 0. Then the map D : K ( ( X ) ) → K ( ( X ) ) {\displaystyle D\colon K((X))\to K((X))} defined above is a K {\displaystyle K} -derivation that satisfies ker ⁡ D = K {\displaystyle \ker D=K} im ⁡ D = { f ∈ K ( ( X ) ) : [ X − 1 ] f = 0 } . {\displaystyle \operatorname {im} D=\left\{f\in K((X)):[X^{-1}]f=0\right\}.} The latter shows that the coefficient of X − 1 {\displaystyle X^{-1}} in f {\displaystyle f} is of particular interest; it is called the formal residue of f {\displaystyle f} and denoted Res ⁡ ( f ) {\displaystyle \operatorname {Res} (f)} . The map Res : K ( ( X ) ) → K {\displaystyle \operatorname {Res} :K((X))\to K} is K {\displaystyle K} -linear, and by the above observation one has an exact sequence 0 → K → K ( ( X ) ) ⟶ D K ( ( X ) ) ⟶ Res K → 0. {\displaystyle 0\to K\to K((X)){\overset {D}{\longrightarrow }}K((X))\;{\overset {\operatorname {Res} }{\longrightarrow }}\;K\to 0.} Some rules of calculus. 
As a quite direct consequence of the above definition, and of the rules of formal derivation, one has, for any f , g ∈ K ( ( X ) ) {\displaystyle f,g\in K((X))} Res ⁡ ( f ′ ) = 0 ; {\displaystyle \operatorname {Res} (f')=0;} Res ⁡ ( f g ′ ) = − Res ⁡ ( f ′ g ) ; {\displaystyle \operatorname {Res} (fg')=-\operatorname {Res} (f'g);} Res ⁡ ( f ′ / f ) = ord ⁡ ( f ) , ∀ f ≠ 0 ; {\displaystyle \operatorname {Res} (f'/f)=\operatorname {ord} (f),\qquad \forall f\neq 0;} Res ⁡ ( ( g ∘ f ) f ′ ) = ord ⁡ ( f ) Res ⁡ ( g ) , {\displaystyle \operatorname {Res} \left((g\circ f)f'\right)=\operatorname {ord} (f)\operatorname {Res} (g),} if ord ⁡ ( f ) > 0 ; {\displaystyle \operatorname {ord} (f)>0;} [ X n ] f ( X ) = Res ⁡ ( X − n − 1 f ( X ) ) . {\displaystyle [X^{n}]f(X)=\operatorname {Res} \left(X^{-n-1}f(X)\right).} Property (i) is part of the exact sequence above. Property (ii) follows from (i) as applied to ( f g ) ′ = f ′ g + f g ′ {\displaystyle (fg)'=f'g+fg'} . Property (iii): any f {\displaystyle f} can be written in the form f = X m g {\displaystyle f=X^{m}g} , with m = ord ⁡ ( f ) {\displaystyle m=\operatorname {ord} (f)} and ord ⁡ ( g ) = 0 {\displaystyle \operatorname {ord} (g)=0} : then f ′ / f = m X − 1 + g ′ / g . {\displaystyle f'/f=mX^{-1}+g'/g.} ord ⁡ ( g ) = 0 {\displaystyle \operatorname {ord} (g)=0} implies g {\displaystyle g} is invertible in K [ [ X ] ] ⊂ im ⁡ ( D ) = ker ⁡ ( Res ) , {\displaystyle K[[X]]\subset \operatorname {im} (D)=\ker(\operatorname {Res} ),} whence Res ⁡ ( f ′ / f ) = m . {\displaystyle \operatorname {Res} (f'/f)=m.} Property (iv): Since im ⁡ ( D ) = ker ⁡ ( Res ) , {\displaystyle \operatorname {im} (D)=\ker(\operatorname {Res} ),} we can write g = g − 1 X − 1 + G ′ , {\displaystyle g=g_{-1}X^{-1}+G',} with G ∈ K ( ( X ) ) {\displaystyle G\in K((X))} . 
Consequently, ( g ∘ f ) f ′ = g − 1 f − 1 f ′ + ( G ′ ∘ f ) f ′ = g − 1 f ′ / f + ( G ∘ f ) ′ {\displaystyle (g\circ f)f'=g_{-1}f^{-1}f'+(G'\circ f)f'=g_{-1}f'/f+(G\circ f)'} and (iv) follows from (i) and (iii). Property (v) is clear from the definition. === The Lagrange inversion formula === As mentioned above, any formal series f ∈ K [ [ X ] ] {\displaystyle f\in K[[X]]} with f0 = 0 and f1 ≠ 0 has a composition inverse g ∈ K [ [ X ] ] . {\displaystyle g\in K[[X]].} The following relation between the coefficients of gn and f−k holds ("Lagrange inversion formula"): k [ X k ] g n = n [ X − n ] f − k . {\displaystyle k[X^{k}]g^{n}=n[X^{-n}]f^{-k}.} In particular, for n = 1 and all k ≥ 1, [ X k ] g = 1 k Res ⁡ ( f − k ) . {\displaystyle [X^{k}]g={\frac {1}{k}}\operatorname {Res} \left(f^{-k}\right).} Since the proof of the Lagrange inversion formula is a very short computation, it is worth reporting one proof here. Noting ord ⁡ ( f ) = 1 {\displaystyle \operatorname {ord} (f)=1} , we can apply the rules of calculus above, crucially Rule (iv) substituting X ⇝ f ( X ) {\displaystyle X\rightsquigarrow f(X)} , to get: k [ X k ] g n = ( v ) k Res ⁡ ( g n X − k − 1 ) = ( i v ) k Res ⁡ ( X n f − k − 1 f ′ ) = c h a i n − Res ⁡ ( X n ( f − k ) ′ ) = ( i i ) Res ⁡ ( ( X n ) ′ f − k ) = c h a i n n Res ⁡ ( X n − 1 f − k ) = ( v ) n [ X − n ] f − k . {\displaystyle {\begin{aligned}k[X^{k}]g^{n}&\ {\stackrel {\mathrm {(v)} }{=}}\ k\operatorname {Res} \left(g^{n}X^{-k-1}\right)\ {\stackrel {\mathrm {(iv)} }{=}}\ k\operatorname {Res} \left(X^{n}f^{-k-1}f'\right)\ {\stackrel {\mathrm {chain} }{=}}\ -\operatorname {Res} \left(X^{n}(f^{-k})'\right)\\&\ {\stackrel {\mathrm {(ii)} }{=}}\ \operatorname {Res} \left(\left(X^{n}\right)'f^{-k}\right)\ {\stackrel {\mathrm {chain} }{=}}\ n\operatorname {Res} \left(X^{n-1}f^{-k}\right)\ {\stackrel {\mathrm {(v)} }{=}}\ n[X^{-n}]f^{-k}.\end{aligned}}} Generalizations. 
One may observe that the above computation can be repeated plainly in more general settings than K((X)): a generalization of the Lagrange inversion formula is already available working in the C ( ( X ) ) {\displaystyle \mathbb {C} ((X))} -modules X α C ( ( X ) ) , {\displaystyle X^{\alpha }\mathbb {C} ((X)),} where α is a complex exponent. As a consequence, if f and g are as above, with f 1 = g 1 = 1 {\displaystyle f_{1}=g_{1}=1} , we can relate the complex powers of f / X and g / X: precisely, if α and β are non-zero complex numbers with negative integer sum, m = − α − β ∈ N , {\displaystyle m=-\alpha -\beta \in \mathbb {N} ,} then 1 α [ X m ] ( f X ) α = − 1 β [ X m ] ( g X ) β . {\displaystyle {\frac {1}{\alpha }}[X^{m}]\left({\frac {f}{X}}\right)^{\alpha }=-{\frac {1}{\beta }}[X^{m}]\left({\frac {g}{X}}\right)^{\beta }.} For instance, this way one finds the power series for complex powers of the Lambert function. === Power series in several variables === Formal power series in any number of indeterminates (even infinitely many) can be defined. If I is an index set and XI is the set of indeterminates Xi for i∈I, then a monomial Xα is any finite product of elements of XI (repetitions allowed); a formal power series in XI with coefficients in a ring R is determined by any mapping from the set of monomials Xα to a corresponding coefficient cα, and is denoted ∑ α c α X α {\textstyle \sum _{\alpha }c_{\alpha }X^{\alpha }} . 
The set of all such formal power series is denoted R [ [ X I ] ] , {\displaystyle R[[X_{I}]],} and it is given a ring structure by defining ( ∑ α c α X α ) + ( ∑ α d α X α ) = ∑ α ( c α + d α ) X α {\displaystyle \left(\sum _{\alpha }c_{\alpha }X^{\alpha }\right)+\left(\sum _{\alpha }d_{\alpha }X^{\alpha }\right)=\sum _{\alpha }(c_{\alpha }+d_{\alpha })X^{\alpha }} and ( ∑ α c α X α ) × ( ∑ β d β X β ) = ∑ α , β c α d β X α + β {\displaystyle \left(\sum _{\alpha }c_{\alpha }X^{\alpha }\right)\times \left(\sum _{\beta }d_{\beta }X^{\beta }\right)=\sum _{\alpha ,\beta }c_{\alpha }d_{\beta }X^{\alpha +\beta }} ==== Topology ==== The topology on R [ [ X I ] ] {\displaystyle R[[X_{I}]]} is such that a sequence of its elements converges only if for each monomial Xα the corresponding coefficient stabilizes. If I is finite, then this is the J-adic topology, where J is the ideal of R [ [ X I ] ] {\displaystyle R[[X_{I}]]} generated by all the indeterminates in XI. This does not hold if I is infinite. For example, if I = N , {\displaystyle I=\mathbb {N} ,} then the sequence ( f n ) n ∈ N {\displaystyle (f_{n})_{n\in \mathbb {N} }} with f n = X n + X n + 1 + X n + 2 + ⋯ {\displaystyle f_{n}=X_{n}+X_{n+1}+X_{n+2}+\cdots } does not converge with respect to any J-adic topology on R [ [ X I ] ] {\displaystyle R[[X_{I}]]} , but clearly for each monomial the corresponding coefficient stabilizes. As remarked above, the topology on a repeated formal power series ring like R [ [ X ] ] [ [ Y ] ] {\displaystyle R[[X]][[Y]]} is usually chosen in such a way that it becomes isomorphic as a topological ring to R [ [ X , Y ] ] . {\displaystyle R[[X,Y]].} ==== Operations ==== All of the operations defined for series in one variable may be extended to the several variables case. A series is invertible if and only if its constant term is invertible in R. The composition f(g(X)) of two series f and g is defined if f is a series in a single indeterminate, and the constant term of g is zero. 
For a series f in several indeterminates a form of "composition" can similarly be defined, with as many separate series in the place of g as there are indeterminates. In the case of the formal derivative, there are now separate partial derivative operators, which differentiate with respect to each of the indeterminates. They all commute with each other. ==== Universal property ==== In the several variables case, the universal property characterizing R [ [ X 1 , … , X r ] ] {\displaystyle R[[X_{1},\ldots ,X_{r}]]} becomes the following. If S is a commutative associative algebra over R, if I is an ideal of S such that the I-adic topology on S is complete, and if x1, ..., xr are elements of I, then there is a unique map Φ : R [ [ X 1 , … , X r ] ] → S {\displaystyle \Phi :R[[X_{1},\ldots ,X_{r}]]\to S} with the following properties: Φ is an R-algebra homomorphism Φ is continuous Φ(Xi) = xi for i = 1, ..., r. === Non-commuting variables === The several variable case can be further generalised by taking non-commuting variables Xi for i ∈ I, where I is an index set and then a monomial Xα is any word in the XI; a formal power series in XI with coefficients in a ring R is determined by any mapping from the set of monomials Xα to a corresponding coefficient cα, and is denoted ∑ α c α X α {\displaystyle \textstyle \sum _{\alpha }c_{\alpha }X^{\alpha }} . 
The set of all such formal power series is denoted R«XI», and it is given a ring structure by defining addition pointwise ( ∑ α c α X α ) + ( ∑ α d α X α ) = ∑ α ( c α + d α ) X α {\displaystyle \left(\sum _{\alpha }c_{\alpha }X^{\alpha }\right)+\left(\sum _{\alpha }d_{\alpha }X^{\alpha }\right)=\sum _{\alpha }(c_{\alpha }+d_{\alpha })X^{\alpha }} and multiplication by ( ∑ α c α X α ) × ( ∑ α d α X α ) = ∑ α , β c α d β X α ⋅ X β {\displaystyle \left(\sum _{\alpha }c_{\alpha }X^{\alpha }\right)\times \left(\sum _{\alpha }d_{\alpha }X^{\alpha }\right)=\sum _{\alpha ,\beta }c_{\alpha }d_{\beta }X^{\alpha }\cdot X^{\beta }} where · denotes concatenation of words. These formal power series over R form the Magnus ring over R. === On a semiring === Given an alphabet Σ {\displaystyle \Sigma } and a semiring S {\displaystyle S} , the set of formal power series over S {\displaystyle S} supported on the language Σ ∗ {\displaystyle \Sigma ^{*}} is denoted by S ⟨ ⟨ Σ ∗ ⟩ ⟩ {\displaystyle S\langle \langle \Sigma ^{*}\rangle \rangle } . It consists of all mappings r : Σ ∗ → S {\displaystyle r:\Sigma ^{*}\to S} , where Σ ∗ {\displaystyle \Sigma ^{*}} is the free monoid generated by the non-empty set Σ {\displaystyle \Sigma } . The elements of S ⟨ ⟨ Σ ∗ ⟩ ⟩ {\displaystyle S\langle \langle \Sigma ^{*}\rangle \rangle } can be written as formal sums r = ∑ w ∈ Σ ∗ ( r , w ) w . {\displaystyle r=\sum _{w\in \Sigma ^{*}}(r,w)w.} where ( r , w ) {\displaystyle (r,w)} denotes the value of r {\displaystyle r} at the word w ∈ Σ ∗ {\displaystyle w\in \Sigma ^{*}} . The elements ( r , w ) ∈ S {\displaystyle (r,w)\in S} are called the coefficients of r {\displaystyle r} . 
For r ∈ S ⟨ ⟨ Σ ∗ ⟩ ⟩ {\displaystyle r\in S\langle \langle \Sigma ^{*}\rangle \rangle } the support of r {\displaystyle r} is the set supp ⁡ ( r ) = { w ∈ Σ ∗ | ( r , w ) ≠ 0 } {\displaystyle \operatorname {supp} (r)=\{w\in \Sigma ^{*}|\ (r,w)\neq 0\}} A series where every coefficient is either 0 {\displaystyle 0} or 1 {\displaystyle 1} is called the characteristic series of its support. The subset of S ⟨ ⟨ Σ ∗ ⟩ ⟩ {\displaystyle S\langle \langle \Sigma ^{*}\rangle \rangle } consisting of all series with a finite support is denoted by S ⟨ Σ ∗ ⟩ {\displaystyle S\langle \Sigma ^{*}\rangle } and called polynomials. For r 1 , r 2 ∈ S ⟨ ⟨ Σ ∗ ⟩ ⟩ {\displaystyle r_{1},r_{2}\in S\langle \langle \Sigma ^{*}\rangle \rangle } and s ∈ S {\displaystyle s\in S} , the sum r 1 + r 2 {\displaystyle r_{1}+r_{2}} is defined by ( r 1 + r 2 , w ) = ( r 1 , w ) + ( r 2 , w ) {\displaystyle (r_{1}+r_{2},w)=(r_{1},w)+(r_{2},w)} The (Cauchy) product r 1 ⋅ r 2 {\displaystyle r_{1}\cdot r_{2}} is defined by ( r 1 ⋅ r 2 , w ) = ∑ w 1 w 2 = w ( r 1 , w 1 ) ( r 2 , w 2 ) {\displaystyle (r_{1}\cdot r_{2},w)=\sum _{w_{1}w_{2}=w}(r_{1},w_{1})(r_{2},w_{2})} The Hadamard product r 1 ⊙ r 2 {\displaystyle r_{1}\odot r_{2}} is defined by ( r 1 ⊙ r 2 , w ) = ( r 1 , w ) ( r 2 , w ) {\displaystyle (r_{1}\odot r_{2},w)=(r_{1},w)(r_{2},w)} And the products by a scalar s r 1 {\displaystyle sr_{1}} and r 1 s {\displaystyle r_{1}s} by ( s r 1 , w ) = s ( r 1 , w ) {\displaystyle (sr_{1},w)=s(r_{1},w)} and ( r 1 s , w ) = ( r 1 , w ) s {\displaystyle (r_{1}s,w)=(r_{1},w)s} , respectively. With these operations ( S ⟨ ⟨ Σ ∗ ⟩ ⟩ , + , ⋅ , 0 , ε ) {\displaystyle (S\langle \langle \Sigma ^{*}\rangle \rangle ,+,\cdot ,0,\varepsilon )} and ( S ⟨ Σ ∗ ⟩ , + , ⋅ , 0 , ε ) {\displaystyle (S\langle \Sigma ^{*}\rangle ,+,\cdot ,0,\varepsilon )} are semirings, where ε {\displaystyle \varepsilon } is the empty word in Σ ∗ {\displaystyle \Sigma ^{*}} . 
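The sum, Cauchy product, and Hadamard product above are easy to sketch for polynomials (finite-support series). In the minimal Python version below, words are strings, concatenation is string concatenation, and the integers serve as the semiring; all of these are illustrative choices.

```python
def cauchy_product(r1, r2):
    """Cauchy product of two finite-support series over a free monoid:
    (r1 . r2, w) = sum over factorizations w = w1 w2 of (r1, w1)(r2, w2).
    Words are Python strings; coefficients here are integers."""
    out = {}
    for w1, c1 in r1.items():
        for w2, c2 in r2.items():
            w = w1 + w2                      # concatenation of words
            out[w] = out.get(w, 0) + c1 * c2
    return {w: c for w, c in out.items() if c != 0}

def hadamard(r1, r2):
    """Hadamard (pointwise) product: coefficient of w is (r1, w)(r2, w)."""
    out = {w: r1[w] * r2[w] for w in r1 if w in r2}
    return {w: c for w, c in out.items() if c != 0}
```

Note that the empty word, encoded as `""`, acts as the multiplicative identity for the Cauchy product, matching the role of ε above.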
These formal power series are used to model the behavior of weighted automata, in theoretical computer science, when the coefficients ( r , w ) {\displaystyle (r,w)} of the series are taken to be the weight of a path with label w {\displaystyle w} in the automata. === Replacing the index set by an ordered abelian group === Suppose G {\displaystyle G} is an ordered abelian group, meaning an abelian group with a total ordering < {\displaystyle <} respecting the group's addition, so that a < b {\displaystyle a<b} if and only if a + c < b + c {\displaystyle a+c<b+c} for all c {\displaystyle c} . Let I be a well-ordered subset of G {\displaystyle G} , meaning I contains no infinite descending chain. Consider the set consisting of ∑ i ∈ I a i X i {\displaystyle \sum _{i\in I}a_{i}X^{i}} for all such I, with a i {\displaystyle a_{i}} in a commutative ring R {\displaystyle R} , where we assume that for any index set, if all of the a i {\displaystyle a_{i}} are zero then the sum is zero. Then R ( ( G ) ) {\displaystyle R((G))} is the ring of formal power series on G {\displaystyle G} ; because of the condition that the indexing set be well-ordered the product is well-defined, and we of course assume that two elements which differ by zero are the same. Sometimes the notation [ [ R G ] ] {\displaystyle [[R^{G}]]} is used to denote R ( ( G ) ) {\displaystyle R((G))} . Various properties of R {\displaystyle R} transfer to R ( ( G ) ) {\displaystyle R((G))} . If R {\displaystyle R} is a field, then so is R ( ( G ) ) {\displaystyle R((G))} . If R {\displaystyle R} is an ordered field, we can order R ( ( G ) ) {\displaystyle R((G))} by setting any element to have the same sign as its leading coefficient, defined as the least element of the index set I associated to a non-zero coefficient. 
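The ordering just described is easy to make concrete for series with finite support. In the Python sketch below, a series is encoded as a dict from exponents (standing for elements of the ordered group G) to coefficients; this is an illustration, not a full Hahn-series implementation.

```python
def series_sign(series):
    """Sign of an element of R((G)) when R is an ordered field: the sign
    of the leading coefficient, i.e. the coefficient attached to the
    least exponent in the (well-ordered) support.

    A series is a dict {exponent: coefficient}."""
    support = [e for e, c in series.items() if c != 0]
    if not support:
        return 0          # the zero series
    lead = series[min(support)]
    return (lead > 0) - (lead < 0)
```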
Finally if G {\displaystyle G} is a divisible group and R {\displaystyle R} is a real closed field, then R ( ( G ) ) {\displaystyle R((G))} is a real closed field, and if R {\displaystyle R} is algebraically closed, then so is R ( ( G ) ) {\displaystyle R((G))} . This theory is due to Hans Hahn, who also showed that one obtains subfields when the number of (non-zero) terms is bounded by some fixed infinite cardinality. == Examples and related topics == Bell series are used to study the properties of multiplicative arithmetic functions Formal groups are used to define an abstract group law using formal power series Puiseux series are an extension of formal Laurent series, allowing fractional exponents Rational series == See also == Ring of restricted power series == Notes == == References == Berstel, Jean; Reutenauer, Christophe (2011). Noncommutative rational series with applications. Encyclopedia of Mathematics and Its Applications. Vol. 137. Cambridge: Cambridge University Press. ISBN 978-0-521-19022-0. Zbl 1250.68007. Nicolas Bourbaki: Algebra, IV, §4. Springer-Verlag 1988. == Further reading == W. Kuich. Semirings and formal power series: Their relevance to formal languages and automata theory. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 1, Chapter 9, pages 609–677. Springer, Berlin, 1997, ISBN 3-540-60420-0 Droste, M., & Kuich, W. (2009). Semirings and Formal Power Series. Handbook of Weighted Automata, 3–28. doi:10.1007/978-3-642-01492-5_1 Arto Salomaa (1990). "Formal Languages and Power Series". In Jan van Leeuwen (ed.). Formal Models and Semantics. Handbook of Theoretical Computer Science. Vol. B. Elsevier. pp. 103–132. ISBN 0-444-88074-7.
Wikipedia:Formula#0
In science, a formula is a concise way of expressing information symbolically, as in a mathematical formula or a chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities. The plural of formula can be either formulas (from the most common English plural noun form) or, under the influence of scientific Latin, formulae (from the original Latin). == In mathematics == In mathematics, a formula generally refers to an equation or inequality relating one mathematical expression to another, with the most important ones being mathematical theorems. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion. However, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume of a sphere in terms of its radius: V = 4 3 π r 3 . {\displaystyle V={\frac {4}{3}}\pi r^{3}.} Having obtained this result, the volume of any sphere can be computed as long as its radius is known. Here, notice that the volume V and the radius r are expressed as single letters instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, analytical or in closed form. In a general context, formulas often represent mathematical models of real world phenomena, and as such can be used to provide solutions (or approximate solutions) to real world problems, with some being more general than others. For example, the formula F = m a {\displaystyle F=ma} is an expression of Newton's second law, and is applicable to a wide range of physical situations. 
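A formula of this kind mechanizes the computation it describes; for instance, the sphere-volume formula becomes a one-line function, evaluable for any radius (a trivial Python sketch):

```python
import math

def sphere_volume(r):
    """Volume of a sphere of radius r: V = (4/3) * pi * r**3."""
    return (4.0 / 3.0) * math.pi * r ** 3
```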
Other formulas, such as the use of the equation of a sine curve to model the movement of the tides in a bay, may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations. Expressions are distinct from formulas in the sense that they don't usually contain relations like equality (=) or inequality (<). Expressions denote a mathematical object, whereas formulas denote a statement about mathematical objects. This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example, 8 x − 5 {\displaystyle 8x-5} is an expression, while 8 x − 5 ≥ 3 {\displaystyle 8x-5\geq 3} is a formula. However, in some areas of mathematics, and in particular in computer algebra, formulas are viewed as expressions that can be evaluated to true or false, depending on the values that are given to the variables occurring in the expressions. For example, 8 x − 5 ≥ 3 {\displaystyle 8x-5\geq 3} takes the value false if x is given a value less than 1, and the value true otherwise. (See Boolean expression) === In mathematical logic === In mathematical logic, a formula (often referred to as a well-formed formula) is an entity constructed using the symbols and formation rules of a given logical language. For example, in first-order logic, ∀ x ∀ y ( P ( f ( x ) ) → ¬ ( P ( x ) → Q ( f ( y ) , x , z ) ) ) {\displaystyle \forall x\forall y(P(f(x))\rightarrow \neg (P(x)\rightarrow Q(f(y),x,z)))} is a formula, provided that f {\displaystyle f} is a unary function symbol, P {\displaystyle P} a unary predicate symbol, and Q {\displaystyle Q} a ternary predicate symbol. == Chemical formulas == In modern chemistry, a chemical formula is a way of expressing information about the proportions of atoms that constitute a particular chemical compound, using a single line of chemical element symbols, numbers, and sometimes other symbols, such as parentheses, brackets, and plus (+) and minus (−) signs. 
For example, H2O is the chemical formula for water, specifying that each molecule consists of two hydrogen (H) atoms and one oxygen (O) atom. Similarly, O3− denotes an ozone molecule consisting of three oxygen atoms and a net negative charge. A chemical formula identifies each constituent element by its chemical symbol, and indicates the proportionate number of atoms of each element. In empirical formulas, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound—as ratios to the key element. For molecular compounds, these ratio numbers can always be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O, because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written as empirical formulas containing only whole numbers. An example is boron carbide, whose formula CBn has a variable non-whole-number ratio, with n ranging from over 4 to more than 6.5. When the chemical compound of the formula consists of simple molecules, chemical formulas often employ ways to suggest the structure of the molecule. There are several types of these formulas, including molecular formulas and condensed formulas. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. Except for very simple substances, molecular chemical formulas generally lack needed structural information, and might even be ambiguous on occasion. A structural formula is a drawing that shows the location of each atom, and which atoms it binds to. == In computing == In computing, a formula typically describes a calculation, such as addition, to be performed on one or more variables. A formula is often implicitly provided in the form of a computer instruction such as: 
Degrees Celsius = (5/9)*(Degrees Fahrenheit - 32) In computer spreadsheet software, a formula indicating how to compute the value of a cell, say A3, could be written as =A1+A2 where A1 and A2 refer to other cells (column A, row 1 or 2) within the spreadsheet. This is a shortcut for the "paper" form A3 = A1+A2, where A3 is, by convention, omitted because the result is always stored in the cell itself, making the stating of the name redundant. == Units == Formulas used in science almost always require a choice of units. Formulas are used to express relationships between various quantities, such as temperature, mass, or charge in physics; supply, profit, or demand in economics; or a wide range of other quantities in other disciplines. An example of a formula used in science is Boltzmann's entropy formula. In statistical thermodynamics, it is a probability equation relating the entropy S of an ideal gas to the quantity W, which is the number of microstates corresponding to a given macrostate: S = k ⋅ ln ⁡ W {\displaystyle S=k\cdot \ln W} where k is the Boltzmann constant, equal to 1.380649×10⁻²³ J⋅K⁻¹, and W is the number of microstates consistent with the given macrostate. == See also == Formula editor Formula unit Law (mathematics) Mathematical notation Scientific law Chemical symbol Theorem Well-formed formula == References ==
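Boltzmann's entropy formula from the Units section above likewise translates directly into code; a minimal Python sketch (the function name is illustrative):

```python
import math

BOLTZMANN_K = 1.380649e-23  # J/K, the exact value fixed by the 2019 SI

def boltzmann_entropy(W):
    """Boltzmann's entropy formula S = k * ln(W) for W microstates."""
    return BOLTZMANN_K * math.log(W)
```

A sanity check on the formula: because ln turns products into sums, the entropy of W1 * W2 microstates is the sum of the entropies of W1 and W2, which is the additivity property entropy is expected to have.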
Wikipedia:Forster–Swan theorem#0
The Forster–Swan theorem is a result from commutative algebra that states an upper bound for the minimal number of generators of a finitely generated module M {\displaystyle M} over a commutative Noetherian ring. The usefulness of the theorem stems from the fact that, in order to form the bound, one only needs the minimal number of generators of the localizations M p {\displaystyle M_{\mathfrak {p}}} . The theorem was proven in a more restrictive form in 1964 by Otto Forster and then in 1967 generalized by Richard G. Swan to its modern form. == Forster–Swan theorem == Let R {\displaystyle R} be a commutative Noetherian ring with one, M {\displaystyle M} a finitely generated R {\displaystyle R} -module, and p {\displaystyle {\mathfrak {p}}} a prime ideal of R {\displaystyle R} . Let μ ( M ) {\displaystyle \mu (M)} and μ p ( M ) {\displaystyle \mu _{\mathfrak {p}}(M)} denote the minimal number of generators of the R {\displaystyle R} -module M {\displaystyle M} and of the R p {\displaystyle R_{\mathfrak {p}}} -module M p {\displaystyle M_{\mathfrak {p}}} , respectively. According to Nakayama's lemma, in order to compute μ p ( M ) {\displaystyle \mu _{\mathfrak {p}}(M)} one can compute the dimension of M p / p M p {\displaystyle M_{\mathfrak {p}}/{\mathfrak {p}}M_{\mathfrak {p}}} over the field k ( p ) = R p / p R p {\displaystyle k({\mathfrak {p}})=R_{\mathfrak {p}}/{\mathfrak {p}}R_{\mathfrak {p}}} , i.e. μ p ( M ) = dim k ( p ) ⁡ ( M p / p M p ) . {\displaystyle \mu _{\mathfrak {p}}(M)=\operatorname {dim} _{k({\mathfrak {p}})}(M_{\mathfrak {p}}/{\mathfrak {p}}M_{\mathfrak {p}}).} === Statement === Define the local p {\displaystyle {\mathfrak {p}}} -bound b p ( M ) := μ p ( M ) + dim ⁡ ( R / p ) , {\displaystyle b_{\mathfrak {p}}(M):=\mu _{\mathfrak {p}}(M)+\operatorname {dim} (R/{\mathfrak {p}}),} then the following holds μ ( M ) ≤ sup p { b p ( M ) | p is prime , M p ≠ 0 } . 
{\displaystyle \mu (M)\leq \sup _{\mathfrak {p}}\;\{b_{\mathfrak {p}}(M)\;|\;{\mathfrak {p}}\;{\text{is prime}},\;M_{\mathfrak {p}}\neq 0\}.} == Bibliography == Rao, R.A.; Ischebeck, F. (2005). Ideals and Reality: Projective Modules and Number of Generators of Ideals. Germany: Physica-Verlag. Swan, Richard G. (1967). "The number of generators of a module". Mathematische Zeitschrift. 102 (4): 318–322. doi:10.1007/BF01110912. == References ==
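The shape of the bound is simple enough to state computationally: given, for each relevant prime, the pair (μ_p(M), dim(R/p)), the theorem caps μ(M) by the largest of their sums. A schematic Python sketch of forming that supremum (the input pairs are hypothetical data supplied by hand, not computed from an actual ring):

```python
def forster_swan_bound(local_data):
    """Forster-Swan upper bound for mu(M): the supremum of
    b_p(M) = mu_p(M) + dim(R/p) over primes p with M_p != 0.

    local_data: iterable of (mu_p, dim_R_mod_p) pairs, one entry per
    prime in the support of M (assumed finite here for illustration)."""
    return max(mu_p + d for mu_p, d in local_data)
```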
Wikipedia:Forum of Mathematics#0
Forum of Mathematics, Pi and Forum of Mathematics, Sigma are open-access peer-reviewed journals for mathematics published under a Creative Commons license by Cambridge University Press. The founding managing editor was Rob Kirby. He was succeeded by Robert Guralnick, who is currently the managing editor of both journals. Forum of Mathematics, Pi publishes articles of interest to a wide audience of mathematicians, while Forum of Mathematics, Sigma is intended for more specialized articles, with clusters of editors in different areas of mathematics. == Abstracting and indexing == Both journals are abstracted and indexed in Science Citation Index Expanded, MathSciNet, and Scopus. == References == == External links == A new open-access venture from Cambridge University Press, Tim Gowers, 2 July 2012 Forum of Mathematics, Pi and Forum of Mathematics, Sigma, Terry Tao, 2 July 2012 The Forum of Mathematics, blessing or curse?, Peter Krautzberger, 11 November 2012
Wikipedia:Fox derivative#0
In mathematics, the Fox derivative is an algebraic construction in the theory of free groups which bears many similarities to the conventional derivative of calculus. The Fox derivative and related concepts are often referred to as the Fox calculus, or (Fox's original term) the free differential calculus. The Fox derivative was developed in a series of five papers by mathematician Ralph Fox, published in Annals of Mathematics beginning in 1953. == Definition == If G {\displaystyle G} is a free group with identity element e {\displaystyle e} and generators g i {\displaystyle g_{i}} , then the Fox derivative with respect to g i {\displaystyle g_{i}} is a function from G {\displaystyle G} into the integral group ring Z G {\displaystyle \mathbb {Z} G} which is denoted ∂ ∂ g i {\displaystyle {\frac {\partial }{\partial g_{i}}}} , and obeys the following axioms: ∂ ∂ g i ( g j ) = δ i j {\displaystyle {\frac {\partial }{\partial g_{i}}}(g_{j})=\delta _{ij}} , where δ i j {\displaystyle \delta _{ij}} is the Kronecker delta ∂ ∂ g i ( e ) = 0 {\displaystyle {\frac {\partial }{\partial g_{i}}}(e)=0} ∂ ∂ g i ( u v ) = ∂ ∂ g i ( u ) + u ∂ ∂ g i ( v ) {\displaystyle {\frac {\partial }{\partial g_{i}}}(uv)={\frac {\partial }{\partial g_{i}}}(u)+u{\frac {\partial }{\partial g_{i}}}(v)} for any elements u {\displaystyle u} and v {\displaystyle v} of G {\displaystyle G} . The first two axioms are identical to similar properties of the partial derivative of calculus, and the third is a modified version of the product rule. As a consequence of the axioms, we have the following formula for inverses ∂ ∂ g i ( u − 1 ) = − u − 1 ∂ ∂ g i ( u ) {\displaystyle {\frac {\partial }{\partial g_{i}}}(u^{-1})=-u^{-1}{\frac {\partial }{\partial g_{i}}}(u)} for any element u {\displaystyle u} of G {\displaystyle G} . == Applications == The Fox derivative has applications in group cohomology, knot theory, and covering space theory, among other areas of mathematics. 
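The three axioms determine the derivative of any word by repeated use of the product rule: for a word x₁x₂…xₙ, the derivative is the sum over k of (x₁…x₍ₖ₋₁₎) times the derivative of the single letter xₖ. A minimal Python sketch of this, representing words as tuples of (generator, ±1) letters and group-ring elements as coefficient dictionaries (an encoding chosen here purely for illustration):

```python
def fox_derivative(word, gen):
    """Fox derivative of a free-group word with respect to generator `gen`.

    A word is a tuple of (generator, exponent) letters with exponent +1
    or -1; the result is an element of the integral group ring, encoded
    as a dict mapping words to integer coefficients.  By the product
    rule, d(x_1 ... x_n) = sum_k (x_1 ... x_{k-1}) d(x_k), where
    d(g) = 1 and d(g^-1) = -g^-1 for the generator g itself, and the
    derivative of any other letter is 0.
    """
    result = {}
    prefix = ()  # the product x_1 ... x_{k-1} preceding the current letter
    for g, e in word:
        if g == gen:
            if e == 1:
                term, coeff = prefix, 1                 # prefix * d(g)
            else:
                term, coeff = prefix + ((g, -1),), -1   # prefix * (-g^-1)
            result[term] = result.get(term, 0) + coeff
        prefix = prefix + ((g, e),)
    return {w: c for w, c in result.items() if c != 0}
```

For instance, the derivative of x y x⁻¹ with respect to x comes out as 1 − x y x⁻¹, matching the hand computation ∂(x) + x·∂(y x⁻¹) = 1 + x y·(−x⁻¹).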
== See also == Alexander polynomial Free group Ring (mathematics) Integral domain Derivation (differential algebra) == References == Brown, Kenneth S. (1972). Cohomology of Groups. Graduate Texts in Mathematics. Vol. 87. Springer Verlag. ISBN 0-387-90688-6. MR 0672956. Fox, Ralph (May 1953). "Free Differential Calculus, I: Derivation in the Free Group Ring". Annals of Mathematics. 57 (3): 547–560. doi:10.2307/1969736. JSTOR 1969736. MR 0053938. Fox, Ralph (March 1954). "Free Differential Calculus, II: The Isomorphism Problem of Groups". Annals of Mathematics. 59 (2): 196–210. doi:10.2307/1969686. JSTOR 1969686. MR 0062125. Fox, Ralph (November 1956). "Free Differential Calculus, III: Subgroups". Annals of Mathematics. 64 (2): 407–419. doi:10.2307/1969592. JSTOR 1969592. MR 0095876. Chen, Kuo-Tsai; Ralph Fox; Roger Lyndon (July 1958). "Free Differential Calculus, IV: The Quotient Groups of the Lower Central Series". Annals of Mathematics. 68 (1): 81–95. doi:10.2307/1970044. JSTOR 1970044. MR 0102539. Fox, Ralph (May 1960). "Free Differential Calculus, V: The Alexander Matrices Re-Examined". Annals of Mathematics. 71 (3): 408–422. doi:10.2307/1969936. JSTOR 1969936. MR 0111781.
Wikipedia:Fox–Wright function#0
In mathematics, the Fox–Wright function (also known as Fox–Wright Psi function, not to be confused with Wright Omega function) is a generalisation of the generalised hypergeometric function pFq(z) based on ideas of Charles Fox (1928) and E. Maitland Wright (1935): p Ψ q [ ( a 1 , A 1 ) ( a 2 , A 2 ) … ( a p , A p ) ( b 1 , B 1 ) ( b 2 , B 2 ) … ( b q , B q ) ; z ] = ∑ n = 0 ∞ Γ ( a 1 + A 1 n ) ⋯ Γ ( a p + A p n ) Γ ( b 1 + B 1 n ) ⋯ Γ ( b q + B q n ) z n n ! . {\displaystyle {}_{p}\Psi _{q}\left[{\begin{matrix}(a_{1},A_{1})&(a_{2},A_{2})&\ldots &(a_{p},A_{p})\\(b_{1},B_{1})&(b_{2},B_{2})&\ldots &(b_{q},B_{q})\end{matrix}};z\right]=\sum _{n=0}^{\infty }{\frac {\Gamma (a_{1}+A_{1}n)\cdots \Gamma (a_{p}+A_{p}n)}{\Gamma (b_{1}+B_{1}n)\cdots \Gamma (b_{q}+B_{q}n)}}\,{\frac {z^{n}}{n!}}.} Upon changing the normalisation p Ψ q ∗ [ ( a 1 , A 1 ) ( a 2 , A 2 ) … ( a p , A p ) ( b 1 , B 1 ) ( b 2 , B 2 ) … ( b q , B q ) ; z ] = Γ ( b 1 ) ⋯ Γ ( b q ) Γ ( a 1 ) ⋯ Γ ( a p ) ∑ n = 0 ∞ Γ ( a 1 + A 1 n ) ⋯ Γ ( a p + A p n ) Γ ( b 1 + B 1 n ) ⋯ Γ ( b q + B q n ) z n n ! {\displaystyle {}_{p}\Psi _{q}^{*}\left[{\begin{matrix}(a_{1},A_{1})&(a_{2},A_{2})&\ldots &(a_{p},A_{p})\\(b_{1},B_{1})&(b_{2},B_{2})&\ldots &(b_{q},B_{q})\end{matrix}};z\right]={\frac {\Gamma (b_{1})\cdots \Gamma (b_{q})}{\Gamma (a_{1})\cdots \Gamma (a_{p})}}\sum _{n=0}^{\infty }{\frac {\Gamma (a_{1}+A_{1}n)\cdots \Gamma (a_{p}+A_{p}n)}{\Gamma (b_{1}+B_{1}n)\cdots \Gamma (b_{q}+B_{q}n)}}\,{\frac {z^{n}}{n!}}} it becomes pFq(z) for A1...p = B1...q = 1. The Fox–Wright function is a special case of the Fox H-function (Srivastava & Manocha 1984, p. 50): p Ψ q [ ( a 1 , A 1 ) ( a 2 , A 2 ) … ( a p , A p ) ( b 1 , B 1 ) ( b 2 , B 2 ) … ( b q , B q ) ; z ] = H p , q + 1 1 , p [ − z | ( 1 − a 1 , A 1 ) ( 1 − a 2 , A 2 ) … ( 1 − a p , A p ) ( 0 , 1 ) ( 1 − b 1 , B 1 ) ( 1 − b 2 , B 2 ) … ( 1 − b q , B q ) ] . 
{\displaystyle {}_{p}\Psi _{q}\left[{\begin{matrix}(a_{1},A_{1})&(a_{2},A_{2})&\ldots &(a_{p},A_{p})\\(b_{1},B_{1})&(b_{2},B_{2})&\ldots &(b_{q},B_{q})\end{matrix}};z\right]=H_{p,q+1}^{1,p}\left[-z\left|{\begin{matrix}(1-a_{1},A_{1})&(1-a_{2},A_{2})&\ldots &(1-a_{p},A_{p})\\(0,1)&(1-b_{1},B_{1})&(1-b_{2},B_{2})&\ldots &(1-b_{q},B_{q})\end{matrix}}\right.\right].} A special case of the Fox–Wright function appears as part of the normalizing constant of the modified half-normal distribution, whose pdf on ( 0 , ∞ ) {\displaystyle (0,\infty )} is given as f ( x ) = 2 β α 2 x α − 1 exp ⁡ ( − β x 2 + γ x ) Ψ ( α 2 , γ β ) {\displaystyle f(x)={\frac {2\beta ^{\frac {\alpha }{2}}x^{\alpha -1}\exp(-\beta x^{2}+\gamma x)}{\Psi {\left({\frac {\alpha }{2}},{\frac {\gamma }{\sqrt {\beta }}}\right)}}}} , where Ψ ( α , z ) = 1 Ψ 1 ( ( α , 1 2 ) ( 1 , 0 ) ; z ) {\displaystyle \Psi (\alpha ,z)={}_{1}\Psi _{1}\left({\begin{matrix}\left(\alpha ,{\frac {1}{2}}\right)\\(1,0)\end{matrix}};z\right)} denotes the Fox–Wright Psi function. == Wright function == The entire function W λ , μ ( z ) {\displaystyle W_{\lambda ,\mu }(z)} is often called the Wright function. It is the special case of 0 Ψ 1 [ … ] {\displaystyle {}_{0}\Psi _{1}\left[\ldots \right]} of the Fox–Wright function. Its series representation is W λ , μ ( z ) = ∑ n = 0 ∞ z n n ! Γ ( λ n + μ ) , λ > − 1. {\displaystyle W_{\lambda ,\mu }(z)=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!\,\Gamma (\lambda n+\mu )}},\lambda >-1.} This function is used extensively in fractional calculus and the stable count distribution. Recall that lim λ → 0 W λ , μ ( z ) = e z / Γ ( μ ) {\displaystyle \lim \limits _{\lambda \to 0}W_{\lambda ,\mu }(z)=e^{z}/\Gamma (\mu )} . Hence, a non-zero λ {\displaystyle \lambda } with zero μ {\displaystyle \mu } is the simplest nontrivial extension of the exponential function in such a context. Three properties were stated in Theorem 1 of Wright (1933) and 18.1(30–32) of Erdelyi, Bateman Project, Vol 3 (1955) (p. 
212) λ z W λ , μ + λ ( z ) = W λ , μ − 1 ( z ) + ( 1 − μ ) W λ , μ ( z ) ( a ) d d z W λ , μ ( z ) = W λ , μ + λ ( z ) ( b ) λ z d d z W λ , μ ( z ) = W λ , μ − 1 ( z ) + ( 1 − μ ) W λ , μ ( z ) ( c ) {\displaystyle {\begin{aligned}\lambda zW_{\lambda ,\mu +\lambda }(z)&=W_{\lambda ,\mu -1}(z)+(1-\mu )W_{\lambda ,\mu }(z)&(a)\\[6pt]{d \over dz}W_{\lambda ,\mu }(z)&=W_{\lambda ,\mu +\lambda }(z)&(b)\\[6pt]\lambda z{d \over dz}W_{\lambda ,\mu }(z)&=W_{\lambda ,\mu -1}(z)+(1-\mu )W_{\lambda ,\mu }(z)&(c)\end{aligned}}} Equation (a) is a recurrence formula. Equations (b) and (c) provide two paths to reduce a derivative, and (c) can be derived from (a) and (b). A special case of (c) is λ = − c α , μ = 0 {\displaystyle \lambda =-c\alpha ,\mu =0} . Replacing z {\displaystyle z} with − x α {\displaystyle -x^{\alpha }} , we have x d d x W − c α , 0 ( − x α ) = − 1 c [ W − c α , − 1 ( − x α ) + W − c α , 0 ( − x α ) ] {\displaystyle {\begin{array}{lcl}x{d \over dx}W_{-c\alpha ,0}(-x^{\alpha })&=&-{\frac {1}{c}}\left[W_{-c\alpha ,-1}(-x^{\alpha })+W_{-c\alpha ,0}(-x^{\alpha })\right]\end{array}}} A special case of (a) is λ = − α , μ = 1 {\displaystyle \lambda =-\alpha ,\mu =1} . Replacing z {\displaystyle z} with − z {\displaystyle -z} , we have α z W − α , 1 − α ( − z ) = W − α , 0 ( − z ) {\displaystyle \alpha zW_{-\alpha ,1-\alpha }(-z)=W_{-\alpha ,0}(-z)} Two notations, M α ( z ) {\displaystyle M_{\alpha }(z)} and F α ( z ) {\displaystyle F_{\alpha }(z)} , are used extensively in the literature: M α ( z ) = W − α , 1 − α ( − z ) , ⟹ F α ( z ) = W − α , 0 ( − z ) = α z M α ( z ) . 
{\displaystyle {\begin{aligned}M_{\alpha }(z)&=W_{-\alpha ,1-\alpha }(-z),\\[1ex]\implies F_{\alpha }(z)&=W_{-\alpha ,0}(-z)=\alpha zM_{\alpha }(z).\end{aligned}}} === M-Wright function === M α ( z ) {\displaystyle M_{\alpha }(z)} is known as the M-Wright function, entering as a probability density in a relevant class of self-similar stochastic processes, generally referred to as time-fractional diffusion processes. Its properties were surveyed in Mainardi et al. (2010). Through the stable count distribution, α {\displaystyle \alpha } is connected to Lévy's stability index ( 0 < α ≤ 1 ) {\displaystyle (0<\alpha \leq 1)} . The asymptotic expansion of M α ( z ) {\displaystyle M_{\alpha }(z)} for α > 0 {\displaystyle \alpha >0} is M α ( r α ) = A ( α ) r ( α − 1 / 2 ) / ( 1 − α ) e − B ( α ) r 1 / ( 1 − α ) , r → ∞ , {\displaystyle M_{\alpha }\left({\frac {r}{\alpha }}\right)=A(\alpha )\,r^{(\alpha -1/2)/(1-\alpha )}\,e^{-B(\alpha )\,r^{1/(1-\alpha )}},\,\,r\rightarrow \infty ,} where A ( α ) = 1 2 π ( 1 − α ) , {\displaystyle A(\alpha )={\frac {1}{\sqrt {2\pi (1-\alpha )}}},} B ( α ) = 1 − α α . {\displaystyle B(\alpha )={\frac {1-\alpha }{\alpha }}.} == See also == Prabhakar function Hypergeometric function Generalized hypergeometric function Modified half-normal distribution, whose pdf on ( 0 , ∞ ) {\displaystyle (0,\infty )} is given as f ( x ) = 2 β α 2 x α − 1 exp ⁡ ( − β x 2 + γ x ) Ψ ( α 2 , γ β ) {\displaystyle f(x)={\frac {2\beta ^{\frac {\alpha }{2}}x^{\alpha -1}\exp(-\beta x^{2}+\gamma x)}{\Psi {\left({\frac {\alpha }{2}},{\frac {\gamma }{\sqrt {\beta }}}\right)}}}} , where Ψ ( α , z ) = 1 Ψ 1 ( ( α , 1 2 ) ( 1 , 0 ) ; z ) {\displaystyle \Psi (\alpha ,z)={}_{1}\Psi _{1}\left({\begin{matrix}\left(\alpha ,{\frac {1}{2}}\right)\\(1,0)\end{matrix}};z\right)} denotes the Fox–Wright Psi function. == References == Fox, C. (1928). "The asymptotic expansion of integral functions defined by generalized hypergeometric series". Proc. London Math. Soc. 
27 (1): 389–400. doi:10.1112/plms/s2-27.1.389. Wright, E. M. (1935). "The asymptotic expansion of the generalized hypergeometric function". J. London Math. Soc. 10 (4): 286–293. doi:10.1112/jlms/s1-10.40.286. Wright, E. M. (1940). "The asymptotic expansion of the generalized hypergeometric function". Proc. London Math. Soc. 46 (2): 389–408. doi:10.1112/plms/s2-46.1.389. Wright, E. M. (1952). "Erratum to "The asymptotic expansion of the generalized hypergeometric function"". J. London Math. Soc. 27: 254. doi:10.1112/plms/s2-54.3.254-s. Srivastava, H.M.; Manocha, H.L. (1984). A treatise on generating functions. E. Horwood. ISBN 0-470-20010-3. Miller, A. R.; Moskowitz, I.S. (1995). "Reduction of a Class of Fox–Wright Psi Functions for Certain Rational Parameters". Computers Math. Applic. 30 (11): 73–82. doi:10.1016/0898-1221(95)00165-u. Sun, Jingchao; Kong, Maiying; Pal, Subhadip (22 June 2021). "The Modified-Half-Normal distribution: Properties and an efficient sampling scheme". Communications in Statistics – Theory and Methods. 52 (5): 1591–1613. doi:10.1080/03610926.2021.1934700. ISSN 0361-0926. S2CID 237919587. == External links == hypergeom on GitLab
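The series representation of the Wright function given above lends itself to direct numerical evaluation. A minimal Python sketch, truncating the series after a fixed number of terms (the term count and the pole handling are implementation choices, not part of the definition):

```python
import math

def wright(lam, mu, z, terms=60):
    """Truncated series for the Wright function
    W_{lam,mu}(z) = sum_{n>=0} z**n / (n! * Gamma(lam*n + mu)),  lam > -1.

    When lam*n + mu lands on a pole of Gamma (a non-positive integer),
    the reciprocal 1/Gamma vanishes, so that term contributes nothing."""
    total = 0.0
    for n in range(terms):
        a = lam * n + mu
        if a <= 0 and a == int(a):   # 1/Gamma at a pole is 0: skip the term
            continue
        total += z ** n / (math.factorial(n) * math.gamma(a))
    return total
```

With lam = 0 and mu = 1 this reduces to e^z, as the limit formula above states; with lam = −1/2 and mu = 1/2 it reproduces M_{1/2}(z) = exp(−z²/4)/√π via M_α(z) = W_{−α,1−α}(−z).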
Wikipedia:Fractal#0
In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar. Fractal geometry lies within the mathematical branch of measure theory. One way that fractals are different from finite geometric figures is how they scale. Doubling the edge lengths of a filled polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the conventional dimension of the filled polygon). Likewise, if the radius of a filled sphere is doubled, its volume scales by eight, which is two (the ratio of the new to the old radius) to the power of three (the conventional dimension of the filled sphere). However, if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer and is in general greater than its conventional dimension. This power is called the fractal dimension of the geometric object, to distinguish it from the conventional dimension (which is formally called the topological dimension). Analytically, many fractals are nowhere differentiable. An infinite fractal curve can be conceived of as winding through space differently from an ordinary line – although it is still topologically 1-dimensional, its fractal dimension indicates that it locally fills space more efficiently than an ordinary line. 
Starting in the 17th century with notions of recursion, fractals have moved through increasingly rigorous mathematical treatment to the study of continuous but not differentiable functions in the 19th century by the seminal work of Bernard Bolzano, Bernhard Riemann, and Karl Weierstrass, and on to the coining of the word fractal in the 20th century, with a subsequent burgeoning of interest in fractals and computer-based modelling. There is some disagreement among mathematicians about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful. That's fractals." More formally, in 1982 Mandelbrot defined fractal as follows: "A fractal is by definition a set for which the Hausdorff–Besicovitch dimension strictly exceeds the topological dimension." Later, seeing this as too restrictive, he simplified and expanded the definition to this: "A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole." Still later, Mandelbrot proposed "to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants". The consensus among mathematicians is that theoretical fractals are infinitely self-similar iterated and detailed mathematical constructs, of which many examples have been formulated and studied. Fractals are not limited to geometric patterns, but can also describe processes in time. Fractal patterns with various degrees of self-similarity have been rendered or studied in visual, physical, and aural media and found in nature, technology, art, and architecture. Fractals are of particular relevance in the field of chaos theory because they show up in the geometric depictions of most chaotic processes (typically either as attractors or as boundaries between basins of attraction). 
== Etymology == The term "fractal" was coined by the mathematician Benoît Mandelbrot in 1975. Mandelbrot based it on the Latin frāctus, meaning "broken" or "fractured", and used it to extend the concept of theoretical fractional dimensions to geometric patterns in nature. == Introduction == The word "fractal" often has different connotations for mathematicians and the general public, where the public is more likely to be familiar with fractal art than the mathematical concept. The mathematical concept is difficult to define formally, even for mathematicians, but key features can be understood with a little mathematical background. The feature of "self-similarity", for instance, is easily understood by analogy to zooming in with a lens or other device that zooms in on digital images to uncover finer, previously invisible, new structure. If this is done on fractals, however, no new detail appears; nothing changes and the same pattern repeats over and over, or for some fractals, nearly the same pattern reappears over and over. Self-similarity itself is not necessarily counter-intuitive (e.g., people have pondered self-similarity informally such as in the infinite regress in parallel mirrors or the homunculus, the little man inside the head of the little man inside the head ...). The difference for fractals is that the pattern reproduced must be detailed.: 166, 18 This idea of being detailed relates to another feature that can be understood without much mathematical background: Having a fractal dimension greater than its topological dimension, for instance, refers to how a fractal scales compared to how geometric shapes are usually perceived. A straight line, for instance, is conventionally understood to be one-dimensional; if such a figure is rep-tiled into pieces each 1/3 the length of the original, then there are always three equal pieces. 
A solid square is understood to be two-dimensional; if such a figure is rep-tiled into pieces each scaled down by a factor of 1/3 in both dimensions, there are a total of 3^2 = 9 pieces. We see that for ordinary self-similar objects, being n-dimensional means that when it is rep-tiled into pieces each scaled down by a scale-factor of 1/r, there are a total of r^n pieces. Now, consider the Koch curve. It can be rep-tiled into four sub-copies, each scaled down by a scale-factor of 1/3. So, strictly by analogy, we can consider the "dimension" of the Koch curve as being the unique real number D that satisfies 3^D = 4, namely D = log 4 / log 3 ≈ 1.2619. This number is called the fractal dimension of the Koch curve; it is not the conventionally perceived dimension of a curve. In general, a key property of fractals is that the fractal dimension differs from the conventionally understood dimension (formally called the topological dimension). This also leads to understanding a third feature, that fractals as mathematical equations are "nowhere differentiable". In a concrete sense, this means fractals cannot be measured in traditional ways. To elaborate, in trying to find the length of a wavy non-fractal curve, one could find straight segments of some measuring tool small enough to lay end to end over the waves, where the pieces could get small enough to be considered to conform to the curve in the normal manner of measuring with a tape measure. But in measuring an infinitely "wiggly" fractal curve such as the Koch snowflake, one would never find a small enough straight segment to conform to the curve, because the jagged pattern would always reappear at arbitrarily small scales, essentially pulling a little more of the tape measure into the total length measured each time one attempted to fit it tighter and tighter to the curve. The result is that one would need an infinitely long tape measure to cover the entire curve perfectly, i.e. the snowflake has an infinite perimeter. 
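The Koch-curve arithmetic above can be checked numerically. The following sketch (the function name and iteration counts are illustrative choices, not from any source) computes the fractal dimension from 3^D = 4 and shows the perimeter growing without bound:

```python
import math

# Illustrative sketch: each rep-tiling step replaces every segment of the
# Koch curve with 4 segments, each 1/3 as long, so the total length is
# multiplied by 4/3 per step.
def koch_length(iterations, base_length=1.0):
    """Total length of the Koch curve after a number of rep-tiling steps."""
    return base_length * (4 / 3) ** iterations

# The fractal dimension D is the unique real solution of 3**D == 4.
D = math.log(4) / math.log(3)
print(f"fractal dimension D = {D:.4f}")  # ≈ 1.2619, between 1 and 2

# The length grows without bound, illustrating the infinite perimeter.
for n in (1, 10, 100):
    print(f"after {n:>3} steps: length = {koch_length(n):.4g}")
```

Because 4/3 > 1, the length diverges as the number of steps increases, which is the "infinite tape measure" observation made above.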
== History ==
The history of fractals traces a path from chiefly theoretical studies to modern applications in computer graphics, with several notable people contributing canonical fractal forms along the way. A common theme in traditional African architecture is the use of fractal scaling, whereby small parts of the structure tend to look similar to larger parts, such as a circular village made of circular houses. According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity (although he made the mistake of thinking that only the straight line was self-similar in this sense). In his writings, Leibniz used the term "fractional exponents", but lamented that "Geometry" did not yet know of them.: 405  Indeed, according to various historical accounts, after that point few mathematicians tackled the issues, and the work of those who did remained obscured, largely because of resistance to such unfamiliar emerging concepts, which were sometimes referred to as mathematical "monsters". Thus, it was not until two centuries had passed that, on July 18, 1872, Karl Weierstrass presented to the Royal Prussian Academy of Sciences the first definition of a function with a graph that would today be considered a fractal, having the non-intuitive property of being everywhere continuous but nowhere differentiable.: 7  In addition, the difference quotient becomes arbitrarily large as the summation index increases. 
Not long after that, in 1883, Georg Cantor, who attended lectures by Weierstrass, published examples of subsets of the real line known as Cantor sets, which had unusual properties and are now recognized as fractals.: 11–24 Also in the last part of that century, Felix Klein and Henri Poincaré introduced a category of fractal that has come to be called "self-inverse" fractals.: 166 One of the next milestones came in 1904, when Helge von Koch, extending ideas of Poincaré and dissatisfied with Weierstrass's abstract and analytic definition, gave a more geometric definition including hand-drawn images of a similar function, which is now called the Koch snowflake.: 25 Another milestone came a decade later in 1915, when Wacław Sierpiński constructed his famous triangle then, one year later, his carpet. By 1918, two French mathematicians, Pierre Fatou and Gaston Julia, though working independently, arrived essentially simultaneously at results describing what is now seen as fractal behaviour associated with mapping complex numbers and iterative functions and leading to further ideas about attractors and repellors (i.e., points that attract or repel other points), which have become very important in the study of fractals. Very shortly after that work was submitted, by March 1918, Felix Hausdorff expanded the definition of "dimension", significantly for the evolution of the definition of fractals, to allow for sets to have non-integer dimensions. The idea of self-similar curves was taken further by Paul Lévy, who, in his 1938 paper Plane or Space Curves and Surfaces Consisting of Parts Similar to the Whole, described a new fractal curve, the Lévy C curve. 
Different researchers have postulated that without the aid of modern computer graphics, early investigators were limited to what they could depict in manual drawings, so lacked the means to visualize the beauty and appreciate some of the implications of many of the patterns they had discovered (the Julia set, for instance, could only be visualized through a few iterations as very simple drawings).: 179  That changed, however, in the 1960s, when Benoit Mandelbrot started writing about self-similarity in papers such as How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension, which built on earlier work by Lewis Fry Richardson. In 1975, Mandelbrot solidified hundreds of years of thought and mathematical development in coining the word "fractal" and illustrated his mathematical definition with striking computer-constructed visualizations. These images, such as of his canonical Mandelbrot set, captured the popular imagination; many of them were based on recursion, leading to the popular meaning of the term "fractal". In 1980, Loren Carpenter gave a presentation at SIGGRAPH where he introduced his software for generating and rendering fractally generated landscapes.
== Definition and characteristics ==
One often-cited description that Mandelbrot published to describe geometric fractals is "a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole"; this is generally helpful but limited. Authors disagree on the exact definition of fractal, but most usually elaborate on the basic ideas of self-similarity and the unusual relationship fractals have with the space they are embedded in. One point agreed on is that fractal patterns are characterized by fractal dimensions, but whereas these numbers quantify complexity (i.e., changing detail with changing scale), they neither uniquely describe nor specify details of how to construct particular fractal patterns. 
In 1975 when Mandelbrot coined the word "fractal", he did so to denote an object whose Hausdorff–Besicovitch dimension is greater than its topological dimension. However, this requirement is not met by space-filling curves such as the Hilbert curve. Because of the trouble involved in finding one definition for fractals, some argue that fractals should not be strictly defined at all. According to Falconer, fractals should be only generally characterized by a gestalt of the following features:
* Self-similarity, which may include:
** Exact self-similarity: identical at all scales, such as the Koch snowflake
** Quasi self-similarity: approximates the same pattern at different scales; may contain small copies of the entire fractal in distorted and degenerate forms; e.g., the Mandelbrot set's satellites are approximations of the entire set, but not exact copies.
** Statistical self-similarity: repeats a pattern stochastically so numerical or statistical measures are preserved across scales; e.g., randomly generated fractals like the well-known example of the coastline of Britain, for which one would not expect to find a segment scaled and repeated as neatly as the repeated unit that defines fractals like the Koch snowflake.
** Qualitative self-similarity: as in a time series
** Multifractal scaling: characterized by more than one fractal dimension or scaling rule
* Fine or detailed structure at arbitrarily small scales. A consequence of this structure is fractals may have emergent properties (related to the next criterion in this list).
* Irregularity locally and globally that cannot easily be described in the language of traditional Euclidean geometry other than as the limit of a recursively defined sequence of stages. For images of fractal patterns, this has been expressed by phrases such as "smoothly piling up surfaces" and "swirls upon swirls"; see Common techniques for generating fractals. 
As a group, these criteria form guidelines for excluding certain cases, such as those that may be self-similar without having other typically fractal features. A straight line, for instance, is self-similar but not fractal because it lacks detail and is easily described in Euclidean language without a need for recursion.
== Common techniques for generating fractals ==
Images of fractals can be created by fractal generating programs. Because of the butterfly effect, a small change in a single variable can have an unpredictable outcome.
* Iterated function systems (IFS) – use fixed geometric replacement rules; may be stochastic or deterministic; e.g., Koch snowflake, Cantor set, Haferman carpet, Sierpinski carpet, Sierpinski gasket, Peano curve, Harter–Heighway dragon curve, T-square, Menger sponge
* Strange attractors – use iterations of a map or solutions of a system of initial-value differential or difference equations that exhibit chaos (e.g., see multifractal image, or the logistic map)
* L-systems – use string rewriting; may resemble branching patterns, such as in plants, biological cells (e.g., neurons and immune system cells), blood vessels, pulmonary structure, etc., or turtle graphics patterns such as space-filling curves and tilings
* Escape-time fractals – use a formula or recurrence relation at each point in a space (such as the complex plane); usually quasi-self-similar; also known as "orbit" fractals; e.g., the Mandelbrot set, Julia set, Burning Ship fractal, Nova fractal and Lyapunov fractal. The 2D vector fields that are generated by one or two iterations of escape-time formulae also give rise to a fractal form when points (or pixel data) are passed through this field repeatedly.
* Random fractals – use stochastic rules; e.g., Lévy flight, percolation clusters, self-avoiding walks, fractal landscapes, trajectories of Brownian motion and the Brownian tree (i.e., dendritic fractals generated by modeling diffusion-limited aggregation or reaction-limited aggregation clusters).
* Finite subdivision rules – use a recursive topological algorithm for refining tilings; they are similar to the process of cell division. The iterative processes used in creating the Cantor set and the Sierpinski carpet are examples of finite subdivision rules, as is barycentric subdivision.
== Applications ==
=== Simulated fractals ===
Fractal patterns have been modeled extensively, albeit within a range of scales rather than infinitely, owing to the practical limits of physical time and space. Models may simulate theoretical fractals or natural phenomena with fractal features. The outputs of the modelling process may be highly artistic renderings, outputs for investigation, or benchmarks for fractal analysis. Some specific applications of fractals to technology are listed elsewhere. Images and other outputs of modelling are normally referred to as being "fractals" even if they do not have strictly fractal characteristics, such as when it is possible to zoom into a region of the fractal image that does not exhibit any fractal properties. Also, these may include calculation or display artifacts which are not characteristics of true fractals. Modeled fractals may be sounds, digital images, electrochemical patterns, circadian rhythms, etc. Fractal patterns have been reconstructed in physical 3-dimensional space: 10  and virtually, often called "in silico" modeling. Models of fractals are generally created using fractal-generating software that implements techniques such as those outlined above. 
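As a minimal sketch of the escape-time technique outlined above (the grid bounds, resolution, iteration cap, and character ramp are illustrative choices, not taken from the article), the Mandelbrot set can be rendered by iterating z → z² + c at each point c of the complex plane and recording how quickly the orbit escapes:

```python
# Illustrative escape-time sketch; parameter values are arbitrary assumptions.
def escape_time(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; return the step at which |z| exceeds 2,
    or max_iter if the orbit stays bounded (c is then treated as in the set)."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

def render(width=60, height=24, re_range=(-2.0, 0.6), im_range=(-1.2, 1.2)):
    """Coarse ASCII rendering: each cell's character reflects how quickly the
    corresponding point of the complex plane escapes."""
    chars = " .:-=+*#%@"
    rows = []
    for j in range(height):
        im = im_range[0] + (im_range[1] - im_range[0]) * j / (height - 1)
        row = ""
        for i in range(width):
            re = re_range[0] + (re_range[1] - re_range[0]) * i / (width - 1)
            t = escape_time(complex(re, im))
            row += chars[min(t * len(chars) // 101, len(chars) - 1)]
        rows.append(row)
    return "\n".join(rows)

print(render())
```

Points whose orbits stay bounded (treated as inside the set) render as the densest character; quasi-self-similar detail appears as the grid is refined or the bounds are zoomed.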
As one illustration, trees, ferns, cells of the nervous system, blood and lung vasculature, and other branching patterns in nature can be modeled on a computer by using recursive algorithms and L-systems techniques. The recursive nature of some patterns is obvious in certain examples; a branch from a tree or a frond from a fern is a miniature replica of the whole: not identical, but similar in nature. Similarly, random fractals have been used to describe/create many highly irregular real-world objects, such as coastlines and mountains. A limitation of modeling fractals is that resemblance of a fractal model to a natural phenomenon does not prove that the phenomenon being modeled is formed by a process similar to the modeling algorithms.
=== Natural phenomena with fractal features ===
Approximate fractals found in nature display self-similarity over extended, but finite, scale ranges. The connection between fractals and leaves, for instance, is currently being used to determine how much carbon is contained in trees. Phenomena known to have fractal features include:
=== Fractals in cell biology ===
Fractals often appear in the realm of living organisms, where they arise through branching processes and other complex pattern formation. Ian Wong and co-workers have shown that migrating cells can form fractals by clustering and branching. Nerve cells function through processes at the cell surface, with phenomena that are enhanced by largely increasing the surface-to-volume ratio. As a consequence, nerve cells are often found to form fractal patterns. These processes are crucial in cell physiology and different pathologies. Multiple subcellular structures also are found to assemble into fractals. Diego Krapf has shown that, through branching processes, the actin filaments in human cells assemble into fractal patterns. Similarly, Matthias Weiss showed that the endoplasmic reticulum displays fractal features. 
The current understanding is that fractals are ubiquitous in cell biology, from proteins, to organelles, to whole cells.
=== In creative works ===
Since 1999, numerous scientific groups have performed fractal analysis on over 50 paintings created by Jackson Pollock by pouring paint directly onto horizontal canvasses. Recently, fractal analysis has been used to achieve a 93% success rate in distinguishing real from imitation Pollocks. Cognitive neuroscientists have shown that Pollock's fractals induce the same stress reduction in observers as computer-generated fractals and nature's fractals. Decalcomania, a technique used by artists such as Max Ernst, can produce fractal-like patterns. It involves pressing paint between two surfaces and pulling them apart. Cyberneticist Ron Eglash has suggested that fractal geometry and mathematics are prevalent in African art, games, divination, trade, and architecture. Circular houses appear in circles of circles, rectangular houses in rectangles of rectangles, and so on. Such scaling patterns can also be found in African textiles, sculpture, and even cornrow hairstyles. Hokky Situngkir also suggested similar properties in Indonesian traditional art, batik, and ornaments found in traditional houses. Eglash has also discussed the planned layout of Benin city using fractals as the basis, not only in the city itself and the villages but even in the rooms of houses. He commented that "When Europeans first came to Africa, they considered the architecture very disorganised and thus primitive. It never occurred to them that the Africans might have been using a form of mathematics that they hadn't even discovered yet." In a 1996 interview with Michael Silverblatt, David Foster Wallace explained that the structure of the first draft of Infinite Jest he gave to his editor Michael Pietsch was inspired by fractals, specifically the Sierpinski triangle (a.k.a. 
Sierpinski gasket), but that the edited novel is "more like a lopsided Sierpinsky Gasket". Some works by the Dutch artist M. C. Escher, such as Circle Limit III, contain shapes repeated to infinity that become smaller and smaller as they get near to the edges, in a pattern that would always look the same if zoomed in.
One study of the aesthetics and psychological effects of fractal-based design noted that fractal patterns, highly prevalent in nature, possess self-similar components that repeat at varying size scales, and that the perceptual experience of human-made environments can be affected by including these natural patterns. Previous work had demonstrated consistent trends in preference for, and complexity estimates of, fractal patterns, but limited information had been gathered on the impact of other visual judgments. The study examined the aesthetic and perceptual experience of fractal 'global-forest' designs already installed in human-made spaces and demonstrated how fractal pattern components are associated with positive psychological experiences that can be utilized to promote occupant well-being. These designs are composite fractal patterns consisting of individual fractal 'tree-seeds' which combine to create a 'global fractal forest'; the local 'tree-seed' patterns, the global configuration of tree-seed locations, and the overall resulting 'global-forest' patterns all have fractal qualities. The designs span multiple media yet are all intended to lower occupant stress without detracting from the function and overall design of the space. The study first established divergent relationships between various visual attributes, with pattern complexity, preference, and engagement ratings increasing with fractal complexity, compared to ratings of refreshment and relaxation, which stayed the same or decreased with complexity. It then determined that the local constituent fractal ('tree-seed') patterns contribute to the perception of the overall fractal design, and addressed how to balance aesthetic and psychological effects (such as individual experiences of perceived engagement and relaxation) in fractal design installations. The studies suggest that fractal preference is driven by a balance between increased arousal (desire for engagement and complexity) and decreased tension (desire for relaxation or refreshment); installations of composite mid-to-high-complexity 'global-forest' patterns consisting of 'tree-seed' components balance these contrasting needs and can serve as a practical implementation of biophilic patterns in human-made environments to promote occupant well-being.
=== Physiological responses ===
Humans appear to be especially well-adapted to processing fractal patterns with fractal dimension between 1.3 and 1.5; viewing such patterns tends to reduce physiological stress.
=== Applications in technology ===
== See also ==
== Notes ==
== References ==
== Further reading ==
Stanley, H. Eugene; and Ostrowsky, N. (editors); On Growth and Form: Fractal and Non-Fractal Patterns in Physics, Martinus Nijhoff Publishers, 1986. ISBN 0-89838-850-3 Barnsley, Michael F.; and Rising, Hawley; Fractals Everywhere. Boston: Academic Press Professional, 1993. ISBN 0-12-079061-0 Duarte, German A.; Fractal Narrative. About the Relationship Between Geometries and Technology and Its Impact on Narrative Spaces. Bielefeld: Transcript, 2014. ISBN 978-3-8376-2829-6 Falconer, Kenneth; Techniques in Fractal Geometry. John Wiley and Sons, 1997. ISBN 0-471-92287-0 Jürgens, Hartmut; Peitgen, Heinz-Otto; and Saupe, Dietmar; Chaos and Fractals: New Frontiers of Science. New York: Springer-Verlag, 1992. ISBN 0-387-97903-4 Mandelbrot, Benoit B.; The Fractal Geometry of Nature. New York: W. H. Freeman and Co., 1982. 
ISBN 0-7167-1186-9 Peitgen, Heinz-Otto; and Saupe, Dietmar; eds.; The Science of Fractal Images. New York: Springer-Verlag, 1988. ISBN 0-387-96608-0 Pickover, Clifford A.; ed.; Chaos and Fractals: A Computer Graphical Journey – A 10 Year Compilation of Advanced Research. Elsevier, 1998. ISBN 0-444-50002-2 Jones, Jesse; Fractals for the Macintosh, Waite Group Press, Corte Madera, CA, 1993. ISBN 1-878739-46-8. Lauwerier, Hans; Fractals: Endlessly Repeated Geometrical Figures, Translated by Sophia Gill-Hoffstadt, Princeton University Press, Princeton NJ, 1991. ISBN 0-691-08551-X, cloth. ISBN 0-691-02445-6 paperback. "This book has been written for a wide audience..." Includes sample BASIC programs in an appendix. Sprott, Julien Clinton (2003). Chaos and Time-Series Analysis. Oxford University Press. ISBN 978-0-19-850839-7. Wahl, Bernt; Van Roy, Peter; Larsen, Michael; and Kampman, Eric; Exploring Fractals on the Macintosh, Addison Wesley, 1995. ISBN 0-201-62630-6 Lesmoir-Gordon, Nigel; The Colours of Infinity: The Beauty, The Power and the Sense of Fractals. 2004. ISBN 1-904555-05-5 (The book comes with a related DVD of the Arthur C. Clarke documentary introduction to the fractal concept and the Mandelbrot set.) Liu, Huajie; Fractal Art, Changsha: Hunan Science and Technology Press, 1997, ISBN 9787535722348. Gouyet, Jean-François; Physics and Fractal Structures (Foreword by B. Mandelbrot); Masson, 1996. ISBN 2-225-85130-1, and New York: Springer-Verlag, 1996. ISBN 978-0-387-94153-0. Out-of-print. Available in PDF version at."Physics and Fractal Structures" (in French). Jfgouyet.fr. Retrieved October 17, 2010. Falconer, Kenneth (2013). Fractals, A Very Short Introduction. Oxford University Press. 
== External links ==
Fractals at the Library of Congress Web Archives (archived November 16, 2001) "Hunting the Hidden Dimension", PBS NOVA, first aired August 24, 2011 Benoit Mandelbrot: Fractals and the Art of Roughness (Archived February 17, 2014, at the Wayback Machine), TED, February 2010 Equations of self-similar fractal measure based on the fractional-order calculus (2007)
Wikipedia:Fractal analysis#0
Fractal analysis is the assessment of the fractal characteristics of data. It consists of several methods for assigning a fractal dimension and other fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including topography, natural geometric objects, ecology and aquatic sciences, sound, market fluctuations, heart rates, frequency domain in electroencephalography signals, digital images, molecular motion, and data science. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. Fractal analysis is valuable in expanding our knowledge of the structure and function of various systems, and as a potential tool to mathematically assess novel areas of study. Fractal calculus, a generalization of ordinary calculus, has also been formulated.
== Underlying principles ==
Fractals have fractional dimensions, which are a measure of complexity that indicates the degree to which the objects fill the available space. The fractal dimension measures the change in "size" of a fractal set with the changing observational scale, and is not limited to integer values. This is possible given that a smaller section of the fractal resembles the entirety, showing the same statistical properties at different scales. This characteristic is termed scale invariance, and can be further categorized as self-similarity or self-affinity, the latter scaled anisotropically (depending on the direction). Whether the view of the fractal is expanding or contracting, the structure remains the same and appears equivalently complex. Fractal analysis uses these underlying properties to help in the understanding and characterization of complex systems. 
It is also possible to expand the use of fractal analysis to systems that lack a single characteristic time scale or pattern. Further information on the origins: Fractal geometry
== Types of fractal analysis ==
There are various types of fractal analysis, including box counting, lacunarity analysis, mass methods, and multifractal analysis. A common feature of all types of fractal analysis is the need for benchmark patterns against which to assess outputs. These can be acquired with various types of fractal generating software capable of generating benchmark patterns suitable for this purpose, which generally differ from software designed to render fractal art. Other types include detrended fluctuation analysis and the Hurst absolute value method, which estimate the Hurst exponent.
== Applications ==
=== Ecology and evolution ===
Unlike theoretical fractal curves, which can be easily measured and their underlying mathematical properties calculated, natural systems are sources of heterogeneity and generate complex space-time structures that may only demonstrate partial self-similarity. Using fractal analysis, it is possible to analyze and recognize when features of complex ecological systems are altered, since fractals are able to characterize the natural complexity in such systems. Thus, fractal analysis can help to quantify patterns in nature and to identify deviations from these natural sequences. It helps to improve our overall understanding of ecosystems and to reveal some of the underlying structural mechanisms of nature. For example, it was found that the structure of an individual tree’s xylem follows the same architecture as the spatial distribution of the trees in the forest, and that the distribution of the trees in the forest shared the same underlying fractal structure as the branches, scaling identically to the point of being able to use the pattern of the trees’ branches mathematically to determine the structure of the forest stand. 
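A minimal sketch of the box-counting method mentioned above (the benchmark pattern, box sizes, and least-squares fit are illustrative assumptions, not a prescribed procedure): count the number of grid boxes N(ε) occupied by the point set at several box sizes ε, then estimate the dimension as the slope of log N(ε) against log(1/ε):

```python
import math

def box_count(points, eps):
    """Number of grid boxes of side eps occupied by at least one 2-D point."""
    return len({(int(x / eps), int(y / eps)) for x, y in points})

def box_dimension(points, sizes):
    """Least-squares slope of log N(eps) versus log(1/eps)."""
    xs = [math.log(1 / e) for e in sizes]
    ys = [math.log(box_count(points, e)) for e in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Benchmark pattern: points sampled densely along the unit segment should give
# an estimated dimension close to 1, its topological dimension.
line = [(i / 1000, 0.0) for i in range(1000)]
print(round(box_dimension(line, [0.1, 0.05, 0.02, 0.01]), 2))  # close to 1.0
```

This also illustrates why benchmark patterns matter: a plain line yields a dimension near 1, so deviations from known benchmarks, rather than the raw number alone, carry the analytical weight.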
The use of fractal analysis for understanding structures, and spatial and temporal complexity in biological systems has already been well studied and its use continues to increase in ecological research. Despite its extensive use, it still receives some criticism. === Architecture, urban design and landscape design === In his publication The Fractal Geometry of Nature, Benoit Mandelbrot suggested fractal theory could be applied to architecture. In this context, Mandelbrot was talking about the self-similar feature of fractal objects, rather than fractal analysis. In 1996, Carl Bovill applied the box counting method of fractal analysis to Architecture. Bovill’s work, using a manual version of box counting, has since been refined by others and computational approaches have been developed. Fractal analysis is one of the few quantitative analysis methods available to architects and designers to understand the visual complexity of buildings, urban areas and landscapes. Typical uses of fractal analysis of the built environment have been to understand the visual complexity of cities and skylines, the fractal dimensions of works of different architects and the landscape. Combining the fractal analysis of ecology (see above) with fractal analysis of architecture, fractal dimensions have been used to explore the possible relationship between nature and architecture. Promising results suggest further research is needed in this area. === Animal behaviour === Patterns in animal behaviour exhibit fractal properties on spatial and temporal scales. Fractal analysis helps in understanding the behaviour of animals and how they interact with their environments on multiple scales in space and time. Various animal movement signatures in their respective environments have been found to demonstrate spatially non-linear fractal patterns. 
This has generated ecological interpretations such as the Lévy Flight Foraging hypothesis, which has proven to be a more accurate description of animal movement for some species. Spatial patterns and animal behaviour sequences in fractal time have an optimal complexity range, which can be thought of as the homeostatic state on the spectrum where the complexity sequence should regularly fall. An increase or a loss in complexity, either becoming more stereotypical or conversely more random in their behaviour patterns, indicates that there has been an alteration in the functionality of the individual. Using fractal analysis, it is possible to examine the sequential complexity of animal movement and behaviour and to determine whether individuals are experiencing deviations from their optimal range, suggesting a change in condition. For example, it has been used to assess welfare of domestic hens, stress in bottlenose dolphins in response to human disturbance, and parasitic infection in Japanese macaques and sheep. The research is furthering the field of behavioural ecology by simplifying and quantifying very complex relationships. When it comes to animal welfare and conservation, fractal analysis makes it possible to identify potential sources of stress on animal behaviour, stressors that may not always be discernible through classical behaviour research. This approach is more objective than classical behaviour measurements, such as frequency-based observations that are limited by the counts of behaviours, and is able to delve into the underlying reason for the behaviour. Another important advantage of fractal analysis is the ability to monitor the health of wild and free-ranging animal populations in their natural habitats without invasive measurements. 
== Applications include == Applications of fractal analysis include: Fractal calculus == See also == Multifractal Rescaled range Analysis on fractals == References == == Further reading == Fractals and Fractal Analysis Fractal analysis Benoit – Fractal Analysis Software Archived 2008-05-17 at the Wayback Machine Fractal Analysis Methods for Human Heartbeat and Gait Dynamics Archived 2016-05-06 at the Wayback Machine
Wikipedia:Fractal antenna#0
A fractal antenna is an antenna that uses a fractal, self-similar design to maximize the effective length, or increase the perimeter (on inside sections or the outer structure), of material that can receive or transmit electromagnetic radiation within a given total surface area or volume. Such fractal antennas are also referred to as multilevel and space-filling curves, but the key aspect lies in their repetition of a motif over two or more scale sizes, or "iterations". For this reason, fractal antennas are very compact, multiband or wideband, and have useful applications in cellular telephone and microwave communications. A fractal antenna's response differs markedly from traditional antenna designs, in that it is capable of operating with good-to-excellent performance at many different frequencies simultaneously. Normally, standard antennas have to be "cut" for the frequency for which they are to be used, and thus they only work well at that frequency. In addition, the fractal nature of the antenna shrinks its size, without the use of any extra components such as inductors or capacitors. == Log-periodic antennas == Log-periodic antennas are arrays invented in 1952 and commonly seen as TV antennas. This was long before Mandelbrot coined the word fractal in 1975. Some authors (for instance Cohen) consider log-periodic antennas to be an early form of fractal antenna due to their infinite self-similarity at all scales. However, they have a finite length even in the theoretical limit with an infinite number of elements and therefore do not have a fractal dimension that exceeds their topological dimension – which is one way of defining fractals. More typically (for instance Pandey), authors treat them as a separate but related class of antenna. 
== Performance == Antenna elements (as opposed to antenna arrays, which are usually not included as fractal antennas) made from self-similar shapes were first created by Nathan Cohen then a professor at Boston University, starting in 1988. Cohen's efforts with a variety of fractal antenna designs were first published in 1995, which marked the inaugural scientific publication on fractal antennas. Many fractal element antennas use the fractal structure as a virtual combination of capacitors and inductors. This makes the antenna so that it has many different resonances, which can be chosen and adjusted by choosing the proper fractal design. This complexity arises because the current on the structure has a complex arrangement caused by the inductance and self capacitance. In general, although their effective electrical length is longer, the fractal element antennas are themselves physically smaller, again due to this reactive loading. Thus, fractal element antennas are shrunken compared to conventional designs and do not need additional components, assuming the structure happens to have the desired resonant input impedance. In general, the fractal dimension of a fractal antenna is a poor predictor of its performance and application. Not all fractal antennas work well for a given application or set of applications. Computer search methods and antenna simulations are commonly used to identify which fractal antenna designs best meet the needs of the application. Studies during the 2000s showed advantages of the fractal element technology in real-life applications, such as RFID and cell phones. Fractals have been used commercially in antennas since the 2010s. Their advantages are good multiband performance, wide bandwidth, and small area. The gain with small size results from constructive interference with multiple current maxima, afforded by the electrically long structure in a small area. Some researchers have disputed that fractal antennas have superior performance. 
S.R. Best (2003) observed "that antenna geometry alone, fractal or otherwise, does not uniquely determine the electromagnetic properties of the small antenna". Hansen & Collin (2011) reviewed many papers on fractal antennas and concluded that they offer no advantage over fat dipoles, loaded dipoles, or simple loops, and that non-fractals are always better. Balanis (2011) reported on several fractal antennas and found them equivalent in performance to the electrically small antennas they were compared to. Log periodics, a form of fractal antenna, have their electromagnetic characteristics uniquely determined by geometry, via an opening angle. == Frequency invariance and Maxwell's equations == One different and useful attribute of some fractal element antennas is their self-scaling aspect. In 1957, V.H. Rumsey presented results that angle-defined scaling was one of the underlying requirements to make antennas invariant (have same radiation properties) at a number, or range, of frequencies. Work by Y. Mushiake in Japan starting in 1948 demonstrated a similar result of self-complementary antennas being frequency independent. It was believed that antennas had to be defined by angles for this to be true, but in 1999 it was discovered that self-similarity was one of the underlying requirements to make antennas frequency and bandwidth invariant. In other words, along with origin symmetry, the underlying requirement for frequency independence is self-similarity. Angle-defined antennas are self-similar, but other self-similar antennas are frequency independent although not angle-defined. This analysis, based on Maxwell's equations, showed fractal antennas offer a closed-form and unique insight into the invariance properties of Maxwell's equations – a key aspect of electromagnetic phenomena – now known as the Hohlfeld-Cohen-Rumsey (HCR) principle. 
Mushiake's earlier work on self-complementarity was shown to be limited to impedance smoothness, as expected from Babinet's principle, but not frequency invariance. == Other uses == In addition to their use as antennas, fractals have also found application in other antenna system components, including loads, counterpoises, and ground planes. Fractal inductors and fractal tuned circuits (fractal resonators) were also discovered and invented simultaneously with fractal element antennas. An emerging example is their use in metamaterials: a recent invention demonstrated the use of close-packed fractal resonators to make the first wideband metamaterial 'invisibility cloak' at microwave frequencies. Fractal filters (a type of tuned circuit) are another example where the superiority of the fractal approach for smaller size and better rejection has been proven. As fractals can be used as counterpoises, loads, ground planes, and filters, all parts that can be integrated with antennas, they are considered parts of some antenna systems and thus are discussed in the context of fractal antennas. == See also == Waveguide (electromagnetism) == References == == External links == How to make a fractal antenna for HDTV or DTV CPW-fed H-tree fractal antenna for WLAN, WIMAX, RFID, C-band, HiperLAN, and UWB applications Video of a fractal antenna monopole using fractal metamaterials
Wikipedia:Fractal art#0
Fractal art is a form of algorithmic art created by calculating fractal objects and representing the calculation results as still digital images, animations, and media. Fractal art developed from the mid-1980s onwards. It is a genre of computer art and digital art which are part of new media art. The mathematical beauty of fractals lies at the intersection of generative art and computer art. They combine to produce a type of abstract art. Fractal art (especially in the western world) is rarely drawn or painted by hand. It is usually created indirectly with the assistance of fractal-generating software, iterating through three phases: setting parameters of appropriate fractal software; executing the possibly lengthy calculation; and evaluating the product. In some cases, other graphics programs are used to further modify the images produced. This is called post-processing. Non-fractal imagery may also be integrated into the artwork. The Julia set and Mandelbrot sets can be considered as icons of fractal art. It was assumed that fractal art could not have developed without computers because of the calculative capabilities they provide. Fractals are generated by applying iterative methods to solving non-linear equations or polynomial equations. Fractals are any of various extremely irregular curves or shapes for which any suitably chosen part is similar in shape to a given larger or smaller part when magnified or reduced to the same size. == Types == There are many different kinds of fractal images. They can be subdivided into several groups. Fractals derived from standard geometry by using iterative transformations on an initial common figure like a straight line (the Cantor dust or the von Koch curve), a triangle (the Sierpinski triangle), or a cube (the Menger sponge). The first fractal figures invented near the end of the 19th and early 20th centuries belong to this group. 
IFS (iterated function systems) Strange attractors Fractal flame L-system fractals Fractals created by the iteration of complex polynomials. Newton fractals, including Nova fractals Fractals generated over quaternions and other Cayley-Dickson algebras Fractal terrains generated by random fractal processes Mandelbulbs are a form of three-dimensional fractal. Fractal Expressionism is a term used to describe traditional visual art that incorporates fractal elements such as self-similarity. Perhaps the best example of fractal expressionism is found in Jackson Pollock's dripped patterns. They have been analysed and found to contain a fractal dimension, which has been attributed to his technique. == Techniques == Fractals of all kinds have been used as the basis for digital art and animation. High-resolution color graphics became increasingly available at scientific research labs in the mid-1980s. Scientific forms of art, including fractal art, have developed separately from mainstream culture. Starting with two-dimensional details of fractals, such as the Mandelbrot set, fractals have found artistic application in fields as varied as texture generation, plant growth simulation, and landscape generation. Fractals are sometimes combined with evolutionary algorithms, either by iteratively choosing good-looking specimens in a set of random variations of a fractal artwork and producing new variations, to avoid dealing with cumbersome or unpredictable parameters, or collectively, as in the Electric Sheep project, where people use fractal flames rendered with distributed computing as their screensaver and "rate" the flame they are viewing, influencing the server, which reduces the traits of the undesirable flames and increases those of the desirable ones to produce a computer-generated, community-created piece of art. Many fractal images are admired because of their perceived harmony. 
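Several of the families listed above are produced by the same escape-time recipe: iterate a complex polynomial and record how quickly each starting parameter diverges. A minimal sketch for the Mandelbrot iteration z → z² + c follows; the grid extent, resolution, and iteration caps are arbitrary illustrative choices, not canonical values.

```python
def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterate z -> z**2 + c from z = 0; return the step at which |z|
    first exceeds 2, or max_iter if the orbit stays bounded."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

def mandelbrot_grid(width: int = 40, height: int = 20, max_iter: int = 50):
    """Sample escape times over the rectangle [-2, 1] x [-1.2, 1.2]."""
    grid = []
    for j in range(height):
        im = 1.2 - 2.4 * j / (height - 1)
        grid.append([escape_time(complex(-2.0 + 3.0 * i / (width - 1), im), max_iter)
                     for i in range(width)])
    return grid
```

Mapping these escape times to a palette is what produces the familiar banded images; smooth-coloring and distance-estimator variants refine the same basic scheme.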
This perceived harmony is typically achieved by the patterns which emerge from the balance of order and chaos. Similar qualities have been described in Chinese painting and miniature trees and rockeries. == Landscapes == The first fractal image that was intended to be a work of art was probably the famous one on the cover of Scientific American, August 1985. This image showed a landscape formed from the potential function on the domain outside the (usual) Mandelbrot set. However, as the potential function grows fast near the boundary of the Mandelbrot set, it was necessary for the creator to let the landscape grow downwards, so that it looked as if the Mandelbrot set was a plateau atop a mountain with steep sides. The same technique was used a year later in some images in The Beauty of Fractals by Heinz-Otto Peitgen and Michael M. Richter. They provide a formula to estimate the distance from a point outside the Mandelbrot set to the boundary of the Mandelbrot set (and a similar formula for the Julia sets). Landscapes can, for example, be formed from the distance function for a family of iterations of the form z 2 + a z 4 + c {\displaystyle z^{2}+az^{4}+c} . == Artists == Notable fractal artists include Desmond Paul Henry, Hamid Naderi Yeganeh, and musician Bruno Degazio. British artists include William Latham, who has used fractal geometry and other computer graphics techniques in his works, and Vienna Forrester, who creates flame fractal art using data extracted from her photographs. Greg Sams has used fractal designs in postcards, T-shirts, and textiles. American Vicky Brago-Mitchell has created fractal art which has appeared in exhibitions and on magazine covers. Scott Draves is credited with inventing flame fractals. Carlos Ginzburg has explored fractal art and developed a concept called "homo fractalus", based on the idea that the human is the ultimate fractal. Merrin Parkers from New Zealand specialises in fractal art. 
Kerry Mitchell wrote a "Fractal Art Manifesto" setting out what fractal art is and what it is not. In Italy, the artist Giorgio Orefice wrote the "Fractalism" manifesto, founding a Fractalism cultural movement in 1999. Fractal Art is a subclass of two-dimensional visual art, and is in many respects similar to photography, another art form that was greeted by skepticism upon its arrival. Fractal images typically are manifested as prints, bringing fractal artists into the company of painters, photographers, and printmakers. Fractals exist natively as electronic images, a format that traditional visual artists are quickly embracing, bringing them into Fractal Art's digital realm. Generating fractals can be an artistic endeavor, a mathematical pursuit, or just a soothing diversion. However, Fractal Art is clearly distinguished from other digital activities by what it is, and by what it is not. According to Mitchell, fractal art is not computerized art, lacking in rules, unpredictable, nor something that any person with access to a computer can do well. Instead, fractal art is expressive, creative, and requires input, effort, and intelligence. Most importantly, "fractal art is simply that which is created by Fractal Artists: ART." American artist Hal Tenny was hired to design environments in the 2017 film Guardians of the Galaxy Vol. 2. There has also been a surge in fractal art distributed via non-fungible tokens, such as work listed by Fractal_Dimensions, spectral.haus, and NetMetropolis. == Exhibits == Fractal art has been exhibited at major international art galleries. One of the first exhibitions of fractal art was "Map Art", a travelling exhibition of works from researchers at the University of Bremen. Mathematicians Heinz-Otto Peitgen and Michael M. Richter discovered that the public not only found the images aesthetically pleasing but that they also wanted to understand the scientific background to the images. 
In 1989, fractals were part of the subject matter for an art show called Strange Attractors: Signs of Chaos at the New Museum of Contemporary Art. The show consisted of photographs, installations and sculptures designed to provide greater scientific discourse to the field which had already captured the public's attention through colourful and intricate computer imagery. In 2014, emerging British fractal artist Vienna Forrester created an exhibition held at the I-node of the Planetary Collegium, Kefalonia, entitled "IO. Fragmented Myths and Memories: A Fractal Exploration of Kefalonia", part of the 2013–14 international arts festival "Stone Kingdom Kefalonia" commemorating the devastating 1953 Ionian earthquake. Her works were created by using geographical coordinates and photographs from parts of the island which still bear the scars. == Artworks == "Global Forest" artwork is based on a study highlighting the aesthetic and physiological impacts of fractal patterns. Fractals, patterns found universally in nature, repeat self-similarly across scales, with the complexity and aesthetic perception determined by their recursion and dimension rate. Notably, these patterns are featured in art across various cultures, including Jackson Pollock's paintings, eliciting strong aesthetic reactions. Moreover, incorporating fractals in architectural designs can mitigate visual strain and discomfort caused by Euclidean spaces and even reduce stress, resonating with the biophilic idea of humans' innate connection to nature. The ScienceDesignLab collaborated with the Mohawk Group to integrate these findings, producing award-winning "Relaxing Floors" that use fractal patterns, hypothesizing their therapeutic effects stem from nature's soothing visuals. 
== See also == Batik Fractal curve Greeble List of mathematical art software Mathematics and architecture Persian carpet Psychedelic art Systems art Infinite compositions of analytic functions == References == == Further reading == Duarte, German A. (2014). Fractal Narrative. About the Relationship Between Geometries and Technology and Its Impact on Narrative Spaces. Transcript-Verlag. ISBN 9783837628296. Pickover, Clifford (1990). Computers, Pattern, Chaos and Beauty. St. Martin's Press. ISBN 0-486-41709-3. Schroeder, Manfred (1991). Fractals, Chaos, Power Laws. Freeman. ISBN 0-7167-2357-3. == External links == Art and the Mandelbrot set (in commons.Wikimedia) Fractals in Wikimedia
Wikipedia:Fractal canopy#0
In geometry, a fractal canopy, a type of fractal tree, is one of the easiest-to-create types of fractals. Each canopy is created by splitting a line segment into two smaller segments at the end (symmetric binary tree), and then splitting the two smaller segments as well, and so on, infinitely. Canopies are distinguished by the angle between concurrent adjacent segments and ratio between lengths of successive segments. A fractal canopy must have the following three properties: The angle between any two neighboring line segments is the same throughout the fractal. The ratio of lengths of any two consecutive line segments is constant. Points all the way at the end of the smallest line segments are interconnected, which is to say the entire figure is a connected graph. The pulmonary system used by humans to breathe resembles a fractal canopy, as do trees, blood vessels, viscous fingering, electrical breakdown, and crystals with appropriately adjusted growth velocity from seed. == H tree == == See also == Brownian tree Dendrite (crystal) Lichtenberg figure H tree == References == == External links == Fractal Canopies at the Wayback Machine (archived 28 January 2007) from a student-generated Oracle Thinkquest website
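The construction described above (split each segment into two children at a fixed angle and a fixed length ratio) can be sketched recursively. The angle of 25°, ratio of 0.7, and recursion depth of 6 below are arbitrary illustrative parameters; any angle and any ratio below 1 give a valid canopy.

```python
import math

def canopy(x, y, heading, length, angle, ratio, depth, segments):
    """Recursively build a symmetric binary tree: each segment spawns
    two children rotated by +/-angle and scaled by ratio."""
    if depth == 0:
        return
    x2 = x + length * math.cos(heading)
    y2 = y + length * math.sin(heading)
    segments.append((x, y, x2, y2))
    canopy(x2, y2, heading + angle, length * ratio, angle, ratio, depth - 1, segments)
    canopy(x2, y2, heading - angle, length * ratio, angle, ratio, depth - 1, segments)

segs = []
canopy(0.0, 0.0, math.pi / 2, 1.0, math.radians(25), 0.7, 6, segs)
# A depth-d truncation contains 2**d - 1 segments (1 + 2 + 4 + ...).
```

Feeding the resulting segment list to any line-drawing routine renders the canopy; the true fractal is the limit of this process as the depth grows without bound.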
Wikipedia:Fractal catalytic model#0
A fractal catalytic model is a mathematical representation of chemical catalysis in an environment with fractal properties. == References ==
Wikipedia:Fractal compression#0
Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes" which are used to recreate the encoded image. == Iterated function systems == Fractal image representation may be described mathematically as an iterated function system (IFS). === For binary images === We begin with the representation of a binary image, where the image may be thought of as a subset of R 2 {\displaystyle \mathbb {R} ^{2}} . An IFS is a set of contraction mappings ƒ1,...,ƒN, f i : R 2 → R 2 . {\displaystyle f_{i}:\mathbb {R} ^{2}\to \mathbb {R} ^{2}.} According to these mapping functions, the IFS describes a two-dimensional set S as the fixed point of the Hutchinson operator H ( A ) = ⋃ i = 1 N f i ( A ) , A ⊂ R 2 . {\displaystyle H(A)=\bigcup _{i=1}^{N}f_{i}(A),\quad A\subset \mathbb {R} ^{2}.} That is, H is an operator mapping sets to sets, and S is the unique set satisfying H(S) = S. The idea is to construct the IFS such that this set S is the input binary image. The set S can be recovered from the IFS by fixed point iteration: for any nonempty compact initial set A0, the iteration Ak+1 = H(Ak) converges to S. The set S is self-similar because H(S) = S implies that S is a union of mapped copies of itself: S = f 1 ( S ) ∪ f 2 ( S ) ∪ ⋯ ∪ f N ( S ) {\displaystyle S=f_{1}(S)\cup f_{2}(S)\cup \cdots \cup f_{N}(S)} So we see the IFS is a fractal representation of S. === Extension to grayscale === IFS representation can be extended to a grayscale image by considering the image's graph as a subset of R 3 {\displaystyle \mathbb {R} ^{3}} . For a grayscale image u(x,y), consider the set S = {(x,y,u(x,y))}. 
Then similar to the binary case, S is described by an IFS using a set of contraction mappings ƒ1,...,ƒN, but in R 3 {\displaystyle \mathbb {R} ^{3}} , f i : R 3 → R 3 . {\displaystyle f_{i}:\mathbb {R} ^{3}\to \mathbb {R} ^{3}.} === Encoding === A challenging problem of ongoing research in fractal image representation is how to choose the ƒ1,...,ƒN such that its fixed point approximates the input image, and how to do this efficiently. A simple approach for doing so is the following partitioned iterated function system (PIFS): Partition the image domain into range blocks Ri of size s×s. For each Ri, search the image to find a block Di of size 2s×2s that is very similar to Ri. Select the mapping functions such that H(Di) = Ri for each i. In the second step, it is important to find a similar block so that the IFS accurately represents the input image, so a sufficient number of candidate blocks for Di need to be considered. On the other hand, a large search considering many blocks is computationally costly. This bottleneck of searching for similar blocks is why PIFS fractal encoding is much slower than for example DCT and wavelet based image representation. The initial square partitioning and brute-force search algorithm presented by Jacquin provides a starting point for further research and extensions in many possible directions—different ways of partitioning the image into range blocks of various sizes and shapes; fast techniques for quickly finding a close-enough matching domain block for each range block rather than brute-force searching, such as fast motion estimation algorithms; different ways of encoding the mapping from the domain block to the range block; etc. Other researchers attempt to find algorithms to automatically encode an arbitrary image as RIFS (recurrent iterated function systems) or global IFS, rather than PIFS; and algorithms for fractal video compression including motion compensation and three dimensional iterated function systems. 
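The PIFS steps above can be sketched as a toy brute-force encoder. The block size, the 2×2 averaging used to shrink domain blocks to range-block size, and the least-squares grayscale fit q ≈ a·d + b are illustrative choices for exposition, not the specifics of Jacquin's published scheme.

```python
def block_at(img, r, c, size):
    """Extract a size x size sub-block with top-left corner (r, c)."""
    return [row[c:c + size] for row in img[r:r + size]]

def downsample(block):
    """Average 2x2 cells, halving a 2s x 2s block to s x s."""
    n = len(block) // 2
    return [[(block[2*i][2*j] + block[2*i][2*j+1] +
              block[2*i+1][2*j] + block[2*i+1][2*j+1]) / 4.0
             for j in range(n)] for i in range(n)]

def match_error(rng, dom):
    """Least-squares error of fitting rng ~ a*dom + b (scalar a, b)."""
    xs = [v for row in dom for v in row]
    ys = [v for row in rng for v in row]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    denom = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / denom if denom else 0.0
    b = (sy - a * sx) / n
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

def best_domain(img, rng, s):
    """Brute-force search for the domain block minimizing the fit error."""
    best = None
    for r in range(0, len(img) - 2 * s + 1):
        for c in range(0, len(img[0]) - 2 * s + 1):
            dom = downsample(block_at(img, r, c, 2 * s))
            err = match_error(rng, dom)
            if best is None or err < best[0]:
                best = (err, r, c)
    return best
```

Because every range block is compared against every candidate domain block, the cost grows rapidly with image size, which is exactly the encoding bottleneck discussed above; practical encoders prune this search.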
Fractal image compression has many similarities to vector quantization image compression. == Features == With fractal compression, encoding is extremely computationally expensive because of the search used to find the self-similarities. Decoding, however, is quite fast. While this asymmetry has so far made it impractical for real time applications, when video is archived for distribution from disk storage or file downloads fractal compression becomes more competitive. At common compression ratios, up to about 50:1, fractal compression provides similar results to DCT-based algorithms such as JPEG. At high compression ratios fractal compression may offer superior quality. For satellite imagery, ratios of over 170:1 have been achieved with acceptable results. Fractal video compression ratios of 25:1–244:1 have been achieved in reasonable compression times (2.4 to 66 sec/frame). Compression efficiency increases with higher image complexity and color depth, compared to simple grayscale images. === Resolution independence and fractal scaling === An inherent feature of fractal compression is that images become resolution independent after being converted to fractal code. This is because the iterated function systems in the compressed file scale indefinitely. This indefinite scaling property of a fractal is known as "fractal scaling". === Fractal interpolation === The resolution independence of a fractal-encoded image can be used to increase the display resolution of an image. This process is also known as "fractal interpolation". In fractal interpolation, an image is encoded into fractal codes via fractal compression, and subsequently decompressed at a higher resolution. The result is an up-sampled image in which iterated function systems have been used as the interpolant. Fractal interpolation maintains geometric detail very well compared to traditional interpolation methods like bilinear interpolation and bicubic interpolation. 
However, since the interpolation cannot reverse Shannon entropy, it ends up sharpening the image by adding random rather than meaningful detail. One cannot, for example, enlarge an image of a crowd where each person's face is one or two pixels and hope to identify them. == History == Michael Barnsley led the development of fractal compression from 1985 at the Georgia Institute of Technology (where both Barnsley and Sloan were professors in the mathematics department). The work was sponsored by DARPA and the Georgia Tech Research Corporation. The project resulted in several patents from 1987. Barnsley's graduate student Arnaud Jacquin implemented the first automatic algorithm in software in 1992. All methods are based on the fractal transform using iterated function systems. Michael Barnsley and Alan Sloan formed Iterated Systems Inc. in 1987 which was granted over 20 additional patents related to fractal compression. A major breakthrough for Iterated Systems Inc. was the automatic fractal transform process which eliminated the need for human intervention during compression as was the case in early experimentation with fractal compression technology. In 1992, Iterated Systems Inc. received a US$2.1 million government grant to develop a prototype digital image storage and decompression chip using fractal transform image compression technology. Fractal image compression has been used in a number of commercial applications: onOne Software, developed under license from Iterated Systems Inc., Genuine Fractals 5 which is a Photoshop plugin capable of saving files in compressed FIF (Fractal Image Format). To date the most successful use of still fractal image compression is by Microsoft in its Encarta multimedia encyclopedia, also under license. Iterated Systems Inc. supplied a shareware encoder (Fractal Imager), a stand-alone decoder, a Netscape plug-in decoder and a development package for use under Windows. 
The redistribution of the "decompressor DLL" provided by the ColorBox III SDK was governed by restrictive per-disk or year-by-year licensing regimes for proprietary software vendors and by a discretionary scheme that entailed the promotion of the Iterated Systems products for certain classes of other users. ClearVideo – also known as RealVideo (Fractal) – and SoftVideo were early fractal video compression products. ClearFusion was Iterated's freely distributed streaming video plugin for web browsers. In 1994 SoftVideo was licensed to Spectrum Holobyte for use in its CD-ROM games including Falcon Gold and Star Trek: The Next Generation A Final Unity. In 1996, Iterated Systems Inc. announced an alliance with the Mitsubishi Corporation to market ClearVideo to their Japanese customers. The original ClearVideo 1.2 decoder driver is still supported by Microsoft in Windows Media Player although the encoder is no longer supported. Two firms, Total Multimedia Inc. and Dimension, both claim to own or have the exclusive licence to Iterated's video technology, but neither has yet released a working product. The technology basis appears to be Dimension's U.S. patents 8639053 and 8351509, which have been considerably analyzed. In summary, it is a simple quadtree block-copying system with neither the bandwidth efficiency nor PSNR quality of traditional DCT-based codecs. In January 2016, TMMI announced that it was abandoning fractal-based technology altogether. Research papers between 1997 and 2007 discussed possible solutions to improve fractal algorithms and encoding hardware. == Implementations == A library called Fiasco was created by Ullrich Hafner. In 2001, Fiasco was covered in the Linux Journal. According to the 2000-04 Fiasco manual, Fiasco can be used for video compression. The Netpbm library includes the Fiasco library. Femtosoft developed an implementation of fractal image compression in Object Pascal and Java. 
== See also == Iterated function system Image compression Wavelet == Notes == == External links == Pulcini and Verrando's Compressor Keith Howell's 1993 M.Sc. dissertation Fractal Image Compression for Spaceborne Transputers My Main Squeeze: Fractal Compression, Nov 1993, Wired. Fractal Basics description at FileFormat.Info Superfractals website devoted to fractals by the inventor of fractal compression
Wikipedia:Fractal cosmology#0
In physical cosmology, fractal cosmology is a set of minority cosmological theories which state that the distribution of matter in the Universe, or the structure of the universe itself, is a fractal across a wide range of scales (see also: multifractal system). More generally, it relates to the usage or appearance of fractals in the study of the universe and matter. A central issue in this field is the fractal dimension of the universe or of matter distribution within it, when measured at very large or very small scales. == Fractals in observational cosmology == The first attempt to model the distribution of galaxies with a fractal pattern was made by Luciano Pietronero and his team in 1987, and a more detailed view of the universe's large-scale structure emerged over the following decade, as the number of cataloged galaxies grew larger. Pietronero argues that the universe shows a definite fractal aspect over a fairly wide range of scale, with a fractal dimension of about 2. The fractal dimension of a homogeneous 3D object would be 3, and 2 for a homogeneous surface, whilst the fractal dimension for a fractal surface is between 2 and 3. The universe has been observed to be homogeneous and isotropic (i.e. is smoothly distributed) at very large scales, as is expected in a standard Big Bang or Friedmann-Lemaître-Robertson-Walker cosmology, and in most interpretations of the Lambda-Cold Dark Matter model. The scientific consensus interpretation is that the Sloan Digital Sky Survey (SDSS) suggests that things do indeed smooth out above 100 Megaparsecs. One study of the SDSS data in 2004 found "The power spectrum is not well-characterized by a single power law but unambiguously shows curvature ... thereby driving yet another nail into the coffin of the fractal universe hypothesis and any other models predicting a power-law power spectrum". 
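As a rough illustration of what such a fractal-dimension measurement means (the survey analyses cited here use more sophisticated correlation-dimension and power-spectrum statistics), a box-counting estimate on a synthetic homogeneous point cloud can be sketched as follows; the point count and box sizes are arbitrary.

```python
import math, random

def box_count(points, eps):
    """Number of eps-sided boxes containing at least one point."""
    return len({tuple(int(c // eps) for c in p) for p in points})

def dimension_estimate(points, eps_coarse, eps_fine):
    """Slope of log N(eps) against log(1/eps) between two scales,
    since N(eps) ~ eps**(-D) for a set of dimension D."""
    n1 = box_count(points, eps_coarse)
    n2 = box_count(points, eps_fine)
    return (math.log(n2) - math.log(n1)) / (math.log(1 / eps_fine) - math.log(1 / eps_coarse))

random.seed(0)
uniform = [(random.random(), random.random(), random.random()) for _ in range(20000)]
d = dimension_estimate(uniform, 0.25, 0.05)
# A homogeneous 3-D cloud yields an estimate near D = 3;
# a genuinely fractal distribution would give a smaller slope.
```

In the cosmological debate above, the question is precisely whether galaxy catalogues give a slope near 3 (homogeneity) or nearer 2 (Pietronero's claim) over the scales probed.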
Another analysis of luminous red galaxies (LRGs) in the SDSS data calculated the fractal dimension of galaxy distribution (on scales from 70 to 100 Mpc/h) to be 3, consistent with homogeneity, while finding a fractal dimension of 2 "out to roughly 20 h−1 Mpc". In 2012, Scrimgeour et al. definitively showed that the large-scale structure of galaxies was homogeneous beyond a scale of around 70 Mpc/h. == Fractals in theoretical cosmology == In the realm of theory, the first appearance of fractals in cosmology was likely with Andrei Linde's "Eternally Existing Self-Reproducing Chaotic Inflationary Universe" theory (see chaotic inflation theory) in 1986. In this theory, the evolution of a scalar field creates peaks that become nucleation points that cause inflating patches of space to develop into "bubble universes," making the universe fractal on the very largest scales. Alan Guth's 2007 paper on "Eternal Inflation and its implications" shows that this variety of inflationary universe theory is still being seriously considered today. Inflation, in some form or another, is widely considered to be our best available cosmological model. Since 1986, quite a large number of different cosmological theories exhibiting fractal properties have been proposed. While Linde's theory shows fractality at scales likely larger than the observable universe, theories like causal dynamical triangulation and the asymptotic safety approach to quantum gravity are fractal at the opposite extreme, in the realm of the ultra-small near the Planck scale. These recent theories of quantum gravity describe a fractal structure for spacetime itself, and suggest that the dimensionality of space evolves with time. Specifically, they suggest that reality is 2D at the Planck scale, and that spacetime gradually becomes 4D at larger scales. French mathematician Alain Connes has been working for a number of years to reconcile general relativity with quantum mechanics using noncommutative geometry. 
Fractality also arises in this approach to quantum gravity. An article by Alexander Hellemans in the August 2006 issue of Scientific American quotes Connes as saying that the next important step toward this goal is to "try to understand how space with fractional dimensions couples with gravitation." The work of Connes and physicist Carlo Rovelli suggests that time is an emergent property or arises naturally in this formulation, whereas in causal dynamical triangulation choosing those configurations where adjacent building blocks share the same direction in time is an essential part of the "recipe." Both approaches suggest that the fabric of space itself is fractal, however. == See also == Invariant set postulate Large-scale structure of the Universe Scale invariance Shape of the universe == Notes == == References == Rassem, M. and Ahmed E., "On Fractal Cosmology", Astro. Phys. Lett. Commun. (1996), 35, 311.
Wikipedia:Fractal derivative#0
In applied mathematics and mathematical analysis, the fractal derivative or Hausdorff derivative is a non-Newtonian generalization of the derivative dealing with the measurement of fractals, defined in fractal geometry. Fractal derivatives were created for the study of anomalous diffusion, in which traditional approaches fail to account for the fractal nature of the media. A fractal measure t is scaled according to t^α. Such a derivative is local, in contrast to the similarly applied fractional derivative. Fractal calculus is formulated as a generalization of standard calculus. == Physical background == Porous media, aquifers, turbulence, and other media usually exhibit fractal properties. Classical diffusion or dispersion laws based on random walks in free space (essentially the same result variously known as Fick's laws of diffusion, Darcy's law, and Fourier's law) are not applicable to fractal media. To address this, concepts such as distance and velocity must be redefined for fractal media; in particular, scales for space and time are transformed according to (x^β, t^α). Elementary physical concepts such as velocity are redefined as follows for fractal spacetime (x^β, t^α): v ′ = d x ′ d t ′ = d x β d t α , α , β > 0 {\displaystyle v'={\frac {dx'}{dt'}}={\frac {dx^{\beta }}{dt^{\alpha }}}\,,\quad \alpha ,\beta >0} , where S^{α,β} represents the fractal spacetime with scaling indices α and β. The traditional definition of velocity makes no sense in non-differentiable fractal spacetime.
== Definition == Based on the above discussion, the concept of the fractal derivative of a function f(t) with respect to a fractal measure t has been introduced as follows: ∂ f ( t ) ∂ t α = lim t 1 → t f ( t 1 ) − f ( t ) t 1 α − t α , α > 0 {\displaystyle {\frac {\partial f(t)}{\partial t^{\alpha }}}=\lim _{t_{1}\rightarrow t}{\frac {f(t_{1})-f(t)}{t_{1}^{\alpha }-t^{\alpha }}}\,,\quad \alpha >0} , A more general definition is given by ∂ β f ( t ) ∂ t α = lim t 1 → t f β ( t 1 ) − f β ( t ) t 1 α − t α , α > 0 , β > 0 {\displaystyle {\frac {\partial ^{\beta }f(t)}{\partial t^{\alpha }}}=\lim _{t_{1}\rightarrow t}{\frac {f^{\beta }(t_{1})-f^{\beta }(t)}{t_{1}^{\alpha }-t^{\alpha }}}\,,\quad \alpha >0,\beta >0} . For a function y(t) on an F α {\displaystyle F^{\alpha }} -perfect fractal set F, the fractal derivative or F α {\displaystyle F^{\alpha }} -derivative of y(t) at t is defined by D F α y ( t ) = { F − lim x → t y ( x ) − y ( t ) S F α ( x ) − S F α ( t ) , if t ∈ F ; 0 , otherwise . {\displaystyle D_{F}^{\alpha }y(t)=\left\{{\begin{array}{ll}{\underset {x\rightarrow t}{F_{-}\lim }}~{\frac {y(x)-y(t)}{S_{F}^{\alpha }(x)-S_{F}^{\alpha }(t)}},&{\text{if}}~t\in F;\\0,&{\text{otherwise}}.\end{array}}\right.} . === Motivation === The derivatives of a function f can be defined in terms of the coefficients ak in the Taylor series expansion: f ( x ) = ∑ k = 0 ∞ a k ⋅ ( x − x 0 ) k = ∑ k = 0 ∞ 1 k !
d k f d x k ( x 0 ) ⋅ ( x − x 0 ) k = f ( x 0 ) + f ′ ( x 0 ) ⋅ ( x − x 0 ) + o ( x − x 0 ) {\displaystyle f(x)=\sum _{k=0}^{\infty }a_{k}\cdot (x-x_{0})^{k}=\sum _{k=0}^{\infty }{1 \over k!}{d^{k}f \over dx^{k}}(x_{0})\cdot (x-x_{0})^{k}=f(x_{0})+f'(x_{0})\cdot (x-x_{0})+o(x-x_{0})} From this approach one can directly obtain: f ′ ( x 0 ) = f ( x ) − f ( x 0 ) − o ( x − x 0 ) x − x 0 = lim x → x 0 f ( x ) − f ( x 0 ) x − x 0 {\displaystyle f'(x_{0})={f(x)-f(x_{0})-o(x-x_{0}) \over x-x_{0}}=\lim _{x\to x_{0}}{f(x)-f(x_{0}) \over x-x_{0}}} This can be generalized by approximating f with functions (x^α − x0^α)^k: f ( x ) = ∑ k = 0 ∞ b k ⋅ ( x α − x 0 α ) k = f ( x 0 ) + b 1 ⋅ ( x α − x 0 α ) + o ( x α − x 0 α ) {\displaystyle f(x)=\sum _{k=0}^{\infty }b_{k}\cdot (x^{\alpha }-x_{0}^{\alpha })^{k}=f(x_{0})+b_{1}\cdot (x^{\alpha }-x_{0}^{\alpha })+o(x^{\alpha }-x_{0}^{\alpha })} Note that the lowest order coefficient still has to be b0 = f(x0), since it is still the constant approximation of the function f at x0. Again one can directly obtain: b 1 = lim x → x 0 f ( x ) − f ( x 0 ) x α − x 0 α = d e f d f d x α ( x 0 ) {\displaystyle b_{1}=\lim _{x\to x_{0}}{f(x)-f(x_{0}) \over x^{\alpha }-x_{0}^{\alpha }}{\overset {\underset {\mathrm {def} }{}}{=}}{df \over dx^{\alpha }}(x_{0})} The fractal Maclaurin series of f(t) with fractal support F is as follows: f ( t ) = ∑ m = 0 ∞ ( D F α ) m f ( t ) | t = 0 m ! ( S F α ( t ) ) m {\displaystyle f(t)=\sum _{m=0}^{\infty }{\frac {(D_{F}^{\alpha })^{m}f(t)|_{t=0}}{m!}}(S_{F}^{\alpha }(t))^{m}} == Properties == === Expansion coefficients === Just like in the Taylor series expansion, the coefficients bk can be expressed in terms of the fractal derivatives of order k of f: b k = 1 k !
( d d x α ) k f ( x = x 0 ) {\displaystyle b_{k}={1 \over k!}{\biggl (}{d \over dx^{\alpha }}{\biggr )}^{k}f(x=x_{0})} Proof idea: Assuming ( d d x α ) k f ( x = x 0 ) {\textstyle ({d \over dx^{\alpha }})^{k}f(x=x_{0})} exists, bk can be written as b k = a k ⋅ ( d d x α ) k f ( x = x 0 ) {\textstyle b_{k}=a_{k}\cdot ({d \over dx^{\alpha }})^{k}f(x=x_{0})} One can now use f ( x ) = ( x α − x 0 α ) n ⇒ ( d d x α ) k f ( x = x 0 ) = n ! δ n k {\textstyle f(x)=(x^{\alpha }-x_{0}^{\alpha })^{n}\Rightarrow ({d \over dx^{\alpha }})^{k}f(x=x_{0})=n!\delta _{n}^{k}} and since b n = ! 1 ⇒ a n = 1 n ! {\textstyle b_{n}{\overset {\underset {\mathrm {!} }{}}{=}}1\Rightarrow a_{n}={1 \over n!}} === Chain rule === If for a given function f both the derivative Df and the fractal derivative Dαf exist, one can find an analog to the chain rule: d f d x α = d f d x d x d x α = 1 α x 1 − α d f d x {\displaystyle {df \over dx^{\alpha }}={df \over dx}{dx \over dx^{\alpha }}={1 \over \alpha }x^{1-\alpha }{df \over dx}} The last step is motivated by the implicit function theorem which, under appropriate conditions, gives us d x d x α = ( d x α d x ) − 1 {\displaystyle {\frac {dx}{dx^{\alpha }}}=({\frac {dx^{\alpha }}{dx}})^{-1}} Similarly for the more general definition: d β f d α x = d ( f β ) d α x = 1 α x 1 − α β f β − 1 ( x ) f ′ ( x ) {\displaystyle {d^{\beta }f \over d^{\alpha }x}={d(f^{\beta }) \over d^{\alpha }x}={1 \over \alpha }x^{1-\alpha }\beta f^{\beta -1}(x)f'(x)} == Application in anomalous diffusion == As an alternative modeling approach to the classical Fick's second law, the fractal derivative is used to derive a linear anomalous transport-diffusion equation underlying anomalous diffusion process, d u ( x , t ) d t α = D ∂ ∂ x β ( ∂ u ( x , t ) ∂ x β ) , ( 1 ) {\displaystyle {\frac {du(x,t)}{dt^{\alpha }}}=D{\frac {\partial }{\partial x^{\beta }}}\left({\frac {\partial u(x,t)}{\partial x^{\beta }}}\right),\quad (1)} u ( x , 0 ) = δ ( x ) {\displaystyle u(x,0)=\delta (x)} 
where 0 < α < 2, 0 < β < 1, x ∈ R {\displaystyle x\in \mathbb {R} } , and δ(x) is the Dirac delta function. To obtain the fundamental solution, we apply the transformation of variables t ′ = t α , x ′ = x β . {\displaystyle t'=t^{\alpha }\,,\quad x'=x^{\beta }.} Equation (1) then becomes a normal diffusion equation, and its solution has the stretched Gaussian kernel: u ( x , t ) = 1 2 π t α e − x 2 β 4 t α {\displaystyle u(x,t)={\frac {1}{2{\sqrt {\pi t^{\alpha }}}}}e^{-{\frac {x^{2\beta }}{4t^{\alpha }}}}} The mean squared displacement of the above fractal derivative diffusion equation has the asymptote: ⟨ x 2 ( t ) ⟩ ∝ t ( 3 α − α β ) / 2 β . {\displaystyle \left\langle x^{2}(t)\right\rangle \propto t^{(3\alpha -\alpha \beta )/2\beta }.} == Fractal-fractional calculus == The fractal derivative is connected to the classical derivative if the first derivative exists. In this case, ∂ f ( t ) ∂ t α = lim t 1 → t f ( t 1 ) − f ( t ) t 1 α − t α = d f ( t ) d t 1 α t α − 1 , α > 0 {\displaystyle {\frac {\partial f(t)}{\partial t^{\alpha }}}=\lim _{t_{1}\rightarrow t}{\frac {f(t_{1})-f(t)}{t_{1}^{\alpha }-t^{\alpha }}}\ ={\frac {df(t)}{dt}}{\frac {1}{\alpha t^{\alpha -1}}},\quad \alpha >0} . However, since fractional derivatives are defined through integrals, they are differentiable; this motivated the following fractal–fractional differential operators, introduced and applied recently by Abdon Atangana.
Suppose that y(t) is continuous and fractal-differentiable on (a, b) with order β. Then several definitions of a fractal–fractional derivative of y(t) of order α hold in the Riemann–Liouville sense: Having power law type kernel: F F P D 0 , t α , β ( y ( t ) ) = 1 Γ ( m − α ) d d t β ∫ 0 t ( t − s ) m − α − 1 y ( s ) d s {\displaystyle ^{FFP}D_{0,t}^{\alpha ,\beta }{\Big (}y(t){\Big )}={\dfrac {1}{\Gamma (m-\alpha )}}{\dfrac {d}{dt^{\beta }}}\int _{0}^{t}(t-s)^{m-\alpha -1}y(s)ds} Having exponentially decaying type kernel: F F E D 0 , t α , β ( y ( t ) ) = M ( α ) 1 − α d d t β ∫ 0 t exp ⁡ ( − α 1 − α ( t − s ) ) y ( s ) d s {\displaystyle ^{FFE}D_{0,t}^{\alpha ,\beta }{\Big (}y(t){\Big )}={\dfrac {M(\alpha )}{1-\alpha }}{\dfrac {d}{dt^{\beta }}}\int _{0}^{t}\exp {\Big (}-{\dfrac {\alpha }{1-\alpha }}(t-s){\Big )}y(s)ds} , Having generalized Mittag-Leffler type kernel: a F F M D t α f ( t ) = A B ( α ) 1 − α d d t β ∫ a t f ( τ ) E α ( − α ( t − τ ) α 1 − α ) d τ . {\displaystyle {}_{a}^{FFM}D_{t}^{\alpha }f(t)={\frac {AB(\alpha )}{1-\alpha }}{\frac {d}{dt^{\beta }}}\int _{a}^{t}f(\tau )E_{\alpha }\left(-\alpha {\frac {\left(t-\tau \right)^{\alpha }}{1-\alpha }}\right)\,d\tau \,.} The above differential operators each have an associated fractal-fractional integral operator, as follows: Power law type kernel: F F P J 0 , t α , β ( y ( t ) ) = β Γ ( α ) ∫ 0 t ( t − s ) α − 1 s β − 1 y ( s ) d s {\displaystyle ^{FFP}J_{0,t}^{\alpha ,\beta }{\Big (}y(t){\Big )}={\dfrac {\beta }{\Gamma (\alpha )}}\int _{0}^{t}(t-s)^{\alpha -1}s^{\beta -1}y(s)ds} Exponentially decaying type kernel: F F E J 0 , t α , β ( y ( t ) ) = α β M ( α ) ∫ 0 t s β − 1 y ( s ) d s + β ( 1 − α ) t β − 1 y ( t ) M ( α ) {\displaystyle ^{FFE}J_{0,t}^{\alpha ,\beta }{\Big (}y(t){\Big )}={\dfrac {\alpha \beta }{M(\alpha )}}\int _{0}^{t}s^{\beta -1}y(s)ds+{\dfrac {\beta (1-\alpha )t^{\beta -1}y(t)}{M(\alpha )}}} .
Generalized Mittag-Leffler type kernel: F F M J 0 , t α , β ( y ( t ) ) = α β A B ( α ) ∫ 0 t s β − 1 y ( s ) ( t − s ) α − 1 d s + β ( 1 − α ) t β − 1 y ( t ) A B ( α ) {\displaystyle ^{FFM}J_{0,t}^{\alpha ,\beta }{\Big (}y(t){\Big )}={\dfrac {\alpha \beta }{AB(\alpha )}}\int _{0}^{t}s^{\beta -1}y(s)(t-s)^{\alpha -1}ds+{\dfrac {\beta (1-\alpha )t^{\beta -1}y(t)}{AB(\alpha )}}} . FFM refers to fractal-fractional with the generalized Mittag-Leffler kernel. == Fractal non-local calculus == Fractal analogue of the right-sided Riemann-Liouville fractional integral of order β ∈ R {\displaystyle \beta \in \mathbb {R} } of f is defined by: x I b β f ( x ) = 1 Γ ( β ) ∫ x b f ( t ) ( S F α ( t ) − S F α ( x ) ) 1 − β d F α t {\displaystyle {x}{\mathcal {I}}_{b}^{\beta }f(x)={\frac {1}{\Gamma (\beta )}}\int _{x}^{b}{\frac {f(t)}{(S_{F}^{\alpha }(t)-S_{F}^{\alpha }(x))^{1-\beta }}}d_{F}^{\alpha }t} . Fractal analogue of the left-sided Riemann-Liouville fractional integral of order β ∈ R {\displaystyle \beta \in \mathbb {R} } of f is defined by: a I x β f ( x ) = 1 Γ ( β ) ∫ a x f ( t ) ( S F α ( x ) − S F α ( t ) ) 1 − β d F α t . 
{\displaystyle {a}{\mathcal {I}}_{x}^{\beta }f(x)={\frac {1}{\Gamma (\beta )}}\int _{a}^{x}{\frac {f(t)}{(S_{F}^{\alpha }(x)-S_{F}^{\alpha }(t))^{1-\beta }}}d_{F}^{\alpha }t.} Fractal analogue of the right-sided Riemann-Liouville fractional derivative of order β ∈ R {\displaystyle \beta \in \mathbb {R} } of f is defined by: x D b β f ( x ) = 1 Γ ( n − β ) ( − D F α ) n ∫ x b f ( t ) ( S F α ( t ) − S F α ( x ) ) − n + β + 1 d F α t {\displaystyle {x}{\mathcal {D}}_{b}^{\beta }f(x)={\frac {1}{\Gamma (n-\beta )}}(-D_{F}^{\alpha })^{n}\int _{x}^{b}{\frac {f(t)}{(S_{F}^{\alpha }(t)-S_{F}^{\alpha }(x))^{-n+\beta +1}}}d_{F}^{\alpha }t} Fractal analogue of the left-sided Riemann-Liouville fractional derivative of order β ∈ R {\displaystyle \beta \in \mathbb {R} } of f is defined by: a D x β f ( x ) = 1 Γ ( n − β ) ( D F α ) n ∫ a x f ( t ) ( S F α ( x ) − S F α ( t ) ) − n + β + 1 d F α t {\displaystyle {a}{\mathcal {D}}_{x}^{\beta }f(x)={\frac {1}{\Gamma (n-\beta )}}(D_{F}^{\alpha })^{n}\int _{a}^{x}{\frac {f(t)}{(S_{F}^{\alpha }(x)-S_{F}^{\alpha }(t))^{-n+\beta +1}}}d_{F}^{\alpha }t} Fractal analogue of the right-sided Caputo fractional derivative of order β ∈ R {\displaystyle \beta \in \mathbb {R} } of f is defined by: x C D b β f ( x ) = 1 Γ ( n − β ) ∫ x b ( S F α ( t ) − S F α ( x ) ) n − β − 1 ( − D F α ) n f ( t ) d F α t {\displaystyle {x}^{C}{\mathcal {D}}_{b}^{\beta }f(x)={\frac {1}{\Gamma (n-\beta )}}\int _{x}^{b}(S_{F}^{\alpha }(t)-S_{F}^{\alpha }(x))^{n-\beta -1}(-D_{F}^{\alpha })^{n}f(t)d_{F}^{\alpha }t} Fractal analogue of the left-sided Caputo fractional derivative of order β ∈ R {\displaystyle \beta \in \mathbb {R} } of f is defined by: a C D x β f ( x ) = 1 Γ ( n − β ) ∫ a x ( S F α ( x ) − S F α ( t ) ) n − β − 1 ( D F α ) n f ( t ) d F α t {\displaystyle {a}^{C}{\mathcal {D}}_{x}^{\beta }f(x)={\frac {1}{\Gamma (n-\beta )}}\int _{a}^{x}(S_{F}^{\alpha }(x)-S_{F}^{\alpha }(t))^{n-\beta -1}(D_{F}^{\alpha })^{n}f(t)d_{F}^{\alpha }t} == See also == 
Fractional calculus Fractional-order system Multifractal system == References == == Bibliography == == External links == Power Law & Fractional Dynamics Non-Newtonian calculus website
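The limit definition of the fractal derivative, and its relation to the classical derivative when the latter exists (df/dt^α = (df/dt) · 1/(α t^(α−1))), can be checked numerically. A minimal sketch; the test function f(t) = t² and the parameter values are illustrative choices, not from the article:

```python
def fractal_derivative(f, t, alpha, h=1e-6):
    """Approximate the fractal (Hausdorff) derivative of f at t via its
    limit definition: (f(t1) - f(t)) / (t1**alpha - t**alpha) as t1 -> t."""
    t1 = t + h
    return (f(t1) - f(t)) / (t1**alpha - t**alpha)

# Check against the classical-derivative relation for f(t) = t^2:
f = lambda t: t**2                               # df/dt = 2t
alpha, t = 0.5, 2.0
numeric = fractal_derivative(f, t, alpha)
analytic = (2 * t) / (alpha * t**(alpha - 1))    # (df/dt) / (alpha * t^(alpha-1))
```

For α = 1 the denominator reduces to t1 − t, so the same sketch recovers the ordinary difference quotient of classical calculus.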
Wikipedia:Fractal dimension#0
In mathematics, a fractal dimension is a statistical index of complexity that describes how the detail in a pattern changes with the scale at which it is measured. It is also a measure of the space-filling capacity of a pattern, and it tells how a fractal scales differently from the space it is embedded in; a fractal dimension does not have to be an integer. The main idea of "fractured" dimensions has a long history in mathematics, but the term itself was brought to the fore by Benoit Mandelbrot based on his 1967 paper on self-similarity in which he discussed fractional dimensions. In that paper, Mandelbrot cited previous work by Lewis Fry Richardson describing the counter-intuitive notion that a coastline's measured length changes with the length of the measuring stick used (see Fig. 1). In terms of that notion, the fractal dimension of a coastline quantifies how the number of scaled measuring sticks required to measure the coastline changes with the scale applied to the stick. There are several formal mathematical definitions of fractal dimension that build on this basic concept of change in detail with change in scale; see § Examples below. Ultimately, the term fractal dimension became the phrase with which Mandelbrot himself became most comfortable with respect to encapsulating the meaning of the word fractal, a term he created. After several iterations over years, Mandelbrot settled on this use of the language: "to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants". One non-trivial example is the fractal dimension of a Koch snowflake. It has a topological dimension of 1, but it is by no means rectifiable: the length of the curve between any two points on the Koch snowflake is infinite. No small piece of it is line-like; rather, it is composed of an infinite number of segments joined at different angles.
The fractal dimension of a curve can be explained intuitively by thinking of a fractal line as an object too detailed to be one-dimensional, but too simple to be two-dimensional. Therefore, its dimension might best be described not by its usual topological dimension of 1 but by its fractal dimension, which is often a number between one and two; in the case of the Koch snowflake, it is approximately 1.2619. == Introduction == A fractal dimension is an index for characterizing fractal patterns or sets by quantifying their complexity as a ratio of the change in detail to the change in scale. Several types of fractal dimension can be measured theoretically and empirically (see Fig. 2). Fractal dimensions are used to characterize a broad spectrum of objects ranging from the abstract to practical phenomena, including turbulence, river networks, urban growth, human physiology, medicine, and market trends. The essential idea of fractional or fractal dimensions has a long history in mathematics that can be traced back to the 1600s, but the terms fractal and fractal dimension were coined by mathematician Benoit Mandelbrot in 1975. Fractal dimensions were first applied as an index characterizing complicated geometric forms for which the details seemed more important than the gross picture. For sets describing ordinary geometric shapes, the theoretical fractal dimension equals the set's familiar Euclidean or topological dimension. Thus, it is 0 for sets describing points (0-dimensional sets); 1 for sets describing lines (1-dimensional sets having length only); 2 for sets describing surfaces (2-dimensional sets having length and width); and 3 for sets describing volumes (3-dimensional sets having length, width, and height). But this changes for fractal sets. If the theoretical fractal dimension of a set exceeds its topological dimension, the set is considered to have fractal geometry.
Unlike topological dimensions, the fractal index can take non-integer values, indicating that a set fills its space qualitatively and quantitatively differently from how an ordinary geometrical set does. For instance, a curve with a fractal dimension very near to 1, say 1.10, behaves quite like an ordinary line, but a curve with fractal dimension 1.9 winds convolutedly through space very nearly like a surface. Similarly, a surface with fractal dimension of 2.1 fills space very much like an ordinary surface, but one with a fractal dimension of 2.9 folds and flows to fill space rather nearly like a volume. This general relationship can be seen in the two images of fractal curves in Fig. 2 and Fig. 3 – the 32-segment contour in Fig. 2, convoluted and space-filling, has a fractal dimension of 1.67, compared to the perceptibly less complex Koch curve in Fig. 3, which has a fractal dimension of approximately 1.2619. The relationship of an increasing fractal dimension with space-filling might be taken to mean fractal dimensions measure density, but that is not so; the two are not strictly correlated. Instead, a fractal dimension measures complexity, a concept related to certain key features of fractals: self-similarity and detail or irregularity. These features are evident in the two examples of fractal curves. Both are curves with topological dimension of 1, so one might hope to be able to measure their length and derivative in the same way as with ordinary curves. But we cannot do either of these things, because fractal curves have complexity in the form of self-similarity and detail that ordinary curves lack. The self-similarity lies in the infinite scaling, and the detail in the defining elements of each set. The length between any two points on these curves is infinite, no matter how close together the two points are, which means that it is impossible to approximate the length of such a curve by partitioning the curve into many small segments.
Every smaller piece is composed of an infinite number of scaled segments that look exactly like the first iteration. These are not rectifiable curves, meaning that they cannot be measured by being broken down into many segments approximating their respective lengths. They cannot be meaningfully characterized by finding their lengths and derivatives. However, their fractal dimensions can be determined, which shows that both fill space more than ordinary lines but less than surfaces, and allows them to be compared in this regard. The two fractal curves described above show a type of self-similarity that is exact with a repeating unit of detail that is readily visualized. This sort of structure can be extended to other spaces (e.g., a fractal that extends the Koch curve into 3D space has a theoretical D = 2.5849). However, such neatly countable complexity is only one example of the self-similarity and detail that are present in fractals. The example of the coast line of Britain, for instance, exhibits self-similarity of an approximate pattern with approximate scaling. Overall, fractals show several types and degrees of self-similarity and detail that may not be easily visualized. These include, as examples, strange attractors, for which the detail has been described as, in essence, smooth portions piling up, the Julia set, which can be seen to be complex swirls upon swirls, and heart rates, which are patterns of rough spikes repeated and scaled in time. Fractal complexity may not always be resolvable into easily grasped units of detail and scale without complex analytic methods, but it is still quantifiable through fractal dimensions. == History == The terms fractal dimension and fractal were coined by Mandelbrot in 1975, about a decade after he published his paper on self-similarity in the coastline of Britain.
Various historical authorities credit him with also synthesizing centuries of complicated theoretical mathematics and engineering work and applying them in a new way to study complex geometries that defied description in usual linear terms. The earliest roots of what Mandelbrot synthesized as the fractal dimension have been traced clearly back to writings about nondifferentiable, infinitely self-similar functions, which are important in the mathematical definition of fractals, around the time that calculus was discovered in the mid-1600s. There was a lull in the published work on such functions for a time after that, then a renewal starting in the late 1800s with the publishing of mathematical functions and sets that are today called canonical fractals (such as the eponymous works of von Koch, Sierpiński, and Julia), but at the time of their formulation were often considered antithetical mathematical "monsters". These works were accompanied by perhaps the most pivotal point in the development of the concept of a fractal dimension through the work of Hausdorff in the early 1900s, who defined a "fractional" dimension that has come to be named after him and is frequently invoked in defining modern fractals. See Fractal history for more information. == Mathematical definition == The mathematical definition of fractal dimension can be derived by observing and then generalizing the effect of traditional dimension on measurement-changes under scaling. For example, say you have a line and a measuring-stick of equal length. Now shrink the stick to 1/3 its size; you can now fit 3 sticks into the line. Similarly, in two dimensions, say you have a square and an identical "measuring-square". Now shrink the measuring-square's side to 1/3 its length; you can now fit 3^2 = 9 measuring-squares into the square.
Such familiar scaling relationships obey equation (1), where ε {\displaystyle \varepsilon } is the scaling factor, D {\displaystyle D} the dimension, and N {\displaystyle N} the resulting number of units (sticks, squares, etc.) in the measured object: N = ε − D ( 1 ) {\displaystyle N=\varepsilon ^{-D}\qquad (1)} In the line example, the dimension D = 1 {\displaystyle D=1} because there are N = 3 {\displaystyle N=3} units when the scaling factor ε = 1 / 3 {\displaystyle \varepsilon =1/3} . In the square example, D = 2 {\displaystyle D=2} because N = 9 {\displaystyle N=9} when ε = 1 / 3 {\displaystyle \varepsilon =1/3} . Fractal dimension generalizes traditional dimension in that it can be fractional, but it has exactly the same relationship with scaling that traditional dimension does; in fact, it is derived by simply rearranging equation (1): D = − log ⁡ N log ⁡ ε = log ⁡ N log ⁡ ( 1 / ε ) {\displaystyle D=-{\frac {\log N}{\log \varepsilon }}={\frac {\log N}{\log(1/\varepsilon )}}} D {\displaystyle D} can be thought of as the power of the scaling factor of an object's measure given some scaling of its "radius". For example, the Koch snowflake has D = 1.26185 … {\displaystyle D=1.26185\ldots } , indicating that lengthening its radius grows its measure faster than if it were a one-dimensional shape (such as a polygon), but slower than if it were a two-dimensional shape (such as a filled polygon). Of note, images shown on this page are not true fractals because the scaling described by D {\displaystyle D} cannot continue past the point of their smallest component, a pixel. However, the theoretical patterns that the images represent have no discrete pixel-like pieces, but rather are composed of an infinite number of infinitely scaled segments and do indeed have the claimed fractal dimensions. == D is not a unique descriptor == As is the case with dimensions determined for lines, squares, and cubes, fractal dimensions are general descriptors that do not uniquely define patterns. The value of D for the Koch fractal discussed above, for instance, quantifies the pattern's inherent scaling, but does not uniquely describe nor provide enough information to reconstruct it.
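The rearranged scaling relation D = log N / log(1/ε) can be evaluated directly for exactly self-similar sets. A minimal sketch; the piece counts and scale factors below are the standard textbook values for these sets:

```python
import math

def similarity_dimension(n_pieces, scale_factor):
    """Dimension from the scaling relation N = (1/eps)^D,
    i.e. D = log(N) / log(1/eps)."""
    return math.log(n_pieces) / math.log(1.0 / scale_factor)

examples = {
    "line (3 pieces at scale 1/3)":           similarity_dimension(3, 1 / 3),
    "square (9 pieces at scale 1/3)":         similarity_dimension(9, 1 / 3),
    "Koch curve (4 pieces at scale 1/3)":     similarity_dimension(4, 1 / 3),
    "Sierpinski triangle (3 pieces at 1/2)":  similarity_dimension(3, 1 / 2),
}
```

The line and square reproduce their topological dimensions 1 and 2, while the Koch curve and Sierpinski triangle give the fractional values log 4 / log 3 ≈ 1.2619 (matching the Koch value quoted in the text) and log 3 / log 2 ≈ 1.5850.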
Many fractal structures or patterns could be constructed that have the same scaling relationship but are dramatically different from the Koch curve, as is illustrated in Fig. 6. For examples of how fractal patterns can be constructed, see Fractal, Sierpinski triangle, Mandelbrot set, Diffusion-limited aggregation, L-system. == Fractal surface structures == The concept of fractality is applied increasingly in the field of surface science, providing a bridge between surface characteristics and functional properties. Numerous surface descriptors are used to interpret the structure of nominally flat surfaces, which often exhibit self-affine features across multiple length-scales. Mean surface roughness, usually denoted RA, is the most commonly applied surface descriptor; however, numerous other descriptors, including mean slope and root-mean-square roughness (RRMS), are regularly applied. It is found, however, that many physical surface phenomena cannot readily be interpreted with reference to such descriptors, so fractal dimension is increasingly applied to establish correlations between surface structure, in terms of scaling behavior, and performance. The fractal dimensions of surfaces have been employed to explain and better understand phenomena in areas of contact mechanics, frictional behavior, electrical contact resistance and transparent conducting oxides. == Examples == The concept of fractal dimension described in this article is a basic view of a complicated construct. The examples discussed here were chosen for clarity, and the scaling unit and ratios were known ahead of time. In practice, however, fractal dimensions can be determined using techniques that approximate scaling and detail from limits estimated from regression lines over log–log plots of size vs scale. Several formal mathematical definitions of different types of fractal dimension are listed below.
Although for compact sets with exact affine self-similarity all these dimensions coincide, in general they are not equivalent: Box-counting dimension is estimated as the exponent of a power law: D 0 = lim ε → 0 log ⁡ N ( ε ) log ⁡ 1 ε . {\displaystyle D_{0}=\lim _{\varepsilon \to 0}{\frac {\log N(\varepsilon )}{\log {\frac {1}{\varepsilon }}}}.} Information dimension considers how the average information needed to identify an occupied box scales with box size ( p {\displaystyle p} is a probability): D 1 = lim ε → 0 − ⟨ log ⁡ p ε ⟩ log ⁡ 1 ε . {\displaystyle D_{1}=\lim _{\varepsilon \to 0}{\frac {-\langle \log p_{\varepsilon }\rangle }{\log {\frac {1}{\varepsilon }}}}.} Correlation dimension is based on M {\displaystyle M} as the number of points used to generate a representation of a fractal and gε, the number of pairs of points closer than ε to each other: D 2 = lim M → ∞ lim ε → 0 log ⁡ ( g ε / M 2 ) log ⁡ ε . {\displaystyle D_{2}=\lim _{M\to \infty }\lim _{\varepsilon \to 0}{\frac {\log(g_{\varepsilon }/M^{2})}{\log \varepsilon }}.} Generalized, or Rényi dimensions: the box-counting, information, and correlation dimensions can be seen as special cases of a continuous spectrum of generalized dimensions of order α, defined by D α = lim ε → 0 1 α − 1 log ⁡ ( ∑ i p i α ) log ⁡ ε . {\displaystyle D_{\alpha }=\lim _{\varepsilon \to 0}{\frac {{\frac {1}{\alpha -1}}\log(\sum _{i}p_{i}^{\alpha })}{\log \varepsilon }}.} Higuchi dimension D = d log ⁡ L ( k ) d log ⁡ k . {\displaystyle D={\frac {d\log L(k)}{d\log k}}.} Lyapunov dimension Multifractal dimensions: a special case of Rényi dimensions where scaling behaviour varies in different parts of the pattern. Uncertainty exponent Hausdorff dimension: For any subset S {\displaystyle S} of a metric space X {\displaystyle X} and d ≥ 0 {\displaystyle d\geq 0} , the d-dimensional Hausdorff content of S is defined by C H d ( S ) := inf { ∑ i r i d : there is a cover of S by balls with radii r i > 0 } . 
{\displaystyle C_{H}^{d}(S):=\inf {\Bigl \{}\sum _{i}r_{i}^{d}:{\text{ there is a cover of }}S{\text{ by balls with radii }}r_{i}>0{\Bigr \}}.} The Hausdorff dimension of S is defined by dim H ⁡ ( S ) := inf { d ≥ 0 : C H d ( S ) = 0 } . {\displaystyle \dim _{\operatorname {H} }(S):=\inf\{d\geq 0:C_{H}^{d}(S)=0\}.} Packing dimension Assouad dimension Local connected dimension Degree dimension describes the fractal nature of the degree distribution of graphs. Parabolic Hausdorff dimension == Estimating from real-world data == Many real-world phenomena exhibit limited or statistical fractal properties and fractal dimensions that have been estimated from sampled data using computer-based fractal analysis techniques. Practically, measurements of fractal dimension are affected by various methodological issues and are sensitive to numerical or experimental noise and limitations in the amount of data. Nonetheless, the field is rapidly growing as estimated fractal dimensions for statistically self-similar phenomena may have many practical applications in various fields, including astronomy, acoustics, architecture, geology and earth sciences, diagnostic imaging, ecology, electrochemical processes, image analysis, biology and medicine, neuroscience, network analysis, physiology, physics, and Riemann zeta zeros. Fractal dimension estimates have also been shown to correlate with Lempel–Ziv complexity in real-world data sets from psychoacoustics and neuroscience. An alternative to a direct measurement is considering a mathematical model that resembles formation of a real-world fractal object. In this case, a validation can also be done by comparing properties other than fractal ones implied by the model with measured data. In colloidal physics, systems composed of particles with various fractal dimensions arise.
To describe these systems, it is convenient to speak of a distribution of fractal dimensions and, eventually, of a time evolution of the latter: a process that is driven by a complex interplay between aggregation and coalescence. == See also == List of fractals by Hausdorff dimension Lacunarity – Term in geometry and fractal analysis Fractal derivative – Generalization of derivative to fractals == Notes == == References == == Further reading == Mandelbrot, Benoit B.; Hudson, Richard L. (2010). The (Mis)Behaviour of Markets: A Fractal View of Risk, Ruin and Reward. Profile Books. ISBN 978-1-84765-155-6. == External links == TruSoft's Benoit, a fractal analysis software product that calculates fractal dimensions and Hurst exponents. A Java Applet to Compute Fractal Dimensions Introduction to Fractal Analysis Bowley, Roger (2009). "Fractal Dimension". Sixty Symbols. Brady Haran for the University of Nottingham. "Fractals are typically not self-similar". 3Blue1Brown.
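The log–log regression procedure described in the section on estimating from real-world data can be sketched in a few lines. A minimal illustration, assuming points sampled from the middle-thirds Cantor set, whose dimension log 2 / log 3 ≈ 0.6309 is known exactly; the sample size and box scales are arbitrary choices:

```python
import math
import random

def cantor_point(depth=20):
    """A random point of the middle-thirds Cantor set, to ternary depth `depth`."""
    x, scale = 0.0, 1.0
    for _ in range(depth):
        scale /= 3.0
        if random.random() < 0.5:
            x += 2.0 * scale          # ternary digit 2; digit 0 adds nothing
    return x

def box_count(points, eps):
    """Number of boxes of size eps occupied by a 1-D point sample."""
    return len({math.floor(p / eps) for p in points})

random.seed(0)
pts = [cantor_point() for _ in range(20000)]
scales = [3.0 ** -k for k in range(2, 8)]
xy = [(math.log(1 / e), math.log(box_count(pts, e))) for e in scales]

# Least-squares slope of log N(eps) against log(1/eps) = estimated dimension.
n = len(xy)
mx = sum(x for x, _ in xy) / n
my = sum(y for _, y in xy) / n
slope = sum((x - mx) * (y - my) for x, y in xy) / sum((x - mx) ** 2 for x, _ in xy)
```

With enough sample points the slope comes out close to log 2 / log 3; with too few points or boxes that are too small the estimate degrades, which is exactly the sensitivity to limited data and noise noted in the text.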
Wikipedia:Fractal globule#0
A fractal globule, also sometimes called a crumpled globule, is a term used to describe polymers that are compact at both local and global scales. They can be modeled through a Hamiltonian walk, a lattice walk in which every point is visited exactly once and no paths intersect; this prevents knot formation. A crumpled globule is a non-equilibrium structure formed when a polymer crumples, i.e. collapses in on itself, with this occurring iteratively at all length scales over the whole polymer. The resulting conformation resembles a space-filling Peano curve. It has been proposed that mammalian chromosomes form fractal globules. == References ==
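As an illustration of the lattice model described above, the sketch below constructs one particularly simple (non-random) Hamiltonian walk: a boustrophedon path that visits every site of a rectangular lattice exactly once without crossing itself. Actual crumpled-globule simulations sample such walks randomly; the function name and layout here are illustrative only.

```python
def hamiltonian_snake(width, height):
    """A boustrophedon ("snake") path: one Hamiltonian walk on a
    width x height lattice, visiting every site exactly once."""
    path = []
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        path.extend((x, y) for x in xs)
    return path

walk = hamiltonian_snake(4, 4)
```

Every consecutive pair of sites in the result is one lattice step apart, and no site repeats, which is exactly the knot-free property the model relies on.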
Wikipedia:Fractal in soil mechanics#0
A fractal is an irregular geometric object with an infinite nesting of structure at all scales. In soil science, fractal concepts are mainly applied in soil chromatography and soil micromorphology (Anderson, 1997). Internal structure, pore-size distribution and pore geometry can be characterized using the fractal dimension at the nanoscale. Because soil is heterogeneous, its pore space is made up of macropores, mesopores and micropores; studied at the nanoscale, the macropores are composed of meso- and micropores, which in turn are composed of organo-mineral complexes. The fractal approach to soil mechanics is a relatively new line of thought. It was first raised in "Fractal Character of Grain-Size Distribution of Expansive Soils" by Yongfu Xu and Songyu Liu, published in Fractals in 1999. Several problems in soil mechanics can be addressed with a fractal approach. One of these is the determination of the soil-water characteristic curve (also called the water retention curve or capillary pressure curve), which is time-consuming to obtain through the usual laboratory experiments. Many scientists have constructed mathematical models of the soil-water characteristic curve (SWCC) in which the constants are related to the fractal dimension of the pore-size distribution or particle-size distribution of the soil. After Benoît Mandelbrot, the father of fractal mathematics, introduced fractals to the world, scientists in agronomy, agricultural engineering and the earth sciences developed further fractal-based models. These models have been used to extract hydraulic properties of soils and demonstrate the potential of fractal mathematics for investigating their mechanical properties. Such physically based models are important for advancing the understanding of soil mechanics, and they can be of great help to researchers in the area of unsaturated soil mechanics.
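As an example of such a model, one widely cited fractal form of the SWCC (attributed to Tyler and Wheatcraft) relates effective saturation to suction via the pore fractal dimension D. This is a minimal sketch under that assumed form; the parameter names and default values are illustrative, not from a specific dataset:

```python
def fractal_swcc(psi, psi_a=1.0, d=2.6):
    """Effective saturation Se from a fractal pore-size model:
    Se = (psi / psi_a) ** (d - 3) for suctions psi >= psi_a,
    where psi_a is the air-entry suction and d (< 3) the pore
    fractal dimension. Below air entry the soil is saturated."""
    return 1.0 if psi <= psi_a else (psi / psi_a) ** (d - 3.0)
```

Fitting the single exponent (3 − D) to measured retention data is what ties the curve's constants to the fractal dimension, as described above.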
Mechanical parameters can also be derived from such models, although this requires further work and research. Fractal calculus is a framework that includes functions with fractal support. == References == Anderson, A.N., McBratney, A.B. and Crawford, J.W., 1997. Applications of fractals to soil studies. In Advances in Agronomy (Vol. 63, pp. 1–76). Academic Press.
Wikipedia:Fractal landscape#0
A fractal landscape or fractal surface is generated using a stochastic algorithm designed to produce fractal behavior that mimics the appearance of natural terrain. In other words, the surface resulting from the procedure is not a deterministic, but rather a random surface that exhibits fractal behavior. Many natural phenomena exhibit some form of statistical self-similarity that can be modeled by fractal surfaces. Moreover, variations in surface texture provide important visual cues to the orientation and slopes of surfaces, and the use of almost self-similar fractal patterns can help create natural looking visual effects. The modeling of the Earth's rough surfaces via fractional Brownian motion was first proposed by Benoit Mandelbrot. Because the intended result of the process is to produce a landscape, rather than a mathematical function, processes are frequently applied to such landscapes that may affect the stationarity and even the overall fractal behavior of such a surface, in the interests of producing a more convincing landscape. According to R. R. Shearer, the generation of natural looking surfaces and landscapes was a major turning point in art history, where the distinction between geometric, computer generated images and natural, man made art became blurred. The first use of a fractal-generated landscape in a film was in 1982 for the movie Star Trek II: The Wrath of Khan. Loren Carpenter refined the techniques of Mandelbrot to create an alien landscape. == Behavior of natural landscapes == Whether or not natural landscapes behave in a generally fractal manner has been the subject of some research. Technically speaking, any surface in three-dimensional space has a topological dimension of 2, and therefore any fractal surface in three-dimensional space has a Hausdorff dimension between 2 and 3. Real landscapes however, have varying behavior at different scales. 
This means that an attempt to calculate the 'overall' fractal dimension of a real landscape can result in measures of negative fractal dimension, or of fractal dimension above 3. In particular, many studies of natural phenomena, even those commonly thought to exhibit fractal behavior, do not do so over more than a few orders of magnitude. For instance, Richardson's examination of the western coastline of Britain showed fractal behavior of the coastline over only two orders of magnitude. In general, there is no reason to suppose that the geological processes that shape terrain on large scales (for example plate tectonics) exhibit the same mathematical behavior as those that shape terrain on smaller scales (for instance, soil creep). Real landscapes also have varying statistical behavior from place to place, so for example sandy beaches don't exhibit the same fractal properties as mountain ranges. A fractal function, however, is statistically stationary, meaning that its bulk statistical properties are the same everywhere. Thus, any real approach to modeling landscapes requires the ability to modulate fractal behavior spatially. Additionally, real landscapes have very few natural minima (most of these are lakes), whereas a fractal function has as many minima as maxima, on average. Real landscapes also have features originating with the flow of water and ice over their surface, which simple fractals cannot model. It is because of these considerations that the simple fractal functions are often inappropriate for modeling landscapes. 
More sophisticated techniques (known as 'multi-fractal' techniques) use different fractal dimensions for different scales, and thus can better model the frequency spectrum behavior of real landscapes. == Generation of fractal landscapes == A way to make such a landscape is to employ the random midpoint displacement algorithm, in which a square is subdivided into four smaller equal squares and the center point is vertically offset by some random amount. The process is repeated on the four new squares, and so on, until the desired level of detail is reached. There are many fractal procedures (such as combining multiple octaves of Simplex noise) capable of creating terrain data; however, the term "fractal landscape" has become more generic over time. == Fractal plants == Fractal plants can be procedurally generated using L-systems in computer-generated scenes. == See also == Brownian surface Bryce Diamond-square algorithm Fractal-generating software Grome Heightmap List of mathematical art software Outerra Scenery generator Terragen Octree Quadtree == Notes == == References == Lewis, J.P. "Is the Fractal Model Appropriate for Terrain?" (PDF). Richardson, L.F. (1961). "The Problem of Continuity". General Systems Yearbook. 6: 139–187. van Lawick van Pabst, Joost; Jense, Hans (2001). "Dynamic Terrain Generation Based on Multifractal Techniques" (PDF). Archived from the original (PDF) on 2011-07-24. Musgrave, Ken (1993). "Methods for Realistic Landscape Imaging" (PDF). == External links == A Web-Wide World by Ken Perlin, 1998; a Java applet showing a sphere with a generated landscape.
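The random midpoint displacement procedure described above can be sketched in its diamond-square variant. The function name, roughness parameter, and grid size are illustrative assumptions; the noise amplitude halves at each subdivision, which is what produces the fractal character:

```python
import numpy as np

def midpoint_displacement(n, roughness=1.0, seed=0):
    """Generate a (2**n + 1) x (2**n + 1) heightmap by diamond-square
    midpoint displacement, halving the noise scale at each level."""
    rng = np.random.default_rng(seed)
    size = 2 ** n + 1
    g = np.zeros((size, size))
    g[0, 0], g[0, -1], g[-1, 0], g[-1, -1] = rng.standard_normal(4)
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # diamond step: each square's center gets the corner average plus noise
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (g[y - half, x - half] + g[y - half, x + half] +
                       g[y + half, x - half] + g[y + half, x + half]) / 4
                g[y, x] = avg + scale * rng.standard_normal()
        # square step: edge midpoints get the average of their neighbours plus noise
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                vals = []
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < size and 0 <= xx < size:
                        vals.append(g[yy, xx])
                g[y, x] = sum(vals) / len(vals) + scale * rng.standard_normal()
        step, scale = half, scale / 2
    return g

terrain = midpoint_displacement(4)
```

Modulating `roughness` spatially, rather than keeping it constant, is one simple way to address the non-stationarity of real landscapes discussed above.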
Wikipedia:Fractal physiology#0
Fractal physiology refers to the study of physiological systems using complexity science methods, such as chaos measures, entropy, and fractal dimensions. The underlying assumption is that biological systems are complex and exhibit non-linear patterns of activity, and that characterizing that complexity (using dedicated mathematical approaches) is useful for understanding the system and for making inferences and predictions about it. == Main findings == === Neurophysiology === Quantification of the complexity of brain activity is used in the context of neuropsychiatric diseases and the characterization of mental states, such as schizophrenia, affective disorders, or neurodegenerative disorders. In particular, diminished EEG complexity is typically associated with increased symptomatology. === Cardiovascular systems === The complexity of heart rate variability is a useful predictor of cardiovascular health. == Software == In Python, NeuroKit provides a comprehensive set of functions for complexity analysis of physiological data. AntroPy implements several measures to quantify the complexity of time series. In R, TSEntropies provides methods to quantify entropy. casnet implements a collection of analytic tools for studying signals recorded from complex adaptive systems. In MATLAB, the Neurophysiological Biomarker Toolbox (NBT) allows the computation of detrended fluctuation analysis. EZ Entropy implements entropy analysis of physiological time series. == See also == Fractal dimension Entropy Complex system == References ==
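One of the entropy measures such packages implement, sample entropy, can be sketched in a simplified form. This is an illustrative implementation under common defaults (template length m = 2, tolerance 0.2 × standard deviation), not code taken from the packages above:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Simplified sample entropy: -ln(A/B), where B counts pairs of
    length-m templates within Chebyshev tolerance r, and A does the
    same for templates of length m + 1. Higher values = more irregular."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def matches(length):
        t = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(t) - 1):
            dist = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            count += int(np.sum(dist < r))
        return count

    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# a regular signal should score as less "complex" than white noise
rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 40 * np.pi, 400))
noise = rng.standard_normal(400)
```

On a periodic signal the measure is near zero, while white noise scores much higher, which is the kind of contrast exploited when comparing patient groups or physiological states.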
Wikipedia:Fractal sequence#0
In mathematics, a fractal sequence is one that contains itself as a proper subsequence. An example is 1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, ... If the first occurrence of each n is deleted, the remaining sequence is identical to the original. The process can be repeated indefinitely, so that actually, the original sequence contains not only one copy of itself, but rather, infinitely many. == Definition == The precise definition of fractal sequence depends on a preliminary definition: a sequence x = (xn) is an infinitive sequence if for every i, (F1) xn = i for infinitely many n. Let a(i,j) be the jth index n for which xn = i. An infinitive sequence x is a fractal sequence if two additional conditions hold: (F2) if i+1 = xn, then there exists m < n such that i = x m {\displaystyle i=x_{m}} (F3) if h < i then for every j there is exactly one k such that a ( i , j ) < a ( h , k ) < a ( i , j + 1 ) . {\displaystyle a(i,j)<a(h,k)<a(i,j+1).} According to (F2), the first occurrence of each i > 1 in x must be preceded at least once by each of the numbers 1, 2, ..., i-1, and according to (F3), between consecutive occurrences of i in x, each h less than i occurs exactly once. == Example == Suppose θ is a positive irrational number. Let S(θ) = the set of numbers c + dθ, where c and d are positive integers and let cn(θ) + θdn(θ) be the sequence obtained by arranging the numbers in S(θ) in increasing order. The sequence cn(θ) is the signature of θ, and it is a fractal sequence. For example, the signature of the golden ratio (i.e., θ = (1 + sqrt(5))/2) begins with 1, 2, 1, 3, 2, 4, 1, 3, 5, 2, 4, 1, 6, 3, 5, 2, 7, 4, 1, 6, 3, 8, 5, ... and the signature of 1/θ = θ - 1 begins with 1, 1, 2, 1, 2, 1, 3, 2, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 5, ... These are sequences OEIS: A084531 and OEIS: A084532 in the On-Line Encyclopedia of Integer Sequences, where further examples from a variety of number-theoretic and combinatorial settings are given. 
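The signature construction above is easy to verify numerically: enumerate c + dθ over positive integers c, d up to some bound (30 here is an ad-hoc bound, comfortably large enough for the first 23 terms), sort, and read off the c-values:

```python
import math

def signature(theta, terms, bound=30):
    """Sort the numbers c + d*theta over positive integers c, d < bound
    and return the sequence of c-values (the signature of theta)."""
    vals = sorted((c + d * theta, c)
                  for c in range(1, bound) for d in range(1, bound))
    return [c for _, c in vals[:terms]]

phi = (1 + math.sqrt(5)) / 2
sig = signature(phi, 23)  # matches the golden-ratio signature listed above
```

Deleting the first occurrence of each integer from the output and comparing with the original is likewise a quick numerical check of the fractal property.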
== See also == Thue-Morse Sequence == External links == On-Line Encyclopedia of Integer Sequences: OEIS sequence A002260 (Triangle T(n,k) = k for k = 1..n) OEIS sequence A004736 (Triangle read by rows: row n lists the first n positive integers in decreasing order) OEIS sequence A003603 (Fractal sequence obtained from Fibonacci numbers (or Wythoff array)) OEIS sequence A112382 (Self-descriptive fractal sequence: the sequence contains every positive integer) OEIS sequence A122196 (Fractal sequence: count down by 2's from successive integers) OEIS sequence A022446 (Fractal sequence of the dispersion of the composite numbers) OEIS sequence A022447 (Fractal sequence of the dispersion of the primes) OEIS sequence A125158 (The fractal sequence associated with A125150) OEIS sequence A125159 (The fractal sequence associated with A125151) OEIS sequence A108712 (A fractal sequence, (the almost-natural numbers)) == References == Kimberling, Clark (1997). "Fractal sequences and interspersions". Ars Combinatoria. 45: 157–168. Zbl 0932.11016.
Wikipedia:Fractal transform#0
The fractal transform is a technique invented by Michael Barnsley et al. to perform lossy image compression. This first practical fractal compression system for digital images resembles a vector quantization system using the image itself as the codebook. == Fractal transform compression == Start with a digital image A1. Downsample it by a factor of 2 to produce image A2. Now, for each block B1 of 4x4 pixels in A1, find the corresponding block B2 in A2 most similar to B1, and then find the grayscale or RGB offset and gain from A2 to B2. For each destination block, output the positions of the source blocks and the color offsets and gains. == Fractal transform decompression == Starting with an empty destination image A1, repeat the following algorithm several times: Downsample A1 down by a factor of 2 to produce image A2. Then copy blocks from A2 to A1 as directed by the compressed data, multiplying by the respective gains and adding the respective color offsets. This algorithm is guaranteed to converge to an image, and it should appear similar to the original image. In fact, a slight modification of the decompressor to run at block sizes larger than 4x4 pixels produces a method of stretching images without causing the blockiness or blurriness of traditional linear resampling algorithms. == Patents == The basic patents covering Fractal Image Compression, U.S. Patents 4,941,193, 5,065,447, 5,384,867, 5,416,856, and 5,430,812 appear to be expired. == See also == Image compression == External links == E2 writeup
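The compression and decompression steps described above can be sketched for a small grayscale image. This is a simplified illustration: the exhaustive block search, least-squares gain/offset fit, and gain clipping (to guarantee the decompression iteration contracts) are common implementation choices, not requirements of the patents.

```python
import numpy as np

BS = 4  # block size in pixels

def downsample(img):
    # average 2x2 pixel neighbourhoods (A1 -> A2)
    return (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2]) / 4.0

def compress(img):
    small = downsample(img)
    codes = []
    for by in range(0, img.shape[0], BS):
        for bx in range(0, img.shape[1], BS):
            dest = img[by:by + BS, bx:bx + BS].ravel()
            best = None
            for sy in range(0, small.shape[0] - BS + 1, BS):
                for sx in range(0, small.shape[1] - BS + 1, BS):
                    src = small[sy:sy + BS, sx:sx + BS].ravel()
                    var = src.var()
                    # least-squares gain, clipped so the decoder iteration contracts
                    gain = 0.0 if var == 0 else float(np.clip(
                        ((src - src.mean()) * (dest - dest.mean())).mean() / var,
                        -0.9, 0.9))
                    offset = dest.mean() - gain * src.mean()
                    err = ((gain * src + offset - dest) ** 2).sum()
                    if best is None or err < best[0]:
                        best = (err, sy, sx, gain, offset)
            codes.append(best[1:])  # (source position, gain, offset) per block
    return codes

def decompress(codes, shape, iters=60):
    img = np.zeros(shape)  # any starting image converges to the same fixed point
    for _ in range(iters):
        small = downsample(img)
        out = np.empty(shape)
        i = 0
        for by in range(0, shape[0], BS):
            for bx in range(0, shape[1], BS):
                sy, sx, gain, offset = codes[i]
                out[by:by + BS, bx:bx + BS] = gain * small[sy:sy + BS, sx:sx + BS] + offset
                i += 1
        img = out
    return img

# a smooth test image is reproduced almost exactly at the fixed point
image = np.add.outer(np.arange(16), np.arange(16)) / 30.0
recon = decompress(compress(image), image.shape)
```

Running the same decoder with a block size larger than 4x4 gives the fractal-zoom stretching effect mentioned above.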
Wikipedia:Fractal-generating software#0
Fractal-generating software is any type of graphics software that generates images of fractals. There are many fractal generating programs available, both free and commercial. Mobile apps are available to play or tinker with fractals. Some programmers create fractal software for themselves because of the novelty and because of the challenge in understanding the related mathematics. The generation of fractals has led to some very large problems for pure mathematics. Fractal generating software creates mathematical beauty through visualization. Modern computers may take seconds or minutes to complete a single high resolution fractal image. Images are generated both for simulation (modeling) and as random fractals for art. Fractal generation used for modeling is part of realism in computer graphics. Fractal generation software can be used to mimic natural landscapes with fractal landscapes and scenery generation programs. Fractal imagery can be used to introduce irregularity to an otherwise sterile computer generated environment. Fractals are generated in music visualization software, screensavers and wallpaper generators. This software presents the user with a more limited range of settings and features, sometimes relying on a series of pre-programmed variables. Because complex images can be generated from simple formulas, fractals are often used in the demoscene. The generation of fractals such as the Mandelbrot set is time-consuming and requires many computations, so it is often used in benchmarking devices. == History == The generation of fractals by calculation without computer assistance was undertaken by German mathematician Georg Cantor in 1883 to create the Cantor set. Throughout the following years, mathematicians have postulated the existence of numerous fractals. Some were conceived before the naming of fractals in 1975, for example, the Pythagoras tree by Dutch mathematics teacher Albert E. Bosman in 1942.
The development of the first fractal generating software originated in Benoit Mandelbrot's pursuit of a generalized function for a class of shapes known as Julia sets. In 1979, Mandelbrot discovered that one image of the complex plane could be created by iteration. He and programmers working at IBM generated the first rudimentary fractal printouts. This marked the first instance of the generation of fractals by non-linear creation laws, or 'escape time' fractals. Loren Carpenter created a two-minute color film called Vol Libre for presentation at SIGGRAPH in 1980. The October 1983 issue of Acorn User magazine carried a BBC BASIC listing for generating fractal shapes by Susan Stepney, now Professor of Computer Science at the University of York. She followed this up in the March 1984 Acorn User with "Snowflakes and other fractal monsters". Fractals were rendered in computer games as early as 1984 with the creation of Rescue on Fractalus!. From the early 1980s to about 1995 hundreds of different fractal types were formulated. The generation of fractal images grew in popularity as computers with a maths co-processor or a floating-point unit in the central processing unit were adopted throughout the 1990s. At this time the rendering of high resolution VGA standard images could take many hours. Fractal generation algorithms display extreme parallelizability. Fractal-generating software was rewritten to make use of multi-threaded processing. Subsequently, the adoption of graphics processing units in computers has greatly increased the speed of rendering and allowed for real-time changes to parameters that were previously impossible due to render delay. 3D fractal generation emerged around 2009. An early list of fractal-generating software was compiled for the book titled Fractals: The Patterns of Chaos by John Briggs published in 1992. Leading writers in the field include Dietmar Saupe, Heinz-Otto Peitgen and Clifford A. Pickover.
== Methods == There are two major methods of two-dimensional fractal generation. One is to apply an iterative process to simple equations by generative recursion. Dynamical systems produce a series of values. In fractal software, values for a set of points on the complex plane are calculated and then rendered as pixels. This computer-based generation of fractal objects is an endless process. In theory, images can be calculated infinitely but in practice are approximated to a certain level of detail. Mandelbrot used quadratic formulas described by the French mathematician Gaston Julia. The maximum fractal dimension that can be produced varies according to type and is sometimes limited according to the method implemented. There are numerous coloring methods that can be applied. One of the earliest was the escape time algorithm. Colour banding may appear in images depending on the method of coloring used as well as gradient color density. Some programs generate geometric self-similar or deterministic fractals such as the Koch curve. These programs use an initiator followed by a generator that is repeated in a pattern. These simple fractals originate from a technique first proposed in 1904 by Koch. The other main method is with Iterated Function Systems consisting of a number of affine transformations. In the first method each pixel in a fractal image is evaluated according to a function and then coloured, before the same process is applied to the next pixel. The former method represents the classical stochastic approach while the latter implements a linear fractal model. Using recursion allowed programmers to create complex images through simple direction. Three dimensional fractals are generated in a variety of ways including by using quaternion algebra. Fractals emerge from fluid dynamics modelling simulations as turbulence when contour advection is used to study chaotic mixing. The Buddhabrot method was introduced in 1993.
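The escape time algorithm mentioned above assigns each pixel the number of iterations of z → z² + c before the orbit leaves a disk of radius 2; points that never escape belong to the Mandelbrot set. A minimal sketch (the ASCII rendering grid and iteration cap are arbitrary illustrative choices):

```python
def escape_time(c, max_iter=100):
    """Iterations of z -> z*z + c (starting from z = 0) before |z|
    exceeds 2; returns max_iter for points that never escape."""
    z = 0 + 0j
    for i in range(max_iter):
        if abs(z) > 2:
            return i
        z = z * z + c
    return max_iter

# coarse ASCII rendering: '#' marks points that never escape
rows = []
for im in range(11):
    y = 1.2 - 0.24 * im
    rows.append("".join(
        "#" if escape_time(complex(-2.0 + 0.05 * re, y)) == 100 else "."
        for re in range(60)))
```

A full renderer maps the iteration count through a color gradient instead of a character, which is where the banding and color-density issues discussed below arise.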
Programs might use fractal heightmaps to generate terrain. Fractals have been generated on computers using the following methods: Menger sponge, Hypercomplex manifold, Brownian tree, Brownian motion, Decomposition, L-systems, Lyapunov fractals, Newton fractals, Pickover stalks and Strange attractors. == Features == Many different features are included in fractal-generating software packages. A corresponding diversity in the images produced is therefore possible. Most feature some form of algorithm selection, an interactive image zoom, and the ability to save files in JPEG, TIFF, or PNG format, as well as the ability to save parameter files, allowing the user to easily return to previously created images for later modification or exploration. The formula, parameters, variables and coloring algorithms for fractal images can be exchanged between users of the same program. There is no universally adopted standard fractal file format. One feature of most escape time fractal programs or algebraic-based fractals is a maximum iteration setting. Increasing the iteration count is required if the image is magnified so that fine detail is not lost. Limiting the maximum iterations is important when a device's processing power is low. Coloring options often allow colors to be randomised. Options for color density are common because some gradients output hugely variable magnitudes resulting in heavy repetitive banding or large areas of the same color. Because of the convenience of adding post-processing effects, layering and alpha compositing features found in other graphics software have been included. Both 2D and 3D rendering effects such as plasma effect and lighting may be included. Many packages also allow the user to input their own formula, to allow for greater control of the fractals, as well as a choice of color rendering, along with the use of filters and other image manipulation techniques.
Some fractal software packages allow for the creation of movies from a sequence of fractal images. Others display render time and allow some form of color cycling and color palette creation tools. Standard graphics software (such as GIMP) contains filters or plug-ins which can be used for fractal generation. Blender contains a fractal (or random) modifier. Many stand-alone fractal-generating programs can be used in conjunction with other graphics programs (such as Photoshop) to create more complex images. POV-Ray is a ray tracing program which generates images from a text-based scene description that can generate fractals. Scripts on 3ds Max and Autodesk Maya can be used. A number of web-based interfaces for the fractal generation are freely available including Turtle Graphics Renderer. Fractal Lab can generate both 2D and 3D fractals and is available over the web using WebGL. JWildfire is a java-based, open-source fractal flame generator. Mandelbrot Fractal is a fractal explorer written in JavaScript. Fractal Grower is software written in Java for generating Lindenmayer Substitution Fractals (L-systems). == Programs == Because of the butterfly effect, generating fractals can be difficult to master. A small change in a single variable can have an unpredictable effect. Some software presents the user with a steep learning curve and an understanding of chaos theory is advantageous. This includes the characteristics of fractal dimension, recursion and self-similarity exhibited by all fractals. There are many fractal generating programs available, both free and commercial. Notable fractal generating programs include: Apophysis – open source IFS software for Microsoft Windows-based systems Bryce – cross platform commercial software partially developed by Ken Musgrave Electric Sheep – open source distributed screensaver software, developed by Scott Draves. 
Fractint – MS-DOS freeware initially released in 1988 with available source code, later ported to Linux and Windows (as WinFract) Fyre – a cross-platform open source tool for producing images based on histograms of iterated chaotic functions Kalles Fraktaler – Windows based fractal zoomer Milkdrop – music visualization plugin distributed with Winamp MojoWorld Generator – a defunct landscape generator for Windows openPlaG – creates fractals by plotting simple functions Picogen – a cross platform open source terrain generator Sterling – freeware software for Windows Terragen – a fractal terrain generator that can render animations for Windows and Mac OS X Ultra Fractal – proprietary fractal generator for Windows and Mac OS X Wolfram Mathematica – can be used specifically to create fractal images XaoS – cross platform open source fractal zooming program Most of the above programs make two-dimensional fractals, with a few creating three-dimensional fractal objects, such as mandelbulbs and mandelboxes. Mandelbulber is an experimental, cross platform open-source program that generates three-dimensional fractal images. Mandelbulber is adept at producing 3D animations. Mandelbulb 3D is free software for creating 3D images featuring many effects found in 3D rendering environments. Incendia is a 3D fractal program that uses Iterated Function Systems (IFS) for fractal generation. Visions of Chaos, Boxplorer and Fragmentarium also render 3D images. The open source GnoFract 4D is available. ChaosPro is a freeware fractal creation program. Fraqtive is an open source cross platform fractal generator. MandelX is a free program for rendering fractal images on Windows. WinCIG, Chaoscope, Tierazon, Fractal Forge and Malsys also generate fractal images. == See also == Logarithmic spiral Software art Chaos game == References == == External links == An Introduction to Fractals by Paul Bourke, May 1991
Wikipedia:Fractional Calculus and Applied Analysis#0
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator D {\displaystyle D} D f ( x ) = d d x f ( x ) , {\displaystyle Df(x)={\frac {d}{dx}}f(x)\,,} and of the integration operator J {\displaystyle J} J f ( x ) = ∫ 0 x f ( s ) d s , {\displaystyle Jf(x)=\int _{0}^{x}f(s)\,ds\,,} and developing a calculus for such operators generalizing the classical one. In this context, the term powers refers to iterative application of a linear operator D {\displaystyle D} to a function f {\displaystyle f} , that is, repeatedly composing D {\displaystyle D} with itself, as in D n ( f ) = ( D ∘ D ∘ D ∘ ⋯ ∘ D ⏟ n ) ( f ) = D ( D ( D ( ⋯ D ⏟ n ( f ) ⋯ ) ) ) . {\displaystyle {\begin{aligned}D^{n}(f)&=(\underbrace {D\circ D\circ D\circ \cdots \circ D} _{n})(f)\\&=\underbrace {D(D(D(\cdots D} _{n}(f)\cdots ))).\end{aligned}}} For example, one may ask for a meaningful interpretation of D = D 1 2 {\displaystyle {\sqrt {D}}=D^{\scriptstyle {\frac {1}{2}}}} as an analogue of the functional square root for the differentiation operator, that is, an expression for some linear operator that, when applied twice to any function, will have the same effect as differentiation. More generally, one can look at the question of defining a linear operator D a {\displaystyle D^{a}} for every real number a {\displaystyle a} in such a way that, when a {\displaystyle a} takes an integer value n ∈ Z {\displaystyle n\in \mathbb {Z} } , it coincides with the usual n {\displaystyle n} -fold differentiation D {\displaystyle D} if n > 0 {\displaystyle n>0} , and with the n {\displaystyle n} -th power of J {\displaystyle J} when n < 0 {\displaystyle n<0} . 
One of the motivations behind the introduction and study of these sorts of extensions of the differentiation operator D {\displaystyle D} is that the sets of operator powers { D a ∣ a ∈ R } {\displaystyle \{D^{a}\mid a\in \mathbb {R} \}} defined in this way are continuous semigroups with parameter a {\displaystyle a} , of which the original discrete semigroup of { D n ∣ n ∈ Z } {\displaystyle \{D^{n}\mid n\in \mathbb {Z} \}} for integer n {\displaystyle n} is a denumerable subgroup: since continuous semigroups have a well developed mathematical theory, they can be applied to other branches of mathematics. Fractional differential equations, also known as extraordinary differential equations, are a generalization of differential equations through the application of fractional calculus. == Historical notes == In applied mathematics and mathematical analysis, a fractional derivative is a derivative of any arbitrary order, real or complex. Its first appearance is in a letter written to Guillaume de l'Hôpital by Gottfried Wilhelm Leibniz in 1695. Around the same time, Leibniz wrote to Johann Bernoulli about derivatives of "general order". In the correspondence between Leibniz and John Wallis in 1697, Wallis's infinite product for π / 2 {\displaystyle \pi /2} is discussed. Leibniz suggested using differential calculus to achieve this result. Leibniz further used the notation d 1 / 2 y {\displaystyle {d}^{1/2}{y}} to denote the derivative of order ½. Fractional calculus was introduced in one of Niels Henrik Abel's early papers where all the elements can be found: the idea of fractional-order integration and differentiation, the mutually inverse relationship between them, the understanding that fractional-order differentiation and integration can be considered as the same generalized operation, and the unified notation for differentiation and integration of arbitrary real order. Independently, the foundations of the subject were laid by Liouville in a paper from 1832. 
Oliver Heaviside introduced the practical use of fractional differential operators in electrical transmission line analysis circa 1890. The theory and applications of fractional calculus expanded greatly over the 19th and 20th centuries, and numerous contributors have given different definitions for fractional derivatives and integrals. == Computing the fractional integral == Let f ( x ) {\displaystyle f(x)} be a function defined for x > 0 {\displaystyle x>0} . Form the definite integral from 0 to x {\displaystyle x} . Call this ( J f ) ( x ) = ∫ 0 x f ( t ) d t . {\displaystyle (Jf)(x)=\int _{0}^{x}f(t)\,dt\,.} Repeating this process gives ( J 2 f ) ( x ) = ∫ 0 x ( J f ) ( t ) d t = ∫ 0 x ( ∫ 0 t f ( s ) d s ) d t , {\displaystyle {\begin{aligned}\left(J^{2}f\right)(x)&=\int _{0}^{x}(Jf)(t)\,dt\\&=\int _{0}^{x}\left(\int _{0}^{t}f(s)\,ds\right)dt\,,\end{aligned}}} and this can be extended arbitrarily. The Cauchy formula for repeated integration, namely ( J n f ) ( x ) = 1 ( n − 1 ) ! ∫ 0 x ( x − t ) n − 1 f ( t ) d t , {\displaystyle \left(J^{n}f\right)(x)={\frac {1}{(n-1)!}}\int _{0}^{x}\left(x-t\right)^{n-1}f(t)\,dt\,,} leads in a straightforward way to a generalization for real n: using the gamma function to remove the discrete nature of the factorial function gives us a natural candidate for applications of the fractional integral operator as ( J α f ) ( x ) = 1 Γ ( α ) ∫ 0 x ( x − t ) α − 1 f ( t ) d t . {\displaystyle \left(J^{\alpha }f\right)(x)={\frac {1}{\Gamma (\alpha )}}\int _{0}^{x}\left(x-t\right)^{\alpha -1}f(t)\,dt\,.} This is in fact a well-defined operator. It is straightforward to show that the J operator satisfies ( J α ) ( J β f ) ( x ) = ( J β ) ( J α f ) ( x ) = ( J α + β f ) ( x ) = 1 Γ ( α + β ) ∫ 0 x ( x − t ) α + β − 1 f ( t ) d t . 
{\displaystyle {\begin{aligned}\left(J^{\alpha }\right)\left(J^{\beta }f\right)(x)&=\left(J^{\beta }\right)\left(J^{\alpha }f\right)(x)\\&=\left(J^{\alpha +\beta }f\right)(x)\\&={\frac {1}{\Gamma (\alpha +\beta )}}\int _{0}^{x}\left(x-t\right)^{\alpha +\beta -1}f(t)\,dt\,.\end{aligned}}} This relationship is called the semigroup property of fractional differintegral operators. === Riemann–Liouville fractional integral === The classical form of fractional calculus is given by the Riemann–Liouville integral, which is essentially what has been described above. The theory of fractional integration for periodic functions (therefore including the "boundary condition" of repeating after a period) is given by the Weyl integral. It is defined on Fourier series, and requires the constant Fourier coefficient to vanish (thus, it applies to functions on the unit circle whose integrals evaluate to zero). The Riemann–Liouville integral exists in two forms, upper and lower. Considering the interval [a,b], the integrals are defined as D a D t − α ⁡ f ( t ) = I a I t α ⁡ f ( t ) = 1 Γ ( α ) ∫ a t ( t − τ ) α − 1 f ( τ ) d τ D t D b − α ⁡ f ( t ) = I t I b α ⁡ f ( t ) = 1 Γ ( α ) ∫ t b ( τ − t ) α − 1 f ( τ ) d τ {\displaystyle {\begin{aligned}\sideset {_{a}}{_{t}^{-\alpha }}Df(t)&=\sideset {_{a}}{_{t}^{\alpha }}If(t)\\&={\frac {1}{\Gamma (\alpha )}}\int _{a}^{t}\left(t-\tau \right)^{\alpha -1}f(\tau )\,d\tau \\\sideset {_{t}}{_{b}^{-\alpha }}Df(t)&=\sideset {_{t}}{_{b}^{\alpha }}If(t)\\&={\frac {1}{\Gamma (\alpha )}}\int _{t}^{b}\left(\tau -t\right)^{\alpha -1}f(\tau )\,d\tau \end{aligned}}} Where the former is valid for t > a and the latter is valid for t < b. It has been suggested that the integral on the positive real axis (i.e. a = 0 {\displaystyle a=0} ) would be more appropriately named the Abel–Riemann integral, on the basis of history of discovery and use, and in the same vein the integral over the entire real line be named Liouville–Weyl integral. 
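For monomials, the fractional integral above has the closed form J^α t^k = Γ(k+1)/Γ(k+1+α) · t^(k+α), which follows directly from the gamma-function generalization of the Cauchy formula and makes the semigroup property easy to check numerically. A small illustrative sketch (representing c·t^k as the pair (c, k)):

```python
from math import gamma, isclose

def frac_integrate_monomial(coeff, k, alpha):
    """Riemann-Liouville integral J^alpha applied to coeff * t**k,
    returned as (new_coeff, new_power):
    J^alpha t^k = Gamma(k+1) / Gamma(k+1+alpha) * t^(k+alpha)."""
    return coeff * gamma(k + 1) / gamma(k + 1 + alpha), k + alpha

# alpha = 1 reproduces the classical integral of t, namely t^2 / 2
c1, k1 = frac_integrate_monomial(1.0, 1, 1.0)

# semigroup property on f(t) = t: applying J^0.7 then J^0.3 equals J^1.0
ca, ka = frac_integrate_monomial(1.0, 1, 0.7)
cb, kb = frac_integrate_monomial(ca, ka, 0.3)
```

Both compositions land on t²/2, numerically confirming J^α J^β = J^(α+β) for this case.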
By contrast the Grünwald–Letnikov derivative starts with the derivative instead of the integral. === Hadamard fractional integral === The Hadamard fractional integral was introduced by Jacques Hadamard and is given by the following formula, D a D t − α ⁡ f ( t ) = 1 Γ ( α ) ∫ a t ( log ⁡ t τ ) α − 1 f ( τ ) d τ τ , t > a . {\displaystyle \sideset {_{a}}{_{t}^{-\alpha }}{\mathbf {D} }f(t)={\frac {1}{\Gamma (\alpha )}}\int _{a}^{t}\left(\log {\frac {t}{\tau }}\right)^{\alpha -1}f(\tau ){\frac {d\tau }{\tau }},\qquad t>a\,.} === Atangana–Baleanu fractional integral (AB fractional integral) === The Atangana–Baleanu fractional integral of a continuous function is defined as: I A a AB I t α ⁡ f ( t ) = 1 − α AB ⁡ ( α ) f ( t ) + α AB ⁡ ( α ) Γ ( α ) ∫ a t ( t − τ ) α − 1 f ( τ ) d τ {\displaystyle \sideset {_{{\hphantom {A}}a}^{\operatorname {AB} }}{_{t}^{\alpha }}If(t)={\frac {1-\alpha }{\operatorname {AB} (\alpha )}}f(t)+{\frac {\alpha }{\operatorname {AB} (\alpha )\Gamma (\alpha )}}\int _{a}^{t}\left(t-\tau \right)^{\alpha -1}f(\tau )\,d\tau } == Fractional derivatives == Unfortunately, the comparable process for the derivative operator D is significantly more complex, but it can be shown that D is neither commutative nor additive in general. Unlike classical Newtonian derivatives, fractional derivatives can be defined in a variety of different ways that often do not all lead to the same result even for smooth functions. Some of these are defined via a fractional integral. Because of the incompatibility of definitions, it is frequently necessary to be explicit about which definition is used. === Riemann–Liouville fractional derivative === The corresponding derivative is calculated using Lagrange's rule for differential operators. To find the αth order derivative, the nth order derivative of the integral of order (n − α) is computed, where n is the smallest integer greater than α (that is, n = ⌈α⌉). 
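Applying this recipe (differentiate n times the (n − α)-fold integral) to a monomial yields the well-known closed form D^α t^k = Γ(k+1)/Γ(k+1−α)·t^(k−α). A short illustration (not part of the article) evaluates the classic half-derivative of f(t) = t:

```python
import math

def rl_derivative_monomial(k, alpha, t):
    # Riemann-Liouville derivative with lower limit 0 applied to f(t) = t**k (k > -1):
    # D^alpha t^k = Gamma(k + 1) / Gamma(k + 1 - alpha) * t**(k - alpha)
    # (not valid when k + 1 - alpha is a nonpositive integer, where Gamma has a pole)
    return math.gamma(k + 1) / math.gamma(k + 1 - alpha) * t ** (k - alpha)

# classic example: the half-derivative of f(t) = t at t = 1 is 2/sqrt(pi)
print(rl_derivative_monomial(1, 0.5, 1.0))
print(2.0 / math.sqrt(math.pi))
```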
The Riemann–Liouville fractional derivative and integral have found a range of applications, for example in solving equations for systems such as tokamaks and in variable-order fractional models. Similar to the definitions for the Riemann–Liouville integral, the derivative has upper and lower variants. D a D t α ⁡ f ( t ) = d n d t n D a D t − ( n − α ) ⁡ f ( t ) = d n d t n I a I t n − α ⁡ f ( t ) D t D b α ⁡ f ( t ) = d n d t n D t D b − ( n − α ) ⁡ f ( t ) = d n d t n I t I b n − α ⁡ f ( t ) {\displaystyle {\begin{aligned}\sideset {_{a}}{_{t}^{\alpha }}Df(t)&={\frac {d^{n}}{dt^{n}}}\sideset {_{a}}{_{t}^{-(n-\alpha )}}Df(t)\\&={\frac {d^{n}}{dt^{n}}}\sideset {_{a}}{_{t}^{n-\alpha }}If(t)\\\sideset {_{t}}{_{b}^{\alpha }}Df(t)&={\frac {d^{n}}{dt^{n}}}\sideset {_{t}}{_{b}^{-(n-\alpha )}}Df(t)\\&={\frac {d^{n}}{dt^{n}}}\sideset {_{t}}{_{b}^{n-\alpha }}If(t)\end{aligned}}} === Caputo fractional derivative === Another option for computing fractional derivatives is the Caputo fractional derivative. It was introduced by Michele Caputo in his 1967 paper. In contrast to the Riemann–Liouville fractional derivative, when solving differential equations using Caputo's definition, it is not necessary to define the fractional order initial conditions. Caputo's definition is illustrated as follows, where again n = ⌈α⌉: D C D t α ⁡ f ( t ) = 1 Γ ( n − α ) ∫ 0 t f ( n ) ( τ ) ( t − τ ) α + 1 − n d τ .
{\displaystyle \sideset {^{C}}{_{t}^{\alpha }}Df(t)={\frac {1}{\Gamma (n-\alpha )}}\int _{0}^{t}{\frac {f^{(n)}(\tau )}{\left(t-\tau \right)^{\alpha +1-n}}}\,d\tau .} There is also a Caputo fractional derivative defined as: D ν f ( t ) = 1 Γ ( n − ν ) ∫ 0 t ( t − u ) ( n − ν − 1 ) f ( n ) ( u ) d u ( n − 1 ) < ν < n {\displaystyle D^{\nu }f(t)={\frac {1}{\Gamma (n-\nu )}}\int _{0}^{t}(t-u)^{(n-\nu -1)}f^{(n)}(u)\,du\qquad (n-1)<\nu <n} which has the advantage that it is zero when f(t) is constant and that its Laplace transform is expressed by means of the initial values of the function and its derivative. Moreover, there is the Caputo fractional derivative of distributed order, defined as D a b D n ⁡ u ⁡ f ( t ) = ∫ a b ϕ ( ν ) [ D ( ν ) f ( t ) ] d ν = ∫ a b [ ϕ ( ν ) Γ ( 1 − ν ) ∫ 0 t ( t − u ) − ν f ′ ( u ) d u ] d ν {\displaystyle {\begin{aligned}\sideset {_{a}^{b}}{^{n}u}Df(t)&=\int _{a}^{b}\phi (\nu )\left[D^{(\nu )}f(t)\right]\,d\nu \\&=\int _{a}^{b}\left[{\frac {\phi (\nu )}{\Gamma (1-\nu )}}\int _{0}^{t}\left(t-u\right)^{-\nu }f'(u)\,du\right]\,d\nu \end{aligned}}} where ϕ(ν) is a weight function; this operator is used to represent mathematically the presence of multiple memory formalisms. === Caputo–Fabrizio fractional derivative === In a paper of 2015, M. Caputo and M. Fabrizio presented a definition of fractional derivative with a non-singular kernel, for a function f ( t ) {\displaystyle f(t)} of class C 1 {\displaystyle C^{1}} , given by: D C a CF D t α ⁡ f ( t ) = 1 1 − α ∫ a t f ′ ( τ ) e ( − α t − τ 1 − α ) d τ , {\displaystyle \sideset {_{{\hphantom {C}}a}^{\text{CF}}}{_{t}^{\alpha }}Df(t)={\frac {1}{1-\alpha }}\int _{a}^{t}f'(\tau )\ e^{\left(-\alpha {\frac {t-\tau }{1-\alpha }}\right)}\ d\tau ,} where a < 0 , α ∈ ( 0 , 1 ] {\displaystyle a<0,\alpha \in (0,1]} . === Atangana–Baleanu fractional derivative === In 2016, Atangana and Baleanu suggested differential operators based on the generalized Mittag-Leffler function E α {\displaystyle E_{\alpha }} .
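The Caputo definition above lends itself to direct quadrature once the kernel singularity is removed by substitution. A minimal Python sketch (illustrative, not from the article), specialized to order 1/2 with lower limit 0:

```python
import math

def caputo_half(fprime, t, n=20_000):
    """Caputo derivative of order 1/2 (lower limit 0) by a midpoint rule.

    The substitution t - tau = s**2 removes the kernel singularity:
        D^(1/2) f(t) = (2 / sqrt(pi)) * int_0^sqrt(t) f'(t - s**2) ds
    """
    h = math.sqrt(t) / n
    total = sum(fprime(t - ((i + 0.5) * h) ** 2) for i in range(n))
    return 2.0 / math.sqrt(math.pi) * total * h

# a constant has f' = 0, so its Caputo derivative vanishes
print(caputo_half(lambda tau: 0.0, 1.0))
# f(t) = t**2: closed form Gamma(3)/Gamma(2.5) * t**1.5
print(caputo_half(lambda tau: 2.0 * tau, 1.0))
print(math.gamma(3) / math.gamma(2.5))
```

The first check reflects the property noted above: the Caputo derivative of a constant is zero, which is one reason this form is preferred for initial-value problems.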
The aim was to introduce fractional differential operators with a non-singular, nonlocal kernel. Their fractional differential operators are given below in the Riemann–Liouville sense and the Caputo sense, respectively. For a function f ( t ) {\displaystyle f(t)} of class C 1 {\displaystyle C^{1}} , the Atangana–Baleanu derivative in the Caputo sense is given by: D A B a ABC D t α ⁡ f ( t ) = AB ⁡ ( α ) 1 − α ∫ a t f ′ ( τ ) E α ( − α ( t − τ ) α 1 − α ) d τ , {\displaystyle \sideset {_{{\hphantom {AB}}a}^{\text{ABC}}}{_{t}^{\alpha }}Df(t)={\frac {\operatorname {AB} (\alpha )}{1-\alpha }}\int _{a}^{t}f'(\tau )E_{\alpha }\left(-\alpha {\frac {(t-\tau )^{\alpha }}{1-\alpha }}\right)d\tau ,} If the function is continuous, the Atangana–Baleanu derivative in the Riemann–Liouville sense is given by: D A B a ABR D t α ⁡ f ( t ) = AB ⁡ ( α ) 1 − α d d t ∫ a t f ( τ ) E α ( − α ( t − τ ) α 1 − α ) d τ , {\displaystyle \sideset {_{{\hphantom {AB}}a}^{\text{ABR}}}{_{t}^{\alpha }}Df(t)={\frac {\operatorname {AB} (\alpha )}{1-\alpha }}{\frac {d}{dt}}\int _{a}^{t}f(\tau )E_{\alpha }\left(-\alpha {\frac {(t-\tau )^{\alpha }}{1-\alpha }}\right)d\tau ,} The kernel used in the Atangana–Baleanu fractional derivative has some properties of a cumulative distribution function. For example, for all α ∈ ( 0 , 1 ] {\displaystyle \alpha \in (0,1]} , the function E α {\displaystyle E_{\alpha }} is increasing on the real line, converges to 0 {\displaystyle 0} at − ∞ {\displaystyle -\infty } , and E α ( 0 ) = 1 {\displaystyle E_{\alpha }(0)=1} . Therefore, the function x ↦ 1 − E α ( − x α ) {\displaystyle x\mapsto 1-E_{\alpha }(-x^{\alpha })} is the cumulative distribution function of a probability measure on the positive real numbers. This distribution, and any of its multiples, is called a Mittag-Leffler distribution of order α {\displaystyle \alpha } . All of these probability distributions are absolutely continuous.
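The Mittag-Leffler function appearing in these kernels is easy to evaluate for moderate arguments from its defining power series E_α(z) = Σ z^k/Γ(αk+1). A small sketch (illustrative, not from the article):

```python
import math

def mittag_leffler(alpha, z, terms=100):
    # one-parameter Mittag-Leffler function E_alpha(z) via its power series;
    # adequate for moderate |z|, but not a robust scheme for large arguments
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

print(mittag_leffler(1.0, 1.0))    # E_1 is the exponential function, so this is ~ e
print(mittag_leffler(0.5, -1.0))   # E_{1/2}(-1) = e * erfc(1), a kernel-style value
```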
In particular, since E 1 {\displaystyle E_{1}} is the exponential function, the Mittag-Leffler distribution of order 1 {\displaystyle 1} is an exponential distribution. However, for α ∈ ( 0 , 1 ) {\displaystyle \alpha \in (0,1)} , the Mittag-Leffler distributions are heavy-tailed. Their Laplace transform is given by: E ( e − λ X α ) = 1 1 + λ α , {\displaystyle \mathbb {E} (e^{-\lambda X_{\alpha }})={\frac {1}{1+\lambda ^{\alpha }}},} This directly implies that, for α ∈ ( 0 , 1 ) {\displaystyle \alpha \in (0,1)} , the expectation is infinite. In addition, these distributions are geometric stable distributions. === Riesz derivative === The Riesz derivative is defined as F { ∂ α u ∂ | x | α } ( k ) = − | k | α F { u } ( k ) , {\displaystyle {\mathcal {F}}\left\{{\frac {\partial ^{\alpha }u}{\partial \left|x\right|^{\alpha }}}\right\}(k)=-\left|k\right|^{\alpha }{\mathcal {F}}\{u\}(k),} where F {\displaystyle {\mathcal {F}}} denotes the Fourier transform. === Conformable fractional derivative === The conformable fractional derivative of a function f {\displaystyle f} of order α {\displaystyle \alpha } is given by T α ( f ) ( t ) = lim ϵ → 0 f ( t + ϵ t 1 − α ) − f ( t ) ϵ {\displaystyle T_{\alpha }(f)(t)=\lim _{\epsilon \rightarrow 0}{\frac {f\left(t+\epsilon t^{1-\alpha }\right)-f(t)}{\epsilon }}} Unlike other definitions of the fractional derivative, the conformable fractional derivative obeys the product and quotient rules and has analogs of Rolle's theorem and the mean value theorem. However, this fractional derivative produces significantly different results compared to the Riemann–Liouville and Caputo fractional derivatives.
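For differentiable f the conformable derivative reduces to t^(1−α)·f′(t), which makes it cheap to compute and check numerically. A short illustration (not from the article; a finite ε stands in for the limit):

```python
import math

def conformable(f, t, alpha, eps=1e-6):
    # difference quotient taken straight from the definition
    # (small-eps approximation of the limit)
    return (f(t + eps * t ** (1.0 - alpha)) - f(t)) / eps

# for differentiable f the conformable derivative equals t**(1-alpha) * f'(t)
t, alpha = 2.0, 0.5
print(conformable(math.sin, t, alpha))
print(t ** (1.0 - alpha) * math.cos(t))
```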
In 2020, Feng Gao and Chunmei Chi defined the improved Caputo-type conformable fractional derivative, which more closely approximates the behavior of the Caputo fractional derivative: a C T ~ a ( f ) ( t ) = lim ϵ → 0 [ ( 1 − α ) ( f ( t ) − f ( a ) ) + α f ( t + ϵ ( t − a ) 1 − α ) − f ( t ) ϵ ] {\displaystyle _{a}^{C}{\widetilde {T}}_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}\left[(1-\alpha )(f(t)-f(a))+\alpha {\frac {f\left(t+\epsilon (t-a)^{1-\alpha }\right)-f(t)}{\epsilon }}\right]} where a {\displaystyle a} and t {\displaystyle t} are real numbers and a < t {\displaystyle a<t} . They also defined the improved Riemann-Liouville-type conformable fractional derivative to similarly approximate the Riemann-Liouville fractional derivative: a R L T ~ a ( f ) ( t ) = lim ϵ → 0 [ ( 1 − α ) f ( t ) + α f ( t + ϵ ( t − a ) 1 − α ) − f ( t ) ϵ ] {\displaystyle _{a}^{RL}{\widetilde {T}}_{a}(f)(t)=\lim _{\epsilon \rightarrow 0}\left[(1-\alpha )f(t)+\alpha {\frac {f\left(t+\epsilon (t-a)^{1-\alpha }\right)-f(t)}{\epsilon }}\right]} where a {\displaystyle a} and t {\displaystyle t} are real numbers and a < t {\displaystyle a<t} . Both improved conformable fractional derivatives have analogs to Rolle's theorem and the interior extremum theorem. 
=== Other types === Classical fractional derivatives include:
Grünwald–Letnikov derivative
Sonin–Letnikov derivative
Liouville derivative
Caputo derivative
Hadamard derivative
Marchaud derivative
Riesz derivative
Miller–Ross derivative
Weyl derivative
Erdélyi–Kober derivative
F α {\displaystyle F^{\alpha }} -derivative
New fractional derivatives include:
Coimbra derivative
Katugampola derivative
Hilfer derivative
Davidson derivative
Chen derivative
Caputo–Fabrizio derivative
Atangana–Baleanu derivative
==== Coimbra derivative ==== The Coimbra derivative is used for physical modeling: a number of applications in both mechanics and optics can be found in the works of Coimbra and collaborators, and additional applications to physical problems and numerical implementations have been studied in a number of works by other authors. For q ( t ) < 1 {\displaystyle q(t)<1} , a C D q ( t ) f ( t ) = 1 Γ [ 1 − q ( t ) ] ∫ 0 + t ( t − τ ) − q ( t ) d f ( τ ) d τ d τ + ( f ( 0 + ) − f ( 0 − ) ) t − q ( t ) Γ ( 1 − q ( t ) ) , {\displaystyle {\begin{aligned}^{\mathbb {C} }_{a}\mathbb {D} ^{q(t)}f(t)={\frac {1}{\Gamma [1-q(t)]}}\int _{0^{+}}^{t}(t-\tau )^{-q(t)}{\frac {d\,f(\tau )}{d\tau }}d\tau \,+\,{\frac {(f(0^{+})-f(0^{-}))\,t^{-q(t)}}{\Gamma (1-q(t))}},\end{aligned}}} where the lower limit a {\displaystyle a} can be taken as either 0 − {\displaystyle 0^{-}} or − ∞ {\displaystyle -\infty } as long as f ( t ) {\displaystyle f(t)} is identically zero from − ∞ {\displaystyle -\infty } to 0 − {\displaystyle 0^{-}} . Note that this operator returns the correct fractional derivatives for all values of t {\displaystyle t} and can be applied to either the dependent function itself f ( t ) {\displaystyle f(t)} with a variable order of the form q ( f ( t ) ) {\displaystyle q(f(t))} or to the independent variable with a variable order of the form q ( t ) {\displaystyle q(t)} .
The Coimbra derivative can be generalized to any order, leading to the Coimbra Generalized Order Differintegration Operator (GODO). For q ( t ) < m {\displaystyle q(t)<m} , − ∞ C D q ( t ) f ( t ) = 1 Γ [ m − q ( t ) ] ∫ 0 + t ( t − τ ) m − 1 − q ( t ) d m f ( τ ) d τ m d τ + ∑ n = 0 m − 1 ( d n f ( t ) d t n | 0 + − d n f ( t ) d t n | 0 − ) t n − q ( t ) Γ [ n + 1 − q ( t ) ] , {\displaystyle {\begin{aligned}^{\mathbb {\quad C} }_{\,\,-\infty }\mathbb {D} ^{q(t)}f(t)={\frac {1}{\Gamma [m-q(t)]}}\int _{0^{+}}^{t}(t-\tau )^{m-1-q(t)}{\frac {d^{m}f(\tau )}{d\tau ^{m}}}d\tau \,+\,\sum _{n=0}^{m-1}{\frac {({\frac {d^{n}f(t)}{dt^{n}}}|_{0^{+}}-{\frac {d^{n}f(t)}{dt^{n}}}|_{0^{-}})\,t^{n-q(t)}}{\Gamma [n+1-q(t)]}},\end{aligned}}} where m {\displaystyle m} is an integer larger than the maximum value of q ( t ) {\displaystyle q(t)} over all values of t {\displaystyle t} . Note that the second (summation) term on the right side of the definition above can be expressed as 1 Γ [ m − q ( t ) ] ∑ n = 0 m − 1 { [ d n f ( t ) d t n | 0 + − d n f ( t ) d t n | 0 − ] t n − q ( t ) ∏ j = n + 1 m − 1 [ j − q ( t ) ] } {\displaystyle {\begin{aligned}{\frac {1}{\Gamma [m-q(t)]}}\sum _{n=0}^{m-1}\{[{\frac {d^{n}\!f(t)}{dt^{n}}}|_{0^{+}}-{\frac {d^{n}\!f(t)}{dt^{n}}}|_{0^{-}}]\,t^{n-q(t)}\prod _{j=n+1}^{m-1}[j-q(t)]\}\end{aligned}}} so as to keep the denominator on the positive branch of the Gamma ( Γ {\displaystyle \Gamma } ) function and to ease numerical calculation. === Nature of the fractional derivative === The a {\displaystyle a} -th derivative of a function f {\displaystyle f} at a point x {\displaystyle x} is a local property only when a {\displaystyle a} is an integer; this is not the case for non-integer power derivatives. In other words, a non-integer fractional derivative of f {\displaystyle f} at x = c {\displaystyle x=c} depends on all values of f {\displaystyle f} , even those far away from c {\displaystyle c} .
Therefore, it is expected that the fractional derivative operation involves some sort of boundary conditions, involving information on the function further out. The fractional derivative of a function of order a {\displaystyle a} is nowadays often defined by means of the Fourier or Mellin integral transforms. == Generalizations == === Erdélyi–Kober operator === The Erdélyi–Kober operator is an integral operator introduced by Arthur Erdélyi (1940) and Hermann Kober (1940), and is given by x − ν − α + 1 Γ ( α ) ∫ 0 x ( t − x ) α − 1 t − α − ν f ( t ) d t , {\displaystyle {\frac {x^{-\nu -\alpha +1}}{\Gamma (\alpha )}}\int _{0}^{x}\left(t-x\right)^{\alpha -1}t^{-\alpha -\nu }f(t)\,dt\,,} which generalizes the Riemann–Liouville fractional integral and the Weyl integral. == Functional calculus == In the context of functional analysis, functions f(D) more general than powers are studied in the functional calculus of spectral theory. The theory of pseudo-differential operators also allows one to consider powers of D. The operators arising are examples of singular integral operators; and the generalisation of the classical theory to higher dimensions is called the theory of Riesz potentials. So there are a number of contemporary theories available, within which fractional calculus can be discussed. See also the Erdélyi–Kober operator, important in special function theory (Kober 1940; Erdélyi 1950–1951). == Applications == === Fractional conservation of mass === As described by Wheatcraft and Meerschaert (2008), a fractional conservation of mass equation is needed to model fluid flow when the control volume is not large enough compared to the scale of heterogeneity and when the flux within the control volume is non-linear.
In the referenced paper, the fractional conservation of mass equation for fluid flow is: − ρ ( ∇ α ⋅ u → ) = Γ ( α + 1 ) Δ x 1 − α ρ ( β s + ϕ β w ) ∂ p ∂ t {\displaystyle -\rho \left(\nabla ^{\alpha }\cdot {\vec {u}}\right)=\Gamma (\alpha +1)\Delta x^{1-\alpha }\rho \left(\beta _{s}+\phi \beta _{w}\right){\frac {\partial p}{\partial t}}} === Electrochemical analysis === When studying the redox behavior of a substrate in solution, a voltage is applied at an electrode surface to force electron transfer between electrode and substrate. The resulting electron transfer is measured as a current. The current depends upon the concentration of substrate at the electrode surface. As substrate is consumed, fresh substrate diffuses to the electrode as described by Fick's laws of diffusion. Taking the Laplace transform of Fick's second law yields an ordinary second-order differential equation (here in dimensionless form): d 2 d x 2 C ( x , s ) = s C ( x , s ) {\displaystyle {\frac {d^{2}}{dx^{2}}}C(x,s)=sC(x,s)} whose solution C(x,s) contains a one-half power dependence on s. Taking the derivative of C(x,s) and then the inverse Laplace transform yields the following relationship: d d x C ( x , t ) = d 1 2 d t 1 2 C ( x , t ) {\displaystyle {\frac {d}{dx}}C(x,t)={\frac {d^{\scriptstyle {\frac {1}{2}}}}{dt^{\scriptstyle {\frac {1}{2}}}}}C(x,t)} which relates the concentration of substrate at the electrode surface to the current. This relationship is applied in electrochemical kinetics to elucidate mechanistic behavior. For example, it has been used to study the rate of dimerization of substrates upon electrochemical reduction. === Groundwater flow problem === In 2013–2014 Atangana et al. described some groundwater flow problems using the concept of a derivative with fractional order. In these works, the classical Darcy law is generalized by regarding the water flow as a function of a non-integer order derivative of the piezometric head. 
This generalized law and the law of conservation of mass are then used to derive a new equation for groundwater flow. === Fractional advection dispersion equation === This equation has been shown useful for modeling contaminant flow in heterogeneous porous media. Atangana and Kilicman extended the fractional advection dispersion equation to a variable-order equation. In their work, the hydrodynamic dispersion equation was generalized using the concept of a variational order derivative. The modified equation was numerically solved via the Crank–Nicolson method. The stability and convergence in numerical simulations showed that the modified equation is more reliable in predicting the movement of pollution in deformable aquifers than equations with constant fractional and integer derivatives. === Time-space fractional diffusion equation models === Anomalous diffusion processes in complex media can be well characterized by using fractional-order diffusion equation models. The time derivative term corresponds to long-time heavy tail decay and the spatial derivative to diffusion nonlocality. The time-space fractional diffusion governing equation can be written as ∂ α u ∂ t α = − K ( − Δ ) β u . {\displaystyle {\frac {\partial ^{\alpha }u}{\partial t^{\alpha }}}=-K(-\Delta )^{\beta }u.} A simple extension of the fractional derivative is the variable-order fractional derivative, in which α and β are changed into α(x, t) and β(x, t). Its applications in anomalous diffusion modeling can be found in the reference. === Structural damping models === Fractional derivatives are used to model viscoelastic damping in certain types of materials like polymers. === PID controllers === Generalizing PID controllers to use fractional orders can increase their degree of freedom.
The new equation relating the control variable u(t) in terms of a measured error value e(t) can be written as u ( t ) = K p e ( t ) + K i D t − α e ( t ) + K d D t β e ( t ) {\displaystyle u(t)=K_{\mathrm {p} }e(t)+K_{\mathrm {i} }D_{t}^{-\alpha }e(t)+K_{\mathrm {d} }D_{t}^{\beta }e(t)} where α and β are positive fractional orders and Kp, Ki, and Kd, all non-negative, denote the coefficients for the proportional, integral, and derivative terms, respectively (sometimes denoted P, I, and D). === Acoustic wave equations for complex media === The propagation of acoustical waves in complex media, such as in biological tissue, commonly implies attenuation obeying a frequency power-law. This kind of phenomenon may be described using a causal wave equation which incorporates fractional time derivatives: ∇ 2 u − 1 c 0 2 ∂ 2 u ∂ t 2 + τ σ α ∂ α ∂ t α ∇ 2 u − τ ϵ β c 0 2 ∂ β + 2 u ∂ t β + 2 = 0 . {\displaystyle \nabla ^{2}u-{\dfrac {1}{c_{0}^{2}}}{\frac {\partial ^{2}u}{\partial t^{2}}}+\tau _{\sigma }^{\alpha }{\dfrac {\partial ^{\alpha }}{\partial t^{\alpha }}}\nabla ^{2}u-{\dfrac {\tau _{\epsilon }^{\beta }}{c_{0}^{2}}}{\dfrac {\partial ^{\beta +2}u}{\partial t^{\beta +2}}}=0\,.} See also Holm & Näsholm (2011) and the references therein. Such models are linked to the commonly recognized hypothesis that multiple relaxation phenomena give rise to the attenuation measured in complex media. This link is further described in Näsholm & Holm (2011b) and in the survey paper, as well as the Acoustic attenuation article. See Holm & Nasholm (2013) for a paper which compares fractional wave equations which model power-law attenuation. This book on power-law attenuation also covers the topic in more detail. Pandey and Holm gave a physical meaning to fractional differential equations by deriving them from physical principles and interpreting the fractional order in terms of the parameters of the acoustical media, for example in fluid-saturated granular unconsolidated marine sediments.
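In numerical work with models like the fractional controller and wave equations above, the operators D^−α and D^β are commonly discretized with the Grünwald–Letnikov difference. A minimal sketch (illustrative, not from the article; sample data and names are ours):

```python
import math

def gl_weights(alpha, n):
    # w_j = (-1)**j * binom(alpha, j), computed by the standard recurrence
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(samples, alpha, h):
    # Grunwald-Letnikov estimate of D^alpha at the final sample,
    # using the whole uniformly spaced history in `samples`
    w = gl_weights(alpha, len(samples))
    return sum(wj * samples[-1 - j] for j, wj in enumerate(w)) / h ** alpha

h = 0.001
samples = [i * h for i in range(1001)]      # f(t) = t on [0, 1]
print(gl_derivative(samples, 0.5, h))       # should approach 2/sqrt(pi) ~ 1.128
print(2.0 / math.sqrt(math.pi))
print(gl_derivative(samples, 1.0, h))       # order 1 recovers the ordinary derivative
```

The scheme is first-order accurate in h and, reflecting the nonlocality discussed earlier, needs the full sample history, which is why practical controllers often truncate the memory ("short-memory principle").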
Interestingly, Pandey and Holm derived Lomnitz's law in seismology and Nutting's law in non-Newtonian rheology using the framework of fractional calculus. Nutting's law was used to model the wave propagation in marine sediments using fractional derivatives. === Fractional Schrödinger equation in quantum theory === The fractional Schrödinger equation, a fundamental equation of fractional quantum mechanics, has the following form: i ℏ ∂ ψ ( r , t ) ∂ t = D α ( − ℏ 2 Δ ) α 2 ψ ( r , t ) + V ( r , t ) ψ ( r , t ) . {\displaystyle i\hbar {\frac {\partial \psi (\mathbf {r} ,t)}{\partial t}}=D_{\alpha }\left(-\hbar ^{2}\Delta \right)^{\frac {\alpha }{2}}\psi (\mathbf {r} ,t)+V(\mathbf {r} ,t)\psi (\mathbf {r} ,t)\,.} where the solution of the equation is the wavefunction ψ(r, t) – the quantum mechanical probability amplitude for the particle to have a given position vector r at any given time t, and ħ is the reduced Planck constant. The potential energy function V(r, t) depends on the system. Further, Δ = ∂ 2 ∂ r 2 {\textstyle \Delta ={\frac {\partial ^{2}}{\partial \mathbf {r} ^{2}}}} is the Laplace operator, and Dα is a scale constant with physical dimension [Dα] = J1 − α·mα·s−α = kg1 − α·m2 − α·sα − 2, (at α = 2, D 2 = 1 2 m {\textstyle D_{2}={\frac {1}{2m}}} for a particle of mass m), and the operator (−ħ2Δ)α/2 is the 3-dimensional fractional quantum Riesz derivative defined by ( − ℏ 2 Δ ) α 2 ψ ( r , t ) = 1 ( 2 π ℏ ) 3 ∫ d 3 p e i ℏ p ⋅ r | p | α φ ( p , t ) . {\displaystyle (-\hbar ^{2}\Delta )^{\frac {\alpha }{2}}\psi (\mathbf {r} ,t)={\frac {1}{(2\pi \hbar )^{3}}}\int d^{3}pe^{{\frac {i}{\hbar }}\mathbf {p} \cdot \mathbf {r} }|\mathbf {p} |^{\alpha }\varphi (\mathbf {p} ,t)\,.} The index α in the fractional Schrödinger equation is the Lévy index, 1 < α ≤ 2. 
==== Variable-order fractional Schrödinger equation ==== As a natural generalization of the fractional Schrödinger equation, the variable-order fractional Schrödinger equation has been exploited to study fractional quantum phenomena: i ℏ ∂ ψ α ( r ) ( r , t ) ∂ t α ( r ) = ( − ℏ 2 Δ ) β ( t ) 2 ψ ( r , t ) + V ( r , t ) ψ ( r , t ) , {\displaystyle i\hbar {\frac {\partial \psi ^{\alpha (\mathbf {r} )}(\mathbf {r} ,t)}{\partial t^{\alpha (\mathbf {r} )}}}=\left(-\hbar ^{2}\Delta \right)^{\frac {\beta (t)}{2}}\psi (\mathbf {r} ,t)+V(\mathbf {r} ,t)\psi (\mathbf {r} ,t),} where Δ = ∂ 2 ∂ r 2 {\textstyle \Delta ={\frac {\partial ^{2}}{\partial \mathbf {r} ^{2}}}} is the Laplace operator and the operator (−ħ2Δ)β(t)/2 is the variable-order fractional quantum Riesz derivative. == See also == Acoustic attenuation Autoregressive fractionally integrated moving average Initialized fractional calculus Nonlocal operator === Other fractional theories === Fractional-order system Fractional Fourier transform Prabhakar function == Notes == == References == == Further reading == === Articles regarding the history of fractional calculus === Debnath, L. (2004). "A brief historical introduction to fractional calculus". International Journal of Mathematical Education in Science and Technology. 35 (4): 487–501. doi:10.1080/00207390410001686571. S2CID 122198977. === Books === Miller, Kenneth S.; Ross, Bertram, eds. (1993). An Introduction to the Fractional Calculus and Fractional Differential Equations. John Wiley & Sons. ISBN 978-0-471-58884-9. Samko, S.; Kilbas, A.A.; Marichev, O. (1993). Fractional Integrals and Derivatives: Theory and Applications. Taylor & Francis Books. ISBN 978-2-88124-864-1. Carpinteri, A.; Mainardi, F., eds. (1998). Fractals and Fractional Calculus in Continuum Mechanics. Springer-Verlag Telos. ISBN 978-3-211-82913-4. Igor Podlubny (27 October 1998). 
Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications. Elsevier. ISBN 978-0-08-053198-4. Tarasov, V.E. (2010). Fractional Dynamics: Applications of Fractional Calculus to Dynamics of Particles, Fields and Media. Nonlinear Physical Science. Springer. doi:10.1007/978-3-642-14003-7. ISBN 978-3-642-14003-7. Li, Changpin; Cai, Min (2019). Theory and Numerical Approximations of Fractional Integrals and Derivatives. SIAM. doi:10.1137/1.9781611975888. ISBN 978-1-61197-587-1. == External links ==
Wikipedia:Fracton#0
A fraction (from Latin: fractus, "broken") represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar, or simple fraction (examples: ⁠1/2⁠ and ⁠17/3⁠) consists of an integer numerator, displayed above a line (or before a slash like 1⁄2), and a non-zero integer denominator, displayed below (or after) that line. If these integers are positive, then the numerator represents a number of equal parts, and the denominator indicates how many of those parts make up a unit or a whole. For example, in the fraction ⁠3/4⁠, the numerator 3 indicates that the fraction represents 3 equal parts, and the denominator 4 indicates that 4 parts make up a whole. The picture to the right illustrates ⁠3/4⁠ of a cake. Fractions can be used to represent ratios and division. Thus the fraction ⁠3/4⁠ can be used to represent the ratio 3:4 (the ratio of the part to the whole), and the division 3 ÷ 4 (three divided by four). We can also write negative fractions, which represent the opposite of a positive fraction. For example, if ⁠1/2⁠ represents a half-dollar profit, then −⁠1/2⁠ represents a half-dollar loss. Because of the rules of division of signed numbers (which state in part that negative divided by positive is negative), −⁠1/2⁠, ⁠−1/2⁠ and ⁠1/−2⁠ all represent the same fraction – negative one-half. And because a negative divided by a negative produces a positive, ⁠−1/−2⁠ represents positive one-half. In mathematics a rational number is a number that can be represented by a fraction of the form ⁠a/b⁠, where a and b are integers and b is not zero; the set of all rational numbers is commonly represented by the symbol ⁠ Q {\displaystyle \mathbb {Q} } ⁠ or Q, which stands for quotient.
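These sign rules can be seen concretely with Python's standard fractions module (an illustration, not part of the article):

```python
from fractions import Fraction

# -1/2, 1/-2 and -(1/2) are the same rational number; Fraction
# normalizes the sign into the numerator
print(Fraction(-1, 2), Fraction(1, -2), -Fraction(1, 2))   # -1/2 -1/2 -1/2

# negative divided by negative is positive
print(Fraction(-1, -2))                                    # 1/2
```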
The term fraction and the notation ⁠a/b⁠ can also be used for mathematical expressions that do not represent a rational number (for example 2 2 {\displaystyle \textstyle {\frac {\sqrt {2}}{2}}} ), and even do not represent any number (for example the rational fraction 1 x {\displaystyle \textstyle {\frac {1}{x}}} ). == Vocabulary == In a fraction, the number of equal parts being described is the numerator (from Latin: numerātor, "counter" or "numberer"), and the type or variety of the parts is the denominator (from Latin: dēnōminātor, "thing that names or designates"). As an example, the fraction ⁠8/5⁠ amounts to eight parts, each of which is of the type named fifth. In terms of division, the numerator corresponds to the dividend, and the denominator corresponds to the divisor. Informally, the numerator and denominator may be distinguished by placement alone, but in formal contexts they are usually separated by a fraction bar. The fraction bar may be horizontal (as in ⁠1/3⁠), oblique (as in 2/5), or diagonal (as in 4⁄9). These marks are respectively known as the horizontal bar; the virgule, slash (US), or stroke (UK); and the fraction bar, solidus, or fraction slash. In typography, fractions stacked vertically are also known as en or nut fractions, and diagonal ones as em or mutton fractions, based on whether a fraction with a single-digit numerator and denominator occupies the proportion of a narrow en square, or a wider em square. In traditional typefounding, a piece of type bearing a complete fraction (e.g. ⁠1/2⁠) was known as a case fraction, while those representing only parts of fractions were called piece fractions. The denominators of English fractions are generally expressed as ordinal numbers, in the plural if the numerator is not 1. (For example, ⁠2/5⁠ and ⁠3/5⁠ are both read as a number of fifths.) 
Exceptions include the denominator 2, which is always read half or halves, the denominator 4, which may be alternatively expressed as quarter/quarters or as fourth/fourths, and the denominator 100, which may be alternatively expressed as hundredth/hundredths or percent. When the denominator is 1, it may be expressed in terms of wholes but is more commonly ignored, with the numerator read out as a whole number. For example, ⁠3/1⁠ may be described as three wholes, or simply as three. When the numerator is 1, it may be omitted (as in a tenth or each quarter). The entire fraction may be expressed as a single composition, in which case it is hyphenated, or as a number of fractions with a numerator of one, in which case they are not. (For example, two-fifths is the fraction ⁠2/5⁠ and two fifths is the same fraction understood as 2 instances of ⁠1/5⁠.) Fractions should always be hyphenated when used as adjectives. Alternatively, a fraction may be described by reading it out as the numerator over the denominator, with the denominator expressed as a cardinal number. (For example, ⁠3/1⁠ may also be expressed as three over one.) The term over is used even in the case of solidus fractions, where the numbers are placed left and right of a slash mark. (For example, 1/2 may be read one-half, one half, or one over two.) Fractions with large denominators that are not powers of ten are often rendered in this fashion (e.g., ⁠1/117⁠ as one over one hundred seventeen), while those with denominators divisible by ten are typically read in the normal ordinal fashion (e.g., ⁠6/1000000⁠ as six-millionths, six millionths, or six one-millionths). == Forms of fractions == === Simple, common, or vulgar fractions === A simple fraction (also known as a common fraction or vulgar fraction) is a rational number written as a/b or ⁠ a b {\displaystyle {\tfrac {a}{b}}} ⁠, where a and b are both integers. As with other fractions, the denominator (b) cannot be zero. 
Examples include ⁠1/2⁠, −⁠8/5⁠, ⁠−8/5⁠, and ⁠8/−5⁠. The term was originally used to distinguish this type of fraction from the sexagesimal fraction used in astronomy. Common fractions can be positive or negative, and they can be proper or improper (see below). Compound fractions, complex fractions, mixed numerals, and decimal expressions (see below) are not common fractions, though (unless irrational) they can be evaluated to a common fraction. A unit fraction is a common fraction with a numerator of 1 (e.g., ⁠1/7⁠). Unit fractions can also be expressed using negative exponents, as in 2⁻¹, which represents 1/2, and 2⁻², which represents 1/2² or 1/4. A dyadic fraction is a common fraction in which the denominator is a power of two, e.g. ⁠1/8⁠ = ⁠1/2³⁠. In Unicode, precomposed fraction characters are in the Number Forms block. === Proper and improper fractions === Common fractions can be classified as either proper or improper. When the numerator and the denominator are both positive, the fraction is called proper if the numerator is less than the denominator, and improper otherwise. The concept of an improper fraction is a late development, with the terminology deriving from the fact that fraction means "piece", so a proper fraction must be less than 1. This was explained in the 17th century textbook The Ground of Arts. In general, a common fraction is said to be a proper fraction if the absolute value of the fraction is strictly less than one—that is, if the fraction is greater than −1 and less than 1. It is said to be an improper fraction, or sometimes top-heavy fraction, if the absolute value of the fraction is greater than or equal to 1. Examples of proper fractions are 2/3, −3/4, and 4/9, whereas examples of improper fractions are 9/4, −4/3, and 3/3. As described below, any improper fraction can be converted to a mixed number (integer plus proper fraction), and vice versa.
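The improper-to-mixed conversion is just integer division with remainder; a short illustration using Python's standard fractions module (not part of the article):

```python
from fractions import Fraction

improper = Fraction(9, 4)
# divide numerator by denominator: quotient is the whole part,
# remainder over the denominator is the proper fractional part
whole, remainder = divmod(improper.numerator, improper.denominator)
print(whole, Fraction(remainder, improper.denominator))    # 2 1/4

# and back again: 2 + 1/4 == 9/4
print(whole + Fraction(remainder, improper.denominator))
```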
=== Reciprocals and the invisible denominator === The reciprocal of a fraction is another fraction with the numerator and denominator exchanged. The reciprocal of ⁠3/7⁠, for instance, is ⁠7/3⁠. The product of a non-zero fraction and its reciprocal is 1, hence the reciprocal is the multiplicative inverse of a fraction. The reciprocal of a proper fraction is improper, and the reciprocal of an improper fraction not equal to 1 (that is, numerator and denominator are not equal) is a proper fraction. When the numerator and denominator of a fraction are equal (for example, ⁠7/7⁠), its value is 1, and the fraction therefore is improper. Its reciprocal is identical and hence also equal to 1 and improper. Any integer can be written as a fraction with the number one as denominator. For example, 17 can be written as ⁠17/1⁠, where 1 is sometimes referred to as the invisible denominator. Therefore, every fraction and every integer, except for zero, has a reciprocal. For example, the reciprocal of 17 is ⁠1/17⁠. === Ratios === A ratio is a relationship between two or more numbers that can be sometimes expressed as a fraction. Typically, a number of items are grouped and compared in a ratio, specifying numerically the relationship between each group. Ratios are expressed as "group 1 to group 2 ... to group n". For example, if a car lot had 12 vehicles, of which 2 are white, 6 are red, and 4 are yellow, then the ratio of red to white to yellow cars is 6 to 2 to 4. The ratio of yellow cars to white cars is 4 to 2 and may be expressed as 4:2 or 2:1. A ratio is often converted to a fraction when it is expressed as a ratio to the whole. In the above example, the ratio of yellow cars to all the cars on the lot is 4:12 or 1:3. We can convert these ratios to a fraction, and say that ⁠4/12⁠ of the cars or ⁠1/3⁠ of the cars in the lot are yellow. Therefore, if a person randomly chose one car on the lot, then there is a one in three chance or probability that it would be yellow. 
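The car-lot example translates directly into code; a short sketch using Python's fractions module (the variable names are illustrative):

```python
from fractions import Fraction

yellow, white, red = 4, 2, 6
total = yellow + white + red            # 12 vehicles on the lot

# The ratio of yellow cars to the whole, expressed as a fraction:
p_yellow = Fraction(yellow, total)      # Fraction reduces 4/12 automatically
print(p_yellow)                         # 1/3 -- a one-in-three chance
```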
=== Decimal fractions and percentages === A decimal fraction is a fraction whose denominator is an integer power of ten, commonly expressed using decimal notation, in which the denominator is not given explicitly but is implied by the number of digits to the right of a decimal separator. The separator can be a period ⟨.⟩, interpunct ⟨·⟩, or comma ⟨,⟩, depending on locale. (For examples, see Decimal separator.) Thus, for 0.75 the numerator is 75 and the implied denominator is 10 to the second power, namely, 100, because there are two digits to the right of the decimal separator. In decimal numbers greater than 1 (such as 3.75), the fractional part of the number is expressed by the digits to the right of the separator (with a value of 0.75 in this case). 3.75 can be written either as an improper fraction, ⁠375/100⁠, or as a mixed number, ⁠3+75/100⁠. Decimal fractions can also be expressed using scientific notation with negative exponents, such as 6.023×10−7, a convenient alternative to the unwieldy 0.0000006023. The 10−7 represents a denominator of 107. Dividing by 107 moves the decimal point seven places to the left. A decimal fraction with infinitely many digits to the right of the decimal separator represents an infinite series. For example, ⁠1/3⁠ = 0.333... represents the infinite series 3/10 + 3/100 + 3/1000 + .... Another kind of fraction is the percentage (from Latin: per centum, meaning "per hundred", represented by the symbol %), in which the implied denominator is always 100. Thus, 51% means 51⁄100. Percentages greater than 100 or less than zero are treated in the same way, e.g. 311% means 311⁄100 and −27% means −27⁄100. The related concept of permille, or parts per thousand (ppt), means a denominator of 1000, and this parts-per notation is commonly used with larger denominators, such as million and billion, e.g. 75 parts per million (ppm) means that the proportion is ⁠75/1000000⁠. 
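A repeating decimal such as 0.333... can be turned back into a fraction by summing the geometric series it represents; a minimal sketch (the function name and the three-part argument split are illustrative):

```python
from fractions import Fraction

def repeating_to_fraction(integer: str, non_repeating: str, repeating: str) -> Fraction:
    """Convert integer.non_repeating followed by endlessly repeated digits to a Fraction.

    E.g. 0.1523987987987... is repeating_to_fraction("0", "1523", "987").
    """
    # Align two copies of the number so the repeating tails cancel on
    # subtraction -- equivalent to summing the geometric series directly.
    shifted = int(integer + non_repeating + repeating)
    unshifted = int(integer + non_repeating)
    denominator = (10 ** len(repeating) - 1) * 10 ** len(non_repeating)
    return Fraction(shifted - unshifted, denominator)

print(repeating_to_fraction("0", "", "3"))   # 1/3, i.e. 0.333...
```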
The choice between fraction and decimal notation is often a matter of taste and context. Fractions are used most often when the denominator is relatively small. By mental calculation, it is easier to multiply 16 by 3⁄16 than to do the same calculation using the fraction's decimal equivalent (0.1875). And it is more precise (exact, in fact) to multiply 15 by 1⁄3, for example, than it is to multiply 15 by any decimal approximation of one third. Monetary values are commonly expressed as decimal fractions with denominator 100, i.e., with two digits after the decimal separator, for example $3.75. However, as noted above, in pre-decimal British currency, shillings and pence were often given the form (but not the meaning) of a fraction, as, for example, "3/6", commonly read three and six, means three shillings and sixpence and has no relationship to the fraction three sixths. === Mixed numbers === A mixed number (also called a mixed fraction or mixed numeral) is the sum of a non-zero integer and a proper fraction, conventionally written by juxtaposition (or concatenation) of the two parts, without the use of an intermediate plus (+) or minus (−) sign. When the fraction is written horizontally, a space is added between the integer and fraction to separate them. As a basic example, two entire cakes and three quarters of another cake might be written as 2 3 4 {\displaystyle 2{\tfrac {3}{4}}} cakes or 2 3 / 4 {\displaystyle 2\ \,3/4} cakes, with the numeral 2 {\displaystyle 2} representing the whole cakes and the fraction 3 4 {\displaystyle {\tfrac {3}{4}}} representing the additional partial cake juxtaposed; this is more concise than the more explicit notation 2 + 3 4 {\displaystyle 2+{\tfrac {3}{4}}} cakes. The mixed number ⁠2+3/4⁠ is spoken two and three quarters or two and three fourths, with the integer and fraction portions connected by the word and. 
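Conversion between a mixed number and its improper-fraction form is plain arithmetic; a brief sketch in Python (the function names are illustrative):

```python
from fractions import Fraction

def to_improper(whole: int, frac: Fraction) -> Fraction:
    """2 and 3/4 -> 11/4: the mixed number is the sum of its two parts."""
    return whole + frac

def to_mixed(f: Fraction) -> tuple[int, Fraction]:
    """11/4 -> (2, 3/4), by division with remainder."""
    whole, remainder = divmod(f.numerator, f.denominator)
    return whole, Fraction(remainder, f.denominator)

print(to_improper(2, Fraction(3, 4)))    # 11/4
print(to_mixed(Fraction(11, 4)))         # (2, Fraction(3, 4))
```

For negative fractions Python's divmod floors, so a sign-handling step would be needed to reproduce the convention that negation applies to the whole numeral; the sketch covers the positive case.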
Subtraction or negation is applied to the entire mixed numeral, so − 2 3 4 {\displaystyle -2{\tfrac {3}{4}}} means − ( 2 + 3 4 ) . {\displaystyle -{\bigl (}2+{\tfrac {3}{4}}{\bigr )}.} Any mixed number can be converted to an improper fraction by applying the rules of adding unlike quantities. For example, 2 + 3 4 = 8 4 + 3 4 = 11 4 . {\displaystyle 2+{\tfrac {3}{4}}={\tfrac {8}{4}}+{\tfrac {3}{4}}={\tfrac {11}{4}}.} Conversely, an improper fraction can be converted to a mixed number using division with remainder, with the proper fraction consisting of the remainder divided by the divisor. For example, since 4 goes into 11 twice, with 3 left over, 11 4 = 2 + 3 4 . {\displaystyle {\tfrac {11}{4}}=2+{\tfrac {3}{4}}.} In primary school, teachers often insist that every fractional result should be expressed as a mixed number. Outside school, mixed numbers are commonly used for describing measurements, for instance ⁠2+1/2⁠ hours or 5 3/16 inches, and remain widespread in daily life and in trades, especially in regions that do not use the decimalized metric system. However, scientific measurements typically use the metric system, which is based on decimal fractions, and starting from the secondary school level, mathematics pedagogy treats every fraction uniformly as a rational number, the quotient ⁠p/q⁠ of integers, leaving behind the concepts of improper fraction and mixed number. College students with years of mathematical training are sometimes confused when re-encountering mixed numbers because they are used to the convention that juxtaposition in algebraic expressions means multiplication. === Historical notions === ==== Egyptian fraction ==== An Egyptian fraction is the sum of distinct positive unit fractions, for example 1 2 + 1 3 {\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{3}}} . 
This definition derives from the fact that the ancient Egyptians expressed all fractions except 1 2 {\displaystyle {\tfrac {1}{2}}} , 2 3 {\displaystyle {\tfrac {2}{3}}} and 3 4 {\displaystyle {\tfrac {3}{4}}} in this manner. Every positive rational number can be expanded as an Egyptian fraction. For example, 5 7 {\displaystyle {\tfrac {5}{7}}} can be written as 1 2 + 1 6 + 1 21 . {\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{6}}+{\tfrac {1}{21}}.} Any positive rational number can be written as a sum of unit fractions in infinitely many ways. Two ways to write 13 17 {\displaystyle {\tfrac {13}{17}}} are 1 2 + 1 4 + 1 68 {\displaystyle {\tfrac {1}{2}}+{\tfrac {1}{4}}+{\tfrac {1}{68}}} and 1 3 + 1 4 + 1 6 + 1 68 {\displaystyle {\tfrac {1}{3}}+{\tfrac {1}{4}}+{\tfrac {1}{6}}+{\tfrac {1}{68}}} . ==== Complex and compound fractions ==== In a complex fraction, either the numerator, or the denominator, or both, is a fraction or a mixed number, corresponding to division of fractions. For example, 1 / 2 1 / 3 {\displaystyle {\tfrac {1/2}{1/3}}} and ( 12 3 4 ) / 26 {\displaystyle {\bigl (}12{\tfrac {3}{4}}{\bigr )}{\big /}26} are complex fractions. To interpret nested fractions written stacked with horizontal fraction bars, treat shorter bars as nested inside longer bars. Complex fractions can be simplified using multiplication by the reciprocal, as described below at § Division. For example: 1 2 1 3 = 1 2 ÷ 1 3 = 1 2 × 3 1 = 3 2 , 3 2 5 = 3 2 ÷ 5 = 3 2 × 1 5 = 3 10 , 12 3 4 26 = 12 × 4 + 3 4 ÷ 26 = 12 × 4 + 3 4 × 1 26 = 51 104 . 
{\displaystyle {\begin{aligned}{\frac {\;\!{\tfrac {1}{2}}\;\!}{\tfrac {1}{3}}}&={\frac {1}{2}}\div {\frac {1}{3}}={\frac {1}{2}}\times {\frac {3}{1}}={\frac {3}{2}},\qquad {\frac {\;\!{\tfrac {3}{2}}\;\!}{5}}={\frac {3}{2}}\div 5={\frac {3}{2}}\times {\frac {1}{5}}={\frac {3}{10}},\\[10mu]{\frac {12{\tfrac {3}{4}}}{26}}&={\frac {12\times 4+3}{4}}\div 26={\frac {12\times 4+3}{4}}\times {\frac {1}{26}}={\frac {51}{104}}.\end{aligned}}} A complex fraction should never be written without an obvious marker showing which fraction is nested inside the other, as such expressions are ambiguous. For example, the expression 5 / 10 / 20 {\displaystyle 5/10/20} could be plausibly interpreted as either 5 10 / 20 = 1 40 {\displaystyle {\tfrac {5}{10}}{\big /}20={\tfrac {1}{40}}} or as 5 / 10 20 = 10. {\displaystyle 5{\big /}{\tfrac {10}{20}}=10.} The meaning can be made explicit by writing the fractions using distinct separators or by adding explicit parentheses, in this instance ( 5 / 10 ) / 20 {\displaystyle (5/10){\big /}20} or 5 / ( 10 / 20 ) . {\displaystyle 5{\big /}(10/20).} A compound fraction is a fraction of a fraction, or any number of fractions connected with the word of, corresponding to multiplication of fractions. To reduce a compound fraction to a simple fraction, just carry out the multiplication (see § Multiplication). For example, 3 4 {\displaystyle {\tfrac {3}{4}}} of 5 7 {\displaystyle {\tfrac {5}{7}}} is a compound fraction, corresponding to 3 4 × 5 7 = 15 28 {\displaystyle {\tfrac {3}{4}}\times {\tfrac {5}{7}}={\tfrac {15}{28}}} . The terms compound fraction and complex fraction are closely related and sometimes one is used as a synonym for the other. (For example, the compound fraction 3 4 × 5 7 {\displaystyle {\tfrac {3}{4}}\times {\tfrac {5}{7}}} is equivalent to the complex fraction ⁠ 3 / 4 7 / 5 {\displaystyle {\tfrac {3/4}{7/5}}} ⁠.) 
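Both points above, simplification by multiplying with the reciprocal and the ambiguity of 5/10/20, can be illustrated in Python, which, like most languages, groups repeated division left to right:

```python
from fractions import Fraction

# (1/2) / (1/3) = (1/2) * (3/1) = 3/2, as in the worked example above.
print(Fraction(1, 2) / Fraction(1, 3))        # 3/2

# Unparenthesized chains are grouped left-to-right, picking one reading:
print(Fraction(5) / 10 / 20)                  # 1/40, i.e. (5/10)/20
print(Fraction(5) / (Fraction(10) / 20))      # 10,   i.e. 5/(10/20)
```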
Nevertheless, the terms complex fraction and compound fraction may both be considered outdated and are now used inconsistently, sometimes even as synonyms for each other or for mixed numerals. They have lost their meaning as technical terms and the attributes complex and compound tend to be used in their everyday meaning of consisting of parts. == Arithmetic with fractions == Like whole numbers, fractions obey the commutative, associative, and distributive laws, and the rule against division by zero. Mixed-number arithmetic can be performed either by converting each mixed number to an improper fraction, or by treating each as a sum of integer and fractional parts. === Equivalent fractions === Multiplying the numerator and denominator of a fraction by the same (non-zero) number results in a fraction that is equivalent to the original fraction. This is true because for any non-zero number n {\displaystyle n} , the fraction n n {\displaystyle {\tfrac {n}{n}}} equals 1. Therefore, multiplying by n n {\displaystyle {\tfrac {n}{n}}} is the same as multiplying by one, and any number multiplied by one has the same value as the original number. By way of an example, start with the fraction ⁠ 1 2 {\displaystyle {\tfrac {1}{2}}} ⁠. When the numerator and denominator are both multiplied by 2, the result is ⁠2/4⁠, which has the same value (0.5) as ⁠1/2⁠. To picture this visually, imagine cutting a cake into four pieces; two of the pieces together (⁠2/4⁠) make up half the cake (⁠1/2⁠). ==== Simplifying (reducing) fractions ==== Dividing the numerator and denominator of a fraction by the same non-zero number yields an equivalent fraction: if the numerator and the denominator of a fraction are both divisible by a number (called a factor) greater than 1, then the fraction can be reduced to an equivalent fraction with a smaller numerator and a smaller denominator. 
For example, if both the numerator and the denominator of the fraction a b {\displaystyle {\tfrac {a}{b}}} are divisible by ⁠ c {\displaystyle c} ⁠, then they can be written as a = c d {\displaystyle a=cd} , b = c e {\displaystyle b=ce} , and the fraction becomes ⁠cd/ce⁠, which can be reduced by dividing both the numerator and denominator by c to give the reduced fraction ⁠d/e⁠. If one takes for c the greatest common divisor of the numerator and the denominator, one gets the equivalent fraction whose numerator and denominator have the lowest absolute values. One says that the fraction has been reduced to its lowest terms. If the numerator and the denominator do not share any factor greater than 1, the fraction is already reduced to its lowest terms, and it is said to be irreducible, reduced, or in simplest terms. For example, 3 9 {\displaystyle {\tfrac {3}{9}}} is not in lowest terms because both 3 and 9 can be exactly divided by 3. In contrast, 3 8 {\displaystyle {\tfrac {3}{8}}} is in lowest terms—the only positive integer that goes into both 3 and 8 evenly is 1. Using these rules, we can show that ⁠5/10⁠ = ⁠1/2⁠ = ⁠10/20⁠ = ⁠50/100⁠, for example. As another example, since the greatest common divisor of 63 and 462 is 21, the fraction ⁠63/462⁠ can be reduced to lowest terms by dividing the numerator and denominator by 21: 63 462 = 63 ÷ 21 462 ÷ 21 = 3 22 . {\displaystyle {\frac {63}{462}}={\frac {63\,\div \,21}{462\,\div \,21}}={\frac {3}{22}}.} The Euclidean algorithm gives a method for finding the greatest common divisor of any two integers. === Comparing fractions === Comparing fractions with the same positive denominator yields the same result as comparing the numerators: 3 4 > 2 4 {\displaystyle {\tfrac {3}{4}}>{\tfrac {2}{4}}} because 3 > 2, and the equal denominators 4 {\displaystyle 4} are positive. 
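The gcd-based reduction to lowest terms described above is a one-liner with Python's math.gcd; a minimal sketch (the function name is illustrative):

```python
from math import gcd

def lowest_terms(numerator: int, denominator: int) -> tuple[int, int]:
    """Divide out the greatest common divisor, e.g. 63/462 -> 3/22."""
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(lowest_terms(63, 462))   # (3, 22): gcd(63, 462) is 21
print(lowest_terms(3, 8))      # (3, 8): already irreducible
```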
If the equal denominators are negative, then the opposite result of comparing the numerators holds for the fractions: 3 − 4 < 2 − 4 because a − b = − a b and − 3 < − 2. {\displaystyle {\tfrac {3}{-4}}<{\tfrac {2}{-4}}{\text{ because }}{\tfrac {a}{-b}}={\tfrac {-a}{b}}{\text{ and }}-3<-2.} If two positive fractions have the same numerator, then the fraction with the smaller denominator is the larger number. When a whole is divided into equal pieces, if fewer equal pieces are needed to make up the whole, then each piece must be larger. When two positive fractions have the same numerator, they represent the same number of parts, but in the fraction with the smaller denominator, the parts are larger. One way to compare fractions with different numerators and denominators is to find a common denominator. To compare a b {\displaystyle {\tfrac {a}{b}}} and c d {\displaystyle {\tfrac {c}{d}}} , these are converted to a ⋅ d b ⋅ d {\displaystyle {\tfrac {a\cdot d}{b\cdot d}}} and b ⋅ c b ⋅ d {\displaystyle {\tfrac {b\cdot c}{b\cdot d}}} (where the dot signifies multiplication and is an alternative symbol to ×). Then bd is a common denominator and the numerators ad and bc can be compared. It is not necessary to determine the value of the common denominator to compare fractions – one can just compare ad and bc, without evaluating bd, e.g., comparing 2 3 {\displaystyle {\tfrac {2}{3}}} ? 1 2 {\displaystyle {\tfrac {1}{2}}} gives 4 6 > 3 6 {\displaystyle {\tfrac {4}{6}}>{\tfrac {3}{6}}} . For the more laborious question 5 18 {\displaystyle {\tfrac {5}{18}}} ? 4 17 , {\displaystyle {\tfrac {4}{17}},} multiply top and bottom of each fraction by the denominator of the other fraction, to get a common denominator, yielding 5 × 17 18 × 17 {\displaystyle {\tfrac {5\times 17}{18\times 17}}} ? 18 × 4 18 × 17 {\displaystyle {\tfrac {18\times 4}{18\times 17}}} . It is not necessary to calculate 18 × 17 {\displaystyle 18\times 17} – only the numerators need to be compared. 
Since 5×17 (= 85) is greater than 4×18 (= 72), the result of comparing is ⁠ 5 18 > 4 17 {\displaystyle {\tfrac {5}{18}}>{\tfrac {4}{17}}} ⁠. Because every negative number, including negative fractions, is less than zero, and every positive number, including positive fractions, is greater than zero, it follows that any negative fraction is less than any positive fraction. Together with the above rules, this allows all possible fractions to be compared. === Addition === The first rule of addition is that only like quantities can be added; for example, various quantities of quarters. Unlike quantities, such as adding thirds to quarters, must first be converted to like quantities as described below: Imagine a pocket containing two quarters, and another pocket containing three quarters; in total, there are five quarters. Since four quarters is equivalent to one (dollar), this can be represented as follows: 2 4 + 3 4 = 5 4 = 1 1 4 {\displaystyle {\tfrac {2}{4}}+{\tfrac {3}{4}}={\tfrac {5}{4}}=1{\tfrac {1}{4}}} . ==== Adding unlike quantities ==== To add fractions containing unlike quantities (e.g. quarters and thirds), it is necessary to convert all amounts to like quantities. It is easy to work out the chosen type of fraction to convert to; simply multiply together the two denominators (bottom number) of each fraction. In the case of an integer, apply the invisible denominator 1. For adding quarters to thirds, both types of fraction are converted to twelfths, thus: 1 4 + 1 3 = 1 × 3 4 × 3 + 1 × 4 3 × 4 = 3 12 + 4 12 = 7 12 . {\displaystyle {\frac {1}{4}}+{\frac {1}{3}}={\frac {1\times 3}{4\times 3}}+{\frac {1\times 4}{3\times 4}}={\frac {3}{12}}+{\frac {4}{12}}={\frac {7}{12}}.} Consider adding the following two quantities: 3 5 + 2 3 . 
{\displaystyle {\frac {3}{5}}+{\frac {2}{3}}.} First, convert 3 5 {\displaystyle {\tfrac {3}{5}}} into fifteenths by multiplying both the numerator and denominator by three: ⁠ 3 5 × 3 3 = 9 15 {\displaystyle {\tfrac {3}{5}}\times {\tfrac {3}{3}}={\tfrac {9}{15}}} ⁠. Since ⁠3/3⁠ equals 1, multiplication by ⁠3/3⁠ does not change the value of the fraction. Second, convert ⁠2/3⁠ into fifteenths by multiplying both the numerator and denominator by five: ⁠ 2 3 × 5 5 = 10 15 {\displaystyle {\tfrac {2}{3}}\times {\tfrac {5}{5}}={\tfrac {10}{15}}} ⁠. Now it can be seen that 3 5 + 2 3 {\displaystyle {\frac {3}{5}}+{\frac {2}{3}}} is equivalent to 9 15 + 10 15 = 19 15 = 1 4 15 . {\displaystyle {\frac {9}{15}}+{\frac {10}{15}}={\frac {19}{15}}=1{\frac {4}{15}}.} This method can be expressed algebraically: a b + c d = a d + c b b d . {\displaystyle {\frac {a}{b}}+{\frac {c}{d}}={\frac {ad+cb}{bd}}.} This algebraic method always works, thereby guaranteeing that the sum of simple fractions is always again a simple fraction. However, if the single denominators contain a common factor, a smaller denominator than the product of these can be used. For example, when adding 3 4 {\displaystyle {\tfrac {3}{4}}} and 5 6 {\displaystyle {\tfrac {5}{6}}} the single denominators have a common factor 2, and therefore, instead of the denominator 24 (4 × 6), the halved denominator 12 may be used, not only reducing the denominator in the result, but also the factors in the numerator. 3 4 + 5 6 = 3 ⋅ 6 4 ⋅ 6 + 4 ⋅ 5 4 ⋅ 6 = 18 24 + 20 24 = 19 12 = 3 ⋅ 3 4 ⋅ 3 + 2 ⋅ 5 2 ⋅ 6 = 9 12 + 10 12 = 19 12 . 
{\displaystyle {\begin{alignedat}{3}{\frac {3}{4}}+{\frac {5}{6}}&={\frac {3\cdot 6}{4\cdot 6}}+{\frac {4\cdot 5}{4\cdot 6}}={\frac {18}{24}}+{\frac {20}{24}}&&={\frac {19}{12}}\\[10mu]&={\frac {3\cdot 3}{4\cdot 3}}+{\frac {2\cdot 5}{2\cdot 6}}={\frac {9}{12}}+{\frac {10}{12}}&&={\frac {19}{12}}.\end{alignedat}}} The smallest possible denominator is given by the least common multiple of the single denominators, which results from dividing the product of the denominators by all of their common factors. This is called the least common denominator. === Subtraction === The process for subtracting fractions is, in essence, the same as that of adding them: find a common denominator, and change each fraction to an equivalent fraction with the chosen common denominator. The resulting fraction will have that denominator, and its numerator will be the result of subtracting the numerators of the original fractions. For instance, 2 3 − 1 2 = 4 6 − 3 6 = 1 6 . {\displaystyle {\tfrac {2}{3}}-{\tfrac {1}{2}}={\tfrac {4}{6}}-{\tfrac {3}{6}}={\tfrac {1}{6}}.} To subtract a mixed number, an extra one can be borrowed from the minuend, for instance 4 − 2 3 4 = ( 4 − 2 − 1 ) + ( 1 − 3 4 ) = 1 1 4 . {\displaystyle 4-2{\tfrac {3}{4}}=(4-2-1)+{\bigl (}1-{\tfrac {3}{4}}{\bigr )}=1{\tfrac {1}{4}}.} === Multiplication === ==== Multiplying a fraction by another fraction ==== To multiply fractions, multiply the numerators and multiply the denominators. Thus: 2 3 × 3 4 = 6 12 . {\displaystyle {\frac {2}{3}}\times {\frac {3}{4}}={\frac {6}{12}}.} To explain the process, consider one third of one quarter. Using the example of a cake, if three small slices of equal size make up a quarter, and four quarters make up a whole, twelve of these small, equal slices make up a whole. Therefore, a third of a quarter is a twelfth. Now consider the numerators. The first fraction, two thirds, is twice as large as one third. 
Since one third of a quarter is one twelfth, two thirds of a quarter is two twelfths. The second fraction, three quarters, is three times as large as one quarter, so two thirds of three quarters is three times as large as two thirds of one quarter. Thus two thirds times three quarters is six twelfths. A short cut for multiplying fractions is called cancellation. Effectively the answer is reduced to lowest terms during multiplication. For example: 2 3 × 3 4 = 2 1 3 1 × 3 1 4 2 = 1 1 × 1 2 = 1 2 . {\displaystyle {\frac {2}{3}}\times {\frac {3}{4}}={\frac {{\color {RoyalBlue}{\cancel {\color {Black}2}}}^{~1}}{{\color {RedOrange}{\cancel {\color {Black}3}}}^{~1}}}\times {\frac {{\color {RedOrange}{\cancel {\color {Black}3}}}^{~1}}{{\color {RoyalBlue}{\cancel {\color {Black}4}}}^{~2}}}={\frac {1}{1}}\times {\frac {1}{2}}={\frac {1}{2}}.} A two is a common factor in both the numerator of the left fraction and the denominator of the right and is divided out of both. Three is a common factor of the left denominator and right numerator and is divided out of both. ==== Multiplying a fraction by a whole number ==== Since a whole number can be rewritten as itself divided by 1, normal fraction multiplication rules can still apply. For example, 6 × 3 4 = 6 1 × 3 4 = 18 4 . {\displaystyle 6\times {\tfrac {3}{4}}={\tfrac {6}{1}}\times {\tfrac {3}{4}}={\tfrac {18}{4}}.} This method works because the fraction 6/1 means six equal parts, each one of which is a whole. ==== Multiplying mixed numbers ==== The product of mixed numbers can be computed by converting each to an improper fraction. For example: 3 × 2 3 4 = 3 1 × 2 × 4 + 3 4 = 33 4 = 8 1 4 . {\displaystyle 3\times 2{\frac {3}{4}}={\frac {3}{1}}\times {\frac {2\times 4+3}{4}}={\frac {33}{4}}=8{\frac {1}{4}}.} Alternately, mixed numbers can be treated as sums, and multiplied as binomials. In this example, 3 × 2 3 4 = 3 × 2 + 3 × 3 4 = 6 + 9 4 = 8 1 4 . 
{\displaystyle 3\times 2{\frac {3}{4}}=3\times 2+3\times {\frac {3}{4}}=6+{\frac {9}{4}}=8{\frac {1}{4}}.} === Division === To divide a fraction by a whole number, you may either divide the numerator by the number, if it goes evenly into the numerator, or multiply the denominator by the number. For example, 10 3 ÷ 5 {\displaystyle {\tfrac {10}{3}}\div 5} equals 2 3 {\displaystyle {\tfrac {2}{3}}} and also equals 10 3 ⋅ 5 = 10 15 {\displaystyle {\tfrac {10}{3\cdot 5}}={\tfrac {10}{15}}} , which reduces to 2 3 {\displaystyle {\tfrac {2}{3}}} . To divide a number by a fraction, multiply that number by the reciprocal of that fraction. Thus, 1 2 ÷ 3 4 = 1 2 × 4 3 = 1 ⋅ 4 2 ⋅ 3 = 2 3 {\displaystyle {\tfrac {1}{2}}\div {\tfrac {3}{4}}={\tfrac {1}{2}}\times {\tfrac {4}{3}}={\tfrac {1\cdot 4}{2\cdot 3}}={\tfrac {2}{3}}} . === Converting between fractions and decimal notation === To change a common fraction to decimal notation, do a long division of the numerator by the denominator (this is idiomatically also phrased as "divide the denominator into the numerator"), and round the result to the desired precision. For example, to change ⁠1/4⁠ to a decimal expression, divide 1 by 4 ("4 into 1"), to obtain exactly 0.25. To change ⁠1/3⁠ to a decimal expression, divide 1.000... by 3 ("3 into 1.000..."), and stop when the desired precision is obtained, e.g., at four places after the decimal separator (ten-thousandths) as 0.3333. The fraction ⁠1/4⁠ is expressed exactly with only two digits after the decimal separator, while the fraction ⁠1/3⁠ cannot be written exactly as a decimal with a finite number of digits. A decimal expression can be converted to a fraction by removing the decimal separator, using the result as the numerator, and using 1 followed by the same number of zeroes as there are digits to the right of the decimal separator as the denominator. Thus, 1.23 = 123 100 . 
{\displaystyle 1.23={\tfrac {123}{100}}.} ==== Converting repeating digits in decimal notation to fractions ==== Decimal numbers, while arguably more useful to work with when performing calculations, sometimes lack the precision that common fractions have. Sometimes an infinite repeating decimal is required to reach the same precision. Thus, it is often useful to convert repeating digits into fractions. A conventional way to indicate a repeating decimal is to place a bar (known as a vinculum) over the digits that repeat; 0.789 with a bar over the 789 means 0.789789789.... For repeating patterns that begin immediately after the decimal point, the result of the conversion is the fraction with the pattern as a numerator, and the same number of nines as a denominator. For example: 0.555... = 5/9 0.626262... = 62/99 0.264264... = 264/999 0.62916291... = 6291/9999 If leading zeros precede the pattern, the nines are suffixed by the same number of trailing zeros: 0.0555... = 5/90 0.000392392... = 392/999000 0.00121212... = 12/9900 If a non-repeating set of digits precedes the pattern (such as 0.1523987987...), one may write the number as the sum of the non-repeating and repeating parts, respectively: 0.1523 + 0.0000987987... Then, convert both parts to fractions, and add them using the methods described above: 1523/10000 + 987/9990000 = 1522464/9990000 Alternatively, algebra can be used, such as below: Let x = the repeating decimal: x = 0.1523987987... Multiply both sides by the power of 10 just great enough (in this case 10⁴) to move the decimal point just before the repeating part of the decimal number: 10,000x = 1,523.987987... Multiply both sides by the power of 10 (in this case 10³) that is the same as the number of places that repeat: 10,000,000x = 1,523,987.987987... Subtract the two equations from each other (if a = b and c = d, then a − c = b − d): 10,000,000x − 10,000x = 1,523,987.987987... − 1,523.987987... Continue the subtraction operation to clear the repeating decimal: 9,990,000x = 1,523,987 − 1,523 9,990,000x = 1,522,464 Divide both sides by 9,990,000 
to represent x as a fraction x = ⁠1522464/9990000⁠ == Fractions in abstract mathematics == In addition to being of great practical importance, fractions are also studied by mathematicians, who check that the rules for fractions given above are consistent and reliable. Mathematicians define a fraction as an ordered pair ( a , b ) {\displaystyle (a,b)} of integers a {\displaystyle a} and b ≠ 0 , {\displaystyle b\neq 0,} for which the operations addition, subtraction, multiplication, and division are defined as follows: ( a , b ) + ( c , d ) = ( a d + b c , b d ) {\displaystyle (a,b)+(c,d)=(ad+bc,bd)\,} ( a , b ) − ( c , d ) = ( a d − b c , b d ) {\displaystyle (a,b)-(c,d)=(ad-bc,bd)\,} ( a , b ) ⋅ ( c , d ) = ( a c , b d ) {\displaystyle (a,b)\cdot (c,d)=(ac,bd)} ( a , b ) ÷ ( c , d ) = ( a d , b c ) ( with, additionally, c ≠ 0 ) {\displaystyle (a,b)\div (c,d)=(ad,bc)\quad ({\text{with, additionally, }}c\neq 0)} These definitions agree in every case with the definitions given above; only the notation is different. Alternatively, instead of defining subtraction and division as operations, the inverse fractions with respect to addition and multiplication might be defined as: − ( a , b ) = ( − a , b ) additive inverse fractions, with ( 0 , b ) as additive unities, and ( a , b ) − 1 = ( b , a ) multiplicative inverse fractions, for a ≠ 0 , with ( b , b ) as multiplicative unities . {\displaystyle {\begin{aligned}-(a,b)&=(-a,b)&&{\text{additive inverse fractions,}}\\&&&{\text{with }}(0,b){\text{ as additive unities, and}}\\(a,b)^{-1}&=(b,a)&&{\text{multiplicative inverse fractions, for }}a\neq 0,\\&&&{\text{with }}(b,b){\text{ as multiplicative unities}}.\end{aligned}}} Furthermore, the relation, specified as ( a , b ) ∼ ( c , d ) ⟺ a d = b c , {\displaystyle (a,b)\sim (c,d)\quad \iff \quad ad=bc,} is an equivalence relation of fractions. 
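The ordered-pair definitions above transcribe almost verbatim into code; a minimal sketch (the class name is illustrative, and reduction to a canonical representative is deliberately left out, exactly as in the abstract definition):

```python
class Frac:
    """A fraction as an ordered pair (a, b) of integers with b != 0."""

    def __init__(self, a: int, b: int):
        if b == 0:
            raise ValueError("denominator must be non-zero")
        self.a, self.b = a, b

    def __add__(self, other):      # (a,b) + (c,d) = (ad + bc, bd)
        return Frac(self.a * other.b + self.b * other.a, self.b * other.b)

    def __mul__(self, other):      # (a,b) . (c,d) = (ac, bd)
        return Frac(self.a * other.a, self.b * other.b)

    def equivalent(self, other):   # (a,b) ~ (c,d)  iff  ad = bc
        return self.a * other.b == self.b * other.a

s = Frac(1, 2) + Frac(1, 3)
print((s.a, s.b))                          # (5, 6)
print(Frac(1, 2).equivalent(Frac(2, 4)))   # True: two representatives of one class
```

Because the operations never reduce, Frac(1, 2) + Frac(1, 2) yields the pair (4, 4) rather than 1; equivalence, not equality of pairs, is what identifies it with (1, 1).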
Each fraction from one equivalence class may be considered as a representative for the whole class, and each whole class may be considered as one abstract fraction. This equivalence is preserved by the above defined operations, i.e., the results of operating on fractions are independent of the selection of representatives from their equivalence class. Formally, for addition of fractions ( a , b ) ∼ ( a ′ , b ′ ) {\displaystyle (a,b)\sim (a',b')\quad } and ( c , d ) ∼ ( c ′ , d ′ ) {\displaystyle \quad (c,d)\sim (c',d')\quad } imply ( ( a , b ) + ( c , d ) ) ∼ ( ( a ′ , b ′ ) + ( c ′ , d ′ ) ) {\displaystyle ((a,b)+(c,d))\sim ((a',b')+(c',d'))} and similarly for the other operations. In the case of fractions of integers, the fractions ⁠a/b⁠ with a and b coprime and b > 0 are often taken as uniquely determined representatives for their equivalent fractions, which are considered to be the same rational number. This way the fractions of integers make up the field of the rational numbers. More generally, a and b may be elements of any integral domain R, in which case a fraction is an element of the field of fractions of R. For example, polynomials in one indeterminate, with coefficients from some integral domain D, are themselves an integral domain, call it P. So for a and b elements of P, the generated field of fractions is the field of rational fractions (also known as the field of rational functions). == Algebraic fractions == An algebraic fraction is the indicated quotient of two algebraic expressions. As with fractions of integers, the denominator of an algebraic fraction cannot be zero. Two examples of algebraic fractions are 3 x x 2 + 2 x − 3 {\displaystyle {\frac {3x}{x^{2}+2x-3}}} and ⁠ x + 2 x 2 − 3 {\displaystyle {\frac {\sqrt {x+2}}{x^{2}-3}}} ⁠. Algebraic fractions are subject to the same field properties as arithmetic fractions. 
If the numerator and the denominator are polynomials, as in ⁠ 3 x x 2 + 2 x − 3 {\displaystyle {\frac {3x}{x^{2}+2x-3}}} ⁠, the algebraic fraction is called a rational fraction (or rational expression). An irrational fraction is one that is not rational, as, for example, one that contains the variable under a fractional exponent or root, as in ⁠ x + 2 x 2 − 3 {\displaystyle {\frac {\sqrt {x+2}}{x^{2}-3}}} ⁠. The terminology used to describe algebraic fractions is similar to that used for ordinary fractions. For example, an algebraic fraction is in lowest terms if the only factors common to the numerator and the denominator are 1 and −1. An algebraic fraction whose numerator or denominator, or both, contain a fraction, such as ⁠ 1 + 1 x 1 − 1 x {\displaystyle {\frac {1+{\tfrac {1}{x}}}{1-{\tfrac {1}{x}}}}} ⁠, is called a complex fraction. The field of rational numbers is the field of fractions of the integers, while the integers themselves are not a field but rather an integral domain. Similarly, the rational fractions with coefficients in a field form the field of fractions of polynomials with coefficients in that field. Considering the rational fractions with real coefficients, radical expressions representing numbers, such as ⁠ 2 / 2 {\displaystyle \textstyle {\sqrt {2}}/2} ⁠, are also rational fractions, as are transcendental numbers such as π / 2 , {\textstyle \pi /2,} since all of 2 , π , {\displaystyle {\sqrt {2}},\pi ,} and 2 {\displaystyle 2} are real numbers, and thus considered as coefficients. These same numbers, however, are not rational fractions with integer coefficients. The term partial fraction is used when decomposing rational fractions into sums of simpler fractions. For example, the rational fraction 2 x x 2 − 1 {\displaystyle {\frac {2x}{x^{2}-1}}} can be decomposed as the sum of two fractions: ⁠ 1 x + 1 + 1 x − 1 {\displaystyle {\frac {1}{x+1}}+{\frac {1}{x-1}}} ⁠. 
This is useful for the computation of antiderivatives of rational functions (see partial fraction decomposition for more). == Radical expressions == A fraction may also contain radicals in the numerator or the denominator. If the denominator contains radicals, it can be helpful to rationalize it (compare Simplified form of a radical expression), especially if further operations, such as adding or comparing that fraction to another, are to be carried out. It is also more convenient if division is to be done manually. When the denominator is a monomial square root, it can be rationalized by multiplying both the top and the bottom of the fraction by the denominator: 3 7 = 3 7 ⋅ 7 7 = 3 7 7 . {\displaystyle {\frac {3}{\sqrt {7}}}={\frac {3}{\sqrt {7}}}\cdot {\frac {\sqrt {7}}{\sqrt {7}}}={\frac {3{\sqrt {7}}}{7}}.} The process of rationalization of binomial denominators involves multiplying the top and the bottom of a fraction by the conjugate of the denominator so that the denominator becomes a rational number. For example: 3 3 − 2 5 = 3 3 − 2 5 ⋅ 3 + 2 5 3 + 2 5 = 3 ( 3 + 2 5 ) 3 2 − ( 2 5 ) 2 = 3 ( 3 + 2 5 ) 9 − 20 = − 9 + 6 5 11 , {\displaystyle {\frac {3}{3-2{\sqrt {5}}}}={\frac {3}{3-2{\sqrt {5}}}}\cdot {\frac {3+2{\sqrt {5}}}{3+2{\sqrt {5}}}}={\frac {3(3+2{\sqrt {5}})}{{3}^{2}-(2{\sqrt {5}})^{2}}}={\frac {3(3+2{\sqrt {5}})}{9-20}}=-{\frac {9+6{\sqrt {5}}}{11}},} 3 3 + 2 5 = 3 3 + 2 5 ⋅ 3 − 2 5 3 − 2 5 = 3 ( 3 − 2 5 ) 3 2 − ( 2 5 ) 2 = 3 ( 3 − 2 5 ) 9 − 20 = − 9 − 6 5 11 . {\displaystyle {\frac {3}{3+2{\sqrt {5}}}}={\frac {3}{3+2{\sqrt {5}}}}\cdot {\frac {3-2{\sqrt {5}}}{3-2{\sqrt {5}}}}={\frac {3(3-2{\sqrt {5}})}{{3}^{2}-(2{\sqrt {5}})^{2}}}={\frac {3(3-2{\sqrt {5}})}{9-20}}=-{\frac {9-6{\sqrt {5}}}{11}}.} Even if this process results in the numerator being irrational, like in the examples above, the process may still facilitate subsequent manipulations by reducing the number of irrationals one has to work with in the denominator. 
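A quick floating-point check (a sketch, not from the article's sources) confirms that rationalizing leaves the value of each fraction unchanged:

```python
from math import sqrt, isclose

# Monomial square-root denominator: 3/sqrt(7) == 3*sqrt(7)/7.
assert isclose(3 / sqrt(7), 3 * sqrt(7) / 7)

# Binomial denominators: multiply by the conjugate, as in the text.
assert isclose(3 / (3 - 2 * sqrt(5)), -(9 + 6 * sqrt(5)) / 11)
assert isclose(3 / (3 + 2 * sqrt(5)), -(9 - 6 * sqrt(5)) / 11)
```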
== Typographical variations == In computer displays and typography, simple fractions are sometimes printed as a single character, e.g. ½ (one half). See the article on Number Forms for information on doing this in Unicode. Scientific publishing distinguishes four ways to set fractions, together with guidelines on use: Special fractions: fractions that are presented as a single character with a slanted bar, with roughly the same height and width as other characters in the text. Generally used for simple fractions, such as: ½, ⅓, ⅔, ¼, and ¾. Since the numerals are smaller, legibility can be an issue, especially for small-sized fonts. These are not used in modern mathematical notation but in other contexts. Case fractions: similar to special fractions, these are rendered as a single typographical character, but with a horizontal bar, thus making them upright. An example would be ⁠1/2⁠, but rendered with the same height as other characters. Some sources include all rendering of fractions as case fractions if they take only one typographical space, regardless of the direction of the bar. Shilling, or solidus, fractions: 1/2, so called because this notation was used for pre-decimal British currency (£sd), as in "2/6" for a half crown, meaning two shillings and six pence. While the notation two shillings and six pence did not represent a fraction, the slash is now used in fractions, especially for fractions inline with prose (rather than displayed), to avoid uneven lines. It is also used for fractions within fractions (complex fractions) or within exponents to increase legibility. Fractions written this way, also known as piece fractions, are written all on one typographical line but take three or more typographical spaces. Built-up fractions: 1 2 {\displaystyle {\frac {1}{2}}} . This notation uses two or more lines of ordinary text and results in a variation in spacing between lines when included within other text. 
While large and legible, these can be disruptive, particularly for simple fractions or within complex fractions. == History == The earliest fractions were reciprocals of integers: ancient symbols representing one part of two, one part of three, one part of four, and so on. The Egyptians used Egyptian fractions c. 1000 BC. About 4000 years ago, Egyptians divided with fractions using slightly different methods. They used least common multiples with unit fractions. Their methods gave the same answer as modern methods. The Egyptians also had a different notation for dyadic fractions, used for certain systems of weights and measures. The Greeks used unit fractions and (later) simple continued fractions. Followers of the Greek philosopher Pythagoras (c. 530 BC) discovered that the square root of two cannot be expressed as a fraction of integers. (This is commonly though probably erroneously ascribed to Hippasus of Metapontum, who is said to have been executed for revealing this fact.) In 150 BC Jain mathematicians in India wrote the Sthananga Sutra, which contains work on the theory of numbers, arithmetical operations, and operations with fractions. A modern expression of fractions known as bhinnarasi seems to have originated in India in the work of Aryabhatta (c. AD 500), Brahmagupta (c. 628), and Bhaskara (c. 1150). Their works form fractions by placing the numerators (Sanskrit: amsa) over the denominators (cheda), but without a bar between them. In Sanskrit literature, fractions were always expressed as an addition to or subtraction from an integer. The integer was written on one line and the fraction in its two parts on the next line. If the fraction was marked by a small circle ⟨०⟩ or cross ⟨+⟩, it is subtracted from the integer; if no such sign appears, it is understood to be added. For example, Bhaskara I writes:

६ १ २
१ १ १०
४ ५ ९

which is the equivalent of

6 1 2
1 1 −1
4 5 9

and would be written in modern notation as 6⁠1/4⁠, 1⁠1/5⁠, and 2 − ⁠1/9⁠ (i.e., 1⁠8/9⁠).
The horizontal fraction bar is first attested in the work of Al-Hassār (fl. 1200), a Muslim mathematician from Fez, Morocco, who specialized in Islamic inheritance jurisprudence. In his discussion he writes: "for example, if you are told to write three-fifths and a third of a fifth, write thus, 3 1 5 3 {\displaystyle {\frac {3\quad 1}{5\quad 3}}} ". The same fractional notation—with the fraction given before the integer—appears soon after in the work of Leonardo Fibonacci in the 13th century. In discussing the origins of decimal fractions, Dirk Jan Struik states: The introduction of decimal fractions as a common computational practice can be dated back to the Flemish pamphlet De Thiende, published at Leyden in 1585, together with a French translation, La Disme, by the Flemish mathematician Simon Stevin (1548–1620), then settled in the Northern Netherlands. It is true that decimal fractions were used by the Chinese many centuries before Stevin and that the Persian astronomer Al-Kāshī used both decimal and sexagesimal fractions with great ease in his Key to arithmetic (Samarkand, early fifteenth century). While the Persian mathematician Jamshīd al-Kāshī claimed to have discovered decimal fractions himself in the 15th century, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by the Baghdadi mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century. == In formal education == === Primary schools === In primary schools, fractions have been demonstrated through Cuisenaire rods, Fraction Bars, fraction strips, fraction circles, paper (for folding or cutting), pattern blocks, pie-shaped pieces, plastic rectangles, grid paper, dot paper, geoboards, counters and computer software. === Documents for teachers === Several states in the United States have adopted learning trajectories from the Common Core State Standards Initiative's guidelines for mathematics education. 
Aside from sequencing the learning of fractions and operations with fractions, the document provides the following definition of a fraction: "A number expressible in the form a {\displaystyle a} ⁄ b {\displaystyle b} where a {\displaystyle a} is a whole number and b {\displaystyle b} is a positive whole number. (The word fraction in these standards always refers to a non-negative number.)" The document itself also refers to negative fractions. == See also == Cross multiplication 0.999... Multiple FRACTRAN == Notes == == References == Weisstein, Eric (2003). CRC Concise Encyclopedia of Mathematics (2nd ed.). Chapman & Hall/CRC. p. 1925. ISBN 1-58488-347-2. == External links == "Fraction, arithmetical". The Online Encyclopaedia of Mathematics. "Fraction". Encyclopædia Britannica. 5 January 2024.
Wikipedia:Frame (linear algebra)#0
In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal. Frames are used in error detection and correction and the design and analysis of filter banks and more generally in applied mathematics, computer science, and engineering. == History == Because of the various mathematical components surrounding frames, frame theory has roots in harmonic and functional analysis, operator theory, linear algebra, and matrix theory. The Fourier transform has been used for over a century as a way of decomposing and expanding signals. However, the Fourier transform masks key information regarding the moment of emission and the duration of a signal. In 1946, Dennis Gabor was able to solve this using a technique that simultaneously reduced noise, provided resiliency, and created quantization while encapsulating important signal characteristics. This discovery marked the first concerted effort towards frame theory. The frame condition was first described by Richard Duffin and Albert Charles Schaeffer in a 1952 article on nonharmonic Fourier series as a way of computing the coefficients in a linear combination of the vectors of a linearly dependent spanning set (in their terminology, a "Hilbert space frame"). In the 1980s, Stéphane Mallat, Ingrid Daubechies, and Yves Meyer used frames to analyze wavelets. Today frames are associated with wavelets, signal and image processing, and data compression. 
== Definition and motivation == === Motivating example: computing a basis from a linearly dependent set === Suppose we have a vector space V {\displaystyle V} over a field F {\displaystyle F} and we want to express an arbitrary element v ∈ V {\displaystyle \mathbf {v} \in V} as a linear combination of the vectors { e k } ∈ V {\displaystyle \{\mathbf {e} _{k}\}\in V} , that is, finding coefficients { c k } ⊂ F {\displaystyle \{c_{k}\}\subset F} such that v = ∑ k c k e k . {\displaystyle \mathbf {v} =\sum _{k}c_{k}\mathbf {e} _{k}.} If the set { e k } {\displaystyle \{\mathbf {e} _{k}\}} does not span V {\displaystyle V} , then such coefficients do not exist for every such v {\displaystyle \mathbf {v} } . If { e k } {\displaystyle \{\mathbf {e} _{k}\}} spans V {\displaystyle V} and also is linearly independent, this set forms a basis of V {\displaystyle V} , and the coefficients c k {\displaystyle c_{k}} are uniquely determined by v {\displaystyle \mathbf {v} } . If, however, { e k } {\displaystyle \{\mathbf {e} _{k}\}} spans V {\displaystyle V} but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if V {\displaystyle V} is of infinite dimension. Given that { e k } {\displaystyle \{\mathbf {e} _{k}\}} spans V {\displaystyle V} and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan: Removing arbitrary vectors from the set may cause it to be unable to span V {\displaystyle V} before it becomes linearly independent. Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become unfeasible in practice if the set is large or infinite. In some applications, it may be an advantage to use more vectors than necessary to represent v {\displaystyle \mathbf {v} } . 
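To make the non-uniqueness concrete, here is a small sketch with an arbitrarily chosen linearly dependent spanning set of R² (the vectors are illustrative, not from the article):

```python
# A linearly dependent spanning set of R^2: the same vector v can be
# written as a linear combination of it in more than one way.
e = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def combine(coeffs):
    """Form the linear combination sum_k c_k e_k."""
    return (sum(c * v[0] for c, v in zip(coeffs, e)),
            sum(c * v[1] for c, v in zip(coeffs, e)))

v = (2.0, 3.0)
# Two different coefficient sequences that both represent v:
assert combine([2.0, 3.0, 0.0]) == v
assert combine([1.0, 2.0, 1.0]) == v
```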
This means that we want to find the coefficients c k {\displaystyle c_{k}} without removing elements in { e k } {\displaystyle \{\mathbf {e} _{k}\}} . The coefficients c k {\displaystyle c_{k}} will no longer be uniquely determined by v {\displaystyle \mathbf {v} } . Therefore, the vector v {\displaystyle \mathbf {v} } can be represented as a linear combination of { e k } {\displaystyle \{\mathbf {e} _{k}\}} in more than one way. === Definition === Let V {\displaystyle V} be an inner product space and { e k } k ∈ N {\displaystyle \{\mathbf {e} _{k}\}_{k\in \mathbb {N} }} be a set of vectors in V {\displaystyle V} . The set { e k } k ∈ N {\displaystyle \{\mathbf {e} _{k}\}_{k\in \mathbb {N} }} is a frame of V {\displaystyle V} if it satisfies the so-called frame condition. That is, if there exist two constants 0 < A ≤ B < ∞ {\displaystyle 0<A\leq B<\infty } such that A ‖ v ‖ 2 ≤ ∑ k ∈ N | ⟨ v , e k ⟩ | 2 ≤ B ‖ v ‖ 2 , ∀ v ∈ V . {\displaystyle A\left\|\mathbf {v} \right\|^{2}\leq \sum _{k\in \mathbb {N} }\left|\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \right|^{2}\leq B\left\|\mathbf {v} \right\|^{2},\quad \forall \mathbf {v} \in V.} A frame is called overcomplete (or redundant) if it is not a Riesz basis for the vector space. The redundancy of the frame is measured by the lower and upper frame bounds (or redundancy factors) A {\displaystyle A} and B {\displaystyle B} , respectively. That is, a frame of K ≥ N {\displaystyle K\geq N} normalized vectors ‖ e k ‖ = 1 {\displaystyle \|\mathbf {e} _{k}\|=1} in an N {\displaystyle N} -dimensional space V {\displaystyle V} has frame bounds which satisfy 0 < A ≤ 1 N ∑ k = 1 K | ⟨ e k , e k ⟩ | 2 = K N ≤ B < ∞ . {\displaystyle 0<A\leq {\frac {1}{N}}\sum _{k=1}^{K}|\langle \mathbf {e} _{k},\mathbf {e} _{k}\rangle |^{2}={\frac {K}{N}}\leq B<\infty .} If the frame is a Riesz basis and is therefore linearly independent, then A ≤ 1 ≤ B {\displaystyle A\leq 1\leq B} .
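The frame condition can be checked numerically for a small example. The following sketch uses three unit vectors 120° apart in R² (the so-called Mercedes-Benz frame, a standard tight-frame example with K/N = 3/2):

```python
import math

# Three unit vectors at 120° apart in R^2 (K = 3 vectors, N = 2 dims).
K, N = 3, 2
frame = [(math.cos(t), math.sin(t))
         for t in (math.pi / 2,
                   math.pi / 2 + 2 * math.pi / 3,
                   math.pi / 2 + 4 * math.pi / 3)]

def frame_sum(v):
    """Sum of |<v, e_k>|^2 over the frame vectors."""
    return sum((v[0] * e[0] + v[1] * e[1]) ** 2 for e in frame)

# For this tight frame the sum equals (K/N) * ||v||^2 for every v,
# so the frame condition holds with A = B = K/N = 3/2.
for v in [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]:
    norm2 = v[0] ** 2 + v[1] ** 2
    assert math.isclose(frame_sum(v), (K / N) * norm2)
```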
The frame bounds are not unique because numbers less than A {\displaystyle A} and greater than B {\displaystyle B} are also valid frame bounds. The optimal lower bound is the supremum of all lower bounds and the optimal upper bound is the infimum of all upper bounds. === Analysis operator === If the frame condition is satisfied, then the linear operator defined as T : V → ℓ 2 , v ↦ T v = { ⟨ v , e k ⟩ } k ∈ N , {\displaystyle \mathbf {T} :V\to \ell ^{2},\quad \mathbf {v} \mapsto \mathbf {T} \mathbf {v} =\{\langle \mathbf {v} ,\mathbf {e_{k}} \rangle \}_{k\in \mathbb {N} },} mapping v ∈ V {\displaystyle \mathbf {v} \in V} to the sequence of frame coefficients c k = ⟨ v , e k ⟩ {\displaystyle c_{k}=\langle \mathbf {v} ,\mathbf {e_{k}} \rangle } , is called the analysis operator. Using this definition, the frame condition can be rewritten as A ‖ v ‖ 2 ≤ ‖ T v ‖ 2 = ∑ k | ⟨ v , e k ⟩ | 2 ≤ B ‖ v ‖ 2 . {\displaystyle A\left\|\mathbf {v} \right\|^{2}\leq \left\|\mathbf {T} \mathbf {v} \right\|^{2}=\sum _{k}\left|\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \right|^{2}\leq B\left\|\mathbf {v} \right\|^{2}.} === Synthesis operator === The adjoint of the analysis operator is called the synthesis operator of the frame and defined as T ∗ : ℓ 2 → V , { c k } k ∈ N ↦ ∑ k c k e k . {\displaystyle \mathbf {T} ^{*}:\ell ^{2}\to V,\quad \{c_{k}\}_{k\in \mathbb {N} }\mapsto \sum _{k}c_{k}\mathbf {e} _{k}.} === Frame operator === The composition of the analysis operator and the synthesis operator leads to the frame operator defined as S : V → V , v ↦ S v = T ∗ T v = ∑ k ⟨ v , e k ⟩ e k . {\displaystyle \mathbf {S} :V\rightarrow V,\quad \mathbf {v} \mapsto \mathbf {S} \mathbf {v} =\mathbf {T} ^{*}\mathbf {T} \mathbf {v} =\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \mathbf {e} _{k}.} From this definition and linearity in the first argument of the inner product, the frame condition now yields A ‖ v ‖ 2 ≤ ‖ T v ‖ 2 = ⟨ S v , v ⟩ ≤ B ‖ v ‖ 2 . 
{\displaystyle A\left\|\mathbf {v} \right\|^{2}\leq \left\|\mathbf {T} \mathbf {v} \right\|^{2}=\langle \mathbf {S} \mathbf {v} ,\mathbf {v} \rangle \leq B\left\|\mathbf {v} \right\|^{2}.} If the analysis operator exists, then so does the frame operator S {\displaystyle \mathbf {S} } as well as the inverse S − 1 {\displaystyle \mathbf {S} ^{-1}} . Both S {\displaystyle \mathbf {S} } and S − 1 {\displaystyle \mathbf {S} ^{-1}} are positive definite, bounded self-adjoint operators, resulting in A {\displaystyle A} and B {\displaystyle B} being the infimum and supremum values of the spectrum of S {\displaystyle \mathbf {S} } . In finite dimensions, the frame operator is automatically trace-class, with A {\displaystyle A} and B {\displaystyle B} corresponding to the smallest and largest eigenvalues of S {\displaystyle \mathbf {S} } or, equivalently, the smallest and largest singular values of T {\displaystyle \mathbf {T} } . === Relation to bases === The frame condition is a generalization of Parseval's identity that maintains norm equivalence between a signal in V {\displaystyle V} and its sequence of coefficients in ℓ 2 {\displaystyle \ell ^{2}} . If the set { e k } {\displaystyle \{\mathbf {e} _{k}\}} is a frame of V {\displaystyle V} , it spans V {\displaystyle V} . Otherwise there would exist at least one non-zero v ∈ V {\displaystyle \mathbf {v} \in V} which would be orthogonal to all e k {\displaystyle \mathbf {e} _{k}} such that A ‖ v ‖ 2 ≤ 0 ≤ B ‖ v ‖ 2 ; {\displaystyle A\left\|\mathbf {v} \right\|^{2}\leq 0\leq B\left\|\mathbf {v} \right\|^{2};} either violating the frame condition or the assumption that v ≠ 0 {\displaystyle \mathbf {v} \neq 0} . However, a spanning set of V {\displaystyle V} is not necessarily a frame. For example, consider V = R 2 {\displaystyle V=\mathbb {R} ^{2}} with the dot product, and the infinite set { e k } {\displaystyle \{\mathbf {e} _{k}\}} given by { ( 1 , 0 ) , ( 0 , 1 ) , ( 0 , 1 2 ) , ( 0 , 1 3 ) , … } . 
{\displaystyle \left\{(1,0),\,(0,1),\,\left(0,{\tfrac {1}{\sqrt {2}}}\right),\,\left(0,{\tfrac {1}{\sqrt {3}}}\right),\dotsc \right\}.} This set spans V {\displaystyle V} but since ∑ k | ⟨ e k , ( 0 , 1 ) ⟩ | 2 = 0 + 1 + 1 2 + 1 3 + ⋯ = ∞ , {\displaystyle \sum _{k}\left|\langle \mathbf {e} _{k},(0,1)\rangle \right|^{2}=0+1+{\tfrac {1}{2}}+{\tfrac {1}{3}}+\dotsb =\infty ,} we cannot choose a finite upper frame bound B. Consequently, the set { e k } {\displaystyle \{\mathbf {e} _{k}\}} is not a frame. == Dual frames == Let { e k } {\displaystyle \{\mathbf {e} _{k}\}} be a frame satisfying the frame condition. Then the dual operator is defined as T ~ v = { ⟨ v , e ~ k ⟩ } k ∈ N , {\displaystyle {\widetilde {\mathbf {T} }}\mathbf {v} =\{\langle \mathbf {v} ,{\tilde {\mathbf {e} }}_{k}\rangle \}_{k\in \mathbb {N} },} with e ~ k = ( T ∗ T ) − 1 e k = S − 1 e k , {\displaystyle {\tilde {\mathbf {e} }}_{k}=(\mathbf {T} ^{*}\mathbf {T} )^{-1}\mathbf {e} _{k}=\mathbf {S} ^{-1}\mathbf {e} _{k},} called the dual frame (or conjugate frame). It is the canonical dual of { e k } {\displaystyle \{\mathbf {e} _{k}\}} (similar to a dual basis of a basis), with the property that v = ∑ k ⟨ v , e k ⟩ e ~ k = ∑ k ⟨ v , e ~ k ⟩ e k , {\displaystyle \mathbf {v} =\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \mathbf {\tilde {e}} _{k}=\sum _{k}\langle \mathbf {v} ,\mathbf {\tilde {e}} _{k}\rangle \mathbf {e} _{k},} and subsequent frame condition 1 B ‖ v ‖ 2 ≤ ∑ k | ⟨ v , e ~ k ⟩ | 2 = ⟨ T S − 1 v , T S − 1 v ⟩ = ⟨ S − 1 v , v ⟩ ≤ 1 A ‖ v ‖ 2 , ∀ v ∈ V . {\displaystyle {\frac {1}{B}}\|\mathbf {v} \|^{2}\leq \sum _{k}|\langle \mathbf {v} ,{\tilde {\mathbf {e} }}_{k}\rangle |^{2}=\langle \mathbf {T} \mathbf {S} ^{-1}\mathbf {v} ,\mathbf {T} \mathbf {S} ^{-1}\mathbf {v} \rangle =\langle \mathbf {S} ^{-1}\mathbf {v} ,\mathbf {v} \rangle \leq {\frac {1}{A}}\|\mathbf {v} \|^{2},\quad \forall \mathbf {v} \in V.} Canonical duality is a reciprocity relation, i.e.
if the frame { e ~ k } {\displaystyle \{\mathbf {\tilde {e}} _{k}\}} is the canonical dual of { e k } , {\displaystyle \{\mathbf {e} _{k}\},} then the frame { e k } {\displaystyle \{\mathbf {e} _{k}\}} is the canonical dual of { e ~ k } . {\displaystyle \{\mathbf {\tilde {e}} _{k}\}.} To see that this makes sense, let v {\displaystyle \mathbf {v} } be an element of V {\displaystyle V} and let u = ∑ k ⟨ v , e k ⟩ e ~ k . {\displaystyle \mathbf {u} =\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle {\tilde {\mathbf {e} }}_{k}.} Thus u = ∑ k ⟨ v , e k ⟩ ( S − 1 e k ) = S − 1 ( ∑ k ⟨ v , e k ⟩ e k ) = S − 1 S v = v , {\displaystyle \mathbf {u} =\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle (\mathbf {S} ^{-1}\mathbf {e} _{k})=\mathbf {S} ^{-1}\left(\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \mathbf {e} _{k}\right)=\mathbf {S} ^{-1}\mathbf {S} \mathbf {v} =\mathbf {v} ,} proving that v = ∑ k ⟨ v , e k ⟩ e ~ k . {\displaystyle \mathbf {v} =\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle {\tilde {\mathbf {e} }}_{k}.} Alternatively, let u = ∑ k ⟨ v , e ~ k ⟩ e k . {\displaystyle \mathbf {u} =\sum _{k}\langle \mathbf {v} ,{\tilde {\mathbf {e} }}_{k}\rangle \mathbf {e} _{k}.} Applying the properties of S {\displaystyle \mathbf {S} } and its inverse then shows that u = ∑ k ⟨ v , S − 1 e k ⟩ e k = ∑ k ⟨ S − 1 v , e k ⟩ e k = S ( S − 1 v ) = v , {\displaystyle \mathbf {u} =\sum _{k}\langle \mathbf {v} ,\mathbf {S} ^{-1}\mathbf {e} _{k}\rangle \mathbf {e} _{k}=\sum _{k}\langle \mathbf {S} ^{-1}\mathbf {v} ,\mathbf {e} _{k}\rangle \mathbf {e} _{k}=\mathbf {S} (\mathbf {S} ^{-1}\mathbf {v} )=\mathbf {v} ,} and therefore v = ∑ k ⟨ v , e ~ k ⟩ e k . 
{\displaystyle \mathbf {v} =\sum _{k}\langle \mathbf {v} ,{\tilde {\mathbf {e} }}_{k}\rangle \mathbf {e} _{k}.} An overcomplete frame { e k } {\displaystyle \{\mathbf {e} _{k}\}} allows us some freedom for the choice of coefficients c k ≠ ⟨ v , e ~ k ⟩ {\displaystyle c_{k}\neq \langle \mathbf {v} ,{\tilde {\mathbf {e} }}_{k}\rangle } such that v = ∑ k c k e k {\textstyle \mathbf {v} =\sum _{k}c_{k}\mathbf {e} _{k}} . That is, there exist dual frames { g k } ≠ { e ~ k } {\displaystyle \{\mathbf {g} _{k}\}\neq \{{\tilde {\mathbf {e} }}_{k}\}} of { e k } {\displaystyle \{\mathbf {e} _{k}\}} for which v = ∑ k ⟨ v , g k ⟩ e k , ∀ v ∈ V . {\displaystyle \mathbf {v} =\sum _{k}\langle \mathbf {v} ,\mathbf {g} _{k}\rangle \mathbf {e} _{k},\quad \forall \mathbf {v} \in V.} === Dual frame synthesis and analysis === Suppose V {\displaystyle V} is a subspace of a Hilbert space H {\displaystyle H} and let { e k } k ∈ N {\displaystyle \{\mathbf {e} _{k}\}_{k\in \mathbb {N} }} and { e ~ k } k ∈ N {\displaystyle \{{\tilde {\mathbf {e} }}_{k}\}_{k\in \mathbb {N} }} be a frame and dual frame of V {\displaystyle V} , respectively. If { e k } {\displaystyle \{\mathbf {e} _{k}\}} does not depend on f ∈ H {\displaystyle f\in H} , the dual frame is computed as e ~ k = ( T ∗ T V ) − 1 e k , {\displaystyle {\tilde {\mathbf {e} }}_{k}=(\mathbf {T} ^{*}\mathbf {T} _{V})^{-1}\mathbf {e} _{k},} where T V {\displaystyle \mathbf {T} _{V}} denotes the restriction of T {\displaystyle \mathbf {T} } to V {\displaystyle V} such that T ∗ T V {\displaystyle \mathbf {T} ^{*}\mathbf {T} _{V}} is invertible on V {\displaystyle V} . The best linear approximation of f {\displaystyle f} in V {\displaystyle V} is then given by the orthogonal projection of f ∈ H {\displaystyle f\in H} onto V {\displaystyle V} , defined as P V f = ∑ k ⟨ f , e k ⟩ e ~ k = ∑ k ⟨ f , e ~ k ⟩ e k . 
{\displaystyle P_{V}f=\sum _{k}\langle f,\mathbf {e} _{k}\rangle \mathbf {\tilde {e}} _{k}=\sum _{k}\langle f,\mathbf {\tilde {e}} _{k}\rangle \mathbf {e} _{k}.} The dual frame synthesis operator is defined as P V f = T ~ ∗ T f = ( T ∗ T V ) − 1 T ∗ T f = ∑ k ⟨ f , e k ⟩ e ~ k , {\displaystyle P_{V}f={\widetilde {\mathbf {T} }}^{*}\mathbf {T} f=(\mathbf {T} ^{*}\mathbf {T} _{V})^{-1}\mathbf {T} ^{*}\mathbf {T} f=\sum _{k}\langle f,\mathbf {e} _{k}\rangle \mathbf {\tilde {e}} _{k},} and the orthogonal projection is computed from the frame coefficients ⟨ f , e k ⟩ {\displaystyle \langle f,\mathbf {e} _{k}\rangle } . In dual analysis, the orthogonal projection is computed from { e k } {\displaystyle \{\mathbf {e} _{k}\}} as P V f = T ∗ T ~ f = ∑ k ⟨ f , e ~ k ⟩ e k {\displaystyle P_{V}f=\mathbf {T} ^{*}{\widetilde {\mathbf {T} }}f=\sum _{k}\langle f,\mathbf {\tilde {e}} _{k}\rangle \mathbf {e} _{k}} with dual frame analysis operator { T ~ f } k = ⟨ f , e ~ k ⟩ {\displaystyle \{{\widetilde {\mathbf {T} }}f\}_{k}=\langle f,{\tilde {\mathbf {e} }}_{k}\rangle } . == Applications and examples == In signal processing, it is common to represent signals as vectors in a Hilbert space. In this interpretation, a vector expressed as a linear combination of the frame vectors is a redundant signal. Representing a signal strictly with a set of linearly independent vectors may not always be the most compact form. Using a frame, it is possible to create a simpler, more sparse representation of a signal as compared with a family of elementary signals. Frames, therefore, provide "robustness". Because they provide a way of producing the same vector within a space, signals can be encoded in various ways. This facilitates fault tolerance and resilience to a loss of signal. Finally, redundancy can be used to mitigate noise, which is relevant to the restoration, enhancement, and reconstruction of signals. 
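As a concrete finite-dimensional illustration (a sketch with an arbitrarily chosen frame, not from the article's sources), the analysis, synthesis, and frame operators, the canonical dual, and the reconstruction formula can all be written out in a few lines:

```python
# Analysis/synthesis/frame operators and the canonical dual frame for
# the overcomplete frame {(1,0), (0,1), (1,1)} of R^2 (an arbitrary
# example; plain Python tuples stand in for vectors).
frame = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def analysis(v, vecs=frame):            # T: v -> (<v, e_k>)_k
    return [dot(v, e) for e in vecs]

def synthesis(c, vecs=frame):           # T*: (c_k) -> sum_k c_k e_k
    return (sum(ck * e[0] for ck, e in zip(c, vecs)),
            sum(ck * e[1] for ck, e in zip(c, vecs)))

# Frame operator S = T*T as a 2x2 matrix, and its inverse by hand.
S = [[sum(e[i] * e[j] for e in frame) for j in range(2)] for i in range(2)]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
S_inv = [[S[1][1] / det, -S[0][1] / det],
         [-S[1][0] / det, S[0][0] / det]]

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

dual = [apply(S_inv, e) for e in frame]  # canonical dual ~e_k = S^{-1} e_k

# Reconstruction: v = sum_k <v, e_k> ~e_k (synthesis with the dual).
v = (3.0, -2.0)
recon = synthesis(analysis(v), vecs=dual)
assert all(abs(recon[i] - v[i]) < 1e-12 for i in range(2))

# ||T v||^2 = <S v, v>, matching the rewritten frame condition.
assert abs(sum(c * c for c in analysis(v)) - dot(apply(S, v), v)) < 1e-12
```

Because the frame is overcomplete, other coefficient sequences besides the dual-frame coefficients also reconstruct v, as the dual-frames section notes.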
=== Non-harmonic Fourier series === From harmonic analysis it is known that the complex trigonometric system { 1 2 π e i k x } k ∈ Z {\textstyle \{{\frac {1}{\sqrt {2\pi }}}e^{ikx}\}_{k\in \mathbb {Z} }} forms an orthonormal basis for L 2 ( − π , π ) {\textstyle L^{2}(-\pi ,\pi )} . As such, { e i k x } k ∈ Z {\textstyle \{e^{ikx}\}_{k\in \mathbb {Z} }} is a (tight) frame for L 2 ( − π , π ) {\textstyle L^{2}(-\pi ,\pi )} with bounds A = B = 2 π {\displaystyle A=B=2\pi } . The system remains stable under "sufficiently small" perturbations { λ k − k } {\displaystyle \{\lambda _{k}-k\}} and the frame { e i λ k x } k ∈ Z {\textstyle \{e^{i\lambda _{k}x}\}_{k\in \mathbb {Z} }} will form a Riesz basis for L 2 ( − π , π ) {\textstyle L^{2}(-\pi ,\pi )} . Accordingly, every function f {\displaystyle f} in L 2 ( − π , π ) {\textstyle L^{2}(-\pi ,\pi )} will have a unique non-harmonic Fourier series representation f ( x ) = ∑ k ∈ Z c k e i λ k x , {\displaystyle f(x)=\sum _{k\in \mathbb {Z} }c_{k}e^{i\lambda _{k}x},} with ∑ | c k | 2 < ∞ {\textstyle \sum |c_{k}|^{2}<\infty } and { e i λ k x } k ∈ Z {\textstyle \{e^{i\lambda _{k}x}\}_{k\in \mathbb {Z} }} is called the Fourier frame (or frame of exponentials). What constitutes "sufficiently small" is described by the following theorem, named after Mikhail Kadets.
The theorem can be easily extended to frames, replacing the integers by another sequence of real numbers { μ k } k ∈ Z {\textstyle \{\mu _{k}\}_{k\in \mathbb {Z} }} such that | λ k − μ k | ≤ L < 1 4 , ∀ k ∈ Z , and 1 − cos ⁡ ( π L ) + sin ⁡ ( π L ) < A B , {\displaystyle |\lambda _{k}-\mu _{k}|\leq L<{\frac {1}{4}},\quad \forall k\in \mathbb {Z} ,\quad {\text{and}}\quad 1-\cos(\pi L)+\sin(\pi L)<{\sqrt {\frac {A}{B}}},} then { e i λ k x } k ∈ Z {\textstyle \{e^{i\lambda _{k}x}\}_{k\in \mathbb {Z} }} is a frame for L 2 ( − π , π ) {\textstyle L^{2}(-\pi ,\pi )} with bounds A ( 1 − B A ( 1 − cos ⁡ ( π L ) + sin ⁡ ( π L ) ) ) 2 , B ( 2 − cos ⁡ ( π L ) + sin ⁡ ( π L ) ) 2 . {\displaystyle A(1-{\sqrt {\frac {B}{A}}}(1-\cos(\pi L)+\sin(\pi L)))^{2},\quad B(2-\cos(\pi L)+\sin(\pi L))^{2}.} === Frame projector === Redundancy of a frame is useful in mitigating added noise from the frame coefficients. Let a ∈ ℓ 2 ( N ) {\displaystyle \mathbf {a} \in \ell ^{2}(\mathbb {N} )} denote a vector computed with noisy frame coefficients. The noise is then mitigated by projecting a {\displaystyle \mathbf {a} } onto the image of T {\displaystyle \mathbf {T} } . The ℓ 2 {\displaystyle \ell ^{2}} sequence space and im ⁡ ( T ) {\displaystyle \operatorname {im} (\mathbf {T} )} (as im ⁡ ( T ) ⊆ ℓ 2 {\displaystyle \operatorname {im} (\mathbf {T} )\subseteq \ell ^{2}} ) are reproducing kernel Hilbert spaces with a kernel given by the matrix M k , p = ⟨ S − 1 e p , e k ⟩ {\displaystyle M_{k,p}=\langle \mathbf {S} ^{-1}\mathbf {e} _{p},\mathbf {e} _{k}\rangle } . As such, the above equation is also referred to as the reproducing kernel equation and expresses the redundancy of frame coefficients. == Special cases == === Tight frames === A frame is a tight frame if A = B {\displaystyle A=B} . A tight frame { e k } k = 1 ∞ {\textstyle \{\mathbf {e} _{k}\}_{k=1}^{\infty }} with frame bound A {\displaystyle A} has the property that v = 1 A ∑ k ⟨ v , e k ⟩ e k , ∀ v ∈ V . 
{\displaystyle \mathbf {v} ={\frac {1}{A}}\sum _{k}\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \mathbf {e} _{k},\quad \forall \mathbf {v} \in V.} For example, the union of k {\displaystyle k} disjoint orthonormal bases of a vector space is an overcomplete tight frame with A = B = k {\displaystyle A=B=k} . A tight frame is a Parseval frame if A = B = 1 {\displaystyle A=B=1} . Each orthonormal basis is a (complete) Parseval frame, but the converse is not necessarily true. === Equal norm frame === A frame is an equal norm frame if there is a constant c {\displaystyle c} such that ‖ e k ‖ = c {\displaystyle \|\mathbf {e} _{k}\|=c} for each k {\displaystyle k} . An equal norm frame is a normalized frame (sometimes called a unit-norm frame) if c = 1 {\displaystyle c=1} . A unit-norm Parseval frame is an orthonormal basis; such a frame satisfies Parseval's identity. === Equiangular frames === A frame is an equiangular frame if there is a constant c {\displaystyle c} such that | ⟨ e i , e j ⟩ | = c {\displaystyle |\langle \mathbf {e} _{i},\mathbf {e} _{j}\rangle |=c} for all i ≠ j {\displaystyle i\neq j} . In particular, every orthonormal basis is equiangular. === Exact frames === A frame is an exact frame if no proper subset of the frame spans the inner product space. Each basis for an inner product space is an exact frame for the space (so a basis is a special case of a frame). == Generalizations == === Semi-frame === Sometimes it may not be possible to satisfy both frame bounds simultaneously. An upper (respectively lower) semi-frame is a set that only satisfies the upper (respectively lower) frame inequality. A Bessel sequence is an example of a set of vectors that satisfies only the upper frame inequality.
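The union-of-orthonormal-bases construction mentioned under tight frames can be verified numerically; here the standard basis of R² together with a 45°-rotated copy yields a tight frame with A = B = 2 (a sketch):

```python
import math

# Union of two orthonormal bases of R^2: the standard basis and a
# 45-degree-rotated copy. The union is a tight frame with A = B = 2.
b1 = [(1.0, 0.0), (0.0, 1.0)]
r = math.sqrt(0.5)
b2 = [(r, r), (-r, r)]
frame = b1 + b2

# Each orthonormal basis contributes ||v||^2 by Parseval's identity,
# so the union contributes 2 * ||v||^2 for every v.
for v in [(1.0, 0.0), (0.5, -2.0), (3.0, 4.0)]:
    total = sum((v[0] * e[0] + v[1] * e[1]) ** 2 for e in frame)
    norm2 = v[0] ** 2 + v[1] ** 2
    assert math.isclose(total, 2 * norm2)   # A = B = k = 2
```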
For any vector v ∈ V {\displaystyle \mathbf {v} \in V} to be reconstructed from the coefficients { ⟨ v , e k ⟩ } k ∈ N {\displaystyle \{\langle \mathbf {v} ,\mathbf {e} _{k}\rangle \}_{k\in \mathbb {N} }} it suffices if there exists a constant A > 0 {\displaystyle A>0} such that A ‖ x − y ‖ 2 ≤ ‖ T x − T y ‖ 2 , ∀ x , y ∈ V . {\displaystyle A\|x-y\|^{2}\leq \|Tx-Ty\|^{2},\quad \forall x,y\in V.} By setting v = x − y {\displaystyle \mathbf {v} =x-y} and applying the linearity of the analysis operator, this condition is equivalent to: A ‖ v ‖ 2 ≤ ‖ T v ‖ 2 , ∀ v ∈ V , {\displaystyle A\|\mathbf {v} \|^{2}\leq \|T\mathbf {v} \|^{2},\quad \forall \mathbf {v} \in V,} which is exactly the lower frame bound condition. === Fusion frame === A fusion frame is best understood as an extension of the dual frame synthesis and analysis operators where, instead of a single subspace V ⊆ H {\displaystyle V\subseteq H} , a set of closed subspaces { W i } i ∈ N ⊆ H {\displaystyle \{W_{i}\}_{i\in \mathbb {N} }\subseteq H} with positive scalar weights { w i } i ∈ N {\displaystyle \{w_{i}\}_{i\in \mathbb {N} }} is considered. A fusion frame is a family { W i , w i } i ∈ N {\displaystyle \{W_{i},w_{i}\}_{i\in \mathbb {N} }} that satisfies the frame condition A ‖ f ‖ 2 ≤ ∑ i w i 2 ‖ P W i f ‖ 2 ≤ B ‖ f ‖ 2 , ∀ f ∈ H , {\displaystyle A\|f\|^{2}\leq \sum _{i}w_{i}^{2}\|P_{W_{i}}f\|^{2}\leq B\|f\|^{2},\quad \forall f\in H,} where P W i {\displaystyle P_{W_{i}}} denotes the orthogonal projection onto the subspace W i {\displaystyle W_{i}} . === Continuous frame === Suppose H {\displaystyle H} is a Hilbert space, X {\displaystyle X} a locally compact space, and μ {\displaystyle \mu } is a locally finite Borel measure on X {\displaystyle X} . 
Then a set of vectors in H {\displaystyle H} , { f x } x ∈ X {\displaystyle \{f_{x}\}_{x\in X}} with a measure μ {\displaystyle \mu } is said to be a continuous frame if there exist constants 0 < A ≤ B {\displaystyle 0<A\leq B} such that A | | f | | 2 ≤ ∫ X | ⟨ f , f x ⟩ | 2 d μ ( x ) ≤ B | | f | | 2 , ∀ f ∈ H . {\displaystyle A||f||^{2}\leq \int _{X}|\langle f,f_{x}\rangle |^{2}d\mu (x)\leq B||f||^{2},\quad \forall f\in H.} To see that continuous frames are indeed the natural generalization of the frames mentioned above, consider a discrete set Λ ⊂ X {\displaystyle \Lambda \subset X} and a measure μ = δ Λ {\displaystyle \mu =\delta _{\Lambda }} where δ Λ {\displaystyle \delta _{\Lambda }} is the Dirac measure. Then the continuous frame condition reduces to A | | f | | 2 ≤ ∑ λ ∈ Λ | ⟨ f , f λ ⟩ | 2 ≤ B | | f | | 2 , ∀ f ∈ H . {\displaystyle A||f||^{2}\leq \sum _{\lambda \in \Lambda }|\langle f,f_{\lambda }\rangle |^{2}\leq B||f||^{2},\quad \forall f\in H.} Just as in the discrete case, we can define the analysis, synthesis, and frame operators when dealing with continuous frames. ==== Continuous analysis operator ==== Given a continuous frame { f x } x ∈ X {\displaystyle \{f_{x}\}_{x\in X}} the continuous analysis operator is the operator mapping f {\displaystyle f} to a function on X {\displaystyle X} defined as follows: T : H → L 2 ( X , μ ) {\displaystyle T:H\to L^{2}(X,\mu )} by f ↦ ⟨ f , f x ⟩ x ∈ X {\displaystyle f\mapsto \langle f,f_{x}\rangle _{x\in X}} . ==== Continuous synthesis operator ==== The adjoint operator of the continuous analysis operator is the continuous synthesis operator, which is the map T ∗ : L 2 ( X , μ ) → H {\displaystyle T^{*}:L^{2}(X,\mu )\to H} by a x ↦ ∫ X a x f x d μ ( x ) {\displaystyle a_{x}\mapsto \int _{X}a_{x}f_{x}d\mu (x)} . ==== Continuous frame operator ==== The composition of the continuous analysis operator and the continuous synthesis operator is known as the continuous frame operator.
For a continuous frame { f x } x ∈ X {\displaystyle \{f_{x}\}_{x\in X}} , it is defined as follows: S : H → H {\displaystyle S:H\to H} by S f := ∫ X ⟨ f , f x ⟩ f x d μ ( x ) . {\displaystyle Sf:=\int _{X}\langle f,f_{x}\rangle f_{x}d\mu (x).} In this case, the continuous frame projector P : L 2 ( x , μ ) → im ⁡ ( T ) {\displaystyle P:L^{2}(x,\mu )\to \operatorname {im} (T)} is the orthogonal projection defined by P := T S − 1 T ∗ . {\displaystyle P:=TS^{-1}T^{*}.} The projector P {\displaystyle P} is an integral operator with reproducing kernel K ( x , y ) = ⟨ S − 1 f x , f y ⟩ {\displaystyle K(x,y)=\langle S^{-1}f_{x},f_{y}\rangle } , thus im ⁡ ( T ) {\displaystyle \operatorname {im} (T)} is a reproducing kernel Hilbert space. ==== Continuous dual frame ==== Given a continuous frame { f x } x ∈ X {\displaystyle \{f_{x}\}_{x\in X}} , and another continuous frame { g x } x ∈ X {\displaystyle \{g_{x}\}_{x\in X}} , then { g x } x ∈ X {\displaystyle \{g_{x}\}_{x\in X}} is said to be a continuous dual frame of { f x } {\displaystyle \{f_{x}\}} if it satisfies the following condition for all f , h ∈ H {\displaystyle f,h\in H} : ⟨ f , h ⟩ = ∫ X ⟨ f , f x ⟩ ⟨ g x , h ⟩ d μ ( x ) . {\displaystyle \langle f,h\rangle =\int _{X}\langle f,f_{x}\rangle \langle g_{x},h\rangle d\mu (x).} === Framed positive operator-valued measure === Just as a frame is a natural generalization of a basis to sets that may be linearly dependent, a positive operator-valued measure (POVM) is a natural generalization of a projection-valued measure (PVM) in that elements of a POVM are not necessarily orthogonal projections.
Suppose ( X , M ) {\displaystyle (X,M)} is a measurable space with M {\displaystyle M} a Borel σ-algebra on X {\displaystyle X} and let F {\displaystyle F} be a POVM from M {\displaystyle M} to the space of positive operators on H {\displaystyle H} with the additional property that 0 < A I ≤ F ( M ) ≤ B I < ∞ , {\displaystyle 0<AI\leq F(M)\leq BI<\infty ,} where I {\displaystyle I} is the identity operator. Then F {\displaystyle F} is called a framed POVM. In case of the fusion frame condition, this allows for the substitution F ( m ) = ∑ i ∈ m w i P W i , m ∈ M . {\displaystyle F(m)=\sum _{i\in m}w_{i}P_{W_{i}},\quad m\in M.} For the continuous frame operator, the framed POVM would be ⟨ F ( M ) f x , f y ⟩ = ∫ M ⟨ S f x , f y ⟩ d μ ( x ) . {\displaystyle \langle F(M)f_{x},f_{y}\rangle =\int _{M}\langle Sf_{x},f_{y}\rangle d\mu (x).} == See also == k-frame Biorthogonal wavelet Orthogonal wavelet Restricted isometry property Schauder basis Harmonic analysis Fourier analysis Functional analysis == Notes == == References == Antoine, J.-P.; Balazs, P. (2012). "Frames, Semi-Frames, and Hilbert Scales". Numerical Functional Analysis and Optimization. 33 (7–9): 736–769. arXiv:1203.0506. doi:10.1080/01630563.2012.682128. ISSN 0163-0563. Casazza, Peter; Kutyniok, Gitta; Philipp, Friedrich (2013). "Introduction to Finite Frame Theory". Finite Frames: Theory and Applications. Berlin: Birkhäuser. pp. 1–53. ISBN 978-0-8176-8372-6. Christensen, Ole (2016). "An Introduction to Frames and Riesz Bases". Applied and Numerical Harmonic Analysis. Cham: Springer International Publishing. doi:10.1007/978-3-319-25613-9. ISBN 978-3-319-25611-5. ISSN 2296-5009. Duffin, Richard James; Schaeffer, Albert Charles (1952). "A class of nonharmonic Fourier series". Transactions of the American Mathematical Society. 72 (2): 341–366. doi:10.2307/1990760. JSTOR 1990760. MR 0047179. Kovačević, Jelena; Chebira, Amina (2008). "An Introduction to Frames" (PDF). 
Foundations and Trends in Signal Processing. 2 (1): 1–94. doi:10.1561/2000000006. Kovacevic, Jelena; Dragotti, Pier Luigi; Goyal, Vivek (2002). "Filter Bank Frame Expansions with Erasures" (PDF). IEEE Transactions on Information Theory. 48 (6): 1439–1450. CiteSeerX 10.1.1.661.2699. doi:10.1109/TIT.2002.1003832. Mallat, Stéphane (2009). A wavelet tour of signal processing: the sparse way. Amsterdam Boston: Elsevier/Academic Press. ISBN 978-0-12-374370-1. Moran, Bill; Howard, Stephen; Cochran, Doug (2013). "Positive-Operator-Valued Measures: A General Setting for Frames". Excursions in Harmonic Analysis, Volume 2. Boston: Birkhäuser Boston. doi:10.1007/978-0-8176-8379-5_4. ISBN 978-0-8176-8378-8. Robinson, Benjamin; Moran, Bill; Cochran, Doug (2021). "Positive operator-valued measures and densely defined operator-valued frames". Rocky Mountain Journal of Mathematics. 51 (1). arXiv:2004.11729. doi:10.1216/rmj.2021.51.265. ISSN 0035-7596. Young, Robert M. (2001). An Introduction to Non-Harmonic Fourier Series, Revised Edition, 93. Academic Press. ISBN 978-0-12-772955-8.
Wikipedia:Franc Breckerfeld#0
Franc Breckerfeld (February 17, 1681 in Ljubljana – October 29, 1744 in Cluj, Romania) was a Slovene theologian, mathematician, astronomer and latinist. In his later years he was a member of the Royal Observatory at Cluj. == References == Krleža, Miroslav, ed. (1955), Enciklopedija Jugoslavije (2nd ed.), Jugoslavenski leksikografski zavod, p. 314
Wikipedia:Frances Kuo#0
Frances Y. Kuo is an applied mathematician known for her research on low-discrepancy sequences and quasi-Monte Carlo methods for numerical integration and finite element analysis. Originally from Taiwan, she was educated in New Zealand, and works in Australia as a professor in applied mathematics at the University of New South Wales. == Education and career == Kuo is originally from Taipei, and went to high school in Taiwan. She moved to New Zealand in 1994, and became a student at the University of Waikato, completing a bachelor of computing and mathematical sciences with honours in 1998, and a PhD in 2001. Her dissertation, Constructive approaches to quasi-Monte Carlo methods for multiple integration, was supervised by Stephen Joe. After a year as an assistant lecturer at Waikato, Kuo moved to the University of New South Wales (UNSW) to do postdoctoral research with Ian Sloan. She remained as an ARC QEII Fellow and in 2012 became a senior lecturer at UNSW. She became an ARC Future Fellow in 2013 and a professor in 2019. == Recognition == In 2011, Kuo won the JH Michell Medal of ANZIAM, given annually to outstanding new researchers. The award cited her leadership in "theory and applications of high dimensional integration and approximation, Monte-Carlo methods and information-based complexity" and her interest in "applications in finance, statistics and porous media flow". She was the 2014 winner of the Joseph F. Traub Prize for Achievement in Information-Based Complexity. == References == == External links == Home page Frances Kuo publications indexed by Google Scholar
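The quasi-Monte Carlo methods central to Kuo's research replace random sample points with deterministic low-discrepancy points. A minimal editorial sketch using the base-2 van der Corput sequence (a classical low-discrepancy construction, chosen here for brevity rather than one of Kuo's lattice rules):

```python
def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def qmc_integrate(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3; with 4096
# low-discrepancy points the estimate is accurate to about 1e-4.
estimate = qmc_integrate(lambda x: x * x, 4096)
assert abs(estimate - 1 / 3) < 1e-3
```

Because the points are chosen to fill the unit interval evenly, the error decays roughly like (log n)/n rather than the n^(-1/2) rate of plain Monte Carlo; Kuo's work extends such constructions to very high-dimensional integrals.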
Wikipedia:Francesca Mazzia#0
Francesca Mazzia (born 13 March 1967) is an Italian applied mathematician and computer scientist specializing in numerical analysis, including numerical methods for ordinary differential equations. She is a professor of computer science at the University of Bari. == Education and career == Mazzia was born on 13 March 1967 in Taranto, and earned a laurea (the Italian equivalent of a master's degree) in information science in 1989 at the University of Bari, advised by Donato Trigiante. After working at the University of Bari as a researcher in the department of mathematics beginning in 1990, she took a postdoctoral research position in parallel algorithms at the Centre européen de recherche et de formation avancée en calcul scientifique (CERFACS), in Toulouse, France, from 1997 to 1998, with the support of a Marie Curie Research Training Grant. Returning to Italy, she took an associate professor position in the department of mathematics at the University of Bari in 2000. In 2018, she moved to the department of computer science as a full professor. == Book == Mazzia is a coauthor of the book Solving Differential Equations in R (Springer, 2012), with Dutch ecoscientist Karline Soetaert and British mathematician Jeff R. Cash. The book was listed by the Association for Computing Machinery as one of the best computing books published in 2012. == Recognition == Mazzia is an honorary fellow of the European Society of Computational Methods in Sciences, Engineering and Technology. == References == == External links == Home page Francesca Mazzia publications indexed by Google Scholar
Wikipedia:Francesco Barberino Benici#0
Francesco Barberino Benici (3 December 1642 – 26 September 1702) was an Italian mathematician. He was among the first popularizers of mathematics for shopkeepers, along with Elia Del Re, Christopher Clavius, and Domenico Griminelli. == Works == Aritmetica prattica (in Italian). Palermo: Ignazio Calatro. 1697. == References ==
Wikipedia:Francesco Ventretti#0
Francesco Ventretti (1713–1784) was an Italian mathematician. == Life == Ventretti taught at the Military College of Verona and in 1773 invented the orosmeter, a tool to make precision measurements of hillside gradients. Gaetano Marzagaglia commented on his works. == Works == Ventretti, Francesco (1752). Genesi di tutti li triangoli rettangoli numerici (in Italian). In Verona: Antonio Andreoni. Ventretti, Francesco (1768). Del modo di trovare la fisica proporzione che hanno fra di loro due linee rette e due porzioni di circonferenze di cerchj (in Italian). In Verona: Agostino Carattoni. Ventretti, Francesco (1789). Dialoghi matematici (in Italian). In Verona: Dionigi Ramanzini. == References ==
Wikipedia:Francis Allotey#0
Francis Kofi Ampenyin Allotey (9 August 1932 – 2 November 2017) was a Ghanaian mathematical physicist. Together with Daniel Afedzi Akyeampong, he was one of the first two Ghanaians to obtain a doctorate in the mathematical sciences, earned in 1966. == Early life and education == Allotey was born on 9 August 1932 in the Fante town of Saltpond in the Central Region of Ghana to Joseph Kofi Allotey, a general commodities merchant and Alice Esi Nyena Allotey, a dressmaker from the Royal Dehyena family of Enyan Owomase and Ekumfi Edumafa, in the Central Region of Ghana. His father owned a bookstore. During his childhood, Allotey spent his free time in his father's bookstore reading the biographies of famous scientists which piqued his interest in science. He was raised a Roman Catholic. He had his primary education at St. John the Baptist Catholic (Boys) School in Saltpond and was among the pioneer batch of Ghana National College when the school was founded in July 1948 by Kwame Nkrumah. After secondary school, he attended the University Tutorial College in Ghana and the London Borough Polytechnic. He held master's and doctorate degrees from Princeton University, awarded in 1966, and earlier the Diploma of Imperial College, obtained in 1960. He was tutored by the Pakistani Nobel prize-winning physicist Abdus Salam as an undergraduate at Imperial College. During his time at Princeton, Allotey was mentored by many physicists such as Robert Dicke, Val Fitch, Robert Oppenheimer, Paul A. M. Dirac and C. N. Yang. == Career == He was known for the "Allotey Formalism" which arose from his work on soft X-ray spectroscopy. He was the 1973 recipient of the UK Prince Philip Golden Award for his work in this area. A founding fellow of the African Academy of Sciences, in 1974, he became the first Ghanaian full professor of mathematics and head of the Department of Mathematics and later Dean of the Faculty of Science at the Kwame Nkrumah University of Science and Technology.
He was also the founding director of the KNUST Computer Centre before he assumed his position as the Pro-Vice-Chancellor of the university. Among Allotey's colleagues on the mathematics faculty at KNUST was Atu Mensa Taylor (died in 1977), the third Ghanaian to obtain a doctorate in mathematics. Taylor had received his DPhil (1967) from Oxford under the Welsh mathematical physicist, John Trevor Lewis, having also received an MA there many years before. Allotey was the President of the Ghana Academy of Arts and Sciences and a member of a number of international scientific organizations including the Abdus Salam International Centre for Theoretical Physics Scientific Council since 1996. He was also the President of the Ghana Institute of Physics and the founding President of the African Physical Society. He was instrumental in getting Ghana to join the International Union of Pure and Applied Physics, making it one of the first few African countries to join the Union. He collaborated with the IUPAP and ICTP to encourage physics education in developing countries through workshops and conferences in order to create awareness on the continent. Allotey was the Chairman of the Board of Trustees of the Accra Institute of Technology and the President of the African Institute for Mathematical Sciences, Ghana. He was an honorary fellow of the Institute of Physics. He was an honorary Fellow of the Nigerian Mathematical Society among others. He consulted for many international institutions such as UNESCO, the IAEA and UNIDO. He was also the Vice President of the 7th General Assembly of the Intergovernmental Bureau of Informatics (IBI). He was also instrumental in the advancement of computer education in Africa and worked closely with organisations such as the IBM International and the International Federation for Information Processing.
In 2004, he was the only African among the 100 most eminent physicists and mathematicians in the world to be cited in a book titled "One hundred reasons to be a scientist". The Professor Francis Allotey Graduate School was established in 2009 at the Accra Institute of Technology. The institute provides master's degrees in Business Administration and Software Engineering and doctoral programmes in Information Technology and Philosophy. The Government of Ghana awarded him the Millennium Excellence Award in 2005, and dedicated a postage stamp in his honour. In 2009 he received the Order of the Volta and was posthumously awarded the Osagyefo Kwame Nkrumah African Genius Award in 2017. He helped establish the African Institute of Mathematical Sciences in Ghana in 2012. == Personal life == Allotey first married Edoris Enid Chandler from Barbados, whom he met while they were both studying in London. They had two children, Francis Kojo Enu Allotey and Joseph Kobina Nyansa Allotey. Chandler died in November 1981. He later married Ruby Asie Mirekuwa Akuamoah. Together they raised her two children, Cilinnie and Kay. Akuamoah died in October 2011. Overall, Allotey had four children and 20 grandchildren. == Death and state funeral == Francis Allotey died of natural causes on 2 November 2017. The Ghanaian government accorded him a state funeral in recognition of his contributions to the advancement of science and technology in Ghana. His body was interred in his hometown, Saltpond, Central Region. == References == == External links == AIMS Ghana Allotey profile
Wikipedia:Francis Bashforth#0
Francis Bashforth (8 January 1819 – 12 February 1912) was an English Anglican priest and mathematician, who is known for his use of applied mathematics on ballistics. == Early life and education == Bashforth was born on 8 January 1819 in Thurnscoe, Yorkshire, England. Bashforth was the eldest son of John Bashforth, a farmer. He was educated at Doncaster Grammar School. In 1839, he matriculated into St John's College, Cambridge as a sizar. Having studied the Mathematical Tripos at the University of Cambridge, he graduated with a Bachelor of Arts (BA) degree in 1843 and was the Second Wrangler. Bashforth later returned to his alma mater to undertake a Bachelor of Divinity (BD) degree, which he completed in 1853. == Career == Bashforth was elected a Fellow of St John's College, Cambridge in 1843. Bashforth was ordained in the Church of England as a deacon in 1850 and as a priest in 1851. From 1857 until 1908, he was the Rector of Minting in Lincolnshire, the living of which belonged to his college. From 1864 to 1872, Bashforth was Professor of Applied Mathematics at the Royal Military Academy, Woolwich, teaching the British Army's artillery officers. Between 1864 and 1880, he undertook systematic ballistics experiments that studied the resistance of air. He invented a ballistic chronograph and received an award from the British government in the amount of £2000 (equivalent to £273,000 in 2023). He also studied liquid drops and surface tension. The Adams–Bashforth method (a numerical integration method) is named after John Couch Adams (who was the 1847 Senior Wrangler to Bashforth's Second Wrangler) and Bashforth. They used the method to study drop formation in 1883. == Personal life == On 14 September 1869, Bashforth married Elizabeth Jane, daughter of the Revd Samuel Rotton Piggott. Together, they had one son: Charles Pigott Bashforth (1872–1945) who was also an Anglican clergyman. Bashforth died on 12 February 1912 in Woodhall Spa, Lincolnshire, England, aged 93.
== Writings == Bashforth, Francis (1866), Description of a Chronograph adapted for measuring the varying velocity of a body in motion through the air and for other purposes, London: Bell and Daldy Bashforth, Francis (1873), A mathematical treatise on the motion of Projectiles founded chiefly on the results of experiments made with the author's chronograph, London: Asher and Company Bashforth, Francis; Adams, John Couch (1883), An attempt to test the theories of capillary action by comparing the theoretical and measured forms of drops of fluid, with an explanation of the method of integration employed in constructing the tables which give the theoretical forms of such drops, University Press Bashforth, Francis (1890), Revised account of the experiments made with the Bashforth Chronograph, to find the resistance of the air to the motion of projectiles, with the application of the results to the calculation of trajectories according to J. Bernoulli's method, Cambridge University Press Bashforth, Francis (1895), Supplement to a revised account of the experiments made with the Bashforth ..., Cambridge: University Press Bashforth, Francis (1903), A Historical Sketch of the Experimental Determination of the Resistance of the Air to the Motion of Projectiles, Cambridge University Press Bashforth, Francis (1907), Ballistic experiments from 1864 to 1880, Cambridge University Press == References == Chisholm, Hugh, ed. (1922). "Bashforth, Francis" . Encyclopædia Britannica. Vol. 30 (12th ed.). London & New York: The Encyclopædia Britannica Company. p. 418. == External links == https://books.google.com/books?id=gO4pAAAAYAAJ&pg=PA42 states "Second Wrangler 1843", "Rector and Vicar of Minting". http://armiestrumenti.com/2010/11/03/introduzione-alla-balistica-esterna/ (Italian); has picture of Bashforth This article was translated from the corresponding article in the German Wikipedia.
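The Adams–Bashforth methods named for Bashforth advance an ODE solution using previously computed slopes. A minimal editorial sketch of the two-step variant, applied to the illustrative test problem y' = −y (chosen here; not one of Bashforth's own ballistics computations):

```python
import math

def adams_bashforth2(f, t0, y0, h, steps):
    """Two-step Adams-Bashforth method for y' = f(t, y):

        y_{n+2} = y_{n+1} + h * (3/2 * f_{n+1} - 1/2 * f_n),

    with the first step bootstrapped by a single Euler step.
    """
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + h * f_prev          # Euler bootstrap for the first step
    t += h
    for _ in range(steps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += h
    return y

# y' = -y, y(0) = 1 has the exact solution e^{-t}; the second-order
# method with h = 0.01 matches e^{-1} to well under 1e-3 at t = 1.
h = 0.01
approx = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, h, int(1.0 / h))
assert abs(approx - math.exp(-1)) < 1e-3
```

The method is "explicit": each step reuses the slope from the previous step instead of solving an equation, which is what made it practical for the hand computations of Adams and Bashforth's 1883 drop-formation study.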
Wikipedia:Francis Buekenhout#0
Francis Buekenhout (born 23 April 1937 in Ixelles near Brussels) is a Belgian mathematician who introduced Buekenhout geometries and the concept of quadratic sets. == Career == Buekenhout studied at the University of Brussels under Jacques Tits and Paul Libois. Together with his teacher Jacques Tits, he developed concepts with the diagram geometries, also called Buekenhout geometries or Buekenhout–Tits geometries. These largely disregard the concrete axiom systems of a projective or affine geometry and put these and many other incidence geometries into a common framework. He worked at the ULB from 1960 to 1969 as an assistant to Libois. He was then appointed as extraordinary professor from 1969 to 1998, and as ordinary professor from 1977 until his retirement in 2002. He has been a member of the Académie Royale des Sciences, Lettres et des Beaux-Arts de Belgique since May 2002, and in 1982 he won the Prix François Deruyts of this academy. Buekenhout co-founded the Belgian Mathematics Olympics in 1976 and organized them from 1976 to 1987. == Notable student == Dimitri Leemans – Belgian mathematician == References == == External links == Official website
Wikipedia:Francis Su#0
Francis Edward Su is an American mathematician. He joined the Harvey Mudd College faculty in 1996, and is currently Benediktsson-Karwa Professor of Mathematics. Su served as president of the Mathematical Association of America from 2015 to 2017 and served as a Vice President of the American Mathematical Society from 2020 to 2023. Su has received multiple awards from the MAA, including the Henry L. Alder Award and a Deborah and Franklin Haimo Award for Distinguished College or University Teaching of Mathematics, both for distinguished teaching. He was also a Phi Beta Kappa Visiting Scholar during the 2019–2020 term. He was elected as a Fellow of the American Mathematical Society in the 2025 class of fellows. Su received his B.S. in Mathematics from the University of Texas, graduating Phi Beta Kappa in 1989. He went on to receive his Ph.D. from Harvard University, where his advisor was Persi Diaconis. His research area is combinatorics, and he is particularly known for his work on fair division. Su and Michael Starbird are co-authors of the book Topology Through Inquiry. His book, Mathematics for Human Flourishing, was released on 7 January 2020. The latter book is based on his speech of the same title, delivered Jan 6, 2017 at the Joint Math Meetings. He won the Halmos-Ford Award for Distinguished Writing in 2018 for that speech. Three of his articles have been featured in "The Princeton Anthology of the Best Writing in Mathematics" in the years 2011, 2014, and 2018. In 2021 he received the Euler Book Prize jointly with Christopher Jackson. == Selected publications == Starbird, Michael; Su, Francis (2019). Topology Through Inquiry. American Mathematical Society. ISBN 978-1-4704-5276-6. Su, Francis (2020). Mathematics for Human Flourishing. With Reflections by Christopher Jackson. Yale University Press. ISBN 9780300237139. == References == == External links == Home page at Harvey Mudd Author page Twitter account
Wikipedia:Francisco Dória#0
Francisco Antônio de Moraes Accioli Dória (born 1945, Rio de Janeiro, Brazil) is a Brazilian mathematician, philosopher, and genealogist. Francisco Antônio Dória received his B.S. in Chemical Engineering from the Federal University of Rio de Janeiro (UFRJ), Brazil, in 1968 and then received his doctorate from the Brazilian Center for Research in Physics (CBPF), advised by Leopoldo Nachbin in 1977. Dória worked for a while at the Physics Institute of UFRJ, and then left to become a Professor of the Foundations of Communications at the School of Communications, also at UFRJ. Dória held visiting positions at the University of Rochester (NY), Stanford University (CA) (here as a Senior Fulbright Scholar), and the University of São Paulo (USP). His most prolific period grew out of his collaboration with Newton da Costa, a Brazilian logician and one of the founders of paraconsistent logic, which began in 1985. He is currently Professor of Communications, Emeritus, at UFRJ and a member of the Brazilian Academy of Philosophy. His main achievement (with Brazilian logician and philosopher Newton da Costa) is the proof that chaos theory is undecidable (published in 1991), and when properly axiomatized within classical set theory, is incomplete in the sense of Gödel. The decision problem for chaotic dynamical systems had been formulated by mathematician Morris Hirsch. More recently da Costa and Dória introduced a formalization for the P = NP hypothesis that they called the “exotic formalization,” and showed in a series of papers that axiomatic set theory together with exotic P = NP is consistent if set theory is consistent. They then prove: If exotic P = NP together with axiomatic set theory is ω-consistent, then axiomatic set theory + P = NP is consistent. (So far nobody has advanced a proof of the ω-consistency of set theory + exotic P = NP.)
They also showed that the equivalence between exotic P = NP and the usual formalization for P = NP, is independent of set theory and holds of the standard integers. If set theory plus that equivalence condition has the same provably total recursive functions as plain set theory, then the consistency of P = NP with set theory follows. Dória is also interested in the theories of hypercomputation and in the foundations of economic theory. == References == N. C. A. da Costa and F. A. Dória, "Undecidability and incompleteness in classical mechanics," Int. J. Theor. Physics vol. 30, pp. 1041–1073 (1991) Proves that chaos theory is undecidable and, if axiomatized within set theory, incomplete in the sense of Gödel. N. C. A. da Costa and F. A. Dória, "An undecidable Hopf bifurcation with an undecidable fixed point," Int. J. Theor. Physics vol. 33, pp. 1885–1903 (1994). Settles a question raised by V. I. Arnold in the list of problems drawn up at the 1974 American Mathematical Society Symposium on the Hilbert Problems: is the stability problem for stationary points algorithmically decidable? I. Stewart, "Deciding the undecidable," Nature vol. 352, pp. 664–665 (1991). I. Stewart, From Here to Infinity, Oxford (1996). Comments on the undecidability proof for chaos theory. J. Barrow, Impossibility – The Limits of Science and the Science of Limits, Oxford (1998). Describes the solution of Arnold's stability problem. S. Smale, "Problem 14: Lorenz attractor," in V. I. Arnold et al., Mathematics, Frontiers and Perspectives, pp. 285–286, AMS and IMU (2000). Summarizes the obstruction to decidability in chaos theory described by da Costa and Dória. F. A. Dória and J. F. Costa, "Special issue on hypercomputation," Applied Mathematics and Computation vol. 178 (2006). N. C. A. da Costa and F. A. Dória, "Consequences of an exotic formulation for P = NP," Applied Mathematics and Computation vol. 145, pp. 655–665 (2003) and vol. 172, pp. 1364–1367 (2006). 
The criticisms of the da Costa–Dória approach appear in the references in those papers. N. C. A. da Costa, F. A. Dória and E. Bir, "On the metamathematics of the P vs. NP question," to be published in Applied Mathematics and Computation (2007). Reviews the evidence for a conjectured consistency of P = NP with some strong axiomatic theory. A. Syropoulos, Hypercomputation: Computing Beyond the Church–Turing Barrier, Springer (2008). Describes the contribution to hypercomputation theories by da Costa and Dória, and sketches their contribution to the P = NP problem. == Book List == Francisco Antonio Doria, NCA da Costa, "On the Foundations of Science (LIVRO): Essays, First Series", Editora E-papers, 2013. Francisco Antonio Doria, "Chaos, Computers, Games and Time: A quarter century of joint work with Newton da Costa", Editora E-papers. Gregory Chaitin, Francisco A Doria, Newton C.A. da Costa, "Goedel's Way: Exploits into an undecidable world", CRC Press, 2011. Francisco Antonio Doria (Ed.), "The Limits Of Mathematical Modeling In The Social Sciences: The Significance Of Godel's Incompleteness Phenomenon", World Scientific, 2017. Shyam Wuppuluri, Francisco Antonio Doria (Eds.), "The Map and the Territory: Exploring the foundations of science, thought and reality", foreword by Sir Roger Penrose, Afterword by Dagfinn Follesdal, Springer — The frontiers Collection, 2018. Shyam Wuppuluri, Francisco Antonio Doria (Eds.), "Unravelling Complexity: The Life And Work Of Gregory Chaitin" World Scientific, 2020.
Wikipedia:Frank Garvan#0
Francis G. Garvan (born March 9, 1955) is an Australian-born mathematician who specializes in number theory and combinatorics. He holds the position Professor of Mathematics at the University of Florida. He received his Ph.D. from Pennsylvania State University (January, 1986) with George E. Andrews as his thesis advisor. Garvan's thesis, Generalizations of Dyson's rank, concerned the rank of a partition and formed the groundwork for several of his later papers. Garvan is well known for his work in the fields of q-series and integer partitions. Most famously, in 1988, Garvan and Andrews discovered a definition of the crank of a partition. The crank of a partition is an elusive combinatorial statistic similar to the rank of a partition which provides a key to the study of Ramanujan congruences in partition theory. It was first described by Freeman Dyson in a paper on ranks for the journal Eureka in 1944. Andrews and Garvan's definition was the first definition of a crank to satisfy the properties hypothesized for it in Dyson's paper. == References == == External links == http://people.clas.ufl.edu/fgarvan/
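The Andrews–Garvan crank mentioned above has a short definition: if a partition contains no 1s, its crank is the largest part; otherwise, writing ω for the number of 1s and μ for the number of parts larger than ω, the crank is μ − ω. A minimal editorial sketch checking the smallest case of the property Dyson hypothesized, namely that the cranks of the p(4) = 5 partitions of 4 fall into the five residue classes mod 5 in equal numbers (here, one each), explaining Ramanujan's congruence p(5n + 4) ≡ 0 (mod 5):

```python
def partitions(n, max_part=None):
    """Generate all integer partitions of n as weakly decreasing lists."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def crank(p):
    """Andrews-Garvan crank of a partition given as a list of parts."""
    omega = p.count(1)                        # number of parts equal to 1
    if omega == 0:
        return max(p)                         # crank = largest part
    mu = sum(1 for part in p if part > omega)  # parts larger than omega
    return mu - omega

# The five partitions of 4 have cranks 4, 0, 2, -2, -4, which hit each
# residue class mod 5 exactly once.
residues = sorted(crank(p) % 5 for p in partitions(4))
assert residues == [0, 1, 2, 3, 4]
```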
Wikipedia:Frank Harary#0
Frank Harary (March 11, 1921 – January 4, 2005) was an American mathematician, who specialized in graph theory. He was widely recognized as one of the "fathers" of modern graph theory. Harary was a master of clear exposition and, together with his many doctoral students, he standardized the terminology of graphs. He broadened the reach of this field to include physics, psychology, sociology, and even anthropology. Gifted with a keen sense of humor, Harary challenged and entertained audiences at all levels of mathematical sophistication. A particular trick he employed was to turn theorems into games—for instance, students would try to add red edges to a graph on six vertices in order to create a red triangle, while another group of students tried to add edges to create a blue triangle (and each edge of the graph had to be either blue or red). Because of the theorem on friends and strangers, one team or the other would have to win. == Biography == Frank Harary was born in New York City, the oldest child of a family of Jewish immigrants from Syria and Russia. He earned his bachelor's and master's degrees from Brooklyn College in 1941 and 1945 respectively and his Ph.D., with supervisor Alfred L. Foster, from University of California, Berkeley in 1948. Prior to his teaching career he became a research assistant in the Institute for Social Research at the University of Michigan. Harary's first publication, "Atomic Boolean-like rings with finite radical", took considerable effort to place in the Duke Mathematical Journal in 1950. This article was first submitted to the American Mathematical Society in November 1948, then sent to the Duke Mathematical Journal where it was revised three times before it was finally published two years after its initial submission.
Harary began his teaching career at the University of Michigan in 1953, where he was first an assistant professor, then in 1959 associate professor, and in 1964 was appointed professor of mathematics, a position he held until 1986. From 1987 he was Professor (and Distinguished Professor Emeritus) in the Computer Science Department at New Mexico State University in Las Cruces. He was one of the founders of the Journal of Combinatorial Theory and the Journal of Graph Theory. In 1949 Harary published On the algebraic structure of knots. Shortly afterwards, in 1953, he published (jointly with George Uhlenbeck) On the number of Husimi trees. It was following this work that he began to build a worldwide reputation for his work in graph theory. In 1965 his first book, Structural models: An introduction to the theory of directed graphs, was published, and for the rest of his life his interest would be in the field of graph theory. While beginning his work in graph theory around 1965, Harary began buying property in Ann Arbor and subdividing the houses he bought into apartments. This led to criticism for poor maintenance, "scores of building code violations", and six condemnations of buildings he owned. In a 1969 newspaper article, Harary was quoted as stating "We just wanted these properties for the land value ... we wanted to move the tenants out", while his wife Jayne stated "We've wanted to help poor blacks find better housing, but we've taken the rap again and again." Harary and his wife Jayne had six children together: Miriam, Natalie, Judith, Thomas, Joel and Chaya. From 1973 to 2007 Harary jointly wrote five more books, each in the field of graph theory. Over the course of his career, Harary traveled the world, researching and publishing over 800 papers (with some 300 different co-authors) in mathematical journals and other scientific publications, more than any mathematician other than Paul Erdős.
Harary recorded that he lectured in 166 different cities around the United States and some 274 cities in over 80 different countries. He was particularly proud that he had given lectures in cities around the world beginning with every letter of the alphabet, even including "X" when he traveled to Xanten, Germany. Harary also played a curious role in the award-winning film Good Will Hunting: the film displayed formulas he had published on the enumeration of trees, which were supposed to be fiendishly difficult. In 1986, at the age of 65, Harary retired from his professorship at the University of Michigan. Retirement did not slow him down, however: he was appointed a Distinguished Professor of Computer Science at New Mexico State University in Las Cruces, a position he held until his death in 2005. The same year as his retirement Harary was made an honorary fellow of the National Academy of Sciences, India; he also served as an editor for about 20 different journals focusing primarily on graph theory and combinatorics. Following his retirement, Harary was elected an honorary lifetime member of the Calcutta Mathematical Society and of the South African Mathematical Society. He died at Memorial Medical Center in Las Cruces, New Mexico, where his colleagues in the Department of Computer Science felt the loss of the great mind that had once worked beside them. Desh Ranjan, head of the Department of Computer Science at the time of Harary's death, said: "Dr. Harary was a true scholar with a genuine love for graph theory which was an endless source of new discoveries, beauty, curiosity, surprises and joy for him till the very end of his life." == Mathematics == Harary's work in graph theory was diverse. Some topics of great interest to him were: Graph enumeration, that is, counting graphs of a specified kind.
He coauthored a book on the subject (Harary and Palmer 1973). The main difficulty is that two graphs that are isomorphic should not be counted twice; thus, one has to apply Pólya's theory of counting under group action. Harary was an expert in this. Signed graphs. Harary invented this branch of graph theory, which grew out of a problem of theoretical social psychology investigated by the psychologist Dorwin Cartwright and Harary. Applications of graph theory in numerous areas, especially to the social sciences, such as balance theory, opinion dynamics, and the theory of tournaments. Harary was co-author of John Wiley's first e-book, Graph Theory and Geography. Among the over 700 scholarly articles Harary wrote, two were co-authored with Paul Erdős, giving Harary an Erdős number of 1. He lectured extensively and kept alphabetical lists of the cities where he spoke. Harary's classic book Graph Theory was published in 1969 and offered a practical introduction to the field. Harary's focus in this book, as in many of his other publications, was on the varied and diverse applications of graph theory to other fields of mathematics, physics, and beyond. In the preface of Graph Theory, Harary notes: "...there are applications of graph theory to some areas of physics, chemistry, communication science, computer technology, electrical and civil engineering, architecture, operational research, genetics, psychology, sociology, economics, anthropology, and linguistics." Harary promoted inquiry-based learning through his texts, as is apparent from his reference to the tradition of the Moore method. He made many contributions to graph theory as he explored more and more fields of study and successfully related them to graph theory.
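The enumeration problem described above can be made concrete. The sketch below is a brute-force illustration, not Pólya's counting method: it counts simple graphs on n vertices up to isomorphism by reducing each labeled graph to a canonical form over all vertex relabelings.

```python
from itertools import combinations, permutations, product

def canonical_form(n, edges):
    """Return a canonical representative of a graph's isomorphism class:
    the lexicographically smallest sorted edge list over all relabelings."""
    best = None
    for perm in permutations(range(n)):
        key = tuple(sorted(tuple(sorted((perm[u], perm[v]))) for u, v in edges))
        if best is None or key < best:
            best = key
    return best

def count_nonisomorphic_graphs(n):
    """Count simple graphs on n vertices up to isomorphism (brute force)."""
    possible_edges = list(combinations(range(n), 2))
    seen = set()
    # Enumerate all 2^(n choose 2) labeled graphs and canonicalize each.
    for mask in product([0, 1], repeat=len(possible_edges)):
        edges = [e for e, bit in zip(possible_edges, mask) if bit]
        seen.add(canonical_form(n, edges))
    return len(seen)

# Counts for n = 1, 2, 3, 4: 1, 2, 4, 11 (OEIS A000088)
```

This exhaustive approach is only feasible for very small n; the point of Pólya's theory, and of Harary and Palmer's Graphical Enumeration, is to obtain such counts without enumerating every labeled graph.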
Harary's classic book Graph Theory begins by providing the reader with much of the requisite knowledge of basic graphs and then dives right into proving the diversity of content that is held within graph theory. Some of the other mathematical fields that Harary directly relates to graph theory in the book begin to appear around chapter 13; these topics include linear algebra and abstract algebra. Harary also made an influential contribution to the theory of social learning used in sociology and behavioral economics, deriving a criterion for consensus in John R. P. French's model of social power. This anticipated by several decades, albeit in a special case, the widely used DeGroot learning model. == Tree square root == One motivation for the study of graph theory is its application to sociograms, described by Jacob L. Moreno. For instance, the adjacency matrix of a sociogram was used by Leon Festinger. Festinger identified the graph-theoretic clique with the social clique and examined the diagonal of the cube of a group's adjacency matrix to detect cliques. Harary joined with Ian Ross to improve on Festinger's clique detection. The use of powers of an adjacency matrix led Harary and Ross to note that a complete graph can be obtained from the square of an adjacency matrix of a tree. Relying on their study of clique detection, they described a class of graphs for which the adjacency matrix is the square of the adjacency matrix of a tree. If a graph G is the square of a tree, then it has a unique tree square root. Some vocabulary necessary to understand this proof and the methods used here is provided in Harary's The Square of a Tree: cliqual, unicliqual, multicliqual, cocliqual, neighborhood, neighborly, cut point, block. How to determine whether some graph G is the square of a tree: G = T² for some tree T if and only if G is complete or satisfies the following five properties. (i) Every point of G is neighborly and G is connected.
(ii) If two cliques meet at only one point b, then there is a third clique with which they share b and exactly one other point. (iii) There is a 1-1 correspondence between the cliques and the multicliqual points b of G such that the clique C(b) corresponding to b contains exactly as many multicliqual points as the number of cliques which include b. (iv) No two cliques intersect in more than two points. (v) The number of pairs of cliques that meet in two points is one less than the number of cliques. Algorithm for finding the tree square root of a graph G. Step 1: Find all the cliques of G. Step 2: Let the cliques of G be C1,...,Cn, and consider a collection of multicliqual points b1,...,bn corresponding to these cliques in accordance with condition (iii). The elements of this collection are the nonendpoints of T. Find all of the pairwise intersections of the n cliques and form the graph S by joining the points bi and bj by a line if and only if the corresponding cliques Ci and Cj intersect in two points. S is then a tree by condition (v). Step 3: For each clique Ci of G, let ni be the number of unicliqual points. To the tree S obtained in step 2, attach ni endpoints to bi, obtaining the tree T which we sought. Once we have the tree in question, we can create an adjacency matrix for the tree T and check that it is indeed the tree we sought: squaring the adjacency matrix of T should yield the adjacency matrix of a graph isomorphic to the graph G with which we started. Probably the simplest way to observe this theorem in action is the case Harary mentions in The Square of a Tree, describing the tree corresponding to the complete graph K5: "Consider the tree consisting of one point joined with all the others. When the tree is squared, the result is the complete graph. We wish to illustrate...
T² = K5." Upon squaring the adjacency matrix of the previously mentioned tree, we can observe that the theorem does in fact hold. We can also observe that this pattern of setting up a tree where "one point joined with all the others" will always yield the correct tree for every complete graph. == Bibliography == 1965: (with Robert Z. Norman and Dorwin Cartwright), Structural Models: An Introduction to the Theory of Directed Graphs, New York: Wiley MR0184874 1967: (editor) Graph Theory and Theoretical Physics, Academic Press MR0232694 1969: Graph Theory, Addison–Wesley MR0256911 1971: (editor with Herbert Wilf) Mathematical Aspects of Electrical Networks Analysis, SIAM-AMS Proceedings, Volume 3, American Mathematical Society MR0329788 1973: (editor) New Directions in the Theory of Graphs: Proceedings of the 1971 Ann Arbor Conference on Graph Theory, University of Michigan, Academic Press MR0340065 1973: (with Edgar M. Palmer) Graphical Enumeration, Academic Press MR0357214 1979: (editor) Topics in Graph Theory, New York Academy of Sciences MR557879 1984: (with Per Hage) Structural Models in Anthropology, Cambridge Studies in Social and Cultural Anthropology, Cambridge University Press MR0738630 1990: (with Fred Buckley) Distance in Graphs, Perseus Press MR1045632 1991: (with Per Hage) Exchange in Oceania: A Graph Theoretic Analysis, Oxford Studies in Social and Cultural Anthropology, Oxford University Press. 2002: (with Sandra Lach Arlinghaus & William C. Arlinghaus) Graph Theory and Geography: An Interactive E-Book, John Wiley and Sons MR1936840 2007: (with Per Hage) Island Networks: Communication, Kinship, and Classification Structures in Oceania (Structural Analysis in the Social Sciences), Cambridge University Press. == References == == External links == Frank Harary at the Mathematics Genealogy Project Frank Harary memorial from New Mexico State University
Wikipedia:Frank Ruskey#0
Frank Ruskey is a combinatorialist and computer scientist, and professor at the University of Victoria. His research involves algorithms for exhaustively listing discrete structures, combinatorial Gray codes, Venn and Euler diagrams, combinatorics on words, and enumerative combinatorics. Ruskey is the author of the Combinatorial Object Server (COS), a website for information on and generation of combinatorial objects. == Selected publications == Lucas, J.M.; Vanbaronaigien, D.R.; Ruskey, F. (November 1993). "On Rotations and the Generation of Binary Trees". Journal of Algorithms. 15 (3): 343–366. CiteSeerX 10.1.1.51.8866. doi:10.1006/jagm.1993.1045. Pruesse, Gara; Ruskey, Frank (April 1994). "Generating Linear Extensions Fast". SIAM Journal on Computing. 23 (2): 373–386. CiteSeerX 10.1.1.52.3057. doi:10.1137/s0097539791202647. Ruskey, F.; Hu, T. C. (1977). "Generating Binary Trees Lexicographically". SIAM Journal on Computing. 6 (4): 745–758. doi:10.1137/0206055. Ruskey, Frank; Weston, Mark (June 2005). "A Survey of Venn Diagrams". The Electronic Journal of Combinatorics. doi:10.37236/26. Archived from the original on 11 October 2011. Retrieved 1 October 2011. == References == == External links == Frank Ruskey's homepage Combinatorial Object Server Combinatorial Generation, an unpublished book on combinatorics Frank Ruskey at the Mathematics Genealogy Project
Wikipedia:Frank Smithies#0
Frank Smithies FRSE (10 March 1912 – 16 November 2002) was a British mathematician who worked on integral equations, functional analysis, and the history of mathematics. He was elected as a fellow of the Royal Society of Edinburgh in 1961. He was an alumnus and an academic of Cambridge University. == Publications == Smithies, F. (1958), Integral equations, Cambridge Tracts in Mathematics and Mathematical Physics, vol. 49, Cambridge University Press, ISBN 978-0-521-06502-3, MR 0104991 {{citation}}: ISBN / Date incompatibility (help) Smithies, F. (1997), Cauchy and the creation of complex function theory, Cambridge University Press, ISBN 0-521-59278-X == References == O'Connor, John J.; Robertson, Edmund F., "Frank Smithies", MacTutor History of Mathematics Archive, University of St Andrews "Frank Smithies", The Times, Obituary == External links == Works by F. Smithies at Project Gutenberg Works by or about Frank Smithies at the Internet Archive Frank Smithies at the Mathematics Genealogy Project
Wikipedia:Frank Spitzer#0
Frank Ludvig Spitzer (July 24, 1926 – February 1, 1992) was an Austrian-born, Jewish-American mathematician who was a longtime professor at Cornell University and made fundamental contributions to probability theory, especially the theory of random walks, Brownian motion, and fluctuation theory, and later the theory of interacting particle systems. Other areas he contributed to include percolation theory and the Wiener sausage. He focused broadly on "phenomena", rather than on any one of the many specific theorems that might help to articulate a given phenomenon. His book Principles of Random Walk, first published in 1964, remains a well-cited classic. Spitzer was born on July 24, 1926, in Vienna, into an Austrian Jewish family. By the time he was twelve years old, the Nazi threat in Austria was evident. His parents were able to send him to a summer camp for Jewish children in Sweden, and, as a result, Spitzer spent all of the World War II years in Sweden. He lived with two Swedish families, learned Swedish, graduated from high school, and for one year attended Tekniska Högskolan in Stockholm. During the war years, Spitzer's parents and his sister were able to make their way to the United States by passing through the unoccupied parts of France and North Africa, and, after the war, Spitzer joined his family in their new country. Spitzer enlisted in the U.S. Army just as the war in Europe was ending. After completing his military service in 1947, Spitzer entered the University of Michigan to study mathematics. He completed his B.A. and Ph.D. there in just six years, receiving his doctorate in 1953. Spitzer's first academic appointments were at the California Institute of Technology (1953–1955) and the University of Minnesota (1955–1960), but most of his academic career was spent at Cornell University, where he started as a full professor in 1961. He also took leaves at Princeton University in the U.S. and at the Mittag-Leffler Institute in Sweden.
Among his honors, Spitzer was a member of the National Academy of Sciences. A multi-year struggle with Parkinson's disease culminated in Spitzer's retirement from Cornell in 1991, at which point he became a professor emeritus. He died on February 1, 1992, at Tompkins County Hospital in Ithaca, New York. == Selected publications == Spitzer, Frank (1976), Principles of Random Walk, Graduate Texts in Mathematics, vol. 34, New York-Heidelberg: Springer-Verlag, MR 0171290 == References == == External links == Frank Spitzer at the Mathematics Genealogy Project
Wikipedia:Frank W. Bubb Sr.#0
Frank Bubb (July 3, 1892 – May 3, 1961) was a scientist and mathematician at Washington University in St. Louis. He was part of the team that developed the cyclotron that produced the first batch of plutonium for the then-secret program referred to only as the Manhattan Project, which produced the atomic bomb. == References == Frank W. Bubb Sr.'s obituary
Wikipedia:Frans-H. van den Dungen#0
Frans-H. van den Dungen (1898–1965) was a Belgian scientist and professor at the Université Libre de Bruxelles. In 1946 he was awarded the Francqui Prize in Exact Sciences. Among his students was the mathematician Paul Dedecker. == External links == Frans-H. van den Dungen at the Mathematics Genealogy Project Université Libre de Bruxelles (History of Science Department)
Wikipedia:František Mikloško#0
František Mikloško (born 2 June 1947) is a Slovak politician. He was the Speaker of the Slovak National Council from 1990 to 1992 and a long-serving MP of the National Council of the Slovak Republic (1990–2010). For most of his career, he was a member of the Christian Democratic Movement. == Early life == Mikloško studied mathematics at Comenius University, graduating in 1966. Already as a student, he was active in the Catholic Church, which had a complicated relationship with the Communist regime at the time. At first, Mikloško's activities were limited to low-profile organization of small student gatherings while he worked as a researcher at the Slovak Academy of Sciences. However, from the 1980s, Mikloško gradually began contributing to the organization of large religious pilgrimages, which attracted the attention of the Communist regime. In 1983 he was fired from the Academy and could only work in manual occupations. In spite of the regime's repression, Mikloško continued to organize increasingly anti-regime rallies, most prominently the Candle demonstration in Bratislava in 1988. After the Velvet Revolution, he became the first Speaker of the Slovak National Council. == Political career == Mikloško was one of the longest-serving members of parliament in Slovakia. He was also a candidate in the 2004 presidential election and the 2009 presidential election. Mikloško did not participate in the 2010 parliamentary election and retired from politics. On 12 March 2008 František Mikloško, together with Vladimír Palko, Pavol Minárik, and Rudolf Bauer, established a new party called Conservative Democrats of Slovakia, which was dissolved in 2014. After 15 years, Mikloško returned to run for KDH ahead of the 2023 parliamentary election. == References ==
Wikipedia:Franz Aurenhammer#0
Franz Aurenhammer (born September 25, 1957) is an Austrian computational geometer known for his research in computational geometry on Voronoi diagrams, straight skeletons, and related structures. He is a professor in the Institute for Theoretical Computer Science of Graz University of Technology. Aurenhammer earned a diploma in technical mathematics from Graz University of Technology in 1982, and completed his doctorate there in 1984 and his habilitation in 1989. His doctoral dissertation was jointly supervised by Hermann Maurer and Herbert Edelsbrunner. He was on the faculty at Graz as an assistant professor from 1985 to 1989, and returned in 1992 as a full professor. == References ==
Wikipedia:Franz Mertens#0
Franz Mertens (20 March 1840 – 5 March 1927) (also known as Franciszek Mertens) was a German-Polish mathematician. He was born in Schroda in the Grand Duchy of Posen, Kingdom of Prussia (now Środa Wielkopolska, Poland) and died in Vienna, Austria. The Mertens function M(x) is the summatory function of the Möbius function, in the theory of arithmetic functions. The Mertens conjecture concerning its growth, which held that it is bounded by x^(1/2) and would have implied the Riemann hypothesis, is now known to be false (Odlyzko and te Riele, 1985). The Meissel–Mertens constant is analogous to the Euler–Mascheroni constant, but the harmonic series sum in its definition is taken only over the primes rather than over all integers, and the logarithm is taken twice, not just once. Mertens's theorems are three 1874 results related to the density of prime numbers. Erwin Schrödinger was taught calculus and algebra by Mertens. His memory is honoured by the Franciszek Mertens Scholarship, granted (from 2017) to those outstanding pupils of foreign secondary schools who wish to study at the Faculty of Mathematics and Computer Science of the Jagiellonian University in Kraków and were finalists of national-level mathematics or computer science olympiads, or have participated in one of the following international olympiads: in mathematics (IMO), computer science (IOI), artificial intelligence (IOAI), astronomy (IAO), astronomy and astrophysics (IOAA), physics (IPhO), linguistics (IOL), the European Girls' Mathematical Olympiad (EGMO), the European Girls' Olympiad in Informatics (EGOI), the Romanian Masters of Mathematics (RMM), the Romanian Masters of Informatics (RMI) or the International Zhautykov Olympiad (IZhO).
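The relationship between M(x) and the Möbius function can be made concrete with a short computational sketch (an illustration only, not drawn from Mertens's own work): the Möbius values are computed with a linear sieve and summed.

```python
def mobius_sieve(n):
    """Compute the Möbius function mu(1..n) with a linear sieve."""
    mu = [0] * (n + 1)
    mu[1] = 1
    primes = []
    is_composite = [False] * (n + 1)
    for i in range(2, n + 1):
        if not is_composite[i]:
            primes.append(i)
            mu[i] = -1  # a prime has exactly one prime factor
        for p in primes:
            if i * p > n:
                break
            is_composite[i * p] = True
            if i % p == 0:
                mu[i * p] = 0       # p^2 divides i*p, so mu vanishes
                break
            mu[i * p] = -mu[i]      # one extra distinct prime factor
    return mu

def mertens(n):
    """Mertens function M(n) = sum of mu(k) for k = 1..n."""
    mu = mobius_sieve(n)
    return sum(mu[1 : n + 1])

# M(1), ..., M(10) = 1, 0, -1, -1, -2, -1, -2, -2, -2, -1
```

The disproof of the Mertens conjecture did not proceed by such direct computation, which only reaches modest x; Odlyzko and te Riele worked with zeros of the Riemann zeta function instead.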
== See also == Mertens's theorems Cauchy product == References == == External links == Media related to Franz Mertens (mathematician) at Wikimedia Commons O'Connor, John J.; Robertson, Edmund F., "Franz Mertens", MacTutor History of Mathematics Archive, University of St Andrews Franz Mertens at the Mathematics Genealogy Project
Wikipedia:François Baccelli#0
François Louis Baccelli (born December 20, 1954) is a senior researcher at INRIA Paris, in charge of the ERC project NEMO on network mathematics. == Education and career == Baccelli obtained his PhD at the University of Paris-Sud in 1983 under the supervision of Erol Gelenbe. Between 1991 and 2003, he was a faculty member of the applied mathematics department at École polytechnique. He held the Simons Chair in mathematics and electrical and computer engineering at the University of Texas at Austin between 2012 and 2021. Between 2012 and 2019, he was the head of the Simons Center on Communication, Information and Network Mathematics. == Research == Baccelli's research is at the interface between mathematics (probability theory, stochastic geometry, dynamical systems) and communications (information theory, wireless networks, network science). His work with P. Brémaud on the stationary-ergodic framework for queuing networks represents such networks as functionals of point processes on the real line. This led to mathematical tools which are now commonly used in applied probability and in the communication network literature. Jointly with G. Cohen, J.P. Quadrat and G.J. Olsder, he contributed to the development of an algebraic theory for the dynamics of networks, the so-called (max, plus) algebra. This impacted several fields of engineering (network calculus) and mathematics (tropical geometry). He is best known for his contributions to stochastic geometry. His results on the Poisson-Voronoi model, the Poisson-Shannon model, and the Poisson-Voronoi-Shannon model laid the foundations of the representation of communication networks as functionals of point processes in Euclidean space. This led to wireless stochastic geometry, initially developed with B. Blaszczyszyn, which is now commonly used in the communication network literature.
His current research interests are in stochastic geometry in high dimension, in the theory of stationary point processes, and in the mathematics of unimodular random graphs. == Honors and awards == Baccelli is a member of the French Academy of Sciences. He was inducted in 2005. He was awarded a Math+X chair by the Simons Foundation in 2012. He received an honorary doctorate of Heriot-Watt University, Edinburgh, in 2016, the ACM Sigmetrics Achievement Award, in 2014, and the Grand Prix France Telecom, of the French Academy of Sciences in 2002. In 2014, he was awarded both the Stephen O. Rice Prize and the 2014 Leonard G. Abraham Prize by the IEEE Communications Society. == References ==
Wikipedia:François Français#0
Cap-Haïtien (French: [kap a.isjɛ̃] ; Haitian Creole: Kap Ayisyen; "Haitian Cape"), typically spelled Cape Haitien in English, is a commune of about 400,000 people on the north coast of Haiti and capital of the department of Nord. Previously named Cap‑Français (Haitian Creole: Kap-Fransè; initially Cap-François Haitian Creole: Kap-Franswa) and Cap‑Henri (Haitian Creole: Kap-Enri) during the rule of Henri I, it was historically nicknamed the Paris of the Antilles, because of its wealth and sophistication, expressed through its architecture and artistic life. It was an important city during the colonial period, serving as the capital of the French Colony of Saint-Domingue from the city's formal foundation in 1711 until 1770 when the capital was moved to Port-au-Prince. After the Haitian Revolution, it became the capital of the Kingdom of Haiti under King Henri I until 1820. Cap-Haïtien's long history of independent thought was formed in part by its relative distance from Port-au-Prince, the barrier of mountains between it and the southern part of the country, and a history of large African populations. These contributed to making it a legendary incubator of independent movements since slavery times. For instance, from February 5–29, 2004, the city was taken over by militants who opposed the rule of the Haïtian president Jean-Bertrand Aristide. They eventually created enough political pressure to force him out of office and the country. Cap-Haïtien is near the historic Haitian town of Milot, which lies 19 kilometres (12 mi) to the southwest along a gravel road. Milot was Haiti's first capital under the self-proclaimed King Henri Christophe, who ascended to power in 1807, three years after Haiti had gained independence from France. He renamed Cap‑Français as Cap‑Henri. Milot is the site of his Sans-Souci Palace, wrecked by the 1842 earthquake. 
The Citadelle Laferrière, a massive stone fortress bristling with cannons atop a nearby mountain, is eight kilometres (5 mi) away. On clear days, its silhouette is visible from Cap‑Haïtien. The small Cap-Haïtien International Airport, located on the southeast edge of the city, is served by several small domestic airlines. It was patrolled by Chilean UN troops from the "O'Higgins Base" after the 2010 earthquake. Several hundred UN personnel, including nearby units from Nepal and Uruguay, were assigned to the city during the 2010–2017 United Nations Stabilization Mission in Haiti (MINUSTAH). The airport was the only functioning international airport in the country after the closure of the Toussaint Louverture International Airport in Tabarre due to gang violence in March 2024. Significant migration from the capital occurred during the Haitian crisis, putting strain on infrastructure and on the educational system. The destruction in 2020 of Shada 2 (a slum with 1,500 homes in the southern part of the city) was credited with disrupting gang activity in the former capital. == History == The island was occupied for thousands of years by cultures of indigenous peoples, who had migrated from present-day Central and South America. In the 16th century, Spanish explorers in the Caribbean began to colonize Hispaniola. They adopted the native Taíno name Guárico for the area that is today known as "Cap‑Haïtien". Due to the introduction of new infectious diseases, as well as poor treatment, the indigenous population rapidly declined. On the nearby coast Columbus founded his first community in the New World, the short-lived La Navidad. In 1975, researchers found near Cap‑Haïtien another of the first Spanish towns of Hispaniola: Puerto Real, founded in 1503. It was abandoned in 1578, and its ruins were not discovered until late in the twentieth century.
In 1670, during the French colonial period, Cap-Haïtien, or Cap-Français as the settlement was then known, was founded by a dozen colonist-adventurers under the command of Bertrand d'Ogeron. The French took over roughly a third of the island of Hispaniola from the Spanish in the early eighteenth century. They established large sugar cane plantations on the northern plains and imported tens of thousands of African slaves to work them. Cap‑Français became an important port city of the French colonial period and the colony's main commercial centre. It served as the capital of the French colony of Saint-Domingue from the city's formal founding in 1711 until 1770, when the capital was moved to Port-au-Prince on the west coast of the island. Two thirds of the 15,000 inhabitants in 1790 were enslaved people; the remaining third was made up of colonists (24%) and free people of colour (10%). After the slave revolution, this was the first capital of the Kingdom of Haiti under King Henri I, when the nation was split apart. The central area of the city lies between the Bay of Cap‑Haïtien to the east and nearby mountainsides, as well as the Acul Bay, to the west; these areas are increasingly dominated by flimsy urban slums. The streets are generally narrow and arranged in grids. As a legacy of the United States' occupation of Haiti from 1915 to 1934, Cap‑Haïtien's north–south streets were renamed with single letters (beginning with Rue A, a major avenue, and running to Rue Q), and its east–west streets with numbers from 1 to 26; the system is not followed outside the central city, where French names predominate. The historic city has numerous markets, churches, and low-rise apartment buildings (of three to four storeys), constructed primarily before and during the U.S. occupation. Much of the infrastructure is in need of repair. Many such buildings have balconies on the upper floors, which overlook the narrow streets below.
With people eating outside on the balconies, there is an intimate communal atmosphere during dinner hours. == Geography == The commune consists of three communal sections, namely: Bande du Nord, urban (part of the commune of Cap-Haïtien) and rural Haut du Cap, urban (part of the commune of Cap-Haïtien) and rural Petit Anse, urban (commune of Petit Anse) and rural == Economy == Cap-Haïtien is known as the nation's largest center of historic monuments and as such, it is a tourist destination. The bay, beaches and monuments have made it a resort and vacation destination for Haiti's upper classes, comparable to Pétion-Ville. Cap‑Haïtien has also attracted more international tourists at times, as it has been isolated from the political instability in the south of the island. It has a wealth of 19th-century architecture, which has been well preserved. During and after the Haitian Revolution, many craftsmen from then Cap‑Français who were free people of color fled to French-controlled New Orleans. In 1842 Cap-Haïtien was devastated by an earthquake and a resulting tsunami, and most of the reconstruction was influenced by the then globally popular French-style steel-frame architecture. As a result, New Orleans and Cap-Haïtien share many similarities in styles of architecture. Especially notable are the gingerbread houses lining the city's older streets. Since 2021, there have been significant electrical outages in Cap-Haïtien, due in large part to a lack of fuel. Those who can afford it have invested in solar energy. A power plant built in Caracol to provide electricity to the industrial park there reaches as far as Limonade, 30 minutes from downtown Cap-Haïtien. == Tourism == === Labadie and other beaches === The walled Labadie (or Labadee) beach resort compound is located ten kilometres (6 mi) to the city's northwest. It serves as a brief stopover for Royal Caribbean International (RCI) cruise ships. Major RCI cruise ships dock weekly at Labadie.
It is a private resort leased by RCI, which has generated the largest proportion of tourist revenue to Haiti since 1986. It employs 300 locals, allows another 200 to sell their wares on the premises, and pays the Haitian government US$6 per tourist. The resort is connected to Cap‑Haïtien by a mountainous, recently paved road. RCI has built a pier at Labadie, completed in late 2009, capable of servicing the largest luxury-class ships. Attractions include a Haitian market, numerous beaches, watersports, a water-oriented playground, and a zip-line. Cormier Plage is another beach on the way to Labadie, and there are also water taxis from Labadie to other beaches, such as Paradis beach. In addition, Belli Beach is a small sandy cove with boats and hotels. Labadie village can be visited from here. === Vertières === Vertières is the site of the Battle of Vertières, the last and defining battle of the Haitian Revolution. On November 18, 1803, the Haitian army led by Jean-Jacques Dessalines defeated a French colonial army led by the Comte de Rochambeau. The French withdrew their remaining 7,000 troops (many had died from yellow fever and other diseases), and in 1804, Dessalines' revolutionary government declared the independence of Haiti. The revolution had been underway, with some pauses, since the 1790s. In this last battle for independence, the rebel leader Capois La Mort survived the French bullets that nearly killed him. His horse was killed under him and his hat fell off, but he kept advancing on the French, yelling "En avant!" (Go forward!) to his men. He has become renowned as a hero of the revolution. The 18th of November has since been widely celebrated in Haiti as a Day of Army and Victory.
=== Citadelle Henry and Sans-Souci Palace === The Citadelle Laferrière, also known as Citadelle Henry, or simply the Citadelle, is a large mountaintop fortress located approximately 27 kilometres (17 mi) south of the city of Cap‑Haïtien and eight kilometres (5 mi) beyond the town of Milot. It is the largest fortress in the Americas, and was listed by UNESCO as a World Heritage Site in 1982, along with the nearby Sans-Souci Palace. The Citadel was built by Henry Christophe, a leader during the Haitian slave rebellion and self-declared King of Northern Haiti, after the country gained its independence from France in 1804. The remains of his Sans-Souci Palace were damaged in the 1842 earthquake. === Bois Caïman === Bois Caïman (Haitian Creole: Bwa Kayiman), three kilometres (2 mi) south of road RN 1, is the place where Vodou rites were performed under a tree at the beginning of the slave revolution. For decades, maroons had been terrorizing slaveholders on the northern plains by poisoning their food and water. Makandal is the legendary (and perhaps historical) figure associated with the growing resistance movement. By the 1750s, he had organized the maroons, as well as many people enslaved on plantations, into a secret army. Makandal was executed (or, by some accounts, disappeared) in 1758, but the resistance movement grew. At Bois Caïman, a maroon leader named Dutty Boukman secretly held the first mass antislavery meeting on August 14, 1791. At this meeting, a Vodou ceremony was performed, and all those present swore to die rather than endure the continuation of slavery on the island. Following the ritual led by Boukman and a mambo named Cécile Fatiman, the insurrection started on the night of August 22–23, 1791. Boukman was killed in an ambush soon after the revolution began.
Jean-François was the next leader, after Dutty Boukman, of the slave uprising, the Haitian equivalent of the storming of the Bastille in the French Revolution. Slaves burned the plantations and cane fields, and massacred French colonists across the northern plains. They also attacked Cap-Français and some of the free people of color. Eventually the revolution gained the independence of Haiti from France and freedom for the slaves. The site of Dutty Boukman's ceremony is marked by a ficus tree. Adjoining it is a colonial well, which is credited with mystic powers. === Morne Rouge === Morne Rouge is eight kilometres (5 mi) to the south of Cap-Haïtien. It is the site of the sugar plantation known as "Habitation Le Normand de Mezy", home to several of the slaves who led the rebellion against the French. == Disasters == === 1842 Cap-Haïtien earthquake === On 7 May 1842, an earthquake destroyed most of the city and other towns in the north of Haiti and the neighboring Dominican Republic. Among the buildings destroyed or significantly damaged was the Sans-Souci Palace. Ten thousand people were killed in the earthquake. Its magnitude is estimated at 8.1 on the Richter scale. === 2010 Haiti earthquake === In the wake of the 2010 Haiti earthquake, which destroyed port facilities in Port-au-Prince, the Port international du Cap-Haïtien was used to deliver relief supplies by ship. As the city's infrastructure suffered little damage, numerous businessmen and many residents moved there from Port-au-Prince. The airport has been patrolled by Chilean UN troops since the 2010 earthquake, and several hundred UN personnel have been assigned to the city as part of the ongoing United Nations Stabilization Mission in Haiti (MINUSTAH). They are working on recovery throughout the island. After the earthquake, the port at Labadee was demolished and the pier enlarged and completely re-paved with concrete, which now allows larger cruise ships to dock, rather than tendering passengers to shore.
=== Cap-Haïtien fuel tanker explosion === On 14 December 2021, over 75 people were killed when a fuel tank truck overturned and later exploded in the Samari neighborhood of Cap-Haïtien. == Transportation == === Airports === Cap-Haïtien is served by the Cap-Haïtien International Airport (CAP), Haiti's second-busiest airport. It was a hub for Salsa d'Haiti prior to that airline's cessation of operations in 2013. American Airlines operated international flights to CAP for a number of years, but canceled its last connection in July 2020, after the COVID-19 pandemic significantly reduced passenger demand. American Airlines was the last major US carrier to provide service to CAP, and thereby to northern Haiti; from July 2020, Cap-Haïtien was accessible by air only through limited flights from Port-au-Prince's Toussaint Louverture International Airport. Spirit Airlines, which had previously canceled its service in 2019 due to political unrest and low demand, announced in October 2020 that it would resume limited service to CAP in December of the same year. === Seaport === The Port international du Cap-Haïtien is Cap-Haïtien's main seaport. USAID financed $24 million of works to renovate the port beginning in May 2024. === Roads === The Route Nationale 1 connects Cap-Haïtien with the Haitian capital city Port-au-Prince via the cities of Saint-Marc and Gonaïves. The Route Nationale 3 also connects Cap-Haïtien with Port-au-Prince, via the Central Plateau and the cities of Mirebalais and Hinche. Cap-Haïtien has one of the best street grids in Haiti, with its north–south streets named with single letters (beginning with Rue A, a major avenue) and its east–west streets numbered. The Boulevard du Cap-Haïtien (also called the Boulevard Carenage) is Cap‑Haïtien's main boulevard; it runs along the Atlantic Ocean in the northern part of the city. === Public transportation === Cap-Haïtien is served by tap taps and local taxis or motorcycles.
== Health == Cap-Haïtien is served by a teaching hospital, the Hôpital Universitaire Justinien. == Education == A union of four Catholic private schools has been present for two decades in Cap‑Haïtien. They offer higher-level grades, equivalent to the lycées that feed the écoles normales supérieures in France. They have high standards of academic excellence and selectivity in admissions, and their students generally come from the social and economic elite. Also, the Lycée Philippe Guerrier, built in 1844 by the Haitian president Philippe Guerrier, has been a fountain of knowledge for more than a century. Collège Notre-Dame du Perpétuel Secours des Pères de Sainte-Croix Collège Regina Assumpta des Sœurs de Sainte-Croix École des Frères de l'Instruction Chrétienne École Saint Joseph de Cluny des Sœurs Anne-Marie Javouhey Lycée Philippe Guerrier, built by the Haitian president Philippe Guerrier in 1844. === Universities === Cap-Haïtien is home to the Cap-Haïtien Faculty of Law, Economics, and Management, and to the Public University of the North in Cap-Haïtien (UPNCH). The new Université Roi Henry Christophe is nearby in Limonade. == Sport == Cap-Haïtien has the Parc Saint-Victor, home of three major league teams: Football Inter Club Association, AS Capoise, and Real du Cap. == Notable natives == Pierre Nord Alexis (1820–1910), President of Haiti, 1902–1908. Tancrède Auguste (1856–1913), the 20th President of Haiti, 1912–1913. Étienne Chavannes (born 1939), a Haitian painter of crowd scenes Tyrone Edmond, Haitian-born model.
Arly Larivière, Haitian Kompa musician and composer Yolette Lévy (1938–2018), Haitian-born Canadian politician and activist Lewis Page Mercier (1820–1875), educator Alfred Auguste Nemours (1883–1955), military historian and diplomat Philomé Obin (1892–1986), artist and painter Leonel Saint-Preux (born 1985), footballer who played 41 games for Haiti Bruny Surin (born 1967), track and field runner and Olympic medalist, lives in Canada == See also == Battle of Cap-Français == Notes == == References == Dubois, Laurent. Haiti: The Aftershocks of History. New York: Metropolitan Books, 2012. Popkin, Jeremy D. Facing Racial Revolution: Eyewitness Accounts of the Haitian Insurrection. Chicago: University of Chicago Press, 2007. Sepinwall, Alyssa Goldstein. Haitian History: New Perspectives. New York: Routledge, 2012. == External links == Cap-Haïtien travel guide from Wikivoyage Short article in the Columbia Encyclopedia The Louverture Project: Cap Haïtien, an article from a Haitian history wiki. Konbit Sante's page on Cap-Haitien. Konbit Sante is a non-denominational NGO.
Wikipedia:François Viète#0
François Viète (French: [fʁɑ̃swa vjɛt]; 1540 – 23 February 1603), known in Latin as Franciscus Vieta, was a French mathematician whose work on new algebra was an important step towards modern algebra, due to his innovative use of letters as parameters in equations. He was a lawyer by trade, and served as a privy councillor to both Henry III and Henry IV of France. == Biography == === Early life and education === Viète was born at Fontenay-le-Comte in present-day Vendée. His grandfather was a merchant from La Rochelle. His father, Etienne Viète, was an attorney in Fontenay-le-Comte and a notary in Le Busseau. His mother was the aunt of Barnabé Brisson, a magistrate and the first president of parliament during the ascendancy of the Catholic League of France. Viète went to a Franciscan school and in 1558 studied law at Poitiers, graduating as a Bachelor of Laws in 1559. A year later, he began his career as an attorney in his native town. From the outset, he was entrusted with some major cases, including the settlement of rent in Poitou for the widow of King Francis I of France and looking after the interests of Mary, Queen of Scots. === Serving Parthenay === In 1564, Viète entered the service of Antoinette d'Aubeterre, Lady Soubise, wife of Jean V de Parthenay-Soubise, one of the main Huguenot military leaders, and accompanied him to Lyon to collect documents about his heroic defence of that city against the troops of Jacques of Savoy, 2nd Duke of Nemours just the year before. The same year, at Parc-Soubise, in the commune of Mouchamps in present-day Vendée, Viète became the tutor of Catherine de Parthenay, Soubise's twelve-year-old daughter. He taught her science and mathematics and wrote for her numerous treatises on astronomy and trigonometry, some of which have survived. 
In these treatises, Viète used decimal numbers (twenty years before Stevin's paper) and he also noted the elliptic orbit of the planets, forty years before Kepler and twenty years before Giordano Bruno's death. Jean V de Parthenay presented him to King Charles IX of France. Viète wrote a genealogy of the Parthenay family and, following the death of Jean V de Parthenay-Soubise in 1566, his biography. In 1568, Antoinette, Lady Soubise, married her daughter Catherine to Baron Charles de Quellenec, and Viète went with Lady Soubise to La Rochelle, where he mixed with the highest Calvinist aristocracy, leaders like Coligny and Condé and Queen Jeanne d'Albret of Navarre and her son, Henry of Navarre, the future Henry IV of France. In 1570, he refused to represent the Soubise ladies in their infamous lawsuit against the Baron de Quellenec, in which they claimed the Baron was unable (or unwilling) to provide an heir. === First steps in Paris === In 1571, he enrolled as an attorney in Paris, and continued to visit his student Catherine. He regularly lived in Fontenay-le-Comte, where he took on some municipal functions. He began publishing his Universalium inspectionum ad Canonem mathematicum liber singularis and wrote new mathematical research by night or during periods of leisure. He was known to dwell on any one question for up to three days, his elbow on the desk, feeding himself without changing position (according to his friend Jacques de Thou). In 1572, Viète was in Paris during the St. Bartholomew's Day massacre. That night, Baron de Quellenec was killed after having tried to save Admiral Coligny the previous night. The same year, Viète met Françoise de Rohan, Lady of Garnache, and became her adviser against Jacques, Duke of Nemours. In 1573, he became a councillor of the Parlement of Rennes, and two years later, he obtained the agreement of Antoinette d'Aubeterre for the marriage of Catherine of Parthenay to Duke René de Rohan, Françoise's brother.
In 1576, Henri, duc de Rohan, took him under his special protection, recommending him in 1580 as "maître des requêtes". In 1579, Viète finished the printing of his Universalium inspectionum (Mettayer publisher), published as an appendix to a book of two trigonometric tables (Canon mathematicus, seu ad triangula, the "canon" referred to by the title of his Universalium inspectionum, and Canonion triangulorum laterum rationalium). A year later, he was appointed maître des requêtes to the parliament of Paris, committed to serving the king. That same year, his success in the trial between the Duke of Nemours and Françoise de Rohan, to the benefit of the latter, earned him the resentment of the tenacious Catholic League. === Exile in Fontenay === Between 1583 and 1585, the League persuaded King Henry III to dismiss Viète from office, Viète having been accused of sympathy with the Protestant cause. Henry of Navarre, at Rohan's instigation, addressed two letters to King Henry III of France, on March 3 and April 26, 1585, in an attempt to obtain Viète's restoration to his former office, but he failed. Viète retired to Fontenay and Beauvoir-sur-Mer, with François de Rohan. He spent four years devoted to mathematics, writing his New Algebra (1591). === Code-breaker to two kings === In 1589, Henry III took refuge in Blois. He commanded the royal officials to be at Tours before 15 April 1589. Viète was one of the first who came back to Tours. He deciphered the secret letters of the Catholic League and other enemies of the king. Later, he had arguments with the classical scholar Joseph Juste Scaliger, over whom Viète triumphed in 1590. After the death of Henry III, Viète became a privy councillor to Henry of Navarre, now Henry IV of France.: 75–77 He was appreciated by the king, who admired his mathematical talents. Viète was given the position of councillor of the parlement at Tours.
In 1590, Viète broke the key to a Spanish cipher, consisting of more than 500 characters, and this meant that all dispatches in that language which fell into the hands of the French could be easily read. Henry IV published a letter from Commander Moreo to the King of Spain. The contents of this letter, read by Viète, revealed that the head of the League in France, Charles, Duke of Mayenne, planned to become king in place of Henry IV. This publication led to the settlement of the Wars of Religion. The King of Spain accused Viète of having used magical powers. In 1593, Viète published his arguments against Scaliger. Beginning in 1594, he was assigned exclusively to deciphering the enemy's secret codes. === Gregorian calendar === In 1582, Pope Gregory XIII published his bull Inter gravissimas and ordered Catholic kings to comply with the change from the Julian calendar, based on the calculations of the Calabrian doctor Aloysius Lilius, aka Luigi Lilio or Luigi Giglio. His work was resumed, after his death, by the scientific adviser to the Pope, Christopher Clavius. Viète accused Clavius, in a series of pamphlets (1600), of introducing corrections and intermediate days in an arbitrary manner, and of misunderstanding the meaning of the works of his predecessor, particularly in the calculation of the lunar cycle. Viète gave a new timetable, which Clavius cleverly refuted, after Viète's death, in his Explicatio (1603). It is said that Viète was wrong. Without doubt, he believed himself to be a kind of "King of Times", as the historian of mathematics Dhombres claimed. It is true that Viète held Clavius in low esteem, as evidenced by De Thou: He said that Clavius was very clever to explain the principles of mathematics, that he heard with great clarity what the authors had invented, and wrote various treatises compiling what had been written before him, without quoting its references. Thus, his works presented in better order what had been scattered and confused in earlier writings.
=== The Adriaan van Roomen problem === In 1596, Scaliger resumed his attacks from the University of Leyden. Viète replied definitively the following year. In March of that same year, Adriaan van Roomen sought the resolution, by any of Europe's top mathematicians, of a polynomial equation of degree 45. King Henri IV received a snub from the Dutch ambassador, who claimed that there was no mathematician in France, on the grounds that the Dutch mathematician Adriaan van Roomen had not asked any Frenchman to solve his problem. Viète came, saw the problem, and, after leaning on a window for a few minutes, solved it. It was the equation relating sin(x) and sin(x/45). He resolved this at once, and said he was able to give at the same time (actually the next day) the solutions to the other 22 problems to the ambassador. "Ut legit, ut solvit" (as he read it, he solved it), he later said. Further, he sent a new problem back to Van Roomen: the restoration, by Euclidean tools (ruler and compass), of the lost solution to the problem first set by Apollonius of Perga. Van Roomen could not overcome that problem without resorting to a trick (see detail below). === Final years === In 1598, Viète was granted special leave. Henry IV, however, charged him to end the revolt of the Notaries, whom the King had ordered to pay back their fees. Sick and exhausted by work, he left the King's service in December 1602 and received 20,000 écus, which were found at his bedside after his death. A few weeks before his death, he wrote a final thesis on issues of cryptography, an essay which made obsolete all the encryption methods of the time. He died on 23 February 1603, as De Thou wrote, leaving two daughters, Jeanne, whose mother was Barbe Cottereau, and Suzanne, whose mother was Julienne Leclerc. Jeanne, the eldest, died in 1628, having married Jean Gabriau, a councillor of the parliament of Brittany. Suzanne died in January 1618 in Paris. The cause of Viète's death is unknown.
Alexander Anderson, student of Viète and publisher of his scientific writings, speaks of a "praeceps et immaturum autoris fatum" (meeting an untimely end). == Work and thought == === New algebra === ==== Background ==== At the end of the 16th century, mathematics was placed under the dual aegis of Greek geometry and the Arabic procedures for resolution. At the time of Viète, algebra therefore oscillated between arithmetic, which gave the appearance of a list of rules; and geometry, which seemed more rigorous. Meanwhile, Italian mathematicians Luca Pacioli, Scipione del Ferro, Niccolò Fontana Tartaglia, Gerolamo Cardano, Lodovico Ferrari, and especially Raphael Bombelli (1560) all developed techniques for solving equations of the third degree, which heralded a new era. On the other hand, from the German school of Coss, the Welsh mathematician Robert Recorde (1550) and the Dutchman Simon Stevin (1581) brought an early algebraic notation: the use of decimals and exponents. However, complex numbers remained at best a philosophical way of thinking. Descartes, almost a century after their invention, used them as imaginary numbers. Only positive solutions were considered and using geometrical proof was common. The mathematician's task was in fact twofold. It was necessary to produce algebra in a more geometrical way (i.e. to give it a rigorous foundation), and it was also necessary to make geometry more algebraic, allowing for analytical calculation in the plane. Viète and Descartes solved this dual task in a double revolution. ==== Viète's symbolic algebra ==== Firstly, Viète gave algebra a foundation as strong as that of geometry. He then ended the algebra of procedures (al-Jabr and al-Muqabala), creating the first symbolic algebra, and claiming that with it, all problems could be solved (nullum non problema solvere). 
In his dedication of the Isagoge to Catherine de Parthenay, Viète wrote: "These things which are new are wont in the beginning to be set forth rudely and formlessly and must then be polished and perfected in succeeding centuries. Behold, the art which I present is new, but in truth so old, so spoiled and defiled by the barbarians, that I considered it necessary, in order to introduce an entirely new form into it, to think out and publish a new vocabulary, having gotten rid of all its pseudo-technical terms..." Viète did not know the "multiplied" notation (given by William Oughtred in 1631) or the symbol of equality, =, an absence which is more striking because Robert Recorde had used the present symbol for this purpose since 1557, and Guilielmus Xylander had used parallel vertical lines since 1575. Note also the use by Rafael Bombelli, in 1572, of a 'u'-like symbol with a number above it for an unknown raised to a given power. Viète had neither much time nor students able to brilliantly illustrate his method. He took years in publishing his work (he was very meticulous), and most importantly, he made a very specific choice to separate the unknown variables, using consonants for parameters and vowels for unknowns. In this notation he perhaps followed some older contemporaries, such as Petrus Ramus, who designated the points in geometrical figures by vowels, making use of consonants, R, S, T, etc., only when these were exhausted. This choice proved unpopular with future mathematicians; Descartes, among others, preferred the first letters of the alphabet to designate the parameters and the last letters for the unknowns. Viète also remained a prisoner of his time in several respects. First, he was heir of Ramus and did not address lengths as numbers. His writing kept track of homogeneity, which did not simplify its reading. He failed to recognize the complex numbers of Bombelli and needed to double-check his algebraic answers through geometrical construction.
Although he was fully aware that his new algebra was sufficient to give a solution, this concession tainted his reputation. However, Viète created many innovations: the binomial formula, which would be taken up by Pascal and Newton, and the formulas relating the coefficients of a polynomial to sums and products of its roots, now called Viète's formulas. ==== Geometric algebra ==== Viète was well skilled in most modern artifices, aiming at the simplification of equations by the substitution of new quantities having a certain connection with the primitive unknown quantities. Another of his works, Recensio canonica effectionum geometricarum, bears a modern stamp, being what was later called algebraic geometry—a collection of precepts on how to construct algebraic expressions with the use of ruler and compass only. While these writings were generally intelligible, and therefore of the greatest didactic importance, the principle of homogeneity, first enunciated by Viète, was so far in advance of his times that most readers seem to have passed it over. That principle had been made use of by the Greek authors of the classic age; but of later mathematicians only Hero, Diophantus, etc., ventured to regard lines and surfaces as mere numbers that could be joined to give a new number, their sum. The study of such sums, found in the works of Diophantus, may have prompted Viète to lay down the principle that quantities occurring in an equation ought to be homogeneous, all of them lines, or surfaces, or solids, or supersolids — an equation between mere numbers being inadmissible. During the centuries that have elapsed between Viète's day and the present, several changes of opinion have taken place on this subject. Modern mathematicians like to make homogeneous such equations as are not so from the beginning, in order to get values of a symmetrical shape. Viète himself did not see that far; nevertheless, he indirectly suggested the thought.
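Viète's formulas, mentioned above, relate the coefficients of a monic polynomial to the sums and products of its roots: for a monic cubic x³ + bx² + cx + d with roots r₁, r₂, r₃, they give r₁ + r₂ + r₃ = −b, r₁r₂ + r₁r₃ + r₂r₃ = c, and r₁r₂r₃ = −d. A minimal Python check (the example cubic and its roots are illustrative choices, not taken from the source):

```python
import math

# Monic cubic with roots 1, 2, 3:
#   (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
roots = [1.0, 2.0, 3.0]
b, c, d = -6.0, 11.0, -6.0  # coefficients of x^2, x^1, x^0

# Viète's formulas for x^3 + b x^2 + c x + d:
assert math.isclose(sum(roots), -b)                    # r1 + r2 + r3 = -b
assert math.isclose(roots[0] * roots[1] + roots[0] * roots[2]
                    + roots[1] * roots[2], c)          # pairwise products = c
assert math.isclose(roots[0] * roots[1] * roots[2], -d)  # r1 r2 r3 = -d
print("Viète's formulas hold for this cubic")
```

The same relations hold in any degree, which is why expanding a product of linear factors immediately yields the elementary symmetric functions of the roots as coefficients.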
He also conceived methods for the general resolution of equations of the second, third and fourth degrees different from those of Scipione del Ferro and Lodovico Ferrari, with which he had not been acquainted. He devised an approximate numerical solution of equations of the second and third degrees, wherein Leonardo of Pisa must have preceded him, but by a method which was completely lost. Above all, Viète was the first mathematician who introduced notations for the problem (and not just for the unknowns). As a result, his algebra was no longer limited to the statement of rules, but relied on an efficient computational algebra, in which the operations act on the letters and the results can be obtained at the end of the calculations by a simple replacement. This approach, which is the heart of the contemporary algebraic method, was a fundamental step in the development of mathematics. With this, Viète marked the end of medieval algebra (from Al-Khwarizmi to Stevin) and opened the modern period. === The logic of species === Being wealthy, Viète began to publish at his own expense, for a few friends and scholars in almost every country of Europe, the systematic presentation of his mathematical theory, which he called "species logistic" (from species: symbol) or the art of calculation on symbols (1591). He described in three stages how to proceed in solving a problem: As a first step, he summarized the problem in the form of an equation. Viète called this stage the Zetetic. It denotes the known quantities by consonants (B, D, etc.) and the unknown quantities by vowels (A, E, etc.) In a second step, he made an analysis. He called this stage the Poristic. Here mathematicians must discuss the equation and solve it. It gives the characteristic of the problem, the porisma (corollary), from which we can move to the next step.
In the last step, the exegetical analysis, he returned to the initial problem, which presents a solution through a geometrical or numerical construction based on the porisma. Among the problems addressed by Viète with this method is the complete resolution of the quadratic equations of the form X 2 + X b = c {\displaystyle X^{2}+Xb=c} and third-degree equations of the form X 3 + a X = b {\displaystyle X^{3}+aX=b} (Viète reduced it to quadratic equations). He knew the connection between the positive roots of an equation (which, in his day, were alone thought of as roots) and the coefficients of the different powers of the unknown quantity (see Viète's formulas and their application on quadratic equations). He discovered the formula for deriving the sine of a multiple angle, knowing that of the simple angle with due regard to the periodicity of sines. This formula must have been known to Viète in 1593. === Viète's formula === In 1593, based on geometrical considerations and through trigonometric calculations he had perfectly mastered, he discovered the first infinite product in the history of mathematics by giving an expression of π, now known as Viète's formula: π = 2 × 2 2 × 2 2 + 2 × 2 2 + 2 + 2 × 2 2 + 2 + 2 + 2 × ⋯ {\displaystyle \pi =2\times {\frac {2}{\sqrt {2}}}\times {\frac {2}{\sqrt {2+{\sqrt {2}}}}}\times {\frac {2}{\sqrt {2+{\sqrt {2+{\sqrt {2}}}}}}}\times {\frac {2}{\sqrt {2+{\sqrt {2+{\sqrt {2+{\sqrt {2}}}}}}}}}\times \cdots } He provided 10 decimal places of π by applying the Archimedes method to a polygon with 6 × 2¹⁶ = 393,216 sides. === Adriaan van Roomen's challenge and the problem of Apollonius === This famous controversy is told by Tallemant des Réaux in these terms (46th story from the first volume of Les Historiettes.
Mémoires pour servir à l’histoire du XVIIe siècle): "In the times of Henri the fourth, a Dutchman called Adrianus Romanus, a learned mathematician, but not so good as he believed, published a treatise in which he proposed a question to all the mathematicians of Europe, but did not ask any Frenchman. Shortly after, a state ambassador came to the King at Fontainebleau. The King took pleasure in showing him all the sights, and he said people there were excellent in every profession in his kingdom. 'But, Sire,' said the ambassador, 'you have no mathematician, according to Adrianus Romanus, who didn't mention any in his catalog.' 'Yes, we have,' said the King. 'I have an excellent man. Go and seek Monsieur Viette,' he ordered. Vieta, who was at Fontainebleau, came at once. The ambassador sent for the book from Adrianus Romanus and showed the proposal to Vieta, who had arrived in the gallery, and before the King came out, he had already written two solutions with a pencil. By the evening he had sent many other solutions to the ambassador." When, in 1595, Viète published his response to the problem set by Adriaan van Roomen, he proposed finding the resolution of the old problem of Apollonius, namely to find a circle tangent to three given circles. Van Roomen proposed a solution using a hyperbola, with which Viète did not agree, as he was hoping for a solution using Euclidean tools. Viète published his own solution in 1600 in his work Apollonius Gallus. In this paper, Viète made use of the center of similitude of two circles. His friend De Thou said that Adriaan van Roomen immediately left the University of Würzburg, saddled his horse and went to Fontenay-le-Comte, where Viète lived. According to De Thou, he stayed a month with him, and learned the methods of the new algebra. The two men became friends and Viète paid all van Roomen's expenses before his return to Würzburg. 
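Viète's infinite product for π, quoted in the previous section, converges quickly and is easy to evaluate numerically using the nested-radical recurrence a₁ = √2, aₖ₊₁ = √(2 + aₖ), each factor of the product being aₖ/2. A minimal Python sketch (the 30-term truncation is an illustrative choice, not from the source):

```python
import math

def viete_pi(n_terms: int) -> float:
    """Approximate pi with Viète's infinite product:
    2/pi = (sqrt(2)/2) * (sqrt(2 + sqrt(2))/2) * ...
    using the nested radical a_1 = sqrt(2), a_{k+1} = sqrt(2 + a_k)."""
    product = 1.0
    a = 0.0
    for _ in range(n_terms):
        a = math.sqrt(2.0 + a)
        product *= a / 2.0
    return 2.0 / product

print(viete_pi(30))  # approaches math.pi; error well below 1e-9 at 30 terms
```

Each additional factor roughly doubles the precision in bits, mirroring Viète's own approach of repeatedly doubling the number of sides of an inscribed polygon.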
This resolution had an almost immediate impact in Europe, and Viète earned the admiration of many mathematicians over the centuries. Viète did not deal with the special cases (coincident circles, tangent circles, etc.), but recognized that the number of solutions depends on the relative position of the three circles, and outlined the ten resulting situations. Descartes completed (in 1643) the theorem of the three circles of Apollonius, leading to a quadratic equation in 87 terms, each of which is a product of six factors (which, with this method, makes the actual construction humanly impossible). === Religious and political beliefs === Viète was accused of Protestantism by the Catholic League, but he was not a Huguenot. His father was one, according to Dhombres. Indifferent in religious matters, he did not adopt the Calvinist faith of Parthenay, nor that of his other protectors, the Rohan family. His admission to the parliament of Rennes proved the opposite: at his reception as a member of the court of Brittany, on 6 April 1574, he read in public a statement of Catholic faith. Nevertheless, Viète defended and protected Protestants his whole life, and suffered, in turn, the wrath of the League. It seems that for him, the stability of the state was to be preserved and that, under this requirement, the King's religion did not matter. At that time, such people were called "Politiques". Furthermore, at his death, he did not want to confess his sins. A friend had to convince him that his own daughter would not find a husband were he to refuse the sacraments of the Catholic Church. Whether Viète was an atheist or not is a matter of debate. === Publications === Chronological list Between 1564 and 1568, Viète prepared for his student, Catherine de Parthenay, some textbooks of astronomy and trigonometry and a treatise that was never published: Harmonicon coeleste.
In 1579, the trigonometric tables Canon mathematicus, seu ad triangula, published together with a table of rational-sided triangles, Canonion triangulorum laterum rationalium, and a book of trigonometry, Universalium inspectionum ad canonem mathematicum – which he published at his own expense and with great printing difficulties. This text contains many formulas on the sine and cosine and is unusual in using decimal numbers. The trigonometric tables here exceeded those of Regiomontanus (De triangulis omnimodis, 1533) and Rheticus (1543, annexed to De revolutionibus of Copernicus). In 1589, Deschiffrement d'une lettre escripte par le Commandeur Moreo au Roy d'Espaigne son maître. In 1590, Deschiffrement d'une lettre escripte par le Commandeur Moreo au Roy d'Espaigne son maître (Decipherment of a letter written by Commander Moreo to the King of Spain, his master), Tours: Mettayer. In 1591: In artem analyticem isagoge (Introduction to the art of analysis), also known as Algebra Nova (New Algebra). Tours: Mettayer, in 9 folio; the first edition of the Isagoge. Zeteticorum libri quinque. Tours: Mettayer, in 24 folio; the five books of Zetetics, a collection of problems from Diophantus solved using the analytical art. Between 1591 and 1593, Effectionum geometricarum canonica recensio. Tours: Mettayer, in 7 folio. In 1593: Vietae Supplementum geometriae. Tours: Francisci, in 21 folio. Francisci Vietae Variorum de rebus mathematicis responsorum liber VIII. Tours: Mettayer, in 49 folio; about the challenges of Scaliger. Variorum de rebus mathematicis responsorum liber VIII; the "Eighth Book of Varied Responses", in which he discusses the problems of the trisection of the angle (which he acknowledges is bound to an equation of the third degree), of squaring the circle, of building the regular heptagon, etc. In 1594, Munimen adversus nova cyclometrica. Paris: Mettayer, in quarto, 8 folio; again, a response against Scaliger.
In 1595, Ad problema quod omnibus mathematicis totius orbis construendum proposuit Adrianus Romanus, Francisci Vietae responsum. Paris: Mettayer, in quarto, 16 folio; about the Adriaan van Roomen problem. In 1600: De numerosa potestatum ad exegesim resolutione. Paris: Le Clerc, in 36 folio; a work that provided the means for extracting roots and solving equations of degree at most 6. Francisci Vietae Apollonius Gallus. Paris: Le Clerc, in quarto, 13 folio; where he referred to himself as the French Apollonius. Between 1600 and 1602: Fontenaeensis libellorum supplicum in Regia magistri relatio Kalendarii vere Gregoriani ad ecclesiasticos doctores exhibita Pontifici Maximi Clementi VIII. Paris: Mettayer, in quarto, 40 folio. Francisci Vietae adversus Christophorum Clavium expostulatio. Paris: Mettayer, in quarto, 8 folio; his theses against Clavius. Posthumous publications 1612: Supplementum Apollonii Galli edited by Marin Ghetaldi. Supplementum Apollonii Redivivi sive analysis problematis hactenus desiderati ad Apollonii Pergaei doctrinam a Marino Ghetaldo Patritio Regusino hujusque non ita pridem institutam edited by Alexander Anderson. 1615: Ad Angularum Sectionem Analytica Theoremata F. Vieta primum excogitata at absque ulla demonstratione ad nos transmissa, iam tandem demonstrationibus confirmata edited by Alexander Anderson. Pro Zetetico Apolloniani problematis a se jam pridem edito in supplemento Apollonii Redivivi; in qua ad ea quae obiter inibi perstrinxit Ghetaldus respondetur edited by Alexander Anderson. Francisci Vietae Fontenaeensis, De aequationum recognitione et emendatione tractatus duo per Alexandrum Andersonum edited by Alexander Anderson. 1617: Animadversionis in Franciscum Vietam, a Clemente Cyriaco nuper editae brevis diakrisis edited by Alexander Anderson. 1619: Exercitationum Mathematicarum Decas Prima edited by Alexander Anderson. 1631: In artem analyticem isagoge.
Eiusdem ad logisticem speciosam notae priores, nunc primum in lucem editae. Paris: Baudry, in 12 folio; the second edition of the Isagoge, including the posthumously published Ad logisticem speciosam notae priores. == Reception and influence == During the ascendancy of the Catholic League, Viète's secretary was Nathaniel Tarporley, perhaps one of the more interesting and enigmatic mathematicians of 16th-century England. When he returned to London, Tarporley became one of the trusted friends of Thomas Harriot. Apart from Catherine de Parthenay, Viète's other notable students were: the French mathematician Jacques Aleaume, from Orléans; Marino Ghetaldi of Ragusa; Jean de Beaugrand; and the Scottish mathematician Alexander Anderson. They illustrated his theories by publishing his works and continuing his methods. At his death, his heirs gave his manuscripts to Peter Aleaume. We give here the most important posthumous editions: In 1612: Supplementum Apollonii Galli of Marino Ghetaldi. From 1615 to 1619: Animadversionis in Franciscum Vietam, a Clemente Cyriaco nuper editae brevis diakrisis by Alexander Anderson; Francisci Vietae Fontenaeensis de aequationum recognitione et emendatione tractatus duo per Alexandrum Andersonum. Paris, Laquehay, 1615, in 4, 135 p. The death of Alexander Anderson unfortunately halted the publication. In 1630, an Introduction en l'art analytic ou nouvelle algèbre (Introduction to the analytic art, or new algebra), translated into French with commentary by the mathematician J. L. Sieur de Vaulezard. Paris, Jacquin. The Five Books of François Viète's Zetetics (Les cinq livres des zététiques de François Viette), put into French, with commentary and additions, by the mathematician J. L. Sieur de Vaulezard. Paris, Jacquin, p. 219. The same year, there appeared an Isagoge by Antoine Vasset (a pseudonym of Claude Hardy), and the following year, a translation into Latin by Beaugrand, which Descartes reportedly received.
In 1648, the corpus of Viète's mathematical works was printed by Frans van Schooten, professor at Leiden University (Elzevier presses). He was assisted by Jacques Golius and Mersenne. The English mathematicians Thomas Harriot and Isaac Newton, the Dutch physicist Willebrord Snellius, and the French mathematicians Pierre de Fermat and Blaise Pascal all used Viète's symbolism. About 1770, the Italian mathematician Targioni Tozzetti found Viète's Harmonicon coeleste in Florence. Viète had written in it: Describat Planeta Ellipsim ad motum anomaliae ad Terram. (This shows that he adopted Copernicus's system and understood, before Kepler, the elliptical form of the orbits of the planets.) In 1841, the French mathematician Michel Chasles was one of the first to reevaluate his role in the development of modern algebra. In 1847, a letter from François Arago, perpetual secretary of the Academy of Sciences (Paris), announced his intention to write a biography of François Viète. Between 1880 and 1890, the polytechnician Frédéric Ritter, based in Fontenay-le-Comte, was the first translator of the works of François Viète and, with Benjamin Fillon, his first contemporary biographer. === Descartes' views on Viète === Thirty-four years after the death of Viète, the philosopher René Descartes published his method and a book of geometry that changed the landscape of algebra and built on Viète's work, applying it to geometry and removing its requirement of homogeneity. Descartes, accused by Jean-Baptiste Chauveau, a former classmate from La Flèche, explained in a letter to Mersenne (February 1639) that he had never read those works. Descartes accepted Viète's view of mathematics as a study stressing the self-evidence of results, which Descartes implemented by translating symbolic algebra into geometric reasoning. Descartes adopted the term mathesis universalis, which he called an "already venerable term with a received usage", and which originated in van Roomen's book Mathesis Universalis.
"I have no knowledge of this surveyor, and I wonder what he meant by saying that we studied Viète's work together in Paris, for it is a book whose cover I cannot even remember having seen while I was in France." Elsewhere, Descartes said that Viète's notations were confusing and used unnecessary geometric justifications. In some letters, he showed that he understood the program of the Artem Analyticem Isagoge; in others, he shamelessly caricatured Viète's proposals. One of his biographers, Charles Adam, noted this contradiction: "These words are surprising, by the way, for he (Descartes) had just said a few lines earlier that he had tried to put in his geometry only what he believed 'was known neither by Vieta nor by anyone else'. So he was informed of what Viète knew; and he must have read his works previously." Current research has not shown the extent of the direct influence of the works of Viète on Descartes. This influence could have come through the works of Adriaan van Roomen or Jacques Aleaume at The Hague, or through the book by Jean de Beaugrand. In his letters to Mersenne, Descartes consciously minimized the originality and depth of the work of his predecessors. "I began," he says, "where Vieta finished." His views prevailed in the 17th century, and mathematicians gained a clear algebraic language free of the requirements of homogeneity. Many contemporary studies have restored the standing of Viète's work, showing that he had the double merit of introducing the first elements of literal calculation and of building a first axiomatic framework for algebra. Although Viète was not the first to propose denoting unknown quantities by letters (Jordanus Nemorarius had done so before), it would be simplistic to reduce his innovations to that idea alone; rather, he stands at the junction of the algebraic transformations made during the late sixteenth and early seventeenth centuries.
== See also == Vieta's formulas Michael Stifel Rafael Bombelli == Notes == == Bibliography == == Attribution == This article incorporates text from a publication now in the public domain: Cantor, Moritz (1911). "Vieta, François". In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 28 (11th ed.). Cambridge University Press. pp. 57–58. == External links == Literature by and about François Viète in the German National Library catalogue François Viète at Library of Congress O'Connor, John J.; Robertson, Edmund F., "François Viète", MacTutor History of Mathematics Archive, University of St Andrews New Algebra (1591) online Francois Viète: Father of Modern Algebraic Notation The Lawyer and the Gambler About Tarporley Site de Jean-Paul Guichard (in French) L'algèbre nouvelle (in French) "About the Harmonicon" (PDF). Archived from the original (PDF) on 2011-08-07. Retrieved 2009-06-18. (200 KB). (in French)
Wikipedia:François-Joseph Servois#0
François-Joseph Servois (French pronunciation: [fʁɑ̃swa ʒozɛf sɛʁvwa]; born 19 July 1767 in Mont-de-Laval, Doubs, France; died 17 April 1847 in Mont-de-Laval, Doubs, France) was a French priest, military officer and mathematician. His most notable contribution came with the publication of his Essai sur un nouveau mode d'exposition des principes du calcul différentiel (Essay on a New Method of Exposition of the Principles of Differential Calculus) in 1814, in which he first introduced the mathematical terms commutative and distributive. == Life == Servois was born on 19 July 1767 in Mont-de-Laval, France, to Jacques-Ignace Servois, a local merchant, and Jeanne-Marie Jolliet. Not much is known about his early life except that he had at least one sibling, a sister with whom he would eventually live after his retirement. He attended several religious schools in both Mont-de-Laval and Besançon with the intention of becoming a priest, and was ordained at Besançon near the beginning of the French Revolution. His life as a priest was short-lived, however: as revolutionary tensions in France escalated, he left the priesthood to join the French Army in 1793. He officially entered the École d'Artillerie (Artillery School) at Châlons-sur-Marne on 5 March 1794, and was commissioned as a second lieutenant in the First Foot Artillery Regiment by 13 November of that same year. During his time in the Army, Servois was actively involved in many battles, including the crossing of the Rhine, the Battle of Neuwied, and the Battle of Paris. It was during his leisure time in the army that he began to devote himself seriously to the study of mathematics. He suffered from poor health during his years as an officer, which led him to request a non-active military position as a professor of mathematics.
He gained the attention of Adrien-Marie Legendre with some of his work, and through Legendre's recommendation he was assigned to his first academic position, as a professor at the École d'Artillerie in Besançon, in July 1801. He would go on to teach at a number of different artillery schools throughout France, including those in Châlons-sur-Marne (March 1802 – December 1802), Metz (December 1802 – February 1808, 1815–1816), and La Fère (February 1808 – 1814, 1814–1815). == Work in mathematics == Like many of his colleagues who taught at the military schools in France, Servois closely followed developments in mathematics and sought to make original contributions to the subject. His first publication, Solutions peu connues de différents problèmes de géométrie pratique (Little-known Solutions to Various Problems in Practical Geometry), drew on notions of modern geometry and, informed by his military experience, applied them to practical problems. It was well received, and the prominent French mathematician Jean-Victor Poncelet considered it to be "a truly original work, notable for presenting the first applications of the theory of transversals to the geometry of the ruler or surveyor's staff, thus revealing the fruitfulness and utility of this theory". Servois presented several memoirs to the Académie des Sciences at this time, including one on the principles of differential calculus and the development of functions in series. He went on to publish papers in the Annales de mathématiques pures et appliquées, edited by his friend Joseph Diez Gergonne, in which he began to formalize his position on the foundations of calculus. As a disciple of Joseph-Louis Lagrange, he strongly believed that the structure of calculus should be based on power series as opposed to limits or infinitesimals.
In late 1814, he consolidated his ideas on an algebraic formalization of calculus in his most celebrated work, Essai sur un nouveau mode d'exposition des principes du calcul différentiel (Essay on a New Method of Exposition of the Principles of Differential Calculus). It was in this paper, when considering abstract functional equations of differential calculus, that he proposed the terms "commutative" and "distributive" to describe properties of functions. Servois' 1814 Essai was published well before the modern definitions of functions, identities and inverses, so in his paper he attempted to formalize these ideas by defining their behavior. On many occasions throughout the document, he uses operations on functions to describe not only ordinary functions of an independent variable but also operators, such as difference and differential operators. It is here that we first see a formal definition of the distributive property. Servois asserts the following statement: Let ϕ ( x + y + . . . ) = ϕ ( x ) + ϕ ( y ) + . . . {\displaystyle \phi (x+y+...)=\phi (x)+\phi (y)+...} "Functions which, like ϕ {\displaystyle \phi } , are such that the function of the (algebraic) sum of any number of quantities is equal to the sum of the same function of each of these quantities, are called distributive" He goes on to further describe the commutative property as follows: Let f g z = g f z {\displaystyle fgz=gfz} "Functions, which like f {\displaystyle f} and g {\displaystyle g} , are such that they give identical results, no matter in which order we apply them to the subject, are called commutative between themselves." == Retirement and recognition == Servois went on to publish two more articles in the Annales de mathématiques pures et appliquées, but they were far less influential than his previous papers. He was assigned to his final position, as the curator of the Artillery Museum in Paris, on 2 May 1817, where he stayed until 1 June 1827.
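Servois' two properties can be illustrated concretely with the kind of operators he discussed. The sketch below (our illustration, not from Servois' text; the operator names E and D are chosen for this example) checks that the forward-difference operator is "distributive" in his sense, and that it and the shift operator are "commutative between themselves":

```python
# Illustrative sketch of Servois' terminology (operator names are ours):
# E is the shift operator, D the forward-difference operator.

def E(f):
    """Shift operator: (E f)(x) = f(x + 1)."""
    return lambda x: f(x + 1)

def D(f):
    """Forward difference: (D f)(x) = f(x + 1) - f(x)."""
    return lambda x: f(x + 1) - f(x)

f = lambda x: x * x
g = lambda x: 3 * x + 1
add = lambda u, v: (lambda x: u(x) + v(x))

# "Distributive": the function of a sum equals the sum of the functions.
assert D(add(f, g))(5) == add(D(f), D(g))(5)

# "Commutative between themselves": the order of application is irrelevant.
assert E(D(f))(5) == D(E(f))(5)
print("Servois' properties hold for E and D")
```

Both assertions hold identically in x, since D and E are linear and built from shifts, which is exactly why Servois singled out such operators when formalizing the calculus of operations.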
During his time as the curator, Servois was made a Knight of Saint-Louis for distinguished service to the military on 17 August 1822. After his retirement, he returned to his hometown of Mont-de-Laval and resided with his sister and his two nieces until his death on 17 April 1847 at the age of 79. == References ==
Wikipedia:Françoise Tisseur#0
Françoise Tisseur is a numerical analyst and Professor of Numerical Analysis in the Department of Mathematics at the University of Manchester, UK. She works in numerical linear algebra, in particular on nonlinear eigenvalue problems and structured matrix problems, including the development of algorithms and software. She is a graduate of the University of Saint-Étienne, France, from which she gained her Maîtrise (Mathematical Engineering) in 1993, Diplôme d'Études Approfondies in 1994, and PhD (Numerical Analysis) in 1997. She has contributed software to LAPACK, ScaLAPACK, and the MATLAB distribution. Tisseur is a member of the editorial boards of the SIAM Journal on Matrix Analysis and Applications, the IMA Journal of Numerical Analysis and the Electronic Journal of Linear Algebra. == Awards and honours == Tisseur was awarded the 2010 Whitehead Prize by the London Mathematical Society for her research achievements in numerical linear algebra, including polynomial eigenvalue and structured matrix problems. She was awarded the 2011–2012 Adams Prize of the University of Cambridge for her work on polynomial eigenvalue problems, and held a Royal Society Wolfson Research Merit Award in 2014–2019. She delivered the Olga Taussky-Todd Lecture at the International Congress on Industrial and Applied Mathematics in Valencia, Spain, in 2019. She is the 2020 winner of the Fröhlich Prize of the London Mathematical Society "for her important and highly innovative contributions to the analysis, perturbation theory, and numerical solution of nonlinear eigenvalue problems". Tisseur became a Fellow of the Society for Industrial and Applied Mathematics in 2016 "for contributions to numerical linear algebra, especially numerical methods for eigenvalue problems". She held an EPSRC Leadership Fellowship in 2011–2016, and is a Fellow of the Institute of Mathematics and its Applications. == References == == External links == Françoise Tisseur publications indexed by Google Scholar
Wikipedia:Fraňková–Helly selection theorem#0
In mathematics, the Fraňková–Helly selection theorem is a generalisation of Helly's selection theorem for functions of bounded variation to the case of regulated functions. It was proved in 1991 by the Czech mathematician Dana Fraňková. == Background == Let X be a separable Hilbert space, and let BV([0, T]; X) denote the normed vector space of all functions f : [0, T] → X with finite total variation over the interval [0, T], equipped with the total variation norm. It is well known that BV([0, T]; X) satisfies the compactness theorem known as Helly's selection theorem: given any sequence of functions (fn)n∈N in BV([0, T]; X) that is uniformly bounded in the total variation norm, there exists a subsequence ( f n ( k ) ) ⊆ ( f n ) ⊂ B V ( [ 0 , T ] ; X ) {\displaystyle \left(f_{n(k)}\right)\subseteq (f_{n})\subset \mathrm {BV} ([0,T];X)} and a limit function f ∈ BV([0, T]; X) such that fn(k)(t) converges weakly in X to f(t) for every t ∈ [0, T]. That is, for every continuous linear functional λ ∈ X*, λ ( f n ( k ) ( t ) ) → λ ( f ( t ) ) in R as k → ∞ . {\displaystyle \lambda \left(f_{n(k)}(t)\right)\to \lambda (f(t)){\mbox{ in }}\mathbb {R} {\mbox{ as }}k\to \infty .} Consider now the Banach space Reg([0, T]; X) of all regulated functions f : [0, T] → X, equipped with the supremum norm. Helly's theorem does not hold for the space Reg([0, T]; X): a counterexample is given by the sequence f n ( t ) = sin ⁡ ( n t ) . {\displaystyle f_{n}(t)=\sin(nt).} One may ask, however, if a weaker selection theorem is true, and the Fraňková–Helly selection theorem is such a result. == Statement of the Fraňková–Helly selection theorem == As before, let X be a separable Hilbert space and let Reg([0, T]; X) denote the space of regulated functions f : [0, T] → X, equipped with the supremum norm. 
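The failure of the uniform-variation hypothesis for the counterexample f_n(t) = sin(nt) can be checked numerically. The following sketch (our illustration, not part of the article) approximates the total variation over finer and finer uniform partitions of [0, π] and shows it growing like 2n, so no bound L independent of n exists, even though the sequence is uniformly bounded in the supremum norm:

```python
import math

def total_variation(f, a, b, m):
    """Approximate Var(f) on [a, b] via a uniform partition with m subintervals."""
    ts = [a + (b - a) * j / m for j in range(m + 1)]
    return sum(abs(f(ts[j]) - f(ts[j - 1])) for j in range(1, m + 1))

# f_n(t) = sin(n t) on [0, pi]: sup norm is 1 for every n,
# but the total variation is 2n, which is unbounded in n.
for n in (5, 10, 20):
    v = total_variation(lambda t: math.sin(n * t), 0.0, math.pi, 100_000)
    print(f"n = {n:2d}  Var approx {v:.3f}")
```

Since sin(nt) is piecewise monotone with n extrema on [0, π], each monotone piece contributes variation 2, giving Var = 2n exactly; the discrete sums above converge to that value from below as the partition is refined.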
Let (fn)n∈N be a sequence in Reg([0, T]; X) satisfying the following condition: for every ε > 0, there exists some Lε > 0 so that each fn may be approximated by a un ∈ BV([0, T]; X) satisfying ‖ f n − u n ‖ ∞ < ε {\displaystyle \|f_{n}-u_{n}\|_{\infty }<\varepsilon } and | u n ( 0 ) | + V a r ( u n ) ≤ L ε , {\displaystyle |u_{n}(0)|+\mathrm {Var} (u_{n})\leq L_{\varepsilon },} where |·| denotes the norm in X and Var(u) denotes the variation of u, which is defined to be the supremum sup Π ∑ j = 1 m | u ( t j ) − u ( t j − 1 ) | {\displaystyle \sup _{\Pi }\sum _{j=1}^{m}|u(t_{j})-u(t_{j-1})|} over all partitions Π = { 0 = t 0 < t 1 < ⋯ < t m = T , m ∈ N } {\displaystyle \Pi =\{0=t_{0}<t_{1}<\dots <t_{m}=T,m\in \mathbf {N} \}} of [0, T]. Then there exists a subsequence ( f n ( k ) ) ⊆ ( f n ) ⊂ R e g ( [ 0 , T ] ; X ) {\displaystyle \left(f_{n(k)}\right)\subseteq (f_{n})\subset \mathrm {Reg} ([0,T];X)} and a limit function f ∈ Reg([0, T]; X) such that fn(k)(t) converges weakly in X to f(t) for every t ∈ [0, T]. That is, for every continuous linear functional λ ∈ X*, λ ( f n ( k ) ( t ) ) → λ ( f ( t ) ) in R as k → ∞ . {\displaystyle \lambda \left(f_{n(k)}(t)\right)\to \lambda (f(t)){\mbox{ in }}\mathbb {R} {\mbox{ as }}k\to \infty .} == References == Fraňková, Dana (1991). "Regulated functions". Math. Bohem. 116 (1): 20–59. ISSN 0862-7959. MR 1100424.
Wikipedia:Fred Van Oystaeyen#0
Fred Van Oystaeyen (born 1947), also Freddy van Oystaeyen, is a mathematician and emeritus professor of mathematics at the University of Antwerp. He has pioneered work on noncommutative geometry, in particular noncommutative algebraic geometry. == Biography == In 1972, Fred Van Oystaeyen obtained his Ph.D. from the Vrije Universiteit of Amsterdam. In 1975 he became professor at the University of Antwerp, Department of Mathematics and Computer Science. Van Oystaeyen has written well over 200 scientific papers and several books. One of his recent books, Virtual Topology and Functor Geometry, provides an introduction to noncommutative topology. On the occasion of his 60th birthday, a conference in his honour was held in Almería, September 18 to 22, 2007; on March 25, 2011, he received his first honorary doctorate from that same university, the Universidad de Almería. At the campus of the Universidad de Almería, the street "Calle Fred Van Oystaeyen" (previously "Calle los Gallardos") is named after him. In 2019, he received another honorary doctorate, from the Vrije Universiteit Brussel. == Books == Hidetoshi Marubayashi, Fred Van Oystaeyen: Prime Divisors and Noncommutative Valuation Theory, Springer, 2012, ISBN 978-3-6423-1152-2 Fred Van Oystaeyen: Virtual topology and functor geometry, Chapman & Hall, 2008, ISBN 978-1-4200-6056-0 Constantin Nastasescu, Freddy van Oystaeyen: Methods of graded rings, Lecture Notes in Mathematics 1836, Springer, February 2004, ISBN 978-3-540-20746-7 Freddy van Oystaeyen: Algebraic geometry for associative algebras, M. Dekker, New York, 2000, ISBN 0-8247-0424-X F. van Oystaeyen, A. Verschoren: Relative invariants of rings: the noncommutative theory, M. Dekker, New York, 1984, ISBN 0-8247-7281-4 F. van Oystaeyen, A. Verschoren: Relative invariants of rings: the commutative theory, M. Dekker, New York, 1983, ISBN 0-8247-7043-9 Freddy M.J. van Oystaeyen, Alain H.M.J.
Verschoren: Non-commutative algebraic geometry: an introduction, Springer-Verlag, 1981, ISBN 0-387-11153-0 F. van Oystaeyen, A. Verschoren: Reflectors and localization : application to sheaf theory, M. Dekker, New York, 1979, ISBN 0-8247-6844-2 F. van Oystaeyen: Prime spectra in non-commutative algebra, Springer-Verlag, 1975, ISBN 0-8247-0424-X == References == == External links == Fred Van Oystaeyen, Universiteit Antwerpen - Academic bibliography - Research Fred van Oystaeyen at the nLab Fred Van Oystaeyen, publication list at Scientific Commons Fred Van Oystaeyen: On the Reality of Noncommutative Space, neverendingbooks.org
Wikipedia:Fred van der Blij#0
Frederik van der Blij (13 May 1923 – 27 January 2018) was a Dutch mathematician. From 1955 until his retirement in 1988 he was professor at the University of Utrecht. His research focused on number theory, among other fields. == See also == Van der Blij's lemma == References == == External links == Fred van der Blij at the Mathematics Genealogy Project