Wikipedia:Lev M. Bregman#0
Lev M. Bregman (1941–2023) was a Soviet and Israeli mathematician, best known for the Bregman divergence named after him. Bregman received his M.Sc. in mathematics in 1963 at Leningrad University and his Ph.D. in mathematics in 1966 at the same institution, under the direction of his advisor Prof. J. V. Romanovsky, for a thesis on relaxation methods for finding a common point of convex sets, which led to one of his best-known publications. Bregman's Theorem, proving a 1963 conjecture of Henryk Minc, gives an upper bound on the permanent of a 0-1 matrix (a numerical illustration appears at the end of this entry). Bregman was employed at the Institute for Industrial Mathematics, Beer-Sheva, Israel, after having spent one year at Ben-Gurion University of the Negev, Beer-Sheva. Earlier, from 1966 through 1991, he was a senior researcher at Leningrad University. Bregman was the author of several textbooks and dozens of publications in international journals. == References == == External links == Homepage of Lev M. Bregman at the Institute for Industrial Mathematics, Beer-Sheva, Israel Bregman's Theorem at Theorem of the Day
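The bound referred to above is, in its usual modern form, perm(A) <= prod_i (r_i!)^(1/r_i), where r_i is the i-th row sum of the 0-1 matrix A. The following Python sketch (an illustration added here, not part of the article; the 3 x 3 matrix is an arbitrary example and rows are assumed to be nonzero) compares a brute-force permanent with this bound:

import itertools
import math

def permanent(a):
    # Brute-force permanent: sum over all permutations (fine for small n).
    n = len(a)
    return sum(
        math.prod(a[i][sigma[i]] for i in range(n))
        for sigma in itertools.permutations(range(n))
    )

def bregman_minc_bound(a):
    # Bregman's bound: prod_i (r_i!)**(1/r_i) with r_i the i-th row sum;
    # rows are assumed nonzero here.
    return math.prod(
        math.factorial(sum(row)) ** (1.0 / sum(row)) for row in a
    )

a = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
print(permanent(a), "<=", round(bregman_minc_bound(a), 3))  # 3 <= 3.634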
Wikipedia:Levi-Civita symbol#0
In mathematics, particularly in linear algebra, tensor analysis, and differential geometry, the Levi-Civita symbol or Levi-Civita epsilon represents a collection of numbers defined from the sign of a permutation of the natural numbers 1, 2, ..., n, for some positive integer n. It is named after the Italian mathematician and physicist Tullio Levi-Civita. Other names include the permutation symbol, antisymmetric symbol, or alternating symbol, which refer to its antisymmetric property and definition in terms of permutations. The standard letters to denote the Levi-Civita symbol are the Greek lower case epsilon ε or ϵ, or less commonly the Latin lower case e. Index notation allows one to display permutations in a way compatible with tensor analysis: {\displaystyle \varepsilon _{i_{1}i_{2}\dots i_{n}}} where each index i1, i2, ..., in takes values 1, 2, ..., n. There are n^n indexed values of εi1i2...in, which can be arranged into an n-dimensional array. The key defining property of the symbol is total antisymmetry in the indices. When any two indices are interchanged, equal or not, the symbol is negated: {\displaystyle \varepsilon _{\dots i_{p}\dots i_{q}\dots }=-\varepsilon _{\dots i_{q}\dots i_{p}\dots }.} If any two indices are equal, the symbol is zero. When all indices are unequal, we have: {\displaystyle \varepsilon _{i_{1}i_{2}\dots i_{n}}=(-1)^{p}\varepsilon _{1\,2\,\dots n},} where p (called the parity of the permutation) is the number of pairwise interchanges of indices necessary to unscramble i1, i2, ..., in into the order 1, 2, ..., n, and the factor (−1)^p is called the sign, or signature, of the permutation. The value ε1 2 ... n must be defined, else the particular values of the symbol for all permutations are indeterminate. Most authors choose ε1 2 ... n = +1, which means the Levi-Civita symbol equals the sign of a permutation when the indices are all unequal. This choice is used throughout this article. The term "n-dimensional Levi-Civita symbol" refers to the fact that the number of indices on the symbol n matches the dimensionality of the vector space in question, which may be Euclidean or non-Euclidean, for example, {\displaystyle \mathbb {R} ^{3}} or Minkowski space. The values of the Levi-Civita symbol are independent of any metric tensor and coordinate system. Also, the specific term "symbol" emphasizes that it is not a tensor because of how it transforms between coordinate systems; however it can be interpreted as a tensor density. The Levi-Civita symbol allows the determinant of a square matrix, and the cross product of two vectors in three-dimensional Euclidean space, to be expressed in Einstein index notation. == Definition == The Levi-Civita symbol is most often used in three and four dimensions, and to some extent in two dimensions, so these are given here before defining the general case.
=== Two dimensions === In two dimensions, the Levi-Civita symbol is defined by: {\displaystyle \varepsilon _{ij}={\begin{cases}+1&{\text{if }}(i,j)=(1,2)\\-1&{\text{if }}(i,j)=(2,1)\\\;\;\,0&{\text{if }}i=j\end{cases}}} The values can be arranged into a 2 × 2 antisymmetric matrix: {\displaystyle {\begin{pmatrix}\varepsilon _{11}&\varepsilon _{12}\\\varepsilon _{21}&\varepsilon _{22}\end{pmatrix}}={\begin{pmatrix}0&1\\-1&0\end{pmatrix}}} Use of the two-dimensional symbol is common in condensed matter physics, and in certain specialized high-energy topics like supersymmetry and twistor theory, where it appears in the context of 2-spinors. === Three dimensions === In three dimensions, the Levi-Civita symbol is defined by: {\displaystyle \varepsilon _{ijk}={\begin{cases}+1&{\text{if }}(i,j,k){\text{ is }}(1,2,3),(2,3,1),{\text{ or }}(3,1,2),\\-1&{\text{if }}(i,j,k){\text{ is }}(3,2,1),(1,3,2),{\text{ or }}(2,1,3),\\\;\;\,0&{\text{if }}i=j,{\text{ or }}j=k,{\text{ or }}k=i\end{cases}}} That is, εijk is 1 if (i, j, k) is an even permutation of (1, 2, 3), −1 if it is an odd permutation, and 0 if any index is repeated. In three dimensions only, the cyclic permutations of (1, 2, 3) are all even permutations, and the anticyclic permutations are all odd permutations. This means that in 3d it is sufficient to take cyclic or anticyclic permutations of (1, 2, 3) to obtain all the even or odd permutations. Analogous to the 2-dimensional matrix, the values of the 3-dimensional Levi-Civita symbol can be arranged into a 3 × 3 × 3 array, where i is the depth (blue: i = 1; red: i = 2; green: i = 3 in the accompanying figure), j is the row and k is the column. Some examples: {\displaystyle {\begin{aligned}\varepsilon _{132}&=-\varepsilon _{123}=-1\\\varepsilon _{312}&=-\varepsilon _{213}=-(-\varepsilon _{123})=1\\\varepsilon _{231}&=-\varepsilon _{132}=-(-\varepsilon _{123})=1\\\varepsilon _{232}&=-\varepsilon _{232}=0\end{aligned}}} === Four dimensions === In four dimensions, the Levi-Civita symbol is defined by: {\displaystyle \varepsilon _{ijkl}={\begin{cases}+1&{\text{if }}(i,j,k,l){\text{ is an even permutation of }}(1,2,3,4)\\-1&{\text{if }}(i,j,k,l){\text{ is an odd permutation of }}(1,2,3,4)\\\;\;\,0&{\text{otherwise}}\end{cases}}} These values can be arranged into a 4 × 4 × 4 × 4 array, although in 4 dimensions and higher this is difficult to draw.
Some examples: {\displaystyle {\begin{aligned}\varepsilon _{1432}&=-\varepsilon _{1234}=-1\\\varepsilon _{2134}&=-\varepsilon _{1234}=-1\\\varepsilon _{4321}&=-\varepsilon _{1324}=-(-\varepsilon _{1234})=1\\\varepsilon _{3243}&=-\varepsilon _{3243}=0\end{aligned}}} === Generalization to n dimensions === More generally, in n dimensions, the Levi-Civita symbol is defined by: {\displaystyle \varepsilon _{a_{1}a_{2}a_{3}\ldots a_{n}}={\begin{cases}+1&{\text{if }}(a_{1},a_{2},a_{3},\ldots ,a_{n}){\text{ is an even permutation of }}(1,2,3,\dots ,n)\\-1&{\text{if }}(a_{1},a_{2},a_{3},\ldots ,a_{n}){\text{ is an odd permutation of }}(1,2,3,\dots ,n)\\\;\;\,0&{\text{otherwise}}\end{cases}}} Thus, it is the sign of the permutation in the case of a permutation, and zero otherwise. Using the capital pi notation Π for ordinary multiplication of numbers, an explicit expression for the symbol is: {\displaystyle {\begin{aligned}\varepsilon _{a_{1}a_{2}a_{3}\ldots a_{n}}&=\prod _{1\leq i<j\leq n}\operatorname {sgn}(a_{j}-a_{i})\\&=\operatorname {sgn}(a_{2}-a_{1})\operatorname {sgn}(a_{3}-a_{1})\dotsm \operatorname {sgn}(a_{n}-a_{1})\operatorname {sgn}(a_{3}-a_{2})\operatorname {sgn}(a_{4}-a_{2})\dotsm \operatorname {sgn}(a_{n}-a_{2})\dotsm \operatorname {sgn}(a_{n}-a_{n-1})\end{aligned}}} where the signum function (denoted sgn) returns the sign of its argument while discarding the absolute value if nonzero. The formula is valid for all index values, and for any n (when n = 0 or n = 1, this is the empty product). However, computing the formula above naively has a time complexity of O(n²), whereas the sign can be computed from the parity of the permutation, read off from its disjoint cycles, in only O(n log n) cost.
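Both approaches can be compared directly. The following Python sketch (an illustration, not part of the article) implements the O(n²) sign-product formula and the cycle-based evaluation, whose O(n log n) cost is dominated by the sort used to check that the input is a permutation of (1, ..., n):

import itertools

def levi_civita_naive(a):
    # O(n^2) evaluation via the product formula: prod over i<j of sgn(a_j - a_i).
    sign = 1
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            if a[j] == a[i]:
                return 0          # repeated index
            if a[j] < a[i]:
                sign = -sign      # each inversion contributes a factor of -1
    return sign

def levi_civita_cycles(a):
    # Parity from the disjoint-cycle decomposition; the O(n log n) cost is
    # the sort that checks a is a permutation of (1, ..., n).
    n = len(a)
    if sorted(a) != list(range(1, n + 1)):
        return 0
    seen = [False] * n
    sign = 1
    for start in range(n):
        if seen[start]:
            continue
        j, length = start, 0
        while not seen[j]:
            seen[j] = True
            j = a[j] - 1          # follow the cycle (indices are 1-based)
            length += 1
        if length % 2 == 0:       # an even-length cycle is an odd permutation
            sign = -sign
    return sign

for p in itertools.permutations(range(1, 5)):
    assert levi_civita_naive(p) == levi_civita_cycles(p)
print(levi_civita_naive((1, 2, 3)), levi_civita_naive((2, 1, 3)))  # 1 -1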
== Properties == A tensor whose components in an orthonormal basis are given by the Levi-Civita symbol (a tensor of covariant rank n) is sometimes called a permutation tensor. Under the ordinary transformation rules for tensors, the Levi-Civita symbol is unchanged under pure rotations, consistent with the fact that it is (by definition) the same in all coordinate systems related by orthogonal transformations. However, the Levi-Civita symbol is a pseudotensor because under an orthogonal transformation of Jacobian determinant −1 (for example, a reflection in an odd number of dimensions), it would acquire a minus sign if it were a tensor. As it does not change at all, the Levi-Civita symbol is, by definition, a pseudotensor. As the Levi-Civita symbol is a pseudotensor, the result of taking a cross product is a pseudovector, not a vector. Under a general coordinate change, the components of the permutation tensor are multiplied by the Jacobian of the transformation matrix. This implies that in coordinate frames different from the one in which the tensor was defined, its components can differ from those of the Levi-Civita symbol by an overall factor. If the frame is orthonormal, the factor will be ±1 depending on whether the orientation of the frame is the same or not. In index-free tensor notation, the Levi-Civita symbol is replaced by the concept of the Hodge dual. Summation symbols can be eliminated by using Einstein notation, where an index repeated between two or more terms indicates summation over that index. For example, {\displaystyle \varepsilon _{ijk}\varepsilon ^{imn}\equiv \sum _{i=1,2,3}\varepsilon _{ijk}\varepsilon ^{imn}}. In the following examples, Einstein notation is used. === Two dimensions === In two dimensions, when all i, j, m, n each take the values 1 and 2: {\displaystyle \varepsilon _{ij}\varepsilon ^{mn}={\delta _{i}}^{m}{\delta _{j}}^{n}-{\delta _{i}}^{n}{\delta _{j}}^{m}} (1) {\displaystyle \varepsilon _{ij}\varepsilon ^{in}={\delta _{j}}^{n}} (2) {\displaystyle \varepsilon _{ij}\varepsilon ^{ij}=2.} (3) === Three dimensions === ==== Index and symbol values ==== In three dimensions, when all i, j, k, m, n each take values 1, 2, and 3: {\displaystyle \varepsilon _{ijk}\varepsilon ^{imn}={\delta _{j}}^{m}{\delta _{k}}^{n}-{\delta _{j}}^{n}{\delta _{k}}^{m}} (4) {\displaystyle \varepsilon _{jmn}\varepsilon ^{imn}=2{\delta _{j}}^{i}} (5) {\displaystyle \varepsilon _{ijk}\varepsilon ^{ijk}=6.} (6) ==== Product ==== The Levi-Civita symbol is related to the Kronecker delta. In three dimensions, the relationship is given by the following equations (vertical lines denote the determinant): {\displaystyle {\begin{aligned}\varepsilon _{ijk}\varepsilon _{lmn}&={\begin{vmatrix}\delta _{il}&\delta _{im}&\delta _{in}\\\delta _{jl}&\delta _{jm}&\delta _{jn}\\\delta _{kl}&\delta _{km}&\delta _{kn}\\\end{vmatrix}}\\[6pt]&=\delta _{il}\left(\delta _{jm}\delta _{kn}-\delta _{jn}\delta _{km}\right)-\delta _{im}\left(\delta _{jl}\delta _{kn}-\delta _{jn}\delta _{kl}\right)+\delta _{in}\left(\delta _{jl}\delta _{km}-\delta _{jm}\delta _{kl}\right).\end{aligned}}} A special case of this result occurs when one of the indices is repeated and summed over: {\displaystyle \sum _{i=1}^{3}\varepsilon _{ijk}\varepsilon _{imn}=\delta _{jm}\delta _{kn}-\delta _{jn}\delta _{km}} In Einstein notation, the duplication of the i index implies the sum on i, and the previous equation is then denoted εijkεimn = δjmδkn − δjnδkm. If two indices are repeated (and summed over), this further reduces to: {\displaystyle \sum _{i=1}^{3}\sum _{j=1}^{3}\varepsilon _{ijk}\varepsilon _{ijn}=2\delta _{kn}} === n dimensions === ==== Index and symbol values ==== In n dimensions, when all i1, ..., in, j1, ..., jn take values 1, 2, ..., n: {\displaystyle \varepsilon _{i_{1}\dots i_{n}}\varepsilon ^{j_{1}\dots j_{n}}=\delta _{i_{1}\dots i_{n}}^{j_{1}\dots j_{n}}} (7) {\displaystyle \varepsilon _{i_{1}\dots i_{k}\,i_{k+1}\dots i_{n}}\varepsilon ^{i_{1}\dots i_{k}\,j_{k+1}\dots j_{n}}=k!\,\delta _{i_{k+1}\dots i_{n}}^{j_{k+1}\dots j_{n}}} (8) {\displaystyle \varepsilon _{i_{1}\dots i_{n}}\varepsilon ^{i_{1}\dots i_{n}}=n!} (9) where the exclamation mark (!) denotes the factorial, and {\displaystyle \delta _{\beta \dots }^{\alpha \dots }} is the generalized Kronecker delta. For any n, the property {\displaystyle \sum _{i,j,k,\dots =1}^{n}\varepsilon _{ijk\dots }\varepsilon _{ijk\dots }=n!} follows from the facts that every permutation is either even or odd, that {\displaystyle (+1)^{2}=(-1)^{2}=1}, and that the number of permutations of any n-element set is exactly n!. The particular case of (8) with {\textstyle k=n-2} is {\displaystyle \varepsilon _{i_{1}\dots i_{n-2}jk}\varepsilon ^{i_{1}\dots i_{n-2}lm}=(n-2)!\left(\delta _{j}^{l}\delta _{k}^{m}-\delta _{j}^{m}\delta _{k}^{l}\right)\,.} ==== Product ==== In general, for n dimensions, one can write the product of two Levi-Civita symbols as: {\displaystyle \varepsilon _{i_{1}i_{2}\dots i_{n}}\varepsilon _{j_{1}j_{2}\dots j_{n}}={\begin{vmatrix}\delta _{i_{1}j_{1}}&\delta _{i_{1}j_{2}}&\dots &\delta _{i_{1}j_{n}}\\\delta _{i_{2}j_{1}}&\delta _{i_{2}j_{2}}&\dots &\delta _{i_{2}j_{n}}\\\vdots &\vdots &\ddots &\vdots \\\delta _{i_{n}j_{1}}&\delta _{i_{n}j_{2}}&\dots &\delta _{i_{n}j_{n}}\\\end{vmatrix}}.} Proof: Both sides change signs upon switching two indices, so without loss of generality assume {\displaystyle i_{1}\leq \cdots \leq i_{n},j_{1}\leq \cdots \leq j_{n}}. If some {\displaystyle i_{c}=i_{c+1}} then the left side is zero, and the right side is also zero since two of its rows are equal. Similarly for {\displaystyle j_{c}=j_{c+1}}. Finally, if {\displaystyle i_{1}<\cdots <i_{n},j_{1}<\cdots <j_{n}}, then both sides are 1. === Proofs === For (1), both sides are antisymmetric with respect to ij and mn. We therefore only need to consider the case i ≠ j and m ≠ n. By substitution, we see that the equation holds for ε12ε12, that is, for i = m = 1 and j = n = 2 (both sides are then one). Since the equation is antisymmetric in ij and mn, any set of values for these can be reduced to the above case (which holds). The equation thus holds for all values of ij and mn. Using (1), we have for (2): {\displaystyle \varepsilon _{ij}\varepsilon ^{in}=\delta _{i}{}^{i}\delta _{j}{}^{n}-\delta _{i}{}^{n}\delta _{j}{}^{i}=2\delta _{j}{}^{n}-\delta _{j}{}^{n}=\delta _{j}{}^{n}\,.} Here we used the Einstein summation convention with i going from 1 to 2. Next, (3) follows similarly from (2). To establish (5), notice that both sides vanish when i ≠ j. Indeed, if i ≠ j, then one cannot choose m and n such that both permutation symbols on the left are nonzero. Then, with i = j fixed, there are only two ways to choose m and n from the remaining two indices. For any such indices, we have {\displaystyle \varepsilon _{jmn}\varepsilon ^{imn}=\left(\varepsilon ^{imn}\right)^{2}=1} (no summation), and the result follows. Then (6) follows since 3! = 6 and, for any distinct indices i, j, k taking values 1, 2, 3, we have {\displaystyle \varepsilon _{ijk}\varepsilon ^{ijk}=1} (no summation).
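The three-dimensional identities above are easy to verify numerically. The following Python sketch (an illustration using 0-based indices rather than the 1-based indices of the article) builds the 3 × 3 × 3 array of symbol values and checks (4), the single contraction, and the full contraction (6) with numpy.einsum:

import itertools
import numpy as np

def perm_sign(p):
    # Sign of a permutation given as a tuple of 0-based indices.
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[j] < p[i]:
                s = -s
    return s

eps = np.zeros((3, 3, 3), dtype=int)
for p in itertools.permutations(range(3)):
    eps[p] = perm_sign(p)
delta = np.eye(3, dtype=int)

# eps_ijk eps_imn = delta_jm delta_kn - delta_jn delta_km
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
rhs = (np.einsum('jm,kn->jkmn', delta, delta)
       - np.einsum('jn,km->jkmn', delta, delta))
assert (lhs == rhs).all()

# eps_ijk eps_ijn = 2 delta_kn, and the full contraction is 3! = 6
assert (np.einsum('ijk,ijn->kn', eps, eps) == 2 * delta).all()
assert np.einsum('ijk,ijk->', eps, eps) == 6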
== Applications and examples == === Determinants === In linear algebra, the determinant of a 3 × 3 square matrix A = [aij] can be written {\displaystyle \det(\mathbf {A} )=\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon _{ijk}a_{1i}a_{2j}a_{3k}} Similarly the determinant of an n × n matrix A = [aij] can be written as {\displaystyle \det(\mathbf {A} )=\varepsilon _{i_{1}\dots i_{n}}a_{1i_{1}}\dots a_{ni_{n}},} where each ir should be summed over 1, ..., n, or equivalently: {\displaystyle \det(\mathbf {A} )={\frac {1}{n!}}\varepsilon _{i_{1}\dots i_{n}}\varepsilon _{j_{1}\dots j_{n}}a_{i_{1}j_{1}}\dots a_{i_{n}j_{n}},} where now each ir and each jr should be summed over 1, ..., n. More generally, we have the identity {\displaystyle \sum _{i_{1},i_{2},\dots }\varepsilon _{i_{1}\dots i_{n}}a_{i_{1}\,j_{1}}\dots a_{i_{n}\,j_{n}}=\det(\mathbf {A} )\varepsilon _{j_{1}\dots j_{n}}} === Vector cross product === ==== Cross product (two vectors) ==== Let {\displaystyle (\mathbf {e_{1}} ,\mathbf {e_{2}} ,\mathbf {e_{3}} )} be a positively oriented orthonormal basis of a vector space. If (a1, a2, a3) and (b1, b2, b3) are the coordinates of the vectors a and b in this basis, then their cross product can be written as a determinant: {\displaystyle \mathbf {a\times b} ={\begin{vmatrix}\mathbf {e_{1}} &\mathbf {e_{2}} &\mathbf {e_{3}} \\a^{1}&a^{2}&a^{3}\\b^{1}&b^{2}&b^{3}\\\end{vmatrix}}=\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon _{ijk}\mathbf {e} _{i}a^{j}b^{k}} hence, using the Levi-Civita symbol, more simply: {\displaystyle (\mathbf {a\times b} )^{i}=\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon _{ijk}a^{j}b^{k}.} In Einstein notation, the summation symbols may be omitted, and the ith component of their cross product equals {\displaystyle (\mathbf {a\times b} )^{i}=\varepsilon _{ijk}a^{j}b^{k}.} The first component is {\displaystyle (\mathbf {a\times b} )^{1}=a^{2}b^{3}-a^{3}b^{2}\,,} then by cyclic permutations of 1, 2, 3 the others can be derived immediately, without explicitly calculating them from the above formulae: {\displaystyle {\begin{aligned}(\mathbf {a\times b} )^{2}&=a^{3}b^{1}-a^{1}b^{3}\,,\\(\mathbf {a\times b} )^{3}&=a^{1}b^{2}-a^{2}b^{1}\,.\end{aligned}}} ==== Triple scalar product (three vectors) ==== From the above expression for the cross product, we have: {\displaystyle \mathbf {a\times b} =-\mathbf {b\times a} }. If c = (c1, c2, c3) is a third vector, then the triple scalar product equals {\displaystyle \mathbf {a} \cdot (\mathbf {b\times c} )=\varepsilon _{ijk}a^{i}b^{j}c^{k}.} From this expression, it can be seen that the triple scalar product is antisymmetric when exchanging any pair of arguments. For example, {\displaystyle \mathbf {a} \cdot (\mathbf {b\times c} )=-\mathbf {b} \cdot (\mathbf {a\times c} )}.
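These determinant, cross product and triple product formulas can likewise be checked numerically. A small Python sketch (illustrative; indices are 0-based and the test data are random) follows:

import numpy as np

# 3x3x3 Levi-Civita array: even permutations of (0, 1, 2) get +1,
# their reversals (odd permutations) get -1, everything else stays 0.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1
    eps[k, j, i] = -1

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
a, b, c = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

# det(A) = eps_ijk A_0i A_1j A_2k
assert np.isclose(np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2]),
                  np.linalg.det(A))

# (a x b)_i = eps_ijk a_j b_k
assert np.allclose(np.einsum('ijk,j,k->i', eps, a, b), np.cross(a, b))

# a . (b x c) = eps_ijk a_i b_j c_k
assert np.isclose(np.einsum('ijk,i,j,k->', eps, a, b, c),
                  a @ np.cross(b, c))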
==== Curl (one vector field) ==== If F = (F1, F2, F3) is a vector field defined on some open set of {\displaystyle \mathbb {R} ^{3}} as a function of position x = (x1, x2, x3) (using Cartesian coordinates), then the ith component of the curl of F equals {\displaystyle (\nabla \times \mathbf {F} )^{i}(\mathbf {x} )=\varepsilon _{ijk}{\frac {\partial }{\partial x^{j}}}F^{k}(\mathbf {x} ),} which follows from the cross product expression above, substituting components of the gradient vector operator (nabla). == Tensor density == In any arbitrary curvilinear coordinate system and even in the absence of a metric on the manifold, the Levi-Civita symbol as defined above may be considered to be a tensor density field in two different ways. It may be regarded as a contravariant tensor density of weight +1 or as a covariant tensor density of weight −1. In n dimensions using the generalized Kronecker delta, {\displaystyle {\begin{aligned}\varepsilon ^{\mu _{1}\dots \mu _{n}}&=\delta _{\,1\,\dots \,n}^{\mu _{1}\dots \mu _{n}}\,,\\\varepsilon _{\nu _{1}\dots \nu _{n}}&=\delta _{\nu _{1}\dots \nu _{n}}^{\,1\,\dots \,n}\,.\end{aligned}}} Notice that these are numerically identical; in particular, the sign is the same. == Levi-Civita tensors == On a pseudo-Riemannian manifold, one may define a coordinate-invariant covariant tensor field whose coordinate representation agrees with the Levi-Civita symbol wherever the coordinate system is such that the basis of the tangent space is orthonormal with respect to the metric and matches a selected orientation. This tensor should not be confused with the tensor density field mentioned above. The presentation in this section closely follows Carroll 2004. The covariant Levi-Civita tensor (also known as the Riemannian volume form) in any coordinate system that matches the selected orientation is {\displaystyle E_{a_{1}\dots a_{n}}={\sqrt {\left|\det[g_{ab}]\right|}}\,\varepsilon _{a_{1}\dots a_{n}}\,,} where gab is the representation of the metric in that coordinate system. We can similarly consider a contravariant Levi-Civita tensor by raising the indices with the metric as usual: {\displaystyle E^{a_{1}\dots a_{n}}=E_{b_{1}\dots b_{n}}\prod _{i=1}^{n}g^{a_{i}b_{i}}={\frac {1}{\sqrt {\left|\det[g_{ab}]\right|}}}\,\varepsilon ^{a_{1}\dots a_{n}}\,,} but notice that if the metric signature contains an odd number of negative eigenvalues q, then the sign of the components of this tensor differs from the standard Levi-Civita symbol: {\displaystyle E^{a_{1}\dots a_{n}}={\frac {\operatorname {sgn} \left(\det[g_{ab}]\right)}{\sqrt {\left|\det[g_{ab}]\right|}}}\,\varepsilon ^{a_{1}\dots a_{n}},} where sgn(det[gab]) = (−1)^q, {\displaystyle \varepsilon _{a_{1}\dots a_{n}}} is the usual Levi-Civita symbol discussed in the rest of this article, and we used the definition of the metric determinant in the derivation. More explicitly, when the tensor and basis orientation are chosen such that {\textstyle E_{01\dots n}=+{\sqrt {\left|\det[g_{ab}]\right|}}}, we have that {\displaystyle E^{01\dots n}={\frac {\operatorname {sgn}(\det[g_{ab}])}{\sqrt {\left|\det[g_{ab}]\right|}}}}. From this we can infer the identity {\displaystyle E^{\mu _{1}\dots \mu _{p}\alpha _{1}\dots \alpha _{n-p}}E_{\mu _{1}\dots \mu _{p}\beta _{1}\dots \beta _{n-p}}=(-1)^{q}p!\delta _{\beta _{1}\dots \beta _{n-p}}^{\alpha _{1}\dots \alpha _{n-p}}\,,} where {\displaystyle \delta _{\beta _{1}\dots \beta _{n-p}}^{\alpha _{1}\dots \alpha _{n-p}}=(n-p)!\delta _{\beta _{1}}^{\lbrack \alpha _{1}}\dots \delta _{\beta _{n-p}}^{\alpha _{n-p}\rbrack }} is the generalized Kronecker delta.
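As a concrete check of these sign relations, the following Python sketch (illustrative, using 0-based spacetime indices) builds the covariant tensor for the metric diag(−1, 1, 1, 1), raises its indices, and confirms that E^{0123} = sgn(det g)/√|det g| = −1 while E_{0123} = +1, together with the full contraction (−1)^q 4! = −24 for q = 1:

import itertools
import numpy as np

def perm_sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[j] < p[i]:
                s = -s
    return s

eps4 = np.zeros((4, 4, 4, 4))
for p in itertools.permutations(range(4)):
    eps4[p] = perm_sign(p)

g = np.diag([-1.0, 1.0, 1.0, 1.0])             # Minkowski metric, det g = -1
E_cov = np.sqrt(abs(np.linalg.det(g))) * eps4  # covariant tensor, E_0123 = +1
g_inv = np.linalg.inv(g)
E_con = np.einsum('ae,bf,cg,dh,efgh->abcd',
                  g_inv, g_inv, g_inv, g_inv, E_cov)

assert np.isclose(E_con[0, 1, 2, 3], -1.0)     # sgn(det g)/sqrt|det g| = -1
assert np.isclose(np.einsum('abcd,abcd->', E_con, E_cov), -24.0)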
=== Example: Minkowski space === In Minkowski space (the four-dimensional spacetime of special relativity), the covariant Levi-Civita tensor is {\displaystyle E_{\alpha \beta \gamma \delta }=\pm {\sqrt {\left|\det[g_{\mu \nu }]\right|}}\,\varepsilon _{\alpha \beta \gamma \delta }\,,} where the sign depends on the orientation of the basis. The contravariant Levi-Civita tensor is {\displaystyle E^{\alpha \beta \gamma \delta }=g^{\alpha \zeta }g^{\beta \eta }g^{\gamma \theta }g^{\delta \iota }E_{\zeta \eta \theta \iota }\,.} The following are examples of the general identity above specialized to Minkowski space (with the negative sign arising from the odd number of negatives in the signature of the metric tensor in either sign convention): {\displaystyle {\begin{aligned}E_{\alpha \beta \gamma \delta }E_{\rho \sigma \mu \nu }&=-g_{\alpha \zeta }g_{\beta \eta }g_{\gamma \theta }g_{\delta \iota }\delta _{\rho \sigma \mu \nu }^{\zeta \eta \theta \iota }\\E^{\alpha \beta \gamma \delta }E^{\rho \sigma \mu \nu }&=-g^{\alpha \zeta }g^{\beta \eta }g^{\gamma \theta }g^{\delta \iota }\delta _{\zeta \eta \theta \iota }^{\rho \sigma \mu \nu }\\E^{\alpha \beta \gamma \delta }E_{\alpha \beta \gamma \delta }&=-24\\E^{\alpha \beta \gamma \delta }E_{\rho \beta \gamma \delta }&=-6\delta _{\rho }^{\alpha }\\E^{\alpha \beta \gamma \delta }E_{\rho \sigma \gamma \delta }&=-2\delta _{\rho \sigma }^{\alpha \beta }\\E^{\alpha \beta \gamma \delta }E_{\rho \sigma \theta \delta }&=-\delta _{\rho \sigma \theta }^{\alpha \beta \gamma }\,.\end{aligned}}} == See also == List of permutation topics Symmetric tensor == Notes == == References == Misner, C.; Thorne, K. S.; Wheeler, J. A. (1973). Gravitation. W. H. Freeman & Co. pp. 85–86, §3.5. ISBN 0-7167-0344-0. Neuenschwander, D. E. (2015). Tensor Calculus for Physics. Johns Hopkins University Press. pp. 11, 29, 95. ISBN 978-1-4214-1565-9. Carroll, Sean M. (2004). Spacetime and Geometry. Addison-Wesley. ISBN 0-8053-8732-3. == External links == This article incorporates material from Levi-Civita permutation symbol on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. Weisstein, Eric W.
"Permutation Tensor". MathWorld.
Wikipedia:Levi–Lechicki theorem#0
Wijsman convergence is a variation of Hausdorff convergence suitable for work with unbounded sets. Intuitively, Wijsman convergence is to convergence in the Hausdorff metric as pointwise convergence is to uniform convergence. == History == The convergence was defined by Robert Wijsman. The same definition was used earlier by Zdeněk Frolík. Yet earlier, Hausdorff, in his book Grundzüge der Mengenlehre, defined so-called closed limits; for proper metric spaces this is the same as Wijsman convergence. == Definition == Let (X, d) be a metric space and let Cl(X) denote the collection of all d-closed subsets of X. For a point x ∈ X and a set A ∈ Cl(X), set {\displaystyle d(x,A)=\inf _{a\in A}d(x,a).} A sequence (or net) of sets Ai ∈ Cl(X) is said to be Wijsman convergent to A ∈ Cl(X) if, for each x ∈ X, {\displaystyle d(x,A_{i})\to d(x,A).} Wijsman convergence induces a topology on Cl(X), known as the Wijsman topology. == Properties == The Wijsman topology depends very strongly on the metric d. Even if two metrics are uniformly equivalent, they may generate different Wijsman topologies. Beer's theorem: if (X, d) is a complete, separable metric space, then Cl(X) with the Wijsman topology is a Polish space, i.e. it is separable and metrizable with a complete metric. Cl(X) with the Wijsman topology is always a Tychonoff space. Moreover, one has the Levi-Lechicki theorem: (X, d) is separable if and only if Cl(X) is either metrizable, first-countable or second-countable. If the pointwise convergence in the definition of Wijsman convergence is replaced by uniform convergence (uniformly in x), then one obtains Hausdorff convergence, where the Hausdorff metric is given by {\displaystyle d_{\mathrm {H} }(A,B)=\sup _{x\in X}{\big |}d(x,A)-d(x,B){\big |}.} The Hausdorff and Wijsman topologies on Cl(X) coincide if and only if (X, d) is a totally bounded space (a concrete unbounded example is sketched at the end of this entry). == See also == Hausdorff distance Kuratowski convergence Vietoris topology Hemicontinuity == References == Notes Bibliography Beer, Gerald (1993). Topologies on Closed and Closed Convex Sets. Mathematics and its Applications 268. Dordrecht: Kluwer Academic Publishers Group. pp. xii+340. ISBN 0-7923-2531-1. MR1269778. Beer, Gerald (1994). "Wijsman convergence: a survey". Set-Valued Analysis. 2 (1–2): 77–94. doi:10.1007/BF01027094. MR1285822. == External links == Som Naimpally (2001) [1994], "Wijsman convergence", Encyclopedia of Mathematics, EMS Press
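A standard illustration of the pointwise-versus-uniform contrast is the sequence An = {0, n} on the real line: d(x, An) → d(x, {0}) for each fixed x, so An → {0} in the Wijsman topology, yet dH(An, {0}) = n does not tend to 0. The following Python sketch (illustrative only; the supremum is approximated on a finite grid) demonstrates this:

def dist(x, A):
    # d(x, A) = inf over a in A of |x - a|, for a finite subset A of the reals
    return min(abs(x - a) for a in A)

A = [0.0]                                   # candidate Wijsman limit {0}
for n in [10, 100, 1000]:
    An = [0.0, float(n)]                    # the set A_n = {0, n}
    # pointwise (Wijsman): d(x, A_n) equals d(x, A) once n is large relative to x
    for x in [-2.0, 0.5, 3.0]:
        assert dist(x, An) == dist(x, A)
    # uniform (Hausdorff): the supremum over x stays of order n
    xs = [-5.0 + i * (n + 10) / 2000.0 for i in range(2001)]
    gap = max(abs(dist(x, An) - dist(x, A)) for x in xs)
    print(n, gap)                           # gap grows like n: no Hausdorff convergence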
Wikipedia:Lexing Ying#0
Lexing Ying is a professor of mathematics at Stanford University, where he is also a member of the Institute for Computational and Mathematical Engineering. He specializes in scientific computing and numerical analysis; in particular, his research concerns the design of numerical algorithms for problems in scientific computing. Ying received his bachelor's degree in computer science and applied mathematics from Shanghai Jiaotong University in 1998. He received his Ph.D. from the Courant Institute at New York University in 2004, under the guidance of Denis Zorin. Before joining Stanford in 2012, he was a postdoc at the California Institute of Technology and a professor at the University of Texas at Austin. The awards Ying has received include a Sloan Fellowship in 2007, an NSF CAREER Award in 2009, the James H. Wilkinson Prize in Numerical Analysis and Scientific Computing in 2013 (for "his outstanding contributions in many areas, including the rapid evaluation of oscillatory integral transforms, high frequency wave propagation and the computation of electron structure in metallic systems"), and a silver Morningside Medal in 2016. He was an invited speaker at the 2022 International Congress of Mathematicians. == References ==
Wikipedia:Li Shanlan identity#0
In mathematics, in combinatorics, the Li Shanlan identity (also called Li Shanlan's summation formula) is a certain combinatorial identity attributed to the nineteenth-century Chinese mathematician Li Shanlan. Since Li Shanlan is also known as Li Renshu (his courtesy name), this identity is also referred to as the Li Renshu identity. This identity appears in the third chapter of Duoji bilei (垛积比类 / 垛積比類, meaning summing finite series), a mathematical text authored by Li Shanlan and published in 1867 as part of his collected works. The Czech mathematician Josef Kaucky published an elementary proof of the identity along with a history of the identity in 1964. Kaucky attributed the identity to a certain Li Jen-Shu. From the account of the history of the identity, it has been ascertained that Li Jen-Shu is in fact Li Shanlan. Western scholars had been studying Chinese mathematics for its historical value, but the attribution of this identity to a nineteenth-century Chinese mathematician sparked a rethink on the mathematical value of the writings of Chinese mathematicians. == The identity == The Li Shanlan identity states that {\displaystyle \sum _{k=0}^{p}{p \choose k}^{2}{{n+2p-k} \choose {2p}}={{n+p} \choose p}^{2}.} Li Shanlan did not present the identity in this way. He presented it in the traditional Chinese algorithmic and rhetorical way.
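Before turning to the proofs, the identity is easy to test numerically. The following Python sketch (an illustration using the standard-library math.comb, not part of Li Shanlan's text) checks it for small n and p:

from math import comb

def lhs(n, p):
    return sum(comb(p, k) ** 2 * comb(n + 2 * p - k, 2 * p)
               for k in range(p + 1))

def rhs(n, p):
    return comb(n + p, p) ** 2

assert all(lhs(n, p) == rhs(n, p) for n in range(12) for p in range(12))
print(lhs(5, 3), rhs(5, 3))  # 3136 3136, i.e. C(8, 3)^2 = 56^2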
== Proofs of the identity == Li Shanlan had not given a proof of the identity in Duoji bilei. The first proof, using differential equations and Legendre polynomials, concepts foreign to Li, was published by Pál Turán in 1936, and the proof appeared in Chinese in Yung Chang's paper published in 1939. Since then at least fifteen different proofs have been found. The following is one of the simplest. The proof begins by expressing {\displaystyle {n \choose q}} as Vandermonde's convolution: {\displaystyle {n \choose q}=\sum _{k=0}^{q}{{n-p} \choose k}{p \choose {q-k}}} Pre-multiplying both sides by {\displaystyle {n \choose p}}, {\displaystyle {n \choose p}{n \choose q}=\sum _{k=0}^{q}{n \choose p}{{n-p} \choose k}{p \choose {q-k}}.} Using the relation {\displaystyle {n \choose p}{{n-p} \choose k}={{p+k} \choose k}{n \choose {p+k}},} the above relation can be transformed to {\displaystyle {n \choose p}{n \choose q}=\sum _{k=0}^{q}{p \choose {q-k}}{{p+k} \choose k}{n \choose {p+k}}.} Next the relation {\displaystyle {p \choose {q-k}}{{p+k} \choose {k}}={q \choose k}{{p+k} \choose {q}}} is used to get {\displaystyle {n \choose p}{n \choose q}=\sum _{k=0}^{q}{q \choose k}{n \choose {p+k}}{{p+k} \choose q}.} Another application of Vandermonde's convolution yields {\displaystyle {{p+k} \choose q}=\sum _{j=0}^{q}{p \choose j}{k \choose {q-j}}} and hence {\displaystyle {n \choose p}{n \choose q}=\sum _{k=0}^{q}{q \choose k}{n \choose {p+k}}\sum _{j=0}^{q}{p \choose j}{k \choose {q-j}}.} Since {\displaystyle {p \choose j}} is independent of k, this can be put in the form {\displaystyle {n \choose p}{n \choose q}=\sum _{j=0}^{q}{p \choose j}\sum _{k=0}^{q}{q \choose k}{n \choose {p+k}}{k \choose {q-j}}.} Next, the result {\displaystyle {q \choose k}{k \choose {q-j}}={q \choose j}{j \choose {q-k}}} gives {\displaystyle {n \choose p}{n \choose q}=\sum _{j=0}^{q}{p \choose j}\sum _{k=0}^{q}{q \choose j}{j \choose {q-k}}{n \choose {p+k}}=\sum _{j=0}^{q}{p \choose j}{q \choose j}\sum _{k=0}^{q}{j \choose {q-k}}{n \choose {p+k}}=\sum _{j=0}^{q}{p \choose j}{q \choose j}{{n+j} \choose {p+q}}.} Setting p = q and replacing j by k, {\displaystyle {n \choose p}^{2}=\sum _{k=0}^{p}{p \choose k}^{2}{{n+k} \choose {2p}}.} Li's identity follows from this by replacing n by n + p and rearranging terms in the resulting expression: {\displaystyle {{n+p} \choose p}^{2}=\sum _{k=0}^{p}{p \choose k}^{2}{{n+2p-k} \choose {2p}}.} == On Duoji bilei == The term duoji denotes a certain traditional Chinese method of computing sums of piles. Much of the mathematics developed in China since the sixteenth century is related to the duoji method. Li Shanlan was one of the greatest exponents of this method, and Duoji bilei is an exposition of his work related to it. Duoji bilei consists of four chapters: Chapter 1 deals with triangular piles, Chapter 2 with finite power series, Chapter 3 with triangular self-multiplying piles, and Chapter 4 with modified triangular piles. == References ==
Wikipedia:Li Yingshi#0
Li Yingshi (traditional Chinese: 李應試; simplified Chinese: 李应试, referred to by Jesuits as Li Paul; fl. ca. 1600) was a Ming Chinese military officer, scientist, astrologer and feng shui practitioner who converted to Christianity. He was converted to Catholicism by Matteo Ricci and Diego de Pantoja, the first two Jesuits to establish themselves in Beijing, and became a zealous Christian. == Early life == Li Yingshi was a member of the Chinese literati class. He commanded a unit of 500 soldiers during the Korean War of 1592–98. He was awarded a lifetime pension by the Wanli government, which was to continue to be paid to his heirs in perpetuity. During peacetime, he studied astrology and geomancy. == Conversion == In the meantime, in 1601 the Jesuits Matteo Ricci and Diego de Pantoja became the first Christian missionaries to be settled in the Ming capital. They appreciated Confucian learning and wanted to establish good relations with the Chinese literati, but strongly disparaged the "ridiculous wizardry" of Chinese occult practices. This is why, perhaps, Ricci described the Jesuits' conversion of Li Yingshi to Christianity, which was accomplished on the Feast of St Matthew (i.e., September 21) of 1602, as nothing less than "extraordinary". Once Li became a Christian, it took him, Ricci, and de Pantoja three days to go through his "beautiful and well stocked library" to identify all books and manuscripts "forbidden by ecclesiastical regulations". The forbidden books and manuscripts, mostly dealing with the "art of divination", were then burned, some at Li's own courtyard and others at the Beijing Mission House, to demonstrate Li's commitment to Christianity. This apparently was not an uncommon practice in Ricci's day: when another celebrated convert, the mathematician Ignatius Qu Taisu (瞿太素; Chiutaiso in Ricci's transcription), became Christian, he also sent his library of books on "the dogma of the sects" to the Nanjing Mission House to be burned, along with book printing plates and non-Christian religious statues. Li Yingshi became a zealous member of the Catholic church. He proselytized among his friends and relatives, and got all his servants to join the church as well. He had a chapel built at his home and had his son study at the Beijing Mission House, so that soon enough the young Li was able to celebrate Mass himself. As a person highly knowledgeable about the "sect of idol worshippers", Li was able to supply the Jesuits with a large amount of information that they found helpful in converting other non-Christians. == See also == Three Pillars of Chinese Catholicism == References == === Citations === === Sources ===
Wikipedia:Lia Bronsard#0
Lia Bronsard (b. 14 March 1963) is a Canadian mathematician and the former president of the Canadian Mathematical Society. She is a professor of mathematics at McMaster University. == Contributions == In her research, she has used geometric flows to model the interface dynamics of reaction–diffusion systems. Other topics in her research include pattern formation, grain boundaries, and vortices in superfluids. == Education and career == Bronsard is originally from Québec. She did her undergraduate studies at the Université de Montréal, graduating in 1983, and earned her PhD in 1988 from New York University under the supervision of Robert V. Kohn. After short-term positions at Brown University, the Institute for Advanced Study, and Carnegie Mellon University, she moved to McMaster in 1992. She was president of the Canadian Mathematical Society for 2014–2016. == Recognition == Bronsard was the 2010 winner of the Krieger–Nelson Prize. In 2018 the Canadian Mathematical Society listed her in their inaugural class of fellows. == Selected publications == Bronsard, Lia; Kohn, Robert V. (1990), "On the slowness of phase boundary motion in one space dimension", Communications on Pure and Applied Mathematics, 43 (8): 983–997, Bibcode:1990STIN...9121480B, doi:10.1002/cpa.3160430804, MR 1075075 Bronsard, Lia; Kohn, Robert V. (1991), "Motion by mean curvature as the singular limit of Ginzburg–Landau dynamics", Journal of Differential Equations, 90 (2): 211–237, Bibcode:1991JDE....90..211B, doi:10.1016/0022-0396(91)90147-2, MR 1101239 Bronsard, Lia; Reitich, Fernando (1993), "On three-phase boundary motion and the singular limit of a vector-valued Ginzburg–Landau equation", Archive for Rational Mechanics and Analysis, 124 (4): 355–379, Bibcode:1993ArRMA.124..355B, doi:10.1007/BF00375607, MR 1240580, S2CID 123291032 == References == == External links == Home page Lia Bronsard publications indexed by Google Scholar
Wikipedia:Lichtenberg figure#0
A Lichtenberg figure (German: Lichtenberg-Figur), or Lichtenberg dust figure, is a branching electric discharge that sometimes appears on the surface or in the interior of insulating materials. Lichtenberg figures are often associated with the progressive deterioration of high-voltage components and equipment. The study of planar Lichtenberg figures along insulating surfaces and of 3D electrical trees within insulating materials often provides engineers with valuable insights for improving the long-term reliability of high-voltage equipment. Lichtenberg figures are now known to occur on or within solids, liquids, and gases during electrical breakdown. They are natural phenomena that exhibit fractal properties. == History == Lichtenberg figures are named after the German physicist Georg Christoph Lichtenberg, who originally discovered and studied them. When they were first discovered, it was thought that their characteristic shapes might help to reveal the nature of positive and negative electric "fluids". In 1777, Lichtenberg built a large electrophorus to generate high-voltage static electricity through induction. After discharging a high-voltage point to the surface of an insulator, he recorded the resulting radial patterns by sprinkling various powdered materials onto the surface. By then pressing blank sheets of paper onto these patterns, Lichtenberg was able to transfer and record these images, thereby discovering the basic principle of modern xerography. This discovery was also the forerunner of the modern-day science of plasma physics. Although Lichtenberg only studied two-dimensional (2D) figures, modern high-voltage researchers study 2D and 3D figures (electrical trees) on, and within, insulating materials. == Formation == Two-dimensional (2D) Lichtenberg figures can be produced by placing a sharp-pointed needle perpendicular to the surface of a non-conducting plate, such as one of resin, ebonite, or glass. The point is positioned very near to, or in contact with, the plate. A source of high voltage, such as a Leyden jar (an early type of capacitor) or a static electricity generator, is applied to the needle, typically through a spark gap. This creates a sudden, small electrical discharge along the surface of the plate, which deposits stranded areas of charge onto its surface. These electrified areas are then tested by sprinkling a mixture of powdered flowers of sulfur and red lead (Pb3O4, or lead tetroxide) onto the plate. Sulfur and red lead exhibit the triboelectric effect: during handling, powdered sulfur particles tend to acquire a negative charge, while powdered red lead particles tend to acquire a positive charge. The negatively charged sulfur particles are electrostatically attracted to, and adhere to, the positively electrified areas of the plate, while the positively charged red lead particles are attracted to the negatively electrified areas. In addition to the distribution of colors thereby produced, there is also a marked difference in the form of the figure, according to the polarity of the electrical charge that was applied to the plate. If the charged areas were positive, a widely extending patch is seen on the plate, consisting of a dense nucleus from which branches radiate in all directions. Negatively charged areas are considerably smaller and have a sharp circular or fan-like boundary entirely devoid of branches. Heinrich Rudolf Hertz employed Lichtenberg dust figures in his seminal work proving Maxwell's electromagnetic wave theories.
If the plate receives a mixture of positive and negative charges as, for example, from an induction coil, a mixed figure results, consisting of a large red central nucleus, corresponding to the negative charge, surrounded by yellow rays, corresponding to the positive charge. The difference between positive and negative figures seems to depend on the presence of air, for the difference tends to disappear when the experiment is conducted in a vacuum. Peter T. Riess (a 19th-century researcher) theorized that the negative electrification of the plate was caused by the friction of the water vapour, etc., driven along the surface by the explosion that accompanies the disruptive discharge at the point. This electrification would favor the spread of a positive, but hinder that of a negative, discharge. It is now known that electrical charges are transferred to the insulator's surface through small spark discharges that occur along the boundary between the gas and the insulator surface. Once transferred to the insulator, these excess charges become temporarily stranded. The shapes of the resulting charge distributions reflect the shape of the spark discharges which, in turn, depend on the high-voltage polarity and the pressure of the gas. Using a higher applied voltage will generate larger-diameter and more branched figures. It is now known that positive Lichtenberg figures have longer, branching structures because long sparks within air can more easily form and propagate from positively charged high-voltage terminals. This property has been used to measure the transient voltage polarity and magnitude of lightning surges on electrical power lines. Another type of 2D Lichtenberg figure can be created when an insulating surface becomes contaminated with semiconducting material. When a high voltage is applied across the surface, leakage currents may cause localized heating and progressive degradation and charring of the underlying material. Over time, branching, tree-like carbonized patterns called electrical trees are formed upon the surface of the insulator. This degradation process is called tracking. If the conductive paths ultimately bridge the insulating space, the result is catastrophic failure of the insulating material. Some artists moisten the surface of wood or cardboard with a semiconductive electrolytic solution and then apply a high voltage across the surface to induce tracking, thereby creating complex carbonized 2D fractal burning on the surface. === Fractal similarities === The branching, self-similar patterns observed in Lichtenberg figures exhibit fractal properties. Lichtenberg figures often develop during the dielectric breakdown of solids, liquids, and even gases. Their appearance and growth appear to be related to a process called diffusion-limited aggregation (DLA); a minimal simulation sketch appears at the end of this section. A useful macroscopic model that combines an electric field with DLA was developed by Niemeyer, Pietronero, and Wiesmann in 1984, and is known as the dielectric breakdown model (DBM). Although the electrical breakdown mechanisms of air and PMMA plastic are considerably different, the branching discharge structures turn out to be related. The branching forms taken by natural lightning also have fractal characteristics. === Constructal law === Lichtenberg figures are examples of natural phenomena that exhibit fractal properties. The emergence and evolution of these and the other tree-like structures that abound in nature are summarized by the constructal law. First published by Duke professor Adrian Bejan in 1996, the constructal law is a first principle of physics that summarizes the tendency in nature to generate configurations (patterns, designs) that facilitate the free movement of the imposed currents that flow through them. The constructal law predicts that the tree-like designs described in this article should emerge and evolve to facilitate the point-to-area movement of the electrical currents flowing through them.
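The following Python sketch (illustrative only; it implements plain diffusion-limited aggregation on a square lattice, not the full Niemeyer-Pietronero-Wiesmann dielectric breakdown model) shows how DLA-style growth produces branched, Lichtenberg-like clusters:

import math
import random

def dla(n_particles=400, seed=1):
    # Walkers are released on a circle outside the cluster and random-walk on
    # the lattice until they touch an occupied site, where they stick.
    random.seed(seed)
    cluster = {(0, 0)}
    r_max = 0
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        theta = random.uniform(0, 2 * math.pi)
        r = r_max + 5
        x, y = round(r * math.cos(theta)), round(r * math.sin(theta))
        while True:
            dx, dy = random.choice(steps)
            x, y = x + dx, y + dy
            if x * x + y * y > (r_max + 20) ** 2:   # strayed too far: relaunch
                theta = random.uniform(0, 2 * math.pi)
                x, y = round(r * math.cos(theta)), round(r * math.sin(theta))
                continue
            if any((x + u, y + v) in cluster for u, v in steps):
                cluster.add((x, y))                 # stick on first contact
                r_max = max(r_max, round(math.hypot(x, y)))
                break
    return cluster

points = dla()
print(len(points), "sites, radius about", max(math.hypot(*p) for p in points))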
== Natural occurrences == Lichtenberg figures are fern-like patterns that may appear on the skin of lightning strike victims and typically disappear within 24 hours. They are also known as keraunographic markings. A lightning strike can also create a large Lichtenberg figure in the grass surrounding the point struck. These are sometimes found on golf courses or in grassy meadows. Branching root-shaped "fulgurite" mineral deposits may also be created as sand and soil are fused into glassy tubes by the intense ionizing electrical current, an effect often seen in chemical vapor deposition when using arc discharge to produce carbon nanotubes. Electrical treeing often occurs in high-voltage equipment prior to causing complete breakdown. Following these Lichtenberg figures within the insulation during post-accident investigation of an insulation failure can be useful in finding the cause of breakdown. From the direction and shape of the trees and their branches, an experienced high-voltage engineer can see exactly the point where the insulation began to break down and, using that knowledge, possibly find the initial cause as well. Broken-down transformers, high-voltage cables, bushings, and other equipment can usefully be investigated in this manner. The insulation is unrolled (in the case of paper insulation) or sliced in thin slices (in the case of solid insulating materials). The results are then sketched or photographed to create a record of the breakdown process. == In insulating materials == Modern Lichtenberg figures can also be created within solid insulating materials, such as acrylic (polymethyl methacrylate, or PMMA) or glass, by injecting them with a beam of high-energy electrons from an electron beam accelerator, such as a LINAC (linear accelerator, a type of particle accelerator). Inside the LINAC, electrons are focused and accelerated to form a beam of high-speed particles. Electrons emerging from the accelerator can have energies up to 25 MeV and are moving at an appreciable fraction (95 – 99+ percent) of the speed of light (relativistic velocities). If the electron beam is aimed towards a thick acrylic specimen, the electrons easily penetrate the surface of the acrylic, rapidly decelerating as they collide with molecules inside the plastic and finally coming to rest deep inside the specimen. Since acrylic is an excellent electrical insulator, these electrons become temporarily trapped within the specimen, forming a plane of excess negative charge. Positive "mirror" charges are attracted by the growing internal negative charge, accumulating on the outer surfaces of the specimen. Under continued irradiation, the amount of trapped charge builds until the effective voltage inside the specimen reaches millions of volts. Once the electrical stress exceeds the dielectric strength of the plastic, some portions suddenly become conductive in a process called dielectric breakdown.
During breakdown, branching tree- or fern-like conductive channels rapidly form and propagate through the plastic, allowing the trapped charge to suddenly rush out in a miniature lightning-like flash and bang. Breakdown of a charged specimen may also be manually triggered by poking the plastic with a pointed conductive object to create a point of excessive voltage stress. During the discharge, the powerful electric sparks leave thousands of branching chains of fractures behind, creating a permanent Lichtenberg figure inside the specimen. Although the internal charge within the specimen is negative, the discharge is initiated from the positively charged exterior surfaces of the specimen, so that the resulting discharge creates a positive Lichtenberg figure. These objects are sometimes called electron trees, beam trees, or lightning trees. As the electrons rapidly decelerate inside the acrylic, they also generate powerful X-rays. Residual electrons and X-rays darken the acrylic by introducing defects (color centers) in a process called solarization. Solarization initially turns acrylic specimens a lime green color, which then changes to an amber color after the specimen has been discharged. The color usually fades over time, and gentle heating, combined with oxygen, accelerates the fading process. == On wood == Lichtenberg figures can also be produced on wood. The type of wood and its grain pattern affect the shape of the Lichtenberg figure produced. By applying a coat of electrolytic solution to the surface of the wood, the resistance of the surface drops considerably. Two electrodes are then placed on the wood and a high voltage is passed across them. Current from the electrodes will cause the surface of the wood to heat up until the electrolyte boils and the wooden surface burns. Because the charred surface of the wood is mildly conductive, the surface of the wood will burn in a pattern outwards from the electrodes. Applying fractal burning to wood can be dangerous; it results in deaths every year from electrocution. == See also == == References == == External links == What are Lichtenberg Figures and how are they created? Lichtenberg Figures, Glass and Gemstones 1927 General Electric Review Article about Lichtenberg Figures Dielectric Breakdown Model (DBM) Trap Lightning in a Block (DIY Lichtenberg Figure at Popular Science) Lichtenbergs in acrylic in 3D (requires QuickTime VR to view) Bibliography of Fulgurites Lichtenberg wood burning with a welder
Wikipedia:Lidia Angeleri Hügel#0
Lidia Angeleri Hügel (born 1960) is an Italian mathematician whose research in abstract algebra and representation theory focuses on tilting theory and its offshoot, silting theory. She is a professor of algebra at the University of Verona. == Education and career == Angeleri Hügel was born in Milan in 1960. She studied mathematics at the Ludwig Maximilian University of Munich, completing a Ph.D. there in 1991 under the supervision of Wolfgang Zimmermann. She continued at the Ludwig Maximilian University of Munich as a postdoctoral researcher from 1992 to 2002, earning a habilitation there in 2000. In 2002, she was a Ramón y Cajal Fellow at the Autonomous University of Barcelona, and briefly held an associate professorship at the University of Insubria, before moving to the University of Verona as an associate professor. She became a full professor at the University of Verona in 2016. At the University of Verona, she served as Vice-Rector for International Relations from 2013 to 2019. == Book == Angeleri Hügel is the co-editor of the Handbook of Tilting Theory (Cambridge University Press, London Mathematical Society Lecture Note Series 332, 2007, with Dieter Happel and Henning Krause). == References == == External links == Lidia Angeleri Hügel publications indexed by Google Scholar
Wikipedia:Lie algebroid#0
In mathematics, a Lie algebroid is a vector bundle {\displaystyle A\rightarrow M} together with a Lie bracket on its space of sections {\displaystyle \Gamma (A)} and a vector bundle morphism {\displaystyle \rho :A\rightarrow TM}, satisfying a Leibniz rule. A Lie algebroid can thus be thought of as a "many-object generalisation" of a Lie algebra. Lie algebroids play a role in the theory of Lie groupoids similar to the role Lie algebras play in the theory of Lie groups: reducing global problems to infinitesimal ones. Indeed, any Lie groupoid gives rise to a Lie algebroid, which is the vertical bundle of the source map restricted to the units. However, unlike Lie algebras, not every Lie algebroid arises from a Lie groupoid. Lie algebroids were introduced in 1967 by Jean Pradines. == Definition and basic concepts == A Lie algebroid is a triple {\displaystyle (A,[\cdot ,\cdot ],\rho )} consisting of: a vector bundle {\displaystyle A} over a manifold {\displaystyle M}; a Lie bracket {\displaystyle [\cdot ,\cdot ]} on its space of sections {\displaystyle \Gamma (A)}; and a morphism of vector bundles {\displaystyle \rho :A\rightarrow TM}, called the anchor, where {\displaystyle TM} is the tangent bundle of {\displaystyle M}; such that the anchor and the bracket satisfy the following Leibniz rule: {\displaystyle [X,fY]=\rho (X)f\cdot Y+f[X,Y]} where {\displaystyle X,Y\in \Gamma (A),f\in C^{\infty }(M)}. Here {\displaystyle \rho (X)f} is the image of {\displaystyle f} via the derivation {\displaystyle \rho (X)}, i.e. the Lie derivative of {\displaystyle f} along the vector field {\displaystyle \rho (X)}. The notation {\displaystyle \rho (X)f\cdot Y} denotes the (point-wise) product between the function {\displaystyle \rho (X)f} and the vector field {\displaystyle Y}. One often writes {\displaystyle A\to M} when the bracket and the anchor are clear from the context; some authors denote Lie algebroids by {\displaystyle A\Rightarrow M}, suggesting a "limit" of Lie groupoids when the arrows denoting source and target become "infinitesimally close". === First properties === It follows from the definition that: for every {\displaystyle x\in M}, the kernel {\displaystyle {\mathfrak {g}}_{x}(A)=\ker(\rho _{x})} is a Lie algebra, called the isotropy Lie algebra at {\displaystyle x}; the kernel {\displaystyle {\mathfrak {g}}(A)=\ker(\rho )} is a (not necessarily locally trivial) bundle of Lie algebras, called the isotropy Lie algebra bundle; the image {\displaystyle \mathrm {Im} (\rho )\subseteq TM} is a singular distribution which is integrable, i.e. it admits maximal immersed submanifolds {\displaystyle {\mathcal {O}}\subseteq M}, called the orbits, satisfying {\displaystyle \mathrm {Im} (\rho _{x})=T_{x}{\mathcal {O}}} for every {\displaystyle x\in {\mathcal {O}}}. Equivalently, orbits can be explicitly described as the sets of points which are joined by A-paths, i.e.
pairs ( a : I → A , γ : I → M ) {\displaystyle (a:I\to A,\gamma :I\to M)} of paths in A {\displaystyle A} and in M {\displaystyle M} such that a ( t ) ∈ A γ ( t ) {\displaystyle a(t)\in A_{\gamma (t)}} and ρ ( a ( t ) ) = γ ′ ( t ) {\displaystyle \rho (a(t))=\gamma '(t)} . Moreover, the anchor map ρ {\displaystyle \rho } descends to a map between sections ρ : Γ ( A ) → X ( M ) {\displaystyle \rho :\Gamma (A)\rightarrow {\mathfrak {X}}(M)} which is a Lie algebra morphism, i.e. ρ ( [ X , Y ] ) = [ ρ ( X ) , ρ ( Y ) ] {\displaystyle \rho ([X,Y])=[\rho (X),\rho (Y)]} for all X , Y ∈ Γ ( A ) {\displaystyle X,Y\in \Gamma (A)} . The property that ρ {\displaystyle \rho } induces a Lie algebra morphism was taken as an axiom in the original definition of a Lie algebroid. The fact that this axiom is redundant, although known from an algebraic point of view already before Pradines' definition, was noticed only much later. === Subalgebroids and ideals === A Lie subalgebroid of a Lie algebroid ( A , [ ⋅ , ⋅ ] , ρ ) {\displaystyle (A,[\cdot ,\cdot ],\rho )} is a vector subbundle A ′ → M ′ {\displaystyle A'\to M'} of the restriction A ∣ M ′ → M ′ {\displaystyle A_{\mid M'}\to M'} such that ρ ∣ A ′ {\displaystyle \rho _{\mid A'}} takes values in T M ′ {\displaystyle TM'} and Γ ( A , A ′ ) := { α ∈ Γ ( A ) ∣ α ∣ M ′ ∈ Γ ( A ′ ) } {\displaystyle \Gamma (A,A'):=\{\alpha \in \Gamma (A)\mid \alpha _{\mid M'}\in \Gamma (A')\}} is a Lie subalgebra of Γ ( A ) {\displaystyle \Gamma (A)} . Clearly, A ′ → M ′ {\displaystyle A'\to M'} admits a unique Lie algebroid structure such that Γ ( A , A ′ ) → Γ ( A ′ ) {\displaystyle \Gamma (A,A')\to \Gamma (A')} is a Lie algebra morphism. With the language introduced below, the inclusion A ′ ↪ A {\displaystyle A'\hookrightarrow A} is a Lie algebroid morphism. A Lie subalgebroid is called wide if M ′ = M {\displaystyle M'=M} . In analogy with the standard definition for Lie algebras, an ideal of a Lie algebroid is a wide Lie subalgebroid I ⊆ A {\displaystyle I\subseteq A} such that Γ ( I ) ⊆ Γ ( A ) {\displaystyle \Gamma (I)\subseteq \Gamma (A)} is a Lie ideal. This notion proved to be very restrictive, since I {\displaystyle I} is forced to be inside the isotropy bundle ker ⁡ ( ρ ) {\displaystyle \ker(\rho )} . For this reason, the more flexible notion of infinitesimal ideal system has been introduced. === Morphisms === A Lie algebroid morphism between two Lie algebroids ( A 1 , [ ⋅ , ⋅ ] A 1 , ρ 1 ) {\displaystyle (A_{1},[\cdot ,\cdot ]_{A_{1}},\rho _{1})} and ( A 2 , [ ⋅ , ⋅ ] A 2 , ρ 2 ) {\displaystyle (A_{2},[\cdot ,\cdot ]_{A_{2}},\rho _{2})} with the same base M {\displaystyle M} is a vector bundle morphism ϕ : A 1 → A 2 {\displaystyle \phi :A_{1}\to A_{2}} which is compatible with the Lie brackets, i.e. ϕ ( [ α , β ] A 1 ) = [ ϕ ( α ) , ϕ ( β ) ] A 2 {\displaystyle \phi ([\alpha ,\beta ]_{A_{1}})=[\phi (\alpha ),\phi (\beta )]_{A_{2}}} for every α , β ∈ Γ ( A 1 ) {\displaystyle \alpha ,\beta \in \Gamma (A_{1})} , and with the anchors, i.e. ρ 2 ∘ ϕ = ρ 1 {\displaystyle \rho _{2}\circ \phi =\rho _{1}} . A similar notion can be formulated for morphisms with different bases, but the compatibility with the Lie brackets becomes more involved. Equivalently, one can require the graph of ϕ : A 1 → A 2 {\displaystyle \phi :A_{1}\to A_{2}} to be a subalgebroid of the direct product A 1 × A 2 {\displaystyle A_{1}\times A_{2}} (introduced below). Lie algebroids together with their morphisms form a category. 
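The defining Leibniz rule can be checked concretely in the prototypical case A = TM with ρ = id (the tangent Lie algebroid, the first of the examples below). The following is a minimal symbolic sketch, assuming M = R² and using sympy; the helper names (bracket, anchor) are illustrative, not a standard API.

```python
# A minimal symbolic sketch (sympy) of the defining Leibniz rule for the
# tangent Lie algebroid A = TM of M = R^2: sections are vector fields, the
# anchor is the identity, and the bracket is the usual Lie bracket of vector
# fields. Helper names are illustrative.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def bracket(X, Y):
    # [X, Y]^a = X^b d_b Y^a - Y^b d_b X^a, with fields given as component tuples
    return tuple(
        sum(X[b]*sp.diff(Y[a], coords[b]) - Y[b]*sp.diff(X[a], coords[b])
            for b in range(2))
        for a in range(2))

def anchor(X, f):
    # rho(X)f: the anchor rho = id acts by the directional derivative
    return sum(X[b]*sp.diff(f, coords[b]) for b in range(2))

X = (y**2, sp.sin(x))   # X = y^2 d/dx + sin(x) d/dy
Y = (x*y, sp.exp(x))    # Y = xy  d/dx + e^x    d/dy
f = x**2 + sp.cos(y)

lhs = bracket(X, tuple(f*Ya for Ya in Y))           # [X, fY]
rhs = tuple(anchor(X, f)*Ya + f*Za                  # rho(X)f . Y + f [X, Y]
            for Ya, Za in zip(Y, bracket(X, Y)))
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```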
== Examples == === Trivial and extreme cases === Given any manifold M {\displaystyle M} , its tangent Lie algebroid is the tangent bundle T M → M {\displaystyle TM\to M} together with the Lie bracket of vector fields and the identity of T M {\displaystyle TM} as an anchor. Given any manifold M {\displaystyle M} , the zero vector bundle M × 0 → M {\displaystyle M\times 0\to M} is a Lie algebroid with zero bracket and anchor. Lie algebroids A → { ∗ } {\displaystyle A\to \{*\}} over a point are the same thing as Lie algebras. More generally, any bundle of Lie algebras is a Lie algebroid with zero anchor and Lie bracket defined pointwise. === Examples from differential geometry === Given a foliation F {\displaystyle {\mathcal {F}}} on M {\displaystyle M} , its foliation algebroid is the associated involutive subbundle F ⊆ T M {\displaystyle {\mathcal {F}}\subseteq TM} , with brackets and anchor induced from the tangent Lie algebroid. Given the action of a Lie algebra g {\displaystyle {\mathfrak {g}}} on a manifold M {\displaystyle M} , its action algebroid is the trivial vector bundle g × M → M {\displaystyle {\mathfrak {g}}\times M\to M} , with anchor given by the Lie algebra action and brackets uniquely determined by the bracket of g {\displaystyle {\mathfrak {g}}} on constant sections M → g {\displaystyle M\to {\mathfrak {g}}} and by the Leibniz identity. Given a principal G-bundle P {\displaystyle P} over a manifold M {\displaystyle M} , its Atiyah algebroid is the Lie algebroid A = T P / G {\displaystyle A=TP/G} fitting in the following short exact sequence: 0 → ker ⁡ ( ρ ) → T P / G → ρ T M → 0. {\displaystyle 0\to \ker(\rho )\to TP/G\xrightarrow {\rho } TM\to 0.} The space of sections of the Atiyah algebroid is the Lie algebra of G {\displaystyle G} -invariant vector fields on P {\displaystyle P} , its isotropy Lie algebra bundle is isomorphic to the adjoint vector bundle P × G g {\displaystyle P\times _{G}{\mathfrak {g}}} , and the right splittings of the sequence above are principal connections on P {\displaystyle P} . Given a vector bundle E → M {\displaystyle E\to M} , its general linear algebroid, denoted by g l ( E ) {\displaystyle {\mathfrak {gl}}(E)} or D e r ( E ) {\displaystyle \mathrm {Der} (E)} , is the vector bundle whose sections are derivations of E {\displaystyle E} , i.e. first-order differential operators Γ ( E ) → Γ ( E ) {\displaystyle \Gamma (E)\to \Gamma (E)} admitting a vector field ρ ( D ) ∈ X ( M ) {\displaystyle \rho (D)\in {\mathfrak {X}}(M)} such that D ( f σ ) = f D ( σ ) + ρ ( D ) ( f ) σ {\displaystyle D(f\sigma )=fD(\sigma )+\rho (D)(f)\sigma } for every f ∈ C ∞ ( M ) , σ ∈ Γ ( E ) {\displaystyle f\in {\mathcal {C}}^{\infty }(M),\sigma \in \Gamma (E)} . The anchor is simply the assignment D ↦ ρ ( D ) {\displaystyle D\mapsto \rho (D)} and the Lie bracket is given by the commutator of differential operators. Given a Poisson manifold ( M , π ) {\displaystyle (M,\pi )} , its cotangent algebroid is the cotangent vector bundle A = T ∗ M {\displaystyle A=T^{*}M} , with Lie bracket [ α , β ] := L π ♯ ( α ) ( β ) − L π ♯ ( β ) ( α ) − d π ( α , β ) {\displaystyle [\alpha ,\beta ]:={\mathcal {L}}_{\pi ^{\sharp }(\alpha )}(\beta )-{\mathcal {L}}_{\pi ^{\sharp }(\beta )}(\alpha )-d\pi (\alpha ,\beta )} and anchor map π ♯ : T ∗ M → T M , α ↦ π ( α , ⋅ ) {\displaystyle \pi ^{\sharp }:T^{*}M\to TM,\alpha \mapsto \pi (\alpha ,\cdot )} . 
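For the cotangent algebroid just described, a standard consistency check is that on exact 1-forms the bracket reduces to [df, dg] = d{f, g}, where {f, g} = π(df, dg) is the Poisson bracket. The following sympy sketch verifies this, assuming M = R² and π = x ∂/∂x ∧ ∂/∂y (on a 2-manifold every bivector field is Poisson); all helper names are illustrative.

```python
# A hedged sympy sketch for the cotangent algebroid of the Poisson manifold
# (R^2, pi = c(x,y) d/dx ^ d/dy): on exact 1-forms the bracket above reduces
# to [df, dg] = d{f, g}. Helper names are illustrative.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
c = x                              # coefficient of pi = c d/dx ^ d/dy
f = sp.Function('f')(x, y)
g = sp.Function('g')(x, y)

def d(fun):                        # differential of a function, as components
    return (sp.diff(fun, x), sp.diff(fun, y))

def sharp(a):                      # pi^#(alpha) = pi(alpha, .) as a vector field
    return (-c*a[1], c*a[0])

def lie_1form(V, b):               # (L_V beta)_a = V^k d_k beta_a + beta_k d_a V^k
    return tuple(
        sum(V[k]*sp.diff(b[a], coords[k]) + b[k]*sp.diff(V[k], coords[a])
            for k in range(2))
        for a in range(2))

def bracket(a, b):                 # the Koszul bracket from the formula above
    pi_ab = c*(a[0]*b[1] - a[1]*b[0])            # pi(alpha, beta)
    La, Lb = lie_1form(sharp(a), b), lie_1form(sharp(b), a)
    return tuple(La[i] - Lb[i] - d(pi_ab)[i] for i in range(2))

poisson_fg = c*(sp.diff(f, x)*sp.diff(g, y) - sp.diff(f, y)*sp.diff(g, x))
lhs, rhs = bracket(d(f), d(g)), d(poisson_fg)
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```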
Given a closed 2-form ω ∈ Ω 2 ( M ) {\displaystyle \omega \in \Omega ^{2}(M)} , the vector bundle A ω := T M × R → M {\displaystyle A_{\omega }:=TM\times \mathbb {R} \to M} is a Lie algebroid with anchor given by the projection onto the first component and Lie bracket [ ( X , f ) , ( Y , g ) ] := ( [ X , Y ] , L X ( g ) − L Y ( f ) − ω ( X , Y ) ) . {\displaystyle [(X,f),(Y,g)]:={\Big (}[X,Y],{\mathcal {L}}_{X}(g)-{\mathcal {L}}_{Y}(f)-\omega (X,Y){\Big )}.} Actually, the bracket above can be defined for any 2-form ω {\displaystyle \omega } , but A ω {\displaystyle A_{\omega }} is a Lie algebroid if and only if ω {\displaystyle \omega } is closed. === Constructions from other Lie algebroids === Given any Lie algebroid ( A → M , [ ⋅ , ⋅ ] , ρ ) {\displaystyle (A\to M,[\cdot ,\cdot ],\rho )} , there is a Lie algebroid ( T A → T M , [ ⋅ , ⋅ ] , ρ ) {\displaystyle (TA\to TM,[\cdot ,\cdot ],\rho )} , called its tangent algebroid, obtained by considering the tangent bundle of A {\displaystyle A} and M {\displaystyle M} and the differential of the anchor. Given any Lie algebroid ( A → M , [ ⋅ , ⋅ ] A , ρ A ) {\displaystyle (A\to M,[\cdot ,\cdot ]_{A},\rho _{A})} , there is a Lie algebroid ( J k A → M , [ ⋅ , ⋅ ] , ρ ) {\displaystyle (J^{k}A\to M,[\cdot ,\cdot ],\rho )} , called its k-jet algebroid, obtained by considering the k-jet bundle of A → M {\displaystyle A\to M} , with Lie bracket uniquely defined by [ j k α , j k β ] := j k [ α , β ] A {\displaystyle [j^{k}\alpha ,j^{k}\beta ]:=j^{k}[\alpha ,\beta ]_{A}} and anchor ρ ( j x k α ) := ρ A ( α ( x ) ) {\displaystyle \rho (j_{x}^{k}\alpha ):=\rho _{A}(\alpha (x))} . Given two Lie algebroids A 1 → M 1 {\displaystyle A_{1}\to M_{1}} and A 2 → M 2 {\displaystyle A_{2}\to M_{2}} , their direct product is the unique Lie algebroid A 1 × A 2 → M 1 × M 2 {\displaystyle A_{1}\times A_{2}\to M_{1}\times M_{2}} with anchor ( α 1 , α 2 ) ↦ ρ 1 ( α 1 ) ⊕ ρ 2 ( α 2 ) ∈ T M 1 ⊕ T M 2 ≅ T ( M 1 × M 2 ) , {\displaystyle (\alpha _{1},\alpha _{2})\mapsto \rho _{1}(\alpha _{1})\oplus \rho _{2}(\alpha _{2})\in TM_{1}\oplus TM_{2}\cong T(M_{1}\times M_{2}),} and such that Γ ( A 1 ) ⊕ Γ ( A 2 ) → Γ ( A 1 × A 2 ) , α 1 ⊕ α 2 ↦ p r M 1 ∗ α 1 + p r M 2 ∗ α 2 {\displaystyle \Gamma (A_{1})\oplus \Gamma (A_{2})\to \Gamma (A_{1}\times A_{2}),\alpha _{1}\oplus \alpha _{2}\mapsto \mathrm {pr} _{M_{1}}^{*}\alpha _{1}+\mathrm {pr} _{M_{2}}^{*}\alpha _{2}} is a Lie algebra morphism. Given a Lie algebroid ( A → M , [ ⋅ , ⋅ ] A , ρ A ) {\displaystyle (A\to M,[\cdot ,\cdot ]_{A},\rho _{A})} and a map f : M ′ → M {\displaystyle f:M'\to M} whose differential is transverse to the anchor map ρ : A → T M {\displaystyle \rho :A\to TM} (for instance, it is enough for f {\displaystyle f} to be a surjective submersion), the pullback algebroid is the unique Lie algebroid f ! A → M ′ {\displaystyle f^{!}A\to M'} , with f ! A := T M ′ × T M A → M ′ {\displaystyle f^{!}A:=TM'\times _{TM}A\to M'} the pullback vector bundle, and ρ f ! A : f ! A → T M ′ {\displaystyle \rho _{f^{!}A}:f^{!}A\to TM'} the projection on the first component, such that f ! A → A {\displaystyle f^{!}A\to A} is a Lie algebroid morphism. == Important classes of Lie algebroids == === Totally intransitive Lie algebroids === A Lie algebroid is called totally intransitive if the anchor map ρ : A → T M {\displaystyle \rho :A\to TM} is zero. Bundles of Lie algebras (hence also Lie algebras) are totally intransitive. 
This actually exhausts the list of totally intransitive Lie algebroids: indeed, if A {\displaystyle A} is totally intransitive, it must coincide with its isotropy Lie algebra bundle. === Transitive Lie algebroids === A Lie algebroid is called transitive if the anchor map ρ : A → T M {\displaystyle \rho :A\to TM} is surjective. As a consequence: there is a short exact sequence 0 → ker ⁡ ( ρ ) → A → ρ T M → 0 ; {\displaystyle 0\to \ker(\rho )\to A\xrightarrow {\rho } TM\to 0;} a right-splitting of ρ {\displaystyle \rho } defines a connection on the Lie algebra bundle ker ⁡ ( ρ ) {\displaystyle \ker(\rho )} ; the isotropy bundle ker ⁡ ( ρ ) {\displaystyle \ker(\rho )} is locally trivial (as a bundle of Lie algebras); the pullback of A {\displaystyle A} exists for every f : M ′ → M {\displaystyle f:M'\to M} . The prototypical examples of transitive Lie algebroids are Atiyah algebroids. For instance: tangent algebroids T M {\displaystyle TM} are trivially transitive (indeed, they are the Atiyah algebroids of the principal { e } {\displaystyle \{e\}} -bundle M → M {\displaystyle M\to M} ); Lie algebras g {\displaystyle {\mathfrak {g}}} are trivially transitive (indeed, they are the Atiyah algebroids of the principal G {\displaystyle G} -bundle G → ∗ {\displaystyle G\to *} , for G {\displaystyle G} an integration of g {\displaystyle {\mathfrak {g}}} ); general linear algebroids g l ( E ) {\displaystyle {\mathfrak {gl}}(E)} are transitive (indeed, they are the Atiyah algebroids of the frame bundle F r ( E ) → M {\displaystyle Fr(E)\to M} ). In analogy to Atiyah algebroids, an arbitrary transitive Lie algebroid is also called an abstract Atiyah sequence, and its isotropy algebra bundle ker ⁡ ( ρ ) {\displaystyle \ker(\rho )} is also called the adjoint bundle. However, it is important to stress that not every transitive Lie algebroid is an Atiyah algebroid. For instance: pullbacks of transitive algebroids are transitive; cotangent algebroids T ∗ M {\displaystyle T^{*}M} associated to Poisson manifolds ( M , π ) {\displaystyle (M,\pi )} are transitive if and only if the Poisson structure π {\displaystyle \pi } is non-degenerate; Lie algebroids A ω {\displaystyle A_{\omega }} defined by closed 2-forms are transitive. These examples are very relevant in the theory of integration of Lie algebroids (see below): while any Atiyah algebroid is integrable (to a gauge groupoid), not every transitive Lie algebroid is integrable. === Regular Lie algebroids === A Lie algebroid is called regular if the anchor map ρ : A → T M {\displaystyle \rho :A\to TM} is of constant rank. As a consequence the image of ρ {\displaystyle \rho } defines a regular foliation on M {\displaystyle M} ; the restriction of A {\displaystyle A} over each leaf O ⊆ M {\displaystyle {\mathcal {O}}\subseteq M} is a transitive Lie algebroid. For instance: any transitive Lie algebroid is regular (the anchor has maximal rank); any totally intransitive Lie algebroid is regular (the anchor has zero rank); foliation algebroids are always regular; cotangent algebroids T ∗ M {\displaystyle T^{*}M} associated to Poisson manifolds ( M , π ) {\displaystyle (M,\pi )} are regular if and only if the Poisson structure π {\displaystyle \pi } is regular. 
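The role played by the closedness of ω in the example A_ω above can be made concrete: the bracket on TM × R is defined for any 2-form, but its Jacobiator measures dω. The following sympy sketch works on M = R³ with ω = w dx∧dy and coordinate sections; all helper names are illustrative.

```python
# A sympy sketch checking, on M = R^3, that the bracket on A_omega = TM x R
# (defined above for a 2-form omega = w dx^dy) satisfies the Jacobi identity
# when omega is closed and fails when it is not. Helper names illustrative.
import sympy as sp

x, y, z = coords = sp.symbols('x y z')

def vf_bracket(X, Y):
    return tuple(sum(X[b]*sp.diff(Y[a], coords[b]) - Y[b]*sp.diff(X[a], coords[b])
                     for b in range(3)) for a in range(3))

def vf_apply(X, f):
    return sum(X[b]*sp.diff(f, coords[b]) for b in range(3))

def make_bracket(w):
    # [(X,f),(Y,g)] = ([X,Y], X(g) - Y(f) - omega(X,Y)) for omega = w dx^dy
    def br(a, b):
        (X, f), (Y, g) = a, b
        return (vf_bracket(X, Y),
                vf_apply(X, g) - vf_apply(Y, f) - w*(X[0]*Y[1] - X[1]*Y[0]))
    return br

def jacobiator(br, a, b, c):
    t1, t2, t3 = br(br(a, b), c), br(br(b, c), a), br(br(c, a), b)
    vec = tuple(sp.simplify(t1[0][i] + t2[0][i] + t3[0][i]) for i in range(3))
    return vec, sp.simplify(t1[1] + t2[1] + t3[1])

e1 = ((sp.Integer(1), sp.Integer(0), sp.Integer(0)), sp.Integer(0))  # (d/dx, 0)
e2 = ((sp.Integer(0), sp.Integer(1), sp.Integer(0)), sp.Integer(0))  # (d/dy, 0)
e3 = ((sp.Integer(0), sp.Integer(0), sp.Integer(1)), sp.Integer(0))  # (d/dz, 0)

print(jacobiator(make_bracket(z), e1, e2, e3))              # d(omega) != 0 -> ((0,0,0), 1)
print(jacobiator(make_bracket(sp.Integer(1)), e1, e2, e3))  # omega closed  -> ((0,0,0), 0)
```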
== Further related concepts == === Actions === An action of a Lie algebroid A → M {\displaystyle A\to M} on a manifold P along a smooth map μ : P → M {\displaystyle \mu :P\to M} consists of a Lie algebra morphism a : Γ ( A ) → X ( P ) {\displaystyle a:\Gamma (A)\to {\mathfrak {X}}(P)} such that, for every p ∈ P , X ∈ Γ ( A ) , f ∈ C ∞ ( M ) {\displaystyle p\in P,X\in \Gamma (A),f\in {\mathcal {C}}^{\infty }(M)} , d p μ ( a ( X ) p ) = ρ μ ( p ) ( X μ ( p ) ) , a ( f ⋅ X ) = ( f ∘ μ ) ⋅ a ( X ) . {\displaystyle d_{p}\mu (a(X)_{p})=\rho _{\mu (p)}(X_{\mu (p)}),\quad a(f\cdot X)=(f\circ \mu )\cdot a(X).} Of course, when A = g {\displaystyle A={\mathfrak {g}}} , both the anchor A → { ∗ } {\displaystyle A\to \{*\}} and the map P → { ∗ } {\displaystyle P\to \{*\}} must be trivial; therefore both conditions are empty, and we recover the standard notion of action of a Lie algebra on a manifold. === Connections === Given a Lie algebroid A → M {\displaystyle A\to M} , an A-connection on a vector bundle E → M {\displaystyle E\to M} consists of an R {\displaystyle \mathbb {R} } -bilinear map ∇ : Γ ( A ) × Γ ( E ) → Γ ( E ) , ( α , s ) ↦ ∇ α ( s ) {\displaystyle \nabla :\Gamma (A)\times \Gamma (E)\to \Gamma (E),\quad (\alpha ,s)\mapsto \nabla _{\alpha }(s)} which is C ∞ ( M ) {\displaystyle {\mathcal {C}}^{\infty }(M)} -linear in the first factor and satisfies the following Leibniz rule: ∇ α ( f s ) = f ∇ α ( s ) + L ρ ( α ) ( f ) s {\displaystyle \nabla _{\alpha }(fs)=f\nabla _{\alpha }(s)+{\mathcal {L}}_{\rho (\alpha )}(f)s} for every α ∈ Γ ( A ) , s ∈ Γ ( E ) , f ∈ C ∞ ( M ) {\displaystyle \alpha \in \Gamma (A),s\in \Gamma (E),f\in {\mathcal {C}}^{\infty }(M)} , where L ρ ( α ) {\displaystyle {\mathcal {L}}_{\rho (\alpha )}} denotes the Lie derivative with respect to the vector field ρ ( α ) {\displaystyle \rho (\alpha )} . The curvature of an A-connection ∇ {\displaystyle \nabla } is the C ∞ ( M ) {\displaystyle {\mathcal {C}}^{\infty }(M)} -bilinear map R ∇ : Γ ( A ) × Γ ( A ) → H o m ( E , E ) , ( α , β ) ↦ ∇ α ∇ β − ∇ β ∇ α − ∇ [ α , β ] , {\displaystyle R_{\nabla }:\Gamma (A)\times \Gamma (A)\to \mathrm {Hom} (E,E),\quad (\alpha ,\beta )\mapsto \nabla _{\alpha }\nabla _{\beta }-\nabla _{\beta }\nabla _{\alpha }-\nabla _{[\alpha ,\beta ]},} and ∇ {\displaystyle \nabla } is called flat if R ∇ = 0 {\displaystyle R_{\nabla }=0} . Of course, when A = T M {\displaystyle A=TM} , we recover the standard notion of connection on a vector bundle, as well as those of curvature and flatness. === Representations === A representation of a Lie algebroid A → M {\displaystyle A\to M} is a vector bundle E → M {\displaystyle E\to M} together with a flat A-connection ∇ {\displaystyle \nabla } . Equivalently, a representation ( E , ∇ ) {\displaystyle (E,\nabla )} is a Lie algebroid morphism A → g l ( E ) {\displaystyle A\to {\mathfrak {gl}}(E)} . The set R e p ( A ) {\displaystyle \mathrm {Rep} (A)} of isomorphism classes of representations of a Lie algebroid A → M {\displaystyle A\to M} has a natural semiring structure, with direct sums and tensor products of vector bundles. Examples include the following: When A = g {\displaystyle A={\mathfrak {g}}} , an A {\displaystyle A} -connection simplifies to a linear map g → g l ( V ) {\displaystyle {\mathfrak {g}}\to {\mathfrak {gl}}(V)} and the flatness condition makes it into a Lie algebra morphism, therefore we recover the standard notion of representation of a Lie algebra. 
When A = g × M → M {\displaystyle A={\mathfrak {g}}\times M\to M} and V {\displaystyle V} is a representation of the Lie algebra g {\displaystyle {\mathfrak {g}}} , the trivial vector bundle V × M → M {\displaystyle V\times M\to M} is automatically a representation of A {\displaystyle A} . Representations of the tangent algebroid A = T M {\displaystyle A=TM} are vector bundles endowed with flat connections. Every Lie algebroid A → M {\displaystyle A\to M} has a natural representation on the line bundle Q A := ∧ t o p A ⊗ ∧ t o p T ∗ M → M {\displaystyle Q_{A}:=\wedge ^{top}A\otimes \wedge ^{top}T^{*}M\to M} , i.e. the tensor product between the determinant line bundles of A {\displaystyle A} and of T ∗ M {\displaystyle T^{*}M} . One can associate a cohomology class in H 1 ( A , Q A ) {\displaystyle H^{1}(A,Q_{A})} (see below) known as the modular class of the Lie algebroid. For the cotangent algebroid T ∗ M → M {\displaystyle T^{*}M\to M} associated to a Poisson manifold ( M , π ) {\displaystyle (M,\pi )} one recovers the modular class of π {\displaystyle \pi } . Note that an arbitrary Lie groupoid does not have a canonical representation on its Lie algebroid playing the role of the adjoint representation of a Lie group on its Lie algebra. However, this becomes possible if one allows the more general notion of representation up to homotopy. === Lie algebroid cohomology === Consider a Lie algebroid A → M {\displaystyle A\to M} and a representation ( E , ∇ ) {\displaystyle (E,\nabla )} . Denoting by Ω n ( A , E ) := Γ ( ∧ n A ∗ ⊗ E ) {\displaystyle \Omega ^{n}(A,E):=\Gamma (\wedge ^{n}A^{*}\otimes E)} the space of n {\displaystyle n} -differential forms on A {\displaystyle A} with values in the vector bundle E {\displaystyle E} , one can define a differential d n : Ω n ( A , E ) → Ω n + 1 ( A , E ) {\displaystyle d^{n}:\Omega ^{n}(A,E)\to \Omega ^{n+1}(A,E)} with the following Koszul-like formula: d ω ( α 0 , … , α n ) := ∑ i = 0 n ( − 1 ) i ∇ α i ( ω ( α 0 , … , α i ^ , … , α n ) ) − ∑ i < j n ( − 1 ) i + j + 1 ω ( [ α i , α j ] , α 0 , … , α i ^ , … , α j ^ , … , α n ) {\displaystyle d\omega (\alpha _{0},\ldots ,\alpha _{n}):=\sum _{i=0}^{n}(-1)^{i}\nabla _{\alpha _{i}}{\big (}\omega (\alpha _{0},\ldots ,{\widehat {\alpha _{i}}},\ldots ,\alpha _{n}){\big )}-\sum _{i<j}^{n}(-1)^{i+j+1}\omega ([\alpha _{i},\alpha _{j}],\alpha _{0},\ldots ,{\widehat {\alpha _{i}}},\ldots ,{\widehat {\alpha _{j}}},\ldots ,\alpha _{n})} Thanks to the flatness of ∇ {\displaystyle \nabla } , ( Ω n ( A , E ) , d n ) {\displaystyle (\Omega ^{n}(A,E),d^{n})} becomes a cochain complex and its cohomology, denoted by H ∗ ( A , E ) {\displaystyle H^{*}(A,E)} , is called the Lie algebroid cohomology of A {\displaystyle A} with coefficients in the representation ( E , ∇ ) {\displaystyle (E,\nabla )} . This general definition recovers well-known cohomology theories: The cohomology of a Lie algebroid g → { ∗ } {\displaystyle {\mathfrak {g}}\to \{*\}} coincides with the Chevalley-Eilenberg cohomology of g {\displaystyle {\mathfrak {g}}} as a Lie algebra. The cohomology of a tangent Lie algebroid T M → M {\displaystyle TM\to M} coincides with the de Rham cohomology of M {\displaystyle M} . The cohomology of a foliation Lie algebroid F → M {\displaystyle {\mathcal {F}}\to M} coincides with the leafwise cohomology of the foliation F {\displaystyle {\mathcal {F}}} . 
The cohomology of the cotangent Lie algebroid T ∗ M {\displaystyle T^{*}M} associated to a Poisson structure π {\displaystyle \pi } coincides with the Poisson cohomology of π {\displaystyle \pi } . == Lie groupoid-Lie algebroid correspondence == The standard construction which associates a Lie algebra to a Lie group generalises to this setting: to every Lie groupoid G ⇉ M {\displaystyle G\rightrightarrows M} one can canonically associate a Lie algebroid L i e ( G ) {\displaystyle \mathrm {Lie} (G)} defined as follows: the vector bundle is L i e ( G ) = A := u ∗ T s G {\displaystyle \mathrm {Lie} (G)=A:=u^{*}T^{s}G} , where T s G ⊆ T G {\displaystyle T^{s}G\subseteq TG} is the vertical bundle of the source map s : G → M {\displaystyle s:G\to M} and u : M → G {\displaystyle u:M\to G} is the groupoid unit map; the sections of A {\displaystyle A} are identified with the right-invariant vector fields on G {\displaystyle G} , so that Γ ( A ) {\displaystyle \Gamma (A)} inherits a Lie bracket; the anchor map is the differential ρ := d t ∣ A : A → T M {\displaystyle \rho :=dt_{\mid A}:A\to TM} of the target map t : G → M {\displaystyle t:G\to M} . Of course, a symmetric construction arises when swapping the roles of the source and target maps, and replacing right- with left-invariant vector fields; an isomorphism between the two resulting Lie algebroids is given by the differential of the inverse map i : G → G {\displaystyle i:G\to G} . The flow of a section α ∈ Γ ( A ) {\displaystyle \alpha \in \Gamma (A)} is the 1-parameter bisection ϕ α ϵ ∈ B i s ( G ) {\displaystyle \phi _{\alpha }^{\epsilon }\in \mathrm {Bis} (G)} , defined by ϕ α ϵ ( x ) := ϕ α ~ ϵ ( 1 x ) {\displaystyle \phi _{\alpha }^{\epsilon }(x):=\phi _{\tilde {\alpha }}^{\epsilon }(1_{x})} , where ϕ α ~ ϵ ∈ D i f f ( G ) {\displaystyle \phi _{\tilde {\alpha }}^{\epsilon }\in \mathrm {Diff} (G)} is the flow of the corresponding right-invariant vector field α ~ ∈ X ( G ) {\displaystyle {\tilde {\alpha }}\in {\mathfrak {X}}(G)} . This allows one to define the analogue of the exponential map for Lie groups as exp : Γ ( A ) → B i s ( G ) , exp ⁡ ( α ) ( x ) := ϕ α 1 ( x ) {\displaystyle \exp :\Gamma (A)\to \mathrm {Bis} (G),\exp(\alpha )(x):=\phi _{\alpha }^{1}(x)} . === Lie functor === The mapping G ↦ L i e ( G ) {\displaystyle G\mapsto \mathrm {Lie} (G)} sending a Lie groupoid to a Lie algebroid is actually part of a categorical construction. Indeed, any Lie groupoid morphism ϕ : G 1 → G 2 {\displaystyle \phi :G_{1}\to G_{2}} can be differentiated to a morphism d ϕ ∣ L i e ( G 1 ) : L i e ( G 1 ) → L i e ( G 2 ) {\displaystyle d\phi _{\mid \mathrm {Lie} (G_{1})}:\mathrm {Lie} (G_{1})\to \mathrm {Lie} (G_{2})} between the associated Lie algebroids. This construction defines a functor from the category of Lie groupoids and their morphisms to the category of Lie algebroids and their morphisms, called the Lie functor. === Structures and properties induced from groupoids to algebroids === Let G ⇉ M {\displaystyle G\rightrightarrows M} be a Lie groupoid and ( A → M , [ ⋅ , ⋅ ] , ρ ) {\displaystyle (A\to M,[\cdot ,\cdot ],\rho )} its associated Lie algebroid. 
Then: the isotropy algebras g x ( A ) {\displaystyle {\mathfrak {g}}_{x}(A)} are the Lie algebras of the isotropy groups G x {\displaystyle G_{x}} ; the orbits of G {\displaystyle G} coincide with the orbits of A {\displaystyle A} ; G {\displaystyle G} is transitive and ( s , t ) : G → M × M {\displaystyle (s,t):G\to M\times M} is a submersion if and only if A {\displaystyle A} is transitive; an action m : G × M P → P {\displaystyle m:G\times _{M}P\to P} of G {\displaystyle G} on P → M {\displaystyle P\to M} induces an action a : Γ ( A ) → X ( P ) {\displaystyle a:\Gamma (A)\to {\mathfrak {X}}(P)} of A {\displaystyle A} (called the infinitesimal action), defined by a ( α ) p := d 1 μ ( p ) m ( ⋅ , p ) ( α μ ( p ) ) = d ( 1 μ ( p ) , p ) m ( α μ ( p ) , 0 ) {\displaystyle a(\alpha )_{p}:=d_{1_{\mu (p)}}m(\cdot ,p)(\alpha _{\mu (p)})=d_{(1_{\mu (p)},p)}m(\alpha _{\mu (p)},0)} ; a representation of G {\displaystyle G} on a vector bundle E → M {\displaystyle E\to M} induces a representation ∇ {\displaystyle \nabla } of A {\displaystyle A} on E → M {\displaystyle E\to M} , defined by ∇ α σ ( x ) := d d ϵ ∣ ϵ = 0 ( ϕ α ϵ ( x ) ) − 1 ⋅ σ ( t ( ϕ α ϵ ( x ) ) ) {\displaystyle \nabla _{\alpha }\sigma (x):={\frac {d}{d\epsilon }}_{\mid \epsilon =0}{\Big (}\phi _{\alpha }^{\epsilon }(x){\Big )}^{-1}\cdot \sigma {\Big (}t(\phi _{\alpha }^{\epsilon }(x)){\Big )}} Moreover, there is a morphism of semirings R e p ( G ) → R e p ( A ) {\displaystyle \mathrm {Rep} (G)\to \mathrm {Rep} (A)} , which becomes an isomorphism if G {\displaystyle G} is source-simply connected. Finally, there is a morphism V E k : H d k ( G , E ) → H k ( A , E ) {\displaystyle VE^{k}:H_{d}^{k}(G,E)\to H^{k}(A,E)} , called the Van Est morphism, from the differentiable cohomology of G {\displaystyle G} with coefficients in some representation on E {\displaystyle E} to the cohomology of A {\displaystyle A} with coefficients in the induced representation on E {\displaystyle E} . Moreover, if the s {\displaystyle s} -fibres of G {\displaystyle G} are homologically n {\displaystyle n} -connected, then V E k {\displaystyle VE^{k}} is an isomorphism for k ≤ n {\displaystyle k\leq n} , and is injective for k = n + 1 {\displaystyle k=n+1} . 
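The infinitesimal action appearing above can be made concrete for linear actions. For the action of GL(2, R) on R², with one common sign convention for left actions, the induced map sends a matrix α to the linear vector field p ↦ −αp, and one checks that it is a Lie algebra morphism. A hedged sympy sketch (the sign and all helper names are conventions chosen here for illustration):

```python
# A small sympy check of the infinitesimal (Lie algebra) action underlying an
# action algebroid: for the linear action of GL(2,R) on R^2, the assignment
# a |-> (vector field p |-> -a p) is a Lie algebra morphism gl(2,R) -> X(R^2).
# The minus sign is one common convention for left actions.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def fundamental_vf(a):
    # the linear vector field p |-> -a p, as a component tuple
    p = sp.Matrix([x, y])
    v = -a*p
    return (v[0], v[1])

def vf_bracket(X, Y):
    return tuple(sum(X[b]*sp.diff(Y[i], coords[b]) - Y[b]*sp.diff(X[i], coords[b])
                     for b in range(2)) for i in range(2))

a = sp.Matrix(2, 2, list(sp.symbols('a0:4')))   # generic matrices in gl(2, R)
b = sp.Matrix(2, 2, list(sp.symbols('b0:4')))

lhs = vf_bracket(fundamental_vf(a), fundamental_vf(b))
rhs = fundamental_vf(a*b - b*a)                 # image of the commutator [a, b]
assert all(sp.expand(l - r) == 0 for l, r in zip(lhs, rhs))
```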
=== Examples === The Lie algebroid of a Lie group G ⇉ { ∗ } {\displaystyle G\rightrightarrows \{*\}} is the Lie algebra g → { ∗ } {\displaystyle {\mathfrak {g}}\to \{*\}} . The Lie algebroid of both the pair groupoid M × M ⇉ M {\displaystyle M\times M\rightrightarrows M} and the fundamental groupoid Π 1 ( M ) ⇉ M {\displaystyle \Pi _{1}(M)\rightrightarrows M} is the tangent algebroid T M → M {\displaystyle TM\to M} . The Lie algebroid of the unit groupoid u ( M ) ⇉ M {\displaystyle u(M)\rightrightarrows M} is the zero algebroid M × 0 → M {\displaystyle M\times 0\to M} . The Lie algebroid of a Lie group bundle G ⇉ M {\displaystyle G\rightrightarrows M} is the Lie algebra bundle A → M {\displaystyle A\to M} . The Lie algebroid of an action groupoid G × M ⇉ M {\displaystyle G\times M\rightrightarrows M} is the action algebroid g × M → M {\displaystyle {\mathfrak {g}}\times M\to M} . The Lie algebroid of a gauge groupoid ( P × P ) / G ⇉ M {\displaystyle (P\times P)/G\rightrightarrows M} is the Atiyah algebroid T P / G → M {\displaystyle TP/G\to M} . The Lie algebroid of a general linear groupoid G L ( E ) ⇉ M {\displaystyle GL(E)\rightrightarrows M} is the general linear algebroid g l ( E ) → M {\displaystyle {\mathfrak {gl}}(E)\to M} . The Lie algebroid of both the holonomy groupoid H o l ( F ) ⇉ M {\displaystyle \mathrm {Hol} ({\mathcal {F}})\rightrightarrows M} and the monodromy groupoid Π 1 ( F ) ⇉ M {\displaystyle \Pi _{1}({\mathcal {F}})\rightrightarrows M} is the foliation algebroid F → M {\displaystyle {\mathcal {F}}\to M} . The Lie algebroid of a tangent groupoid T G ⇉ T M {\displaystyle TG\rightrightarrows TM} is the tangent algebroid T A → T M {\displaystyle TA\to TM} , for A = L i e ( G ) {\displaystyle A=\mathrm {Lie} (G)} . The Lie algebroid of a jet groupoid J k G ⇉ M {\displaystyle J^{k}G\rightrightarrows M} is the jet algebroid J k A → M {\displaystyle J^{k}A\to M} , for A = L i e ( G ) {\displaystyle A=\mathrm {Lie} (G)} . === Detailed example 1 === Let us describe the Lie algebroid associated to the pair groupoid G := M × M {\displaystyle G:=M\times M} . Since the source map is s : G → M : ( p , q ) ↦ q {\displaystyle s:G\to M:(p,q)\mapsto q} , the s {\displaystyle s} -fibers are of the form M × { q } {\displaystyle M\times \{q\}} , so that the vertical space is T s G = ⋃ q ∈ M T M × { q } ⊂ T M × T M {\displaystyle T^{s}G=\bigcup _{q\in M}TM\times \{q\}\subset TM\times TM} . Using the unit map u : M → G : q ↦ ( q , q ) {\displaystyle u:M\to G:q\mapsto (q,q)} , one obtains the vector bundle A := u ∗ T s G = ⋃ q ∈ M T q M = T M {\displaystyle A:=u^{*}T^{s}G=\bigcup _{q\in M}T_{q}M=TM} . The extension of sections X ∈ Γ ( A ) {\displaystyle X\in \Gamma (A)} to right-invariant vector fields X ~ ∈ X ( G ) {\displaystyle {\tilde {X}}\in {\mathfrak {X}}(G)} is simply X ~ ( p , q ) = X ( p ) ⊕ 0 {\displaystyle {\tilde {X}}(p,q)=X(p)\oplus 0} and the extension of a smooth function f {\displaystyle f} from M {\displaystyle M} to a right-invariant function on G {\displaystyle G} is f ~ ( p , q ) = f ( q ) {\displaystyle {\tilde {f}}(p,q)=f(q)} . Therefore, the bracket on A {\displaystyle A} is just the Lie bracket of tangent vector fields and the anchor map is just the identity. === Detailed example 2 === Consider the (action) Lie groupoid R 2 × U ( 1 ) ⇉ R 2 {\displaystyle \mathbb {R} ^{2}\times U(1)\rightrightarrows \mathbb {R} ^{2}} where the target map (i.e. 
the right action of U ( 1 ) {\displaystyle U(1)} on R 2 {\displaystyle \mathbb {R} ^{2}} ) is ( ( x , y ) , e i θ ) ↦ [ cos ⁡ ( θ ) − sin ⁡ ( θ ) sin ⁡ ( θ ) cos ⁡ ( θ ) ] [ x y ] . {\displaystyle ((x,y),e^{i\theta })\mapsto {\begin{bmatrix}\cos(\theta )&-\sin(\theta )\\\sin(\theta )&\cos(\theta )\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}.} The s {\displaystyle s} -fibre over a point p = ( x , y ) {\displaystyle p=(x,y)} is a copy of U ( 1 ) {\displaystyle U(1)} , so that u ∗ ( T s ( R 2 × U ( 1 ) ) ) {\displaystyle u^{*}(T^{s}(\mathbb {R} ^{2}\times U(1)))} is the trivial vector bundle R 2 × u ( 1 ) ≅ R 2 × R → R 2 {\displaystyle \mathbb {R} ^{2}\times {\mathfrak {u}}(1)\cong \mathbb {R} ^{2}\times \mathbb {R} \to \mathbb {R} ^{2}} . Since its anchor map ρ : R 2 × R → T R 2 {\displaystyle \rho :\mathbb {R} ^{2}\times \mathbb {R} \to T\mathbb {R} ^{2}} is given by the differential of the target map, there are two cases for the isotropy Lie algebras, corresponding to the fibers of T t ( R 2 × U ( 1 ) ) {\displaystyle T^{t}(\mathbb {R} ^{2}\times U(1))} : t − 1 ( 0 ) ≅ U ( 1 ) t − 1 ( p ) ≅ { ( a , u ) ∈ R 2 × U ( 1 ) : u a = p } {\displaystyle {\begin{aligned}t^{-1}(0)\cong &U(1)\\t^{-1}(p)\cong &\{(a,u)\in \mathbb {R} ^{2}\times U(1):ua=p\}\end{aligned}}} This demonstrates that the isotropy Lie algebra over the origin is u ( 1 ) ≅ R {\displaystyle {\mathfrak {u}}(1)\cong \mathbb {R} } , while everywhere else it is zero. == Integration of a Lie algebroid == === Lie theorems === A Lie algebroid is called integrable if it is isomorphic to L i e ( G ) {\displaystyle \mathrm {Lie} (G)} for some Lie groupoid G ⇉ M {\displaystyle G\rightrightarrows M} . The analogue of the classical Lie I theorem states that:if A {\displaystyle A} is an integrable Lie algebroid, then there exists a unique (up to isomorphism) s {\displaystyle s} -simply connected Lie groupoid G {\displaystyle G} integrating A {\displaystyle A} .Similarly, a morphism F : A 1 → A 2 {\displaystyle F:A_{1}\to A_{2}} between integrable Lie algebroids is called integrable if it is the differential F = d ϕ ∣ A {\displaystyle F=d\phi _{\mid A}} for some morphism ϕ : G 1 → G 2 {\displaystyle \phi :G_{1}\to G_{2}} between two integrations of A 1 {\displaystyle A_{1}} and A 2 {\displaystyle A_{2}} . The analogue of the classical Lie II theorem states that: if F : L i e ( G 1 ) → L i e ( G 2 ) {\displaystyle F:\mathrm {Lie} (G_{1})\to \mathrm {Lie} (G_{2})} is a morphism of integrable Lie algebroids, and G 1 {\displaystyle G_{1}} is s {\displaystyle s} -simply connected, then there exists a unique morphism of Lie groupoids ϕ : G 1 → G 2 {\displaystyle \phi :G_{1}\to G_{2}} integrating F {\displaystyle F} .In particular, by choosing as G 2 {\displaystyle G_{2}} the general linear groupoid G L ( E ) {\displaystyle GL(E)} of a vector bundle E {\displaystyle E} , it follows that any representation of an integrable Lie algebroid integrates to a representation of its s {\displaystyle s} -simply connected integrating Lie groupoid. On the other hand, there is no analogue of the classical Lie III theorem, i.e. going back from any Lie algebroid to a Lie groupoid is not always possible. Pradines claimed that such a statement holds, and the first explicit examples of non-integrable Lie algebroids, coming for instance from foliation theory, appeared only several years later. Despite several partial results, including a complete solution in the transitive case, the general obstructions for an arbitrary Lie algebroid to be integrable were discovered only in 2003 by Crainic and Fernandes. 
Adopting a more general approach, one can see that every Lie algebroid integrates to a stacky Lie groupoid. === Ševera-Weinstein groupoid === Given any Lie algebroid A {\displaystyle A} , the natural candidate for an integration is given by G ( A ) := P ( A ) / ∼ {\displaystyle G(A):=P(A)/\sim } , where P ( A ) {\displaystyle P(A)} denotes the space of A {\displaystyle A} -paths and ∼ {\displaystyle \sim } the relation of A {\displaystyle A} -homotopy between them. This is often called the Weinstein groupoid or Ševera-Weinstein groupoid. Indeed, one can show that G ( A ) {\displaystyle G(A)} is an s {\displaystyle s} -simply connected topological groupoid, with the multiplication induced by the concatenation of paths. Moreover, if A {\displaystyle A} is integrable, G ( A ) {\displaystyle G(A)} admits a smooth structure such that it coincides with the unique s {\displaystyle s} -simply connected Lie groupoid integrating A {\displaystyle A} . Accordingly, the only obstruction to integrability lies in the smoothness of G ( A ) {\displaystyle G(A)} . This approach led to the introduction of objects called monodromy groups, associated to any Lie algebroid, and to the following fundamental result: A Lie algebroid is integrable if and only if its monodromy groups are uniformly discrete.This statement simplifies in the transitive case:A transitive Lie algebroid is integrable if and only if its monodromy groups are discrete.The results above also show that every Lie algebroid admits an integration to a local Lie groupoid (roughly speaking, a Lie groupoid where the multiplication is defined only in a neighbourhood around the identity elements). === Integrable examples === Lie algebras are always integrable (by Lie III theorem); Atiyah algebroids of a principal bundle are always integrable (to the gauge groupoid of that principal bundle); Lie algebroids with injective anchor (hence foliation algebroids) are always integrable (by Frobenius theorem); Lie algebra bundles are always integrable; action Lie algebroids are always integrable (but the integration is not necessarily an action Lie groupoid); any Lie subalgebroid of an integrable Lie algebroid is integrable. === A non-integrable example === Consider the Lie algebroid A ω = T M × R → M {\displaystyle A_{\omega }=TM\times \mathbb {R} \to M} associated to a closed 2-form ω ∈ Ω 2 ( M ) {\displaystyle \omega \in \Omega ^{2}(M)} and the group of spherical periods associated to ω {\displaystyle \omega } , i.e. the image Λ := I m ( Φ ) ⊆ R {\displaystyle \Lambda :=\mathrm {Im} (\Phi )\subseteq \mathbb {R} } of the following group homomorphism from the second homotopy group of M {\displaystyle M} Φ : π 2 ( M ) → R : [ f ] ↦ ∫ S 2 f ∗ ω . {\displaystyle \Phi :\pi _{2}(M)\to \mathbb {R} :\quad [f]\mapsto \int _{S^{2}}f^{*}\omega .} Since A ω {\displaystyle A_{\omega }} is transitive, it is integrable if and only if it is the Atiyah algebroid of some principal bundle; a careful analysis shows that this happens if and only if the subgroup Λ ⊆ R {\displaystyle \Lambda \subseteq \mathbb {R} } is a lattice, i.e. it is discrete. An explicit example where such condition fails is given by taking M = S 2 × S 2 {\displaystyle M=S^{2}\times S^{2}} and ω = p r 1 ∗ σ + 2 p r 2 ∗ σ ∈ Ω 2 ( M ) {\displaystyle \omega =\mathrm {pr} _{1}^{*}\sigma +{\sqrt {2}}\mathrm {pr} _{2}^{*}\sigma \in \Omega ^{2}(M)} for σ ∈ Ω 2 ( S 2 ) {\displaystyle \sigma \in \Omega ^{2}(S^{2})} the area form. 
Here Λ {\displaystyle \Lambda } turns out to be Z + 2 Z {\displaystyle \mathbb {Z} +{\sqrt {2}}\mathbb {Z} } , which is dense in R {\displaystyle \mathbb {R} } . == See also == R-algebroid Lie bialgebroid == References == == Books and lecture notes == Alan Weinstein, Groupoids: unifying internal and external symmetry, AMS Notices, 43 (1996), 744–752. Also available at arXiv:math/9602220. Kirill Mackenzie, Lie Groupoids and Lie Algebroids in Differential Geometry, Cambridge U. Press, 1987. Kirill Mackenzie, General Theory of Lie Groupoids and Lie Algebroids, Cambridge U. Press, 2005. Marius Crainic, Rui Loja Fernandes, Lectures on Integrability of Lie Brackets, Geometry & Topology Monographs 17 (2011) 1–107, available at arXiv:math/0611259. Eckhard Meinrenken, Lecture notes on Lie groupoids and Lie algebroids, available at http://www.math.toronto.edu/mein/teaching/MAT1341_LieGroupoids/Groupoids.pdf. Ieke Moerdijk, Janez Mrčun, Introduction to Foliations and Lie Groupoids, Cambridge U. Press, 2010.
Wikipedia:Lie derivative#0
In differential geometry, the Lie derivative (pronounced LEE), named after Sophus Lie by Władysław Ślebodziński, evaluates the change of a tensor field (including scalar functions, vector fields and one-forms) along the flow defined by another vector field. This change is coordinate invariant and therefore the Lie derivative is defined on any differentiable manifold. Functions, tensor fields and forms can be differentiated with respect to a vector field. If T is a tensor field and X is a vector field, then the Lie derivative of T with respect to X is denoted L X T {\displaystyle {\mathcal {L}}_{X}T} . The differential operator T ↦ L X T {\displaystyle T\mapsto {\mathcal {L}}_{X}T} is a derivation of the algebra of tensor fields of the underlying manifold. The Lie derivative commutes with contraction and the exterior derivative on differential forms. Although there are many concepts of taking a derivative in differential geometry, they all agree when the expression being differentiated is a function or scalar field. Thus in this case the word "Lie" is dropped, and one simply speaks of the derivative of a function. The Lie derivative of a vector field Y with respect to another vector field X is known as the "Lie bracket" of X and Y, and is often denoted [X,Y] instead of L X Y {\displaystyle {\mathcal {L}}_{X}Y} . The space of vector fields forms a Lie algebra with respect to this Lie bracket. The Lie derivative constitutes an infinite-dimensional Lie algebra representation of this Lie algebra, due to the identity L [ X , Y ] T = L X L Y T − L Y L X T , {\displaystyle {\mathcal {L}}_{[X,Y]}T={\mathcal {L}}_{X}{\mathcal {L}}_{Y}T-{\mathcal {L}}_{Y}{\mathcal {L}}_{X}T,} valid for any vector fields X and Y and any tensor field T. Considering vector fields as infinitesimal generators of flows (i.e. one-dimensional groups of diffeomorphisms) on M, the Lie derivative is the differential of the representation of the diffeomorphism group on tensor fields, analogous to Lie algebra representations as infinitesimal representations associated to group representations in Lie group theory. Generalisations exist for spinor fields, fibre bundles with a connection and vector-valued differential forms. == Motivation == A 'naïve' attempt to define the derivative of a tensor field with respect to a vector field would be to take the components of the tensor field and take the directional derivative of each component with respect to the vector field. However, this definition is undesirable because it is not invariant under changes of coordinate system, e.g. the naive derivative expressed in polar or spherical coordinates differs from the naive derivative of the components in Cartesian coordinates. On an abstract manifold such a definition is meaningless and ill-defined. In differential geometry, there are three main coordinate-independent notions of differentiation of tensor fields: Lie derivatives, derivatives with respect to connections, the exterior derivative of totally antisymmetric covariant tensors, i.e. differential forms. The main difference between the Lie derivative and a derivative with respect to a connection is that the latter derivative of a tensor field with respect to a tangent vector is well-defined even if it is not specified how to extend that tangent vector to a vector field. However, a connection requires the choice of an additional geometric structure (e.g. a Riemannian metric in the case of Levi-Civita connection, or just an abstract connection) on the manifold. 
In contrast, when taking a Lie derivative, no additional structure on the manifold is needed, but it is impossible to talk about the Lie derivative of a tensor field with respect to a single tangent vector, since the value of the Lie derivative of a tensor field with respect to a vector field X at a point p depends on the value of X in a neighborhood of p, not just at p itself. Finally, the exterior derivative of differential forms does not require any additional choices, but is only a well defined derivative of differential forms (including functions), thus excluding vectors and other tensors that are not purely differential forms. The idea of Lie derivatives is to use a vector field to define a notion of transport (Lie transport). A smooth vector field defines a smooth flow on the manifold, which allows vectors to be transported between two points on the same line of flow (This contrasts with connections, which allow transport between arbitrary points). Intuitively, a vector Y ( p ) {\displaystyle Y(p)} based at point p {\displaystyle p} is transported by flowing its base point to p ′ {\displaystyle p'} , while flowing its tip point p + Y ( p ) δ {\displaystyle p+Y(p)\delta } to p ′ + δ p ′ {\displaystyle p'+\delta p'} . == Definition == The Lie derivative may be defined in several equivalent ways. To keep things simple, we begin by defining the Lie derivative acting on scalar functions and vector fields, before moving on to the definition for general tensors. === The (Lie) derivative of a function === Defining the derivative of a function f : M → R {\displaystyle f\colon M\to {\mathbb {R} }} on a manifold requires care, because the difference quotient ( f ( x + h ) − f ( x ) ) / h {\displaystyle \textstyle (f(x+h)-f(x))/h} cannot be formed: the displacement x + h {\displaystyle x+h} is undefined on a manifold. The Lie derivative of a function f : M → R {\displaystyle f\colon M\to {\mathbb {R} }} with respect to a vector field X {\displaystyle X} at a point p ∈ M {\displaystyle p\in M} is the function ( L X f ) ( p ) = d d t | t = 0 ( f ∘ Φ X t ) ( p ) = lim t → 0 f ( Φ X t ( p ) ) − f ( p ) t {\displaystyle ({\mathcal {L}}_{X}f)(p)={d \over dt}{\biggr |}_{t=0}{\bigl (}f\circ \Phi _{X}^{t}{\bigr )}(p)=\lim _{t\to 0}{\frac {f{\bigl (}\Phi _{X}^{t}(p){\bigr )}-f{\bigl (}p{\bigr )}}{t}}} where Φ X t ( p ) {\displaystyle \Phi _{X}^{t}(p)} is the point to which the flow defined by the vector field X {\displaystyle X} maps the point p {\displaystyle p} at time instant t . {\displaystyle t.} In the vicinity of t = 0 , {\displaystyle t=0,} Φ X t ( p ) {\displaystyle \Phi _{X}^{t}(p)} is the unique solution of the system d d t | t Φ X t ( p ) = X ( Φ X t ( p ) ) {\displaystyle {\frac {d}{dt}}{\biggr |}_{t}\Phi _{X}^{t}(p)=X{\bigl (}\Phi _{X}^{t}(p){\bigr )}} of first-order autonomous (i.e. time-independent) differential equations, with Φ X 0 ( p ) = p . {\displaystyle \Phi _{X}^{0}(p)=p.} Setting L X f = ∇ X f {\displaystyle {\mathcal {L}}_{X}f=\nabla _{X}f} identifies the Lie derivative of a function with the directional derivative, which is also denoted by X ( f ) := L X f = ∇ X f {\displaystyle X(f):={\mathcal {L}}_{X}f=\nabla _{X}f} . === The Lie derivative of a vector field === If X and Y are both vector fields, then the Lie derivative of Y with respect to X is also known as the Lie bracket of X and Y, and is sometimes denoted [ X , Y ] {\displaystyle [X,Y]} . There are several approaches to defining the Lie bracket, all of which are equivalent. 
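Before stating the two bracket definitions below, the flow definition of the Lie derivative of a function given above can be verified symbolically in a simple case. The following sketch assumes M = R and X = x d/dx, whose flow is Φ_X^t(x) = e^t x; the names are illustrative.

```python
# A quick symbolic check (sympy) of the flow definition of L_X f above, on
# M = R with X = x d/dx, whose flow is Phi_X^t(x) = e^t x.
import sympy as sp

x, t = sp.symbols('x t')
flow = sp.exp(t)*x                       # candidate flow Phi_X^t(x)

# verify the defining flow ODE d/dt Phi = X(Phi), with X(p) = p and Phi^0 = id
assert sp.simplify(sp.diff(flow, t) - flow) == 0
assert flow.subs(t, 0) == x

f = sp.sin(x)
lie_via_flow = sp.diff(f.subs(x, flow), t).subs(t, 0)   # d/dt|_{t=0} f(Phi^t(x))
directional = x*sp.diff(f, x)                           # X(f) = x f'(x)
assert sp.simplify(lie_via_flow - directional) == 0     # both equal x cos(x)
```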
We list two definitions here, corresponding to the two definitions of a vector field given above. Viewing a vector field as an infinitesimal flow, the Lie bracket is [ X , Y ] = L X Y = d d t | t = 0 ( Φ X t ) ∗ Y {\displaystyle [X,Y]={\mathcal {L}}_{X}Y={d \over dt}{\biggr |}_{t=0}{\bigl (}\Phi _{X}^{t}{\bigr )}^{*}Y} , where Φ X t {\displaystyle \Phi _{X}^{t}} is the flow of X {\displaystyle X} ; viewing vector fields as derivations of C ∞ ( M ) {\displaystyle C^{\infty }(M)} , it is the commutator [ X , Y ] ( f ) = X ( Y ( f ) ) − Y ( X ( f ) ) {\displaystyle [X,Y](f)=X(Y(f))-Y(X(f))} . === The Lie derivative of a tensor field === ==== Definition in terms of flows ==== The Lie derivative is the speed with which the tensor field changes under the space deformation caused by the flow. Formally, given a differentiable (time-independent) vector field X {\displaystyle X} on a smooth manifold M , {\displaystyle M,} let Φ X t : M → M {\displaystyle \Phi _{X}^{t}:M\to M} be the corresponding local flow. Since Φ X t {\displaystyle \Phi _{X}^{t}} is a local diffeomorphism for each t {\displaystyle t} , it gives rise to a pullback of tensor fields. For covariant tensors, this is just the multi-linear extension of the pullback map ( Φ X t ) p ∗ : T Φ X t ( p ) ∗ M → T p ∗ M , ( ( Φ X t ) p ∗ α ) ( Y ) = α ( T p Φ X t ( Y ) ) , α ∈ T Φ X t ( p ) ∗ M , Y ∈ T p M {\displaystyle \left(\Phi _{X}^{t}\right)_{p}^{*}:T_{\Phi _{X}^{t}(p)}^{*}M\to T_{p}^{*}M,\qquad \left(\left(\Phi _{X}^{t}\right)_{p}^{*}\alpha \right)(Y)=\alpha {\bigl (}T_{p}\Phi _{X}^{t}(Y){\bigr )},\quad \alpha \in T_{\Phi _{X}^{t}(p)}^{*}M,Y\in T_{p}M} For contravariant tensors, one extends the inverse ( T p Φ X t ) − 1 : T Φ X t ( p ) M → T p M {\displaystyle \left(T_{p}\Phi _{X}^{t}\right)^{-1}:T_{\Phi _{X}^{t}(p)}M\to T_{p}M} of the differential T p Φ X t {\displaystyle T_{p}\Phi _{X}^{t}} . For every t , {\displaystyle t,} there is, consequently, a tensor field ( Φ X t ) ∗ T {\displaystyle (\Phi _{X}^{t})^{*}T} of the same type as T {\displaystyle T} . If T {\displaystyle T} is an ( r , 0 ) {\displaystyle (r,0)} - or ( 0 , s ) {\displaystyle (0,s)} -type tensor field, then the Lie derivative L X T {\displaystyle {\cal {L}}_{X}T} of T {\displaystyle T} along a vector field X {\displaystyle X} is defined at point p ∈ M {\displaystyle p\in M} to be L X T ( p ) = d d t | t = 0 ( ( Φ X t ) ∗ T ) p = d d t | t = 0 ( Φ X t ) p ∗ T Φ X t ( p ) = lim t → 0 ( Φ X t ) ∗ T Φ X t ( p ) − T p t . {\displaystyle {\cal {L}}_{X}T(p)={\frac {d}{dt}}{\biggl |}_{t=0}\left({\bigl (}\Phi _{X}^{t}{\bigr )}^{*}T\right)_{p}={\frac {d}{dt}}{\biggl |}_{t=0}{\bigl (}\Phi _{X}^{t}{\bigr )}_{p}^{*}T_{\Phi _{X}^{t}(p)}=\lim _{t\to 0}{\frac {{\bigl (}\Phi _{X}^{t}{\bigr )}^{*}T_{\Phi _{X}^{t}(p)}-T_{p}}{t}}.} The resulting tensor field L X T {\displaystyle {\cal {L}}_{X}T} is of the same type as T {\displaystyle T} . More generally, for every smooth 1-parameter family Φ t {\displaystyle \Phi _{t}} of diffeomorphisms that integrate a vector field X {\displaystyle X} in the sense that d d t | t = 0 Φ t = X ∘ Φ 0 {\displaystyle {d \over dt}{\biggr |}_{t=0}\Phi _{t}=X\circ \Phi _{0}} , one has L X T = ( Φ 0 − 1 ) ∗ d d t | t = 0 Φ t ∗ T = − d d t | t = 0 ( Φ t − 1 ) ∗ Φ 0 ∗ T . {\displaystyle {\mathcal {L}}_{X}T={\bigl (}\Phi _{0}^{-1}{\bigr )}^{*}{d \over dt}{\biggr |}_{t=0}\Phi _{t}^{*}T=-{d \over dt}{\biggr |}_{t=0}{\bigl (}\Phi _{t}^{-1}{\bigr )}^{*}\Phi _{0}^{*}T\,.} ==== Algebraic definition ==== We now give an algebraic definition. The algebraic definition for the Lie derivative of a tensor field follows from the following four axioms: Axiom 1. The Lie derivative of a function is equal to the directional derivative of the function. This fact is often expressed by the formula L Y f = Y ( f ) {\displaystyle {\mathcal {L}}_{Y}f=Y(f)} Axiom 2. The Lie derivative obeys the following version of Leibniz's rule: For any tensor fields S and T, we have L Y ( S ⊗ T ) = ( L Y S ) ⊗ T + S ⊗ ( L Y T ) . 
{\displaystyle {\mathcal {L}}_{Y}(S\otimes T)=({\mathcal {L}}_{Y}S)\otimes T+S\otimes ({\mathcal {L}}_{Y}T).} Axiom 3. The Lie derivative obeys the Leibniz rule with respect to contraction: L X ( T ( Y 1 , … , Y n ) ) = ( L X T ) ( Y 1 , … , Y n ) + T ( ( L X Y 1 ) , … , Y n ) + ⋯ + T ( Y 1 , … , ( L X Y n ) ) {\displaystyle {\mathcal {L}}_{X}(T(Y_{1},\ldots ,Y_{n}))=({\mathcal {L}}_{X}T)(Y_{1},\ldots ,Y_{n})+T(({\mathcal {L}}_{X}Y_{1}),\ldots ,Y_{n})+\cdots +T(Y_{1},\ldots ,({\mathcal {L}}_{X}Y_{n}))} Axiom 4. The Lie derivative commutes with exterior derivative on functions: [ L X , d ] = 0 {\displaystyle [{\mathcal {L}}_{X},d]=0} Using the first and third axioms, applying the Lie derivative L X {\displaystyle {\mathcal {L}}_{X}} to Y ( f ) {\displaystyle Y(f)} shows that L X Y ( f ) = X ( Y ( f ) ) − Y ( X ( f ) ) , {\displaystyle {\mathcal {L}}_{X}Y(f)=X(Y(f))-Y(X(f)),} which is one of the standard definitions for the Lie bracket. The Lie derivative acting on a differential form is the anticommutator of the interior product with the exterior derivative. So if α is a differential form, L Y α = i Y d α + d i Y α . {\displaystyle {\mathcal {L}}_{Y}\alpha =i_{Y}d\alpha +di_{Y}\alpha .} This follows easily by checking that the expression commutes with exterior derivative, is a derivation (being an anticommutator of graded derivations) and does the right thing on functions. This is Cartan's magic formula. See interior product for details. Explicitly, let T be a tensor field of type (p, q). Consider T to be a differentiable multilinear map of smooth sections α1, α2, ..., αp of the cotangent bundle T∗M and of sections X1, X2, ..., Xq of the tangent bundle TM, written T(α1, α2, ..., X1, X2, ...), taking values in R. Define the Lie derivative of T along Y by the formula ( L Y T ) ( α 1 , α 2 , … , X 1 , X 2 , … ) = Y ( T ( α 1 , α 2 , … , X 1 , X 2 , … ) ) {\displaystyle ({\mathcal {L}}_{Y}T)(\alpha _{1},\alpha _{2},\ldots ,X_{1},X_{2},\ldots )=Y(T(\alpha _{1},\alpha _{2},\ldots ,X_{1},X_{2},\ldots ))} − T ( L Y α 1 , α 2 , … , X 1 , X 2 , … ) − T ( α 1 , L Y α 2 , … , X 1 , X 2 , … ) − … {\displaystyle -T({\mathcal {L}}_{Y}\alpha _{1},\alpha _{2},\ldots ,X_{1},X_{2},\ldots )-T(\alpha _{1},{\mathcal {L}}_{Y}\alpha _{2},\ldots ,X_{1},X_{2},\ldots )-\ldots } − T ( α 1 , α 2 , … , L Y X 1 , X 2 , … ) − T ( α 1 , α 2 , … , X 1 , L Y X 2 , … ) − … {\displaystyle -T(\alpha _{1},\alpha _{2},\ldots ,{\mathcal {L}}_{Y}X_{1},X_{2},\ldots )-T(\alpha _{1},\alpha _{2},\ldots ,X_{1},{\mathcal {L}}_{Y}X_{2},\ldots )-\ldots } The analytic and algebraic definitions can be proven to be equivalent using the properties of the pushforward and the Leibniz rule for differentiation. The Lie derivative commutes with the contraction. === The Lie derivative of a differential form === A particularly important class of tensor fields is the class of differential forms. The restriction of the Lie derivative to the space of differential forms is closely related to the exterior derivative. Both the Lie derivative and the exterior derivative attempt to capture the idea of a derivative in different ways. These differences can be bridged by introducing the idea of an interior product, after which the relationship falls out as an identity known as Cartan's formula. Cartan's formula can also be used as a definition of the Lie derivative on the space of differential forms. Let M be a manifold and X a vector field on M. 
Let ω ∈ Λ k ( M ) {\displaystyle \omega \in \Lambda ^{k}(M)} be a k-form, i.e., for each p ∈ M {\displaystyle p\in M} , ω ( p ) {\displaystyle \omega (p)} is an alternating multilinear map from ( T p M ) k {\displaystyle (T_{p}M)^{k}} to the real numbers. The interior product of X and ω is the (k − 1)-form i X ω {\displaystyle i_{X}\omega } defined as ( i X ω ) ( X 1 , … , X k − 1 ) = ω ( X , X 1 , … , X k − 1 ) {\displaystyle (i_{X}\omega )(X_{1},\ldots ,X_{k-1})=\omega (X,X_{1},\ldots ,X_{k-1})\,} The differential form i X ω {\displaystyle i_{X}\omega } is also called the contraction of ω with X, and i X : Λ k ( M ) → Λ k − 1 ( M ) {\displaystyle i_{X}:\Lambda ^{k}(M)\rightarrow \Lambda ^{k-1}(M)} is a ∧ {\displaystyle \wedge } -antiderivation where ∧ {\displaystyle \wedge } is the wedge product on differential forms. That is, i X {\displaystyle i_{X}} is R-linear, and i X ( ω ∧ η ) = ( i X ω ) ∧ η + ( − 1 ) k ω ∧ ( i X η ) {\displaystyle i_{X}(\omega \wedge \eta )=(i_{X}\omega )\wedge \eta +(-1)^{k}\omega \wedge (i_{X}\eta )} for ω ∈ Λ k ( M ) {\displaystyle \omega \in \Lambda ^{k}(M)} and η another differential form. Also, for a function f ∈ Λ 0 ( M ) {\displaystyle f\in \Lambda ^{0}(M)} , that is, a real- or complex-valued function on M, one has i f X ω = f i X ω {\displaystyle i_{fX}\omega =f\,i_{X}\omega } where f X {\displaystyle fX} denotes the product of f and X. The relationship between exterior derivatives and Lie derivatives can then be summarized as follows. First, since the Lie derivative of a function f with respect to a vector field X is the same as the directional derivative X(f), it is also the same as the contraction of the exterior derivative of f with X: L X f = i X d f {\displaystyle {\mathcal {L}}_{X}f=i_{X}\,df} For a general differential form, the Lie derivative is likewise a contraction, taking into account the variation in X: L X ω = i X d ω + d ( i X ω ) . {\displaystyle {\mathcal {L}}_{X}\omega =i_{X}d\omega +d(i_{X}\omega ).} This identity is known variously as Cartan formula, Cartan homotopy formula or Cartan's magic formula. See interior product for details. The Cartan formula can be used as a definition of the Lie derivative of a differential form. Cartan's formula shows in particular that d L X ω = L X ( d ω ) . {\displaystyle d{\mathcal {L}}_{X}\omega ={\mathcal {L}}_{X}(d\omega ).} The Lie derivative also satisfies the relation L f X ω = f L X ω + d f ∧ i X ω . 
{\displaystyle {\mathcal {L}}_{fX}\omega =f{\mathcal {L}}_{X}\omega +df\wedge i_{X}\omega .} == Coordinate expressions == In local coordinate notation, for a type (r, s) tensor field T {\displaystyle T} , the Lie derivative along X {\displaystyle X} is ( L X T ) a 1 … a r b 1 … b s = X c ( ∂ c T a 1 … a r b 1 … b s ) − ( ∂ c X a 1 ) T c a 2 … a r b 1 … b s − … − ( ∂ c X a r ) T a 1 … a r − 1 c b 1 … b s + ( ∂ b 1 X c ) T a 1 … a r c b 2 … b s + … + ( ∂ b s X c ) T a 1 … a r b 1 … b s − 1 c {\displaystyle {\begin{aligned}({\mathcal {L}}_{X}T)^{a_{1}\ldots a_{r}}{}_{b_{1}\ldots b_{s}}={}&X^{c}(\partial _{c}T^{a_{1}\ldots a_{r}}{}_{b_{1}\ldots b_{s}})\\&{}-{}(\partial _{c}X^{a_{1}})T^{ca_{2}\ldots a_{r}}{}_{b_{1}\ldots b_{s}}-\ldots -(\partial _{c}X^{a_{r}})T^{a_{1}\ldots a_{r-1}c}{}_{b_{1}\ldots b_{s}}\\&+(\partial _{b_{1}}X^{c})T^{a_{1}\ldots a_{r}}{}_{cb_{2}\ldots b_{s}}+\ldots +(\partial _{b_{s}}X^{c})T^{a_{1}\ldots a_{r}}{}_{b_{1}\ldots b_{s-1}c}\end{aligned}}} Here, the notation ∂ a = ∂ ∂ x a {\displaystyle \partial _{a}={\frac {\partial }{\partial x^{a}}}} means taking the partial derivative with respect to the coordinate x a {\displaystyle x^{a}} . Alternatively, if we are using a torsion-free connection (e.g., the Levi-Civita connection), then the partial derivative ∂ a {\displaystyle \partial _{a}} can be replaced with the covariant derivative, which means replacing ∂ a X b {\displaystyle \partial _{a}X^{b}} with (by abuse of notation) ∇ a X b = X ; a b := ( ∇ X ) a b = ∂ a X b + Γ a c b X c {\displaystyle \nabla _{a}X^{b}=X_{;a}^{b}:=(\nabla X)_{a}^{\ b}=\partial _{a}X^{b}+\Gamma _{ac}^{b}X^{c}} where the Γ b c a = Γ c b a {\displaystyle \Gamma _{bc}^{a}=\Gamma _{cb}^{a}} are the Christoffel coefficients. The Lie derivative of a tensor is another tensor of the same type, i.e., even though the individual terms in the expression depend on the choice of coordinate system, the expression as a whole results in a tensor ( L X T ) a 1 … a r b 1 … b s ∂ a 1 ⊗ ⋯ ⊗ ∂ a r ⊗ d x b 1 ⊗ ⋯ ⊗ d x b s {\displaystyle ({\mathcal {L}}_{X}T)^{a_{1}\ldots a_{r}}{}_{b_{1}\ldots b_{s}}\partial _{a_{1}}\otimes \cdots \otimes \partial _{a_{r}}\otimes dx^{b_{1}}\otimes \cdots \otimes dx^{b_{s}}} which is independent of any coordinate system and of the same type as T {\displaystyle T} . The definition can be extended further to tensor densities. If T is a tensor density of some real-number-valued weight w (e.g. the volume density of weight 1), then its Lie derivative is a tensor density of the same type and weight. ( L X T ) a 1 … a r b 1 … b s = X c ( ∂ c T a 1 … a r b 1 … b s ) − ( ∂ c X a 1 ) T c a 2 … a r b 1 … b s − … − ( ∂ c X a r ) T a 1 … a r − 1 c b 1 … b s + ( ∂ b 1 X c ) T a 1 … a r c b 2 … b s + … + ( ∂ b s X c ) T a 1 … a r b 1 … b s − 1 c + w ( ∂ c X c ) T a 1 … a r b 1 … b s {\displaystyle {\begin{aligned}({\mathcal {L}}_{X}T)^{a_{1}\ldots a_{r}}{}_{b_{1}\ldots b_{s}}={}&X^{c}(\partial _{c}T^{a_{1}\ldots a_{r}}{}_{b_{1}\ldots b_{s}})-(\partial _{c}X^{a_{1}})T^{ca_{2}\ldots a_{r}}{}_{b_{1}\ldots b_{s}}-\ldots -(\partial _{c}X^{a_{r}})T^{a_{1}\ldots a_{r-1}c}{}_{b_{1}\ldots b_{s}}+\\&+(\partial _{b_{1}}X^{c})T^{a_{1}\ldots a_{r}}{}_{cb_{2}\ldots b_{s}}+\ldots +(\partial _{b_{s}}X^{c})T^{a_{1}\ldots a_{r}}{}_{b_{1}\ldots b_{s-1}c}+w(\partial _{c}X^{c})T^{a_{1}\ldots a_{r}}{}_{b_{1}\ldots b_{s}}\end{aligned}}} Notice the new term at the end of the expression. 
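The coordinate expression above can be implemented directly. The following sympy sketch specialises it to a (0, 2) tensor field on R² and checks that the rotation generator is a Killing field for the flat metric, so that its Lie derivative of the metric vanishes; all helper names are illustrative.

```python
# A sympy sketch of the coordinate expression above, specialised to a (0, 2)
# tensor field on R^2: (L_X T)_{ab} = X^c d_c T_{ab} + (d_a X^c) T_{cb}
# + (d_b X^c) T_{ac}. Helper names are illustrative.
import sympy as sp

x, y = coords = sp.symbols('x y')

def lie_02(X, T):
    n = len(coords)
    return sp.Matrix(n, n, lambda a, b:
        sum(X[c]*sp.diff(T[a, b], coords[c])
            + sp.diff(X[c], coords[a])*T[c, b]
            + sp.diff(X[c], coords[b])*T[a, c]
            for c in range(n)))

g = sp.eye(2)                  # flat metric, g_ab = delta_ab
X = (-y, x)                    # generator of rotations
assert lie_02(X, g) == sp.zeros(2, 2)   # X is a Killing field

# a non-Killing field gives the symmetrised gradient d_a X_b + d_b X_a:
Y = (x**2, sp.Integer(0))
print(lie_02(Y, g))            # Matrix([[4*x, 0], [0, 0]])
```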
For a linear connection Γ = ( Γ b c a ) {\displaystyle \Gamma =(\Gamma _{bc}^{a})} , the Lie derivative along X {\displaystyle X} is ( L X Γ ) b c a = X d ∂ d Γ b c a + ∂ b ∂ c X a − Γ b c d ∂ d X a + Γ d c a ∂ b X d + Γ b d a ∂ c X d {\displaystyle ({\mathcal {L}}_{X}\Gamma )_{bc}^{a}=X^{d}\partial _{d}\Gamma _{bc}^{a}+\partial _{b}\partial _{c}X^{a}-\Gamma _{bc}^{d}\partial _{d}X^{a}+\Gamma _{dc}^{a}\partial _{b}X^{d}+\Gamma _{bd}^{a}\partial _{c}X^{d}} === Examples === For clarity, we now present the following examples in local coordinate notation. For a scalar field ϕ ( x c ) ∈ F ( M ) {\displaystyle \phi (x^{c})\in {\mathcal {F}}(M)} we have: ( L X ϕ ) = X ( ϕ ) = X a ∂ a ϕ {\displaystyle ({\mathcal {L}}_{X}\phi )=X(\phi )=X^{a}\partial _{a}\phi } . Hence for the scalar field ϕ ( x , y ) = x 2 − sin ⁡ ( y ) {\displaystyle \phi (x,y)=x^{2}-\sin(y)} and the vector field X a ∂ a = sin ⁡ ( x ) ∂ y − y 2 ∂ x {\displaystyle X^{a}\partial _{a}=\sin(x)\partial _{y}-y^{2}\partial _{x}} the corresponding Lie derivative becomes L X ϕ = ( sin ⁡ ( x ) ∂ y − y 2 ∂ x ) ( x 2 − sin ⁡ ( y ) ) = sin ⁡ ( x ) ∂ y ( x 2 − sin ⁡ ( y ) ) − y 2 ∂ x ( x 2 − sin ⁡ ( y ) ) = − sin ⁡ ( x ) cos ⁡ ( y ) − 2 x y 2 {\displaystyle {\begin{alignedat}{3}{\mathcal {L}}_{X}\phi &=(\sin(x)\partial _{y}-y^{2}\partial _{x})(x^{2}-\sin(y))\\&=\sin(x)\partial _{y}(x^{2}-\sin(y))-y^{2}\partial _{x}(x^{2}-\sin(y))\\&=-\sin(x)\cos(y)-2xy^{2}\\\end{alignedat}}} For an example of a higher-rank differential form, consider the 2-form ω = ( x 2 + y 2 ) d x ∧ d z {\displaystyle \omega =(x^{2}+y^{2})dx\wedge dz} and the vector field X {\displaystyle X} from the previous example. Then, L X ω = d ( i sin ⁡ ( x ) ∂ y − y 2 ∂ x ( ( x 2 + y 2 ) d x ∧ d z ) ) + i sin ⁡ ( x ) ∂ y − y 2 ∂ x ( d ( ( x 2 + y 2 ) d x ∧ d z ) ) = d ( − y 2 ( x 2 + y 2 ) d z ) + i sin ⁡ ( x ) ∂ y − y 2 ∂ x ( 2 y d y ∧ d x ∧ d z ) = ( − 2 x y 2 d x + ( − 2 y x 2 − 4 y 3 ) d y ) ∧ d z + ( 2 y sin ⁡ ( x ) d x ∧ d z + 2 y 3 d y ∧ d z ) = ( − 2 x y 2 + 2 y sin ⁡ ( x ) ) d x ∧ d z + ( − 2 y x 2 − 2 y 3 ) d y ∧ d z {\displaystyle {\begin{aligned}{\mathcal {L}}_{X}\omega &=d(i_{\sin(x)\partial _{y}-y^{2}\partial _{x}}((x^{2}+y^{2})dx\wedge dz))+i_{\sin(x)\partial _{y}-y^{2}\partial _{x}}(d((x^{2}+y^{2})dx\wedge dz))\\&=d(-y^{2}(x^{2}+y^{2})dz)+i_{\sin(x)\partial _{y}-y^{2}\partial _{x}}(2ydy\wedge dx\wedge dz)\\&=\left(-2xy^{2}dx+(-2yx^{2}-4y^{3})dy\right)\wedge dz+(2y\sin(x)dx\wedge dz+2y^{3}dy\wedge dz)\\&=\left(-2xy^{2}+2y\sin(x)\right)dx\wedge dz+(-2yx^{2}-2y^{3})dy\wedge dz\end{aligned}}} Here are some more abstract examples. L X ( d x b ) = d i X ( d x b ) = d X b = ∂ a X b d x a {\displaystyle {\mathcal {L}}_{X}(dx^{b})=di_{X}(dx^{b})=dX^{b}=\partial _{a}X^{b}dx^{a}} . Hence for a covector field, i.e., a differential form, A = A a ( x b ) d x a {\displaystyle A=A_{a}(x^{b})dx^{a}} we have: L X A = X ( A a ) d x a + A b L X ( d x b ) = ( X b ∂ b A a + A b ∂ a ( X b ) ) d x a {\displaystyle {\mathcal {L}}_{X}A=X(A_{a})dx^{a}+A_{b}{\mathcal {L}}_{X}(dx^{b})=(X^{b}\partial _{b}A_{a}+A_{b}\partial _{a}(X^{b}))dx^{a}} The coefficient of the last expression is the local coordinate expression of the Lie derivative.
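The scalar example above is easy to verify mechanically. The following is a minimal sketch (not part of the original text) using the SymPy library; the names phi, Xx, Xy are illustrative choices for the scalar field and the components of the vector field X.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Scalar field phi = x^2 - sin(y) and vector field
# X = -y^2 d/dx + sin(x) d/dy, as in the example above
phi = x**2 - sp.sin(y)
Xx, Xy = -y**2, sp.sin(x)

# For a scalar field, the Lie derivative is the directional derivative X(phi)
lie_phi = Xx * sp.diff(phi, x) + Xy * sp.diff(phi, y)
print(sp.simplify(lie_phi))  # -2*x*y**2 - sin(x)*cos(y), matching the result above
```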
For a covariant rank 2 tensor field T = T a b ( x c ) d x a ⊗ d x b {\displaystyle T=T_{ab}(x^{c})dx^{a}\otimes dx^{b}} we have: ( L X T ) = ( L X T ) a b d x a ⊗ d x b = X ( T a b ) d x a ⊗ d x b + T c b L X ( d x c ) ⊗ d x b + T a c d x a ⊗ L X ( d x c ) = ( X c ∂ c T a b + T c b ∂ a X c + T a c ∂ b X c ) d x a ⊗ d x b {\displaystyle {\begin{aligned}({\mathcal {L}}_{X}T)&=({\mathcal {L}}_{X}T)_{ab}dx^{a}\otimes dx^{b}\\&=X(T_{ab})dx^{a}\otimes dx^{b}+T_{cb}{\mathcal {L}}_{X}(dx^{c})\otimes dx^{b}+T_{ac}dx^{a}\otimes {\mathcal {L}}_{X}(dx^{c})\\&=(X^{c}\partial _{c}T_{ab}+T_{cb}\partial _{a}X^{c}+T_{ac}\partial _{b}X^{c})dx^{a}\otimes dx^{b}\\\end{aligned}}} If T = g {\displaystyle T=g} is the symmetric metric tensor, it is parallel with respect to the Levi-Civita connection (also known as the covariant derivative), so it is fruitful to use the connection. This has the effect of replacing all derivatives with covariant derivatives, giving ( L X g ) = ( X c g a b ; c + g c b X ; a c + g a c X ; b c ) d x a ⊗ d x b = ( X b ; a + X a ; b ) d x a ⊗ d x b {\displaystyle ({\mathcal {L}}_{X}g)=(X^{c}g_{ab;c}+g_{cb}X_{;a}^{c}+g_{ac}X_{;b}^{c})dx^{a}\otimes dx^{b}=(X_{b;a}+X_{a;b})dx^{a}\otimes dx^{b}} == Properties == The Lie derivative has a number of properties. Let F ( M ) {\displaystyle {\mathcal {F}}(M)} be the algebra of functions defined on the manifold M. Then L X : F ( M ) → F ( M ) {\displaystyle {\mathcal {L}}_{X}:{\mathcal {F}}(M)\rightarrow {\mathcal {F}}(M)} is a derivation on the algebra F ( M ) {\displaystyle {\mathcal {F}}(M)} . That is, L X {\displaystyle {\mathcal {L}}_{X}} is R-linear and L X ( f g ) = ( L X f ) g + f L X g . {\displaystyle {\mathcal {L}}_{X}(fg)=({\mathcal {L}}_{X}f)g+f{\mathcal {L}}_{X}g.} Similarly, it is a derivation on F ( M ) × X ( M ) {\displaystyle {\mathcal {F}}(M)\times {\mathcal {X}}(M)} where X ( M ) {\displaystyle {\mathcal {X}}(M)} is the set of vector fields on M: L X ( f Y ) = ( L X f ) Y + f L X Y {\displaystyle {\mathcal {L}}_{X}(fY)=({\mathcal {L}}_{X}f)Y+f{\mathcal {L}}_{X}Y} which may also be written in the equivalent notation L X ( f ⊗ Y ) = ( L X f ) ⊗ Y + f ⊗ L X Y {\displaystyle {\mathcal {L}}_{X}(f\otimes Y)=({\mathcal {L}}_{X}f)\otimes Y+f\otimes {\mathcal {L}}_{X}Y} where the tensor product symbol ⊗ {\displaystyle \otimes } is used to emphasize the fact that the product of a function and a vector field is being taken over the entire manifold. Additional properties are consistent with those of the Lie bracket. Thus, for example, considered as a derivation on a vector field, L X [ Y , Z ] = [ L X Y , Z ] + [ Y , L X Z ] {\displaystyle {\mathcal {L}}_{X}[Y,Z]=[{\mathcal {L}}_{X}Y,Z]+[Y,{\mathcal {L}}_{X}Z]} one finds the above to be just the Jacobi identity. Thus, one has the important result that the space of vector fields over M, equipped with the Lie bracket, forms a Lie algebra. The Lie derivative also has important properties when acting on differential forms. Let α and β be two differential forms on M, and let X and Y be two vector fields.
Then L X ( α ∧ β ) = ( L X α ) ∧ β + α ∧ ( L X β ) {\displaystyle {\mathcal {L}}_{X}(\alpha \wedge \beta )=({\mathcal {L}}_{X}\alpha )\wedge \beta +\alpha \wedge ({\mathcal {L}}_{X}\beta )} [ L X , L Y ] α := L X L Y α − L Y L X α = L [ X , Y ] α {\displaystyle [{\mathcal {L}}_{X},{\mathcal {L}}_{Y}]\alpha :={\mathcal {L}}_{X}{\mathcal {L}}_{Y}\alpha -{\mathcal {L}}_{Y}{\mathcal {L}}_{X}\alpha ={\mathcal {L}}_{[X,Y]}\alpha } [ L X , i Y ] α = [ i X , L Y ] α = i [ X , Y ] α , {\displaystyle [{\mathcal {L}}_{X},i_{Y}]\alpha =[i_{X},{\mathcal {L}}_{Y}]\alpha =i_{[X,Y]}\alpha ,} where i denotes the interior product defined above, and it is clear from context whether [·,·] denotes the commutator or the Lie bracket of vector fields. == Generalizations == Various generalizations of the Lie derivative play an important role in differential geometry. === The Lie derivative of a spinor field === A definition for Lie derivatives of spinors along generic spacetime vector fields, not necessarily Killing ones, on a general (pseudo-)Riemannian manifold was already proposed in 1971 by Yvette Kosmann. Later, a geometric framework was provided that justifies her ad hoc prescription within the general framework of Lie derivatives on fiber bundles, in the explicit context of gauge natural bundles, which turn out to be the most appropriate arena for (gauge-covariant) field theories. On a given spin manifold, that is, a Riemannian manifold ( M , g ) {\displaystyle (M,g)} admitting a spin structure, the Lie derivative of a spinor field ψ {\displaystyle \psi } can be defined by first defining it with respect to infinitesimal isometries (Killing vector fields) via André Lichnerowicz's local expression, given in 1963: L X ψ := X a ∇ a ψ − 1 4 ∇ a X b γ a γ b ψ , {\displaystyle {\mathcal {L}}_{X}\psi :=X^{a}\nabla _{a}\psi -{\frac {1}{4}}\nabla _{a}X_{b}\gamma ^{a}\gamma ^{b}\psi \,,} where ∇ a X b = ∇ [ a X b ] {\displaystyle \nabla _{a}X_{b}=\nabla _{[a}X_{b]}} , as X = X a ∂ a {\displaystyle X=X^{a}\partial _{a}} is assumed to be a Killing vector field, and γ a {\displaystyle \gamma ^{a}} are Dirac matrices. It is then possible to extend Lichnerowicz's definition to all vector fields (generic infinitesimal transformations) by retaining Lichnerowicz's local expression for a generic vector field X {\displaystyle X} , but explicitly taking the antisymmetric part of ∇ a X b {\displaystyle \nabla _{a}X_{b}} only. More explicitly, Kosmann's local expression, given in 1972, is: L X ψ := X a ∇ a ψ − 1 8 ∇ [ a X b ] [ γ a , γ b ] ψ = ∇ X ψ − 1 4 ( d X ♭ ) ⋅ ψ , {\displaystyle {\mathcal {L}}_{X}\psi :=X^{a}\nabla _{a}\psi -{\frac {1}{8}}\nabla _{[a}X_{b]}[\gamma ^{a},\gamma ^{b}]\psi \,=\nabla _{X}\psi -{\frac {1}{4}}(dX^{\flat })\cdot \psi \,,} where [ γ a , γ b ] = γ a γ b − γ b γ a {\displaystyle [\gamma ^{a},\gamma ^{b}]=\gamma ^{a}\gamma ^{b}-\gamma ^{b}\gamma ^{a}} is the commutator, d {\displaystyle d} is the exterior derivative, X ♭ = g ( X , − ) {\displaystyle X^{\flat }=g(X,-)} is the dual 1-form corresponding to X {\displaystyle X} under the metric (i.e., with lowered indices) and ⋅ {\displaystyle \cdot } is Clifford multiplication. It is worth noting that the spinor Lie derivative is independent of the metric, and hence also of the connection.
This is not obvious from the right-hand side of Kosmann's local expression, as the right-hand side seems to depend on the metric through the spin connection (covariant derivative), the dualisation of vector fields (lowering of the indices) and the Clifford multiplication on the spinor bundle. Such is not the case: the quantities on the right-hand side of Kosmann's local expression combine so as to make all metric- and connection-dependent terms cancel. To gain a better understanding of the long-debated concept of Lie derivative of spinor fields, one may refer to the original article, where the definition of a Lie derivative of spinor fields is placed in the more general framework of the theory of Lie derivatives of sections of fiber bundles, and the direct approach by Y. Kosmann to the spinor case is generalized to gauge natural bundles in the form of a new geometric concept called the Kosmann lift. As in the tensor case, the vanishing of the Lie derivative of a spinor along a Killing vector imposes on the spinor the symmetries encoded by that Killing vector. However, unlike tensors, spinors can be used to build bilinear quantities (such as the velocity vector ψ ¯ γ a ψ {\displaystyle {\overline {\psi }}\gamma ^{a}\psi } or the spin axial-vector ψ ¯ γ a γ 5 ψ {\displaystyle {\overline {\psi }}\gamma ^{a}\gamma ^{5}\psi } ) which are tensors. A natural question that now arises is whether the vanishing of the Lie derivative along a Killing vector of a spinor is equivalent to the vanishing of the Lie derivative along the same Killing vector of all the spinor bilinear quantities. While Lie invariance of a spinor implies Lie invariance of all its bilinear quantities, the converse is in general not true. === Covariant Lie derivative === If we have a principal bundle over the manifold M with G as the structure group, and we pick X to be a covariant vector field as a section of the tangent space of the principal bundle (i.e., it has horizontal and vertical components), then the covariant Lie derivative is just the Lie derivative with respect to X over the principal bundle. Now, if we are given a vector field Y over M (but not the principal bundle) but we also have a connection over the principal bundle, we can define a vector field X over the principal bundle such that its horizontal component matches Y and its vertical component agrees with the connection. This is the covariant Lie derivative. See connection form for more details. === Nijenhuis–Lie derivative === Another generalization, due to Albert Nijenhuis, allows one to define the Lie derivative of a differential form along any section of the bundle Ωk(M, TM) of differential forms with values in the tangent bundle. If K ∈ Ωk(M, TM) and α is a differential p-form, then it is possible to define the interior product iKα of K and α. The Nijenhuis–Lie derivative is then the anticommutator of the interior product and the exterior derivative: L K α = [ d , i K ] α = d i K α − ( − 1 ) k − 1 i K d α . {\displaystyle {\mathcal {L}}_{K}\alpha =[d,i_{K}]\alpha =di_{K}\alpha -(-1)^{k-1}i_{K}\,d\alpha .} == History == In 1931, Władysław Ślebodziński introduced a new differential operator, later named Lie derivation by David van Dantzig, which can be applied to scalars, vectors, tensors and affine connections, and which proved to be a powerful instrument in the study of groups of automorphisms. The Lie derivatives of general geometric objects (i.e., sections of natural fiber bundles) were studied by A.
Nijenhuis, Y. Tashiro and K. Yano. For quite a long time, physicists used Lie derivatives without reference to the work of mathematicians. In 1940, Léon Rosenfeld—and before him (in 1921) Wolfgang Pauli—introduced what he called a ‘local variation’ δ ∗ A {\displaystyle \delta ^{\ast }A} of a geometric object A {\displaystyle A\,} induced by an infinitesimal transformation of coordinates generated by a vector field X {\displaystyle X\,} . One can easily prove that his δ ∗ A {\displaystyle \delta ^{\ast }A} is − L X ( A ) {\displaystyle -{\mathcal {L}}_{X}(A)\,} . == See also == Covariant derivative Connection (mathematics) Frölicher–Nijenhuis bracket Geodesic Killing field Derivative of the exponential map == Notes == == References == Abraham, Ralph; Marsden, Jerrold E. (1978). Foundations of Mechanics. London: Benjamin-Cummings. ISBN 0-8053-0102-X. See section 2.2. Bleecker, David (1981). Gauge Theory and Variational Principles. Addison-Wesley. ISBN 0-201-10096-7. See Chapter 0. Jost, Jürgen (2002). Riemannian Geometry and Geometric Analysis. Berlin: Springer. ISBN 3-540-42627-2. See section 1.6. Kolář, I.; Michor, P.; Slovák, J. (1993). Natural operations in differential geometry. Springer-Verlag. ISBN 9783662029503. Extensive discussion of Lie brackets, and the general theory of Lie derivatives. Lang, S. (1995). Differential and Riemannian manifolds. Springer-Verlag. ISBN 978-0-387-94338-1. For generalizations to infinite dimensions. Lang, S. (1999). Fundamentals of Differential Geometry. Springer-Verlag. ISBN 978-0-387-98593-0. For generalizations to infinite dimensions. Yano, K. (1957). The Theory of Lie Derivatives and its Applications. North-Holland. ISBN 978-0-7204-2104-0. Classical approach using coordinates. == External links == "Lie derivative", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia:Lie n-algebra#0
In mathematics, a Lie algebra (pronounced LEE) is a vector space g {\displaystyle {\mathfrak {g}}} together with an operation called the Lie bracket, an alternating bilinear map g × g → g {\displaystyle {\mathfrak {g}}\times {\mathfrak {g}}\rightarrow {\mathfrak {g}}} , that satisfies the Jacobi identity. In other words, a Lie algebra is an algebra over a field for which the multiplication operation (called the Lie bracket) is alternating and satisfies the Jacobi identity. The Lie bracket of two vectors x {\displaystyle x} and y {\displaystyle y} is denoted [ x , y ] {\displaystyle [x,y]} . A Lie algebra is typically a non-associative algebra. However, every associative algebra gives rise to a Lie algebra, consisting of the same vector space with the commutator Lie bracket, [ x , y ] = x y − y x {\displaystyle [x,y]=xy-yx} . Lie algebras are closely related to Lie groups, which are groups that are also smooth manifolds: every Lie group gives rise to a Lie algebra, which is the tangent space at the identity. (In this case, the Lie bracket measures the failure of commutativity for the Lie group.) Conversely, to any finite-dimensional Lie algebra over the real or complex numbers, there is a corresponding connected Lie group, unique up to covering spaces (Lie's third theorem). This correspondence allows one to study the structure and classification of Lie groups in terms of Lie algebras, which are simpler objects of linear algebra. In more detail: for any Lie group, the multiplication operation near the identity element 1 is commutative to first order. In other words, every Lie group G is (to first order) approximately a real vector space, namely the tangent space g {\displaystyle {\mathfrak {g}}} to G at the identity. To second order, the group operation may be non-commutative, and the second-order terms describing the non-commutativity of G near the identity give g {\displaystyle {\mathfrak {g}}} the structure of a Lie algebra. It is a remarkable fact that these second-order terms (the Lie algebra) completely determine the group structure of G near the identity. They even determine G globally, up to covering spaces. In physics, Lie groups appear as symmetry groups of physical systems, and their Lie algebras (tangent vectors near the identity) may be thought of as infinitesimal symmetry motions. Thus Lie algebras and their representations are used extensively in physics, notably in quantum mechanics and particle physics. An elementary example (not directly coming from an associative algebra) is the 3-dimensional space g = R 3 {\displaystyle {\mathfrak {g}}=\mathbb {R} ^{3}} with Lie bracket defined by the cross product [ x , y ] = x × y . {\displaystyle [x,y]=x\times y.} This is skew-symmetric since x × y = − y × x {\displaystyle x\times y=-y\times x} , and instead of associativity it satisfies the Jacobi identity: x × ( y × z ) + y × ( z × x ) + z × ( x × y ) = 0. {\displaystyle x\times (y\times z)+\ y\times (z\times x)+\ z\times (x\times y)\ =\ 0.} This is the Lie algebra of the Lie group of rotations of space, and each vector v ∈ R 3 {\displaystyle v\in \mathbb {R} ^{3}} may be pictured as an infinitesimal rotation around the axis v {\displaystyle v} , with angular speed equal to the magnitude of v {\displaystyle v} . The Lie bracket is a measure of the non-commutativity between two rotations. Since a rotation commutes with itself, one has the alternating property [ x , x ] = x × x = 0 {\displaystyle [x,x]=x\times x=0} . 
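As a quick numerical illustration of this example (a sketch assuming the NumPy library, not part of the original text), one can check the alternating property and the Jacobi identity for the cross-product bracket on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))  # three random vectors in R^3

bracket = np.cross  # the Lie bracket [x, y] = x × y

# Jacobi identity: x×(y×z) + y×(z×x) + z×(x×y) = 0
jacobi = (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
          + bracket(z, bracket(x, y)))
print(np.allclose(jacobi, 0))         # True
print(np.allclose(bracket(x, x), 0))  # alternating property: True
```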
== History == Lie algebras were introduced to study the concept of infinitesimal transformations by Sophus Lie in the 1870s, and independently discovered by Wilhelm Killing in the 1880s. The name Lie algebra was given by Hermann Weyl in the 1930s; in older texts, the term infinitesimal group was used. == Definition of a Lie algebra == A Lie algebra is a vector space g {\displaystyle \,{\mathfrak {g}}} over a field F {\displaystyle F} together with a binary operation [ ⋅ , ⋅ ] : g × g → g {\displaystyle [\,\cdot \,,\cdot \,]:{\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} called the Lie bracket, satisfying the following axioms: Bilinearity, [ a x + b y , z ] = a [ x , z ] + b [ y , z ] , {\displaystyle [ax+by,z]=a[x,z]+b[y,z],} [ z , a x + b y ] = a [ z , x ] + b [ z , y ] {\displaystyle [z,ax+by]=a[z,x]+b[z,y]} for all scalars a , b {\displaystyle a,b} in F {\displaystyle F} and all elements x , y , z {\displaystyle x,y,z} in g {\displaystyle {\mathfrak {g}}} . The Alternating property, [ x , x ] = 0 {\displaystyle [x,x]=0\ } for all x {\displaystyle x} in g {\displaystyle {\mathfrak {g}}} . The Jacobi identity, [ x , [ y , z ] ] + [ z , [ x , y ] ] + [ y , [ z , x ] ] = 0 {\displaystyle [x,[y,z]]+[z,[x,y]]+[y,[z,x]]=0\ } for all x , y , z {\displaystyle x,y,z} in g {\displaystyle {\mathfrak {g}}} . Given a Lie group, the Jacobi identity for its Lie algebra follows from the associativity of the group operation. Using bilinearity to expand the Lie bracket [ x + y , x + y ] {\displaystyle [x+y,x+y]} and using the alternating property shows that [ x , y ] + [ y , x ] = 0 {\displaystyle [x,y]+[y,x]=0} for all x , y {\displaystyle x,y} in g {\displaystyle {\mathfrak {g}}} . Thus bilinearity and the alternating property together imply Anticommutativity, [ x , y ] = − [ y , x ] , {\displaystyle [x,y]=-[y,x],\ } for all x , y {\displaystyle x,y} in g {\displaystyle {\mathfrak {g}}} . If the field does not have characteristic 2, then anticommutativity implies the alternating property, since it implies [ x , x ] = − [ x , x ] . {\displaystyle [x,x]=-[x,x].} It is customary to denote a Lie algebra by a lower-case fraktur letter such as g , h , b , n {\displaystyle {\mathfrak {g,h,b,n}}} . If a Lie algebra is associated with a Lie group, then the algebra is denoted by the fraktur version of the group's name: for example, the Lie algebra of SU(n) is s u ( n ) {\displaystyle {\mathfrak {su}}(n)} . === Generators and dimension === The dimension of a Lie algebra over a field means its dimension as a vector space. In physics, a vector space basis of the Lie algebra of a Lie group G may be called a set of generators for G. (They are "infinitesimal generators" for G, so to speak.) In mathematics, a set S of generators for a Lie algebra g {\displaystyle {\mathfrak {g}}} means a subset of g {\displaystyle {\mathfrak {g}}} such that any Lie subalgebra (as defined below) that contains S must be all of g {\displaystyle {\mathfrak {g}}} . Equivalently, g {\displaystyle {\mathfrak {g}}} is spanned (as a vector space) by all iterated brackets of elements of S. == Basic examples == === Abelian Lie algebras === A Lie algebra is called abelian if its Lie bracket is identically zero. Any vector space V {\displaystyle V} endowed with the identically zero Lie bracket becomes a Lie algebra. Every one-dimensional Lie algebra is abelian, by the alternating property of the Lie bracket. 
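Once a Lie algebra is presented by structure constants c_ijk, with [e_i, e_j] = Σ_k c_ijk e_k, the axioms can be tested mechanically. The sketch below (the helper is_lie_algebra is a hypothetical name, not from the original text) checks antisymmetry and the Jacobi identity for the structure constants of the cross-product algebra above.

```python
import itertools
import numpy as np

def is_lie_algebra(c, tol=1e-12):
    """Check antisymmetry and the Jacobi identity for structure
    constants c[i, j, k], meaning [e_i, e_j] = sum_k c[i, j, k] e_k."""
    n = c.shape[0]
    antisymmetric = np.allclose(c, -c.transpose(1, 0, 2), atol=tol)
    jacobi = all(
        abs(sum(c[i, j, m] * c[m, k, l]
                + c[j, k, m] * c[m, i, l]
                + c[k, i, m] * c[m, j, l] for m in range(n))) < tol
        for i, j, k, l in itertools.product(range(n), repeat=4)
    )
    return antisymmetric and jacobi

# Structure constants of the cross-product algebra: [e_i, e_j] = eps_ijk e_k
c = np.zeros((3, 3, 3))
c[0, 1, 2] = c[1, 2, 0] = c[2, 0, 1] = 1
c[1, 0, 2] = c[2, 1, 0] = c[0, 2, 1] = -1
print(is_lie_algebra(c))  # True
```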
=== The Lie algebra of matrices === On an associative algebra A {\displaystyle A} over a field F {\displaystyle F} with multiplication written as x y {\displaystyle xy} , a Lie bracket may be defined by the commutator [ x , y ] = x y − y x {\displaystyle [x,y]=xy-yx} . With this bracket, A {\displaystyle A} is a Lie algebra. (The Jacobi identity follows from the associativity of the multiplication on A {\displaystyle A} .) The endomorphism ring of an F {\displaystyle F} -vector space V {\displaystyle V} with the above Lie bracket is denoted g l ( V ) {\displaystyle {\mathfrak {gl}}(V)} . For a field F and a positive integer n, the space of n × n matrices over F, denoted g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} or g l n ( F ) {\displaystyle {\mathfrak {gl}}_{n}(F)} , is a Lie algebra with bracket given by the commutator of matrices: [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} . This is a special case of the previous example; it is a key example of a Lie algebra. It is called the general linear Lie algebra. When F is the real numbers, g l ( n , R ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {R} )} is the Lie algebra of the general linear group G L ( n , R ) {\displaystyle \mathrm {GL} (n,\mathbb {R} )} , the group of invertible n x n real matrices (or equivalently, matrices with nonzero determinant), where the group operation is matrix multiplication. Likewise, g l ( n , C ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {C} )} is the Lie algebra of the complex Lie group G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} . The Lie bracket on g l ( n , R ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {R} )} describes the failure of commutativity for matrix multiplication, or equivalently for the composition of linear maps. For any field F, g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} can be viewed as the Lie algebra of the algebraic group G L ( n ) {\displaystyle \mathrm {GL} (n)} over F. == Definitions == === Subalgebras, ideals and homomorphisms === The Lie bracket is not required to be associative, meaning that [ [ x , y ] , z ] {\displaystyle [[x,y],z]} need not be equal to [ x , [ y , z ] ] {\displaystyle [x,[y,z]]} . Nonetheless, much of the terminology for associative rings and algebras (and also for groups) has analogs for Lie algebras. A Lie subalgebra is a linear subspace h ⊆ g {\displaystyle {\mathfrak {h}}\subseteq {\mathfrak {g}}} which is closed under the Lie bracket. An ideal i ⊆ g {\displaystyle {\mathfrak {i}}\subseteq {\mathfrak {g}}} is a linear subspace that satisfies the stronger condition: [ g , i ] ⊆ i . {\displaystyle [{\mathfrak {g}},{\mathfrak {i}}]\subseteq {\mathfrak {i}}.} In the correspondence between Lie groups and Lie algebras, subgroups correspond to Lie subalgebras, and normal subgroups correspond to ideals. A Lie algebra homomorphism is a linear map compatible with the respective Lie brackets: ϕ : g → h , ϕ ( [ x , y ] ) = [ ϕ ( x ) , ϕ ( y ) ] for all x , y ∈ g . {\displaystyle \phi \colon {\mathfrak {g}}\to {\mathfrak {h}},\quad \phi ([x,y])=[\phi (x),\phi (y)]\ {\text{for all}}\ x,y\in {\mathfrak {g}}.} An isomorphism of Lie algebras is a bijective homomorphism. As with normal subgroups in groups, ideals in Lie algebras are precisely the kernels of homomorphisms. 
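A quick numerical check (a sketch assuming NumPy, not part of the original text) that the commutator bracket on matrices satisfies the Jacobi identity, and that every commutator is traceless, which is why the traceless matrices form a Lie subalgebra (the special linear Lie algebra discussed below):

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y, Z = rng.standard_normal((3, 4, 4))  # random elements of gl(4, R)

def bracket(A, B):
    # Commutator bracket on gl(n, R)
    return A @ B - B @ A

jacobi = (bracket(X, bracket(Y, Z))
          + bracket(Z, bracket(X, Y))
          + bracket(Y, bracket(Z, X)))
print(np.allclose(jacobi, 0))                  # Jacobi identity: True
print(np.isclose(np.trace(bracket(X, Y)), 0))  # commutators are traceless
```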
Given a Lie algebra g {\displaystyle {\mathfrak {g}}} and an ideal i {\displaystyle {\mathfrak {i}}} in it, the quotient Lie algebra g / i {\displaystyle {\mathfrak {g}}/{\mathfrak {i}}} is defined, with a surjective homomorphism g → g / i {\displaystyle {\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}} of Lie algebras. The first isomorphism theorem holds for Lie algebras: for any homomorphism ϕ : g → h {\displaystyle \phi \colon {\mathfrak {g}}\to {\mathfrak {h}}} of Lie algebras, the image of ϕ {\displaystyle \phi } is a Lie subalgebra of h {\displaystyle {\mathfrak {h}}} that is isomorphic to g / ker ( ϕ ) {\displaystyle {\mathfrak {g}}/{\text{ker}}(\phi )} . For the Lie algebra of a Lie group, the Lie bracket is a kind of infinitesimal commutator. As a result, for any Lie algebra, two elements x , y ∈ g {\displaystyle x,y\in {\mathfrak {g}}} are said to commute if their bracket vanishes: [ x , y ] = 0 {\displaystyle [x,y]=0} . The centralizer subalgebra of a subset S ⊂ g {\displaystyle S\subset {\mathfrak {g}}} is the set of elements commuting with S {\displaystyle S} : that is, z g ( S ) = { x ∈ g : [ x , s ] = 0 for all s ∈ S } {\displaystyle {\mathfrak {z}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]=0\ {\text{ for all }}s\in S\}} . The centralizer of g {\displaystyle {\mathfrak {g}}} itself is the center z ( g ) {\displaystyle {\mathfrak {z}}({\mathfrak {g}})} . Similarly, for a subspace S, the normalizer subalgebra of S {\displaystyle S} is n g ( S ) = { x ∈ g : [ x , s ] ∈ S for all s ∈ S } {\displaystyle {\mathfrak {n}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]\in S\ {\text{ for all}}\ s\in S\}} . If S {\displaystyle S} is a Lie subalgebra, n g ( S ) {\displaystyle {\mathfrak {n}}_{\mathfrak {g}}(S)} is the largest subalgebra such that S {\displaystyle S} is an ideal of n g ( S ) {\displaystyle {\mathfrak {n}}_{\mathfrak {g}}(S)} . ==== Example ==== The subspace t n {\displaystyle {\mathfrak {t}}_{n}} of diagonal matrices in g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} is an abelian Lie subalgebra. (It is a Cartan subalgebra of g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} , analogous to a maximal torus in the theory of compact Lie groups.) Here t n {\displaystyle {\mathfrak {t}}_{n}} is not an ideal in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} for n ≥ 2 {\displaystyle n\geq 2} . For example, when n = 2 {\displaystyle n=2} , this follows from the calculation: [ [ a b c d ] , [ x 0 0 y ] ] = [ a x b y c x d y ] − [ a x b x c y d y ] = [ 0 b ( y − x ) c ( x − y ) 0 ] {\displaystyle {\begin{aligned}\left[{\begin{bmatrix}a&b\\c&d\end{bmatrix}},{\begin{bmatrix}x&0\\0&y\end{bmatrix}}\right]&={\begin{bmatrix}ax&by\\cx&dy\\\end{bmatrix}}-{\begin{bmatrix}ax&bx\\cy&dy\\\end{bmatrix}}\\&={\begin{bmatrix}0&b(y-x)\\c(x-y)&0\end{bmatrix}}\end{aligned}}} (which is not always in t 2 {\displaystyle {\mathfrak {t}}_{2}} ). Every one-dimensional linear subspace of a Lie algebra g {\displaystyle {\mathfrak {g}}} is an abelian Lie subalgebra, but it need not be an ideal. === Product and semidirect product === For two Lie algebras g {\displaystyle {\mathfrak {g}}} and g ′ {\displaystyle {\mathfrak {g'}}} , the product Lie algebra is the vector space g × g ′ {\displaystyle {\mathfrak {g}}\times {\mathfrak {g'}}} consisting of all ordered pairs ( x , x ′ ) , x ∈ g , x ′ ∈ g ′ {\displaystyle (x,x'),\,x\in {\mathfrak {g}},\ x'\in {\mathfrak {g'}}} , with Lie bracket [ ( x , x ′ ) , ( y , y ′ ) ] = ( [ x , y ] , [ x ′ , y ′ ] ) . 
{\displaystyle [(x,x'),(y,y')]=([x,y],[x',y']).} This is the product in the category of Lie algebras. Note that the copies of g {\displaystyle {\mathfrak {g}}} and g ′ {\displaystyle {\mathfrak {g}}'} in g × g ′ {\displaystyle {\mathfrak {g}}\times {\mathfrak {g'}}} commute with each other: [ ( x , 0 ) , ( 0 , x ′ ) ] = 0. {\displaystyle [(x,0),(0,x')]=0.} Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra and i {\displaystyle {\mathfrak {i}}} an ideal of g {\displaystyle {\mathfrak {g}}} . If the canonical map g → g / i {\displaystyle {\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}} splits (i.e., admits a section g / i → g {\displaystyle {\mathfrak {g}}/{\mathfrak {i}}\to {\mathfrak {g}}} , as a homomorphism of Lie algebras), then g {\displaystyle {\mathfrak {g}}} is said to be a semidirect product of i {\displaystyle {\mathfrak {i}}} and g / i {\displaystyle {\mathfrak {g}}/{\mathfrak {i}}} , g = g / i ⋉ i {\displaystyle {\mathfrak {g}}={\mathfrak {g}}/{\mathfrak {i}}\ltimes {\mathfrak {i}}} . See also semidirect sum of Lie algebras. === Derivations === For an algebra A over a field F, a derivation of A over F is a linear map D : A → A {\displaystyle D\colon A\to A} that satisfies the Leibniz rule D ( x y ) = D ( x ) y + x D ( y ) {\displaystyle D(xy)=D(x)y+xD(y)} for all x , y ∈ A {\displaystyle x,y\in A} . (The definition makes sense for a possibly non-associative algebra.) Given two derivations D 1 {\displaystyle D_{1}} and D 2 {\displaystyle D_{2}} , their commutator [ D 1 , D 2 ] := D 1 D 2 − D 2 D 1 {\displaystyle [D_{1},D_{2}]:=D_{1}D_{2}-D_{2}D_{1}} is again a derivation. This operation makes the space Der k ( A ) {\displaystyle {\text{Der}}_{k}(A)} of all derivations of A over F into a Lie algebra. Informally speaking, the space of derivations of A is the Lie algebra of the automorphism group of A. (This is literally true when the automorphism group is a Lie group, for example when F is the real numbers and A has finite dimension as a vector space.) For this reason, spaces of derivations are a natural way to construct Lie algebras: they are the "infinitesimal automorphisms" of A. Indeed, writing out the condition that ( 1 + ϵ D ) ( x y ) ≡ ( 1 + ϵ D ) ( x ) ⋅ ( 1 + ϵ D ) ( y ) ( mod ϵ 2 ) {\displaystyle (1+\epsilon D)(xy)\equiv (1+\epsilon D)(x)\cdot (1+\epsilon D)(y){\pmod {\epsilon ^{2}}}} (where 1 denotes the identity map on A) gives exactly the definition of D being a derivation. Example: the Lie algebra of vector fields. Let A be the ring C ∞ ( X ) {\displaystyle C^{\infty }(X)} of smooth functions on a smooth manifold X. Then a derivation of A over R {\displaystyle \mathbb {R} } is equivalent to a vector field on X. (A vector field v gives a derivation of the space of smooth functions by differentiating functions in the direction of v.) This makes the space Vect ( X ) {\displaystyle {\text{Vect}}(X)} of vector fields into a Lie algebra (see Lie bracket of vector fields). Informally speaking, Vect ( X ) {\displaystyle {\text{Vect}}(X)} is the Lie algebra of the diffeomorphism group of X. So the Lie bracket of vector fields describes the non-commutativity of the diffeomorphism group. An action of a Lie group G on a manifold X determines a homomorphism of Lie algebras g → Vect ( X ) {\displaystyle {\mathfrak {g}}\to {\text{Vect}}(X)} . (An example is illustrated below.) 
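As a small illustration of the vector-field example (a SymPy sketch, not part of the original text; the operators D1 and D2 are illustrative choices), one can check the Leibniz rule for a vector field acting on functions, and that the commutator of two such derivations is again a derivation:

```python
import sympy as sp

x, y = sp.symbols('x y')
f, g = x**2 * y, sp.sin(x) + y  # two test functions on R^2

# Vector fields acting on functions as derivations
D1 = lambda h: x * sp.diff(h, y)  # the vector field x d/dy
D2 = lambda h: y * sp.diff(h, x)  # the vector field y d/dx

# Leibniz rule: D1(fg) = D1(f) g + f D1(g)
print(sp.simplify(D1(f * g) - (D1(f) * g + f * D1(g))))  # 0

# The commutator [D1, D2] is again a derivation
# (it is the Lie bracket of the two vector fields)
comm = lambda h: D1(D2(h)) - D2(D1(h))
print(sp.simplify(comm(f * g) - (comm(f) * g + f * comm(g))))  # 0
```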
A Lie algebra can be viewed as a non-associative algebra, and so each Lie algebra g {\displaystyle {\mathfrak {g}}} over a field F determines its Lie algebra of derivations, Der F ( g ) {\displaystyle {\text{Der}}_{F}({\mathfrak {g}})} . That is, a derivation of g {\displaystyle {\mathfrak {g}}} is a linear map D : g → g {\displaystyle D\colon {\mathfrak {g}}\to {\mathfrak {g}}} such that D ( [ x , y ] ) = [ D ( x ) , y ] + [ x , D ( y ) ] {\displaystyle D([x,y])=[D(x),y]+[x,D(y)]} . The inner derivation associated to any x ∈ g {\displaystyle x\in {\mathfrak {g}}} is the adjoint mapping a d x {\displaystyle \mathrm {ad} _{x}} defined by a d x ( y ) := [ x , y ] {\displaystyle \mathrm {ad} _{x}(y):=[x,y]} . (This is a derivation as a consequence of the Jacobi identity.) That gives a homomorphism of Lie algebras, ad : g → Der F ( g ) {\displaystyle \operatorname {ad} \colon {\mathfrak {g}}\to {\text{Der}}_{F}({\mathfrak {g}})} . The image Inn F ( g ) {\displaystyle {\text{Inn}}_{F}({\mathfrak {g}})} is an ideal in Der F ( g ) {\displaystyle {\text{Der}}_{F}({\mathfrak {g}})} , and the Lie algebra of outer derivations is defined as the quotient Lie algebra, Out F ( g ) = Der F ( g ) / Inn F ( g ) {\displaystyle {\text{Out}}_{F}({\mathfrak {g}})={\text{Der}}_{F}({\mathfrak {g}})/{\text{Inn}}_{F}({\mathfrak {g}})} . (This is exactly analogous to the outer automorphism group of a group.) For a semisimple Lie algebra (defined below) over a field of characteristic zero, every derivation is inner. This is related to the theorem that the outer automorphism group of a semisimple Lie group is finite. In contrast, an abelian Lie algebra has many outer derivations. Namely, for a vector space V {\displaystyle V} with Lie bracket zero, the Lie algebra Out F ( V ) {\displaystyle {\text{Out}}_{F}(V)} can be identified with g l ( V ) {\displaystyle {\mathfrak {gl}}(V)} . == Examples == === Matrix Lie algebras === A matrix group is a Lie group consisting of invertible matrices, G ⊂ G L ( n , R ) {\displaystyle G\subset \mathrm {GL} (n,\mathbb {R} )} , where the group operation of G is matrix multiplication. The corresponding Lie algebra g {\displaystyle {\mathfrak {g}}} is the space of matrices which are tangent vectors to G inside the linear space M n ( R ) {\displaystyle M_{n}(\mathbb {R} )} : this consists of derivatives of smooth curves in G at the identity matrix I {\displaystyle I} : g = { X = c ′ ( 0 ) ∈ M n ( R ) : smooth c : R → G , c ( 0 ) = I } . {\displaystyle {\mathfrak {g}}=\{X=c'(0)\in M_{n}(\mathbb {R} ):{\text{ smooth }}c:\mathbb {R} \to G,\ c(0)=I\}.} The Lie bracket of g {\displaystyle {\mathfrak {g}}} is given by the commutator of matrices, [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} . Given a Lie algebra g ⊂ g l ( n , R ) {\displaystyle {\mathfrak {g}}\subset {\mathfrak {gl}}(n,\mathbb {R} )} , one can recover the Lie group as the subgroup generated by the matrix exponential of elements of g {\displaystyle {\mathfrak {g}}} . (To be precise, this gives the identity component of G, if G is not connected.) Here the exponential mapping exp : M n ( R ) → M n ( R ) {\displaystyle \exp :M_{n}(\mathbb {R} )\to M_{n}(\mathbb {R} )} is defined by exp ⁡ ( X ) = I + X + 1 2 ! X 2 + 1 3 ! X 3 + ⋯ {\displaystyle \exp(X)=I+X+{\tfrac {1}{2!}}X^{2}+{\tfrac {1}{3!}}X^{3}+\cdots } , which converges for every matrix X {\displaystyle X} . 
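The sketch below (assuming NumPy and SciPy, whose scipy.linalg.expm implements the matrix exponential; not part of the original text) compares the power series with a library implementation, and previews the orthogonal case treated below: exponentiating a skew-symmetric matrix gives an orthogonal matrix of determinant 1.

```python
import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3))

# The truncated series I + X + X^2/2! + ... matches scipy's expm
series = sum(np.linalg.matrix_power(X, k) / math.factorial(k) for k in range(25))
print(np.allclose(series, expm(X)))  # True

# Exponentiating a skew-symmetric matrix (an element of so(3), see below)
# gives an orthogonal matrix of determinant 1
S = np.array([[0.0, -0.3, 0.2],
              [0.3, 0.0, -0.1],
              [-0.2, 0.1, 0.0]])
R = expm(S)
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```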
The same comments apply to complex Lie subgroups of G L ( n , C ) {\displaystyle GL(n,\mathbb {C} )} and the complex matrix exponential, exp : M n ( C ) → M n ( C ) {\displaystyle \exp :M_{n}(\mathbb {C} )\to M_{n}(\mathbb {C} )} (defined by the same formula). Here are some matrix Lie groups and their Lie algebras. For a positive integer n, the special linear group S L ( n , R ) {\displaystyle \mathrm {SL} (n,\mathbb {R} )} consists of all real n × n matrices with determinant 1. This is the group of linear maps from R n {\displaystyle \mathbb {R} ^{n}} to itself that preserve volume and orientation. More abstractly, S L ( n , R ) {\displaystyle \mathrm {SL} (n,\mathbb {R} )} is the commutator subgroup of the general linear group G L ( n , R ) {\displaystyle \mathrm {GL} (n,\mathbb {R} )} . Its Lie algebra s l ( n , R ) {\displaystyle {\mathfrak {sl}}(n,\mathbb {R} )} consists of all real n × n matrices with trace 0. Similarly, one can define the analogous complex Lie group S L ( n , C ) {\displaystyle {\rm {SL}}(n,\mathbb {C} )} and its Lie algebra s l ( n , C ) {\displaystyle {\mathfrak {sl}}(n,\mathbb {C} )} . The orthogonal group O ( n ) {\displaystyle \mathrm {O} (n)} plays a basic role in geometry: it is the group of linear maps from R n {\displaystyle \mathbb {R} ^{n}} to itself that preserve the length of vectors. For example, rotations and reflections belong to O ( n ) {\displaystyle \mathrm {O} (n)} . Equivalently, this is the group of n x n orthogonal matrices, meaning that A T = A − 1 {\displaystyle A^{\mathrm {T} }=A^{-1}} , where A T {\displaystyle A^{\mathrm {T} }} denotes the transpose of a matrix. The orthogonal group has two connected components; the identity component is called the special orthogonal group S O ( n ) {\displaystyle \mathrm {SO} (n)} , consisting of the orthogonal matrices with determinant 1. Both groups have the same Lie algebra s o ( n ) {\displaystyle {\mathfrak {so}}(n)} , the subspace of skew-symmetric matrices in g l ( n , R ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {R} )} ( X T = − X {\displaystyle X^{\rm {T}}=-X} ). See also infinitesimal rotations with skew-symmetric matrices. The complex orthogonal group O ( n , C ) {\displaystyle \mathrm {O} (n,\mathbb {C} )} , its identity component S O ( n , C ) {\displaystyle \mathrm {SO} (n,\mathbb {C} )} , and the Lie algebra s o ( n , C ) {\displaystyle {\mathfrak {so}}(n,\mathbb {C} )} are given by the same formulas applied to n x n complex matrices. Equivalently, O ( n , C ) {\displaystyle \mathrm {O} (n,\mathbb {C} )} is the subgroup of G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} that preserves the standard symmetric bilinear form on C n {\displaystyle \mathbb {C} ^{n}} . The unitary group U ( n ) {\displaystyle \mathrm {U} (n)} is the subgroup of G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} that preserves the length of vectors in C n {\displaystyle \mathbb {C} ^{n}} (with respect to the standard Hermitian inner product). Equivalently, this is the group of n × n unitary matrices (satisfying A ∗ = A − 1 {\displaystyle A^{*}=A^{-1}} , where A ∗ {\displaystyle A^{*}} denotes the conjugate transpose of a matrix). Its Lie algebra u ( n ) {\displaystyle {\mathfrak {u}}(n)} consists of the skew-hermitian matrices in g l ( n , C ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {C} )} ( X ∗ = − X {\displaystyle X^{*}=-X} ). This is a Lie algebra over R {\displaystyle \mathbb {R} } , not over C {\displaystyle \mathbb {C} } . 
(Indeed, i times a skew-hermitian matrix is hermitian, rather than skew-hermitian.) Likewise, the unitary group U ( n ) {\displaystyle \mathrm {U} (n)} is a real Lie subgroup of the complex Lie group G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} . For example, U ( 1 ) {\displaystyle \mathrm {U} (1)} is the circle group, and its Lie algebra (from this point of view) is i R ⊂ C = g l ( 1 , C ) {\displaystyle i\mathbb {R} \subset \mathbb {C} ={\mathfrak {gl}}(1,\mathbb {C} )} . The special unitary group S U ( n ) {\displaystyle \mathrm {SU} (n)} is the subgroup of matrices with determinant 1 in U ( n ) {\displaystyle \mathrm {U} (n)} . Its Lie algebra s u ( n ) {\displaystyle {\mathfrak {su}}(n)} consists of the skew-hermitian matrices with trace zero. The symplectic group S p ( 2 n , R ) {\displaystyle \mathrm {Sp} (2n,\mathbb {R} )} is the subgroup of G L ( 2 n , R ) {\displaystyle \mathrm {GL} (2n,\mathbb {R} )} that preserves the standard alternating bilinear form on R 2 n {\displaystyle \mathbb {R} ^{2n}} . Its Lie algebra is the symplectic Lie algebra s p ( 2 n , R ) {\displaystyle {\mathfrak {sp}}(2n,\mathbb {R} )} . The classical Lie algebras are those listed above, along with variants over any field. === Two dimensions === Some Lie algebras of low dimension are described here. See the classification of low-dimensional real Lie algebras for further examples. There is a unique nonabelian Lie algebra g {\displaystyle {\mathfrak {g}}} of dimension 2 over any field F, up to isomorphism. Here g {\displaystyle {\mathfrak {g}}} has a basis X , Y {\displaystyle X,Y} for which the bracket is given by [ X , Y ] = Y {\displaystyle \left[X,Y\right]=Y} . (This determines the Lie bracket completely, because the axioms imply that [ X , X ] = 0 {\displaystyle [X,X]=0} and [ Y , Y ] = 0 {\displaystyle [Y,Y]=0} .) Over the real numbers, g {\displaystyle {\mathfrak {g}}} can be viewed as the Lie algebra of the Lie group G = A f f ( 1 , R ) {\displaystyle G=\mathrm {Aff} (1,\mathbb {R} )} of affine transformations of the real line, x ↦ a x + b {\displaystyle x\mapsto ax+b} . The affine group G can be identified with the group of matrices ( a b 0 1 ) {\displaystyle \left({\begin{array}{cc}a&b\\0&1\end{array}}\right)} under matrix multiplication, with a , b ∈ R {\displaystyle a,b\in \mathbb {R} } , a ≠ 0 {\displaystyle a\neq 0} . Its Lie algebra is the Lie subalgebra g {\displaystyle {\mathfrak {g}}} of g l ( 2 , R ) {\displaystyle {\mathfrak {gl}}(2,\mathbb {R} )} consisting of all matrices ( c d 0 0 ) . {\displaystyle \left({\begin{array}{cc}c&d\\0&0\end{array}}\right).} In these terms, the basis above for g {\displaystyle {\mathfrak {g}}} is given by the matrices X = ( 1 0 0 0 ) , Y = ( 0 1 0 0 ) . {\displaystyle X=\left({\begin{array}{cc}1&0\\0&0\end{array}}\right),\qquad Y=\left({\begin{array}{cc}0&1\\0&0\end{array}}\right).} For any field F {\displaystyle F} , the 1-dimensional subspace F ⋅ Y {\displaystyle F\cdot Y} is an ideal in the 2-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} , by the formula [ X , Y ] = Y ∈ F ⋅ Y {\displaystyle [X,Y]=Y\in F\cdot Y} . Both of the Lie algebras F ⋅ Y {\displaystyle F\cdot Y} and g / ( F ⋅ Y ) {\displaystyle {\mathfrak {g}}/(F\cdot Y)} are abelian (because 1-dimensional). In this sense, g {\displaystyle {\mathfrak {g}}} can be broken into abelian "pieces", meaning that it is solvable (though not nilpotent), in the terminology below. 
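A direct check of the bracket relation for this two-dimensional algebra, using the matrix basis just given (a minimal NumPy sketch, not part of the original text):

```python
import numpy as np

# The basis of the nonabelian 2-dimensional Lie algebra given above
X = np.array([[1.0, 0.0],
              [0.0, 0.0]])
Y = np.array([[0.0, 1.0],
              [0.0, 0.0]])

bracket = lambda A, B: A @ B - B @ A
print(np.allclose(bracket(X, Y), Y))  # [X, Y] = Y, as stated
```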
=== Three dimensions === The Heisenberg algebra h 3 ( F ) {\displaystyle {\mathfrak {h}}_{3}(F)} over a field F is the three-dimensional Lie algebra with a basis X , Y , Z {\displaystyle X,Y,Z} such that [ X , Y ] = Z , [ X , Z ] = 0 , [ Y , Z ] = 0 {\displaystyle [X,Y]=Z,\quad [X,Z]=0,\quad [Y,Z]=0} . It can be viewed as the Lie algebra of 3×3 strictly upper-triangular matrices, with the commutator Lie bracket and the basis X = ( 0 1 0 0 0 0 0 0 0 ) , Y = ( 0 0 0 0 0 1 0 0 0 ) , Z = ( 0 0 1 0 0 0 0 0 0 ) . {\displaystyle X=\left({\begin{array}{ccc}0&1&0\\0&0&0\\0&0&0\end{array}}\right),\quad Y=\left({\begin{array}{ccc}0&0&0\\0&0&1\\0&0&0\end{array}}\right),\quad Z=\left({\begin{array}{ccc}0&0&1\\0&0&0\\0&0&0\end{array}}\right)~.\quad } Over the real numbers, h 3 ( R ) {\displaystyle {\mathfrak {h}}_{3}(\mathbb {R} )} is the Lie algebra of the Heisenberg group H 3 ( R ) {\displaystyle \mathrm {H} _{3}(\mathbb {R} )} , that is, the group of matrices ( 1 a c 0 1 b 0 0 1 ) {\displaystyle \left({\begin{array}{ccc}1&a&c\\0&1&b\\0&0&1\end{array}}\right)} under matrix multiplication. For any field F, the center of h 3 ( F ) {\displaystyle {\mathfrak {h}}_{3}(F)} is the 1-dimensional ideal F ⋅ Z {\displaystyle F\cdot Z} , and the quotient h 3 ( F ) / ( F ⋅ Z ) {\displaystyle {\mathfrak {h}}_{3}(F)/(F\cdot Z)} is abelian, isomorphic to F 2 {\displaystyle F^{2}} . In the terminology below, it follows that h 3 ( F ) {\displaystyle {\mathfrak {h}}_{3}(F)} is nilpotent (though not abelian). The Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} of the rotation group SO(3) is the space of skew-symmetric 3 × 3 matrices over R {\displaystyle \mathbb {R} } . A basis is given by the three matrices F 1 = ( 0 0 0 0 0 − 1 0 1 0 ) , F 2 = ( 0 0 1 0 0 0 − 1 0 0 ) , F 3 = ( 0 − 1 0 1 0 0 0 0 0 ) . {\displaystyle F_{1}=\left({\begin{array}{ccc}0&0&0\\0&0&-1\\0&1&0\end{array}}\right),\quad F_{2}=\left({\begin{array}{ccc}0&0&1\\0&0&0\\-1&0&0\end{array}}\right),\quad F_{3}=\left({\begin{array}{ccc}0&-1&0\\1&0&0\\0&0&0\end{array}}\right)~.\quad } The commutation relations among these generators are [ F 1 , F 2 ] = F 3 , {\displaystyle [F_{1},F_{2}]=F_{3},} [ F 2 , F 3 ] = F 1 , {\displaystyle [F_{2},F_{3}]=F_{1},} [ F 3 , F 1 ] = F 2 . {\displaystyle [F_{3},F_{1}]=F_{2}.} The cross product of vectors in R 3 {\displaystyle \mathbb {R} ^{3}} is given by the same formula in terms of the standard basis; so that Lie algebra is isomorphic to s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . Also, s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} is equivalent to the Lie algebra of spin angular-momentum component operators for spin-1 particles in quantum mechanics. The Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} cannot be broken into pieces in the way that the previous examples can: it is simple, meaning that it is not abelian and its only ideals are 0 and all of s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . Another simple Lie algebra of dimension 3, in this case over C {\displaystyle \mathbb {C} } , is the space s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} of 2 × 2 matrices of trace zero. A basis is given by the three matrices H = ( 1 0 0 − 1 ) , E = ( 0 1 0 0 ) , F = ( 0 0 1 0 ) . 
{\displaystyle H=\left({\begin{array}{cc}1&0\\0&-1\end{array}}\right),\ E=\left({\begin{array}{cc}0&1\\0&0\end{array}}\right),\ F=\left({\begin{array}{cc}0&0\\1&0\end{array}}\right).} The Lie bracket is given by: [ H , E ] = 2 E , {\displaystyle [H,E]=2E,} [ H , F ] = − 2 F , {\displaystyle [H,F]=-2F,} [ E , F ] = H . {\displaystyle [E,F]=H.} Using these formulas, one can show that the Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} is simple, and classify its finite-dimensional representations (defined below). In the terminology of quantum mechanics, one can think of E and F as raising and lowering operators. Indeed, for any representation of s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} , the relations above imply that E maps the c-eigenspace of H (for a complex number c) into the ( c + 2 ) {\displaystyle (c+2)} -eigenspace, while F maps the c-eigenspace into the ( c − 2 ) {\displaystyle (c-2)} -eigenspace. The Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} is isomorphic to the complexification of s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} , meaning the tensor product s o ( 3 ) ⊗ R C {\displaystyle {\mathfrak {so}}(3)\otimes _{\mathbb {R} }\mathbb {C} } . The formulas for the Lie bracket are easier to analyze in the case of s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} . As a result, it is common to analyze complex representations of the group S O ( 3 ) {\displaystyle \mathrm {SO} (3)} by relating them to representations of the Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} . === Infinite dimensions === The Lie algebra of vector fields on a smooth manifold of positive dimension is an infinite-dimensional Lie algebra over R {\displaystyle \mathbb {R} } . The Kac–Moody algebras are a large class of infinite-dimensional Lie algebras, say over C {\displaystyle \mathbb {C} } , with structure much like that of the finite-dimensional simple Lie algebras (such as s l ( n , C ) {\displaystyle {\mathfrak {sl}}(n,\mathbb {C} )} ). The Moyal algebra is an infinite-dimensional Lie algebra that contains all the classical Lie algebras as subalgebras. The Virasoro algebra is important in string theory. The functor that takes a Lie algebra over a field F to the underlying vector space has a left adjoint V ↦ L ( V ) {\displaystyle V\mapsto L(V)} , called the free Lie algebra on a vector space V. It is spanned by all iterated Lie brackets of elements of V, modulo only the relations coming from the definition of a Lie algebra. The free Lie algebra L ( V ) {\displaystyle L(V)} is infinite-dimensional for V of dimension at least 2. == Representations == === Definitions === Given a vector space V, let g l ( V ) {\displaystyle {\mathfrak {gl}}(V)} denote the Lie algebra consisting of all linear maps from V to itself, with bracket given by [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} . A representation of a Lie algebra g {\displaystyle {\mathfrak {g}}} on V is a Lie algebra homomorphism π : g → g l ( V ) . {\displaystyle \pi \colon {\mathfrak {g}}\to {\mathfrak {gl}}(V).} That is, π {\displaystyle \pi } sends each element of g {\displaystyle {\mathfrak {g}}} to a linear map from V to itself, in such a way that the Lie bracket on g {\displaystyle {\mathfrak {g}}} corresponds to the commutator of linear maps. A representation is said to be faithful if its kernel is zero. 
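The commutation relations for H, E, F can be verified directly from the matrices above (a minimal NumPy sketch, not part of the original text):

```python
import numpy as np

H = np.array([[1, 0], [0, -1]])
E = np.array([[0, 1], [0, 0]])
F = np.array([[0, 0], [1, 0]])

bracket = lambda A, B: A @ B - B @ A
print(np.array_equal(bracket(H, E), 2 * E))   # [H, E] = 2E
print(np.array_equal(bracket(H, F), -2 * F))  # [H, F] = -2F
print(np.array_equal(bracket(E, F), H))       # [E, F] = H
```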
Ado's theorem states that every finite-dimensional Lie algebra over a field of characteristic zero has a faithful representation on a finite-dimensional vector space. Kenkichi Iwasawa extended this result to finite-dimensional Lie algebras over a field of any characteristic. Equivalently, every finite-dimensional Lie algebra over a field F is isomorphic to a Lie subalgebra of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} for some positive integer n. === Adjoint representation === For any Lie algebra g {\displaystyle {\mathfrak {g}}} , the adjoint representation is the representation ad : g → g l ( g ) {\displaystyle \operatorname {ad} \colon {\mathfrak {g}}\to {\mathfrak {gl}}({\mathfrak {g}})} given by ad ⁡ ( x ) ( y ) = [ x , y ] {\displaystyle \operatorname {ad} (x)(y)=[x,y]} . (This is a representation of g {\displaystyle {\mathfrak {g}}} by the Jacobi identity.) === Goals of representation theory === One important aspect of the study of Lie algebras (especially semisimple Lie algebras, as defined below) is the study of their representations. Although Ado's theorem is an important result, the primary goal of representation theory is not to find a faithful representation of a given Lie algebra g {\displaystyle {\mathfrak {g}}} . Indeed, in the semisimple case, the adjoint representation is already faithful. Rather, the goal is to understand all possible representations of g {\displaystyle {\mathfrak {g}}} . For a semisimple Lie algebra over a field of characteristic zero, Weyl's theorem says that every finite-dimensional representation is a direct sum of irreducible representations (those with no nontrivial invariant subspaces). The finite-dimensional irreducible representations are well understood from several points of view; see the representation theory of semisimple Lie algebras and the Weyl character formula. === Universal enveloping algebra === The functor that takes an associative algebra A over a field F to A as a Lie algebra (by [ X , Y ] := X Y − Y X {\displaystyle [X,Y]:=XY-YX} ) has a left adjoint g ↦ U ( g ) {\displaystyle {\mathfrak {g}}\mapsto U({\mathfrak {g}})} , called the universal enveloping algebra. To construct this: given a Lie algebra g {\displaystyle {\mathfrak {g}}} over F, let T ( g ) = F ⊕ g ⊕ ( g ⊗ g ) ⊕ ( g ⊗ g ⊗ g ) ⊕ ⋯ {\displaystyle T({\mathfrak {g}})=F\oplus {\mathfrak {g}}\oplus ({\mathfrak {g}}\otimes {\mathfrak {g}})\oplus ({\mathfrak {g}}\otimes {\mathfrak {g}}\otimes {\mathfrak {g}})\oplus \cdots } be the tensor algebra on g {\displaystyle {\mathfrak {g}}} , also called the free associative algebra on the vector space g {\displaystyle {\mathfrak {g}}} . Here ⊗ {\displaystyle \otimes } denotes the tensor product of F-vector spaces. Let I be the two-sided ideal in T ( g ) {\displaystyle T({\mathfrak {g}})} generated by the elements X Y − Y X − [ X , Y ] {\displaystyle XY-YX-[X,Y]} for X , Y ∈ g {\displaystyle X,Y\in {\mathfrak {g}}} ; then the universal enveloping algebra is the quotient ring U ( g ) = T ( g ) / I {\displaystyle U({\mathfrak {g}})=T({\mathfrak {g}})/I} . It satisfies the Poincaré–Birkhoff–Witt theorem: if e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} is a basis for g {\displaystyle {\mathfrak {g}}} as an F-vector space, then a basis for U ( g ) {\displaystyle U({\mathfrak {g}})} is given by all ordered products e 1 i 1 ⋯ e n i n {\displaystyle e_{1}^{i_{1}}\cdots e_{n}^{i_{n}}} with i 1 , … , i n {\displaystyle i_{1},\ldots ,i_{n}} natural numbers. 
In particular, the map g → U ( g ) {\displaystyle {\mathfrak {g}}\to U({\mathfrak {g}})} is injective. Representations of g {\displaystyle {\mathfrak {g}}} are equivalent to modules over the universal enveloping algebra. The fact that g → U ( g ) {\displaystyle {\mathfrak {g}}\to U({\mathfrak {g}})} is injective implies that every Lie algebra (possibly of infinite dimension) has a faithful representation (of infinite dimension), namely its representation on U ( g ) {\displaystyle U({\mathfrak {g}})} . This also shows that every Lie algebra is contained in the Lie algebra associated to some associative algebra. === Representation theory in physics === The representation theory of Lie algebras plays an important role in various parts of theoretical physics. There, one considers operators on the space of states that satisfy certain natural commutation relations. These commutation relations typically come from a symmetry of the problem—specifically, they are the relations of the Lie algebra of the relevant symmetry group. An example is the angular momentum operators, whose commutation relations are those of the Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} of the rotation group S O ( 3 ) {\displaystyle \mathrm {SO} (3)} . Typically, the space of states is far from being irreducible under the pertinent operators, but one can attempt to decompose it into irreducible pieces. In doing so, one needs to know the irreducible representations of the given Lie algebra. In the study of the hydrogen atom, for example, quantum mechanics textbooks classify (more or less explicitly) the finite-dimensional irreducible representations of the Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . == Structure theory and classification == Lie algebras can be classified to some extent. This is a powerful approach to the classification of Lie groups. === Abelian, nilpotent, and solvable === Analogously to abelian, nilpotent, and solvable groups, one can define abelian, nilpotent, and solvable Lie algebras. A Lie algebra g {\displaystyle {\mathfrak {g}}} is abelian if the Lie bracket vanishes; that is, [x,y] = 0 for all x and y in g {\displaystyle {\mathfrak {g}}} . In particular, the Lie algebra of an abelian Lie group (such as the group R n {\displaystyle \mathbb {R} ^{n}} under addition or the torus group T n {\displaystyle \mathbb {T} ^{n}} ) is abelian. Every finite-dimensional abelian Lie algebra over a field F {\displaystyle F} is isomorphic to F n {\displaystyle F^{n}} for some n ≥ 0 {\displaystyle n\geq 0} , meaning an n-dimensional vector space with Lie bracket zero. A more general class of Lie algebras is defined by the vanishing of all commutators of given length. First, the commutator subalgebra (or derived subalgebra) of a Lie algebra g {\displaystyle {\mathfrak {g}}} is [ g , g ] {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]} , meaning the linear subspace spanned by all brackets [ x , y ] {\displaystyle [x,y]} with x , y ∈ g {\displaystyle x,y\in {\mathfrak {g}}} . The commutator subalgebra is an ideal in g {\displaystyle {\mathfrak {g}}} , in fact the smallest ideal such that the quotient Lie algebra is abelian. It is analogous to the commutator subgroup of a group. 
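Iterating the commutator subalgebra yields the derived series defined below. The following sketch (the helper span_of_brackets is a hypothetical name, not from the original text) computes it for the Lie algebra of upper-triangular 2 × 2 matrices by linear algebra on flattened matrices, confirming that the series reaches zero, so this algebra is solvable in the sense defined below.

```python
import itertools
import numpy as np

def span_of_brackets(basis):
    """Return a basis of [V, V], the span of all commutators of
    elements of the given basis (matrices), via an SVD rank computation."""
    brackets = [A @ B - B @ A for A, B in itertools.product(basis, repeat=2)]
    M = np.array([b.ravel() for b in brackets])
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > 1e-10))
    return [vt[i].reshape(basis[0].shape) for i in range(rank)]

# Upper-triangular 2 x 2 matrices, with basis E11, E12, E22
E11 = np.array([[1.0, 0.0], [0.0, 0.0]])
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])
E22 = np.array([[0.0, 0.0], [0.0, 1.0]])

g = [E11, E12, E22]
while g:
    print(len(g))  # prints 3, then 1; the next step is zero
    g = span_of_brackets(g)
```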
A Lie algebra g {\displaystyle {\mathfrak {g}}} is nilpotent if the lower central series g ⊇ [ g , g ] ⊇ [ [ g , g ] , g ] ⊇ [ [ [ g , g ] , g ] , g ] ⊇ ⋯ {\displaystyle {\mathfrak {g}}\supseteq [{\mathfrak {g}},{\mathfrak {g}}]\supseteq [[{\mathfrak {g}},{\mathfrak {g}}],{\mathfrak {g}}]\supseteq [[[{\mathfrak {g}},{\mathfrak {g}}],{\mathfrak {g}}],{\mathfrak {g}}]\supseteq \cdots } becomes zero after finitely many steps. Equivalently, g {\displaystyle {\mathfrak {g}}} is nilpotent if there is a finite sequence of ideals in g {\displaystyle {\mathfrak {g}}} , 0 = a 0 ⊆ a 1 ⊆ ⋯ ⊆ a r = g , {\displaystyle 0={\mathfrak {a}}_{0}\subseteq {\mathfrak {a}}_{1}\subseteq \cdots \subseteq {\mathfrak {a}}_{r}={\mathfrak {g}},} such that a j / a j − 1 {\displaystyle {\mathfrak {a}}_{j}/{\mathfrak {a}}_{j-1}} is central in g / a j − 1 {\displaystyle {\mathfrak {g}}/{\mathfrak {a}}_{j-1}} for each j. By Engel's theorem, a Lie algebra over any field is nilpotent if and only if for every u in g {\displaystyle {\mathfrak {g}}} the adjoint endomorphism ad ⁡ ( u ) : g → g , ad ⁡ ( u ) v = [ u , v ] {\displaystyle \operatorname {ad} (u):{\mathfrak {g}}\to {\mathfrak {g}},\quad \operatorname {ad} (u)v=[u,v]} is nilpotent. More generally, a Lie algebra g {\displaystyle {\mathfrak {g}}} is said to be solvable if the derived series: g ⊇ [ g , g ] ⊇ [ [ g , g ] , [ g , g ] ] ⊇ [ [ [ g , g ] , [ g , g ] ] , [ [ g , g ] , [ g , g ] ] ] ⊇ ⋯ {\displaystyle {\mathfrak {g}}\supseteq [{\mathfrak {g}},{\mathfrak {g}}]\supseteq [[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]\supseteq [[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]],[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]]\supseteq \cdots } becomes zero after finitely many steps. Equivalently, g {\displaystyle {\mathfrak {g}}} is solvable if there is a finite sequence of Lie subalgebras, 0 = m 0 ⊆ m 1 ⊆ ⋯ ⊆ m r = g , {\displaystyle 0={\mathfrak {m}}_{0}\subseteq {\mathfrak {m}}_{1}\subseteq \cdots \subseteq {\mathfrak {m}}_{r}={\mathfrak {g}},} such that m j − 1 {\displaystyle {\mathfrak {m}}_{j-1}} is an ideal in m j {\displaystyle {\mathfrak {m}}_{j}} with m j / m j − 1 {\displaystyle {\mathfrak {m}}_{j}/{\mathfrak {m}}_{j-1}} abelian for each j. Every finite-dimensional Lie algebra over a field has a unique maximal solvable ideal, called its radical. Under the Lie correspondence, nilpotent (respectively, solvable) Lie groups correspond to nilpotent (respectively, solvable) Lie algebras over R {\displaystyle \mathbb {R} } . For example, for a positive integer n and a field F of characteristic zero, the radical of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} is its center, the 1-dimensional subspace spanned by the identity matrix. An example of a solvable Lie algebra is the space b n {\displaystyle {\mathfrak {b}}_{n}} of upper-triangular matrices in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} ; this is not nilpotent when n ≥ 2 {\displaystyle n\geq 2} . An example of a nilpotent Lie algebra is the space u n {\displaystyle {\mathfrak {u}}_{n}} of strictly upper-triangular matrices in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} ; this is not abelian when n ≥ 3 {\displaystyle n\geq 3} . === Simple and semisimple === A Lie algebra g {\displaystyle {\mathfrak {g}}} is called simple if it is not abelian and the only ideals in g {\displaystyle {\mathfrak {g}}} are 0 and g {\displaystyle {\mathfrak {g}}} . 
(In particular, a one-dimensional—necessarily abelian—Lie algebra g {\displaystyle {\mathfrak {g}}} is by definition not simple, even though its only ideals are 0 and g {\displaystyle {\mathfrak {g}}} .) A finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} is called semisimple if the only solvable ideal in g {\displaystyle {\mathfrak {g}}} is 0. In characteristic zero, a Lie algebra g {\displaystyle {\mathfrak {g}}} is semisimple if and only if it is isomorphic to a product of simple Lie algebras, g ≅ g 1 × ⋯ × g r {\displaystyle {\mathfrak {g}}\cong {\mathfrak {g}}_{1}\times \cdots \times {\mathfrak {g}}_{r}} . For example, the Lie algebra s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} is simple for every n ≥ 2 {\displaystyle n\geq 2} and every field F of characteristic zero (or just of characteristic not dividing n). The Lie algebra s u ( n ) {\displaystyle {\mathfrak {su}}(n)} over R {\displaystyle \mathbb {R} } is simple for every n ≥ 2 {\displaystyle n\geq 2} . The Lie algebra s o ( n ) {\displaystyle {\mathfrak {so}}(n)} over R {\displaystyle \mathbb {R} } is simple if n = 3 {\displaystyle n=3} or n ≥ 5 {\displaystyle n\geq 5} . (There are "exceptional isomorphisms" s o ( 3 ) ≅ s u ( 2 ) {\displaystyle {\mathfrak {so}}(3)\cong {\mathfrak {su}}(2)} and s o ( 4 ) ≅ s u ( 2 ) × s u ( 2 ) {\displaystyle {\mathfrak {so}}(4)\cong {\mathfrak {su}}(2)\times {\mathfrak {su}}(2)} .) The concept of semisimplicity for Lie algebras is closely related to the complete reducibility (semisimplicity) of their representations. When the ground field F has characteristic zero, every finite-dimensional representation of a semisimple Lie algebra is semisimple (that is, a direct sum of irreducible representations). A finite-dimensional Lie algebra over a field of characteristic zero is called reductive if its adjoint representation is semisimple. Every reductive Lie algebra is isomorphic to the product of an abelian Lie algebra and a semisimple Lie algebra. For example, g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} is reductive for F of characteristic zero: for n ≥ 2 {\displaystyle n\geq 2} , it is isomorphic to the product g l ( n , F ) ≅ F × s l ( n , F ) , {\displaystyle {\mathfrak {gl}}(n,F)\cong F\times {\mathfrak {sl}}(n,F),} where F denotes the center of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} , the 1-dimensional subspace spanned by the identity matrix. Since the special linear Lie algebra s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} is simple, g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} contains few ideals: only 0, the center F, s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} , and all of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} . === Cartan's criterion === Cartan's criterion (by Élie Cartan) gives conditions for a finite-dimensional Lie algebra over a field of characteristic zero to be solvable or semisimple. It is expressed in terms of the Killing form, the symmetric bilinear form on g {\displaystyle {\mathfrak {g}}} defined by K ( u , v ) = tr ⁡ ( ad ⁡ ( u ) ad ⁡ ( v ) ) , {\displaystyle K(u,v)=\operatorname {tr} (\operatorname {ad} (u)\operatorname {ad} (v)),} where tr denotes the trace of a linear operator. Namely: a Lie algebra g {\displaystyle {\mathfrak {g}}} is semisimple if and only if the Killing form is nondegenerate. A Lie algebra g {\displaystyle {\mathfrak {g}}} is solvable if and only if K ( g , [ g , g ] ) = 0. {\displaystyle K({\mathfrak {g}},[{\mathfrak {g}},{\mathfrak {g}}])=0.}
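As a small worked instance of Cartan's criterion (a sketch in Python with NumPy; the basis (h, e, f) and helper functions are illustrative choices), one can compute the Killing form of s l ( 2 , R ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {R} )} and check that it is nondegenerate:

```python
import numpy as np

# Basis of sl(2, R): h, e, f with [h,e] = 2e, [h,f] = -2f, [e,f] = h.
h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [h, e, f]

def bracket(x, y):
    return x @ y - y @ x

def coords(x):
    # Coordinates of a traceless 2x2 matrix in the basis (h, e, f).
    return np.array([x[0, 0], x[0, 1], x[1, 0]])

def ad(u):
    # Matrix of ad(u) = [u, -] acting on coordinates in the basis (h, e, f).
    return np.column_stack([coords(bracket(u, b)) for b in basis])

# Killing form K(u, v) = tr(ad(u) ad(v)) as a 3x3 Gram matrix.
K = np.array([[np.trace(ad(u) @ ad(v)) for v in basis] for u in basis])
print(K)                 # [[8. 0. 0.], [0. 0. 4.], [0. 4. 0.]]
print(np.linalg.det(K))  # -128.0: nonzero, so the form is nondegenerate
```

The determinant is nonzero, so the Killing form is nondegenerate and s l ( 2 , R ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {R} )} is semisimple, consistent with the criterion.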
=== Classification === The Levi decomposition asserts that every finite-dimensional Lie algebra over a field of characteristic zero is a semidirect product of its solvable radical and a semisimple Lie algebra. Moreover, a semisimple Lie algebra in characteristic zero is a product of simple Lie algebras, as mentioned above. This focuses attention on the problem of classifying the simple Lie algebras. The simple Lie algebras of finite dimension over an algebraically closed field F of characteristic zero were classified by Killing and Cartan in the 1880s and 1890s, using root systems. Namely, every simple Lie algebra is of type An, Bn, Cn, Dn, E6, E7, E8, F4, or G2. Here the simple Lie algebra of type An is s l ( n + 1 , F ) {\displaystyle {\mathfrak {sl}}(n+1,F)} , Bn is s o ( 2 n + 1 , F ) {\displaystyle {\mathfrak {so}}(2n+1,F)} , Cn is s p ( 2 n , F ) {\displaystyle {\mathfrak {sp}}(2n,F)} , and Dn is s o ( 2 n , F ) {\displaystyle {\mathfrak {so}}(2n,F)} . The other five are known as the exceptional Lie algebras. The classification of finite-dimensional simple Lie algebras over R {\displaystyle \mathbb {R} } is more complicated, but it was also solved by Cartan (see simple Lie group for an equivalent classification). One can analyze a Lie algebra g {\displaystyle {\mathfrak {g}}} over R {\displaystyle \mathbb {R} } by considering its complexification g ⊗ R C {\displaystyle {\mathfrak {g}}\otimes _{\mathbb {R} }\mathbb {C} } . In the years leading up to 2004, the finite-dimensional simple Lie algebras over an algebraically closed field of characteristic p > 3 {\displaystyle p>3} were classified by Richard Earl Block, Robert Lee Wilson, Alexander Premet, and Helmut Strade. (See restricted Lie algebra#Classification of simple Lie algebras.) It turns out that there are many more simple Lie algebras in positive characteristic than in characteristic zero. == Relation to Lie groups == Although Lie algebras can be studied in their own right, historically they arose as a means to study Lie groups. The relationship between Lie groups and Lie algebras can be summarized as follows. Each Lie group determines a Lie algebra over R {\displaystyle \mathbb {R} } (concretely, the tangent space at the identity). Conversely, for every finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} , there is a connected Lie group G {\displaystyle G} with Lie algebra g {\displaystyle {\mathfrak {g}}} . This is Lie's third theorem; see the Baker–Campbell–Hausdorff formula. This Lie group is not determined uniquely; however, any two Lie groups with the same Lie algebra are locally isomorphic, and more strongly, they have the same universal cover. For instance, the special orthogonal group SO(3) and the special unitary group SU(2) have isomorphic Lie algebras, but SU(2) is a simply connected double cover of SO(3). For simply connected Lie groups, there is a complete correspondence: taking the Lie algebra gives an equivalence of categories from simply connected Lie groups to Lie algebras of finite dimension over R {\displaystyle \mathbb {R} } . The correspondence between Lie algebras and Lie groups is used in several ways, including in the classification of Lie groups and the representation theory of Lie groups. For finite-dimensional representations, there is an equivalence of categories between representations of a real Lie algebra and representations of the corresponding simply connected Lie group. This simplifies the representation theory of Lie groups: it is often easier to classify the representations of a Lie algebra, using linear algebra.
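For matrix groups, the passage from a Lie algebra element to a Lie group element can be made concrete through the matrix exponential. The following sketch (Python with NumPy and SciPy; the particular skew-symmetric matrix is an arbitrary illustrative choice) exponentiates an element of s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} and checks that the result lies in the rotation group SO(3):

```python
import numpy as np
from scipy.linalg import expm

# An element of so(3): a skew-symmetric 3x3 matrix (an infinitesimal rotation).
X = np.array([[ 0.0, -1.0,  0.5],
              [ 1.0,  0.0, -0.3],
              [-0.5,  0.3,  0.0]])
assert np.allclose(X, -X.T)

# The matrix exponential carries the Lie algebra into the Lie group:
R = expm(X)
assert np.allclose(R.T @ R, np.eye(3))    # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)  # with determinant 1, so R is in SO(3)
```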
Every connected Lie group is isomorphic to its universal cover modulo a discrete central subgroup. So classifying Lie groups becomes simply a matter of counting the discrete subgroups of the center, once the Lie algebra is known. For example, the real semisimple Lie algebras were classified by Cartan, and so the classification of semisimple Lie groups is well understood. For infinite-dimensional Lie algebras, Lie theory works less well. The exponential map need not be a local homeomorphism (for example, in the diffeomorphism group of the circle, there are diffeomorphisms arbitrarily close to the identity that are not in the image of the exponential map). Moreover, in terms of the existing notions of infinite-dimensional Lie groups, some infinite-dimensional Lie algebras do not come from any group. Lie theory also does not work so neatly for infinite-dimensional representations of a finite-dimensional group. Even for the additive group G = R {\displaystyle G=\mathbb {R} } , an infinite-dimensional representation of G {\displaystyle G} usually cannot be differentiated to produce a representation of its Lie algebra on the same space, or vice versa. The theory of Harish-Chandra modules is a more subtle relation between infinite-dimensional representations for groups and Lie algebras. == Real form and complexification == Given a complex Lie algebra g {\displaystyle {\mathfrak {g}}} , a real Lie algebra g 0 {\displaystyle {\mathfrak {g}}_{0}} is said to be a real form of g {\displaystyle {\mathfrak {g}}} if the complexification g 0 ⊗ R C {\displaystyle {\mathfrak {g}}_{0}\otimes _{\mathbb {R} }\mathbb {C} } is isomorphic to g {\displaystyle {\mathfrak {g}}} . A real form need not be unique; for example, s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} has two real forms up to isomorphism, s l ( 2 , R ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {R} )} and s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} . Given a semisimple complex Lie algebra g {\displaystyle {\mathfrak {g}}} , a split form of it is a real form that splits; i.e., it has a Cartan subalgebra which acts via an adjoint representation with real eigenvalues. A split form exists and is unique (up to isomorphism). A compact form is a real form that is the Lie algebra of a compact Lie group. A compact form exists and is also unique up to isomorphism. == Lie algebra with additional structures == A Lie algebra may be equipped with additional structures that are compatible with the Lie bracket. For example, a graded Lie algebra is a Lie algebra (or more generally a Lie superalgebra) with a compatible grading. A differential graded Lie algebra also comes with a differential, making the underlying vector space a chain complex. For example, the homotopy groups of a simply connected topological space form a graded Lie algebra, using the Whitehead product. In a related construction, Daniel Quillen used differential graded Lie algebras over the rational numbers Q {\displaystyle \mathbb {Q} } to describe rational homotopy theory in algebraic terms. == Lie ring == The definition of a Lie algebra over a field extends to define a Lie algebra over any commutative ring R.
Namely, a Lie algebra g {\displaystyle {\mathfrak {g}}} over R is an R-module with an alternating R-bilinear map [ , ] : g × g → g {\displaystyle [\ ,\ ]\colon {\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} that satisfies the Jacobi identity. A Lie algebra over the ring Z {\displaystyle \mathbb {Z} } of integers is sometimes called a Lie ring. (This is not directly related to the notion of a Lie group.) Lie rings are used in the study of finite p-groups (for a prime number p) through the Lazard correspondence. The lower central factors of a finite p-group are finite abelian p-groups. The direct sum of the lower central factors is given the structure of a Lie ring by defining the bracket to be the commutator of two coset representatives; see the example below. p-adic Lie groups are related to Lie algebras over the field Q p {\displaystyle \mathbb {Q} _{p}} of p-adic numbers as well as over the ring Z p {\displaystyle \mathbb {Z} _{p}} of p-adic integers. Part of Claude Chevalley's construction of the finite groups of Lie type involves showing that a simple Lie algebra over the complex numbers comes from a Lie algebra over the integers, and then (with more care) a group scheme over the integers. === Examples === Here is a construction of Lie rings arising from the study of abstract groups. For elements x , y {\displaystyle x,y} of a group, define the commutator [ x , y ] = x − 1 y − 1 x y {\displaystyle [x,y]=x^{-1}y^{-1}xy} . Let G = G 1 ⊇ G 2 ⊇ G 3 ⊇ ⋯ ⊇ G n ⊇ ⋯ {\displaystyle G=G_{1}\supseteq G_{2}\supseteq G_{3}\supseteq \cdots \supseteq G_{n}\supseteq \cdots } be a filtration of a group G {\displaystyle G} , that is, a chain of subgroups such that [ G i , G j ] {\displaystyle [G_{i},G_{j}]} is contained in G i + j {\displaystyle G_{i+j}} for all i , j {\displaystyle i,j} . (For the Lazard correspondence, one takes the filtration to be the lower central series of G.) Then L = ⨁ i ≥ 1 G i / G i + 1 {\displaystyle L=\bigoplus _{i\geq 1}G_{i}/G_{i+1}} is a Lie ring, with addition given by the group multiplication (which is abelian on each quotient group G i / G i + 1 {\displaystyle G_{i}/G_{i+1}} ), and with Lie bracket G i / G i + 1 × G j / G j + 1 → G i + j / G i + j + 1 {\displaystyle G_{i}/G_{i+1}\times G_{j}/G_{j+1}\to G_{i+j}/G_{i+j+1}} given by commutators in the group: [ x G i + 1 , y G j + 1 ] := [ x , y ] G i + j + 1 . {\displaystyle [xG_{i+1},yG_{j+1}]:=[x,y]G_{i+j+1}.} For example, the Lie ring associated to the lower central series on the dihedral group of order 8 is the Heisenberg Lie algebra of dimension 3 over the field Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } . == Definition using category-theoretic notation == The definition of a Lie algebra can be reformulated more abstractly in the language of category theory. Namely, one can define a Lie algebra in terms of linear maps—that is, morphisms in the category of vector spaces—without considering individual elements. (In this section, the field over which the algebra is defined is assumed to be of characteristic different from 2.) For the category-theoretic definition of Lie algebras, two braiding isomorphisms are needed. If A is a vector space, the interchange isomorphism τ : A ⊗ A → A ⊗ A {\displaystyle \tau :A\otimes A\to A\otimes A} is defined by τ ( x ⊗ y ) = y ⊗ x . 
{\displaystyle \tau (x\otimes y)=y\otimes x.} The cyclic-permutation braiding σ : A ⊗ A ⊗ A → A ⊗ A ⊗ A {\displaystyle \sigma :A\otimes A\otimes A\to A\otimes A\otimes A} is defined as σ = ( i d ⊗ τ ) ∘ ( τ ⊗ i d ) , {\displaystyle \sigma =(\mathrm {id} \otimes \tau )\circ (\tau \otimes \mathrm {id} ),} where i d {\displaystyle \mathrm {id} } is the identity morphism. Equivalently, σ {\displaystyle \sigma } is defined by σ ( x ⊗ y ⊗ z ) = y ⊗ z ⊗ x . {\displaystyle \sigma (x\otimes y\otimes z)=y\otimes z\otimes x.} With this notation, a Lie algebra can be defined as an object A {\displaystyle A} in the category of vector spaces together with a morphism [ ⋅ , ⋅ ] : A ⊗ A → A {\displaystyle [\cdot ,\cdot ]\colon A\otimes A\rightarrow A} that satisfies the two morphism equalities [ ⋅ , ⋅ ] ∘ ( i d + τ ) = 0 , {\displaystyle [\cdot ,\cdot ]\circ (\mathrm {id} +\tau )=0,} and [ ⋅ , ⋅ ] ∘ ( [ ⋅ , ⋅ ] ⊗ i d ) ∘ ( i d + σ + σ 2 ) = 0. {\displaystyle [\cdot ,\cdot ]\circ ([\cdot ,\cdot ]\otimes \mathrm {id} )\circ (\mathrm {id} +\sigma +\sigma ^{2})=0.} == See also == == Remarks == == References == == Sources == Bourbaki, Nicolas (1989). Lie Groups and Lie Algebras: Chapters 1-3. Springer. ISBN 978-3-540-64242-8. MR 1728312. Erdmann, Karin; Wildon, Mark (2006). Introduction to Lie Algebras. Springer. ISBN 1-84628-040-0. MR 2218355. Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Hall, Brian C. (2015). Lie groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Vol. 222 (2nd ed.). Springer. doi:10.1007/978-3-319-13467-3. ISBN 978-3319134666. ISSN 0072-5285. MR 3331229. Humphreys, James E. (1978). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9 (2nd ed.). Springer-Verlag. ISBN 978-0-387-90053-7. MR 0499562. Jacobson, Nathan (1979) [1962]. Lie Algebras. Dover. ISBN 978-0-486-63832-4. MR 0559927. Khukhro, E. I. (1998), p-Automorphisms of Finite p-Groups, Cambridge University Press, doi:10.1017/CBO9780511526008, ISBN 0-521-59717-X, MR 1615819 Knapp, Anthony W. (2001) [1986], Representation Theory of Semisimple Groups: an Overview Based on Examples, Princeton University Press, ISBN 0-691-09089-0, MR 1880691 Milnor, John (2010) [1986], "Remarks on infinite-dimensional Lie groups", Collected Papers of John Milnor, vol. 5, American Mathematical Soc., pp. 91–141, ISBN 978-0-8218-4876-0, MR 0830252 O'Connor, J.J; Robertson, E.F. (2000). "Marius Sophus Lie". MacTutor History of Mathematics Archive. O'Connor, J.J; Robertson, E.F. (2005). "Wilhelm Karl Joseph Killing". MacTutor History of Mathematics Archive. Quillen, Daniel (1969), "Rational homotopy theory", Annals of Mathematics, 90 (2): 205–295, doi:10.2307/1970725, JSTOR 1970725, MR 0258031 Serre, Jean-Pierre (2006). Lie Algebras and Lie Groups (2nd ed.). Springer. ISBN 978-3-540-55008-2. MR 2179691. Varadarajan, Veeravalli S. (1984) [1974]. Lie Groups, Lie Algebras, and Their Representations. Springer. ISBN 978-0-387-90969-1. MR 0746308. Wigner, Eugene (1959). Group Theory and its Application to the Quantum Mechanics of Atomic Spectra. Translated by J. J. Griffin. Academic Press. ISBN 978-0127505503. MR 0106711. == External links == Kac, Victor G.; et al. Course notes for MIT 18.745: Introduction to Lie Algebras. 
Archived from the original on 2010-04-20. "Lie algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994] McKenzie, Douglas (2015). "An Elementary Introduction to Lie Algebras for Physicists".
Wikipedia:Lie operad#0
In mathematics, an operad is a structure that consists of abstract operations, each one having a fixed finite number of inputs (arguments) and one output, as well as a specification of how to compose these operations. Given an operad O {\displaystyle O} , one defines an algebra over O {\displaystyle O} to be a set together with concrete operations on this set which behave just like the abstract operations of O {\displaystyle O} . For instance, there is a Lie operad L {\displaystyle L} such that the algebras over L {\displaystyle L} are precisely the Lie algebras; in a sense L {\displaystyle L} abstractly encodes the operations that are common to all Lie algebras. An operad is to its algebras as a group is to its group representations. == History == Operads originate in algebraic topology; they were introduced to characterize iterated loop spaces by J. Michael Boardman and Rainer M. Vogt in 1968 and by J. Peter May in 1972. Martin Markl, Steve Shnider, and Jim Stasheff write in their book on operads: "The name operad and the formal definition appear first in the early 1970's in J. Peter May's "The Geometry of Iterated Loop Spaces", but a year or more earlier, Boardman and Vogt described the same concept under the name categories of operators in standard form, inspired by PROPs and PACTs of Adams and Mac Lane. In fact, there is an abundance of prehistory. Weibel [Wei] points out that the concept first arose a century ago in A.N. Whitehead's "A Treatise on Universal Algebra", published in 1898." The word "operad" was created by May as a portmanteau of "operations" and "monad" (and also because his mother was an opera singer). Interest in operads was considerably renewed in the early 90s when, based on early insights of Maxim Kontsevich, Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. Operads have since found many applications, such as in deformation quantization of Poisson manifolds, the Deligne conjecture, or graph homology in the work of Maxim Kontsevich and Thomas Willwacher. == Intuition == Suppose X {\displaystyle X} is a set and for n ∈ N {\displaystyle n\in \mathbb {N} } we define P ( n ) := { f : X n → X } {\displaystyle P(n):=\{f\colon X^{n}\to X\}} , the set of all functions from the cartesian product of n {\displaystyle n} copies of X {\displaystyle X} to X {\displaystyle X} . We can compose these functions: given f ∈ P ( n ) {\displaystyle f\in P(n)} , f 1 ∈ P ( k 1 ) , … , f n ∈ P ( k n ) {\displaystyle f_{1}\in P(k_{1}),\ldots ,f_{n}\in P(k_{n})} , the function f ∘ ( f 1 , … , f n ) ∈ P ( k 1 + ⋯ + k n ) {\displaystyle f\circ (f_{1},\ldots ,f_{n})\in P(k_{1}+\cdots +k_{n})} is defined as follows: given k 1 + ⋯ + k n {\displaystyle k_{1}+\cdots +k_{n}} arguments from X {\displaystyle X} , we divide them into n {\displaystyle n} blocks, the first one having k 1 {\displaystyle k_{1}} arguments, the second one k 2 {\displaystyle k_{2}} arguments, etc., and then apply f 1 {\displaystyle f_{1}} to the first block, f 2 {\displaystyle f_{2}} to the second block, etc. We then apply f {\displaystyle f} to the list of n {\displaystyle n} values obtained from X {\displaystyle X} in such a way. We can also permute arguments, i.e. 
we have a right action ∗ {\displaystyle *} of the symmetric group S n {\displaystyle S_{n}} on P ( n ) {\displaystyle P(n)} , defined by ( f ∗ s ) ( x 1 , … , x n ) = f ( x s − 1 ( 1 ) , … , x s − 1 ( n ) ) {\displaystyle (f*s)(x_{1},\ldots ,x_{n})=f(x_{s^{-1}(1)},\ldots ,x_{s^{-1}(n)})} for f ∈ P ( n ) {\displaystyle f\in P(n)} , s ∈ S n {\displaystyle s\in S_{n}} and x 1 , … , x n ∈ X {\displaystyle x_{1},\ldots ,x_{n}\in X} . The definition of a symmetric operad given below captures the essential properties of these two operations ∘ {\displaystyle \circ } and ∗ {\displaystyle *} . == Definition == === Non-symmetric operad === A non-symmetric operad (sometimes called an operad without permutations, or a non- Σ {\displaystyle \Sigma } or plain operad) consists of the following: a sequence ( P ( n ) ) n ∈ N {\displaystyle (P(n))_{n\in \mathbb {N} }} of sets, whose elements are called n {\displaystyle n} -ary operations, an element 1 {\displaystyle 1} in P ( 1 ) {\displaystyle P(1)} called the identity, for all positive integers n {\displaystyle n} , k 1 , … , k n {\textstyle k_{1},\ldots ,k_{n}} , a composition function ∘ : P ( n ) × P ( k 1 ) × ⋯ × P ( k n ) → P ( k 1 + ⋯ + k n ) ( θ , θ 1 , … , θ n ) ↦ θ ∘ ( θ 1 , … , θ n ) , {\displaystyle {\begin{aligned}\circ :P(n)\times P(k_{1})\times \cdots \times P(k_{n})&\to P(k_{1}+\cdots +k_{n})\\(\theta ,\theta _{1},\ldots ,\theta _{n})&\mapsto \theta \circ (\theta _{1},\ldots ,\theta _{n}),\end{aligned}}} satisfying the following coherence axioms: identity: θ ∘ ( 1 , … , 1 ) = θ = 1 ∘ θ {\displaystyle \theta \circ (1,\ldots ,1)=\theta =1\circ \theta } associativity: θ ∘ ( θ 1 ∘ ( θ 1 , 1 , … , θ 1 , k 1 ) , … , θ n ∘ ( θ n , 1 , … , θ n , k n ) ) = ( θ ∘ ( θ 1 , … , θ n ) ) ∘ ( θ 1 , 1 , … , θ 1 , k 1 , … , θ n , 1 , … , θ n , k n ) {\displaystyle {\begin{aligned}&\theta \circ {\Big (}\theta _{1}\circ (\theta _{1,1},\ldots ,\theta _{1,k_{1}}),\ldots ,\theta _{n}\circ (\theta _{n,1},\ldots ,\theta _{n,k_{n}}){\Big )}\\={}&{\Big (}\theta \circ (\theta _{1},\ldots ,\theta _{n}){\Big )}\circ (\theta _{1,1},\ldots ,\theta _{1,k_{1}},\ldots ,\theta _{n,1},\ldots ,\theta _{n,k_{n}})\end{aligned}}} === Symmetric operad === A symmetric operad (often just called operad) is a non-symmetric operad P {\displaystyle P} as above, together with a right action of the symmetric group S n {\displaystyle S_{n}} on P ( n ) {\displaystyle P(n)} for n ∈ N {\displaystyle n\in \mathbb {N} } , denoted by ∗ {\displaystyle *} and satisfying equivariance: given a permutation t ∈ S n {\displaystyle t\in S_{n}} , ( θ ∗ t ) ∘ ( θ 1 , … , θ n ) = ( θ ∘ ( θ t − 1 ( 1 ) , … , θ t − 1 ( n ) ) ) ∗ t ′ {\displaystyle (\theta *t)\circ (\theta _{1},\ldots ,\theta _{n})=(\theta \circ (\theta _{t^{-1}(1)},\ldots ,\theta _{t^{-1}(n)}))*t'} (where t ′ {\displaystyle t'} on the right hand side refers to the element of S k 1 + ⋯ + k n {\displaystyle S_{k_{1}+\dots +k_{n}}} that acts on the set { 1 , 2 , … , k 1 + ⋯ + k n } {\displaystyle \{1,2,\dots ,k_{1}+\dots +k_{n}\}} by breaking it into n {\displaystyle n} blocks, the first of size k 1 {\displaystyle k_{1}} , the second of size k 2 {\displaystyle k_{2}} , through the n {\displaystyle n} th block of size k n {\displaystyle k_{n}} , and then permutes these n {\displaystyle n} blocks by t {\displaystyle t} , keeping each block intact) and given n {\displaystyle n} permutations s i ∈ S k i {\displaystyle s_{i}\in S_{k_{i}}} , θ ∘ ( θ 1 ∗ s 1 , … , θ n ∗ s n ) = ( θ ∘ ( θ 1 , … , θ n ) ) ∗ ( s 1 , … , s n ) {\displaystyle \theta \circ 
(\theta _{1}*s_{1},\ldots ,\theta _{n}*s_{n})=(\theta \circ (\theta _{1},\ldots ,\theta _{n}))*(s_{1},\ldots ,s_{n})} (where ( s 1 , … , s n ) {\displaystyle (s_{1},\ldots ,s_{n})} denotes the element of S k 1 + ⋯ + k n {\displaystyle S_{k_{1}+\dots +k_{n}}} that permutes the first of these blocks by s 1 {\displaystyle s_{1}} , the second by s 2 {\displaystyle s_{2}} , etc., and keeps their overall order intact). The permutation actions in this definition are vital to most applications, including the original application to loop spaces. === Morphisms === A morphism of operads f : P → Q {\displaystyle f:P\to Q} consists of a sequence ( f n : P ( n ) → Q ( n ) ) n ∈ N {\displaystyle (f_{n}:P(n)\to Q(n))_{n\in \mathbb {N} }} that: preserves the identity: f ( 1 ) = 1 {\displaystyle f(1)=1} preserves composition: for every n-ary operation θ {\displaystyle \theta } and operations θ 1 , … , θ n {\displaystyle \theta _{1},\ldots ,\theta _{n}} , f ( θ ∘ ( θ 1 , … , θ n ) ) = f ( θ ) ∘ ( f ( θ 1 ) , … , f ( θ n ) ) {\displaystyle f(\theta \circ (\theta _{1},\ldots ,\theta _{n}))=f(\theta )\circ (f(\theta _{1}),\ldots ,f(\theta _{n}))} preserves the permutation actions: f ( x ∗ s ) = f ( x ) ∗ s {\displaystyle f(x*s)=f(x)*s} . Operads therefore form a category denoted by O p e r {\displaystyle {\mathsf {Oper}}} . === In other categories === So far operads have only been considered in the category of sets. More generally, it is possible to define operads in any symmetric monoidal category C . In that case, each P ( n ) {\displaystyle P(n)} is an object of C, the composition ∘ {\displaystyle \circ } is a morphism P ( n ) ⊗ P ( k 1 ) ⊗ ⋯ ⊗ P ( k n ) → P ( k 1 + ⋯ + k n ) {\displaystyle P(n)\otimes P(k_{1})\otimes \cdots \otimes P(k_{n})\to P(k_{1}+\cdots +k_{n})} in C (where ⊗ {\displaystyle \otimes } denotes the tensor product of the monoidal category), and the actions of the symmetric group elements are given by isomorphisms in C. A common example is the category of topological spaces and continuous maps, with the monoidal product given by the cartesian product. In this case, an operad is given by a sequence of spaces (instead of sets) { P ( n ) } n ≥ 0 {\displaystyle \{P(n)\}_{n\geq 0}} . The structure maps of the operad (the composition and the actions of the symmetric groups) are then assumed to be continuous. The result is called a topological operad. Similarly, in the definition of a morphism of operads, it would be necessary to assume that the maps involved are continuous. Other common settings to define operads include, for example, modules over a commutative ring, chain complexes, groupoids (or even the category of categories itself), coalgebras, etc. === Algebraist definition === Given a commutative ring R we consider the category R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} of modules over R. An operad over R can be defined as a monoid object ( T , γ , η ) {\displaystyle (T,\gamma ,\eta )} in the monoidal category of endofunctors on R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} (it is a monad) satisfying some finiteness condition. For example, a monoid object in the category of "polynomial endofunctors" on R - M o d {\displaystyle R{\text{-}}{\mathsf {Mod}}} is an operad. Similarly, a symmetric operad can be defined as a monoid object in the category of S {\displaystyle \mathbb {S} } -objects, where S {\displaystyle \mathbb {S} } means a symmetric group. A monoid object in the category of combinatorial species is an operad in finite sets. 
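To make the abstract composition concrete, the block rule from the Intuition section can be written out directly for operations on a set. The following Python sketch (representing an operation as an (arity, function) pair is an illustrative convention, not a standard one) implements θ ∘ ( θ 1 , … , θ n ) {\displaystyle \theta \circ (\theta _{1},\ldots ,\theta _{n})} :

```python
from itertools import accumulate

# A sketch of operadic composition in the endomorphism operad of a set:
# an operation is an (arity, function) pair, and composition follows
# the block rule described in the Intuition section.
def compose(theta, thetas):
    """Compose an n-ary operation theta with operations thetas = [t1, ..., tn]."""
    n, f = theta
    assert n == len(thetas)
    arities = [k for k, _ in thetas]

    def composite(*args):
        assert len(args) == sum(arities)
        # Split the arguments into consecutive blocks of sizes k1, ..., kn.
        bounds = [0] + list(accumulate(arities))
        blocks = [args[bounds[i]:bounds[i + 1]] for i in range(n)]
        # Apply each t_i to its block, then feed the n results to f.
        return f(*(g(*block) for (_, g), block in zip(thetas, blocks)))

    return (sum(arities), composite)

# Example on the set of integers: plug addition and a ternary product
# into subtraction, giving a 5-ary operation.
sub = (2, lambda a, b: a - b)
add = (2, lambda a, b: a + b)
mul3 = (3, lambda a, b, c: a * b * c)
arity, op = compose(sub, [add, mul3])
print(arity, op(1, 2, 3, 4, 5))  # 5, and (1 + 2) - (3 * 4 * 5) = -57
```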
An operad in the above sense is sometimes thought of as a generalized ring. For example, Nikolai Durov defines his generalized rings as monoid objects in the monoidal category of endofunctors on Set {\displaystyle {\textbf {Set}}} that commute with filtered colimits. This is a generalization of a ring since each ordinary ring R defines a monad Σ R : Set → Set {\displaystyle \Sigma _{R}:{\textbf {Set}}\to {\textbf {Set}}} that sends a set X to the underlying set of the free R-module R ( X ) {\displaystyle R^{(X)}} generated by X. == Understanding the axioms == === Associativity axiom === "Associativity" means that composition of operations is associative (the function ∘ {\displaystyle \circ } is associative), analogous to the axiom in category theory that f ∘ ( g ∘ h ) = ( f ∘ g ) ∘ h {\displaystyle f\circ (g\circ h)=(f\circ g)\circ h} ; it does not mean that the operations themselves are associative as operations. Compare with the associative operad, below. Associativity in operad theory means that expressions can be written involving operations without ambiguity from the omitted compositions, just as associativity for operations allows products to be written without ambiguity from the omitted parentheses. For instance, suppose θ {\displaystyle \theta } is a binary operation, written as θ ( a , b ) {\displaystyle \theta (a,b)} or ( a b ) {\displaystyle (ab)} ; the operation θ {\displaystyle \theta } may or may not be associative. Then what is commonly written ( ( a b ) c ) {\displaystyle ((ab)c)} is unambiguously written operadically as θ ∘ ( θ , 1 ) {\displaystyle \theta \circ (\theta ,1)} . This sends ( a , b , c ) {\displaystyle (a,b,c)} to ( a b , c ) {\displaystyle (ab,c)} (apply θ {\displaystyle \theta } on the first two, and the identity on the third), and then the θ {\displaystyle \theta } on the left "multiplies" a b {\displaystyle ab} by c {\displaystyle c} . This is clearer when depicted as a tree, which yields a 3-ary operation. However, the expression ( ( ( a b ) c ) d ) {\displaystyle (((ab)c)d)} is a priori ambiguous: it could mean θ ∘ ( ( θ , 1 ) ∘ ( ( θ , 1 ) , 1 ) ) {\displaystyle \theta \circ ((\theta ,1)\circ ((\theta ,1),1))} , if the inner compositions are performed first, or it could mean ( θ ∘ ( θ , 1 ) ) ∘ ( ( θ , 1 ) , 1 ) {\displaystyle (\theta \circ (\theta ,1))\circ ((\theta ,1),1)} , if the outer compositions are performed first (operations are read from right to left). Writing x = θ , y = ( θ , 1 ) , z = ( ( θ , 1 ) , 1 ) {\displaystyle x=\theta ,y=(\theta ,1),z=((\theta ,1),1)} , this is x ∘ ( y ∘ z ) {\displaystyle x\circ (y\circ z)} versus ( x ∘ y ) ∘ z {\displaystyle (x\circ y)\circ z} . That is, the tree is missing "vertical parentheses". If the top two rows of operations are composed first (putting an upward parenthesis at the ( a b ) c d {\displaystyle (ab)c\ \ d} line, i.e. doing the inner composition first), the following results, which then evaluates unambiguously to yield a 4-ary operation.
As an annotated expression: θ ( a b ) c ⋅ d ∘ ( ( θ a b ⋅ c , 1 d ) ∘ ( ( θ a ⋅ b , 1 c ) , 1 d ) ) {\displaystyle \theta _{(ab)c\cdot d}\circ ((\theta _{ab\cdot c},1_{d})\circ ((\theta _{a\cdot b},1_{c}),1_{d}))} If the bottom two rows of operations are composed first (putting a downward parenthesis at the a b c d {\displaystyle ab\quad c\ \ d} line, i.e. doing the outer composition first), the following results, which then evaluates unambiguously to yield a 4-ary operation. The operad axiom of associativity is that these yield the same result, and thus that the expression ( ( ( a b ) c ) d ) {\displaystyle (((ab)c)d)} is unambiguous. === Identity axiom === The identity axiom (for a binary operation) can be visualized in a tree, meaning that the three operations obtained are equal: pre- or post-composing with the identity makes no difference. As for categories, 1 ∘ 1 = 1 {\displaystyle 1\circ 1=1} is a corollary of the identity axiom. == Examples == === Endomorphism operad in sets and operad algebras === The most basic operads are the ones given in the section on "Intuition", above. For any set X {\displaystyle X} , we obtain the endomorphism operad E n d X {\displaystyle {\mathcal {End}}_{X}} consisting of all functions X n → X {\displaystyle X^{n}\to X} . These operads are important because they serve to define operad algebras. If O {\displaystyle {\mathcal {O}}} is an operad, an operad algebra over O {\displaystyle {\mathcal {O}}} is given by a set X {\displaystyle X} and an operad morphism O → E n d X {\displaystyle {\mathcal {O}}\to {\mathcal {End}}_{X}} . Intuitively, such a morphism turns each "abstract" operation of O ( n ) {\displaystyle {\mathcal {O}}(n)} into a "concrete" n {\displaystyle n} -ary operation on the set X {\displaystyle X} . An operad algebra over O {\displaystyle {\mathcal {O}}} thus consists of a set X {\displaystyle X} together with concrete operations on X {\displaystyle X} that follow the rules abstractly specified by the operad O {\displaystyle {\mathcal {O}}} . === Endomorphism operad in vector spaces and operad algebras === If k is a field, we can consider the category of finite-dimensional vector spaces over k; this becomes a monoidal category using the ordinary tensor product over k. We can then define endomorphism operads in this category, as follows. Let V be a finite-dimensional vector space. The endomorphism operad E n d V = { E n d V ( n ) } {\displaystyle {\mathcal {End}}_{V}=\{{\mathcal {End}}_{V}(n)\}} of V consists of E n d V ( n ) {\displaystyle {\mathcal {End}}_{V}(n)} = the space of linear maps V ⊗ n → V {\displaystyle V^{\otimes n}\to V} , (composition) given f ∈ E n d V ( n ) {\displaystyle f\in {\mathcal {End}}_{V}(n)} , g 1 ∈ E n d V ( k 1 ) {\displaystyle g_{1}\in {\mathcal {End}}_{V}(k_{1})} , ..., g n ∈ E n d V ( k n ) {\displaystyle g_{n}\in {\mathcal {End}}_{V}(k_{n})} , their composition is given by the map V ⊗ k 1 ⊗ ⋯ ⊗ V ⊗ k n ⟶ g 1 ⊗ ⋯ ⊗ g n V ⊗ n → f V {\displaystyle V^{\otimes k_{1}}\otimes \cdots \otimes V^{\otimes k_{n}}\ {\overset {g_{1}\otimes \cdots \otimes g_{n}}{\longrightarrow }}\ V^{\otimes n}\ {\overset {f}{\to }}\ V} , (identity) The identity element in E n d V ( 1 ) {\displaystyle {\mathcal {End}}_{V}(1)} is the identity map id V {\displaystyle \operatorname {id} _{V}} , (symmetric group action) S n {\displaystyle S_{n}} operates on E n d V ( n ) {\displaystyle {\mathcal {End}}_{V}(n)} by permuting the components of the tensors in V ⊗ n {\displaystyle V^{\otimes n}} .
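Concretely, a linear map V ⊗ n → V {\displaystyle V^{\otimes n}\to V} can be stored as an array with one output axis and n input axes, so that the composition above becomes a tensor contraction. A sketch in Python with NumPy (the index conventions and names are illustrative choices):

```python
import numpy as np

d = 2  # dim V
rng = np.random.default_rng(0)

# A multilinear map V^{⊗n} -> V stored as an array with one output
# axis followed by n input axes.
F  = rng.normal(size=(d, d, d))   # f  in End_V(2)
G1 = rng.normal(size=(d, d))      # g1 in End_V(1)
G2 = rng.normal(size=(d, d, d))   # g2 in End_V(2)

# Operadic composition f ∘ (g1, g2): apply g1 ⊗ g2, then f.
H = np.einsum('ijk,ja,kbc->iabc', F, G1, G2)  # an element of End_V(3)

# Sanity check against pointwise evaluation on random vectors.
u, v, w = rng.normal(size=(3, d))
lhs = np.einsum('iabc,a,b,c->i', H, u, v, w)
rhs = np.einsum('ijk,j,k->i', F,
                np.einsum('ja,a->j', G1, u),
                np.einsum('kbc,b,c->k', G2, v, w))
assert np.allclose(lhs, rhs)
```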
If O {\displaystyle {\mathcal {O}}} is an operad, a k-linear operad algebra over O {\displaystyle {\mathcal {O}}} is given by a finite-dimensional vector space V over k and an operad morphism O → E n d V {\displaystyle {\mathcal {O}}\to {\mathcal {End}}_{V}} ; this amounts to specifying concrete multilinear operations on V that behave like the operations of O {\displaystyle {\mathcal {O}}} . (Notice the analogy between operads&operad algebras and rings&modules: a module over a ring R is given by an abelian group M together with a ring homomorphism R → End ⁡ ( M ) {\displaystyle R\to \operatorname {End} (M)} .) Depending on applications, variations of the above are possible: for example, in algebraic topology, instead of vector spaces and tensor products between them, one uses (reasonable) topological spaces and cartesian products between them. === "Little something" operads === The little 2-disks operad is a topological operad where P ( n ) {\displaystyle P(n)} consists of ordered lists of n disjoint disks inside the unit disk of R 2 {\displaystyle \mathbb {R} ^{2}} centered at the origin. The symmetric group acts on such configurations by permuting the list of little disks. The operadic composition for little disks is illustrated in the accompanying figure to the right, where an element θ ∈ P ( 3 ) {\displaystyle \theta \in P(3)} is composed with an element ( θ 1 , θ 2 , θ 3 ) ∈ P ( 2 ) × P ( 3 ) × P ( 4 ) {\displaystyle (\theta _{1},\theta _{2},\theta _{3})\in P(2)\times P(3)\times P(4)} to yield the element θ ∘ ( θ 1 , θ 2 , θ 3 ) ∈ P ( 9 ) {\displaystyle \theta \circ (\theta _{1},\theta _{2},\theta _{3})\in P(9)} obtained by shrinking the configuration of θ i {\displaystyle \theta _{i}} and inserting it into the i-th disk of θ {\displaystyle \theta } , for i = 1 , 2 , 3 {\displaystyle i=1,2,3} . Analogously, one can define the little n-disks operad by considering configurations of disjoint n-balls inside the unit ball of R n {\displaystyle \mathbb {R} ^{n}} . Originally the little n-cubes operad or the little intervals operad (initially called little n-cubes PROPs) was defined by Michael Boardman and Rainer Vogt in a similar way, in terms of configurations of disjoint axis-aligned n-dimensional hypercubes (n-dimensional intervals) inside the unit hypercube. Later it was generalized by May to the little convex bodies operad, and "little disks" is a case of "folklore" derived from the "little convex bodies". === Rooted trees === In graph theory, rooted trees form a natural operad. Here, P ( n ) {\displaystyle P(n)} is the set of all rooted trees with n leaves, where the leaves are numbered from 1 to n. The group S n {\displaystyle S_{n}} operates on this set by permuting the leaf labels. Operadic composition T ∘ ( S 1 , … , S n ) {\displaystyle T\circ (S_{1},\ldots ,S_{n})} is given by replacing the i-th leaf of T {\displaystyle T} by the root of the i-th tree S i {\displaystyle S_{i}} , for i = 1 , … , n {\displaystyle i=1,\ldots ,n} , thus attaching the n trees to T {\displaystyle T} and forming a larger tree, whose root is taken to be the same as the root of T {\displaystyle T} and whose leaves are numbered in order. === Swiss-cheese operad === The Swiss-cheese operad is a two-colored topological operad defined in terms of configurations of disjoint n-dimensional disks inside a unit n-semidisk and n-dimensional semidisks, centered at the base of the unit semidisk and sitting inside of it. 
The operadic composition comes from gluing configurations of "little" disks inside the unit disk into the "little" disks in another unit semidisk and configurations of "little" disks and semidisks inside the unit semidisk into the other unit semidisk. The Swiss-cheese operad was defined by Alexander A. Voronov. It was used by Maxim Kontsevich to formulate a Swiss-cheese version of Deligne's conjecture on Hochschild cohomology. Kontsevich's conjecture was proven partly by Po Hu, Igor Kriz, and Alexander A. Voronov and then fully by Justin Thomas. === Associative operad === Another class of examples of operads consists of those capturing the structures of algebraic structures, such as associative algebras, commutative algebras and Lie algebras. Each of these can be exhibited as a finitely presented operad, in each case generated by binary operations. For example, the associative operad is a symmetric operad generated by a binary operation ψ {\displaystyle \psi } , subject only to the condition that ψ ∘ ( ψ , 1 ) = ψ ∘ ( 1 , ψ ) . {\displaystyle \psi \circ (\psi ,1)=\psi \circ (1,\psi ).} This condition corresponds to associativity of the binary operation ψ {\displaystyle \psi } ; writing ψ ( a , b ) {\displaystyle \psi (a,b)} multiplicatively, the above condition is ( a b ) c = a ( b c ) {\displaystyle (ab)c=a(bc)} . This associativity of the operation should not be confused with associativity of composition, which holds in any operad; see the axiom of associativity, above. In the associative operad, each P ( n ) {\displaystyle P(n)} is given by the symmetric group S n {\displaystyle S_{n}} , on which S n {\displaystyle S_{n}} acts by right multiplication. The composite σ ∘ ( τ 1 , … , τ n ) {\displaystyle \sigma \circ (\tau _{1},\dots ,\tau _{n})} permutes its inputs in blocks according to σ {\displaystyle \sigma } , and within blocks according to the appropriate τ i {\displaystyle \tau _{i}} . The algebras over the associative operad are precisely the semigroups: sets together with a single binary associative operation. The k-linear algebras over the associative operad are precisely the associative k-algebras. === Terminal symmetric operad === The terminal symmetric operad is the operad which has a single n-ary operation for each n, with each S n {\displaystyle S_{n}} acting trivially. The algebras over this operad are the commutative semigroups; the k-linear algebras are the commutative associative k-algebras. === Operads from the braid groups === Similarly, there is a non- Σ {\displaystyle \Sigma } operad for which each P ( n ) {\displaystyle P(n)} is given by the Artin braid group B n {\displaystyle B_{n}} . Moreover, this non- Σ {\displaystyle \Sigma } operad has the structure of a braided operad, which generalizes the notion of an operad from symmetric to braid groups. === Linear algebra === In linear algebra, real vector spaces can be considered to be algebras over the operad R ∞ {\displaystyle \mathbb {R} ^{\infty }} of all linear combinations.
This operad is defined by R ∞ ( n ) = R n {\displaystyle \mathbb {R} ^{\infty }(n)=\mathbb {R} ^{n}} for n ∈ N {\displaystyle n\in \mathbb {N} } , with the obvious action of S n {\displaystyle S_{n}} permuting components, and composition x → ∘ ( y 1 → , … , y n → ) {\displaystyle {\vec {x}}\circ ({\vec {y_{1}}},\ldots ,{\vec {y_{n}}})} given by the concatenation of the vectors x ( 1 ) y 1 → , … , x ( n ) y n → {\displaystyle x^{(1)}{\vec {y_{1}}},\ldots ,x^{(n)}{\vec {y_{n}}}} , where x → = ( x ( 1 ) , … , x ( n ) ) ∈ R n {\displaystyle {\vec {x}}=(x^{(1)},\ldots ,x^{(n)})\in \mathbb {R} ^{n}} . The vector x → = ( 2 , 3 , − 5 , 0 , … ) {\displaystyle {\vec {x}}=(2,3,-5,0,\dots )} for instance represents the operation of forming a linear combination with coefficients 2,3,-5,0,... This point of view formalizes the notion that linear combinations are the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations. The basic operations of vector addition and scalar multiplication are a generating set for the operad of all linear combinations, while the linear combinations operad canonically encodes all possible operations on a vector space. Similarly, affine combinations, conical combinations, and convex combinations can be considered to correspond to the sub-operads where the terms of the vector x → {\displaystyle {\vec {x}}} sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex. This formalizes what is meant by R n {\displaystyle \mathbb {R} ^{n}} or the standard simplex being model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here suboperads correspond to more restricted operations and thus more general theories. === Commutative-ring operad and Lie operad === The commutative-ring operad is an operad whose algebras are the commutative rings. It is defined by P ( n ) = Z [ x 1 , … , x n ] {\displaystyle P(n)=\mathbb {Z} [x_{1},\ldots ,x_{n}]} , with the obvious action of S n {\displaystyle S_{n}} and operadic composition given by substituting polynomials (with renumbered variables) for variables. A similar operad can be defined whose algebras are the associative, commutative algebras over some fixed base field. The Koszul-dual of this operad is the Lie operad (whose algebras are the Lie algebras), and vice versa.
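The substitution composition of the commutative-ring operad can be sketched symbolically. The following Python fragment (using SymPy; the helper names and the renumbering convention are illustrative choices) composes p = x1·x2 in P(2) with q1 = x1 + x2 in P(2) and q2 = x1² in P(1):

```python
import sympy as sp

def xs(n, offset=0):
    # The variables x_{offset+1}, ..., x_{offset+n}.
    return sp.symbols(f'x{offset + 1}:{offset + n + 1}')

def compose(p, n, qs):
    """Compose p in P(n) with qs = [(q1, k1), ..., (qn, kn)]."""
    offset, rules = 0, {}
    for xi, (q, k) in zip(xs(n), qs):
        # Renumber the variables of q_i into the next free slots.
        rules[xi] = q.subs(list(zip(xs(k), xs(k, offset))), simultaneous=True)
        offset += k
    return sp.expand(p.subs(rules, simultaneous=True))

x1, x2 = sp.symbols('x1 x2')
p  = x1 * x2   # in P(2)
q1 = x1 + x2   # in P(2)
q2 = x1**2     # in P(1)
print(compose(p, 2, [(q1, 2), (q2, 1)]))  # x1*x3**2 + x2*x3**2
```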
== Free Operads == Typical algebraic constructions (e.g., free algebra construction) can be extended to operads. Let S e t S n {\displaystyle \mathbf {Set} ^{S_{n}}} denote the category whose objects are sets on which the group S n {\displaystyle S_{n}} acts. Then there is a forgetful functor O p e r → ∏ n ∈ N S e t S n {\displaystyle {\mathsf {Oper}}\to \prod _{n\in \mathbb {N} }\mathbf {Set} ^{S_{n}}} , which simply forgets the operadic composition. It is possible to construct a left adjoint Γ : ∏ n ∈ N S e t S n → O p e r {\displaystyle \Gamma :\prod _{n\in \mathbb {N} }\mathbf {Set} ^{S_{n}}\to {\mathsf {Oper}}} to this forgetful functor (this is the usual definition of free functor). Given a collection of operations E, Γ ( E ) {\displaystyle \Gamma (E)} is the free operad on E. Like a group or a ring, the free construction allows one to express an operad in terms of generators and relations. By a free presentation of an operad O {\displaystyle {\mathcal {O}}} , we mean writing O {\displaystyle {\mathcal {O}}} as a quotient of a free operad F = Γ ( E ) {\displaystyle {\mathcal {F}}=\Gamma (E)} where E describes generators of O {\displaystyle {\mathcal {O}}} and the kernel of the epimorphism F → O {\displaystyle {\mathcal {F}}\to {\mathcal {O}}} describes the relations. A (symmetric) operad O = { O ( n ) } {\displaystyle {\mathcal {O}}=\{{\mathcal {O}}(n)\}} is called quadratic if it has a free presentation such that E = O ( 2 ) {\displaystyle E={\mathcal {O}}(2)} is the generator and the relation is contained in Γ ( E ) ( 3 ) {\displaystyle \Gamma (E)(3)} . == Clones == Clones are the special case of operads that are also closed under identifying arguments together ("reusing" some data). Clones can be equivalently defined as operads that are also minions (or clonoids). == Operads in homotopy theory == In Stasheff (2004), Stasheff writes: Operads are particularly important and useful in categories with a good notion of "homotopy", where they play a key role in organizing hierarchies of higher homotopies. == See also == PRO (category theory) Algebra over an operad Higher-order operad E∞-operad Pseudoalgebra Multicategory == Notes == === Citations === == References == Tom Leinster (2004). Higher Operads, Higher Categories. Cambridge University Press. arXiv:math/0305049. Bibcode:2004hohc.book.....L. ISBN 978-0-521-53215-0. Martin Markl, Steve Shnider, Jim Stasheff (2002). Operads in Algebra, Topology and Physics. American Mathematical Society. ISBN 978-0-8218-4362-8. Markl, Martin (June 2006). "Operads and PROPs". arXiv:math/0601129. Stasheff, Jim (June–July 2004). "What Is...an Operad?" (PDF). Notices of the American Mathematical Society. 51 (6): 630–631. Retrieved 17 January 2008. Loday, Jean-Louis; Vallette, Bruno (2012), Algebraic Operads (PDF), Grundlehren der Mathematischen Wissenschaften, vol. 346, Berlin, New York: Springer-Verlag, ISBN 978-3-642-30361-6 Zinbiel, Guillaume W. (2012), "Encyclopedia of types of algebras 2010", in Bai, Chengming; Guo, Li; Loday, Jean-Louis (eds.), Operads and universal algebra, Nankai Series in Pure, Applied Mathematics and Theoretical Physics, vol. 9, pp. 217–298, arXiv:1101.0267, Bibcode:2011arXiv1101.0267Z, ISBN 9789814365116 Fresse, Benoit (17 May 2017), Homotopy of Operads and Grothendieck-Teichmüller Groups, Mathematical Surveys and Monographs, American Mathematical Society, ISBN 978-1-4704-3480-9, MR 3643404, Zbl 1373.55014 Miguel A. Méndez (2015). Set Operads in Combinatorics and Computer Science. SpringerBriefs in Mathematics. ISBN 978-3-319-11712-6. Samuele Giraudo (2018). Nonsymmetric Operads in Combinatorics. Springer International Publishing. ISBN 978-3-030-02073-6. == External links == operad at the nLab https://golem.ph.utexas.edu/category/2011/05/an_operadic_introduction_to_en.html
Wikipedia:Lie-* algebra#0
In mathematics, a Lie algebra (pronounced LEE) is a vector space g {\displaystyle {\mathfrak {g}}} together with an operation called the Lie bracket, an alternating bilinear map g × g → g {\displaystyle {\mathfrak {g}}\times {\mathfrak {g}}\rightarrow {\mathfrak {g}}} , that satisfies the Jacobi identity. In other words, a Lie algebra is an algebra over a field for which the multiplication operation (called the Lie bracket) is alternating and satisfies the Jacobi identity. The Lie bracket of two vectors x {\displaystyle x} and y {\displaystyle y} is denoted [ x , y ] {\displaystyle [x,y]} . A Lie algebra is typically a non-associative algebra. However, every associative algebra gives rise to a Lie algebra, consisting of the same vector space with the commutator Lie bracket, [ x , y ] = x y − y x {\displaystyle [x,y]=xy-yx} . Lie algebras are closely related to Lie groups, which are groups that are also smooth manifolds: every Lie group gives rise to a Lie algebra, which is the tangent space at the identity. (In this case, the Lie bracket measures the failure of commutativity for the Lie group.) Conversely, to any finite-dimensional Lie algebra over the real or complex numbers, there is a corresponding connected Lie group, unique up to covering spaces (Lie's third theorem). This correspondence allows one to study the structure and classification of Lie groups in terms of Lie algebras, which are simpler objects of linear algebra. In more detail: for any Lie group, the multiplication operation near the identity element 1 is commutative to first order. In other words, every Lie group G is (to first order) approximately a real vector space, namely the tangent space g {\displaystyle {\mathfrak {g}}} to G at the identity. To second order, the group operation may be non-commutative, and the second-order terms describing the non-commutativity of G near the identity give g {\displaystyle {\mathfrak {g}}} the structure of a Lie algebra. It is a remarkable fact that these second-order terms (the Lie algebra) completely determine the group structure of G near the identity. They even determine G globally, up to covering spaces. In physics, Lie groups appear as symmetry groups of physical systems, and their Lie algebras (tangent vectors near the identity) may be thought of as infinitesimal symmetry motions. Thus Lie algebras and their representations are used extensively in physics, notably in quantum mechanics and particle physics. An elementary example (not directly coming from an associative algebra) is the 3-dimensional space g = R 3 {\displaystyle {\mathfrak {g}}=\mathbb {R} ^{3}} with Lie bracket defined by the cross product [ x , y ] = x × y . {\displaystyle [x,y]=x\times y.} This is skew-symmetric since x × y = − y × x {\displaystyle x\times y=-y\times x} , and instead of associativity it satisfies the Jacobi identity: x × ( y × z ) + y × ( z × x ) + z × ( x × y ) = 0. {\displaystyle x\times (y\times z)+\ y\times (z\times x)+\ z\times (x\times y)\ =\ 0.} This is the Lie algebra of the Lie group of rotations of space, and each vector v ∈ R 3 {\displaystyle v\in \mathbb {R} ^{3}} may be pictured as an infinitesimal rotation around the axis v {\displaystyle v} , with angular speed equal to the magnitude of v {\displaystyle v} . The Lie bracket is a measure of the non-commutativity between two rotations. Since a rotation commutes with itself, one has the alternating property [ x , x ] = x × x = 0 {\displaystyle [x,x]=x\times x=0} . 
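Both the alternating property and the Jacobi identity for the cross product can be checked numerically. A minimal sketch in Python with NumPy (random vectors stand in for arbitrary elements of R 3 {\displaystyle \mathbb {R} ^{3}} ):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 3))  # three random vectors in R^3

# Alternating property: [x, x] = x × x = 0.
assert np.allclose(np.cross(x, x), 0.0)

# Jacobi identity: x×(y×z) + y×(z×x) + z×(x×y) = 0.
jacobi = (np.cross(x, np.cross(y, z))
          + np.cross(y, np.cross(z, x))
          + np.cross(z, np.cross(x, y)))
assert np.allclose(jacobi, 0.0)
```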
== History == Lie algebras were introduced to study the concept of infinitesimal transformations by Sophus Lie in the 1870s, and independently discovered by Wilhelm Killing in the 1880s. The name Lie algebra was given by Hermann Weyl in the 1930s; in older texts, the term infinitesimal group was used. == Definition of a Lie algebra == A Lie algebra is a vector space g {\displaystyle \,{\mathfrak {g}}} over a field F {\displaystyle F} together with a binary operation [ ⋅ , ⋅ ] : g × g → g {\displaystyle [\,\cdot \,,\cdot \,]:{\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} called the Lie bracket, satisfying the following axioms: Bilinearity, [ a x + b y , z ] = a [ x , z ] + b [ y , z ] , {\displaystyle [ax+by,z]=a[x,z]+b[y,z],} [ z , a x + b y ] = a [ z , x ] + b [ z , y ] {\displaystyle [z,ax+by]=a[z,x]+b[z,y]} for all scalars a , b {\displaystyle a,b} in F {\displaystyle F} and all elements x , y , z {\displaystyle x,y,z} in g {\displaystyle {\mathfrak {g}}} . The Alternating property, [ x , x ] = 0 {\displaystyle [x,x]=0\ } for all x {\displaystyle x} in g {\displaystyle {\mathfrak {g}}} . The Jacobi identity, [ x , [ y , z ] ] + [ z , [ x , y ] ] + [ y , [ z , x ] ] = 0 {\displaystyle [x,[y,z]]+[z,[x,y]]+[y,[z,x]]=0\ } for all x , y , z {\displaystyle x,y,z} in g {\displaystyle {\mathfrak {g}}} . Given a Lie group, the Jacobi identity for its Lie algebra follows from the associativity of the group operation. Using bilinearity to expand the Lie bracket [ x + y , x + y ] {\displaystyle [x+y,x+y]} and using the alternating property shows that [ x , y ] + [ y , x ] = 0 {\displaystyle [x,y]+[y,x]=0} for all x , y {\displaystyle x,y} in g {\displaystyle {\mathfrak {g}}} . Thus bilinearity and the alternating property together imply Anticommutativity, [ x , y ] = − [ y , x ] , {\displaystyle [x,y]=-[y,x],\ } for all x , y {\displaystyle x,y} in g {\displaystyle {\mathfrak {g}}} . If the field does not have characteristic 2, then anticommutativity implies the alternating property, since it implies [ x , x ] = − [ x , x ] . {\displaystyle [x,x]=-[x,x].} It is customary to denote a Lie algebra by a lower-case fraktur letter such as g , h , b , n {\displaystyle {\mathfrak {g,h,b,n}}} . If a Lie algebra is associated with a Lie group, then the algebra is denoted by the fraktur version of the group's name: for example, the Lie algebra of SU(n) is s u ( n ) {\displaystyle {\mathfrak {su}}(n)} . === Generators and dimension === The dimension of a Lie algebra over a field means its dimension as a vector space. In physics, a vector space basis of the Lie algebra of a Lie group G may be called a set of generators for G. (They are "infinitesimal generators" for G, so to speak.) In mathematics, a set S of generators for a Lie algebra g {\displaystyle {\mathfrak {g}}} means a subset of g {\displaystyle {\mathfrak {g}}} such that any Lie subalgebra (as defined below) that contains S must be all of g {\displaystyle {\mathfrak {g}}} . Equivalently, g {\displaystyle {\mathfrak {g}}} is spanned (as a vector space) by all iterated brackets of elements of S. == Basic examples == === Abelian Lie algebras === A Lie algebra is called abelian if its Lie bracket is identically zero. Any vector space V {\displaystyle V} endowed with the identically zero Lie bracket becomes a Lie algebra. Every one-dimensional Lie algebra is abelian, by the alternating property of the Lie bracket. 
=== The Lie algebra of matrices === On an associative algebra A {\displaystyle A} over a field F {\displaystyle F} with multiplication written as x y {\displaystyle xy} , a Lie bracket may be defined by the commutator [ x , y ] = x y − y x {\displaystyle [x,y]=xy-yx} . With this bracket, A {\displaystyle A} is a Lie algebra. (The Jacobi identity follows from the associativity of the multiplication on A {\displaystyle A} .) The endomorphism ring of an F {\displaystyle F} -vector space V {\displaystyle V} with the above Lie bracket is denoted g l ( V ) {\displaystyle {\mathfrak {gl}}(V)} . For a field F and a positive integer n, the space of n × n matrices over F, denoted g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} or g l n ( F ) {\displaystyle {\mathfrak {gl}}_{n}(F)} , is a Lie algebra with bracket given by the commutator of matrices: [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} . This is a special case of the previous example; it is a key example of a Lie algebra. It is called the general linear Lie algebra. When F is the real numbers, g l ( n , R ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {R} )} is the Lie algebra of the general linear group G L ( n , R ) {\displaystyle \mathrm {GL} (n,\mathbb {R} )} , the group of invertible n × n real matrices (or equivalently, matrices with nonzero determinant), where the group operation is matrix multiplication. Likewise, g l ( n , C ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {C} )} is the Lie algebra of the complex Lie group G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} . The Lie bracket on g l ( n , R ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {R} )} describes the failure of commutativity for matrix multiplication, or equivalently for the composition of linear maps. For any field F, g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} can be viewed as the Lie algebra of the algebraic group G L ( n ) {\displaystyle \mathrm {GL} (n)} over F. == Definitions == === Subalgebras, ideals and homomorphisms === The Lie bracket is not required to be associative, meaning that [ [ x , y ] , z ] {\displaystyle [[x,y],z]} need not be equal to [ x , [ y , z ] ] {\displaystyle [x,[y,z]]} . Nonetheless, much of the terminology for associative rings and algebras (and also for groups) has analogs for Lie algebras. A Lie subalgebra is a linear subspace h ⊆ g {\displaystyle {\mathfrak {h}}\subseteq {\mathfrak {g}}} which is closed under the Lie bracket. An ideal i ⊆ g {\displaystyle {\mathfrak {i}}\subseteq {\mathfrak {g}}} is a linear subspace that satisfies the stronger condition: [ g , i ] ⊆ i . {\displaystyle [{\mathfrak {g}},{\mathfrak {i}}]\subseteq {\mathfrak {i}}.} In the correspondence between Lie groups and Lie algebras, subgroups correspond to Lie subalgebras, and normal subgroups correspond to ideals. A Lie algebra homomorphism is a linear map compatible with the respective Lie brackets: ϕ : g → h , ϕ ( [ x , y ] ) = [ ϕ ( x ) , ϕ ( y ) ] for all x , y ∈ g . {\displaystyle \phi \colon {\mathfrak {g}}\to {\mathfrak {h}},\quad \phi ([x,y])=[\phi (x),\phi (y)]\ {\text{for all}}\ x,y\in {\mathfrak {g}}.} An isomorphism of Lie algebras is a bijective homomorphism. As with normal subgroups in groups, ideals in Lie algebras are precisely the kernels of homomorphisms.
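The axioms for the commutator bracket on matrices can be verified numerically. A short sketch in Python with NumPy (the choice of 4 × 4 matrices is arbitrary); it also checks that commutators are traceless, which is the reason the traceless matrices form an ideal in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} :

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = rng.normal(size=(3, 4, 4))  # three random matrices in gl(4, R)

def br(a, b):
    return a @ b - b @ a

# Alternating property and Jacobi identity for the commutator bracket.
assert np.allclose(br(X, X), 0.0)
assert np.allclose(br(X, br(Y, Z)) + br(Z, br(X, Y)) + br(Y, br(Z, X)), 0.0)

# Every commutator is traceless; this is why the traceless matrices
# sl(4, R) form an ideal in gl(4, R).
assert np.isclose(np.trace(br(X, Y)), 0.0)
```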
Given a Lie algebra g {\displaystyle {\mathfrak {g}}} and an ideal i {\displaystyle {\mathfrak {i}}} in it, the quotient Lie algebra g / i {\displaystyle {\mathfrak {g}}/{\mathfrak {i}}} is defined, with a surjective homomorphism g → g / i {\displaystyle {\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}} of Lie algebras. The first isomorphism theorem holds for Lie algebras: for any homomorphism ϕ : g → h {\displaystyle \phi \colon {\mathfrak {g}}\to {\mathfrak {h}}} of Lie algebras, the image of ϕ {\displaystyle \phi } is a Lie subalgebra of h {\displaystyle {\mathfrak {h}}} that is isomorphic to g / ker ( ϕ ) {\displaystyle {\mathfrak {g}}/{\text{ker}}(\phi )} . For the Lie algebra of a Lie group, the Lie bracket is a kind of infinitesimal commutator. As a result, for any Lie algebra, two elements x , y ∈ g {\displaystyle x,y\in {\mathfrak {g}}} are said to commute if their bracket vanishes: [ x , y ] = 0 {\displaystyle [x,y]=0} . The centralizer subalgebra of a subset S ⊂ g {\displaystyle S\subset {\mathfrak {g}}} is the set of elements commuting with S {\displaystyle S} : that is, z g ( S ) = { x ∈ g : [ x , s ] = 0 for all s ∈ S } {\displaystyle {\mathfrak {z}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]=0\ {\text{ for all }}s\in S\}} . The centralizer of g {\displaystyle {\mathfrak {g}}} itself is the center z ( g ) {\displaystyle {\mathfrak {z}}({\mathfrak {g}})} . Similarly, for a subspace S, the normalizer subalgebra of S {\displaystyle S} is n g ( S ) = { x ∈ g : [ x , s ] ∈ S for all s ∈ S } {\displaystyle {\mathfrak {n}}_{\mathfrak {g}}(S)=\{x\in {\mathfrak {g}}:[x,s]\in S\ {\text{ for all}}\ s\in S\}} . If S {\displaystyle S} is a Lie subalgebra, n g ( S ) {\displaystyle {\mathfrak {n}}_{\mathfrak {g}}(S)} is the largest subalgebra such that S {\displaystyle S} is an ideal of n g ( S ) {\displaystyle {\mathfrak {n}}_{\mathfrak {g}}(S)} . ==== Example ==== The subspace t n {\displaystyle {\mathfrak {t}}_{n}} of diagonal matrices in g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} is an abelian Lie subalgebra. (It is a Cartan subalgebra of g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} , analogous to a maximal torus in the theory of compact Lie groups.) Here t n {\displaystyle {\mathfrak {t}}_{n}} is not an ideal in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} for n ≥ 2 {\displaystyle n\geq 2} . For example, when n = 2 {\displaystyle n=2} , this follows from the calculation: [ [ a b c d ] , [ x 0 0 y ] ] = [ a x b y c x d y ] − [ a x b x c y d y ] = [ 0 b ( y − x ) c ( x − y ) 0 ] {\displaystyle {\begin{aligned}\left[{\begin{bmatrix}a&b\\c&d\end{bmatrix}},{\begin{bmatrix}x&0\\0&y\end{bmatrix}}\right]&={\begin{bmatrix}ax&by\\cx&dy\\\end{bmatrix}}-{\begin{bmatrix}ax&bx\\cy&dy\\\end{bmatrix}}\\&={\begin{bmatrix}0&b(y-x)\\c(x-y)&0\end{bmatrix}}\end{aligned}}} (which is not always in t 2 {\displaystyle {\mathfrak {t}}_{2}} ). Every one-dimensional linear subspace of a Lie algebra g {\displaystyle {\mathfrak {g}}} is an abelian Lie subalgebra, but it need not be an ideal. === Product and semidirect product === For two Lie algebras g {\displaystyle {\mathfrak {g}}} and g ′ {\displaystyle {\mathfrak {g'}}} , the product Lie algebra is the vector space g × g ′ {\displaystyle {\mathfrak {g}}\times {\mathfrak {g'}}} consisting of all ordered pairs ( x , x ′ ) , x ∈ g , x ′ ∈ g ′ {\displaystyle (x,x'),\,x\in {\mathfrak {g}},\ x'\in {\mathfrak {g'}}} , with Lie bracket [ ( x , x ′ ) , ( y , y ′ ) ] = ( [ x , y ] , [ x ′ , y ′ ] ) . 
{\displaystyle [(x,x'),(y,y')]=([x,y],[x',y']).} This is the product in the category of Lie algebras. Note that the copies of g {\displaystyle {\mathfrak {g}}} and g ′ {\displaystyle {\mathfrak {g}}'} in g × g ′ {\displaystyle {\mathfrak {g}}\times {\mathfrak {g'}}} commute with each other: [ ( x , 0 ) , ( 0 , x ′ ) ] = 0. {\displaystyle [(x,0),(0,x')]=0.} Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra and i {\displaystyle {\mathfrak {i}}} an ideal of g {\displaystyle {\mathfrak {g}}} . If the canonical map g → g / i {\displaystyle {\mathfrak {g}}\to {\mathfrak {g}}/{\mathfrak {i}}} splits (i.e., admits a section g / i → g {\displaystyle {\mathfrak {g}}/{\mathfrak {i}}\to {\mathfrak {g}}} , as a homomorphism of Lie algebras), then g {\displaystyle {\mathfrak {g}}} is said to be a semidirect product of i {\displaystyle {\mathfrak {i}}} and g / i {\displaystyle {\mathfrak {g}}/{\mathfrak {i}}} , g = g / i ⋉ i {\displaystyle {\mathfrak {g}}={\mathfrak {g}}/{\mathfrak {i}}\ltimes {\mathfrak {i}}} . See also semidirect sum of Lie algebras. === Derivations === For an algebra A over a field F, a derivation of A over F is a linear map D : A → A {\displaystyle D\colon A\to A} that satisfies the Leibniz rule D ( x y ) = D ( x ) y + x D ( y ) {\displaystyle D(xy)=D(x)y+xD(y)} for all x , y ∈ A {\displaystyle x,y\in A} . (The definition makes sense for a possibly non-associative algebra.) Given two derivations D 1 {\displaystyle D_{1}} and D 2 {\displaystyle D_{2}} , their commutator [ D 1 , D 2 ] := D 1 D 2 − D 2 D 1 {\displaystyle [D_{1},D_{2}]:=D_{1}D_{2}-D_{2}D_{1}} is again a derivation. This operation makes the space Der F ( A ) {\displaystyle {\text{Der}}_{F}(A)} of all derivations of A over F into a Lie algebra. Informally speaking, the space of derivations of A is the Lie algebra of the automorphism group of A. (This is literally true when the automorphism group is a Lie group, for example when F is the real numbers and A has finite dimension as a vector space.) For this reason, spaces of derivations are a natural way to construct Lie algebras: they are the "infinitesimal automorphisms" of A. Indeed, writing out the condition that ( 1 + ϵ D ) ( x y ) ≡ ( 1 + ϵ D ) ( x ) ⋅ ( 1 + ϵ D ) ( y ) ( mod ϵ 2 ) {\displaystyle (1+\epsilon D)(xy)\equiv (1+\epsilon D)(x)\cdot (1+\epsilon D)(y){\pmod {\epsilon ^{2}}}} (where 1 denotes the identity map on A) gives exactly the definition of D being a derivation. Example: the Lie algebra of vector fields. Let A be the ring C ∞ ( X ) {\displaystyle C^{\infty }(X)} of smooth functions on a smooth manifold X. Then a derivation of A over R {\displaystyle \mathbb {R} } is equivalent to a vector field on X. (A vector field v gives a derivation of the space of smooth functions by differentiating functions in the direction of v.) This makes the space Vect ( X ) {\displaystyle {\text{Vect}}(X)} of vector fields into a Lie algebra (see Lie bracket of vector fields). Informally speaking, Vect ( X ) {\displaystyle {\text{Vect}}(X)} is the Lie algebra of the diffeomorphism group of X. So the Lie bracket of vector fields describes the non-commutativity of the diffeomorphism group. An action of a Lie group G on a manifold X determines a homomorphism of Lie algebras g → Vect ( X ) {\displaystyle {\mathfrak {g}}\to {\text{Vect}}(X)} . (An example is illustrated below.)
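The fact that the commutator of two derivations is again a derivation can be seen concretely for vector fields on the real line; in the following minimal sketch (assuming Python with SymPy; the fields v = x^2 d/dx and w = d/dx are illustrative choices, not from the article), the second-derivative terms cancel, so the commutator is again a first-order operator, that is, a vector field:

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

v = lambda g: x**2 * sp.diff(g, x)  # the vector field x^2 d/dx as a derivation
w = lambda g: sp.diff(g, x)         # the vector field d/dx as a derivation

# the commutator acts as the vector field -2x d/dx:
print(sp.simplify(v(w(f)) - w(v(f))))  # -2*x*Derivative(f(x), x)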
A Lie algebra can be viewed as a non-associative algebra, and so each Lie algebra g {\displaystyle {\mathfrak {g}}} over a field F determines its Lie algebra of derivations, Der F ( g ) {\displaystyle {\text{Der}}_{F}({\mathfrak {g}})} . That is, a derivation of g {\displaystyle {\mathfrak {g}}} is a linear map D : g → g {\displaystyle D\colon {\mathfrak {g}}\to {\mathfrak {g}}} such that D ( [ x , y ] ) = [ D ( x ) , y ] + [ x , D ( y ) ] {\displaystyle D([x,y])=[D(x),y]+[x,D(y)]} . The inner derivation associated to any x ∈ g {\displaystyle x\in {\mathfrak {g}}} is the adjoint mapping a d x {\displaystyle \mathrm {ad} _{x}} defined by a d x ( y ) := [ x , y ] {\displaystyle \mathrm {ad} _{x}(y):=[x,y]} . (This is a derivation as a consequence of the Jacobi identity.) That gives a homomorphism of Lie algebras, ad : g → Der F ( g ) {\displaystyle \operatorname {ad} \colon {\mathfrak {g}}\to {\text{Der}}_{F}({\mathfrak {g}})} . The image Inn F ( g ) {\displaystyle {\text{Inn}}_{F}({\mathfrak {g}})} is an ideal in Der F ( g ) {\displaystyle {\text{Der}}_{F}({\mathfrak {g}})} , and the Lie algebra of outer derivations is defined as the quotient Lie algebra, Out F ( g ) = Der F ( g ) / Inn F ( g ) {\displaystyle {\text{Out}}_{F}({\mathfrak {g}})={\text{Der}}_{F}({\mathfrak {g}})/{\text{Inn}}_{F}({\mathfrak {g}})} . (This is exactly analogous to the outer automorphism group of a group.) For a semisimple Lie algebra (defined below) over a field of characteristic zero, every derivation is inner. This is related to the theorem that the outer automorphism group of a semisimple Lie group is finite. In contrast, an abelian Lie algebra has many outer derivations. Namely, for a vector space V {\displaystyle V} with Lie bracket zero, the Lie algebra Out F ( V ) {\displaystyle {\text{Out}}_{F}(V)} can be identified with g l ( V ) {\displaystyle {\mathfrak {gl}}(V)} . == Examples == === Matrix Lie algebras === A matrix group is a Lie group consisting of invertible matrices, G ⊂ G L ( n , R ) {\displaystyle G\subset \mathrm {GL} (n,\mathbb {R} )} , where the group operation of G is matrix multiplication. The corresponding Lie algebra g {\displaystyle {\mathfrak {g}}} is the space of matrices which are tangent vectors to G inside the linear space M n ( R ) {\displaystyle M_{n}(\mathbb {R} )} : this consists of derivatives of smooth curves in G at the identity matrix I {\displaystyle I} : g = { X = c ′ ( 0 ) ∈ M n ( R ) : smooth c : R → G , c ( 0 ) = I } . {\displaystyle {\mathfrak {g}}=\{X=c'(0)\in M_{n}(\mathbb {R} ):{\text{ smooth }}c:\mathbb {R} \to G,\ c(0)=I\}.} The Lie bracket of g {\displaystyle {\mathfrak {g}}} is given by the commutator of matrices, [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} . Given a Lie algebra g ⊂ g l ( n , R ) {\displaystyle {\mathfrak {g}}\subset {\mathfrak {gl}}(n,\mathbb {R} )} , one can recover the Lie group as the subgroup generated by the matrix exponential of elements of g {\displaystyle {\mathfrak {g}}} . (To be precise, this gives the identity component of G, if G is not connected.) Here the exponential mapping exp : M n ( R ) → M n ( R ) {\displaystyle \exp :M_{n}(\mathbb {R} )\to M_{n}(\mathbb {R} )} is defined by exp ⁡ ( X ) = I + X + 1 2 ! X 2 + 1 3 ! X 3 + ⋯ {\displaystyle \exp(X)=I+X+{\tfrac {1}{2!}}X^{2}+{\tfrac {1}{3!}}X^{3}+\cdots } , which converges for every matrix X {\displaystyle X} . 
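The claim that exponentiating the Lie algebra lands in the group can be illustrated numerically using the identity det(exp X) = exp(tr X): a trace-zero matrix exponentiates to a matrix of determinant 1, an element of the group SL(n, R) discussed below. A minimal sketch, assuming Python with NumPy and SciPy:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
X -= (np.trace(X) / 3) * np.eye(3)  # project onto the trace-zero matrices

g = expm(X)  # exponentiate a tangent vector at the identity
print(np.isclose(np.linalg.det(g), 1.0))  # True: det(exp X) = exp(tr X) = 1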
The same comments apply to complex Lie subgroups of G L ( n , C ) {\displaystyle GL(n,\mathbb {C} )} and the complex matrix exponential, exp : M n ( C ) → M n ( C ) {\displaystyle \exp :M_{n}(\mathbb {C} )\to M_{n}(\mathbb {C} )} (defined by the same formula). Here are some matrix Lie groups and their Lie algebras. For a positive integer n, the special linear group S L ( n , R ) {\displaystyle \mathrm {SL} (n,\mathbb {R} )} consists of all real n × n matrices with determinant 1. This is the group of linear maps from R n {\displaystyle \mathbb {R} ^{n}} to itself that preserve volume and orientation. More abstractly, S L ( n , R ) {\displaystyle \mathrm {SL} (n,\mathbb {R} )} is the commutator subgroup of the general linear group G L ( n , R ) {\displaystyle \mathrm {GL} (n,\mathbb {R} )} . Its Lie algebra s l ( n , R ) {\displaystyle {\mathfrak {sl}}(n,\mathbb {R} )} consists of all real n × n matrices with trace 0. Similarly, one can define the analogous complex Lie group S L ( n , C ) {\displaystyle {\rm {SL}}(n,\mathbb {C} )} and its Lie algebra s l ( n , C ) {\displaystyle {\mathfrak {sl}}(n,\mathbb {C} )} . The orthogonal group O ( n ) {\displaystyle \mathrm {O} (n)} plays a basic role in geometry: it is the group of linear maps from R n {\displaystyle \mathbb {R} ^{n}} to itself that preserve the length of vectors. For example, rotations and reflections belong to O ( n ) {\displaystyle \mathrm {O} (n)} . Equivalently, this is the group of n x n orthogonal matrices, meaning that A T = A − 1 {\displaystyle A^{\mathrm {T} }=A^{-1}} , where A T {\displaystyle A^{\mathrm {T} }} denotes the transpose of a matrix. The orthogonal group has two connected components; the identity component is called the special orthogonal group S O ( n ) {\displaystyle \mathrm {SO} (n)} , consisting of the orthogonal matrices with determinant 1. Both groups have the same Lie algebra s o ( n ) {\displaystyle {\mathfrak {so}}(n)} , the subspace of skew-symmetric matrices in g l ( n , R ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {R} )} ( X T = − X {\displaystyle X^{\rm {T}}=-X} ). See also infinitesimal rotations with skew-symmetric matrices. The complex orthogonal group O ( n , C ) {\displaystyle \mathrm {O} (n,\mathbb {C} )} , its identity component S O ( n , C ) {\displaystyle \mathrm {SO} (n,\mathbb {C} )} , and the Lie algebra s o ( n , C ) {\displaystyle {\mathfrak {so}}(n,\mathbb {C} )} are given by the same formulas applied to n x n complex matrices. Equivalently, O ( n , C ) {\displaystyle \mathrm {O} (n,\mathbb {C} )} is the subgroup of G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} that preserves the standard symmetric bilinear form on C n {\displaystyle \mathbb {C} ^{n}} . The unitary group U ( n ) {\displaystyle \mathrm {U} (n)} is the subgroup of G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} that preserves the length of vectors in C n {\displaystyle \mathbb {C} ^{n}} (with respect to the standard Hermitian inner product). Equivalently, this is the group of n × n unitary matrices (satisfying A ∗ = A − 1 {\displaystyle A^{*}=A^{-1}} , where A ∗ {\displaystyle A^{*}} denotes the conjugate transpose of a matrix). Its Lie algebra u ( n ) {\displaystyle {\mathfrak {u}}(n)} consists of the skew-hermitian matrices in g l ( n , C ) {\displaystyle {\mathfrak {gl}}(n,\mathbb {C} )} ( X ∗ = − X {\displaystyle X^{*}=-X} ). This is a Lie algebra over R {\displaystyle \mathbb {R} } , not over C {\displaystyle \mathbb {C} } . 
(Indeed, i times a skew-hermitian matrix is hermitian, rather than skew-hermitian.) Likewise, the unitary group U ( n ) {\displaystyle \mathrm {U} (n)} is a real Lie subgroup of the complex Lie group G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} . For example, U ( 1 ) {\displaystyle \mathrm {U} (1)} is the circle group, and its Lie algebra (from this point of view) is i R ⊂ C = g l ( 1 , C ) {\displaystyle i\mathbb {R} \subset \mathbb {C} ={\mathfrak {gl}}(1,\mathbb {C} )} . The special unitary group S U ( n ) {\displaystyle \mathrm {SU} (n)} is the subgroup of matrices with determinant 1 in U ( n ) {\displaystyle \mathrm {U} (n)} . Its Lie algebra s u ( n ) {\displaystyle {\mathfrak {su}}(n)} consists of the skew-hermitian matrices with trace zero. The symplectic group S p ( 2 n , R ) {\displaystyle \mathrm {Sp} (2n,\mathbb {R} )} is the subgroup of G L ( 2 n , R ) {\displaystyle \mathrm {GL} (2n,\mathbb {R} )} that preserves the standard alternating bilinear form on R 2 n {\displaystyle \mathbb {R} ^{2n}} . Its Lie algebra is the symplectic Lie algebra s p ( 2 n , R ) {\displaystyle {\mathfrak {sp}}(2n,\mathbb {R} )} . The classical Lie algebras are those listed above, along with variants over any field. === Two dimensions === Some Lie algebras of low dimension are described here. See the classification of low-dimensional real Lie algebras for further examples. There is a unique nonabelian Lie algebra g {\displaystyle {\mathfrak {g}}} of dimension 2 over any field F, up to isomorphism. Here g {\displaystyle {\mathfrak {g}}} has a basis X , Y {\displaystyle X,Y} for which the bracket is given by [ X , Y ] = Y {\displaystyle \left[X,Y\right]=Y} . (This determines the Lie bracket completely, because the axioms imply that [ X , X ] = 0 {\displaystyle [X,X]=0} and [ Y , Y ] = 0 {\displaystyle [Y,Y]=0} .) Over the real numbers, g {\displaystyle {\mathfrak {g}}} can be viewed as the Lie algebra of the Lie group G = A f f ( 1 , R ) {\displaystyle G=\mathrm {Aff} (1,\mathbb {R} )} of affine transformations of the real line, x ↦ a x + b {\displaystyle x\mapsto ax+b} . The affine group G can be identified with the group of matrices ( a b 0 1 ) {\displaystyle \left({\begin{array}{cc}a&b\\0&1\end{array}}\right)} under matrix multiplication, with a , b ∈ R {\displaystyle a,b\in \mathbb {R} } , a ≠ 0 {\displaystyle a\neq 0} . Its Lie algebra is the Lie subalgebra g {\displaystyle {\mathfrak {g}}} of g l ( 2 , R ) {\displaystyle {\mathfrak {gl}}(2,\mathbb {R} )} consisting of all matrices ( c d 0 0 ) . {\displaystyle \left({\begin{array}{cc}c&d\\0&0\end{array}}\right).} In these terms, the basis above for g {\displaystyle {\mathfrak {g}}} is given by the matrices X = ( 1 0 0 0 ) , Y = ( 0 1 0 0 ) . {\displaystyle X=\left({\begin{array}{cc}1&0\\0&0\end{array}}\right),\qquad Y=\left({\begin{array}{cc}0&1\\0&0\end{array}}\right).} For any field F {\displaystyle F} , the 1-dimensional subspace F ⋅ Y {\displaystyle F\cdot Y} is an ideal in the 2-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} , by the formula [ X , Y ] = Y ∈ F ⋅ Y {\displaystyle [X,Y]=Y\in F\cdot Y} . Both of the Lie algebras F ⋅ Y {\displaystyle F\cdot Y} and g / ( F ⋅ Y ) {\displaystyle {\mathfrak {g}}/(F\cdot Y)} are abelian (because 1-dimensional). In this sense, g {\displaystyle {\mathfrak {g}}} can be broken into abelian "pieces", meaning that it is solvable (though not nilpotent), in the terminology below. 
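The defining relation [X, Y] = Y can be checked directly for the matrix basis above; a minimal sketch, assuming Python with NumPy:

import numpy as np

X = np.array([[1.0, 0.0], [0.0, 0.0]])
Y = np.array([[0.0, 1.0], [0.0, 0.0]])
print(np.array_equal(X @ Y - Y @ X, Y))  # True: [X, Y] = Y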
=== Three dimensions === The Heisenberg algebra h 3 ( F ) {\displaystyle {\mathfrak {h}}_{3}(F)} over a field F is the three-dimensional Lie algebra with a basis X , Y , Z {\displaystyle X,Y,Z} such that [ X , Y ] = Z , [ X , Z ] = 0 , [ Y , Z ] = 0 {\displaystyle [X,Y]=Z,\quad [X,Z]=0,\quad [Y,Z]=0} . It can be viewed as the Lie algebra of 3×3 strictly upper-triangular matrices, with the commutator Lie bracket and the basis X = ( 0 1 0 0 0 0 0 0 0 ) , Y = ( 0 0 0 0 0 1 0 0 0 ) , Z = ( 0 0 1 0 0 0 0 0 0 ) . {\displaystyle X=\left({\begin{array}{ccc}0&1&0\\0&0&0\\0&0&0\end{array}}\right),\quad Y=\left({\begin{array}{ccc}0&0&0\\0&0&1\\0&0&0\end{array}}\right),\quad Z=\left({\begin{array}{ccc}0&0&1\\0&0&0\\0&0&0\end{array}}\right)~.\quad } Over the real numbers, h 3 ( R ) {\displaystyle {\mathfrak {h}}_{3}(\mathbb {R} )} is the Lie algebra of the Heisenberg group H 3 ( R ) {\displaystyle \mathrm {H} _{3}(\mathbb {R} )} , that is, the group of matrices ( 1 a c 0 1 b 0 0 1 ) {\displaystyle \left({\begin{array}{ccc}1&a&c\\0&1&b\\0&0&1\end{array}}\right)} under matrix multiplication. For any field F, the center of h 3 ( F ) {\displaystyle {\mathfrak {h}}_{3}(F)} is the 1-dimensional ideal F ⋅ Z {\displaystyle F\cdot Z} , and the quotient h 3 ( F ) / ( F ⋅ Z ) {\displaystyle {\mathfrak {h}}_{3}(F)/(F\cdot Z)} is abelian, isomorphic to F 2 {\displaystyle F^{2}} . In the terminology below, it follows that h 3 ( F ) {\displaystyle {\mathfrak {h}}_{3}(F)} is nilpotent (though not abelian). The Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} of the rotation group SO(3) is the space of skew-symmetric 3 × 3 matrices over R {\displaystyle \mathbb {R} } . A basis is given by the three matrices F 1 = ( 0 0 0 0 0 − 1 0 1 0 ) , F 2 = ( 0 0 1 0 0 0 − 1 0 0 ) , F 3 = ( 0 − 1 0 1 0 0 0 0 0 ) . {\displaystyle F_{1}=\left({\begin{array}{ccc}0&0&0\\0&0&-1\\0&1&0\end{array}}\right),\quad F_{2}=\left({\begin{array}{ccc}0&0&1\\0&0&0\\-1&0&0\end{array}}\right),\quad F_{3}=\left({\begin{array}{ccc}0&-1&0\\1&0&0\\0&0&0\end{array}}\right)~.\quad } The commutation relations among these generators are [ F 1 , F 2 ] = F 3 , {\displaystyle [F_{1},F_{2}]=F_{3},} [ F 2 , F 3 ] = F 1 , {\displaystyle [F_{2},F_{3}]=F_{1},} [ F 3 , F 1 ] = F 2 . {\displaystyle [F_{3},F_{1}]=F_{2}.} The cross product of vectors in R 3 {\displaystyle \mathbb {R} ^{3}} is given by the same formula in terms of the standard basis; so that Lie algebra is isomorphic to s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . Also, s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} is equivalent to the spin angular-momentum component operators for spin-1 particles in quantum mechanics. The Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} cannot be broken into pieces in the way that the previous examples can: it is simple, meaning that it is not abelian and its only ideals are 0 and all of s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . Another simple Lie algebra of dimension 3, in this case over C {\displaystyle \mathbb {C} } , is the space s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} of 2 × 2 matrices of trace zero. A basis is given by the three matrices H = ( 1 0 0 − 1 ) , E = ( 0 1 0 0 ) , F = ( 0 0 1 0 ) . 
{\displaystyle H=\left({\begin{array}{cc}1&0\\0&-1\end{array}}\right),\ E=\left({\begin{array}{cc}0&1\\0&0\end{array}}\right),\ F=\left({\begin{array}{cc}0&0\\1&0\end{array}}\right).} The Lie bracket is given by: [ H , E ] = 2 E , {\displaystyle [H,E]=2E,} [ H , F ] = − 2 F , {\displaystyle [H,F]=-2F,} [ E , F ] = H . {\displaystyle [E,F]=H.} Using these formulas, one can show that the Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} is simple, and classify its finite-dimensional representations (defined below). In the terminology of quantum mechanics, one can think of E and F as raising and lowering operators. Indeed, for any representation of s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} , the relations above imply that E maps the c-eigenspace of H (for a complex number c) into the ( c + 2 ) {\displaystyle (c+2)} -eigenspace, while F maps the c-eigenspace into the ( c − 2 ) {\displaystyle (c-2)} -eigenspace. The Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} is isomorphic to the complexification of s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} , meaning the tensor product s o ( 3 ) ⊗ R C {\displaystyle {\mathfrak {so}}(3)\otimes _{\mathbb {R} }\mathbb {C} } . The formulas for the Lie bracket are easier to analyze in the case of s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} . As a result, it is common to analyze complex representations of the group S O ( 3 ) {\displaystyle \mathrm {SO} (3)} by relating them to representations of the Lie algebra s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} . === Infinite dimensions === The Lie algebra of vector fields on a smooth manifold of positive dimension is an infinite-dimensional Lie algebra over R {\displaystyle \mathbb {R} } . The Kac–Moody algebras are a large class of infinite-dimensional Lie algebras, say over C {\displaystyle \mathbb {C} } , with structure much like that of the finite-dimensional simple Lie algebras (such as s l ( n , C ) {\displaystyle {\mathfrak {sl}}(n,\mathbb {C} )} ). The Moyal algebra is an infinite-dimensional Lie algebra that contains all the classical Lie algebras as subalgebras. The Virasoro algebra is important in string theory. The functor that takes a Lie algebra over a field F to the underlying vector space has a left adjoint V ↦ L ( V ) {\displaystyle V\mapsto L(V)} , called the free Lie algebra on a vector space V. It is spanned by all iterated Lie brackets of elements of V, modulo only the relations coming from the definition of a Lie algebra. The free Lie algebra L ( V ) {\displaystyle L(V)} is infinite-dimensional for V of dimension at least 2. == Representations == === Definitions === Given a vector space V, let g l ( V ) {\displaystyle {\mathfrak {gl}}(V)} denote the Lie algebra consisting of all linear maps from V to itself, with bracket given by [ X , Y ] = X Y − Y X {\displaystyle [X,Y]=XY-YX} . A representation of a Lie algebra g {\displaystyle {\mathfrak {g}}} on V is a Lie algebra homomorphism π : g → g l ( V ) . {\displaystyle \pi \colon {\mathfrak {g}}\to {\mathfrak {gl}}(V).} That is, π {\displaystyle \pi } sends each element of g {\displaystyle {\mathfrak {g}}} to a linear map from V to itself, in such a way that the Lie bracket on g {\displaystyle {\mathfrak {g}}} corresponds to the commutator of linear maps. A representation is said to be faithful if its kernel is zero. 
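As an example of a representation, the following minimal sketch (assuming Python with NumPy; the helper functions coords and ad are illustrative, not from a library) builds the adjoint representation of sl(2) — defined in the next subsection — in the basis H, E, F from the three-dimensional examples above, and checks the homomorphism property on a bracket:

import numpy as np

def bracket(A, B):
    return A @ B - B @ A

# basis H, E, F of sl(2) from the examples above
H = np.array([[1.0, 0.0], [0.0, -1.0]])
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, 0.0], [1.0, 0.0]])
basis = [H, E, F]
M = np.stack([b.flatten() for b in basis], axis=1)  # 4 x 3 coordinate matrix

def coords(A):
    # coordinates of A in the basis (H, E, F), via least squares
    return np.linalg.lstsq(M, A.flatten(), rcond=None)[0]

def ad(x):
    # matrix of ad(x) = [x, -] acting in the basis (H, E, F)
    return np.stack([coords(bracket(x, b)) for b in basis], axis=1)

# ad is a representation: it sends brackets to commutators of linear maps.
lhs = ad(bracket(E, F))              # ad([E, F])
rhs = ad(E) @ ad(F) - ad(F) @ ad(E)  # [ad(E), ad(F)]
print(np.allclose(lhs, rhs))         # True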
Ado's theorem states that every finite-dimensional Lie algebra over a field of characteristic zero has a faithful representation on a finite-dimensional vector space. Kenkichi Iwasawa extended this result to finite-dimensional Lie algebras over a field of any characteristic. Equivalently, every finite-dimensional Lie algebra over a field F is isomorphic to a Lie subalgebra of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} for some positive integer n. === Adjoint representation === For any Lie algebra g {\displaystyle {\mathfrak {g}}} , the adjoint representation is the representation ad : g → g l ( g ) {\displaystyle \operatorname {ad} \colon {\mathfrak {g}}\to {\mathfrak {gl}}({\mathfrak {g}})} given by ad ⁡ ( x ) ( y ) = [ x , y ] {\displaystyle \operatorname {ad} (x)(y)=[x,y]} . (This is a representation of g {\displaystyle {\mathfrak {g}}} by the Jacobi identity.) === Goals of representation theory === One important aspect of the study of Lie algebras (especially semisimple Lie algebras, as defined below) is the study of their representations. Although Ado's theorem is an important result, the primary goal of representation theory is not to find a faithful representation of a given Lie algebra g {\displaystyle {\mathfrak {g}}} . Indeed, in the semisimple case, the adjoint representation is already faithful. Rather, the goal is to understand all possible representations of g {\displaystyle {\mathfrak {g}}} . For a semisimple Lie algebra over a field of characteristic zero, Weyl's theorem says that every finite-dimensional representation is a direct sum of irreducible representations (those with no nontrivial invariant subspaces). The finite-dimensional irreducible representations are well understood from several points of view; see the representation theory of semisimple Lie algebras and the Weyl character formula. === Universal enveloping algebra === The functor that takes an associative algebra A over a field F to A as a Lie algebra (by [ X , Y ] := X Y − Y X {\displaystyle [X,Y]:=XY-YX} ) has a left adjoint g ↦ U ( g ) {\displaystyle {\mathfrak {g}}\mapsto U({\mathfrak {g}})} , called the universal enveloping algebra. To construct this: given a Lie algebra g {\displaystyle {\mathfrak {g}}} over F, let T ( g ) = F ⊕ g ⊕ ( g ⊗ g ) ⊕ ( g ⊗ g ⊗ g ) ⊕ ⋯ {\displaystyle T({\mathfrak {g}})=F\oplus {\mathfrak {g}}\oplus ({\mathfrak {g}}\otimes {\mathfrak {g}})\oplus ({\mathfrak {g}}\otimes {\mathfrak {g}}\otimes {\mathfrak {g}})\oplus \cdots } be the tensor algebra on g {\displaystyle {\mathfrak {g}}} , also called the free associative algebra on the vector space g {\displaystyle {\mathfrak {g}}} . Here ⊗ {\displaystyle \otimes } denotes the tensor product of F-vector spaces. Let I be the two-sided ideal in T ( g ) {\displaystyle T({\mathfrak {g}})} generated by the elements X Y − Y X − [ X , Y ] {\displaystyle XY-YX-[X,Y]} for X , Y ∈ g {\displaystyle X,Y\in {\mathfrak {g}}} ; then the universal enveloping algebra is the quotient ring U ( g ) = T ( g ) / I {\displaystyle U({\mathfrak {g}})=T({\mathfrak {g}})/I} . It satisfies the Poincaré–Birkhoff–Witt theorem: if e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} is a basis for g {\displaystyle {\mathfrak {g}}} as an F-vector space, then a basis for U ( g ) {\displaystyle U({\mathfrak {g}})} is given by all ordered products e 1 i 1 ⋯ e n i n {\displaystyle e_{1}^{i_{1}}\cdots e_{n}^{i_{n}}} with i 1 , … , i n {\displaystyle i_{1},\ldots ,i_{n}} natural numbers. 
In particular, the map g → U ( g ) {\displaystyle {\mathfrak {g}}\to U({\mathfrak {g}})} is injective. Representations of g {\displaystyle {\mathfrak {g}}} are equivalent to modules over the universal enveloping algebra. The fact that g → U ( g ) {\displaystyle {\mathfrak {g}}\to U({\mathfrak {g}})} is injective implies that every Lie algebra (possibly of infinite dimension) has a faithful representation (of infinite dimension), namely its representation on U ( g ) {\displaystyle U({\mathfrak {g}})} . This also shows that every Lie algebra is contained in the Lie algebra associated to some associative algebra. === Representation theory in physics === The representation theory of Lie algebras plays an important role in various parts of theoretical physics. There, one considers operators on the space of states that satisfy certain natural commutation relations. These commutation relations typically come from a symmetry of the problem—specifically, they are the relations of the Lie algebra of the relevant symmetry group. An example is the angular momentum operators, whose commutation relations are those of the Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} of the rotation group S O ( 3 ) {\displaystyle \mathrm {SO} (3)} . Typically, the space of states is far from being irreducible under the pertinent operators, but one can attempt to decompose it into irreducible pieces. In doing so, one needs to know the irreducible representations of the given Lie algebra. In the study of the hydrogen atom, for example, quantum mechanics textbooks classify (more or less explicitly) the finite-dimensional irreducible representations of the Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . == Structure theory and classification == Lie algebras can be classified to some extent. This is a powerful approach to the classification of Lie groups. === Abelian, nilpotent, and solvable === Analogously to abelian, nilpotent, and solvable groups, one can define abelian, nilpotent, and solvable Lie algebras. A Lie algebra g {\displaystyle {\mathfrak {g}}} is abelian if the Lie bracket vanishes; that is, [x,y] = 0 for all x and y in g {\displaystyle {\mathfrak {g}}} . In particular, the Lie algebra of an abelian Lie group (such as the group R n {\displaystyle \mathbb {R} ^{n}} under addition or the torus group T n {\displaystyle \mathbb {T} ^{n}} ) is abelian. Every finite-dimensional abelian Lie algebra over a field F {\displaystyle F} is isomorphic to F n {\displaystyle F^{n}} for some n ≥ 0 {\displaystyle n\geq 0} , meaning an n-dimensional vector space with Lie bracket zero. A more general class of Lie algebras is defined by the vanishing of all commutators of given length. First, the commutator subalgebra (or derived subalgebra) of a Lie algebra g {\displaystyle {\mathfrak {g}}} is [ g , g ] {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]} , meaning the linear subspace spanned by all brackets [ x , y ] {\displaystyle [x,y]} with x , y ∈ g {\displaystyle x,y\in {\mathfrak {g}}} . The commutator subalgebra is an ideal in g {\displaystyle {\mathfrak {g}}} , in fact the smallest ideal such that the quotient Lie algebra is abelian. It is analogous to the commutator subgroup of a group. 
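Since by bilinearity [g, g] is spanned by brackets of basis elements, the commutator subalgebra can be computed mechanically from a basis. The following minimal sketch (assuming Python with NumPy; derived_subalgebra is an illustrative helper, not a library function) computes the derived series of the two-dimensional nonabelian Lie algebra from the earlier example, exhibiting the solvability defined below:

import numpy as np
from itertools import combinations

def bracket(A, B):
    return A @ B - B @ A

def derived_subalgebra(basis):
    # keep a maximal linearly independent set of brackets of basis elements
    independent, rows = [], np.zeros((0, basis[0].size))
    for A, B in combinations(basis, 2):
        P = bracket(A, B)
        stacked = np.vstack([rows, P.flatten()])
        if np.linalg.matrix_rank(stacked) > rows.shape[0]:
            independent.append(P)
            rows = stacked
    return independent

# the two-dimensional nonabelian Lie algebra with [X, Y] = Y:
X = np.array([[1.0, 0.0], [0.0, 0.0]])
Y = np.array([[0.0, 1.0], [0.0, 0.0]])
series = [X, Y]
while series:           # the derived series reaches 0, so the algebra is solvable
    print(len(series))  # prints 2, then 1
    series = derived_subalgebra(series)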
A Lie algebra g {\displaystyle {\mathfrak {g}}} is nilpotent if the lower central series g ⊇ [ g , g ] ⊇ [ [ g , g ] , g ] ⊇ [ [ [ g , g ] , g ] , g ] ⊇ ⋯ {\displaystyle {\mathfrak {g}}\supseteq [{\mathfrak {g}},{\mathfrak {g}}]\supseteq [[{\mathfrak {g}},{\mathfrak {g}}],{\mathfrak {g}}]\supseteq [[[{\mathfrak {g}},{\mathfrak {g}}],{\mathfrak {g}}],{\mathfrak {g}}]\supseteq \cdots } becomes zero after finitely many steps. Equivalently, g {\displaystyle {\mathfrak {g}}} is nilpotent if there is a finite sequence of ideals in g {\displaystyle {\mathfrak {g}}} , 0 = a 0 ⊆ a 1 ⊆ ⋯ ⊆ a r = g , {\displaystyle 0={\mathfrak {a}}_{0}\subseteq {\mathfrak {a}}_{1}\subseteq \cdots \subseteq {\mathfrak {a}}_{r}={\mathfrak {g}},} such that a j / a j − 1 {\displaystyle {\mathfrak {a}}_{j}/{\mathfrak {a}}_{j-1}} is central in g / a j − 1 {\displaystyle {\mathfrak {g}}/{\mathfrak {a}}_{j-1}} for each j. By Engel's theorem, a Lie algebra over any field is nilpotent if and only if for every u in g {\displaystyle {\mathfrak {g}}} the adjoint endomorphism ad ⁡ ( u ) : g → g , ad ⁡ ( u ) v = [ u , v ] {\displaystyle \operatorname {ad} (u):{\mathfrak {g}}\to {\mathfrak {g}},\quad \operatorname {ad} (u)v=[u,v]} is nilpotent. More generally, a Lie algebra g {\displaystyle {\mathfrak {g}}} is said to be solvable if the derived series: g ⊇ [ g , g ] ⊇ [ [ g , g ] , [ g , g ] ] ⊇ [ [ [ g , g ] , [ g , g ] ] , [ [ g , g ] , [ g , g ] ] ] ⊇ ⋯ {\displaystyle {\mathfrak {g}}\supseteq [{\mathfrak {g}},{\mathfrak {g}}]\supseteq [[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]\supseteq [[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]],[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]]\supseteq \cdots } becomes zero after finitely many steps. Equivalently, g {\displaystyle {\mathfrak {g}}} is solvable if there is a finite sequence of Lie subalgebras, 0 = m 0 ⊆ m 1 ⊆ ⋯ ⊆ m r = g , {\displaystyle 0={\mathfrak {m}}_{0}\subseteq {\mathfrak {m}}_{1}\subseteq \cdots \subseteq {\mathfrak {m}}_{r}={\mathfrak {g}},} such that m j − 1 {\displaystyle {\mathfrak {m}}_{j-1}} is an ideal in m j {\displaystyle {\mathfrak {m}}_{j}} with m j / m j − 1 {\displaystyle {\mathfrak {m}}_{j}/{\mathfrak {m}}_{j-1}} abelian for each j. Every finite-dimensional Lie algebra over a field has a unique maximal solvable ideal, called its radical. Under the Lie correspondence, nilpotent (respectively, solvable) Lie groups correspond to nilpotent (respectively, solvable) Lie algebras over R {\displaystyle \mathbb {R} } . For example, for a positive integer n and a field F of characteristic zero, the radical of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} is its center, the 1-dimensional subspace spanned by the identity matrix. An example of a solvable Lie algebra is the space b n {\displaystyle {\mathfrak {b}}_{n}} of upper-triangular matrices in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} ; this is not nilpotent when n ≥ 2 {\displaystyle n\geq 2} . An example of a nilpotent Lie algebra is the space u n {\displaystyle {\mathfrak {u}}_{n}} of strictly upper-triangular matrices in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} ; this is not abelian when n ≥ 3 {\displaystyle n\geq 3} . === Simple and semisimple === A Lie algebra g {\displaystyle {\mathfrak {g}}} is called simple if it is not abelian and the only ideals in g {\displaystyle {\mathfrak {g}}} are 0 and g {\displaystyle {\mathfrak {g}}} . 
(In particular, a one-dimensional—necessarily abelian—Lie algebra g {\displaystyle {\mathfrak {g}}} is by definition not simple, even though its only ideals are 0 and g {\displaystyle {\mathfrak {g}}} .) A finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} is called semisimple if the only solvable ideal in g {\displaystyle {\mathfrak {g}}} is 0. In characteristic zero, a Lie algebra g {\displaystyle {\mathfrak {g}}} is semisimple if and only if it is isomorphic to a product of simple Lie algebras, g ≅ g 1 × ⋯ × g r {\displaystyle {\mathfrak {g}}\cong {\mathfrak {g}}_{1}\times \cdots \times {\mathfrak {g}}_{r}} . For example, the Lie algebra s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} is simple for every n ≥ 2 {\displaystyle n\geq 2} and every field F of characteristic zero (or just of characteristic not dividing n). The Lie algebra s u ( n ) {\displaystyle {\mathfrak {su}}(n)} over R {\displaystyle \mathbb {R} } is simple for every n ≥ 2 {\displaystyle n\geq 2} . The Lie algebra s o ( n ) {\displaystyle {\mathfrak {so}}(n)} over R {\displaystyle \mathbb {R} } is simple if n = 3 {\displaystyle n=3} or n ≥ 5 {\displaystyle n\geq 5} . (There are "exceptional isomorphisms" s o ( 3 ) ≅ s u ( 2 ) {\displaystyle {\mathfrak {so}}(3)\cong {\mathfrak {su}}(2)} and s o ( 4 ) ≅ s u ( 2 ) × s u ( 2 ) {\displaystyle {\mathfrak {so}}(4)\cong {\mathfrak {su}}(2)\times {\mathfrak {su}}(2)} .) The concept of semisimplicity for Lie algebras is closely related with the complete reducibility (semisimplicity) of their representations. When the ground field F has characteristic zero, every finite-dimensional representation of a semisimple Lie algebra is semisimple (that is, a direct sum of irreducible representations). A finite-dimensional Lie algebra over a field of characteristic zero is called reductive if its adjoint representation is semisimple. Every reductive Lie algebra is isomorphic to the product of an abelian Lie algebra and a semisimple Lie algebra. For example, g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} is reductive for F of characteristic zero: for n ≥ 2 {\displaystyle n\geq 2} , it is isomorphic to the product g l ( n , F ) ≅ F × s l ( n , F ) , {\displaystyle {\mathfrak {gl}}(n,F)\cong F\times {\mathfrak {sl}}(n,F),} where F denotes the center of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} , the 1-dimensional subspace spanned by the identity matrix. Since the special linear Lie algebra s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} is simple, g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} contains few ideals: only 0, the center F, s l ( n , F ) {\displaystyle {\mathfrak {sl}}(n,F)} , and all of g l ( n , F ) {\displaystyle {\mathfrak {gl}}(n,F)} . === Cartan's criterion === Cartan's criterion (by Élie Cartan) gives conditions for a finite-dimensional Lie algebra of characteristic zero to be solvable or semisimple. It is expressed in terms of the Killing form, the symmetric bilinear form on g {\displaystyle {\mathfrak {g}}} defined by K ( u , v ) = tr ⁡ ( ad ⁡ ( u ) ad ⁡ ( v ) ) , {\displaystyle K(u,v)=\operatorname {tr} (\operatorname {ad} (u)\operatorname {ad} (v)),} where tr denotes the trace of a linear operator. Namely: a Lie algebra g {\displaystyle {\mathfrak {g}}} is semisimple if and only if the Killing form is nondegenerate. A Lie algebra g {\displaystyle {\mathfrak {g}}} is solvable if and only if K ( g , [ g , g ] ) = 0. 
{\displaystyle K({\mathfrak {g}},[{\mathfrak {g}},{\mathfrak {g}}])=0.} === Classification === The Levi decomposition asserts that every finite-dimensional Lie algebra over a field of characteristic zero is a semidirect product of its solvable radical and a semisimple Lie algebra. Moreover, a semisimple Lie algebra in characteristic zero is a product of simple Lie algebras, as mentioned above. This focuses attention on the problem of classifying the simple Lie algebras. The simple Lie algebras of finite dimension over an algebraically closed field F of characteristic zero were classified by Killing and Cartan in the 1880s and 1890s, using root systems. Namely, every simple Lie algebra is of type An, Bn, Cn, Dn, E6, E7, E8, F4, or G2. Here the simple Lie algebra of type An is s l ( n + 1 , F ) {\displaystyle {\mathfrak {sl}}(n+1,F)} , Bn is s o ( 2 n + 1 , F ) {\displaystyle {\mathfrak {so}}(2n+1,F)} , Cn is s p ( 2 n , F ) {\displaystyle {\mathfrak {sp}}(2n,F)} , and Dn is s o ( 2 n , F ) {\displaystyle {\mathfrak {so}}(2n,F)} . The other five are known as the exceptional Lie algebras. The classification of finite-dimensional simple Lie algebras over R {\displaystyle \mathbb {R} } is more complicated, but it was also solved by Cartan (see simple Lie group for an equivalent classification). One can analyze a Lie algebra g {\displaystyle {\mathfrak {g}}} over R {\displaystyle \mathbb {R} } by considering its complexification g ⊗ R C {\displaystyle {\mathfrak {g}}\otimes _{\mathbb {R} }\mathbb {C} } . In the years leading up to 2004, the finite-dimensional simple Lie algebras over an algebraically closed field of characteristic p > 3 {\displaystyle p>3} were classified by Richard Earl Block, Robert Lee Wilson, Alexander Premet, and Helmut Strade. (See restricted Lie algebra#Classification of simple Lie algebras.) It turns out that there are many more simple Lie algebras in positive characteristic than in characteristic zero. == Relation to Lie groups == Although Lie algebras can be studied in their own right, historically they arose as a means to study Lie groups. The relationship between Lie groups and Lie algebras can be summarized as follows. Each Lie group determines a Lie algebra over R {\displaystyle \mathbb {R} } (concretely, the tangent space at the identity). Conversely, for every finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} , there is a connected Lie group G {\displaystyle G} with Lie algebra g {\displaystyle {\mathfrak {g}}} . This is Lie's third theorem; see the Baker–Campbell–Hausdorff formula. This Lie group is not determined uniquely; however, any two Lie groups with the same Lie algebra are locally isomorphic, and more strongly, they have the same universal cover. For instance, the special orthogonal group SO(3) and the special unitary group SU(2) have isomorphic Lie algebras, but SU(2) is a simply connected double cover of SO(3). For simply connected Lie groups, there is a complete correspondence: taking the Lie algebra gives an equivalence of categories from simply connected Lie groups to Lie algebras of finite dimension over R {\displaystyle \mathbb {R} } . The correspondence between Lie algebras and Lie groups is used in several ways, including in the classification of Lie groups and the representation theory of Lie groups. For finite-dimensional representations, there is an equivalence of categories between representations of a real Lie algebra and representations of the corresponding simply connected Lie group. 
This simplifies the representation theory of Lie groups: it is often easier to classify the representations of a Lie algebra, using linear algebra. Every connected Lie group is isomorphic to its universal cover modulo a discrete central subgroup. So classifying Lie groups becomes simply a matter of counting the discrete subgroups of the center, once the Lie algebra is known. For example, the real semisimple Lie algebras were classified by Cartan, and so the classification of semisimple Lie groups is well understood. For infinite-dimensional Lie algebras, Lie theory works less well. The exponential map need not be a local homeomorphism (for example, in the diffeomorphism group of the circle, there are diffeomorphisms arbitrarily close to the identity that are not in the image of the exponential map). Moreover, in terms of the existing notions of infinite-dimensional Lie groups, some infinite-dimensional Lie algebras do not come from any group. Lie theory also does not work so neatly for infinite-dimensional representations of a finite-dimensional group. Even for the additive group G = R {\displaystyle G=\mathbb {R} } , an infinite-dimensional representation of G {\displaystyle G} can usually not be differentiated to produce a representation of its Lie algebra on the same space, or vice versa. The theory of Harish-Chandra modules is a more subtle relation between infinite-dimensional representations for groups and Lie algebras. == Real form and complexification == Given a complex Lie algebra g {\displaystyle {\mathfrak {g}}} , a real Lie algebra g 0 {\displaystyle {\mathfrak {g}}_{0}} is said to be a real form of g {\displaystyle {\mathfrak {g}}} if the complexification g 0 ⊗ R C {\displaystyle {\mathfrak {g}}_{0}\otimes _{\mathbb {R} }\mathbb {C} } is isomorphic to g {\displaystyle {\mathfrak {g}}} . A real form need not be unique; for example, s l ( 2 , C ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {C} )} has two real forms up to isomorphism, s l ( 2 , R ) {\displaystyle {\mathfrak {sl}}(2,\mathbb {R} )} and s u ( 2 ) {\displaystyle {\mathfrak {su}}(2)} . Given a semisimple complex Lie algebra g {\displaystyle {\mathfrak {g}}} , a split form of it is a real form that splits; i.e., it has a Cartan subalgebra which acts via an adjoint representation with real eigenvalues. A split form exists and is unique (up to isomorphism). A compact form is a real form that is the Lie algebra of a compact Lie group. A compact form exists and is also unique up to isomorphism. == Lie algebra with additional structures == A Lie algebra may be equipped with additional structures that are compatible with the Lie bracket. For example, a graded Lie algebra is a Lie algebra (or more generally a Lie superalgebra) with a compatible grading. A differential graded Lie algebra also comes with a differential, making the underlying vector space a chain complex. For example, the homotopy groups of a simply connected topological space form a graded Lie algebra, using the Whitehead product. In a related construction, Daniel Quillen used differential graded Lie algebras over the rational numbers Q {\displaystyle \mathbb {Q} } to describe rational homotopy theory in algebraic terms. == Lie ring == The definition of a Lie algebra over a field extends to define a Lie algebra over any commutative ring R. 
Namely, a Lie algebra g {\displaystyle {\mathfrak {g}}} over R is an R-module with an alternating R-bilinear map [ , ] : g × g → g {\displaystyle [\ ,\ ]\colon {\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} that satisfies the Jacobi identity. A Lie algebra over the ring Z {\displaystyle \mathbb {Z} } of integers is sometimes called a Lie ring. (This is not directly related to the notion of a Lie group.) Lie rings are used in the study of finite p-groups (for a prime number p) through the Lazard correspondence. The lower central factors of a finite p-group are finite abelian p-groups. The direct sum of the lower central factors is given the structure of a Lie ring by defining the bracket to be the commutator of two coset representatives; see the example below. p-adic Lie groups are related to Lie algebras over the field Q p {\displaystyle \mathbb {Q} _{p}} of p-adic numbers as well as over the ring Z p {\displaystyle \mathbb {Z} _{p}} of p-adic integers. Part of Claude Chevalley's construction of the finite groups of Lie type involves showing that a simple Lie algebra over the complex numbers comes from a Lie algebra over the integers, and then (with more care) a group scheme over the integers. === Examples === Here is a construction of Lie rings arising from the study of abstract groups. For elements x , y {\displaystyle x,y} of a group, define the commutator [ x , y ] = x − 1 y − 1 x y {\displaystyle [x,y]=x^{-1}y^{-1}xy} . Let G = G 1 ⊇ G 2 ⊇ G 3 ⊇ ⋯ ⊇ G n ⊇ ⋯ {\displaystyle G=G_{1}\supseteq G_{2}\supseteq G_{3}\supseteq \cdots \supseteq G_{n}\supseteq \cdots } be a filtration of a group G {\displaystyle G} , that is, a chain of subgroups such that [ G i , G j ] {\displaystyle [G_{i},G_{j}]} is contained in G i + j {\displaystyle G_{i+j}} for all i , j {\displaystyle i,j} . (For the Lazard correspondence, one takes the filtration to be the lower central series of G.) Then L = ⨁ i ≥ 1 G i / G i + 1 {\displaystyle L=\bigoplus _{i\geq 1}G_{i}/G_{i+1}} is a Lie ring, with addition given by the group multiplication (which is abelian on each quotient group G i / G i + 1 {\displaystyle G_{i}/G_{i+1}} ), and with Lie bracket G i / G i + 1 × G j / G j + 1 → G i + j / G i + j + 1 {\displaystyle G_{i}/G_{i+1}\times G_{j}/G_{j+1}\to G_{i+j}/G_{i+j+1}} given by commutators in the group: [ x G i + 1 , y G j + 1 ] := [ x , y ] G i + j + 1 . {\displaystyle [xG_{i+1},yG_{j+1}]:=[x,y]G_{i+j+1}.} For example, the Lie ring associated to the lower central series on the dihedral group of order 8 is the Heisenberg Lie algebra of dimension 3 over the field Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } . == Definition using category-theoretic notation == The definition of a Lie algebra can be reformulated more abstractly in the language of category theory. Namely, one can define a Lie algebra in terms of linear maps—that is, morphisms in the category of vector spaces—without considering individual elements. (In this section, the field over which the algebra is defined is assumed to be of characteristic different from 2.) For the category-theoretic definition of Lie algebras, two braiding isomorphisms are needed. If A is a vector space, the interchange isomorphism τ : A ⊗ A → A ⊗ A {\displaystyle \tau :A\otimes A\to A\otimes A} is defined by τ ( x ⊗ y ) = y ⊗ x . 
{\displaystyle \tau (x\otimes y)=y\otimes x.} The cyclic-permutation braiding σ : A ⊗ A ⊗ A → A ⊗ A ⊗ A {\displaystyle \sigma :A\otimes A\otimes A\to A\otimes A\otimes A} is defined as σ = ( i d ⊗ τ ) ∘ ( τ ⊗ i d ) , {\displaystyle \sigma =(\mathrm {id} \otimes \tau )\circ (\tau \otimes \mathrm {id} ),} where i d {\displaystyle \mathrm {id} } is the identity morphism. Equivalently, σ {\displaystyle \sigma } is defined by σ ( x ⊗ y ⊗ z ) = y ⊗ z ⊗ x . {\displaystyle \sigma (x\otimes y\otimes z)=y\otimes z\otimes x.} With this notation, a Lie algebra can be defined as an object A {\displaystyle A} in the category of vector spaces together with a morphism [ ⋅ , ⋅ ] : A ⊗ A → A {\displaystyle [\cdot ,\cdot ]\colon A\otimes A\rightarrow A} that satisfies the two morphism equalities [ ⋅ , ⋅ ] ∘ ( i d + τ ) = 0 , {\displaystyle [\cdot ,\cdot ]\circ (\mathrm {id} +\tau )=0,} and [ ⋅ , ⋅ ] ∘ ( [ ⋅ , ⋅ ] ⊗ i d ) ∘ ( i d + σ + σ 2 ) = 0. {\displaystyle [\cdot ,\cdot ]\circ ([\cdot ,\cdot ]\otimes \mathrm {id} )\circ (\mathrm {id} +\sigma +\sigma ^{2})=0.} == See also == == Remarks == == References == == Sources == Bourbaki, Nicolas (1989). Lie Groups and Lie Algebras: Chapters 1-3. Springer. ISBN 978-3-540-64242-8. MR 1728312. Erdmann, Karin; Wildon, Mark (2006). Introduction to Lie Algebras. Springer. ISBN 1-84628-040-0. MR 2218355. Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Hall, Brian C. (2015). Lie groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Vol. 222 (2nd ed.). Springer. doi:10.1007/978-3-319-13467-3. ISBN 978-3319134666. ISSN 0072-5285. MR 3331229. Humphreys, James E. (1978). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9 (2nd ed.). Springer-Verlag. ISBN 978-0-387-90053-7. MR 0499562. Jacobson, Nathan (1979) [1962]. Lie Algebras. Dover. ISBN 978-0-486-63832-4. MR 0559927. Khukhro, E. I. (1998), p-Automorphisms of Finite p-Groups, Cambridge University Press, doi:10.1017/CBO9780511526008, ISBN 0-521-59717-X, MR 1615819 Knapp, Anthony W. (2001) [1986], Representation Theory of Semisimple Groups: an Overview Based on Examples, Princeton University Press, ISBN 0-691-09089-0, MR 1880691 Milnor, John (2010) [1986], "Remarks on infinite-dimensional Lie groups", Collected Papers of John Milnor, vol. 5, American Mathematical Soc., pp. 91–141, ISBN 978-0-8218-4876-0, MR 0830252 O'Connor, J. J.; Robertson, E. F. (2000). "Marius Sophus Lie". MacTutor History of Mathematics Archive. O'Connor, J. J.; Robertson, E. F. (2005). "Wilhelm Karl Joseph Killing". MacTutor History of Mathematics Archive. Quillen, Daniel (1969), "Rational homotopy theory", Annals of Mathematics, 90 (2): 205–295, doi:10.2307/1970725, JSTOR 1970725, MR 0258031 Serre, Jean-Pierre (2006). Lie Algebras and Lie Groups (2nd ed.). Springer. ISBN 978-3-540-55008-2. MR 2179691. Varadarajan, Veeravalli S. (1984) [1974]. Lie Groups, Lie Algebras, and Their Representations. Springer. ISBN 978-0-387-90969-1. MR 0746308. Wigner, Eugene (1959). Group Theory and its Application to the Quantum Mechanics of Atomic Spectra. Translated by J. J. Griffin. Academic Press. ISBN 978-0127505503. MR 0106711. == External links == Kac, Victor G.; et al. Course notes for MIT 18.745: Introduction to Lie Algebras. 
Archived from the original on 2010-04-20. "Lie algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994] McKenzie, Douglas (2015). "An Elementary Introduction to Lie Algebras for Physicists".
Wikipedia:Light's associativity test#0
In mathematics, Light's associativity test is a procedure invented by F. W. Light for testing whether a binary operation defined in a finite set by a Cayley multiplication table is associative. The naive procedure for verification of the associativity of a binary operation specified by a Cayley table, which compares the two products that can be formed from each triple of elements, is cumbersome. Light's associativity test simplifies the task in some instances (although it does not improve the worst-case runtime of the naive algorithm, namely O ( n 3 ) {\displaystyle {\mathcal {O}}\left(n^{3}\right)} for sets of size n {\displaystyle n} ). == Description of the procedure == Let a binary operation ' · ' be defined in a finite set A by a Cayley table. Choosing some element a in A, two new binary operations are defined in A as follows: x ⋆ {\displaystyle \star } y = x ⋅ ( a ⋅ y ) and x ∘ {\displaystyle \circ } y = ( x ⋅ a ) ⋅ y . The Cayley tables of these operations are constructed and compared. If the tables coincide then x · ( a · y ) = ( x · a ) · y for all x and y. This is repeated for every element of the set A. The example below illustrates a further simplification in the procedure for the construction and comparison of the Cayley tables of the operations ' ⋆ {\displaystyle \star } ' and ' ∘ {\displaystyle \circ } '. It is not even necessary to construct the Cayley tables of ' ⋆ {\displaystyle \star } ' and ' ∘ {\displaystyle \circ } ' for all elements of A. It is enough to compare Cayley tables of ' ⋆ {\displaystyle \star } ' and ' ∘ {\displaystyle \circ } ' corresponding to the elements in a proper generating subset of A. When the operation ' · ' is commutative, x ⋆ {\displaystyle \star } y = y ∘ {\displaystyle \circ } x. As a result, only part of each Cayley table must be computed, because x ⋆ {\displaystyle \star } x = x ∘ {\displaystyle \circ } x always holds, and x ⋆ {\displaystyle \star } y = x ∘ {\displaystyle \circ } y implies y ⋆ {\displaystyle \star } x = y ∘ {\displaystyle \circ } x. When there is an identity element e, it does not need to be included in the Cayley tables because x ⋆ {\displaystyle \star } y = x ∘ {\displaystyle \circ } y always holds if at least one of x and y is equal to e. == Example == Consider the binary operation ' · ' in the set A = { a, b, c, d, e } defined by the following Cayley table (Table 1): The set { c, e } is a generating set for the set A under the binary operation defined by the above table, since a = e · e, b = c · c, d = c · e. Thus it is enough to verify that the binary operations ' ⋆ {\displaystyle \star } ' and ' ∘ {\displaystyle \circ } ' corresponding to c coincide and also that the binary operations ' ⋆ {\displaystyle \star } ' and ' ∘ {\displaystyle \circ } ' corresponding to e coincide. To verify that the binary operations ' ⋆ {\displaystyle \star } ' and ' ∘ {\displaystyle \circ } ' corresponding to c coincide, choose the row in Table 1 corresponding to the element c : This row is copied as the header row of a new table (Table 3): Under the header a copy the corresponding column in Table 1, under the header b copy the corresponding column in Table 1, etc., and construct Table 4. The column headers of Table 4 are now deleted to get Table 5: The Cayley table of the binary operation ' ⋆ {\displaystyle \star } ' corresponding to the element c is given by Table 6. 
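The procedure just described is short to implement directly; the following is a minimal Python sketch (the table in the example is a hypothetical stand-in, since the article's Table 1 is not reproduced here):

def light_test(elements, op, generators=None):
    # For each a (it suffices to take a in a generating set, as noted above),
    # compare the tables of x * (a * y) and (x * a) * y entry by entry.
    for a in (generators if generators is not None else elements):
        for x in elements:
            for y in elements:
                if op(x, op(a, y)) != op(op(x, a), y):
                    return False
    return True

# hypothetical example: addition modulo 4, generated by the element 1
elements = [0, 1, 2, 3]
op = lambda i, j: (i + j) % 4
print(light_test(elements, op, generators=[1]))  # True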
Next choose the c column of Table 1: Copy this column to the index column to get Table 8: Against the index entry a in Table 8 copy the corresponding row in Table 1, against the index entry b copy the corresponding row in Table 1, etc., and construct Table 9. The index entries in the first column of Table 9 are now deleted to get Table 10: The Cayley table of the binary operation ' ∘ {\displaystyle \circ } ' corresponding to the element c is given by Table 11. One can verify that the entries in the various cells in Table 6 agree with the entries in the corresponding cells of Table 11. This shows that x · ( c · y ) = ( x · c ) · y for all x and y in A. If there were some discrepancy then it would not be true that x · ( c · y ) = ( x · c ) · y for all x and y in A. That x · ( e · y ) = ( x · e ) · y for all x and y in A can be verified in a similar way by constructing the following tables (Table 12 and Table 13): === A further simplification === It is not necessary to construct the Cayley tables (Table 6 and Table 11) of the binary operations ' ⋆ {\displaystyle \star } ' and ' ∘ {\displaystyle \circ } '. It is enough to copy the column corresponding to the header c in Table 1 to the index column in Table 5 and form the following table (Table 14) and verify that the a-row of Table 14 is identical with the a-row of Table 1, the b-row of Table 14 is identical with the b-row of Table 1, etc. This is to be repeated mutatis mutandis for all the elements of the generating set of A. == Program == Computer software can be written to carry out Light's associativity test. Kehayopulu and Argyris have developed such a program for Mathematica. == Extension == Light's associativity test can be extended to test associativity in a more general context. Let T = { t1, t2, … {\displaystyle \ldots } , tm } be a magma in which the operation is denoted by juxtaposition. Let X = { x1, x2, … {\displaystyle \ldots } , xn } be a set. Let there be a mapping from the Cartesian product T × X to X denoted by (t, x) ↦ tx and let it be required to test whether this map has the property (st)x = s(tx) for all s, t in T and all x in X. A generalization of Light's associativity test can be applied to verify whether the above property holds or not. In mathematical notation, the generalization runs as follows: For each t in T, let L(t) be the m × n matrix of elements of X whose i-th row is ( (tit)x1, (tit)x2, … {\displaystyle \ldots } , (tit)xn ) for i = 1, … {\displaystyle \ldots } , m and let R(t) be the m × n matrix of elements of X, the elements of whose j-th column are ( t1(txj), t2(txj), … {\displaystyle \ldots } , tm(txj) ) for j = 1, … {\displaystyle \ldots } , n. According to the generalised test (due to Bednarek), the property to be verified holds if and only if L(t) = R(t) for all t in T. When X = T, Bednarek's test reduces to Light's test. == More advanced algorithms == There is a randomized algorithm by Rajagopalan and Schulman to test associativity in time proportional to the input size. (The method also works for testing certain other identities.) Specifically, the runtime is O ( n 2 log ⁡ 1 δ ) {\displaystyle O(n^{2}\log {\frac {1}{\delta }})} for an n × n {\displaystyle n\times n} table and error probability δ {\displaystyle \delta } . 
The algorithm can be modified to produce a triple ⟨ a , b , c ⟩ {\displaystyle \langle a,b,c\rangle } for which ( a b ) c ≠ a ( b c ) {\displaystyle (ab)c\neq a(bc)} , if there is one, in time O ( n 2 log ⁡ n ⋅ log ⁡ 1 δ ) {\displaystyle O(n^{2}\log n\cdot \log {\frac {1}{\delta }})} . == Notes == == References == Clifford, Alfred Hoblitzelle; Preston, Gordon Bamford (1961). The algebraic theory of semigroups. Vol. I. Mathematical Surveys, No. 7. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-0272-4. MR 0132791. (pp. 7–9)
Wikipedia:Like terms#0
In mathematics, like terms are summands in a sum that differ only by a numerical factor. Like terms can be regrouped by adding their coefficients. Typically, in a polynomial expression, like terms are those that contain the same variables to the same powers, possibly with different coefficients. More generally, when some variables are considered as parameters, like terms are defined similarly, but "numerical factors" must be replaced by "factors depending only on the parameters". For example, when considering a quadratic equation, one often considers the expression ( x − r ) ( x − s ) , {\displaystyle (x-r)(x-s),} where r {\displaystyle r} and s {\displaystyle s} are the roots of the equation and may be considered as parameters. Then, expanding the above product and regrouping the like terms gives x 2 − ( r + s ) x + r s . {\displaystyle x^{2}-(r+s)x+rs.} == Generalization == In this discussion, a "term" will refer to a string of numbers being multiplied or divided together (recall that division is simply multiplication by a reciprocal). Terms are within the same expression and are combined by either addition or subtraction. For example, take the expression: a x + b x {\displaystyle ax+bx} There are two terms in this expression. Notice that the two terms have a common factor, that is, both terms have an x {\displaystyle x} . This means that the common factor variable can be factored out, resulting in ( a + b ) x {\displaystyle (a+b)x} If the expression in parentheses may be calculated, that is, if the variables in the expression in the parentheses are known numbers, then it is simpler to write the calculation a + b {\displaystyle a+b} and juxtapose that new number with the remaining unknown number. Terms combined in an expression with a common, unknown factor (or multiple unknown factors) are called like terms. == Examples == === Example === To provide an example for above, let a {\displaystyle a} and b {\displaystyle b} have numerical values, so that their sum may be calculated. For ease of calculation, let a = 5 {\displaystyle a=5} and b = 3 {\displaystyle b=3} . The original expression becomes 5 x + 3 x {\displaystyle 5x+3x} which may be factored into ( 5 + 3 ) x {\displaystyle (5+3)x} or, equally, 8 x {\displaystyle 8x} . This demonstrates that 5 x + 3 x = 8 x {\displaystyle 5x+3x=8x} The known values assigned to the unlike part of two or more terms are called coefficients. As this example shows, when like terms exist in an expression, they may be combined by adding or subtracting (whatever the expression indicates) the coefficients, and maintaining the common factor of both terms. Such combination is called combining like terms or collecting like terms, and it is an important tool used for solving equations. === Simplifying an expression === Take the expression, which is to be simplified: 3 ( 4 x 2 y − 6 y ) + 7 x 2 y − 3 y 2 + 2 ( 8 y − 4 y 2 − 4 x 2 y ) {\displaystyle 3(4x^{2}y-6y)+7x^{2}y-3y^{2}+2(8y-4y^{2}-4x^{2}y)} The first step to grouping like terms in this expression is to get rid of the parentheses. Do this by distributing (multiplying) each number in front of a set of parentheses to each term in that set of parentheses: 12 x 2 y − 18 y + 7 x 2 y − 3 y 2 + 16 y − 8 y 2 − 8 x 2 y {\displaystyle 12x^{2}y-18y+7x^{2}y-3y^{2}+16y-8y^{2}-8x^{2}y} The like terms in this expression are the terms that can be grouped together by having exactly the same set of unknown factors. Here, the sets of unknown factors are x 2 y , {\displaystyle x^{2}y,} y 2 , {\displaystyle y^{2},} and y {\displaystyle y} .
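Collecting like terms is mechanical: each term can be keyed by its set of unknown factors, and coefficients with the same key are summed. The following Python sketch (the encoding of a monomial as a tuple of variable–exponent pairs is an illustrative assumption, not standard notation) reproduces the simplification of the expression above; the walkthrough of the rule then resumes below.

```python
# Minimal sketch: combine like terms by summing coefficients keyed on the
# monomial of unknown factors. The tuple-of-(variable, exponent) encoding
# is an illustrative assumption.
from collections import defaultdict

def combine_like_terms(terms):
    """terms: iterable of (coefficient, monomial) pairs, where a monomial
    is a tuple of (variable, exponent) pairs in a fixed order."""
    collected = defaultdict(int)
    for coeff, monomial in terms:
        collected[monomial] += coeff        # like terms share the same key
    return {m: c for m, c in collected.items() if c != 0}

# The expanded expression 12x^2y - 18y + 7x^2y - 3y^2 + 16y - 8y^2 - 8x^2y:
x2y, y2, y1 = (("x", 2), ("y", 1)), (("y", 2),), (("y", 1),)
terms = [(12, x2y), (-18, y1), (7, x2y), (-3, y2), (16, y1), (-8, y2), (-8, x2y)]
print(combine_like_terms(terms))
# {x^2 y: 11, y: -2, y^2: -11}, i.e. 11x^2y - 2y - 11y^2
```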
By the rule in the first example, all terms with the same set of unknown factors, that is, all like terms, may be combined by adding or subtracting their coefficients, while maintaining the unknown factors. Thus, the expression becomes 11 x 2 y − 2 y − 11 y 2 {\displaystyle 11x^{2}y-2y-11y^{2}} The expression is considered simplified when all like terms have been combined, and all terms present are unlike. In this case, all terms now have different unknown factors, and are thus unlike, and so the expression is completely simplified. == Footnotes ==
Wikipedia:Liliana Borcea#0
Liliana Borcea is the George P. Livanos Professor of Applied Physics and Applied Mathematics in the Department of Applied Physics and Applied Mathematics at Columbia University. She was previously the Peter Field Collegiate Professor of Mathematics at the University of Michigan. Her research interests are in scientific computing and applied mathematics, including the scattering and transport of electromagnetic waves. == Education and career == Borcea is originally from Romania, and earned a diploma in applied physics in 1987 from the University of Bucharest. She came to Stanford University for her graduate studies in Scientific Computing and Computational Mathematics, earning a master's degree in 1992 and completing her doctorate in 1996, under the supervision of George C. Papanicolaou. After postdoctoral research at the California Institute of Technology, she joined the Rice University department of Computational and Applied Mathematics in 1996, and became the Noah Harding Professor at Rice in 2007. In 2013 she moved to Michigan as Peter Field Collegiate Professor. She served on the Scientific Advisory Board for the Institute for Computational and Experimental Research in Mathematics (ICERM). == Recognition == She was recognized as the AWM-SIAM Sonia Kovalevsky Lecturer for 2017, selected "for her distinguished scientific contributions to the mathematical and numerical analysis of wave propagation in random media, array imaging in complex environments, and inverse problems in high-contrast electrical impedance tomography, as well as model reduction techniques for parabolic and hyperbolic partial differential equations." She is a member of the 2018 class of SIAM Fellows. She was elected to the American Academy of Arts and Sciences in 2023. == References == == External links == Home page Liliana Borcea publications indexed by Google Scholar
Wikipedia:Limit (mathematics)#0
In mathematics, a limit is the value that a function (or sequence) approaches as the argument (or index) approaches some value. Limits of functions are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals. The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory. The limit inferior and limit superior provide generalizations of the concept of a limit which are particularly relevant when the limit at a point may not exist. == Notation == In formulas, a limit of a function is usually written as lim x → c f ( x ) = L , {\displaystyle \lim _{x\to c}f(x)=L,} and is read as "the limit of f of x as x approaches c equals L". This means that the value of the function f can be made arbitrarily close to L, by choosing x sufficiently close to c. Alternatively, the fact that a function f approaches the limit L as x approaches c is sometimes denoted by a right arrow (→ or → {\displaystyle \rightarrow } ), as in f ( x ) → L as x → c , {\displaystyle f(x)\to L{\text{ as }}x\to c,} which reads " f {\displaystyle f} of x {\displaystyle x} tends to L {\displaystyle L} as x {\displaystyle x} tends to c {\displaystyle c} ". == History == According to Hankel (1871), the modern concept of limit originates from Proposition X.1 of Euclid's Elements, which forms the basis of the Method of exhaustion found in Euclid and Archimedes: "Two unequal magnitudes being set out, if from the greater there is subtracted a magnitude greater than its half, and from that which is left a magnitude greater than its half, and if this process is repeated continually, then there will be left some magnitude less than the lesser magnitude set out." Grégoire de Saint-Vincent gave the first definition of limit (terminus) of a geometric series in his work Opus Geometricum (1647): "The terminus of a progression is the end of the series, which none progression can reach, even not if she is continued in infinity, but which she can approach nearer than a given segment." In the Scholium to Principia in 1687, Isaac Newton had a clear definition of a limit, stating that "Those ultimate ratios... are not actually ratios of ultimate quantities, but limits... which they can approach so closely that their difference is less than any given quantity". The modern definition of a limit goes back to Bernard Bolzano who, in 1817, developed the basics of the epsilon-delta technique to define continuous functions. However, his work remained unknown to other mathematicians until thirty years after his death. Augustin-Louis Cauchy in 1821, followed by Karl Weierstrass, formalized the definition of the limit of a function which became known as the (ε, δ)-definition of limit. The modern notation of placing the arrow below the limit symbol is due to G. H. Hardy, who introduced it in his book A Course of Pure Mathematics in 1908. == Types of limits == === In sequences === ==== Real numbers ==== The expression 0.999... should be interpreted as the limit of the sequence 0.9, 0.99, 0.999, ... and so on. This sequence can be rigorously shown to have the limit 1, and therefore this expression is meaningfully interpreted as having the value 1. Formally, suppose a1, a2, ... is a sequence of real numbers. 
The real number L is the limit of this sequence if and only if for every real number ε > 0, there exists a natural number N such that for all n > N, we have |an − L| < ε. The common notation lim n → ∞ a n = L {\displaystyle \lim _{n\to \infty }a_{n}=L} is read as: "The limit of an as n approaches infinity equals L" or "The limit as n approaches infinity of an equals L". The formal definition intuitively means that eventually, all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L. Not every sequence has a limit. A sequence with a limit is called convergent; otherwise it is called divergent. One can show that a convergent sequence has only one limit. The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n approaches infinity of a sequence {an} is simply the limit at infinity of a function a(n) defined on the natural numbers {n}. On the other hand, if X is the domain of a function f(x) and if the limit as n approaches infinity of f(xn) is L for every arbitrary sequence of points {xn} in X − {x0} which converges to x0, then the limit of the function f(x) as x approaches x0 is equal to L. One such sequence would be {x0 + 1/n}. ==== Infinity as a limit ==== There is also a notion of having a limit "tend to infinity", rather than to a finite value L {\displaystyle L} . A sequence { a n } {\displaystyle \{a_{n}\}} is said to "tend to infinity" if, for each real number M > 0 {\displaystyle M>0} , known as the bound, there exists an integer N {\displaystyle N} such that for each n > N {\displaystyle n>N} , a n > M . {\displaystyle a_{n}>M.} That is, for every possible bound, the sequence eventually exceeds the bound. This is often written lim n → ∞ a n = ∞ {\displaystyle \lim _{n\rightarrow \infty }a_{n}=\infty } or simply a n → ∞ {\displaystyle a_{n}\rightarrow \infty } . It is possible for a sequence to be divergent, but not tend to infinity. Such sequences are called oscillatory. An example of an oscillatory sequence is a n = ( − 1 ) n {\displaystyle a_{n}=(-1)^{n}} . There is a corresponding notion of tending to negative infinity, lim n → ∞ a n = − ∞ {\displaystyle \lim _{n\rightarrow \infty }a_{n}=-\infty } , defined by changing the inequality in the above definition to a n < M , {\displaystyle a_{n}<M,} with M < 0. {\displaystyle M<0.} A sequence { a n } {\displaystyle \{a_{n}\}} with lim n → ∞ | a n | = ∞ {\displaystyle \lim _{n\rightarrow \infty }|a_{n}|=\infty } is unbounded, i.e., not contained in any ball; this notion makes sense equally for sequences in the complex numbers, or in any metric space. The converse fails: a sequence can be unbounded without tending to infinity in absolute value (for example 0, 1, 0, 2, 0, 3, …). Bounded sequences never tend to infinity; likewise, sequences bounded above never tend to positive infinity, and sequences bounded below never tend to negative infinity. ==== Metric space ==== The discussion of sequences above is for sequences of real numbers. The notion of limits can be defined for sequences valued in more abstract spaces, such as metric spaces. If M {\displaystyle M} is a metric space with distance function d {\displaystyle d} , and { a n } n ≥ 0 {\displaystyle \{a_{n}\}_{n\geq 0}} is a sequence in M {\displaystyle M} , then the limit (when it exists) of the sequence is an element a ∈ M {\displaystyle a\in M} such that, given ε > 0 {\displaystyle \varepsilon >0} , there exists an N {\displaystyle N} such that for each n > N {\displaystyle n>N} , we have d ( a , a n ) < ε {\displaystyle d(a,a_{n})<\varepsilon .}
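Both the real-number and the metric-space conditions can be probed numerically, although a computation can only inspect finitely many terms and so illustrates rather than proves convergence. The sketch below (the helper name is hypothetical) searches for a candidate N for the sequence an = 1/n with limit 0, using the distance |an − L| exactly as in the definitions above.

```python
# Numerical probe of the epsilon-N condition (an illustration, not a proof:
# only finitely many terms of the sequence can be inspected).

def candidate_N(a, L, eps, scan=10**5):
    """Smallest N such that |a(n) - L| < eps for every scanned n > N."""
    N = 0
    for n in range(1, scan + 1):
        if abs(a(n) - L) >= eps:
            N = n                   # the condition fails at n, so N >= n
    return N

a = lambda n: 1 / n                 # the sequence a_n = 1/n, limit L = 0
for eps in (0.1, 0.01, 0.001):
    print(eps, candidate_N(a, 0.0, eps))   # prints N = 10, 100, 1000
```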
An equivalent statement is that a n → a {\displaystyle a_{n}\rightarrow a} if the sequence of real numbers d ( a , a n ) → 0 {\displaystyle d(a,a_{n})\rightarrow 0} . ===== Example: Rn ===== An important example is the space of n {\displaystyle n} -dimensional real vectors, with elements x = ( x 1 , ⋯ , x n ) {\displaystyle \mathbf {x} =(x_{1},\cdots ,x_{n})} where each of the x i {\displaystyle x_{i}} are real; an example of a suitable distance function is the Euclidean distance, defined by d ( x , y ) = ‖ x − y ‖ = ∑ i ( x i − y i ) 2 . {\displaystyle d(\mathbf {x} ,\mathbf {y} )=\|\mathbf {x} -\mathbf {y} \|={\sqrt {\sum _{i}(x_{i}-y_{i})^{2}}}.} The sequence of points { x n } n ≥ 0 {\displaystyle \{\mathbf {x} _{n}\}_{n\geq 0}} converges to x {\displaystyle \mathbf {x} } if the limit exists and ‖ x n − x ‖ → 0 {\displaystyle \|\mathbf {x} _{n}-\mathbf {x} \|\rightarrow 0} . ==== Topological space ==== In some sense the most abstract spaces in which limits can be defined are topological spaces. If X {\displaystyle X} is a topological space with topology τ {\displaystyle \tau } , and { a n } n ≥ 0 {\displaystyle \{a_{n}\}_{n\geq 0}} is a sequence in X {\displaystyle X} , then the limit (when it exists) of the sequence is a point a ∈ X {\displaystyle a\in X} such that, given an (open) neighborhood U ∈ τ {\displaystyle U\in \tau } of a {\displaystyle a} , there exists an N {\displaystyle N} such that for every n > N {\displaystyle n>N} , a n ∈ U {\displaystyle a_{n}\in U} is satisfied. In this case, the limit (if it exists) may not be unique. However, it must be unique if X {\displaystyle X} is a Hausdorff space. ==== Function space ==== This section deals with the idea of limits of sequences of functions, not to be confused with the idea of limits of functions, discussed below. The field of functional analysis partly seeks to identify useful notions of convergence on function spaces. For example, consider the space of functions from a generic set E {\displaystyle E} to R {\displaystyle \mathbb {R} } . Given a sequence of functions { f n } n > 0 {\displaystyle \{f_{n}\}_{n>0}} such that each is a function f n : E → R {\displaystyle f_{n}:E\rightarrow \mathbb {R} } , suppose that there exists a function f {\displaystyle f} such that for each x ∈ E {\displaystyle x\in E} , f n ( x ) → f ( x ) or equivalently lim n → ∞ f n ( x ) = f ( x ) . {\displaystyle f_{n}(x)\rightarrow f(x){\text{ or equivalently }}\lim _{n\rightarrow \infty }f_{n}(x)=f(x).} Then the sequence f n {\displaystyle f_{n}} is said to converge pointwise to f {\displaystyle f} . However, such sequences can exhibit unexpected behavior. For example, it is possible to construct a sequence of continuous functions which has a discontinuous pointwise limit. Another notion of convergence is uniform convergence. The uniform distance between two functions f , g : E → R {\displaystyle f,g:E\rightarrow \mathbb {R} } is the maximum difference between the two functions as the argument x ∈ E {\displaystyle x\in E} is varied. That is, d ( f , g ) = max x ∈ E | f ( x ) − g ( x ) | . {\displaystyle d(f,g)=\max _{x\in E}|f(x)-g(x)|.} Then the sequence f n {\displaystyle f_{n}} is said to uniformly converge or have a uniform limit of f {\displaystyle f} if f n → f {\displaystyle f_{n}\rightarrow f} with respect to this distance. The uniform limit has "nicer" properties than the pointwise limit. For example, the uniform limit of a sequence of continuous functions is continuous.
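The classical example fn(x) = xⁿ on [0, 1] shows both phenomena at once: the pointwise limit exists but is discontinuous, and the convergence is not uniform. A minimal numeric sketch of this example follows (an illustration, not a proof).

```python
# f_n(x) = x**n on [0, 1] converges pointwise to a discontinuous limit
# (0 for x < 1, and 1 at x = 1), so the convergence cannot be uniform.

f = lambda n, x: x ** n
f_limit = lambda x: 1.0 if x == 1.0 else 0.0   # the pointwise limit

# Pointwise: for each fixed x < 1, f_n(x) -> 0 as n grows.
for x in (0.5, 0.9, 0.99):
    print(x, [round(f(n, x), 4) for n in (10, 100, 1000)])

# Uniform: the uniform distance never shrinks. At x = 0.5**(1/n) we have
# f_n(x) = 0.5 while f_limit(x) = 0, so d(f_n, f_limit) >= 0.5 for all n.
for n in (10, 100, 1000):
    x_star = 0.5 ** (1.0 / n)
    print(n, abs(f(n, x_star) - f_limit(x_star)))   # approximately 0.5
```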
Many different notions of convergence can be defined on function spaces. This is sometimes dependent on the regularity of the space. Prominent examples of function spaces with some notion of convergence are Lp spaces and Sobolev space. === In functions === Suppose f is a real-valued function and c is a real number. Intuitively speaking, the expression lim x → c f ( x ) = L {\displaystyle \lim _{x\to c}f(x)=L} means that f(x) can be made to be as close to L as desired, by making x sufficiently close to c. In that case, the above equation can be read as "the limit of f of x, as x approaches c, is L". Formally, the definition of the "limit of f ( x ) {\displaystyle f(x)} as x {\displaystyle x} approaches c {\displaystyle c} " is given as follows. The limit is a real number L {\displaystyle L} so that, given an arbitrary real number ε > 0 {\displaystyle \varepsilon >0} (thought of as the "error"), there is a δ > 0 {\displaystyle \delta >0} such that, for any x {\displaystyle x} satisfying 0 < | x − c | < δ {\displaystyle 0<|x-c|<\delta } , it holds that | f ( x ) − L | < ε {\displaystyle |f(x)-L|<\varepsilon } . This is known as the (ε, δ)-definition of limit. The inequality 0 < | x − c | {\displaystyle 0<|x-c|} is used to exclude c {\displaystyle c} from the set of points under consideration, but some authors do not include this in their definition of limits, replacing 0 < | x − c | < δ {\displaystyle 0<|x-c|<\delta } with simply | x − c | < δ {\displaystyle |x-c|<\delta } . This replacement is equivalent to additionally requiring that f {\displaystyle f} be continuous at c {\displaystyle c} . It can be proven that there is an equivalent definition which makes manifest the connection between limits of sequences and limits of functions. The equivalent definition is given as follows. First observe that for every sequence { x n } {\displaystyle \{x_{n}\}} in the domain of f {\displaystyle f} , there is an associated sequence { f ( x n ) } {\displaystyle \{f(x_{n})\}} , the image of the sequence under f {\displaystyle f} . The limit is a real number L {\displaystyle L} so that, for all sequences x n → c {\displaystyle x_{n}\rightarrow c} , the associated sequence f ( x n ) → L {\displaystyle f(x_{n})\rightarrow L} . ==== One-sided limit ==== It is possible to define the notion of having a "left-handed" limit ("from below"), and a notion of a "right-handed" limit ("from above"). These need not agree. An example is given by the positive indicator function, f : R → R {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} } , defined such that f ( x ) = 0 {\displaystyle f(x)=0} if x ≤ 0 {\displaystyle x\leq 0} , and f ( x ) = 1 {\displaystyle f(x)=1} if x > 0 {\displaystyle x>0} . At x = 0 {\displaystyle x=0} , the function has a "left-handed limit" of 0, a "right-handed limit" of 1, and its limit does not exist. Symbolically, this can be stated as, for this example, lim x → 0 − f ( x ) = 0 {\displaystyle \lim _{x\to 0^{-}}f(x)=0} , and lim x → 0 + f ( x ) = 1 {\displaystyle \lim _{x\to 0^{+}}f(x)=1} , and from this it can be deduced that lim x → 0 f ( x ) {\displaystyle \lim _{x\to 0}f(x)} does not exist, because lim x → 0 − f ( x ) ≠ lim x → 0 + f ( x ) {\displaystyle \lim _{x\to 0^{-}}f(x)\neq \lim _{x\to 0^{+}}f(x)} .
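These one-sided limits can be probed numerically by approaching 0 along a halving sequence from each side; this illustrates the definitions but does not prove them.

```python
# Probe the one-sided limits of the positive indicator function at 0 by
# sampling along h, h/2, h/4, ... from each side (illustration, not proof).

f = lambda x: 1.0 if x > 0 else 0.0

hs = [2.0 ** -k for k in range(1, 11)]     # 1/2, 1/4, ..., 1/1024
print([f(-h) for h in hs])   # from below: all 0.0, left-hand limit 0
print([f(+h) for h in hs])   # from above: all 1.0, right-hand limit 1
# The one-sided limits disagree, so the two-sided limit at 0 does not exist.
```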
==== Infinity in limits of functions ==== It is possible to define the notion of "tending to infinity" in the domain of f {\displaystyle f} , lim x → + ∞ f ( x ) = L . {\displaystyle \lim _{x\rightarrow +\infty }f(x)=L.} This could be considered equivalent to the limit as a reciprocal tends to 0: lim x ′ → 0 + f ( 1 / x ′ ) = L . {\displaystyle \lim _{x'\rightarrow 0^{+}}f(1/x')=L.} Or it can be defined directly: the "limit of f {\displaystyle f} as x {\displaystyle x} tends to positive infinity" is defined as a value L {\displaystyle L} such that, given any real ε > 0 {\displaystyle \varepsilon >0} , there exists an M > 0 {\displaystyle M>0} so that for all x > M {\displaystyle x>M} , | f ( x ) − L | < ε {\displaystyle |f(x)-L|<\varepsilon } . The characterization in terms of sequences is equivalent: whenever x n → + ∞ {\displaystyle x_{n}\rightarrow +\infty } , we have f ( x n ) → L {\displaystyle f(x_{n})\rightarrow L} . In these expressions, the infinity is normally considered to be signed ( + ∞ {\displaystyle +\infty } or − ∞ {\displaystyle -\infty } ) and corresponds to a one-sided limit of the reciprocal. A two-sided infinite limit can be defined, but an author would explicitly write ± ∞ {\displaystyle \pm \infty } to be clear. It is also possible to define the notion of "tending to infinity" in the value of f {\displaystyle f} , lim x → c f ( x ) = ∞ . {\displaystyle \lim _{x\rightarrow c}f(x)=\infty .} Again, this could be defined in terms of a reciprocal: lim x → c 1 f ( x ) = 0. {\displaystyle \lim _{x\rightarrow c}{\frac {1}{f(x)}}=0.} Or a direct definition can be given as follows: given any real number M > 0 {\displaystyle M>0} , there is a δ > 0 {\displaystyle \delta >0} so that for 0 < | x − c | < δ {\displaystyle 0<|x-c|<\delta } , the absolute value of the function | f ( x ) | > M {\displaystyle |f(x)|>M} . A sequence can also have an infinite limit: as n → ∞ {\displaystyle n\rightarrow \infty } , the sequence f ( x n ) → ∞ {\displaystyle f(x_{n})\rightarrow \infty } . This direct definition is easier to extend to one-sided infinite limits. While mathematicians do talk about functions approaching limits "from above" or "from below", there is not a standard mathematical notation for this as there is for one-sided limits. === Nonstandard analysis === In non-standard analysis (which involves a hyperreal enlargement of the number system), the limit of a sequence ( a n ) {\displaystyle (a_{n})} can be expressed as the standard part of the value a H {\displaystyle a_{H}} of the natural extension of the sequence at an infinite hypernatural index n=H. Thus, lim n → ∞ a n = st ⁡ ( a H ) . {\displaystyle \lim _{n\to \infty }a_{n}=\operatorname {st} (a_{H}).} Here, the standard part function "st" rounds off each finite hyperreal number to the nearest real number (the difference between them is infinitesimal). This formalizes the natural intuition that for "very large" values of the index, the terms in the sequence are "very close" to the limit value of the sequence. Conversely, the standard part of a hyperreal a = [ a n ] {\displaystyle a=[a_{n}]} represented in the ultrapower construction by a Cauchy sequence ( a n ) {\displaystyle (a_{n})} , is simply the limit of that sequence: st ⁡ ( a ) = lim n → ∞ a n . {\displaystyle \operatorname {st} (a)=\lim _{n\to \infty }a_{n}.} In this sense, taking the limit and taking the standard part are equivalent procedures. === Limit sets === ==== Limit set of a sequence ==== Let { a n } n > 0 {\displaystyle \{a_{n}\}_{n>0}} be a sequence in a topological space X {\displaystyle X} . For concreteness, X {\displaystyle X} can be thought of as R {\displaystyle \mathbb {R} } , but the definitions hold more generally.
The limit set is the set of points a {\displaystyle a} such that there is a convergent subsequence { a n k } k > 0 {\displaystyle \{a_{n_{k}}\}_{k>0}} with a n k → a {\displaystyle a_{n_{k}}\rightarrow a} . In this context, such an a {\displaystyle a} is sometimes called a limit point. A use of this notion is to characterize the "long-term behavior" of oscillatory sequences. For example, consider the sequence a n = ( − 1 ) n {\displaystyle a_{n}=(-1)^{n}} . Starting from n=1, the first few terms of this sequence are − 1 , + 1 , − 1 , + 1 , ⋯ {\displaystyle -1,+1,-1,+1,\cdots } . It can be checked that it is oscillatory, so has no limit, but has limit points { − 1 , + 1 } {\displaystyle \{-1,+1\}} . ==== Limit set of a trajectory ==== This notion is used in dynamical systems, to study limits of trajectories. Defining a trajectory to be a function γ : R → X {\displaystyle \gamma :\mathbb {R} \rightarrow X} , the point γ ( t ) {\displaystyle \gamma (t)} is thought of as the "position" of the trajectory at "time" t {\displaystyle t} . The limit set of a trajectory is defined as follows. To any sequence of increasing times { t n } {\displaystyle \{t_{n}\}} , there is an associated sequence of positions { x n } = { γ ( t n ) } {\displaystyle \{x_{n}\}=\{\gamma (t_{n})\}} . If x {\displaystyle x} is the limit of the sequence { x n } {\displaystyle \{x_{n}\}} for some sequence of increasing times, then x {\displaystyle x} belongs to the limit set of the trajectory. Technically, this is the ω {\displaystyle \omega } -limit set. The corresponding limit set for sequences of decreasing time is called the α {\displaystyle \alpha } -limit set. An illustrative example is the circle trajectory: γ ( t ) = ( cos ⁡ ( t ) , sin ⁡ ( t ) ) {\displaystyle \gamma (t)=(\cos(t),\sin(t))} . This has no unique limit, but for each θ ∈ R {\displaystyle \theta \in \mathbb {R} } , the point ( cos ⁡ ( θ ) , sin ⁡ ( θ ) ) {\displaystyle (\cos(\theta ),\sin(\theta ))} is a limit point, given by the sequence of times t n = θ + 2 π n {\displaystyle t_{n}=\theta +2\pi n} . But the limit points need not be attained on the trajectory. The trajectory γ ( t ) = t / ( 1 + t ) ( cos ⁡ ( t ) , sin ⁡ ( t ) ) {\displaystyle \gamma (t)=t/(1+t)(\cos(t),\sin(t))} also has the unit circle as its limit set. == Uses == Limits are used to define a number of important concepts in analysis. === Series === A particular expression of interest that is formalized as the limit of a sequence is the sum of an infinite series. These are "infinite sums" of real numbers, generally written as ∑ n = 1 ∞ a n . {\displaystyle \sum _{n=1}^{\infty }a_{n}.} This is defined through limits as follows: given a sequence of real numbers { a n } {\displaystyle \{a_{n}\}} , the sequence of partial sums is defined by s n = ∑ i = 1 n a i . {\displaystyle s_{n}=\sum _{i=1}^{n}a_{i}.} If the limit of the sequence { s n } {\displaystyle \{s_{n}\}} exists, the value of the expression ∑ n = 1 ∞ a n {\displaystyle \sum _{n=1}^{\infty }a_{n}} is defined to be the limit. Otherwise, the series is said to be divergent. A classic example is the Basel problem, where a n = 1 / n 2 {\displaystyle a_{n}=1/n^{2}} . Then ∑ n = 1 ∞ 1 n 2 = π 2 6 . {\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}.}
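The definition through partial sums can be illustrated numerically on the Basel series; the convergence is slow, with the error of the nth partial sum shrinking roughly like 1/n (a sketch for illustration, not a proof).

```python
# Partial sums s_n = sum_{i=1}^{n} 1/i**2 of the Basel series converge to
# pi**2 / 6 (numerical illustration of the definition, not a proof).
import math

target = math.pi ** 2 / 6
s, i = 0.0, 0
for n in (10, 1000, 100000):
    while i < n:
        i += 1
        s += 1.0 / (i * i)       # add the next term of the series
    print(n, s, target - s)      # the error behaves roughly like 1/n
```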
However, while for sequences there is essentially a unique notion of convergence, for series there are different notions of convergence. This is due to the fact that the expression ∑ n = 1 ∞ a n {\displaystyle \sum _{n=1}^{\infty }a_{n}} does not discriminate between different orderings of the sequence { a n } {\displaystyle \{a_{n}\}} , while the convergence properties of the sequence of partial sums can depend on the ordering of the sequence. A series which converges for all orderings is called unconditionally convergent. It can be proven to be equivalent to absolute convergence. This is defined as follows. A series is absolutely convergent if ∑ n = 1 ∞ | a n | {\displaystyle \sum _{n=1}^{\infty }|a_{n}|} is convergent. Furthermore, all possible orderings give the same value. Otherwise, the series is conditionally convergent. A surprising result for conditionally convergent series is the Riemann series theorem: depending on the ordering, the partial sums can be made to converge to any real number, as well as ± ∞ {\displaystyle \pm \infty } . ==== Power series ==== A useful application of the theory of sums of series is for power series. These are sums of series of the form f ( z ) = ∑ n = 0 ∞ c n z n . {\displaystyle f(z)=\sum _{n=0}^{\infty }c_{n}z^{n}.} Often z {\displaystyle z} is thought of as a complex number, and a suitable notion of convergence of complex sequences is needed. The set of values of z ∈ C {\displaystyle z\in \mathbb {C} } for which the series sum converges is a disk (possibly together with part of its boundary circle), and its radius is known as the radius of convergence. === Continuity of a function at a point === The definition of continuity at a point is given through limits. The above definition of a limit applies even if f ( c ) ≠ L {\displaystyle f(c)\neq L} . Indeed, the function f need not even be defined at c. However, if f ( c ) {\displaystyle f(c)} is defined and is equal to L {\displaystyle L} , then the function is said to be continuous at the point c {\displaystyle c} . Equivalently, the function is continuous at c {\displaystyle c} if f ( x ) → f ( c ) {\displaystyle f(x)\rightarrow f(c)} as x → c {\displaystyle x\rightarrow c} , or in terms of sequences, whenever x n → c {\displaystyle x_{n}\rightarrow c} , then f ( x n ) → f ( c ) {\displaystyle f(x_{n})\rightarrow f(c)} . An example of a limit where f {\displaystyle f} is not defined at c {\displaystyle c} is given below. Consider the function f ( x ) = x 2 − 1 x − 1 . {\displaystyle f(x)={\frac {x^{2}-1}{x-1}}.} Then f(1) is not defined (see Indeterminate form), yet as x moves arbitrarily close to 1, f(x) correspondingly approaches 2. Thus, f(x) can be made arbitrarily close to the limit of 2, just by making x sufficiently close to 1. In other words, lim x → 1 x 2 − 1 x − 1 = 2. {\displaystyle \lim _{x\to 1}{\frac {x^{2}-1}{x-1}}=2.} This can also be calculated algebraically, as x 2 − 1 x − 1 = ( x + 1 ) ( x − 1 ) x − 1 = x + 1 {\textstyle {\frac {x^{2}-1}{x-1}}={\frac {(x+1)(x-1)}{x-1}}=x+1} for all real numbers x ≠ 1. Now, since x + 1 is continuous in x at 1, we can plug in 1 for x, leading to the equation lim x → 1 x 2 − 1 x − 1 = 1 + 1 = 2. {\displaystyle \lim _{x\to 1}{\frac {x^{2}-1}{x-1}}=1+1=2.} In addition to limits at finite values, functions can also have limits at infinity. For example, consider the function f ( x ) = 2 x − 1 x {\displaystyle f(x)={\frac {2x-1}{x}}} where f(100) = 1.9900, f(1000) = 1.9990, and f(10000) = 1.9999. As x becomes extremely large, the value of f(x) approaches 2, and the value of f(x) can be made as close to 2 as one could wish, by making x sufficiently large.
So in this case, the limit of f(x) as x approaches infinity is 2, or in mathematical notation, lim x → ∞ 2 x − 1 x = 2. {\displaystyle \lim _{x\to \infty }{\frac {2x-1}{x}}=2.} === Continuous functions === An important class of functions when considering limits are continuous functions. These are precisely those functions which preserve limits, in the sense that if f {\displaystyle f} is a continuous function, then whenever a n → a {\displaystyle a_{n}\rightarrow a} in the domain of f {\displaystyle f} , the limit of f ( a n ) {\displaystyle f(a_{n})} exists and furthermore is f ( a ) {\displaystyle f(a)} . In the most general setting of topological spaces, a short proof is given below: Let f : X → Y {\displaystyle f:X\rightarrow Y} be a continuous function between topological spaces X {\displaystyle X} and Y {\displaystyle Y} . By definition, for each open set V {\displaystyle V} in Y {\displaystyle Y} , the preimage f − 1 ( V ) {\displaystyle f^{-1}(V)} is open in X {\displaystyle X} . Now suppose a n → a {\displaystyle a_{n}\rightarrow a} is a sequence with limit a {\displaystyle a} in X {\displaystyle X} . Then f ( a n ) {\displaystyle f(a_{n})} is a sequence in Y {\displaystyle Y} , and f ( a ) {\displaystyle f(a)} is some point. Choose a neighborhood V {\displaystyle V} of f ( a ) {\displaystyle f(a)} . Then f − 1 ( V ) {\displaystyle f^{-1}(V)} is an open set (by continuity of f {\displaystyle f} ) which in particular contains a {\displaystyle a} , and therefore f − 1 ( V ) {\displaystyle f^{-1}(V)} is a neighborhood of a {\displaystyle a} . By the convergence of a n {\displaystyle a_{n}} to a {\displaystyle a} , there exists an N {\displaystyle N} such that for n > N {\displaystyle n>N} , we have a n ∈ f − 1 ( V ) {\displaystyle a_{n}\in f^{-1}(V)} . Then applying f {\displaystyle f} to both sides gives that, for the same N {\displaystyle N} , for each n > N {\displaystyle n>N} we have f ( a n ) ∈ V {\displaystyle f(a_{n})\in V} . Originally V {\displaystyle V} was an arbitrary neighborhood of f ( a ) {\displaystyle f(a)} , so f ( a n ) → f ( a ) {\displaystyle f(a_{n})\rightarrow f(a)} . This concludes the proof. In real analysis, for the more concrete case of real-valued functions defined on a subset E ⊂ R {\displaystyle E\subset \mathbb {R} } , that is, f : E → R {\displaystyle f:E\rightarrow \mathbb {R} } , a continuous function may also be defined as a function which is continuous at every point of its domain. === Limit points === In topology, limits are used to define limit points of a subset of a topological space, which in turn give a useful characterization of closed sets. In a topological space X {\displaystyle X} , consider a subset S {\displaystyle S} . A point a {\displaystyle a} is called a limit point if there is a sequence { a n } {\displaystyle \{a_{n}\}} in S ∖ { a } {\displaystyle S\backslash \{a\}} such that a n → a {\displaystyle a_{n}\rightarrow a} . The reason why { a n } {\displaystyle \{a_{n}\}} is defined to be in S ∖ { a } {\displaystyle S\backslash \{a\}} rather than just S {\displaystyle S} is illustrated by the following example. Take X = R {\displaystyle X=\mathbb {R} } and S = [ 0 , 1 ] ∪ { 2 } {\displaystyle S=[0,1]\cup \{2\}} . Then 2 ∈ S {\displaystyle 2\in S} , and therefore is the limit of the constant sequence 2 , 2 , ⋯ {\displaystyle 2,2,\cdots } . But 2 {\displaystyle 2} is not a limit point of S {\displaystyle S} .
A closed set, which is defined to be the complement of an open set, is equivalently any set C {\displaystyle C} which contains all its limit points. === Derivative === The derivative is defined formally as a limit. In the scope of real analysis, the derivative is first defined for real functions f {\displaystyle f} defined on a subset E ⊂ R {\displaystyle E\subset \mathbb {R} } . The derivative at x ∈ E {\displaystyle x\in E} is defined as follows. If the limit of f ( x + h ) − f ( x ) h {\displaystyle {\frac {f(x+h)-f(x)}{h}}} as h → 0 {\displaystyle h\rightarrow 0} exists, then the derivative at x {\displaystyle x} is this limit. Equivalently, it is the limit as y → x {\displaystyle y\rightarrow x} of f ( y ) − f ( x ) y − x . {\displaystyle {\frac {f(y)-f(x)}{y-x}}.} If the derivative exists, it is commonly denoted by f ′ ( x ) {\displaystyle f'(x)} . == Properties == === Sequences of real numbers === For sequences of real numbers, a number of properties can be proven. Suppose { a n } {\displaystyle \{a_{n}\}} and { b n } {\displaystyle \{b_{n}\}} are two sequences converging to a {\displaystyle a} and b {\displaystyle b} respectively. Sum of limits is equal to limit of sum a n + b n → a + b . {\displaystyle a_{n}+b_{n}\rightarrow a+b.} Product of limits is equal to limit of product a n ⋅ b n → a ⋅ b . {\displaystyle a_{n}\cdot b_{n}\rightarrow a\cdot b.} Inverse of limit is equal to limit of inverse (as long as a ≠ 0 {\displaystyle a\neq 0} ) 1 a n → 1 a . {\displaystyle {\frac {1}{a_{n}}}\rightarrow {\frac {1}{a}}.} Equivalently, the function f ( x ) = 1 / x {\displaystyle f(x)=1/x} is continuous at nonzero x {\displaystyle x} . ==== Cauchy sequences ==== A property of convergent sequences of real numbers is that they are Cauchy sequences. The definition of a Cauchy sequence { a n } {\displaystyle \{a_{n}\}} is that for every real number ε > 0 {\displaystyle \varepsilon >0} , there is an N {\displaystyle N} such that whenever m , n > N {\displaystyle m,n>N} , | a m − a n | < ε . {\displaystyle |a_{m}-a_{n}|<\varepsilon .} Informally, for any arbitrarily small error ε {\displaystyle \varepsilon } , it is possible to find an interval of diameter ε {\displaystyle \varepsilon } such that eventually the sequence is contained within the interval. Cauchy sequences are closely related to convergent sequences. In fact, for sequences of real numbers they are equivalent: any Cauchy sequence is convergent. In general metric spaces, it continues to hold that convergent sequences are also Cauchy. But the converse is not true: not every Cauchy sequence is convergent in a general metric space. A classic counterexample is the rational numbers, Q {\displaystyle \mathbb {Q} } , with the usual distance. The sequence of decimal approximations to 2 {\displaystyle {\sqrt {2}}} , truncated at the n {\displaystyle n} th decimal place, is a Cauchy sequence, but does not converge in Q {\displaystyle \mathbb {Q} } . A metric space in which every Cauchy sequence is also convergent, that is, Cauchy sequences are equivalent to convergent sequences, is known as a complete metric space. One reason Cauchy sequences can be "easier to work with" than convergent sequences is that they are a property of the sequence { a n } {\displaystyle \{a_{n}\}} alone, while convergent sequences require not just the sequence { a n } {\displaystyle \{a_{n}\}} but also the limit of the sequence a {\displaystyle a} .
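The Cauchy condition for the decimal approximations of √2 can be checked directly on the terms, with no reference to the limit; the sketch below (the helper name is hypothetical, and floating-point arithmetic is an illustrative shortcut) verifies one instance of it.

```python
# The truncations a_n of sqrt(2) at the n-th decimal place form a Cauchy
# sequence: for m, n > N the first N decimals agree, so |a_m - a_n| < 10**-N.
# The check below uses only the terms themselves, never the limit sqrt(2).
import math

def a(n):
    """sqrt(2) truncated at the n-th decimal place."""
    return math.floor(math.sqrt(2) * 10 ** n) / 10 ** n

eps = 1e-6        # pick epsilon; N = 6 is a suitable threshold index
ok = all(abs(a(m) - a(n)) < eps
         for m in range(7, 15) for n in range(7, 15))
print(ok)         # True: the Cauchy condition holds for this epsilon
```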
=== Order of convergence === Beyond whether or not a sequence { a n } {\displaystyle \{a_{n}\}} converges to a limit a {\displaystyle a} , it is possible to describe how fast a sequence converges to a limit. One way to quantify this is using the order of convergence of a sequence. A formal definition of order of convergence can be stated as follows. Suppose { a n } n > 0 {\displaystyle \{a_{n}\}_{n>0}} is a sequence of real numbers which is convergent with limit a {\displaystyle a} . Furthermore, a n ≠ a {\displaystyle a_{n}\neq a} for all n {\displaystyle n} . If positive constants λ {\displaystyle \lambda } and α {\displaystyle \alpha } exist such that lim n → ∞ | a n + 1 − a | | a n − a | α = λ {\displaystyle \lim _{n\to \infty }{\frac {\left|a_{n+1}-a\right|}{\left|a_{n}-a\right|^{\alpha }}}=\lambda } then a n {\displaystyle a_{n}} is said to converge to a {\displaystyle a} with order of convergence α {\displaystyle \alpha } . The constant λ {\displaystyle \lambda } is known as the asymptotic error constant. Order of convergence is used, for example, in the field of numerical analysis, in error analysis. === Computability === Limits can be difficult to compute. There exist limit expressions whose modulus of convergence is undecidable. In recursion theory, the limit lemma proves that it is possible to encode undecidable problems using limits. There are several theorems or tests that indicate whether the limit exists. These are known as convergence tests. Examples include the ratio test and the squeeze theorem. However they may not tell how to compute the limit. == See also == Asymptotic analysis: a method of describing limiting behavior Big O notation: used to describe the limiting behavior of a function when the argument tends towards a particular value or infinity Banach limit: defined on the Banach space ℓ ∞ {\displaystyle \ell ^{\infty }} , extending the usual limits Convergence of random variables Convergent matrix Limit in category theory Direct limit Inverse limit Limit of a function One-sided limit: either of the two limits of functions of a real variable x, as x approaches a point from above or below List of limits: list of limits for common functions Squeeze theorem: finds a limit of a function via comparison with two other functions Limit superior and limit inferior Modes of convergence (an annotated index) == Notes == == References == Apostol, Tom M. (1974), Mathematical Analysis (2nd ed.), Menlo Park: Addison-Wesley, LCCN 72011473 == External links ==
Wikipedia:Limit of a function#0
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input which may or may not be in the domain of the function. Formal definitions, first devised in the early 19th century, are given below. Informally, a function f assigns an output f(x) to every input x. We say that the function has a limit L at an input p, if f(x) gets closer and closer to L as x moves closer and closer to p. More specifically, the output value can be made arbitrarily close to L if the input to f is taken sufficiently close to p. On the other hand, if some inputs very close to p are taken to outputs that stay a fixed distance apart, then we say the limit does not exist. The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the concept of limit: roughly, a function is continuous if all of its limits agree with the values of the function. The concept of limit also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function. == History == Although implicit in the development of calculus of the 17th and 18th centuries, the modern idea of the limit of a function goes back to Bernard Bolzano who, in 1817, introduced the basics of the epsilon-delta technique (see (ε, δ)-definition of limit below) to define continuous functions. However, his work was not known during his lifetime. Bruce Pourciau argues that Isaac Newton, in his 1687 Principia, demonstrates a more sophisticated understanding of limits than he is generally given credit for, including being the first to present an epsilon argument. In his 1821 book Cours d'analyse, Augustin-Louis Cauchy discussed variable quantities, infinitesimals and limits, and defined continuity of y = f ( x ) {\displaystyle y=f(x)} by saying that an infinitesimal change in x necessarily produces an infinitesimal change in y, while Grabiner claims that he used a rigorous epsilon-delta definition in proofs. In 1861, Karl Weierstrass first introduced the epsilon-delta definition of limit in the form it is usually written today. He also introduced the notations lim {\textstyle \lim } and lim x → x 0 . {\textstyle \textstyle \lim _{x\to x_{0}}\displaystyle .} The modern notation of placing the arrow below the limit symbol is due to G. H. Hardy, who introduced it in his book A Course of Pure Mathematics in 1908. == Motivation == Imagine a person walking on a landscape represented by the graph y = f(x). Their horizontal position is given by x, much like the position given by a map of the land or by a global positioning system. Their altitude is given by the coordinate y. Suppose they walk towards a position x = p; as they get closer and closer to this point, they will notice that their altitude approaches a specific value L. If asked about the altitude corresponding to x = p, they would reply by saying y = L. What, then, does it mean to say that their altitude is approaching L? It means that their altitude gets nearer and nearer to L, except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: they must get within ten meters of L. They report back that indeed, they can get within ten vertical meters of L, arguing that as long as they are within fifty horizontal meters of p, their altitude is always within ten meters of L.
The accuracy goal is then changed: can they get within one vertical meter? Yes, supposing that they are able to move within five horizontal meters of p, their altitude will always remain within one meter from the target altitude L. Summarizing the aforementioned concept: the traveler's altitude approaches L as their horizontal position approaches p, in the sense that for every target accuracy goal, however small it may be, there is some neighbourhood of p in which the altitudes at all the horizontal positions, except maybe the horizontal position p itself, fulfill that accuracy goal. The initial informal statement can now be explicated, and the resulting explicit statement is quite close to the formal definition of the limit of a function, with values in a topological space. More specifically, to say that lim x → p f ( x ) = L , {\displaystyle \lim _{x\to p}f(x)=L,} is to say that f(x) can be made as close to L as desired, by making x close enough, but not equal, to p. The following definitions, known as (ε, δ)-definitions, are the generally accepted definitions for the limit of a function in various contexts. == Functions of a single variable == === (ε, δ)-definition of limit === Suppose f : R → R {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} } is a function defined on the real line, and there are two real numbers p and L. One would say: the limit of f of x, as x approaches p, exists, and it equals L, and write lim x → p f ( x ) = L , {\displaystyle \lim _{x\to p}f(x)=L,} or alternatively, say f(x) tends to L as x tends to p, and write, f ( x ) → L as x → p , {\displaystyle f(x)\to L{\text{ as }}x\to p,} if the following property holds: for every real ε > 0, there exists a real δ > 0 such that for all real x, 0 < |x − p| < δ implies |f(x) − L| < ε. Symbolically, ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ R ) ( 0 < | x − p | < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in \mathbb {R} )\,(0<|x-p|<\delta \implies |f(x)-L|<\varepsilon ).} For example, we may say lim x → 2 ( 4 x + 1 ) = 9 {\displaystyle \lim _{x\to 2}(4x+1)=9} because for every real ε > 0, we can take δ = ε/4, so that for all real x, if 0 < |x − 2| < δ, then |4x + 1 − 9| < ε. A more general definition applies for functions defined on subsets of the real line. Let S be a subset of R . {\displaystyle \mathbb {R} .} Let f : S → R {\displaystyle f:S\to \mathbb {R} } be a real-valued function. Let p be a point such that there exists some open interval (a, b) containing p with ( a , p ) ∪ ( p , b ) ⊂ S . {\displaystyle (a,p)\cup (p,b)\subset S.} It is then said that the limit of f as x approaches p is L if, for every real ε > 0, there exists a real δ > 0 such that for all x in (a, b), 0 < |x − p| < δ implies |f(x) − L| < ε. Or, symbolically: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ ( a , b ) ) ( 0 < | x − p | < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in (a,b))\,(0<|x-p|<\delta \implies |f(x)-L|<\varepsilon ).} For example, we may say lim x → 1 x + 3 = 2 {\displaystyle \lim _{x\to 1}{\sqrt {x+3}}=2} because for every real ε > 0, we can take δ = ε, so that for all real x ≥ −3, if 0 < |x − 1| < δ, then |f(x) − 2| < ε. In this example, S = [−3, ∞) contains open intervals around the point 1 (for example, the interval (0, 2)).
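A concrete δ-choice such as δ = ε/4 in the first example above can be spot-checked by random sampling; since only finitely many points are tested, this is an illustration rather than a proof.

```python
# Spot-check of the claim lim_{x->2} (4x + 1) = 9 with delta = eps / 4:
# whenever 0 < |x - 2| < delta we should have |f(x) - 9| < eps, since
# |4x + 1 - 9| = 4|x - 2| < 4*delta = eps. (Sampling is not a proof.)
import random

f, p, L = (lambda x: 4 * x + 1), 2.0, 9.0
for eps in (1.0, 0.1, 0.001):
    delta = eps / 4
    for _ in range(10_000):
        x = p + random.uniform(-delta, delta)
        if x != p and abs(x - p) < delta:     # the deleted neighbourhood
            assert abs(f(x) - L) < eps
print("all sampled points satisfied the epsilon-delta condition")
```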
Here, note that the value of the limit does not depend on f being defined at p, nor on the value f(p), if it is defined. For example, let f : [ 0 , 1 ) ∪ ( 1 , 2 ] → R , f ( x ) = 2 x 2 − x − 1 x − 1 . {\displaystyle f:[0,1)\cup (1,2]\to \mathbb {R} ,f(x)={\tfrac {2x^{2}-x-1}{x-1}}.} Then lim x → 1 f ( x ) = 3 {\displaystyle \lim _{x\to 1}f(x)=3} because for every ε > 0, we can take δ = ε/2, so that for all real x ≠ 1, if 0 < |x − 1| < δ, then |f(x) − 3| < ε. Note that here f(1) is undefined. In fact, a limit can exist in { p ∈ R | ∃ ( a , b ) ⊂ R : p ∈ ( a , b ) and ( a , p ) ∪ ( p , b ) ⊂ S } , {\displaystyle \{p\in \mathbb {R} \,|\,\exists (a,b)\subset \mathbb {R} :\,p\in (a,b){\text{ and }}(a,p)\cup (p,b)\subset S\},} which equals int ⁡ S ∪ iso ⁡ S c , {\displaystyle \operatorname {int} S\cup \operatorname {iso} S^{c},} where int S is the interior of S, and iso Sc are the isolated points of the complement of S. In our previous example where S = [ 0 , 1 ) ∪ ( 1 , 2 ] , {\displaystyle S=[0,1)\cup (1,2],} int ⁡ S = ( 0 , 1 ) ∪ ( 1 , 2 ) , {\displaystyle \operatorname {int} S=(0,1)\cup (1,2),} iso ⁡ S c = { 1 } . {\displaystyle \operatorname {iso} S^{c}=\{1\}.} We see, specifically, that this definition of limit allows a limit to exist at 1, but not at 0 or 2. The letters ε and δ can be understood as "error" and "distance". In fact, Cauchy used ε as an abbreviation for "error" in some of his work, though in his definition of continuity, he used an infinitesimal α {\displaystyle \alpha } rather than either ε or δ (see Cours d'Analyse). In these terms, the error (ε) in the measurement of the value at the limit can be made as small as desired, by reducing the distance (δ) to the limit point. As discussed below, this definition also works for functions in a more general context. The idea that δ and ε represent distances helps suggest these generalizations. === Existence and one-sided limits === Alternatively, x may approach p from above (right) or below (left), in which case the limits may be written as lim x → p + f ( x ) = L {\displaystyle \lim _{x\to p^{+}}f(x)=L} or lim x → p − f ( x ) = L {\displaystyle \lim _{x\to p^{-}}f(x)=L} respectively. If these limits exist at p and are equal there, then this can be referred to as the limit of f(x) at p. If the one-sided limits exist at p, but are unequal, then there is no limit at p (i.e., the limit at p does not exist). If either one-sided limit does not exist at p, then the limit at p also does not exist. A formal definition is as follows. The limit of f as x approaches p from above is L if: For every ε > 0, there exists a δ > 0 such that whenever 0 < x − p < δ, we have |f(x) − L| < ε. ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ ( a , b ) ) ( 0 < x − p < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in (a,b))\,(0<x-p<\delta \implies |f(x)-L|<\varepsilon ).} The limit of f as x approaches p from below is L if: For every ε > 0, there exists a δ > 0 such that whenever 0 < p − x < δ, we have |f(x) − L| < ε. ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ ( a , b ) ) ( 0 < p − x < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in (a,b))\,(0<p-x<\delta \implies |f(x)-L|<\varepsilon ).} If the limit does not exist, then the oscillation of f at p is non-zero. === More general definition using limit points and subsets === Limits can also be defined by approaching from subsets of the domain. In general: Let f : S → R {\displaystyle f:S\to \mathbb {R} } be a real-valued function defined on some S ⊆ R . {\displaystyle S\subseteq \mathbb {R} .} Let p be a limit point of some T ⊂ S {\displaystyle T\subset S} , that is, p is the limit of some sequence of elements of T distinct from p.
Then we say the limit of f, as x approaches p from values in T, is L, written lim x → p x ∈ T f ( x ) = L {\displaystyle \lim _{{x\to p} \atop {x\in T}}f(x)=L} if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ T ) ( 0 < | x − p | < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in T)\,(0<|x-p|<\delta \implies |f(x)-L|<\varepsilon ).} Note that T can be any subset of S, the domain of f, and the limit might depend on the selection of T. This generalization includes as special cases limits on an interval, as well as left-handed limits of real-valued functions (e.g., by taking T to be an open interval of the form (–∞, a)), and right-handed limits (e.g., by taking T to be an open interval of the form (a, ∞)). It also extends the notion of one-sided limits to the included endpoints of (half-)closed intervals, so the square root function f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} can have limit 0 as x approaches 0 from above: lim x → 0 x ∈ [ 0 , ∞ ) x = 0 {\displaystyle \lim _{{x\to 0} \atop {x\in [0,\infty )}}{\sqrt {x}}=0} since for every ε > 0, we may take δ = ε2 such that for all x ≥ 0, if 0 < |x − 0| < δ, then |f(x) − 0| < ε. This definition allows a limit to be defined at limit points of the domain S, if a suitable subset T which has the same limit point is chosen. Notably, the previous two-sided definition works on int ⁡ S ∪ iso ⁡ S c , {\displaystyle \operatorname {int} S\cup \operatorname {iso} S^{c},} which is a subset of the limit points of S. For example, let S = [ 0 , 1 ) ∪ ( 1 , 2 ] . {\displaystyle S=[0,1)\cup (1,2].} The previous two-sided definition would work at 1 ∈ iso ⁡ S c = { 1 } , {\displaystyle 1\in \operatorname {iso} S^{c}=\{1\},} but it would not work at 0 or 2, which are limit points of S. === Deleted versus non-deleted limits === The definition of limit given here does not depend on how (or whether) f is defined at p. Bartle refers to this as a deleted limit, because it excludes the value of f at p. The corresponding non-deleted limit does depend on the value of f at p, if p is in the domain of f. Let f : S → R {\displaystyle f:S\to \mathbb {R} } be a real-valued function. The non-deleted limit of f, as x approaches p, is L if ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( | x − p | < δ ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(|x-p|<\delta \implies |f(x)-L|<\varepsilon ).} The definition is the same, except that the neighborhood |x − p| < δ now includes the point p, in contrast to the deleted neighborhood 0 < |x − p| < δ. This makes the definition of a non-deleted limit less general. One of the advantages of working with non-deleted limits is that they allow one to state the theorem about limits of compositions without any constraints on the functions (other than the existence of their non-deleted limits). Bartle notes that although by "limit" some authors do mean this non-deleted limit, deleted limits are the most popular.
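The two notions differ exactly when f has an "outlier" value at p itself; a small numeric illustration (not a proof) follows.

```python
# Deleted vs. non-deleted limits at p = 0 for a function whose value at
# the point itself is an outlier (numeric illustration, not a proof).

def f(x):
    return 1.0 if x == 0 else 0.0   # f is 0 everywhere except f(0) = 1

hs = [2.0 ** -k for k in range(1, 20)]
print(max(abs(f(x) - 0.0) for h in hs for x in (-h, h)))  # 0.0, so the
# deleted limit at 0 is 0: the value f(0) is never consulted.
# The non-deleted condition must also hold at x = 0 itself, where
# |f(0) - L| = |1 - L|; no L can satisfy both |1 - L| < eps and
# |0 - L| < eps for every eps, so no non-deleted limit exists at 0.
```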
=== Examples === ==== Non-existence of one-sided limit(s) ==== The function f ( x ) = { sin ⁡ 5 x − 1 for x < 1 0 for x = 1 1 10 x − 10 for x > 1 {\displaystyle f(x)={\begin{cases}\sin {\frac {5}{x-1}}&{\text{ for }}x<1\\0&{\text{ for }}x=1\\[2pt]{\frac {1}{10x-10}}&{\text{ for }}x>1\end{cases}}} has no limit at x0 = 1 (the left-hand limit does not exist due to the oscillatory nature of the sine function, and the right-hand limit does not exist due to the asymptotic behaviour of the reciprocal function, see picture), but has a limit at every other x-coordinate. The function f ( x ) = { 1 x rational 0 x irrational {\displaystyle f(x)={\begin{cases}1&x{\text{ rational }}\\0&x{\text{ irrational }}\end{cases}}} (a.k.a. the Dirichlet function) has no limit at any x-coordinate. ==== Non-equality of one-sided limits ==== The function f ( x ) = { 1 for x < 0 2 for x ≥ 0 {\displaystyle f(x)={\begin{cases}1&{\text{ for }}x<0\\2&{\text{ for }}x\geq 0\end{cases}}} has a limit at every non-zero x-coordinate (the limit equals 1 for negative x and equals 2 for positive x). The limit at x = 0 does not exist (the left-hand limit equals 1, whereas the right-hand limit equals 2). ==== Limits at only one point ==== The functions f ( x ) = { x x rational 0 x irrational {\displaystyle f(x)={\begin{cases}x&x{\text{ rational }}\\0&x{\text{ irrational }}\end{cases}}} and f ( x ) = { | x | x rational 0 x irrational {\displaystyle f(x)={\begin{cases}|x|&x{\text{ rational }}\\0&x{\text{ irrational }}\end{cases}}} both have a limit at x = 0 and it equals 0. ==== Limits at countably many points ==== The function f ( x ) = { sin ⁡ x x irrational 1 x rational {\displaystyle f(x)={\begin{cases}\sin x&x{\text{ irrational }}\\1&x{\text{ rational }}\end{cases}}} has a limit at any x-coordinate of the form π 2 + 2 n π , {\displaystyle {\tfrac {\pi }{2}}+2n\pi ,} where n is any integer. == Limits involving infinity == === Limits at infinity === Let f : S → R {\displaystyle f:S\to \mathbb {R} } be a function defined on S ⊆ R . {\displaystyle S\subseteq \mathbb {R} .} The limit of f as x approaches infinity is L, denoted lim x → ∞ f ( x ) = L , {\displaystyle \lim _{x\to \infty }f(x)=L,} means that: ( ∀ ε > 0 ) ( ∃ c > 0 ) ( ∀ x ∈ S ) ( x > c ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(x>c\implies |f(x)-L|<\varepsilon ).} Similarly, the limit of f as x approaches minus infinity is L, denoted lim x → − ∞ f ( x ) = L , {\displaystyle \lim _{x\to -\infty }f(x)=L,} means that: ( ∀ ε > 0 ) ( ∃ c > 0 ) ( ∀ x ∈ S ) ( x < − c ⟹ | f ( x ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(x<-c\implies |f(x)-L|<\varepsilon ).} For example, lim x → ∞ ( − 3 sin ⁡ x x + 4 ) = 4 {\displaystyle \lim _{x\to \infty }\left(-{\frac {3\sin x}{x}}+4\right)=4} because for every ε > 0, we can take c = 3/ε such that for all real x, if x > c, then |f(x) − 4| < ε. Another example is that lim x → − ∞ e x = 0 {\displaystyle \lim _{x\to -\infty }e^{x}=0} because for every ε > 0, we can take c = max{1, −ln(ε)} such that for all real x, if x < −c, then |f(x) − 0| < ε.
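The bound c = 3/ε from the first example can be spot-checked numerically, since |f(x) − 4| = 3|sin x|/x ≤ 3/x < ε for x > c; sampling finitely many points is an illustration, not a proof.

```python
# Spot-check of lim_{x->infinity} (-3 sin x / x + 4) = 4 with c = 3 / eps:
# for x > c, |f(x) - 4| = 3|sin x| / x <= 3 / x < eps. (Not a proof.)
import math

f = lambda x: -3 * math.sin(x) / x + 4
for eps in (0.1, 0.01):
    c = 3 / eps
    samples = [c + k * 7.3 for k in range(1, 2000)]   # arbitrary points > c
    assert all(abs(f(x) - 4) < eps for x in samples)
print("all sampled points satisfied the bound")
```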
{\displaystyle S\subseteq \mathbb {R} .} The statement the limit of f as x approaches p is infinity, denoted lim x → p f ( x ) = ∞ , {\displaystyle \lim _{x\to p}f(x)=\infty ,} means that: ( ∀ N > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ f ( x ) > N ) . {\displaystyle (\forall N>0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies f(x)>N).} The statement the limit of f as x approaches p is minus infinity, denoted lim x → p f ( x ) = − ∞ , {\displaystyle \lim _{x\to p}f(x)=-\infty ,} means that: ( ∀ N > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ f ( x ) < − N ) . {\displaystyle (\forall N>0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies f(x)<-N).} For example, lim x → 1 1 ( x − 1 ) 2 = ∞ {\displaystyle \lim _{x\to 1}{\frac {1}{(x-1)^{2}}}=\infty } because for every N > 0, we can take δ = 1 / √N {\textstyle \delta ={\tfrac {1}{\sqrt {N}}}} such that for all real x, if 0 < |x − 1| < δ, then f(x) > N. These ideas can be used together to produce definitions for different combinations, such as lim x → ∞ f ( x ) = ∞ , {\displaystyle \lim _{x\to \infty }f(x)=\infty ,} or lim x → p + f ( x ) = − ∞ . {\displaystyle \lim _{x\to p^{+}}f(x)=-\infty .} For example, lim x → 0 + ln ⁡ x = − ∞ {\displaystyle \lim _{x\to 0^{+}}\ln x=-\infty } because for every N > 0, we can take δ = e−N such that for all real x > 0, if 0 < x − 0 < δ, then f(x) < −N. Limits involving infinity are connected with the concept of asymptotes. These notions of a limit attempt to provide a metric space interpretation to limits at infinity. In fact, they are consistent with the topological space definition of limit if a neighborhood of −∞ is defined to contain an interval [−∞, c) for some ⁠ c ∈ R , {\displaystyle c\in \mathbb {R} ,} ⁠ a neighborhood of ∞ is defined to contain an interval (c, ∞] where ⁠ c ∈ R , {\displaystyle c\in \mathbb {R} ,} ⁠ and a neighborhood of ⁠ a ∈ R {\displaystyle a\in \mathbb {R} } ⁠ is defined in the usual way for the metric space ⁠ R . {\displaystyle \mathbb {R} .} ⁠ In this case, ⁠ R ¯ {\displaystyle {\overline {\mathbb {R} }}} ⁠ is a topological space and any function of the form f : X → Y {\displaystyle f:X\to Y} with X , Y ⊆ R ¯ {\displaystyle X,Y\subseteq {\overline {\mathbb {R} }}} is subject to the topological definition of a limit. Note that with this topological definition, it is easy to define infinite limits at finite points, which have not been defined above in the metric sense. === Alternative notation === Many authors allow for the projectively extended real line to be used as a way to include infinite values, as well as the extended real line. With this notation, the extended real line is given as ⁠ R ∪ { − ∞ , + ∞ } {\displaystyle \mathbb {R} \cup \{-\infty ,+\infty \}} ⁠ and the projectively extended real line is ⁠ R ∪ { ∞ } {\displaystyle \mathbb {R} \cup \{\infty \}} ⁠ where a neighborhood of ∞ is a set of the form { x : | x | > c } . {\displaystyle \{x:|x|>c\}.} The advantage is that one only needs three definitions for limits (left, right, and central) to cover all the cases. As presented above, for a completely rigorous account, we would need to consider 15 separate cases for each combination of infinities (five directions: −∞, left, central, right, and +∞; three bounds: −∞, finite, or +∞). There are also noteworthy pitfalls. For example, when working with the extended real line, x − 1 {\displaystyle x^{-1}} does not possess a central limit (as is to be expected): lim x → 0 + 1 x = + ∞ , lim x → 0 − 1 x = − ∞ . 
{\displaystyle \lim _{x\to 0^{+}}{1 \over x}=+\infty ,\quad \lim _{x\to 0^{-}}{1 \over x}=-\infty .} In contrast, when working with the projective real line, infinities (much like 0) are unsigned, so the central limit does exist in that context: lim x → 0 + 1 x = lim x → 0 − 1 x = lim x → 0 1 x = ∞ . {\displaystyle \lim _{x\to 0^{+}}{1 \over x}=\lim _{x\to 0^{-}}{1 \over x}=\lim _{x\to 0}{1 \over x}=\infty .} In fact, a plethora of conflicting formal systems are in use. In certain applications of numerical differentiation and integration, it is, for example, convenient to have signed zeroes. A simple reason has to do with the converse of lim x → 0 − x − 1 = − ∞ , {\displaystyle \lim _{x\to 0^{-}}{x^{-1}}=-\infty ,} namely, it is convenient for lim x → − ∞ x − 1 = − 0 {\displaystyle \lim _{x\to -\infty }{x^{-1}}=-0} to be considered true. Such zeroes can be seen as an approximation to infinitesimals. === Limits at infinity for rational functions === There are three basic rules for evaluating limits at infinity for a rational function f ( x ) = p ( x ) q ( x ) {\displaystyle f(x)={\tfrac {p(x)}{q(x)}}} (where p and q are polynomials): If the degree of p is greater than the degree of q, then the limit is positive or negative infinity depending on the signs of the leading coefficients; If the degrees of p and q are equal, the limit is the leading coefficient of p divided by the leading coefficient of q; If the degree of p is less than the degree of q, the limit is 0. If the limit at infinity exists, it represents a horizontal asymptote at y = L. Polynomials do not have horizontal asymptotes; such asymptotes may however occur with rational functions. == Functions of more than one variable == === Ordinary limits === By noting that |x − p| represents a distance, the definition of a limit can be extended to functions of more than one variable. In the case of a function f : S × T → R {\displaystyle f:S\times T\to \mathbb {R} } defined on S × T ⊆ R 2 , {\displaystyle S\times T\subseteq \mathbb {R} ^{2},} we define the limit as follows: the limit of f as (x, y) approaches (p, q) is L, written lim ( x , y ) → ( p , q ) f ( x , y ) = L {\displaystyle \lim _{(x,y)\to (p,q)}f(x,y)=L} if the following condition holds: For every ε > 0, there exists a δ > 0 such that for all x in S and y in T, whenever 0 < ( x − p ) 2 + ( y − q ) 2 < δ , {\textstyle 0<{\sqrt {(x-p)^{2}+(y-q)^{2}}}<\delta ,} we have |f(x, y) − L| < ε, or formally: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( 0 < ( x − p ) 2 + ( y − q ) 2 < δ ⟹ | f ( x , y ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,(0<{\sqrt {(x-p)^{2}+(y-q)^{2}}}<\delta \implies |f(x,y)-L|<\varepsilon ).} Here ( x − p ) 2 + ( y − q ) 2 {\textstyle {\sqrt {(x-p)^{2}+(y-q)^{2}}}} is the Euclidean distance between (x, y) and (p, q). (This can in fact be replaced by any norm ||(x, y) − (p, q)||, and be extended to any number of variables.) For example, we may say lim ( x , y ) → ( 0 , 0 ) x 4 x 2 + y 2 = 0 {\displaystyle \lim _{(x,y)\to (0,0)}{\frac {x^{4}}{x^{2}+y^{2}}}=0} because for every ε > 0, we can take δ = ε {\textstyle \delta ={\sqrt {\varepsilon }}} such that for all real (x, y) ≠ (0, 0), if 0 < ( x − 0 ) 2 + ( y − 0 ) 2 < δ , {\textstyle 0<{\sqrt {(x-0)^{2}+(y-0)^{2}}}<\delta ,} then |f(x, y) − 0| < ε. As in the single-variable case, the value of f at (p, q) does not matter in this definition of limit. 
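The choice δ = √ε in this two-variable example can be probed in the same spirit, by sampling points of the punctured disk of radius δ around the origin. A minimal Python sketch (the grid resolution is an arbitrary choice):

import math

def f(x, y):
    return x ** 4 / (x ** 2 + y ** 2)

def check(eps, steps=200):
    # Sample a grid inside the punctured disk 0 < sqrt(x^2 + y^2) < delta
    # and verify |f(x, y) - 0| < eps at every sampled point.
    delta = math.sqrt(eps)
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            x, y = 0.999 * delta * i / steps, 0.999 * delta * j / steps
            if 0 < math.hypot(x, y) < delta and not f(x, y) < eps:
                return False
    return True

print(all(check(eps) for eps in (1.0, 0.1, 1e-3)))  # True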
For such a multivariable limit to exist, this definition requires that the value of f approach L along every possible path approaching (p, q). In the above example, the function f ( x , y ) = x 4 x 2 + y 2 {\displaystyle f(x,y)={\frac {x^{4}}{x^{2}+y^{2}}}} satisfies this condition. This can be seen by considering the polar coordinates ( x , y ) = ( r cos ⁡ θ , r sin ⁡ θ ) → ( 0 , 0 ) , {\displaystyle (x,y)=(r\cos \theta ,r\sin \theta )\to (0,0),} which gives lim r → 0 f ( r cos ⁡ θ , r sin ⁡ θ ) = lim r → 0 r 4 cos 4 ⁡ θ r 2 = lim r → 0 r 2 cos 4 ⁡ θ . {\displaystyle \lim _{r\to 0}f(r\cos \theta ,r\sin \theta )=\lim _{r\to 0}{\frac {r^{4}\cos ^{4}\theta }{r^{2}}}=\lim _{r\to 0}r^{2}\cos ^{4}\theta .} Here θ = θ(r) is a function of r which controls the shape of the path along which f is approaching (p, q). Since cos θ is bounded in [−1, 1], by the sandwich theorem, this limit tends to 0. In contrast, the function f ( x , y ) = x y x 2 + y 2 {\displaystyle f(x,y)={\frac {xy}{x^{2}+y^{2}}}} does not have a limit at (0, 0). Taking the path (x, y) = (t, 0) → (0, 0), we obtain lim t → 0 f ( t , 0 ) = lim t → 0 0 t 2 = 0 , {\displaystyle \lim _{t\to 0}f(t,0)=\lim _{t\to 0}{\frac {0}{t^{2}}}=0,} while taking the path (x, y) = (t, t) → (0, 0), we obtain lim t → 0 f ( t , t ) = lim t → 0 t 2 t 2 + t 2 = 1 2 . {\displaystyle \lim _{t\to 0}f(t,t)=\lim _{t\to 0}{\frac {t^{2}}{t^{2}+t^{2}}}={\frac {1}{2}}.} Since the two values do not agree, f does not tend to a single value as (x, y) approaches (0, 0). === Multiple limits === Although less commonly used, there is another type of limit for a multivariable function, known as the multiple limit. For a two-variable function, this is the double limit. Let f : S × T → R {\displaystyle f:S\times T\to \mathbb {R} } be defined on S × T ⊆ R 2 , {\displaystyle S\times T\subseteq \mathbb {R} ^{2},} and we say the double limit of f as x approaches p and y approaches q is L, written lim x → p y → q f ( x , y ) = L {\displaystyle \lim _{{x\to p} \atop {y\to q}}f(x,y)=L} if the following condition holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( ( 0 < | x − p | < δ ) ∧ ( 0 < | y − q | < δ ) ⟹ | f ( x , y ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,((0<|x-p|<\delta )\land (0<|y-q|<\delta )\implies |f(x,y)-L|<\varepsilon ).} For such a double limit to exist, this definition requires that the value of f approach L along every possible path approaching (p, q), excluding the two lines x = p and y = q. As a result, the multiple limit is a weaker notion than the ordinary limit: if the ordinary limit exists and equals L, then the multiple limit exists and also equals L. The converse is not true: the existence of the multiple limit does not imply the existence of the ordinary limit. Consider the example f ( x , y ) = { 1 for x y ≠ 0 0 for x y = 0 {\displaystyle f(x,y)={\begin{cases}1\quad {\text{for}}\quad xy\neq 0\\0\quad {\text{for}}\quad xy=0\end{cases}}} where lim x → 0 y → 0 f ( x , y ) = 1 {\displaystyle \lim _{{x\to 0} \atop {y\to 0}}f(x,y)=1} but lim ( x , y ) → ( 0 , 0 ) f ( x , y ) {\displaystyle \lim _{(x,y)\to (0,0)}f(x,y)} does not exist. If the domain of f is restricted to ( S ∖ { p } ) × ( T ∖ { q } ) , {\displaystyle (S\setminus \{p\})\times (T\setminus \{q\}),} then the two definitions of limits coincide. 
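Both phenomena above are easy to observe numerically: evaluating xy/(x2 + y2) along two different paths exposes its path dependence, while the last example is identically 1 off the coordinate axes (so its double limit is 1) yet 0 on them (so its ordinary limit cannot exist). A short illustrative Python sketch, with arbitrarily chosen paths:

def g(x, y):                      # xy / (x^2 + y^2)
    return x * y / (x ** 2 + y ** 2)

def h(x, y):                      # 1 if xy != 0, else 0
    return 1 if x * y != 0 else 0

for t in (0.1, 0.01, 0.001):
    print(g(t, 0.0), g(t, t))      # along y = 0: 0.0; along y = x: 0.5
    print(h(t, t / 2), h(t, 0.0))  # off the axes: 1; on the x-axis: 0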
=== Multiple limits at infinity === The concept of the multiple limit can be extended to the limit at infinity, in a way similar to that of a single-variable function. For f : S × T → R , {\displaystyle f:S\times T\to \mathbb {R} ,} we say the double limit of f as x and y approach infinity is L, written lim x → ∞ y → ∞ f ( x , y ) = L {\displaystyle \lim _{{x\to \infty } \atop {y\to \infty }}f(x,y)=L} if the following condition holds: ( ∀ ε > 0 ) ( ∃ c > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( ( x > c ) ∧ ( y > c ) ⟹ | f ( x , y ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(\forall y\in T)\,((x>c)\land (y>c)\implies |f(x,y)-L|<\varepsilon ).} We say the double limit of f as x and y approach minus infinity is L, written lim x → − ∞ y → − ∞ f ( x , y ) = L {\displaystyle \lim _{{x\to -\infty } \atop {y\to -\infty }}f(x,y)=L} if the following condition holds: ( ∀ ε > 0 ) ( ∃ c > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( ( x < − c ) ∧ ( y < − c ) ⟹ | f ( x , y ) − L | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(\forall y\in T)\,((x<-c)\land (y<-c)\implies |f(x,y)-L|<\varepsilon ).} === Pointwise limits and uniform limits === Let f : S × T → R . {\displaystyle f:S\times T\to \mathbb {R} .} Instead of taking the limit as (x, y) → (p, q), we may consider taking the limit of just one variable, say, x → p, to obtain a single-variable function of y, namely g : T → R . {\displaystyle g:T\to \mathbb {R} .} In fact, this limiting process can be done in two distinct ways. The first is the pointwise limit. We say the pointwise limit of f as x approaches p is g, denoted lim x → p f ( x , y ) = g ( y ) , {\displaystyle \lim _{x\to p}f(x,y)=g(y),} or lim x → p f ( x , y ) = g ( y ) pointwise . {\displaystyle \lim _{x\to p}f(x,y)=g(y)\;\;{\text{pointwise}}.} Alternatively, we may say f tends to g pointwise as x approaches p, denoted f ( x , y ) → g ( y ) as x → p , {\displaystyle f(x,y)\to g(y)\;\;{\text{as}}\;\;x\to p,} or f ( x , y ) → g ( y ) pointwise as x → p . {\displaystyle f(x,y)\to g(y)\;\;{\text{pointwise}}\;\;{\text{as}}\;\;x\to p.} This limit exists if the following holds: ( ∀ ε > 0 ) ( ∀ y ∈ T ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ | f ( x , y ) − g ( y ) | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\forall y\in T)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies |f(x,y)-g(y)|<\varepsilon ).} Here, δ = δ(ε, y) is a function of both ε and y. Each δ is chosen for a specific point of y. Hence we say the limit is pointwise in y. For example, f ( x , y ) = x cos ⁡ y {\displaystyle f(x,y)={\frac {x}{\cos y}}} has the constant zero function as its pointwise limit lim x → 0 f ( x , y ) = 0 ( y ) pointwise {\displaystyle \lim _{x\to 0}f(x,y)=0(y)\;\;{\text{pointwise}}} because for every fixed y (with cos y ≠ 0), the limit is clearly 0. This argument fails if y is not fixed: if y is very close to π/2, the value of the fraction may deviate from 0. This leads to another definition of limit, namely the uniform limit. We say the uniform limit of f on T as x approaches p is g, denoted u n i f lim x → p y ∈ T f ( x , y ) = g ( y ) , {\displaystyle {\underset {{x\to p} \atop {y\in T}}{\mathrm {unif} \lim \;}}f(x,y)=g(y),} or lim x → p f ( x , y ) = g ( y ) uniformly on T . {\displaystyle \lim _{x\to p}f(x,y)=g(y)\;\;{\text{uniformly on}}\;T.} Alternatively, we may say f tends to g uniformly on T as x approaches p, denoted f ( x , y ) ⇉ g ( y ) on T as x → p , {\displaystyle f(x,y)\rightrightarrows g(y)\;{\text{on}}\;T\;\;{\text{as}}\;\;x\to p,} or f ( x , y ) → g ( y ) uniformly on T as x → p . 
{\displaystyle f(x,y)\to g(y)\;\;{\text{uniformly on}}\;T\;\;{\text{as}}\;\;x\to p.} This limit exists if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( 0 < | x − p | < δ ⟹ | f ( x , y ) − g ( y ) | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,(0<|x-p|<\delta \implies |f(x,y)-g(y)|<\varepsilon ).} Here, δ = δ(ε) is a function of only ε but not y. In other words, δ is uniformly applicable to all y in T. Hence we say the limit is uniform in y. For example, f ( x , y ) = x cos ⁡ y {\displaystyle f(x,y)=x\cos y} has the constant zero function as its uniform limit lim x → 0 f ( x , y ) = 0 ( y ) uniformly on R {\displaystyle \lim _{x\to 0}f(x,y)=0(y)\;\;{\text{ uniformly on}}\;\mathbb {R} } because for all real y, cos y is bounded in [−1, 1]. Hence no matter how y behaves, we may use the sandwich theorem to show that the limit is 0. === Iterated limits === Let f : S × T → R . {\displaystyle f:S\times T\to \mathbb {R} .} We may consider taking the limit of just one variable, say, x → p, to obtain a single-variable function of y, namely g : T → R , {\displaystyle g:T\to \mathbb {R} ,} and then take the limit in the other variable, namely y → q, to get a number L. Symbolically, lim y → q lim x → p f ( x , y ) = lim y → q g ( y ) = L . {\displaystyle \lim _{y\to q}\lim _{x\to p}f(x,y)=\lim _{y\to q}g(y)=L.} This limit is known as the iterated limit of the multivariable function. The order of taking limits may affect the result, i.e., lim y → q lim x → p f ( x , y ) ≠ lim x → p lim y → q f ( x , y ) {\displaystyle \lim _{y\to q}\lim _{x\to p}f(x,y)\neq \lim _{x\to p}\lim _{y\to q}f(x,y)} in general. A sufficient condition for equality is given by the Moore–Osgood theorem, which requires the limit lim x → p f ( x , y ) = g ( y ) {\displaystyle \lim _{x\to p}f(x,y)=g(y)} to be uniform on T. == Functions on metric spaces == Suppose M and N are subsets of metric spaces A and B, respectively, and f : M → N is defined between M and N, with x ∈ M, p a limit point of M and L ∈ N. It is said that the limit of f as x approaches p is L, written lim x → p f ( x ) = L {\displaystyle \lim _{x\to p}f(x)=L} if the following property holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ M ) ( 0 < d A ( x , p ) < δ ⟹ d B ( f ( x ) , L ) < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in M)\,(0<d_{A}(x,p)<\delta \implies d_{B}(f(x),L)<\varepsilon ).} Again, note that p need not be in the domain of f, nor does L need to be in the range of f, and even if f(p) is defined it need not be equal to L. === Euclidean metric === The limit in Euclidean space is a direct generalization of limits to vector-valued functions. For example, we may consider a function f : S × T → R 3 {\displaystyle f:S\times T\to \mathbb {R} ^{3}} such that f ( x , y ) = ( f 1 ( x , y ) , f 2 ( x , y ) , f 3 ( x , y ) ) . {\displaystyle f(x,y)=(f_{1}(x,y),f_{2}(x,y),f_{3}(x,y)).} Then, under the usual Euclidean metric, lim ( x , y ) → ( p , q ) f ( x , y ) = ( L 1 , L 2 , L 3 ) {\displaystyle \lim _{(x,y)\to (p,q)}f(x,y)=(L_{1},L_{2},L_{3})} if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( ∀ y ∈ T ) ( 0 < ( x − p ) 2 + ( y − q ) 2 < δ ⟹ ( f 1 − L 1 ) 2 + ( f 2 − L 2 ) 2 + ( f 3 − L 3 ) 2 < ε ) . 
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,\left(0<{\sqrt {(x-p)^{2}+(y-q)^{2}}}<\delta \implies {\sqrt {(f_{1}-L_{1})^{2}+(f_{2}-L_{2})^{2}+(f_{3}-L_{3})^{2}}}<\varepsilon \right).} In this example, the function concerned is a finite-dimensional vector-valued function. In this case, the limit theorem for vector-valued functions states that if the limit of each component exists, then the limit of the vector-valued function equals the vector whose components are the corresponding limits: lim ( x , y ) → ( p , q ) ( f 1 ( x , y ) , f 2 ( x , y ) , f 3 ( x , y ) ) = ( lim ( x , y ) → ( p , q ) f 1 ( x , y ) , lim ( x , y ) → ( p , q ) f 2 ( x , y ) , lim ( x , y ) → ( p , q ) f 3 ( x , y ) ) . {\displaystyle \lim _{(x,y)\to (p,q)}{\Bigl (}f_{1}(x,y),f_{2}(x,y),f_{3}(x,y){\Bigr )}=\left(\lim _{(x,y)\to (p,q)}f_{1}(x,y),\lim _{(x,y)\to (p,q)}f_{2}(x,y),\lim _{(x,y)\to (p,q)}f_{3}(x,y)\right).} === Manhattan metric === One might also want to consider spaces other than Euclidean space. An example would be the Manhattan space. Consider f : S → R 2 {\displaystyle f:S\to \mathbb {R} ^{2}} such that f ( x ) = ( f 1 ( x ) , f 2 ( x ) ) . {\displaystyle f(x)=(f_{1}(x),f_{2}(x)).} Then, under the Manhattan metric, lim x → p f ( x ) = ( L 1 , L 2 ) {\displaystyle \lim _{x\to p}f(x)=(L_{1},L_{2})} if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ | f 1 − L 1 | + | f 2 − L 2 | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies |f_{1}-L_{1}|+|f_{2}-L_{2}|<\varepsilon ).} Since this is also a finite-dimensional vector-valued function, the limit theorem stated above also applies. === Uniform metric === Finally, we discuss limits in function spaces, which are in general infinite-dimensional. Consider a function f(x, y) in the function space S × T → R . {\displaystyle S\times T\to \mathbb {R} .} We want to find out how, as x approaches p, f(x, y) tends to another function g(y) in the function space T → R . {\displaystyle T\to \mathbb {R} .} The "closeness" in this function space may be measured under the uniform metric. Then, we will say the uniform limit of f on T as x approaches p is g and write u n i f lim x → p y ∈ T f ( x , y ) = g ( y ) , {\displaystyle {\underset {{x\to p} \atop {y\in T}}{\mathrm {unif} \lim \;}}f(x,y)=g(y),} or lim x → p f ( x , y ) = g ( y ) uniformly on T , {\displaystyle \lim _{x\to p}f(x,y)=g(y)\;\;{\text{uniformly on}}\;T,} if the following holds: ( ∀ ε > 0 ) ( ∃ δ > 0 ) ( ∀ x ∈ S ) ( 0 < | x − p | < δ ⟹ sup y ∈ T | f ( x , y ) − g ( y ) | < ε ) . {\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies \sup _{y\in T}|f(x,y)-g(y)|<\varepsilon ).} In fact, one can see that this definition is equivalent to that of the uniform limit of a multivariable function introduced in the previous section. == Functions on topological spaces == Suppose X and Y are topological spaces with Y a Hausdorff space. Let p be a limit point of Ω ⊆ X, and L ∈ Y. For a function f : Ω → Y, it is said that the limit of f as x approaches p is L, written lim x → p f ( x ) = L , {\displaystyle \lim _{x\to p}f(x)=L,} if the following property holds: for every open neighbourhood V of L, there exists an open neighbourhood U of p such that f(U ∩ Ω − {p}) ⊆ V. This last part of the definition can also be phrased "there exists an open punctured neighbourhood U of p such that f(U ∩ Ω) ⊆ V". The domain of f does not need to contain p. If it does, then the value of f at p is irrelevant to the definition of the limit. 
In particular, if the domain of f is X − {p} (or all of X), then the limit of f as x → p exists and is equal to L if and only if, for all subsets Ω of X with limit point p, the limit of the restriction of f to Ω exists and is equal to L. Sometimes this criterion is used to establish the non-existence of the two-sided limit of a function on ⁠ R {\displaystyle \mathbb {R} } ⁠ by showing that the one-sided limits either fail to exist or do not agree. Such a view is fundamental in the field of general topology, where limits and continuity at a point are defined in terms of special families of subsets, called filters, or generalized sequences known as nets. Alternatively, the requirement that Y be a Hausdorff space can be relaxed to the assumption that Y be a general topological space, but then the limit of a function may not be unique. In particular, one can no longer talk about the limit of a function at a point, but rather a limit or the set of limits at a point. A function is continuous at a point p that is in its domain and is a limit point of it if and only if f(p) is the (or, in the general case, a) limit of f(x) as x tends to p. There is another type of limit of a function, namely the sequential limit. Let f : X → Y be a mapping from a topological space X into a Hausdorff space Y, p ∈ X a limit point of X and L ∈ Y. The sequential limit of f as x tends to p is L if, for every sequence (xn) in X − {p} that converges to p, the sequence f(xn) converges to L. If L is the limit (in the sense above) of f as x approaches p, then it is a sequential limit as well; however, the converse need not hold in general. If in addition X is metrizable, then L is the sequential limit of f as x approaches p if and only if it is the limit (in the sense above) of f as x approaches p. == Other characterizations == === In terms of sequences === For functions on the real line, one way to define the limit of a function is in terms of the limit of sequences. (This definition is usually attributed to Eduard Heine.) In this setting: lim x → a f ( x ) = L {\displaystyle \lim _{x\to a}f(x)=L} if, and only if, for all sequences xn (with, for all n, xn not equal to a) converging to a the sequence f(xn) converges to L. It was shown by Sierpiński in 1916 that proving the equivalence of this definition and the definition above requires and is equivalent to a weak form of the axiom of choice. Note that defining what it means for a sequence xn to converge to a requires the epsilon–delta method. As in the case of Weierstrass's definition, a more general Heine definition applies to functions defined on subsets of the real line. Let f be a real-valued function with the domain Dm(f ). Let a be the limit of a sequence of elements of Dm(f ) \ {a}. Then the limit (in this sense) of f is L as x approaches a if for every sequence xn ∈ Dm(f ) \ {a} (so that for all n, xn is not equal to a) that converges to a, the sequence f(xn) converges to L. This is the same as the definition of a sequential limit in the preceding section obtained by regarding the subset Dm(f ) of ⁠ R {\displaystyle \mathbb {R} } ⁠ as a metric space with the induced metric. === In non-standard calculus === In non-standard calculus the limit of a function is defined by: lim x → a f ( x ) = L {\displaystyle \lim _{x\to a}f(x)=L} if and only if for all x ∈ R ∗ , {\displaystyle x\in \mathbb {R} ^{*},} f ∗ ( x ) − L {\displaystyle f^{*}(x)-L} is infinitesimal whenever x − a is infinitesimal. 
Here R ∗ {\displaystyle \mathbb {R} ^{*}} are the hyperreal numbers and f* is the natural extension of f to the non-standard real numbers. Keisler proved that such a hyperreal definition of limit reduces the quantifier complexity by two quantifiers. On the other hand, Hrbacek writes that for the definitions to be valid for all hyperreal numbers they must implicitly be grounded in the ε-δ method, and claims that, from the pedagogical point of view, the hope that non-standard calculus could be done without ε-δ methods cannot be realized in full. Błaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament". === In terms of nearness === At the 1908 International Congress of Mathematicians, F. Riesz introduced an alternative way of defining limits and continuity, in terms of a concept called "nearness". A point x is defined to be near a set A ⊆ R {\displaystyle A\subseteq \mathbb {R} } if for every r > 0 there is a point a ∈ A so that |x − a| < r. In this setting, lim x → a f ( x ) = L {\displaystyle \lim _{x\to a}f(x)=L} if and only if for all A ⊆ R , {\displaystyle A\subseteq \mathbb {R} ,} L is near f(A) whenever a is near A. Here f(A) is the set { f ( x ) | x ∈ A } . {\displaystyle \{f(x)|x\in A\}.} This definition can also be extended to metric and topological spaces. == Relationship to continuity == The notion of the limit of a function is very closely related to the concept of continuity. A function f is said to be continuous at c if it is both defined at c and its value at c equals the limit of f as x approaches c: lim x → c f ( x ) = f ( c ) . {\displaystyle \lim _{x\to c}f(x)=f(c).} We have here assumed that c is a limit point of the domain of f. == Properties == If a function f is real-valued, then the limit of f at p is L if and only if both the right-handed limit and left-handed limit of f at p exist and are equal to L. The function f is continuous at p if and only if the limit of f(x) as x approaches p exists and is equal to f(p). If f : M → N is a function between metric spaces M and N, then continuity of f at p is equivalent to the requirement that f transforms every sequence in M which converges towards p into a sequence in N which converges towards f(p). If N is a normed vector space, then the limit operation is linear in the following sense: if the limit of f(x) as x approaches p is L and the limit of g(x) as x approaches p is P, then the limit of f(x) + g(x) as x approaches p is L + P. If a is a scalar from the base field, then the limit of af(x) as x approaches p is aL. If f and g are real-valued (or complex-valued) functions, then taking the limit of an operation on f(x) and g(x) (e.g., f + g, f − g, f × g, f / g, f^g) under certain conditions is compatible with the operation of limits of f(x) and g(x). This fact is often called the algebraic limit theorem. The main condition needed to apply the following rules is that the limits on the right-hand sides of the equations exist (in other words, these limits are finite values including 0). Additionally, the identity for division requires that the denominator on the right-hand side is non-zero (division by 0 is not defined), and the identity for exponentiation requires that the base be positive, or zero while the exponent is positive (finite). 
lim x → p ( f ( x ) + g ( x ) ) = lim x → p f ( x ) + lim x → p g ( x ) lim x → p ( f ( x ) − g ( x ) ) = lim x → p f ( x ) − lim x → p g ( x ) lim x → p ( f ( x ) ⋅ g ( x ) ) = lim x → p f ( x ) ⋅ lim x → p g ( x ) lim x → p ( f ( x ) / g ( x ) ) = lim x → p f ( x ) / lim x → p g ( x ) lim x → p f ( x ) g ( x ) = lim x → p f ( x ) lim x → p g ( x ) {\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to p}(f(x)+g(x))&=&\displaystyle \lim _{x\to p}f(x)+\lim _{x\to p}g(x)\\\displaystyle \lim _{x\to p}(f(x)-g(x))&=&\displaystyle \lim _{x\to p}f(x)-\lim _{x\to p}g(x)\\\displaystyle \lim _{x\to p}(f(x)\cdot g(x))&=&\displaystyle \lim _{x\to p}f(x)\cdot \lim _{x\to p}g(x)\\\displaystyle \lim _{x\to p}(f(x)/g(x))&=&\displaystyle {\lim _{x\to p}f(x)/\lim _{x\to p}g(x)}\\\displaystyle \lim _{x\to p}f(x)^{g(x)}&=&\displaystyle {\lim _{x\to p}f(x)^{\lim _{x\to p}g(x)}}\end{array}}} These rules are also valid for one-sided limits, including when p is ∞ or −∞. In each rule above, when one of the limits on the right is ∞ or −∞, the limit on the left may sometimes still be determined by the following rules. q + ∞ = ∞ if q ≠ − ∞ q × ∞ = { ∞ if q > 0 − ∞ if q < 0 q ∞ = 0 if q ≠ ∞ and q ≠ − ∞ ∞ q = { 0 if q < 0 ∞ if q > 0 q ∞ = { 0 if 0 < q < 1 ∞ if q > 1 q − ∞ = { ∞ if 0 < q < 1 0 if q > 1 {\displaystyle {\begin{array}{rcl}q+\infty &=&\infty {\text{ if }}q\neq -\infty \\[8pt]q\times \infty &=&{\begin{cases}\infty &{\text{if }}q>0\\-\infty &{\text{if }}q<0\end{cases}}\\[6pt]\displaystyle {\frac {q}{\infty }}&=&0{\text{ if }}q\neq \infty {\text{ and }}q\neq -\infty \\[6pt]\infty ^{q}&=&{\begin{cases}0&{\text{if }}q<0\\\infty &{\text{if }}q>0\end{cases}}\\[4pt]q^{\infty }&=&{\begin{cases}0&{\text{if }}0<q<1\\\infty &{\text{if }}q>1\end{cases}}\\[4pt]q^{-\infty }&=&{\begin{cases}\infty &{\text{if }}0<q<1\\0&{\text{if }}q>1\end{cases}}\end{array}}} (see also Extended real number line). In other cases the limit on the left may still exist, although the right-hand side, called an indeterminate form, does not allow one to determine the result. This depends on the functions f and g. These indeterminate forms are: 0 0 ± ∞ ± ∞ 0 × ± ∞ ∞ − ∞ 0 0 ∞ 0 1 ± ∞ {\displaystyle {\begin{array}{cc}\displaystyle {\frac {0}{0}}&\displaystyle {\frac {\pm \infty }{\pm \infty }}\\[6pt]0\times \pm \infty &\infty -\infty \\[8pt]\qquad 0^{0}\qquad &\qquad \infty ^{0}\qquad \\[8pt]1^{\pm \infty }\end{array}}} See further L'Hôpital's rule below and Indeterminate form. === Limits of compositions of functions === In general, from knowing that lim y → b f ( y ) = c {\displaystyle \lim _{y\to b}f(y)=c} and lim x → a g ( x ) = b , {\displaystyle \lim _{x\to a}g(x)=b,} it does not follow that lim x → a f ( g ( x ) ) = c . {\displaystyle \lim _{x\to a}f(g(x))=c.} However, this "chain rule" does hold if one of the following additional conditions holds: f(b) = c (that is, f is continuous at b), or g does not take the value b near a (that is, there exists a δ > 0 such that if 0 < |x − a| < δ then |g(x) − b| > 0). As an example of this phenomenon, consider the following function that violates both additional restrictions: f ( x ) = g ( x ) = { 0 if x ≠ 0 1 if x = 0 {\displaystyle f(x)=g(x)={\begin{cases}0&{\text{if }}x\neq 0\\1&{\text{if }}x=0\end{cases}}} Since the discontinuity of f at 0 is removable, lim x → a f ( x ) = 0 {\displaystyle \lim _{x\to a}f(x)=0} for all a. Thus, the naïve chain rule would suggest that the limit of f(f(x)) is 0. 
However, it is the case that f ( f ( x ) ) = { 1 if x ≠ 0 0 if x = 0 {\displaystyle f(f(x))={\begin{cases}1&{\text{if }}x\neq 0\\0&{\text{if }}x=0\end{cases}}} and so lim x → a f ( f ( x ) ) = 1 {\displaystyle \lim _{x\to a}f(f(x))=1} for all a. === Limits of special interest === ==== Rational functions ==== For n a nonnegative integer and constants a 1 , a 2 , a 3 , … , a n {\displaystyle a_{1},a_{2},a_{3},\ldots ,a_{n}} and b 1 , b 2 , b 3 , … , b n , {\displaystyle b_{1},b_{2},b_{3},\ldots ,b_{n},} lim x → ∞ a 1 x n + a 2 x n − 1 + a 3 x n − 2 + ⋯ + a n b 1 x n + b 2 x n − 1 + b 3 x n − 2 + ⋯ + b n = a 1 b 1 {\displaystyle \lim _{x\to \infty }{\frac {a_{1}x^{n}+a_{2}x^{n-1}+a_{3}x^{n-2}+\dots +a_{n}}{b_{1}x^{n}+b_{2}x^{n-1}+b_{3}x^{n-2}+\dots +b_{n}}}={\frac {a_{1}}{b_{1}}}} This can be proven by dividing both the numerator and denominator by xn. If the numerator is a polynomial of higher degree, the limit does not exist. If the denominator is of higher degree, the limit is 0. ==== Trigonometric functions ==== lim x → 0 sin ⁡ x x = 1 lim x → 0 1 − cos ⁡ x x = 0 {\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}&=&1\\[4pt]\displaystyle \lim _{x\to 0}{\frac {1-\cos x}{x}}&=&0\end{array}}} ==== Exponential functions ==== lim x → 0 ( 1 + x ) 1 x = lim r → ∞ ( 1 + 1 r ) r = e lim x → 0 e x − 1 x = 1 lim x → 0 e a x − 1 b x = a b lim x → 0 c a x − 1 b x = a b ln ⁡ c lim x → 0 + x x = 1 {\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to 0}(1+x)^{\frac {1}{x}}&=&\displaystyle \lim _{r\to \infty }\left(1+{\frac {1}{r}}\right)^{r}=e\\[4pt]\displaystyle \lim _{x\to 0}{\frac {e^{x}-1}{x}}&=&1\\[4pt]\displaystyle \lim _{x\to 0}{\frac {e^{ax}-1}{bx}}&=&\displaystyle {\frac {a}{b}}\\[4pt]\displaystyle \lim _{x\to 0}{\frac {c^{ax}-1}{bx}}&=&\displaystyle {\frac {a}{b}}\ln c\\[4pt]\displaystyle \lim _{x\to 0^{+}}x^{x}&=&1\end{array}}} ==== Logarithmic functions ==== lim x → 0 ln ⁡ ( 1 + x ) x = 1 lim x → 0 ln ⁡ ( 1 + a x ) b x = a b lim x → 0 log c ⁡ ( 1 + a x ) b x = a b ln ⁡ c {\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to 0}{\frac {\ln(1+x)}{x}}&=&1\\[4pt]\displaystyle \lim _{x\to 0}{\frac {\ln(1+ax)}{bx}}&=&\displaystyle {\frac {a}{b}}\\[4pt]\displaystyle \lim _{x\to 0}{\frac {\log _{c}(1+ax)}{bx}}&=&\displaystyle {\frac {a}{b\ln c}}\end{array}}} === L'Hôpital's rule === This rule uses derivatives to find limits of indeterminate forms 0/0 or ±∞/∞, and only applies to such cases. Other indeterminate forms may be manipulated into this form. Given two functions f(x) and g(x), defined over an open interval I containing the desired limit point c, then if: lim x → c f ( x ) = lim x → c g ( x ) = 0 , {\displaystyle \lim _{x\to c}f(x)=\lim _{x\to c}g(x)=0,} or lim x → c f ( x ) = ± lim x → c g ( x ) = ± ∞ , {\displaystyle \lim _{x\to c}f(x)=\pm \lim _{x\to c}g(x)=\pm \infty ,} and f {\displaystyle f} and g {\displaystyle g} are differentiable over I ∖ { c } , {\displaystyle I\setminus \{c\},} and g ′ ( x ) ≠ 0 {\displaystyle g'(x)\neq 0} for all x ∈ I ∖ { c } , {\displaystyle x\in I\setminus \{c\},} and lim x → c f ′ ( x ) g ′ ( x ) {\displaystyle \lim _{x\to c}{\tfrac {f'(x)}{g'(x)}}} exists, then: lim x → c f ( x ) g ( x ) = lim x → c f ′ ( x ) g ′ ( x ) . {\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}=\lim _{x\to c}{\frac {f'(x)}{g'(x)}}.} Normally, the first condition is the most important one. For example: lim x → 0 sin ⁡ ( 2 x ) sin ⁡ ( 3 x ) = lim x → 0 2 cos ⁡ ( 2 x ) 3 cos ⁡ ( 3 x ) = 2 ⋅ 1 3 ⋅ 1 = 2 3 . 
{\displaystyle \lim _{x\to 0}{\frac {\sin(2x)}{\sin(3x)}}=\lim _{x\to 0}{\frac {2\cos(2x)}{3\cos(3x)}}={\frac {2\cdot 1}{3\cdot 1}}={\frac {2}{3}}.} === Summations and integrals === Specifying an infinite bound on a summation or integral is a common shorthand for specifying a limit. A short way to write the limit lim n → ∞ ∑ i = s n f ( i ) {\displaystyle \lim _{n\to \infty }\sum _{i=s}^{n}f(i)} is ∑ i = s ∞ f ( i ) . {\displaystyle \sum _{i=s}^{\infty }f(i).} An important example of limits of sums such as these is the series. A short way to write the limit lim x → ∞ ∫ a x f ( t ) d t {\displaystyle \lim _{x\to \infty }\int _{a}^{x}f(t)\;dt} is ∫ a ∞ f ( t ) d t . {\displaystyle \int _{a}^{\infty }f(t)\;dt.} A short way to write the limit lim x → − ∞ ∫ x b f ( t ) d t {\displaystyle \lim _{x\to -\infty }\int _{x}^{b}f(t)\;dt} is ∫ − ∞ b f ( t ) d t . {\displaystyle \int _{-\infty }^{b}f(t)\;dt.} == See also == Big O notation – Describes limiting behavior of a function L'Hôpital's rule – Mathematical rule for evaluating some limits List of limits Limit of a sequence – Value to which tends an infinite sequence Limit point – Cluster point in a topological space Limit superior and limit inferior – Bounds of a sequence Net (mathematics) – A generalization of a sequence of points Non-standard calculus – Modern application of infinitesimals Squeeze theorem – Method for finding limits in calculus Subsequential limit – The limit of some subsequence == Notes == == References == Apostol, Tom M. (1974). Mathematical Analysis (2nd ed.). Addison–Wesley. ISBN 0-201-00288-4. Bartle, Robert (1967). The Elements of Real Analysis. Wiley. Bartle, Robert G.; Sherbert, Donald R. (2000). Introduction to Real Analysis. Wiley. Courant, Richard (1924). Vorlesungen über Differential- und Integralrechnung (in German). Springer. Hardy, G. H. (1921). A Course in Pure Mathematics. Cambridge University Press. Hubbard, John H. (2015). Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach (5th ed.). Matrix Editions. Page, Warren; Hersh, Reuben; Selden, Annie; et al., eds. (2002). "Media Highlights". The College Mathematics Journal. 33 (2): 147–154. JSTOR 2687124. Rudin, Walter (1964). Principles of Mathematical Analysis. McGraw-Hill. Sutherland, W. A. (1975). Introduction to Metric and Topological Spaces. Oxford: Oxford University Press. ISBN 0-19-853161-3. Whittaker, E. T.; Watson, G. N. (1904). A Course of Modern Analysis. Cambridge University Press. == External links == MacTutor History of Weierstrass. MacTutor History of Bolzano. Visual Calculus by Lawrence S. Husch, University of Tennessee (2001)
Wikipedia:Lina Fazylovna Rakhmatullina#0
Lina Fazylovna Rakhmatullina (1932–2024) was a Russian mathematician and Honored Worker of the Higher School of the Russian Federation who specialized in functional differential equations. Rakhmatullina was born in Tatarstan and defended her doctoral dissertation at the National Academy of Sciences of Ukraine. She led a research center on functional differential equations at Perm State University. Rakhmatullina's doctoral students included Vladimir Maksimov. == References ==
Wikipedia:Line segment#0
In geometry, a line segment is a part of a straight line that is bounded by two distinct endpoints (its extreme points), and contains every point on the line that is between its endpoints. It is a special case of an arc, with zero curvature. The length of a line segment is given by the Euclidean distance between its endpoints. A closed line segment includes both endpoints, while an open line segment excludes both endpoints; a half-open line segment includes exactly one of the endpoints. In geometry, a line segment is often denoted using an overline (vinculum) above the symbols for the two endpoints, such as in AB. Examples of line segments include the sides of a triangle or square. More generally, when both of the segment's end points are vertices of a polygon or polyhedron, the line segment is either an edge (of that polygon or polyhedron) if they are adjacent vertices, or a diagonal. When the end points both lie on a curve (such as a circle), a line segment is called a chord (of that curve). == In real or complex vector spaces == If V is a vector space over ⁠ R {\displaystyle \mathbb {R} } ⁠ or ⁠ C , {\displaystyle \mathbb {C} ,} ⁠ and L is a subset of V, then L is a line segment if L can be parameterized as L = { u + t v ∣ t ∈ [ 0 , 1 ] } {\displaystyle L=\{\mathbf {u} +t\mathbf {v} \mid t\in [0,1]\}} for some vectors u , v ∈ V {\displaystyle \mathbf {u} ,\mathbf {v} \in V} where v is nonzero. The endpoints of L are then the vectors u and u + v. Sometimes, one needs to distinguish between "open" and "closed" line segments. In this case, one would define a closed line segment as above, and an open line segment as a subset L that can be parametrized as L = { u + t v ∣ t ∈ ( 0 , 1 ) } {\displaystyle L=\{\mathbf {u} +t\mathbf {v} \mid t\in (0,1)\}} for some vectors u , v ∈ V . {\displaystyle \mathbf {u} ,\mathbf {v} \in V.} Equivalently, a line segment is the convex hull of two points. Thus, the line segment can be expressed as a convex combination of the segment's two end points. In geometry, one might define point B to be between two other points A and C, if the distance |AB| added to the distance |BC| is equal to the distance |AC|. Thus in ⁠ R 2 , {\displaystyle \mathbb {R} ^{2},} ⁠ the line segment with endpoints A = ( a x , a y ) {\displaystyle A=(a_{x},a_{y})} and C = ( c x , c y ) {\displaystyle C=(c_{x},c_{y})} is the following collection of points: { ( x , y ) ∣ ( x − c x ) 2 + ( y − c y ) 2 + ( x − a x ) 2 + ( y − a y ) 2 = ( c x − a x ) 2 + ( c y − a y ) 2 } . {\displaystyle {\Biggl \{}(x,y)\mid {\sqrt {(x-c_{x})^{2}+(y-c_{y})^{2}}}+{\sqrt {(x-a_{x})^{2}+(y-a_{y})^{2}}}={\sqrt {(c_{x}-a_{x})^{2}+(c_{y}-a_{y})^{2}}}{\Biggr \}}.} == Properties == A line segment is a connected, non-empty set. If V is a topological vector space, then a closed line segment is a closed set in V. However, an open line segment is an open set in V if and only if V is one-dimensional. More generally than above, the concept of a line segment can be defined in an ordered geometry. A pair of line segments can be any one of the following: intersecting, parallel, skew, or none of these. The last possibility is a way that line segments differ from lines: if two nonparallel lines are in the same Euclidean plane then they must cross each other, but that need not be true of segments. == In proofs == In an axiomatic treatment of geometry, the notion of betweenness is either assumed to satisfy a certain number of axioms, or defined in terms of an isometry of a line (used as a coordinate system). 
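The betweenness characterization translates directly into a membership test for a closed segment. The following Python sketch is illustrative; the helper name is arbitrary, and a small tolerance is needed because floating-point distances are inexact.

import math

def on_segment(a, b, p, tol=1e-9):
    # p lies on the closed segment from a to b exactly when |AP| + |PB| = |AB|.
    return abs(math.dist(a, p) + math.dist(p, b) - math.dist(a, b)) <= tol

print(on_segment((0, 0), (4, 0), (1, 0)))    # True: p is between the endpoints
print(on_segment((0, 0), (4, 0), (5, 0)))    # False: p lies beyond an endpoint
print(on_segment((0, 0), (4, 3), (2, 1.5)))  # True: the midpoint of the segment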
Segments play an important role in other theories. For example, in a convex set, the segment that joins any two points of the set is contained in the set. This is important because it transforms some of the analysis of convex sets to the analysis of a line segment. The segment addition postulate can be used to add congruent segments or segments with equal lengths, and consequently to substitute other segments into another statement to make segments congruent. == As a degenerate ellipse == A line segment can be viewed as a degenerate case of an ellipse, in which the semiminor axis goes to zero, the foci go to the endpoints, and the eccentricity goes to one. A standard definition of an ellipse is the set of points for which the sum of a point's distances to two foci is a constant; if this constant equals the distance between the foci, the line segment is the result. A complete orbit of this ellipse traverses the line segment twice. As a degenerate orbit, this is a radial elliptic trajectory. == In other geometric shapes == In addition to appearing as the edges and diagonals of polygons and polyhedra, line segments also appear in numerous other locations relative to other geometric shapes. === Triangles === Some very frequently considered segments in a triangle include the three altitudes (each perpendicularly connecting a side or its extension to the opposite vertex), the three medians (each connecting a side's midpoint to the opposite vertex), the perpendicular bisectors of the sides (perpendicularly connecting the midpoint of a side to one of the other sides), and the internal angle bisectors (each connecting a vertex to the opposite side). In each case, there are various equalities relating these segment lengths to others (discussed in the articles on the various types of segment), as well as various inequalities. Other segments of interest in a triangle include those connecting various triangle centers to each other, most notably the incenter, the circumcenter, the nine-point center, the centroid and the orthocenter. === Quadrilaterals === In addition to the sides and diagonals of a quadrilateral, some important segments are the two bimedians (connecting the midpoints of opposite sides) and the four maltitudes (each perpendicularly connecting one side to the midpoint of the opposite side). === Circles and ellipses === Any straight line segment connecting two points on a circle or ellipse is called a chord. A chord of maximal length in a circle is called a diameter, and any segment connecting the circle's center (the midpoint of a diameter) to a point on the circle is called a radius. In an ellipse, the longest chord, which is also the longest diameter, is called the major axis, and a segment from the midpoint of the major axis (the ellipse's center) to either endpoint of the major axis is called a semi-major axis. Similarly, the shortest diameter of an ellipse is called the minor axis, and the segment from its midpoint (the ellipse's center) to either of its endpoints is called a semi-minor axis. The chords of an ellipse which are perpendicular to the major axis and pass through one of its foci are called the latera recta of the ellipse. The interfocal segment connects the two foci. == Directed line segment == When a line segment is given an orientation (direction) it is called a directed line segment or oriented line segment. It suggests a translation or displacement (perhaps caused by a force). The magnitude and direction are indicative of a potential change. 
Extending a directed line segment semi-infinitely produces a directed half-line; extending it infinitely in both directions produces a directed line. This suggestion has been absorbed into mathematical physics through the concept of a Euclidean vector. The collection of all directed line segments is usually reduced by making equipollent any pair having the same length and orientation. This application of an equivalence relation was introduced by Giusto Bellavitis in 1835. == Generalizations == Analogous to straight line segments above, one can also define arcs as segments of a curve. In one-dimensional space, a ball is a line segment. An oriented plane segment or bivector generalizes the directed line segment. Beyond Euclidean geometry, geodesic segments play the role of line segments. A line segment is a one-dimensional simplex; a two-dimensional simplex is a triangle. == Types of line segments == Chord (geometry) Diameter Radius == See also == Polygonal chain Interval (mathematics) Line segment intersection, the algorithmic problem of finding intersecting pairs in a collection of line segments == Notes == == References == David Hilbert, The Foundations of Geometry, The Open Court Publishing Company, 1950, p. 4. == External links == Weisstein, Eric W. "Line segment". MathWorld. Line Segment at PlanetMath Copying a line segment with compass and straightedge Dividing a line segment into N equal parts with compass and straightedge Animated demonstration This article incorporates material from Line segment on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia:Linear Algebra and Its Applications#0
Linear Algebra and its Applications is a biweekly peer-reviewed mathematics journal published by Elsevier and covering matrix theory and finite-dimensional linear algebra. == History == The journal was established in January 1968 with A.J. Hoffman, A.S. Householder, A.M. Ostrowski, H. Schneider, and O. Taussky Todd as founding editors-in-chief. The current editors-in-chief are Richard A. Brualdi (University of Wisconsin at Madison), Volker Mehrmann (Technische Universität Berlin), and Peter Semrl (University of Ljubljana). == Abstracting and indexing == The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.401. == References ==
Wikipedia:Linear algebra#0
Linear algebra is the branch of mathematics concerning linear equations such as a 1 x 1 + ⋯ + a n x n = b , {\displaystyle a_{1}x_{1}+\cdots +a_{n}x_{n}=b,} linear maps such as ( x 1 , … , x n ) ↦ a 1 x 1 + ⋯ + a n x n , {\displaystyle (x_{1},\ldots ,x_{n})\mapsto a_{1}x_{1}+\cdots +a_{n}x_{n},} and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to function spaces. Linear algebra is also used in most sciences and fields of engineering because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear map that best approximates the function near that point. == History == The procedure (using counting rods) for solving simultaneous linear equations now called Gaussian elimination appears in the ancient Chinese mathematical text Chapter Eight: Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. Systems of linear equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in geometry. In fact, in this new geometry, now called Cartesian geometry, lines and planes are represented by linear equations, and computing their intersections amounts to solving systems of linear equations. The first systematic methods for solving linear systems used determinants and were first considered by Leibniz in 1693. In 1750, Gabriel Cramer used them for giving explicit solutions of linear systems, now called Cramer's rule. Later, Gauss further described the method of elimination, which was initially listed as an advancement in geodesy. In 1844 Hermann Grassmann published his "Theory of Extension" which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb. Linear algebra grew with ideas noted in the complex plane. For instance, two numbers w and z in C {\displaystyle \mathbb {C} } have a difference w – z, and the line segments wz and 0(w − z) are of the same length and direction. The segments are equipollent. The four-dimensional system H {\displaystyle \mathbb {H} } of quaternions was discovered by W.R. Hamilton in 1843. The term vector was introduced as v = xi + yj + zk representing a point in space. The quaternion difference p – q also produces a segment equipollent to pq. Other hypercomplex number systems also used the idea of a linear space with a basis. Arthur Cayley introduced matrix multiplication and the inverse matrix in 1856, making possible the general linear group. The mechanism of group representation became available for describing complex and hypercomplex numbers. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants". 
Benjamin Peirce published his Linear Associative Algebra (1872), and his son Charles Sanders Peirce extended the work later. The telegraph required an explanatory system, and the 1873 publication by James Clerk Maxwell of A Treatise on Electricity and Magnetism instituted a field theory of forces and required differential geometry for expression. Linear algebra is flat differential geometry and serves in tangent spaces to manifolds. Electromagnetic symmetries of spacetime are expressed by the Lorentz transformations, and much of the history of linear algebra is the history of Lorentz transformations. The first modern and more precise definition of a vector space was introduced by Peano in 1888; by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century when many ideas and methods of previous centuries were generalized as abstract algebra. The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modeling and simulations. == Vector spaces == Until the 19th century, linear algebra was introduced through systems of linear equations and matrices. In modern mathematics, the presentation through vector spaces is generally preferred, since it is more synthetic, more general (not limited to the finite-dimensional case), and conceptually simpler, although more abstract. A vector space over a field F (often the field of the real numbers or of the complex numbers) is a set V equipped with two binary operations. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The axioms that addition and scalar multiplication must satisfy are the following. (In the list below, u, v and w are arbitrary elements of V, and a and b are arbitrary scalars in the field F.) Associativity of addition: u + (v + w) = (u + v) + w. Commutativity of addition: u + v = v + u. Identity element of addition: there exists an element 0 in V, called the zero vector, such that v + 0 = v for all v in V. Inverse elements of addition: for every v in V, there exists an element −v in V such that v + (−v) = 0. Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v. Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity of F. Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av. Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv. The first four axioms mean that V is an abelian group under addition. The elements of a specific vector space may have various natures; for example, they could be tuples, sequences, functions, polynomials, or matrices. Linear algebra is concerned with the properties of such objects that are common to all vector spaces. === Linear maps === Linear maps are mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear map (also called, in some contexts, linear transformation or linear mapping) is a map T : V → W {\displaystyle T:V\to W} that is compatible with addition and scalar multiplication, that is T ( u + v ) = T ( u ) + T ( v ) , T ( a v ) = a T ( v ) {\displaystyle T(\mathbf {u} +\mathbf {v} )=T(\mathbf {u} )+T(\mathbf {v} ),\quad T(a\mathbf {v} )=aT(\mathbf {v} )} for any vectors u,v in V and scalar a in F. An equivalent condition is that for any vectors u, v in V and scalars a, b in F, one has T ( a u + b v ) = a T ( u ) + b T ( v ) {\displaystyle T(a\mathbf {u} +b\mathbf {v} )=aT(\mathbf {u} )+bT(\mathbf {v} )} . When V = W are the same vector space, a linear map T : V → V is also known as a linear operator on V. 
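The defining identity T(au + bv) = aT(u) + bT(v) can be spot-checked numerically for a candidate map. In the NumPy sketch below (an illustration with arbitrary names, not a proof), a matrix map passes the check while a translation, which is affine but not linear, fails it:

import numpy as np

rng = np.random.default_rng(0)

def looks_linear(T, dim, trials=100):
    # Spot-check T(a u + b v) == a T(u) + b T(v) on random inputs.
    for _ in range(trials):
        u, v = rng.normal(size=dim), rng.normal(size=dim)
        a, b = rng.normal(), rng.normal()
        if not np.allclose(T(a * u + b * v), a * T(u) + b * T(v)):
            return False
    return True

A = rng.normal(size=(3, 3))
print(looks_linear(lambda x: A @ x, 3))    # True: matrix maps are linear
print(looks_linear(lambda x: x + 1.0, 3))  # False: translation is not linear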
A bijective linear map between two vector spaces (that is, every vector from the second space is associated with exactly one in the first) is an isomorphism. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view, in the sense that they cannot be distinguished by using vector space properties. An essential question in linear algebra is testing whether a linear map is an isomorphism or not, and, if it is not an isomorphism, finding its range (or image) and the set of elements that are mapped to the zero vector, called the kernel of the map. All these questions can be solved by using Gaussian elimination or some variant of this algorithm. === Subspaces, span, and basis === The study of those subsets of vector spaces that are in themselves vector spaces under the induced operations is fundamental, as it is for many mathematical structures. These subsets are called linear subspaces. More precisely, a linear subspace of a vector space V over a field F is a subset W of V such that u + v and au are in W, for every u, v in W, and every a in F. (These conditions suffice for implying that W is a vector space.) For example, given a linear map T : V → W, the image T(V) of V, and the inverse image T−1(0) of 0 (called kernel or null space), are linear subspaces of W and V, respectively. Another important way of forming a subspace is to consider linear combinations of a set S of vectors: the set of all sums a 1 v 1 + a 2 v 2 + ⋯ + a k v k , {\displaystyle a_{1}\mathbf {v} _{1}+a_{2}\mathbf {v} _{2}+\cdots +a_{k}\mathbf {v} _{k},} where v1, v2, ..., vk are in S, and a1, a2, ..., ak are in F form a linear subspace called the span of S. The span of S is also the intersection of all linear subspaces containing S. In other words, it is the smallest (for the inclusion relation) linear subspace containing S. A set of vectors is linearly independent if none is in the span of the others. Equivalently, a set S of vectors is linearly independent if the only way to express the zero vector as a linear combination of elements of S is to take zero for every coefficient ai. A set of vectors that spans a vector space is called a spanning set or generating set. If a spanning set S is linearly dependent (that is, not linearly independent), then some element w of S is in the span of the other elements of S, and the span would remain the same if one were to remove w from S. One may continue to remove elements of S until getting a linearly independent spanning set. Such a linearly independent set that spans a vector space V is called a basis of V. The importance of bases lies in the fact that they are simultaneously minimal generating sets and maximal independent sets. More precisely, if S is a linearly independent set, and T is a spanning set such that S ⊆ T, then there is a basis B such that S ⊆ B ⊆ T. Any two bases of a vector space V have the same cardinality, which is called the dimension of V; this is the dimension theorem for vector spaces. Moreover, two vector spaces over the same field F are isomorphic if and only if they have the same dimension. If any basis of V (and therefore every basis) has a finite number of elements, V is a finite-dimensional vector space. If U is a subspace of V, then dim U ≤ dim V. In the case where V is finite-dimensional, the equality of the dimensions implies U = V. If U1 and U2 are subspaces of V, then dim ⁡ ( U 1 + U 2 ) = dim ⁡ U 1 + dim ⁡ U 2 − dim ⁡ ( U 1 ∩ U 2 ) , {\displaystyle \dim(U_{1}+U_{2})=\dim U_{1}+\dim U_{2}-\dim(U_{1}\cap U_{2}),} where U1 + U2 denotes the span of U1 ∪ U2. 
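Linear independence and the dimension of a span are routinely computed from the rank of the matrix whose rows are the given vectors; a set is linearly independent exactly when the rank equals the number of vectors. A brief NumPy sketch:

import numpy as np

# Rows of V are three vectors of R^3; the third is the sum of the first two.
V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])

rank = np.linalg.matrix_rank(V)
print(rank)              # 2: the dimension of the span of the three vectors
print(rank == len(V))    # False: the set is linearly dependent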
== Matrices == Matrices allow explicit manipulation of finite-dimensional vector spaces and linear maps. Their theory is thus an essential part of linear algebra. Let V be a finite-dimensional vector space over a field F, and (v1, v2, ..., vm) be a basis of V (thus m is the dimension of V). By definition of a basis, the map ( a 1 , … , a m ) ↦ a 1 v 1 + ⋯ + a m v m F m → V {\displaystyle {\begin{aligned}(a_{1},\ldots ,a_{m})&\mapsto a_{1}\mathbf {v} _{1}+\cdots +a_{m}\mathbf {v} _{m}\\F^{m}&\to V\end{aligned}}} is a bijection from Fm, the set of the sequences of m elements of F, onto V. This is an isomorphism of vector spaces, if Fm is equipped with its standard structure of vector space, where vector addition and scalar multiplication are done component by component. This isomorphism allows representing a vector by its inverse image under this isomorphism, that is by the coordinate vector (a1, ..., am) or by the column matrix [ a 1 ⋮ a m ] . {\displaystyle {\begin{bmatrix}a_{1}\\\vdots \\a_{m}\end{bmatrix}}.} If W is another finite-dimensional vector space (possibly the same), with a basis (w1, ..., wn), a linear map f from W to V is well defined by its values on the basis elements, that is (f(w1), ..., f(wn)). Thus, f is well represented by the list of the corresponding column matrices. That is, if f ( w j ) = a 1 , j v 1 + ⋯ + a m , j v m , {\displaystyle f(w_{j})=a_{1,j}v_{1}+\cdots +a_{m,j}v_{m},} for j = 1, ..., n, then f is represented by the matrix [ a 1 , 1 ⋯ a 1 , n ⋮ ⋱ ⋮ a m , 1 ⋯ a m , n ] , {\displaystyle {\begin{bmatrix}a_{1,1}&\cdots &a_{1,n}\\\vdots &\ddots &\vdots \\a_{m,1}&\cdots &a_{m,n}\end{bmatrix}},} with m rows and n columns. Matrix multiplication is defined in such a way that the product of two matrices is the matrix of the composition of the corresponding linear maps, and the product of a matrix and a column matrix is the column matrix representing the result of applying the represented linear map to the represented vector. It follows that the theory of finite-dimensional vector spaces and the theory of matrices are two different languages for expressing the same concepts. Two matrices that encode the same linear map, with respect to different pairs of bases, are called equivalent. It can be proved that two matrices are equivalent if and only if one can transform one into the other by elementary row and column operations. For a matrix representing a linear map from W to V, the row operations correspond to change of bases in V and the column operations correspond to change of bases in W. Every matrix is equivalent to an identity matrix possibly bordered by zero rows and zero columns. In terms of vector spaces, this means that, for any linear map from W to V, there are bases such that a part of the basis of W is mapped bijectively on a part of the basis of V, and that the remaining basis elements of W, if any, are mapped to zero. Gaussian elimination is the basic algorithm for finding these elementary operations, and proving these results. == Linear systems == A finite set of linear equations in a finite set of variables, for example, x1, x2, ..., xn, or x, y, ..., z is called a system of linear equations or a linear system. Systems of linear equations form a fundamental part of linear algebra. Historically, linear algebra and matrix theory have been developed for solving such systems. In the modern presentation of linear algebra through vector spaces and matrices, many problems may be interpreted in terms of linear systems. For example, let 2 x + y − z = 8 − 3 x − y + 2 z = − 11 − 2 x + y + 2 z = − 3 {\displaystyle {\begin{aligned}2x+y-z&=8\\-3x-y+2z&=-11\\-2x+y+2z&=-3\end{aligned}}} (S) be a linear system.
To such a system, one may associate its matrix M = [ 2 1 − 1 − 3 − 1 2 − 2 1 2 ] {\displaystyle M=\left[{\begin{array}{rrr}2&1&-1\\-3&-1&2\\-2&1&2\end{array}}\right]} and its right-hand side vector v = [ 8 − 11 − 3 ] . {\displaystyle \mathbf {v} ={\begin{bmatrix}8\\-11\\-3\end{bmatrix}}.} Let T be the linear transformation associated with the matrix M. A solution of the system (S) is a vector X = [ x y z ] {\displaystyle \mathbf {X} ={\begin{bmatrix}x\\y\\z\end{bmatrix}}} such that T ( X ) = v , {\displaystyle T(\mathbf {X} )=\mathbf {v} ,} that is an element of the preimage of v by T. Let (S′) be the associated homogeneous system, where the right-hand sides of the equations are put to zero: 2 x + y − z = 0 − 3 x − y + 2 z = 0 − 2 x + y + 2 z = 0 {\displaystyle {\begin{aligned}2x+y-z&=0\\-3x-y+2z&=0\\-2x+y+2z&=0\end{aligned}}} The solutions of (S′) are exactly the elements of the kernel of T or, equivalently, of M. Gaussian elimination consists of performing elementary row operations on the augmented matrix [ M v ] = [ 2 1 − 1 8 − 3 − 1 2 − 11 − 2 1 2 − 3 ] {\displaystyle \left[\!{\begin{array}{c|c}M&\mathbf {v} \end{array}}\!\right]=\left[{\begin{array}{rrr|r}2&1&-1&8\\-3&-1&2&-11\\-2&1&2&-3\end{array}}\right]} for putting it in reduced row echelon form. These row operations do not change the set of solutions of the system of equations. In the example, the reduced echelon form is [ M v ] = [ 1 0 0 2 0 1 0 3 0 0 1 − 1 ] , {\displaystyle \left[\!{\begin{array}{c|c}M&\mathbf {v} \end{array}}\!\right]=\left[{\begin{array}{rrr|r}1&0&0&2\\0&1&0&3\\0&0&1&-1\end{array}}\right],} showing that the system (S) has the unique solution x = 2 y = 3 z = − 1. {\displaystyle {\begin{aligned}x&=2\\y&=3\\z&=-1.\end{aligned}}} It follows from this matrix interpretation of linear systems that the same methods can be applied for solving linear systems and for many operations on matrices and linear transformations, which include the computation of ranks, kernels, and matrix inverses. == Endomorphisms and square matrices == A linear endomorphism is a linear map that maps a vector space V to itself. If V has a basis of n elements, such an endomorphism is represented by a square matrix of size n. Compared with general linear maps, linear endomorphisms and square matrices have specific properties that make their study an important part of linear algebra, with uses in many parts of mathematics, including geometric transformations, coordinate changes, and quadratic forms. === Determinant === The determinant of a square matrix A is defined to be ∑ σ ∈ S n ( − 1 ) σ a 1 σ ( 1 ) ⋯ a n σ ( n ) , {\displaystyle \sum _{\sigma \in S_{n}}(-1)^{\sigma }a_{1\sigma (1)}\cdots a_{n\sigma (n)},} where Sn is the group of all permutations of n elements, σ is a permutation, and (−1)σ is the signature of the permutation. A matrix is invertible if and only if the determinant is invertible (i.e., nonzero if the scalars belong to a field). Cramer's rule is a closed-form expression, in terms of determinants, of the solution of a system of n linear equations in n unknowns. Cramer's rule is useful for reasoning about the solution, but, except for n = 2 or 3, it is rarely used for computing a solution, since Gaussian elimination is a faster algorithm. The determinant of an endomorphism is the determinant of the matrix representing the endomorphism in terms of some ordered basis. This definition makes sense since this determinant is independent of the choice of the basis.
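As a small numerical check (a sketch assuming Python with NumPy; the helper leibniz_det is a name introduced here for the illustration), the defining sum over permutations can be compared with a library determinant. Applied to the matrix M of the linear system above, the nonzero determinant confirms that (S) has a unique solution.

```python
import itertools
import numpy as np

def leibniz_det(A):
    """Determinant as the sum over all permutations (practical only for small n)."""
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # signature via the number of inversions of the permutation
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        term = float((-1) ** inversions)
        for i in range(n):
            term *= A[i, perm[i]]
        total += term
    return total

M = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
v = np.array([8.0, -11.0, -3.0])

print(leibniz_det(M), np.linalg.det(M))  # both -1.0 (up to rounding)
print(np.linalg.solve(M, v))             # [ 2.  3. -1.]
```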
=== Eigenvalues and eigenvectors === If f is a linear endomorphism of a vector space V over a field F, an eigenvector of f is a nonzero vector v of V such that f(v) = av for some scalar a in F. This scalar a is an eigenvalue of f. If the dimension of V is finite, and a basis has been chosen, f and v may be represented, respectively, by a square matrix M and a column matrix z; the equation defining eigenvectors and eigenvalues becomes M z = a z . {\displaystyle Mz=az.} Using the identity matrix I, whose entries are all zero, except those of the main diagonal, which are equal to one, this may be rewritten ( M − a I ) z = 0. {\displaystyle (M-aI)z=0.} As z is supposed to be nonzero, this means that M − aI is a singular matrix, and thus that its determinant det (M − aI) equals zero. The eigenvalues are thus the roots of the polynomial det ( x I − M ) . {\displaystyle \det(xI-M).} If V is of dimension n, this is a monic polynomial of degree n, called the characteristic polynomial of the matrix (or of the endomorphism), and there are, at most, n eigenvalues. If a basis exists that consists only of eigenvectors, the matrix of f on this basis has a very simple structure: it is a diagonal matrix such that the entries on the main diagonal are eigenvalues, and the other entries are zero. In this case, the endomorphism and the matrix are said to be diagonalizable. More generally, an endomorphism and a matrix are also said to be diagonalizable if they become diagonalizable after extending the field of scalars. In this extended sense, if the characteristic polynomial is square-free, then the matrix is diagonalizable. A real symmetric matrix is always diagonalizable. There are non-diagonalizable matrices, the simplest being [ 0 1 0 0 ] {\displaystyle {\begin{bmatrix}0&1\\0&0\end{bmatrix}}} (it cannot be diagonalizable since its square is the zero matrix, and the square of a nonzero diagonal matrix is never zero). When an endomorphism is not diagonalizable, there are bases on which it has a simple form, although not as simple as the diagonal form. The Frobenius normal form does not require extending the field of scalars and makes the characteristic polynomial immediately readable on the matrix. The Jordan normal form requires extending the field of scalars so that it contains all eigenvalues, and differs from the diagonal form only by some entries that are just above the main diagonal and are equal to 1. == Duality == A linear form is a linear map from a vector space V over a field F to the field of scalars F, viewed as a vector space over itself. Equipped with pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the dual space of V, and usually denoted V* or V′. If v1, ..., vn is a basis of V (this implies that V is finite-dimensional), then one can define, for i = 1, ..., n, a linear map vi* such that vi*(vi) = 1 and vi*(vj) = 0 if j ≠ i. These linear maps form a basis of V*, called the dual basis of v1, ..., vn. (If V is not finite-dimensional, the vi* may be defined similarly; they are linearly independent, but do not form a basis.) For v in V, the map f ↦ f ( v ) {\displaystyle f\mapsto f(\mathbf {v} )} is a linear form on V*. This defines the canonical linear map from V into (V*)*, the dual of V*, called the double dual or bidual of V. This canonical map is an isomorphism if V is finite-dimensional, and this allows identifying V with its bidual. (In the infinite-dimensional case, the canonical map is injective, but not surjective.)
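When V = Fn and the basis vectors are taken as the columns of an invertible matrix B, the dual basis has a concrete description: the coordinate functionals vi* are the rows of B−1, since the identity B−1B = I encodes exactly vi*(vi) = 1 and vi*(vj) = 0 for j ≠ i. A minimal sketch, assuming Python with NumPy and an arbitrarily chosen basis:

```python
import numpy as np

# A basis of R^2, written as the columns of B (an arbitrary invertible example).
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
dual = np.linalg.inv(B)   # row i of B^{-1} is the coordinate functional v_i*

v1, v2 = B[:, 0], B[:, 1]
print(dual[0] @ v1, dual[0] @ v2)  # 1.0 0.0   (v1*(v1) = 1, v1*(v2) = 0)
print(dual[1] @ v1, dual[1] @ v2)  # 0.0 1.0
```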
There is thus a complete symmetry between a finite-dimensional vector space and its dual. This motivates the frequent use, in this context, of the bra–ket notation ⟨ f , x ⟩ {\displaystyle \langle f,\mathbf {x} \rangle } for denoting f(x). === Dual map === Let f : V → W {\displaystyle f:V\to W} be a linear map. For every linear form h on W, the composite function h ∘ f is a linear form on V. This defines a linear map f ∗ : W ∗ → V ∗ {\displaystyle f^{*}:W^{*}\to V^{*}} between the dual spaces, which is called the dual or the transpose of f. If V and W are finite-dimensional, and M is the matrix of f in terms of some ordered bases, then the matrix of f* over the dual bases is the transpose MT of M, obtained by exchanging rows and columns. If elements of vector spaces and their duals are represented by column vectors, this duality may be expressed in bra–ket notation by ⟨ h T , M v ⟩ = ⟨ h T M , v ⟩ . {\displaystyle \langle h^{\mathsf {T}},M\mathbf {v} \rangle =\langle h^{\mathsf {T}}M,\mathbf {v} \rangle .} To highlight this symmetry, the two members of this equality are sometimes written ⟨ h T ∣ M ∣ v ⟩ . {\displaystyle \langle h^{\mathsf {T}}\mid M\mid \mathbf {v} \rangle .} === Inner-product spaces === Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map ⟨ ⋅ , ⋅ ⟩ : V × V → F {\displaystyle \langle \cdot ,\cdot \rangle :V\times V\to F} that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F: Conjugate symmetry: ⟨ u , v ⟩ = ⟨ v , u ⟩ ¯ . {\displaystyle \langle \mathbf {u} ,\mathbf {v} \rangle ={\overline {\langle \mathbf {v} ,\mathbf {u} \rangle }}.} Over R {\displaystyle \mathbb {R} } , this reduces to ordinary symmetry. Linearity in the first argument: ⟨ a u , v ⟩ = a ⟨ u , v ⟩ . ⟨ u + v , w ⟩ = ⟨ u , w ⟩ + ⟨ v , w ⟩ . {\displaystyle {\begin{aligned}\langle a\mathbf {u} ,\mathbf {v} \rangle &=a\langle \mathbf {u} ,\mathbf {v} \rangle .\\\langle \mathbf {u} +\mathbf {v} ,\mathbf {w} \rangle &=\langle \mathbf {u} ,\mathbf {w} \rangle +\langle \mathbf {v} ,\mathbf {w} \rangle .\end{aligned}}} Positive-definiteness: ⟨ v , v ⟩ ≥ 0 {\displaystyle \langle \mathbf {v} ,\mathbf {v} \rangle \geq 0} with equality only for v = 0. We can define the length of a vector v in V by ‖ v ‖ 2 = ⟨ v , v ⟩ , {\displaystyle \|\mathbf {v} \|^{2}=\langle \mathbf {v} ,\mathbf {v} \rangle ,} and we can prove the Cauchy–Schwarz inequality: | ⟨ u , v ⟩ | ≤ ‖ u ‖ ⋅ ‖ v ‖ . {\displaystyle |\langle \mathbf {u} ,\mathbf {v} \rangle |\leq \|\mathbf {u} \|\cdot \|\mathbf {v} \|.} In particular, | ⟨ u , v ⟩ | ‖ u ‖ ⋅ ‖ v ‖ ≤ 1 , {\displaystyle {\frac {|\langle \mathbf {u} ,\mathbf {v} \rangle |}{\|\mathbf {u} \|\cdot \|\mathbf {v} \|}}\leq 1,} and so this quantity can be called the cosine of the angle between the two vectors. Two vectors are orthogonal if ⟨u, v⟩ = 0. An orthonormal basis is a basis where all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional inner-product space, an orthonormal basis can be found by the Gram–Schmidt procedure. Orthonormal bases are particularly easy to deal with, since if v = a1 v1 + ⋯ + an vn, then a i = ⟨ v , v i ⟩ . {\displaystyle a_{i}=\langle \mathbf {v} ,\mathbf {v} _{i}\rangle .}
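The Gram–Schmidt procedure mentioned above can be sketched in a few lines. This is a minimal version assuming Python with NumPy, the standard dot product on R^n, and linearly independent input vectors; production code would use a more numerically careful variant:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors in R^n."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:
            w -= (w @ b) * b          # subtract the projection onto b
        basis.append(w / np.linalg.norm(w))
    return basis

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
e1, e2 = gram_schmidt(vs)
print(round(e1 @ e2, 12))   # 0.0: orthogonal
print(e1 @ e1, e2 @ e2)     # 1.0 1.0 (up to rounding): unit length
```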
The inner product facilitates the construction of many useful concepts. For instance, given a transform T, we can define its Hermitian conjugate T* as the linear transform satisfying ⟨ T u , v ⟩ = ⟨ u , T ∗ v ⟩ . {\displaystyle \langle T\mathbf {u} ,\mathbf {v} \rangle =\langle \mathbf {u} ,T^{*}\mathbf {v} \rangle .} If T satisfies TT* = T*T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that spans V. == Relationship with geometry == There is a strong relationship between linear algebra and geometry, which started with the introduction by René Descartes, in 1637, of Cartesian coordinates. In this new (at that time) geometry, now called Cartesian geometry, points are represented by Cartesian coordinates, which are sequences of three real numbers (in the case of the usual three-dimensional space). The basic objects of geometry, lines and planes, are represented by linear equations. Thus, computing intersections of lines and planes amounts to solving systems of linear equations. This was one of the main motivations for developing linear algebra. Most geometric transformations, such as translations, rotations, reflections, rigid motions, isometries, and projections transform lines into lines. It follows that they can be defined, specified, and studied in terms of linear maps. This is also the case for homographies and Möbius transformations when considered as transformations of a projective space. Until the end of the 19th century, geometric spaces were defined by axioms relating points, lines, and planes (synthetic geometry). Around this date, it appeared that one may also define geometric spaces by constructions involving vector spaces (see, for example, Projective space and Affine space). It has been shown that the two approaches are essentially equivalent. In classical geometry, the involved vector spaces are vector spaces over the reals, but the constructions may be extended to vector spaces over any field, making it possible to consider geometry over arbitrary fields, including finite fields. Presently, most textbooks introduce geometric spaces from linear algebra, and geometry is often presented, at the elementary level, as a subfield of linear algebra. == Usage and applications == Linear algebra is used in almost all areas of mathematics, thus making it relevant in almost all scientific domains that use mathematics. These applications may be divided into several wide categories. === Functional analysis === Functional analysis studies function spaces. These are vector spaces with additional structure, such as Hilbert spaces. Linear algebra is thus a fundamental part of functional analysis and its applications, which include, in particular, quantum mechanics (wave functions) and Fourier analysis (orthogonal basis). === Scientific computation === Nearly all scientific computations involve linear algebra. Consequently, linear algebra algorithms have been highly optimized. BLAS and LAPACK are the best known implementations. For improving efficiency, some of them configure the algorithms automatically, at run time, to adapt them to the specificities of the computer (cache size, number of available cores, ...). Since the 1960s there have been processors with specialized instructions for optimizing the operations of linear algebra, optional array processors under the control of a conventional processor, supercomputers designed for array processing, and conventional processors augmented with vector registers.
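As a small illustration of this layering (a sketch assuming Python with NumPy, whose dense routines are typically backed by BLAS and LAPACK builds), a linear solve written at this level dispatches to an optimized LAPACK factorization rather than a hand-written elimination loop:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)          # delegates to a LAPACK LU-based solver
print(np.linalg.norm(A @ x - b))   # tiny residual: the solve is accurate
```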
Some contemporary processors, typically graphics processing units (GPU), are designed with a matrix structure, for optimizing the operations of linear algebra. === Geometry of ambient space === The modeling of ambient space is based on geometry. Sciences concerned with this space use geometry widely. This is the case with mechanics and robotics, for describing rigid body dynamics; geodesy, for describing the shape of the Earth; perspectivity, computer vision, and computer graphics, for describing the relationship between a scene and its plane representation; and many other scientific domains. In all these applications, synthetic geometry is often used for general descriptions and a qualitative approach, but for the study of explicit situations, one must compute with coordinates. This requires the heavy use of linear algebra. === Study of complex systems === Most physical phenomena are modeled by partial differential equations. To solve them, one usually decomposes the space in which the solutions are searched into small, mutually interacting cells. For linear systems this interaction involves linear functions. For nonlinear systems, this interaction is often approximated by linear functions. This is called a linear model or first-order approximation. Linear models are frequently used for complex nonlinear real-world systems because they make parametrization more manageable. In both cases, very large matrices are generally involved. Weather forecasting (or more specifically, parametrization for atmospheric modeling) is a typical example of a real-world application, where the whole Earth's atmosphere is divided into cells of, say, 100 km of width and 100 km of height. === Fluid mechanics, fluid dynamics, and thermal energy systems === Linear algebra, a branch of mathematics dealing with vector spaces and linear mappings between these spaces, plays a critical role in various engineering disciplines, including fluid mechanics, fluid dynamics, and thermal energy systems. Its application in these fields is multifaceted and indispensable for solving complex problems. In fluid mechanics, linear algebra is integral to understanding and solving problems related to the behavior of fluids. It assists in the modeling and simulation of fluid flow, providing essential tools for the analysis of fluid dynamics problems. For instance, linear algebraic techniques are used to solve systems of differential equations that describe fluid motion. These equations, often complex and non-linear, can be linearized using linear algebra methods, allowing for simpler solutions and analyses. In the field of fluid dynamics, linear algebra finds its application in computational fluid dynamics (CFD), a branch that uses numerical analysis and data structures to solve and analyze problems involving fluid flows. CFD relies heavily on linear algebra for the computation of fluid flow and heat transfer in various applications. For example, the Navier–Stokes equations, fundamental in fluid dynamics, are often solved using techniques derived from linear algebra. This includes the use of matrices and vectors to represent and manipulate fluid flow fields. Furthermore, linear algebra plays a crucial role in thermal energy systems, particularly in power systems analysis. It is used to model and optimize the generation, transmission, and distribution of electric power. Linear algebraic concepts such as matrix operations and eigenvalue problems are employed to enhance the efficiency, reliability, and economic performance of power systems.
The application of linear algebra in this context is vital for the design and operation of modern power systems, including renewable energy sources and smart grids. Overall, the application of linear algebra in fluid mechanics, fluid dynamics, and thermal energy systems is an example of the profound interconnection between mathematics and engineering. It provides engineers with the necessary tools to model, analyze, and solve complex problems in these domains, leading to advancements in technology and industry. == Extensions and generalizations == This section presents several related topics that do not appear generally in elementary textbooks on linear algebra but are commonly considered, in advanced mathematics, as parts of linear algebra. === Module theory === The existence of multiplicative inverses in fields is not involved in the axioms defining a vector space. One may thus replace the field of scalars by a ring R, and this gives the structure called a module over R, or R-module. The concepts of linear independence, span, basis, and linear maps (also called module homomorphisms) are defined for modules exactly as for vector spaces, with the essential difference that, if R is not a field, there are modules that do not have any basis. The modules that have a basis are the free modules, and those that are spanned by a finite set are the finitely generated modules. Module homomorphisms between finitely generated free modules may be represented by matrices. The theory of matrices over a ring is similar to that of matrices over a field, except that determinants exist only if the ring is commutative, and that a square matrix over a commutative ring is invertible only if its determinant has a multiplicative inverse in the ring (a concrete sketch of this criterion over the integers is given after this section). Vector spaces are completely characterized by their dimension (up to an isomorphism). In general, there is no such complete classification for modules, even if one restricts oneself to finitely generated modules. However, every module is a cokernel of a homomorphism of free modules. Modules over the integers can be identified with abelian groups, since the multiplication by an integer may be identified with repeated addition. Most of the theory of abelian groups may be extended to modules over a principal ideal domain. In particular, over a principal ideal domain, every submodule of a free module is free, and the fundamental theorem of finitely generated abelian groups may be extended straightforwardly to finitely generated modules over a principal ring. There are many rings for which there are algorithms for solving linear equations and systems of linear equations. However, these algorithms generally have a computational complexity that is much higher than that of similar algorithms over a field. For more details, see Linear equation over a ring. === Multilinear algebra and tensors === In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of several different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V* consisting of linear maps f : V → F where F is the field of scalars. Multilinear maps T : Vn → F can be described via tensor products of elements of V*. If, in addition to vector addition and scalar multiplication, there is a bilinear vector product V × V → V, the vector space is called an algebra; for instance, associative algebras are algebras with an associative vector product (like the algebra of square matrices, or the algebra of polynomials).
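The invertibility criterion mentioned in the module-theory paragraph above can be made concrete over the integers, where the units are exactly ±1. The following is a small sketch in plain Python for 2×2 integer matrices (the function name and the example matrices are choices made for this illustration):

```python
def inverse_over_Z(a, b, c, d):
    """Inverse of [[a, b], [c, d]] over the ring of integers Z,
    or None when the determinant is not a unit of Z (i.e. not +1 or -1)."""
    det = a * d - b * c
    if det not in (1, -1):
        return None
    # adjugate divided by det; dividing by +1 or -1 stays inside Z
    return (d // det, -b // det, -c // det, a // det)

print(inverse_over_Z(2, 1, 1, 1))  # det = 1  -> (1, -1, -1, 2)
print(inverse_over_Z(2, 0, 0, 1))  # det = 2  -> None: invertible over Q, not over Z
```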
=== Topological vector spaces === Vector spaces that are not finite-dimensional often require additional structure to be tractable. A normed vector space is a vector space along with a function called a norm, which measures the "size" of elements. The norm induces a metric, which measures the distance between elements, and induces a topology, which allows for a definition of continuous maps. The metric also allows for a definition of limits and completeness – a normed vector space that is complete is known as a Banach space. An inner-product space (a vector space equipped with a conjugate-symmetric, positive-definite sesquilinear form) that is complete for the induced metric is known as a Hilbert space, which is in some sense a particularly well-behaved Banach space. Functional analysis applies the methods of linear algebra alongside those of mathematical analysis to study various function spaces; the central objects of study in functional analysis are Lp spaces, which are Banach spaces, and especially the L2 space of square-integrable functions, which is the only Hilbert space among them. Functional analysis is of particular importance to quantum mechanics, the theory of partial differential equations, digital signal processing, and electrical engineering. It also provides the foundation and theoretical framework that underlies the Fourier transform and related methods. == See also == Fundamental matrix (computer vision) Geometric algebra Linear programming Linear regression, a statistical estimation method Numerical linear algebra Outline of linear algebra Transformation matrix == Explanatory notes == == Citations == == General and cited sources == == Further reading == === History === Fearnley-Sander, Desmond, "Hermann Grassmann and the Creation of Linear Algebra", American Mathematical Monthly 86 (1979), pp. 809–817. Grassmann, Hermann (1844), Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert, Leipzig: O. Wigand === Introductory textbooks === Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388 Bretscher, Otto (2004), Linear Algebra with Applications (3rd ed.), Prentice Hall, ISBN 978-0-13-145334-0 Farin, Gerald; Hansford, Dianne (2004), Practical Linear Algebra: A Geometry Toolbox, AK Peters, ISBN 978-1-56881-234-2 Hefferon, Jim (2020). Linear Algebra (4th ed.). Ann Arbor, Michigan: Orthogonal Publishing. ISBN 978-1-944325-11-4. OCLC 1178900366. OL 30872051M. Kolman, Bernard; Hill, David R. (2007), Elementary Linear Algebra with Applications (9th ed.), Prentice Hall, ISBN 978-0-13-229654-0 Lay, David C. (2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7 Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall, ISBN 978-0-13-185785-8 Murty, Katta G. (2014) Computational and Algorithmic Linear Algebra and n-Dimensional Geometry, World Scientific Publishing, ISBN 978-981-4366-62-5. Chapter 1: Systems of Simultaneous Linear Equations Noble, B.; Daniel, J.W. (1977), Applied Linear Algebra (2nd ed.), Pearson Higher Education, ISBN 978-0130413437.
Poole, David (2010), Linear Algebra: A Modern Introduction (3rd ed.), Cengage – Brooks/Cole, ISBN 978-0-538-73545-2 Ricardo, Henry (2010), A Modern Introduction To Linear Algebra (1st ed.), CRC Press, ISBN 978-1-4398-0040-9 Sadun, Lorenzo (2008), Applied Linear Algebra: the decoupling principle (2nd ed.), AMS, ISBN 978-0-8218-4441-0 Strang, Gilbert (2016), Introduction to Linear Algebra (5th ed.), Wellesley-Cambridge Press, ISBN 978-09802327-7-6 The Manga Guide to Linear Algebra (2012), by Shin Takahashi, Iroha Inoue and Trend-Pro Co., Ltd., ISBN 978-1-59327-413-9 === Advanced textbooks === Bhatia, Rajendra (November 15, 1996), Matrix Analysis, Graduate Texts in Mathematics, Springer, ISBN 978-0-387-94846-1 Demmel, James W. (August 1, 1997), Applied Numerical Linear Algebra, SIAM, ISBN 978-0-89871-389-3 Dym, Harry (2007), Linear Algebra in Action, AMS, ISBN 978-0-8218-3813-6 Gantmacher, Felix R. (2005), Applications of the Theory of Matrices, Dover Publications, ISBN 978-0-486-44554-0 Gantmacher, Felix R. (1990), Matrix Theory Vol. 1 (2nd ed.), American Mathematical Society, ISBN 978-0-8218-1376-8 Gantmacher, Felix R. (2000), Matrix Theory Vol. 2 (2nd ed.), American Mathematical Society, ISBN 978-0-8218-2664-5 Gelfand, Israel M. (1989), Lectures on Linear Algebra, Dover Publications, ISBN 978-0-486-66082-0 Glazman, I. M.; Ljubic, Ju. I. (2006), Finite-Dimensional Linear Analysis, Dover Publications, ISBN 978-0-486-45332-3 Golan, Johnathan S. (January 2007), The Linear Algebra a Beginning Graduate Student Ought to Know (2nd ed.), Springer, ISBN 978-1-4020-5494-5 Golan, Johnathan S. (August 1995), Foundations of Linear Algebra, Kluwer, ISBN 0-7923-3614-3 Greub, Werner H. (October 16, 1981), Linear Algebra, Graduate Texts in Mathematics (4th ed.), Springer, ISBN 978-0-8018-5414-9 Hoffman, Kenneth; Kunze, Ray (1971), Linear algebra (2nd ed.), Englewood Cliffs, N.J.: Prentice-Hall, Inc., MR 0276251 Halmos, Paul R. (August 20, 1993), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-90093-3 Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (September 7, 2018), Linear Algebra (5th ed.), Pearson, ISBN 978-0-13-486024-4 Horn, Roger A.; Johnson, Charles R. (February 23, 1990), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6 Horn, Roger A.; Johnson, Charles R. (June 24, 1994), Topics in Matrix Analysis, Cambridge University Press, ISBN 978-0-521-46713-1 Lang, Serge (March 9, 2004), Linear Algebra, Undergraduate Texts in Mathematics (3rd ed.), Springer, ISBN 978-0-387-96412-6 Marcus, Marvin; Minc, Henryk (2010), A Survey of Matrix Theory and Matrix Inequalities, Dover Publications, ISBN 978-0-486-67102-4 Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on October 31, 2009 Mirsky, L. (1990), An Introduction to Linear Algebra, Dover Publications, ISBN 978-0-486-66434-7 Shafarevich, I. R.; Remizov, A. O (2012), Linear Algebra and Geometry, Springer, ISBN 978-3-642-30993-9 Shilov, Georgi E. (June 1, 1977), Linear algebra, Dover Publications, ISBN 978-0-486-63518-7 Shores, Thomas S. 
(December 6, 2006), Applied Linear Algebra and Matrix Analysis, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-33194-2 Smith, Larry (May 28, 1998), Linear Algebra, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-98455-1 Trefethen, Lloyd N.; Bau, David (1997), Numerical Linear Algebra, SIAM, ISBN 978-0-898-71361-9 === Study guides and outlines === Leduc, Steven A. (May 1, 1996), Linear Algebra (Cliffs Quick Review), Cliffs Notes, ISBN 978-0-8220-5331-6 Lipschutz, Seymour; Lipson, Marc (December 6, 2000), Schaum's Outline of Linear Algebra (3rd ed.), McGraw-Hill, ISBN 978-0-07-136200-9 Lipschutz, Seymour (January 1, 1989), 3,000 Solved Problems in Linear Algebra, McGraw–Hill, ISBN 978-0-07-038023-3 McMahon, David (October 28, 2005), Linear Algebra Demystified, McGraw–Hill Professional, ISBN 978-0-07-146579-3 Zhang, Fuzhen (April 7, 2009), Linear Algebra: Challenging Problems for Students, The Johns Hopkins University Press, ISBN 978-0-8018-9125-0 == External links == === Online Resources === MIT Linear Algebra Video Lectures, a series of 34 recorded lectures by Professor Gilbert Strang (Spring 2010) International Linear Algebra Society "Linear algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Linear Algebra on MathWorld Matrix and Linear Algebra Terms on Earliest Known Uses of Some of the Words of Mathematics Earliest Uses of Symbols for Matrices and Vectors on Earliest Uses of Various Mathematical Symbols Essence of linear algebra, a video presentation from 3Blue1Brown of the basics of linear algebra, with emphasis on the relationship between the geometric, the matrix and the abstract points of view === Online books === Beezer, Robert A. (2009) [2004]. A First Course in Linear Algebra. Gainesville, Florida: University Press of Florida. ISBN 9781616100049. Connell, Edwin H. (2004) [1999]. Elements of Abstract and Linear Algebra. University of Miami, Coral Gables, Florida: Self-published. Hefferon, Jim (2020). Linear Algebra (4th ed.). Ann Arbor, Michigan: Orthogonal Publishing. ISBN 978-1-944325-11-4. OCLC 1178900366. OL 30872051M. Margalit, Dan; Rabinoff, Joseph (2019). Interactive Linear Algebra. Georgia Institute of Technology, Atlanta, Georgia: Self-published. Matthews, Keith R. (2013) [1991]. Elementary Linear Algebra. University of Queensland, Brisbane, Australia: Self-published. Mikaelian, Vahagn H. (2020) [2017]. Linear Algebra: Theory and Algorithms. Yerevan, Armenia: Self-published – via ResearchGate. Sharipov, Ruslan, Course of linear algebra and multidimensional geometry Treil, Sergei, Linear Algebra Done Wrong
Wikipedia:Linear combination#0
In mathematics, a linear combination or superposition is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants). The concept of linear combinations is central to linear algebra and related fields of mathematics. Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article. == Definition == Let V be a vector space over the field K. As usual, we call elements of V vectors and call elements of K scalars. If v1,...,vn are vectors and a1,...,an are scalars, then the linear combination of those vectors with those scalars as coefficients is a 1 v 1 + a 2 v 2 + a 3 v 3 + ⋯ + a n v n . {\displaystyle a_{1}\mathbf {v} _{1}+a_{2}\mathbf {v} _{2}+a_{3}\mathbf {v} _{3}+\cdots +a_{n}\mathbf {v} _{n}.} There is some ambiguity in the use of the term "linear combination" as to whether it refers to the expression or to its value. In most cases the value is emphasized, as in the assertion "the set of all linear combinations of v1,...,vn always forms a subspace". However, one could also say "two different linear combinations can have the same value", in which case the reference is to the expression. The subtle difference between these uses is the essence of the notion of linear dependence: a family F of vectors is linearly independent precisely if any linear combination of the vectors in F (as value) is uniquely so (as expression). In any case, even when viewed as expressions, all that matters about a linear combination is the coefficient of each vi; trivial modifications such as permuting the terms or adding terms with zero coefficient do not produce distinct linear combinations. In a given situation, K and V may be specified explicitly, or they may be obvious from context. In that case, we often speak of a linear combination of the vectors v1,...,vn, with the coefficients unspecified (except that they must belong to K). Or, if S is a subset of V, we may speak of a linear combination of vectors in S, where both the coefficients and the vectors are unspecified, except that the vectors must belong to the set S (and the coefficients must belong to K). Finally, we may speak simply of a linear combination, where nothing is specified (except that the vectors must belong to V and the coefficients must belong to K); in this case one is probably referring to the expression, since every vector in V is certainly the value of some linear combination. Note that by definition, a linear combination involves only finitely many vectors (except as described in the § Generalizations section). However, the set S that the vectors are taken from (if one is mentioned) can still be infinite; each individual linear combination will only involve finitely many vectors. Also, there is no reason that n cannot be zero; in that case, we declare by convention that the result of the linear combination is the zero vector in V. == Examples and counterexamples == === Euclidean vectors === Let the field K be the set R of real numbers, and let the vector space V be the Euclidean space R3. Consider the vectors e1 = (1,0,0), e2 = (0,1,0) and e3 = (0,0,1). Then any vector in R3 is a linear combination of e1, e2, and e3.
To see that this is so, take an arbitrary vector (a1,a2,a3) in R3, and write: ( a 1 , a 2 , a 3 ) = ( a 1 , 0 , 0 ) + ( 0 , a 2 , 0 ) + ( 0 , 0 , a 3 ) = a 1 ( 1 , 0 , 0 ) + a 2 ( 0 , 1 , 0 ) + a 3 ( 0 , 0 , 1 ) = a 1 e 1 + a 2 e 2 + a 3 e 3 . {\displaystyle {\begin{aligned}(a_{1},a_{2},a_{3})&=(a_{1},0,0)+(0,a_{2},0)+(0,0,a_{3})\\[6pt]&=a_{1}(1,0,0)+a_{2}(0,1,0)+a_{3}(0,0,1)\\[6pt]&=a_{1}\mathbf {e} _{1}+a_{2}\mathbf {e} _{2}+a_{3}\mathbf {e} _{3}.\end{aligned}}} === Functions === Let K be the set C of all complex numbers, and let V be the set CC(R) of all continuous functions from the real line R to the complex plane C. Consider the vectors (functions) f and g defined by f(t) := eit and g(t) := e−it. (Here, e is the base of the natural logarithm, about 2.71828..., and i is the imaginary unit, a square root of −1.) Some linear combinations of f and g are: cos ⁡ t = 1 2 e i t + 1 2 e − i t {\displaystyle \cos t={\tfrac {1}{2}}e^{it}+{\tfrac {1}{2}}e^{-it}} and 2 sin ⁡ t = − i e i t + i e − i t . {\displaystyle 2\sin t=-ie^{it}+ie^{-it}.} On the other hand, the constant function 3 is not a linear combination of f and g. To see this, suppose that 3 could be written as a linear combination of eit and e−it. This means that there would exist complex scalars a and b such that aeit + be−it = 3 for all real numbers t. Setting t = 0 and t = π gives the equations a + b = 3 and a + b = −3, and clearly this cannot happen. See Euler's identity. === Polynomials === Let K be R, C, or any field, and let V be the set P of all polynomials with coefficients taken from the field K. Consider the vectors (polynomials) p1 := 1, p2 := x + 1, and p3 := x2 + x + 1. Is the polynomial x2 − 1 a linear combination of p1, p2, and p3? To find out, consider an arbitrary linear combination of these vectors and try to see when it equals the desired vector x2 − 1. Picking arbitrary coefficients a1, a2, and a3, we want a 1 ( 1 ) + a 2 ( x + 1 ) + a 3 ( x 2 + x + 1 ) = x 2 − 1. {\displaystyle a_{1}(1)+a_{2}(x+1)+a_{3}(x^{2}+x+1)=x^{2}-1.} Multiplying the polynomials out, this means ( a 1 ) + ( a 2 x + a 2 ) + ( a 3 x 2 + a 3 x + a 3 ) = x 2 − 1 {\displaystyle (a_{1})+(a_{2}x+a_{2})+(a_{3}x^{2}+a_{3}x+a_{3})=x^{2}-1} and collecting like powers of x, we get a 3 x 2 + ( a 2 + a 3 ) x + ( a 1 + a 2 + a 3 ) = 1 x 2 + 0 x + ( − 1 ) . {\displaystyle a_{3}x^{2}+(a_{2}+a_{3})x+(a_{1}+a_{2}+a_{3})=1x^{2}+0x+(-1).} Two polynomials are equal if and only if their corresponding coefficients are equal, so we can conclude a 3 = 1 , a 2 + a 3 = 0 , a 1 + a 2 + a 3 = − 1. {\displaystyle a_{3}=1,\quad a_{2}+a_{3}=0,\quad a_{1}+a_{2}+a_{3}=-1.} This system of linear equations can easily be solved. The first equation simply says that a3 is 1. Knowing that, we can solve the second equation for a2, which comes out to −1. Finally, the last equation tells us that a1 is also −1. Therefore, the only possible way to get a linear combination is with these coefficients. Indeed, x 2 − 1 = − 1 − ( x + 1 ) + ( x 2 + x + 1 ) = − p 1 − p 2 + p 3 {\displaystyle x^{2}-1=-1-(x+1)+(x^{2}+x+1)=-p_{1}-p_{2}+p_{3}} so x2 − 1 is a linear combination of p1, p2, and p3. On the other hand, what about the polynomial x3 − 1? If we try to make this vector a linear combination of p1, p2, and p3, then following the same process as before, we get the equation 0 x 3 + a 3 x 2 + ( a 2 + a 3 ) x + ( a 1 + a 2 + a 3 ) = 1 x 3 + 0 x 2 + 0 x + ( − 1 ) . {\displaystyle {\begin{aligned}&0x^{3}+a_{3}x^{2}+(a_{2}+a_{3})x+(a_{1}+a_{2}+a_{3})\\[5pt]={}&1x^{3}+0x^{2}+0x+(-1).\end{aligned}}} However, when we set corresponding coefficients equal in this case, the equation for x3 is 0 = 1 {\displaystyle 0=1} which is always false.
Therefore, there is no way for this to work, and x3 − 1 is not a linear combination of p1, p2, and p3. == The linear span == Take an arbitrary field K, an arbitrary vector space V, and let v1,...,vn be vectors (in V). It is interesting to consider the set of all linear combinations of these vectors. This set is called the linear span (or just span) of the vectors, say S = {v1, ..., vn}. We write the span of S as span(S) or sp(S): span ⁡ ( v 1 , … , v n ) := { a 1 v 1 + ⋯ + a n v n : a 1 , … , a n ∈ K } . {\displaystyle \operatorname {span} (\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}):=\{a_{1}\mathbf {v} _{1}+\cdots +a_{n}\mathbf {v} _{n}:a_{1},\ldots ,a_{n}\in K\}.} == Linear independence == Suppose that, for some set of vectors v1,...,vn, a single vector can be written in two different ways as a linear combination of them: v = ∑ i a i v i = ∑ i b i v i where a i ≠ b i . {\displaystyle \mathbf {v} =\sum _{i}a_{i}\mathbf {v} _{i}=\sum _{i}b_{i}\mathbf {v} _{i}{\text{ where }}a_{i}\neq b_{i}.} Subtracting these (writing c i := a i − b i {\displaystyle c_{i}:=a_{i}-b_{i}} ), this is equivalent to saying that a nontrivial linear combination is zero: 0 = ∑ i c i v i . {\displaystyle \mathbf {0} =\sum _{i}c_{i}\mathbf {v} _{i}.} If that is possible, then v1,...,vn are called linearly dependent; otherwise, they are linearly independent. Similarly, we can speak of linear dependence or independence of an arbitrary set S of vectors. If S is linearly independent and the span of S equals V, then S is a basis for V. == Affine, conical, and convex combinations == By restricting the coefficients used in linear combinations, one can define the related concepts of affine combination, conical combination, and convex combination, and the associated notions of sets closed under these operations. Because these are more restricted operations, more subsets will be closed under them, so affine subsets, convex cones, and convex sets are generalizations of vector subspaces: a vector subspace is also an affine subspace, a convex cone, and a convex set, but a convex set need not be a vector subspace, an affine subspace, or a convex cone. These concepts often arise when one can take certain linear combinations of objects, but not any: for example, probability distributions are closed under convex combination (they form a convex set), but not conical or affine combinations (or linear), and positive measures are closed under conical combination but not affine or linear – hence one defines signed measures as the linear closure. Linear and affine combinations can be defined over any field (or ring), but conical and convex combinations require a notion of "positive", and hence can only be defined over an ordered field (or ordered ring), generally the real numbers. If one allows only scalar multiplication, not addition, one obtains a (not necessarily convex) cone; one often restricts the definition to only allowing multiplication by positive scalars. All of these concepts are usually defined as subsets of an ambient vector space (except for affine spaces, which are also considered as "vector spaces forgetting the origin"), rather than being axiomatized independently.
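The coefficient-matching computation in the polynomial example above is itself a small linear system, and can be checked numerically. A sketch assuming Python with NumPy, writing polynomials as coordinate vectors with respect to the basis (1, x, x²):

```python
import numpy as np

# Columns are p1 = 1, p2 = x + 1, p3 = x^2 + x + 1 in coordinates (1, x, x^2).
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
target = np.array([-1.0, 0.0, 1.0])   # x^2 - 1

a = np.linalg.solve(P, target)
print(a)  # [-1. -1.  1.] : x^2 - 1 = -p1 - p2 + p3, as found above
```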
== Operad theory == More abstractly, in the language of operad theory, one can consider vector spaces to be algebras over the operad R ∞ {\displaystyle \mathbf {R} ^{\infty }} (the infinite direct sum, so only finitely many terms are non-zero; this corresponds to only taking finite sums), which parametrizes linear combinations: the vector ( 2 , 3 , − 5 , 0 , … ) {\displaystyle (2,3,-5,0,\dots )} for instance corresponds to the linear combination 2 v 1 + 3 v 2 − 5 v 3 + 0 v 4 + ⋯ {\displaystyle 2\mathbf {v} _{1}+3\mathbf {v} _{2}-5\mathbf {v} _{3}+0\mathbf {v} _{4}+\cdots } . Similarly, one can consider affine combinations, conical combinations, and convex combinations to correspond to the sub-operads where the terms sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex. This formalizes what is meant by saying that R n {\displaystyle \mathbf {R} ^{n}} , or the standard simplex, is a model space, and such observations as that every bounded convex polytope is the image of a simplex. Here suboperads correspond to more restricted operations and thus more general theories. From this point of view, we can think of linear combinations as the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations. The basic operations of addition and scalar multiplication, together with the existence of an additive identity and additive inverses, cannot be combined in any more complicated way than the generic linear combination: the basic operations are a generating set for the operad of all linear combinations. Ultimately, this fact lies at the heart of the usefulness of linear combinations in the study of vector spaces. == Generalizations == If V is a topological vector space, then there may be a way to make sense of certain infinite linear combinations, using the topology of V. For example, we might be able to speak of a1v1 + a2v2 + a3v3 + ⋯, going on forever. Such infinite linear combinations do not always make sense; we call them convergent when they do. Allowing more linear combinations in this case can also lead to a different concept of span, linear independence, and basis. The articles on the various flavors of topological vector spaces go into more detail about these. If K is a commutative ring instead of a field, then everything that has been said above about linear combinations generalizes to this case without change. The only difference is that we call spaces like this V modules instead of vector spaces. If K is a noncommutative ring, then the concept still generalizes, with one caveat: since modules over noncommutative rings come in left and right versions, our linear combinations may also come in either of these versions, whatever is appropriate for the given module. This is simply a matter of doing scalar multiplication on the correct side. A more complicated twist comes when V is a bimodule over two rings, KL and KR. In that case, the most general linear combination looks like a 1 v 1 b 1 + ⋯ + a n v n b n {\displaystyle a_{1}\mathbf {v} _{1}b_{1}+\cdots +a_{n}\mathbf {v} _{n}b_{n}} where a1,...,an belong to KL, b1,...,bn belong to KR, and v1,…,vn belong to V. == See also == Weighted sum == Citations == == References == === Textbooks === Axler, Sheldon Jay (2015). Linear Algebra Done Right.
Undergraduate Texts in Mathematics (3rd ed.). Springer. doi:10.1007/978-3-319-11080-6. ISBN 978-3-319-11079-0. Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9. Lay, David C.; Lay, Steven R.; McDonald, Judi J. (2016). Linear Algebra and its Applications (5th ed.). Pearson. ISBN 978-0-321-98238-4. Strang, Gilbert (2016). Introduction to Linear Algebra (5th ed.). Wellesley Cambridge Press. ISBN 978-0-9802327-7-6. === Web === "Linear Combinations". nLab. 27 October 2015. Retrieved 16 Feb 2021. == External links == Linear Combinations and Span: Understanding linear combinations and spans of vectors, khanacademy.org.
Wikipedia:Linear complementarity problem#0
In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968. == Formulation == Given a real matrix M and vector q, the linear complementarity problem LCP(q, M) seeks vectors z and w which satisfy the following constraints: w , z ⩾ 0 , {\displaystyle w,z\geqslant 0,} (that is, each component of these two vectors is non-negative) z T w = 0 {\displaystyle z^{T}w=0} or equivalently ∑ i w i z i = 0. {\displaystyle \sum \nolimits _{i}w_{i}z_{i}=0.} This is the complementarity condition, since it implies that, for all i {\displaystyle i} , at most one of w i {\displaystyle w_{i}} and z i {\displaystyle z_{i}} can be positive. w = M z + q {\displaystyle w=Mz+q} A sufficient condition for existence and uniqueness of a solution to this problem is that M be symmetric positive-definite. If M is such that LCP(q, M) has a solution for every q, then M is a Q-matrix. If M is such that LCP(q, M) has a unique solution for every q, then M is a P-matrix. Both of these characterizations are necessary and sufficient. The vector w is a slack variable, and so is generally discarded after z is found. As such, the problem can also be formulated as: M z + q ⩾ 0 {\displaystyle Mz+q\geqslant 0} z ⩾ 0 {\displaystyle z\geqslant 0} z T ( M z + q ) = 0 {\displaystyle z^{\mathrm {T} }(Mz+q)=0} (the complementarity condition) == Convex quadratic-minimization: Minimum conditions == Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function f ( z ) = z T ( M z + q ) {\displaystyle f(z)=z^{T}(Mz+q)} subject to the constraints M z + q ⩾ 0 {\displaystyle {Mz}+q\geqslant 0} z ⩾ 0 {\displaystyle z\geqslant 0} These constraints ensure that f is always non-negative. The minimum of f is 0 at z if and only if z solves the linear complementarity problem. If M is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of the simplex algorithm of Dantzig, have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice. Also, a quadratic-programming problem stated as minimize f ( x ) = c T x + 1 2 x T Q x {\displaystyle f(x)=c^{T}x+{\tfrac {1}{2}}x^{T}Qx} subject to A x ⩾ b {\displaystyle Ax\geqslant b} as well as x ⩾ 0 {\displaystyle x\geqslant 0} with Q symmetric is the same as solving the LCP with q = [ c − b ] , M = [ Q − A T A 0 ] {\displaystyle q={\begin{bmatrix}c\\-b\end{bmatrix}},\qquad M={\begin{bmatrix}Q&-A^{T}\\A&0\end{bmatrix}}} This is because the Karush–Kuhn–Tucker conditions of the QP problem can be written as: { v = Q x − A T λ + c s = A x − b x , λ , v , s ⩾ 0 x T v + λ T s = 0 {\displaystyle {\begin{cases}v=Qx-A^{T}{\lambda }+c\\s=Ax-b\\x,{\lambda },v,s\geqslant 0\\x^{T}v+{\lambda }^{T}s=0\end{cases}}} with v the Lagrange multipliers on the non-negativity constraints, λ the multipliers on the inequality constraints, and s the slack variables for the inequality constraints. The fourth condition derives from the complementarity of each group of variables (x, s) with its set of KKT vectors (optimal Lagrange multipliers) being (v, λ).
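Before continuing the reduction, here is a brute-force numerical sketch of the basic formulation (assuming Python with NumPy; solve_lcp_bruteforce is a name introduced for this illustration). It enumerates index sets S, forces w to vanish on S and z to vanish off S, and keeps the first sign-feasible candidate; this is exponential in n and only meant to make the complementarity condition concrete:

```python
import itertools
import numpy as np

def solve_lcp_bruteforce(M, q, tol=1e-9):
    """Try every complementary pattern: w[i] = 0 for i in S, z[i] = 0 otherwise."""
    n = len(q)
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            S = list(S)
            z = np.zeros(n)
            if S:
                try:
                    z[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
                except np.linalg.LinAlgError:
                    continue
            w = M @ z + q
            if (z >= -tol).all() and (w >= -tol).all():
                return z, w
    return None

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # symmetric positive definite
q = np.array([-5.0, -6.0])
z, w = solve_lcp_bruteforce(M, q)
print(z, w, z @ w)                # z = [4/3, 7/3], w = [0, 0], z.w = 0
```

Practical methods such as Lemke's algorithm pivot through such patterns selectively instead of enumerating them. Returning to the quadratic-programming reduction above: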
In that case, z = [ x λ ] , w = [ v s ] {\displaystyle z={\begin{bmatrix}x\\\lambda \end{bmatrix}},\qquad w={\begin{bmatrix}v\\s\end{bmatrix}}} If the non-negativity constraint on x is relaxed, the dimensionality of the LCP problem can be reduced to the number of the inequalities, as long as Q is non-singular (which is guaranteed if it is positive definite). The multipliers v are no longer present, and the first KKT conditions can be rewritten as: Q x = A T λ − c {\displaystyle Qx=A^{T}{\lambda }-c} or: x = Q − 1 ( A T λ − c ) {\displaystyle x=Q^{-1}(A^{T}{\lambda }-c)} Pre-multiplying both sides by A and subtracting b, we obtain: A x − b = A Q − 1 ( A T λ − c ) − b {\displaystyle Ax-b=AQ^{-1}(A^{T}{\lambda }-c)-b\,} The left side, due to the second KKT condition, is s. Substituting and reordering: s = ( A Q − 1 A T ) λ + ( − A Q − 1 c − b ) {\displaystyle s=(AQ^{-1}A^{T}){\lambda }+(-AQ^{-1}c-b)\,} Now defining M := ( A Q − 1 A T ) q := ( − A Q − 1 c − b ) {\displaystyle {\begin{aligned}M&:=(AQ^{-1}A^{T})\\q&:=(-AQ^{-1}c-b)\end{aligned}}} we have an LCP, due to the relation of complementarity between the slack variables s and their Lagrange multipliers λ. Once we solve it, we may obtain the value of x from λ through the first KKT condition. Finally, it is also possible to handle additional equality constraints: A e q x = b e q {\displaystyle A_{eq}x=b_{eq}} This introduces a vector of Lagrange multipliers μ, with the same dimension as b e q {\displaystyle b_{eq}} . It is easy to verify that the M and q for the LCP system s = M λ + q {\displaystyle s=M{\lambda }+q} are now expressed as: M := [ A 0 ] [ Q A e q T − A e q 0 ] − 1 [ A T 0 ] q := − [ A 0 ] [ Q A e q T − A e q 0 ] − 1 [ c b e q ] − b {\displaystyle {\begin{aligned}M&:={\begin{bmatrix}A&0\end{bmatrix}}{\begin{bmatrix}Q&A_{eq}^{T}\\-A_{eq}&0\end{bmatrix}}^{-1}{\begin{bmatrix}A^{T}\\0\end{bmatrix}}\\q&:=-{\begin{bmatrix}A&0\end{bmatrix}}{\begin{bmatrix}Q&A_{eq}^{T}\\-A_{eq}&0\end{bmatrix}}^{-1}{\begin{bmatrix}c\\b_{eq}\end{bmatrix}}-b\end{aligned}}} From λ we can now recover the values of both x and the Lagrange multiplier of equalities μ: [ x μ ] = [ Q A e q T − A e q 0 ] − 1 [ A T λ − c − b e q ] {\displaystyle {\begin{bmatrix}x\\\mu \end{bmatrix}}={\begin{bmatrix}Q&A_{eq}^{T}\\-A_{eq}&0\end{bmatrix}}^{-1}{\begin{bmatrix}A^{T}\lambda -c\\-b_{eq}\end{bmatrix}}} In fact, most QP solvers work on the LCP formulation, including the interior point method, principal / complementarity pivoting, and active set methods. LCP problems can also be solved by the criss-cross algorithm; conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix. A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix (a matrix whose principal minors are all positive). Such LCPs can be solved when they are formulated abstractly using oriented-matroid theory. == See also == Complementarity theory Physics engine Impulse/constraint type physics engines for games use this approach. Contact dynamics Contact dynamics with the nonsmooth approach. Bimatrix games can be reduced to LCP. == Notes == == References == Björner, Anders; Las Vergnas, Michel; Sturmfels, Bernd; White, Neil; Ziegler, Günter (1999). "10 Linear programming". Oriented Matroids. Cambridge University Press. pp. 417–479. doi:10.1017/CBO9780511586507. ISBN 978-0-521-77750-6. MR 1744046. Cottle, R. W.; Dantzig, G. B. (1968). "Complementary pivot theory of mathematical programming".
Linear Algebra and Its Applications. 1: 103–125. doi:10.1016/0024-3795(68)90052-9. Cottle, Richard W.; Pang, Jong-Shi; Stone, Richard E. (1992). The linear complementarity problem. Computer Science and Scientific Computing. Boston, MA: Academic Press, Inc. pp. xxiv+762 pp. ISBN 978-0-12-192350-1. MR 1150683. Cottle, R. W.; Pang, J.-S.; Venkateswaran, V. (March–April 1989). "Sufficient matrices and the linear complementarity problem". Linear Algebra and Its Applications. 114–115: 231–249. doi:10.1016/0024-3795(89)90463-1. MR 0986877. Csizmadia, Zsolt; Illés, Tibor (2006). "New criss-cross type algorithms for linear complementarity problems with sufficient matrices" (PDF). Optimization Methods and Software. 21 (2): 247–266. doi:10.1080/10556780500095009. S2CID 24418835. Fukuda, Komei; Namiki, Makoto (March 1994). "On extremal behaviors of Murty's least index method". Mathematical Programming. 64 (1): 365–370. doi:10.1007/BF01582581. MR 1286455. S2CID 21476636. Fukuda, Komei; Terlaky, Tamás (1997). Thomas M. Liebling; Dominique de Werra (eds.). "Criss-cross methods: A fresh view on pivot algorithms". Mathematical Programming, Series B. Papers from the 16th International Symposium on Mathematical Programming held in Lausanne, 1997. 79 (1–3): 369–395. CiteSeerX 10.1.1.36.9373. doi:10.1007/BF02614325. MR 1464775. S2CID 2794181. Postscript preprint. den Hertog, D.; Roos, C.; Terlaky, T. (1 July 1993). "The linear complementarity problem, sufficient matrices, and the criss-cross method" (PDF). Linear Algebra and Its Applications. 187: 1–14. doi:10.1016/0024-3795(93)90124-7. Murty, Katta G. (January 1972). "On the number of solutions to the complementarity problem and spanning properties of complementary cones" (PDF). Linear Algebra and Its Applications. 5 (1): 65–108. doi:10.1016/0024-3795(72)90019-5. hdl:2027.42/34188. Murty, K. G. (1988). Linear complementarity, linear and nonlinear programming. Sigma Series in Applied Mathematics. Vol. 3. Berlin: Heldermann Verlag. ISBN 978-3-88538-403-8. MR 0949214. Updated and free PDF version at Katta G. Murty's website. Archived from the original on 2010-04-01. Taylor, Joshua Adam (2015). Convex Optimization of Power Systems. Cambridge University Press. ISBN 9781107076877. Terlaky, Tamás; Zhang, Shu Zhong (1993). "Pivot rules for linear programming: A Survey on recent theoretical developments". Annals of Operations Research. Degeneracy in optimization problems. 46–47 (1): 203–233. CiteSeerX 10.1.1.36.7658. doi:10.1007/BF02096264. ISSN 0254-5330. MR 1260019. S2CID 6058077. Todd, Michael J. (1985). "Linear and quadratic programming in oriented matroids". Journal of Combinatorial Theory. Series B. 39 (2): 105–133. doi:10.1016/0095-8956(85)90042-5. MR 0811116. == Further reading == R. Chandrasekaran. "Bimatrix games" (PDF). pp. 5–7. Retrieved 18 December 2015. == External links == LCPSolve — A simple procedure in GAUSS to solve a linear complementarity problem Siconos/Numerics open-source GPL implementation in C of Lemke's algorithm and other methods to solve LCPs and MLCPs
Wikipedia:Linear equation#0
In mathematics, a linear equation is an equation that may be put in the form a 1 x 1 + … + a n x n + b = 0 , {\displaystyle a_{1}x_{1}+\ldots +a_{n}x_{n}+b=0,} where x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} are the variables (or unknowns), and b , a 1 , … , a n {\displaystyle b,a_{1},\ldots ,a_{n}} are the coefficients, which are often real numbers. The coefficients may be considered as parameters of the equation and may be arbitrary expressions, provided they do not contain any of the variables. To yield a meaningful equation, the coefficients a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} are required to not all be zero. Alternatively, a linear equation can be obtained by equating to zero a linear polynomial over some field, from which the coefficients are taken. The solutions of such an equation are the values that, when substituted for the unknowns, make the equality true. In the case of just one variable, there is exactly one solution (provided that a 1 ≠ 0 {\displaystyle a_{1}\neq 0} ). Often, the term linear equation refers implicitly to this particular case, in which the variable is sensibly called the unknown. In the case of two variables, each solution may be interpreted as the Cartesian coordinates of a point of the Euclidean plane. The solutions of a linear equation form a line in the Euclidean plane, and, conversely, every line can be viewed as the set of all solutions of a linear equation in two variables. This is the origin of the term linear for describing this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (an affine subspace of dimension n − 1) in the Euclidean space of dimension n. Linear equations occur frequently throughout mathematics and its applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations. This article considers the case of a single equation with coefficients from the field of real numbers, for which one studies the real solutions. All of its content applies to complex solutions and, more generally, to linear equations with coefficients and solutions in any field. For the case of several simultaneous linear equations, see system of linear equations. == One variable == A linear equation in one variable x can be written as a x + b = 0 , {\displaystyle ax+b=0,} with a ≠ 0 {\displaystyle a\neq 0} . The solution is x = − b a {\displaystyle x=-{\frac {b}{a}}} . == Two variables == A linear equation in two variables x and y can be written as a x + b y + c = 0 , {\displaystyle ax+by+c=0,} where a and b are not both 0. If a and b are real numbers, it has infinitely many solutions. === Linear function === If b ≠ 0, the equation a x + b y + c = 0 {\displaystyle ax+by+c=0} is a linear equation in the single variable y for every value of x. It therefore has a unique solution for y, which is given by y = − a b x − c b . {\displaystyle y=-{\frac {a}{b}}x-{\frac {c}{b}}.} This defines a function. The graph of this function is a line with slope − a b {\displaystyle -{\frac {a}{b}}} and y-intercept − c b . {\displaystyle -{\frac {c}{b}}.} The functions whose graph is a line are generally called linear functions in the context of calculus. However, in linear algebra, a linear function is a function that maps a sum to the sum of the images of the summands, and a scalar multiple of an argument to the same scalar multiple of its image. So, for this definition, the above function is linear only when c = 0, that is when the line passes through the origin.
To avoid confusion, the functions whose graph is an arbitrary line are often called affine functions, and the linear functions such that c = 0 are often called linear maps. === Geometric interpretation === Each solution (x, y) of a linear equation a x + b y + c = 0 {\displaystyle ax+by+c=0} may be viewed as the Cartesian coordinates of a point in the Euclidean plane. With this interpretation, all solutions of the equation form a line, provided that a and b are not both zero. Conversely, every line is the set of all solutions of a linear equation. The phrase "linear equation" takes its origin in this correspondence between lines and equations: a linear equation in two variables is an equation whose solutions form a line. If b ≠ 0, the line is the graph of the function of x that has been defined in the preceding section. If b = 0, the line is a vertical line (that is a line parallel to the y-axis) of equation x = − c a , {\displaystyle x=-{\frac {c}{a}},} which is not the graph of a function of x. Similarly, if a ≠ 0, the line is the graph of a function of y, and, if a = 0, one has a horizontal line of equation y = − c b . {\displaystyle y=-{\frac {c}{b}}.} === Equation of a line === There are various ways of defining a line. In the following subsections, a linear equation of the line is given in each case. ==== Slope–intercept form or gradient–intercept form ==== A non-vertical line can be defined by its slope m, and its y-intercept y0 (the y coordinate of its intersection with the y-axis). In this case, its linear equation can be written y = m x + y 0 . {\displaystyle y=mx+y_{0}.} If, moreover, the line is not horizontal, it can be defined by its slope and its x-intercept x0. In this case, its equation can be written y = m ( x − x 0 ) , {\displaystyle y=m(x-x_{0}),} or, equivalently, y = m x − m x 0 . {\displaystyle y=mx-mx_{0}.} These forms rely on the habit of considering a non-vertical line as the graph of a function. For a line given by an equation a x + b y + c = 0 , {\displaystyle ax+by+c=0,} these forms can be easily deduced from the relations m = − a b , x 0 = − c a , y 0 = − c b . {\displaystyle {\begin{aligned}m&=-{\frac {a}{b}},\\x_{0}&=-{\frac {c}{a}},\\y_{0}&=-{\frac {c}{b}}.\end{aligned}}} ==== Point–slope form or point–gradient form ==== A non-vertical line can be defined by its slope m, and the coordinates x 1 , y 1 {\displaystyle x_{1},y_{1}} of any point of the line. In this case, a linear equation of the line is y = y 1 + m ( x − x 1 ) , {\displaystyle y=y_{1}+m(x-x_{1}),} or y = m x + y 1 − m x 1 . {\displaystyle y=mx+y_{1}-mx_{1}.} This equation can also be written y − y 1 = m ( x − x 1 ) {\displaystyle y-y_{1}=m(x-x_{1})} to emphasize that the slope of a line can be computed from the coordinates of any two points. ==== Intercept form ==== A line that is not parallel to an axis and does not pass through the origin cuts the axes at two different points. The intercept values x0 and y0 of these two points are nonzero, and an equation of the line is x x 0 + y y 0 = 1. {\displaystyle {\frac {x}{x_{0}}}+{\frac {y}{y_{0}}}=1.} (It is easy to verify that the line defined by this equation has x0 and y0 as intercept values). ==== Two-point form ==== Given two different points (x1, y1) and (x2, y2), there is exactly one line that passes through them. There are several ways to write a linear equation of this line. If x1 ≠ x2, the slope of the line is y 2 − y 1 x 2 − x 1 .
{\displaystyle {\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}.} Thus, a point-slope form is y − y 1 = y 2 − y 1 x 2 − x 1 ( x − x 1 ) . {\displaystyle y-y_{1}={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}(x-x_{1}).} By clearing denominators, one gets the equation ( x 2 − x 1 ) ( y − y 1 ) − ( y 2 − y 1 ) ( x − x 1 ) = 0 , {\displaystyle (x_{2}-x_{1})(y-y_{1})-(y_{2}-y_{1})(x-x_{1})=0,} which is valid also when x1 = x2 (to verify this, it suffices to verify that the two given points satisfy the equation). This form is not symmetric in the two given points, but a symmetric form can be obtained by regrouping the constant terms: ( y 1 − y 2 ) x + ( x 2 − x 1 ) y + ( x 1 y 2 − x 2 y 1 ) = 0 {\displaystyle (y_{1}-y_{2})x+(x_{2}-x_{1})y+(x_{1}y_{2}-x_{2}y_{1})=0} (exchanging the two points changes the sign of the left-hand side of the equation). ==== Determinant form ==== The two-point form of the equation of a line can be expressed simply in terms of a determinant. There are two common ways to do this. The equation ( x 2 − x 1 ) ( y − y 1 ) − ( y 2 − y 1 ) ( x − x 1 ) = 0 {\displaystyle (x_{2}-x_{1})(y-y_{1})-(y_{2}-y_{1})(x-x_{1})=0} is the result of expanding the determinant in the equation | x − x 1 y − y 1 x 2 − x 1 y 2 − y 1 | = 0. {\displaystyle {\begin{vmatrix}x-x_{1}&y-y_{1}\\x_{2}-x_{1}&y_{2}-y_{1}\end{vmatrix}}=0.} The equation ( y 1 − y 2 ) x + ( x 2 − x 1 ) y + ( x 1 y 2 − x 2 y 1 ) = 0 {\displaystyle (y_{1}-y_{2})x+(x_{2}-x_{1})y+(x_{1}y_{2}-x_{2}y_{1})=0} can be obtained by expanding with respect to its first row the determinant in the equation | x y 1 x 1 y 1 1 x 2 y 2 1 | = 0. {\displaystyle {\begin{vmatrix}x&y&1\\x_{1}&y_{1}&1\\x_{2}&y_{2}&1\end{vmatrix}}=0.} Besides being very simple and mnemonic, this form has the advantage of being a special case of the more general equation of a hyperplane passing through n points in a space of dimension n − 1. These equations rely on the condition of linear dependence of points in a projective space. == More than two variables == A linear equation with more than two variables may always be assumed to have the form a 1 x 1 + a 2 x 2 + ⋯ + a n x n + b = 0. {\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}+b=0.} The coefficient b, often denoted a0, is called the constant term (sometimes the absolute term in old books). Depending on the context, the term coefficient can be reserved for the ai with i > 0. When dealing with n = 3 {\displaystyle n=3} variables, it is common to use x , y {\displaystyle x,\;y} and z {\displaystyle z} instead of indexed variables. A solution of such an equation is an n-tuple such that substituting each element of the tuple for the corresponding variable transforms the equation into a true equality. For an equation to be meaningful, the coefficient of at least one variable must be non-zero. If every variable has a zero coefficient, then, as mentioned for one variable, either the equation is inconsistent (for b ≠ 0), having no solution, or every n-tuple is a solution (for b = 0). The n-tuples that are solutions of a linear equation in n variables are the Cartesian coordinates of the points of an (n − 1)-dimensional hyperplane in an n-dimensional Euclidean space (or affine space if the coefficients are complex numbers or belong to any field). In the case of three variables, this hyperplane is a plane. If a linear equation is given with aj ≠ 0, then the equation can be solved for xj, yielding x j = − b a j − ∑ i ∈ { 1 , … , n } , i ≠ j a i a j x i .
{\displaystyle x_{j}=-{\frac {b}{a_{j}}}-\sum _{i\in \{1,\ldots ,n\},i\neq j}{\frac {a_{i}}{a_{j}}}x_{i}.} If the coefficients are real numbers, this defines a real-valued function of n real variables. == See also == Linear equation over a ring Algebraic equation Line coordinates Linear inequality Nonlinear equation == Notes == == References == Barnett, R.A.; Ziegler, M.R.; Byleen, K.E. (2008), College Mathematics for Business, Economics, Life Sciences and the Social Sciences (11th ed.), Upper Saddle River, N.J.: Pearson, ISBN 978-0-13-157225-6 Larson, Ron; Hostetler, Robert (2007), Precalculus:A Concise Course, Houghton Mifflin, ISBN 978-0-618-62719-6 Wilson, W.A.; Tracey, J.I. (1925), Analytic Geometry (revised ed.), D.C. Heath == External links == "Linear equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia:Linear equation over a ring#0
In algebra, linear equations and systems of linear equations over a field are widely studied. "Over a field" means that the coefficients of the equations and the solutions that one is looking for belong to a given field, commonly the real or the complex numbers. This article is devoted to the same problems where "field" is replaced by "commutative ring" or, typically, "Noetherian integral domain". In the case of a single equation, the problem splits into two parts. First, the ideal membership problem, which consists, given a non-homogeneous equation a 1 x 1 + ⋯ + a k x k = b {\displaystyle a_{1}x_{1}+\cdots +a_{k}x_{k}=b} with a 1 , … , a k {\displaystyle a_{1},\ldots ,a_{k}} and b in a given ring R, in deciding whether it has a solution with x 1 , … , x k {\displaystyle x_{1},\ldots ,x_{k}} in R, and, if so, in providing one. This amounts to deciding whether b belongs to the ideal generated by the ai. The simplest instance of this problem is, for k = 1 and b = 1, deciding whether a1 is a unit in R. The syzygy problem consists, given k elements a 1 , … , a k {\displaystyle a_{1},\ldots ,a_{k}} in R, in providing a system of generators of the module of the syzygies of ( a 1 , … , a k ) , {\displaystyle (a_{1},\ldots ,a_{k}),} that is a system of generators of the submodule of those elements ( x 1 , … , x k ) {\displaystyle (x_{1},\ldots ,x_{k})} in Rk that are solutions of the homogeneous equation a 1 x 1 + ⋯ + a k x k = 0. {\displaystyle a_{1}x_{1}+\cdots +a_{k}x_{k}=0.} The simplest case, when k = 1, amounts to finding a system of generators of the annihilator of a1. Given a solution of the ideal membership problem, one obtains all the solutions by adding to it the elements of the module of syzygies. In other words, all the solutions are provided by the solution of these two partial problems. In the case of several equations, the same decomposition into subproblems occurs. The first problem becomes the submodule membership problem. The second one is also called the syzygy problem. A ring such that there are algorithms for the arithmetic operations (addition, subtraction, multiplication) and for the above problems may be called a computable ring, or effective ring. One may also say that linear algebra on the ring is effective. The article considers the main rings for which linear algebra is effective. == Generalities == To be able to solve the syzygy problem, it is necessary that the module of syzygies be finitely generated, because it is impossible to output an infinite list. Therefore, the problems considered here make sense only for a Noetherian ring, or at least a coherent ring. In fact, this article is restricted to Noetherian integral domains because of the following result. Given a Noetherian integral domain, if there are algorithms to solve the ideal membership problem and the syzygies problem for a single equation, then one may deduce from them algorithms for the similar problems concerning systems of equations. This theorem is useful to prove the existence of algorithms. However, in practice, the algorithms for the systems are designed directly. A field is an effective ring as soon as one has algorithms for addition, subtraction, multiplication, and computation of multiplicative inverses. In fact, solving the submodule membership problem is what is commonly called solving the system, and solving the syzygy problem is the computation of the null space of the matrix of a system of linear equations. The basic algorithm for both problems is Gaussian elimination.
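As an illustration over the field of rational numbers, the following minimal Python sketch (the function name is ours, not a library API) solves the syzygy problem for a single equation in exact arithmetic; it is the degenerate one-row case of Gaussian elimination:

```python
from fractions import Fraction

def syzygies(a):
    """Generators of the solutions of a1*x1 + ... + ak*xk = 0 over Q.

    Over a field this is the null space of a 1 x k matrix: the one-row
    case of Gaussian elimination, done here with exact Fractions.
    """
    k = len(a)
    j = next((i for i in range(k) if a[i] != 0), None)
    if j is None:   # all coefficients are zero: every vector is a syzygy
        return [[Fraction(1 if i == f else 0) for i in range(k)] for f in range(k)]
    basis = []
    for f in range(k):
        if f == j:
            continue
        v = [Fraction(0)] * k
        v[f] = Fraction(1)
        v[j] = -Fraction(a[f], a[j])    # solve a_j * x_j = -a_f * x_f for x_j
        basis.append(v)
    return basis

# The syzygies of (1, 2, 3) over Q: a basis is (-2, 1, 0) and (-3, 0, 1).
print(syzygies([1, 2, 3]))
```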
=== Properties of effective rings === Let R be an effective commutative ring. There is an algorithm for testing if an element a is a zero divisor: this amounts to solving the linear equation ax = 0. There is an algorithm for testing if an element a is a unit, and if it is, computing its inverse: this amounts to solving the linear equation ax = 1. Given an ideal I generated by a1, ..., ak, there is an algorithm for testing if two elements of R have the same image in R/I: testing the equality of the images of a and b amounts to solving the equation a = b + a1 z1 + ⋯ + ak zk; linear algebra is effective over R/I: for solving a linear system over R/I, it suffices to write it over R and to add to one side of the ith equation a1 zi,1 + ⋯ + ak zi, k (for i = 1, ...), where the zi, j are new unknowns. Linear algebra is effective on the polynomial ring R [ x 1 , … , x n ] {\displaystyle R[x_{1},\ldots ,x_{n}]} if and only if one has an algorithm that computes an upper bound of the degree of the polynomials that may occur when solving linear systems of equations: if one has solving algorithms, their outputs give the degrees. Conversely, if one knows an upper bound of the degrees occurring in a solution, one may write the unknown polynomials as polynomials with unknown coefficients. Then, as two polynomials are equal if and only if their coefficients are equal, the equations of the problem become linear equations in the coefficients, which can be solved over an effective ring. == Over the integers or a principal ideal domain == There are algorithms to solve all the problems addressed in this article over the integers. In other words, linear algebra is effective over the integers; see Linear Diophantine system for details. More generally, linear algebra is effective on a principal ideal domain if there are algorithms for addition, subtraction and multiplication, and Solving equations of the form ax = b, that is, testing whether a is a divisor of b, and, if this is the case, computing the quotient b/a, Computing Bézout's identity, that is, given a and b, computing s and t such that as + bt is a greatest common divisor of a and b. It is useful to extend to the general case the notion of a unimodular matrix by calling unimodular a square matrix whose determinant is a unit. This means that the determinant is invertible and implies that the unimodular matrices are exactly the invertible matrices such that all entries of the inverse matrix belong to the domain. The above two algorithms imply that given a and b in the principal ideal domain, there is an algorithm computing a unimodular matrix [ s t u v ] {\displaystyle {\begin{bmatrix}s&t\\u&v\end{bmatrix}}} such that [ s t u v ] [ a b ] = [ gcd ( a , b ) 0 ] . {\displaystyle {\begin{bmatrix}s&t\\u&v\end{bmatrix}}{\begin{bmatrix}a\\b\end{bmatrix}}={\begin{bmatrix}\gcd(a,b)\\0\end{bmatrix}}.} (This algorithm is obtained by taking for s and t the coefficients of Bézout's identity, and for u and v the quotients of −b and a by as + bt; this choice implies that the determinant of the square matrix is 1.) Having such an algorithm, the Smith normal form of a matrix may be computed exactly as in the integer case, and this suffices to apply the method described in Linear Diophantine system to get an algorithm for solving every linear system. The main case where this is commonly used is the case of linear systems over the ring of univariate polynomials over a field.
In this case, the extended Euclidean algorithm may be used for computing the above unimodular matrix; see Polynomial greatest common divisor § Bézout's identity and extended GCD algorithm for details. == Over polynomial rings over a field == Linear algebra is effective on a polynomial ring k [ x 1 , … , x n ] {\displaystyle k[x_{1},\ldots ,x_{n}]} over a field k. This was first proved in 1926 by Grete Hermann. The algorithms resulting from Hermann's results are only of historical interest, as their computational complexity is too high to allow practical computer computation. Proofs that linear algebra is effective on polynomial rings and computer implementations are presently all based on Gröbner basis theory (a small computational sketch of ideal membership testing is given after the references below). == References == == External links == Cox, David A.; Little, John; O'Shea, Donal (1997). Ideals, Varieties, and Algorithms (second ed.). Springer-Verlag. ISBN 0-387-94680-2. Zbl 0861.13012. Aschenbrenner, Matthias (2004). "Ideal membership in polynomial rings over the integers" (PDF). J. Amer. Math. Soc. 17 (2). AMS: 407–441. doi:10.1090/S0894-0347-04-00451-5. S2CID 8176473. Retrieved 23 October 2013.
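As an illustration of the Gröbner-basis approach, here is a small sketch using the SymPy computer algebra system (the example ideal is ours; groebner and reduced are SymPy functions): membership of f in an ideal is decided by reducing f modulo a Gröbner basis of the ideal and checking that the remainder is zero.

```python
from sympy import symbols, groebner, reduced

x, y = symbols('x y')

# Is f in the ideal generated by g1 and g2 in Q[x, y]?
g1, g2 = x - y, y**2 - 1
f = x**2 - 1                  # f = (x + y)*g1 + g2, so the answer is "yes"

G = groebner([g1, g2], x, y, order='lex')
quotients, remainder = reduced(f, list(G), x, y, order='lex')
print(remainder)              # 0, hence f belongs to the ideal
```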
Wikipedia:Linear form#0
In mathematics, a linear form (also known as a linear functional, a one-form, or a covector) is a linear map from a vector space to its field of scalars (often, the real numbers or the complex numbers). If V is a vector space over a field k, the set of all linear functionals from V to k is itself a vector space over k with addition and scalar multiplication defined pointwise. This space is called the dual space of V, or sometimes the algebraic dual space, when a topological dual space is also considered. It is often denoted Hom(V, k), or, when the field k is understood, V ∗ {\displaystyle V^{*}} ; other notations are also used, such as V ′ {\displaystyle V'} , V # {\displaystyle V^{\#}} or V ∨ . {\displaystyle V^{\vee }.} When vectors are represented by column vectors (as is common when a basis is fixed), then linear functionals are represented as row vectors, and their values on specific vectors are given by matrix products (with the row vector on the left). == Examples == The constant zero function, mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) is surjective (that is, its range is all of k). Indexing into a vector: The second element of a three-vector is given by the one-form [ 0 , 1 , 0 ] . {\displaystyle [0,1,0].} That is, the second element of [ x , y , z ] {\displaystyle [x,y,z]} is [ 0 , 1 , 0 ] ⋅ [ x , y , z ] = y . {\displaystyle [0,1,0]\cdot [x,y,z]=y.} Mean: The mean element of an n {\displaystyle n} -vector is given by the one-form [ 1 / n , 1 / n , … , 1 / n ] . {\displaystyle \left[1/n,1/n,\ldots ,1/n\right].} That is, mean ⁡ ( v ) = [ 1 / n , 1 / n , … , 1 / n ] ⋅ v . {\displaystyle \operatorname {mean} (v)=\left[1/n,1/n,\ldots ,1/n\right]\cdot v.} Sampling: Sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location. Net present value of a net cash flow, R ( t ) , {\displaystyle R(t),} is given by the one-form w ( t ) = ( 1 + i ) − t {\displaystyle w(t)=(1+i)^{-t}} where i {\displaystyle i} is the discount rate. That is, N P V ( R ( t ) ) = ⟨ w , R ⟩ = ∫ t = 0 ∞ R ( t ) ( 1 + i ) t d t . {\displaystyle \mathrm {NPV} (R(t))=\langle w,R\rangle =\int _{t=0}^{\infty }{\frac {R(t)}{(1+i)^{t}}}\,dt.} === Linear functionals in Rn === Suppose that vectors in the real coordinate space R n {\displaystyle \mathbb {R} ^{n}} are represented as column vectors x = [ x 1 ⋮ x n ] . {\displaystyle \mathbf {x} ={\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.} For each row vector a = [ a 1 ⋯ a n ] {\displaystyle \mathbf {a} ={\begin{bmatrix}a_{1}&\cdots &a_{n}\end{bmatrix}}} there is a linear functional f a {\displaystyle f_{\mathbf {a} }} defined by f a ( x ) = a 1 x 1 + ⋯ + a n x n , {\displaystyle f_{\mathbf {a} }(\mathbf {x} )=a_{1}x_{1}+\cdots +a_{n}x_{n},} and each linear functional can be expressed in this form. This can be interpreted as either the matrix product or the dot product of the row vector a {\displaystyle \mathbf {a} } and the column vector x {\displaystyle \mathbf {x} } : f a ( x ) = a ⋅ x = [ a 1 ⋯ a n ] [ x 1 ⋮ x n ] . {\displaystyle f_{\mathbf {a} }(\mathbf {x} )=\mathbf {a} \cdot \mathbf {x} ={\begin{bmatrix}a_{1}&\cdots &a_{n}\end{bmatrix}}{\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.} === Trace of a square matrix === The trace tr ⁡ ( A ) {\displaystyle \operatorname {tr} (A)} of a square matrix A {\displaystyle A} is the sum of all elements on its main diagonal. 
Matrices can be multiplied by scalars and two matrices of the same dimension can be added together; these operations make a vector space from the set of all n × n {\displaystyle n\times n} matrices. The trace is a linear functional on this space because tr ⁡ ( s A ) = s tr ⁡ ( A ) {\displaystyle \operatorname {tr} (sA)=s\operatorname {tr} (A)} and tr ⁡ ( A + B ) = tr ⁡ ( A ) + tr ⁡ ( B ) {\displaystyle \operatorname {tr} (A+B)=\operatorname {tr} (A)+\operatorname {tr} (B)} for all scalars s {\displaystyle s} and all n × n {\displaystyle n\times n} matrices A and B . {\displaystyle A{\text{ and }}B.} === (Definite) Integration === Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral I ( f ) = ∫ a b f ( x ) d x {\displaystyle I(f)=\int _{a}^{b}f(x)\,dx} is a linear functional from the vector space C [ a , b ] {\displaystyle C[a,b]} of continuous functions on the interval [ a , b ] {\displaystyle [a,b]} to the real numbers. The linearity of I {\displaystyle I} follows from the standard facts about the integral: I ( f + g ) = ∫ a b [ f ( x ) + g ( x ) ] d x = ∫ a b f ( x ) d x + ∫ a b g ( x ) d x = I ( f ) + I ( g ) I ( α f ) = ∫ a b α f ( x ) d x = α ∫ a b f ( x ) d x = α I ( f ) . {\displaystyle {\begin{aligned}I(f+g)&=\int _{a}^{b}[f(x)+g(x)]\,dx=\int _{a}^{b}f(x)\,dx+\int _{a}^{b}g(x)\,dx=I(f)+I(g)\\I(\alpha f)&=\int _{a}^{b}\alpha f(x)\,dx=\alpha \int _{a}^{b}f(x)\,dx=\alpha I(f).\end{aligned}}} === Evaluation === Let P n {\displaystyle P_{n}} denote the vector space of real-valued polynomial functions of degree ≤ n {\displaystyle \leq n} defined on an interval [ a , b ] . {\displaystyle [a,b].} If c ∈ [ a , b ] , {\displaystyle c\in [a,b],} then let ev c : P n → R {\displaystyle \operatorname {ev} _{c}:P_{n}\to \mathbb {R} } be the evaluation functional ev c ⁡ f = f ( c ) . {\displaystyle \operatorname {ev} _{c}f=f(c).} The mapping f ↦ f ( c ) {\displaystyle f\mapsto f(c)} is linear since ( f + g ) ( c ) = f ( c ) + g ( c ) ( α f ) ( c ) = α f ( c ) . {\displaystyle {\begin{aligned}(f+g)(c)&=f(c)+g(c)\\(\alpha f)(c)&=\alpha f(c).\end{aligned}}} If x 0 , … , x n {\displaystyle x_{0},\ldots ,x_{n}} are n + 1 {\displaystyle n+1} distinct points in [ a , b ] , {\displaystyle [a,b],} then the evaluation functionals ev x i , {\displaystyle \operatorname {ev} _{x_{i}},} i = 0 , … , n {\displaystyle i=0,\ldots ,n} form a basis of the dual space of P n {\displaystyle P_{n}} (Lax (1996) proves this last fact using Lagrange interpolation). === Non-example === A function f {\displaystyle f} whose graph is a line, f ( x ) = a + r x {\displaystyle f(x)=a+rx} with a ≠ 0 {\displaystyle a\neq 0} (for example, f ( x ) = 1 + 2 x {\displaystyle f(x)=1+2x} ), is not a linear functional on R {\displaystyle \mathbb {R} } , since it is not linear: for instance, it does not map 0 to 0. It is, however, affine-linear. == Visualization == In finite dimensions, a linear functional can be visualized in terms of its level sets, the sets of vectors which map to a given value. In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel hyperplanes. This method of visualizing linear functionals is sometimes introduced in general relativity texts, such as Gravitation by Misner, Thorne & Wheeler (1973).
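The finite-dimensional examples above are easy to check numerically. A small sketch with NumPy (illustrative only), representing a functional on R3 as a row vector and verifying the linearity of the trace:

```python
import numpy as np

# A linear functional on R^3 represented by a row vector a: f_a(x) = a . x.
a = np.array([0.0, 1.0, 0.0])     # the "indexing" functional: picks the 2nd entry
x = np.array([7.0, -2.0, 5.0])
print(a @ x)                      # -2.0

# The trace is a linear functional on the space of 3 x 3 matrices.
A, B, s = np.random.rand(3, 3), np.random.rand(3, 3), 2.5
assert np.isclose(np.trace(s * A), s * np.trace(A))
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
```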
== Applications == === Application to quadrature === If x 0 , … , x n {\displaystyle x_{0},\ldots ,x_{n}} are n + 1 {\displaystyle n+1} distinct points in [a, b], then the linear functionals ev x i : f ↦ f ( x i ) {\displaystyle \operatorname {ev} _{x_{i}}:f\mapsto f\left(x_{i}\right)} defined above form a basis of the dual space of Pn, the space of polynomials of degree ≤ n . {\displaystyle \leq n.} The integration functional I is also a linear functional on Pn, and so can be expressed as a linear combination of these basis elements. In symbols, there are coefficients a 0 , … , a n {\displaystyle a_{0},\ldots ,a_{n}} for which I ( f ) = a 0 f ( x 0 ) + a 1 f ( x 1 ) + ⋯ + a n f ( x n ) {\displaystyle I(f)=a_{0}f(x_{0})+a_{1}f(x_{1})+\dots +a_{n}f(x_{n})} for all f ∈ P n . {\displaystyle f\in P_{n}.} This forms the foundation of the theory of numerical quadrature. === In quantum mechanics === Linear functionals are particularly important in quantum mechanics. Quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces. A state of a quantum mechanical system can be identified with a linear functional. For more information see bra–ket notation. === Distributions === In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions. == Dual vectors and bilinear forms == Every non-degenerate bilinear form on a finite-dimensional vector space V induces an isomorphism V → V∗ : v ↦ v∗ such that v ∗ ( w ) := ⟨ v , w ⟩ ∀ w ∈ V , {\displaystyle v^{*}(w):=\langle v,w\rangle \quad \forall w\in V,} where the bilinear form on V is denoted ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle } (for instance, in Euclidean space, ⟨ v , w ⟩ = v ⋅ w {\displaystyle \langle v,w\rangle =v\cdot w} is the dot product of v and w). The inverse isomorphism is V∗ → V : v∗ ↦ v, where v is the unique element of V such that ⟨ v , w ⟩ = v ∗ ( w ) {\displaystyle \langle v,w\rangle =v^{*}(w)} for all w ∈ V . {\displaystyle w\in V.} The above defined vector v∗ ∈ V∗ is said to be the dual vector of v ∈ V . {\displaystyle v\in V.} In an infinite dimensional Hilbert space, analogous results hold by the Riesz representation theorem. There is a mapping V → V∗ from V into its continuous dual space V∗. == Relationship to bases == === Basis of the dual space === Let the vector space V have a basis e 1 , e 2 , … , e n {\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\dots ,\mathbf {e} _{n}} , not necessarily orthogonal. Then the dual space V ∗ {\displaystyle V^{*}} has a basis ω ~ 1 , ω ~ 2 , … , ω ~ n {\displaystyle {\tilde {\omega }}^{1},{\tilde {\omega }}^{2},\dots ,{\tilde {\omega }}^{n}} called the dual basis defined by the special property that ω ~ i ( e j ) = { 1 if i = j 0 if i ≠ j . {\displaystyle {\tilde {\omega }}^{i}(\mathbf {e} _{j})={\begin{cases}1&{\text{if}}\ i=j\\0&{\text{if}}\ i\neq j.\end{cases}}} Or, more succinctly, ω ~ i ( e j ) = δ i j {\displaystyle {\tilde {\omega }}^{i}(\mathbf {e} _{j})=\delta _{ij}} where δ i j {\displaystyle \delta _{ij}} is the Kronecker delta. Here the superscripts of the basis functionals are not exponents but are instead contravariant indices. A linear functional u ~ {\displaystyle {\tilde {u}}} belonging to the dual space V ~ {\displaystyle {\tilde {V}}} can be expressed as a linear combination of basis functionals, with coefficients ("components") ui, u ~ = ∑ i = 1 n u i ω ~ i .
{\displaystyle {\tilde {u}}=\sum _{i=1}^{n}u_{i}\,{\tilde {\omega }}^{i}.} Then, applying the functional u ~ {\displaystyle {\tilde {u}}} to a basis vector e j {\displaystyle \mathbf {e} _{j}} yields u ~ ( e j ) = ∑ i = 1 n ( u i ω ~ i ) e j = ∑ i u i [ ω ~ i ( e j ) ] {\displaystyle {\tilde {u}}(\mathbf {e} _{j})=\sum _{i=1}^{n}\left(u_{i}\,{\tilde {\omega }}^{i}\right)\mathbf {e} _{j}=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left(\mathbf {e} _{j}\right)\right]} due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals. Then u ~ ( e j ) = ∑ i u i [ ω ~ i ( e j ) ] = ∑ i u i δ i j = u j . {\displaystyle {\begin{aligned}{\tilde {u}}({\mathbf {e} }_{j})&=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left({\mathbf {e} }_{j}\right)\right]\\&=\sum _{i}u_{i}{\delta }_{ij}\\&=u_{j}.\end{aligned}}} So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector. === The dual basis and inner product === When the space V carries an inner product, it is possible to write explicitly a formula for the dual basis of a given basis. Let V have (not necessarily orthogonal) basis e 1 , … , e n . {\displaystyle \mathbf {e} _{1},\dots ,\mathbf {e} _{n}.} In three dimensions (n = 3), the dual basis can be written explicitly ω ~ i ( v ) = 1 2 ⟨ ∑ j = 1 3 ∑ k = 1 3 ε i j k ( e j × e k ) e 1 ⋅ e 2 × e 3 , v ⟩ , {\displaystyle {\tilde {\omega }}^{i}(\mathbf {v} )={\frac {1}{2}}\left\langle {\frac {\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon ^{ijk}\,(\mathbf {e} _{j}\times \mathbf {e} _{k})}{\mathbf {e} _{1}\cdot \mathbf {e} _{2}\times \mathbf {e} _{3}}},\mathbf {v} \right\rangle ,} for i = 1 , 2 , 3 , {\displaystyle i=1,2,3,} where ε is the Levi-Civita symbol and ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } the inner product (or dot product) on V. In higher dimensions, this generalizes as follows ω ~ i ( v ) = ⟨ ∑ 1 ≤ i 2 < i 3 < ⋯ < i n ≤ n ε i i 2 … i n ( ⋆ e i 2 ∧ ⋯ ∧ e i n ) ⋆ ( e 1 ∧ ⋯ ∧ e n ) , v ⟩ , {\displaystyle {\tilde {\omega }}^{i}(\mathbf {v} )=\left\langle {\frac {\sum _{1\leq i_{2}<i_{3}<\dots <i_{n}\leq n}\varepsilon ^{ii_{2}\dots i_{n}}(\star \mathbf {e} _{i_{2}}\wedge \cdots \wedge \mathbf {e} _{i_{n}})}{\star (\mathbf {e} _{1}\wedge \cdots \wedge \mathbf {e} _{n})}},\mathbf {v} \right\rangle ,} where ⋆ {\displaystyle \star } is the Hodge star operator. == Over a ring == Modules over a ring are generalizations of vector spaces, removing the restriction that coefficients belong to a field. Given a module M over a ring R, a linear form on M is a linear map from M to R, where the latter is considered as a module over itself. The space of linear forms is always denoted HomR(M, R), whether R is a field or not. It is a right module if M is a left module. The existence of "enough" linear forms on a module is equivalent to projectivity. == Change of field == Suppose that X {\displaystyle X} is a vector space over C . {\displaystyle \mathbb {C} .} Restricting scalar multiplication to R {\displaystyle \mathbb {R} } gives rise to a real vector space X R {\displaystyle X_{\mathbb {R} }} called the realification of X .
{\displaystyle X.} Any vector space X {\displaystyle X} over C {\displaystyle \mathbb {C} } is also a vector space over R , {\displaystyle \mathbb {R} ,} endowed with a complex structure; that is, there exists a real vector subspace X R {\displaystyle X_{\mathbb {R} }} such that we can (formally) write X = X R ⊕ X R i {\displaystyle X=X_{\mathbb {R} }\oplus X_{\mathbb {R} }i} as R {\displaystyle \mathbb {R} } -vector spaces. === Real versus complex linear functionals === Every linear functional on X {\displaystyle X} is complex-valued while every linear functional on X R {\displaystyle X_{\mathbb {R} }} is real-valued. If dim ⁡ X ≠ 0 {\displaystyle \dim X\neq 0} then a linear functional on either one of X {\displaystyle X} or X R {\displaystyle X_{\mathbb {R} }} is non-trivial (meaning not identically 0 {\displaystyle 0} ) if and only if it is surjective (because if φ ( x ) ≠ 0 {\displaystyle \varphi (x)\neq 0} then for any scalar s , {\displaystyle s,} φ ( ( s / φ ( x ) ) x ) = s {\displaystyle \varphi \left((s/\varphi (x))x\right)=s} ), where the image of a linear functional on X {\displaystyle X} is C {\displaystyle \mathbb {C} } while the image of a linear functional on X R {\displaystyle X_{\mathbb {R} }} is R . {\displaystyle \mathbb {R} .} Consequently, the only function on X {\displaystyle X} that is both a linear functional on X {\displaystyle X} and a linear functional on X R {\displaystyle X_{\mathbb {R} }} is the trivial functional; in other words, X # ∩ X R # = { 0 } , {\displaystyle X^{\#}\cap X_{\mathbb {R} }^{\#}=\{0\},} where ⋅ # {\displaystyle \,{\cdot }^{\#}} denotes the space's algebraic dual space. However, every C {\displaystyle \mathbb {C} } -linear functional on X {\displaystyle X} is an R {\displaystyle \mathbb {R} } -linear operator (meaning that it is additive and homogeneous over R {\displaystyle \mathbb {R} } ), but unless it is identically 0 , {\displaystyle 0,} it is not an R {\displaystyle \mathbb {R} } -linear functional on X {\displaystyle X} because its range (which is C {\displaystyle \mathbb {C} } ) is 2-dimensional over R . {\displaystyle \mathbb {R} .} Conversely, a non-zero R {\displaystyle \mathbb {R} } -linear functional has range too small to be a C {\displaystyle \mathbb {C} } -linear functional as well. === Real and imaginary parts === If φ ∈ X # {\displaystyle \varphi \in X^{\#}} then denote its real part by φ R := Re ⁡ φ {\displaystyle \varphi _{\mathbb {R} }:=\operatorname {Re} \varphi } and its imaginary part by φ i := Im ⁡ φ . {\displaystyle \varphi _{i}:=\operatorname {Im} \varphi .} Then φ R : X → R {\displaystyle \varphi _{\mathbb {R} }:X\to \mathbb {R} } and φ i : X → R {\displaystyle \varphi _{i}:X\to \mathbb {R} } are linear functionals on X R {\displaystyle X_{\mathbb {R} }} and φ = φ R + i φ i . {\displaystyle \varphi =\varphi _{\mathbb {R} }+i\varphi _{i}.} The fact that z = Re ⁡ z − i Re ⁡ ( i z ) = Im ⁡ ( i z ) + i Im ⁡ z {\displaystyle z=\operatorname {Re} z-i\operatorname {Re} (iz)=\operatorname {Im} (iz)+i\operatorname {Im} z} for all z ∈ C {\displaystyle z\in \mathbb {C} } implies that for all x ∈ X , {\displaystyle x\in X,} φ ( x ) = φ R ( x ) − i φ R ( i x ) = φ i ( i x ) + i φ i ( x ) {\displaystyle {\begin{alignedat}{4}\varphi (x)&=\varphi _{\mathbb {R} }(x)-i\varphi _{\mathbb {R} }(ix)\\&=\varphi _{i}(ix)+i\varphi _{i}(x)\\\end{alignedat}}} and consequently, that φ i ( x ) = − φ R ( i x ) {\displaystyle \varphi _{i}(x)=-\varphi _{\mathbb {R} }(ix)} and φ R ( x ) = φ i ( i x ) .
{\displaystyle \varphi _{\mathbb {R} }(x)=\varphi _{i}(ix).} The assignment φ ↦ φ R {\displaystyle \varphi \mapsto \varphi _{\mathbb {R} }} defines a bijective R {\displaystyle \mathbb {R} } -linear operator X # → X R # {\displaystyle X^{\#}\to X_{\mathbb {R} }^{\#}} whose inverse is the map L ∙ : X R # → X # {\displaystyle L_{\bullet }:X_{\mathbb {R} }^{\#}\to X^{\#}} defined by the assignment g ↦ L g {\displaystyle g\mapsto L_{g}} that sends g : X R → R {\displaystyle g:X_{\mathbb {R} }\to \mathbb {R} } to the linear functional L g : X → C {\displaystyle L_{g}:X\to \mathbb {C} } defined by L g ( x ) := g ( x ) − i g ( i x ) for all x ∈ X . {\displaystyle L_{g}(x):=g(x)-ig(ix)\quad {\text{ for all }}x\in X.} The real part of L g {\displaystyle L_{g}} is g {\displaystyle g} and the bijection L ∙ : X R # → X # {\displaystyle L_{\bullet }:X_{\mathbb {R} }^{\#}\to X^{\#}} is an R {\displaystyle \mathbb {R} } -linear operator, meaning that L g + h = L g + L h {\displaystyle L_{g+h}=L_{g}+L_{h}} and L r g = r L g {\displaystyle L_{rg}=rL_{g}} for all r ∈ R {\displaystyle r\in \mathbb {R} } and g , h ∈ X R # . {\displaystyle g,h\in X_{\mathbb {R} }^{\#}.} Similarly for the imaginary part, the assignment φ ↦ φ i {\displaystyle \varphi \mapsto \varphi _{i}} induces an R {\displaystyle \mathbb {R} } -linear bijection X # → X R # {\displaystyle X^{\#}\to X_{\mathbb {R} }^{\#}} whose inverse is the map X R # → X # {\displaystyle X_{\mathbb {R} }^{\#}\to X^{\#}} defined by sending I ∈ X R # {\displaystyle I\in X_{\mathbb {R} }^{\#}} to the linear functional on X {\displaystyle X} defined by x ↦ I ( i x ) + i I ( x ) . {\displaystyle x\mapsto I(ix)+iI(x).} This relationship was discovered by Henry Löwig in 1934 (although it is usually credited to F. Murray), and can be generalized to arbitrary finite extensions of a field in the natural way. It has many important consequences, some of which will now be described. === Properties and relationships === Suppose φ : X → C {\displaystyle \varphi :X\to \mathbb {C} } is a linear functional on X {\displaystyle X} with real part φ R := Re ⁡ φ {\displaystyle \varphi _{\mathbb {R} }:=\operatorname {Re} \varphi } and imaginary part φ i := Im ⁡ φ . {\displaystyle \varphi _{i}:=\operatorname {Im} \varphi .} Then φ = 0 {\displaystyle \varphi =0} if and only if φ R = 0 {\displaystyle \varphi _{\mathbb {R} }=0} if and only if φ i = 0. {\displaystyle \varphi _{i}=0.} Assume that X {\displaystyle X} is a topological vector space. Then φ {\displaystyle \varphi } is continuous if and only if its real part φ R {\displaystyle \varphi _{\mathbb {R} }} is continuous, if and only if φ {\displaystyle \varphi } 's imaginary part φ i {\displaystyle \varphi _{i}} is continuous. That is, either all three of φ , φ R , {\displaystyle \varphi ,\varphi _{\mathbb {R} },} and φ i {\displaystyle \varphi _{i}} are continuous or none are continuous. This remains true if the word "continuous" is replaced with the word "bounded". In particular, φ ∈ X ′ {\displaystyle \varphi \in X^{\prime }} if and only if φ R ∈ X R ′ {\displaystyle \varphi _{\mathbb {R} }\in X_{\mathbb {R} }^{\prime }} where the prime denotes the space's continuous dual space. Let B ⊆ X . {\displaystyle B\subseteq X.} If u B ⊆ B {\displaystyle uB\subseteq B} for all scalars u ∈ C {\displaystyle u\in \mathbb {C} } of unit length (meaning | u | = 1 {\displaystyle |u|=1} ) then sup b ∈ B | φ ( b ) | = sup b ∈ B | φ R ( b ) | . 
{\displaystyle \sup _{b\in B}|\varphi (b)|=\sup _{b\in B}\left|\varphi _{\mathbb {R} }(b)\right|.} Similarly, if φ i := Im ⁡ φ : X → R {\displaystyle \varphi _{i}:=\operatorname {Im} \varphi :X\to \mathbb {R} } denotes the imaginary part of φ {\displaystyle \varphi } then i B ⊆ B {\displaystyle iB\subseteq B} implies sup b ∈ B | φ R ( b ) | = sup b ∈ B | φ i ( b ) | . {\displaystyle \sup _{b\in B}\left|\varphi _{\mathbb {R} }(b)\right|=\sup _{b\in B}\left|\varphi _{i}(b)\right|.} If X {\displaystyle X} is a normed space with norm ‖ ⋅ ‖ {\displaystyle \|\cdot \|} and if B = { x ∈ X : ‖ x ‖ ≤ 1 } {\displaystyle B=\{x\in X:\|x\|\leq 1\}} is the closed unit ball then the supremums above are the operator norms (defined in the usual way) of φ , φ R , {\displaystyle \varphi ,\varphi _{\mathbb {R} },} and φ i {\displaystyle \varphi _{i}} so that ‖ φ ‖ = ‖ φ R ‖ = ‖ φ i ‖ . {\displaystyle \|\varphi \|=\left\|\varphi _{\mathbb {R} }\right\|=\left\|\varphi _{i}\right\|.} This conclusion extends to the analogous statement for polars of balanced sets in general topological vector spaces. If X {\displaystyle X} is a complex Hilbert space with a (complex) inner product ⟨ ⋅ | ⋅ ⟩ {\displaystyle \langle \,\cdot \,|\,\cdot \,\rangle } that is antilinear in its first coordinate (and linear in the second) then X R {\displaystyle X_{\mathbb {R} }} becomes a real Hilbert space when endowed with the real part of ⟨ ⋅ | ⋅ ⟩ . {\displaystyle \langle \,\cdot \,|\,\cdot \,\rangle .} Explicitly, this real inner product on X R {\displaystyle X_{\mathbb {R} }} is defined by ⟨ x | y ⟩ R := Re ⁡ ⟨ x | y ⟩ {\displaystyle \langle x|y\rangle _{\mathbb {R} }:=\operatorname {Re} \langle x|y\rangle } for all x , y ∈ X {\displaystyle x,y\in X} and it induces the same norm on X {\displaystyle X} as ⟨ ⋅ | ⋅ ⟩ {\displaystyle \langle \,\cdot \,|\,\cdot \,\rangle } because ⟨ x | x ⟩ R = ⟨ x | x ⟩ {\displaystyle {\sqrt {\langle x|x\rangle _{\mathbb {R} }}}={\sqrt {\langle x|x\rangle }}} for all vectors x . {\displaystyle x.} Applying the Riesz representation theorem to φ ∈ X ′ {\displaystyle \varphi \in X^{\prime }} (resp. to φ R ∈ X R ′ {\displaystyle \varphi _{\mathbb {R} }\in X_{\mathbb {R} }^{\prime }} ) guarantees the existence of a unique vector f φ ∈ X {\displaystyle f_{\varphi }\in X} (resp. f φ R ∈ X R {\displaystyle f_{\varphi _{\mathbb {R} }}\in X_{\mathbb {R} }} ) such that φ ( x ) = ⟨ f φ | x ⟩ {\displaystyle \varphi (x)=\left\langle f_{\varphi }|\,x\right\rangle } (resp. φ R ( x ) = ⟨ f φ R | x ⟩ R {\displaystyle \varphi _{\mathbb {R} }(x)=\left\langle f_{\varphi _{\mathbb {R} }}|\,x\right\rangle _{\mathbb {R} }} ) for all vectors x . {\displaystyle x.} The theorem also guarantees that ‖ f φ ‖ = ‖ φ ‖ X ′ {\displaystyle \left\|f_{\varphi }\right\|=\|\varphi \|_{X^{\prime }}} and ‖ f φ R ‖ = ‖ φ R ‖ X R ′ . {\displaystyle \left\|f_{\varphi _{\mathbb {R} }}\right\|=\left\|\varphi _{\mathbb {R} }\right\|_{X_{\mathbb {R} }^{\prime }}.} It is readily verified that f φ = f φ R . {\displaystyle f_{\varphi }=f_{\varphi _{\mathbb {R} }}.} Now ‖ f φ ‖ = ‖ f φ R ‖ {\displaystyle \left\|f_{\varphi }\right\|=\left\|f_{\varphi _{\mathbb {R} }}\right\|} and the previous equalities imply that ‖ φ ‖ X ′ = ‖ φ R ‖ X R ′ , {\displaystyle \|\varphi \|_{X^{\prime }}=\left\|\varphi _{\mathbb {R} }\right\|_{X_{\mathbb {R} }^{\prime }},} which is the same conclusion that was reached above. == In infinite dimensions == Below, all vector spaces are over either the real numbers R {\displaystyle \mathbb {R} } or the complex numbers C .
{\displaystyle \mathbb {C} .} If V {\displaystyle V} is a topological vector space, the space of continuous linear functionals — the continuous dual — is often simply called the dual space. If V {\displaystyle V} is a Banach space, then so is its (continuous) dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual space. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in infinite dimensions the continuous dual is a proper subspace of the algebraic dual. A linear functional f on a (not necessarily locally convex) topological vector space X is continuous if and only if there exists a continuous seminorm p on X such that | f | ≤ p . {\displaystyle |f|\leq p.} === Characterizing closed subspaces === Continuous linear functionals have nice properties for analysis: a linear functional is continuous if and only if its kernel is closed, and a non-trivial continuous linear functional is an open map, even if the (topological) vector space is not complete. ==== Hyperplanes and maximal subspaces ==== A vector subspace M {\displaystyle M} of X {\displaystyle X} is called maximal if M ⊊ X {\displaystyle M\subsetneq X} (meaning M ⊆ X {\displaystyle M\subseteq X} and M ≠ X {\displaystyle M\neq X} ) and there does not exist a vector subspace N {\displaystyle N} of X {\displaystyle X} such that M ⊊ N ⊊ X . {\displaystyle M\subsetneq N\subsetneq X.} A vector subspace M {\displaystyle M} of X {\displaystyle X} is maximal if and only if it is the kernel of some non-trivial linear functional on X {\displaystyle X} (that is, M = ker ⁡ f {\displaystyle M=\ker f} for some linear functional f {\displaystyle f} on X {\displaystyle X} that is not identically 0). An affine hyperplane in X {\displaystyle X} is a translate of a maximal vector subspace. By linearity, a subset H {\displaystyle H} of X {\displaystyle X} is an affine hyperplane if and only if there exists some non-trivial linear functional f {\displaystyle f} on X {\displaystyle X} such that H = f − 1 ( 1 ) = { x ∈ X : f ( x ) = 1 } . {\displaystyle H=f^{-1}(1)=\{x\in X:f(x)=1\}.} If f {\displaystyle f} is a linear functional and s ≠ 0 {\displaystyle s\neq 0} is a scalar then f − 1 ( s ) = s ( f − 1 ( 1 ) ) = ( 1 s f ) − 1 ( 1 ) . {\displaystyle f^{-1}(s)=s\left(f^{-1}(1)\right)=\left({\frac {1}{s}}f\right)^{-1}(1).} This equality can be used to relate different level sets of f . {\displaystyle f.} Moreover, if f ≠ 0 {\displaystyle f\neq 0} then the kernel of f {\displaystyle f} can be reconstructed from the affine hyperplane H := f − 1 ( 1 ) {\displaystyle H:=f^{-1}(1)} by ker ⁡ f = H − H . {\displaystyle \ker f=H-H.} ==== Relationships between multiple linear functionals ==== Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other). This fact can be generalized to the following theorem: if f, g1, …, gn are linear functionals on X, then f is a linear combination of g1, …, gn if and only if the kernel of f contains the intersection of the kernels of g1, …, gn. If f is a non-trivial linear functional on X with kernel N, x ∈ X {\displaystyle x\in X} satisfies f ( x ) = 1 , {\displaystyle f(x)=1,} and U is a balanced subset of X, then N ∩ ( x + U ) = ∅ {\displaystyle N\cap (x+U)=\varnothing } if and only if | f ( u ) | < 1 {\displaystyle |f(u)|<1} for all u ∈ U . {\displaystyle u\in U.} === Hahn–Banach theorem === Any (algebraic) linear functional on a vector subspace can be extended to the whole space; for example, the evaluation functionals described above can be extended to the vector space of polynomials on all of R .
{\displaystyle \mathbb {R} .} However, this extension cannot always be done while keeping the linear functional continuous. The Hahn–Banach family of theorems gives conditions under which this extension can be done. For example, the dominated extension version of the theorem states that if p is a sublinear function on a real vector space X and f is a linear functional defined on a vector subspace M that satisfies f ≤ p on M, then f extends to a linear functional F on all of X that still satisfies F ≤ p. === Equicontinuity of families of linear functionals === Let X be a topological vector space (TVS) with continuous dual space X ′ . {\displaystyle X'.} For any subset H of X ′ , {\displaystyle X',} the following are equivalent: H is equicontinuous; H is contained in the polar of some neighborhood of 0 {\displaystyle 0} in X; the (pre)polar of H is a neighborhood of 0 {\displaystyle 0} in X; If H is an equicontinuous subset of X ′ {\displaystyle X'} then the following sets are also equicontinuous: the weak-* closure, the balanced hull, the convex hull, and the convex balanced hull. Moreover, Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of X ′ {\displaystyle X'} is weak-* compact (and thus that every equicontinuous subset is weak-* relatively compact). == See also == Discontinuous linear map Locally convex topological vector space – Vector space with a topology defined by convex open sets Positive linear functional – Linear functional taking nonnegative values on nonnegative elements of an ordered vector space Multilinear form – Map from multiple vectors to an underlying field of scalars, linear in each argument Topological vector space – Vector space with a notion of nearness == Notes == === Footnotes === === Proofs === == References == == Bibliography == Axler, Sheldon (2015), Linear Algebra Done Right, Undergraduate Texts in Mathematics (3rd ed.), Springer, ISBN 978-3-319-11079-0 Bishop, Richard; Goldberg, Samuel (1980), "Chapter 4", Tensor Analysis on Manifolds, Dover Publications, ISBN 0-486-64039-6 Conway, John (1990). A course in functional analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908. Dunford, Nelson (1988). Linear operators. New York: Interscience Publishers. ISBN 0-471-60848-3. OCLC 18412261. Halmos, Paul Richard (1974), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics (1958 2nd ed.), Springer, ISBN 0-387-90093-4 Katznelson, Yitzhak; Katznelson, Yonatan R. (2008), A (Terse) Introduction to Linear Algebra, American Mathematical Society, ISBN 978-0-8218-4419-9 Lax, Peter (1996), Linear algebra, Wiley-Interscience, ISBN 978-0-471-11111-5 Misner, Charles W.; Thorne, Kip S.; Wheeler, John A. (1973), Gravitation, W. H. Freeman, ISBN 0-7167-0344-0 Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Schutz, Bernard (1985), "Chapter 3", A first course in general relativity, Cambridge, UK: Cambridge University Press, ISBN 0-521-27703-5 Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Tu, Loring W.
(2011), An Introduction to Manifolds, Universitext (2nd ed.), Springer, ISBN 978-0-8218-4419-9 Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
Wikipedia:Linear independence#0
In the theory of vector spaces, a set of vectors is said to be linearly independent if there exists no nontrivial linear combination of the vectors that equals the zero vector. If such a linear combination exists, then the vectors are said to be linearly dependent. These concepts are central to the definition of dimension. A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space. == Definition == A sequence of vectors v 1 , v 2 , … , v k {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\dots ,\mathbf {v} _{k}} from a vector space V is said to be linearly dependent if there exist scalars a 1 , a 2 , … , a k , {\displaystyle a_{1},a_{2},\dots ,a_{k},} not all zero, such that a 1 v 1 + a 2 v 2 + ⋯ + a k v k = 0 , {\displaystyle a_{1}\mathbf {v} _{1}+a_{2}\mathbf {v} _{2}+\cdots +a_{k}\mathbf {v} _{k}=\mathbf {0} ,} where 0 {\displaystyle \mathbf {0} } denotes the zero vector. This implies that at least one of the scalars is nonzero, say a 1 ≠ 0 {\displaystyle a_{1}\neq 0} , and the above equation can be written as v 1 = − a 2 a 1 v 2 + ⋯ + − a k a 1 v k , {\displaystyle \mathbf {v} _{1}={\frac {-a_{2}}{a_{1}}}\mathbf {v} _{2}+\cdots +{\frac {-a_{k}}{a_{1}}}\mathbf {v} _{k},} if k > 1 , {\displaystyle k>1,} and v 1 = 0 {\displaystyle \mathbf {v} _{1}=\mathbf {0} } if k = 1. {\displaystyle k=1.} Thus, a set of vectors is linearly dependent if and only if one of them is zero or a linear combination of the others. A sequence of vectors v 1 , v 2 , … , v n {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\dots ,\mathbf {v} _{n}} is said to be linearly independent if it is not linearly dependent, that is, if the equation a 1 v 1 + a 2 v 2 + ⋯ + a n v n = 0 , {\displaystyle a_{1}\mathbf {v} _{1}+a_{2}\mathbf {v} _{2}+\cdots +a_{n}\mathbf {v} _{n}=\mathbf {0} ,} can only be satisfied by a i = 0 {\displaystyle a_{i}=0} for i = 1 , … , n . {\displaystyle i=1,\dots ,n.} This implies that no vector in the sequence can be represented as a linear combination of the remaining vectors in the sequence. In other words, a sequence of vectors is linearly independent if the only representation of 0 {\displaystyle \mathbf {0} } as a linear combination of its vectors is the trivial representation in which all the scalars a i {\textstyle a_{i}} are zero. Even more concisely, a sequence of vectors is linearly independent if and only if 0 {\displaystyle \mathbf {0} } can be represented as a linear combination of its vectors in a unique way. If a sequence of vectors contains the same vector twice, it is necessarily dependent. The linear dependency of a sequence of vectors does not depend on the order of the terms in the sequence. This allows defining linear independence for a finite set of vectors: A finite set of vectors is linearly independent if the sequence obtained by ordering them is linearly independent. In other words, one has the following result that is often useful. A sequence of vectors is linearly independent if and only if it does not contain the same vector twice and the set of its vectors is linearly independent. === Infinite case === An infinite set of vectors is linearly independent if every finite subset is linearly independent.
This definition applies also to finite sets of vectors, since a finite set is a finite subset of itself, and every subset of a linearly independent set is also linearly independent. Conversely, an infinite set of vectors is linearly dependent if it contains a finite subset that is linearly dependent, or equivalently, if some vector in the set is a linear combination of other vectors in the set. An indexed family of vectors is linearly independent if it does not contain the same vector twice, and if the set of its vectors is linearly independent. Otherwise, the family is said to be linearly dependent. A set of vectors which is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in x over the reals has the (infinite) subset {1, x, x2, ...} as a basis. === Definition via span === Let V {\displaystyle V} be a vector space. A set X ⊆ V {\displaystyle X\subseteq V} is linearly independent if and only if X {\displaystyle X} is a minimal element of { Y ⊆ V ∣ X ⊆ Span ⁡ ( Y ) } {\displaystyle \{Y\subseteq V\mid X\subseteq \operatorname {Span} (Y)\}} by the inclusion order. In contrast, X {\displaystyle X} is linearly dependent if it has a proper subset whose span is a superset of X {\displaystyle X} . == Geometric examples == u → {\displaystyle {\vec {u}}} and v → {\displaystyle {\vec {v}}} are independent and define the plane P. u → {\displaystyle {\vec {u}}} , v → {\displaystyle {\vec {v}}} and w → {\displaystyle {\vec {w}}} are dependent because all three are contained in the same plane. u → {\displaystyle {\vec {u}}} and j → {\displaystyle {\vec {j}}} are dependent because they are parallel to each other. u → {\displaystyle {\vec {u}}} , v → {\displaystyle {\vec {v}}} and k → {\displaystyle {\vec {k}}} are independent because u → {\displaystyle {\vec {u}}} and v → {\displaystyle {\vec {v}}} are independent of each other and k → {\displaystyle {\vec {k}}} is not a linear combination of them or, equivalently, because they do not belong to a common plane. The three vectors define a three-dimensional space. The vectors o → {\displaystyle {\vec {o}}} (null vector, whose components are equal to zero) and k → {\displaystyle {\vec {k}}} are dependent since o → = 0 k → {\displaystyle {\vec {o}}=0{\vec {k}}} . === Geographic location === A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space (ignoring altitude and the curvature of the Earth's surface). The person might add, "The place is 5 miles northeast of here." This last statement is true, but it is not necessary to find the location. In this example the "3 miles north" vector and the "4 miles east" vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa. The third "5 miles northeast" vector is a linear combination of the other two vectors, and it makes the set of vectors linearly dependent, that is, one of the three vectors is unnecessary to define a specific location on a plane. Also note that if altitude is not ignored, it becomes necessary to add a third vector to the linearly independent set. In general, n linearly independent vectors are required to describe all locations in n-dimensional space.
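A quick numerical check of the geographic example (an illustrative sketch with NumPy; the rank test used here is standard but is not part of the article's text):

```python
import numpy as np

north = np.array([0.0, 3.0])          # "3 miles north"
east = np.array([4.0, 0.0])           # "4 miles east"
northeast = north + east              # "5 miles northeast" (a 3-4-5 triangle)

# Three vectors but rank 2: the set is linearly dependent...
print(np.linalg.matrix_rank(np.column_stack([north, east, northeast])))  # 2
# ...while north and east alone are linearly independent.
print(np.linalg.matrix_rank(np.column_stack([north, east])))             # 2
```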
== Evaluating linear independence == === The zero vector === If one or more vectors from a given sequence of vectors v 1 , … , v k {\displaystyle \mathbf {v} _{1},\dots ,\mathbf {v} _{k}} is the zero vector 0 {\displaystyle \mathbf {0} } then the vectors v 1 , … , v k {\displaystyle \mathbf {v} _{1},\dots ,\mathbf {v} _{k}} are necessarily linearly dependent (and consequently, they are not linearly independent). To see why, suppose that i {\displaystyle i} is an index (i.e. an element of { 1 , … , k } {\displaystyle \{1,\ldots ,k\}} ) such that v i = 0 . {\displaystyle \mathbf {v} _{i}=\mathbf {0} .} Then let a i := 1 {\displaystyle a_{i}:=1} (alternatively, letting a i {\displaystyle a_{i}} be equal to any other non-zero scalar will also work) and then let all other scalars be 0 {\displaystyle 0} (explicitly, this means that for any index j {\displaystyle j} other than i {\displaystyle i} (i.e. for j ≠ i {\displaystyle j\neq i} ), let a j := 0 {\displaystyle a_{j}:=0} so that consequently a j v j = 0 v j = 0 {\displaystyle a_{j}\mathbf {v} _{j}=0\mathbf {v} _{j}=\mathbf {0} } ). Simplifying a 1 v 1 + ⋯ + a k v k {\displaystyle a_{1}\mathbf {v} _{1}+\cdots +a_{k}\mathbf {v} _{k}} gives: a 1 v 1 + ⋯ + a k v k = 0 + ⋯ + 0 + a i v i + 0 + ⋯ + 0 = a i v i = a i 0 = 0 . {\displaystyle a_{1}\mathbf {v} _{1}+\cdots +a_{k}\mathbf {v} _{k}=\mathbf {0} +\cdots +\mathbf {0} +a_{i}\mathbf {v} _{i}+\mathbf {0} +\cdots +\mathbf {0} =a_{i}\mathbf {v} _{i}=a_{i}\mathbf {0} =\mathbf {0} .} Because not all scalars are zero (in particular, a i ≠ 0 {\displaystyle a_{i}\neq 0} ), this proves that the vectors v 1 , … , v k {\displaystyle \mathbf {v} _{1},\dots ,\mathbf {v} _{k}} are linearly dependent. As a consequence, the zero vector cannot possibly belong to any collection of vectors that is linearly independent. Now consider the special case where the sequence v 1 , … , v k {\displaystyle \mathbf {v} _{1},\dots ,\mathbf {v} _{k}} has length 1 {\displaystyle 1} (i.e. the case where k = 1 {\displaystyle k=1} ). A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector is zero. Explicitly, if v 1 {\displaystyle \mathbf {v} _{1}} is any vector then the sequence v 1 {\displaystyle \mathbf {v} _{1}} (which is a sequence of length 1 {\displaystyle 1} ) is linearly dependent if and only if v 1 = 0 {\displaystyle \mathbf {v} _{1}=\mathbf {0} } ; alternatively, the collection v 1 {\displaystyle \mathbf {v} _{1}} is linearly independent if and only if v 1 ≠ 0 . {\displaystyle \mathbf {v} _{1}\neq \mathbf {0} .} === Linear dependence and independence of two vectors === This example considers the special case where there are exactly two vectors u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } from some real or complex vector space. The vectors u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } are linearly dependent if and only if at least one of the following is true: u {\displaystyle \mathbf {u} } is a scalar multiple of v {\displaystyle \mathbf {v} } (explicitly, this means that there exists a scalar c {\displaystyle c} such that u = c v {\displaystyle \mathbf {u} =c\mathbf {v} } ) or v {\displaystyle \mathbf {v} } is a scalar multiple of u {\displaystyle \mathbf {u} } (explicitly, this means that there exists a scalar c {\displaystyle c} such that v = c u {\displaystyle \mathbf {v} =c\mathbf {u} } ). 
If u = 0 {\displaystyle \mathbf {u} =\mathbf {0} } then by setting c := 0 {\displaystyle c:=0} we have c v = 0 v = 0 = u {\displaystyle c\mathbf {v} =0\mathbf {v} =\mathbf {0} =\mathbf {u} } (this equality holds no matter what the value of v {\displaystyle \mathbf {v} } is), which shows that (1) is true in this particular case. Similarly, if v = 0 {\displaystyle \mathbf {v} =\mathbf {0} } then (2) is true because v = 0 u . {\displaystyle \mathbf {v} =0\mathbf {u} .} If u = v {\displaystyle \mathbf {u} =\mathbf {v} } (for instance, if they are both equal to the zero vector 0 {\displaystyle \mathbf {0} } ) then both (1) and (2) are true (by using c := 1 {\displaystyle c:=1} for both). If u = c v {\displaystyle \mathbf {u} =c\mathbf {v} } then u ≠ 0 {\displaystyle \mathbf {u} \neq \mathbf {0} } is only possible if c ≠ 0 {\displaystyle c\neq 0} and v ≠ 0 {\displaystyle \mathbf {v} \neq \mathbf {0} } ; in this case, it is possible to multiply both sides by 1 c {\textstyle {\frac {1}{c}}} to conclude v = 1 c u . {\textstyle \mathbf {v} ={\frac {1}{c}}\mathbf {u} .} This shows that if u ≠ 0 {\displaystyle \mathbf {u} \neq \mathbf {0} } and v ≠ 0 {\displaystyle \mathbf {v} \neq \mathbf {0} } then (1) is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly independent). If u = c v {\displaystyle \mathbf {u} =c\mathbf {v} } with u = 0 {\displaystyle \mathbf {u} =\mathbf {0} } instead, then at least one of c {\displaystyle c} and v {\displaystyle \mathbf {v} } must be zero. Moreover, if exactly one of u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } is 0 {\displaystyle \mathbf {0} } (while the other is non-zero) then exactly one of (1) and (2) is true (with the other being false). The vectors u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } are linearly independent if and only if u {\displaystyle \mathbf {u} } is not a scalar multiple of v {\displaystyle \mathbf {v} } and v {\displaystyle \mathbf {v} } is not a scalar multiple of u {\displaystyle \mathbf {u} } . === Vectors in R2 === Three vectors: Consider the set of vectors v 1 = ( 1 , 1 ) , {\displaystyle \mathbf {v} _{1}=(1,1),} v 2 = ( − 3 , 2 ) , {\displaystyle \mathbf {v} _{2}=(-3,2),} and v 3 = ( 2 , 4 ) , {\displaystyle \mathbf {v} _{3}=(2,4),} then the condition for linear dependence seeks scalars, not all zero, such that a 1 [ 1 1 ] + a 2 [ − 3 2 ] + a 3 [ 2 4 ] = [ 0 0 ] , {\displaystyle a_{1}{\begin{bmatrix}1\\1\end{bmatrix}}+a_{2}{\begin{bmatrix}-3\\2\end{bmatrix}}+a_{3}{\begin{bmatrix}2\\4\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}},} or [ 1 − 3 2 1 2 4 ] [ a 1 a 2 a 3 ] = [ 0 0 ] . {\displaystyle {\begin{bmatrix}1&-3&2\\1&2&4\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}.} Row reduce this matrix equation by subtracting the first row from the second to obtain, [ 1 − 3 2 0 5 2 ] [ a 1 a 2 a 3 ] = [ 0 0 ] . {\displaystyle {\begin{bmatrix}1&-3&2\\0&5&2\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}.} Continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying it by 3 and adding it to the first row, that is [ 1 0 16 / 5 0 1 2 / 5 ] [ a 1 a 2 a 3 ] = [ 0 0 ] . 
{\displaystyle {\begin{bmatrix}1&0&16/5\\0&1&2/5\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}.} Rearranging this equation allows us to obtain [ 1 0 0 1 ] [ a 1 a 2 ] = [ a 1 a 2 ] = − a 3 [ 16 / 5 2 / 5 ] . {\displaystyle {\begin{bmatrix}1&0\\0&1\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\end{bmatrix}}={\begin{bmatrix}a_{1}\\a_{2}\end{bmatrix}}=-a_{3}{\begin{bmatrix}16/5\\2/5\end{bmatrix}}.} which shows that non-zero ai exist such that v 3 = ( 2 , 4 ) {\displaystyle \mathbf {v} _{3}=(2,4)} can be defined in terms of v 1 = ( 1 , 1 ) {\displaystyle \mathbf {v} _{1}=(1,1)} and v 2 = ( − 3 , 2 ) . {\displaystyle \mathbf {v} _{2}=(-3,2).} Thus, the three vectors are linearly dependent. Two vectors: Now consider the linear dependence of the two vectors v 1 = ( 1 , 1 ) {\displaystyle \mathbf {v} _{1}=(1,1)} and v 2 = ( − 3 , 2 ) , {\displaystyle \mathbf {v} _{2}=(-3,2),} and check, a 1 [ 1 1 ] + a 2 [ − 3 2 ] = [ 0 0 ] , {\displaystyle a_{1}{\begin{bmatrix}1\\1\end{bmatrix}}+a_{2}{\begin{bmatrix}-3\\2\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}},} or [ 1 − 3 1 2 ] [ a 1 a 2 ] = [ 0 0 ] . {\displaystyle {\begin{bmatrix}1&-3\\1&2\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}.} The same row reduction presented above yields, [ 1 0 0 1 ] [ a 1 a 2 ] = [ 0 0 ] . {\displaystyle {\begin{bmatrix}1&0\\0&1\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}.} This shows that a i = 0 , {\displaystyle a_{i}=0,} which means that the vectors v 1 = ( 1 , 1 ) {\displaystyle \mathbf {v} _{1}=(1,1)} and v 2 = ( − 3 , 2 ) {\displaystyle \mathbf {v} _{2}=(-3,2)} are linearly independent. === Vectors in R4 === In order to determine if the three vectors in R 4 , {\displaystyle \mathbb {R} ^{4},} v 1 = [ 1 4 2 − 3 ] , v 2 = [ 7 10 − 4 − 1 ] , v 3 = [ − 2 1 5 − 4 ] . {\displaystyle \mathbf {v} _{1}={\begin{bmatrix}1\\4\\2\\-3\end{bmatrix}},\mathbf {v} _{2}={\begin{bmatrix}7\\10\\-4\\-1\end{bmatrix}},\mathbf {v} _{3}={\begin{bmatrix}-2\\1\\5\\-4\end{bmatrix}}.} are linearly dependent, form the matrix equation, [ 1 7 − 2 4 10 1 2 − 4 5 − 3 − 1 − 4 ] [ a 1 a 2 a 3 ] = [ 0 0 0 0 ] . {\displaystyle {\begin{bmatrix}1&7&-2\\4&10&1\\2&-4&5\\-3&-1&-4\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\end{bmatrix}}={\begin{bmatrix}0\\0\\0\\0\end{bmatrix}}.} Row reduce this equation to obtain, [ 1 7 − 2 0 − 18 9 0 0 0 0 0 0 ] [ a 1 a 2 a 3 ] = [ 0 0 0 0 ] . {\displaystyle {\begin{bmatrix}1&7&-2\\0&-18&9\\0&0&0\\0&0&0\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\end{bmatrix}}={\begin{bmatrix}0\\0\\0\\0\end{bmatrix}}.} Rearrange to solve for a1 and a2 in terms of a3 and obtain, [ 1 7 0 − 18 ] [ a 1 a 2 ] = − a 3 [ − 2 9 ] . {\displaystyle {\begin{bmatrix}1&7\\0&-18\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\end{bmatrix}}=-a_{3}{\begin{bmatrix}-2\\9\end{bmatrix}}.} This equation is easily solved to define non-zero ai, a 1 = − 3 a 3 / 2 , a 2 = a 3 / 2 , {\displaystyle a_{1}=-3a_{3}/2,a_{2}=a_{3}/2,} where a 3 {\displaystyle a_{3}} can be chosen arbitrarily. Thus, the vectors v 1 , v 2 , {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},} and v 3 {\displaystyle \mathbf {v} _{3}} are linearly dependent. === Alternative method using determinants === An alternative method relies on the fact that n {\displaystyle n} vectors in R n {\displaystyle \mathbb {R} ^{n}} are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero. 
In this case, the matrix formed by the vectors is A = [ 1 − 3 1 2 ] . {\displaystyle A={\begin{bmatrix}1&-3\\1&2\end{bmatrix}}.} We may write a linear combination of the columns as A Λ = [ 1 − 3 1 2 ] [ λ 1 λ 2 ] . {\displaystyle A\Lambda ={\begin{bmatrix}1&-3\\1&2\end{bmatrix}}{\begin{bmatrix}\lambda _{1}\\\lambda _{2}\end{bmatrix}}.} We are interested in whether AΛ = 0 for some nonzero vector Λ. This depends on the determinant of A {\displaystyle A} , which is det A = 1 ⋅ 2 − 1 ⋅ ( − 3 ) = 5 ≠ 0. {\displaystyle \det A=1\cdot 2-1\cdot (-3)=5\neq 0.} Since the determinant is non-zero, the vectors ( 1 , 1 ) {\displaystyle (1,1)} and ( − 3 , 2 ) {\displaystyle (-3,2)} are linearly independent. More generally, suppose we have m {\displaystyle m} vectors of n {\displaystyle n} coordinates each, with m < n . {\displaystyle m<n.} Then A is an n×m matrix and Λ is a column vector with m {\displaystyle m} entries, and we are again interested in AΛ = 0. As we saw previously, this is equivalent to a list of n {\displaystyle n} equations. Consider the first m {\displaystyle m} rows of A {\displaystyle A} , the first m {\displaystyle m} equations; any solution of the full list of equations must also satisfy the reduced list. In fact, if ⟨i1,...,im⟩ is any list of m {\displaystyle m} rows, then the equation must hold for those rows: A ⟨ i 1 , … , i m ⟩ Λ = 0 . {\displaystyle A_{\langle i_{1},\dots ,i_{m}\rangle }\Lambda =\mathbf {0} .} Furthermore, the converse is true. That is, we can test whether the m {\displaystyle m} vectors are linearly dependent by testing whether det A ⟨ i 1 , … , i m ⟩ = 0 {\displaystyle \det A_{\langle i_{1},\dots ,i_{m}\rangle }=0} for all possible lists of m {\displaystyle m} rows. (In case m = n {\displaystyle m=n} , this requires only one determinant, as above. If m > n {\displaystyle m>n} , then it is a theorem that the vectors must be linearly dependent.) This fact is valuable for theory; in practical calculations more efficient methods are available. === More vectors than dimensions === If there are more vectors than dimensions, the vectors are linearly dependent. This is illustrated in the example above of three vectors in R 2 . {\displaystyle \mathbb {R} ^{2}.} == Natural basis vectors == Let V = R n {\displaystyle V=\mathbb {R} ^{n}} and consider the following elements in V {\displaystyle V} , known as the natural basis vectors: e 1 = ( 1 , 0 , 0 , … , 0 ) e 2 = ( 0 , 1 , 0 , … , 0 ) ⋮ e n = ( 0 , 0 , 0 , … , 1 ) . {\displaystyle {\begin{matrix}\mathbf {e} _{1}&=&(1,0,0,\ldots ,0)\\\mathbf {e} _{2}&=&(0,1,0,\ldots ,0)\\&\vdots \\\mathbf {e} _{n}&=&(0,0,0,\ldots ,1).\end{matrix}}} Then e 1 , e 2 , … , e n {\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\ldots ,\mathbf {e} _{n}} are linearly independent. == Linear independence of functions == Let V {\displaystyle V} be the vector space of all differentiable functions of a real variable t {\displaystyle t} . Then the functions e t {\displaystyle e^{t}} and e 2 t {\displaystyle e^{2t}} in V {\displaystyle V} are linearly independent. === Proof === Suppose a {\displaystyle a} and b {\displaystyle b} are two real numbers such that a e t + b e 2 t = 0 {\displaystyle ae^{t}+be^{2t}=0} for all values of t . {\displaystyle t.} Take the first derivative of the above equation: a e t + 2 b e 2 t = 0 {\displaystyle ae^{t}+2be^{2t}=0} for all values of t . {\displaystyle t.} We need to show that a = 0 {\displaystyle a=0} and b = 0. {\displaystyle b=0.} In order to do this, we subtract the first equation from the second, giving b e 2 t = 0 {\displaystyle be^{2t}=0} . 
Since e 2 t {\displaystyle e^{2t}} is never zero, b = 0. {\displaystyle b=0.} It follows that a = 0 {\displaystyle a=0} too. Therefore, according to the definition of linear independence, e t {\displaystyle e^{t}} and e 2 t {\displaystyle e^{2t}} are linearly independent. == Space of linear dependencies == A linear dependency or linear relation among vectors v1, ..., vn is a tuple (a1, ..., an) with n scalar components such that a 1 v 1 + ⋯ + a n v n = 0 . {\displaystyle a_{1}\mathbf {v} _{1}+\cdots +a_{n}\mathbf {v} _{n}=\mathbf {0} .} If such a linear dependence exists with at least one nonzero component, then the n vectors are linearly dependent. Linear dependencies among v1, ..., vn form a vector space. If the vectors are expressed by their coordinates, then the linear dependencies are the solutions of a homogeneous system of linear equations, with the coordinates of the vectors as coefficients. A basis of the vector space of linear dependencies can therefore be computed by Gaussian elimination. == Generalizations == === Affine independence === A set of vectors is said to be affinely dependent if at least one of the vectors in the set can be defined as an affine combination of the others. Otherwise, the set is called affinely independent. Any affine combination is a linear combination; therefore every affinely dependent set is linearly dependent. Contrapositively, every linearly independent set is affinely independent. Note that an affinely independent set is not necessarily linearly independent. Consider a set of m {\displaystyle m} vectors v 1 , … , v m {\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{m}} of size n {\displaystyle n} each, and consider the set of m {\displaystyle m} augmented vectors ( [ 1 v 1 ] , … , [ 1 v m ] ) {\textstyle \left(\left[{\begin{smallmatrix}1\\\mathbf {v} _{1}\end{smallmatrix}}\right],\ldots ,\left[{\begin{smallmatrix}1\\\mathbf {v} _{m}\end{smallmatrix}}\right]\right)} of size n + 1 {\displaystyle n+1} each. The original vectors are affinely independent if and only if the augmented vectors are linearly independent. === Linearly independent vector subspaces === Two vector subspaces M {\displaystyle M} and N {\displaystyle N} of a vector space X {\displaystyle X} are said to be linearly independent if M ∩ N = { 0 } . {\displaystyle M\cap N=\{0\}.} More generally, a collection M 1 , … , M d {\displaystyle M_{1},\ldots ,M_{d}} of subspaces of X {\displaystyle X} are said to be linearly independent if M i ∩ ∑ k ≠ i M k = { 0 } {\textstyle M_{i}\cap \sum _{k\neq i}M_{k}=\{0\}} for every index i , {\displaystyle i,} where ∑ k ≠ i M k = { m 1 + ⋯ + m i − 1 + m i + 1 + ⋯ + m d : m k ∈ M k for all k } = span ⁡ ⋃ k ∈ { 1 , … , i − 1 , i + 1 , … , d } M k . {\textstyle \sum _{k\neq i}M_{k}={\Big \{}m_{1}+\cdots +m_{i-1}+m_{i+1}+\cdots +m_{d}:m_{k}\in M_{k}{\text{ for all }}k{\Big \}}=\operatorname {span} \bigcup _{k\in \{1,\ldots ,i-1,i+1,\ldots ,d\}}M_{k}.} The vector space X {\displaystyle X} is said to be a direct sum of M 1 , … , M d {\displaystyle M_{1},\ldots ,M_{d}} if these subspaces are linearly independent and M 1 + ⋯ + M d = X . {\displaystyle M_{1}+\cdots +M_{d}=X.} == See also == Matroid – Abstraction of linear independence of vectors == References == == External links == "Linear independence", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Linearly Dependent Functions at Wolfram MathWorld. Tutorial and interactive program on Linear Independence. Introduction to Linear Independence at KhanAcademy.
Wikipedia:Linear inequality#0
In mathematics, a linear inequality is an inequality that involves a linear function. A linear inequality contains one of the symbols of inequality: < less than > greater than ≤ less than or equal to ≥ greater than or equal to ≠ not equal to A linear inequality looks exactly like a linear equation, with the inequality sign replacing the equality sign. == Linear inequalities of real numbers == === Two-dimensional linear inequalities === Two-dimensional linear inequalities are expressions in two variables of the form: a x + b y < c and a x + b y ≥ c , {\displaystyle ax+by<c{\text{ and }}ax+by\geq c,} where the inequalities may either be strict or not. The solution set of such an inequality can be graphically represented by a half-plane (all the points on one "side" of a fixed line) in the Euclidean plane. The line that determines the half-planes (ax + by = c) is not included in the solution set when the inequality is strict. A simple procedure to determine which half-plane is in the solution set is to calculate the value of ax + by at a point (x0, y0) which is not on the line and observe whether or not the inequality is satisfied. For example, to draw the solution set of x + 3y < 9, one first draws the line with equation x + 3y = 9 as a dotted line, to indicate that the line is not included in the solution set since the inequality is strict. Then, pick a convenient point not on the line, such as (0,0). Since 0 + 3(0) = 0 < 9, this point is in the solution set, so the half-plane containing this point (the half-plane "below" the line) is the solution set of this linear inequality. === Linear inequalities in general dimensions === In Rn linear inequalities are the expressions that may be written in the form f ( x ¯ ) < b {\displaystyle f({\bar {x}})<b} or f ( x ¯ ) ≤ b , {\displaystyle f({\bar {x}})\leq b,} where f is a linear form (also called a linear functional), x ¯ = ( x 1 , x 2 , … , x n ) {\displaystyle {\bar {x}}=(x_{1},x_{2},\ldots ,x_{n})} and b a constant real number. More concretely, this may be written out as a 1 x 1 + a 2 x 2 + ⋯ + a n x n < b {\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}<b} or a 1 x 1 + a 2 x 2 + ⋯ + a n x n ≤ b . {\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}\leq b.} Here x 1 , x 2 , . . . , x n {\displaystyle x_{1},x_{2},...,x_{n}} are called the unknowns, and a 1 , a 2 , . . . , a n {\displaystyle a_{1},a_{2},...,a_{n}} are called the coefficients. Alternatively, these may be written as g ( x ) < 0 {\displaystyle g(x)<0\,} or g ( x ) ≤ 0 , {\displaystyle g(x)\leq 0,} where g is an affine function. That is a 0 + a 1 x 1 + a 2 x 2 + ⋯ + a n x n < 0 {\displaystyle a_{0}+a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}<0} or a 0 + a 1 x 1 + a 2 x 2 + ⋯ + a n x n ≤ 0. {\displaystyle a_{0}+a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}\leq 0.} Note that any inequality containing a "greater than" or a "greater than or equal" sign can be rewritten with a "less than" or "less than or equal" sign, so there is no need to define linear inequalities using those signs. 
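The test-point procedure described above is straightforward to automate. The following is a minimal Python/NumPy sketch (an assumption of this illustration; the helper name satisfies is hypothetical) that evaluates the linear form at a point and compares it with the constant b, here for the example x + 3y < 9:

```python
import numpy as np

def satisfies(a, b, x, strict=True):
    """Return True if the point x satisfies a . x < b (or a . x <= b)."""
    value = float(np.dot(a, x))
    return value < b if strict else value <= b

# The example x + 3y < 9, tested at the convenient point (0, 0):
a, b = np.array([1.0, 3.0]), 9.0
print(satisfies(a, b, np.array([0.0, 0.0])))   # True: the origin's half-plane is the solution set
print(satisfies(a, b, np.array([9.0, 3.0])))   # False: 9 + 3*3 = 18 is not < 9
```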
=== Systems of linear inequalities === A system of linear inequalities is a set of linear inequalities in the same variables: a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n ≤ b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n ≤ b 2 ⋮ ⋮ ⋮ ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n ≤ b m {\displaystyle {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;\leq \;&&&b_{1}\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;\leq \;&&&b_{2}\\\vdots \;\;\;&&&&\vdots \;\;\;&&&&\vdots \;\;\;&&&&&\;\vdots \\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;\leq \;&&&b_{m}\\\end{alignedat}}} Here x 1 , x 2 , . . . , x n {\displaystyle x_{1},\ x_{2},...,x_{n}} are the unknowns, a 11 , a 12 , . . . , a m n {\displaystyle a_{11},\ a_{12},...,\ a_{mn}} are the coefficients of the system, and b 1 , b 2 , . . . , b m {\displaystyle b_{1},\ b_{2},...,b_{m}} are the constant terms. This can be concisely written as the matrix inequality A x ≤ b , {\displaystyle Ax\leq b,} where A is an m×n matrix, x is an n×1 column vector of variables, and b is an m×1 column vector of constants. In the above systems both strict and non-strict inequalities may be used. Not all systems of linear inequalities have solutions. Variables can be eliminated from systems of linear inequalities using Fourier–Motzkin elimination. === Applications === ==== Polyhedra ==== The set of solutions of a real linear inequality constitutes a half-space of the n-dimensional real space, one of the two defined by the corresponding linear equation. The set of solutions of a system of linear inequalities corresponds to the intersection of the half-spaces defined by individual inequalities. It is a convex set, since the half-spaces are convex sets, and the intersection of a set of convex sets is also convex. In the non-degenerate cases this convex set is a convex polyhedron (possibly unbounded, e.g., a half-space, a slab between two parallel half-spaces or a polyhedral cone). It may also be empty or a convex polyhedron of lower dimension confined to an affine subspace of the n-dimensional space Rn. ==== Linear programming ==== A linear programming problem seeks to optimize (find a maximum or minimum value) a function (called the objective function) subject to a number of constraints on the variables which, in general, are linear inequalities. The list of constraints is a system of linear inequalities. == Generalization == The above definition requires well-defined operations of addition, multiplication and comparison; therefore, the notion of a linear inequality may be extended to ordered rings, and in particular to ordered fields. == References == == Sources == Angel, Allen R.; Porter, Stuart R. (1989), A Survey of Mathematics with Applications (3rd ed.), Addison-Wesley, ISBN 0-201-13696-1 Miller, Charles D.; Heeren, Vern E. (1986), Mathematical Ideas (5th ed.), Scott, Foresman, ISBN 0-673-18276-2 == External links ==
Wikipedia:Linear map#0
In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping V → W {\displaystyle V\to W} between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism. If a linear map is a bijection then it is called a linear isomorphism. In the case where V = W {\displaystyle V=W} , a linear map is called a linear endomorphism. Sometimes the term linear operator refers to this case, but the term "linear operator" can have different meanings for different conventions: for example, it can be used to emphasize that V {\displaystyle V} and W {\displaystyle W} are real vector spaces (not necessarily with V = W {\displaystyle V=W} ), or it can be used to emphasize that V {\displaystyle V} is a function space, which is a common convention in functional analysis. Sometimes the term linear function has the same meaning as linear map, while in analysis it does not. A linear map from V {\displaystyle V} to W {\displaystyle W} always maps the origin of V {\displaystyle V} to the origin of W {\displaystyle W} . Moreover, it maps linear subspaces in V {\displaystyle V} onto linear subspaces in W {\displaystyle W} (possibly of a lower dimension); for example, it maps a plane through the origin in V {\displaystyle V} to either a plane through the origin in W {\displaystyle W} , a line through the origin in W {\displaystyle W} , or just the origin in W {\displaystyle W} . Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of category theory, linear maps are the morphisms of vector spaces, and they form a category equivalent to the one of matrices. == Definition and first consequences == Let V {\displaystyle V} and W {\displaystyle W} be vector spaces over the same field K {\displaystyle K} . A function f : V → W {\displaystyle f:V\to W} is said to be a linear map if for any two vectors u , v ∈ V {\textstyle \mathbf {u} ,\mathbf {v} \in V} and any scalar c ∈ K {\displaystyle c\in K} the following two conditions are satisfied: Additivity / operation of addition f ( u + v ) = f ( u ) + f ( v ) {\displaystyle f(\mathbf {u} +\mathbf {v} )=f(\mathbf {u} )+f(\mathbf {v} )} Homogeneity of degree 1 / operation of scalar multiplication f ( c u ) = c f ( u ) {\displaystyle f(c\mathbf {u} )=cf(\mathbf {u} )} Thus, a linear map is said to be operation preserving. In other words, it does not matter whether the linear map is applied before (the right hand sides of the above examples) or after (the left hand sides of the examples) the operations of addition and scalar multiplication. By the associativity of the addition operation denoted as +, for any vectors u 1 , … , u n ∈ V {\textstyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{n}\in V} and scalars c 1 , … , c n ∈ K , {\textstyle c_{1},\ldots ,c_{n}\in K,} the following equality holds: f ( c 1 u 1 + ⋯ + c n u n ) = c 1 f ( u 1 ) + ⋯ + c n f ( u n ) . {\displaystyle f(c_{1}\mathbf {u} _{1}+\cdots +c_{n}\mathbf {u} _{n})=c_{1}f(\mathbf {u} _{1})+\cdots +c_{n}f(\mathbf {u} _{n}).} Thus a linear map is one which preserves linear combinations. 
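As a quick numerical illustration of these two conditions, the following Python/NumPy sketch (an assumption of this illustration; nothing in the definition requires matrices or any particular software) checks additivity and homogeneity for a map of the form f(x) = Ax:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))    # an arbitrary real 3x2 matrix
f = lambda x: A @ x                # the map f(x) = Ax from R^2 to R^3

u = rng.standard_normal(2)
v = rng.standard_normal(2)
c = 2.5

assert np.allclose(f(u + v), f(u) + f(v))            # additivity
assert np.allclose(f(c * u), c * f(u))               # homogeneity of degree 1
assert np.allclose(f(3*u - 4*v), 3*f(u) - 4*f(v))    # hence linear combinations are preserved
print("all checks passed")
```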
Denoting the zero elements of the vector spaces V {\displaystyle V} and W {\displaystyle W} by 0 V {\textstyle \mathbf {0} _{V}} and 0 W {\textstyle \mathbf {0} _{W}} respectively, it follows that f ( 0 V ) = 0 W . {\textstyle f(\mathbf {0} _{V})=\mathbf {0} _{W}.} Let c = 0 {\displaystyle c=0} and v ∈ V {\textstyle \mathbf {v} \in V} in the equation for homogeneity of degree 1: f ( 0 V ) = f ( 0 v ) = 0 f ( v ) = 0 W . {\displaystyle f(\mathbf {0} _{V})=f(0\mathbf {v} )=0f(\mathbf {v} )=\mathbf {0} _{W}.} A linear map V → K {\displaystyle V\to K} with K {\displaystyle K} viewed as a one-dimensional vector space over itself is called a linear functional. These statements generalize to any left-module R M {\textstyle {}_{R}M} over a ring R {\displaystyle R} without modification, and to any right-module upon reversing of the scalar multiplication. == Examples == A prototypical example that gives linear maps their name is a function f : R → R : x ↦ c x {\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto cx} , of which the graph is a line through the origin. More generally, any homothety v ↦ c v {\textstyle \mathbf {v} \mapsto c\mathbf {v} } centered in the origin of a vector space is a linear map (here c is a scalar). The zero map x ↦ 0 {\textstyle \mathbf {x} \mapsto \mathbf {0} } between two vector spaces (over the same field) is linear. The identity map on any module is a linear operator. For real numbers, the map x ↦ x 2 {\textstyle x\mapsto x^{2}} is not linear. For real numbers, the map x ↦ x + 1 {\textstyle x\mapsto x+1} is not linear (but is an affine transformation). If A {\displaystyle A} is an m × n {\displaystyle m\times n} real matrix, then A {\displaystyle A} defines a linear map from R n {\displaystyle \mathbb {R} ^{n}} to R m {\displaystyle \mathbb {R} ^{m}} by sending a column vector x ∈ R n {\displaystyle \mathbf {x} \in \mathbb {R} ^{n}} to the column vector A x ∈ R m {\displaystyle A\mathbf {x} \in \mathbb {R} ^{m}} . Conversely, any linear map between finite-dimensional vector spaces can be represented in this manner; see § Matrices below. If f : V → W {\textstyle f:V\to W} is an isometry between real normed spaces such that f ( 0 ) = 0 {\textstyle f(0)=0} then f {\displaystyle f} is a linear map. This result is not necessarily true for complex normed spaces. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions (a linear operator is a linear endomorphism, that is, a linear map with the same domain and codomain). Indeed, d d x ( a f ( x ) + b g ( x ) ) = a d f ( x ) d x + b d g ( x ) d x . {\displaystyle {\frac {d}{dx}}\left(af(x)+bg(x)\right)=a{\frac {df(x)}{dx}}+b{\frac {dg(x)}{dx}}.} A definite integral over some interval I is a linear map from the space of all real-valued integrable functions on I to R {\displaystyle \mathbb {R} } . Indeed, ∫ u v ( a f ( x ) + b g ( x ) ) d x = a ∫ u v f ( x ) d x + b ∫ u v g ( x ) d x . {\displaystyle \int _{u}^{v}\left(af(x)+bg(x)\right)dx=a\int _{u}^{v}f(x)dx+b\int _{u}^{v}g(x)dx.} An indefinite integral (or antiderivative) with a fixed integration starting point defines a linear map from the space of all real-valued integrable functions on R {\displaystyle \mathbb {R} } to the space of all real-valued, differentiable functions on R {\displaystyle \mathbb {R} } . 
Without a fixed starting point, the antiderivative maps to the quotient space of the differentiable functions by the linear space of constant functions. If V {\displaystyle V} and W {\displaystyle W} are finite-dimensional vector spaces over a field F, of respective dimensions m and n, then the function that maps linear maps f : V → W {\textstyle f:V\to W} to n × m matrices in the way described in § Matrices (below) is a linear map, and even a linear isomorphism. The expected value of a random variable (which is in fact a function, and as such an element of a vector space) is linear, as for random variables X {\displaystyle X} and Y {\displaystyle Y} we have E [ X + Y ] = E [ X ] + E [ Y ] {\displaystyle E[X+Y]=E[X]+E[Y]} and E [ a X ] = a E [ X ] {\displaystyle E[aX]=aE[X]} , but the variance of a random variable is not linear. === Linear extensions === Often, a linear map is constructed by defining it on a subset of a vector space and then extending by linearity to the linear span of the domain. Suppose X {\displaystyle X} and Y {\displaystyle Y} are vector spaces and f : S → Y {\displaystyle f:S\to Y} is a function defined on some subset S ⊆ X . {\displaystyle S\subseteq X.} Then a linear extension of f {\displaystyle f} to X , {\displaystyle X,} if it exists, is a linear map F : X → Y {\displaystyle F:X\to Y} defined on X {\displaystyle X} that extends f {\displaystyle f} (meaning that F ( s ) = f ( s ) {\displaystyle F(s)=f(s)} for all s ∈ S {\displaystyle s\in S} ) and takes its values from the codomain of f . {\displaystyle f.} When the subset S {\displaystyle S} is a vector subspace of X {\displaystyle X} then a ( Y {\displaystyle Y} -valued) linear extension of f {\displaystyle f} to all of X {\displaystyle X} is guaranteed to exist if (and only if) f : S → Y {\displaystyle f:S\to Y} is a linear map. In particular, if f {\displaystyle f} has a linear extension to span ⁡ S , {\displaystyle \operatorname {span} S,} then it has a linear extension to all of X . {\displaystyle X.} The map f : S → Y {\displaystyle f:S\to Y} can be extended to a linear map F : span ⁡ S → Y {\displaystyle F:\operatorname {span} S\to Y} if and only if whenever n > 0 {\displaystyle n>0} is an integer, c 1 , … , c n {\displaystyle c_{1},\ldots ,c_{n}} are scalars, and s 1 , … , s n ∈ S {\displaystyle s_{1},\ldots ,s_{n}\in S} are vectors such that 0 = c 1 s 1 + ⋯ + c n s n , {\displaystyle 0=c_{1}s_{1}+\cdots +c_{n}s_{n},} then necessarily 0 = c 1 f ( s 1 ) + ⋯ + c n f ( s n ) . {\displaystyle 0=c_{1}f\left(s_{1}\right)+\cdots +c_{n}f\left(s_{n}\right).} If a linear extension of f : S → Y {\displaystyle f:S\to Y} exists then the linear extension F : span ⁡ S → Y {\displaystyle F:\operatorname {span} S\to Y} is unique and F ( c 1 s 1 + ⋯ + c n s n ) = c 1 f ( s 1 ) + ⋯ + c n f ( s n ) {\displaystyle F\left(c_{1}s_{1}+\cdots +c_{n}s_{n}\right)=c_{1}f\left(s_{1}\right)+\cdots +c_{n}f\left(s_{n}\right)} holds for all n , c 1 , … , c n , {\displaystyle n,c_{1},\ldots ,c_{n},} and s 1 , … , s n {\displaystyle s_{1},\ldots ,s_{n}} as above. If S {\displaystyle S} is linearly independent then every function f : S → Y {\displaystyle f:S\to Y} into any vector space has a linear extension to a (linear) map span ⁡ S → Y {\displaystyle \;\operatorname {span} S\to Y} (the converse is also true). 
For example, if X = R 2 {\displaystyle X=\mathbb {R} ^{2}} and Y = R {\displaystyle Y=\mathbb {R} } then the assignment ( 1 , 0 ) → − 1 {\displaystyle (1,0)\to -1} and ( 0 , 1 ) → 2 {\displaystyle (0,1)\to 2} can be linearly extended from the linearly independent set of vectors S := { ( 1 , 0 ) , ( 0 , 1 ) } {\displaystyle S:=\{(1,0),(0,1)\}} to a linear map on span ⁡ { ( 1 , 0 ) , ( 0 , 1 ) } = R 2 . {\displaystyle \operatorname {span} \{(1,0),(0,1)\}=\mathbb {R} ^{2}.} The unique linear extension F : R 2 → R {\displaystyle F:\mathbb {R} ^{2}\to \mathbb {R} } is the map that sends ( x , y ) = x ( 1 , 0 ) + y ( 0 , 1 ) ∈ R 2 {\displaystyle (x,y)=x(1,0)+y(0,1)\in \mathbb {R} ^{2}} to F ( x , y ) = x ( − 1 ) + y ( 2 ) = − x + 2 y . {\displaystyle F(x,y)=x(-1)+y(2)=-x+2y.} Every (scalar-valued) linear functional f {\displaystyle f} defined on a vector subspace of a real or complex vector space X {\displaystyle X} has a linear extension to all of X . {\displaystyle X.} Indeed, the Hahn–Banach dominated extension theorem even guarantees that when this linear functional f {\displaystyle f} is dominated by some given seminorm p : X → R {\displaystyle p:X\to \mathbb {R} } (meaning that | f ( m ) | ≤ p ( m ) {\displaystyle |f(m)|\leq p(m)} holds for all m {\displaystyle m} in the domain of f {\displaystyle f} ) then there exists a linear extension to X {\displaystyle X} that is also dominated by p . {\displaystyle p.} == Matrices == If V {\displaystyle V} and W {\displaystyle W} are finite-dimensional vector spaces and a basis is defined for each vector space, then every linear map from V {\displaystyle V} to W {\displaystyle W} can be represented by a matrix. This is useful because it allows concrete calculations. Matrices yield examples of linear maps: if A {\displaystyle A} is a real m × n {\displaystyle m\times n} matrix, then f ( x ) = A x {\displaystyle f(\mathbf {x} )=A\mathbf {x} } describes a linear map R n → R m {\displaystyle \mathbb {R} ^{n}\to \mathbb {R} ^{m}} (see Euclidean space). Let { v 1 , … , v n } {\displaystyle \{\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}\}} be a basis for V {\displaystyle V} . Then every vector v ∈ V {\displaystyle \mathbf {v} \in V} is uniquely determined by the coefficients c 1 , … , c n {\displaystyle c_{1},\ldots ,c_{n}} in the field R {\displaystyle \mathbb {R} } : v = c 1 v 1 + ⋯ + c n v n . {\displaystyle \mathbf {v} =c_{1}\mathbf {v} _{1}+\cdots +c_{n}\mathbf {v} _{n}.} If f : V → W {\textstyle f:V\to W} is a linear map, f ( v ) = f ( c 1 v 1 + ⋯ + c n v n ) = c 1 f ( v 1 ) + ⋯ + c n f ( v n ) , {\displaystyle f(\mathbf {v} )=f(c_{1}\mathbf {v} _{1}+\cdots +c_{n}\mathbf {v} _{n})=c_{1}f(\mathbf {v} _{1})+\cdots +c_{n}f\left(\mathbf {v} _{n}\right),} which implies that the function f is entirely determined by the vectors f ( v 1 ) , … , f ( v n ) {\displaystyle f(\mathbf {v} _{1}),\ldots ,f(\mathbf {v} _{n})} . Now let { w 1 , … , w m } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}} be a basis for W {\displaystyle W} . Then we can represent each vector f ( v j ) {\displaystyle f(\mathbf {v} _{j})} as f ( v j ) = a 1 j w 1 + ⋯ + a m j w m . {\displaystyle f\left(\mathbf {v} _{j}\right)=a_{1j}\mathbf {w} _{1}+\cdots +a_{mj}\mathbf {w} _{m}.} Thus, the function f {\displaystyle f} is entirely determined by the values of a i j {\displaystyle a_{ij}} . 
If we put these values into an m × n {\displaystyle m\times n} matrix M {\displaystyle M} , then we can conveniently use it to compute the vector output of f {\displaystyle f} for any vector in V {\displaystyle V} . To get M {\displaystyle M} , every column j {\displaystyle j} of M {\displaystyle M} is a vector ( a 1 j ⋮ a m j ) {\displaystyle {\begin{pmatrix}a_{1j}\\\vdots \\a_{mj}\end{pmatrix}}} corresponding to f ( v j ) {\displaystyle f(\mathbf {v} _{j})} as defined above. To define it more clearly, for some column j {\displaystyle j} that corresponds to the mapping f ( v j ) {\displaystyle f(\mathbf {v} _{j})} , M = ( ⋯ a 1 j ⋯ ⋮ a m j ) {\displaystyle \mathbf {M} ={\begin{pmatrix}\ \cdots &a_{1j}&\cdots \ \\&\vdots &\\&a_{mj}&\end{pmatrix}}} where M {\displaystyle M} is the matrix of f {\displaystyle f} . In other words, every column j = 1 , … , n {\displaystyle j=1,\ldots ,n} has a corresponding vector f ( v j ) {\displaystyle f(\mathbf {v} _{j})} whose coordinates a 1 j , ⋯ , a m j {\displaystyle a_{1j},\cdots ,a_{mj}} are the elements of column j {\displaystyle j} . A single linear map may be represented by many matrices. This is because the values of the elements of a matrix depend on the bases chosen. The matrices of a linear transformation can be represented visually: Matrix for T {\textstyle T} relative to B {\textstyle B} : A {\textstyle A} Matrix for T {\textstyle T} relative to B ′ {\textstyle B'} : A ′ {\textstyle A'} Transition matrix from B ′ {\textstyle B'} to B {\textstyle B} : P {\textstyle P} Transition matrix from B {\textstyle B} to B ′ {\textstyle B'} : P − 1 {\textstyle P^{-1}} Such that starting in the bottom left corner [ v ] B ′ {\textstyle \left[\mathbf {v} \right]_{B'}} and looking for the bottom right corner [ T ( v ) ] B ′ {\textstyle \left[T\left(\mathbf {v} \right)\right]_{B'}} , one would left-multiply—that is, A ′ [ v ] B ′ = [ T ( v ) ] B ′ {\textstyle A'\left[\mathbf {v} \right]_{B'}=\left[T\left(\mathbf {v} \right)\right]_{B'}} . The equivalent method would be the "longer" method going clockwise from the same point such that [ v ] B ′ {\textstyle \left[\mathbf {v} \right]_{B'}} is left-multiplied with P − 1 A P {\textstyle P^{-1}AP} , or P − 1 A P [ v ] B ′ = [ T ( v ) ] B ′ {\textstyle P^{-1}AP\left[\mathbf {v} \right]_{B'}=\left[T\left(\mathbf {v} \right)\right]_{B'}} . === Examples in two dimensions === In two-dimensional space R2 linear maps are described by 2 × 2 matrices. 
These are some examples: rotation by 90 degrees counterclockwise: A = ( 0 − 1 1 0 ) {\displaystyle \mathbf {A} ={\begin{pmatrix}0&-1\\1&0\end{pmatrix}}} by an angle θ counterclockwise: A = ( cos ⁡ θ − sin ⁡ θ sin ⁡ θ cos ⁡ θ ) {\displaystyle \mathbf {A} ={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}} reflection through the x axis: A = ( 1 0 0 − 1 ) {\displaystyle \mathbf {A} ={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}} through the y axis: A = ( − 1 0 0 1 ) {\displaystyle \mathbf {A} ={\begin{pmatrix}-1&0\\0&1\end{pmatrix}}} through a line through the origin making an angle θ with the x axis: A = ( cos ⁡ 2 θ sin ⁡ 2 θ sin ⁡ 2 θ − cos ⁡ 2 θ ) {\displaystyle \mathbf {A} ={\begin{pmatrix}\cos 2\theta &\sin 2\theta \\\sin 2\theta &-\cos 2\theta \end{pmatrix}}} scaling by 2 in all directions: A = ( 2 0 0 2 ) = 2 I {\displaystyle \mathbf {A} ={\begin{pmatrix}2&0\\0&2\end{pmatrix}}=2\mathbf {I} } horizontal shear mapping: A = ( 1 m 0 1 ) {\displaystyle \mathbf {A} ={\begin{pmatrix}1&m\\0&1\end{pmatrix}}} skew of the y axis by an angle θ: A = ( 1 − sin ⁡ θ 0 cos ⁡ θ ) {\displaystyle \mathbf {A} ={\begin{pmatrix}1&-\sin \theta \\0&\cos \theta \end{pmatrix}}} squeeze mapping: A = ( k 0 0 1 k ) {\displaystyle \mathbf {A} ={\begin{pmatrix}k&0\\0&{\frac {1}{k}}\end{pmatrix}}} projection onto the y axis: A = ( 0 0 0 1 ) . {\displaystyle \mathbf {A} ={\begin{pmatrix}0&0\\0&1\end{pmatrix}}.} If a linear map is only composed of rotation, reflection, and/or uniform scaling, then the linear map is a conformal linear transformation. == Vector space of linear maps == The composition of linear maps is linear: if f : V → W {\displaystyle f:V\to W} and g : W → Z {\textstyle g:W\to Z} are linear, then so is their composition g ∘ f : V → Z {\textstyle g\circ f:V\to Z} . It follows from this that the class of all vector spaces over a given field K, together with K-linear maps as morphisms, forms a category. The inverse of a linear map, when defined, is again a linear map. If f 1 : V → W {\textstyle f_{1}:V\to W} and f 2 : V → W {\textstyle f_{2}:V\to W} are linear, then so is their pointwise sum f 1 + f 2 {\displaystyle f_{1}+f_{2}} , which is defined by ( f 1 + f 2 ) ( x ) = f 1 ( x ) + f 2 ( x ) {\displaystyle (f_{1}+f_{2})(\mathbf {x} )=f_{1}(\mathbf {x} )+f_{2}(\mathbf {x} )} . If f : V → W {\textstyle f:V\to W} is linear and α {\textstyle \alpha } is an element of the ground field K {\textstyle K} , then the map α f {\textstyle \alpha f} , defined by ( α f ) ( x ) = α ( f ( x ) ) {\textstyle (\alpha f)(\mathbf {x} )=\alpha (f(\mathbf {x} ))} , is also linear. Thus the set L ( V , W ) {\textstyle {\mathcal {L}}(V,W)} of linear maps from V {\textstyle V} to W {\textstyle W} itself forms a vector space over K {\textstyle K} , sometimes denoted Hom ⁡ ( V , W ) {\textstyle \operatorname {Hom} (V,W)} . Furthermore, in the case that V = W {\textstyle V=W} , this vector space, denoted End ⁡ ( V ) {\textstyle \operatorname {End} (V)} , is an associative algebra under composition of maps, since the composition of two linear maps is again a linear map, and the composition of maps is always associative. This case is discussed in more detail below. Returning to the finite-dimensional case, if bases have been chosen, then the composition of linear maps corresponds to matrix multiplication, the addition of linear maps corresponds to matrix addition, and the multiplication of linear maps with scalars corresponds to the multiplication of matrices with scalars. 
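The correspondence between composition of linear maps and matrix multiplication can be verified numerically. A minimal Python/NumPy sketch (illustrative only; the software choice is an assumption), using the rotation matrices given above:

```python
import numpy as np

def rotation(theta):
    """Matrix of the counterclockwise rotation of R^2 by theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

a, b = 0.7, 1.1
# Composition of linear maps corresponds to matrix multiplication:
# rotating by b and then by a is the single rotation by a + b.
assert np.allclose(rotation(a) @ rotation(b), rotation(a + b))

# Pointwise sum of linear maps corresponds to matrix addition:
x = np.array([1.0, 2.0])
F, G = rotation(a), rotation(b)
assert np.allclose(F @ x + G @ x, (F + G) @ x)
print("composition and sum behave as matrix product and matrix sum")
```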
=== Endomorphisms and automorphisms === A linear transformation f : V → V {\textstyle f:V\to V} is an endomorphism of V {\textstyle V} ; the set of all such endomorphisms End ⁡ ( V ) {\textstyle \operatorname {End} (V)} together with addition, composition and scalar multiplication as defined above forms an associative algebra with identity element over the field K {\textstyle K} (and in particular a ring). The multiplicative identity element of this algebra is the identity map id : V → V {\textstyle \operatorname {id} :V\to V} . An endomorphism of V {\textstyle V} that is also an isomorphism is called an automorphism of V {\textstyle V} . The composition of two automorphisms is again an automorphism, and the set of all automorphisms of V {\textstyle V} forms a group, the automorphism group of V {\textstyle V} which is denoted by Aut ⁡ ( V ) {\textstyle \operatorname {Aut} (V)} or GL ⁡ ( V ) {\textstyle \operatorname {GL} (V)} . Since the automorphisms are precisely those endomorphisms which possess inverses under composition, Aut ⁡ ( V ) {\textstyle \operatorname {Aut} (V)} is the group of units in the ring End ⁡ ( V ) {\textstyle \operatorname {End} (V)} . If V {\textstyle V} has finite dimension n {\textstyle n} , then End ⁡ ( V ) {\textstyle \operatorname {End} (V)} is isomorphic to the associative algebra of all n × n {\textstyle n\times n} matrices with entries in K {\textstyle K} . The automorphism group of V {\textstyle V} is isomorphic to the general linear group GL ⁡ ( n , K ) {\textstyle \operatorname {GL} (n,K)} of all n × n {\textstyle n\times n} invertible matrices with entries in K {\textstyle K} . == Kernel, image and the rank–nullity theorem == If f : V → W {\textstyle f:V\to W} is linear, we define the kernel and the image or range of f {\textstyle f} by ker ⁡ ( f ) = { x ∈ V : f ( x ) = 0 } im ⁡ ( f ) = { w ∈ W : w = f ( x ) , x ∈ V } {\displaystyle {\begin{aligned}\ker(f)&=\{\,\mathbf {x} \in V:f(\mathbf {x} )=\mathbf {0} \,\}\\\operatorname {im} (f)&=\{\,\mathbf {w} \in W:\mathbf {w} =f(\mathbf {x} ),\mathbf {x} \in V\,\}\end{aligned}}} ker ⁡ ( f ) {\textstyle \ker(f)} is a subspace of V {\textstyle V} and im ⁡ ( f ) {\textstyle \operatorname {im} (f)} is a subspace of W {\textstyle W} . The following dimension formula is known as the rank–nullity theorem: dim ⁡ ( ker ⁡ ( f ) ) + dim ⁡ ( im ⁡ ( f ) ) = dim ⁡ ( V ) . {\displaystyle \dim(\ker(f))+\dim(\operatorname {im} (f))=\dim(V).} The number dim ⁡ ( im ⁡ ( f ) ) {\textstyle \dim(\operatorname {im} (f))} is also called the rank of f {\textstyle f} and written as rank ⁡ ( f ) {\textstyle \operatorname {rank} (f)} , or sometimes, ρ ( f ) {\textstyle \rho (f)} ; the number dim ⁡ ( ker ⁡ ( f ) ) {\textstyle \dim(\ker(f))} is called the nullity of f {\textstyle f} and written as null ⁡ ( f ) {\textstyle \operatorname {null} (f)} or ν ( f ) {\textstyle \nu (f)} . If V {\textstyle V} and W {\textstyle W} are finite-dimensional, bases have been chosen and f {\textstyle f} is represented by the matrix A {\textstyle A} , then the rank and nullity of f {\textstyle f} are equal to the rank and nullity of the matrix A {\textstyle A} , respectively. == Cokernel == A subtler invariant of a linear transformation f : V → W {\textstyle f:V\to W} is the cokernel, which is defined as coker ⁡ ( f ) := W / f ( V ) = W / im ⁡ ( f ) . {\displaystyle \operatorname {coker} (f):=W/f(V)=W/\operatorname {im} (f).} This is the dual notion to the kernel: just as the kernel is a subspace of the domain, the co-kernel is a quotient space of the target. 
Formally, one has the exact sequence 0 → ker ⁡ ( f ) → V → W → coker ⁡ ( f ) → 0. {\displaystyle 0\to \ker(f)\to V\to W\to \operatorname {coker} (f)\to 0.} These can be interpreted thus: given a linear equation f(v) = w to solve, the kernel is the space of solutions to the homogeneous equation f(v) = 0, and its dimension is the number of degrees of freedom in the space of solutions, if it is not empty; the co-kernel is the space of constraints that the solutions must satisfy, and its dimension is the maximal number of independent constraints. The dimension of the co-kernel and the dimension of the image (the rank) add up to the dimension of the target space. For finite dimensions, this means that the dimension of the quotient space W/f(V) is the dimension of the target space minus the dimension of the image. As a simple example, consider the map f: R2 → R2, given by f(x, y) = (0, y). Then for an equation f(x, y) = (a, b) to have a solution, we must have a = 0 (one constraint), and in that case the solution space is (x, b) or equivalently stated, (0, b) + (x, 0), (one degree of freedom). The kernel may be expressed as the subspace (x, 0) < V: the value of x is the freedom in a solution – while the cokernel may be expressed via the map W → R, ( a , b ) ↦ ( a ) {\textstyle (a,b)\mapsto (a)} : given a vector (a, b), the value of a is the obstruction to there being a solution. An example illustrating the infinite-dimensional case is afforded by the map f: R∞ → R∞, { a n } ↦ { b n } {\textstyle \left\{a_{n}\right\}\mapsto \left\{b_{n}\right\}} with b1 = 0 and bn + 1 = an for n > 0. Its image consists of all sequences with first element 0, and thus its cokernel consists of the classes of sequences with identical first element. Thus, whereas its kernel has dimension 0 (it maps only the zero sequence to the zero sequence), its co-kernel has dimension 1. Since the domain and the target space are the same, the rank and the dimension of the kernel add up to the same sum as the rank and the dimension of the co-kernel ( ℵ 0 + 0 = ℵ 0 + 1 {\textstyle \aleph _{0}+0=\aleph _{0}+1} ), but in the infinite-dimensional case it cannot be inferred that the kernel and the co-kernel of an endomorphism have the same dimension (0 ≠ 1). The reverse situation obtains for the map h: R∞ → R∞, { a n } ↦ { c n } {\textstyle \left\{a_{n}\right\}\mapsto \left\{c_{n}\right\}} with cn = an + 1. Its image is the entire target space, and hence its co-kernel has dimension 0, but since it maps all sequences in which only the first element is non-zero to the zero sequence, its kernel has dimension 1. === Index === For a linear operator with finite-dimensional kernel and co-kernel, one may define index as: ind ⁡ ( f ) := dim ⁡ ( ker ⁡ ( f ) ) − dim ⁡ ( coker ⁡ ( f ) ) , {\displaystyle \operatorname {ind} (f):=\dim(\ker(f))-\dim(\operatorname {coker} (f)),} namely the degrees of freedom minus the number of constraints. For a transformation between finite-dimensional vector spaces, this is just the difference dim(V) − dim(W), by rank–nullity. This gives an indication of how many solutions or how many constraints one has: if mapping from a larger space to a smaller one, the map may be onto, and thus will have degrees of freedom even without constraints. Conversely, if mapping from a smaller space to a larger one, the map cannot be onto, and thus one will have constraints even without degrees of freedom. The index of an operator is precisely the Euler characteristic of the 2-term complex 0 → V → W → 0. 
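In the finite-dimensional case, the quantities in this discussion can be read off from a matrix representation. A minimal Python/NumPy sketch (an illustration under assumed tooling, not part of the formal development), applied to the example f(x, y) = (0, y) above:

```python
import numpy as np

# The map f(x, y) = (0, y) from the example, as a 2x2 matrix.
A = np.array([[0.0, 0.0],
              [0.0, 1.0]])

m, n = A.shape                      # target and domain dimensions
rank = np.linalg.matrix_rank(A)     # dim im(f)
nullity = n - rank                  # dim ker(f), by rank-nullity
coker = m - rank                    # dim coker(f) = dim W - dim im(f)

print(rank, nullity, coker)         # 1 1 1
print(nullity - coker)              # index = 0 = dim V - dim W
```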
In operator theory, the index of Fredholm operators is an object of study, with a major result being the Atiyah–Singer index theorem. == Algebraic classifications of linear transformations == No classification of linear maps could be exhaustive. The following incomplete list enumerates some important classifications that do not require any additional structure on the vector space. Let V and W denote vector spaces over a field F and let T: V → W be a linear map. === Monomorphism === T is said to be injective or a monomorphism if any of the following equivalent conditions are true: T is one-to-one as a map of sets. ker T = {0V} dim(ker T) = 0 T is monic or left-cancellable, which is to say, for any vector space U and any pair of linear maps R: U → V and S: U → V, the equation TR = TS implies R = S. T is left-invertible, which is to say there exists a linear map S: W → V such that ST is the identity map on V. === Epimorphism === T is said to be surjective or an epimorphism if any of the following equivalent conditions are true: T is onto as a map of sets. coker T = {0W} T is epic or right-cancellable, which is to say, for any vector space U and any pair of linear maps R: W → U and S: W → U, the equation RT = ST implies R = S. T is right-invertible, which is to say there exists a linear map S: W → V such that TS is the identity map on W. === Isomorphism === T is said to be an isomorphism if it is both left- and right-invertible. This is equivalent to T being both one-to-one and onto (a bijection of sets) or also to T being both epic and monic, and so being a bimorphism. If T: V → V is an endomorphism, then: If, for some positive integer n, the n-th iterate of T, Tn, is identically zero, then T is said to be nilpotent. If T2 = T, then T is said to be idempotent. If T = kI, where k is some scalar, then T is said to be a scaling transformation or scalar multiplication map; see scalar matrix. == Change of basis == Given a linear map which is an endomorphism whose matrix is A, in the basis B of the space it transforms vector coordinates [u] as [v] = A[u]. Since vector coordinates change with the inverse of the basis (vector coordinates are contravariant), the old coordinates are expressed in terms of the new ones as [u] = B[u′] and [v] = B[v′]. Substituting this in the first expression gives B [ v ′ ] = A B [ u ′ ] {\displaystyle B\left[v'\right]=AB\left[u'\right]} hence [ v ′ ] = B − 1 A B [ u ′ ] = A ′ [ u ′ ] . {\displaystyle \left[v'\right]=B^{-1}AB\left[u'\right]=A'\left[u'\right].} Therefore, the matrix in the new basis is A′ = B−1AB, where B is the change-of-basis matrix of the given basis. For this reason, linear maps are said to be 1-co- 1-contra-variant objects, or type (1, 1) tensors. == Continuity == A linear transformation between topological vector spaces, for example normed spaces, may be continuous. If its domain and codomain are the same, it will then be a continuous linear operator. A linear operator on a normed linear space is continuous if and only if it is bounded; this is the case, for example, whenever the domain is finite-dimensional. An infinite-dimensional domain may have discontinuous linear operators. An example of an unbounded, hence discontinuous, linear transformation is differentiation on the space of smooth functions equipped with the supremum norm (a function with small values can have a derivative with large values, while the derivative of 0 is 0). For a specific example, sin(nx)/n converges to 0, but its derivative cos(nx) does not, so differentiation is not continuous at 0 (and by a variation of this argument, it is not continuous anywhere). 
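The failure of continuity for differentiation under the supremum norm can be observed numerically. The following Python/NumPy sketch (illustrative; the grid resolution is an arbitrary assumption) approximates the sup norms of sin(nx)/n and of its derivative cos(nx) on [0, 2π]:

```python
import numpy as np

# f_n(x) = sin(n x)/n tends to 0 in the sup norm, but its derivative cos(n x) does not.
xs = np.linspace(0.0, 2.0 * np.pi, 100_001)      # a dense grid approximating [0, 2*pi]
for n in (1, 10, 100, 1000):
    sup_f = np.max(np.abs(np.sin(n * xs) / n))   # ~ 1/n, shrinking toward 0
    sup_df = np.max(np.abs(np.cos(n * xs)))      # stays equal to 1
    print(n, sup_f, sup_df)
```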
== Applications == A specific application of linear maps is for geometric transformations, such as those performed in computer graphics, where the translation, rotation and scaling of 2D or 3D objects are performed by the use of a transformation matrix. Linear mappings are also used as a mechanism for describing change: for example, in calculus they correspond to derivatives, and in relativity they are used as a device to keep track of the local transformations of reference frames. Another application of these transformations is in compiler optimizations of nested-loop code, and in parallelizing compiler techniques. == See also == Additive map – Z-module homomorphism Antilinear map – Conjugate homogeneous additive map Bent function – Special type of Boolean function Bounded operator – Linear transformation between topological vector spaces Cauchy's functional equation – Functional equation Continuous linear operator Linear functional – Linear map from a vector space to its field of scalars Linear isometry – Distance-preserving mathematical transformation Category of matrices Quasilinearization == Notes == == Bibliography == Axler, Sheldon Jay (2015). Linear Algebra Done Right (3rd ed.). Springer. ISBN 978-3-319-11079-0. Bronshtein, I. N.; Semendyayev, K. A. (2004). Handbook of Mathematics (4th ed.). New York: Springer-Verlag. ISBN 3-540-43491-7. Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces (2nd ed.). Springer. ISBN 0-387-90093-4. Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (Second ed.). Cambridge University Press. ISBN 978-0-521-83940-2. Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9. Kubrusly, Carlos (2001). Elements of operator theory. Boston: Birkhäuser. ISBN 978-1-4757-3328-0. OCLC 754555941. Lang, Serge (1987), Linear Algebra (Third ed.), New York: Springer-Verlag, ISBN 0-387-96412-6 Rudin, Walter (1973). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 25 (First ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 9780070542259. Rudin, Walter (1976). Principles of Mathematical Analysis. Walter Rudin Student Series in Advanced Mathematics (3rd ed.). New York: McGraw–Hill. ISBN 978-0-07-054235-8. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067. Tu, Loring W. (2011). An Introduction to Manifolds (2nd ed.). Springer. ISBN 978-0-8218-4419-9. Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
Wikipedia:Linear predictive analysis#0
Linear predictive analysis is a simple form of first-order extrapolation: if a value has been changing at a given rate, then it will probably continue to change at approximately the same rate, at least in the short term. This is equivalent to fitting a tangent to the graph and extending the line. One use of this is in linear predictive coding, which can be used as a method of reducing the amount of data needed to approximately encode a series. Suppose it is desired to store or transmit a series of values representing voice. The value at each sampling point could be transmitted: if 256 values are possible, then 8 bits of data for each point are required; if a precision of 65536 levels is desired, then 16 bits per sample are required. If it is known that the value rarely changes more than +/- 15 values between successive samples (-15 to +15 is 31 steps, counting the zero), then the change can be encoded in 5 bits. As long as the change between successive samples stays within +/- 15 values, the reconstructed values will exactly reproduce the desired sequence. When the rate of change exceeds +/- 15, the reconstructed values will temporarily differ from the desired values; provided fast changes that exceed the limit are rare, it may be acceptable to use the approximation in order to attain the improved coding density. A sketch of such an encoder is given below. == See also == Linear prediction == References ==
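A minimal Python sketch of the scheme just described (illustrative only; it assumes integer samples and a starting value of 0 known to both encoder and decoder, neither of which is specified by the text):

```python
def encode(samples, limit=15):
    """Delta-encode a sequence, clamping each step to [-limit, +limit].

    With limit=15 there are 31 possible deltas, so each fits in 5 bits."""
    deltas, prev = [], 0                      # both ends agree the stream starts at 0
    for s in samples:
        d = max(-limit, min(limit, s - prev))
        deltas.append(d)
        prev += d                             # track the *reconstructed* value
    return deltas

def decode(deltas):
    """Rebuild the approximate sequence by accumulating the deltas."""
    out, prev = [], 0
    for d in deltas:
        prev += d
        out.append(prev)
    return out

signal = [0, 5, 12, 20, 60, 70, 72]   # the jump of 40 exceeds the +/-15 limit
print(decode(encode(signal)))         # [0, 5, 12, 20, 35, 50, 65]
```

The decoded output lags behind the input during the fast jump and then catches up, which is the temporary divergence described above.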
Wikipedia:Linear recurrence with constant coefficients#0
In mathematics (including combinatorics, linear algebra, and dynamical systems), a linear recurrence with constant coefficients: ch. 17 : ch. 10 (also known as a linear recurrence relation or linear difference equation) sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence. The polynomial's linearity means that each of its terms has degree 0 or 1. A linear recurrence denotes the evolution of some variable over time, with the current time period or discrete moment in time denoted as t, one period earlier denoted as t − 1, one period later as t + 1, etc. The solution of such an equation is a function of t, and not of any iterate values, giving the value of the iterate at any time. To find the solution it is necessary to know the specific values (known as initial conditions) of n of the iterates, and normally these are the n iterates that are oldest. The equation or its variable is said to be stable if from any set of initial conditions the variable's limit as time goes to infinity exists; this limit is called the steady state. Difference equations are used in a variety of contexts, such as in economics to model the evolution through time of variables such as gross domestic product, the inflation rate, the exchange rate, etc. They are used in modeling such time series because values of these variables are only measured at discrete intervals. In econometric applications, linear difference equations are modeled with stochastic terms in the form of autoregressive (AR) models and in models such as vector autoregression (VAR) and autoregressive moving average (ARMA) models that combine AR with other features. == Definitions == A linear recurrence with constant coefficients is an equation of the following form, written in terms of parameters a1, ..., an and b: y t = a 1 y t − 1 + ⋯ + a n y t − n + b , {\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b,} or equivalently as y t + n = a 1 y t + n − 1 + ⋯ + a n y t + b . {\displaystyle y_{t+n}=a_{1}y_{t+n-1}+\cdots +a_{n}y_{t}+b.} The positive integer n {\displaystyle n} is called the order of the recurrence and denotes the longest time lag between iterates. The equation is called homogeneous if b = 0 and nonhomogeneous if b ≠ 0. If the equation is homogeneous, the coefficients determine the characteristic polynomial (also "auxiliary polynomial" or "companion polynomial") p ( λ ) = λ n − a 1 λ n − 1 − a 2 λ n − 2 − ⋯ − a n {\displaystyle p(\lambda )=\lambda ^{n}-a_{1}\lambda ^{n-1}-a_{2}\lambda ^{n-2}-\cdots -a_{n}} whose roots play a crucial role in finding and understanding the sequences satisfying the recurrence. == Conversion to homogeneous form == If b ≠ 0, the equation y t = a 1 y t − 1 + ⋯ + a n y t − n + b {\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b} is said to be nonhomogeneous. To solve this equation it is convenient to convert it to homogeneous form, with no constant term. This is done by first finding the equation's steady state value—a value y* such that, if n successive iterates all had this value, so would all future values. This value is found by setting all values of y equal to y* in the difference equation, and solving, thus obtaining y ∗ = b 1 − a 1 − ⋯ − a n {\displaystyle y^{*}={\frac {b}{1-a_{1}-\cdots -a_{n}}}} assuming the denominator is not 0. If it is zero, the steady state does not exist. 
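A minimal sketch of the steady-state computation just described (the helper name is ours, not from any library):

```python
def steady_state(a, b):
    """Steady state y* = b / (1 - a1 - ... - an) of y_t = a1*y_{t-1} + ... + an*y_{t-n} + b."""
    denom = 1 - sum(a)
    if denom == 0:
        return None  # denominator is zero: no steady state exists
    return b / denom

# y_t = 0.5*y_{t-1} + 0.25*y_{t-2} + 5 has steady state 5 / (1 - 0.75) = 20
print(steady_state([0.5, 0.25], 5))  # 20.0
```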
Given the steady state, the difference equation can be rewritten in terms of deviations of the iterates from the steady state, as ( y t − y ∗ ) = a 1 ( y t − 1 − y ∗ ) + ⋯ + a n ( y t − n − y ∗ ) {\displaystyle \left(y_{t}-y^{*}\right)=a_{1}\left(y_{t-1}-y^{*}\right)+\cdots +a_{n}\left(y_{t-n}-y^{*}\right)} which has no constant term, and which can be written more succinctly as x t = a 1 x t − 1 + ⋯ + a n x t − n {\displaystyle x_{t}=a_{1}x_{t-1}+\cdots +a_{n}x_{t-n}} where x equals y − y*. This is the homogeneous form. If there is no steady state, the difference equation y t = a 1 y t − 1 + ⋯ + a n y t − n + b {\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b} can be combined with its equivalent form y t − 1 = a 1 y t − 2 + ⋯ + a n y t − ( n + 1 ) + b {\displaystyle y_{t-1}=a_{1}y_{t-2}+\cdots +a_{n}y_{t-(n+1)}+b} to obtain (by solving both for b) y t − a 1 y t − 1 − ⋯ − a n y t − n = y t − 1 − a 1 y t − 2 − ⋯ − a n y t − ( n + 1 ) {\displaystyle y_{t}-a_{1}y_{t-1}-\cdots -a_{n}y_{t-n}=y_{t-1}-a_{1}y_{t-2}-\cdots -a_{n}y_{t-(n+1)}} in which like terms can be combined to give a homogeneous equation of one order higher than the original. == Solution example for small orders == The roots of the characteristic polynomial play a crucial role in finding and understanding the sequences satisfying the recurrence. If there are d {\displaystyle d} distinct roots r 1 , r 2 , … , r d , {\displaystyle r_{1},r_{2},\ldots ,r_{d},} then each solution to the recurrence takes the form a n = k 1 r 1 n + k 2 r 2 n + ⋯ + k d r d n , {\displaystyle a_{n}=k_{1}r_{1}^{n}+k_{2}r_{2}^{n}+\cdots +k_{d}r_{d}^{n},} where the coefficients k i {\displaystyle k_{i}} are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of n {\displaystyle n} . For instance, if the characteristic polynomial can be factored as ( x − r ) 3 {\displaystyle (x-r)^{3}} , with the same root r {\displaystyle r} occurring three times, then the solution would take the form a n = k 1 r n + k 2 n r n + k 3 n 2 r n . {\displaystyle a_{n}=k_{1}r^{n}+k_{2}nr^{n}+k_{3}n^{2}r^{n}.} === Order 1 === For order 1, the recurrence a n = r a n − 1 {\displaystyle a_{n}=ra_{n-1}} has the solution a n = r n {\displaystyle a_{n}=r^{n}} with a 0 = 1 {\displaystyle a_{0}=1} and the most general solution is a n = k r n {\displaystyle a_{n}=kr^{n}} with a 0 = k {\displaystyle a_{0}=k} . The characteristic polynomial equated to zero (the characteristic equation) is simply t − r = 0 {\displaystyle t-r=0} . === Order 2 === Solutions to such recurrence relations of higher order are found by systematic means, often using the fact that a n = r n {\displaystyle a_{n}=r^{n}} is a solution for the recurrence exactly when t = r {\displaystyle t=r} is a root of the characteristic polynomial. This can be approached directly or using generating functions (formal power series) or matrices. Consider, for example, a recurrence relation of the form a n = A a n − 1 + B a n − 2 . {\displaystyle a_{n}=Aa_{n-1}+Ba_{n-2}.} When does it have a solution of the same general form as a n = r n {\displaystyle a_{n}=r^{n}} ? Substituting this guess (ansatz) in the recurrence relation, we find that r n = A r n − 1 + B r n − 2 {\displaystyle r^{n}=Ar^{n-1}+Br^{n-2}} must be true for all n > 1 {\displaystyle n>1} . 
Dividing through by r n − 2 {\displaystyle r^{n-2}} , we get that all these equations reduce to the same thing: r 2 = A r + B , r 2 − A r − B = 0 , {\displaystyle {\begin{aligned}r^{2}&=Ar+B,\\r^{2}-Ar-B&=0,\end{aligned}}} which is the characteristic equation of the recurrence relation. Solve for r {\displaystyle r} to obtain the two roots λ 1 {\displaystyle \lambda _{1}} , λ 2 {\displaystyle \lambda _{2}} : these roots are known as the characteristic roots or eigenvalues of the characteristic equation. Different solutions are obtained depending on the nature of the roots: If these roots are distinct, we have the general solution a n = C λ 1 n + D λ 2 n {\displaystyle a_{n}=C\lambda _{1}^{n}+D\lambda _{2}^{n}} while if they are identical (when A 2 + 4 B = 0 {\displaystyle A^{2}+4B=0} ), we have a n = C λ n + D n λ n {\displaystyle a_{n}=C\lambda ^{n}+Dn\lambda ^{n}} This is the most general solution; the two constants C {\displaystyle C} and D {\displaystyle D} can be chosen based on two given initial conditions a 0 {\displaystyle a_{0}} and a 1 {\displaystyle a_{1}} to produce a specific solution. In the case of complex eigenvalues (which also gives rise to complex values for the solution parameters C {\displaystyle C} and D {\displaystyle D} ), the use of complex numbers can be eliminated by rewriting the solution in trigonometric form. In this case we can write the eigenvalues as λ 1 , λ 2 = α ± β i . {\displaystyle \lambda _{1},\lambda _{2}=\alpha \pm \beta i.} Then it can be shown that a n = C λ 1 n + D λ 2 n {\displaystyle a_{n}=C\lambda _{1}^{n}+D\lambda _{2}^{n}} can be rewritten as: 576–585 a n = 2 M n ( E cos ⁡ ( θ n ) + F sin ⁡ ( θ n ) ) = 2 G M n cos ⁡ ( θ n − δ ) , {\displaystyle a_{n}=2M^{n}\left(E\cos(\theta n)+F\sin(\theta n)\right)=2GM^{n}\cos(\theta n-\delta ),} where M = α 2 + β 2 cos ⁡ ( θ ) = α M sin ⁡ ( θ ) = β M C , D = E ∓ F i G = E 2 + F 2 cos ⁡ ( δ ) = E G sin ⁡ ( δ ) = F G {\displaystyle {\begin{array}{lcl}M={\sqrt {\alpha ^{2}+\beta ^{2}}}&\cos(\theta )={\tfrac {\alpha }{M}}&\sin(\theta )={\tfrac {\beta }{M}}\\C,D=E\mp Fi&&\\G={\sqrt {E^{2}+F^{2}}}&\cos(\delta )={\tfrac {E}{G}}&\sin(\delta )={\tfrac {F}{G}}\end{array}}} Here E {\displaystyle E} and F {\displaystyle F} (or equivalently, G {\displaystyle G} and δ {\displaystyle \delta } ) are real constants which depend on the initial conditions. Using λ 1 + λ 2 = 2 α = A , {\displaystyle \lambda _{1}+\lambda _{2}=2\alpha =A,} λ 1 ⋅ λ 2 = α 2 + β 2 = − B , {\displaystyle \lambda _{1}\cdot \lambda _{2}=\alpha ^{2}+\beta ^{2}=-B,} one may simplify the solution given above as a n = ( − B ) n 2 ( E cos ⁡ ( θ n ) + F sin ⁡ ( θ n ) ) , {\displaystyle a_{n}=(-B)^{\frac {n}{2}}\left(E\cos(\theta n)+F\sin(\theta n)\right),} where a 1 {\displaystyle a_{1}} and a 2 {\displaystyle a_{2}} are the initial conditions and E = − A a 1 + a 2 B F = − i A 2 a 1 − A a 2 + 2 a 1 B B A 2 + 4 B θ = arccos ⁡ ( A 2 − B ) {\displaystyle {\begin{aligned}E&={\frac {-Aa_{1}+a_{2}}{B}}\\F&=-i{\frac {A^{2}a_{1}-Aa_{2}+2a_{1}B}{B{\sqrt {A^{2}+4B}}}}\\\theta &=\arccos \left({\frac {A}{2{\sqrt {-B}}}}\right)\end{aligned}}} In this way there is no need to solve for λ 1 {\displaystyle \lambda _{1}} and λ 2 {\displaystyle \lambda _{2}} . In all cases—real distinct eigenvalues, real duplicated eigenvalues, and complex conjugate eigenvalues—the equation is stable (that is, the variable a {\displaystyle a} converges to a fixed value [specifically, zero]) if and only if both eigenvalues are smaller than one in absolute value. 
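The order-2 recipe is easy to check numerically. Below is a minimal sketch, assuming the two characteristic roots are distinct (function and variable names are ours, not from any library): it finds the roots with numpy, fits the constants C and D to two initial conditions, and returns the closed form.

```python
import numpy as np

def solve_order2(A, B, a0, a1):
    """Closed form a_n = C*l1**n + D*l2**n for a_n = A*a_{n-1} + B*a_{n-2} (distinct roots)."""
    l1, l2 = np.roots([1, -A, -B])                            # roots of r**2 - A*r - B = 0
    C, D = np.linalg.solve([[1.0, 1.0], [l1, l2]], [a0, a1])  # match a_0 and a_1
    return lambda n: (C * l1**n + D * l2**n).real

# Fibonacci numbers: a_n = a_{n-1} + a_{n-2} with a_0 = 0, a_1 = 1
fib = solve_order2(1, 1, 0, 1)
print(round(fib(10)))  # 55
```

When the roots are complex conjugates, the fitted constants C and D come out conjugate as well, so taking the real part reproduces the trigonometric form; stability corresponds to both roots having absolute value less than one.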
In this second-order case, this condition on the eigenvalues can be shown to be equivalent to | A | < 1 − B < 2 {\displaystyle |A|<1-B<2} , which is equivalent to | B | < 1 {\displaystyle |B|<1} and | A | < 1 − B {\displaystyle |A|<1-B} . == General solution == === Characteristic polynomial and roots === Solving the homogeneous equation x t = a 1 x t − 1 + ⋯ + a n x t − n {\displaystyle x_{t}=a_{1}x_{t-1}+\cdots +a_{n}x_{t-n}} involves first solving its characteristic polynomial λ n = a 1 λ n − 1 + ⋯ + a n − 2 λ 2 + a n − 1 λ + a n {\displaystyle \lambda ^{n}=a_{1}\lambda ^{n-1}+\cdots +a_{n-2}\lambda ^{2}+a_{n-1}\lambda +a_{n}} for its characteristic roots λ1, ..., λn. These roots can be solved for algebraically if n ≤ 4, but not necessarily otherwise. If the solution is to be used numerically, all the roots of this characteristic equation can be found by numerical methods. However, for use in a theoretical context it may be that the only information required about the roots is whether any of them are greater than or equal to 1 in absolute value. It may be that all the roots are real or instead there may be some that are complex numbers. In the latter case, all the complex roots come in complex conjugate pairs. === Solution with distinct characteristic roots === If all the characteristic roots are distinct, the solution of the homogeneous linear recurrence x t = a 1 x t − 1 + ⋯ + a n x t − n {\displaystyle x_{t}=a_{1}x_{t-1}+\cdots +a_{n}x_{t-n}} can be written in terms of the characteristic roots as x t = c 1 λ 1 t + ⋯ + c n λ n t {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{n}\lambda _{n}^{t}} where the coefficients ci can be found by invoking the initial conditions. Specifically, for each time period for which an iterate value is known, this value and its corresponding value of t can be substituted into the solution equation to obtain a linear equation in the n as-yet-unknown parameters; n such equations, one for each initial condition, can be solved simultaneously for the n parameter values. If all characteristic roots are real, then all the coefficient values ci will also be real; but with non-real complex roots, in general some of these coefficients will also be non-real. ==== Converting complex solution to trigonometric form ==== If there are complex roots, they come in conjugate pairs and so do the complex terms in the solution equation. If two of these complex terms are cjλtj and cj+1λtj+1, the roots λj can be written as λ j , λ j + 1 = α ± β i = M ( α M ± β M i ) {\displaystyle \lambda _{j},\lambda _{j+1}=\alpha \pm \beta i=M\left({\frac {\alpha }{M}}\pm {\frac {\beta }{M}}i\right)} where i is the imaginary unit and M is the modulus of the roots: M = α 2 + β 2 . 
{\displaystyle M={\sqrt {\alpha ^{2}+\beta ^{2}}}.} Then the two complex terms in the solution equation can be written as c j λ j t + c j + 1 λ j + 1 t = M t ( c j ( α M + β M i ) t + c j + 1 ( α M − β M i ) t ) = M t ( c j ( cos ⁡ θ + i sin ⁡ θ ) t + c j + 1 ( cos ⁡ θ − i sin ⁡ θ ) t ) = M t ( c j ( cos ⁡ θ t + i sin ⁡ θ t ) + c j + 1 ( cos ⁡ θ t − i sin ⁡ θ t ) ) {\displaystyle {\begin{aligned}c_{j}\lambda _{j}^{t}+c_{j+1}\lambda _{j+1}^{t}&=M^{t}\left(c_{j}\left({\frac {\alpha }{M}}+{\frac {\beta }{M}}i\right)^{t}+c_{j+1}\left({\frac {\alpha }{M}}-{\frac {\beta }{M}}i\right)^{t}\right)\\[6pt]&=M^{t}\left(c_{j}\left(\cos \theta +i\sin \theta \right)^{t}+c_{j+1}\left(\cos \theta -i\sin \theta \right)^{t}\right)\\[6pt]&=M^{t}{\bigl (}c_{j}\left(\cos \theta t+i\sin \theta t\right)+c_{j+1}\left(\cos \theta t-i\sin \theta t\right){\bigr )}\end{aligned}}} where θ is the angle whose cosine is ⁠α/M⁠ and whose sine is ⁠β/M⁠; the last equality here made use of de Moivre's formula. Now the process of finding the coefficients cj and cj+1 guarantees that they are also complex conjugates, which can be written as γ ± δi. Using this in the last equation gives this expression for the two complex terms in the solution equation: 2 M t ( γ cos ⁡ θ t − δ sin ⁡ θ t ) {\displaystyle 2M^{t}\left(\gamma \cos \theta t-\delta \sin \theta t\right)} which can also be written as 2 γ 2 + δ 2 M t cos ⁡ ( θ t + ψ ) {\displaystyle 2{\sqrt {\gamma ^{2}+\delta ^{2}}}M^{t}\cos(\theta t+\psi )} where ψ is the angle whose cosine is ⁠γ/√γ2 + δ2⁠ and whose sine is ⁠δ/√γ2 + δ2⁠. ==== Cyclicity ==== Depending on the initial conditions, even with all roots real the iterates can experience a transitory tendency to go above and below the steady state value. But true cyclicity involves a permanent tendency to fluctuate, and this occurs if there is at least one pair of complex conjugate characteristic roots. This can be seen in the trigonometric form of their contribution to the solution equation, involving cos θt and sin θt. === Solution with duplicate characteristic roots === In the second-order case, if the two roots are identical (λ1 = λ2), they can both be denoted as λ and a solution may be of the form x t = c 1 λ t + c 2 t λ t . {\displaystyle x_{t}=c_{1}\lambda ^{t}+c_{2}t\lambda ^{t}.} === Solution by conversion to matrix form === An alternative solution method involves converting the nth order difference equation to a first-order matrix difference equation. This is accomplished by writing w1,t = yt, w2,t = yt−1 = w1,t−1, w3,t = yt−2 = w2,t−1, and so on. Then the original single nth-order equation y t = a 1 y t − 1 + a 2 y t − 2 + ⋯ + a n y t − n + b {\displaystyle y_{t}=a_{1}y_{t-1}+a_{2}y_{t-2}+\cdots +a_{n}y_{t-n}+b} can be replaced by the following n first-order equations: w 1 , t = a 1 w 1 , t − 1 + a 2 w 2 , t − 1 + ⋯ + a n w n , t − 1 + b w 2 , t = w 1 , t − 1 ⋮ w n , t = w n − 1 , t − 1 . 
{\displaystyle {\begin{aligned}w_{1,t}&=a_{1}w_{1,t-1}+a_{2}w_{2,t-1}+\cdots +a_{n}w_{n,t-1}+b\\w_{2,t}&=w_{1,t-1}\\&\,\,\,\vdots \\w_{n,t}&=w_{n-1,t-1}.\end{aligned}}} Defining the vector wi as w i = [ w 1 , i w 2 , i ⋮ w n , i ] {\displaystyle \mathbf {w} _{i}={\begin{bmatrix}w_{1,i}\\w_{2,i}\\\vdots \\w_{n,i}\end{bmatrix}}} this can be put in matrix form as w t = A w t − 1 + b {\displaystyle \mathbf {w} _{t}=\mathbf {A} \mathbf {w} _{t-1}+\mathbf {b} } Here A is an n × n matrix in which the first row contains a1, ..., an and all other rows have a single 1 with all other elements being 0, and b is a column vector with first element b and with the rest of its elements being 0. This matrix equation can be solved using the methods in the article Matrix difference equation. In the homogeneous case yi is a para-permanent of a lower triangular matrix === Solution using generating functions === The recurrence y t = a 1 y t − 1 + ⋯ + a n y t − n + b , {\displaystyle y_{t}=a_{1}y_{t-1}+\cdots +a_{n}y_{t-n}+b,} can be solved using the theory of generating functions. First, we write Y ( x ) = ∑ t ≥ 0 y t x t {\textstyle Y(x)=\sum _{t\geq 0}y_{t}x^{t}} . The recurrence is then equivalent to the following generating function equation: Y ( x ) = a 1 x Y ( x ) + a 2 x 2 Y ( x ) + ⋯ + a n x n Y ( x ) + b 1 − x + p ( x ) {\displaystyle Y(x)=a_{1}xY(x)+a_{2}x^{2}Y(x)+\cdots +a_{n}x^{n}Y(x)+{\frac {b}{1-x}}+p(x)} where p ( x ) {\displaystyle p(x)} is a polynomial of degree at most n − 1 {\displaystyle n-1} correcting the initial terms. From this equation we can solve to get Y ( x ) = ( b 1 − x + p ( x ) ) ⋅ 1 1 − a 1 x − a 2 x 2 − ⋯ − a n x n . {\displaystyle Y(x)=\left({\frac {b}{1-x}}+p(x)\right)\cdot {\frac {1}{1-a_{1}x-a_{2}x^{2}-\cdots -a_{n}x^{n}}}.} In other words, not worrying about the exact coefficients, Y ( x ) {\displaystyle Y(x)} can be expressed as a rational function Y ( x ) = f ( x ) g ( x ) . {\displaystyle Y(x)={\frac {f(x)}{g(x)}}.} The closed form can then be derived via partial fraction decomposition. Specifically, if the generating function is written as f ( x ) g ( x ) = ∑ i f i ( x ) ( x − r i ) m i {\displaystyle {\frac {f(x)}{g(x)}}=\sum _{i}{\frac {f_{i}(x)}{(x-r_{i})^{m_{i}}}}} then the polynomial p ( x ) {\displaystyle p(x)} determines the initial set of corrections z ( n ) {\displaystyle z(n)} , the denominator ( x − r i ) m {\displaystyle (x-r_{i})^{m}} determines the exponential term r i n {\displaystyle r_{i}^{n}} , and the degree m {\displaystyle m} together with the numerator f i ( x ) {\displaystyle f_{i}(x)} determine the polynomial coefficient k i ( n ) {\displaystyle k_{i}(n)} . === Relation to solution to differential equations === The method for solving linear differential equations is similar to the method above—the "intelligent guess" (ansatz) for linear differential equations with constant coefficients is e λ x {\displaystyle e^{\lambda x}} where λ {\displaystyle \lambda } is a complex number that is determined by substituting the guess into the differential equation. This is not a coincidence. Considering the Taylor series of the solution to a linear differential equation: ∑ n = 0 ∞ f ( n ) ( a ) n ! ( x − a ) n {\displaystyle \sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}} it can be seen that the coefficients of the series are given by the n {\displaystyle n} -th derivative of f ( x ) {\displaystyle f(x)} evaluated at the point a {\displaystyle a} . The differential equation provides a linear difference equation relating these coefficients. 
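Returning to the matrix form described above, here is a minimal sketch of the reduction to a first-order system (illustrative code, not a library routine): it builds the companion matrix A and the constant vector b, and applies w_t = A w_{t−1} + b repeatedly.

```python
import numpy as np

def companion_step(a, b, w):
    """One step of w_t = A @ w_{t-1} + b for y_t = a1*y_{t-1} + ... + an*y_{t-n} + b."""
    n = len(a)
    A = np.zeros((n, n))
    A[0, :] = a                  # first row holds the coefficients a1, ..., an
    A[1:, :-1] = np.eye(n - 1)   # ones below the diagonal shift the state down
    bvec = np.zeros(n)
    bvec[0] = b                  # the constant term enters only the first component
    return A @ w + bvec

w = np.array([1.0, 0.0])         # state (y_1, y_0) for the Fibonacci recurrence
for _ in range(9):
    w = companion_step([1.0, 1.0], 0.0, w)
print(w[0])                      # 55.0 = y_10
```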
The equivalence between a linear differential equation and the linear difference equation satisfied by its Taylor coefficients can be used to quickly find the recurrence relation for the coefficients in the power series solution of the differential equation. The rule of thumb (for equations in which the polynomial multiplying the first term is non-zero at zero) is that: y [ k ] → f [ n + k ] {\displaystyle y^{[k]}\to f[n+k]} and more generally x m ∗ y [ k ] → n ( n − 1 ) . . . ( n − m + 1 ) f [ n + k − m ] {\displaystyle x^{m}*y^{[k]}\to n(n-1)...(n-m+1)f[n+k-m]} Example: The recurrence relation for the Taylor series coefficients of the equation: ( x 2 + 3 x − 4 ) y [ 3 ] − ( 3 x + 1 ) y [ 2 ] + 2 y = 0 {\displaystyle (x^{2}+3x-4)y^{[3]}-(3x+1)y^{[2]}+2y=0} is given by n ( n − 1 ) f [ n + 1 ] + 3 n f [ n + 2 ] − 4 f [ n + 3 ] − 3 n f [ n + 1 ] − f [ n + 2 ] + 2 f [ n ] = 0 {\displaystyle n(n-1)f[n+1]+3nf[n+2]-4f[n+3]-3nf[n+1]-f[n+2]+2f[n]=0} or − 4 f [ n + 3 ] + ( 3 n − 1 ) f [ n + 2 ] + n ( n − 4 ) f [ n + 1 ] + 2 f [ n ] = 0. {\displaystyle -4f[n+3]+(3n-1)f[n+2]+n(n-4)f[n+1]+2f[n]=0.} This example shows how problems generally solved using the power series solution method taught in normal differential equation classes can be solved in a much easier way. Example: The differential equation a y ″ + b y ′ + c y = 0 {\displaystyle ay''+by'+cy=0} has solutions of the form y = e λ x , {\displaystyle y=e^{\lambda x},} where λ {\displaystyle \lambda } is a root of the characteristic equation a λ 2 + b λ + c = 0. {\displaystyle a\lambda ^{2}+b\lambda +c=0.} The conversion of the differential equation to a difference equation of the Taylor coefficients is a f [ n + 2 ] + b f [ n + 1 ] + c f [ n ] = 0 , {\displaystyle af[n+2]+bf[n+1]+cf[n]=0,} which has the same characteristic equation; indeed, the n {\displaystyle n} -th derivative of e λ x {\displaystyle e^{\lambda x}} evaluated at 0 {\displaystyle 0} is λ n {\displaystyle \lambda ^{n}} . ==== Solving with z-transforms ==== Certain difference equations, in particular linear constant coefficient difference equations, can be solved using z-transforms. The z-transform is an integral transform that leads to more convenient algebraic manipulations and more straightforward solutions. There are cases in which obtaining a direct solution would be all but impossible, yet solving the problem via a thoughtfully chosen integral transform is straightforward. == Stability == In the solution equation x t = c 1 λ 1 t + ⋯ + c n λ n t , {\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{n}\lambda _{n}^{t},} a term with a real characteristic root converges to 0 as t grows indefinitely large if the absolute value of the root is less than 1. If the absolute value equals 1, the term will stay constant as t grows if the root is +1 but will fluctuate between two values if the root is −1. If the absolute value of the root is greater than 1, the term will become larger and larger over time. A pair of terms with complex conjugate characteristic roots will converge to 0 with dampening fluctuations if the modulus M of the roots is less than 1; if the modulus equals 1 then constant amplitude fluctuations in the combined terms will persist; and if the modulus is greater than 1, the combined terms will show fluctuations of ever-increasing magnitude. Thus the evolving variable x will converge to 0 if all of the characteristic roots have magnitude less than 1. If the largest root has absolute value 1, neither convergence to 0 nor divergence to infinity will occur. If all roots with magnitude 1 are real and positive, x will converge to the sum of their constant terms ci; unlike in the stable case, this converged value depends on the initial conditions; different starting points lead to different points in the long run.
If any root is −1, its term will contribute permanent fluctuations between two values. If any of the unit-magnitude roots are complex then constant-amplitude fluctuations of x will persist. Finally, if any characteristic root has magnitude greater than 1, then x will diverge to infinity as time goes to infinity, or will fluctuate between increasingly large positive and negative values. A theorem of Issai Schur states that all roots have magnitude less than 1 (the stable case) if and only if a particular string of determinants are all positive.: 247 If a non-homogeneous linear difference equation has been converted to homogeneous form which has been analyzed as above, then the stability and cyclicality properties of the original non-homogeneous equation will be the same as those of the derived homogeneous form, with convergence in the stable case being to the steady-state value y* instead of to 0. == See also == Recurrence relation Linear differential equation Skolem–Mahler–Lech theorem Skolem problem == References ==
Wikipedia:Linear relation#0
In linear algebra, a linear relation, or simply relation, between elements of a vector space or a module is a linear equation that has these elements as a solution. More precisely, if e 1 , … , e n {\displaystyle e_{1},\dots ,e_{n}} are elements of a (left) module M over a ring R (the case of a vector space over a field is a special case), a relation between e 1 , … , e n {\displaystyle e_{1},\dots ,e_{n}} is a sequence ( f 1 , … , f n ) {\displaystyle (f_{1},\dots ,f_{n})} of elements of R such that f 1 e 1 + ⋯ + f n e n = 0. {\displaystyle f_{1}e_{1}+\dots +f_{n}e_{n}=0.} The relations between e 1 , … , e n {\displaystyle e_{1},\dots ,e_{n}} form a module. One is generally interested in the case where e 1 , … , e n {\displaystyle e_{1},\dots ,e_{n}} is a generating set of a finitely generated module M, in which case the module of the relations is often called a syzygy module of M. The syzygy module depends on the choice of a generating set, but it is unique up to the direct sum with a free module. That is, if S 1 {\displaystyle S_{1}} and S 2 {\displaystyle S_{2}} are syzygy modules corresponding to two generating sets of the same module, then they are stably isomorphic, which means that there exist two free modules L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} such that S 1 ⊕ L 1 {\displaystyle S_{1}\oplus L_{1}} and S 2 ⊕ L 2 {\displaystyle S_{2}\oplus L_{2}} are isomorphic. Higher order syzygy modules are defined recursively: a first syzygy module of a module M is simply its syzygy module. For k > 1, a kth syzygy module of M is a syzygy module of a (k – 1)-th syzygy module. Hilbert's syzygy theorem states that, if R = K [ x 1 , … , x n ] {\displaystyle R=K[x_{1},\dots ,x_{n}]} is a polynomial ring in n indeterminates over a field, then every nth syzygy module is free. The case n = 0 is the fact that every finite dimensional vector space has a basis, and the case n = 1 is the fact that K[x] is a principal ideal domain and that every submodule of a finitely generated free K[x] module is also free. The construction of higher order syzygy modules is generalized as the definition of free resolutions, which allows restating Hilbert's syzygy theorem as the statement that a polynomial ring in n indeterminates over a field has global homological dimension n. If a and b are two elements of the commutative ring R, then (b, –a) is a relation that is said to be trivial. The module of trivial relations of an ideal is the submodule of the first syzygy module of the ideal that is generated by the trivial relations between the elements of a generating set of the ideal. The concept of trivial relations can be generalized to higher order syzygy modules, and this leads to the concept of the Koszul complex of an ideal, which provides information on the non-trivial relations between the generators of an ideal. == Basic definitions == Let R be a ring, and M be a left R-module. A linear relation, or simply a relation, between k elements x 1 , … , x k {\displaystyle x_{1},\dots ,x_{k}} of M is a sequence ( a 1 , … , a k ) {\displaystyle (a_{1},\dots ,a_{k})} of elements of R such that a 1 x 1 + ⋯ + a k x k = 0. {\displaystyle a_{1}x_{1}+\dots +a_{k}x_{k}=0.} If x 1 , … , x k {\displaystyle x_{1},\dots ,x_{k}} is a generating set of M, the relation is often called a syzygy of M. It makes sense to call it a syzygy of M {\displaystyle M} without regard to
x 1 , … , x k {\displaystyle x_{1},\dots ,x_{k}} because, although the syzygy module depends on the chosen generating set, most of its properties are independent of that choice; see § Stable properties, below. If the ring R is Noetherian, or at least coherent, and if M is finitely generated, then the syzygy module is also finitely generated. A syzygy module of this syzygy module is a second syzygy module of M. Continuing this way one can define a kth syzygy module for every positive integer k. Hilbert's syzygy theorem asserts that, if M is a finitely generated module over a polynomial ring K [ x 1 , … , x n ] {\displaystyle K[x_{1},\dots ,x_{n}]} over a field, then any nth syzygy module is a free module. == Stable properties == Generally speaking, in the language of K-theory, a property is stable if it becomes true by making a direct sum with a sufficiently large free module. A fundamental property of syzygy modules is that they are "stably independent" of the choice of generating sets for the modules involved. The following result is the basis of these stable properties: if { x 1 , … , x m } {\displaystyle \{x_{1},\dots ,x_{m}\}} is a generating set of a module M and y 1 , … , y n {\displaystyle y_{1},\dots ,y_{n}} are other elements of M, then the module of the relations between x 1 , … , x m , y 1 , … , y n {\displaystyle x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}} is the direct sum of the module of the relations between x 1 , … , x m {\displaystyle x_{1},\dots ,x_{m}} and a free module of rank n. Proof. As { x 1 , … , x m } {\displaystyle \{x_{1},\dots ,x_{m}\}} is a generating set, each y i {\displaystyle y_{i}} can be written y i = ∑ α i , j x j . {\displaystyle \textstyle y_{i}=\sum \alpha _{i,j}x_{j}.} This provides a relation r i {\displaystyle r_{i}} between x 1 , … , x m , y 1 , … , y n . {\displaystyle x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}.} Now, if r = ( a 1 , … , a m , b 1 , … , b n ) {\displaystyle r=(a_{1},\dots ,a_{m},b_{1},\dots ,b_{n})} is any relation, then r − ∑ b i r i {\displaystyle \textstyle r-\sum b_{i}r_{i}} is a relation between the x 1 , … , x m {\displaystyle x_{1},\dots ,x_{m}} only. In other words, every relation between x 1 , … , x m , y 1 , … , y n {\displaystyle x_{1},\dots ,x_{m},y_{1},\dots ,y_{n}} is a sum of a relation between x 1 , … , x m , {\displaystyle x_{1},\dots ,x_{m},} and a linear combination of the r i {\displaystyle r_{i}} s. It is straightforward to prove that this decomposition is unique, and this proves the result. ◼ {\displaystyle \blacksquare } This proves that the first syzygy module is "stably unique". More precisely, given two generating sets of a module M, if S 1 {\displaystyle S_{1}} and S 2 {\displaystyle S_{2}} are the corresponding modules of relations, then there exist two free modules L 1 {\displaystyle L_{1}} and L 2 {\displaystyle L_{2}} such that S 1 ⊕ L 1 {\displaystyle S_{1}\oplus L_{1}} and S 2 ⊕ L 2 {\displaystyle S_{2}\oplus L_{2}} are isomorphic. For proving this, it suffices to apply the preceding proposition twice, getting two decompositions of the module of the relations between the union of the two generating sets. For obtaining a similar result for higher syzygy modules, it remains to prove that, if M is any module, and L is a free module, then M and M ⊕ L have isomorphic syzygy modules. It suffices to consider a generating set of M ⊕ L that consists of a generating set of M and a basis of L. For every relation between the elements of this generating set, the coefficients of the basis elements of L are all zero, and the syzygies of M ⊕ L are exactly the syzygies of M extended with zero coefficients. This completes the proof of the following theorem: for every positive integer k, any two kth syzygy modules of a given module are stably isomorphic.
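Over a field, this machinery becomes concrete linear algebra: the relations between finitely many vectors are exactly the null-space vectors of the matrix having those vectors as columns. A minimal sketch of that special case (exact arithmetic via sympy; the general module setting would need Gröbner-basis software and is not attempted here):

```python
from sympy import Matrix

# e1 = (1, 0), e2 = (0, 1), e3 = (1, 1), placed as the columns of M
M = Matrix([[1, 0, 1],
            [0, 1, 1]])

# A relation (f1, f2, f3) with f1*e1 + f2*e2 + f3*e3 = 0 is a null-space vector of M
for rel in M.nullspace():
    print(rel.T)  # Matrix([[-1, -1, 1]]): indeed -e1 - e2 + e3 = 0
```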
== Relationship with free resolutions == Given a generating set g 1 , … , g n {\displaystyle g_{1},\dots ,g_{n}} of an R-module, one can consider a free module L with basis G 1 , … , G n , {\displaystyle G_{1},\dots ,G_{n},} where G 1 , … , G n {\displaystyle G_{1},\dots ,G_{n}} are new indeterminates. This defines an exact sequence L ⟶ M ⟶ 0 , {\displaystyle L\longrightarrow M\longrightarrow 0,} where the left arrow is the linear map that maps each G i {\displaystyle G_{i}} to the corresponding g i . {\displaystyle g_{i}.} The kernel of this left arrow is a first syzygy module of M. One can repeat this construction with this kernel in place of M. Repeating this construction again and again, one gets a long exact sequence ⋯ ⟶ L k ⟶ L k − 1 ⟶ ⋯ ⟶ L 0 ⟶ M ⟶ 0 , {\displaystyle \cdots \longrightarrow L_{k}\longrightarrow L_{k-1}\longrightarrow \cdots \longrightarrow L_{0}\longrightarrow M\longrightarrow 0,} where all L i {\displaystyle L_{i}} are free modules. By definition, such a long exact sequence is a free resolution of M. For every k ≥ 1, the kernel S k {\displaystyle S_{k}} of the arrow starting from L k − 1 {\displaystyle L_{k-1}} is a kth syzygy module of M. It follows that the study of free resolutions is the same as the study of syzygy modules. A free resolution is finite of length ≤ n if S n {\displaystyle S_{n}} is free. In this case, one can take L n = S n , {\displaystyle L_{n}=S_{n},} and L k = 0 {\displaystyle L_{k}=0} (the zero module) for every k > n. This allows restating Hilbert's syzygy theorem: If R = K [ x 1 , … , x n ] {\displaystyle R=K[x_{1},\dots ,x_{n}]} is a polynomial ring in n indeterminates over a field K, then every free resolution is finite of length at most n. The global dimension of a commutative Noetherian ring is either infinite, or the minimal n such that every free resolution is finite of length at most n. A commutative Noetherian ring is regular if its global dimension is finite. In this case, the global dimension equals its Krull dimension. So, Hilbert's syzygy theorem may be restated in a very short sentence that hides much mathematics: A polynomial ring over a field is a regular ring. == Trivial relations == In a commutative ring R, one always has ab – ba = 0. This implies trivially that (b, –a) is a linear relation between a and b. Therefore, given a generating set g 1 , … , g k {\displaystyle g_{1},\dots ,g_{k}} of an ideal I, every element of the submodule of the syzygy module that is generated by these trivial relations between pairs of generating elements is called a trivial relation or trivial syzygy. More precisely, the module of trivial syzygies is generated by the relations r i , j = ( x 1 , … , x k ) {\displaystyle r_{i,j}=(x_{1},\dots ,x_{k})} such that x i = g j , {\displaystyle x_{i}=g_{j},} x j = − g i , {\displaystyle x_{j}=-g_{i},} and x h = 0 {\displaystyle x_{h}=0} otherwise. == History == The word syzygy came into mathematics with the work of Arthur Cayley, who used it in the theory of resultants and discriminants. As the word syzygy was used in astronomy to denote a linear relation between planets, Cayley used it to denote linear relations between minors of a matrix, such as, in the case of a 2×3 matrix: a | b c e f | − b | a c d f | + c | a b d e | = 0.
{\displaystyle a\,{\begin{vmatrix}b&c\\e&f\end{vmatrix}}-b\,{\begin{vmatrix}a&c\\d&f\end{vmatrix}}+c\,{\begin{vmatrix}a&b\\d&e\end{vmatrix}}=0.} Then, the word syzygy was popularized (among mathematicians) by David Hilbert in his 1890 article, which contains three fundamental theorems on polynomials: Hilbert's syzygy theorem, Hilbert's basis theorem and Hilbert's Nullstellensatz. In his paper on resultants, Cayley made use, in a special case, of what was later called the Koszul complex, after a similar construction in differential geometry by the mathematician Jean-Louis Koszul. == Notes == == References ==
Wikipedia:Linear span#0
In mathematics, the linear span (also called the linear hull or just span) of a set S {\displaystyle S} of elements of a vector space V {\displaystyle V} is the smallest linear subspace of V {\displaystyle V} that contains S . {\displaystyle S.} It is the set of all finite linear combinations of the elements of S, and the intersection of all linear subspaces that contain S . {\displaystyle S.} It is often denoted span(S) or ⟨ S ⟩ . {\displaystyle \langle S\rangle .} For example, in geometry, two linearly independent vectors span a plane. To express that a vector space V is a linear span of a subset S, one commonly uses one of the following phrases: S spans V; S is a spanning set of V; V is spanned or generated by S; S is a generator set or a generating set of V. Spans can be generalized to many mathematical structures, in which case, the smallest substructure containing S {\displaystyle S} is generally called the substructure generated by S . {\displaystyle S.} == Definition == Given a vector space V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. It is thus the smallest (for set inclusion) subspace containing S. It is referred to as the subspace spanned by S, or by the vectors in S. Conversely, S is called a spanning set of W, and we say that S spans W. It follows from this definition that the span of S is the set of all finite linear combinations of elements (vectors) of S, and can be defined as such. That is, span ⁡ ( S ) = { λ 1 v 1 + λ 2 v 2 + ⋯ + λ n v n ∣ n ∈ N , v 1 , . . . v n ∈ S , λ 1 , . . . λ n ∈ K } {\displaystyle \operatorname {span} (S)={\biggl \{}\lambda _{1}\mathbf {v} _{1}+\lambda _{2}\mathbf {v} _{2}+\cdots +\lambda _{n}\mathbf {v} _{n}\mid n\in \mathbb {N} ,\;\mathbf {v} _{1},...\mathbf {v} _{n}\in S,\;\lambda _{1},...\lambda _{n}\in K{\biggr \}}} When S is empty, the only possibility is n = 0, and the previous expression for span ⁡ ( S ) {\displaystyle \operatorname {span} (S)} reduces to the empty sum. The standard convention for the empty sum implies thus span ( ∅ ) = { 0 } , {\displaystyle {\text{span}}(\emptyset )=\{\mathbf {0} \},} a property that is immediate with the other definitions. However, many introductory textbooks simply include this fact as part of the definition. When S = { v 1 , … , v n } {\displaystyle S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}\}} is finite, one has span ⁡ ( S ) = { λ 1 v 1 + λ 2 v 2 + ⋯ + λ n v n ∣ λ 1 , . . . λ n ∈ K } {\displaystyle \operatorname {span} (S)=\{\lambda _{1}\mathbf {v} _{1}+\lambda _{2}\mathbf {v} _{2}+\cdots +\lambda _{n}\mathbf {v} _{n}\mid \lambda _{1},...\lambda _{n}\in K\}} == Examples == The real vector space R 3 {\displaystyle \mathbb {R} ^{3}} has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis. If (−1, 0, 0) were replaced by (1, 0, 0), it would also form the canonical basis of R 3 {\displaystyle \mathbb {R} ^{3}} . Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, 1⁄2, 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent. The set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} is not a spanning set of R 3 {\displaystyle \mathbb {R} ^{3}} , since its span is the space of all vectors in R 3 {\displaystyle \mathbb {R} ^{3}} whose last component is zero. That space is also spanned by the set {(1, 0, 0), (0, 1, 0)}, as (1, 1, 0) is a linear combination of (1, 0, 0) and (0, 1, 0). Thus, the spanned space is not R 3 . 
{\displaystyle \mathbb {R} ^{3}.} It can be identified with R 2 {\displaystyle \mathbb {R} ^{2}} by removing the third components equal to zero. The empty set is a spanning set of {(0, 0, 0)}, since the empty set is a subset of all possible vector spaces in R 3 {\displaystyle \mathbb {R} ^{3}} , and {(0, 0, 0)} is the intersection of all of these vector spaces. The set of monomials xn, where n is a non-negative integer, spans the space of polynomials. == Theorems == === Equivalence of definitions === The set of all linear combinations of a subset S of V, a vector space over K, is the smallest linear subspace of V containing S. Proof. We first prove that span S is a subspace of V. Since S is a subset of V, we only need to prove the existence of a zero vector 0 in span S, that span S is closed under addition, and that span S is closed under scalar multiplication. Letting S = { v 1 , v 2 , … , v n } {\displaystyle S=\{\mathbf {v} _{1},\mathbf {v} _{2},\ldots ,\mathbf {v} _{n}\}} , it is trivial that the zero vector of V exists in span S, since 0 = 0 v 1 + 0 v 2 + ⋯ + 0 v n {\displaystyle \mathbf {0} =0\mathbf {v} _{1}+0\mathbf {v} _{2}+\cdots +0\mathbf {v} _{n}} . Adding together two linear combinations of S also produces a linear combination of S: ( λ 1 v 1 + ⋯ + λ n v n ) + ( μ 1 v 1 + ⋯ + μ n v n ) = ( λ 1 + μ 1 ) v 1 + ⋯ + ( λ n + μ n ) v n {\displaystyle (\lambda _{1}\mathbf {v} _{1}+\cdots +\lambda _{n}\mathbf {v} _{n})+(\mu _{1}\mathbf {v} _{1}+\cdots +\mu _{n}\mathbf {v} _{n})=(\lambda _{1}+\mu _{1})\mathbf {v} _{1}+\cdots +(\lambda _{n}+\mu _{n})\mathbf {v} _{n}} , where all λ i , μ i ∈ K {\displaystyle \lambda _{i},\mu _{i}\in K} , and multiplying a linear combination of S by a scalar c ∈ K {\displaystyle c\in K} will produce another linear combination of S: c ( λ 1 v 1 + ⋯ + λ n v n ) = c λ 1 v 1 + ⋯ + c λ n v n {\displaystyle c(\lambda _{1}\mathbf {v} _{1}+\cdots +\lambda _{n}\mathbf {v} _{n})=c\lambda _{1}\mathbf {v} _{1}+\cdots +c\lambda _{n}\mathbf {v} _{n}} . Thus span S is a subspace of V. It follows that S ⊆ span ⁡ S {\displaystyle S\subseteq \operatorname {span} S} , since every vi is a linear combination of S (trivially). Suppose that W is a linear subspace of V containing S. Since W is closed under addition and scalar multiplication, then every linear combination λ 1 v 1 + ⋯ + λ n v n {\displaystyle \lambda _{1}\mathbf {v} _{1}+\cdots +\lambda _{n}\mathbf {v} _{n}} must be contained in W. Thus, span S is contained in every subspace of V containing S, and the intersection of all such subspaces, or the smallest such subspace, is equal to the set of all linear combinations of S. === Size of spanning set is at least size of linearly independent set === Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V. Proof. Let S = { v 1 , … , v m } {\displaystyle S=\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{m}\}} be a spanning set and W = { w 1 , … , w n } {\displaystyle W=\{\mathbf {w} _{1},\ldots ,\mathbf {w} _{n}\}} be a linearly independent set of vectors from V. We want to show that m ≥ n {\displaystyle m\geq n} . Since S spans V, then S ∪ { w 1 } {\displaystyle S\cup \{\mathbf {w} _{1}\}} must also span V, and w 1 {\displaystyle \mathbf {w} _{1}} must be a linear combination of S. Thus S ∪ { w 1 } {\displaystyle S\cup \{\mathbf {w} _{1}\}} is linearly dependent, and we can remove one vector from S that is a linear combination of the other elements. 
This vector cannot be any of the wi, since W is linearly independent. The resulting set is { w 1 , v 1 , … , v i − 1 , v i + 1 , … , v m } {\displaystyle \{\mathbf {w} _{1},\mathbf {v} _{1},\ldots ,\mathbf {v} _{i-1},\mathbf {v} _{i+1},\ldots ,\mathbf {v} _{m}\}} , which is a spanning set of V. We repeat this step n times, where the resulting set after the pth step is the union of { w 1 , … , w p } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{p}\}} and m − p vectors of S. This ensures that, at each of the n steps, there is always some vi to remove from S for the vector wp just adjoined, and thus there are at least as many vi's as there are wi's; that is, m ≥ n {\displaystyle m\geq n} . To verify this, we assume by way of contradiction that m < n {\displaystyle m<n} . Then, at the mth step, we have the set { w 1 , … , w m } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}} and we can adjoin another vector w m + 1 {\displaystyle \mathbf {w} _{m+1}} . But, since { w 1 , … , w m } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}} is a spanning set of V, w m + 1 {\displaystyle \mathbf {w} _{m+1}} is a linear combination of { w 1 , … , w m } {\displaystyle \{\mathbf {w} _{1},\ldots ,\mathbf {w} _{m}\}} . This is a contradiction, since W is linearly independent. === Spanning set can be reduced to a basis === Let V be a finite-dimensional vector space. Any set of vectors that spans V can be reduced to a basis for V, by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that V has finite dimension. This also indicates that a basis is a minimal spanning set when V is finite-dimensional. == Generalizations == Generalizing the definition of the span of points in space, a subset X of the ground set of a matroid is called a spanning set if the rank of X equals the rank of the entire ground set. The vector space definition can also be generalized to modules. Given an R-module A and a collection of elements a1, ..., an of A, the submodule of A spanned by a1, ..., an is the sum of cyclic modules R a 1 + ⋯ + R a n = { ∑ k = 1 n r k a k | r k ∈ R } {\displaystyle Ra_{1}+\cdots +Ra_{n}=\left\{\sum _{k=1}^{n}r_{k}a_{k}{\bigg |}r_{k}\in R\right\}} consisting of all R-linear combinations of the elements ai. As with the case of vector spaces, the submodule of A spanned by any subset of A is the intersection of all submodules containing that subset. == Closed linear span (functional analysis) == In functional analysis, a closed linear span of a set of vectors is the minimal closed set which contains the linear span of that set. Suppose that X is a normed vector space and let E be any non-empty subset of X. The closed linear span of E, denoted by Sp ¯ ( E ) {\displaystyle {\overline {\operatorname {Sp} }}(E)} or Span ¯ ( E ) {\displaystyle {\overline {\operatorname {Span} }}(E)} , is the intersection of all the closed linear subspaces of X which contain E. One mathematical formulation of this is Sp ¯ ( E ) = { u ∈ X | ∀ ε > 0 ∃ x ∈ Sp ⁡ ( E ) : ‖ x − u ‖ < ε } . {\displaystyle {\overline {\operatorname {Sp} }}(E)=\{u\in X|\forall \varepsilon >0\,\exists x\in \operatorname {Sp} (E):\|x-u\|<\varepsilon \}.} The closed linear span of the set of functions xn on the interval [0, 1], where n is a non-negative integer, depends on the norm used. If the L2 norm is used, then the closed linear span is the Hilbert space of square-integrable functions on the interval.
But if the maximum norm is used, the closed linear span will be the space of continuous functions on the interval. In either case, the closed linear span contains functions that are not polynomials, and so are not in the linear span itself. However, the cardinality of the set of functions in the closed linear span is the cardinality of the continuum, which is the same cardinality as for the set of polynomials. === Notes === The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span. Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, see Riesz's lemma). === A useful lemma === Let X be a normed space and let E be any non-empty subset of X. Then Sp ¯ ( E ) = Sp ⁡ ( E ) ¯ {\displaystyle {\overline {\operatorname {Sp} }}(E)={\overline {\operatorname {Sp} (E)}}} ; that is, the closed linear span of E is the closure of the linear span of E. (So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.) == See also == Affine hull Conical combination Convex hull == External links == Linear Combinations and Span: Understanding linear combinations and spans of vectors, khanacademy.org. Sanderson, Grant (August 6, 2016). "Linear combinations, span, and basis vectors". Essence of Linear Algebra. Archived from the original on 2021-12-11 – via YouTube.
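The reduction of a spanning set to a basis described above can be carried out by row reduction. A minimal sketch in Python (sympy for exact arithmetic; the variable names are illustrative): the pivot columns of the reduced row echelon form pick out a linearly independent subset with the same span.

```python
from sympy import Matrix

# A redundant spanning set for the xy-plane inside R^3
vectors = [(1, 0, 0), (0, 1, 0), (2, 3, 0)]

M = Matrix(vectors).T            # put the vectors in the columns
_, pivots = M.rref()             # pivot columns index a linearly independent subset
basis = [vectors[i] for i in pivots]
print(basis)                     # [(1, 0, 0), (0, 1, 0)]
```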
Wikipedia:Linear subspace#0
In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces. == Definition == If V is a vector space over a field K, a subset W of V is a linear subspace of V if it is a vector space over K for the operations of V. Equivalently, a linear subspace of V is a nonempty subset W such that, whenever w1, w2 are elements of W and α, β are elements of K, it follows that αw1 + βw2 is in W. The singleton set consisting of the zero vector alone and the entire vector space itself are linear subspaces that are called the trivial subspaces of the vector space. == Examples == === Example I === In the vector space V = R3 (the real coordinate space over the field R of real numbers), take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V. Proof: Given u and v in W, then they can be expressed as u = (u1, u2, 0) and v = (v1, v2, 0). Then u + v = (u1+v1, u2+v2, 0+0) = (u1+v1, u2+v2, 0). Thus, u + v is an element of W, too. Given u in W and a scalar c in R, if u = (u1, u2, 0) again, then cu = (cu1, cu2, c0) = (cu1, cu2,0). Thus, cu is an element of W too. === Example II === Let the field be R again, but now let the vector space V be the Cartesian plane R2. Take W to be the set of points (x, y) of R2 such that x = y. Then W is a subspace of R2. Proof: Let p = (p1, p2) and q = (q1, q2) be elements of W, that is, points in the plane such that p1 = p2 and q1 = q2. Then p + q = (p1+q1, p2+q2); since p1 = p2 and q1 = q2, then p1 + q1 = p2 + q2, so p + q is an element of W. Let p = (p1, p2) be an element of W, that is, a point in the plane such that p1 = p2, and let c be a scalar in R. Then cp = (cp1, cp2); since p1 = p2, then cp1 = cp2, so cp is an element of W. In general, any subset of the real coordinate space Rn that is defined by a homogeneous system of linear equations will yield a subspace. (The equation in example I was z = 0, and the equation in example II was x = y.) === Example III === Again take the field to be R, but now let the vector space V be the set RR of all functions from R to R. Let C(R) be the subset consisting of continuous functions. Then C(R) is a subspace of RR. Proof: We know from calculus that 0 ∈ C(R) ⊂ RR. We know from calculus that the sum of continuous functions is continuous. Again, we know from calculus that the product of a continuous function and a number is continuous. === Example IV === Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions. The same sort of argument as before shows that this is a subspace too. Examples that extend these themes are common in functional analysis. == Properties of subspaces == From the definition of vector spaces, it follows that subspaces are nonempty, and are closed under sums and under scalar multiples. Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W. The equivalent definition states that it is also equivalent to consider linear combinations of two elements at a time. In a topological vector space X, a subspace W need not be topologically closed, but a finite-dimensional subspace is always closed. 
The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals). == Descriptions == Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an n-space that passes through the origin. A natural description of a 1-subspace is as the set of all scalar multiples of one non-zero vector v. 1-subspaces specified by two vectors are equal if and only if one vector can be obtained from another with scalar multiplication: ∃ c ∈ K : v ′ = c v (or v = 1 c v ′ ) {\displaystyle \exists c\in K:\mathbf {v} '=c\mathbf {v} {\text{ (or }}\mathbf {v} ={\frac {1}{c}}\mathbf {v} '{\text{)}}} This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple. A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two linear functionals are equal if and only if one functional can be obtained from another with scalar multiplication (in the dual space): ∃ c ∈ K : F ′ = c F (or F = 1 c F ′ ) {\displaystyle \exists c\in K:\mathbf {F} '=c\mathbf {F} {\text{ (or }}\mathbf {F} ={\frac {1}{c}}\mathbf {F} '{\text{)}}} It is generalized for higher codimensions with a system of equations. The following two subsections will present this latter description in detail, and the remaining four subsections further describe the idea of linear span. === Systems of linear equations === The solution set to any homogeneous system of linear equations with n variables is a subspace in the coordinate space Kn: { [ x 1 x 2 ⋮ x n ] ∈ K n : a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0 } . {\displaystyle \left\{\left[\!\!{\begin{array}{c}x_{1}\\x_{2}\\\vdots \\x_{n}\end{array}}\!\!\right]\in K^{n}:{\begin{alignedat}{6}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=0&\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=0&\\&&&&&&&&&&\vdots \quad &\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=0&\end{alignedat}}\right\}.} For example, the set of all vectors (x, y, z) (over real or rational numbers) satisfying the equations x + 3 y + 2 z = 0 and 2 x − 4 y + 5 z = 0 {\displaystyle x+3y+2z=0\quad {\text{and}}\quad 2x-4y+5z=0} is a one-dimensional subspace. More generally, given n independent linear equations in k variables, the dimension of the subspace of Kk that they determine is the dimension of the null space of A, the matrix whose rows encode the n equations. === Null space of a matrix === In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation: A x = 0 . {\displaystyle A\mathbf {x} =\mathbf {0} .} The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix A = [ 1 3 2 2 − 4 5 ] .
{\displaystyle A={\begin{bmatrix}1&3&2\\2&-4&5\end{bmatrix}}.} Every subspace of Kn can be described as the null space of some matrix (see § Algorithms below for more). === Linear parametric equations === The subset of Kn described by a system of homogeneous linear parametric equations is a subspace: { [ x 1 x 2 ⋮ x n ] ∈ K n : x 1 = a 11 t 1 + a 12 t 2 + ⋯ + a 1 m t m x 2 = a 21 t 1 + a 22 t 2 + ⋯ + a 2 m t m ⋮ x n = a n 1 t 1 + a n 2 t 2 + ⋯ + a n m t m for some t 1 , … , t m ∈ K } . {\displaystyle \left\{\left[\!\!{\begin{array}{c}x_{1}\\x_{2}\\\vdots \\x_{n}\end{array}}\!\!\right]\in K^{n}:{\begin{alignedat}{7}x_{1}&&\;=\;&&a_{11}t_{1}&&\;+\;&&a_{12}t_{2}&&\;+\cdots +\;&&a_{1m}t_{m}&\\x_{2}&&\;=\;&&a_{21}t_{1}&&\;+\;&&a_{22}t_{2}&&\;+\cdots +\;&&a_{2m}t_{m}&\\&&\vdots \;\;&&&&&&&&&&&\\x_{n}&&\;=\;&&a_{n1}t_{1}&&\;+\;&&a_{n2}t_{2}&&\;+\cdots +\;&&a_{nm}t_{m}&\\\end{alignedat}}{\text{ for some }}t_{1},\ldots ,t_{m}\in K\right\}.} For example, the set of all vectors (x, y, z) parameterized by the equations x = 2 t 1 + 3 t 2 , y = 5 t 1 − 4 t 2 , and z = − t 1 + 2 t 2 {\displaystyle x=2t_{1}+3t_{2},\;\;\;\;y=5t_{1}-4t_{2},\;\;\;\;{\text{and}}\;\;\;\;z=-t_{1}+2t_{2}} is a two-dimensional subspace of K3, if K is a number field (such as real or rational numbers). === Span of vectors === In linear algebra, the system of parametric equations can be written as a single vector equation: [ x y z ] = t 1 [ 2 5 − 1 ] + t 2 [ 3 − 4 2 ] . {\displaystyle {\begin{bmatrix}x\\y\\z\end{bmatrix}}\;=\;t_{1}\!{\begin{bmatrix}2\\5\\-1\end{bmatrix}}+t_{2}\!{\begin{bmatrix}3\\-4\\2\end{bmatrix}}.} The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace. In general, a linear combination of vectors v1, v2, ... , vk is any vector of the form t 1 v 1 + ⋯ + t k v k . {\displaystyle t_{1}\mathbf {v} _{1}+\cdots +t_{k}\mathbf {v} _{k}.} The set of all possible linear combinations is called the span: Span { v 1 , … , v k } = { t 1 v 1 + ⋯ + t k v k : t 1 , … , t k ∈ K } . {\displaystyle {\text{Span}}\{\mathbf {v} _{1},\ldots ,\mathbf {v} _{k}\}=\left\{t_{1}\mathbf {v} _{1}+\cdots +t_{k}\mathbf {v} _{k}:t_{1},\ldots ,t_{k}\in K\right\}.} If the vectors v1, ... , vk have n components, then their span is a subspace of Kn. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1, ... , vk. Example The xz-plane in R3 can be parameterized by the equations x = t 1 , y = 0 , z = t 2 . {\displaystyle x=t_{1},\;\;\;y=0,\;\;\;z=t_{2}.} As a subspace, the xz-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1). Every vector in the xz-plane can be written as a linear combination of these two: ( t 1 , 0 , t 2 ) = t 1 ( 1 , 0 , 0 ) + t 2 ( 0 , 0 , 1 ) . {\displaystyle (t_{1},0,t_{2})=t_{1}(1,0,0)+t_{2}(0,0,1){\text{.}}} Geometrically, this corresponds to the fact that every point on the xz-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1). === Column space and row space === A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation: x = A t where A = [ 2 3 5 − 4 − 1 2 ] . {\displaystyle \mathbf {x} =A\mathbf {t} \;\;\;\;{\text{where}}\;\;\;\;A=\left[{\begin{alignedat}{2}2&&3&\\5&&\;\;-4&\\-1&&2&\end{alignedat}}\,\right]{\text{.}}} In this case, the subspace consists of all possible values of the vector x. 
In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of Kn spanned by the column vectors of A. The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below). === Independence, basis, and dimension === In general, a subspace of Kn determined by k parameters (or spanned by k vectors) has dimension k. However, there are exceptions to this rule. For example, the subspace of K3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz-plane, with each point on the plane described by infinitely many different values of t1, t2, t3. In general, vectors v1, ... , vk are called linearly independent if t 1 v 1 + ⋯ + t k v k ≠ u 1 v 1 + ⋯ + u k v k {\displaystyle t_{1}\mathbf {v} _{1}+\cdots +t_{k}\mathbf {v} _{k}\;\neq \;u_{1}\mathbf {v} _{1}+\cdots +u_{k}\mathbf {v} _{k}} for (t1, t2, ... , tk) ≠ (u1, u2, ... , uk). If v1, ..., vk are linearly independent, then the coordinates t1, ..., tk for a vector in the span are uniquely determined. A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more). Example Let S be the subspace of R4 defined by the equations x 1 = 2 x 2 and x 3 = 5 x 4 . {\displaystyle x_{1}=2x_{2}\;\;\;\;{\text{and}}\;\;\;\;x_{3}=5x_{4}.} Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for S. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors: ( 2 t 1 , t 1 , 5 t 2 , t 2 ) = t 1 ( 2 , 1 , 0 , 0 ) + t 2 ( 0 , 0 , 5 , 1 ) . {\displaystyle (2t_{1},t_{1},5t_{2},t_{2})=t_{1}(2,1,0,0)+t_{2}(0,0,5,1).} The subspace S is two-dimensional. Geometrically, it is the plane in R4 passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1). == Operations and relations on subspaces == === Inclusion === The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces (of any dimension). A subspace cannot lie in any subspace of lesser dimension. If dim U = k, a finite number, and U ⊂ W, then dim W = k if and only if U = W. === Intersection === Given subspaces U and W of a vector space V, their intersection U ∩ W := {v ∈ V : v is an element of both U and W} is also a subspace of V. Proof: Let v and w be elements of U ∩ W. Then v and w belong to both U and W. Because U is a subspace, v + w belongs to U. Similarly, since W is a subspace, v + w belongs to W. Thus, v + w belongs to U ∩ W. Now let v belong to U ∩ W, and let c be a scalar. Then v belongs to both U and W, and since U and W are subspaces, cv belongs to both of them; thus, cv belongs to U ∩ W. Finally, since U and W are vector spaces, 0 belongs to both sets, so 0 belongs to U ∩ W. Hence U ∩ W is a subspace of V. For every vector space V, the set {0} and V itself are subspaces of V. === Sum === If U and W are subspaces, their sum is the subspace U + W = { u + w : u ∈ U , w ∈ W } . {\displaystyle U+W=\left\{\mathbf {u} +\mathbf {w} \colon \mathbf {u} \in U,\mathbf {w} \in W\right\}.} For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality max ( dim U , dim W ) ≤ dim ( U + W ) ≤ dim ( U ) + dim ( W ) . 
{\displaystyle \max(\dim U,\dim W)\leq \dim(U+W)\leq \dim(U)+\dim(W).} Here, the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimension of the intersection and the sum are related by the following equation: dim ( U + W ) = dim ( U ) + dim ( W ) − dim ( U ∩ W ) . {\displaystyle \dim(U+W)=\dim(U)+\dim(W)-\dim(U\cap W).} Two subspaces are independent when their intersection is the trivial subspace {0}; more generally, a set of subspaces is independent when each subspace meets the sum of the others only in the trivial subspace. The direct sum is the sum of independent subspaces, written as U ⊕ W {\displaystyle U\oplus W} . An equivalent restatement is that a sum is direct exactly when every vector in it can be written in only one way as a sum of vectors from the individual subspaces. The dimension of a direct sum U ⊕ W {\displaystyle U\oplus W} is given by the same formula as for the sum, which simplifies because the dimension of the trivial intersection is zero. dim ( U ⊕ W ) = dim ( U ) + dim ( W ) {\displaystyle \dim(U\oplus W)=\dim(U)+\dim(W)} === Lattice of subspaces === The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the {0} subspace, the least element, is an identity element of the sum operation, and the entire space V, the greatest element, is an identity element of the intersection operation. === Orthogonal complements === If V {\displaystyle V} is an inner product space and N {\displaystyle N} is a subset of V {\displaystyle V} , then the orthogonal complement of N {\displaystyle N} , denoted N ⊥ {\displaystyle N^{\perp }} , is again a subspace. If V {\displaystyle V} is finite-dimensional and N {\displaystyle N} is a subspace, then the dimensions of N {\displaystyle N} and N ⊥ {\displaystyle N^{\perp }} satisfy the complementary relationship dim ( N ) + dim ( N ⊥ ) = dim ( V ) {\displaystyle \dim(N)+\dim(N^{\perp })=\dim(V)} . Moreover, no nonzero vector is orthogonal to itself, so N ∩ N ⊥ = { 0 } {\displaystyle N\cap N^{\perp }=\{0\}} and V {\displaystyle V} is the direct sum of N {\displaystyle N} and N ⊥ {\displaystyle N^{\perp }} . Applying orthogonal complements twice returns the original subspace: ( N ⊥ ) ⊥ = N {\displaystyle (N^{\perp })^{\perp }=N} for every subspace N {\displaystyle N} . This operation, understood as negation ( ¬ {\displaystyle \neg } ), makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice). In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N {\displaystyle N} such that N ∩ N ⊥ ≠ { 0 } {\displaystyle N\cap N^{\perp }\neq \{0\}} . As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra). == Algorithms == Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties: The reduced matrix has the same null space as the original. Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original. Row reduction does not affect the linear dependence of the column vectors. === Basis for a row space === Input An m × n matrix A. 
Output A basis for the row space of A. Use elementary row operations to put A into row echelon form. The nonzero rows of the echelon form are a basis for the row space of A. See the article on row space for an example. If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of Kn are equal. === Subspace membership === Input A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v with n components. Output Determines whether v is an element of S. Create a (k + 1) × n matrix A whose rows are the vectors b1, ... , bk and v. Use elementary row operations to put A into row echelon form. If the echelon form has a row of zeroes, then the vectors {b1, ..., bk, v} are linearly dependent, and therefore v ∈ S; otherwise, v ∉ S. === Basis for a column space === Input An m × n matrix A. Output A basis for the column space of A. Use elementary row operations to put A into row echelon form. Determine which columns of the echelon form have pivots. The corresponding columns of the original matrix are a basis for the column space. See the article on column space for an example. This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns. === Coordinates for a vector === Input A basis {b1, b2, ..., bk} for a subspace S of Kn, and a vector v ∈ S. Output Numbers t1, t2, ..., tk such that v = t1b1 + ··· + tkbk. Create an augmented matrix A whose columns are b1,...,bk, with the last column being v. Use elementary row operations to put A into reduced row echelon form. Express the final column of the reduced echelon form as a linear combination of the first k columns. The coefficients used are the desired numbers t1, t2, ..., tk. (These should be precisely the first k entries in the final column of the reduced echelon form.) If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S. === Basis for a null space === Input An m × n matrix A. Output A basis for the null space of A. Use elementary row operations to put A in reduced row echelon form. Using the reduced row echelon form, determine which of the variables x1, x2, ..., xn are free. Write equations for the dependent variables in terms of the free variables. For each free variable xi, choose a vector in the null space for which xi = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A. See the article on null space for an example. === Basis for the sum and intersection of two subspaces === Given two subspaces U and W of V, a basis of the sum U + W {\displaystyle U+W} and the intersection U ∩ W {\displaystyle U\cap W} can be calculated using the Zassenhaus algorithm. === Equations for a subspace === Input A basis {b1, b2, ..., bk} for a subspace S of Kn. Output An (n − k) × n matrix whose null space is S. Create a matrix A whose rows are b1, b2, ..., bk. Use elementary row operations to put A into reduced row echelon form. Let c1, c2, ..., cn be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots. 
This results in a homogeneous system of n − k linear equations involving the variables c1,...,cn. The (n − k) × n matrix corresponding to this system is the desired matrix with null space S. Example If the reduced row echelon form of A is [ 1 0 − 3 0 2 0 0 1 5 0 − 1 4 0 0 0 1 7 − 9 0 0 0 0 0 0 ] {\displaystyle \left[{\begin{alignedat}{6}1&&0&&-3&&0&&2&&0\\0&&1&&5&&0&&-1&&4\\0&&0&&0&&1&&7&&-9\\0&&\;\;\;\;\;0&&\;\;\;\;\;0&&\;\;\;\;\;0&&\;\;\;\;\;0&&\;\;\;\;\;0\end{alignedat}}\,\right]} then the column vectors c1, ..., c6 satisfy the equations c 3 = − 3 c 1 + 5 c 2 c 5 = 2 c 1 − c 2 + 7 c 4 c 6 = 4 c 2 − 9 c 4 {\displaystyle {\begin{alignedat}{1}\mathbf {c} _{3}&=-3\mathbf {c} _{1}+5\mathbf {c} _{2}\\\mathbf {c} _{5}&=2\mathbf {c} _{1}-\mathbf {c} _{2}+7\mathbf {c} _{4}\\\mathbf {c} _{6}&=4\mathbf {c} _{2}-9\mathbf {c} _{4}\end{alignedat}}} It follows that the row vectors of A satisfy the equations x 3 = − 3 x 1 + 5 x 2 x 5 = 2 x 1 − x 2 + 7 x 4 x 6 = 4 x 2 − 9 x 4 . {\displaystyle {\begin{alignedat}{1}x_{3}&=-3x_{1}+5x_{2}\\x_{5}&=2x_{1}-x_{2}+7x_{4}\\x_{6}&=4x_{2}-9x_{4}.\end{alignedat}}} In particular, the row vectors of A are a basis for the null space of the corresponding matrix. == See also == Cyclic subspace Invariant subspace Multilinear subspace learning Quotient space (linear algebra) Signal subspace Subspace topology == Notes == == Citations == == Sources == === Textbook === Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International Axler, Sheldon Jay (2015). Linear Algebra Done Right (3rd ed.). Springer. ISBN 978-3-319-11079-0. Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces (2nd ed.). Springer. ISBN 0-387-90093-4. Hefferon, Jim (2020). Linear Algebra (4th ed.). Orthogonal Publishing. ISBN 978-1-944325-11-4. Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016 Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9. Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8 Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7 Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on March 1, 2001 Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646 Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3 === Web === Weisstein, Eric Wolfgang. "Subspace". MathWorld. Retrieved 16 Feb 2021. DuChateau, Paul (5 Sep 2002). "Basic facts about Hilbert Space" (PDF). Colorado State University. Retrieved 17 Feb 2021. == External links == Strang, Gilbert (7 May 2009). "The four fundamental subspaces". Archived from the original on 2021-12-11. Retrieved 17 Feb 2021 – via YouTube. Strang, Gilbert (5 May 2020). "The big picture of linear algebra". Archived from the original on 2021-12-11. Retrieved 17 Feb 2021 – via YouTube.
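The row-reduction algorithms described in § Algorithms are short to implement on top of a computer algebra system. The following is a minimal Python sketch (illustrative only; the helper name in_span and the use of SymPy's rref and nullspace are choices of this example, not part of the article): it computes a basis for the row space and the null space of the matrix from § Systems of linear equations, and tests subspace membership by comparing ranks.

```python
import sympy as sp

# Example matrix from the "Systems of linear equations" section.
A = sp.Matrix([[1, 3, 2],
               [2, -4, 5]])

# Basis for the row space: the nonzero rows of the reduced row echelon form.
rref, pivots = A.rref()
row_basis = [rref.row(i) for i in range(rref.rows) if any(rref.row(i))]

# Basis for the null space (here, the one-dimensional solution subspace).
null_basis = A.nullspace()

# Subspace membership: v lies in the span of the given rows exactly when
# appending v as a new row does not increase the rank.
def in_span(rows, v):
    B = sp.Matrix(rows)
    return B.rank() == B.col_join(sp.Matrix([v])).rank()

print(row_basis)    # two independent rows
print(null_basis)   # e.g. [Matrix([[-23/10], [1/10], [1]])]
print(in_span([[1, 3, 2], [2, -4, 5]], [3, -1, 7]))  # True: (3,-1,7) = row1 + row2
```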
Wikipedia:Linear topology#0
In algebra, a linear topology on a left A {\displaystyle A} -module M {\displaystyle M} is a topology on M {\displaystyle M} that is invariant under translations and admits a fundamental system of neighborhoods of 0 {\displaystyle 0} that consists of submodules of M . {\displaystyle M.} If there is such a topology, M {\displaystyle M} is said to be linearly topologized. If A {\displaystyle A} is given a discrete topology, then M {\displaystyle M} becomes a topological A {\displaystyle A} -module with respect to a linear topology. The notion is used more commonly in algebra than in analysis. Indeed, "[t]opological vector spaces with linear topology form a natural class of topological vector spaces over discrete fields, analogous to the class of locally convex topological vector spaces over the normed fields of real or complex numbers in functional analysis." The term "linear topology" goes back to Lefschetz' work. == Examples and non-examples == For each prime number p, Z {\displaystyle \mathbb {Z} } is linearly topologized by the fundamental system of neighborhoods 0 ∈ ⋯ ⊂ p 2 Z ⊂ p Z ⊂ Z {\displaystyle 0\in \cdots \subset p^{2}\mathbb {Z} \subset p\mathbb {Z} \subset \mathbb {Z} } . Topological vector spaces appearing in functional analysis are typically not linearly topologized (since subspaces do not form a neighborhood system). == See also == == References == Bourbaki, N. (1972). Commutative algebra (Vol. 8). Hermann.
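As a toy illustration of the p-adic example above, the following Python sketch (illustrative only; nothing here comes from the cited sources) treats the submodules p^k Z as the basic neighborhoods of 0 and checks that they are nested subgroups, which is exactly what makes the topology linear.

```python
# Toy illustration of the fundamental system of neighborhoods
# ... ⊂ p^2 Z ⊂ p Z ⊂ Z for the p-adic (linear) topology on Z.

def in_neighborhood(n: int, p: int, k: int) -> bool:
    """Membership of n in the basic neighborhood p**k * Z."""
    return n % p**k == 0

p = 3
# Each neighborhood is a subgroup (closed under addition), hence a Z-submodule:
assert all(in_neighborhood(a + b, p, 2)
           for a in range(-20, 21) for b in range(-20, 21)
           if in_neighborhood(a, p, 2) and in_neighborhood(b, p, 2))
# The neighborhoods are nested: p**(k+1) Z is contained in p**k Z.
assert all(in_neighborhood(n, p, 1)
           for n in range(-100, 101) if in_neighborhood(n, p, 2))
# Translation invariance: two integers are "close" when their difference
# lies deep in the chain, i.e. is divisible by a high power of p.
print(in_neighborhood(7 - 88, p, 4))  # 7 - 88 = -81 = -(3**4), so True
```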
Wikipedia:Linearity of differentiation#0
In calculus, the derivative of any linear combination of functions equals the same linear combination of the derivatives of the functions; this property is known as linearity of differentiation, the rule of linearity, or the superposition rule for differentiation. It is a fundamental property of the derivative that encapsulates in a single rule two simpler rules of differentiation, the sum rule (the derivative of the sum of two functions is the sum of the derivatives) and the constant factor rule (the derivative of a constant multiple of a function is the same constant multiple of the derivative). Thus it can be said that differentiation is linear, or the differential operator is a linear operator. == Statement and derivation == Let f and g be functions, with α and β constants. Now consider d d x ( α ⋅ f ( x ) + β ⋅ g ( x ) ) . {\displaystyle {\frac {\mbox{d}}{{\mbox{d}}x}}(\alpha \cdot f(x)+\beta \cdot g(x)).} By the sum rule in differentiation, this is d d x ( α ⋅ f ( x ) ) + d d x ( β ⋅ g ( x ) ) , {\displaystyle {\frac {\mbox{d}}{{\mbox{d}}x}}(\alpha \cdot f(x))+{\frac {\mbox{d}}{{\mbox{d}}x}}(\beta \cdot g(x)),} and by the constant factor rule in differentiation, this reduces to α ⋅ f ′ ( x ) + β ⋅ g ′ ( x ) . {\displaystyle \alpha \cdot f'(x)+\beta \cdot g'(x).} Therefore, d d x ( α ⋅ f ( x ) + β ⋅ g ( x ) ) = α ⋅ f ′ ( x ) + β ⋅ g ′ ( x ) . {\displaystyle {\frac {\mbox{d}}{{\mbox{d}}x}}(\alpha \cdot f(x)+\beta \cdot g(x))=\alpha \cdot f'(x)+\beta \cdot g'(x).} Omitting the brackets, this is often written as: ( α ⋅ f + β ⋅ g ) ′ = α ⋅ f ′ + β ⋅ g ′ . {\displaystyle (\alpha \cdot f+\beta \cdot g)'=\alpha \cdot f'+\beta \cdot g'.} == Detailed proofs/derivations from definition == We can prove the entire linearity principle at once, or we can prove the individual steps (the constant factor rule and the sum rule) separately. Both approaches are shown here. Proving linearity directly also proves the constant factor rule, the sum rule, and the difference rule as special cases. The sum rule is obtained by setting both constant coefficients to 1 {\displaystyle 1} . The difference rule is obtained by setting the first constant coefficient to 1 {\displaystyle 1} and the second constant coefficient to − 1 {\displaystyle -1} . The constant factor rule is obtained by setting either the second constant coefficient or the second function to 0 {\displaystyle 0} . (From a technical standpoint, the domain of the second function must also be considered; one way to avoid issues is setting the second function equal to the first function and the second constant coefficient equal to 0 {\displaystyle 0} . One could also define both the second constant coefficient and the second function to be 0, where the domain of the second function is a superset of the domain of the first function, among other possibilities.) Conversely, if we first prove the constant factor rule and the sum rule, we can prove linearity and the difference rule. To prove linearity, define the first and second functions to be two other functions multiplied by constant coefficients. Then, as shown in the derivation from the previous section, we can first use the sum rule while differentiating, and then use the constant factor rule, which yields the conclusion for linearity. To prove the difference rule, the second function can be redefined as another function multiplied by the constant coefficient of − 1 {\displaystyle -1} . This, when simplified, gives the difference rule for differentiation. 
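The rule is easy to check symbolically. A minimal SymPy sketch (with arbitrarily chosen f, g, α, β; nothing here is specific to the argument above) compares the derivative of the combination with the combination of the derivatives:

```python
import sympy as sp

x = sp.symbols('x')
alpha, beta = sp.Rational(3), sp.Rational(-2)   # arbitrary constant coefficients
f = sp.sin(x)                                    # arbitrary differentiable functions
g = x**3

lhs = sp.diff(alpha * f + beta * g, x)              # d/dx of the linear combination
rhs = alpha * sp.diff(f, x) + beta * sp.diff(g, x)  # combination of the derivatives

assert sp.simplify(lhs - rhs) == 0
print(lhs)   # 3*cos(x) - 6*x**2
```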
In the proofs/derivations below, the coefficients a , b {\displaystyle a,b} are used; they correspond to the coefficients α , β {\displaystyle \alpha ,\beta } above. === Linearity (directly) === Let a , b ∈ R {\displaystyle a,b\in \mathbb {R} } . Let f , g {\displaystyle f,g} be functions. Let j {\displaystyle j} be a function, where j {\displaystyle j} is defined only where f {\displaystyle f} and g {\displaystyle g} are both defined. (In other words, the domain of j {\displaystyle j} is the intersection of the domains of f {\displaystyle f} and g {\displaystyle g} .) Let x {\displaystyle x} be in the domain of j {\displaystyle j} . Let j ( x ) = a f ( x ) + b g ( x ) {\displaystyle j(x)=af(x)+bg(x)} . We want to prove that j ′ ( x ) = a f ′ ( x ) + b g ′ ( x ) {\displaystyle j^{\prime }(x)=af^{\prime }(x)+bg^{\prime }(x)} . By definition, we can see that j ′ ( x ) = lim h → 0 j ( x + h ) − j ( x ) h = lim h → 0 ( a f ( x + h ) + b g ( x + h ) ) − ( a f ( x ) + b g ( x ) ) h = lim h → 0 ( a f ( x + h ) − f ( x ) h + b g ( x + h ) − g ( x ) h ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}{\frac {j(x+h)-j(x)}{h}}\\&=\lim _{h\rightarrow 0}{\frac {\left(af(x+h)+bg(x+h)\right)-\left(af(x)+bg(x)\right)}{h}}\\&=\lim _{h\rightarrow 0}\left(a{\frac {f(x+h)-f(x)}{h}}+b{\frac {g(x+h)-g(x)}{h}}\right)\\\end{aligned}}} In order to use the limits law for the sum of limits, we need to know that lim h → 0 a f ( x + h ) − f ( x ) h {\textstyle \lim _{h\to 0}a{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 b g ( x + h ) − g ( x ) h {\textstyle \lim _{h\to 0}b{\frac {g(x+h)-g(x)}{h}}} both individually exist. For these smaller limits, we need to know that lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 g ( x + h ) − g ( x ) h {\textstyle \lim _{h\to 0}{\frac {g(x+h)-g(x)}{h}}} both individually exist to use the coefficient law for limits. By definition, f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h {\textstyle f^{\prime }(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}} and g ′ ( x ) = lim h → 0 g ( x + h ) − g ( x ) h {\textstyle g^{\prime }(x)=\lim _{h\to 0}{\frac {g(x+h)-g(x)}{h}}} . So, if we know that f ′ ( x ) {\displaystyle f^{\prime }(x)} and g ′ ( x ) {\displaystyle g^{\prime }(x)} both exist, we will know that lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 g ( x + h ) − g ( x ) h {\textstyle \lim _{h\to 0}{\frac {g(x+h)-g(x)}{h}}} both individually exist. This allows us to use the coefficient law for limits to write lim h → 0 a f ( x + h ) − f ( x ) h = a lim h → 0 f ( x + h ) − f ( x ) h {\displaystyle \lim _{h\to 0}a{\frac {f(x+h)-f(x)}{h}}=a\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 b g ( x + h ) − g ( x ) h = b lim h → 0 g ( x + h ) − g ( x ) h . {\displaystyle \lim _{h\to 0}b{\frac {g(x+h)-g(x)}{h}}=b\lim _{h\to 0}{\frac {g(x+h)-g(x)}{h}}.} With this, we can go back to apply the limit law for the sum of limits, since we know that lim h → 0 a f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}a{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 b g ( x + h ) − g ( x ) h {\textstyle \lim _{h\rightarrow 0}b{\frac {g(x+h)-g(x)}{h}}} both individually exist. From here, we can directly go back to the derivative we were working on. 
j ′ ( x ) = lim h → 0 ( a f ( x + h ) − f ( x ) h + b g ( x + h ) − g ( x ) h ) = lim h → 0 ( a f ( x + h ) − f ( x ) h ) + lim h → 0 ( b g ( x + h ) − g ( x ) h ) = a lim h → 0 ( f ( x + h ) − f ( x ) h ) + b lim h → 0 ( g ( x + h ) − g ( x ) h ) = a f ′ ( x ) + b g ′ ( x ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}\left(a{\frac {f(x+h)-f(x)}{h}}+b{\frac {g(x+h)-g(x)}{h}}\right)\\&=\lim _{h\rightarrow 0}\left(a{\frac {f(x+h)-f(x)}{h}}\right)+\lim _{h\rightarrow 0}\left(b{\frac {g(x+h)-g(x)}{h}}\right)\\&=a\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}\right)+b\lim _{h\rightarrow 0}\left({\frac {g(x+h)-g(x)}{h}}\right)\\&=af^{\prime }(x)+bg^{\prime }(x)\end{aligned}}} Finally, we have shown what we claimed in the beginning: j ′ ( x ) = a f ′ ( x ) + b g ′ ( x ) {\displaystyle j^{\prime }(x)=af^{\prime }(x)+bg^{\prime }(x)} . === Sum === Let f , g {\displaystyle f,g} be functions. Let j {\displaystyle j} be a function, where j {\displaystyle j} is defined only where f {\displaystyle f} and g {\displaystyle g} are both defined. (In other words, the domain of j {\displaystyle j} is the intersection of the domains of f {\displaystyle f} and g {\displaystyle g} .) Let x {\displaystyle x} be in the domain of j {\displaystyle j} . Let j ( x ) = f ( x ) + g ( x ) {\displaystyle j(x)=f(x)+g(x)} . We want to prove that j ′ ( x ) = f ′ ( x ) + g ′ ( x ) {\displaystyle j^{\prime }(x)=f^{\prime }(x)+g^{\prime }(x)} . By definition, we can see that j ′ ( x ) = lim h → 0 j ( x + h ) − j ( x ) h = lim h → 0 ( f ( x + h ) + g ( x + h ) ) − ( f ( x ) + g ( x ) ) h = lim h → 0 ( f ( x + h ) − f ( x ) h + g ( x + h ) − g ( x ) h ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}{\frac {j(x+h)-j(x)}{h}}\\&=\lim _{h\rightarrow 0}{\frac {\left(f(x+h)+g(x+h)\right)-\left(f(x)+g(x)\right)}{h}}\\&=\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}+{\frac {g(x+h)-g(x)}{h}}\right)\\\end{aligned}}} In order to use the law for the sum of limits here, we need to show that the individual limits, lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 g ( x + h ) − g ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}} both exist. By definition, f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h {\textstyle f^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} and g ′ ( x ) = lim h → 0 g ( x + h ) − g ( x ) h {\textstyle g^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}} , so the limits exist whenever the derivatives f ′ ( x ) {\displaystyle f^{\prime }(x)} and g ′ ( x ) {\displaystyle g^{\prime }(x)} exist. So, assuming that the derivatives exist, we can continue the above derivation j ′ ( x ) = lim h → 0 ( f ( x + h ) − f ( x ) h + g ( x + h ) − g ( x ) h ) = lim h → 0 f ( x + h ) − f ( x ) h + lim h → 0 g ( x + h ) − g ( x ) h = f ′ ( x ) + g ′ ( x ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}+{\frac {g(x+h)-g(x)}{h}}\right)\\&=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}+\lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}\\&=f^{\prime }(x)+g^{\prime }(x)\end{aligned}}} Thus, we have shown what we wanted to show, that: j ′ ( x ) = f ′ ( x ) + g ′ ( x ) {\displaystyle j^{\prime }(x)=f^{\prime }(x)+g^{\prime }(x)} . === Difference === Let f , g {\displaystyle f,g} be functions. Let j {\displaystyle j} be a function, where j {\displaystyle j} is defined only where f {\displaystyle f} and g {\displaystyle g} are both defined. 
(In other words, the domain of j {\displaystyle j} is the intersection of the domains of f {\displaystyle f} and g {\displaystyle g} .) Let x {\displaystyle x} be in the domain of j {\displaystyle j} . Let j ( x ) = f ( x ) − g ( x ) {\displaystyle j(x)=f(x)-g(x)} . We want to prove that j ′ ( x ) = f ′ ( x ) − g ′ ( x ) {\displaystyle j^{\prime }(x)=f^{\prime }(x)-g^{\prime }(x)} . By definition, we can see that: j ′ ( x ) = lim h → 0 j ( x + h ) − j ( x ) h = lim h → 0 ( f ( x + h ) − g ( x + h ) ) − ( f ( x ) − g ( x ) ) h = lim h → 0 ( f ( x + h ) − f ( x ) h − g ( x + h ) − g ( x ) h ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}{\frac {j(x+h)-j(x)}{h}}\\&=\lim _{h\rightarrow 0}{\frac {\left(f(x+h)-g(x+h)\right)-\left(f(x)-g(x)\right)}{h}}\\&=\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}-{\frac {g(x+h)-g(x)}{h}}\right)\\\end{aligned}}} In order to use the law for the difference of limits here, we need to show that the individual limits, lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} and lim h → 0 g ( x + h ) − g ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}} both exist. By definition, f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h {\textstyle f^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} and g ′ ( x ) = lim h → 0 g ( x + h ) − g ( x ) h {\textstyle g^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}} , so these limits exist whenever the derivatives f ′ ( x ) {\displaystyle f^{\prime }(x)} and g ′ ( x ) {\displaystyle g^{\prime }(x)} exist. So, assuming that the derivatives exist, we can continue the above derivation j ′ ( x ) = lim h → 0 ( f ( x + h ) − f ( x ) h − g ( x + h ) − g ( x ) h ) = lim h → 0 f ( x + h ) − f ( x ) h − lim h → 0 g ( x + h ) − g ( x ) h = f ′ ( x ) − g ′ ( x ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}\left({\frac {f(x+h)-f(x)}{h}}-{\frac {g(x+h)-g(x)}{h}}\right)\\&=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}-\lim _{h\rightarrow 0}{\frac {g(x+h)-g(x)}{h}}\\&=f^{\prime }(x)-g^{\prime }(x)\end{aligned}}} Thus, we have shown what we wanted to show, that: j ′ ( x ) = f ′ ( x ) − g ′ ( x ) {\displaystyle j^{\prime }(x)=f^{\prime }(x)-g^{\prime }(x)} . === Constant coefficient === Let f {\displaystyle f} be a function. Let a ∈ R {\displaystyle a\in \mathbb {R} } ; a {\displaystyle a} will be the constant coefficient. Let j {\displaystyle j} be a function, where j is defined only where f {\displaystyle f} is defined. (In other words, the domain of j {\displaystyle j} is equal to the domain of f {\displaystyle f} .) Let x {\displaystyle x} be in the domain of j {\displaystyle j} . Let j ( x ) = a f ( x ) {\displaystyle j(x)=af(x)} . We want to prove that j ′ ( x ) = a f ′ ( x ) {\displaystyle j^{\prime }(x)=af^{\prime }(x)} . 
By definition, we can see that: j ′ ( x ) = lim h → 0 j ( x + h ) − j ( x ) h = lim h → 0 a f ( x + h ) − a f ( x ) h = lim h → 0 a f ( x + h ) − f ( x ) h {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}{\frac {j(x+h)-j(x)}{h}}\\&=\lim _{h\rightarrow 0}{\frac {af(x+h)-af(x)}{h}}\\&=\lim _{h\rightarrow 0}a{\frac {f(x+h)-f(x)}{h}}\\\end{aligned}}} Now, in order to use a limit law for constant coefficients to show that lim h → 0 a f ( x + h ) − f ( x ) h = a lim h → 0 f ( x + h ) − f ( x ) h {\displaystyle \lim _{h\rightarrow 0}a{\frac {f(x+h)-f(x)}{h}}=a\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} we need to show that lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} exists. However, f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h {\textstyle f^{\prime }(x)=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} , by the definition of the derivative. So, if f ′ ( x ) {\displaystyle f^{\prime }(x)} exists, then lim h → 0 f ( x + h ) − f ( x ) h {\textstyle \lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}} exists. Thus, if we assume that f ′ ( x ) {\displaystyle f^{\prime }(x)} exists, we can use the limit law and continue our proof. j ′ ( x ) = lim h → 0 a f ( x + h ) − f ( x ) h = a lim h → 0 f ( x + h ) − f ( x ) h = a f ′ ( x ) {\displaystyle {\begin{aligned}j^{\prime }(x)&=\lim _{h\rightarrow 0}a{\frac {f(x+h)-f(x)}{h}}\\&=a\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}\\&=af^{\prime }(x)\\\end{aligned}}} Thus, we have proven that when j ( x ) = a f ( x ) {\displaystyle j(x)=af(x)} , we have j ′ ( x ) = a f ′ ( x ) {\displaystyle j^{\prime }(x)=af^{\prime }(x)} . == See also == Differentiation of integrals – Problem in mathematics Differentiation of trigonometric functions – Mathematical process of finding the derivative of a trigonometric function Differentiation rules – Rules for computing derivatives of functions Distribution (mathematics) – Mathematical term generalizing the concept of function General Leibniz rule – Generalization of the product rule in calculus Integration by parts – Mathematical method in calculus Inverse functions and differentiation – Formula for the derivative of an inverse function Product rule – Formula for the derivative of a product Quotient rule – Formula for the derivative of a ratio of functions Table of derivatives – Rules for computing derivatives of functions Vector calculus identities – Mathematical identities == References ==
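Because differentiation is a linear operator, its restriction to a finite-dimensional function space can be written as a matrix. A small NumPy sketch (illustrative; working in the monomial basis 1, x, ..., x^(n−1) is an assumption of this example) represents d/dx on polynomials of degree less than n:

```python
import numpy as np

n = 5  # polynomials of degree < n, with coordinates in the basis 1, x, ..., x**(n-1)

# Matrix of d/dx in the monomial basis: x**k maps to k * x**(k-1).
D = np.zeros((n, n))
for k in range(1, n):
    D[k - 1, k] = k

p = np.array([7.0, 0.0, -2.0, 0.0, 1.0])   # p(x) = 7 - 2x**2 + x**4
print(D @ p)                                # [ 0. -4.  0.  4.  0.], i.e. p'(x) = -4x + 4x**3
# Linearity of differentiation is now just matrix linearity:
# D @ (a*p + b*q) == a*(D @ p) + b*(D @ q) for any coefficients a, b.
```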
Wikipedia:Linearly disjoint#0
In mathematics, algebras A, B over a field k inside some field extension Ω {\displaystyle \Omega } of k are said to be linearly disjoint over k if the following equivalent conditions are met: (i) The map A ⊗ k B → A B {\displaystyle A\otimes _{k}B\to AB} induced by ( x , y ) ↦ x y {\displaystyle (x,y)\mapsto xy} is injective. (ii) Any k-basis of A remains linearly independent over B. (iii) If u i , v j {\displaystyle u_{i},v_{j}} are k-bases for A, B, then the products u i v j {\displaystyle u_{i}v_{j}} are linearly independent over k. Note that, since every subalgebra of Ω {\displaystyle \Omega } is a domain, (i) implies A ⊗ k B {\displaystyle A\otimes _{k}B} is a domain (in particular reduced). Conversely, if A and B are fields, either A or B is an algebraic extension of k, and A ⊗ k B {\displaystyle A\otimes _{k}B} is a domain, then it is a field and A and B are linearly disjoint. However, there are examples where A ⊗ k B {\displaystyle A\otimes _{k}B} is a domain but A and B are not linearly disjoint: for example, A = B = k(t), the field of rational functions over k. One also has: A, B are linearly disjoint over k if and only if the subfields of Ω {\displaystyle \Omega } generated by A , B {\displaystyle A,B} , resp. are linearly disjoint over k. (cf. Tensor product of fields) Suppose A, B are linearly disjoint over k. If A ′ ⊂ A {\displaystyle A'\subset A} , B ′ ⊂ B {\displaystyle B'\subset B} are subalgebras, then A ′ {\displaystyle A'} and B ′ {\displaystyle B'} are linearly disjoint over k. Conversely, if any finitely generated subalgebras of algebras A, B are linearly disjoint, then A, B are linearly disjoint (since the condition involves only finite sets of elements). == See also == Tensor product of fields == References == Cohn, P. M. (2003). Basic Algebra: Groups, Rings and Fields. Springer.
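Condition (iii) can be verified directly in small examples. The sketch below (illustrative only; the choice of the basis 1, √2, √3, √6 for Q(√2, √3) is an assumption of the example, not part of the article) checks that Q(√2) and Q(√3) are linearly disjoint over Q, while Q(√2) is not linearly disjoint from itself:

```python
import sympy as sp

# Coordinates of elements of Q(sqrt2, sqrt3) in the Q-basis (1, sqrt2, sqrt3, sqrt6).
# k-bases: u = (1, sqrt2) for A = Q(sqrt2), v = (1, sqrt3) for B = Q(sqrt3).
# The products u_i * v_j are 1, sqrt3, sqrt2, sqrt6:
products = sp.Matrix([
    [1, 0, 0, 0],   # 1 * 1         = 1
    [0, 0, 1, 0],   # 1 * sqrt3     = sqrt3
    [0, 1, 0, 0],   # sqrt2 * 1     = sqrt2
    [0, 0, 0, 1],   # sqrt2 * sqrt3 = sqrt6
])
print(products.rank() == 4)  # True: the u_i v_j are linearly independent over Q

# By contrast, taking B = A = Q(sqrt2), the products 1, sqrt2, sqrt2, 2
# span only a 2-dimensional Q-subspace:
same = sp.Matrix([
    [1, 0, 0, 0],   # 1 * 1         = 1
    [0, 1, 0, 0],   # 1 * sqrt2     = sqrt2
    [0, 1, 0, 0],   # sqrt2 * 1     = sqrt2
    [2, 0, 0, 0],   # sqrt2 * sqrt2 = 2
])
print(same.rank())  # 2 < 4, so Q(sqrt2) is not linearly disjoint from itself over Q
```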
Wikipedia:Line–line intersection#0
In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection. In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection. The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line. == Formulas == A necessary condition for two lines to intersect is that they are in the same plane—that is, are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see Skew lines § Testing for skewness. === Given two points on each line === First we consider the intersection of two lines L1 and L2 in two-dimensional space, with line L1 being defined by two distinct points (x1, y1) and (x2, y2), and line L2 being defined by two distinct points (x3, y3) and (x4, y4). The intersection P of line L1 and L2 can be defined using determinants. P x = | | x 1 y 1 x 2 y 2 | | x 1 1 x 2 1 | | x 3 y 3 x 4 y 4 | | x 3 1 x 4 1 | | | | x 1 1 x 2 1 | | y 1 1 y 2 1 | | x 3 1 x 4 1 | | y 3 1 y 4 1 | | P y = | | x 1 y 1 x 2 y 2 | | y 1 1 y 2 1 | | x 3 y 3 x 4 y 4 | | y 3 1 y 4 1 | | | | x 1 1 x 2 1 | | y 1 1 y 2 1 | | x 3 1 x 4 1 | | y 3 1 y 4 1 | | {\displaystyle P_{x}={\frac {\begin{vmatrix}{\begin{vmatrix}x_{1}&y_{1}\\x_{2}&y_{2}\end{vmatrix}}&{\begin{vmatrix}x_{1}&1\\x_{2}&1\end{vmatrix}}\\\\{\begin{vmatrix}x_{3}&y_{3}\\x_{4}&y_{4}\end{vmatrix}}&{\begin{vmatrix}x_{3}&1\\x_{4}&1\end{vmatrix}}\end{vmatrix}}{\begin{vmatrix}{\begin{vmatrix}x_{1}&1\\x_{2}&1\end{vmatrix}}&{\begin{vmatrix}y_{1}&1\\y_{2}&1\end{vmatrix}}\\\\{\begin{vmatrix}x_{3}&1\\x_{4}&1\end{vmatrix}}&{\begin{vmatrix}y_{3}&1\\y_{4}&1\end{vmatrix}}\end{vmatrix}}}\,\!\qquad P_{y}={\frac {\begin{vmatrix}{\begin{vmatrix}x_{1}&y_{1}\\x_{2}&y_{2}\end{vmatrix}}&{\begin{vmatrix}y_{1}&1\\y_{2}&1\end{vmatrix}}\\\\{\begin{vmatrix}x_{3}&y_{3}\\x_{4}&y_{4}\end{vmatrix}}&{\begin{vmatrix}y_{3}&1\\y_{4}&1\end{vmatrix}}\end{vmatrix}}{\begin{vmatrix}{\begin{vmatrix}x_{1}&1\\x_{2}&1\end{vmatrix}}&{\begin{vmatrix}y_{1}&1\\y_{2}&1\end{vmatrix}}\\\\{\begin{vmatrix}x_{3}&1\\x_{4}&1\end{vmatrix}}&{\begin{vmatrix}y_{3}&1\\y_{4}&1\end{vmatrix}}\end{vmatrix}}}\,\!} The determinants can be written out as: P x = ( x 1 y 2 − y 1 x 2 ) ( x 3 − x 4 ) − ( x 1 − x 2 ) ( x 3 y 4 − y 3 x 4 ) ( x 1 − x 2 ) ( y 3 − y 4 ) − ( y 1 − y 2 ) ( x 3 − x 4 ) P y = ( x 1 y 2 − y 1 x 2 ) ( y 3 − y 4 ) − ( y 1 − y 2 ) ( x 3 y 4 − y 3 x 4 ) ( x 1 − x 2 ) ( y 3 − y 4 ) − ( y 1 − y 2 ) ( x 3 − x 4 ) {\displaystyle {\begin{aligned}P_{x}&={\frac {(x_{1}y_{2}-y_{1}x_{2})(x_{3}-x_{4})-(x_{1}-x_{2})(x_{3}y_{4}-y_{3}x_{4})}{(x_{1}-x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}-x_{4})}}\\[4px]P_{y}&={\frac 
{(x_{1}y_{2}-y_{1}x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}y_{4}-y_{3}x_{4})}{(x_{1}-x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}-x_{4})}}\end{aligned}}} When the two lines are parallel or coincident, the denominator is zero. === Given two points on each line segment === The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments. In order to find the position of the intersection with respect to the line segments, we can define lines L1 and L2 in terms of first degree Bézier parameters: L 1 = [ x 1 y 1 ] + t [ x 2 − x 1 y 2 − y 1 ] , L 2 = [ x 3 y 3 ] + u [ x 4 − x 3 y 4 − y 3 ] {\displaystyle L_{1}={\begin{bmatrix}x_{1}\\y_{1}\end{bmatrix}}+t{\begin{bmatrix}x_{2}-x_{1}\\y_{2}-y_{1}\end{bmatrix}},\qquad L_{2}={\begin{bmatrix}x_{3}\\y_{3}\end{bmatrix}}+u{\begin{bmatrix}x_{4}-x_{3}\\y_{4}-y_{3}\end{bmatrix}}} (where t and u are real numbers). The intersection point of the lines is found with one of the following values of t or u, where t = | x 1 − x 3 x 3 − x 4 y 1 − y 3 y 3 − y 4 | | x 1 − x 2 x 3 − x 4 y 1 − y 2 y 3 − y 4 | = ( x 1 − x 3 ) ( y 3 − y 4 ) − ( y 1 − y 3 ) ( x 3 − x 4 ) ( x 1 − x 2 ) ( y 3 − y 4 ) − ( y 1 − y 2 ) ( x 3 − x 4 ) {\displaystyle t={\frac {\begin{vmatrix}x_{1}-x_{3}&x_{3}-x_{4}\\y_{1}-y_{3}&y_{3}-y_{4}\end{vmatrix}}{\begin{vmatrix}x_{1}-x_{2}&x_{3}-x_{4}\\y_{1}-y_{2}&y_{3}-y_{4}\end{vmatrix}}}={\frac {(x_{1}-x_{3})(y_{3}-y_{4})-(y_{1}-y_{3})(x_{3}-x_{4})}{(x_{1}-x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}-x_{4})}}} and u = − | x 1 − x 2 x 1 − x 3 y 1 − y 2 y 1 − y 3 | | x 1 − x 2 x 3 − x 4 y 1 − y 2 y 3 − y 4 | = − ( x 1 − x 2 ) ( y 1 − y 3 ) − ( y 1 − y 2 ) ( x 1 − x 3 ) ( x 1 − x 2 ) ( y 3 − y 4 ) − ( y 1 − y 2 ) ( x 3 − x 4 ) , {\displaystyle u=-{\frac {\begin{vmatrix}x_{1}-x_{2}&x_{1}-x_{3}\\y_{1}-y_{2}&y_{1}-y_{3}\end{vmatrix}}{\begin{vmatrix}x_{1}-x_{2}&x_{3}-x_{4}\\y_{1}-y_{2}&y_{3}-y_{4}\end{vmatrix}}}=-{\frac {(x_{1}-x_{2})(y_{1}-y_{3})-(y_{1}-y_{2})(x_{1}-x_{3})}{(x_{1}-x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}-x_{4})}},} with ( P x , P y ) = ( x 1 + t ( x 2 − x 1 ) , y 1 + t ( y 2 − y 1 ) ) or ( P x , P y ) = ( x 3 + u ( x 4 − x 3 ) , y 3 + u ( y 4 − y 3 ) ) {\displaystyle (P_{x},P_{y})={\bigl (}x_{1}+t(x_{2}-x_{1}),\;y_{1}+t(y_{2}-y_{1}){\bigr )}\quad {\text{or}}\quad (P_{x},P_{y})={\bigl (}x_{3}+u(x_{4}-x_{3}),\;y_{3}+u(y_{4}-y_{3}){\bigr )}} There will be an intersection if 0 ≤ t ≤ 1 and 0 ≤ u ≤ 1. The intersection point falls within the first line segment if 0 ≤ t ≤ 1, and it falls within the second line segment if 0 ≤ u ≤ 1. These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point. In the case where the two line segments have the same endpoint x-coordinates (that is, x3 = x1 and x4 = x2) and x 2 = x 1 + 1 {\displaystyle x_{2}=x_{1}+1} , t {\displaystyle t} and u {\displaystyle u} simplify to t = u = y 1 − y 3 y 1 − y 2 − y 3 + y 4 , {\displaystyle t=u={\frac {y_{1}-y_{3}}{y_{1}-y_{2}-y_{3}+y_{4}}},} with ( P x , P y ) = ( x 1 + t , y 1 + t ( y 2 − y 1 ) ) or ( P x , P y ) = ( x 1 + t , y 3 + t ( y 4 − y 3 ) ) . 
{\displaystyle (P_{x},P_{y})={\bigl (}x_{1}+t,\;y_{1}+t(y_{2}-y_{1}){\bigr )}\quad {\text{or}}\quad (P_{x},P_{y})={\bigl (}x_{1}+t,\;y_{3}+t(y_{4}-y_{3}){\bigr )}.} === Given two line equations === The x and y coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements. Suppose that two lines have the equations y = ax + c and y = bx + d where a and b are the slopes (gradients) of the lines and where c and d are the y-intercepts of the lines. At the point where the two lines intersect (if they do), both y coordinates will be the same, hence the following equality: a x + c = b x + d . {\displaystyle ax+c=bx+d.} We can rearrange this expression in order to extract the value of x, a x − b x = d − c , {\displaystyle ax-bx=d-c,} and so, x = d − c a − b . {\displaystyle x={\frac {d-c}{a-b}}.} To find the y coordinate, all we need to do is substitute the value of x into either one of the two line equations, for example, into the first: y = a d − c a − b + c . {\displaystyle y=a{\frac {d-c}{a-b}}+c.} Hence, the point of intersection is P = ( d − c a − b , a d − c a − b + c ) . {\displaystyle P=\left({\frac {d-c}{a-b}},a{\frac {d-c}{a-b}}+c\right).} Note that if a = b then the two lines are parallel and they do not intersect, unless c = d as well, in which case the lines are coincident and they intersect at every point. === Using homogeneous coordinates === By using homogeneous coordinates, the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple (x, y, w). The mapping from 3D to 2D coordinates is (x′, y′) = (⁠x/w⁠, ⁠y/w⁠). We can convert 2D points to homogeneous coordinates by defining them as (x, y, 1). Assume that we want to find intersection of two infinite lines in 2-dimensional space, defined as a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0. We can represent these two lines in line coordinates as U1 = (a1, b1, c1) and U2 = (a2, b2, c2). The intersection P′ of two lines is then simply given by P ′ = ( a p , b p , c p ) = U 1 × U 2 = ( b 1 c 2 − b 2 c 1 , a 2 c 1 − a 1 c 2 , a 1 b 2 − a 2 b 1 ) {\displaystyle P'=(a_{p},b_{p},c_{p})=U_{1}\times U_{2}=(b_{1}c_{2}-b_{2}c_{1},a_{2}c_{1}-a_{1}c_{2},a_{1}b_{2}-a_{2}b_{1})} If cp = 0, the lines do not intersect. == More than two lines == The intersection of two lines can be generalized to involve additional lines. The existence of and expression for the n-line intersection problem are as follows. === In two dimensions === In two dimensions, more than two lines almost certainly do not intersect at a single point. To determine if they do and, if so, to find the intersection point, write the ith equation (i = 1, …, n) as [ a i 1 a i 2 ] [ x y ] = b i , {\displaystyle {\begin{bmatrix}a_{i1}&a_{i2}\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=b_{i},} and stack these equations into matrix form as A w = b , {\displaystyle \mathbf {A} \mathbf {w} =\mathbf {b} ,} where the ith row of the n × 2 matrix A is [ai1, ai2], w is the 2 × 1 vector [xy], and the ith element of the column vector b is bi. If A has independent columns, its rank is 2. Then if and only if the rank of the augmented matrix [A | b] is also 2, there exists a solution of the matrix equation and thus an intersection point of the n lines. 
The intersection point, if it exists, is given by w = A g b = ( A T A ) − 1 A T b , {\displaystyle \mathbf {w} =\mathbf {A} ^{\mathrm {g} }\mathbf {b} =\left(\mathbf {A} ^{\mathsf {T}}\mathbf {A} \right)^{-1}\mathbf {A} ^{\mathsf {T}}\mathbf {b} ,} where Ag is the Moore–Penrose generalized inverse of A (which has the form shown because A has full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank of A is only 1, then there is no solution if the rank of the augmented matrix is 2, while all of the lines coincide with each other if its rank is also 1. === In three dimensions === The above approach can be readily extended to three dimensions. In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are called skew lines. But if an intersection does exist it can be found, as follows. In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form [ a i 1 a i 2 a i 3 ] [ x y z ] = b i . {\displaystyle {\begin{bmatrix}a_{i1}&a_{i2}&a_{i3}\end{bmatrix}}{\begin{bmatrix}x\\y\\z\end{bmatrix}}=b_{i}.} Thus a set of n lines can be represented by 2n equations in the 3-dimensional coordinate vector w: A w = b {\displaystyle \mathbf {A} \mathbf {w} =\mathbf {b} } where now A is 2n × 3 and b is 2n × 1. As before there is a unique intersection point if and only if A has full column rank and the augmented matrix [A | b] does not, and the unique intersection if it exists is given by w = ( A T A ) − 1 A T b . {\displaystyle \mathbf {w} =\left(\mathbf {A} ^{\mathsf {T}}\mathbf {A} \right)^{-1}\mathbf {A} ^{\mathsf {T}}\mathbf {b} .} == Nearest points to skew lines == In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in a least-squares sense. === In two dimensions === In the two-dimensional case, first, represent line i as a point pi on the line and a unit normal vector n̂i, perpendicular to that line. That is, if x1 and x2 are points on line 1, then let p1 = x1 and let n ^ 1 := [ 0 − 1 1 0 ] x 2 − x 1 ‖ x 2 − x 1 ‖ {\displaystyle \mathbf {\hat {n}} _{1}:={\begin{bmatrix}0&-1\\1&0\end{bmatrix}}{\frac {\mathbf {x} _{2}-\mathbf {x} _{1}}{\|\mathbf {x} _{2}-\mathbf {x} _{1}\|}}} which is the unit vector along the line, rotated by a right angle. The distance from a point x to the line (p, n̂) is given by d ( x , ( p , n ^ ) ) = | ( x − p ) ⋅ n ^ | = | ( x − p ) T n ^ | = | n ^ T ( x − p ) | = ( x − p ) T n ^ n ^ T ( x − p ) . {\displaystyle d{\bigl (}\mathbf {x} ,(\mathbf {p} ,\mathbf {\hat {n}} ){\bigr )}={\bigl |}(\mathbf {x} -\mathbf {p} )\cdot \mathbf {\hat {n}} {\bigr |}=\left|(\mathbf {x} -\mathbf {p} )^{\mathsf {T}}\mathbf {\hat {n}} \right|=\left|\mathbf {\hat {n}} ^{\mathsf {T}}(\mathbf {x} -\mathbf {p} )\right|={\sqrt {(\mathbf {x} -\mathbf {p} )^{\mathsf {T}}\mathbf {\hat {n}} \mathbf {\hat {n}} ^{\mathsf {T}}(\mathbf {x} -\mathbf {p} )}}.} And so the squared distance from a point x to a line is d ( x , ( p , n ^ ) ) 2 = ( x − p ) T ( n ^ n ^ T ) ( x − p ) . {\displaystyle d{\bigl (}\mathbf {x} ,(\mathbf {p} ,\mathbf {\hat {n}} ){\bigr )}^{2}=(\mathbf {x} -\mathbf {p} )^{\mathsf {T}}\left(\mathbf {\hat {n}} \mathbf {\hat {n}} ^{\mathsf {T}}\right)(\mathbf {x} -\mathbf {p} ).} The sum of squared distances to many lines is the cost function: E ( x ) = ∑ i ( x − p i ) T ( n ^ i n ^ i T ) ( x − p i ) . 
{\displaystyle E(\mathbf {x} )=\sum _{i}(\mathbf {x} -\mathbf {p} _{i})^{\mathsf {T}}\left(\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)(\mathbf {x} -\mathbf {p} _{i}).} This can be rearranged: E ( x ) = ∑ i x T n ^ i n ^ i T x − x T n ^ i n ^ i T p i − p i T n ^ i n ^ i T x + p i T n ^ i n ^ i T p i = x T ( ∑ i n ^ i n ^ i T ) x − 2 x T ( ∑ i n ^ i n ^ i T p i ) + ∑ i p i T n ^ i n ^ i T p i . {\displaystyle {\begin{aligned}E(\mathbf {x} )&=\sum _{i}\mathbf {x} ^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {x} -\mathbf {x} ^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}-\mathbf {p} _{i}^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {x} +\mathbf {p} _{i}^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}\\&=\mathbf {x} ^{\mathsf {T}}\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\mathbf {x} -2\mathbf {x} ^{\mathsf {T}}\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}\right)+\sum _{i}\mathbf {p} _{i}^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}.\end{aligned}}} To find the minimum, we differentiate with respect to x and set the result equal to the zero vector: ∂ E ( x ) ∂ x = 0 = 2 ( ∑ i n ^ i n ^ i T ) x − 2 ( ∑ i n ^ i n ^ i T p i ) {\displaystyle {\frac {\partial E(\mathbf {x} )}{\partial \mathbf {x} }}={\boldsymbol {0}}=2\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\mathbf {x} -2\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}\right)} so ( ∑ i n ^ i n ^ i T ) x = ∑ i n ^ i n ^ i T p i {\displaystyle \left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\mathbf {x} =\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}} and so x = ( ∑ i n ^ i n ^ i T ) − 1 ( ∑ i n ^ i n ^ i T p i ) . {\displaystyle \mathbf {x} =\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)^{-1}\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}\right).} === In more than two dimensions === While n̂i is not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting that n̂i n̂iT is simply the symmetric matrix with all eigenvalues unity except for a zero eigenvalue in the direction along the line. This matrix provides a seminorm: applied to the difference between pi and another point, it gives the distance from that point to the line. In any number of dimensions, if v̂i is a unit vector along the ith line, then n ^ i n ^ i T {\displaystyle \mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}} becomes I − v ^ i v ^ i T {\displaystyle \mathbf {I} -\mathbf {\hat {v}} _{i}\mathbf {\hat {v}} _{i}^{\mathsf {T}}} where I is the identity matrix, and so x = ( ∑ i I − v ^ i v ^ i T ) − 1 ( ∑ i ( I − v ^ i v ^ i T ) p i ) . {\displaystyle x=\left(\sum _{i}\mathbf {I} -\mathbf {\hat {v}} _{i}\mathbf {\hat {v}} _{i}^{\mathsf {T}}\right)^{-1}\left(\sum _{i}\left(\mathbf {I} -\mathbf {\hat {v}} _{i}\mathbf {\hat {v}} _{i}^{\mathsf {T}}\right)\mathbf {p} _{i}\right).} === General derivation === In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an origin ai and a unit direction vector n̂i. 
The square of the distance from a point p to one of the lines follows from the Pythagorean theorem: d i 2 = ‖ p − a i ‖ 2 − ( ( p − a i ) T n ^ i ) 2 = ( p − a i ) T ( p − a i ) − ( ( p − a i ) T n ^ i ) 2 {\displaystyle d_{i}^{2}=\left\|\mathbf {p} -\mathbf {a} _{i}\right\|^{2}-\left(\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\mathbf {\hat {n}} _{i}\right)^{2}=\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\left(\mathbf {p} -\mathbf {a} _{i}\right)-\left(\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\mathbf {\hat {n}} _{i}\right)^{2}} where (p − ai)T n̂i is the projection of p − ai on line i. The sum of the squared distances to all lines is ∑ i d i 2 = ∑ i ( ( p − a i ) T ( p − a i ) − ( ( p − a i ) T n ^ i ) 2 ) {\displaystyle \sum _{i}d_{i}^{2}=\sum _{i}\left({\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}}\left(\mathbf {p} -\mathbf {a} _{i}\right)-{\left(\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\mathbf {\hat {n}} _{i}\right)^{2}}\right)} To minimize this expression, we differentiate it with respect to p. ∑ i ( 2 ( p − a i ) − 2 ( ( p − a i ) T n ^ i ) n ^ i ) = 0 {\displaystyle \sum _{i}\left(2\left(\mathbf {p} -\mathbf {a} _{i}\right)-2\left(\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\mathbf {\hat {n}} _{i}\right)\mathbf {\hat {n}} _{i}\right)={\boldsymbol {0}}} ∑ i ( p − a i ) = ∑ i ( n ^ i n ^ i T ) ( p − a i ) {\displaystyle \sum _{i}\left(\mathbf {p} -\mathbf {a} _{i}\right)=\sum _{i}\left(\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\left(\mathbf {p} -\mathbf {a} _{i}\right)} which results in ( ∑ i ( I − n ^ i n ^ i T ) ) p = ∑ i ( I − n ^ i n ^ i T ) a i {\displaystyle \left(\sum _{i}\left(\mathbf {I} -\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\right)\mathbf {p} =\sum _{i}\left(\mathbf {I} -\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\mathbf {a} _{i}} where I is the identity matrix. This is a matrix equation Sp = C, with solution p = S+C, where S+ is the pseudo-inverse of S. == Non-Euclidean geometry == In spherical geometry, any two great circles intersect. In hyperbolic geometry, given any line and any point, there are infinitely many lines through that point that do not intersect the given line. == See also == Line segment intersection Line intersection in projective space Distance between two parallel lines Distance from a point to a line Line–plane intersection Parallel postulate Triangulation (computer vision) Intersection (Euclidean geometry) § Two line segments == References == == External links == Distance between Lines and Segments with their Closest Point of Approach, applicable to two, three, or more dimensions.
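The formulas above translate directly into code. The following NumPy sketch (function names are illustrative, not from any cited source) implements the segment-intersection test from § Given two points on each line segment, the homogeneous-coordinates cross-product formula, and the least-squares nearest point from § Nearest points to skew lines:

```python
import numpy as np

def segment_intersection(p1, p2, p3, p4):
    """Intersection of segments p1-p2 and p3-p4, or None (parallel/out of range)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:                       # parallel or coincident lines
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = -((x1 - x2) * (y1 - y3) - (y1 - y2) * (x1 - x3)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:      # intersection lies on both segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def line_intersection_homogeneous(l1, l2):
    """Intersection of lines (a, b, c) with ax + by + c = 0, via the cross product."""
    p = np.cross(l1, l2)                 # P' = U1 x U2
    if p[2] == 0:                        # parallel lines (no finite intersection)
        return None
    return p[:2] / p[2]

def nearest_point_to_lines(origins, directions):
    """Least-squares point closest to lines with origins a_i and directions v_i."""
    dim = len(origins[0])
    S = np.zeros((dim, dim))
    C = np.zeros(dim)
    for a, v in zip(origins, directions):
        v = np.asarray(v, dtype=float)
        v = v / np.linalg.norm(v)        # ensure unit direction
        P = np.eye(dim) - np.outer(v, v)  # projector orthogonal to the line
        S += P
        C += P @ np.asarray(a, dtype=float)
    return np.linalg.lstsq(S, C, rcond=None)[0]  # pseudo-inverse solve: p = S+ C

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))   # (1.0, 1.0)
print(line_intersection_homogeneous([1, 1, -2], [1, -1, 0]))  # [1. 1.]
print(nearest_point_to_lines([(0, 0, 0), (1, 0, 1)],
                             [(1, 0, 0), (0, 1, 0)]))         # [1. 0. 0.5]
```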
Wikipedia:Ling Long (mathematician)#0
Ling Long is a Chinese mathematician whose research concerns modular forms, elliptic surfaces, and dessins d'enfants, as well as number theory in general. She is a professor of mathematics at Louisiana State University. == Early life and education == Long studied mathematics, computer science, and engineering at Tsinghua University, graduating in 1997. She went to Pennsylvania State University for her graduate studies; she wrote her dissertation, Modularity of Elliptic Surfaces, working with Noriko Yui, who was visiting from Queen's University during Long's time as a graduate student. She was supervised and influenced by Winnie Li. == Career == After postdoctoral research at the Institute for Advanced Study, Long joined the faculty at Iowa State University in 2003. After a year at Cornell University in 2012–2013, she moved to Louisiana State. == Recognition == Long was the 2012–2013 winner of the Ruth I. Michler Memorial Prize of the Association for Women in Mathematics. She was named to the 2023 class of Fellows of the American Mathematical Society, "for contributions to hypergeometric arithmetic, noncongruence modular forms, and supercongruences". She is included in a deck of playing cards featuring notable women mathematicians published by the Association for Women in Mathematics. == References == == External links == Ling Long publications indexed by Google Scholar
Wikipedia:Lior Pachter#0
Lior Samuel Pachter is a computational biologist. He works at the California Institute of Technology, where he is the Bren Professor of Computational Biology. He has widely varied research interests including genomics, combinatorics, computational geometry, machine learning, scientific computing, and statistics. == Early life and education == Pachter was born in Israel and grew up in South Africa. He earned a bachelor's degree in mathematics from the California Institute of Technology in 1994. He completed his doctorate in mathematics from the Massachusetts Institute of Technology in 1999, supervised by Bonnie Berger, with Eric Lander and Daniel Kleitman as co-advisors. == Career and research == Pachter was with the University of California, Berkeley faculty from 1999 to 2018 and was given the Sackler Chair in 2012. In addition to his technical contributions, Pachter is known for using new media to promote open science and for a thought experiment he posted on his blog according to which 'the nearest neighbor to the "perfect human"' is from Puerto Rico. This received considerable media attention, and a response was published in Scientific American. He is also known for his criticisms of the All of Us project's use of UMAP to visualize and interpret population genetics data. === Awards and honors === In 2017, Pachter was elected a Fellow of the International Society for Computational Biology (ISCB). == See also == TopHat (bioinformatics) Fast statistical alignment == References == == External links == Official website Media related to Lior Pachter at Wikimedia Commons
Wikipedia:Liouville space#0
In the mathematical physics of quantum mechanics, Liouville space, also known as line space, is the space of operators on Hilbert space. Liouville space is itself a Hilbert space under the Hilbert-Schmidt inner product. Abstractly, Liouville space is equivalent (isometrically isomorphic) to the tensor product of a Hilbert space with its dual. A common technique for organizing computations in Liouville space is vectorization. Liouville space underlies the density operator formalism and is widely used in the study of open quantum systems. == References ==
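The following NumPy sketch (illustrative only; the variable names are arbitrary) demonstrates the vectorization just mentioned. With a column-stacking vec map, left and right multiplication by fixed operators A and B becomes a single matrix Bᵀ ⊗ A acting on Liouville space, via the standard identity vec(AXB) = (Bᵀ ⊗ A) vec(X), and the Hilbert-Schmidt inner product tr(A†B) becomes the ordinary inner product of the vectorized operators.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def rand_op():
    # A random complex operator on a d-dimensional Hilbert space.
    return rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

A, X, B = rand_op(), rand_op(), rand_op()

def vec(M):
    return M.flatten(order='F')   # column-stacking vectorization

# Left/right multiplication in Hilbert space becomes one superoperator,
# i.e. a matrix acting on Liouville space: vec(A X B) = (B^T kron A) vec(X).
super_op = np.kron(B.T, A)
assert np.allclose(super_op @ vec(X), vec(A @ X @ B))

# The Hilbert-Schmidt inner product <A, B> = tr(A† B) is the ordinary
# inner product of the vectorized operators.
assert np.isclose(np.trace(A.conj().T @ B), np.vdot(vec(A), vec(B)))

print("vectorization identities verified")
```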
Wikipedia:Liouville's formula#0
In mathematics, Liouville's formula, also known as the Abel–Jacobi–Liouville identity, is an equation that expresses the determinant of a square-matrix solution of a first-order system of homogeneous linear differential equations in terms of the sum of the diagonal coefficients of the system. The formula is named after the French mathematician Joseph Liouville. Jacobi's formula provides another representation of the same mathematical relationship. Liouville's formula is a generalization of Abel's identity and can be used to prove it. Since Liouville's formula relates the different linearly independent solutions of the system of differential equations, it can help to find one solution from the other(s), see the example application below. == Statement of Liouville's formula == Consider the n-dimensional first-order homogeneous linear differential equation y ′ = A ( t ) y {\displaystyle y'=A(t)y} on an interval I of the real line, where A(t) for t ∈ I denotes a square matrix of dimension n with real or complex entries. Let Φ denote a matrix-valued solution on I, the so-called fundamental matrix: Φ(t) is a square matrix of dimension n with real or complex entries whose derivative satisfies Φ ′ ( t ) = A ( t ) Φ ( t ) , t ∈ I . {\displaystyle \Phi '(t)=A(t)\Phi (t),\qquad t\in I.} Let tr ⁡ A ( s ) = ∑ i = 1 n a i , i ( s ) , s ∈ I , {\displaystyle \operatorname {tr} A(s)=\sum _{i=1}^{n}a_{i,i}(s),\qquad s\in I,} denote the trace of A(s) = (ai, j (s))i, j ∈ {1,...,n}, the sum of its diagonal entries. If the trace of A is a continuous function, then the determinant of Φ satisfies det Φ ( t ) = det Φ ( t 0 ) exp ⁡ ( ∫ t 0 t tr ⁡ A ( s ) d s ) {\displaystyle \det \Phi (t)=\det \Phi (t_{0})\,\exp \left(\int _{t_{0}}^{t}\operatorname {tr} A(s)\,{\textrm {d}}s\right)} for all t and t0 in I. == Example application == This example illustrates how Liouville's formula can help to find the general solution of a first-order system of homogeneous linear differential equations. Consider y ′ = ( 1 − 1 / x 1 + x − 1 ) ⏟ = A ( x ) y {\displaystyle y'=\underbrace {\begin{pmatrix}1&-1/x\\1+x&-1\end{pmatrix}} _{=\,A(x)}y} on the open interval I = (0, ∞). Assume that the easy solution y ( x ) = ( 1 x ) , x ∈ I , {\displaystyle y(x)={\begin{pmatrix}1\\x\end{pmatrix}},\qquad x\in I,} is already found. Let y ( x ) = ( y 1 ( x ) y 2 ( x ) ) {\displaystyle y(x)={\begin{pmatrix}y_{1}(x)\\y_{2}(x)\end{pmatrix}}} denote another solution, then Φ ( x ) = ( y 1 ( x ) 1 y 2 ( x ) x ) , x ∈ I , {\displaystyle \Phi (x)={\begin{pmatrix}y_{1}(x)&1\\y_{2}(x)&x\end{pmatrix}},\qquad x\in I,} is a square-matrix-valued solution of the above differential equation. Since the trace of A(x) is zero for all x ∈ I, Liouville's formula implies that the determinant det Φ ( x ) = x y 1 ( x ) − y 2 ( x ) = : c 1 ( 1 ) {\displaystyle \det \Phi (x)=x\,y_{1}(x)-y_{2}(x)=:c_{1}\qquad (1)} is actually a constant c1 independent of x. Writing down the first component of the differential equation for y, we obtain using (1) that y 1 ′ ( x ) = y 1 ( x ) − y 2 ( x ) x = x y 1 ( x ) − y 2 ( x ) x = c 1 x , x ∈ I . {\displaystyle y'_{1}(x)=y_{1}(x)-{\frac {y_{2}(x)}{x}}={\frac {x\,y_{1}(x)-y_{2}(x)}{x}}={\frac {c_{1}}{x}},\qquad x\in I.} Therefore, by integration, we see that y 1 ( x ) = c 1 ln ⁡ x + c 2 , x ∈ I , {\displaystyle y_{1}(x)=c_{1}\ln x+c_{2},\qquad x\in I,} involving the natural logarithm and the constant of integration c2. 
Solving equation (1) for y2(x) and substituting for y1(x) gives y 2 ( x ) = x y 1 ( x ) − c 1 = c 1 x ln x + c 2 x − c 1 , x ∈ I , {\displaystyle y_{2}(x)=x\,y_{1}(x)-c_{1}=\,c_{1}x\ln x+c_{2}x-c_{1},\qquad x\in I,} which is the general solution for y. With the special choice c1 = 0 and c2 = 1 we recover the easy solution we started with; the choice c1 = 1 and c2 = 0 yields a linearly independent solution. Therefore, Φ ( x ) = ( ln x 1 x ln x − 1 x ) , x ∈ I , {\displaystyle \Phi (x)={\begin{pmatrix}\ln x&1\\x\ln x-1&x\end{pmatrix}},\qquad x\in I,} is a so-called fundamental solution of the system. == Proof of Liouville's formula == We omit the argument x for brevity. By the Leibniz formula for determinants, the derivative of the determinant of Φ = (Φi, j )i, j ∈ {1,...,n} can be calculated by differentiating one row at a time and taking the sum, i.e. ( det Φ ) ′ = ∑ i = 1 n det ( Φ 1 , 1 Φ 1 , 2 ⋯ Φ 1 , n ⋮ ⋮ ⋮ Φ i , 1 ′ Φ i , 2 ′ ⋯ Φ i , n ′ ⋮ ⋮ ⋮ Φ n , 1 Φ n , 2 ⋯ Φ n , n ) . {\displaystyle (\det \Phi )'=\sum _{i=1}^{n}\det {\begin{pmatrix}\Phi _{1,1}&\Phi _{1,2}&\cdots &\Phi _{1,n}\\\vdots &\vdots &&\vdots \\\Phi '_{i,1}&\Phi '_{i,2}&\cdots &\Phi '_{i,n}\\\vdots &\vdots &&\vdots \\\Phi _{n,1}&\Phi _{n,2}&\cdots &\Phi _{n,n}\end{pmatrix}}.} (2) Since the matrix-valued solution Φ satisfies the equation Φ' = AΦ, we have for every entry of the matrix Φ' Φ i , k ′ = ∑ j = 1 n a i , j Φ j , k , i , k ∈ { 1 , … , n } , {\displaystyle \Phi '_{i,k}=\sum _{j=1}^{n}a_{i,j}\Phi _{j,k}\,,\qquad i,k\in \{1,\ldots ,n\},} or for the entire row ( Φ i , 1 ′ , … , Φ i , n ′ ) = ∑ j = 1 n a i , j ( Φ j , 1 , … , Φ j , n ) , i ∈ { 1 , … , n } . {\displaystyle (\Phi '_{i,1},\dots ,\Phi '_{i,n})=\sum _{j=1}^{n}a_{i,j}(\Phi _{j,1},\ldots ,\Phi _{j,n}),\qquad i\in \{1,\ldots ,n\}.} When we subtract from the i-th row the linear combination ∑ j = 1 j ≠ i n a i , j ( Φ j , 1 , … , Φ j , n ) , {\displaystyle \sum _{\scriptstyle j=1 \atop \scriptstyle j\neq i}^{n}a_{i,j}(\Phi _{j,1},\ldots ,\Phi _{j,n}),} of all the other rows, then the value of the determinant remains unchanged, hence det ( Φ 1 , 1 Φ 1 , 2 ⋯ Φ 1 , n ⋮ ⋮ ⋮ Φ i , 1 ′ Φ i , 2 ′ ⋯ Φ i , n ′ ⋮ ⋮ ⋮ Φ n , 1 Φ n , 2 ⋯ Φ n , n ) = det ( Φ 1 , 1 Φ 1 , 2 ⋯ Φ 1 , n ⋮ ⋮ ⋮ a i , i Φ i , 1 a i , i Φ i , 2 ⋯ a i , i Φ i , n ⋮ ⋮ ⋮ Φ n , 1 Φ n , 2 ⋯ Φ n , n ) = a i , i det Φ {\displaystyle \det {\begin{pmatrix}\Phi _{1,1}&\Phi _{1,2}&\cdots &\Phi _{1,n}\\\vdots &\vdots &&\vdots \\\Phi '_{i,1}&\Phi '_{i,2}&\cdots &\Phi '_{i,n}\\\vdots &\vdots &&\vdots \\\Phi _{n,1}&\Phi _{n,2}&\cdots &\Phi _{n,n}\end{pmatrix}}=\det {\begin{pmatrix}\Phi _{1,1}&\Phi _{1,2}&\cdots &\Phi _{1,n}\\\vdots &\vdots &&\vdots \\a_{i,i}\Phi _{i,1}&a_{i,i}\Phi _{i,2}&\cdots &a_{i,i}\Phi _{i,n}\\\vdots &\vdots &&\vdots \\\Phi _{n,1}&\Phi _{n,2}&\cdots &\Phi _{n,n}\end{pmatrix}}=a_{i,i}\det \Phi } for every i ∈ {1, . . . , n} by the linearity of the determinant with respect to every row. Hence, by (2) and the definition of the trace, ( det Φ ) ′ = tr A det Φ . {\displaystyle (\det \Phi )'=\operatorname {tr} A\,\det \Phi .} (3) It remains to show that this representation of the derivative implies Liouville's formula. Fix x0 ∈ I. Since the trace of A is assumed to be a continuous function on I, it is bounded on every closed and bounded subinterval of I and therefore integrable, hence g ( x ) := det Φ ( x ) exp ( − ∫ x 0 x tr A ( ξ ) d ξ ) , x ∈ I , {\displaystyle g(x):=\det \Phi (x)\exp \left(-\int _{x_{0}}^{x}\operatorname {tr} A(\xi )\,{\textrm {d}}\xi \right),\qquad x\in I,} is a well-defined function.
Differentiating both sides, using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, we obtain g ′ ( x ) = ( ( det Φ ( x ) ) ′ − det Φ ( x ) tr A ( x ) ) exp ( − ∫ x 0 x tr A ( ξ ) d ξ ) = 0 , x ∈ I , {\displaystyle g'(x)={\bigl (}(\det \Phi (x))'-\det \Phi (x)\,\operatorname {tr} A(x){\bigr )}\exp \left(-\int _{x_{0}}^{x}\operatorname {tr} A(\xi )\,{\textrm {d}}\xi \right)=0,\qquad x\in I,} by (3). Therefore, g has to be constant on I, because otherwise we would obtain a contradiction to the mean value theorem (applied separately to the real and imaginary part in the complex-valued case). Since g(x0) = det Φ(x0), Liouville's formula follows by solving the definition of g for det Φ(x).
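A minimal numerical check of Liouville's formula on the example system above (assuming NumPy and SciPy; the initial matrix Φ0 and the integration interval are arbitrary illustrative choices): since tr A(x) = 0 on I, the formula predicts that det Φ(x) stays constant along any matrix solution.

import numpy as np
from scipy.integrate import solve_ivp

def A(x):
    # Coefficient matrix of the example system on I = (0, infinity).
    return np.array([[1.0, -1.0 / x], [1.0 + x, -1.0]])

def rhs(x, phi_flat):
    # The matrix ODE Phi' = A(x) Phi, flattened to a vector for the solver.
    Phi = phi_flat.reshape(2, 2)
    return (A(x) @ Phi).ravel()

x0, x1 = 1.0, 3.0
Phi0 = np.array([[1.0, 1.0], [0.0, 1.0]])  # arbitrary initial matrix, det = 1
sol = solve_ivp(rhs, (x0, x1), Phi0.ravel(), rtol=1e-10, atol=1e-12)
Phi1 = sol.y[:, -1].reshape(2, 2)

# tr A = 0, so det Phi(x1) should equal det Phi(x0) * exp(0) = 1.
print(np.linalg.det(Phi0), np.linalg.det(Phi1))  # both approximately 1

== References == Chicone, Carmen (2006), Ordinary Differential Equations with Applications (2 ed.), New York: Springer-Verlag, pp. 152–153, ISBN 978-0-387-30769-5, MR 2224508, Zbl 1120.34001 Teschl, Gerald (2012), Ordinary Differential Equations and Dynamical Systems, Providence: American Mathematical Society, MR 2961944, Zbl 1263.34002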
Wikipedia:Liouville's theorem (conformal mappings)#0
In mathematics, Liouville's theorem, proved by Joseph Liouville in 1850, is a rigidity theorem about conformal mappings in Euclidean space. It states that every smooth conformal mapping on a domain of Rn, where n > 2, can be expressed as a composition of translations, similarities, orthogonal transformations and inversions: they are Möbius transformations (in n dimensions). This theorem severely limits the variety of possible conformal mappings in R3 and higher-dimensional spaces. By contrast, conformal mappings in R2 can be much more complicated – for example, all simply connected planar domains other than the plane itself are conformally equivalent, by the Riemann mapping theorem. Generalizations of the theorem hold for transformations that are only weakly differentiable (Iwaniec & Martin 2001, Chapter 5). The focus of such a study is the non-linear Cauchy–Riemann system that is a necessary and sufficient condition for a smooth mapping f : Ω → Rn to be conformal: D f T D f = | det D f | 2 / n I {\displaystyle Df^{\mathrm {T} }Df=\left|\det Df\right|^{2/n}I} where Df is the Jacobian derivative, T is the matrix transpose, and I is the identity matrix. A weak solution of this system is defined to be an element f of the Sobolev space W1,nloc(Ω, Rn) with non-negative Jacobian determinant almost everywhere, such that the Cauchy–Riemann system holds at almost every point of Ω. Liouville's theorem is then that every weak solution (in this sense) is a Möbius transformation, meaning that it has the form f ( x ) = b + α A ( x − a ) | x − a | ε , D f = α A | x − a | ε ( I − ε x − a | x − a | ( x − a ) T | x − a | ) , {\displaystyle f(x)=b+{\frac {\alpha A(x-a)}{|x-a|^{\varepsilon }}},\qquad Df={\frac {\alpha A}{|x-a|^{\varepsilon }}}\left(I-\varepsilon {\frac {x-a}{|x-a|}}{\frac {(x-a)^{\mathrm {T} }}{|x-a|}}\right),} where a, b are vectors in Rn, α is a scalar, A is a rotation matrix, ε = 0 or 2, and the matrix in parentheses is I or a Householder matrix (so, orthogonal). Equivalently stated, any quasiconformal map of a domain in Euclidean space that is also conformal is a Möbius transformation. This equivalent statement justifies using the Sobolev space W1,n, since f ∈ W1,nloc(Ω, Rn) then follows from the geometrical condition of conformality and the ACL characterization of Sobolev space. The result is not optimal, however: in even dimensions n = 2k, the theorem also holds for solutions that are only assumed to be in the space W1,kloc, and this result is sharp in the sense that there are weak solutions of the Cauchy–Riemann system in W1,p for any p < k that are not Möbius transformations. In odd dimensions, it is known that W1,n is not optimal, but a sharp result is not known. Similar rigidity results (in the smooth case) hold on any conformal manifold. The group of conformal isometries of an n-dimensional conformal Riemannian manifold always has dimension that cannot exceed that of the full conformal group SO(n + 1, 1). Equality of the two dimensions holds exactly when the conformal manifold is isometric with the n-sphere or projective space. Local versions of the result also hold: The Lie algebra of conformal Killing fields in an open set has dimension less than or equal to that of the conformal group, with equality holding if and only if the open set is locally conformally flat.
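A minimal numerical check of the Cauchy–Riemann conformality condition above (assuming NumPy; the test point and the choice of map are illustrative): the inversion x ↦ x/|x|2, one of the Möbius building blocks in the theorem, satisfies DfTDf = |det Df|2/n I in R3.

import numpy as np

def f(x):
    # Inversion in the unit sphere, one of the Möbius building blocks.
    return x / np.dot(x, x)

n = 3
x = np.array([0.7, -0.4, 1.2])  # an illustrative point away from the origin

# Jacobian of f at x by central finite differences.
h = 1e-6
Df = np.column_stack([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(n)])

lhs = Df.T @ Df
rhs = np.abs(np.linalg.det(Df)) ** (2.0 / n) * np.eye(n)
print(np.allclose(lhs, rhs, atol=1e-6))  # True: the inversion is conformal

== Notes == == References == Blair, David E. (2000), "Chapter 6: The Classical Proof of Liouville's Theorem", Inversion Theory and Conformal Mapping, American Mathematical Society, pp. 95–105, ISBN 0-8218-2636-0.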
Flanders, Harley (1966), "Liouville's theorem on conformal mapping", Journal of Mathematics and Mechanics, 15: 157–161, MR 0184153 Monge, Gaspard (1850), Liouville, J. (ed.), Application de l'analyse à la Géométrie (in French) (5th ed.), Bachelier Iwaniec, Tadeusz; Martin, Gaven (2001), Geometric function theory and non-linear analysis, Oxford Mathematical Monographs, The Clarendon Press Oxford University Press, ISBN 978-0-19-850929-5, MR 1859913. Kobayashi, Shoshichi (1972), Transformation groups in differential geometry, Berlin, New York: Springer-Verlag. Solomentsev, E.D. (2001) [1994], "Liouville theorems", Encyclopedia of Mathematics, EMS Press
Wikipedia:Liouville's theorem (differential algebra)#0
In mathematics, Liouville's theorem, originally formulated by the French mathematician Joseph Liouville in a series of papers from 1833 to 1841, places an important restriction on antiderivatives that can be expressed as elementary functions. The antiderivatives of certain elementary functions cannot themselves be expressed as elementary functions. These are called nonelementary antiderivatives. A standard example of such a function is e − x 2 , {\displaystyle e^{-x^{2}},} whose antiderivative is (up to a constant multiple) the error function, familiar from statistics. Other examples include the functions sin ( x ) x {\displaystyle {\frac {\sin(x)}{x}}} and x x . {\displaystyle x^{x}.} Liouville's theorem states that elementary antiderivatives, if they exist, are in the same differential field as the function, plus possibly a finite number of applications of the logarithm function. == Definitions == For any differential field F , {\displaystyle F,} the constants of F {\displaystyle F} form the subfield Con ( F ) = { f ∈ F : D f = 0 } . {\displaystyle \operatorname {Con} (F)=\{f\in F:Df=0\}.} Given two differential fields F {\displaystyle F} and G , {\displaystyle G,} G {\displaystyle G} is called a logarithmic extension of F {\displaystyle F} if G {\displaystyle G} is a simple transcendental extension of F {\displaystyle F} (that is, G = F ( t ) {\displaystyle G=F(t)} for some transcendental t {\displaystyle t} ) such that D t = D s s for some s ∈ F . {\displaystyle Dt={\frac {Ds}{s}}\quad {\text{ for some }}s\in F.} This has the form of a logarithmic derivative. Intuitively, one may think of t {\displaystyle t} as the logarithm of some element s {\displaystyle s} of F , {\displaystyle F,} in which case, this condition is analogous to the ordinary chain rule. However, F {\displaystyle F} is not necessarily equipped with a unique logarithm; one might adjoin many "logarithm-like" extensions to F . {\displaystyle F.} Similarly, an exponential extension is a simple transcendental extension that satisfies D t t = D s for some s ∈ F . {\displaystyle {\frac {Dt}{t}}=Ds\quad {\text{ for some }}s\in F.} With the above caveat in mind, this element may be thought of as an exponential of an element s {\displaystyle s} of F . {\displaystyle F.} Finally, G {\displaystyle G} is called an elementary differential extension of F {\displaystyle F} if there is a finite chain of subfields from F {\displaystyle F} to G {\displaystyle G} where each extension in the chain is either algebraic, logarithmic, or exponential. == Basic theorem == Suppose F {\displaystyle F} and G {\displaystyle G} are differential fields with Con ( F ) = Con ( G ) , {\displaystyle \operatorname {Con} (F)=\operatorname {Con} (G),} and that G {\displaystyle G} is an elementary differential extension of F . {\displaystyle F.} Suppose f ∈ F {\displaystyle f\in F} and g ∈ G {\displaystyle g\in G} satisfy D g = f {\displaystyle Dg=f} (in words, suppose that G {\displaystyle G} contains an antiderivative of f {\displaystyle f} ). Then there exist c 1 , … , c n ∈ Con ( F ) {\displaystyle c_{1},\ldots ,c_{n}\in \operatorname {Con} (F)} and f 1 , … , f n , s ∈ F {\displaystyle f_{1},\ldots ,f_{n},s\in F} such that f = c 1 D f 1 f 1 + ⋯ + c n D f n f n + D s . {\displaystyle f=c_{1}{\frac {Df_{1}}{f_{1}}}+\dotsb +c_{n}{\frac {Df_{n}}{f_{n}}}+Ds.} In other words, the only functions that have "elementary antiderivatives" (that is, antiderivatives living in, at worst, an elementary differential extension of F {\displaystyle F} ) are those with this form.
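In practice, this criterion is what computer algebra systems apply, via the Risch algorithm, to decide elementary integrability. A small illustration using SymPy, whose risch_integrate is a partial implementation of that algorithm (a sketch; treat the exact API behavior as an assumption):

from sympy import exp, integrate, symbols
from sympy.integrals.risch import NonElementaryIntegral, risch_integrate

x = symbols('x')

# 1/x has an elementary antiderivative, living in the logarithmic
# extension C(x, log(x)) of the rational functions C(x).
print(integrate(1 / x, x))  # log(x)

# exp(-x**2) has no elementary antiderivative; the unevaluated
# NonElementaryIntegral returned here doubles as a proof of that fact.
result = risch_integrate(exp(-x**2), x)
print(isinstance(result, NonElementaryIntegral))  # True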
Thus, on an intuitive level, the theorem states that the only elementary antiderivatives are the "simple" functions plus a finite number of logarithms of "simple" functions. A proof of Liouville's theorem can be found in section 12.4 of Geddes et al. See Lützen's scientific bibliography for a sketch of Liouville's original proof (Chapter IX. Integration in Finite Terms), its modern exposition and algebraic treatment (ibid. §61). == Examples == As an example, the field F := C ( x ) {\displaystyle F:=\mathbb {C} (x)} of rational functions in a single variable has a derivation given by the standard derivative with respect to that variable. The constants of this field are just the complex numbers C ; {\displaystyle \mathbb {C} ;} that is, Con ( C ( x ) ) = C . {\displaystyle \operatorname {Con} (\mathbb {C} (x))=\mathbb {C} .} The function f := 1 x , {\displaystyle f:={\tfrac {1}{x}},} which exists in C ( x ) , {\displaystyle \mathbb {C} (x),} does not have an antiderivative in C ( x ) . {\displaystyle \mathbb {C} (x).} Its antiderivatives ln x + C {\displaystyle \ln x+C} do, however, exist in the logarithmic extension C ( x , ln x ) . {\displaystyle \mathbb {C} (x,\ln x).} Likewise, the function 1 x 2 + 1 {\displaystyle {\tfrac {1}{x^{2}+1}}} does not have an antiderivative in C ( x ) . {\displaystyle \mathbb {C} (x).} Its antiderivatives tan − 1 ( x ) + C {\displaystyle \tan ^{-1}(x)+C} do not seem to satisfy the requirements of the theorem, since they are not (apparently) sums of rational functions and logarithms of rational functions. However, a calculation with Euler's formula e i θ = cos θ + i sin θ {\displaystyle e^{i\theta }=\cos \theta +i\sin \theta } shows that in fact the antiderivatives can be written in the required manner (as logarithms of rational functions). e 2 i θ = e i θ e − i θ = cos θ + i sin θ cos θ − i sin θ = 1 + i tan θ 1 − i tan θ θ = 1 2 i ln ( 1 + i tan θ 1 − i tan θ ) tan − 1 x = 1 2 i ln ( 1 + i x 1 − i x ) {\displaystyle {\begin{aligned}e^{2i\theta }&={\frac {e^{i\theta }}{e^{-i\theta }}}={\frac {\cos \theta +i\sin \theta }{\cos \theta -i\sin \theta }}={\frac {1+i\tan \theta }{1-i\tan \theta }}\\\theta &={\frac {1}{2i}}\ln \left({\frac {1+i\tan \theta }{1-i\tan \theta }}\right)\\\tan ^{-1}x&={\frac {1}{2i}}\ln \left({\frac {1+ix}{1-ix}}\right)\end{aligned}}} == Relationship with differential Galois theory == Liouville's theorem is sometimes presented as a theorem in differential Galois theory, but this is not strictly true. The theorem can be proved without any use of Galois theory. Furthermore, the Galois group of a simple antiderivative is either trivial (if no field extension is required to express it), or is simply the additive group of the constants (corresponding to the constant of integration). Thus, an antiderivative's differential Galois group does not encode enough information to determine if it can be expressed using elementary functions, the major condition of Liouville's theorem. == See also == == Notes == == References == Bertrand, D. (1996), "Review of "Lectures on differential Galois theory"" (PDF), Bulletin of the American Mathematical Society, 33 (2), doi:10.1090/s0273-0979-96-00652-0, ISSN 0002-9904 Geddes, Keith O.; Czapor, Stephen R.; Labahn, George (1992). Algorithms for Computer Algebra. Kluwer Academic Publishers. ISBN 0-7923-9259-0. Liouville, Joseph (1833a). "Premier mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 124–148.
Liouville, Joseph (1833b). "Second mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 149–193. Liouville, Joseph (1833c). "Note sur la détermination des intégrales dont la valeur est algébrique". Journal für die reine und angewandte Mathematik. 10: 347–359. Magid, Andy R. (1994), Lectures on differential Galois theory, University Lecture Series, vol. 7, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-7004-4, MR 1301076 Magid, Andy R. (1999), "Differential Galois theory" (PDF), Notices of the American Mathematical Society, 46 (9): 1041–1049, ISSN 0002-9920, MR 1710665 van der Put, Marius; Singer, Michael F. (2003), Galois theory of linear differential equations, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 328, Berlin, New York: Springer-Verlag, ISBN 978-3-540-44228-8, MR 1960772 == External links == Weisstein, Eric W. "Liouville's Principle". MathWorld.
Wikipedia:Liouvillian function#0
In mathematics, the Liouvillian functions comprise a set of functions including the elementary functions and their repeated integrals. Liouvillian functions can be recursively defined as integrals of other Liouvillian functions. More explicitly, a Liouvillian function is a function of one variable which is the composition of a finite number of arithmetic operations (+, −, ×, ÷), exponentials, constants, solutions of algebraic equations (a generalization of nth roots), and antiderivatives. The logarithm function does not need to be explicitly included since it is the integral of 1 / x {\displaystyle 1/x} . It follows directly from the definition that the set of Liouvillian functions is closed under arithmetic operations, composition, and integration. It is also closed under differentiation. It is not closed under limits and infinite sums. Liouvillian functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. == Examples == All elementary functions are Liouvillian. Examples of well-known functions which are Liouvillian but not elementary are the nonelementary antiderivatives, for example: The error function, erf ( x ) = 2 π ∫ 0 x e − t 2 d t , {\displaystyle \mathrm {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt,} The exponential (Ei), logarithmic (Li or li) and Fresnel (S and C) integrals. All Liouvillian functions are solutions of algebraic differential equations, but not conversely. Examples of functions which are solutions of algebraic differential equations but not Liouvillian include: the Bessel functions (except special cases); the hypergeometric functions (except special cases). Examples of functions which are not solutions of algebraic differential equations and thus not Liouvillian include all transcendentally transcendental functions, such as: the gamma function; the zeta function. == See also == Closed-form expression – Mathematical formula involving a given set of operations Differential Galois theory – Study of Galois symmetry groups of differential fields Liouville's theorem (differential algebra) – Says when antiderivatives of elementary functions can be expressed as elementary functions Nonelementary integral – Integrals not expressible in closed-form from elementary functions Picard–Vessiot theory – Study of differential field extensions induced by linear differential equations
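A quick numerical check of the error-function example above (assuming NumPy and SciPy; the evaluation point is an arbitrary illustrative choice): erf is Liouvillian precisely because it is, by definition, an antiderivative of the elementary integrand (2/√π)e−t2.

import numpy as np
from scipy.integrate import quad
from scipy.special import erf

x = 1.3  # arbitrary evaluation point

# erf(x) = (2/sqrt(pi)) * integral of exp(-t**2) from 0 to x.
integral, _ = quad(lambda t: 2.0 / np.sqrt(np.pi) * np.exp(-t * t), 0.0, x)
print(np.isclose(integral, erf(x)))  # True

== References == == Further reading == Davenport, J. H. (2007). "What Might 'Understand a Function' Mean". In Kauers, M.; Kerber, M.; Miner, R.; Windsteiger, W. (eds.). Towards Mechanized Mathematical Assistants. Berlin/Heidelberg: Springer. pp. 55–65. ISBN 978-3-540-73083-5.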
Wikipedia:Lipman Bers#0
Lipman Bers (Latvian: Lipmans Berss; May 22, 1914 – October 29, 1993) was a Latvian-American mathematician, born in Riga, who created the theory of pseudoanalytic functions and worked on Riemann surfaces and Kleinian groups. He was also known for his work in human rights activism. == Biography == Bers was born in Riga, then under the rule of the Russian Czars, and spent several years as a child in Saint Petersburg; his family returned to Riga in approximately 1919, by which time it was part of independent Latvia. In Riga, his mother was the principal of a Jewish elementary school, and his father became the principal of a Jewish high school, both of which Bers attended, with an interlude in Berlin while his mother, by then separated from his father, attended the Berlin Psychoanalytic Institute. After high school, Bers studied at the University of Zurich for a year, but had to return to Riga again because of the difficulty of transferring money from Latvia in the international financial crisis of the time. He continued his studies at the University of Riga, where he became active in socialist politics, including giving political speeches and working for an underground newspaper. In the aftermath of the Latvian coup in 1934 by right-wing leader Kārlis Ulmanis, Bers was targeted for arrest but fled the country, first to Estonia and then to Czechoslovakia. Bers received his Ph.D. in 1938 from the University of Prague. He had begun his studies in Prague with Rudolf Carnap, but when Carnap moved to the US he switched to Charles Loewner, who would eventually become his thesis advisor. In Prague, he lived with an aunt, and married his wife Mary (née Kagan) whom he had met in elementary school and who had followed him from Riga. Having applied for postdoctoral studies in Paris, he was given a visa to go to France soon after the Munich Agreement, by which Nazi Germany annexed the Sudetenland. He and his wife Mary had a daughter in Paris. They were unable to obtain a visa there to emigrate to the US, as the Latvian quota had filled, so they escaped to the south of France ten days before the fall of Paris, and eventually obtained an emergency US visa in Marseilles, one of a group of 10,000 visas set aside for political refugees by Eleanor Roosevelt. The Bers family rejoined Bers' mother, who had by then moved to New York City and become a psychoanalyst, married to thespian Beno Tumarin. At this time, Bers worked for the YIVO Yiddish research agency. Bers spent World War II teaching mathematics as a research associate at Brown University, where he was joined by Loewner. After the war, Bers found an assistant professorship at Syracuse University (1945–1951), before moving to New York University (1951–1964) and then Columbia University (1964–1982), where he became the Davies Professor of Mathematics, and where he chaired the mathematics department from 1972 to 1975. His move to NYU coincided with a move of his family to New Rochelle, New York, where he joined a small community of émigré mathematicians. He was a visiting scholar at the Institute for Advanced Study in 1949–51. He was a Vice-President (1963–65) and a President (1975–77) of the American Mathematical Society, chaired the Division of Mathematical Sciences of the United States National Research Council from 1969 to 1971, chaired the U.S. National Committee on Mathematics from 1977 to 1981, and chaired the Mathematics Section of the National Academy of Sciences from 1967 to 1970. 
Late in his life, Bers suffered from Parkinson's disease and strokes. He died on October 29, 1993. == Mathematical research == Bers' doctoral work was on the subject of potential theory. While in Paris, he worked on Green's function and on integral representations. After first moving to the US, while working for YIVO, he researched Yiddish mathematics textbooks rather than pure mathematics. At Brown, he began working on problems of fluid dynamics, and in particular on the two-dimensional subsonic flows associated with cross-sections of airfoils. At this time, he began his work with Abe Gelbart on what would eventually develop into the theory of pseudoanalytic functions. Through the 1940s and 1950s he continued to develop this theory, and to use it to study the planar elliptic partial differential equations associated with subsonic flows. Another of his major results in this time concerned the singularities of the partial differential equations defining minimal surfaces. Bers proved an extension of Riemann's theorem on removable singularities, showing that any isolated singularity of a pencil of minimal surfaces can be removed; he spoke on this result at the 1950 International Congress of Mathematicians and published it in Annals of Mathematics. Later, beginning with his visit to the Institute for Advanced Study, Bers "began a ten-year odyssey that took him from pseudoanalytic functions and elliptic equations to quasiconformal mappings, Teichmüller theory, and Kleinian groups". With Lars Ahlfors, he solved the "moduli problem" of finding a holomorphic parameterization of the Teichmüller space, each point of which represents a compact Riemann surface of a given genus. During this period he also coined the popular phrasing of a question on eigenvalues of planar domains, "Can one hear the shape of a drum?", used as an article title by Mark Kac in 1966 and finally answered negatively in 1992 by an academic descendant of Bers. In the late 1950s, by way of adding a coda to his earlier work, Bers wrote several major retrospectives of flows, pseudoanalytic functions, fixed point methods, Riemann surface theory prior to his work on moduli, and the theory of several complex variables. In 1958, he presented his work on Riemann surfaces in a second talk at the International Congress of Mathematicians. Bers' work on the parameterization of Teichmüller space led him in the 1960s to consider the boundary of the parameterized space, whose points corresponded to new types of Kleinian groups, eventually to be called singly degenerate Kleinian groups. He applied Eichler cohomology, previously developed for applications in number theory and the theory of Lie groups, to Kleinian groups. He proved the Bers area inequality, an area bound for hyperbolic surfaces that became a two-dimensional precursor to William Thurston's work on geometrization of 3-manifolds and 3-manifold volume, and in this period Bers himself also studied the continuous symmetries of hyperbolic 3-space. Quasi-Fuchsian groups may be mapped to a pair of Riemann surfaces by taking one of the two connected components of the complement of the group's limit set and forming its quotient by the group; fixing the image of one of these two maps leads to a subset of the space of Kleinian groups called a Bers slice. In 1970, Bers conjectured that the singly degenerate Kleinian surface groups can be found on the boundary of a Bers slice; this statement, known as the Bers density conjecture, was finally proven by Namazi, Souto, and Ohshika in 2010 and 2011.
The Bers compactification of Teichmüller space also dates to this period. == Advising == Over the course of his career, Bers advised approximately 50 doctoral students, among them Enrico Arbarello, Irwin Kra, Linda Keen, Murray H. Protter, and Lesley Sibner. Approximately a third of Bers' doctoral students were women, a high proportion for mathematics. Having felt neglected by his own advisor, Bers met regularly for meals with his students and former students, maintained a keen interest in their personal lives as well as their professional accomplishments, and kept up a friendly competition with Lars Ahlfors over who could bring the larger number of academic descendants to mathematical gatherings. == Human rights activism == As a small child with his mother in Saint Petersburg, Bers had cheered the Russian Revolution and the rise of the Soviet Union, but by the late 1930s he had become disillusioned with communism after the assassination of Sergey Kirov and Stalin's ensuing purges. His son, Victor Bers, later said that "His experiences in Europe motivated his activism in the human rights movement," and Bers himself attributed his interest in human rights to the legacy of Menshevik leader Julius Martov. He founded the Committee on Human Rights of the National Academy of Sciences, and beginning in the 1970s worked to allow the emigration of dissident Soviet mathematicians including Yuri Shikhanovich, Leonid Plyushch, Valentin Turchin, and David and Gregory Chudnovsky. Within the U.S., he also opposed American involvement in the Vietnam War and Southeast Asia, and the maintenance of the U.S. nuclear arsenal during the Cold War. == Awards and honors == In 1961, Bers was elected a Fellow of the American Academy of Arts and Sciences, and in 1965 he became a Fellow of the American Association for the Advancement of Science. He joined the National Academy of Sciences in 1964. He was a member of the Finnish Academy of Sciences and the American Philosophical Society. He received the AMS Leroy P. Steele Prize for mathematical exposition in 1975 for his paper "Uniformization, moduli, and Kleinian groups". In 1986, the New York Academy of Sciences gave him their Human Rights Award. In the early 1980s, the Association for Women in Mathematics held a symposium to honor Bers' accomplishments in mentoring women mathematicians. == Publications == === Books === Bers, Lipman (1953), Theory of pseudo-analytic functions, Institute for Mathematics and Mechanics, New York University, New York, MR 0057347 Bers, Lipman (1958), Mathematical aspects of subsonic and transonic gas dynamics, New York: John Wiley & Sons Bers, Lipman (1976), Calculus, Holt, Rinehart and Winston, (in collaboration with Frank Karal) Bers, Lipman (1998), Kra, Irwin; Maskit, Bernard (eds.), Selected works of Lipman Bers. Part 1, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0996-9, MR 1643465 Bers, Lipman (1998), Kra, Irwin; Maskit, Bernard (eds.), Selected works of Lipman Bers. Part 2, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0997-6, MR 1643469 === Selected articles === with Abe Gelbart: Bers, Lipman; Gelbart, Abe (1944). "On a class of functions defined by partial differential equations". Trans. Amer. Math. Soc. 56: 67–93. doi:10.1090/s0002-9947-1944-0010910-5. MR 0010910. Bers, Lipman (1948). "On rings of analytic functions". Bull. Amer. Math. Soc. 54 (4): 311–315. doi:10.1090/s0002-9904-1948-08992-3. MR 0024970. Bers, L. (February 1950).
"Partial Differential Equations and Generalized Analytic Functions". Proc Natl Acad Sci U S A. 36 (2): 130–136. Bibcode:1950PNAS...36..130B. doi:10.1073/pnas.36.2.130. PMC 1063147. PMID 16588958. Bers, L. (January 1951). "Partial Differential Equations and Generalized Analytic Functions: Second Note". Proc Natl Acad Sci U S A. 37 (1): 42–47. Bibcode:1951PNAS...37...42B. doi:10.1073/pnas.37.1.42. PMC 1063297. PMID 16588987. Bers, Lipman (1951). "Boundary value problems for minimal surfaces with singularities at infinity". Trans. Amer. Math. Soc. 70 (3): 465–491. doi:10.1090/s0002-9947-1951-0043337-4. MR 0043337. with Shmuel Agmon: Agmon, Shmuel; Bers, Lipman (1952). "The expansion theorem for pseudo-analytic functions". Proc. Amer. Math. Soc. 3 (5): 757–764. doi:10.1090/s0002-9939-1952-0057349-4. MR 0057349. Bers, Lipman (1956). "An outline of the theory of pseudoanalytic functions". Bull. Amer. Math. Soc. 62 (4): 291–331. doi:10.1090/s0002-9904-1956-10037-2. MR 0081936. Bers, Lipman (1957). "On a theorem of Mori and the definition of quasiconformality". Trans. Amer. Math. Soc. 84: 78–84. doi:10.1090/s0002-9947-1957-0083025-7. MR 0083025. Bers, Lipman (1960). "Simultaneous uniformization". Bull. Amer. Math. Soc. 66 (2): 94–97. doi:10.1090/s0002-9904-1960-10413-2. MR 0111834. Bers, Lipman (1960). "Spaces of Riemann surfaces as bounded domains". Bull. Amer. Math. Soc. 66 (2): 98–103. doi:10.1090/s0002-9904-1960-10415-6. MR 0111835. Bers, Lipman (1961). "Holomorphic differentials as functions of moduli". Bull. Amer. Math. Soc. 67 (2): 206–210. doi:10.1090/s0002-9904-1961-10569-7. MR 0122989. with Leon Ehrenpreis: Bers, Lipman; Ehrenpreis, Leon (1964). "Holomorphic convexity of Teichmüller spaces". Bull. Amer. Math. Soc. 70 (6): 761–764. doi:10.1090/s0002-9904-1964-11230-1. MR 0168800. Bers, Lipman (1974). "On spaces of Riemann surfaces with nodes". Bull. Amer. Math. Soc. 80 (6): 1219–1222. doi:10.1090/s0002-9904-1974-13686-4. MR 0361165. Bers, Lipman (1977). "Quasiconformal mappings with applications to differential equations, function theory and topology". Bull. Amer. Math. Soc. 83 (6): 1083–1100. doi:10.1090/s0002-9904-1977-14390-5. MR 0463433. Bers, Lipman (1981). "Finite dimensional Teichmüller spaces and generalizations". Bull. Amer. Math. Soc. (N.S.). 5 (2): 131–172. doi:10.1090/s0273-0979-1981-14933-8. MR 0621883. == References == == External links == Quotations related to Lipman Bers at Wikiquote
Wikipedia:Lipót Fejér#0
Lipót Fejér (or Leopold Fejér, Hungarian pronunciation: [ˈfɛjeːr]; 9 February 1880 – 15 October 1959) was a Hungarian mathematician of Jewish heritage. Fejér was born Leopold Weisz, and changed to the Hungarian name Fejér around 1900. == Biography == He was born in Pécs, Austria-Hungary, into the Jewish family of Victoria Goldberger and Samuel Weiss. His maternal great-grandfather Samuel Nachod was a doctor and his grandfather was a renowned scholar, author of a Hebrew-Hungarian dictionary. Leopold's father, Samuel Weiss, was a shopkeeper in Pécs. Leopold did not do well in primary school, so for a while his father withdrew him and had him schooled at home. He developed his interest in mathematics in high school thanks to his teacher Sigismund Maksay. Fejér studied mathematics and physics at the University of Budapest and at the University of Berlin, where he was taught by Hermann Schwarz. In 1902 he earned his doctorate from the University of Budapest (today Eötvös Loránd University). From 1902 to 1905 Fejér taught there and from 1905 until 1911 he taught at Franz Joseph University in Kolozsvár in Austria-Hungary (now Cluj-Napoca in Romania). In 1911 Fejér was appointed to the chair of mathematics at the University of Budapest and he held that post until his death. He was elected a corresponding member (1908) and a full member (1930) of the Hungarian Academy of Sciences. During his period in the chair at Budapest Fejér led a highly successful Hungarian school of analysis. He was the thesis advisor of mathematicians such as John von Neumann, Paul Erdős, George Pólya and Pál Turán. Thanks to Fejér, Hungary developed a strong mathematical school: he educated a generation of students who went on to become eminent scientists. As Pólya recalled, a large number of them became interested in mathematics thanks to Fejér's fascinating personality and charisma. Fejér gave short (no more than an hour) but very entertaining lectures and often sat with students in cafés, discussing mathematical problems and telling stories from his life and how he interacted with the world's leading mathematicians. Fejér's research concentrated on harmonic analysis and, in particular, Fourier series. Fejér collaborated on important papers, one with Carathéodory on entire functions in 1907 and another major work with Frigyes Riesz in 1922 on conformal mappings (specifically, a short proof of the Riemann mapping theorem). In 1944, Fejér was forced to resign because of his Jewish background. One night at the end of December 1944, members of the Arrow Cross Party stormed into his house. Fejér and all the residents of his house were marched to the banks of the Danube and were about to be shot, but were miraculously saved by a phone call "from a brave officer". Fejér was later found in a hospital in the city, where he was admitted "under unexplained circumstances". This severe trauma left a permanent mark on the scientist's mental faculties, something even he himself noticed and later often alluded to with the phrase "since I became an idiot". Still, according to his colleagues, he kept on an even keel until the mid-1950s, when he became senile. Lipót Fejér died in Budapest on 15 October 1959. His grave is in the distinguished Kerepesi Cemetery. == Pólya on Fejér == If you could see him in his rather Bohemian attire (which was, I suspect, carefully chosen) you would find him very eccentric.
Yet he would not appear so in his natural habitat, in a certain section of Budapest middle-class society, many members of which had the same manners, if not quite the same mannerisms, as Fejér — there he would appear about half eccentric. Pólya writes the following about Fejér, telling us much about his personality: He had artistic tastes. He deeply loved music and was a good pianist. He liked a well-turned phrase. 'As to earning a living', he said, 'a professor's salary is a necessary, but not sufficient, condition.' Once he was very angry with a colleague who happened to be a topologist, and explaining the case at length he wound up by declaring '... and what he is saying is a topological mapping of the truth'. He had a quick eye for foibles and miseries; in seemingly dull situations he noticed points that were unexpectedly funny or unexpectedly pathetic. He carefully cultivated his talent of raconteur; when he told, with his characteristic gestures, of the little shortcomings of a certain great mathematician, he was irresistible. The hours spent in continental coffee houses with Fejér discussing mathematics and telling stories are a cherished recollection for many of us. Fejér presented his mathematical remarks with the same verve as his stories, and this may have helped him in winning the lasting interest of so many younger men in his problems. In the same article Pólya writes about Fejér's style of mathematics: Fejér talked about a paper he was about to write up. 'When I write a paper,' he said, 'I have to rederive for myself the rules of differentiation and sometimes even the commutative law of multiplication.' These words stuck in my memory and years later I came to think that they expressed an essential aspect of Fejér's mathematical talent; his love for the intuitively clear detail. It was not given to him to solve very difficult problems or to build vast conceptual structures. Yet he could perceive the significance, the beauty, and the promise of a rather concrete not too large problem, foresee the possibility of a solution and work at it with intensity. And, when he had found the solution, he kept on working at it with loving care, till each detail became fully transparent. It is due to such care spent on the elaboration of the solution that Fejér's papers are very clearly written, and easy to read and most of his proofs appear very clear and simple. Yet only the very naive may think that it is easy to write a paper that is easy to read, or that it is a simple thing to point out a significant problem that is capable of a simple solution. == Gallery == == See also == Fejér window Real algebraic geometry == References == == Sources == Szegö, Gabor (1960). "Leopold Fejér: In memoriam, 1880-1959". Bull. Amer. Math. Soc. 66 (5): 346–352. doi:10.1090/S0002-9904-1960-10441-7. MR 0114742. Hersch, Reuben (1993). "A Visit to Hungarian Mathematics". The Mathematical Intelligencer. 15 (2): 13–26. doi:10.1007/BF03024187. S2CID 122827181. Struik, Dirk J. (1987). A Concise History of Mathematics: Fourth Revised Edition. Courier Corporation. p. 213. ISBN 9780486138886. == External links == Birthplace of Lipót Fejér. O'Connor, John J.; Robertson, Edmund F., "Lipót Fejér", MacTutor History of Mathematics Archive, University of St Andrews Lipót Fejér at the Mathematics Genealogy Project == Further reading == Mikolás, Miklós (1970–1980). "Fejér, Lipót". Dictionary of Scientific Biography. Vol. 4. New York: Charles Scribner's Sons. pp. 561–2. ISBN 978-0-684-10114-9.
Wikipedia:Lisa Jeffrey#0
Lisa Claire Jeffrey FRSC is a Canadian mathematician, a professor of mathematics at the University of Toronto. In her research, she uses symplectic geometry to provide rigorous proofs of results in quantum field theory. == Education and career == Jeffrey graduated from Princeton University in 1986. She was awarded a Marshall Scholarship and obtained her doctorate from the University of Oxford in 1991, under the supervision of Sir Michael Atiyah. After working as a postdoctoral researcher at the Institute for Advanced Study, she became an assistant professor at Princeton in 1992. She then moved to McGill University in 1995, and became a full professor at the University of Toronto in 1997. Jeffrey was the 2001 winner of the Krieger–Nelson Prize and the 2002 winner of the Coxeter–James Prize. In 2007 she became a fellow of the Royal Society of Canada, and in 2012 she became a fellow of the American Mathematical Society. She was chosen to give the Association for Women in Mathematics–American Mathematical Society 2017 Noether Lecture at the Joint Mathematics Meetings. == Selected publications == Quantum fields and strings: a course for mathematicians. Vol. 1, 2. Material from the Special Year on Quantum Field Theory held at the Institute for Advanced Study, Princeton, NJ, 1996–1997. Edited by Pierre Deligne, Pavel Etingof, Daniel S. Freed, Lisa C. Jeffrey, David Kazhdan, John W. Morgan, David R. Morrison and Edward Witten. American Mathematical Society, Providence, RI; Institute for Advanced Study (IAS), Princeton, NJ, 1999. Vol. 1: xxii+723 pp.; Vol. 2: pp. i–xxiv and 727–1501. ISBN 0-8218-1198-3, 81-06 (81T30 81Txx) == References == == External links == personal web page
Wikipedia:Lisa Lorentzen#0
Lisa Lorentzen (born 1943, also published as Lisa Jacobsen) is a Norwegian mathematician known for her work on continued fractions. She is a professor emerita in the Department of Mathematical Sciences at the Norwegian University of Science and Technology (NTNU). == Books == With Haakon Waadeland, Lorentzen is the author of the book Continued Fractions with Applications (Studies in Computational Mathematics 3, North-Holland, 1992; 2nd ed., Atlantis Studies in Mathematics for Engineering and Science, Springer, 2008). She is also the author of two textbooks in Norwegian: Kalkulus for ingeniører [Calculus for engineers] and Hva er matematikk [What is mathematics?], and co-author with Arne Hole and Tom Louis Lindstrøm of Kalkulus med én og flere variable [Calculus with single and multiple variables]. == Recognition == Lorentzen is a member of the Royal Norwegian Society of Sciences and Letters. She was the 1986 winner of the academic prize of the Royal Norwegian Society of Sciences and Letters. == References ==
Wikipedia:Lisa Mantini#0
Lisa Mantini is an American mathematician. == Education == Mantini earned a Bachelor of Science from the University of Pittsburgh, and a Master of Arts and PhD from Harvard University. All these degrees were in mathematics. == Teaching == Mantini taught at Wellesley College prior to 1985. In 1985, she began to teach at Oklahoma State University. Among other awards (see below), in 1995 she received a Deborah and Franklin Haimo Award for Distinguished College or University Teaching of Mathematics, the highest teaching honor bestowed by the Mathematical Association of America. In 1998, she gave the undergraduate lecture course, "Representations of Finite Symmetry Groups", for the Mentoring Program for Women in Mathematics at the Institute for Advanced Study in Princeton, New Jersey. == Mathematical Association of America Governor == Mantini served the Oklahoma-Arkansas Section of the Mathematical Association of America as Governor from 2002 to 2005 and from 2014 to 2017. This made her the first person to serve the section as Governor for two terms. == Notable publications == An Integral Transform in L2-Cohomology for the Ladder Representations of U(p,q), J. Funct. Anal. 60, 211-242 (1985) An L2-Cohomology Construction of Negative Spin Mass Zero Equations for U(p,q), J. Math. Anal. Appl. 136, 419-449 (1988) An L2-Cohomology Construction of Unitary Highest Weight Modules for U(p,q), Trans. Amer. Math. Soc. 323, 583-602 (1991) Inversion of an Integral Transform and Ladder Representations of U(1, q), in Representation Theory and Harmonic Analysis, Contemp. Math. 191, AMS, Providence, 1995, pp. 117–138 (with J. Lorch) To Challenge with Compassion: Goals for Mathematics Education, MAA FOCUS 15, Number 5 (October 1995), pp. 10–11 Power Series and Inversion of an Integral Transform, Pi Mu Epsilon Journal 10, 560-574 (1997) (with M. Oehrtman) Friedberg, Solomon. (2001). Teaching Mathematics in Colleges and Universities: Case Studies for Today's Classroom. (Contributing author). United States: American Mathematical Society Intertwining Ladder Representations for SU(p,q) into Dolbeault Cohomology, in Non-Commutative Harmonic Analysis, Progr. Math. 220, Birkhäuser, Boston, 2004, pp. 395–418 (with J. Lorch and J. Novak) == Notable recognition == 1994: Received the AAUW's Founder's Postdoctoral Fellowship 1994: Was declared one of the Oklahoma-Arkansas Section of the Mathematical Association of America's "Distinguished College/University Teachers of Mathematics" (one was chosen each year) 1995: Received a Deborah and Franklin Haimo Award for Distinguished College or University Teaching of Mathematics, the highest teaching honor bestowed by the Mathematical Association of America 2020: Received a Certificate of Meritorious Service from the Mathematical Association of America == References ==
Wikipedia:List of African-American mathematicians#0
The bestselling book and film, Hidden Figures, celebrated the role of African-American women mathematicians in the space race and the barriers they had to overcome to study and pursue a career in mathematics and related fields. Although many of African Americans' other achievements in careers in mathematical science, research, education, and applied fields have also been "hidden", the community of mathematicians has been growing. African Americans represented around 4–6% of the graduates majoring in mathematics and statistics in the US between 2000 and 2015. This list catalogs Wikipedia articles on African Americans in mathematics, as well as early recipients of doctoral degrees in mathematics and mathematics education, books and studies about African-American mathematicians, and other major landmarks. == Historical landmarks == 1792: Benjamin Banneker calculated planetary movements and predicted eclipses in his Almanac. 1867: Howard University established its Department of Mathematics. 1895: Joseph Carter Corbin, president of Branch Normal College (now University of Arkansas at Pine Bluff), published his first problem in American Mathematical Monthly. 1916: Dudley Weldon Woodard became a charter member of the Mathematical Association of America (MAA). 1925: Elbert Frank Cox is the first African-American awarded a doctoral degree in mathematics, from Cornell University. 1929: Dudley Weldon Woodard is the first African-American mathematician known to publish in a mathematics journal, with the article "On two-dimensional analysis situs with special reference to the Jordan curve-theorem" in Fundamenta Mathematicae. 1943: Euphemia Lofton Haynes is the first African-American woman to gain a doctoral degree in mathematics. 1951: The MAA Board of Governors adopted a resolution to conduct their scientific and business meetings, and social gatherings "without discrimination as to race, creed, or color". 1956: Gloria Ford Gilmer is believed to be the first African-American woman to publish mathematical research, co-authoring an article in Proceedings of the American Mathematical Society and another in Pacific Journal of Mathematics. 1965: Mathematician David Blackwell becomes the first African-American in any field to be elected to membership of the National Academy of Sciences. 1969: 17 African-American mathematicians met in New Orleans, forming the National Association of Mathematicians to "promote excellence in the mathematical sciences and to promote the mathematical development of under-represented American minorities". 1976: Howard University establishes the first PhD program in mathematics at a historically black college or university under mathematics department chair James Donaldson and professor J. Ernest Wilkins Jr. 1980: The Claytor Lecture – now the Claytor-Woodard Lecture in honor of William W. S. Claytor and Dudley Weldon Woodard – is established at MAA. 1982: Civil rights leader Bob Moses (of the Student Nonviolent Coordinating Committee) used his MacArthur Fellowship to start the Algebra Project, a national mathematics literacy program for high schools. 1988: The MAA established a task force that led to the formation in 1990 of SUMMA, a program for the Strengthening of Underrepresented Minority Mathematics Achievement. 1992: Mathematician Freeman Hrabowski becomes president of the University of Maryland, Baltimore County (UMBC). 1994: The Blackwell Lecture is established for MAA meetings, jointly by MAA and NAM, as well as the NAM Wilkins Lecture and Bharucha-Reid Lecture.
1995: The first CAARMS – Conference for African American Researchers in Mathematical Sciences – was held, to highlight the work of researchers and students and encourage the careers of under-represented groups in mathematics. Proceedings are published by the American Mathematical Society in its Contemporary Mathematics series. 1997: Kathleen Adebola Okikiolu was the first African American awarded a Sloan Research Fellowship and a Presidential Early Career Award for Scientists and Engineers. 1997: Scott W. Williams produced the website Mathematicians of the African Diaspora, a collection of profiles of African-American mathematicians, a newsletter, and resources on Africans in mathematics. By early 2007 it had close to 5 million visitors. The website has been cataloged by the Library of Congress. 1999: The mathematics departments of the 25 highest-ranked universities in the US had more than 900 faculty members, of whom 4 were African-American. 2003: Clarence F. Stephens is the first African-American to be honored with the Mathematical Association of America's (MAA) most prestigious award, for Distinguished Service to Mathematics. 2004: The Association for Women in Mathematics (AWM) and MAA formally established the Etta Zuber Falconer Lecture. 2015: Katherine Coleman Johnson received the Presidential Medal of Freedom. 2016: Hidden Figures, by Margot Lee Shetterly, is published, going on to win multiple awards and reach number 1 on the New York Times bestseller list. It tells the story of African-American women mathematicians at NASA during the space race. 2017: The film adaptation, Hidden Figures, is nominated for Best Picture at the Academy Awards, and Katherine Johnson makes an appearance at the ceremony. 2020: The updated website Mathematicians of the African Diaspora debuts in October. The new site is supported by the National Association of Mathematicians (NAM) and the Educational Advancement Foundation (EAF). == Doctoral degrees in mathematics == The lists of doctoral degrees, including the Doctor of Philosophy (PhD) in mathematics and Doctor of Education (EdD), draw from these sources: Turner (1971), Greene (1974), Williams (1997), Zeitz (2008), Shakil (2010), and the Mathematical Association of America. === First men and women === These are the first 12 known PhDs by African-American men and women in mathematics, in alphabetical order for years with multiple doctorate holders, with women first. === Doctoral degrees 1925 to 1975 === This list includes PhDs awarded to African-Americans and to African immigrants by academic institutions in the United States. === Doctoral degrees in mathematics education to 1975 === This list includes doctorates specifically in mathematics education and doctorates in education by mathematicians/mathematics educators. == Books and articles about African-American mathematicians == This list includes books and dissertations published about individual African-Americans in mathematics, and studies, biographical anthologies or histories dedicated to African-Americans in mathematics. === Individuals === Benjamin Banneker: Bedini, Silvio A (1999). The life of Benjamin Banneker: the first African-American man of science. Maryland Historical Society. Hinman, Bonnie (2000). Benjamin Banneker: American Mathematician and Astronomer (Colonial Leaders). David Blackwell: Blackwell, David; Wilmot, Nadine (2003). An oral history with David Blackwell.
Bancroft Library. Black, Robert (2019). David Blackwell and the Deadliest Duel. Royal Fireworks Press. Joseph James Dennis: Williams, Sherese LaTrelle (2016). To Humbly Serve: Joseph James Dennis and His Contributions to Clark College. Clark Atlanta University. Marjorie Kimbrough: Kimbrough, Marjorie (1991). Accept no limitations: a black woman encounters corporate America. Abingdon Press. Shirley Mathis McBay: Verheyden-Hilliard, Mary Ellen (1985). Mathematician and Administrator, Shirley Mathis McBay. Equity Institute. J. Ernest Wilkins Jr.: Nkwanta, Asamoah; Barber, Janet E. (2018). "Episodes in the Life of a Genius: J. Ernest Wilkins Jr." Notices of the American Mathematical Society. Volume 65, Number 2. === Anthologies and studies === Borum, Viveka; Hilton, Adriel Adon; Walker, Erica (2016). The Role of Black Colleges in the Development of Mathematicians. Journal of Research Initiatives. Carlson, Cob; Parks, Yolanda; et al. (1996). Breakthrough: profiles of scientists of color. Working with Numbers. Blackside. Dean, Nathaniel (ed.) (1997). African Americans in mathematics: DIMACS workshop, June 26–28, 1996. American Mathematical Society. Farmer, Vernon L; Shepherd-Wynn, Evelyn (2012). Voices of historical and contemporary Black American pioneers. Harmon, Marylen; Guertler, Sherry (1994). Visions of a dream: history makers: contributions of Africans and African Americans in science and mathematics. M.E. Harmon. Houston, Johnny L (2000). The History of the National Association of Mathematicians (NAM): The First Thirty (30) Years, 1969–1999. NAM. Kenschaft, Patricia Clark (2005). Change is possible: Stories of women and minorities in mathematics. Lang, Mozell P. Contributions of African American scientists and mathematicians. Harcourt School Publishers. Newell, Victoria; Gipson, Joella; Rich, Waldo L.; Stubblefield, B (1980). Black Mathematicians and Their Works. Paul, Richard; Moss, Steven (2015). We Could Not Fail: The First African Americans in the Space Program. University of Texas Press. Shetterly, Margot Lee (2016). Hidden Figures: The American dream and the untold story of the black women mathematicians who helped win the space race. Walker, Erica N (2014). Beyond Banneker: Black mathematicians and the path to excellence. Williams, Lisa D (2000). The trials, tribulations, and triumphs of black faculty in the math and science pipeline: a life history approach (Dissertation). University of Massachusetts at Amherst. Williams, Talithia M (2018). Power in numbers: The rebel women of mathematics. Race Point Publishing. === For young people === Becker, Helaine; Phumiruk, Dow (2018). Counting on Katherine: How Katherine Johnson Saved Apollo 13. Henry Holt and co. Pinkney, Andrea Davis (1998). Dear Benjamin Banneker. Schwartz, Heather E (2017). NASA Mathematician Katherine Johnson. Lerner Publications. Shetterly, Margot Lee; Conkling, Winifred; Freeman, Laura (2018). Hidden Figures: The True Story of Four Black Women and the Space Race. HarperCollins. == List of Wikipedia articles == This list includes Wikipedia articles for people from the African diaspora who have postgraduate degrees in mathematics or statistics, have worked in mathematics, or are known for mathematical accomplishments in the United States (African-Americans). The list is grouped by the time the person's first degree in mathematics was awarded, or when they began their work in mathematics. Individuals are listed alphabetically within time periods. PhDs in mathematics education are included.
=== Before 1900 === Benjamin Banneker (1731–1806) Kelly Miller (1863–1939), degrees from Howard University, including law degree Charles Reason (1818–1893) Thomas Fuller (1710–1782) === 1900s === Dudley Weldon Woodard (1881–1965), degrees from Wilberforce University, University of Chicago, University of Pennsylvania (PhD) === 1910s === Elbert Frank Cox (1895–1969), degrees from Indiana University, Cornell University (PhD) Euphemia Haynes (1890–1980), Smith College, Catholic University of America (PhD) === 1920s === Joseph J. Dennis (1905–1977), degrees from Clark College, Northwestern University (PhD) Angie Turner King (1905–2004), degrees from West Virginia State College, including chemistry, University of Pittsburgh (PhD, mathematics education) Georgia Caldwell Smith (1909–1961), degrees from University of Kansas, University of Chicago, University of Pittsburgh (PhD) Dorothy Vaughan (1910–2008), degree from Wilberforce University === 1930s === David Blackwell (1919–2010), degrees from University of Illinois at Urbana–Champaign (PhD) Marjorie Lee Browne (1914–1979), degrees from Howard University, University of Michigan (PhD) Katherine Johnson (1918–2020), degree from West Virginia State College Clarence F. Stephens (1917–2018), degrees from Johnson C. Smith University, University of Michigan (PhD) === 1940s === Albert Turner Bharucha-Reid (1927–1985), degree from Iowa State University Gloria Ford Gilmer, degrees from Morgan State University, University of Pennsylvania, Marquette University (PhD, education) Evelyn Boyd Granville (1924–2023), Smith College, Yale University (PhD) Mary Winston Jackson (1921–2005), degree from Hampton Institute Eleanor Green Dawley Jones (1929–2021), degrees from Howard University, Syracuse University (PhD) Abdulalim A. Shabazz (1927–2014), degrees from Lincoln University (Pennsylvania), Massachusetts Institute of Technology (MIT), Cornell University (PhD) Louise Nixon Sutton (1925–2006), degrees from North Carolina A&T State University, New York University (PhD, education) J. Ernest Wilkins, Jr. (1923–2011), degrees from University of Chicago, New York University (including degrees in engineering) === 1950s === Geraldine Claudette Darden (born 1936), degrees from Hampton Institute, University of Illinois, Syracuse University (PhD) M. Lovenia DeConge-Watson (born 1933), degrees from Seton Hill College, Louisiana State University, St. Louis University (PhD) Annie Easley (1933–2011), degrees from Xavier University, mathematics at Cleveland State University Etta Zuber Falconer (1933–2002), degrees from Fisk University, University of Wisconsin, Emory University (PhD) William Thomas Fletcher, degrees from North Carolina Central University, University of Idaho (PhD) Gloria Conyers Hewitt (born 1935), degrees from Fisk University, University of Washington (PhD) Vivienne Malone-Mayes (1932–1995), degrees from Fisk University, University of Texas (PhD) Melba Roy Mouton (1929–1990), degrees from Howard University Dolores Margaret Richard Spikes (1936–2015), degrees from Southern University, University of Illinois (PhD) Thyrsa Frazier Svager (1930–1999), degrees from Antioch College, Ohio State University (PhD) Argelia Velez-Rodriguez (b. 1936 in Cuba), degrees from Marianao Institute, University of Havana (PhD) Grace Alele Williams (1932–2022), degrees from University of Ibadan, University of Chicago (PhD, education) === 1960s === Sylvia D. 
Trimble Bozeman (born 1947), degrees from Alabama A&M University, Vanderbilt University, Emory University (PhD) Christine Darden (born 1942), degrees from Hampton Institute, Virginia State University, George Washington University (PhD, engineering) Lloyd Demetrius, degrees from University of Cambridge, University of Chicago (PhD) James A. Donaldson (1941–2019), degrees from Lincoln University (Pennsylvania), University of Illinois at Urbana-Champaign (PhD) Fern Y. Hunt (born 1948), degrees from Bryn Mawr College, New York University (PhD) Raymond L. Johnson (born 1943), degrees from University of Texas at Austin, Rice University (PhD) Ronald Elbert Mickens (born 1943), degrees from Fisk University, Vanderbilt University (PhD, physics) Jeanette Scissum, degrees from Alabama A&M University; PhD in computer science Scott W. Williams (born 1943), degrees from Morgan State University, Lehigh University (PhD) === 1970s === Augustin Banyaga (born 1947 in Rwanda), degrees from University of Geneva (PhD) Emery N. Brown, degrees from Harvard College and Harvard University (PhD, statistics) and Harvard Medical School (MD) Freeman Alphonsa Hrabowski III (born 1950), degrees from Hampton Institute, University of Illinois (PhD, higher education administration/statistics) Iris Marie Mack, degrees from Vassar College (double major with physics), University of California, Los Angeles, Harvard University (PhD) Carolyn Ray Boone Mahoney (born 1946), degrees from Siena College, Ohio University (PhD) William Alfred Massey (born 1956), degrees from Princeton University, Stanford University (PhD) Lee Stiff (1949–2021), degrees from University of North Carolina at Chapel Hill, Duke University, North Carolina State University (PhD, education) === 1980s === Idris Assani (b. in Niger), degrees from Paris Dauphine University, Pierre and Marie Curie University (PhD, mathematics) Melvin Currie (born 1948), degrees from Yale University and University of Pittsburgh (PhD, mathematics) Clifford Victor Johnson (b. 1968 in UK), degrees from Imperial College London and University of Southampton (PhD, mathematics and physics) Bob Moses (1935–2021), degrees from Hamilton College, and Harvard University (MA, philosophy); founder of Algebra Project (1982) Arlie Oswald Petters (b. 1964 in Belize), degrees from City University of New York and Massachusetts Institute of Technology (PhD, mathematics) Janice B. Walker, degree from University of Michigan–Ann Arbor (PhD, mathematics, 1982) Suzanne L. Weekes (b. in Trinidad & Tobago), degrees from Indiana University and University of Michigan (PhD, mathematics and scientific computing) === 1990s === Ron Buckmire (b. 1968 in Grenada), degrees from Rensselaer Polytechnic Institute (PhD, mathematics) Toka Diagana (b.
in Mauritania), degrees from Tunis El Manar University and Claude Bernard University Lyon 1 (PhD, mathematics) Edray Goins (born 1972), degrees from California Institute of Technology and Stanford University (PhD, mathematics) Rudy Horne (1968–2017), degrees from University of Oklahoma and University of Colorado Boulder (PhD, applied mathematics); mathematical consultant for the movie Hidden Figures Trachette Jackson (born 1972), degrees from Arizona State University and University of Washington (PhD, mathematics) Chawne Kimber (born 1971), degrees from University of North Carolina and University of Florida (PhD, mathematics) Marilyn Strutchens (born 1962), degrees from the University of Georgia (PhD, mathematics education) Aissa Wade (b. 1967 in Senegal), degrees from University Montpellier 2, France (PhD, mathematics) Talitha Washington (born 1974), degrees from Spelman College and University of Connecticut (PhD, mathematics) === 2000s === Carla Cotwright-Williams (born 1973), degrees from California State University, Long Beach, Southern University, and University of Mississippi (PhD, mathematics) Christina Eubanks-Turner, degrees from Xavier University of Louisiana and University of Nebraska-Lincoln (PhD, mathematics) Omayra Ortega, degrees from Pomona College and University of Iowa (PhD, mathematics) Candice Price, degrees from California State University, Chico, San Francisco State University, and University of Iowa (PhD, mathematics) Dionne Price, degrees from Norfolk State University, University of North Carolina, and Emory University (PhD, biostatistics) Chelsea Walton (born 1983), degrees from Michigan State University and the University of Michigan (PhD, mathematics) Talithia Williams, degrees from Spelman College, Howard University, Rice University (PhD, statistics) Ulrica Wilson, degrees from Spelman College and Emory University (PhD, mathematics) === 2010s === John Urschel (b. 1991 in Canada), degrees from Pennsylvania State University (MS, mathematics) and the Massachusetts Institute of Technology (PhD, mathematics) == References ==
Wikipedia:List of Albanian inventions and discoveries#0
Albanian inventions and discoveries are objects, processes or techniques invented, innovated or discovered, partially or entirely, by Albanians or individuals of Albanian descent. == See also == List of Albanian inventors and discoverers == Notes == == References ==
Wikipedia:List of Albanian inventors and discoverers#0
This is a list of Albanian inventors and discoverers. The following incomplete list comprises individuals from Albania, the Albanian diaspora, and those of Albanian heritage who have contributed to the invention, innovation, or discovery of objects, processes, or techniques, either wholly or in part, while working locally or abroad. The list is arranged in alphabetical order by surname. == A == Grigor Andoni: Designed and developed Albania's first armored military vehicle, the Shota MRAP. == B == Veso Bey: Invented the Gjirokastër alphabet. == C == == D == == E == == F == Carol Folt: Pioneering research on the effects of dietary mercury and arsenic on human and ecosystem health. Her research profoundly influenced national and global policies, leading to important recommendations on consumption, particularly regarding human health and ecosystem well-being. == G == Karl Ritter von Ghega: Designed the Semmering railway, the first standard-gauge mountain railway in Europe, commonly known as the world's first true mountain railway. Ghega was awarded the title of Ritter in recognition of his exceptional contributions to the country and was appointed as the chief planner for the entire railway network of the Austrian Empire. Luigi Giura: Designed the Real Ferdinando Bridge, the first iron catenary suspension bridge in Italy and one of the earliest in continental Europe. Savo Gjirja: Innovated methods for utilizing ethanol as an alternative engine fuel. Rifat Gjota: Invented "Gjota generators and electromotors," enhancing energy efficiency and reducing copper usage. In 1965, Gjota graduated from the Electrotechnical Faculty of the University of Skopje. He later completed his master's degree at the University of Zagreb's Faculty of Electrical Engineering and earned his doctorate from the University of Pristina's Faculty of Electrical Engineering. He subsequently became a professor at the University of Pristina. == H == Theodhor Haxhifilipi: Invented the Todhri alphabet. Pranvera Hyseni: Discovered the main-belt asteroids 2020 SS13 and 2000 EK140. The International Astronomical Union officially named the asteroid 45687 Pranverahyseni to recognize her efforts in astronomy outreach. She also founded the Astronomy Outreach of Kosovo, advancing astronomical education. == I == == J == == K == Sabiha Kasimati: Major pioneering research on the ichthyology of Albania. Kasimati made significant strides in the field through her groundbreaking research, most notably her work Fishes of Albania, in which she meticulously cataloged and analyzed 257 fish species, offering crucial insights into the aquatic ecosystems of Albania's lakes, rivers, and seas. Her contributions continue to serve as a vital reference for the study of the region's ichthyofauna and aquatic biodiversity. Wilson Kokalari: Contributed to the Apollo 11 mission, playing a key role in spacecraft design and testing. He was honored by having his name featured on a plaque carried to the Moon by the Apollo 11 astronauts, recognizing the contributions of all those involved in this historic achievement. Grigor Konstantinidhi: Invented the Elbasan alphabet. == L == == M == Laura Mersini-Houghton: Pioneered the multiverse hypothesis, proposing gravitational dynamics among universes. Gjon Mili: Innovated stroboscopic and stop-action photography; invented and developed tungsten filament lights for color photography.
Ferid Murad: Discovered the role of nitric oxide in relaxing blood vessels, a breakthrough that revolutionized treatments for heart disease, erectile dysfunction, and respiratory issues in premature infants. His discovery also played a key role in the development of Viagra and earned him the 1998 Nobel Prize in Physiology or Medicine. Mira Murati: Chief Technology Officer at OpenAI, she played an important role in developing the AI in ChatGPT, DALL-E, and Codex. == N == == O == == P == Mentor Përmeti: Developed the "Dajti" wheat variety, improving agricultural productivity. Afërdita Veveçka Priftaj: Discovered the mechanism governing gold atom clustering on crystal surfaces, crucial for advancements in catalysis and nanotechnology. == Q == == R == == S == Lulzim Shuka: Co-discovered Epimedium alpinum subsp. albanicum, a unique plant in Kosovo's Albanian Alps, alongside Kit Tan and Besnik Hallaçi. He also co-discovered Tulipa albanica, a distinct tulip species in Albania, alongside Besnik Hallaçi and L. Vata. == T == == U == == V == Naum Veqilharxhi: Invented the Vithkuqi alphabet. == W == == X == == Y == == Z == == See also == List of Albanian inventions and discoveries == Notes == == References ==
Wikipedia:List of American mathematicians#0
This is a list of American mathematicians. == List == James Waddell Alexander II (1888–1971) Stephanie B. Alexander, elected in 2014 as a fellow of the American Mathematical Society "for contributions to geometry, for high-quality exposition, and for exceptional teaching of mathematics" Linda J. S. Allen Ann S. Almgren, applied mathematician who works as a senior scientist and group leader of the Center for Computational Sciences and Engineering at the Lawrence Berkeley National Laboratory Frederick Almgren (1933–1997) Beverly Anderson (b. 1943) Natascha Artin Brunswick (1909–2003) Tamara Awerbuch-Friedlander (1941–2021) Wealthy Babcock (1895–1990) Benjamin Banneker (1731–1806) Augustin Banyaga (b. 1947) Ruth Aaronson Bari (1917–2005) Janet Barnett Jon Barwise (1942–2000) Richard Bellman (1920–1984) Leonid Berlyand (b. 1957) Leah Berman (b. 1976) Manjul Bhargava (b. 1974) George David Birkhoff (1884–1944) David Blackwell (1919–2010) Archie Blake (1906–1971) Nathaniel Bowditch (1773–1838) Andrew Browder (1931–2019) Felix Browder (1927–2016) William Browder (1934–2025), pioneered the surgery theory method for classifying high-dimensional manifolds. Marjorie Lee Browne (1914–1979), taught at North Carolina Central University Robert Daniel Carmichael (1879–1967) Sun-Yung Alice Chang (b. 1948), researcher in mathematical analysis Alonzo Church (1903–1995) William Schieffelin Claytor (1908–1967), third African-American to earn a Ph.D. in mathematics, University of Pennsylvania Paul Cohen (1934–2007) Don Coppersmith (b. 1950), cryptographer, first four-time Putnam Fellow in history Elbert Frank Cox (1895–1969), first African-American to earn a Ph.D. in mathematics, Cornell University Laura Demarco (b. 1974), researcher in dynamical systems and complex analysis Joseph J. Dennis (1905–1977), Clark College Joseph L. Doob (1910–2004) Jesse Douglas (1897–1965) Samuel Eilenberg (1913–1998) Noam Elkies (b. 1966), mathematical prodigy who works in computational number theory Jerald Ericksen (1924–2021) Alex Eskin (b. 1965), researcher in rational billiards and geometric group theory Christina Eubanks-Turner, American mathematics educator, graph theorist, and commutative algebraist Etta Zuber Falconer (1933–2002) Benson Farb (b. 1965), researcher in geometric group theory and low-dimensional topology Lisa Fauci, applied mathematician who applies computational fluid dynamics to biological processes Charles Fefferman (b. 1949) Henry Burchard Fine (1858–1928) Erica Flapan (b. 1956), researcher in low-dimensional topology and knot theory Alfred Leon Foster (1904–1994) Ralph Fox (1913–1973) Michael Freedman (b. 1951) Edgar Fuller Murray Gerstenhaber (1927–2024) Andrew M. Gleason (1921–2008), WWII codebreaker, major contributor to the solution of Hilbert's 5th Problem ("restricted" version). Thomas Godfrey (1704–1749) Ralph E. Gomory (b. 1929) Daniel Gorenstein (1923–1992) Ronald Graham (1935–2020) Evelyn Boyd Granville (1924–2023) Phillip Griffiths (b. 1938), major contributor to complex manifold approach to algebraic geometry Benedict Gross (b. 1950) Orit Halpern (b. 1972), cyberneticist Frank Harary (1921–2005) Joe Harris (b. 1951), prolific researcher and expositor of algebraic geometry Euphemia Haynes (1890–1980), first African-American woman to earn a Ph.D. in mathematics Gloria Conyers Hewitt (b. 1935) George William Hill (1838–1914) Einar Hille (1894–1980) Alston Scott Householder (1904–1993) Nathan Jacobson (1910–1999) Katherine Johnson (1918–2020) Theodore Kaczynski (1942–2023) Howard Jerome Keisler (b.
1936) Victor Klee (1925–2007) Holly Krieger Harold W. Kuhn (1925–2014) Kenneth Kunen (1943–2020) Solomon Lefschetz (1884–1972) Suzanne Lenhart (b. 1954), researcher in partial differential equations; president of the Association for Women in Mathematics, 2001–2003 James Lepowsky (b. 1944) Marie Litzinger (1899–1952), number theorist Jacob Lurie (b. 1977), developed derived algebraic geometry Saunders Mac Lane (1909–2005) W. T. Martin (1911–2004) William S. Massey (1920–2017) John N. Mather (1942–2017) J. Peter May (b. 1939), researcher in algebraic topology, category theory, homotopy theory, and the foundational aspects of spectra Barry Mazur (b. 1937) Curtis T. McMullen (b. 1958) Elliott Mendelson (1931–2020) Winifred Edgerton Merrill (1862–1951) Kelly Miller (1863–1939) Kenneth Millett (b. 1941) John Milnor (b. 1931) Susan Montgomery (b. 1943) E. H. Moore (1862–1932) Marston Morse (1892–1977) Frederick Mosteller (1916–2006) George Mostow (1923–2017) David Mumford (b. 1937) John Forbes Nash Jr. (1928–2015) Edward Nelson (1932–2014) Walter Noll (1925–2017) Michael O'Nan (1943–2017) Richard Palais (b. 1931) Benjamin Peirce (1809–1880) Javier Perez-Capdevila (b. 1963) Jon T. Pitts (1948–2024) Vera Pless (1931–2020), mathematician who specialized in combinatorics and coding theory Daniel Quillen (1940–2011) Charles Reason (1818–1893) Jeffrey B. Remmel (1948–2017) Joseph Ritt (1893–1951) Herbert Robbins (1915–2001) Fred S. Roberts (b. 1943) Julia Robinson (1919–1985), contributor to Hilbert's tenth problem J. Barkley Rosser (1907–1989) Gerald Sacks (1933–2019) John Sarli, mathematician and academic Thomas Jerome Schaefer Dana Scott (b. 1932) George Seligman (1927–2024) James Serrin (1926–2012) Claude Shannon (1916–2001) Charles Coffin Sims (1938–2017) Isadore Singer (1924–2021) Stephen Smale (b. 1930) Raymond Smullyan (1919–2017) Edwin Spanier (1921–1996) Norman Steenrod (1910–1971) Elias M. Stein (1931–2018) Clarence F. Stephens (1917–2018) Lee Stiff (1949–2021) Marshall Harvey Stone (1903–1989) Theodore Strong (1790–1869) Terence Tao (b. 1975) John Tate (1925–2019) Jean Taylor (b. 1944) John G. Thompson (b. 1932) Sister Mary Domitilla Thuener (1880–1977) William Thurston (1936–2012) Clifford Truesdell (1919–2000) John Tukey (1915–2000) John Urschel (b. 1991) Dorothy Vaughan (1910–2008) Oswald Veblen (1880–1960) Mary Shore Walker (1882–1952) William C. Waterhouse (1941–2016) Hassler Whitney (1907–1989) Herbert Wilf (1931–2012) J. Ernest Wilkins, Jr. (1923–2011) Amie Wilkinson (b. 1968), researcher in dynamical systems, ergodic theory, chaos theory and semisimple Lie groups Dudley Weldon Woodard (1881–1965), second African-American to earn a Ph.D. in mathematics, University of Pennsylvania Margaret H. Wright (b. 1944), first woman president of Society for Industrial and Applied Mathematics == References ==
Wikipedia:List of Brazilian mathematicians#0
This list of Brazilian mathematicians includes notable mathematicians from Brazil as well as those who were born in other countries but later became Brazilian. == See also == List of mathematicians Science and technology in Brazil
Wikipedia:List of Cambridge mathematicians#0
A list of mathematicians, past and present, with associations with the University of Cambridge. == A - F == Rediet Abebe, graduate student at Pembroke College, Cambridge Frank Adams, fellow of Trinity College, Cambridge, Lowndean Professor of Astronomy and Geometry 1970–1989 John Couch Adams, fellow of St. John's College, Cambridge 1843–1852; fellow of Pembroke College, Cambridge 1853–1892; Lowndean Professor of Astronomy and Geometry 1859–1891 Michael Atiyah, fellow of Trinity College, Cambridge 1954–1957; fellow of Pembroke College, Cambridge 1958–1961; Master of Trinity College, Cambridge 1990–1997 Charles Babbage, Lucasian Professor of Mathematics 1828–1839 Alan Baker, fellow of Trinity College, Cambridge 1964– H. F. Baker, fellow of St. John's College, Cambridge Dennis Barden, fellow of Pembroke College, Cambridge Isaac Barrow, fellow of Trinity College, Cambridge 1649–1655, Lucasian Professor of Mathematics Arthur Berry (1862–1929), Vice-Provost of King's College, Cambridge Bryan John Birch, undergraduate and research student at Trinity College, Cambridge, fellow of Churchill College, Cambridge Michael Boardman Béla Bollobás, fellow of Trinity College, Cambridge Richard Ewen Borcherds Henry Briggs, Fellow of St. John's College, Cambridge Christopher Budd, Gresham Professor of Geometry, student at St John's College 1979–1983 William Burnside, attended St John's College and Pembroke College, Cambridge. Appointed professor of mathematics at the Royal Naval College in Greenwich, at site of University of Greenwich Mathematics Department. Dame Mary Cartwright, fellow and Mistress of Girton College, Cambridge J. W. S. Cassels, fellow of Trinity College, Cambridge 1949–1984; Sadleirian Professor of Pure Mathematics 1967–1986 Arthur Cayley, student at Trinity College, Cambridge D. G. Champernowne Sydney Chapman, student at and later lecturer and fellow (1914–1919) of Trinity College, Cambridge William Kingdon Clifford John Coates, fellow of Emmanuel College, Cambridge 1975–1977; Sadleirian Professor of Pure Mathematics 1986–2012 John Horton Conway, fellow of Sidney Sussex College, Cambridge 1964–1970; fellow of Gonville and Caius College, Cambridge 1970–1986 Roger Cotes Percy John Daniell Harold Davenport James Davenport, undergraduate and research student at Trinity College, Cambridge Rollo Davidson, undergraduate and research fellow of Trinity College, Cambridge 1962–1970, fellow-elect of Churchill College, Cambridge Philip Dawid Augustus De Morgan Paul Dirac, fellow of St. John's College, Cambridge 1927–1969; Lucasian Professor of Mathematics 1932–1969 Simon Donaldson, undergraduate at Pembroke College, Cambridge 1976–1979 Arthur Stanley Eddington Andrew Forsyth, fellow of Trinity College, Cambridge; Sadleirian Professor of Pure Mathematics == G - M == Anil Kumar Gain, Fellow of the Royal Statistical Society James Glaisher Peter Goddard, Master of St John's College, Cambridge 1994–2004 William Timothy Gowers, fellow of Trinity College, Cambridge ?– ; Rouse Ball Professor of Mathematics 1998– Geoffrey Grimmett, fellow of Churchill College, Cambridge, Professor of Mathematical Statistics 1992– Ian Grojnowski, faculty member of DPMMS, 1999– G. H. Hardy, fellow of Trinity College, Cambridge 1900–1919, 1931–1942; Sadleirian Professor of Pure Mathematics 1931–1942 Stephen Hawking, fellow of Gonville and Caius College, Cambridge 1966–2018; Lucasian Professor of Mathematics 1979–2009 Nigel Hitchin, fellow of Gonville and Caius College, Cambridge, Rouse Ball Professor of Mathematics 1994–1997 E. W.
Hobson James Jeans Harold Jeffreys, fellow of St John's College, Cambridge 1914–1989; Plumian Professor of Astronomy 1946–1958 Vinod Johri, Commonwealth fellow for postdoctoral work at the Department of Applied Mathematics and Theoretical Physics, Cambridge University, 1967–1968 Thomas Jones, mathematician, fellow of Trinity College, Cambridge Richard Jozsa, holder of the Leigh Trapnell Chair in Quantum Physics Frank Kelly, fellow 1976–2006 and master 2006– of Christ's College, professor of the Mathematics of Systems David George Kendall, fellow of Churchill College, Cambridge, Professor of Mathematical Statistics 1962–1985 John Maynard Keynes, B.A. in mathematics Joshua King, Lucasian Professor of Mathematics, President of Queens' College, Cambridge Frances Kirwan Thomas William Körner, fellow of Trinity Hall, Cambridge Peter Landrock, senior member of Wolfson College, Cambridge, 1997– Joseph Larmor, Lucasian Professor of Mathematics Imre Leader, fellow of Trinity College, Cambridge John Edensor Littlewood, fellow of Trinity College, Cambridge 1908–1977; Rouse Ball Professor of Mathematics 1928–1950 Sir Donald MacAlister, fellow of St John's College, Cambridge Ian G. Macdonald, graduate of Trinity College, Cambridge James Clerk Maxwell, Cavendish Professor of Physics 1871–1879 Isaac Milner, Lucasian Professor of Mathematics, President of Queens' College, Cambridge == N - S == Crispin St. J. A. Nash-Williams Max Newman, fellow of St John's College, Cambridge Isaac Newton, fellow of Trinity College, Cambridge 1667–1701; Lucasian Professor of Mathematics 1669–1702 Richard Nickl, fellow of Gonville and Caius College, Cambridge, Professor of Mathematical Statistics James R. Norris, fellow of Churchill College, Cambridge Simon P. Norton, undergraduate and research student at Trinity College, Cambridge William McFadden Orr, fellow of St John's College, Cambridge Roger Penrose, graduate student at St John's College, Cambridge ?–1958; research fellow at St John's College, Cambridge 1958–1961 Srinivasa Ramanujan, fellow of Trinity College, Cambridge 1918–1920 Frank P. Ramsey, student of Trinity College, Cambridge and fellow of King's College, Cambridge C. R. Rao, statistician and former PhD student under Ronald Fisher Lewis Fry Richardson Chris Rogers Bertrand Russell, fellow of Trinity College, Cambridge Richard Samworth, fellow of St John's College, Cambridge, Professor of Statistics Graeme Segal, fellow of St John's College, Cambridge, Lowndean Professor of Astronomy and Geometry, 1990–1999 Nicholas Shepherd-Barron, fellow of Trinity College, Cambridge David Spiegelhalter, fellow of Churchill College, Cambridge, Winton Professor of the Public Understanding of Risk 2007– Sir George Gabriel Stokes, fellow then Master of Pembroke College, Cambridge, Lucasian Professor of Mathematics Peter Swinnerton-Dyer, fellow of Trinity College, Cambridge, master of St Catharine's College, Cambridge James Joseph Sylvester, undergraduate at St John's College, Cambridge 1831–1837 == T - Z == G. I. Taylor, fellow of Trinity College, Cambridge Martin J. Taylor, fellow of Trinity College, Cambridge Richard Taylor, fellow of Clare College, Cambridge John G. Thompson, fellow of Churchill College, Cambridge; Rouse Ball Professor of Mathematics 1971–1993 Alan Turing, fellow of King's College, Cambridge 1935–1945 W. T. Tutte, undergraduate and research student at Trinity College, Cambridge 1935–1941 John Venn, fellow of Gonville and Caius College, Cambridge C. T. C.
Wall, graduate of Trinity College, Cambridge John Wallis, fellow of Queens' College, Cambridge 1644–1645 Richard Weber, fellow of Queens' College, Cambridge, Churchill Professor of Mathematics for Operational Research 1994–2017 Alfred North Whitehead E. T. Whittaker, graduate and fellow of Trinity College, Cambridge from 1882 to 1906 Peter Whittle, fellow of Churchill College, Cambridge, Churchill Professor of Mathematics for Operational Research 1967–1994 Andrew Wiles, research student and then junior research fellow at Clare College, Cambridge 1975–1980 David Williams, professor of mathematical statistics 1985–1992 Shaun Wylie, fellow of Trinity Hall, Cambridge Erik Christopher Zeeman, fellow of Gonville and Caius College, Cambridge, honorary fellow of Christ's College, Cambridge == See also == List of mathematicians Faculty of Mathematics, University of Cambridge Lucasian Chair Sadleirian Chair Rouse Ball Professor of Mathematics Lowndean Professor of Astronomy and Geometry Churchill Professorship of Mathematics for Operational Research Professorship of Mathematical Statistics, University of Cambridge List of Wranglers of the University of Cambridge
Wikipedia:List of Chinese mathematicians#0
This is a list of Chinese mathematicians. With a history spanning over three millennia, Chinese mathematics is believed to have initially developed largely independently of other cultures. == Classical == Jing Fang: 78 – 37 BC Liu Xin: c. 50 BC – 23 AD Zhang Heng: 78 – 139 AD Liu Hong: 129 – 210 AD Cai Yong: 132 – 192 AD Liu Hui: 225 – 295 AD Wang Fan: 228 – 266 AD Sun Tzu: c. 3rd – 5th century AD Zu Chongzhi: 429 – 500 AD Zu Gengzhi: c. 450 – c. 520 AD == Middle Imperial == Zhen Luan: 535–566 Wang Xiaotong: 580–640 Li Chunfeng: 602–670 Yi Xing: 683–727 Wei Pu: 11th century Jia Xian: 1010–1070 Su Song: 1020–1101 Shen Kuo: 1031–1095 Li Zhi: 1192–1279 Qin Jiushao: c. 1202–1261 Guo Shoujing: 1231–1316 Yang Hui: c. 1238–1298 Zhu Shijie: 1249–1314 == Late Imperial == === 16th century === Cheng Dawei: 1533–1606 Zhu Zaiyu: 1536–1611 === 17th century === Xu Guangqi: 1562–1633 Minggatu: 1692–1763 === 18th century === Li Rui: 1768–1817 === 19th century === Li Shanlan: 1810–1882 Xiong Qinglai: 1893–1969 == Modern == === 20th century === Su Buqing: 1902–2003 Pao-Lu Hsu: 1910–1970 Hua Luogeng: 1910–1985 Ke Zhao: 1910–2002 Wei-Liang Chow: 1911–1995 Shiing-Shen Chern: 1911–2004 Chien Wei-zang: 1912–2010 Ky Fan: 1914–2010 Chia-Chiao Lin: 1916–2013 Wu Wenjun: 1919–2017 Yuan-Shih Chow: 1924–2022 Gu Chaohao: 1926–2012 Daoxing Xia: b. 1930 Wang Yuan: 1930–2021 Chen Jingrun: 1933–1996 Pan Chengdong: 1934–1997 Yum-Tong Siu: b. 1943 Peng Shige: b. 1947 Shing-Tung Yau: b. 1949, Fields medal recipient Yitang Zhang: b. 1955 Gang Tian: b. 1958 Jeffrey Yi-Lin Forrest: b. 1959 Huai-Dong Cao: b. 1959 Shou-Wu Zhang: b. 1962 Weinan E: b. 1963 Kefeng Liu: b. 1965 Terence Tao: b. 1975 Wei Zhang: b. 1981 Zhiwei Yun: b. 1982 Chenyang Xu: b. 1981 Eddie Woo: b. 1985 == See also == List of Chinese scientists
Wikipedia:List of German mathematicians#0
This is a List of German mathematicians. == A == == B == == C == == D == == E == == F == == G == == H == == I == == J == == K == == L == == M == == N == == O == == P == == R == == S == == T == == U == == V == == W == == Z == == See also == List of mathematicians Science and technology in Germany
Wikipedia:List of Greek mathematicians#0
In historical times, Greek civilization has played a major role in the history and development of mathematics. To this day, a number of Greek mathematicians are noted for their innovations and influence on mathematics. == Ancient Greek mathematicians == This list includes mathematicians working within the Greek tradition in places outside Greece such as Alexandria, Egypt, regardless of whether we know their ethnicity to be Greek. == Medieval Byzantine mathematicians == == Modern Greek mathematicians == Leonidas Alaoglu (1914–1981) - Known for the Banach–Alaoglu theorem. Charalambos D. Aliprantis (1946–2009) - Founder and editor-in-chief of the journals Economic Theory and Annals of Finance. Roger Apéry (1916–1994) - Professor of mathematics and mechanics at the University of Caen; proved the irrationality of ζ(3). Tom M. Apostol (1923–2016) - Professor of mathematics at the California Institute of Technology; authored a number of books about mathematics. Dimitri Bertsekas (born 1942) - Member of the National Academy of Engineering and professor in the Department of Electrical Engineering and Computer Science. Author of fifteen books and research monographs, and coauthor of an introductory probability textbook. Giovanni Carandino (1784–1834) Constantin Carathéodory (1873–1950) - Mathematician who pioneered the axiomatic formulation of thermodynamics. Demetrios Christodoulou (born 1951) - Mathematician-physicist who has contributed to the field of general relativity. Constantine Dafermos (born 1941) - Notable for work on hyperbolic conservation laws and control theory. Mihalis Dafermos (born 1976) - Professor of Mathematics at Princeton University and Lowndean Chair of Astronomy and Geometry at the University of Cambridge. Apostolos Doxiadis (born 1953) - Australian-born mathematician. Athanassios S. Fokas (born 1952) - Contributor to the field of integrable nonlinear partial differential equations. Michael Katehakis (born 1952) - Professor at Rutgers University. Alexander S. Kechris (born 1946) - Made notable contributions to the theory of Borel equivalence relations. Nicholas Metropolis (1915–1999) - American-born Greek physicist. Yiannis N. Moschovakis (born 1938) - Writer and theorist at the University of California, Los Angeles. Christos Papakyriakopoulos (1914–1976) - Often called Papa, he specialized in geometric topology. Athanasios Papoulis (1921–2002) - Contributed a number of results, such as the Papoulis–Gerchberg algorithm, among others. Themistocles M. Rassias (born 1951) - Professor at the National Technical University of Athens. Raphaël Salem (1898–1963) - Greek mathematician after whom the Salem numbers are named; his widow founded the Salem Prize. Cyparissos Stephanos (1857–1917) - Notable for contributions to desmic systems. Katia Sycara - Professor in the Carnegie Mellon School of Computer Science's Robotics Institute and director of the Laboratory for Agents Technology and Semantic Web Technologies. Nicholas Varopoulos (born 1940) - Notable for his work on analysis on Lie groups. Stathis Zachos (born 1947) - Published a number of writings on computer science. Mihail Zervos - Works in the Department of Mathematics, London School of Economics. == References ==
Wikipedia:List of Hungarian mathematicians#0
The following is a list of Hungarian mathematicians. On this page the names are kept in Hungarian order (family name first). == A == Alexits, György (1899–1978) == B == Babai, László (born 1950) Paul Erdős Prize Bárány, Imre (born 1947) Paul Erdős Prize Barabási, Albert-László (born 1967) Beck, József (born 1952) Paul Erdős Prize Bollobás, Béla (born 1943) Senior Whitehead Prize Bolyai, Farkas (1775–1856) Bolyai, János (1802–1860), spoke 10 languages, one of the founders of non-Euclidean geometry Bott, Raoul (1923–2005) Steele Prize == C == Császár, Ákos (1924–2017) Csörgő, Sándor (1947–2008) Paul Erdős Prize == D == Dienes, Zoltán Pál (1916–2014) == E == Erdős, Pál (1913–1996) == F == Farkas, Gyula (1847–1930) Fejér, Lipót (1880–1959) == G == Grossmann, Marcell (1878–1936) == H == Haar, Alfréd (1885–1933) Hajnal, András (1931–2016) Hajós, György (1912–1972) Halász, Gábor (born 1941) Paul Erdős Prize Hatvani, István (1718–1786) Hell, Miksa (1720–1792), worked as an astronomer == I == Izsák, Imre (1929–1965) == J == Juhász, István (born 1943) Paul Erdős Prize == K == Kalmár, László (1905–1976) Kármán, Tódor (1881–1963) Katona, Gyula O. H. (born 1941) Katona, Gyula Y. (born 1965) Kemény, János (1926–1992) Komjáth, Péter (born 1953) Paul Erdős Prize Komlós, János (born 1942) Kőnig, Dénes (1884–1944) Kőnig, Gyula (1849–1913) Kürschák, József (1864–1933) == L == Laczkovich, Miklós (born 1948) Paul Erdős Prize Lánczos, Kornél (1893–1974) Lax, Péter (1926–2025) Lovász, László (born 1948) Paul Erdős Prize == M == Medgyessy, Pál (1919–1977) Mérő, László (born 1949) == N == Nagy, Károly (1797–1868) Neumann, János (John von Neumann) (1903–1957), pioneer of computing and game theory == O == Ottlik, Géza (1912–1990) == P == Pálfy, Péter Pál (born 1955) Paul Erdős Prize Péter, Rózsa (1905–1977) Petzval, József (1807–1891) Pintz, János (born 1950) Paul Erdős Prize Pólya, György (1887–1985) Pósa, Lajos (born 1947) Prékopa, András (1929–2016) Pyber, László (born 1960) == R == Radó, Tibor (1895–1965) Rényi, Alfréd (1921–1970) Riesz, Frigyes (1880–1956) Riesz, Marcel (1886–1969) Ruzsa, Imre Z. (born 1953) Paul Erdős Prize == S == Sajnovics, János (1733–1785) Sárközy, András (born 1941) Paul Erdős Prize Scholtz, Ágoston (1844–1916) Segner, János András (1704–1777) Sós, Vera T. (1930–2023) Süli, Endre (born 1956) Naylor Prize and Lectureship == Sz == Szegedy, Balázs Paul Erdős and Fulkerson Prizes Szegedy, Márió (born 1960) Gödel Prize Szekeres, György (1911–2005) Szemerédi, Endre (born 1940) Paul Erdős and Abel Prizes Szőkefalvi-Nagy, Béla (1913–1998) Szőnyi, Tamás (born 1957) Paul Erdős Prize Szüsz, Peter (1924–2008) == T == Tardos, Éva (born 1957) Tardos, Gábor (born 1964) Paul Erdős Prize Turán, Pál (1910–1976) == V == Vargha, András (born 1949)
Wikipedia:List of Indian mathematicians#0
Indian mathematicians have made a number of contributions to mathematics that have significantly influenced scientists and mathematicians in the modern era. One such contribution is the Hindu numeral system, which is predominantly used today. == Ancient (Before 320 CE) == Shulba sutras (around 1st millennium BCE) Baudhayana sutras (fl. c. 900 BCE) Yajnavalkya (700 BCE) Manava (fl. 750–650 BCE) Apastamba Dharmasutra (c. 600 BCE) Pāṇini (c. 520–460 BCE) Kātyāyana (fl. c. 300 BCE) Akṣapada Gautama (c. 600 BCE–200 CE) Bharata Muni (200 BCE–200 CE) Pingala (c. 3rd/2nd century BCE) Bhadrabahu (367–298 BCE) Umasvati (c. 200 CE) Yavaneśvara (2nd century) == Classical (320 CE–520 CE) == Vasishtha Siddhanta, 4th century CE Aryabhata (476–550 CE) Yativṛṣabha (c. 500–570) Varahamihira (505–587 CE) Virahanka (6th century CE) == Early Medieval Period (521 CE–1206 CE) == Brahmagupta (598–670 CE) Bhaskara I (600–680 CE) Shridhara (between 650–850 CE) Lalla (c. 720–790 CE) Virasena (792–853 CE) Govindasvāmi (c. 800 – c. 860 CE) Prithudaka (c. 830 – c. 900 CE) Śaṅkaranārāyaṇa (c. 840 – c. 900 CE) Vaṭeśvara (born 880 CE) Mahavira (9th century CE) Jayadeva (9th century CE) Aryabhata II (920 – c. 1000) Mañjula (born 932) Vijayanandi (c. 940–1010) Halayudha (10th century) Bhoja (c. 990–1055 CE) Śrīpati (1019–1066) Abhayadeva Suri (1050 CE) Brahmadeva (1060–1130) Pavuluri Mallana (11th century CE) Hemachandra (1087–1172 CE) Bhaskara II (1114–1185 CE) Someshvara III (1127–1138 CE) Śārṅgadeva (1175–1247) == Late Medieval Period (1206–1526) == === 13th Century === Thakkar Pheru (1291–1347) === 14th century === Mahendra Suri (1340–1400) Narayana Pandita (1325–1400) Makaranda (fl. 1438–1478) Keshava of Nandigrama (fl. 1496–1507) ==== Navya-Nyāya (Neo-Logical) School ==== Gangesha Upadhyaya (first half of the 14th century) ==== Kerala School of Mathematics and Astronomy ==== Madhava of Sangamagrama (c. 1340 – c. 1425) Parameshvara (1360–1455), developed drgganita, a mode of astronomy based on observations === 15th century === ==== Kerala School of Mathematics and Astronomy ==== Chennas Narayanan Namboodiripad (born 1428) Nilakantha Somayaji (1444–1545), mathematician and astronomer Damodara (15th century) ==== Navya-Nyāya (Neo-Logical) School ==== Raghunatha Siromani (1475–1550) == Early Modern Period (1527–1800) == === 16th Century === Gaṇeśa Daivajna (born 1507, fl. 1520–1554) ==== Kerala School of Mathematics and Astronomy ==== Chitrabhanu (16th Century) Shankara Variyar (c. 1530) Jyeshtadeva (1500–1610), author of Yuktibhāṣā Achyuta Pisharati (1550–1621), mathematician and astronomer Melpathur Narayana Bhattathiri (1560–1646/1666) ==== Golagrama school of astronomy ==== Nṛsiṃha (born 1586) Mallari (fl. 1575) === 17th Century === Kṛṣṇa Daivajña (17th century) Ataullah Rashidi (17th century) Munishvara (born 1603) Mulla Jaunpuri (1606–1651) Puthumana Somayaji (c.
1660–1740) ==== Golagrama school of astronomy ==== Kamalakara (1616–1700) Divākara (born 1606) === 18th Century === Jagannatha Samrat (1652–1744) Jai Singh II (1681–1743) ==== Kerala School of Mathematics and Astronomy ==== Sankara Varman (1774–1839) == Modern (1800–Present) == === 19th century === === 20th century === Subbayya Sivasankaranarayana Pillai (1901–1950) Raj Chandra Bose (1901–1987) Tirukkannapuram Vijayaraghavan (1902–1955) Dattaraya Ramchandra Kaprekar (1905–1986) Damodar Dharmananda Kosambi (1907–1966) Lakkoju Sanjeevaraya Sharma (1907–1998) Sarvadaman Chowla (1907–1995) Subrahmanyan Chandrasekhar (1910–1995) Subbaramiah Minakshisundaram (1913–1968) P. Kesava Menon (1917–1979) S. S. Shrikhande (1917–2020) Prahalad Chunnilal Vaidya (1918–2010) Anil Kumar Gain (1919–1978) Calyampudi Radhakrishna Rao (1920–2023) Mathukumalli V. Subbarao (1921–2006) Harish-Chandra (1923–1983) P. K. Srinivasan (1924–2005) Raghu Raj Bahadur (1924–1997) Madan Lal Puri (born 1929) Shreeram Shankar Abhyankar (1930–2012) C. S. Seshadri (1932–2020) Daya-Nand Verma (1933–2010) M. S. Narasimhan (1932–2021) J. N. Srivastava (1933–2010) Srinivasacharya Raghavan (1934–2014) K. S. S. Nambooripad (1935–2020) Ramaiyengar Sridharan (born 1935) Vinod Johri (1935–2014) Karamat Ali Karamat (1936–2022) K. R. Parthasarathy (1936–2023) S. N. Seshadri (1937–1986) Ramdas L. Bhirud (1937–1997) S. Ramanan (born 1937) Pranab K. Sen (1937–2023) Veeravalli S. Varadarajan (1937–2019) Jayanta Kumar Ghosh (1937–2017) Raghavan Narasimhan (1937–2015) C. P. Ramanujam (1938–1974) V. N. Bhat (1938–2009) S. R. Srinivasa Varadhan (born 1940) M. S. Raghunathan (born 1941) Ravindra Shripad Kulkarni (born 1942) Vashishtha Narayan Singh (1942–2019) Thiruvenkatachari Parthasarathy (1940–2023) S. B. Rao (born 1943) Phoolan Prasad (born 1944) Gopal Prasad (born 1945) Rajagopalan Parthasarathy (born 1945) Vijay Kumar Patodi (1945–1976) Vikram Bhagvandas Mehta (1946–2014) S. G. Dani (born 1947) Raman Parimala (born 1948) Singhi Navin M. (born 1949) Sujatha Ramdorai (born 1962) R. Balasubramanian (born 1951) M. Ram Murty (born 1953) Alok Bhargava (born 1954) Madhav V. Nori (born 1954) Rattan Chand (born 1955) V. Kumar Murty (born 1956) Rajendra Bhatia (born 1952) Narendra Karmarkar (born 1957) T. N. Venkataramana (born 1958) Dipendra Prasad (born 1960) Dinesh Thakur (born 1961) Manindra Agrawal (born 1966) Madhu Sudan (born 1966) Suresh Venapally (born 1966) Chandrashekhar Khare (born 1968) U. S. R. Murty L. Mahadevan (born 1965) Kapil Hari Paranjape (born 1960) Vijay Vazirani (born 1957) Umesh Vazirani (born 1959) Prasad V. Tetali (born 1964) Mahan Mj (born 1968) Rahul Pandharipande (born 1969) Santosh Vempala (born 1971) Kannan Soundararajan (born 1973) Kiran Kedlaya (born 1974) Manjul Bhargava (born 1974) Ritabrata Munshi (born 1976) Amit Garg (born 1978) Subhash Khot (born 1978) Sourav Chatterjee (born 1979) Akshay Venkatesh (born 1981) Sucharit Sarkar (born 1983) Neena Gupta (born 1984) Nayandeep Deka Baruah (born 1972) Anand Kumar (born 1973) Bhargav Bhatt (born 1983) == See also == List of Indian scientists List of Indian astronauts List of mathematicians == References == == External links == Famous Indian Mathematicians
Wikipedia:List of Iranian mathematicians#0
The following is a list of Iranian mathematicians including ethnic Iranian mathematicians. == A == Abhari (?–1262/1265) Abu Nasr-e Mansur (c. 960–1036) Abū Ja'far al-Khāzin (900–971), mathematician and astronomer Abu al-Wafa' Buzjani (940–998), mathematician Abu al-Jud (possibly died 1014/15) Abu al-Hasan al-Ahwazi, 10th–11th century mathematician and astronomer == B == Bahai, Sheikh (1547–1621), poet, mathematician, astronomer, engineer, designer, faghih (religious scientist), and architect Abu Ma'shar al-Balkhi (787–886), known in Latin as Albumasar Abu Zayd al-Balkhi (850–934), geographer and mathematician Al-Biruni (973–1048), astronomer and mathematician Sahl ibn Bishr (c. 786–845?), astrologer, mathematician al-Birjandi (?–1528), astronomer and mathematician Caucher Birkar (born 1978), Kurdish-Iranian mathematician, 2018 Fields medalist == C == Rama Cont, Professor of Mathematics at the University of Oxford, recipient of the Louis Bachelier Prize of the French Academy of Sciences (2010) == D == Abu Hanifa Dinawari (815–896), astronomer, agriculturist, botanist, metallurgist, geographer, mathematician, and historian == E == Abbas Edalat, Professor of Computer Science and Mathematics, Imperial College London == F == Kamāl al-Dīn al-Fārisī (1267–1319) Fazari, Ibrahim (?–777), mathematician and astronomer Fazari, Mohammad (?–796), mathematician and astronomer == G == Kushyar Gilani (971–1029), mathematician, geographer, astronomer Abu Said Gorgani (9th century), astronomer and mathematician == H == Habash al-Hasib al-Marwazi, mathematician, astronomer, geographer Ayn al-Quzat Hamadani, jurisconsult, mystic, philosopher, poet and mathematician == I == Isfahani Abol-fath (10th century) Al-Isfizari (11th–12th century), mathematician and astronomer == J == Al-Abbās ibn Said al-Jawharī (800–860), geometer == K == Karaji (953–1029) Jamshid-i Kashani (c. 1380–1429), astronomer and mathematician Khayyam, Omar (1048–1131), poet, mathematician, and astronomer Al-Kharaqī, astronomer and mathematician Khujandi (c. 940–c. 1000), mathematician and astronomer Muhammad ibn Musa al-Khwarizmi (a.k.a. Al-Khwarazmi, c. 780–c. 850), mathematician and astronomer; the word "algorithm" derives from his name, and his treatise on al-jabr gave algebra its name Najm al-Dīn al-Qazwīnī al-Kātibī, logician and philosopher Abū Sahl al-Qūhī, mathematician and astronomer Abu Ishaq al-Kubunani (d. after 1481), mathematician, astronomer == M == Esfandiar Maasoumi, Fellow of the Royal Statistical Society, Southern Methodist University Mahani (9th century), mathematician and astronomer Maryam Mirzakhani (1977–2017), Professor of Mathematics, Stanford University; first woman recipient of the Fields Medal (2014) Muhammad Baqir Yazdi (17th century), found the pair of amicable numbers 9,363,584 and 9,437,056 == N == Nasir Khusraw (1004–1088), scientist, Ismaili scholar, mathematician, philosopher, traveler and poet Nasavi (c. 1010–c. 1075) Nizam al-Din Nishapuri, mathematician, astronomer, jurist, exegete, and poet Nayrizi (c. 865–c. 922), mathematician and astronomer == Q == Ali Qushji (1403 – 16 December 1474), mathematician, astronomer and physician == S == Samarqandi, Ashraf (c. 1250–c. 1310), mathematician, astronomer Ibn Sahl, mathematician, physicist Freydoon Shahidi, Distinguished Professor of Mathematics, Purdue University Sijzi (c. 945–c. 1020), mathematician, astronomer and astrologer Zayn al-Din Omar Savaji, philosopher and logician M.
Vali Siadat, Distinguished Professor of Mathematics, University of Illinois at Chicago == T == Ramin Takloo-Bighash (born 1974), number theorist, University of Illinois at Chicago Tusi, Nasireddin (1201–1274), Persian polymath, architect, philosopher, physician, scientist, and theologian Tusi, Sharafeddin (?–1213/4) == Y == Yaʿqūb ibn Ṭāriq (?–796), mathematician and astronomer Nazif ibn Yumn (?–990), mathematician == Z == Zarir Jurjani (9th century), mathematician and astronomer == References ==
Wikipedia:List of Italian mathematicians#0
A list of notable mathematicians from Italy by century: == Ancient == Marcus Terentius Varro Boethius Vitruvius == 12th–15th centuries == == 16th century == == 17th century == == 18th century == == 19th century == == 20th century ==
Wikipedia:List of Jewish American mathematicians#0
This is a list of notable Jewish American mathematicians. For other Jewish Americans, see Lists of Jewish Americans. Abraham Adrian Albert (1905–1972), abstract algebra Kenneth Appel (1932–2013), four-color problem Lipman Bers (1914–1993), non-linear elliptic equations Paul Cohen (1934–2007), set theorist; Fields Medal (1966) Jesse Douglas (1897–1965), mathematician; Fields Medal (1936), Bôcher Memorial Prize (1943) Samuel Eilenberg (1913–1998), category theory; Wolf Prize (1986), Steele Prize (1987) Yakov Eliashberg (born 1946), symplectic topology and partial differential equations Charles Fefferman (born 1949), mathematician; Fields Medal (1978), Bôcher Prize (2008) William Feller (1906–1970), probability theory Michael Freedman (born 1951), mathematician; Fields Medal (1986) Hillel Furstenberg (born 1935), mathematician; Wolf Prize (2006/07), Abel Prize (2020) Michael Golomb (1909–2008), theory of approximation Michael Harris (born 1954), mathematician E. Morton Jellinek (1890–1963), biostatistician Edward Kasner (1878–1955), mathematician Sergiu Klainerman (born 1950), hyperbolic differential equations and general relativity; MacArthur Fellow (1991), Guggenheim Fellow (1997), Bôcher Memorial Prize (1999) Cornelius Lanczos (1893–1974), mathematician and mathematical physicist Peter Lax (1926–2025), mathematician; Wolf Prize (1987), Steele Prize (1993), Abel Prize (2005) Emma Lehmer (1906–2007), mathematician Grigory Margulis (born 1946), mathematician; Fields Medal (1978), Wolf Prize (2005), Abel Prize (2020) Barry Mazur (born 1937), mathematician; Cole Prize (1982), Chern Medal (2022) John von Neumann (1903–1957), mathematician Ken Ribet (born 1948), algebraic number theory and algebraic geometry Peter Sarnak (born 1953), analytic number theory; Pólya Prize (1998), Cole Prize (2005), Wolf Prize (2014) Yakov Sinai (born 1935), dynamical systems; Wolf Prize (1997), Steele Prize (2013), Abel Prize (2014) Isadore Singer (1924–2021), mathematician; Bôcher Prize (1969), Steele Prize (2000), Abel Prize (2004) Robert M. Solovay (born 1938), mathematician; Paris Kanellakis Award (2003) Elias Stein (1931–2018), harmonic analysis; Wolf Prize (1999), Steele Prize (2002) Edward Witten (born 1951), theoretical physics; Fields Medal (1990) == See also == List of Jewish mathematicians == References ==
Wikipedia:List of Norwegian mathematicians#0
This is a list of lists of mathematicians, covering notable mathematicians by nationality, ethnicity, religion, profession and other characteristics. Alphabetical lists are also available. == Lists by nationality, ethnicity or religion == == Lists by profession == List of actuaries List of game theorists List of geometers List of logicians List of mathematical probabilists List of statisticians List of quantitative analysts == Other lists of mathematicians == List of amateur mathematicians List of mathematicians born in the 19th century List of centenarians (scientists and mathematicians) List of films about mathematicians List of women in mathematics == See also == The Mathematics Genealogy Project – Database for the academic genealogy of mathematicians List of mathematical artists == External links == The MacTutor History of Mathematics archive – Extensive list of detailed biographies The Oberwolfach Photo Collection – Photographs of mathematicians from all over the world Photos of mathematicians – Collection of photos of mathematicians (and computer scientists) made by Andrej Bauer. Famous Mathematicians Calendar of mathematicians' birthdays and death anniversaries
Wikipedia:List of Polish mathematicians#0
A list of notable Polish mathematicians: == References ==
Wikipedia:List of Russian mathematicians#0
This list of Russian mathematicians includes the famous mathematicians from the Russian Empire, the Soviet Union and the Russian Federation. == Alphabetical list == === A === Georgy Adelson-Velsky, co-inventor of the AVL tree algorithm, developer of Kaissa, the first world computer chess champion Sergei Adian, known for his work in group theory, especially on the Burnside problem Aleksandr Aleksandrov, developer of CAT(k) space and Alexandrov's uniqueness theorem in geometry Pavel Alexandrov, author of the Alexandroff compactification and the Alexandrov topology Dmitri Anosov, developed the Anosov diffeomorphism Vladimir Arnold, an author of the Kolmogorov–Arnold–Moser theorem in dynamical systems, solved Hilbert's 13th problem, raised the ADE classification and Arnold's rouble problems === B === Alexander Beilinson, influential mathematician in representation theory, algebraic geometry and mathematical physics Vladimir Berkovich, developed Berkovich spaces Leonid Berlyand, PDE theorist, worked on asymptotic homogenization methods, Humboldt Prize winner Sergey Bernstein, developed the Bernstein polynomial, Bernstein's theorem and Bernstein inequalities in probability theory Nikolay Bogolyubov, mathematician and theoretical physicist, author of the edge-of-the-wedge theorem, Krylov–Bogolyubov theorem, describing function and multiple important contributions to quantum mechanics Viktor Bunyakovsky, noted for his work in theoretical mechanics and number theory, credited with an early discovery of the Cauchy–Schwarz inequality === C === Georg Cantor, inventor of set theory. Cantor was born in the Russian Empire, moving to Saxony with his family at age 11. Sergey Chaplygin, author of Chaplygin's equation, important in aerodynamics, and of the notion of Chaplygin gas Nikolai Chebotaryov, author of Chebotarev's density theorem Pafnuty Chebyshev, prominent tutor and founding father of Russian mathematics, contributed to probability, statistics and number theory, author of Chebyshev's inequality, Chebyshev distance, Chebyshev function, Chebyshev equation etc. Sergei Chernikov, significant contributor to both infinite group theory (developer of Chernikov groups) and linear programming === D === Boris Delaunay, inventor of Delaunay triangulation, organised the first Soviet Student Olympiad in mathematics Vladimir Drinfeld, mathematician and theoretical physicist, introduced quantum groups and ADHM construction, Fields Medal winner Eugene Dynkin, developed the Dynkin diagram, Doob–Dynkin lemma and Dynkin system in algebra and probability === E === Dmitri Egorov, author of Egorov's theorem, known for significant contributions to differential geometry and mathematical analysis Leonhard Euler, preeminent 18th century mathematician, arguably the greatest of all time, made important discoveries in mathematical analysis, graph theory and number theory, introduced much of the modern mathematical terminology and notation (mathematical function, Euler's number, Euler circles etc.). Although Swiss-born, Euler spent most of his life in St. Petersburg. === F === Ivan Fesenko, number theorist Anatoly Fomenko, topologist and chronologist, put forth a controversial theory of the New Chronology Alexander Friedmann (also spelled Friedman or Fridman), Russian and Soviet physicist and mathematician who originated the pioneering theory that the universe is expanding, governed by a set of equations he developed known as the Friedmann equations, and introduced the Friedmann–Lemaître–Robertson–Walker metric
Yevgraf Fyodorov, mathematician and crystallographer, identified periodic graphs in geometry, the first to catalogue all 230 space groups of crystals === G === Boris Galerkin, developed the Galerkin method in numerical analysis Israel Gelfand, major contributor to numerous areas of mathematics, including group theory, representation theory and linear algebra, author of the Gelfand representation, Gelfand pair, Gelfand triple, integral geometry etc. Alexander Gelfond, author of Gelfond's theorem, provided means to obtain infinite number of transcendentals, including Gelfond–Schneider constant and Gelfond's constant, Wolf Prize in Mathematics winner Semyon Aranovich Gershgorin, of Gershgorin circle theorem fame Sergei Godunov, developed Godunov's theorem and Godunov's scheme in differential equations Valery Goppa, inventor of Goppa codes and algebraic geometry codes in the field of algebraic geometry Mikhail Gromov, a prominent developer of geometric group theory, inventor of the homotopy principle, introduced Gromov's compactness theorem, Gromov norm, Gromov product etc., Wolf Prize winner === K === Leonid Kantorovich, mathematician and economist, founded linear programming, introduced the Kantorovich inequality and Kantorovich metric, developed the theory of optimal allocation of resources, Nobel Prize in Economics winner Anatoly Karatsuba, developed the Karatsuba algorithm (the first fast multiplication algorithm) David Kazhdan, Soviet, American and Israeli mathematician, known for representation theory, category theory, the Kazhdan–Lusztig conjecture, the Kazhdan–Margulis theorem and Kazhdan's property (T); held a MacArthur Fellowship, the Israel Prize and the Shaw Prize in Mathematics; doctoral adviser of Vladimir Voevodsky (Fields Medal recipient) Leonid Khachiyan, developed the ellipsoid algorithm for linear programming Aleksandr Khinchin, developed the Pollaczek–Khinchine formula, Wiener–Khinchin theorem and Khinchin inequality in probability theory Askold Khovanskii, inventor of the theory of fewnomials, contributions to the theory of toric varieties, Jeffery–Williams Prize winner Andrey Kolmogorov, preeminent 20th century mathematician, Wolf Prize winner; multiple contributions to mathematics include: probability axioms, Chapman–Kolmogorov equation and Kolmogorov extension theorem in probability; Kolmogorov complexity etc.
Maxim Kontsevich, author of the Kontsevich integral and Kontsevich quantization formula, Fields Medal winner Aleksandr Korkin Vladimir Kotelnikov, pioneer in information theory, an author of the fundamental sampling theorem Sofia Kovalevskaya, first woman professor in Northern Europe and Russia and the first female professor of mathematics, discovered the Kovalevskaya top Mikhail Kravchuk, developed the Kravchuk polynomials and Kravchuk matrix Mark Krein, developed the Tannaka–Krein duality, Krein–Milman theorem and Krein space, Wolf Prize winner Alexander Kronrod, developer of Gauss–Kronrod quadrature formula and Kaissa, the first world computer chess champion Aleksey Nikolaevich Krylov, first developed the Krylov subspace method, a still widely used numerical method for linear problems Nikolay Krylov, author of the edge-of-the-wedge theorem, Krylov–Bogolyubov theorem and describing function Aleksandr Kurosh, author of the Kurosh subgroup theorem and Kurosh problem in group theory === L === Olga Ladyzhenskaya, made major contributions to the solution of Hilbert's 19th problem and to the important Navier–Stokes equations Evgeny Landis, co-inventor of the AVL tree algorithm Vladimir Levenshtein, developed the Levenshtein automaton, Levenshtein coding and Levenshtein distance Boris Levin, famous for his theory of entire functions of completely regular growth; in 1956 established at Kharkov University, Ukraine, a mathematical seminar that he led and that remained influential for almost 40 years Leonid Levin, computer scientist, developed the Cook–Levin theorem Yuri Linnik, developed Linnik's theorem in analytic number theory Nikolai Lobachevsky, the "Copernicus of geometry" who created the first non-Euclidean geometry (Lobachevskian or hyperbolic geometry) Nikolai Luzin, developed Luzin's theorem, Luzin spaces and Luzin sets in descriptive set theory Aleksandr Lyapunov, founder of stability theory, author of Lyapunov's central limit theorem, Lyapunov equation, Lyapunov fractal, Lyapunov time etc. Lazar Lyusternik, famous for work in topology and differential geometry; co-developed Lyusternik–Schnirelmann theory with Lev Schnirelmann === M === Leonty Magnitsky, a director of the Moscow School of Mathematics and Navigation, author of the principal Russian 18th century textbook in mathematics Anatoly Maltsev, researched decidability of various algebraic groups, developed the Malcev algebra Yuri Manin, author of the Gauss–Manin connection in algebraic geometry, Manin–Mumford conjecture and Manin obstruction in diophantine geometry Grigory Margulis, worked on lattices in Lie groups, Wolf Prize and Fields Medal winner Andrey Markov, Sr., invented Markov chains, proved Markov brothers' inequality, namesake of the hidden Markov model, Markov number, Markov property, Markov's inequality, Markov processes, Markov random field, Markov algorithm etc. Andrey Markov, Jr., author of Markov's principle and Markov's rule in logics Yuri Matiyasevich, author of Matiyasevich's theorem, which provided a negative solution for Hilbert's tenth problem Mikhail Menshikov, probabilist Alexander Mikhailov, coined the term informatics David Milman, famous for his method of extreme points and centers that started the geometry of Banach spaces and had numerous further applications in mathematics; his theorem on extreme points, known as the Krein–Milman theorem, appears in every functional analysis textbook
beginning with his theorem on extreme points, which entered all textbooks in functional analysis as the Krein–Milman theorem === N === Mark Naimark, author of the Gelfand–Naimark theorem and Naimark's problem Pyotr Novikov, proved the unsolvability of the word problem for groups and solved Burnside's problem Sergei Novikov, worked on algebraic topology and soliton theory, developed the Adams–Novikov spectral sequence and Novikov conjecture, Wolf Prize and Fields Medal winner === O === Andrei Okounkov, infinite symmetric groups and Hilbert scheme researcher, Fields Medal winner Mikhail Ostrogradsky, mathematician and physicist, author of the divergence theorem and of partial fractions in integration === P === Grigori Perelman, made landmark contributions to Riemannian geometry and topology, proved the geometrization conjecture and Poincaré conjecture, won a Fields Medal and the first Clay Millennium Prize Problems Award (declined both) Lev Pontryagin, blind mathematician, developed Pontryagin duality and Pontryagin classes in topology, and Pontryagin's minimum principle in optimal control Yury Prokhorov, author of the Lévy–Prokhorov metric and Prokhorov's theorem in probability === R === Alexander Razborov, mathematician and computational theorist who won the Nevanlinna Prize in 1990 and the Gödel Prize for contributions to computer science === S === Numan Yunusovich Satimov, specialist in the theory of differential equations Lev Schnirelmann, developed the Lusternik–Schnirelmann category in topology and the Schnirelmann density of numbers Moses Schönfinkel, inventor of combinatory logic Igor Shafarevich, introduced the Shafarevich–Weil theorem, proved the Golod–Shafarevich theorem and Shafarevich's theorem on solvable Galois groups, important dissident during the Soviet regime, wrote books and articles that criticised socialism Sara Shakulova, first female mathematician of Tatar descent Yakov Sinai, developed the Kolmogorov–Sinai entropy and Sinai billiard, Wolf Prize winner Eugen Slutsky, statistician and economist, developed the Slutsky equation and Slutsky's theorem Stanislav Smirnov, proved conformal invariance of critical percolation on the triangular lattice, Fields Medalist Sergei Sobolev, introduced the Sobolev spaces and mathematical distributions, co-developer of the first ternary computer, Setun Vladimir Steklov, mathematician and physicist, founder of the Steklov Institute of Mathematics, proved theorems on generalized Fourier series Bella Subbotovskaya, specialist in Boolean functions, founder of the unauthorized Jewish People's University to educate Jews barred from quality universities === T === Andrey Tikhonov, author of the Tikhonov space and Tikhonov's theorem (central in general topology) and of the Tikhonov regularization of ill-posed problems, invented magnetotellurics Jakow Trachtenberg, developed the Trachtenberg system of mental calculation Boris Trakhtenbrot, proved the Gap theorem, developed Trakhtenbrot's theorem Valentin Turchin, inventor of the Refal programming language, introduced metasystem transition and supercompilation === U === Pavel Urysohn, developed the topological dimension theory and metrization theorems, Urysohn's lemma and the Fréchet–Urysohn space in topology === V === Vladimir Vapnik, developed the Vapnik–Chervonenkis theory of statistical learning and co-invented the support-vector machine method and support-vector clustering algorithms Nicolay Vasilyev, inventor of non-Aristotelian logic, the forerunner of paraconsistent and multi-valued logics Ivan Vinogradov, developed Vinogradov's theorem and the Pólya–Vinogradov inequality in analytic number theory
Vladimir Voevodsky, introduced a homotopy theory for schemes and modern motivic cohomology, Fields Medalist Georgy Voronoy, invented the Voronoi diagram === Y === Dmitry Yegorov, author of Egorov's theorem in mathematical analysis === Z === Efim Zelmanov, solved the restricted Burnside problem; Fields Medal winner == See also == List of mathematicians List of Russian physicists List of Russian scientists Science and technology in Russia
Wikipedia:List of Slovenian mathematicians#0
This is a list of Slovenian mathematicians. == A == Anton Ambschel (1749–1821) == B == Vladimir Batagelj (1948–) Franc Breckerfeld (1681–1744) Silvo Breskvar (1902–1969) == F == Joannes Disma Floriantschitsch de Grienfeld (1691–1757) == G == Josip Globevnik (1945–) == H == Ferdinand Augustin Hallerstein (1703–1774) Herman of Carinthia (c. 1100–c. 1160) Franc Hočevar (1853–1919) == K == Sandi Klavžar (1962–) Josip Križan (1841–1921) France Križanič (1928–2002) Klavdija Kutnar (1980–) == L == Ivo Lah (1896–1979) == M == Dragan Marušič (1953–) Bojan Mohar (1956–) Nežka Mramor-Kosta == P == Marko Petkovšek (1955–2023) Tomaž Pisanski (1949–) Josip Plemelj (1873–1967) == R == Janez Rakovec (1949–2008) Dušan Repovš (1954–) == S == Jožef Stefan (1835–1893) == V == Jurij Vega (1754–1802) Ivan Vidav (1918–2015) == Z == Egon Zakrajšek (1941–2002) == See also == Mathematician Geometer List of Slovenians
Wikipedia:List of Ukrainian mathematicians#0
This is a list of the best known Ukrainian mathematicians. This list includes some Polish, pre-revolutionary Russian and Soviet mathematicians who lived or worked in Ukraine. == A == Akhiezer, Naum Ilyich (1901–1980) == B == Berlyand, Leonid Viktorovich (b. 1957) Bernstein, Sergei Natanovich (1880–1968) Borok, Valentina Mikhailovna (1931–2004) == D == Drinfeld, Volodymyr Gershonovych (b. 1954) == E == Eremenko, Oleksandr Emmanuilovich (b. 1954) == G == Geronimus, Yakov Lazarevich (1898–1984) Glushkov, Victor Mihailovich (1923–1982) Goldberg, Anatolii Asirovich (1930–2008) Grave, Dmytro Olexandrovych (1863–1939) == K == Kadets, Mikhail Iosiphovich (1923–2011) Kondratiev, Yuri (b. 1953) Korolyuk, Volodymyr Semenovych (1925–2020) Koshmanenko, Volodymyr Dmytrovych (b. 1943) Kravchuk, Myhailo Pylypovych (1892–1942) Krein, Mark Grigorievich (1907–1989) Krylov, Mykola Mytrofanovych (1879–1955) == M == Marchenko, Volodymyr Olexandrovych (b. 1922) Mitropolskiy, Yurii Oleksiyovych (1917–2008) == N == Naimark, Mark Aronovich (1909–1978) == P == Pastur, Leonid Andriyovych (b. 1937) Pfeiffer, Georgii Yurii (1872–1946) Pogorelov, Aleksei Vasil'evich (1919–2002) == S == Samoilenko, Anatoliy Myhailovych (1938–2020) Shatunovsky, Samuil Osipovich (1859–1929) Shor, Naum Zuselevich (1937–2006) Skorokhod, Anatoliy Volodymyrovych (1930–2011) == V == Vaschenko-Zakharchenko, Myhailo Yegorovych (1825–1912) Viazovska, Maryna (b. 1984) Voronyi, Georgiy Feodosiyovych (1868–1908) == Y == Yadrenko, Myhailo Yosypovych (1932–2004) == Mathematicians born in Ukraine == Arnold, Vladimir Igorevich (b. 1937, Odesa, d. 2010, Paris) Besicovitch, Abram Samoilovitch (b. 1891, Berdyansk, d. 1970, Cambridge, UK) Fichtenholz, Grigorii Mikhailovich (b. 1888, Odesa, d. 1959, Leningrad) Fomenko, Anatoliy Timofeevich (b. 1945, Donetsk) Gelfand, Israel Moiseevich (b. 1913, Okny, Kherson region, d. 2009, New Brunswick, NJ) Shafarevich, Igor Rostislavovich (b. 1923, Zhytomyr, d. 2017, Moscow) Urysohn, Pavlo Samuilovich (b. 1898, Odesa, d. 1924, Batz-sur-Mer, France) == See also == Kharkiv Mathematical School List of amateur mathematicians List of mathematicians List of Indian mathematicians List of Slovenian mathematicians Lwów School of Mathematics
Wikipedia:List of Welsh mathematicians#0
This is a list of Welsh mathematicians who have contributed to the development of mathematics. == References == Chambers, Ll. G. Mathemategwyr Cymru (Mathematicians of Wales), Cyd Bwyllgor Addysg Cymru, 1994. == External links == Welsh scientists Mathematicians, Scientists and Inventors
Wikipedia:List of abstract algebra topics#0
Abstract algebra is the subject area of mathematics that studies algebraic structures, such as groups, rings, fields, modules, vector spaces, and algebras. The phrase abstract algebra was coined at the turn of the 20th century to distinguish this area from what was normally referred to as algebra, the study of the rules for manipulating formulae and algebraic expressions involving unknowns and real or complex numbers, often now called elementary algebra. The distinction is rarely made in more recent writings. == Basic language == Algebraic structures are defined primarily as sets with operations. Algebraic structure Subobjects: subgroup, subring, subalgebra, submodule etc. Binary operation Closure of an operation Associative property Distributive property Commutative property Unary operator Additive inverse, multiplicative inverse, inverse element Identity element Cancellation property Finitary operation Arity Structure preserving maps called homomorphisms are vital in the study of algebraic objects. Homomorphisms Kernels and cokernels Image and coimage Epimorphisms and monomorphisms Isomorphisms Isomorphism theorems There are several basic ways to combine algebraic objects of the same type to produce a third object of the same type. These constructions are used throughout algebra. Direct sum Direct limit Direct product Inverse limit Quotient objects: quotient group, quotient ring, quotient module etc. Tensor product Advanced concepts: Category theory Category of groups Category of abelian groups Category of rings Category of modules (over a fixed ring) Morita equivalence, Morita duality Category of vector spaces Homological algebra Filtration (algebra) Exact sequence Functor Zorn's lemma == Semigroups and monoids == Semigroup Subsemigroup Free semigroup Green's relations Inverse semigroup (or inversion semigroup, cf. 
[1]) Krohn–Rhodes theory Semigroup algebra Transformation semigroup Monoid Aperiodic monoid Free monoid Monoid (category theory) Monoid factorisation Syntactic monoid == Group theory == Structure Group (mathematics) Lagrange's theorem (group theory) Subgroup Coset Normal subgroup Characteristic subgroup Centralizer and normalizer subgroups Derived group Frattini subgroup Fitting subgroup Classification of finite simple groups Sylow theorems Local analysis Constructions Free group Presentation of a group Word problem for groups Quotient group Extension problem Direct sum, direct product Semidirect product Wreath product Types Simple group Finite group Abelian group Torsion subgroup Free abelian group Finitely generated abelian group Rank of an abelian group Cyclic group Locally cyclic group Solvable group Composition series Nilpotent group Divisible group Dedekind group, Hamiltonian group Examples Examples of groups Trivial group Additive group Permutation group Symmetric group Alternating group p-group List of small groups Klein four-group Quaternion group Dihedral group Dicyclic group Automorphism group Point group Circle group Linear group Orthogonal group Applications Group action Conjugacy class Inner automorphism Conjugate closure Stabilizer subgroup Orbit (group theory) Orbit-stabilizer theorem Cayley's theorem Burnside's lemma Burnside's problem Loop group Fundamental group == Ring theory == General Ring (mathematics) Commutative algebra, Commutative ring Ring theory, Noncommutative ring Algebra over a field Non-associative algebra Relatives to rings: Semiring, Nearring, Rig (algebra) Structure Subring, Subalgebra Center (algebra) Ring ideal Principal ideal Ideal quotient Maximal ideal, minimal ideal Primitive ideal, prime ideal, semiprime ideal Radical of an ideal Jacobson radical Socle of a ring unit (ring theory), Idempotent, Nilpotent, Zero divisor Characteristic (algebra) Ring homomorphism, Algebra homomorphism Ring epimorphism Ring monomorphism Ring isomorphism Skolem–Noether theorem Graded algebra Morita equivalence Brauer group Constructions Direct sum of rings, Product of rings Quotient ring Matrix ring Endomorphism ring Polynomial ring Formal power series Monoid ring, Group ring Localization of a ring Tensor algebra Symmetric algebra, Exterior algebra, Clifford algebra Free algebra Completion (ring theory) Types Field (mathematics), Division ring, division algebra Simple ring, Central simple algebra, Semisimple ring, Semisimple algebra Primitive ring, Semiprimitive ring Prime ring, Semiprime ring, Reduced ring Integral domain, Domain (ring theory) Field of fractions, Integral closure Euclidean domain, Principal ideal domain, Unique factorization domain, Dedekind domain, Prüfer domain Von Neumann regular ring Quasi-Frobenius ring Hereditary ring, Semihereditary ring Local ring, Semi-local ring Discrete valuation ring Regular local ring Cohen–Macaulay ring Gorenstein ring Artinian ring, Noetherian ring Perfect ring, semiperfect ring Baer ring, Rickart ring Lie ring, Lie algebra Ideal (Lie algebra) Jordan algebra Differential algebra Banach algebra Examples Rational number, Real number, Complex number, Quaternions, Octonions Hurwitz quaternion Gaussian integer Theorems and applications Algebraic geometry Hilbert's Nullstellensatz Hilbert's basis theorem Hopkins–Levitzki theorem Krull's principal ideal theorem Levitzky's theorem Galois theory Abel–Ruffini theorem Artin-Wedderburn theorem Jacobson density theorem Wedderburn's little theorem Lasker–Noether theorem == Field 
theory == Basic concepts Field (mathematics) Subfield (mathematics) Multiplicative group Primitive element (field theory) Field extension Algebraic extension Splitting field Algebraically closed field Algebraic element Algebraic closure Separable extension Separable polynomial Normal extension Galois extension Abelian extension Transcendence degree Field norm Field trace Conjugate element (field theory) Tensor product of fields Types Algebraic number field Global field Local field Finite field Symmetric function Formally real field Real closed field Applications Galois theory Galois group Inverse Galois problem Kummer theory == Module theory == General Module (mathematics) Bimodule Annihilator (ring theory) Structure Submodule Pure submodule Module homomorphism Essential submodule Superfluous submodule Singular submodule Socle of a module Radical of a module Constructions Free module Quotient module Direct sum, Direct product of modules Direct limit, Inverse limit Localization of a module Completion (ring theory) Types Simple module, Semisimple module Indecomposable module Artinian module, Noetherian module Homological types: Projective module Projective cover Swan's theorem Quillen–Suslin theorem Injective module Injective hull Flat module Flat cover Coherent module Finitely-generated module Finitely-presented module Finitely related module Algebraically compact module Reflexive module Concepts and theorems Composition series Length of a module Structure theorem for finitely generated modules over a principal ideal domain Homological dimension Projective dimension Injective dimension Flat dimension Global dimension Weak global dimension Cohomological dimension Krull dimension Regular sequence (algebra), depth (algebra) Fitting lemma Schur's lemma Nakayama's lemma Krull–Schmidt theorem Steinitz exchange lemma Jordan–Hölder theorem Artin–Rees lemma Schanuel's lemma Morita equivalence Progenerator == Representation theory == Representation theory Algebra representation Group representation Lie algebra representation Maschke's theorem Schur's lemma Equivariant map Frobenius reciprocity Induced representation Restricted representation Affine representation Projective representation Modular representation theory Quiver (mathematics) Representation theory of Hopf algebras == Non-associative systems == General Associative property, Associator Heap (mathematics) Magma (algebra) Loop (algebra), Quasigroup Nonassociative ring, Non-associative algebra Universal enveloping algebra Lie algebra (see also list of Lie group topics and list of representation theory topics) Jordan algebra Alternative algebra Power associativity Flexible algebra Examples Cayley–Dickson construction Octonions Sedenions Trigintaduonions Hyperbolic quaternions Virasoro algebra == Generalities == Algebraic structure Universal algebra Variety (universal algebra) Congruence relation Free object Generating set (universal algebra) Clone (algebra) Kernel of a function Kernel (algebra) Isomorphism class Isomorphism theorem Fundamental theorem on homomorphisms Universal property Filtration (mathematics) Category theory Monoidal category Groupoid Group object Coalgebra Bialgebra Hopf algebra Magma object Torsion (algebra) == Computer algebra == Symbolic mathematics Finite field arithmetic Gröbner basis Buchberger's algorithm == See also == List of commutative algebra topics List of homological algebra topics List of linear algebra topics List of algebraic structures Glossary of field theory Glossary of group theory Glossary of ring 
theory Glossary of tensor theory
Wikipedia:List of algebraic constructions#0
An algebraic construction is a method by which an algebraic entity is defined or derived from another. Instances include: Cayley–Dickson construction Proj construction Grothendieck group Gelfand–Naimark–Segal construction Ultraproduct ADHM construction Burnside ring Simplicial set Fox derivative Mapping cone (homological algebra) Prym variety Todd class Adjunction (field theory) Vaughan Jones construction Strähle construction Coset construction Plus construction Algebraic K-theory Stanley–Reisner ring construction Quotient ring construction Ward's twistor construction Hilbert symbol Hilbert's arithmetic of ends Colombeau's construction Vector bundle Integral monoid ring construction Integral group ring construction Category of Eilenberg–Moore algebras Kleisli category Lindenbaum–Tarski algebra construction Freudenthal magic square Stone–Čech compactification
Wikipedia:List of astronomers and mathematicians of the Kerala school#0
This is a list of astronomers and mathematicians of the Kerala school. The region surrounding the south-west coast of the Indian subcontinent, now politically organised as the Kerala State in India, has a long tradition of studies and investigations in all areas related to the branch of śāstra known as jyotiṣa. This branch of śāstra, in its broadest sense, incorporates several subdisciplines like mathematics, astronomy, astrology, horary astrology, etc. In Indian traditional jyotiṣa scholarship, there are no clear-cut boundary lines separating these subdisciplines. Hence the list presented below includes all who would be called a jyotiṣa-scholar in the Indian traditional sense. All these persons would, most likely, have been well versed in the subdisciplines of mathematics and astronomy as well. The list is an adaptation of the list of mathematicians and astronomers compiled by K. V. Sarma. Sarma has referred to all of them as astronomers. K. V. Sarma (1919–2005) was an Indian historian of science, particularly of the astronomy and mathematics of the Kerala school. He was responsible for bringing to light several of the achievements of the Kerala school. He was editor of the Vishveshvaranand Indological Research Series, and published the critical edition of several source works in Sanskrit, including the Aryabhatiya of Aryabhata. He was recognised as "the greatest authority on Kerala's astronomical tradition". Additional information about the persons mentioned in the list is available in books on the history of Malayalam literature and on the history of Sanskrit literature in Kerala. == List of astronomers and mathematicians of the Kerala school == == See also == A History of the Kerala School of Hindu Astronomy (book by K. V. Sarma) Kerala school of astronomy and mathematics == References ==
Wikipedia:List of continuity-related mathematical topics#0
In mathematics, the terms continuity, continuous, and continuum are used in a variety of related ways. == Continuity of functions and measures == Continuous function Absolutely continuous function Absolute continuity of a measure with respect to another measure Continuous probability distribution: Sometimes this term is used to mean a probability distribution whose cumulative distribution function (c.d.f.) is (simply) continuous. Sometimes it has a less inclusive meaning: a distribution whose c.d.f. is absolutely continuous with respect to Lebesgue measure. This less inclusive sense is equivalent to the condition that every set whose Lebesgue measure is 0 has probability 0. Geometric continuity Parametric continuity == Continuum == Continuum (set theory), the real line or the corresponding cardinal number Linear continuum, any ordered set that shares certain properties of the real line Continuum (topology), a nonempty compact connected metric space (sometimes a Hausdorff space) Continuum hypothesis, a conjecture of Georg Cantor that there is no cardinal number between that of countably infinite sets and the cardinality of the set of all real numbers. The latter cardinality is equal to the cardinality of the set of all subsets of a countably infinite set. Cardinality of the continuum, a cardinal number that represents the size of the set of real numbers == See also == Continuous variable
Wikipedia:List of convexity topics#0
This is a list of convexity topics, by Wikipedia page. Alpha blending - the process of combining a translucent foreground color with a background color, thereby producing a new blended color. This is a convex combination of two colors allowing for transparency effects in computer graphics. Barycentric coordinates - a coordinate system in which the location of a point of a simplex (a triangle, tetrahedron, etc.) is specified as the center of mass, or barycenter, of masses placed at its vertices. The coordinates are non-negative for points in the convex hull. Borsuk's conjecture - a conjecture about the number of pieces of smaller diameter required to cover a bounded body. Solved by Hadwiger for the case of smooth convex bodies. Bond convexity - a measure of the non-linear relationship of bond prices to changes in interest rates, the second derivative of the price of the bond with respect to interest rates. A basic form of convexity in finance. Carathéodory's theorem (convex hull) - If a point x of Rd lies in the convex hull of a set P, there is a subset of P with d+1 or fewer points such that x lies in its convex hull. Choquet theory - an area of functional analysis and convex analysis concerned with measures with support on the extreme points of a convex set C. Roughly speaking, all vectors of C should appear as "averages" of extreme points. Complex convexity - extends the notion of convexity to complex numbers. Convex analysis - the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization. Convex combination - a linear combination of points where all coefficients are non-negative and sum to 1. All convex combinations are within the convex hull of the given points. Convex and Concave - a print by Escher in which many of the structure's features can be seen as both convex shapes and concave impressions. Convex body - a compact convex set in a Euclidean space whose interior is non-empty. Convex conjugate - a dual of a real functional in a vector space. Can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes. Convex curve - a plane curve that lies entirely on one side of each of its supporting lines. The interior of a closed convex curve is a convex set. Convex function - a function in which the line segment between any two points on the graph of the function lies above the graph. Closed convex function - a convex function all of whose sublevel sets are closed sets. Proper convex function - a convex function whose effective domain is nonempty and which never attains minus infinity. Concave function - the negative of a convex function. Convex geometry - the branch of geometry studying convex sets, mainly in Euclidean space. Contains three sub-branches: general convexity, polytopes and polyhedra, and discrete geometry. Convex hull (aka convex envelope) - the smallest convex set that contains a given set of points in Euclidean space. Convex lens - a lens in which one or both sides are curved or bowed outwards. Light passing through the lens is converged (or focused) to a spot behind the lens. Convex optimization - a subfield of optimization, studies the problem of minimizing convex functions over convex sets. The convexity property can make optimization in some sense "easier" than the general case - for example, any local minimum must be a global minimum.
Convex polygon - a 2-dimensional polygon whose interior is a convex set in the Euclidean plane. Convex polytope - an n-dimensional polytope which is also a convex set in the Euclidean n-dimensional space. Convex set - a set in Euclidean space which contains every segment between any two of its points. Convexity (finance) - refers to non-linearities in a financial model. When the price of an underlying variable changes, the price of an output does not change linearly, but depends on the higher-order derivatives of the modeling function. Geometrically, the model is no longer flat but curved, and the degree of curvature is called the convexity. Duality (optimization) Epigraph (mathematics) - for a function f : Rn→R, the set of points lying on or above its graph Extreme point - for a convex set S in a real vector space, a point in S which does not lie in any open line segment joining two points of S. Fenchel conjugate Fenchel's inequality Fixed-point theorems in infinite-dimensional spaces - generalise the Brouwer fixed-point theorem. They have applications, for example, to the proof of existence theorems for partial differential equations. Four vertex theorem - every convex curve has at least 4 vertices. Gift wrapping algorithm - an algorithm for computing the convex hull of a given set of points Graham scan - a method of finding the convex hull of a finite set of points in the plane with time complexity O(n log n); see the sketch after this list Hadwiger conjecture (combinatorial geometry) - any convex body in n-dimensional Euclidean space can be covered by 2^n or fewer smaller bodies homothetic with the original body. Hadwiger's theorem - a theorem that characterizes the valuations on convex bodies in Rn. Helly's theorem Hyperplane - a subspace whose dimension is one less than that of its ambient space Indifference curve Infimal convolution Interval (mathematics) - a set of real numbers with the property that any number that lies between two numbers in the set is also included in the set Jarvis march Jensen's inequality - relates the value of a convex function of an integral to the integral of the convex function John ellipsoid - E(K) associated to a convex body K in n-dimensional Euclidean space Rn is the ellipsoid of maximal n-dimensional volume contained within K.
Lagrange multiplier - a strategy for finding the local maxima and minima of a function subject to equality constraints Legendre transformation - an involutive transformation on the real-valued convex functions of one real variable Locally convex topological vector space - example of topological vector spaces (TVS) that generalize normed spaces Macbeath regions Mahler volume - a dimensionless quantity that is associated with a centrally symmetric convex body Minkowski's theorem - any convex set in R n {\displaystyle \mathbb {R} ^{n}} which is symmetric with respect to the origin and with volume greater than 2^n d(L) contains a non-zero lattice point Mixed volume Mixture density Newton polygon - a tool for understanding the behaviour of polynomials over local fields Radon's theorem - on convex sets, that any set of d + 2 points in Rd can be partitioned into two disjoint sets whose convex hulls intersect Separating axis theorem Shapley–Folkman lemma - a result in convex geometry with applications in mathematical economics that describes the Minkowski addition of sets in a vector space Shephard's problem - a geometrical question Simplex - a generalization of the notion of a triangle or tetrahedron to arbitrary dimensions Simplex method - a popular algorithm for linear programming Subdifferential - generalization of the derivative to functions which are not differentiable Supporting hyperplane - a hyperplane meeting certain conditions Supporting hyperplane theorem - states that every boundary point of a convex set has at least one supporting hyperplane passing through it
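As a companion to the Graham scan entry above, here is a minimal Python sketch of Andrew's monotone-chain variant of that algorithm, a common O(n log n) way to compute the planar convex hull; the function names and the sample points are illustrative assumptions, not taken from any of the listed topics.

def cross(o, a, b):
    # z-component of the cross product OA x OB; positive means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))  # lexicographic sort dominates the O(n log n) cost
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:  # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):  # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # join hulls, dropping the duplicated endpoints

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# prints [(0, 0), (2, 0), (2, 2), (0, 2)]; interior and collinear points are discarded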
Wikipedia:List of fellows of the Fields Institute#0
In 2002, the Fields Institute initiated its fellowship program to recognize outstanding contributions to activities at the Fields Institute and within the Canadian mathematical community. The following is a list of fellows of the Fields Institute by year of appointment. == Fellows == == References ==
Wikipedia:List of fractals by Hausdorff dimension#0
According to Benoit Mandelbrot, "A fractal is by definition a set for which the Hausdorff-Besicovitch dimension strictly exceeds the topological dimension." Presented here is a list of fractals, ordered by increasing Hausdorff dimension, to illustrate what it means for a fractal to have a low or a high dimension. == Deterministic fractals == == Random and natural fractals == == See also == Fractal dimension Hausdorff dimension Scale invariance == Notes and references == == Further reading == Mandelbrot, Benoît (1982). The Fractal Geometry of Nature. W.H. Freeman. ISBN 0-7167-1186-9. Peitgen, Heinz-Otto (1988). Saupe, Dietmar (ed.). The Science of Fractal Images. Springer Verlag. ISBN 0-387-96608-0. Barnsley, Michael F. (1 January 1993). Fractals Everywhere. Morgan Kaufmann. ISBN 0-12-079061-0. Sapoval, Bernard; Mandelbrot, Benoît B. (2001). Universalités et fractales: jeux d'enfant ou délits d'initié?. Flammarion-Champs. ISBN 2-08-081466-4. == External links == The fractals on Mathworld Other fractals on Paul Bourke's website Soler's Gallery Fractals on mathcurve.com 1000fractales.free.fr - Project gathering fractals created with various software Fractals unleashed IFStile - software that computes the dimension of the boundary of self-affine tiles
Wikipedia:List of limits#0
This is a list of limits for common functions such as elementary functions. In this article, the terms a, b and c are constants with respect to x. == Limits for general functions == === Definitions of limits and related concepts === lim x → c f ( x ) = L {\displaystyle \lim _{x\to c}f(x)=L} if and only if ∀ ε > 0 ∃ δ > 0 : 0 < | x − c | < δ ⟹ | f ( x ) − L | < ε {\displaystyle \forall \varepsilon >0\ \exists \delta >0:0<|x-c|<\delta \implies |f(x)-L|<\varepsilon } . This is the (ε, δ)-definition of limit. The limit superior and limit inferior of a sequence are defined as lim sup n → ∞ x n = lim n → ∞ ( sup m ≥ n x m ) {\displaystyle \limsup _{n\to \infty }x_{n}=\lim _{n\to \infty }\left(\sup _{m\geq n}x_{m}\right)} and lim inf n → ∞ x n = lim n → ∞ ( inf m ≥ n x m ) {\displaystyle \liminf _{n\to \infty }x_{n}=\lim _{n\to \infty }\left(\inf _{m\geq n}x_{m}\right)} . A function, f ( x ) {\displaystyle f(x)} , is said to be continuous at a point, c, if lim x → c f ( x ) = f ( c ) . {\displaystyle \lim _{x\to c}f(x)=f(c).} === Operations on a single known limit === If lim x → c f ( x ) = L {\displaystyle \lim _{x\to c}f(x)=L} then: lim x → c [ f ( x ) ± a ] = L ± a {\displaystyle \lim _{x\to c}\,[f(x)\pm a]=L\pm a} lim x → c a f ( x ) = a L {\displaystyle \lim _{x\to c}\,af(x)=aL} lim x → c 1 f ( x ) = 1 L {\displaystyle \lim _{x\to c}{\frac {1}{f(x)}}={\frac {1}{L}}} if L is not equal to 0. lim x → c f ( x ) n = L n {\displaystyle \lim _{x\to c}\,f(x)^{n}=L^{n}} if n is a positive integer lim x → c f ( x ) 1 n = L 1 n {\displaystyle \lim _{x\to c}\,f(x)^{1 \over n}=L^{1 \over n}} if n is a positive integer, and if n is even, then L > 0. In general, if g(x) is continuous at L and lim x → c f ( x ) = L {\displaystyle \lim _{x\to c}f(x)=L} then lim x → c g ( f ( x ) ) = g ( L ) {\displaystyle \lim _{x\to c}g\left(f(x)\right)=g(L)} === Operations on two known limits === If lim x → c f ( x ) = L 1 {\displaystyle \lim _{x\to c}f(x)=L_{1}} and lim x → c g ( x ) = L 2 {\displaystyle \lim _{x\to c}g(x)=L_{2}} then: lim x → c [ f ( x ) ± g ( x ) ] = L 1 ± L 2 {\displaystyle \lim _{x\to c}\,[f(x)\pm g(x)]=L_{1}\pm L_{2}} lim x → c [ f ( x ) g ( x ) ] = L 1 ⋅ L 2 {\displaystyle \lim _{x\to c}\,[f(x)g(x)]=L_{1}\cdot L_{2}} lim x → c f ( x ) g ( x ) = L 1 L 2 if L 2 ≠ 0 {\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}={\frac {L_{1}}{L_{2}}}\qquad {\text{ if }}L_{2}\neq 0} === Limits involving derivatives or infinitesimal changes === In these limits, the infinitesimal change h {\displaystyle h} is often denoted Δ x {\displaystyle \Delta x} or δ x {\displaystyle \delta x} . If f ( x ) {\displaystyle f(x)} is differentiable at x {\displaystyle x} , lim h → 0 f ( x + h ) − f ( x ) h = f ′ ( x ) {\displaystyle \lim _{h\to 0}{f(x+h)-f(x) \over h}=f'(x)} . This is the definition of the derivative. All differentiation rules can also be reframed as rules involving limits. For example, if g(x) is differentiable at x, lim h → 0 f ∘ g ( x + h ) − f ∘ g ( x ) h = f ′ [ g ( x ) ] g ′ ( x ) {\displaystyle \lim _{h\to 0}{f\circ g(x+h)-f\circ g(x) \over h}=f'[g(x)]g'(x)} . This is the chain rule. lim h → 0 f ( x + h ) g ( x + h ) − f ( x ) g ( x ) h = f ′ ( x ) g ( x ) + f ( x ) g ′ ( x ) {\displaystyle \lim _{h\to 0}{f(x+h)g(x+h)-f(x)g(x) \over h}=f'(x)g(x)+f(x)g'(x)} . This is the product rule. 
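The derivative-definition limit above can be checked numerically: as h shrinks, the difference quotient (f(x+h) − f(x))/h approaches f′(x). A minimal Python sketch, with sin at x = 1 chosen purely for illustration:

import math

f, x = math.sin, 1.0
for h in [1e-1, 1e-3, 1e-5, 1e-7]:
    # forward difference quotient; converges to f'(1) = cos(1) as h -> 0
    dq = (f(x + h) - f(x)) / h
    print(f"h={h:.0e}  quotient={dq:.10f}  error={abs(dq - math.cos(x)):.2e}")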
lim h → 0 ( f ( x + h ) f ( x ) ) 1 / h = exp ⁡ ( f ′ ( x ) f ( x ) ) {\displaystyle \lim _{h\to 0}\left({\frac {f(x+h)}{f(x)}}\right)^{1/h}=\exp \left({\frac {f'(x)}{f(x)}}\right)} lim h → 0 ( f ( e h x ) f ( x ) ) 1 / h = exp ⁡ ( x f ′ ( x ) f ( x ) ) {\displaystyle \lim _{h\to 0}{\left({f(e^{h}x) \over {f(x)}}\right)^{1/h}}=\exp \left({\frac {xf'(x)}{f(x)}}\right)} If f ( x ) {\displaystyle f(x)} and g ( x ) {\displaystyle g(x)} are differentiable on an open interval containing c, except possibly c itself, and lim x → c f ( x ) = lim x → c g ( x ) = 0 or ± ∞ {\displaystyle \lim _{x\to c}f(x)=\lim _{x\to c}g(x)=0{\text{ or }}\pm \infty } , L'Hôpital's rule can be used: lim x → c f ( x ) g ( x ) = lim x → c f ′ ( x ) g ′ ( x ) {\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}=\lim _{x\to c}{\frac {f'(x)}{g'(x)}}} === Inequalities === If f ( x ) ≤ g ( x ) {\displaystyle f(x)\leq g(x)} for all x in an interval that contains c, except possibly c itself, and the limit of f ( x ) {\displaystyle f(x)} and g ( x ) {\displaystyle g(x)} both exist at c, then lim x → c f ( x ) ≤ lim x → c g ( x ) {\displaystyle \lim _{x\to c}f(x)\leq \lim _{x\to c}g(x)} If lim x → c f ( x ) = lim x → c h ( x ) = L {\displaystyle \lim _{x\to c}f(x)=\lim _{x\to c}h(x)=L} and f ( x ) ≤ g ( x ) ≤ h ( x ) {\displaystyle f(x)\leq g(x)\leq h(x)} for all x in an open interval that contains c, except possibly c itself, lim x → c g ( x ) = L . {\displaystyle \lim _{x\to c}g(x)=L.} This is known as the squeeze theorem. This applies even in the cases that f(x) and g(x) take on different values at c, or are discontinuous at c. == Polynomials and functions of the form xa == lim x → c a = a {\displaystyle \lim _{x\to c}a=a} === Polynomials in x === lim x → c x = c {\displaystyle \lim _{x\to c}x=c} lim x → c ( a x + b ) = a c + b {\displaystyle \lim _{x\to c}(ax+b)=ac+b} lim x → c x n = c n {\displaystyle \lim _{x\to c}x^{n}=c^{n}} if n is a positive integer lim x → ∞ x / a = { ∞ , a > 0 does not exist , a = 0 − ∞ , a < 0 {\displaystyle \lim _{x\to \infty }x/a={\begin{cases}\infty ,&a>0\\{\text{does not exist}},&a=0\\-\infty ,&a<0\end{cases}}} In general, if p ( x ) {\displaystyle p(x)} is a polynomial then, by the continuity of polynomials, lim x → c p ( x ) = p ( c ) {\displaystyle \lim _{x\to c}p(x)=p(c)} This is also true for rational functions, as they are continuous on their domains. === Functions of the form xa === lim x → c x a = c a . {\displaystyle \lim _{x\to c}x^{a}=c^{a}.} In particular, lim x → ∞ x a = { ∞ , a > 0 1 , a = 0 0 , a < 0 {\displaystyle \lim _{x\to \infty }x^{a}={\begin{cases}\infty ,&a>0\\1,&a=0\\0,&a<0\end{cases}}} lim x → c x 1 / a = c 1 / a {\displaystyle \lim _{x\to c}x^{1/a}=c^{1/a}} . 
In particular, lim x → ∞ x 1 / a = lim x → ∞ x a = ∞ for any a > 0 {\displaystyle \lim _{x\to \infty }x^{1/a}=\lim _{x\to \infty }{\sqrt[{a}]{x}}=\infty {\text{ for any }}a>0} lim x → 0 + x − n = lim x → 0 + 1 x n = + ∞ {\displaystyle \lim _{x\to 0^{+}}x^{-n}=\lim _{x\to 0^{+}}{\frac {1}{x^{n}}}=+\infty } lim x → 0 − x − n = lim x → 0 − 1 x n = { − ∞ , if n is odd + ∞ , if n is even {\displaystyle \lim _{x\to 0^{-}}x^{-n}=\lim _{x\to 0^{-}}{\frac {1}{x^{n}}}={\begin{cases}-\infty ,&{\text{if }}n{\text{ is odd}}\\+\infty ,&{\text{if }}n{\text{ is even}}\end{cases}}} lim x → ∞ a x − 1 = lim x → ∞ a / x = 0 for any real a {\displaystyle \lim _{x\to \infty }ax^{-1}=\lim _{x\to \infty }a/x=0{\text{ for any real }}a} == Exponential functions == === Functions of the form ag(x) === lim x → c e x = e c {\displaystyle \lim _{x\to c}e^{x}=e^{c}} , due to the continuity of e x {\displaystyle e^{x}} lim x → ∞ a x = { ∞ , a > 1 1 , a = 1 0 , 0 < a < 1 {\displaystyle \lim _{x\to \infty }a^{x}={\begin{cases}\infty ,&a>1\\1,&a=1\\0,&0<a<1\end{cases}}} lim x → ∞ a − x = { 0 , a > 1 1 , a = 1 ∞ , 0 < a < 1 {\displaystyle \lim _{x\to \infty }a^{-x}={\begin{cases}0,&a>1\\1,&a=1\\\infty ,&0<a<1\end{cases}}} lim x → ∞ a x = lim x → ∞ a 1 / x = { 1 , a > 0 0 , a = 0 does not exist , a < 0 {\displaystyle \lim _{x\to \infty }{\sqrt[{x}]{a}}=\lim _{x\to \infty }{a}^{1/x}={\begin{cases}1,&a>0\\0,&a=0\\{\text{does not exist}},&a<0\end{cases}}} === Functions of the form xg(x) === lim x → ∞ x x = lim x → ∞ x 1 / x = 1 {\displaystyle \lim _{x\to \infty }{\sqrt[{x}]{x}}=\lim _{x\to \infty }{x}^{1/x}=1} === Functions of the form f(x)g(x) === lim x → + ∞ ( x x + k ) x = e − k {\displaystyle \lim _{x\to +\infty }\left({\frac {x}{x+k}}\right)^{x}=e^{-k}} lim x → 0 ( 1 + x ) 1 x = e {\displaystyle \lim _{x\to 0}\left(1+x\right)^{\frac {1}{x}}=e} lim x → 0 ( 1 + k x ) m x = e m k {\displaystyle \lim _{x\to 0}\left(1+kx\right)^{\frac {m}{x}}=e^{mk}} lim x → + ∞ ( 1 + 1 x ) x = e {\displaystyle \lim _{x\to +\infty }\left(1+{\frac {1}{x}}\right)^{x}=e} lim x → + ∞ ( 1 − 1 x ) x = 1 e {\displaystyle \lim _{x\to +\infty }\left(1-{\frac {1}{x}}\right)^{x}={\frac {1}{e}}} lim x → + ∞ ( 1 + k x ) m x = e m k {\displaystyle \lim _{x\to +\infty }\left(1+{\frac {k}{x}}\right)^{mx}=e^{mk}} lim x → 0 ( 1 + a ( e − x − 1 ) ) − 1 x = e a {\displaystyle \lim _{x\to 0}\left(1+a\left({e^{-x}-1}\right)\right)^{-{\frac {1}{x}}}=e^{a}} . This limit can be derived from the limit lim x → 0 ( 1 + x ) 1 x = e {\displaystyle \lim _{x\to 0}\left(1+x\right)^{\frac {1}{x}}=e} above. === Sums, products and composites === lim x → 0 x e − x = 0 {\displaystyle \lim _{x\to 0}xe^{-x}=0} lim x → ∞ x e − x = 0 {\displaystyle \lim _{x\to \infty }xe^{-x}=0} lim x → 0 ( a x − 1 x ) = ln ⁡ a , {\displaystyle \lim _{x\to 0}\left({\frac {a^{x}-1}{x}}\right)=\ln {a},} for all positive a. lim x → 0 ( e x − 1 x ) = 1 {\displaystyle \lim _{x\to 0}\left({\frac {e^{x}-1}{x}}\right)=1} lim x → 0 ( e a x − 1 x ) = a {\displaystyle \lim _{x\to 0}\left({\frac {e^{ax}-1}{x}}\right)=a} == Logarithmic functions == === Natural logarithms === lim x → c ln ⁡ x = ln ⁡ c {\displaystyle \lim _{x\to c}\ln {x}=\ln c} , due to the continuity of ln ⁡ x {\displaystyle \ln {x}} .
In particular, lim x → 0 + log ⁡ x = − ∞ {\displaystyle \lim _{x\to 0^{+}}\log x=-\infty } lim x → ∞ log ⁡ x = ∞ {\displaystyle \lim _{x\to \infty }\log x=\infty } lim x → 1 ln ⁡ ( x ) x − 1 = 1 {\displaystyle \lim _{x\to 1}{\frac {\ln(x)}{x-1}}=1} lim x → 0 ln ⁡ ( x + 1 ) x = 1 {\displaystyle \lim _{x\to 0}{\frac {\ln(x+1)}{x}}=1} lim x → 0 − ln ⁡ ( 1 + a ( e − x − 1 ) ) x = a {\displaystyle \lim _{x\to 0}{\frac {-\ln \left(1+a\left({e^{-x}-1}\right)\right)}{x}}=a} . This limit follows from L'Hôpital's rule. lim x → 0 x ln ⁡ x = 0 {\displaystyle \lim _{x\to 0}x\ln x=0} , hence lim x → 0 x x = 1 {\displaystyle \lim _{x\to 0}x^{x}=1} lim x → ∞ ln ⁡ x x = 0 {\displaystyle \lim _{x\to \infty }{\frac {\ln x}{x}}=0} === Logarithms to arbitrary bases === For b > 1, lim x → 0 + log b ⁡ x = − ∞ {\displaystyle \lim _{x\to 0^{+}}\log _{b}x=-\infty } lim x → ∞ log b ⁡ x = ∞ {\displaystyle \lim _{x\to \infty }\log _{b}x=\infty } For b < 1, lim x → 0 + log b ⁡ x = ∞ {\displaystyle \lim _{x\to 0^{+}}\log _{b}x=\infty } lim x → ∞ log b ⁡ x = − ∞ {\displaystyle \lim _{x\to \infty }\log _{b}x=-\infty } Both cases can be generalized to: lim x → 0 + log b ⁡ x = − F ( b ) ∞ {\displaystyle \lim _{x\to 0^{+}}\log _{b}x=-F(b)\infty } lim x → ∞ log b ⁡ x = F ( b ) ∞ {\displaystyle \lim _{x\to \infty }\log _{b}x=F(b)\infty } where F ( x ) = 2 H ( x − 1 ) − 1 {\displaystyle F(x)=2H(x-1)-1} and H ( x ) {\displaystyle H(x)} is the Heaviside step function == Trigonometric functions == If x {\displaystyle x} is expressed in radians: lim x → a sin ⁡ x = sin ⁡ a {\displaystyle \lim _{x\to a}\sin x=\sin a} lim x → a cos ⁡ x = cos ⁡ a {\displaystyle \lim _{x\to a}\cos x=\cos a} These limits both follow from the continuity of sin and cos. lim x → 0 sin ⁡ x x = 1 {\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}=1} . Or, in general, lim x → 0 sin ⁡ a x a x = 1 {\displaystyle \lim _{x\to 0}{\frac {\sin ax}{ax}}=1} , for a not equal to 0. lim x → 0 sin ⁡ a x x = a {\displaystyle \lim _{x\to 0}{\frac {\sin ax}{x}}=a} lim x → 0 sin ⁡ a x b x = a b {\displaystyle \lim _{x\to 0}{\frac {\sin ax}{bx}}={\frac {a}{b}}} , for b not equal to 0. lim x → ∞ x sin ⁡ ( 1 x ) = 1 {\displaystyle \lim _{x\to \infty }x\sin \left({\frac {1}{x}}\right)=1} lim x → 0 1 − cos ⁡ x x = lim x → 0 cos ⁡ x − 1 x = 0 {\displaystyle \lim _{x\to 0}{\frac {1-\cos x}{x}}=\lim _{x\to 0}{\frac {\cos x-1}{x}}=0} lim x → 0 1 − cos ⁡ x x 2 = 1 2 {\displaystyle \lim _{x\to 0}{\frac {1-\cos x}{x^{2}}}={\frac {1}{2}}} lim x → n ± tan ⁡ ( π x + π 2 ) = ∓ ∞ {\displaystyle \lim _{x\to n^{\pm }}\tan \left(\pi x+{\frac {\pi }{2}}\right)=\mp \infty } , for integer n. lim x → 0 tan ⁡ x x = 1 {\displaystyle \lim _{x\to 0}{\frac {\tan x}{x}}=1} . Or, in general, lim x → 0 tan ⁡ a x a x = 1 {\displaystyle \lim _{x\to 0}{\frac {\tan ax}{ax}}=1} , for a not equal to 0. lim x → 0 tan ⁡ a x b x = a b {\displaystyle \lim _{x\to 0}{\frac {\tan ax}{bx}}={\frac {a}{b}}} , for b not equal to 0. lim n → ∞ sin ⁡ sin ⁡ ⋯ sin ⁡ ( x 0 ) ⏟ n = 0 {\displaystyle \lim _{n\to \infty }\ \underbrace {\sin \sin \cdots \sin(x_{0})} _{n}=0} , where x0 is an arbitrary real number. lim n → ∞ cos ⁡ cos ⁡ ⋯ cos ⁡ ( x 0 ) ⏟ n = d {\displaystyle \lim _{n\to \infty }\ \underbrace {\cos \cos \cdots \cos(x_{0})} _{n}=d} , where d is the Dottie number. x0 can be any arbitrary real number. == Sums == In general, any infinite series is the limit of its partial sums. For example, an analytic function is the limit of its Taylor series, within its radius of convergence. 
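To illustrate the preceding statement, here is a minimal Python sketch (the function e^x and the truncation at 15 terms are illustrative choices, not from the article): the partial sums of the Taylor series Σ x^k/k! converge to e^x as more terms are added.

import math

x = 1.0
partial, term = 0.0, 1.0   # term holds x**k / math.factorial(k)
for k in range(15):
    partial += term
    term *= x / (k + 1)    # advance to x**(k+1) / (k+1)!
print(partial, math.exp(x))  # both print ~2.718281828459045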
lim n → ∞ ∑ k = 1 n 1 k = ∞ {\displaystyle \lim _{n\to \infty }\sum _{k=1}^{n}{\frac {1}{k}}=\infty } . This is known as the harmonic series. lim n → ∞ ( ∑ k = 1 n 1 k − log ⁡ n ) = γ {\displaystyle \lim _{n\to \infty }\left(\sum _{k=1}^{n}{\frac {1}{k}}-\log n\right)=\gamma } . This is the Euler–Mascheroni constant. == Notable special limits == lim n → ∞ n n ! n = e {\displaystyle \lim _{n\to \infty }{\frac {n}{\sqrt[{n}]{n!}}}=e} lim n → ∞ ( n ! ) 1 / n = ∞ {\displaystyle \lim _{n\to \infty }\left(n!\right)^{1/n}=\infty } . This can be proven by considering the inequality e x ≥ x n n ! {\displaystyle e^{x}\geq {\frac {x^{n}}{n!}}} at x = n {\displaystyle x=n} . lim n → ∞ 2 n 2 − 2 + 2 + ⋯ + 2 ⏟ n = π {\displaystyle \lim _{n\to \infty }\,2^{n}\underbrace {\sqrt {2-{\sqrt {2+{\sqrt {2+\dots +{\sqrt {2}}}}}}}} _{n}=\pi } . This can be derived from Viète's formula for π. == Limiting behavior == === Asymptotic equivalences === Asymptotic equivalences, f ( x ) ∼ g ( x ) {\displaystyle f(x)\sim g(x)} , are true if lim x → ∞ f ( x ) g ( x ) = 1 {\displaystyle \lim _{x\to \infty }{\frac {f(x)}{g(x)}}=1} . Therefore, they can also be reframed as limits. Some notable asymptotic equivalences include lim x → ∞ x / ln ⁡ x π ( x ) = 1 {\displaystyle \lim _{x\to \infty }{\frac {x/\ln x}{\pi (x)}}=1} , due to the prime number theorem, π ( x ) ∼ x ln ⁡ x {\displaystyle \pi (x)\sim {\frac {x}{\ln x}}} , where π(x) is the prime counting function. lim n → ∞ 2 π n ( n e ) n n ! = 1 {\displaystyle \lim _{n\to \infty }{\frac {{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}}{n!}}=1} , due to Stirling's approximation, n ! ∼ 2 π n ( n e ) n {\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}} . === Big O notation === The behaviour of functions described by Big O notation can also be described by limits. For example f ( x ) ∈ O ( g ( x ) ) {\displaystyle f(x)\in {\mathcal {O}}(g(x))} if lim sup x → ∞ | f ( x ) | g ( x ) < ∞ {\displaystyle \limsup _{x\to \infty }{\frac {|f(x)|}{g(x)}}<\infty } == References ==
Wikipedia:List of logarithmic identities#0
In mathematics, many logarithmic identities exist. The following is a compilation of the most notable of these, many of which are used for computational purposes. == Trivial identities == Trivial mathematical identities are relatively simple (for an experienced mathematician), though not necessarily unimportant. The trivial logarithmic identities are as follows: === Explanations === By definition, we know that: log b ⁡ ( y ) = x ⟺ b x = y {\displaystyle \log _{b}(y)=x\iff b^{x}=y} , where b > 0 {\displaystyle b>0} and b ≠ 1 {\displaystyle b\neq 1} . Setting x = 0 {\displaystyle x=0} , we can see that: b x = y ⟺ b ( 0 ) = y ⟺ 1 = y ⟺ y = 1 {\displaystyle b^{x}=y\iff b^{(0)}=y\iff 1=y\iff y=1} . So, substituting these values into the formula, we see that: log b ⁡ ( y ) = x ⟺ log b ⁡ ( 1 ) = 0 {\displaystyle \log _{b}(y)=x\iff \log _{b}(1)=0} , which gets us the first property. Setting x = 1 {\displaystyle x=1} , we can see that: b x = y ⟺ b ( 1 ) = y ⟺ b = y ⟺ y = b {\displaystyle b^{x}=y\iff b^{(1)}=y\iff b=y\iff y=b} . So, substituting these values into the formula, we see that: log b ⁡ ( y ) = x ⟺ log b ⁡ ( b ) = 1 {\displaystyle \log _{b}(y)=x\iff \log _{b}(b)=1} , which gets us the second property. == Cancelling exponentials == Logarithms and exponentials with the same base cancel each other. This is true because logarithms and exponentials are inverse operations, just as multiplication and division are inverse operations, and addition and subtraction are inverse operations. b log b ⁡ ( x ) = x because antilog b ( log b ⁡ ( x ) ) = x {\displaystyle b^{\log _{b}(x)}=x{\text{ because }}{\mbox{antilog}}_{b}(\log _{b}(x))=x} log b ⁡ ( b x ) = x because log b ⁡ ( antilog b ( x ) ) = x {\displaystyle \log _{b}(b^{x})=x{\text{ because }}\log _{b}({\mbox{antilog}}_{b}(x))=x} Both of the above are derived from the following two equations that define a logarithm (note that the variable x {\displaystyle x} need not refer to the same number in each of the equations below): log b ⁡ ( y ) = x ⟺ b x = y {\displaystyle \log _{b}(y)=x\iff b^{x}=y} Looking at the equation b x = y {\displaystyle b^{x}=y} , and substituting the value for x {\displaystyle x} of log b ⁡ ( y ) = x {\displaystyle \log _{b}(y)=x} , we get the following equation: b x = y ⟺ b log b ⁡ ( y ) = y {\displaystyle b^{x}=y\iff b^{\log _{b}(y)}=y} , which gets us the first equation. A rougher way to think about it is that b something = y {\displaystyle b^{\text{something}}=y} , and that that " something {\displaystyle {\text{something}}} " is log b ⁡ ( y ) {\displaystyle \log _{b}(y)} . Looking at the equation log b ⁡ ( y ) = x {\displaystyle \log _{b}(y)=x} , and substituting the value for y {\displaystyle y} of b x = y {\displaystyle b^{x}=y} , we get the following equation: log b ⁡ ( y ) = x ⟺ log b ⁡ ( b x ) = x {\displaystyle \log _{b}(y)=x\iff \log _{b}(b^{x})=x} , which gets us the second equation. A rougher way to think about it is that log b ⁡ ( something ) = x {\displaystyle \log _{b}({\text{something}})=x} , and that that " something {\displaystyle {\text{something}}} " is b x {\displaystyle b^{x}} . == Using simpler operations == Logarithms can be used to make calculations easier. For example, two numbers can be multiplied just by using a logarithm table and adding. These are often known as logarithmic properties, which are documented in the table below.
The first three operations below assume that x = b^c and/or y = b^d, so that log_b(x) = c and log_b(y) = d. Derivations also use the log definitions x = b^(log_b(x)) and x = log_b(b^x). Here b {\displaystyle b} , x {\displaystyle x} , and y {\displaystyle y} are positive real numbers and b ≠ 1 {\displaystyle b\neq 1} , and c {\displaystyle c} and d {\displaystyle d} are real numbers. The laws result from canceling exponentials and the appropriate law of indices. Starting with the first law: x y = b log b ⁡ ( x ) b log b ⁡ ( y ) = b log b ⁡ ( x ) + log b ⁡ ( y ) ⇒ log b ⁡ ( x y ) = log b ⁡ ( b log b ⁡ ( x ) + log b ⁡ ( y ) ) = log b ⁡ ( x ) + log b ⁡ ( y ) {\displaystyle xy=b^{\log _{b}(x)}b^{\log _{b}(y)}=b^{\log _{b}(x)+\log _{b}(y)}\Rightarrow \log _{b}(xy)=\log _{b}(b^{\log _{b}(x)+\log _{b}(y)})=\log _{b}(x)+\log _{b}(y)} The law for powers exploits another of the laws of indices: x y = ( b log b ⁡ ( x ) ) y = b y log b ⁡ ( x ) ⇒ log b ⁡ ( x y ) = y log b ⁡ ( x ) {\displaystyle x^{y}=(b^{\log _{b}(x)})^{y}=b^{y\log _{b}(x)}\Rightarrow \log _{b}(x^{y})=y\log _{b}(x)} The law relating to quotients then follows: log b ⁡ ( x y ) = log b ⁡ ( x y − 1 ) = log b ⁡ ( x ) + log b ⁡ ( y − 1 ) = log b ⁡ ( x ) − log b ⁡ ( y ) {\displaystyle \log _{b}{\bigg (}{\frac {x}{y}}{\bigg )}=\log _{b}(xy^{-1})=\log _{b}(x)+\log _{b}(y^{-1})=\log _{b}(x)-\log _{b}(y)} log b ⁡ ( 1 y ) = log b ⁡ ( y − 1 ) = − log b ⁡ ( y ) {\displaystyle \log _{b}{\bigg (}{\frac {1}{y}}{\bigg )}=\log _{b}(y^{-1})=-\log _{b}(y)} Similarly, the root law is derived by rewriting the root as a reciprocal power: log b ⁡ ( x y ) = log b ⁡ ( x 1 y ) = 1 y log b ⁡ ( x ) {\displaystyle \log _{b}({\sqrt[{y}]{x}})=\log _{b}(x^{\frac {1}{y}})={\frac {1}{y}}\log _{b}(x)} === Derivations of product, quotient, and power rules === These are the three main logarithm laws/rules/principles, from which the other properties listed above can be proven. Each of these logarithm properties corresponds to its respective exponent law, and their derivations/proofs will hinge on those facts. There are multiple ways to derive/prove each logarithm law – this is just one possible method. ==== Logarithm of a product ==== To state the logarithm of a product law formally: ∀ b ∈ R + , b ≠ 1 , ∀ x , y ∈ R + , log b ⁡ ( x y ) = log b ⁡ ( x ) + log b ⁡ ( y ) {\displaystyle \forall b\in \mathbb {R} _{+},b\neq 1,\forall x,y\in \mathbb {R} _{+},\log _{b}(xy)=\log _{b}(x)+\log _{b}(y)} Derivation: Let b ∈ R + {\displaystyle b\in \mathbb {R} _{+}} , where b ≠ 1 {\displaystyle b\neq 1} , and let x , y ∈ R + {\displaystyle x,y\in \mathbb {R} _{+}} . We want to relate the expressions log b ⁡ ( x ) {\displaystyle \log _{b}(x)} and log b ⁡ ( y ) {\displaystyle \log _{b}(y)} . This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to log b ⁡ ( x ) {\displaystyle \log _{b}(x)} and log b ⁡ ( y ) {\displaystyle \log _{b}(y)} quite often, we will give them some variable names to make working with them easier: Let m = log b ⁡ ( x ) {\displaystyle m=\log _{b}(x)} , and let n = log b ⁡ ( y ) {\displaystyle n=\log _{b}(y)} . Rewriting these as exponentials, we see that m = log b ⁡ ( x ) ⟺ b m = x , n = log b ⁡ ( y ) ⟺ b n = y . {\displaystyle {\begin{aligned}m&=\log _{b}(x)\iff b^{m}=x,\\n&=\log _{b}(y)\iff b^{n}=y.\end{aligned}}} From here, we can relate b m {\displaystyle b^{m}} (i.e. x {\displaystyle x} ) and b n {\displaystyle b^{n}} (i.e.
y {\displaystyle y} ) using exponent laws as x y = ( b m ) ( b n ) = b m ⋅ b n = b m + n {\displaystyle xy=(b^{m})(b^{n})=b^{m}\cdot b^{n}=b^{m+n}} To recover the logarithms, we apply log b {\displaystyle \log _{b}} to both sides of the equality. log b ⁡ ( x y ) = log b ⁡ ( b m + n ) {\displaystyle \log _{b}(xy)=\log _{b}(b^{m+n})} The right side may be simplified using one of the logarithm properties from before: we know that log b ⁡ ( b m + n ) = m + n {\displaystyle \log _{b}(b^{m+n})=m+n} , giving log b ⁡ ( x y ) = m + n {\displaystyle \log _{b}(xy)=m+n} We now resubstitute the values for m {\displaystyle m} and n {\displaystyle n} into our equation, so our final expression is only in terms of x {\displaystyle x} , y {\displaystyle y} , and b {\displaystyle b} . log b ⁡ ( x y ) = log b ⁡ ( x ) + log b ⁡ ( y ) {\displaystyle \log _{b}(xy)=\log _{b}(x)+\log _{b}(y)} This completes the derivation. ==== Logarithm of a quotient ==== To state the logarithm of a quotient law formally: ∀ b ∈ R + , b ≠ 1 , ∀ x , y , ∈ R + , log b ⁡ ( x y ) = log b ⁡ ( x ) − log b ⁡ ( y ) {\displaystyle \forall b\in \mathbb {R} _{+},b\neq 1,\forall x,y,\in \mathbb {R} _{+},\log _{b}\left({\frac {x}{y}}\right)=\log _{b}(x)-\log _{b}(y)} Derivation: Let b ∈ R + {\displaystyle b\in \mathbb {R} _{+}} , where b ≠ 1 {\displaystyle b\neq 1} , and let x , y ∈ R + {\displaystyle x,y\in \mathbb {R} _{+}} . We want to relate the expressions log b ⁡ ( x ) {\displaystyle \log _{b}(x)} and log b ⁡ ( y ) {\displaystyle \log _{b}(y)} . This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to log b ⁡ ( x ) {\displaystyle \log _{b}(x)} and log b ⁡ ( y ) {\displaystyle \log _{b}(y)} quite often, we will give them some variable names to make working with them easier: Let m = log b ⁡ ( x ) {\displaystyle m=\log _{b}(x)} , and let n = log b ⁡ ( y ) {\displaystyle n=\log _{b}(y)} . Rewriting these as exponentials, we see that: m = log b ⁡ ( x ) ⟺ b m = x , n = log b ⁡ ( y ) ⟺ b n = y . {\displaystyle {\begin{aligned}m&=\log _{b}(x)\iff b^{m}=x,\\n&=\log _{b}(y)\iff b^{n}=y.\end{aligned}}} From here, we can relate b m {\displaystyle b^{m}} (i.e. x {\displaystyle x} ) and b n {\displaystyle b^{n}} (i.e. y {\displaystyle y} ) using exponent laws as x y = ( b m ) ( b n ) = b m b n = b m − n {\displaystyle {\frac {x}{y}}={\frac {(b^{m})}{(b^{n})}}={\frac {b^{m}}{b^{n}}}=b^{m-n}} To recover the logarithms, we apply log b {\displaystyle \log _{b}} to both sides of the equality. log b ⁡ ( x y ) = log b ⁡ ( b m − n ) {\displaystyle \log _{b}\left({\frac {x}{y}}\right)=\log _{b}\left(b^{m-n}\right)} The right side may be simplified using one of the logarithm properties from before: we know that log b ⁡ ( b m − n ) = m − n {\displaystyle \log _{b}(b^{m-n})=m-n} , giving log b ⁡ ( x y ) = m − n {\displaystyle \log _{b}\left({\frac {x}{y}}\right)=m-n} We now resubstitute the values for m {\displaystyle m} and n {\displaystyle n} into our equation, so our final expression is only in terms of x {\displaystyle x} , y {\displaystyle y} , and b {\displaystyle b} . log b ⁡ ( x y ) = log b ⁡ ( x ) − log b ⁡ ( y ) {\displaystyle \log _{b}\left({\frac {x}{y}}\right)=\log _{b}(x)-\log _{b}(y)} This completes the derivation. 
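Before deriving the power rule, the product and quotient rules derived above can be sanity-checked numerically. A minimal Python sketch; the base 3 and the arguments 5 and 7 are arbitrary illustrative choices:

import math

b, x, y = 3.0, 5.0, 7.0
# log_b(xy) = log_b(x) + log_b(y) and log_b(x/y) = log_b(x) - log_b(y)
prod_ok = math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))
quot_ok = math.isclose(math.log(x / y, b), math.log(x, b) - math.log(y, b))
print(prod_ok, quot_ok)  # True True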
==== Logarithm of a power ==== To state the logarithm of a power law formally: ∀ b ∈ R + , b ≠ 1 , ∀ x ∈ R + , ∀ r ∈ R , log b ⁡ ( x r ) = r log b ⁡ ( x ) {\displaystyle \forall b\in \mathbb {R} _{+},b\neq 1,\forall x\in \mathbb {R} _{+},\forall r\in \mathbb {R} ,\log _{b}(x^{r})=r\log _{b}(x)} Derivation: Let b ∈ R + {\displaystyle b\in \mathbb {R} _{+}} , where b ≠ 1 {\displaystyle b\neq 1} , let x ∈ R + {\displaystyle x\in \mathbb {R} _{+}} , and let r ∈ R {\displaystyle r\in \mathbb {R} } . For this derivation, we want to simplify the expression log b ⁡ ( x r ) {\displaystyle \log _{b}(x^{r})} . To do this, we begin with the simpler expression log b ⁡ ( x ) {\displaystyle \log _{b}(x)} . Since we will be using log b ⁡ ( x ) {\displaystyle \log _{b}(x)} often, we will define it as a new variable: Let m = log b ⁡ ( x ) {\displaystyle m=\log _{b}(x)} . To more easily manipulate the expression, we rewrite it as an exponential. By definition, m = log b ⁡ ( x ) ⟺ b m = x {\displaystyle m=\log _{b}(x)\iff b^{m}=x} , so we have b m = x {\displaystyle b^{m}=x} Similar to the derivations above, we take advantage of another exponent law. In order to have x r {\displaystyle x^{r}} in our final expression, we raise both sides of the equality to the power of r {\displaystyle r} : ( b m ) r = ( x ) r b m r = x r {\displaystyle {\begin{aligned}(b^{m})^{r}&=(x)^{r}\\b^{mr}&=x^{r}\end{aligned}}} where we used the exponent law ( b m ) r = b m r {\displaystyle (b^{m})^{r}=b^{mr}} . To recover the logarithms, we apply log b {\displaystyle \log _{b}} to both sides of the equality. log b ⁡ ( b m r ) = log b ⁡ ( x r ) {\displaystyle \log _{b}(b^{mr})=\log _{b}(x^{r})} The left side of the equality can be simplified using a logarithm law, which states that log b ⁡ ( b m r ) = m r {\displaystyle \log _{b}(b^{mr})=mr} . m r = log b ⁡ ( x r ) {\displaystyle mr=\log _{b}(x^{r})} Substituting in the original value for m {\displaystyle m} , rearranging, and simplifying gives ( log b ⁡ ( x ) ) r = log b ⁡ ( x r ) r log b ⁡ ( x ) = log b ⁡ ( x r ) log b ⁡ ( x r ) = r log b ⁡ ( x ) {\displaystyle {\begin{aligned}\left(\log _{b}(x)\right)r&=\log _{b}(x^{r})\\r\log _{b}(x)&=\log _{b}(x^{r})\\\log _{b}(x^{r})&=r\log _{b}(x)\end{aligned}}} This completes the derivation. == Changing the base == To state the change of base logarithm formula formally: ∀ a , b ∈ R + , a , b ≠ 1 , ∀ x ∈ R + , log b ⁡ ( x ) = log a ⁡ ( x ) log a ⁡ ( b ) {\displaystyle \forall a,b\in \mathbb {R} _{+},a,b\neq 1,\forall x\in \mathbb {R} _{+},\log _{b}(x)={\frac {\log _{a}(x)}{\log _{a}(b)}}} This identity is useful to evaluate logarithms on calculators. For instance, most calculators have buttons for ln and for log10, but not all calculators have buttons for the logarithm of an arbitrary base. === Proof/derivation === Let a , b ∈ R + {\displaystyle a,b\in \mathbb {R} _{+}} , where a , b ≠ 1 {\displaystyle a,b\neq 1} Let x ∈ R + {\displaystyle x\in \mathbb {R} _{+}} . Here, a {\displaystyle a} and b {\displaystyle b} are the two bases we will be using for the logarithms. They cannot be 1, because the logarithm function is not well defined for the base of 1. The number x {\displaystyle x} will be what the logarithm is evaluating, so it must be a positive number. Since we will be dealing with the term log b ⁡ ( x ) {\displaystyle \log _{b}(x)} quite frequently, we define it as a new variable: Let m = log b ⁡ ( x ) {\displaystyle m=\log _{b}(x)} . To more easily manipulate the expression, it can be rewritten as an exponential. 
b m = x {\displaystyle b^{m}=x} Applying log a {\displaystyle \log _{a}} to both sides of the equality, log a ⁡ ( b m ) = log a ⁡ ( x ) {\displaystyle \log _{a}(b^{m})=\log _{a}(x)} Now, using the logarithm of a power property, which states that log a ⁡ ( b m ) = m log a ⁡ ( b ) {\displaystyle \log _{a}(b^{m})=m\log _{a}(b)} , m log a ⁡ ( b ) = log a ⁡ ( x ) {\displaystyle m\log _{a}(b)=\log _{a}(x)} Isolating m {\displaystyle m} , we get the following: m = log a ⁡ ( x ) log a ⁡ ( b ) {\displaystyle m={\frac {\log _{a}(x)}{\log _{a}(b)}}} Resubstituting m = log b ⁡ ( x ) {\displaystyle m=\log _{b}(x)} back into the equation, log b ⁡ ( x ) = log a ⁡ ( x ) log a ⁡ ( b ) {\displaystyle \log _{b}(x)={\frac {\log _{a}(x)}{\log _{a}(b)}}} This completes the proof that log b ⁡ ( x ) = log a ⁡ ( x ) log a ⁡ ( b ) {\displaystyle \log _{b}(x)={\frac {\log _{a}(x)}{\log _{a}(b)}}} . This formula has several consequences: log b ⁡ a = 1 log a ⁡ b {\displaystyle \log _{b}a={\frac {1}{\log _{a}b}}} log b n ⁡ a = log b ⁡ a n {\displaystyle \log _{b^{n}}a={\log _{b}a \over n}} log b ⁡ a = log b ⁡ e ⋅ log e ⁡ a = log b ⁡ e ⋅ ln ⁡ a {\displaystyle \log _{b}a=\log _{b}e\cdot \log _{e}a=\log _{b}e\cdot \ln a} b log a ⁡ d = d log a ⁡ b {\displaystyle b^{\log _{a}d}=d^{\log _{a}b}} − log b ⁡ a = log b ⁡ ( 1 a ) = log 1 / b ⁡ a {\displaystyle -\log _{b}a=\log _{b}\left({1 \over a}\right)=\log _{1/b}a} log b 1 ⁡ a 1 ⋯ log b n ⁡ a n = log b π ( 1 ) ⁡ a 1 ⋯ log b π ( n ) ⁡ a n , {\displaystyle \log _{b_{1}}a_{1}\,\cdots \,\log _{b_{n}}a_{n}=\log _{b_{\pi (1)}}a_{1}\,\cdots \,\log _{b_{\pi (n)}}a_{n},} where π {\textstyle \pi } is any permutation of the subscripts 1, ..., n. For example log b ⁡ w ⋅ log a ⁡ x ⋅ log d ⁡ c ⋅ log d ⁡ z = log d ⁡ w ⋅ log b ⁡ x ⋅ log a ⁡ c ⋅ log d ⁡ z . {\displaystyle \log _{b}w\cdot \log _{a}x\cdot \log _{d}c\cdot \log _{d}z=\log _{d}w\cdot \log _{b}x\cdot \log _{a}c\cdot \log _{d}z.} === Summation/subtraction === The following summation/subtraction rule is especially useful in probability theory when one is dealing with a sum of log-probabilities: Note that the subtraction identity is not defined if a = c {\displaystyle a=c} , since the logarithm of zero is not defined. Also note that, when programming, a {\displaystyle a} and c {\displaystyle c} may have to be switched on the right hand side of the equations if c ≫ a {\displaystyle c\gg a} to avoid losing the "1 +" due to rounding errors. Many programming languages have a specific log1p(x) function that calculates log e ⁡ ( 1 + x ) {\displaystyle \log _{e}(1+x)} without underflow (when x {\displaystyle x} is small). 
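For two terms, the rule in question can be written as log_b(a + c) = log_b(a) + log_b(1 + c/a), and likewise log_b(a − c) = log_b(a) + log_b(1 − c/a) for a > c. A minimal sketch of the numerically careful version in Python (natural logarithms assumed; the helper name log_add is ours, not a standard library function); it swaps the operands as described above so that log1p always sees a small argument:

    import math

    def log_add(log_a, log_c):
        # Compute ln(a + c) given ln(a) and ln(c), factoring out the
        # larger term so the leading "1 +" is not lost to rounding.
        hi, lo = max(log_a, log_c), min(log_a, log_c)
        return hi + math.log1p(math.exp(lo - hi))

    # Adding two probabilities stored as natural log-probabilities:
    assert math.isclose(log_add(math.log(0.004), math.log(0.001)),
                        math.log(0.005))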
More generally: log b ⁡ ∑ i = 0 N a i = log b ⁡ a 0 + log b ⁡ ( 1 + ∑ i = 1 N a i a 0 ) = log b ⁡ a 0 + log b ⁡ ( 1 + ∑ i = 1 N b ( log b ⁡ a i − log b ⁡ a 0 ) ) {\displaystyle \log _{b}\sum _{i=0}^{N}a_{i}=\log _{b}a_{0}+\log _{b}\left(1+\sum _{i=1}^{N}{\frac {a_{i}}{a_{0}}}\right)=\log _{b}a_{0}+\log _{b}\left(1+\sum _{i=1}^{N}b^{\left(\log _{b}a_{i}-\log _{b}a_{0}\right)}\right)} === Exponents === A useful identity involving exponents: x log ⁡ ( log ⁡ ( x ) ) log ⁡ ( x ) = log ⁡ ( x ) {\displaystyle x^{\frac {\log(\log(x))}{\log(x)}}=\log(x)} or more universally: x log ⁡ ( a ) log ⁡ ( x ) = a {\displaystyle x^{\frac {\log(a)}{\log(x)}}=a} === Other/resulting identities === 1 1 log x ⁡ ( a ) + 1 log y ⁡ ( a ) = log x y ⁡ ( a ) {\displaystyle {\frac {1}{{\frac {1}{\log _{x}(a)}}+{\frac {1}{\log _{y}(a)}}}}=\log _{xy}(a)} 1 1 log x ⁡ ( a ) − 1 log y ⁡ ( a ) = log x y ⁡ ( a ) {\displaystyle {\frac {1}{{\frac {1}{\log _{x}(a)}}-{\frac {1}{\log _{y}(a)}}}}=\log _{\frac {x}{y}}(a)} == Inequalities == Based on, x 1 + x ≤ ln ⁡ ( 1 + x ) ≤ x ( 6 + x ) 6 + 4 x ≤ x for all − 1 < x {\displaystyle {\frac {x}{1+x}}\leq \ln(1+x)\leq {\frac {x(6+x)}{6+4x}}\leq x{\mbox{ for all }}{-1}<x} 2 x 2 + x ≤ 3 − 27 3 + 2 x ≤ x 1 + x + x 2 / 12 ≤ ln ⁡ ( 1 + x ) ≤ x 1 + x ≤ x 2 2 + x 1 + x for 0 ≤ x , reverse for − 1 < x ≤ 0 {\displaystyle {\begin{aligned}{\frac {2x}{2+x}}&\leq 3-{\sqrt {\frac {27}{3+2x}}}\leq {\frac {x}{\sqrt {1+x+x^{2}/12}}}\\[4pt]&\leq \ln(1+x)\leq {\frac {x}{\sqrt {1+x}}}\leq {\frac {x}{2}}{\frac {2+x}{1+x}}\\[4pt]&{\text{ for }}0\leq x{\text{, reverse for }}{-1}<x\leq 0\end{aligned}}} All are accurate around x = 0 {\displaystyle x=0} , but not for large numbers. == Tropical identities == The following identity relates log semiring to the min-plus semiring. lim T → 0 − T log ⁡ ( e − s T + e − t T ) = m i n { s , t } {\displaystyle \lim _{T\rightarrow 0}-T\log(e^{-{\frac {s}{T}}}+e^{-{\frac {t}{T}}})=\mathrm {min} \{s,t\}} == Calculus identities == === Limits === lim x → 0 + log a ⁡ ( x ) = − ∞ if a > 1 {\displaystyle \lim _{x\to 0^{+}}\log _{a}(x)=-\infty \quad {\mbox{if }}a>1} lim x → 0 + log a ⁡ ( x ) = ∞ if 0 < a < 1 {\displaystyle \lim _{x\to 0^{+}}\log _{a}(x)=\infty \quad {\mbox{if }}0<a<1} lim x → ∞ log a ⁡ ( x ) = ∞ if a > 1 {\displaystyle \lim _{x\to \infty }\log _{a}(x)=\infty \quad {\mbox{if }}a>1} lim x → ∞ log a ⁡ ( x ) = − ∞ if 0 < a < 1 {\displaystyle \lim _{x\to \infty }\log _{a}(x)=-\infty \quad {\mbox{if }}0<a<1} lim x → ∞ x b log a ⁡ ( x ) = ∞ if b > 0 {\displaystyle \lim _{x\to \infty }x^{b}\log _{a}(x)=\infty \quad {\mbox{if }}b>0} lim x → ∞ log a ⁡ ( x ) x b = 0 if b > 0 {\displaystyle \lim _{x\to \infty }{\frac {\log _{a}(x)}{x^{b}}}=0\quad {\mbox{if }}b>0} The last limit is often summarized as "logarithms grow more slowly than any power or root of x". 
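Returning to the tropical identity above, the limit can be observed directly in floating point. A minimal sketch in Python (the name soft_min and the sample values s = 3, t = 5 are arbitrary illustrative choices):

    import math

    def soft_min(s, t, T):
        # -T * log(exp(-s/T) + exp(-t/T)), which tends to min(s, t) as T -> 0+
        return -T * math.log(math.exp(-s / T) + math.exp(-t / T))

    for T in (1.0, 0.1, 0.01):
        print(T, soft_min(3.0, 5.0, T))  # approaches min(3, 5) = 3

Note that for very small T this naive form underflows to log(0); robust implementations factor out the minimum first, exactly as in the log1p trick discussed earlier.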
=== Derivatives of logarithmic functions === d d x ln ⁡ x = 1 x , x > 0 {\displaystyle {d \over dx}\ln x={1 \over x},x>0} d d x ln ⁡ | x | = 1 x , x ≠ 0 {\displaystyle {d \over dx}\ln |x|={1 \over x},x\neq 0} d d x log a ⁡ x = 1 x ln ⁡ a , x > 0 , a > 0 , and a ≠ 1 {\displaystyle {d \over dx}\log _{a}x={1 \over x\ln a},x>0,a>0,{\text{ and }}a\neq 1} === Integral definition === ln ⁡ x = ∫ 1 x 1 t d t {\displaystyle \ln x=\int _{1}^{x}{\frac {1}{t}}\ dt} To modify the limits of integration to run from x {\displaystyle x} to 1 {\displaystyle 1} , we change the order of integration, which changes the sign of the integral: − ∫ 1 x 1 t d t = ∫ x 1 1 t d t {\displaystyle -\int _{1}^{x}{\frac {1}{t}}\,dt=\int _{x}^{1}{\frac {1}{t}}\,dt} Therefore: ln ⁡ 1 x = ∫ x 1 1 t d t {\displaystyle \ln {\frac {1}{x}}=\int _{x}^{1}{\frac {1}{t}}\,dt} === Riemann Sum === ln ⁡ ( n + 1 ) = {\displaystyle \ln(n+1)=} lim k → ∞ ∑ i = 1 k 1 x i Δ x = {\displaystyle \lim _{k\to \infty }\sum _{i=1}^{k}{\frac {1}{x_{i}}}\Delta x=} lim k → ∞ ∑ i = 1 k 1 1 + i − 1 k n ⋅ n k = {\displaystyle \lim _{k\to \infty }\sum _{i=1}^{k}{\frac {1}{1+{\frac {i-1}{k}}n}}\cdot {\frac {n}{k}}=} lim k → ∞ ∑ x = 1 k ⋅ n 1 1 + x k ⋅ 1 k = {\displaystyle \lim _{k\to \infty }\sum _{x=1}^{k\cdot n}{\frac {1}{1+{\frac {x}{k}}}}\cdot {\frac {1}{k}}=} lim k → ∞ ∑ x = 1 k ⋅ n 1 k + x = lim k → ∞ ∑ x = k + 1 k ⋅ n + k 1 x = lim k → ∞ ∑ x = k + 1 k ( n + 1 ) 1 x {\displaystyle \lim _{k\to \infty }\sum _{x=1}^{k\cdot n}{\frac {1}{k+x}}=\lim _{k\to \infty }\sum _{x=k+1}^{k\cdot n+k}{\frac {1}{x}}=\lim _{k\to \infty }\sum _{x=k+1}^{k(n+1)}{\frac {1}{x}}} for Δ x = n k {\displaystyle \textstyle \Delta x={\frac {n}{k}}} and x i {\displaystyle x_{i}} is a sample point in each interval. === Series representation === The natural logarithm ln ⁡ ( 1 + x ) {\displaystyle \ln(1+x)} has a well-known Taylor series expansion that converges for x {\displaystyle x} in the open-closed interval (−1, 1]: ln ⁡ ( 1 + x ) = ∑ n = 1 ∞ ( − 1 ) n + 1 x n n = x − x 2 2 + x 3 3 − x 4 4 + x 5 5 − x 6 6 + ⋯ . {\displaystyle \ln(1+x)=\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}x^{n}}{n}}=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}-{\frac {x^{4}}{4}}+{\frac {x^{5}}{5}}-{\frac {x^{6}}{6}}+\cdots .} Within this interval, for x = 1 {\displaystyle x=1} , the series is conditionally convergent, and for all other values, it is absolutely convergent. For x > 1 {\displaystyle x>1} or x ≤ − 1 {\displaystyle x\leq -1} , the series does not converge to ln ⁡ ( 1 + x ) {\displaystyle \ln(1+x)} . In these cases, different representations or methods must be used to evaluate the logarithm. === Harmonic number difference === It is not uncommon in advanced mathematics, particularly in analytic number theory and asymptotic analysis, to encounter expressions involving differences or ratios of harmonic numbers at scaled indices. The identity involving the limiting difference between harmonic numbers at scaled indices and its relationship to the logarithmic function provides an intriguing example of how discrete sequences can asymptotically relate to continuous functions. This identity is expressed as lim k → ∞ ( H k ( n + 1 ) − H k ) = ln ⁡ ( n + 1 ) {\displaystyle \lim _{k\to \infty }(H_{k(n+1)}-H_{k})=\ln(n+1)} which characterizes the behavior of harmonic numbers as they grow large. 
This approximation (which precisely equals ln ⁡ ( n + 1 ) {\displaystyle \ln(n+1)} in the limit) reflects how summation over increasing segments of the harmonic series exhibits integral properties, giving insight into the interplay between discrete and continuous analysis. It also illustrates how understanding the behavior of sums and series at large scales can lead to insightful conclusions about their properties. Here H k {\displaystyle H_{k}} denotes the k {\displaystyle k} -th harmonic number, defined as H k = ∑ j = 1 k 1 j {\displaystyle H_{k}=\sum _{j=1}^{k}{\frac {1}{j}}} The harmonic numbers are a fundamental sequence in number theory and analysis, known for their logarithmic growth. This result leverages the fact that the sum of the inverses of integers (i.e., harmonic numbers) can be closely approximated by the natural logarithm function, plus a constant, especially when extended over large intervals. As k {\displaystyle k} tends towards infinity, the difference between the harmonic numbers H k ( n + 1 ) {\displaystyle H_{k(n+1)}} and H k {\displaystyle H_{k}} converges to a non-zero value. This persistent non-zero difference, ln ⁡ ( n + 1 ) {\displaystyle \ln(n+1)} , precludes the possibility of the harmonic series approaching a finite limit, thus providing a clear mathematical articulation of its divergence. The technique of approximating sums by integrals (specifically using the integral test or by direct integral approximation) is fundamental in deriving such results. This specific identity can be a consequence of these approximations, considering: ∑ j = k + 1 k ( n + 1 ) 1 j ≈ ∫ k k ( n + 1 ) d x x {\displaystyle \sum _{j=k+1}^{k(n+1)}{\frac {1}{j}}\approx \int _{k}^{k(n+1)}{\frac {dx}{x}}} ==== Harmonic limit derivation ==== The limit explores the growth of the harmonic numbers when indices are multiplied by a scaling factor and then differenced. It specifically captures the sum from k + 1 {\displaystyle k+1} to k ( n + 1 ) {\displaystyle k(n+1)} : H k ( n + 1 ) − H k = ∑ j = k + 1 k ( n + 1 ) 1 j {\displaystyle H_{k(n+1)}-H_{k}=\sum _{j=k+1}^{k(n+1)}{\frac {1}{j}}} This can be estimated using the integral test for convergence, or more directly by comparing it to the integral of 1 / x {\displaystyle 1/x} from k {\displaystyle k} to k ( n + 1 ) {\displaystyle k(n+1)} : lim k → ∞ ∑ j = k + 1 k ( n + 1 ) 1 j = ∫ k k ( n + 1 ) d x x = ln ⁡ ( k ( n + 1 ) ) − ln ⁡ ( k ) = ln ⁡ ( k ( n + 1 ) k ) = ln ⁡ ( n + 1 ) {\displaystyle \lim _{k\to \infty }\sum _{j=k+1}^{k(n+1)}{\frac {1}{j}}=\int _{k}^{k(n+1)}{\frac {dx}{x}}=\ln(k(n+1))-\ln(k)=\ln \left({\frac {k(n+1)}{k}}\right)=\ln(n+1)} As the window's lower bound begins at k + 1 {\displaystyle k+1} and the upper bound extends to k ( n + 1 ) {\displaystyle k(n+1)} , both of which tend toward infinity as k → ∞ {\displaystyle k\to \infty } , the summation window encompasses an increasingly vast portion of the smallest possible terms of the harmonic series (those with astronomically large denominators), creating a discrete sum that stretches towards infinity, which mirrors how continuous integrals accumulate value across an infinitesimally fine partitioning of the domain. In the limit, the interval is effectively from 1 {\displaystyle 1} to n + 1 {\displaystyle n+1} where the onset k {\displaystyle k} implies this minimally discrete region. 
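The convergence is slow but easy to observe numerically. A minimal sketch in Python (direct summation; n = 4 and the values of k are arbitrary illustrative choices):

    import math

    def H(n):
        # n-th harmonic number by direct summation (adequate for moderate n)
        return sum(1.0 / j for j in range(1, n + 1))

    n = 4
    for k in (10, 100, 10000):
        print(k, H(k * (n + 1)) - H(k))  # tends to ln(n + 1)
    print(math.log(n + 1))               # ln(5) = 1.6094...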
=== Double series formula === The harmonic number difference formula for ln ⁡ ( m ) {\displaystyle \ln(m)} is an extension of the classic, alternating identity of ln ⁡ ( 2 ) {\displaystyle \ln(2)} : ln ⁡ ( 2 ) = lim k → ∞ ∑ n = 1 k ( 1 2 n − 1 − 1 2 n ) {\displaystyle \ln(2)=\lim _{k\to \infty }\sum _{n=1}^{k}\left({\frac {1}{2n-1}}-{\frac {1}{2n}}\right)} which can be generalized as the double series over the residues of m {\displaystyle m} : ln ⁡ ( m ) = ∑ x ∈ ⟨ m ⟩ ∩ N ∑ r ∈ Z m ∩ N ( 1 x − r − 1 x ) = ∑ x ∈ ⟨ m ⟩ ∩ N ∑ r ∈ Z m ∩ N r x ( x − r ) {\displaystyle \ln(m)=\sum _{x\in \langle m\rangle \cap \mathbb {N} }\sum _{r\in \mathbb {Z} _{m}\cap \mathbb {N} }\left({\frac {1}{x-r}}-{\frac {1}{x}}\right)=\sum _{x\in \langle m\rangle \cap \mathbb {N} }\sum _{r\in \mathbb {Z} _{m}\cap \mathbb {N} }{\frac {r}{x(x-r)}}} where ⟨ m ⟩ {\displaystyle \langle m\rangle } is the principal ideal generated by m {\displaystyle m} . Subtracting 1 x {\displaystyle \textstyle {\frac {1}{x}}} from each term 1 x − r {\displaystyle \textstyle {\frac {1}{x-r}}} (i.e., balancing each term with the modulus) reduces the magnitude of each term's contribution, ensuring convergence by controlling the series' tendency toward divergence as m {\displaystyle m} increases. For example: ln ⁡ ( 4 ) = lim k → ∞ ∑ n = 1 k ( 1 4 n − 3 − 1 4 n ) + ( 1 4 n − 2 − 1 4 n ) + ( 1 4 n − 1 − 1 4 n ) {\displaystyle \ln(4)=\lim _{k\to \infty }\sum _{n=1}^{k}\left({\frac {1}{4n-3}}-{\frac {1}{4n}}\right)+\left({\frac {1}{4n-2}}-{\frac {1}{4n}}\right)+\left({\frac {1}{4n-1}}-{\frac {1}{4n}}\right)} This method leverages the fine differences between closely related terms to stabilize the series. The sum over all residues r ∈ N {\displaystyle r\in \mathbb {N} } ensures that adjustments are uniformly applied across all possible offsets within each block of m {\displaystyle m} terms. This uniform distribution of the "correction" across different intervals defined by x − r {\displaystyle x-r} functions similarly to telescoping over a very large sequence. It helps to flatten out the discrepancies that might otherwise lead to divergent behavior in a straightforward harmonic series. Note that the structure of the summands of this formula matches those of the interpolated harmonic number H x {\displaystyle H_{x}} when both the domain and range are negated (i.e., − H − x {\displaystyle -H_{-x}} ). However, the interpretation and roles of the variables differ. ==== Deveci's Proof ==== A fundamental feature of the proof is the accumulation of the subtrahends 1 x {\textstyle {\frac {1}{x}}} into a unit fraction, that is, m x = 1 n {\textstyle {\frac {m}{x}}={\frac {1}{n}}} for m ∣ x {\displaystyle m\mid x} , thus m = ω + 1 {\displaystyle m=\omega +1} rather than m = | Z m ∩ N | {\displaystyle m=|\mathbb {Z} _{m}\cap \mathbb {N} |} , where the extrema of Z m ∩ N {\displaystyle \mathbb {Z} _{m}\cap \mathbb {N} } are [ 0 , ω ] {\displaystyle [0,\omega ]} if N = N 0 {\displaystyle \mathbb {N} =\mathbb {N} _{0}} and [ 1 , ω ] {\displaystyle [1,\omega ]} otherwise, with the minimum of 0 {\displaystyle 0} being implicit in the latter case due to the structural requirements of the proof. 
Since the cardinality of Z m ∩ N {\displaystyle \mathbb {Z} _{m}\cap \mathbb {N} } depends on the selection of one of two possible minima, the integral ∫ 1 t d t {\displaystyle \textstyle \int {\frac {1}{t}}dt} , as a set-theoretic procedure, is a function of the maximum ω {\displaystyle \omega } (which remains consistent across both interpretations) plus 1 {\displaystyle 1} , not the cardinality (which is ambiguous due to varying definitions of the minimum). Whereas the harmonic number difference computes the integral in a global sliding window, the double series, in parallel, computes the sum in a local sliding window—a shifting m {\displaystyle m} -tuple—over the harmonic series, advancing the window by m {\displaystyle m} positions to select the next m {\displaystyle m} -tuple, and offsetting each element of each tuple by 1 m {\textstyle {\frac {1}{m}}} relative to the window's absolute position. The sum ∑ n = 1 k ∑ 1 x − r {\textstyle \sum _{n=1}^{k}\sum {\frac {1}{x-r}}} corresponds to H k m {\displaystyle H_{km}} which scales H m {\displaystyle H_{m}} without bound. The sum ∑ n = 1 k − 1 n {\textstyle \sum _{n=1}^{k}-{\frac {1}{n}}} corresponds to the prefix H k {\displaystyle H_{k}} trimmed from the series to establish the window's moving lower bound k + 1 {\displaystyle k+1} , and ln ⁡ ( m ) {\displaystyle \ln(m)} is the limit of the sliding window (the scaled, truncated series): ∑ n = 1 k ∑ r = 1 ω ( 1 m n − r − 1 m n ) = ∑ n = 1 k ∑ r = 0 ω ( 1 m n − r − 1 m n ) = ∑ n = 1 k ( − 1 n + ∑ r = 0 ω 1 m n − r ) = − H k + ∑ n = 1 k ∑ r = 0 ω 1 m n − r = − H k + ∑ n = 1 k ∑ r = 0 ω 1 ( n − 1 ) m + m − r = − H k + ∑ n = 1 k ∑ j = 1 m 1 ( n − 1 ) m + j = − H k + ∑ n = 1 k ( H n m − H m ( n − 1 ) ) = − H k + H m k {\displaystyle {\begin{aligned}\sum _{n=1}^{k}\sum _{r=1}^{\omega }\left({\frac {1}{mn-r}}-{\frac {1}{mn}}\right)&=\sum _{n=1}^{k}\sum _{r=0}^{\omega }\left({\frac {1}{mn-r}}-{\frac {1}{mn}}\right)\\&=\sum _{n=1}^{k}\left(-{\frac {1}{n}}+\sum _{r=0}^{\omega }{\frac {1}{mn-r}}\right)\\&=-H_{k}+\sum _{n=1}^{k}\sum _{r=0}^{\omega }{\frac {1}{mn-r}}\\&=-H_{k}+\sum _{n=1}^{k}\sum _{r=0}^{\omega }{\frac {1}{(n-1)m+m-r}}\\&=-H_{k}+\sum _{n=1}^{k}\sum _{j=1}^{m}{\frac {1}{(n-1)m+j}}\\&=-H_{k}+\sum _{n=1}^{k}\left(H_{nm}-H_{m(n-1)}\right)\\&=-H_{k}+H_{mk}\end{aligned}}} lim k → ∞ H k m − H k = ∑ x ∈ ⟨ m ⟩ ∩ N ∑ r ∈ Z m ∩ N ( 1 x − r − 1 x ) = ln ⁡ ( ω + 1 ) = ln ⁡ ( m ) {\displaystyle \lim _{k\to \infty }H_{km}-H_{k}=\sum _{x\in \langle m\rangle \cap \mathbb {N} }\sum _{r\in \mathbb {Z} _{m}\cap \mathbb {N} }\left({\frac {1}{x-r}}-{\frac {1}{x}}\right)=\ln(\omega +1)=\ln(m)} === Integrals of logarithmic functions === ∫ ln ⁡ x d x = x ln ⁡ x − x + C = x ( ln ⁡ x − 1 ) + C {\displaystyle \int \ln x\,dx=x\ln x-x+C=x(\ln x-1)+C} ∫ log a ⁡ x d x = x log a ⁡ x − x ln ⁡ a + C = x ( ln ⁡ x − 1 ) ln ⁡ a + C {\displaystyle \int \log _{a}x\,dx=x\log _{a}x-{\frac {x}{\ln a}}+C={\frac {x(\ln x-1)}{\ln a}}+C} To remember higher integrals, it is convenient to define x [ n ] = x n ( log ⁡ ( x ) − H n ) {\displaystyle x^{\left[n\right]}=x^{n}(\log(x)-H_{n})} where H n {\displaystyle H_{n}} is the nth harmonic number: x [ 0 ] = log ⁡ x {\displaystyle x^{\left[0\right]}=\log x} x [ 1 ] = x log ⁡ ( x ) − x {\displaystyle x^{\left[1\right]}=x\log(x)-x} x [ 2 ] = x 2 log ⁡ ( x ) − 3 2 x 2 {\displaystyle x^{\left[2\right]}=x^{2}\log(x)-{\begin{matrix}{\frac {3}{2}}\end{matrix}}x^{2}} x [ 3 ] = x 3 log ⁡ ( x ) − 11 6 x 3 {\displaystyle x^{\left[3\right]}=x^{3}\log(x)-{\begin{matrix}{\frac 
{11}{6}}\end{matrix}}x^{3}} Then d d x x [ n ] = n x [ n − 1 ] {\displaystyle {\frac {d}{dx}}\,x^{\left[n\right]}=nx^{\left[n-1\right]}} ∫ x [ n ] d x = x [ n + 1 ] n + 1 + C {\displaystyle \int x^{\left[n\right]}\,dx={\frac {x^{\left[n+1\right]}}{n+1}}+C} == Approximating large numbers == The identities of logarithms can be used to approximate large numbers. Note that logb(a) + logb(c) = logb(ac), where a and c are positive numbers and b is a valid logarithm base. Suppose that one wants to approximate the 44th Mersenne prime, 2^32,582,657 − 1. To get the base-10 logarithm, we would multiply 32,582,657 by log10(2), getting 9,808,357.09543 = 9,808,357 + 0.09543. We can then get 10^9,808,357 × 10^0.09543 ≈ 1.25 × 10^9,808,357. Similarly, factorials can be approximated by summing the logarithms of the terms. == Complex logarithm identities == The complex logarithm is the complex number analogue of the logarithm function. No single valued function on the complex plane can satisfy the normal rules for logarithms. However, a multivalued function can be defined which satisfies most of the identities. It is usual to consider this as a function defined on a Riemann surface. A single valued version, called the principal value of the logarithm, can be defined which is discontinuous on the negative x axis, and is equal to the multivalued version on a single branch cut. === Definitions === In what follows, a capital first letter is used for the principal value of functions, and the lower case version is used for the multivalued function. The single valued version of definitions and identities is always given first, followed by a separate section for the multiple valued versions. ln(r) is the standard natural logarithm of the real number r. Arg(z) is the principal value of the arg function; its value is restricted to (−π, π]. It can be computed using Arg(x + iy) = atan2(y, x). Log(z) is the principal value of the complex logarithm function and has imaginary part in the range (−π, π]. Log ⁡ ( z ) = ln ⁡ ( | z | ) + i Arg ⁡ ( z ) {\displaystyle \operatorname {Log} (z)=\ln(|z|)+i\operatorname {Arg} (z)} e Log ⁡ ( z ) = z {\displaystyle e^{\operatorname {Log} (z)}=z} The multiple valued version of log(z) is a set, but it is easier to write it without braces, and using it in formulas follows obvious rules. log(z) is the set of complex numbers v which satisfy e^v = z arg(z) is the set of possible values of the arg function applied to z. 
When k is any integer: log ⁡ ( z ) = ln ⁡ ( | z | ) + i arg ⁡ ( z ) {\displaystyle \log(z)=\ln(|z|)+i\arg(z)} log ⁡ ( z ) = Log ⁡ ( z ) + 2 π i k {\displaystyle \log(z)=\operatorname {Log} (z)+2\pi ik} e log ⁡ ( z ) = z {\displaystyle e^{\log(z)}=z} === Constants === Principal value forms: Log ⁡ ( 1 ) = 0 {\displaystyle \operatorname {Log} (1)=0} Log ⁡ ( e ) = 1 {\displaystyle \operatorname {Log} (e)=1} Multiple value forms, for any k an integer: log ⁡ ( 1 ) = 0 + 2 π i k {\displaystyle \log(1)=0+2\pi ik} log ⁡ ( e ) = 1 + 2 π i k {\displaystyle \log(e)=1+2\pi ik} === Summation === Principal value forms: Log ⁡ ( z 1 ) + Log ⁡ ( z 2 ) = Log ⁡ ( z 1 z 2 ) ( mod 2 π i ) {\displaystyle \operatorname {Log} (z_{1})+\operatorname {Log} (z_{2})=\operatorname {Log} (z_{1}z_{2}){\pmod {2\pi i}}} Log ⁡ ( z 1 ) + Log ⁡ ( z 2 ) = Log ⁡ ( z 1 z 2 ) ( − π < Arg ⁡ ( z 1 ) + Arg ⁡ ( z 2 ) ≤ π ; e.g., Re ⁡ z 1 ≥ 0 and Re ⁡ z 2 > 0 ) {\displaystyle \operatorname {Log} (z_{1})+\operatorname {Log} (z_{2})=\operatorname {Log} (z_{1}z_{2})\quad (-\pi <\operatorname {Arg} (z_{1})+\operatorname {Arg} (z_{2})\leq \pi ;{\text{ e.g., }}\operatorname {Re} z_{1}\geq 0{\text{ and }}\operatorname {Re} z_{2}>0)} Log ⁡ ( z 1 ) − Log ⁡ ( z 2 ) = Log ⁡ ( z 1 / z 2 ) ( mod 2 π i ) {\displaystyle \operatorname {Log} (z_{1})-\operatorname {Log} (z_{2})=\operatorname {Log} (z_{1}/z_{2}){\pmod {2\pi i}}} Log ⁡ ( z 1 ) − Log ⁡ ( z 2 ) = Log ⁡ ( z 1 / z 2 ) ( − π < Arg ⁡ ( z 1 ) − Arg ⁡ ( z 2 ) ≤ π ; e.g., Re ⁡ z 1 ≥ 0 and Re ⁡ z 2 > 0 ) {\displaystyle \operatorname {Log} (z_{1})-\operatorname {Log} (z_{2})=\operatorname {Log} (z_{1}/z_{2})\quad (-\pi <\operatorname {Arg} (z_{1})-\operatorname {Arg} (z_{2})\leq \pi ;{\text{ e.g., }}\operatorname {Re} z_{1}\geq 0{\text{ and }}\operatorname {Re} z_{2}>0)} Multiple value forms: log ⁡ ( z 1 ) + log ⁡ ( z 2 ) = log ⁡ ( z 1 z 2 ) {\displaystyle \log(z_{1})+\log(z_{2})=\log(z_{1}z_{2})} log ⁡ ( z 1 ) − log ⁡ ( z 2 ) = log ⁡ ( z 1 / z 2 ) {\displaystyle \log(z_{1})-\log(z_{2})=\log(z_{1}/z_{2})} === Powers === A complex power of a complex number can have many possible values. Principal value form: z 1 z 2 = e z 2 Log ⁡ ( z 1 ) {\displaystyle {z_{1}}^{z_{2}}=e^{z_{2}\operatorname {Log} (z_{1})}} Log ⁡ ( z 1 z 2 ) = z 2 Log ⁡ ( z 1 ) ( mod 2 π i ) {\displaystyle \operatorname {Log} {\left({z_{1}}^{z_{2}}\right)}=z_{2}\operatorname {Log} (z_{1}){\pmod {2\pi i}}} Multiple value forms: z 1 z 2 = e z 2 log ⁡ ( z 1 ) {\displaystyle {z_{1}}^{z_{2}}=e^{z_{2}\log(z_{1})}} Where k1, k2 are any integers: log ⁡ ( z 1 z 2 ) = z 2 log ⁡ ( z 1 ) + 2 π i k 2 {\displaystyle \log {\left({z_{1}}^{z_{2}}\right)}=z_{2}\log(z_{1})+2\pi ik_{2}} log ⁡ ( z 1 z 2 ) = z 2 Log ⁡ ( z 1 ) + z 2 2 π i k 1 + 2 π i k 2 {\displaystyle \log {\left({z_{1}}^{z_{2}}\right)}=z_{2}\operatorname {Log} (z_{1})+z_{2}2\pi ik_{1}+2\pi ik_{2}} == Asymptotic identities == === Pronic numbers === As a consequence of the harmonic number difference, the natural logarithm is asymptotically approximated by a finite series difference, representing a truncation of the integral at k = n {\displaystyle k=n} : H 2 T [ n ] − H n ∼ ln ⁡ ( n + 1 ) {\displaystyle H_{2T[n]}-H_{n}\sim \ln(n+1)} where T [ n ] {\displaystyle T[n]} is the nth triangular number, and 2 T [ n ] {\displaystyle 2T[n]} is the sum of the first n even integers. 
Since the nth pronic number is asymptotically equivalent to the nth perfect square, it follows that: H n 2 − H n ∼ ln ⁡ ( n + 1 ) {\displaystyle H_{n^{2}}-H_{n}\sim \ln(n+1)} === Prime number theorem === The prime number theorem provides the following asymptotic equivalence: n π ( n ) ∼ ln ⁡ n {\displaystyle {\frac {n}{\pi (n)}}\sim \ln n} where π ( n ) {\displaystyle \pi (n)} is the prime counting function. This relationship is equal to:: 2 n H ( 1 , 2 , … , x n ) ∼ ln ⁡ n {\displaystyle {\frac {n}{H(1,2,\ldots ,x_{n})}}\sim \ln n} where H ( x 1 , x 2 , … , x n ) {\displaystyle H(x_{1},x_{2},\ldots ,x_{n})} is the harmonic mean of x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} . This is derived from the fact that the difference between the n {\displaystyle n} th harmonic number and ln ⁡ n {\displaystyle \ln n} asymptotically approaches a small constant, resulting in H n 2 − H n ∼ H n {\displaystyle H_{n^{2}}-H_{n}\sim H_{n}} . This behavior can also be derived from the properties of logarithms: ln ⁡ n {\displaystyle \ln n} is half of ln ⁡ n 2 {\displaystyle \ln n^{2}} , and this "first half" is the natural log of the root of n 2 {\displaystyle n^{2}} , which corresponds roughly to the first 1 n {\displaystyle \textstyle {\frac {1}{n}}} th of the sum H n 2 {\displaystyle H_{n^{2}}} , or H n {\displaystyle H_{n}} . The asymptotic equivalence of the first 1 n {\displaystyle \textstyle {\frac {1}{n}}} th of H n 2 {\displaystyle H_{n^{2}}} to the latter n − 1 n {\displaystyle \textstyle {\frac {n-1}{n}}} th of the series is expressed as follows: H n H n 2 ∼ ln ⁡ n ln ⁡ n = 1 2 {\displaystyle {\frac {H_{n}}{H_{n^{2}}}}\sim {\frac {\ln {\sqrt {n}}}{\ln n}}={\frac {1}{2}}} which generalizes to: H n H n k ∼ ln ⁡ n k ln ⁡ n = 1 k {\displaystyle {\frac {H_{n}}{H_{n^{k}}}}\sim {\frac {\ln {\sqrt[{k}]{n}}}{\ln n}}={\frac {1}{k}}} k H n ∼ H n k {\displaystyle kH_{n}\sim H_{n^{k}}} and: k H n − H n ∼ ( k − 1 ) ln ⁡ ( n + 1 ) {\displaystyle kH_{n}-H_{n}\sim (k-1)\ln(n+1)} H n k − H n ∼ ( k − 1 ) ln ⁡ ( n + 1 ) {\displaystyle H_{n^{k}}-H_{n}\sim (k-1)\ln(n+1)} k H n − H n k ∼ ( k − 1 ) γ {\displaystyle kH_{n}-H_{n^{k}}\sim (k-1)\gamma } for fixed k {\displaystyle k} . The correspondence sets H n {\displaystyle H_{n}} as a unit magnitude that partitions H n k {\displaystyle H_{n^{k}}} across powers, where each interval 1 n {\displaystyle \textstyle {\frac {1}{n}}} to 1 n 2 {\displaystyle \textstyle {\frac {1}{n^{2}}}} , 1 n 2 {\displaystyle \textstyle {\frac {1}{n^{2}}}} to 1 n 3 {\displaystyle \textstyle {\frac {1}{n^{3}}}} , etc., corresponds to one H n {\displaystyle H_{n}} unit, illustrating that H n k {\displaystyle H_{n^{k}}} forms a divergent series as k → ∞ {\displaystyle k\to \infty } . === Real Arguments === These approximations extend to the real-valued domain through the interpolated harmonic number. For example, where x ∈ R {\displaystyle x\in \mathbb {R} } : H x 2 − H x ∼ ln ⁡ x {\displaystyle H_{x^{2}}-H_{x}\sim \ln x} === Stirling numbers === The natural logarithm is asymptotically related to the harmonic numbers by the Stirling numbers and the Gregory coefficients. By representing H n {\displaystyle H_{n}} in terms of Stirling numbers of the first kind, the harmonic number difference is alternatively expressed as follows, for fixed k {\displaystyle k} : s ( n k + 1 , 2 ) ( n k ) ! − s ( n + 1 , 2 ) n ! 
∼ ( k − 1 ) ln ⁡ ( n + 1 ) {\displaystyle {\frac {s(n^{k}+1,2)}{(n^{k})!}}-{\frac {s(n+1,2)}{n!}}\sim (k-1)\ln(n+1)} == See also == List of formulae involving π – Uses of the constant List of integrals of logarithmic functions List of mathematical identities Lists of mathematics topics List of trigonometric identities == References == == External links == A lesson on logarithms can be found on Wikiversity Logarithm in Mathwords
Wikipedia:List of mathematical art software#0
== Gallery == == See also == ASCII art Computer-based mathematics education Computer representation of surfaces For loop Fractal-generating software Julia set Lambert W function Lens space List of interactive geometry software List of mathematical artists Mathethon - computational mathematics competition Parametric surface Procedural modeling suites Ray tracing Tesseract 3Blue1Brown - math YouTube channel == References == http://xahlee.info/math/algorithmic_math_art.html
Wikipedia:List of mathematical functions#0
In mathematics, some functions or groups of functions are important enough to deserve their own names. This is a listing of articles which explain some of these functions in more detail. There is a large theory of special functions which developed out of statistics and mathematical physics. A modern, abstract point of view contrasts large function spaces, which are infinite-dimensional and within which most functions are 'anonymous', with special functions picked out by properties such as symmetry, or relationship to harmonic analysis and group representations. See also List of types of functions == Elementary functions == Elementary functions are functions built from basic operations (e.g. addition, exponentials, logarithms...) === Algebraic functions === Algebraic functions are functions that can be expressed as the solution of a polynomial equation with integer coefficients. Polynomials: Can be generated solely by addition, multiplication, and raising to the power of a positive integer. Constant function: polynomial of degree zero, graph is a horizontal straight line Linear function: First degree polynomial, graph is a straight line. Quadratic function: Second degree polynomial, graph is a parabola. Cubic function: Third degree polynomial. Quartic function: Fourth degree polynomial. Quintic function: Fifth degree polynomial. Rational functions: A ratio of two polynomials. nth root Square root: Yields a number whose square is the given one. Cube root: Yields a number whose cube is the given one. === Elementary transcendental functions === Transcendental functions are functions that are not algebraic. Exponential function: raises a fixed number to a variable power. Hyperbolic functions: formally similar to the trigonometric functions. Inverse hyperbolic functions: inverses of the hyperbolic functions, analogous to the inverse circular functions. Logarithms: the inverses of exponential functions; useful to solve equations involving exponentials. Natural logarithm Common logarithm Binary logarithm Power functions: raise a variable number to a fixed power; also known as Allometric functions; note: if the power is a rational number it is not strictly a transcendental function. Periodic functions Trigonometric functions: sine, cosine, tangent, cotangent, secant, cosecant, exsecant, excosecant, versine, coversine, vercosine, covercosine, haversine, hacoversine, havercosine, hacovercosine, Inverse trigonometric functions etc.; used in geometry and to describe periodic phenomena. See also Gudermannian function. == Special functions == === Piecewise special functions === === Arithmetic functions === Sigma function: Sums of powers of divisors of a given natural number. Euler's totient function: Number of numbers coprime to (and not bigger than) a given one. Prime-counting function: Number of primes less than or equal to a given number. Partition function: Order-independent count of ways to write a given positive integer as a sum of positive integers. Möbius μ function: Sum of the nth primitive roots of unity, it depends on the prime factorization of n. Prime omega functions Chebyshev functions Liouville function, λ(n) = (–1)Ω(n) Von Mangoldt function, Λ(n) = log p if n is a positive power of the prime p Carmichael function === Antiderivatives of elementary functions === Logarithmic integral function: Integral of the reciprocal of the logarithm, important in the prime number theorem. 
Exponential integral Trigonometric integral: Including Sine Integral and Cosine Integral Inverse tangent integral Error function: An integral important for normal random variables. Fresnel integral: related to the error function; used in optics. Dawson function: occurs in probability. Faddeeva function === Gamma and related functions === Gamma function: A generalization of the factorial function. Barnes G-function Beta function: Corresponding binomial coefficient analogue. Digamma function, Polygamma function Incomplete beta function Incomplete gamma function K-function Multivariate gamma function: A generalization of the Gamma function useful in multivariate statistics. Student's t-distribution Pi function Π ( z ) = z Γ ( z ) = ( z ) ! {\displaystyle \Pi (z)=z\Gamma (z)=(z)!} === Elliptic and related functions === === Bessel and related functions === === Riemann zeta and related functions === === Hypergeometric and related functions === Hypergeometric functions: Versatile family of power series. Confluent hypergeometric function Associated Legendre functions Meijer G-function Fox H-function === Iterated exponential and related functions === Hyper operators Iterated logarithm Pentation Super-logarithms Tetration === Other standard special functions === Lambert W function: Inverse of f(w) = w exp(w). Lamé function Mathieu function Mittag-Leffler function Painlevé transcendents Parabolic cylinder function Arithmetic–geometric mean === Miscellaneous functions === Ackermann function: in the theory of computation, a computable function that is not primitive recursive. Dirac delta function: everywhere zero except for x = 0; total integral is 1. Not a function but a distribution; it is sometimes informally treated as a function, particularly by physicists and engineers. Dirichlet function: the indicator function of the rational numbers, taking the value 1 at rationals and 0 at irrationals. It is nowhere continuous. Thomae's function: continuous at all irrational numbers and discontinuous at all rational numbers; it is a modification of the Dirichlet function and is sometimes called the Riemann function. Kronecker delta function: a function of two variables, usually integers, which is 1 if they are equal and 0 otherwise. Minkowski's question mark function: its derivatives vanish on the rationals. Weierstrass function: an example of a continuous function that is nowhere differentiable. == See also == List of types of functions Test functions for optimization List of mathematical abbreviations List of special functions and eponyms == External links == Special functions: A programmable special functions calculator. Special functions at EqWorld: The World of Mathematical Equations.
Wikipedia:List of mathematical identities#0
This article lists mathematical identities, that is, identically true relations holding in mathematics. Bézout's identity (despite its usual name, it is not, properly speaking, an identity) Binet–Cauchy identity Binomial inverse theorem Binomial identity Brahmagupta–Fibonacci two-square identity Candido's identity Cassini and Catalan identities Degen's eight-square identity Difference of two squares Euler's four-square identity Euler's identity Fibonacci's identity: see Brahmagupta–Fibonacci identity or Cassini and Catalan identities Heine's identity Hermite's identity Lagrange's identity Lagrange's trigonometric identities List of logarithmic identities MacWilliams identity Matrix determinant lemma Newton's identity Parseval's identity Pfister's sixteen-square identity Sherman–Morrison formula Sophie Germain identity Sun's curious identity Sylvester's determinant identity Vandermonde's identity Woodbury matrix identity == Identities for classes of functions == Exterior calculus identities Fibonacci identities: Combinatorial Fibonacci identities and Other Fibonacci identities Hypergeometric function identities List of integrals of logarithmic functions List of topics related to π List of trigonometric identities Inverse trigonometric functions Logarithmic identities Summation identities Vector calculus identities == See also == List of inequalities List of set identities and relations – Equalities for combinations of sets == External links == A Collection of Algebraic Identities Matrix Identities
Wikipedia:List of mathematical theories#0
This is a list of mathematical theories.
Wikipedia:List of pre-modern Iranian scientists and scholars#0
The following is a list of Iranian scientists, engineers, and scholars who lived from antiquity up until the beginning of the modern age. == A == Abdul Qadir Gilani (12th century) theologian and philosopher Abu al-Qasim Muqane'i (10th century) physician Abu Dawood (c. 817–889), Islamic scholar Abu Hanifa (699–767), Islamic scholar Abu Said Gorgani (10th century) 'Adud al-Dawla (936–983), scientific patron Ahmad ibn Farrokh (12th century), physician Ahmad ibn 'Imad al-Din (11th century), physician and chemist Alavi Shirazi (1670–1747), royal physician in Mughal India Amuli, Muhammad ibn Mahmud (c. 1300–1352), physician Abū Ja'far al-Khāzin (900–971), mathematician and astronomer Ansari, Khwaja Abdullah (1006–1088), Islamic scholar Aqa-Kermani (18th century), physician Aqsara'i (?–1379), physician Abu Hafsa Yazid, physician Arzani, Muqim (18th century), physician Astarabadi (15th century), physician Aufi, Muhammad (1171–1242), scientist and historian Albubather, physician and astrologer Ibn Abi al-Ashʿath, physician Abu al-Hassan al-Amiri, theologian and philosopher Abu al-Hasan al-Ahwazi, mathematician and astronomer == B == Brethren of Purity Bahmanyār, philosopher Al-Baghawi (c. 1041–1122), Islamic scholar Bahāʾ al-dīn al-ʿĀmilī (1547–1621), poet, philosopher, architect, mathematician, astronomer Baha Al-Dowleh Razi (died c. 915), physician Al-Baladhuri (?–892), historian Abu Ali Bal'ami (10th century), historian Abu Ma'shar al-Balkhi (787–886), known in Latin as Albumasar, astrologer Abu Zayd al-Balkhi (850–934), geographer and mathematician Banū Mūsā brothers (9th century) Abu'l-Fadl Bayhaqi, historian Abu'l-Hasan Bayhaqi, historian and Islamic scholar Al-Bayhaqi, faqih and muhadith Muhammad Baqir Behbahani (1706–1791), theologian Bubares (died after 480 BC), engineer Ibn Bibi (13th century), historian of the Seljuks of Rum Biruni (973–1048), astronomer and mathematician Muhammad al-Bukhari (810–870), Islamic scholar Sahl ibn Bishr (c. 786–845 ?), astrologer, mathematician Bukhtishu (8th century?), Persian Christian physician of Academy of Gundishapur Bukhtishu, Abdollah ibn (c. 940–1058), Christian physician in Persia Jabril ibn Bukhtishu (9th century), Christian physician Bukhtishu, Yuhanna (9th century), Christian physician Borzuya (6th century), a.k.a. Borzouyeh-i Tabib, physician of Academy of Gundishapur Birjandi (?–1528), astronomer and mathematician Muhammad Bal'ami, historian Abu Bakr Rabee Ibn Ahmad Al-Akhawyni Bokhari, physician Abu'l-Fadl al-Bal'ami == D == Abu Hanifa Dinawari (815–896), astronomer, agriculturist, botanist, metallurgist, geographer, mathematician, and historian Ibn Durustawayh (872–958), grammarian, lexicographer and student of the Quran and hadith Ibn Qutaybah Dinwari (828–885), historian and theologian == E == Abubakr Esfarayeni (13th century?), physician == F == Al-Farghani (d. 880), astronomer, known in Latin as Alfraganus Al-Farabi (872–950) (Al-Farabi, Pharabius), philosopher Fazari, Ibrahim (?–777), mathematician and astronomer Fazari, Mohammad (?–796), mathematician and astronomer Feyz Kashani, Mohsen (?–1680), theologian Firishta (1560–1620), historian Ibn al-Faqih, historian and geographer Muhammad ibn Abi Bakr al‐Farisi (d. 
1278/1279), astronomer Fazlallah Khunji Isfahani (1455–1521), religious scholar, historian and political writer == G == Gardizi (?–1061), geographer and historian Ghazali (Algazel, 1058–1111), philosopher Gilani, Hakim (?–1609), royal physician Kushyar Gilani (971–1029), mathematician, geographer, astronomer Zayn al-Din Gorgani (1041–1136), royal physician Rostam Gorgani (16th century), physician Al-Masihi (?–999), Avicenna's master == H == Hakim Ghulam Imam, physician Hakim Muhammad Mehdi Naqi (18th century), physician Hakim Muhammad Sharif Khan (18th century), physician Hakim Nishaburi (933–1012), Islamic scholar Hallaj (858–922), mystic-philosopher Hamadani, Mir Sayyid Ali (1314–1384), poet and philosopher Harawi, Abolfadl (10th century), astronomer of the Buyid dynasty Harawi, Muwaffak (10th century), pharmacologist Harawi, Muhammad ibn Yusuf (d. 1542), physician Hasani, Qavameddin (17th century), physician Ibn Hindu (1019–1032), man of letters, physician Haji Bektash Veli, mystic Ayn al-Quzat Hamadani, jurisconsult, mystic, philosopher, poet and mathematician Haseb Tabari, astronomer Hammam ibn Munabbih, Islamic scholar Hamza al-Isfahani (ca. 893–after 961), philologist and historian Abu Ja'far ibn Habash == I == Ibn Abi Sadiq (11th century), "The Second Hippocrates", Avicenna's disciple Ibn Isfandiyar (13th century), historian Ibn Khordadbeh (c. 820–912), geographer Ibn Rustah (9th century), explorer and geographer Ilaqi, Yusef (11th century), Avicenna's pupil Mansur ibn Ilyas (14th century), physician Ibn Sina (Avicenna, 980–1037), philosopher and physician Isfahani, Imad al-Din (1125–1201), historian and rhetorician Isfahani, Jalaleddin (19th century), physician Isfahani, Husayn (15th century), physician Istakhri (?–957), geographer, gives the earliest known account of windmills Iranshahri (9th century), philosopher, teacher of Abu Bakr al-Razi Al-Isfizari (11th–12th century), mathematician and astronomer == J == Jabir ibn Hayyan (9th century), alchemist, pharmacist, philosopher, physicist, astronomer Jaghmini (14th century), physician Juwayni (1028–1085), philosopher, theologian Juzjani, Abu Ubaid (?–1070), physician Jamal ad-Din Bukhari (13th century), astronomer Jamasp, sage and philosopher Al-Abbās ibn Said al-Jawharī (800–860), geometer == K == Karaji (953–1029), mathematician Jamshid-i Kashani (c. 1380–1429), astronomer and mathematician Kashfi, Jafar (1775/6–1850/1), theologian Sadid al-Din al-Kazaruni (14th century), physician Kermani, Iwad (15th century), physician Kermani, Shams-ud-Din, Islamic scholar Al-Khazini (c. 1130), physicist Khayyam, Omar (1048–1131), poet, mathematician, and astronomer Khorasani, Sultan Ali (16th century), physician Al-Kharaqī, astronomer and mathematician Khujandi (c. 940–c. 1000), mathematician and astronomer Muhammad ibn Musa al-Khwarizmi (a.k.a. Al-Khwarazmi, c. 780–c. 850), eponym of the algorithm and a founder of algebra, mathematician and astronomer Najm al-Dīn al-Qazwīnī al-Kātibī, logician and philosopher Shams al-Din al-Khafri, astrologer Abū Sahl al-Qūhī, mathematician and astronomer Kubra, Najmeddin (1145–1220) Abu Ishaq al-Kubunani (d. after 1481), mathematician, astronomer Abu Zayn Kahhal, physician == M == Al-Mada'ini (752/753–843), historian Mahani (9th century), mathematician and astronomer Majusi, Ibn Abbas (?–c. 
890), physician Marvazi, Abu Taher (12th century), philosopher Habash al-Hasib al-Marwazi, mathematician, astronomer, geographer Masawaiyh (777–857), or Masuya Mashallah ibn Athari (740–815), of Jewish origins, from Khorasan who designed the city of Baghdad based on Firouzabad Miskawayh (932–1030), philosopher Sharaf al-Zaman al-Marwazi, physician Hamdallah Mustawfi (1281–1349), geographer Mulla Sadra (1572–1640), philosopher Ibn al-Muqaffa' (?–756), founder of Arabic prose along with Abdol-Hamid bin Musa, Hasan (9th century), astronomer, mathematician bin Musa, Ahmad (9th century), astronomer, inventor bin Musa, Muhammad (9th century), astronomer, mathematician Muhammad ibn Muhammad Tabrizi (13th century), philosopher Abu Mansur al-Maturidi, Islamic scholar Muqatil ibn Sulayman, mufassir of Quran Ibn Manda, Hadith scholar Abu Ahmad Monajjem (241/855-56–in 13 Rabi' I 300/29 October 912), music theorist, literary historian Masarjawaih (7th century), physician Muhammad Abdolrahman, physician == N == Nagawri (14th century), physician Nahavandi, Benjamin, Jewish scholar Nahavandi, Ahmad (9th century), astronomer Nakhshabi (14th century), physician Narshakhi (899–959), historian Nasir Khusraw (1004–1088), scientist, Ismaili scholar, mathematician, philosopher, traveler and poet Natili Tabari (10th century), physician Naubakht (9th century), designer of the city of Baghdad Naubakht, Fadhl ibn (8th century), astronomer Nawbakhty (4th Hijri century), Islamic scholar, philosopher Nizam al-Din Nishapuri, mathematician, astronomer, jurist, exegete, and poet Nawbakhti, Ruh (10th century), Islamic scholar Nayrizi (865–922), mathematician and astronomer Naqshband, Baha ud-Din (1318–1389), philosopher Abu al-Qasim al-Habib Neishapuri (18th century), physician Muslim ibn al-Hajjaj (c. 815–875), Islamic scholar Nurbakhshi (16th century), physician Abu Hafs Umar an-Nasafi, theologian, mufassir, muhaddith and historian Al-Nasa'i, hadith collector Shihab al-Din Muhammad al-Nasawi, historian and biographer Abu Nu`aym, Islamic scholar == O == Ostanes, ancient Persian alchemist == P == Paul the Persian (6th century), philosopher == Q == Qazwini, Zakariya (1203–1283), physician Qumi, Qazi Sa’id (1633–1692), theologian Qumri (10th century), physician Ali Qushji (1403–16 December 1474), mathematician, astronomer and physician Ali al-Qari, Islamic scholar Ali Ibn Ibrahim Qomi, jurist and Shia scholar Al-Quda'i (d. 1062), judge, preacher and historian in Fatimid Egypt == R == Fakhr al-Din al-Razi (1150-1210), islamic theologian, physician, astronomer Razi, Amin (16th century), geographer Razi, Zakariya (Rhazes) (c. 865–925), chemist, physician, and philosopher Razi, Najmeddin (1177–1256), mystic Rumi, Jalal ad-Din Muhammad (1207–1273), Muslim poet, jurist, Islamic scholar, theologian, and Sufi mystic Rashid-al-Din Hamadani (1247–1318), historian, physician and politician Abu Hatim Ahmad ibn Hamdan al-Razi, Ismaili philosopher Rudaki (858–941), Persian poet == S == Sabzevari, Mulla Hadi (1797–1873), poet and philosopher Saghani Ostorlabi (?–990), astronomer Sahl, Fadl ibn (?–818), astronomer Sahl, Shapur ibn (?–869), physician Samarqandi, Najibeddin (13th century), physician Samarqandi, Ashraf (c. 1250–c. 
1310), mathematician, astronomer Samarqandi, Dawlatshah (1438–1495/1507) biographer Sarakhsi, Ahmad ibn al-Tayyib (9th century) historian and philosopher Sarakhsi, Muhammad ibn Ahmad (?–1096), Islamic scholar Ahmad ibn al-Tayyib al-Sarakhsi, historian, traveller Shahrastani (1086–1153), historian of religions Shahrazuri (13th century), philosopher and physician Shahrazuri, Ibn al-Salah (1181–1245), Islamic scholar Shaykh Tusi (996–1067), Islamic scholar Ibn Babawayh (923–991), theologian Ibn Sahl, mathematician, physicist Abu ul-Ala Shirazi (d. 1001 CE), physician Shaykh Muhammad ibn Thaleb, physician Shirazi, Imad al-Din Mas'ud (16th century), physician Shirazi, Muhammad Hadi Khorasani (18th century), physician Shirazi, Qutbeddin (1236–1311), astronomer Shirazi, Mahmud ibn Ilyas (18th century), physician Shirazi, Najm al-Din Mahmud ibn Ilyas (?–1330), physician Shirazi, Qurayshi (17th century), physician Shirazi, Sultan Waezin (1894–1971), theologian Sibawayh, linguist and grammarian Sijzi (c. 945–c. 1020), mathematician and astronomer Sijzi, Mas'ud (14th century), physician Abd al-Rahman al-Sufi (903–986), astronomer from Ray who invented the meridian ring Mūsā ibn Shākir, astronomer Suhrawardi, Shahab al-Din (1155–1191), philosopher Abu Sulayman Sijistani, philosopher ‘Abd ar-Razzaq as-San‘ani, Islamic scholar Zayn al-Din Omar Savaji, philosopher and logician Zeynalabdin Shirvani, geographer, philosopher and poet Abu Yaqub al-Sijistani, Ismaili philosopher Abu'l-'Anbas Saymari, astrologer == T == Tabarani, Abu al-Qasim (873–970), Islamic scholar Tabari Amoli (839–923), historian Tabari, ibn Farrukhan (?–815), astrologer and architect Tabari, Abul Hasan (10th century), physician Tabari, Ibn Sahl (c. 783–c. 858), Jewish convert physician, master of Rhazes Tabrizi, Maqsud Ali (17th century), physician Taftazani (1322–1390), theologian, linguist Tayfur, Ibn Abi Tahir (819–893), linguist Tirmidhi (824–892), Islamic scholar Tunakabuni (17th century), physician Tughra'i (c. 1061–1122), physician Tusi, Nizam ol-Molk (1018–1092), Persian scholar and vizier of the Seljuq Empire Tusi, Nasireddin (1201–1274), Persian polymath, architect, philosopher, physician, scientist, and theologian Tusi, Sharafeddin (?–1213/4), mathematician Ahmad ibn Muhammad al-Tha'labi, Islamic scholar 'Abd al-Hamīd ibn Turk, Persian or Turkish mathematician == U == Safi al-Din al-Urmawi (c. 1216–1294), musician Abu al‐Uqul al‐Tabari (14th century), Yemenite astronomer of Iranian origin == V == Amin al-Din Rashid al-Din Vatvat (13th century), scholar and physician == W == Waqidi (748–822), historian Wassaf, historian Al-Wabkanawi, astronomer == Y == Yaʿqūb ibn Ṭāriq (?–796), mathematician and astronomer Yunus ibn Habib, linguist Yahya ibn Ma'in, Islamic scholar Yunus al-Katib al-Mughanni, musician Yahya ibn Abi Mansur (d. 830 CE), astronomer == Z == Dawud al-Zahiri (815–834) theologian and historian Zamakhshari (1074/5–1143/4), scholar and geographer Muhammad Zarrindast (11th century), oculist Zayn-e-Attar (?–c. 1403), physician Zarir Jurjani (9th century), mathematician and astronomer Zakariya al-Qazwini (1203–1283) physician, astronomer, geographer, and proto-science fiction writer == See also == List of contemporary Iranian scientists, scholars, and engineers List of Iranian mathematicians Nezamiyeh Academy of Gondishapur International rankings of Iran in science and technology List of Christian scientists and scholars of the medieval Islamic world List of pre-modern Arab scientists and scholars == Notes ==
Wikipedia:List of problems in loop theory and quasigroup theory#0
In mathematics, especially abstract algebra, loop theory and quasigroup theory are active research areas with many open problems. As in other areas of mathematics, such problems are often made public at professional conferences and meetings. Many of the problems posed here first appeared in the Loops (Prague) conferences and the Mile High (Denver) conferences. == Open problems (Moufang loops) == === Abelian by cyclic groups resulting in Moufang loops === Let L be a Moufang loop with normal abelian subgroup (associative subloop) M of odd order such that L/M is a cyclic group of order bigger than 3. (i) Is L a group? (ii) If the orders of M and L/M are relatively prime, is L a group? Proposed: by Michael Kinyon, based on (Chein and Rajah, 2000) Comments: The assumption that L/M has order bigger than 3 is important, as there is a (commutative) Moufang loop L of order 81 with normal commutative subgroup of order 27. === Embedding CMLs of period 3 into alternative algebras === Conjecture: Any finite commutative Moufang loop of period 3 can be embedded into a commutative alternative algebra. Proposed: by Alexander Grishkov at Loops '03, Prague 2003 === Frattini subloop for Moufang loops === Conjecture: Let L be a finite Moufang loop and Φ(L) the intersection of all maximal subloops of L. Then Φ(L) is a normal nilpotent subloop of L. Proposed: by Alexander Grishkov at Loops '11, Třešť 2011 === Minimal presentations for loops M(G,2) === For a group G {\displaystyle G} , define M ( G , 2 ) {\displaystyle M(G,2)} on G {\displaystyle G} x C 2 {\displaystyle C_{2}} by ( g , 0 ) ( h , 0 ) = ( g h , 0 ) {\displaystyle (g,0)(h,0)=(gh,0)} , ( g , 0 ) ( h , 1 ) = ( h g , 1 ) {\displaystyle (g,0)(h,1)=(hg,1)} , ( g , 1 ) ( h , 0 ) = ( g h − 1 , 1 ) {\displaystyle (g,1)(h,0)=(gh^{-1},1)} , ( g , 1 ) ( h , 1 ) = ( h − 1 g , 0 ) {\displaystyle (g,1)(h,1)=(h^{-1}g,0)} . Find a minimal presentation for the Moufang loop M ( G , 2 ) {\displaystyle M(G,2)} with respect to a presentation for G {\displaystyle G} . Proposed: by Petr Vojtěchovský at Loops '03, Prague 2003 Comments: Chein showed in (Chein, 1974) that M ( G , 2 ) {\displaystyle M(G,2)} is a Moufang loop that is nonassociative if and only if G {\displaystyle G} is nonabelian. Vojtěchovský (Vojtěchovský, 2003) found a minimal presentation for M ( G , 2 ) {\displaystyle M(G,2)} when G {\displaystyle G} is a 2-generated group. === Moufang loops of order p2q3 and pq4 === Let p and q be distinct odd primes. If q is not congruent to 1 modulo p, are all Moufang loops of order p2q3 groups? What about pq4? Proposed: by Andrew Rajah at Loops '99, Prague 1999 Comments: The former has been solved by Rajah and Chee (2011) where they showed that for distinct odd primes p1 < ··· < pm < q < r1 < ··· < rn, all Moufang loops of order p12···pm2q3r12···rn2 are groups if and only if q is not congruent to 1 modulo pi for each i. === (Phillips' problem) Odd order Moufang loop with trivial nucleus === Is there a Moufang loop of odd order with trivial nucleus? Proposed: by Andrew Rajah at Loops '03, Prague 2003 === Presentations for finite simple Moufang loops === Find presentations for all nonassociative finite simple Moufang loops in the variety of Moufang loops. Proposed: by Petr Vojtěchovský at Loops '03, Prague 2003 Comments: It is shown in (Vojtěchovský, 2003) that every nonassociative finite simple Moufang loop is generated by 3 elements, with explicit formulas for the generators. 
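The construction M(G,2) defined earlier in this section is small enough to experiment with by machine. A minimal sketch in Python, taking G = S3 encoded as permutation tuples (all helper names here are ours); it verifies one of the Moufang identities and exhibits nonassociativity, consistent with Chein's theorem that M(G,2) is nonassociative exactly when G is nonabelian:

    from itertools import permutations

    def compose(p, q):
        # permutation product: (p*q)(i) = p(q(i))
        return tuple(p[q[i]] for i in range(3))

    def inverse(p):
        inv = [0] * 3
        for i, v in enumerate(p):
            inv[v] = i
        return tuple(inv)

    G = list(permutations(range(3)))         # S3, a nonabelian group
    L = [(g, e) for g in G for e in (0, 1)]  # underlying set G x C2

    def mul(x, y):
        # the four defining cases of M(G,2) from above
        (g, a), (h, b) = x, y
        if a == 0 and b == 0:
            return (compose(g, h), 0)
        if a == 0 and b == 1:
            return (compose(h, g), 1)
        if a == 1 and b == 0:
            return (compose(g, inverse(h)), 1)
        return (compose(inverse(h), g), 0)

    # left Moufang identity z(x(zy)) = ((zx)z)y holds, associativity fails
    moufang = all(mul(z, mul(x, mul(z, y))) == mul(mul(mul(z, x), z), y)
                  for x in L for y in L for z in L)
    assoc = all(mul(mul(x, y), z) == mul(x, mul(y, z))
                for x in L for y in L for z in L)
    print(moufang, assoc)  # expected output: True False

The resulting loop M(S3,2) has order 12 and is the smallest nonassociative Moufang loop.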
=== The restricted Burnside problem for Moufang loops === Conjecture: Let M be a finite Moufang loop of exponent n with m generators. Then there exists a function f(n,m) such that |M| < f(n,m). Proposed: by Alexander Grishkov at Loops '11, Třešť 2011 Comments: In the case when n is a prime different from 3 the conjecture was proved by Grishkov. If p = 3 and M is commutative, it was proved by Bruck. The general case for p = 3 was proved by G. Nagy. The case n = pm holds by the Grishkov–Zelmanov Theorem. === The Sanov and M. Hall theorems for Moufang loops === Conjecture: Let L be a finitely generated Moufang loop of exponent 4 or 6. Then L is finite. Proposed: by Alexander Grishkov at Loops '11, Třešť 2011 === Torsion in free Moufang loops === Let MFn be the free Moufang loop with n generators. Conjecture: MF3 is torsion free but MFn with n > 4 is not. Proposed: by Alexander Grishkov at Loops '03, Prague 2003 == Open problems (Bol loops) == === Nilpotency degree of the left multiplication group of a left Bol loop === For a left Bol loop Q, find some relation between the nilpotency degree of the left multiplication group of Q and the structure of Q. Proposed: at Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 === Are two Bol loops with similar multiplication tables isomorphic? === Let ( Q , ∗ ) {\displaystyle (Q,*)} , ( Q , + ) {\displaystyle (Q,+)} be two quasigroups defined on the same underlying set Q {\displaystyle Q} . The distance d ( ∗ , + ) {\displaystyle d(*,+)} is the number of pairs ( a , b ) {\displaystyle (a,b)} in Q × Q {\displaystyle Q\times Q} such that a ∗ b ≠ a + b {\displaystyle a*b\neq a+b} . Call a class of finite quasigroups quadratic if there is a positive real number α {\displaystyle \alpha } such that any two quasigroups ( Q , ∗ ) {\displaystyle (Q,*)} , ( Q , + ) {\displaystyle (Q,+)} of order n {\displaystyle n} from the class satisfying d ( ∗ , + ) < α n 2 {\displaystyle d(*,+)<\alpha \,n^{2}} are isomorphic. Are Moufang loops quadratic? Are Bol loops quadratic? Proposed: by Aleš Drápal at Loops '99, Prague 1999 Comments: Drápal proved in (Drápal, 1992) that groups are quadratic with α = 1 / 9 {\displaystyle \alpha =1/9} , and in (Drápal, 2000) that 2-groups are quadratic with α = 1 / 4 {\displaystyle \alpha =1/4} . === Campbell–Hausdorff series for analytic Bol loops === Determine the Campbell–Hausdorff series for analytic Bol loops. Proposed: by M. A. Akivis and V. V. Goldberg at Loops '99, Prague 1999 Comments: The problem has been partially solved for local analytic Bruck loops in (Nagy, 2002). === Universally flexible loop that is not middle Bol === A loop is universally flexible if every one of its loop isotopes is flexible, that is, satisfies (xy)x = x(yx). A loop is middle Bol if every one of its loop isotopes has the antiautomorphic inverse property, that is, satisfies (xy)−1 = y−1x−1. Is there a finite, universally flexible loop that is not middle Bol? Proposed: by Michael Kinyon at Loops '03, Prague 2003 === Finite simple Bol loop with nontrivial conjugacy classes === Is there a finite simple nonassociative Bol loop with nontrivial conjugacy classes? Proposed: by Kenneth W. Johnson and Jonathan D. H. Smith at the 2nd Mile High Conference on Nonassociative Mathematics, Denver 2009 == Open problems (Nilpotency and solvability) == === Niemenmaa's conjecture and related problems === Let Q be a loop whose inner mapping group is nilpotent. Is Q nilpotent? Is Q solvable? 
=== Loops with abelian inner mapping group === Let Q be a loop with abelian inner mapping group. Is Q nilpotent? If so, is there a bound on the nilpotency class of Q? In particular, can the nilpotency class of Q be higher than 3? Proposed: at Loops '07, Prague 2007 Comments: When the inner mapping group Inn(Q) is finite and abelian, then Q is nilpotent (Niemenmaa and Kepka). The first question is therefore open only in the infinite case. Call a loop Q of Csörgő type if it is nilpotent of class at least 3, and Inn(Q) is abelian. No loop of Csörgő type of nilpotency class higher than 3 is known. Loops of Csörgő type exist (Csörgő, 2004), Buchsteiner loops of Csörgő type exist (Csörgő, Drápal and Kinyon, 2007), and Moufang loops of Csörgő type exist (Nagy and Vojtěchovský, 2007). On the other hand, there are no groups of Csörgő type (folklore), there are no commutative Moufang loops of Csörgő type (Bruck), and there are no Moufang p-loops of Csörgő type for p > 3 (Nagy and Vojtěchovský, 2007). === Number of nilpotent loops up to isomorphism === Determine the number of nilpotent loops of order 24 up to isomorphism. Proposed: by Petr Vojtěchovský at the 2nd Mile High Conference on Nonassociative Mathematics, Denver 2009 Comment: The counts are known for n < 24, see (Daly and Vojtěchovský, 2010). === A finite nilpotent loop without a finite basis for its laws === Construct a finite nilpotent loop with no finite basis for its laws. Proposed: by M. R. Vaughan-Lee in the Kourovka Notebook of Unsolved Problems in Group Theory Comment: There is a finite loop with no finite basis for its laws (Vaughan-Lee, 1979) but it is not nilpotent. == Open problems (quasigroups) == === Existence of infinite simple paramedial quasigroups === Are there infinite simple paramedial quasigroups? Proposed: by Jaroslav Ježek and Tomáš Kepka at Loops '03, Prague 2003 === Minimal isotopically universal varieties of quasigroups === A variety V of quasigroups is isotopically universal if every quasigroup is isotopic to a member of V. Is the variety of loops a minimal isotopically universal variety? Does every isotopically universal variety contain the variety of loops or its parastrophes? Proposed: by Tomáš Kepka and Petr Němec at Loops '03, Prague 2003 Comments: Every quasigroup is isotopic to a loop, hence the variety of loops is isotopically universal. === Small quasigroups with quasigroup core === Does there exist a quasigroup Q of order q = 14, 18, 26 or 42 such that the operation * defined on Q by x * y = y − xy is a quasigroup operation? Proposed: by Parascovia Syrbu at Loops '03, Prague 2003 Comments: See (Conselo et al., 1998). === Uniform construction of Latin squares? === Construct a Latin square L of order n as follows: Let G = Kₙ,ₙ be the complete bipartite graph with distinct weights on its n² edges. Let M₁ be the cheapest matching in G, M₂ the cheapest matching in G with M₁ removed, and so on. Each matching Mᵢ determines a permutation pᵢ of 1, ..., n. Let L be obtained by placing the permutation pᵢ into row i of L. Does this procedure result in a uniform distribution on the space of Latin squares of order n? Proposed: by Gábor Nagy at the 2nd Mile High Conference on Nonassociative Mathematics, Denver 2009
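Nagy's matching procedure is straightforward to prototype. A minimal sketch, assuming random edge weights and using SciPy's linear_sum_assignment as the "cheapest matching" step (a large penalty plays the role of deleting the matched edges; the function name is an illustrative choice):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_square(n, seed=0):
    """Row i is the permutation given by the i-th cheapest matching of K_{n,n}."""
    w = np.random.default_rng(seed).random((n, n))  # weights on the n^2 edges
    L = np.zeros((n, n), dtype=int)
    for i in range(n):
        rows, cols = linear_sum_assignment(w)   # cheapest perfect matching M_i
        L[i] = cols                             # the permutation p_i as row i
        w[rows, cols] = n * n                   # penalize M_i's edges: any valid
                                                # matching costs < n < n*n
    return L

print(matching_square(4))  # a Latin square: each symbol once per row and column
```

Whether the resulting distribution is uniform is exactly the open question; the sketch only produces samples to test against.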
== Open problems (miscellaneous) == === Bound on the size of multiplication groups === For a loop Q, let Mlt(Q) denote the multiplication group of Q, that is, the group generated by all left and right translations. Is |Mlt(Q)| < f(|Q|) for some variety of loops and for some polynomial f? Proposed: at the Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 === Does every finite alternative loop have 2-sided inverses? === Does every finite alternative loop, that is, every loop satisfying x(xy) = (xx)y and x(yy) = (xy)y, have 2-sided inverses? Proposed: by Warren D. Smith Comments: There are infinite alternative loops without 2-sided inverses, cf. (Ormes and Vojtěchovský, 2007) === Finite simple nonassociative automorphic loop === Find a nonassociative finite simple automorphic loop, if such a loop exists. Proposed: by Michael Kinyon at Loops '03, Prague 2003 Comments: It is known that such a loop cannot be commutative (Grishkov, Kinyon and Nagy, 2013) nor have odd order (Kinyon, Kunen, Phillips and Vojtěchovský, 2013). === Moufang theorem in non-Moufang loops === We say that a variety V of loops satisfies the Moufang theorem if for every loop Q in V the following implication holds: for every x, y, z in Q, if x(yz) = (xy)z then the subloop generated by x, y, z is a group. Is every variety that satisfies the Moufang theorem contained in the variety of Moufang loops? Proposed: by Andrew Rajah at Loops '11, Třešť 2011 === Universality of Osborn loops === A loop is Osborn if it satisfies the identity x((yz)x) = (x^λ\y)(zx), where x^λ denotes the left inverse of x and \ denotes left division. Is every Osborn loop universal, that is, is every isotope of an Osborn loop Osborn? If not, is there a nice identity characterizing universal Osborn loops? Proposed: by Michael Kinyon at the Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 Comments: Moufang and conjugacy closed loops are Osborn. See (Kinyon, 2005) for more. == Solved problems == The following problems were posed as open at various conferences and have since been solved. === Buchsteiner loop that is not conjugacy closed === Is there a Buchsteiner loop that is not conjugacy closed? Is there a finite simple Buchsteiner loop that is not conjugacy closed? Proposed: at the Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 Solved by: Piroska Csörgő, Aleš Drápal, and Michael Kinyon Solution: The quotient of a Buchsteiner loop by its nucleus is an abelian group of exponent 4. In particular, no nonassociative Buchsteiner loop can be simple. There exists a Buchsteiner loop of order 128 which is not conjugacy closed. === Classification of Moufang loops of order 64 === Classify nonassociative Moufang loops of order 64. Proposed: at the Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 Solved by: Gábor P. Nagy and Petr Vojtěchovský Solution: There are 4262 nonassociative Moufang loops of order 64. They were found by the method of group modifications in (Vojtěchovský, 2006), and it was shown in (Nagy and Vojtěchovský, 2007) that the list is complete. The latter paper uses a linear-algebraic approach to Moufang loop extensions. === Conjugacy closed loop with nonisomorphic one-sided multiplication groups === Construct a conjugacy closed loop whose left multiplication group is not isomorphic to its right multiplication group. Proposed: by Aleš Drápal at Loops '03, Prague 2003 Solved by: Aleš Drápal Solution: There is such a loop of order 9. It can be obtained in the LOOPS package by the command CCLoop(9,1).
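For small tables, the multiplication groups appearing in this section, Mlt(Q) in the first problem and the one-sided multiplication groups in the problem just above, can be computed by naive closure. A minimal sketch (the function names and the closure strategy are illustrative assumptions):

```python
def closure(gens):
    """Close a set of permutations (as tuples) under composition."""
    group, frontier = set(gens), list(gens)
    while frontier:
        p = frontier.pop()
        for q in list(group):
            for r in (tuple(p[i] for i in q), tuple(q[i] for i in p)):
                if r not in group:
                    group.add(r)
                    frontier.append(r)
    return group

def mult_groups(t):
    """Left and right multiplication groups of a Cayley table t."""
    n = len(t)
    left  = [tuple(t[x][y] for y in range(n)) for x in range(n)]  # L_x: y -> xy
    right = [tuple(t[x][y] for x in range(n)) for y in range(n)]  # R_y: x -> xy
    return closure(left), closure(right)

z4 = [[(i + j) % 4 for j in range(4)] for i in range(4)]
l, r = mult_groups(z4)
print(l == r, len(l | r))  # True 4: for an abelian group the two sides agree
```

Running the same computation on the Cayley table of CCLoop(9,1), exported from the LOOPS package, would exhibit Drápal's nonisomorphic one-sided multiplication groups.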
=== Existence of a finite simple Bol loop === Is there a finite simple Bol loop that is not Moufang? Proposed: at Loops '99, Prague 1999 Solved by: Gábor P. Nagy, 2007. Solution: A simple Bol loop that is not Moufang will be called proper. There are several families of proper simple Bol loops. The smallest order of a proper simple Bol loop is 24 (Nagy 2008). There is also a proper simple Bol loop of exponent 2 (Nagy 2009), and a proper simple Bol loop of odd order (Nagy 2008). Comments: The above constructions solved two additional open problems: Is there a finite simple Bruck loop that is not Moufang? Yes, since any proper simple Bol loop of exponent 2 is Bruck. Is every Bol loop of odd order solvable? No, as witnessed by any proper simple Bol loop of odd order. === Left Bol loop with trivial right nucleus === Is there a finite non-Moufang left Bol loop with trivial right nucleus? Proposed: at the Milehigh conference on quasigroups, loops, and nonassociative systems, Denver 2005 Solved by: Gábor P. Nagy, 2007 Solution: There is a finite simple left Bol loop of exponent 2 of order 96 with trivial right nucleus. Also, using an exact factorization of the Mathieu group M₂₄, it is possible to construct a non-Moufang simple Bol loop which is a G-loop. === Lagrange property for Moufang loops === Does every finite Moufang loop have the strong Lagrange property? Proposed: by Orin Chein at Loops '99, Prague 1999 Solved by: Alexander Grishkov and Andrei Zavarnitsine, 2003 Solution: Every finite Moufang loop has the strong Lagrange property (SLP). Here is an outline of the proof: According to (Chein et al. 2003), it suffices to show SLP for nonassociative finite simple Moufang loops (NFSML). It thus suffices to show that the order of a maximal subloop of an NFSML L divides the order of L. A countable class of NFSMLs M ( q ) {\displaystyle M(q)} was discovered in (Paige 1956), and no other NFSMLs exist by (Liebeck 1987). Grishkov and Zavarnitsine matched maximal subloops of loops M ( q ) {\displaystyle M(q)} with certain subgroups of groups with triality in (Grishkov and Zavarnitsine, 2003). === Moufang loops with non-normal commutant === Is there a Moufang loop whose commutant is not normal? Proposed: by Andrew Rajah at Loops '03, Prague 2003 Solved by: Alexander Grishkov and Andrei Zavarnitsine, 2017 Solution: Yes, there is a Moufang loop of order 3⁸ with non-normal commutant. Gagola had previously claimed the opposite, but later found a hole in his proof. === Quasivariety of cores of Bol loops === Is the class of cores of Bol loops a quasivariety? Proposed: by Jonathan D. H. Smith and Alena Vanžurová at Loops '03, Prague 2003 Solved by: Alena Vanžurová, 2004. Solution: No, the class of cores of Bol loops is not closed under subalgebras. Furthermore, the class of cores of groups is not closed under subalgebras. Here is an outline of the proof: Cores of abelian groups are medial, by (Romanowska and Smith, 1985), (Rozskowska-Lech, 1999). The smallest nonabelian group S 3 {\displaystyle S_{3}} has a core containing a submagma G {\displaystyle G} of order 4 that is not medial. If G {\displaystyle G} is a core of a Bol loop, it is a core of a Bol loop of order 4, hence a core of an abelian group, a contradiction.
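The mediality facts used in the outline above are easy to check mechanically. A minimal sketch (the realizations of S3 and Z4 and the helper names are choices made for the example):

```python
from itertools import permutations, product

def is_medial(op, S):
    """Test (x*y)*(u*v) == (x*u)*(y*v) for all x, y, u, v in S."""
    return all(op(op(x, y), op(u, v)) == op(op(x, u), op(y, v))
               for x, y, u, v in product(S, repeat=4))

# Core of a group: x * y = x y^{-1} x. First the nonabelian S3:
G = list(permutations(range(3)))
mul = lambda g, h: tuple(g[h[i]] for i in range(3))
inv = lambda g: tuple(sorted(range(3), key=lambda i: g[i]))
print(is_medial(lambda x, y: mul(mul(x, inv(y)), x), G))  # False
# For the abelian group Z4 the core is x * y = 2x - y (mod 4):
print(is_medial(lambda x, y: (2 * x - y) % 4, range(4)))  # True
```

The False/True pair matches the dichotomy in the outline: cores of abelian groups are medial, while the core of S3 is not (it even contains a non-medial submagma of order 4).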
=== Parity of the number of quasigroups up to isomorphism === Let I(n) be the number of isomorphism classes of quasigroups of order n. Is I(n) odd for every n? Proposed: by Douglas S. Stones at the 2nd Mile High Conference on Nonassociative Mathematics, Denver 2009 Solved by: Douglas S. Stones, 2010. Solution: I(12) is even. In fact, I(n) is odd for all n ≤ 17 except 12. (Stones 2010) === Classification of finite simple paramedial quasigroups === Classify the finite simple paramedial quasigroups. Proposed: by Jaroslav Ježek and Tomáš Kepka at Loops '03, Prague 2003. Solved by: Victor Shcherbacov and Dumitru Pushkashu (2010). Solution: Any finite simple paramedial quasigroup is isotopic to an elementary abelian p-group. Such a quasigroup is either a medial unipotent quasigroup, a medial commutative distributive quasigroup, or a special kind of isotope of a (φ+ψ)-simple medial distributive quasigroup. == See also == Problems in Latin squares == References == Chein, Orin (1974), "Moufang Loops of Small Order I", Transactions of the American Mathematical Society, 188: 31–51, doi:10.2307/1996765, JSTOR 1996765. Chein, Orin; Kinyon, Michael K.; Rajah, Andrew; Vojtěchovský, Petr (2003), "Loops and the Lagrange property", Results in Mathematics, 43 (1–2): 74–78, arXiv:math/0205141, doi:10.1007/bf03322722, S2CID 16718438. Chein, Orin; Rajah, Andrew (2000), "Possible orders of nonassociative Moufang loops", Commentationes Mathematicae Universitatis Carolinae, 41 (2): 237–244. Conselo, E.; Conzales, S.; Markov, V.; Nechaev, A. (1998), "Recursive MDS-codes and recursively differentiable quasigroups", Diskretnaia Matematika, 10 (2): 3–29. Daly, Dan; Vojtěchovský, Petr (2009), "Enumeration of nilpotent loops via cohomology", Journal of Algebra, 322 (11): 4080–4098, arXiv:1509.05713, doi:10.1016/j.jalgebra.2009.03.042, S2CID 51794859. Drápal, Aleš (1992), "How far apart can the group multiplication tables be?", European Journal of Combinatorics, 13 (5): 335–343, doi:10.1016/S0195-6698(05)80012-5. Drápal, Aleš (2000), "Non-isomorphic 2-groups coincide in at most three quarters of their multiplication tables", European Journal of Combinatorics, 21 (3): 301–321, doi:10.1006/eujc.1999.0347 Gagola III, Stephen (2012), "A Moufang loop's commutant", Mathematical Proceedings of the Cambridge Philosophical Society, 152 (2): 193–206, Bibcode:2012MPCPS.152..193G, doi:10.1017/S0305004111000181, S2CID 121585760 Grishkov, Alexander N.; Zavarnitsine, Andrei V. (2005), "Lagrange's theorem for Moufang loops", Mathematical Proceedings of the Cambridge Philosophical Society, 139 (1): 41–57, Bibcode:2005MPCPS.139...41G, doi:10.1017/S0305004105008388, S2CID 123255962 Grishkov, Alexander N.; Kinyon, Michael; Nagy, Gábor (2013), "Solvability of commutative automorphic loops", Proceedings of the American Mathematical Society, 142 (9): 3029–3037, arXiv:1111.7138, doi:10.1090/s0002-9939-2014-12053-3, S2CID 119125596 Kinyon, Michael K., A survey of Osborn loops (PDF), invited talk at the Milehigh conference on quasigroups, loops and nonassociative systems, Denver, 2005 Kinyon, Michael; Kunen, Kenneth; Phillips, J.D.; Vojtěchovský, Petr (2016), "The structure of automorphic loops", Transactions of the American Mathematical Society, 368 (12): 8901–8927, arXiv:1210.1642, doi:10.1090/tran/6622, S2CID 38960620 Liebeck, M. W.
(1987), "The classification of finite simple Moufang loops", Mathematical Proceedings of the Cambridge Philosophical Society, 102 (1): 33–47, Bibcode:1987MPCPS.102...33L, doi:10.1017/S0305004100067025 (inactive 29 December 2024), S2CID 122103993{{citation}}: CS1 maint: DOI inactive as of December 2024 (link) Nagy, Gábor P. (2002), "The Campbell–Hausdorff series of local analytic Bruck loops", Abh. Math. Sem. Univ. Hamburg, 72 (1): 79–87, doi:10.1007/BF02941666, S2CID 123589830. Nagy, Gábor P.; Vojtěchovský, Petr (2007), "Moufang loops of order 64 and 81", Journal of Symbolic Computation, 42 (9), to appear: 871–883, doi:10.1016/j.jsc.2007.06.004, hdl:10338.dmlcz/142934, S2CID 10404385. Nagy, Gábor P. (2008), "A class of simple proper Bol loops", Manuscripta Mathematica, 127 (1): 81–88, arXiv:math/0703919, doi:10.1007/s00229-008-0188-5, S2CID 17734490. Nagy, Gábor P. (2009), "A class of finite simple Bol loops of exponent 2", Transactions of the American Mathematical Society, 361 (10): 5331–5343, arXiv:0709.4544, doi:10.1090/S0002-9947-09-04646-7, S2CID 15228937. Niemenmaa, Markku (2009), "Finite loops with nilpotent inner mapping groups are centrally nilpotent", Bulletin of the Australian Mathematical Society, 79 (1): 109–114, doi:10.1017/S0004972708001093 Ormes, Nicholas; Vojtěchovský, Petr (2007), "Powers and alternative laws", Commentationes Mathematicae Universitatis Carolinae, 48 (1): 25–40. Paige, L. (1956), "A class of simple Moufang loops", Proceedings of the American Mathematical Society, 7 (3): 471–482, doi:10.2307/2032757, JSTOR 2032757. Rajah, Andrew; Chee, Wing Loon (2011), "Moufang loops of odd order p12p22···pn2q3", International Journal of Algebra, 5 (20): 965–975. Rivin, Igor; Vardi, Ilan; Zimmerman, Paul (1994), "The n-queens problem", American Mathematical Monthly, 101 (7): 629–639, doi:10.2307/2974691, JSTOR 2974691. Romanowska, Anna; Smith, Jonathan D. H. (1985), Modal Theory, Heldermann Verlag, Berlin. Rozskowska-Lech, B. (1999), "A representation of symmetric idempotent and entropic groupoids", Demonstr. Math., 32: 248–262. Shcherbacov, V.A.; Pushkashu, D.I. (2010), "On the structure of finite paramedial quasigroups", Comment. Math. Univ. Carolin., 51: 357–370. Stones, D. S. (2010), "The parity of the number of quasigroups", Discrete Mathematics, 310 (21): 3033–3039, doi:10.1016/j.disc.2010.06.027. Vojtěchovský, Petr (2003), "Generators for finite simple Moufang loops", Journal of Group Theory, 6 (2): 169–174, arXiv:math/0701701, doi:10.1515/jgth.2003.012. Vojtěchovský, Petr (2003), "The smallest Moufang loop revisited", Results in Mathematics, 44 (1–2): 189–193, arXiv:math/0701706, doi:10.1007/bf03322924, S2CID 119157018. Vojtěchovský, Petr (2006), "Toward the classification of Moufang loops of order 64", European Journal of Combinatorics, 27 (3): 444–460, arXiv:math/0701712, doi:10.1016/j.ejc.2004.10.003.s == External links == Loops '99 conference Loops '03 conference Loops '07 conference Loops '11 conference Milehigh conferences on nonassociative mathematics LOOPS package for GAP Problems in Loop Theory and Quasigroup Theory
Wikipedia:List of set identities and relations#0
This article lists mathematical properties and laws of sets, involving the set-theoretic operations of union, intersection, and complementation and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations. The binary operations of set union ( ∪ {\displaystyle \cup } ) and intersection ( ∩ {\displaystyle \cap } ) satisfy many identities. Several of these identities or "laws" have well established names. == Notation == Throughout this article, capital letters (such as A , B , C , L , M , R , S , {\displaystyle A,B,C,L,M,R,S,} and X {\displaystyle X} ) will denote sets. On the left hand side of an identity, typically, L {\displaystyle L} will be the leftmost set, M {\displaystyle M} will be the middle set, and R {\displaystyle R} will be the rightmost set. This is to facilitate applying identities to expressions that are complicated or use the same symbols as the identity. For example, the identity ( L ∖ M ) ∖ R = ( L ∖ R ) ∖ ( M ∖ R ) {\displaystyle (L\,\setminus \,M)\,\setminus \,R~=~(L\,\setminus \,R)\,\setminus \,(M\,\setminus \,R)} may be read as: ( Left set ∖ Middle set ) ∖ Right set = ( Left set ∖ Right set ) ∖ ( Middle set ∖ Right set ) . {\displaystyle ({\text{Left set}}\,\setminus \,{\text{Middle set}})\,\setminus \,{\text{Right set}}~=~({\text{Left set}}\,\setminus \,{\text{Right set}})\,\setminus \,({\text{Middle set}}\,\setminus \,{\text{Right set}}).} === Elementary set operations === For sets L {\displaystyle L} and R , {\displaystyle R,} define: L ∪ R = def { x : x ∈ L or x ∈ R } L ∩ R = def { x : x ∈ L and x ∈ R } L ∖ R = def { x : x ∈ L and x ∉ R } {\displaystyle {\begin{alignedat}{4}L\cup R&&~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{~x~:~x\in L\;&&{\text{ or }}\;\,&&\;x\in R~\}\\L\cap R&&~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{~x~:~x\in L\;&&{\text{ and }}&&\;x\in R~\}\\L\setminus R&&~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{~x~:~x\in L\;&&{\text{ and }}&&\;x\notin R~\}\\\end{alignedat}}} and L △ R = def { x : x belongs to exactly one of L and R } {\displaystyle L\triangle R~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{~x~:~x{\text{ belongs to exactly one of }}L{\text{ and }}R~\}} where the symmetric difference L △ R {\displaystyle L\triangle R} is sometimes denoted by L ⊖ R {\displaystyle L\ominus R} and equals: L △ R = ( L ∖ R ) ∪ ( R ∖ L ) = ( L ∪ R ) ∖ ( L ∩ R ) . {\displaystyle {\begin{alignedat}{4}L\;\triangle \;R~&=~(L~\setminus ~&&R)~\cup ~&&(R~\setminus ~&&L)\\~&=~(L~\cup ~&&R)~\setminus ~&&(L~\cap ~&&R).\end{alignedat}}} One set L {\displaystyle L} is said to intersect another set R {\displaystyle R} if L ∩ R ≠ ∅ . {\displaystyle L\cap R\neq \varnothing .} Sets that do not intersect are said to be disjoint. The power set of X {\displaystyle X} is the set of all subsets of X {\displaystyle X} and will be denoted by ℘ ( X ) = def { L : L ⊆ X } . {\displaystyle \wp (X)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{~L~:~L\subseteq X~\}.} Universe set and complement notation The notation L ∁ = def X ∖ L . {\displaystyle L^{\complement }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~X\setminus L.} may be used if L {\displaystyle L} is a subset of some set X {\displaystyle X} that is understood (say from context, or because it is clearly stated what the superset X {\displaystyle X} is). It is emphasized that the definition of L ∁ {\displaystyle L^{\complement }} depends on context. 
For instance, had L {\displaystyle L} been declared as a subset of Y , {\displaystyle Y,} with the sets Y {\displaystyle Y} and X {\displaystyle X} not necessarily related to each other in any way, then L ∁ {\displaystyle L^{\complement }} would likely mean Y ∖ L {\displaystyle Y\setminus L} instead of X ∖ L . {\displaystyle X\setminus L.} If it is needed, then unless indicated otherwise, it should be assumed that X {\displaystyle X} denotes the universe set, which means that all sets that are used in the formula are subsets of X . {\displaystyle X.} In particular, the complement of a set L {\displaystyle L} will be denoted by L ∁ {\displaystyle L^{\complement }} where unless indicated otherwise, it should be assumed that L ∁ {\displaystyle L^{\complement }} denotes the complement of L {\displaystyle L} in (the universe) X . {\displaystyle X.} == One subset involved == Assume L ⊆ X . {\displaystyle L\subseteq X.} Identity: Definition: e {\displaystyle e} is called a left identity element of a binary operator ∗ {\displaystyle \,\ast \,} if e ∗ R = R {\displaystyle e\,\ast \,R=R} for all R {\displaystyle R} and it is called a right identity element of ∗ {\displaystyle \,\ast \,} if L ∗ e = L {\displaystyle L\,\ast \,e=L} for all L . {\displaystyle L.} A left identity element that is also a right identity element is called an identity element. The empty set ∅ {\displaystyle \varnothing } is an identity element of binary union ∪ {\displaystyle \cup } and symmetric difference △ , {\displaystyle \triangle ,} and it is also a right identity element of set subtraction ∖ : {\displaystyle \,\setminus :} L ∩ X = L = X ∩ L where L ⊆ X L ∪ ∅ = L = ∅ ∪ L L △ ∅ = L = ∅ △ L L ∖ ∅ = L {\displaystyle {\begin{alignedat}{10}L\cap X&\;=\;&&L&\;=\;&X\cap L~~~~{\text{ where }}L\subseteq X\\[1.4ex]L\cup \varnothing &\;=\;&&L&\;=\;&\varnothing \cup L\\[1.4ex]L\,\triangle \varnothing &\;=\;&&L&\;=\;&\varnothing \,\triangle L\\[1.4ex]L\setminus \varnothing &\;=\;&&L\\[1.4ex]\end{alignedat}}} but ∅ {\displaystyle \varnothing } is not a left identity element of ∖ {\displaystyle \,\setminus \,} since ∅ ∖ L = ∅ {\displaystyle \varnothing \setminus L=\varnothing } so ∅ ∖ L = L {\textstyle \varnothing \setminus L=L} if and only if L = ∅ . {\displaystyle L=\varnothing .} Idempotence L ∗ L = L {\displaystyle L\ast L=L} and Nilpotence L ∗ L = ∅ {\displaystyle L\ast L=\varnothing } : L ∪ L = L (Idempotence) L ∩ L = L (Idempotence) L △ L = ∅ (Nilpotence of index 2) L ∖ L = ∅ (Nilpotence of index 2) {\displaystyle {\begin{alignedat}{10}L\cup L&\;=\;&&L&&\quad {\text{ (Idempotence)}}\\[1.4ex]L\cap L&\;=\;&&L&&\quad {\text{ (Idempotence)}}\\[1.4ex]L\,\triangle \,L&\;=\;&&\varnothing &&\quad {\text{ (Nilpotence of index 2)}}\\[1.4ex]L\setminus L&\;=\;&&\varnothing &&\quad {\text{ (Nilpotence of index 2)}}\\[1.4ex]\end{alignedat}}} Domination/Absorbing element: Definition: z {\displaystyle z} is called a left absorbing element of a binary operator ∗ {\displaystyle \,\ast \,} if z ∗ R = z {\displaystyle z\,\ast \,R=z} for all R {\displaystyle R} and it is called a right absorbing element of ∗ {\displaystyle \,\ast \,} if L ∗ z = z {\displaystyle L\,\ast \,z=z} for all L . {\displaystyle L.} A left absorbing element that is also a right absorbing element is called an absorbing element. Absorbing elements are also sometimes called annihilating elements or zero elements. A universe set is an absorbing element of binary union ∪ .
{\displaystyle \cup .} The empty set ∅ {\displaystyle \varnothing } is an absorbing element of binary intersection ∩ {\displaystyle \cap } and binary Cartesian product × , {\displaystyle \times ,} and it is also a left absorbing element of set subtraction ∖ : {\displaystyle \,\setminus :} X ∪ L = X = L ∪ X where L ⊆ X ∅ ∩ L = ∅ = L ∩ ∅ ∅ × L = ∅ = L × ∅ ∅ ∖ L = ∅ {\displaystyle {\begin{alignedat}{10}X\cup L&\;=\;&&X&\;=\;&L\cup X~~~~{\text{ where }}L\subseteq X\\[1.4ex]\varnothing \cap L&\;=\;&&\varnothing &\;=\;&L\cap \varnothing \\[1.4ex]\varnothing \times L&\;=\;&&\varnothing &\;=\;&L\times \varnothing \\[1.4ex]\varnothing \setminus L&\;=\;&&\varnothing &\;\;&\\[1.4ex]\end{alignedat}}} but ∅ {\displaystyle \varnothing } is not a right absorbing element of set subtraction since L ∖ ∅ = L {\displaystyle L\setminus \varnothing =L} where L ∖ ∅ = ∅ {\textstyle L\setminus \varnothing =\varnothing } if and only if L = ∅ . {\textstyle L=\varnothing .} Double complement or involution law: X ∖ ( X ∖ L ) = L Also written ( L ∁ ) ∁ = L where L ⊆ X (Double complement/Involution law) {\displaystyle {\begin{alignedat}{10}X\setminus (X\setminus L)&=L&&\qquad {\text{ Also written }}\quad &&\left(L^{\complement }\right)^{\complement }=L&&\quad &&{\text{ where }}L\subseteq X\quad {\text{ (Double complement/Involution law)}}\\[1.4ex]\end{alignedat}}} L ∖ ∅ = L {\displaystyle L\setminus \varnothing =L} ∅ = L ∖ L = ∅ ∖ L = L ∖ X where L ⊆ X {\displaystyle {\begin{alignedat}{4}\varnothing &=L&&\setminus L\\&=\varnothing &&\setminus L\\&=L&&\setminus X~~~~{\text{ where }}L\subseteq X\\\end{alignedat}}} L ∁ = X ∖ L (definition of notation) {\displaystyle L^{\complement }=X\setminus L\quad {\text{ (definition of notation)}}} L ∪ ( X ∖ L ) = X Also written L ∪ L ∁ = X where L ⊆ X L △ ( X ∖ L ) = X Also written L △ L ∁ = X where L ⊆ X L ∩ ( X ∖ L ) = ∅ Also written L ∩ L ∁ = ∅ {\displaystyle {\begin{alignedat}{10}L\,\cup (X\setminus L)&=X&&\qquad {\text{ Also written }}\quad &&L\cup L^{\complement }=X&&\quad &&{\text{ where }}L\subseteq X\\[1.4ex]L\,\triangle (X\setminus L)&=X&&\qquad {\text{ Also written }}\quad &&L\,\triangle L^{\complement }=X&&\quad &&{\text{ where }}L\subseteq X\\[1.4ex]L\,\cap (X\setminus L)&=\varnothing &&\qquad {\text{ Also written }}\quad &&L\cap L^{\complement }=\varnothing &&\quad &&\\[1.4ex]\end{alignedat}}} X ∖ ∅ = X Also written ∅ ∁ = X (Complement laws for the empty set) X ∖ X = ∅ Also written X ∁ = ∅ (Complement laws for the universe set) {\displaystyle {\begin{alignedat}{10}X\setminus \varnothing &=X&&\qquad {\text{ Also written }}\quad &&\varnothing ^{\complement }=X&&\quad &&{\text{ (Complement laws for the empty set)}}\\[1.4ex]X\setminus X&=\varnothing &&\qquad {\text{ Also written }}\quad &&X^{\complement }=\varnothing &&\quad &&{\text{ (Complement laws for the universe set)}}\\[1.4ex]\end{alignedat}}} == Two sets involved == In the left hand sides of the following identities, L {\displaystyle L} is the leftmost set and R {\displaystyle R} is the rightmost set. Assume both L and R {\displaystyle L{\text{ and }}R} are subsets of some universe set X . {\displaystyle X.} === Formulas for binary set operations ⋂, ⋃, \, and ∆ === In the left hand sides of the following identities, L is the leftmost set and R is the rightmost set. Whenever necessary, both L and R should be assumed to be subsets of some universe set X, so that L ∁ := X ∖ L and R ∁ := X ∖ R .
{\displaystyle L^{\complement }:=X\setminus L{\text{ and }}R^{\complement }:=X\setminus R.} L ∩ R = L ∖ ( L ∖ R ) = R ∖ ( R ∖ L ) = L ∖ ( L △ R ) = L △ ( L ∖ R ) {\displaystyle {\begin{alignedat}{9}L\cap R&=L&&\,\,\setminus \,&&(L&&\,\,\setminus &&R)\\&=R&&\,\,\setminus \,&&(R&&\,\,\setminus &&L)\\&=L&&\,\,\setminus \,&&(L&&\,\triangle \,&&R)\\&=L&&\,\triangle \,&&(L&&\,\,\setminus &&R)\\\end{alignedat}}} L ∪ R = ( L △ R ) ∪ L = ( L △ R ) △ ( L ∩ R ) = ( R ∖ L ) ∪ L (union is disjoint) {\displaystyle {\begin{alignedat}{9}L\cup R&=(&&L\,\triangle \,R)&&\,\,\cup &&&&L&&&&\\&=(&&L\,\triangle \,R)&&\,\triangle \,&&(&&L&&\cap \,&&R)\\&=(&&R\,\setminus \,L)&&\,\,\cup &&&&L&&&&~~~~~{\text{ (union is disjoint)}}\\\end{alignedat}}} L △ R = R △ L = ( L ∪ R ) ∖ ( L ∩ R ) = ( L ∖ R ) ∪ ( R ∖ L ) (union is disjoint) = ( L △ M ) △ ( M △ R ) where M is an arbitrary set. = ( L ∁ ) △ ( R ∁ ) {\displaystyle {\begin{alignedat}{9}L\,\triangle \,R&=&&R\,\triangle \,L&&&&&&&&\\&=(&&L\,\cup \,R)&&\,\setminus \,&&(&&L\,\,\cap \,R)&&\\&=(&&L\,\setminus \,R)&&\cup \,&&(&&R\,\,\setminus \,L)&&~~~~~{\text{ (union is disjoint)}}\\&=(&&L\,\triangle \,M)&&\,\triangle \,&&(&&M\,\triangle \,R)&&~~~~~{\text{ where }}M{\text{ is an arbitrary set. }}\\&=(&&L^{\complement })&&\,\triangle \,&&(&&R^{\complement })&&\\\end{alignedat}}} L ∖ R = L ∖ ( L ∩ R ) = L ∩ ( L △ R ) = L △ ( L ∩ R ) = R △ ( L ∪ R ) {\displaystyle {\begin{alignedat}{9}L\setminus R&=&&L&&\,\,\setminus &&(L&&\,\,\cap &&R)\\&=&&L&&\,\,\cap &&(L&&\,\triangle \,&&R)\\&=&&L&&\,\triangle \,&&(L&&\,\,\cap &&R)\\&=&&R&&\,\triangle \,&&(L&&\,\,\cup &&R)\\\end{alignedat}}} === De Morgan's laws === De Morgan's laws state that for L , R ⊆ X : {\displaystyle L,R\subseteq X:} X ∖ ( L ∩ R ) = ( X ∖ L ) ∪ ( X ∖ R ) Also written ( L ∩ R ) ∁ = L ∁ ∪ R ∁ (De Morgan's law) X ∖ ( L ∪ R ) = ( X ∖ L ) ∩ ( X ∖ R ) Also written ( L ∪ R ) ∁ = L ∁ ∩ R ∁ (De Morgan's law) {\displaystyle {\begin{alignedat}{10}X\setminus (L\cap R)&=(X\setminus L)\cup (X\setminus R)&&\qquad {\text{ Also written }}\quad &&(L\cap R)^{\complement }=L^{\complement }\cup R^{\complement }&&\quad &&{\text{ (De Morgan's law)}}\\[1.4ex]X\setminus (L\cup R)&=(X\setminus L)\cap (X\setminus R)&&\qquad {\text{ Also written }}\quad &&(L\cup R)^{\complement }=L^{\complement }\cap R^{\complement }&&\quad &&{\text{ (De Morgan's law)}}\\[1.4ex]\end{alignedat}}} === Commutativity === Unions, intersection, and symmetric difference are commutative operations: L ∪ R = R ∪ L (Commutativity) L ∩ R = R ∩ L (Commutativity) L △ R = R △ L (Commutativity) {\displaystyle {\begin{alignedat}{10}L\cup R&\;=\;&&R\cup L&&\quad {\text{ (Commutativity)}}\\[1.4ex]L\cap R&\;=\;&&R\cap L&&\quad {\text{ (Commutativity)}}\\[1.4ex]L\,\triangle R&\;=\;&&R\,\triangle L&&\quad {\text{ (Commutativity)}}\\[1.4ex]\end{alignedat}}} Set subtraction is not commutative. However, the commutativity of set subtraction can be characterized: from ( L ∖ R ) ∩ ( R ∖ L ) = ∅ {\displaystyle (L\,\setminus \,R)\cap (R\,\setminus \,L)=\varnothing } it follows that: L ∖ R = R ∖ L if and only if L = R . {\displaystyle L\,\setminus \,R=R\,\setminus \,L\quad {\text{ if and only if }}\quad L=R.} Said differently, if distinct symbols always represented distinct sets, then the only true formulas of the form ⋅ ∖ ⋅ = ⋅ ∖ ⋅ {\displaystyle \,\cdot \,\,\setminus \,\,\cdot \,=\,\cdot \,\,\setminus \,\,\cdot \,} that could be written would be those involving a single symbol; that is, those of the form: S ∖ S = S ∖ S . 
{\displaystyle S\,\setminus \,S=S\,\setminus \,S.} But such formulas are necessarily true for every binary operation ∗ {\displaystyle \,\ast \,} (because x ∗ x = x ∗ x {\displaystyle x\,\ast \,x=x\,\ast \,x} must hold by definition of equality), and so in this sense, set subtraction is as diametrically opposite to being commutative as is possible for a binary operation. Set subtraction is also neither left alternative nor right alternative; instead, ( L ∖ L ) ∖ R = L ∖ ( L ∖ R ) {\displaystyle (L\setminus L)\setminus R=L\setminus (L\setminus R)} if and only if L ∩ R = ∅ {\displaystyle L\cap R=\varnothing } if and only if ( R ∖ L ) ∖ L = R ∖ ( L ∖ L ) . {\displaystyle (R\setminus L)\setminus L=R\setminus (L\setminus L).} Set subtraction is quasi-commutative and satisfies the Jordan identity. === Other identities involving two sets === Absorption laws: L ∪ ( L ∩ R ) = L (Absorption) L ∩ ( L ∪ R ) = L (Absorption) {\displaystyle {\begin{alignedat}{4}L\cup (L\cap R)&\;=\;&&L&&\quad {\text{ (Absorption)}}\\[1.4ex]L\cap (L\cup R)&\;=\;&&L&&\quad {\text{ (Absorption)}}\\[1.4ex]\end{alignedat}}} Other properties L ∖ R = L ∩ ( X ∖ R ) Also written L ∖ R = L ∩ R ∁ where L , R ⊆ X X ∖ ( L ∖ R ) = ( X ∖ L ) ∪ R Also written ( L ∖ R ) ∁ = L ∁ ∪ R where R ⊆ X L ∖ R = ( X ∖ R ) ∖ ( X ∖ L ) Also written L ∖ R = R ∁ ∖ L ∁ where L , R ⊆ X {\displaystyle {\begin{alignedat}{10}L\setminus R&=L\cap (X\setminus R)&&\qquad {\text{ Also written }}\quad &&L\setminus R=L\cap R^{\complement }&&\quad &&{\text{ where }}L,R\subseteq X\\[1.4ex]X\setminus (L\setminus R)&=(X\setminus L)\cup R&&\qquad {\text{ Also written }}\quad &&(L\setminus R)^{\complement }=L^{\complement }\cup R&&\quad &&{\text{ where }}R\subseteq X\\[1.4ex]L\setminus R&=(X\setminus R)\setminus (X\setminus L)&&\qquad {\text{ Also written }}\quad &&L\setminus R=R^{\complement }\setminus L^{\complement }&&\quad &&{\text{ where }}L,R\subseteq X\\[1.4ex]\end{alignedat}}} Intervals: ( a , b ) ∩ ( c , d ) = ( max { a , c } , min { b , d } ) {\displaystyle (a,b)\cap (c,d)=(\max\{a,c\},\min\{b,d\})} [ a , b ) ∩ [ c , d ) = [ max { a , c } , min { b , d } ) {\displaystyle [a,b)\cap [c,d)=[\max\{a,c\},\min\{b,d\})} === Subsets ⊆ and supersets ⊇ === The following statements are equivalent for any L , R ⊆ X : {\displaystyle L,R\subseteq X:} L ⊆ R {\displaystyle L\subseteq R} Definition of subset: if l ∈ L {\displaystyle l\in L} then l ∈ R {\displaystyle l\in R} L ∩ R = L {\displaystyle L\cap R=L} L ∪ R = R {\displaystyle L\cup R=R} L △ R = R ∖ L {\displaystyle L\,\triangle \,R=R\setminus L} L △ R ⊆ R ∖ L {\displaystyle L\,\triangle \,R\subseteq R\setminus L} L ∖ R = ∅ {\displaystyle L\setminus R=\varnothing } L {\displaystyle L} and X ∖ R {\displaystyle X\setminus R} are disjoint (that is, L ∩ ( X ∖ R ) = ∅ {\displaystyle L\cap (X\setminus R)=\varnothing } ) X ∖ R ⊆ X ∖ L {\displaystyle X\setminus R\subseteq X\setminus L\qquad } (that is, R ∁ ⊆ L ∁ {\displaystyle R^{\complement }\subseteq L^{\complement }} ) The following statements are equivalent for any L , R ⊆ X : {\displaystyle L,R\subseteq X:} L ⊈ R {\displaystyle L\not \subseteq R} There exists some l ∈ L ∖ R . {\displaystyle l\in L\setminus R.} ==== Set equality ==== The following statements are equivalent: L = R {\displaystyle L=R} L △ R = ∅ {\displaystyle L\,\triangle \,R=\varnothing } L ∖ R = R ∖ L {\displaystyle L\,\setminus \,R=R\,\setminus \,L} If L ∩ R = ∅ {\displaystyle L\cap R=\varnothing } then L = R {\displaystyle L=R} if and only if L = ∅ = R . 
{\displaystyle L=\varnothing =R.} Uniqueness of complements: If L ∪ R = X and L ∩ R = ∅ {\textstyle L\cup R=X{\text{ and }}L\cap R=\varnothing } then R = X ∖ L {\displaystyle R=X\setminus L} ===== Empty set ===== A set L {\displaystyle L} is empty if the sentence ∀ x ( x ∉ L ) {\displaystyle \forall x(x\not \in L)} is true, where the notation x ∉ L {\displaystyle x\not \in L} is shorthand for ¬ ( x ∈ L ) . {\displaystyle \lnot (x\in L).} If L {\displaystyle L} is any set then the following are equivalent: L {\displaystyle L} is not empty, meaning that the sentence ¬ [ ∀ x ( x ∉ L ) ] {\displaystyle \lnot [\forall x(x\not \in L)]} is true (literally, the logical negation of " L {\displaystyle L} is empty" holds true). (In classical mathematics) L {\displaystyle L} is inhabited, meaning: ∃ x ( x ∈ L ) {\displaystyle \exists x(x\in L)} In constructive mathematics, "not empty" and "inhabited" are not equivalent: every inhabited set is not empty but the converse is not always guaranteed; that is, in constructive mathematics, a set L {\displaystyle L} that is not empty (where by definition, " L {\displaystyle L} is empty" means that the statement ∀ x ( x ∉ L ) {\displaystyle \forall x(x\not \in L)} is true) might not have an inhabitant (which is an x {\displaystyle x} such that x ∈ L {\displaystyle x\in L} ). L ⊈ R {\displaystyle L\not \subseteq R} for some set R {\displaystyle R} If L {\displaystyle L} is any set then the following are equivalent: L {\displaystyle L} is empty ( L = ∅ {\displaystyle L=\varnothing } ), meaning: ∀ x ( x ∉ L ) {\displaystyle \forall x(x\not \in L)} L ∪ R ⊆ R {\displaystyle L\cup R\subseteq R} for every set R {\displaystyle R} L ⊆ R {\displaystyle L\subseteq R} for every set R {\displaystyle R} L ⊆ R ∖ L {\displaystyle L\subseteq R\setminus L} for some/every set R {\displaystyle R} ∅ ∖ L = L {\displaystyle \varnothing \setminus L=L} Given any x , {\displaystyle x,} the following are equivalent: x ∉ L ∖ R {\textstyle x\not \in L\setminus R} x ∈ L ∩ R or x ∉ L . {\textstyle x\in L\cap R\;{\text{ or }}\;x\not \in L.} x ∈ R or x ∉ L . {\textstyle x\in R\;{\text{ or }}\;x\not \in L.} Moreover, ( L ∖ R ) ∩ R = ∅ always holds . {\displaystyle (L\setminus R)\cap R=\varnothing \qquad {\text{ always holds}}.} ==== Meets, Joins, and lattice properties ==== Inclusion is a partial order: Explicitly, this means that inclusion ⊆ , {\displaystyle \,\subseteq ,\,} which is a binary relation, has the following three properties: Reflexivity: L ⊆ L {\textstyle L\subseteq L} Antisymmetry: ( L ⊆ R and R ⊆ L ) if and only if L = R {\textstyle (L\subseteq R{\text{ and }}R\subseteq L){\text{ if and only if }}L=R} Transitivity: If L ⊆ M and M ⊆ R then L ⊆ R {\textstyle {\text{If }}L\subseteq M{\text{ and }}M\subseteq R{\text{ then }}L\subseteq R} The following proposition says that for any set S , {\displaystyle S,} the power set of S , {\displaystyle S,} ordered by inclusion, is a bounded lattice; together with the distributive and complement laws above, this shows that it is a Boolean algebra.
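Every law in this article is an equation or containment in finitely many set variables, so each can be spot-checked over the power set of a small universe; conveniently, Python's set operators |, &, -, ^ implement ∪, ∩, ∖, △. A minimal sketch (the universe {0, 1, 2} and the variable names are arbitrary choices) verifying De Morgan's laws and the absorption laws for all 64 pairs of subsets:

```python
from itertools import combinations

X = {0, 1, 2}
subsets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]

print(all(X - (L & R) == (X - L) | (X - R)       # De Morgan's law for intersection
          and X - (L | R) == (X - L) & (X - R)   # De Morgan's law for union
          and L | (L & R) == L                   # absorption
          and L & (L | R) == L                   # absorption
          for L in subsets for R in subsets))    # True
```

Such a check is not a proof, but it quickly catches transcription errors in candidate identities.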
Existence of a least element and a greatest element: ∅ ⊆ L ⊆ X {\displaystyle \varnothing \subseteq L\subseteq X} Joins/supremums exist: L ⊆ L ∪ R {\displaystyle L\subseteq L\cup R} The union L ∪ R {\displaystyle L\cup R} is the join/supremum of L {\displaystyle L} and R {\displaystyle R} with respect to ⊆ {\displaystyle \,\subseteq \,} because: L ⊆ L ∪ R {\displaystyle L\subseteq L\cup R} and R ⊆ L ∪ R , {\displaystyle R\subseteq L\cup R,} and if Z {\displaystyle Z} is a set such that L ⊆ Z {\displaystyle L\subseteq Z} and R ⊆ Z {\displaystyle R\subseteq Z} then L ∪ R ⊆ Z . {\displaystyle L\cup R\subseteq Z.} The intersection L ∩ R {\displaystyle L\cap R} is the join/supremum of L {\displaystyle L} and R {\displaystyle R} with respect to ⊇ . {\displaystyle \,\supseteq .\,} Meets/infimums exist: L ∩ R ⊆ L {\displaystyle L\cap R\subseteq L} The intersection L ∩ R {\displaystyle L\cap R} is the meet/infimum of L {\displaystyle L} and R {\displaystyle R} with respect to ⊆ {\displaystyle \,\subseteq \,} because: L ∩ R ⊆ L {\displaystyle L\cap R\subseteq L} and L ∩ R ⊆ R , {\displaystyle L\cap R\subseteq R,} and if Z {\displaystyle Z} is a set such that Z ⊆ L {\displaystyle Z\subseteq L} and Z ⊆ R {\displaystyle Z\subseteq R} then Z ⊆ L ∩ R . {\displaystyle Z\subseteq L\cap R.} The union L ∪ R {\displaystyle L\cup R} is the meet/infimum of L {\displaystyle L} and R {\displaystyle R} with respect to ⊇ . {\displaystyle \,\supseteq .\,} Other inclusion properties: L ∖ R ⊆ L {\displaystyle L\setminus R\subseteq L} ( L ∖ R ) ∩ L = L ∖ R {\displaystyle (L\setminus R)\cap L=L\setminus R} If L ⊆ R {\displaystyle L\subseteq R} then L △ R = R ∖ L . {\displaystyle L\,\triangle \,R=R\setminus L.} If L ⊆ X {\displaystyle L\subseteq X} and R ⊆ Y {\displaystyle R\subseteq Y} then L × R ⊆ X × Y {\displaystyle L\times R\subseteq X\times Y} == Three sets involved == In the left hand sides of the following identities, L {\displaystyle L} is the leftmost set, M {\displaystyle M} is the middle set, and R {\displaystyle R} is the rightmost set. === Precedence rules === There is no universal agreement on the order of precedence of the basic set operators. Nevertheless, many authors use precedence rules for set operators, although these rules vary with the author. One common convention is to associate intersection L ∩ R = { x : ( x ∈ L ) ∧ ( x ∈ R ) } {\displaystyle L\cap R=\{x:(x\in L)\land (x\in R)\}} with logical conjunction (and) L ∧ R {\displaystyle L\land R} and associate union L ∪ R = { x : ( x ∈ L ) ∨ ( x ∈ R ) } {\displaystyle L\cup R=\{x:(x\in L)\lor (x\in R)\}} with logical disjunction (or) L ∨ R , {\displaystyle L\lor R,} and then transfer the precedence of these logical operators (where ∧ {\displaystyle \,\land \,} has precedence over ∨ {\displaystyle \,\lor \,} ) to these set operators, thereby giving ∩ {\displaystyle \,\cap \,} precedence over ∪ . {\displaystyle \,\cup .\,} So for example, L ∪ M ∩ R {\displaystyle L\cup M\cap R} would mean L ∪ ( M ∩ R ) {\displaystyle L\cup (M\cap R)} since it would be associated with the logical statement L ∨ M ∧ R = L ∨ ( M ∧ R ) {\displaystyle L\lor M\land R~=~L\lor (M\land R)} and similarly, L ∪ M ∩ R ∪ Z {\displaystyle L\cup M\cap R\cup Z} would mean L ∪ ( M ∩ R ) ∪ Z {\displaystyle L\cup (M\cap R)\cup Z} since it would be associated with L ∨ M ∧ R ∨ Z = L ∨ ( M ∧ R ) ∨ Z .
{\displaystyle L\lor M\land R\lor Z~=~L\lor (M\land R)\lor Z.} Sometimes, set complement (subtraction) ∖ {\displaystyle \,\setminus \,} is also associated with logical complement (not) ¬ , {\displaystyle \,\lnot ,\,} in which case it will have the highest precedence. More specifically, L ∖ R = { x : ( x ∈ L ) ∧ ¬ ( x ∈ R ) } {\displaystyle L\setminus R=\{x:(x\in L)\land \lnot (x\in R)\}} is rewritten L ∧ ¬ R {\displaystyle L\land \lnot R} so that for example, L ∪ M ∖ R {\displaystyle L\cup M\setminus R} would mean L ∪ ( M ∖ R ) {\displaystyle L\cup (M\setminus R)} since it would be rewritten as the logical statement L ∨ M ∧ ¬ R {\displaystyle L\lor M\land \lnot R} which is equal to L ∨ ( M ∧ ¬ R ) . {\displaystyle L\lor (M\land \lnot R).} For another example, because L ∧ ¬ M ∧ R {\displaystyle L\land \lnot M\land R} means L ∧ ( ¬ M ) ∧ R , {\displaystyle L\land (\lnot M)\land R,} which is equal to both ( L ∧ ( ¬ M ) ) ∧ R {\displaystyle (L\land (\lnot M))\land R} and L ∧ ( ( ¬ M ) ∧ R ) = L ∧ ( R ∧ ( ¬ M ) ) {\displaystyle L\land ((\lnot M)\land R)~=~L\land (R\land (\lnot M))} (where ( ¬ M ) ∧ R {\displaystyle (\lnot M)\land R} was rewritten as R ∧ ( ¬ M ) {\displaystyle R\land (\lnot M)} ), the formula L ∖ M ∩ R {\displaystyle L\setminus M\cap R} would refer to the set ( L ∖ M ) ∩ R = L ∩ ( R ∖ M ) ; {\displaystyle (L\setminus M)\cap R=L\cap (R\setminus M);} moreover, since L ∧ ( ¬ M ) ∧ R = ( L ∧ R ) ∧ ¬ M , {\displaystyle L\land (\lnot M)\land R=(L\land R)\land \lnot M,} this set is also equal to ( L ∩ R ) ∖ M {\displaystyle (L\cap R)\setminus M} (other set identities can similarly be deduced from propositional calculus identities in this way). However, because set subtraction is not associative ( L ∖ M ) ∖ R ≠ L ∖ ( M ∖ R ) , {\displaystyle (L\setminus M)\setminus R\neq L\setminus (M\setminus R),} a formula such as L ∖ M ∖ R {\displaystyle L\setminus M\setminus R} would be ambiguous; for this reason, among others, set subtraction is often not assigned any precedence at all. Symmetric difference L △ R = { x : ( x ∈ L ) ⊕ ( x ∈ R ) } {\displaystyle L\triangle R=\{x:(x\in L)\oplus (x\in R)\}} is sometimes associated with exclusive or (xor) L ⊕ R {\displaystyle L\oplus R} (also sometimes denoted by ⊻ {\displaystyle \,\veebar } ), in which case if the order of precedence from highest to lowest is ¬ , ⊕ , ∧ , ∨ {\displaystyle \,\lnot ,\,\oplus ,\,\land ,\,\lor \,} then the order of precedence (from highest to lowest) for the set operators would be ∖ , △ , ∩ , ∪ . {\displaystyle \,\setminus ,\,\triangle ,\,\cap ,\,\cup .} There is no universal agreement on the precedence of exclusive disjunction ⊕ {\displaystyle \,\oplus \,} with respect to the other logical connectives, which is why symmetric difference △ {\displaystyle \,\triangle \,} is not often assigned a precedence. === Associativity === Definition: A binary operator ∗ {\displaystyle \,\ast \,} is called associative if ( L ∗ M ) ∗ R = L ∗ ( M ∗ R ) {\displaystyle (L\,\ast \,M)\,\ast \,R=L\,\ast \,(M\,\ast \,R)} always holds. 
The following set operators are associative: ( L ∪ M ) ∪ R = L ∪ ( M ∪ R ) ( L ∩ M ) ∩ R = L ∩ ( M ∩ R ) ( L △ M ) △ R = L △ ( M △ R ) {\displaystyle {\begin{alignedat}{5}(L\cup M)\cup R&\;=\;\;&&L\cup (M\cup R)\\[1.4ex](L\cap M)\cap R&\;=\;\;&&L\cap (M\cap R)\\[1.4ex](L\,\triangle M)\,\triangle R&\;=\;\;&&L\,\triangle (M\,\triangle R)\\[1.4ex]\end{alignedat}}} For set subtraction, instead of associativity, only the following is always guaranteed: ( L ∖ M ) ∖ R ⊆ L ∖ ( M ∖ R ) {\displaystyle (L\,\setminus \,M)\,\setminus \,R\;~~{\color {red}{\subseteq }}~~\;L\,\setminus \,(M\,\setminus \,R)} where equality holds if and only if L ∩ R = ∅ {\displaystyle L\cap R=\varnothing } (this condition does not depend on M {\displaystyle M} ). Thus ( L ∖ M ) ∖ R = L ∖ ( M ∖ R ) {\textstyle \;(L\setminus M)\setminus R=L\setminus (M\setminus R)\;} if and only if ( R ∖ M ) ∖ L = R ∖ ( M ∖ L ) , {\displaystyle \;(R\setminus M)\setminus L=R\setminus (M\setminus L),\;} where the only difference between the left and right hand side set equalities is that the locations of L and R {\displaystyle L{\text{ and }}R} have been swapped. === Distributivity === Definition: If ∗ and ∙ {\displaystyle \ast {\text{ and }}\bullet } are binary operators then ∗ {\displaystyle \,\ast \,} left distributes over ∙ {\displaystyle \,\bullet \,} if L ∗ ( M ∙ R ) = ( L ∗ M ) ∙ ( L ∗ R ) for all L , M , R {\displaystyle L\,\ast \,(M\,\bullet \,R)~=~(L\,\ast \,M)\,\bullet \,(L\,\ast \,R)\qquad \qquad {\text{ for all }}L,M,R} while ∗ {\displaystyle \,\ast \,} right distributes over ∙ {\displaystyle \,\bullet \,} if ( L ∙ M ) ∗ R = ( L ∗ R ) ∙ ( M ∗ R ) for all L , M , R . {\displaystyle (L\,\bullet \,M)\,\ast \,R~=~(L\,\ast \,R)\,\bullet \,(M\,\ast \,R)\qquad \qquad {\text{ for all }}L,M,R.} The operator ∗ {\displaystyle \,\ast \,} distributes over ∙ {\displaystyle \,\bullet \,} if it both left distributes and right distributes over ∙ . {\displaystyle \,\bullet \,.\,} In the definitions above, to transform one side to the other, the innermost operator (the operator inside the parentheses) becomes the outermost operator and the outermost operator becomes the innermost operator. 
Right distributivity: ( L ∩ M ) ∪ R = ( L ∪ R ) ∩ ( M ∪ R ) (Right-distributivity of ∪ over ∩ ) ( L ∪ M ) ∪ R = ( L ∪ R ) ∪ ( M ∪ R ) (Right-distributivity of ∪ over ∪ ) ( L ∪ M ) ∩ R = ( L ∩ R ) ∪ ( M ∩ R ) (Right-distributivity of ∩ over ∪ ) ( L ∩ M ) ∩ R = ( L ∩ R ) ∩ ( M ∩ R ) (Right-distributivity of ∩ over ∩ ) ( L △ M ) ∩ R = ( L ∩ R ) △ ( M ∩ R ) (Right-distributivity of ∩ over △ ) ( L ∩ M ) × R = ( L × R ) ∩ ( M × R ) (Right-distributivity of × over ∩ ) ( L ∪ M ) × R = ( L × R ) ∪ ( M × R ) (Right-distributivity of × over ∪ ) ( L ∖ M ) × R = ( L × R ) ∖ ( M × R ) (Right-distributivity of × over ∖ ) ( L △ M ) × R = ( L × R ) △ ( M × R ) (Right-distributivity of × over △ ) ( L ∪ M ) ∖ R = ( L ∖ R ) ∪ ( M ∖ R ) (Right-distributivity of ∖ over ∪ ) ( L ∩ M ) ∖ R = ( L ∖ R ) ∩ ( M ∖ R ) (Right-distributivity of ∖ over ∩ ) ( L △ M ) ∖ R = ( L ∖ R ) △ ( M ∖ R ) (Right-distributivity of ∖ over △ ) ( L ∖ M ) ∖ R = ( L ∖ R ) ∖ ( M ∖ R ) (Right-distributivity of ∖ over ∖ ) = L ∖ ( M ∪ R ) {\displaystyle {\begin{alignedat}{9}(L\,\cap \,M)\,\cup \,R~&~~=~~&&(L\,\cup \,R)\,&&\cap \,&&(M\,\cup \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cup \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\cup \,M)\,\cup \,R~&~~=~~&&(L\,\cup \,R)\,&&\cup \,&&(M\,\cup \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cup \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\cup \,M)\,\cap \,R~&~~=~~&&(L\,\cap \,R)\,&&\cup \,&&(M\,\cap \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cap \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\cap \,M)\,\cap \,R~&~~=~~&&(L\,\cap \,R)\,&&\cap \,&&(M\,\cap \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cap \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\triangle \,M)\,\cap \,R~&~~=~~&&(L\,\cap \,R)\,&&\triangle \,&&(M\,\cap \,R)\qquad &&{\text{ (Right-distributivity of }}\,\cap \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex](L\,\cap \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\cap \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\cup \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\cup \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\setminus \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\setminus \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\setminus \,{\text{)}}\\[1.4ex](L\,\triangle \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\triangle \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex](L\,\cup \,M)\,\setminus \,R~&~~=~~&&(L\,\setminus \,R)\,&&\cup \,&&(M\,\setminus \,R)\qquad &&{\text{ (Right-distributivity of }}\,\setminus \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\cap \,M)\,\setminus \,R~&~~=~~&&(L\,\setminus \,R)\,&&\cap \,&&(M\,\setminus \,R)\qquad &&{\text{ (Right-distributivity of }}\,\setminus \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\triangle \,M)\,\setminus \,R~&~~=~~&&(L\,\setminus \,R)&&\,\triangle \,&&(M\,\setminus \,R)\qquad &&{\text{ (Right-distributivity of }}\,\setminus \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex](L\,\setminus \,M)\,\setminus \,R~&~~=~~&&(L\,\setminus \,R)&&\,\setminus \,&&(M\,\setminus \,R)\qquad &&{\text{ (Right-distributivity of }}\,\setminus \,{\text{ over }}\,\setminus \,{\text{)}}\\[1.4ex]~&~~=~~&&~~\;~~\;~~\;~L&&\,\setminus \,&&(M\cup R)\\[1.4ex]\end{alignedat}}} Left distributivity: L ∪ ( M ∩ R ) = ( L ∪ M ) ∩ ( L ∪ R ) (Left-distributivity of 
∪ over ∩ ) L ∪ ( M ∪ R ) = ( L ∪ M ) ∪ ( L ∪ R ) (Left-distributivity of ∪ over ∪ ) L ∩ ( M ∪ R ) = ( L ∩ M ) ∪ ( L ∩ R ) (Left-distributivity of ∩ over ∪ ) L ∩ ( M ∩ R ) = ( L ∩ M ) ∩ ( L ∩ R ) (Left-distributivity of ∩ over ∩ ) L ∩ ( M △ R ) = ( L ∩ M ) △ ( L ∩ R ) (Left-distributivity of ∩ over △ ) L × ( M ∩ R ) = ( L × M ) ∩ ( L × R ) (Left-distributivity of × over ∩ ) L × ( M ∪ R ) = ( L × M ) ∪ ( L × R ) (Left-distributivity of × over ∪ ) L × ( M ∖ R ) = ( L × M ) ∖ ( L × R ) (Left-distributivity of × over ∖ ) L × ( M △ R ) = ( L × M ) △ ( L × R ) (Left-distributivity of × over △ ) {\displaystyle {\begin{alignedat}{5}L\cup (M\cap R)&\;=\;\;&&(L\cup M)\cap (L\cup R)\qquad &&{\text{ (Left-distributivity of }}\,\cup \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex]L\cup (M\cup R)&\;=\;\;&&(L\cup M)\cup (L\cup R)&&{\text{ (Left-distributivity of }}\,\cup \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\cap (M\cup R)&\;=\;\;&&(L\cap M)\cup (L\cap R)&&{\text{ (Left-distributivity of }}\,\cap \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\cap (M\cap R)&\;=\;\;&&(L\cap M)\cap (L\cap R)&&{\text{ (Left-distributivity of }}\,\cap \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex]L\cap (M\,\triangle \,R)&\;=\;\;&&(L\cap M)\,\triangle \,(L\cap R)&&{\text{ (Left-distributivity of }}\,\cap \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex]L\times (M\cap R)&\;=\;\;&&(L\times M)\cap (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex]L\times (M\cup R)&\;=\;\;&&(L\times M)\cup (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\times (M\,\setminus R)&\;=\;\;&&(L\times M)\,\setminus (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\setminus \,{\text{)}}\\[1.4ex]L\times (M\,\triangle R)&\;=\;\;&&(L\times M)\,\triangle (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex]\end{alignedat}}} ==== Distributivity and symmetric difference ∆ ==== Intersection distributes over symmetric difference: L ∩ ( M △ R ) = ( L ∩ M ) △ ( L ∩ R ) {\displaystyle {\begin{alignedat}{5}L\,\cap \,(M\,\triangle \,R)~&~~=~~&&(L\,\cap \,M)\,\triangle \,(L\,\cap \,R)~&&~\\[1.4ex]\end{alignedat}}} ( L △ M ) ∩ R = ( L ∩ R ) △ ( M ∩ R ) {\displaystyle {\begin{alignedat}{5}(L\,\triangle \,M)\,\cap \,R~&~~=~~&&(L\,\cap \,R)\,\triangle \,(M\,\cap \,R)~&&~\\[1.4ex]\end{alignedat}}} Union does not distribute over symmetric difference because only the following is guaranteed in general: L ∪ ( M △ R ) ⊇ ( L ∪ M ) △ ( L ∪ R ) = ( M △ R ) ∖ L = ( M ∖ L ) △ ( R ∖ L ) {\displaystyle {\begin{alignedat}{5}L\cup (M\,\triangle \,R)~~{\color {red}{\supseteq }}~~\color {black}{\,}(L\cup M)\,\triangle \,(L\cup R)~&~=~&&(M\,\triangle \,R)\,\setminus \,L&~=~&&(M\,\setminus \,L)\,\triangle \,(R\,\setminus \,L)\\[1.4ex]\end{alignedat}}} Symmetric difference does not distribute over itself: L △ ( M △ R ) ≠ ( L △ M ) △ ( L △ R ) = M △ R {\displaystyle L\,\triangle \,(M\,\triangle \,R)~~{\color {red}{\neq }}~~\color {black}{\,}(L\,\triangle \,M)\,\triangle \,(L\,\triangle \,R)~=~M\,\triangle \,R} and in general, for any sets L and A {\displaystyle L{\text{ and }}A} (where A {\displaystyle A} represents M △ R {\displaystyle M\,\triangle \,R} ), L △ A {\displaystyle L\,\triangle \,A} might not be a subset, nor a superset, of L {\displaystyle L} (and the same is true for A {\displaystyle A} ). 
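The positive law for ∩ over △, the one-sided containment for ∪, and the earlier failure of associativity for ∖ can all be witnessed on concrete sets. A minimal sketch (the three sets are arbitrary examples chosen so that L ∩ R ≠ ∅):

```python
L, M, R = {1, 2}, {2, 3}, {1, 3}

print(L & (M ^ R) == (L & M) ^ (L & R))  # True: ∩ distributes over △
print((L | M) ^ (L | R) <= L | (M ^ R))  # True: one-sided containment for ∪
print(L | (M ^ R) == (L | M) ^ (L | R))  # False here: ∪ does not distribute over △
print((L - M) - R == L - (M - R))        # False here: ∖ is not associative
```

The last comparison prints True exactly when L ∩ R = ∅, matching the equality condition stated in the associativity discussion above.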
==== Distributivity and set subtraction \ ==== Failure of set subtraction to left distribute: Set subtraction is right distributive over itself. However, set subtraction is not left distributive over itself because only the following is guaranteed in general: L ∖ ( M ∖ R ) ⊇ ( L ∖ M ) ∖ ( L ∖ R ) = L ∩ R ∖ M {\displaystyle {\begin{alignedat}{5}L\,\setminus \,(M\,\setminus \,R)&~~{\color {red}{\supseteq }}~~&&\color {black}{\,}(L\,\setminus \,M)\,\setminus \,(L\,\setminus \,R)~~=~~L\cap R\,\setminus \,M\\[1.4ex]\end{alignedat}}} where equality holds if and only if L ∖ M = L ∩ R , {\displaystyle L\,\setminus \,M=L\,\cap \,R,} which happens if and only if L ∩ M ∩ R = ∅ and L ∖ M ⊆ R . {\displaystyle L\cap M\cap R=\varnothing {\text{ and }}L\setminus M\subseteq R.} For symmetric difference, the sets L ∖ ( M △ R ) {\displaystyle L\,\setminus \,(M\,\triangle \,R)} and ( L ∖ M ) △ ( L ∖ R ) = L ∩ ( M △ R ) {\displaystyle (L\,\setminus \,M)\,\triangle \,(L\,\setminus \,R)=L\,\cap \,(M\,\triangle \,R)} are always disjoint. So these two sets are equal if and only if they are both equal to ∅ . {\displaystyle \varnothing .} Moreover, L ∖ ( M △ R ) = ∅ {\displaystyle L\,\setminus \,(M\,\triangle \,R)=\varnothing } if and only if L ∩ M ∩ R = ∅ and L ⊆ M ∪ R . {\displaystyle L\cap M\cap R=\varnothing {\text{ and }}L\subseteq M\cup R.} To investigate the left distributivity of set subtraction over unions or intersections, consider how the sets involved in (both of) De Morgan's laws are all related: ( L ∖ M ) ∩ ( L ∖ R ) = L ∖ ( M ∪ R ) ⊆ L ∖ ( M ∩ R ) = ( L ∖ M ) ∪ ( L ∖ R ) {\displaystyle {\begin{alignedat}{5}(L\,\setminus \,M)\,\cap \,(L\,\setminus \,R)~~=~~L\,\setminus \,(M\,\cup \,R)~&~~{\color {red}{\subseteq }}~~&&\color {black}{\,}L\,\setminus \,(M\,\cap \,R)~~=~~(L\,\setminus \,M)\,\cup \,(L\,\setminus \,R)\\[1.4ex]\end{alignedat}}} always holds (the equalities on the left and right are De Morgan's laws) but equality is not guaranteed in general (that is, the containment ⊆ {\displaystyle {\color {red}{\subseteq }}} might be strict). Equality holds if and only if L ∖ ( M ∩ R ) ⊆ L ∖ ( M ∪ R ) , {\displaystyle L\,\setminus \,(M\,\cap \,R)\;\subseteq \;L\,\setminus \,(M\,\cup \,R),} which happens if and only if L ∩ M = L ∩ R . {\displaystyle L\,\cap \,M=L\,\cap \,R.} This observation about De Morgan's laws shows that ∖ {\displaystyle \,\setminus \,} is not left distributive over ∪ {\displaystyle \,\cup \,} or ∩ {\displaystyle \,\cap \,} because only the following are guaranteed in general: L ∖ ( M ∪ R ) ⊆ ( L ∖ M ) ∪ ( L ∖ R ) = L ∖ ( M ∩ R ) {\displaystyle {\begin{alignedat}{5}L\,\setminus \,(M\,\cup \,R)~&~~{\color {red}{\subseteq }}~~&&\color {black}{\,}(L\,\setminus \,M)\,\cup \,(L\,\setminus \,R)~~=~~L\,\setminus \,(M\,\cap \,R)\\[1.4ex]\end{alignedat}}} L ∖ ( M ∩ R ) ⊇ ( L ∖ M ) ∩ ( L ∖ R ) = L ∖ ( M ∪ R ) {\displaystyle {\begin{alignedat}{5}L\,\setminus \,(M\,\cap \,R)~&~~{\color {red}{\supseteq }}~~&&\color {black}{\,}(L\,\setminus \,M)\,\cap \,(L\,\setminus \,R)~~=~~L\,\setminus \,(M\,\cup \,R)\\[1.4ex]\end{alignedat}}} where equality holds for one (or equivalently, for both) of the above two inclusion formulas if and only if L ∩ M = L ∩ R . 
{\displaystyle L\,\cap \,M=L\,\cap \,R.} The following statements are equivalent: L ∩ M = L ∩ R {\displaystyle L\cap M\,=\,L\cap R} L ∖ M = L ∖ R {\displaystyle L\,\setminus \,M\,=\,L\,\setminus \,R} L ∖ ( M ∩ R ) = ( L ∖ M ) ∩ ( L ∖ R ) ; {\displaystyle L\,\setminus \,(M\,\cap \,R)=(L\,\setminus \,M)\,\cap \,(L\,\setminus \,R);} that is, ∖ {\displaystyle \,\setminus \,} left distributes over ∩ {\displaystyle \,\cap \,} for these three particular sets L ∖ ( M ∪ R ) = ( L ∖ M ) ∪ ( L ∖ R ) ; {\displaystyle L\,\setminus \,(M\,\cup \,R)=(L\,\setminus \,M)\,\cup \,(L\,\setminus \,R);} that is, ∖ {\displaystyle \,\setminus \,} left distributes over ∪ {\displaystyle \,\cup \,} for these three particular sets L ∖ ( M ∩ R ) = L ∖ ( M ∪ R ) {\displaystyle L\,\setminus \,(M\,\cap \,R)\,=\,L\,\setminus \,(M\,\cup \,R)} L ∩ ( M ∪ R ) = L ∩ M ∩ R {\displaystyle L\cap (M\cup R)\,=\,L\cap M\cap R} L ∩ ( M ∪ R ) ⊆ M ∩ R {\displaystyle L\cap (M\cup R)~\subseteq ~M\cap R} L ∩ R ⊆ M {\displaystyle L\cap R~\subseteq ~M\;} and L ∩ M ⊆ R {\displaystyle \;L\cap M~\subseteq ~R} L ∖ ( M ∖ R ) = L ∖ ( R ∖ M ) {\displaystyle L\setminus (M\setminus R)\,=\,L\setminus (R\setminus M)} L ∖ ( M ∖ R ) = L ∖ ( R ∖ M ) = L {\displaystyle L\setminus (M\setminus R)\,=\,L\setminus (R\setminus M)\,=\,L} Quasi-commutativity: ( L ∖ M ) ∖ R = ( L ∖ R ) ∖ M (Quasi-commutative) {\displaystyle (L\setminus M)\setminus R~=~(L\setminus R)\setminus M\qquad {\text{ (Quasi-commutative)}}} always holds but in general, L ∖ ( M ∖ R ) ≠ L ∖ ( R ∖ M ) . {\displaystyle L\setminus (M\setminus R)~~{\color {red}{\neq }}~~L\setminus (R\setminus M).} However, L ∖ ( M ∖ R ) ⊆ L ∖ ( R ∖ M ) {\displaystyle L\setminus (M\setminus R)~\subseteq ~L\setminus (R\setminus M)} if and only if L ∩ R ⊆ M {\displaystyle L\cap R~\subseteq ~M} if and only if L ∖ ( R ∖ M ) = L . {\displaystyle L\setminus (R\setminus M)~=~L.} Set subtraction complexity: To manage the many identities involving set subtraction, this section is divided based on where the set subtraction operation and parentheses are located on the left hand side of the identity. The great variety and (relative) complexity of formulas involving set subtraction (compared to those without it) is in part due to the fact that unlike ∪ , ∩ , {\displaystyle \,\cup ,\,\cap ,} and △ , {\displaystyle \triangle ,\,} set subtraction is neither associative nor commutative and it also is not left distributive over ∪ , ∩ , △ , {\displaystyle \,\cup ,\,\cap ,\,\triangle ,} or even over itself. === Two set subtractions === Set subtraction is not associative in general: ( L ∖ M ) ∖ R ≠ L ∖ ( M ∖ R ) {\displaystyle (L\,\setminus \,M)\,\setminus \,R\;~~{\color {red}{\neq }}~~\;L\,\setminus \,(M\,\setminus \,R)} since only the following is always guaranteed: ( L ∖ M ) ∖ R ⊆ L ∖ ( M ∖ R ) . 
{\displaystyle (L\,\setminus \,M)\,\setminus \,R\;~~{\color {red}{\subseteq }}~~\;L\,\setminus \,(M\,\setminus \,R).} ==== (L\M)\R ==== ( L ∖ M ) ∖ R = L ∖ ( M ∪ R ) = ( L ∖ R ) ∖ M = ( L ∖ M ) ∩ ( L ∖ R ) = ( L ∖ R ) ∖ ( M ∖ R ) {\displaystyle {\begin{alignedat}{4}(L\setminus M)\setminus R&=&&L\setminus (M\cup R)\\[0.6ex]&=(&&L\setminus R)\setminus M\\[0.6ex]&=(&&L\setminus M)\cap (L\setminus R)\\[0.6ex]&=(&&L\,\setminus \,R)\,\setminus \,(M\,\setminus \,R)\\[1.4ex]\end{alignedat}}} ==== L\(M\R) ==== L ∖ ( M ∖ R ) = ( L ∖ M ) ∪ ( L ∩ R ) {\displaystyle {\begin{alignedat}{4}L\setminus (M\setminus R)&=(L\setminus M)\cup (L\cap R)\\[1.4ex]\end{alignedat}}} If L ⊆ M then L ∖ ( M ∖ R ) = L ∩ R {\displaystyle L\subseteq M{\text{ then }}L\setminus (M\setminus R)=L\cap R} L ∖ ( M ∖ R ) ⊆ ( L ∖ M ) ∪ R {\textstyle L\setminus (M\setminus R)\subseteq (L\setminus M)\cup R} with equality if and only if R ⊆ L . {\displaystyle R\subseteq L.} === One set subtraction === ==== (L\M) ⁎ R ==== Set subtraction on the left, and parentheses on the left ( L ∖ M ) ∪ R = ( L ∪ R ) ∖ ( M ∖ R ) = ( L ∖ ( M ∪ R ) ) ∪ R (the outermost union is disjoint) {\displaystyle {\begin{alignedat}{4}\left(L\setminus M\right)\cup R&=(L\cup R)\setminus (M\setminus R)\\&=(L\setminus (M\cup R))\cup R~~~~~{\text{ (the outermost union is disjoint) }}\\\end{alignedat}}} ( L ∖ M ) ∩ R = ( L ∩ R ) ∖ ( M ∩ R ) (Distributive law of ∩ over ∖ ) = ( L ∩ R ) ∖ M = L ∩ ( R ∖ M ) {\displaystyle {\begin{alignedat}{4}(L\setminus M)\cap R&=(&&L\cap R)\setminus (M\cap R)~~~{\text{ (Distributive law of }}\cap {\text{ over }}\setminus {\text{ )}}\\&=(&&L\cap R)\setminus M\\&=&&L\cap (R\setminus M)\\\end{alignedat}}} ( L ∖ M ) ∩ ( L ∖ R ) = L ∖ ( M ∪ R ) ⊆ L ∖ ( M ∩ R ) = ( L ∖ M ) ∪ ( L ∖ R ) {\displaystyle {\begin{alignedat}{5}(L\,\setminus \,M)\,\cap \,(L\,\setminus \,R)~~=~~L\,\setminus \,(M\,\cup \,R)~&~~{\color {red}{\subseteq }}~~&&\color {black}{\,}L\,\setminus \,(M\,\cap \,R)~~=~~(L\,\setminus \,M)\,\cup \,(L\,\setminus \,R)\\[1.4ex]\end{alignedat}}} ( L ∖ M ) △ R = ( L ∖ ( M ∪ R ) ) ∪ ( R ∖ L ) ∪ ( L ∩ M ∩ R ) (the three outermost sets are pairwise disjoint) {\displaystyle {\begin{alignedat}{4}(L\setminus M)~\triangle ~R&=(L\setminus (M\cup R))\cup (R\setminus L)\cup (L\cap M\cap R)~~~{\text{ (the three outermost sets are pairwise disjoint) }}\\\end{alignedat}}} ( L ∖ M ) × R = ( L × R ) ∖ ( M × R ) (Distributivity) {\displaystyle (L\,\setminus M)\times R=(L\times R)\,\setminus (M\times R)~~~~~{\text{ (Distributivity)}}} ==== L\(M ⁎ R) ==== Set subtraction on the left, and parentheses on the right L ∖ ( M ∪ R ) = ( L ∖ M ) ∩ ( L ∖ R ) (De Morgan's law) = ( L ∖ M ) ∖ R = ( L ∖ R ) ∖ M {\displaystyle {\begin{alignedat}{3}L\setminus (M\cup R)&=(L\setminus M)&&\,\cap \,(&&L\setminus R)~~~~{\text{ (De Morgan's law) }}\\&=(L\setminus M)&&\,\,\setminus &&R\\&=(L\setminus R)&&\,\,\setminus &&M\\\end{alignedat}}} L ∖ ( M ∩ R ) = ( L ∖ M ) ∪ ( L ∖ R ) (De Morgan's law) {\displaystyle {\begin{alignedat}{4}L\setminus (M\cap R)&=(L\setminus M)\cup (L\setminus R)~~~~{\text{ (De Morgan's law) }}\\\end{alignedat}}} where the above two sets that are the subjects of De Morgan's laws always satisfy L ∖ ( M ∪ R ) ⊆ L ∖ ( M ∩ R ) .
{\displaystyle L\,\setminus \,(M\,\cup \,R)~~{\color {red}{\subseteq }}~~\color {black}{\,}L\,\setminus \,(M\,\cap \,R).} L ∖ ( M △ R ) = ( L ∖ ( M ∪ R ) ) ∪ ( L ∩ M ∩ R ) (the outermost union is disjoint) {\displaystyle {\begin{alignedat}{4}L\setminus (M~\triangle ~R)&=(L\setminus (M\cup R))\cup (L\cap M\cap R)~~~{\text{ (the outermost union is disjoint) }}\\\end{alignedat}}} ==== (L ⁎ M)\R ==== Set subtraction on the right, and parentheses on the left ( L ∪ M ) ∖ R = ( L ∖ R ) ∪ ( M ∖ R ) {\displaystyle {\begin{alignedat}{4}(L\cup M)\setminus R&=(L\setminus R)\cup (M\setminus R)\\\end{alignedat}}} ( L ∩ M ) ∖ R = ( L ∖ R ) ∩ ( M ∖ R ) = L ∩ ( M ∖ R ) = M ∩ ( L ∖ R ) {\displaystyle {\begin{alignedat}{4}(L\cap M)\setminus R&=(&&L\setminus R)&&\cap (M\setminus R)\\&=&&L&&\cap (M\setminus R)\\&=&&M&&\cap (L\setminus R)\\\end{alignedat}}} ( L △ M ) ∖ R = ( L ∖ R ) △ ( M ∖ R ) = ( L ∪ R ) △ ( M ∪ R ) {\displaystyle {\begin{alignedat}{4}(L\,\triangle \,M)\setminus R&=(L\setminus R)~&&\triangle ~(M\setminus R)\\&=(L\cup R)~&&\triangle ~(M\cup R)\\\end{alignedat}}} ==== L ⁎ (M\R) ==== Set subtraction on the right, and parentheses on the right L ∪ ( M ∖ R ) = L ∪ ( M ∖ ( R ∪ L ) ) (the outermost union is disjoint) = [ ( L ∖ M ) ∪ ( R ∩ L ) ] ∪ ( M ∖ R ) (the outermost union is disjoint) = ( L ∖ ( M ∪ R ) ) ∪ ( R ∩ L ) ∪ ( M ∖ R ) (the three outermost sets are pairwise disjoint) {\displaystyle {\begin{alignedat}{3}L\cup (M\setminus R)&=&&&&L&&\cup \;&&(M\setminus (R\cup L))&&~~~{\text{ (the outermost union is disjoint) }}\\&=[&&(&&L\setminus M)&&\cup \;&&(R\cap L)]\cup (M\setminus R)&&~~~{\text{ (the outermost union is disjoint) }}\\&=&&(&&L\setminus (M\cup R))\;&&\;\cup &&(R\cap L)\,\,\cup (M\setminus R)&&~~~{\text{ (the three outermost sets are pairwise disjoint) }}\\\end{alignedat}}} L ∩ ( M ∖ R ) = ( L ∩ M ) ∖ ( L ∩ R ) (Distributive law of ∩ over ∖ ) = ( L ∩ M ) ∖ R = M ∩ ( L ∖ R ) = ( L ∖ R ) ∩ ( M ∖ R ) {\displaystyle {\begin{alignedat}{4}L\cap (M\setminus R)&=(&&L\cap M)&&\setminus (L\cap R)~~~{\text{ (Distributive law of }}\cap {\text{ over }}\setminus {\text{ )}}\\&=(&&L\cap M)&&\setminus R\\&=&&M&&\cap (L\setminus R)\\&=(&&L\setminus R)&&\cap (M\setminus R)\\\end{alignedat}}} L × ( M ∖ R ) = ( L × M ) ∖ ( L × R ) (Distributivity) {\displaystyle L\times (M\,\setminus R)=(L\times M)\,\setminus (L\times R)~~~~~{\text{ (Distributivity)}}} === Three operations on three sets === ==== (L • M) ⁎ (M • R) ==== Operations of the form ( L ∙ M ) ∗ ( M ∙ R ) {\displaystyle (L\bullet M)\ast (M\bullet R)} : ( L ∪ M ) ∪ ( M ∪ R ) = L ∪ M ∪ R ( L ∪ M ) ∩ ( M ∪ R ) = M ∪ ( L ∩ R ) ( L ∪ M ) ∖ ( M ∪ R ) = L ∖ ( M ∪ R ) ( L ∪ M ) △ ( M ∪ R ) = ( L ∖ ( M ∪ R ) ) ∪ ( R ∖ ( L ∪ M ) ) = ( L △ R ) ∖ M ( L ∩ M ) ∪ ( M ∩ R ) = M ∩ ( L ∪ R ) ( L ∩ M ) ∩ ( M ∩ R ) = L ∩ M ∩ R ( L ∩ M ) ∖ ( M ∩ R ) = ( L ∩ M ) ∖ R ( L ∩ M ) △ ( M ∩ R ) = [ ( L ∩ M ) ∪ ( M ∩ R ) ] ∖ ( L ∩ M ∩ R ) ( L ∖ M ) ∪ ( M ∖ R ) = ( L ∪ M ) ∖ ( M ∩ R ) ( L ∖ M ) ∩ ( M ∖ R ) = ∅ ( L ∖ M ) ∖ ( M ∖ R ) = L ∖ M ( L ∖ M ) △ ( M ∖ R ) = ( L ∖ M ) ∪ ( M ∖ R ) = ( L ∪ M ) ∖ ( M ∩ R ) ( L △ M ) ∪ ( M △ R ) = ( L ∪ M ∪ R ) ∖ ( L ∩ M ∩ R ) ( L △ M ) ∩ ( M △ R ) = ( ( L ∩ R ) ∖ M ) ∪ ( M ∖ ( L ∪ R ) ) ( L △ M ) ∖ ( M △ R ) = ( L ∖ ( M ∪ R ) ) ∪ ( ( M ∩ R ) ∖ L ) ( L △ M ) △ ( M △ R ) = L △ R {\displaystyle {\begin{alignedat}{9}(L\cup M)&\,\cup \,&&(&&M\cup R)&&&&\;=\;\;&&L\cup M\cup R\\[1.4ex](L\cup M)&\,\cap \,&&(&&M\cup R)&&&&\;=\;\;&&M\cup (L\cap R)\\[1.4ex](L\cup M)&\,\setminus \,&&(&&M\cup R)&&&&\;=\;\;&&L\,\setminus \,(M\cup R)\\[1.4ex](L\cup 
M)&\,\triangle \,&&(&&M\cup R)&&&&\;=\;\;&&(L\,\setminus \,(M\cup R))\,\cup \,(R\,\setminus \,(L\cup M))\\[1.4ex]&\,&&\,&&\,&&&&\;=\;\;&&(L\,\triangle \,R)\,\setminus \,M\\[1.4ex](L\cap M)&\,\cup \,&&(&&M\cap R)&&&&\;=\;\;&&M\cap (L\cup R)\\[1.4ex](L\cap M)&\,\cap \,&&(&&M\cap R)&&&&\;=\;\;&&L\cap M\cap R\\[1.4ex](L\cap M)&\,\setminus \,&&(&&M\cap R)&&&&\;=\;\;&&(L\cap M)\,\setminus \,R\\[1.4ex](L\cap M)&\,\triangle \,&&(&&M\cap R)&&&&\;=\;\;&&[(L\,\cap M)\cup (M\,\cap R)]\,\setminus \,(L\,\cap M\,\cap R)\\[1.4ex](L\,\setminus M)&\,\cup \,&&(&&M\,\setminus R)&&&&\;=\;\;&&(L\,\cup M)\,\setminus (M\,\cap \,R)\\[1.4ex](L\,\setminus M)&\,\cap \,&&(&&M\,\setminus R)&&&&\;=\;\;&&\varnothing \\[1.4ex](L\,\setminus M)&\,\setminus \,&&(&&M\,\setminus R)&&&&\;=\;\;&&L\,\setminus M\\[1.4ex](L\,\setminus M)&\,\triangle \,&&(&&M\,\setminus R)&&&&\;=\;\;&&(L\,\setminus M)\cup (M\,\setminus R)\\[1.4ex]&\,&&\,&&\,&&&&\;=\;\;&&(L\,\cup M)\setminus (M\,\cap R)\\[1.4ex](L\,\triangle \,M)&\,\cup \,&&(&&M\,\triangle \,R)&&&&\;=\;\;&&(L\,\cup \,M\,\cup \,R)\,\setminus \,(L\,\cap \,M\,\cap \,R)\\[1.4ex](L\,\triangle \,M)&\,\cap \,&&(&&M\,\triangle \,R)&&&&\;=\;\;&&((L\,\cap \,R)\,\setminus \,M)\,\cup \,(M\,\setminus \,(L\,\cup \,R))\\[1.4ex](L\,\triangle \,M)&\,\setminus \,&&(&&M\,\triangle \,R)&&&&\;=\;\;&&(L\,\setminus \,(M\,\cup \,R))\,\cup \,((M\,\cap \,R)\,\setminus \,L)\\[1.4ex](L\,\triangle \,M)&\,\triangle \,&&(&&M\,\triangle \,R)&&&&\;=\;\;&&L\,\triangle \,R\\[1.7ex]\end{alignedat}}} ==== (L • M) ⁎ (R\M) ==== Operations of the form ( L ∙ M ) ∗ ( R ∖ M ) {\displaystyle (L\bullet M)\ast (R\,\setminus \,M)} : ( L ∪ M ) ∪ ( R ∖ M ) = L ∪ M ∪ R ( L ∪ M ) ∩ ( R ∖ M ) = ( L ∩ R ) ∖ M ( L ∪ M ) ∖ ( R ∖ M ) = M ∪ ( L ∖ R ) ( L ∪ M ) △ ( R ∖ M ) = M ∪ ( L △ R ) ( L ∩ M ) ∪ ( R ∖ M ) = [ L ∩ ( M ∪ R ) ] ∪ [ R ∖ ( L ∪ M ) ] (disjoint union) = ( L ∩ M ) △ ( R ∖ M ) ( L ∩ M ) ∩ ( R ∖ M ) = ∅ ( L ∩ M ) ∖ ( R ∖ M ) = L ∩ M ( L ∩ M ) △ ( R ∖ M ) = ( L ∩ M ) ∪ ( R ∖ M ) (disjoint union) ( L ∖ M ) ∪ ( R ∖ M ) = L ∪ R ∖ M ( L ∖ M ) ∩ ( R ∖ M ) = ( L ∩ R ) ∖ M ( L ∖ M ) ∖ ( R ∖ M ) = L ∖ ( M ∪ R ) ( L ∖ M ) △ ( R ∖ M ) = ( L △ R ) ∖ M ( L △ M ) ∪ ( R ∖ M ) = ( L ∪ M ∪ R ) ∖ ( L ∩ M ) ( L △ M ) ∩ ( R ∖ M ) = ( L ∩ R ) ∖ M ( L △ M ) ∖ ( R ∖ M ) = [ L ∖ ( M ∪ R ) ] ∪ ( M ∖ L ) (disjoint union) = ( L △ M ) ∖ ( L ∩ R ) ( L △ M ) △ ( R ∖ M ) = L △ ( M ∪ R ) {\displaystyle {\begin{alignedat}{9}(L\cup M)&\,\cup \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\cup M\cup R\\[1.4ex](L\cup M)&\,\cap \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cap R)\,\setminus \,M\\[1.4ex](L\cup M)&\,\setminus \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&M\cup (L\,\setminus \,R)\\[1.4ex](L\cup M)&\,\triangle \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&M\cup (L\,\triangle \,R)\\[1.4ex](L\cap M)&\,\cup \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&[L\cap (M\cup R)]\cup [R\,\setminus \,(L\cup M)]\qquad {\text{ (disjoint union)}}\\[1.4ex]&\,&&\,&&\,&&&&\;=\;\;&&(L\cap M)\,\triangle \,(R\,\setminus \,M)\\[1.4ex](L\cap M)&\,\cap \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&\varnothing \\[1.4ex](L\cap M)&\,\setminus \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\cap M\\[1.4ex](L\cap M)&\,\triangle \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cap M)\cup (R\,\setminus \,M)\qquad {\text{ (disjoint union)}}\\[1.4ex](L\,\setminus \,M)&\,\cup \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\cup R\,\setminus \,M\\[1.4ex](L\,\setminus \,M)&\,\cap \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cap R)\,\setminus \,M\\[1.4ex](L\,\setminus \,M)&\,\setminus \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\,\setminus \,(M\cup R)\\[1.4ex](L\,\setminus 
\,M)&\,\triangle \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\,\triangle \,R)\,\setminus \,M\\[1.4ex](L\,\triangle \,M)&\,\cup \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cup M\cup R)\,\setminus \,(L\cap M)\\[1.4ex](L\,\triangle \,M)&\,\cap \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&(L\cap R)\,\setminus \,M\\[1.4ex](L\,\triangle \,M)&\,\setminus \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&[L\,\setminus \,(M\cup R)]\cup (M\,\setminus \,L)\qquad {\text{ (disjoint union)}}\\[1.4ex]&\,&&\,&&\,&&&&\;=\;\;&&(L\,\triangle \,M)\setminus (L\,\cap R)\\[1.4ex](L\,\triangle \,M)&\,\triangle \,&&(&&R\,\setminus \,M)&&&&\;=\;\;&&L\,\triangle \,(M\cup R)\\[1.7ex]\end{alignedat}}} ==== (L\M) ⁎ (L\R) ==== Operations of the form ( L ∖ M ) ∗ ( L ∖ R ) {\displaystyle (L\,\setminus \,M)\ast (L\,\setminus \,R)} : ( L ∖ M ) ∪ ( L ∖ R ) = L ∖ ( M ∩ R ) ( L ∖ M ) ∩ ( L ∖ R ) = L ∖ ( M ∪ R ) ( L ∖ M ) ∖ ( L ∖ R ) = ( L ∩ R ) ∖ M ( L ∖ M ) △ ( L ∖ R ) = L ∩ ( M △ R ) = ( L ∩ M ) △ ( L ∩ R ) {\displaystyle {\begin{alignedat}{9}(L\,\setminus M)&\,\cup \,&&(&&L\,\setminus R)&&\;=\;&&L\,\setminus \,(M\,\cap \,R)\\[1.4ex](L\,\setminus M)&\,\cap \,&&(&&L\,\setminus R)&&\;=\;&&L\,\setminus \,(M\,\cup \,R)\\[1.4ex](L\,\setminus M)&\,\setminus \,&&(&&L\,\setminus R)&&\;=\;&&(L\,\cap \,R)\,\setminus \,M\\[1.4ex](L\,\setminus M)&\,\triangle \,&&(&&L\,\setminus R)&&\;=\;&&L\,\cap \,(M\,\triangle \,R)\\[1.4ex]&\,&&\,&&\,&&\;=\;&&(L\cap M)\,\triangle \,(L\cap R)\\[1.4ex]\end{alignedat}}} === Other simplifications === Other properties: L ∩ M = R and L ∩ R = M if and only if M = R ⊆ L . {\displaystyle L\cap M=R\;{\text{ and }}\;L\cap R=M\qquad {\text{ if and only if }}\qquad M=R\subseteq L.} If L ⊆ M {\displaystyle L\subseteq M} then L ∖ R = L ∩ ( M ∖ R ) . {\displaystyle L\setminus R=L\cap (M\setminus R).} L × ( M ∖ R ) = ( L × M ) ∖ ( L × R ) {\displaystyle L\times (M\,\setminus R)=(L\times M)\,\setminus (L\times R)} If L ⊆ R {\displaystyle L\subseteq R} then M ∖ R ⊆ M ∖ L . {\displaystyle M\setminus R\subseteq M\setminus L.} L ∩ M ∩ R = ∅ {\displaystyle L\cap M\cap R=\varnothing } if and only if for any x ∈ L ∪ M ∪ R , {\displaystyle x\in L\cup M\cup R,} x {\displaystyle x} belongs to at most two of the sets L , M , and R . {\displaystyle L,M,{\text{ and }}R.} == Symmetric difference ∆ of finitely many sets == Given finitely many sets L 1 , … , L n , {\displaystyle L_{1},\ldots ,L_{n},} something belongs to their symmetric difference if and only if it belongs to an odd number of these sets. Explicitly, for any x , {\displaystyle x,} x ∈ L 1 △ ⋯ △ L n {\displaystyle x\in L_{1}\triangle \cdots \triangle L_{n}} if and only if the cardinality | { i : x ∈ L i } | {\displaystyle \left|\left\{i:x\in L_{i}\right\}\right|} is odd. (Recall that symmetric difference is associative so parentheses are not needed for the set L 1 △ ⋯ △ L n {\displaystyle L_{1}\triangle \cdots \triangle L_{n}} ). 
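This odd-membership characterization is easy to test computationally. The following is a minimal sketch in Python (the particular sets are arbitrary examples chosen for illustration), using the built-in ^ operator for symmetric difference:

```python
# Check the odd-membership rule for the n-ary symmetric difference:
# x ∈ L1 △ ... △ Ln  iff  x belongs to an odd number of the sets Li.
from functools import reduce

sets = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {1, 5}]

# Fold the binary symmetric difference over the family (associativity
# makes the grouping irrelevant).
sym_diff = reduce(lambda a, b: a ^ b, sets)

universe = set().union(*sets)
odd_members = {x for x in universe if sum(x in L for L in sets) % 2 == 1}

assert sym_diff == odd_members  # here both sides equal {3}
```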
Consequently, the symmetric difference of three sets satisfies: L △ M △ R = ( L ∩ M ∩ R ) ∪ { x : x belongs to exactly one of the sets L , M , R } (the union is disjoint) = [ L ∩ M ∩ R ] ∪ [ L ∖ ( M ∪ R ) ] ∪ [ M ∖ ( L ∪ R ) ] ∪ [ R ∖ ( L ∪ M ) ] (all 4 sets enclosed by [ ] are pairwise disjoint) {\displaystyle {\begin{alignedat}{4}L\,\triangle \,M\,\triangle \,R&=(L\cap M\cap R)\cup \{x:x{\text{ belongs to exactly one of the sets }}L,M,R\}~~~~~~{\text{ (the union is disjoint) }}\\&=[L\cap M\cap R]\cup [L\setminus (M\cup R)]\cup [M\setminus (L\cup R)]\cup [R\setminus (L\cup M)]~~~~~~~~~{\text{ (all 4 sets enclosed by [ ] are pairwise disjoint) }}\\\end{alignedat}}} == Cartesian products ⨯ of finitely many sets == === Binary ⨯ distributes over ⋃ and ⋂ and \ and ∆ === The binary Cartesian product ⨯ distributes over unions, intersections, set subtraction, and symmetric difference: ( L ∩ M ) × R = ( L × R ) ∩ ( M × R ) (Right-distributivity of × over ∩ ) ( L ∪ M ) × R = ( L × R ) ∪ ( M × R ) (Right-distributivity of × over ∪ ) ( L ∖ M ) × R = ( L × R ) ∖ ( M × R ) (Right-distributivity of × over ∖ ) ( L △ M ) × R = ( L × R ) △ ( M × R ) (Right-distributivity of × over △ ) {\displaystyle {\begin{alignedat}{9}(L\,\cap \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\cap \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex](L\,\cup \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\cup \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex](L\,\setminus \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\setminus \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\setminus \,{\text{)}}\\[1.4ex](L\,\triangle \,M)\,\times \,R~&~~=~~&&(L\,\times \,R)\,&&\triangle \,&&(M\,\times \,R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex]\end{alignedat}}} L × ( M ∩ R ) = ( L × M ) ∩ ( L × R ) (Left-distributivity of × over ∩ ) L × ( M ∪ R ) = ( L × M ) ∪ ( L × R ) (Left-distributivity of × over ∪ ) L × ( M ∖ R ) = ( L × M ) ∖ ( L × R ) (Left-distributivity of × over ∖ ) L × ( M △ R ) = ( L × M ) △ ( L × R ) (Left-distributivity of × over △ ) {\displaystyle {\begin{alignedat}{5}L\times (M\cap R)&\;=\;\;&&(L\times M)\cap (L\times R)\qquad &&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cap \,{\text{)}}\\[1.4ex]L\times (M\cup R)&\;=\;\;&&(L\times M)\cup (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\times (M\setminus R)&\;=\;\;&&(L\times M)\setminus (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\setminus \,{\text{)}}\\[1.4ex]L\times (M\triangle R)&\;=\;\;&&(L\times M)\triangle (L\times R)&&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\triangle \,{\text{)}}\\[1.4ex]\end{alignedat}}} But in general, ⨯ does not distribute over itself: L × ( M × R ) ≠ ( L × M ) × ( L × R ) {\displaystyle L\times (M\times R)~\color {Red}{\neq }\color {Black}{}~(L\times M)\times (L\times R)} ( L × M ) × R ≠ ( L × R ) × ( M × R ) . 
{\displaystyle (L\times M)\times R~\color {Red}{\neq }\color {Black}{}~(L\times R)\times (M\times R).} === Binary ⋂ of finite ⨯ === ( L × R ) ∩ ( L 2 × R 2 ) = ( L ∩ L 2 ) × ( R ∩ R 2 ) {\displaystyle (L\times R)\cap \left(L_{2}\times R_{2}\right)~=~\left(L\cap L_{2}\right)\times \left(R\cap R_{2}\right)} ( L × M × R ) ∩ ( L 2 × M 2 × R 2 ) = ( L ∩ L 2 ) × ( M ∩ M 2 ) × ( R ∩ R 2 ) {\displaystyle (L\times M\times R)\cap \left(L_{2}\times M_{2}\times R_{2}\right)~=~\left(L\cap L_{2}\right)\times \left(M\cap M_{2}\right)\times \left(R\cap R_{2}\right)} === Binary ⋃ of finite ⨯ === ( L × R ) ∪ ( L 2 × R 2 ) = [ ( L ∖ L 2 ) × R ] ∪ [ ( L 2 ∖ L ) × R 2 ] ∪ [ ( L ∩ L 2 ) × ( R ∪ R 2 ) ] = [ L × ( R ∖ R 2 ) ] ∪ [ L 2 × ( R 2 ∖ R ) ] ∪ [ ( L ∪ L 2 ) × ( R ∩ R 2 ) ] {\displaystyle {\begin{alignedat}{9}\left(L\times R\right)~\cup ~\left(L_{2}\times R_{2}\right)~&=~\left[\left(L\setminus L_{2}\right)\times R\right]~\cup ~\left[\left(L_{2}\setminus L\right)\times R_{2}\right]~\cup ~\left[\left(L\cap L_{2}\right)\times \left(R\cup R_{2}\right)\right]\\[0.5ex]~&=~\left[L\times \left(R\setminus R_{2}\right)\right]~\cup ~\left[L_{2}\times \left(R_{2}\setminus R\right)\right]~\cup ~\left[\left(L\cup L_{2}\right)\times \left(R\cap R_{2}\right)\right]\\\end{alignedat}}} === Difference \ of finite ⨯ === ( L × R ) ∖ ( L 2 × R 2 ) = [ ( L ∖ L 2 ) × R ] ∪ [ L × ( R ∖ R 2 ) ] {\displaystyle {\begin{alignedat}{9}\left(L\times R\right)~\setminus ~\left(L_{2}\times R_{2}\right)~&=~\left[\left(L\,\setminus \,L_{2}\right)\times R\right]~\cup ~\left[L\times \left(R\,\setminus \,R_{2}\right)\right]\\\end{alignedat}}} and ( L × M × R ) ∖ ( L 2 × M 2 × R 2 ) = [ ( L ∖ L 2 ) × M × R ] ∪ [ L × ( M ∖ M 2 ) × R ] ∪ [ L × M × ( R ∖ R 2 ) ] {\displaystyle (L\times M\times R)~\setminus ~\left(L_{2}\times M_{2}\times R_{2}\right)~=~\left[\left(L\,\setminus \,L_{2}\right)\times M\times R\right]~\cup ~\left[L\times \left(M\,\setminus \,M_{2}\right)\times R\right]~\cup ~\left[L\times M\times \left(R\,\setminus \,R_{2}\right)\right]} === Finite ⨯ of differences \ === ( L ∖ L 2 ) × ( R ∖ R 2 ) = ( L × R ) ∖ [ ( L 2 × R ) ∪ ( L × R 2 ) ] {\displaystyle \left(L\,\setminus \,L_{2}\right)\times \left(R\,\setminus \,R_{2}\right)~=~\left(L\times R\right)\,\setminus \,\left[\left(L_{2}\times R\right)\cup \left(L\times R_{2}\right)\right]} ( L ∖ L 2 ) × ( M ∖ M 2 ) × ( R ∖ R 2 ) = ( L × M × R ) ∖ [ ( L 2 × M × R ) ∪ ( L × M 2 × R ) ∪ ( L × M × R 2 ) ] {\displaystyle \left(L\,\setminus \,L_{2}\right)\times \left(M\,\setminus \,M_{2}\right)\times \left(R\,\setminus \,R_{2}\right)~=~\left(L\times M\times R\right)\,\setminus \,\left[\left(L_{2}\times M\times R\right)\cup \left(L\times M_{2}\times R\right)\cup \left(L\times M\times R_{2}\right)\right]} === Symmetric difference ∆ and finite ⨯ === L × ( R △ R 2 ) = [ L × ( R ∖ R 2 ) ] ∪ [ L × ( R 2 ∖ R ) ] {\displaystyle L\times \left(R\,\triangle \,R_{2}\right)~=~\left[L\times \left(R\,\setminus \,R_{2}\right)\right]\,\cup \,\left[L\times \left(R_{2}\,\setminus \,R\right)\right]} ( L △ L 2 ) × R = [ ( L ∖ L 2 ) × R ] ∪ [ ( L 2 ∖ L ) × R ] {\displaystyle \left(L\,\triangle \,L_{2}\right)\times R~=~\left[\left(L\,\setminus \,L_{2}\right)\times R\right]\,\cup \,\left[\left(L_{2}\,\setminus \,L\right)\times R\right]} ( L △ L 2 ) × ( R △ R 2 ) = [ ( L ∪ L 2 ) × ( R ∪ R 2 ) ] ∖ [ ( ( L ∩ L 2 ) × ( R ∪ R 2 ) ) ∪ ( ( L ∪ L 2 ) × ( R ∩ R 2 ) ) ] = [ ( L ∖ L 2 ) × ( R 2 ∖ R ) ] ∪ [ ( L 2 ∖ L ) × ( R 2 ∖ R ) ] ∪ [ ( L ∖ L 2 ) × ( R ∖ R 2 ) ] ∪ [ ( L 2 ∖ L ) × ( R ∖ R 2 ) ] {\displaystyle {\begin{alignedat}{4}\left(L\,\triangle \,L_{2}\right)\times \left(R\,\triangle \,R_{2}\right)~&=~&&&&\,\left[\left(L\cup L_{2}\right)\times \left(R\cup R_{2}\right)\right]\;\setminus \;\left[\left(\left(L\cap L_{2}\right)\times \left(R\cup R_{2}\right)\right)\;\cup \;\left(\left(L\cup L_{2}\right)\times \left(R\cap R_{2}\right)\right)\right]\\[0.7ex]&=~&&&&\,\left[\left(L\,\setminus \,L_{2}\right)\times \left(R_{2}\,\setminus \,R\right)\right]\,\cup \,\left[\left(L_{2}\,\setminus \,L\right)\times \left(R_{2}\,\setminus \,R\right)\right]\,\cup \,\left[\left(L\,\setminus \,L_{2}\right)\times \left(R\,\setminus \,R_{2}\right)\right]\,\cup \,\left[\left(L_{2}\,\setminus \,L\right)\times \left(R\,\setminus \,R_{2}\right)\right]\\\end{alignedat}}} ( L △ L 2 ) × ( M △ M 2 ) × ( R △ R 2 ) = [ ( L ∪ L 2 ) × ( M ∪ M 2 ) × ( R ∪ R 2 ) ] ∖ [ ( ( L ∩ L 2 ) × ( M ∪ M 2 ) × ( R ∪ R 2 ) ) ∪ ( ( L ∪ L 2 ) × ( M ∩ M 2 ) × ( R ∪ R 2 ) ) ∪ ( ( L ∪ L 2 ) × ( M ∪ M 2 ) × ( R ∩ R 2 ) ) ] {\displaystyle {\begin{alignedat}{4}\left(L\,\triangle \,L_{2}\right)\times \left(M\,\triangle \,M_{2}\right)\times \left(R\,\triangle \,R_{2}\right)~&=~\left[\left(L\cup L_{2}\right)\times \left(M\cup M_{2}\right)\times \left(R\cup R_{2}\right)\right]\;\setminus \;\left[\left(\left(L\cap L_{2}\right)\times \left(M\cup M_{2}\right)\times \left(R\cup R_{2}\right)\right)\;\cup \;\left(\left(L\cup L_{2}\right)\times \left(M\cap M_{2}\right)\times \left(R\cup R_{2}\right)\right)\;\cup \;\left(\left(L\cup L_{2}\right)\times \left(M\cup M_{2}\right)\times \left(R\cap R_{2}\right)\right)\right]\\\end{alignedat}}} In general, ( L △ L 2 ) × ( R △ R 2 ) {\displaystyle \left(L\,\triangle \,L_{2}\right)\times \left(R\,\triangle \,R_{2}\right)} need not be a subset nor a superset of ( L × R ) △ ( L 2 × R 2 ) . {\displaystyle \left(L\times R\right)\,\triangle \,\left(L_{2}\times R_{2}\right).} ( L × R ) △ ( L 2 × R 2 ) = ( L × R ) ∪ ( L 2 × R 2 ) ∖ [ ( L ∩ L 2 ) × ( R ∩ R 2 ) ] {\displaystyle {\begin{alignedat}{4}\left(L\times R\right)\,\triangle \,\left(L_{2}\times R_{2}\right)~&=~&&\left(L\times R\right)\cup \left(L_{2}\times R_{2}\right)\;\setminus \;\left[\left(L\cap L_{2}\right)\times \left(R\cap R_{2}\right)\right]\\[0.7ex]\end{alignedat}}} ( L × M × R ) △ ( L 2 × M 2 × R 2 ) = ( L × M × R ) ∪ ( L 2 × M 2 × R 2 ) ∖ [ ( L ∩ L 2 ) × ( M ∩ M 2 ) × ( R ∩ R 2 ) ] {\displaystyle {\begin{alignedat}{4}\left(L\times M\times R\right)\,\triangle \,\left(L_{2}\times M_{2}\times R_{2}\right)~&=~&&\left(L\times M\times R\right)\cup \left(L_{2}\times M_{2}\times R_{2}\right)\;\setminus \;\left[\left(L\cap L_{2}\right)\times \left(M\cap M_{2}\right)\times \left(R\cap R_{2}\right)\right]\\[0.7ex]\end{alignedat}}} == Arbitrary families of sets == Let ( L i ) i ∈ I , {\displaystyle \left(L_{i}\right)_{i\in I},} ( R j ) j ∈ J , {\displaystyle \left(R_{j}\right)_{j\in J},} and ( S i , j ) ( i , j ) ∈ I × J {\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}} be indexed families of sets. Whenever the assumption is needed, then all indexing sets, such as I {\displaystyle I} and J , {\displaystyle J,} are assumed to be non-empty. === Definitions === A family of sets or (more briefly) a family refers to a set whose elements are sets. An indexed family of sets is a function from some set, called its indexing set, into some family of sets. An indexed family of sets will be denoted by ( L i ) i ∈ I , {\displaystyle \left(L_{i}\right)_{i\in I},} where this notation assigns the symbol I {\displaystyle I} for the indexing set and for every index i ∈ I , {\displaystyle i\in I,} assigns the symbol L i {\displaystyle L_{i}} to the value of the function at i .
{\displaystyle i.} The function itself may then be denoted by the symbol L ∙ , {\displaystyle L_{\bullet },} which is obtained from the notation ( L i ) i ∈ I {\displaystyle \left(L_{i}\right)_{i\in I}} by replacing the index i {\displaystyle i} with a bullet symbol ∙ ; {\displaystyle \bullet \,;} explicitly, L ∙ {\displaystyle L_{\bullet }} is the function: L ∙ : I → { L i : i ∈ I } i ↦ L i {\displaystyle {\begin{alignedat}{4}L_{\bullet }:\;&&I&&\;\to \;&\left\{L_{i}:i\in I\right\}\\[0.3ex]&&i&&\;\mapsto \;&L_{i}\\\end{alignedat}}} which may be summarized by writing L ∙ = ( L i ) i ∈ I . {\displaystyle L_{\bullet }=\left(L_{i}\right)_{i\in I}.} Any given indexed family of sets L ∙ = ( L i ) i ∈ I {\displaystyle L_{\bullet }=\left(L_{i}\right)_{i\in I}} (which is a function) can be canonically associated with its image/range Im ⁡ L ∙ = def { L i : i ∈ I } {\displaystyle \operatorname {Im} L_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\left\{L_{i}:i\in I\right\}} (which is a family of sets). Conversely, any given family of sets B {\displaystyle {\mathcal {B}}} may be associated with the B {\displaystyle {\mathcal {B}}} -indexed family of sets ( B ) B ∈ B , {\displaystyle (B)_{B\in {\mathcal {B}}},} which is technically the identity map B → B . {\displaystyle {\mathcal {B}}\to {\mathcal {B}}.} However, this is not a bijective correspondence because an indexed family of sets L ∙ = ( L i ) i ∈ I {\displaystyle L_{\bullet }=\left(L_{i}\right)_{i\in I}} is not required to be injective (that is, there may exist distinct indices i ≠ j {\displaystyle i\neq j} such that L i = L j {\displaystyle L_{i}=L_{j}} ), which in particular means that it is possible for distinct indexed families of sets (which are functions) to be associated with the same family of sets (by having the same image/range). Arbitrary unions defined If I = ∅ {\displaystyle I=\varnothing } then ⋃ i ∈ ∅ L i = { x : there exists i ∈ ∅ such that x ∈ L i } = ∅ , {\displaystyle \bigcup _{i\in \varnothing }L_{i}=\{x~:~{\text{ there exists }}i\in \varnothing {\text{ such that }}x\in L_{i}\}=\varnothing ,} which is sometimes called the nullary union convention (despite being called a convention, this equality follows from the definition). If B {\displaystyle {\mathcal {B}}} is a family of sets then ∪ B {\displaystyle \cup {\mathcal {B}}} denotes the set: ⋃ B = def ⋃ B ∈ B B = def { x : there exists B ∈ B such that x ∈ B } . {\displaystyle \bigcup {\mathcal {B}}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcup _{B\in {\mathcal {B}}}B~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{x~:~{\text{ there exists }}B\in {\mathcal {B}}{\text{ such that }}x\in B\}.} Arbitrary intersections defined If I ≠ ∅ {\displaystyle I\neq \varnothing } then ⋂ i ∈ I L i = { x : x ∈ L i for every i ∈ I } . {\displaystyle \bigcap _{i\in I}L_{i}=\{x~:~x\in L_{i}{\text{ for every }}i\in I\}.} If B ≠ ∅ {\displaystyle {\mathcal {B}}\neq \varnothing } is a non-empty family of sets then ∩ B {\displaystyle \cap {\mathcal {B}}} denotes the set: ⋂ B = def ⋂ B ∈ B B = def { x : x ∈ B for every B ∈ B } = { x : for all B , if B ∈ B then x ∈ B } .
{\displaystyle \bigcap {\mathcal {B}}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcap _{B\in {\mathcal {B}}}B~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{x~:~x\in B{\text{ for every }}B\in {\mathcal {B}}\}~=~\{x~:~{\text{ for all }}B,{\text{ if }}B\in {\mathcal {B}}{\text{ then }}x\in B\}.} Nullary intersections If I = ∅ {\displaystyle I=\varnothing } then ⋂ i ∈ ∅ L i = { x : for all i , if i ∈ ∅ then x ∈ L i } {\displaystyle \bigcap _{i\in \varnothing }L_{i}=\{x~:~{\text{ for all }}i,{\text{ if }}i\in \varnothing {\text{ then }}x\in L_{i}\}} where every possible thing x {\displaystyle x} in the universe vacuously satisfies the condition: "if i ∈ ∅ {\displaystyle i\in \varnothing } then x ∈ L i {\displaystyle x\in L_{i}} ". Consequently, ⋂ i ∈ ∅ L i = { x : true } {\displaystyle {\textstyle \bigcap \limits _{i\in \varnothing }}L_{i}=\{x:{\text{ true }}\}} consists of everything in the universe. So if I = ∅ {\displaystyle I=\varnothing } and: if you are working in a model in which there exists some universe set X {\displaystyle X} then ⋂ i ∈ ∅ L i = { x : x ∈ L i for every i ∈ ∅ } = X . {\displaystyle {\textstyle \bigcap \limits _{i\in \varnothing }}L_{i}=\{x~:~x\in L_{i}{\text{ for every }}i\in \varnothing \}~=~X.} otherwise, if you are working in a model in which "the class of all things x {\displaystyle x} " is not a set (by far the most common situation) then ⋂ i ∈ ∅ L i {\displaystyle {\textstyle \bigcap \limits _{i\in \varnothing }}L_{i}} is undefined because ⋂ i ∈ ∅ L i {\displaystyle {\textstyle \bigcap \limits _{i\in \varnothing }}L_{i}} consists of everything, which makes ⋂ i ∈ ∅ L i {\displaystyle {\textstyle \bigcap \limits _{i\in \varnothing }}L_{i}} a proper class and not a set. Assumption: Henceforth, whenever a formula requires some indexing set to be non-empty in order for an arbitrary intersection to be well-defined, then this will automatically be assumed without mention. A consequence of this is the following assumption/definition: A finite intersection of sets or an intersection of finitely many sets refers to the intersection of a finite collection of one or more sets. Some authors adopt the so-called nullary intersection convention, which is the convention that an empty intersection of sets is equal to some canonical set. In particular, if all sets are subsets of some set X {\displaystyle X} then some authors may declare that the empty intersection of these sets is equal to X . {\displaystyle X.} However, the nullary intersection convention is not as commonly accepted as the nullary union convention and this article will not adopt it (this is because, unlike the empty union, the value of the empty intersection depends on X , {\displaystyle X,} so if there are multiple sets under consideration, as is commonly the case, the value of the empty intersection risks becoming ambiguous).
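These two conventions can be mirrored computationally. Below is a small Python sketch (the universe X and the empty family are illustrative assumptions): the empty union is unambiguous, while the empty intersection only acquires a value once a universe is supplied explicitly.

```python
# Nullary union vs. nullary intersection with Python sets.
from functools import reduce

X = set(range(10))   # an ambient/universe set chosen for this example
family = []          # an empty family of sets (the case I = ∅)

# Nullary union: the union over an empty family is the empty set.
assert set().union(*family) == set()

# Nullary intersection: with no universe there is no canonical value;
# reduce() with no initial value raises TypeError on an empty family.
try:
    reduce(set.intersection, family)
except TypeError:
    pass  # mirrors the fact that the empty intersection is undefined

# Supplying X as the initial value implements the nullary intersection
# convention "empty intersection = X" relative to this universe.
assert reduce(set.intersection, family, X) == X
```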
Multiple index sets ⋃ j ∈ J i ∈ I , S i , j = def ⋃ ( i , j ) ∈ I × J S i , j {\displaystyle \bigcup _{\stackrel {i\in I,}{j\in J}}S_{i,j}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcup _{(i,j)\in I\times J}S_{i,j}} ⋂ j ∈ J i ∈ I , S i , j = def ⋂ ( i , j ) ∈ I × J S i , j {\displaystyle \bigcap _{\stackrel {i\in I,}{j\in J}}S_{i,j}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcap _{(i,j)\in I\times J}S_{i,j}} === Distributing unions and intersections === ==== Binary ⋂ of arbitrary ⋃'s ==== ( ⋃ i ∈ I L i ) ∩ R = ⋃ i ∈ I ( L i ∩ R ) {\displaystyle \left(\bigcup _{i\in I}L_{i}\right)\cap R~=~\bigcup _{i\in I}\left(L_{i}\cap R\right)} and ( ⋃ i ∈ I L i ) ∩ ( ⋃ j ∈ J R j ) = ⋃ j ∈ J i ∈ I , ( L i ∩ R j ) {\displaystyle \left(\bigcup _{i\in I}L_{i}\right)\cap \left(\bigcup _{j\in J}R_{j}\right)~=~\bigcup _{\stackrel {i\in I,}{j\in J}}\left(L_{i}\cap R_{j}\right)} If all ( L i ) i ∈ I {\displaystyle \left(L_{i}\right)_{i\in I}} are pairwise disjoint and all ( R j ) j ∈ J {\displaystyle \left(R_{j}\right)_{j\in J}} are also pairwise disjoint, then so are all ( L i ∩ R j ) ( i , j ) ∈ I × J {\displaystyle \left(L_{i}\cap R_{j}\right)_{(i,j)\in I\times J}} (that is, if ( i , j ) ≠ ( i 2 , j 2 ) {\displaystyle (i,j)\neq \left(i_{2},j_{2}\right)} then ( L i ∩ R j ) ∩ ( L i 2 ∩ R j 2 ) = ∅ {\displaystyle \left(L_{i}\cap R_{j}\right)\cap \left(L_{i_{2}}\cap R_{j_{2}}\right)=\varnothing } ). Importantly, if I = J {\displaystyle I=J} then in general, ( ⋃ i ∈ I L i ) ∩ ( ⋃ i ∈ I R i ) ≠ ⋃ i ∈ I ( L i ∩ R i ) {\displaystyle ~\left(\bigcup _{i\in I}L_{i}\right)\cap \left(\bigcup _{i\in I}R_{i}\right)~~\color {Red}{\neq }\color {Black}{}~~\bigcup _{i\in I}\left(L_{i}\cap R_{i}\right)~} (an example of this is given below). The single union on the right hand side must be over all pairs ( i , j ) ∈ I × I : {\displaystyle (i,j)\in I\times I:} ( ⋃ i ∈ I L i ) ∩ ( ⋃ i ∈ I R i ) = ⋃ j ∈ I i ∈ I , ( L i ∩ R j ) . {\displaystyle ~\left(\bigcup _{i\in I}L_{i}\right)\cap \left(\bigcup _{i\in I}R_{i}\right)~~=~~\bigcup _{\stackrel {i\in I,}{j\in I}}\left(L_{i}\cap R_{j}\right).~} The same is usually true for other similar non-trivial set equalities and relations that depend on two (potentially unrelated) indexing sets I {\displaystyle I} and J {\displaystyle J} (such as Eq. 4b or Eq. 7g). Two exceptions are Eq. 2c (unions of unions) and Eq. 2d (intersections of intersections), but both of these are among the most trivial of set equalities (although even for these equalities there is still something that must be proven). Example where equality fails: Let X ≠ ∅ {\displaystyle X\neq \varnothing } and let I = { 1 , 2 } . {\displaystyle I=\{1,2\}.} Let L 1 : = R 2 : = X {\displaystyle L_{1}\colon =R_{2}\colon =X} and let L 2 : = R 1 : = ∅ . {\displaystyle L_{2}\colon =R_{1}\colon =\varnothing .} Then X = X ∩ X = ( L 1 ∪ L 2 ) ∩ ( R 1 ∪ R 2 ) = ( ⋃ i ∈ I L i ) ∩ ( ⋃ i ∈ I R i ) ≠ ⋃ i ∈ I ( L i ∩ R i ) = ( L 1 ∩ R 1 ) ∪ ( L 2 ∩ R 2 ) = ∅ ∪ ∅ = ∅ . {\displaystyle X=X\cap X=\left(L_{1}\cup L_{2}\right)\cap \left(R_{1}\cup R_{2}\right)=\left(\bigcup _{i\in I}L_{i}\right)\cap \left(\bigcup _{i\in I}R_{i}\right)~\neq ~\bigcup _{i\in I}\left(L_{i}\cap R_{i}\right)=\left(L_{1}\cap R_{1}\right)\cup \left(L_{2}\cap R_{2}\right)=\varnothing \cup \varnothing =\varnothing .} Furthermore, ∅ = ∅ ∪ ∅ = ( L 1 ∩ L 2 ) ∪ ( R 1 ∩ R 2 ) = ( ⋂ i ∈ I L i ) ∪ ( ⋂ i ∈ I R i ) ≠ ⋂ i ∈ I ( L i ∪ R i ) = ( L 1 ∪ R 1 ) ∩ ( L 2 ∪ R 2 ) = X ∩ X = X .
{\displaystyle \varnothing =\varnothing \cup \varnothing =\left(L_{1}\cap L_{2}\right)\cup \left(R_{1}\cap R_{2}\right)=\left(\bigcap _{i\in I}L_{i}\right)\cup \left(\bigcap _{i\in I}R_{i}\right)~\neq ~\bigcap _{i\in I}\left(L_{i}\cup R_{i}\right)=\left(L_{1}\cup R_{1}\right)\cap \left(L_{2}\cup R_{2}\right)=X\cap X=X.} ==== Binary ⋃ of arbitrary ⋂'s ==== ( ⋂ i ∈ I L i ) ∪ R = ⋂ i ∈ I ( L i ∪ R ) {\displaystyle \left(\bigcap _{i\in I}L_{i}\right)\cup R~=~\bigcap _{i\in I}\left(L_{i}\cup R\right)} and ( ⋂ i ∈ I L i ) ∪ ( ⋂ j ∈ J R j ) = ⋂ j ∈ J i ∈ I , ( L i ∪ R j ) {\displaystyle \left(\bigcap _{i\in I}L_{i}\right)\cup \left(\bigcap _{j\in J}R_{j}\right)~=~\bigcap _{\stackrel {i\in I,}{j\in J}}\left(L_{i}\cup R_{j}\right)} Importantly, if I = J {\displaystyle I=J} then in general, ( ⋂ i ∈ I L i ) ∪ ( ⋂ i ∈ I R i ) ≠ ⋂ i ∈ I ( L i ∪ R i ) {\displaystyle ~\left(\bigcap _{i\in I}L_{i}\right)\cup \left(\bigcap _{i\in I}R_{i}\right)~~\color {Red}{\neq }\color {Black}{}~~\bigcap _{i\in I}\left(L_{i}\cup R_{i}\right)~} (an example of this is given above). The single intersection on the right hand side must be over all pairs ( i , j ) ∈ I × I : {\displaystyle (i,j)\in I\times I:} ( ⋂ i ∈ I L i ) ∪ ( ⋂ i ∈ I R i ) = ⋂ j ∈ I i ∈ I , ( L i ∪ R j ) . {\displaystyle ~\left(\bigcap _{i\in I}L_{i}\right)\cup \left(\bigcap _{i\in I}R_{i}\right)~~=~~\bigcap _{\stackrel {i\in I,}{j\in I}}\left(L_{i}\cup R_{j}\right).~} ==== Arbitrary ⋂'s and arbitrary ⋃'s ==== ===== Incorrectly distributing by swapping ⋂ and ⋃ ===== Naively swapping ⋃ i ∈ I {\displaystyle \;{\textstyle \bigcup \limits _{i\in I}}\;} and ⋂ j ∈ J {\displaystyle \;{\textstyle \bigcap \limits _{j\in J}}\;} may produce a different set. The following inclusion always holds: ⋃ i ∈ I ⋂ j ∈ J S i , j ⊆ ⋂ j ∈ J ⋃ i ∈ I S i , j (Inclusion 1) {\displaystyle \bigcup _{i\in I}\;\bigcap _{j\in J}S_{i,j}~~{\color {red}{\subseteq }}~~\bigcap _{j\in J}\;\bigcup _{i\in I}S_{i,j}\qquad {\text{ (Inclusion 1)}}} In general, equality need not hold and moreover, the right hand side depends on how for each fixed i ∈ I , {\displaystyle i\in I,} the sets ( S i , j ) j ∈ J {\displaystyle \left(S_{i,j}\right)_{j\in J}} are labelled; and analogously, the left hand side depends on how for each fixed j ∈ J , {\displaystyle j\in J,} the sets ( S i , j ) i ∈ I {\displaystyle \left(S_{i,j}\right)_{i\in I}} are labelled. An example demonstrating this is now given. Example of dependence on labeling and failure of equality: To see why equality need not hold when ∪ {\displaystyle \cup } and ∩ {\displaystyle \cap } are swapped, let I : = J : = { 1 , 2 } , {\displaystyle I\colon =J\colon =\{1,2\},} and let S 11 = { 1 , 2 } , S 12 = { 1 , 3 } , S 21 = { 3 , 4 } , {\displaystyle S_{11}=\{1,2\},~S_{12}=\{1,3\},~S_{21}=\{3,4\},} and S 22 = { 2 , 4 } . {\displaystyle S_{22}=\{2,4\}.} Then { 1 , 4 } = { 1 } ∪ { 4 } = ( S 11 ∩ S 12 ) ∪ ( S 21 ∩ S 22 ) = ⋃ i ∈ I ( ⋂ j ∈ J S i , j ) ≠ ⋂ j ∈ J ( ⋃ i ∈ I S i , j ) = ( S 11 ∪ S 21 ) ∩ ( S 12 ∪ S 22 ) = { 1 , 2 , 3 , 4 } . {\displaystyle \{1,4\}=\{1\}\cup \{4\}=\left(S_{11}\cap S_{12}\right)\cup \left(S_{21}\cap S_{22}\right)=\bigcup _{i\in I}\left(\bigcap _{j\in J}S_{i,j}\right)~\neq ~\bigcap _{j\in J}\left(\bigcup _{i\in I}S_{i,j}\right)=\left(S_{11}\cup S_{21}\right)\cap \left(S_{12}\cup S_{22}\right)=\{1,2,3,4\}.} If S 11 {\displaystyle S_{11}} and S 21 {\displaystyle S_{21}} are swapped while S 12 {\displaystyle S_{12}} and S 22 {\displaystyle S_{22}} are unchanged, which gives rise to the sets S ^ 11 : = { 3 , 4 } , S ^ 12 : = { 1 , 3 } , S ^ 21 : = { 1 , 2 } , {\displaystyle {\hat {S}}_{11}\colon =\{3,4\},~{\hat {S}}_{12}\colon =\{1,3\},~{\hat {S}}_{21}\colon =\{1,2\},} and S ^ 22 : = { 2 , 4 } , {\displaystyle {\hat {S}}_{22}\colon =\{2,4\},} then { 2 , 3 } = { 3 } ∪ { 2 } = ( S ^ 11 ∩ S ^ 12 ) ∪ ( S ^ 21 ∩ S ^ 22 ) = ⋃ i ∈ I ( ⋂ j ∈ J S ^ i , j ) ≠ ⋂ j ∈ J ( ⋃ i ∈ I S ^ i , j ) = ( S ^ 11 ∪ S ^ 21 ) ∩ ( S ^ 12 ∪ S ^ 22 ) = { 1 , 2 , 3 , 4 } .
{\displaystyle \{2,3\}=\{3\}\cup \{2\}=\left({\hat {S}}_{11}\cap {\hat {S}}_{12}\right)\cup \left({\hat {S}}_{21}\cap {\hat {S}}_{22}\right)=\bigcup _{i\in I}\left(\bigcap _{j\in J}{\hat {S}}_{i,j}\right)~\neq ~\bigcap _{j\in J}\left(\bigcup _{i\in I}{\hat {S}}_{i,j}\right)=\left({\hat {S}}_{11}\cup {\hat {S}}_{21}\right)\cap \left({\hat {S}}_{12}\cup {\hat {S}}_{22}\right)=\{1,2,3,4\}.} In particular, the left hand side is no longer { 1 , 4 } , {\displaystyle \{1,4\},} which shows that the left hand side ⋃ i ∈ I ⋂ j ∈ J S i , j {\displaystyle {\textstyle \bigcup \limits _{i\in I}}\;{\textstyle \bigcap \limits _{j\in J}}S_{i,j}} depends on how the sets are labelled. If instead S 11 {\displaystyle S_{11}} and S 12 {\displaystyle S_{12}} are swapped while S 21 {\displaystyle S_{21}} and S 22 {\displaystyle S_{22}} are unchanged, which gives rise to the sets S ¯ 11 : = { 1 , 3 } , S ¯ 12 : = { 1 , 2 } , S ¯ 21 : = { 3 , 4 } , {\displaystyle {\overline {S}}_{11}\colon =\{1,3\},~{\overline {S}}_{12}\colon =\{1,2\},~{\overline {S}}_{21}\colon =\{3,4\},} and S ¯ 22 : = { 2 , 4 } , {\displaystyle {\overline {S}}_{22}\colon =\{2,4\},} then both the left hand side and right hand side are equal to { 1 , 4 } , {\displaystyle \{1,4\},} which shows that the right hand side also depends on how the sets are labeled. Equality in Inclusion 1 ∪∩ is a subset of ∩∪ can hold under certain circumstances, such as in 7e, which is the special case where ( S i , j ) ( i , j ) ∈ I × J {\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}} is ( L i ∖ R j ) ( i , j ) ∈ I × J {\displaystyle \left(L_{i}\setminus R_{j}\right)_{(i,j)\in I\times J}} (that is, S i , j : = L i ∖ R j {\displaystyle S_{i,j}\colon =L_{i}\setminus R_{j}} with the same indexing sets I {\displaystyle I} and J {\displaystyle J} ), or such as in 7f, which is the special case where ( S i , j ) ( i , j ) ∈ I × J {\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}} is ( L i ∖ R j ) ( j , i ) ∈ J × I {\displaystyle \left(L_{i}\setminus R_{j}\right)_{(j,i)\in J\times I}} (that is, S ^ j , i : = L i ∖ R j {\displaystyle {\hat {S}}_{j,i}\colon =L_{i}\setminus R_{j}} with the indexing sets I {\displaystyle I} and J {\displaystyle J} swapped). For a correct formula that extends the distributive laws, an approach other than just switching ∪ {\displaystyle \cup } and ∩ {\displaystyle \cap } is needed. ===== Correct distributive laws ===== Suppose that for each i ∈ I , {\displaystyle i\in I,} J i {\displaystyle J_{i}} is a non-empty index set and for each j ∈ J i , {\displaystyle j\in J_{i},} let T i , j {\displaystyle T_{i,j}} be any set (for example, to apply this law to ( S i , j ) ( i , j ) ∈ I × J , {\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J},} use J i : = J {\displaystyle J_{i}\colon =J} for all i ∈ I {\displaystyle i\in I} and use T i , j : = S i , j {\displaystyle T_{i,j}\colon =S_{i,j}} for all i ∈ I {\displaystyle i\in I} and all j ∈ J i = J {\displaystyle j\in J_{i}=J} ). Let ∏ J ∙ = def ∏ i ∈ I J i {\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\prod _{i\in I}J_{i}} denote the Cartesian product, which can be interpreted as the set of all functions f : I → ⋃ i ∈ I J i {\displaystyle f~:~I~\to ~{\textstyle \bigcup \limits _{i\in I}}J_{i}} such that f ( i ) ∈ J i {\displaystyle f(i)\in J_{i}} for every i ∈ I . 
{\displaystyle i\in I.} Such a function may also be denoted using the tuple notation ( f i ) i ∈ I {\displaystyle \left(f_{i}\right)_{i\in I}} where f i = def f ( i ) {\displaystyle f_{i}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f(i)} for every i ∈ I {\displaystyle i\in I} and conversely, a tuple ( f i ) i ∈ I {\displaystyle \left(f_{i}\right)_{i\in I}} is just notation for the function with domain I {\displaystyle I} whose value at i ∈ I {\displaystyle i\in I} is f i ; {\displaystyle f_{i};} both notations can be used to denote the elements of ∏ J ∙ . {\displaystyle {\textstyle \prod }J_{\bullet }.} Then ⋂ i ∈ I ⋃ j ∈ J i T i , j = ⋃ f ∈ ∏ J ∙ ⋂ i ∈ I T i , f ( i ) (Eq. 5 ∩∪ to ∪∩) {\displaystyle \bigcap _{i\in I}\;\bigcup _{j\in J_{i}}T_{i,j}=\bigcup _{f\in {\textstyle \prod }J_{\bullet }}\;\bigcap _{i\in I}T_{i,f(i)}} and ⋃ i ∈ I ⋂ j ∈ J i T i , j = ⋂ f ∈ ∏ J ∙ ⋃ i ∈ I T i , f ( i ) (Eq. 6 ∪∩ to ∩∪) {\displaystyle \bigcup _{i\in I}\;\bigcap _{j\in J_{i}}T_{i,j}=\bigcap _{f\in {\textstyle \prod }J_{\bullet }}\;\bigcup _{i\in I}T_{i,f(i)}} where ∏ J ∙ = def ∏ i ∈ I J i . {\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\textstyle \prod \limits _{i\in I}}J_{i}.} ===== Applying the distributive laws ===== Example application: In the particular case where all J i {\displaystyle J_{i}} are equal (that is, J i = J i 2 {\displaystyle J_{i}=J_{i_{2}}} for all i , i 2 ∈ I , {\displaystyle i,i_{2}\in I,} which is the case with the family ( S i , j ) ( i , j ) ∈ I × J , {\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J},} for example), then letting J {\displaystyle J} denote this common set, the Cartesian product will be ∏ J ∙ = def ∏ i ∈ I J i = ∏ i ∈ I J = J I , {\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\textstyle \prod \limits _{i\in I}}J_{i}={\textstyle \prod \limits _{i\in I}}J=J^{I},} which is the set of all functions of the form f : I → J . {\displaystyle f~:~I~\to ~J.} The above set equalities Eq. 5 ∩∪ to ∪∩ and Eq. 6 ∪∩ to ∩∪, respectively become: ⋂ i ∈ I ⋃ j ∈ J S i , j = ⋃ f ∈ J I ⋂ i ∈ I S i , f ( i ) {\displaystyle \bigcap _{i\in I}\;\bigcup _{j\in J}S_{i,j}=\bigcup _{f\in J^{I}}\;\bigcap _{i\in I}S_{i,f(i)}} ⋃ i ∈ I ⋂ j ∈ J S i , j = ⋂ f ∈ J I ⋃ i ∈ I S i , f ( i ) {\displaystyle \bigcup _{i\in I}\;\bigcap _{j\in J}S_{i,j}=\bigcap _{f\in J^{I}}\;\bigcup _{i\in I}S_{i,f(i)}} which when combined with Inclusion 1 (∪∩ is a subset of ∩∪) implies: ⋃ i ∈ I ⋂ j ∈ J S i , j = ⋂ f ∈ J I ⋃ i ∈ I S i , f ( i ) ⊆ ⋃ g ∈ I J ⋂ j ∈ J S g ( j ) , j = ⋂ j ∈ J ⋃ i ∈ I S i , j {\displaystyle \bigcup _{i\in I}\;\bigcap _{j\in J}S_{i,j}~=~\bigcap _{f\in J^{I}}\;\bigcup _{i\in I}S_{i,f(i)}~~\color {Red}{\subseteq }\color {Black}{}~~\bigcup _{g\in I^{J}}\;\bigcap _{j\in J}S_{g(j),j}~=~\bigcap _{j\in J}\;\bigcup _{i\in I}S_{i,j}} where on the left hand side, the indices f and i {\displaystyle f{\text{ and }}i} range over f ∈ J I and i ∈ I {\displaystyle f\in J^{I}{\text{ and }}i\in I} (so the subscripts of S i , f ( i ) {\displaystyle S_{i,f(i)}} range over i ∈ I and f ( i ) ∈ f ( I ) ⊆ J {\displaystyle i\in I{\text{ and }}f(i)\in f(I)\subseteq J} ) and on the right hand side, the indices g and j {\displaystyle g{\text{ and }}j} range over g ∈ I J and j ∈ J {\displaystyle g\in I^{J}{\text{ and }}j\in J} (so the subscripts of S g ( j ) , j {\displaystyle S_{g(j),j}} range over j ∈ J and g ( j ) ∈ g ( J ) ⊆ I {\displaystyle j\in J{\text{ and }}g(j)\in g(J)\subseteq I} ).
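Since J^I is finite whenever I and J are, these equalities can be verified by brute force. The following Python sketch checks Eq. 5 on the example sets S₁₁ = {1,2}, S₁₂ = {1,3}, S₂₁ = {3,4}, S₂₂ = {2,4} from the labeling example above (with indices shifted to start at 0), enumerating the functions f : I → J with itertools.product:

```python
# Brute-force check of Eq. 5 in the special case J_i = J for all i.
from itertools import product

I, J = range(2), range(2)
S = {(0, 0): {1, 2}, (0, 1): {1, 3}, (1, 0): {3, 4}, (1, 1): {2, 4}}

# Left side: ⋂_{i∈I} ⋃_{j∈J} S_{i,j}
lhs = set.intersection(*[set.union(*[S[i, j] for j in J]) for i in I])

# Right side: ⋃_{f∈J^I} ⋂_{i∈I} S_{i,f(i)}; each tuple f encodes one
# function I → J, with f[i] playing the role of f(i).
rhs = set.union(*[set.intersection(*[S[i, f[i]] for i in I])
                  for f in product(J, repeat=len(I))])

assert lhs == rhs  # both sides equal {2, 3} for these sets
```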
Example application: To apply the general formula to the case of ( C k ) k ∈ K {\displaystyle \left(C_{k}\right)_{k\in K}} and ( D l ) l ∈ L , {\displaystyle \left(D_{l}\right)_{l\in L},} use I : = { 1 , 2 } , {\displaystyle I\colon =\{1,2\},} J 1 : = K , {\displaystyle J_{1}\colon =K,} J 2 : = L , {\displaystyle J_{2}\colon =L,} and let T 1 , k : = C k {\displaystyle T_{1,k}\colon =C_{k}} for all k ∈ J 1 {\displaystyle k\in J_{1}} and let T 2 , l : = D l {\displaystyle T_{2,l}\colon =D_{l}} for all l ∈ J 2 . {\displaystyle l\in J_{2}.} Every map f ∈ ∏ J ∙ = def ∏ i ∈ I J i = J 1 × J 2 = K × L {\displaystyle f\in {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\textstyle \prod \limits _{i\in I}}J_{i}=J_{1}\times J_{2}=K\times L} can be bijectively identified with the pair ( f ( 1 ) , f ( 2 ) ) ∈ K × L {\displaystyle \left(f(1),f(2)\right)\in K\times L} (the inverse sends ( k , l ) ∈ K × L {\displaystyle (k,l)\in K\times L} to the map f ( k , l ) ∈ ∏ J ∙ {\displaystyle f_{(k,l)}\in {\textstyle \prod }J_{\bullet }} defined by 1 ↦ k {\displaystyle 1\mapsto k} and 2 ↦ l ; {\displaystyle 2\mapsto l;} this is technically just a change of notation). Recall that Eq. 5 ∩∪ to ∪∩ was ⋂ i ∈ I ⋃ j ∈ J i T i , j = ⋃ f ∈ ∏ J ∙ ⋂ i ∈ I T i , f ( i ) . {\displaystyle ~\bigcap _{i\in I}\;\bigcup _{j\in J_{i}}T_{i,j}=\bigcup _{f\in {\textstyle \prod }J_{\bullet }}\;\bigcap _{i\in I}T_{i,f(i)}.~} Expanding and simplifying the left hand side gives ⋂ i ∈ I ⋃ j ∈ J i T i , j = ( ⋃ j ∈ J 1 T 1 , j ) ∩ ( ⋃ j ∈ J 2 T 2 , j ) = ( ⋃ k ∈ K T 1 , k ) ∩ ( ⋃ l ∈ L T 2 , l ) = ( ⋃ k ∈ K C k ) ∩ ( ⋃ l ∈ L D l ) {\displaystyle \bigcap _{i\in I}\;\bigcup _{j\in J_{i}}T_{i,j}=\left(\bigcup _{j\in J_{1}}T_{1,j}\right)\cap \left(\;\bigcup _{j\in J_{2}}T_{2,j}\right)=\left(\bigcup _{k\in K}T_{1,k}\right)\cap \left(\;\bigcup _{l\in L}T_{2,l}\right)=\left(\bigcup _{k\in K}C_{k}\right)\cap \left(\;\bigcup _{l\in L}D_{l}\right)} and doing the same to the right hand side gives: ⋃ f ∈ ∏ J ∙ ⋂ i ∈ I T i , f ( i ) = ⋃ f ∈ ∏ J ∙ ( T 1 , f ( 1 ) ∩ T 2 , f ( 2 ) ) = ⋃ f ∈ ∏ J ∙ ( C f ( 1 ) ∩ D f ( 2 ) ) = ⋃ ( k , l ) ∈ K × L ( C k ∩ D l ) = ⋃ l ∈ L k ∈ K , ( C k ∩ D l ) . {\displaystyle \bigcup _{f\in \prod J_{\bullet }}\;\bigcap _{i\in I}T_{i,f(i)}=\bigcup _{f\in \prod J_{\bullet }}\left(T_{1,f(1)}\cap T_{2,f(2)}\right)=\bigcup _{f\in \prod J_{\bullet }}\left(C_{f(1)}\cap D_{f(2)}\right)=\bigcup _{(k,l)\in K\times L}\left(C_{k}\cap D_{l}\right)=\bigcup _{\stackrel {k\in K,}{l\in L}}\left(C_{k}\cap D_{l}\right).} Thus the general identity Eq. 5 ∩∪ to ∪∩ reduces to the previously given set equality Eq. 3b: ( ⋃ k ∈ K C k ) ∩ ⋃ l ∈ L D l = ⋃ l ∈ L k ∈ K , ( C k ∩ D l ) . {\displaystyle \left(\bigcup _{k\in K}C_{k}\right)\cap \;\bigcup _{l\in L}D_{l}=\bigcup _{\stackrel {k\in K,}{l\in L}}\left(C_{k}\cap D_{l}\right).} === Distributing subtraction over ⋃ and ⋂ === The next identities are known as De Morgan's laws: L ∖ ⋃ i ∈ I R i = ⋂ i ∈ I ( L ∖ R i ) {\displaystyle L\,\setminus \,\bigcup _{i\in I}R_{i}~=~\bigcap _{i\in I}\left(L\,\setminus \,R_{i}\right)} and L ∖ ⋂ i ∈ I R i = ⋃ i ∈ I ( L ∖ R i ) . {\displaystyle L\,\setminus \,\bigcap _{i\in I}R_{i}~=~\bigcup _{i\in I}\left(L\,\setminus \,R_{i}\right).} The following four set equalities can be deduced from the equalities 7a - 7d above. In general, naively swapping ∪ {\displaystyle \;\cup \;} and ∩ {\displaystyle \;\cap \;} may produce a different set (see the subsection on incorrectly distributing by swapping ⋂ and ⋃ above for more details).
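As with the binary versions, these De Morgan laws are straightforward to sanity-check numerically. A minimal Python sketch, with arbitrary example sets:

```python
# Family De Morgan laws: L \ ⋃ R_i = ⋂ (L \ R_i) and L \ ⋂ R_i = ⋃ (L \ R_i).
L = {1, 2, 3, 4, 5}
R = [{1, 2}, {2, 3}, {2, 5, 6}]  # a non-empty family of sets

assert L - set.union(*R) == set.intersection(*[L - Ri for Ri in R])
assert L - set.intersection(*R) == set.union(*[L - Ri for Ri in R])
```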
The equalities ⋃ i ∈ I ⋂ j ∈ J ( L i ∖ R j ) = ⋂ j ∈ J ⋃ i ∈ I ( L i ∖ R j ) and ⋃ j ∈ J ⋂ i ∈ I ( L i ∖ R j ) = ⋂ i ∈ I ⋃ j ∈ J ( L i ∖ R j ) {\displaystyle \bigcup _{i\in I}\;\bigcap _{j\in J}\left(L_{i}\setminus R_{j}\right)~=~\bigcap _{j\in J}\;\bigcup _{i\in I}\left(L_{i}\setminus R_{j}\right)\quad {\text{ and }}\quad \bigcup _{j\in J}\;\bigcap _{i\in I}\left(L_{i}\setminus R_{j}\right)~=~\bigcap _{i\in I}\;\bigcup _{j\in J}\left(L_{i}\setminus R_{j}\right)} found in Eq. 7e and Eq. 7f are thus unusual in that they state exactly that swapping ∪ {\displaystyle \;\cup \;} and ∩ {\displaystyle \;\cap \;} will not change the resulting set. === Commutativity and associativity of ⋃ and ⋂ === Commutativity: ⋃ j ∈ J i ∈ I , S i , j = def ⋃ ( i , j ) ∈ I × J S i , j = ⋃ i ∈ I ( ⋃ j ∈ J S i , j ) = ⋃ j ∈ J ( ⋃ i ∈ I S i , j ) {\displaystyle \bigcup _{\stackrel {i\in I,}{j\in J}}S_{i,j}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcup _{(i,j)\in I\times J}S_{i,j}~=~\bigcup _{i\in I}\left(\bigcup _{j\in J}S_{i,j}\right)~=~\bigcup _{j\in J}\left(\bigcup _{i\in I}S_{i,j}\right)} ⋂ j ∈ J i ∈ I , S i , j = def ⋂ ( i , j ) ∈ I × J S i , j = ⋂ i ∈ I ( ⋂ j ∈ J S i , j ) = ⋂ j ∈ J ( ⋂ i ∈ I S i , j ) {\displaystyle \bigcap _{\stackrel {i\in I,}{j\in J}}S_{i,j}~~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\bigcap _{(i,j)\in I\times J}S_{i,j}~=~\bigcap _{i\in I}\left(\bigcap _{j\in J}S_{i,j}\right)~=~\bigcap _{j\in J}\left(\bigcap _{i\in I}S_{i,j}\right)} Unions of unions and intersections of intersections: ( ⋃ i ∈ I L i ) ∪ R = ⋃ i ∈ I ( L i ∪ R ) {\displaystyle \left(\bigcup _{i\in I}L_{i}\right)\cup R~=~\bigcup _{i\in I}\left(L_{i}\cup R\right)} ( ⋂ i ∈ I L i ) ∩ R = ⋂ i ∈ I ( L i ∩ R ) {\displaystyle \left(\bigcap _{i\in I}L_{i}\right)\cap R~=~\bigcap _{i\in I}\left(L_{i}\cap R\right)} and ( ⋃ i ∈ I L i ) ∪ ( ⋃ j ∈ J R j ) = ⋃ j ∈ J i ∈ I , ( L i ∪ R j ) {\displaystyle \left(\bigcup _{i\in I}L_{i}\right)\cup \left(\bigcup _{j\in J}R_{j}\right)~=~\bigcup _{\stackrel {i\in I,}{j\in J}}\left(L_{i}\cup R_{j}\right)} and ( ⋂ i ∈ I L i ) ∩ ( ⋂ j ∈ J R j ) = ⋂ j ∈ J i ∈ I , ( L i ∩ R j ) {\displaystyle \left(\bigcap _{i\in I}L_{i}\right)\cap \left(\bigcap _{j\in J}R_{j}\right)~=~\bigcap _{\stackrel {i\in I,}{j\in J}}\left(L_{i}\cap R_{j}\right)} and if I = J {\displaystyle I=J} then also: ( ⋃ i ∈ I L i ) ∪ ( ⋃ i ∈ I R i ) = ⋃ i ∈ I ( L i ∪ R i ) {\displaystyle \left(\bigcup _{i\in I}L_{i}\right)\cup \left(\bigcup _{i\in I}R_{i}\right)~=~\bigcup _{i\in I}\left(L_{i}\cup R_{i}\right)} and ( ⋂ i ∈ I L i ) ∩ ( ⋂ i ∈ I R i ) = ⋂ i ∈ I ( L i ∩ R i ) {\displaystyle \left(\bigcap _{i\in I}L_{i}\right)\cap \left(\bigcap _{i\in I}R_{i}\right)~=~\bigcap _{i\in I}\left(L_{i}\cap R_{i}\right)} === Cartesian products Π of arbitrarily many sets === ==== Intersections ⋂ of Π ==== If ( S i , j ) ( i , j ) ∈ I × J {\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}} is a family of sets then ⋂ j ∈ J ∏ i ∈ I S i , j = ∏ i ∈ I ⋂ j ∈ J S i , j (Eq. 8) {\displaystyle \bigcap _{j\in J}\;\prod _{i\in I}S_{i,j}~=~\prod _{i\in I}\;\bigcap _{j\in J}S_{i,j}\qquad {\text{ (Eq. 8)}}} Moreover, a tuple ( x i ) i ∈ I {\displaystyle \left(x_{i}\right)_{i\in I}} belongs to the set in Eq. 8 above if and only if x i ∈ S i , j {\displaystyle x_{i}\in S_{i,j}} for all i ∈ I {\displaystyle i\in I} and all j ∈ J .
{\displaystyle j\in J.} In particular, if ( L i ) i ∈ I {\displaystyle \left(L_{i}\right)_{i\in I}} and ( R i ) i ∈ I {\displaystyle \left(R_{i}\right)_{i\in I}} are two families indexed by the same set then ( ∏ i ∈ I L i ) ∩ ∏ i ∈ I R i = ∏ i ∈ I ( L i ∩ R i ) {\displaystyle \left(\prod _{i\in I}L_{i}\right)\cap \prod _{i\in I}R_{i}~=~\prod _{i\in I}\left(L_{i}\cap R_{i}\right)} So for instance, ( L × R ) ∩ ( L 2 × R 2 ) = ( L ∩ L 2 ) × ( R ∩ R 2 ) {\displaystyle (L\times R)\cap \left(L_{2}\times R_{2}\right)~=~\left(L\cap L_{2}\right)\times \left(R\cap R_{2}\right)} ( L × R ) ∩ ( L 2 × R 2 ) ∩ ( L 3 × R 3 ) = ( L ∩ L 2 ∩ L 3 ) × ( R ∩ R 2 ∩ R 3 ) {\displaystyle (L\times R)\cap \left(L_{2}\times R_{2}\right)\cap \left(L_{3}\times R_{3}\right)~=~\left(L\cap L_{2}\cap L_{3}\right)\times \left(R\cap R_{2}\cap R_{3}\right)} and ( L × M × R ) ∩ ( L 2 × M 2 × R 2 ) = ( L ∩ L 2 ) × ( M ∩ M 2 ) × ( R ∩ R 2 ) {\displaystyle (L\times M\times R)\cap \left(L_{2}\times M_{2}\times R_{2}\right)~=~\left(L\cap L_{2}\right)\times \left(M\cap M_{2}\right)\times \left(R\cap R_{2}\right)} Intersections of products indexed by different sets Let ( L i ) i ∈ I {\displaystyle \left(L_{i}\right)_{i\in I}} and ( R j ) j ∈ J {\displaystyle \left(R_{j}\right)_{j\in J}} be two families indexed by different sets. Technically, I ≠ J {\displaystyle I\neq J} implies ( ∏ i ∈ I L i ) ∩ ∏ j ∈ J R j = ∅ . {\displaystyle \left({\textstyle \prod \limits _{i\in I}}L_{i}\right)\cap {\textstyle \prod \limits _{j\in J}}R_{j}=\varnothing .} However, sometimes these products are somehow identified as the same set through some bijection or one of these products is identified as a subset of the other via some injective map, in which case (by abuse of notation) this intersection may be equal to some other (possibly non-empty) set. For example, if I := { 1 , 2 } {\displaystyle I:=\{1,2\}} and J := { 1 , 2 , 3 } {\displaystyle J:=\{1,2,3\}} with all sets equal to R {\displaystyle \mathbb {R} } then ∏ i ∈ I L i = ∏ i ∈ { 1 , 2 } R = R 2 {\displaystyle {\textstyle \prod \limits _{i\in I}}L_{i}={\textstyle \prod \limits _{i\in \{1,2\}}}\mathbb {R} =\mathbb {R} ^{2}} and ∏ j ∈ J R j = ∏ j ∈ { 1 , 2 , 3 } R = R 3 {\displaystyle {\textstyle \prod \limits _{j\in J}}R_{j}={\textstyle \prod \limits _{j\in \{1,2,3\}}}\mathbb {R} =\mathbb {R} ^{3}} where R 2 ∩ R 3 = ∅ {\displaystyle \mathbb {R} ^{2}\cap \mathbb {R} ^{3}=\varnothing } unless, for example, ∏ i ∈ { 1 , 2 } R = R 2 {\displaystyle {\textstyle \prod \limits _{i\in \{1,2\}}}\mathbb {R} =\mathbb {R} ^{2}} is identified as a subset of ∏ j ∈ { 1 , 2 , 3 } R = R 3 {\displaystyle {\textstyle \prod \limits _{j\in \{1,2,3\}}}\mathbb {R} =\mathbb {R} ^{3}} through some injection, such as maybe ( x , y ) ↦ ( x , y , 0 ) {\displaystyle (x,y)\mapsto (x,y,0)} for instance; however, in this particular case the product ∏ i ∈ I = { 1 , 2 } L i {\displaystyle {\textstyle \prod \limits _{i\in I=\{1,2\}}}L_{i}} actually represents the J {\displaystyle J} -indexed product ∏ j ∈ J = { 1 , 2 , 3 } L i {\displaystyle {\textstyle \prod \limits _{j\in J=\{1,2,3\}}}L_{i}} where L 3 := { 0 } . {\displaystyle L_{3}:=\{0\}.} For another example, take I := { 1 , 2 } {\displaystyle I:=\{1,2\}} and J := { 1 , 2 , 3 } {\displaystyle J:=\{1,2,3\}} with L 1 := R 2 {\displaystyle L_{1}:=\mathbb {R} ^{2}} and L 2 , R 1 , R 2 , and R 3 {\displaystyle L_{2},R_{1},R_{2},{\text{ and }}R_{3}} all equal to R . 
{\displaystyle \mathbb {R} .} Then ∏ i ∈ I L i = R 2 × R {\displaystyle {\textstyle \prod \limits _{i\in I}}L_{i}=\mathbb {R} ^{2}\times \mathbb {R} } and ∏ j ∈ J R j = R × R × R , {\displaystyle {\textstyle \prod \limits _{j\in J}}R_{j}=\mathbb {R} \times \mathbb {R} \times \mathbb {R} ,} which can both be identified as the same set via the bijection that sends ( ( x , y ) , z ) ∈ R 2 × R {\displaystyle ((x,y),z)\in \mathbb {R} ^{2}\times \mathbb {R} } to ( x , y , z ) ∈ R × R × R . {\displaystyle (x,y,z)\in \mathbb {R} \times \mathbb {R} \times \mathbb {R} .} Under this identification, ( ∏ i ∈ I L i ) ∩ ∏ j ∈ J R j = R 3 . {\displaystyle \left({\textstyle \prod \limits _{i\in I}}L_{i}\right)\cap \,{\textstyle \prod \limits _{j\in J}}R_{j}~=~\mathbb {R} ^{3}.} ==== Binary ⨯ distributes over arbitrary ⋃ and ⋂ ==== The binary Cartesian product ⨯ distributes over arbitrary intersections (when the indexing set is not empty) and over arbitrary unions: L × ( ⋃ i ∈ I R i ) = ⋃ i ∈ I ( L × R i ) (Left-distributivity of × over ∪ ) L × ( ⋂ i ∈ I R i ) = ⋂ i ∈ I ( L × R i ) (Left-distributivity of × over ⋂ i ∈ I when I ≠ ∅ ) ( ⋃ i ∈ I L i ) × R = ⋃ i ∈ I ( L i × R ) (Right-distributivity of × over ∪ ) ( ⋂ i ∈ I L i ) × R = ⋂ i ∈ I ( L i × R ) (Right-distributivity of × over ⋂ i ∈ I when I ≠ ∅ ) {\displaystyle {\begin{alignedat}{5}L\times \left(\bigcup _{i\in I}R_{i}\right)&\;=\;\;&&\bigcup _{i\in I}(L\times R_{i})\qquad &&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]L\times \left(\bigcap _{i\in I}R_{i}\right)&\;=\;\;&&\bigcap _{i\in I}(L\times R_{i})\qquad &&{\text{ (Left-distributivity of }}\,\times \,{\text{ over }}\,\bigcap _{i\in I}\,{\text{ when }}I\neq \varnothing \,{\text{)}}\\[1.4ex]\left(\bigcup _{i\in I}L_{i}\right)\times R&\;=\;\;&&\bigcup _{i\in I}(L_{i}\times R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\cup \,{\text{)}}\\[1.4ex]\left(\bigcap _{i\in I}L_{i}\right)\times R&\;=\;\;&&\bigcap _{i\in I}(L_{i}\times R)\qquad &&{\text{ (Right-distributivity of }}\,\times \,{\text{ over }}\,\bigcap _{i\in I}\,{\text{ when }}I\neq \varnothing \,{\text{)}}\\[1.4ex]\end{alignedat}}} ==== Distributing arbitrary Π over arbitrary ⋃ ==== Suppose that for each i ∈ I , {\displaystyle i\in I,} J i {\displaystyle J_{i}} is a non-empty index set and for each j ∈ J i , {\displaystyle j\in J_{i},} let T i , j {\displaystyle T_{i,j}} be any set (for example, to apply this law to ( S i , j ) ( i , j ) ∈ I × J , {\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J},} use J i : = J {\displaystyle J_{i}\colon =J} for all i ∈ I {\displaystyle i\in I} and use T i , j : = S i , j {\displaystyle T_{i,j}\colon =S_{i,j}} for all i ∈ I {\displaystyle i\in I} and all j ∈ J i = J {\displaystyle j\in J_{i}=J} ). Let ∏ J ∙ = def ∏ i ∈ I J i {\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\prod _{i\in I}J_{i}} denote the Cartesian product, which (as mentioned above) can be interpreted as the set of all functions f : I → ⋃ i ∈ I J i {\displaystyle f~:~I~\to ~{\textstyle \bigcup \limits _{i\in I}}J_{i}} such that f ( i ) ∈ J i {\displaystyle f(i)\in J_{i}} for every i ∈ I {\displaystyle i\in I} . Then ∏ i ∈ I ⋃ j ∈ J i T i , j = ⋃ f ∈ ∏ J ∙ ∏ i ∈ I T i , f ( i ) {\displaystyle \prod _{i\in I}\;\bigcup _{j\in J_{i}}T_{i,j}~=~\bigcup _{f\in {\textstyle \prod }J_{\bullet }}\;\prod _{i\in I}T_{i,f(i)}} where ∏ J ∙ = def ∏ i ∈ I J i .
{\displaystyle {\textstyle \prod }J_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~{\textstyle \prod \limits _{i\in I}}J_{i}.} ==== Unions ⋃ of Π ==== For unions, only the following is guaranteed in general: ⋃ j ∈ J ∏ i ∈ I S i , j ⊆ ∏ i ∈ I ⋃ j ∈ J S i , j and ⋃ i ∈ I ∏ j ∈ J S i , j ⊆ ∏ j ∈ J ⋃ i ∈ I S i , j {\displaystyle \bigcup _{j\in J}\;\prod _{i\in I}S_{i,j}~~\color {Red}{\subseteq }\color {Black}{}~~\prod _{i\in I}\;\bigcup _{j\in J}S_{i,j}\qquad {\text{ and }}\qquad \bigcup _{i\in I}\;\prod _{j\in J}S_{i,j}~~\color {Red}{\subseteq }\color {Black}{}~~\prod _{j\in J}\;\bigcup _{i\in I}S_{i,j}} where ( S i , j ) ( i , j ) ∈ I × J {\displaystyle \left(S_{i,j}\right)_{(i,j)\in I\times J}} is a family of sets. Example where equality fails: Let I = J = { 1 , 2 } , {\displaystyle I=J=\{1,2\},} let S 1 , 1 = S 2 , 2 = ∅ , {\displaystyle S_{1,1}=S_{2,2}=\varnothing ,} let X ≠ ∅ , {\displaystyle X\neq \varnothing ,} and let S 1 , 2 = S 2 , 1 = X . {\displaystyle S_{1,2}=S_{2,1}=X.} Then ∅ = ∅ ∪ ∅ = ( ∏ i ∈ I S i , 1 ) ∪ ( ∏ i ∈ I S i , 2 ) = ⋃ j ∈ J ∏ i ∈ I S i , j ≠ ∏ i ∈ I ⋃ j ∈ J S i , j = ( ⋃ j ∈ J S 1 , j ) × ( ⋃ j ∈ J S 2 , j ) = X × X . {\displaystyle \varnothing =\varnothing \cup \varnothing =\left(\prod _{i\in I}S_{i,1}\right)\cup \left(\prod _{i\in I}S_{i,2}\right)=\bigcup _{j\in J}\;\prod _{i\in I}S_{i,j}~~\color {Red}{\neq }\color {Black}{}~~\prod _{i\in I}\;\bigcup _{j\in J}S_{i,j}=\left(\bigcup _{j\in J}S_{1,j}\right)\times \left(\bigcup _{j\in J}S_{2,j}\right)=X\times X.} More generally, ∅ = ⋃ j ∈ J ∏ i ∈ I S i , j {\textstyle \varnothing =\bigcup _{j\in J}\;\prod _{i\in I}S_{i,j}} if and only if for each j ∈ J , {\displaystyle j\in J,} at least one of the sets in the I {\displaystyle I} -indexed collections of sets S ∙ , j = ( S i , j ) i ∈ I {\displaystyle S_{\bullet ,j}=\left(S_{i,j}\right)_{i\in I}} is empty, while ∏ i ∈ I ⋃ j ∈ J S i , j ≠ ∅ {\textstyle \prod _{i\in I}\;\bigcup _{j\in J}S_{i,j}\neq \varnothing } if and only if for each i ∈ I , {\displaystyle i\in I,} at least one of the sets in the J {\displaystyle J} -indexed collections of sets S i , ∙ = ( S i , j ) j ∈ J {\displaystyle S_{i,\bullet }=\left(S_{i,j}\right)_{j\in J}} is not empty. 
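The counterexample above can be replayed directly in Python. The sketch below (with X = {0}, a deliberately tiny choice) confirms that the union of products is empty while the product of unions is not, so the inclusion is strict here:

```python
# Union of products vs. product of unions: ⋃_j Π_i S_{i,j} ⊆ Π_i ⋃_j S_{i,j}.
from itertools import product

X = {0}
S = {(1, 1): set(), (1, 2): X, (2, 1): X, (2, 2): set()}
I = J = (1, 2)

# ⋃_{j∈J} Π_{i∈I} S_{i,j}: each column contains an empty factor, so each
# product (and hence the union) is empty.
union_of_products = set().union(
    *[set(product(*[S[i, j] for i in I])) for j in J])

# Π_{i∈I} ⋃_{j∈J} S_{i,j}: every row union equals X, so this is X × X.
product_of_unions = set(product(*[set().union(*[S[i, j] for j in J])
                                  for i in I]))

assert union_of_products == set()
assert product_of_unions == {(0, 0)}
assert union_of_products <= product_of_unions  # inclusion, strict here
```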
However, ( L × R ) ∪ ( L 2 × R 2 ) = [ ( L ∖ L 2 ) × R ] ∪ [ ( L 2 ∖ L ) × R 2 ] ∪ [ ( L ∩ L 2 ) × ( R ∪ R 2 ) ] = [ L × ( R ∖ R 2 ) ] ∪ [ L 2 × ( R 2 ∖ R ) ] ∪ [ ( L ∪ L 2 ) × ( R ∩ R 2 ) ] {\displaystyle {\begin{alignedat}{9}\left(L\times R\right)~\cup ~\left(L_{2}\times R_{2}\right)~&=~\left[\left(L\setminus L_{2}\right)\times R\right]~\cup ~\left[\left(L_{2}\setminus L\right)\times R_{2}\right]~\cup ~\left[\left(L\cap L_{2}\right)\times \left(R\cup R_{2}\right)\right]\\[0.5ex]~&=~\left[L\times \left(R\setminus R_{2}\right)\right]~\cup ~\left[L_{2}\times \left(R_{2}\setminus R\right)\right]~\cup ~\left[\left(L\cup L_{2}\right)\times \left(R\cap R_{2}\right)\right]\\\end{alignedat}}} ==== Difference \ of Π ==== If ( L i ) i ∈ I {\displaystyle \left(L_{i}\right)_{i\in I}} and ( R i ) i ∈ I {\displaystyle \left(R_{i}\right)_{i\in I}} are two families of sets then: ( ∏ i ∈ I L i ) ∖ ∏ i ∈ I R i = ⋃ j ∈ I ∏ i ∈ I { L j ∖ R j if i = j L i if i ≠ j = ⋃ j ∈ I [ ( L j ∖ R j ) × ∏ j ≠ i i ∈ I , L i ] = ⋃ L j ⊈ R j j ∈ I , [ ( L j ∖ R j ) × ∏ j ≠ i i ∈ I , L i ] {\displaystyle {\begin{alignedat}{9}\left(\prod _{i\in I}L_{i}\right)~\setminus ~\prod _{i\in I}R_{i}~&=~\;~\bigcup _{j\in I}\;~\prod _{i\in I}{\begin{cases}L_{j}\,\setminus \,R_{j}&{\text{ if }}i=j\\L_{i}&{\text{ if }}i\neq j\\\end{cases}}\\[0.5ex]~&=~\;~\bigcup _{j\in I}\;~{\Big [}\left(L_{j}\,\setminus \,R_{j}\right)~\times ~\prod _{\stackrel {i\in I,}{j\neq i}}L_{i}{\Big ]}\\[0.5ex]~&=~\bigcup _{\stackrel {j\in I,}{L_{j}\not \subseteq R_{j}}}{\Big [}\left(L_{j}\,\setminus \,R_{j}\right)~\times ~\prod _{\stackrel {i\in I,}{j\neq i}}L_{i}{\Big ]}\\[0.3ex]\end{alignedat}}} so for instance, ( L × R ) ∖ ( L 2 × R 2 ) = [ ( L ∖ L 2 ) × R ] ∪ [ L × ( R ∖ R 2 ) ] {\displaystyle {\begin{alignedat}{9}\left(L\times R\right)~\setminus ~\left(L_{2}\times R_{2}\right)~&=~\left[\left(L\,\setminus \,L_{2}\right)\times R\right]~\cup ~\left[L\times \left(R\,\setminus \,R_{2}\right)\right]\\\end{alignedat}}} and ( L × M × R ) ∖ ( L 2 × M 2 × R 2 ) = [ ( L ∖ L 2 ) × M × R ] ∪ [ L × ( M ∖ M 2 ) × R ] ∪ [ L × M × ( R ∖ R 2 ) ] {\displaystyle (L\times M\times R)~\setminus ~\left(L_{2}\times M_{2}\times R_{2}\right)~=~\left[\left(L\,\setminus \,L_{2}\right)\times M\times R\right]~\cup ~\left[L\times \left(M\,\setminus \,M_{2}\right)\times R\right]~\cup ~\left[L\times M\times \left(R\,\setminus \,R_{2}\right)\right]} ==== Symmetric difference ∆ of Π ==== ( ∏ i ∈ I L i ) △ ( ∏ i ∈ I R i ) = ( ∏ i ∈ I L i ) ∪ ( ∏ i ∈ I R i ) ∖ ∏ i ∈ I L i ∩ R i {\displaystyle {\begin{alignedat}{9}\left(\prod _{i\in I}L_{i}\right)~\triangle ~\left(\prod _{i\in I}R_{i}\right)~&=~\;~\left(\prod _{i\in I}L_{i}\right)~\cup ~\left(\prod _{i\in I}R_{i}\right)\;\setminus \;\prod _{i\in I}L_{i}\cap R_{i}\\[0.5ex]\end{alignedat}}} == Functions and sets == Let f : X → Y {\displaystyle f:X\to Y} be any function. Let L and R {\displaystyle L{\text{ and }}R} be completely arbitrary sets. Assume A ⊆ X and C ⊆ Y . {\displaystyle A\subseteq X{\text{ and }}C\subseteq Y.} === Definitions === Let f : X → Y {\displaystyle f:X\to Y} be any function, where we denote its domain X {\displaystyle X} by domain ⁡ f {\displaystyle \operatorname {domain} f} and denote its codomain Y {\displaystyle Y} by codomain ⁡ f . 
{\displaystyle \operatorname {codomain} f.} Many of the identities below do not actually require that the sets be somehow related to f {\displaystyle f} 's domain or codomain (that is, to X {\displaystyle X} or Y {\displaystyle Y} ), so when some kind of relationship is necessary then it will be clearly indicated. Because of this, in this article, if L {\displaystyle L} is declared to be "any set," and it is not indicated that L {\displaystyle L} must be somehow related to X {\displaystyle X} or Y {\displaystyle Y} (say for instance, that it be a subset of X {\displaystyle X} or Y {\displaystyle Y} ), then it is meant that L {\displaystyle L} is truly arbitrary. This generality is useful in situations where f : X → Y {\displaystyle f:X\to Y} is a map between two subsets X ⊆ U {\displaystyle X\subseteq U} and Y ⊆ V {\displaystyle Y\subseteq V} of some larger sets U {\displaystyle U} and V , {\displaystyle V,} and where the set L {\displaystyle L} might not be entirely contained in X = domain ⁡ f {\displaystyle X=\operatorname {domain} f} and/or Y = codomain ⁡ f {\displaystyle Y=\operatorname {codomain} f} (e.g. if all that is known about L {\displaystyle L} is that L ⊆ U {\displaystyle L\subseteq U} ); in such a situation it may be useful to know what can and cannot be said about f ( L ) {\displaystyle f(L)} and/or f − 1 ( L ) {\displaystyle f^{-1}(L)} without having to introduce a (potentially unnecessary) intersection such as: f ( L ∩ X ) {\displaystyle f(L\cap X)} and/or f − 1 ( L ∩ Y ) . {\displaystyle f^{-1}(L\cap Y).} Images and preimages of sets If L {\displaystyle L} is any set then the image of L {\displaystyle L} under f {\displaystyle f} is defined to be the set: f ( L ) = def { f ( l ) : l ∈ L ∩ domain ⁡ f } {\displaystyle f(L)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{\,f(l)~:~l\in L\cap \operatorname {domain} f\,\}} while the preimage of L {\displaystyle L} under f {\displaystyle f} is: f − 1 ( L ) = def { x ∈ domain ⁡ f : f ( x ) ∈ L } {\displaystyle f^{-1}(L)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\{\,x\in \operatorname {domain} f~:~f(x)\in L\,\}} where if L = { s } {\displaystyle L=\{s\}} is a singleton set then the fiber or preimage of s {\displaystyle s} under f {\displaystyle f} is f − 1 ( s ) = def f − 1 ( { s } ) = { x ∈ domain ⁡ f : f ( x ) = s } . {\displaystyle f^{-1}(s)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f^{-1}(\{s\})~=~\{\,x\in \operatorname {domain} f~:~f(x)=s\,\}.} Denote by Im ⁡ f {\displaystyle \operatorname {Im} f} or image ⁡ f {\displaystyle \operatorname {image} f} the image or range of f : X → Y , {\displaystyle f:X\to Y,} which is the set: Im ⁡ f = def f ( X ) = def f ( domain ⁡ f ) = { f ( x ) : x ∈ domain ⁡ f } . {\displaystyle \operatorname {Im} f~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f(X)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f(\operatorname {domain} f)~=~\{f(x)~:~x\in \operatorname {domain} f\}.}
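For finite functions, stored for instance as Python dictionaries, these definitions translate directly into code. The sketch below is illustrative only — image and preimage are ad hoc helper names, not standard library functions — but it respects the convention that an arbitrary set may contain points outside the domain.

```python
def image(f: dict, L: set) -> set:
    """f(L): values of f at the points of L that lie in f's domain."""
    return {f[x] for x in L if x in f}

def preimage(f: dict, L: set) -> set:
    """f^{-1}(L): points of f's domain that f sends into L."""
    return {x for x in f if f[x] in L}

f = {1: "a", 2: "a", 3: "b"}                # domain {1, 2, 3}, image {"a", "b"}
assert image(f, {2, 3, 99}) == {"a", "b"}   # 99 lies outside the domain and is ignored
assert preimage(f, {"a"}) == {1, 2}         # the fiber of "a"
```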
Saturated sets A set A {\displaystyle A} is said to be f {\displaystyle f} -saturated or a saturated set if any of the following equivalent conditions are satisfied: There exists a set R {\displaystyle R} such that A = f − 1 ( R ) . {\displaystyle A=f^{-1}(R).} Any such set R {\displaystyle R} necessarily contains f ( A ) {\displaystyle f(A)} as a subset. Any set not entirely contained in the domain of f {\displaystyle f} cannot be f {\displaystyle f} -saturated. A = f − 1 ( f ( A ) ) . {\displaystyle A=f^{-1}(f(A)).} A ⊇ f − 1 ( f ( A ) ) {\displaystyle A\supseteq f^{-1}(f(A))} and A ⊆ domain ⁡ f . {\displaystyle A\subseteq \operatorname {domain} f.} The inclusion L ∩ domain ⁡ f ⊆ f − 1 ( f ( L ) ) {\displaystyle L\cap \operatorname {domain} f\subseteq f^{-1}(f(L))} always holds, where if A ⊆ domain ⁡ f {\displaystyle A\subseteq \operatorname {domain} f} then this becomes A ⊆ f − 1 ( f ( A ) ) . {\displaystyle A\subseteq f^{-1}(f(A)).} A ⊆ domain ⁡ f {\displaystyle A\subseteq \operatorname {domain} f} and if a ∈ A {\displaystyle a\in A} and x ∈ domain ⁡ f {\displaystyle x\in \operatorname {domain} f} satisfy f ( x ) = f ( a ) , {\displaystyle f(x)=f(a),} then x ∈ A . {\displaystyle x\in A.} Whenever a fiber of f {\displaystyle f} intersects A , {\displaystyle A,} then A {\displaystyle A} contains the entire fiber. In other words, A {\displaystyle A} contains every f {\displaystyle f} -fiber that intersects it. Explicitly: whenever y ∈ Im ⁡ f {\displaystyle y\in \operatorname {Im} f} is such that A ∩ f − 1 ( y ) ≠ ∅ , {\displaystyle A\cap f^{-1}(y)\neq \varnothing ,} then f − 1 ( y ) ⊆ A . {\displaystyle f^{-1}(y)\subseteq A.} In both this statement and the next, the set Im ⁡ f {\displaystyle \operatorname {Im} f} may be replaced with any superset of Im ⁡ f {\displaystyle \operatorname {Im} f} (such as codomain ⁡ f {\displaystyle \operatorname {codomain} f} ) and the resulting statement will still be equivalent to the rest. The intersection of A {\displaystyle A} with a fiber of f {\displaystyle f} is equal to the empty set or to the fiber itself. Explicitly: for every y ∈ Im ⁡ f , {\displaystyle y\in \operatorname {Im} f,} the intersection A ∩ f − 1 ( y ) {\displaystyle A\cap f^{-1}(y)} is equal to the empty set ∅ {\displaystyle \varnothing } or to f − 1 ( y ) {\displaystyle f^{-1}(y)} (that is, A ∩ f − 1 ( y ) = ∅ {\displaystyle A\cap f^{-1}(y)=\varnothing } or A ∩ f − 1 ( y ) = f − 1 ( y ) {\displaystyle A\cap f^{-1}(y)=f^{-1}(y)} ). For a set A {\displaystyle A} to be f {\displaystyle f} -saturated, it is necessary that A ⊆ domain ⁡ f . {\displaystyle A\subseteq \operatorname {domain} f.} Compositions and restrictions of functions If f {\displaystyle f} and g {\displaystyle g} are maps then g ∘ f {\displaystyle g\circ f} denotes the composition map g ∘ f : { x ∈ domain ⁡ f : f ( x ) ∈ domain ⁡ g } → codomain ⁡ g {\displaystyle g\circ f~:~\{\,x\in \operatorname {domain} f~:~f(x)\in \operatorname {domain} g\,\}~\to ~\operatorname {codomain} g} with domain and codomain domain ⁡ ( g ∘ f ) = { x ∈ domain ⁡ f : f ( x ) ∈ domain ⁡ g } codomain ⁡ ( g ∘ f ) = codomain ⁡ g {\displaystyle {\begin{alignedat}{4}\operatorname {domain} (g\circ f)&=\{\,x\in \operatorname {domain} f~:~f(x)\in \operatorname {domain} g\,\}\\[0.4ex]\operatorname {codomain} (g\circ f)&=\operatorname {codomain} g\\[0.7ex]\end{alignedat}}} defined by ( g ∘ f ) ( x ) = def g ( f ( x ) ) . {\displaystyle (g\circ f)(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~g(f(x)).} The restriction of f : X → Y {\displaystyle f:X\to Y} to L , {\displaystyle L,} denoted by f | L , {\displaystyle f{\big \vert }_{L},} is the map f | L : L ∩ domain ⁡ f → Y {\displaystyle f{\big \vert }_{L}~:~L\cap \operatorname {domain} f~\to ~Y} with domain ⁡ f | L = L ∩ domain ⁡ f {\displaystyle \operatorname {domain} f{\big \vert }_{L}~=~L\cap \operatorname {domain} f} defined by sending x ∈ L ∩ domain ⁡ f {\displaystyle x\in L\cap \operatorname {domain} f} to f ( x ) ; {\displaystyle f(x);} that is, f | L ( x ) = def f ( x ) . {\displaystyle f{\big \vert }_{L}(x)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~f(x).} Alternatively, f | L = f ∘ In ⁡ {\displaystyle ~f{\big \vert }_{L}~=~f\circ \operatorname {In} ~} where In ⁡ : L ∩ X → X {\displaystyle ~\operatorname {In} ~:~L\cap X\to X~} denotes the inclusion map, which is defined by In ⁡ ( s ) = def s . {\displaystyle \operatorname {In} (s)~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~s.}
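The saturation conditions above can be tested mechanically for finite functions; a minimal illustrative Python sketch (ad hoc helper names, as before):

```python
def image(f, L):     # f(L)
    return {f[x] for x in L if x in f}

def preimage(f, L):  # f^{-1}(L)
    return {x for x in f if f[x] in L}

def is_saturated(f, A):
    """A is f-saturated exactly when A = f^{-1}(f(A))."""
    return A == preimage(f, image(f, A))

f = {1: "a", 2: "a", 3: "b"}
assert not is_saturated(f, {1})         # {1} meets the fiber {1, 2} of "a" without containing it
assert is_saturated(f, {1, 2})          # a union of whole fibers is saturated
assert not is_saturated(f, {1, 2, 99})  # 99 is outside the domain, so this set cannot be saturated
```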
=== (Pre)Images of arbitrary unions ⋃'s and intersections ⋂'s === If ( L i ) i ∈ I {\displaystyle \left(L_{i}\right)_{i\in I}} is a family of arbitrary sets indexed by I ≠ ∅ {\displaystyle I\neq \varnothing } then: f ( ⋂ i ∈ I L i ) ⊆ ⋂ i ∈ I f ( L i ) f ( ⋃ i ∈ I L i ) = ⋃ i ∈ I f ( L i ) f − 1 ( ⋃ i ∈ I L i ) = ⋃ i ∈ I f − 1 ( L i ) f − 1 ( ⋂ i ∈ I L i ) = ⋂ i ∈ I f − 1 ( L i ) {\displaystyle {\begin{alignedat}{4}f\left(\bigcap _{i\in I}L_{i}\right)\;&~\;\color {Red}{\subseteq }\color {Black}{}~\;\;\;\bigcap _{i\in I}f\left(L_{i}\right)\\f\left(\bigcup _{i\in I}L_{i}\right)\;&~=~\;\bigcup _{i\in I}f\left(L_{i}\right)\\f^{-1}\left(\bigcup _{i\in I}L_{i}\right)\;&~=~\;\bigcup _{i\in I}f^{-1}\left(L_{i}\right)\\f^{-1}\left(\bigcap _{i\in I}L_{i}\right)\;&~=~\;\bigcap _{i\in I}f^{-1}\left(L_{i}\right)\\\end{alignedat}}} So of these four identities, it is only images of intersections that are not always preserved. Preimages preserve all basic set operations. Unions are preserved by both images and preimages. If all L i {\displaystyle L_{i}} are f {\displaystyle f} -saturated then ⋂ i ∈ I L i {\displaystyle \bigcap _{i\in I}L_{i}} will be f {\displaystyle f} -saturated and equality will hold in the first relation above; explicitly, this means that in that case f ( ⋂ i ∈ I L i ) = ⋂ i ∈ I f ( L i ) . {\displaystyle f\left(\bigcap _{i\in I}L_{i}\right)=\bigcap _{i\in I}f\left(L_{i}\right).} In particular, if ( A i ) i ∈ I {\displaystyle \left(A_{i}\right)_{i\in I}} is a family of subsets of X = domain ⁡ f {\displaystyle X=\operatorname {domain} f} with A i = f − 1 ( f ( A i ) ) {\displaystyle A_{i}=f^{-1}\left(f\left(A_{i}\right)\right)} for every i , {\displaystyle i,} then this conditional equality holds for that family. === (Pre)Images of binary set operations === Throughout, let L {\displaystyle L} and R {\displaystyle R} be any sets and let f : X → Y {\displaystyle f:X\to Y} be any function. Summary As summarized below, set equality is not guaranteed only for images of: intersections, set subtractions, and symmetric differences. Preimages preserve set operations Preimages of sets are well-behaved with respect to all basic set operations: f − 1 ( L ∪ R ) = f − 1 ( L ) ∪ f − 1 ( R ) f − 1 ( L ∩ R ) = f − 1 ( L ) ∩ f − 1 ( R ) f − 1 ( L ∖ R ) = f − 1 ( L ) ∖ f − 1 ( R ) f − 1 ( L △ R ) = f − 1 ( L ) △ f − 1 ( R ) {\displaystyle {\begin{alignedat}{4}f^{-1}(L\cup R)~&=~f^{-1}(L)\cup f^{-1}(R)\\f^{-1}(L\cap R)~&=~f^{-1}(L)\cap f^{-1}(R)\\f^{-1}(L\setminus \,R)~&=~f^{-1}(L)\setminus \,f^{-1}(R)\\f^{-1}(L\,\triangle \,R)~&=~f^{-1}(L)\,\triangle \,f^{-1}(R)\\\end{alignedat}}} In words, preimages distribute over unions, intersections, set subtraction, and symmetric difference.
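These relations can be verified exhaustively over all pairs of subsets of a small example; an illustrative Python sketch (ad hoc helpers):

```python
from itertools import combinations

def image(f, L):     return {f[x] for x in L if x in f}
def preimage(f, C):  return {x for x in f if f[x] in C}

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

f = {1: "a", 2: "a", 3: "b", 4: "c"}
X, Y = {1, 2, 3, 4}, {"a", "b", "c", "z"}

for L1 in subsets(X):
    for L2 in subsets(X):
        assert image(f, L1 | L2) == image(f, L1) | image(f, L2)  # images preserve unions
        assert image(f, L1 & L2) <= image(f, L1) & image(f, L2)  # but only contain intersections

for C1 in subsets(Y):
    for C2 in subsets(Y):
        assert preimage(f, C1 | C2) == preimage(f, C1) | preimage(f, C2)
        assert preimage(f, C1 & C2) == preimage(f, C1) & preimage(f, C2)
```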
Images only preserve unions Images of unions are well-behaved: f ( L ∪ R ) = f ( L ) ∪ f ( R ) {\displaystyle {\begin{alignedat}{4}f(L\cup R)~&=~f(L)\cup f(R)\\\end{alignedat}}} but images of the other basic set operations are not since only the following are guaranteed in general: f ( L ∩ R ) ⊆ f ( L ) ∩ f ( R ) f ( L ∖ R ) ⊇ f ( L ) ∖ f ( R ) f ( L △ R ) ⊇ f ( L ) △ f ( R ) {\displaystyle {\begin{alignedat}{4}f(L\cap R)~&\subseteq ~f(L)\cap f(R)\\f(L\setminus R)~&\supseteq ~f(L)\setminus f(R)\\f(L\triangle R)~&\supseteq ~f(L)\,\triangle \,f(R)\\\end{alignedat}}} In words, images distribute over unions but not necessarily over intersections, set subtraction, or symmetric difference. What these latter three operations have in common is set subtraction: they either are set subtraction L ∖ R {\displaystyle L\setminus R} or else they can naturally be defined as the set subtraction of two sets: L ∩ R = L ∖ ( L ∖ R ) and L △ R = ( L ∪ R ) ∖ ( L ∩ R ) . {\displaystyle L\cap R=L\setminus (L\setminus R)\quad {\text{ and }}\quad L\triangle R=(L\cup R)\setminus (L\cap R).} If L = X {\displaystyle L=X} then f ( X ∖ R ) ⊇ f ( X ) ∖ f ( R ) {\displaystyle f(X\setminus R)\supseteq f(X)\setminus f(R)} where as in the more general case, equality is not guaranteed. If f {\displaystyle f} is surjective then f ( X ∖ R ) ⊇ Y ∖ f ( R ) , {\displaystyle f(X\setminus R)~\supseteq ~Y\setminus f(R),} which can be rewritten as: f ( R ∁ ) ⊇ f ( R ) ∁ {\displaystyle f\left(R^{\complement }\right)~\supseteq ~f(R)^{\complement }} if R ∁ = def X ∖ R {\displaystyle R^{\complement }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~X\setminus R} and f ( R ) ∁ = def Y ∖ f ( R ) . {\displaystyle f(R)^{\complement }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~Y\setminus f(R).} ==== Counter-examples: images of operations not distributing ==== If f : { 1 , 2 } → Y {\displaystyle f:\{1,2\}\to Y} is constant, L = { 1 } , {\displaystyle L=\{1\},} and R = { 2 } {\displaystyle R=\{2\}} then all four of the set containments f ( L ∩ R ) ⊊ f ( L ) ∩ f ( R ) f ( L ∖ R ) ⊋ f ( L ) ∖ f ( R ) f ( X ∖ R ) ⊋ f ( X ) ∖ f ( R ) f ( L △ R ) ⊋ f ( L ) △ f ( R ) {\displaystyle {\begin{alignedat}{4}f(L\cap R)~&\subsetneq ~f(L)\cap f(R)\\f(L\setminus R)~&\supsetneq ~f(L)\setminus f(R)\\f(X\setminus R)~&\supsetneq ~f(X)\setminus f(R)\\f(L\triangle R)~&\supsetneq ~f(L)\triangle f(R)\\\end{alignedat}}} are strict/proper (that is, the sets are not equal) since one side is the empty set while the other is non-empty. Thus equality is not guaranteed for even the simplest of functions. The example above is now generalized to show that these four set equalities can fail for any constant function whose domain contains at least two (distinct) points. Example: Let f : X → Y {\displaystyle f:X\to Y} be any constant function with image f ( X ) = { y } {\displaystyle f(X)=\{y\}} and suppose that L , R ⊆ X {\displaystyle L,R\subseteq X} are non-empty disjoint subsets; that is, L ≠ ∅ , R ≠ ∅ , {\displaystyle L\neq \varnothing ,R\neq \varnothing ,} and L ∩ R = ∅ , {\displaystyle L\cap R=\varnothing ,} which implies that all of the sets L △ R = L ∪ R , {\displaystyle L~\triangle ~R=L\cup R,} L ∖ R = L , {\displaystyle \,L\setminus R=L,} and X ∖ R ⊇ L ∖ R {\displaystyle X\setminus R\supseteq L\setminus R} are not empty and so consequently, their images under f {\displaystyle f} are all equal to { y } . 
{\displaystyle \{y\}.} The containment f ( L ∖ R ) ⊋ f ( L ) ∖ f ( R ) {\displaystyle ~f(L\setminus R)~\supsetneq ~f(L)\setminus f(R)~} is strict: { y } = f ( L ∖ R ) ≠ f ( L ) ∖ f ( R ) = { y } ∖ { y } = ∅ {\displaystyle \{y\}~=~f(L\setminus R)~\neq ~f(L)\setminus f(R)~=~\{y\}\setminus \{y\}~=~\varnothing } In words: functions might not distribute over set subtraction ∖ {\displaystyle \,\setminus \,} The containment f ( X ∖ R ) ⊋ f ( X ) ∖ f ( R ) {\displaystyle ~f(X\setminus R)~\supsetneq ~f(X)\setminus f(R)~} is strict: { y } = f ( X ∖ R ) ≠ f ( X ) ∖ f ( R ) = { y } ∖ { y } = ∅ . {\displaystyle \{y\}~=~f(X\setminus R)~\neq ~f(X)\setminus f(R)~=~\{y\}\setminus \{y\}~=~\varnothing .} The containment f ( L △ R ) ⊋ f ( L ) △ f ( R ) {\displaystyle ~f(L~\triangle ~R)~\supsetneq ~f(L)~\triangle ~f(R)~} is strict: { y } = f ( L △ R ) ≠ f ( L ) △ f ( R ) = { y } △ { y } = ∅ {\displaystyle \{y\}~=~f\left(L~\triangle ~R\right)~\neq ~f(L)~\triangle ~f(R)~=~\{y\}\triangle \{y\}~=~\varnothing } In words: functions might not distribute over symmetric difference △ {\displaystyle \,\triangle \,} (which can be defined as the set subtraction of two sets: L △ R = ( L ∪ R ) ∖ ( L ∩ R ) {\displaystyle L\triangle R=(L\cup R)\setminus (L\cap R)} ). The containment f ( L ∩ R ) ⊊ f ( L ) ∩ f ( R ) {\displaystyle ~f(L\cap R)~\subsetneq ~f(L)\cap f(R)~} is strict: ∅ = f ( ∅ ) = f ( L ∩ R ) ≠ f ( L ) ∩ f ( R ) = { y } ∩ { y } = { y } {\displaystyle \varnothing ~=~f(\varnothing )~=~f(L\cap R)~\neq ~f(L)\cap f(R)~=~\{y\}\cap \{y\}~=~\{y\}} In words: functions might not distribute over set intersection ∩ {\displaystyle \,\cap \,} (which can be defined as the set subtraction of two sets: L ∩ R = L ∖ ( L ∖ R ) {\displaystyle L\cap R=L\setminus (L\setminus R)} ). What the set operations in these four examples have in common is that they either are set subtraction ∖ {\displaystyle \setminus } (examples (1) and (2)) or else they can naturally be defined as the set subtraction of two sets (examples (3) and (4)). Mnemonic: In fact, for each of the above four set formulas for which equality is not guaranteed, the direction of the containment (that is, whether to use ⊆ or ⊇ {\displaystyle \,\subseteq {\text{ or }}\supseteq \,} ) can always be deduced by imagining the function f {\displaystyle f} as being constant and the two sets ( L {\displaystyle L} and R {\displaystyle R} ) as being non-empty disjoint subsets of its domain. This is because every equality fails for such a function and sets: one side will always be ∅ {\displaystyle \varnothing } and the other non-empty − from this fact, the correct choice of ⊆ or ⊇ {\displaystyle \,\subseteq {\text{ or }}\supseteq \,} can be deduced by answering: "which side is empty?" For example, to decide if the ? {\displaystyle ?} in f ( L △ R ) ∖ f ( R ) ? f ( ( L △ R ) ∖ R ) {\displaystyle f(L\triangle R)\setminus f(R)~\;~?~\;~f((L\triangle R)\setminus R)} should be ⊆ or ⊇ , {\displaystyle \,\subseteq {\text{ or }}\supseteq ,\,} pretend that f {\displaystyle f} is constant and that L △ R {\displaystyle L\triangle R} and R {\displaystyle R} are non-empty disjoint subsets of f {\displaystyle f} 's domain; then the left hand side would be empty (since f ( L △ R ) ∖ f ( R ) = { f 's single value } ∖ { f 's single value } = ∅ {\displaystyle f(L\triangle R)\setminus f(R)=\{f{\text{'s single value}}\}\setminus \{f{\text{'s single value}}\}=\varnothing } ), which indicates that ? {\displaystyle \,?\,} should be ⊆ {\displaystyle \,\subseteq \,} (the resulting statement is always guaranteed to be true) because this is the choice that will make ∅ = left hand side ? right hand side {\displaystyle \varnothing ={\text{left hand side}}~\;~?~\;~{\text{right hand side}}} true. Alternatively, the correct direction of containment can also be deduced by consideration of any constant f : { 1 , 2 } → Y {\displaystyle f:\{1,2\}\to Y} with L = { 1 } {\displaystyle L=\{1\}} and R = { 2 } . {\displaystyle R=\{2\}.}
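The counterexample runs as written; the following illustrative Python sketch realizes the constant function on {1, 2} and exhibits all four failures at once.

```python
def image(f, L):
    return {f[x] for x in L if x in f}

f = {1: "y", 2: "y"}        # a constant function with domain X = {1, 2}
L, R, X = {1}, {2}, {1, 2}

assert image(f, L & R) == set() and image(f, L) & image(f, R) == {"y"}  # intersection: left side empty
assert image(f, L - R) == {"y"} and image(f, L) - image(f, R) == set()  # subtraction: right side empty
assert image(f, X - R) == {"y"} and image(f, X) - image(f, R) == set()
assert image(f, L ^ R) == {"y"} and image(f, L) ^ image(f, R) == set()  # symmetric difference: right side empty
```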
Furthermore, this mnemonic can also be used to correctly deduce whether or not a set operation always distributes over images or preimages; for example, to determine whether or not f ( L ∩ R ) {\displaystyle f(L\cap R)} always equals f ( L ) ∩ f ( R ) , {\displaystyle f(L)\cap f(R),} or alternatively, whether or not f − 1 ( L ∩ R ) {\displaystyle f^{-1}(L\cap R)} always equals f − 1 ( L ) ∩ f − 1 ( R ) {\displaystyle f^{-1}(L)\cap f^{-1}(R)} (although ∩ {\displaystyle \,\cap \,} was used here, it can be replaced by ∪ , ∖ , or △ {\displaystyle \,\cup ,\,\setminus ,{\text{ or }}\,\triangle } ). The answer to such a question can, as before, be deduced by consideration of this constant function: the answer for the general case (that is, for arbitrary f , L , {\displaystyle f,L,} and R {\displaystyle R} ) is always the same as the answer for this choice of (constant) function and disjoint non-empty sets. ==== Conditions guaranteeing that images distribute over set operations ==== Characterizations of when equality holds for all sets: For any function f : X → Y , {\displaystyle f:X\to Y,} the following statements are equivalent: f : X → Y {\displaystyle f:X\to Y} is injective. This means: f ( x ) ≠ f ( y ) {\displaystyle f(x)\neq f(y)} for all distinct x , y ∈ X . {\displaystyle x,y\in X.} f ( L ∩ R ) = f ( L ) ∩ f ( R ) for all L , R ⊆ X . {\displaystyle f(L\cap R)=f(L)\,\cap \,f(R)\,{\text{ for all }}\,L,R\subseteq X.} (The equals sign = {\displaystyle \,=\,} can be replaced with ⊇ {\displaystyle \,\supseteq \,} ). f ( L ∖ R ) = f ( L ) ∖ f ( R ) for all L , R ⊆ X . {\displaystyle f(L\,\setminus R)=f(L)\,\setminus \,f(R)\;{\text{ for all }}\,L,R\subseteq X.} (The equals sign = {\displaystyle \,=\,} can be replaced with ⊆ {\displaystyle \,\subseteq \,} ). f ( X ∖ R ) = f ( X ) ∖ f ( R ) for all R ⊆ X . {\displaystyle f(X\setminus R)=f(X)\setminus \,f(R)\;{\text{ for all }}\,~~~~~R\subseteq X.} (The equals sign = {\displaystyle \,=\,} can be replaced with ⊆ {\displaystyle \,\subseteq \,} ). f ( L △ R ) = f ( L ) △ f ( R ) for all L , R ⊆ X . {\displaystyle f(L\,\triangle \,R)=f(L)\,\triangle \,f(R)\;{\text{ for all }}\,L,R\subseteq X.} (The equals sign = {\displaystyle \,=\,} can be replaced with ⊆ {\displaystyle \,\subseteq \,} ). Any one of the four statements (b) - (e) but with the words "for all" replaced with any one of the following: "for all singleton subsets" In particular, the statement that results from (d) gives a characterization of injectivity that explicitly involves only one point (rather than two): f {\displaystyle f} is injective if and only if f ( x ) ∉ f ( X ∖ { x } ) for every x ∈ X . {\displaystyle f(x)\not \in f(X\setminus \{x\})\;{\text{ for every }}\,x\in X.} "for all disjoint singleton subsets" For statement (d), this is the same as: "for all singleton subsets" (because the definition of "pairwise disjoint" is satisfied vacuously by any family that consists of exactly 1 set).
"for all disjoint subsets" In particular, if a map is not known to be injective then barring additional information, there is no guarantee that any of the equalities in statements (b) - (e) hold. An example above can be used to help prove this characterization. Indeed, comparison of that example with such a proof suggests that the example is representative of the fundamental reason why one of these four equalities in statements (b) - (e) might not hold (that is, representative of "what goes wrong" when a set equality does not hold). ===== Conditions for f(L⋂R) = f(L)⋂f(R) ===== f ( L ∩ R ) ⊆ f ( L ) ∩ f ( R ) always holds {\displaystyle f(L\cap R)~\subseteq ~f(L)\cap f(R)\qquad \qquad {\text{ always holds}}} Characterizations of equality: The following statements are equivalent: f ( L ∩ R ) = f ( L ) ∩ f ( R ) {\displaystyle f(L\cap R)~=~f(L)\cap f(R)} f ( L ∩ R ) ⊇ f ( L ) ∩ f ( R ) {\displaystyle f(L\cap R)~\supseteq ~f(L)\cap f(R)} L ∩ f − 1 ( f ( R ) ) ⊆ f − 1 ( f ( L ∩ R ) ) {\displaystyle L\cap f^{-1}(f(R))~\subseteq ~f^{-1}(f(L\cap R))} The left hand side L ∩ f − 1 ( f ( R ) ) {\displaystyle L\cap f^{-1}(f(R))} is always equal to L ∩ f − 1 ( f ( L ) ∩ f ( R ) ) {\displaystyle L\cap f^{-1}(f(L)\cap f(R))} (because L ∩ f − 1 ( f ( R ) ) ⊆ f − 1 ( f ( L ) ) {\displaystyle L\cap f^{-1}(f(R))~\subseteq ~f^{-1}(f(L))} always holds). R ∩ f − 1 ( f ( L ) ) ⊆ f − 1 ( f ( L ∩ R ) ) {\displaystyle R\cap f^{-1}(f(L))~\subseteq ~f^{-1}(f(L\cap R))} L ∩ f − 1 ( f ( R ) ) = f − 1 ( f ( L ∩ R ) ) ∩ L {\displaystyle L\cap f^{-1}(f(R))~=~f^{-1}(f(L\cap R))\cap L} R ∩ f − 1 ( f ( L ) ) = f − 1 ( f ( L ∩ R ) ) ∩ R {\displaystyle R\cap f^{-1}(f(L))~=~f^{-1}(f(L\cap R))\cap R} If l ∈ L {\displaystyle l\in L} satisfies f ( l ) ∈ f ( R ) {\displaystyle f(l)\in f(R)} then f ( l ) ∈ f ( L ∩ R ) . {\displaystyle f(l)\in f(L\cap R).} If y ∈ f ( L ) {\displaystyle y\in f(L)} but y ∉ f ( L ∩ R ) {\displaystyle y\notin f(L\cap R)} then y ∉ f ( R ) . {\displaystyle y\notin f(R).} f ( L ) ∖ f ( L ∩ R ) ⊆ f ( L ) ∖ f ( R ) {\displaystyle f(L)\,\setminus \,f(L\cap R)~\subseteq ~f(L)\,\setminus \,f(R)} f ( R ) ∖ f ( L ∩ R ) ⊆ f ( R ) ∖ f ( L ) {\displaystyle f(R)\,\setminus \,f(L\cap R)~\subseteq ~f(R)\,\setminus \,f(L)} f ( L ∪ R ) ∖ f ( L ∩ R ) ⊆ f ( L ) △ f ( R ) {\displaystyle f(L\cup R)\setminus f(L\cap R)~\subseteq ~f(L)\,\triangle \,f(R)} Any of the above three conditions (i) - (k) but with the subset symbol ⊆ {\displaystyle \,\subseteq \,} replaced with an equals sign = . {\displaystyle \,=.\,} Sufficient conditions for equality: Equality holds if any of the following are true: f {\displaystyle f} is injective. The restriction f | L ∪ R {\displaystyle f{\big \vert }_{L\cup R}} is injective. 
f − 1 ( f ( R ) ) ⊆ R {\displaystyle f^{-1}(f(R))~\subseteq ~R} f − 1 ( f ( L ) ) ⊆ L {\displaystyle f^{-1}(f(L))~\subseteq ~L} R {\displaystyle R} is f {\displaystyle f} -saturated; that is, f − 1 ( f ( R ) ) = R {\displaystyle f^{-1}(f(R))=R} L {\displaystyle L} is f {\displaystyle f} -saturated; that is, f − 1 ( f ( L ) ) = L {\displaystyle f^{-1}(f(L))=L} f ( L ) ⊆ f ( L ∩ R ) {\displaystyle f(L)~\subseteq ~f(L\cap R)} f ( R ) ⊆ f ( L ∩ R ) {\displaystyle f(R)~\subseteq ~f(L\cap R)} f ( L ∖ R ) ⊆ f ( L ) ∖ f ( R ) {\displaystyle f(L\,\setminus \,R)~\subseteq ~f(L)\setminus \,f(R)} or equivalently, f ( L ∖ R ) = f ( L ) ∖ f ( R ) {\displaystyle f(L\,\setminus \,R)~=~f(L)\setminus f(R)} f ( R ∖ L ) ⊆ f ( R ) ∖ f ( L ) {\displaystyle f(R\,\setminus \,L)~\subseteq ~f(R)\setminus \,f(L)} or equivalently, f ( R ∖ L ) = f ( R ) ∖ f ( L ) {\displaystyle f(R\,\setminus \,L)~=~f(R)\setminus f(L)} f ( L △ R ) ⊆ f ( L ) △ f ( R ) {\displaystyle f\left(L~\triangle ~R\right)\subseteq f(L)~\triangle ~f(R)} or equivalently, f ( L △ R ) = f ( L ) △ f ( R ) {\displaystyle f\left(L~\triangle ~R\right)=f(L)~\triangle ~f(R)} R ∩ domain ⁡ f ⊆ L {\displaystyle R\cap \operatorname {domain} f\,\subseteq L} L ∩ domain ⁡ f ⊆ R {\displaystyle L\cap \operatorname {domain} f\,\subseteq R} R ⊆ L {\displaystyle R\subseteq L} L ⊆ R {\displaystyle L\subseteq R} In addition, the following always hold: f ( f − 1 ( L ) ∩ R ) = L ∩ f ( R ) {\displaystyle f\left(f^{-1}(L)\cap R\right)~=~L\cap f(R)} f ( f − 1 ( L ) ∪ R ) = ( L ∩ Im ⁡ f ) ∪ f ( R ) {\displaystyle f\left(f^{-1}(L)\cup R\right)~=~(L\cap \operatorname {Im} f)\cup f(R)} ===== Conditions for f(L\R) = f(L)\f(R) ===== f ( L ∖ R ) ⊇ f ( L ) ∖ f ( R ) always holds {\displaystyle f(L\setminus R)~\supseteq ~f(L)\setminus f(R)\qquad \qquad {\text{ always holds}}} Characterizations of equality: The following statements are equivalent: f ( L ∖ R ) = f ( L ) ∖ f ( R ) {\displaystyle f(L\setminus R)~=~f(L)\setminus f(R)} f ( L ∖ R ) ⊆ f ( L ) ∖ f ( R ) {\displaystyle f(L\setminus R)~\subseteq ~f(L)\setminus f(R)} L ∩ f − 1 ( f ( R ) ) ⊆ R {\displaystyle L\cap f^{-1}(f(R))~\subseteq ~R} L ∩ f − 1 ( f ( R ) ) = L ∩ R ∩ domain ⁡ f {\displaystyle L\cap f^{-1}(f(R))~=~L\cap R\cap \operatorname {domain} f} Whenever y ∈ f ( L ) ∩ f ( R ) {\displaystyle y\in f(L)\cap f(R)} then L ∩ f − 1 ( y ) ⊆ R . {\displaystyle L\cap f^{-1}(y)\subseteq R.} f ( L ) ∩ f ( R ) ⊆ { y ∈ f ( L ) : L ∩ f − 1 ( y ) ⊆ R } {\textstyle f(L)\cap f(R)~\subseteq ~\left\{y\in f(L):L\cap f^{-1}(y)\subseteq R\right\}} The set on the right hand side is always equal to { y ∈ f ( L ∩ R ) : L ∩ f − 1 ( y ) ⊆ R } . {\displaystyle \left\{y\in f(L\cap R):L\cap f^{-1}(y)\,\subseteq R\right\}.} f ( L ) ∩ f ( R ) = { y ∈ f ( L ) : L ∩ f − 1 ( y ) ⊆ R } {\textstyle f(L)\cap f(R)~=~\left\{y\in f(L):L\cap f^{-1}(y)\subseteq R\right\}} This is the above condition (f) but with the subset symbol ⊆ {\displaystyle \,\subseteq \,} replaced with an equals sign = . {\displaystyle \,=.\,} Necessary conditions for equality (excluding characterizations): If equality holds then the following are necessarily true: f ( L ∩ R ) = f ( L ) ∩ f ( R ) , {\displaystyle f(L\cap R)=f(L)\cap f(R),} or equivalently f ( L ∩ R ) ⊇ f ( L ) ∩ f ( R ) . 
{\displaystyle f(L\cap R)\supseteq f(L)\cap f(R).} L ∩ f − 1 ( f ( R ) ) = L ∩ f − 1 ( f ( L ∩ R ) ) {\displaystyle L\cap f^{-1}(f(R))~=~L\cap f^{-1}(f(L\cap R))} or equivalently, L ∩ f − 1 ( f ( R ) ) ⊆ f − 1 ( f ( L ∩ R ) ) {\displaystyle L\cap f^{-1}(f(R))~\subseteq ~f^{-1}(f(L\cap R))} R ∩ f − 1 ( f ( L ) ) = R ∩ f − 1 ( f ( L ∩ R ) ) {\displaystyle R\cap f^{-1}(f(L))~=~R\cap f^{-1}(f(L\cap R))} Sufficient conditions for equality: Equality holds if any of the following are true: f {\displaystyle f} is injective. The restriction f | L ∪ R {\displaystyle f{\big \vert }_{L\cup R}} is injective. f − 1 ( f ( R ) ) ⊆ R {\displaystyle f^{-1}(f(R))~\subseteq ~R} or equivalently, R ∩ domain ⁡ f = f − 1 ( f ( R ) ) {\displaystyle R\cap \operatorname {domain} f~=~f^{-1}(f(R))} R {\displaystyle R} is f {\displaystyle f} -saturated; that is, R = f − 1 ( f ( R ) ) . {\displaystyle R=f^{-1}(f(R)).} f ( L △ R ) ⊆ f ( L ) △ f ( R ) {\displaystyle f\left(L~\triangle ~R\right)\subseteq f(L)~\triangle ~f(R)} or equivalently, f ( L △ R ) = f ( L ) △ f ( R ) {\displaystyle f\left(L~\triangle ~R\right)=f(L)~\triangle ~f(R)} ===== Conditions for f(X\R) = f(X)\f(R) ===== f ( X ∖ R ) ⊇ f ( X ) ∖ f ( R ) always holds, where f : X → Y {\displaystyle f(X\setminus R)~\supseteq ~f(X)\setminus f(R)\qquad \qquad {\text{ always holds, where }}f:X\to Y} Characterizations of equality: The following statements are equivalent: f ( X ∖ R ) = f ( X ) ∖ f ( R ) {\displaystyle f(X\setminus R)~=~f(X)\setminus f(R)} f ( X ∖ R ) ⊆ f ( X ) ∖ f ( R ) {\displaystyle f(X\setminus R)~\subseteq ~f(X)\setminus f(R)} f − 1 ( f ( R ) ) ⊆ R {\displaystyle f^{-1}(f(R))\,\subseteq \,R} f − 1 ( f ( R ) ) = R ∩ domain ⁡ f {\displaystyle f^{-1}(f(R))\,=\,R\cap \operatorname {domain} f} R ∩ domain ⁡ f {\displaystyle R\cap \operatorname {domain} f} is f {\displaystyle f} -saturated. Whenever y ∈ f ( R ) {\displaystyle y\in f(R)} then f − 1 ( y ) ⊆ R . {\displaystyle f^{-1}(y)\subseteq R.} f ( R ) ⊆ { y ∈ f ( R ) : f − 1 ( y ) ⊆ R } {\textstyle f(R)~\subseteq ~\left\{y\in f(R):f^{-1}(y)\subseteq R\right\}} f ( R ) = { y ∈ f ( R ) : f − 1 ( y ) ⊆ R } {\textstyle f(R)~=~\left\{y\in f(R):f^{-1}(y)\subseteq R\right\}} where if R ⊆ domain ⁡ f {\displaystyle R\subseteq \operatorname {domain} f} then this list can be extended to include: R {\displaystyle R} is f {\displaystyle f} -saturated; that is, R = f − 1 ( f ( R ) ) . {\displaystyle R=f^{-1}(f(R)).} Sufficient conditions for equality: Equality holds if any of the following are true: f {\displaystyle f} is injective. R {\displaystyle R} is f {\displaystyle f} -saturated; that is, R = f − 1 ( f ( R ) ) . 
{\displaystyle R=f^{-1}(f(R)).} ===== Conditions for f(L∆R) = f(L)∆f(R) ===== f ( L △ R ) ⊇ f ( L ) △ f ( R ) always holds {\displaystyle f\left(L~\triangle ~R\right)~\supseteq ~f(L)~\triangle ~f(R)\qquad \qquad {\text{ always holds}}} Characterizations of equality: The following statements are equivalent: f ( L △ R ) = f ( L ) △ f ( R ) {\displaystyle f\left(L~\triangle ~R\right)=f(L)~\triangle ~f(R)} f ( L △ R ) ⊆ f ( L ) △ f ( R ) {\displaystyle f\left(L~\triangle ~R\right)\subseteq f(L)~\triangle ~f(R)} f ( L ∖ R ) = f ( L ) ∖ f ( R ) {\displaystyle f(L\,\setminus \,R)=f(L)\,\setminus \,f(R)} and f ( R ∖ L ) = f ( R ) ∖ f ( L ) {\displaystyle f(R\,\setminus \,L)=f(R)\,\setminus \,f(L)} f ( L ∖ R ) ⊆ f ( L ) ∖ f ( R ) {\displaystyle f(L\,\setminus \,R)\subseteq f(L)\,\setminus \,f(R)} and f ( R ∖ L ) ⊆ f ( R ) ∖ f ( L ) {\displaystyle f(R\,\setminus \,L)\subseteq f(R)\,\setminus \,f(L)} L ∩ f − 1 ( f ( R ) ) ⊆ R {\displaystyle L\cap f^{-1}(f(R))~\subseteq ~R} and R ∩ f − 1 ( f ( L ) ) ⊆ L {\displaystyle R\cap f^{-1}(f(L))~\subseteq ~L} The inclusions L ∩ f − 1 ( f ( R ) ) ⊆ f − 1 ( f ( L ) ) {\displaystyle L\cap f^{-1}(f(R))~\subseteq ~f^{-1}(f(L))} and R ∩ f − 1 ( f ( L ) ) ⊆ f − 1 ( f ( R ) ) {\displaystyle R\cap f^{-1}(f(L))~\subseteq ~f^{-1}(f(R))} always hold. L ∩ f − 1 ( f ( R ) ) = R ∩ f − 1 ( f ( L ) ) {\displaystyle L\cap f^{-1}(f(R))~=~R\cap f^{-1}(f(L))} If this above set equality holds, then this set will also be equal to both L ∩ R ∩ domain ⁡ f {\displaystyle L\cap R\cap \operatorname {domain} f} and L ∩ R ∩ f − 1 ( f ( L ∩ R ) ) . {\displaystyle L\cap R\cap f^{-1}(f(L\cap R)).} L ∩ f − 1 ( f ( L ∩ R ) ) = R ∩ f − 1 ( f ( L ∩ R ) ) {\displaystyle L\cap f^{-1}(f(L\cap R))~=~R\cap f^{-1}(f(L\cap R))} and f ( L ∩ R ) ⊇ f ( L ) ∩ f ( R ) . {\displaystyle f(L\cap R)~\supseteq ~f(L)\cap f(R).} Necessary conditions for equality (excluding characterizations): If equality holds then the following are necessarily true: f ( L ∩ R ) = f ( L ) ∩ f ( R ) , {\displaystyle f(L\cap R)=f(L)\cap f(R),} or equivalently f ( L ∩ R ) ⊇ f ( L ) ∩ f ( R ) . {\displaystyle f(L\cap R)\supseteq f(L)\cap f(R).} L ∩ f − 1 ( f ( L ∩ R ) ) = R ∩ f − 1 ( f ( L ∩ R ) ) {\displaystyle L\cap f^{-1}(f(L\cap R))~=~R\cap f^{-1}(f(L\cap R))} Sufficient conditions for equality: Equality holds if any of the following are true: f {\displaystyle f} is injective. The restriction f | L ∪ R {\displaystyle f{\big \vert }_{L\cup R}} is injective. ==== Exact formulas/equalities for images of set operations ==== ===== Formulas for f(L\R) = ===== For any function f : X → Y {\displaystyle f:X\to Y} and any sets L {\displaystyle L} and R , {\displaystyle R,} f ( L ∖ R ) = Y ∖ { y ∈ Y : L ∩ f − 1 ( y ) ⊆ R } = f ( L ) ∖ { y ∈ f ( L ) : L ∩ f − 1 ( y ) ⊆ R } = f ( L ) ∖ { y ∈ f ( L ∩ R ) : L ∩ f − 1 ( y ) ⊆ R } = f ( L ) ∖ { y ∈ V : L ∩ f − 1 ( y ) ⊆ R } for any superset V ⊇ f ( L ∩ R ) = f ( S ) ∖ { y ∈ f ( S ) : L ∩ f − 1 ( y ) ⊆ R } for any superset S ⊇ L ∩ X . 
{\displaystyle {\begin{alignedat}{4}f(L\setminus R)&=Y~~~\;\,\,\setminus \left\{y\in Y~~~~~~~~~~\;\,~:~L\cap f^{-1}(y)\subseteq R\right\}\\[0.4ex]&=f(L)\setminus \left\{y\in f(L)~~~~~~~\,~:~L\cap f^{-1}(y)\subseteq R\right\}\\[0.4ex]&=f(L)\setminus \left\{y\in f(L\cap R)~:~L\cap f^{-1}(y)\subseteq R\right\}\\[0.4ex]&=f(L)\setminus \left\{y\in V~~~~~~~~~~~~\,~:~L\cap f^{-1}(y)\subseteq R\right\}\qquad &&{\text{ for any superset }}\quad V\supseteq f(L\cap R)\\[0.4ex]&=f(S)\setminus \left\{y\in f(S)~~~~~~~\,~:~L\cap f^{-1}(y)\subseteq R\right\}\qquad &&{\text{ for any superset }}\quad S\supseteq L\cap X.\\[0.7ex]\end{alignedat}}} ===== Formulas for f(X\R) = ===== Taking L := X = domain ⁡ f {\displaystyle L:=X=\operatorname {domain} f} in the above formulas gives: f ( X ∖ R ) = Y ∖ { y ∈ Y : f − 1 ( y ) ⊆ R } = f ( X ) ∖ { y ∈ f ( X ) : f − 1 ( y ) ⊆ R } = f ( X ) ∖ { y ∈ f ( R ) : f − 1 ( y ) ⊆ R } = f ( X ) ∖ { y ∈ W : f − 1 ( y ) ⊆ R } for any superset W ⊇ f ( R ) {\displaystyle {\begin{alignedat}{4}f(X\setminus R)&=Y~~~\;\,\,\setminus \left\{y\in Y~~~~\;\,\,:~f^{-1}(y)\subseteq R\right\}\\[0.4ex]&=f(X)\setminus \left\{y\in f(X)~:~f^{-1}(y)\subseteq R\right\}\\[0.4ex]&=f(X)\setminus \left\{y\in f(R)~:~f^{-1}(y)\subseteq R\right\}\\[0.4ex]&=f(X)\setminus \left\{y\in W~~~\;\,\,:~f^{-1}(y)\subseteq R\right\}\qquad {\text{ for any superset }}\quad W\supseteq f(R)\\[0.4ex]\end{alignedat}}} where the set { y ∈ f ( R ) : f − 1 ( y ) ⊆ R } {\displaystyle \left\{y\in f(R):f^{-1}(y)\subseteq R\right\}} is equal to the image under f {\displaystyle f} of the largest f {\displaystyle f} -saturated subset of R . {\displaystyle R.} In general, only f ( X ∖ R ) ⊇ f ( X ) ∖ f ( R ) {\displaystyle f(X\setminus R)\,\supseteq \,f(X)\setminus f(R)} always holds and equality is not guaranteed; but replacing " f ( R ) {\displaystyle f(R)} " with its subset " { y ∈ f ( R ) : f − 1 ( y ) ⊆ R } {\displaystyle \left\{y\in f(R):f^{-1}(y)\subseteq R\right\}} " results in a formula in which equality is always guaranteed: f ( X ∖ R ) = f ( X ) ∖ { y ∈ f ( R ) : f − 1 ( y ) ⊆ R } . {\displaystyle f(X\setminus R)\,=\,f(X)\setminus \left\{y\in f(R):f^{-1}(y)\subseteq R\right\}.} From this it follows that: f ( X ∖ R ) = f ( X ) ∖ f ( R ) if and only if f ( R ) = { y ∈ f ( R ) : f − 1 ( y ) ⊆ R } if and only if f − 1 ( f ( R ) ) ⊆ R . {\displaystyle f(X\setminus R)=f(X)\setminus f(R)\quad {\text{ if and only if }}\quad f(R)=\left\{y\in f(R):f^{-1}(y)\subseteq R\right\}\quad {\text{ if and only if }}\quad f^{-1}(f(R))\subseteq R.} If f R := { y ∈ f ( X ) : f − 1 ( y ) ⊆ R } {\displaystyle f_{R}:=\left\{y\in f(X):f^{-1}(y)\subseteq R\right\}} then f ( X ∖ R ) = f ( X ) ∖ f R , {\displaystyle f(X\setminus R)=f(X)\setminus f_{R},} which can be written more symmetrically as f ( X ∖ R ) = f X ∖ f R {\displaystyle f(X\setminus R)=f_{X}\setminus f_{R}} (since f X = f ( X ) {\displaystyle f_{X}=f(X)} ). 
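The always-valid version of the formula — with f(R) replaced by the image of the largest f-saturated subset of R — can be checked exhaustively on a small example; an illustrative Python sketch (ad hoc helpers):

```python
from itertools import combinations

def image(f, L):     return {f[x] for x in L if x in f}
def preimage(f, C):  return {x for x in f if f[x] in C}
def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

f = {1: "a", 2: "a", 3: "b"}
X = set(f)

for R in subsets(X):
    f_R = {y for y in image(f, R) if preimage(f, {y}) <= R}  # image of the largest saturated subset of R
    assert image(f, X - R) == image(f, X) - f_R              # always an equality
    assert image(f, X - R) >= image(f, X) - image(f, R)      # in general only a containment
```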
===== Formulas for f(L∆R) = ===== It follows from L △ R = ( L ∪ R ) ∖ ( L ∩ R ) {\displaystyle L\,\triangle \,R=(L\cup R)\setminus (L\cap R)} and the above formulas for the image of a set subtraction that for any function f : X → Y {\displaystyle f:X\to Y} and any sets L {\displaystyle L} and R , {\displaystyle R,} f ( L △ R ) = Y ∖ { y ∈ Y : L ∩ f − 1 ( y ) = R ∩ f − 1 ( y ) } = f ( L ∪ R ) ∖ { y ∈ f ( L ∪ R ) : L ∩ f − 1 ( y ) = R ∩ f − 1 ( y ) } = f ( L ∪ R ) ∖ { y ∈ f ( L ∩ R ) : L ∩ f − 1 ( y ) = R ∩ f − 1 ( y ) } = f ( L ∪ R ) ∖ { y ∈ V : L ∩ f − 1 ( y ) = R ∩ f − 1 ( y ) } for any superset V ⊇ f ( L ∩ R ) = f ( S ) ∖ { y ∈ f ( S ) : L ∩ f − 1 ( y ) = R ∩ f − 1 ( y ) } for any superset S ⊇ ( L ∪ R ) ∩ X . {\displaystyle {\begin{alignedat}{4}f(L\,\triangle \,R)&=Y~~~\;~~~\;~~~\;\setminus \left\{y\in Y~~~\,~~~\;~~~\,~~:~L\cap f^{-1}(y)=R\cap f^{-1}(y)\right\}\\[0.4ex]&=f(L\cup R)\setminus \left\{y\in f(L\cup R)~:~L\cap f^{-1}(y)=R\cap f^{-1}(y)\right\}\\[0.4ex]&=f(L\cup R)\setminus \left\{y\in f(L\cap R)~:~L\cap f^{-1}(y)=R\cap f^{-1}(y)\right\}\\[0.4ex]&=f(L\cup R)\setminus \left\{y\in V~~~\,~~~~~~~~~~:~L\cap f^{-1}(y)=R\cap f^{-1}(y)\right\}\qquad &&{\text{ for any superset }}\quad V\supseteq f(L\cap R)\\[0.4ex]&=f(S)~~\,~~~\,~\,\setminus \left\{y\in f(S)~~~\,~~~\;~:~L\cap f^{-1}(y)=R\cap f^{-1}(y)\right\}\qquad &&{\text{ for any superset }}\quad S\supseteq (L\cup R)\cap X.\\[0.7ex]\end{alignedat}}} ===== Formulas for f(L) = ===== It follows from the above formulas for the image of a set subtraction that for any function f : X → Y {\displaystyle f:X\to Y} and any set L , {\displaystyle L,} f ( L ) = Y ∖ { y ∈ Y : f − 1 ( y ) ∩ L = ∅ } = Im ⁡ f ∖ { y ∈ Im ⁡ f : f − 1 ( y ) ∩ L = ∅ } = W ∖ { y ∈ W : f − 1 ( y ) ∩ L = ∅ } for any superset W ⊇ f ( L ) {\displaystyle {\begin{alignedat}{4}f(L)&=Y~~~\;\,\setminus \left\{y\in Y~~~\;\,~:~f^{-1}(y)\cap L=\varnothing \right\}\\[0.4ex]&=\operatorname {Im} f\setminus \left\{y\in \operatorname {Im} f~:~f^{-1}(y)\cap L=\varnothing \right\}\\[0.4ex]&=W~~~\,\setminus \left\{y\in W~~\;\,~:~f^{-1}(y)\cap L=\varnothing \right\}\qquad {\text{ for any superset }}\quad W\supseteq f(L)\\[0.7ex]\end{alignedat}}} This is more easily seen as being a consequence of the fact that for any y ∈ Y , {\displaystyle y\in Y,} f − 1 ( y ) ∩ L = ∅ {\displaystyle f^{-1}(y)\cap L=\varnothing } if and only if y ∉ f ( L ) . 
{\displaystyle y\not \in f(L).} ===== Formulas for f(L⋂R) = ===== It follows from the above formulas for the image of a set that for any function f : X → Y {\displaystyle f:X\to Y} and any sets L {\displaystyle L} and R , {\displaystyle R,} f ( L ∩ R ) = Y ∖ { y ∈ Y : L ∩ R ∩ f − 1 ( y ) = ∅ } = f ( L ) ∖ { y ∈ f ( L ) : L ∩ R ∩ f − 1 ( y ) = ∅ } = f ( L ) ∖ { y ∈ U : L ∩ R ∩ f − 1 ( y ) = ∅ } for any superset U ⊇ f ( L ) = f ( R ) ∖ { y ∈ f ( R ) : L ∩ R ∩ f − 1 ( y ) = ∅ } = f ( R ) ∖ { y ∈ V : L ∩ R ∩ f − 1 ( y ) = ∅ } for any superset V ⊇ f ( R ) = f ( L ) ∩ f ( R ) ∖ { y ∈ f ( L ) ∩ f ( R ) : L ∩ R ∩ f − 1 ( y ) = ∅ } {\displaystyle {\begin{alignedat}{4}f(L\cap R)&=Y~~~~~\setminus \left\{y\in Y~~~~~~:~L\cap R\cap f^{-1}(y)=\varnothing \right\}&&\\[0.4ex]&=f(L)\setminus \left\{y\in f(L)~:~L\cap R\cap f^{-1}(y)=\varnothing \right\}&&\\[0.4ex]&=f(L)\setminus \left\{y\in U~~~~~~:~L\cap R\cap f^{-1}(y)=\varnothing \right\}\qquad &&{\text{ for any superset }}\quad U\supseteq f(L)\\[0.4ex]&=f(R)\setminus \left\{y\in f(R)~:~L\cap R\cap f^{-1}(y)=\varnothing \right\}&&\\[0.4ex]&=f(R)\setminus \left\{y\in V~~~~~~:~L\cap R\cap f^{-1}(y)=\varnothing \right\}\qquad &&{\text{ for any superset }}\quad V\supseteq f(R)\\[0.4ex]&=f(L)\cap f(R)\setminus \left\{y\in f(L)\cap f(R)~:~L\cap R\cap f^{-1}(y)=\varnothing \right\}&&\\[0.7ex]\end{alignedat}}} where moreover, for any y ∈ Y , {\displaystyle y\in Y,} L ∩ f − 1 ( y ) ⊆ L ∖ R {\displaystyle L\cap f^{-1}(y)\subseteq L\setminus R~} if and only if L ∩ R ∩ f − 1 ( y ) = ∅ {\displaystyle ~L\cap R\cap f^{-1}(y)=\varnothing ~} if and only if R ∩ f − 1 ( y ) ⊆ R ∖ L {\displaystyle ~R\cap f^{-1}(y)\subseteq R\setminus L~} if and only if y ∉ f ( L ∩ R ) . {\displaystyle ~y\not \in f(L\cap R).} The sets U {\displaystyle U} and V {\displaystyle V} mentioned above could, in particular, be any of the sets f ( L ∪ R ) , Im ⁡ f , {\displaystyle f(L\cup R),\;\operatorname {Im} f,} or Y , {\displaystyle Y,} for example. === (Pre)Images of set operations on (pre)images === Let L {\displaystyle L} and R {\displaystyle R} be arbitrary sets, f : X → Y {\displaystyle f:X\to Y} be any map, and let A ⊆ X {\displaystyle A\subseteq X} and C ⊆ Y . 
{\displaystyle C\subseteq Y.} (Pre)Images of operations on images Since f ( L ) ∖ f ( L ∖ R ) = { y ∈ f ( L ∩ R ) : L ∩ f − 1 ( y ) ⊆ R } , {\displaystyle f(L)\setminus f(L\setminus R)~=~\left\{y\in f(L\cap R)~:~L\cap f^{-1}(y)\subseteq R\right\},} f − 1 ( f ( L ) ∖ f ( L ∖ R ) ) = f − 1 ( { y ∈ f ( L ∩ R ) : L ∩ f − 1 ( y ) ⊆ R } ) = { x ∈ f − 1 ( f ( L ∩ R ) ) : L ∩ f − 1 ( f ( x ) ) ⊆ R } {\displaystyle {\begin{alignedat}{4}f^{-1}(f(L)\setminus f(L\setminus R))&=&&f^{-1}\left(\left\{y\in f(L\cap R)~:~L\cap f^{-1}(y)\subseteq R\right\}\right)\\&=&&\left\{x\in f^{-1}(f(L\cap R))~:~L\cap f^{-1}(f(x))\subseteq R\right\}\\\end{alignedat}}} Since f ( X ) ∖ f ( L ∖ R ) = { y ∈ f ( X ) : L ∩ f − 1 ( y ) ⊆ R } , {\displaystyle f(X)\setminus f(L\setminus R)~=~\left\{y\in f(X)~:~L\cap f^{-1}(y)\subseteq R\right\},} f − 1 ( Y ∖ f ( L ∖ R ) ) = f − 1 ( f ( X ) ∖ f ( L ∖ R ) ) = f − 1 ( { y ∈ f ( X ) : L ∩ f − 1 ( y ) ⊆ R } ) = { x ∈ X : L ∩ f − 1 ( f ( x ) ) ⊆ R } = X ∖ f − 1 ( f ( L ∖ R ) ) {\displaystyle {\begin{alignedat}{4}f^{-1}(Y\setminus f(L\setminus R))&~=~&&f^{-1}(f(X)\setminus f(L\setminus R))\\&=&&f^{-1}\left(\left\{y\in f(X)~:~L\cap f^{-1}(y)\subseteq R\right\}\right)\\&=&&\left\{x\in X~:~L\cap f^{-1}(f(x))\subseteq R\right\}\\&~=~&&X\setminus f^{-1}(f(L\setminus R))\\\end{alignedat}}} Using L := X , {\displaystyle L:=X,} this becomes f ( X ) ∖ f ( X ∖ R ) = { y ∈ f ( R ) : f − 1 ( y ) ⊆ R } {\displaystyle ~f(X)\setminus f(X\setminus R)~=~\left\{y\in f(R)~:~f^{-1}(y)\subseteq R\right\}~} and f − 1 ( Y ∖ f ( X ∖ R ) ) = f − 1 ( f ( X ) ∖ f ( X ∖ R ) ) = f − 1 ( { y ∈ f ( R ) : f − 1 ( y ) ⊆ R } ) = { r ∈ R ∩ X : f − 1 ( f ( r ) ) ⊆ R } ⊆ R {\displaystyle {\begin{alignedat}{4}f^{-1}(Y\setminus f(X\setminus R))&~=~&&f^{-1}(f(X)\setminus f(X\setminus R))\\&=&&f^{-1}\left(\left\{y\in f(R)~:~f^{-1}(y)\subseteq R\right\}\right)\\&=&&\left\{r\in R\cap X~:~f^{-1}(f(r))\subseteq R\right\}\\&\subseteq &&R\\\end{alignedat}}} and so f − 1 ( Y ∖ f ( L ) ) = f − 1 ( f ( X ) ∖ f ( L ) ) = f − 1 ( { y ∈ f ( X ∖ L ) : f − 1 ( y ) ∩ L = ∅ } ) = { x ∈ X ∖ L : f ( x ) ∉ f ( L ) } = X ∖ f − 1 ( f ( L ) ) ⊆ X ∖ L {\displaystyle {\begin{alignedat}{4}f^{-1}(Y\setminus f(L))&~=~&&f^{-1}(f(X)\setminus f(L))\\&=&&f^{-1}\left(\left\{y\in f(X\setminus L)~:~f^{-1}(y)\cap L=\varnothing \right\}\right)\\&=&&\{x\in X\setminus L~:~f(x)\not \in f(L)\}\\&=&&X\setminus f^{-1}(f(L))\\&\subseteq &&X\setminus L\\\end{alignedat}}} === (Pre)Images and Cartesian products Π === Let ∏ Y ∙ = def ∏ j ∈ J Y j {\displaystyle \prod Y_{\bullet }~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\prod _{j\in J}Y_{j}} and for every k ∈ J , {\displaystyle k\in J,} let π k : ∏ j ∈ J Y j → Y k {\displaystyle \pi _{k}~:~\prod _{j\in J}Y_{j}~\to ~Y_{k}} denote the canonical projection onto Y k . {\displaystyle Y_{k}.} Definitions Given a collection of maps F j : X → Y j {\displaystyle F_{j}:X\to Y_{j}} indexed by j ∈ J , {\displaystyle j\in J,} define the map ( F j ) j ∈ J : X → ∏ j ∈ J Y j x ↦ ( F j ( x ) ) j ∈ J , {\displaystyle {\begin{alignedat}{4}\left(F_{j}\right)_{j\in J}:\;&&X&&\;\to \;&\prod _{j\in J}Y_{j}\\[0.3ex]&&x&&\;\mapsto \;&\left(F_{j}(x)\right)_{j\in J},\\\end{alignedat}}} which is also denoted by F ∙ = ( F j ) j ∈ J . {\displaystyle F_{\bullet }=\left(F_{j}\right)_{j\in J}.} This is the unique map satisfying π j ∘ F ∙ = F j for all j ∈ J . {\displaystyle \pi _{j}\circ F_{\bullet }=F_{j}\quad {\text{ for all }}j\in J.}
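The difference between the map (F_j)_{j∈J} and the Cartesian product of the maps is easy to see in code; a minimal illustrative sketch with J = {1, 2}:

```python
F1 = {1: "a", 2: "b"}   # F1 : X -> Y1 with X = {1, 2}
F2 = {1: 10,  2: 20}    # F2 : X -> Y2

def tupling(x):
    """The map (F1, F2) : X -> Y1 x Y2 sending x to (F1(x), F2(x))."""
    return (F1[x], F2[x])

def product_map(x1, x2):
    """The Cartesian product F1 x F2 : X^2 -> Y1 x Y2 sending (x1, x2) to (F1(x1), F2(x2))."""
    return (F1[x1], F2[x2])

assert tupling(1) == ("a", 10)          # one argument, taken from X
assert product_map(1, 2) == ("a", 20)   # a pair of arguments, taken from X^2
# Composing the tupling with each canonical projection recovers the component maps.
assert all((tupling(x)[0], tupling(x)[1]) == (F1[x], F2[x]) for x in F1)
```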
Conversely, if given a map F : X → ∏ j ∈ J Y j {\displaystyle F~:~X~\to ~\prod _{j\in J}Y_{j}} then F = ( π j ∘ F ) j ∈ J . {\displaystyle F=\left(\pi _{j}\circ F\right)_{j\in J}.} Explicitly, what this means is that if F k = def π k ∘ F : X → Y k {\displaystyle F_{k}~{\stackrel {\scriptscriptstyle {\text{def}}}{=}}~\pi _{k}\circ F~:~X~\to ~Y_{k}} is defined for every k ∈ J , {\displaystyle k\in J,} then F {\displaystyle F} is the unique map satisfying: π j ∘ F = F j {\displaystyle \pi _{j}\circ F=F_{j}} for all j ∈ J ; {\displaystyle j\in J;} or said more briefly, F = ( F j ) j ∈ J . {\displaystyle F=\left(F_{j}\right)_{j\in J}.} The map F ∙ = ( F j ) j ∈ J : X → ∏ j ∈ J Y j {\displaystyle F_{\bullet }=\left(F_{j}\right)_{j\in J}~:~X~\to ~\prod _{j\in J}Y_{j}} should not be confused with the Cartesian product ∏ j ∈ J F j {\displaystyle \prod _{j\in J}F_{j}} of these maps, which by definition is the map ∏ j ∈ J F j : ∏ j ∈ J X → ∏ j ∈ J Y j ( x j ) j ∈ J ↦ ( F j ( x j ) ) j ∈ J {\displaystyle {\begin{alignedat}{4}\prod _{j\in J}F_{j}:\;&&\prod _{j\in J}X&&~\;\to \;~&\prod _{j\in J}Y_{j}\\[0.3ex]&&\left(x_{j}\right)_{j\in J}&&~\;\mapsto \;~&\left(F_{j}\left(x_{j}\right)\right)_{j\in J}\\\end{alignedat}}} with domain ∏ j ∈ J X = X J {\displaystyle \prod _{j\in J}X=X^{J}} rather than X . {\displaystyle X.} Preimage and images of a Cartesian product Suppose F ∙ = ( F j ) j ∈ J : X → ∏ j ∈ J Y j . {\displaystyle F_{\bullet }=\left(F_{j}\right)_{j\in J}~:~X~\to ~\prod _{j\in J}Y_{j}.} If A ⊆ X {\displaystyle A~\subseteq ~X} then F ∙ ( A ) ⊆ ∏ j ∈ J F j ( A ) . {\displaystyle F_{\bullet }(A)~~\;\color {Red}{\subseteq }\color {Black}{}\;~~\prod _{j\in J}F_{j}(A).} If B ⊆ ∏ j ∈ J Y j {\displaystyle B~\subseteq ~\prod _{j\in J}Y_{j}} then F ∙ − 1 ( B ) ⊆ ⋂ j ∈ J F j − 1 ( π j ( B ) ) {\displaystyle F_{\bullet }^{-1}(B)~~\;\color {Red}{\subseteq }\color {Black}{}\;~~\bigcap _{j\in J}F_{j}^{-1}\left(\pi _{j}(B)\right)} where equality will hold if B = ∏ j ∈ J π j ( B ) , {\displaystyle B=\prod _{j\in J}\pi _{j}(B),} in which case F ∙ − 1 ( B ) = ⋂ j ∈ J F j − 1 ( π j ( B ) ) . {\textstyle F_{\bullet }^{-1}(B)=\displaystyle \bigcap _{j\in J}F_{j}^{-1}\left(\pi _{j}(B)\right).} For equality to hold, it suffices for there to exist a family ( B j ) j ∈ J {\displaystyle \left(B_{j}\right)_{j\in J}} of subsets B j ⊆ Y j {\displaystyle B_{j}\subseteq Y_{j}} such that B = ∏ j ∈ J B j , {\displaystyle B=\prod _{j\in J}B_{j},} in which case F ∙ − 1 ( B ) = ⋂ j ∈ J F j − 1 ( B j ) {\textstyle F_{\bullet }^{-1}(B)=\displaystyle \bigcap _{j\in J}F_{j}^{-1}\left(B_{j}\right)} and, provided that B {\displaystyle B} is not empty, π j ( B ) = B j {\displaystyle \pi _{j}(B)=B_{j}} for all j ∈ J . {\displaystyle j\in J.} === (Pre)Image of a single set === ==== Containments ⊆ and intersections ⋂ of images and preimages ==== Equivalences and implications of images and preimages Intersection of a set and a (pre)image The following statements are equivalent: ∅ = f ( L ) ∩ R {\displaystyle \varnothing =f(L)\cap R} ∅ = L ∩ f − 1 ( R ) {\displaystyle \varnothing =L\cap f^{-1}(R)} ∅ = f − 1 ( f ( L ) ) ∩ f − 1 ( R ) {\displaystyle \varnothing =f^{-1}(f(L))\cap f^{-1}(R)} ∅ = f − 1 ( f ( L ) ∩ R ) {\displaystyle \varnothing =f^{-1}(f(L)\cap R)} Thus for any t , {\displaystyle t,} t ∉ f ( L ) if and only if L ∩ f − 1 ( t ) = ∅ . {\displaystyle t\not \in f(L)\quad {\text{ if and only if }}\quad L\cap f^{-1}(t)=\varnothing .}
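The product formula for preimages above can be spot-checked directly; the following illustrative sketch verifies F_•^{-1}(B_1 × B_2) = F_1^{-1}(B_1) ∩ F_2^{-1}(B_2) over all boxes B_1 × B_2 in a small example.

```python
from itertools import combinations, product

def preimage(f, C):
    return {x for x in f if f[x] in C}

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

F1 = {1: "a", 2: "a", 3: "b"}   # F1 : X -> Y1
F2 = {1: 10,  2: 20,  3: 10}    # F2 : X -> Y2
X = set(F1)

for B1 in subsets({"a", "b"}):
    for B2 in subsets({10, 20}):
        B = set(product(B1, B2))                              # B = B1 x B2
        tupling_preimage = {x for x in X if (F1[x], F2[x]) in B}
        assert tupling_preimage == preimage(F1, B1) & preimage(F2, B2)
```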
== Sequences and collections of families of sets == === Definitions === A family of sets or simply a family is a set whose elements are sets. A family over X {\displaystyle X} is a family of subsets of X . {\displaystyle X.} The power set of a set X {\displaystyle X} is the set of all subsets of X {\displaystyle X} : ℘ ( X ) : = { S : S ⊆ X } . {\displaystyle \wp (X)~\colon =~\{\;S~:~S\subseteq X\;\}.} Notation for sequences of sets Throughout, S and T {\displaystyle S{\text{ and }}T} will be arbitrary sets and S ∙ {\displaystyle S_{\bullet }} will denote a net or a sequence of sets, where if it is a sequence then this will be indicated by either of the notations S ∙ = ( S i ) i = 1 ∞ or S ∙ = ( S i ) i ∈ N {\displaystyle S_{\bullet }=\left(S_{i}\right)_{i=1}^{\infty }\qquad {\text{ or }}\qquad S_{\bullet }=\left(S_{i}\right)_{i\in \mathbb {N} }} where N {\displaystyle \mathbb {N} } denotes the natural numbers. A notation S ∙ = ( S i ) i ∈ I {\displaystyle S_{\bullet }=\left(S_{i}\right)_{i\in I}} indicates that S ∙ {\displaystyle S_{\bullet }} is a net directed by ( I , ≤ ) , {\displaystyle (I,\leq ),} which (by definition) is a sequence if the set I , {\displaystyle I,} which is called the net's indexing set, is the natural numbers (that is, if I = N {\displaystyle I=\mathbb {N} } ) and ≤ {\displaystyle \,\leq \,} is the natural order on N . {\displaystyle \mathbb {N} .} Disjoint and monotone sequences of sets If S i ∩ S j = ∅ {\displaystyle S_{i}\cap S_{j}=\varnothing } for all distinct indices i ≠ j {\displaystyle i\neq j} then S ∙ {\displaystyle S_{\bullet }} is called pairwise disjoint or simply disjoint. A sequence or net S ∙ {\displaystyle S_{\bullet }} of sets is called increasing or non-decreasing (resp. decreasing or non-increasing) if for all indices i ≤ j , {\displaystyle i\leq j,} S i ⊆ S j {\displaystyle S_{i}\subseteq S_{j}} (resp. S i ⊇ S j {\displaystyle S_{i}\supseteq S_{j}} ). A sequence or net S ∙ {\displaystyle S_{\bullet }} of sets is called strictly increasing (resp. strictly decreasing) if it is non-decreasing (resp. is non-increasing) and also S i ≠ S j {\displaystyle S_{i}\neq S_{j}} for all distinct indices i and j . {\displaystyle i{\text{ and }}j.} It is called monotone if it is non-decreasing or non-increasing and it is called strictly monotone if it is strictly increasing or strictly decreasing. A sequence or net S ∙ {\displaystyle S_{\bullet }} is said to increase to S , {\displaystyle S,} denoted by S ∙ ↑ S {\displaystyle S_{\bullet }\uparrow S} or S ∙ ↗ S , {\displaystyle S_{\bullet }\nearrow S,} if S ∙ {\displaystyle S_{\bullet }} is increasing and the union of all S i {\displaystyle S_{i}} is S ; {\displaystyle S;} that is, if ⋃ n S n = S and S i ⊆ S j whenever i ≤ j . {\displaystyle \bigcup _{n}S_{n}=S\qquad {\text{ and }}\qquad S_{i}\subseteq S_{j}\quad {\text{ whenever }}i\leq j.} It is said to decrease to S , {\displaystyle S,} denoted by S ∙ ↓ S {\displaystyle S_{\bullet }\downarrow S} or S ∙ ↘ S , {\displaystyle S_{\bullet }\searrow S,} if S ∙ {\displaystyle S_{\bullet }} is decreasing and the intersection of all S i {\displaystyle S_{i}} is S ; {\displaystyle S;} that is, if ⋂ n S n = S and S i ⊇ S j whenever i ≤ j . {\displaystyle \bigcap _{n}S_{n}=S\qquad {\text{ and }}\qquad S_{i}\supseteq S_{j}\quad {\text{ whenever }}i\leq j.}
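These definitions are straightforward to check for concrete sequences; a short illustrative Python sketch:

```python
# S_i = {0, 1, ..., i}: an increasing sequence whose union is {0, ..., 9}.
S_seq = [set(range(i + 1)) for i in range(10)]

def increases_to(seq, S):
    """True when the sequence is non-decreasing and its union is S."""
    nondecreasing = all(seq[i] <= seq[i + 1] for i in range(len(seq) - 1))
    return nondecreasing and set().union(*seq) == S

assert increases_to(S_seq, set(range(10)))
assert not increases_to(S_seq[::-1], set(range(10)))   # reversed, the sequence is decreasing instead
```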
Definitions of elementwise operations on families If L and R {\displaystyle {\mathcal {L}}{\text{ and }}{\mathcal {R}}} are families of sets and if S {\displaystyle S} is any set then define: L ( ∪ ) R : = { L ∪ R : L ∈ L and R ∈ R } {\displaystyle {\mathcal {L}}\;(\cup )\;{\mathcal {R}}~\colon =~\{~L\cup R~:~L\in {\mathcal {L}}~{\text{ and }}~R\in {\mathcal {R}}~\}} L ( ∩ ) R : = { L ∩ R : L ∈ L and R ∈ R } {\displaystyle {\mathcal {L}}\;(\cap )\;{\mathcal {R}}~\colon =~\{~L\cap R~:~L\in {\mathcal {L}}~{\text{ and }}~R\in {\mathcal {R}}~\}} L ( ∖ ) R : = { L ∖ R : L ∈ L and R ∈ R } {\displaystyle {\mathcal {L}}\;(\setminus )\;{\mathcal {R}}~\colon =~\{~L\setminus R~:~L\in {\mathcal {L}}~{\text{ and }}~R\in {\mathcal {R}}~\}} L ( △ ) R : = { L △ R : L ∈ L and R ∈ R } {\displaystyle {\mathcal {L}}\;(\triangle )\;{\mathcal {R}}~\colon =~\{~L\;\triangle \;R~:~L\in {\mathcal {L}}~{\text{ and }}~R\in {\mathcal {R}}~\}} L | S : = { L ∩ S : L ∈ L } = L ( ∩ ) { S } {\displaystyle {\mathcal {L}}{\big \vert }_{S}~\colon =~\{L\cap S~:~L\in {\mathcal {L}}\}={\mathcal {L}}\;(\cap )\;\{S\}} which are respectively called elementwise union, elementwise intersection, elementwise (set) difference, elementwise symmetric difference, and the trace/restriction of L {\displaystyle {\mathcal {L}}} to S . {\displaystyle S.} The regular union, intersection, and set difference are all defined as usual and are denoted with their usual notation: L ∪ R , L ∩ R , L △ R , {\displaystyle {\mathcal {L}}\cup {\mathcal {R}},{\mathcal {L}}\cap {\mathcal {R}},{\mathcal {L}}\;\triangle \;{\mathcal {R}},} and L ∖ R , {\displaystyle {\mathcal {L}}\setminus {\mathcal {R}},} respectively. These elementwise operations on families of sets play an important role in, among other subjects, the theory of filters and prefilters on sets. The upward closure in X {\displaystyle X} of a family L ⊆ ℘ ( X ) {\displaystyle {\mathcal {L}}\subseteq \wp (X)} is the family: L ↑ X : = ⋃ L ∈ L { S : L ⊆ S ⊆ X } = { S ⊆ X : there exists L ∈ L such that L ⊆ S } {\displaystyle {\mathcal {L}}^{\uparrow X}~\colon =~\bigcup _{L\in {\mathcal {L}}}\{\;S~:~L\subseteq S\subseteq X\;\}~=~\{\;S\subseteq X~:~{\text{ there exists }}L\in {\mathcal {L}}{\text{ such that }}L\subseteq S\;\}} and the downward closure of L {\displaystyle {\mathcal {L}}} is the family: L ↓ : = ⋃ L ∈ L ℘ ( L ) = { S : there exists L ∈ L such that S ⊆ L } . {\displaystyle {\mathcal {L}}^{\downarrow }~\colon =~\bigcup _{L\in {\mathcal {L}}}\wp (L)~=~\{\;S~:~{\text{ there exists }}L\in {\mathcal {L}}{\text{ such that }}S\subseteq L\;\}.}
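The elementwise operations and the upward closure are finite computations on small families; an illustrative Python sketch (frozenset is used so that sets of sets are hashable):

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def elementwise(L_fam, R_fam, op):
    """The elementwise operation { op(L, R) : L in L_fam, R in R_fam }."""
    return {op(L, R) for L in L_fam for R in R_fam}

def upward_closure(fam, X):
    """All sets S with L <= S <= X for some member L of the family."""
    return {S for S in subsets(X) if any(L <= S for L in fam)}

L_fam = {frozenset({1}), frozenset({2})}
R_fam = {frozenset({2, 3})}
assert elementwise(L_fam, R_fam, frozenset.union) == {frozenset({1, 2, 3}), frozenset({2, 3})}
assert elementwise(L_fam, R_fam, frozenset.intersection) == {frozenset(), frozenset({2})}
assert upward_closure({frozenset({1})}, {1, 2, 3}) == {S for S in subsets({1, 2, 3}) if 1 in S}
```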
==== Definitions of categories of families of sets ==== The following lists some well-known categories of families of sets having applications in general topology and measure theory. A family L {\displaystyle {\mathcal {L}}} is called isotone, ascending, or upward closed in X {\displaystyle X} if L ⊆ ℘ ( X ) {\displaystyle {\mathcal {L}}\subseteq \wp (X)} and L = L ↑ X . {\displaystyle {\mathcal {L}}={\mathcal {L}}^{\uparrow X}.} A family L {\displaystyle {\mathcal {L}}} is called downward closed if L = L ↓ . {\displaystyle {\mathcal {L}}={\mathcal {L}}^{\downarrow }.} A family L {\displaystyle {\mathcal {L}}} is said to be: closed under finite intersections (resp. closed under finite unions) if whenever L , R ∈ L {\displaystyle L,R\in {\mathcal {L}}} then L ∩ R ∈ L {\displaystyle L\cap R\in {\mathcal {L}}} (respectively, L ∪ R ∈ L {\displaystyle L\cup R\in {\mathcal {L}}} ). closed under countable intersections (resp. closed under countable unions) if whenever L 1 , L 2 , L 3 , … {\displaystyle L_{1},L_{2},L_{3},\ldots } are elements of L {\displaystyle {\mathcal {L}}} then so is their intersection ⋂ i = 1 ∞ L i := L 1 ∩ L 2 ∩ L 3 ∩ ⋯ {\displaystyle \bigcap _{i=1}^{\infty }L_{i}:=L_{1}\cap L_{2}\cap L_{3}\cap \cdots } (resp. so is their union ⋃ i = 1 ∞ L i := L 1 ∪ L 2 ∪ L 3 ∪ ⋯ {\displaystyle \bigcup _{i=1}^{\infty }L_{i}:=L_{1}\cup L_{2}\cup L_{3}\cup \cdots } ). closed under complementation in (or with respect to) X {\displaystyle X} if whenever L ∈ L {\displaystyle L\in {\mathcal {L}}} then X ∖ L ∈ L . {\displaystyle X\setminus L\in {\mathcal {L}}.} A family L {\displaystyle {\mathcal {L}}} of sets is called a/an: π−system if L ≠ ∅ {\displaystyle {\mathcal {L}}\neq \varnothing } and L {\displaystyle {\mathcal {L}}} is closed under finite intersections. Every non-empty family L {\displaystyle {\mathcal {L}}} is contained in a unique smallest (with respect to ⊆ {\displaystyle \subseteq } ) π−system that is denoted by π ( L ) {\displaystyle \pi ({\mathcal {L}})} and called the π−system generated by L . {\displaystyle {\mathcal {L}}.} filter subbase and is said to have the finite intersection property if L ≠ ∅ {\displaystyle {\mathcal {L}}\neq \varnothing } and ∅ ∉ π ( L ) . {\displaystyle \varnothing \not \in \pi ({\mathcal {L}}).} filter on X {\displaystyle X} if L {\displaystyle {\mathcal {L}}} is a non-empty family of subsets of X {\displaystyle X} that is a π−system, is upward closed in X , {\displaystyle X,} and is also proper, which by definition means that it does not contain the empty set as an element. prefilter or filter base if it is a non-empty family of subsets of some set X {\displaystyle X} whose upward closure in X {\displaystyle X} is a filter on X . {\displaystyle X.} algebra on X {\displaystyle X} if it is a non-empty family of subsets of X {\displaystyle X} that contains the empty set, forms a π−system, and is also closed under complementation with respect to X . {\displaystyle X.} σ-algebra on X {\displaystyle X} if it is an algebra on X {\displaystyle X} that is closed under countable unions (or equivalently, closed under countable intersections). Sequences of sets often arise in measure theory. Algebra of sets A family Φ {\displaystyle \Phi } of subsets of a set X {\displaystyle X} is said to be an algebra of sets if ∅ ∈ Φ {\displaystyle \varnothing \in \Phi } and for all L , R ∈ Φ , {\displaystyle L,R\in \Phi ,} all three of the sets X ∖ R , L ∩ R , {\displaystyle X\setminus R,\,L\cap R,} and L ∪ R {\displaystyle L\cup R} are elements of Φ . {\displaystyle \Phi .} The article on this topic lists set identities and other relationships involving these three operations. Every algebra of sets is also a ring of sets and a π-system. Algebra generated by a family of sets Given any family S {\displaystyle {\mathcal {S}}} of subsets of X , {\displaystyle X,} there is a unique smallest algebra of sets in X {\displaystyle X} containing S . {\displaystyle {\mathcal {S}}.} It is called the algebra generated by S {\displaystyle {\mathcal {S}}} and will be denoted by Φ S . {\displaystyle \Phi _{\mathcal {S}}.} This algebra can be constructed as follows: If S = ∅ {\displaystyle {\mathcal {S}}=\varnothing } then Φ S = { ∅ , X } {\displaystyle \Phi _{\mathcal {S}}=\{\varnothing ,X\}} and we are done. Alternatively, if S {\displaystyle {\mathcal {S}}} is empty then it may be replaced with { ∅ } , { X } , or { ∅ , X } {\displaystyle \{\varnothing \},\{X\},{\text{ or }}\{\varnothing ,X\}} and the construction below continued. Let S 0 {\displaystyle {\mathcal {S}}_{0}} be the family of all sets in S {\displaystyle {\mathcal {S}}} together with their complements (taken in X {\displaystyle X} ). Let S 1 {\displaystyle {\mathcal {S}}_{1}} be the family of all possible finite intersections of sets in S 0 . {\displaystyle {\mathcal {S}}_{0}.} Then the algebra generated by S {\displaystyle {\mathcal {S}}} is the set Φ S {\displaystyle \Phi _{\mathcal {S}}} consisting of all possible finite unions of sets in S 1 . {\displaystyle {\mathcal {S}}_{1}.}
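The three construction steps translate directly into a short program; the following sketch is illustrative only (it enumerates all finite combinations, so it is exponential and suited only to tiny examples).

```python
from itertools import combinations

def generated_algebra(S_fam, X):
    """The algebra of subsets of the finite set X generated by S_fam,
    following the three construction steps above."""
    X = frozenset(X)
    S_fam = {frozenset(s) for s in S_fam} or {frozenset()}   # replace an empty family by {empty set}
    S0 = S_fam | {X - s for s in S_fam}                      # adjoin complements taken in X
    S1 = {frozenset.intersection(*combo)                     # all finite intersections from S0
          for r in range(1, len(S0) + 1)
          for combo in combinations(S0, r)}
    return {frozenset()} | {frozenset.union(*combo)          # all finite unions from S1
                            for r in range(1, len(S1) + 1)
                            for combo in combinations(S1, r)}

A = generated_algebra([{1}], {1, 2, 3})
assert A == {frozenset(), frozenset({1}), frozenset({2, 3}), frozenset({1, 2, 3})}
```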
==== Elementwise operations on families ==== Let L , M , {\displaystyle {\mathcal {L}},{\mathcal {M}},} and R {\displaystyle {\mathcal {R}}} be families of sets over X . {\displaystyle X.} On the left hand sides of the following identities, L {\displaystyle {\mathcal {L}}} is the Leftmost family, M {\displaystyle {\mathcal {M}}} is in the Middle, and R {\displaystyle {\mathcal {R}}} is the Rightmost family. Commutativity: L ( ∪ ) R = R ( ∪ ) L {\displaystyle {\mathcal {L}}\;(\cup )\;{\mathcal {R}}={\mathcal {R}}\;(\cup )\;{\mathcal {L}}} L ( ∩ ) R = R ( ∩ ) L {\displaystyle {\mathcal {L}}\;(\cap )\;{\mathcal {R}}={\mathcal {R}}\;(\cap )\;{\mathcal {L}}} Associativity: [ L ( ∪ ) M ] ( ∪ ) R = L ( ∪ ) [ M ( ∪ ) R ] {\displaystyle [{\mathcal {L}}\;(\cup )\;{\mathcal {M}}]\;(\cup )\;{\mathcal {R}}={\mathcal {L}}\;(\cup )\;[{\mathcal {M}}\;(\cup )\;{\mathcal {R}}]} [ L ( ∩ ) M ] ( ∩ ) R = L ( ∩ ) [ M ( ∩ ) R ] {\displaystyle [{\mathcal {L}}\;(\cap )\;{\mathcal {M}}]\;(\cap )\;{\mathcal {R}}={\mathcal {L}}\;(\cap )\;[{\mathcal {M}}\;(\cap )\;{\mathcal {R}}]} Identity: L ( ∪ ) { ∅ } = L {\displaystyle {\mathcal {L}}\;(\cup )\;\{\varnothing \}={\mathcal {L}}} L ( ∩ ) { X } = L {\displaystyle {\mathcal {L}}\;(\cap )\;\{X\}={\mathcal {L}}} L ( ∖ ) { ∅ } = L {\displaystyle {\mathcal {L}}\;(\setminus )\;\{\varnothing \}={\mathcal {L}}} Domination: L ( ∪ ) { X } = { X } if L ≠ ∅ {\displaystyle {\mathcal {L}}\;(\cup )\;\{X\}=\{X\}~~~~{\text{ if }}{\mathcal {L}}\neq \varnothing } L ( ∩ ) { ∅ } = { ∅ } if L ≠ ∅ {\displaystyle {\mathcal {L}}\;(\cap )\;\{\varnothing \}=\{\varnothing \}~~~~{\text{ if }}{\mathcal {L}}\neq \varnothing } L ( ∪ ) ∅ = ∅ {\displaystyle {\mathcal {L}}\;(\cup )\;\varnothing =\varnothing } L ( ∩ ) ∅ = ∅ {\displaystyle {\mathcal {L}}\;(\cap )\;\varnothing =\varnothing } L ( ∖ ) ∅ = ∅ {\displaystyle {\mathcal {L}}\;(\setminus )\;\varnothing =\varnothing } ∅ ( ∖ ) R = ∅ {\displaystyle \varnothing \;(\setminus )\;{\mathcal {R}}=\varnothing } === Power set === ℘ ( L ∩ R ) = ℘ ( L ) ∩ ℘ ( R ) {\displaystyle \wp (L\cap R)~=~\wp (L)\cap \wp (R)} ℘ ( L ∪ R ) = ℘ ( L ) ( ∪ ) ℘ ( R ) ⊇ ℘ ( L ) ∪ ℘ ( R ) . {\displaystyle \wp (L\cup R)~=~\wp (L)\ (\cup )\ \wp (R)~\supseteq ~\wp (L)\cup \wp (R).} If L {\displaystyle L} and R {\displaystyle R} are subsets of a vector space X {\displaystyle X} and if s {\displaystyle s} is a scalar then ℘ ( s L ) = s ℘ ( L ) {\displaystyle \wp (sL)~=~s\wp (L)} ℘ ( L + R ) ⊇ ℘ ( L ) + ℘ ( R ) . {\displaystyle \wp (L+R)~\supseteq ~\wp (L)+\wp (R).}
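The power set identities can be checked directly; note in the illustrative sketch below that the elementwise union ℘(L) (∪) ℘(R) recovers all of ℘(L ∪ R), while the ordinary union ℘(L) ∪ ℘(R) does not.

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

L, R = {1, 2}, {2, 3}
PL, PR = set(subsets(L)), set(subsets(R))

assert set(subsets(L & R)) == PL & PR                # P(L n R) = P(L) n P(R)
elementwise_union = {A | B for A in PL for B in PR}  # the elementwise union P(L) (u) P(R)
assert set(subsets(L | R)) == elementwise_union      # P(L u R) = P(L) (u) P(R)
assert set(subsets(L | R)) > PL | PR                 # strictly larger here: {1, 3} lies in neither P(L) nor P(R)
```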
=== Sequences of sets === Suppose that L {\displaystyle L} is any set such that L ⊇ R i {\displaystyle L\supseteq R_{i}} for every index i . {\displaystyle i.} If R ∙ {\displaystyle R_{\bullet }} decreases to R {\displaystyle R} then L ∖ R ∙ := ( L ∖ R i ) i {\displaystyle L\setminus R_{\bullet }:=\left(L\setminus R_{i}\right)_{i}} increases to L ∖ R {\displaystyle L\setminus R} whereas if instead R ∙ {\displaystyle R_{\bullet }} increases to R {\displaystyle R} then L ∖ R ∙ {\displaystyle L\setminus R_{\bullet }} decreases to L ∖ R . {\displaystyle L\setminus R.} If L and R {\displaystyle L{\text{ and }}R} are arbitrary sets and if L ∙ = ( L i ) i {\displaystyle L_{\bullet }=\left(L_{i}\right)_{i}} increases (resp. decreases) to L {\displaystyle L} then ( L i ∖ R ) i {\displaystyle \left(L_{i}\setminus R\right)_{i}} increases (resp. decreases) to L ∖ R . {\displaystyle L\setminus R.} ==== Partitions ==== Suppose that S ∙ = ( S i ) i = 1 ∞ {\displaystyle S_{\bullet }=\left(S_{i}\right)_{i=1}^{\infty }} is any sequence of sets, that S ⊆ ⋃ i S i {\displaystyle S\subseteq \bigcup _{i}S_{i}} is any subset, and for every index i , {\displaystyle i,} let D i = ( S i ∩ S ) ∖ ⋃ m = 1 i − 1 ( S m ∩ S ) {\displaystyle D_{i}=\left(S_{i}\cap S\right)\setminus \bigcup _{m=1}^{i-1}\left(S_{m}\cap S\right)} (with the convention that this union is empty when i = 1 {\displaystyle i=1} ). Then S = ⋃ i D i {\displaystyle S=\bigcup _{i}D_{i}} and D ∙ := ( D i ) i = 1 ∞ {\displaystyle D_{\bullet }:=\left(D_{i}\right)_{i=1}^{\infty }} is a sequence of pairwise disjoint sets. Suppose that S ∙ = ( S i ) i = 1 ∞ {\displaystyle S_{\bullet }=\left(S_{i}\right)_{i=1}^{\infty }} is non-decreasing, let S 0 = ∅ , {\displaystyle S_{0}=\varnothing ,} and let D i = S i ∖ S i − 1 {\displaystyle D_{i}=S_{i}\setminus S_{i-1}} for every i = 1 , 2 , … . {\displaystyle i=1,2,\ldots .} Then ⋃ i S i = ⋃ i D i {\displaystyle \bigcup _{i}S_{i}=\bigcup _{i}D_{i}} and D ∙ = ( D i ) i = 1 ∞ {\displaystyle D_{\bullet }=\left(D_{i}\right)_{i=1}^{\infty }} is a sequence of pairwise disjoint sets. 
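As a quick worked illustration of the first construction (an added example): take S 1 = { 1 , 2 } , {\displaystyle S_{1}=\{1,2\},} S 2 = { 2 , 3 } , {\displaystyle S_{2}=\{2,3\},} S 3 = { 3 , 4 } {\displaystyle S_{3}=\{3,4\}} and S = { 1 , 2 , 3 , 4 } . {\displaystyle S=\{1,2,3,4\}.} Then D 1 = { 1 , 2 } , {\displaystyle D_{1}=\{1,2\},} D 2 = { 2 , 3 } ∖ { 1 , 2 } = { 3 } , {\displaystyle D_{2}=\{2,3\}\setminus \{1,2\}=\{3\},} and D 3 = { 3 , 4 } ∖ ( { 1 , 2 } ∪ { 2 , 3 } ) = { 4 } , {\displaystyle D_{3}=\{3,4\}\setminus (\{1,2\}\cup \{2,3\})=\{4\},} which are pairwise disjoint and have union S . {\displaystyle S.} 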
== See also == Algebra of sets – Identities and relationships involving sets Complement (set theory) – Set of the elements not in a given subset Image (mathematics)#Properties – Set of the values of a function Inclusion–exclusion principle – Counting technique in combinatorics Intersection (set theory) – Set of elements common to all of some sets List of mathematical identities Naive set theory – Informal set theories Pigeonhole principle – If there are more items than boxes holding them, one box must contain at least two items Set (mathematics) – Collection of mathematical objects Simple theorems in the algebra of sets Symmetric difference (set theory) – Elements in exactly one of two sets Union (set theory) – Set of elements in any of some sets == External links == Operations on Sets at ProvenMath
Wikipedia:List of the 72 names on the Eiffel Tower#0
On the Eiffel Tower, 72 names of French men (scientists, engineers, and mathematicians) are engraved in recognition of their contributions. Gustave Eiffel chose this "invocation of science" because of his concern over the protests against the tower, and chose names of those who had distinguished themselves since 1789. The engravings are found on the sides of the tower under the first balcony, in letters about 60 cm (24 in) tall, and were originally painted in gold. The engraving was painted over at the beginning of the 20th century and restored in 1986–87 by Société Nouvelle d'exploitation de la Tour Eiffel, the company that the city of Paris contracts to operate the Tower. The repainting of 2010–11 restored the letters to their original gold colour. The names of the engineers who helped build the Tower and design its architecture also appear on a plaque at the top of the Tower, where a laboratory was built as well. == List == === Location === The list is split into four parts (one for each side of the tower). The sides have been named after the parts of Paris that each side faces: The North-East side (also known as the La Bourdonnais side) The South-East side (also known as the Military School side) The South-West side (also known as the Grenelle side) The North-West side (also known as the Trocadéro side) === Names === In the table below are all the names on the four sides. == Criticism == === Women === The list contains no women. It has been criticized for excluding Sophie Germain, a noted French mathematician whose work on the theory of elasticity was used in the construction of the tower itself. In 1913, John Augustine Zahm suggested that Germain was excluded because she was a woman. === Hydraulic engineers and scholars === Fourteen hydraulic engineers and scholars are listed on the Eiffel Tower. Eiffel acknowledged most of the leading scientists in the field. Henri Philibert Gaspard Darcy is missing; some of his work did not come into wide use until the 20th century. Also missing are Antoine Chézy, who was less famous; Joseph Valentin Boussinesq, who was early in his career at the time; and the mathematician Évariste Galois. Other famous French mathematicians missing from the list include Joseph Liouville and Charles Hermite. == Further reading == Barral, Georges (1892). Le Panthéon scientifique de la tour Eiffel: histoire des origines de la construction de la Tour (in French). Savine. Reprinted as Barral, Georges (2013). Le Panthéon scientifique de la tour Eiffel: histoire des origines de la construction de la Tour (in French). Hachette Livre. ISBN 978-2-01-285936-4. == External links == Media related to 72 names on the Eiffel Tower at Wikimedia Commons Paris streets named for the 72 scientists
Wikipedia:List of trigonometric identities#0
In trigonometry, trigonometric identities are equalities that involve trigonometric functions and are true for every value of the occurring variables for which both sides of the equality are defined. Geometrically, these are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities potentially involving angles but also involving side lengths or other lengths of a triangle. These identities are useful whenever expressions involving trigonometric functions need to be simplified. An important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. == Pythagorean identities == The basic relationship between the sine and cosine is given by the Pythagorean identity: sin 2 ⁡ θ + cos 2 ⁡ θ = 1 , {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1,} where sin 2 ⁡ θ {\displaystyle \sin ^{2}\theta } means ( sin ⁡ θ ) 2 {\displaystyle {(\sin \theta )}^{2}} and cos 2 ⁡ θ {\displaystyle \cos ^{2}\theta } means ( cos ⁡ θ ) 2 . {\displaystyle {(\cos \theta )}^{2}.} This can be viewed as a version of the Pythagorean theorem, and follows from the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} for the unit circle. This equation can be solved for either the sine or the cosine: sin ⁡ θ = ± 1 − cos 2 ⁡ θ , cos ⁡ θ = ± 1 − sin 2 ⁡ θ , {\displaystyle {\begin{aligned}\sin \theta &=\pm {\sqrt {1-\cos ^{2}\theta }},\\\cos \theta &=\pm {\sqrt {1-\sin ^{2}\theta }},\end{aligned}}} where the sign depends on the quadrant of θ . {\displaystyle \theta .} Dividing this identity by sin 2 ⁡ θ {\displaystyle \sin ^{2}\theta } , cos 2 ⁡ θ {\displaystyle \cos ^{2}\theta } , or both yields the following identities: 1 + cot 2 ⁡ θ = csc 2 ⁡ θ 1 + tan 2 ⁡ θ = sec 2 ⁡ θ sec 2 ⁡ θ + csc 2 ⁡ θ = sec 2 ⁡ θ csc 2 ⁡ θ {\displaystyle {\begin{aligned}&1+\cot ^{2}\theta =\csc ^{2}\theta \\&1+\tan ^{2}\theta =\sec ^{2}\theta \\&\sec ^{2}\theta +\csc ^{2}\theta =\sec ^{2}\theta \csc ^{2}\theta \end{aligned}}} Using these identities, it is possible to express any trigonometric function in terms of any other (up to a plus or minus sign): == Reflections, shifts, and periodicity == By examining the unit circle, one can establish the following properties of the trigonometric functions. === Reflections === When the direction of a Euclidean vector is represented by an angle θ , {\displaystyle \theta ,} this is the angle determined by the free vector (starting at the origin) and the positive x {\displaystyle x} -unit vector. The same concept may also be applied to lines in a Euclidean space, where the angle is that determined by a parallel to the given line through the origin and the positive x {\displaystyle x} -axis. If a line (vector) with direction θ {\displaystyle \theta } is reflected about a line with direction α , {\displaystyle \alpha ,} then the direction angle θ ′ {\displaystyle \theta ^{\prime }} of this reflected line (vector) has the value θ ′ = 2 α − θ . {\displaystyle \theta ^{\prime }=2\alpha -\theta .} The values of the trigonometric functions of these angles θ , θ ′ {\displaystyle \theta ,\;\theta ^{\prime }} for specific angles α {\displaystyle \alpha } satisfy simple identities: either they are equal, or have opposite signs, or employ the complementary trigonometric function. These are also known as reduction formulae. 
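For a concrete instance of these reduction formulae (a worked example added here): reflecting about the line through the origin with direction angle α = π / 2 {\displaystyle \alpha =\pi /2} (the y-axis) sends θ {\displaystyle \theta } to θ ′ = π − θ , {\displaystyle \theta ^{\prime }=\pi -\theta ,} and correspondingly sin ⁡ ( π − θ ) = sin ⁡ θ , cos ⁡ ( π − θ ) = − cos ⁡ θ , tan ⁡ ( π − θ ) = − tan ⁡ θ . {\displaystyle \sin(\pi -\theta )=\sin \theta ,\qquad \cos(\pi -\theta )=-\cos \theta ,\qquad \tan(\pi -\theta )=-\tan \theta .} 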
=== Shifts and periodicity === === Signs === The sign of trigonometric functions depends on the quadrant of the angle. If − π < θ ≤ π {\displaystyle {-\pi }<\theta \leq \pi } and sgn is the sign function, sgn ⁡ ( sin ⁡ θ ) = sgn ⁡ ( csc ⁡ θ ) = { + 1 if 0 < θ < π − 1 if − π < θ < 0 0 if θ ∈ { 0 , π } sgn ⁡ ( cos ⁡ θ ) = sgn ⁡ ( sec ⁡ θ ) = { + 1 if − 1 2 π < θ < 1 2 π − 1 if − π < θ < − 1 2 π or 1 2 π < θ < π 0 if θ ∈ { − 1 2 π , 1 2 π } sgn ⁡ ( tan ⁡ θ ) = sgn ⁡ ( cot ⁡ θ ) = { + 1 if − π < θ < − 1 2 π or 0 < θ < 1 2 π − 1 if − 1 2 π < θ < 0 or 1 2 π < θ < π 0 if θ ∈ { − 1 2 π , 0 , 1 2 π , π } {\displaystyle {\begin{aligned}\operatorname {sgn}(\sin \theta )=\operatorname {sgn}(\csc \theta )&={\begin{cases}+1&{\text{if}}\ \ 0<\theta <\pi \\-1&{\text{if}}\ \ {-\pi }<\theta <0\\0&{\text{if}}\ \ \theta \in \{0,\pi \}\end{cases}}\\[5mu]\operatorname {sgn}(\cos \theta )=\operatorname {sgn}(\sec \theta )&={\begin{cases}+1&{\text{if}}\ \ {-{\tfrac {1}{2}}\pi }<\theta <{\tfrac {1}{2}}\pi \\-1&{\text{if}}\ \ {-\pi }<\theta <-{\tfrac {1}{2}}\pi \ \ {\text{or}}\ \ {\tfrac {1}{2}}\pi <\theta <\pi \\0&{\text{if}}\ \ \theta \in {\bigl \{}{-{\tfrac {1}{2}}\pi },{\tfrac {1}{2}}\pi {\bigr \}}\end{cases}}\\[5mu]\operatorname {sgn}(\tan \theta )=\operatorname {sgn}(\cot \theta )&={\begin{cases}+1&{\text{if}}\ \ {-\pi }<\theta <-{\tfrac {1}{2}}\pi \ \ {\text{or}}\ \ 0<\theta <{\tfrac {1}{2}}\pi \\-1&{\text{if}}\ \ {-{\tfrac {1}{2}}\pi }<\theta <0\ \ {\text{or}}\ \ {\tfrac {1}{2}}\pi <\theta <\pi \\0&{\text{if}}\ \ \theta \in {\bigl \{}{-{\tfrac {1}{2}}\pi },0,{\tfrac {1}{2}}\pi ,\pi {\bigr \}}\end{cases}}\end{aligned}}} The trigonometric functions are periodic with common period 2 π , {\displaystyle 2\pi ,} so for values of θ outside the interval ( − π , π ] , {\displaystyle ({-\pi },\pi ],} they take repeating values (see § Shifts and periodicity above). == Angle sum and difference identities == These are also known as the angle addition and subtraction theorems (or formulae). sin ⁡ ( α + β ) = sin ⁡ α cos ⁡ β + cos ⁡ α sin ⁡ β sin ⁡ ( α − β ) = sin ⁡ α cos ⁡ β − cos ⁡ α sin ⁡ β cos ⁡ ( α + β ) = cos ⁡ α cos ⁡ β − sin ⁡ α sin ⁡ β cos ⁡ ( α − β ) = cos ⁡ α cos ⁡ β + sin ⁡ α sin ⁡ β {\displaystyle {\begin{aligned}\sin(\alpha +\beta )&=\sin \alpha \cos \beta +\cos \alpha \sin \beta \\\sin(\alpha -\beta )&=\sin \alpha \cos \beta -\cos \alpha \sin \beta \\\cos(\alpha +\beta )&=\cos \alpha \cos \beta -\sin \alpha \sin \beta \\\cos(\alpha -\beta )&=\cos \alpha \cos \beta +\sin \alpha \sin \beta \end{aligned}}} The angle difference identities for sin ⁡ ( α − β ) {\displaystyle \sin(\alpha -\beta )} and cos ⁡ ( α − β ) {\displaystyle \cos(\alpha -\beta )} can be derived from the angle sum versions by substituting − β {\displaystyle -\beta } for β {\displaystyle \beta } and using the facts that sin ⁡ ( − β ) = − sin ⁡ ( β ) {\displaystyle \sin(-\beta )=-\sin(\beta )} and cos ⁡ ( − β ) = cos ⁡ ( β ) {\displaystyle \cos(-\beta )=\cos(\beta )} . They can also be derived by using a slightly modified version of the figure for the angle sum identities, both of which are shown here. These identities are summarized in the first two rows of the following table, which also includes sum and difference identities for the other trigonometric functions. 
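Setting β = α {\displaystyle \beta =\alpha } in the sum formulae gives a useful consistency check, immediately recovering the double-angle identities treated later in this article: sin ⁡ 2 α = 2 sin ⁡ α cos ⁡ α and cos ⁡ 2 α = cos 2 ⁡ α − sin 2 ⁡ α . {\displaystyle \sin 2\alpha =2\sin \alpha \cos \alpha \quad {\text{and}}\quad \cos 2\alpha =\cos ^{2}\alpha -\sin ^{2}\alpha .} 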
=== Sines and cosines of sums of infinitely many angles === When the series ∑ i = 1 ∞ θ i {\textstyle \sum _{i=1}^{\infty }\theta _{i}} converges absolutely then sin ( ∑ i = 1 ∞ θ i ) = ∑ odd k ≥ 1 ( − 1 ) k − 1 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ( ∏ i ∈ A sin ⁡ θ i ∏ i ∉ A cos ⁡ θ i ) cos ( ∑ i = 1 ∞ θ i ) = ∑ even k ≥ 0 ( − 1 ) k 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ( ∏ i ∈ A sin ⁡ θ i ∏ i ∉ A cos ⁡ θ i ) . {\displaystyle {\begin{aligned}{\sin }{\biggl (}\sum _{i=1}^{\infty }\theta _{i}{\biggl )}&=\sum _{{\text{odd}}\ k\geq 1}(-1)^{\frac {k-1}{2}}\!\!\sum _{\begin{smallmatrix}A\subseteq \{\,1,2,3,\dots \,\}\\\left|A\right|=k\end{smallmatrix}}{\biggl (}\prod _{i\in A}\sin \theta _{i}\prod _{i\not \in A}\cos \theta _{i}{\biggr )}\\{\cos }{\biggl (}\sum _{i=1}^{\infty }\theta _{i}{\biggr )}&=\sum _{{\text{even}}\ k\geq 0}(-1)^{\frac {k}{2}}\,\sum _{\begin{smallmatrix}A\subseteq \{\,1,2,3,\dots \,\}\\\left|A\right|=k\end{smallmatrix}}{\biggl (}\prod _{i\in A}\sin \theta _{i}\prod _{i\not \in A}\cos \theta _{i}{\biggr )}.\end{aligned}}} Because the series ∑ i = 1 ∞ θ i {\textstyle \sum _{i=1}^{\infty }\theta _{i}} converges absolutely, it is necessarily the case that lim i → ∞ θ i = 0 , {\textstyle \lim _{i\to \infty }\theta _{i}=0,} lim i → ∞ sin ⁡ θ i = 0 , {\textstyle \lim _{i\to \infty }\sin \theta _{i}=0,} and lim i → ∞ cos ⁡ θ i = 1. {\textstyle \lim _{i\to \infty }\cos \theta _{i}=1.} In particular, in these two identities an asymmetry appears that is not seen in the case of sums of finitely many angles: in each product, there are only finitely many sine factors but there are cofinitely many cosine factors. Terms with infinitely many sine factors would necessarily be equal to zero. When only finitely many of the angles θ i {\displaystyle \theta _{i}} are nonzero then only finitely many of the terms on the right side are nonzero because all but finitely many sine factors vanish. Furthermore, in each term all but finitely many of the cosine factors are unity. 
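To see how these series reduce to the finite-angle formulae, consider (as a worked special case) a sequence in which only θ 1 {\displaystyle \theta _{1}} and θ 2 {\displaystyle \theta _{2}} are nonzero. In the sine series, every subset A {\displaystyle A} containing an index i ≥ 3 {\displaystyle i\geq 3} contributes a factor sin ⁡ θ i = sin ⁡ 0 = 0 , {\displaystyle \sin \theta _{i}=\sin 0=0,} so only A = { 1 } {\displaystyle A=\{1\}} and A = { 2 } {\displaystyle A=\{2\}} survive and the series collapses to sin ⁡ ( θ 1 + θ 2 ) = sin ⁡ θ 1 cos ⁡ θ 2 + cos ⁡ θ 1 sin ⁡ θ 2 . {\displaystyle \sin(\theta _{1}+\theta _{2})=\sin \theta _{1}\cos \theta _{2}+\cos \theta _{1}\sin \theta _{2}.} 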
=== Tangents and cotangents of sums === Let e k {\displaystyle e_{k}} (for k = 0 , 1 , 2 , 3 , … {\displaystyle k=0,1,2,3,\ldots } ) be the kth-degree elementary symmetric polynomial in the variables x i = tan ⁡ θ i {\displaystyle x_{i}=\tan \theta _{i}} for i = 0 , 1 , 2 , 3 , … , {\displaystyle i=0,1,2,3,\ldots ,} that is, e 0 = 1 e 1 = ∑ i x i = ∑ i tan ⁡ θ i e 2 = ∑ i < j x i x j = ∑ i < j tan ⁡ θ i tan ⁡ θ j e 3 = ∑ i < j < k x i x j x k = ∑ i < j < k tan ⁡ θ i tan ⁡ θ j tan ⁡ θ k ⋮ ⋮ {\displaystyle {\begin{aligned}e_{0}&=1\\[6pt]e_{1}&=\sum _{i}x_{i}&&=\sum _{i}\tan \theta _{i}\\[6pt]e_{2}&=\sum _{i<j}x_{i}x_{j}&&=\sum _{i<j}\tan \theta _{i}\tan \theta _{j}\\[6pt]e_{3}&=\sum _{i<j<k}x_{i}x_{j}x_{k}&&=\sum _{i<j<k}\tan \theta _{i}\tan \theta _{j}\tan \theta _{k}\\&\ \ \vdots &&\ \ \vdots \end{aligned}}} Then tan ( ∑ i θ i ) = sin ( ∑ i θ i ) / ∏ i cos ⁡ θ i cos ( ∑ i θ i ) / ∏ i cos ⁡ θ i = ∑ odd k ≥ 1 ( − 1 ) k − 1 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ∏ i ∈ A tan ⁡ θ i ∑ even k ≥ 0 ( − 1 ) k 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ∏ i ∈ A tan ⁡ θ i = e 1 − e 3 + e 5 − ⋯ e 0 − e 2 + e 4 − ⋯ cot ( ∑ i θ i ) = e 0 − e 2 + e 4 − ⋯ e 1 − e 3 + e 5 − ⋯ {\displaystyle {\begin{aligned}{\tan }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {{\sin }{\bigl (}\sum _{i}\theta _{i}{\bigr )}/\prod _{i}\cos \theta _{i}}{{\cos }{\bigl (}\sum _{i}\theta _{i}{\bigr )}/\prod _{i}\cos \theta _{i}}}\\[10pt]&={\frac {\displaystyle \sum _{{\text{odd}}\ k\geq 1}(-1)^{\frac {k-1}{2}}\sum _{\begin{smallmatrix}A\subseteq \{1,2,3,\dots \}\\\left|A\right|=k\end{smallmatrix}}\prod _{i\in A}\tan \theta _{i}}{\displaystyle \sum _{{\text{even}}\ k\geq 0}~(-1)^{\frac {k}{2}}~~\sum _{\begin{smallmatrix}A\subseteq \{1,2,3,\dots \}\\\left|A\right|=k\end{smallmatrix}}\prod _{i\in A}\tan \theta _{i}}}={\frac {e_{1}-e_{3}+e_{5}-\cdots }{e_{0}-e_{2}+e_{4}-\cdots }}\\[10pt]{\cot }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {e_{0}-e_{2}+e_{4}-\cdots }{e_{1}-e_{3}+e_{5}-\cdots }}\end{aligned}}} using the sine and cosine sum formulae above. The number of terms on the right side depends on the number of terms on the left side. For example: tan ⁡ ( θ 1 + θ 2 ) = e 1 e 0 − e 2 = x 1 + x 2 1 − x 1 x 2 = tan ⁡ θ 1 + tan ⁡ θ 2 1 − tan ⁡ θ 1 tan ⁡ θ 2 , tan ⁡ ( θ 1 + θ 2 + θ 3 ) = e 1 − e 3 e 0 − e 2 = ( x 1 + x 2 + x 3 ) − ( x 1 x 2 x 3 ) 1 − ( x 1 x 2 + x 1 x 3 + x 2 x 3 ) , tan ⁡ ( θ 1 + θ 2 + θ 3 + θ 4 ) = e 1 − e 3 e 0 − e 2 + e 4 = ( x 1 + x 2 + x 3 + x 4 ) − ( x 1 x 2 x 3 + x 1 x 2 x 4 + x 1 x 3 x 4 + x 2 x 3 x 4 ) 1 − ( x 1 x 2 + x 1 x 3 + x 1 x 4 + x 2 x 3 + x 2 x 4 + x 3 x 4 ) + ( x 1 x 2 x 3 x 4 ) , {\displaystyle {\begin{aligned}\tan(\theta _{1}+\theta _{2})&={\frac {e_{1}}{e_{0}-e_{2}}}={\frac {x_{1}+x_{2}}{1\ -\ x_{1}x_{2}}}={\frac {\tan \theta _{1}+\tan \theta _{2}}{1\ -\ \tan \theta _{1}\tan \theta _{2}}},\\[8pt]\tan(\theta _{1}+\theta _{2}+\theta _{3})&={\frac {e_{1}-e_{3}}{e_{0}-e_{2}}}={\frac {(x_{1}+x_{2}+x_{3})\ -\ (x_{1}x_{2}x_{3})}{1\ -\ (x_{1}x_{2}+x_{1}x_{3}+x_{2}x_{3})}},\\[8pt]\tan(\theta _{1}+\theta _{2}+\theta _{3}+\theta _{4})&={\frac {e_{1}-e_{3}}{e_{0}-e_{2}+e_{4}}}\\[8pt]&={\frac {(x_{1}+x_{2}+x_{3}+x_{4})\ -\ (x_{1}x_{2}x_{3}+x_{1}x_{2}x_{4}+x_{1}x_{3}x_{4}+x_{2}x_{3}x_{4})}{1\ -\ (x_{1}x_{2}+x_{1}x_{3}+x_{1}x_{4}+x_{2}x_{3}+x_{2}x_{4}+x_{3}x_{4})\ +\ (x_{1}x_{2}x_{3}x_{4})}},\end{aligned}}} and so on. The case of only finitely many terms can be proved by mathematical induction. The case of infinitely many terms can be proved by using some elementary inequalities. 
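A quick numerical check of the three-angle case (an added example): taking θ 1 = θ 2 = θ 3 = π / 4 {\displaystyle \theta _{1}=\theta _{2}=\theta _{3}=\pi /4} gives x 1 = x 2 = x 3 = 1 , {\displaystyle x_{1}=x_{2}=x_{3}=1,} hence e 0 = 1 , e 1 = 3 , e 2 = 3 , e 3 = 1 , {\displaystyle e_{0}=1,\,e_{1}=3,\,e_{2}=3,\,e_{3}=1,} and tan ⁡ 3 π 4 = e 1 − e 3 e 0 − e 2 = 3 − 1 1 − 3 = − 1 , {\displaystyle \tan {\frac {3\pi }{4}}={\frac {e_{1}-e_{3}}{e_{0}-e_{2}}}={\frac {3-1}{1-3}}=-1,} as expected. 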
=== Secants and cosecants of sums === sec ( ∑ i θ i ) = ∏ i sec ⁡ θ i e 0 − e 2 + e 4 − ⋯ csc ( ∑ i θ i ) = ∏ i sec ⁡ θ i e 1 − e 3 + e 5 − ⋯ {\displaystyle {\begin{aligned}{\sec }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {\prod _{i}\sec \theta _{i}}{e_{0}-e_{2}+e_{4}-\cdots }}\\[8pt]{\csc }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {\prod _{i}\sec \theta _{i}}{e_{1}-e_{3}+e_{5}-\cdots }}\end{aligned}}} where e k {\displaystyle e_{k}} is the kth-degree elementary symmetric polynomial in the n variables x i = tan ⁡ θ i , {\displaystyle x_{i}=\tan \theta _{i},} i = 1 , … , n , {\displaystyle i=1,\ldots ,n,} and the number of terms in the denominator and the number of factors in the product in the numerator depend on the number of terms in the sum on the left. The case of only finitely many terms can be proved by mathematical induction on the number of such terms. For example, sec ⁡ ( α + β + γ ) = sec ⁡ α sec ⁡ β sec ⁡ γ 1 − tan ⁡ α tan ⁡ β − tan ⁡ α tan ⁡ γ − tan ⁡ β tan ⁡ γ csc ⁡ ( α + β + γ ) = sec ⁡ α sec ⁡ β sec ⁡ γ tan ⁡ α + tan ⁡ β + tan ⁡ γ − tan ⁡ α tan ⁡ β tan ⁡ γ . {\displaystyle {\begin{aligned}\sec(\alpha +\beta +\gamma )&={\frac {\sec \alpha \sec \beta \sec \gamma }{1-\tan \alpha \tan \beta -\tan \alpha \tan \gamma -\tan \beta \tan \gamma }}\\[8pt]\csc(\alpha +\beta +\gamma )&={\frac {\sec \alpha \sec \beta \sec \gamma }{\tan \alpha +\tan \beta +\tan \gamma -\tan \alpha \tan \beta \tan \gamma }}.\end{aligned}}} === Ptolemy's theorem === Ptolemy's theorem is important in the history of trigonometric identities, as it is how results equivalent to the sum and difference formulas for sine and cosine were first proved. It states that in a cyclic quadrilateral A B C D {\displaystyle ABCD} , as shown in the accompanying figure, the sum of the products of the lengths of opposite sides is equal to the product of the lengths of the diagonals. In the special cases of one of the diagonals or sides being a diameter of the circle, this theorem gives rise directly to the angle sum and difference trigonometric identities. The relationship follows most easily when the circle is constructed to have a diameter of length one, as shown here. By Thales's theorem, ∠ D A B {\displaystyle \angle DAB} and ∠ D C B {\displaystyle \angle DCB} are both right angles. The right-angled triangles D A B {\displaystyle DAB} and D C B {\displaystyle DCB} both share the hypotenuse B D ¯ {\displaystyle {\overline {BD}}} of length 1. Thus, the side A B ¯ = sin ⁡ α {\displaystyle {\overline {AB}}=\sin \alpha } , A D ¯ = cos ⁡ α {\displaystyle {\overline {AD}}=\cos \alpha } , B C ¯ = sin ⁡ β {\displaystyle {\overline {BC}}=\sin \beta } and C D ¯ = cos ⁡ β {\displaystyle {\overline {CD}}=\cos \beta } . By the inscribed angle theorem, the central angle subtended by the chord A C ¯ {\displaystyle {\overline {AC}}} at the circle's center is twice the angle ∠ A D C {\displaystyle \angle ADC} , i.e. 2 ( α + β ) {\displaystyle 2(\alpha +\beta )} . Therefore, the symmetrical pair of red triangles each has the angle α + β {\displaystyle \alpha +\beta } at the center. Each of these triangles has a hypotenuse of length 1 2 {\textstyle {\frac {1}{2}}} , so the length of A C ¯ {\displaystyle {\overline {AC}}} is 2 × 1 2 sin ⁡ ( α + β ) {\textstyle 2\times {\frac {1}{2}}\sin(\alpha +\beta )} , i.e. simply sin ⁡ ( α + β ) {\displaystyle \sin(\alpha +\beta )} . The quadrilateral's other diagonal is the diameter of length 1, so the product of the diagonals' lengths is also sin ⁡ ( α + β ) {\displaystyle \sin(\alpha +\beta )} . 
When these values are substituted into the statement of Ptolemy's theorem that | A C ¯ | ⋅ | B D ¯ | = | A B ¯ | ⋅ | C D ¯ | + | A D ¯ | ⋅ | B C ¯ | {\displaystyle |{\overline {AC}}|\cdot |{\overline {BD}}|=|{\overline {AB}}|\cdot |{\overline {CD}}|+|{\overline {AD}}|\cdot |{\overline {BC}}|} , this yields the angle sum trigonometric identity for sine: sin ⁡ ( α + β ) = sin ⁡ α cos ⁡ β + cos ⁡ α sin ⁡ β {\displaystyle \sin(\alpha +\beta )=\sin \alpha \cos \beta +\cos \alpha \sin \beta } . The angle difference formula for sin ⁡ ( α − β ) {\displaystyle \sin(\alpha -\beta )} can be similarly derived by letting the side C D ¯ {\displaystyle {\overline {CD}}} serve as a diameter instead of B D ¯ {\displaystyle {\overline {BD}}} . == Multiple-angle and half-angle formulae == === Multiple-angle formulae === ==== Double-angle formulae ==== Formulae for twice an angle. ==== Triple-angle formulae ==== Formulae for triple angles. ==== Multiple-angle formulae ==== Formulae for multiple angles. ==== Chebyshev method ==== The Chebyshev method is a recursive algorithm for finding the nth multiple angle formula knowing the ( n − 1 ) {\displaystyle (n-1)} th and ( n − 2 ) {\displaystyle (n-2)} th values. cos ⁡ ( n x ) {\displaystyle \cos(nx)} can be computed from cos ⁡ ( ( n − 1 ) x ) {\displaystyle \cos((n-1)x)} , cos ⁡ ( ( n − 2 ) x ) {\displaystyle \cos((n-2)x)} , and cos ⁡ ( x ) {\displaystyle \cos(x)} with cos ⁡ ( n x ) = 2 cos ⁡ x cos ⁡ ( ( n − 1 ) x ) − cos ⁡ ( ( n − 2 ) x ) . {\displaystyle \cos(nx)=2\cos x\cos((n-1)x)-\cos((n-2)x).} This can be proved by adding together the formulae cos ⁡ ( ( n − 1 ) x + x ) = cos ⁡ ( ( n − 1 ) x ) cos ⁡ x − sin ⁡ ( ( n − 1 ) x ) sin ⁡ x cos ⁡ ( ( n − 1 ) x − x ) = cos ⁡ ( ( n − 1 ) x ) cos ⁡ x + sin ⁡ ( ( n − 1 ) x ) sin ⁡ x {\displaystyle {\begin{aligned}\cos((n-1)x+x)&=\cos((n-1)x)\cos x-\sin((n-1)x)\sin x\\\cos((n-1)x-x)&=\cos((n-1)x)\cos x+\sin((n-1)x)\sin x\end{aligned}}} It follows by induction that cos ⁡ ( n x ) {\displaystyle \cos(nx)} is a polynomial of cos ⁡ x , {\displaystyle \cos x,} the so-called Chebyshev polynomial of the first kind, see Chebyshev polynomials#Trigonometric definition. Similarly, sin ⁡ ( n x ) {\displaystyle \sin(nx)} can be computed from sin ⁡ ( ( n − 1 ) x ) , {\displaystyle \sin((n-1)x),} sin ⁡ ( ( n − 2 ) x ) , {\displaystyle \sin((n-2)x),} and cos ⁡ x {\displaystyle \cos x} with sin ⁡ ( n x ) = 2 cos ⁡ x sin ⁡ ( ( n − 1 ) x ) − sin ⁡ ( ( n − 2 ) x ) {\displaystyle \sin(nx)=2\cos x\sin((n-1)x)-\sin((n-2)x)} This can be proved by adding formulae for sin ⁡ ( ( n − 1 ) x + x ) {\displaystyle \sin((n-1)x+x)} and sin ⁡ ( ( n − 1 ) x − x ) . {\displaystyle \sin((n-1)x-x).} Serving a purpose similar to that of the Chebyshev method, for the tangent we can write: tan ⁡ ( n x ) = tan ⁡ ( ( n − 1 ) x ) + tan ⁡ x 1 − tan ⁡ ( ( n − 1 ) x ) tan ⁡ x . 
{\displaystyle \tan(nx)={\frac {\tan((n-1)x)+\tan x}{1-\tan((n-1)x)\tan x}}\,.} === Half-angle formulae === sin ⁡ θ 2 = sgn ⁡ ( sin ⁡ θ 2 ) 1 − cos ⁡ θ 2 cos ⁡ θ 2 = sgn ⁡ ( cos ⁡ θ 2 ) 1 + cos ⁡ θ 2 tan ⁡ θ 2 = 1 − cos ⁡ θ sin ⁡ θ = sin ⁡ θ 1 + cos ⁡ θ = csc ⁡ θ − cot ⁡ θ = tan ⁡ θ 1 + sec ⁡ θ = sgn ⁡ ( sin ⁡ θ ) 1 − cos ⁡ θ 1 + cos ⁡ θ = − 1 + sgn ⁡ ( cos ⁡ θ ) 1 + tan 2 ⁡ θ tan ⁡ θ cot ⁡ θ 2 = 1 + cos ⁡ θ sin ⁡ θ = sin ⁡ θ 1 − cos ⁡ θ = csc ⁡ θ + cot ⁡ θ = sgn ⁡ ( sin ⁡ θ ) 1 + cos ⁡ θ 1 − cos ⁡ θ sec ⁡ θ 2 = sgn ⁡ ( cos ⁡ θ 2 ) 2 1 + cos ⁡ θ csc ⁡ θ 2 = sgn ⁡ ( sin ⁡ θ 2 ) 2 1 − cos ⁡ θ {\displaystyle {\begin{aligned}\sin {\frac {\theta }{2}}&=\operatorname {sgn} \left(\sin {\frac {\theta }{2}}\right){\sqrt {\frac {1-\cos \theta }{2}}}\\[3pt]\cos {\frac {\theta }{2}}&=\operatorname {sgn} \left(\cos {\frac {\theta }{2}}\right){\sqrt {\frac {1+\cos \theta }{2}}}\\[3pt]\tan {\frac {\theta }{2}}&={\frac {1-\cos \theta }{\sin \theta }}={\frac {\sin \theta }{1+\cos \theta }}=\csc \theta -\cot \theta ={\frac {\tan \theta }{1+\sec {\theta }}}\\[6mu]&=\operatorname {sgn}(\sin \theta ){\sqrt {\frac {1-\cos \theta }{1+\cos \theta }}}={\frac {-1+\operatorname {sgn}(\cos \theta ){\sqrt {1+\tan ^{2}\theta }}}{\tan \theta }}\\[3pt]\cot {\frac {\theta }{2}}&={\frac {1+\cos \theta }{\sin \theta }}={\frac {\sin \theta }{1-\cos \theta }}=\csc \theta +\cot \theta =\operatorname {sgn}(\sin \theta ){\sqrt {\frac {1+\cos \theta }{1-\cos \theta }}}\\\sec {\frac {\theta }{2}}&=\operatorname {sgn} \left(\cos {\frac {\theta }{2}}\right){\sqrt {\frac {2}{1+\cos \theta }}}\\\csc {\frac {\theta }{2}}&=\operatorname {sgn} \left(\sin {\frac {\theta }{2}}\right){\sqrt {\frac {2}{1-\cos \theta }}}\\\end{aligned}}} Also tan ⁡ η ± θ 2 = sin ⁡ η ± sin ⁡ θ cos ⁡ η + cos ⁡ θ tan ⁡ ( θ 2 + π 4 ) = sec ⁡ θ + tan ⁡ θ 1 − sin ⁡ θ 1 + sin ⁡ θ = | 1 − tan ⁡ θ 2 | | 1 + tan ⁡ θ 2 | {\displaystyle {\begin{aligned}\tan {\frac {\eta \pm \theta }{2}}&={\frac {\sin \eta \pm \sin \theta }{\cos \eta +\cos \theta }}\\[3pt]\tan \left({\frac {\theta }{2}}+{\frac {\pi }{4}}\right)&=\sec \theta +\tan \theta \\[3pt]{\sqrt {\frac {1-\sin \theta }{1+\sin \theta }}}&={\frac {\left|1-\tan {\frac {\theta }{2}}\right|}{\left|1+\tan {\frac {\theta }{2}}\right|}}\end{aligned}}} === Table === These can be shown by using either the sum and difference identities or the multiple-angle formulae. The fact that the triple-angle formula for sine and cosine only involves powers of a single function allows one to relate the geometric problem of a compass and straightedge construction of angle trisection to the algebraic problem of solving a cubic equation, which allows one to prove that trisection is in general impossible using the given tools. A formula for computing the trigonometric identities for the one-third angle exists, but it requires finding the zeroes of the cubic equation 4x³ − 3x + d = 0, where x {\displaystyle x} is the value of the cosine function at the one-third angle and d is the known value of the cosine function at the full angle. However, the discriminant of this equation is positive, so this equation has three real roots (of which only one is the solution for the cosine of the one-third angle). None of these solutions are reducible to a real algebraic expression, as they use intermediate complex numbers under the cube roots. == Power-reduction formulae == Obtained by solving the second and third versions of the cosine double-angle formula. 
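For example, solving cos ⁡ 2 θ = 2 cos 2 ⁡ θ − 1 {\displaystyle \cos 2\theta =2\cos ^{2}\theta -1} and cos ⁡ 2 θ = 1 − 2 sin 2 ⁡ θ {\displaystyle \cos 2\theta =1-2\sin ^{2}\theta } for the squared functions gives the degree-two cases: cos 2 ⁡ θ = 1 + cos ⁡ 2 θ 2 , sin 2 ⁡ θ = 1 − cos ⁡ 2 θ 2 , tan 2 ⁡ θ = 1 − cos ⁡ 2 θ 1 + cos ⁡ 2 θ . {\displaystyle \cos ^{2}\theta ={\frac {1+\cos 2\theta }{2}},\qquad \sin ^{2}\theta ={\frac {1-\cos 2\theta }{2}},\qquad \tan ^{2}\theta ={\frac {1-\cos 2\theta }{1+\cos 2\theta }}.} 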
In general terms of powers of sin ⁡ θ {\displaystyle \sin \theta } or cos ⁡ θ {\displaystyle \cos \theta } the following is true, and can be deduced using De Moivre's formula, Euler's formula and the binomial theorem. == Product-to-sum and sum-to-product identities == The product-to-sum identities or prosthaphaeresis formulae can be proven by expanding their right-hand sides using the angle addition theorems. Historically, the first four of these were known as Werner's formulas, after Johannes Werner who used them for astronomical calculations. See amplitude modulation for an application of the product-to-sum formulae, and beat (acoustics) and phase detector for applications of the sum-to-product formulae. === Product-to-sum identities === === Sum-to-product identities === The sum-to-product identities are as follows: === Hermite's cotangent identity === Charles Hermite demonstrated the following identity. Suppose a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} are complex numbers, no two of which differ by an integer multiple of π. Let A n , k = ∏ 1 ≤ j ≤ n j ≠ k cot ⁡ ( a k − a j ) {\displaystyle A_{n,k}=\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq k\end{smallmatrix}}\cot(a_{k}-a_{j})} (in particular, A 1 , 1 , {\displaystyle A_{1,1},} being an empty product, is 1). Then cot ⁡ ( z − a 1 ) ⋯ cot ⁡ ( z − a n ) = cos ⁡ n π 2 + ∑ k = 1 n A n , k cot ⁡ ( z − a k ) . {\displaystyle \cot(z-a_{1})\cdots \cot(z-a_{n})=\cos {\frac {n\pi }{2}}+\sum _{k=1}^{n}A_{n,k}\cot(z-a_{k}).} The simplest non-trivial example is the case n = 2: cot ⁡ ( z − a 1 ) cot ⁡ ( z − a 2 ) = − 1 + cot ⁡ ( a 1 − a 2 ) cot ⁡ ( z − a 1 ) + cot ⁡ ( a 2 − a 1 ) cot ⁡ ( z − a 2 ) . {\displaystyle \cot(z-a_{1})\cot(z-a_{2})=-1+\cot(a_{1}-a_{2})\cot(z-a_{1})+\cot(a_{2}-a_{1})\cot(z-a_{2}).} === Finite products of trigonometric functions === For coprime integers n, m ∏ k = 1 n ( 2 a + 2 cos ⁡ ( 2 π k m n + x ) ) = 2 ( T n ( a ) + ( − 1 ) n + m cos ⁡ ( n x ) ) {\displaystyle \prod _{k=1}^{n}\left(2a+2\cos \left({\frac {2\pi km}{n}}+x\right)\right)=2\left(T_{n}(a)+{(-1)}^{n+m}\cos(nx)\right)} where Tn is the Chebyshev polynomial. The following relationship holds for the sine function ∏ k = 1 n − 1 sin ⁡ ( k π n ) = n 2 n − 1 . {\displaystyle \prod _{k=1}^{n-1}\sin \left({\frac {k\pi }{n}}\right)={\frac {n}{2^{n-1}}}.} More generally for an integer n > 0 sin ⁡ ( n x ) = 2 n − 1 ∏ k = 0 n − 1 sin ⁡ ( k n π + x ) = 2 n − 1 ∏ k = 1 n sin ⁡ ( k n π − x ) . {\displaystyle \sin(nx)=2^{n-1}\prod _{k=0}^{n-1}\sin \left({\frac {k}{n}}\pi +x\right)=2^{n-1}\prod _{k=1}^{n}\sin \left({\frac {k}{n}}\pi -x\right).} or written in terms of the chord function crd ⁡ x ≡ 2 sin ⁡ 1 2 x {\textstyle \operatorname {crd} x\equiv 2\sin {\tfrac {1}{2}}x} , crd ⁡ ( n x ) = ∏ k = 1 n crd ⁡ ( k n 2 π − x ) . {\displaystyle \operatorname {crd} (nx)=\prod _{k=1}^{n}\operatorname {crd} \left({\frac {k}{n}}2\pi -x\right).} This comes from the factorization of the polynomial z n − 1 {\textstyle z^{n}-1} into linear factors (cf. root of unity): For any complex z and an integer n > 0, z n − 1 = ∏ k = 1 n ( z − exp ⁡ ( k n 2 π i ) ) . {\displaystyle z^{n}-1=\prod _{k=1}^{n}\left(z-\exp {\Bigl (}{\frac {k}{n}}2\pi i{\Bigr )}\right).} == Linear combinations == For some purposes it is important to know that any linear combination of sine waves of the same period or frequency but different phase shifts is also a sine wave with the same period or frequency, but a different phase shift. 
This is useful in sinusoid data fitting, because the measured or observed data are linearly related to the a and b unknowns of the in-phase and quadrature components basis below, resulting in a simpler Jacobian, compared to that of c {\displaystyle c} and φ {\displaystyle \varphi } . === Sine and cosine === The linear combination, or harmonic addition, of sine and cosine waves is equivalent to a single sine wave with a phase shift and scaled amplitude, a cos ⁡ x + b sin ⁡ x = c cos ⁡ ( x + φ ) {\displaystyle a\cos x+b\sin x=c\cos(x+\varphi )} where c {\displaystyle c} and φ {\displaystyle \varphi } are defined as follows: c = sgn ⁡ ( a ) a 2 + b 2 , φ = arctan ( − b / a ) , {\displaystyle {\begin{aligned}c&=\operatorname {sgn}(a){\sqrt {a^{2}+b^{2}}},\\\varphi &={\arctan }{\bigl (}{-b/a}{\bigr )},\end{aligned}}} given that a ≠ 0. {\displaystyle a\neq 0.} === Arbitrary phase shift === More generally, for arbitrary phase shifts, we have a sin ⁡ ( x + θ a ) + b sin ⁡ ( x + θ b ) = c sin ⁡ ( x + φ ) {\displaystyle a\sin(x+\theta _{a})+b\sin(x+\theta _{b})=c\sin(x+\varphi )} where c {\displaystyle c} and φ {\displaystyle \varphi } satisfy: c 2 = a 2 + b 2 + 2 a b cos ⁡ ( θ a − θ b ) , tan ⁡ φ = a sin ⁡ θ a + b sin ⁡ θ b a cos ⁡ θ a + b cos ⁡ θ b . {\displaystyle {\begin{aligned}c^{2}&=a^{2}+b^{2}+2ab\cos \left(\theta _{a}-\theta _{b}\right),\\\tan \varphi &={\frac {a\sin \theta _{a}+b\sin \theta _{b}}{a\cos \theta _{a}+b\cos \theta _{b}}}.\end{aligned}}} === More than two sinusoids === The general case reads ∑ i a i sin ⁡ ( x + θ i ) = a sin ⁡ ( x + θ ) , {\displaystyle \sum _{i}a_{i}\sin(x+\theta _{i})=a\sin(x+\theta ),} where a 2 = ∑ i , j a i a j cos ⁡ ( θ i − θ j ) {\displaystyle a^{2}=\sum _{i,j}a_{i}a_{j}\cos(\theta _{i}-\theta _{j})} and tan ⁡ θ = ∑ i a i sin ⁡ θ i ∑ i a i cos ⁡ θ i . {\displaystyle \tan \theta ={\frac {\sum _{i}a_{i}\sin \theta _{i}}{\sum _{i}a_{i}\cos \theta _{i}}}.} == Lagrange's trigonometric identities == These identities, named after Joseph Louis Lagrange, are: ∑ k = 0 n sin ⁡ k θ = cos ⁡ 1 2 θ − cos ⁡ ( ( n + 1 2 ) θ ) 2 sin ⁡ 1 2 θ ∑ k = 1 n cos ⁡ k θ = − sin ⁡ 1 2 θ + sin ⁡ ( ( n + 1 2 ) θ ) 2 sin ⁡ 1 2 θ {\displaystyle {\begin{aligned}\sum _{k=0}^{n}\sin k\theta &={\frac {\cos {\tfrac {1}{2}}\theta -\cos \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{2\sin {\tfrac {1}{2}}\theta }}\\[5pt]\sum _{k=1}^{n}\cos k\theta &={\frac {-\sin {\tfrac {1}{2}}\theta +\sin \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{2\sin {\tfrac {1}{2}}\theta }}\end{aligned}}} for θ ≢ 0 ( mod 2 π ) . {\displaystyle \theta \not \equiv 0{\pmod {2\pi }}.} A related function is the Dirichlet kernel: D n ( θ ) = 1 + 2 ∑ k = 1 n cos ⁡ k θ = sin ⁡ ( ( n + 1 2 ) θ ) sin ⁡ 1 2 θ . {\displaystyle D_{n}(\theta )=1+2\sum _{k=1}^{n}\cos k\theta ={\frac {\sin \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{\sin {\tfrac {1}{2}}\theta }}.} A similar identity is ∑ k = 1 n cos ⁡ ( 2 k − 1 ) α = sin ⁡ ( 2 n α ) 2 sin ⁡ α . {\displaystyle \sum _{k=1}^{n}\cos(2k-1)\alpha ={\frac {\sin(2n\alpha )}{2\sin \alpha }}.} The proof is as follows. By using the angle sum and difference identities, sin ⁡ ( A + B ) − sin ⁡ ( A − B ) = 2 cos ⁡ A sin ⁡ B . 
{\displaystyle \sin(A+B)-\sin(A-B)=2\cos A\sin B.} Now consider the following formula, 2 sin ⁡ α ∑ k = 1 n cos ⁡ ( 2 k − 1 ) α = 2 sin ⁡ α cos ⁡ α + 2 sin ⁡ α cos ⁡ 3 α + 2 sin ⁡ α cos ⁡ 5 α + … + 2 sin ⁡ α cos ⁡ ( 2 n − 1 ) α {\displaystyle 2\sin \alpha \sum _{k=1}^{n}\cos(2k-1)\alpha =2\sin \alpha \cos \alpha +2\sin \alpha \cos 3\alpha +2\sin \alpha \cos 5\alpha +\ldots +2\sin \alpha \cos(2n-1)\alpha } and this formula can be written by using the above identity, 2 sin ⁡ α ∑ k = 1 n cos ⁡ ( 2 k − 1 ) α = ∑ k = 1 n ( sin ⁡ ( 2 k α ) − sin ⁡ ( 2 ( k − 1 ) α ) ) = ( sin ⁡ 2 α − sin ⁡ 0 ) + ( sin ⁡ 4 α − sin ⁡ 2 α ) + ( sin ⁡ 6 α − sin ⁡ 4 α ) + … + ( sin ⁡ ( 2 n α ) − sin ⁡ ( 2 ( n − 1 ) α ) ) = sin ⁡ ( 2 n α ) . {\displaystyle {\begin{aligned}&2\sin \alpha \sum _{k=1}^{n}\cos(2k-1)\alpha \\&\quad =\sum _{k=1}^{n}(\sin(2k\alpha )-\sin(2(k-1)\alpha ))\\&\quad =(\sin 2\alpha -\sin 0)+(\sin 4\alpha -\sin 2\alpha )+(\sin 6\alpha -\sin 4\alpha )+\ldots +(\sin(2n\alpha )-\sin(2(n-1)\alpha ))\\&\quad =\sin(2n\alpha ).\end{aligned}}} So, dividing this formula by 2 sin ⁡ α {\displaystyle 2\sin \alpha } completes the proof. == Certain linear fractional transformations == If f ( x ) {\displaystyle f(x)} is given by the linear fractional transformation f ( x ) = ( cos ⁡ α ) x − sin ⁡ α ( sin ⁡ α ) x + cos ⁡ α , {\displaystyle f(x)={\frac {(\cos \alpha )x-\sin \alpha }{(\sin \alpha )x+\cos \alpha }},} and similarly g ( x ) = ( cos ⁡ β ) x − sin ⁡ β ( sin ⁡ β ) x + cos ⁡ β , {\displaystyle g(x)={\frac {(\cos \beta )x-\sin \beta }{(\sin \beta )x+\cos \beta }},} then f ( g ( x ) ) = g ( f ( x ) ) = ( cos ⁡ ( α + β ) ) x − sin ⁡ ( α + β ) ( sin ⁡ ( α + β ) ) x + cos ⁡ ( α + β ) . {\displaystyle f{\big (}g(x){\big )}=g{\big (}f(x){\big )}={\frac {{\big (}\cos(\alpha +\beta ){\big )}x-\sin(\alpha +\beta )}{{\big (}\sin(\alpha +\beta ){\big )}x+\cos(\alpha +\beta )}}.} More tersely stated, if for all α {\displaystyle \alpha } we let f α {\displaystyle f_{\alpha }} be what we called f {\displaystyle f} above, then f α ∘ f β = f α + β . {\displaystyle f_{\alpha }\circ f_{\beta }=f_{\alpha +\beta }.} If x {\displaystyle x} is the slope of a line, then f ( x ) {\displaystyle f(x)} is the slope of its rotation through an angle of − α . {\displaystyle -\alpha .} == Relation to the complex exponential function == Euler's formula states that, for any real number x: e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where i is the imaginary unit. Substituting −x for x gives us: e − i x = cos ⁡ ( − x ) + i sin ⁡ ( − x ) = cos ⁡ x − i sin ⁡ x . {\displaystyle e^{-ix}=\cos(-x)+i\sin(-x)=\cos x-i\sin x.} These two equations can be used to solve for cosine and sine in terms of the exponential function. Specifically, cos ⁡ x = e i x + e − i x 2 {\displaystyle \cos x={\frac {e^{ix}+e^{-ix}}{2}}} sin ⁡ x = e i x − e − i x 2 i {\displaystyle \sin x={\frac {e^{ix}-e^{-ix}}{2i}}} These formulae are useful for proving many other trigonometric identities. For example, that e i ( θ + φ ) = e i θ e i φ {\displaystyle e^{i(\theta +\varphi )}=e^{i\theta }e^{i\varphi }} means that cos ⁡ ( θ + φ ) + i sin ⁡ ( θ + φ ) = ( cos ⁡ θ + i sin ⁡ θ ) ( cos ⁡ φ + i sin ⁡ φ ) . {\displaystyle \cos(\theta +\varphi )+i\sin(\theta +\varphi )=(\cos \theta +i\sin \theta )(\cos \varphi +i\sin \varphi ).} That the real part of the left hand side equals the real part of the right hand side is an angle addition formula for cosine. The equality of the imaginary parts gives an angle addition formula for sine. The following table expresses the trigonometric functions and their inverses in terms of the exponential function and the complex logarithm. == Relation to complex hyperbolic functions == Trigonometric functions may be deduced from hyperbolic functions with complex arguments. 
The formulae for the relations are shown below. sin ⁡ x = − i sinh ⁡ ( i x ) cos ⁡ x = cosh ⁡ ( i x ) tan ⁡ x = − i tanh ⁡ ( i x ) cot ⁡ x = i coth ⁡ ( i x ) sec ⁡ x = sech ⁡ ( i x ) csc ⁡ x = i csch ⁡ ( i x ) {\displaystyle {\begin{aligned}\sin x&=-i\sinh(ix)\\\cos x&=\cosh(ix)\\\tan x&=-i\tanh(ix)\\\cot x&=i\coth(ix)\\\sec x&=\operatorname {sech} (ix)\\\csc x&=i\operatorname {csch} (ix)\\\end{aligned}}} == Series expansion == When using a power series expansion to define trigonometric functions, the following identities are obtained: sin ⁡ x = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n + 1 ( 2 n + 1 ) ! , {\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n+1}}{(2n+1)!}},} cos ⁡ x = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n ( 2 n ) ! . {\displaystyle \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n}}{(2n)!}}.} == Infinite product formulae == For applications to special functions, the following infinite product formulae for trigonometric functions are useful: sin ⁡ x = x ∏ n = 1 ∞ ( 1 − x 2 π 2 n 2 ) , cos ⁡ x = ∏ n = 1 ∞ ( 1 − x 2 π 2 ( n − 1 2 ) ) 2 ) , sinh ⁡ x = x ∏ n = 1 ∞ ( 1 + x 2 π 2 n 2 ) , cosh ⁡ x = ∏ n = 1 ∞ ( 1 + x 2 π 2 ( n − 1 2 ) ) 2 ) . {\displaystyle {\begin{aligned}\sin x&=x\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{\pi ^{2}n^{2}}}\right),&\cos x&=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{\pi ^{2}\left(n-{\frac {1}{2}}\right)\!{\vphantom {)}}^{2}}}\right),\\[10mu]\sinh x&=x\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{\pi ^{2}n^{2}}}\right),&\cosh x&=\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{\pi ^{2}\left(n-{\frac {1}{2}}\right)\!{\vphantom {)}}^{2}}}\right).\end{aligned}}} == Inverse trigonometric functions == The following identities give the result of composing a trigonometric function with an inverse trigonometric function. sin ⁡ ( arcsin ⁡ x ) = x cos ⁡ ( arcsin ⁡ x ) = 1 − x 2 tan ⁡ ( arcsin ⁡ x ) = x 1 − x 2 sin ⁡ ( arccos ⁡ x ) = 1 − x 2 cos ⁡ ( arccos ⁡ x ) = x tan ⁡ ( arccos ⁡ x ) = 1 − x 2 x sin ⁡ ( arctan ⁡ x ) = x 1 + x 2 cos ⁡ ( arctan ⁡ x ) = 1 1 + x 2 tan ⁡ ( arctan ⁡ x ) = x sin ⁡ ( arccsc ⁡ x ) = 1 x cos ⁡ ( arccsc ⁡ x ) = x 2 − 1 x tan ⁡ ( arccsc ⁡ x ) = 1 x 2 − 1 sin ⁡ ( arcsec ⁡ x ) = x 2 − 1 x cos ⁡ ( arcsec ⁡ x ) = 1 x tan ⁡ ( arcsec ⁡ x ) = x 2 − 1 sin ⁡ ( arccot ⁡ x ) = 1 1 + x 2 cos ⁡ ( arccot ⁡ x ) = x 1 + x 2 tan ⁡ ( arccot ⁡ x ) = 1 x {\displaystyle {\begin{aligned}\sin(\arcsin x)&=x&\cos(\arcsin x)&={\sqrt {1-x^{2}}}&\tan(\arcsin x)&={\frac {x}{\sqrt {1-x^{2}}}}\\\sin(\arccos x)&={\sqrt {1-x^{2}}}&\cos(\arccos x)&=x&\tan(\arccos x)&={\frac {\sqrt {1-x^{2}}}{x}}\\\sin(\arctan x)&={\frac {x}{\sqrt {1+x^{2}}}}&\cos(\arctan x)&={\frac {1}{\sqrt {1+x^{2}}}}&\tan(\arctan x)&=x\\\sin(\operatorname {arccsc} x)&={\frac {1}{x}}&\cos(\operatorname {arccsc} x)&={\frac {\sqrt {x^{2}-1}}{x}}&\tan(\operatorname {arccsc} x)&={\frac {1}{\sqrt {x^{2}-1}}}\\\sin(\operatorname {arcsec} x)&={\frac {\sqrt {x^{2}-1}}{x}}&\cos(\operatorname {arcsec} x)&={\frac {1}{x}}&\tan(\operatorname {arcsec} x)&={\sqrt {x^{2}-1}}\\\sin(\operatorname {arccot} x)&={\frac {1}{\sqrt {1+x^{2}}}}&\cos(\operatorname {arccot} x)&={\frac {x}{\sqrt {1+x^{2}}}}&\tan(\operatorname {arccot} x)&={\frac {1}{x}}\\\end{aligned}}} Taking the multiplicative inverse of both sides of each equation above results in the equations for csc = 1 sin , sec = 1 cos , and cot = 1 tan . 
{\displaystyle \csc ={\frac {1}{\sin }},\;\sec ={\frac {1}{\cos }},{\text{ and }}\cot ={\frac {1}{\tan }}.} In each case the right-hand side of the resulting equation is simply the reciprocal of the corresponding right-hand side above. For example, the equation for cot ⁡ ( arcsin ⁡ x ) {\displaystyle \cot(\arcsin x)} is: cot ⁡ ( arcsin ⁡ x ) = 1 tan ⁡ ( arcsin ⁡ x ) = 1 x 1 − x 2 = 1 − x 2 x {\displaystyle \cot(\arcsin x)={\frac {1}{\tan(\arcsin x)}}={\frac {1}{\frac {x}{\sqrt {1-x^{2}}}}}={\frac {\sqrt {1-x^{2}}}{x}}} while the equations for csc ⁡ ( arccos ⁡ x ) {\displaystyle \csc(\arccos x)} and sec ⁡ ( arccos ⁡ x ) {\displaystyle \sec(\arccos x)} are: csc ⁡ ( arccos ⁡ x ) = 1 sin ⁡ ( arccos ⁡ x ) = 1 1 − x 2 and sec ⁡ ( arccos ⁡ x ) = 1 cos ⁡ ( arccos ⁡ x ) = 1 x . {\displaystyle \csc(\arccos x)={\frac {1}{\sin(\arccos x)}}={\frac {1}{\sqrt {1-x^{2}}}}\qquad {\text{ and }}\quad \sec(\arccos x)={\frac {1}{\cos(\arccos x)}}={\frac {1}{x}}.} The following identities are implied by the reflection identities. They hold whenever x , r , s , − x , − r , and − s {\displaystyle x,r,s,-x,-r,{\text{ and }}-s} are in the domains of the relevant functions. π 2 = arcsin ⁡ ( x ) + arccos ⁡ ( x ) = arctan ⁡ ( r ) + arccot ⁡ ( r ) = arcsec ⁡ ( s ) + arccsc ⁡ ( s ) π = arccos ⁡ ( x ) + arccos ⁡ ( − x ) = arccot ⁡ ( r ) + arccot ⁡ ( − r ) = arcsec ⁡ ( s ) + arcsec ⁡ ( − s ) 0 = arcsin ⁡ ( x ) + arcsin ⁡ ( − x ) = arctan ⁡ ( r ) + arctan ⁡ ( − r ) = arccsc ⁡ ( s ) + arccsc ⁡ ( − s ) {\displaystyle {\begin{alignedat}{9}{\frac {\pi }{2}}~&=~\arcsin(x)&&+\arccos(x)~&&=~\arctan(r)&&+\operatorname {arccot}(r)~&&=~\operatorname {arcsec}(s)&&+\operatorname {arccsc}(s)\\[0.4ex]\pi ~&=~\arccos(x)&&+\arccos(-x)~&&=~\operatorname {arccot}(r)&&+\operatorname {arccot}(-r)~&&=~\operatorname {arcsec}(s)&&+\operatorname {arcsec}(-s)\\[0.4ex]0~&=~\arcsin(x)&&+\arcsin(-x)~&&=~\arctan(r)&&+\arctan(-r)~&&=~\operatorname {arccsc}(s)&&+\operatorname {arccsc}(-s)\\[1.0ex]\end{alignedat}}} Also, arctan ⁡ x + arctan ⁡ 1 x = { π 2 , if x > 0 − π 2 , if x < 0 arccot ⁡ x + arccot ⁡ 1 x = { π 2 , if x > 0 3 π 2 , if x < 0 {\displaystyle {\begin{aligned}\arctan x+\arctan {\dfrac {1}{x}}&={\begin{cases}{\frac {\pi }{2}},&{\text{if }}x>0\\-{\frac {\pi }{2}},&{\text{if }}x<0\end{cases}}\\\operatorname {arccot} x+\operatorname {arccot} {\dfrac {1}{x}}&={\begin{cases}{\frac {\pi }{2}},&{\text{if }}x>0\\{\frac {3\pi }{2}},&{\text{if }}x<0\end{cases}}\\\end{aligned}}} arccos ⁡ 1 x = arcsec ⁡ x and arcsec ⁡ 1 x = arccos ⁡ x {\displaystyle \arccos {\frac {1}{x}}=\operatorname {arcsec} x\qquad {\text{ and }}\qquad \operatorname {arcsec} {\frac {1}{x}}=\arccos x} arcsin ⁡ 1 x = arccsc ⁡ x and arccsc ⁡ 1 x = arcsin ⁡ x {\displaystyle \arcsin {\frac {1}{x}}=\operatorname {arccsc} x\qquad {\text{ and }}\qquad \operatorname {arccsc} {\frac {1}{x}}=\arcsin x} The arctangent function can be expanded as a series: arctan ⁡ ( n x ) = ∑ m = 1 n arctan ⁡ x 1 + ( m − 1 ) m x 2 {\displaystyle \arctan(nx)=\sum _{m=1}^{n}\arctan {\frac {x}{1+(m-1)mx^{2}}}} == Identities without variables == In terms of the arctangent function we have arctan ⁡ 1 2 = arctan ⁡ 1 3 + arctan ⁡ 1 7 . {\displaystyle \arctan {\frac {1}{2}}=\arctan {\frac {1}{3}}+\arctan {\frac {1}{7}}.} The curious identity known as Morrie's law, cos ⁡ 20 ∘ ⋅ cos ⁡ 40 ∘ ⋅ cos ⁡ 80 ∘ = 1 8 , {\displaystyle \cos 20^{\circ }\cdot \cos 40^{\circ }\cdot \cos 80^{\circ }={\frac {1}{8}},} is a special case of an identity that contains one variable: ∏ j = 0 k − 1 cos ⁡ ( 2 j x ) = sin ⁡ ( 2 k x ) 2 k sin ⁡ x . 
{\displaystyle \prod _{j=0}^{k-1}\cos \left(2^{j}x\right)={\frac {\sin \left(2^{k}x\right)}{2^{k}\sin x}}.} Similarly, sin ⁡ 20 ∘ ⋅ sin ⁡ 40 ∘ ⋅ sin ⁡ 80 ∘ = 3 8 {\displaystyle \sin 20^{\circ }\cdot \sin 40^{\circ }\cdot \sin 80^{\circ }={\frac {\sqrt {3}}{8}}} is a special case of an identity with x = 20 ∘ {\displaystyle x=20^{\circ }} : sin ⁡ x ⋅ sin ⁡ ( 60 ∘ − x ) ⋅ sin ⁡ ( 60 ∘ + x ) = sin ⁡ 3 x 4 . {\displaystyle \sin x\cdot \sin \left(60^{\circ }-x\right)\cdot \sin \left(60^{\circ }+x\right)={\frac {\sin 3x}{4}}.} For the case x = 15 ∘ {\displaystyle x=15^{\circ }} , sin ⁡ 15 ∘ ⋅ sin ⁡ 45 ∘ ⋅ sin ⁡ 75 ∘ = 2 8 , sin ⁡ 15 ∘ ⋅ sin ⁡ 75 ∘ = 1 4 . {\displaystyle {\begin{aligned}\sin 15^{\circ }\cdot \sin 45^{\circ }\cdot \sin 75^{\circ }&={\frac {\sqrt {2}}{8}},\\\sin 15^{\circ }\cdot \sin 75^{\circ }&={\frac {1}{4}}.\end{aligned}}} For the case x = 10 ∘ {\displaystyle x=10^{\circ }} , sin ⁡ 10 ∘ ⋅ sin ⁡ 50 ∘ ⋅ sin ⁡ 70 ∘ = 1 8 . {\displaystyle \sin 10^{\circ }\cdot \sin 50^{\circ }\cdot \sin 70^{\circ }={\frac {1}{8}}.} The same cosine identity is cos ⁡ x ⋅ cos ⁡ ( 60 ∘ − x ) ⋅ cos ⁡ ( 60 ∘ + x ) = cos ⁡ 3 x 4 . {\displaystyle \cos x\cdot \cos \left(60^{\circ }-x\right)\cdot \cos \left(60^{\circ }+x\right)={\frac {\cos 3x}{4}}.} Similarly, cos ⁡ 10 ∘ ⋅ cos ⁡ 50 ∘ ⋅ cos ⁡ 70 ∘ = 3 8 , cos ⁡ 15 ∘ ⋅ cos ⁡ 45 ∘ ⋅ cos ⁡ 75 ∘ = 2 8 , cos ⁡ 15 ∘ ⋅ cos ⁡ 75 ∘ = 1 4 . {\displaystyle {\begin{aligned}\cos 10^{\circ }\cdot \cos 50^{\circ }\cdot \cos 70^{\circ }&={\frac {\sqrt {3}}{8}},\\\cos 15^{\circ }\cdot \cos 45^{\circ }\cdot \cos 75^{\circ }&={\frac {\sqrt {2}}{8}},\\\cos 15^{\circ }\cdot \cos 75^{\circ }&={\frac {1}{4}}.\end{aligned}}} Similarly, tan ⁡ 50 ∘ ⋅ tan ⁡ 60 ∘ ⋅ tan ⁡ 70 ∘ = tan ⁡ 80 ∘ , tan ⁡ 40 ∘ ⋅ tan ⁡ 30 ∘ ⋅ tan ⁡ 20 ∘ = tan ⁡ 10 ∘ . {\displaystyle {\begin{aligned}\tan 50^{\circ }\cdot \tan 60^{\circ }\cdot \tan 70^{\circ }&=\tan 80^{\circ },\\\tan 40^{\circ }\cdot \tan 30^{\circ }\cdot \tan 20^{\circ }&=\tan 10^{\circ }.\end{aligned}}} The following is perhaps not as readily generalized to an identity containing variables (but see explanation below): cos ⁡ 24 ∘ + cos ⁡ 48 ∘ + cos ⁡ 96 ∘ + cos ⁡ 168 ∘ = 1 2 . {\displaystyle \cos 24^{\circ }+\cos 48^{\circ }+\cos 96^{\circ }+\cos 168^{\circ }={\frac {1}{2}}.} Degree measure ceases to be more felicitous than radian measure when we consider this identity with 21 in the denominators: cos ⁡ 2 π 21 + cos ⁡ ( 2 ⋅ 2 π 21 ) + cos ⁡ ( 4 ⋅ 2 π 21 ) + cos ⁡ ( 5 ⋅ 2 π 21 ) + cos ⁡ ( 8 ⋅ 2 π 21 ) + cos ⁡ ( 10 ⋅ 2 π 21 ) = 1 2 . {\displaystyle \cos {\frac {2\pi }{21}}+\cos \left(2\cdot {\frac {2\pi }{21}}\right)+\cos \left(4\cdot {\frac {2\pi }{21}}\right)+\cos \left(5\cdot {\frac {2\pi }{21}}\right)+\cos \left(8\cdot {\frac {2\pi }{21}}\right)+\cos \left(10\cdot {\frac {2\pi }{21}}\right)={\frac {1}{2}}.} The factors 1, 2, 4, 5, 8, 10 may start to make the pattern clear: they are those integers less than ⁠21/2⁠ that are relatively prime to (or have no prime factors in common with) 21. The last several examples are corollaries of a basic fact about the irreducible cyclotomic polynomials: the cosines are the real parts of the zeroes of those polynomials; the sum of the zeroes is the Möbius function evaluated at (in the very last case above) 21; only half of the zeroes are present above. The two identities preceding this last one arise in the same fashion with 21 replaced by 10 and 15, respectively. 
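The Möbius connection can be made explicit by a short check. The primitive 21st roots of unity sum to μ ( 21 ) = 1 , {\displaystyle \mu (21)=1,} and they occur in conjugate pairs e ± 2 π i k / 21 , {\displaystyle e^{\pm 2\pi ik/21},} so summing cos ⁡ ( 2 π k / 21 ) {\displaystyle \cos(2\pi k/21)} over only the six values 0 < k < 21 / 2 {\displaystyle 0<k<21/2} coprime to 21 captures exactly half of the total: ∑ gcd ( k , 21 ) = 1 0 < k < 21 / 2 cos ⁡ 2 π k 21 = μ ( 21 ) 2 = 1 2 . {\displaystyle \sum _{\begin{smallmatrix}\gcd(k,21)=1\\0<k<21/2\end{smallmatrix}}\cos {\frac {2\pi k}{21}}={\frac {\mu (21)}{2}}={\frac {1}{2}}.} 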
Other cosine identities include: 2 cos ⁡ π 3 = 1 , 2 cos ⁡ π 5 × 2 cos ⁡ 2 π 5 = 1 , 2 cos ⁡ π 7 × 2 cos ⁡ 2 π 7 × 2 cos ⁡ 3 π 7 = 1 , {\displaystyle {\begin{aligned}2\cos {\frac {\pi }{3}}&=1,\\2\cos {\frac {\pi }{5}}\times 2\cos {\frac {2\pi }{5}}&=1,\\2\cos {\frac {\pi }{7}}\times 2\cos {\frac {2\pi }{7}}\times 2\cos {\frac {3\pi }{7}}&=1,\end{aligned}}} and so forth for all odd numbers, and hence cos ⁡ π 3 + cos ⁡ π 5 × cos ⁡ 2 π 5 + cos ⁡ π 7 × cos ⁡ 2 π 7 × cos ⁡ 3 π 7 + ⋯ = 1. {\displaystyle \cos {\frac {\pi }{3}}+\cos {\frac {\pi }{5}}\times \cos {\frac {2\pi }{5}}+\cos {\frac {\pi }{7}}\times \cos {\frac {2\pi }{7}}\times \cos {\frac {3\pi }{7}}+\dots =1.} Many of those curious identities stem from more general facts like the following: ∏ k = 1 n − 1 sin ⁡ k π n = n 2 n − 1 {\displaystyle \prod _{k=1}^{n-1}\sin {\frac {k\pi }{n}}={\frac {n}{2^{n-1}}}} and ∏ k = 1 n − 1 cos ⁡ k π n = sin ⁡ π n 2 2 n − 1 . {\displaystyle \prod _{k=1}^{n-1}\cos {\frac {k\pi }{n}}={\frac {\sin {\frac {\pi n}{2}}}{2^{n-1}}}.} Combining these gives us ∏ k = 1 n − 1 tan ⁡ k π n = n sin ⁡ π n 2 {\displaystyle \prod _{k=1}^{n-1}\tan {\frac {k\pi }{n}}={\frac {n}{\sin {\frac {\pi n}{2}}}}} If n is an odd number ( n = 2 m + 1 {\displaystyle n=2m+1} ) we can make use of the symmetries to get ∏ k = 1 m tan ⁡ k π 2 m + 1 = 2 m + 1 {\displaystyle \prod _{k=1}^{m}\tan {\frac {k\pi }{2m+1}}={\sqrt {2m+1}}} The transfer function of the Butterworth low pass filter can be expressed in terms of polynomials and poles. By setting the frequency as the cutoff frequency, the following identity can be proved: ∏ k = 1 n sin ⁡ ( 2 k − 1 ) π 4 n = ∏ k = 1 n cos ⁡ ( 2 k − 1 ) π 4 n = 2 2 n {\displaystyle \prod _{k=1}^{n}\sin {\frac {\left(2k-1\right)\pi }{4n}}=\prod _{k=1}^{n}\cos {\frac {\left(2k-1\right)\pi }{4n}}={\frac {\sqrt {2}}{2^{n}}}} === Computing π === An efficient way to compute π to a large number of digits is based on the following identity without variables, due to Machin. This is known as a Machin-like formula: π 4 = 4 arctan ⁡ 1 5 − arctan ⁡ 1 239 {\displaystyle {\frac {\pi }{4}}=4\arctan {\frac {1}{5}}-\arctan {\frac {1}{239}}} or, alternatively, by using an identity of Leonhard Euler: π 4 = 5 arctan ⁡ 1 7 + 2 arctan ⁡ 3 79 {\displaystyle {\frac {\pi }{4}}=5\arctan {\frac {1}{7}}+2\arctan {\frac {3}{79}}} or by using Pythagorean triples: π = arccos ⁡ 4 5 + arccos ⁡ 5 13 + arccos ⁡ 16 65 = arcsin ⁡ 3 5 + arcsin ⁡ 12 13 + arcsin ⁡ 63 65 . {\displaystyle \pi =\arccos {\frac {4}{5}}+\arccos {\frac {5}{13}}+\arccos {\frac {16}{65}}=\arcsin {\frac {3}{5}}+\arcsin {\frac {12}{13}}+\arcsin {\frac {63}{65}}.} Others include: π 4 = arctan ⁡ 1 2 + arctan ⁡ 1 3 , {\displaystyle {\frac {\pi }{4}}=\arctan {\frac {1}{2}}+\arctan {\frac {1}{3}},} π = arctan ⁡ 1 + arctan ⁡ 2 + arctan ⁡ 3 , {\displaystyle \pi =\arctan 1+\arctan 2+\arctan 3,} π 4 = 2 arctan ⁡ 1 3 + arctan ⁡ 1 7 . {\displaystyle {\frac {\pi }{4}}=2\arctan {\frac {1}{3}}+\arctan {\frac {1}{7}}.} Generally, for numbers t1, ..., tn−1 ∈ (−1, 1) for which θn = arctan t1 + ⋯ + arctan tn−1 ∈ (π/4, 3π/4), let tn = tan(π/2 − θn) = cot θn. This last expression can be computed directly using the formula for the cotangent of a sum of angles whose tangents are t1, ..., tn−1 and its value will be in (−1, 1). In particular, the computed tn will be rational whenever all the t1, ..., tn−1 values are rational. 
With these values, π 2 = ∑ k = 1 n arctan ⁡ ( t k ) π = ∑ k = 1 n sgn ⁡ ( t k ) arccos ⁡ ( 1 − t k 2 1 + t k 2 ) π = ∑ k = 1 n arcsin ⁡ ( 2 t k 1 + t k 2 ) π = ∑ k = 1 n arctan ⁡ ( 2 t k 1 − t k 2 ) , {\displaystyle {\begin{aligned}{\frac {\pi }{2}}&=\sum _{k=1}^{n}\arctan(t_{k})\\\pi &=\sum _{k=1}^{n}\operatorname {sgn}(t_{k})\arccos \left({\frac {1-t_{k}^{2}}{1+t_{k}^{2}}}\right)\\\pi &=\sum _{k=1}^{n}\arcsin \left({\frac {2t_{k}}{1+t_{k}^{2}}}\right)\\\pi &=\sum _{k=1}^{n}\arctan \left({\frac {2t_{k}}{1-t_{k}^{2}}}\right)\,,\end{aligned}}} where in all but the first expression, we have used tangent half-angle formulae. The first two formulae work even if one or more of the tk values is not within (−1, 1). Note that if t = p/q is rational, then the (2t, 1 − t², 1 + t²) values in the above formulae are proportional to the Pythagorean triple (2pq, q² − p², q² + p²). For example, for n = 3 terms, π 2 = arctan ⁡ ( a b ) + arctan ⁡ ( c d ) + arctan ⁡ ( b d − a c a d + b c ) {\displaystyle {\frac {\pi }{2}}=\arctan \left({\frac {a}{b}}\right)+\arctan \left({\frac {c}{d}}\right)+\arctan \left({\frac {bd-ac}{ad+bc}}\right)} for any a, b, c, d > 0. === An identity of Euclid === Euclid showed in Book XIII, Proposition 10 of his Elements that the area of the square on the side of a regular pentagon inscribed in a circle is equal to the sum of the areas of the squares on the sides of the regular hexagon and the regular decagon inscribed in the same circle. In the language of modern trigonometry, this says: sin 2 ⁡ 18 ∘ + sin 2 ⁡ 30 ∘ = sin 2 ⁡ 36 ∘ . {\displaystyle \sin ^{2}18^{\circ }+\sin ^{2}30^{\circ }=\sin ^{2}36^{\circ }.} Ptolemy used this proposition to compute some angles in his table of chords in Book I, chapter 11 of Almagest. == Composition of trigonometric functions == These identities involve a trigonometric function of a trigonometric function: cos ⁡ ( t sin ⁡ x ) = J 0 ( t ) + 2 ∑ k = 1 ∞ J 2 k ( t ) cos ⁡ ( 2 k x ) {\displaystyle \cos(t\sin x)=J_{0}(t)+2\sum _{k=1}^{\infty }J_{2k}(t)\cos(2kx)} sin ⁡ ( t sin ⁡ x ) = 2 ∑ k = 0 ∞ J 2 k + 1 ( t ) sin ⁡ ( ( 2 k + 1 ) x ) {\displaystyle \sin(t\sin x)=2\sum _{k=0}^{\infty }J_{2k+1}(t)\sin {\big (}(2k+1)x{\big )}} cos ⁡ ( t cos ⁡ x ) = J 0 ( t ) + 2 ∑ k = 1 ∞ ( − 1 ) k J 2 k ( t ) cos ⁡ ( 2 k x ) {\displaystyle \cos(t\cos x)=J_{0}(t)+2\sum _{k=1}^{\infty }(-1)^{k}J_{2k}(t)\cos(2kx)} sin ⁡ ( t cos ⁡ x ) = 2 ∑ k = 0 ∞ ( − 1 ) k J 2 k + 1 ( t ) cos ⁡ ( ( 2 k + 1 ) x ) {\displaystyle \sin(t\cos x)=2\sum _{k=0}^{\infty }(-1)^{k}J_{2k+1}(t)\cos {\big (}(2k+1)x{\big )}} where Ji are Bessel functions. == Further "conditional" identities for the case α + β + γ = 180° == A conditional trigonometric identity is a trigonometric identity that holds if specified conditions on the arguments to the trigonometric functions are satisfied. The following formulae apply to arbitrary plane triangles and follow from α + β + γ = 180 ∘ , {\displaystyle \alpha +\beta +\gamma =180^{\circ },} as long as the functions occurring in the formulae are well-defined (the latter applies only to the formulae in which tangents and cotangents occur). 
tan ⁡ α + tan ⁡ β + tan ⁡ γ = tan ⁡ α tan ⁡ β tan ⁡ γ 1 = cot ⁡ β cot ⁡ γ + cot ⁡ γ cot ⁡ α + cot ⁡ α cot ⁡ β cot ⁡ ( α 2 ) + cot ⁡ ( β 2 ) + cot ⁡ ( γ 2 ) = cot ⁡ ( α 2 ) cot ⁡ ( β 2 ) cot ⁡ ( γ 2 ) 1 = tan ⁡ ( β 2 ) tan ⁡ ( γ 2 ) + tan ⁡ ( γ 2 ) tan ⁡ ( α 2 ) + tan ⁡ ( α 2 ) tan ⁡ ( β 2 ) sin ⁡ α + sin ⁡ β + sin ⁡ γ = 4 cos ⁡ ( α 2 ) cos ⁡ ( β 2 ) cos ⁡ ( γ 2 ) − sin ⁡ α + sin ⁡ β + sin ⁡ γ = 4 cos ⁡ ( α 2 ) sin ⁡ ( β 2 ) sin ⁡ ( γ 2 ) cos ⁡ α + cos ⁡ β + cos ⁡ γ = 4 sin ⁡ ( α 2 ) sin ⁡ ( β 2 ) sin ⁡ ( γ 2 ) + 1 − cos ⁡ α + cos ⁡ β + cos ⁡ γ = 4 sin ⁡ ( α 2 ) cos ⁡ ( β 2 ) cos ⁡ ( γ 2 ) − 1 sin ⁡ ( 2 α ) + sin ⁡ ( 2 β ) + sin ⁡ ( 2 γ ) = 4 sin ⁡ α sin ⁡ β sin ⁡ γ − sin ⁡ ( 2 α ) + sin ⁡ ( 2 β ) + sin ⁡ ( 2 γ ) = 4 sin ⁡ α cos ⁡ β cos ⁡ γ cos ⁡ ( 2 α ) + cos ⁡ ( 2 β ) + cos ⁡ ( 2 γ ) = − 4 cos ⁡ α cos ⁡ β cos ⁡ γ − 1 − cos ⁡ ( 2 α ) + cos ⁡ ( 2 β ) + cos ⁡ ( 2 γ ) = − 4 cos ⁡ α sin ⁡ β sin ⁡ γ + 1 sin 2 ⁡ α + sin 2 ⁡ β + sin 2 ⁡ γ = 2 cos ⁡ α cos ⁡ β cos ⁡ γ + 2 − sin 2 ⁡ α + sin 2 ⁡ β + sin 2 ⁡ γ = 2 cos ⁡ α sin ⁡ β sin ⁡ γ cos 2 ⁡ α + cos 2 ⁡ β + cos 2 ⁡ γ = − 2 cos ⁡ α cos ⁡ β cos ⁡ γ + 1 − cos 2 ⁡ α + cos 2 ⁡ β + cos 2 ⁡ γ = − 2 cos ⁡ α sin ⁡ β sin ⁡ γ + 1 sin 2 ⁡ ( 2 α ) + sin 2 ⁡ ( 2 β ) + sin 2 ⁡ ( 2 γ ) = − 2 cos ⁡ ( 2 α ) cos ⁡ ( 2 β ) cos ⁡ ( 2 γ ) + 2 cos 2 ⁡ ( 2 α ) + cos 2 ⁡ ( 2 β ) + cos 2 ⁡ ( 2 γ ) = 2 cos ⁡ ( 2 α ) cos ⁡ ( 2 β ) cos ⁡ ( 2 γ ) + 1 1 = sin 2 ⁡ ( α 2 ) + sin 2 ⁡ ( β 2 ) + sin 2 ⁡ ( γ 2 ) + 2 sin ⁡ ( α 2 ) sin ⁡ ( β 2 ) sin ⁡ ( γ 2 ) {\displaystyle {\begin{aligned}\tan \alpha +\tan \beta +\tan \gamma &=\tan \alpha \tan \beta \tan \gamma \\1&=\cot \beta \cot \gamma +\cot \gamma \cot \alpha +\cot \alpha \cot \beta \\\cot \left({\frac {\alpha }{2}}\right)+\cot \left({\frac {\beta }{2}}\right)+\cot \left({\frac {\gamma }{2}}\right)&=\cot \left({\frac {\alpha }{2}}\right)\cot \left({\frac {\beta }{2}}\right)\cot \left({\frac {\gamma }{2}}\right)\\1&=\tan \left({\frac {\beta }{2}}\right)\tan \left({\frac {\gamma }{2}}\right)+\tan \left({\frac {\gamma }{2}}\right)\tan \left({\frac {\alpha }{2}}\right)+\tan \left({\frac {\alpha }{2}}\right)\tan \left({\frac {\beta }{2}}\right)\\\sin \alpha +\sin \beta +\sin \gamma &=4\cos \left({\frac {\alpha }{2}}\right)\cos \left({\frac {\beta }{2}}\right)\cos \left({\frac {\gamma }{2}}\right)\\-\sin \alpha +\sin \beta +\sin \gamma &=4\cos \left({\frac {\alpha }{2}}\right)\sin \left({\frac {\beta }{2}}\right)\sin \left({\frac {\gamma }{2}}\right)\\\cos \alpha +\cos \beta +\cos \gamma &=4\sin \left({\frac {\alpha }{2}}\right)\sin \left({\frac {\beta }{2}}\right)\sin \left({\frac {\gamma }{2}}\right)+1\\-\cos \alpha +\cos \beta +\cos \gamma &=4\sin \left({\frac {\alpha }{2}}\right)\cos \left({\frac {\beta }{2}}\right)\cos \left({\frac {\gamma }{2}}\right)-1\\\sin(2\alpha )+\sin(2\beta )+\sin(2\gamma )&=4\sin \alpha \sin \beta \sin \gamma \\-\sin(2\alpha )+\sin(2\beta )+\sin(2\gamma )&=4\sin \alpha \cos \beta \cos \gamma \\\cos(2\alpha )+\cos(2\beta )+\cos(2\gamma )&=-4\cos \alpha \cos \beta \cos \gamma -1\\-\cos(2\alpha )+\cos(2\beta )+\cos(2\gamma )&=-4\cos \alpha \sin \beta \sin \gamma +1\\\sin ^{2}\alpha +\sin ^{2}\beta +\sin ^{2}\gamma &=2\cos \alpha \cos \beta \cos \gamma +2\\-\sin ^{2}\alpha +\sin ^{2}\beta +\sin ^{2}\gamma &=2\cos \alpha \sin \beta \sin \gamma \\\cos ^{2}\alpha +\cos ^{2}\beta +\cos ^{2}\gamma &=-2\cos \alpha \cos \beta \cos \gamma +1\\-\cos ^{2}\alpha +\cos ^{2}\beta +\cos ^{2}\gamma &=-2\cos \alpha \sin \beta \sin \gamma +1\\\sin ^{2}(2\alpha )+\sin ^{2}(2\beta )+\sin ^{2}(2\gamma )&=-2\cos(2\alpha 
)\cos(2\beta )\cos(2\gamma )+2\\\cos ^{2}(2\alpha )+\cos ^{2}(2\beta )+\cos ^{2}(2\gamma )&=2\cos(2\alpha )\,\cos(2\beta )\,\cos(2\gamma )+1\\1&=\sin ^{2}\left({\frac {\alpha }{2}}\right)+\sin ^{2}\left({\frac {\beta }{2}}\right)+\sin ^{2}\left({\frac {\gamma }{2}}\right)+2\sin \left({\frac {\alpha }{2}}\right)\,\sin \left({\frac {\beta }{2}}\right)\,\sin \left({\frac {\gamma }{2}}\right)\end{aligned}}} == Historical shorthands == The versine, coversine, haversine, and exsecant were used in navigation. For example, the haversine formula was used to calculate the distance between two points on a sphere. They are rarely used today. == Miscellaneous == === Dirichlet kernel === The Dirichlet kernel Dn(x) is the function occurring on both sides of the next identity: 1 + 2 cos ⁡ x + 2 cos ⁡ ( 2 x ) + 2 cos ⁡ ( 3 x ) + ⋯ + 2 cos ⁡ ( n x ) = sin ⁡ ( ( n + 1 2 ) x ) sin ⁡ ( 1 2 x ) . {\displaystyle 1+2\cos x+2\cos(2x)+2\cos(3x)+\cdots +2\cos(nx)={\frac {\sin \left(\left(n+{\frac {1}{2}}\right)x\right)}{\sin \left({\frac {1}{2}}x\right)}}.} The convolution of any integrable function of period 2 π {\displaystyle 2\pi } with the Dirichlet kernel coincides with the function's n {\displaystyle n} th-degree Fourier approximation. The same holds for any measure or generalized function. === Tangent half-angle substitution === If we set t = tan ⁡ x 2 , {\displaystyle t=\tan {\frac {x}{2}},} then sin ⁡ x = 2 t 1 + t 2 ; cos ⁡ x = 1 − t 2 1 + t 2 ; e i x = 1 + i t 1 − i t ; d x = 2 d t 1 + t 2 , {\displaystyle \sin x={\frac {2t}{1+t^{2}}};\qquad \cos x={\frac {1-t^{2}}{1+t^{2}}};\qquad e^{ix}={\frac {1+it}{1-it}};\qquad dx={\frac {2\,dt}{1+t^{2}}},} where e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} sometimes abbreviated to cis x. When this substitution of t {\displaystyle t} for tan ⁠x/2⁠ is used in calculus, it follows that sin ⁡ x {\displaystyle \sin x} is replaced by ⁠2t/1 + t2⁠, cos ⁡ x {\displaystyle \cos x} is replaced by ⁠1 − t2/1 + t2⁠ and the differential dx is replaced by ⁠2 dt/1 + t2⁠. Thereby one converts rational functions of sin ⁡ x {\displaystyle \sin x} and cos ⁡ x {\displaystyle \cos x} to rational functions of t {\displaystyle t} in order to find their antiderivatives. === Viète's infinite product === cos ⁡ θ 2 ⋅ cos ⁡ θ 4 ⋅ cos ⁡ θ 8 ⋯ = ∏ n = 1 ∞ cos ⁡ θ 2 n = sin ⁡ θ θ = sinc ⁡ θ . {\displaystyle \cos {\frac {\theta }{2}}\cdot \cos {\frac {\theta }{4}}\cdot \cos {\frac {\theta }{8}}\cdots =\prod _{n=1}^{\infty }\cos {\frac {\theta }{2^{n}}}={\frac {\sin \theta }{\theta }}=\operatorname {sinc} \theta .} == See also == == References == == Bibliography == == External links == Values of sin and cos, expressed in surds, for integer multiples of 3° and of ⁠5+5/8⁠°, and for the same angles csc and sec and tan
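As a concrete illustration of the Machin-like formulas in the Computing π section above, the following minimal Python sketch evaluates Machin's identity π/4 = 4 arctan(1/5) − arctan(1/239) with the standard-library decimal module (the function names and digit budget here are illustrative choices, not part of any standard library API):

    from decimal import Decimal, getcontext

    def arctan_recip(n, digits):
        # arctan(1/n) from its Taylor series x - x**3/3 + x**5/5 - ..., x = 1/n
        getcontext().prec = digits + 10           # work with guard digits
        power = Decimal(1) / n                    # holds (1/n)**(2k + 1)
        total, k = Decimal(0), 0
        eps = Decimal(10) ** -(digits + 5)
        while power > eps:
            total += (-1) ** k * power / (2 * k + 1)   # alternating series
            power /= n * n
            k += 1
        return total

    def machin_pi(digits=50):
        # pi/4 = 4*arctan(1/5) - arctan(1/239)
        return +(4 * (4 * arctan_recip(5, digits) - arctan_recip(239, digits)))

    print(machin_pi(50))    # 3.14159265358979323846...

Each term of the arctan(1/239) series contributes roughly 4.8 further decimal digits (a factor of 1/239² per term), which is why identities of this shape long remained the standard tool for computing digits of π.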
Wikipedia:List of vector calculus identities#0
The following are important identities involving derivatives and integrals in vector calculus. == Operator notation == === Gradient === For a function f ( x , y , z ) {\displaystyle f(x,y,z)} in three-dimensional Cartesian coordinate variables, the gradient is the vector field: grad ⁡ ( f ) = ∇ f = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) f = ∂ f ∂ x i + ∂ f ∂ y j + ∂ f ∂ z k {\displaystyle \operatorname {grad} (f)=\nabla f={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}f={\frac {\partial f}{\partial x}}\mathbf {i} +{\frac {\partial f}{\partial y}}\mathbf {j} +{\frac {\partial f}{\partial z}}\mathbf {k} } where i, j, k are the standard unit vectors for the x, y, z-axes. More generally, for a function of n variables ψ ( x 1 , … , x n ) {\displaystyle \psi (x_{1},\ldots ,x_{n})} , also called a scalar field, the gradient is the vector field: ∇ ψ = ( ∂ ∂ x 1 , … , ∂ ∂ x n ) ψ = ∂ ψ ∂ x 1 e 1 + ⋯ + ∂ ψ ∂ x n e n {\displaystyle \nabla \psi ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\end{pmatrix}}\psi ={\frac {\partial \psi }{\partial x_{1}}}\mathbf {e} _{1}+\dots +{\frac {\partial \psi }{\partial x_{n}}}\mathbf {e} _{n}} where e i ( i = 1 , 2 , . . . , n ) {\displaystyle \mathbf {e} _{i}\,(i=1,2,...,n)} are mutually orthogonal unit vectors. As the name implies, the gradient is proportional to, and points in the direction of, the function's most rapid (positive) change. For a vector field A = ( A 1 , … , A n ) {\displaystyle \mathbf {A} =\left(A_{1},\ldots ,A_{n}\right)} , also called a tensor field of order 1, the gradient or total derivative is the n × n Jacobian matrix: J A = d A = ( ∇ A ) T = ( ∂ A i ∂ x j ) i j . {\displaystyle \mathbf {J} _{\mathbf {A} }=d\mathbf {A} =(\nabla \!\mathbf {A} )^{\textsf {T}}=\left({\frac {\partial A_{i}}{\partial x_{j}}}\right)_{\!ij}.} For a tensor field T {\displaystyle \mathbf {T} } of any order k, the gradient grad ⁡ ( T ) = d T = ( ∇ T ) T {\displaystyle \operatorname {grad} (\mathbf {T} )=d\mathbf {T} =(\nabla \mathbf {T} )^{\textsf {T}}} is a tensor field of order k + 1. For a tensor field T {\displaystyle \mathbf {T} } of order k > 0, the tensor field ∇ T {\displaystyle \nabla \mathbf {T} } of order k + 1 is defined by the recursive relation ( ∇ T ) ⋅ C = ∇ ( T ⋅ C ) {\displaystyle (\nabla \mathbf {T} )\cdot \mathbf {C} =\nabla (\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. === Divergence === In Cartesian coordinates, the divergence of a continuously differentiable vector field F = F x i + F y j + F z k {\displaystyle \mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} } is the scalar-valued function: div ⁡ F = ∇ ⋅ F = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) ⋅ ( F x , F y , F z ) = ∂ F x ∂ x + ∂ F y ∂ y + ∂ F z ∂ z . {\displaystyle \operatorname {div} \mathbf {F} =\nabla \cdot \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\cdot {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\frac {\partial F_{x}}{\partial x}}+{\frac {\partial F_{y}}{\partial y}}+{\frac {\partial F_{z}}{\partial z}}.} As the name implies, the divergence is a (local) measure of the degree to which vectors in the field diverge. 
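As a concrete check of these component formulas, the following short sympy sketch computes the gradient and divergence of sample fields (the fields themselves are arbitrary illustrative choices):

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    # gradient of a scalar field, component by component
    f = x**2 * y + sp.sin(z)
    print([sp.diff(f, v) for v in (x, y, z)])    # [2*x*y, x**2, cos(z)]

    # divergence of a vector field F = (Fx, Fy, Fz)
    F = (x * y, y * z, z * x)
    print(sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z))))    # x + y + z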
The divergence of a tensor field T {\displaystyle \mathbf {T} } of non-zero order k is written as div ⁡ ( T ) = ∇ ⋅ T {\displaystyle \operatorname {div} (\mathbf {T} )=\nabla \cdot \mathbf {T} } , a contraction of a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. The divergence of a higher-order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity, ∇ ⋅ ( A ⊗ T ) = T ( ∇ ⋅ A ) + ( A ⋅ ∇ ) T {\displaystyle \nabla \cdot \left(\mathbf {A} \otimes \mathbf {T} \right)=\mathbf {T} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {T} } where A ⋅ ∇ {\displaystyle \mathbf {A} \cdot \nabla } is the directional derivative in the direction of A {\displaystyle \mathbf {A} } multiplied by its magnitude. Specifically, for the outer product of two vectors, ∇ ⋅ ( A B T ) = B ( ∇ ⋅ A ) + ( A ⋅ ∇ ) B . {\displaystyle \nabla \cdot \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=\mathbf {B} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {B} .} For a tensor field T {\displaystyle \mathbf {T} } of order k > 1, the tensor field ∇ ⋅ T {\displaystyle \nabla \cdot \mathbf {T} } of order k − 1 is defined by the recursive relation ( ∇ ⋅ T ) ⋅ C = ∇ ⋅ ( T ⋅ C ) {\displaystyle (\nabla \cdot \mathbf {T} )\cdot \mathbf {C} =\nabla \cdot (\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. === Curl === In Cartesian coordinates, for F = F x i + F y j + F z k {\displaystyle \mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} } the curl is the vector field: curl ⁡ F = ∇ × F = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) × ( F x , F y , F z ) = | i j k ∂ ∂ x ∂ ∂ y ∂ ∂ z F x F y F z | = ( ∂ F z ∂ y − ∂ F y ∂ z ) i + ( ∂ F x ∂ z − ∂ F z ∂ x ) j + ( ∂ F y ∂ x − ∂ F x ∂ y ) k {\displaystyle {\begin{aligned}\operatorname {curl} \mathbf {F} &=\nabla \times \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\times {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\begin{vmatrix}\mathbf {i} &\mathbf {j} &\mathbf {k} \\{\frac {\partial }{\partial x}}&{\frac {\partial }{\partial y}}&{\frac {\partial }{\partial z}}\\F_{x}&F_{y}&F_{z}\end{vmatrix}}\\[1em]&=\left({\frac {\partial F_{z}}{\partial y}}-{\frac {\partial F_{y}}{\partial z}}\right)\mathbf {i} +\left({\frac {\partial F_{x}}{\partial z}}-{\frac {\partial F_{z}}{\partial x}}\right)\mathbf {j} +\left({\frac {\partial F_{y}}{\partial x}}-{\frac {\partial F_{x}}{\partial y}}\right)\mathbf {k} \end{aligned}}} where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. As the name implies the curl is a measure of how much nearby vectors tend in a circular direction. In Einstein notation, the vector field F = ( F 1 , F 2 , F 3 ) {\displaystyle \mathbf {F} ={\begin{pmatrix}F_{1},\ F_{2},\ F_{3}\end{pmatrix}}} has curl given by: ∇ × F = ε i j k e i ∂ F k ∂ x j {\displaystyle \nabla \times \mathbf {F} =\varepsilon ^{ijk}\mathbf {e} _{i}{\frac {\partial F_{k}}{\partial x_{j}}}} where ε {\displaystyle \varepsilon } = ±1 or 0 is the Levi-Civita parity symbol. 
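The Einstein-notation formula for the curl lends itself to a direct implementation; the sketch below (illustrative, using the closed form ε_ijk = (j − i)(k − i)(k − j)/2 for indices 0, 1, 2) computes a curl literally as ε_ijk ∂F_k/∂x_j:

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')
    X = (x1, x2, x3)

    def eps(i, j, k):
        # Levi-Civita symbol on {0, 1, 2}: the sign of the permutation, 0 on repeats
        return (j - i) * (k - i) * (k - j) // 2

    def curl(F):
        # (curl F)_i = eps_{ijk} dF_k/dx_j, with the sums over j, k written out
        return [sum(eps(i, j, k) * sp.diff(F[k], X[j])
                    for j in range(3) for k in range(3))
                for i in range(3)]

    F = (-x2, x1, sp.Integer(0))    # a rigid rotation about the x3-axis
    print(curl(F))                  # [0, 0, 2]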
For a tensor field T {\displaystyle \mathbf {T} } of order k > 1, the tensor field ∇ × T {\displaystyle \nabla \times \mathbf {T} } of order k is defined by the recursive relation ( ∇ × T ) ⋅ C = ∇ × ( T ⋅ C ) {\displaystyle (\nabla \times \mathbf {T} )\cdot \mathbf {C} =\nabla \times (\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. A tensor field of order greater than one may be decomposed into a sum of outer products, and then the following identity may be used: ∇ × ( A ⊗ T ) = ( ∇ × A ) ⊗ T − A × ( ∇ T ) . {\displaystyle \nabla \times \left(\mathbf {A} \otimes \mathbf {T} \right)=(\nabla \times \mathbf {A} )\otimes \mathbf {T} -\mathbf {A} \times (\nabla \mathbf {T} ).} Specifically, for the outer product of two vectors, ∇ × ( A B T ) = ( ∇ × A ) B T − A × ( ∇ B ) . {\displaystyle \nabla \times \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=(\nabla \times \mathbf {A} )\mathbf {B} ^{\textsf {T}}-\mathbf {A} \times (\nabla \mathbf {B} ).} === Laplacian === In Cartesian coordinates, the Laplacian of a function f ( x , y , z ) {\displaystyle f(x,y,z)} is Δ f = ∇ 2 f = ( ∇ ⋅ ∇ ) f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 + ∂ 2 f ∂ z 2 . {\displaystyle \Delta f=\nabla ^{2}\!f=(\nabla \cdot \nabla )f={\frac {\partial ^{2}\!f}{\partial x^{2}}}+{\frac {\partial ^{2}\!f}{\partial y^{2}}}+{\frac {\partial ^{2}\!f}{\partial z^{2}}}.} The Laplacian is a measure of how much a function is changing over a small sphere centered at the point. When the Laplacian is equal to 0, the function is called a harmonic function. That is, Δ f = 0. {\displaystyle \Delta f=0.} For a tensor field, T {\displaystyle \mathbf {T} } , the Laplacian is generally written as: Δ T = ∇ 2 T = ( ∇ ⋅ ∇ ) T {\displaystyle \Delta \mathbf {T} =\nabla ^{2}\mathbf {T} =(\nabla \cdot \nabla )\mathbf {T} } and is a tensor field of the same order. For a tensor field T {\displaystyle \mathbf {T} } of order k > 0, the tensor field ∇ 2 T {\displaystyle \nabla ^{2}\mathbf {T} } of order k is defined by the recursive relation ( ∇ 2 T ) ⋅ C = ∇ 2 ( T ⋅ C ) {\displaystyle \left(\nabla ^{2}\mathbf {T} \right)\cdot \mathbf {C} =\nabla ^{2}(\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. === Special notations === In Feynman subscript notation, ∇ B ( A ⋅ B ) = A × ( ∇ × B ) + ( A ⋅ ∇ ) B {\displaystyle \nabla _{\mathbf {B} }\!\left(\mathbf {A{\cdot }B} \right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} } where the notation ∇B means the subscripted gradient operates on only the factor B. More general but similar is the Hestenes overdot notation in geometric algebra. The above identity is then expressed as: ∇ ˙ ( A ⋅ B ˙ ) = A × ( ∇ × B ) + ( A ⋅ ∇ ) B {\displaystyle {\dot {\nabla }}\left(\mathbf {A} {\cdot }{\dot {\mathbf {B} }}\right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} } where overdots define the scope of the vector derivative. The dotted vector, in this case B, is differentiated, while the (undotted) A is held constant. 
The utility of the Feynman subscript notation lies in its use in the derivation of vector and tensor derivative identities, as in the following example which uses the algebraic identity C⋅(A×B) = (C×A)⋅B: ∇ ⋅ ( A × B ) = ∇ A ⋅ ( A × B ) + ∇ B ⋅ ( A × B ) = ( ∇ A × A ) ⋅ B + ( ∇ B × A ) ⋅ B = ( ∇ A × A ) ⋅ B − ( A × ∇ B ) ⋅ B = ( ∇ A × A ) ⋅ B − A ⋅ ( ∇ B × B ) = ( ∇ × A ) ⋅ B − A ⋅ ( ∇ × B ) {\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\nabla _{\mathbf {A} }\cdot (\mathbf {A} \times \mathbf {B} )+\nabla _{\mathbf {B} }\cdot (\mathbf {A} \times \mathbf {B} )\\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} +(\nabla _{\mathbf {B} }\times \mathbf {A} )\cdot \mathbf {B} \\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} -(\mathbf {A} \times \nabla _{\mathbf {B} })\cdot \mathbf {B} \\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla _{\mathbf {B} }\times \mathbf {B} )\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla \times \mathbf {B} )\end{aligned}}} An alternative method is to use the Cartesian components of the del operator as follows (with implicit summation over the index i): ∇ ⋅ ( A × B ) = e i ∂ i ⋅ ( A × B ) = e i ⋅ ∂ i ( A × B ) = e i ⋅ ( ∂ i A × B + A × ∂ i B ) = e i ⋅ ( ∂ i A × B ) + e i ⋅ ( A × ∂ i B ) = ( e i × ∂ i A ) ⋅ B + ( e i × A ) ⋅ ∂ i B = ( e i × ∂ i A ) ⋅ B − ( A × e i ) ⋅ ∂ i B = ( e i × ∂ i A ) ⋅ B − A ⋅ ( e i × ∂ i B ) = ( e i ∂ i × A ) ⋅ B − A ⋅ ( e i ∂ i × B ) = ( ∇ × A ) ⋅ B − A ⋅ ( ∇ × B ) {\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\mathbf {e} _{i}\partial _{i}\cdot (\mathbf {A} \times \mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot \partial _{i}(\mathbf {A} \times \mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot (\partial _{i}\mathbf {A} \times \mathbf {B} +\mathbf {A} \times \partial _{i}\mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot (\partial _{i}\mathbf {A} \times \mathbf {B} )+\mathbf {e} _{i}\cdot (\mathbf {A} \times \partial _{i}\mathbf {B} )\\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} +(\mathbf {e} _{i}\times \mathbf {A} )\cdot \partial _{i}\mathbf {B} \\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} -(\mathbf {A} \times \mathbf {e} _{i})\cdot \partial _{i}\mathbf {B} \\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\mathbf {e} _{i}\times \partial _{i}\mathbf {B} )\\[2pt]&=(\mathbf {e} _{i}\partial _{i}\times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\mathbf {e} _{i}\partial _{i}\times \mathbf {B} )\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla \times \mathbf {B} )\end{aligned}}} Another method of deriving vector and tensor derivative identities is to replace all occurrences of a vector in an algebraic identity by the del operator, provided that no variable occurs both inside and outside the scope of an operator or both inside the scope of one operator in a term and outside the scope of another operator in the same term (i.e., the operators must be nested). The validity of this rule follows from the validity of the Feynman method, for one may always substitute a subscripted del and then immediately drop the subscript under the condition of the rule. For example, from the identity A⋅(B×C) = (A×B)⋅C we may derive A⋅(∇×C) = (A×∇)⋅C but not ∇⋅(B×C) = (∇×B)⋅C, nor from A⋅(B×A) = 0 may we derive A⋅(∇×A) = 0. 
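The identity just derived, ∇·(A×B) = (∇×A)·B − A·(∇×B), can also be confirmed mechanically for sample fields before continuing; a brief sympy sketch (the fields are arbitrary smooth choices):

    import sympy as sp
    from sympy.vector import CoordSys3D, divergence, curl

    N = CoordSys3D('N')
    x, y, z = N.x, N.y, N.z

    A = x * y * N.i + sp.sin(z) * N.j + z**2 * N.k
    B = y * N.i + x * z * N.j + sp.exp(x) * N.k

    lhs = divergence(A.cross(B))
    rhs = curl(A).dot(B) - A.dot(curl(B))
    print(sp.simplify(lhs - rhs))    # 0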
On the other hand, a subscripted del operates on all occurrences of the subscript in the term, so that A⋅(∇A×A) = ∇A⋅(A×A) = ∇⋅(A×A) = 0. Also, from A×(A×C) = A(A⋅C) − (A⋅A)C we may derive ∇×(∇×C) = ∇(∇⋅C) − ∇2C, but from (Aψ)⋅(Aφ) = (A⋅A)(ψφ) we may not derive (∇ψ)⋅(∇φ) = ∇2(ψφ). A subscript c on a quantity indicates that it is temporarily considered to be a constant. Since a constant is not a variable, when the substitution rule (see the preceding paragraph) is used it, unlike a variable, may be moved into or out of the scope of a del operator, as in the following example: ∇ ⋅ ( A × B ) = ∇ ⋅ ( A × B c ) + ∇ ⋅ ( A c × B ) = ∇ ⋅ ( A × B c ) − ∇ ⋅ ( B × A c ) = ( ∇ × A ) ⋅ B c − ( ∇ × B ) ⋅ A c = ( ∇ × A ) ⋅ B − ( ∇ × B ) ⋅ A {\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\nabla \cdot (\mathbf {A} \times \mathbf {B} _{\mathrm {c} })+\nabla \cdot (\mathbf {A} _{\mathrm {c} }\times \mathbf {B} )\\[2pt]&=\nabla \cdot (\mathbf {A} \times \mathbf {B} _{\mathrm {c} })-\nabla \cdot (\mathbf {B} \times \mathbf {A} _{\mathrm {c} })\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} _{\mathrm {c} }-(\nabla \times \mathbf {B} )\cdot \mathbf {A} _{\mathrm {c} }\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -(\nabla \times \mathbf {B} )\cdot \mathbf {A} \end{aligned}}} Another way to indicate that a quantity is a constant is to affix it as a subscript to the scope of a del operator, as follows: ∇ ( A ⋅ B ) A = A × ( ∇ × B ) + ( A ⋅ ∇ ) B {\displaystyle \nabla \left(\mathbf {A{\cdot }B} \right)_{\mathbf {A} }=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} } For the remainder of this article, Feynman subscript notation will be used where appropriate. == First derivative identities == For scalar fields ψ {\displaystyle \psi } , ϕ {\displaystyle \phi } and vector fields A {\displaystyle \mathbf {A} } , B {\displaystyle \mathbf {B} } , we have the following derivative identities. === Distributive properties === ∇ ( ψ + ϕ ) = ∇ ψ + ∇ ϕ ∇ ( A + B ) = ∇ A + ∇ B ∇ ⋅ ( A + B ) = ∇ ⋅ A + ∇ ⋅ B ∇ × ( A + B ) = ∇ × A + ∇ × B {\displaystyle {\begin{aligned}\nabla (\psi +\phi )&=\nabla \psi +\nabla \phi \\\nabla (\mathbf {A} +\mathbf {B} )&=\nabla \mathbf {A} +\nabla \mathbf {B} \\\nabla \cdot (\mathbf {A} +\mathbf {B} )&=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} \\\nabla \times (\mathbf {A} +\mathbf {B} )&=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} \end{aligned}}} === First derivative associative properties === ( A ⋅ ∇ ) ψ = A ⋅ ( ∇ ψ ) ( A ⋅ ∇ ) B = A ⋅ ( ∇ B ) ( A × ∇ ) ψ = A × ( ∇ ψ ) ( A × ∇ ) B = A × ( ∇ B ) {\displaystyle {\begin{aligned}(\mathbf {A} \cdot \nabla )\psi &=\mathbf {A} \cdot (\nabla \psi )\\(\mathbf {A} \cdot \nabla )\mathbf {B} &=\mathbf {A} \cdot (\nabla \mathbf {B} )\\(\mathbf {A} \times \nabla )\psi &=\mathbf {A} \times (\nabla \psi )\\(\mathbf {A} \times \nabla )\mathbf {B} &=\mathbf {A} \times (\nabla \mathbf {B} )\end{aligned}}} === Product rule for multiplication by a scalar === We have the following generalizations of the product rule in single-variable calculus. 
∇ ( ψ ϕ ) = ϕ ∇ ψ + ψ ∇ ϕ ∇ ( ψ A ) = ( ∇ ψ ) A T + ψ ∇ A = ∇ ψ ⊗ A + ψ ∇ A ∇ ⋅ ( ψ A ) = ψ ∇ ⋅ A + ( ∇ ψ ) ⋅ A ∇ × ( ψ A ) = ψ ∇ × A + ( ∇ ψ ) × A ∇ 2 ( ψ ϕ ) = ψ ∇ 2 ϕ + 2 ∇ ψ ⋅ ∇ ϕ + ϕ ∇ 2 ψ {\displaystyle {\begin{aligned}\nabla (\psi \phi )&=\phi \,\nabla \psi +\psi \,\nabla \phi \\\nabla (\psi \mathbf {A} )&=(\nabla \psi )\mathbf {A} ^{\textsf {T}}+\psi \nabla \mathbf {A} \ =\ \nabla \psi \otimes \mathbf {A} +\psi \,\nabla \mathbf {A} \\\nabla \cdot (\psi \mathbf {A} )&=\psi \,\nabla {\cdot }\mathbf {A} +(\nabla \psi )\,{\cdot }\mathbf {A} \\\nabla {\times }(\psi \mathbf {A} )&=\psi \,\nabla {\times }\mathbf {A} +(\nabla \psi ){\times }\mathbf {A} \\\nabla ^{2}(\psi \phi )&=\psi \,\nabla ^{2\!}\phi +2\,\nabla \!\psi \cdot \!\nabla \phi +\phi \,\nabla ^{2\!}\psi \end{aligned}}} === Quotient rule for division by a scalar === ∇ ( ψ ϕ ) = ϕ ∇ ψ − ψ ∇ ϕ ϕ 2 ∇ ( A ϕ ) = ϕ ∇ A − ∇ ϕ ⊗ A ϕ 2 ∇ ⋅ ( A ϕ ) = ϕ ∇ ⋅ A − ∇ ϕ ⋅ A ϕ 2 ∇ × ( A ϕ ) = ϕ ∇ × A − ∇ ϕ × A ϕ 2 ∇ 2 ( ψ ϕ ) = ϕ ∇ 2 ψ − 2 ϕ ∇ ( ψ ϕ ) ⋅ ∇ ϕ − ψ ∇ 2 ϕ ϕ 2 {\displaystyle {\begin{aligned}\nabla \left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla \psi -\psi \,\nabla \phi }{\phi ^{2}}}\\[1em]\nabla \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla \mathbf {A} -\nabla \phi \otimes \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \cdot \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\cdot }\mathbf {A} -\nabla \!\phi \cdot \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \times \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\times }\mathbf {A} -\nabla \!\phi \,{\times }\,\mathbf {A} }{\phi ^{2}}}\\[1em]\nabla ^{2}\left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla ^{2\!}\psi -2\,\phi \,\nabla \!\left({\frac {\psi }{\phi }}\right)\cdot \!\nabla \phi -\psi \,\nabla ^{2\!}\phi }{\phi ^{2}}}\end{aligned}}} === Chain rule === Let f ( x ) {\displaystyle f(x)} be a one-variable function from scalars to scalars, r ( t ) = ( x 1 ( t ) , … , x n ( t ) ) {\displaystyle \mathbf {r} (t)=(x_{1}(t),\ldots ,x_{n}(t))} a parametrized curve, ϕ : R n → R {\displaystyle \phi \!:\mathbb {R} ^{n}\to \mathbb {R} } a function from vectors to scalars, and A : R n → R n {\displaystyle \mathbf {A} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} a vector field. We have the following special cases of the multi-variable chain rule. 
∇ ( f ∘ ϕ ) = ( f ′ ∘ ϕ ) ∇ ϕ ( r ∘ f ) ′ = ( r ′ ∘ f ) f ′ ( ϕ ∘ r ) ′ = ( ∇ ϕ ∘ r ) ⋅ r ′ ( A ∘ r ) ′ = r ′ ⋅ ( ∇ A ∘ r ) ∇ ( ϕ ∘ A ) = ( ∇ A ) ⋅ ( ∇ ϕ ∘ A ) ∇ ⋅ ( r ∘ ϕ ) = ∇ ϕ ⋅ ( r ′ ∘ ϕ ) ∇ × ( r ∘ ϕ ) = ∇ ϕ × ( r ′ ∘ ϕ ) {\displaystyle {\begin{aligned}\nabla (f\circ \phi )&=\left(f'\circ \phi \right)\nabla \phi \\(\mathbf {r} \circ f)'&=(\mathbf {r} '\circ f)f'\\(\phi \circ \mathbf {r} )'&=(\nabla \phi \circ \mathbf {r} )\cdot \mathbf {r} '\\(\mathbf {A} \circ \mathbf {r} )'&=\mathbf {r} '\cdot (\nabla \mathbf {A} \circ \mathbf {r} )\\\nabla (\phi \circ \mathbf {A} )&=(\nabla \mathbf {A} )\cdot (\nabla \phi \circ \mathbf {A} )\\\nabla \cdot (\mathbf {r} \circ \phi )&=\nabla \phi \cdot (\mathbf {r} '\circ \phi )\\\nabla \times (\mathbf {r} \circ \phi )&=\nabla \phi \times (\mathbf {r} '\circ \phi )\end{aligned}}} For a vector transformation x : R n → R n {\displaystyle \mathbf {x} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} we have: ∇ ⋅ ( A ∘ x ) = t r ( ( ∇ x ) ⋅ ( ∇ A ∘ x ) ) {\displaystyle \nabla \cdot (\mathbf {A} \circ \mathbf {x} )=\mathrm {tr} \left((\nabla \mathbf {x} )\cdot (\nabla \mathbf {A} \circ \mathbf {x} )\right)} Here we take the trace of the dot product of two second-order tensors, which corresponds to the product of their matrices. === Dot product rule === ∇ ( A ⋅ B ) = ( A ⋅ ∇ ) B + ( B ⋅ ∇ ) A + A × ( ∇ × B ) + B × ( ∇ × A ) = A ⋅ J B + B ⋅ J A = ( ∇ B ) ⋅ A + ( ∇ A ) ⋅ B {\displaystyle {\begin{aligned}\nabla (\mathbf {A} \cdot \mathbf {B} )&\ =\ (\mathbf {A} \cdot \nabla )\mathbf {B} \,+\,(\mathbf {B} \cdot \nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {B} )\,+\,\mathbf {B} {\times }(\nabla {\times }\mathbf {A} )\\&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }+\mathbf {B} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,+\,(\nabla \mathbf {A} )\cdot \mathbf {B} \end{aligned}}} where J A = ( ∇ A ) T = ( ∂ A i / ∂ x j ) i j {\displaystyle \mathbf {J} _{\mathbf {A} }=(\nabla \!\mathbf {A} )^{\textsf {T}}=(\partial A_{i}/\partial x_{j})_{ij}} denotes the Jacobian matrix of the vector field A = ( A 1 , … , A n ) {\displaystyle \mathbf {A} =(A_{1},\ldots ,A_{n})} . Alternatively, using Feynman subscript notation, ∇ ( A ⋅ B ) = ∇ A ( A ⋅ B ) + ∇ B ( A ⋅ B ) . {\displaystyle \nabla (\mathbf {A} \cdot \mathbf {B} )=\nabla _{\mathbf {A} }(\mathbf {A} \cdot \mathbf {B} )+\nabla _{\mathbf {B} }(\mathbf {A} \cdot \mathbf {B} )\ .} See these notes. As a special case, when A = B, 1 2 ∇ ( A ⋅ A ) = A ⋅ J A = ( ∇ A ) ⋅ A = ( A ⋅ ∇ ) A + A × ( ∇ × A ) = A ∇ A . {\displaystyle {\tfrac {1}{2}}\nabla \left(\mathbf {A} \cdot \mathbf {A} \right)\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {A} )\cdot \mathbf {A} \ =\ (\mathbf {A} {\cdot }\nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {A} )\ =\ A\nabla A.} The generalization of the dot product formula to Riemannian manifolds is a defining property of a Riemannian connection, which differentiates a vector field to give a vector-valued 1-form. 
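The dot product rule can be checked the same way; in the sketch below (illustrative; the helper directional implements (U·∇)V componentwise in Cartesian coordinates) both sides agree for sample fields:

    import sympy as sp
    from sympy.vector import CoordSys3D, Vector, gradient, curl

    N = CoordSys3D('N')
    x, y, z = N.x, N.y, N.z

    A = x * z * N.i + y**2 * N.j + sp.sin(x) * N.k
    B = y * N.i + x * N.j + z * N.k

    def directional(U, V):
        # (U . del) V, assembled from the gradients of V's Cartesian components
        out = Vector.zero
        for e in (N.i, N.j, N.k):
            out += U.dot(gradient(V.dot(e))) * e
        return out

    lhs = gradient(A.dot(B))
    rhs = (directional(A, B) + directional(B, A)
           + A.cross(curl(B)) + B.cross(curl(A)))
    print([sp.simplify((lhs - rhs).dot(e)) for e in (N.i, N.j, N.k)])    # [0, 0, 0]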
=== Cross product rule === ∇ ( A × B ) = ( ∇ A ) × B − ( ∇ B ) × A ∇ ⋅ ( A × B ) = ( ∇ × A ) ⋅ B − A ⋅ ( ∇ × B ) ∇ × ( A × B ) = A ( ∇ ⋅ B ) − B ( ∇ ⋅ A ) + ( B ⋅ ∇ ) A − ( A ⋅ ∇ ) B = A ( ∇ ⋅ B ) + ( B ⋅ ∇ ) A − ( B ( ∇ ⋅ A ) + ( A ⋅ ∇ ) B ) = ∇ ⋅ ( B A T ) − ∇ ⋅ ( A B T ) = ∇ ⋅ ( B A T − A B T ) A × ( ∇ × B ) = ∇ B ( A ⋅ B ) − ( A ⋅ ∇ ) B = A ⋅ J B − ( A ⋅ ∇ ) B = ( ∇ B ) ⋅ A − A ⋅ ( ∇ B ) = A ⋅ ( J B − J B T ) ( A × ∇ ) × B = ( ∇ B ) ⋅ A − A ( ∇ ⋅ B ) = A × ( ∇ × B ) + ( A ⋅ ∇ ) B − A ( ∇ ⋅ B ) ( A × ∇ ) ⋅ B = A ⋅ ( ∇ × B ) {\displaystyle {\begin{aligned}\nabla (\mathbf {A} \times \mathbf {B} )&\ =\ (\nabla \mathbf {A} )\times \mathbf {B} \,-\,(\nabla \mathbf {B} )\times \mathbf {A} \\[5pt]\nabla \cdot (\mathbf {A} \times \mathbf {B} )&\ =\ (\nabla {\times }\mathbf {A} )\cdot \mathbf {B} \,-\,\mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\\[5pt]\nabla \times (\mathbf {A} \times \mathbf {B} )&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,-\,\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} )\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\,-\,\nabla {\cdot }\left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\,-\,\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\\[5pt]\mathbf {A} \times (\nabla \times \mathbf {B} )&\ =\ \nabla _{\mathbf {B} }(\mathbf {A} {\cdot }\mathbf {B} )\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} \cdot (\nabla \mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \cdot (\mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} _{\mathbf {B} }^{\textsf {T}})\\[5pt](\mathbf {A} \times \nabla )\times \mathbf {B} &\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \times (\nabla \times \mathbf {B} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[5pt](\mathbf {A} \times \nabla )\cdot \mathbf {B} &\ =\ \mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\end{aligned}}} Note that the matrix J B − J B T {\displaystyle \mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} _{\mathbf {B} }^{\textsf {T}}} is antisymmetric. == Second derivative identities == === Divergence of curl is zero === The divergence of the curl of any continuously twice-differentiable vector field A is always zero: ∇ ⋅ ( ∇ × A ) = 0 {\displaystyle \nabla \cdot (\nabla \times \mathbf {A} )=0} This is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. === Divergence of gradient is Laplacian === The Laplacian of a scalar field is the divergence of its gradient: Δ ψ = ∇ 2 ψ = ∇ ⋅ ( ∇ ψ ) {\displaystyle \Delta \psi =\nabla ^{2}\psi =\nabla \cdot (\nabla \psi )} The result is a scalar quantity. === Divergence of divergence is not defined === The divergence of a vector field A is a scalar, and the divergence of a scalar quantity is undefined. Therefore, ∇ ⋅ ( ∇ ⋅ A ) is undefined. 
{\displaystyle \nabla \cdot (\nabla \cdot \mathbf {A} ){\text{ is undefined.}}} === Curl of gradient is zero === The curl of the gradient of any continuously twice-differentiable scalar field φ {\displaystyle \varphi } (i.e., differentiability class C 2 {\displaystyle C^{2}} ) is always the zero vector: ∇ × ( ∇ φ ) = 0 . {\displaystyle \nabla \times (\nabla \varphi )=\mathbf {0} .} It can be easily proved by expressing ∇ × ( ∇ φ ) {\displaystyle \nabla \times (\nabla \varphi )} in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials). This result is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. === Curl of curl === ∇ × ( ∇ × A ) = ∇ ( ∇ ⋅ A ) − ∇ 2 A {\displaystyle \nabla \times \left(\nabla \times \mathbf {A} \right)\ =\ \nabla (\nabla {\cdot }\mathbf {A} )\,-\,\nabla ^{2\!}\mathbf {A} } Here ∇2 is the vector Laplacian operating on the vector field A. === Curl of divergence is not defined === The divergence of a vector field A is a scalar, and the curl of a scalar quantity is undefined. Therefore, ∇ × ( ∇ ⋅ A ) is undefined. {\displaystyle \nabla \times (\nabla \cdot \mathbf {A} ){\text{ is undefined.}}} === Second derivative associative properties === ( ∇ ⋅ ∇ ) ψ = ∇ ⋅ ( ∇ ψ ) = ∇ 2 ψ ( ∇ ⋅ ∇ ) A = ∇ ⋅ ( ∇ A ) = ∇ 2 A ( ∇ × ∇ ) ψ = ∇ × ( ∇ ψ ) = 0 ( ∇ × ∇ ) A = ∇ × ( ∇ A ) = 0 {\displaystyle {\begin{aligned}(\nabla \cdot \nabla )\psi &=\nabla \cdot (\nabla \psi )=\nabla ^{2}\psi \\(\nabla \cdot \nabla )\mathbf {A} &=\nabla \cdot (\nabla \mathbf {A} )=\nabla ^{2}\mathbf {A} \\(\nabla \times \nabla )\psi &=\nabla \times (\nabla \psi )=\mathbf {0} \\(\nabla \times \nabla )\mathbf {A} &=\nabla \times (\nabla \mathbf {A} )=\mathbf {0} \end{aligned}}} === A mnemonic === The figure to the right is a mnemonic for some of these identities. The abbreviations used are: D: divergence, C: curl, G: gradient, L: Laplacian, CC: curl of curl. Each arrow is labeled with the result of an identity, specifically, the result of applying the operator at the arrow's tail to the operator at its head. The blue circle in the middle means curl of curl exists, whereas the other two red circles (dashed) mean that DD and GG do not exist. 
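The curl-of-curl identity above admits the same kind of componentwise verification; a sympy sketch (illustrative; the vector Laplacian is computed here as the scalar Laplacian of each Cartesian component, which is valid in Cartesian coordinates):

    import sympy as sp
    from sympy.vector import CoordSys3D, Vector, gradient, divergence, curl

    N = CoordSys3D('N')
    x, y, z = N.x, N.y, N.z

    A = x**2 * y * N.i + sp.cos(z) * N.j + x * y * z * N.k   # arbitrary sample field

    def lap(s):
        # scalar Laplacian in Cartesian coordinates
        return sum(sp.diff(s, v, 2) for v in (x, y, z))

    vec_lap = sum((lap(A.dot(e)) * e for e in (N.i, N.j, N.k)), Vector.zero)
    lhs = curl(curl(A))
    rhs = gradient(divergence(A)) - vec_lap
    print([sp.simplify((lhs - rhs).dot(e)) for e in (N.i, N.j, N.k)])    # [0, 0, 0]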
== Summary of important identities == === Differentiation === ==== Gradient ==== ∇ ( ψ + ϕ ) = ∇ ψ + ∇ ϕ {\displaystyle \nabla (\psi +\phi )=\nabla \psi +\nabla \phi } ∇ ( ψ ϕ ) = ϕ ∇ ψ + ψ ∇ ϕ {\displaystyle \nabla (\psi \phi )=\phi \nabla \psi +\psi \nabla \phi } ∇ ( ψ A ) = ∇ ψ ⊗ A + ψ ∇ A {\displaystyle \nabla (\psi \mathbf {A} )=\nabla \psi \otimes \mathbf {A} +\psi \nabla \mathbf {A} } ∇ ( A ⋅ B ) = ( A ⋅ ∇ ) B + ( B ⋅ ∇ ) A + A × ( ∇ × B ) + B × ( ∇ × A ) {\displaystyle \nabla (\mathbf {A} \cdot \mathbf {B} )=(\mathbf {A} \cdot \nabla )\mathbf {B} +(\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {A} \times (\nabla \times \mathbf {B} )+\mathbf {B} \times (\nabla \times \mathbf {A} )} ==== Divergence ==== ∇ ⋅ ( A + B ) = ∇ ⋅ A + ∇ ⋅ B {\displaystyle \nabla \cdot (\mathbf {A} +\mathbf {B} )=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} } ∇ ⋅ ( ψ A ) = ψ ∇ ⋅ A + A ⋅ ∇ ψ {\displaystyle \nabla \cdot \left(\psi \mathbf {A} \right)=\psi \nabla \cdot \mathbf {A} +\mathbf {A} \cdot \nabla \psi } ∇ ⋅ ( A × B ) = ( ∇ × A ) ⋅ B − ( ∇ × B ) ⋅ A {\displaystyle \nabla \cdot \left(\mathbf {A} \times \mathbf {B} \right)=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -(\nabla \times \mathbf {B} )\cdot \mathbf {A} } ==== Curl ==== ∇ × ( A + B ) = ∇ × A + ∇ × B {\displaystyle \nabla \times (\mathbf {A} +\mathbf {B} )=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} } ∇ × ( ψ A ) = ψ ( ∇ × A ) − ( A × ∇ ) ψ = ψ ( ∇ × A ) + ( ∇ ψ ) × A {\displaystyle \nabla \times \left(\psi \mathbf {A} \right)=\psi \,(\nabla \times \mathbf {A} )-(\mathbf {A} \times \nabla )\psi =\psi \,(\nabla \times \mathbf {A} )+(\nabla \psi )\times \mathbf {A} } ∇ × ( ψ ∇ ϕ ) = ∇ ψ × ∇ ϕ {\displaystyle \nabla \times \left(\psi \nabla \phi \right)=\nabla \psi \times \nabla \phi } ∇ × ( A × B ) = A ( ∇ ⋅ B ) − B ( ∇ ⋅ A ) + ( B ⋅ ∇ ) A − ( A ⋅ ∇ ) B {\displaystyle \nabla \times \left(\mathbf {A} \times \mathbf {B} \right)=\mathbf {A} \left(\nabla \cdot \mathbf {B} \right)-\mathbf {B} \left(\nabla \cdot \mathbf {A} \right)+\left(\mathbf {B} \cdot \nabla \right)\mathbf {A} -\left(\mathbf {A} \cdot \nabla \right)\mathbf {B} } ==== Vector-dot-Del Operator ==== ( A ⋅ ∇ ) B = 1 2 [ ∇ ( A ⋅ B ) − ∇ × ( A × B ) − B × ( ∇ × A ) − A × ( ∇ × B ) − B ( ∇ ⋅ A ) + A ( ∇ ⋅ B ) ] {\displaystyle (\mathbf {A} \cdot \nabla )\mathbf {B} ={\frac {1}{2}}{\bigg [}\nabla (\mathbf {A} \cdot \mathbf {B} )-\nabla \times (\mathbf {A} \times \mathbf {B} )-\mathbf {B} \times (\nabla \times \mathbf {A} )-\mathbf {A} \times (\nabla \times \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )+\mathbf {A} (\nabla \cdot \mathbf {B} ){\bigg ]}} ( A ⋅ ∇ ) A = 1 2 ∇ | A | 2 − A × ( ∇ × A ) = 1 2 ∇ | A | 2 + ( ∇ × A ) × A {\displaystyle (\mathbf {A} \cdot \nabla )\mathbf {A} ={\frac {1}{2}}\nabla |\mathbf {A} |^{2}-\mathbf {A} \times (\nabla \times \mathbf {A} )={\frac {1}{2}}\nabla |\mathbf {A} |^{2}+(\nabla \times \mathbf {A} )\times \mathbf {A} } A ⋅ ∇ ( B ⋅ B ) = 2 B ⋅ ( A ⋅ ∇ ) B {\displaystyle \mathbf {A} \cdot \nabla (\mathbf {B} \cdot \mathbf {B} )=2\mathbf {B} \cdot (\mathbf {A} \cdot \nabla )\mathbf {B} } ==== Second derivatives ==== ∇ ⋅ ( ∇ × A ) = 0 {\displaystyle \nabla \cdot (\nabla \times \mathbf {A} )=0} ∇ × ( ∇ ψ ) = 0 {\displaystyle \nabla \times (\nabla \psi )=\mathbf {0} } ∇ ⋅ ( ∇ ψ ) = ∇ 2 ψ {\displaystyle \nabla \cdot (\nabla \psi )=\nabla ^{2}\psi } (scalar Laplacian) ∇ ( ∇ ⋅ A ) − ∇ × ( ∇ × A ) = ∇ 2 A {\displaystyle \nabla \left(\nabla \cdot \mathbf {A} \right)-\nabla \times \left(\nabla \times \mathbf {A} \right)=\nabla ^{2}\mathbf {A} } 
(vector Laplacian) ∇ ⋅ [ ∇ A + ( ∇ A ) T ] = ∇ 2 A + ∇ ( ∇ ⋅ A ) {\displaystyle \nabla \cdot {\big [}\nabla \mathbf {A} +(\nabla \mathbf {A} )^{\textsf {T}}{\big ]}=\nabla ^{2}\mathbf {A} +\nabla (\nabla \cdot \mathbf {A} )} ∇ ⋅ ( ϕ ∇ ψ ) = ϕ ∇ 2 ψ + ∇ ϕ ⋅ ∇ ψ {\displaystyle \nabla \cdot (\phi \nabla \psi )=\phi \nabla ^{2}\psi +\nabla \phi \cdot \nabla \psi } ψ ∇ 2 ϕ − ϕ ∇ 2 ψ = ∇ ⋅ ( ψ ∇ ϕ − ϕ ∇ ψ ) {\displaystyle \psi \nabla ^{2}\phi -\phi \nabla ^{2}\psi =\nabla \cdot \left(\psi \nabla \phi -\phi \nabla \psi \right)} ∇ 2 ( ϕ ψ ) = ϕ ∇ 2 ψ + 2 ( ∇ ϕ ) ⋅ ( ∇ ψ ) + ( ∇ 2 ϕ ) ψ {\displaystyle \nabla ^{2}(\phi \psi )=\phi \nabla ^{2}\psi +2(\nabla \phi )\cdot (\nabla \psi )+\left(\nabla ^{2}\phi \right)\psi } ∇ 2 ( ψ A ) = A ∇ 2 ψ + 2 ( ∇ ψ ⋅ ∇ ) A + ψ ∇ 2 A {\displaystyle \nabla ^{2}(\psi \mathbf {A} )=\mathbf {A} \nabla ^{2}\psi +2(\nabla \psi \cdot \nabla )\mathbf {A} +\psi \nabla ^{2}\mathbf {A} } ∇ 2 ( A ⋅ B ) = A ⋅ ∇ 2 B − B ⋅ ∇ 2 A + 2 ∇ ⋅ ( ( B ⋅ ∇ ) A + B × ( ∇ × A ) ) {\displaystyle \nabla ^{2}(\mathbf {A} \cdot \mathbf {B} )=\mathbf {A} \cdot \nabla ^{2}\mathbf {B} -\mathbf {B} \cdot \nabla ^{2}\!\mathbf {A} +2\nabla \cdot ((\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {B} \times (\nabla \times \mathbf {A} ))} (Green's vector identity) ==== Third derivatives ==== ∇ 2 ( ∇ ψ ) = ∇ ( ∇ ⋅ ( ∇ ψ ) ) = ∇ ( ∇ 2 ψ ) {\displaystyle \nabla ^{2}(\nabla \psi )=\nabla (\nabla \cdot (\nabla \psi ))=\nabla \left(\nabla ^{2}\psi \right)} ∇ 2 ( ∇ ⋅ A ) = ∇ ⋅ ( ∇ ( ∇ ⋅ A ) ) = ∇ ⋅ ( ∇ 2 A ) {\displaystyle \nabla ^{2}(\nabla \cdot \mathbf {A} )=\nabla \cdot (\nabla (\nabla \cdot \mathbf {A} ))=\nabla \cdot \left(\nabla ^{2}\mathbf {A} \right)} ∇ 2 ( ∇ × A ) = − ∇ × ( ∇ × ( ∇ × A ) ) = ∇ × ( ∇ 2 A ) {\displaystyle \nabla ^{2}(\nabla \times \mathbf {A} )=-\nabla \times (\nabla \times (\nabla \times \mathbf {A} ))=\nabla \times \left(\nabla ^{2}\mathbf {A} \right)} === Integration === Below, the curly symbol ∂ means "boundary of" a surface or solid. 
==== Surface–volume integrals ==== In the following surface–volume integral theorems, V denotes a three-dimensional volume with a corresponding two-dimensional boundary S = ∂V (a closed surface): ∂ V {\displaystyle \scriptstyle \partial V} ψ d S = ∭ V ∇ ψ d V {\displaystyle \psi \,d\mathbf {S} \ =\ \iiint _{V}\nabla \psi \,dV} ∂ V {\displaystyle \scriptstyle \partial V} A ⋅ d S = ∭ V ∇ ⋅ A d V {\displaystyle \mathbf {A} \cdot d\mathbf {S} \ =\ \iiint _{V}\nabla \cdot \mathbf {A} \,dV} (divergence theorem) ∂ V {\displaystyle \scriptstyle \partial V} A × d S = − ∭ V ∇ × A d V {\displaystyle \mathbf {A} \times d\mathbf {S} \ =\ -\iiint _{V}\nabla \times \mathbf {A} \,dV} ∂ V {\displaystyle \scriptstyle \partial V} ψ ∇ φ ⋅ d S = ∭ V ( ψ ∇ 2 φ + ∇ φ ⋅ ∇ ψ ) d V {\displaystyle \psi \nabla \!\varphi \cdot d\mathbf {S} \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi +\nabla \!\varphi \cdot \nabla \!\psi \right)\,dV} (Green's first identity) ∂ V {\displaystyle \scriptstyle \partial V} ( ψ ∇ φ − φ ∇ ψ ) ⋅ d S = {\displaystyle \left(\psi \nabla \!\varphi -\varphi \nabla \!\psi \right)\cdot d\mathbf {S} \ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ( ψ ∂ φ ∂ n − φ ∂ ψ ∂ n ) d S {\displaystyle \left(\psi {\frac {\partial \varphi }{\partial n}}-\varphi {\frac {\partial \psi }{\partial n}}\right)dS} = ∭ V ( ψ ∇ 2 φ − φ ∇ 2 ψ ) d V {\displaystyle \displaystyle \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi -\varphi \nabla ^{2}\!\psi \right)\,dV} (Green's second identity) ∭ V A ⋅ ∇ ψ d V = {\displaystyle \iiint _{V}\mathbf {A} \cdot \nabla \psi \,dV\ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ψ A ⋅ d S − ∭ V ψ ∇ ⋅ A d V {\displaystyle \psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV} (integration by parts) ∭ V ψ ∇ ⋅ A d V = {\displaystyle \iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV\ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ψ A ⋅ d S − ∭ V A ⋅ ∇ ψ d V {\displaystyle \psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\mathbf {A} \cdot \nabla \psi \,dV} (integration by parts) ∭ V A ⋅ ( ∇ × B ) d V = − {\displaystyle \iiint _{V}\mathbf {A} \cdot \left(\nabla \times \mathbf {B} \right)\,dV\ =\ -} ∂ V {\displaystyle \scriptstyle \partial V} ( A × B ) ⋅ d S + ∭ V ( ∇ × A ) ⋅ B d V {\displaystyle \left(\mathbf {A} \times \mathbf {B} \right)\cdot d\mathbf {S} +\iiint _{V}\left(\nabla \times \mathbf {A} \right)\cdot \mathbf {B} \,dV} (integration by parts) ∂ V {\displaystyle \scriptstyle \partial V} A × ( d S ⋅ ( B C T ) ) = ∭ V A × ( ∇ ⋅ ( B C T ) ) d V + ∭ V B ⋅ ( ∇ A ) × C d V {\displaystyle \mathbf {A} \times \left(d\mathbf {S} \cdot \left(\mathbf {B} \mathbf {C} ^{\textsf {T}}\right)\right)\ =\ \iiint _{V}\mathbf {A} \times \left(\nabla \cdot \left(\mathbf {B} \mathbf {C} ^{\textsf {T}}\right)\right)\,dV+\iiint _{V}\mathbf {B} \cdot (\nabla \mathbf {A} )\times \mathbf {C} \,dV} ∭ V ( ∇ ⋅ B + B ⋅ ∇ ) A d V = {\displaystyle \iiint _{V}\left(\nabla \cdot \mathbf {B} +\mathbf {B} \cdot \nabla \right)\mathbf {A} \,dV\ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ( B ⋅ d S ) A {\displaystyle \left(\mathbf {B} \cdot d\mathbf {S} \right)\mathbf {A} } ==== Curve–surface integrals ==== In the following curve–surface integral theorems, S denotes a 2d open surface with a corresponding 1d boundary C = ∂S (a closed curve): ∮ ∂ S A ⋅ d ℓ = ∬ S ( ∇ × A ) ⋅ d S {\displaystyle \oint _{\partial S}\mathbf {A} \cdot d{\boldsymbol {\ell }}\ =\ \iint _{S}\left(\nabla \times \mathbf {A} \right)\cdot d\mathbf {S} } (Stokes' theorem) ∮ ∂ S ψ d ℓ = − ∬ S ∇ ψ × d S {\displaystyle 
\oint _{\partial S}\psi \,d{\boldsymbol {\ell }}\ =\ -\iint _{S}\nabla \psi \times d\mathbf {S} } ∮ ∂ S A × d ℓ = − ∬ S ( ∇ A − ( ∇ ⋅ A ) 1 ) ⋅ d S = − ∬ S ( d S × ∇ ) × A {\displaystyle \oint _{\partial S}\mathbf {A} \times d{\boldsymbol {\ell }}\ =\ -\iint _{S}\left(\nabla \mathbf {A} -(\nabla \cdot \mathbf {A} )\mathbf {1} \right)\cdot d\mathbf {S} \ =\ -\iint _{S}\left(d\mathbf {S} \times \nabla \right)\times \mathbf {A} } ∮ ∂ S A × ( B × d ℓ ) = ∬ S ( ∇ × ( A B T ) ) ⋅ d S + ∬ S ( ∇ ⋅ ( B A T ) ) × d S {\displaystyle \oint _{\partial S}\mathbf {A} \times (\mathbf {B} \times d{\boldsymbol {\ell }})\ =\ \iint _{S}\left(\nabla \times \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\right)\cdot d\mathbf {S} +\iint _{S}\left(\nabla \cdot \left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\right)\times d\mathbf {S} } ∮ ∂ S ( B ⋅ d ℓ ) A = ∬ S ( d S ⋅ [ ∇ × B − B × ∇ ] ) A {\displaystyle \oint _{\partial S}(\mathbf {B} \cdot d{\boldsymbol {\ell }})\mathbf {A} =\iint _{S}(d\mathbf {S} \cdot \left[\nabla \times \mathbf {B} -\mathbf {B} \times \nabla \right])\mathbf {A} } Integration around a closed curve in the clockwise sense is the negative of the same line integral in the counterclockwise sense (analogous to interchanging the limits in a definite integral): ==== Endpoint-curve integrals ==== In the following endpoint–curve integral theorems, P denotes a 1d open path with signed 0d boundary points q − p = ∂ P {\displaystyle \mathbf {q} -\mathbf {p} =\partial P} and integration along P is from p {\displaystyle \mathbf {p} } to q {\displaystyle \mathbf {q} } : ψ | ∂ P = ψ ( q ) − ψ ( p ) = ∫ P ∇ ψ ⋅ d ℓ {\displaystyle \psi |_{\partial P}=\psi (\mathbf {q} )-\psi (\mathbf {p} )=\int _{P}\nabla \psi \cdot d{\boldsymbol {\ell }}} (gradient theorem) A | ∂ P = A ( q ) − A ( p ) = ∫ P ( d ℓ ⋅ ∇ ) A {\displaystyle \mathbf {A} |_{\partial P}=\mathbf {A} (\mathbf {q} )-\mathbf {A} (\mathbf {p} )=\int _{P}\left(d{\boldsymbol {\ell }}\cdot \nabla \right)\mathbf {A} } A | ∂ P = A ( q ) − A ( p ) = ∫ P ( ∇ A ) ⋅ d ℓ + ∫ P ( ∇ × A ) × d ℓ {\displaystyle \mathbf {A} |_{\partial P}=\mathbf {A} (\mathbf {q} )-\mathbf {A} (\mathbf {p} )=\int _{P}\left(\nabla \mathbf {A} \right)\cdot d{\boldsymbol {\ell }}+\int _{P}\left(\nabla \times \mathbf {A} \right)\times d{\boldsymbol {\ell }}} ==== Tensor integrals ==== A tensor form of a vector integral theorem may be obtained by replacing the vector (or one of them) by a tensor, provided that the vector is first made to appear only as the right-most vector of each integrand. For example, Stokes' theorem becomes ∮ ∂ S d ℓ ⋅ T = ∬ S d S ⋅ ( ∇ × T ) {\displaystyle \oint _{\partial S}d{\boldsymbol {\ell }}\cdot \mathbf {T} \ =\ \iint _{S}d\mathbf {S} \cdot \left(\nabla \times \mathbf {T} \right)} . A scalar field may also be treated as a vector and replaced by a vector or tensor. For example, Green's first identity becomes ∂ V {\displaystyle \scriptstyle \partial V} ψ d S ⋅ ∇ A = ∭ V ( ψ ∇ 2 A + ∇ ψ ⋅ ∇ A ) d V {\displaystyle \psi \,d\mathbf {S} \cdot \nabla \!\mathbf {A} \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\mathbf {A} +\nabla \!\psi \cdot \nabla \!\mathbf {A} \right)\,dV} . Similar rules apply to algebraic and differentiation formulas. For algebraic formulas one may alternatively use the left-most vector position. 
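As a numerical sanity check of the divergence theorem listed above, the following Monte Carlo sketch (illustrative; the field A = (x³, y³, z³) and the sample count are arbitrary choices) compares the flux of A through the unit sphere with the volume integral of ∇·A over the unit ball; both equal 12π/5 ≈ 7.5398:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2_000_000

    def div_A(x, y, z):
        # A = (x**3, y**3, z**3), so div A = 3(x**2 + y**2 + z**2)
        return 3.0 * (x * x + y * y + z * z)

    # volume side: sample the bounding cube, keep the points inside the unit ball
    p = rng.uniform(-1.0, 1.0, size=(n, 3))
    inside = (p**2).sum(axis=1) <= 1.0
    volume_integral = 8.0 * div_A(*p[inside].T).sum() / n    # 8 = cube volume

    # surface side: uniform points on the unit sphere, where A . n = x**4 + y**4 + z**4
    q = rng.normal(size=(n, 3))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    flux = 4.0 * np.pi * (q**4).sum(axis=1).mean()

    print(volume_integral, flux, 12 * np.pi / 5)    # all three agree to ~3 decimals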
== See also == Comparison of vector algebra and geometric algebra Del in cylindrical and spherical coordinates – Mathematical gradient operator in certain coordinate systems Differentiation rules – Rules for computing derivatives of functions Exterior calculus identities Exterior derivative – Operation on differential forms List of limits Table of derivatives – Rules for computing derivatives of functions Vector algebra relations – Formulas about vectors in three-dimensional Euclidean space == References == == Further reading ==
Wikipedia:List of vector spaces in mathematics#0
This is a list of vector spaces in abstract mathematics, by Wikipedia page. Banach space Besov space Bochner space Dual space Euclidean space Fock space Fréchet space Hardy space Hilbert space Hölder space LF-space Lp space Minkowski space Montel space Morrey–Campanato space Orlicz space Riesz space Schwartz space Sobolev space Tsirelson space
Wikipedia:Lists of integrals#0
Integration is the basic operation in integral calculus. While differentiation has straightforward rules by which the derivative of a complicated function can be found by differentiating its simpler component functions, integration does not, so tables of known integrals are often useful. This page lists some of the most common antiderivatives. == Historical development of integrals == A compilation of a list of integrals (Integraltafeln) and techniques of integral calculus was published by the German mathematician Meier Hirsch (also spelled Meyer Hirsch) in 1810. These tables were republished in the United Kingdom in 1823. More extensive tables were compiled in 1858 by the Dutch mathematician David Bierens de Haan for his Tables d'intégrales définies, supplemented by Supplément aux tables d'intégrales définies in ca. 1864. A new edition was published in 1867 under the title Nouvelles tables d'intégrales définies. These tables, which contain mainly integrals of elementary functions, remained in use until the middle of the 20th century. They were then replaced by the much more extensive tables of Gradshteyn and Ryzhik. In Gradshteyn and Ryzhik, integrals originating from the book by Bierens de Haan are denoted by BI. Not all closed-form expressions have closed-form antiderivatives; this study forms the subject of differential Galois theory, which was initially developed by Joseph Liouville in the 1830s and 1840s, leading to Liouville's theorem, which classifies which expressions have closed-form antiderivatives. A simple example of a function without a closed-form antiderivative is e−x², whose antiderivative is (up to constants) the error function. Since 1968 the Risch algorithm has provided a way of determining indefinite integrals that can be expressed in terms of elementary functions, typically implemented in a computer algebra system. Integrals that cannot be expressed using elementary functions can be manipulated symbolically using general functions such as the Meijer G-function. == Lists of integrals == More detail may be found on the following pages for the lists of integrals: List of integrals of rational functions List of integrals of irrational functions List of integrals of trigonometric functions List of integrals of inverse trigonometric functions List of integrals of hyperbolic functions List of integrals of inverse hyperbolic functions List of integrals of exponential functions List of integrals of logarithmic functions List of integrals of Gaussian functions Gradshteyn, Ryzhik, Geronimus, Tseytlin, Jeffrey, Zwillinger, and Moll's (GR) Table of Integrals, Series, and Products contains a large collection of results. An even larger, multivolume table is the Integrals and Series by Prudnikov, Brychkov, and Marichev (with volumes 1–3 listing integrals and series of elementary and special functions, and volumes 4–5 being tables of Laplace transforms). More compact collections can be found in, e.g., Brychkov, Marichev, and Prudnikov's Tables of Indefinite Integrals, or as chapters in Zwillinger's CRC Standard Mathematical Tables and Formulae or Bronshtein and Semendyayev's Guide Book to Mathematics, Handbook of Mathematics, or Users' Guide to Mathematics, and other mathematical handbooks. Other useful resources include Abramowitz and Stegun and the Bateman Manuscript Project. Both works contain many identities concerning specific integrals, which are organized with the most relevant topic instead of being collected into a separate table. Two volumes of the Bateman Manuscript are specific to integral transforms.
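The distinction drawn above between elementary and non-elementary antiderivatives can be seen directly in a computer algebra system; a brief sympy sketch:

    import sympy as sp

    x = sp.symbols('x')

    # no elementary antiderivative exists; the answer involves the error function
    print(sp.integrate(sp.exp(-x**2), x))        # sqrt(pi)*erf(x)/2

    # a nearby integrand that is elementary integrates in closed form
    print(sp.integrate(x * sp.exp(-x**2), x))    # -exp(-x**2)/2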
There are several web sites which have tables of integrals and integrals on demand. Wolfram Alpha can show results, and for some simpler expressions, also the intermediate steps of the integration. Wolfram Research also operates another online service, the Mathematica Online Integrator. == Integrals of simple functions == C is used for an arbitrary constant of integration that can only be determined if something about the value of the integral at some point is known. Thus, each function has an infinite number of antiderivatives. These formulas only state in another form the assertions in the table of derivatives. === Integrals with a singularity === When there is a singularity in the function being integrated such that the antiderivative becomes undefined at some point (the singularity), then C does not need to be the same on both sides of the singularity. The forms below normally assume the Cauchy principal value around a singularity in the value of C, but this is not necessary in general. For instance, in ∫ 1 x d x = ln ⁡ | x | + C {\displaystyle \int {1 \over x}\,dx=\ln \left|x\right|+C} there is a singularity at 0 and the antiderivative becomes infinite there. If the integral above were to be used to compute a definite integral between −1 and 1, one would get the wrong answer 0. This however is the Cauchy principal value of the integral around the singularity. If the integration is done in the complex plane the result depends on the path around the origin, in this case the singularity contributes −iπ when using a path above the origin and iπ for a path below the origin. A function on the real line could use a completely different value of C on either side of the origin as in: ∫ 1 x d x = ln ⁡ | x | + { A if x > 0 ; B if x < 0. {\displaystyle \int {1 \over x}\,dx=\ln |x|+{\begin{cases}A&{\text{if }}x>0;\\B&{\text{if }}x<0.\end{cases}}} === Rational functions === ∫ a d x = a x + C {\displaystyle \int a\,dx=ax+C} The following function has a non-integrable singularity at 0 for n ≤ −1: ∫ x n d x = x n + 1 n + 1 + C (for n ≠ − 1 ) {\displaystyle \int x^{n}\,dx={\frac {x^{n+1}}{n+1}}+C\qquad {\text{(for }}n\neq -1{\text{)}}} (Cavalieri's quadrature formula) ∫ ( a x + b ) n d x = ( a x + b ) n + 1 a ( n + 1 ) + C (for n ≠ − 1 ) {\displaystyle \int (ax+b)^{n}\,dx={\frac {(ax+b)^{n+1}}{a(n+1)}}+C\qquad {\text{(for }}n\neq -1{\text{)}}} ∫ 1 x d x = ln ⁡ | x | + C {\displaystyle \int {1 \over x}\,dx=\ln \left|x\right|+C} More generally, ∫ 1 x d x = { ln ⁡ | x | + C − x < 0 ln ⁡ | x | + C + x > 0 {\displaystyle \int {1 \over x}\,dx={\begin{cases}\ln \left|x\right|+C^{-}&x<0\\\ln \left|x\right|+C^{+}&x>0\end{cases}}} ∫ c a x + b d x = c a ln ⁡ | a x + b | + C {\displaystyle \int {\frac {c}{ax+b}}\,dx={\frac {c}{a}}\ln \left|ax+b\right|+C} === Exponential functions === ∫ e a x d x = 1 a e a x + C {\displaystyle \int e^{ax}\,dx={\frac {1}{a}}e^{ax}+C} ∫ f ′ ( x ) e f ( x ) d x = e f ( x ) + C {\displaystyle \int f'(x)e^{f(x)}\,dx=e^{f(x)}+C} ∫ a x d x = a x ln ⁡ a + C {\displaystyle \int a^{x}\,dx={\frac {a^{x}}{\ln a}}+C} ∫ e x ( f ( x ) + f ′ ( x ) ) d x = e x f ( x ) + C {\displaystyle \int {e^{x}\left(f\left(x\right)+f'\left(x\right)\right)\,dx}=e^{x}f\left(x\right)+C} ∫ e x ( f ( x ) − ( − 1 ) n d n f ( x ) d x n ) d x = e x ∑ k = 1 n ( − 1 ) k − 1 d k − 1 f ( x ) d x k − 1 + C {\displaystyle \int {e^{x}\left(f\left(x\right)-\left(-1\right)^{n}{\frac {d^{n}f\left(x\right)}{dx^{n}}}\right)\,dx}=e^{x}\sum _{k=1}^{n}{\left(-1\right)^{k-1}{\frac {d^{k-1}f\left(x\right)}{dx^{k-1}}}}+C} (if n 
{\displaystyle n} is a positive integer) ∫ e − x ( f ( x ) − d n f ( x ) d x n ) d x = − e − x ∑ k = 1 n d k − 1 f ( x ) d x k − 1 + C {\displaystyle \int {e^{-x}\left(f\left(x\right)-{\frac {d^{n}f\left(x\right)}{dx^{n}}}\right)\,dx}=-e^{-x}\sum _{k=1}^{n}{\frac {d^{k-1}f\left(x\right)}{dx^{k-1}}}+C} (if n {\displaystyle n} is a positive integer) === Logarithms === ∫ ln ⁡ x d x = x ln ⁡ x − x + C = x ( ln ⁡ x − 1 ) + C {\displaystyle \int \ln x\,dx=x\ln x-x+C=x(\ln x-1)+C} ∫ log a ⁡ x d x = x log a ⁡ x − x ln ⁡ a + C = x ln ⁡ a ( ln ⁡ x − 1 ) + C {\displaystyle \int \log _{a}x\,dx=x\log _{a}x-{\frac {x}{\ln a}}+C={\frac {x}{\ln a}}(\ln x-1)+C} === Trigonometric functions === ∫ sin ⁡ x d x = − cos ⁡ x + C {\displaystyle \int \sin {x}\,dx=-\cos {x}+C} ∫ cos ⁡ x d x = sin ⁡ x + C {\displaystyle \int \cos {x}\,dx=\sin {x}+C} ∫ tan ⁡ x d x = ln ⁡ | sec ⁡ x | + C = − ln ⁡ | cos ⁡ x | + C {\displaystyle \int \tan {x}\,dx=\ln {\left|\sec {x}\right|}+C=-\ln {\left|\cos {x}\right|}+C} ∫ cot ⁡ x d x = − ln ⁡ | csc ⁡ x | + C = ln ⁡ | sin ⁡ x | + C {\displaystyle \int \cot {x}\,dx=-\ln {\left|\csc {x}\right|}+C=\ln {\left|\sin {x}\right|}+C} ∫ sec ⁡ x d x = ln ⁡ | sec ⁡ x + tan ⁡ x | + C = ln ⁡ | tan ⁡ ( x 2 + π 4 ) | + C {\displaystyle \int \sec {x}\,dx=\ln {\left|\sec {x}+\tan {x}\right|}+C=\ln \left|\tan \left({\dfrac {x}{2}}+{\dfrac {\pi }{4}}\right)\right|+C} (See Integral of the secant function. This result was a well-known conjecture in the 17th century.) ∫ csc ⁡ x d x = − ln ⁡ | csc ⁡ x + cot ⁡ x | + C = ln ⁡ | csc ⁡ x − cot ⁡ x | + C = ln ⁡ | tan ⁡ x 2 | + C {\displaystyle \int \csc {x}\,dx=-\ln {\left|\csc {x}+\cot {x}\right|}+C=\ln {\left|\csc {x}-\cot {x}\right|}+C=\ln {\left|\tan {\frac {x}{2}}\right|}+C} ∫ sec 2 ⁡ x d x = tan ⁡ x + C {\displaystyle \int \sec ^{2}x\,dx=\tan x+C} ∫ csc 2 ⁡ x d x = − cot ⁡ x + C {\displaystyle \int \csc ^{2}x\,dx=-\cot x+C} ∫ sec ⁡ x tan ⁡ x d x = sec ⁡ x + C {\displaystyle \int \sec {x}\,\tan {x}\,dx=\sec {x}+C} ∫ csc ⁡ x cot ⁡ x d x = − csc ⁡ x + C {\displaystyle \int \csc {x}\,\cot {x}\,dx=-\csc {x}+C} ∫ sin 2 ⁡ x d x = 1 2 ( x − sin ⁡ 2 x 2 ) + C = 1 2 ( x − sin ⁡ x cos ⁡ x ) + C {\displaystyle \int \sin ^{2}x\,dx={\frac {1}{2}}\left(x-{\frac {\sin 2x}{2}}\right)+C={\frac {1}{2}}(x-\sin x\cos x)+C} ∫ cos 2 ⁡ x d x = 1 2 ( x + sin ⁡ 2 x 2 ) + C = 1 2 ( x + sin ⁡ x cos ⁡ x ) + C {\displaystyle \int \cos ^{2}x\,dx={\frac {1}{2}}\left(x+{\frac {\sin 2x}{2}}\right)+C={\frac {1}{2}}(x+\sin x\cos x)+C} ∫ tan 2 ⁡ x d x = tan ⁡ x − x + C {\displaystyle \int \tan ^{2}x\,dx=\tan x-x+C} ∫ cot 2 ⁡ x d x = − cot ⁡ x − x + C {\displaystyle \int \cot ^{2}x\,dx=-\cot x-x+C} ∫ sec 3 ⁡ x d x = 1 2 ( sec ⁡ x tan ⁡ x + ln ⁡ | sec ⁡ x + tan ⁡ x | ) + C {\displaystyle \int \sec ^{3}x\,dx={\frac {1}{2}}(\sec x\tan x+\ln |\sec x+\tan x|)+C} (See integral of secant cubed.) 
∫ csc 3 ⁡ x d x = 1 2 ( − csc ⁡ x cot ⁡ x + ln ⁡ | csc ⁡ x − cot ⁡ x | ) + C = 1 2 ( ln ⁡ | tan ⁡ x 2 | − csc ⁡ x cot ⁡ x ) + C {\displaystyle \int \csc ^{3}x\,dx={\frac {1}{2}}(-\csc x\cot x+\ln |\csc x-\cot x|)+C={\frac {1}{2}}\left(\ln \left|\tan {\frac {x}{2}}\right|-\csc x\cot x\right)+C} ∫ sin n ⁡ x d x = − sin n − 1 ⁡ x cos ⁡ x n + n − 1 n ∫ sin n − 2 ⁡ x d x {\displaystyle \int \sin ^{n}x\,dx=-{\frac {\sin ^{n-1}{x}\cos {x}}{n}}+{\frac {n-1}{n}}\int \sin ^{n-2}{x}\,dx} ∫ cos n ⁡ x d x = cos n − 1 ⁡ x sin ⁡ x n + n − 1 n ∫ cos n − 2 ⁡ x d x {\displaystyle \int \cos ^{n}x\,dx={\frac {\cos ^{n-1}{x}\sin {x}}{n}}+{\frac {n-1}{n}}\int \cos ^{n-2}{x}\,dx} === Inverse trigonometric functions === ∫ arcsin ⁡ x d x = x arcsin ⁡ x + 1 − x 2 + C , for | x | ≤ 1 {\displaystyle \int \arcsin {x}\,dx=x\arcsin {x}+{\sqrt {1-x^{2}}}+C,{\text{ for }}\vert x\vert \leq 1} ∫ arccos ⁡ x d x = x arccos ⁡ x − 1 − x 2 + C , for | x | ≤ 1 {\displaystyle \int \arccos {x}\,dx=x\arccos {x}-{\sqrt {1-x^{2}}}+C,{\text{ for }}\vert x\vert \leq 1} ∫ arctan ⁡ x d x = x arctan ⁡ x − 1 2 ln ⁡ | 1 + x 2 | + C , for all real x {\displaystyle \int \arctan {x}\,dx=x\arctan {x}-{\frac {1}{2}}\ln {\vert 1+x^{2}\vert }+C,{\text{ for all real }}x} ∫ arccot ⁡ x d x = x arccot ⁡ x + 1 2 ln ⁡ | 1 + x 2 | + C , for all real x {\displaystyle \int \operatorname {arccot} {x}\,dx=x\operatorname {arccot} {x}+{\frac {1}{2}}\ln {\vert 1+x^{2}\vert }+C,{\text{ for all real }}x} ∫ arcsec ⁡ x d x = x arcsec ⁡ x − ln ⁡ | x ( 1 + 1 − x − 2 ) | + C , for | x | ≥ 1 {\displaystyle \int \operatorname {arcsec} {x}\,dx=x\operatorname {arcsec} {x}-\ln \left\vert x\,\left(1+{\sqrt {1-x^{-2}}}\,\right)\right\vert +C,{\text{ for }}\vert x\vert \geq 1} ∫ arccsc ⁡ x d x = x arccsc ⁡ x + ln ⁡ | x ( 1 + 1 − x − 2 ) | + C , for | x | ≥ 1 {\displaystyle \int \operatorname {arccsc} {x}\,dx=x\operatorname {arccsc} {x}+\ln \left\vert x\,\left(1+{\sqrt {1-x^{-2}}}\,\right)\right\vert +C,{\text{ for }}\vert x\vert \geq 1} === Hyperbolic functions === ∫ sinh ⁡ x d x = cosh ⁡ x + C {\displaystyle \int \sinh x\,dx=\cosh x+C} ∫ cosh ⁡ x d x = sinh ⁡ x + C {\displaystyle \int \cosh x\,dx=\sinh x+C} ∫ tanh ⁡ x d x = ln ( cosh ⁡ x ) + C {\displaystyle \int \tanh x\,dx=\ln \,(\cosh x)+C} ∫ coth ⁡ x d x = ln ⁡ | sinh ⁡ x | + C , for x ≠ 0 {\displaystyle \int \coth x\,dx=\ln |\sinh x|+C,{\text{ for }}x\neq 0} ∫ sech x d x = arctan ( sinh ⁡ x ) + C {\displaystyle \int \operatorname {sech} \,x\,dx=\arctan \,(\sinh x)+C} ∫ csch x d x = ln ⁡ | coth ⁡ x − csch ⁡ x | + C = ln ⁡ | tanh ⁡ x 2 | + C , for x ≠ 0 {\displaystyle \int \operatorname {csch} \,x\,dx=\ln |\operatorname {coth} x-\operatorname {csch} x|+C=\ln \left|\tanh {x \over 2}\right|+C,{\text{ for }}x\neq 0} ∫ sech 2 ⁡ x d x = tanh ⁡ x + C {\displaystyle \int \operatorname {sech} ^{2}x\,dx=\tanh x+C} ∫ csch 2 ⁡ x d x = − coth ⁡ x + C {\displaystyle \int \operatorname {csch} ^{2}x\,dx=-\operatorname {coth} x+C} ∫ sech ⁡ x tanh ⁡ x d x = − sech ⁡ x + C {\displaystyle \int \operatorname {sech} {x}\,\operatorname {tanh} {x}\,dx=-\operatorname {sech} {x}+C} ∫ csch ⁡ x coth ⁡ x d x = − csch ⁡ x + C {\displaystyle \int \operatorname {csch} {x}\,\operatorname {coth} {x}\,dx=-\operatorname {csch} {x}+C} === Inverse hyperbolic functions === ∫ arcsinh x d x = x arcsinh x − x 2 + 1 + C , for all real x {\displaystyle \int \operatorname {arcsinh} \,x\,dx=x\,\operatorname {arcsinh} \,x-{\sqrt {x^{2}+1}}+C,{\text{ for all real }}x} ∫ arccosh x d x = x arccosh x − x 2 − 1 + C , for x ≥ 1 {\displaystyle \int \operatorname {arccosh} 
\,x\,dx=x\,\operatorname {arccosh} \,x-{\sqrt {x^{2}-1}}+C,{\text{ for }}x\geq 1} ∫ arctanh x d x = x arctanh x + ln ⁡ ( 1 − x 2 ) 2 + C , for | x | < 1 {\displaystyle \int \operatorname {arctanh} \,x\,dx=x\,\operatorname {arctanh} \,x+{\frac {\ln \left(\,1-x^{2}\right)}{2}}+C,{\text{ for }}\vert x\vert <1} ∫ arccoth x d x = x arccoth x + ln ⁡ ( x 2 − 1 ) 2 + C , for | x | > 1 {\displaystyle \int \operatorname {arccoth} \,x\,dx=x\,\operatorname {arccoth} \,x+{\frac {\ln \left(x^{2}-1\right)}{2}}+C,{\text{ for }}\vert x\vert >1} ∫ arcsech x d x = x arcsech x + arcsin ⁡ x + C , for 0 < x ≤ 1 {\displaystyle \int \operatorname {arcsech} \,x\,dx=x\,\operatorname {arcsech} \,x+\arcsin x+C,{\text{ for }}0<x\leq 1} ∫ arccsch x d x = x arccsch x + | arcsinh x | + C , for x ≠ 0 {\displaystyle \int \operatorname {arccsch} \,x\,dx=x\,\operatorname {arccsch} \,x+\vert \operatorname {arcsinh} \,x\vert +C,{\text{ for }}x\neq 0} === Products of functions proportional to their second derivatives === ∫ cos ⁡ a x e b x d x = e b x a 2 + b 2 ( a sin ⁡ a x + b cos ⁡ a x ) + C {\displaystyle \int \cos ax\,e^{bx}\,dx={\frac {e^{bx}}{a^{2}+b^{2}}}\left(a\sin ax+b\cos ax\right)+C} ∫ sin ⁡ a x e b x d x = e b x a 2 + b 2 ( b sin ⁡ a x − a cos ⁡ a x ) + C {\displaystyle \int \sin ax\,e^{bx}\,dx={\frac {e^{bx}}{a^{2}+b^{2}}}\left(b\sin ax-a\cos ax\right)+C} ∫ cos ⁡ a x cosh ⁡ b x d x = 1 a 2 + b 2 ( a sin ⁡ a x cosh ⁡ b x + b cos ⁡ a x sinh ⁡ b x ) + C {\displaystyle \int \cos ax\,\cosh bx\,dx={\frac {1}{a^{2}+b^{2}}}\left(a\sin ax\,\cosh bx+b\cos ax\,\sinh bx\right)+C} ∫ sin ⁡ a x cosh ⁡ b x d x = 1 a 2 + b 2 ( b sin ⁡ a x sinh ⁡ b x − a cos ⁡ a x cosh ⁡ b x ) + C {\displaystyle \int \sin ax\,\cosh bx\,dx={\frac {1}{a^{2}+b^{2}}}\left(b\sin ax\,\sinh bx-a\cos ax\,\cosh bx\right)+C} === Absolute-value functions === Let f be a continuous function that has at most one zero. If f has a zero, let g be the unique antiderivative of f that is zero at the root of f; otherwise, let g be any antiderivative of f. Then ∫ | f ( x ) | d x = sgn ⁡ ( f ( x ) ) g ( x ) + C , {\displaystyle \int \left|f(x)\right|\,dx=\operatorname {sgn}(f(x))g(x)+C,} where sgn(x) is the sign function, which takes the values −1, 0, 1 when x is respectively negative, zero or positive. This can be proved by computing the derivative of the right-hand side of the formula, taking into account that the condition on g is here to ensure the continuity of the integral. This gives the following formulas (where a ≠ 0), which are valid over any interval where f is continuous (over larger intervals, the constant C must be replaced by a piecewise constant function): ∫ | ( a x + b ) n | d x = sgn ⁡ ( a x + b ) ( a x + b ) n + 1 a ( n + 1 ) + C {\displaystyle \int \left|(ax+b)^{n}\right|\,dx=\operatorname {sgn}(ax+b){(ax+b)^{n+1} \over a(n+1)}+C} when n is odd, and n ≠ − 1 {\displaystyle n\neq -1} . ∫ | tan ⁡ a x | d x = − 1 a sgn ⁡ ( tan ⁡ a x ) ln ⁡ ( | cos ⁡ a x | ) + C {\displaystyle \int \left|\tan {ax}\right|\,dx=-{\frac {1}{a}}\operatorname {sgn}(\tan {ax})\ln(\left|\cos {ax}\right|)+C} when a x ∈ ( n π − π 2 , n π + π 2 ) {\textstyle ax\in \left(n\pi -{\frac {\pi }{2}},n\pi +{\frac {\pi }{2}}\right)} for some integer n. ∫ | csc ⁡ a x | d x = − 1 a sgn ⁡ ( csc ⁡ a x ) ln ⁡ ( | csc ⁡ a x + cot ⁡ a x | ) + C {\displaystyle \int \left|\csc {ax}\right|\,dx=-{\frac {1}{a}}\operatorname {sgn}(\csc {ax})\ln(\left|\csc {ax}+\cot {ax}\right|)+C} when a x ∈ ( n π , n π + π ) {\displaystyle ax\in \left(n\pi ,n\pi +\pi \right)} for some integer n.
∫ | sec ⁡ a x | d x = 1 a sgn ⁡ ( sec ⁡ a x ) ln ⁡ ( | sec ⁡ a x + tan ⁡ a x | ) + C {\displaystyle \int \left|\sec {ax}\right|\,dx={\frac {1}{a}}\operatorname {sgn}(\sec {ax})\ln(\left|\sec {ax}+\tan {ax}\right|)+C} when a x ∈ ( n π − π 2 , n π + π 2 ) {\textstyle ax\in \left(n\pi -{\frac {\pi }{2}},n\pi +{\frac {\pi }{2}}\right)} for some integer n. ∫ | cot ⁡ a x | d x = 1 a sgn ⁡ ( cot ⁡ a x ) ln ⁡ ( | sin ⁡ a x | ) + C {\displaystyle \int \left|\cot {ax}\right|\,dx={\frac {1}{a}}\operatorname {sgn}(\cot {ax})\ln(\left|\sin {ax}\right|)+C} when a x ∈ ( n π , n π + π ) {\displaystyle ax\in \left(n\pi ,n\pi +\pi \right)} for some integer n. If the function f does not have any continuous antiderivative which takes the value zero at the zeros of f (this is the case for the sine and the cosine functions), then sgn(f(x)) ∫ f(x) dx is an antiderivative of f on every interval on which f is not zero, but may be discontinuous at the points where f(x) = 0. To obtain a continuous antiderivative, one thus has to add a well-chosen step function. If we also use the fact that the absolute values of sine and cosine are periodic with period π, then we get: ∫ | sin ⁡ a x | d x = 2 a ⌊ a x π ⌋ − 1 a cos ⁡ ( a x − ⌊ a x π ⌋ π ) + C {\displaystyle \int \left|\sin {ax}\right|\,dx={2 \over a}\left\lfloor {\frac {ax}{\pi }}\right\rfloor -{1 \over a}\cos {\left(ax-\left\lfloor {\frac {ax}{\pi }}\right\rfloor \pi \right)}+C} ∫ | cos ⁡ a x | d x = 2 a ⌊ a x π + 1 2 ⌋ + 1 a sin ⁡ ( a x − ⌊ a x π + 1 2 ⌋ π ) + C {\displaystyle \int \left|\cos {ax}\right|\,dx={2 \over a}\left\lfloor {\frac {ax}{\pi }}+{\frac {1}{2}}\right\rfloor +{1 \over a}\sin {\left(ax-\left\lfloor {\frac {ax}{\pi }}+{\frac {1}{2}}\right\rfloor \pi \right)}+C} === Special functions === Ci, Si: Trigonometric integrals, Ei: Exponential integral, li: Logarithmic integral function, erf: Error function ∫ Ci ⁡ ( x ) d x = x Ci ⁡ ( x ) − sin ⁡ x {\displaystyle \int \operatorname {Ci} (x)\,dx=x\operatorname {Ci} (x)-\sin x} ∫ Si ⁡ ( x ) d x = x Si ⁡ ( x ) + cos ⁡ x {\displaystyle \int \operatorname {Si} (x)\,dx=x\operatorname {Si} (x)+\cos x} ∫ Ei ⁡ ( x ) d x = x Ei ⁡ ( x ) − e x {\displaystyle \int \operatorname {Ei} (x)\,dx=x\operatorname {Ei} (x)-e^{x}} ∫ li ⁡ ( x ) d x = x li ⁡ ( x ) − Ei ⁡ ( 2 ln ⁡ x ) {\displaystyle \int \operatorname {li} (x)\,dx=x\operatorname {li} (x)-\operatorname {Ei} (2\ln x)} ∫ li ⁡ ( x ) x d x = ln ⁡ x li ⁡ ( x ) − x {\displaystyle \int {\frac {\operatorname {li} (x)}{x}}\,dx=\ln x\,\operatorname {li} (x)-x} ∫ erf ⁡ ( x ) d x = e − x 2 π + x erf ⁡ ( x ) {\displaystyle \int \operatorname {erf} (x)\,dx={\frac {e^{-x^{2}}}{\sqrt {\pi }}}+x\operatorname {erf} (x)} == Definite integrals lacking closed-form antiderivatives == There are some functions whose antiderivatives cannot be expressed in closed form. However, the values of the definite integrals of some of these functions over some common intervals can be calculated. A few useful integrals are given below. ∫ 0 ∞ x e − x d x = 1 2 π {\displaystyle \int _{0}^{\infty }{\sqrt {x}}\,e^{-x}\,dx={\frac {1}{2}}{\sqrt {\pi }}} (see also Gamma function) ∫ 0 ∞ e − a x 2 d x = 1 2 π a {\displaystyle \int _{0}^{\infty }e^{-ax^{2}}\,dx={\frac {1}{2}}{\sqrt {\frac {\pi }{a}}}} for a > 0 (the Gaussian integral) ∫ 0 ∞ x 2 e − a x 2 d x = 1 4 π a 3 {\displaystyle \int _{0}^{\infty }{x^{2}e^{-ax^{2}}\,dx}={\frac {1}{4}}{\sqrt {\frac {\pi }{a^{3}}}}} for a > 0 ∫ 0 ∞ x 2 n e − a x 2 d x = 2 n − 1 2 a ∫ 0 ∞ x 2 ( n − 1 ) e − a x 2 d x = ( 2 n − 1 ) ! ! 2 n + 1 π a 2 n + 1 = ( 2 n ) ! n !
2 2 n + 1 π a 2 n + 1 {\displaystyle \int _{0}^{\infty }x^{2n}e^{-ax^{2}}\,dx={\frac {2n-1}{2a}}\int _{0}^{\infty }x^{2(n-1)}e^{-ax^{2}}\,dx={\frac {(2n-1)!!}{2^{n+1}}}{\sqrt {\frac {\pi }{a^{2n+1}}}}={\frac {(2n)!}{n!2^{2n+1}}}{\sqrt {\frac {\pi }{a^{2n+1}}}}} for a > 0, n is a positive integer and !! is the double factorial. ∫ 0 ∞ x 3 e − a x 2 d x = 1 2 a 2 {\displaystyle \int _{0}^{\infty }{x^{3}e^{-ax^{2}}\,dx}={\frac {1}{2a^{2}}}} when a > 0 ∫ 0 ∞ x 2 n + 1 e − a x 2 d x = n a ∫ 0 ∞ x 2 n − 1 e − a x 2 d x = n ! 2 a n + 1 {\displaystyle \int _{0}^{\infty }x^{2n+1}e^{-ax^{2}}\,dx={\frac {n}{a}}\int _{0}^{\infty }x^{2n-1}e^{-ax^{2}}\,dx={\frac {n!}{2a^{n+1}}}} for a > 0, n = 0, 1, 2, .... ∫ 0 ∞ x e x − 1 d x = π 2 6 {\displaystyle \int _{0}^{\infty }{\frac {x}{e^{x}-1}}\,dx={\frac {\pi ^{2}}{6}}} (see also Bernoulli number) ∫ 0 ∞ x 2 e x − 1 d x = 2 ζ ( 3 ) ≈ 2.40 {\displaystyle \int _{0}^{\infty }{\frac {x^{2}}{e^{x}-1}}\,dx=2\zeta (3)\approx 2.40} ∫ 0 ∞ x 3 e x − 1 d x = π 4 15 {\displaystyle \int _{0}^{\infty }{\frac {x^{3}}{e^{x}-1}}\,dx={\frac {\pi ^{4}}{15}}} ∫ 0 ∞ sin ⁡ x x d x = π 2 {\displaystyle \int _{0}^{\infty }{\frac {\sin {x}}{x}}\,dx={\frac {\pi }{2}}} (see sinc function and the Dirichlet integral) ∫ 0 ∞ sin 2 ⁡ x x 2 d x = π 2 {\displaystyle \int _{0}^{\infty }{\frac {\sin ^{2}{x}}{x^{2}}}\,dx={\frac {\pi }{2}}} ∫ 0 π 2 sin n ⁡ x d x = ∫ 0 π 2 cos n ⁡ x d x = ( n − 1 ) ! ! n ! ! × { 1 if n is odd π 2 if n is even. {\displaystyle \int _{0}^{\frac {\pi }{2}}\sin ^{n}x\,dx=\int _{0}^{\frac {\pi }{2}}\cos ^{n}x\,dx={\frac {(n-1)!!}{n!!}}\times {\begin{cases}1&{\text{if }}n{\text{ is odd}}\\{\frac {\pi }{2}}&{\text{if }}n{\text{ is even.}}\end{cases}}} (if n is a positive integer and !! is the double factorial). ∫ − π π cos ⁡ ( α x ) cos n ⁡ ( β x ) d x = { 2 π 2 n ( n m ) | α | = | β ( 2 m − n ) | 0 otherwise {\displaystyle \int _{-\pi }^{\pi }\cos(\alpha x)\cos ^{n}(\beta x)dx={\begin{cases}{\frac {2\pi }{2^{n}}}{\binom {n}{m}}&|\alpha |=|\beta (2m-n)|\\0&{\text{otherwise}}\end{cases}}} (for α, β, m, n integers with β ≠ 0 and m, n ≥ 0, see also Binomial coefficient) ∫ − t t sin m ⁡ ( α x ) cos n ⁡ ( β x ) d x = 0 {\displaystyle \int _{-t}^{t}\sin ^{m}(\alpha x)\cos ^{n}(\beta x)dx=0} (for α, β real, n a non-negative integer, and m an odd, positive integer; since the integrand is odd) ∫ − π π sin ⁡ ( α x ) sin n ⁡ ( β x ) d x = { ( − 1 ) ( n + 1 2 ) ( − 1 ) m 2 π 2 n ( n m ) n odd , α = β ( 2 m − n ) 0 otherwise {\displaystyle \int _{-\pi }^{\pi }\sin(\alpha x)\sin ^{n}(\beta x)dx={\begin{cases}(-1)^{\left({\frac {n+1}{2}}\right)}(-1)^{m}{\frac {2\pi }{2^{n}}}{\binom {n}{m}}&n{\text{ odd}},\ \alpha =\beta (2m-n)\\0&{\text{otherwise}}\end{cases}}} (for α, β, m, n integers with β ≠ 0 and m, n ≥ 0, see also Binomial coefficient) ∫ − π π cos ⁡ ( α x ) sin n ⁡ ( β x ) d x = { ( − 1 ) ( n 2 ) ( − 1 ) m 2 π 2 n ( n m ) n even , | α | = | β ( 2 m − n ) | 0 otherwise {\displaystyle \int _{-\pi }^{\pi }\cos(\alpha x)\sin ^{n}(\beta x)dx={\begin{cases}(-1)^{\left({\frac {n}{2}}\right)}(-1)^{m}{\frac {2\pi }{2^{n}}}{\binom {n}{m}}&n{\text{ even}},\ |\alpha |=|\beta (2m-n)|\\0&{\text{otherwise}}\end{cases}}} (for α, β, m, n integers with β ≠ 0 and m, n ≥ 0, see also Binomial coefficient) ∫ − ∞ ∞ e − ( a x 2 + b x + c ) d x = π a exp ⁡ [ b 2 − 4 a c 4 a ] {\displaystyle \int _{-\infty }^{\infty }e^{-(ax^{2}+bx+c)}\,dx={\sqrt {\frac {\pi }{a}}}\exp \left[{\frac {b^{2}-4ac}{4a}}\right]} (where exp[u] is the exponential function eu, and a > 0.) 
∫ 0 ∞ x z − 1 e − x d x = Γ ( z ) {\displaystyle \int _{0}^{\infty }x^{z-1}\,e^{-x}\,dx=\Gamma (z)} (where Γ ( z ) {\displaystyle \Gamma (z)} is the Gamma function) ∫ 0 1 ( ln ⁡ 1 x ) p d x = Γ ( p + 1 ) {\displaystyle \int _{0}^{1}\left(\ln {\frac {1}{x}}\right)^{p}\,dx=\Gamma (p+1)} ∫ 0 1 x α − 1 ( 1 − x ) β − 1 d x = Γ ( α ) Γ ( β ) Γ ( α + β ) {\displaystyle \int _{0}^{1}x^{\alpha -1}(1-x)^{\beta -1}dx={\frac {\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}}} (for Re(α) > 0 and Re(β) > 0, see Beta function) ∫ 0 2 π e x cos ⁡ θ d θ = 2 π I 0 ( x ) {\displaystyle \int _{0}^{2\pi }e^{x\cos \theta }d\theta =2\pi I_{0}(x)} (where I0(x) is the modified Bessel function of the first kind) ∫ 0 2 π e x cos ⁡ θ + y sin ⁡ θ d θ = 2 π I 0 ( x 2 + y 2 ) {\displaystyle \int _{0}^{2\pi }e^{x\cos \theta +y\sin \theta }d\theta =2\pi I_{0}\left({\sqrt {x^{2}+y^{2}}}\right)} ∫ − ∞ ∞ ( 1 + x 2 ν ) − ν + 1 2 d x = ν π Γ ( ν 2 ) Γ ( ν + 1 2 ) {\displaystyle \int _{-\infty }^{\infty }\left(1+{\frac {x^{2}}{\nu }}\right)^{-{\frac {\nu +1}{2}}}\,dx={\frac {{\sqrt {\nu \pi }}\ \Gamma \left({\frac {\nu }{2}}\right)}{\Gamma \left({\frac {\nu +1}{2}}\right)}}} (for ν > 0, this is related to the probability density function of Student's t-distribution) If the function f has bounded variation on the interval [a,b], then the method of exhaustion provides a formula for the integral: ∫ a b f ( x ) d x = ( b − a ) ∑ n = 1 ∞ ∑ m = 1 2 n − 1 ( − 1 ) m + 1 2 − n f ( a + m ( b − a ) 2 − n ) . {\displaystyle \int _{a}^{b}{f(x)\,dx}=(b-a)\sum \limits _{n=1}^{\infty }{\sum \limits _{m=1}^{2^{n}-1}{\left({-1}\right)^{m+1}}}2^{-n}f(a+m\left({b-a}\right)2^{-n}).} The "sophomore's dream": ∫ 0 1 x − x d x = ∑ n = 1 ∞ n − n ( = 1.29128 59970 6266 … ) ∫ 0 1 x x d x = − ∑ n = 1 ∞ ( − n ) − n ( = 0.78343 05107 1213 … ) {\displaystyle {\begin{aligned}\int _{0}^{1}x^{-x}\,dx&=\sum _{n=1}^{\infty }n^{-n}&&(=1.29128\,59970\,6266\dots )\\[6pt]\int _{0}^{1}x^{x}\,dx&=-\sum _{n=1}^{\infty }(-n)^{-n}&&(=0.78343\,05107\,1213\dots )\end{aligned}}} attributed to Johann Bernoulli. == See also == Differentiation rules – Rules for computing derivatives of functions Incomplete gamma function – Types of special mathematical functions Indefinite sum – the inverse of a finite difference Integration using Euler's formula – Use of complex numbers to evaluate integrals Liouville's theorem (differential algebra) – Says when antiderivatives of elementary functions can be expressed as elementary functions List of limits List of mathematical identities List of mathematical series Nonelementary integral – Integrals not expressible in closed-form from elementary functions Symbolic integration – Computation of antiderivatives == References == == Further reading == Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. Bronstein, Ilja Nikolaevič; Semendjajew, Konstantin Adolfovič (1987) [1945]. Grosche, Günter; Ziegler, Viktor; Ziegler, Dorothea (eds.). Taschenbuch der Mathematik (in German). Vol. 1. Translated by Ziegler, Viktor. Weiß, Jürgen (23 ed.).
Thun and Frankfurt am Main: Verlag Harri Deutsch (and B. G. Teubner Verlagsgesellschaft, Leipzig). ISBN 3-87144-492-8. Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich; Jeffrey, Alan (2015) [October 2014]. Zwillinger, Daniel; Moll, Victor Hugo (eds.). Table of Integrals, Series, and Products. Translated by Scripta Technica, Inc. (8 ed.). Academic Press, Inc. ISBN 978-0-12-384933-5. LCCN 2014010276. (Several previous editions as well.) Prudnikov, Anatolii Platonovich (Прудников, Анатолий Платонович); Brychkov, Yuri A. (Брычков, Ю. А.); Marichev, Oleg Igorevich (Маричев, Олег Игоревич) (1988–1992) [1981−1986 (Russian)]. Integrals and Series. Vol. 1–5. Translated by Queen, N. M. (1 ed.). (Nauka) Gordon & Breach Science Publishers/CRC Press. ISBN 2-88124-097-6. Second revised edition (Russian), volume 1–3, Fiziko-Matematicheskaya Literatura, 2003. Yuri A. Brychkov (Ю. А. Брычков), Handbook of Special Functions: Derivatives, Integrals, Series and Other Formulas. Russian edition, Fiziko-Matematicheskaya Literatura, 2006. English edition, Chapman & Hall/CRC Press, 2008, ISBN 1-58488-956-X / 9781584889564. Daniel Zwillinger. CRC Standard Mathematical Tables and Formulae, 31st edition. Chapman & Hall/CRC Press, 2002. ISBN 1-58488-291-3. (Many earlier editions as well.) Meyer Hirsch, Integraltafeln oder Sammlung von Integralformeln (Duncker und Humblot, Berlin, 1810) Meyer Hirsch, Integral Tables Or A Collection of Integral Formulae (Baynes and son, London, 1823) [English translation of Integraltafeln] David Bierens de Haan, Nouvelles Tables d'Intégrales définies (Engels, Leiden, 1862) Benjamin O. Peirce, A short table of integrals - revised edition (Ginn & co., Boston, 1899) == External links == === Tables of integrals === Paul's Online Math Notes A. Dieckmann, Table of Integrals (Elliptic Functions, Square Roots, Inverse Tangents and More Exotic Functions): Indefinite Integrals Definite Integrals Math Major: A Table of Integrals O'Brien, Francis J. Jr. "500 Integrals of Elementary and Special Functions". Derived integrals of exponential, logarithmic functions and special functions. Rule-based Integration Precisely defined indefinite integration rules covering a wide class of integrands Mathar, Richard J. (2012). "Yet another table of integrals". arXiv:1207.5845 [math.CA]. === Derivations === Victor Hugo Moll, The Integrals in Gradshteyn and Ryzhik === Online service === Integration examples for Wolfram Alpha === Open source programs === wxMaxima GUI for symbolic and numeric resolution of many mathematical problems === Videos === The Single Most Overpowered Integration Technique in Existence. YouTube video by Flammable Maths on symmetries
Wikipedia:Little Astronomy#0
Little Astronomy (Greek: Μικρὸς Ἀστρονομούμενος Mikrós Astronomoúmenos) is a collection of minor works in Ancient Greek mathematics and astronomy dating from the 4th to 2nd century BCE that were probably used as an astronomical curriculum starting around the 2nd century CE. In the astronomy of the medieval Islamic world, with a few additions, the collection became known as the Middle Books (Arabic: كتاب المتوسطات Kitāb al-mutawassiṭāt), mathematical preparation for Claudius Ptolemy's Almagest, intended for students who had already studied Euclid's Elements. == Works in the collection == The works contained in the collection are: Spherics by Theodosius of Bithynia: On spherical geometry, in the style of the Elements. On the Moving Sphere by Autolycus of Pitane: On the movements of points and arcs on a sphere as it rotates on its axis. Optics by Euclid: On various effects involving propagation of light, including shadows, parallax, and perspective. Phaenomena by Euclid: A treatise in 18 propositions, each dealing with important arcs on the celestial sphere. On Habitations by Theodosius: Description of the appearance of the sky as seen from different places on earth. On Days and Nights by Theodosius: A treatise in 31 propositions on the lengths of days and nights at different times of the year. On the Sizes and Distances by Aristarchus of Samos: On the size of the Sun and Moon in the sky. On Risings and Settings by Autolycus: On the relationship between the rising and setting of stars throughout the year. On Ascensions by Hypsicles: A treatise on arithmetic progressions used to calculate approximate times for the signs of the Zodiac to rise above the horizon. In Arabic translation as the Middle Books, additional works, also originally written in Ancient Greek, were often included: Spherics by Menelaus of Alexandria: A treatise on the geometry of spherical triangles, which only survives in Arabic translation Data by Euclid Various works by or attributed to Archimedes: On the Sphere and Cylinder, On the Measurement of the Circle, Book of Lemmas Although these works are all generally found together in numerous medieval Byzantine and Arabic manuscripts, it is unclear whether this specific set of works was originally intentionally compiled together as a collection. All of the works are elementary treatises that would have been useful in a classroom setting, which increased their chance of survival through continuous use by students, and may have resulted in several of them being gathered together multiple different times independently. The earliest known author to mention the existence of a discrete "Little Astronomy" collection by name is Pappus of Alexandria, in the 4th century CE, who devotes book VI of his Collection to a commentary on selected works by Theodosius, Menelaus, Aristarchus, Euclid, and Autolycus. The oldest manuscript in which all of the extant Greek works are preserved together is Codex Vaticanus Graecus 204, which dates from the 9th or 10th century CE. == Notes == == References == Evans, James (1998). The History & Practice of Ancient Astronomy. Oxford University Press. "The Little Astronomy", pp. 89–91. ISBN 0-19-509539-1. Little, John B. (2025). Book VI of the Mathematical Collection of Pappus of Alexandria: Comprising solutions of difficulties in the "Little Astronomy". Holy Cross Bookshelf. Roughan, Christine (2023). The Little Astronomy and Middle Books between the 2nd and 13th Centuries CE: Transmissions of Astronomical Curricula (PhD thesis). New York University. 
== Further reading == Turner, AW (1936). "Greek Astronomers During the Fourth Century B. C." Popular Astronomy. 44: 180–187. Bibcode:1936PA.....44..180T. Turner, AW (1936). "Greek Astronomers During the Third Century B. C." Popular Astronomy. 44: 313–319. Bibcode:1936PA.....44..313T. Turner, AW (1937). "Greek Astronomers During the Second and First Centuries B. C." Popular Astronomy. 45: 81–85. Ragep, F. Jamil. "Astronomy". Encyclopaedia of Islam. Brill. doi:10.1163/1573-3912_ei3_COM_22652. == External links == Codex Vaticanus Graecus 204 at the Digital Vatican Library MS Or 45 (Alternate viewer) at the Columbia University Rare Books and Manuscripts Library MS Or 306 at Columbia (Alternate viewer)
Wikipedia:Littlewood's 4/3 inequality#0
In mathematical analysis, Littlewood's 4/3 inequality, named after John Edensor Littlewood, is an inequality that holds for every complex-valued bilinear form defined on c 0 {\displaystyle c_{0}} , the Banach space of scalar sequences that converge to zero. Precisely, let B : c 0 × c 0 → C {\displaystyle B:c_{0}\times c_{0}\to \mathbb {C} } or R {\displaystyle \mathbb {R} } be a bilinear form. Then the following holds: ( ∑ i , j = 1 ∞ | B ( e i , e j ) | 4 / 3 ) 3 / 4 ≤ 2 ‖ B ‖ , {\displaystyle \left(\sum _{i,j=1}^{\infty }|B(e_{i},e_{j})|^{4/3}\right)^{3/4}\leq {\sqrt {2}}\|B\|,} where ‖ B ‖ = sup { | B ( x 1 , x 2 ) | : ‖ x i ‖ ∞ ≤ 1 } . {\displaystyle \|B\|=\sup\{|B(x_{1},x_{2})|:\|x_{i}\|_{\infty }\leq 1\}.} The exponent 4/3 is optimal, i.e., it cannot be improved by a smaller exponent. It is also known that for real scalars the aforementioned constant is sharp. == Generalizations == === Bohnenblust–Hille inequality === The Bohnenblust–Hille inequality is a multilinear extension of Littlewood's inequality, stating that for every m {\displaystyle m} -linear mapping M : c 0 × ⋯ × c 0 → C {\displaystyle M:c_{0}\times \cdots \times c_{0}\to \mathbb {C} } the following holds: ( ∑ i 1 , … , i m = 1 ∞ | M ( e i 1 , … , e i m ) | 2 m / ( m + 1 ) ) ( m + 1 ) / ( 2 m ) ≤ 2 ( m − 1 ) / 2 ‖ M ‖ , {\displaystyle \left(\sum _{i_{1},\ldots ,i_{m}=1}^{\infty }|M(e_{i_{1}},\ldots ,e_{i_{m}})|^{2m/(m+1)}\right)^{(m+1)/(2m)}\leq 2^{(m-1)/2}\|M\|,} == See also == Grothendieck inequality == References ==
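As a concrete illustration of the statement above: for a bilinear form induced by a finite real matrix the inequality is easy to test numerically, since B is linear in each argument and its supremum over the unit ball of ℓ∞ is therefore attained at ±1 sign vectors. A minimal Python sketch (an illustration under these assumptions, not taken from the literature; all names are ours):

```python
import itertools
import random

# Littlewood's 4/3 inequality for B(x, y) = sum_ij b[i][j] * x_i * y_j.
n = 3
b = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

lhs = sum(abs(b[i][j]) ** (4 / 3) for i in range(n) for j in range(n)) ** (3 / 4)

# ||B||: for real scalars the supremum over the unit ball of l-infinity
# is attained at sign vectors, so a brute-force maximum over them suffices.
norm = max(
    abs(sum(x[i] * b[i][j] * y[j] for i in range(n) for j in range(n)))
    for x in itertools.product((-1, 1), repeat=n)
    for y in itertools.product((-1, 1), repeat=n)
)

assert lhs <= 2 ** 0.5 * norm  # the inequality, with constant sqrt(2)
print(f"{lhs:.4f} <= sqrt(2) * {norm:.4f}")
```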
Wikipedia:Littlewood–Richardson rule#0
In mathematics, the Littlewood–Richardson rule is a combinatorial description of the coefficients that arise when decomposing a product of two Schur functions as a linear combination of other Schur functions. These coefficients are natural numbers, which the Littlewood–Richardson rule describes as counting certain skew tableaux. They occur in many other mathematical contexts, for instance as multiplicity in the decomposition of tensor products of finite-dimensional representations of general linear groups, or in the decomposition of certain induced representations in the representation theory of the symmetric group, or in the area of algebraic combinatorics dealing with Young tableaux and symmetric polynomials. Littlewood–Richardson coefficients depend on three partitions, say λ , μ , ν {\displaystyle \lambda ,\mu ,\nu } , of which λ {\displaystyle \lambda } and μ {\displaystyle \mu } describe the Schur functions being multiplied, and ν {\displaystyle \nu } gives the Schur function of which this is the coefficient in the linear combination; in other words they are the coefficients c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} such that s λ s μ = ∑ ν c λ , μ ν s ν . {\displaystyle s_{\lambda }s_{\mu }=\sum _{\nu }c_{\lambda ,\mu }^{\nu }s_{\nu }.} The Littlewood–Richardson rule states that c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} is equal to the number of Littlewood–Richardson tableaux of skew shape ν / λ {\displaystyle \nu /\lambda } and of weight μ {\displaystyle \mu } . == History == The Littlewood–Richardson rule was first stated by D. E. Littlewood and A. R. Richardson (1934, theorem III p. 119) but though they claimed it as a theorem they only proved it in some fairly simple special cases. Robinson (1938) claimed to complete their proof, but his argument had gaps, though it was so obscurely written that these gaps were not noticed for some time, and his argument is reproduced in the book (Littlewood 1950). Some of the gaps were later filled by Macdonald (1995). The first rigorous proofs of the rule were given four decades after it was found, by Schützenberger (1977) and Thomas (1974), after the necessary combinatorial theory was developed by C. Schensted (1961), Schützenberger (1963), and Knuth (1970) in their work on the Robinson–Schensted correspondence. There are now several short proofs of the rule, such as (Gasharov 1998), and (Stembridge 2002) using Bender-Knuth involutions. Littelmann (1994) used the Littelmann path model to generalize the Littlewood–Richardson rule to other semisimple Lie groups. The Littlewood–Richardson rule is notorious for the number of errors that appeared prior to its complete, published proof. Several published attempts to prove it are incomplete, and it is particularly difficult to avoid errors when doing hand calculations with it: even the original example in D. E. Littlewood and A. R. Richardson (1934) contains an error. === Littlewood–Richardson tableaux === A Littlewood–Richardson tableau is a skew semistandard tableau with the additional property that the sequence obtained by concatenating its reversed rows is a lattice word (or lattice permutation), which means that in every initial part of the sequence any number i {\displaystyle i} occurs at least as often as the number i + 1 {\displaystyle i+1} . Another equivalent (though not quite obviously so) characterization is that the tableau itself, and any tableau obtained from it by removing some number of its leftmost columns, has a weakly decreasing weight. 
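The lattice-word condition is mechanical to verify. The following short Python sketch (illustrative only; the helper names reading_word and is_lattice are ours, not standard terminology) checks it for a skew tableau given as its list of rows of filled entries:

```python
def reading_word(rows):
    """Concatenate the reversed rows of a skew tableau, top row first.

    `rows` lists only the filled (skew) entries of each row."""
    return [entry for row in rows for entry in reversed(row)]

def is_lattice(word):
    """True if every prefix contains at least as many i's as (i+1)'s."""
    counts = {}
    for x in word:
        counts[x] = counts.get(x, 0) + 1
        if x > 1 and counts[x] > counts.get(x - 1, 0):
            return False
    return True

# The two Littlewood-Richardson tableaux of shape (4,3,2)/(2,1) and
# weight (3,2,1) from the example in the next subsection:
print(is_lattice(reading_word([[1, 1], [1, 2], [2, 3]])))  # True
print(is_lattice(reading_word([[1, 1], [2, 2], [1, 3]])))  # True
# A semistandard filling of the same shape that fails the condition:
print(is_lattice(reading_word([[1, 1], [2, 2], [2, 3]])))  # False
```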
Many other combinatorial notions have been found that turn out to be in bijection with Littlewood–Richardson tableaux, and can therefore also be used to define the Littlewood–Richardson coefficients. === Example === Consider the case that λ = ( 2 , 1 ) {\displaystyle \lambda =(2,1)} , μ = ( 3 , 2 , 1 ) {\displaystyle \mu =(3,2,1)} and ν = ( 4 , 3 , 2 ) {\displaystyle \nu =(4,3,2)} . Then the fact that c λ , μ ν = 2 {\displaystyle c_{\lambda ,\mu }^{\nu }=2} can be deduced from the fact that the two tableaux shown at the right are the only two Littlewood–Richardson tableaux of shape ν / λ {\displaystyle \nu /\lambda } and weight μ {\displaystyle \mu } . Indeed, since the last box on the first nonempty line of the skew diagram can only contain an entry 1, the entire first line must be filled with entries 1 (this is true for any Littlewood–Richardson tableau); in the last box of the second row we can only place a 2 by column strictness and the fact that our lattice word cannot contain any larger entry before it contains a 2. For the first box of the second row we can now either use a 1 or a 2. Once that entry is chosen, the third row must contain the remaining entries to make the weight (3,2,1), in a weakly increasing order, so we have no choice left any more; in both cases it turns out that we do find a Littlewood–Richardson tableau. === A more geometrical description === The condition that the sequence of entries read from the tableau in a somewhat peculiar order form a lattice word can be replaced by a more local and geometrical condition. Since in a semistandard tableau equal entries never occur in the same column, one can number the copies of any value from right to left, which is their order of occurrence in the sequence that should be a lattice word. Call the number so associated to each entry its index, and write an entry i with index j as i[j]. Now if some Littlewood–Richardson tableau contains an entry i > 1 {\displaystyle i>1} with index j, then that entry i[j] should occur in a row strictly below that of ( i − 1 ) [ j ] {\displaystyle (i-1)[j]} (which certainly also occurs, since the entry i − 1 occurs at least as often as the entry i does). In fact the entry i[j] should also occur in a column no further to the right than that same entry ( i − 1 ) [ j ] {\displaystyle (i-1)[j]} (which at first sight appears to be a stricter condition). If the weight of the Littlewood–Richardson tableau is fixed beforehand, then one can form a fixed collection of indexed entries, and if these are placed in a way respecting those geometric restrictions, in addition to those of semistandard tableaux and the condition that indexed copies of the same entries should respect right-to-left ordering of the indexes, then the resulting tableaux are guaranteed to be Littlewood–Richardson tableaux. == An algorithmic form of the rule == The Littlewood–Richardson rule as stated above gives a combinatorial expression for individual Littlewood–Richardson coefficients, but gives no indication of a practical method to enumerate the Littlewood–Richardson tableaux in order to find the values of these coefficients.
Indeed, for given λ , μ , ν {\displaystyle \lambda ,\mu ,\nu } there is no simple criterion to determine whether any Littlewood–Richardson tableaux of shape ν / λ {\displaystyle \nu /\lambda } and of weight μ {\displaystyle \mu } exist at all (although there are a number of necessary conditions, the simplest of which is | λ | + | μ | = | ν | {\displaystyle |\lambda |+|\mu |=|\nu |} ); therefore it seems inevitable that in some cases one has to go through an elaborate search, only to find that no solutions exist. Nevertheless, the rule leads to a quite efficient procedure to determine the full decomposition of a product of Schur functions, in other words to determine all coefficients c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} for fixed λ and μ, but varying ν. This fixes the weight of the Littlewood–Richardson tableaux to be constructed and the "inner part" λ of their shape, but leaves the "outer part" ν free. Since the weight is known, the set of indexed entries in the geometric description is fixed. Now for successive indexed entries, all possible positions allowed by the geometric restrictions can be tried in a backtracking search. The entries can be tried in increasing order, while among equal entries they can be tried by decreasing index. The latter point is the key to efficiency of the search procedure: the entry i[j] is then restricted to be in a column to the right of i [ j + 1 ] {\displaystyle i[j+1]} , but no further to the right than i − 1 [ j ] {\displaystyle i-1[j]} (if such entries are present). This strongly restricts the set of possible positions, but always leaves at least one valid position for i [ j ] {\displaystyle i[j]} ; thus every placement of an entry will give rise to at least one complete Littlewood–Richardson tableau, and the search tree contains no dead ends. A similar method can be used to find all coefficients c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} for fixed λ and ν, but varying μ. == Littlewood–Richardson coefficients == The Littlewood–Richardson coefficients cνλμ appear in the following interrelated ways: They are the structure constants for the product in the ring of symmetric functions with respect to the basis of Schur functions s λ s μ = ∑ c λ μ ν s ν {\displaystyle s_{\lambda }s_{\mu }=\sum c_{\lambda \mu }^{\nu }s_{\nu }} or equivalently cνλμ is the inner product of sν and sλsμ. They express skew Schur functions in terms of Schur functions s ν / λ = ∑ μ c λ μ ν s μ . {\displaystyle s_{\nu /\lambda }=\sum _{\mu }c_{\lambda \mu }^{\nu }s_{\mu }.} The cνλμ appear as intersection numbers on a Grassmannian: σ λ σ μ = ∑ c λ μ ν σ ν {\displaystyle \sigma _{\lambda }\sigma _{\mu }=\sum c_{\lambda \mu }^{\nu }\sigma _{\nu }} where σμ is the class of the Schubert variety of a Grassmannian corresponding to μ. cνλμ is the number of times the irreducible representation Vλ ⊗ Vμ of the product of symmetric groups S|λ| × S|μ| appears in the restriction of the representation Vν of S|ν| to S|λ| × S|μ|. By Frobenius reciprocity this is also the number of times that Vν occurs in the representation of S|ν| induced from Vλ ⊗ Vμ. The cνλμ appear in the decomposition of the tensor product (Fulton 1997) of two Schur modules (irreducible representations of special linear groups) E λ ⊗ E μ = ⨁ ν ( E ν ) ⊕ c λ μ ν . {\displaystyle E^{\lambda }\otimes E^{\mu }=\bigoplus _{\nu }(E^{\nu })^{\oplus c_{\lambda \mu }^{\nu }}.} cνλμ is the number of standard Young tableaux of shape ν/μ that are jeu de taquin equivalent to some fixed standard Young tableau of shape λ. 
cνλμ is the number of Littlewood–Richardson tableaux of shape ν/λ and of weight μ. cνλμ is the number of pictures between μ and ν/λ. == Special cases == === Pieri's formula === Pieri's formula, the special case of the Littlewood–Richardson rule in which one of the partitions has only one part, states that S μ S n = ∑ λ S λ {\displaystyle S_{\mu }S_{n}=\sum _{\lambda }S_{\lambda }} where Sn is the Schur function of a partition with one row and the sum is over all partitions λ obtained from μ by adding n elements to its Ferrers diagram, no two in the same column. === Rectangular partitions === If both partitions are rectangular in shape, the sum is also multiplicity free (Okada 1998). Fix a, b, p, and q positive integers with p ≥ {\displaystyle \geq } q. Denote by ( a p ) {\displaystyle (a^{p})} the partition with p parts of length a. The partitions indexing nontrivial components of s ( a p ) s ( b q ) {\displaystyle s_{(a^{p})}s_{(b^{q})}} are those partitions λ {\displaystyle \lambda } with length ≤ p + q {\displaystyle \leq p+q} such that λ q + 1 = λ q + 2 = ⋯ = λ p = a , {\displaystyle \lambda _{q+1}=\lambda _{q+2}=\cdots =\lambda _{p}=a,} λ q ≥ m a x ( a , b ) {\displaystyle \lambda _{q}\geq \mathrm {max} (a,b)} λ i + λ p + q − i + 1 = a + b , i = 1 , … , q . {\displaystyle \lambda _{i}+\lambda _{p+q-i+1}=a+b,\quad {i=1,\dots ,q}.} == Generalizations == === Reduced Kronecker coefficients of the symmetric group === The reduced Kronecker coefficient of the symmetric group C ¯ λ , μ , ν {\displaystyle {\bar {C}}_{\lambda ,\mu ,\nu }} is a generalization of c λ , μ ν {\displaystyle c_{\lambda ,\mu }^{\nu }} to three arbitrary Young diagrams λ , μ , ν {\displaystyle \lambda ,\mu ,\nu } , which is symmetric under permutations of the three diagrams. === Skew Schur functions === Zelevinsky (1981) extended the Littlewood–Richardson rule to skew Schur functions as follows: s λ s μ / ν = ∑ λ + ω ( T ≥ j ) ∈ P s λ + ω ( T ) {\displaystyle s_{\lambda }s_{\mu /\nu }=\sum _{\lambda +\omega (T_{\geq j})\in P}s_{\lambda +\omega (T)}} where the sum is over all tableaux T on μ/ν such that for all j, the sequence of integers λ+ω(T≥j) is non-increasing, and ω is the weight. === Newell-Littlewood numbers === Newell-Littlewood numbers are defined from Littlewood–Richardson coefficients by the cubic expression N μ , ν , λ = ∑ α , β , γ c α , β μ c α , γ ν c β , γ λ {\displaystyle N_{\mu ,\nu ,\lambda }=\sum _{\alpha ,\beta ,\gamma }c_{\alpha ,\beta }^{\mu }c_{\alpha ,\gamma }^{\nu }c_{\beta ,\gamma }^{\lambda }} Newell-Littlewood numbers give some of the tensor product multiplicities of finite-dimensional representations of classical Lie groups of the types B , C , D {\displaystyle B,C,D} .
The non-vanishing condition on Young diagram sizes c λ , μ ν ≠ 0 ⟹ | λ | + | μ | = | ν | {\displaystyle c_{\lambda ,\mu }^{\nu }\neq 0\implies |\lambda |+|\mu |=|\nu |} leads to N μ , ν , λ ≠ 0 ⟹ { | | λ | − | μ | | ≤ | ν | ≤ | λ | + | μ | | λ | + | μ | + | ν | ∈ 2 Z {\displaystyle N_{\mu ,\nu ,\lambda }\neq 0\implies \left\{{\begin{array}{l}||\lambda |-|\mu ||\leq |\nu |\leq |\lambda |+|\mu |\\|\lambda |+|\mu |+|\nu |\in 2\mathbb {Z} \end{array}}\right.} Newell-Littlewood numbers are generalizations of Littlewood–Richardson coefficients in the sense that | μ | + | ν | = | λ | ⟹ N μ , ν , λ = c μ , ν λ {\displaystyle |\mu |+|\nu |=|\lambda |\implies N_{\mu ,\nu ,\lambda }=c_{\mu ,\nu }^{\lambda }} Newell-Littlewood numbers that involve a Young diagram with only one row obey a Pieri-type rule: N ( k ) , μ , ν {\displaystyle N_{(k),\mu ,\nu }} is the number of ways to remove k + | μ | − | ν | 2 {\displaystyle {\frac {k+|\mu |-|\nu |}{2}}} boxes from μ {\displaystyle \mu } (from different columns), then add k − | μ | + | ν | 2 {\displaystyle {\frac {k-|\mu |+|\nu |}{2}}} boxes (to different columns) to make ν {\displaystyle \nu } . Newell-Littlewood numbers are the structure constants of an associative and commutative algebra whose basis elements are partitions, with the product μ × ν = ∑ λ N μ , ν , λ λ {\displaystyle \mu \times \nu =\sum _{\lambda }N_{\mu ,\nu ,\lambda }\lambda } . For example, ( 1 ) × ( k ) = ( k − 1 ) + ( k + 1 ) + ( k , 1 ) (Newell–Littlewood) {\displaystyle (1)\times (k)=(k-1)+(k+1)+(k,1)\quad {\text{(Newell–Littlewood)}}} ( 1 ) × ( k ) = ( k + 1 ) + ( k , 1 ) (Littlewood–Richardson) {\displaystyle (1)\times (k)=(k+1)+(k,1)\quad {\text{(Littlewood–Richardson)}}} == Examples == The examples of Littlewood–Richardson coefficients below are given in terms of products of Schur polynomials Sπ, indexed by partitions π, using the formula S λ S μ = ∑ c λ μ ν S ν . {\displaystyle S_{\lambda }S_{\mu }=\sum c_{\lambda \mu }^{\nu }S_{\nu }.} All coefficients with | ν | {\displaystyle |\nu |} at most 4 are given by: S0Sπ = Sπ for any π, where S0 = 1 is the Schur polynomial of the empty partition. S1S1 = S2 + S11 S2S1 = S3 + S21 S11S1 = S111 + S21 S3S1 = S4 + S31 S21S1 = S31 + S22 + S211 S2S2 = S4 + S31 + S22 S2S11 = S31 + S211 S111S1 = S1111 + S211 S11S11 = S1111 + S211 + S22 Most of the coefficients for small partitions are 0 or 1, which happens in particular whenever one of the factors is of the form Sn or S11...1, because of Pieri's formula and its transposed counterpart. The simplest example with a coefficient larger than 1 happens when neither of the factors has this form: S21S21 = S42 + S411 + S33 + 2S321 + S3111 + S222 + S2211. For larger partitions the coefficients become more complicated. For example, S321S321 = S642 +S6411 +S633 +2S6321 +S63111 +S6222 +S62211 +S552 +S5511 +2S543 +4S5421 +2S54111 +3S5331 +3S5322 +4S53211 +S531111 +2S52221 +S522111 +S444 +3S4431 +2S4422 +3S44211 +S441111 +3S4332 +3S43311 +4S43221 +2S432111 +S42222 +S422211 +S3333 +2S33321 +S333111 +S33222 +S332211 with 34 terms and total multiplicity 62, and the largest coefficient is 4. S4321S4321 is a sum of 206 terms with total multiplicity 930, and the largest coefficient is 18. S54321S54321 is a sum of 1433 terms with total multiplicity 26704, and the largest coefficient (that of S86543211) is 176. S654321S654321 is a sum of 10873 terms with total multiplicity 1458444 (so the average value of the coefficients is more than 100, and they can be as large as 2064).
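Small products like those above can be checked by directly enumerating the Littlewood–Richardson tableaux. A brute-force Python sketch (ours, not a standard API; exponential time, so practical only for small partitions):

```python
def lr_coefficient(lam, mu, nu):
    """Count Littlewood-Richardson tableaux of shape nu/lam and weight mu
    by backtracking over the cells of the skew shape, row by row."""
    lam = list(lam) + [0] * (len(nu) - len(lam))
    if sum(nu) != sum(lam) + sum(mu):
        return 0          # sizes must match, so the weight is forced
    cells = [(r, c) for r in range(len(nu)) for c in range(lam[r], nu[r])]
    k, fill, count = len(mu), {}, 0

    def lattice_ok():
        # reverse reading word: rows right to left, top row first
        seen = [0] * (k + 1)
        for r in range(len(nu)):
            for c in range(nu[r] - 1, lam[r] - 1, -1):
                i = fill[(r, c)]
                seen[i] += 1
                if i > 1 and seen[i] > seen[i - 1]:
                    return False
        return True

    def rec(idx, weight):
        nonlocal count
        if idx == len(cells):
            count += lattice_ok()
            return
        r, c = cells[idx]
        for v in range(1, k + 1):
            if weight[v - 1] == mu[v - 1]:
                continue                      # all copies of v used up
            if (r, c - 1) in fill and fill[(r, c - 1)] > v:
                continue                      # rows weakly increase
            if (r - 1, c) in fill and fill[(r - 1, c)] >= v:
                continue                      # columns strictly increase
            fill[(r, c)] = v
            weight[v - 1] += 1
            rec(idx + 1, weight)
            weight[v - 1] -= 1
            del fill[(r, c)]

    rec(0, [0] * k)
    return count

print(lr_coefficient((2, 1), (2, 1), (3, 2, 1)))     # 2, the 2*S321 above
print(lr_coefficient((2, 1), (3, 2, 1), (4, 3, 2)))  # 2, the worked example
```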
The original example given by Littlewood & Richardson (1934, p. 122-124) was (after correcting for 3 tableaux they found but forgot to include in the final sum) S431S221 = S652 + S6511 + S643 + 2S6421 + S64111 + S6331 + S6322 + S63211 + S553 + 2S5521 + S55111 + 2S5431 + 2S5422 + 3S54211 + S541111 + S5332 + S53311 + 2S53221 + S532111 + S4432 + S44311 + 2S44221 + S442111 + S43321 + S43222 + S432211 with 26 terms coming from the following 34 tableaux: ....11 ....11 ....11 ....11 ....11 ....11 ....11 ....11 ....11 ...22 ...22 ...2 ...2 ...2 ...2 ... ... ... .3 . .23 .2 .3 . .22 .2 .2 3 3 2 2 3 23 2 3 3 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ...12 ...12 ...12 ...12 ...2 ...1 ...1 ...2 ...1 .23 .2 .3 . .13 .22 .2 .1 .2 3 2 2 2 3 23 23 2 3 3 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ....1 ...2 ...2 ...2 ... ... ... ... ... .1 .3 . .12 .12 .1 .2 .2 2 1 1 23 2 22 13 1 3 2 2 3 3 2 2 3 3 .... .... .... .... .... .... .... .... ...1 ...1 ...1 ...1 ...1 ... ... ... .12 .12 .1 .2 .2 .11 .1 .1 23 2 22 13 1 22 12 12 3 3 2 2 3 23 2 3 3 Calculating skew Schur functions is similar. For example, the 15 Littlewood–Richardson tableaux for ν=5432 and λ=331 are ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...11 ...2 ...2 ...2 ...2 ...2 ...2 ...2 ...2 ...2 ...2 ...2 ...2 ...2 ...2 ...2 .11 .11 .11 .12 .11 .12 .13 .13 .23 .13 .13 .12 .12 .23 .23 12 13 22 12 23 13 12 24 14 14 22 23 33 13 34 so S5432/331 = Σcνλμ Sμ = S52 + S511 + S4111 + S2221 + 2S43 + 2S3211 + 2S322 + 2S331 + 3S421 (Fulton 1997, p. 64). == Notes == == References == Fulton, William (1997), Young tableaux, London Mathematical Society Student Texts, vol. 35, Cambridge University Press, p. 121, ISBN 978-0-521-56144-0, MR 1464693 Gasharov, Vesselin (1998), "A short proof of the Littlewood-Richardson rule", European Journal of Combinatorics, 19 (4): 451–453, doi:10.1006/eujc.1998.0212, ISSN 0195-6698, MR 1630540 James, Gordon (1987), "The representation theory of the symmetric groups", The Arcata Conference on Representations of Finite Groups (Arcata, Calif., 1986), Proc. Sympos. Pure Math., vol. 47, Providence, R.I.: American Mathematical Society, pp. 111–126, MR 0933355 Knuth, Donald E. (1970), "Permutations, matrices, and generalized Young tableaux", Pacific Journal of Mathematics, 34 (3): 709–727, doi:10.2140/pjm.1970.34.709, ISSN 0030-8730, MR 0272654 Littelmann, Peter (1994), "A Littlewood-Richardson rule for symmetrizable Kac-Moody algebras" (PDF), Invent. Math., 116: 329–346, Bibcode:1994InMat.116..329L, doi:10.1007/BF01231564, S2CID 85546837 Littlewood, Dudley E. (1950), The theory of group characters and matrix representations of groups, AMS Chelsea Publishing, Providence, RI, ISBN 978-0-8218-4067-2, MR 0002127 {{citation}}: ISBN / Date incompatibility (help) Littlewood, D. E.; Richardson, A. R. (1934), "Group Characters and Algebra", Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 233 (721–730), The Royal Society: 99–141, Bibcode:1934RSPTA.233...99L, doi:10.1098/rsta.1934.0015, ISSN 0264-3952, JSTOR 91293 Macdonald, I. G. 
(1995), Symmetric functions and Hall polynomials, Oxford Mathematical Monographs (2nd ed.), The Clarendon Press Oxford University Press, ISBN 978-0-19-853489-1, MR 1354144, archived from the original on 2012-12-11 Okada, Soichi (1998), "Applications of minor summation formulas to rectangular-shaped representations of classical groups", Journal of Algebra, 205 (2): 337–367, doi:10.1006/jabr.1997.7408, ISSN 0021-8693, MR 1632816 Robinson, G. de B. (1938), "On the Representations of the Symmetric Group", American Journal of Mathematics, 60 (3), The Johns Hopkins University Press: 745–760, doi:10.2307/2371609, ISSN 0002-9327, JSTOR 2371609 Zbl0019.25102 Schensted, C. (1961), "Longest increasing and decreasing subsequences", Canadian Journal of Mathematics, 13: 179–191, doi:10.4153/CJM-1961-015-3, ISSN 0008-414X, MR 0121305 Schützenberger, M. P. (1963), "Quelques remarques sur une construction de Schensted", Mathematica Scandinavica, 12: 117–128, doi:10.7146/math.scand.a-10676, ISSN 0025-5521, MR 0190017 Schützenberger, Marcel-Paul (1977), "La correspondance de Robinson", Combinatoire et représentation du groupe symétrique (Actes Table Ronde CNRS, Univ. Louis-Pasteur Strasbourg, Strasbourg, 1976), Lecture Notes in Mathematics, vol. 579, Berlin, New York: Springer-Verlag, pp. 59–113, doi:10.1007/BFb0090012, ISBN 978-3-540-08143-2, MR 0498826 Stembridge, John R. (2002), "A concise proof of the Littlewood-Richardson rule" (PDF), Electronic Journal of Combinatorics, 9 (1): Note 5, 4 pp. (electronic), doi:10.37236/1666, ISSN 1077-8926, MR 1912814 Thomas, Glânffrwd P. (1974), Baxter algebras and Schur functions, Ph.D. Thesis, Swansea: University College of Swansea van Leeuwen, Marc A. A. (2001), "The Littlewood-Richardson rule, and related combinatorics" (PDF), Interaction of combinatorics and representation theory, MSJ Mem., vol. 11, Tokyo: Math. Soc. Japan, pp. 95–145, MR 1862150 Zelevinsky, A. V. (1981), "A generalization of the Littlewood-Richardson rule and the Robinson-Schensted-Knuth correspondence", Journal of Algebra, 69 (1): 82–94, doi:10.1016/0021-8693(81)90128-9, ISSN 0021-8693, MR 0613858 == External links == An online program, decomposing products of Schur functions using the Littlewood–Richardson rule
Wikipedia:Liu Hui's π algorithm#0
Liu Hui's π algorithm was invented by Liu Hui (fl. 3rd century), a mathematician of the state of Cao Wei. Before his time, the ratio of the circumference of a circle to its diameter was often taken experimentally as three in China, while Zhang Heng (78–139) rendered it as 3.1724 (from the proportion of the celestial circle to the diameter of the earth, 92/29) or as π ≈ 10 ≈ 3.162 {\displaystyle \pi \approx {\sqrt {10}}\approx 3.162} . Liu Hui was not satisfied with this value. He commented that it was too large and overshot the mark. Another mathematician, Wang Fan (219–257), provided π ≈ 142/45 ≈ 3.156. All these empirical π values were accurate to two digits (i.e. one decimal place). Liu Hui was the first Chinese mathematician to provide a rigorous algorithm for calculation of π to any accuracy. Liu Hui's own calculation with a 96-gon provided an accuracy of five digits, i.e. π ≈ 3.1416. Liu Hui remarked in his commentary to The Nine Chapters on the Mathematical Art that the ratio of the circumference of an inscribed hexagon to the diameter of the circle was three, hence π must be greater than three. He went on to provide a detailed step-by-step description of an iterative algorithm to calculate π to any required accuracy based on bisecting polygons; he calculated π to between 3.141024 and 3.142704 with a 96-gon; he suggested that 3.14 was a good enough approximation, and expressed π as 157/50; he admitted that this number was a bit small. Later he invented a quick method to improve on it, and obtained π ≈ 3.1416 with only a 96-gon, a level of accuracy comparable to that from a 1536-gon. His most important contribution in this area was his simple iterative π algorithm. == Area of a circle == Liu Hui argued: "Multiply one side of a hexagon by the radius (of its circumcircle), then multiply this by three, to yield the area of a dodecagon; if we cut a hexagon into a dodecagon, multiply its side by its radius, then again multiply by six, we get the area of a 24-gon; the finer we cut, the smaller the loss with respect to the area of circle, thus with further cut after cut, the area of the resulting polygon will coincide and become one with the circle; there will be no loss". This is essentially equivalent to: lim N → ∞ area of N -gon = area of circle . {\displaystyle \lim _{N\to \infty }{\text{area of }}N{\text{-gon}}={\text{area of circle}}.\,} Further, Liu Hui proved that the area of a circle is half of its circumference multiplied by its radius. He said: "Between a polygon and a circle, there is excess radius. Multiply the excess radius by a side of the polygon. The resulting area exceeds the boundary of the circle". In the diagram d = excess radius. Multiplying d by one side results in oblong ABCD which exceeds the boundary of the circle. If a side of the polygon is small (i.e. there is a very large number of sides), then the excess radius will be small, hence excess area will be small. As in the diagram, when N → ∞, d → 0, and ABCD → 0. "Multiply the side of a polygon by its radius, and the area doubles; hence multiply half the circumference by the radius to yield the area of circle". When N → ∞, half the circumference of the N-gon approaches a semicircle, thus half a circumference of a circle multiplied by its radius equals the area of the circle. Liu Hui did not explain this deduction in detail.
However, it is self-evident by using Liu Hui's "in-out complement principle" which he provided elsewhere in The Nine Chapters on the Mathematical Art: Cut up a geometric shape into parts and rearrange the parts to form another shape; the area of the two shapes will be identical. Thus rearranging the six green triangles, three blue triangles and three red triangles into a rectangle with width = 3L, and height R shows that the area of the dodecagon = 3RL. In general, multiplying half of the circumference of an N-gon by its radius yields the area of a 2N-gon. Liu Hui used this result repetitively in his π algorithm. == Liu Hui's π inequality == Liu Hui proved an inequality involving π by considering the area of inscribed polygons with N and 2N sides. In the diagram, the yellow area represents the area of an N-gon, denoted by A N {\displaystyle A_{N}} , and the yellow area plus the green area represents the area of a 2N-gon, denoted by A 2 N {\displaystyle A_{2N}} . Therefore, the green area represents the difference between the areas of the 2N-gon and the N-gon: D 2 N = A 2 N − A N . {\displaystyle D_{2N}=A_{2N}-A_{N}.} The red area is equal to the green area, and so is also D 2 N {\displaystyle D_{2N}} . So yellow area + green area + red area = A 2 N + D 2 N . {\displaystyle A_{2N}+D_{2N}.} Let A C {\displaystyle A_{C}} represent the area of the circle. Then A 2 N < A C < A 2 N + D 2 N . {\displaystyle A_{2N}<A_{C}<A_{2N}+D_{2N}.} If the radius of the circle is taken to be 1, then we have Liu Hui's π inequality: A 2 N < π < A 2 N + D 2 N . {\displaystyle A_{2N}<\pi <A_{2N}+D_{2N}.} == Iterative algorithm == Liu Hui began with an inscribed hexagon. Let M be the length of one side AB of the hexagon, and let r be the radius of the circle. Bisect AB with line OPC; AC becomes one side of the dodecagon (12-gon); let its length be m. Let the length of PC be j and the length of OP be G. APO and APC are two right-angle triangles. Liu Hui used the Pythagorean theorem repetitively: G 2 = r 2 − ( M 2 ) 2 {\displaystyle {}G^{2}=r^{2}-\left({\tfrac {M}{2}}\right)^{2}} G = r 2 − M 2 4 {\displaystyle {}G={\sqrt {r^{2}-{\tfrac {M^{2}}{4}}}}} j = r − G = r − r 2 − M 2 4 {\displaystyle {}j=r-G=r-{\sqrt {r^{2}-{\tfrac {M^{2}}{4}}}}} m 2 = ( M 2 ) 2 + j 2 {\displaystyle {}m^{2}=\left({\tfrac {M}{2}}\right)^{2}+j^{2}} m = ( M 2 ) 2 + j 2 {\displaystyle {}m={\sqrt {\left({\tfrac {M}{2}}\right)^{2}+j^{2}}}} m = ( M 2 ) 2 + ( r − G ) 2 {\displaystyle {}m={\sqrt {\left({\tfrac {M}{2}}\right)^{2}+\left(r-G\right)^{2}}}} m = ( M 2 ) 2 + ( r − r 2 − M 2 4 ) 2 {\displaystyle {}m={\sqrt {\left({\tfrac {M}{2}}\right)^{2}+\left(r-{\sqrt {r^{2}-{\tfrac {M^{2}}{4}}}}\right)^{2}}}} From here, there is now a technique to determine m from M, which gives the side length for a polygon with twice the number of edges. Starting with a hexagon, Liu Hui could determine the side length of a dodecagon using this formula. Then continue repetitively to determine the side length of an icositetragon given the side length of a dodecagon. He could do this recursively as many times as necessary. Knowing how to determine the area of these polygons, Liu Hui could then approximate π.
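In modern notation the whole scheme is a few lines of code. A Python sketch (an illustration only; Liu Hui worked with rod-calculus fractions, so his final digits differ slightly from floating-point values) that doubles the side count from the hexagon and prints the bounds given by Liu Hui's π inequality:

```python
import math

def liu_hui_bounds(doublings, r=1.0):
    """Polygon bisection from the inscribed hexagon: at each step the
    area of the 2n-gon is half the perimeter of the n-gon times r, and
    Liu Hui's inequality gives A_2n < pi*r^2 < A_2n + (A_2n - A_n)."""
    n, m = 6, r           # the side of the inscribed hexagon equals r
    a_prev = None         # area of the n-gon from the previous step
    for _ in range(doublings):
        a = n * m * r / 2                    # area of the 2n-gon
        if a_prev is not None:
            print(f"{2 * n:5d}-gon: {a / r**2:.6f} < pi "
                  f"< {(2 * a - a_prev) / r**2:.6f}")
        a_prev = a
        g = math.sqrt(r * r - (m / 2) ** 2)          # OP in the figure
        m = math.sqrt((m / 2) ** 2 + (r - g) ** 2)   # side of the 2n-gon
        n *= 2

liu_hui_bounds(5)
```

The 192-gon line prints 3.141032 < π < 3.142714; Liu Hui's fractions, rounded as in the next paragraph, give 3.141024 and 3.142704.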
With r = 10 {\displaystyle r=10} units, he obtained area of 96-gon A 96 = 313 584 625 {\displaystyle {}A_{96}=313{584 \over 625}} area of 192-gon A 192 = 314 64 625 {\displaystyle {}A_{192}=314{64 \over 625}} Difference of 192-gon and 96-gon: D 192 = 314 64 625 − 313 584 625 = 105 625 {\displaystyle {}D_{192}=314{\frac {64}{625}}-313{\frac {584}{625}}={\frac {105}{625}}} from Liu Hui's π inequality: A 2 N < A C < A 2 N + D 2 N . {\displaystyle A_{2N}<A_{C}<A_{2N}+D_{2N}.} Since r = 10, A C = 100 × π {\displaystyle A_{C}=100\times \pi } therefore: 314 64 625 < 100 × π < 314 64 625 + 105 625 {\displaystyle {}314{\frac {64}{625}}<100\times \pi <314{\frac {64}{625}}+{\frac {105}{625}}} 314 64 625 < 100 × π < 314 169 625 {\displaystyle {}314{\frac {64}{625}}<100\times \pi <314{\frac {169}{625}}} 3.141024 < π < 3.142704. {\displaystyle {}3.141024<\pi <3.142704.} He never took π as the average of the lower limit 3.141024 and upper limit 3.142704. Instead he suggested that 3.14 was a good enough approximation for π, and expressed it as a fraction 157 50 {\displaystyle {\tfrac {157}{50}}} ; he pointed out this number is slightly less than the actual value of π. Liu Hui carried out his calculation with rod calculus, and expressed his results with fractions. However, the iterative nature of Liu Hui's π algorithm is quite clear: 2 − m 2 = 2 + ( 2 − M 2 ) , {\displaystyle 2-m^{2}={\sqrt {2+(2-M^{2})}}\,,} in which m is the length of one side of the next-order polygon bisected from M. The same calculation is done repeatedly, each step requiring only one addition and one square root extraction. == Quick method == Calculation of square roots of irrational numbers was not an easy task in the third century with counting rods. Liu Hui discovered a shortcut by comparing the area differentials of polygons, and found that the proportion of the difference in area of successive-order polygons was approximately 1/4. Let DN denote the difference in areas of the N-gon and the (N/2)-gon D N = A N − A N / 2 {\displaystyle D_{N}=A_{N}-A_{N/2}\,} He found: D 96 ≈ 1 4 D 48 {\displaystyle D_{96}\approx {\tfrac {1}{4}}D_{48}} D 192 ≈ 1 4 D 96 {\displaystyle D_{192}\approx {\tfrac {1}{4}}D_{96}} 1 Hence: D 384 ≈ 1 4 D 192 D 768 ≈ ( 1 4 ) 2 D 192 D 1536 ≈ ( 1 4 ) 3 D 192 D 3072 ≈ ( 1 4 ) 4 D 192 ⋮ {\displaystyle {\begin{aligned}D_{384}&{}\approx {\tfrac {1}{4}}D_{192}\\D_{768}&{}\approx \left({\tfrac {1}{4}}\right)^{2}D_{192}\\D_{1536}&{}\approx \left({\tfrac {1}{4}}\right)^{3}D_{192}\\D_{3072}&{}\approx \left({\tfrac {1}{4}}\right)^{4}D_{192}\\&{}\ \ \vdots \end{aligned}}} Area of unit radius circle = π = A 192 + D 384 + D 768 + D 1536 + D 3072 + ⋯ ≈ A 192 + F ⋅ D 192 . {\displaystyle {}\pi =A_{192}+D_{384}+D_{768}+D_{1536}+D_{3072}+\cdots \approx A_{192}+F\cdot D_{192}.\,} In which F = 1 4 + ( 1 4 ) 2 + ( 1 4 ) 3 + ( 1 4 ) 4 + ⋯ = 1 4 1 − 1 4 = 1 3 . {\displaystyle F={\tfrac {1}{4}}+\left({\tfrac {1}{4}}\right)^{2}+\left({\tfrac {1}{4}}\right)^{3}+\left({\tfrac {1}{4}}\right)^{4}+\cdots ={\frac {\frac {1}{4}}{1-{\frac {1}{4}}}}={\tfrac {1}{3}}.} That is, all the subsequent excess areas add up to one third of D 192 {\displaystyle D_{192}} : area of unit circle = π ≈ A 192 + ( 1 3 ) D 192 ≈ 3927 1250 ≈ 3.1416. {\displaystyle {}=\pi \approx A_{192}+\left({\tfrac {1}{3}}\right)D_{192}\approx {3927 \over 1250}\approx 3.1416.\,} 2 Liu Hui was quite happy with this result because he had acquired the same result with the calculation for a 1536-gon, obtaining the area of a 3072-gon.
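The quick method can be sketched the same way. In the snippet below (illustrative; polygon_area is our helper, valid only for side counts of the form 6·2^k with k ≥ 1), A96 and A192 are computed by bisection and one third of their difference is added, with no further square-root extractions:

```python
import math

def polygon_area(n, r=1.0):
    """Area of a regular n-gon inscribed in a circle of radius r, via
    Liu Hui's bisection from the hexagon (valid for n = 12, 24, 48, ...)."""
    sides, m = 6, r
    while 2 * sides < n:
        g = math.sqrt(r * r - (m / 2) ** 2)
        m = math.sqrt((m / 2) ** 2 + (r - g) ** 2)
        sides *= 2
    return sides * m * r / 2   # half perimeter of the (n/2)-gon times r

a96, a192 = polygon_area(96), polygon_area(192)
# D384 + D768 + ... is nearly geometric with ratio 1/4, so the whole
# remaining correction is about one third of D192 = a192 - a96:
print(a192 + (a192 - a96) / 3)   # 3.1415925335..., cf. note 2 below
```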
This explains three questions: Why he stopped short at A192 in his presentation of his algorithm: because he had discovered a quick method of improving the accuracy of π, achieving the same result as the 1536-gon with only a 96-gon. After all, calculation of square roots was not a simple task with rod calculus; with the quick method, he only needed to perform one more subtraction, one more division (by 3) and one more addition, instead of four more square root extractions. Why he preferred to calculate π through the areas rather than the circumferences of successive polygons: because the quick method requires information about the difference in areas of successive polygons. Who was the true author of the paragraph containing the calculation of π = 3927 1250 . {\displaystyle \pi ={3927 \over 1250}.} That famous paragraph began with "A Han dynasty bronze container in the military warehouse of Jin dynasty....". Many scholars, among them Yoshio Mikami and Joseph Needham, believed that the "Han dynasty bronze container" paragraph was the work of Liu Hui and not Zu Chongzhi, as others believed, because of the strong correlation of the two methods through area calculation, and because there is not a single word mentioning Zu's 3.1415926 < π < 3.1415927 result, obtained through a 12288-gon. == Later developments == Liu Hui established a solid algorithm for calculating π to any accuracy. Zu Chongzhi was familiar with Liu Hui's work, and obtained greater accuracy by applying his algorithm to a 12288-gon. From Liu Hui's formula for the 2N-gon: A 2 N = m N × r {\displaystyle A_{2N}=m_{N}\times r} For a 12288-gon inscribed in a unit-radius circle: A 24576 = 3.14159261864 < π {\displaystyle A_{24576}=3.14159261864<\pi } . From Liu Hui's π inequality: A 24576 < π < A 24576 + D 24576 {\displaystyle A_{24576}<\pi <A_{24576}+D_{24576}} in which D 24576 = A 24576 − A 12288 = 0.0000001021 {\displaystyle D_{24576}=A_{24576}-A_{12288}=0.0000001021} A 24576 = 3.14159261864 < π < 3.14159261864 + 0.0000001021 {\displaystyle A_{24576}=3.14159261864<\pi <3.14159261864+0.0000001021} . Therefore 3.14159261864 < π < 3.14159272074 {\displaystyle 3.14159261864<\pi <3.14159272074} Truncated to eight significant digits: 3.1415926 < π < 3.1415927 {\displaystyle 3.1415926<\pi <3.1415927} . That was the famous Zu Chongzhi π inequality. Zu Chongzhi then used the interpolation formula of He Chengtian (何承天, 370–447) and obtained the approximating fraction π ≈ 355 113 {\displaystyle \pi \approx {355 \over 113}} . However, this π value disappeared from Chinese history for a long period of time (e.g. Song dynasty mathematician Qin Jiushao used π = 22 7 {\displaystyle {22 \over 7}} and π = 10 {\displaystyle \pi ={\sqrt {10}}} ), until Yuan dynasty mathematician Zhao Youqin worked on a variation of Liu Hui's π algorithm, by bisecting an inscribed square, and again obtained π ≈ 355 113 . {\displaystyle \pi \approx {355 \over 113}.} == Significance of Liu Hui's algorithm == Liu Hui's π algorithm was one of his most important contributions to ancient Chinese mathematics. It was based on the calculation of N-gon areas, in contrast to the Archimedean algorithm, which was based on polygon circumferences.
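Zu Chongzhi's bound can be cross-checked numerically by running the side-doubling recurrence up to the 12288-gon. This is a sketch assuming modern arbitrary-precision decimals (Python's decimal module); rod calculus is not emulated, so the last digits differ slightly from the historical figures above, which carry Zu's own intermediate roundings:

from decimal import Decimal, getcontext

getcontext().prec = 30            # ample working precision for eight digits

M, N = Decimal(1), 6              # hexagon side equals the radius (r = 1)
areas = {}
for _ in range(12):               # 6 -> 12 -> ... -> 24576 sides
    areas[2 * N] = N * M / 2      # A_2N: half the N-gon's perimeter times r
    G = (1 - (M / 2) ** 2).sqrt()
    M = ((M / 2) ** 2 + (1 - G) ** 2).sqrt()   # side of the 2N-gon
    N *= 2

D24576 = areas[24576] - areas[12288]
print(areas[24576])               # 3.1415926193..., the lower bound
print(areas[24576] + D24576)      # 3.1415927220..., the upper bound
# Truncated to eight significant digits: 3.1415926 < pi < 3.1415927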
With this method Zu Chongzhi obtained the eight-digit result 3.1415926 < π < 3.1415927, which held the world record for the most accurate value of π for centuries, until Madhava of Sangamagrama calculated 11 digits in the 14th century and Jamshid al-Kashi calculated 16 digits in 1424; the best approximations for π known in Europe were accurate to only 7 digits until Ludolph van Ceulen calculated 20 digits in 1596. == See also == Method of exhaustion (5th century BC) Zhao Youqin's π algorithm (13th–14th century) Proof of Newton's Formula for Pi (17th century) == Notes == ^1 Correct value: 0.2502009052 ^2 Correct values: A 192 = 3.1410319509 {\displaystyle A_{192}=3.1410319509} D 192 = 0.0016817478 {\displaystyle D_{192}=0.0016817478} π ≈ A 192 + 1 3 D 192 ≊ 3.1410319509 + 0.0016817478 / 3 {\displaystyle \pi \approx A_{192}+{\frac {1}{3}}D_{192}\approxeq 3.1410319509+0.0016817478/3} π ≈ 3.1410319509 + 0.0005605826 {\displaystyle \pi \approx 3.1410319509+0.0005605826} π ≈ 3.1415925335. {\displaystyle \pi \approx 3.1415925335.} Liu Hui's quick method was potentially able to deliver almost the same result as the 12288-gon (3.141592516588) with only a 96-gon. == References == == Further reading == Needham, Joseph (1986). Science and Civilization in China: Volume 3, Mathematics and the Sciences of the Heavens and the Earth. Taipei: Caves Books, Ltd. Wu Wenjun, ed. History of Chinese Mathematics, Vol. III (in Chinese). ISBN 7-303-04557-0
Wikipedia:Liu Sifeng#0
Liu Sifeng (Chinese: 刘思峰; born 15 July 1955) is a Chinese systems engineer. He is the director of the Institute for Grey Systems Studies at Nanjing University of Aeronautics and Astronautics, Nanjing, China. He is best known for his work on grey system theory. == Education and career == Liu obtained his BE in Mathematics from Henan University in 1981, then his MS in Economics (1986) and PhD in Systems Engineering (1998) from Huazhong University of Science and Technology, Wuhan, China. He was a doctoral student of Julong Deng, the founder of grey system theory. Liu was appointed as a lecturer at Henan University in 1985 and was promoted through the ranks, reaching full professor in 1994. In 2000, he moved to Nanjing University of Aeronautics and Astronautics as a distinguished professor, where he also serves as director of the Institute for Grey Systems Studies. In 2014, he worked as a research professor at De Montfort University in Leicester, UK. Liu is the editor-in-chief of Grey Systems: Theory and Application and of the Journal of Grey System. In 2013, he became a Marie Skłodowska-Curie Actions Fellow. == Awards and honors == Liu is an honorary fellow of the World Organisation of Systems and Cybernetics, and an honorary editor of the International Journal of Grey Systems (USA). German Chancellor Angela Merkel mentioned Liu's contributions to grey system theory in a 2019 speech at Huazhong University of Science and Technology. In 2023, he received the Global Excellence Award from the Grey Systems Society of Pakistan. == Books == Liu, Sifeng; Lin, Yi (2006). Grey Information: Theory and Practical Applications. Springer Science & Business Media. ISBN 978-1-84628-342-0. Dang, Yaoguo; Liu, Sifeng; Wang, Yuhong (2010). Optimization of Regional Industrial Structures and Applications. CRC Press. ISBN 978-1-4200-8752-9. Fang, Zhigeng; Liu, Sifeng; Shi, Hongxing; Lin, Yi (2010). Grey Game Theory and Its Applications in Economic Decision-Making. CRC Press. ISBN 978-1-4200-8740-6. Liu, Sifeng; Fang, Zhigeng; Shi, Hongxing; Guo, Benhai (2010). Theory of Science and Technology Transfer and Applications. CRC Press. ISBN 978-1-4200-8742-0. Liu, Sifeng; Forrest, Jeffrey Yi-Lin, eds. (2010). Advances in Grey Systems Research. Springer. ISBN 978-3-642-13938-3. Jian, Lirong; Liu, Sifeng; Lin, Yi (2011). Hybrid Rough Sets and Applications in Uncertain Decision-Making. CRC Press. ISBN 978-1-4200-8749-9. Liu, Sifeng; Xie, Naiming; Yuan, Chaoqing; Fang, Zhigeng (2012). Systems Evaluation: Methods, Models, and Applications. CRC Press. ISBN 978-1-4200-8847-2. Liu, Sifeng; Yang, Yingjie; Forrest, Jeffrey (2016). Grey Data Analysis: Methods, Models and Applications. Springer. ISBN 978-981-10-1841-1. == References == == External links == Liu Sifeng publications indexed by Google Scholar
Wikipedia:Ljubisa D.R. Kocinac#0
Ljubisa Dragi Rosanda Kocinac (born in Serbia in January 1947) is a mathematician and currently a Professor Emeritus at the University of Niš, Serbia. == Biography == He completed his PhD, focused on cardinal functions, at the University of Belgrade in 1983, under the supervision of Đuro Kurepa. Kocinac has published over 160 papers and four books in topology, real analysis and fields of sets. He has actively promoted research on selection principles, both as a fruitful collaborator and as an organizer of the first conferences in a series of international workshops titled Coverings, Selections and Games in Topology. The fourth of this series, held in Caserta, Italy, in June 2012, was dedicated to him on the occasion of his sixty-fifth birthday. His research interests include aspects of topology, especially selection principles, topological games and coverings of topological spaces, and mathematical analysis. In particular, he introduced star selection principles. == References ==
Wikipedia:Lloyd Dines#0
Lloyd Lyne Dines (29 March 1885, in Shelbyville, Missouri – 17 January 1964, in Quincy, Illinois) was an American-Canadian mathematician, known for his pioneering work on linear inequalities. == Education and career == Dines received his B.A. in 1906 and M.A. in 1907 from Northwestern University, and his Ph.D. in 1911 from the University of Chicago under Gilbert Bliss, with the thesis The highest common factor of a system of polynomials in one variable, with an application to implicit functions. In 1911 he became an instructor of mathematics at Columbia University, and then an associate professor at the University of Arizona. From 1915 to 1934 he was a professor at the University of Saskatchewan. In 1928 he was elected to the Royal Society of Canada. In 1932 he was an invited speaker at the International Congress of Mathematicians in Zürich. From 1934 to 1945 he was a professor at the Carnegie Institute of Technology and chair of the mathematics department. After his retirement in 1945, Dines held visiting professorships at the University of Saskatchewan, Smith College, and Northwestern University. == Selected publications == Dines, Lloyd L. (1913). "Concerning two recent theorems on implicit functions". Bull. Amer. Math. Soc. 19 (9): 462–467. doi:10.1090/s0002-9904-1913-02397-8. MR 1559394. Dines, L. L. (1915). "Complete existential theory of Sheffer's postulates for Boolean algebras". Bull. Amer. Math. Soc. 21 (4): 183–188. doi:10.1090/s0002-9904-1915-02595-4. MR 1559609. Dines, L. L. (1919). "Projective transformations in function space". Trans. Amer. Math. Soc. 20: 45–65. doi:10.1090/s0002-9947-1919-1501115-2. MR 1501115. Dines, L. L. (1924). "Concerning a suggested and discarded generalization of the Weierstrass factorization theorem". Bull. Amer. Math. Soc. 30 (5–6): 233–236. doi:10.1090/s0002-9904-1924-03889-0. MR 1560882. Dines, Lloyd L. (1924). "A theorem on the factorization of polynomials of a certain type". Trans. Amer. Math. Soc. 26: 17–24. doi:10.1090/s0002-9947-1924-1501262-7. MR 1501262. Dines, L. L. (1927). "Linear inequalities in general analysis". Bull. Amer. Math. Soc. 33 (6): 695–700. doi:10.1090/s0002-9904-1927-04461-5. MR 1561451. Dines, Lloyd L. (1927). "On sets of functions of a general variable". Trans. Amer. Math. Soc. 29 (2): 463–470. doi:10.1090/s0002-9947-1927-1501398-3. MR 1501398. Dines, L. L. (1928). "A theorem on orthogonal sequences". Trans. Amer. Math. Soc. 30 (2): 439–446. doi:10.1090/s0002-9947-1928-1501437-0. MR 1501437. Dines, Lloyd L. (1928). "A theorem on orthogonal functions with an application to integral inequalities". Trans. Amer. Math. Soc. 30 (2): 425–438. doi:10.1090/s0002-9947-1928-1501436-9. MR 1501436. Dines, L. L. (1930). "Linear inequalities and some related properties of functions". Bull. Amer. Math. Soc. 36 (6): 393–405. doi:10.1090/s0002-9904-1930-04956-3. MR 1561956. Dines, L. L. (1936). "Convex extension and linear inequalities". Bull. Amer. Math. Soc. 42 (6): 353–365. doi:10.1090/s0002-9904-1936-06299-3. MR 1563300. S2CID 44594799. with David Moskovitz: Moskovitz, David; Dines, L. L. (1940). "On the supporting-plane property of a convex body". Bull. Amer. Math. Soc. 46 (6): 482–489. doi:10.1090/s0002-9904-1940-07242-8. MR 0002008. Dines, Lloyd L. (1941). "On the mapping of quadratic forms". Bull. Amer. Math. Soc. 47 (6): 494–498. doi:10.1090/s0002-9904-1941-07494-x. MR 0004216. Dines, Lloyd L. (1942). "On the mapping of n quadratic forms". Bull. Amer. Math. Soc. 48 (6): 467–471. doi:10.1090/s0002-9904-1942-07707-x. MR 0006141.
Dines, Lloyd L. (1943). "On linear combinations of quadratic forms". Bull. Amer. Math. Soc. 49 (6): 388–393. doi:10.1090/s0002-9904-1943-07925-6. MR 0008068. Dines, L. L. (November 1947). "On a theorem of von Neumann". Proc. Natl. Acad. Sci. U.S.A. 33 (11): 329–331. Bibcode:1947PNAS...33..329D. doi:10.1073/pnas.33.11.329. PMC 1079066. PMID 16588759. == References ==