In linear algebra, the trace of a square matrix A, denoted tr(A), is the sum of the elements on its main diagonal, a_{11} + a_{22} + \cdots + a_{nn}. It is only defined for a square matrix (n × n). The trace of a matrix is the sum of its eigenvalues (counted with multiplicities). Also, tr(AB) = tr(BA) for any matrices A and B of the same size. Thus, similar matrices have the same trace. As a consequence, one can define the trace of a linear operator mapping a finite-dimensional vector space into itself, since all matrices describing such an operator with respect to a basis are similar. The trace is related to the derivative of the determinant (see Jacobi's formula).

== Definition ==

The trace of an n × n square matrix A is defined as

tr(A) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \cdots + a_{nn},

where a_{ii} denotes the entry on the i-th row and i-th column of A. The entries of A can be real numbers, complex numbers, or more generally elements of a field F. The trace is not defined for non-square matrices.

== Example ==

Let A be the matrix

A = \begin{pmatrix} 1 & 0 & 3 \\ 11 & 5 & 2 \\ 6 & 12 & -5 \end{pmatrix}.

Then

tr(A) = \sum_{i=1}^{3} a_{ii} = a_{11} + a_{22} + a_{33} = 1 + 5 + (-5) = 1.

== Properties ==

=== Basic properties ===

The trace is a linear mapping.
That is,

tr(A + B) = tr(A) + tr(B),
tr(cA) = c tr(A),

for all square matrices A and B of the same size, and all scalars c. A matrix and its transpose have the same trace:

tr(A) = tr(A^T).

This follows immediately from the fact that transposing a square matrix does not affect the elements along its main diagonal.

=== Trace of a product ===

The trace of a square matrix which is the product of two matrices can be rewritten as the sum of entry-wise products of their elements, i.e. as the sum of all elements of their Hadamard product. Phrased directly, if A and B are two m × n matrices, then

tr(A^T B) = tr(A B^T) = tr(B^T A) = tr(B A^T) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} b_{ij}.

If one views any real m × n matrix as a vector of length mn (an operation called vectorization), then the above operation on A and B coincides with the standard dot product. According to the above expression, tr(A^T A) is a sum of squares and hence is nonnegative, equal to zero if and only if A is zero. Furthermore, as noted in the above formula, tr(A^T B) = tr(B^T A). These demonstrate the positive-definiteness and symmetry required of an inner product; it is common to call tr(A^T B) the Frobenius inner product of A and B.
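The entry-wise product formula lends itself to a quick numerical check. The sketch below is pure Python; the helper names (`matmul`, `transpose`, `trace`) are our own illustrations, not a standard API. It recomputes the worked example tr(A) = 1 from the Example section, then verifies tr(AᵀB) against the double sum for a pair of 2 × 3 matrices.

```python
def matmul(A, B):
    """Naive matrix product for matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def trace(A):
    """Sum of the main-diagonal entries of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

# The worked example from the Example section: tr = 1 + 5 + (-5) = 1
print(trace([[1, 0, 3], [11, 5, 2], [6, 12, -5]]))   # 1

# tr(A^T B) equals the sum of entry-wise products, here for 2 x 3 matrices:
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8, 9], [10, 11, 12]]
lhs = trace(matmul(transpose(A), B))
rhs = sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))
print(lhs, rhs)   # 217 217
```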
This is a natural inner product on the vector space of all real matrices of fixed dimensions. The norm derived from this inner product is called the Frobenius norm, and it satisfies a submultiplicative property, as can be proven with the Cauchy–Schwarz inequality:

0 \le [tr(AB)]^2 \le tr(A^T A) \, tr(B^T B),

if A and B are real matrices such that AB is a square matrix. The Frobenius inner product and norm arise frequently in matrix calculus and statistics. The Frobenius inner product may be extended to a Hermitian inner product on the complex vector space of all complex matrices of a fixed size, by replacing B by its complex conjugate.

The symmetry of the Frobenius inner product may be phrased more directly as follows: the matrices in the trace of a product can be switched without changing the result. If A and B are m × n and n × m real or complex matrices, respectively, then

tr(AB) = tr(BA).

This is notable both because AB does not usually equal BA, and because the trace of either does not usually equal tr(A) tr(B). The similarity-invariance of the trace, meaning that tr(A) = tr(P^{-1}AP) for any square matrix A and any invertible matrix P of the same dimensions, is a fundamental consequence. This is proved by

tr(P^{-1}(AP)) = tr((AP)P^{-1}) = tr(A).

Similarity invariance is the crucial property of the trace for discussing traces of linear transformations, as below.
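The switching identity tr(AB) = tr(BA) for rectangular factors, and the similarity invariance it implies, can be checked with small integer matrices. The helpers below are ad hoc, and the matrix P is chosen with determinant 1 so that its inverse is also an integer matrix.

```python
def matmul(A, B):
    """Naive matrix product for matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# tr(AB) = tr(BA) with A 2x3 and B 3x2 (AB is 2x2, BA is 3x3)
A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 0], [0, 1], [1, 1]]
print(trace(matmul(A, B)), trace(matmul(B, A)))   # 15 15

# Similarity invariance: tr(P^-1 M P) = tr(M).
# det(P) = 1, so Pinv below is the exact integer inverse of P.
M = [[2, 1], [0, 3]]
P = [[1, 2], [1, 3]]
Pinv = [[3, -2], [-1, 1]]
print(trace(matmul(Pinv, matmul(M, P))))          # 5, equal to tr(M)
```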
Additionally, for real column vectors a ∈ R^n and b ∈ R^n, the trace of the outer product is equivalent to the inner product:

tr(a b^T) = b^T a.

=== Cyclic property ===

More generally, the trace is invariant under circular shifts, that is,

tr(ABCD) = tr(BCDA) = tr(CDAB) = tr(DABC).

This is known as the cyclic property. Arbitrary permutations are not allowed: in general,

tr(ABCD) ≠ tr(ACBD).

However, if products of three symmetric matrices are considered, any permutation is allowed, since

tr(ABC) = tr((ABC)^T) = tr(CBA) = tr(ACB),

where the first equality holds because the traces of a matrix and its transpose are equal. Note that this is not true in general for more than three factors.

=== Trace of a Kronecker product ===

The trace of the Kronecker product of two matrices is the product of their traces:

tr(A ⊗ B) = tr(A) tr(B).
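Both the cyclic property and the Kronecker-product rule are easy to confirm numerically. The `kron` function below is a hand-rolled Kronecker product written for illustration only.

```python
def matmul(A, B):
    """Naive matrix product for matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def kron(A, B):
    """Kronecker product of A (m x n) and B (p x q), as an mp x nq matrix."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 1]]
D = [[1, 1], [0, 2]]

# A circular shift of the factors leaves the trace unchanged:
abcd = trace(matmul(matmul(A, B), matmul(C, D)))
bcda = trace(matmul(matmul(B, C), matmul(D, A)))
print(abcd, bcda)   # 22 22

# Trace of a Kronecker product is the product of the traces:
print(trace(kron(A, C)), trace(A) * trace(C))   # 15 15
```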
=== Characterization of the trace ===

The following three properties,

tr(A + B) = tr(A) + tr(B),
tr(cA) = c tr(A),
tr(AB) = tr(BA),

characterize the trace up to a scalar multiple in the following sense: if f is a linear functional on the space of square matrices that satisfies f(xy) = f(yx), then f and tr are proportional. For n × n matrices, imposing the normalization f(I) = n makes f equal to the trace.

=== Trace as the sum of eigenvalues ===

Given any n × n matrix A,

tr(A) = \sum_{i=1}^{n} \lambda_i,

where λ_1, ..., λ_n are the eigenvalues of A counted with multiplicity. This holds even if A is a real matrix and some (or all) of the eigenvalues are complex numbers. This may be regarded as a consequence of the existence of the Jordan canonical form, together with the similarity-invariance of the trace discussed above.

=== Trace of commutator ===

When both A and B are n × n matrices, the trace of the (ring-theoretic) commutator of A and B vanishes, tr([A, B]) = 0, because tr(AB) = tr(BA) and tr is linear. One can state this as "the trace is a map of Lie algebras gl_n → k from operators to scalars", as the commutator of scalars is trivial (it is an Abelian Lie algebra).
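Two of the facts above, the eigenvalue sum and the vanishing trace of a commutator, can be checked for 2 × 2 matrices in pure Python. The eigenvalues come from the quadratic formula applied to the characteristic polynomial x² − tr(A)x + det(A); the matrices are arbitrary illustrative choices.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[2, 1], [1, 2]]
B = [[0, 5], [7, 1]]

# The commutator [A, B] = AB - BA always has trace zero.
AB, BA = matmul(A, B), matmul(B, A)
comm = [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]
print(trace(comm))        # 0

# Eigenvalues of A are the roots of x^2 - tr(A)x + det(A);
# here A has eigenvalues 3 and 1, which sum to tr(A) = 4.
t = trace(A)
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(t * t - 4 * d)
lam1, lam2 = (t + disc) / 2, (t - disc) / 2
print(lam1, lam2, lam1 + lam2 == t)   # 3.0 1.0 True
```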
In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices. Conversely, any square matrix with zero trace is a linear combination of the commutators of pairs of matrices. Moreover, any square matrix with zero trace is unitarily equivalent to a square matrix with diagonal consisting of all zeros.

=== Relationship to the characteristic polynomial ===

The trace of an n × n matrix A is the coefficient of t^{n-1} in the characteristic polynomial, with the sign possibly changed, depending on the convention used in the definition of the characteristic polynomial.

== Relationship to eigenvalues ==

If A is a linear operator represented by a square matrix with real or complex entries and if λ_1, ..., λ_n are the eigenvalues of A (listed according to their algebraic multiplicities), then

tr(A) = \sum_i \lambda_i.

This follows from the fact that A is always similar to its Jordan form, an upper triangular matrix having λ_1, ..., λ_n on the main diagonal. In contrast, the determinant of A is the product of its eigenvalues; that is,

det(A) = \prod_i \lambda_i.

Everything in the present section applies as well to any square matrix with coefficients in an algebraically closed field.

=== Derivative relationships ===

If ΔA is a square matrix with small entries and I denotes the identity matrix, then we have approximately

det(I + ΔA) ≈ 1 + tr(ΔA).

Precisely, this means that the trace is the derivative of the determinant function at the identity matrix.
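The first-order relationship between determinant and trace can be observed numerically. The perturbation ΔA below is an arbitrary illustrative choice; the point is that the error of the approximation is of order eps², not eps.

```python
eps = 1e-6
dA = [[2 * eps, eps], [eps, -eps]]     # a 2x2 "small" perturbation Delta A

# det(I + dA), computed exactly for the 2x2 case
det = (1 + dA[0][0]) * (1 + dA[1][1]) - dA[0][1] * dA[1][0]

first_order = 1 + (dA[0][0] + dA[1][1])   # 1 + tr(dA)
print(abs(det - first_order))             # ~3e-12, i.e. O(eps^2)
```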
Jacobi's formula

d \det(A) = tr(adj(A) \cdot dA)

is more general and describes the differential of the determinant at an arbitrary square matrix, in terms of the trace and the adjugate of the matrix. From this (or from the connection between the trace and the eigenvalues), one can derive a relation between the trace function, the matrix exponential function, and the determinant:

det(exp(A)) = exp(tr(A)).

A related characterization of the trace applies to linear vector fields. Given a matrix A, define a vector field F on R^n by F(x) = Ax. The components of this vector field are linear functions (given by the rows of A). Its divergence div F is a constant function, whose value is equal to tr(A). By the divergence theorem, one can interpret this in terms of flows: if F(x) represents the velocity of a fluid at location x and U is a region in R^n, the net flow of the fluid out of U is given by tr(A) · vol(U), where vol(U) is the volume of U.

The trace is a linear operator, hence it commutes with the derivative:

d tr(X) = tr(dX).

== Trace of a linear operator ==

In general, given a linear map f : V → V (where V is a finite-dimensional vector space), we can define the trace of this map by considering the trace of a matrix representation of f, that is, choosing a basis for V, describing f as a matrix relative to this basis, and taking the trace of this square matrix. The result does not depend on the basis chosen, since different bases give rise to similar matrices, allowing for a basis-independent definition of the trace of a linear map.
Such a definition can be given using the canonical isomorphism between the space End(V) of linear maps on V and V ⊗ V*, where V* is the dual space of V. Let v be in V and let g be in V*. Then the trace of the indecomposable element v ⊗ g is defined to be g(v); the trace of a general element is defined by linearity. The trace of a linear map f : V → V can then be defined as the trace, in the above sense, of the element of V ⊗ V* corresponding to f under the above-mentioned canonical isomorphism. Using an explicit basis for V and the corresponding dual basis for V*, one can show that this gives the same definition of the trace as given above.

== Numerical algorithms ==

=== Stochastic estimator ===

The trace can be estimated unbiasedly by "Hutchinson's trick": given any matrix W ∈ R^{n×n} and any random vector u ∈ R^n with E[u u^T] = I, we have

E[u^T W u] = tr(W).

For a proof, expand the expectation directly. Usually, the random vector is sampled from N(0, I) (the normal distribution) or {±1}^n (the Rademacher distribution). More sophisticated stochastic estimators of the trace have been developed.

== Applications ==

If a 2 × 2 real matrix has zero trace, its square is a diagonal matrix. The trace of a 2 × 2 complex matrix is used to classify Möbius transformations. First, the matrix is normalized to make its determinant equal to one. Then, if the square of the trace is 4, the corresponding transformation is parabolic. If the square is in the interval [0, 4), it is elliptic.
Finally, if the square is greater than 4, the transformation is loxodromic. See classification of Möbius transformations.

The trace is used to define characters of group representations. Two representations A, B : G → GL(V) of a group G are equivalent (up to change of basis on V) if tr(A(g)) = tr(B(g)) for all g ∈ G. The trace also plays a central role in the distribution of quadratic forms.

== Lie algebra ==

The trace is a map of Lie algebras tr : gl_n → K from the Lie algebra gl_n of linear operators on an n-dimensional space (n × n matrices with entries in K) to the Lie algebra K of scalars. Since K is Abelian (the Lie bracket vanishes), the fact that this is a map of Lie algebras is exactly the statement that the trace of a bracket vanishes:

tr([A, B]) = 0 for each A, B ∈ gl_n.

A matrix in the kernel of this map, that is, a matrix whose trace is zero, is often said to be traceless or trace free. These matrices form the simple Lie algebra sl_n, which is the Lie algebra of the special linear group of matrices with determinant 1. The special linear group consists of the matrices which do not change volume, while the special linear Lie algebra consists of the matrices which do not alter the volume of infinitesimal sets.

In fact, there is an internal direct sum decomposition gl_n = sl_n ⊕ K of operators/matrices into traceless operators/matrices and scalar operators/matrices. The projection map onto scalar operators can be expressed in terms of the trace, concretely as

A ↦ (1/n) tr(A) I.
Formally, one can compose the trace (the counit map) with the unit map K → gl_n of "inclusion of scalars" to obtain a map gl_n → gl_n mapping onto scalars and multiplying by n. Dividing by n makes this a projection, yielding the formula above. In terms of short exact sequences, one has

0 \to \mathfrak{sl}_n \to \mathfrak{gl}_n \xrightarrow{\operatorname{tr}} K \to 0,

which is analogous to

1 \to \operatorname{SL}_n \to \operatorname{GL}_n \xrightarrow{\det} K^* \to 1

(where K^* = K \setminus \{0\}) for Lie groups. However, the trace splits naturally (via 1/n times scalars), so gl_n = sl_n ⊕ K; the splitting of the determinant, by contrast, would be as the n-th root times scalars, which does not in general define a function, so the determinant does not split and the general linear group does not decompose: GL_n ≠ SL_n × K^*.

=== Bilinear forms ===

The bilinear form (where X, Y are square matrices)

B(X, Y) = tr(ad(X) ad(Y)),

where ad(X)Y = [X, Y] = XY − YX and, for orientation, if det Y ≠ 0, then ad(X)Y = (X − Y X Y^{-1}) Y.
B(X, Y) is called the Killing form; it is used to classify Lie algebras. The trace also defines a bilinear form

(X, Y) ↦ tr(XY).

This form is symmetric, non-degenerate, and associative in the sense that

tr(X[Y, Z]) = tr([X, Y]Z).

For a complex simple Lie algebra (such as sl_n), all such bilinear forms are proportional to one another; in particular, to the Killing form. Two matrices X and Y are said to be trace orthogonal if tr(XY) = 0.

There is a generalization to a general representation (ρ, g, V) of a Lie algebra g, such that ρ is a homomorphism of Lie algebras ρ : g → End(V). The trace form tr_V on End(V) is defined as above. The bilinear form

φ(X, Y) = tr_V(ρ(X) ρ(Y))

is symmetric and invariant due to cyclicity.

== Generalizations ==

The concept of trace of a matrix is generalized to the trace class of compact operators on Hilbert spaces, and the analog of the Frobenius norm is called the Hilbert–Schmidt norm.
If K is a trace-class operator, then for any orthonormal basis \{e_n\}_{n=1}^{\infty}, the trace is given by

tr(K) = \sum_n \langle e_n, K e_n \rangle,

and is finite and independent of the orthonormal basis.

The partial trace is another generalization of the trace that is operator-valued. The trace of a linear operator Z which lives on a product space A ⊗ B is equal to the partial traces over A and B:

tr(Z) = tr_A(tr_B(Z)) = tr_B(tr_A(Z)).

For more properties and a generalization of the partial trace, see traced monoidal categories.

If A is a general associative algebra over a field k, then a trace on A is often defined to be any functional tr : A → k which vanishes on commutators; that is, tr([a, b]) = 0 for all a, b ∈ A. Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.

A supertrace is the generalization of a trace to the setting of superalgebras. The operation of tensor contraction generalizes the trace to arbitrary tensors. Gomme and Klein (2011) define a matrix trace operator trm that operates on block matrices and use it to compute second-order perturbation solutions to dynamic economic models without the need for tensor notation.

== Traces in the language of tensor products ==

Given a vector space V, there is a natural bilinear map V × V* → F given by sending (v, φ) to the scalar φ(v).
The universal property of the tensor product V ⊗ V* automatically implies that this bilinear map is induced by a linear functional on V ⊗ V*. Similarly, there is a natural bilinear map V × V* → Hom(V, V) given by sending (v, φ) to the linear map w ↦ φ(w)v. The universal property of the tensor product, just as used previously, says that this bilinear map is induced by a linear map V ⊗ V* → Hom(V, V). If V is finite-dimensional, then this linear map is a linear isomorphism. This fundamental fact is a straightforward consequence of the existence of a (finite) basis of V, and can also be phrased as saying that any linear map V → V can be written as the sum of (finitely many) rank-one linear maps. Composing the inverse of the isomorphism with the linear functional obtained above results in a linear functional on Hom(V, V). This linear functional is exactly the same as the trace.

Using the definition of trace as the sum of diagonal elements, the matrix formula tr(AB) = tr(BA) is straightforward to prove, and was given above. In the present perspective, one is considering linear maps S and T, and viewing them as sums of rank-one maps, so that there are linear functionals φ_i and ψ_j and nonzero vectors v_i and w_j such that S(u) = \sum_i φ_i(u) v_i and T(u) = \sum_j ψ_j(u) w_j for any u in V. Then

(S \circ T)(u) = \sum_i φ_i\left( \sum_j ψ_j(u) w_j \right) v_i = \sum_i \sum_j ψ_j(u) φ_i(w_j) v_i

for any u in V. The rank-one linear map u ↦ ψ_j(u) φ_i(w_j) v_i has trace ψ_j(v_i) φ_i(w_j), and so

tr(S \circ T) = \sum_i \sum_j ψ_j(v_i) φ_i(w_j) = \sum_j \sum_i φ_i(w_j) ψ_j(v_i).

Following the same procedure with S and T reversed, one finds exactly the same formula, proving that tr(S ∘ T) equals tr(T ∘ S).
The above proof can be regarded as being based upon tensor products, given that the fundamental identity of End(V) with V ⊗ V* is equivalent to the expressibility of any linear map as the sum of rank-one linear maps. As such, the proof may be written in the notation of tensor products. Then one may consider the multilinear map V × V* × V × V* → V ⊗ V* given by sending (v, φ, w, ψ) to φ(w) v ⊗ ψ. Further composition with the trace map then results in φ(w) ψ(v), and this is unchanged if one were to have started with (w, ψ, v, φ) instead. One may also consider the bilinear map End(V) × End(V) → End(V) given by sending (f, g) to the composition f ∘ g, which is then induced by a linear map End(V) ⊗ End(V) → End(V). It can be seen that this coincides with the linear map V ⊗ V* ⊗ V ⊗ V* → V ⊗ V*. The established symmetry upon composition with the trace map then establishes the equality of the two traces.

For any finite-dimensional vector space V, there is a natural linear map F → V ⊗ V*; in the language of linear maps, it assigns to a scalar c the linear map c·id_V. Sometimes this is called the coevaluation map, and the trace V ⊗ V* → F is called the evaluation map. These structures can be axiomatized to define categorical traces in the abstract setting of category theory.

== See also ==

Trace of a tensor with respect to a metric tensor
Characteristic function
Field trace
Golden–Thompson inequality
Singular trace
Specht's theorem
Trace class
Trace identity
Trace inequalities
von Neumann's trace inequality

== External links ==

"Trace of a square matrix", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Trace_(linear_algebra)
In mathematics, especially in the area of abstract algebra known as ring theory, a free algebra is the noncommutative analogue of a polynomial ring, since its elements may be described as "polynomials" with non-commuting variables. Likewise, the polynomial ring may be regarded as a free commutative algebra.

== Definition ==

For R a commutative ring, the free (associative, unital) algebra on n indeterminates {X_1, ..., X_n} is the free R-module with a basis consisting of all words over the alphabet {X_1, ..., X_n} (including the empty word, which is the unit of the free algebra). This R-module becomes an R-algebra by defining a multiplication as follows: the product of two basis elements is the concatenation of the corresponding words,

(X_{i_1} X_{i_2} \cdots X_{i_l}) \cdot (X_{j_1} X_{j_2} \cdots X_{j_m}) = X_{i_1} X_{i_2} \cdots X_{i_l} X_{j_1} X_{j_2} \cdots X_{j_m},

and the product of two arbitrary R-module elements is thus uniquely determined (because the multiplication in an R-algebra must be R-bilinear). This R-algebra is denoted R⟨X_1, ..., X_n⟩. This construction can easily be generalized to an arbitrary set X of indeterminates. In short, for an arbitrary set X = {X_i ; i ∈ I}, the free (associative, unital) R-algebra on X is

R\langle X \rangle := \bigoplus_{w \in X^*} Rw

with the R-bilinear multiplication that is concatenation on words, where X* denotes the free monoid on X (i.e. words on the letters X_i), ⊕ denotes the external direct sum, and Rw denotes the free R-module on one element, the word w.
For example, in R⟨X_1, X_2, X_3, X_4⟩, for scalars α, β, γ, δ ∈ R, a concrete example of a product of two elements is

(\alpha X_1 X_2^2 + \beta X_2 X_3) \cdot (\gamma X_2 X_1 + \delta X_1^4 X_4) = \alpha\gamma\, X_1 X_2^3 X_1 + \alpha\delta\, X_1 X_2^2 X_1^4 X_4 + \beta\gamma\, X_2 X_3 X_2 X_1 + \beta\delta\, X_2 X_3 X_1^4 X_4.

The non-commutative polynomial ring may be identified with the monoid ring over R of the free monoid of all finite words in the X_i.

== Contrast with polynomials ==

Since the words over the alphabet {X_1, ..., X_n} form a basis of R⟨X_1, ..., X_n⟩, it is clear that any element of R⟨X_1, ..., X_n⟩ can be written uniquely in the form

\sum_{k=0}^{\infty} \sum_{i_1, i_2, \cdots, i_k \in \{1, 2, \cdots, n\}} a_{i_1, i_2, \cdots, i_k} X_{i_1} X_{i_2} \cdots X_{i_k},

where the a_{i_1, i_2, ..., i_k} are elements of R and all but finitely many of these elements are zero. This explains why the elements of R⟨X_1, ..., X_n⟩ are often denoted as "non-commutative polynomials" in the "variables" (or "indeterminates") X_1, ..., X_n; the elements a_{i_1, i_2, ..., i_k} are said to be "coefficients" of these polynomials, and the R-algebra R⟨X_1, ..., X_n⟩ is called the "non-commutative polynomial algebra over R in n indeterminates". Note that, unlike in an actual polynomial ring, the variables do not commute: for example, X_1 X_2 does not equal X_2 X_1.

More generally, one can construct the free algebra R⟨E⟩ on any set E of generators. Since rings may be regarded as Z-algebras, a free ring on E can be defined as the free algebra Z⟨E⟩.
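The multiplication rule (concatenate words, combine coefficients bilinearly) is easy to sketch in Python, representing an element of R⟨X_1, ..., X_n⟩ as a dict from words (tuples of generator indices) to coefficients. This representation and the function `mul` are our own illustration, not a standard library.

```python
def mul(f, g):
    """Product in the free algebra: elements are dicts {word tuple: coefficient}."""
    result = {}
    for w1, c1 in f.items():
        for w2, c2 in g.items():
            word = w1 + w2                          # concatenation of words
            result[word] = result.get(word, 0) + c1 * c2
    return {w: c for w, c in result.items() if c != 0}

# (X1*X2 + 2*X2) * (X2*X1) in Z<X1, X2>
f = {(1, 2): 1, (2,): 2}
g = {(2, 1): 1}
print(mul(f, g))   # {(1, 2, 2, 1): 1, (2, 2, 1): 2}

# Non-commutativity: X1*X2 != X2*X1
x1, x2 = {(1,): 1}, {(2,): 1}
print(mul(x1, x2) == mul(x2, x1))   # False
```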
Over a field, the free algebra on n indeterminates can be constructed as the tensor algebra on an n-dimensional vector space. For a more general coefficient ring, the same construction works if we take the free module on n generators.

The construction of the free algebra on E is functorial in nature and satisfies an appropriate universal property. The free algebra functor is left adjoint to the forgetful functor from the category of R-algebras to the category of sets.

Free algebras over division rings are free ideal rings.

== See also ==

Cofree coalgebra
Tensor algebra
Free object
Noncommutative ring
Rational series
Term algebra
Wikipedia/Free_algebra
A system of polynomial equations (sometimes simply a polynomial system) is a set of simultaneous equations f_1 = 0, ..., f_h = 0 where the f_i are polynomials in several variables, say x_1, ..., x_n, over some field k. A solution of a polynomial system is a set of values for the x_i which belong to some algebraically closed field extension K of k and make all equations true. When k is the field of rational numbers, K is generally assumed to be the field of complex numbers, because each solution belongs to a field extension of k which is isomorphic to a subfield of the complex numbers.

This article is about the methods for solving, that is, finding all solutions or describing them. As these methods are designed to be implemented on a computer, emphasis is given to fields k in which computation (including equality testing) is easy and efficient, that is, the field of rational numbers and finite fields.

Searching for solutions that belong to a specific set is a problem which is generally much more difficult, and is outside the scope of this article, except for the case of solutions in a given finite field. For the case of solutions of which all components are integers or rational numbers, see Diophantine equation.

== Definition ==

A simple example of a system of polynomial equations is

x^2 + y^2 - 5 = 0
xy - 2 = 0.

Its solutions are the four pairs (x, y) = (1, 2), (2, 1), (−1, −2), (−2, −1). These solutions can easily be checked by substitution, but more work is needed to prove that there are no other solutions. The subject of this article is the study of generalizations of such examples, and the description of the methods that are used for computing the solutions.
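The substitution check mentioned above takes only a few lines of Python; note that this confirms the four pairs are solutions, not that they are the only ones.

```python
def is_solution(x, y):
    """Check a candidate (x, y) against both equations of the example system."""
    return x**2 + y**2 - 5 == 0 and x * y - 2 == 0

pairs = [(1, 2), (2, 1), (-1, -2), (-2, -1)]
print(all(is_solution(x, y) for x, y in pairs))   # True
print(is_solution(0, 0))                          # False: not every pair works
```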
A system of polynomial equations, or polynomial system, is a collection of equations {\displaystyle {\begin{aligned}f_{1}\left(x_{1},\ldots ,x_{m}\right)&=0\\&\;\;\vdots \\f_{n}\left(x_{1},\ldots ,x_{m}\right)&=0,\end{aligned}}} where each fh is a polynomial in the indeterminates x1, ..., xm, with integer coefficients, or coefficients in some fixed field, often the field of rational numbers or a finite field. Other fields of coefficients, such as the real numbers, are less often used, as their elements cannot be represented in a computer (only approximations of real numbers can be used in computations, and these approximations are always rational numbers). A solution of a polynomial system is a tuple of values of (x1, ..., xm) that satisfies all equations of the polynomial system. The solutions are sought in the complex numbers, or more generally in an algebraically closed field containing the coefficients. In particular, in characteristic zero, all complex solutions are sought. Searching for the real or rational solutions is a much more difficult problem that is not considered in this article. The set of solutions is not always finite; for example, the solutions of the system {\displaystyle {\begin{aligned}x(x-1)&=0\\x(y-1)&=0\end{aligned}}} are the point (x, y) = (1, 1) and the line x = 0. Even when the solution set is finite, there is, in general, no closed-form expression of the solutions (in the case of a single equation, this is the Abel–Ruffini theorem). The Barth surface is the geometric representation of the solutions of a polynomial system reduced to a single equation of degree 6 in 3 variables; it has numerous singular points, which are the solutions of a system of 4 equations of degree 5 in 3 variables. Such an overdetermined system has no solution in general (that is, if the coefficients are not specific).
If it has a finite number of solutions, this number is at most 5³ = 125, by Bézout's theorem. However, it has been shown that, for the case of the singular points of a surface of degree 6, the maximum number of solutions is 65, and is reached by the Barth surface. == Basic properties and definitions == A system is overdetermined if the number of equations is higher than the number of variables. A system is inconsistent if it has no complex solution (or, if the coefficients are not complex numbers, no solution in an algebraically closed field containing the coefficients). By Hilbert's Nullstellensatz this means that 1 is a linear combination (with polynomials as coefficients) of the first members of the equations. Most but not all overdetermined systems, when constructed with random coefficients, are inconsistent. For example, the system x³ − 1 = 0, x² − 1 = 0 is overdetermined (having two equations but only one unknown), but it is not inconsistent since it has the solution x = 1. A system is underdetermined if the number of equations is lower than the number of variables. An underdetermined system is either inconsistent or has infinitely many complex solutions (or solutions in an algebraically closed field that contains the coefficients of the equations). This is a non-trivial result of commutative algebra that involves, in particular, Hilbert's Nullstellensatz and Krull's principal ideal theorem. A system is zero-dimensional if it has a finite number of complex solutions (or solutions in an algebraically closed field). This terminology comes from the fact that the algebraic variety of the solutions has dimension zero. A system with infinitely many solutions is said to be positive-dimensional. A zero-dimensional system with as many equations as variables is sometimes said to be well-behaved. Bézout's theorem asserts that a well-behaved system whose equations have degrees d1, ..., dn has at most d1⋯dn solutions. This bound is sharp.
If all the degrees are equal to d, this bound becomes dⁿ and is exponential in the number of variables. (The fundamental theorem of algebra is the special case n = 1.) This exponential behavior makes solving polynomial systems difficult and explains why there are few solvers that are able to automatically solve systems with Bézout's bound higher than, say, 25 (three equations of degree 3 or five equations of degree 2 are beyond this bound). == What is solving? == The first thing to do for solving a polynomial system is to decide whether it is inconsistent, zero-dimensional or positive-dimensional. This may be done by the computation of a Gröbner basis of the left-hand sides of the equations. The system is inconsistent if this Gröbner basis is reduced to 1. The system is zero-dimensional if, for every variable, there is a leading monomial of some element of the Gröbner basis which is a pure power of this variable. For this test, the best monomial order (that is, the one which generally leads to the fastest computation) is usually the graded reverse lexicographic one (grevlex). If the system is positive-dimensional, it has infinitely many solutions. It is thus not possible to enumerate them. It follows that, in this case, solving may only mean "finding a description of the solutions from which the relevant properties of the solutions are easy to extract". There is no commonly accepted such description. In fact there are many different "relevant properties", which involve almost every subfield of algebraic geometry. A natural example of such a question concerning positive-dimensional systems is the following: decide if a polynomial system over the rational numbers has a finite number of real solutions and compute them. A generalization of this question is to find at least one solution in each connected component of the set of real solutions of a polynomial system.
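The zero-dimensionality test on the leading monomials of a grevlex Gröbner basis can be sketched with SymPy. The helper functions here are illustrative assumptions (SymPy does not ship this exact test); the grevlex comparison key and the pure-power criterion follow the description above:

```python
# Zero-dimensionality test: for every variable, some leading monomial of the
# grevlex Groebner basis must be a pure power of that variable.
from sympy import symbols, groebner

x, y = symbols("x y")
G = groebner([x**2 + y**2 - 5, x*y - 2], x, y, order="grevlex")

def grevlex_key(m):
    # grevlex: higher total degree wins; ties broken by the reversed,
    # negated exponent vector (smallest exponent in the last variable wins)
    return (sum(m), tuple(-e for e in reversed(m)))

def leading_exponents(basis):
    # exponent tuple of the grevlex-leading monomial of each basis element
    return [max(p.monoms(), key=grevlex_key) for p in basis.polys]

def is_zero_dimensional(basis, n_vars):
    lms = leading_exponents(basis)
    return all(
        any(m[i] > 0 and all(e == 0 for j, e in enumerate(m) if j != i)
            for m in lms)
        for i in range(n_vars)
    )

print(is_zero_dimensional(G, 2))  # True: finitely many solutions
```

For the example system the test succeeds, consistent with its four solutions; for an ideal like (xy), whose only leading monomial is xy, it correctly reports a positive-dimensional system.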
The classical algorithm for solving these questions is cylindrical algebraic decomposition, which has a doubly exponential computational complexity and therefore cannot be used in practice, except for very small examples. For zero-dimensional systems, solving consists of computing all the solutions. There are two different ways of outputting the solutions. The most common way is possible only for real or complex solutions, and consists of outputting numeric approximations of the solutions. Such a solution is called numeric. A solution is certified if it is provided with a bound on the error of the approximations, and if this bound separates the different solutions. The other way of representing the solutions is said to be algebraic. It uses the fact that, for a zero-dimensional system, the solutions belong to the algebraic closure of the field k of the coefficients of the system. There are several ways to represent the solutions in an algebraic closure, which are discussed below. All of them allow one to compute a numerical approximation of the solutions by solving one or several univariate equations. For this computation, it is preferable to use a representation that involves solving only one univariate polynomial per solution, because computing the roots of a polynomial which has approximate coefficients is a highly unstable problem. == Extensions == === Trigonometric equations === A trigonometric equation is an equation g = 0 where g is a trigonometric polynomial. Such an equation may be converted into a polynomial system by expanding the sines and cosines in it (using sum and difference formulas), replacing sin(x) and cos(x) by two new variables s and c, and adding the new equation s² + c² − 1 = 0.
For example, because of the identity {\displaystyle \cos(3x)=4\cos ^{3}(x)-3\cos(x),} solving the equation {\displaystyle \sin ^{3}(x)+\cos(3x)=0} is equivalent to solving the polynomial system {\displaystyle {\begin{cases}s^{3}+4c^{3}-3c&=0\\s^{2}+c^{2}-1&=0.\end{cases}}} For each solution (c0, s0) of this system, there is a unique solution x of the equation such that 0 ≤ x < 2π. In the case of this simple example, it may be unclear whether or not the system is easier to solve than the equation. On more complicated examples, one lacks systematic methods for solving the equation directly, while software is available for automatically solving the corresponding system. === Solutions in a finite field === When solving a system over a finite field k with q elements, one is primarily interested in the solutions in k. As the elements of k are exactly the solutions of the equation x^q − x = 0, it suffices, for restricting the solutions to k, to add the equation xi^q − xi = 0 for each variable xi. === Coefficients in a number field or in a finite field with non-prime order === The elements of an algebraic number field are usually represented as polynomials in a generator of the field which satisfies some univariate polynomial equation. To work with a polynomial system whose coefficients belong to a number field, it suffices to consider this generator as a new variable and to add the equation of the generator to the equations of the system. Thus solving a polynomial system over a number field is reduced to solving another system over the rational numbers. For example, if a system contains {\displaystyle {\sqrt {2}}}, a system over the rational numbers is obtained by adding the equation r2² − 2 = 0 and replacing {\displaystyle {\sqrt {2}}} by r2 in the other equations.
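The conversion of the trigonometric equation above into a polynomial system can be verified mechanically; this sketch, assuming SymPy, checks the triple-angle identity and performs the substitution sin(x) → s, cos(x) → c:

```python
# Convert sin(x)^3 + cos(3x) = 0 into the first equation of the text's
# polynomial system by expanding multiple angles and substituting s, c.
from sympy import symbols, sin, cos, expand_trig, expand, simplify

x, s, c = symbols("x s c")

# The triple-angle identity used in the text:
assert simplify(expand_trig(cos(3*x)) - (4*cos(x)**3 - 3*cos(x))) == 0

# Expand the multiple angles, then replace sin(x) -> s and cos(x) -> c:
poly = expand_trig(sin(x)**3 + cos(3*x)).subs({sin(x): s, cos(x): c})
print(expand(poly))  # s**3 + 4*c**3 - 3*c
```

Together with s² + c² − 1 = 0, this reproduces exactly the polynomial system given in the text.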
In the case of a finite field, the same transformation always allows one to suppose that the field k has a prime order. == Algebraic representation of the solutions == === Regular chains === The usual way of representing the solutions is through zero-dimensional regular chains. Such a chain consists of a sequence of polynomials f1(x1), f2(x1, x2), ..., fn(x1, ..., xn) such that, for every i such that 1 ≤ i ≤ n: fi is a polynomial in x1, ..., xi only, which has a degree di > 0 in xi; the coefficient of xi^di in fi is a polynomial in x1, ..., xi−1 which does not have any common zero with f1, ..., fi−1. To such a regular chain is associated a triangular system of equations {\displaystyle {\begin{cases}f_{1}(x_{1})=0\\f_{2}(x_{1},x_{2})=0\\\quad \vdots \\f_{n}(x_{1},x_{2},\ldots ,x_{n})=0.\end{cases}}} The solutions of this system are obtained by solving the first univariate equation, substituting the solutions in the other equations, then solving the second equation, which is now univariate, and so on. The definition of regular chains implies that the univariate equation obtained from fi has degree di and thus that the system has d1 ⋯ dn solutions, provided that there is no multiple root in this resolution process (fundamental theorem of algebra). Every zero-dimensional system of polynomial equations is equivalent (i.e. has the same solutions) to a finite number of regular chains. Several regular chains may be needed, as is the case for the following system, which has three solutions. {\displaystyle {\begin{cases}x^{2}-1=0\\(x-1)(y-1)=0\\y^{2}-1=0.\end{cases}}} There are several algorithms for computing a triangular decomposition of an arbitrary polynomial system (not necessarily zero-dimensional) into regular chains (or regular semi-algebraic systems).
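The back-substitution process through regular chains can be sketched on the three-solution system above. The decomposition into the two chains below is asserted by hand here (an assumption consistent with the text's claim of three solutions), and each computed pair is checked against the original equations:

```python
# Back-substitution through a triangular decomposition of the system
#   x^2 - 1 = 0, (x - 1)(y - 1) = 0, y^2 - 1 = 0.
from sympy import symbols, roots

x, y = symbols("x y")
system = [x**2 - 1, (x - 1)*(y - 1), y**2 - 1]

chains = [
    [x - 1, y**2 - 1],  # yields (1, 1) and (1, -1)
    [x + 1, y - 1],     # yields (-1, 1)
]

solutions = []
for f1, f2 in chains:
    for x0 in roots(f1, x):                  # solve the univariate equation
        for y0 in roots(f2.subs(x, x0), y):  # substitute, solve the next one
            solutions.append((x0, y0))

# Every computed pair satisfies all three original equations.
assert all(eq.subs({x: x0, y: y0}) == 0
           for (x0, y0) in solutions for eq in system)
print(sorted(solutions))  # [(-1, 1), (1, -1), (1, 1)]
```

Note that a single chain cannot represent these three solutions, which is why two chains are needed, as the text observes.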
There is also an algorithm which is specific to the zero-dimensional case and is competitive, in this case, with the direct algorithms. It consists of first computing the Gröbner basis for the graded reverse lexicographic order (grevlex), then deducing the lexicographical Gröbner basis by the FGLM algorithm, and finally applying the Lextriangular algorithm. This representation of the solutions is fully convenient for coefficients in a finite field. However, for rational coefficients, two aspects have to be taken care of: The output may involve huge integers which may make the computation and the use of the result problematic. To deduce the numeric values of the solutions from the output, one has to solve univariate polynomials with approximate coefficients, which is a highly unstable problem. The first issue has been solved by Dahan and Schost: among the sets of regular chains that represent a given set of solutions, there is a set for which the coefficients are explicitly bounded in terms of the size of the input system, with a nearly optimal bound. This set, called the equiprojectable decomposition, depends only on the choice of the coordinates. This allows the use of modular methods for computing the equiprojectable decomposition efficiently. The second issue is generally solved by outputting regular chains of a special form, sometimes called shape lemma, for which all di but the first one are equal to 1. To get such regular chains, one may have to add a further variable, called the separating variable, which is given the index 0. The rational univariate representation, described below, allows one to compute such a special regular chain, satisfying the Dahan–Schost bound, by starting from either a regular chain or a Gröbner basis. === Rational univariate representation === The rational univariate representation or RUR is a representation of the solutions of a zero-dimensional polynomial system over the rational numbers which has been introduced by F. Rouillier.
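The shape-lemma idea — a single univariate polynomial plus polynomial expressions for the other variables — can be illustrated with a lexicographic Gröbner basis in SymPy. This is a sketch: production systems compute grevlex first and convert with FGLM, as described above, and the two-element basis shape assumed here holds for this particular system:

```python
# A shape-lemma style representation of the system
#   x^2 + y^2 - 5 = 0,  x*y - 2 = 0
# from a lexicographic Groebner basis (x eliminated first).
from sympy import symbols, groebner, solve

x, y = symbols("x y")
G = groebner([x**2 + y**2 - 5, x*y - 2], x, y, order="lex")
g_x, g_y = list(G.exprs)   # g_x is linear in x; g_y is univariate in y

x_of_y = solve(g_x, x)[0]  # x expressed as a polynomial in y
points = [(x_of_y.subs(y, y0), y0) for y0 in solve(g_y, y)]
print(sorted(points))      # [(-2, -1), (-1, -2), (1, 2), (2, 1)]
```

Only one univariate polynomial (here g_y, of degree 4) has to be solved; every other coordinate is read off by substitution, which is exactly the property the shape lemma and the RUR aim for.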
A RUR of a zero-dimensional system consists of a linear combination x0 of the variables, called the separating variable, and a system of equations {\displaystyle {\begin{cases}h(x_{0})=0\\x_{1}=g_{1}(x_{0})/g_{0}(x_{0})\\\quad \vdots \\x_{n}=g_{n}(x_{0})/g_{0}(x_{0}),\end{cases}}} where h is a univariate polynomial in x0 of degree D and g0, ..., gn are univariate polynomials in x0 of degree less than D. Given a zero-dimensional polynomial system over the rational numbers, the RUR has the following properties. All but a finite number of linear combinations of the variables are separating variables. When the separating variable is chosen, the RUR exists and is unique. In particular, h and the gi are defined independently of any algorithm to compute them. The solutions of the system are in one-to-one correspondence with the roots of h, and the multiplicity of each root of h equals the multiplicity of the corresponding solution. The solutions of the system are obtained by substituting the roots of h in the other equations. If h does not have any multiple root then g0 is the derivative of h. For example, for the system in the previous section, every linear combination of the variables, except the multiples of x, y and x + y, is a separating variable. If one chooses t = (x − y)/2 as a separating variable, then the RUR is {\displaystyle {\begin{cases}t^{3}-t=0\\x={\frac {t^{2}+2t-1}{3t^{2}-1}}\\y={\frac {t^{2}-2t-1}{3t^{2}-1}}.\end{cases}}} The RUR is uniquely defined for a given separating variable, independently of any algorithm, and it preserves the multiplicities of the roots. This is a notable difference with triangular decompositions (even the equiprojectable decomposition), which, in general, do not preserve multiplicities.
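The RUR above can be verified directly against the three-solution system of the previous section; this sketch (assuming SymPy) substitutes each root of h and checks both the original equations and the separating-variable relation t = (x − y)/2:

```python
# Check the text's RUR against the system
#   x^2 - 1 = 0, (x - 1)(y - 1) = 0, y^2 - 1 = 0.
from sympy import symbols, solve, simplify

x, y, t = symbols("x y t")
system = [x**2 - 1, (x - 1)*(y - 1), y**2 - 1]

h = t**3 - t                          # univariate polynomial of the RUR
x_t = (t**2 + 2*t - 1)/(3*t**2 - 1)
y_t = (t**2 - 2*t - 1)/(3*t**2 - 1)

points = []
for t0 in solve(h, t):                # roots t = -1, 0, 1
    x0, y0 = x_t.subs(t, t0), y_t.subs(t, t0)
    assert all(eq.subs({x: x0, y: y0}) == 0 for eq in system)
    assert simplify((x0 - y0)/2 - t0) == 0   # t really equals (x - y)/2
    points.append((x0, y0))
print(sorted(points))  # [(-1, 1), (1, -1), (1, 1)]
```

The three roots of h are in one-to-one correspondence with the three solutions, as the stated properties of the RUR require.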
The RUR shares with the equiprojectable decomposition the property of producing an output with coefficients of relatively small size. For zero-dimensional systems, the RUR allows retrieval of the numeric values of the solutions by solving a single univariate polynomial and substituting its roots in rational functions. This allows the production of certified approximations of the solutions to any given precision. Moreover, the univariate polynomial h(x0) of the RUR may be factorized, and this gives a RUR for every irreducible factor. This provides the prime decomposition of the given ideal (that is, the primary decomposition of the radical of the ideal). In practice, this provides an output with much smaller coefficients, especially in the case of systems with high multiplicities. Unlike triangular decompositions and equiprojectable decompositions, the RUR is not defined in positive dimension. == Solving numerically == === General solving algorithms === The general numerical algorithms which are designed for any system of nonlinear equations work also for polynomial systems. However, the specific methods will generally be preferred, as the general methods generally do not allow one to find all solutions. In particular, when a general method does not find any solution, this is usually not an indication that there is no solution. Nevertheless, two methods deserve to be mentioned here. Newton's method may be used if the number of equations is equal to the number of variables. It does not allow one to find all the solutions nor to prove that there is no solution. But it is very fast when starting from a point which is close to a solution. Therefore, it is a basic tool for the homotopy continuation method described below. Optimization is rarely used for solving polynomial systems, but it succeeded, circa 1970, in showing that a system of 81 quadratic equations in 56 variables is not inconsistent.
With the other known methods, this remains beyond the possibilities of modern technology, as of 2022. This method consists simply in minimizing the sum of the squares of the equations. If zero is found as a local minimum, then it is attained at a solution. This method works for overdetermined systems, but yields no information if all the local minima which are found are positive. === Homotopy continuation method === This is a semi-numeric method which supposes that the number of equations is equal to the number of variables. This method is relatively old but it has been dramatically improved in the last decades. This method divides into three steps. First an upper bound on the number of solutions is computed. This bound has to be as sharp as possible. Therefore, it is computed by, at least, four different methods and the best value, say {\displaystyle N}, is kept. In the second step, a system {\displaystyle g_{1}=0,\,\ldots ,\,g_{n}=0} of polynomial equations is generated which has exactly {\displaystyle N} solutions that are easy to compute. This new system has the same number {\displaystyle n} of variables and the same number {\displaystyle n} of equations and the same general structure as the system to solve, {\displaystyle f_{1}=0,\,\ldots ,\,f_{n}=0}. Then a homotopy between the two systems is considered. It consists, for example, of the straight line between the two systems, but other paths may be considered, in particular to avoid some singularities, in the system {\displaystyle (1-t)g_{1}+tf_{1}=0,\,\ldots ,\,(1-t)g_{n}+tf_{n}=0}. The homotopy continuation consists in deforming the parameter {\displaystyle t} from 0 to 1 and following the {\displaystyle N} solutions during this deformation. This gives the desired solutions for {\displaystyle t=1}.
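The three steps above can be sketched numerically with NumPy on the introductory system. Everything in this sketch is an illustrative assumption rather than a production algorithm: the start system x² − 1 = 0, y² − 1 = 0 is a toy choice with the same Bézout number (4), and the fixed complex "gamma" factor stands in for the standard random constant used to keep paths away from singularities:

```python
# Minimal homotopy continuation for f = (x^2 + y^2 - 5, x*y - 2),
# starting from g = (x^2 - 1, y^2 - 1) with known solutions (+-1, +-1).
import numpy as np

def f(z):        # target system
    x, y = z
    return np.array([x**2 + y**2 - 5, x*y - 2])

def jf(z):       # its Jacobian
    x, y = z
    return np.array([[2*x, 2*y], [y, x]])

def g(z):        # start system
    x, y = z
    return np.array([x**2 - 1, y**2 - 1])

def jg(z):
    x, y = z
    return np.array([[2*x, 0], [0, 2*y]])

gamma = 0.8 + 0.6j   # fixed stand-in for a random complex constant

def track(z, steps=400, newton_iters=3):
    # Deform t from 0 to 1, correcting with Newton's method at each step.
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            H = (1 - t)*gamma*g(z) + t*f(z)
            J = (1 - t)*gamma*jg(z) + t*jf(z)
            z = z - np.linalg.solve(J, H)
    for _ in range(10):                    # final polish on f itself
        z = z - np.linalg.solve(jf(z), f(z))
    return z

starts = [np.array([sx, sy], dtype=complex)
          for sx in (1, -1) for sy in (1, -1)]
ends = [track(z) for z in starts]
print(max(np.linalg.norm(f(z)) for z in ends))  # tiny residual at t = 1
```

Each of the four tracked paths ends at a solution of the target system, recovering the four pairs (1, 2), (2, 1), (−1, −2), (−2, −1).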
Following means that, if {\displaystyle t_{1}<t_{2}}, the solutions for {\displaystyle t=t_{2}} are deduced from the solutions for {\displaystyle t=t_{1}} by Newton's method. The difficulty here is to choose the value of {\displaystyle t_{2}-t_{1}} well: if it is too large, Newton's convergence may be slow and it may even jump from one solution path to another; if it is too small, the number of steps slows down the method. === Numerically solving from the rational univariate representation === To deduce the numeric values of the solutions from a RUR seems easy: it suffices to compute the roots of the univariate polynomial and to substitute them in the other equations. This is not so easy because the evaluation of a polynomial at the roots of another polynomial is highly unstable. The roots of the univariate polynomial thus have to be computed at a high precision which may not be defined once for all. There are two algorithms which fulfill this requirement. The Aberth method, implemented in MPSolve, computes all the complex roots to any precision. Uspensky's algorithm of Collins and Akritas, improved by Rouillier and Zimmermann and based on Descartes' rule of signs, computes the real roots, isolated in intervals of arbitrarily small width. It is implemented in Maple (functions fsolve and RootFinding[Isolate]). == Software packages == There are at least four software packages which can solve zero-dimensional systems automatically (by automatically, one means that no human intervention is needed between input and output, and thus that no knowledge of the method by the user is needed). There are also several other software packages which may be useful for solving zero-dimensional systems. Some of them are listed after the automatic solvers.
The Maple function RootFinding[Isolate] takes as input any polynomial system over the rational numbers (if some coefficients are floating point numbers, they are converted to rational numbers) and outputs the real solutions represented either (optionally) as intervals of rational numbers or as floating point approximations of arbitrary precision. If the system is not zero-dimensional, this is signaled as an error. Internally, this solver, designed by F. Rouillier, first computes a Gröbner basis and then a rational univariate representation from which the required approximations of the solutions are deduced. It works routinely for systems having up to a few hundred complex solutions. The rational univariate representation may be computed with the Maple function Groebner[RationalUnivariateRepresentation]. To extract all the complex solutions from a rational univariate representation, one may use MPSolve, which computes the complex roots of univariate polynomials to any precision. It is recommended to run MPSolve several times, doubling the precision each time, until the solutions remain stable, as the substitution of the roots in the equations for the input variables can be highly unstable. The second solver is PHCpack, written under the direction of J. Verschelde. PHCpack implements the homotopy continuation method. This solver computes the isolated complex solutions of polynomial systems having as many equations as variables. The third solver is Bertini, written by D. J. Bates, J. D. Hauenstein, A. J. Sommese, and C. W. Wampler. Bertini uses numerical homotopy continuation with adaptive precision. In addition to computing zero-dimensional solution sets, both PHCpack and Bertini are capable of working with positive-dimensional solution sets. The fourth solver is the Maple library RegularChains, written by Marc Moreno-Maza and collaborators. It contains various functions for solving polynomial systems by means of regular chains.
== See also == Elimination theory Systems of polynomial inequalities Triangular decomposition Wu's method of characteristic set == References ==
Homological algebra is the branch of mathematics that studies homology in a general algebraic setting. It is a relatively young discipline, whose origins can be traced to investigations in combinatorial topology (a precursor to algebraic topology) and abstract algebra (theory of modules and syzygies) at the end of the 19th century, chiefly by Henri Poincaré and David Hilbert. Homological algebra is the study of homological functors and the intricate algebraic structures that they entail; its development was closely intertwined with the emergence of category theory. A central concept is that of chain complexes, which can be studied through their homology and cohomology. Homological algebra affords the means to extract information contained in these complexes and present it in the form of homological invariants of rings, modules, topological spaces, and other "tangible" mathematical objects. A spectral sequence is a powerful tool for this. It has played an enormous role in algebraic topology. Its influence has gradually expanded and presently includes commutative algebra, algebraic geometry, algebraic number theory, representation theory, mathematical physics, operator algebras, complex analysis, and the theory of partial differential equations. K-theory is an independent discipline which draws upon methods of homological algebra, as does the noncommutative geometry of Alain Connes. == History == Homological algebra began to be studied in its most basic form in the late 19th century as a branch of topology and in the 1940s became an independent subject with the study of objects such as the ext functor and the tor functor, among others. == Chain complexes and homology == The notion of chain complex is central in homological algebra. 
An abstract chain complex is a sequence {\displaystyle (C_{\bullet },d_{\bullet })} of abelian groups and group homomorphisms, with the property that the composition of any two consecutive maps is zero: {\displaystyle C_{\bullet }:\cdots \longrightarrow C_{n+1}{\stackrel {d_{n+1}}{\longrightarrow }}C_{n}{\stackrel {d_{n}}{\longrightarrow }}C_{n-1}{\stackrel {d_{n-1}}{\longrightarrow }}\cdots ,\quad d_{n}\circ d_{n+1}=0.} The elements of Cn are called n-chains and the homomorphisms dn are called the boundary maps or differentials. The chain groups Cn may be endowed with extra structure; for example, they may be vector spaces or modules over a fixed ring R. The differentials must preserve the extra structure if it exists; for example, they must be linear maps or homomorphisms of R-modules. For notational convenience, restrict attention to abelian groups (more correctly, to the category Ab of abelian groups); a celebrated theorem by Barry Mitchell implies the results will generalize to any abelian category. Every chain complex defines two further sequences of abelian groups, the cycles Zn = Ker dn and the boundaries Bn = Im dn+1, where Ker d and Im d denote the kernel and the image of d. Since the composition of two consecutive boundary maps is zero, these groups are embedded into each other as {\displaystyle B_{n}\subseteq Z_{n}\subseteq C_{n}.} Subgroups of abelian groups are automatically normal; therefore we can define the nth homology group Hn(C) as the factor group of the n-cycles by the n-boundaries, {\displaystyle H_{n}(C)=Z_{n}/B_{n}=\operatorname {Ker} \,d_{n}/\operatorname {Im} \,d_{n+1}.} A chain complex is called acyclic or an exact sequence if all its homology groups are zero. Chain complexes arise in abundance in algebra and algebraic topology.
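Over a field, the dimensions of the homology groups (Betti numbers) can be computed from matrix ranks, since dim Hn = dim Zn − dim Bn = (dim Cn − rank dn) − rank dn+1. This sketch uses NumPy and, as an illustrative assumption, the chain complex of the boundary of a triangle (a simplicial circle):

```python
# Betti numbers of a chain complex over Q via matrix ranks:
#   dim H_n = dim C_n - rank(d_n) - rank(d_{n+1}).
import numpy as np

# d1 : C1 (edges 01, 02, 12) -> C0 (vertices 0, 1, 2), with d([i,j]) = [j] - [i]
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

def betti(dim_Cn, d_n, d_np1):
    """dim of H_n = dim ker(d_n) - rank(d_{n+1}); None means the zero map."""
    rank_dn = np.linalg.matrix_rank(d_n) if d_n is not None else 0
    rank_dnp1 = np.linalg.matrix_rank(d_np1) if d_np1 is not None else 0
    return dim_Cn - rank_dn - rank_dnp1

b0 = betti(3, None, d1)   # d_0 = 0, so Z_0 = C_0
b1 = betti(3, d1, None)   # d_2 = 0, so B_1 = 0
print(b0, b1)             # 1 1: one connected component, one 1-dimensional hole
```

The result b0 = b1 = 1 matches the homology of a circle, illustrating how the algebraic invariants reflect the shape of the underlying space.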
For example, if X is a topological space then the singular chains Cn(X) are formal linear combinations of continuous maps from the standard n-simplex into X; if K is a simplicial complex then the simplicial chains Cn(K) are formal linear combinations of the n-simplices of K; if A = F/R is a presentation of an abelian group A by generators and relations, where F is a free abelian group spanned by the generators and R is the subgroup of relations, then letting C1(A) = R, C0(A) = F, and Cn(A) = 0 for all other n defines a sequence of abelian groups. In all these cases, there are natural differentials dn making Cn into a chain complex, whose homology reflects the structure of the topological space X, the simplicial complex K, or the abelian group A. In the case of topological spaces, we arrive at the notion of singular homology, which plays a fundamental role in investigating the properties of such spaces, for example, manifolds. On a philosophical level, homological algebra teaches us that certain chain complexes associated with algebraic or geometric objects (topological spaces, simplicial complexes, R-modules) contain a lot of valuable algebraic information about them, with the homology being only the most readily available part. On a technical level, homological algebra provides the tools for manipulating complexes and extracting this information. Here are two general illustrations. Two objects X and Y are connected by a map f between them. Homological algebra studies the relation, induced by the map f, between chain complexes associated with X and Y and their homology. This is generalized to the case of several objects and maps connecting them. Phrased in the language of category theory, homological algebra studies the functorial properties of various constructions of chain complexes and of the homology of these complexes. 
An object X admits multiple descriptions (for example, as a topological space and as a simplicial complex) or the complex {\displaystyle C_{\bullet }(X)} is constructed using some 'presentation' of X, which involves non-canonical choices. It is important to know the effect of change in the description of X on chain complexes associated with X. Typically, the complex and its homology {\displaystyle H_{\bullet }(C)} are functorial with respect to the presentation; and the homology (although not the complex itself) is actually independent of the presentation chosen, thus it is an invariant of X. == Standard tools == === Exact sequences === In the context of group theory, a sequence {\displaystyle G_{0}\;{\xrightarrow {f_{1}}}\;G_{1}\;{\xrightarrow {f_{2}}}\;G_{2}\;{\xrightarrow {f_{3}}}\;\cdots \;{\xrightarrow {f_{n}}}\;G_{n}} of groups and group homomorphisms is called exact if the image of each homomorphism is equal to the kernel of the next: {\displaystyle \mathrm {im} (f_{k})=\mathrm {ker} (f_{k+1}).} Note that the sequence of groups and homomorphisms may be either finite or infinite. A similar definition can be made for certain other algebraic structures. For example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms. More generally, the notion of an exact sequence makes sense in any category with kernels and cokernels. ==== Short ==== The most common type of exact sequence is the short exact sequence. This is an exact sequence of the form {\displaystyle A\;{\overset {f}{\hookrightarrow }}\;B\;{\overset {g}{\twoheadrightarrow }}\;C} where ƒ is a monomorphism and g is an epimorphism. In this case, A is a subobject of B, and the corresponding quotient is isomorphic to C: {\displaystyle C\cong B/f(A)} (where f(A) = im(f)).
A short exact sequence of abelian groups may also be written as an exact sequence with five terms: {\displaystyle 0\;{\xrightarrow {}}\;A\;{\xrightarrow {f}}\;B\;{\xrightarrow {g}}\;C\;{\xrightarrow {}}\;0} where 0 represents the zero object, such as the trivial group or a zero-dimensional vector space. The placement of the 0's forces ƒ to be a monomorphism and g to be an epimorphism (see below). ==== Long ==== A long exact sequence is an exact sequence indexed by the natural numbers. === Five lemma === Consider the following commutative diagram in any abelian category (such as the category of abelian groups or the category of vector spaces over a given field) or in the category of groups. The five lemma states that, if the rows are exact, m and p are isomorphisms, l is an epimorphism, and q is a monomorphism, then n is also an isomorphism. === Snake lemma === In an abelian category (such as the category of abelian groups or the category of vector spaces over a given field), consider a commutative diagram where the rows are exact sequences and 0 is the zero object. Then there is an exact sequence relating the kernels and cokernels of a, b, and c: {\displaystyle \ker a\to \ker b\to \ker c{\overset {d}{\to }}\operatorname {coker} a\to \operatorname {coker} b\to \operatorname {coker} c} Furthermore, if the morphism f is a monomorphism, then so is the morphism ker a → ker b, and if g' is an epimorphism, then so is coker b → coker c. === Abelian categories === In mathematics, an abelian category is a category in which morphisms and objects can be added and in which kernels and cokernels exist and have desirable properties. The motivating prototype example of an abelian category is the category of abelian groups, Ab. The theory originated in a tentative attempt to unify several cohomology theories by Alexander Grothendieck.
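For finite-dimensional vector spaces given as matrices, the exactness conditions above reduce to rank computations: f is a monomorphism iff rank f = dim A, g is an epimorphism iff rank g = dim C, and im f = ker g iff g∘f = 0 and rank f + rank g = dim B. This NumPy sketch uses a hypothetical example sequence 0 → Q → Q² → Q → 0:

```python
# Check exactness of a short sequence of finite-dimensional vector spaces
#   0 -> A -f-> B -g-> C -> 0,  with f and g given as matrices.
import numpy as np

f = np.array([[1.0], [0.0]])   # A = Q -> B = Q^2, injects onto the first axis
g = np.array([[0.0, 1.0]])     # B = Q^2 -> C = Q, projects onto the second axis

def is_short_exact(f, g):
    rank = np.linalg.matrix_rank
    injective  = rank(f) == f.shape[1]             # ker f = 0
    surjective = rank(g) == g.shape[0]             # im g = C
    middle     = (np.allclose(g @ f, 0)            # im f inside ker g ...
                  and rank(f) + rank(g) == f.shape[0])  # ... with equal dims
    return injective and surjective and middle

print(is_short_exact(f, g))  # True
```

Replacing g by the projection onto the first axis breaks exactness at B (g∘f ≠ 0), and the check correctly fails.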
Abelian categories are very stable categories; for example, they are regular and they satisfy the snake lemma. The class of abelian categories is closed under several categorical constructions; for example, the category of chain complexes of an abelian category and the category of functors from a small category to an abelian category are abelian as well. These stability properties make them inevitable in homological algebra and beyond; the theory has major applications in algebraic geometry, cohomology and pure category theory. Abelian categories are named after Niels Henrik Abel. More concretely, a category is abelian if it has a zero object, it has all binary products and binary coproducts, it has all kernels and cokernels, and all monomorphisms and epimorphisms are normal. === Derived functor === Suppose we are given a covariant left exact functor F : A → B between two abelian categories A and B. If 0 → A → B → C → 0 is a short exact sequence in A, then applying F yields the exact sequence 0 → F(A) → F(B) → F(C) and one could ask how to continue this sequence to the right to form a long exact sequence. Strictly speaking, this question is ill-posed, since there are always numerous different ways to continue a given exact sequence to the right. But it turns out that (if A is "nice" enough) there is one canonical way of doing so, given by the right derived functors of F. For every i≥1, there is a functor RiF: A → B, and the above sequence continues like so: 0 → F(A) → F(B) → F(C) → R1F(A) → R1F(B) → R1F(C) → R2F(A) → R2F(B) → ... . From this we see that F is an exact functor if and only if R1F = 0; so in a sense the right derived functors of F measure "how far" F is from being exact. === Ext functor === Let R be a ring and let ModR be the category of modules over R. Let B be in ModR and set T(B) = HomR(A,B), for fixed A in ModR. This is a left exact functor and thus has right derived functors RnT. The Ext functor is defined by Ext R n ⁡ ( A , B ) = ( R n T ) ( B ) . 
{\displaystyle \operatorname {Ext} _{R}^{n}(A,B)=(R^{n}T)(B).} This can be calculated by taking any injective resolution 0 → B → I 0 → I 1 → ⋯ , {\displaystyle 0\rightarrow B\rightarrow I^{0}\rightarrow I^{1}\rightarrow \cdots ,} and computing 0 → Hom R ⁡ ( A , I 0 ) → Hom R ⁡ ( A , I 1 ) → ⋯ . {\displaystyle 0\rightarrow \operatorname {Hom} _{R}(A,I^{0})\rightarrow \operatorname {Hom} _{R}(A,I^{1})\rightarrow \cdots .} Then (RnT)(B) is the cohomology of this complex. Note that HomR(A,B) is excluded from the complex. An alternative definition is given using the functor G(A)=HomR(A,B). For a fixed module B, this is a contravariant left exact functor, and thus we also have right derived functors RnG, and can define Ext R n ⁡ ( A , B ) = ( R n G ) ( A ) . {\displaystyle \operatorname {Ext} _{R}^{n}(A,B)=(R^{n}G)(A).} This can be calculated by choosing any projective resolution ⋯ → P 1 → P 0 → A → 0 , {\displaystyle \dots \rightarrow P^{1}\rightarrow P^{0}\rightarrow A\rightarrow 0,} and proceeding dually by computing 0 → Hom R ⁡ ( P 0 , B ) → Hom R ⁡ ( P 1 , B ) → ⋯ . {\displaystyle 0\rightarrow \operatorname {Hom} _{R}(P^{0},B)\rightarrow \operatorname {Hom} _{R}(P^{1},B)\rightarrow \cdots .} Then (RnG)(A) is the cohomology of this complex. Again note that HomR(A,B) is excluded. These two constructions turn out to yield isomorphic results, and so both may be used to calculate the Ext functor. === Tor functor === Suppose R is a ring, and denote by R-Mod the category of left R-modules and by Mod-R the category of right R-modules (if R is commutative, the two categories coincide). Fix a module B in R-Mod. For A in Mod-R, set T(A) = A⊗RB. Then T is a right exact functor from Mod-R to the category of abelian groups Ab (in the case when R is commutative, it is a right exact functor from Mod-R to Mod-R) and its left derived functors LnT are defined. 
We set T o r n R ( A , B ) = ( L n T ) ( A ) {\displaystyle \mathrm {Tor} _{n}^{R}(A,B)=(L_{n}T)(A)} i.e., we take a projective resolution ⋯ → P 2 → P 1 → P 0 → A → 0 {\displaystyle \cdots \rightarrow P_{2}\rightarrow P_{1}\rightarrow P_{0}\rightarrow A\rightarrow 0} then remove the A term and tensor the projective resolution with B to get the complex ⋯ → P 2 ⊗ R B → P 1 ⊗ R B → P 0 ⊗ R B → 0 {\displaystyle \cdots \rightarrow P_{2}\otimes _{R}B\rightarrow P_{1}\otimes _{R}B\rightarrow P_{0}\otimes _{R}B\rightarrow 0} (note that A⊗RB does not appear and the last arrow is just the zero map) and take the homology of this complex. === Spectral sequence === Fix an abelian category, such as a category of modules over a ring. A spectral sequence is a choice of a nonnegative integer r0 and a collection of three sequences: for all integers r ≥ r0, an object Er, called a sheet (as in a sheet of paper), or sometimes a page or a term; endomorphisms dr : Er → Er satisfying dr ∘ dr = 0, called boundary maps or differentials; and isomorphisms of Er+1 with H(Er), the homology of Er with respect to dr. A doubly graded spectral sequence has a tremendous amount of data to keep track of, but there is a common visualization technique which makes the structure of the spectral sequence clearer. We have three indices, r, p, and q. For each r, imagine that we have a sheet of graph paper. On this sheet, we will take p to be the horizontal direction and q to be the vertical direction. At each lattice point we have the object E r p , q {\displaystyle E_{r}^{p,q}} . It is very common for n = p + q to be another natural index in the spectral sequence. n runs diagonally, northwest to southeast, across each sheet. In the homological case, the differentials have bidegree (−r, r − 1), so they decrease n by one. In the cohomological case, n is increased by one. When r is zero, the differential moves objects one space down or up. This is similar to the differential on a chain complex. 
When r is one, the differential moves objects one space to the left or right. When r is two, the differential moves objects just like a knight's move in chess. For higher r, the differential acts like a generalized knight's move. == Functoriality == A continuous map of topological spaces gives rise to a homomorphism between their nth homology groups for all n. This basic fact of algebraic topology finds a natural explanation through certain properties of chain complexes. Since it is very common to study several topological spaces simultaneously, in homological algebra one is led to simultaneous consideration of multiple chain complexes. A morphism between two chain complexes, F : C ∙ → D ∙ , {\displaystyle F:C_{\bullet }\to D_{\bullet },} is a family of homomorphisms of abelian groups F n : C n → D n {\displaystyle F_{n}:C_{n}\to D_{n}} that commute with the differentials, in the sense that F n − 1 ∘ d n C = d n D ∘ F n {\displaystyle F_{n-1}\circ d_{n}^{C}=d_{n}^{D}\circ F_{n}} for all n. A morphism of chain complexes induces a morphism H ∙ ( F ) {\displaystyle H_{\bullet }(F)} of their homology groups, consisting of the homomorphisms H n ( F ) : H n ( C ) → H n ( D ) {\displaystyle H_{n}(F):H_{n}(C)\to H_{n}(D)} for all n. A morphism F is called a quasi-isomorphism if it induces an isomorphism on the nth homology for all n. Many constructions of chain complexes arising in algebra and geometry, including singular homology, have the following functoriality property: if two objects X and Y are connected by a map f, then the associated chain complexes are connected by a morphism F = C ( f ) : C ∙ ( X ) → C ∙ ( Y ) , {\displaystyle F=C(f):C_{\bullet }(X)\to C_{\bullet }(Y),} and moreover, the composition g ∘ f {\displaystyle g\circ f} of maps f: X → Y and g: Y → Z induces the morphism C ( g ∘ f ) : C ∙ ( X ) → C ∙ ( Z ) {\displaystyle C(g\circ f):C_{\bullet }(X)\to C_{\bullet }(Z)} that coincides with the composition C ( g ) ∘ C ( f ) . 
{\displaystyle C(g)\circ C(f).} It follows that the homology groups H ∙ ( C ) {\displaystyle H_{\bullet }(C)} are functorial as well, so that morphisms between algebraic or topological objects give rise to compatible maps between their homology. The following definition arises from a typical situation in algebra and topology. A triple consisting of three chain complexes L ∙ , M ∙ , N ∙ {\displaystyle L_{\bullet },M_{\bullet },N_{\bullet }} and two morphisms between them, f : L ∙ → M ∙ , g : M ∙ → N ∙ , {\displaystyle f:L_{\bullet }\to M_{\bullet },g:M_{\bullet }\to N_{\bullet },} is called an exact triple, or a short exact sequence of complexes, and written as 0 ⟶ L ∙ ⟶ f M ∙ ⟶ g N ∙ ⟶ 0 , {\displaystyle 0\longrightarrow L_{\bullet }{\overset {f}{\longrightarrow }}M_{\bullet }{\overset {g}{\longrightarrow }}N_{\bullet }\longrightarrow 0,} if for any n, the sequence 0 ⟶ L n ⟶ f n M n ⟶ g n N n ⟶ 0 {\displaystyle 0\longrightarrow L_{n}{\overset {f_{n}}{\longrightarrow }}M_{n}{\overset {g_{n}}{\longrightarrow }}N_{n}\longrightarrow 0} is a short exact sequence of abelian groups. By definition, this means that fn is an injection, gn is a surjection, and Im fn = Ker gn. One of the most basic theorems of homological algebra, sometimes known as the zig-zag lemma, states that, in this case, there is a long exact sequence in homology ⋯ ⟶ H n ( L ) ⟶ H n ( f ) H n ( M ) ⟶ H n ( g ) H n ( N ) ⟶ δ n H n − 1 ( L ) ⟶ H n − 1 ( f ) H n − 1 ( M ) ⟶ ⋯ , {\displaystyle \cdots \longrightarrow H_{n}(L){\overset {H_{n}(f)}{\longrightarrow }}H_{n}(M){\overset {H_{n}(g)}{\longrightarrow }}H_{n}(N){\overset {\delta _{n}}{\longrightarrow }}H_{n-1}(L){\overset {H_{n-1}(f)}{\longrightarrow }}H_{n-1}(M)\longrightarrow \cdots ,} where the homology groups of L, M, and N cyclically follow each other, and δn are certain homomorphisms determined by f and g, called the connecting homomorphisms. 
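Both the defining identity d ∘ d = 0 and the homology groups of a small chain complex can be computed mechanically from the ranks of the boundary maps. A Python sketch for the simplicial chain complex of a solid triangle, with rational coefficients (the complex is an assumed example, not taken from the text):

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix over Q, by Gaussian elimination with exact fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        pv = M[r][c]
        M[r] = [x / pv for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Chain complex of a solid triangle on vertices v0, v1, v2:
#   C2 = Q (the 2-simplex) --d2--> C1 = Q^3 (edges e01, e02, e12) --d1--> C0 = Q^3.
d1 = [[-1, -1,  0],    # rows: v0, v1, v2; columns: e01, e02, e12
      [ 1,  0, -1],
      [ 0,  1,  1]]
d2 = [[ 1],            # boundary of [v0, v1, v2] = e01 - e02 + e12
      [-1],
      [ 1]]

# d1 o d2 = 0, the defining identity of a chain complex
comp = [sum(d1[i][k] * d2[k][0] for k in range(3)) for i in range(3)]
assert comp == [0, 0, 0]

# Betti numbers: b_n = dim ker d_n - rank d_{n+1}
b0 = 3 - rank(d1)                  # dim C0 - rank d1  (d0 = 0)
b1 = (3 - rank(d1)) - rank(d2)     # dim ker d1 - rank d2
b2 = 1 - rank(d2)                  # dim ker d2
print(b0, b1, b2)                  # a solid triangle is contractible: 1 0 0
```

The same rank bookkeeping, applied to each boundary map of a larger complex, is how homology is computed in practice over a field.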
Topological manifestations of this theorem include the Mayer–Vietoris sequence and the long exact sequence for relative homology. == Foundational aspects == Cohomology theories have been defined for many different objects such as topological spaces, sheaves, groups, rings, Lie algebras, and C*-algebras. The study of modern algebraic geometry would be almost unthinkable without sheaf cohomology. Central to homological algebra is the notion of exact sequence; these can be used to perform actual calculations. A classical tool of homological algebra is that of derived functor; the most basic examples are functors Ext and Tor. With a diverse set of applications in mind, it was natural to try to put the whole subject on a uniform basis. There were several attempts before the subject settled down. An approximate history can be stated as follows: Cartan-Eilenberg: In their 1956 book "Homological Algebra", these authors used projective and injective module resolutions. 'Tohoku': The approach in a celebrated paper by Alexander Grothendieck which appeared in the Second Series of the Tohoku Mathematical Journal in 1957, using the abelian category concept (to include sheaves of abelian groups). The derived category of Grothendieck and Verdier. Derived categories date back to Verdier's 1967 thesis. They are examples of triangulated categories used in a number of modern theories. These move from computability to generality. The computational sledgehammer par excellence is the spectral sequence; these are essential in the Cartan-Eilenberg and Tohoku approaches where they are needed, for instance, to compute the derived functors of a composition of two functors. Spectral sequences are less essential in the derived category approach, but still play a role whenever concrete computations are necessary. There have been attempts at 'non-commutative' theories which extend first cohomology as torsors (important in Galois cohomology). 
== See also == Abstract nonsense, a term for homological algebra and category theory Derivator Homotopical algebra List of homological algebra topics == References == Henri Cartan, Samuel Eilenberg, Homological Algebra. With an appendix by David A. Buchsbaum. Reprint of the 1956 original. Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 1999. xvi+390 pp. ISBN 0-691-04991-2 Grothendieck, Alexander (1957). "Sur quelques points d'algèbre homologique, I". Tohoku Mathematical Journal. 9 (2): 119–221. doi:10.2748/tmj/1178244839. Saunders Mac Lane, Homology. Reprint of the 1975 edition. Classics in Mathematics. Springer-Verlag, Berlin, 1995. x+422 pp. ISBN 3-540-58662-8 Peter Hilton; Stammbach, U. A Course in Homological Algebra. Second edition. Graduate Texts in Mathematics, 4. Springer-Verlag, New York, 1997. xii+364 pp. ISBN 0-387-94823-6 Gelfand, Sergei I.; Yuri Manin, Methods of Homological Algebra. Translated from Russian 1988 edition. Second edition. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2003. xx+372 pp. ISBN 3-540-43583-2 Gelfand, Sergei I.; Yuri Manin, Homological Algebra. Translated from the 1989 Russian original by the authors. Reprint of the original English edition from the series Encyclopaedia of Mathematical Sciences (Algebra, V, Encyclopaedia Math. Sci., 38, Springer, Berlin, 1994). Springer-Verlag, Berlin, 1999. iv+222 pp. ISBN 3-540-65378-3 Weibel, Charles A. (1994). An introduction to homological algebra. Cambridge Studies in Advanced Mathematics. Vol. 38. Cambridge University Press. ISBN 978-0-521-55987-4. MR 1269324. OCLC 36131259.
Wikipedia/Homological_algebra
In mathematics, an algebraic equation or polynomial equation is an equation of the form P = 0 {\displaystyle P=0} , where P is a polynomial with coefficients in some field, often the field of the rational numbers. For example, x 5 − 3 x + 1 = 0 {\displaystyle x^{5}-3x+1=0} is an algebraic equation with integer coefficients and y 4 + x y 2 − x 3 3 + x y 2 − y 2 + 1 7 = 0 {\displaystyle y^{4}+{\frac {xy}{2}}-{\frac {x^{3}}{3}}+xy^{2}-y^{2}+{\frac {1}{7}}=0} is a multivariate polynomial equation over the rationals. For many authors, the term algebraic equation refers only to the univariate case, that is, polynomial equations that involve only one variable. On the other hand, a polynomial equation may involve several variables (the multivariate case), in which case the term polynomial equation is usually preferred. Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression that can be found using a finite number of operations that involve only those same types of coefficients (that is, can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not all. A large amount of research has been devoted to computing efficient and accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations). 
This makes the term algebraic equation ambiguous outside the context of the old problem. So the term polynomial equation is generally preferred when this ambiguity may occur, especially when considering multivariate equations. == History == The study of algebraic equations is probably as old as mathematics: the Babylonian mathematicians, as early as 2000 BC, could solve some kinds of quadratic equations (displayed on Old Babylonian clay tablets). Univariate algebraic equations over the rationals (i.e., with rational coefficients) have a very long history. Ancient mathematicians wanted the solutions in the form of radical expressions, like x = 1 + 5 2 {\displaystyle x={\frac {1+{\sqrt {5}}}{2}}} for the positive solution of x 2 − x − 1 = 0 {\displaystyle x^{2}-x-1=0} . The ancient Egyptians knew how to solve equations of degree 2 in this manner. The Indian mathematician Brahmagupta (597–668 AD) explicitly described the quadratic formula in his treatise Brāhmasphuṭasiddhānta published in 628 AD, but written in words instead of symbols. In the 9th century, Muhammad ibn Musa al-Khwarizmi and other Islamic mathematicians derived the quadratic formula, the general solution of equations of degree 2, and recognized the importance of the discriminant. During the Renaissance in 1545, Gerolamo Cardano published the solution of Scipione del Ferro and Niccolò Fontana Tartaglia to equations of degree 3 and that of Lodovico Ferrari for equations of degree 4. Finally, Niels Henrik Abel proved, in 1824, that equations of degree 5 and higher do not have general solutions using radicals. Galois theory, named after Évariste Galois, showed that some equations of degree at least 5 do not admit any solution in radicals, and gave criteria for deciding if an equation is in fact solvable using radicals. 
== Areas of study == The algebraic equations are the basis of a number of areas of modern mathematics: Algebraic number theory is the study of (univariate) algebraic equations over the rationals (that is, with rational coefficients). Galois theory was introduced by Évariste Galois to specify criteria for deciding if an algebraic equation may be solved in terms of radicals. In field theory, an algebraic extension is an extension such that every element is a root of an algebraic equation over the base field. Transcendental number theory is the study of the real numbers which are not solutions to an algebraic equation over the rationals. A Diophantine equation is a (usually multivariate) polynomial equation with integer coefficients for which one is interested in the integer solutions. Algebraic geometry is the study of the solutions in an algebraically closed field of multivariate polynomial equations. Two equations are equivalent if they have the same set of solutions. In particular, the equation P = Q {\displaystyle P=Q} is equivalent to P − Q = 0 {\displaystyle P-Q=0} . It follows that the study of algebraic equations is equivalent to the study of polynomials. A polynomial equation over the rationals can always be converted to an equivalent one in which the coefficients are integers. For example, multiplying through by 42 = 2·3·7 and grouping its terms on the left-hand side, the previously mentioned polynomial equation y 4 + x y 2 = x 3 3 − x y 2 + y 2 − 1 7 {\displaystyle y^{4}+{\frac {xy}{2}}={\frac {x^{3}}{3}}-xy^{2}+y^{2}-{\frac {1}{7}}} becomes 42 y 4 + 21 x y − 14 x 3 + 42 x y 2 − 42 y 2 + 6 = 0. {\displaystyle 42y^{4}+21xy-14x^{3}+42xy^{2}-42y^{2}+6=0.} Because sine, exponentiation, and 1/T are not polynomial functions, e T x 2 + 1 T x y + sin ⁡ ( T ) z − 2 = 0 {\displaystyle e^{T}x^{2}+{\frac {1}{T}}xy+\sin(T)z-2=0} is not a polynomial equation in the four variables x, y, z, and T over the rational numbers. 
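The denominator-clearing step can be automated with exact rational arithmetic. A short Python sketch, using the same multiplier 42 as in the example above (the helper name is assumed for illustration):

```python
from fractions import Fraction
from functools import reduce
from math import lcm

def clear_denominators(coeffs):
    """Scale rational coefficients by the lcm of their denominators,
    producing the equivalent integer-coefficient equation."""
    m = reduce(lcm, (c.denominator for c in coeffs), 1)
    return m, [int(c * m) for c in coeffs]

# Coefficients of  y^4 + (1/2)xy - (1/3)x^3 + x y^2 - y^2 + 1/7 = 0,
# listed term by term in the order written above.
coeffs = [Fraction(1), Fraction(1, 2), Fraction(-1, 3),
          Fraction(1), Fraction(-1), Fraction(1, 7)]
m, ints = clear_denominators(coeffs)
print(m, ints)   # multiplier 42 gives 42y^4 + 21xy - 14x^3 + 42xy^2 - 42y^2 + 6 = 0
```

The lcm of the denominators 1, 2, 3 and 7 is exactly the 42 = 2·3·7 used in the text.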
However, it is a polynomial equation in the three variables x, y, and z over the field of the elementary functions in the variable T. == Theory == === Polynomials === Given an equation in unknown x ( E ) a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 = 0 {\displaystyle (\mathrm {E} )\qquad a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{1}x+a_{0}=0} , with coefficients in a field K, one can equivalently say that the solutions of (E) in K are the roots in K of the polynomial P = a n X n + a n − 1 X n − 1 + ⋯ + a 1 X + a 0 ∈ K [ X ] {\displaystyle P=a_{n}X^{n}+a_{n-1}X^{n-1}+\dots +a_{1}X+a_{0}\quad \in K[X]} . It can be shown that a polynomial of degree n in a field has at most n roots. The equation (E) therefore has at most n solutions. If K' is a field extension of K, one may consider (E) to be an equation with coefficients in K and the solutions of (E) in K are also solutions in K' (the converse does not hold in general). It is always possible to find a field extension of K known as the rupture field of the polynomial P, in which (E) has at least one solution. === Existence of solutions to real and complex equations === The fundamental theorem of algebra states that the field of the complex numbers is closed algebraically, that is, all polynomial equations with complex coefficients and degree at least one have a solution. It follows that all polynomial equations of degree 1 or more with real coefficients have a complex solution. On the other hand, an equation such as x 2 + 1 = 0 {\displaystyle x^{2}+1=0} does not have a solution in R {\displaystyle \mathbb {R} } (the solutions are the imaginary units i and −i). While the real solutions of real equations are intuitive (they are the x-coordinates of the points where the curve y = P(x) intersects the x-axis), the existence of complex solutions to real equations can be surprising and less easy to visualize. However, a monic polynomial of odd degree must necessarily have a real root. 
The associated polynomial function in x is continuous, and it approaches − ∞ {\displaystyle -\infty } as x approaches − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } as x approaches + ∞ {\displaystyle +\infty } . By the intermediate value theorem, it must therefore assume the value zero at some real x, which is then a solution of the polynomial equation. === Connection to Galois theory === There exist formulas giving the solutions of real or complex polynomials of degree less than or equal to four as a function of their coefficients. Abel showed that it is not possible to find such a formula in general (using only the four arithmetic operations and taking roots) for equations of degree five or higher. Galois theory provides a criterion which allows one to determine whether the solution to a given polynomial equation can be expressed using radicals. == Explicit solution of numerical equations == === Approach === The explicit solution of a real or complex equation of degree 1 is trivial. Solving an equation of higher degree n reduces to factoring the associated polynomial, that is, rewriting (E) in the form a n ( x − z 1 ) … ( x − z n ) = 0 {\displaystyle a_{n}(x-z_{1})\dots (x-z_{n})=0} , where the solutions are then the z 1 , … , z n {\displaystyle z_{1},\dots ,z_{n}} . The problem is then to express the z i {\displaystyle z_{i}} in terms of the a i {\displaystyle a_{i}} . This approach applies more generally if the coefficients and solutions belong to an integral domain. === General techniques === ==== Factoring ==== If an equation P(x) = 0 of degree n has a rational root α, the associated polynomial can be factored to give the form P(X) = (X − α)Q(X) (by dividing P(X) by X − α or by writing P(X) − P(α) as a linear combination of terms of the form Xk − αk, and factoring out X − α). Solving P(x) = 0 thus reduces to solving the degree n − 1 equation Q(x) = 0. See for example the case n = 3. 
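The factoring step can be carried out by synthetic division. A short Python sketch with exact arithmetic (the cubic and its root are an assumed example):

```python
from fractions import Fraction

def deflate(coeffs, alpha):
    """Divide a polynomial (descending coefficients) by (X - alpha)
    using synthetic division; returns (quotient, remainder)."""
    q = [coeffs[0]]
    for c in coeffs[1:]:
        q.append(c + alpha * q[-1])
    return q[:-1], q[-1]

# P(X) = X^3 - 6X^2 + 11X - 6 has the rational root alpha = 1
P = [Fraction(1), Fraction(-6), Fraction(11), Fraction(-6)]
Q, r = deflate(P, Fraction(1))
assert r == 0          # alpha really is a root, so the division is exact
print(Q)               # quotient Q(X) = X^2 - 5X + 6
```

Solving the original cubic is thereby reduced to the quadratic Q(x) = 0, whose roots 2 and 3 complete the factorization.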
==== Elimination of the sub-dominant term ==== To solve an equation of degree n, ( E ) a n x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 = 0 {\displaystyle (\mathrm {E} )\qquad a_{n}x^{n}+a_{n-1}x^{n-1}+\dots +a_{1}x+a_{0}=0} , a common preliminary step is to eliminate the term of degree n − 1: by setting x = y − a n − 1 n a n {\displaystyle x=y-{\frac {a_{n-1}}{n\,a_{n}}}} , equation (E) becomes a n y n + b n − 2 y n − 2 + ⋯ + b 1 y + b 0 = 0 {\displaystyle a_{n}y^{n}+b_{n-2}y^{n-2}+\dots +b_{1}y+b_{0}=0} . Leonhard Euler developed this technique for the case n = 3, but it is also applicable to the case n = 4, for example. === Quadratic equations === To solve a quadratic equation of the form a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} one calculates the discriminant Δ defined by Δ = b 2 − 4 a c {\displaystyle \Delta =b^{2}-4ac} . If the polynomial has real coefficients, it has: two distinct real roots if Δ > 0 {\displaystyle \Delta >0} ; one real double root if Δ = 0 {\displaystyle \Delta =0} ; no real root if Δ < 0 {\displaystyle \Delta <0} , but two complex conjugate roots. === Cubic equations === The best-known method for solving cubic equations, by writing roots in terms of radicals, is Cardano's formula. === Quartic equations === For detailed discussions of some solution methods see: Tschirnhaus transformation (general method, not guaranteed to succeed); Bezout method (general method, not guaranteed to succeed); Ferrari method (solutions for degree 4); Euler method (solutions for degree 4); Lagrange method (solutions for degree 4); Descartes method (solutions for degree 2 or 4). A quartic equation a x 4 + b x 3 + c x 2 + d x + e = 0 {\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0} with a ≠ 0 {\displaystyle a\neq 0} may be reduced to a quadratic equation by a change of variable provided it is either biquadratic (b = d = 0) or quasi-palindromic (e = a, d = b). Some cubic and quartic equations can be solved using trigonometry or hyperbolic functions. 
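The biquadratic reduction just described is easy to make concrete: the substitution z = x² turns ax⁴ + cx² + e = 0 into a quadratic in z, and each nonnegative root z yields the pair x = ±√z. A Python sketch for real roots only (the sample coefficients are an assumed example):

```python
from math import sqrt

def solve_biquadratic(a, c, e):
    """Real solutions of a*x^4 + c*x^2 + e = 0 (the case b = d = 0),
    via the substitution z = x^2 followed by the quadratic formula."""
    disc = c * c - 4 * a * e
    if disc < 0:
        return []                       # no real z, hence no real x
    roots_z = {(-c + sqrt(disc)) / (2 * a), (-c - sqrt(disc)) / (2 * a)}
    roots_x = set()
    for z in roots_z:
        if z >= 0:                      # only nonnegative z give real x
            roots_x.update({sqrt(z), -sqrt(z)})
    return sorted(roots_x)

print(solve_biquadratic(1, -5, 4))      # x^4 - 5x^2 + 4 factors as (x^2-1)(x^2-4)
```

For x⁴ − 5x² + 4 the auxiliary quadratic z² − 5z + 4 = 0 gives z = 1 and z = 4, so the four real roots are ±1 and ±2.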
=== Higher-degree equations === Évariste Galois and Niels Henrik Abel showed independently that in general a polynomial of degree 5 or higher is not solvable using radicals. Some particular equations do have solutions, such as those associated with the cyclotomic polynomials of degrees 5 and 17. Charles Hermite, on the other hand, showed that polynomials of degree 5 are solvable using elliptic functions. Otherwise, one may find numerical approximations to the roots using root-finding algorithms, such as Newton's method. == See also == Algebraic function Algebraic number Root finding Linear equation (degree = 1) Quadratic equation (degree = 2) Cubic equation (degree = 3) Quartic equation (degree = 4) Quintic equation (degree = 5) Sextic equation (degree = 6) Septic equation (degree = 7) System of linear equations System of polynomial equations Linear Diophantine equation Linear equation over a ring Cramer's theorem (algebraic curves), on the number of points usually sufficient to determine a bivariate n-th degree curve == References == "Algebraic equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Algebraic Equation". MathWorld.
Wikipedia/Algebraic_equation
In mathematics, a linear equation is an equation that may be put in the form a 1 x 1 + … + a n x n + b = 0 , {\displaystyle a_{1}x_{1}+\ldots +a_{n}x_{n}+b=0,} where x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} are the variables (or unknowns), and b , a 1 , … , a n {\displaystyle b,a_{1},\ldots ,a_{n}} are the coefficients, which are often real numbers. The coefficients may be considered as parameters of the equation and may be arbitrary expressions, provided they do not contain any of the variables. To yield a meaningful equation, the coefficients a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} are required to not all be zero. Alternatively, a linear equation can be obtained by equating to zero a linear polynomial over some field, from which the coefficients are taken. The solutions of such an equation are the values that, when substituted for the unknowns, make the equality true. In the case of just one variable, there is exactly one solution (provided that a 1 ≠ 0 {\displaystyle a_{1}\neq 0} ). Often, the term linear equation refers implicitly to this particular case, in which the variable is sensibly called the unknown. In the case of two variables, each solution may be interpreted as the Cartesian coordinates of a point of the Euclidean plane. The solutions of a linear equation form a line in the Euclidean plane, and, conversely, every line can be viewed as the set of all solutions of a linear equation in two variables. This is the origin of the term linear for describing this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (a subspace of dimension n − 1) in the Euclidean space of dimension n. Linear equations occur frequently in all mathematics and their applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations. 
This article considers the case of a single equation with coefficients from the field of real numbers, for which one studies the real solutions. All of its content applies to complex solutions and, more generally, to linear equations with coefficients and solutions in any field. For the case of several simultaneous linear equations, see system of linear equations. == One variable == A linear equation in one variable x can be written as a x + b = 0 , {\displaystyle ax+b=0,} with a ≠ 0 {\displaystyle a\neq 0} . The solution is x = − b a {\displaystyle x=-{\frac {b}{a}}} . == Two variables == A linear equation in two variables x and y can be written as a x + b y + c = 0 , {\displaystyle ax+by+c=0,} where a and b are not both 0. If a and b are real numbers, it has infinitely many solutions. === Linear function === If b ≠ 0, the equation a x + b y + c = 0 {\displaystyle ax+by+c=0} is a linear equation in the single variable y for every value of x. It therefore has a unique solution for y, which is given by y = − a b x − c b . {\displaystyle y=-{\frac {a}{b}}x-{\frac {c}{b}}.} This defines a function. The graph of this function is a line with slope − a b {\displaystyle -{\frac {a}{b}}} and y-intercept − c b . {\displaystyle -{\frac {c}{b}}.} The functions whose graph is a line are generally called linear functions in the context of calculus. However, in linear algebra, a linear function is a function that maps a sum to the sum of the images of the summands. So, for this definition, the above function is linear only when c = 0, that is when the line passes through the origin. To avoid confusion, the functions whose graph is an arbitrary line are often called affine functions, and the linear functions such that c = 0 are often called linear maps. === Geometric interpretation === Each solution (x, y) of a linear equation a x + b y + c = 0 {\displaystyle ax+by+c=0} may be viewed as the Cartesian coordinates of a point in the Euclidean plane. 
With this interpretation, all solutions of the equation form a line, provided that a and b are not both zero. Conversely, every line is the set of all solutions of a linear equation. The phrase "linear equation" takes its origin in this correspondence between lines and equations: a linear equation in two variables is an equation whose solutions form a line. If b ≠ 0, the line is the graph of the function of x that has been defined in the preceding section. If b = 0, the line is a vertical line (that is a line parallel to the y-axis) of equation x = − c a , {\displaystyle x=-{\frac {c}{a}},} which is not the graph of a function of x. Similarly, if a ≠ 0, the line is the graph of a function of y, and, if a = 0, one has a horizontal line of equation y = − c b . {\displaystyle y=-{\frac {c}{b}}.} === Equation of a line === There are various ways of defining a line. In the following subsections, a linear equation of the line is given in each case. ==== Slope–intercept form or Gradient-intercept form ==== A non-vertical line can be defined by its slope m, and its y-intercept y0 (the y coordinate of its intersection with the y-axis). In this case, its linear equation can be written y = m x + y 0 . {\displaystyle y=mx+y_{0}.} If, moreover, the line is not horizontal, it can be defined by its slope and its x-intercept x0. In this case, its equation can be written y = m ( x − x 0 ) , {\displaystyle y=m(x-x_{0}),} or, equivalently, y = m x − m x 0 . {\displaystyle y=mx-mx_{0}.} These forms rely on the habit of considering a nonvertical line as the graph of a function. For a line given by an equation a x + b y + c = 0 , {\displaystyle ax+by+c=0,} these forms can be easily deduced from the relations m = − a b , x 0 = − c a , y 0 = − c b . 
{\displaystyle {\begin{aligned}m&=-{\frac {a}{b}},\\x_{0}&=-{\frac {c}{a}},\\y_{0}&=-{\frac {c}{b}}.\end{aligned}}} ==== Point–slope form or Point-gradient form ==== A non-vertical line can be defined by its slope m, and the coordinates x 1 , y 1 {\displaystyle x_{1},y_{1}} of any point of the line. In this case, a linear equation of the line is y = y 1 + m ( x − x 1 ) , {\displaystyle y=y_{1}+m(x-x_{1}),} or y = m x + y 1 − m x 1 . {\displaystyle y=mx+y_{1}-mx_{1}.} This equation can also be written y − y 1 = m ( x − x 1 ) {\displaystyle y-y_{1}=m(x-x_{1})} to emphasize that the slope of a line can be computed from the coordinates of any two points. ==== Intercept form ==== A line that is not parallel to an axis and does not pass through the origin cuts the axes into two different points. The intercept values x0 and y0 of these two points are nonzero, and an equation of the line is x x 0 + y y 0 = 1. {\displaystyle {\frac {x}{x_{0}}}+{\frac {y}{y_{0}}}=1.} (It is easy to verify that the line defined by this equation has x0 and y0 as intercept values). ==== Two-point form ==== Given two different points (x1, y1) and (x2, y2), there is exactly one line that passes through them. There are several ways to write a linear equation of this line. If x1 ≠ x2, the slope of the line is y 2 − y 1 x 2 − x 1 . {\displaystyle {\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}.} Thus, a point-slope form is y − y 1 = y 2 − y 1 x 2 − x 1 ( x − x 1 ) . {\displaystyle y-y_{1}={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}(x-x_{1}).} By clearing denominators, one gets the equation ( x 2 − x 1 ) ( y − y 1 ) − ( y 2 − y 1 ) ( x − x 1 ) = 0 , {\displaystyle (x_{2}-x_{1})(y-y_{1})-(y_{2}-y_{1})(x-x_{1})=0,} which is valid also when x1 = x2 (to verify this, it suffices to verify that the two given points satisfy the equation). 
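Expanding the denominator-cleared two-point form above gives the coefficients a = y₁ − y₂, b = x₂ − x₁, c = x₁y₂ − x₂y₁ of an equation ax + by + c = 0. A few lines of Python (the sample points are assumed; note the formula also covers the vertical case x₁ = x₂, which the slope formula misses):

```python
def line_through(p1, p2):
    """Coefficients (a, b, c) with a*x + b*y + c = 0 passing through two
    given points, from the expanded two-point form; works for vertical lines."""
    (x1, y1), (x2, y2) = p1, p2
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

a, b, c = line_through((1, 2), (3, 6))
# both points satisfy a*x + b*y + c = 0
assert a * 1 + b * 2 + c == 0 and a * 3 + b * 6 + c == 0

# vertical line through (5, 0) and (5, 7): b = 0, i.e. x = 5
a, b, c = line_through((5, 0), (5, 7))
assert b == 0 and a * 5 + c == 0
print("ok")
```

Exchanging the two arguments negates all three coefficients, which describes the same line, exactly as the symmetry remark below the two-point form states.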
This form is not symmetric in the two given points, but a symmetric form can be obtained by regrouping the constant terms: ( y 1 − y 2 ) x + ( x 2 − x 1 ) y + ( x 1 y 2 − x 2 y 1 ) = 0 {\displaystyle (y_{1}-y_{2})x+(x_{2}-x_{1})y+(x_{1}y_{2}-x_{2}y_{1})=0} (exchanging the two points changes the sign of the left-hand side of the equation). ==== Determinant form ==== The two-point form of the equation of a line can be expressed simply in terms of a determinant. There are two common ways to do this. The equation ( x 2 − x 1 ) ( y − y 1 ) − ( y 2 − y 1 ) ( x − x 1 ) = 0 {\displaystyle (x_{2}-x_{1})(y-y_{1})-(y_{2}-y_{1})(x-x_{1})=0} is the result of expanding the determinant in the equation | x − x 1 y − y 1 x 2 − x 1 y 2 − y 1 | = 0. {\displaystyle {\begin{vmatrix}x-x_{1}&y-y_{1}\\x_{2}-x_{1}&y_{2}-y_{1}\end{vmatrix}}=0.} The equation ( y 1 − y 2 ) x + ( x 2 − x 1 ) y + ( x 1 y 2 − x 2 y 1 ) = 0 {\displaystyle (y_{1}-y_{2})x+(x_{2}-x_{1})y+(x_{1}y_{2}-x_{2}y_{1})=0} can be obtained by expanding with respect to its first row the determinant in the equation | x y 1 x 1 y 1 1 x 2 y 2 1 | = 0. {\displaystyle {\begin{vmatrix}x&y&1\\x_{1}&y_{1}&1\\x_{2}&y_{2}&1\end{vmatrix}}=0.} Besides being very simple and mnemonic, this form has the advantage of being a special case of the more general equation of a hyperplane passing through n points in a space of dimension n − 1. These equations rely on the condition of linear dependence of points in a projective space. == More than two variables == A linear equation with more than two variables may always be assumed to have the form a 1 x 1 + a 2 x 2 + ⋯ + a n x n + b = 0. {\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}+b=0.} The coefficient b, often denoted a0, is called the constant term (sometimes the absolute term in old books). Depending on the context, the term coefficient can be reserved for the ai with i > 0.
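The general n-variable form lends itself to a direct numerical check; the sketch below (function names and the sample equation are illustrative choices) tests whether an n-tuple satisfies a1x1 + ⋯ + anxn + b = 0, and solves for one variable with a nonzero coefficient given values for the others:

```python
def is_solution(a, b, x):
    """True if the n-tuple x satisfies a[0]*x[0] + ... + a[n-1]*x[n-1] + b = 0."""
    return sum(ai * xi for ai, xi in zip(a, x)) + b == 0

def solve_for(a, b, x, j):
    """Solve the equation for the j-th variable (requires a[j] != 0),
    using the given values of the other entries of x; x[j] is ignored."""
    s = sum(ai * xi for i, (ai, xi) in enumerate(zip(a, x)) if i != j)
    return -(b + s) / a[j]

# The equation x + 2y - z - 3 = 0, i.e. a = [1, 2, -1], b = -3:
# is_solution([1, 2, -1], -3, (1, 2, 2)) -> True
# Given y = 2 and z = 2, the equation forces x = 1:
# solve_for([1, 2, -1], -3, (None, 2, 2), 0) -> 1.0
```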
When dealing with n = 3 {\displaystyle n=3} variables, it is common to use x , y {\displaystyle x,\;y} and z {\displaystyle z} instead of indexed variables. A solution of such an equation is an n-tuple such that substituting each element of the tuple for the corresponding variable transforms the equation into a true equality. For an equation to be meaningful, the coefficient of at least one variable must be non-zero. If every variable has a zero coefficient, then, as mentioned for one variable, the equation is either inconsistent (for b ≠ 0), having no solution, or such that all n-tuples are solutions. The n-tuples that are solutions of a linear equation in n variables are the Cartesian coordinates of the points of an (n − 1)-dimensional hyperplane in an n-dimensional Euclidean space (or affine space if the coefficients are complex numbers or belong to any field). In the case of three variables, this hyperplane is a plane. If a linear equation is given with aj ≠ 0, then the equation can be solved for xj, yielding x j = − b a j − ∑ i ∈ { 1 , … , n } , i ≠ j a i a j x i . {\displaystyle x_{j}=-{\frac {b}{a_{j}}}-\sum _{i\in \{1,\ldots ,n\},i\neq j}{\frac {a_{i}}{a_{j}}}x_{i}.} If the coefficients are real numbers, this defines a real-valued function of n real variables. == See also == Linear equation over a ring Algebraic equation Line coordinates Linear inequality Nonlinear equation == Notes == == References == Barnett, R.A.; Ziegler, M.R.; Byleen, K.E. (2008), College Mathematics for Business, Economics, Life Sciences and the Social Sciences (11th ed.), Upper Saddle River, N.J.: Pearson, ISBN 978-0-13-157225-6 Larson, Ron; Hostetler, Robert (2007), Precalculus: A Concise Course, Houghton Mifflin, ISBN 978-0-618-62719-6 Wilson, W.A.; Tracey, J.I. (1925), Analytic Geometry (revised ed.), D.C. Heath == External links == "Linear equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Linear_equations
In mathematics, geometric topology is the study of manifolds and maps between them, particularly embeddings of one manifold into another. == History == Geometric topology as an area distinct from algebraic topology may be said to have originated in the 1935 classification of lens spaces by Reidemeister torsion, which required distinguishing spaces that are homotopy equivalent but not homeomorphic. This was the origin of simple homotopy theory. The use of the term geometric topology to describe these seems to have originated rather recently. == Differences between low-dimensional and high-dimensional topology == Manifolds differ radically in behavior in high and low dimension. High-dimensional topology refers to manifolds of dimension 5 and above, or in relative terms, embeddings in codimension 3 and above. Low-dimensional topology is concerned with questions in dimensions up to 4, or embeddings in codimension up to 2. Dimension 4 is special, in that in some respects (topologically), dimension 4 is high-dimensional, while in other respects (differentiably), dimension 4 is low-dimensional; this overlap yields phenomena exceptional to dimension 4, such as exotic differentiable structures on R4. Thus the topological classification of 4-manifolds is in principle tractable, and the key questions are: does a topological manifold admit a differentiable structure, and if so, how many? Notably, the smooth case of dimension 4 is the last open case of the generalized Poincaré conjecture; see Gluck twists. The distinction is because surgery theory works in dimension 5 and above (in fact, in many cases, it works topologically in dimension 4, though this is very involved to prove), and thus the behavior of manifolds in dimension 5 and above may be studied using the surgery theory program. In dimension 4 and below (topologically, in dimension 3 and below), surgery theory does not work. 
Indeed, one approach to discussing low-dimensional manifolds is to ask "what would surgery theory predict to be true, were it to work?" – and then understand low-dimensional phenomena as deviations from this. The precise reason for the difference at dimension 5 is because the Whitney embedding theorem, the key technical trick which underlies surgery theory, requires 2+1 dimensions. Roughly, the Whitney trick allows one to "unknot" knotted spheres – more precisely, remove self-intersections of immersions; it does this via a homotopy of a disk – the disk has 2 dimensions, and the homotopy adds 1 more – and thus in codimension greater than 2, this can be done without intersecting itself; hence embeddings in codimension greater than 2 can be understood by surgery. In surgery theory, the key step is in the middle dimension, and thus when the middle dimension has codimension more than 2 (loosely, 2½ is enough, hence total dimension 5 is enough), the Whitney trick works. The key consequence of this is Smale's h-cobordism theorem, which works in dimension 5 and above, and forms the basis for surgery theory. A modification of the Whitney trick can work in 4 dimensions, and is called Casson handles – because there are not enough dimensions, a Whitney disk introduces new kinks, which can be resolved by another Whitney disk, leading to a sequence ("tower") of disks. The limit of this tower yields a topological but not differentiable map, hence surgery works topologically but not differentiably in dimension 4. 
== Important tools in geometric topology == === Fundamental group === In all dimensions, the fundamental group of a manifold is a very important invariant, and determines much of the structure; in dimensions 1, 2 and 3, the possible fundamental groups are restricted, while in dimension 4 and above every finitely presented group is the fundamental group of a manifold (note that it is sufficient to show this for 4- and 5-dimensional manifolds, and then to take products with spheres to get higher ones). === Orientability === A manifold is orientable if it has a consistent choice of orientation, and a connected orientable manifold has exactly two different possible orientations. In this setting, various equivalent formulations of orientability can be given, depending on the desired application and level of generality. Formulations applicable to general topological manifolds often employ methods of homology theory, whereas for differentiable manifolds more structure is present, allowing a formulation in terms of differential forms. An important generalization of the notion of orientability of a space is that of orientability of a family of spaces parameterized by some other space (a fiber bundle) for which an orientation must be selected in each of the spaces which varies continuously with respect to changes in the parameter values. === Handle decompositions === A handle decomposition of an m-manifold M is a union ∅ = M − 1 ⊂ M 0 ⊂ M 1 ⊂ M 2 ⊂ ⋯ ⊂ M m − 1 ⊂ M m = M {\displaystyle \emptyset =M_{-1}\subset M_{0}\subset M_{1}\subset M_{2}\subset \dots \subset M_{m-1}\subset M_{m}=M} where each M i {\displaystyle M_{i}} is obtained from M i − 1 {\displaystyle M_{i-1}} by the attaching of i {\displaystyle i} -handles. A handle decomposition is to a manifold what a CW-decomposition is to a topological space—in many regards the purpose of a handle decomposition is to have a language analogous to CW-complexes, but adapted to the world of smooth manifolds. 
Thus an i-handle is the smooth analogue of an i-cell. Handle decompositions of manifolds arise naturally via Morse theory. The modification of handle structures is closely linked to Cerf theory. === Local flatness === Local flatness is a property of a submanifold in a topological manifold of larger dimension. In the category of topological manifolds, locally flat submanifolds play a role similar to that of embedded submanifolds in the category of smooth manifolds. Suppose a d dimensional manifold N is embedded into an n dimensional manifold M (where d < n). If x ∈ N , {\displaystyle x\in N,} we say N is locally flat at x if there is a neighborhood U ⊂ M {\displaystyle U\subset M} of x such that the topological pair ( U , U ∩ N ) {\displaystyle (U,U\cap N)} is homeomorphic to the pair ( R n , R d ) {\displaystyle (\mathbb {R} ^{n},\mathbb {R} ^{d})} , with a standard inclusion of R d {\displaystyle \mathbb {R} ^{d}} as a subspace of R n {\displaystyle \mathbb {R} ^{n}} . That is, there exists a homeomorphism U → R n {\displaystyle U\to R^{n}} such that the image of U ∩ N {\displaystyle U\cap N} coincides with R d {\displaystyle \mathbb {R} ^{d}} . === Schönflies theorems === The generalized Schoenflies theorem states that, if an (n − 1)-dimensional sphere S is embedded into the n-dimensional sphere Sn in a locally flat way (that is, the embedding extends to that of a thickened sphere), then the pair (Sn, S) is homeomorphic to the pair (Sn, Sn−1), where Sn−1 is the equator of the n-sphere. Brown and Mazur received the Veblen Prize for their independent proofs of this theorem. == Branches of geometric topology == === Low-dimensional topology === Low-dimensional topology includes: Surfaces (2-manifolds) 3-manifolds 4-manifolds each have their own theory, where there are some connections. 
Low-dimensional topology is strongly geometric, as reflected in the uniformization theorem in 2 dimensions – every surface admits a constant curvature metric; geometrically, it has one of 3 possible geometries: positive curvature/spherical, zero curvature/flat, negative curvature/hyperbolic – and the geometrization conjecture (now theorem) in 3 dimensions – every 3-manifold can be cut into pieces, each of which has one of 8 possible geometries. 2-dimensional topology can be studied as complex geometry in one variable (Riemann surfaces are complex curves) – by the uniformization theorem every conformal class of metrics is equivalent to a unique complex one, and 4-dimensional topology can be studied from the point of view of complex geometry in two variables (complex surfaces), though not every 4-manifold admits a complex structure. === Knot theory === Knot theory is the study of mathematical knots. While inspired by knots which appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined together so that it cannot be undone. In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R3 (since we're using topology, a circle isn't bound to the classical geometric concept, but to all of its homeomorphisms). Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. To gain further insight, mathematicians have generalized the knot concept in several ways. Knots can be considered in other three-dimensional spaces and objects other than circles can be used; see knot (mathematics). Higher-dimensional knots are n-dimensional spheres in m-dimensional Euclidean space. 
=== High-dimensional geometric topology === In high-dimensional topology, characteristic classes are a basic invariant, and surgery theory is a key theory. A characteristic class is a way of associating to each principal bundle on a topological space X a cohomology class of X. The cohomology class measures the extent to which the bundle is "twisted" — particularly, whether it possesses sections or not. In other words, characteristic classes are global invariants which measure the deviation of a local product structure from a global product structure. They are one of the unifying geometric concepts in algebraic topology, differential geometry and algebraic geometry. Surgery theory is a collection of techniques used to produce one manifold from another in a 'controlled' way, introduced by Milnor (1961). Surgery refers to cutting out parts of the manifold and replacing it with a part of another manifold, matching up along the cut or boundary. This is closely related to, but not identical with, handlebody decompositions. It is a major tool in the study and classification of manifolds of dimension greater than 3. More technically, the idea is to start with a well-understood manifold M and perform surgery on it to produce a manifold M ′ having some desired property, in such a way that the effects on the homology, homotopy groups, or other interesting invariants of the manifold are known. The classification of exotic spheres by Kervaire and Milnor (1963) led to the emergence of surgery theory as a major tool in high-dimensional topology. == See also == Category:Maps of manifolds List of geometric topology topics Plumbing (mathematics) == References == R. B. Sher and R. J. Daverman (2002), Handbook of Geometric Topology, North-Holland. ISBN 0-444-82432-4.
Wikipedia/Geometric_topology
In mathematics, a set B of elements of a vector space V is called a basis (pl.: bases) if every element of V can be written in a unique way as a finite linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to B. The elements of a basis are called basis vectors. Equivalently, a set B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B. In other words, a basis is a linearly independent spanning set. A vector space can have several bases; however all the bases have the same number of elements, called the dimension of the vector space. This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces. Basis vectors find applications in the study of crystal structures and frames of reference. == Definition == A basis B of a vector space V over a field F (such as the real numbers R or the complex numbers C) is a linearly independent subset of V that spans V. This means that a subset B of V is a basis if it satisfies the two following conditions: linear independence for every finite subset { v 1 , … , v m } {\displaystyle \{\mathbf {v} _{1},\dotsc ,\mathbf {v} _{m}\}} of B, if c 1 v 1 + ⋯ + c m v m = 0 {\displaystyle c_{1}\mathbf {v} _{1}+\cdots +c_{m}\mathbf {v} _{m}=\mathbf {0} } for some c 1 , … , c m {\displaystyle c_{1},\dotsc ,c_{m}} in F, then c 1 = ⋯ = c m = 0 {\displaystyle c_{1}=\cdots =c_{m}=0} ; spanning property for every vector v in V, one can choose a 1 , … , a n {\displaystyle a_{1},\dotsc ,a_{n}} in F and v 1 , … , v n {\displaystyle \mathbf {v} _{1},\dotsc ,\mathbf {v} _{n}} in B such that v = a 1 v 1 + ⋯ + a n v n {\displaystyle \mathbf {v} =a_{1}\mathbf {v} _{1}+\cdots +a_{n}\mathbf {v} _{n}} . 
The scalars a i {\displaystyle a_{i}} are called the coordinates of the vector v with respect to the basis B, and by the first property they are uniquely determined. A vector space that has a finite basis is called finite-dimensional. In this case, the finite subset can be taken as B itself to check for linear independence in the above definition. It is often convenient or even necessary to have an ordering on the basis vectors, for example, when discussing orientation, or when one considers the scalar coefficients of a vector with respect to a basis without referring explicitly to the basis elements. In this case, the ordering is necessary for associating each coefficient to the corresponding basis element. This ordering can be done by numbering the basis elements. In order to emphasize that an order has been chosen, one speaks of an ordered basis, which is therefore not simply an unstructured set, but a sequence, an indexed family, or similar; see § Ordered bases and coordinates below. == Examples == The set R2 of the ordered pairs of real numbers is a vector space under the operations of component-wise addition ( a , b ) + ( c , d ) = ( a + c , b + d ) {\displaystyle (a,b)+(c,d)=(a+c,b+d)} and scalar multiplication λ ( a , b ) = ( λ a , λ b ) , {\displaystyle \lambda (a,b)=(\lambda a,\lambda b),} where λ {\displaystyle \lambda } is any real number. A simple basis of this vector space consists of the two vectors e1 = (1, 0) and e2 = (0, 1). These vectors form a basis (called the standard basis) because any vector v = (a, b) of R2 may be uniquely written as v = a e 1 + b e 2 . {\displaystyle \mathbf {v} =a\mathbf {e} _{1}+b\mathbf {e} _{2}.} Any other pair of linearly independent vectors of R2, such as (1, 1) and (−1, 2), forms also a basis of R2. More generally, if F is a field, the set F n {\displaystyle F^{n}} of n-tuples of elements of F is a vector space for similarly defined addition and scalar multiplication. 
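Whether two vectors of R2 form a basis reduces to a nonzero 2 × 2 determinant (a standard criterion, restated here for illustration); a minimal sketch checking the pair (1, 1), (−1, 2) mentioned above:

```python
def is_basis_r2(u, v):
    """Two vectors u, v form a basis of R^2 exactly when the 2x2
    determinant u[0]*v[1] - u[1]*v[0] is nonzero, i.e. when they
    are linearly independent."""
    return u[0] * v[1] - u[1] * v[0] != 0

# The standard basis, and the pair from the text:
# is_basis_r2((1, 0), (0, 1)) -> True
# is_basis_r2((1, 1), (-1, 2)) -> True
# Parallel vectors are linearly dependent and fail:
# is_basis_r2((1, 2), (2, 4)) -> False
```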
Let e i = ( 0 , … , 0 , 1 , 0 , … , 0 ) {\displaystyle \mathbf {e} _{i}=(0,\ldots ,0,1,0,\ldots ,0)} be the n-tuple with all components equal to 0, except the ith, which is 1. Then e 1 , … , e n {\displaystyle \mathbf {e} _{1},\ldots ,\mathbf {e} _{n}} is a basis of F n , {\displaystyle F^{n},} which is called the standard basis of F n . {\displaystyle F^{n}.} A different flavor of example is given by polynomial rings. If F is a field, the collection F[X] of all polynomials in one indeterminate X with coefficients in F is an F-vector space. One basis for this space is the monomial basis B, consisting of all monomials: B = { 1 , X , X 2 , … } . {\displaystyle B=\{1,X,X^{2},\ldots \}.} Any set of polynomials such that there is exactly one polynomial of each degree (such as the Bernstein basis polynomials or Chebyshev polynomials) is also a basis. (Such a set of polynomials is called a polynomial sequence.) But there are also many bases for F[X] that are not of this form. == Properties == Many properties of finite bases result from the Steinitz exchange lemma, which states that, for any vector space V, given a finite spanning set S and a linearly independent set L of n elements of V, one may replace n well-chosen elements of S by the elements of L to get a spanning set containing L, having its other elements in S, and having the same number of elements as S. Most properties resulting from the Steinitz exchange lemma remain true when there is no finite spanning set, but their proofs in the infinite case generally require the axiom of choice or a weaker form of it, such as the ultrafilter lemma. If V is a vector space over a field F, then: If L is a linearly independent subset of a spanning set S ⊆ V, then there is a basis B such that L ⊆ B ⊆ S . {\displaystyle L\subseteq B\subseteq S.} V has a basis (this is the preceding property with L being the empty set, and S = V). All bases of V have the same cardinality, which is called the dimension of V. 
This is the dimension theorem. A generating set S is a basis of V if and only if it is minimal, that is, no proper subset of S is also a generating set of V. A linearly independent set L is a basis if and only if it is maximal, that is, it is not a proper subset of any linearly independent set. If V is a vector space of dimension n, then: A subset of V with n elements is a basis if and only if it is linearly independent. A subset of V with n elements is a basis if and only if it is a spanning set of V. == Coordinates == Let V be a vector space of finite dimension n over a field F, and B = { b 1 , … , b n } {\displaystyle B=\{\mathbf {b} _{1},\ldots ,\mathbf {b} _{n}\}} be a basis of V. By definition of a basis, every v in V may be written, in a unique way, as v = λ 1 b 1 + ⋯ + λ n b n , {\displaystyle \mathbf {v} =\lambda _{1}\mathbf {b} _{1}+\cdots +\lambda _{n}\mathbf {b} _{n},} where the coefficients λ 1 , … , λ n {\displaystyle \lambda _{1},\ldots ,\lambda _{n}} are scalars (that is, elements of F), which are called the coordinates of v over B. However, if one talks of the set of the coefficients, one loses the correspondence between coefficients and basis elements, and several vectors may have the same set of coefficients. For example, 3 b 1 + 2 b 2 {\displaystyle 3\mathbf {b} _{1}+2\mathbf {b} _{2}} and 2 b 1 + 3 b 2 {\displaystyle 2\mathbf {b} _{1}+3\mathbf {b} _{2}} have the same set of coefficients {2, 3}, and are different. It is therefore often convenient to work with an ordered basis; this is typically done by indexing the basis elements by the first natural numbers. Then, the coordinates of a vector form a sequence similarly indexed, and a vector is completely characterized by the sequence of coordinates. An ordered basis, especially when used in conjunction with an origin, is also called a coordinate frame or simply a frame (for example, a Cartesian frame or an affine frame). 
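In R2 the coordinates of a vector over an ordered basis can be computed by Cramer's rule; the sketch below (helper name and sample basis are illustrative) also shows why the ordering matters: reversing the ordered basis swaps the coordinates, echoing the 3b1 + 2b2 versus 2b1 + 3b2 example above.

```python
def coords(b1, b2, v):
    """Coordinates (l1, l2) of v over the ordered basis (b1, b2) of R^2,
    i.e. the unique solution of l1*b1 + l2*b2 = v, via Cramer's rule."""
    d = b1[0] * b2[1] - b1[1] * b2[0]        # nonzero exactly when (b1, b2) is a basis
    l1 = (v[0] * b2[1] - v[1] * b2[0]) / d
    l2 = (b1[0] * v[1] - b1[1] * v[0]) / d
    return l1, l2

b1, b2 = (1, 2), (3, 1)
v = (9, 8)                                   # v = 3*b1 + 2*b2
# coords(b1, b2, v) -> (3.0, 2.0)
# Reversing the ordered basis swaps the coordinates:
# coords(b2, b1, v) -> (2.0, 3.0)
```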
Let, as usual, F n {\displaystyle F^{n}} be the set of the n-tuples of elements of F. This set is an F-vector space, with addition and scalar multiplication defined component-wise. The map φ : ( λ 1 , … , λ n ) ↦ λ 1 b 1 + ⋯ + λ n b n {\displaystyle \varphi :(\lambda _{1},\ldots ,\lambda _{n})\mapsto \lambda _{1}\mathbf {b} _{1}+\cdots +\lambda _{n}\mathbf {b} _{n}} is a linear isomorphism from the vector space F n {\displaystyle F^{n}} onto V. In other words, F n {\displaystyle F^{n}} is the coordinate space of V, and the n-tuple φ − 1 ( v ) {\displaystyle \varphi ^{-1}(\mathbf {v} )} is the coordinate vector of v. The inverse image by φ {\displaystyle \varphi } of b i {\displaystyle \mathbf {b} _{i}} is the n-tuple e i {\displaystyle \mathbf {e} _{i}} all of whose components are 0, except the ith that is 1. The e i {\displaystyle \mathbf {e} _{i}} form an ordered basis of F n {\displaystyle F^{n}} , which is called its standard basis or canonical basis. The ordered basis B is the image by φ {\displaystyle \varphi } of the canonical basis of F n {\displaystyle F^{n}} . It follows from what precedes that every ordered basis is the image by a linear isomorphism of the canonical basis of F n {\displaystyle F^{n}} , and that every linear isomorphism from F n {\displaystyle F^{n}} onto V may be defined as the isomorphism that maps the canonical basis of F n {\displaystyle F^{n}} onto a given ordered basis of V. In other words, it is equivalent to define an ordered basis of V, or a linear isomorphism from F n {\displaystyle F^{n}} onto V. == Change of basis == Let V be a vector space of dimension n over a field F. 
Given two (ordered) bases B old = ( v 1 , … , v n ) {\displaystyle B_{\text{old}}=(\mathbf {v} _{1},\ldots ,\mathbf {v} _{n})} and B new = ( w 1 , … , w n ) {\displaystyle B_{\text{new}}=(\mathbf {w} _{1},\ldots ,\mathbf {w} _{n})} of V, it is often useful to express the coordinates of a vector x with respect to B o l d {\displaystyle B_{\mathrm {old} }} in terms of the coordinates with respect to B n e w . {\displaystyle B_{\mathrm {new} }.} This can be done by the change-of-basis formula, that is described below. The subscripts "old" and "new" have been chosen because it is customary to refer to B o l d {\displaystyle B_{\mathrm {old} }} and B n e w {\displaystyle B_{\mathrm {new} }} as the old basis and the new basis, respectively. It is useful to describe the old coordinates in terms of the new ones, because, in general, one has expressions involving the old coordinates, and if one wants to obtain equivalent expressions in terms of the new coordinates; this is obtained by replacing the old coordinates by their expressions in terms of the new coordinates. Typically, the new basis vectors are given by their coordinates over the old basis, that is, w j = ∑ i = 1 n a i , j v i . {\displaystyle \mathbf {w} _{j}=\sum _{i=1}^{n}a_{i,j}\mathbf {v} _{i}.} If ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} and ( y 1 , … , y n ) {\displaystyle (y_{1},\ldots ,y_{n})} are the coordinates of a vector x over the old and the new basis respectively, the change-of-basis formula is x i = ∑ j = 1 n a i , j y j , {\displaystyle x_{i}=\sum _{j=1}^{n}a_{i,j}y_{j},} for i = 1, ..., n. This formula may be concisely written in matrix notation. 
Let A be the matrix of the a i , j {\displaystyle a_{i,j}} , and X = [ x 1 ⋮ x n ] and Y = [ y 1 ⋮ y n ] {\displaystyle X={\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}\quad {\text{and}}\quad Y={\begin{bmatrix}y_{1}\\\vdots \\y_{n}\end{bmatrix}}} be the column vectors of the coordinates of v in the old and the new basis respectively, then the formula for changing coordinates is X = A Y . {\displaystyle X=AY.} The formula can be proven by considering the decomposition of the vector x on the two bases: one has x = ∑ i = 1 n x i v i , {\displaystyle \mathbf {x} =\sum _{i=1}^{n}x_{i}\mathbf {v} _{i},} and x = ∑ j = 1 n y j w j = ∑ j = 1 n y j ∑ i = 1 n a i , j v i = ∑ i = 1 n ( ∑ j = 1 n a i , j y j ) v i . {\displaystyle \mathbf {x} =\sum _{j=1}^{n}y_{j}\mathbf {w} _{j}=\sum _{j=1}^{n}y_{j}\sum _{i=1}^{n}a_{i,j}\mathbf {v} _{i}=\sum _{i=1}^{n}{\biggl (}\sum _{j=1}^{n}a_{i,j}y_{j}{\biggr )}\mathbf {v} _{i}.} The change-of-basis formula results then from the uniqueness of the decomposition of a vector over a basis, here B old {\displaystyle B_{\text{old}}} ; that is x i = ∑ j = 1 n a i , j y j , {\displaystyle x_{i}=\sum _{j=1}^{n}a_{i,j}y_{j},} for i = 1, ..., n. == Related notions == === Free module === If one replaces the field occurring in the definition of a vector space by a ring, one gets the definition of a module. For modules, linear independence and spanning sets are defined exactly as for vector spaces, although "generating set" is more commonly used than that of "spanning set". Like for vector spaces, a basis of a module is a linearly independent subset that is also a generating set. A major difference with the theory of vector spaces is that not every module has a basis. A module that has a basis is called a free module. Free modules play a fundamental role in module theory, as they may be used for describing the structure of non-free modules through free resolutions. A module over the integers is exactly the same thing as an abelian group. 
Thus a free module over the integers is also a free abelian group. Free abelian groups have specific properties that are not shared by modules over other rings. Specifically, every subgroup of a free abelian group is a free abelian group, and, if G is a subgroup of a finitely generated free abelian group H (that is an abelian group that has a finite basis), then there is a basis e 1 , … , e n {\displaystyle \mathbf {e} _{1},\ldots ,\mathbf {e} _{n}} of H and an integer 0 ≤ k ≤ n such that a 1 e 1 , … , a k e k {\displaystyle a_{1}\mathbf {e} _{1},\ldots ,a_{k}\mathbf {e} _{k}} is a basis of G, for some nonzero integers a 1 , … , a k {\displaystyle a_{1},\ldots ,a_{k}} . For details, see Free abelian group § Subgroups. === Analysis === In the context of infinite-dimensional vector spaces over the real or complex numbers, the term Hamel basis (named after Georg Hamel) or algebraic basis can be used to refer to a basis as defined in this article. This is to make a distinction with other notions of "basis" that exist when infinite-dimensional vector spaces are endowed with extra structure. The most important alternatives are orthogonal bases on Hilbert spaces, Schauder bases, and Markushevich bases on normed linear spaces. In the case of the real numbers R viewed as a vector space over the field Q of rational numbers, Hamel bases are uncountable, and have specifically the cardinality of the continuum, which is the cardinal number 2 ℵ 0 {\displaystyle 2^{\aleph _{0}}} , where ℵ 0 {\displaystyle \aleph _{0}} (aleph-nought) is the smallest infinite cardinal, the cardinal of the integers. The common feature of the other notions is that they permit the taking of infinite linear combinations of the basis vectors in order to generate the space. This, of course, requires that infinite sums are meaningfully defined on these spaces, as is the case for topological vector spaces – a large class of vector spaces including e.g. Hilbert spaces, Banach spaces, or Fréchet spaces. 
The preference of other types of bases for infinite-dimensional spaces is justified by the fact that the Hamel basis becomes "too big" in Banach spaces: If X is an infinite-dimensional normed vector space that is complete (i.e. X is a Banach space), then any Hamel basis of X is necessarily uncountable. This is a consequence of the Baire category theorem. The completeness as well as infinite dimension are crucial assumptions in the previous claim. Indeed, finite-dimensional spaces have by definition finite bases and there are infinite-dimensional (non-complete) normed spaces that have countable Hamel bases. Consider c 00 {\displaystyle c_{00}} , the space of the sequences x = ( x n ) {\displaystyle x=(x_{n})} of real numbers that have only finitely many non-zero elements, with the norm ‖ x ‖ = sup n | x n | {\textstyle \|x\|=\sup _{n}|x_{n}|} . Its standard basis, consisting of the sequences having only one non-zero element, which is equal to 1, is a countable Hamel basis. ==== Example ==== In the study of Fourier series, one learns that the functions {1} ∪ { sin(nx), cos(nx) : n = 1, 2, 3, ... } are an "orthogonal basis" of the (real or complex) vector space of all (real or complex valued) functions on the interval [0, 2π] that are square-integrable on this interval, i.e., functions f satisfying ∫ 0 2 π | f ( x ) | 2 d x < ∞ . {\displaystyle \int _{0}^{2\pi }\left|f(x)\right|^{2}\,dx<\infty .} The functions {1} ∪ { sin(nx), cos(nx) : n = 1, 2, 3, ... } are linearly independent, and every function f that is square-integrable on [0, 2π] is an "infinite linear combination" of them, in the sense that lim n → ∞ ∫ 0 2 π | a 0 + ∑ k = 1 n ( a k cos ⁡ ( k x ) + b k sin ⁡ ( k x ) ) − f ( x ) | 2 d x = 0 {\displaystyle \lim _{n\to \infty }\int _{0}^{2\pi }{\biggl |}a_{0}+\sum _{k=1}^{n}\left(a_{k}\cos \left(kx\right)+b_{k}\sin \left(kx\right)\right)-f(x){\biggr |}^{2}dx=0} for suitable (real or complex) coefficients ak, bk. 
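The orthogonality underlying this expansion can be observed numerically; the sketch below approximates the L2 inner product on [0, 2π] with a Riemann sum (the step count and the particular functions compared are illustrative choices):

```python
import math

def inner(f, g, steps=20000):
    """Approximate the inner product  integral over [0, 2*pi] of f(x)*g(x) dx
    by a left Riemann sum; for smooth 2*pi-periodic integrands this
    is extremely accurate."""
    h = 2 * math.pi / steps
    return h * sum(f(i * h) * g(i * h) for i in range(steps))

# Distinct members of {1, sin(nx), cos(nx)} are orthogonal:
# inner(math.sin, lambda x: math.cos(2 * x)) is approximately 0,
# while inner(math.sin, math.sin) is approximately pi.
```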
But many square-integrable functions cannot be represented as finite linear combinations of these basis functions, which therefore do not comprise a Hamel basis. Every Hamel basis of this space is much bigger than this merely countably infinite set of functions. Hamel bases of spaces of this kind are typically not useful, whereas orthonormal bases of these spaces are essential in Fourier analysis. === Geometry === The geometric notions of an affine space, projective space, convex set, and cone have related notions of basis. An affine basis for an n-dimensional affine space is n + 1 {\displaystyle n+1} points in general linear position. A projective basis is n + 2 {\displaystyle n+2} points in general position, in a projective space of dimension n. A convex basis of a polytope is the set of the vertices of its convex hull. A cone basis consists of one point by edge of a polygonal cone. See also a Hilbert basis (linear programming). === Random basis === For a probability distribution in Rn with a probability density function, such as the equidistribution in an n-dimensional ball with respect to Lebesgue measure, it can be shown that n randomly and independently chosen vectors will form a basis with probability one, which is due to the fact that n linearly dependent vectors x1, ..., xn in Rn should satisfy the equation det[x1 ⋯ xn] = 0 (zero determinant of the matrix with columns xi), and the set of zeros of a non-trivial polynomial has zero measure. This observation has led to techniques for approximating random bases. It is difficult to check numerically the linear dependence or exact orthogonality. Therefore, the notion of ε-orthogonality is used. For spaces with inner product, x is ε-orthogonal to y if | ⟨ x , y ⟩ | / ( ‖ x ‖ ‖ y ‖ ) < ε {\displaystyle \left|\left\langle x,y\right\rangle \right|/\left(\left\|x\right\|\left\|y\right\|\right)<\varepsilon } (that is, cosine of the angle between x and y is less than ε). 
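The ε-orthogonality just defined is easy to test for random high-dimensional vectors; a sketch (the dimension, random seed, and threshold 0.1 are arbitrary illustrative choices):

```python
import math
import random

def eps_orthogonal(x, y, eps):
    """True if |<x, y>| / (||x|| ||y||) < eps, i.e. the cosine of the
    angle between x and y is smaller than eps in absolute value."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = lambda v: math.sqrt(sum(a * a for a in v))
    return abs(dot) / (norm(x) * norm(y)) < eps

random.seed(0)
n = 2000                                  # dimension
x = [random.uniform(-1, 1) for _ in range(n)]
y = [random.uniform(-1, 1) for _ in range(n)]

# For independent vectors from the cube [-1, 1]^n the cosine of the
# angle is typically of order 1/sqrt(n), so with n = 2000 this pair
# is almost certainly 0.1-orthogonal, while no vector is
# eps-orthogonal to itself.
```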
In high dimensions, two independent random vectors are almost orthogonal with high probability, and the number of independent random vectors that are all pairwise almost orthogonal (with a given high probability) grows exponentially with the dimension. More precisely, consider the equidistribution in an n-dimensional ball. Choose N independent random vectors from the ball (they are independent and identically distributed). Let θ be a small positive number. Then these N random vectors are all pairwise ε-orthogonal with probability 1 − θ. This N grows exponentially with the dimension n, and N ≫ n {\displaystyle N\gg n} for sufficiently big n. This property of random bases is a manifestation of the so-called measure concentration phenomenon. The figure (right) illustrates the distribution of the lengths N of chains of pairwise almost orthogonal vectors that are independently randomly sampled from the n-dimensional cube [−1, 1]n, as a function of the dimension n. A point is first randomly selected in the cube, and a second point is randomly chosen in the same cube. If the angle between the two vectors is within π/2 ± 0.037π/2, the second vector is retained. At each subsequent step a new vector is generated in the same hypercube, and its angles with the previously generated vectors are evaluated. If all these angles are within π/2 ± 0.037π/2, the vector is retained. The process is repeated until the chain of almost orthogonality breaks, and the number of such pairwise almost orthogonal vectors (the length of the chain) is recorded. For each dimension n, 20 such chains were constructed numerically, and the distribution of their lengths is presented. == Proof that every vector space has a basis == Let V be any vector space over some field F. Let X be the set of all linearly independent subsets of V. The set X is nonempty since the empty set is an independent subset of V, and it is partially ordered by inclusion, which is denoted, as usual, by ⊆.
Let Y be a subset of X that is totally ordered by ⊆, and let LY be the union of all the elements of Y (which are themselves certain subsets of V). Since (Y, ⊆) is totally ordered, every finite subset of LY is a subset of an element of Y, which is a linearly independent subset of V, and hence LY is linearly independent. Thus LY is an element of X. Therefore, LY is an upper bound for Y in (X, ⊆): it is an element of X, that contains every element of Y. As X is nonempty, and every totally ordered subset of (X, ⊆) has an upper bound in X, Zorn's lemma asserts that X has a maximal element. In other words, there exists some element Lmax of X satisfying the condition that whenever Lmax ⊆ L for some element L of X, then L = Lmax. It remains to prove that Lmax is a basis of V. Since Lmax belongs to X, we already know that Lmax is a linearly independent subset of V. If there were some vector w of V that is not in the span of Lmax, then w would not be an element of Lmax either. Let Lw = Lmax ∪ {w}. This set is an element of X, that is, it is a linearly independent subset of V (because w is not in the span of Lmax, and Lmax is independent). As Lmax ⊆ Lw, and Lmax ≠ Lw (because Lw contains the vector w that is not contained in Lmax), this contradicts the maximality of Lmax. Thus this shows that Lmax spans V. Hence Lmax is linearly independent and spans V. It is thus a basis of V, and this proves that every vector space has a basis. This proof relies on Zorn's lemma, which is equivalent to the axiom of choice. Conversely, it has been proved that if every vector space has a basis, then the axiom of choice is true. Thus the two assertions are equivalent. 
== See also == Basis of a matroid Basis of a linear program Coordinate system Change of basis – Coordinate change in linear algebra Frame of a vector space – Similar to the basis of a vector space, but not necessarily linearly independent Spherical basis – Basis used to express spherical tensors == Notes == == References == === General references === Blass, Andreas (1984), "Existence of bases implies the axiom of choice" (PDF), Axiomatic set theory, Contemporary Mathematics volume 31, Providence, R.I.: American Mathematical Society, pp. 31–33, ISBN 978-0-8218-5026-8, MR 0763890 Brown, William A. (1991), Matrices and vector spaces, New York: M. Dekker, ISBN 978-0-8247-8419-5 Lang, Serge (1987), Linear algebra, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96412-6 === Historical references === Banach, Stefan (1922), "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales (On operations in abstract sets and their application to integral equations)" (PDF), Fundamenta Mathematicae (in French), 3: 133–181, doi:10.4064/fm-3-1-133-181, ISSN 0016-2736 Bolzano, Bernard (1804), Betrachtungen über einige Gegenstände der Elementargeometrie (Considerations of some aspects of elementary geometry) (in German) Bourbaki, Nicolas (1969), Éléments d'histoire des mathématiques (Elements of history of mathematics) (in French), Paris: Hermann Dorier, Jean-Luc (1995), "A general outline of the genesis of vector space theory", Historia Mathematica, 22 (3): 227–261, doi:10.1006/hmat.1995.1024, MR 1347828 Fourier, Jean Baptiste Joseph (1822), Théorie analytique de la chaleur (in French), Chez Firmin Didot, père et fils Grassmann, Hermann (1844), Die Lineale Ausdehnungslehre - Ein neuer Zweig der Mathematik (in German), reprint: Hermann Grassmann. Translated by Lloyd C. Kannenberg.
(2000), Extension Theory, Kannenberg, L.C., Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2031-5 Hamel, Georg (1905), "Eine Basis aller Zahlen und die unstetigen Lösungen der Funktionalgleichung f(x+y)=f(x)+f(y)", Mathematische Annalen (in German), 60 (3), Leipzig: 459–462, doi:10.1007/BF01457624, S2CID 120063569 Hamilton, William Rowan (1853), Lectures on Quaternions, Royal Irish Academy Möbius, August Ferdinand (1827), Der Barycentrische Calcul : ein neues Hülfsmittel zur analytischen Behandlung der Geometrie (Barycentric calculus: a new utility for an analytic treatment of geometry) (in German), archived from the original on 2009-04-12 Moore, Gregory H. (1995), "The axiomatization of linear algebra: 1875–1940", Historia Mathematica, 22 (3): 262–303, doi:10.1006/hmat.1995.1025 Peano, Giuseppe (1888), Calcolo Geometrico secondo l'Ausdehnungslehre di H. Grassmann preceduto dalle Operazioni della Logica Deduttiva (in Italian), Turin == External links == Instructional videos from Khan Academy Introduction to bases of subspaces Proof that any subspace basis has same number of elements "Linear combinations, span, and basis vectors". Essence of linear algebra. August 6, 2016. Archived from the original on 2021-11-17 – via YouTube. "Basis", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Basis_(linear_algebra)
Computer science is the study of computation, information, and automation. It ranges from theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural language processing aims to understand and process textual and linguistic data. The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science. == History == The earliest foundations of what would become computer science predate the invention of the modern digital computer.
Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, because of various reasons, including the fact that he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the 2nd of the only two designs for mechanical analytical engines in history. 
In 1914, the Spanish engineer Leonardo Torres Quevedo published his Essays on Automatics, and designed, inspired by Babbage, a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, a prototype that demonstrated the feasibility of an electromechanical analytical engine, on which commands could be typed and the results printed automatically. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business, to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and Columbia University was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946.
Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right. == Etymology and scope == Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM, in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921. Fein justifies the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline. His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries.
An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases. In the early days of computing, a number of terms for the practitioners of the field of computing were suggested (albeit facetiously) in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain." A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been exchange of ideas between the various computer-related disciplines. 
Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic. Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra. The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines. The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research. == Philosophy == === Epistemology of computer science === Despite the word science in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. 
Simon argued in 1975: "Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available." It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena. Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and programs that can be deductively reasoned through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems. === Paradigms of computer science === A number of computer scientists have argued for the distinction of three separate paradigms in computer science.
Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence). Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems. == Fields == As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science. 
=== Theoretical computer science === Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. It aims to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. ==== Theory of computation ==== According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems. The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation. ==== Information and coding theory ==== Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods. ==== Data structures and algorithms ==== Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency. 
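The resource costs studied by computational complexity theory can be made concrete by counting operations. The sketch below (illustrative only; the function names are ours) compares the number of comparisons performed by a linear scan, which grows like n, with binary search on sorted data, which grows like log n.

```python
def linear_search_steps(sorted_list, target):
    """Number of comparisons a linear scan makes before finding target: O(n)."""
    steps = 0
    for value in sorted_list:
        steps += 1
        if value == target:
            break
    return steps

def binary_search_steps(sorted_list, target):
    """Number of comparisons binary search makes by halving the range: O(log n)."""
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            break
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))
# Searching for the last element: the linear scan needs a million comparisons,
# while binary search needs at most about log2(1_000_000) ≈ 20.
```

The same algorithm/data-structure pairing (binary search requires sorted storage) is the kind of trade-off that the study of data structures and algorithms makes precise.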
==== Programming language theory and formal methods ==== Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals. Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification. === Applied computer science === ==== Computer graphics and visualization ==== Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. 
The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games. ==== Image and sound processing ==== Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information processing algorithms independently of the type of information carrier – whether it is electrical, mechanical or biological. This field plays an important role in information theory, telecommunications, and information engineering, and has applications in medical image computing and speech synthesis, among others. "What is the lower bound on the complexity of fast Fourier transform algorithms?" is one of the unsolved problems in theoretical computer science. ==== Computational science, finance and engineering ==== Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, societies and social situations (notably war games) along with their habitats, and interactions among biological cells. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.
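The discrete Fourier transform behind the FFT question above can be written down naively in O(n²) operations; the FFT computes the same result in O(n log n). A minimal sketch (illustrative only; the test signal is our assumption): a pure cosine at frequency 3 concentrates its spectral energy in bins 3 and n − 3.

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform: O(n^2) complex multiplications.
    The fast Fourier transform computes the same values in O(n log n)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

n = 64
signal = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
spectrum = [abs(c) for c in dft(signal)]
peak = max(range(n), key=lambda k: spectrum[k])
# The magnitude at bin 3 (and its mirror bin 61) is n/2 = 32;
# every other bin is zero up to floating-point error.
```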
==== Human–computer interaction ==== Human–computer interaction (HCI) is the field of study and research concerned with the design and use of computer systems, mainly based on the analysis of the interaction between humans and computer interfaces. HCI has several subfields that focus on the relationship between emotions, social behavior and brain activity with computers. ==== Software engineering ==== Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software—it does not just deal with the creation or manufacture of new software, but its internal arrangement and maintenance. Examples include software testing, systems engineering, technical debt, and software development processes. ==== Artificial intelligence ==== Artificial intelligence (AI) aims to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding.
The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data. === Computer systems === ==== Computer architecture and microarchitecture ==== Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. Computer engineers study computational logic and design of computer hardware, from individual processor components, microcontrollers, and personal computers to supercomputers and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks Jr., members of the Machine Organization department in IBM's main research center in 1959. ==== Concurrent, parallel and distributed computing ==== Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the parallel random access machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals. ==== Computer networks ==== This branch of computer science studies the construction and behavior of computer networks.
It addresses their performance, resilience, security, scalability, and cost-effectiveness, along with the variety of services they can provide. ==== Computer security and cryptography ==== Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits. ==== Databases and data mining ==== A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets. == Discoveries == The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science: Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything". All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.). Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything". 
Every algorithm can be expressed in a language for a computer consisting of only five basic instructions: move left one location; move right one location; read symbol at current location; print 0 at current location; print 1 at current location. Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything". Only three rules are needed to combine any set of basic instructions into more complex ones: sequence: first do this, then do that; selection: IF such-and-such is the case, THEN do this, ELSE do that; repetition: WHILE such-and-such is the case, DO this. The three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming). == Programming paradigms == Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include: Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements. Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates. Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods.
A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another. Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission-critical software programs. Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities. == Research == Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals. == See also == == Notes == == References == == Further reading == == External links == DBLP Computer Science Bibliography Association for Computing Machinery Institute of Electrical and Electronics Engineers
Wikipedia/Computer_science
In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, how efficiently they can be solved or to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?". In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation (see Church–Turing thesis). It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a finite amount of memory. == History == The theory of computation can be considered the creation of models of all kinds in the field of computer science; it therefore draws heavily on mathematics and logic. In the last century, it separated from mathematics and became an independent academic discipline with its own conferences such as FOCS in 1960 and STOC in 1969, and its own awards such as the IMU Abacus Medal (established in 1981 as the Rolf Nevanlinna Prize), the Gödel Prize, established in 1993, and the Knuth Prize, established in 1996.
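The Turing machine model described above can be made concrete with a short simulator. The following Python sketch is illustrative only (not from the article); the state names and the particular increment machine are invented for the example. It runs a machine that adds one to a binary number by scanning to the right end of the input and then propagating a carry leftwards:

```python
def run_turing_machine(transitions, tape_str, state="right", blank="_"):
    """Simulate a single-tape Turing machine given as a transition table.

    transitions maps (state, symbol) -> (write, move, next_state), where
    move is 'L' or 'R'. The machine halts when no transition applies.
    """
    tape = {i: s for i, s in enumerate(tape_str)}
    head = 0
    while (state, tape.get(head, blank)) in transitions:
        write, move, state = transitions[(state, tape.get(head, blank))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A tiny machine that adds 1 to a binary number (head starts at the left end).
INCREMENT = {
    ("right", "0"): ("0", "R", "right"),  # scan right over the input
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),  # hit the blank: start carrying
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "R", "done"),   # 0 + carry -> 1, stop
    ("carry", "_"): ("1", "R", "done"),   # ran off the left end: new digit
}

print(run_turing_machine(INCREMENT, "1011"))  # 1011 (11) -> 1100 (12)
```

Note that every step uses only the primitive actions of the model: read the current cell, write a symbol, and move one cell left or right.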
Some pioneers of the theory of computation were Ramon Llull, Alonzo Church, Kurt Gödel, Alan Turing, Stephen Kleene, Rózsa Péter, John von Neumann and Claude Shannon. == Branches == === Automata theory === Automata theory is the study of abstract machines (or more appropriately, abstract 'mathematical' machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. Automata comes from the Greek word αὐτόματα, meaning "self-acting": something that operates by itself. Automata theory is also closely related to formal language theory, as the automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as theoretical models for computing machines, and are used for proofs about computability. ==== Formal language theory ==== Formal language theory is a branch of mathematics concerned with describing languages as a set of operations over an alphabet. It is closely linked with automata theory, as automata are used to generate and recognize formal languages. There are several classes of formal languages, each allowing more complex language specification than the one before it, i.e. the Chomsky hierarchy, and each corresponding to a class of automata which recognizes it. Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed. === Computability theory === Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. The statement that the halting problem cannot be solved by a Turing machine is one of the most important results in computability theory, as it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine.
Much of computability theory builds on the halting problem result. Another important step in computability theory was Rice's theorem, which states that for all non-trivial properties of partial functions, it is undecidable whether a Turing machine computes a partial function with that property. Computability theory is closely related to the branch of mathematical logic called recursion theory, which removes the restriction of studying only models of computation which are reducible to the Turing model. Many mathematicians and computational theorists who study recursion theory will refer to it as computability theory. === Computational complexity theory === Computational complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently the problem can be solved. Two major aspects are considered: time complexity and space complexity, which are respectively how many steps it takes to perform a computation, and how much memory is required to perform that computation. In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required to solve the problem as a function of the size of the input problem. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grow linearly in the size of the problem. To simplify this problem, computer scientists have adopted big O notation, which allows functions to be compared in a way that ensures that particular aspects of a machine's construction do not need to be considered, but rather only the asymptotic behavior as problems become large. 
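The unsorted-list search just described can be sketched in a few lines. This example is illustrative only; the function name and sample data are invented for the sketch. Counting comparisons makes the linear growth visible: with n items, the worst case performs exactly n comparisons.

```python
def linear_search(items, target):
    """Scan the list left to right, counting comparisons.

    Worst case (target absent or in last position): n comparisons,
    i.e. the number of steps grows linearly with the input size, O(n).
    """
    steps = 0
    for index, value in enumerate(items):
        steps += 1
        if value == target:
            return index, steps
    return -1, steps

numbers = [14, 3, 27, 8, 42, 19]
print(linear_search(numbers, 42))  # found at index 4 after 5 comparisons
print(linear_search(numbers, 99))  # not found: all 6 elements examined
```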
So in our previous example, we might say that the problem requires O ( n ) {\displaystyle O(n)} steps to solve. Perhaps the most important open problem in all of computer science is the question of whether a certain broad class of problems denoted NP can be solved efficiently. This is discussed further at Complexity classes P and NP, and the P versus NP problem is one of the seven Millennium Prize Problems stated by the Clay Mathematics Institute in 2000. The Official Problem Description was given by Turing Award winner Stephen Cook. == Models of computation == Aside from a Turing machine, other equivalent (see Church–Turing thesis) models of computation are in use. Lambda calculus: a computation consists of an initial lambda expression (or two if you want to separate the function and its input) plus a finite sequence of lambda terms, each deduced from the preceding term by one application of beta reduction. Combinatory logic: a concept which has many similarities to λ {\displaystyle \lambda } -calculus, but also important differences exist (e.g. fixed point combinator Y has normal form in combinatory logic but not in λ {\displaystyle \lambda } -calculus). Combinatory logic was developed with great ambitions: understanding the nature of paradoxes, making foundations of mathematics more economical (conceptually), eliminating the notion of variables (thus clarifying their role in mathematics). μ-recursive functions: a computation consists of a mu-recursive function, i.e. its defining sequence, any input value(s) and a sequence of recursive functions appearing in the defining sequence with inputs and outputs. Thus, if in the defining sequence of a recursive function f ( x ) {\displaystyle f(x)} the functions g ( x ) {\displaystyle g(x)} and h ( x , y ) {\displaystyle h(x,y)} appear, then terms of the form 'g(5)=7' or 'h(3,2)=10' might appear.
Each entry in this sequence needs to be an application of a basic function or follow from the entries above by using composition, primitive recursion or μ recursion. For instance if f ( x ) = h ( x , g ( x ) ) {\displaystyle f(x)=h(x,g(x))} , then for 'f(5)=3' to appear, terms like 'g(5)=6' and 'h(5,6)=3' must occur above. The computation terminates only if the final term gives the value of the recursive function applied to the inputs. Markov algorithm: a string rewriting system that uses grammar-like rules to operate on strings of symbols. Register machine: a theoretically interesting idealization of a computer. There are several variants. In most of them, each register can hold a natural number (of unlimited size), and the instructions are simple (and few in number), e.g. only decrementation (combined with conditional jump) and incrementation exist (and halting). The lack of the infinite (or dynamically growing) external store (seen at Turing machines) can be understood by replacing its role with Gödel numbering techniques: the fact that each register holds a natural number allows the possibility of representing a complicated thing (e.g. a sequence, or a matrix etc.) by an appropriately huge natural number; unambiguity of both representation and interpretation can be established by number theoretical foundations of these techniques. In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, specify string patterns in many contexts, from office productivity software to programming languages. Finite automata, another formalism mathematically equivalent to regular expressions, are used in circuit design and in some kinds of problem-solving. Context-free grammars specify programming language syntax. Non-deterministic pushdown automata are another formalism equivalent to context-free grammars.
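As an illustration of the restricted models just mentioned, a deterministic finite automaton can be simulated in a few lines. This sketch is illustrative only (the transition-table encoding is invented for the example); it recognizes the regular language of binary strings containing an even number of 1s, the same language described by the regular expression (0*10*1)*0*:

```python
def dfa_accepts(transitions, start, accepting, string):
    """Run a deterministic finite automaton over the input string.

    transitions maps (state, symbol) to the next state; the string is
    accepted if the machine ends in an accepting state.
    """
    state = start
    for symbol in string:
        state = transitions[(state, symbol)]
    return state in accepting

# Two states suffice: track the parity of the number of 1s seen so far.
EVEN_ONES = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(dfa_accepts(EVEN_ONES, "even", {"even"}, "1001"))  # True: two 1s
print(dfa_accepts(EVEN_ONES, "even", {"even"}, "10"))    # False: one 1
```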
Primitive recursive functions are a defined subclass of the recursive functions. Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; in this way, the Chomsky hierarchy of languages is obtained. == References == == Further reading == Textbooks aimed at computer scientists (There are many textbooks in this area; this list is by necessity incomplete.) Hopcroft, John E.; Motwani, Rajeev; Ullman, Jeffrey D. (2006) [1979]. Introduction to Automata Theory, Languages, and Computation (3rd ed.). Addison-Wesley. ISBN 0-321-45536-3. — One of the standard references in the field. Linz P (2007). An introduction to formal language and automata. Narosa Publishing. ISBN 9788173197819. Sipser, Michael (2013). Introduction to the Theory of Computation (3rd ed.). Cengage Learning. ISBN 978-1-133-18779-0. Eitan Gurari (1989). An Introduction to the Theory of Computation. Computer Science Press. ISBN 0-7167-8182-4. Archived from the original on 2007-01-07. Hein, James L. (1996) Theory of Computation. Sudbury, MA: Jones & Bartlett. ISBN 978-0-86720-497-1 A gentle introduction to the field, appropriate for second-year undergraduate computer science students. Taylor, R. Gregory (1998). Models of Computation and Formal Languages. New York: Oxford University Press. ISBN 978-0-19-510983-2 An unusually readable textbook, appropriate for upper-level undergraduates or beginning graduate students. Jon Kleinberg and Éva Tardos (2006): Algorithm Design, Pearson/Addison-Wesley, ISBN 978-0-32129535-4 Lewis, F. D. (2007). Essentials of theoretical computer science A textbook covering the topics of formal languages, automata and grammars. The emphasis appears to be on presenting an overview of the results and their applications rather than providing proofs of the results. Martin Davis, Ron Sigal, Elaine J.
Weyuker, Computability, complexity, and languages: fundamentals of theoretical computer science, 2nd ed., Academic Press, 1994, ISBN 0-12-206382-1. Covers a wider range of topics than most other introductory books, including program semantics and quantification theory. Aimed at graduate students. Books on computability theory from the (wider) mathematical perspective Hartley Rogers, Jr (1987). Theory of Recursive Functions and Effective Computability, MIT Press. ISBN 0-262-68052-1 S. Barry Cooper (2004). Computability Theory. Chapman and Hall/CRC. ISBN 1-58488-237-9. Carl H. Smith, A recursive introduction to the theory of computation, Springer, 1994, ISBN 0-387-94332-3. A shorter textbook suitable for graduate students in Computer Science. Historical perspective Richard L. Epstein and Walter A. Carnielli (2000). Computability: Computable Functions, Logic, and the Foundations of Mathematics, with Computability: A Timeline (2nd ed.). Wadsworth/Thomson Learning. ISBN 0-534-54644-7. == External links == Theory of Computation at MIT Theory of Computation at Harvard Computability Logic - A theory of interactive computation. The main web source on this subject.
Wikipedia/Theory_of_computation
Topology (from the Greek words τόπος, 'place, location', and λόγος, 'study') is the branch of mathematics concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling, and bending; that is, without closing holes, opening holes, tearing, gluing, or passing through itself. A topological space is a set endowed with a structure, called a topology, which allows defining continuous deformation of subspaces, and, more generally, all kinds of continuity. Euclidean spaces, and, more generally, metric spaces are examples of topological spaces, as any distance or metric defines a topology. The deformations that are considered in topology are homeomorphisms and homotopies. A property that is invariant under such deformations is a topological property. The following are basic examples of topological properties: the dimension, which allows distinguishing between a line and a surface; compactness, which allows distinguishing between a line and a circle; connectedness, which allows distinguishing a circle from two non-intersecting circles. The ideas underlying topology go back to Gottfried Wilhelm Leibniz, who in the 17th century envisioned the geometria situs and analysis situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, although it was not until the first decades of the 20th century that the idea of a topological space was developed. == Motivation == The motivating insight behind topology is that some geometric problems depend not on the exact shape of the objects involved, but rather on the way they are put together. For example, the square and the circle have many properties in common: they are both one-dimensional objects (from a topological point of view) and both separate the plane into two parts, the part inside and the part outside.
In one of the first papers in topology, Leonhard Euler demonstrated that it was impossible to find a route through the town of Königsberg (now Kaliningrad) that would cross each of its seven bridges exactly once. This result did not depend on the lengths of the bridges or on their distance from one another, but only on connectivity properties: which bridges connect to which islands or riverbanks. This Seven Bridges of Königsberg problem led to the branch of mathematics known as graph theory. Similarly, the hairy ball theorem of algebraic topology says that "one cannot comb the hair flat on a hairy ball without creating a cowlick." This fact is immediately convincing to most people, even though they might not recognize the more formal statement of the theorem, that there is no nonvanishing continuous tangent vector field on the sphere. As with the Bridges of Königsberg, the result does not depend on the shape of the sphere; it applies to any kind of smooth blob, as long as it has no holes. To deal with these problems that do not rely on the exact shape of the objects, one must be clear about just what properties these problems do rely on. From this need arises the notion of homeomorphism. The impossibility of crossing each bridge just once applies to any arrangement of bridges homeomorphic to those in Königsberg, and the hairy ball theorem applies to any space homeomorphic to a sphere. Intuitively, two spaces are homeomorphic if one can be deformed into the other without cutting or gluing. A famous example, known as the "Topologist's Breakfast", is that a topologist cannot distinguish a coffee mug from a doughnut; a sufficiently pliable doughnut could be reshaped to a coffee cup by creating a dimple and progressively enlarging it while shrinking the hole into a handle. Homeomorphism can be considered the most basic topological equivalence. Another is homotopy equivalence. 
This is harder to describe without getting technical, but the essential notion is that two objects are homotopy equivalent if they both result from "squishing" some larger object. == History == Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries. Among these are certain questions in geometry investigated by Leonhard Euler. His 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology. On 14 November 1750, Euler wrote to a friend that he had realized the importance of the edges of a polyhedron. This led to his polyhedron formula, V − E + F = 2 (where V, E, and F respectively indicate the number of vertices, edges, and faces of the polyhedron). Some authorities regard this analysis as the first theorem, signaling the birth of topology. Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti. Listing introduced the term "Topologie" in Vorstudien zur Topologie, written in his native German, in 1847, having used the word for ten years in correspondence before its first appearance in print. The English form "topology" was used in 1883 in Listing's obituary in the journal Nature to distinguish "qualitative geometry from the ordinary geometry in which quantitative relations chiefly are treated". Their work was corrected, consolidated and greatly extended by Henri Poincaré. In 1895, he published his ground-breaking paper on Analysis Situs, which introduced the concepts now known as homotopy and homology, which are now considered part of algebraic topology. The development of topology in the 20th century was marked by significant advances in both foundational theory and its application to other fields of mathematics. 
Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906. A metric space is now considered a special case of a general topological space, with any given topological space potentially giving rise to many distinct metric spaces. In 1914, Felix Hausdorff coined the term "topological space" and defined what is now called a Hausdorff space. Currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski. Modern topology depends strongly on the ideas of set theory, developed by Georg Cantor in the later part of the 19th century. In addition to establishing the basic ideas of set theory, Cantor considered point sets in Euclidean space as part of his study of Fourier series. For further developments, see point-set topology and algebraic topology. The 2022 Abel Prize was awarded to Dennis Sullivan "for his groundbreaking contributions to topology in its broadest sense, and in particular its algebraic, geometric and dynamical aspects". == Concepts == === Topologies on sets === The term topology also refers to a specific mathematical idea central to the area of mathematics called topology. Informally, a topology describes how elements of a set relate spatially to each other. The same set can have different topologies. For instance, the real line, the complex plane, and the Cantor set can be thought of as the same set with different topologies. Formally, let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if: Both the empty set and X are elements of τ. Any union of elements of τ is an element of τ. Any intersection of finitely many elements of τ is an element of τ. If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation Xτ may be used to denote a set X endowed with the particular topology τ. By definition, every topology is a π-system. 
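The three axioms above can be checked mechanically for a finite set. The following sketch is illustrative only (the helper name is invented); it exploits the fact that, for a finite family of sets, closure under pairwise unions and pairwise intersections already implies closure under arbitrary unions and finite intersections:

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the topology axioms for a finite family tau of subsets of X."""
    sets = {frozenset(s) for s in tau}
    # Axiom 1: the empty set and X are elements of tau.
    if frozenset() not in sets or frozenset(X) not in sets:
        return False
    # Axioms 2 and 3: closure under unions and (finite) intersections;
    # for a finite family, pairwise checks suffice.
    return all(a | b in sets and a & b in sets
               for a, b in combinations(sets, 2))

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))  # True: a nested chain
print(is_topology(X, [set(), {1}, {2}, X]))     # False: {1} ∪ {2} missing
```

The second family fails because the union {1, 2} of two members is not itself a member, so it is not a topology on X.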
The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (that is, its complement is open). A subset of X may be open, closed, both (a clopen set), or neither. The empty set and X itself are always both closed and open. An open subset of X which contains a point x is called an open neighborhood of x. === Continuous functions and homeomorphisms === A function or map from one topological space to another is called continuous if the inverse image of any open set is open. If the function maps the real numbers to the real numbers (both spaces with the standard topology), then this definition of continuous is equivalent to the definition of continuous in calculus. If a continuous function is one-to-one and onto, and if the inverse of the function is also continuous, then the function is called a homeomorphism and the domain of the function is said to be homeomorphic to the range. Another way of saying this is that the function has a natural extension to the topology. If two spaces are homeomorphic, they have identical topological properties and are considered topologically the same. The cube and the sphere are homeomorphic, as are the coffee cup and the doughnut. However, the sphere is not homeomorphic to the doughnut. === Manifolds === While topological spaces can be extremely varied and exotic, many areas of topology focus on the more familiar class of spaces known as manifolds. A manifold is a topological space that resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighborhood that is homeomorphic to the Euclidean space of dimension n. Lines and circles, but not figure eights, are one-dimensional manifolds. Two-dimensional manifolds are also called surfaces, although not all surfaces are manifolds. 
Examples include the plane, the sphere, and the torus, which can all be realized without self-intersection in three dimensions, and the Klein bottle and real projective plane, which cannot (that is, all their realizations are surfaces that are not manifolds). == Topics == === General topology === General topology is the branch of topology dealing with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. Another name for general topology is point-set topology. The basic object of study is topological spaces, which are sets equipped with a topology, that is, a family of subsets, called open sets, which is closed under finite intersections and (finite or infinite) unions. The fundamental concepts of topology, such as continuity, compactness, and connectedness, can be defined in terms of open sets. Intuitively, continuous functions take nearby points to nearby points. Compact sets are those that can be covered by finitely many sets of arbitrarily small size. Connected sets are sets that cannot be divided into two pieces that are far apart. The words nearby, arbitrarily small, and far apart can all be made precise by using open sets. Several topologies can be defined on a given space. Changing a topology consists of changing the collection of open sets. This changes which functions are continuous and which subsets are compact or connected. Metric spaces are an important class of topological spaces where the distance between any two points is defined by a function called a metric. In a metric space, an open set is a union of open disks, where an open disk of radius r centered at x is the set of all points whose distance to x is less than r. Many common spaces are topological spaces whose topology can be defined by a metric. 
This is the case for the real line, the complex plane, real and complex vector spaces, and Euclidean spaces. Having a metric simplifies many proofs. === Algebraic topology === Algebraic topology is a branch of mathematics that uses tools from algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence. The most important of these invariants are homotopy groups, homology, and cohomology. Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group. === Differential topology === Differential topology is the field dealing with differentiable functions on differentiable manifolds. It is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds. More specifically, differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are "softer" than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold – that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume. === Geometric topology === Geometric topology is a branch of topology that primarily focuses on low-dimensional manifolds (that is, spaces of dimensions 2, 3, and 4) and their interaction with geometry, but it also includes some higher-dimensional topology.
Some examples of topics in geometric topology are orientability, handle decompositions, local flatness, crumpling and the planar and higher-dimensional Schönflies theorem. In high-dimensional topology, characteristic classes are a basic invariant, and surgery theory is a key theory. Low-dimensional topology is strongly geometric, as reflected in the uniformization theorem in 2 dimensions – every surface admits a constant curvature metric; geometrically, it has one of 3 possible geometries: positive curvature/spherical, zero curvature/flat, and negative curvature/hyperbolic – and the geometrization conjecture (now theorem) in 3 dimensions – every 3-manifold can be cut into pieces, each of which has one of eight possible geometries. 2-dimensional topology can be studied as complex geometry in one variable (Riemann surfaces are complex curves) – by the uniformization theorem every conformal class of metrics is equivalent to a unique complex one, and 4-dimensional topology can be studied from the point of view of complex geometry in two variables (complex surfaces), though not every 4-manifold admits a complex structure. === Generalizations === Occasionally, one needs to use the tools of topology but a "set of points" is not available. In pointless topology one considers instead the lattice of open sets as the basic notion of the theory, while Grothendieck topologies are structures defined on arbitrary categories that allow the definition of sheaves on those categories and with that the definition of general cohomology theories. == Applications == === Biology === Topology has been used to study various biological systems including molecules and nanostructure (e.g., membraneous objects). In particular, circuit topology and knot theory have been extensively applied to classify and compare the topology of folded proteins and nucleic acids. 
Circuit topology classifies folded molecular chains based on the pairwise arrangement of their intra-chain contacts and chain crossings. Knot theory, a branch of topology, is used in biology to study the effects of certain enzymes on DNA. These enzymes cut, twist, and reconnect the DNA, causing knotting with observable effects such as slower electrophoresis. === Computer science === Topological data analysis uses techniques from algebraic topology to determine the large-scale structure of a set (for instance, determining if a cloud of points is spherical or toroidal). The main method used by topological data analysis is to: Replace a set of data points with a family of simplicial complexes, indexed by a proximity parameter. Analyse these topological complexes via algebraic topology – specifically, via the theory of persistent homology. Encode the persistent homology of a data set in the form of a parameterized version of a Betti number, which is called a barcode. Several branches of programming language semantics, such as domain theory, are formalized using topology. In this context, Steve Vickers, building on work by Samson Abramsky and Michael B. Smyth, characterizes topological spaces as Boolean or Heyting algebras over open sets, which are characterized as semidecidable (equivalently, finitely observable) properties. === Physics === Topology is relevant to physics in areas such as condensed matter physics, quantum field theory, quantum computing and physical cosmology. The topological dependence of mechanical properties in solids is of interest in the disciplines of mechanical engineering and materials science. Electrical and mechanical properties depend on the arrangement and network structures of molecules and elementary units in materials. The compressive strength of crumpled topologies is studied in attempts to understand the high strength-to-weight ratio of such structures that are mostly empty space.
Topology is of further significance in contact mechanics, where the dependence of stiffness and friction on the dimensionality of surface structures is the subject of interest, with applications in multi-body physics. A topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants. Although TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory, the theory of four-manifolds in algebraic topology, and the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for work related to topological field theory. The topological classification of Calabi–Yau manifolds has important implications in string theory, as different manifolds can sustain different kinds of strings. In topological quantum computers, the qubits are stored in topological properties, which are by definition invariant with respect to homotopies. In cosmology, topology can be used to describe the overall shape of the universe. This area of research is commonly known as spacetime topology. In condensed matter, a relevant application to topological physics comes from the possibility of obtaining a one-way current, which is a current protected from backscattering. It was first discovered in electronics with the famous quantum Hall effect, and then generalized to other areas of physics, for instance in photonics by F. D. M. Haldane. === Robotics === The possible positions of a robot can be described by a manifold called configuration space. In the area of motion planning, one finds paths between two points in configuration space. These paths represent a motion of the robot's joints and other parts into the desired pose. === Games and puzzles === Disentanglement puzzles are based on topological aspects of the puzzle's shapes and components.
=== Fiber art === In order to create a continuous join of pieces in a modular construction, it is necessary to create an unbroken path in an order that surrounds each piece and traverses each edge only once. This process is an application of the Eulerian path. == Resources and research == === Major journals === Geometry & Topology – a mathematics research journal focused on geometry and topology, and their applications, published by Mathematical Sciences Publishers. Journal of Topology – a scientific journal which publishes papers of high quality and significance in topology, geometry, and adjacent areas of mathematics. === Major books === Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-181629-9 Willard, Stephen (2016). General Topology. Dover Books on Mathematics. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7 Armstrong, M. A. (1983). Basic Topology. Undergraduate Texts in Mathematics. New York: Springer-Verlag. ISBN 978-0-387-90839-7 Kelley, John (1979). General Topology. Springer. == See also == == References == === Citations === === Bibliography === Aleksandrov, P.S. (1969) [1956]. "Chapter XVIII Topology". In Aleksandrov, A.D.; Kolmogorov, A.N.; Lavrent'ev, M.A. (eds.). Mathematics / Its Content, Methods and Meaning (2nd ed.). The M.I.T. Press. Croom, Fred H. (1989). Principles of Topology. Saunders College Publishing. ISBN 978-0-03-029804-2. Richeson, D. (2008). Euler's Gem: The Polyhedron Formula and the Birth of Topology. Princeton University Press. == Further reading == Ryszard Engelking, General Topology, Heldermann Verlag, Sigma Series in Pure Mathematics, December 1989, ISBN 3-88538-006-4. Bourbaki; Elements of Mathematics: General Topology, Addison–Wesley (1966). Breitenberger, E. (2006). "Johann Benedict Listing". In James, I.M. (ed.). History of Topology. North Holland. ISBN 978-0-444-82375-5. Kelley, John L. (1975) [1955]. General Topology. Graduate Texts in Mathematics. Vol. 27 (2nd ed.).
New York: Springer-Verlag. ISBN 978-0-387-90125-1. OCLC 1365153. Brown, Ronald (2006). Topology and Groupoids. Booksurge. ISBN 978-1-4196-2722-4. (Provides a well-motivated, geometric account of general topology, and shows the use of groupoids in discussing van Kampen's theorem, covering spaces, and orbit spaces.) Wacław Sierpiński, General Topology, Dover Publications, 2000, ISBN 0-486-41148-6 Pickover, Clifford A. (2006). The Möbius Strip: Dr. August Möbius's Marvelous Band in Mathematics, Games, Literature, Art, Technology, and Cosmology. Thunder's Mouth Press. ISBN 978-1-56025-826-1. (Provides a popular introduction to topology and geometry) Gemignani, Michael C. (1990) [1967]. Elementary Topology (2nd ed.). Dover Publications Inc. ISBN 978-0-486-66522-1. == External links == "Topology, general". Encyclopedia of Mathematics. EMS Press. 2001 [1994]. Elementary Topology: A First Course Viro, Ivanov, Netsvetaev, Kharlamov. The Topological Zoo at The Geometry Center. Topology Atlas Topology Course Lecture Notes Aisling McCluskey and Brian McMaster, Topology Atlas. Topology Glossary Moscow 1935: Topology moving towards America, a historical essay by Hassler Whitney.
Wikipedia/Topology
Algebraic varieties are the central objects of study in algebraic geometry, a sub-field of mathematics. Classically, an algebraic variety is defined as the set of solutions of a system of polynomial equations over the real or complex numbers. Modern definitions generalize this concept in several different ways, while attempting to preserve the geometric intuition behind the original definition.: 58  Conventions regarding the definition of an algebraic variety differ slightly. For example, some definitions require an algebraic variety to be irreducible, which means that it is not the union of two smaller sets that are closed in the Zariski topology. Under this definition, non-irreducible algebraic varieties are called algebraic sets. Other conventions do not require irreducibility. The fundamental theorem of algebra establishes a link between algebra and geometry by showing that a monic polynomial (an algebraic object) in one variable with complex number coefficients is determined by the multiset of its roots (a geometric object) in the complex plane. Generalizing this result, Hilbert's Nullstellensatz provides a fundamental correspondence between ideals of polynomial rings and algebraic sets. Using the Nullstellensatz and related results, mathematicians have established a strong correspondence between questions on algebraic sets and questions of ring theory. This correspondence is a defining feature of algebraic geometry. Many algebraic varieties are differentiable manifolds, but an algebraic variety may have singular points while a differentiable manifold cannot. Algebraic varieties can be characterized by their dimension. Algebraic varieties of dimension one are called algebraic curves and algebraic varieties of dimension two are called algebraic surfaces. In the context of modern scheme theory, an algebraic variety over a field is an integral (irreducible and reduced) scheme over that field whose structure morphism is separated and of finite type.
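As a concrete illustration of the classical definition, the following sketch computes the solution set of a small polynomial system, the points common to a line and a circle. It assumes the sympy library is available; any computer algebra system would serve equally well.

```python
# A classical algebraic variety is the solution set of a system of
# polynomial equations. Here we compute the points where the line
# x + y = 1 meets the circle x^2 + y^2 = 1, using sympy (assumed).
from sympy import symbols, solve

x, y = symbols("x y")
system = [x + y - 1, x**2 + y**2 - 1]

points = solve(system, [x, y])
# The common zero-locus is zero-dimensional: the two points (0, 1) and (1, 0).
```

Here the variety cut out by the two polynomials is a finite set of points; replacing either equation changes the geometry of the solution set accordingly.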
== Overview and definitions == An affine variety over an algebraically closed field is conceptually the easiest type of variety to define, which will be done in this section. Next, one can define projective and quasi-projective varieties in a similar way. The most general definition of a variety is obtained by patching together smaller quasi-projective varieties. It is not obvious that one can construct genuinely new examples of varieties in this way, but Nagata gave an example of such a new variety in the 1950s. === Affine varieties === For an algebraically closed field K and a natural number n, let An be an affine n-space over K, identified with K n {\displaystyle K^{n}} through the choice of an affine coordinate system. The polynomials  f  in the ring K[x1, ..., xn] can be viewed as K-valued functions on An by evaluating  f  at the points in An, i.e. by choosing values in K for each xi. For each set S of polynomials in K[x1, ..., xn], define the zero-locus Z(S) to be the set of points in An on which the functions in S simultaneously vanish, that is to say Z ( S ) = { x ∈ A n ∣ f ( x ) = 0 for all f ∈ S } . {\displaystyle Z(S)=\left\{x\in \mathbf {A} ^{n}\mid f(x)=0{\text{ for all }}f\in S\right\}.} A subset V of An is called an affine algebraic set if V = Z(S) for some S.: 2  A nonempty affine algebraic set V is called irreducible if it cannot be written as the union of two proper algebraic subsets.: 3  An irreducible affine algebraic set is also called an affine variety.: 3  (Some authors use the phrase affine variety to refer to any affine algebraic set, irreducible or not.) Affine varieties can be given a natural topology by declaring the closed sets to be precisely the affine algebraic sets. This topology is called the Zariski topology.: 2  Given a subset V of An, we define I(V) to be the ideal of all polynomial functions vanishing on V: I ( V ) = { f ∈ K [ x 1 , … , x n ] ∣ f ( x ) = 0 for all x ∈ V } .
{\displaystyle I(V)=\left\{f\in K[x_{1},\ldots ,x_{n}]\mid f(x)=0{\text{ for all }}x\in V\right\}.} For any affine algebraic set V, the coordinate ring or structure ring of V is the quotient of the polynomial ring by this ideal.: 4  === Projective varieties and quasi-projective varieties === Let k be an algebraically closed field and let Pn be the projective n-space over k. Let  f  in k[x0, ..., xn] be a homogeneous polynomial of degree d. It is not well-defined to evaluate  f  on points in Pn in homogeneous coordinates. However, because  f  is homogeneous, meaning that  f  (λx0, ..., λxn) = λd f  (x0, ..., xn), it does make sense to ask whether  f  vanishes at a point [x0 : ... : xn]. For each set S of homogeneous polynomials, define the zero-locus of S to be the set of points in Pn on which the functions in S vanish: Z ( S ) = { x ∈ P n ∣ f ( x ) = 0 for all f ∈ S } . {\displaystyle Z(S)=\{x\in \mathbf {P} ^{n}\mid f(x)=0{\text{ for all }}f\in S\}.} A subset V of Pn is called a projective algebraic set if V = Z(S) for some S.: 9  An irreducible projective algebraic set is called a projective variety.: 10  Projective varieties are also equipped with the Zariski topology by declaring all algebraic sets to be closed. Given a subset V of Pn, let I(V) be the ideal generated by all homogeneous polynomials vanishing on V. For any projective algebraic set V, the coordinate ring of V is the quotient of the polynomial ring by this ideal.: 10  A quasi-projective variety is a Zariski open subset of a projective variety. Notice that every affine variety is quasi-projective. Notice also that the complement of an algebraic set in an affine variety is a quasi-projective variety; in the context of affine varieties, such a quasi-projective variety is usually not called a variety but a constructible set. 
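The observation above that vanishing of a homogeneous polynomial is well-defined on Pn can be checked mechanically. A minimal sketch, assuming the sympy library, verifying f(λx, λy, λz) = λ³ f(x, y, z) for a degree-3 homogeneous polynomial:

```python
# Vanishing of a homogeneous polynomial at a point of projective space
# is well-defined: rescaling the homogeneous coordinates by lam scales
# the value by lam^d, so "f = 0" does not depend on the representative.
from sympy import symbols, expand

x, y, z, lam = symbols("x y z lam")
f = y**2 * z - x**3 + x * z**2   # homogeneous of degree d = 3

# Substitute (x, y, z) -> (lam*x, lam*y, lam*z) and compare with lam^3 * f.
scaled = f.subs({x: lam * x, y: lam * y, z: lam * z})
assert expand(scaled - lam**3 * f) == 0
```

So f vanishes at [x0 : y0 : z0] for one choice of homogeneous coordinates exactly when it vanishes for every choice.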
=== Abstract varieties === In classical algebraic geometry, all varieties were by definition quasi-projective varieties, meaning that they were open subvarieties of closed subvarieties of a projective space. For example, in Chapter 1 of Hartshorne a variety over an algebraically closed field is defined to be a quasi-projective variety,: 15  but from Chapter 2 onwards, the term variety (also called an abstract variety) refers to a more general object, which locally is a quasi-projective variety, but when viewed as a whole is not necessarily quasi-projective; i.e. it might not have an embedding into projective space.: 105  So classically the definition of an algebraic variety required an embedding into projective space, and this embedding was used to define the topology on the variety and the regular functions on the variety. The disadvantage of such a definition is that not all varieties come with natural embeddings into projective space. For example, under this definition, the product P1 × P1 is not a variety until it is embedded into a larger projective space; this is usually done by the Segre embedding. Furthermore, any variety that admits one embedding into projective space admits many others, for example by composing the embedding with the Veronese embedding; thus many notions that should be intrinsic, such as that of a regular function, are not obviously so. The earliest successful attempt to define an algebraic variety abstractly, without an embedding, was made by André Weil in his Foundations of Algebraic Geometry, using valuations. Claude Chevalley made a definition of a scheme, which served a similar purpose, but was more general. However, Alexander Grothendieck's definition of a scheme is more general still and has received the most widespread acceptance.
In Grothendieck's language, an abstract algebraic variety is usually defined to be an integral, separated scheme of finite type over an algebraically closed field,: 104–105  although some authors drop the irreducibility or the reducedness or the separatedness condition or allow the underlying field to be not algebraically closed. Classical algebraic varieties are the quasiprojective integral separated finite type schemes over an algebraically closed field. ==== Existence of non-quasiprojective abstract algebraic varieties ==== One of the earliest examples of a non-quasiprojective algebraic variety was given by Nagata. Nagata's example was not complete (the analog of compactness), but soon afterwards he found an algebraic surface that was complete and non-projective.: Remark 4.10.2 p.105  Since then other examples have been found: for example, it is straightforward to construct toric varieties that are not quasi-projective but complete. == Examples == === Subvariety === A subvariety is a subset of a variety that is itself a variety (with respect to the topological structure induced by the ambient variety). For example, every open subset of a variety is a variety. See also closed immersion. Hilbert's Nullstellensatz says that closed subvarieties of an affine or projective variety are in one-to-one correspondence with the prime ideals or non-irrelevant homogeneous prime ideals of the coordinate ring of the variety. === Affine variety === ==== Example 1 ==== Let k = C, and A2 be the two-dimensional affine space over C. Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A2 by evaluating at the points in A2. Let the subset S of C[x, y] consist of a single element  f  (x, y): f ( x , y ) = x + y − 1. {\displaystyle f(x,y)=x+y-1.} The zero-locus of  f  (x, y) is the set of points in A2 on which this function vanishes: it is the set of all pairs of complex numbers (x, y) such that y = 1 − x. This is called a line in the affine plane.
(In the classical topology coming from the topology on the complex numbers, a complex line is a real manifold of dimension two.) This is the set Z( f ): Z ( f ) = { ( x , 1 − x ) ∈ C 2 } . {\displaystyle Z(f)=\{(x,1-x)\in \mathbf {C} ^{2}\}.} Thus the subset V = Z( f ) of A2 is an algebraic set. The set V is not empty. It is irreducible, as it cannot be written as the union of two proper algebraic subsets. Thus it is an affine algebraic variety. ==== Example 2 ==== Let k = C, and A2 be the two-dimensional affine space over C. Polynomials in the ring C[x, y] can be viewed as complex-valued functions on A2 by evaluating at the points in A2. Let the subset S of C[x, y] consist of a single element g(x, y): g ( x , y ) = x 2 + y 2 − 1. {\displaystyle g(x,y)=x^{2}+y^{2}-1.} The zero-locus of g(x, y) is the set of points in A2 on which this function vanishes, that is, the set of points (x, y) such that x2 + y2 = 1. As g(x, y) is an absolutely irreducible polynomial, this is an algebraic variety. The set of its real points (that is, the points for which x and y are real numbers) is known as the unit circle; this name is also often given to the whole variety. ==== Example 3 ==== The following example is neither a hypersurface, nor a linear space, nor a single point. Let A3 be the three-dimensional affine space over C. The set of points (x, x2, x3) for x in C is an algebraic variety, and more precisely an algebraic curve that is not contained in any plane. It is the twisted cubic shown in the above figure. It may be defined by the equations y − x 2 = 0 z − x 3 = 0 {\displaystyle {\begin{aligned}y-x^{2}&=0\\z-x^{3}&=0\end{aligned}}} The irreducibility of this algebraic set needs a proof. One approach in this case is to check that the projection (x, y, z) → (x, y) is injective on the set of the solutions and that its image is an irreducible plane curve.
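The ideal of the twisted cubic can also be studied computationally. A sketch, assuming the sympy library, that computes a lex Gröbner basis of (y − x², z − x³) and certifies that xz − y², which vanishes at every point (t, t², t³) of the curve, belongs to the ideal:

```python
# The twisted cubic is cut out by y - x^2 and z - x^3. A Groebner
# basis of this ideal lets us test ideal membership: for instance,
# x*z - y^2 vanishes on every point (t, t^2, t^3) of the curve, so
# it should reduce to zero modulo the basis. Uses sympy (assumed).
from sympy import symbols, groebner

x, y, z = symbols("x y z")
G = groebner([y - x**2, z - x**3], x, y, z, order="lex")

# Reduce x*z - y^2 modulo the basis; a zero remainder certifies
# membership in the ideal of the curve.
_, remainder = G.reduce(x * z - y**2)
assert remainder == 0
```

Ideal membership via Gröbner bases is the standard computational counterpart of the projection and irreducibility arguments sketched above.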
For more difficult examples, a similar proof may always be given, but may imply a difficult computation: first a Gröbner basis computation to compute the dimension, followed by a random linear change of variables (not always needed); then a Gröbner basis computation for another monomial ordering to compute the projection and to prove that it is generically injective and that its image is a hypersurface, and finally a polynomial factorization to prove the irreducibility of the image. ==== General linear group ==== The set of n-by-n matrices over the base field k can be identified with the affine n2-space A n 2 {\displaystyle \mathbb {A} ^{n^{2}}} with coordinates x i j {\displaystyle x_{ij}} such that x i j ( A ) {\displaystyle x_{ij}(A)} is the (i, j)-th entry of the matrix A {\displaystyle A} . The determinant det {\displaystyle \det } is then a polynomial in x i j {\displaystyle x_{ij}} and thus defines the hypersurface H = V ( det ) {\displaystyle H=V(\det )} in A n 2 {\displaystyle \mathbb {A} ^{n^{2}}} . The complement of H {\displaystyle H} is then an open subset of A n 2 {\displaystyle \mathbb {A} ^{n^{2}}} that consists of all the invertible n-by-n matrices, the general linear group GL n ⁡ ( k ) {\displaystyle \operatorname {GL} _{n}(k)} . It is an affine variety, since, in general, the complement of a hypersurface in an affine variety is affine. Explicitly, consider A n 2 × A 1 {\displaystyle \mathbb {A} ^{n^{2}}\times \mathbb {A} ^{1}} where the affine line is given coordinate t. Then GL n ⁡ ( k ) {\displaystyle \operatorname {GL} _{n}(k)} amounts to the zero-locus in A n 2 × A 1 {\displaystyle \mathbb {A} ^{n^{2}}\times \mathbb {A} ^{1}} of the polynomial in x i j , t {\displaystyle x_{ij},t} : t ⋅ det [ x i j ] − 1 , {\displaystyle t\cdot \det[x_{ij}]-1,} i.e., the set of matrices A such that t det ( A ) = 1 {\displaystyle t\det(A)=1} has a solution. 
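The description of GLn as a zero-locus can be made concrete in the 2 × 2 case. A minimal plain-Python sketch (the helper names are illustrative, not from any library) checking that t · det(A) − 1 = 0 is solvable in t exactly when A is invertible:

```python
# A 2x2 matrix A lies in GL_2 exactly when t*det(A) - 1 = 0 has a
# solution in t, i.e. when det(A) != 0 (then t = 1/det(A)). Sketch;
# det2 and gl2_witness are illustrative helper names.
from fractions import Fraction

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def gl2_witness(m):
    """Return t with t*det(m) - 1 == 0, or None if m is singular."""
    d = det2(m)
    return Fraction(1, d) if d != 0 else None

invertible = [[1, 2], [3, 4]]   # det = -2
singular = [[1, 2], [2, 4]]     # det = 0

t = gl2_witness(invertible)
assert t * det2(invertible) - 1 == 0   # the defining polynomial vanishes
assert gl2_witness(singular) is None   # no point of GL_2 over a singular matrix
```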
This is best seen algebraically: the coordinate ring of GL n ⁡ ( k ) {\displaystyle \operatorname {GL} _{n}(k)} is the localization k [ x i j ∣ 1 ≤ i , j ≤ n ] [ det − 1 ] {\displaystyle k[x_{ij}\mid 1\leq i,j\leq n][{\det }^{-1}]} , which can be identified with k [ x i j , t ∣ 1 ≤ i , j ≤ n ] / ( t det − 1 ) {\displaystyle k[x_{ij},t\mid 1\leq i,j\leq n]/(t\det -1)} . The multiplicative group k* of the base field k is the same as GL 1 ⁡ ( k ) {\displaystyle \operatorname {GL} _{1}(k)} and thus is an affine variety. A finite product of it ( k ∗ ) r {\displaystyle (k^{*})^{r}} is an algebraic torus, which is again an affine variety. A general linear group is an example of a linear algebraic group, an affine variety that has a structure of a group in such a way that the group operations are morphisms of varieties. ==== Characteristic variety ==== Let A be a not-necessarily-commutative algebra over a field k. Even if A is not commutative, it can still happen that A has a Z {\displaystyle \mathbb {Z} } -filtration so that the associated graded ring gr ⁡ A = ⨁ i = − ∞ ∞ A i / A i − 1 {\displaystyle \operatorname {gr} A=\bigoplus _{i=-\infty }^{\infty }A_{i}/{A_{i-1}}} is commutative, reduced and finitely generated as a k-algebra; i.e., gr ⁡ A {\displaystyle \operatorname {gr} A} is the coordinate ring of an affine (reducible) variety X. For example, if A is the universal enveloping algebra of a finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} , then gr ⁡ A {\displaystyle \operatorname {gr} A} is a polynomial ring (the PBW theorem); more precisely, the coordinate ring of the dual vector space g ∗ {\displaystyle {\mathfrak {g}}^{*}} . Let M be a filtered module over A (i.e., A i M j ⊂ M i + j {\displaystyle A_{i}M_{j}\subset M_{i+j}} ).
If gr ⁡ M {\displaystyle \operatorname {gr} M} is finitely generated as a gr ⁡ A {\displaystyle \operatorname {gr} A} -module, then the support of gr ⁡ M {\displaystyle \operatorname {gr} M} in X (i.e., the locus where gr ⁡ M {\displaystyle \operatorname {gr} M} does not vanish) is called the characteristic variety of M. The notion plays an important role in the theory of D-modules. === Projective variety === A projective variety is a closed subvariety of a projective space. That is, it is the zero locus of a set of homogeneous polynomials that generate a prime ideal. ==== Example 1 ==== A plane projective curve is the zero locus of an irreducible homogeneous polynomial in three indeterminates. The projective line P1 is an example of a projective curve; it can be viewed as the curve in the projective plane P2 = {[x, y, z]} defined by x = 0. For another example, first consider the affine cubic curve y 2 = x 3 − x {\displaystyle y^{2}=x^{3}-x} in the 2-dimensional affine space (over a field of characteristic not two). It has the associated cubic homogeneous polynomial equation: y 2 z = x 3 − x z 2 , {\displaystyle y^{2}z=x^{3}-xz^{2},} which defines a curve in P2 called an elliptic curve. The curve has genus one (genus formula); in particular, it is not isomorphic to the projective line P1, which has genus zero. Using genus to distinguish curves is very basic: in fact, the genus is the first invariant one uses to classify curves (see also the construction of moduli of algebraic curves). ==== Example 2: Grassmannian ==== Let V be a finite-dimensional vector space. The Grassmannian variety Gn(V) is the set of all n-dimensional subspaces of V.
It is a projective variety: it is embedded into a projective space via the Plücker embedding: { G n ( V ) ↪ P ( ∧ n V ) ⟨ b 1 , … , b n ⟩ ↦ [ b 1 ∧ ⋯ ∧ b n ] {\displaystyle {\begin{cases}G_{n}(V)\hookrightarrow \mathbf {P} \left(\wedge ^{n}V\right)\\\langle b_{1},\ldots ,b_{n}\rangle \mapsto [b_{1}\wedge \cdots \wedge b_{n}]\end{cases}}} where bi are any set of linearly independent vectors in V, ∧ n V {\displaystyle \wedge ^{n}V} is the n-th exterior power of V, and the bracket [w] means the line spanned by the nonzero vector w. The Grassmannian variety comes with a natural vector bundle (or locally free sheaf in other terminology) called the tautological bundle, which is important in the study of characteristic classes such as Chern classes. ==== Jacobian variety and abelian variety ==== Let C be a smooth complete curve and Pic ⁡ ( C ) {\displaystyle \operatorname {Pic} (C)} the Picard group of it; i.e., the group of isomorphism classes of line bundles on C. Since C is smooth, Pic ⁡ ( C ) {\displaystyle \operatorname {Pic} (C)} can be identified as the divisor class group of C and thus there is the degree homomorphism deg : Pic ⁡ ( C ) → Z {\displaystyle \operatorname {deg} :\operatorname {Pic} (C)\to \mathbb {Z} } . The Jacobian variety Jac ⁡ ( C ) {\displaystyle \operatorname {Jac} (C)} of C is the kernel of this degree map; i.e., the group of the divisor classes on C of degree zero. A Jacobian variety is an example of an abelian variety, a complete variety with a compatible abelian group structure on it (the name "abelian" is however not because it is an abelian group). An abelian variety turns out to be projective (in short, algebraic theta functions give an embedding into a projective space. See equations defining abelian varieties); thus, Jac ⁡ ( C ) {\displaystyle \operatorname {Jac} (C)} is a projective variety. 
The tangent space to Jac ⁡ ( C ) {\displaystyle \operatorname {Jac} (C)} at the identity element is naturally isomorphic to H 1 ⁡ ( C , O C ) ; {\displaystyle \operatorname {H} ^{1}(C,{\mathcal {O}}_{C});} hence, the dimension of Jac ⁡ ( C ) {\displaystyle \operatorname {Jac} (C)} is the genus of C {\displaystyle C} . Fix a point P 0 {\displaystyle P_{0}} on C {\displaystyle C} . For each integer n > 0 {\displaystyle n>0} , there is a natural morphism C n → Jac ⁡ ( C ) , ( P 1 , … , P n ) ↦ [ P 1 + ⋯ + P n − n P 0 ] {\displaystyle C^{n}\to \operatorname {Jac} (C),\,(P_{1},\dots ,P_{n})\mapsto [P_{1}+\cdots +P_{n}-nP_{0}]} where C n {\displaystyle C^{n}} is the product of n copies of C. For g = 1 {\displaystyle g=1} (i.e., C is an elliptic curve), the above morphism for n = 1 {\displaystyle n=1} turns out to be an isomorphism;: Ch. IV, Example 1.3.7.  in particular, an elliptic curve is an abelian variety. ==== Moduli varieties ==== Given an integer g ≥ 0 {\displaystyle g\geq 0} , the set of isomorphism classes of smooth complete curves of genus g {\displaystyle g} is called the moduli of curves of genus g {\displaystyle g} and is denoted as M g {\displaystyle {\mathfrak {M}}_{g}} . There are a few ways to show that this moduli space has a structure of a possibly reducible algebraic variety; for example, one way is to use geometric invariant theory, which ensures a set of isomorphism classes has a (reducible) quasi-projective variety structure. A moduli space such as the moduli of curves of fixed genus is typically not a projective variety; roughly the reason is that a degeneration (limit) of a smooth curve tends to be non-smooth or reducible. This leads to the notion of a stable curve of genus g ≥ 2 {\displaystyle g\geq 2} , a not-necessarily-smooth complete curve with at worst nodal singularities and a finite automorphism group.
The moduli of stable curves M ¯ g {\displaystyle {\overline {\mathfrak {M}}}_{g}} , the set of isomorphism classes of stable curves of genus g ≥ 2 {\displaystyle g\geq 2} , is then a projective variety which contains M g {\displaystyle {\mathfrak {M}}_{g}} as an open dense subset. Since M ¯ g {\displaystyle {\overline {\mathfrak {M}}}_{g}} is obtained by adding boundary points to M g {\displaystyle {\mathfrak {M}}_{g}} , M ¯ g {\displaystyle {\overline {\mathfrak {M}}}_{g}} is colloquially said to be a compactification of M g {\displaystyle {\mathfrak {M}}_{g}} . Historically, a paper of Mumford and Deligne introduced the notion of a stable curve to show M g {\displaystyle {\mathfrak {M}}_{g}} is irreducible when g ≥ 2 {\displaystyle g\geq 2} . The moduli of curves exemplifies a typical situation: a moduli space of nice objects tends not to be projective but only quasi-projective. Another case is the moduli of vector bundles on a curve. Here, there are the notions of stable and semistable vector bundles on a smooth complete curve C {\displaystyle C} . The moduli of semistable vector bundles of a given rank n {\displaystyle n} and a given degree d {\displaystyle d} (degree of the determinant of the bundle) is then a projective variety denoted as S U C ( n , d ) {\displaystyle SU_{C}(n,d)} , which contains the set U C ( n , d ) {\displaystyle U_{C}(n,d)} of isomorphism classes of stable vector bundles of rank n {\displaystyle n} and degree d {\displaystyle d} as an open subset. Since a line bundle is stable, such a moduli space is a generalization of the Jacobian variety of C {\displaystyle C} . In general, in contrast to the case of moduli of curves, a compactification of a moduli space need not be unique and, in some cases, different non-equivalent compactifications are constructed using different methods and by different authors.
An example over C {\displaystyle \mathbb {C} } is the problem of compactifying D / Γ {\displaystyle D/\Gamma } , the quotient of a bounded symmetric domain D {\displaystyle D} by an action of an arithmetic discrete group Γ {\displaystyle \Gamma } . A basic example of D / Γ {\displaystyle D/\Gamma } is when D = H g {\displaystyle D={\mathfrak {H}}_{g}} , Siegel's upper half-space and Γ {\displaystyle \Gamma } commensurable with Sp ⁡ ( 2 g , Z ) {\displaystyle \operatorname {Sp} (2g,\mathbb {Z} )} ; in that case, D / Γ {\displaystyle D/\Gamma } has an interpretation as the moduli A g {\displaystyle {\mathfrak {A}}_{g}} of principally polarized complex abelian varieties of dimension g {\displaystyle g} (a principal polarization identifies an abelian variety with its dual). The theory of toric varieties (or torus embeddings) gives a way to compactify D / Γ {\displaystyle D/\Gamma } , a toroidal compactification of it. But there are other ways to compactify D / Γ {\displaystyle D/\Gamma } ; for example, there is the minimal compactification of D / Γ {\displaystyle D/\Gamma } due to Baily and Borel: it is the projective variety associated to the graded ring formed by modular forms (in the Siegel case, Siegel modular forms; see also Siegel modular variety). The non-uniqueness of compactifications is due to the lack of moduli interpretations of those compactifications; i.e., they do not represent (in the category-theory sense) any natural moduli problem or, in the precise language, there is no natural moduli stack that would be an analog of moduli stack of stable curves. === Non-affine and non-projective example === An algebraic variety can be neither affine nor projective. To give an example, let X = P1 × A1 and p: X → A1 the projection. Here X is an algebraic variety since it is a product of varieties. 
It is not affine since P1 is a closed subvariety of X (as the zero locus of p), but an affine variety cannot contain a projective variety of positive dimension as a closed subvariety. It is not projective either, since there is a nonconstant regular function on X; namely, p. Another example of a non-affine non-projective variety is X = A2 − (0, 0) (cf. Morphism of varieties § Examples.) === Non-examples === Consider the affine line A 1 {\displaystyle \mathbb {A} ^{1}} over C {\displaystyle \mathbb {C} } . The complement of the circle { z ∈ C with | z | 2 = 1 } {\displaystyle \{z\in \mathbb {C} {\text{ with }}|z|^{2}=1\}} in A 1 = C {\displaystyle \mathbb {A} ^{1}=\mathbb {C} } is not an algebraic variety (nor even an algebraic set). Note that | z | 2 − 1 {\displaystyle |z|^{2}-1} is not a polynomial in z {\displaystyle z} (although it is a polynomial in the real coordinates x , y {\displaystyle x,y} ). On the other hand, the complement of the origin in A 1 = C {\displaystyle \mathbb {A} ^{1}=\mathbb {C} } is an algebraic (affine) variety, since the origin is the zero-locus of z {\displaystyle z} . This may be explained as follows: the affine line has dimension one and so any subvariety of it other than itself must have strictly less dimension; namely, zero. For similar reasons, a unitary group (over the complex numbers) is not an algebraic variety, while the special linear group SL n ⁡ ( C ) {\displaystyle \operatorname {SL} _{n}(\mathbb {C} )} is a closed subvariety of GL n ⁡ ( C ) {\displaystyle \operatorname {GL} _{n}(\mathbb {C} )} , the zero-locus of det − 1 {\displaystyle \det -1} . (Over a different base field, a unitary group can however be given a structure of a variety.) 
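The contrast in the last paragraph can be made concrete for SL2: unlike unitarity, which involves complex conjugation, membership in SL2 is detected by the vanishing of the single polynomial det − 1 in the matrix entries. A minimal plain-Python sketch (the helper names are illustrative):

```python
# SL_2 is the zero-locus of the polynomial det - 1 in the entries of
# the matrix, so membership is a purely polynomial condition. Sketch;
# det2 and in_sl2 are illustrative helper names.
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def in_sl2(m):
    """m lies in SL_2 iff the polynomial det(m) - 1 vanishes at m."""
    return det2(m) - 1 == 0

assert in_sl2([[2, 3], [1, 2]])       # det = 4 - 3 = 1
assert not in_sl2([[2, 0], [0, 2]])   # det = 4: in GL_2 but not SL_2
```

No analogous polynomial in the entries alone can express the condition A·A* = I, which is why the unitary group is not a complex algebraic variety.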
== Basic results == An affine algebraic set V is a variety if and only if I(V) is a prime ideal; equivalently, V is a variety if and only if its coordinate ring is an integral domain.: 52 : 4  Every nonempty affine algebraic set may be written uniquely as a finite union of algebraic varieties (where none of the varieties in the decomposition is a subvariety of any other).: 5  The dimension of a variety may be defined in various equivalent ways. See Dimension of an algebraic variety for details. A product of finitely many algebraic varieties (over an algebraically closed field) is an algebraic variety. A finite product of affine varieties is affine and a finite product of projective varieties is projective. == Isomorphism of algebraic varieties == Let V1, V2 be algebraic varieties. We say V1 and V2 are isomorphic, and write V1 ≅ V2, if there are regular maps φ : V1 → V2 and ψ : V2 → V1 such that the compositions ψ ∘ φ and φ ∘ ψ are the identity maps on V1 and V2 respectively. == Discussion and generalizations == The basic definitions and facts above enable one to do classical algebraic geometry. To be able to do more — for example, to deal with varieties over fields that are not algebraically closed — some foundational changes are required. The modern notion of a variety is considerably more abstract than the one above, though equivalent in the case of varieties over algebraically closed fields. An abstract algebraic variety is a particular kind of scheme; the generalization to schemes on the geometric side enables an extension of the correspondence described above to a wider class of rings. A scheme is a locally ringed space such that every point has a neighbourhood that, as a locally ringed space, is isomorphic to a spectrum of a ring. 
Basically, a variety over k is a scheme whose structure sheaf is a sheaf of k-algebras with the property that the rings R that occur above are all integral domains and are all finitely generated k-algebras, that is to say, they are quotients of polynomial algebras by prime ideals. This definition works over any field k. It allows you to glue affine varieties (along common open sets) without worrying whether the resulting object can be put into some projective space. This also leads to difficulties since one can introduce somewhat pathological objects, e.g. an affine line with zero doubled. Such objects are usually not considered varieties, and are eliminated by requiring the schemes underlying a variety to be separated. (Strictly speaking, there is also a third condition, namely, that one needs only finitely many affine patches in the definition above.) Some modern researchers also remove the restriction on a variety having integral domain affine charts, and when speaking of a variety only require that the affine charts have trivial nilradical. A complete variety is a variety such that any map from an open subset of a nonsingular curve into it can be extended uniquely to the whole curve. Every projective variety is complete, but not vice versa. These varieties have been called "varieties in the sense of Serre", since Serre's foundational paper FAC on sheaf cohomology was written for them. They remain typical objects to start studying in algebraic geometry, even if more general objects are also used in an auxiliary way. One way that leads to generalizations is to allow reducible algebraic sets (and fields k that aren't algebraically closed), so the rings R may not be integral domains. A more significant modification is to allow nilpotents in the sheaf of rings, that is, rings which are not reduced. This is one of several generalizations of classical algebraic geometry that are built into Grothendieck's theory of schemes. 
Allowing nilpotent elements in rings is related to keeping track of "multiplicities" in algebraic geometry. For example, the closed subscheme of the affine line defined by x2 = 0 is different from the subscheme defined by x = 0 (the origin). More generally, the fiber of a morphism of schemes X → Y at a point of Y may be non-reduced, even if X and Y are reduced. Geometrically, this says that fibers of good mappings may have nontrivial "infinitesimal" structure. There are further generalizations called algebraic spaces and stacks. == Algebraic manifolds == An algebraic manifold is an algebraic variety that is also an m-dimensional manifold, and hence every sufficiently small local patch is isomorphic to km. Equivalently, the variety is smooth (free from singular points). When k is the real numbers, R, algebraic manifolds are called Nash manifolds. Algebraic manifolds can be defined as the zero set of a finite collection of analytic algebraic functions. Projective algebraic manifolds are an equivalent definition for projective varieties. The Riemann sphere is one example. == See also == Variety (disambiguation) — listing also several mathematical meanings Function field of an algebraic variety Birational geometry Motive (algebraic geometry) Analytic variety Zariski–Riemann space Semi-algebraic set Fano variety Mnëv's universality theorem == Notes == == References == === Sources === This article incorporates material from Isomorphism of varieties on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Algebraic_varieties
In algebra, ring theory is the study of rings, algebraic structures in which addition and multiplication are defined and have similar properties to those operations defined for the integers. Ring theory studies the structure of rings; their representations, or, in different language, modules; special classes of rings (group rings, division rings, universal enveloping algebras); related structures like rngs; as well as an array of properties that prove to be of interest both within the theory itself and for its applications, such as homological properties and polynomial identities. Commutative rings are much better understood than noncommutative ones. Algebraic geometry and algebraic number theory, which provide many natural examples of commutative rings, have driven much of the development of commutative ring theory, which is now, under the name of commutative algebra, a major area of modern mathematics. Because these three fields (algebraic geometry, algebraic number theory and commutative algebra) are so intimately connected it is usually difficult and meaningless to decide which field a particular result belongs to. For example, Hilbert's Nullstellensatz is a theorem which is fundamental for algebraic geometry, and is stated and proved in terms of commutative algebra. Similarly, Fermat's Last Theorem is stated in terms of elementary arithmetic, which is a part of commutative algebra, but its proof involves deep results of both algebraic number theory and algebraic geometry. Noncommutative rings are quite different in flavour, since more unusual behavior can arise. While the theory has developed in its own right, a fairly recent trend has sought to parallel the commutative development by building the theory of certain classes of noncommutative rings in a geometric fashion as if they were rings of functions on (non-existent) 'noncommutative spaces'. 
This trend started in the 1980s with the development of noncommutative geometry and with the discovery of quantum groups. It has led to a better understanding of noncommutative rings, especially noncommutative Noetherian rings. For the definitions of a ring and basic concepts and their properties, see Ring (mathematics). The definitions of terms used throughout ring theory may be found in Glossary of ring theory. == Commutative rings == A ring is called commutative if its multiplication is commutative. Commutative rings resemble familiar number systems, and various definitions for commutative rings are designed to formalize properties of the integers. Commutative rings are also important in algebraic geometry. In commutative ring theory, numbers are often replaced by ideals, and the definition of the prime ideal tries to capture the essence of prime numbers. Integral domains, non-trivial commutative rings where no two non-zero elements multiply to give zero, generalize another property of the integers and serve as the proper realm to study divisibility. Principal ideal domains are integral domains in which every ideal can be generated by a single element, another property shared by the integers. Euclidean domains are integral domains in which the Euclidean algorithm can be carried out. Important examples of commutative rings can be constructed as rings of polynomials and their factor rings. Summary: Euclidean domain ⊂ principal ideal domain ⊂ unique factorization domain ⊂ integral domain ⊂ commutative ring. === Algebraic geometry === Algebraic geometry is in many ways the mirror image of commutative algebra. This correspondence started with Hilbert's Nullstellensatz that establishes a one-to-one correspondence between the points of an algebraic variety, and the maximal ideals of its coordinate ring. 
This correspondence has been enlarged and systematized for translating (and proving) most geometrical properties of algebraic varieties into algebraic properties of associated commutative rings. Alexander Grothendieck completed this by introducing schemes, a generalization of algebraic varieties, which may be built from any commutative ring. More precisely, the spectrum of a commutative ring is the space of its prime ideals equipped with Zariski topology, and augmented with a sheaf of rings. These objects are the "affine schemes" (generalization of affine varieties), and a general scheme is then obtained by "gluing together" (by purely algebraic methods) several such affine schemes, in analogy to the way of constructing a manifold by gluing together the charts of an atlas. == Noncommutative rings == Noncommutative rings resemble rings of matrices in many respects. Following the model of algebraic geometry, attempts have been made recently at defining noncommutative geometry based on noncommutative rings. Noncommutative rings and associative algebras (rings that are also vector spaces) are often studied via their categories of modules. A module over a ring is an abelian group that the ring acts on as a ring of endomorphisms, very much akin to the way fields (integral domains in which every non-zero element is invertible) act on vector spaces. Examples of noncommutative rings are given by rings of square matrices or more generally by rings of endomorphisms of abelian groups or modules, and by monoid rings. === Representation theory === Representation theory is a branch of mathematics that draws heavily on non-commutative rings. It studies abstract algebraic structures by representing their elements as linear transformations of vector spaces, and studies modules over these abstract algebraic structures. 
In essence, a representation makes an abstract algebraic object more concrete by describing its elements by matrices and the algebraic operations in terms of matrix addition and matrix multiplication, which is non-commutative. The algebraic objects amenable to such a description include groups, associative algebras and Lie algebras. The most prominent of these (and historically the first) is the representation theory of groups, in which elements of a group are represented by invertible matrices in such a way that the group operation is matrix multiplication. == Some relevant theorems == General Isomorphism theorems for rings Nakayama's lemma Structure theorems The Artin–Wedderburn theorem determines the structure of semisimple rings The Jacobson density theorem determines the structure of primitive rings Goldie's theorem determines the structure of semiprime Goldie rings The Zariski–Samuel theorem determines the structure of a commutative principal ideal ring The Hopkins–Levitzki theorem gives necessary and sufficient conditions for a Noetherian ring to be an Artinian ring Morita theory consists of theorems determining when two rings have "equivalent" module categories Cartan–Brauer–Hua theorem gives insight on the structure of division rings Wedderburn's little theorem states that finite domains are fields Other The Skolem–Noether theorem characterizes the automorphisms of simple rings == Structures and invariants of rings == === Dimension of a commutative ring === In this section, R denotes a commutative ring. The Krull dimension of R is the supremum of the lengths n of all the chains of prime ideals p 0 ⊊ p 1 ⊊ ⋯ ⊊ p n {\displaystyle {\mathfrak {p}}_{0}\subsetneq {\mathfrak {p}}_{1}\subsetneq \cdots \subsetneq {\mathfrak {p}}_{n}} . It turns out that the polynomial ring k [ t 1 , ⋯ , t n ] {\displaystyle k[t_{1},\cdots ,t_{n}]} over a field k has dimension n. 
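The statement that k[t1, ⋯, tn] has Krull dimension n can be made concrete for n = 2: the following chain of prime ideals in k[x, y] has length 2, and it is a standard fact that no strictly longer chain exists.

```latex
% A maximal chain of prime ideals in k[x, y], exhibiting Krull dimension 2:
%   (0) is prime because k[x, y] is an integral domain,
%   (x) is prime because k[x, y]/(x) \cong k[y] is an integral domain,
%   (x, y) is maximal because k[x, y]/(x, y) \cong k.
(0) \subsetneq (x) \subsetneq (x, y)
```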
The fundamental theorem of dimension theory states that the following numbers coincide for a noetherian local ring ( R , m ) {\displaystyle (R,{\mathfrak {m}})} : The Krull dimension of R. The minimum number of the generators of the m {\displaystyle {\mathfrak {m}}} -primary ideals. The dimension of the graded ring gr m ⁡ ( R ) = ⨁ k ≥ 0 m k / m k + 1 {\displaystyle \textstyle \operatorname {gr} _{\mathfrak {m}}(R)=\bigoplus _{k\geq 0}{\mathfrak {m}}^{k}/{{\mathfrak {m}}^{k+1}}} (equivalently, 1 plus the degree of its Hilbert polynomial). A commutative ring R is said to be catenary if for every pair of prime ideals p ⊂ p ′ {\displaystyle {\mathfrak {p}}\subset {\mathfrak {p}}'} , there exists a finite chain of prime ideals p = p 0 ⊊ ⋯ ⊊ p n = p ′ {\displaystyle {\mathfrak {p}}={\mathfrak {p}}_{0}\subsetneq \cdots \subsetneq {\mathfrak {p}}_{n}={\mathfrak {p}}'} that is maximal in the sense that it is impossible to insert an additional prime ideal between two ideals in the chain, and all such maximal chains between p {\displaystyle {\mathfrak {p}}} and p ′ {\displaystyle {\mathfrak {p}}'} have the same length. Practically all noetherian rings that appear in applications are catenary. Ratliff proved that a noetherian local integral domain R is catenary if and only if for every prime ideal p {\displaystyle {\mathfrak {p}}} , dim ⁡ R = ht ⁡ p + dim ⁡ R / p {\displaystyle \operatorname {dim} R=\operatorname {ht} {\mathfrak {p}}+\operatorname {dim} R/{\mathfrak {p}}} where ht ⁡ p {\displaystyle \operatorname {ht} {\mathfrak {p}}} is the height of p {\displaystyle {\mathfrak {p}}} . If R is an integral domain that is a finitely generated k-algebra, then its dimension is the transcendence degree of its field of fractions over k. If S is an integral extension of a commutative ring R, then S and R have the same dimension. Closely related concepts are those of depth and global dimension. 
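As a sanity check of the dimension identity above, one can take the affine case (finitely generated domains over a field are catenary and satisfy the same identity, though Ratliff's theorem itself is stated for noetherian local domains) with R = k[x, y] and p = (x):

```latex
% Height of (x) is 1, and the quotient is a polynomial ring in one variable:
\operatorname{ht}(x) = 1, \qquad \dim R/(x) = \dim k[y] = 1,
% so the two sides of the identity agree:
\operatorname{ht}(x) + \dim R/(x) = 2 = \dim k[x, y].
```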
In general, if R is a noetherian local ring, then the depth of R is less than or equal to the dimension of R. When the equality holds, R is called a Cohen–Macaulay ring. A regular local ring is an example of a Cohen–Macaulay ring. It is a theorem of Serre that R is a regular local ring if and only if it has finite global dimension and in that case the global dimension is the Krull dimension of R. The significance of this is that a global dimension is a homological notion. === Morita equivalence === Two rings R, S are said to be Morita equivalent if the category of left modules over R is equivalent to the category of left modules over S. In fact, two commutative rings which are Morita equivalent must be isomorphic, so the notion does not add anything new to the category of commutative rings. However, commutative rings can be Morita equivalent to noncommutative rings, so Morita equivalence is coarser than isomorphism. Morita equivalence is especially important in algebraic topology and functional analysis. === Finitely generated projective module over a ring and Picard group === Let R be a commutative ring and P ( R ) {\displaystyle \mathbf {P} (R)} the set of isomorphism classes of finitely generated projective modules over R; let also P n ( R ) {\displaystyle \mathbf {P} _{n}(R)} subsets consisting of those with constant rank n. (The rank of a module M is the continuous function Spec ⁡ R → Z , p ↦ dim ⁡ M ⊗ R k ( p ) {\displaystyle \operatorname {Spec} R\to \mathbb {Z} ,\,{\mathfrak {p}}\mapsto \dim M\otimes _{R}k({\mathfrak {p}})} .) P 1 ( R ) {\displaystyle \mathbf {P} _{1}(R)} is usually denoted by Pic(R). It is an abelian group called the Picard group of R. 
If R is an integral domain with the field of fractions F of R, then there is an exact sequence of groups: 1 → R ∗ → F ∗ → f ↦ f R Cart ⁡ ( R ) → Pic ⁡ ( R ) → 1 {\displaystyle 1\to R^{*}\to F^{*}{\overset {f\mapsto fR}{\to }}\operatorname {Cart} (R)\to \operatorname {Pic} (R)\to 1} where Cart ⁡ ( R ) {\displaystyle \operatorname {Cart} (R)} is the set of fractional ideals of R. If R is a regular domain (i.e., regular at any prime ideal), then Pic(R) is precisely the divisor class group of R. For example, if R is a principal ideal domain, then Pic(R) vanishes. In algebraic number theory, R will be taken to be the ring of integers, which is Dedekind and thus regular. It follows that Pic(R) is a finite group (finiteness of class number) that measures the deviation of the ring of integers from being a PID. One can also consider the group completion of P ( R ) {\displaystyle \mathbf {P} (R)} ; this results in a commutative ring K0(R). Note that K0(R) = K0(S) if two commutative rings R, S are Morita equivalent. === Structure of noncommutative rings === The structure of a noncommutative ring is more complicated than that of a commutative ring. For example, there exist simple rings that contain no non-trivial proper (two-sided) ideals, yet contain non-trivial proper left or right ideals. Various invariants exist for commutative rings, whereas invariants of noncommutative rings are difficult to find. As an example, the nilradical of a ring, the set of all nilpotent elements, is not necessarily an ideal unless the ring is commutative. Specifically, the set of all nilpotent elements in the ring of all n × n matrices over a division ring never forms an ideal, irrespective of the division ring chosen. There are, however, analogues of the nilradical defined for noncommutative rings, that coincide with the nilradical when commutativity is assumed. 
The concept of the Jacobson radical of a ring, that is, the intersection of all right (left) annihilators of simple right (left) modules over a ring, is one example. The fact that the Jacobson radical can be viewed as the intersection of all maximal right (left) ideals in the ring shows how the internal structure of the ring is reflected by its modules. It is also a fact that the intersection of all maximal right ideals in a ring is the same as the intersection of all maximal left ideals, in any ring, whether or not the ring is commutative. Noncommutative rings are an active area of research due to their ubiquity in mathematics. For instance, the ring of n-by-n matrices over a field is noncommutative despite its natural occurrence in geometry, physics and many parts of mathematics. More generally, endomorphism rings of abelian groups are rarely commutative, the simplest example being the endomorphism ring of the Klein four-group. One of the best-known strictly noncommutative rings is the ring of quaternions. == Applications == === The ring of integers of a number field === === The coordinate ring of an algebraic variety === If X is an affine algebraic variety, then the set of all regular functions on X forms a ring called the coordinate ring of X. For a projective variety, there is an analogous ring called the homogeneous coordinate ring. Those rings are essentially the same things as varieties: they correspond in an essentially unique way. This may be seen via either Hilbert's Nullstellensatz or scheme-theoretic constructions (i.e., Spec and Proj). === Ring of invariants === A basic (and perhaps the most fundamental) question in classical invariant theory is to find and study polynomials in the polynomial ring k [ V ] {\displaystyle k[V]} that are invariant under the action of a finite (or, more generally, reductive) group G on V. 
The main example is the ring of symmetric polynomials: symmetric polynomials are polynomials that are invariant under permutations of the variables. The fundamental theorem of symmetric polynomials states that this ring is R [ σ 1 , … , σ n ] {\displaystyle R[\sigma _{1},\ldots ,\sigma _{n}]} where σ i {\displaystyle \sigma _{i}} are the elementary symmetric polynomials. == History == Commutative ring theory originated in algebraic number theory, algebraic geometry, and invariant theory. Central to the development of these subjects were the rings of integers in algebraic number fields and algebraic function fields, and the rings of polynomials in two or more variables. Noncommutative ring theory began with attempts to extend the complex numbers to various hypercomplex number systems. The genesis of the theories of commutative and noncommutative rings dates back to the early 19th century, while their maturity was achieved only in the third decade of the 20th century. More precisely, William Rowan Hamilton put forth the quaternions and biquaternions; James Cockle presented tessarines and coquaternions; and William Kingdon Clifford was an enthusiast of split-biquaternions, which he called algebraic motors. These noncommutative algebras, and the non-associative Lie algebras, were studied within universal algebra before the subject was divided into particular mathematical structure types. One sign of re-organization was the use of direct sums to describe algebraic structure. The various hypercomplex numbers were identified with matrix rings by Joseph Wedderburn (1908) and Emil Artin (1928). Wedderburn's structure theorems were formulated for finite-dimensional algebras over a field while Artin generalized them to Artinian rings. In 1920, Emmy Noether, in collaboration with W. Schmeidler, published a paper about the theory of ideals in which they defined left and right ideals in a ring. 
The following year she published a landmark paper called Idealtheorie in Ringbereichen, analyzing ascending chain conditions with regard to (mathematical) ideals. Noted algebraist Irving Kaplansky called this work "revolutionary"; the publication gave rise to the term "Noetherian ring", and several other mathematical objects being called Noetherian. == See also == == Notes == == References == Allenby, R. B. J. T. (1991), Rings, Fields and Groups (Second ed.), Edward Arnold, London, p. xxvi+383, ISBN 0-7131-3476-3, MR 1144518 Blyth, T.S.; Robertson, E.F. (1985), Groups, Rings and Fields: Algebra through practice, Book 3, Cambridge: Cambridge University Press, ISBN 0-521-27288-2 Faith, Carl (1999), Rings and Things and a Fine Array of Twentieth Century Associative Algebra, Mathematical Surveys and Monographs, vol. 65, Providence, RI: American Mathematical Society, ISBN 0-8218-0993-8, MR 1657671 Goodearl, K. R.; Warfield, R. B. Jr. (1989), An Introduction to Noncommutative Noetherian Rings, London Mathematical Society Student Texts, vol. 16, Cambridge: Cambridge University Press, ISBN 0-521-36086-2, MR 1020298 Judson, Thomas W. (1997), Abstract Algebra: Theory and Applications Kimberling, Clark (1981), "Emmy Noether and Her Influence", in Brewer, James W; Smith, Martha K (eds.), Emmy Noether: A Tribute to Her Life and Work, Marcel Dekker, pp. 3–61 Lam, T. Y. (1999), Lectures on Modules and Rings, Graduate Texts in Mathematics, vol. 189, New York: Springer-Verlag, doi:10.1007/978-1-4612-0525-8, ISBN 0-387-98428-3, MR 1653294 Lam, T. Y. (2001), A First Course in Noncommutative Rings, Graduate Texts in Mathematics, vol. 131 (Second ed.), New York: Springer-Verlag, doi:10.1007/978-1-4419-8616-0, ISBN 0-387-95183-0, MR 1838439 Lam, T. Y. 
(2003), Exercises in Classical Ring Theory, Problem Books in Mathematics (Second ed.), New York: Springer-Verlag, ISBN 0-387-00500-5, MR 2003255 Matsumura, Hideyuki (1989), Commutative Ring Theory, Cambridge Studies in Advanced Mathematics, vol. 8 (Second ed.), Cambridge, UK.: Cambridge University Press, ISBN 0-521-36764-6, MR 1011461 McConnell, J. C.; Robson, J. C. (2001), Noncommutative Noetherian Rings, Graduate Studies in Mathematics, vol. 30, Providence, RI: American Mathematical Society, doi:10.1090/gsm/030, ISBN 0-8218-2169-5, MR 1811901 O'Connor, J. J.; Robertson, E. F. (September 2004), "The development of ring theory", MacTutor History of Mathematics Archive Pierce, Richard S. (1982), Associative Algebras, Graduate Texts in Mathematics, vol. 88, New York: Springer-Verlag, ISBN 0-387-90693-2, MR 0674652 Rowen, Louis H. (1988), Ring Theory, Vol. I, Pure and Applied Mathematics, vol. 127, Boston, MA: Academic Press, ISBN 0-12-599841-4, MR 0940245. Vol. II, Pure and Applied Mathematics 128, ISBN 0-12-599842-2. Weibel, Charles A. (2013), The K-book: An introduction to algebraic K-theory, Graduate Studies in Mathematics, vol. 145, Providence, RI: American Mathematical Society, ISBN 978-0-8218-9132-2, MR 3076731
Wikipedia/Ring_theory
In mathematics, K-theory is, roughly speaking, the study of a ring generated by vector bundles over a topological space or scheme. In algebraic topology, it is a cohomology theory known as topological K-theory. In algebra and algebraic geometry, it is referred to as algebraic K-theory. It is also a fundamental tool in the field of operator algebras. It can be seen as the study of certain kinds of invariants of large matrices. K-theory involves the construction of families of K-functors that map topological spaces or schemes (or, more generally, any object of a homotopy category) to associated rings; these rings reflect some aspects of the structure of the original spaces or schemes. As with functors to groups in algebraic topology, the reason for this functorial mapping is that it is easier to compute some topological properties from the mapped rings than from the original spaces or schemes. Examples of results gleaned from the K-theory approach include the Grothendieck–Riemann–Roch theorem, Bott periodicity, the Atiyah–Singer index theorem, and the Adams operations. In high energy physics, K-theory and in particular twisted K-theory have appeared in Type II string theory, where it has been conjectured that they classify D-branes, Ramond–Ramond field strengths and also certain spinors on generalized complex manifolds. In condensed matter physics, K-theory has been used to classify topological insulators, superconductors and stable Fermi surfaces. For more details, see K-theory (physics). == Grothendieck completion == The Grothendieck completion of an abelian monoid into an abelian group is a necessary ingredient for defining K-theory, since all definitions start by constructing an abelian monoid from a suitable category and turning it into an abelian group through this universal construction. 
Given an abelian monoid ( A , + ′ ) {\displaystyle (A,+')} let ∼ {\displaystyle \sim } be the relation on A 2 = A × A {\displaystyle A^{2}=A\times A} defined by ( a 1 , a 2 ) ∼ ( b 1 , b 2 ) {\displaystyle (a_{1},a_{2})\sim (b_{1},b_{2})} if there exists a c ∈ A {\displaystyle c\in A} such that a 1 + ′ b 2 + ′ c = a 2 + ′ b 1 + ′ c . {\displaystyle a_{1}+'b_{2}+'c=a_{2}+'b_{1}+'c.} Then, the set G ( A ) = A 2 / ∼ {\displaystyle G(A)=A^{2}/\sim } has the structure of a group ( G ( A ) , + ) {\displaystyle (G(A),+)} where: [ ( a 1 , a 2 ) ] + [ ( b 1 , b 2 ) ] = [ ( a 1 + ′ b 1 , a 2 + ′ b 2 ) ] . {\displaystyle [(a_{1},a_{2})]+[(b_{1},b_{2})]=[(a_{1}+'b_{1},a_{2}+'b_{2})].} Equivalence classes in this group should be thought of as formal differences of elements in the abelian monoid. This group ( G ( A ) , + ) {\displaystyle (G(A),+)} is also associated with a monoid homomorphism i : A → G ( A ) {\displaystyle i:A\to G(A)} given by a ↦ [ ( a , 0 ) ] , {\displaystyle a\mapsto [(a,0)],} which has a certain universal property. To get a better understanding of this group, consider some equivalence classes of the abelian monoid ( A , + ) {\displaystyle (A,+)} . Here we will denote the identity element of A {\displaystyle A} by 0 {\displaystyle 0} so that [ ( 0 , 0 ) ] {\displaystyle [(0,0)]} will be the identity element of ( G ( A ) , + ) . {\displaystyle (G(A),+).} First, ( 0 , 0 ) ∼ ( n , n ) {\displaystyle (0,0)\sim (n,n)} for any n ∈ A {\displaystyle n\in A} since we can set c = 0 {\displaystyle c=0} and apply the equation from the equivalence relation to get n = n . {\displaystyle n=n.} This implies [ ( a , b ) ] + [ ( b , a ) ] = [ ( a + b , a + b ) ] = [ ( 0 , 0 ) ] {\displaystyle [(a,b)]+[(b,a)]=[(a+b,a+b)]=[(0,0)]} hence we have an additive inverse [ ( b , a ) ] {\displaystyle [(b,a)]} for each [ ( a , b ) ] ∈ G ( A ) {\displaystyle [(a,b)]\in G(A)} . 
This should give us the hint that we should be thinking of the equivalence classes [ ( a , b ) ] {\displaystyle [(a,b)]} as formal differences a − b . {\displaystyle a-b.} Another useful observation is the invariance of equivalence classes under scaling: ( a , b ) ∼ ( a + k , b + k ) {\displaystyle (a,b)\sim (a+k,b+k)} for any k ∈ A . {\displaystyle k\in A.} The Grothendieck completion can be viewed as a functor G : A b M o n → A b G r p , {\displaystyle G:\mathbf {AbMon} \to \mathbf {AbGrp} ,} and it has the property that it is left adjoint to the corresponding forgetful functor U : A b G r p → A b M o n . {\displaystyle U:\mathbf {AbGrp} \to \mathbf {AbMon} .} That means that, given a morphism ϕ : A → U ( B ) {\displaystyle \phi :A\to U(B)} of an abelian monoid A {\displaystyle A} to the underlying abelian monoid of an abelian group B , {\displaystyle B,} there exists a unique abelian group morphism G ( A ) → B . {\displaystyle G(A)\to B.} === Example for natural numbers === An illustrative example to look at is the Grothendieck completion of N {\displaystyle \mathbb {N} } . We can see that G ( ( N , + ) ) = ( Z , + ) . {\displaystyle G((\mathbb {N} ,+))=(\mathbb {Z} ,+).} For any pair ( a , b ) {\displaystyle (a,b)} we can find a minimal representative ( a ′ , b ′ ) {\displaystyle (a',b')} by using the invariance under scaling. For example, we can see from the scaling invariance that ( 4 , 6 ) ∼ ( 3 , 5 ) ∼ ( 2 , 4 ) ∼ ( 1 , 3 ) ∼ ( 0 , 2 ) {\displaystyle (4,6)\sim (3,5)\sim (2,4)\sim (1,3)\sim (0,2)} In general, if k := min { a , b } {\displaystyle k:=\min\{a,b\}} then ( a , b ) ∼ ( a − k , b − k ) {\displaystyle (a,b)\sim (a-k,b-k)} which is of the form ( c , 0 ) {\displaystyle (c,0)} or ( 0 , d ) . {\displaystyle (0,d).} This shows that we should think of the ( a , 0 ) {\displaystyle (a,0)} as positive integers and the ( 0 , b ) {\displaystyle (0,b)} as negative integers. 
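The bookkeeping just described, formal differences with cancellation of min(a, b), is small enough to sketch in code. This is an illustrative implementation for the monoid (N, +) only, not a library routine; the names normalize, add and neg are invented here.

```python
# Sketch of the Grothendieck completion of (N, +): a pair (a, b) stands
# for the formal difference a - b, and the canonical representative is
# obtained by cancelling min(a, b) from both slots, as in the text.

def normalize(p):
    """Canonical representative of the class of (a, b): of the form (c, 0) or (0, d)."""
    a, b = p
    k = min(a, b)
    return (a - k, b - k)

def add(p, q):
    """Group law: [(a1, a2)] + [(b1, b2)] = [(a1 + b1, a2 + b2)]."""
    return normalize((p[0] + q[0], p[1] + q[1]))

def neg(p):
    """Additive inverse: [(a, b)] -> [(b, a)]."""
    return normalize((p[1], p[0]))

# (4, 6) ~ (0, 2): the formal difference 4 - 6, i.e. the integer -2.
assert normalize((4, 6)) == (0, 2)

# Every class plus its inverse is the identity class (0, 0).
p = normalize((7, 3))
assert add(p, neg(p)) == (0, 0)

# The monoid map i: a -> [(a, 0)] is additive, mirroring N inside G(N).
assert add((2, 0), (3, 0)) == (5, 0)
```

Reading (c, 0) as the integer c and (0, d) as −d recovers the identification G(N) = Z described above.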
== Definitions == There are a number of basic definitions of K-theory: two coming from topology and two from algebraic geometry. === Grothendieck group for compact Hausdorff spaces === Given a compact Hausdorff space X {\displaystyle X} consider the set of isomorphism classes of finite-dimensional vector bundles over X {\displaystyle X} , denoted Vect ( X ) {\displaystyle {\text{Vect}}(X)} and let the isomorphism class of a vector bundle π : E → X {\displaystyle \pi :E\to X} be denoted [ E ] {\displaystyle [E]} . Since isomorphism classes of vector bundles behave well with respect to direct sums, we can write these operations on isomorphism classes by [ E ] ⊕ [ E ′ ] = [ E ⊕ E ′ ] {\displaystyle [E]\oplus [E']=[E\oplus E']} It should be clear that ( Vect ( X ) , ⊕ ) {\displaystyle ({\text{Vect}}(X),\oplus )} is an abelian monoid where the unit is given by the trivial vector bundle R 0 × X → X {\displaystyle \mathbb {R} ^{0}\times X\to X} . We can then apply the Grothendieck completion to get an abelian group from this abelian monoid. This is called the K-theory of X {\displaystyle X} and is denoted K 0 ( X ) {\displaystyle K^{0}(X)} . We can use the Serre–Swan theorem and some algebra to get an alternative description of vector bundles over X {\displaystyle X} as projective modules over the ring C 0 ( X ; C ) {\displaystyle C^{0}(X;\mathbb {C} )} of continuous complex-valued functions. Then, these can be identified with idempotent matrices in some ring of matrices M n × n ( C 0 ( X ; C ) ) {\displaystyle M_{n\times n}(C^{0}(X;\mathbb {C} ))} . We can define equivalence classes of idempotent matrices and form an abelian monoid Idem ( X ) {\displaystyle {\textbf {Idem}}(X)} . Its Grothendieck completion is also called K 0 ( X ) {\displaystyle K^{0}(X)} . One of the main techniques for computing the Grothendieck group for topological spaces comes from the Atiyah–Hirzebruch spectral sequence, which makes it very accessible. 
The only required computations for understanding the spectral sequences are computing the group K 0 {\displaystyle K^{0}} for the spheres S n {\displaystyle S^{n}} .: 51–110  === Grothendieck group of vector bundles in algebraic geometry === There is an analogous construction by considering vector bundles in algebraic geometry. For a Noetherian scheme X {\displaystyle X} there is a set Vect ( X ) {\displaystyle {\text{Vect}}(X)} of all isomorphism classes of algebraic vector bundles on X {\displaystyle X} . Then, as before, the direct sum ⊕ {\displaystyle \oplus } of isomorphism classes of vector bundles is well-defined, giving an abelian monoid ( Vect ( X ) , ⊕ ) {\displaystyle ({\text{Vect}}(X),\oplus )} . Then, the Grothendieck group K 0 ( X ) {\displaystyle K^{0}(X)} is defined by the application of the Grothendieck construction on this abelian monoid. === Grothendieck group of coherent sheaves in algebraic geometry === In algebraic geometry, the same construction can be applied to algebraic vector bundles over a smooth scheme. But there is an alternative construction for any Noetherian scheme X {\displaystyle X} . If we look at the isomorphism classes of coherent sheaves Coh ⁡ ( X ) {\displaystyle \operatorname {Coh} (X)} we can mod out by the relation [ E ] = [ E ′ ] + [ E ″ ] {\displaystyle [{\mathcal {E}}]=[{\mathcal {E}}']+[{\mathcal {E}}'']} if there is a short exact sequence 0 → E ′ → E → E ″ → 0. {\displaystyle 0\to {\mathcal {E}}'\to {\mathcal {E}}\to {\mathcal {E}}''\to 0.} This gives the Grothendieck group K 0 ( X ) {\displaystyle K_{0}(X)} which is isomorphic to K 0 ( X ) {\displaystyle K^{0}(X)} if X {\displaystyle X} is smooth. The group K 0 ( X ) {\displaystyle K_{0}(X)} is special because there is also a ring structure: we define it as [ E ] ⋅ [ E ′ ] = ∑ ( − 1 ) k [ Tor k O X ⁡ ( E , E ′ ) ] . 
{\displaystyle [{\mathcal {E}}]\cdot [{\mathcal {E}}']=\sum (-1)^{k}\left[\operatorname {Tor} _{k}^{{\mathcal {O}}_{X}}({\mathcal {E}},{\mathcal {E}}')\right].} Using the Grothendieck–Riemann–Roch theorem, we have that ch : K 0 ( X ) ⊗ Q → A ( X ) ⊗ Q {\displaystyle \operatorname {ch} :K_{0}(X)\otimes \mathbb {Q} \to A(X)\otimes \mathbb {Q} } is an isomorphism of rings. Hence we can use K 0 ( X ) {\displaystyle K_{0}(X)} for intersection theory. == Early history == The subject can be said to begin with Alexander Grothendieck (1957), who used it to formulate his Grothendieck–Riemann–Roch theorem. It takes its name from the German Klasse, meaning "class". Grothendieck needed to work with coherent sheaves on an algebraic variety X. Rather than working directly with the sheaves, he defined a group using isomorphism classes of sheaves as generators of the group, subject to a relation that identifies any extension of two sheaves with their sum. The resulting group is called K(X) when only locally free sheaves are used, or G(X) when all coherent sheaves are allowed. Either of these two constructions is referred to as the Grothendieck group; K(X) has cohomological behavior and G(X) has homological behavior. If X is a smooth variety, the two groups are the same. If it is a smooth affine variety, then all extensions of locally free sheaves split, so the group has an alternative definition. In topology, by applying the same construction to vector bundles, Michael Atiyah and Friedrich Hirzebruch defined K(X) for a topological space X in 1959, and using the Bott periodicity theorem they made it the basis of an extraordinary cohomology theory. It played a major role in the second proof of the Atiyah–Singer index theorem (circa 1962). Furthermore, this approach led to a noncommutative K-theory for C*-algebras. 
Already in 1955, Jean-Pierre Serre had used the analogy of vector bundles with projective modules to formulate Serre's conjecture, which states that every finitely generated projective module over a polynomial ring is free; this assertion is correct, but was not settled until 20 years later. (Swan's theorem is another aspect of this analogy.) == Developments == The other historical origin of algebraic K-theory was the work of J. H. C. Whitehead and others on what later became known as Whitehead torsion. There followed a period in which there were various partial definitions of higher K-theory functors. Finally, two useful and equivalent definitions were given by Daniel Quillen using homotopy theory in 1969 and 1972. A variant was also given by Friedhelm Waldhausen in order to study the algebraic K-theory of spaces, which is related to the study of pseudo-isotopies. Much modern research on higher K-theory is related to algebraic geometry and the study of motivic cohomology. The corresponding constructions involving an auxiliary quadratic form received the general name L-theory. It is a major tool of surgery theory. In string theory, the K-theory classification of Ramond–Ramond field strengths and the charges of stable D-branes was first proposed in 1997. In 2022, Russian mathematician Alexander Ivanovich Efimov constructed a significant generalization of algebraic K-theory, particularly applied to dualizable ( ∞ , 1 ) {\displaystyle (\infty ,1)} -categories. == Examples and properties == === K0 of a field === The easiest example of the Grothendieck group is the Grothendieck group of a point Spec ( F ) {\displaystyle {\text{Spec}}(\mathbb {F} )} for a field F {\displaystyle \mathbb {F} } . Since a vector bundle over this space is just a finite-dimensional vector space, which is a free object in the category of coherent sheaves (hence projective), the monoid of isomorphism classes is N {\displaystyle \mathbb {N} } , corresponding to the dimension of the vector space. 
It is an easy exercise to show that the Grothendieck group is then Z {\displaystyle \mathbb {Z} } . === K0 of an Artinian algebra over a field === One important property of the Grothendieck group of a Noetherian scheme X {\displaystyle X} is that it is invariant under reduction, hence K ( X ) = K ( X red ) {\displaystyle K(X)=K(X_{\text{red}})} . Hence the Grothendieck group of any Artinian F {\displaystyle \mathbb {F} } -algebra is a direct sum of copies of Z {\displaystyle \mathbb {Z} } , one for each connected component of its spectrum. For example, K 0 ( Spec ( F [ x ] ( x 9 ) × F ) ) = Z ⊕ Z {\displaystyle K_{0}\left({\text{Spec}}\left({\frac {\mathbb {F} [x]}{(x^{9})}}\times \mathbb {F} \right)\right)=\mathbb {Z} \oplus \mathbb {Z} } === K0 of projective space === One of the most commonly used computations of the Grothendieck group is with the computation of K ( P n ) {\displaystyle K(\mathbb {P} ^{n})} for projective space over a field. This is because the intersection numbers of a projective variety X {\displaystyle X} can be computed by embedding i : X ↪ P n {\displaystyle i:X\hookrightarrow \mathbb {P} ^{n}} and using the push–pull formula i ∗ ( [ i ∗ E ] ⋅ [ i ∗ F ] ) {\displaystyle i^{*}([i_{*}{\mathcal {E}}]\cdot [i_{*}{\mathcal {F}}])} . 
This makes it possible to do concrete calculations with elements in K ( X ) {\displaystyle K(X)} without having to explicitly know its structure since K ( P n ) = Z [ T ] ( T n + 1 ) {\displaystyle K(\mathbb {P} ^{n})={\frac {\mathbb {Z} [T]}{(T^{n+1})}}} One technique for determining the Grothendieck group of P n {\displaystyle \mathbb {P} ^{n}} comes from its stratification as P n = A n ∐ A n − 1 ∐ ⋯ ∐ A 0 {\displaystyle \mathbb {P} ^{n}=\mathbb {A} ^{n}\coprod \mathbb {A} ^{n-1}\coprod \cdots \coprod \mathbb {A} ^{0}} since the Grothendieck groups of coherent sheaves on affine spaces are isomorphic to Z {\displaystyle \mathbb {Z} } , and the intersection of A n − k 1 , A n − k 2 {\displaystyle \mathbb {A} ^{n-k_{1}},\mathbb {A} ^{n-k_{2}}} is generically A n − k 1 ∩ A n − k 2 = A n − k 1 − k 2 {\displaystyle \mathbb {A} ^{n-k_{1}}\cap \mathbb {A} ^{n-k_{2}}=\mathbb {A} ^{n-k_{1}-k_{2}}} for k 1 + k 2 ≤ n {\displaystyle k_{1}+k_{2}\leq n} . === K0 of a projective bundle === Another important formula for the Grothendieck group is the projective bundle formula: given a rank r vector bundle E {\displaystyle {\mathcal {E}}} over a Noetherian scheme X {\displaystyle X} , the Grothendieck group of the projective bundle P ( E ) = Proj ⁡ ( Sym ∙ ⁡ ( E ∨ ) ) {\displaystyle \mathbb {P} ({\mathcal {E}})=\operatorname {Proj} (\operatorname {Sym} ^{\bullet }({\mathcal {E}}^{\vee }))} is a free K ( X ) {\displaystyle K(X)} -module of rank r with basis 1 , ξ , … , ξ r − 1 {\displaystyle 1,\xi ,\dots ,\xi ^{r-1}} . This formula allows one to compute the Grothendieck group of P F n {\displaystyle \mathbb {P} _{\mathbb {F} }^{n}} . This makes it possible to compute the K 0 {\displaystyle K_{0}} of Hirzebruch surfaces. In addition, this can be used to compute the Grothendieck group K ( P n ) {\displaystyle K(\mathbb {P} ^{n})} by observing that it is a projective bundle over the field F {\displaystyle \mathbb {F} } . 
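The ring K(P^n) ≅ Z[T]/(T^{n+1}) described above supports exactly the kind of concrete calculation the text mentions. A minimal sketch: classes are integer polynomials truncated above degree n, so the generator T is nilpotent and products are computed by dropping high-order terms (the geometric meaning of T is left aside here; this is only ring arithmetic).

```python
# Arithmetic in K(P^n) = Z[T]/(T^{n+1}), with classes stored as
# coefficient lists [c0, c1, ..., cn].

def mul(a, b, n):
    """Multiply two classes in Z[T]/(T^{n+1})."""
    out = [0] * (n + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= n:          # T^{n+1} = 0, so higher terms vanish
                out[i + j] += ai * bj
    return out

n = 2                               # work in K(P^2)
T = [0, 1, 0]                       # the nilpotent generator T
assert mul(T, mul(T, T, n), n) == [0, 0, 0]   # T^3 = 0

# 1 + T is a unit: its inverse is the truncated geometric series.
one_plus_T = [1, 1, 0]
inverse = [1, -1, 1]                # 1 - T + T^2
assert mul(one_plus_T, inverse, n) == [1, 0, 0]
```

Because T is nilpotent, every class with constant term ±1 is invertible, which is why such computations are possible without knowing more of the ring's structure.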
=== K0 of singular spaces and spaces with isolated quotient singularities === One recent technique for computing the Grothendieck group of spaces with minor singularities comes from evaluating the difference between K 0 ( X ) {\displaystyle K^{0}(X)} and K 0 ( X ) {\displaystyle K_{0}(X)} , which is possible because every vector bundle can be equivalently described as a coherent sheaf. This is done using the Grothendieck group of the singularity category D s g ( X ) {\displaystyle D_{sg}(X)} from derived noncommutative algebraic geometry. It gives a long exact sequence starting with ⋯ → K 0 ( X ) → K 0 ( X ) → K s g ( X ) → 0 {\displaystyle \cdots \to K^{0}(X)\to K_{0}(X)\to K_{sg}(X)\to 0} where the higher terms come from higher K-theory. Note that vector bundles on a singular X {\displaystyle X} are given by vector bundles E → X s m {\displaystyle E\to X_{sm}} on the smooth locus X s m ↪ X {\displaystyle X_{sm}\hookrightarrow X} . This makes it possible to compute the Grothendieck group on weighted projective spaces since they typically have isolated quotient singularities. In particular, if these singularities have isotropy groups G i {\displaystyle G_{i}} then the map K 0 ( X ) → K 0 ( X ) {\displaystyle K^{0}(X)\to K_{0}(X)} is injective and the cokernel is annihilated by lcm ( | G 1 | , … , | G k | ) n − 1 {\displaystyle {\text{lcm}}(|G_{1}|,\ldots ,|G_{k}|)^{n-1}} for n = dim ⁡ X {\displaystyle n=\dim X} . pg 3 === K0 of a smooth projective curve === For a smooth projective curve C {\displaystyle C} the Grothendieck group is K 0 ( C ) = Z ⊕ Pic ( C ) {\displaystyle K_{0}(C)=\mathbb {Z} \oplus {\text{Pic}}(C)} where Pic(C) is the Picard group of C {\displaystyle C} . This follows from the Brown–Gersten–Quillen spectral sequence pg 72 of algebraic K-theory. 
For a regular scheme of finite type over a field, there is a convergent spectral sequence E 1 p , q = ∐ x ∈ X ( p ) K − p − q ( k ( x ) ) ⇒ K − p − q ( X ) {\displaystyle E_{1}^{p,q}=\coprod _{x\in X^{(p)}}K^{-p-q}(k(x))\Rightarrow K_{-p-q}(X)} for X ( p ) {\displaystyle X^{(p)}} the set of codimension p {\displaystyle p} points, meaning the set of subschemes x : Y → X {\displaystyle x:Y\to X} of codimension p {\displaystyle p} , and k ( x ) {\displaystyle k(x)} the algebraic function field of the subscheme. This spectral sequence has the property pg 80 E 2 p , − p ≅ CH p ( X ) {\displaystyle E_{2}^{p,-p}\cong {\text{CH}}^{p}(X)} for the Chow ring of X {\displaystyle X} , essentially giving the computation of K 0 ( C ) {\displaystyle K_{0}(C)} . Note that because C {\displaystyle C} has no codimension 2 {\displaystyle 2} points, the only nontrivial parts of the spectral sequence are E 1 0 , q , E 1 1 , q {\displaystyle E_{1}^{0,q},E_{1}^{1,q}} , hence E ∞ 1 , − 1 ≅ E 2 1 , − 1 ≅ CH 1 ( C ) E ∞ 0 , 0 ≅ E 2 0 , 0 ≅ CH 0 ( C ) {\displaystyle {\begin{aligned}E_{\infty }^{1,-1}\cong E_{2}^{1,-1}&\cong {\text{CH}}^{1}(C)\\E_{\infty }^{0,0}\cong E_{2}^{0,0}&\cong {\text{CH}}^{0}(C)\end{aligned}}} The coniveau filtration can then be used to determine K 0 ( C ) {\displaystyle K_{0}(C)} as the desired explicit direct sum since it gives an exact sequence 0 → F 1 ( K 0 ( X ) ) → K 0 ( X ) → K 0 ( X ) / F 1 ( K 0 ( X ) ) → 0 {\displaystyle 0\to F^{1}(K_{0}(X))\to K_{0}(X)\to K_{0}(X)/F^{1}(K_{0}(X))\to 0} where the left-hand term is isomorphic to CH 1 ( C ) ≅ Pic ( C ) {\displaystyle {\text{CH}}^{1}(C)\cong {\text{Pic}}(C)} and the right-hand term is isomorphic to CH 0 ( C ) ≅ Z {\displaystyle {\text{CH}}^{0}(C)\cong \mathbb {Z} } . Since Ext Ab 1 ( Z , G ) = 0 {\displaystyle {\text{Ext}}_{\text{Ab}}^{1}(\mathbb {Z} ,G)=0} , the sequence of abelian groups above splits, giving the isomorphism. 
Note that if C {\displaystyle C} is a smooth projective curve of genus g {\displaystyle g} over C {\displaystyle \mathbb {C} } , then K 0 ( C ) ≅ Z ⊕ ( C g / Z 2 g ) {\displaystyle K_{0}(C)\cong \mathbb {Z} \oplus (\mathbb {C} ^{g}/\mathbb {Z} ^{2g})} Moreover, the techniques above using the derived category of singularities for isolated singularities can be extended to isolated Cohen-Macaulay singularities, giving techniques for computing the Grothendieck group of any singular algebraic curve. This is because reduction gives a generically smooth curve, and all singularities are Cohen-Macaulay. == Applications == === Virtual bundles === One useful application of the Grothendieck-group is to define virtual vector bundles. For example, if we have an embedding of smooth spaces Y ↪ X {\displaystyle Y\hookrightarrow X} then there is a short exact sequence 0 → Ω Y → Ω X | Y → C Y / X → 0 {\displaystyle 0\to \Omega _{Y}\to \Omega _{X}|_{Y}\to C_{Y/X}\to 0} where C Y / X {\displaystyle C_{Y/X}} is the conormal bundle of Y {\displaystyle Y} in X {\displaystyle X} . If we have a singular space Y {\displaystyle Y} embedded into a smooth space X {\displaystyle X} we define the virtual conormal bundle as [ Ω X | Y ] − [ Ω Y ] {\displaystyle [\Omega _{X}|_{Y}]-[\Omega _{Y}]} Another useful application of virtual bundles is with the definition of a virtual tangent bundle of an intersection of spaces: Let Y 1 , Y 2 ⊂ X {\displaystyle Y_{1},Y_{2}\subset X} be projective subvarieties of a smooth projective variety. Then, we can define the virtual tangent bundle of their intersection Z = Y 1 ∩ Y 2 {\displaystyle Z=Y_{1}\cap Y_{2}} as [ T Z ] v i r = [ T Y 1 ] | Z + [ T Y 2 ] | Z − [ T X ] | Z . {\displaystyle [T_{Z}]^{vir}=[T_{Y_{1}}]|_{Z}+[T_{Y_{2}}]|_{Z}-[T_{X}]|_{Z}.} Kontsevich uses this construction in one of his papers. 
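The virtual bundles above are just formal differences of classes in the Grothendieck group, and that bookkeeping can be sketched mechanically. The following toy model (labels such as "TY1|Z" are placeholders, not actual bundles) represents a virtual class as a pair of multisets with common summands cancelled, and reproduces the virtual tangent bundle formula [T_Z]^vir = [T_{Y1}]|_Z + [T_{Y2}]|_Z − [T_X]|_Z.

```python
from collections import Counter

# A virtual bundle as a formal difference of two multisets of
# (purely symbolic) bundle labels; only the difference matters,
# so common summands cancel, as in the Grothendieck group.

class Virtual:
    def __init__(self, plus=(), minus=()):
        self.plus, self.minus = Counter(plus), Counter(minus)
        common = self.plus & self.minus   # cancel [E] - [E]
        self.plus -= common
        self.minus -= common

    def __add__(self, other):
        return Virtual(self.plus + other.plus, self.minus + other.minus)

    def __sub__(self, other):
        return Virtual(self.plus + other.minus, self.minus + other.plus)

    def __eq__(self, other):
        return self.plus == other.plus and self.minus == other.minus

# Virtual tangent bundle of Z = Y1 ∩ Y2 inside a smooth X:
TY1, TY2, TX = Virtual(["TY1|Z"]), Virtual(["TY2|Z"]), Virtual(["TX|Z"])
TZ_vir = TY1 + TY2 - TX
assert TZ_vir == Virtual(["TY1|Z", "TY2|Z"], ["TX|Z"])
```

This captures only the additive structure of the group; ranks, duals, and tensor products are deliberately out of scope for the sketch.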
=== Chern characters === Chern classes can be used to construct a homomorphism of rings from the topological K-theory of a space to (the completion of) its rational cohomology. For a line bundle L, the Chern character ch is defined by ch ⁡ ( L ) = exp ⁡ ( c 1 ( L ) ) := ∑ m = 0 ∞ c 1 ( L ) m m ! . {\displaystyle \operatorname {ch} (L)=\exp(c_{1}(L)):=\sum _{m=0}^{\infty }{\frac {c_{1}(L)^{m}}{m!}}.} More generally, if V = L 1 ⊕ ⋯ ⊕ L n {\displaystyle V=L_{1}\oplus \dots \oplus L_{n}} is a direct sum of line bundles, with first Chern classes x i = c 1 ( L i ) , {\displaystyle x_{i}=c_{1}(L_{i}),} the Chern character is defined additively ch ⁡ ( V ) = e x 1 + ⋯ + e x n := ∑ m = 0 ∞ 1 m ! ( x 1 m + ⋯ + x n m ) . {\displaystyle \operatorname {ch} (V)=e^{x_{1}}+\dots +e^{x_{n}}:=\sum _{m=0}^{\infty }{\frac {1}{m!}}(x_{1}^{m}+\dots +x_{n}^{m}).} The Chern character is useful in part because it facilitates the computation of the Chern class of a tensor product. The Chern character is used in the Hirzebruch–Riemann–Roch theorem. == Equivariant K-theory == The equivariant algebraic K-theory is an algebraic K-theory associated to the category Coh G ⁡ ( X ) {\displaystyle \operatorname {Coh} ^{G}(X)} of equivariant coherent sheaves on an algebraic scheme X {\displaystyle X} with action of a linear algebraic group G {\displaystyle G} , via Quillen's Q-construction; thus, by definition, K i G ( X ) = π i ( B + Coh G ⁡ ( X ) ) . {\displaystyle K_{i}^{G}(X)=\pi _{i}(B^{+}\operatorname {Coh} ^{G}(X)).} In particular, K 0 G ( X ) {\displaystyle K_{0}^{G}(X)} is the Grothendieck group of Coh G ⁡ ( X ) {\displaystyle \operatorname {Coh} ^{G}(X)} . The theory was developed by R. W. Thomason in the 1980s. Specifically, he proved equivariant analogs of fundamental theorems such as the localization theorem. 
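Returning to the Chern character defined above: because c1 is nilpotent on a finite-dimensional space, exp(c1) is a finite sum, and the behavior under tensor products of line bundles (where first Chern classes add) can be checked numerically. The sketch below works in a truncated ring Q[x]/(x^{d+1}) with exact rational arithmetic; the truncation degree d is an arbitrary choice for illustration.

```python
from fractions import Fraction
from math import factorial

d = 4  # truncate cohomology above degree d (illustrative choice)

def ch_line(a):
    """Chern character exp(a*x) of a line bundle with c1 = a*x,
    as a coefficient list [c0, ..., cd] in Q[x]/(x^{d+1})."""
    return [Fraction(a) ** m / factorial(m) for m in range(d + 1)]

def mul(p, q):
    """Cup product in the truncated ring."""
    out = [Fraction(0)] * (d + 1)
    for i in range(d + 1):
        for j in range(d + 1 - i):
            out[i + j] += p[i] * q[j]
    return out

# c1 is additive under tensor products of line bundles, so the
# Chern character turns tensor products into cup products:
assert mul(ch_line(2), ch_line(3)) == ch_line(5)
```

This is precisely the ring-homomorphism property that makes ch useful for computing Chern classes of tensor products.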
== See also == Bott periodicity KK-theory KR-theory List of cohomology theories Algebraic K-theory Topological K-theory Operator K-theory Grothendieck–Riemann–Roch theorem == Notes == == References == Atiyah, Michael Francis (1989). K-theory. Advanced Book Classics (2nd ed.). Addison-Wesley. ISBN 978-0-201-09394-0. MR 1043170. Friedlander, Eric; Grayson, Daniel, eds. (2005). Handbook of K-Theory. Berlin, New York: Springer-Verlag. doi:10.1007/978-3-540-27855-9. ISBN 978-3-540-30436-4. MR 2182598. Park, Efton (2008). Complex Topological K-Theory. Cambridge Studies in Advanced Mathematics. Vol. 111. Cambridge University Press. ISBN 978-0-521-85634-8. Swan, R. G. (1968). Algebraic K-Theory. Lecture Notes in Mathematics. Vol. 76. Springer. ISBN 3-540-04245-8. Karoubi, Max (1978). K-theory: an introduction. Classics in Mathematics. Springer-Verlag. doi:10.1007/978-3-540-79890-3. ISBN 0-387-08090-2. Karoubi, Max (2006). "K-theory. An elementary introduction". arXiv:math/0602082. Hatcher, Allen (2003). "Vector Bundles & K-Theory". Weibel, Charles (2013). The K-book: an introduction to algebraic K-theory. Grad. Studies in Math. Vol. 145. American Math Society. ISBN 978-0-8218-9132-2. == External links == Grothendieck-Riemann-Roch Max Karoubi's Page K-theory preprint archive
Wikipedia/K-theory
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept. A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as f ( x ) = x 2 + 1 ; {\displaystyle f(x)=x^{2}+1;} in this case, some computation, called function evaluation, may be needed for deducing the value of the function at a particular value; for example, if f ( x ) = x 2 + 1 , {\displaystyle f(x)=x^{2}+1,} then f ( 4 ) = 4 2 + 1 = 17. {\displaystyle f(4)=4^{2}+1=17.} Given its domain and its codomain, a function is uniquely represented by the set of all pairs (x, f (x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics. 
The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details. == Definition == A function f from a set X to a set Y is an assignment of one element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function. If the element y in Y is assigned to x in X by the function f, one says that f maps x to y, and this is commonly written y = f ( x ) . {\displaystyle y=f(x).} In this notation, x is the argument or variable of the function. A specific element x of X is a value of the variable, and the corresponding element of Y is the value of the function at x, or the image of x under the function. The image of a function, sometimes called its range, is the set of the images of all elements in the domain. A function f, its domain X, and its codomain Y are often specified by the notation f : X → Y . {\displaystyle f:X\to Y.} One may write x ↦ y {\displaystyle x\mapsto y} instead of y = f ( x ) {\displaystyle y=f(x)} , where the symbol ↦ {\displaystyle \mapsto } (read 'maps to') is used to specify where a particular element x in the domain is mapped to by f. This allows the definition of a function without naming. For example, the square function is the function x ↦ x 2 . {\displaystyle x\mapsto x^{2}.} The domain and codomain are not always explicitly given when a function is defined. In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. For example, if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a real function, the determination of the domain of the function x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} requires knowing the zeros of f. 
This is one of the reasons for which, in mathematical analysis, "a function from X to Y " may refer to a function having a proper subset of X as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function. A function f on a set S means a function from the domain S, without specifying a codomain. However, some authors use it as shorthand for saying that the function is f : S → S. === Formal definition === The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets X and Y is a subset of the set of all ordered pairs ( x , y ) {\displaystyle (x,y)} such that x ∈ X {\displaystyle x\in X} and y ∈ Y . {\displaystyle y\in Y.} The set of all these pairs is called the Cartesian product of X and Y and denoted X × Y . {\displaystyle X\times Y.} Thus, the above definition may be formalized as follows. A function with domain X and codomain Y is a binary relation R between X and Y that satisfies the two following conditions: For every x {\displaystyle x} in X {\displaystyle X} there exists y {\displaystyle y} in Y {\displaystyle Y} such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} If ( x , y ) ∈ R {\displaystyle (x,y)\in R} and ( x , z ) ∈ R , {\displaystyle (x,z)\in R,} then y = z . 
{\displaystyle y=z.} This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation): A function is formed by three sets, the domain X , {\displaystyle X,} the codomain Y , {\displaystyle Y,} and the graph R {\displaystyle R} that satisfy the three following conditions. R ⊆ { ( x , y ) ∣ x ∈ X , y ∈ Y } {\displaystyle R\subseteq \{(x,y)\mid x\in X,y\in Y\}} ∀ x ∈ X , ∃ y ∈ Y , ( x , y ) ∈ R {\displaystyle \forall x\in X,\exists y\in Y,\left(x,y\right)\in R\qquad } ( x , y ) ∈ R ∧ ( x , z ) ∈ R ⟹ y = z {\displaystyle (x,y)\in R\land (x,z)\in R\implies y=z\qquad } === Partial functions === Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from X to Y is a binary relation R between X and Y such that, for every x ∈ X , {\displaystyle x\in X,} there is at most one y in Y such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} Using functional notation, this means that, given x ∈ X , {\displaystyle x\in X,} either f ( x ) {\displaystyle f(x)} is in Y, or it is undefined. The set of the elements of X such that f ( x ) {\displaystyle f(x)} is defined and belongs to Y is called the domain of definition of the function. A partial function from X to Y is thus an ordinary function that has as its domain a subset of X called the domain of definition of the function. If the domain of definition equals X, one often says that the partial function is a total function. In several areas of mathematics, the term "function" refers to partial functions rather than to ordinary (total) functions. This is typically the case when functions may be specified in a way that makes it difficult or even impossible to determine their domain. In calculus, a real-valued function of a real variable or real function is a partial function from the set R {\displaystyle \mathbb {R} } of the real numbers to itself. 
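The two conditions in the formal definition above (every x has an image; the image is unique) can be checked mechanically on finite sets. A small sketch:

```python
# A relation R ⊆ X × Y is a function iff every element of X
# appears as the first component of exactly one pair in R.

def is_function(R, X, Y):
    if not all(x in X and y in Y for x, y in R):
        return False   # R is not a relation between X and Y at all
    # totality + single-valuedness: each x occurs exactly once
    return all(sum(1 for a, _ in R if a == x) == 1 for x in X)

X, Y = {1, 2, 3}, {"a", "b"}
assert is_function({(1, "a"), (2, "a"), (3, "b")}, X, Y)
assert not is_function({(1, "a"), (1, "b"), (2, "a"), (3, "a")}, X, Y)  # 1 has two images
assert not is_function({(1, "a"), (2, "b")}, X, Y)                      # 3 has no image
```

Dropping the totality requirement (counting "at most one" instead of "exactly one") gives the partial functions discussed next in the text.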
Given a real function f : x ↦ f ( x ) {\displaystyle f:x\mapsto f(x)} , its multiplicative inverse x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} is also a real function. The determination of the domain of definition of a multiplicative inverse of a (partial) function amounts to computing the zeros of the function, the values where the function is defined but its multiplicative inverse is not. Similarly, a function of a complex variable is generally a partial function whose domain of definition is a subset of the complex numbers C {\displaystyle \mathbb {C} } . The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function z ↦ 1 / ζ ( z ) {\displaystyle z\mapsto 1/\zeta (z)} is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis. In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether 0 belongs to its domain of definition (see Halting problem). === Multivariate functions === A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed. Formally, a function of n variables is a function whose domain is a set of n-tuples. 
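The n-tuple viewpoint can be made literal: a function of two variables is simply a function whose single argument is an ordered pair. The sketch below contrasts the tuple form with the usual notation that hides the pair.

```python
# A bivariate function, taken literally as a function on 2-tuples.

def multiply(pair):
    x, y = pair
    return x * y

# Functional notation usually omits the inner parentheses,
# writing f(x1, ..., xn) instead of f((x1, ..., xn)):
def multiply_uncurried(x, y):
    return x * y

assert multiply((6, 7)) == multiply_uncurried(6, 7) == 42
```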
For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. The graph of a bivariate function over a two-dimensional real domain may be interpreted as defining a parametric surface, as used in, e.g., bivariate interpolation. Commonly, an n-tuple is denoted enclosed between parentheses, such as in ( 1 , 2 , … , n ) . {\displaystyle (1,2,\ldots ,n).} When using functional notation, one usually omits the parentheses surrounding tuples, writing f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} instead of f ( ( x 1 , … , x n ) ) . {\displaystyle f((x_{1},\ldots ,x_{n})).} Given n sets X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} the set of all n-tuples ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} such that x 1 ∈ X 1 , … , x n ∈ X n {\displaystyle x_{1}\in X_{1},\ldots ,x_{n}\in X_{n}} is called the Cartesian product of X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} and denoted X 1 × ⋯ × X n . {\displaystyle X_{1}\times \cdots \times X_{n}.} Therefore, a multivariate function is a function that has a Cartesian product or a proper subset of a Cartesian product as a domain: f : U → Y , {\displaystyle f:U\to Y,} where the domain U has the form U ⊆ X 1 × ⋯ × X n . {\displaystyle U\subseteq X_{1}\times \cdots \times X_{n}.} If all the X i {\displaystyle X_{i}} are equal to the set R {\displaystyle \mathbb {R} } of the real numbers or to the set C {\displaystyle \mathbb {C} } of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables. == Notation == There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below. 
=== Functional notation === The functional notation requires that a name is given to the function, which, in the case of an unspecified function, is often the letter f. Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in f ( x ) , sin ⁡ ( 3 ) , or f ( x 2 + 1 ) . {\displaystyle f(x),\quad \sin(3),\quad {\text{or}}\quad f(x^{2}+1).} The argument between the parentheses may be a variable, often x, that represents an arbitrary element of the domain of the function, a specific element of the domain (3 in the above example), or an expression that can be evaluated to an element of the domain ( x 2 + 1 {\displaystyle x^{2}+1} in the above example). The use of an unspecified variable between parentheses is useful for defining a function explicitly such as in "let f ( x ) = sin ⁡ ( x 2 + 1 ) {\displaystyle f(x)=\sin(x^{2}+1)} ". When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write sin x instead of sin(x). Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols. The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let f ( x ) {\displaystyle f(x)} be a function". This is an abuse of notation that is useful for a simpler formulation. === Arrow notation === Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to". 
For example, x ↦ x + 1 {\displaystyle x\mapsto x+1} is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of R {\displaystyle \mathbb {R} } is implied. The domain and codomain can also be explicitly stated, for example: sqr : Z → Z x ↦ x 2 . {\displaystyle {\begin{aligned}\operatorname {sqr} \colon \mathbb {Z} &\to \mathbb {Z} \\x&\mapsto x^{2}.\end{aligned}}} This defines a function sqr from the integers to the integers that returns the square of its input. As a common application of the arrow notation, suppose f : X × X → Y ; ( x , t ) ↦ f ( x , t ) {\displaystyle f:X\times X\to Y;\;(x,t)\mapsto f(x,t)} is a function in two variables, and we want to refer to a partially applied function X → Y {\displaystyle X\to Y} produced by fixing the second argument to the value t0 without introducing a new function name. The map in question could be denoted x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} using the arrow notation. The expression x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} (read: "the map taking x to f of x comma t nought") represents this new function with just one argument, whereas the expression f(x0, t0) refers to the value of the function f at the point (x0, t0). === Index notation === Index notation may be used instead of functional notation. That is, instead of writing f (x), one writes f x . {\displaystyle f_{x}.} This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element f n {\displaystyle f_{n}} is called the nth element of the sequence. The index notation can also be used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. 
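Both the arrow-notation map x ↦ f(x, t0) and the indexed family f_t can be sketched with closures; the function f and the value t0 below are arbitrary illustrations, and `functools.partial` is a standard-library way to express the same fixing of an argument.

```python
from functools import partial

def f(x, t):
    return x + 2 * t

t0 = 10
g = lambda x: f(x, t0)     # arrow notation: x ↦ f(x, t0)
h = partial(f, t=t0)       # the same partially applied map

def f_(t):
    """Index notation: returns the member f_t of the family,
    with f_t(x) = f(x, t)."""
    return lambda x: f(x, t)

assert g(1) == h(1) == f_(t0)(1) == 21   # all denote the value f(1, t0)
```

As in the text, g (or f_t0) is a new one-argument function, whereas f(x0, t0) denotes a single value.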
For example, the map x ↦ f ( x , t ) {\displaystyle x\mapsto f(x,t)} (see above) would be denoted f t {\displaystyle f_{t}} using index notation, if we define the collection of maps f t {\displaystyle f_{t}} by the formula f t ( x ) = f ( x , t ) {\displaystyle f_{t}(x)=f(x,t)} for all x , t ∈ X {\displaystyle x,t\in X} . === Dot notation === In the notation x ↦ f ( x ) , {\displaystyle x\mapsto f(x),} the symbol x does not represent any value; it is simply a placeholder, meaning that, if x is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, x may be replaced by any symbol, often an interpunct " ⋅ ". This may be useful for distinguishing the function f (⋅) from its value f (x) at x. For example, a ( ⋅ ) 2 {\displaystyle a(\cdot )^{2}} may stand for the function x ↦ a x 2 {\displaystyle x\mapsto ax^{2}} , and ∫ a ( ⋅ ) f ( u ) d u {\textstyle \int _{a}^{\,(\cdot )}f(u)\,du} may stand for a function defined by an integral with variable upper bound: x ↦ ∫ a x f ( u ) d u {\textstyle x\mapsto \int _{a}^{x}f(u)\,du} . === Specialized notations === There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above. 
=== Functions of more than one variable === In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function f can be defined as mapping any pair of real numbers ( x , y ) {\displaystyle (x,y)} to the sum of their squares, x 2 + y 2 {\displaystyle x^{2}+y^{2}} . Such a function is commonly written as f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)=x^{2}+y^{2}} and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as f ( w , x , y ) {\displaystyle f(w,x,y)} , f ( w , x , y , z ) {\displaystyle f(w,x,y,z)} . == Other terms == A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from G to H instead of group homomorphism from G to H). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function. Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions. In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map. Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function. == Specifying a function == Given a function f {\displaystyle f} , by definition, to each element x {\displaystyle x} of the domain of the function f {\displaystyle f} , there is a unique element associated to it, the value f ( x ) {\displaystyle f(x)} of f {\displaystyle f} at x {\displaystyle x} . 
There are several ways to specify or describe how x {\displaystyle x} is related to f ( x ) {\displaystyle f(x)} , both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function f {\displaystyle f} . === By listing function values === On a finite set a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if A = { 1 , 2 , 3 } {\displaystyle A=\{1,2,3\}} , then one can define a function f : A → R {\displaystyle f:A\to \mathbb {R} } by f ( 1 ) = 2 , f ( 2 ) = 3 , f ( 3 ) = 4. {\displaystyle f(1)=2,f(2)=3,f(3)=4.} === By a formula === Functions are often defined by an expression that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain. For example, in the above example, f {\displaystyle f} can be defined by the formula f ( n ) = n + 1 {\displaystyle f(n)=n+1} , for n ∈ { 1 , 2 , 3 } {\displaystyle n\in \{1,2,3\}} . When a function is defined this way, the determination of its domain is sometimes difficult. If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from R {\displaystyle \mathbb {R} } to R , {\displaystyle \mathbb {R} ,} the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative. 
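The domain restrictions just described (excluding zeros of denominators and values making square-root arguments negative) become explicit when a formula is turned into code. A sketch, with a hypothetical function chosen for illustration:

```python
import math

def f(x):
    # f(x) = sqrt(x) / (x - 1). The domain excludes x < 0, where the argument
    # of the square root is negative, and x = 1, a zero of the denominator;
    # the domain is therefore [0, 1) ∪ (1, +∞).
    if x < 0:
        raise ValueError("x is outside the domain: negative sqrt argument")
    return math.sqrt(x) / (x - 1)
```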
For example, f ( x ) = 1 + x 2 {\displaystyle f(x)={\sqrt {1+x^{2}}}} defines a function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } whose domain is R , {\displaystyle \mathbb {R} ,} because 1 + x 2 {\displaystyle 1+x^{2}} is always positive if x is a real number. On the other hand, f ( x ) = 1 − x 2 {\displaystyle f(x)={\sqrt {1-x^{2}}}} defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.) Functions can be classified by the nature of formulas that define them: A quadratic function is a function that may be written f ( x ) = a x 2 + b x + c , {\displaystyle f(x)=ax^{2}+bx+c,} where a, b, c are constants. More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integer powers. For example, f ( x ) = x 3 − 3 x − 1 {\displaystyle f(x)=x^{3}-3x-1} and f ( x ) = ( x − 1 ) ( x 3 + 1 ) + 2 x 2 − 1 {\displaystyle f(x)=(x-1)(x^{3}+1)+2x^{2}-1} are polynomial functions of x {\displaystyle x} . A rational function is the same, with divisions also allowed, such as f ( x ) = x − 1 x + 1 , {\displaystyle f(x)={\frac {x-1}{x+1}},} and f ( x ) = 1 x + 1 + 3 x − 2 x − 1 . {\displaystyle f(x)={\frac {1}{x+1}}+{\frac {3}{x}}-{\frac {2}{x-1}}.} An algebraic function is the same, with nth roots and roots of polynomials also allowed. An elementary function is the same, with logarithms and exponential functions allowed. === Inverse and implicit functions === A function f : X → Y , {\displaystyle f:X\to Y,} with domain X and codomain Y, is bijective, if for every y in Y, there is one and only one element x in X such that y = f(x). In this case, the inverse function of f is the function f − 1 : Y → X {\displaystyle f^{-1}:Y\to X} that maps y ∈ Y {\displaystyle y\in Y} to the element x ∈ X {\displaystyle x\in X} such that y = f(x). 
For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive real numbers. If a function f : X → Y {\displaystyle f:X\to Y} is not bijective, it may occur that one can select subsets E ⊆ X {\displaystyle E\subseteq X} and F ⊆ Y {\displaystyle F\subseteq Y} such that the restriction of f to E is a bijection from E to F, and thus has an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval [0, π] onto the interval [−1, 1], and its inverse function, called arccosine, maps [−1, 1] onto [0, π]. The other inverse trigonometric functions are defined similarly. More generally, given a binary relation R between two sets X and Y, let E be a subset of X such that, for every x ∈ E , {\displaystyle x\in E,} there is some y ∈ Y {\displaystyle y\in Y} such that x R y. If one has a criterion allowing one to select such a y for every x ∈ E , {\displaystyle x\in E,} this defines a function f : E → Y , {\displaystyle f:E\to Y,} called an implicit function, because it is implicitly defined by the relation R. For example, the equation of the unit circle x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} defines a relation on real numbers. If −1 < x < 1 there are two possible values of y, one positive and one negative. For x = ±1, both values become equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +∞) and (−∞, 0]. In this example, the equation can be solved for y, giving y = ± 1 − x 2 , {\displaystyle y=\pm {\sqrt {1-x^{2}}},} but, in more complicated examples, this is impossible.
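Since the unit-circle equation can be solved for y, the two implicit functions can be written down directly. A Python sketch (the names `upper` and `lower` are illustrative):

```python
import math

def upper(x):
    # The branch with codomain [0, +inf): y = +sqrt(1 - x**2), for x in [-1, 1].
    return math.sqrt(1 - x * x)

def lower(x):
    # The branch with codomain (-inf, 0]: y = -sqrt(1 - x**2), for x in [-1, 1].
    return -math.sqrt(1 - x * x)
```

Both branches satisfy x² + y² = 1 on the common domain [−1, 1], and they agree only at x = ±1, where both values are 0.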
For example, the relation y 5 + y + x = 0 {\displaystyle y^{5}+y+x=0} defines y as an implicit function of x, called the Bring radical, which has R {\displaystyle \mathbb {R} } as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and nth roots. The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point. === Using differential calculus === Many functions can be defined as the antiderivative of another function. This is the case for the natural logarithm, which is the antiderivative of 1/x that is 0 for x = 1. Another common example is the error function. More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for x = 0. Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by e x = ∑ n = 0 ∞ x n n ! {\textstyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}} . However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function of a real variable is the sum of its Taylor series in some interval, this power series allows one to immediately enlarge the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows one to further enlarge the domain to include almost the whole complex plane.
This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number. === By recurrence === Functions whose domain is the nonnegative integers, known as sequences, are sometimes defined by recurrence relations. The factorial function on the nonnegative integers ( n ↦ n ! {\displaystyle n\mapsto n!} ) is a basic example, as it can be defined by the recurrence relation n ! = n ( n − 1 ) ! for n > 0 , {\displaystyle n!=n(n-1)!\quad {\text{for}}\quad n>0,} and the initial condition 0 ! = 1. {\displaystyle 0!=1.} == Representing a function == A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts. === Graphs and plots === Given a function f : X → Y , {\displaystyle f:X\to Y,} its graph is, formally, the set G = { ( x , f ( x ) ) ∣ x ∈ X } . {\displaystyle G=\{(x,f(x))\mid x\in X\}.} In the frequent case where X and Y are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element ( x , y ) ∈ G {\displaystyle (x,y)\in G} may be identified with a point having coordinates x, y in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function. The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} consisting of all points with coordinates ( x , x 2 ) {\displaystyle (x,x^{2})} for x ∈ R , {\displaystyle x\in \mathbb {R} ,} yields, when depicted in Cartesian coordinates, the well-known parabola.
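The recurrence defining the factorial above translates directly into a recursive function:

```python
def factorial(n):
    # n! = n * (n - 1)! for n > 0, with the initial condition 0! = 1.
    if n == 0:
        return 1
    return n * factorial(n - 1)
```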
If the same square function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates ( r , θ ) = ( x , x 2 ) , {\displaystyle (r,\theta )=(x,x^{2}),} the plot obtained is Fermat's spiral. === Tables === A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function f : { 1 , … , 5 } 2 → R {\displaystyle f:\{1,\ldots ,5\}^{2}\to \mathbb {R} } defined as f ( x , y ) = x y {\displaystyle f(x,y)=xy} can be represented by the familiar multiplication table. On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a portion of a table for the sine function might give values rounded to 6 decimal places. Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions. === Bar chart === A bar chart can represent a function whose domain is a finite set, the natural numbers, or the integers. In this case, an element x of the domain is represented by an interval of the x-axis, and the corresponding value of the function, f(x), is represented by a rectangle whose base is the interval corresponding to x and whose height is f(x) (possibly negative, in which case the bar extends below the x-axis). == General properties == This section describes general properties of functions that are independent of specific properties of the domain and the codomain. === Standard functions === There are a number of standard functions that occur frequently: For every set X, there is a unique function, called the empty function, or empty map, from the empty set to X.
The graph of an empty function is the empty set. The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements. Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set, thus the empty function ∅ → X {\displaystyle \varnothing \to X} is not equal to ∅ → Y {\displaystyle \varnothing \to Y} if and only if X ≠ Y {\displaystyle X\neq Y} , although their graphs are both the empty set. For every set X and every singleton set {s}, there is a unique function from X to {s}, which maps every element of X to s. This is a surjection (see below) unless X is the empty set. Given a function f : X → Y , {\displaystyle f:X\to Y,} the canonical surjection of f onto its image f ( X ) = { f ( x ) ∣ x ∈ X } {\displaystyle f(X)=\{f(x)\mid x\in X\}} is the function from X to f(X) that maps x to f(x). For every subset A of a set X, the inclusion map of A into X is the injective (see below) function that maps every element of A to itself. The identity function on a set X, often denoted by idX, is the inclusion of X into itself. === Function composition === Given two functions f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} such that the domain of g is the codomain of f, their composition is the function g ∘ f : X → Z {\displaystyle g\circ f:X\rightarrow Z} defined by ( g ∘ f ) ( x ) = g ( f ( x ) ) . {\displaystyle (g\circ f)(x)=g(f(x)).} That is, the value of g ∘ f {\displaystyle g\circ f} is obtained by first applying f to x to obtain y = f(x) and then applying g to the result y to obtain g(y) = g(f(x)). In this notation, the function that is applied first is always written on the right. The composition g ∘ f {\displaystyle g\circ f} is an operation on functions that is defined only if the codomain of the first function is the domain of the second one. 
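Composition as just defined is a higher-order operation: it consumes two functions and produces a third. A minimal Python sketch:

```python
def compose(g, f):
    # (g ∘ f)(x) = g(f(x)); the function written on the right is applied first.
    return lambda x: g(f(x))

f = lambda x: x ** 2   # f : x ↦ x²
g = lambda x: x + 1    # g : x ↦ x + 1

g_after_f = compose(g, f)   # x ↦ x² + 1
f_after_g = compose(f, g)   # x ↦ (x + 1)²
```

Evaluating both composites at the same argument shows that composition need not commute.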
Even when both g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} satisfy these conditions, the composition is not necessarily commutative, that is, the functions g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} need not be equal, but may deliver different values for the same argument. For example, let f(x) = x2 and g(x) = x + 1, then g ( f ( x ) ) = x 2 + 1 {\displaystyle g(f(x))=x^{2}+1} and f ( g ( x ) ) = ( x + 1 ) 2 {\displaystyle f(g(x))=(x+1)^{2}} agree just for x = 0. {\displaystyle x=0.} The function composition is associative in the sense that, if one of ( h ∘ g ) ∘ f {\displaystyle (h\circ g)\circ f} and h ∘ ( g ∘ f ) {\displaystyle h\circ (g\circ f)} is defined, then the other is also defined, and they are equal, that is, ( h ∘ g ) ∘ f = h ∘ ( g ∘ f ) . {\displaystyle (h\circ g)\circ f=h\circ (g\circ f).} Therefore, it is usual to just write h ∘ g ∘ f . {\displaystyle h\circ g\circ f.} The identity functions id X {\displaystyle \operatorname {id} _{X}} and id Y {\displaystyle \operatorname {id} _{Y}} are respectively a right identity and a left identity for functions from X to Y. That is, if f is a function with domain X, and codomain Y, one has f ∘ id X = id Y ∘ f = f . {\displaystyle f\circ \operatorname {id} _{X}=\operatorname {id} _{Y}\circ f=f.} === Image and preimage === Let f : X → Y . {\displaystyle f:X\to Y.} The image under f of an element x of the domain X is f(x). If A is any subset of X, then the image of A under f, denoted f(A), is the subset of the codomain Y consisting of all images of elements of A, that is, f ( A ) = { f ( x ) ∣ x ∈ A } . {\displaystyle f(A)=\{f(x)\mid x\in A\}.} The image of f is the image of the whole domain, that is, f(X). It is also called the range of f, although the term range may also refer to the codomain. On the other hand, the inverse image or preimage under f of an element y of the codomain Y is the set of all elements of the domain X whose images under f equal y. 
In symbols, the preimage of y is denoted by f − 1 ( y ) {\displaystyle f^{-1}(y)} and is given by the equation f − 1 ( y ) = { x ∈ X ∣ f ( x ) = y } . {\displaystyle f^{-1}(y)=\{x\in X\mid f(x)=y\}.} Likewise, the preimage of a subset B of the codomain Y is the set of the preimages of the elements of B, that is, it is the subset of the domain X consisting of all elements of X whose images belong to B. It is denoted by f − 1 ( B ) {\displaystyle f^{-1}(B)} and is given by the equation f − 1 ( B ) = { x ∈ X ∣ f ( x ) ∈ B } . {\displaystyle f^{-1}(B)=\{x\in X\mid f(x)\in B\}.} For example, the preimage of { 4 , 9 } {\displaystyle \{4,9\}} under the square function is the set { − 3 , − 2 , 2 , 3 } {\displaystyle \{-3,-2,2,3\}} . By definition of a function, the image of an element x of the domain is always a single element of the codomain. However, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of an element y of the codomain may be empty or contain any number of elements. For example, if f is the function from the integers to themselves that maps every integer to 0, then f − 1 ( 0 ) = Z {\displaystyle f^{-1}(0)=\mathbb {Z} } . If f : X → Y {\displaystyle f:X\to Y} is a function, A and B are subsets of X, and C and D are subsets of Y, then one has the following properties: A ⊆ B ⟹ f ( A ) ⊆ f ( B ) {\displaystyle A\subseteq B\Longrightarrow f(A)\subseteq f(B)} C ⊆ D ⟹ f − 1 ( C ) ⊆ f − 1 ( D ) {\displaystyle C\subseteq D\Longrightarrow f^{-1}(C)\subseteq f^{-1}(D)} A ⊆ f − 1 ( f ( A ) ) {\displaystyle A\subseteq f^{-1}(f(A))} C ⊇ f ( f − 1 ( C ) ) {\displaystyle C\supseteq f(f^{-1}(C))} f ( f − 1 ( f ( A ) ) ) = f ( A ) {\displaystyle f(f^{-1}(f(A)))=f(A)} f − 1 ( f ( f − 1 ( C ) ) ) = f − 1 ( C ) {\displaystyle f^{-1}(f(f^{-1}(C)))=f^{-1}(C)} The preimage by f of an element y of the codomain is sometimes called, in some contexts, the fiber of y under f. If a function f has an inverse (see below), this inverse is denoted f − 1 . 
{\displaystyle f^{-1}.} In this case f − 1 ( C ) {\displaystyle f^{-1}(C)} may denote either the image by f − 1 {\displaystyle f^{-1}} or the preimage by f of C. This is not a problem, as these sets are equal. The notation f ( A ) {\displaystyle f(A)} and f − 1 ( C ) {\displaystyle f^{-1}(C)} may be ambiguous in the case of sets that contain some subsets as elements, such as { x , { x } } . {\displaystyle \{x,\{x\}\}.} In this case, some care may be needed, for example, by using square brackets f [ A ] , f − 1 [ C ] {\displaystyle f[A],f^{-1}[C]} for images and preimages of subsets and ordinary parentheses for images and preimages of elements. === Injective, surjective and bijective functions === Let f : X → Y {\displaystyle f:X\to Y} be a function. The function f is injective (or one-to-one, or is an injection) if f(a) ≠ f(b) for every two different elements a and b of X. Equivalently, f is injective if and only if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains at most one element. An empty function is always injective. If X is not the empty set, then f is injective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} that is, if f has a left inverse. Proof: If f is injective, for defining g, one chooses an element x 0 {\displaystyle x_{0}} in X (which exists as X is supposed to be nonempty), and one defines g by g ( y ) = x {\displaystyle g(y)=x} if y = f ( x ) {\displaystyle y=f(x)} and g ( y ) = x 0 {\displaystyle g(y)=x_{0}} if y ∉ f ( X ) . {\displaystyle y\not \in f(X).} Conversely, if g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} and y = f ( x ) , {\displaystyle y=f(x),} then x = g ( y ) , {\displaystyle x=g(y),} and thus f − 1 ( y ) = { x } . 
{\displaystyle f^{-1}(y)=\{x\}.} The function f is surjective (or onto, or is a surjection) if its range f ( X ) {\displaystyle f(X)} equals its codomain Y {\displaystyle Y} , that is, if, for each element y {\displaystyle y} of the codomain, there exists some element x {\displaystyle x} of the domain such that f ( x ) = y {\displaystyle f(x)=y} (in other words, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of every y ∈ Y {\displaystyle y\in Y} is nonempty). If, as usual in modern mathematics, the axiom of choice is assumed, then f is surjective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that f ∘ g = id Y , {\displaystyle f\circ g=\operatorname {id} _{Y},} that is, if f has a right inverse. The axiom of choice is needed, because, if f is surjective, one defines g by g ( y ) = x , {\displaystyle g(y)=x,} where x {\displaystyle x} is an arbitrarily chosen element of f − 1 ( y ) . {\displaystyle f^{-1}(y).} The function f is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective. That is, f is bijective if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains exactly one element. The function f is bijective if and only if it admits an inverse function, that is, a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X {\displaystyle g\circ f=\operatorname {id} _{X}} and f ∘ g = id Y . {\displaystyle f\circ g=\operatorname {id} _{Y}.} (Contrary to the case of surjections, this does not require the axiom of choice; the proof is straightforward.) Every function f : X → Y {\displaystyle f:X\to Y} may be factorized as the composition i ∘ s {\displaystyle i\circ s} of a surjection followed by an injection, where s is the canonical surjection of X onto f(X) and i is the canonical injection of f(X) into Y. This is the canonical factorization of f.
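For functions between finite sets, the preimage characterizations of injectivity, surjectivity and bijectivity given above can be checked exhaustively. A Python sketch (the helper names are illustrative):

```python
def preimage(f, X, y):
    # The preimage of y: {x in X : f(x) = y}, computed over a finite domain X.
    return {x for x in X if f(x) == y}

def is_injective(f, X, Y):
    # Every element of the codomain has at most one preimage element.
    return all(len(preimage(f, X, y)) <= 1 for y in Y)

def is_surjective(f, X, Y):
    # Every element of the codomain has at least one preimage element.
    return all(len(preimage(f, X, y)) >= 1 for y in Y)

def is_bijective(f, X, Y):
    # Exactly one preimage element for every y in Y.
    return is_injective(f, X, Y) and is_surjective(f, X, Y)
```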
"One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "f maps X onto Y" differs from "f maps X into B", in that the former implies that f is surjective, while the latter makes no assertion about the nature of f. In a complicated reasoning, the one letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which have also the advantage of being more symmetrical. === Restriction and extension === If f : X → Y {\displaystyle f:X\to Y} is a function and S is a subset of X, then the restriction of f {\displaystyle f} to S, denoted f | S {\displaystyle f|_{S}} , is the function from S to Y defined by f | S ( x ) = f ( x ) {\displaystyle f|_{S}(x)=f(x)} for all x in S. Restrictions can be used to define partial inverse functions: if there is a subset S of the domain of a function f {\displaystyle f} such that f | S {\displaystyle f|_{S}} is injective, then the canonical surjection of f | S {\displaystyle f|_{S}} onto its image f | S ( S ) = f ( S ) {\displaystyle f|_{S}(S)=f(S)} is a bijection, and thus has an inverse function from f ( S ) {\displaystyle f(S)} to S. One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval [0, π]. The image of this restriction is the interval [−1, 1], and thus the restriction has an inverse function from [−1, 1] to [0, π], which is called arccosine and is denoted arccos. Function restriction may also be used for "gluing" functions together. 
Let X = ⋃ i ∈ I U i {\textstyle X=\bigcup _{i\in I}U_{i}} be the decomposition of X as a union of subsets, and suppose that a function f i : U i → Y {\displaystyle f_{i}:U_{i}\to Y} is defined on each U i {\displaystyle U_{i}} such that for each pair i , j {\displaystyle i,j} of indices, the restrictions of f i {\displaystyle f_{i}} and f j {\displaystyle f_{j}} to U i ∩ U j {\displaystyle U_{i}\cap U_{j}} are equal. Then this defines a unique function f : X → Y {\displaystyle f:X\to Y} such that f | U i = f i {\displaystyle f|_{U_{i}}=f_{i}} for all i. This is the way that functions on manifolds are defined. An extension of a function f is a function g such that f is a restriction of g. A typical use of this concept is the process of analytic continuation, which allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane. Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function h ( x ) = a x + b c x + d {\displaystyle h(x)={\frac {ax+b}{cx+d}}} such that ad − bc ≠ 0. If c ≠ 0, its domain is the set of all real numbers different from − d / c , {\displaystyle -d/c,} and its image is the set of all real numbers different from a / c . {\displaystyle a/c.} (When c = 0, h is an affine function, defined on the whole real line.) If one extends the real line to the projectively extended real line by including ∞, one may extend h to a bijection from the extended real line to itself by setting h ( ∞ ) = a / c {\displaystyle h(\infty )=a/c} and h ( − d / c ) = ∞ {\displaystyle h(-d/c)=\infty } . == In calculus == The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable.
In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined. Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. The more general definition of a function is usually introduced to second- or third-year college students in STEM majors; in their senior year they encounter calculus in a larger, more rigorous setting, in courses such as real analysis and complex analysis. === Real function === A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions. The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval. Functions enjoy pointwise operations, that is, if f and g are functions, their sum, difference and product are functions defined by ( f + g ) ( x ) = f ( x ) + g ( x ) ( f − g ) ( x ) = f ( x ) − g ( x ) ( f ⋅ g ) ( x ) = f ( x ) ⋅ g ( x ) . {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(f-g)(x)&=f(x)-g(x)\\(f\cdot g)(x)&=f(x)\cdot g(x)\\\end{aligned}}.} The domains of the resulting functions are the intersection of the domains of f and g. The quotient of two functions is defined similarly by f g ( x ) = f ( x ) g ( x ) , {\displaystyle {\frac {f}{g}}(x)={\frac {f(x)}{g(x)}},} but the domain of the resulting function is obtained by removing the zeros of g from the intersection of the domains of f and g.
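The pointwise operations above can be sketched with a small higher-order helper (the names are illustrative):

```python
def pointwise(op, f, g):
    # Build the function x ↦ op(f(x), g(x)), e.g. (f + g)(x) = f(x) + g(x).
    return lambda x: op(f(x), g(x))

f = lambda x: x * x    # f(x) = x²
g = lambda x: x + 1    # g(x) = x + 1

f_plus_g = pointwise(lambda a, b: a + b, f, g)   # x ↦ x² + x + 1
f_over_g = pointwise(lambda a, b: a / b, f, g)   # undefined where g(x) = 0
```

Evaluating `f_over_g` at x = −1, a zero of g, raises a division error, matching the removal of the zeros of g from the domain of the quotient.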
The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function x ↦ 1 x , {\displaystyle x\mapsto {\frac {1}{x}},} whose graph is a hyperbola, and whose domain is the whole real line except for 0. The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for x = 1, is a differentiable function called the natural logarithm. A real function f is monotonic in an interval if the sign of f ( x ) − f ( y ) x − y {\displaystyle {\frac {f(x)-f(y)}{x-y}}} does not depend on the choice of x and y in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function f is monotonic in an interval I, it has an inverse function, which is a real function with domain f(I) and image I. This is how inverse trigonometric functions are defined, by restricting the trigonometric functions to intervals on which they are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function. Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations.
For example, the sine and the cosine functions are the solutions of the linear differential equation y ″ + y = 0 {\displaystyle y''+y=0} such that sin ⁡ 0 = 0 , cos ⁡ 0 = 1 , sin ′ ⁡ ( 0 ) = 1 , cos ′ ⁡ ( 0 ) = 0. {\displaystyle \sin 0=0,\quad \cos 0=1,\quad \sin '(0)=1,\quad \cos '(0)=0.} === Vector-valued function === When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function. Some vector-valued functions are defined on a subset of R n {\displaystyle \mathbb {R} ^{n}} or other spaces that share geometric or topological properties of R n {\displaystyle \mathbb {R} ^{n}} , such as manifolds. These vector-valued functions are given the name vector fields. == Function space == In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is the basis of the theory of distributions. Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result from the study of function spaces.
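The characterization of sine and cosine as solutions of y″ + y = 0 with the stated initial conditions can be checked numerically by integrating the equivalent first-order system. A crude explicit-Euler sketch (the step count is an arbitrary choice; a real solver would use a higher-order method):

```python
def sin_cos(t, n=100_000):
    # Integrate s' = c, c' = -s (equivalent to y'' + y = 0) from 0 to t,
    # with the initial conditions s(0) = 0 and c(0) = 1.
    s, c = 0.0, 1.0
    h = t / n
    for _ in range(n):
        s, c = s + h * c, c - h * s   # one Euler step, updated simultaneously
    return s, c
```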
== Multi-valued functions == Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend the function by continuity to a much larger domain. Frequently, for a starting point {\displaystyle x_{0},} there are several possible starting values for the function. For example, in defining the square root as the inverse function of the square function, for any positive real number {\displaystyle x_{0},} there are two choices for the value of the square root, one of which is positive and denoted {\displaystyle {\sqrt {x_{0}}},} and another which is negative and denoted {\displaystyle -{\sqrt {x_{0}}}.} These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive x, one value for 0 and no value for negative x. In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, let us consider the implicit function that maps y to a root x of {\displaystyle x^{3}-3x-y=0} . For y = 0 one may choose either {\displaystyle 0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}} for x. By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1].
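The three branches of the implicit function defined by x³ − 3x − y = 0 can be computed explicitly: for −2 ≤ y ≤ 2 the substitution x = 2 cos t reduces the equation to 2 cos 3t = y. A short Python sketch (illustrative, not from the source; the name branches is hypothetical):

```python
import math

# For -2 <= y <= 2, substituting x = 2*cos(t) turns x**3 - 3*x = y into
# 2*cos(3*t) = y, so the three real roots are:
def branches(y):
    t = math.acos(y / 2.0) / 3.0
    return [2.0 * math.cos(t + 2.0 * math.pi * k / 3.0) for k in range(3)]

roots = branches(0.0)
for x in roots:
    assert abs(x**3 - 3.0 * x) < 1e-9          # each is a root for y = 0
# Up to rounding, they are sqrt(3), -sqrt(3) and 0: the three choices in the text.
assert sorted(round(x, 9) for x in roots) == sorted(
    round(v, 9) for v in (math.sqrt(3.0), -math.sqrt(3.0), 0.0))
```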
As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y ≤ −2 and y ≥ 2. The usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function, along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy. == In the foundations of mathematics == The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be sets. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions. For example, the singleton set may be considered as a function {\displaystyle x\mapsto \{x\}.} Its domain would include all sets, and therefore would not be a set.
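The branch-cut behaviour of the principal square root described above can be observed directly with Python's cmath module (an illustrative sketch, not part of the source):

```python
import cmath

# The principal square root has a branch cut along the negative real axis.
# Approaching -1 from the upper half-plane gives i; from the lower half-plane, -i.
above = cmath.sqrt(complex(-1.0, 1e-12))
below = cmath.sqrt(complex(-1.0, -1e-12))

assert abs(above - 1j) < 1e-6   # limit through positive imaginary parts
assert abs(below + 1j) < 1e-6   # limit through negative imaginary parts

# The principal value is discontinuous across the cut: the two limits differ by 2i.
assert abs(above - below - 2j) < 1e-6
```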
In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions. These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory is an extension of set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: If X is a set and F is a function, then F[X] is a set. In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus. == In computer science == In computer programming, a function is, in general, a subroutine which implements the abstract concept of function. That is, it is a program unit that produces an output for each input. Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions, meaning that they have no side effects and depend only on their arguments: they are referentially transparent. For example, if_then_else is a function that takes three (nullary) functions as arguments, and, depending on the value of the first argument (true or false), returns the value of either the second or the third argument. An important advantage of functional programming is that it makes program proofs easier, as it is based on a well-founded theory, the lambda calculus (see below). However, side effects are generally necessary for practical programs, ones that perform input/output.
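The if_then_else example above can be sketched in Python, with the three nullary arguments passed as thunks so that only the chosen branch is ever evaluated (illustrative, not from the source; trace is a hypothetical helper used to record which thunks actually ran):

```python
# if_then_else taking three nullary functions (thunks): only the selected branch
# is ever evaluated, so the combinator has no side effects of its own.
def if_then_else(cond, then_branch, else_branch):
    return then_branch() if cond() else else_branch()

calls = []
def trace(tag, value):
    def thunk():
        calls.append(tag)
        return value
    return thunk

result = if_then_else(trace("cond", True), trace("then", 1), trace("else", 2))
assert result == 1
assert calls == ["cond", "then"]   # the "else" thunk was never run
```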
There is a class of purely functional languages, such as Haskell, which encapsulate the possibility of side effects in the type of a function. Others, such as the ML family, simply allow side effects. In many programming languages, every subroutine is called a function, even when there is no output but only side effects, and when the functionality consists simply of modifying some data in the computer memory. Outside the context of programming languages, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. To give a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the oldest being general recursive functions, the lambda calculus, and Turing machines. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function also defines the same functions. General recursive functions are partial functions from integers to integers that can be defined from constant functions, successor, and projection functions via the operators composition, primitive recursion, and minimization. Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties: a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, etc.), every sequence of symbols may be coded as a sequence of bits, a bit sequence can be interpreted as the binary representation of an integer.
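The operators named above can be illustrated concretely. This Python sketch (illustrative, not from the source; primitive_recursion is a hypothetical helper) builds addition and multiplication from the successor function by primitive recursion, here realized with a bounded loop:

```python
# Addition and multiplication on the natural numbers, built only from the
# successor function and primitive recursion, in the style of general
# recursive functions.
def successor(n):
    return n + 1

def primitive_recursion(base, step):
    """Return f with f(x, 0) = base(x) and f(x, n+1) = step(x, n, f(x, n))."""
    def f(x, n):
        acc = base(x)
        for k in range(n):
            acc = step(x, k, acc)
        return acc
    return f

add = primitive_recursion(lambda x: x, lambda x, k, acc: successor(acc))
mul = primitive_recursion(lambda x: 0, lambda x, k, acc: add(acc, x))

assert add(3, 4) == 7
assert mul(3, 4) == 12
```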
Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated by interpreting the axioms of the theory (the α-equivalence, the β-reduction, and the η-conversion) as rewriting rules, which can be used for computation. In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus. == External links == The Wolfram Functions – website giving formulae and visualizations of many mathematical functions NIST Digital Library of Mathematical Functions
Wikipedia/Function_(mathematics)
In mathematics, an algebraic structure or algebraic system consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A (typically binary operations such as addition and multiplication), and a finite set of identities (known as axioms) that these operations must satisfy. An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars), and elements of the vector space (called vectors). Abstract algebra is the name that is commonly given to the study of algebraic structures. The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that also includes other mathematical structures and functions between structures of the same type (homomorphisms). In universal algebra, an algebraic structure is called an algebra; this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring. The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and homomorphisms between them form a concrete category. == Introduction == Addition and multiplication are prototypical examples of operations that combine two elements of a set to produce a third element of the same set. These operations obey several algebraic laws. For example, a + (b + c) = (a + b) + c and a(bc) = (ab)c are associative laws, and a + b = b + a and ab = ba are commutative laws.
Many systems studied by mathematicians have operations that obey some, but not necessarily all, of the laws of ordinary arithmetic. For example, the possible moves of an object in three-dimensional space can be combined by performing a first move of the object, and then a second move from its new position. Such moves, formally called rigid motions, obey the associative law, but fail to satisfy the commutative law. Sets with one or more operations that obey specific laws are called algebraic structures. When a new problem involves the same laws as such an algebraic structure, all the results that have been proved using only the laws of the structure can be directly applied to the new problem. In full generality, algebraic structures may involve an arbitrary collection of operations, including operations that combine more than two elements (higher arity operations) and operations that take only one argument (unary operations) or even zero arguments (nullary operations). The examples listed below are by no means a complete list, but include the most common structures taught in undergraduate courses. == Common axioms == === Equational axioms === An axiom of an algebraic structure often has the form of an identity, that is, an equation such that the two sides of the equals sign are expressions that involve operations of the algebraic structure and variables. If the variables in the identity are replaced by arbitrary elements of the algebraic structure, the equality must remain true. Here are some common examples. Commutativity An operation ∗ {\displaystyle *} is commutative if x ∗ y = y ∗ x {\displaystyle x*y=y*x} for every x and y in the algebraic structure. Associativity An operation ∗ {\displaystyle *} is associative if ( x ∗ y ) ∗ z = x ∗ ( y ∗ z ) {\displaystyle (x*y)*z=x*(y*z)} for every x, y and z in the algebraic structure. 
Left distributivity An operation {\displaystyle *} is left-distributive with respect to another operation {\displaystyle +} if {\displaystyle x*(y+z)=(x*y)+(x*z)} for every x, y and z in the algebraic structure (the second operation is denoted here as {\displaystyle +} because, in many common examples, it is addition). Right distributivity An operation {\displaystyle *} is right-distributive with respect to another operation {\displaystyle +} if {\displaystyle (y+z)*x=(y*x)+(z*x)} for every x, y and z in the algebraic structure. Distributivity An operation {\displaystyle *} is distributive with respect to another operation {\displaystyle +} if it is both left-distributive and right-distributive. If the operation {\displaystyle *} is commutative, left and right distributivity are both equivalent to distributivity. === Existential axioms === Some common axioms contain an existential clause. In general, such a clause can be avoided by introducing further operations, and replacing the existential clause by an identity involving the new operation. More precisely, let us consider an axiom of the form "for all X there is y such that {\displaystyle f(X,y)=g(X,y)} ", where X is a k-tuple of variables. Choosing a specific value of y for each value of X defines a function {\displaystyle \varphi :X\mapsto y,} which can be viewed as an operation of arity k, and the axiom becomes the identity {\displaystyle f(X,\varphi (X))=g(X,\varphi (X)).} The introduction of such an auxiliary operation slightly complicates the statement of an axiom, but it has some advantages. Given a specific algebraic structure, the proof that an existential axiom is satisfied generally consists of the definition of the auxiliary function, completed with straightforward verifications.
Also, when computing in an algebraic structure, one generally uses the auxiliary operations explicitly. For example, in the case of numbers, the additive inverse is provided by the unary minus operation {\displaystyle x\mapsto -x.} Also, in universal algebra, a variety is a class of algebraic structures that share the same operations, and the same axioms, with the condition that all axioms are identities. What precedes shows that existential axioms of the above form are accepted in the definition of a variety. Here are some of the most common existential axioms. Identity element A binary operation {\displaystyle *} has an identity element if there is an element e such that {\displaystyle x*e=x\quad {\text{and}}\quad e*x=x} for all x in the structure. Here, the auxiliary operation is the operation of arity zero that has e as its result. Inverse element Given a binary operation {\displaystyle *} that has an identity element e, an element x is invertible if it has an inverse element, that is, if there exists an element {\displaystyle \operatorname {inv} (x)} such that {\displaystyle \operatorname {inv} (x)*x=e\quad {\text{and}}\quad x*\operatorname {inv} (x)=e.} For example, a group is an algebraic structure with a binary operation that is associative, has an identity element, and for which all elements are invertible. === Non-equational axioms === The axioms of an algebraic structure can be any first-order formula, that is a formula involving logical connectives (such as "and", "or" and "not"), and logical quantifiers ( {\displaystyle \forall ,\exists } ) that apply to elements (not to subsets) of the structure. A typical such axiom is inversion in fields. This axiom cannot be reduced to axioms of preceding types. (It follows that fields do not form a variety in the sense of universal algebra.)
It can be stated: "Every nonzero element of a field is invertible;" or, equivalently: the structure has a unary operation inv such that {\displaystyle \forall x,\quad x=0\quad {\text{or}}\quad x\cdot \operatorname {inv} (x)=1.} The operation inv can be viewed either as a partial operation that is not defined for x = 0; or as an ordinary function whose value at 0 is arbitrary and must not be used. == Common algebraic structures == === One set with operations === Simple structures: no binary operation: Set: a degenerate algebraic structure S having no operations. Group-like structures: one binary operation. The binary operation can be indicated by any symbol, or with no symbol (juxtaposition) as is done for ordinary multiplication of real numbers. Semigroup: a set with an associative binary operation. Monoid: a semigroup with an identity element. Group: a monoid with a unary operation (inverse), giving rise to inverse elements. Abelian group: a group whose binary operation is commutative. Ring-like structures or Ringoids: two binary operations, often called addition and multiplication, with multiplication distributing over addition. Semiring: a ringoid that is a commutative monoid under addition and a monoid under multiplication, with multiplication distributing over addition. Ring: a semiring whose additive monoid is an abelian group. Division ring: a nontrivial ring in which division by nonzero elements is defined. Commutative ring: a ring in which the multiplication operation is commutative. Field: a commutative division ring (i.e. a commutative ring which contains a multiplicative inverse for every nonzero element). Lattice structures: two or more binary operations, including operations called meet and join, connected by the absorption law. Complete lattice: a lattice in which arbitrary meets and joins exist. Bounded lattice: a lattice with a greatest element and a least element. Distributive lattice: a lattice in which each of meet and join distributes over the other. A power set under union and intersection forms a distributive lattice. Boolean algebra: a complemented distributive lattice. Either of meet or join can be defined in terms of the other and complementation.
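As a concrete check of the axioms discussed above, the following Python sketch (illustrative, not from the source) verifies the group axioms for (Z₅, + mod 5), with the identity and inverse made explicit as auxiliary operations, and realizes the field-inversion axiom in GF(7) via Fermat's little theorem; the value of inv at 0 is arbitrary and unused, as the text requires.

```python
# Group axioms for (Z_5, + mod 5), with the auxiliary operations spelled out:
# e is the nullary identity operation, inv the unary inverse operation.
n = 5
op = lambda x, y: (x + y) % n
e = 0
inv = lambda x: (-x) % n

for x in range(n):
    assert op(x, e) == x and op(e, x) == x              # identity element
    assert op(inv(x), x) == e and op(x, inv(x)) == e    # inverse element
    for y in range(n):
        assert op(x, y) == op(y, x)                     # commutative (abelian)
        for z in range(n):
            assert op(op(x, y), z) == op(x, op(y, z))   # associativity

# Field inversion in GF(7) as an ordinary unary operation: by Fermat's little
# theorem, x**(p-2) mod p inverts every nonzero x; the value at 0 is arbitrary.
p = 7
field_inv = lambda x: pow(x, p - 2, p)
for x in range(p):
    # The axiom: for all x, x = 0 or x * inv(x) = 1.
    assert x == 0 or (x * field_inv(x)) % p == 1
```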
=== Two sets with operations === Module: an abelian group M and a ring R acting as operators on M. The members of R are sometimes called scalars, and the binary operation of scalar multiplication is a function R × M → M, which satisfies several axioms. Counting the ring operations, these systems have at least three operations. Vector space: a module where the ring R is a field or, in some contexts, a division ring. Algebra over a field: a module over a field, which also carries a multiplication operation that is compatible with the module structure. This includes distributivity over addition and linearity with respect to scalar multiplication. Inner product space: a field F and vector space V with a definite bilinear form V × V → F. == Hybrid structures == Algebraic structures can also coexist with added structure of non-algebraic nature, such as a partial order or a topology. The added structure must be compatible, in some sense, with the algebraic structure. Topological group: a group with a topology compatible with the group operation. Lie group: a topological group with a compatible smooth manifold structure. Ordered groups, ordered rings and ordered fields: each type of structure with a compatible partial order. Archimedean group: a linearly ordered group for which the Archimedean property holds. Topological vector space: a vector space equipped with a compatible topology. Normed vector space: a vector space with a compatible norm. If such a space is complete (as a metric space) then it is called a Banach space. Hilbert space: an inner product space over the real or complex numbers whose inner product gives rise to a Banach space structure. Vertex operator algebra Von Neumann algebra: a *-algebra of operators on a Hilbert space equipped with the weak operator topology. == Universal algebra == Algebraic structures are defined through different configurations of axioms. Universal algebra abstractly studies such objects.
One major dichotomy is between structures that are axiomatized entirely by identities and structures that are not. If all axioms defining a class of algebras are identities, then this class is a variety (not to be confused with algebraic varieties of algebraic geometry). Identities are equations formulated using only the operations the structure allows, and variables that are tacitly universally quantified over the relevant universe. Identities contain no connectives, existentially quantified variables, or relations of any kind other than the allowed operations. The study of varieties is an important part of universal algebra. An algebraic structure in a variety may be understood as the quotient algebra of a term algebra (also called "absolutely free algebra") divided by the equivalence relations generated by a set of identities. So, a collection of functions with given signatures generates a free algebra, the term algebra T. Given a set of equational identities (the axioms), one may consider their symmetric, transitive closure E. The quotient algebra T/E is then the algebraic structure or variety. Thus, for example, groups have a signature containing two operators (the multiplication operator m, taking two arguments, and the inverse operator i, taking one argument) and the identity element e, a constant, which may be considered an operator that takes zero arguments. Given a (countable) set of variables x, y, z, etc. the term algebra is the collection of all possible terms involving m, i, e and the variables; so for example, m(i(x), m(x, m(y,e))) would be an element of the term algebra. One of the axioms defining a group is the identity m(x, i(x)) = e; another is m(x,e) = x. The axioms can be represented as trees. These equations induce equivalence classes on the free algebra; the quotient algebra then has the algebraic structure of a group.
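Terms over the group signature can be represented as trees and evaluated in a concrete group. In this Python sketch (illustrative, not from the source; the nested-tuple encoding is hypothetical), the identities m(x, i(x)) = e and m(x, e) = x from the text are checked in (Z₆, + mod 6):

```python
# Terms of the group signature (m, i, e) represented as nested tuples, and
# evaluated in a concrete group (Z_6, + mod 6) to test candidate identities.
def evaluate(term, env):
    if isinstance(term, str):               # a variable
        return env[term]
    op, *args = term
    vals = [evaluate(t, env) for t in args]
    if op == "m":
        return (vals[0] + vals[1]) % 6
    if op == "i":
        return (-vals[0]) % 6
    return 0                                # op == "e", the constant

# The group axiom m(x, i(x)) = e holds for every assignment of the variable x.
for x in range(6):
    assert evaluate(("m", "x", ("i", "x")), {"x": x}) == evaluate(("e",), {})
    assert evaluate(("m", "x", ("e",)), {"x": x}) == x   # m(x, e) = x
```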
Some structures do not form varieties, because either: It is necessary that 0 ≠ 1, 0 being the additive identity element and 1 being a multiplicative identity element, but this is a nonidentity; Structures such as fields have some axioms that hold only for nonzero members of S. For an algebraic structure to be a variety, its operations must be defined for all members of S; there can be no partial operations. Structures whose axioms unavoidably include nonidentities are among the most important ones in mathematics, e.g., fields and division rings. Structures with nonidentities present challenges varieties do not. For example, the direct product of two fields is not a field, because ( 1 , 0 ) ⋅ ( 0 , 1 ) = ( 0 , 0 ) {\displaystyle (1,0)\cdot (0,1)=(0,0)} , but fields do not have zero divisors. == Category theory == Category theory is another tool for studying algebraic structures (see, for example, Mac Lane 1998). A category is a collection of objects with associated morphisms. Every algebraic structure has its own notion of homomorphism, namely any function compatible with the operation(s) defining the structure. In this way, every algebraic structure gives rise to a category. For example, the category of groups has all groups as objects and all group homomorphisms as morphisms. This concrete category may be seen as a category of sets with added category-theoretic structure. Likewise, the category of topological groups (whose morphisms are the continuous group homomorphisms) is a category of topological spaces with extra structure. A forgetful functor between categories of algebraic structures "forgets" a part of a structure. There are various concepts in category theory that try to capture the algebraic character of a context, for instance algebraic category essentially algebraic category presentable category locally presentable category monadic functors and categories universal property. 
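The zero-divisor computation above, (1, 0) · (0, 1) = (0, 0) in a direct product of two fields, is immediate to check; a minimal Python sketch (illustrative, not from the source):

```python
# The direct product of two fields is not a field: componentwise multiplication
# on pairs produces zero divisors, e.g. (1, 0) * (0, 1) = (0, 0).
def pair_mul(u, v):
    return (u[0] * v[0], u[1] * v[1])

a, b = (1, 0), (0, 1)
assert pair_mul(a, b) == (0, 0)    # both factors nonzero, product zero
assert a != (0, 0) and b != (0, 0)
```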
== Different meanings of "structure" == In a slight abuse of notation, the word "structure" can also refer to just the operations on a structure, instead of the underlying set itself. For example, the sentence, "We have defined a ring structure on the set A {\displaystyle A} ", means that we have defined ring operations on the set A {\displaystyle A} . For another example, the group ( Z , + ) {\displaystyle (\mathbb {Z} ,+)} can be seen as a set Z {\displaystyle \mathbb {Z} } that is equipped with an algebraic structure, namely the operation + {\displaystyle +} . == See also == Free object Mathematical structure Signature (logic) Structure (mathematical logic) == Notes == == References == Mac Lane, Saunders; Birkhoff, Garrett (1999), Algebra (2nd ed.), AMS Chelsea, ISBN 978-0-8218-1646-2 Michel, Anthony N.; Herget, Charles J. (1993), Applied Algebra and Functional Analysis, New York: Dover Publications, ISBN 978-0-486-67598-5 Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag, ISBN 978-3-540-90578-3 Category theory Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98403-2 Taylor, Paul (1999), Practical foundations of mathematics, Cambridge University Press, ISBN 978-0-521-63107-5 == External links == Jipsen's algebra structures. Includes many structures not mentioned here. Mathworld page on abstract algebra. Stanford Encyclopedia of Philosophy: Algebra by Vaughan Pratt.
Wikipedia/Algebraic_structure
In mathematics, a quintic function is a function of the form g ( x ) = a x 5 + b x 4 + c x 3 + d x 2 + e x + f , {\displaystyle g(x)=ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f,\,} where a, b, c, d, e and f are members of a field, typically the rational numbers, the real numbers or the complex numbers, and a is nonzero. In other words, a quintic function is defined by a polynomial of degree five. Because they have an odd degree, normal quintic functions appear similar to normal cubic functions when graphed, except they may possess one additional local maximum and one additional local minimum. The derivative of a quintic function is a quartic function. Setting g(x) = 0 and assuming a ≠ 0 produces a quintic equation of the form: a x 5 + b x 4 + c x 3 + d x 2 + e x + f = 0. {\displaystyle ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f=0.\,} Solving quintic equations in terms of radicals (nth roots) was a major problem in algebra from the 16th century, when cubic and quartic equations were solved, until the first half of the 19th century, when the impossibility of such a general solution was proved with the Abel–Ruffini theorem. == Finding roots of a quintic equation == Finding the roots (zeros) of a given polynomial has been a prominent mathematical problem. Solving linear, quadratic, cubic and quartic equations in terms of radicals and elementary arithmetic operations on the coefficients can always be done, no matter whether the roots are rational or irrational, real or complex; there are formulas that yield the required solutions. However, there is no algebraic expression (that is, in terms of radicals) for the solutions of general quintic equations over the rationals; this statement is known as the Abel–Ruffini theorem, first asserted in 1799 and completely proven in 1824. This result also holds for equations of higher degree. An example of a quintic whose roots cannot be expressed in terms of radicals is x5 − x + 1 = 0. 
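Although x⁵ − x + 1 = 0 has no solution in radicals, its single real root is easy to approximate numerically; a minimal Newton's-method sketch in Python (illustrative, not from the source):

```python
# x**5 - x + 1 has no roots expressible in radicals, but Newton's method
# approximates its single real root quickly.
f = lambda x: x**5 - x + 1.0
df = lambda x: 5.0 * x**4 - 1.0

x = -1.0
for _ in range(50):
    x -= f(x) / df(x)

assert abs(f(x)) < 1e-12          # x is a root to machine precision
assert abs(x - (-1.1673)) < 1e-3  # the single real root is near -1.1673
```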
Numerical approximations of the roots of quintics can be computed with root-finding algorithms for polynomials. Although some quintics may be solved in terms of radicals, the solution is generally too complicated to be used in practice. == Solvable quintics == Some quintic equations can be solved in terms of radicals. These include the quintic equations defined by a polynomial that is reducible, such as x5 − x4 − x + 1 = (x2 + 1)(x + 1)(x − 1)2. For example, it has been shown that {\displaystyle x^{5}-x-r=0} has solutions in radicals if and only if it has an integer solution or r is one of ±15, ±22440, or ±2759640, in which cases the polynomial is reducible. As solving reducible quintic equations reduces immediately to solving polynomials of lower degree, only irreducible quintic equations are considered in the remainder of this section, and the term "quintic" will refer only to irreducible quintics. A solvable quintic is thus an irreducible quintic polynomial whose roots may be expressed in terms of radicals. To characterize solvable quintics, and more generally solvable polynomials of higher degree, Évariste Galois developed techniques which gave rise to group theory and Galois theory. Applying these techniques, Arthur Cayley found a general criterion for determining whether any given quintic is solvable. This criterion is the following.
Given the equation a x 5 + b x 4 + c x 3 + d x 2 + e x + f = 0 , {\displaystyle ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f=0,} the Tschirnhaus transformation x = y − ⁠b/5a⁠, which depresses the quintic (that is, removes the term of degree four), gives the equation y 5 + p y 3 + q y 2 + r y + s = 0 , {\displaystyle y^{5}+py^{3}+qy^{2}+ry+s=0,} where p = 5 a c − 2 b 2 5 a 2 q = 25 a 2 d − 15 a b c + 4 b 3 25 a 3 r = 125 a 3 e − 50 a 2 b d + 15 a b 2 c − 3 b 4 125 a 4 s = 3125 a 4 f − 625 a 3 b e + 125 a 2 b 2 d − 25 a b 3 c + 4 b 5 3125 a 5 {\displaystyle {\begin{aligned}p&={\frac {5ac-2b^{2}}{5a^{2}}}\\[4pt]q&={\frac {25a^{2}d-15abc+4b^{3}}{25a^{3}}}\\[4pt]r&={\frac {125a^{3}e-50a^{2}bd+15ab^{2}c-3b^{4}}{125a^{4}}}\\[4pt]s&={\frac {3125a^{4}f-625a^{3}be+125a^{2}b^{2}d-25ab^{3}c+4b^{5}}{3125a^{5}}}\end{aligned}}} Both quintics are solvable by radicals if and only if either they are factorisable in equations of lower degrees with rational coefficients or the polynomial P2 − 1024 z Δ, named Cayley's resolvent, has a rational root in z, where P = z 3 − z 2 ( 20 r + 3 p 2 ) − z ( 8 p 2 r − 16 p q 2 − 240 r 2 + 400 s q − 3 p 4 ) − p 6 + 28 p 4 r − 16 p 3 q 2 − 176 p 2 r 2 − 80 p 2 s q + 224 p r q 2 − 64 q 4 + 4000 p s 2 + 320 r 3 − 1600 r s q {\displaystyle {\begin{aligned}P={}&z^{3}-z^{2}(20r+3p^{2})-z(8p^{2}r-16pq^{2}-240r^{2}+400sq-3p^{4})\\[4pt]&-p^{6}+28p^{4}r-16p^{3}q^{2}-176p^{2}r^{2}-80p^{2}sq+224prq^{2}-64q^{4}\\[4pt]&+4000ps^{2}+320r^{3}-1600rsq\end{aligned}}} and Δ = − 128 p 2 r 4 + 3125 s 4 − 72 p 4 q r s + 560 p 2 q r 2 s + 16 p 4 r 3 + 256 r 5 + 108 p 5 s 2 − 1600 q r 3 s + 144 p q 2 r 3 − 900 p 3 r s 2 + 2000 p r 2 s 2 − 3750 p q s 3 + 825 p 2 q 2 s 2 + 2250 q 2 r s 2 + 108 q 5 s − 27 q 4 r 2 − 630 p q 3 r s + 16 p 3 q 3 s − 4 p 3 q 2 r 2 . 
{\displaystyle {\begin{aligned}\Delta ={}&-128p^{2}r^{4}+3125s^{4}-72p^{4}qrs+560p^{2}qr^{2}s+16p^{4}r^{3}+256r^{5}+108p^{5}s^{2}\\[4pt]&-1600qr^{3}s+144pq^{2}r^{3}-900p^{3}rs^{2}+2000pr^{2}s^{2}-3750pqs^{3}+825p^{2}q^{2}s^{2}\\[4pt]&+2250q^{2}rs^{2}+108q^{5}s-27q^{4}r^{2}-630pq^{3}rs+16p^{3}q^{3}s-4p^{3}q^{2}r^{2}.\end{aligned}}} Cayley's result allows us to test whether a quintic is solvable. If it is, finding its roots is a more difficult problem, which consists of expressing the roots in terms of radicals involving the coefficients of the quintic and the rational root of Cayley's resolvent. In 1888, George Paxton Young described how to solve a solvable quintic equation, without providing an explicit formula; in 2004, Daniel Lazard wrote out a three-page formula. === Quintics in Bring–Jerrard form === There are several parametric representations of solvable quintics of the form x5 + ax + b = 0, called the Bring–Jerrard form. In 1885, John Stuart Glashan, George Paxton Young, and Carl Runge gave such a parameterization: an irreducible quintic with rational coefficients in Bring–Jerrard form is solvable if and only if either a = 0 or it may be written {\displaystyle x^{5}+{\frac {5\mu ^{4}(4\nu +3)}{\nu ^{2}+1}}x+{\frac {4\mu ^{5}(2\nu +1)(4\nu +3)}{\nu ^{2}+1}}=0} where μ and ν are rational. In 1994, Blair Spearman and Kenneth S. Williams gave an alternative, {\displaystyle x^{5}+{\frac {5e^{4}(4c+3)}{c^{2}+1}}x+{\frac {-4e^{5}(2c-11)}{c^{2}+1}}=0.} The relationship between the 1885 and 1994 parameterizations can be seen by defining the expression {\displaystyle b={\frac {4}{5}}\left(a+20\pm 2{\sqrt {(20-a)(5+a)}}\right)} where {\displaystyle a=5{\tfrac {4\nu +3}{\nu ^{2}+1}}} .
Using the negative case of the square root yields, after scaling variables, the first parametrization while the positive case gives the second. The substitution ⁠ c = − m ℓ 5 , {\displaystyle c=-{\tfrac {m}{\ell ^{5}}},} ⁠ ⁠ e = 1 ℓ {\displaystyle e={\tfrac {1}{\ell }}} ⁠ in the Spearman–Williams parameterization allows one to not exclude the special case a = 0, giving the following result: If a and b are rational numbers, the equation x5 + ax + b = 0 is solvable by radicals if either its left-hand side is a product of polynomials of degree less than 5 with rational coefficients or there exist two rational numbers ℓ and m such that a = 5 ℓ ( 3 ℓ 5 − 4 m ) m 2 + ℓ 10 b = 4 ( 11 ℓ 5 + 2 m ) m 2 + ℓ 10 . {\displaystyle a={\frac {5\ell (3\ell ^{5}-4m)}{m^{2}+\ell ^{10}}}\qquad b={\frac {4(11\ell ^{5}+2m)}{m^{2}+\ell ^{10}}}.} === Roots of a solvable quintic === A polynomial equation is solvable by radicals if its Galois group is a solvable group. In the case of irreducible quintics, the Galois group is a subgroup of the symmetric group S5 of all permutations of a five element set, which is solvable if and only if it is a subgroup of the group F5, of order 20, generated by the cyclic permutations (1 2 3 4 5) and (1 2 4 3). If the quintic is solvable, one of the solutions may be represented by an algebraic expression involving a fifth root and at most two square roots, generally nested. The other solutions may then be obtained either by changing the fifth root or by multiplying all the occurrences of the fifth root by the same power of a primitive 5th root of unity, such as − 10 − 2 5 + 5 − 1 4 . 
{\displaystyle {\frac {{\sqrt {-10-2{\sqrt {5}}}}+{\sqrt {5}}-1}{4}}.} In fact, all four primitive fifth roots of unity may be obtained by changing the signs of the square roots appropriately; namely, the expression α − 10 − 2 β 5 + β 5 − 1 4 , {\displaystyle {\frac {\alpha {\sqrt {-10-2\beta {\sqrt {5}}}}+\beta {\sqrt {5}}-1}{4}},} where α , β ∈ { − 1 , 1 } {\displaystyle \alpha ,\beta \in \{-1,1\}} , yields the four distinct primitive fifth roots of unity. It follows that one may need four different square roots for writing all the roots of a solvable quintic. Even for the first root that involves at most two square roots, the expression of the solutions in terms of radicals is usually highly complicated. However, when no square root is needed, the form of the first solution may be rather simple, as for the equation x5 − 5x4 + 30x3 − 50x2 + 55x − 21 = 0, for which the only real solution is x = 1 + 2 5 − ( 2 5 ) 2 + ( 2 5 ) 3 − ( 2 5 ) 4 . {\displaystyle x=1+{\sqrt[{5}]{2}}-\left({\sqrt[{5}]{2}}\right)^{2}+\left({\sqrt[{5}]{2}}\right)^{3}-\left({\sqrt[{5}]{2}}\right)^{4}.} An example of a more complicated (although small enough to be written here) solution is the unique real root of x5 − 5x + 12 = 0. Let a = √2φ−1, b = √2φ, and c = 4√5, where φ = ⁠1+√5/2⁠ is the golden ratio. Then the only real solution x = −1.84208... is given by − c x = ( a + c ) 2 ( b − c ) 5 + ( − a + c ) ( b − c ) 2 5 + ( a + c ) ( b + c ) 2 5 − ( − a + c ) 2 ( b + c ) 5 , {\displaystyle -cx={\sqrt[{5}]{(a+c)^{2}(b-c)}}+{\sqrt[{5}]{(-a+c)(b-c)^{2}}}+{\sqrt[{5}]{(a+c)(b+c)^{2}}}-{\sqrt[{5}]{(-a+c)^{2}(b+c)}}\,,} or, equivalently, by x = y 1 5 + y 2 5 + y 3 5 + y 4 5 , {\displaystyle x={\sqrt[{5}]{y_{1}}}+{\sqrt[{5}]{y_{2}}}+{\sqrt[{5}]{y_{3}}}+{\sqrt[{5}]{y_{4}}}\,,} where the yi are the four roots of the quartic equation y 4 + 4 y 3 + 4 5 y 2 − 8 5 3 y − 1 5 5 = 0 . 
{\displaystyle y^{4}+4y^{3}+{\frac {4}{5}}y^{2}-{\frac {8}{5^{3}}}y-{\frac {1}{5^{5}}}=0\,.} More generally, if an equation P(x) = 0 of prime degree p with rational coefficients is solvable in radicals, then one can define an auxiliary equation Q(y) = 0 of degree p − 1, also with rational coefficients, such that each root of P is the sum of p-th roots of the roots of Q. These p-th roots were introduced by Joseph-Louis Lagrange, and their products by p are commonly called Lagrange resolvents. The computation of Q and its roots can be used to solve P(x) = 0. However these p-th roots may not be computed independently (this would provide pp−1 roots instead of p). Thus a correct solution needs to express all these p-roots in term of one of them. Galois theory shows that this is always theoretically possible, even if the resulting formula may be too large to be of any use. It is possible that some of the roots of Q are rational (as in the first example of this section) or some are zero. In these cases, the formula for the roots is much simpler, as for the solvable de Moivre quintic x 5 + 5 a x 3 + 5 a 2 x + b = 0 , {\displaystyle x^{5}+5ax^{3}+5a^{2}x+b=0\,,} where the auxiliary equation has two zero roots and reduces, by factoring them out, to the quadratic equation y 2 + b y − a 5 = 0 , {\displaystyle y^{2}+by-a^{5}=0\,,} such that the five roots of the de Moivre quintic are given by x k = ω k y i 5 − a ω k y i 5 , {\displaystyle x_{k}=\omega ^{k}{\sqrt[{5}]{y_{i}}}-{\frac {a}{\omega ^{k}{\sqrt[{5}]{y_{i}}}}},} where yi is any root of the auxiliary quadratic equation and ω is any of the four primitive 5th roots of unity. This can be easily generalized to construct a solvable septic and other odd degrees, not necessarily prime. === Other solvable quintics === There are infinitely many solvable quintics in Bring–Jerrard form which have been parameterized in a preceding section. 
Up to the scaling of the variable, there are exactly five solvable quintics of the shape x 5 + a x 2 + b {\displaystyle x^{5}+ax^{2}+b} , which are (where s is a scaling factor): x 5 − 2 s 3 x 2 − s 5 5 {\displaystyle x^{5}-2s^{3}x^{2}-{\frac {s^{5}}{5}}} x 5 − 100 s 3 x 2 − 1000 s 5 {\displaystyle x^{5}-100s^{3}x^{2}-1000s^{5}} x 5 − 5 s 3 x 2 − 3 s 5 {\displaystyle x^{5}-5s^{3}x^{2}-3s^{5}} x 5 − 5 s 3 x 2 + 15 s 5 {\displaystyle x^{5}-5s^{3}x^{2}+15s^{5}} x 5 − 25 s 3 x 2 − 300 s 5 {\displaystyle x^{5}-25s^{3}x^{2}-300s^{5}} Paxton Young (1888) gave a number of examples of solvable quintics: An infinite sequence of solvable quintics may be constructed, whose roots are sums of nth roots of unity, with n = 10k + 1 being a prime number: There are also two parameterized families of solvable quintics: The Kondo–Brumer quintic, x 5 + ( a − 3 ) x 4 + ( − a + b + 3 ) x 3 + ( a 2 − a − 1 − 2 b ) x 2 + b x + a = 0 {\displaystyle x^{5}+(a-3)\,x^{4}+(-a+b+3)\,x^{3}+(a^{2}-a-1-2b)\,x^{2}+b\,x+a=0} and the family depending on the parameters a , ℓ , m {\displaystyle a,\ell ,m} x 5 − 5 p ( 2 x 3 + a x 2 + b x ) − p c = 0 {\displaystyle x^{5}-5\,p\left(2\,x^{3}+a\,x^{2}+b\,x\right)-p\,c=0} where p = 1 4 [ ℓ 2 ( 4 m 2 + a 2 ) − m 2 ] , {\displaystyle p={\tfrac {1}{4}}\left[\,\ell ^{2}(4m^{2}+a^{2})-m^{2}\,\right]\;,} b = ℓ ( 4 m 2 + a 2 ) − 5 p − 2 m 2 , {\displaystyle b=\ell \,(4m^{2}+a^{2})-5p-2m^{2}\;,} c = 1 2 [ b ( a + 4 m ) − p ( a − 4 m ) − a 2 m ] . {\displaystyle c={\tfrac {1}{2}}\left[\,b(a+4m)-p(a-4m)-a^{2}m\,\right]\;.} === Casus irreducibilis === Analogously to cubic equations, there are solvable quintics which have five real roots all of whose solutions in radicals involve roots of complex numbers. This is casus irreducibilis for the quintic, which is discussed in Dummit.: p.17  Indeed, if an irreducible quintic has all roots real, no root can be expressed purely in terms of real radicals (as is true for all polynomial degrees that are not powers of 2). 
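The closed forms in the preceding subsections lend themselves to a quick numerical spot-check. The following Python sketch verifies the simple radical root of x5 − 5x4 + 30x3 − 50x2 + 55x − 21 = 0 and the de Moivre root formula; the tolerance and the sample de Moivre coefficients a = 1, b = 2 are our choices, not values from the text.

```python
import cmath

# Root x = 1 + 2^(1/5) - 2^(2/5) + 2^(3/5) - 2^(4/5) given above
t = 2 ** (1 / 5)
x = 1 + t - t**2 + t**3 - t**4
assert abs(x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21) < 1e-9

# de Moivre quintic x^5 + 5a x^3 + 5a^2 x + b, with sample values a=1, b=2
a, b = 1.0, 2.0
y = (-b + (b*b + 4 * a**5) ** 0.5) / 2        # root of y^2 + b*y - a^5 = 0
w = cmath.exp(2j * cmath.pi / 5)              # primitive 5th root of unity
for k in range(5):
    u = w**k * y ** (1 / 5)
    xk = u - a / u                            # x_k = w^k y^(1/5) - a/(w^k y^(1/5))
    assert abs(xk**5 + 5*a*xk**3 + 5*a**2*xk + b) < 1e-9
```

All five de Moivre roots check out, including the complex ones obtained by multiplying the fifth root by powers of ω.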
== Beyond radicals == About 1835, Jerrard demonstrated that quintics can be solved by using ultraradicals (also known as Bring radicals), the unique real root of t5 + t − a = 0 for real numbers a. In 1858, Charles Hermite showed that the Bring radical could be characterized in terms of the Jacobi theta functions and their associated elliptic modular functions, using an approach similar to the more familiar approach of solving cubic equations by means of trigonometric functions. At around the same time, Leopold Kronecker, using group theory, developed a simpler way of deriving Hermite's result, as had Francesco Brioschi. Later, Felix Klein came up with a method that relates the symmetries of the icosahedron, Galois theory, and the elliptic modular functions that are featured in Hermite's solution, giving an explanation for why they should appear at all, and developed his own solution in terms of generalized hypergeometric functions. Similar phenomena occur in degree 7 (septic equations) and 11, as studied by Klein and discussed in Icosahedral symmetry § Related geometries. === Solving with Bring radicals === A Tschirnhaus transformation, which may be computed by solving a quartic equation, reduces the general quintic equation of the form x 5 + a 4 x 4 + a 3 x 3 + a 2 x 2 + a 1 x + a 0 = 0 {\displaystyle x^{5}+a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}=0\,} to the Bring–Jerrard normal form x5 − x + t = 0. The roots of this equation cannot be expressed by radicals. However, in 1858, Charles Hermite published the first known solution of this equation in terms of elliptic functions. At around the same time Francesco Brioschi and Leopold Kronecker came upon equivalent solutions. See Bring radical for details on these solutions and some related ones. == Application to celestial mechanics == Solving for the locations of the Lagrangian points of an astronomical orbit in which the masses of both objects are non-negligible involves solving a quintic. 
More precisely, the locations of L2 and L1 are the solutions to the following equations, where the gravitational forces of two masses on a third (for example, Sun and Earth on satellites such as Gaia and the James Webb Space Telescope at L2 and SOHO at L1) provide the satellite's centripetal force necessary to be in a synchronous orbit with Earth around the Sun: G m M S ( R ± r ) 2 ± G m M E r 2 = m ω 2 ( R ± r ) {\displaystyle {\frac {GmM_{S}}{(R\pm r)^{2}}}\pm {\frac {GmM_{E}}{r^{2}}}=m\omega ^{2}(R\pm r)} The ± sign corresponds to L2 and L1, respectively; G is the gravitational constant, ω the angular velocity, r the distance of the satellite to Earth, R the distance Sun to Earth (that is, the semi-major axis of Earth's orbit), and m, ME, and MS are the respective masses of satellite, Earth, and Sun. Using Kepler's Third Law ω 2 = 4 π 2 P 2 = G ( M S + M E ) R 3 {\displaystyle \omega ^{2}={\frac {4\pi ^{2}}{P^{2}}}={\frac {G(M_{S}+M_{E})}{R^{3}}}} and rearranging all terms yields the quintic a r 5 + b r 4 + c r 3 + d r 2 + e r + f = 0 {\displaystyle ar^{5}+br^{4}+cr^{3}+dr^{2}+er+f=0} with: a = ± ( M S + M E ) , b = + ( M S + M E ) 3 R , c = ± ( M S + M E ) 3 R 2 , d = + ( M E ∓ M E ) R 3 ( thus d = 0 for L 2 ) , e = ± M E 2 R 4 , f = ∓ M E R 5 . {\displaystyle {\begin{aligned}&a=\pm (M_{S}+M_{E}),\\&b=+(M_{S}+M_{E})3R,\\&c=\pm (M_{S}+M_{E})3R^{2},\\&d=+(M_{E}\mp M_{E})R^{3}\ ({\text{thus }}d=0{\text{ for }}L_{2}),\\&e=\pm M_{E}2R^{4},\\&f=\mp M_{E}R^{5}.\end{aligned}}} Solving these two quintics yields r = 1.501 × 109 m for L2 and r = 1.491 × 109 m for L1. The Sun–Earth Lagrangian points L2 and L1 are usually given as 1.5 million km from Earth. 
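A direct numerical solution confirms the quoted distances. The sketch below root-finds the force-balance equation itself (equivalent to the quintic after clearing denominators, once G and the satellite mass m are divided out); the Sun and Earth masses and the Sun-Earth distance are standard values assumed by us.

```python
MS, ME = 1.989e30, 5.972e24   # kg, Sun and Earth masses (assumed values)
R = 1.496e11                  # m, Sun-Earth distance

def balance(r, sign):
    # sign = +1 for L2 (beyond Earth), -1 for L1 (between Earth and Sun);
    # G and the satellite mass m cancel after dividing the equation by G*m.
    return (MS / (R + sign * r)**2 + sign * ME / r**2
            - (MS + ME) * (R + sign * r) / R**3)

def bisect(f, lo, hi, steps=200):
    # simple bisection; f changes sign exactly once on [lo, hi]
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

r_L2 = bisect(lambda r: balance(r, +1), 1e8, 5e9)
r_L1 = bisect(lambda r: balance(r, -1), 1e8, 5e9)
print(r_L2, r_L1)   # both close to 1.5e9 m, with r_L1 slightly smaller
```

Both results land near the Hill-sphere estimate of 1.5 million km, with L1 slightly closer to Earth than L2, as stated above.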
If the mass of the smaller object (ME) is much smaller than the mass of the larger object (MS), then the quintic equation can be greatly reduced and L1 and L2 are at approximately the radius of the Hill sphere, given by: {\displaystyle r\approx R{\sqrt[{3}]{\frac {M_{E}}{3M_{S}}}}} That also yields r = 1.5 × 10⁹ m for satellites at L1 and L2 in the Sun–Earth system. == See also == Sextic equation Septic function Theory of equations Principal equation form == Notes == == References == Charles Hermite, "Sur la résolution de l'équation du cinquième degré", Œuvres de Charles Hermite, 2:5–21, Gauthier-Villars, 1908. Klein, Felix (1888). Lectures on the Icosahedron and the Solution of Equations of the Fifth Degree. Translated by Morrice, George Gavin. Trübner & Co. ISBN 0-486-49528-0. Leopold Kronecker, "Sur la résolution de l'équation du cinquième degré, extrait d'une lettre adressée à M. Hermite", Comptes Rendus de l'Académie des Sciences, 46:1:1150–1152, 1858. Blair Spearman and Kenneth S. Williams, "Characterization of solvable quintics x5 + ax + b", American Mathematical Monthly, 101:986–992 (1994). Ian Stewart, Galois Theory, 2nd Edition, Chapman and Hall, 1989. ISBN 0-412-34550-1. Discusses Galois theory in general, including a proof of the insolvability of the general quintic. Jörg Bewersdorff, Galois theory for beginners: A historical perspective, American Mathematical Society, 2006. ISBN 0-8218-3817-2. Chapter 8 (The solution of equations of the fifth degree, archived at the Wayback Machine, 31 March 2010) gives a description of the solution of solvable quintics x5 + cx + d. Victor S. Adamchik and David J. Jeffrey, "Polynomial transformations of Tschirnhaus, Bring and Jerrard", ACM SIGSAM Bulletin, Vol. 37, No. 3, September 2003, pp. 90–94. Ehrenfried Walter von Tschirnhaus, "A method for removing all intermediate terms from a given equation", ACM SIGSAM Bulletin, Vol. 37, No. 1, March 2003, pp. 1–3.
Lazard, Daniel (2004). "Solving quintics in radicals". In Olav Arnfinn Laudal; Ragni Piene (eds.). The Legacy of Niels Henrik Abel. Berlin. pp. 207–225. ISBN 3-540-43826-2. Archived from the original on January 6, 2005. Tóth, Gábor (2002), Finite Möbius groups, minimal immersions of spheres, and moduli == External links == Mathworld - Quintic Equation – more details on methods for solving quintics. Solving Solvable Quintics – a method for solving solvable quintics due to David S. Dummit. A method for removing all intermediate terms from a given equation – a recent English translation of Tschirnhaus' 1683 paper. Bruce Bartlett: The Quintic, the Icosahedron, and Elliptic Curves, AMS Notices (April 2024)
Wikipedia/Quintic_function
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted by 1 and 0, whereas in elementary algebra the values of the variables are numbers. Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations in the same way that elementary algebra describes numerical operations. Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term Boolean algebra was first suggested by Henry M. Sheffer in 1913, although Charles Sanders Peirce gave the title "A Boolian [sic] Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880. Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics. == History == A precursor of Boolean algebra was Gottfried Wilhelm Leibniz's algebra of concepts. The usage of binary in relation to the I Ching was central to Leibniz's characteristica universalis. It eventually created the foundations of algebra of concepts. Leibniz's algebra of concepts is deductively equivalent to the Boolean algebra of sets. Boole's algebra predated the modern developments in abstract algebra and mathematical logic; it is however seen as connected to the origins of both fields. 
In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others, until it reached the modern conception of an (abstract) mathematical structure. For example, the empirical observation that one can manipulate expressions in the algebra of sets, by translating them into expressions in Boole's algebra, is explained in modern terms by saying that the algebra of sets is a Boolean algebra (note the indefinite article). In fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. In the 1930s, while studying switching circuits, Claude Shannon observed that one could also apply the rules of Boole's algebra in this setting, and he introduced switching algebra as a way to analyze and design circuits by algebraic means in terms of logic gates. Shannon already had at his disposal the abstract mathematical apparatus, thus he cast his switching algebra as the two-element Boolean algebra. In modern circuit engineering settings, there is little need to consider other Boolean algebras, thus "switching algebra" and "Boolean algebra" are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Modern electronic design automation tools for very-large-scale integration (VLSI) circuits often rely on an efficient representation of Boolean functions known as (reduced ordered) binary decision diagrams (BDD) for logic synthesis and formal verification. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra. Thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic. 
Although the development of mathematical logic did not follow Boole's program, the connection between his algebra and logic was later put on firm ground in the setting of algebraic logic, which also studies the algebraic systems of many other logics. The problem of determining whether the variables of a given Boolean (propositional) formula can be assigned in such a way as to make the formula evaluate to true is called the Boolean satisfiability problem (SAT), and is of importance to theoretical computer science, being the first problem shown to be NP-complete. The closely related model of computation known as a Boolean circuit relates time complexity (of an algorithm) to circuit complexity. == Values == Whereas expressions denote mainly numbers in elementary algebra, in Boolean algebra, they denote the truth values false and true. These values are represented with the bits, 0 and 1. They do not behave like the integers 0 and 1, for which 1 + 1 = 2, but may be identified with the elements of the two-element field GF(2), that is, integer arithmetic modulo 2, for which 1 + 1 = 0. Addition and multiplication then play the Boolean roles of XOR (exclusive-or) and AND (conjunction), respectively, with disjunction x ∨ y (inclusive-or) definable as x + y − xy and negation ¬x as 1 − x. In GF(2), − may be replaced by +, since they denote the same operation; however, this way of writing Boolean operations allows applying the usual arithmetic operations of integers (this may be useful when using a programming language in which GF(2) is not implemented). Boolean algebra also deals with functions which have their values in the set {0,1}. A sequence of bits is a commonly used example of such a function. Another common example is the totality of subsets of a set E: to a subset F of E, one can define the indicator function that takes the value 1 on F, and 0 outside F. 
The most general example is the elements of a Boolean algebra, with all of the foregoing being instances thereof. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. == Operations == === Basic operations === While elementary algebra has four operations (addition, subtraction, multiplication, and division), Boolean algebra has only three basic operations: conjunction, disjunction, and negation, expressed with the corresponding binary operators AND ( ∧ {\displaystyle \land } ) and OR ( ∨ {\displaystyle \lor } ) and the unary operator NOT ( ¬ {\displaystyle \neg } ), collectively referred to as Boolean operators. Variables in Boolean algebra that store the logical values 0 and 1 are called Boolean variables; they are used to store either true or false values. The basic operations on Boolean variables x and y can be defined by tabulating their values in a truth table:
x y | x ∧ y | x ∨ y | ¬x
0 0 |   0   |   0   | 1
0 1 |   0   |   1   | 1
1 0 |   0   |   1   | 0
1 1 |   1   |   1   | 0
When used in expressions, the operators are applied according to the precedence rules. As with elementary algebra, expressions in parentheses are evaluated first.
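The precedence convention can be made concrete in code. In this Python sketch the function names and the use of Python's bit operators are our conventions, not notation from the text; NOT binds tighter than AND, which binds tighter than OR:

```python
def NOT(x): return 1 - x
def AND(x, y): return x & y
def OR(x, y): return x | y

# The expression "NOT x AND y OR z" is read as ((NOT x) AND y) OR z.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            explicit = OR(AND(NOT(x), y), z)
            # Python's arithmetic, &, and | happen to follow the same
            # relative precedence, so the unparenthesised form agrees:
            assert explicit == 1 - x & y | z
```
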
If the truth values 0 and 1 are interpreted as integers, these operations may be expressed with the ordinary operations of arithmetic (where x + y uses addition and xy uses multiplication), or by the minimum/maximum functions: {\displaystyle {\begin{aligned}x\wedge y&=xy=\min(x,y)\\x\vee y&=x+y-xy=x+y(1-x)=\max(x,y)\\\neg x&=1-x\end{aligned}}} One might consider that only negation and one of the two other operations are basic, because of the following identities that allow one to define conjunction in terms of negation and disjunction, and vice versa (De Morgan's laws): {\displaystyle {\begin{aligned}x\wedge y&=\neg (\neg x\vee \neg y)\\x\vee y&=\neg (\neg x\wedge \neg y)\end{aligned}}} === Secondary operations === Operations composed from the basic operations include, among others, material implication x → y, exclusive or x ⊕ y, and logical equivalence x ≡ y. These definitions give rise to the following truth table giving the values of these operations for all four possible inputs:
x y | x → y | x ⊕ y | x ≡ y
0 0 |   1   |   0   |   1
0 1 |   1   |   1   |   0
1 0 |   0   |   1   |   0
1 1 |   1   |   0   |   1
Material conditional The first operation, x → y, or Cxy, is called material implication. If x is true, then the result of the expression x → y is taken to be that of y (e.g. if x is true and y is false, then x → y is also false). But if x is false, then the value of y can be ignored; however, the operation must return some Boolean value and there are only two choices. So by definition, x → y is true when x is false (relevance logic rejects this definition, by viewing an implication with a false premise as something other than either true or false). Exclusive OR (XOR) The second operation, x ⊕ y, or Jxy, is called exclusive or (often abbreviated as XOR) to distinguish it from disjunction as the inclusive kind. It excludes the possibility of both x and y being true (see the table above): if both are true, then the result is false. Defined in terms of arithmetic, it is addition mod 2, where 1 + 1 = 0.
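These arithmetic forms, together with the De Morgan identities and the material conditional described above, can be checked exhaustively over {0, 1}. A small Python sketch (the arithmetic encoding 1 − x + xy of implication is our own rewriting of ¬x ∨ y):

```python
for x in (0, 1):
    assert 1 - x == (not x)                              # negation
    for y in (0, 1):
        assert x * y == min(x, y) == (x and y)           # conjunction
        assert x + y - x*y == max(x, y) == (x or y)      # disjunction
        # De Morgan: each of AND/OR definable from the other plus NOT
        assert x * y == 1 - ((1 - x) + (1 - y) - (1 - x) * (1 - y))
        assert x + y - x*y == 1 - (1 - x) * (1 - y)
        # material implication (false only for x = 1, y = 0) and XOR
        assert (1 - x) + x * y == (0 if (x, y) == (1, 0) else 1)
        assert (x + y) % 2 == (x != y)                   # addition mod 2
```
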
Logical equivalence The third operation, the complement of exclusive or, is equivalence or Boolean equality: x ≡ y, or Exy, is true just when x and y have the same value. Hence x ⊕ y as its complement can be understood as x ≠ y, being true just when x and y are different. Its counterpart in arithmetic mod 2 is x + y; equivalence's counterpart is x + y + 1. == Laws == A law of Boolean algebra is an identity such as x ∨ (y ∨ z) = (x ∨ y) ∨ z between two Boolean terms, where a Boolean term is defined as an expression built up from variables and the constants 0 and 1 using the operations ∧, ∨, and ¬. The concept can be extended to terms involving other Boolean operations such as ⊕, →, and ≡, but such extensions are unnecessary for the purposes to which the laws are put. Such purposes include the definition of a Boolean algebra as any model of the Boolean laws, and as a means for deriving new laws from old as in the derivation of x ∨ (y ∧ z) = x ∨ (z ∧ y) from y ∧ z = z ∧ y (as treated in § Axiomatizing Boolean algebra). === Monotone laws === Boolean algebra satisfies many of the same laws as ordinary algebra when one matches up ∨ with addition and ∧ with multiplication. In particular, the following laws are common to both kinds of algebra: associativity and commutativity of both ∨ and ∧, distributivity of ∧ over ∨, the identity laws x ∨ 0 = x and x ∧ 1 = x, and the annihilator law x ∧ 0 = 0. The following laws hold in Boolean algebra, but not in ordinary algebra: the idempotence laws x ∨ x = x and x ∧ x = x, the absorption laws x ∧ (x ∨ y) = x and x ∨ (x ∧ y) = x, distributivity of ∨ over ∧, and the annihilator law x ∨ 1 = 1. Taking x = 2 in the idempotence law x ∧ x = x shows that it is not an ordinary algebra law, since 2 × 2 = 4. The remaining five laws can be falsified in ordinary algebra by taking all variables to be 1. For example, in absorption law 1, the left-hand side would be 1(1 + 1) = 2, while the right-hand side would be 1 (and so on). All of the laws treated thus far have been for conjunction and disjunction. These operations have the property that changing either argument either leaves the output unchanged, or the output changes in the same way as the input. Equivalently, changing any variable from 0 to 1 never results in the output changing from 1 to 0.
Operations with this property are said to be monotone. Thus the axioms thus far have all been for monotonic Boolean logic. Nonmonotonicity enters via complement ¬ as follows. === Nonmonotone laws === The complement operation is defined by the following two laws. Complementation 1 x ∧ ¬ x = 0 Complementation 2 x ∨ ¬ x = 1 {\displaystyle {\begin{aligned}&{\text{Complementation 1}}&x\wedge \neg x&=0\\&{\text{Complementation 2}}&x\vee \neg x&=1\end{aligned}}} All properties of negation including the laws below follow from the above two laws alone. In both ordinary and Boolean algebra, negation works by exchanging pairs of elements, hence in both algebras it satisfies the double negation law (also called involution law) Double negation ¬ ( ¬ x ) = x {\displaystyle {\begin{aligned}&{\text{Double negation}}&\neg {(\neg {x})}&=x\end{aligned}}} But whereas ordinary algebra satisfies the two laws ( − x ) ( − y ) = x y ( − x ) + ( − y ) = − ( x + y ) {\displaystyle {\begin{aligned}(-x)(-y)&=xy\\(-x)+(-y)&=-(x+y)\end{aligned}}} Boolean algebra satisfies De Morgan's laws: De Morgan 1 ¬ x ∧ ¬ y = ¬ ( x ∨ y ) De Morgan 2 ¬ x ∨ ¬ y = ¬ ( x ∧ y ) {\displaystyle {\begin{aligned}&{\text{De Morgan 1}}&\neg x\wedge \neg y&=\neg {(x\vee y)}\\&{\text{De Morgan 2}}&\neg x\vee \neg y&=\neg {(x\wedge y)}\end{aligned}}} === Completeness === The laws listed above define Boolean algebra, in the sense that they entail the rest of the subject. The laws complementation 1 and 2, together with the monotone laws, suffice for this purpose and can therefore be taken as one possible complete set of laws or axiomatization of Boolean algebra. Every law of Boolean algebra follows logically from these axioms. Furthermore, Boolean algebras can then be defined as the models of these axioms as treated in § Boolean algebras. Writing down further laws of Boolean algebra cannot give rise to any new consequences of these axioms, nor can it rule out any model of them. 
In contrast, in a list of some but not all of the same laws, there could have been Boolean laws that did not follow from those on the list, and moreover there would have been models of the listed laws that were not Boolean algebras. This axiomatization is by no means the only one, or even necessarily the most natural given that attention was not paid as to whether some of the axioms followed from others, but there was simply a choice to stop when enough laws had been noticed, treated further in § Axiomatizing Boolean algebra. Or the intermediate notion of axiom can be sidestepped altogether by defining a Boolean law directly as any tautology, understood as an equation that holds for all values of its variables over 0 and 1. All these definitions of Boolean algebra can be shown to be equivalent. === Duality principle === Principle: If {X, R} is a partially ordered set, then {X, R(inverse)} is also a partially ordered set. There is nothing special about the choice of symbols for the values of Boolean algebra. 0 and 1 could be renamed to α and β, and as long as it was done consistently throughout, it would still be Boolean algebra, albeit with some obvious cosmetic differences. But suppose 0 and 1 were renamed 1 and 0 respectively. Then it would still be Boolean algebra, and moreover operating on the same values. However, it would not be identical to our original Boolean algebra because now ∨ behaves the way ∧ used to do and vice versa. So there are still some cosmetic differences to show that the notation has been changed, despite the fact that 0s and 1s are still being used. But if in addition to interchanging the names of the values, the names of the two binary operations are also interchanged, now there is no trace of what was done. The end product is completely indistinguishable from what was started with. The columns for x ∧ y and x ∨ y in the truth tables have changed places, but that switch is immaterial. 
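The interchange just described can be demonstrated mechanically. In this Python sketch (the dictionary encoding of truth tables is our own), relabelling 0 ↔ 1 everywhere in the table for ∨ yields exactly the table for ∧, and vice versa:

```python
AND = {(x, y): x & y for x in (0, 1) for y in (0, 1)}
OR  = {(x, y): x | y for x in (0, 1) for y in (0, 1)}

def relabel(table):
    # Rename 0 <-> 1 in both the inputs and the output of a truth table.
    return {(1 - x, 1 - y): 1 - v for (x, y), v in table.items()}

assert relabel(OR) == AND    # the relabelled OR table is the AND table
assert relabel(AND) == OR    # ... and symmetrically
```

After swapping both the value names and the operation names, no trace of the change remains, which is the tabular form of the duality discussed here.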
When values and operations can be paired up in a way that leaves everything important unchanged when all pairs are switched simultaneously, the members of each pair are called dual to each other. Thus 0 and 1 are dual, and ∧ and ∨ are dual. The duality principle, also called De Morgan duality, asserts that Boolean algebra is unchanged when all dual pairs are interchanged. One change not needed to make as part of this interchange was to complement. Complement is a self-dual operation. The identity or do-nothing operation x (copy the input to the output) is also self-dual. A more complicated example of a self-dual operation is (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x). There is no self-dual binary operation that depends on both its arguments. A composition of self-dual operations is a self-dual operation. For example, if f(x, y, z) = (x ∧ y) ∨ (y ∧ z) ∨ (z ∧ x), then f(f(x, y, z), x, t) is a self-dual operation of four arguments x, y, z, t. The principle of duality can be explained from a group theory perspective by the fact that there are exactly four functions that are one-to-one mappings (automorphisms) of the set of Boolean polynomials back to itself: the identity function, the complement function, the dual function and the contradual function (complemented dual). These four functions form a group under function composition, isomorphic to the Klein four-group, acting on the set of Boolean polynomials. Walter Gottschalk remarked that consequently a more appropriate name for the phenomenon would be the principle (or square) of quaternality.: 21–22  == Diagrammatic representations == === Venn diagrams === A Venn diagram can be used as a representation of a Boolean operation using shaded overlapping regions. There is one region for each variable, all circular in the examples here. The interior and exterior of region x corresponds respectively to the values 1 (true) and 0 (false) for variable x. 
The shading indicates the value of the operation for each combination of regions, with dark denoting 1 and light 0 (some authors use the opposite convention). The three Venn diagrams in the figure below represent respectively conjunction x ∧ y, disjunction x ∨ y, and complement ¬x. For conjunction, the region inside both circles is shaded to indicate that x ∧ y is 1 when both variables are 1. The other regions are left unshaded to indicate that x ∧ y is 0 for the other three combinations. The second diagram represents disjunction x ∨ y by shading those regions that lie inside either or both circles. The third diagram represents complement ¬x by shading the region not inside the circle. While we have not shown the Venn diagrams for the constants 0 and 1, they are trivial, being respectively a white box and a dark box, neither one containing a circle. However, we could put a circle for x in those boxes, in which case each would denote a function of one argument, x, which returns the same value independently of x, called a constant function. As far as their outputs are concerned, constants and constant functions are indistinguishable; the difference is that a constant takes no arguments, called a zeroary or nullary operation, while a constant function takes one argument, which it ignores, and is a unary operation. Venn diagrams are helpful in visualizing laws. The commutativity laws for ∧ and ∨ can be seen from the symmetry of the diagrams: a binary operation that was not commutative would not have a symmetric diagram because interchanging x and y would have the effect of reflecting the diagram horizontally and any failure of commutativity would then appear as a failure of symmetry. Idempotence of ∧ and ∨ can be visualized by sliding the two circles together and noting that the shaded area then becomes the whole circle, for both ∧ and ∨. 
To see the first absorption law, x ∧ (x ∨ y) = x, start with the diagram in the middle for x ∨ y and note that the portion of the shaded area in common with the x circle is the whole of the x circle. For the second absorption law, x ∨ (x ∧ y) = x, start with the left diagram for x∧y and note that shading the whole of the x circle results in just the x circle being shaded, since the previous shading was inside the x circle. The double negation law can be seen by complementing the shading in the third diagram for ¬x, which shades the x circle. To visualize the first De Morgan's law, (¬x) ∧ (¬y) = ¬(x ∨ y), start with the middle diagram for x ∨ y and complement its shading so that only the region outside both circles is shaded, which is what the right hand side of the law describes. The result is the same as if we shaded that region which is both outside the x circle and outside the y circle, i.e. the conjunction of their exteriors, which is what the left hand side of the law describes. The second De Morgan's law, (¬x) ∨ (¬y) = ¬(x ∧ y), works the same way with the two diagrams interchanged. The first complement law, x ∧ ¬x = 0, says that the interior and exterior of the x circle have no overlap. The second complement law, x ∨ ¬x = 1, says that everything is either inside or outside the x circle. === Digital logic gates === Digital logic is the application of the Boolean algebra of 0 and 1 to electronic hardware consisting of logic gates connected to form a circuit diagram. Each gate implements a Boolean operation, and is depicted schematically by a shape indicating the operation. The shapes associated with the gates for conjunction (AND-gates), disjunction (OR-gates), and complement (inverters) are as follows: The lines on the left of each gate represent input wires or ports. The value of the input is represented by a voltage on the lead. 
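The laws visualized above (the absorption, De Morgan, and complement laws) can also be verified mechanically by exhausting the four combinations of truth values; a small Python check:

```python
from itertools import product

AND = lambda x, y: x & y
OR  = lambda x, y: x | y
NOT = lambda x: 1 - x

for x, y in product((0, 1), repeat=2):
    # absorption laws
    assert AND(x, OR(x, y)) == x
    assert OR(x, AND(x, y)) == x
    # De Morgan's laws
    assert AND(NOT(x), NOT(y)) == NOT(OR(x, y))
    assert OR(NOT(x), NOT(y)) == NOT(AND(x, y))
    # complement laws
    assert AND(x, NOT(x)) == 0
    assert OR(x, NOT(x)) == 1

print("all laws verified")
```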
For so-called "active-high" logic, 0 is represented by a voltage close to zero or "ground," while 1 is represented by a voltage close to the supply voltage; active-low reverses this. The line on the right of each gate represents the output port, which normally follows the same voltage conventions as the input ports. Complement is implemented with an inverter gate. The triangle denotes the operation that simply copies the input to the output; the small circle on the output denotes the actual inversion complementing the input. The convention of putting such a circle on any port means that the signal passing through this port is complemented on the way through, whether it is an input or output port. The duality principle, or De Morgan's laws, can be understood as asserting that complementing all three ports of an AND gate converts it to an OR gate and vice versa, as shown in Figure 4 below. Complementing both ports of an inverter however leaves the operation unchanged. More generally, one may complement any of the eight subsets of the three ports of either an AND or OR gate. The resulting sixteen possibilities give rise to only eight Boolean operations, namely those with an odd number of 1s in their truth table. There are eight such because the "odd-bit-out" can be either 0 or 1 and can go in any of four positions in the truth table. There being sixteen binary Boolean operations, this must leave eight operations with an even number of 1s in their truth tables. Two of these are the constants 0 and 1 (as binary operations that ignore both their inputs); four are the operations that depend nontrivially on exactly one of their two inputs, namely x, y, ¬x, and ¬y; and the remaining two are x ⊕ y (XOR) and its complement x ≡ y. == Boolean algebras == The term "algebra" denotes both a subject, namely the subject of algebra, and an object, namely an algebraic structure. 
Whereas the foregoing has addressed the subject of Boolean algebra, this section deals with mathematical objects called Boolean algebras, defined in full generality as any model of the Boolean laws. We begin with a special case of the notion definable without reference to the laws, namely concrete Boolean algebras, and then give the formal definition of the general notion. === Concrete Boolean algebras === A concrete Boolean algebra or field of sets is any nonempty set of subsets of a given set X closed under the set operations of union, intersection, and complement relative to X. (Historically X itself was required to be nonempty as well to exclude the degenerate or one-element Boolean algebra, which is the one exception to the rule that all Boolean algebras satisfy the same equations since the degenerate algebra satisfies every equation. However, this exclusion conflicts with the preferred purely equational definition of "Boolean algebra", there being no way to rule out the one-element algebra using only equations— 0 ≠ 1 does not count, being a negated equation. Hence modern authors allow the degenerate Boolean algebra and let X be empty.) Example 1. The power set 2^X of X, consisting of all subsets of X. Here X may be any set: empty, finite, infinite, or even uncountable. Example 2. The empty set and X. This two-element algebra shows that a concrete Boolean algebra can be finite even when it consists of subsets of an infinite set. It can be seen that every field of subsets of X must contain the empty set and X. Hence no smaller example is possible, other than the degenerate algebra obtained by taking X to be empty so as to make the empty set and X coincide. Example 3. The set of finite and cofinite sets of integers, where a cofinite set is one omitting only finitely many integers. This is clearly closed under complement, and is closed under union because the union of a cofinite set with any set is cofinite, while the union of two finite sets is finite.
Intersection behaves like union with "finite" and "cofinite" interchanged. This example is countably infinite because there are only countably many finite sets of integers. Example 4. For a less trivial example of the point made by example 2, consider a Venn diagram formed by n closed curves partitioning the diagram into 2^n regions, and let X be the (infinite) set of all points in the plane not on any curve but somewhere within the diagram. The interior of each region is thus an infinite subset of X, and every point in X is in exactly one region. Then the set of all 2^(2^n) possible unions of regions (including the empty set obtained as the union of the empty set of regions and X obtained as the union of all 2^n regions) is closed under union, intersection, and complement relative to X and therefore forms a concrete Boolean algebra. Again, there are finitely many subsets of an infinite set forming a concrete Boolean algebra, with example 2 arising as the case n = 0 of no curves. === Subsets as bit vectors === A subset Y of X can be identified with an indexed family of bits with index set X, with the bit indexed by x ∈ X being 1 or 0 according to whether or not x ∈ Y. (This is the so-called characteristic function notion of a subset.) For example, a 32-bit computer word consists of 32 bits indexed by the set {0,1,2,...,31}, with 0 and 31 indexing the low and high order bits respectively. For a smaller example, if ⁠ X = { a , b , c } {\displaystyle X=\{a,b,c\}} ⁠ where a, b, c are viewed as bit positions in that order from left to right, the eight subsets {}, {c}, {b}, {b,c}, {a}, {a,c}, {a,b}, and {a,b,c} of X can be identified with the respective bit vectors 000, 001, 010, 011, 100, 101, 110, and 111.
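The identification of subsets with bit vectors just described can be sketched in Python, following the text's convention that a, b, c label bit positions from left to right (the helper name `to_bits` is our own):

```python
from itertools import combinations

X = ("a", "b", "c")  # bit positions, in order from left to right

def to_bits(subset):
    """Encode a subset of X as a 3-character bit vector."""
    return "".join("1" if e in subset else "0" for e in X)

assert to_bits(set()) == "000"
assert to_bits({"c"}) == "001"
assert to_bits({"b", "c"}) == "011"
assert to_bits({"a", "c"}) == "101"
assert to_bits({"a", "b", "c"}) == "111"

# All 2**3 = 8 subsets map onto 8 distinct bit vectors.
vectors = {to_bits(set(s)) for r in range(4) for s in combinations(X, r)}
assert len(vectors) == 8
```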
Bit vectors indexed by the set of natural numbers are infinite sequences of bits, while those indexed by the reals in the unit interval [0,1] are packed too densely to be able to write conventionally but nonetheless form well-defined indexed families (imagine coloring every point of the interval [0,1] either black or white independently; the black points then form an arbitrary subset of [0,1]). From this bit vector viewpoint, a concrete Boolean algebra can be defined equivalently as a nonempty set of bit vectors all of the same length (more generally, indexed by the same set) and closed under the bit vector operations of bitwise ∧, ∨, and ¬, as in 1010∧0110 = 0010, 1010∨0110 = 1110, and ¬1010 = 0101, the bit vector realizations of intersection, union, and complement respectively. === Prototypical Boolean algebra === The set {0,1} and its Boolean operations as treated above can be understood as the special case of bit vectors of length one, which by the identification of bit vectors with subsets can also be understood as the two subsets of a one-element set. This is called the prototypical Boolean algebra, justified by the following observation. The laws satisfied by all nondegenerate concrete Boolean algebras coincide with those satisfied by the prototypical Boolean algebra. This observation is proved as follows. Certainly any law satisfied by all concrete Boolean algebras is satisfied by the prototypical one since it is concrete. Conversely any law that fails for some concrete Boolean algebra must have failed at a particular bit position, in which case that position by itself furnishes a one-bit counterexample to that law. Nondegeneracy ensures the existence of at least one bit position because there is only one empty bit vector. The final goal of the next section can be understood as eliminating "concrete" from the above observation. That goal is reached via the stronger observation that, up to isomorphism, all Boolean algebras are concrete. 
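The bitwise realizations of intersection, union, and complement given above carry over directly to Python's integer bitwise operators; a short sketch (the explicit 4-bit mask is needed because Python integers are unbounded, so complement must be taken relative to the full vector 1111):

```python
# Bit vectors of length 4 represented as integers.
WIDTH = 4
FULL = (1 << WIDTH) - 1          # 0b1111, the full vector

a, b = 0b1010, 0b0110

assert a & b == 0b0010           # bitwise AND realizes intersection
assert a | b == 0b1110           # bitwise OR realizes union
assert FULL & ~a == 0b0101       # bitwise NOT (masked) realizes complement

# Read as subsets (bit i <-> element i): 1010 = {3, 1} and
# 0110 = {2, 1}, and indeed {3, 1} ∩ {2, 1} = {1} = 0010.
```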
=== Boolean algebras: the definition === The Boolean algebras so far have all been concrete, consisting of bit vectors or equivalently of subsets of some set. Such a Boolean algebra consists of a set and operations on that set which can be shown to satisfy the laws of Boolean algebra. Instead of showing that the Boolean laws are satisfied, we can instead postulate a set X, two binary operations on X, and one unary operation, and require that those operations satisfy the laws of Boolean algebra. The elements of X need not be bit vectors or subsets but can be anything at all. This leads to the more general abstract definition. A Boolean algebra is any set with binary operations ∧ and ∨ and a unary operation ¬ thereon satisfying the Boolean laws. For the purposes of this definition it is irrelevant how the operations came to satisfy the laws, whether by fiat or proof. All concrete Boolean algebras satisfy the laws (by proof rather than fiat), whence every concrete Boolean algebra is a Boolean algebra according to our definitions. This axiomatic definition of a Boolean algebra as a set and certain operations satisfying certain laws or axioms by fiat is entirely analogous to the abstract definitions of group, ring, field etc. characteristic of modern or abstract algebra. Given any complete axiomatization of Boolean algebra, such as the axioms for a complemented distributive lattice, a sufficient condition for an algebraic structure of this kind to satisfy all the Boolean laws is that it satisfy just those axioms. The following is therefore an equivalent definition. A Boolean algebra is a complemented distributive lattice. The section on axiomatization lists other axiomatizations, any of which can be made the basis of an equivalent definition. === Representable Boolean algebras === Although every concrete Boolean algebra is a Boolean algebra, not every Boolean algebra need be concrete. 
Let n be a square-free positive integer, one not divisible by the square of an integer, for example 30 but not 12. The operations of greatest common divisor, least common multiple, and division into n (that is, ¬x = n/x), can be shown to satisfy all the Boolean laws when their arguments range over the positive divisors of n. Hence those divisors form a Boolean algebra. These divisors are not subsets of a set, making the divisors of n a Boolean algebra that is not concrete according to our definitions. However, if each divisor of n is represented by the set of its prime factors, this nonconcrete Boolean algebra is isomorphic to the concrete Boolean algebra consisting of all sets of prime factors of n, with union corresponding to least common multiple, intersection to greatest common divisor, and complement to division into n. So this example, while not technically concrete, is at least "morally" concrete via this representation, called an isomorphism. This example is an instance of the following notion. A Boolean algebra is called representable when it is isomorphic to a concrete Boolean algebra. The next question is answered positively as follows. Every Boolean algebra is representable. That is, up to isomorphism, abstract and concrete Boolean algebras are the same thing. This result depends on the Boolean prime ideal theorem, a choice principle slightly weaker than the axiom of choice. This strong relationship implies a weaker result strengthening the observation in the previous subsection to the following easy consequence of representability. The laws satisfied by all Boolean algebras coincide with those satisfied by the prototypical Boolean algebra. It is weaker in the sense that it does not of itself imply representability. Boolean algebras are special here, for example a relation algebra is a Boolean algebra with additional structure but it is not the case that every relation algebra is representable in the sense appropriate to relation algebras. 
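The divisor example above can be checked by brute force; a minimal Python sketch taking n = 30 as in the text, with gcd playing the role of ∧, lcm of ∨, and division into n of complement:

```python
from math import gcd

n = 30  # square-free: 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]

lcm = lambda x, y: x * y // gcd(x, y)
comp = lambda x: n // x          # "division into n" as complement

for x in divisors:
    assert gcd(x, comp(x)) == 1       # x ∧ ¬x = 0 (bottom element is 1)
    assert lcm(x, comp(x)) == n       # x ∨ ¬x = 1 (top element is n)
    for y in divisors:
        for z in divisors:
            # distributivity of gcd over lcm
            assert gcd(x, lcm(y, z)) == lcm(gcd(x, y), gcd(x, z))

print(sorted(divisors))  # the 8 divisors of 30 form a Boolean algebra
```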
== Axiomatizing Boolean algebra == The above definition of an abstract Boolean algebra as a set together with operations satisfying "the" Boolean laws raises the question of what those laws are. A simplistic answer is "all Boolean laws", which can be defined as all equations that hold for the Boolean algebra of 0 and 1. However, since there are infinitely many such laws, this is not a satisfactory answer in practice, leading to the question of whether it suffices to require only finitely many laws to hold. In the case of Boolean algebras, the answer is "yes": the finitely many equations listed above are sufficient. Thus, Boolean algebra is said to be finitely axiomatizable or finitely based. Moreover, the number of equations needed can be further reduced. To begin with, some of the above laws are implied by some of the others. A sufficient subset of the above laws consists of the pairs of associativity, commutativity, and absorption laws, distributivity of ∧ over ∨ (or the other distributivity law—one suffices), and the two complement laws. In fact, this is the traditional axiomatization of Boolean algebra as a complemented distributive lattice. By introducing additional laws not listed above, it becomes possible to shorten the list of needed equations yet further; for instance, with the vertical bar representing the Sheffer stroke operation, the single axiom ( ( a ∣ b ) ∣ c ) ∣ ( a ∣ ( ( a ∣ c ) ∣ a ) ) = c {\displaystyle ((a\mid b)\mid c)\mid (a\mid ((a\mid c)\mid a))=c} is sufficient to completely axiomatize Boolean algebra. It is also possible to find longer single axioms using more conventional operations; see Minimal axioms for Boolean algebra. == Propositional logic == Propositional logic is a logical system that is intimately connected to Boolean algebra.
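That the single Sheffer-stroke axiom above at least holds in the two-element algebra can be confirmed by exhausting the eight assignments (a full proof that it axiomatizes Boolean algebra is far harder; this only checks soundness):

```python
from itertools import product

nand = lambda x, y: 1 - (x & y)   # the Sheffer stroke

# Check ((a|b)|c) | (a|((a|c)|a)) = c over {0,1}
for a, b, c in product((0, 1), repeat=3):
    lhs = nand(nand(nand(a, b), c), nand(a, nand(nand(a, c), a)))
    assert lhs == c

print("single axiom holds in the two-element algebra")
```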
Many syntactic concepts of Boolean algebra carry over to propositional logic with only minor changes in notation and terminology, while the semantics of propositional logic are defined via Boolean algebras in a way that the tautologies (theorems) of propositional logic correspond to equational theorems of Boolean algebra. Syntactically, every Boolean term corresponds to a propositional formula of propositional logic. In this translation between Boolean algebra and propositional logic, Boolean variables x, y, ... become propositional variables (or atoms) P, Q, ... Boolean terms such as x ∨ y become propositional formulas P ∨ Q; 0 becomes false or ⊥, and 1 becomes true or ⊤. It is convenient when referring to generic propositions to use Greek letters Φ, Ψ, ... as metavariables (variables outside the language of propositional calculus, used when talking about propositional calculus) to denote propositions. The semantics of propositional logic rely on truth assignments. The essential idea of a truth assignment is that the propositional variables are mapped to elements of a fixed Boolean algebra, and then the truth value of a propositional formula using these letters is the element of the Boolean algebra that is obtained by computing the value of the Boolean term corresponding to the formula. In classical semantics, only the two-element Boolean algebra is used, while in Boolean-valued semantics arbitrary Boolean algebras are considered. A tautology is a propositional formula that is assigned truth value 1 by every truth assignment of its propositional variables to an arbitrary Boolean algebra (or, equivalently, every truth assignment to the two-element Boolean algebra). These semantics permit a translation between tautologies of propositional logic and equational theorems of Boolean algebra. Every tautology Φ of propositional logic can be expressed as the Boolean equation Φ = 1, which will be a theorem of Boolean algebra.
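Tautology checking over the two-element Boolean algebra amounts to enumerating all truth assignments, as described above; a minimal sketch (the helper names `is_tautology` and `imp` are ours):

```python
from itertools import product

def is_tautology(formula, variables):
    """True if the formula evaluates to true under every
    assignment of the two truth values to its variables."""
    return all(formula(*bits)
               for bits in product((False, True), repeat=len(variables)))

imp = lambda p, q: (not p) or q   # classical definition of P -> Q

# P -> P is a tautology
assert is_tautology(lambda p: imp(p, p), "P")

# Peirce's law ((P -> Q) -> P) -> P is a tautology
assert is_tautology(lambda p, q: imp(imp(imp(p, q), p), p), "PQ")

# P -> Q alone is not
assert not is_tautology(imp, "PQ")
```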
Conversely, every theorem Φ = Ψ of Boolean algebra corresponds to the tautologies (Φ ∨ ¬Ψ) ∧ (¬Φ ∨ Ψ) and (Φ ∧ Ψ) ∨ (¬Φ ∧ ¬Ψ). If → is in the language, these last tautologies can also be written as (Φ → Ψ) ∧ (Ψ → Φ), or as two separate theorems Φ → Ψ and Ψ → Φ; if ≡ is available, then the single tautology Φ ≡ Ψ can be used. === Applications === One motivating application of propositional calculus is the analysis of propositions and deductive arguments in natural language. Whereas the proposition "if x = 3, then x + 1 = 4" depends on the meanings of such symbols as + and 1, the proposition "if x = 3, then x = 3" does not; it is true merely by virtue of its structure, and remains true whether "x = 3" is replaced by "x = 4" or "the moon is made of green cheese." The generic or abstract form of this tautology is "if P, then P," or in the language of Boolean algebra, P → P. Replacing P by x = 3 or any other proposition is called instantiation of P by that proposition. The result of instantiating P in an abstract proposition is called an instance of the proposition. Thus, x = 3 → x = 3 is a tautology by virtue of being an instance of the abstract tautology P → P. All occurrences of the instantiated variable must be instantiated with the same proposition, to avoid such nonsense as P → x = 3 or x = 3 → x = 4. Propositional calculus restricts attention to abstract propositions, those built up from propositional variables using Boolean operations. Instantiation is still possible within propositional calculus, but only by instantiating propositional variables by abstract propositions, such as instantiating Q by Q → P in P → (Q → P) to yield the instance P → ((Q → P) → P). (The availability of instantiation as part of the machinery of propositional calculus avoids the need for metavariables within the language of propositional calculus, since ordinary propositional variables can be considered within the language to denote arbitrary propositions. 
The metavariables themselves are outside the reach of instantiation, not being part of the language of propositional calculus but rather part of the same language for talking about it that this sentence is written in, where there is a need to be able to distinguish propositional variables and their instantiations as being distinct syntactic entities.) === Deductive systems for propositional logic === An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A proof in an axiom system A is a finite nonempty sequence of propositions each of which is either an instance of an axiom of A or follows by some rule of A from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem. ==== Sequent calculus ==== Propositional calculus is commonly organized as a Hilbert system, whose operations are just those of Boolean algebra and whose theorems are Boolean tautologies, those Boolean terms equal to the Boolean constant 1. Another form is sequent calculus, which has two sorts, propositions as in ordinary propositional calculus, and pairs of lists of propositions called sequents, such as A ∨ B, A ∧ C, ... ⊢ A, B → C, .... The two halves of a sequent are called the antecedent and the succedent respectively. The customary metavariable denoting an antecedent or part thereof is Γ, and for a succedent Δ; thus Γ, A ⊢ Δ would denote a sequent whose succedent is a list Δ and whose antecedent is a list Γ with an additional proposition A appended after it. 
The antecedent is interpreted as the conjunction of its propositions, the succedent as the disjunction of its propositions, and the sequent itself as the entailment of the succedent by the antecedent. Entailment differs from implication in that whereas the latter is a binary operation that returns a value in a Boolean algebra, the former is a binary relation which either holds or does not hold. In this sense, entailment is an external form of implication, meaning external to the Boolean algebra, thinking of the reader of the sequent as also being external and interpreting and comparing antecedents and succedents in some Boolean algebra. The natural interpretation of ⊢ is as ≤ in the partial order of the Boolean algebra defined by x ≤ y just when x ∨ y = y. This ability to mix external implication ⊢ and internal implication → in the one logic is among the essential differences between sequent calculus and propositional calculus. == Applications == Boolean algebra as the calculus of two values is fundamental to computer circuits, computer programming, and mathematical logic, and is also used in other areas of mathematics such as set theory and statistics. === Computers === In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits. Today, all modern general-purpose computers perform their functions using two-value Boolean logic; that is, their electrical circuits are a physical manifestation of two-value Boolean logic. They achieve this in various ways: as voltages on wires in high-speed circuits and capacitive storage devices, as orientations of a magnetic domain in ferromagnetic storage devices, as holes in punched cards or paper tape, and so on. 
(Some early computers used decimal circuits or mechanisms instead of two-valued logic circuits.) Of course, it is possible to code more than two symbols in any given medium. For example, one might use respectively 0, 1, 2, and 3 volts to code a four-symbol alphabet on a wire, or holes of different sizes in a punched card. In practice, the tight constraints of high speed, small size, and low power combine to make noise a major factor. This makes it hard to distinguish between symbols when there are several possible symbols that could occur at a single site. Rather than attempting to distinguish between four voltages on one wire, digital designers have settled on two voltages per wire, high and low. Computers use two-value Boolean circuits for the above reasons. The most common computer architectures use ordered sequences of Boolean values, called bits, of 32 or 64 values, e.g. 01101000110101100101010101001011. When programming in machine code, assembly language, and certain other programming languages, programmers work with the low-level digital structure of the data registers. These registers operate on voltages, where zero volts represents Boolean 0, and a reference voltage (often +5 V, +3.3 V, or +1.8 V) represents Boolean 1. Such languages support both numeric operations and logical operations. In this context, "numeric" means that the computer treats sequences of bits as binary numbers (base two numbers) and executes arithmetic operations like add, subtract, multiply, or divide. "Logical" refers to the Boolean logical operations of disjunction, conjunction, and negation between two sequences of bits, in which each bit in one sequence is simply compared to its counterpart in the other sequence. Programmers therefore have the option of working in and applying the rules of either numeric algebra or Boolean algebra as needed. A core differentiating feature between these families of operations is the existence of the carry operation in the first but not the second. 
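The distinction drawn above, that numeric operations carry while logical operations act on each bit independently, can be seen directly on small bit patterns:

```python
a, b = 0b0110, 0b0011

# Logical (bitwise) operations combine corresponding bits with no carry.
assert a | b == 0b0111
assert a & b == 0b0010
assert a ^ b == 0b0101

# Numeric addition of the same patterns propagates a carry out of the
# low-order bits, setting a bit that neither input had set.
assert a + b == 0b1001
```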
=== Two-valued logic === Other areas where two values are a good choice are the law and mathematics. In everyday relaxed conversation, nuanced or complex answers such as "maybe" or "only on the weekend" are acceptable. In more focused situations such as a court of law or theorem-based mathematics, however, it is deemed advantageous to frame questions so as to admit a simple yes-or-no answer—is the defendant guilty or not guilty, is the proposition true or false—and to disallow any other answer. However limiting this might prove in practice for the respondent, the principle of the simple yes–no question has become a central feature of both judicial and mathematical logic, making two-valued logic deserving of organization and study in its own right. A central concept of set theory is membership. An organization may permit multiple degrees of membership, such as novice, associate, and full. With sets, however, an element is either in or out. The candidates for membership in a set work just like the wires in a digital computer: each candidate is either a member or a nonmember, just as each wire is either high or low. Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory. Two-valued logic can be extended to multi-valued logic, notably by replacing the Boolean domain {0, 1} with the unit interval [0,1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic.
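The fuzzy-logic replacements just described (1 − x for NOT, multiplication for AND, and OR obtained from them via De Morgan's law) can be written out directly; a small sketch using exactly representable fractions to avoid floating-point noise:

```python
f_not = lambda x: 1 - x
f_and = lambda x, y: x * y
# Disjunction via De Morgan's law: x OR y = NOT(NOT x AND NOT y)
f_or  = lambda x, y: 1 - (1 - x) * (1 - y)   # = x + y - x*y

# On the endpoints {0, 1} these agree with ordinary Boolean logic...
assert f_not(0) == 1 and f_and(1, 1) == 1 and f_or(0, 1) == 1

# ...while intermediate values behave like independent probabilities.
assert f_and(0.5, 0.5) == 0.25
assert f_or(0.5, 0.5) == 0.75
assert f_not(0.25) == 0.75
```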
In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true. === Boolean operations === The original application for Boolean operations was mathematical logic, where it combines the truth values, true or false, of individual formulas. ==== Natural language ==== Natural languages such as English have words for several Boolean operations, in particular conjunction (and), disjunction (or), negation (not), and implication (implies). But not is synonymous with and not. When used to combine situational assertions such as "the block is on the table" and "cats drink milk", which naïvely are either true or false, these logical connectives often carry the meanings of their logical counterparts. However, with descriptions of behavior such as "Jim walked through the door", one starts to notice differences such as failure of commutativity, for example, the conjunction of "Jim opened the door" with "Jim walked through the door" in that order is not equivalent to their conjunction in the other order, since and usually means and then in such cases. Questions can be similar: the order "Is the sky blue, and why is the sky blue?" makes more sense than the reverse order. Conjunctive commands about behavior are like behavioral assertions, as in get dressed and go to school. Disjunctive commands such as love me or leave me or fish or cut bait tend to be asymmetric via the implication that one alternative is less preferable. Conjoined nouns such as tea and milk generally describe aggregation as with set union while tea or milk is a choice. However, context can reverse these senses, as in your choices are coffee and tea which usually means the same as your choices are coffee or tea (alternatives). Double negation, as in "I don't not like milk", rarely means literally "I do like milk" but rather conveys some sort of hedging, as though to imply that there is a third possibility.
"Not not P" can be loosely interpreted as "surely P", and although P necessarily implies "not not P," the converse is suspect in English, much as with intuitionistic logic. In view of the highly idiosyncratic usage of conjunctions in natural languages, Boolean algebra cannot be considered a reliable framework for interpreting them. ==== Digital logic ==== Boolean operations are used in digital logic to combine the bits carried on individual wires, thereby interpreting them over {0,1}. When a vector of n identical binary gates are used to combine two bit vectors each of n bits, the individual bit operations can be understood collectively as a single operation on values from a Boolean algebra with 2n elements. ==== Naive set theory ==== Naive set theory interprets Boolean operations as acting on subsets of a given set X. As we saw earlier this behavior exactly parallels the coordinate-wise combinations of bit vectors, with the union of two sets corresponding to the disjunction of two bit vectors and so on. ==== Video cards ==== The 256-element free Boolean algebra on three generators is deployed in computer displays based on raster graphics, which use bit blit to manipulate whole regions consisting of pixels, relying on Boolean operations to specify how the source region should be combined with the destination, typically with the help of a third region called the mask. Modern video cards offer all 223 = 256 ternary operations for this purpose, with the choice of operation being a one-byte (8-bit) parameter. The constants SRC = 0xaa or 0b10101010, DST = 0xcc or 0b11001100, and MSK = 0xf0 or 0b11110000 allow Boolean operations such as (SRC^DST)&MSK (meaning XOR the source and destination and then AND the result with the mask) to be written directly as a constant denoting a byte calculated at compile time, 0x80 in the (SRC^DST)&MSK example, 0x88 if just SRC^DST, etc. 
At run time the video card interprets the byte as the raster operation indicated by the original expression in a uniform way that requires remarkably little hardware and which takes time completely independent of the complexity of the expression. ==== Modeling and CAD ==== Solid modeling systems for computer aided design offer a variety of methods for building objects from other objects, combination by Boolean operations being one of them. In this method the space in which objects exist is understood as a set S of voxels (the three-dimensional analogue of pixels in two-dimensional graphics) and shapes are defined as subsets of S, allowing objects to be combined as sets via union, intersection, etc. One obvious use is in building a complex shape from simple shapes simply as the union of the latter. Another use is in sculpting understood as removal of material: any grinding, milling, routing, or drilling operation that can be performed with physical machinery on physical materials can be simulated on the computer with the Boolean operation x ∧ ¬y or x − y, which in set theory is set difference, remove the elements of y from those of x. Thus given two shapes one to be machined and the other the material to be removed, the result of machining the former to remove the latter is described simply as their set difference. ==== Boolean searches ==== Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set." The following examples use a syntax supported by Google. Doublequotes are used to combine whitespace-separated words into a single search term. 
Whitespace is used to specify logical AND, as it is the default operator for joining search terms: "Search term 1" "Search term 2" The OR keyword is used for logical OR: "Search term 1" OR "Search term 2" A prefixed minus sign is used for logical NOT: "Search term 1" −"Search term 2" == See also == == Notes == == References == == Further reading == Mano, Morris; Ciletti, Michael D. (2013). Digital Design. Pearson. ISBN 978-0-13-277420-8. Whitesitt, J. Eldon (1995). Boolean algebra and its applications. Courier Dover Publications. ISBN 978-0-486-68483-3. Dwinger, Philip (1971). Introduction to Boolean algebras. Würzburg, Germany: Physica Verlag. Sikorski, Roman (1969). Boolean Algebras (3 ed.). Berlin, Germany: Springer-Verlag. ISBN 978-0-387-04469-9. Bocheński, Józef Maria (1959). A Précis of Mathematical Logic. Translated from the French and German editions by Otto Bird. Dordrecht, South Holland: D. Reidel. === Historical perspective === Boole, George (1848). "The Calculus of Logic". Cambridge and Dublin Mathematical Journal. III: 183–198. Hailperin, Theodore (1986). Boole's logic and probability: a critical exposition from the standpoint of contemporary algebra, logic, and probability theory (2 ed.). Elsevier. ISBN 978-0-444-87952-3. Gabbay, Dov M.; Woods, John, eds. (2004). The rise of modern logic: from Leibniz to Frege. Handbook of the History of Logic. Vol. 3. Elsevier. ISBN 978-0-444-51611-4. Several relevant chapters by Hailperin, Valencia, and Grattan-Guinness. Badesa, Calixto (2004). "Chapter 1. Algebra of Classes and Propositional Calculus". The birth of model theory: Löwenheim's theorem in the frame of the theory of relatives. Princeton University Press. ISBN 978-0-691-05853-5. Stanković, Radomir S.; Astola, Jaakko Tapio (2011). From Boolean Logic to Switching Circuits and Automata: Towards Modern Information Technology. Studies in Computational Intelligence. Vol. 335 (1 ed.).
Berlin & Heidelberg, Germany: Springer-Verlag. pp. xviii + 212. doi:10.1007/978-3-642-11682-7. ISBN 978-3-642-11681-0. ISSN 1860-949X. LCCN 2011921126. Retrieved 2022-10-25. "The Algebra of Logic Tradition" entry by Burris, Stanley in the Stanford Encyclopedia of Philosophy, 21 February 2012 == External links ==
Wikipedia/Boolean_algebra
In mathematics, a rate is the quotient of two quantities, often represented as a fraction. If the divisor (or fraction denominator) in the rate is equal to one expressed as a single unit, and if it is assumed that this quantity can be changed systematically (i.e., is an independent variable), then the dividend (the fraction numerator) of the rate expresses the corresponding rate of change in the other (dependent) variable. In some cases, it may be regarded as a change to a value, which is caused by a change of a value in respect to another value. For example, acceleration is a change in velocity with respect to time. Temporal rate is a common type of rate ("per unit of time"), such as speed, heart rate, and flux. In fact, a rate is often a synonym of rhythm or frequency, a count per second (i.e., hertz); e.g., radio frequencies or sample rates. In describing the units of a rate, the word "per" is used to separate the units of the two measurements used to calculate the rate; for example, a heart rate is expressed as "beats per minute". Rates that have a non-time divisor or denominator include exchange rates, literacy rates, and electric field (in volts per meter). A rate defined using two numbers of the same units will result in a dimensionless quantity, also known as a ratio or simply as a rate (such as tax rates) or counts (such as literacy rate). Dimensionless rates can be expressed as a percentage (for example, the global literacy rate in 1998 was 80%), fraction, or multiple. == Properties and examples == Rates and ratios often vary with time, location, particular element (or subset) of a set of objects, etc. Thus they are often mathematical functions. A rate (or ratio) may often be thought of as an output-input ratio or a benefit-cost ratio, all considered in the broad sense. For example, miles per hour in transportation is the output (or benefit) in terms of miles of travel, which one gets from spending an hour (a cost in time) of traveling (at this velocity).
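Averaging rates such as miles per hour requires care: when equal distances are covered at different speeds, the correct average is the harmonic mean rather than the arithmetic mean. A minimal sketch (the numbers are illustrative):

```python
from statistics import harmonic_mean

# A car covers two equal 60-mile legs, one at 30 mph and one at 60 mph.
# The arithmetic mean of the rates overstates the true average speed,
# because more time is spent at the slower rate.
speeds = [30.0, 60.0]

arithmetic = sum(speeds) / len(speeds)  # 45.0 mph, which is wrong here
true_avg = harmonic_mean(speeds)        # 40.0 mph

# Check against first principles: total distance / total time.
distance = 120.0                        # miles
time = 60.0 / 30.0 + 60.0 / 60.0        # 3 hours
assert abs(true_avg - distance / time) < 1e-9

# Inverse rates: 5 mi/kWh corresponds to 1/5 kWh/mi, i.e. 200 Wh/mi.
assert 1000 / 5 == 200.0
```

The harmonic mean is appropriate here precisely because the legs are equal in the numerator unit (miles), not in the denominator unit (hours).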
A set of sequential indices may be used to enumerate elements (or subsets) of a set of ratios under study. For example, in finance, one could define an index i by assigning consecutive integers to companies, to political subdivisions (such as states), to different investments, etc. The reason for using indices is so that a set of ratios (i = 0, ..., N) can be used in an equation to calculate a function of the rates, such as an average of a set of ratios; for example, the average velocity found from a set of velocities v_i. Finding averages may involve using weighted averages and possibly using the harmonic mean. A ratio r = a/b has both a numerator "a" and a denominator "b". The values of a and b may be real numbers or integers. The inverse of a ratio r is 1/r = b/a. A rate may be equivalently expressed as an inverse of its value if the ratio of its units is also inverse. For example, 5 miles (mi) per kilowatt-hour (kWh) corresponds to 1/5 kWh/mi (or 200 Wh/mi). Rates are relevant to many aspects of everyday life. For example: How fast are you driving? The speed of the car (often expressed in miles per hour) is a rate. What interest does your savings account pay you? The amount of interest paid per year is a rate. == Rate of change == Consider the case where the numerator f {\displaystyle f} of a rate is a function f ( a ) {\displaystyle f(a)} where a {\displaystyle a} happens to be the denominator of the rate δ f / δ a {\displaystyle \delta f/\delta a} .
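The distinction between an average and an instantaneous rate of change, formalized next, can be previewed numerically; a minimal sketch, with an illustrative position function:

```python
def f(a: float) -> float:
    """Position (miles) of a car as a function of time a (hours)."""
    return 30.0 * a + 5.0 * a * a  # accelerating from 30 mph

def average_rate(f, a: float, h: float) -> float:
    """Average rate of change of f over the interval [a, a + h]."""
    return (f(a + h) - f(a)) / h

# Shrinking h makes the average rate approach the instantaneous rate,
# which for this f is the derivative f'(a) = 30 + 10a, i.e. 40 mph at a = 1.
for h in (1.0, 0.1, 0.001):
    print(h, average_rate(f, 1.0, h))  # 45.0, 40.5, 40.005: tending to 40

assert abs(average_rate(f, 1.0, 1e-6) - 40.0) < 1e-3
```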
A rate of change of f {\displaystyle f} with respect to a {\displaystyle a} (where a {\displaystyle a} is incremented by h {\displaystyle h} ) can be formally defined in two ways: Average rate of change = f ( a + h ) − f ( a ) h Instantaneous rate of change = lim h → 0 f ( a + h ) − f ( a ) h {\displaystyle {\begin{aligned}{\mbox{Average rate of change}}&={\frac {f(a+h)-f(a)}{h}}\\{\mbox{Instantaneous rate of change}}&=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}\end{aligned}}} where f ( a ) {\displaystyle f(a)} and f ( a + h ) {\displaystyle f(a+h)} are the values of the function at the endpoints of the interval from a to a + h. An instantaneous rate of change is equivalent to a derivative. For example, the average speed of a car can be calculated using the total distance traveled between two points, divided by the travel time. In contrast, the instantaneous velocity can be determined by viewing a speedometer. == Temporal rates == In chemistry and physics: Speed, the rate of change of position, or the change of position per unit of time Acceleration, the rate of change in speed, or the change in speed per unit of time Power, the rate of doing work, or the amount of energy transferred per unit time Frequency, the number of occurrences of a repeating event per unit of time Angular frequency and rotation speed, the number of turns per unit of time Reaction rate, the speed at which chemical reactions occur Volumetric flow rate, the volume of fluid which passes through a given surface per unit of time; e.g., cubic meters per second === Counts-per-time rates === Radioactive decay, the amount of radioactive material in which one nucleus decays per second, measured in becquerels In computing: Bit rate, the number of bits that are conveyed or processed by a computer per unit of time Symbol rate, the number of symbol changes (signaling events) made to the transmission medium per second Sampling rate, the number of samples (signal measurements) per second Miscellaneous definitions: Rate of reinforcement, number of reinforcements per unit of time, usually
per minute Heart rate, usually measured in beats per minute == Economics/finance rates/ratios == Exchange rate, how much one currency is worth in terms of the other Inflation rate, the ratio of the change in the general price level during a year to the starting price level Interest rate, the price a borrower pays for the use of the money they do not own (ratio of payment to amount borrowed) Price–earnings ratio, market price per share of stock divided by annual earnings per share Rate of return, the ratio of money gained or lost on an investment relative to the amount of money invested Tax rate, the tax amount divided by the taxable income Unemployment rate, the ratio of the number of people who are unemployed to the number in the labor force Wage rate, the amount paid for working a given amount of time (or doing a standard amount of accomplished work) (ratio of payment to time) == Other rates == Birth rate and mortality rate, the number of births or deaths scaled to the size of that population, per unit of time Literacy rate, the proportion of the population over age fifteen that can read and write Sex ratio or gender ratio, the ratio of males to females in a population == See also == Derivative Gradient Hertz Slope == References ==
Wikipedia/Rates_of_change
In mathematics, a quadratic function of a single variable is a function of the form f ( x ) = a x 2 + b x + c , a ≠ 0 , {\displaystyle f(x)=ax^{2}+bx+c,\quad a\neq 0,} where ⁠ x {\displaystyle x} ⁠ is its variable, and ⁠ a {\displaystyle a} ⁠, ⁠ b {\displaystyle b} ⁠, and ⁠ c {\displaystyle c} ⁠ are coefficients. The expression ⁠ a x 2 + b x + c {\displaystyle \textstyle ax^{2}+bx+c} ⁠, especially when treated as an object in itself rather than as a function, is a quadratic polynomial, a polynomial of degree two. In elementary mathematics a polynomial and its associated polynomial function are rarely distinguished, and the terms quadratic function and quadratic polynomial are nearly synonymous and often abbreviated as quadratic. The graph of a real single-variable quadratic function is a parabola. If a quadratic function is equated with zero, then the result is a quadratic equation. The solutions of a quadratic equation are the zeros (or roots) of the corresponding quadratic function, of which there can be two, one, or zero. The solutions are described by the quadratic formula. A quadratic polynomial or quadratic function can involve more than one variable. For example, a two-variable quadratic function of variables ⁠ x {\displaystyle x} ⁠ and ⁠ y {\displaystyle y} ⁠ has the form f ( x , y ) = a x 2 + b x y + c y 2 + d x + e y + f , {\displaystyle f(x,y)=ax^{2}+bxy+cy^{2}+dx+ey+f,} with at least one of ⁠ a {\displaystyle a} ⁠, ⁠ b {\displaystyle b} ⁠, and ⁠ c {\displaystyle c} ⁠ not equal to zero. In general the zeros of such a quadratic function describe a conic section (a circle or other ellipse, a parabola, or a hyperbola) in the ⁠ x {\displaystyle x} ⁠–⁠ y {\displaystyle y} ⁠ plane. A quadratic function can have an arbitrarily large number of variables. The set of its zeros forms a quadric, which is a surface in the case of three variables and a hypersurface in the general case. == Etymology == The adjective quadratic comes from the Latin word quadrātum ("square").
A term raised to the second power like ⁠ x 2 {\displaystyle \textstyle x^{2}} ⁠ is called a square in algebra because it is the area of a square with side ⁠ x {\displaystyle x} ⁠. == Terminology == === Coefficients === The coefficients of a quadratic function are often taken to be real or complex numbers, but they may be taken in any ring, in which case the domain and the codomain are this ring (see polynomial evaluation). === Degree === When using the term "quadratic polynomial", authors sometimes mean "having degree exactly 2", and sometimes "having degree at most 2". If the degree is less than 2, this may be called a "degenerate case". Usually the context will establish which of the two is meant. Sometimes the word "order" is used with the meaning of "degree", e.g. a second-order polynomial. However, where the "degree of a polynomial" refers to the largest degree of a non-zero term of the polynomial, more typically "order" refers to the lowest degree of a non-zero term of a power series. === Variables === A quadratic polynomial may involve a single variable x (the univariate case), or multiple variables such as x, y, and z (the multivariate case). ==== The one-variable case ==== Any single-variable quadratic polynomial may be written as a x 2 + b x + c , {\displaystyle ax^{2}+bx+c,} where x is the variable, and a, b, and c represent the coefficients. Such polynomials often arise in a quadratic equation a x 2 + b x + c = 0. {\displaystyle ax^{2}+bx+c=0.} The solutions to this equation are called the roots and can be expressed in terms of the coefficients as the quadratic formula. Each quadratic polynomial has an associated quadratic function, whose graph is a parabola. ==== Bivariate and multivariate cases ==== Any quadratic polynomial with two variables may be written as a x 2 + b y 2 + c x y + d x + e y + f , {\displaystyle ax^{2}+by^{2}+cxy+dx+ey+f,} where x and y are the variables and a, b, c, d, e, f are the coefficients, and one of a, b and c is nonzero. 
Such polynomials are fundamental to the study of conic sections, as the implicit equation of a conic section is obtained by equating to zero a quadratic polynomial, and the zeros of a quadratic function form a (possibly degenerate) conic section. Similarly, quadratic polynomials with three or more variables correspond to quadric surfaces or hypersurfaces. Quadratic polynomials that have only terms of degree two are called quadratic forms. == Forms of a univariate quadratic function == A univariate quadratic function can be expressed in three formats: f ( x ) = a x 2 + b x + c {\displaystyle f(x)=ax^{2}+bx+c} is called the standard form, f ( x ) = a ( x − r 1 ) ( x − r 2 ) {\displaystyle f(x)=a(x-r_{1})(x-r_{2})} is called the factored form, where r1 and r2 are the roots of the quadratic function and the solutions of the corresponding quadratic equation. f ( x ) = a ( x − h ) 2 + k {\displaystyle f(x)=a(x-h)^{2}+k} is called the vertex form, where h and k are the x and y coordinates of the vertex, respectively. The coefficient a is the same value in all three forms. To convert the standard form to factored form, one needs only the quadratic formula to determine the two roots r1 and r2. To convert the standard form to vertex form, one needs a process called completing the square. To convert the factored form (or vertex form) to standard form, one needs to multiply, expand and/or distribute the factors. == Graph of the univariate function == Regardless of the format, the graph of a univariate quadratic function f ( x ) = a x 2 + b x + c {\displaystyle f(x)=ax^{2}+bx+c} is a parabola (as shown at the right). Equivalently, this is the graph of the bivariate quadratic equation y = a x 2 + b x + c {\displaystyle y=ax^{2}+bx+c} . If a > 0, the parabola opens upwards. If a < 0, the parabola opens downwards. The coefficient a controls the degree of curvature of the graph; a larger magnitude of a gives the graph a more closed (sharply curved) appearance. 
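The conversions between the three forms described above can be sketched as follows (assuming real, distinct roots for the factored form; the function names are illustrative):

```python
import math

def standard_to_vertex(a, b, c):
    """Complete the square: return (a, h, k) with f(x) = a(x - h)^2 + k."""
    h = -b / (2 * a)
    k = c - b * b / (4 * a)
    return a, h, k

def standard_to_factored(a, b, c):
    """Quadratic formula: return (a, r1, r2) with f(x) = a(x - r1)(x - r2)."""
    disc = math.sqrt(b * b - 4 * a * c)  # assumes real roots
    return a, (-b - disc) / (2 * a), (-b + disc) / (2 * a)

# f(x) = x^2 - 5x + 6 = (x - 2.5)^2 - 0.25 = (x - 2)(x - 3)
a, h, k = standard_to_vertex(1, -5, 6)
assert (h, k) == (2.5, -0.25)

a, r1, r2 = standard_to_factored(1, -5, 6)
assert {r1, r2} == {2.0, 3.0}
```

Converting back to standard form is just polynomial expansion, so it needs no special routine.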
The coefficients b and a together control the location of the axis of symmetry of the parabola (also the x-coordinate of the vertex and the h parameter in the vertex form) which is at x = − b 2 a . {\displaystyle x=-{\frac {b}{2a}}.} The coefficient c controls the height of the parabola; more specifically, it is the height of the parabola where it intercepts the y-axis. === Vertex === The vertex of a parabola is the place where it turns; hence, it is also called the turning point. If the quadratic function is in vertex form, the vertex is (h, k). Using the method of completing the square, one can turn the standard form f ( x ) = a x 2 + b x + c {\displaystyle f(x)=ax^{2}+bx+c} into f ( x ) = a x 2 + b x + c = a ( x − h ) 2 + k = a ( x − − b 2 a ) 2 + ( c − b 2 4 a ) , {\displaystyle {\begin{aligned}f(x)&=ax^{2}+bx+c\\&=a(x-h)^{2}+k\\&=a\left(x-{\frac {-b}{2a}}\right)^{2}+\left(c-{\frac {b^{2}}{4a}}\right),\\\end{aligned}}} so the vertex, (h, k), of the parabola in standard form is ( − b 2 a , c − b 2 4 a ) . {\displaystyle \left(-{\frac {b}{2a}},c-{\frac {b^{2}}{4a}}\right).} If the quadratic function is in factored form f ( x ) = a ( x − r 1 ) ( x − r 2 ) {\displaystyle f(x)=a(x-r_{1})(x-r_{2})} the average of the two roots, i.e., r 1 + r 2 2 {\displaystyle {\frac {r_{1}+r_{2}}{2}}} is the x-coordinate of the vertex, and hence the vertex (h, k) is ( r 1 + r 2 2 , f ( r 1 + r 2 2 ) ) . {\displaystyle \left({\frac {r_{1}+r_{2}}{2}},f\left({\frac {r_{1}+r_{2}}{2}}\right)\right).} The vertex is also the maximum point if a < 0, or the minimum point if a > 0. The vertical line x = h = − b 2 a {\displaystyle x=h=-{\frac {b}{2a}}} that passes through the vertex is also the axis of symmetry of the parabola. 
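Both routes to the vertex given above agree, as a quick numerical check shows (the example coefficients are illustrative):

```python
# f(x) = a(x - r1)(x - r2) with a = 2, r1 = -1, r2 = 3,
# i.e. f(x) = 2x^2 - 4x - 6 in standard form.
a, b, c = 2.0, -4.0, -6.0
r1, r2 = -1.0, 3.0

h_standard = -b / (2 * a)   # vertex x-coordinate from the standard form
h_factored = (r1 + r2) / 2  # vertex x-coordinate as the average of the roots
assert h_standard == h_factored == 1.0

k = c - b * b / (4 * a)     # vertex y-coordinate from the standard form
f = lambda x: a * (x - r1) * (x - r2)
assert f(h_factored) == k == -8.0
```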
==== Maximum and minimum points ==== Using calculus, the vertex point, being a maximum or minimum of the function, can be obtained by finding the roots of the derivative: f ( x ) = a x 2 + b x + c ⇒ f ′ ( x ) = 2 a x + b {\displaystyle f(x)=ax^{2}+bx+c\quad \Rightarrow \quad f'(x)=2ax+b} x is a root of f '(x) if f '(x) = 0 resulting in x = − b 2 a {\displaystyle x=-{\frac {b}{2a}}} with the corresponding function value f ( x ) = a ( − b 2 a ) 2 + b ( − b 2 a ) + c = c − b 2 4 a , {\displaystyle f(x)=a\left(-{\frac {b}{2a}}\right)^{2}+b\left(-{\frac {b}{2a}}\right)+c=c-{\frac {b^{2}}{4a}},} so again the vertex point coordinates, (h, k), can be expressed as ( − b 2 a , c − b 2 4 a ) . {\displaystyle \left(-{\frac {b}{2a}},c-{\frac {b^{2}}{4a}}\right).} == Roots of the univariate function == === Exact roots === The roots (or zeros), r1 and r2, of the univariate quadratic function f ( x ) = a x 2 + b x + c = a ( x − r 1 ) ( x − r 2 ) , {\displaystyle {\begin{aligned}f(x)&=ax^{2}+bx+c\\&=a(x-r_{1})(x-r_{2}),\\\end{aligned}}} are the values of x for which f(x) = 0. When the coefficients a, b, and c, are real or complex, the roots are r 1 = − b − b 2 − 4 a c 2 a , {\displaystyle r_{1}={\frac {-b-{\sqrt {b^{2}-4ac}}}{2a}},} r 2 = − b + b 2 − 4 a c 2 a . {\displaystyle r_{2}={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}.} === Upper bound on the magnitude of the roots === The modulus of the roots of a quadratic a x 2 + b x + c {\displaystyle ax^{2}+bx+c} can be no greater than max ( | a | , | b | , | c | ) | a | × ϕ , {\displaystyle {\frac {\max(|a|,|b|,|c|)}{|a|}}\times \phi ,} where ϕ {\displaystyle \phi } is the golden ratio 1 + 5 2 . {\displaystyle {\frac {1+{\sqrt {5}}}{2}}.} == The square root of a univariate quadratic function == The square root of a univariate quadratic function gives rise to one of the four conic sections, almost always either to an ellipse or to a hyperbola. 
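Both the exact-root formulas and the golden-ratio bound stated above are easy to spot-check numerically; a sketch using complex square roots so that complex-root cases are covered too:

```python
import cmath

PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def roots(a, b, c):
    """Both roots of ax^2 + bx + c, real or complex."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b - d) / (2 * a), (-b + d) / (2 * a)

def bound(a, b, c):
    """Upper bound on the modulus of either root."""
    return max(abs(a), abs(b), abs(c)) / abs(a) * PHI

# Spot-check |root| <= max(|a|,|b|,|c|)/|a| * phi on a few quadratics.
# The case (1, -1, -1) has the root phi itself, so the bound is tight there.
for a, b, c in [(1, -3, 2), (2, 2, 5), (1, -1, -1), (3, 0, -27)]:
    for r in roots(a, b, c):
        assert abs(r) <= bound(a, b, c) + 1e-12
```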
If a > 0 , {\displaystyle a>0,} then the equation y = ± a x 2 + b x + c {\displaystyle y=\pm {\sqrt {ax^{2}+bx+c}}} describes a hyperbola, as can be seen by squaring both sides. The directions of the axes of the hyperbola are determined by the ordinate of the minimum point of the corresponding parabola y p = a x 2 + b x + c . {\displaystyle y_{p}=ax^{2}+bx+c.} If the ordinate is negative, then the hyperbola's major axis (through its vertices) is horizontal, while if the ordinate is positive then the hyperbola's major axis is vertical. If a < 0 , {\displaystyle a<0,} then the equation y = ± a x 2 + b x + c {\displaystyle y=\pm {\sqrt {ax^{2}+bx+c}}} describes either a circle or other ellipse or nothing at all. If the ordinate of the maximum point of the corresponding parabola y p = a x 2 + b x + c {\displaystyle y_{p}=ax^{2}+bx+c} is positive, then its square root describes an ellipse, but if the ordinate is negative then it describes an empty locus of points. == Iteration == To iterate a function f ( x ) = a x 2 + b x + c {\displaystyle f(x)=ax^{2}+bx+c} , one applies the function repeatedly, using the output from one iteration as the input to the next. One cannot always deduce the analytic form of f ( n ) ( x ) {\displaystyle f^{(n)}(x)} , which means the nth iteration of f ( x ) {\displaystyle f(x)} . (The superscript can be extended to negative numbers, referring to the iteration of the inverse of f ( x ) {\displaystyle f(x)} if the inverse exists.) But there are some analytically tractable cases. For example, for the iterative equation f ( x ) = a ( x − c ) 2 + c {\displaystyle f(x)=a(x-c)^{2}+c} one has f ( x ) = a ( x − c ) 2 + c = h ( − 1 ) ( g ( h ( x ) ) ) , {\displaystyle f(x)=a(x-c)^{2}+c=h^{(-1)}(g(h(x))),} where g ( x ) = a x 2 {\displaystyle g(x)=ax^{2}} and h ( x ) = x − c . 
{\displaystyle h(x)=x-c.} So by induction, f ( n ) ( x ) = h ( − 1 ) ( g ( n ) ( h ( x ) ) ) {\displaystyle f^{(n)}(x)=h^{(-1)}(g^{(n)}(h(x)))} can be obtained, where g ( n ) ( x ) {\displaystyle g^{(n)}(x)} can be easily computed as g ( n ) ( x ) = a 2 n − 1 x 2 n . {\displaystyle g^{(n)}(x)=a^{2^{n}-1}x^{2^{n}}.} Finally, we have f ( n ) ( x ) = a 2 n − 1 ( x − c ) 2 n + c {\displaystyle f^{(n)}(x)=a^{2^{n}-1}(x-c)^{2^{n}}+c} as the solution. See Topological conjugacy for more detail about the relationship between f and g. And see Complex quadratic polynomial for the chaotic behavior in the general iteration. The logistic map x n + 1 = r x n ( 1 − x n ) , 0 ≤ x 0 < 1 {\displaystyle x_{n+1}=rx_{n}(1-x_{n}),\quad 0\leq x_{0}<1} with parameter 2<r<4 can be solved in certain cases, one of which is chaotic and one of which is not. In the chaotic case r=4 the solution is x n = sin 2 ⁡ ( 2 n θ π ) {\displaystyle x_{n}=\sin ^{2}(2^{n}\theta \pi )} where the initial condition parameter θ {\displaystyle \theta } is given by θ = 1 π sin − 1 ⁡ ( x 0 1 / 2 ) {\displaystyle \theta ={\tfrac {1}{\pi }}\sin ^{-1}(x_{0}^{1/2})} . For rational θ {\displaystyle \theta } , after a finite number of iterations x n {\displaystyle x_{n}} maps into a periodic sequence. But almost all θ {\displaystyle \theta } are irrational, and, for irrational θ {\displaystyle \theta } , x n {\displaystyle x_{n}} never repeats itself – it is non-periodic and exhibits sensitive dependence on initial conditions, so it is said to be chaotic. The solution of the logistic map when r=2 is x n = 1 2 − 1 2 ( 1 − 2 x 0 ) 2 n {\displaystyle x_{n}={\frac {1}{2}}-{\frac {1}{2}}(1-2x_{0})^{2^{n}}} for x 0 ∈ [ 0 , 1 ) {\displaystyle x_{0}\in [0,1)} . 
Since ( 1 − 2 x 0 ) ∈ ( − 1 , 1 ) {\displaystyle (1-2x_{0})\in (-1,1)} for any value of x 0 {\displaystyle x_{0}} other than the unstable fixed point 0, the term ( 1 − 2 x 0 ) 2 n {\displaystyle (1-2x_{0})^{2^{n}}} goes to 0 as n goes to infinity, so x n {\displaystyle x_{n}} goes to the stable fixed point 1 2 . {\displaystyle {\tfrac {1}{2}}.} == Bivariate (two variable) quadratic function == A bivariate quadratic function is a second-degree polynomial of the form f ( x , y ) = A x 2 + B y 2 + C x + D y + E x y + F , {\displaystyle f(x,y)=Ax^{2}+By^{2}+Cx+Dy+Exy+F,} where A, B, C, D, and E are fixed coefficients and F is the constant term. Such a function describes a quadratic surface. Setting f ( x , y ) {\displaystyle f(x,y)} equal to zero describes the intersection of the surface with the plane z = 0 , {\displaystyle z=0,} which is a locus of points equivalent to a conic section. === Minimum/maximum === If 4 A B − E 2 < 0 , {\displaystyle 4AB-E^{2}<0,} the function has no maximum or minimum; its graph forms a hyperbolic paraboloid. If 4 A B − E 2 > 0 , {\displaystyle 4AB-E^{2}>0,} the function has a minimum if both A > 0 and B > 0, and a maximum if both A < 0 and B < 0; its graph forms an elliptic paraboloid. In this case the minimum or maximum occurs at ( x m , y m ) , {\displaystyle (x_{m},y_{m}),} where: x m = − 2 B C − D E 4 A B − E 2 , {\displaystyle x_{m}=-{\frac {2BC-DE}{4AB-E^{2}}},} y m = − 2 A D − C E 4 A B − E 2 . {\displaystyle y_{m}=-{\frac {2AD-CE}{4AB-E^{2}}}.} If 4 A B − E 2 = 0 {\displaystyle 4AB-E^{2}=0} and D E − 2 C B = 2 A D − C E ≠ 0 , {\displaystyle DE-2CB=2AD-CE\neq 0,} the function has no maximum or minimum; its graph forms a parabolic cylinder. If 4 A B − E 2 = 0 {\displaystyle 4AB-E^{2}=0} and D E − 2 C B = 2 A D − C E = 0 , {\displaystyle DE-2CB=2AD-CE=0,} the function achieves the maximum/minimum at a line—a minimum if A>0 and a maximum if A<0; its graph forms a parabolic cylinder. 
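The stationary-point formulas above can be sanity-checked against a brute-force search; a sketch with arbitrary example coefficients satisfying 4AB - E^2 > 0 and A, B > 0:

```python
def f(x, y, A, B, C, D, E, F):
    return A * x * x + B * y * y + C * x + D * y + E * x * y + F

# Example with 4AB - E^2 = 4*1*2 - 1 = 7 > 0 and A, B > 0: a minimum exists.
A, B, C, D, E, F = 1.0, 2.0, -3.0, 4.0, 1.0, 0.0
xm = -(2 * B * C - D * E) / (4 * A * B - E * E)
ym = -(2 * A * D - C * E) / (4 * A * B - E * E)

# The closed-form point should beat every point on a coarse grid around it.
fm = f(xm, ym, A, B, C, D, E, F)
for i in range(-20, 21):
    for j in range(-20, 21):
        assert fm <= f(xm + i * 0.1, ym + j * 0.1, A, B, C, D, E, F) + 1e-9

# The gradient (2Ax + C + Ey, 2By + D + Ex) vanishes there, as expected.
assert abs(2 * A * xm + C + E * ym) < 1e-9
assert abs(2 * B * ym + D + E * xm) < 1e-9
```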
== See also == Quadratic form Quadratic equation Matrix representation of conic sections Quadric Periodic points of complex quadratic mappings List of mathematical functions == References == Glencoe, McGraw-Hill (2003). Algebra 1. Glencoe/McGraw Hill. ISBN 9780078250835. Saxon, John H. (May 1991). Algebra 2. Saxon Publishers, Incorporated. ISBN 9780939798629.
Wikipedia/Quadratic_function
Set theory is the branch of mathematical logic that studies sets, which can be informally described as collections of objects. Although objects of any kind can be collected into a set, set theory – as a branch of mathematics – is mostly concerned with those that are relevant to mathematics as a whole. The modern study of set theory was initiated by the German mathematicians Richard Dedekind and Georg Cantor in the 1870s. In particular, Georg Cantor is commonly considered the founder of set theory. The non-formalized systems investigated during this early stage go under the name of naive set theory. After the discovery of paradoxes within naive set theory (such as Russell's paradox, Cantor's paradox and the Burali-Forti paradox), various axiomatic systems were proposed in the early twentieth century, of which Zermelo–Fraenkel set theory (with or without the axiom of choice) is still the best-known and most studied. Set theory is commonly employed as a foundational system for the whole of mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Besides its foundational role, set theory also provides the framework to develop a mathematical theory of infinity, and has various applications in computer science (such as in the theory of relational algebra), philosophy, formal semantics, and evolutionary dynamics. Its foundational appeal, together with its paradoxes, and its implications for the concept of infinity and its multiple applications have made set theory an area of major interest for logicians and philosophers of mathematics. Contemporary research into set theory covers a vast array of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. 
== History == === Early history === The basic notion of grouping objects has existed since at least the emergence of numbers, and the notion of treating sets as their own objects has existed since at least the Tree of Porphyry, 3rd-century AD. The simplicity and ubiquity of sets make it hard to determine the origin of sets as now used in mathematics; however, Bernard Bolzano's Paradoxes of the Infinite (Paradoxien des Unendlichen, 1851) is generally considered the first rigorous introduction of sets to mathematics. In his work, he (among other things) expanded on Galileo's paradox, and introduced one-to-one correspondence of infinite sets, for example between the intervals [ 0 , 5 ] {\displaystyle [0,5]} and [ 0 , 12 ] {\displaystyle [0,12]} by the relation 5 y = 12 x {\displaystyle 5y=12x} . However, he resisted saying these sets were equinumerous, and his work is generally considered to have been uninfluential in mathematics of his time. Before mathematical set theory, basic concepts of infinity were considered to be solidly in the domain of philosophy (see: Infinity (philosophy) and Infinity § History). Since the 5th century BC, beginning with Greek philosopher Zeno of Elea in the West (and early Indian mathematicians in the East), mathematicians had struggled with the concept of infinity. With the development of calculus in the late 17th century, philosophers began to generally distinguish between actual and potential infinity, with mathematics held to concern only the latter. Carl Friedrich Gauss famously stated: "Infinity is nothing more than a figure of speech which helps us talk about limits. The notion of a completed infinity doesn't belong in mathematics." Development of mathematical set theory was motivated by several mathematicians.
Bernhard Riemann's lecture On the Hypotheses which lie at the Foundations of Geometry (1854) proposed new ideas about topology, and about basing mathematics (especially geometry) in terms of sets or manifolds in the sense of a class (which he called Mannigfaltigkeit), now called point-set topology. The lecture was published by Richard Dedekind in 1868, along with Riemann's paper on trigonometric series (which presented the Riemann integral). The latter was the starting point of a movement in real analysis for the study of "seriously" discontinuous functions. A young Georg Cantor entered into this area, which led him to the study of point-sets. Around 1871, influenced by Riemann, Dedekind began working with sets in his publications, which dealt very clearly and precisely with equivalence relations, partitions of sets, and homomorphisms. Thus, many of the usual set-theoretic procedures of twentieth-century mathematics go back to his work. However, he did not publish a formal explanation of his set theory until 1888. === Naive set theory === Set theory, as understood by modern mathematicians, is generally considered to be founded by a single paper in 1874 by Georg Cantor titled On a Property of the Collection of All Real Algebraic Numbers. In his paper, he developed the notion of cardinality, comparing the sizes of two sets by setting them in one-to-one correspondence. His "revolutionary discovery" was that the set of all real numbers is uncountable, that is, one cannot put all real numbers in a list. This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. Cantor introduced fundamental constructions in set theory, such as the power set of a set A, which is the set of all possible subsets of A. He later proved that the size of the power set of A is strictly larger than the size of A, even when A is an infinite set; this result soon became known as Cantor's theorem.
Cantor developed a theory of transfinite numbers, called cardinals and ordinals, which extended the arithmetic of the natural numbers. His notation for the cardinal numbers was the Hebrew letter ℵ {\displaystyle \aleph } (ℵ, aleph) with a natural number subscript; for the ordinals he employed the Greek letter ω {\displaystyle \omega } (ω, omega). Set theory was beginning to become an essential ingredient of the new "modern" approach to mathematics. Originally, Cantor's theory of transfinite numbers was regarded as counter-intuitive – even shocking. This caused it to encounter resistance from mathematical contemporaries such as Leopold Kronecker and Henri Poincaré and later from Hermann Weyl and L. E. J. Brouwer, while Ludwig Wittgenstein raised philosophical objections (see: Controversy over Cantor's theory). Dedekind's algebraic style only began to find followers in the 1890s. Despite the controversy, Cantor's set theory gained remarkable ground around the turn of the 20th century with the work of several notable mathematicians and philosophers. Richard Dedekind, around the same time, began working with sets in his publications, and famously constructed the real numbers using Dedekind cuts. He also worked with Giuseppe Peano in developing the Peano axioms, which formalized natural-number arithmetic using set-theoretic ideas, and which introduced the epsilon symbol for set membership. Possibly most prominently, Gottlob Frege began to develop his Foundations of Arithmetic. In his work, Frege tries to ground all mathematics in terms of logical axioms using Cantor's cardinality. For example, the sentence "the number of horses in the barn is four" means that four objects fall under the concept horse in the barn. Frege attempted to explain our grasp of numbers through cardinality ('the number of...', or N x : F x {\displaystyle Nx:Fx} ), relying on Hume's principle.
However, Frege's work was short-lived, as it was found by Bertrand Russell that his axioms lead to a contradiction. Specifically, the culprit was Frege's Basic Law V (now known as the axiom schema of unrestricted comprehension). According to Basic Law V, for any sufficiently well-defined property, there is the set of all and only the objects that have that property. The contradiction, called Russell's paradox, is shown as follows: Let R be the set of all sets that are not members of themselves. (This set is sometimes called "the Russell set".) If R is not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols: Let R = { x ∣ x ∉ x } , then R ∈ R ⟺ R ∉ R {\displaystyle {\text{Let }}R=\{x\mid x\not \in x\}{\text{, then }}R\in R\iff R\not \in R} This came around the time of several other paradoxes or counter-intuitive results. For example, the parallel postulate cannot be proved, there exist mathematical objects that cannot be computed or explicitly described, and there exist theorems of arithmetic that cannot be proved with Peano arithmetic. The result was a foundational crisis of mathematics. == Basic concepts and notation == Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member (or element) of A, the notation o ∈ A is used. A set is described by listing elements separated by commas, or by a characterizing property of its elements, within braces { }. Since sets are objects, the membership relation can relate sets as well, i.e., sets themselves can be members of other sets. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B, denoted A ⊆ B.
For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself. For cases where this possibility must be excluded, the term proper subset is defined, variously denoted A ⊂ B, A ⊊ B, or A ⫋ B (note however that the notation A ⊂ B is sometimes used synonymously with A ⊆ B; that is, allowing the possibility that A and B are equal). We call A a proper subset of B if and only if A is a subset of B, but A is not equal to B. Also, 1, 2, and 3 are members (elements) of the set {1, 2, 3}, but are not subsets of it; and in turn, the subsets, such as {1}, are not members of the set {1, 2, 3}. More complicated relations can exist; for example, the set {1} is both a member and a proper subset of the set {1, {1}}. Just as arithmetic features binary operations on numbers, set theory features binary operations on sets. The following is a partial list of them: Union of the sets A and B, denoted A ∪ B, is the set of all objects that are a member of A, or B, or both. For example, the union of {1, 2, 3} and {2, 3, 4} is the set {1, 2, 3, 4}. Intersection of the sets A and B, denoted A ∩ B, is the set of all objects that are members of both A and B. For example, the intersection of {1, 2, 3} and {2, 3, 4} is the set {2, 3}. Set difference of U and A, denoted U ∖ A, is the set of all members of U that are not members of A. The set difference {1, 2, 3} ∖ {2, 3, 4} is {1}, while conversely, the set difference {2, 3, 4} ∖ {1, 2, 3} is {4}. When A is a subset of U, the set difference U ∖ A is also called the complement of A in U. In this case, if the choice of U is clear from the context, the notation Ac is sometimes used instead of U ∖ A, particularly if U is a universal set as in the study of Venn diagrams.
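The membership, subset, and binary-operation examples above can be replayed directly with Python's built-in set type; the following is a minimal sketch (Python is used here for illustration and is not part of the article; nested sets must be written as frozenset, since Python set elements must be hashable):

```python
A = {1, 2}
B = {1, 2, 3}

# Subset and proper subset: {1, 2} ⊆ {1, 2, 3}; every set is a subset of itself.
assert A <= B                      # A ⊆ B
assert A < B                       # A is a proper subset of B
assert B <= B and not (B < B)      # B ⊆ B, but not properly

# Members are not automatically subsets: 1 ∈ B, but {1} is not a member of B.
assert 1 in B
assert frozenset({1}) not in B

# {1} is both a member and a proper subset of {1, {1}}.
one = frozenset({1})
nested = {1, one}
assert one in nested and one < nested

# Union, intersection, and set difference, with the article's examples:
assert {1, 2, 3} | {2, 3, 4} == {1, 2, 3, 4}
assert {1, 2, 3} & {2, 3, 4} == {2, 3}
assert {1, 2, 3} - {2, 3, 4} == {1}
assert {2, 3, 4} - {1, 2, 3} == {4}
```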
Symmetric difference of sets A and B, denoted A △ B or A ⊖ B, is the set of all objects that are a member of exactly one of A and B (elements which are in one of the sets, but not in both). For instance, for the sets {1, 2, 3} and {2, 3, 4}, the symmetric difference set is {1, 4}. It is the set difference of the union and the intersection, (A ∪ B) ∖ (A ∩ B) or (A ∖ B) ∪ (B ∖ A). Cartesian product of A and B, denoted A × B, is the set whose members are all possible ordered pairs (a, b), where a is a member of A and b is a member of B. For example, the Cartesian product of {1, 2} and {red, white} is {(1, red), (1, white), (2, red), (2, white)}. Some basic sets of central importance are the set of natural numbers, the set of real numbers and the empty set – the unique set containing no elements. The empty set is also occasionally called the null set, though this name is ambiguous and can lead to several interpretations. The empty set can be denoted with empty braces "{ }" or the symbol "∅". The power set of a set A, denoted P(A), is the set whose members are all of the possible subsets of A. For example, the power set of {1, 2} is { {}, {1}, {2}, {1, 2} }. Notably, P(A) contains both A and the empty set.
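These remaining operations also have direct counterparts in Python's set type; the power set is not built in, so a small helper using itertools is sketched here (a sketch for illustration, not part of the article):

```python
from itertools import chain, combinations, product

A, B = {1, 2, 3}, {2, 3, 4}

# Symmetric difference: elements in exactly one of the two sets.
assert A ^ B == {1, 4}
# It equals the union minus the intersection, and also (A ∖ B) ∪ (B ∖ A):
assert A ^ B == (A | B) - (A & B) == (A - B) | (B - A)

# Cartesian product as a set of ordered pairs:
assert set(product({1, 2}, {"red", "white"})) == {
    (1, "red"), (1, "white"), (2, "red"), (2, "white")}

def power_set(s):
    """All subsets of s, each returned as a frozenset."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

P = power_set({1, 2})
assert P == {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}
assert frozenset({1, 2}) in P and frozenset() in P   # P(A) contains both A and ∅
assert len(power_set({1, 2, 3})) == 2 ** 3           # |P(A)| = 2^|A|
```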
Sets in the von Neumann universe are organized into a cumulative hierarchy, based on how deeply their members, members of members, etc. are nested. Each set in this hierarchy is assigned (by transfinite recursion) an ordinal number α, known as its rank. The rank of a pure set X is defined to be the least ordinal that is strictly greater than the rank of any of its elements. For example, the empty set is assigned rank 0, while the set containing only the empty set is assigned rank 1. For each ordinal α, the set Vα is defined to consist of all pure sets with rank less than α. The entire von Neumann universe is denoted V. == Formalized set theory == Elementary set theory can be studied informally and intuitively, and so can be taught in primary schools using Venn diagrams. The intuitive approach tacitly assumes that a set may be formed from the class of all objects satisfying any particular defining condition. This assumption gives rise to paradoxes, the simplest and best known of which are Russell's paradox and the Burali-Forti paradox. Axiomatic set theory was originally devised to rid set theory of such paradoxes. The most widely studied systems of axiomatic set theory imply that all sets form a cumulative hierarchy. Such systems come in two flavors, those whose ontology consists of: Sets alone. This includes the most common axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). Fragments of ZFC include: Zermelo set theory, which replaces the axiom schema of replacement with that of separation; General set theory, a small fragment of Zermelo set theory sufficient for the Peano axioms and finite sets; Kripke–Platek set theory, which omits the axioms of infinity, powerset, and choice, and weakens the axiom schemata of separation and replacement. Sets and proper classes.
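For hereditarily finite sets, rank and the finite stages Vₙ of the cumulative hierarchy can be computed directly by modelling pure sets as nested frozensets; a sketch (the function names are illustrative, not standard notation):

```python
from itertools import chain, combinations

def rank(s):
    """Rank of a pure hereditarily finite set: the least ordinal strictly
    greater than the rank of every element (so rank(∅) = 0)."""
    return max((rank(x) + 1 for x in s), default=0)

def power_set_frozen(s):
    """Power set of a frozenset, as a frozenset of frozensets."""
    items = list(s)
    return frozenset(frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

def V(n):
    """Finite stage V_n of the hierarchy: V_0 = ∅, V_{n+1} = P(V_n)."""
    stage = frozenset()
    for _ in range(n):
        stage = power_set_frozen(stage)
    return stage

empty = frozenset()
assert rank(empty) == 0                                  # rank of ∅ is 0
assert rank(frozenset({empty})) == 1                     # rank of {∅} is 1
assert rank(frozenset({empty, frozenset({empty})})) == 2

# The stage sizes grow as iterated powers of two: 0, 1, 2, 4, 16, ...
assert [len(V(n)) for n in range(5)] == [0, 1, 2, 4, 16]
# Every member of V_4 has rank less than 4:
assert all(rank(x) < 4 for x in V(4))
```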
These include Von Neumann–Bernays–Gödel set theory, which has the same strength as ZFC for theorems about sets alone, and Morse–Kelley set theory and Tarski–Grothendieck set theory, both of which are stronger than ZFC. The above systems can be modified to allow urelements, objects that can be members of sets but that are not themselves sets and do not have any members. The New Foundations systems of NFU (allowing urelements) and NF (lacking them), associated with Willard Van Orman Quine, are not based on a cumulative hierarchy. NF and NFU include a "set of everything", relative to which every set has a complement. In these systems urelements matter, because NF, but not NFU, produces sets for which the axiom of choice does not hold. Despite NF's ontology not reflecting the traditional cumulative hierarchy and violating well-foundedness, Thomas Forster has argued that it does reflect an iterative conception of set. Systems of constructive set theory, such as CST, CZF, and IZF, embed their set axioms in intuitionistic instead of classical logic. Yet other systems accept classical logic but feature a nonstandard membership relation. These include rough set theory and fuzzy set theory, in which the value of an atomic formula embodying the membership relation is not simply True or False. The Boolean-valued models of ZFC are a related subject. An enrichment of ZFC called internal set theory was proposed by Edward Nelson in 1977. == Applications == Many mathematical concepts can be defined precisely using only set theoretic concepts. For example, mathematical structures as diverse as graphs, manifolds, rings, vector spaces, and relational algebras can all be defined as sets satisfying various (axiomatic) properties. Equivalence and order relations are ubiquitous in mathematics, and the theory of mathematical relations can be described in set theory. Set theory is also a promising foundational system for much of mathematics.
Since the publication of the first volume of Principia Mathematica, it has been claimed that most (or even all) mathematical theorems can be derived using an aptly designed set of axioms for set theory, augmented with many definitions, using first or second-order logic. For example, properties of the natural and real numbers can be derived within set theory, as each of these number systems can be defined by representing their elements as sets of specific forms. Set theory as a foundation for mathematical analysis, topology, abstract algebra, and discrete mathematics is likewise uncontroversial; mathematicians accept (in principle) that theorems in these areas can be derived from the relevant definitions and the axioms of set theory. However, it remains that few full derivations of complex mathematical theorems from set theory have been formally verified, since such formal derivations are often much longer than the natural language proofs mathematicians commonly present. One verification project, Metamath, includes human-written, computer-verified derivations of more than 12,000 theorems starting from ZFC set theory, first-order logic and propositional logic. == Areas of study == Set theory is a major area of research in mathematics with many interrelated subfields: === Combinatorial set theory === Combinatorial set theory concerns extensions of finite combinatorics to infinite sets. This includes the study of cardinal arithmetic and the study of extensions of Ramsey's theorem such as the Erdős–Rado theorem. === Descriptive set theory === Descriptive set theory is the study of subsets of the real line and, more generally, subsets of Polish spaces. It begins with the study of pointclasses in the Borel hierarchy and extends to the study of more complex hierarchies such as the projective hierarchy and the Wadge hierarchy. 
Many properties of Borel sets can be established in ZFC, but proving these properties hold for more complicated sets requires additional axioms related to determinacy and large cardinals. The field of effective descriptive set theory is between set theory and recursion theory. It includes the study of lightface pointclasses, and is closely related to hyperarithmetical theory. In many cases, results of classical descriptive set theory have effective versions; in some cases, new results are obtained by proving the effective version first and then extending ("relativizing") it to make it more broadly applicable. A recent area of research concerns Borel equivalence relations and more complicated definable equivalence relations. This has important applications to the study of invariants in many fields of mathematics. === Fuzzy set theory === In set theory as Cantor defined it and Zermelo and Fraenkel axiomatized it, an object is either a member of a set or not. In fuzzy set theory this condition was relaxed by Lotfi A. Zadeh so an object has a degree of membership in a set, a number between 0 and 1. For example, the degree of membership of a person in the set of "tall people" is more flexible than a simple yes or no answer and can be a real number such as 0.75. === Inner model theory === An inner model of Zermelo–Fraenkel set theory (ZF) is a transitive class that includes all the ordinals and satisfies all the axioms of ZF. The canonical example is the constructible universe L developed by Gödel. One reason that the study of inner models is of interest is that it can be used to prove consistency results. For example, it can be shown that regardless of whether a model V of ZF satisfies the continuum hypothesis or the axiom of choice, the inner model L constructed inside the original model will satisfy both the generalized continuum hypothesis and the axiom of choice.
Thus the assumption that ZF is consistent (has at least one model) implies that ZF together with these two principles is consistent. The study of inner models is common in the study of determinacy and large cardinals, especially when considering axioms such as the axiom of determinacy that contradict the axiom of choice. Even if a fixed model of set theory satisfies the axiom of choice, it is possible for an inner model to fail to satisfy the axiom of choice. For example, the existence of sufficiently large cardinals implies that there is an inner model satisfying the axiom of determinacy (and thus not satisfying the axiom of choice). === Large cardinals === A large cardinal is a cardinal number with an extra property. Many such properties are studied, including inaccessible cardinals, measurable cardinals, and many more. These properties typically imply the cardinal number must be very large, with the existence of a cardinal with the specified property unprovable in Zermelo–Fraenkel set theory. === Determinacy === Determinacy refers to the fact that, under appropriate assumptions, certain two-player games of perfect information are determined from the start in the sense that one player must have a winning strategy. The existence of these strategies has important consequences in descriptive set theory, as the assumption that a broader class of games is determined often implies that a broader class of sets will have a topological property. The axiom of determinacy (AD) is an important object of study; although incompatible with the axiom of choice, AD implies that all subsets of the real line are well behaved (in particular, measurable and with the perfect set property). AD can be used to prove that the Wadge degrees have an elegant structure. === Forcing === Paul Cohen invented the method of forcing while searching for a model of ZFC in which the continuum hypothesis fails, or a model of ZF in which the axiom of choice fails. 
Forcing adjoins to some given model of set theory additional sets in order to create a larger model with properties determined (i.e. "forced") by the construction and the original model. For example, Cohen's construction adjoins additional subsets of the natural numbers without changing any of the cardinal numbers of the original model. Forcing is also one of two methods for proving relative consistency by finitistic methods, the other method being Boolean-valued models. === Cardinal invariants === A cardinal invariant is a property of the real line measured by a cardinal number. For example, a well-studied invariant is the smallest cardinality of a collection of meagre sets of reals whose union is the entire real line. These are invariants in the sense that any two isomorphic models of set theory must give the same cardinal for each invariant. Many cardinal invariants have been studied, and the relationships between them are often complex and related to axioms of set theory. === Set-theoretic topology === Set-theoretic topology studies questions of general topology that are set-theoretic in nature or that require advanced methods of set theory for their solution. Many of these theorems are independent of ZFC, requiring stronger axioms for their proof. A famous problem is the normal Moore space question, a question in general topology that was the subject of intense research. The answer to the normal Moore space question was eventually proved to be independent of ZFC. == Controversy == From set theory's inception, some mathematicians have objected to it as a foundation for mathematics. The most common objection to set theory, one Kronecker voiced in set theory's earliest years, starts from the constructivist view that mathematics is loosely related to computation. If this view is granted, then the treatment of infinite sets, both in naive and in axiomatic set theory, introduces into mathematics methods and objects that are not computable even in principle. 
The feasibility of constructivism as a substitute foundation for mathematics was greatly increased by Errett Bishop's influential book Foundations of Constructive Analysis. A different objection put forth by Henri Poincaré is that defining sets using the axiom schemas of specification and replacement, as well as the axiom of power set, introduces impredicativity, a type of circularity, into the definitions of mathematical objects. The scope of predicatively founded mathematics, while less than that of the commonly accepted Zermelo–Fraenkel theory, is much greater than that of constructive mathematics, to the point that Solomon Feferman has said that "all of scientifically applicable analysis can be developed [using predicative methods]". Ludwig Wittgenstein condemned set theory philosophically for its connotations of mathematical platonism. He wrote that "set theory is wrong", since it builds on the "nonsense" of fictitious symbolism, has "pernicious idioms", and that it is nonsensical to talk about "all numbers". Wittgenstein identified mathematics with algorithmic human deduction; the need for a secure foundation for mathematics seemed, to him, nonsensical. Moreover, since human effort is necessarily finite, Wittgenstein's philosophy required an ontological commitment to radical constructivism and finitism. Meta-mathematical statements – which, for Wittgenstein, included any statement quantifying over infinite domains, and thus almost all modern set theory – are not mathematics. Few modern philosophers have adopted Wittgenstein's views after a spectacular blunder in Remarks on the Foundations of Mathematics: Wittgenstein attempted to refute Gödel's incompleteness theorems after having only read the abstract. As reviewers Kreisel, Bernays, Dummett, and Goodstein all pointed out, many of his critiques did not apply to the paper in full. Only recently have philosophers such as Crispin Wright begun to rehabilitate Wittgenstein's arguments. 
Category theorists have proposed topos theory as an alternative to traditional axiomatic set theory. Topos theory can interpret various alternatives to that theory, such as constructivism, finite set theory, and computable set theory. Topoi also give a natural setting for forcing and discussions of the independence of choice from ZF, as well as providing the framework for pointless topology and Stone spaces. An active area of research is univalent foundations and, related to it, homotopy type theory. Within homotopy type theory, a set may be regarded as a homotopy 0-type, with universal properties of sets arising from the inductive and recursive properties of higher inductive types. Principles such as the axiom of choice and the law of the excluded middle can be formulated in a manner corresponding to the classical formulation in set theory or perhaps in a spectrum of distinct ways unique to type theory. Some of these principles may be proven to be a consequence of other principles. The variety of formulations of these axiomatic principles allows for a detailed analysis of the formulations required in order to derive various mathematical results. == Mathematical education == As set theory gained popularity as a foundation for modern mathematics, there has been support for the idea of introducing the basics of naive set theory early in mathematics education. In the US in the 1960s, the New Math experiment aimed to teach basic set theory, among other abstract concepts, to primary school students but was met with much criticism. The math syllabus in European schools followed this trend and currently includes the subject at different levels in all grades. Venn diagrams are widely employed to explain basic set-theoretic relationships to primary school students (even though John Venn originally devised them as part of a procedure to assess the validity of inferences in term logic).
Set theory is used to introduce students to logical operators (NOT, AND, OR), and semantic or rule description (technically intensional definition) of sets (e.g. "months starting with the letter A"), which may be useful when learning computer programming, since Boolean logic is used in various programming languages. Likewise, sets and other collection-like objects, such as multisets and lists, are common datatypes in computer science and programming. In addition to that, certain sets are commonly used in mathematical teaching, such as the sets ℕ of natural numbers, ℤ of integers, ℝ of real numbers, etc. These are commonly used when defining a mathematical function as a relation from one set (the domain) to another set (the range). == See also == Glossary of set theory Class (set theory) List of set theory topics Relational model – borrows from set theory Venn diagram Elementary Theory of the Category of Sets Structural set theory == Notes == == Citations == == References == Devlin, Keith (1993), The Joy of Sets: Fundamentals of Contemporary Set Theory, Undergraduate Texts in Mathematics (2nd ed.), Springer Verlag, doi:10.1007/978-1-4612-0903-4, ISBN 0-387-94094-4 Ferreirós, Jose (2001), Labyrinth of Thought: A History of Set Theory and Its Role in Modern Mathematics, Berlin: Springer, ISBN 978-3-7643-5749-8 Monk, J. Donald (1969), Introduction to Set Theory, McGraw-Hill Book Company, ISBN 978-0-898-74006-6 Potter, Michael (2004), Set Theory and Its Philosophy: A Critical Introduction, Oxford University Press, ISBN 978-0-191-55643-2 Smullyan, Raymond M.; Fitting, Melvin (2010), Set Theory and the Continuum Problem, Dover Publications, ISBN 978-0-486-47484-7 Tiles, Mary (2004), The Philosophy of Set Theory: An Historical Introduction to Cantor's Paradise, Dover Publications, ISBN 978-0-486-43520-6 Dauben, Joseph W.
(1977), "Georg Cantor and Pope Leo XIII: Mathematics, Theology, and the Infinite", Journal of the History of Ideas, 38 (1): 85–108, doi:10.2307/2708842, JSTOR 2708842 Dauben, Joseph W. (1979), Georg Cantor: his mathematics and philosophy of the infinite, Boston: Harvard University Press, ISBN 978-0-691-02447-9 == External links == Daniel Cunningham, Set Theory article in the Internet Encyclopedia of Philosophy. Jose Ferreiros, "The Early Development of Set Theory" article in the Stanford Encyclopedia of Philosophy. Foreman, Matthew; Kanamori, Akihiro, eds. Handbook of Set Theory. 3 vols., 2010. Each chapter surveys some aspect of contemporary research in set theory. Does not cover established elementary set theory, on which see Devlin (1993). "Axiomatic set theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] "Set theory", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Schoenflies, Arthur (1898). Mengenlehre in Klein's encyclopedia. Rudin, Walter B. (April 6, 1990), "Set Theory: An Offspring of Analysis", Marden Lecture in Mathematics, University of Wisconsin-Milwaukee, archived from the original on 2021-10-31 – via YouTube
Wikipedia/Set_theory
In mathematics, a geometric transformation is any bijection of a set to itself (or to another such set) with some salient geometrical underpinning, such as preserving distances, angles, or ratios (scale). More specifically, it is a function whose domain and range are sets of points – most often a real coordinate space, ℝ² or ℝ³ – such that the function is bijective so that its inverse exists. The study of geometry may be approached by the study of these transformations, such as in transformation geometry. == Classifications == Geometric transformations can be classified by the dimension of their operand sets (thus distinguishing between, say, planar transformations and spatial transformations). They can also be classified according to the properties they preserve: Displacements preserve distances and oriented angles (e.g., translations); Isometries preserve angles and distances (e.g., Euclidean transformations); Similarities preserve angles and ratios between distances (e.g., resizing); Affine transformations preserve parallelism (e.g., scaling, shear); Projective transformations preserve collinearity. Each of these classes contains the previous one. Möbius transformations using complex coordinates on the plane (as well as circle inversion) preserve the set of all lines and circles, but may interchange lines and circles. Conformal transformations preserve angles, and are, in the first order, similarities. Equiareal transformations preserve areas in the planar case or volumes in the three-dimensional case, and are, in the first order, affine transformations of determinant 1. Homeomorphisms (bicontinuous transformations) preserve the neighborhoods of points. Diffeomorphisms (bidifferentiable transformations) are the transformations that are affine in the first order; they contain the preceding ones as special cases, and can be further refined.
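The nesting of these classes can be illustrated numerically. The sketch below (plain Python, with example 2×2 matrices chosen for illustration) checks that a rotation, as an isometry, preserves distances; that a uniform scaling, as a similarity, preserves distances only up to a constant ratio; and that a shear, as an affine map, keeps parallel directions parallel while distorting distances:

```python
import math

def apply(M, p):
    """Apply the 2x2 linear map M to the point p."""
    return (M[0][0]*p[0] + M[0][1]*p[1], M[1][0]*p[0] + M[1][1]*p[1])

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (1.0, 0.0), (0.0, 2.0)
t = math.radians(30)
rotation = ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))
scaling  = ((3.0, 0.0), (0.0, 3.0))
shear    = ((1.0, 1.0), (0.0, 1.0))

# Isometry: distances are preserved exactly.
assert math.isclose(dist(apply(rotation, p), apply(rotation, q)), dist(p, q))

# Similarity: distances are all multiplied by the same ratio (here 3).
assert math.isclose(dist(apply(scaling, p), apply(scaling, q)), 3 * dist(p, q))

# Affine (shear): parallel directions stay parallel, but distances change.
u, v = (1.0, 1.0), (2.0, 2.0)                 # parallel direction vectors
su, sv = apply(shear, u), apply(shear, v)
assert math.isclose(su[0]*sv[1] - su[1]*sv[0], 0.0, abs_tol=1e-12)  # still parallel
assert not math.isclose(dist((0.0, 0.0), su), dist((0.0, 0.0), u))  # length changed
```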
Transformations of the same type form groups that may be sub-groups of other transformation groups. == Opposite group actions == Many geometric transformations are expressed with linear algebra. The bijective linear transformations are elements of a general linear group. The linear transformation A is non-singular. For a row vector v, the matrix product vA gives another row vector w = vA. The transpose of a row vector v is a column vector vᵀ, and the transpose of the above equality is wᵀ = (vA)ᵀ = Aᵀvᵀ. Here Aᵀ provides a left action on column vectors. In transformation geometry there are compositions AB. Starting with a row vector v, the right action of the composed transformation is w = vAB. After transposition, wᵀ = (vAB)ᵀ = (AB)ᵀvᵀ = BᵀAᵀvᵀ. Thus for AB the associated left group action is BᵀAᵀ. In the study of opposite groups, the distinction is made between opposite group actions because commutative groups are the only groups for which these opposites are equal. == Active and passive transformations == == See also == Coordinate transformation Erlangen program Symmetry (geometry) Motion Reflection Rigid transformation Rotation Topology Transformation matrix == References == == Further reading == Adler, Irving (2012) [1966], A New Look at Geometry, Dover, ISBN 978-0-486-49851-5 Dienes, Z. P.; Golding, E. W. (1967). Geometry Through Transformations (3 vols.): Geometry of Distortion, Geometry of Congruence, and Groups and Coordinates. New York: Herder and Herder. David Gans – Transformations and geometries. Hilbert, David; Cohn-Vossen, Stephan (1952). Geometry and the Imagination (2nd ed.). Chelsea. ISBN 0-8284-1087-9. John McCleary (2013) Geometry from a Differentiable Viewpoint, Cambridge University Press ISBN 978-0-521-11607-7 Modenov, P.
S.; Parkhomenko, A. S. (1965). Geometric Transformations (2 vols.): Euclidean and Affine Transformations, and Projective Transformations. New York: Academic Press. A. N. Pressley – Elementary Differential Geometry. Yaglom, I. M. (1962, 1968, 1973, 2009). Geometric Transformations (4 vols.). Random House (I, II & III), MAA (I, II, III & IV).
Wikipedia/Geometric_transformation
In algebra, a quartic function is a function of the form f(x) = ax⁴ + bx³ + cx² + dx + e, where a is nonzero, which is defined by a polynomial of degree four, called a quartic polynomial. A quartic equation, or equation of the fourth degree, is an equation that equates a quartic polynomial to zero, of the form ax⁴ + bx³ + cx² + dx + e = 0, where a ≠ 0. The derivative of a quartic function is a cubic function. Sometimes the term biquadratic is used instead of quartic, but, usually, biquadratic function refers to a quadratic function of a square (or, equivalently, to the function defined by a quartic polynomial without terms of odd degree), having the form f(x) = ax⁴ + cx² + e. Since a quartic function is defined by a polynomial of even degree, it has the same infinite limit when the argument goes to positive or negative infinity. If a is positive, then the function increases to positive infinity at both ends; and thus the function has a global minimum. Likewise, if a is negative, it decreases to negative infinity and has a global maximum. In both cases it may or may not have another local maximum and another local minimum. The degree four (quartic case) is the highest degree such that every polynomial equation can be solved by radicals, according to the Abel–Ruffini theorem. == History == Lodovico Ferrari is credited with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna.
The proof that four is the highest degree of a general polynomial for which such solutions can be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher order polynomials would be futile. The notes left by Évariste Galois prior to dying in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result. == Applications == Each coordinate of the intersection points of two conic sections is a solution of a quartic equation. The same is true for the intersection of a line and a torus. It follows that quartic equations often arise in computational geometry and all related fields such as computer graphics, computer-aided design, computer-aided manufacturing and optics. Here are examples of other geometric problems whose solution involves solving a quartic equation. In computer-aided manufacturing, the torus is a shape that is commonly associated with the endmill cutter. To calculate its location relative to a triangulated surface, the position of a horizontal torus on the z-axis must be found where it is tangent to a fixed line, and this requires the solution of a general quartic equation to be calculated. A quartic equation arises also in the process of solving the crossed ladders problem, in which the lengths of two crossed ladders, each based against one wall and leaning against another, are given along with the height at which they cross, and the distance between the walls is to be found. In optics, Alhazen's problem is "Given a light source and a spherical mirror, find the point on the mirror where the light will be reflected to the eye of an observer." This leads to a quartic equation. Finding the distance of closest approach of two ellipses involves solving a quartic equation. The eigenvalues of a 4×4 matrix are the roots of a quartic polynomial which is the characteristic polynomial of the matrix. 
The characteristic equation of a fourth-order linear difference equation or differential equation is a quartic equation. An example arises in the Timoshenko-Rayleigh theory of beam bending. Intersections between spheres, cylinders, or other quadrics can be found using quartic equations. == Inflection points and golden ratio == Letting F and G be the distinct inflection points of the graph of a quartic function, and letting H be the intersection of the inflection secant line FG and the quartic, nearer to G than to F, then G divides FH into the golden section: F G G H = 1 + 5 2 = φ ( the golden ratio ) . {\displaystyle {\frac {FG}{GH}}={\frac {1+{\sqrt {5}}}{2}}=\varphi \;({\text{the golden ratio}}).} Moreover, the area of the region between the secant line and the quartic below the secant line equals the area of the region between the secant line and the quartic above the secant line. One of those regions is disjointed into sub-regions of equal area. == Solution == === Nature of the roots === Given the general quartic equation a x 4 + b x 3 + c x 2 + d x + e = 0 {\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0} with real coefficients and a ≠ 0 the nature of its roots is mainly determined by the sign of its discriminant Δ = 256 a 3 e 3 − 192 a 2 b d e 2 − 128 a 2 c 2 e 2 + 144 a 2 c d 2 e − 27 a 2 d 4 + 144 a b 2 c e 2 − 6 a b 2 d 2 e − 80 a b c 2 d e + 18 a b c d 3 + 16 a c 4 e − 4 a c 3 d 2 − 27 b 4 e 2 + 18 b 3 c d e − 4 b 3 d 3 − 4 b 2 c 3 e + b 2 c 2 d 2 {\displaystyle {\begin{aligned}\Delta ={}&256a^{3}e^{3}-192a^{2}bde^{2}-128a^{2}c^{2}e^{2}+144a^{2}cd^{2}e-27a^{2}d^{4}\\&+144ab^{2}ce^{2}-6ab^{2}d^{2}e-80abc^{2}de+18abcd^{3}+16ac^{4}e\\&-4ac^{3}d^{2}-27b^{4}e^{2}+18b^{3}cde-4b^{3}d^{3}-4b^{2}c^{3}e+b^{2}c^{2}d^{2}\end{aligned}}} This may be refined by considering the signs of four other polynomials: P = 8 a c − 3 b 2 {\displaystyle P=8ac-3b^{2}} such that ⁠P/8a2⁠ is the second degree coefficient of the associated depressed quartic (see below); R = b 3 + 8 d a 2 − 4 
a b c , {\displaystyle R=b^{3}+8da^{2}-4abc,} such that ⁠R/8a3⁠ is the first degree coefficient of the associated depressed quartic; Δ 0 = c 2 − 3 b d + 12 a e , {\displaystyle \Delta _{0}=c^{2}-3bd+12ae,} which is 0 if the quartic has a triple root; and D = 64 a 3 e − 16 a 2 c 2 + 16 a b 2 c − 16 a 2 b d − 3 b 4 {\displaystyle D=64a^{3}e-16a^{2}c^{2}+16ab^{2}c-16a^{2}bd-3b^{4}} which is 0 if the quartic has two double roots. The possible cases for the nature of the roots are as follows: If ∆ < 0 then the equation has two distinct real roots and two complex conjugate non-real roots. If ∆ > 0 then either the equation's four roots are all real or none is. If P < 0 and D < 0 then all four roots are real and distinct. If P > 0 or D > 0 then there are two pairs of non-real complex conjugate roots. If ∆ = 0 then (and only then) the polynomial has a multiple root. Here are the different cases that can occur: If P < 0 and D < 0 and ∆0 ≠ 0, there are a real double root and two real simple roots. If D > 0 or (P > 0 and (D ≠ 0 or R ≠ 0)), there are a real double root and two complex conjugate roots. If ∆0 = 0 and D ≠ 0, there are a triple root and a simple root, all real. If D = 0, then: If P < 0, there are two real double roots. If P > 0 and R = 0, there are two complex conjugate double roots. If ∆0 = 0, all four roots are equal to −⁠b/4a⁠ There are some cases that do not seem to be covered, but in fact they cannot occur. For example, ∆0 > 0, P = 0 and D ≤ 0 is not a possible case. In fact, if ∆0 > 0 and P = 0 then D > 0, since 16 a 2 Δ 0 = 3 D + P 2 ; {\displaystyle 16a^{2}\Delta _{0}=3D+P^{2};} so this combination is not possible. 
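The case analysis above is mechanical enough to transcribe directly. The following Python sketch (the function name and the returned strings are ours, not from the article) classifies a quartic from the signs of Δ, P, R, Δ0 and D; it assumes exact (e.g. integer or rational) coefficients, since the branches test for equality with zero:

```python
# Classify the roots of a*x^4 + b*x^3 + c*x^2 + d*x + e = 0
# (real coefficients, a != 0) from the signs of the discriminant
# and the four auxiliary polynomials P, R, Delta_0 and D.
def classify_quartic_roots(a, b, c, d, e):
    delta = (256*a**3*e**3 - 192*a**2*b*d*e**2 - 128*a**2*c**2*e**2
             + 144*a**2*c*d**2*e - 27*a**2*d**4 + 144*a*b**2*c*e**2
             - 6*a*b**2*d**2*e - 80*a*b*c**2*d*e + 18*a*b*c*d**3
             + 16*a*c**4*e - 4*a*c**3*d**2 - 27*b**4*e**2
             + 18*b**3*c*d*e - 4*b**3*d**3 - 4*b**2*c**3*e + b**2*c**2*d**2)
    P = 8*a*c - 3*b**2
    R = b**3 + 8*d*a**2 - 4*a*b*c
    delta0 = c**2 - 3*b*d + 12*a*e
    D = 64*a**3*e - 16*a**2*c**2 + 16*a*b**2*c - 16*a**2*b*d - 3*b**4
    if delta < 0:
        return "two distinct real roots, two complex conjugate roots"
    if delta > 0:
        if P < 0 and D < 0:
            return "four distinct real roots"
        return "two pairs of complex conjugate roots"
    # delta == 0: the quartic has a multiple root.
    if D == 0:
        if delta0 == 0:
            return "one real root of multiplicity four"
        if P < 0:
            return "two real double roots"
        if P > 0 and R == 0:
            return "two complex conjugate double roots"
    if delta0 == 0 and D != 0:
        return "a real triple root and a real simple root"
    if P < 0 and D < 0 and delta0 != 0:
        return "a real double root and two real simple roots"
    return "a real double root and two complex conjugate roots"
```

For instance, x4 − 1 (roots ±1, ±i) falls in the Δ < 0 case, while (x2 − 1)2 has Δ = D = 0 with P < 0, giving two real double roots.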
=== General formula for roots === The four roots x1, x2, x3, and x4 for the general quartic equation a x 4 + b x 3 + c x 2 + d x + e = 0 {\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0\,} with a ≠ 0 are given in the following formula, which is deduced from the one in the section on Ferrari's method by back changing the variables (see § Converting to a depressed quartic) and using the formulas for the quadratic and cubic equations. x 1 , 2 = − b 4 a − S ± 1 2 − 4 S 2 − 2 p + q S x 3 , 4 = − b 4 a + S ± 1 2 − 4 S 2 − 2 p − q S {\displaystyle {\begin{aligned}x_{1,2}\ &=-{\frac {b}{4a}}-S\pm {\frac {1}{2}}{\sqrt {-4S^{2}-2p+{\frac {q}{S}}}}\\x_{3,4}\ &=-{\frac {b}{4a}}+S\pm {\frac {1}{2}}{\sqrt {-4S^{2}-2p-{\frac {q}{S}}}}\end{aligned}}} where p and q are the coefficients of the second and of the first degree respectively in the associated depressed quartic p = 8 a c − 3 b 2 8 a 2 q = b 3 − 4 a b c + 8 a 2 d 8 a 3 {\displaystyle {\begin{aligned}p&={\frac {8ac-3b^{2}}{8a^{2}}}\\q&={\frac {b^{3}-4abc+8a^{2}d}{8a^{3}}}\end{aligned}}} and where S = 1 2 − 2 3 p + 1 3 a ( Q + Δ 0 Q ) Q = Δ 1 + Δ 1 2 − 4 Δ 0 3 2 3 {\displaystyle {\begin{aligned}S&={\frac {1}{2}}{\sqrt {-{\frac {2}{3}}\ p+{\frac {1}{3a}}\left(Q+{\frac {\Delta _{0}}{Q}}\right)}}\\Q&={\sqrt[{3}]{\frac {\Delta _{1}+{\sqrt {\Delta _{1}^{2}-4\Delta _{0}^{3}}}}{2}}}\end{aligned}}} (if S = 0 or Q = 0, see § Special cases of the formula, below) with Δ 0 = c 2 − 3 b d + 12 a e Δ 1 = 2 c 3 − 9 b c d + 27 b 2 e + 27 a d 2 − 72 a c e {\displaystyle {\begin{aligned}\Delta _{0}&=c^{2}-3bd+12ae\\\Delta _{1}&=2c^{3}-9bcd+27b^{2}e+27ad^{2}-72ace\end{aligned}}} and Δ 1 2 − 4 Δ 0 3 = − 27 Δ , {\displaystyle \Delta _{1}^{2}-4\Delta _{0}^{3}=-27\Delta \ ,} where Δ {\displaystyle \Delta } is the aforementioned discriminant. For the cube root expression for Q, any of the three cube roots in the complex plane can be used, although if one of them is real that is the natural and simplest one to choose. 
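In floating-point arithmetic the formula can be transcribed almost verbatim. The sketch below (solve_quartic is our name, not a standard routine) uses Python's cmath so that Q and S may be complex; the special cases S = 0 and Q = 0 discussed in the next subsection are deliberately not handled:

```python
import cmath

def solve_quartic(a, b, c, d, e):
    """Four roots of a*x^4 + b*x^3 + c*x^2 + d*x + e = 0 (a != 0) via the
    general formula; assumes the generic case S != 0, Q != 0."""
    # Coefficients of the associated depressed quartic.
    p = (8*a*c - 3*b**2) / (8*a**2)
    q = (b**3 - 4*a*b*c + 8*a**2*d) / (8*a**3)
    delta0 = c**2 - 3*b*d + 12*a*e
    delta1 = 2*c**3 - 9*b*c*d + 27*b**2*e + 27*a*d**2 - 72*a*c*e
    # Any cube root works; complex ** (1/3) picks the principal one.
    Q = ((delta1 + cmath.sqrt(delta1**2 - 4*delta0**3)) / 2) ** (1/3)
    S = cmath.sqrt(-2*p/3 + (Q + delta0/Q) / (3*a)) / 2
    roots = []
    for s_sign in (-1, 1):                     # the -S pair, then the +S pair
        inner = cmath.sqrt(-4*S**2 - 2*p - s_sign * q/S)
        for r_sign in (-1, 1):
            roots.append(-b/(4*a) + s_sign*S + r_sign*inner/2)
    return roots
```

For example, solve_quartic(1, 0, -5, 0, 4) returns, up to rounding error, the roots ±1 and ±2 of x4 − 5x2 + 4 = (x2 − 1)(x2 − 4); here Q is non-real but Q + Δ0/Q is real, illustrating the casus irreducibilis remark below.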
The mathematical expressions of these last four terms are very similar to those of their cubic counterparts. ==== Special cases of the formula ==== If Δ > 0 , {\displaystyle \Delta >0,} the value of Q {\displaystyle Q} is a non-real complex number. In this case, either all roots are non-real or they are all real. In the latter case, the value of S {\displaystyle S} is also real, despite being expressed in terms of Q ; {\displaystyle Q;} this is casus irreducibilis of the cubic function extended to the present context of the quartic. One may prefer to express it in a purely real way, by using trigonometric functions, as follows: S = 1 2 − 2 3 p + 2 3 a Δ 0 cos ⁡ φ 3 {\displaystyle S={\frac {1}{2}}{\sqrt {-{\frac {2}{3}}\ p+{\frac {2}{3a}}{\sqrt {\Delta _{0}}}\cos {\frac {\varphi }{3}}}}} where φ = arccos ⁡ ( Δ 1 2 Δ 0 3 ) . {\displaystyle \varphi =\arccos \left({\frac {\Delta _{1}}{2{\sqrt {\Delta _{0}^{3}}}}}\right).} If Δ ≠ 0 {\displaystyle \Delta \neq 0} and Δ 0 = 0 , {\displaystyle \Delta _{0}=0,} the sign of Δ 1 2 − 4 Δ 0 3 = Δ 1 2 {\displaystyle {\sqrt {\Delta _{1}^{2}-4\Delta _{0}^{3}}}={\sqrt {\Delta _{1}^{2}}}} has to be chosen to have Q ≠ 0 , {\displaystyle Q\neq 0,} that is one should define Δ 1 2 {\displaystyle {\sqrt {\Delta _{1}^{2}}}} as Δ 1 , {\displaystyle \Delta _{1},} maintaining the sign of Δ 1 . {\displaystyle \Delta _{1}.} If S = 0 , {\displaystyle S=0,} then one must change the choice of the cube root in Q {\displaystyle Q} in order to have S ≠ 0. {\displaystyle S\neq 0.} This is always possible except if the quartic may be factored into ( x + b 4 a ) 4 . {\displaystyle \left(x+{\tfrac {b}{4a}}\right)^{4}.} The result is then correct, but misleading because it hides the fact that no cube root is needed in this case. In fact this case may occur only if the numerator of q {\displaystyle q} is zero, in which case the associated depressed quartic is biquadratic; it may thus be solved by the method described below. 
If Δ = 0 {\displaystyle \Delta =0} and Δ 0 = 0 , {\displaystyle \Delta _{0}=0,} and thus also Δ 1 = 0 , {\displaystyle \Delta _{1}=0,} at least three roots are equal to each other, and the roots are rational functions of the coefficients. The triple root x 0 {\displaystyle x_{0}} is a common root of the quartic and its second derivative 2 ( 6 a x 2 + 3 b x + c ) ; {\displaystyle 2(6ax^{2}+3bx+c);} it is thus also the unique root of the remainder of the Euclidean division of the quartic by its second derivative, which is a linear polynomial. The simple root x 1 {\displaystyle x_{1}} can be deduced from x 1 + 3 x 0 = − b / a . {\displaystyle x_{1}+3x_{0}=-b/a.} If Δ = 0 {\displaystyle \Delta =0} and Δ 0 ≠ 0 , {\displaystyle \Delta _{0}\neq 0,} the above expression for the roots is correct but misleading, hiding the fact that the polynomial is reducible and no cube root is needed to represent the roots. === Simpler cases === ==== Reducible quartics ==== Consider the general quartic Q ( x ) = a 4 x 4 + a 3 x 3 + a 2 x 2 + a 1 x + a 0 . {\displaystyle Q(x)=a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}.} It is reducible if Q(x) = R(x)×S(x), where R(x) and S(x) are non-constant polynomials with rational coefficients (or more generally with coefficients in the same field as the coefficients of Q(x)). Such a factorization will take one of two forms: Q ( x ) = ( x − x 1 ) ( b 3 x 3 + b 2 x 2 + b 1 x + b 0 ) {\displaystyle Q(x)=(x-x_{1})(b_{3}x^{3}+b_{2}x^{2}+b_{1}x+b_{0})} or Q ( x ) = ( c 2 x 2 + c 1 x + c 0 ) ( d 2 x 2 + d 1 x + d 0 ) . {\displaystyle Q(x)=(c_{2}x^{2}+c_{1}x+c_{0})(d_{2}x^{2}+d_{1}x+d_{0}).} In either case, the roots of Q(x) are the roots of the factors, which may be computed using the formulas for the roots of a quadratic function or cubic function. Detecting the existence of such factorizations can be done using the resolvent cubic of Q(x). 
It turns out that: if we are working over R (that is, if coefficients are restricted to be real numbers) (or, more generally, over some real closed field) then there is always such a factorization; if we are working over Q (that is, if coefficients are restricted to be rational numbers) then there is an algorithm to determine whether or not Q(x) is reducible and, if it is, how to express it as a product of polynomials of smaller degree. In fact, several methods of solving quartic equations (Ferrari's method, Descartes' method, and, to a lesser extent, Euler's method) are based upon finding such factorizations. ==== Biquadratic equation ==== If a3 = a1 = 0 then the function Q ( x ) = a 4 x 4 + a 2 x 2 + a 0 {\displaystyle Q(x)=a_{4}x^{4}+a_{2}x^{2}+a_{0}} is called a biquadratic function; equating it to zero defines a biquadratic equation, which is easy to solve as follows Let the auxiliary variable z = x2. Then Q(x) becomes a quadratic q in z: q(z) = a4z2 + a2z + a0. Let z+ and z− be the roots of q(z). Then the roots of the quartic Q(x) are x 1 = + z + , x 2 = − z + , x 3 = + z − , x 4 = − z − . {\displaystyle {\begin{aligned}x_{1}&=+{\sqrt {z_{+}}},\\x_{2}&=-{\sqrt {z_{+}}},\\x_{3}&=+{\sqrt {z_{-}}},\\x_{4}&=-{\sqrt {z_{-}}}.\end{aligned}}} ==== Quasi-palindromic equation ==== The polynomial P ( x ) = a 0 x 4 + a 1 x 3 + a 2 x 2 + a 1 m x + a 0 m 2 {\displaystyle P(x)=a_{0}x^{4}+a_{1}x^{3}+a_{2}x^{2}+a_{1}mx+a_{0}m^{2}} is almost palindromic, as P(mx) = ⁠x4/m2⁠P(⁠m/x⁠) (it is palindromic if m = 1). The change of variables z = x + ⁠m/x⁠ in ⁠P(x)/x2⁠ = 0 produces the quadratic equation a0z2 + a1z + a2 − 2ma0 = 0. Since x2 − xz + m = 0, the quartic equation P(x) = 0 may be solved by applying the quadratic formula twice. === Solution methods === ==== Converting to a depressed quartic ==== For solving purposes, it is generally better to convert the quartic into a depressed quartic by the following simple change of variable. 
All formulas are simpler and some methods work only in this case. The roots of the original quartic are easily recovered from that of the depressed quartic by the reverse change of variable. Let a 4 x 4 + a 3 x 3 + a 2 x 2 + a 1 x + a 0 = 0 {\displaystyle a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}=0} be the general quartic equation we want to solve. Dividing by a4, provides the equivalent equation x4 + bx3 + cx2 + dx + e = 0, with b = ⁠a3/a4⁠, c = ⁠a2/a4⁠, d = ⁠a1/a4⁠, and e = ⁠a0/a4⁠. Substituting y − ⁠b/4⁠ for x gives, after regrouping the terms, the equation y4 + py2 + qy + r = 0, where p = 8 c − 3 b 2 8 = 8 a 2 a 4 − 3 a 3 2 8 a 4 2 q = b 3 − 4 b c + 8 d 8 = a 3 3 − 4 a 2 a 3 a 4 + 8 a 1 a 4 2 8 a 4 3 r = − 3 b 4 + 256 e − 64 b d + 16 b 2 c 256 = − 3 a 3 4 + 256 a 0 a 4 3 − 64 a 1 a 3 a 4 2 + 16 a 2 a 3 2 a 4 256 a 4 4 . {\displaystyle {\begin{aligned}p&={\frac {8c-3b^{2}}{8}}={\frac {8a_{2}a_{4}-3{a_{3}}^{2}}{8{a_{4}}^{2}}}\\q&={\frac {b^{3}-4bc+8d}{8}}={\frac {{a_{3}}^{3}-4a_{2}a_{3}a_{4}+8a_{1}{a_{4}}^{2}}{8{a_{4}}^{3}}}\\r&={\frac {-3b^{4}+256e-64bd+16b^{2}c}{256}}={\frac {-3{a_{3}}^{4}+256a_{0}{a_{4}}^{3}-64a_{1}a_{3}{a_{4}}^{2}+16a_{2}{a_{3}}^{2}a_{4}}{256{a_{4}}^{4}}}.\end{aligned}}} If y0 is a root of this depressed quartic, then y0 − ⁠b/4⁠ (that is y0 − ⁠a3/4a4⁠) is a root of the original quartic and every root of the original quartic can be obtained by this process. ==== Ferrari's solution ==== As explained in the preceding section, we may start with the depressed quartic equation y 4 + p y 2 + q y + r = 0. {\displaystyle y^{4}+py^{2}+qy+r=0.} This depressed quartic can be solved by means of a method discovered by Lodovico Ferrari. The depressed equation may be rewritten (this is easily verified by expanding the square and regrouping all terms in the left-hand side) as ( y 2 + p 2 ) 2 = − q y − r + p 2 4 . 
{\displaystyle \left(y^{2}+{\frac {p}{2}}\right)^{2}=-qy-r+{\frac {p^{2}}{4}}.} Then, we introduce a variable m into the factor on the left-hand side by adding 2y2m + pm + m2 to both sides. After regrouping the coefficients of the power of y on the right-hand side, this gives the equation ( y 2 + p 2 + m ) 2 = 2 m y 2 − q y + m 2 + m p + p 2 4 − r ( 1 ) {\displaystyle \left(y^{2}+{\frac {p}{2}}+m\right)^{2}=2my^{2}-qy+m^{2}+mp+{\frac {p^{2}}{4}}-r\qquad (1)} which is equivalent to the original equation, whichever value is given to m. As the value of m may be arbitrarily chosen, we will choose it in order to complete the square on the right-hand side. This implies that the discriminant in y of this quadratic equation is zero, that is m is a root of the equation ( − q ) 2 − 4 ( 2 m ) ( m 2 + p m + p 2 4 − r ) = 0 , {\displaystyle (-q)^{2}-4(2m)\left(m^{2}+pm+{\frac {p^{2}}{4}}-r\right)=0,\,} which may be rewritten as 8 m 3 + 8 p m 2 + ( 2 p 2 − 8 r ) m − q 2 = 0. ( 1a ) {\displaystyle 8m^{3}+8pm^{2}+\left(2p^{2}-8r\right)m-q^{2}=0.\qquad (\mathrm {1a} )} This is the resolvent cubic of the quartic equation. The value of m may thus be obtained from Cardano's formula. When m is a root of this equation, the right-hand side of equation (1) is the square ( 2 m y − q 2 2 m ) 2 . {\displaystyle \left({\sqrt {2m}}y-{\frac {q}{2{\sqrt {2m}}}}\right)^{2}.} However, this induces a division by zero if m = 0. This implies q = 0, and thus that the depressed equation is bi-quadratic, and may be solved by an easier method (see above). This was not a problem at the time of Ferrari, when one solved only explicitly given equations with numeric coefficients. For a general formula that is always true, one thus needs to choose a root of the cubic equation such that m ≠ 0. This is always possible except for the depressed equation y4 = 0. Now, if m is a root of the cubic equation such that m ≠ 0, equation (1) becomes ( y 2 + p 2 + m ) 2 = ( y 2 m − q 2 2 m ) 2 . {\displaystyle \left(y^{2}+{\frac {p}{2}}+m\right)^{2}=\left(y{\sqrt {2m}}-{\frac {q}{2{\sqrt {2m}}}}\right)^{2}.} This equation is of the form M2 = N2, which can be rearranged as M2 − N2 = 0 or (M + N)(M − N) = 0. Therefore, equation (1) may be rewritten as ( y 2 + p 2 + m + 2 m y − q 2 2 m ) ( y 2 + p 2 + m − 2 m y + q 2 2 m ) = 0.
{\displaystyle \left(y^{2}+{\frac {p}{2}}+m+{\sqrt {2m}}y-{\frac {q}{2{\sqrt {2m}}}}\right)\left(y^{2}+{\frac {p}{2}}+m-{\sqrt {2m}}y+{\frac {q}{2{\sqrt {2m}}}}\right)=0.} This equation is easily solved by applying to each factor the quadratic formula. Solving them we may write the four roots as y = ± 1 2 m ± 2 − ( 2 p + 2 m ± 1 2 q m ) 2 , {\displaystyle y={\pm _{1}{\sqrt {2m}}\pm _{2}{\sqrt {-\left(2p+2m\pm _{1}{{\sqrt {2}}q \over {\sqrt {m}}}\right)}} \over 2},} where ±1 and ±2 denote either + or −. As the two occurrences of ±1 must denote the same sign, this leaves four possibilities, one for each root. Therefore, the solutions of the original quartic equation are x = − a 3 4 a 4 + ± 1 2 m ± 2 − ( 2 p + 2 m ± 1 2 q m ) 2 . {\displaystyle x=-{a_{3} \over 4a_{4}}+{\pm _{1}{\sqrt {2m}}\pm _{2}{\sqrt {-\left(2p+2m\pm _{1}{{\sqrt {2}}q \over {\sqrt {m}}}\right)}} \over 2}.} A comparison with the general formula above shows that √2m = 2S. ==== Descartes' solution ==== Descartes introduced in 1637 the method of finding the roots of a quartic polynomial by factoring it into two quadratic ones. Let x 4 + b x 3 + c x 2 + d x + e = ( x 2 + s x + t ) ( x 2 + u x + v ) = x 4 + ( s + u ) x 3 + ( t + v + s u ) x 2 + ( s v + t u ) x + t v {\displaystyle {\begin{aligned}x^{4}+bx^{3}+cx^{2}+dx+e&=(x^{2}+sx+t)(x^{2}+ux+v)\\&=x^{4}+(s+u)x^{3}+(t+v+su)x^{2}+(sv+tu)x+tv\end{aligned}}} By equating coefficients, this results in the following system of equations: { b = s + u c = t + v + s u d = s v + t u e = t v {\displaystyle \left\{{\begin{array}{l}b=s+u\\c=t+v+su\\d=sv+tu\\e=tv\end{array}}\right.} This can be simplified by starting again with the depressed quartic y4 + py2 + qy + r, which can be obtained by substituting y − b/4 for x. 
Since the coefficient of y3 is 0, we get s = −u, and: { p + u 2 = t + v q = u ( t − v ) r = t v {\displaystyle \left\{{\begin{array}{l}p+u^{2}=t+v\\q=u(t-v)\\r=tv\end{array}}\right.} One can now eliminate both t and v by doing the following: u 2 ( p + u 2 ) 2 − q 2 = u 2 ( t + v ) 2 − u 2 ( t − v ) 2 = u 2 [ ( t + v + ( t − v ) ) ( t + v − ( t − v ) ) ] = u 2 ( 2 t ) ( 2 v ) = 4 u 2 t v = 4 u 2 r {\displaystyle {\begin{aligned}u^{2}(p+u^{2})^{2}-q^{2}&=u^{2}(t+v)^{2}-u^{2}(t-v)^{2}\\&=u^{2}[(t+v+(t-v))(t+v-(t-v))]\\&=u^{2}(2t)(2v)\\&=4u^{2}tv\\&=4u^{2}r\end{aligned}}} If we set U = u2, then solving this equation becomes finding the roots of the resolvent cubic U 3 + 2 p U 2 + ( p 2 − 4 r ) U − q 2 = 0 , ( 2 ) {\displaystyle U^{3}+2pU^{2}+\left(p^{2}-4r\right)U-q^{2}=0,\qquad (2)} which is done elsewhere. This resolvent cubic is equivalent to the resolvent cubic given above (equation (1a)), as can be seen by substituting U = 2m. If u is a square root of a non-zero root of this resolvent (such a non-zero root exists except for the quartic x4, which is trivially factored), then { s = − u 2 t = p + u 2 + q / u 2 v = p + u 2 − q / u {\displaystyle \left\{{\begin{array}{l}s=-u\\2t=p+u^{2}+q/u\\2v=p+u^{2}-q/u\end{array}}\right.} The symmetries in this solution are as follows. There are three roots of the cubic, corresponding to the three ways that a quartic can be factored into two quadratics, and choosing positive or negative values of u for the square root of U merely exchanges the two quadratics with one another. The above solution shows that a quartic polynomial with rational coefficients and a zero coefficient on the cubic term is factorable into quadratics with rational coefficients if and only if either the resolvent cubic (2) has a non-zero root which is the square of a rational, or p2 − 4r is the square of a rational and q = 0; this can readily be checked using the rational root test.
Consider a depressed quartic x4 + px2 + qx + r. Observe that, if x4 + px2 + qx + r = (x2 + sx + t)(x2 − sx + v), r1 and r2 are the roots of x2 + sx + t, r3 and r4 are the roots of x2 − sx + v, then the roots of x4 + px2 + qx + r are r1, r2, r3, and r4, r1 + r2 = −s, r3 + r4 = s. Therefore, (r1 + r2)(r3 + r4) = −s2. In other words, −(r1 + r2)(r3 + r4) is one of the roots of the resolvent cubic (2) and this suggests that the roots of that cubic are equal to −(r1 + r2)(r3 + r4), −(r1 + r3)(r2 + r4), and −(r1 + r4)(r2 + r3). This is indeed true and it follows from Vieta's formulas. It also follows from Vieta's formulas, together with the fact that we are working with a depressed quartic, that r1 + r2 + r3 + r4 = 0. (Of course, this also follows from the fact that r1 + r2 + r3 + r4 = −s + s.) Therefore, if α, β, and γ are the roots of the resolvent cubic, then the numbers r1, r2, r3, and r4 are such that { r 1 + r 2 + r 3 + r 4 = 0 ( r 1 + r 2 ) ( r 3 + r 4 ) = − α ( r 1 + r 3 ) ( r 2 + r 4 ) = − β ( r 1 + r 4 ) ( r 2 + r 3 ) = − γ . {\displaystyle \left\{{\begin{array}{l}r_{1}+r_{2}+r_{3}+r_{4}=0\\(r_{1}+r_{2})(r_{3}+r_{4})=-\alpha \\(r_{1}+r_{3})(r_{2}+r_{4})=-\beta \\(r_{1}+r_{4})(r_{2}+r_{3})=-\gamma {\text{.}}\end{array}}\right.} It is a consequence of the first two equations that r1 + r2 is a square root of α and that r3 + r4 is the other square root of α. For the same reason, r1 + r3 is a square root of β, r2 + r4 is the other square root of β, r1 + r4 is a square root of γ, r2 + r3 is the other square root of γ. Therefore, the numbers r1, r2, r3, and r4 are such that { r 1 + r 2 + r 3 + r 4 = 0 r 1 + r 2 = α r 1 + r 3 = β r 1 + r 4 = γ ; {\displaystyle \left\{{\begin{array}{l}r_{1}+r_{2}+r_{3}+r_{4}=0\\r_{1}+r_{2}={\sqrt {\alpha }}\\r_{1}+r_{3}={\sqrt {\beta }}\\r_{1}+r_{4}={\sqrt {\gamma }}{\text{;}}\end{array}}\right.} the sign of the square roots will be dealt with below. 
The only solution of this system is: { r 1 = α + β + γ 2 r 2 = α − β − γ 2 r 3 = − α + β − γ 2 r 4 = − α − β + γ 2 . {\displaystyle \left\{{\begin{array}{l}r_{1}={\frac {{\sqrt {\alpha }}+{\sqrt {\beta }}+{\sqrt {\gamma }}}{2}}\\[2mm]r_{2}={\frac {{\sqrt {\alpha }}-{\sqrt {\beta }}-{\sqrt {\gamma }}}{2}}\\[2mm]r_{3}={\frac {-{\sqrt {\alpha }}+{\sqrt {\beta }}-{\sqrt {\gamma }}}{2}}\\[2mm]r_{4}={\frac {-{\sqrt {\alpha }}-{\sqrt {\beta }}+{\sqrt {\gamma }}}{2}}{\text{.}}\end{array}}\right.} Since, in general, there are two choices for each square root, it might look as if this provides 8 (= 23) choices for the set {r1, r2, r3, r4}, but, in fact, it provides no more than 2 such choices, because the consequence of replacing one of the square roots by the symmetric one is that the set {r1, r2, r3, r4} becomes the set {−r1, −r2, −r3, −r4}. In order to determine the right sign of the square roots, one simply chooses some square root for each of the numbers α, β, and γ and uses them to compute the numbers r1, r2, r3, and r4 from the previous equalities. Then, one computes the number √α√β√γ. Since α, β, and γ are the roots of (2), it is a consequence of Vieta's formulas that their product is equal to q2 and therefore that √α√β√γ = ±q. But a straightforward computation shows that √α√β√γ = r1r2r3 + r1r2r4 + r1r3r4 + r2r3r4. If this number is −q, then the choice of the square roots was a good one (again, by Vieta's formulas); otherwise, the roots of the polynomial will be −r1, −r2, −r3, and −r4, which are the numbers obtained if one of the square roots is replaced by the symmetric one (or, what amounts to the same thing, if each of the three square roots is replaced by the symmetric one). This argument suggests another way of choosing the square roots: pick any square root √α of α and any square root √β of β; define √γ as − q α β {\displaystyle -{\frac {q}{{\sqrt {\alpha }}{\sqrt {\beta }}}}} . 
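This recipe can be exercised numerically. In the Python sketch below (the function names are ours, not from the article), the resolvent cubic (2) is solved with Cardano's formula via cmath, and the signs of the square roots are fixed by the suggestion just given, defining √γ as −q/(√α√β); the case q = 0 is excluded, since then a resolvent root is zero:

```python
import cmath

def cubic_roots(a2, a1, a0):
    """The three complex roots of t^3 + a2*t^2 + a1*t + a0 = 0 (Cardano)."""
    p = a1 - a2**2 / 3                        # depressed cubic: w^3 + p*w + q
    q = 2 * a2**3 / 27 - a2 * a1 / 3 + a0
    disc = cmath.sqrt(q*q / 4 + p**3 / 27)
    u = (-q/2 + disc) ** (1/3)
    if abs(u) < 1e-12:                        # avoid u = 0: take the other sign
        u = (-q/2 - disc) ** (1/3)
    w = cmath.exp(2j * cmath.pi / 3)          # primitive cube root of unity
    return [v - p / (3*v) - a2/3 for v in (u, u*w, u*w*w)]

def quartic_roots_euler(p, q, r):
    """Roots of y^4 + p*y^2 + q*y + r = 0 by Euler's method (assumes q != 0)."""
    # alpha, beta, gamma are the roots of the resolvent cubic (2).
    alpha, beta, gamma = cubic_roots(2*p, p**2 - 4*r, -q**2)
    sa, sb = cmath.sqrt(alpha), cmath.sqrt(beta)
    sg = -q / (sa * sb)       # enforces sqrt(alpha)*sqrt(beta)*sqrt(gamma) = -q
    return [( sa + sb + sg) / 2,
            ( sa - sb - sg) / 2,
            (-sa + sb - sg) / 2,
            (-sa - sb + sg) / 2]
```

For example, quartic_roots_euler(-9, -4, 12) recovers, up to floating-point error, the roots 1, 3 and the double root −2 of y4 − 9y2 − 4y + 12 = (y − 1)(y − 3)(y + 2)2.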
Of course, this will make no sense if α or β is equal to 0, but 0 is a root of (2) only when q = 0, that is, only when we are dealing with a biquadratic equation, in which case there is a much simpler approach. ==== Solving by Lagrange resolvent ==== The symmetric group S4 on four elements has the Klein four-group as a normal subgroup. This suggests using a resolvent cubic whose roots may be variously described as a discrete Fourier transform or a Hadamard matrix transform of the roots; see Lagrange resolvents for the general method. Denote by xi, for i from 0 to 3, the four roots of x4 + bx3 + cx2 + dx + e. If we set s 0 = 1 2 ( x 0 + x 1 + x 2 + x 3 ) , s 1 = 1 2 ( x 0 − x 1 + x 2 − x 3 ) , s 2 = 1 2 ( x 0 + x 1 − x 2 − x 3 ) , s 3 = 1 2 ( x 0 − x 1 − x 2 + x 3 ) , {\displaystyle {\begin{aligned}s_{0}&={\tfrac {1}{2}}(x_{0}+x_{1}+x_{2}+x_{3}),\\[4pt]s_{1}&={\tfrac {1}{2}}(x_{0}-x_{1}+x_{2}-x_{3}),\\[4pt]s_{2}&={\tfrac {1}{2}}(x_{0}+x_{1}-x_{2}-x_{3}),\\[4pt]s_{3}&={\tfrac {1}{2}}(x_{0}-x_{1}-x_{2}+x_{3}),\end{aligned}}} then since the transformation is an involution we may express the roots in terms of the four si in exactly the same way. Since we know the value s0 = −⁠b/2⁠, we only need the values for s1, s2 and s3. These are the roots of the polynomial ( s 2 − s 1 2 ) ( s 2 − s 2 2 ) ( s 2 − s 3 2 ) . {\displaystyle (s^{2}-{s_{1}}^{2})(s^{2}-{s_{2}}^{2})(s^{2}-{s_{3}}^{2}).} Substituting the si by their values in terms of the xi, this polynomial may be expanded in a polynomial in s whose coefficients are symmetric polynomials in the xi. By the fundamental theorem of symmetric polynomials, these coefficients may be expressed as polynomials in the coefficients of the monic quartic. If, for simplification, we suppose that the quartic is depressed, that is b = 0, this results in the polynomial s 6 + 2 c s 4 + ( c 2 − 4 e ) s 2 − d 2 . ( 3 ) {\displaystyle s^{6}+2cs^{4}+\left(c^{2}-4e\right)s^{2}-d^{2}.\qquad (3)} This polynomial is of degree six, but only of degree three in s2, and so the corresponding equation is solvable by the method described in the article about cubic function.
By substituting the roots in the expression of the xi in terms of the si, we obtain expressions for the roots. In fact we obtain, apparently, several expressions, depending on the numbering of the roots of the cubic polynomial and of the signs given to their square roots. All these different expressions may be deduced from one of them by simply changing the numbering of the xi. These expressions are unnecessarily complicated, involving the cube roots of unity, which can be avoided as follows. If s is any non-zero root of (3), and if we set F 1 ( x ) = x 2 + s x + c 2 + s 2 2 − d 2 s F 2 ( x ) = x 2 − s x + c 2 + s 2 2 + d 2 s {\displaystyle {\begin{aligned}F_{1}(x)&=x^{2}+sx+{\frac {c}{2}}+{\frac {s^{2}}{2}}-{\frac {d}{2s}}\\F_{2}(x)&=x^{2}-sx+{\frac {c}{2}}+{\frac {s^{2}}{2}}+{\frac {d}{2s}}\end{aligned}}} then F 1 ( x ) × F 2 ( x ) = x 4 + c x 2 + d x + e . {\displaystyle F_{1}(x)\times F_{2}(x)=x^{4}+cx^{2}+dx+e.} We therefore can solve the quartic by solving for s and then solving for the roots of the two factors using the quadratic formula. This gives exactly the same formula for the roots as the one provided by Descartes' method. ==== Solving with algebraic geometry ==== There is an alternative solution using algebraic geometry. In brief, one interprets the roots as the intersection of two quadratic curves, then finds the three reducible quadratic curves (pairs of lines) that pass through these points (this corresponds to the resolvent cubic, the pairs of lines being the Lagrange resolvents), and then uses these linear equations to solve the quadratic. The four roots of the depressed quartic x4 + px2 + qx + r = 0 may also be expressed as the x coordinates of the intersections of the two quadratic equations y2 + py + qx + r = 0 and y − x2 = 0, i.e., using the substitution y = x2. That two quadratics intersect in four points is an instance of Bézout's theorem. Explicitly, the four points are Pi ≔ (xi, xi2) for the four roots xi of the quartic.
These four points are not collinear because they lie on the irreducible quadratic y = x2 and thus there is a 1-parameter family of quadratics (a pencil of curves) passing through these points. Writing the projectivization of the two quadratics as quadratic forms in three variables: F 1 ( X , Y , Z ) := Y 2 + p Y Z + q X Z + r Z 2 , F 2 ( X , Y , Z ) := Y Z − X 2 {\displaystyle {\begin{aligned}F_{1}(X,Y,Z)&:=Y^{2}+pYZ+qXZ+rZ^{2},\\F_{2}(X,Y,Z)&:=YZ-X^{2}\end{aligned}}} the pencil is given by the forms λF1 + μF2 for any point [λ, μ] in the projective line — in other words, where λ and μ are not both zero, and multiplying a quadratic form by a constant does not change its quadratic curve of zeros. This pencil contains three reducible quadratics, each corresponding to a pair of lines, each passing through two of the four points, which can be done ( 4 2 ) {\displaystyle \textstyle {\binom {4}{2}}} = 6 different ways. Denote these Q1 = L12 + L34, Q2 = L13 + L24, and Q3 = L14 + L23. Given any two of these, their intersection has exactly the four points. The reducible quadratics, in turn, may be determined by expressing the quadratic form λF1 + μF2 as a 3×3 matrix: reducible quadratics correspond to this matrix being singular, which is equivalent to its determinant being zero, and the determinant is a homogeneous degree three polynomial in λ and μ and corresponds to the resolvent cubic. == See also == Linear function – Linear map or polynomial function of degree one Quadratic function – Polynomial function of degree two Cubic function – Polynomial function of degree 3 Quintic function – Polynomial function of degree 5 == Notes == ^α For the purposes of this article, e is used as a variable as opposed to its conventional use as Euler's number (except when otherwise specified). == References == == Further reading == Carpenter, W. (1966). "On the solution of the real quartic". Mathematics Magazine. 39 (1): 28–30. doi:10.2307/2688990. JSTOR 2688990. 
Yacoub, M.D.; Fraidenraich, G. (July 2012). "A solution to the quartic equation". Mathematical Gazette. 96: 271–275. doi:10.1017/s002555720000454x. S2CID 124512391. == External links == Quartic formula as four single equations at PlanetMath. Ferrari's achievement
Wikipedia/Quartic_function
In mathematics, a noncommutative ring is a ring whose multiplication is not commutative; that is, there exist a and b in the ring such that ab and ba are different. Equivalently, a noncommutative ring is a ring that is not a commutative ring. Noncommutative algebra is the part of ring theory devoted to the study of properties of noncommutative rings, including the properties that apply also to commutative rings. Sometimes the term noncommutative ring is used instead of ring to refer to an unspecified ring which is not necessarily commutative, and hence may be commutative. Generally, this is for emphasizing that the studied properties are not restricted to commutative rings, as, in many contexts, ring is used as a shorthand for commutative ring. Although some authors do not assume that rings have a multiplicative identity, in this article we make that assumption unless stated otherwise. == Examples == Some examples of noncommutative rings: The matrix ring of n-by-n matrices over the real numbers, where n > 1 Hamilton's quaternions Any group ring constructed from a group that is not abelian Some examples of rings that are not typically commutative (but may be commutative in simple cases): The free ring Z ⟨ x 1 , … , x n ⟩ {\displaystyle \mathbb {Z} \langle x_{1},\ldots ,x_{n}\rangle } generated by a finite set, an example of two non-equal elements being 2 x 1 x 2 + x 2 x 1 ≠ 3 x 1 x 2 {\displaystyle 2x_{1}x_{2}+x_{2}x_{1}\neq 3x_{1}x_{2}} The Weyl algebra A n ( C ) {\displaystyle A_{n}(\mathbb {C} )} , being the ring of polynomial differential operators defined over affine space; for example, A 1 ( C ) ≅ C ⟨ x , y ⟩ / ( x y − y x − 1 ) {\displaystyle A_{1}(\mathbb {C} )\cong \mathbb {C} \langle x,y\rangle /(xy-yx-1)} , where the ideal corresponds to the commutator The quotient ring C ⟨ x 1 , … , x n ⟩ / ( x i x j − q i j x j x i ) {\displaystyle \mathbb {C} \langle x_{1},\ldots ,x_{n}\rangle /(x_{i}x_{j}-q_{ij}x_{j}x_{i})} , called a quantum plane, where q i j ∈ C
{\displaystyle q_{ij}\in \mathbb {C} } Any Clifford algebra can be described explicitly using an algebra presentation: given an F {\displaystyle \mathbb {F} } -vector space V {\displaystyle V} of dimension n with a quadratic form q : V ⊗ V → F {\displaystyle q:V\otimes V\to \mathbb {F} } , the associated Clifford algebra has the presentation F ⟨ e 1 , … , e n ⟩ / ( e i e j + e j e i − q ( e i , e j ) ) {\displaystyle \mathbb {F} \langle e_{1},\ldots ,e_{n}\rangle /(e_{i}e_{j}+e_{j}e_{i}-q(e_{i},e_{j}))} for any basis e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} of V {\displaystyle V} , Superalgebras are another example of noncommutative rings; they can be presented as C [ x 1 , … , x n ] ⟨ θ 1 , … , θ m ⟩ / ( θ i θ j + θ j θ i ) {\displaystyle \mathbb {C} [x_{1},\ldots ,x_{n}]\langle \theta _{1},\ldots ,\theta _{m}\rangle /(\theta _{i}\theta _{j}+\theta _{j}\theta _{i})} There are finite noncommutative rings: for example, the n-by-n matrices over a finite field, for n > 1. The smallest noncommutative ring is the ring of the upper triangular matrices over the field with two elements; it has eight elements and all noncommutative rings with eight elements are isomorphic to it or to its opposite. == History == Beginning with division rings arising from geometry, the study of noncommutative rings has grown into a major area of modern algebra. The theory and exposition of noncommutative rings was expanded and refined in the 19th and 20th centuries by numerous authors. An incomplete list of such contributors includes E. Artin, Richard Brauer, P. M. Cohn, W. R. Hamilton, I. N. Herstein, N. Jacobson, K. Morita, E. Noether, Ø. Ore, J. Wedderburn and others. == Differences between commutative and noncommutative algebra == Because noncommutative rings of scientific interest are more complicated than commutative rings, their structure, properties and behavior are less well understood. 
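The basic asymmetry is easy to exhibit concretely. The smallest example mentioned above, the eight-element ring of upper triangular 2×2 matrices over the field with two elements, is small enough to check exhaustively; a quick Python sketch (encoding and names ours, with all arithmetic taken mod 2):

```python
from itertools import product

# Elements of the ring of upper triangular 2x2 matrices over F_2 = {0, 1}:
# each matrix [[a, b], [0, c]] is encoded as the triple (a, b, c).
def mul(m, n):
    (a, b, c), (d, e, f) = m, n
    return ((a * d) % 2, (a * e + b * f) % 2, (c * f) % 2)

ring = list(product((0, 1), repeat=3))        # all eight elements
noncommuting = [(m, n) for m in ring for n in ring if mul(m, n) != mul(n, m)]
assert noncommuting                           # the ring is not commutative
print(noncommuting[0])
```

For instance, the pair [[1, 0], [0, 0]] and [[0, 1], [0, 0]] multiplies to [[0, 1], [0, 0]] in one order and to the zero matrix in the other, witnessing noncommutativity directly.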
A great deal of work has been done successfully generalizing some results from commutative rings to noncommutative rings. A major difference between rings which are and are not commutative is the necessity to separately consider right ideals and left ideals. It is common for noncommutative ring theorists to enforce a condition on one of these types of ideals while not requiring it to hold for the opposite side. For commutative rings, the left–right distinction does not exist. == Important classes == === Division rings === A division ring, also called a skew field, is a ring in which division is possible. Specifically, it is a nonzero ring in which every nonzero element a has a multiplicative inverse, i.e., an element x with a · x = x · a = 1. Stated differently, a ring is a division ring if and only if its group of units is the set of all nonzero elements. Division rings differ from fields only in that their multiplication is not required to be commutative. However, by Wedderburn's little theorem all finite division rings are commutative and therefore finite fields. Historically, division rings were sometimes referred to as fields, while fields were called "commutative fields". === Semisimple rings === A module over a (not necessarily commutative) ring with unity is said to be semisimple (or completely reducible) if it is the direct sum of simple (irreducible) submodules. A ring is said to be (left)-semisimple if it is semisimple as a left module over itself. Surprisingly, a left-semisimple ring is also right-semisimple and vice versa. The left/right distinction is therefore unnecessary. === Semiprimitive rings === A semiprimitive ring or Jacobson semisimple ring or J-semisimple ring is a ring whose Jacobson radical is zero. This is a type of ring more general than a semisimple ring, but where simple modules still provide enough information about the ring. 
Rings such as the ring of integers are semiprimitive, and an Artinian semiprimitive ring is just a semisimple ring. Semiprimitive rings can be understood as subdirect products of primitive rings, which are described by the Jacobson density theorem. === Simple rings === A simple ring is a non-zero ring that has no two-sided ideal besides the zero ideal and itself. A simple ring can always be considered as a simple algebra. Rings which are simple as rings but not simple as modules over themselves do exist: the full matrix ring over a field does not have any nontrivial two-sided ideals (since any ideal of M(n,R) is of the form M(n,I) with I an ideal of R), but has nontrivial left ideals (namely, the sets of matrices which have some fixed zero columns). According to the Artin–Wedderburn theorem, every simple ring that is left or right Artinian is a matrix ring over a division ring. In particular, the only simple rings that are a finite-dimensional vector space over the real numbers are rings of matrices over either the real numbers, the complex numbers, or the quaternions. Any quotient of a ring by a maximal ideal is a simple ring. In particular, a field is a simple ring. A ring R is simple if and only if its opposite ring Ro is simple. An example of a simple ring that is not a matrix ring over a division ring is the Weyl algebra. == Important theorems == === Wedderburn's little theorem === Wedderburn's little theorem states that every finite domain is a field. In other words, for finite rings, there is no distinction between domains, division rings and fields. The Artin–Zorn theorem generalizes the theorem to alternative rings: every finite simple alternative ring is a field. === Artin–Wedderburn theorem === The Artin–Wedderburn theorem is a classification theorem for semisimple rings and semisimple algebras.
The theorem states that an (Artinian) semisimple ring R is isomorphic to a product of finitely many ni-by-ni matrix rings over division rings Di, for some integers ni, both of which are uniquely determined up to permutation of the index i. In particular, any simple left or right Artinian ring is isomorphic to an n-by-n matrix ring over a division ring D, where both n and D are uniquely determined. As a direct corollary, the Artin–Wedderburn theorem implies that every simple ring that is finite-dimensional over a division ring (a simple algebra) is a matrix ring. This is Joseph Wedderburn's original result. Emil Artin later generalized it to the case of Artinian rings. === Jacobson density theorem === The Jacobson density theorem is a theorem concerning simple modules over a ring R. The theorem can be applied to show that any primitive ring can be viewed as a "dense" subring of the ring of linear transformations of a vector space. This theorem first appeared in the literature in 1945, in the famous paper "Structure Theory of Simple Rings Without Finiteness Assumptions" by Nathan Jacobson. This can be viewed as a kind of generalization of the Artin-Wedderburn theorem's conclusion about the structure of simple Artinian rings. More formally, the theorem can be stated as follows: The Jacobson Density Theorem. Let U be a simple right R-module, D = End(UR), and X ⊂ U a finite and D-linearly independent set. If A is a D-linear transformation on U then there exists r ∈ R such that A(x) = x · r for all x in X. === Nakayama's lemma === Let J(R) be the Jacobson radical of R. If U is a right module over a ring, R, and I is a right ideal in R, then define U·I to be the set of all (finite) sums of elements of the form u·i, where · is simply the action of R on U. Necessarily, U·I is a submodule of U. If V is a maximal submodule of U, then U/V is simple. So U·J(R) is necessarily a subset of V, by the definition of J(R) and the fact that U/V is simple. 
Thus, if U contains at least one (proper) maximal submodule, U·J(R) is a proper submodule of U. However, this need not hold for arbitrary modules U over R, for U need not contain any maximal submodules. Naturally, if U is a Noetherian module, this holds. If R is Noetherian, and U is finitely generated, then U is a Noetherian module over R, and the conclusion is satisfied. Somewhat remarkable is that the weaker assumption, namely that U is finitely generated as an R-module (and no finiteness assumption on R), is sufficient to guarantee the conclusion. This is essentially the statement of Nakayama's lemma. Precisely, one has the following. Nakayama's lemma: Let U be a finitely generated right module over a ring R. If U is a non-zero module, then U·J(R) is a proper submodule of U. A version of the lemma holds for right modules over non-commutative unitary rings R. The resulting theorem is sometimes known as the Jacobson–Azumaya theorem. === Noncommutative localization === Localization is a systematic method of adding multiplicative inverses to a ring, and is usually applied to commutative rings. Given a ring R and a subset S, one wants to construct some ring R* and a ring homomorphism from R to R*, such that the image of S consists of units (invertible elements) in R*. Further, one wants R* to be the 'best possible' or 'most general' way to do this – in the usual fashion this should be expressed by a universal property. The localization of R by S is usually denoted by S−1R; however other notations are used in some important special cases. If S is the set of the nonzero elements of an integral domain, then the localization is the field of fractions and is thus usually denoted Frac(R). Localizing non-commutative rings is more difficult; the localization does not exist for every set S of prospective units. One condition which ensures that the localization exists is the Ore condition.
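As a commutative warm-up (an illustration added here, not part of the article): localizing the integers at S = {1, 10, 100, …} produces the subring Z[1/10] of decimal fractions inside Q, in which every element of S has become a unit. A quick membership test using Python's Fraction type:

```python
from fractions import Fraction

def in_localization(q, max_power=64):
    # q lies in Z[1/10] iff 10^k * q is an integer for some k >= 0
    q = Fraction(q)
    return any((q * 10**k).denominator == 1 for k in range(max_power))

assert in_localization(Fraction(3, 100))      # 0.03 = 3 / 10^2 is in Z[1/10]
assert not in_localization(Fraction(1, 3))    # 1/3 would require inverting 3
# every element of S = {10^k} is now invertible inside the ring:
assert all(in_localization(Fraction(1, 10**k)) for k in range(5))
```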
One case for non-commutative rings where localization has a clear interest is for rings of differential operators. It has the interpretation, for example, of adjoining a formal inverse D−1 for a differentiation operator D. This is done in many contexts in methods for differential equations. There is now a large mathematical theory about it, named microlocalization, connecting with numerous other branches. The micro- tag is to do with connections with Fourier theory, in particular. === Morita equivalence === Morita equivalence is a relationship defined between rings that preserves many ring-theoretic properties. It is named after Japanese mathematician Kiiti Morita who defined equivalence and a similar notion of duality in 1958. Two rings R and S (associative, with 1) are said to be (Morita) equivalent if there is an equivalence of the category of (left) modules over R, R-Mod, and the category of (left) modules over S, S-Mod. It can be shown that the left module categories R-Mod and S-Mod are equivalent if and only if the right module categories Mod-R and Mod-S are equivalent. Further it can be shown that any functor from R-Mod to S-Mod that yields an equivalence is automatically additive. === Brauer group === The Brauer group of a field K is an abelian group whose elements are Morita equivalence classes of central simple algebras of finite rank over K and addition is induced by the tensor product of algebras. It arose out of attempts to classify division algebras over a field and is named after the algebraist Richard Brauer. The group may also be defined in terms of Galois cohomology. More generally, the Brauer group of a scheme is defined in terms of Azumaya algebras. === Ore conditions === The Ore condition is a condition introduced by Øystein Ore, in connection with the question of extending beyond commutative rings the construction of a field of fractions, or more generally localization of a ring. 
The right Ore condition for a multiplicative subset S of a ring R is that for a ∈ R and s ∈ S, the intersection aS ∩ sR ≠ ∅. A domain that satisfies the right Ore condition is called a right Ore domain. The left case is defined similarly. === Goldie's theorem === In mathematics, Goldie's theorem is a basic structural result in ring theory, proved by Alfred Goldie during the 1950s. What is now termed a right Goldie ring is a ring R that has finite uniform dimension (also called "finite rank") as a right module over itself, and satisfies the ascending chain condition on right annihilators of subsets of R. Goldie's theorem states that the semiprime right Goldie rings are precisely those that have a semisimple Artinian right classical ring of quotients. The structure of this ring of quotients is then completely determined by the Artin–Wedderburn theorem. In particular, Goldie's theorem applies to semiprime right Noetherian rings, since by definition right Noetherian rings have the ascending chain condition on all right ideals. This is sufficient to guarantee that a right-Noetherian ring is right Goldie. The converse does not hold: every right Ore domain is a right Goldie domain, and hence so is every commutative integral domain. A consequence of Goldie's theorem, again due to Goldie, is that every semiprime principal right ideal ring is isomorphic to a finite direct sum of prime principal right ideal rings. Every prime principal right ideal ring is isomorphic to a matrix ring over a right Ore domain. == See also == Derived algebraic geometry Noncommutative geometry Noncommutative algebraic geometry Noncommutative harmonic analysis Representation theory (group theory) == Notes == == References == Isaacs, I. Martin (1993), Algebra, a graduate course (1st ed.), Brooks/Cole, ISBN 0-534-19002-2 Jacobson, N. 
(1945), "Structure theory of simple rings without finiteness assumptions", Transactions of the American Mathematical Society, 57: 228–245, doi:10.2307/1990204, JSTOR 1990204 Nagata, M. (1962), Local Rings, Wiley-Interscience == Further reading == Herstein, I. N. (1968), Noncommutative Rings, Mathematical Association of America, ISBN 0-88385-015-X Lam, T. Y. (2001), A First Course in Noncommutative Rings, Springer-Verlag
Wikipedia/Noncommutative_algebra
An algebraic number is a number that is a root of a non-zero polynomial in one variable with integer (or, equivalently, rational) coefficients. For example, the golden ratio, ( 1 + 5 ) / 2 {\displaystyle (1+{\sqrt {5}})/2} , is an algebraic number, because it is a root of the polynomial x2 − x − 1. That is, it is a value for x for which the polynomial evaluates to zero. As another example, the complex number 1 + i {\displaystyle 1+i} is algebraic because it is a root of x4 + 4. All integers and rational numbers are algebraic, as are all roots of integers. Real and complex numbers that are not algebraic, such as π and e, are called transcendental numbers. The set of algebraic (complex) numbers is countably infinite and has measure zero in the Lebesgue measure as a subset of the uncountable complex numbers. In that sense, almost all complex numbers are transcendental. Similarly, the set of algebraic (real) numbers is countably infinite and has Lebesgue measure zero as a subset of the real numbers, and in that sense almost all real numbers are transcendental. == Examples == All rational numbers are algebraic. Any rational number, expressed as the quotient of an integer a and a (non-zero) natural number b, satisfies the above definition, because x = ⁠a/b⁠ is the root of a non-zero polynomial, namely bx − a. Quadratic irrational numbers, irrational solutions of a quadratic polynomial ax2 + bx + c with integer coefficients a, b, and c, are algebraic numbers. If the quadratic polynomial is monic (a = 1), the roots are further qualified as quadratic integers. Gaussian integers, complex numbers a + bi for which both a and b are integers, are also quadratic integers. This is because a + bi and a − bi are the two roots of the quadratic x2 − 2ax + a2 + b2. A constructible number can be constructed from a given unit length using a straightedge and compass. 
It includes all quadratic irrational roots, all rational numbers, and all numbers that can be formed from these using the basic arithmetic operations and the extraction of square roots. (By designating cardinal directions for +1, −1, +i, and −i, complex numbers such as 3 + i 2 {\displaystyle 3+i{\sqrt {2}}} are considered constructible.) Any expression formed from algebraic numbers using any finite combination of the basic arithmetic operations and extraction of nth roots gives another algebraic number. Polynomial roots that cannot be expressed in terms of the basic arithmetic operations and extraction of nth roots (such as the roots of x5 − x + 1) are nonetheless algebraic. That happens with many but not all polynomials of degree 5 or higher. Values of trigonometric functions of rational multiples of π (except when undefined) are algebraic: for example, cos ⁠π/7⁠, cos ⁠3π/7⁠, and cos ⁠5π/7⁠ satisfy 8x3 − 4x2 − 4x + 1 = 0. This polynomial is irreducible over the rationals and so the three cosines are conjugate algebraic numbers. Likewise, tan ⁠3π/16⁠, tan ⁠7π/16⁠, tan ⁠11π/16⁠, and tan ⁠15π/16⁠ satisfy the irreducible polynomial x4 − 4x3 − 6x2 + 4x + 1 = 0, and so are conjugate algebraic integers. This is the equivalent of angles which, when measured in degrees, are rational numbers. Some but not all irrational numbers are algebraic: The numbers 2 {\displaystyle {\sqrt {2}}} and 3 3 2 {\displaystyle {\frac {\sqrt[{3}]{3}}{2}}} are algebraic since they are roots of polynomials x2 − 2 and 8x3 − 3, respectively. The golden ratio φ is algebraic since it is a root of the polynomial x2 − x − 1. The numbers π and e are not algebraic numbers (see the Lindemann–Weierstrass theorem). == Properties == If a polynomial with rational coefficients is multiplied through by the least common denominator, the resulting polynomial with integer coefficients has the same roots. This shows that an algebraic number can be equivalently defined as a root of a polynomial with either integer or rational coefficients.
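The examples above are easy to spot-check numerically. A Python sketch (an illustration added here; floating-point, so each identity is tested only up to a small tolerance):

```python
import math

phi = (1 + math.sqrt(5)) / 2            # golden ratio, root of x^2 - x - 1
assert abs(phi**2 - phi - 1) < 1e-12

assert abs((1 + 1j)**4 + 4) < 1e-12     # 1 + i is a root of x^4 + 4

for k in (1, 3, 5):                     # cos(k*pi/7) satisfies 8x^3 - 4x^2 - 4x + 1 = 0
    c = math.cos(k * math.pi / 7)
    assert abs(8*c**3 - 4*c**2 - 4*c + 1) < 1e-9

for k in (3, 7, 11, 15):                # tan(k*pi/16) satisfies x^4 - 4x^3 - 6x^2 + 4x + 1 = 0
    t = math.tan(k * math.pi / 16)
    assert abs(t**4 - 4*t**3 - 6*t**2 + 4*t + 1) < 1e-6
```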
Given an algebraic number, there is a unique monic polynomial with rational coefficients of least degree that has the number as a root. This polynomial is called its minimal polynomial. If its minimal polynomial has degree n, then the algebraic number is said to be of degree n. For example, all rational numbers have degree 1, and an algebraic number of degree 2 is a quadratic irrational. The algebraic numbers are dense in the reals. This follows from the fact they contain the rational numbers, which are dense in the reals themselves. The set of algebraic numbers is countable, and therefore its Lebesgue measure as a subset of the complex numbers is 0 (essentially, the algebraic numbers take up no space in the complex numbers). That is to say, "almost all" real and complex numbers are transcendental. All algebraic numbers are computable and therefore definable and arithmetical. For real numbers a and b, the complex number a + bi is algebraic if and only if both a and b are algebraic. === Degree of simple extensions of the rationals as a criterion to algebraicity === For any α, the simple extension of the rationals by α, denoted by Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} (whose elements are the f ( α ) {\displaystyle f(\alpha )} for f {\displaystyle f} a rational function with rational coefficients which is defined at α {\displaystyle \alpha } ), is of finite degree if and only if α is an algebraic number. 
The condition of finite degree means that there is a finite set { a i | 1 ≤ i ≤ k } {\displaystyle \{a_{i}|1\leq i\leq k\}} in Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} such that Q ( α ) = ∑ i = 1 k a i Q {\displaystyle \mathbb {Q} (\alpha )=\sum _{i=1}^{k}a_{i}\mathbb {Q} } ; that is, every member in Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} can be written as ∑ i = 1 k a i q i {\displaystyle \sum _{i=1}^{k}a_{i}q_{i}} for some rational numbers { q i | 1 ≤ i ≤ k } {\displaystyle \{q_{i}|1\leq i\leq k\}} (note that the set { a i } {\displaystyle \{a_{i}\}} is fixed). Indeed, since the a i − s {\displaystyle a_{i}-s} are themselves members of Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} , each can be expressed as sums of products of rational numbers and powers of α, and therefore this condition is equivalent to the requirement that for some finite n {\displaystyle n} , Q ( α ) = { ∑ i = − n n α i q i | q i ∈ Q } {\displaystyle \mathbb {Q} (\alpha )=\{\sum _{i=-n}^{n}\alpha ^{i}q_{i}|q_{i}\in \mathbb {Q} \}} . The latter condition is equivalent to α n + 1 {\displaystyle \alpha ^{n+1}} , itself a member of Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} , being expressible as ∑ i = − n n α i q i {\displaystyle \sum _{i=-n}^{n}\alpha ^{i}q_{i}} for some rationals { q i } {\displaystyle \{q_{i}\}} , so α 2 n + 1 = ∑ i = 0 2 n α i q i − n {\displaystyle \alpha ^{2n+1}=\sum _{i=0}^{2n}\alpha ^{i}q_{i-n}} or, equivalently, α is a root of x 2 n + 1 − ∑ i = 0 2 n x i q i − n {\displaystyle x^{2n+1}-\sum _{i=0}^{2n}x^{i}q_{i-n}} ; that is, an algebraic number with a minimal polynomial of degree not larger than 2 n + 1 {\displaystyle 2n+1} . It can similarly be proven that for any finite set of algebraic numbers α 1 {\displaystyle \alpha _{1}} , α 2 {\displaystyle \alpha _{2}} ... α n {\displaystyle \alpha _{n}} , the field extension Q ( α 1 , α 2 , . . . α n ) {\displaystyle \mathbb {Q} (\alpha _{1},\alpha _{2},...\alpha _{n})} has a finite degree. 
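The finite-degree criterion can be made concrete numerically (an illustration added here, not from the article): for α1 = √2 and α2 = √3, multiplying out the product of (x − (±√2 ± √3)) over all four sign choices yields a polynomial with integer coefficients, exhibiting √2 + √3 as an algebraic number of degree at most 4:

```python
import math
from itertools import product

def mul_linear(coeffs, r):
    # coeffs of p(x) in ascending order; return coeffs of (x - r) * p(x)
    out = [0.0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        out[i + 1] += c      # contribution of x * p(x)
        out[i] -= r * c      # contribution of -r * p(x)
    return out

a, b = math.sqrt(2), math.sqrt(3)
conjugates = [s * a + t * b for s, t in product((1, -1), repeat=2)]

coeffs = [1.0]
for r in conjugates:
    coeffs = mul_linear(coeffs, r)

# the floating-point coefficients round to integers: x^4 - 10x^2 + 1
rounded = [round(c) for c in coeffs]
assert max(abs(c - n) for c, n in zip(coeffs, rounded)) < 1e-9
assert rounded == [1, 0, -10, 0, 1]      # ascending order: 1 - 10x^2 + x^4
assert abs((a + b)**4 - 10 * (a + b)**2 + 1) < 1e-9
```

Expanding symbolically gives the same result: (x² − 5 − 2√6)(x² − 5 + 2√6) = x⁴ − 10x² + 1.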
== Field == The sum, difference, product, and quotient (if the denominator is nonzero) of two algebraic numbers is again algebraic: For any two algebraic numbers α, β, this follows directly from the fact that the simple extension Q ( γ ) {\displaystyle \mathbb {Q} (\gamma )} , for γ {\displaystyle \gamma } being either α + β {\displaystyle \alpha +\beta } , α − β {\displaystyle \alpha -\beta } , α β {\displaystyle \alpha \beta } or (for β ≠ 0 {\displaystyle \beta \neq 0} ) α / β {\displaystyle \alpha /\beta } , is a linear subspace of the finite-degree field extension Q ( α , β ) {\displaystyle \mathbb {Q} (\alpha ,\beta )} , and therefore has a finite degree itself, from which it follows (as shown above) that γ {\displaystyle \gamma } is algebraic. An alternative way of showing this is constructively, by using the resultant. Algebraic numbers thus form a field Q ¯ {\displaystyle {\overline {\mathbb {Q} }}} (sometimes denoted by A {\displaystyle \mathbb {A} } , but that usually denotes the adele ring). === Algebraic closure === Every root of a polynomial equation whose coefficients are algebraic numbers is again algebraic. That can be rephrased by saying that the field of algebraic numbers is algebraically closed. In fact, it is the smallest algebraically closed field containing the rationals and so it is called the algebraic closure of the rationals. That the field of algebraic numbers is algebraically closed can be proven as follows: Let β be a root of a polynomial α 0 + α 1 x + α 2 x 2 . . . + α n x n {\displaystyle \alpha _{0}+\alpha _{1}x+\alpha _{2}x^{2}...+\alpha _{n}x^{n}} with coefficients that are algebraic numbers α 0 {\displaystyle \alpha _{0}} , α 1 {\displaystyle \alpha _{1}} , α 2 {\displaystyle \alpha _{2}} ... α n {\displaystyle \alpha _{n}} . The field extension Q ′ ≡ Q ( α 1 , α 2 , . . . 
α n ) {\displaystyle \mathbb {Q} ^{\prime }\equiv \mathbb {Q} (\alpha _{1},\alpha _{2},...\alpha _{n})} then has a finite degree with respect to Q {\displaystyle \mathbb {Q} } . The simple extension Q ′ ( β ) {\displaystyle \mathbb {Q} ^{\prime }(\beta )} then has a finite degree with respect to Q ′ {\displaystyle \mathbb {Q} ^{\prime }} (since all powers of β can be expressed by powers of up to β n − 1 {\displaystyle \beta ^{n-1}} ). Therefore, Q ′ ( β ) = Q ( β , α 1 , α 2 , . . . α n ) {\displaystyle \mathbb {Q} ^{\prime }(\beta )=\mathbb {Q} (\beta ,\alpha _{1},\alpha _{2},...\alpha _{n})} also has a finite degree with respect to Q {\displaystyle \mathbb {Q} } . Since Q ( β ) {\displaystyle \mathbb {Q} (\beta )} is a linear subspace of Q ′ ( β ) {\displaystyle \mathbb {Q} ^{\prime }(\beta )} , it must also have a finite degree with respect to Q {\displaystyle \mathbb {Q} } , so β must be an algebraic number. == Related fields == === Numbers defined by radicals === Any number that can be obtained from the integers using a finite number of additions, subtractions, multiplications, divisions, and taking (possibly complex) nth roots where n is a positive integer are algebraic. The converse, however, is not true: there are algebraic numbers that cannot be obtained in this manner. These numbers are roots of polynomials of degree 5 or higher, a result of Galois theory (see Quintic equations and the Abel–Ruffini theorem). For example, the equation: x 5 − x − 1 = 0 {\displaystyle x^{5}-x-1=0} has a unique real root, ≈ 1.1673, that cannot be expressed in terms of only radicals and arithmetic operations. === Closed-form number === Algebraic numbers are all numbers that can be defined explicitly or implicitly in terms of polynomials, starting from the rational numbers. One may generalize this to "closed-form numbers", which may be defined in various ways. 
Most broadly, all numbers that can be defined explicitly or implicitly in terms of polynomials, exponentials, and logarithms are called "elementary numbers", and these include the algebraic numbers, plus some transcendental numbers. Most narrowly, one may consider numbers explicitly defined in terms of polynomials, exponentials, and logarithms – this does not include all algebraic numbers, but does include some simple transcendental numbers such as e or ln 2. == Algebraic integers == An algebraic integer is an algebraic number that is a root of a polynomial with integer coefficients with leading coefficient 1 (a monic polynomial). Examples of algebraic integers are 5 + 13 2 , {\displaystyle 5+13{\sqrt {2}},} 2 − 6 i , {\displaystyle 2-6i,} and 1 2 ( 1 + i 3 ) . {\textstyle {\frac {1}{2}}(1+i{\sqrt {3}}).} Therefore, the algebraic integers constitute a proper superset of the integers, as the latter are the roots of monic polynomials x − k for all k ∈ Z {\displaystyle k\in \mathbb {Z} } . In this sense, algebraic integers are to algebraic numbers what integers are to rational numbers. The sum, difference and product of algebraic integers are again algebraic integers, which means that the algebraic integers form a ring. The name algebraic integer comes from the fact that the only rational numbers that are algebraic integers are the integers, and because the algebraic integers in any number field are in many ways analogous to the integers. If K is a number field, its ring of integers is the subring of algebraic integers in K, and is frequently denoted as OK. These are the prototypical examples of Dedekind domains. 
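Each of the three example algebraic integers above is a root of a monic quadratic with integer coefficients. The polynomials in the sketch below are computed from the conjugate pairs and are not stated in the article itself; the checks are floating-point:

```python
import math

a = 5 + 13 * math.sqrt(2)                 # root of x^2 - 10x - 313
assert abs(a**2 - 10 * a - 313) < 1e-9

z = 2 - 6j                                # root of x^2 - 4x + 40
assert abs(z**2 - 4 * z + 40) < 1e-9

w = (1 + 1j * math.sqrt(3)) / 2           # root of x^2 - x + 1
assert abs(w**2 - w + 1) < 1e-12
```

In each case the linear coefficient is minus the sum of the conjugate pair and the constant term is their product, so both are integers.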
== Special classes == Algebraic solution Gaussian integer Eisenstein integer Quadratic irrational number Fundamental unit Root of unity Gaussian period Pisot–Vijayaraghavan number Salem number == Notes == == References == Artin, Michael (1991), Algebra, Prentice Hall, ISBN 0-13-004763-5, MR 1129886 Garibaldi, Skip (June 2008), "Somewhat more than governors need to know about trigonometry", Mathematics Magazine, 81 (3): 191–200, doi:10.1080/0025570x.2008.11953548, JSTOR 27643106 Hardy, Godfrey Harold; Wright, Edward M. (1972), An introduction to the theory of numbers (5th ed.), Oxford: Clarendon, ISBN 0-19-853171-0 Ireland, Kenneth; Rosen, Michael (1990) [1st ed. 1982], A Classical Introduction to Modern Number Theory (2nd ed.), Berlin: Springer, doi:10.1007/978-1-4757-2103-4, ISBN 0-387-97329-X, MR 1070716 Lang, Serge (2002) [1st ed. 1965], Algebra (3rd ed.), New York: Springer, ISBN 978-0-387-95385-4, MR 1878556 Niven, Ivan M. (1956), Irrational Numbers, Mathematical Association of America Ore, Øystein (1948), Number Theory and Its History, New York: McGraw-Hill
Wikipedia/Algebraic_number
A solution in radicals or algebraic solution is an expression of a solution of a polynomial equation that is algebraic, that is, relies only on addition, subtraction, multiplication, division, raising to integer powers, and extraction of nth roots (square roots, cube roots, etc.). A well-known example is the quadratic formula x = − b ± b 2 − 4 a c 2 a , {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac\ }}}{2a}},} which expresses the solutions of the quadratic equation a x 2 + b x + c = 0. {\displaystyle ax^{2}+bx+c=0.} There exist algebraic solutions for cubic equations and quartic equations, which are more complicated than the quadratic formula. The Abel–Ruffini theorem: 211  and, more generally, Galois theory state that some quintic equations, such as x 5 − x + 1 = 0 , {\displaystyle x^{5}-x+1=0,} do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation x 10 = 2 {\displaystyle x^{10}=2} can be solved as x = ± 2 10 . {\displaystyle x=\pm {\sqrt[{10}]{2}}.} The eight other solutions are nonreal complex numbers, which are also algebraic and have the form x = ± r 2 10 , {\displaystyle x=\pm r{\sqrt[{10}]{2}},} where r is one of the four fifth roots of unity other than 1, each of which can be expressed with two nested square roots. See also Quintic function § Other solvable quintics for various other examples in degree 5. Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result. == See also == Radical symbol Solvable quintics Solvable sextics Solvable septics == References ==
Wikipedia/Algebraic_solution
In mathematics education, precalculus is a course, or a set of courses, that includes algebra and trigonometry at a level that is designed to prepare students for the study of calculus, thus the name precalculus. Schools often distinguish between algebra and trigonometry as two separate parts of the coursework. == Concept == For students to succeed at finding the derivatives and antiderivatives with calculus, they will need facility with algebraic expressions, particularly in modification and transformation of such expressions. Leonhard Euler wrote the first precalculus book in 1748 called Introductio in analysin infinitorum (Latin: Introduction to the Analysis of the Infinite), which "was meant as a survey of concepts and methods in analysis and analytic geometry preliminary to the study of differential and integral calculus." He began with the fundamental concepts of variables and functions. His innovation is noted for its use of exponentiation to introduce the transcendental functions. The general logarithm, to an arbitrary positive base, Euler presents as the inverse of an exponential function. Then the natural logarithm is obtained by taking as base "the number for which the hyperbolic logarithm is one", sometimes called Euler's number, and written e {\displaystyle e} . This appropriation of the significant number from Grégoire de Saint-Vincent’s calculus suffices to establish the natural logarithm. This part of precalculus prepares the student for integration of the monomial x p {\displaystyle x^{p}} in the instance of p = − 1 {\displaystyle p=-1} . Today's precalculus text computes e {\displaystyle e} as the limit e = lim n → ∞ ( 1 + 1 n ) n {\displaystyle e=\lim _{n\rightarrow \infty }\left(1+{\frac {1}{n}}\right)^{n}} . An exposition on compound interest in financial mathematics may motivate this limit. 
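The limit above is easy to tabulate, as a precalculus course might do (a sketch added here for illustration):

```python
import math

# approximations to e = lim (1 + 1/n)^n for growing n
for n in (10, 1000, 100000, 10**7):
    print(n, (1 + 1 / n) ** n)

# for n = 10^7 the value agrees with e to better than 1e-4
assert abs((1 + 1 / 10**7) ** 10**7 - math.e) < 1e-4
```

The error behaves roughly like e/(2n), which the table makes visible: each factor of 100 in n improves the approximation by about two decimal digits.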
Another difference in the modern text is avoidance of complex numbers, except as they may arise as roots of a quadratic equation with a negative discriminant, or in Euler's formula as application of trigonometry. Euler used not only complex numbers but also infinite series in his precalculus. Today's course may cover arithmetic and geometric sequences and series, but not the application by Saint-Vincent to gain his hyperbolic logarithm, which Euler used to finesse his precalculus. == Variable content == Precalculus prepares students for calculus somewhat differently from how pre-algebra prepares students for algebra. While pre-algebra often has extensive coverage of basic algebraic concepts, precalculus courses might see only small amounts of calculus concepts, if at all, and usually involve covering algebraic topics that might not have been given attention in earlier algebra courses. Some precalculus courses might differ from others in terms of content. For example, an honors-level course might spend more time on conic sections, Euclidean vectors, and other topics needed for calculus, used in fields such as medicine or engineering. A college preparatory/regular class might focus on topics used in business-related careers, such as matrices, or power functions. A standard course considers functions, function composition, and inverse functions, often in connection with sets and real numbers. In particular, polynomials and rational functions are developed. Algebraic skills are exercised with trigonometric functions and trigonometric identities. The binomial theorem, polar coordinates, parametric equations, and the limits of sequences and series are other common topics of precalculus. Sometimes the mathematical induction method of proof for propositions dependent upon a natural number may be demonstrated, but generally, coursework involves exercises rather than theory. == Sample texts == Roland E. Larson & Robert P. Hostetler (1989) Precalculus, second edition, D.C. 
Heath and Company ISBN 0-669-16277-9 Margaret L. Lial & Charles D. Miller (1988) Precalculus, Scott Foresman ISBN 0-673-15872-1 Jerome E. Kaufmann (1988) Precalculus, PWS-Kent Publishing Company (Wadsworth) Karl J. Smith (1990) Precalculus Mathematics: a functional approach, fourth edition, Brooks/Cole ISBN 0-534-11922-0 Michael Sullivan (1993) Precalculus, third edition, Dellen imprint of Macmillan Publishers ISBN 0-02-418421-7 === Online access === Jay Abramson and others (2014) Precalculus from OpenStax David Lippman & Melonie Rasmussen (2017) Precalculus: an investigation of functions Carl Stitz & Jeff Zeager (2013) Precalculus (pdf) == See also == AP Precalculus AP Calculus AP Statistics Pre-algebra Mathematics education in the United States == References == == External links == Precalculus information at Mathworld
Wikipedia/Precalculus
In mathematics, the trigonometric functions (also called circular functions, angle functions or goniometric functions) are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry, such as navigation, solid mechanics, celestial mechanics, geodesy, and many others. They are among the simplest periodic functions, and as such are also widely used for studying periodic phenomena through Fourier analysis. The trigonometric functions most widely used in modern mathematics are the sine, the cosine, and the tangent functions. Their reciprocals are respectively the cosecant, the secant, and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function, and an analog among the hyperbolic functions. The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles. To extend the sine and cosine functions to functions whose domain is the whole real line, geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations. This allows extending the domain of sine and cosine functions to the whole complex plane, and the domain of the other trigonometric functions to the complex plane with some isolated points removed. == Notation == Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are "sin" for sine, "cos" for cosine, "tan" or "tg" for tangent, "sec" for secant, "csc" or "cosec" for cosecant, and "cot" or "ctg" for cotangent. 
Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation, for example sin(x). Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression sin ⁡ x + y {\displaystyle \sin x+y} would typically be interpreted to mean ( sin ⁡ x ) + y , {\displaystyle (\sin x)+y,} so parentheses are required to express sin ⁡ ( x + y ) . {\displaystyle \sin(x+y).} A positive integer appearing as a superscript after the symbol of the function denotes exponentiation, not function composition. For example sin 2 ⁡ x {\displaystyle \sin ^{2}x} and sin 2 ⁡ ( x ) {\displaystyle \sin ^{2}(x)} denote ( sin ⁡ x ) 2 , {\displaystyle (\sin x)^{2},} not sin ⁡ ( sin ⁡ x ) . {\displaystyle \sin(\sin x).} This differs from the (historically later) general functional notation in which f 2 ( x ) = ( f ∘ f ) ( x ) = f ( f ( x ) ) . {\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).} In contrast, the superscript − 1 {\displaystyle -1} is commonly used to denote the inverse function, not the reciprocal. For example sin − 1 ⁡ x {\displaystyle \sin ^{-1}x} and sin − 1 ⁡ ( x ) {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written arcsin ⁡ x . {\displaystyle \arcsin x\,.} The equation θ = sin − 1 ⁡ x {\displaystyle \theta =\sin ^{-1}x} implies sin ⁡ θ = x , {\displaystyle \sin \theta =x,} not θ ⋅ sin ⁡ x = 1. {\displaystyle \theta \cdot \sin x=1.} In this case, the superscript could be considered as denoting a composed or iterated function, but negative superscripts other than − 1 {\displaystyle {-1}} are not in common use. 
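The notational point is easy to illustrate numerically (a sketch added here using Python's math module): sin⁻¹ denotes the inverse function (arcsin), not the reciprocal, while sin²x denotes (sin x)²:

```python
import math

x = 0.5
inverse = math.asin(x)            # sin^-1(x): the angle whose sine is 0.5
reciprocal = 1 / math.sin(x)      # csc(0.5 rad): a different quantity
assert abs(inverse - math.pi / 6) < 1e-12     # arcsin(1/2) = pi/6
assert abs(inverse - reciprocal) > 1          # the two notions disagree

# sin^2 x means (sin x)^2, as in the Pythagorean identity:
assert abs(math.sin(x) ** 2 + math.cos(x) ** 2 - 1) < 1e-15
```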
== Right-angled triangle definitions == If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ. Thus these six ratios define six functions of θ, which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ, and adjacent represents the side between the angle θ and the right angle. Various mnemonics can be used to remember these definitions. In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or ⁠π/2⁠ radians. Therefore sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( 90 ∘ − θ ) {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table. == Radians versus degrees == In geometric applications, the argument of a trigonometric function is generally the measure of an angle. For this purpose, any angular unit is convenient. One common unit is degrees, in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics). However, in calculus and mathematical analysis, the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers, rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function, via power series, or as solutions to differential equations given particular initial values (see below), without reference to any geometric notions. The other four trigonometric functions (tan, cot, sec, csc) can be defined as quotients and reciprocals of sin and cos, except where zero occurs in the denominator. 
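The quotient and reciprocal definitions mentioned in the last sentence can be sketched directly in Python; the function names below are ad hoc (not standard-library functions), and the check against `math.tan` assumes an argument where the denominators are nonzero:

```python
import math

# tan, cot, sec, csc defined as quotients and reciprocals of sin and cos
def tan(x): return math.sin(x) / math.cos(x)
def cot(x): return math.cos(x) / math.sin(x)
def sec(x): return 1 / math.cos(x)
def csc(x): return 1 / math.sin(x)

theta = 1.0  # radians; sin and cos are both nonzero here
assert math.isclose(tan(theta), math.tan(theta))
assert math.isclose(cot(theta) * tan(theta), 1.0)

# complementary-angle identity from the right-triangle discussion:
# sin(theta) = cos(90 degrees - theta)
assert math.isclose(math.sin(theta), math.cos(math.pi / 2 - theta))
```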
It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures. When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), and a complete turn (360°) is an angle of 2π (≈ 6.28) rad. For real number x, the notation sin x, cos x, etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown (sin x°, cos x°, etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180x/π)°, so that, for example, sin π = sin 180° when we take x = π. In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π/180 ≈ 0.0175. == Unit-circle definitions == The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle, which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π 2 {\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers. Let L {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x-axis (counterclockwise rotation for θ > 0 , {\displaystyle \theta >0,} and clockwise rotation for θ < 0 {\displaystyle \theta <0} ). 
This ray intersects the unit circle at the point A = ( x A , y A ) . {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).} The ray L , {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation x = 1 {\displaystyle x=1} at point B = ( 1 , y B ) , {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation y = 1 {\displaystyle y=1} at point C = ( x C , 1 ) . {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).} The tangent line to the unit circle at the point A, is perpendicular to L , {\displaystyle {\mathcal {L}},} and intersects the y- and x-axes at points D = ( 0 , y D ) {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and E = ( x E , 0 ) . {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).} The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner. The trigonometric functions cos and sin are defined, respectively, as the x- and y-coordinate values of point A. That is, cos ⁡ θ = x A {\displaystyle \cos \theta =x_{\mathrm {A} }\quad } and sin ⁡ θ = y A . {\displaystyle \quad \sin \theta =y_{\mathrm {A} }.} In the range 0 ≤ θ ≤ π / 2 {\displaystyle 0\leq \theta \leq \pi /2} , this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse. And since the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} holds for all points P = ( x , y ) {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity. cos 2 ⁡ θ + sin 2 ⁡ θ = 1. {\displaystyle \cos ^{2}\theta +\sin ^{2}\theta =1.} The other trigonometric functions can be found along the unit circle as tan ⁡ θ = y B {\displaystyle \tan \theta =y_{\mathrm {B} }\quad } and cot ⁡ θ = x C , {\displaystyle \quad \cot \theta =x_{\mathrm {C} },} csc ⁡ θ = y D {\displaystyle \csc \theta \ =y_{\mathrm {D} }\quad } and sec ⁡ θ = x E . 
{\displaystyle \quad \sec \theta =x_{\mathrm {E} }.} By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is tan ⁡ θ = sin ⁡ θ cos ⁡ θ , cot ⁡ θ = cos ⁡ θ sin ⁡ θ , sec ⁡ θ = 1 cos ⁡ θ , csc ⁡ θ = 1 sin ⁡ θ . {\displaystyle \tan \theta ={\frac {\sin \theta }{\cos \theta }},\quad \cot \theta ={\frac {\cos \theta }{\sin \theta }},\quad \sec \theta ={\frac {1}{\cos \theta }},\quad \csc \theta ={\frac {1}{\sin \theta }}.} Since a rotation of an angle of ± 2 π {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A, B, C, D, and E are the same for two angles whose difference is an integer multiple of 2 π {\displaystyle 2\pi } . Thus trigonometric functions are periodic functions with period 2 π {\displaystyle 2\pi } . That is, the equalities sin ⁡ θ = sin ⁡ ( θ + 2 k π ) {\displaystyle \sin \theta =\sin \left(\theta +2k\pi \right)\quad } and cos ⁡ θ = cos ⁡ ( θ + 2 k π ) {\displaystyle \quad \cos \theta =\cos \left(\theta +2k\pi \right)} hold for any angle θ and any integer k. The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2 π {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., 2 π {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle π {\displaystyle \pi } , the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π {\displaystyle \pi } . 
That is, the equalities tan ⁡ θ = tan ⁡ ( θ + k π ) {\displaystyle \tan \theta =\tan(\theta +k\pi )\quad } and cot ⁡ θ = cot ⁡ ( θ + k π ) {\displaystyle \quad \cot \theta =\cot(\theta +k\pi )} hold for any angle θ and any integer k. == Algebraic values == The algebraic expressions for the most important angles are as follows: sin ⁡ 0 = sin ⁡ 0 ∘ = 0 2 = 0 {\displaystyle \sin 0=\sin 0^{\circ }\quad ={\frac {\sqrt {0}}{2}}=0} (zero angle) sin ⁡ π 6 = sin ⁡ 30 ∘ = 1 2 = 1 2 {\displaystyle \sin {\frac {\pi }{6}}=\sin 30^{\circ }={\frac {\sqrt {1}}{2}}={\frac {1}{2}}} sin ⁡ π 4 = sin ⁡ 45 ∘ = 2 2 = 1 2 {\displaystyle \sin {\frac {\pi }{4}}=\sin 45^{\circ }={\frac {\sqrt {2}}{2}}={\frac {1}{\sqrt {2}}}} sin ⁡ π 3 = sin ⁡ 60 ∘ = 3 2 {\displaystyle \sin {\frac {\pi }{3}}=\sin 60^{\circ }={\frac {\sqrt {3}}{2}}} sin ⁡ π 2 = sin ⁡ 90 ∘ = 4 2 = 1 {\displaystyle \sin {\frac {\pi }{2}}=\sin 90^{\circ }={\frac {\sqrt {4}}{2}}=1} (right angle) Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values. Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. For an angle which, measured in degrees, is a multiple of three, the exact trigonometric values of the sine and the cosine may be expressed in terms of square roots. These values of the sine and the cosine may thus be constructed by ruler and compass. For an angle of an integer number of degrees, the sine and the cosine may be expressed in terms of square roots and the cube root of a non-real complex number. Galois theory allows a proof that, if the angle is not a multiple of 3°, non-real cube roots are unavoidable. For an angle which, expressed in degrees, is a rational number, the sine and the cosine are algebraic numbers, which may be expressed in terms of nth roots. This results from the fact that the Galois groups of the cyclotomic polynomials are cyclic. 
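The √n/2 mnemonic above can be verified numerically; a minimal Python check:

```python
import math

# sin 0, sin 30°, sin 45°, sin 60°, sin 90° equal sqrt(0)/2, ..., sqrt(4)/2
for n, degrees in enumerate([0, 30, 45, 60, 90]):
    expected = math.sqrt(n) / 2
    assert math.isclose(math.sin(math.radians(degrees)), expected, abs_tol=1e-12)
```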
If an angle, expressed in degrees, is not a rational number, then either the angle or both the sine and the cosine are transcendental numbers. This is a corollary of Baker's theorem, proved in 1966. If the sine of an angle is a rational number, then the cosine is not necessarily a rational number, and vice versa. However, if the tangent of an angle is rational then both the sine and cosine of the double angle will be rational. === Simple algebraic values === The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees. == Definitions in analysis == G. H. Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include: Using the "geometry" of the unit circle, which requires formulating the arc length of a circle (or area of a sector) analytically. By a power series, which is particularly well-suited to complex variables. By using an infinite product expansion. By inverting the inverse trigonometric functions, which can be defined as integrals of algebraic or rational functions. As solutions of a differential equation. === Definition by differential equations === Sine and cosine can be defined as the unique solution to the initial value problem: d d x sin ⁡ x = cos ⁡ x , d d x cos ⁡ x = − sin ⁡ x , sin ⁡ ( 0 ) = 0 , cos ⁡ ( 0 ) = 1. 
{\displaystyle {\frac {d}{dx}}\sin x=\cos x,\ {\frac {d}{dx}}\cos x=-\sin x,\ \sin(0)=0,\ \cos(0)=1.} Differentiating again, d 2 d x 2 sin ⁡ x = d d x cos ⁡ x = − sin ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and d 2 d x 2 cos ⁡ x = − d d x sin ⁡ x = − cos ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x} , so both sine and cosine are solutions of the same ordinary differential equation y ″ + y = 0 . {\displaystyle y''+y=0\,.} Sine is the unique solution with y(0) = 0 and y′(0) = 1; cosine is the unique solution with y(0) = 1 and y′(0) = 0. One can then prove, as a theorem, that solutions cos , sin {\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as 2 π {\displaystyle 2\pi } is then a definition of the real number π {\displaystyle \pi } which is independent of geometry. Applying the quotient rule to the tangent tan ⁡ x = sin ⁡ x / cos ⁡ x {\displaystyle \tan x=\sin x/\cos x} , d d x tan ⁡ x = cos 2 ⁡ x + sin 2 ⁡ x cos 2 ⁡ x = 1 + tan 2 ⁡ x , {\displaystyle {\frac {d}{dx}}\tan x={\frac {\cos ^{2}x+\sin ^{2}x}{\cos ^{2}x}}=1+\tan ^{2}x\,,} so the tangent function satisfies the ordinary differential equation y ′ = 1 + y 2 . {\displaystyle y'=1+y^{2}\,.} It is the unique solution with y(0) = 0. === Power series expansion === The basic trigonometric functions can be defined by the following power series expansions. These series are also known as the Taylor series or Maclaurin series of these trigonometric functions: sin ⁡ x = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 cos ⁡ x = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n . 
{\displaystyle {\begin{aligned}\sin x&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\[6mu]&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\\[8pt]\cos x&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\[6mu]&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}.\end{aligned}}} The radius of convergence of these series is infinite. Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane. Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation. Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions, that is, functions that are holomorphic in the whole complex plane, except some isolated points called poles. Here, the poles are the numbers of the form ( 2 k + 1 ) π 2 {\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or k π {\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer. Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence. Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. More precisely, defining Un, the nth up/down number, Bn, the nth Bernoulli number, and En, the nth Euler number, one has the following series expansions: tan ⁡ x = ∑ n = 0 ∞ U 2 n + 1 ( 2 n + 1 ) ! x 2 n + 1 = ∑ n = 1 ∞ ( − 1 ) n − 1 2 2 n ( 2 2 n − 1 ) B 2 n ( 2 n ) ! x 2 n − 1 = x + 1 3 x 3 + 2 15 x 5 + 17 315 x 7 + ⋯ , for | x | < π 2 . 
{\displaystyle {\begin{aligned}\tan x&{}=\sum _{n=0}^{\infty }{\frac {U_{2n+1}}{(2n+1)!}}x^{2n+1}\\[8mu]&{}=\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}2^{2n}\left(2^{2n}-1\right)B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&{}=x+{\frac {1}{3}}x^{3}+{\frac {2}{15}}x^{5}+{\frac {17}{315}}x^{7}+\cdots ,\qquad {\text{for }}|x|<{\frac {\pi }{2}}.\end{aligned}}} csc ⁡ x = ∑ n = 0 ∞ ( − 1 ) n + 1 2 ( 2 2 n − 1 − 1 ) B 2 n ( 2 n ) ! x 2 n − 1 = x − 1 + 1 6 x + 7 360 x 3 + 31 15120 x 5 + ⋯ , for 0 < | x | < π . {\displaystyle {\begin{aligned}\csc x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n+1}2\left(2^{2n-1}-1\right)B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&=x^{-1}+{\frac {1}{6}}x+{\frac {7}{360}}x^{3}+{\frac {31}{15120}}x^{5}+\cdots ,\qquad {\text{for }}0<|x|<\pi .\end{aligned}}} sec ⁡ x = ∑ n = 0 ∞ U 2 n ( 2 n ) ! x 2 n = ∑ n = 0 ∞ ( − 1 ) n E 2 n ( 2 n ) ! x 2 n = 1 + 1 2 x 2 + 5 24 x 4 + 61 720 x 6 + ⋯ , for | x | < π 2 . {\displaystyle {\begin{aligned}\sec x&=\sum _{n=0}^{\infty }{\frac {U_{2n}}{(2n)!}}x^{2n}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}E_{2n}}{(2n)!}}x^{2n}\\[5mu]&=1+{\frac {1}{2}}x^{2}+{\frac {5}{24}}x^{4}+{\frac {61}{720}}x^{6}+\cdots ,\qquad {\text{for }}|x|<{\frac {\pi }{2}}.\end{aligned}}} cot ⁡ x = ∑ n = 0 ∞ ( − 1 ) n 2 2 n B 2 n ( 2 n ) ! x 2 n − 1 = x − 1 − 1 3 x − 1 45 x 3 − 2 945 x 5 − ⋯ , for 0 < | x | < π . 
{\displaystyle {\begin{aligned}\cot x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}2^{2n}B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&=x^{-1}-{\frac {1}{3}}x-{\frac {1}{45}}x^{3}-{\frac {2}{945}}x^{5}-\cdots ,\qquad {\text{for }}0<|x|<\pi .\end{aligned}}} === Continued fraction expansion === The following continued fractions are valid in the whole complex plane: sin ⁡ x = x 1 + x 2 2 ⋅ 3 − x 2 + 2 ⋅ 3 x 2 4 ⋅ 5 − x 2 + 4 ⋅ 5 x 2 6 ⋅ 7 − x 2 + ⋱ {\displaystyle \sin x={\cfrac {x}{1+{\cfrac {x^{2}}{2\cdot 3-x^{2}+{\cfrac {2\cdot 3x^{2}}{4\cdot 5-x^{2}+{\cfrac {4\cdot 5x^{2}}{6\cdot 7-x^{2}+\ddots }}}}}}}}} cos ⁡ x = 1 1 + x 2 1 ⋅ 2 − x 2 + 1 ⋅ 2 x 2 3 ⋅ 4 − x 2 + 3 ⋅ 4 x 2 5 ⋅ 6 − x 2 + ⋱ {\displaystyle \cos x={\cfrac {1}{1+{\cfrac {x^{2}}{1\cdot 2-x^{2}+{\cfrac {1\cdot 2x^{2}}{3\cdot 4-x^{2}+{\cfrac {3\cdot 4x^{2}}{5\cdot 6-x^{2}+\ddots }}}}}}}}} tan ⁡ x = x 1 − x 2 3 − x 2 5 − x 2 7 − ⋱ = 1 1 x − 1 3 x − 1 5 x − 1 7 x − ⋱ {\displaystyle \tan x={\cfrac {x}{1-{\cfrac {x^{2}}{3-{\cfrac {x^{2}}{5-{\cfrac {x^{2}}{7-\ddots }}}}}}}}={\cfrac {1}{{\cfrac {1}{x}}-{\cfrac {1}{{\cfrac {3}{x}}-{\cfrac {1}{{\cfrac {5}{x}}-{\cfrac {1}{{\cfrac {7}{x}}-\ddots }}}}}}}}} The last one was used in the first proof that π is irrational. === Partial fraction expansion === There is a series representation as a partial fraction expansion, in which translated reciprocal functions are summed so that the poles of the cotangent function and of the reciprocal functions match: π cot ⁡ π x = lim N → ∞ ∑ n = − N N 1 x + n . {\displaystyle \pi \cot \pi x=\lim _{N\to \infty }\sum _{n=-N}^{N}{\frac {1}{x+n}}.} This identity can be proved with the Herglotz trick. Combining the (–n)th with the nth term leads to an absolutely convergent series: π cot ⁡ π x = 1 x + 2 x ∑ n = 1 ∞ 1 x 2 − n 2 . 
{\displaystyle \pi \cot \pi x={\frac {1}{x}}+2x\sum _{n=1}^{\infty }{\frac {1}{x^{2}-n^{2}}}.} Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions: π csc ⁡ π x = ∑ n = − ∞ ∞ ( − 1 ) n x + n = 1 x + 2 x ∑ n = 1 ∞ ( − 1 ) n x 2 − n 2 , {\displaystyle \pi \csc \pi x=\sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{x+n}}={\frac {1}{x}}+2x\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{x^{2}-n^{2}}},} π 2 csc 2 ⁡ π x = ∑ n = − ∞ ∞ 1 ( x + n ) 2 , {\displaystyle \pi ^{2}\csc ^{2}\pi x=\sum _{n=-\infty }^{\infty }{\frac {1}{(x+n)^{2}}},} π sec ⁡ π x = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ( n + 1 2 ) 2 − x 2 , {\displaystyle \pi \sec \pi x=\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n+1)}{(n+{\tfrac {1}{2}})^{2}-x^{2}}},} π tan ⁡ π x = 2 x ∑ n = 0 ∞ 1 ( n + 1 2 ) 2 − x 2 . {\displaystyle \pi \tan \pi x=2x\sum _{n=0}^{\infty }{\frac {1}{(n+{\tfrac {1}{2}})^{2}-x^{2}}}.} === Infinite product expansion === The following infinite product for the sine is due to Leonhard Euler, and is of great importance in complex analysis: sin ⁡ z = z ∏ n = 1 ∞ ( 1 − z 2 n 2 π 2 ) , z ∈ C . {\displaystyle \sin z=z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}\pi ^{2}}}\right),\quad z\in \mathbb {C} .} This may be obtained from the partial fraction decomposition of cot ⁡ z {\displaystyle \cot z} given above, which is the logarithmic derivative of sin ⁡ z {\displaystyle \sin z} . From this, it can be deduced also that cos ⁡ z = ∏ n = 1 ∞ ( 1 − z 2 ( n − 1 / 2 ) 2 π 2 ) , z ∈ C . {\displaystyle \cos z=\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{(n-1/2)^{2}\pi ^{2}}}\right),\quad z\in \mathbb {C} .} === Euler's formula and the exponential function === Euler's formula relates sine and cosine to the exponential function: e i x = cos ⁡ x + i sin ⁡ x . {\displaystyle e^{ix}=\cos x+i\sin x.} This formula is commonly considered for real values of x, but it remains true for all complex values. 
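Both Euler's infinite product and Euler's formula lend themselves to quick numerical sanity checks. The sketch below (Python; the product is truncated at an arbitrarily chosen 10,000 factors, so only loose agreement is asserted) is illustrative, not a proof:

```python
import cmath
import math

# Euler's formula e^{iz} = cos z + i sin z, for a real and a complex argument
for z in (0.7, 0.3 + 2.0j):
    assert cmath.isclose(cmath.exp(1j * z), cmath.cos(z) + 1j * cmath.sin(z))

# sine and cosine recovered from the exponential (real x)
x = 1.2
assert math.isclose(math.sin(x), ((cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j).real)
assert math.isclose(math.cos(x), ((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2).real)

# Euler's product sin z = z * prod(1 - z^2 / (n^2 pi^2)), truncated
product = x
for n in range(1, 10_000):
    product *= 1 - x * x / (n * n * math.pi * math.pi)
assert math.isclose(product, math.sin(x), rel_tol=1e-3)
```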
Proof: Let f 1 ( x ) = cos ⁡ x + i sin ⁡ x , {\displaystyle f_{1}(x)=\cos x+i\sin x,} and f 2 ( x ) = e i x . {\displaystyle f_{2}(x)=e^{ix}.} One has d f j ( x ) / d x = i f j ( x ) {\displaystyle df_{j}(x)/dx=if_{j}(x)} for j = 1, 2. The quotient rule thus implies that d / d x ( f 1 ( x ) / f 2 ( x ) ) = 0 {\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0} . Therefore, f 1 ( x ) / f 2 ( x ) {\displaystyle f_{1}(x)/f_{2}(x)} is a constant function, which equals 1, as f 1 ( 0 ) = f 2 ( 0 ) = 1. {\displaystyle f_{1}(0)=f_{2}(0)=1.} This proves the formula. One has e i x = cos ⁡ x + i sin ⁡ x e − i x = cos ⁡ x − i sin ⁡ x . {\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x\\[5pt]e^{-ix}&=\cos x-i\sin x.\end{aligned}}} Solving this linear system in sine and cosine, one can express them in terms of the exponential function: sin ⁡ x = e i x − e − i x 2 i cos ⁡ x = e i x + e − i x 2 . {\displaystyle {\begin{aligned}\sin x&={\frac {e^{ix}-e^{-ix}}{2i}}\\[5pt]\cos x&={\frac {e^{ix}+e^{-ix}}{2}}.\end{aligned}}} When x is real, this may be rewritten as cos ⁡ x = Re ⁡ ( e i x ) , sin ⁡ x = Im ⁡ ( e i x ) . {\displaystyle \cos x=\operatorname {Re} \left(e^{ix}\right),\qquad \sin x=\operatorname {Im} \left(e^{ix}\right).} Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using the above formulas, and then using the identity e a + b = e a e b {\displaystyle e^{a+b}=e^{a}e^{b}} for simplifying the result. Euler's formula can also be used to define the basic trigonometric functions directly, as follows, using the language of topological groups. The set U {\displaystyle U} of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , via an isomorphism e : R / Z → U . 
{\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.} In pedestrian terms, e ( t ) = exp ⁡ ( 2 π i t ) {\displaystyle e(t)=\exp(2\pi it)} , and this isomorphism is unique up to taking complex conjugates. For a nonzero real number a {\displaystyle a} (the base), the function t ↦ e ( t / a ) {\displaystyle t\mapsto e(t/a)} defines an isomorphism of the group R / a Z → U {\displaystyle \mathbb {R} /a\mathbb {Z} \to U} . The real and imaginary parts of e ( t / a ) {\displaystyle e(t/a)} are the cosine and sine, where a {\displaystyle a} is used as the base for measuring angles. For example, when a = 2 π {\displaystyle a=2\pi } , we get the measure in radians, and the usual trigonometric functions. When a = 360 {\displaystyle a=360} , we get the sine and cosine of angles measured in degrees. Note that a = 2 π {\displaystyle a=2\pi } is the unique value at which the derivative d d t e ( t / a ) {\displaystyle {\frac {d}{dt}}e(t/a)} becomes a unit vector with positive imaginary part at t = 0 {\displaystyle t=0} . This fact can, in turn, be used to define the constant 2 π {\displaystyle 2\pi } . === Definition via integration === Another way to define the trigonometric functions in analysis is using integration. For a real number t {\displaystyle t} , put θ ( t ) = ∫ 0 t d τ 1 + τ 2 = arctan ⁡ t {\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t} where this integral defines the inverse tangent function. Also, π {\displaystyle \pi } is defined by 1 2 π = ∫ 0 ∞ d τ 1 + τ 2 {\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}} , a definition that goes back to Karl Weierstrass. On the interval − π / 2 < θ < π / 2 {\displaystyle -\pi /2<\theta <\pi /2} , the trigonometric functions are defined by inverting the relation θ = arctan ⁡ t {\displaystyle \theta =\arctan t} . 
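The integral defining arctan, and the Weierstrass definition of π, can be approximated with elementary numerical integration. A sketch using a midpoint rule with an arbitrarily chosen step count (illustrative, not a rigorous computation):

```python
import math

def arctan_via_integral(t, steps=200_000):
    """Midpoint-rule approximation of the integral of 1/(1 + tau^2) over [0, t]."""
    h = t / steps
    return sum(h / (1 + ((k + 0.5) * h) ** 2) for k in range(steps))

# theta(t) = arctan t
assert math.isclose(arctan_via_integral(1.0), math.atan(1.0), rel_tol=1e-9)

# pi/2 is the integral over (0, infinity); the substitution tau -> 1/tau maps
# (1, infinity) onto (0, 1) with the same integrand, so the improper integral
# is twice the integral over (0, 1):
assert math.isclose(2 * arctan_via_integral(1.0), math.pi / 2, rel_tol=1e-9)
```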
Thus we define the trigonometric functions by tan ⁡ θ = t , cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 , sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 {\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}} where the point ( t , θ ) {\displaystyle (t,\theta )} is on the graph of θ = arctan ⁡ t {\displaystyle \theta =\arctan t} and the positive square root is taken. This defines the trigonometric functions on ( − π / 2 , π / 2 ) {\displaystyle (-\pi /2,\pi /2)} . The definition can be extended to all real numbers by first observing that, as θ → π / 2 {\displaystyle \theta \to \pi /2} , t → ∞ {\displaystyle t\to \infty } , and so cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 → 0 {\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0} and sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 → 1 {\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1} . Thus cos ⁡ θ {\displaystyle \cos \theta } and sin ⁡ θ {\displaystyle \sin \theta } are extended continuously so that cos ⁡ ( π / 2 ) = 0 , sin ⁡ ( π / 2 ) = 1 {\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1} . Now the conditions cos ⁡ ( θ + π ) = − cos ⁡ ( θ ) {\displaystyle \cos(\theta +\pi )=-\cos(\theta )} and sin ⁡ ( θ + π ) = − sin ⁡ ( θ ) {\displaystyle \sin(\theta +\pi )=-\sin(\theta )} define the sine and cosine as periodic functions with period 2 π {\displaystyle 2\pi } , for all real numbers. To prove the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. 
First, arctan ⁡ s + arctan ⁡ t = arctan ⁡ s + t 1 − s t {\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}} holds, provided arctan ⁡ s + arctan ⁡ t ∈ ( − π / 2 , π / 2 ) {\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)} , since arctan ⁡ s + arctan ⁡ t = ∫ − s t d τ 1 + τ 2 = ∫ 0 s + t 1 − s t d τ 1 + τ 2 {\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}} after the substitution τ → s + τ 1 − s τ {\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}} . In particular, the limiting case as s → ∞ {\displaystyle s\to \infty } gives arctan ⁡ t + π 2 = arctan ⁡ ( − 1 / t ) , t ∈ ( − ∞ , 0 ) . {\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).} Thus we have sin ⁡ ( θ + π 2 ) = − 1 t 1 + ( − 1 / t ) 2 = − 1 1 + t 2 = − cos ⁡ ( θ ) {\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )} and cos ⁡ ( θ + π 2 ) = 1 1 + ( − 1 / t ) 2 = t 1 + t 2 = sin ⁡ ( θ ) . {\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).} So the sine and cosine functions are related by translation over a quarter period π / 2 {\displaystyle \pi /2} . === Definitions using functional equations === One can also define the trigonometric functions using various functional equations. For example, the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula cos ⁡ ( x − y ) = cos ⁡ x cos ⁡ y + sin ⁡ x sin ⁡ y {\displaystyle \cos(x-y)=\cos x\cos y+\sin x\sin y\,} and the added condition 0 < x cos ⁡ x < sin ⁡ x < x for 0 < x < 1. 
{\displaystyle 0<x\cos x<\sin x<x\quad {\text{ for }}\quad 0<x<1.} === In the complex plane === The sine and cosine of a complex number z = x + i y {\displaystyle z=x+iy} can be expressed in terms of real sines, cosines, and hyperbolic functions as follows: sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y {\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y\\[5pt]\cos z&=\cos x\cosh y-i\sin x\sinh y\end{aligned}}} By taking advantage of domain coloring, it is possible to graph the trigonometric functions as complex-valued functions. Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z {\displaystyle z} becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding hyperbolic functions highlights the relationships between the two. == Periodicity and asymptotes == The sine and cosine functions are periodic, with period 2 π {\displaystyle 2\pi } , which is the smallest positive period: sin ⁡ ( z + 2 π ) = sin ⁡ ( z ) , cos ⁡ ( z + 2 π ) = cos ⁡ ( z ) . {\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).} Consequently, the cosecant and secant also have 2 π {\displaystyle 2\pi } as their period. The functions sine and cosine also have semiperiods π {\displaystyle \pi } , and sin ⁡ ( z + π ) = − sin ⁡ ( z ) , cos ⁡ ( z + π ) = − cos ⁡ ( z ) {\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)} and consequently tan ⁡ ( z + π ) = tan ⁡ ( z ) , cot ⁡ ( z + π ) = cot ⁡ ( z ) . 
{\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).} Also, sin ⁡ ( x + π / 2 ) = cos ⁡ ( x ) , cos ⁡ ( x + π / 2 ) = − sin ⁡ ( x ) {\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)} (see Complementary angles). The function sin ⁡ ( z ) {\displaystyle \sin(z)} has a unique zero (at z = 0 {\displaystyle z=0} ) in the strip − π < ℜ ( z ) < π {\displaystyle -\pi <\Re (z)<\pi } . The function cos ⁡ ( z ) {\displaystyle \cos(z)} has the pair of zeros z = ± π / 2 {\displaystyle z=\pm \pi /2} in the same strip. Because of the periodicity, the zeros of sine are π Z = { … , − 2 π , − π , 0 , π , 2 π , … } ⊂ C . {\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .} The zeros of cosine are π 2 + π Z = { … , − 3 π 2 , − π 2 , π 2 , 3 π 2 , … } ⊂ C . {\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .} All of the zeros are simple zeros, and both functions have derivative ± 1 {\displaystyle \pm 1} at each of the zeros. The tangent function tan ⁡ ( z ) = sin ⁡ ( z ) / cos ⁡ ( z ) {\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at z = 0 {\displaystyle z=0} and vertical asymptotes at z = ± π / 2 {\displaystyle z=\pm \pi /2} , where it has a simple pole of residue − 1 {\displaystyle -1} . Again, owing to the periodicity, the zeros are all the integer multiples of π {\displaystyle \pi } and the poles are odd multiples of π / 2 {\displaystyle \pi /2} , all having the same residue. The poles correspond to vertical asymptotes lim x → π / 2 − tan ⁡ ( x ) = + ∞ , lim x → π / 2 + tan ⁡ ( x ) = − ∞ . 
{\displaystyle \lim _{x\to {\frac {\pi }{2}}^{-}}\tan(x)=+\infty ,\quad \lim _{x\to {\frac {\pi }{2}}^{+}}\tan(x)=-\infty .} The cotangent function cot ⁡ ( z ) = cos ⁡ ( z ) / sin ⁡ ( z ) {\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of π {\displaystyle \pi } and simple zeros at odd multiples of π / 2 {\displaystyle \pi /2} . The poles correspond to vertical asymptotes lim x → 0 − cot ⁡ ( x ) = − ∞ , lim x → 0 + cot ⁡ ( x ) = + ∞ . {\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .} == Basic identities == Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities. These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π/2], see Proofs of trigonometric identities). For non-geometrical proofs using only tools of calculus, one may use the differential equations directly, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function. === Parity === The cosine and the secant are even functions; the other trigonometric functions are odd functions. That is: sin ⁡ ( − x ) = − sin ⁡ x cos ⁡ ( − x ) = cos ⁡ x tan ⁡ ( − x ) = − tan ⁡ x cot ⁡ ( − x ) = − cot ⁡ x csc ⁡ ( − x ) = − csc ⁡ x sec ⁡ ( − x ) = sec ⁡ x . {\displaystyle {\begin{aligned}\sin(-x)&=-\sin x\\\cos(-x)&=\cos x\\\tan(-x)&=-\tan x\\\cot(-x)&=-\cot x\\\csc(-x)&=-\csc x\\\sec(-x)&=\sec x.\end{aligned}}} === Periods === All trigonometric functions are periodic functions of period 2π. This is the smallest period, except for the tangent and the cotangent, which have π as their smallest period. 
This means that, for every integer k, one has sin ⁡ ( x + 2 k π ) = sin ⁡ x cos ⁡ ( x + 2 k π ) = cos ⁡ x tan ⁡ ( x + k π ) = tan ⁡ x cot ⁡ ( x + k π ) = cot ⁡ x csc ⁡ ( x + 2 k π ) = csc ⁡ x sec ⁡ ( x + 2 k π ) = sec ⁡ x . {\displaystyle {\begin{array}{lrl}\sin(x+&2k\pi )&=\sin x\\\cos(x+&2k\pi )&=\cos x\\\tan(x+&k\pi )&=\tan x\\\cot(x+&k\pi )&=\cot x\\\csc(x+&2k\pi )&=\csc x\\\sec(x+&2k\pi )&=\sec x.\end{array}}} See Periodicity and asymptotes. === Pythagorean identity === The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is sin 2 ⁡ x + cos 2 ⁡ x = 1 {\displaystyle \sin ^{2}x+\cos ^{2}x=1} . Dividing through by either cos 2 ⁡ x {\displaystyle \cos ^{2}x} or sin 2 ⁡ x {\displaystyle \sin ^{2}x} gives tan 2 ⁡ x + 1 = sec 2 ⁡ x {\displaystyle \tan ^{2}x+1=\sec ^{2}x} 1 + cot 2 ⁡ x = csc 2 ⁡ x {\displaystyle 1+\cot ^{2}x=\csc ^{2}x} and sec 2 ⁡ x + csc 2 ⁡ x = sec 2 ⁡ x csc 2 ⁡ x {\displaystyle \sec ^{2}x+\csc ^{2}x=\sec ^{2}x\csc ^{2}x} . === Sum and difference formulas === The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities). One can also produce them algebraically using Euler's formula. Sum sin ⁡ ( x + y ) = sin ⁡ x cos ⁡ y + cos ⁡ x sin ⁡ y , cos ⁡ ( x + y ) = cos ⁡ x cos ⁡ y − sin ⁡ x sin ⁡ y , tan ⁡ ( x + y ) = tan ⁡ x + tan ⁡ y 1 − tan ⁡ x tan ⁡ y . {\displaystyle {\begin{aligned}\sin \left(x+y\right)&=\sin x\cos y+\cos x\sin y,\\[5mu]\cos \left(x+y\right)&=\cos x\cos y-\sin x\sin y,\\[5mu]\tan(x+y)&={\frac {\tan x+\tan y}{1-\tan x\tan y}}.\end{aligned}}} Difference sin ⁡ ( x − y ) = sin ⁡ x cos ⁡ y − cos ⁡ x sin ⁡ y , cos ⁡ ( x − y ) = cos ⁡ x cos ⁡ y + sin ⁡ x sin ⁡ y , tan ⁡ ( x − y ) = tan ⁡ x − tan ⁡ y 1 + tan ⁡ x tan ⁡ y .
{\displaystyle {\begin{aligned}\sin \left(x-y\right)&=\sin x\cos y-\cos x\sin y,\\[5mu]\cos \left(x-y\right)&=\cos x\cos y+\sin x\sin y,\\[5mu]\tan(x-y)&={\frac {\tan x-\tan y}{1+\tan x\tan y}}.\end{aligned}}} When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae. sin ⁡ 2 x = 2 sin ⁡ x cos ⁡ x = 2 tan ⁡ x 1 + tan 2 ⁡ x , cos ⁡ 2 x = cos 2 ⁡ x − sin 2 ⁡ x = 2 cos 2 ⁡ x − 1 = 1 − 2 sin 2 ⁡ x = 1 − tan 2 ⁡ x 1 + tan 2 ⁡ x , tan ⁡ 2 x = 2 tan ⁡ x 1 − tan 2 ⁡ x . {\displaystyle {\begin{aligned}\sin 2x&=2\sin x\cos x={\frac {2\tan x}{1+\tan ^{2}x}},\\[5mu]\cos 2x&=\cos ^{2}x-\sin ^{2}x=2\cos ^{2}x-1=1-2\sin ^{2}x={\frac {1-\tan ^{2}x}{1+\tan ^{2}x}},\\[5mu]\tan 2x&={\frac {2\tan x}{1-\tan ^{2}x}}.\end{aligned}}} These identities can be used to derive the product-to-sum identities. By setting t = tan ⁡ 1 2 θ , {\displaystyle t=\tan {\tfrac {1}{2}}\theta ,} all trigonometric functions of θ {\displaystyle \theta } can be expressed as rational fractions of t {\displaystyle t} : sin ⁡ θ = 2 t 1 + t 2 , cos ⁡ θ = 1 − t 2 1 + t 2 , tan ⁡ θ = 2 t 1 − t 2 . {\displaystyle {\begin{aligned}\sin \theta &={\frac {2t}{1+t^{2}}},\\[5mu]\cos \theta &={\frac {1-t^{2}}{1+t^{2}}},\\[5mu]\tan \theta &={\frac {2t}{1-t^{2}}}.\end{aligned}}} Together with d θ = 2 1 + t 2 d t , {\displaystyle d\theta ={\frac {2}{1+t^{2}}}\,dt,} this is the tangent half-angle substitution, which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions. === Derivatives and antiderivatives === The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule. The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration. 
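The double-angle formulas and the tangent half-angle substitution above are easy to spot-check numerically. A minimal sketch in Python (the helper name `half_angle_ratios` is illustrative, not standard):

```python
import math

def half_angle_ratios(theta):
    """Express sin, cos, tan of theta through t = tan(theta/2)."""
    t = math.tan(theta / 2)
    return 2*t / (1 + t*t), (1 - t*t) / (1 + t*t), 2*t / (1 - t*t)

# Check the rational-fraction expressions at an angle away from the poles of tan:
theta = 0.7
s, c, tg = half_angle_ratios(theta)
assert math.isclose(s, math.sin(theta))
assert math.isclose(c, math.cos(theta))
assert math.isclose(tg, math.tan(theta))

# Double-angle formulas follow from the sum formulas with x = y:
x = 0.3
assert math.isclose(math.sin(2*x), 2*math.sin(x)*math.cos(x))
assert math.isclose(math.cos(2*x), 2*math.cos(x)**2 - 1)
assert math.isclose(math.tan(2*x), 2*math.tan(x) / (1 - math.tan(x)**2))
```

The same substitution t = tan(θ/2) underlies the tangent half-angle substitution used for integrating rational expressions in sin and cos.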
Note: For 0 < x < π {\displaystyle 0<x<\pi } the integral of csc ⁡ x {\displaystyle \csc x} can also be written as − arsinh ⁡ ( cot ⁡ x ) , {\displaystyle -\operatorname {arsinh} (\cot x),} and for the integral of sec ⁡ x {\displaystyle \sec x} for − π / 2 < x < π / 2 {\displaystyle -\pi /2<x<\pi /2} as arsinh ⁡ ( tan ⁡ x ) , {\displaystyle \operatorname {arsinh} (\tan x),} where arsinh {\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine. Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule: d cos ⁡ x d x = d d x sin ⁡ ( π / 2 − x ) = − cos ⁡ ( π / 2 − x ) = − sin ⁡ x , d csc ⁡ x d x = d d x sec ⁡ ( π / 2 − x ) = − sec ⁡ ( π / 2 − x ) tan ⁡ ( π / 2 − x ) = − csc ⁡ x cot ⁡ x , d cot ⁡ x d x = d d x tan ⁡ ( π / 2 − x ) = − sec 2 ⁡ ( π / 2 − x ) = − csc 2 ⁡ x . {\displaystyle {\begin{aligned}{\frac {d\cos x}{dx}}&={\frac {d}{dx}}\sin(\pi /2-x)=-\cos(\pi /2-x)=-\sin x\,,\\{\frac {d\csc x}{dx}}&={\frac {d}{dx}}\sec(\pi /2-x)=-\sec(\pi /2-x)\tan(\pi /2-x)=-\csc x\cot x\,,\\{\frac {d\cot x}{dx}}&={\frac {d}{dx}}\tan(\pi /2-x)=-\sec ^{2}(\pi /2-x)=-\csc ^{2}x\,.\end{aligned}}} == Inverse functions == The trigonometric functions are periodic, and hence not injective, so strictly speaking, they do not have an inverse function. However, on each interval on which a trigonometric function is monotonic, one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions. To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values, is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function. The notations sin−1, cos−1, etc. 
are often used for arcsin and arccos, etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with "arcsecond". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms. == Applications == === Angles and sides of a triangle === In this section A, B, C denote the three (interior) angles of a triangle, and a, b, c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve. ==== Law of sines ==== The law of sines states that for an arbitrary triangle with sides a, b, and c and angles opposite those sides A, B and C: sin ⁡ A a = sin ⁡ B b = sin ⁡ C c = 2 Δ a b c , {\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},} where Δ is the area of the triangle, or, equivalently, a sin ⁡ A = b sin ⁡ B = c sin ⁡ C = 2 R , {\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,} where R is the triangle's circumradius. It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. ==== Law of cosines ==== The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem: c 2 = a 2 + b 2 − 2 a b cos ⁡ C , {\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,} or equivalently, cos ⁡ C = a 2 + b 2 − c 2 2 a b . 
{\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.} In this formula the angle at C is opposite to the side c. This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem. The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known. ==== Law of tangents ==== The law of tangents says that: tan ⁡ A − B 2 tan ⁡ A + B 2 = a − b a + b {\displaystyle {\frac {\tan {\frac {A-B}{2}}}{\tan {\frac {A+B}{2}}}}={\frac {a-b}{a+b}}} . ==== Law of cotangents ==== If s is the triangle's semiperimeter, (a + b + c)/2, and r is the radius of the triangle's incircle, then rs is the triangle's area. Therefore Heron's formula implies that: r = 1 s ( s − a ) ( s − b ) ( s − c ) {\displaystyle r={\sqrt {{\frac {1}{s}}(s-a)(s-b)(s-c)}}} . The law of cotangents says that: cot ⁡ A 2 = s − a r {\displaystyle \cot {\frac {A}{2}}={\frac {s-a}{r}}} It follows that cot ⁡ A 2 s − a = cot ⁡ B 2 s − b = cot ⁡ C 2 s − c = 1 r . {\displaystyle {\frac {\cot {\dfrac {A}{2}}}{s-a}}={\frac {\cot {\dfrac {B}{2}}}{s-b}}={\frac {\cot {\dfrac {C}{2}}}{s-c}}={\frac {1}{r}}.} === Periodic functions === The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion, which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion. Trigonometric functions also prove to be useful in the study of general periodic functions. The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves. 
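The laws of sines and cosines above lend themselves to a short numeric illustration; the helper names below are ad hoc, and the 3-4-5 right triangle serves as the test case:

```python
import math

def third_side(a, b, gamma):
    """Law of cosines: the side c opposite the angle gamma between sides a and b."""
    return math.sqrt(a*a + b*b - 2*a*b*math.cos(gamma))

def circumradius(a, b, gamma):
    """Law of sines: c / sin(C) = 2R, applied to the side found above."""
    c = third_side(a, b, gamma)
    return c / (2 * math.sin(gamma))

# A right triangle with legs 3 and 4: the included angle is pi/2,
# so the law of cosines reduces to the Pythagorean theorem.
c = third_side(3, 4, math.pi / 2)      # hypotenuse
R = circumradius(3, 4, math.pi / 2)    # the hypotenuse is a diameter, so R = c/2
assert math.isclose(c, 5.0)
assert math.isclose(R, 2.5)
```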
Under rather general conditions, a periodic function f (x) can be expressed as a sum of sine waves or cosine waves in a Fourier series. Denoting the sine or cosine basis functions by φk, the expansion of the periodic function f (t) takes the form: f ( t ) = ∑ k = 1 ∞ c k φ k ( t ) . {\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).} For example, the square wave can be written as the Fourier series f square ( t ) = 4 π ∑ k = 1 ∞ sin ⁡ ( ( 2 k − 1 ) t ) 2 k − 1 . {\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.} In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. The superposition of several terms in the expansion of a sawtooth wave are shown underneath. == History == While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin. (See Aryabhata's sine table.) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles. Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. 
The trigonometric functions were later studied by mathematicians including Omar Khayyám, Bhāskara II, Nasir al-Din al-Tusi, Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus, and Rheticus' student Valentinus Otho. Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series. (See Madhava series and Madhava's sine table.) The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates. The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583). The 17th century French mathematician Albert Girard made the first published use of the abbreviations sin, cos, and tan in his book Trigonométrie. In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x. Though the trigonometric functions are defined as ratios of sides of a right triangle, and thus appear to be rational functions, Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series. He presented "Euler's formula", as well as near-modern abbreviations (sin., cos., tang., cot., sec., and cosec.). A few functions were common historically, but are now seldom used, such as the chord, versine (which appeared in the earliest tables), haversine, coversine, half-tangent (tangent of half an angle), and exsecant. List of trigonometric identities shows more relations between these functions.
crd ⁡ θ = 2 sin ⁡ 1 2 θ , vers ⁡ θ = 1 − cos ⁡ θ = 2 sin 2 ⁡ 1 2 θ , hav ⁡ θ = 1 2 vers ⁡ θ = sin 2 ⁡ 1 2 θ , covers ⁡ θ = 1 − sin ⁡ θ = vers ⁡ ( 1 2 π − θ ) , exsec ⁡ θ = sec ⁡ θ − 1. {\displaystyle {\begin{aligned}\operatorname {crd} \theta &=2\sin {\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {vers} \theta &=1-\cos \theta =2\sin ^{2}{\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {hav} \theta &={\tfrac {1}{2}}\operatorname {vers} \theta =\sin ^{2}{\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {covers} \theta &=1-\sin \theta =\operatorname {vers} {\bigl (}{\tfrac {1}{2}}\pi -\theta {\bigr )},\\[5mu]\operatorname {exsec} \theta &=\sec \theta -1.\end{aligned}}} Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. == Etymology == The word sine derives from Latin sinus, meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib, meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin. The choice was based on a misreading of the Arabic written form j-y-b (جيب), which itself originated as a transliteration from Sanskrit jīvā, which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string". The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans—"cutting"—since the line cuts the circle. 
The prefix "co-" (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle) and proceeds to define the cotangens similarly. == See also == == Notes == == References == == External links == "Trigonometric functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Visionlearning Module on Wave Mathematics GonioLab Visualization of the unit circle, trigonometric and hyperbolic functions q-Sine Article about the q-analog of sin at MathWorld q-Cosine Article about the q-analog of cos at MathWorld
Wikipedia/Trigonometric_functions
In mathematics, a transcendental function is an analytic function that does not satisfy a polynomial equation whose coefficients are functions of the independent variable that can be written using only the basic operations of addition, subtraction, multiplication, and division (without the need of taking limits). This is in contrast to an algebraic function. Examples of transcendental functions include the exponential function, the logarithm function, the hyperbolic functions, and the trigonometric functions. Equations over these expressions are called transcendental equations. == Definition == Formally, an analytic function f {\displaystyle f} of one real or complex variable is transcendental if it is algebraically independent of that variable. This means that the function does not satisfy any polynomial equation whose coefficients are polynomials in that variable. For example, the function f {\displaystyle f} given by f ( x ) = a x + b c x + d {\displaystyle f(x)={\frac {ax+b}{cx+d}}} for all x {\displaystyle x} is not transcendental, but algebraic, because it satisfies the polynomial equation ( a x + b ) − ( c x + d ) f ( x ) = 0 {\displaystyle (ax+b)-(cx+d)f(x)=0} . Similarly, the function f {\displaystyle f} that satisfies the equation f ( x ) 5 + f ( x ) = x {\displaystyle f(x)^{5}+f(x)=x} for all x {\displaystyle x} is not transcendental, but algebraic, even though it cannot be written as a finite expression involving the basic arithmetic operations. This definition can be extended to functions of several variables. == History == The transcendental functions sine and cosine were tabulated from physical measurements in antiquity, as evidenced in Greece (Hipparchus) and India (jya and koti-jya). In describing Ptolemy's table of chords, an equivalent to a table of sines, Olaf Pedersen wrote: The mathematical notion of continuity as an explicit concept is unknown to Ptolemy.
That he, in fact, treats these functions as continuous appears from his unspoken presumption that it is possible to determine a value of the dependent variable corresponding to any value of the independent variable by the simple process of linear interpolation. A revolutionary understanding of these circular functions occurred in the 17th century and was explicated by Leonhard Euler in 1748 in his Introduction to the Analysis of the Infinite. These ancient transcendental functions became known as continuous functions through quadrature of the rectangular hyperbola xy = 1 by Grégoire de Saint-Vincent in 1647, two millennia after Archimedes had produced The Quadrature of the Parabola. The area under the hyperbola was shown to have the scaling property of constant area for a constant ratio of bounds. The hyperbolic logarithm function so described was of limited service until 1748 when Leonhard Euler related it to functions where a constant is raised to a variable exponent, such as the exponential function where the constant base is e. By introducing these transcendental functions and noting the bijection property that implies an inverse function, some facility was provided for algebraic manipulations of the natural logarithm even if it is not an algebraic function. The exponential function is written exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} . Euler identified it with the infinite series ∑ k = 0 ∞ x k / k ! {\textstyle \sum _{k=0}^{\infty }x^{k}/k!} , where k! denotes the factorial of k. The even and odd terms of this series provide sums denoting cosh(x) and sinh(x), so that e x = cosh ⁡ x + sinh ⁡ x . {\displaystyle e^{x}=\cosh x+\sinh x.} These transcendental hyperbolic functions can be converted into circular functions sine and cosine by introducing (−1)k into the series, resulting in alternating series. 
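Euler's splitting of the exponential series into even and odd parts can be verified with a few lines of Python (truncation at 30 terms is an arbitrary but sufficient choice for small arguments):

```python
import math

def series_split(x, terms=30):
    """Sum x^k/k!, collecting even-index terms (-> cosh) and odd-index terms (-> sinh)."""
    even = sum(x**k / math.factorial(k) for k in range(0, terms, 2))
    odd = sum(x**k / math.factorial(k) for k in range(1, terms, 2))
    return even, odd

even, odd = series_split(1.5)
assert math.isclose(even, math.cosh(1.5))
assert math.isclose(odd, math.sinh(1.5))
assert math.isclose(even + odd, math.exp(1.5))  # e^x = cosh x + sinh x

# Introducing (-1)^k into the even-index series yields the circular cosine:
x = 1.5
cos_sum = sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(15))
assert math.isclose(cos_sum, math.cos(x))
```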
After Euler, mathematicians view the sine and cosine this way to relate the transcendence to logarithm and exponent functions, often through Euler's formula in complex number arithmetic. == Examples == The following functions are transcendental: f 1 ( x ) = x π f 2 ( x ) = e x f 3 ( x ) = log e ⁡ x f 4 ( x ) = cosh ⁡ x f 5 ( x ) = sinh ⁡ x f 6 ( x ) = tanh ⁡ x f 7 ( x ) = sinh − 1 ⁡ x f 8 ( x ) = tanh − 1 ⁡ x f 9 ( x ) = cos ⁡ x f 10 ( x ) = sin ⁡ x f 11 ( x ) = tan ⁡ x f 12 ( x ) = sin − 1 ⁡ x f 13 ( x ) = tan − 1 ⁡ x f 14 ( x ) = x ! f 15 ( x ) = 1 / x ! f 16 ( x ) = x x {\displaystyle {\begin{aligned}f_{1}(x)&=x^{\pi }\\[2pt]f_{2}(x)&=e^{x}\\[2pt]f_{3}(x)&=\log _{e}{x}\\[2pt]f_{4}(x)&=\cosh {x}\\f_{5}(x)&=\sinh {x}\\f_{6}(x)&=\tanh {x}\\f_{7}(x)&=\sinh ^{-1}{x}\\[2pt]f_{8}(x)&=\tanh ^{-1}{x}\\[2pt]f_{9}(x)&=\cos {x}\\f_{10}(x)&=\sin {x}\\f_{11}(x)&=\tan {x}\\f_{12}(x)&=\sin ^{-1}{x}\\[2pt]f_{13}(x)&=\tan ^{-1}{x}\\[2pt]f_{14}(x)&=x!\\f_{15}(x)&=1/x!\\[2pt]f_{16}(x)&=x^{x}\\[2pt]\end{aligned}}} For the first function f 1 ( x ) {\displaystyle f_{1}(x)} , the exponent π {\displaystyle \pi } can be replaced by any other irrational number, and the function will remain transcendental. For the second and third functions f 2 ( x ) {\displaystyle f_{2}(x)} and f 3 ( x ) {\displaystyle f_{3}(x)} , the base e {\displaystyle e} can be replaced by any other positive real number base not equaling 1, and the functions will remain transcendental. Functions 4-8 denote the hyperbolic trigonometric functions, while functions 9-13 denote the circular trigonometric functions. The fourteenth function f 14 ( x ) {\displaystyle f_{14}(x)} denotes the analytic extension of the factorial function via the gamma function, and f 15 ( x ) {\displaystyle f_{15}(x)} is its reciprocal, an entire function. 
Finally, in the last function f 16 ( x ) {\displaystyle f_{16}(x)} , the exponent x {\displaystyle x} can be replaced by k x {\displaystyle kx} for any nonzero real k {\displaystyle k} , and the function will remain transcendental. == Algebraic and transcendental functions == The most familiar transcendental functions are the logarithm, the exponential (with any non-trivial base), the trigonometric, and the hyperbolic functions, and the inverses of all of these. Less familiar are the special functions of analysis, such as the gamma, elliptic, and zeta functions, all of which are transcendental. The generalized hypergeometric and Bessel functions are transcendental in general, but algebraic for some special parameter values. Transcendental functions cannot be defined using only the operations of addition, subtraction, multiplication, division, and n {\displaystyle n} th roots (where n {\displaystyle n} is any integer), without using some "limiting process". A function that is not transcendental is algebraic. Simple examples of algebraic functions are the rational functions and the square root function, but in general, algebraic functions cannot be defined as finite formulas of the elementary functions, as shown by the example above with f ( x ) 5 + f ( x ) = x {\displaystyle f(x)^{5}+f(x)=x} (see Abel–Ruffini theorem). The indefinite integral of many algebraic functions is transcendental. For example, the integral ∫ t = 1 x 1 t d t {\displaystyle \int _{t=1}^{x}{\frac {1}{t}}dt} turns out to equal the logarithm function log e ⁡ ( x ) {\displaystyle \log _{e}(x)} . Similarly, the limit or the infinite sum of many algebraic function sequences is transcendental. For example, lim n → ∞ ( 1 + x / n ) n {\displaystyle \lim _{n\to \infty }(1+x/n)^{n}} converges to the exponential function e x {\displaystyle e^{x}} , and the infinite sum ∑ n = 0 ∞ x 2 n ( 2 n ) !
{\displaystyle \sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}} turns out to equal the hyperbolic cosine function cosh ⁡ x {\displaystyle \cosh x} . In fact, it is impossible to define any transcendental function in terms of algebraic functions without using some such "limiting procedure" (integrals, sequential limits, and infinite sums are just a few). Differential algebra examines how integration frequently creates functions that are algebraically independent of some class, such as when one takes polynomials with trigonometric functions as variables. == Transcendentally transcendental functions == Most familiar transcendental functions, including the special functions of mathematical physics, are solutions of algebraic differential equations. Those that are not, such as the gamma and the zeta functions, are called transcendentally transcendental or hypertranscendental functions. == Exceptional set == If f is an algebraic function and α {\displaystyle \alpha } is an algebraic number then f (α) is also an algebraic number. The converse is not true: there are entire transcendental functions f such that f (α) is an algebraic number for any algebraic α. For a given transcendental function the set of algebraic numbers giving algebraic results is called the exceptional set of that function. Formally it is defined by: E ( f ) = { α ∈ Q ¯ : f ( α ) ∈ Q ¯ } . {\displaystyle {\mathcal {E}}(f)=\left\{\alpha \in {\overline {\mathbb {Q} }}\,:\,f(\alpha )\in {\overline {\mathbb {Q} }}\right\}.} In many instances the exceptional set is fairly small. For example, E ( exp ) = { 0 } {\displaystyle {\mathcal {E}}(\exp )=\{0\}} , a result proved by Lindemann in 1882. In particular exp(1) = e is transcendental. Also, since exp(iπ) = −1 is algebraic we know that iπ cannot be algebraic. Since i is algebraic this implies that π is a transcendental number.
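The two "limiting procedures" mentioned above, the integral of 1/t and the limit (1 + x/n)^n, can be illustrated numerically. A sketch using a midpoint-rule quadrature (the function name is illustrative):

```python
import math

def integral_reciprocal(x, n=100_000):
    """Midpoint-rule approximation of the integral of 1/t from 1 to x."""
    h = (x - 1) / n
    return sum(h / (1 + (k + 0.5) * h) for k in range(n))

# The integral of an algebraic function can be transcendental: here it is log x.
assert math.isclose(integral_reciprocal(5.0), math.log(5.0), rel_tol=1e-6)

# The limit of algebraic functions (1 + x/n)^n approaches e^x as n grows:
x, n = 1.0, 10**6
assert math.isclose((1 + x/n)**n, math.exp(x), rel_tol=1e-5)
```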
In general, finding the exceptional set of a function is a difficult problem, but if it can be calculated then it can often lead to results in transcendental number theory. Here are some other known exceptional sets: Klein's j-invariant E ( j ) = { α ∈ H : [ Q ( α ) : Q ] = 2 } , {\displaystyle {\mathcal {E}}(j)=\left\{\alpha \in {\mathcal {H}}\,:\,[\mathbb {Q} (\alpha ):\mathbb {Q} ]=2\right\},} where ⁠ H {\displaystyle {\mathcal {H}}} ⁠ is the upper half-plane, and ⁠ [ Q ( α ) : Q ] {\displaystyle [\mathbb {Q} (\alpha ):\mathbb {Q} ]} ⁠ is the degree of the number field ⁠ Q ( α ) . {\displaystyle \mathbb {Q} (\alpha ).} ⁠ This result is due to Theodor Schneider. Exponential function in base 2: E ( 2 x ) = Q , {\displaystyle {\mathcal {E}}(2^{x})=\mathbb {Q} ,} This result is a corollary of the Gelfond–Schneider theorem, which states that if α ≠ 0 , 1 {\displaystyle \alpha \neq 0,1} is algebraic, and β {\displaystyle \beta } is algebraic and irrational then α β {\displaystyle \alpha ^{\beta }} is transcendental. Thus the function 2x could be replaced by cx for any algebraic c not equal to 0 or 1. Indeed, we have: E ( x x ) = E ( x 1 x ) = Q ∖ { 0 } . {\displaystyle {\mathcal {E}}(x^{x})={\mathcal {E}}\left(x^{\frac {1}{x}}\right)=\mathbb {Q} \setminus \{0\}.} A consequence of Schanuel's conjecture in transcendental number theory would be that E ( e e x ) = ∅ . {\displaystyle {\mathcal {E}}\left(e^{e^{x}}\right)=\emptyset .} A function with empty exceptional set that does not require assuming Schanuel's conjecture is f ( x ) = exp ⁡ ( 1 + π x ) . {\displaystyle f(x)=\exp(1+\pi x).} While calculating the exceptional set for a given function is not easy, it is known that given any subset of the algebraic numbers, say A, there is a transcendental function whose exceptional set is A. The subset does not need to be proper, meaning that A can be the set of algebraic numbers. 
This directly implies that there exist transcendental functions that produce transcendental numbers only when given transcendental numbers. Alex Wilkie also proved that there exist transcendental functions for which first-order-logic proofs about their transcendence do not exist by providing an exemplary analytic function. == Dimensional analysis == In dimensional analysis, transcendental functions are notable because they make sense only when their argument is dimensionless (possibly after algebraic reduction). Because of this, transcendental functions can be an easy-to-spot source of dimensional errors. For example, log(5 metres) is a nonsensical expression, unlike log(5 metres / 3 metres) or log(3) metres. One could attempt to apply a logarithmic identity to get log(5) + log(metres), which highlights the problem: applying a non-algebraic operation to a dimension creates meaningless results. == See also == Complex function Function (mathematics) Generalized function List of special functions and eponyms List of types of functions Rational function Special functions == References == == External links == Definition of "Transcendental function" in the Encyclopedia of Math
Wikipedia/Transcendental_function
In mathematics, an algebraic function is a function that can be defined as the root of an irreducible polynomial equation. Algebraic functions are often algebraic expressions using a finite number of terms, involving only the algebraic operations addition, subtraction, multiplication, division, and raising to a fractional power. Examples of such functions are: f ( x ) = 1 / x {\displaystyle f(x)=1/x} f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} f ( x ) = 1 + x 3 x 3 / 7 − 7 x 1 / 3 {\displaystyle f(x)={\frac {\sqrt {1+x^{3}}}{x^{3/7}-{\sqrt {7}}x^{1/3}}}} Some algebraic functions, however, cannot be expressed by such finite expressions (this is the Abel–Ruffini theorem). This is the case, for example, for the Bring radical, which is the function implicitly defined by f ( x ) 5 + f ( x ) + x = 0 {\displaystyle f(x)^{5}+f(x)+x=0} . In more precise terms, an algebraic function of degree n in one variable x is a function y = f ( x ) , {\displaystyle y=f(x),} that is continuous in its domain and satisfies a polynomial equation of positive degree a n ( x ) y n + a n − 1 ( x ) y n − 1 + ⋯ + a 0 ( x ) = 0 {\displaystyle a_{n}(x)y^{n}+a_{n-1}(x)y^{n-1}+\cdots +a_{0}(x)=0} where the coefficients ai(x) are polynomial functions of x, with integer coefficients. It can be shown that the same class of functions is obtained if algebraic numbers are accepted for the coefficients of the ai(x)'s. If transcendental numbers occur in the coefficients the function is, in general, not algebraic, but it is algebraic over the field generated by these coefficients. The value of an algebraic function at a rational number, and more generally, at an algebraic number is always an algebraic number. Sometimes, coefficients a i ( x ) {\displaystyle a_{i}(x)} that are polynomial over a ring R are considered, and one then talks about "functions algebraic over R". 
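Although the Bring radical has no finite radical expression, its values are straightforward to compute numerically. A sketch using bisection (the helper name is illustrative); it relies on the fact that f ↦ f⁵ + f is strictly increasing, so the real root is unique:

```python
def bring_radical(x, tol=1e-12):
    """Solve f^5 + f + x = 0 for the unique real root f by bisection."""
    # Bracket: g(f) = f^5 + f + x is increasing, negative at -(|x|+1), positive at |x|+1.
    lo, hi = -abs(x) - 1, abs(x) + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**5 + mid + x > 0:
            hi = mid        # root lies below mid
        else:
            lo = mid        # root lies above mid
    return (lo + hi) / 2

f = bring_radical(1.0)
assert abs(f**5 + f + 1.0) < 1e-10   # residual of the defining equation
```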
A function which is not algebraic is called a transcendental function, as it is for example the case of exp ⁡ x , tan ⁡ x , ln ⁡ x , Γ ( x ) {\displaystyle \exp x,\tan x,\ln x,\Gamma (x)} . A composition of transcendental functions can give an algebraic function: f ( x ) = cos ⁡ arcsin ⁡ x = 1 − x 2 {\displaystyle f(x)=\cos \arcsin x={\sqrt {1-x^{2}}}} . As a polynomial equation of degree n has up to n roots (and exactly n roots over an algebraically closed field, such as the complex numbers), a polynomial equation does not implicitly define a single function, but up to n functions, sometimes also called branches. Consider for example the equation of the unit circle: y 2 + x 2 = 1. {\displaystyle y^{2}+x^{2}=1.\,} This determines y, except only up to an overall sign; accordingly, it has two branches: y = ± 1 − x 2 . {\displaystyle y=\pm {\sqrt {1-x^{2}}}.\,} An algebraic function in m variables is similarly defined as a function y = f ( x 1 , … , x m ) {\displaystyle y=f(x_{1},\dots ,x_{m})} which solves a polynomial equation in m + 1 variables: p ( y , x 1 , x 2 , … , x m ) = 0. {\displaystyle p(y,x_{1},x_{2},\dots ,x_{m})=0.} It is normally assumed that p should be an irreducible polynomial. The existence of an algebraic function is then guaranteed by the implicit function theorem. Formally, an algebraic function in m variables over the field K is an element of the algebraic closure of the field of rational functions K(x1, ..., xm). == Algebraic functions in one variable == === Introduction and overview === The informal definition of an algebraic function provides a number of clues about their properties. To gain an intuitive understanding, it may be helpful to regard algebraic functions as functions which can be formed by the usual algebraic operations: addition, multiplication, division, and taking an nth root. 
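The composition cos(arcsin x) = √(1 − x²) and the two branches of the unit circle can be checked numerically; a brief Python sketch:

```python
import math

# A composition of transcendental functions giving an algebraic function:
for x in [-0.9, -0.5, 0.0, 0.3, 0.99]:
    assert math.isclose(math.cos(math.asin(x)), math.sqrt(1 - x*x))

# The unit circle y^2 + x^2 = 1 determines y only up to sign, giving two branches;
# both satisfy the defining polynomial equation:
def upper(x): return math.sqrt(1 - x*x)
def lower(x): return -math.sqrt(1 - x*x)

for x in [-0.5, 0.25]:
    assert math.isclose(upper(x)**2 + x*x, 1)
    assert math.isclose(lower(x)**2 + x*x, 1)
```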
This is something of an oversimplification; because of the fundamental theorem of Galois theory, algebraic functions need not be expressible by radicals. First, note that any polynomial function y = p ( x ) {\displaystyle y=p(x)} is an algebraic function, since it is simply the solution y to the equation y − p ( x ) = 0. {\displaystyle y-p(x)=0.\,} More generally, any rational function y = p ( x ) q ( x ) {\displaystyle y={\frac {p(x)}{q(x)}}} is algebraic, being the solution to q ( x ) y − p ( x ) = 0. {\displaystyle q(x)y-p(x)=0.} Moreover, the nth root of any polynomial y = p ( x ) n {\textstyle y={\sqrt[{n}]{p(x)}}} is an algebraic function, solving the equation y n − p ( x ) = 0. {\displaystyle y^{n}-p(x)=0.} Surprisingly, the inverse function of an algebraic function is an algebraic function. For supposing that y is a solution to a n ( x ) y n + ⋯ + a 0 ( x ) = 0 , {\displaystyle a_{n}(x)y^{n}+\cdots +a_{0}(x)=0,} for each value of x, then x is also a solution of this equation for each value of y. Indeed, interchanging the roles of x and y and gathering terms, b m ( y ) x m + b m − 1 ( y ) x m − 1 + ⋯ + b 0 ( y ) = 0. {\displaystyle b_{m}(y)x^{m}+b_{m-1}(y)x^{m-1}+\cdots +b_{0}(y)=0.} Writing x as a function of y gives the inverse function, also an algebraic function. However, not every function has an inverse. For example, y = x2 fails the horizontal line test: it fails to be one-to-one. The inverse is the algebraic "function" x = ± y {\displaystyle x=\pm {\sqrt {y}}} . Another way to understand this, is that the set of branches of the polynomial equation defining our algebraic function is the graph of an algebraic curve. === The role of complex numbers === From an algebraic perspective, complex numbers enter quite naturally into the study of algebraic functions. First of all, by the fundamental theorem of algebra, the complex numbers are an algebraically closed field. 
Hence any polynomial relation p(y, x) = 0 is guaranteed to have at least one solution (and in general a number of solutions not exceeding the degree of p in y) for y at each point x, provided we allow y to assume complex as well as real values. Thus, problems to do with the domain of an algebraic function can safely be minimized. Furthermore, even if one is ultimately interested in real algebraic functions, there may be no means to express the function in terms of addition, multiplication, division and taking nth roots without resorting to complex numbers (see casus irreducibilis). For example, consider the algebraic function determined by the equation y 3 − x y + 1 = 0. {\displaystyle y^{3}-xy+1=0.\,} Using the cubic formula, we get y = 2 x − 108 + 12 81 − 12 x 3 3 + − 108 + 12 81 − 12 x 3 3 6 . {\displaystyle y={\frac {2x}{\sqrt[{3}]{-108+12{\sqrt {81-12x^{3}}}}}}+{\frac {\sqrt[{3}]{-108+12{\sqrt {81-12x^{3}}}}}{6}}.} For x ≤ 3 4 3 , {\displaystyle x\leq {\frac {3}{\sqrt[{3}]{4}}},} the square root is real and the cube root is thus well defined, providing the unique real root. On the other hand, for x > 3 4 3 , {\displaystyle x>{\frac {3}{\sqrt[{3}]{4}}},} the square root is not real, and one has to choose, for the square root, one of the two non-real square roots. Thus the cube root has to be chosen among three non-real numbers. If the same choices are made in the two terms of the formula, the three choices for the cube root provide the three branches shown in the accompanying image. It may be proven that there is no way to express this function in terms of nth roots using real numbers only, even though the resulting function is real-valued on the domain of the graph shown. On a more significant theoretical level, using complex numbers allows one to use the powerful techniques of complex analysis to discuss algebraic functions. 
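The real branch can be evaluated directly by Cardano's method, from which the closed form above is obtained. The sketch below (Python; the helper `real_cbrt` is introduced here because Python's `**` operator returns a complex principal root for negative radicands, while the real cube root is wanted) checks that the result satisfies the defining equation for several x ≤ 3/∛4:

```python
import math

def real_cbrt(t):
    """Real cube root, valid for negative t as well."""
    return math.copysign(abs(t) ** (1.0 / 3.0), t)

def real_branch(x):
    """Real solution of y^3 - x*y + 1 = 0, for x <= 3 / 4**(1/3) (Cardano)."""
    disc = 0.25 - x ** 3 / 27.0        # nonnegative exactly when x <= 3/4**(1/3)
    u = real_cbrt(-0.5 + math.sqrt(disc))
    v = real_cbrt(-0.5 - math.sqrt(disc))
    return u + v                       # y = u + v with 3uv = x, u^3 + v^3 = -1

for x in [-2.0, 0.0, 1.0, 1.5]:
    y = real_branch(x)
    assert abs(y ** 3 - x * y + 1.0) < 1e-9   # y satisfies the defining equation
```

For x beyond 3/∛4 the discriminant turns negative and the same recipe requires the complex cube-root choices discussed above.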
In particular, the argument principle can be used to show that any algebraic function is in fact an analytic function, at least in the multiple-valued sense. Formally, let p(x, y) be a complex polynomial in the complex variables x and y. Suppose that x0 ∈ C is such that the polynomial p(x0, y) of y has n distinct zeros. We shall show that the algebraic function is analytic in a neighborhood of x0. Choose a system of n non-overlapping discs Δi, each containing one of these zeros. Then by the argument principle 1 2 π i ∮ ∂ Δ i p y ( x 0 , y ) p ( x 0 , y ) d y = 1. {\displaystyle {\frac {1}{2\pi i}}\oint _{\partial \Delta _{i}}{\frac {p_{y}(x_{0},y)}{p(x_{0},y)}}\,dy=1.} By continuity, this also holds for all x in a neighborhood of x0. In particular, p(x, y) has only one root in Δi, given by the residue theorem: f i ( x ) = 1 2 π i ∮ ∂ Δ i y p y ( x , y ) p ( x , y ) d y {\displaystyle f_{i}(x)={\frac {1}{2\pi i}}\oint _{\partial \Delta _{i}}y{\frac {p_{y}(x,y)}{p(x,y)}}\,dy} which is an analytic function. === Monodromy === Note that the foregoing proof of analyticity derived an expression for a system of n different function elements fi (x), provided that x is not a critical point of p(x, y). A critical point is a point where the number of distinct zeros is smaller than the degree of p, and this occurs only where the highest degree term of p or the discriminant vanishes. Hence there are only finitely many such points c1, ..., cm. A close analysis of the properties of the function elements fi near the critical points can be used to show that the monodromy cover is ramified over the critical points (and possibly the point at infinity). Thus the holomorphic extension of the fi has at worst algebraic poles and ordinary algebraic branchings over the critical points. 
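The contour-integral expression for fi(x) above can be evaluated numerically. The following sketch discretizes it with the trapezoid rule (spectrally accurate for periodic integrands) for the toy polynomial p(x, y) = y² − x; the circle's center and radius are chosen by hand so that exactly one zero lies inside:

```python
import cmath

def root_by_residue(x, center, radius=0.5, n=2000):
    """Recover the zero of p(x, y) = y^2 - x inside the circle |y - center| = radius
    via (1/2*pi*i) * contour integral of y * p_y(x, y) / p(x, y) dy, in n steps."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = 2.0 * cmath.pi * k / n
        y = center + radius * cmath.exp(1j * theta)
        dy = 1j * radius * cmath.exp(1j * theta) * (2.0 * cmath.pi / n)
        total += y * (2.0 * y) / (y * y - x) * dy   # y * p_y / p
    return total / (2j * cmath.pi)

# The circle |y - 1.4| = 0.5 encloses sqrt(2) but not -sqrt(2):
assert abs(root_by_residue(2.0, center=1.4) - cmath.sqrt(2)) < 1e-6
```

The same discretization works for any polynomial p once `p` and `p_y` are swapped in, as long as the contour separates the zeros.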
Note that, away from the critical points, we have p ( x , y ) = a n ( x ) ( y − f 1 ( x ) ) ( y − f 2 ( x ) ) ⋯ ( y − f n ( x ) ) {\displaystyle p(x,y)=a_{n}(x)(y-f_{1}(x))(y-f_{2}(x))\cdots (y-f_{n}(x))} since the fi are by definition the distinct zeros of p. The monodromy group acts by permuting the factors, and thus forms the monodromy representation of the Galois group of p. (The monodromy action on the universal covering space is a related but different notion in the theory of Riemann surfaces.) == History == The ideas surrounding algebraic functions go back at least as far as René Descartes. The first discussion of algebraic functions appears to have been in Edward Waring's 1794 An Essay on the Principles of Human Knowledge in which he writes: let a quantity denoting the ordinate, be an algebraic function of the abscissa x, by the common methods of division and extraction of roots, reduce it into an infinite series ascending or descending according to the dimensions of x, and then find the integral of each of the resulting terms. == See also == Algebraic expression Analytic function Complex function Elementary function Function (mathematics) Generalized function List of special functions and eponyms List of types of functions Polynomial Rational function Special functions Transcendental function == References == Ahlfors, Lars (1979). Complex Analysis. McGraw Hill. van der Waerden, B.L. (1931). Modern Algebra, Volume II. Springer. == External links == Definition of "Algebraic function" in the Encyclopedia of Math Weisstein, Eric W. "Algebraic Function". MathWorld. Algebraic Function at PlanetMath. Definition of "Algebraic function" Archived 2020-10-26 at the Wayback Machine in David J. Darling's Internet Encyclopedia of Science
Wikipedia/Algebraic_function
Special functions are particular mathematical functions that have more or less established names and notations due to their importance in mathematical analysis, functional analysis, geometry, physics, or other applications. The term is defined by consensus, and thus lacks a general formal definition, but the list of mathematical functions contains functions that are commonly accepted as special. == Tables of special functions == Many special functions appear as solutions of differential equations or integrals of elementary functions. Therefore, tables of integrals usually include descriptions of special functions, and tables of special functions include most important integrals; at least, the integral representation of special functions. Because symmetries of differential equations are essential to both physics and mathematics, the theory of special functions is closely related to the theory of Lie groups and Lie algebras, as well as certain topics in mathematical physics. Symbolic computation engines usually recognize the majority of special functions. === Notations used for special functions === Functions with established international notations are the sine ( sin {\displaystyle \sin } ), cosine ( cos {\displaystyle \cos } ), exponential function ( exp {\displaystyle \exp } ), and error function ( erf {\displaystyle \operatorname {erf} } or erfc {\displaystyle \operatorname {erfc} } ). Some special functions have several notations: The natural logarithm may be denoted ln {\displaystyle \ln } , log {\displaystyle \log } , log e {\displaystyle \log _{e}} , or Log {\displaystyle \operatorname {Log} } depending on the context. The tangent function may be denoted tan {\displaystyle \tan } , Tan {\displaystyle \operatorname {Tan} } , or tg {\displaystyle \operatorname {tg} } (used in several European languages). 
Arctangent may be denoted arctan {\displaystyle \arctan } , atan {\displaystyle \operatorname {atan} } , arctg {\displaystyle \operatorname {arctg} } , or tan − 1 {\displaystyle \tan ^{-1}} . The Bessel functions may be denoted J n ( x ) , {\displaystyle J_{n}(x),} besselj ⁡ ( n , x ) , {\displaystyle \operatorname {besselj} (n,x),} B e s s e l J [ n , x ] . {\displaystyle {\rm {BesselJ}}[n,x].} Subscripts are often used to indicate arguments, typically integers. In a few cases, the semicolon (;) or even backslash (\) is used as a separator for arguments. This can complicate translation into algorithmic languages. Superscripts may indicate not only a power (exponent), but some other modification of the function. Examples (particularly with trigonometric and hyperbolic functions) include: cos 3 ⁡ ( x ) {\displaystyle \cos ^{3}(x)} usually means ( cos ⁡ ( x ) ) 3 {\displaystyle (\cos(x))^{3}} cos 2 ⁡ ( x ) {\displaystyle \cos ^{2}(x)} is typically ( cos ⁡ ( x ) ) 2 {\displaystyle (\cos(x))^{2}} , but never cos ⁡ ( cos ⁡ ( x ) ) {\displaystyle \cos(\cos(x))} cos − 1 ⁡ ( x ) {\displaystyle \cos ^{-1}(x)} usually means arccos ⁡ ( x ) {\displaystyle \arccos(x)} , not ( cos ⁡ ( x ) ) − 1 {\displaystyle (\cos(x))^{-1}} ; this may cause confusion, since the meaning of this superscript is inconsistent with the others. === Evaluation of special functions === Most special functions are considered as functions of a complex variable. They are analytic; the singularities and cuts are described; the differential and integral representations are known; and expansions into Taylor series or asymptotic series are available. In addition, sometimes there exist relations with other special functions; a complicated special function can be expressed in terms of simpler functions. Various representations can be used for the evaluation; the simplest way to evaluate a function is to expand it into a Taylor series. However, such a representation may converge slowly or not at all. 
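The slow-convergence caveat is easy to see even for the exponential function. A small sketch (Python, standard library; the sample arguments 1 and 20 are arbitrary illustrations) comparing Taylor partial sums near and far from the origin:

```python
import math

def exp_taylor(x, terms):
    """Partial sum of the Taylor series exp(x) = sum over k of x^k / k!."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)     # next term x^(k+1)/(k+1)! from the previous one
    return total

# Near the origin a modest number of terms suffices...
assert abs(exp_taylor(1.0, 30) - math.exp(1.0)) < 1e-12
# ...but at x = 20 the same number of terms is wildly off, and roughly an
# order of magnitude more terms is needed before the sum settles down.
assert abs(exp_taylor(20.0, 20) - math.exp(20.0)) > 1e6
assert abs(exp_taylor(20.0, 200) - math.exp(20.0)) / math.exp(20.0) < 1e-10
```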
In algorithmic languages, rational approximations are typically used, although they may behave badly in the case of complex argument(s). == History of special functions == === Classical theory === While trigonometry and exponential functions were systematized and unified by the eighteenth century, the search for a complete and unified theory of special functions has continued since the nineteenth century. The high point of special function theory in 1800–1900 was the theory of elliptic functions; treatises that were essentially complete, such as that of Tannery and Molk, expounded all the basic identities of the theory using techniques from analytic function theory (based on complex analysis). The end of the century also saw a very detailed discussion of spherical harmonics. === Changing and fixed motivations === While pure mathematicians sought a broad theory deriving as many as possible of the known special functions from a single principle, for a long time the special functions were the province of applied mathematics. Applications to the physical sciences and engineering determined the relative importance of functions. Before electronic computation, the importance of a special function was affirmed by the laborious computation of extended tables of values for ready look-up, as for the familiar logarithm tables. (Babbage's difference engine was an attempt to compute such tables.) For this purpose, the main techniques are: numerical analysis, the discovery of infinite series or other analytical expressions allowing rapid calculation; and reduction of as many functions as possible to the given function. More theoretical questions include: asymptotic analysis; analytic continuation and monodromy in the complex plane; and symmetry principles and other structural equations. === Twentieth century === The twentieth century saw several waves of interest in special function theory. 
The classic Whittaker and Watson (1902) textbook sought to unify the theory using complex analysis; the G. N. Watson tome A Treatise on the Theory of Bessel Functions pushed the techniques as far as possible for one important type, including asymptotic results. The later Bateman Manuscript Project, under the editorship of Arthur Erdélyi, attempted to be encyclopedic, and came around the time when electronic computation was coming to the fore and tabulation ceased to be the main issue. === Contemporary theories === The modern theory of orthogonal polynomials is of a definite but limited scope. Hypergeometric series, observed by Felix Klein to be important in astronomy and mathematical physics, became an intricate theory, requiring later conceptual arrangement. Lie group representations give an immediate generalization of spherical functions; from 1950 onwards substantial parts of classical theory were recast in terms of Lie groups. Further, work on algebraic combinatorics also revived interest in older parts of the theory. Conjectures of Ian G. Macdonald helped open up large and active new fields with a special function flavour. Difference equations have begun to take their place beside differential equations as a source of special functions. == Special functions in number theory == In number theory, certain special functions have traditionally been studied, such as particular Dirichlet series and modular forms. Almost all aspects of special function theory are reflected there, as well as some new ones, such as came out of monstrous moonshine theory. == Special functions of matrix arguments == Analogues of several special functions have been defined on the space of positive definite matrices, among them the power function which goes back to Atle Selberg, the multivariate gamma function, and types of Bessel functions. The NIST Digital Library of Mathematical Functions has a section covering several special functions of matrix arguments. 
== Researchers == == See also == List of mathematical functions List of special functions and eponyms Elementary function == References == === Bibliography === Andrews, George E.; Askey, Richard; Roy, Ranjan (1999). Special functions. Encyclopedia of Mathematics and its Applications. Vol. 71. Cambridge University Press. ISBN 978-0-521-62321-6. MR 1688958. Terras, Audrey (2016). Harmonic analysis on symmetric spaces – Higher rank spaces, positive definite matrix space and generalizations (second ed.). Springer Nature. ISBN 978-1-4939-3406-5. MR 3496932. Whittaker, E. T.; Watson, G. N. (1996-09-13). A Course of Modern Analysis. Cambridge University Press. ISBN 978-0-521-58807-2. N. N. Lebedev (translated & edited by Richard A. Silverman): Special Functions & Their Applications, Dover, ISBN 978-0-486-60624-8 (1972). Originally published by Prentice-Hall Inc. (1965). Nico M. Temme: Special Functions: An Introduction to the Classical Functions of Mathematical Physics, Wiley-Interscience, ISBN 978-0-471-11313-1 (1996). Yury A. Brychkov: Handbook of Special Functions: Derivatives, Integrals, Series and Other Formulas, CRC Press, ISBN 978-1-58488-956-4 (2008). W. W. Bell: Special Functions for Scientists and Engineers, Dover, ISBN 978-0-486-43521-3 (2004). === Numerical calculation method of function value === Shanjie Zhang and Jian-Ming Jin: Computation of Special Functions, Wiley-Interscience, ISBN 978-0-471-11963-0 (1996). William J. Thompson: Atlas for Computing Mathematical Functions: An Illustrated Guide for Practitioners; With Programs in C and Mathematica, Wiley-Interscience, ISBN 978-0-471-00260-4 (March, 1997). William J. Thompson: Atlas for Computing Mathematical Functions: An Illustrated Guide for Practitioners; With Programs in Fortran 90 and Mathematica, Wiley-Interscience, ISBN 978-0-471-18171-2 (June, 1997). Amparo Gil, Javier Segura and Nico M. Temme: Numerical Methods for Special Functions, SIAM, ISBN 978-0-898716-34-4 (2007). 
== External links == National Institute of Standards and Technology, United States Department of Commerce. NIST Digital Library of Mathematical Functions. Archived from the original on December 13, 2018. Weisstein, Eric W. "Special Function". MathWorld. Online calculator, Online scientific calculator with over 100 functions (>=32 digits, many complex) (German language) Special functions at EqWorld: The World of Mathematical Equations Special functions and polynomials by Gerard 't Hooft and Stefan Nobbenhuis (April 8, 2013) Numerical Methods for Special Functions, by A. Gil, J. Segura, N.M. Temme (2007). R. Jagannathan, (P,Q)-Special Functions Specialfunctionswiki
Wikipedia/Special_functions
In mathematics, the gamma function (represented by Γ, capital Greek letter gamma) is the most common extension of the factorial function to complex numbers. Derived by Daniel Bernoulli, the gamma function Γ ( z ) {\displaystyle \Gamma (z)} is defined for all complex numbers z {\displaystyle z} except non-positive integers, and for every positive integer z = n {\displaystyle z=n} , Γ ( n ) = ( n − 1 ) ! . {\displaystyle \Gamma (n)=(n-1)!\,.} The gamma function can be defined via a convergent improper integral for complex numbers with positive real part: Γ ( z ) = ∫ 0 ∞ t z − 1 e − t d t , ℜ ( z ) > 0 . {\displaystyle \Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}{\text{ d}}t,\ \qquad \Re (z)>0\,.} The gamma function then is defined in the complex plane as the analytic continuation of this integral function: it is a meromorphic function which is holomorphic except at zero and the negative integers, where it has simple poles. The gamma function has no zeros, so the reciprocal gamma function ⁠1/Γ(z)⁠ is an entire function. In fact, the gamma function corresponds to the Mellin transform of the negative exponential function: Γ ( z ) = M { e − x } ( z ) . {\displaystyle \Gamma (z)={\mathcal {M}}\{e^{-x}\}(z)\,.} Other extensions of the factorial function do exist, but the gamma function is the most popular and useful. It appears as a factor in various probability-distribution functions and other formulas in the fields of probability, statistics, analytic number theory, and combinatorics. == Motivation == The gamma function can be seen as a solution to the interpolation problem of finding a smooth curve y = f ( x ) {\displaystyle y=f(x)} that connects the points of the factorial sequence: ( x , y ) = ( n , n ! ) {\displaystyle (x,y)=(n,n!)} for all positive integer values of n {\displaystyle n} . The simple formula for the factorial, x! 
= 1 × 2 × ⋯ × x is only valid when x is a positive integer, and no elementary function has this property, but a good solution is the gamma function f ( x ) = Γ ( x + 1 ) {\displaystyle f(x)=\Gamma (x+1)} . The gamma function is not only smooth but analytic (except at the non-positive integers), and it can be defined in several explicit ways. However, it is not the only analytic function that extends the factorial, as one may add any analytic function that is zero on the positive integers, such as k sin ⁡ ( m π x ) {\displaystyle k\sin(m\pi x)} for an integer m {\displaystyle m} . Such a function is known as a pseudogamma function, the most famous being the Hadamard function. A more restrictive requirement is the functional equation which interpolates the shifted factorial f ( n ) = ( n − 1 ) ! {\displaystyle f(n)=(n{-}1)!} : f ( x + 1 ) = x f ( x ) for all x > 0 , f ( 1 ) = 1. {\displaystyle f(x+1)=xf(x)\ {\text{ for all }}x>0,\qquad f(1)=1.} But this still does not give a unique solution, since it allows for multiplication by any periodic function g ( x ) {\displaystyle g(x)} with g ( x ) = g ( x + 1 ) {\displaystyle g(x)=g(x+1)} and g ( 0 ) = 1 {\displaystyle g(0)=1} , such as g ( x ) = e k sin ⁡ ( m π x ) {\displaystyle g(x)=e^{k\sin(m\pi x)}} . One way to resolve the ambiguity is the Bohr–Mollerup theorem, which shows that f ( x ) = Γ ( x ) {\displaystyle f(x)=\Gamma (x)} is the unique interpolating function for the factorial, defined over the positive reals, which is logarithmically convex, meaning that y = log ⁡ f ( x ) {\displaystyle y=\log f(x)} is convex. == Definition == === Main definition === The notation Γ ( z ) {\displaystyle \Gamma (z)} is due to Legendre. If the real part of the complex number z is strictly positive ( ℜ ( z ) > 0 {\displaystyle \Re (z)>0} ), then the integral Γ ( z ) = ∫ 0 ∞ t z − 1 e − t d t {\displaystyle \Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}\,dt} converges absolutely, and is known as the Euler integral of the second kind. 
(Euler's integral of the first kind is the beta function.) Using integration by parts, one sees that: Γ ( z + 1 ) = ∫ 0 ∞ t z e − t d t = [ − t z e − t ] 0 ∞ + ∫ 0 ∞ z t z − 1 e − t d t = lim t → ∞ ( − t z e − t ) − ( − 0 z e − 0 ) + z ∫ 0 ∞ t z − 1 e − t d t . {\displaystyle {\begin{aligned}\Gamma (z+1)&=\int _{0}^{\infty }t^{z}e^{-t}\,dt\\&={\Bigl [}-t^{z}e^{-t}{\Bigr ]}_{0}^{\infty }+\int _{0}^{\infty }zt^{z-1}e^{-t}\,dt\\&=\lim _{t\to \infty }\left(-t^{z}e^{-t}\right)-\left(-0^{z}e^{-0}\right)+z\int _{0}^{\infty }t^{z-1}e^{-t}\,dt.\end{aligned}}} Recognizing that − t z e − t → 0 {\displaystyle -t^{z}e^{-t}\to 0} as t → ∞ , {\displaystyle t\to \infty ,} Γ ( z + 1 ) = z ∫ 0 ∞ t z − 1 e − t d t = z Γ ( z ) . {\displaystyle {\begin{aligned}\Gamma (z+1)&=z\int _{0}^{\infty }t^{z-1}e^{-t}\,dt\\&=z\Gamma (z).\end{aligned}}} Then Γ ( 1 ) {\displaystyle \Gamma (1)} can be calculated as: Γ ( 1 ) = ∫ 0 ∞ t 1 − 1 e − t d t = ∫ 0 ∞ e − t d t = 1. {\displaystyle {\begin{aligned}\Gamma (1)&=\int _{0}^{\infty }t^{1-1}e^{-t}\,dt\\&=\int _{0}^{\infty }e^{-t}\,dt\\&=1.\end{aligned}}} Thus we can show that Γ ( n ) = ( n − 1 ) ! {\displaystyle \Gamma (n)=(n-1)!} for any positive integer n by induction. Specifically, the base case is that Γ ( 1 ) = 1 = 0 ! {\displaystyle \Gamma (1)=1=0!} , and the induction step is that Γ ( n + 1 ) = n Γ ( n ) = n ( n − 1 ) ! = n ! . {\displaystyle \Gamma (n+1)=n\Gamma (n)=n(n-1)!=n!.} The identity Γ ( z ) = Γ ( z + 1 ) z {\textstyle \Gamma (z)={\frac {\Gamma (z+1)}{z}}} can be used (or, yielding the same result, analytic continuation can be used) to uniquely extend the integral formulation for Γ ( z ) {\displaystyle \Gamma (z)} to a meromorphic function defined for all complex numbers z, except integers less than or equal to zero. It is this extended version that is commonly referred to as the gamma function. === Alternative definitions === There are many equivalent definitions. 
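These properties can be spot-checked numerically. In the sketch below (Python standard library; the integral is truncated at t = 60 and evaluated with the trapezoid rule, hence the loose tolerance, and z is kept with real part ≥ 1 so the integrand is bounded at t = 0), the Euler integral is compared against `math.gamma`, and the recurrence and factorial values are verified:

```python
import math

def gamma_integral(z, upper=60.0, n=60000):
    """Trapezoid-rule approximation of the integral of t^(z-1) e^(-t)
    over [0, upper]; a crude stand-in for Gamma(z) when z >= 1."""
    f = lambda t: t ** (z - 1) * math.exp(-t)
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper))
    for k in range(1, n):
        total += f(k * h)
    return h * total

for z in [1.0, 2.5, 4.0]:
    assert abs(gamma_integral(z) - math.gamma(z)) < 1e-4

# The integration-by-parts recurrence Gamma(z+1) = z Gamma(z), and Gamma(n) = (n-1)!
for x in [0.5, 1.7, 3.2]:
    assert math.isclose(math.gamma(x + 1.0), x * math.gamma(x), rel_tol=1e-12)
for n in range(1, 10):
    assert math.isclose(math.gamma(n), math.factorial(n - 1), rel_tol=1e-12)
```

The truncation at t = 60 is harmless here because the tail of the integrand decays like e^(−t).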
==== Euler's definition as an infinite product ==== For a fixed integer m {\displaystyle m} , as the integer n {\displaystyle n} increases, we have that lim n → ∞ n ! ( n + 1 ) m ( n + m ) ! = 1 . {\displaystyle \lim _{n\to \infty }{\frac {n!\,\left(n+1\right)^{m}}{(n+m)!}}=1\,.} If m {\displaystyle m} is not an integer, then this equation is meaningless, since in this section the factorial of a non-integer has not been defined yet. However, let us assume that this equation continues to hold when m {\displaystyle m} is replaced by an arbitrary complex number z {\displaystyle z} , in order to define the Gamma function for non-integers: lim n → ∞ n ! ( n + 1 ) z ( n + z ) ! = 1 . {\displaystyle \lim _{n\to \infty }{\frac {n!\,\left(n+1\right)^{z}}{(n+z)!}}=1\,.} Multiplying both sides by ( z − 1 ) ! {\displaystyle (z-1)!} gives ( z − 1 ) ! = 1 z lim n → ∞ n ! z ! ( n + z ) ! ( n + 1 ) z = 1 z lim n → ∞ ( 1 ⋅ 2 ⋯ n ) 1 ( 1 + z ) ⋯ ( n + z ) ( 2 1 ⋅ 3 2 ⋯ n + 1 n ) z = 1 z ∏ n = 1 ∞ [ 1 1 + z n ( 1 + 1 n ) z ] . {\displaystyle {\begin{aligned}(z-1)!&={\frac {1}{z}}\lim _{n\to \infty }n!{\frac {z!}{(n+z)!}}(n+1)^{z}\\[8pt]&={\frac {1}{z}}\lim _{n\to \infty }(1\cdot 2\cdots n){\frac {1}{(1+z)\cdots (n+z)}}\left({\frac {2}{1}}\cdot {\frac {3}{2}}\cdots {\frac {n+1}{n}}\right)^{z}\\[8pt]&={\frac {1}{z}}\prod _{n=1}^{\infty }\left[{\frac {1}{1+{\frac {z}{n}}}}\left(1+{\frac {1}{n}}\right)^{z}\right].\end{aligned}}} This infinite product, which is due to Euler, converges for all complex numbers z {\displaystyle z} except the non-positive integers, which fail because of a division by zero. In fact, the above assumption produces a unique definition of Γ ( z ) {\displaystyle \Gamma (z)} as ⁠ ( z − 1 ) ! {\displaystyle (z-1)!} ⁠. Intuitively, this formula indicates that Γ ( z ) {\displaystyle \Gamma (z)} is approximately the result of computing Γ ( n + 1 ) = n ! 
{\displaystyle \Gamma (n+1)=n!} for some large integer n {\displaystyle n} , multiplying by ( n + 1 ) z {\displaystyle (n+1)^{z}} to approximate Γ ( n + z + 1 ) {\displaystyle \Gamma (n+z+1)} , and then using the relationship Γ ( x + 1 ) = x Γ ( x ) {\displaystyle \Gamma (x+1)=x\Gamma (x)} backwards n + 1 {\displaystyle n+1} times to get an approximation for Γ ( z ) {\displaystyle \Gamma (z)} ; and furthermore that this approximation becomes exact as n {\displaystyle n} increases to infinity. The infinite product for the reciprocal 1 Γ ( z ) = z ∏ n = 1 ∞ [ ( 1 + z n ) / ( 1 + 1 n ) z ] {\displaystyle {\frac {1}{\Gamma (z)}}=z\prod _{n=1}^{\infty }\left[\left(1+{\frac {z}{n}}\right)/{\left(1+{\frac {1}{n}}\right)^{z}}\right]} is an entire function, converging for every complex number z. ==== Weierstrass's definition ==== The definition for the gamma function due to Weierstrass is also valid for all complex numbers z {\displaystyle z} except non-positive integers: Γ ( z ) = e − γ z z ∏ n = 1 ∞ ( 1 + z n ) − 1 e z / n , {\displaystyle \Gamma (z)={\frac {e^{-\gamma z}}{z}}\prod _{n=1}^{\infty }\left(1+{\frac {z}{n}}\right)^{-1}e^{z/n},} where γ ≈ 0.577216 {\displaystyle \gamma \approx 0.577216} is the Euler–Mascheroni constant. This is the Hadamard product of 1 / Γ ( z ) {\displaystyle 1/\Gamma (z)} in a rewritten form. 
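The Weierstrass product lends itself to a direct numerical check: truncating after N factors leaves a relative error of roughly z²/(2N), so even a crude sketch recovers several digits (Python; the Euler–Mascheroni constant is hard-coded to double precision):

```python
import math

EULER_GAMMA = 0.5772156649015329   # Euler–Mascheroni constant

def gamma_weierstrass(z, terms=100000):
    """Gamma(z) via the truncated Weierstrass product for 1/Gamma(z):
    1/Gamma(z) = z e^(gamma z) * product over n of (1 + z/n) e^(-z/n)."""
    recip = z * math.exp(EULER_GAMMA * z)
    for n in range(1, terms + 1):
        recip *= (1.0 + z / n) * math.exp(-z / n)
    return 1.0 / recip

for z in [0.5, 1.5, 3.2]:
    assert abs(gamma_weierstrass(z) - math.gamma(z)) < 1e-3
```

The slow O(1/N) convergence of the raw product is one reason practical implementations use other representations (e.g. Lanczos- or Stirling-type approximations) instead.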
== Properties == === General === Besides the fundamental property discussed above: Γ ( z + 1 ) = z Γ ( z ) {\displaystyle \Gamma (z+1)=z\ \Gamma (z)} other important functional equations for the gamma function are Euler's reflection formula Γ ( 1 − z ) Γ ( z ) = π sin ⁡ π z , z ∉ Z {\displaystyle \Gamma (1-z)\Gamma (z)={\frac {\pi }{\sin \pi z}},\qquad z\not \in \mathbb {Z} } which implies Γ ( z − n ) = ( − 1 ) n − 1 Γ ( − z ) Γ ( 1 + z ) Γ ( n + 1 − z ) , n ∈ Z {\displaystyle \Gamma (z-n)=(-1)^{n-1}\;{\frac {\Gamma (-z)\Gamma (1+z)}{\Gamma (n+1-z)}},\qquad n\in \mathbb {Z} } and the Legendre duplication formula Γ ( z ) Γ ( z + 1 2 ) = 2 1 − 2 z π Γ ( 2 z ) . {\displaystyle \Gamma (z)\Gamma \left(z+{\tfrac {1}{2}}\right)=2^{1-2z}\;{\sqrt {\pi }}\;\Gamma (2z).} The duplication formula is a special case of the multiplication theorem (see Eq. 5.5.6): ∏ k = 0 m − 1 Γ ( z + k m ) = ( 2 π ) m − 1 2 m 1 2 − m z Γ ( m z ) . {\displaystyle \prod _{k=0}^{m-1}\Gamma \left(z+{\frac {k}{m}}\right)=(2\pi )^{\frac {m-1}{2}}\;m^{{\frac {1}{2}}-mz}\;\Gamma (mz).} A simple but useful property, which can be seen from the limit definition, is: Γ ( z ) ¯ = Γ ( z ¯ ) ⇒ Γ ( z ) Γ ( z ¯ ) ∈ R . 
{\displaystyle {\overline {\Gamma (z)}}=\Gamma ({\overline {z}})\;\Rightarrow \;\Gamma (z)\Gamma ({\overline {z}})\in \mathbb {R} .} In particular, with z = a + bi, this product is | Γ ( a + b i ) | 2 = | Γ ( a ) | 2 ∏ k = 0 ∞ 1 1 + b 2 ( a + k ) 2 {\displaystyle |\Gamma (a+bi)|^{2}=|\Gamma (a)|^{2}\prod _{k=0}^{\infty }{\frac {1}{1+{\frac {b^{2}}{(a+k)^{2}}}}}} If the real part is an integer or a half-integer, this can be finitely expressed in closed form: | Γ ( b i ) | 2 = π b sinh ⁡ π b | Γ ( 1 2 + b i ) | 2 = π cosh ⁡ π b | Γ ( 1 + b i ) | 2 = π b sinh ⁡ π b | Γ ( 1 + n + b i ) | 2 = π b sinh ⁡ π b ∏ k = 1 n ( k 2 + b 2 ) , n ∈ N | Γ ( − n + b i ) | 2 = π b sinh ⁡ π b ∏ k = 1 n ( k 2 + b 2 ) − 1 , n ∈ N | Γ ( 1 2 ± n + b i ) | 2 = π cosh ⁡ π b ∏ k = 1 n ( ( k − 1 2 ) 2 + b 2 ) ± 1 , n ∈ N {\displaystyle {\begin{aligned}|\Gamma (bi)|^{2}&={\frac {\pi }{b\sinh \pi b}}\\[1ex]\left|\Gamma \left({\tfrac {1}{2}}+bi\right)\right|^{2}&={\frac {\pi }{\cosh \pi b}}\\[1ex]\left|\Gamma \left(1+bi\right)\right|^{2}&={\frac {\pi b}{\sinh \pi b}}\\[1ex]\left|\Gamma \left(1+n+bi\right)\right|^{2}&={\frac {\pi b}{\sinh \pi b}}\prod _{k=1}^{n}\left(k^{2}+b^{2}\right),\quad n\in \mathbb {N} \\[1ex]\left|\Gamma \left(-n+bi\right)\right|^{2}&={\frac {\pi }{b\sinh \pi b}}\prod _{k=1}^{n}\left(k^{2}+b^{2}\right)^{-1},\quad n\in \mathbb {N} \\[1ex]\left|\Gamma \left({\tfrac {1}{2}}\pm n+bi\right)\right|^{2}&={\frac {\pi }{\cosh \pi b}}\prod _{k=1}^{n}\left(\left(k-{\tfrac {1}{2}}\right)^{2}+b^{2}\right)^{\pm 1},\quad n\in \mathbb {N} \\[-1ex]&\end{aligned}}} Perhaps the best-known value of the gamma function at a non-integer argument is Γ ( 1 2 ) = π , {\displaystyle \Gamma \left({\tfrac {1}{2}}\right)={\sqrt {\pi }},} which can be found by setting z = 1 2 {\textstyle z={\frac {1}{2}}} in the reflection formula, by using the relation to the beta function given below with z 1 = z 2 = 1 2 {\textstyle z_{1}=z_{2}={\frac {1}{2}}} , or simply by making the substitution t = u 2 
{\displaystyle t=u^{2}} in the integral definition of the gamma function, resulting in a Gaussian integral. In general, for non-negative integer values of n {\displaystyle n} we have: Γ ( 1 2 + n ) = ( 2 n ) ! 4 n n ! π = ( 2 n − 1 ) ! ! 2 n π = ( n − 1 2 n ) n ! π Γ ( 1 2 − n ) = ( − 4 ) n n ! ( 2 n ) ! π = ( − 2 ) n ( 2 n − 1 ) ! ! π = π ( − 1 / 2 n ) n ! {\displaystyle {\begin{aligned}\Gamma \left({\tfrac {1}{2}}+n\right)&={(2n)! \over 4^{n}n!}{\sqrt {\pi }}={\frac {(2n-1)!!}{2^{n}}}{\sqrt {\pi }}={\binom {n-{\frac {1}{2}}}{n}}n!{\sqrt {\pi }}\\[8pt]\Gamma \left({\tfrac {1}{2}}-n\right)&={(-4)^{n}n! \over (2n)!}{\sqrt {\pi }}={\frac {(-2)^{n}}{(2n-1)!!}}{\sqrt {\pi }}={\frac {\sqrt {\pi }}{{\binom {-1/2}{n}}n!}}\end{aligned}}} where the double factorial ( 2 n − 1 ) ! ! = ( 2 n − 1 ) ( 2 n − 3 ) ⋯ ( 3 ) ( 1 ) {\displaystyle (2n-1)!!=(2n-1)(2n-3)\cdots (3)(1)} . See Particular values of the gamma function for calculated values. It might be tempting to generalize the result that Γ ( 1 2 ) = π {\textstyle \Gamma \left({\frac {1}{2}}\right)={\sqrt {\pi }}} by looking for a formula for other individual values Γ ( r ) {\displaystyle \Gamma (r)} where r {\displaystyle r} is rational, especially because according to Gauss's digamma theorem, it is possible to do so for the closely related digamma function at every rational value. However, these numbers Γ ( r ) {\displaystyle \Gamma (r)} are not known to be expressible by themselves in terms of elementary functions. It has been proved that Γ ( n + r ) {\displaystyle \Gamma (n+r)} is a transcendental number and algebraically independent of π {\displaystyle \pi } for any integer n {\displaystyle n} and each of the fractions r = 1 6 , 1 4 , 1 3 , 2 3 , 3 4 , 5 6 {\textstyle r={\frac {1}{6}},{\frac {1}{4}},{\frac {1}{3}},{\frac {2}{3}},{\frac {3}{4}},{\frac {5}{6}}} . In general, when computing values of the gamma function, we must settle for numerical approximations. 
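The reflection formula, the duplication formula, and the half-integer closed form all lend themselves to quick numerical spot checks (Python standard library; these are checks at a few arbitrary sample points, not proofs):

```python
import math

for z in [0.3, 0.5, 1.7, 2.9]:
    # Euler's reflection formula: Gamma(1-z) Gamma(z) = pi / sin(pi z)
    assert math.isclose(math.gamma(1.0 - z) * math.gamma(z),
                        math.pi / math.sin(math.pi * z), rel_tol=1e-10)
    # Legendre duplication formula: Gamma(z) Gamma(z + 1/2) = 2^(1-2z) sqrt(pi) Gamma(2z)
    assert math.isclose(math.gamma(z) * math.gamma(z + 0.5),
                        2.0 ** (1.0 - 2.0 * z) * math.sqrt(math.pi) * math.gamma(2.0 * z),
                        rel_tol=1e-10)

# Gamma(1/2 + n) = (2n)! / (4^n n!) * sqrt(pi) for nonnegative integers n
for n in range(6):
    closed = math.factorial(2 * n) / (4 ** n * math.factorial(n)) * math.sqrt(math.pi)
    assert math.isclose(math.gamma(0.5 + n), closed, rel_tol=1e-10)
```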
The derivatives of the gamma function are described in terms of the polygamma function, ψ(0)(z): Γ ′ ( z ) = Γ ( z ) ψ ( 0 ) ( z ) . {\displaystyle \Gamma '(z)=\Gamma (z)\psi ^{(0)}(z).} For a positive integer m the derivative of the gamma function can be calculated as follows: Γ ′ ( m + 1 ) = m ! ( − γ + ∑ k = 1 m 1 k ) = m ! ( − γ + H ( m ) ) , {\displaystyle \Gamma '(m+1)=m!\left(-\gamma +\sum _{k=1}^{m}{\frac {1}{k}}\right)=m!\left(-\gamma +H(m)\right)\,,} where H(m) is the mth harmonic number and γ is the Euler–Mascheroni constant. For ℜ ( z ) > 0 {\displaystyle \Re (z)>0} the n {\displaystyle n} th derivative of the gamma function is: d n d z n Γ ( z ) = ∫ 0 ∞ t z − 1 e − t ( log ⁡ t ) n d t . {\displaystyle {\frac {d^{n}}{dz^{n}}}\Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}(\log t)^{n}\,dt.} (This can be derived by differentiating the integral form of the gamma function with respect to z {\displaystyle z} , and using the technique of differentiation under the integral sign.) Using the identity Γ ( n ) ( 1 ) = ( − 1 ) n B n ( γ , 1 ! ζ ( 2 ) , … , ( n − 1 ) ! ζ ( n ) ) {\displaystyle \Gamma ^{(n)}(1)=(-1)^{n}B_{n}(\gamma ,1!\zeta (2),\ldots ,(n-1)!\zeta (n))} where ζ ( z ) {\displaystyle \zeta (z)} is the Riemann zeta function, and B n {\displaystyle B_{n}} is the n {\displaystyle n} -th Bell polynomial, we have in particular the Laurent series expansion of the gamma function Γ ( z ) = 1 z − γ + 1 2 ( γ 2 + π 2 6 ) z − 1 6 ( γ 3 + γ π 2 2 + 2 ζ ( 3 ) ) z 2 + O ( z 3 ) . {\displaystyle \Gamma (z)={\frac {1}{z}}-\gamma +{\frac {1}{2}}\left(\gamma ^{2}+{\frac {\pi ^{2}}{6}}\right)z-{\frac {1}{6}}\left(\gamma ^{3}+{\frac {\gamma \pi ^{2}}{2}}+2\zeta (3)\right)z^{2}+O(z^{3}).} === Inequalities === When restricted to the positive real numbers, the gamma function is a strictly logarithmically convex function. 
This property may be stated in any of the following three equivalent ways: For any two positive real numbers x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} , and for any t ∈ [ 0 , 1 ] {\displaystyle t\in [0,1]} , Γ ( t x 1 + ( 1 − t ) x 2 ) ≤ Γ ( x 1 ) t Γ ( x 2 ) 1 − t . {\displaystyle \Gamma (tx_{1}+(1-t)x_{2})\leq \Gamma (x_{1})^{t}\Gamma (x_{2})^{1-t}.} For any two positive real numbers x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} , and x 2 {\displaystyle x_{2}} > x 1 {\displaystyle x_{1}} ( Γ ( x 2 ) Γ ( x 1 ) ) 1 x 2 − x 1 > exp ⁡ ( Γ ′ ( x 1 ) Γ ( x 1 ) ) . {\displaystyle \left({\frac {\Gamma (x_{2})}{\Gamma (x_{1})}}\right)^{\frac {1}{x_{2}-x_{1}}}>\exp \left({\frac {\Gamma '(x_{1})}{\Gamma (x_{1})}}\right).} For any positive real number x {\displaystyle x} , Γ ″ ( x ) Γ ( x ) > Γ ′ ( x ) 2 . {\displaystyle \Gamma ''(x)\Gamma (x)>\Gamma '(x)^{2}.} The last of these statements is, essentially by definition, the same as the statement that ψ ( 1 ) ( x ) > 0 {\displaystyle \psi ^{(1)}(x)>0} , where ψ ( 1 ) {\displaystyle \psi ^{(1)}} is the polygamma function of order 1. To prove the logarithmic convexity of the gamma function, it therefore suffices to observe that ψ ( 1 ) {\displaystyle \psi ^{(1)}} has a series representation which, for positive real x, consists of only positive terms. Logarithmic convexity and Jensen's inequality together imply, for any positive real numbers x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} and a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} , Γ ( a 1 x 1 + ⋯ + a n x n a 1 + ⋯ + a n ) ≤ ( Γ ( x 1 ) a 1 ⋯ Γ ( x n ) a n ) 1 a 1 + ⋯ + a n . {\displaystyle \Gamma \left({\frac {a_{1}x_{1}+\cdots +a_{n}x_{n}}{a_{1}+\cdots +a_{n}}}\right)\leq {\bigl (}\Gamma (x_{1})^{a_{1}}\cdots \Gamma (x_{n})^{a_{n}}{\bigr )}^{\frac {1}{a_{1}+\cdots +a_{n}}}.} There are also bounds on ratios of gamma functions. 
The best-known is Gautschi's inequality, which says that for any positive real number x and any s ∈ (0, 1), x 1 − s < Γ ( x + 1 ) Γ ( x + s ) < ( x + 1 ) 1 − s . {\displaystyle x^{1-s}<{\frac {\Gamma (x+1)}{\Gamma (x+s)}}<\left(x+1\right)^{1-s}.} === Stirling's formula === The behavior of Γ ( x ) {\displaystyle \Gamma (x)} for an increasing positive real variable is given by Stirling's formula Γ ( x + 1 ) ∼ 2 π x ( x e ) x , {\displaystyle \Gamma (x+1)\sim {\sqrt {2\pi x}}\left({\frac {x}{e}}\right)^{x},} where the symbol ∼ {\displaystyle \sim } means asymptotic convergence: the ratio of the two sides converges to 1 in the limit x → + ∞ {\textstyle x\to +\infty } . This growth is faster than exponential, exp ⁡ ( β x ) {\displaystyle \exp(\beta x)} , for any fixed value of β {\displaystyle \beta } . Another useful limit for asymptotic approximations for x → + ∞ {\displaystyle x\to +\infty } is: Γ ( x + α ) ∼ Γ ( x ) x α , α ∈ C . {\displaystyle {\Gamma (x+\alpha )}\sim {\Gamma (x)x^{\alpha }},\qquad \alpha \in \mathbb {C} .} When writing the error term as an infinite product, Stirling's formula can be used to define the gamma function: Γ ( x ) = 2 π x ( x e ) x ∏ n = 0 ∞ [ 1 e ( 1 + 1 x + n ) x + n + 1 2 ] {\displaystyle \Gamma (x)={\sqrt {\frac {2\pi }{x}}}\left({\frac {x}{e}}\right)^{x}\prod _{n=0}^{\infty }\left[{\frac {1}{e}}\left(1+{\frac {1}{x+n}}\right)^{x+n+{\frac {1}{2}}}\right]} === Extension to negative, non-integer values === Although the main definition of the gamma function—the Euler integral of the second kind—is only valid (on the real axis) for positive arguments, its domain can be extended to negative, non-integer arguments by analytic continuation, shifting the negative argument to positive values using either Euler's reflection formula, Γ ( − x ) = 1 Γ ( x + 1 ) π sin ⁡ ( π ( x + 1 ) ) , {\displaystyle \Gamma (-x)={\frac {1}{\Gamma (x+1)}}{\frac {\pi }{\sin {\big (}\pi (x+1){\big )}}},} or the fundamental property, Γ ( − x ) := 1 − x Γ ( − x + 1 )
, {\displaystyle \Gamma (-x):={\frac {1}{-x}}\Gamma (-x+1),} when x ∉ Z {\displaystyle x\not \in \mathbb {Z} } . For example, Γ ( − 1 2 ) = − 2 Γ ( 1 2 ) . {\displaystyle \Gamma \left(-{\frac {1}{2}}\right)=-2\Gamma \left({\frac {1}{2}}\right).} === Residues === The behavior for non-positive z {\displaystyle z} is more intricate. Euler's integral does not converge for ℜ ( z ) ≤ 0 {\displaystyle \Re (z)\leq 0} , but the function it defines in the positive complex half-plane has a unique analytic continuation to the negative half-plane. One way to find that analytic continuation is to use Euler's integral for positive arguments and extend the domain to negative numbers by repeated application of the recurrence formula, Γ ( z ) = Γ ( z + n + 1 ) z ( z + 1 ) ⋯ ( z + n ) , {\displaystyle \Gamma (z)={\frac {\Gamma (z+n+1)}{z(z+1)\cdots (z+n)}},} choosing n {\displaystyle n} such that z + n {\displaystyle z+n} is positive. The product in the denominator is zero when z {\displaystyle z} equals any of the integers 0 , − 1 , − 2 , … {\displaystyle 0,-1,-2,\ldots } . Thus, the gamma function must be undefined at those points to avoid division by zero; it is a meromorphic function with simple poles at the non-positive integers. For a function f {\displaystyle f} of a complex variable z {\displaystyle z} , at a simple pole c {\displaystyle c} , the residue of f {\displaystyle f} is given by: Res ⁡ ( f , c ) = lim z → c ( z − c ) f ( z ) . {\displaystyle \operatorname {Res} (f,c)=\lim _{z\to c}(z-c)f(z).} For the simple pole z = − n {\displaystyle z=-n} , the recurrence formula can be rewritten as: ( z + n ) Γ ( z ) = Γ ( z + n + 1 ) z ( z + 1 ) ⋯ ( z + n − 1 ) . {\displaystyle (z+n)\Gamma (z)={\frac {\Gamma (z+n+1)}{z(z+1)\cdots (z+n-1)}}.} The numerator at z = − n , {\displaystyle z=-n,} is Γ ( z + n + 1 ) = Γ ( 1 ) = 1 {\displaystyle \Gamma (z+n+1)=\Gamma (1)=1} and the denominator z ( z + 1 ) ⋯ ( z + n − 1 ) = − n ( 1 − n ) ⋯ ( n − 1 − n ) = ( − 1 ) n n ! . 
{\displaystyle z(z+1)\cdots (z+n-1)=-n(1-n)\cdots (n-1-n)=(-1)^{n}n!.} So the residues of the gamma function at those points are: Res ⁡ ( Γ , − n ) = ( − 1 ) n n ! . {\displaystyle \operatorname {Res} (\Gamma ,-n)={\frac {(-1)^{n}}{n!}}.} The gamma function is non-zero everywhere along the real line, although it comes arbitrarily close to zero as z → −∞. There is in fact no complex number z {\displaystyle z} for which Γ ( z ) = 0 {\displaystyle \Gamma (z)=0} , and hence the reciprocal gamma function 1 Γ ( z ) {\textstyle {\frac {1}{\Gamma (z)}}} is an entire function, with zeros at z = 0 , − 1 , − 2 , … {\displaystyle z=0,-1,-2,\ldots } . === Minima and maxima === On the real line, the gamma function has a local minimum at zmin ≈ +1.46163214496836234126 where it attains the value Γ(zmin) ≈ +0.88560319441088870027. The gamma function rises to either side of this minimum. The solution to Γ(z − 0.5) = Γ(z + 0.5) is z = +1.5 and the common value is Γ(1) = Γ(2) = +1. The positive solution to Γ(z − 1) = Γ(z + 1) is z = φ ≈ +1.618, the golden ratio, and the common value is Γ(φ − 1) = Γ(φ + 1) = φ! ≈ +1.44922960226989660037. The gamma function must alternate sign between its poles at the non-positive integers because the product in the forward recurrence contains an odd number of negative factors if the number of poles between z {\displaystyle z} and z + n {\displaystyle z+n} is odd, and an even number if the number of poles is even. The values at the local extrema of the gamma function along the real axis between the non-positive integers are: Γ(−0.50408300826445540925...) = −3.54464361115500508912..., Γ(−1.57349847316239045877...) = 2.30240725833968013582..., Γ(−2.61072086844414465000...) = −0.88813635840124192009..., Γ(−3.63529336643690109783...) = 0.24512753983436625043..., Γ(−4.65323776174314244171...) = −0.05277963958731940076..., etc. 
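The residue formula admits a quick numerical sanity check: just to the right of the pole at z = −n, the product (z + n)Γ(z) should be close to (−1)ⁿ/n!. A minimal sketch in Python, whose math.gamma accepts negative non-integer arguments:

```python
import math

def residue_estimate(n, eps=1e-7):
    """Estimate Res(Gamma, -n) as (z + n) * Gamma(z) evaluated at z = -n + eps,
    just to the right of the pole."""
    return eps * math.gamma(-n + eps)

for n in range(6):
    expected = (-1) ** n / math.factorial(n)
    assert abs(residue_estimate(n) - expected) < 1e-5
```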
=== Integral representations === There are many formulas, besides the Euler integral of the second kind, that express the gamma function as an integral. For instance, when the real part of z is positive, Γ ( z ) = ∫ − ∞ ∞ e z t − e t d t {\displaystyle \Gamma (z)=\int _{-\infty }^{\infty }e^{zt-e^{t}}\,dt} and Γ ( z ) = ∫ 0 1 ( log ⁡ 1 t ) z − 1 d t , {\displaystyle \Gamma (z)=\int _{0}^{1}\left(\log {\frac {1}{t}}\right)^{z-1}\,dt,} Γ ( z ) = 2 c z ∫ 0 ∞ t 2 z − 1 e − c t 2 d t , c > 0 {\displaystyle \Gamma (z)=2c^{z}\int _{0}^{\infty }t^{2z-1}e^{-ct^{2}}\,dt\,,\;c>0} where the three integrals respectively follow from the substitutions t = e − x {\displaystyle t=e^{-x}} , t = − log ⁡ x {\displaystyle t=-\log x} and t = c x 2 {\displaystyle t=cx^{2}} in Euler's second integral. The last integral in particular makes clear the connection between the gamma function at half integer arguments and the Gaussian integral: if z = 1 / 2 , c = 1 {\displaystyle z=1/2,\;c=1} we get Γ ( 1 / 2 ) = 2 ∫ 0 ∞ e − t 2 d t = π . {\displaystyle \Gamma (1/2)=2\int _{0}^{\infty }e^{-t^{2}}\,dt={\sqrt {\pi }}\;.} Binet's first integral formula for the gamma function states that, when the real part of z is positive, then: l o g Γ ⁡ ( z ) = ( z − 1 2 ) log ⁡ z − z + 1 2 log ⁡ ( 2 π ) + ∫ 0 ∞ ( 1 2 − 1 t + 1 e t − 1 ) e − t z t d t . {\displaystyle \operatorname {log\Gamma } (z)=\left(z-{\frac {1}{2}}\right)\log z-z+{\frac {1}{2}}\log(2\pi )+\int _{0}^{\infty }\left({\frac {1}{2}}-{\frac {1}{t}}+{\frac {1}{e^{t}-1}}\right){\frac {e^{-tz}}{t}}\,dt.} The integral on the right-hand side may be interpreted as a Laplace transform. That is, log ⁡ ( Γ ( z ) ( e z ) z z 2 π ) = L ( 1 2 t − 1 t 2 + 1 t ( e t − 1 ) ) ( z ) . 
{\displaystyle \log \left(\Gamma (z)\left({\frac {e}{z}}\right)^{z}{\sqrt {\frac {z}{2\pi }}}\right)={\mathcal {L}}\left({\frac {1}{2t}}-{\frac {1}{t^{2}}}+{\frac {1}{t(e^{t}-1)}}\right)(z).} Binet's second integral formula states that, again when the real part of z is positive, then: l o g Γ ⁡ ( z ) = ( z − 1 2 ) log ⁡ z − z + 1 2 log ⁡ ( 2 π ) + 2 ∫ 0 ∞ arctan ⁡ ( t / z ) e 2 π t − 1 d t . {\displaystyle \operatorname {log\Gamma } (z)=\left(z-{\frac {1}{2}}\right)\log z-z+{\frac {1}{2}}\log(2\pi )+2\int _{0}^{\infty }{\frac {\arctan(t/z)}{e^{2\pi t}-1}}\,dt.} Let C be a Hankel contour, meaning a path that begins and ends at the point ∞ on the Riemann sphere, whose unit tangent vector converges to −1 at the start of the path and to 1 at the end, which has winding number 1 around 0, and which does not cross [0, ∞). Fix a branch of log ⁡ ( − t ) {\displaystyle \log(-t)} by taking a branch cut along [0, ∞) and by taking log ⁡ ( − t ) {\displaystyle \log(-t)} to be real when t is on the negative real axis. Assume z is not an integer. Then Hankel's formula for the gamma function is: Γ ( z ) = − 1 2 i sin ⁡ π z ∫ C ( − t ) z − 1 e − t d t , {\displaystyle \Gamma (z)=-{\frac {1}{2i\sin \pi z}}\int _{C}(-t)^{z-1}e^{-t}\,dt,} where ( − t ) z − 1 {\displaystyle (-t)^{z-1}} is interpreted as exp ⁡ ( ( z − 1 ) log ⁡ ( − t ) ) {\displaystyle \exp((z-1)\log(-t))} . The reflection formula leads to the closely related expression 1 Γ ( z ) = i 2 π ∫ C ( − t ) − z e − t d t , {\displaystyle {\frac {1}{\Gamma (z)}}={\frac {i}{2\pi }}\int _{C}(-t)^{-z}e^{-t}\,dt,} again valid whenever z is not an integer. 
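The connection with the Gaussian integral, and the substitution-based representations generally, can be verified with elementary quadrature. The sketch below uses composite Simpson's rule and truncates the integrals at t = 12, beyond which the tails are negligible:

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

# Gamma(1/2) = 2 * integral of exp(-t^2) over [0, inf) = sqrt(pi)
gauss = 2.0 * simpson(lambda t: math.exp(-t * t), 0.0, 12.0)
assert abs(gauss - math.sqrt(math.pi)) < 1e-9

# the t = c x^2 representation with z = 3, c = 1: 2 * integral of t^5 exp(-t^2)
gamma_three = 2.0 * simpson(lambda t: t ** 5 * math.exp(-t * t), 0.0, 12.0)
assert abs(gamma_three - math.gamma(3.0)) < 1e-8
```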
=== Continued fraction representation === The gamma function can also be represented by a sum of two continued fractions: Γ ( z ) = e − 1 2 + 0 − z + 1 z − 1 2 + 2 − z + 2 z − 2 2 + 4 − z + 3 z − 3 2 + 6 − z + 4 z − 4 2 + 8 − z + 5 z − 5 2 + 10 − z + ⋱ + e − 1 z + 0 − z + 0 z + 1 + 1 z + 2 − z + 1 z + 3 + 2 z + 4 − z + 2 z + 5 + 3 z + 6 − ⋱ {\displaystyle {\begin{aligned}\Gamma (z)&={\cfrac {e^{-1}}{2+0-z+1{\cfrac {z-1}{2+2-z+2{\cfrac {z-2}{2+4-z+3{\cfrac {z-3}{2+6-z+4{\cfrac {z-4}{2+8-z+5{\cfrac {z-5}{2+10-z+\ddots }}}}}}}}}}}}\\&+\ {\cfrac {e^{-1}}{z+0-{\cfrac {z+0}{z+1+{\cfrac {1}{z+2-{\cfrac {z+1}{z+3+{\cfrac {2}{z+4-{\cfrac {z+2}{z+5+{\cfrac {3}{z+6-\ddots }}}}}}}}}}}}}}\end{aligned}}} where z ∈ C {\displaystyle z\in \mathbb {C} } . === Fourier series expansion === The logarithm of the gamma function has the following Fourier series expansion for 0 < z < 1 : {\displaystyle 0<z<1:} l o g Γ ⁡ ( z ) = ( 1 2 − z ) ( γ + log ⁡ 2 ) + ( 1 − z ) log ⁡ π − 1 2 log ⁡ sin ⁡ ( π z ) + 1 π ∑ n = 1 ∞ log ⁡ n n sin ⁡ ( 2 π n z ) , {\displaystyle \operatorname {log\Gamma } (z)=\left({\frac {1}{2}}-z\right)(\gamma +\log 2)+(1-z)\log \pi -{\frac {1}{2}}\log \sin(\pi z)+{\frac {1}{\pi }}\sum _{n=1}^{\infty }{\frac {\log n}{n}}\sin(2\pi nz),} which was for a long time attributed to Ernst Kummer, who derived it in 1847. However, Iaroslav Blagouchine discovered that Carl Johan Malmsten first derived this series in 1842. === Raabe's formula === In 1840 Joseph Ludwig Raabe proved that ∫ a a + 1 log ⁡ Γ ( z ) d z = 1 2 log ⁡ 2 π + a log ⁡ a − a , a > 0. {\displaystyle \int _{a}^{a+1}\log \Gamma (z)\,dz={\tfrac {1}{2}}\log 2\pi +a\log a-a,\quad a>0.} In particular, if a = 0 {\displaystyle a=0} then ∫ 0 1 log ⁡ Γ ( z ) d z = 1 2 log ⁡ 2 π . {\displaystyle \int _{0}^{1}\log \Gamma (z)\,dz={\tfrac {1}{2}}\log 2\pi .} The latter can be derived taking the logarithm in the above multiplication formula, which gives an expression for the Riemann sum of the integrand. 
Taking the limit for a → ∞ {\displaystyle a\to \infty } gives the formula. === Pi function === An alternative notation introduced by Gauss is the Π {\displaystyle \Pi } -function, a shifted version of the gamma function: Π ( z ) = Γ ( z + 1 ) = z Γ ( z ) = ∫ 0 ∞ e − t t z d t , {\displaystyle \Pi (z)=\Gamma (z+1)=z\Gamma (z)=\int _{0}^{\infty }e^{-t}t^{z}\,dt,} so that Π ( n ) = n ! {\displaystyle \Pi (n)=n!} for every non-negative integer n {\displaystyle n} . Using the pi function, the reflection formula is: Π ( z ) Π ( − z ) = π z sin ⁡ ( π z ) = 1 sinc ⁡ ( z ) {\displaystyle \Pi (z)\Pi (-z)={\frac {\pi z}{\sin(\pi z)}}={\frac {1}{\operatorname {sinc} (z)}}} using the normalized sinc function; while the multiplication theorem becomes: Π ( z m ) Π ( z − 1 m ) ⋯ Π ( z − m + 1 m ) = ( 2 π ) m − 1 2 m − z − 1 2 Π ( z ) . {\displaystyle \Pi \left({\frac {z}{m}}\right)\,\Pi \left({\frac {z-1}{m}}\right)\cdots \Pi \left({\frac {z-m+1}{m}}\right)=(2\pi )^{\frac {m-1}{2}}m^{-z-{\frac {1}{2}}}\Pi (z)\ .} The shifted reciprocal gamma function is sometimes denoted π ( z ) = 1 Π ( z ) , {\textstyle \pi (z)={\frac {1}{\Pi (z)}}\ ,} an entire function. The volume of an n-ellipsoid with radii r1, …, rn can be expressed as V n ( r 1 , … , r n ) = π n 2 Π ( n 2 ) ∏ k = 1 n r k . {\displaystyle V_{n}(r_{1},\dotsc ,r_{n})={\frac {\pi ^{\frac {n}{2}}}{\Pi \left({\frac {n}{2}}\right)}}\prod _{k=1}^{n}r_{k}.} === Relation to other functions === In the first integral defining the gamma function, the limits of integration are fixed. The upper incomplete gamma function is obtained by allowing the lower limit of integration to vary: Γ ( z , x ) = ∫ x ∞ t z − 1 e − t d t . {\displaystyle \Gamma (z,x)=\int _{x}^{\infty }t^{z-1}e^{-t}dt.} There is a similar lower incomplete gamma function. The gamma function is related to Euler's beta function by the formula B ( z 1 , z 2 ) = ∫ 0 1 t z 1 − 1 ( 1 − t ) z 2 − 1 d t = Γ ( z 1 ) Γ ( z 2 ) Γ ( z 1 + z 2 ) . 
{\displaystyle \mathrm {B} (z_{1},z_{2})=\int _{0}^{1}t^{z_{1}-1}(1-t)^{z_{2}-1}\,dt={\frac {\Gamma (z_{1})\,\Gamma (z_{2})}{\Gamma (z_{1}+z_{2})}}.} The logarithmic derivative of the gamma function is called the digamma function; higher derivatives are the polygamma functions. The analog of the gamma function over a finite field or a finite ring is the Gaussian sums, a type of exponential sum. The reciprocal gamma function is an entire function and has been studied as a specific topic. The gamma function also shows up in an important relation with the Riemann zeta function, ζ ( z ) {\displaystyle \zeta (z)} . π − z 2 Γ ( z 2 ) ζ ( z ) = π − 1 − z 2 Γ ( 1 − z 2 ) ζ ( 1 − z ) . {\displaystyle \pi ^{-{\frac {z}{2}}}\;\Gamma \left({\frac {z}{2}}\right)\zeta (z)=\pi ^{-{\frac {1-z}{2}}}\;\Gamma \left({\frac {1-z}{2}}\right)\;\zeta (1-z).} It also appears in the following formula: ζ ( z ) Γ ( z ) = ∫ 0 ∞ u z e u − 1 d u u , {\displaystyle \zeta (z)\Gamma (z)=\int _{0}^{\infty }{\frac {u^{z}}{e^{u}-1}}\,{\frac {du}{u}},} which is valid only for ℜ ( z ) > 1 {\displaystyle \Re (z)>1} . The logarithm of the gamma function satisfies the following formula due to Lerch: l o g Γ ⁡ ( z ) = ζ H ′ ( 0 , z ) − ζ ′ ( 0 ) , {\displaystyle \operatorname {log\Gamma } (z)=\zeta _{H}'(0,z)-\zeta '(0),} where ζ H {\displaystyle \zeta _{H}} is the Hurwitz zeta function, ζ {\displaystyle \zeta } is the Riemann zeta function and the prime (′) denotes differentiation in the first variable. The gamma function is related to the stretched exponential function. For instance, the moments of that function are ⟨ τ n ⟩ ≡ ∫ 0 ∞ t n − 1 e − ( t τ ) β d t = τ n β Γ ( n β ) . 
{\displaystyle \langle \tau ^{n}\rangle \equiv \int _{0}^{\infty }t^{n-1}\,e^{-\left({\frac {t}{\tau }}\right)^{\beta }}\,\mathrm {d} t={\frac {\tau ^{n}}{\beta }}\Gamma \left({n \over \beta }\right).} === Particular values === Including up to the first 20 digits after the decimal point, some particular values of the gamma function are: Γ ( − 3 2 ) = 4 π 3 ≈ + 2.36327 18012 07354 70306 Γ ( − 1 2 ) = − 2 π ≈ − 3.54490 77018 11032 05459 Γ ( 1 2 ) = π ≈ + 1.77245 38509 05516 02729 Γ ( 1 ) = 0 ! = + 1 Γ ( 3 2 ) = π 2 ≈ + 0.88622 69254 52758 01364 Γ ( 2 ) = 1 ! = + 1 Γ ( 5 2 ) = 3 π 4 ≈ + 1.32934 03881 79137 02047 Γ ( 3 ) = 2 ! = + 2 Γ ( 7 2 ) = 15 π 8 ≈ + 3.32335 09704 47842 55118 Γ ( 4 ) = 3 ! = + 6 {\displaystyle {\begin{array}{rcccl}\Gamma \left(-{\tfrac {3}{2}}\right)&=&{\tfrac {4{\sqrt {\pi }}}{3}}&\approx &+2.36327\,18012\,07354\,70306\\\Gamma \left(-{\tfrac {1}{2}}\right)&=&-2{\sqrt {\pi }}&\approx &-3.54490\,77018\,11032\,05459\\\Gamma \left({\tfrac {1}{2}}\right)&=&{\sqrt {\pi }}&\approx &+1.77245\,38509\,05516\,02729\\\Gamma (1)&=&0!&=&+1\\\Gamma \left({\tfrac {3}{2}}\right)&=&{\tfrac {\sqrt {\pi }}{2}}&\approx &+0.88622\,69254\,52758\,01364\\\Gamma (2)&=&1!&=&+1\\\Gamma \left({\tfrac {5}{2}}\right)&=&{\tfrac {3{\sqrt {\pi }}}{4}}&\approx &+1.32934\,03881\,79137\,02047\\\Gamma (3)&=&2!&=&+2\\\Gamma \left({\tfrac {7}{2}}\right)&=&{\tfrac {15{\sqrt {\pi }}}{8}}&\approx &+3.32335\,09704\,47842\,55118\\\Gamma (4)&=&3!&=&+6\end{array}}} (These numbers can be found in the OEIS. The values presented here are truncated rather than rounded.) The complex-valued gamma function is undefined for non-positive integers, but in these cases the value can be defined in the Riemann sphere as ∞. The reciprocal gamma function is well defined and analytic at these values (and in the entire complex plane): 1 Γ ( − 3 ) = 1 Γ ( − 2 ) = 1 Γ ( − 1 ) = 1 Γ ( 0 ) = 0. 
{\displaystyle {\frac {1}{\Gamma (-3)}}={\frac {1}{\Gamma (-2)}}={\frac {1}{\Gamma (-1)}}={\frac {1}{\Gamma (0)}}=0.} == Log-gamma function == Because the gamma and factorial functions grow so rapidly for moderately large arguments, many computing environments include a function that returns the natural logarithm of the gamma function, often given the name lgamma or lngamma in programming environments or gammaln in spreadsheets. This grows much more slowly, and for combinatorial calculations allows adding and subtracting logarithmic values instead of multiplying and dividing very large values. It is often defined as l o g Γ ⁡ ( z ) = − γ z − log ⁡ z + ∑ k = 1 ∞ [ z k − log ⁡ ( 1 + z k ) ] . {\displaystyle \operatorname {log\Gamma } (z)=-\gamma z-\log z+\sum _{k=1}^{\infty }\left[{\frac {z}{k}}-\log \left(1+{\frac {z}{k}}\right)\right].} The digamma function, which is the derivative of this function, is also commonly seen. In the context of technical and physical applications, e.g. with wave propagation, the functional equation l o g Γ ⁡ ( z ) = l o g Γ ⁡ ( z + 1 ) − log ⁡ z {\displaystyle \operatorname {log\Gamma } (z)=\operatorname {log\Gamma } (z+1)-\log z} is often used since it allows one to determine function values in one strip of width 1 in z from the neighbouring strip. In particular, starting with a good approximation for a z with large real part one may go step by step down to the desired z. Following an indication of Carl Friedrich Gauss, Rocktaeschel (1922) proposed for logΓ(z) an approximation for large Re(z): l o g Γ ⁡ ( z ) ≈ ( z − 1 2 ) log ⁡ z − z + 1 2 log ⁡ ( 2 π ) . {\displaystyle \operatorname {log\Gamma } (z)\approx (z-{\tfrac {1}{2}})\log z-z+{\tfrac {1}{2}}\log(2\pi ).} This can be used to accurately approximate logΓ(z) for z with a smaller Re(z) via (P.E.Böhmer, 1939) l o g Γ ⁡ ( z − m ) = l o g Γ ⁡ ( z ) − ∑ k = 1 m log ⁡ ( z − k ) . 
{\displaystyle \operatorname {log\Gamma } (z-m)=\operatorname {log\Gamma } (z)-\sum _{k=1}^{m}\log(z-k).} A more accurate approximation can be obtained by using more terms from the asymptotic expansions of logΓ(z) and Γ(z), which are based on Stirling's approximation. Γ ( z ) ∼ z z − 1 2 e − z 2 π ( 1 + 1 12 z + 1 288 z 2 − 139 51 840 z 3 − 571 2 488 320 z 4 ) {\displaystyle \Gamma (z)\sim z^{z-{\frac {1}{2}}}e^{-z}{\sqrt {2\pi }}\left(1+{\frac {1}{12z}}+{\frac {1}{288z^{2}}}-{\frac {139}{51\,840z^{3}}}-{\frac {571}{2\,488\,320z^{4}}}\right)} as |z| → ∞ at constant |arg(z)| < π. (See sequences A001163 and A001164 in the OEIS.) In a more "natural" presentation: l o g Γ ⁡ ( z ) = z log ⁡ z − z − 1 2 log ⁡ z + 1 2 log ⁡ 2 π + 1 12 z − 1 360 z 3 + 1 1260 z 5 + o ( 1 z 5 ) {\displaystyle \operatorname {log\Gamma } (z)=z\log z-z-{\tfrac {1}{2}}\log z+{\tfrac {1}{2}}\log 2\pi +{\frac {1}{12z}}-{\frac {1}{360z^{3}}}+{\frac {1}{1260z^{5}}}+o\left({\frac {1}{z^{5}}}\right)} as |z| → ∞ at constant |arg(z)| < π. (See sequences A046968 and A046969 in the OEIS.) The coefficients of the terms with k > 1 of z1−k in the last expansion are simply B k k ( k − 1 ) {\displaystyle {\frac {B_{k}}{k(k-1)}}} where the Bk are the Bernoulli numbers. The gamma function also has Stirling Series (derived by Charles Hermite in 1900) equal to l o g Γ ⁡ ( 1 + x ) = x ( x − 1 ) 2 ! log ⁡ ( 2 ) + x ( x − 1 ) ( x − 2 ) 3 ! ( log ⁡ ( 3 ) − 2 log ⁡ ( 2 ) ) + ⋯ , ℜ ( x ) > 0. {\displaystyle \operatorname {log\Gamma } (1+x)={\frac {x(x-1)}{2!}}\log(2)+{\frac {x(x-1)(x-2)}{3!}}(\log(3)-2\log(2))+\cdots ,\quad \Re (x)>0.} === Properties === The Bohr–Mollerup theorem states that among all functions extending the factorial functions to the positive real numbers, only the gamma function is log-convex, that is, its natural logarithm is convex on the positive real axis. Another characterisation is given by the Wielandt theorem. 
The gamma function is the unique function that simultaneously satisfies Γ ( 1 ) = 1 {\displaystyle \Gamma (1)=1} , Γ ( z + 1 ) = z Γ ( z ) {\displaystyle \Gamma (z+1)=z\Gamma (z)} for all complex numbers z {\displaystyle z} except the non-positive integers, and, for integer n, lim n → ∞ Γ ( n + z ) Γ ( n ) n z = 1 {\textstyle \lim _{n\to \infty }{\frac {\Gamma (n+z)}{\Gamma (n)\;n^{z}}}=1} for all complex numbers z {\displaystyle z} . In a certain sense, the log-gamma function is the more natural form; it makes some intrinsic attributes of the function clearer. A striking example is the Taylor series of logΓ around 1: l o g Γ ⁡ ( z + 1 ) = − γ z + ∑ k = 2 ∞ ζ ( k ) k ( − z ) k ∀ | z | < 1 {\displaystyle \operatorname {log\Gamma } (z+1)=-\gamma z+\sum _{k=2}^{\infty }{\frac {\zeta (k)}{k}}\,(-z)^{k}\qquad \forall \;|z|<1} with ζ(k) denoting the Riemann zeta function at k. So, using the following property: ζ ( s ) Γ ( s ) = ∫ 0 ∞ t s e t − 1 d t t {\displaystyle \zeta (s)\Gamma (s)=\int _{0}^{\infty }{\frac {t^{s}}{e^{t}-1}}\,{\frac {dt}{t}}} an integral representation for the log-gamma function is: l o g Γ ⁡ ( z + 1 ) = − γ z + ∫ 0 ∞ e − z t − 1 + z t t ( e t − 1 ) d t {\displaystyle \operatorname {log\Gamma } (z+1)=-\gamma z+\int _{0}^{\infty }{\frac {e^{-zt}-1+zt}{t\left(e^{t}-1\right)}}\,dt} or, setting z = 1 to obtain an integral for γ, we can replace the γ term with its integral and incorporate that into the above formula, to get: l o g Γ ⁡ ( z + 1 ) = ∫ 0 ∞ e − z t − z e − t − 1 + z t ( e t − 1 ) d t . {\displaystyle \operatorname {log\Gamma } (z+1)=\int _{0}^{\infty }{\frac {e^{-zt}-ze^{-t}-1+z}{t\left(e^{t}-1\right)}}\,dt\,.} There also exist special formulas for the logarithm of the gamma function for rational z. 
For instance, if k {\displaystyle k} and n {\displaystyle n} are integers with k < n {\displaystyle k<n} and k ≠ n / 2 , {\displaystyle k\neq n/2\,,} then l o g Γ ⁡ ( k n ) = ( n − 2 k ) log ⁡ 2 π 2 n + 1 2 { log ⁡ π − log ⁡ sin ⁡ π k n } + 1 π ∑ r = 1 n − 1 γ + log ⁡ r r ⋅ sin ⁡ 2 π r k n − 1 2 π sin ⁡ 2 π k n ⋅ ∫ 0 ∞ e − n x ⋅ log ⁡ x cosh ⁡ x − cos ⁡ ( 2 π k / n ) d x . {\displaystyle {\begin{aligned}\operatorname {log\Gamma } \left({\frac {k}{n}}\right)={}&{\frac {\,(n-2k)\log 2\pi \,}{2n}}+{\frac {1}{2}}\left\{\,\log \pi -\log \sin {\frac {\pi k}{n}}\,\right\}+{\frac {1}{\pi }}\!\sum _{r=1}^{n-1}{\frac {\,\gamma +\log r\,}{r}}\cdot \sin {\frac {\,2\pi rk\,}{n}}\\&{}-{\frac {1}{2\pi }}\sin {\frac {2\pi k}{n}}\cdot \!\int _{0}^{\infty }\!\!{\frac {\,e^{-nx}\!\cdot \log x\,}{\,\cosh x-\cos(2\pi k/n)\,}}\,{\mathrm {d} }x.\end{aligned}}} This formula is sometimes used for numerical computation, since the integrand decreases very quickly. === Integration over log-gamma === The integral ∫ 0 z l o g Γ ⁡ ( x ) d x {\displaystyle \int _{0}^{z}\operatorname {log\Gamma } (x)\,dx} can be expressed in terms of the Barnes G-function (see Barnes G-function for a proof): ∫ 0 z l o g Γ ⁡ ( x ) d x = z 2 log ⁡ ( 2 π ) + z ( 1 − z ) 2 + z l o g Γ ⁡ ( z ) − log ⁡ G ( z + 1 ) {\displaystyle \int _{0}^{z}\operatorname {log\Gamma } (x)\,dx={\frac {z}{2}}\log(2\pi )+{\frac {z(1-z)}{2}}+z\operatorname {log\Gamma } (z)-\log G(z+1)} where Re(z) > −1. It can also be written in terms of the Hurwitz zeta function: ∫ 0 z l o g Γ ⁡ ( x ) d x = z 2 log ⁡ ( 2 π ) + z ( 1 − z ) 2 − ζ ′ ( − 1 ) + ζ ′ ( − 1 , z ) . {\displaystyle \int _{0}^{z}\operatorname {log\Gamma } (x)\,dx={\frac {z}{2}}\log(2\pi )+{\frac {z(1-z)}{2}}-\zeta '(-1)+\zeta '(-1,z).} When z = 1 {\displaystyle z=1} it follows that ∫ 0 1 l o g Γ ⁡ ( x ) d x = 1 2 log ⁡ ( 2 π ) , {\displaystyle \int _{0}^{1}\operatorname {log\Gamma } (x)\,dx={\frac {1}{2}}\log(2\pi ),} and this is a consequence of Raabe's formula as well. O. 
Espinosa and V. Moll derived a similar formula for the integral of the square of l o g Γ {\displaystyle \operatorname {log\Gamma } } : ∫ 0 1 log 2 ⁡ Γ ( x ) d x = γ 2 12 + π 2 48 + 1 3 γ L 1 + 4 3 L 1 2 − ( γ + 2 L 1 ) ζ ′ ( 2 ) π 2 + ζ ′ ′ ( 2 ) 2 π 2 , {\displaystyle \int _{0}^{1}\log ^{2}\Gamma (x)dx={\frac {\gamma ^{2}}{12}}+{\frac {\pi ^{2}}{48}}+{\frac {1}{3}}\gamma L_{1}+{\frac {4}{3}}L_{1}^{2}-\left(\gamma +2L_{1}\right){\frac {\zeta ^{\prime }(2)}{\pi ^{2}}}+{\frac {\zeta ^{\prime \prime }(2)}{2\pi ^{2}}},} where L 1 {\displaystyle L_{1}} is 1 2 log ⁡ ( 2 π ) {\displaystyle {\frac {1}{2}}\log(2\pi )} . D. H. Bailey and his co-authors gave an evaluation for L n := ∫ 0 1 log n ⁡ Γ ( x ) d x {\displaystyle L_{n}:=\int _{0}^{1}\log ^{n}\Gamma (x)\,dx} when n = 1 , 2 {\displaystyle n=1,2} in terms of the Tornheim–Witten zeta function and its derivatives. In addition, it is also known that lim n → ∞ L n n ! = 1. {\displaystyle \lim _{n\to \infty }{\frac {L_{n}}{n!}}=1.} == Approximations == Complex values of the gamma function can be approximated using Stirling's approximation or the Lanczos approximation, Γ ( z ) ∼ 2 π z z − 1 / 2 e − z as z → ∞ in | arg ⁡ ( z ) | < π . {\displaystyle \Gamma (z)\sim {\sqrt {2\pi }}z^{z-1/2}e^{-z}\quad {\hbox{as }}z\to \infty {\hbox{ in }}\left|\arg(z)\right|<\pi .} This is precise in the sense that the ratio of the approximation to the true value approaches 1 in the limit as |z| goes to infinity. The gamma function can be computed to fixed precision for Re ⁡ ( z ) ∈ [ 1 , 2 ] {\displaystyle \operatorname {Re} (z)\in [1,2]} by applying integration by parts to Euler's integral. For any positive number x the gamma function can be written Γ ( z ) = ∫ 0 x e − t t z d t t + ∫ x ∞ e − t t z d t t = x z e − x ∑ n = 0 ∞ x n z ( z + 1 ) ⋯ ( z + n ) + ∫ x ∞ e − t t z d t t . 
{\displaystyle {\begin{aligned}\Gamma (z)&=\int _{0}^{x}e^{-t}t^{z}\,{\frac {dt}{t}}+\int _{x}^{\infty }e^{-t}t^{z}\,{\frac {dt}{t}}\\&=x^{z}e^{-x}\sum _{n=0}^{\infty }{\frac {x^{n}}{z(z+1)\cdots (z+n)}}+\int _{x}^{\infty }e^{-t}t^{z}\,{\frac {dt}{t}}.\end{aligned}}} When Re(z) ∈ [1,2] and x ≥ 1 {\displaystyle x\geq 1} , the absolute value of the last integral is smaller than ( x + 1 ) e − x {\displaystyle (x+1)e^{-x}} . By choosing a large enough x {\displaystyle x} , this last expression can be made smaller than 2 − N {\displaystyle 2^{-N}} for any desired value N {\displaystyle N} . Thus, the gamma function can be evaluated to N {\displaystyle N} bits of precision with the above series. A fast algorithm for calculating the Euler gamma function for any algebraic argument (including rational) was constructed by E. A. Karatsuba. For arguments that are integer multiples of 1/24, the gamma function can also be evaluated quickly using arithmetic–geometric mean iterations (see particular values of the gamma function). == Practical implementations == Unlike many other special functions, such as the normal distribution, the gamma function Γ ( z ) {\displaystyle \Gamma (z)} admits no implementation that is at once obvious, fast, and accurate, so it is worth weighing the available approaches. When speed matters more than accuracy, published tables of Γ ( z ) {\displaystyle \Gamma (z)} are readily found, for example in the Online Wiley Library. Such tables may be used with linear interpolation; greater accuracy is obtainable with cubic interpolation at the cost of more computational overhead.
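Alternatively, the truncated series described above is simple to implement directly. In the sketch below the cutoff x = 40 is an assumed parameter choice, made so that the tail bound (x + 1)e⁻ˣ is comparable to double-precision rounding error; 200 terms are more than enough for the sum to converge at that cutoff:

```python
import math

def gamma_series(z, x=40.0, terms=200):
    """Gamma(z) for Re z in [1, 2] via the truncated series; the dropped
    tail integral is bounded by (x + 1) * exp(-x), about 2e-16 for x = 40."""
    term = 1.0 / z              # n = 0 term of the sum
    total = term
    for n in range(1, terms):
        term *= x / (z + n)     # term_n = term_{n-1} * x / (z + n)
        total += term
    return x ** z * math.exp(-x) * total

for z in (1.0, 1.25, 1.5, 1.761, 2.0):
    assert abs(gamma_series(z) - math.gamma(z)) < 1e-12 * math.gamma(z)
```

All terms of the sum are positive, so no cancellation occurs; accuracy is limited only by the tail bound and accumulated rounding.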
Since Γ ( z ) {\displaystyle \Gamma (z)} tables are usually published for argument values between 1 and 2, the property Γ ( z + 1 ) = z Γ ( z ) {\displaystyle \Gamma (z+1)=z\ \Gamma (z)} may be used to translate all real values z < 1 {\displaystyle z<1} and z > 2 {\displaystyle z>2} into the range 1 ≤ z ≤ 2 {\displaystyle 1\leq z\leq 2} , so that only tabulated values of z {\displaystyle z} between 1 and 2 need be used. If interpolation tables are not desirable, then the Lanczos approximation mentioned above works well for 1 to 2 digits of accuracy for small, commonly used values of z. If the Lanczos approximation is not sufficiently accurate, Stirling's formula for the gamma function may be used. == Applications == One author describes the gamma function as "Arguably, the most common special function, or the least 'special' of them. The other transcendental functions […] are called 'special' because you could conceivably avoid some of them by staying away from many specialized mathematical topics. On the other hand, the gamma function Γ(z) is most difficult to avoid." === Integration problems === The gamma function finds application in such diverse areas as quantum physics, astrophysics and fluid dynamics. The gamma distribution, which is formulated in terms of the gamma function, is used in statistics to model a wide range of processes; for example, the time between occurrences of earthquakes. The primary reason for the gamma function's usefulness in such contexts is the prevalence of expressions of the type f ( t ) e − g ( t ) {\displaystyle f(t)e^{-g(t)}} which describe processes that decay exponentially in time or space. Integrals of such expressions can occasionally be solved in terms of the gamma function when no elementary solution exists.
For example, if f is a power function and g is a linear function, a simple change of variables u := a ⋅ t {\displaystyle u:=a\cdot t} gives the evaluation ∫ 0 ∞ t b e − a t d t = 1 a b ∫ 0 ∞ u b e − u d ( u a ) = Γ ( b + 1 ) a b + 1 . {\displaystyle \int _{0}^{\infty }t^{b}e^{-at}\,dt={\frac {1}{a^{b}}}\int _{0}^{\infty }u^{b}e^{-u}d\left({\frac {u}{a}}\right)={\frac {\Gamma (b+1)}{a^{b+1}}}.} The fact that the integration is performed along the entire positive real line might signify that the gamma function describes the cumulation of a time-dependent process that continues indefinitely, or the value might be the total of a distribution in an infinite space. It is of course frequently useful to take limits of integration other than 0 and ∞ to describe the cumulation of a finite process, in which case the ordinary gamma function is no longer a solution; the solution is then called an incomplete gamma function. (The ordinary gamma function, obtained by integrating across the entire positive real line, is sometimes called the complete gamma function for contrast.) An important category of exponentially decaying functions is that of Gaussian functions a e − ( x − b ) 2 c 2 {\displaystyle ae^{-{\frac {(x-b)^{2}}{c^{2}}}}} and integrals thereof, such as the error function. There are many interrelations between these functions and the gamma function; notably, the factor π {\displaystyle {\sqrt {\pi }}} obtained by evaluating Γ ( 1 2 ) {\textstyle \Gamma \left({\frac {1}{2}}\right)} is the "same" as that found in the normalizing factor of the error function and the normal distribution. The integrals discussed so far involve transcendental functions, but the gamma function also arises from integrals of purely algebraic functions. In particular, the arc lengths of ellipses and of the lemniscate, which are curves defined by algebraic equations, are given by elliptic integrals that in special cases can be evaluated in terms of the gamma function. 
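This evaluation can be confirmed by elementary quadrature. In the sketch below the exponent and rate are arbitrary illustrative parameters (the exponent need not be an integer), and the integral is truncated at t = 60, where the tail is negligible:

```python
import math

def simpson(f, a, b, n=40000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

b_exp, a_rate = 2.5, 1.7   # illustrative parameters for t^b * exp(-a t)
numeric = simpson(lambda t: t ** b_exp * math.exp(-a_rate * t), 0.0, 60.0)
closed = math.gamma(b_exp + 1.0) / a_rate ** (b_exp + 1.0)
assert abs(numeric - closed) < 1e-8
```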
The gamma function can also be used to calculate "volume" and "area" of n-dimensional hyperspheres. === Calculating products === The gamma function's ability to generalize factorial products immediately leads to applications in many areas of mathematics; in combinatorics, and by extension in areas such as probability theory and the calculation of power series. Many expressions involving products of successive integers can be written as some combination of factorials, the most important example perhaps being that of the binomial coefficient. For example, for any complex numbers z and n, with |z| < 1, we can write ( 1 + z ) n = ∑ k = 0 ∞ Γ ( n + 1 ) k ! Γ ( n − k + 1 ) z k , {\displaystyle (1+z)^{n}=\sum _{k=0}^{\infty }{\frac {\Gamma (n+1)}{k!\Gamma (n-k+1)}}z^{k},} which closely resembles the binomial coefficient when n is a non-negative integer, ( 1 + z ) n = ∑ k = 0 n n ! k ! ( n − k ) ! z k = ∑ k = 0 n ( n k ) z k . {\displaystyle (1+z)^{n}=\sum _{k=0}^{n}{\frac {n!}{k!(n-k)!}}z^{k}=\sum _{k=0}^{n}{\binom {n}{k}}z^{k}.} The example of binomial coefficients motivates why the properties of the gamma function when extended to negative numbers are natural. A binomial coefficient gives the number of ways to choose k elements from a set of n elements; if k > n, there are of course no ways. If k > n, (n − k)! is the factorial of a negative integer and hence infinite if we use the gamma function definition of factorials—dividing by infinity gives the expected value of 0. We can replace the factorial by a gamma function to extend any such formula to the complex numbers. Generally, this works for any product wherein each factor is a rational function of the index variable, by factoring the rational function into linear expressions. If P and Q are monic polynomials of degree m and n with respective roots p1, …, pm and q1, …, qn, we have ∏ i = a b P ( i ) Q ( i ) = ( ∏ j = 1 m Γ ( b − p j + 1 ) Γ ( a − p j ) ) ( ∏ k = 1 n Γ ( a − q k ) Γ ( b − q k + 1 ) ) . 
{\displaystyle \prod _{i=a}^{b}{\frac {P(i)}{Q(i)}}=\left(\prod _{j=1}^{m}{\frac {\Gamma (b-p_{j}+1)}{\Gamma (a-p_{j})}}\right)\left(\prod _{k=1}^{n}{\frac {\Gamma (a-q_{k})}{\Gamma (b-q_{k}+1)}}\right).} If we have a way to calculate the gamma function numerically, it is very simple to calculate numerical values of such products. The number of gamma functions in the right-hand side depends only on the degree of the polynomials, so it does not matter whether b − a equals 5 or 105. By taking the appropriate limits, the equation can also be made to hold even when the left-hand product contains zeros or poles. By taking limits, certain rational products with infinitely many factors can be evaluated in terms of the gamma function as well. Due to the Weierstrass factorization theorem, analytic functions can be written as infinite products, and these can sometimes be represented as finite products or quotients of the gamma function. We have already seen one striking example: the reflection formula essentially represents the sine function as the product of two gamma functions. Starting from this formula, the exponential function as well as all the trigonometric and hyperbolic functions can be expressed in terms of the gamma function. More functions yet, including the hypergeometric function and special cases thereof, can be represented by means of complex contour integrals of products and quotients of the gamma function, called Mellin–Barnes integrals. === Analytic number theory === An application of the gamma function is the study of the Riemann zeta function. A fundamental property of the Riemann zeta function is its functional equation: Γ ( s 2 ) ζ ( s ) π − s 2 = Γ ( 1 − s 2 ) ζ ( 1 − s ) π − 1 − s 2 . 
{\displaystyle \Gamma \left({\frac {s}{2}}\right)\zeta (s)\pi ^{-{\frac {s}{2}}}=\Gamma \left({\frac {1-s}{2}}\right)\zeta (1-s)\pi ^{-{\frac {1-s}{2}}}.} Among other things, this provides an explicit form for the analytic continuation of the zeta function to a meromorphic function in the complex plane and leads to an immediate proof that the zeta function has infinitely many so-called "trivial" zeros on the real line. Borwein et al. call this formula "one of the most beautiful findings in mathematics". Another contender for that title might be ζ ( s ) Γ ( s ) = ∫ 0 ∞ t s e t − 1 d t t . {\displaystyle \zeta (s)\;\Gamma (s)=\int _{0}^{\infty }{\frac {t^{s}}{e^{t}-1}}\,{\frac {dt}{t}}.} Both formulas were derived by Bernhard Riemann in his seminal 1859 paper "Ueber die Anzahl der Primzahlen unter einer gegebenen Größe" ("On the Number of Primes Less Than a Given Magnitude"), one of the milestones in the development of analytic number theory—the branch of mathematics that studies prime numbers using the tools of mathematical analysis. == History == The gamma function has caught the interest of some of the most prominent mathematicians of all time. Its history, notably documented by Philip J. Davis in an article that won him the 1963 Chauvenet Prize, reflects many of the major developments within mathematics since the 18th century. In the words of Davis, "each generation has found something of interest to say about the gamma function. Perhaps the next generation will also." === 18th century: Euler and Stirling === The problem of extending the factorial to non-integer arguments was apparently first considered by Daniel Bernoulli and Christian Goldbach in the 1720s. In particular, in a letter from Bernoulli to Goldbach dated 6 October 1729 Bernoulli introduced the product representation x ! 
= lim n → ∞ ( n + 1 + x 2 ) x − 1 ∏ k = 1 n k + 1 k + x {\displaystyle x!=\lim _{n\to \infty }\left(n+1+{\frac {x}{2}}\right)^{x-1}\prod _{k=1}^{n}{\frac {k+1}{k+x}}} which is well defined for real values of x other than the negative integers. Leonhard Euler later gave two different definitions: the first was not his integral but an infinite product that is well defined for all complex numbers n other than the negative integers, n ! = ∏ k = 1 ∞ ( 1 + 1 k ) n 1 + n k , {\displaystyle n!=\prod _{k=1}^{\infty }{\frac {\left(1+{\frac {1}{k}}\right)^{n}}{1+{\frac {n}{k}}}}\,,} of which he informed Goldbach in a letter dated 13 October 1729. He wrote to Goldbach again on 8 January 1730, to announce his discovery of the integral representation n ! = ∫ 0 1 ( − log ⁡ s ) n d s , {\displaystyle n!=\int _{0}^{1}(-\log s)^{n}\,ds\,,} which is valid when the real part of the complex number n is strictly greater than −1 (i.e., ℜ ( n ) > − 1 {\displaystyle \Re (n)>-1} ). By the change of variables t = −ln s, this becomes the familiar Euler integral. Euler published his results in the paper "De progressionibus transcendentibus seu quarum termini generales algebraice dari nequeunt" ("On transcendental progressions, that is, those whose general terms cannot be given algebraically"), submitted to the St. Petersburg Academy on 28 November 1729. Euler further discovered some of the gamma function's important functional properties, including the reflection formula. James Stirling, a contemporary of Euler, also attempted to find a continuous expression for the factorial and came up with what is now known as Stirling's formula. Although Stirling's formula gives a good estimate of n!, also for non-integers, it does not provide the exact value. Extensions of his formula that correct the error were given by Stirling himself and by Jacques Philippe Marie Binet. === 19th century: Gauss, Weierstrass and Legendre === Carl Friedrich Gauss rewrote Euler's product as Γ ( z ) = lim m → ∞ m z m ! 
z ( z + 1 ) ( z + 2 ) ⋯ ( z + m ) {\displaystyle \Gamma (z)=\lim _{m\to \infty }{\frac {m^{z}m!}{z(z+1)(z+2)\cdots (z+m)}}} and used this formula to discover new properties of the gamma function. Although Euler was a pioneer in the theory of complex variables, he does not appear to have considered the factorial of a complex number, as instead Gauss first did. Gauss also proved the multiplication theorem of the gamma function and investigated the connection between the gamma function and elliptic integrals. Karl Weierstrass further established the role of the gamma function in complex analysis, starting from yet another product representation, Γ ( z ) = e − γ z z ∏ k = 1 ∞ ( 1 + z k ) − 1 e z k , {\displaystyle \Gamma (z)={\frac {e^{-\gamma z}}{z}}\prod _{k=1}^{\infty }\left(1+{\frac {z}{k}}\right)^{-1}e^{\frac {z}{k}},} where γ is the Euler–Mascheroni constant. Weierstrass originally wrote his product as one for ⁠1/Γ⁠, in which case it is taken over the function's zeros rather than its poles. Inspired by this result, he proved what is known as the Weierstrass factorization theorem—that any entire function can be written as a product over its zeros in the complex plane; a generalization of the fundamental theorem of algebra. The name gamma function and the symbol Γ were introduced by Adrien-Marie Legendre around 1811; Legendre also rewrote Euler's integral definition in its modern form. Although the symbol is an upper-case Greek gamma, there is no accepted standard for whether the function name should be written "gamma function" or "Gamma function" (some authors simply write "Γ-function"). The alternative "pi function" notation Π(z) = z! due to Gauss is sometimes encountered in older literature, but Legendre's notation is dominant in modern works. It is justified to ask why we distinguish between the "ordinary factorial" and the gamma function by using distinct symbols, and particularly why the gamma function should be normalized to Γ(n + 1) = n! 
instead of simply using "Γ(n) = n!". Consider that the notation for exponents, x^n, has been generalized from integers to complex numbers x^z without any change. Legendre's motivation for the normalization is not known; the normalization has been criticized as cumbersome by some (the 20th-century mathematician Cornelius Lanczos, for example, called it "void of any rationality" and would instead use z!). Legendre's normalization does simplify some formulae, but complicates others. From a modern point of view, the Legendre normalization of the gamma function is the integral of the additive character e^−x against the multiplicative character x^z with respect to the Haar measure d x x {\textstyle {\frac {dx}{x}}} on the Lie group R+. Thus this normalization makes it clearer that the gamma function is a continuous analogue of a Gauss sum. === 19th–20th centuries: characterizing the gamma function === It is somewhat problematic that a large number of definitions have been given for the gamma function. Although they describe the same function, it is not entirely straightforward to prove the equivalence. Stirling never proved that his extended formula corresponds exactly to Euler's gamma function; a proof was first given by Charles Hermite in 1900. Instead of finding a specialized proof for each formula, it would be desirable to have a general method of identifying the gamma function. One way to prove equivalence would be to find a differential equation that characterizes the gamma function. Most special functions in applied mathematics arise as solutions to differential equations, whose solutions are unique. However, the gamma function does not appear to satisfy any simple differential equation. Otto Hölder proved in 1887 that the gamma function at least does not satisfy any algebraic differential equation by showing that a solution to such an equation could not satisfy the gamma function's recurrence formula, making it a transcendentally transcendental function. 
This result is known as Hölder's theorem. A definite and generally applicable characterization of the gamma function was not given until 1922. Harald Bohr and Johannes Mollerup then proved what is known as the Bohr–Mollerup theorem: that the gamma function is the unique solution to the factorial recurrence relation that is positive and logarithmically convex for positive z and whose value at 1 is 1 (a function is logarithmically convex if its logarithm is convex). Another characterisation is given by the Wielandt theorem. The Bohr–Mollerup theorem is useful because it is relatively easy to prove logarithmic convexity for any of the different formulas used to define the gamma function. Taking things further, instead of defining the gamma function by any particular formula, we can choose the conditions of the Bohr–Mollerup theorem as the definition, and then pick any formula we like that satisfies the conditions as a starting point for studying the gamma function. This approach was used by the Bourbaki group. Borwein & Corless review three centuries of work on the gamma function. === Reference tables and software === Although the gamma function can be calculated virtually as easily as any mathematically simpler function with a modern computer—even with a programmable pocket calculator—this was of course not always the case. Until the mid-20th century, mathematicians relied on hand-made tables; in the case of the gamma function, notably a table computed by Gauss in 1813 and one computed by Legendre in 1825. Tables of complex values of the gamma function, as well as hand-drawn graphs, were given in Tables of Functions With Formulas and Curves by Jahnke and Emde, first published in Germany in 1909. According to Michael Berry, "the publication in J&E of a three-dimensional graph showing the poles of the gamma function in the complex plane acquired an almost iconic status." 
There was in fact little practical need for anything but real values of the gamma function until the 1930s, when applications for the complex gamma function were discovered in theoretical physics. As electronic computers became available for the production of tables in the 1950s, several extensive tables for the complex gamma function were published to meet the demand, including a table accurate to 12 decimal places from the U.S. National Bureau of Standards. Double-precision floating-point implementations of the gamma function and its logarithm are now available in most scientific computing software and special functions libraries, for example TK Solver, Matlab, GNU Octave, and the GNU Scientific Library. The gamma function was also added to the C standard library (math.h). Arbitrary-precision implementations are available in most computer algebra systems, such as Mathematica and Maple. PARI/GP, MPFR and MPFUN contain free arbitrary-precision implementations. In some software calculators, e.g. Windows Calculator and GNOME Calculator, the factorial function returns Γ(x + 1) when the input x is a non-integer value. == See also == == Notes == This article incorporates material from the Citizendium article "Gamma function", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL. == Further reading == == External links == NIST Digital Library of Mathematical Functions:Gamma function Pascal Sebah and Xavier Gourdon. Introduction to the Gamma Function. In PostScript and HTML formats. C++ reference for std::tgamma Examples of problems involving the gamma function can be found at Exampleproblems.com. "Gamma function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Wolfram gamma function evaluator (arbitrary precision) "Gamma". Wolfram Functions Site. Volume of n-Spheres and the Gamma Function at MathPages
Wikipedia/Gamma_function
In abstract algebra, a subset S {\displaystyle S} of a field L {\displaystyle L} is algebraically independent over a subfield K {\displaystyle K} if the elements of S {\displaystyle S} do not satisfy any non-trivial polynomial equation with coefficients in K {\displaystyle K} . In particular, a one element set { α } {\displaystyle \{\alpha \}} is algebraically independent over K {\displaystyle K} if and only if α {\displaystyle \alpha } is transcendental over K {\displaystyle K} . In general, all the elements of an algebraically independent set S {\displaystyle S} over K {\displaystyle K} are by necessity transcendental over K {\displaystyle K} , and over all of the field extensions over K {\displaystyle K} generated by the remaining elements of S {\displaystyle S} . == Example == The real numbers π {\displaystyle {\sqrt {\pi }}} and 2 π + 1 {\displaystyle 2\pi +1} are transcendental numbers: they are not the roots of any nontrivial polynomial whose coefficients are rational numbers. Thus, the sets { π } {\displaystyle \{{\sqrt {\pi }}\}} and { 2 π + 1 } {\displaystyle \{2\pi +1\}} are both algebraically independent over the rational numbers. However, the set { π , 2 π + 1 } {\displaystyle \{{\sqrt {\pi }},2\pi +1\}} is not algebraically independent over the rational numbers Q {\displaystyle \mathbb {Q} } , because the nontrivial polynomial P ( x , y ) = 2 x 2 − y + 1 {\displaystyle P(x,y)=2x^{2}-y+1} is zero when x = π {\displaystyle x={\sqrt {\pi }}} and y = 2 π + 1 {\displaystyle y=2\pi +1} . == Algebraic independence of known constants == Although π and e are transcendental, it is not known whether { π , e } {\displaystyle \{\pi ,e\}} is algebraically independent over Q {\displaystyle \mathbb {Q} } . In fact, it is not even known whether π + e {\displaystyle \pi +e} is irrational. 
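Returning to the example above (an illustration added here, not from the article): the polynomial P(x, y) = 2x² − y + 1 witnessing the dependence of {√π, 2π + 1} can be checked numerically in floating point:

```python
import math

def P(x, y):
    """The witnessing polynomial 2x² - y + 1, with rational coefficients."""
    return 2 * x**2 - y + 1

# P vanishes at (√π, 2π + 1), so {√π, 2π + 1} is algebraically dependent over Q.
value = P(math.sqrt(math.pi), 2 * math.pi + 1)
assert abs(value) < 1e-12
```

Of course a floating-point check is only suggestive; the exact cancellation 2(√π)² − (2π + 1) + 1 = 0 is what proves dependence.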
Nesterenko proved in 1996 that: the numbers π {\displaystyle \pi } , e π {\displaystyle e^{\pi }} , and Γ ( 1 / 4 ) {\displaystyle \Gamma (1/4)} , where Γ {\displaystyle \Gamma } is the gamma function, are algebraically independent over Q {\displaystyle \mathbb {Q} } ; the numbers e π 3 {\displaystyle e^{\pi {\sqrt {3}}}} and Γ ( 1 / 3 ) {\displaystyle \Gamma (1/3)} are algebraically independent over Q {\displaystyle \mathbb {Q} } ; for all positive integers n {\displaystyle n} , the number e π n {\displaystyle e^{\pi {\sqrt {n}}}} is algebraically independent over Q {\displaystyle \mathbb {Q} } . == Results and open problems == The Lindemann–Weierstrass theorem can often be used to prove that some sets are algebraically independent over Q {\displaystyle \mathbb {Q} } . It states that whenever α 1 , … , α n {\displaystyle \alpha _{1},\ldots ,\alpha _{n}} are algebraic numbers that are linearly independent over Q {\displaystyle \mathbb {Q} } , then e α 1 , … , e α n {\displaystyle e^{\alpha _{1}},\ldots ,e^{\alpha _{n}}} are also algebraically independent over Q {\displaystyle \mathbb {Q} } . The Schanuel conjecture would establish the algebraic independence of many numbers, including π and e, but remains unproven: Let { z 1 , . . . , z n } {\displaystyle \{z_{1},...,z_{n}\}} be any set of n {\displaystyle n} complex numbers that are linearly independent over Q {\displaystyle \mathbb {Q} } . The field extension Q ( z 1 , . . . , z n , e z 1 , . . . , e z n ) {\displaystyle \mathbb {Q} (z_{1},...,z_{n},e^{z_{1}},...,e^{z_{n}})} has transcendence degree at least n {\displaystyle n} over Q {\displaystyle \mathbb {Q} } . == Algebraic matroids == Given a field extension L / K {\displaystyle L/K} that is not algebraic, Zorn's lemma can be used to show that there always exists a maximal algebraically independent subset of L {\displaystyle L} over K {\displaystyle K} . 
Further, all the maximal algebraically independent subsets have the same cardinality, known as the transcendence degree of the extension. For every finite set S {\displaystyle S} of elements of L {\displaystyle L} , the algebraically independent subsets of S {\displaystyle S} satisfy the axioms that define the independent sets of a matroid. In this matroid, the rank of a set of elements is its transcendence degree, and the flat generated by a set T {\displaystyle T} of elements is the intersection of L {\displaystyle L} with the algebraic closure of the field K ( T ) {\displaystyle K(T)} . A matroid that can be generated in this way is called an algebraic matroid. No good characterization of algebraic matroids is known, but certain matroids are known to be non-algebraic; the smallest is the Vámos matroid. Many finite matroids may be represented by a matrix over a field K {\displaystyle K} , in which the matroid elements correspond to matrix columns, and a set of elements is independent if the corresponding set of columns is linearly independent. Every matroid with a linear representation of this type may also be represented as an algebraic matroid, by choosing an indeterminate for each row of the matrix, and by using the matrix coefficients within each column to assign each matroid element a linear combination of these transcendentals. The converse is false: not every algebraic matroid has a linear representation. == See also == Linear independence Transcendental number Lindemann–Weierstrass theorem Schanuel's conjecture == References == == External links == Chen, Johnny. "Algebraically Independent". MathWorld.
Wikipedia/Algebraically_independent
In mathematics, a rational function is any function that can be defined by a rational fraction, which is an algebraic fraction such that both the numerator and the denominator are polynomials. The coefficients of the polynomials need not be rational numbers; they may be taken in any field K. In this case, one speaks of a rational function and a rational fraction over K. The values of the variables may be taken in any field L containing K. Then the domain of the function is the set of the values of the variables for which the denominator is not zero, and the codomain is L. The set of rational functions over a field K is a field, the field of fractions of the ring of the polynomial functions over K. == Definitions == A function f {\displaystyle f} is called a rational function if it can be written in the form f ( x ) = P ( x ) Q ( x ) {\displaystyle f(x)={\frac {P(x)}{Q(x)}}} where P {\displaystyle P} and Q {\displaystyle Q} are polynomial functions of x {\displaystyle x} and Q {\displaystyle Q} is not the zero function. The domain of f {\displaystyle f} is the set of all values of x {\displaystyle x} for which the denominator Q ( x ) {\displaystyle Q(x)} is not zero. However, if P {\displaystyle \textstyle P} and Q {\displaystyle \textstyle Q} have a non-constant polynomial greatest common divisor R {\displaystyle \textstyle R} , then setting P = P 1 R {\displaystyle \textstyle P=P_{1}R} and Q = Q 1 R {\displaystyle \textstyle Q=Q_{1}R} produces a rational function f 1 ( x ) = P 1 ( x ) Q 1 ( x ) , {\displaystyle f_{1}(x)={\frac {P_{1}(x)}{Q_{1}(x)}},} which may have a larger domain than f {\displaystyle f} , and is equal to f {\displaystyle f} on the domain of f . {\displaystyle f.} It is a common usage to identify f {\displaystyle f} and f 1 {\displaystyle f_{1}} , that is to extend "by continuity" the domain of f {\displaystyle f} to that of f 1 . 
{\displaystyle f_{1}.} Indeed, one can define a rational fraction as an equivalence class of fractions of polynomials, where two fractions A ( x ) B ( x ) {\displaystyle \textstyle {\frac {A(x)}{B(x)}}} and C ( x ) D ( x ) {\displaystyle \textstyle {\frac {C(x)}{D(x)}}} are considered equivalent if A ( x ) D ( x ) = B ( x ) C ( x ) {\displaystyle A(x)D(x)=B(x)C(x)} . In this case P ( x ) Q ( x ) {\displaystyle \textstyle {\frac {P(x)}{Q(x)}}} is equivalent to P 1 ( x ) Q 1 ( x ) . {\displaystyle \textstyle {\frac {P_{1}(x)}{Q_{1}(x)}}.} A proper rational function is a rational function in which the degree of P ( x ) {\displaystyle P(x)} is less than the degree of Q ( x ) {\displaystyle Q(x)} and both are real polynomials, named by analogy to a proper fraction in Q . {\displaystyle \mathbb {Q} .} === Complex rational functions === In complex analysis, a rational function f ( z ) = P ( z ) Q ( z ) {\displaystyle f(z)={\frac {P(z)}{Q(z)}}} is the ratio of two polynomials with complex coefficients, where Q is not the zero polynomial and P and Q have no common factor (this avoids f taking the indeterminate value 0/0). The domain of f is the set of complex numbers such that Q ( z ) ≠ 0 {\displaystyle Q(z)\neq 0} . Every rational function can be naturally extended to a function whose domain and range are the whole Riemann sphere (complex projective line). A complex rational function with degree one is a Möbius transformation. Rational functions are representative examples of meromorphic functions. Iteration of rational functions on the Riemann sphere (i.e. a rational mapping) creates discrete dynamical systems, whose study centers on the associated Julia sets. === Degree === There are several non-equivalent definitions of the degree of a rational function. Most commonly, the degree of a rational function is the maximum of the degrees of its constituent polynomials P and Q, when the fraction is reduced to lowest terms. 
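To illustrate the "reduced to lowest terms" proviso in that definition, here is a small sketch (our own, using exact rational arithmetic; all helper names are ours) that cancels the polynomial greatest common divisor before taking the maximum of the degrees:

```python
from fractions import Fraction

F = Fraction

def trim(p):
    """Drop trailing zero coefficients; p lists coefficients low-to-high."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def poly_mod(a, b):
    """Remainder of polynomial division a mod b (exact rational arithmetic)."""
    a, b = trim(a[:]), trim(b)
    while len(a) >= len(b) and a != [F(0)]:
        factor = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + shift] -= factor * c
        a = trim(a)
    return a

def poly_gcd(a, b):
    """Euclidean algorithm on polynomials."""
    while trim(b) != [F(0)]:
        a, b = b, poly_mod(a, b)
    return trim(a)

def rational_degree(p, q):
    """max(deg p, deg q) after p/q is reduced to lowest terms."""
    g = poly_gcd(p, q)
    cancelled = len(g) - 1          # degree of the common factor
    return max(len(trim(p)), len(trim(q))) - 1 - cancelled

# (x² - 1)/(x - 1) reduces to x + 1, so its degree is 1, not 2.
num = [F(-1), F(0), F(1)]   # x² - 1
den = [F(-1), F(1)]         # x - 1
assert rational_degree(num, den) == 1
```

Without the cancellation step, (x² − 1)/(x − 1) would incorrectly be assigned degree 2.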
If the degree of f is d, then the equation f ( z ) = w {\displaystyle f(z)=w\,} has d distinct solutions in z except for certain values of w, called critical values, where two or more solutions coincide or where some solution is rejected at infinity (that is, when the degree of the equation decreases after having cleared the denominator). The degree of the graph of a rational function is not the degree as defined above: it is the maximum of the degree of the numerator and one plus the degree of the denominator. In some contexts, such as in asymptotic analysis, the degree of a rational function is the difference between the degrees of the numerator and the denominator.: §13.6.1 : Chapter IV  In network synthesis and network analysis, a rational function of degree two (that is, the ratio of two polynomials of degree at most two) is often called a biquadratic function. == Examples == The rational function f ( x ) = x 3 − 2 x 2 ( x 2 − 5 ) {\displaystyle f(x)={\frac {x^{3}-2x}{2(x^{2}-5)}}} is not defined at x 2 = 5 ⇔ x = ± 5 . {\displaystyle x^{2}=5\Leftrightarrow x=\pm {\sqrt {5}}.} It is asymptotic to x 2 {\displaystyle {\tfrac {x}{2}}} as x → ∞ . {\displaystyle x\to \infty .} The rational function f ( x ) = x 2 + 2 x 2 + 1 {\displaystyle f(x)={\frac {x^{2}+2}{x^{2}+1}}} is defined for all real numbers, but not for all complex numbers, since if x were a square root of − 1 {\displaystyle -1} (i.e. the imaginary unit or its negative), then formal evaluation would lead to division by zero: f ( i ) = i 2 + 2 i 2 + 1 = − 1 + 2 − 1 + 1 = 1 0 , {\displaystyle f(i)={\frac {i^{2}+2}{i^{2}+1}}={\frac {-1+2}{-1+1}}={\frac {1}{0}},} which is undefined. A constant function such as f(x) = π is a rational function since constants are polynomials. The function itself is rational, even though the value of f(x) is irrational for all x. Every polynomial function f ( x ) = P ( x ) {\displaystyle f(x)=P(x)} is a rational function with Q ( x ) = 1. 
{\displaystyle Q(x)=1.} A function that cannot be written in this form, such as f ( x ) = sin ⁡ ( x ) , {\displaystyle f(x)=\sin(x),} is not a rational function. However, the adjective "irrational" is not generally used for functions. Every Laurent polynomial can be written as a rational function while the converse is not necessarily true, i.e., the ring of Laurent polynomials is a subring of the rational functions. The rational function f ( x ) = x x {\displaystyle f(x)={\tfrac {x}{x}}} is equal to 1 for all x except 0, where there is a removable singularity. The sum, product, or quotient (excepting division by the zero polynomial) of two rational functions is itself a rational function. However, the process of reduction to standard form may inadvertently result in the removal of such singularities unless care is taken. Using the definition of rational functions as equivalence classes gets around this, since x/x is equivalent to 1/1. == Taylor series == The coefficients of a Taylor series of any rational function satisfy a linear recurrence relation, which can be found by equating the rational function to a Taylor series with indeterminate coefficients, and collecting like terms after clearing the denominator. For example, 1 x 2 − x + 2 = ∑ k = 0 ∞ a k x k . {\displaystyle {\frac {1}{x^{2}-x+2}}=\sum _{k=0}^{\infty }a_{k}x^{k}.} Multiplying through by the denominator and distributing, 1 = ( x 2 − x + 2 ) ∑ k = 0 ∞ a k x k {\displaystyle 1=(x^{2}-x+2)\sum _{k=0}^{\infty }a_{k}x^{k}} 1 = ∑ k = 0 ∞ a k x k + 2 − ∑ k = 0 ∞ a k x k + 1 + 2 ∑ k = 0 ∞ a k x k . {\displaystyle 1=\sum _{k=0}^{\infty }a_{k}x^{k+2}-\sum _{k=0}^{\infty }a_{k}x^{k+1}+2\sum _{k=0}^{\infty }a_{k}x^{k}.} After adjusting the indices of the sums to get the same powers of x, we get 1 = ∑ k = 2 ∞ a k − 2 x k − ∑ k = 1 ∞ a k − 1 x k + 2 ∑ k = 0 ∞ a k x k . 
{\displaystyle 1=\sum _{k=2}^{\infty }a_{k-2}x^{k}-\sum _{k=1}^{\infty }a_{k-1}x^{k}+2\sum _{k=0}^{\infty }a_{k}x^{k}.} Combining like terms gives 1 = 2 a 0 + ( 2 a 1 − a 0 ) x + ∑ k = 2 ∞ ( a k − 2 − a k − 1 + 2 a k ) x k . {\displaystyle 1=2a_{0}+(2a_{1}-a_{0})x+\sum _{k=2}^{\infty }(a_{k-2}-a_{k-1}+2a_{k})x^{k}.} Since this holds true for all x in the radius of convergence of the original Taylor series, we can compute as follows. Since the constant term on the left must equal the constant term on the right it follows that a 0 = 1 2 . {\displaystyle a_{0}={\frac {1}{2}}.} Then, since there are no powers of x on the left, all of the coefficients on the right must be zero, from which it follows that a 1 = 1 4 {\displaystyle a_{1}={\frac {1}{4}}} a k = 1 2 ( a k − 1 − a k − 2 ) for k ≥ 2. {\displaystyle a_{k}={\frac {1}{2}}(a_{k-1}-a_{k-2})\quad {\text{for}}\ k\geq 2.} Conversely, any sequence that satisfies a linear recurrence determines a rational function when used as the coefficients of a Taylor series. This is useful in solving such recurrences, since by using partial fraction decomposition we can write any proper rational function as a sum of factors of the form 1 / (ax + b) and expand these as geometric series, giving an explicit formula for the Taylor coefficients; this is the method of generating functions. == Abstract algebra == In abstract algebra the concept of a polynomial is extended to include formal expressions in which the coefficients of the polynomial can be taken from any field. In this setting, given a field F and some indeterminate X, a rational expression (also known as a rational fraction or, in algebraic geometry, a rational function) is any element of the field of fractions of the polynomial ring F[X]. Any rational expression can be written as the quotient of two polynomials P/Q with Q ≠ 0, although this representation isn't unique. P/Q is equivalent to R/S, for polynomials P, Q, R, and S, when PS = QR. 
However, since F[X] is a unique factorization domain, there is a unique representation for any rational expression P/Q with P and Q polynomials of lowest degree and Q chosen to be monic. This is similar to how a fraction of integers can always be written uniquely in lowest terms by canceling out common factors. The field of rational expressions is denoted F(X). This field is said to be generated (as a field) over F by (a transcendental element) X, because F(X) does not contain any proper subfield containing both F and the element X. === Notion of a rational function on an algebraic variety === Like polynomials, rational expressions can also be generalized to n indeterminates X1,..., Xn, by taking the field of fractions of F[X1,..., Xn], which is denoted by F(X1,..., Xn). An extended version of the abstract idea of rational function is used in algebraic geometry. There the function field of an algebraic variety V is formed as the field of fractions of the coordinate ring of V (more accurately said, of a Zariski-dense affine open set in V). Its elements f are considered as regular functions in the sense of algebraic geometry on non-empty open sets U, and also may be seen as morphisms to the projective line. == Applications == Rational functions are used in numerical analysis for interpolation and approximation of functions, for example the Padé approximants introduced by Henri Padé. Approximations in terms of rational functions are well suited for computer algebra systems and other numerical software. Like polynomials, they can be evaluated straightforwardly, and at the same time they express more diverse behavior than polynomials. 
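A small illustration of that last point (our own example, not from the article): the [1/1] Padé approximant of e^x is the rational function (1 + x/2)/(1 − x/2). It is built from the same three series coefficients as the degree-2 Taylor polynomial, yet is noticeably more accurate near 0:

```python
import math

def pade_exp_11(x):
    """[1/1] Pade approximant of e^x: (1 + x/2)/(1 - x/2), for |x| < 2."""
    return (1 + x / 2) / (1 - x / 2)

def taylor_exp_2(x):
    """Degree-2 Taylor polynomial of e^x about 0."""
    return 1 + x + x * x / 2

x = 0.2
err_pade = abs(pade_exp_11(x) - math.exp(x))
err_taylor = abs(taylor_exp_2(x) - math.exp(x))
assert err_pade < err_taylor   # the rational approximant wins here
```

Both expansions agree with the series of e^x through the x² term; the leftover x³ error coefficient is 1/12 for the Padé form versus 1/6 for the Taylor polynomial, which is a tiny instance of the "more diverse behavior" noted above.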
Rational functions are used to approximate or model more complex equations in science and engineering including fields and forces in physics, spectroscopy in analytical chemistry, enzyme kinetics in biochemistry, electronic circuitry, aerodynamics, medicine concentrations in vivo, wave functions for atoms and molecules, optics and photography to improve image resolution, and acoustics and sound. In signal processing, the Laplace transform (for continuous systems) or the z-transform (for discrete-time systems) of the impulse response of commonly-used linear time-invariant systems (filters) with infinite impulse response are rational functions over complex numbers. == See also == Partial fraction decomposition Partial fractions in integration Function field of an algebraic variety Algebraic fractions – a generalization of rational functions that allows taking integer roots == References == == Further reading == "Rational function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. (2007), "Section 3.4. Rational Function Interpolation and Extrapolation", Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8 == External links == Dynamic visualization of rational functions with JSXGraph
Wikipedia/Rational_function
In mathematics, a zero (also sometimes called a root) of a real-, complex-, or generally vector-valued function f {\displaystyle f} , is a member x {\displaystyle x} of the domain of f {\displaystyle f} such that f ( x ) {\displaystyle f(x)} vanishes at x {\displaystyle x} ; that is, the function f {\displaystyle f} attains the value of 0 at x {\displaystyle x} , or equivalently, x {\displaystyle x} is a solution to the equation f ( x ) = 0 {\displaystyle f(x)=0} . A "zero" of a function is thus an input value that produces an output of 0. A root of a polynomial is a zero of the corresponding polynomial function. The fundamental theorem of algebra shows that any non-zero polynomial has a number of roots at most equal to its degree, and that the number of roots and the degree are equal when one considers the complex roots (or more generally, the roots in an algebraically closed extension) counted with their multiplicities. For example, the polynomial f {\displaystyle f} of degree two, defined by f ( x ) = x 2 − 5 x + 6 = ( x − 2 ) ( x − 3 ) {\displaystyle f(x)=x^{2}-5x+6=(x-2)(x-3)} has the two roots (or zeros) that are 2 and 3. f ( 2 ) = 2 2 − 5 × 2 + 6 = 0 and f ( 3 ) = 3 2 − 5 × 3 + 6 = 0. {\displaystyle f(2)=2^{2}-5\times 2+6=0{\text{ and }}f(3)=3^{2}-5\times 3+6=0.} If the function maps real numbers to real numbers, then its zeros are the x {\displaystyle x} -coordinates of the points where its graph meets the x-axis. An alternative name for such a point ( x , 0 ) {\displaystyle (x,0)} in this context is an x {\displaystyle x} -intercept. == Solution of an equation == Every equation in the unknown x {\displaystyle x} may be rewritten as f ( x ) = 0 {\displaystyle f(x)=0} by regrouping all the terms in the left-hand side. It follows that the solutions of such an equation are exactly the zeros of the function f {\displaystyle f} . 
In other words, a "zero of a function" is precisely a "solution of the equation obtained by equating the function to 0", and the study of zeros of functions is exactly the same as the study of solutions of equations. == Polynomial roots == Every real polynomial of odd degree has an odd number of real roots (counting multiplicities); likewise, a real polynomial of even degree must have an even number of real roots. Consequently, real odd polynomials must have at least one real root (because the smallest odd whole number is 1), whereas even polynomials may have none. This principle can be proven by reference to the intermediate value theorem: since polynomial functions are continuous, the function value must cross zero in the process of changing from negative to positive or vice versa (which always happens for polynomials of odd degree). === Fundamental theorem of algebra === The fundamental theorem of algebra states that every polynomial of degree n {\displaystyle n} has n {\displaystyle n} complex roots, counted with their multiplicities. The non-real roots of polynomials with real coefficients come in conjugate pairs. Vieta's formulas relate the coefficients of a polynomial to sums and products of its roots. == Computing roots == There are many methods for computing accurate approximations of roots of functions, among the best known being Newton's method; see Root-finding algorithm. For polynomials, there are specialized algorithms that are more efficient and may provide all roots or all real roots; see Polynomial root-finding and Real-root isolation. Some polynomials, including all those of degree no greater than 4, can have all their roots expressed algebraically in terms of their coefficients; see Solution in radicals. == Zero set == In various areas of mathematics, the zero set of a function is the set of all its zeros.
More precisely, if f : X → R {\displaystyle f:X\to \mathbb {R} } is a real-valued function (or, more generally, a function taking values in some additive group), its zero set is f − 1 ( 0 ) {\displaystyle f^{-1}(0)} , the inverse image of { 0 } {\displaystyle \{0\}} in X {\displaystyle X} . Under the same hypothesis on the codomain of the function, a level set of a function f {\displaystyle f} is the zero set of the function f − c {\displaystyle f-c} for some c {\displaystyle c} in the codomain of f . {\displaystyle f.} The zero set of a linear map is also known as its kernel. The cozero set of the function f : X → R {\displaystyle f:X\to \mathbb {R} } is the complement of the zero set of f {\displaystyle f} (i.e., the subset of X {\displaystyle X} on which f {\displaystyle f} is nonzero). === Applications === In algebraic geometry, the first definition of an algebraic variety is through zero sets. Specifically, an affine algebraic set is the intersection of the zero sets of several polynomials, in a polynomial ring k [ x 1 , … , x n ] {\displaystyle k\left[x_{1},\ldots ,x_{n}\right]} over a field. In this context, a zero set is sometimes called a zero locus. In analysis and geometry, any closed subset of R n {\displaystyle \mathbb {R} ^{n}} is the zero set of a smooth function defined on all of R n {\displaystyle \mathbb {R} ^{n}} . This extends to any smooth manifold as a corollary of paracompactness. In differential geometry, zero sets are frequently used to define manifolds. An important special case is the case that f {\displaystyle f} is a smooth function from R p {\displaystyle \mathbb {R} ^{p}} to R n {\displaystyle \mathbb {R} ^{n}} . If zero is a regular value of f {\displaystyle f} , then the zero set of f {\displaystyle f} is a smooth manifold of dimension m = p − n {\displaystyle m=p-n} by the regular value theorem. 
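These definitions lend themselves to a small numeric membership test; in the sketch below the function and the sample points are our own illustrative choices:

```python
def f(x, y):
    """f(x, y) = x^2 + y^2 - 1; its zero set is the unit circle."""
    return x * x + y * y - 1.0

def in_zero_set(g, p, tol=1e-12):
    """Membership test for the zero set of g at the point p."""
    return abs(g(*p)) <= tol

# (0.6, 0.8) lies on the unit circle; (1.0, 1.0) does not.
on = in_zero_set(f, (0.6, 0.8))
off = in_zero_set(f, (1.0, 1.0))

# The level set {f = 3} is the zero set of f - 3 (a circle of radius 2).
level = in_zero_set(lambda x, y: f(x, y) - 3.0, (2.0, 0.0))
```

The last line illustrates the statement above that a level set at level c is the zero set of f − c.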
For example, the unit m {\displaystyle m} -sphere in R m + 1 {\displaystyle \mathbb {R} ^{m+1}} is the zero set of the real-valued function f ( x ) = ‖ x ‖ 2 − 1 {\displaystyle f(x)=\Vert x\Vert ^{2}-1} .

== See also ==
Root-finding algorithm
Bolzano's theorem – a continuous function that takes opposite signs at the end points of an interval has at least one zero in the interval
Gauss–Lucas theorem – the complex zeros of the derivative of a polynomial lie inside the convex hull of the roots of the polynomial
Marden's theorem – a refinement of the Gauss–Lucas theorem for polynomials of degree three
Sendov's conjecture – a conjectured refinement of the Gauss–Lucas theorem
Zero at infinity
Zero crossing – property of the graph of a function near a zero
Zeros and poles of holomorphic functions

== References ==

== Further reading ==
Weisstein, Eric W. "Root". MathWorld.
Wikipedia/Root_of_a_function
Bessel functions, named after Friedrich Bessel, who was the first to systematically study them in 1824, are canonical solutions y(x) of Bessel's differential equation x 2 d 2 y d x 2 + x d y d x + ( x 2 − α 2 ) y = 0 {\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y=0} for an arbitrary complex number α {\displaystyle \alpha } , which represents the order of the Bessel function. Although α {\displaystyle \alpha } and − α {\displaystyle -\alpha } produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of α {\displaystyle \alpha } . The most important cases are when α {\displaystyle \alpha } is an integer or half-integer. Bessel functions for integer α {\displaystyle \alpha } are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer α {\displaystyle \alpha } are obtained when solving the Helmholtz equation in spherical coordinates. == Applications == Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2).
For example:
Electromagnetic waves in a cylindrical waveguide
Pressure amplitudes of inviscid rotational flows
Heat conduction in a cylindrical object
Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory)
Diffusion problems on a lattice
Solutions to the Schrödinger equation in spherical and cylindrical coordinates for a free particle
Position space representation of the Feynman propagator in quantum field theory
Solving for patterns of acoustical radiation
Frequency-dependent friction in circular pipelines
Dynamics of floating bodies
Angular resolution
Diffraction from helical objects, including DNA
Probability density function of the product of two normally distributed random variables
Analysis of the surface waves generated by microtremors, in geophysics and seismology

Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter). == Definitions == Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions: one of the first kind and one of the second kind. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections. The subscript n is typically used in place of α {\displaystyle \alpha } when α {\displaystyle \alpha } is known to be an integer.
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn. === Bessel functions of the first kind: Jα === Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation. For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible to define the function by x α {\displaystyle x^{\alpha }} times a Maclaurin series (note that α need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation: J α ( x ) = ∑ m = 0 ∞ ( − 1 ) m m ! Γ ( m + α + 1 ) ( x 2 ) 2 m + α , {\displaystyle J_{\alpha }(x)=\sum _{m=0}^{\infty }{\frac {(-1)^{m}}{m!\,\Gamma (m+\alpha +1)}}{\left({\frac {x}{2}}\right)}^{2m+\alpha },} where Γ(z) is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by 2 {\displaystyle 2} in x / 2 {\displaystyle x/2} ; this definition is not used in this article. The Bessel function of the first kind is an entire function if α is an integer, otherwise it is a multivalued function with singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to x − 1 / 2 {\displaystyle x^{-{1}/{2}}} (see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large x. (The series indicates that −J1(x) is the derivative of J0(x), much like −sin x is the derivative of cos x; more generally, the derivative of Jn(x) can be expressed in terms of Jn ± 1(x) by the identities below.) 
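The series definition can be exercised directly; the sketch below (truncation length and difference step are arbitrary choices of ours) sums the series and checks the derivative relation J0′(x) = −J1(x) mentioned above:

```python
import math

def J(alpha, x, terms=40):
    """Truncated series: sum over m of (-1)^m / (m! * Gamma(m+alpha+1)) * (x/2)^(2m+alpha)."""
    return sum((-1)**m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

# J0(0) = 1: only the m = 0 term survives at x = 0.
j0_at_zero = J(0, 0.0)

# Central-difference derivative of J0 matches -J1, as the series implies.
x, h = 2.0, 1e-6
dJ0 = (J(0, x + h) - J(0, x - h)) / (2 * h)
```

This is the analogue of (cos x)′ = −sin x noted in the text.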
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order n, the following relationship is valid (the gamma function has simple poles at each of the non-positive integers): J − n ( x ) = ( − 1 ) n J n ( x ) . {\displaystyle J_{-n}(x)=(-1)^{n}J_{n}(x).} This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below. ==== Bessel's integrals ==== Another definition of the Bessel function, for integer values of n, is possible using an integral representation: J n ( x ) = 1 π ∫ 0 π cos ⁡ ( n τ − x sin ⁡ τ ) d τ = 1 π Re ⁡ ( ∫ 0 π e i ( n τ − x sin ⁡ τ ) d τ ) , {\displaystyle J_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(n\tau -x\sin \tau )\,d\tau ={\frac {1}{\pi }}\operatorname {Re} \left(\int _{0}^{\pi }e^{i(n\tau -x\sin \tau )}\,d\tau \right),} which is also called the Hansen–Bessel formula. This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for Re(x) > 0: J α ( x ) = 1 π ∫ 0 π cos ⁡ ( α τ − x sin ⁡ τ ) d τ − sin ⁡ ( α π ) π ∫ 0 ∞ e − x sinh ⁡ t − α t d t . {\displaystyle J_{\alpha }(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(\alpha \tau -x\sin \tau )\,d\tau -{\frac {\sin(\alpha \pi )}{\pi }}\int _{0}^{\infty }e^{-x\sinh t-\alpha t}\,dt.} ==== Relation to hypergeometric series ==== The Bessel functions can be expressed in terms of the generalized hypergeometric series as J α ( x ) = ( x 2 ) α Γ ( α + 1 ) 0 F 1 ( α + 1 ; − x 2 4 ) .
{\displaystyle J_{\alpha }(x)={\frac {\left({\frac {x}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\;_{0}F_{1}\left(\alpha +1;-{\frac {x^{2}}{4}}\right).} This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function. ==== Relation to Laguerre polynomials ==== In terms of the Laguerre polynomials Lk and arbitrarily chosen parameter t, the Bessel function can be expressed as J α ( x ) ( x 2 ) α = e − t Γ ( α + 1 ) ∑ k = 0 ∞ L k ( α ) ( x 2 4 t ) ( k + α k ) t k k ! . {\displaystyle {\frac {J_{\alpha }(x)}{\left({\frac {x}{2}}\right)^{\alpha }}}={\frac {e^{-t}}{\Gamma (\alpha +1)}}\sum _{k=0}^{\infty }{\frac {L_{k}^{(\alpha )}\left({\frac {x^{2}}{4t}}\right)}{\binom {k+\alpha }{k}}}{\frac {t^{k}}{k!}}.} === Bessel functions of the second kind: Yα === The Bessel functions of the second kind, denoted by Yα(x), occasionally denoted instead by Nα(x), are solutions of the Bessel differential equation that have a singularity at the origin (x = 0) and are multivalued. These are sometimes called Weber functions, as they were introduced by H. M. Weber (1873), and also Neumann functions after Carl Neumann. For non-integer α, Yα(x) is related to Jα(x) by Y α ( x ) = J α ( x ) cos ⁡ ( α π ) − J − α ( x ) sin ⁡ ( α π ) . {\displaystyle Y_{\alpha }(x)={\frac {J_{\alpha }(x)\cos(\alpha \pi )-J_{-\alpha }(x)}{\sin(\alpha \pi )}}.} In the case of integer order n, the function is defined by taking the limit as a non-integer α tends to n: Y n ( x ) = lim α → n Y α ( x ) . {\displaystyle Y_{n}(x)=\lim _{\alpha \to n}Y_{\alpha }(x).} If n is a nonnegative integer, we have the series Y n ( z ) = − ( z 2 ) − n π ∑ k = 0 n − 1 ( n − k − 1 ) ! k ! ( z 2 4 ) k + 2 π J n ( z ) ln ⁡ z 2 − ( z 2 ) n π ∑ k = 0 ∞ ( ψ ( k + 1 ) + ψ ( n + k + 1 ) ) ( − z 2 4 ) k k ! ( n + k ) ! 
{\displaystyle Y_{n}(z)=-{\frac {\left({\frac {z}{2}}\right)^{-n}}{\pi }}\sum _{k=0}^{n-1}{\frac {(n-k-1)!}{k!}}\left({\frac {z^{2}}{4}}\right)^{k}+{\frac {2}{\pi }}J_{n}(z)\ln {\frac {z}{2}}-{\frac {\left({\frac {z}{2}}\right)^{n}}{\pi }}\sum _{k=0}^{\infty }(\psi (k+1)+\psi (n+k+1)){\frac {\left(-{\frac {z^{2}}{4}}\right)^{k}}{k!(n+k)!}}} where ψ ( z ) {\displaystyle \psi (z)} is the digamma function, the logarithmic derivative of the gamma function. There is also a corresponding integral formula (for Re(x) > 0): Y n ( x ) = 1 π ∫ 0 π sin ⁡ ( x sin ⁡ θ − n θ ) d θ − 1 π ∫ 0 ∞ ( e n t + ( − 1 ) n e − n t ) e − x sinh ⁡ t d t . {\displaystyle Y_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\sin(x\sin \theta -n\theta )\,d\theta -{\frac {1}{\pi }}\int _{0}^{\infty }\left(e^{nt}+(-1)^{n}e^{-nt}\right)e^{-x\sinh t}\,dt.} In the case where n = 0: (with γ {\displaystyle \gamma } being Euler's constant) Y 0 ( x ) = 4 π 2 ∫ 0 1 2 π cos ⁡ ( x cos ⁡ θ ) ( γ + ln ⁡ ( 2 x sin 2 ⁡ θ ) ) d θ . {\displaystyle Y_{0}\left(x\right)={\frac {4}{\pi ^{2}}}\int _{0}^{{\frac {1}{2}}\pi }\cos \left(x\cos \theta \right)\left(\gamma +\ln \left(2x\sin ^{2}\theta \right)\right)\,d\theta .} Yα(x) is necessary as the second linearly independent solution of the Bessel's equation when α is an integer. But Yα(x) has more meaning than that. It can be considered as a "natural" partner of Jα(x). See also the subsection on Hankel functions below. When α is an integer, moreover, as was similarly the case for the functions of the first kind, the following relationship is valid: Y − n ( x ) = ( − 1 ) n Y n ( x ) . {\displaystyle Y_{-n}(x)=(-1)^{n}Y_{n}(x).} Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane cut along the negative real axis. When α is an integer, the Bessel functions J are entire functions of x. If x is held fixed at a non-zero value, then the Bessel functions are entire functions of α. 
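For non-integer order, the connection formula above yields a working Yα built from the Jα series alone; the following sketch (order, evaluation point, and step size chosen arbitrarily) checks by finite differences that it satisfies Bessel's equation:

```python
import math

def J(alpha, x, terms=40):
    """Truncated series for the Bessel function of the first kind."""
    return sum((-1)**m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

def Y(alpha, x):
    """Y_alpha via (J_alpha cos(alpha*pi) - J_{-alpha}) / sin(alpha*pi), non-integer alpha."""
    return (J(alpha, x) * math.cos(alpha * math.pi) - J(-alpha, x)) \
        / math.sin(alpha * math.pi)

# Residual of x^2 y'' + x y' + (x^2 - alpha^2) y = 0 at x = 2 for alpha = 0.3.
a, x, h = 0.3, 2.0, 1e-5
yp = (Y(a, x + h) - Y(a, x - h)) / (2 * h)
ypp = (Y(a, x + h) - 2 * Y(a, x) + Y(a, x - h)) / h ** 2
residual = x ** 2 * ypp + x * yp + (x ** 2 - a ** 2) * Y(a, x)
```

The residual is limited only by floating-point roundoff in the second difference.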
The Bessel function of the second kind, when α is an integer, is an example of the second kind of solution in Fuchs's theorem. === Hankel functions: H(1)α, H(2)α === Another important formulation of the two linearly independent solutions to Bessel's equation is given by the Hankel functions of the first and second kind, H(1)α(x) and H(2)α(x), defined as H α ( 1 ) ( x ) = J α ( x ) + i Y α ( x ) , H α ( 2 ) ( x ) = J α ( x ) − i Y α ( x ) , {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&=J_{\alpha }(x)+iY_{\alpha }(x),\\[5pt]H_{\alpha }^{(2)}(x)&=J_{\alpha }(x)-iY_{\alpha }(x),\end{aligned}}} where i is the imaginary unit. These linear combinations are also known as Bessel functions of the third kind; they are two linearly independent solutions of Bessel's differential equation. They are named after Hermann Hankel. These forms of linear combination satisfy numerous simple-looking properties, like asymptotic formulae or integral representations. Here, "simple" means an appearance of a factor of the form e^{if(x)}. For real x > 0 {\displaystyle x>0} where J α ( x ) {\displaystyle J_{\alpha }(x)} , Y α ( x ) {\displaystyle Y_{\alpha }(x)} are real-valued, the Bessel functions of the first and second kind are the real and imaginary parts, respectively, of the first Hankel function and the real and negative imaginary parts of the second Hankel function. Thus, the above formulae are analogs of Euler's formula, substituting H(1)α(x), H(2)α(x) for e ± i x {\displaystyle e^{\pm ix}} and J α ( x ) {\displaystyle J_{\alpha }(x)} , Y α ( x ) {\displaystyle Y_{\alpha }(x)} for cos ⁡ ( x ) {\displaystyle \cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , as explicitly shown in the asymptotic expansion. The Hankel functions are used to express outward- and inward-propagating cylindrical-wave solutions of the cylindrical wave equation, respectively (or vice versa, depending on the sign convention for the frequency).
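The Euler-formula analogy can be seen numerically: building H(1)α from series-based Jα and Yα, the function approaches its leading oscillatory factor √(2/(πx)) e^{i(x − απ/2 − π/4)} for moderately large arguments. A sketch (all numeric choices below are ours):

```python
import math, cmath

def J(alpha, x, terms=60):
    """Truncated series for J_alpha(x)."""
    return sum((-1)**m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

def Y(alpha, x):
    """Y_alpha for non-integer alpha from the connection formula."""
    return (J(alpha, x) * math.cos(alpha * math.pi) - J(-alpha, x)) \
        / math.sin(alpha * math.pi)

def H1(alpha, x):
    """Hankel function of the first kind: J + iY."""
    return J(alpha, x) + 1j * Y(alpha, x)

a, x = 0.3, 20.0
exact = H1(a, x)
# Leading large-argument behaviour: a single complex exponential factor.
leading = math.sqrt(2 / (math.pi * x)) * cmath.exp(1j * (x - a * math.pi / 2 - math.pi / 4))
```

At x = 20 the two agree to within the expected O(1/x) correction.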
Using the previous relationships, they can be expressed as H α ( 1 ) ( x ) = J − α ( x ) − e − α π i J α ( x ) i sin ⁡ α π , H α ( 2 ) ( x ) = J − α ( x ) − e α π i J α ( x ) − i sin ⁡ α π . {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {J_{-\alpha }(x)-e^{-\alpha \pi i}J_{\alpha }(x)}{i\sin \alpha \pi }},\\[5pt]H_{\alpha }^{(2)}(x)&={\frac {J_{-\alpha }(x)-e^{\alpha \pi i}J_{\alpha }(x)}{-i\sin \alpha \pi }}.\end{aligned}}} If α is an integer, the limit has to be calculated. The following relationships are valid, whether α is an integer or not: H − α ( 1 ) ( x ) = e α π i H α ( 1 ) ( x ) , H − α ( 2 ) ( x ) = e − α π i H α ( 2 ) ( x ) . {\displaystyle {\begin{aligned}H_{-\alpha }^{(1)}(x)&=e^{\alpha \pi i}H_{\alpha }^{(1)}(x),\\[6mu]H_{-\alpha }^{(2)}(x)&=e^{-\alpha \pi i}H_{\alpha }^{(2)}(x).\end{aligned}}} In particular, if α = m + ⁠1/2⁠ with m a nonnegative integer, the above relations imply directly that J − ( m + 1 2 ) ( x ) = ( − 1 ) m + 1 Y m + 1 2 ( x ) , Y − ( m + 1 2 ) ( x ) = ( − 1 ) m J m + 1 2 ( x ) . {\displaystyle {\begin{aligned}J_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m+1}Y_{m+{\frac {1}{2}}}(x),\\[5pt]Y_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m}J_{m+{\frac {1}{2}}}(x).\end{aligned}}} These are useful in developing the spherical Bessel functions (see below). 
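The reflection relation H(1)−α(x) = e^{απi} H(1)α(x) can be checked directly with a series-based construction (order and argument are arbitrary non-integer choices):

```python
import math, cmath

def J(alpha, x, terms=40):
    """Truncated series for J_alpha(x)."""
    return sum((-1)**m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

def H1(alpha, x):
    """H^(1)_alpha = J_alpha + i*Y_alpha, with Y from the connection formula."""
    y = (J(alpha, x) * math.cos(alpha * math.pi) - J(-alpha, x)) \
        / math.sin(alpha * math.pi)
    return J(alpha, x) + 1j * y

a, x = 0.3, 2.0
lhs = H1(-a, x)                              # H^(1)_{-alpha}(x)
rhs = cmath.exp(1j * a * math.pi) * H1(a, x)  # e^{alpha*pi*i} H^(1)_alpha(x)
```

The two sides agree to machine precision, as the identity requires.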
The Hankel functions admit the following integral representations for Re(x) > 0: H α ( 1 ) ( x ) = 1 π i ∫ − ∞ + ∞ + π i e x sinh ⁡ t − α t d t , H α ( 2 ) ( x ) = − 1 π i ∫ − ∞ + ∞ − π i e x sinh ⁡ t − α t d t , {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {1}{\pi i}}\int _{-\infty }^{+\infty +\pi i}e^{x\sinh t-\alpha t}\,dt,\\[5pt]H_{\alpha }^{(2)}(x)&=-{\frac {1}{\pi i}}\int _{-\infty }^{+\infty -\pi i}e^{x\sinh t-\alpha t}\,dt,\end{aligned}}} where the integration limits indicate integration along a contour that can be chosen as follows: from −∞ to 0 along the negative real axis, from 0 to ±πi along the imaginary axis, and from ±πi to +∞ ± πi along a contour parallel to the real axis. === Modified Bessel functions: Iα, Kα === The Bessel functions are valid even for complex arguments x, and an important special case is that of a purely imaginary argument. In this case, the solutions to the Bessel equation are called the modified Bessel functions (or occasionally the hyperbolic Bessel functions) of the first and second kind and are defined as I α ( x ) = i − α J α ( i x ) = ∑ m = 0 ∞ 1 m ! Γ ( m + α + 1 ) ( x 2 ) 2 m + α , K α ( x ) = π 2 I − α ( x ) − I α ( x ) sin ⁡ α π , {\displaystyle {\begin{aligned}I_{\alpha }(x)&=i^{-\alpha }J_{\alpha }(ix)=\sum _{m=0}^{\infty }{\frac {1}{m!\,\Gamma (m+\alpha +1)}}\left({\frac {x}{2}}\right)^{2m+\alpha },\\[5pt]K_{\alpha }(x)&={\frac {\pi }{2}}{\frac {I_{-\alpha }(x)-I_{\alpha }(x)}{\sin \alpha \pi }},\end{aligned}}} when α is not an integer. When α is an integer, then the limit is used. These are chosen to be real-valued for real and positive arguments x. The series expansion for Iα(x) is thus similar to that for Jα(x), but without the alternating (−1)m factor. 
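The relation Iα(x) = i^{−α} Jα(ix) can be verified numerically; this sketch relies on Python's principal-branch complex powers, which match the principal value of i^{−α} (order and argument are our choices):

```python
import math, cmath

def J(alpha, z, terms=40):
    """Truncated series for J_alpha; accepts complex arguments z."""
    return sum((-1)**m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (z / 2) ** (2 * m + alpha) for m in range(terms))

def I(alpha, x, terms=40):
    """Modified Bessel function series: same as J but without the (-1)^m factor."""
    return sum((x / 2) ** (2 * m + alpha)
               / (math.factorial(m) * math.gamma(m + alpha + 1))
               for m in range(terms))

a, x = 0.3, 1.5
via_J = cmath.exp(-1j * a * math.pi / 2) * J(a, 1j * x)  # i^(-alpha) * J_alpha(i x)
direct = I(a, x)
```

The imaginary part of via_J vanishes to roundoff, leaving the real-valued Iα(x).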
K α {\displaystyle K_{\alpha }} can be expressed in terms of Hankel functions: K α ( x ) = { π 2 i α + 1 H α ( 1 ) ( i x ) − π < arg ⁡ x ≤ π 2 π 2 ( − i ) α + 1 H α ( 2 ) ( − i x ) − π 2 < arg ⁡ x ≤ π {\displaystyle K_{\alpha }(x)={\begin{cases}{\frac {\pi }{2}}i^{\alpha +1}H_{\alpha }^{(1)}(ix)&-\pi <\arg x\leq {\frac {\pi }{2}}\\{\frac {\pi }{2}}(-i)^{\alpha +1}H_{\alpha }^{(2)}(-ix)&-{\frac {\pi }{2}}<\arg x\leq \pi \end{cases}}} Using these two formulae, the following expression for J α 2 ( z ) {\displaystyle J_{\alpha }^{2}(z)} + Y α 2 ( z ) {\displaystyle Y_{\alpha }^{2}(z)} , commonly known as Nicholson's integral or Nicholson's formula, can be obtained: J α 2 ( x ) + Y α 2 ( x ) = 8 π 2 ∫ 0 ∞ cosh ⁡ ( 2 α t ) K 0 ( 2 x sinh ⁡ t ) d t , {\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8}{\pi ^{2}}}\int _{0}^{\infty }\cosh(2\alpha t)K_{0}(2x\sinh t)\,dt,} given that the condition Re(x) > 0 is met. It can also be shown that J α 2 ( x ) + Y α 2 ( x ) = 8 cos ⁡ ( α π ) π 2 ∫ 0 ∞ K 2 α ( 2 x sinh ⁡ t ) d t , {\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8\cos(\alpha \pi )}{\pi ^{2}}}\int _{0}^{\infty }K_{2\alpha }(2x\sinh t)\,dt,} only when |Re(α)| < ⁠1/2⁠ and Re(x) ≥ 0 but not when x = 0. We can express the first and second Bessel functions in terms of the modified Bessel functions (these are valid if −π < arg z ≤ ⁠π/2⁠): J α ( i z ) = e α π i 2 I α ( z ) , Y α ( i z ) = e ( α + 1 ) π i 2 I α ( z ) − 2 π e − α π i 2 K α ( z ) . {\displaystyle {\begin{aligned}J_{\alpha }(iz)&=e^{\frac {\alpha \pi i}{2}}I_{\alpha }(z),\\[1ex]Y_{\alpha }(iz)&=e^{\frac {(\alpha +1)\pi i}{2}}I_{\alpha }(z)-{\tfrac {2}{\pi }}e^{-{\frac {\alpha \pi i}{2}}}K_{\alpha }(z).\end{aligned}}} Iα(x) and Kα(x) are the two linearly independent solutions to the modified Bessel's equation: x 2 d 2 y d x 2 + x d y d x − ( x 2 + α 2 ) y = 0.
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}-\left(x^{2}+\alpha ^{2}\right)y=0.} Unlike the ordinary Bessel functions, which are oscillating as functions of a real argument, Iα and Kα are exponentially growing and decaying functions respectively. Like the ordinary Bessel function Jα, the function Iα goes to zero at x = 0 for α > 0 and is finite at x = 0 for α = 0. Analogously, Kα diverges at x = 0 with the singularity being of logarithmic type for K0, and ⁠1/2⁠Γ(|α|)(2/x)|α| otherwise. Two integral formulas for the modified Bessel functions are (for Re(x) > 0): I α ( x ) = 1 π ∫ 0 π e x cos ⁡ θ cos ⁡ α θ d θ − sin ⁡ α π π ∫ 0 ∞ e − x cosh ⁡ t − α t d t , K α ( x ) = ∫ 0 ∞ e − x cosh ⁡ t cosh ⁡ α t d t . {\displaystyle {\begin{aligned}I_{\alpha }(x)&={\frac {1}{\pi }}\int _{0}^{\pi }e^{x\cos \theta }\cos \alpha \theta \,d\theta -{\frac {\sin \alpha \pi }{\pi }}\int _{0}^{\infty }e^{-x\cosh t-\alpha t}\,dt,\\[5pt]K_{\alpha }(x)&=\int _{0}^{\infty }e^{-x\cosh t}\cosh \alpha t\,dt.\end{aligned}}} Bessel functions can be described as Fourier transforms of powers of quadratic functions. For example (for Re(ω) > 0): 2 K 0 ( ω ) = ∫ − ∞ ∞ e i ω t t 2 + 1 d t . {\displaystyle 2\,K_{0}(\omega )=\int _{-\infty }^{\infty }{\frac {e^{i\omega t}}{\sqrt {t^{2}+1}}}\,dt.} It can be proven by showing equality to the above integral definition for K0. This is done by integrating a closed curve in the first quadrant of the complex plane. Modified Bessel functions of the second kind may be represented with Bassett's integral K n ( x z ) = Γ ( n + 1 2 ) ( 2 z ) n π x n ∫ 0 ∞ cos ⁡ ( x t ) d t ( t 2 + z 2 ) n + 1 2 . 
{\displaystyle K_{n}(xz)={\frac {\Gamma \left(n+{\frac {1}{2}}\right)(2z)^{n}}{{\sqrt {\pi }}x^{n}}}\int _{0}^{\infty }{\frac {\cos(xt)\,dt}{(t^{2}+z^{2})^{n+{\frac {1}{2}}}}}.} Modified Bessel functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals K 1 3 ( ξ ) = 3 ∫ 0 ∞ exp ⁡ ( − ξ ( 1 + 4 x 2 3 ) 1 + x 2 3 ) d x , K 2 3 ( ξ ) = 1 3 ∫ 0 ∞ 3 + 2 x 2 1 + x 2 3 exp ⁡ ( − ξ ( 1 + 4 x 2 3 ) 1 + x 2 3 ) d x . {\displaystyle {\begin{aligned}K_{\frac {1}{3}}(\xi )&={\sqrt {3}}\int _{0}^{\infty }\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx,\\[5pt]K_{\frac {2}{3}}(\xi )&={\frac {1}{\sqrt {3}}}\int _{0}^{\infty }{\frac {3+2x^{2}}{\sqrt {1+{\frac {x^{2}}{3}}}}}\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx.\end{aligned}}} The modified Bessel function K 1 2 ( ξ ) = ( 2 ξ / π ) − 1 / 2 exp ⁡ ( − ξ ) {\displaystyle K_{\frac {1}{2}}(\xi )=(2\xi /\pi )^{-1/2}\exp(-\xi )} is useful to represent the Laplace distribution as an Exponential-scale mixture of normal distributions. The modified Bessel function of the second kind has also been called by the following names (now rare): Basset function after Alfred Barnard Basset Modified Bessel function of the third kind Modified Hankel function Macdonald function after Hector Munro Macdonald === Spherical Bessel functions: jn, yn === When solving the Helmholtz equation in spherical coordinates by separation of variables, the radial equation has the form x 2 d 2 y d x 2 + 2 x d y d x + ( x 2 − n ( n + 1 ) ) y = 0. {\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+2x{\frac {dy}{dx}}+\left(x^{2}-n(n+1)\right)y=0.} The two linearly independent solutions to this equation are called the spherical Bessel functions jn and yn, and are related to the ordinary Bessel functions Jn and Yn by j n ( x ) = π 2 x J n + 1 2 ( x ) , y n ( x ) = π 2 x Y n + 1 2 ( x ) = ( − 1 ) n + 1 π 2 x J − n − 1 2 ( x ) . 
{\displaystyle {\begin{aligned}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x),\\y_{n}(x)&={\sqrt {\frac {\pi }{2x}}}Y_{n+{\frac {1}{2}}}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-n-{\frac {1}{2}}}(x).\end{aligned}}} yn is also denoted nn or ηn; some authors call these functions the spherical Neumann functions. From the relations to the ordinary Bessel functions it is directly seen that: j n ( x ) = ( − 1 ) n y − n − 1 ( x ) y n ( x ) = ( − 1 ) n + 1 j − n − 1 ( x ) {\displaystyle {\begin{aligned}j_{n}(x)&=(-1)^{n}y_{-n-1}(x)\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)\end{aligned}}} The spherical Bessel functions can also be written as (Rayleigh's formulas) j n ( x ) = ( − x ) n ( 1 x d d x ) n sin ⁡ x x , y n ( x ) = − ( − x ) n ( 1 x d d x ) n cos ⁡ x x . {\displaystyle {\begin{aligned}j_{n}(x)&=(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\sin x}{x}},\\y_{n}(x)&=-(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\cos x}{x}}.\end{aligned}}} The zeroth spherical Bessel function j0(x) is also known as the (unnormalized) sinc function. The first few spherical Bessel functions are: j 0 ( x ) = sin ⁡ x x . j 1 ( x ) = sin ⁡ x x 2 − cos ⁡ x x , j 2 ( x ) = ( 3 x 2 − 1 ) sin ⁡ x x − 3 cos ⁡ x x 2 , j 3 ( x ) = ( 15 x 3 − 6 x ) sin ⁡ x x − ( 15 x 2 − 1 ) cos ⁡ x x {\displaystyle {\begin{aligned}j_{0}(x)&={\frac {\sin x}{x}}.\\j_{1}(x)&={\frac {\sin x}{x^{2}}}-{\frac {\cos x}{x}},\\j_{2}(x)&=\left({\frac {3}{x^{2}}}-1\right){\frac {\sin x}{x}}-{\frac {3\cos x}{x^{2}}},\\j_{3}(x)&=\left({\frac {15}{x^{3}}}-{\frac {6}{x}}\right){\frac {\sin x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\cos x}{x}}\end{aligned}}} and y 0 ( x ) = − j − 1 ( x ) = − cos ⁡ x x , y 1 ( x ) = j − 2 ( x ) = − cos ⁡ x x 2 − sin ⁡ x x , y 2 ( x ) = − j − 3 ( x ) = ( − 3 x 2 + 1 ) cos ⁡ x x − 3 sin ⁡ x x 2 , y 3 ( x ) = j − 4 ( x ) = ( − 15 x 3 + 6 x ) cos ⁡ x x − ( 15 x 2 − 1 ) sin ⁡ x x . 
{\displaystyle {\begin{aligned}y_{0}(x)&=-j_{-1}(x)=-{\frac {\cos x}{x}},\\y_{1}(x)&=j_{-2}(x)=-{\frac {\cos x}{x^{2}}}-{\frac {\sin x}{x}},\\y_{2}(x)&=-j_{-3}(x)=\left(-{\frac {3}{x^{2}}}+1\right){\frac {\cos x}{x}}-{\frac {3\sin x}{x^{2}}},\\y_{3}(x)&=j_{-4}(x)=\left(-{\frac {15}{x^{3}}}+{\frac {6}{x}}\right){\frac {\cos x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\sin x}{x}}.\end{aligned}}} The first few non-zero roots of the first few spherical Bessel functions are: ==== Generating function ==== The spherical Bessel functions have the generating functions 1 z cos ⁡ ( z 2 − 2 z t ) = ∑ n = 0 ∞ t n n ! j n − 1 ( z ) , 1 z sin ⁡ ( z 2 − 2 z t ) = ∑ n = 0 ∞ t n n ! y n − 1 ( z ) . {\displaystyle {\begin{aligned}{\frac {1}{z}}\cos \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}j_{n-1}(z),\\{\frac {1}{z}}\sin \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}y_{n-1}(z).\end{aligned}}} ==== Finite series expansions ==== In contrast to the whole integer Bessel functions Jn(x), Yn(x), the spherical Bessel functions jn(x), yn(x) have a finite series expression: j n ( x ) = π 2 x J n + 1 2 ( x ) = = 1 2 x [ e i x ∑ r = 0 n i r − n − 1 ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r + e − i x ∑ r = 0 n ( − i ) r − n − 1 ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r ] = 1 x [ sin ⁡ ( x − n π 2 ) ∑ r = 0 [ n 2 ] ( − 1 ) r ( n + 2 r ) ! ( 2 r ) ! ( n − 2 r ) ! ( 2 x ) 2 r + cos ⁡ ( x − n π 2 ) ∑ r = 0 [ n − 1 2 ] ( − 1 ) r ( n + 2 r + 1 ) ! ( 2 r + 1 ) ! ( n − 2 r − 1 ) ! ( 2 x ) 2 r + 1 ] y n ( x ) = ( − 1 ) n + 1 j − n − 1 ( x ) = ( − 1 ) n + 1 π 2 x J − ( n + 1 2 ) ( x ) = = ( − 1 ) n + 1 2 x [ e i x ∑ r = 0 n i r + n ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r + e − i x ∑ r = 0 n ( − i ) r + n ( n + r ) ! r ! ( n − r ) ! ( 2 x ) r ] = = ( − 1 ) n + 1 x [ cos ⁡ ( x + n π 2 ) ∑ r = 0 [ n 2 ] ( − 1 ) r ( n + 2 r ) ! ( 2 r ) ! ( n − 2 r ) ! ( 2 x ) 2 r − sin ⁡ ( x + n π 2 ) ∑ r = 0 [ n − 1 2 ] ( − 1 ) r ( n + 2 r + 1 ) ! ( 2 r + 1 ) ! 
( n − 2 r − 1 ) ! ( 2 x ) 2 r + 1 ] {\displaystyle {\begin{alignedat}{2}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x)=\\&={\frac {1}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]\\&={\frac {1}{x}}\left[\sin \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}+\cos \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)=(-1)^{n+1}{\frac {\pi }{2x}}J_{-\left(n+{\frac {1}{2}}\right)}(x)=\\&={\frac {(-1)^{n+1}}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]=\\&={\frac {(-1)^{n+1}}{x}}\left[\cos \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}-\sin \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\end{alignedat}}} ==== Differential relations ==== In the following, fn is any of jn, yn, h(1)n, h(2)n for n = 0, ±1, ±2, ... ( 1 z d d z ) m ( z n + 1 f n ( z ) ) = z n − m + 1 f n − m ( z ) , ( 1 z d d z ) m ( z − n f n ( z ) ) = ( − 1 ) m z − n − m f n + m ( z ) . {\displaystyle {\begin{aligned}\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{n+1}f_{n}(z)\right)&=z^{n-m+1}f_{n-m}(z),\\\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{-n}f_{n}(z)\right)&=(-1)^{m}z^{-n-m}f_{n+m}(z).\end{aligned}}} === Spherical Hankel functions: h(1)n, h(2)n === There are also spherical analogues of the Hankel functions: h n ( 1 ) ( x ) = j n ( x ) + i y n ( x ) , h n ( 2 ) ( x ) = j n ( x ) − i y n ( x ) . 
{\displaystyle {\begin{aligned}h_{n}^{(1)}(x)&=j_{n}(x)+iy_{n}(x),\\h_{n}^{(2)}(x)&=j_{n}(x)-iy_{n}(x).\end{aligned}}} There are simple closed-form expressions for the Bessel functions of half-integer order in terms of the standard trigonometric functions, and therefore for the spherical Bessel functions. In particular, for non-negative integers n: h n ( 1 ) ( x ) = ( − i ) n + 1 e i x x ∑ m = 0 n i m m ! ( 2 x ) m ( n + m ) ! ( n − m ) ! , {\displaystyle h_{n}^{(1)}(x)=(-i)^{n+1}{\frac {e^{ix}}{x}}\sum _{m=0}^{n}{\frac {i^{m}}{m!\,(2x)^{m}}}{\frac {(n+m)!}{(n-m)!}},} and h(2)n is the complex-conjugate of this (for real x). It follows, for example, that j0(x) = ⁠sin x/x⁠ and y0(x) = −⁠cos x/x⁠, and so on. The spherical Hankel functions appear in problems involving spherical wave propagation, for example in the multipole expansion of the electromagnetic field. === Riccati–Bessel functions: Sn, Cn, ξn, ζn === Riccati–Bessel functions only slightly differ from spherical Bessel functions: S n ( x ) = x j n ( x ) = π x 2 J n + 1 2 ( x ) C n ( x ) = − x y n ( x ) = − π x 2 Y n + 1 2 ( x ) ξ n ( x ) = x h n ( 1 ) ( x ) = π x 2 H n + 1 2 ( 1 ) ( x ) = S n ( x ) − i C n ( x ) ζ n ( x ) = x h n ( 2 ) ( x ) = π x 2 H n + 1 2 ( 2 ) ( x ) = S n ( x ) + i C n ( x ) {\displaystyle {\begin{aligned}S_{n}(x)&=xj_{n}(x)={\sqrt {\frac {\pi x}{2}}}J_{n+{\frac {1}{2}}}(x)\\C_{n}(x)&=-xy_{n}(x)=-{\sqrt {\frac {\pi x}{2}}}Y_{n+{\frac {1}{2}}}(x)\\\xi _{n}(x)&=xh_{n}^{(1)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(1)}(x)=S_{n}(x)-iC_{n}(x)\\\zeta _{n}(x)&=xh_{n}^{(2)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(2)}(x)=S_{n}(x)+iC_{n}(x)\end{aligned}}} They satisfy the differential equation x 2 d 2 y d x 2 + ( x 2 − n ( n + 1 ) ) y = 0. 
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+\left(x^{2}-n(n+1)\right)y=0.} For example, this kind of differential equation appears in quantum mechanics when solving the radial component of the Schrödinger equation with a hypothetical cylindrical infinite potential barrier. This differential equation and the Riccati–Bessel solutions also arise in the problem of scattering of electromagnetic waves by a sphere, known as Mie scattering after the first published solution by Mie (1908). See, e.g., Du (2004) for recent developments and references. Following Debye (1909), the notation ψn, χn is sometimes used instead of Sn, Cn. == Asymptotic forms == The Bessel functions have the following asymptotic forms. For small arguments 0 < z ≪ α + 1 {\displaystyle 0<z\ll {\sqrt {\alpha +1}}} , one obtains, when α {\displaystyle \alpha } is not a negative integer: J α ( z ) ∼ 1 Γ ( α + 1 ) ( z 2 ) α . {\displaystyle J_{\alpha }(z)\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha }.} When α is a negative integer, we have J α ( z ) ∼ ( − 1 ) α ( − α ) ! ( 2 z ) α . 
{\displaystyle J_{\alpha }(z)\sim {\frac {(-1)^{\alpha }}{(-\alpha )!}}\left({\frac {2}{z}}\right)^{\alpha }.} For the Bessel function of the second kind we have three cases: Y α ( z ) ∼ { 2 π ( ln ( z 2 ) + γ ) if α = 0 − Γ ( α ) π ( 2 z ) α + 1 Γ ( α + 1 ) ( z 2 ) α cot ( α π ) if α is not a non-negative integer (one term dominates unless α is imaginary) , − ( − 1 ) α Γ ( − α ) π ( z 2 ) α if α is a negative integer, {\displaystyle Y_{\alpha }(z)\sim {\begin{cases}{\dfrac {2}{\pi }}\left(\ln \left({\dfrac {z}{2}}\right)+\gamma \right)&{\text{if }}\alpha =0\\[1ex]-{\dfrac {\Gamma (\alpha )}{\pi }}\left({\dfrac {2}{z}}\right)^{\alpha }+{\dfrac {1}{\Gamma (\alpha +1)}}\left({\dfrac {z}{2}}\right)^{\alpha }\cot(\alpha \pi )&{\text{if }}\alpha {\text{ is not a non-negative integer (one term dominates unless }}\alpha {\text{ is imaginary)}},\\[1ex]-{\dfrac {(-1)^{\alpha }\Gamma (-\alpha )}{\pi }}\left({\dfrac {z}{2}}\right)^{\alpha }&{\text{if }}\alpha {\text{ is a negative integer,}}\end{cases}}} where γ is the Euler–Mascheroni constant (0.5772...). For large real arguments z ≫ |α2 − 1/4|, one cannot write a true asymptotic form for Bessel functions of the first and second kind (unless α is half-integer) because they have zeros all the way out to infinity, which would have to be matched exactly by any asymptotic expansion. However, for a given value of arg z one can write an equation containing a term of order |z|−1: J α ( z ) = 2 π z ( cos ( z − α π 2 − π 4 ) + e | Im ( z ) | O ( | z | − 1 ) ) for | arg z | < π , Y α ( z ) = 2 π z ( sin ( z − α π 2 − π 4 ) + e | Im ( z ) | O ( | z | − 1 ) ) for | arg z | < π . 
{\displaystyle {\begin{aligned}J_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\cos \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi ,\\Y_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\sin \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi .\end{aligned}}} (For α = ⁠1/2⁠, the last terms in these formulas drop out completely; see the spherical Bessel functions above.) The asymptotic forms for the Hankel functions are: H α ( 1 ) ( z ) ∼ 2 π z e i ( z − α π 2 − π 4 ) for − π < arg ⁡ z < 2 π , H α ( 2 ) ( z ) ∼ 2 π z e − i ( z − α π 2 − π 4 ) for − 2 π < arg ⁡ z < π . {\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<2\pi ,\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-2\pi <\arg z<\pi .\end{aligned}}} These can be extended to other values of arg z using equations relating H(1)α(zeimπ) and H(2)α(zeimπ) to H(1)α(z) and H(2)α(z). It is interesting that although the Bessel function of the first kind is the average of the two Hankel functions, Jα(z) is not asymptotic to the average of these two asymptotic forms when z is negative (because one or the other will not be correct there, depending on the arg z used). 
But the asymptotic forms for the Hankel functions permit us to write asymptotic forms for the Bessel functions of first and second kinds for complex (non-real) z so long as |z| goes to infinity at a constant phase angle arg z (using the square root having positive real part): J α ( z ) ∼ 1 2 π z e i ( z − α π 2 − π 4 ) for − π < arg ⁡ z < 0 , J α ( z ) ∼ 1 2 π z e − i ( z − α π 2 − π 4 ) for 0 < arg ⁡ z < π , Y α ( z ) ∼ − i 1 2 π z e i ( z − α π 2 − π 4 ) for − π < arg ⁡ z < 0 , Y α ( z ) ∼ i 1 2 π z e − i ( z − α π 2 − π 4 ) for 0 < arg ⁡ z < π . {\displaystyle {\begin{aligned}J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi ,\\[1ex]Y_{\alpha }(z)&\sim -i{\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]Y_{\alpha }(z)&\sim i{\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi .\end{aligned}}} For the modified Bessel functions, Hankel developed asymptotic expansions as well: I α ( z ) ∼ e z 2 π z ( 1 − 4 α 2 − 1 8 z + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) 2 ! ( 8 z ) 2 − ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) ( 4 α 2 − 25 ) 3 ! ( 8 z ) 3 + ⋯ ) for | arg ⁡ z | < π 2 , K α ( z ) ∼ π 2 z e − z ( 1 + 4 α 2 − 1 8 z + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) 2 ! ( 8 z ) 2 + ( 4 α 2 − 1 ) ( 4 α 2 − 9 ) ( 4 α 2 − 25 ) 3 ! ( 8 z ) 3 + ⋯ ) for | arg ⁡ z | < 3 π 2 . 
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {e^{z}}{\sqrt {2\pi z}}}\left(1-{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}-{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {\pi }{2}},\\K_{\alpha }(z)&\sim {\sqrt {\frac {\pi }{2z}}}e^{-z}\left(1+{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {3\pi }{2}}.\end{aligned}}} There is also the asymptotic form (for large real z {\displaystyle z} ) I α ( z ) = 1 2 π z 1 + α 2 z 2 4 exp ⁡ ( − α arcsinh ⁡ ( α z ) + z 1 + α 2 z 2 ) ( 1 + O ( 1 z 1 + α 2 z 2 ) ) . {\displaystyle {\begin{aligned}I_{\alpha }(z)={\frac {1}{{\sqrt {2\pi z}}{\sqrt[{4}]{1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\exp \left(-\alpha \operatorname {arcsinh} \left({\frac {\alpha }{z}}\right)+z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}\right)\left(1+{\mathcal {O}}\left({\frac {1}{z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\right)\right).\end{aligned}}} When α = ⁠1/2⁠, all the terms except the first vanish, and we have I 1 / 2 ( z ) = 2 π sinh ⁡ ( z ) z ∼ e z 2 π z for | arg ⁡ z | < π 2 , K 1 / 2 ( z ) = π 2 e − z z . 
{\displaystyle {\begin{aligned}I_{{1}/{2}}(z)&={\sqrt {\frac {2}{\pi }}}{\frac {\sinh(z)}{\sqrt {z}}}\sim {\frac {e^{z}}{\sqrt {2\pi z}}}&&{\text{for }}\left|\arg z\right|<{\tfrac {\pi }{2}},\\[1ex]K_{{1}/{2}}(z)&={\sqrt {\frac {\pi }{2}}}{\frac {e^{-z}}{\sqrt {z}}}.\end{aligned}}} For small arguments 0 < | z | ≪ α + 1 {\displaystyle 0<|z|\ll {\sqrt {\alpha +1}}} , we have I α ( z ) ∼ 1 Γ ( α + 1 ) ( z 2 ) α , K α ( z ) ∼ { − ln ⁡ ( z 2 ) − γ if α = 0 Γ ( α ) 2 ( 2 z ) α if α > 0 {\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha },\\[1ex]K_{\alpha }(z)&\sim {\begin{cases}-\ln \left({\dfrac {z}{2}}\right)-\gamma &{\text{if }}\alpha =0\\[1ex]{\frac {\Gamma (\alpha )}{2}}\left({\dfrac {2}{z}}\right)^{\alpha }&{\text{if }}\alpha >0\end{cases}}\end{aligned}}} == Properties == For integer order α = n, Jn is often defined via a Laurent series for a generating function: e x 2 ( t − 1 t ) = ∑ n = − ∞ ∞ J n ( x ) t n {\displaystyle e^{{\frac {x}{2}}\left(t-{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }J_{n}(x)t^{n}} an approach used by P. A. Hansen in 1843. (This can be generalized to non-integer order by contour integration or other methods.) Infinite series of Bessel functions in the form ∑ ν = − ∞ ∞ J N ν + p ( x ) {\textstyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)} where ν , p ∈ Z , N ∈ Z + \nu ,p\in \mathbb {Z} ,\ N\in \mathbb {Z} ^{+} arise in many physical systems and are defined in closed form by the Sung series. For example, when N = 3: ∑ ν = − ∞ ∞ J 3 ν + p ( x ) = 1 3 [ 1 + 2 cos ⁡ ( x 3 / 2 − 2 π p / 3 ) ] {\textstyle \sum _{\nu =-\infty }^{\infty }J_{3\nu +p}(x)={\frac {1}{3}}\left[1+2\cos {(x{\sqrt {3}}/2-2\pi p/3)}\right]} . 
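Both Hansen's generating function and the N = 3 Sung series above are easy to verify numerically. The sketch below is plain Python; the helper `jn` is an ad hoc power-series evaluation of Jn standing in for a proper Bessel routine, and the truncation limits are arbitrary:

```python
import math

def jn(n, x, terms=40):
    """Bessel J_n(x) for integer n via the power series;
    negative orders use J_{-n}(x) = (-1)^n J_n(x)."""
    if n < 0:
        return (-1) ** (-n) * jn(-n, x, terms)
    return sum((-1) ** k * (x / 2) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

x, t = 1.3, 0.5
# Hansen's generating function: sum_n J_n(x) t^n = exp((x/2)(t - 1/t))
lhs = sum(jn(n, x) * t ** n for n in range(-25, 26))
rhs = math.exp((x / 2) * (t - 1 / t))

# Sung series, N = 3, p = 0: sum_nu J_{3 nu}(x) = (1/3)(1 + 2 cos(x sqrt(3)/2))
s = sum(jn(3 * nu, x) for nu in range(-15, 16))
closed = (1 + 2 * math.cos(x * math.sqrt(3) / 2)) / 3
print(lhs - rhs, s - closed)  # both differences are negligible
```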
More generally, the Sung series and the alternating Sung series are written as: ∑ ν = − ∞ ∞ J N ν + p ( x ) = 1 N ∑ q = 0 N − 1 e i x sin ⁡ 2 π q / N e − i 2 π p q / N {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {2\pi q/N}}e^{-i2\pi pq/N}} ∑ ν = − ∞ ∞ ( − 1 ) ν J N ν + p ( x ) = 1 N ∑ q = 0 N − 1 e i x sin ⁡ ( 2 q + 1 ) π / N e − i ( 2 q + 1 ) π p / N {\displaystyle \sum _{\nu =-\infty }^{\infty }(-1)^{\nu }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {(2q+1)\pi /N}}e^{-i(2q+1)\pi p/N}} A series expansion using Bessel functions (Kapteyn series) is 1 1 − z = 1 + 2 ∑ n = 1 ∞ J n ( n z ) . {\displaystyle {\frac {1}{1-z}}=1+2\sum _{n=1}^{\infty }J_{n}(nz).} Another important relation for integer orders is the Jacobi–Anger expansion: e i z cos ⁡ ϕ = ∑ n = − ∞ ∞ i n J n ( z ) e i n ϕ {\displaystyle e^{iz\cos \phi }=\sum _{n=-\infty }^{\infty }i^{n}J_{n}(z)e^{in\phi }} and e ± i z sin ⁡ ϕ = J 0 ( z ) + 2 ∑ n = 1 ∞ J 2 n ( z ) cos ⁡ ( 2 n ϕ ) ± 2 i ∑ n = 0 ∞ J 2 n + 1 ( z ) sin ⁡ ( ( 2 n + 1 ) ϕ ) {\displaystyle e^{\pm iz\sin \phi }=J_{0}(z)+2\sum _{n=1}^{\infty }J_{2n}(z)\cos(2n\phi )\pm 2i\sum _{n=0}^{\infty }J_{2n+1}(z)\sin((2n+1)\phi )} which is used to expand a plane wave as a sum of cylindrical waves, or to find the Fourier series of a tone-modulated FM signal. More generally, a series f ( z ) = a 0 ν J ν ( z ) + 2 ⋅ ∑ k = 1 ∞ a k ν J ν + k ( z ) {\displaystyle f(z)=a_{0}^{\nu }J_{\nu }(z)+2\cdot \sum _{k=1}^{\infty }a_{k}^{\nu }J_{\nu +k}(z)} is called Neumann expansion of f. The coefficients for ν = 0 have the explicit form a k 0 = 1 2 π i ∫ | z | = c f ( z ) O k ( z ) d z {\displaystyle a_{k}^{0}={\frac {1}{2\pi i}}\int _{|z|=c}f(z)O_{k}(z)\,dz} where Ok is Neumann's polynomial. 
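The Jacobi–Anger expansion above can be checked the same way: taking the real part of e^{iz sin φ} gives cos(z sin φ) = J0(z) + 2 Σn J2n(z) cos(2nφ). A sketch, again with an ad hoc series implementation of Jn rather than a library routine:

```python
import math

def jn(n, x, terms=40):
    # power series for J_n(x), integer n >= 0
    return sum((-1) ** k * (x / 2) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

z, phi = 2.0, 0.7
# real part of the Jacobi-Anger expansion of e^{iz sin(phi)}
series = jn(0, z) + 2 * sum(jn(2 * n, z) * math.cos(2 * n * phi)
                            for n in range(1, 20))
exact = math.cos(z * math.sin(phi))
print(series - exact)  # negligible
```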
Selected functions admit the special representation f ( z ) = ∑ k = 0 ∞ a k ν J ν + 2 k ( z ) {\displaystyle f(z)=\sum _{k=0}^{\infty }a_{k}^{\nu }J_{\nu +2k}(z)} with a k ν = 2 ( ν + 2 k ) ∫ 0 ∞ f ( z ) J ν + 2 k ( z ) z d z {\displaystyle a_{k}^{\nu }=2(\nu +2k)\int _{0}^{\infty }f(z){\frac {J_{\nu +2k}(z)}{z}}\,dz} due to the orthogonality relation ∫ 0 ∞ J α ( z ) J β ( z ) d z z = 2 π sin ⁡ ( π 2 ( α − β ) ) α 2 − β 2 {\displaystyle \int _{0}^{\infty }J_{\alpha }(z)J_{\beta }(z){\frac {dz}{z}}={\frac {2}{\pi }}{\frac {\sin \left({\frac {\pi }{2}}(\alpha -\beta )\right)}{\alpha ^{2}-\beta ^{2}}}} More generally, if f has a branch-point near the origin of such a nature that f ( z ) = ∑ k = 0 a k J ν + k ( z ) {\displaystyle f(z)=\sum _{k=0}a_{k}J_{\nu +k}(z)} then L { ∑ k = 0 a k J ν + k } ( s ) = 1 1 + s 2 ∑ k = 0 a k ( s + 1 + s 2 ) ν + k {\displaystyle {\mathcal {L}}\left\{\sum _{k=0}a_{k}J_{\nu +k}\right\}(s)={\frac {1}{\sqrt {1+s^{2}}}}\sum _{k=0}{\frac {a_{k}}{\left(s+{\sqrt {1+s^{2}}}\right)^{\nu +k}}}} or ∑ k = 0 a k ξ ν + k = 1 + ξ 2 2 ξ L { f } ( 1 − ξ 2 2 ξ ) {\displaystyle \sum _{k=0}a_{k}\xi ^{\nu +k}={\frac {1+\xi ^{2}}{2\xi }}{\mathcal {L}}\{f\}\left({\frac {1-\xi ^{2}}{2\xi }}\right)} where L { f } {\displaystyle {\mathcal {L}}\{f\}} is the Laplace transform of f. 
Another way to define the Bessel functions is the Poisson representation formula and the Mehler-Sonine formula: J ν ( z ) = ( z 2 ) ν Γ ( ν + 1 2 ) π ∫ − 1 1 e i z s ( 1 − s 2 ) ν − 1 2 d s = 2 ( z 2 ) ν ⋅ π ⋅ Γ ( 1 2 − ν ) ∫ 1 ∞ sin ⁡ z u ( u 2 − 1 ) ν + 1 2 d u {\displaystyle {\begin{aligned}J_{\nu }(z)&={\frac {\left({\frac {z}{2}}\right)^{\nu }}{\Gamma \left(\nu +{\frac {1}{2}}\right){\sqrt {\pi }}}}\int _{-1}^{1}e^{izs}\left(1-s^{2}\right)^{\nu -{\frac {1}{2}}}\,ds\\[5px]&={\frac {2}{{\left({\frac {z}{2}}\right)}^{\nu }\cdot {\sqrt {\pi }}\cdot \Gamma \left({\frac {1}{2}}-\nu \right)}}\int _{1}^{\infty }{\frac {\sin zu}{\left(u^{2}-1\right)^{\nu +{\frac {1}{2}}}}}\,du\end{aligned}}} where ν > −⁠1/2⁠ and z ∈ C. This formula is useful especially when working with Fourier transforms. Because Bessel's equation becomes Hermitian (self-adjoint) if it is divided by x, the solutions must satisfy an orthogonality relationship for appropriate boundary conditions. In particular, it follows that: ∫ 0 1 x J α ( x u α , m ) J α ( x u α , n ) d x = δ m , n 2 [ J α + 1 ( u α , m ) ] 2 = δ m , n 2 [ J α ′ ( u α , m ) ] 2 {\displaystyle \int _{0}^{1}xJ_{\alpha }\left(xu_{\alpha ,m}\right)J_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[J_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}={\frac {\delta _{m,n}}{2}}\left[J_{\alpha }'\left(u_{\alpha ,m}\right)\right]^{2}} where α > −1, δm,n is the Kronecker delta, and uα,m is the mth zero of Jα(x). This orthogonality relation can then be used to extract the coefficients in the Fourier–Bessel series, where a function is expanded in the basis of the functions Jα(x uα,m) for fixed α and varying m. 
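This orthogonality can be checked end to end: locate the first two zeros of J0 by bisection, then integrate numerically. The following is a self-contained sketch (power-series J0 and J1, composite Simpson rule; all helpers here are hand-rolled illustrations, not library calls):

```python
import math

def jn(n, x, terms=40):
    # power series for J_n(x), integer n >= 0
    return sum((-1) ** k * (x / 2) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def bisect(f, a, b, it=80):
    for _ in range(it):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

def simpson(f, a, b, n=400):
    h = (b - a) / n
    return h / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i * h)
                       for i in range(n + 1))

j0 = lambda x: jn(0, x)
u1 = bisect(j0, 2.0, 3.0)  # j_{0,1}, approximately 2.40483
u2 = bisect(j0, 5.0, 6.0)  # j_{0,2}, approximately 5.52008
off = simpson(lambda x: x * j0(u1 * x) * j0(u2 * x), 0.0, 1.0)
diag = simpson(lambda x: x * j0(u1 * x) ** 2, 0.0, 1.0)
print(off)                         # essentially zero: orthogonality
print(diag, 0.5 * jn(1, u1) ** 2)  # diagonal term equals J_1(u_{0,1})^2 / 2
```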
An analogous relationship for the spherical Bessel functions follows immediately: ∫ 0 1 x 2 j α ( x u α , m ) j α ( x u α , n ) d x = δ m , n 2 [ j α + 1 ( u α , m ) ] 2 {\displaystyle \int _{0}^{1}x^{2}j_{\alpha }\left(xu_{\alpha ,m}\right)j_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[j_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}} If one defines a boxcar function of x that depends on a small parameter ε as: f ε ( x ) = 1 ε rect ⁡ ( x − 1 ε ) {\displaystyle f_{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x-1}{\varepsilon }}\right)} (where rect is the rectangle function) then the Hankel transform of it (of any given order α > −⁠1/2⁠), gε(k), approaches Jα(k) as ε approaches zero, for any given k. Conversely, the Hankel transform (of the same order) of gε(k) is fε(x): ∫ 0 ∞ k J α ( k x ) g ε ( k ) d k = f ε ( x ) {\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)g_{\varepsilon }(k)\,dk=f_{\varepsilon }(x)} which is zero everywhere except near 1. As ε approaches zero, the right-hand side approaches δ(x − 1), where δ is the Dirac delta function. This admits the limit (in the distributional sense): ∫ 0 ∞ k J α ( k x ) J α ( k ) d k = δ ( x − 1 ) {\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)J_{\alpha }(k)\,dk=\delta (x-1)} A change of variables then yields the closure equation: ∫ 0 ∞ x J α ( u x ) J α ( v x ) d x = 1 u δ ( u − v ) {\displaystyle \int _{0}^{\infty }xJ_{\alpha }(ux)J_{\alpha }(vx)\,dx={\frac {1}{u}}\delta (u-v)} for α > −⁠1/2⁠. The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is: ∫ 0 ∞ x 2 j α ( u x ) j α ( v x ) d x = π 2 u v δ ( u − v ) {\displaystyle \int _{0}^{\infty }x^{2}j_{\alpha }(ux)j_{\alpha }(vx)\,dx={\frac {\pi }{2uv}}\delta (u-v)} for α > −1. 
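For α = 0 the finite-interval relation above is easy to verify, since j0(x) = sin x / x has zeros at u0,m = mπ and j1(x) = sin x / x² − cos x / x. A sketch with a simple Simpson rule; the tiny lower integration limit just sidesteps the removable singularity of sin x / x at 0:

```python
import math

def j0(x): return math.sin(x) / x
def j1(x): return math.sin(x) / x ** 2 - math.cos(x) / x

def simpson(f, a, b, n=1000):
    h = (b - a) / n
    return h / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i * h)
                       for i in range(n + 1))

u1, u2 = math.pi, 2 * math.pi   # first two zeros of j0
eps = 1e-9                      # avoid evaluating sin(x)/x at x = 0
off = simpson(lambda x: x ** 2 * j0(u1 * x) * j0(u2 * x), eps, 1.0)
diag = simpson(lambda x: x ** 2 * j0(u1 * x) ** 2, eps, 1.0)
print(off)                      # essentially zero
print(diag, 0.5 * j1(u1) ** 2)  # both equal 1/(2 pi^2)
```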
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions: A α ( x ) d B α d x − d A α d x B α ( x ) = C α x {\displaystyle A_{\alpha }(x){\frac {dB_{\alpha }}{dx}}-{\frac {dA_{\alpha }}{dx}}B_{\alpha }(x)={\frac {C_{\alpha }}{x}}} where Aα and Bα are any two solutions of Bessel's equation, and Cα is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular, J α ( x ) d Y α d x − d J α d x Y α ( x ) = 2 π x {\displaystyle J_{\alpha }(x){\frac {dY_{\alpha }}{dx}}-{\frac {dJ_{\alpha }}{dx}}Y_{\alpha }(x)={\frac {2}{\pi x}}} and I α ( x ) d K α d x − d I α d x K α ( x ) = − 1 x , {\displaystyle I_{\alpha }(x){\frac {dK_{\alpha }}{dx}}-{\frac {dI_{\alpha }}{dx}}K_{\alpha }(x)=-{\frac {1}{x}},} for α > −1. For α > −1, the even entire function of genus 1, x−αJα(x), has only real zeros. Let 0 < j α , 1 < j α , 2 < ⋯ < j α , n < ⋯ {\displaystyle 0<j_{\alpha ,1}<j_{\alpha ,2}<\cdots <j_{\alpha ,n}<\cdots } be all its positive zeros, then J α ( z ) = ( z 2 ) α Γ ( α + 1 ) ∏ n = 1 ∞ ( 1 − z 2 j α , n 2 ) {\displaystyle J_{\alpha }(z)={\frac {\left({\frac {z}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{j_{\alpha ,n}^{2}}}\right)} (There are a large number of other known integrals and identities that are not reproduced here, but which can be found in the references.) === Recurrence relations === The functions Jα, Yα, H(1)α, and H(2)α all satisfy the recurrence relations 2 α x Z α ( x ) = Z α − 1 ( x ) + Z α + 1 ( x ) {\displaystyle {\frac {2\alpha }{x}}Z_{\alpha }(x)=Z_{\alpha -1}(x)+Z_{\alpha +1}(x)} and 2 d Z α ( x ) d x = Z α − 1 ( x ) − Z α + 1 ( x ) , {\displaystyle 2{\frac {dZ_{\alpha }(x)}{dx}}=Z_{\alpha -1}(x)-Z_{\alpha +1}(x),} where Z denotes J, Y, H(1), or H(2). These two identities are often combined, e.g. added or subtracted, to yield various other relations. 
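The two recurrences just given are easy to exercise for Z = J. This sketch uses a power-series Jn (an illustrative stand-in for a library routine), checking the three-term relation directly and the derivative relation with a central difference:

```python
import math

def jn(n, x, terms=40):
    # power series for J_n(x), integer n >= 0
    return sum((-1) ** k * (x / 2) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

a, x = 3, 2.2
# three-term recurrence: (2a/x) J_a(x) = J_{a-1}(x) + J_{a+1}(x)
lhs = 2 * a / x * jn(a, x)
rhs = jn(a - 1, x) + jn(a + 1, x)

# derivative relation: 2 J_a'(x) = J_{a-1}(x) - J_{a+1}(x)
h = 1e-6
deriv = (jn(a, x + h) - jn(a, x - h)) / (2 * h)
print(lhs - rhs)                                  # rounding-level difference
print(2 * deriv - (jn(a - 1, x) - jn(a + 1, x)))  # finite-difference-level
```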
In this way, for example, one can compute Bessel functions of higher orders (or higher derivatives) given the values at lower orders (or lower derivatives). In particular, it follows that ( 1 x d d x ) m [ x α Z α ( x ) ] = x α − m Z α − m ( x ) , ( 1 x d d x ) m [ Z α ( x ) x α ] = ( − 1 ) m Z α + m ( x ) x α + m . {\displaystyle {\begin{aligned}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[x^{\alpha }Z_{\alpha }(x)\right]&=x^{\alpha -m}Z_{\alpha -m}(x),\\\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[{\frac {Z_{\alpha }(x)}{x^{\alpha }}}\right]&=(-1)^{m}{\frac {Z_{\alpha +m}(x)}{x^{\alpha +m}}}.\end{aligned}}} Using the previous relations, one can arrive at similar relations for the spherical Bessel functions: 2 α + 1 x j α ( x ) = j α − 1 + j α + 1 {\displaystyle {\frac {2\alpha +1}{x}}j_{\alpha }(x)=j_{\alpha -1}+j_{\alpha +1}} and d j α ( x ) d x = j α − 1 − α + 1 x j α {\displaystyle {\frac {dj_{\alpha }(x)}{dx}}=j_{\alpha -1}-{\frac {\alpha +1}{x}}j_{\alpha }} Modified Bessel functions follow similar relations: e ( x 2 ) ( t + 1 t ) = ∑ n = − ∞ ∞ I n ( x ) t n {\displaystyle e^{\left({\frac {x}{2}}\right)\left(t+{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }I_{n}(x)t^{n}} and e z cos θ = I 0 ( z ) + 2 ∑ n = 1 ∞ I n ( z ) cos n θ {\displaystyle e^{z\cos \theta }=I_{0}(z)+2\sum _{n=1}^{\infty }I_{n}(z)\cos n\theta } and 1 2 π ∫ 0 2 π e z cos ( m θ ) + y cos θ d θ = I 0 ( z ) I 0 ( y ) + 2 ∑ n = 1 ∞ I n ( z ) I m n ( y ) . {\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }e^{z\cos(m\theta )+y\cos \theta }d\theta =I_{0}(z)I_{0}(y)+2\sum _{n=1}^{\infty }I_{n}(z)I_{mn}(y).} The recurrence relation reads C α − 1 ( x ) − C α + 1 ( x ) = 2 α x C α ( x ) , C α − 1 ( x ) + C α + 1 ( x ) = 2 d d x C α ( x ) , {\displaystyle {\begin{aligned}C_{\alpha -1}(x)-C_{\alpha +1}(x)&={\frac {2\alpha }{x}}C_{\alpha }(x),\\[1ex]C_{\alpha -1}(x)+C_{\alpha +1}(x)&=2{\frac {d}{dx}}C_{\alpha }(x),\end{aligned}}} where Cα denotes Iα or eαiπKα. 
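The modified-Bessel expansion of e^{z cos θ} and the recurrence with Cα = Iα above can be confirmed numerically from the power series In(x) = Σk (x/2)^{2k+n} / (k! (k+n)!). A sketch (the helper `iv` is an ad hoc series evaluation, not a library call):

```python
import math

def iv(n, x, terms=40):
    # power series for the modified Bessel function I_n(x), integer n >= 0
    return sum((x / 2) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

z, theta = 1.7, 0.9
# expansion e^{z cos(theta)} = I_0(z) + 2 sum_n I_n(z) cos(n theta)
series = iv(0, z) + 2 * sum(iv(n, z) * math.cos(n * theta) for n in range(1, 25))
exact = math.exp(z * math.cos(theta))

# recurrence with C = I: I_{a-1}(x) - I_{a+1}(x) = (2a/x) I_a(x)
a, x = 2, 1.5
rec_gap = (iv(a - 1, x) - iv(a + 1, x)) - 2 * a / x * iv(a, x)
print(series - exact, rec_gap)  # both negligible
```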
These recurrence relations are useful for discrete diffusion problems. === Transcendence === In 1929, Carl Ludwig Siegel proved that Jν(x), J'ν(x), and the logarithmic derivative ⁠J'ν(x)/Jν(x)⁠ are transcendental numbers when ν is rational and x is algebraic and nonzero. The same proof also implies that Γ ( v + 1 ) ( 2 / x ) v J v ( x ) {\displaystyle \Gamma (v+1)(2/x)^{v}J_{v}(x)} is transcendental under the same assumptions. === Sums with Bessel functions === The product of two Bessel functions admits the following sum: ∑ ν = − ∞ ∞ J ν ( x ) J n − ν ( y ) = J n ( x + y ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{n-\nu }(y)=J_{n}(x+y),} ∑ ν = − ∞ ∞ J ν ( x ) J ν + n ( y ) = J n ( y − x ) . {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(y)=J_{n}(y-x).} From these equalities it follows that ∑ ν = − ∞ ∞ J ν ( x ) J ν + n ( x ) = δ n , 0 {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(x)=\delta _{n,0}} and as a consequence ∑ ν = − ∞ ∞ J ν 2 ( x ) = 1. {\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }^{2}(x)=1.} These sums can be extended to include a term multiplier that is a polynomial function of the index. For example, ∑ ν = − ∞ ∞ ν J ν ( x ) J ν + n ( x ) = x 2 ( δ n , 1 + δ n , − 1 ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,1}+\delta _{n,-1}\right),} ∑ ν = − ∞ ∞ ν J ν 2 ( x ) = 0 , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }^{2}(x)=0,} ∑ ν = − ∞ ∞ ν 2 J ν ( x ) J ν + n ( x ) = x 2 ( δ n , − 1 − δ n , 1 ) + x 2 4 ( δ n , − 2 + 2 δ n , 0 + δ n , 2 ) , {\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,-1}-\delta _{n,1}\right)+{\frac {x^{2}}{4}}\left(\delta _{n,-2}+2\delta _{n,0}+\delta _{n,2}\right),} ∑ ν = − ∞ ∞ ν 2 J ν 2 ( x ) = x 2 2 . 
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }^{2}(x)={\frac {x^{2}}{2}}.} == Multiplication theorem == The Bessel functions obey a multiplication theorem λ − ν J ν ( λ z ) = ∑ n = 0 ∞ 1 n ! ( ( 1 − λ 2 ) z 2 ) n J ν + n ( z ) , {\displaystyle \lambda ^{-\nu }J_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(1-\lambda ^{2}\right)z}{2}}\right)^{n}J_{\nu +n}(z),} where λ and ν may be taken as arbitrary complex numbers. For |λ2 − 1| < 1, the above expression also holds if J is replaced by Y. The analogous identities for modified Bessel functions and |λ2 − 1| < 1 are λ − ν I ν ( λ z ) = ∑ n = 0 ∞ 1 n ! ( ( λ 2 − 1 ) z 2 ) n I ν + n ( z ) {\displaystyle \lambda ^{-\nu }I_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}I_{\nu +n}(z)} and λ − ν K ν ( λ z ) = ∑ n = 0 ∞ ( − 1 ) n n ! ( ( λ 2 − 1 ) z 2 ) n K ν + n ( z ) . {\displaystyle \lambda ^{-\nu }K_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}K_{\nu +n}(z).} == Zeros of the Bessel function == === Bourget's hypothesis === Bessel himself originally proved that for nonnegative integers n, the equation Jn(x) = 0 has an infinite number of solutions in x. When the functions Jn(x) are plotted on the same graph, though, none of the zeros seem to coincide for different values of n except for the zero at x = 0. This phenomenon is known as Bourget's hypothesis after the 19th-century French mathematician who studied Bessel functions. Specifically it states that for any integers n ≥ 0 and m ≥ 1, the functions Jn(x) and Jn + m(x) have no common zeros other than the one at x = 0. The hypothesis was proved by Carl Ludwig Siegel in 1929. === Transcendence === Siegel proved in 1929 that when ν is rational, all nonzero roots of Jν(x) and J'ν(x) are transcendental, as are all the roots of Kν(x). 
It is also known that all roots of the higher derivatives J ν ( n ) ( x ) {\displaystyle J_{\nu }^{(n)}(x)} for n ≤ 18 are transcendental, except for the special values J 1 ( 3 ) ( ± 3 ) = 0 {\displaystyle J_{1}^{(3)}(\pm {\sqrt {3}})=0} and J 0 ( 4 ) ( ± 3 ) = 0 {\displaystyle J_{0}^{(4)}(\pm {\sqrt {3}})=0} . === Numerical approaches === For numerical studies about the zeros of the Bessel function, see Gil, Segura & Temme (2007), Kravanja et al. (1998) and Moler (2004). === Numerical values === The first zeros in J0 (i.e., j0,1, j0,2 and j0,3) occur at arguments of approximately 2.40483, 5.52008 and 8.65373, respectively. == History == === Waves and elasticity problems === The first appearance of a Bessel function occurs in the work of Daniel Bernoulli in 1732, while he was working on the analysis of a vibrating string, a problem tackled before by his father Johann Bernoulli. Daniel considered a flexible chain suspended from a fixed point above and free at its lower end. The solution of the differential equation led to the introduction of a function that is now recognized as J 0 ( x ) {\displaystyle J_{0}(x)} . Bernoulli also developed a method to find the zeros of the function. Leonhard Euler, in 1736, found a link between other functions (now known as Laguerre polynomials) and Bernoulli's solution. Euler also introduced a non-uniform chain, which led to the introduction of functions now related to the modified Bessel functions I n ( x ) {\displaystyle I_{n}(x)} . In the middle of the eighteenth century, Jean le Rond d'Alembert found a formula to solve the wave equation. By 1771 there was a dispute between Bernoulli, Euler, d'Alembert and Joseph-Louis Lagrange on the nature of the solutions of vibrating strings. Euler worked in 1778 on buckling, introducing the concept of Euler's critical load. To solve the problem he introduced the series for J ± 1 / 3 ( x ) {\displaystyle J_{\pm 1/3}(x)} . 
Euler also worked out the solutions of vibrating 2D membranes in cylindrical coordinates in 1780. In order to solve his differential equation he introduced a power series associated with J n ( x ) {\displaystyle J_{n}(x)} , for integer n. Toward the end of the 18th century, Lagrange, Pierre-Simon Laplace and Marc-Antoine Parseval also found equivalents to the Bessel functions. Parseval, for example, found an integral representation of J 0 ( x ) {\displaystyle J_{0}(x)} using cosine. At the beginning of the 1800s, Joseph Fourier used J 0 ( x ) {\displaystyle J_{0}(x)} to solve the heat equation in a problem with cylindrical symmetry. Fourier won a prize from the French Academy of Sciences for this work in 1811, but most of the details of his work, including the use of a Fourier series, remained unpublished until 1822. Poisson, in rivalry with Fourier, extended Fourier's work in 1823, introducing new properties of Bessel functions, including Bessel functions of half-integer order (now known as spherical Bessel functions). === Astronomical problems === In 1770, Lagrange introduced the series expansion of Bessel functions to solve Kepler's equation, a transcendental equation in astronomy. Friedrich Wilhelm Bessel had seen Lagrange's solution but found it difficult to handle. In 1813, in a letter to Carl Friedrich Gauss, Bessel simplified the calculation using trigonometric functions. Bessel published his work in 1819, independently introducing the method of Fourier series, unaware of Fourier's still-unpublished work. In 1824, Bessel carried out a systematic investigation of the functions, which earned them his name. In older literature the functions were called cylindrical functions or even Bessel–Fourier functions. == See also == == Notes == == References == == External links ==
Wikipedia/Bessel_function
In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the unit hyperbola. Also, similarly to how the derivatives of sin(t) and cos(t) are cos(t) and –sin(t) respectively, the derivatives of sinh(t) and cosh(t) are cosh(t) and sinh(t) respectively. Hyperbolic functions are used to express the angle of parallelism in hyperbolic geometry. They are used to express Lorentz boosts as hyperbolic rotations in special relativity. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equations are important in many areas of physics, including electromagnetic theory, heat transfer, and fluid dynamics. The basic hyperbolic functions are: hyperbolic sine "sinh" (), hyperbolic cosine "cosh" (), from which are derived: hyperbolic tangent "tanh" (), hyperbolic cotangent "coth" (), hyperbolic secant "sech" (), hyperbolic cosecant "csch" or "cosech" () corresponding to the derived trigonometric functions. The inverse hyperbolic functions are: inverse hyperbolic sine "arsinh" (also denoted "sinh−1", "asinh" or sometimes "arcsinh") inverse hyperbolic cosine "arcosh" (also denoted "cosh−1", "acosh" or sometimes "arccosh") inverse hyperbolic tangent "artanh" (also denoted "tanh−1", "atanh" or sometimes "arctanh") inverse hyperbolic cotangent "arcoth" (also denoted "coth−1", "acoth" or sometimes "arccoth") inverse hyperbolic secant "arsech" (also denoted "sech−1", "asech" or sometimes "arcsech") inverse hyperbolic cosecant "arcsch" (also denoted "arcosech", "csch−1", "cosech−1","acsch", "acosech", or sometimes "arccsch" or "arccosech") The hyperbolic functions take a real argument called a hyperbolic angle. 
The magnitude of a hyperbolic angle is the area of its hyperbolic sector of the hyperbola xy = 1. The hyperbolic functions may be defined in terms of the legs of a right triangle covering this sector. In complex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine are entire functions. As a result, the other hyperbolic functions are meromorphic in the whole complex plane. By the Lindemann–Weierstrass theorem, the hyperbolic functions have a transcendental value for every non-zero algebraic value of the argument. == History == The first known calculation of a hyperbolic trigonometry problem is attributed to Gerardus Mercator when issuing the Mercator map projection circa 1569. It requires tabulating solutions to a transcendental equation involving hyperbolic functions. The first to suggest a similarity between the sector of the circle and that of the hyperbola was Isaac Newton in his 1687 Principia Mathematica. Roger Cotes suggested modifying the trigonometric functions using the imaginary unit i = − 1 {\displaystyle i={\sqrt {-1}}} to obtain an oblate spheroid from a prolate one. Hyperbolic functions were formally introduced in 1757 by Vincenzo Riccati. Riccati used Sc. and Cc. (sinus/cosinus circulare) to refer to circular functions and Sh. and Ch. (sinus/cosinus hyperbolico) to refer to hyperbolic functions. As early as 1759, Daviet de Foncenex showed the interchangeability of the trigonometric and hyperbolic functions using the imaginary unit and extended de Moivre's formula to hyperbolic functions. During the 1760s, Johann Heinrich Lambert systematized the use of these functions and provided exponential expressions in various publications. Lambert credited Riccati for the terminology and names of the functions, but altered the abbreviations to those used today. == Notation == == Definitions == There are various equivalent ways to define the hyperbolic functions. 
=== Exponential definitions === In terms of the exponential function: Hyperbolic sine: the odd part of the exponential function, that is, sinh ⁡ x = e x − e − x 2 = e 2 x − 1 2 e x = 1 − e − 2 x 2 e − x . {\displaystyle \sinh x={\frac {e^{x}-e^{-x}}{2}}={\frac {e^{2x}-1}{2e^{x}}}={\frac {1-e^{-2x}}{2e^{-x}}}.} Hyperbolic cosine: the even part of the exponential function, that is, cosh ⁡ x = e x + e − x 2 = e 2 x + 1 2 e x = 1 + e − 2 x 2 e − x . {\displaystyle \cosh x={\frac {e^{x}+e^{-x}}{2}}={\frac {e^{2x}+1}{2e^{x}}}={\frac {1+e^{-2x}}{2e^{-x}}}.} Hyperbolic tangent: tanh ⁡ x = sinh ⁡ x cosh ⁡ x = e x − e − x e x + e − x = e 2 x − 1 e 2 x + 1 . {\displaystyle \tanh x={\frac {\sinh x}{\cosh x}}={\frac {e^{x}-e^{-x}}{e^{x}+e^{-x}}}={\frac {e^{2x}-1}{e^{2x}+1}}.} Hyperbolic cotangent: for x ≠ 0, coth ⁡ x = cosh ⁡ x sinh ⁡ x = e x + e − x e x − e − x = e 2 x + 1 e 2 x − 1 . {\displaystyle \coth x={\frac {\cosh x}{\sinh x}}={\frac {e^{x}+e^{-x}}{e^{x}-e^{-x}}}={\frac {e^{2x}+1}{e^{2x}-1}}.} Hyperbolic secant: sech ⁡ x = 1 cosh ⁡ x = 2 e x + e − x = 2 e x e 2 x + 1 . {\displaystyle \operatorname {sech} x={\frac {1}{\cosh x}}={\frac {2}{e^{x}+e^{-x}}}={\frac {2e^{x}}{e^{2x}+1}}.} Hyperbolic cosecant: for x ≠ 0, csch ⁡ x = 1 sinh ⁡ x = 2 e x − e − x = 2 e x e 2 x − 1 . {\displaystyle \operatorname {csch} x={\frac {1}{\sinh x}}={\frac {2}{e^{x}-e^{-x}}}={\frac {2e^{x}}{e^{2x}-1}}.} === Differential equation definitions === The hyperbolic functions may be defined as solutions of differential equations: The hyperbolic sine and cosine are the solution (s, c) of the system c ′ ( x ) = s ( x ) , s ′ ( x ) = c ( x ) , {\displaystyle {\begin{aligned}c'(x)&=s(x),\\s'(x)&=c(x),\\\end{aligned}}} with the initial conditions s ( 0 ) = 0 , c ( 0 ) = 1. {\displaystyle s(0)=0,c(0)=1.} The initial conditions make the solution unique; without them any pair of functions ( a e x + b e − x , a e x − b e − x ) {\displaystyle (ae^{x}+be^{-x},ae^{x}-be^{-x})} would be a solution. 
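Both characterizations above translate directly into code. The following sketch (standard library only) checks the exponential definitions against the math module and the defining ODE system c′ = s, s′ = c by central differences. Note the naive exponential formulas are purely illustrative: for small |x| the subtraction in sinh loses precision, which is one reason libraries provide sinh as a primitive.

```python
import math

def sinh(x): return (math.exp(x) - math.exp(-x)) / 2  # odd part of exp
def cosh(x): return (math.exp(x) + math.exp(-x)) / 2  # even part of exp

# exponential definitions reproduce the standard-library functions
for v in (-2.0, 0.0, 0.5, 3.0):
    assert math.isclose(sinh(v), math.sinh(v), abs_tol=1e-12)
    assert math.isclose(cosh(v), math.cosh(v))

# defining ODE system: c'(x) = s(x), s'(x) = c(x), with s(0) = 0, c(0) = 1
h, x = 1e-6, 0.7
assert math.isclose((cosh(x + h) - cosh(x - h)) / (2 * h), sinh(x), rel_tol=1e-8)
assert math.isclose((sinh(x + h) - sinh(x - h)) / (2 * h), cosh(x), rel_tol=1e-8)
print("exponential and differential-equation definitions agree")
```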
cosh(x) and sinh(x) are also the unique solutions of the equation f ″(x) = f (x), such that f (0) = 1, f ′(0) = 0 for the hyperbolic cosine, and f (0) = 0, f ′(0) = 1 for the hyperbolic sine. === Complex trigonometric definitions === Hyperbolic functions may also be deduced from trigonometric functions with complex arguments: Hyperbolic sine: sinh ⁡ x = − i sin ⁡ ( i x ) . {\displaystyle \sinh x=-i\sin(ix).} Hyperbolic cosine: cosh ⁡ x = cos ⁡ ( i x ) . {\displaystyle \cosh x=\cos(ix).} Hyperbolic tangent: tanh ⁡ x = − i tan ⁡ ( i x ) . {\displaystyle \tanh x=-i\tan(ix).} Hyperbolic cotangent: coth ⁡ x = i cot ⁡ ( i x ) . {\displaystyle \coth x=i\cot(ix).} Hyperbolic secant: sech ⁡ x = sec ⁡ ( i x ) . {\displaystyle \operatorname {sech} x=\sec(ix).} Hyperbolic cosecant: csch ⁡ x = i csc ⁡ ( i x ) . {\displaystyle \operatorname {csch} x=i\csc(ix).} where i is the imaginary unit with i2 = −1. The above definitions are related to the exponential definitions via Euler's formula (See § Hyperbolic functions for complex numbers below). == Characterizing properties == === Hyperbolic cosine === It can be shown that the area under the curve of the hyperbolic cosine (over a finite interval) is always equal to the arc length corresponding to that interval: area = ∫ a b cosh ⁡ x d x = ∫ a b 1 + ( d d x cosh ⁡ x ) 2 d x = arc length. {\displaystyle {\text{area}}=\int _{a}^{b}\cosh x\,dx=\int _{a}^{b}{\sqrt {1+\left({\frac {d}{dx}}\cosh x\right)^{2}}}\,dx={\text{arc length.}}} === Hyperbolic tangent === The hyperbolic tangent is the (unique) solution to the differential equation f ′ = 1 − f 2, with f (0) = 0. == Useful relations == The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. 
In fact, Osborn's rule states that one can convert any trigonometric identity (up to but not including sinhs or implied sinhs of 4th degree) for θ {\displaystyle \theta } , 2 θ {\displaystyle 2\theta } , 3 θ {\displaystyle 3\theta } or θ {\displaystyle \theta } and φ {\displaystyle \varphi } into a hyperbolic identity, by: expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term containing a product of two sinhs. Odd and even functions: sinh ⁡ ( − x ) = − sinh ⁡ x cosh ⁡ ( − x ) = cosh ⁡ x {\displaystyle {\begin{aligned}\sinh(-x)&=-\sinh x\\\cosh(-x)&=\cosh x\end{aligned}}} Hence: tanh ⁡ ( − x ) = − tanh ⁡ x coth ⁡ ( − x ) = − coth ⁡ x sech ⁡ ( − x ) = sech ⁡ x csch ⁡ ( − x ) = − csch ⁡ x {\displaystyle {\begin{aligned}\tanh(-x)&=-\tanh x\\\coth(-x)&=-\coth x\\\operatorname {sech} (-x)&=\operatorname {sech} x\\\operatorname {csch} (-x)&=-\operatorname {csch} x\end{aligned}}} Thus, cosh x and sech x are even functions; the others are odd functions. arsech ⁡ x = arcosh ⁡ ( 1 x ) arcsch ⁡ x = arsinh ⁡ ( 1 x ) arcoth ⁡ x = artanh ⁡ ( 1 x ) {\displaystyle {\begin{aligned}\operatorname {arsech} x&=\operatorname {arcosh} \left({\frac {1}{x}}\right)\\\operatorname {arcsch} x&=\operatorname {arsinh} \left({\frac {1}{x}}\right)\\\operatorname {arcoth} x&=\operatorname {artanh} \left({\frac {1}{x}}\right)\end{aligned}}} Hyperbolic sine and cosine satisfy: cosh ⁡ x + sinh ⁡ x = e x cosh ⁡ x − sinh ⁡ x = e − x {\displaystyle {\begin{aligned}\cosh x+\sinh x&=e^{x}\\\cosh x-\sinh x&=e^{-x}\end{aligned}}} which are analogous to Euler's formula, and cosh 2 ⁡ x − sinh 2 ⁡ x = 1 {\displaystyle \cosh ^{2}x-\sinh ^{2}x=1} which is analogous to the Pythagorean trigonometric identity. 
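These basic relations are easy to spot-check numerically. The snippet below (illustrative only) verifies the parity of sinh and cosh, the exponential identities cosh x ± sinh x = e^(±x), and the analogue of the Pythagorean identity:

```python
import math

for x in (-3.0, -0.4, 0.0, 1.7):
    # analogue of the Pythagorean identity: cosh^2 x - sinh^2 x = 1
    assert math.isclose(math.cosh(x) ** 2 - math.sinh(x) ** 2, 1.0, rel_tol=1e-9)
    # cosh x + sinh x = e^x  and  cosh x - sinh x = e^(-x)
    assert math.isclose(math.cosh(x) + math.sinh(x), math.exp(x), rel_tol=1e-12)
    assert math.isclose(math.cosh(x) - math.sinh(x), math.exp(-x), rel_tol=1e-12)
    # parity: sinh is odd, cosh is even
    assert math.isclose(math.sinh(-x), -math.sinh(x), rel_tol=1e-12, abs_tol=1e-15)
    assert math.isclose(math.cosh(-x), math.cosh(x), rel_tol=1e-12)
```

The looser tolerance on the first assertion reflects the cancellation between cosh²x and sinh²x for larger |x|.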
One also has sech 2 ⁡ x = 1 − tanh 2 ⁡ x csch 2 ⁡ x = coth 2 ⁡ x − 1 {\displaystyle {\begin{aligned}\operatorname {sech} ^{2}x&=1-\tanh ^{2}x\\\operatorname {csch} ^{2}x&=\coth ^{2}x-1\end{aligned}}} for the other functions. === Sums of arguments === sinh ⁡ ( x + y ) = sinh ⁡ x cosh ⁡ y + cosh ⁡ x sinh ⁡ y cosh ⁡ ( x + y ) = cosh ⁡ x cosh ⁡ y + sinh ⁡ x sinh ⁡ y tanh ⁡ ( x + y ) = tanh ⁡ x + tanh ⁡ y 1 + tanh ⁡ x tanh ⁡ y {\displaystyle {\begin{aligned}\sinh(x+y)&=\sinh x\cosh y+\cosh x\sinh y\\\cosh(x+y)&=\cosh x\cosh y+\sinh x\sinh y\\\tanh(x+y)&={\frac {\tanh x+\tanh y}{1+\tanh x\tanh y}}\\\end{aligned}}} particularly cosh ⁡ ( 2 x ) = sinh 2 ⁡ x + cosh 2 ⁡ x = 2 sinh 2 ⁡ x + 1 = 2 cosh 2 ⁡ x − 1 sinh ⁡ ( 2 x ) = 2 sinh ⁡ x cosh ⁡ x tanh ⁡ ( 2 x ) = 2 tanh ⁡ x 1 + tanh 2 ⁡ x {\displaystyle {\begin{aligned}\cosh(2x)&=\sinh ^{2}{x}+\cosh ^{2}{x}=2\sinh ^{2}x+1=2\cosh ^{2}x-1\\\sinh(2x)&=2\sinh x\cosh x\\\tanh(2x)&={\frac {2\tanh x}{1+\tanh ^{2}x}}\\\end{aligned}}} Also: sinh ⁡ x + sinh ⁡ y = 2 sinh ⁡ ( x + y 2 ) cosh ⁡ ( x − y 2 ) cosh ⁡ x + cosh ⁡ y = 2 cosh ⁡ ( x + y 2 ) cosh ⁡ ( x − y 2 ) {\displaystyle {\begin{aligned}\sinh x+\sinh y&=2\sinh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\cosh x+\cosh y&=2\cosh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} === Subtraction formulas === sinh ⁡ ( x − y ) = sinh ⁡ x cosh ⁡ y − cosh ⁡ x sinh ⁡ y cosh ⁡ ( x − y ) = cosh ⁡ x cosh ⁡ y − sinh ⁡ x sinh ⁡ y tanh ⁡ ( x − y ) = tanh ⁡ x − tanh ⁡ y 1 − tanh ⁡ x tanh ⁡ y {\displaystyle {\begin{aligned}\sinh(x-y)&=\sinh x\cosh y-\cosh x\sinh y\\\cosh(x-y)&=\cosh x\cosh y-\sinh x\sinh y\\\tanh(x-y)&={\frac {\tanh x-\tanh y}{1-\tanh x\tanh y}}\\\end{aligned}}} Also: sinh ⁡ x − sinh ⁡ y = 2 cosh ⁡ ( x + y 2 ) sinh ⁡ ( x − y 2 ) cosh ⁡ x − cosh ⁡ y = 2 sinh ⁡ ( x + y 2 ) sinh ⁡ ( x − y 2 ) {\displaystyle {\begin{aligned}\sinh x-\sinh y&=2\cosh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\cosh x-\cosh 
y&=2\sinh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} === Half argument formulas === sinh ⁡ ( x 2 ) = sinh ⁡ x 2 ( cosh ⁡ x + 1 ) = sgn ⁡ x cosh ⁡ x − 1 2 cosh ⁡ ( x 2 ) = cosh ⁡ x + 1 2 tanh ⁡ ( x 2 ) = sinh ⁡ x cosh ⁡ x + 1 = sgn ⁡ x cosh ⁡ x − 1 cosh ⁡ x + 1 = e x − 1 e x + 1 {\displaystyle {\begin{aligned}\sinh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\sqrt {2(\cosh x+1)}}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{2}}}\\[6px]\cosh \left({\frac {x}{2}}\right)&={\sqrt {\frac {\cosh x+1}{2}}}\\[6px]\tanh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\cosh x+1}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{\cosh x+1}}}={\frac {e^{x}-1}{e^{x}+1}}\end{aligned}}} where sgn is the sign function. If x ≠ 0, then tanh ⁡ ( x 2 ) = cosh ⁡ x − 1 sinh ⁡ x = coth ⁡ x − csch ⁡ x {\displaystyle \tanh \left({\frac {x}{2}}\right)={\frac {\cosh x-1}{\sinh x}}=\coth x-\operatorname {csch} x} === Square formulas === sinh 2 ⁡ x = 1 2 ( cosh ⁡ 2 x − 1 ) cosh 2 ⁡ x = 1 2 ( cosh ⁡ 2 x + 1 ) {\displaystyle {\begin{aligned}\sinh ^{2}x&={\tfrac {1}{2}}(\cosh 2x-1)\\\cosh ^{2}x&={\tfrac {1}{2}}(\cosh 2x+1)\end{aligned}}} === Inequalities === The following inequality is useful in statistics: cosh ⁡ ( t ) ≤ e t 2 / 2 . {\displaystyle \operatorname {cosh} (t)\leq e^{t^{2}/2}.} It can be proved by comparing the Taylor series of the two functions term by term. 
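The sum-of-argument, half-argument, and inequality statements above can all be checked numerically. A short Python sketch (not part of the article; the sample points are arbitrary):

```python
import math

s, c, t = math.sinh, math.cosh, math.tanh

for x, y in ((0.3, 1.2), (-1.0, 2.5), (0.0, -0.7)):
    # sum-of-argument formulas
    assert math.isclose(s(x + y), s(x) * c(y) + c(x) * s(y), rel_tol=1e-9, abs_tol=1e-12)
    assert math.isclose(c(x + y), c(x) * c(y) + s(x) * s(y), rel_tol=1e-9)
    assert math.isclose(t(x + y), (t(x) + t(y)) / (1 + t(x) * t(y)),
                        rel_tol=1e-9, abs_tol=1e-12)

for x in (-2.0, 0.5, 4.0):
    # half-argument formula for cosh (always the positive square root)
    assert math.isclose(c(x / 2), math.sqrt((c(x) + 1) / 2), rel_tol=1e-12)
    # the statistical inequality cosh t <= exp(t^2 / 2)
    assert c(x) <= math.exp(x * x / 2) * (1 + 1e-12)
```

The tiny `(1 + 1e-12)` slack guards against rounding when the two sides are nearly equal (they coincide exactly at t = 0).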
== Inverse functions as logarithms == arsinh ⁡ ( x ) = ln ⁡ ( x + x 2 + 1 ) arcosh ⁡ ( x ) = ln ⁡ ( x + x 2 − 1 ) x ≥ 1 artanh ⁡ ( x ) = 1 2 ln ⁡ ( 1 + x 1 − x ) | x | < 1 arcoth ⁡ ( x ) = 1 2 ln ⁡ ( x + 1 x − 1 ) | x | > 1 arsech ⁡ ( x ) = ln ⁡ ( 1 x + 1 x 2 − 1 ) = ln ⁡ ( 1 + 1 − x 2 x ) 0 < x ≤ 1 arcsch ⁡ ( x ) = ln ⁡ ( 1 x + 1 x 2 + 1 ) x ≠ 0 {\displaystyle {\begin{aligned}\operatorname {arsinh} (x)&=\ln \left(x+{\sqrt {x^{2}+1}}\right)\\\operatorname {arcosh} (x)&=\ln \left(x+{\sqrt {x^{2}-1}}\right)&&x\geq 1\\\operatorname {artanh} (x)&={\frac {1}{2}}\ln \left({\frac {1+x}{1-x}}\right)&&|x|<1\\\operatorname {arcoth} (x)&={\frac {1}{2}}\ln \left({\frac {x+1}{x-1}}\right)&&|x|>1\\\operatorname {arsech} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}-1}}\right)=\ln \left({\frac {1+{\sqrt {1-x^{2}}}}{x}}\right)&&0<x\leq 1\\\operatorname {arcsch} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}+1}}\right)&&x\neq 0\end{aligned}}} == Derivatives == d d x sinh ⁡ x = cosh ⁡ x d d x cosh ⁡ x = sinh ⁡ x d d x tanh ⁡ x = 1 − tanh 2 ⁡ x = sech 2 ⁡ x = 1 cosh 2 ⁡ x d d x coth ⁡ x = 1 − coth 2 ⁡ x = − csch 2 ⁡ x = − 1 sinh 2 ⁡ x x ≠ 0 d d x sech ⁡ x = − tanh ⁡ x sech ⁡ x d d x csch ⁡ x = − coth ⁡ x csch ⁡ x x ≠ 0 {\displaystyle {\begin{aligned}{\frac {d}{dx}}\sinh x&=\cosh x\\{\frac {d}{dx}}\cosh x&=\sinh x\\{\frac {d}{dx}}\tanh x&=1-\tanh ^{2}x=\operatorname {sech} ^{2}x={\frac {1}{\cosh ^{2}x}}\\{\frac {d}{dx}}\coth x&=1-\coth ^{2}x=-\operatorname {csch} ^{2}x=-{\frac {1}{\sinh ^{2}x}}&&x\neq 0\\{\frac {d}{dx}}\operatorname {sech} x&=-\tanh x\operatorname {sech} x\\{\frac {d}{dx}}\operatorname {csch} x&=-\coth x\operatorname {csch} x&&x\neq 0\end{aligned}}} d d x arsinh ⁡ x = 1 x 2 + 1 d d x arcosh ⁡ x = 1 x 2 − 1 1 < x d d x artanh ⁡ x = 1 1 − x 2 | x | < 1 d d x arcoth ⁡ x = 1 1 − x 2 1 < | x | d d x arsech ⁡ x = − 1 x 1 − x 2 0 < x < 1 d d x arcsch ⁡ x = − 1 | x | 1 + x 2 x ≠ 0 {\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arsinh} 
x&={\frac {1}{\sqrt {x^{2}+1}}}\\{\frac {d}{dx}}\operatorname {arcosh} x&={\frac {1}{\sqrt {x^{2}-1}}}&&1<x\\{\frac {d}{dx}}\operatorname {artanh} x&={\frac {1}{1-x^{2}}}&&|x|<1\\{\frac {d}{dx}}\operatorname {arcoth} x&={\frac {1}{1-x^{2}}}&&1<|x|\\{\frac {d}{dx}}\operatorname {arsech} x&=-{\frac {1}{x{\sqrt {1-x^{2}}}}}&&0<x<1\\{\frac {d}{dx}}\operatorname {arcsch} x&=-{\frac {1}{|x|{\sqrt {1+x^{2}}}}}&&x\neq 0\end{aligned}}} == Second derivatives == Each of the functions sinh and cosh is equal to its second derivative, that is: d 2 d x 2 sinh ⁡ x = sinh ⁡ x {\displaystyle {\frac {d^{2}}{dx^{2}}}\sinh x=\sinh x} d 2 d x 2 cosh ⁡ x = cosh ⁡ x . {\displaystyle {\frac {d^{2}}{dx^{2}}}\cosh x=\cosh x\,.} All functions with this property are linear combinations of sinh and cosh, in particular the exponential functions e x {\displaystyle e^{x}} and e − x {\displaystyle e^{-x}} . == Standard integrals == ∫ sinh ⁡ ( a x ) d x = a − 1 cosh ⁡ ( a x ) + C ∫ cosh ⁡ ( a x ) d x = a − 1 sinh ⁡ ( a x ) + C ∫ tanh ⁡ ( a x ) d x = a − 1 ln ⁡ ( cosh ⁡ ( a x ) ) + C ∫ coth ⁡ ( a x ) d x = a − 1 ln ⁡ | sinh ⁡ ( a x ) | + C ∫ sech ⁡ ( a x ) d x = a − 1 arctan ⁡ ( sinh ⁡ ( a x ) ) + C ∫ csch ⁡ ( a x ) d x = a − 1 ln ⁡ | tanh ⁡ ( a x 2 ) | + C = a − 1 ln ⁡ | coth ⁡ ( a x ) − csch ⁡ ( a x ) | + C = − a − 1 arcoth ⁡ ( cosh ⁡ ( a x ) ) + C {\displaystyle {\begin{aligned}\int \sinh(ax)\,dx&=a^{-1}\cosh(ax)+C\\\int \cosh(ax)\,dx&=a^{-1}\sinh(ax)+C\\\int \tanh(ax)\,dx&=a^{-1}\ln(\cosh(ax))+C\\\int \coth(ax)\,dx&=a^{-1}\ln \left|\sinh(ax)\right|+C\\\int \operatorname {sech} (ax)\,dx&=a^{-1}\arctan(\sinh(ax))+C\\\int \operatorname {csch} (ax)\,dx&=a^{-1}\ln \left|\tanh \left({\frac {ax}{2}}\right)\right|+C=a^{-1}\ln \left|\coth \left(ax\right)-\operatorname {csch} \left(ax\right)\right|+C=-a^{-1}\operatorname {arcoth} \left(\cosh \left(ax\right)\right)+C\end{aligned}}} The following integrals can be proved using hyperbolic substitution: ∫ 1 a 2 + u 2 d u = arsinh ⁡ ( u a ) + C ∫ 1 u 2 − a 2 d u 
= sgn ⁡ u arcosh ⁡ | u a | + C ∫ 1 a 2 − u 2 d u = a − 1 artanh ⁡ ( u a ) + C u 2 < a 2 ∫ 1 a 2 − u 2 d u = a − 1 arcoth ⁡ ( u a ) + C u 2 > a 2 ∫ 1 u a 2 − u 2 d u = − a − 1 arsech ⁡ | u a | + C ∫ 1 u a 2 + u 2 d u = − a − 1 arcsch ⁡ | u a | + C {\displaystyle {\begin{aligned}\int {{\frac {1}{\sqrt {a^{2}+u^{2}}}}\,du}&=\operatorname {arsinh} \left({\frac {u}{a}}\right)+C\\\int {{\frac {1}{\sqrt {u^{2}-a^{2}}}}\,du}&=\operatorname {sgn} {u}\operatorname {arcosh} \left|{\frac {u}{a}}\right|+C\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {artanh} \left({\frac {u}{a}}\right)+C&&u^{2}<a^{2}\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {arcoth} \left({\frac {u}{a}}\right)+C&&u^{2}>a^{2}\\\int {{\frac {1}{u{\sqrt {a^{2}-u^{2}}}}}\,du}&=-a^{-1}\operatorname {arsech} \left|{\frac {u}{a}}\right|+C\\\int {{\frac {1}{u{\sqrt {a^{2}+u^{2}}}}}\,du}&=-a^{-1}\operatorname {arcsch} \left|{\frac {u}{a}}\right|+C\end{aligned}}} where C is the constant of integration. == Taylor series expressions == It is possible to express explicitly the Taylor series at zero (or the Laurent series, if the function is not defined at zero) of the above functions. sinh ⁡ x = x + x 3 3 ! + x 5 5 ! + x 7 7 ! + ⋯ = ∑ n = 0 ∞ x 2 n + 1 ( 2 n + 1 ) ! {\displaystyle \sinh x=x+{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}+{\frac {x^{7}}{7!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{(2n+1)!}}} This series is convergent for every complex value of x. Since the function sinh x is odd, only odd exponents for x occur in its Taylor series. cosh ⁡ x = 1 + x 2 2 ! + x 4 4 ! + x 6 6 ! + ⋯ = ∑ n = 0 ∞ x 2 n ( 2 n ) ! {\displaystyle \cosh x=1+{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}+{\frac {x^{6}}{6!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}} This series is convergent for every complex value of x. Since the function cosh x is even, only even exponents for x occur in its Taylor series. The sum of the sinh and cosh series is the infinite series expression of the exponential function. 
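The Taylor series for sinh and cosh converge rapidly, and their sum reproduces the exponential series. The following sketch (illustrative; 20 terms is an arbitrary truncation that is far more than enough at these arguments) compares partial sums against the library functions:

```python
import math

def sinh_series(x: float, terms: int = 20) -> float:
    # partial sum of sum_{n>=0} x^(2n+1) / (2n+1)!
    return sum(x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def cosh_series(x: float, terms: int = 20) -> float:
    # partial sum of sum_{n>=0} x^(2n) / (2n)!
    return sum(x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

for x in (-3.0, 0.1, 2.0):
    assert math.isclose(sinh_series(x), math.sinh(x), rel_tol=1e-9, abs_tol=1e-12)
    assert math.isclose(cosh_series(x), math.cosh(x), rel_tol=1e-9)
    # the odd and even series together give the exponential series
    assert math.isclose(sinh_series(x) + cosh_series(x), math.exp(x), rel_tol=1e-9)
```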
The following series are followed by a description of a subset of their domain of convergence, where the series is convergent and its sum equals the function. tanh ⁡ x = x − x 3 3 + 2 x 5 15 − 17 x 7 315 + ⋯ = ∑ n = 1 ∞ 2 2 n ( 2 2 n − 1 ) B 2 n x 2 n − 1 ( 2 n ) ! , | x | < π 2 coth ⁡ x = x − 1 + x 3 − x 3 45 + 2 x 5 945 + ⋯ = ∑ n = 0 ∞ 2 2 n B 2 n x 2 n − 1 ( 2 n ) ! , 0 < | x | < π sech ⁡ x = 1 − x 2 2 + 5 x 4 24 − 61 x 6 720 + ⋯ = ∑ n = 0 ∞ E 2 n x 2 n ( 2 n ) ! , | x | < π 2 csch ⁡ x = x − 1 − x 6 + 7 x 3 360 − 31 x 5 15120 + ⋯ = ∑ n = 0 ∞ 2 ( 1 − 2 2 n − 1 ) B 2 n x 2 n − 1 ( 2 n ) ! , 0 < | x | < π {\displaystyle {\begin{aligned}\tanh x&=x-{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}-{\frac {17x^{7}}{315}}+\cdots =\sum _{n=1}^{\infty }{\frac {2^{2n}(2^{2n}-1)B_{2n}x^{2n-1}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\coth x&=x^{-1}+{\frac {x}{3}}-{\frac {x^{3}}{45}}+{\frac {2x^{5}}{945}}+\cdots =\sum _{n=0}^{\infty }{\frac {2^{2n}B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \\\operatorname {sech} x&=1-{\frac {x^{2}}{2}}+{\frac {5x^{4}}{24}}-{\frac {61x^{6}}{720}}+\cdots =\sum _{n=0}^{\infty }{\frac {E_{2n}x^{2n}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\operatorname {csch} x&=x^{-1}-{\frac {x}{6}}+{\frac {7x^{3}}{360}}-{\frac {31x^{5}}{15120}}+\cdots =\sum _{n=0}^{\infty }{\frac {2(1-2^{2n-1})B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \end{aligned}}} where: B n {\displaystyle B_{n}} is the nth Bernoulli number E n {\displaystyle E_{n}} is the nth Euler number == Infinite products and continued fractions == The following expansions are valid in the whole complex plane: sinh ⁡ x = x ∏ n = 1 ∞ ( 1 + x 2 n 2 π 2 ) = x 1 − x 2 2 ⋅ 3 + x 2 − 2 ⋅ 3 x 2 4 ⋅ 5 + x 2 − 4 ⋅ 5 x 2 6 ⋅ 7 + x 2 − ⋱ {\displaystyle \sinh x=x\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{n^{2}\pi ^{2}}}\right)={\cfrac {x}{1-{\cfrac {x^{2}}{2\cdot 3+x^{2}-{\cfrac {2\cdot 3x^{2}}{4\cdot 5+x^{2}-{\cfrac {4\cdot 5x^{2}}{6\cdot 7+x^{2}-\ddots }}}}}}}}} cosh ⁡ x = ∏ n = 1 ∞ 
( 1 + x 2 ( n − 1 / 2 ) 2 π 2 ) = 1 1 − x 2 1 ⋅ 2 + x 2 − 1 ⋅ 2 x 2 3 ⋅ 4 + x 2 − 3 ⋅ 4 x 2 5 ⋅ 6 + x 2 − ⋱ {\displaystyle \cosh x=\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{(n-1/2)^{2}\pi ^{2}}}\right)={\cfrac {1}{1-{\cfrac {x^{2}}{1\cdot 2+x^{2}-{\cfrac {1\cdot 2x^{2}}{3\cdot 4+x^{2}-{\cfrac {3\cdot 4x^{2}}{5\cdot 6+x^{2}-\ddots }}}}}}}}} tanh ⁡ x = 1 1 x + 1 3 x + 1 5 x + 1 7 x + ⋱ {\displaystyle \tanh x={\cfrac {1}{{\cfrac {1}{x}}+{\cfrac {1}{{\cfrac {3}{x}}+{\cfrac {1}{{\cfrac {5}{x}}+{\cfrac {1}{{\cfrac {7}{x}}+\ddots }}}}}}}}} == Comparison with circular functions == The hyperbolic functions represent an expansion of trigonometry beyond the circular functions. Both types depend on an argument, either circular angle or hyperbolic angle. Since the area of a circular sector with radius r and angle u (in radians) is r2u/2, it will be equal to u when r = √2. In the diagram, such a circle is tangent to the hyperbola xy = 1 at (1,1). The yellow sector depicts an area and angle magnitude. Similarly, the yellow and red regions together depict a hyperbolic sector with area corresponding to hyperbolic angle magnitude. The legs of the two right triangles with hypotenuse on the ray defining the angles are of length √2 times the circular and hyperbolic functions. The hyperbolic angle is an invariant measure with respect to the squeeze mapping, just as the circular angle is invariant under rotation. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic functions that does not involve complex numbers. The graph of the function a cosh(x/a) is the catenary, the curve formed by a uniform flexible chain, hanging freely between two fixed points under uniform gravity. == Relationship to the exponential function == The decomposition of the exponential function in its even and odd parts gives the identities e x = cosh ⁡ x + sinh ⁡ x , {\displaystyle e^{x}=\cosh x+\sinh x,} and e − x = cosh ⁡ x − sinh ⁡ x . 
{\displaystyle e^{-x}=\cosh x-\sinh x.} Combined with Euler's formula e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} this gives e x + i y = ( cosh ⁡ x + sinh ⁡ x ) ( cos ⁡ y + i sin ⁡ y ) {\displaystyle e^{x+iy}=(\cosh x+\sinh x)(\cos y+i\sin y)} for the general complex exponential function. Additionally, e x = 1 + tanh ⁡ x 1 − tanh ⁡ x = 1 + tanh ⁡ x 2 1 − tanh ⁡ x 2 {\displaystyle e^{x}={\sqrt {\frac {1+\tanh x}{1-\tanh x}}}={\frac {1+\tanh {\frac {x}{2}}}{1-\tanh {\frac {x}{2}}}}} == Hyperbolic functions for complex numbers == Since the exponential function can be defined for any complex argument, we can also extend the definitions of the hyperbolic functions to complex arguments. The functions sinh z and cosh z are then holomorphic. Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers: e i x = cos ⁡ x + i sin ⁡ x e − i x = cos ⁡ x − i sin ⁡ x {\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x\\e^{-ix}&=\cos x-i\sin x\end{aligned}}} so: cosh ⁡ ( i x ) = 1 2 ( e i x + e − i x ) = cos ⁡ x sinh ⁡ ( i x ) = 1 2 ( e i x − e − i x ) = i sin ⁡ x cosh ⁡ ( x + i y ) = cosh ⁡ ( x ) cos ⁡ ( y ) + i sinh ⁡ ( x ) sin ⁡ ( y ) sinh ⁡ ( x + i y ) = sinh ⁡ ( x ) cos ⁡ ( y ) + i cosh ⁡ ( x ) sin ⁡ ( y ) tanh ⁡ ( i x ) = i tan ⁡ x cosh ⁡ x = cos ⁡ ( i x ) sinh ⁡ x = − i sin ⁡ ( i x ) tanh ⁡ x = − i tan ⁡ ( i x ) {\displaystyle {\begin{aligned}\cosh(ix)&={\frac {1}{2}}\left(e^{ix}+e^{-ix}\right)=\cos x\\\sinh(ix)&={\frac {1}{2}}\left(e^{ix}-e^{-ix}\right)=i\sin x\\\cosh(x+iy)&=\cosh(x)\cos(y)+i\sinh(x)\sin(y)\\\sinh(x+iy)&=\sinh(x)\cos(y)+i\cosh(x)\sin(y)\\\tanh(ix)&=i\tan x\\\cosh x&=\cos(ix)\\\sinh x&=-i\sin(ix)\\\tanh x&=-i\tan(ix)\end{aligned}}} Thus, hyperbolic functions are periodic with respect to the imaginary component, with period 2 π i {\displaystyle 2\pi i} ( π i {\displaystyle \pi i} for hyperbolic tangent and cotangent). 
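The complex-argument identities and the imaginary periodicity can be spot-checked with Python's `cmath` module (an illustrative sketch; the sample point z is arbitrary):

```python
import cmath
import math

z = 0.7 + 1.3j
x, y = z.real, z.imag

# sinh(x + iy) = sinh x cos y + i cosh x sin y, and similarly for cosh
assert cmath.isclose(cmath.sinh(z),
                     complex(math.sinh(x) * math.cos(y), math.cosh(x) * math.sin(y)),
                     rel_tol=1e-12)
assert cmath.isclose(cmath.cosh(z),
                     complex(math.cosh(x) * math.cos(y), math.sinh(x) * math.sin(y)),
                     rel_tol=1e-12)

# relation to circular functions: sinh x = -i sin(ix)
assert cmath.isclose(cmath.sinh(0.9), -1j * cmath.sin(0.9j), rel_tol=1e-12)

# periodicity: period 2*pi*i for sinh and cosh, pi*i for tanh
period = 2j * math.pi
assert cmath.isclose(cmath.sinh(z + period), cmath.sinh(z), rel_tol=1e-9)
assert cmath.isclose(cmath.cosh(z + period), cmath.cosh(z), rel_tol=1e-9)
assert cmath.isclose(cmath.tanh(z + 1j * math.pi), cmath.tanh(z), rel_tol=1e-9)
```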
== See also == e (mathematical constant) Equal incircles theorem, based on sinh Hyperbolastic functions Hyperbolic growth Inverse hyperbolic functions List of integrals of hyperbolic functions Poinsot's spirals Sigmoid function Soboleva modified hyperbolic tangent Trigonometric functions == References == == External links == "Hyperbolic functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Hyperbolic functions on PlanetMath GonioLab: Visualization of the unit circle, trigonometric and hyperbolic functions (Java Web Start) Web-based calculator of hyperbolic functions
Wikipedia/Hyperbolic_function
An algebraic number is a number that is a root of a non-zero polynomial in one variable with integer (or, equivalently, rational) coefficients. For example, the golden ratio, ( 1 + 5 ) / 2 {\displaystyle (1+{\sqrt {5}})/2} , is an algebraic number, because it is a root of the polynomial x2 − x − 1. That is, it is a value for x for which the polynomial evaluates to zero. As another example, the complex number 1 + i {\displaystyle 1+i} is algebraic because it is a root of x4 + 4. All integers and rational numbers are algebraic, as are all roots of integers. Real and complex numbers that are not algebraic, such as π and e, are called transcendental numbers. The set of algebraic (complex) numbers is countably infinite and has measure zero in the Lebesgue measure as a subset of the uncountable complex numbers. In that sense, almost all complex numbers are transcendental. Similarly, the set of algebraic (real) numbers is countably infinite and has Lebesgue measure zero as a subset of the real numbers, and in that sense almost all real numbers are transcendental. == Examples == All rational numbers are algebraic. Any rational number, expressed as the quotient of an integer a and a (non-zero) natural number b, satisfies the above definition, because x = ⁠a/b⁠ is the root of a non-zero polynomial, namely bx − a. Quadratic irrational numbers, irrational solutions of a quadratic polynomial ax2 + bx + c with integer coefficients a, b, and c, are algebraic numbers. If the quadratic polynomial is monic (a = 1), the roots are further qualified as quadratic integers. Gaussian integers, complex numbers a + bi for which both a and b are integers, are also quadratic integers. This is because a + bi and a − bi are the two roots of the quadratic x2 − 2ax + a2 + b2. A constructible number can be constructed from a given unit length using a straightedge and compass. 
It includes all quadratic irrational roots, all rational numbers, and all numbers that can be formed from these using the basic arithmetic operations and the extraction of square roots. (By designating cardinal directions for +1, −1, +i, and −i, complex numbers such as 3 + i 2 {\displaystyle 3+i{\sqrt {2}}} are considered constructible.) Any expression formed from algebraic numbers using any finite combination of the basic arithmetic operations and extraction of nth roots gives another algebraic number. Roots of polynomials that cannot be expressed in terms of the basic arithmetic operations and extraction of nth roots (such as the roots of x5 − x + 1) are nevertheless algebraic; this happens with many but not all polynomials of degree 5 or higher. Values of trigonometric functions at rational multiples of π (except when undefined) are algebraic: for example, cos ⁠π/7⁠, cos ⁠3π/7⁠, and cos ⁠5π/7⁠ satisfy 8x3 − 4x2 − 4x + 1 = 0. This polynomial is irreducible over the rationals and so the three cosines are conjugate algebraic numbers. Likewise, tan ⁠3π/16⁠, tan ⁠7π/16⁠, tan ⁠11π/16⁠, and tan ⁠15π/16⁠ satisfy the irreducible polynomial x4 − 4x3 − 6x2 + 4x + 1 = 0, and so are conjugate algebraic integers. These are exactly the angles which, when measured in degrees, have rational measure. Some but not all irrational numbers are algebraic: The numbers 2 {\displaystyle {\sqrt {2}}} and 3 3 2 {\displaystyle {\frac {\sqrt[{3}]{3}}{2}}} are algebraic since they are roots of polynomials x2 − 2 and 8x3 − 3, respectively. The golden ratio φ is algebraic since it is a root of the polynomial x2 − x − 1. The numbers π and e are not algebraic numbers (see the Lindemann–Weierstrass theorem). == Properties == If a polynomial with rational coefficients is multiplied through by the least common denominator, the resulting polynomial with integer coefficients has the same roots. This shows that an algebraic number can be equivalently defined as a root of a polynomial with either integer or rational coefficients. 
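The denominator-clearing step just described can be sketched with exact rational arithmetic. The polynomial below is an arbitrary example chosen for illustration (it is not from the article); note that `math.lcm` with multiple arguments requires Python 3.9 or later:

```python
from fractions import Fraction
import math

# p(x) = x^2 - x/2 - 1/2 has rational coefficients; multiplying through by
# the least common denominator (here 2) gives 2x^2 - x - 1 with the same roots
rational_coeffs = [Fraction(1), Fraction(-1, 2), Fraction(-1, 2)]
lcd = math.lcm(*(c.denominator for c in rational_coeffs))
integer_coeffs = [int(c * lcd) for c in rational_coeffs]
assert integer_coeffs == [2, -1, -1]

def eval_poly(coeffs, x):
    # Horner evaluation; coefficients run from highest degree to constant term
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# both polynomials vanish at the same roots, 1 and -1/2
for root in (1, Fraction(-1, 2)):
    assert eval_poly(rational_coeffs, root) == 0
    assert eval_poly(integer_coeffs, root) == 0
```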
Given an algebraic number, there is a unique monic polynomial with rational coefficients of least degree that has the number as a root. This polynomial is called its minimal polynomial. If its minimal polynomial has degree n, then the algebraic number is said to be of degree n. For example, all rational numbers have degree 1, and an algebraic number of degree 2 is a quadratic irrational. The algebraic numbers are dense in the reals. This follows from the fact they contain the rational numbers, which are dense in the reals themselves. The set of algebraic numbers is countable, and therefore its Lebesgue measure as a subset of the complex numbers is 0 (essentially, the algebraic numbers take up no space in the complex numbers). That is to say, "almost all" real and complex numbers are transcendental. All algebraic numbers are computable and therefore definable and arithmetical. For real numbers a and b, the complex number a + bi is algebraic if and only if both a and b are algebraic. === Degree of simple extensions of the rationals as a criterion to algebraicity === For any α, the simple extension of the rationals by α, denoted by Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} (whose elements are the f ( α ) {\displaystyle f(\alpha )} for f {\displaystyle f} a rational function with rational coefficients which is defined at α {\displaystyle \alpha } ), is of finite degree if and only if α is an algebraic number. 
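For concreteness, some of the minimal polynomials mentioned in this article can be spot-checked numerically (an illustrative sketch, with floating-point tolerances in place of exact arithmetic):

```python
import math

# golden ratio: root of its minimal polynomial x^2 - x - 1 (degree 2,
# so it is a quadratic irrational)
phi = (1 + math.sqrt(5)) / 2
assert abs(phi ** 2 - phi - 1) < 1e-12

# sqrt(2): root of x^2 - 2, also of degree 2
r = math.sqrt(2)
assert abs(r ** 2 - 2) < 1e-12

# 1 + i: root of x^4 + 4, consistent with a + bi being algebraic
# whenever a and b are
z = 1 + 1j
assert abs(z ** 4 + 4) < 1e-12
```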
The condition of finite degree means that there is a finite set { a i | 1 ≤ i ≤ k } {\displaystyle \{a_{i}|1\leq i\leq k\}} in Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} such that Q ( α ) = ∑ i = 1 k a i Q {\displaystyle \mathbb {Q} (\alpha )=\sum _{i=1}^{k}a_{i}\mathbb {Q} } ; that is, every member in Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} can be written as ∑ i = 1 k a i q i {\displaystyle \sum _{i=1}^{k}a_{i}q_{i}} for some rational numbers { q i | 1 ≤ i ≤ k } {\displaystyle \{q_{i}|1\leq i\leq k\}} (note that the set { a i } {\displaystyle \{a_{i}\}} is fixed). Indeed, since the a i − s {\displaystyle a_{i}-s} are themselves members of Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} , each can be expressed as sums of products of rational numbers and powers of α, and therefore this condition is equivalent to the requirement that for some finite n {\displaystyle n} , Q ( α ) = { ∑ i = − n n α i q i | q i ∈ Q } {\displaystyle \mathbb {Q} (\alpha )=\{\sum _{i=-n}^{n}\alpha ^{i}q_{i}|q_{i}\in \mathbb {Q} \}} . The latter condition is equivalent to α n + 1 {\displaystyle \alpha ^{n+1}} , itself a member of Q ( α ) {\displaystyle \mathbb {Q} (\alpha )} , being expressible as ∑ i = − n n α i q i {\displaystyle \sum _{i=-n}^{n}\alpha ^{i}q_{i}} for some rationals { q i } {\displaystyle \{q_{i}\}} , so α 2 n + 1 = ∑ i = 0 2 n α i q i − n {\displaystyle \alpha ^{2n+1}=\sum _{i=0}^{2n}\alpha ^{i}q_{i-n}} or, equivalently, α is a root of x 2 n + 1 − ∑ i = 0 2 n x i q i − n {\displaystyle x^{2n+1}-\sum _{i=0}^{2n}x^{i}q_{i-n}} ; that is, an algebraic number with a minimal polynomial of degree not larger than 2 n + 1 {\displaystyle 2n+1} . It can similarly be proven that for any finite set of algebraic numbers α 1 {\displaystyle \alpha _{1}} , α 2 {\displaystyle \alpha _{2}} ... α n {\displaystyle \alpha _{n}} , the field extension Q ( α 1 , α 2 , . . . α n ) {\displaystyle \mathbb {Q} (\alpha _{1},\alpha _{2},...\alpha _{n})} has a finite degree. 
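As a concrete instance of this finite-degree argument: the sum and product of the algebraic numbers √2 and √3 are again algebraic. The classical witness polynomial x⁴ − 10x² + 1 for √2 + √3 is checked numerically below (an illustrative sketch):

```python
import math

# sqrt(2) and sqrt(3) are roots of x^2 - 2 and x^2 - 3; their sum is a root
# of the degree-4 integer polynomial x^4 - 10x^2 + 1, hence also algebraic
x = math.sqrt(2) + math.sqrt(3)
assert abs(x ** 4 - 10 * x ** 2 + 1) < 1e-9

# their product sqrt(6) is a root of x^2 - 6
p = math.sqrt(2) * math.sqrt(3)
assert abs(p ** 2 - 6) < 1e-9
```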
== Field == The sum, difference, product, and quotient (if the denominator is nonzero) of two algebraic numbers is again algebraic: For any two algebraic numbers α, β, this follows directly from the fact that the simple extension Q ( γ ) {\displaystyle \mathbb {Q} (\gamma )} , for γ {\displaystyle \gamma } being either α + β {\displaystyle \alpha +\beta } , α − β {\displaystyle \alpha -\beta } , α β {\displaystyle \alpha \beta } or (for β ≠ 0 {\displaystyle \beta \neq 0} ) α / β {\displaystyle \alpha /\beta } , is a linear subspace of the finite-degree field extension Q ( α , β ) {\displaystyle \mathbb {Q} (\alpha ,\beta )} , and therefore has a finite degree itself, from which it follows (as shown above) that γ {\displaystyle \gamma } is algebraic. An alternative way of showing this is constructively, by using the resultant. Algebraic numbers thus form a field Q ¯ {\displaystyle {\overline {\mathbb {Q} }}} (sometimes denoted by A {\displaystyle \mathbb {A} } , but that usually denotes the adele ring). === Algebraic closure === Every root of a polynomial equation whose coefficients are algebraic numbers is again algebraic. That can be rephrased by saying that the field of algebraic numbers is algebraically closed. In fact, it is the smallest algebraically closed field containing the rationals and so it is called the algebraic closure of the rationals. That the field of algebraic numbers is algebraically closed can be proven as follows: Let β be a root of a polynomial α 0 + α 1 x + α 2 x 2 . . . + α n x n {\displaystyle \alpha _{0}+\alpha _{1}x+\alpha _{2}x^{2}...+\alpha _{n}x^{n}} with coefficients that are algebraic numbers α 0 {\displaystyle \alpha _{0}} , α 1 {\displaystyle \alpha _{1}} , α 2 {\displaystyle \alpha _{2}} ... α n {\displaystyle \alpha _{n}} . The field extension Q ′ ≡ Q ( α 1 , α 2 , . . . 
α n ) {\displaystyle \mathbb {Q} ^{\prime }\equiv \mathbb {Q} (\alpha _{1},\alpha _{2},...\alpha _{n})} then has a finite degree with respect to Q {\displaystyle \mathbb {Q} } . The simple extension Q ′ ( β ) {\displaystyle \mathbb {Q} ^{\prime }(\beta )} then has a finite degree with respect to Q ′ {\displaystyle \mathbb {Q} ^{\prime }} (since all powers of β can be expressed by powers of up to β n − 1 {\displaystyle \beta ^{n-1}} ). Therefore, Q ′ ( β ) = Q ( β , α 1 , α 2 , . . . α n ) {\displaystyle \mathbb {Q} ^{\prime }(\beta )=\mathbb {Q} (\beta ,\alpha _{1},\alpha _{2},...\alpha _{n})} also has a finite degree with respect to Q {\displaystyle \mathbb {Q} } . Since Q ( β ) {\displaystyle \mathbb {Q} (\beta )} is a linear subspace of Q ′ ( β ) {\displaystyle \mathbb {Q} ^{\prime }(\beta )} , it must also have a finite degree with respect to Q {\displaystyle \mathbb {Q} } , so β must be an algebraic number. == Related fields == === Numbers defined by radicals === Any number that can be obtained from the integers using a finite number of additions, subtractions, multiplications, divisions, and taking (possibly complex) nth roots where n is a positive integer are algebraic. The converse, however, is not true: there are algebraic numbers that cannot be obtained in this manner. These numbers are roots of polynomials of degree 5 or higher, a result of Galois theory (see Quintic equations and the Abel–Ruffini theorem). For example, the equation: x 5 − x − 1 = 0 {\displaystyle x^{5}-x-1=0} has a unique real root, ≈ 1.1673, that cannot be expressed in terms of only radicals and arithmetic operations. === Closed-form number === Algebraic numbers are all numbers that can be defined explicitly or implicitly in terms of polynomials, starting from the rational numbers. One may generalize this to "closed-form numbers", which may be defined in various ways. 
Most broadly, all numbers that can be defined explicitly or implicitly in terms of polynomials, exponentials, and logarithms are called "elementary numbers", and these include the algebraic numbers, plus some transcendental numbers. Most narrowly, one may consider numbers explicitly defined in terms of polynomials, exponentials, and logarithms – this does not include all algebraic numbers, but does include some simple transcendental numbers such as e or ln 2. == Algebraic integers == An algebraic integer is an algebraic number that is a root of a polynomial with integer coefficients with leading coefficient 1 (a monic polynomial). Examples of algebraic integers are 5 + 13 2 , {\displaystyle 5+13{\sqrt {2}},} 2 − 6 i , {\displaystyle 2-6i,} and 1 2 ( 1 + i 3 ) . {\textstyle {\frac {1}{2}}(1+i{\sqrt {3}}).} Therefore, the algebraic integers constitute a proper superset of the integers, as the latter are the roots of monic polynomials x − k for all k ∈ Z {\displaystyle k\in \mathbb {Z} } . In this sense, algebraic integers are to algebraic numbers what integers are to rational numbers. The sum, difference and product of algebraic integers are again algebraic integers, which means that the algebraic integers form a ring. The name algebraic integer comes from the fact that the only rational numbers that are algebraic integers are the integers, and because the algebraic integers in any number field are in many ways analogous to the integers. If K is a number field, its ring of integers is the subring of algebraic integers in K, and is frequently denoted as OK. These are the prototypical examples of Dedekind domains. 
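The examples of algebraic integers listed above can be verified against monic integer polynomials. In the check below (illustrative only), the polynomial x² − 4x + 40 for 2 − 6i is obtained from the conjugate pair 2 ± 6i, as in the earlier Gaussian-integer discussion:

```python
import math

# (1 + i*sqrt(3))/2 is an algebraic integer: a root of the monic
# integer polynomial x^2 - x + 1
w = complex(0.5, math.sqrt(3) / 2)
assert abs(w ** 2 - w + 1) < 1e-12

# 2 - 6i is a root of the monic polynomial x^2 - 4x + 40
# (sum of conjugates 4, product 2^2 + 6^2 = 40)
u = 2 - 6j
assert abs(u ** 2 - 4 * u + 40) < 1e-12
```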
== Special classes == Algebraic solution Gaussian integer Eisenstein integer Quadratic irrational number Fundamental unit Root of unity Gaussian period Pisot–Vijayaraghavan number Salem number == Notes == == References == Artin, Michael (1991), Algebra, Prentice Hall, ISBN 0-13-004763-5, MR 1129886 Garibaldi, Skip (June 2008), "Somewhat more than governors need to know about trigonometry", Mathematics Magazine, 81 (3): 191–200, doi:10.1080/0025570x.2008.11953548, JSTOR 27643106 Hardy, Godfrey Harold; Wright, Edward M. (1972), An introduction to the theory of numbers (5th ed.), Oxford: Clarendon, ISBN 0-19-853171-0 Ireland, Kenneth; Rosen, Michael (1990) [1st ed. 1982], A Classical Introduction to Modern Number Theory (2nd ed.), Berlin: Springer, doi:10.1007/978-1-4757-2103-4, ISBN 0-387-97329-X, MR 1070716 Lang, Serge (2002) [1st ed. 1965], Algebra (3rd ed.), New York: Springer, ISBN 978-0-387-95385-4, MR 1878556 Niven, Ivan M. (1956), Irrational Numbers, Mathematical Association of America Ore, Øystein (1948), Number Theory and Its History, New York: McGraw-Hill
Wikipedia/Algebraic_numbers
In mathematics, the exponential function is the unique real function which maps zero to one and has a derivative everywhere equal to its value. The exponential of a variable ⁠ x {\displaystyle x} ⁠ is denoted ⁠ exp ⁡ x {\displaystyle \exp x} ⁠ or ⁠ e x {\displaystyle e^{x}} ⁠, with the two notations used interchangeably. It is called exponential because its argument can be seen as an exponent to which a constant number e ≈ 2.718, the base, is raised. There are several other definitions of the exponential function, which are all equivalent although being of very different nature. The exponential function converts sums to products: it maps the additive identity 0 to the multiplicative identity 1, and the exponential of a sum is equal to the product of separate exponentials, ⁠ exp ⁡ ( x + y ) = exp ⁡ x ⋅ exp ⁡ y {\displaystyle \exp(x+y)=\exp x\cdot \exp y} ⁠. Its inverse function, the natural logarithm, ⁠ ln {\displaystyle \ln } ⁠ or ⁠ log {\displaystyle \log } ⁠, converts products to sums: ⁠ ln ⁡ ( x ⋅ y ) = ln ⁡ x + ln ⁡ y {\displaystyle \ln(x\cdot y)=\ln x+\ln y} ⁠. The exponential function is occasionally called the natural exponential function, matching the name natural logarithm, for distinguishing it from some other functions that are also commonly called exponential functions. These functions include the functions of the form ⁠ f ( x ) = b x {\displaystyle f(x)=b^{x}} ⁠, which is exponentiation with a fixed base ⁠ b {\displaystyle b} ⁠. More generally, and especially in applications, functions of the general form ⁠ f ( x ) = a b x {\displaystyle f(x)=ab^{x}} ⁠ are also called exponential functions. They grow or decay exponentially in that the rate that ⁠ f ( x ) {\displaystyle f(x)} ⁠ changes when ⁠ x {\displaystyle x} ⁠ is increased is proportional to the current value of ⁠ f ( x ) {\displaystyle f(x)} ⁠. The exponential function can be generalized to accept complex numbers as arguments. 
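The sums-to-products property and its inverse can be checked at a sample point; a minimal Python sketch using only the standard library (sample values are ours):

```python
import math

a, b = 2.0, 3.5
# exp converts the sum a + b into the product exp(a) * exp(b)
lhs = math.exp(a + b)
rhs = math.exp(a) * math.exp(b)
assert abs(lhs - rhs) < 1e-12 * lhs

# its inverse, the natural logarithm, converts products to sums
assert abs(math.log(a * b) - (math.log(a) + math.log(b))) < 1e-12
print("functional equation verified at a sample point")
```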
This reveals relations between multiplication of complex numbers, rotations in the complex plane, and trigonometry. Euler's formula ⁠ exp ⁡ i θ = cos ⁡ θ + i sin ⁡ θ {\displaystyle \exp i\theta =\cos \theta +i\sin \theta } ⁠ expresses and summarizes these relations. The exponential function can be even further generalized to accept other types of arguments, such as matrices and elements of Lie algebras. == Graph == The graph of y = e x {\displaystyle y=e^{x}} is upward-sloping, and increases faster than every power of ⁠ x {\displaystyle x} ⁠. The graph always lies above the x-axis, but becomes arbitrarily close to it for large negative x; thus, the x-axis is a horizontal asymptote. The equation d d x e x = e x {\displaystyle {\tfrac {d}{dx}}e^{x}=e^{x}} means that the slope of the tangent to the graph at each point is equal to its height (its y-coordinate) at that point. == Definitions and fundamental properties == There are several equivalent definitions of the exponential function, although of very different nature. === Differential equation === One of the simplest definitions is: The exponential function is the unique differentiable function that equals its derivative, and takes the value 1 for the value 0 of its variable. This "conceptual" definition requires a uniqueness proof and an existence proof, but it allows an easy derivation of the main properties of the exponential function. Uniqueness: If ⁠ f ( x ) {\displaystyle f(x)} ⁠ and ⁠ g ( x ) {\displaystyle g(x)} ⁠ are two functions satisfying the above definition, then the derivative of ⁠ f / g {\displaystyle f/g} ⁠ is zero everywhere because of the quotient rule. It follows that ⁠ f / g {\displaystyle f/g} ⁠ is constant; this constant is 1 since ⁠ f ( 0 ) = g ( 0 ) = 1 {\displaystyle f(0)=g(0)=1} ⁠. Existence is proved in each of the two following sections. === Inverse of natural logarithm === The exponential function is the inverse function of the natural logarithm. 
The inverse function theorem implies that the natural logarithm has an inverse function that satisfies the above definition. This is a first proof of existence. Therefore, one has ln ⁡ ( exp ⁡ x ) = x exp ⁡ ( ln ⁡ y ) = y {\displaystyle {\begin{aligned}\ln(\exp x)&=x\\\exp(\ln y)&=y\end{aligned}}} for every real number x {\displaystyle x} and every positive real number y . {\displaystyle y.} === Power series === The exponential function is the sum of the power series exp ⁡ ( x ) = 1 + x + x 2 2 ! + x 3 3 ! + ⋯ = ∑ n = 0 ∞ x n n ! , {\displaystyle {\begin{aligned}\exp(x)&=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}},\end{aligned}}} where n ! {\displaystyle n!} is the factorial of n (the product of the first n positive integers). This series is absolutely convergent for every x {\displaystyle x} per the ratio test. So, the derivative of the sum can be computed by term-by-term differentiation, and this shows that the sum of the series satisfies the above definition. This is a second existence proof, and it shows, as a byproduct, that the exponential function is defined for every ⁠ x {\displaystyle x} ⁠ and is everywhere the sum of its Maclaurin series. === Functional equation === The exponential satisfies the functional equation: exp ⁡ ( x + y ) = exp ⁡ ( x ) ⋅ exp ⁡ ( y ) . {\displaystyle \exp(x+y)=\exp(x)\cdot \exp(y).} This results from the uniqueness and the fact that the function f ( x ) = exp ⁡ ( x + y ) / exp ⁡ ( y ) {\displaystyle f(x)=\exp(x+y)/\exp(y)} satisfies the above definition. It can be proved that a function that satisfies this functional equation has the form ⁠ x ↦ exp ⁡ ( c x ) {\displaystyle x\mapsto \exp(cx)} ⁠ if it is either continuous or monotonic. It is thus differentiable, and equals the exponential function if its derivative at 0 is 1. === Limit of integer powers === The exponential function is the limit, as the integer n goes to infinity, exp ⁡ ( x ) = lim n → + ∞ ( 1 + x n ) n . 
{\displaystyle \exp(x)=\lim _{n\to +\infty }\left(1+{\frac {x}{n}}\right)^{n}.} By continuity of the logarithm, this can be proved by taking logarithms and proving x = lim n → ∞ ln ⁡ ( 1 + x n ) n = lim n → ∞ n ln ⁡ ( 1 + x n ) , {\displaystyle x=\lim _{n\to \infty }\ln \left(1+{\frac {x}{n}}\right)^{n}=\lim _{n\to \infty }n\ln \left(1+{\frac {x}{n}}\right),} for example with Taylor's theorem. === Properties === Reciprocal: The functional equation implies ⁠ e x e − x = 1 {\displaystyle e^{x}e^{-x}=1} ⁠. Therefore ⁠ e x ≠ 0 {\displaystyle e^{x}\neq 0} ⁠ for every ⁠ x {\displaystyle x} ⁠ and 1 e x = e − x . {\displaystyle {\frac {1}{e^{x}}}=e^{-x}.} Positiveness: ⁠ e x > 0 {\displaystyle e^{x}>0} ⁠ for every real number ⁠ x {\displaystyle x} ⁠. This results from the intermediate value theorem, since ⁠ e 0 = 1 {\displaystyle e^{0}=1} ⁠ and, if one had ⁠ e x < 0 {\displaystyle e^{x}<0} ⁠ for some ⁠ x {\displaystyle x} ⁠, there would be a ⁠ y {\displaystyle y} ⁠ between ⁠ 0 {\displaystyle 0} ⁠ and ⁠ x {\displaystyle x} ⁠ such that ⁠ e y = 0 {\displaystyle e^{y}=0} ⁠. Since the exponential function equals its derivative, this implies that the exponential function is monotonically increasing. Extension of exponentiation to positive real bases: Let b be a positive real number. Since the exponential function and the natural logarithm are inverses of each other, one has b = exp ⁡ ( ln ⁡ b ) . {\displaystyle b=\exp(\ln b).} If n is an integer, the functional equation of the logarithm implies b n = exp ⁡ ( ln ⁡ b n ) = exp ⁡ ( n ln ⁡ b ) . {\displaystyle b^{n}=\exp(\ln b^{n})=\exp(n\ln b).} Since the right-most expression is defined if n is any real number, this allows defining ⁠ b x {\displaystyle b^{x}} ⁠ for every positive real number b and every real number x: b x = exp ⁡ ( x ln ⁡ b ) . 
{\displaystyle b^{x}=\exp(x\ln b).} In particular, if b is the Euler's number e = exp ⁡ ( 1 ) , {\displaystyle e=\exp(1),} one has ln ⁡ e = 1 {\displaystyle \ln e=1} (inverse function) and thus e x = exp ⁡ ( x ) . {\displaystyle e^{x}=\exp(x).} This shows the equivalence of the two notations for the exponential function. == General exponential functions == A function is commonly called an exponential function—with an indefinite article—if it has the form ⁠ x ↦ b x {\displaystyle x\mapsto b^{x}} ⁠, that is, if it is obtained from exponentiation by fixing the base and letting the exponent vary. More generally and especially in applied contexts, the term exponential function is commonly used for functions of the form ⁠ f ( x ) = a b x {\displaystyle f(x)=ab^{x}} ⁠. This may be motivated by the fact that, if the values of the function represent quantities, a change of measurement unit changes the value of ⁠ a {\displaystyle a} ⁠, and so, it is nonsensical to impose ⁠ a = 1 {\displaystyle a=1} ⁠. These most general exponential functions are the differentiable functions that satisfy the following equivalent characterizations. ⁠ f ( x ) = a b x {\displaystyle f(x)=ab^{x}} ⁠ for every ⁠ x {\displaystyle x} ⁠ and some constants ⁠ a {\displaystyle a} ⁠ and ⁠ b > 0 {\displaystyle b>0} ⁠. ⁠ f ( x ) = a e k x {\displaystyle f(x)=ae^{kx}} ⁠ for every ⁠ x {\displaystyle x} ⁠ and some constants ⁠ a {\displaystyle a} ⁠ and ⁠ k {\displaystyle k} ⁠. The value of f ′ ( x ) / f ( x ) {\displaystyle f'(x)/f(x)} is independent of x {\displaystyle x} . For every d , {\displaystyle d,} the value of f ( x + d ) / f ( x ) {\displaystyle f(x+d)/f(x)} is independent of x ; {\displaystyle x;} that is, f ( x + d ) f ( x ) = f ( y + d ) f ( y ) {\displaystyle {\frac {f(x+d)}{f(x)}}={\frac {f(y+d)}{f(y)}}} for every x, y. 
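The last characterization lends itself to a direct numerical test; a small Python sketch (the sample values of a, b, and d are our own):

```python
# For f(x) = a * b^x the ratio f(x+d)/f(x) is independent of x,
# and the base can be recovered as (f(x+d)/f(x))^(1/d).
def f(x, a=3.0, b=1.5):
    return a * b ** x

d = 0.25
ratios = [f(x + d) / f(x) for x in (0.0, 1.0, 2.0, 5.0)]
assert all(abs(r - ratios[0]) < 1e-12 for r in ratios)  # constant in x
print(ratios[0] ** (1 / d))  # recovers the base b = 1.5
```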
The base of an exponential function is the base of the exponentiation that appears in it when written as ⁠ x → a b x {\displaystyle x\to ab^{x}} ⁠, namely ⁠ b {\displaystyle b} ⁠. The base is ⁠ e k {\displaystyle e^{k}} ⁠ in the second characterization, exp ⁡ f ′ ( x ) f ( x ) {\textstyle \exp {\frac {f'(x)}{f(x)}}} in the third one, and ( f ( x + d ) f ( x ) ) 1 / d {\textstyle \left({\frac {f(x+d)}{f(x)}}\right)^{1/d}} in the last one. === In applications === The last characterization is important in empirical sciences, as it allows a direct experimental test of whether a function is an exponential function. Exponential growth or exponential decay—where the variable change is proportional to the variable value—are thus modeled with exponential functions. Examples are unlimited population growth leading to Malthusian catastrophe, continuously compounded interest, and radioactive decay. If the modeling function has the form ⁠ x ↦ a e k x , {\displaystyle x\mapsto ae^{kx},} ⁠ or, equivalently, is a solution of the differential equation ⁠ y ′ = k y {\displaystyle y'=ky} ⁠, the constant ⁠ k {\displaystyle k} ⁠ is called, depending on the context, the decay constant, disintegration constant, rate constant, or transformation constant. === Equivalence proof === For proving the equivalence of the above properties, one can proceed as follows. The first two characterizations are equivalent, since, if ⁠ b = e k {\displaystyle b=e^{k}} ⁠ and ⁠ k = ln ⁡ b {\displaystyle k=\ln b} ⁠, one has e k x = ( e k ) x = b x . {\displaystyle e^{kx}=(e^{k})^{x}=b^{x}.} The basic properties of the exponential function (derivative and functional equation) immediately imply the third and the last conditions. Suppose that the third condition is verified, and let ⁠ k {\displaystyle k} ⁠ be the constant value of f ′ ( x ) / f ( x ) . 
{\displaystyle f'(x)/f(x).} Since ∂ e k x ∂ x = k e k x , {\textstyle {\frac {\partial e^{kx}}{\partial x}}=ke^{kx},} the quotient rule for differentiation implies that ∂ ∂ x f ( x ) e k x = 0 , {\displaystyle {\frac {\partial }{\partial x}}\,{\frac {f(x)}{e^{kx}}}=0,} and thus that there is a constant ⁠ a {\displaystyle a} ⁠ such that f ( x ) = a e k x . {\displaystyle f(x)=ae^{kx}.} If the last condition is verified, let φ ( d ) = f ( x + d ) / f ( x ) , {\textstyle \varphi (d)=f(x+d)/f(x),} which is independent of ⁠ x {\displaystyle x} ⁠. Using ⁠ φ ( 0 ) = 1 {\displaystyle \varphi (0)=1} ⁠, one gets f ( x + d ) − f ( x ) d = f ( x ) φ ( d ) − φ ( 0 ) d . {\displaystyle {\frac {f(x+d)-f(x)}{d}}=f(x)\,{\frac {\varphi (d)-\varphi (0)}{d}}.} Taking the limit when ⁠ d {\displaystyle d} ⁠ tends to zero, one gets that the third condition is verified with ⁠ k = φ ′ ( 0 ) {\displaystyle k=\varphi '(0)} ⁠. It follows therefore that ⁠ f ( x ) = a e k x {\displaystyle f(x)=ae^{kx}} ⁠ for some ⁠ a , {\displaystyle a,} ⁠ and ⁠ φ ( d ) = e k d . {\displaystyle \varphi (d)=e^{kd}.} ⁠ As a byproduct, one gets that ( f ( x + d ) f ( x ) ) 1 / d = e k {\displaystyle \left({\frac {f(x+d)}{f(x)}}\right)^{1/d}=e^{k}} is independent of both ⁠ x {\displaystyle x} ⁠ and ⁠ d {\displaystyle d} ⁠. == Compound interest == The earliest occurrence of the exponential function was in Jacob Bernoulli's study of compound interest in 1683. It was this study that led Bernoulli to consider the number lim n → ∞ ( 1 + 1 n ) n {\displaystyle \lim _{n\to \infty }\left(1+{\frac {1}{n}}\right)^{n}} now known as Euler's number and denoted ⁠ e {\displaystyle e} ⁠. The exponential function is involved as follows in the computation of continuously compounded interest. 
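Bernoulli's limit, and the more general limit of integer powers above, can be observed numerically (an illustrative sketch; the convergence is only of order 1/n):

```python
import math

# (1 + 1/n)^n approaches Euler's number e as n grows
for n in (10, 100, 10000):
    print(n, (1 + 1 / n) ** n)

# the same mechanism works for a general exponent x
x = 0.05
approx = (1 + x / 1_000_000) ** 1_000_000
assert abs(approx - math.exp(x)) < 1e-6
```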
If a principal amount of 1 earns interest at an annual rate of x compounded monthly, then the interest earned each month is ⁠x/12⁠ times the current value, so each month the total value is multiplied by (1 + ⁠x/12⁠), and the value at the end of the year is (1 + ⁠x/12⁠)12. If instead interest is compounded daily, this becomes (1 + ⁠x/365⁠)365. Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function, exp ⁡ x = lim n → ∞ ( 1 + x n ) n {\displaystyle \exp x=\lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}} first given by Leonhard Euler. == Differential equations == Exponential functions occur very often in solutions of differential equations. The exponential functions can be defined as solutions of differential equations. Indeed, the exponential function is a solution of the simplest possible differential equation, namely ⁠ y ′ = y {\displaystyle y'=y} ⁠. Every other exponential function, of the form ⁠ y = a b x {\displaystyle y=ab^{x}} ⁠, is a solution of the differential equation ⁠ y ′ = k y {\displaystyle y'=ky} ⁠, and every solution of this differential equation has this form. The solutions of an equation of the form y ′ + k y = f ( x ) {\displaystyle y'+ky=f(x)} involve exponential functions in a more sophisticated way, since they have the form y = c e − k x + e − k x ∫ f ( x ) e k x d x , {\displaystyle y=ce^{-kx}+e^{-kx}\int f(x)e^{kx}dx,} where ⁠ c {\displaystyle c} ⁠ is an arbitrary constant and the integral denotes any antiderivative of its argument. More generally, the solutions of every linear differential equation with constant coefficients can be expressed in terms of exponential functions and, when they are not homogeneous, antiderivatives. This holds true also for systems of linear differential equations with constant coefficients. 
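The compounding schedules described above can be compared directly (the 5% annual rate is an illustrative choice):

```python
import math

x = 0.05                                 # annual rate
monthly = (1 + x / 12) ** 12             # compounded monthly
daily = (1 + x / 365) ** 365             # compounded daily
continuous = math.exp(x)                 # continuous compounding
print(monthly, daily, continuous)
# more frequent compounding yields more, approaching exp(x) in the limit
assert monthly < daily < continuous
```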
== Complex exponential == The exponential function can be naturally extended to a complex function, which is a function with the complex numbers as domain and codomain, such that its restriction to the reals is the above-defined exponential function, called the real exponential function in what follows. This function is also called the exponential function, and also denoted ⁠ e z {\displaystyle e^{z}} ⁠ or ⁠ exp ⁡ ( z ) {\displaystyle \exp(z)} ⁠. For distinguishing the complex case from the real one, the extended function is also called the complex exponential function or simply the complex exponential. Most of the definitions of the exponential function can be used verbatim for defining the complex exponential function, and the proof of their equivalence is the same as in the real case. The complex exponential is the unique complex function that equals its complex derivative and takes the value ⁠ 1 {\displaystyle 1} ⁠ for the argument ⁠ 0 {\displaystyle 0} ⁠: d e z d z = e z and e 0 = 1. {\displaystyle {\frac {de^{z}}{dz}}=e^{z}\quad {\text{and}}\quad e^{0}=1.} The complex exponential function is the sum of the series e z = ∑ k = 0 ∞ z k k ! . {\displaystyle e^{z}=\sum _{k=0}^{\infty }{\frac {z^{k}}{k!}}.} This series is absolutely convergent for every complex number ⁠ z {\displaystyle z} ⁠. So, the complex exponential is an entire function. The complex exponential function is the limit e z = lim n → ∞ ( 1 + z n ) n {\displaystyle e^{z}=\lim _{n\to \infty }\left(1+{\frac {z}{n}}\right)^{n}} The functional equation e w + z = e w e z {\displaystyle e^{w+z}=e^{w}e^{z}} holds for all complex numbers ⁠ w {\displaystyle w} ⁠ and ⁠ z {\displaystyle z} ⁠. The complex exponential is the unique continuous function that satisfies this functional equation and has the value ⁠ 1 {\displaystyle 1} ⁠ for ⁠ z = 0 {\displaystyle z=0} ⁠. 
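The series definition carries over verbatim to complex arguments; a short sketch comparing a truncated sum with Python's cmath.exp (the helper name is ours):

```python
import cmath

def cexp_series(z, terms=40):
    """Truncated power series sum_{k=0}^{terms-1} z^k / k!."""
    total, term = 0j, 1 + 0j
    for n in range(1, terms + 1):
        total += term
        term *= z / n          # z^n/n! -> z^(n+1)/(n+1)!
    return total

z = 1 + 2j
print(abs(cexp_series(z) - cmath.exp(z)) < 1e-12)  # True

# the functional equation e^(w+z) = e^w * e^z also holds
w = -0.5 + 0.3j
print(abs(cmath.exp(w + z) - cmath.exp(w) * cmath.exp(z)) < 1e-12)  # True
```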
The complex logarithm is a right-inverse function of the complex exponential: e log ⁡ z = z . {\displaystyle e^{\log z}=z.} However, since the complex logarithm is a multivalued function, one has log ⁡ e z = { z + 2 i k π ∣ k ∈ Z } , {\displaystyle \log e^{z}=\{z+2ik\pi \mid k\in \mathbb {Z} \},} and it is difficult to define the complex exponential from the complex logarithm. Conversely, it is the complex logarithm that is commonly defined from the complex exponential. The complex exponential has the following properties: 1 e z = e − z {\displaystyle {\frac {1}{e^{z}}}=e^{-z}} and e z ≠ 0 for every z ∈ C . {\displaystyle e^{z}\neq 0\quad {\text{for every }}z\in \mathbb {C} .} It is a periodic function with period ⁠ 2 i π {\displaystyle 2i\pi } ⁠; that is e z + 2 i k π = e z for every k ∈ Z . {\displaystyle e^{z+2ik\pi }=e^{z}\quad {\text{for every }}k\in \mathbb {Z} .} This results from Euler's identity ⁠ e i π = − 1 {\displaystyle e^{i\pi }=-1} ⁠ and the functional equation. The complex conjugate of the complex exponential is e z ¯ = e z ¯ . {\displaystyle {\overline {e^{z}}}=e^{\overline {z}}.} Its modulus is | e z | = e ℜ ( z ) , {\displaystyle |e^{z}|=e^{\Re (z)},} where ⁠ ℜ ( z ) {\displaystyle \Re (z)} ⁠ denotes the real part of ⁠ z {\displaystyle z} ⁠. === Relationship with trigonometry === Complex exponential and trigonometric functions are strongly related by Euler's formula: e i t = cos ⁡ ( t ) + i sin ⁡ ( t ) . {\displaystyle e^{it}=\cos(t)+i\sin(t).} This formula provides the decomposition of the complex exponential into real and imaginary parts: e x + i y = e x cos ⁡ y + i e x sin ⁡ y . 
{\displaystyle e^{x+iy}=e^{x}\,\cos y+ie^{x}\,\sin y.} The trigonometric functions can be expressed in terms of complex exponentials: cos ⁡ x = e i x + e − i x 2 sin ⁡ x = e i x − e − i x 2 i tan ⁡ x = i 1 − e 2 i x 1 + e 2 i x {\displaystyle {\begin{aligned}\cos x&={\frac {e^{ix}+e^{-ix}}{2}}\\\sin x&={\frac {e^{ix}-e^{-ix}}{2i}}\\\tan x&=i\,{\frac {1-e^{2ix}}{1+e^{2ix}}}\end{aligned}}} In these formulas, ⁠ x , y , t {\displaystyle x,y,t} ⁠ are commonly interpreted as real variables, but the formulas remain valid if the variables are interpreted as complex variables. These formulas may be used to define trigonometric functions of a complex variable. === Plots === 3D plots of real part, imaginary part, and modulus of the exponential function Considering the complex exponential function as a function involving four real variables: v + i w = exp ⁡ ( x + i y ) {\displaystyle v+iw=\exp(x+iy)} the graph of the exponential function is a two-dimensional surface curving through four dimensions. Starting with a color-coded portion of the x y {\displaystyle xy} domain, the following are depictions of the graph as variously projected into two or three dimensions. Graphs of the complex exponential function The second image shows how the domain complex plane is mapped into the range complex plane: zero is mapped to 1 the real x {\displaystyle x} axis is mapped to the positive real v {\displaystyle v} axis the imaginary y {\displaystyle y} axis is wrapped around the unit circle at a constant angular rate values with negative real parts are mapped inside the unit circle values with positive real parts are mapped outside of the unit circle values with a constant real part are mapped to circles centered at zero values with a constant imaginary part are mapped to rays extending from zero The third and fourth images show how the graph in the second image extends into one of the other two dimensions not shown in the second image. 
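Euler's formula, the exponential expressions for cos, sin, and tan, and the 2πi periodicity can all be verified at sample points (illustrative values):

```python
import cmath, math

t = 0.7
# Euler's formula: e^(it) = cos t + i sin t
assert abs(cmath.exp(1j * t) - complex(math.cos(t), math.sin(t))) < 1e-12
# cos, sin, tan in terms of complex exponentials
assert abs((cmath.exp(1j * t) + cmath.exp(-1j * t)) / 2 - math.cos(t)) < 1e-12
assert abs((cmath.exp(1j * t) - cmath.exp(-1j * t)) / (2j) - math.sin(t)) < 1e-12
assert abs(1j * (1 - cmath.exp(2j * t)) / (1 + cmath.exp(2j * t)) - math.tan(t)) < 1e-12
# periodicity with period 2*pi*i
z = 0.3 + 0.4j
assert abs(cmath.exp(z + 2j * math.pi) - cmath.exp(z)) < 1e-12
print("all identities hold at the sample points")
```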
The third image shows the graph extended along the real x {\displaystyle x} axis. It shows the graph is a surface of revolution about the x {\displaystyle x} axis of the graph of the real exponential function, producing a horn or funnel shape. The fourth image shows the graph extended along the imaginary y {\displaystyle y} axis. It shows that the graph's surface for positive and negative y {\displaystyle y} values doesn't really meet along the negative real v {\displaystyle v} axis, but instead forms a spiral surface about the y {\displaystyle y} axis. Because its y {\displaystyle y} values have been extended to ±2π, this image also better depicts the 2π periodicity in the imaginary y {\displaystyle y} value. == Matrices and Banach algebras == The power series definition of the exponential function makes sense for square matrices (for which the function is called the matrix exponential) and more generally in any unital Banach algebra B. In this setting, e0 = 1, and ex is invertible with inverse e−x for any x in B. If xy = yx, then ex + y = exey, but this identity can fail for noncommuting x and y. Some alternative definitions lead to the same function. For instance, ex can be defined as lim n → ∞ ( 1 + x n ) n . {\displaystyle \lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}.} Or ex can be defined as fx(1), where fx : R → B is the solution to the differential equation ⁠dfx/dt⁠(t) = x fx(t), with initial condition fx(0) = 1; it follows that fx(t) = etx for every t in R. == Lie algebras == Given a Lie group G and its associated Lie algebra g {\displaystyle {\mathfrak {g}}} , the exponential map is a map g {\displaystyle {\mathfrak {g}}} ↦ G satisfying similar properties. In fact, since R is the Lie algebra of the Lie group of all positive real numbers under multiplication, the ordinary exponential function for real arguments is a special case of the Lie algebra situation. 
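A minimal sketch of the matrix exponential via the power series, for 2 × 2 matrices and without external libraries (all helper names are ours), showing that e^(X+Y) = e^X e^Y fails for a noncommuting pair:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(A, c):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def mat_exp(A, terms=30):
    """Sum of the power series I + A + A^2/2! + ... (truncated)."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms + 1):
        term = mat_scale(mat_mul(term, A), 1.0 / n)   # A^n / n!
        result = mat_add(result, term)
    return result

X = [[0.0, 1.0], [0.0, 0.0]]   # nilpotent, so exp(X) = I + X
Y = [[0.0, 0.0], [1.0, 0.0]]   # X and Y do not commute
lhs = mat_exp(mat_add(X, Y))           # exp(X+Y): top-left entry is cosh(1)
rhs = mat_mul(mat_exp(X), mat_exp(Y))  # exp(X)exp(Y): top-left entry is 2
print(lhs[0][0], rhs[0][0])    # ~1.5431 vs 2.0: the identity fails
```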
Similarly, since the Lie group GL(n,R) of invertible n × n matrices has as Lie algebra M(n,R), the space of all n × n matrices, the exponential function for square matrices is a special case of the Lie algebra exponential map. The identity exp ⁡ ( x + y ) = exp ⁡ ( x ) exp ⁡ ( y ) {\displaystyle \exp(x+y)=\exp(x)\exp(y)} can fail for Lie algebra elements x and y that do not commute; the Baker–Campbell–Hausdorff formula supplies the necessary correction terms. == Transcendency == The function ez is a transcendental function, which means that it is not a root of a polynomial over the ring of the rational fractions C ( z ) . {\displaystyle \mathbb {C} (z).} If a1, ..., an are distinct complex numbers, then ea1z, ..., eanz are linearly independent over C ( z ) {\displaystyle \mathbb {C} (z)} , and hence ez is transcendental over C ( z ) {\displaystyle \mathbb {C} (z)} . == Computation == The Taylor series definition above is generally efficient for computing (an approximation of) e x {\displaystyle e^{x}} . However, when computing near the argument x = 0 {\displaystyle x=0} , the result will be close to 1, and computing the value of the difference e x − 1 {\displaystyle e^{x}-1} with floating-point arithmetic may lead to the loss of (possibly all) significant figures, producing a large relative error, possibly even a meaningless result. Following a proposal by William Kahan, it may thus be useful to have a dedicated routine, often called expm1, which computes ex − 1 directly, bypassing computation of ex. For example, one may use the Taylor series: e x − 1 = x + x 2 2 + x 3 6 + ⋯ + x n n ! + ⋯ . {\displaystyle e^{x}-1=x+{\frac {x^{2}}{2}}+{\frac {x^{3}}{6}}+\cdots +{\frac {x^{n}}{n!}}+\cdots .} This was first implemented in 1979 in the Hewlett-Packard HP-41C calculator, and provided by several calculators, operating systems (for example Berkeley UNIX 4.3BSD), computer algebra systems, and programming languages (for example C99). 
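The cancellation problem and the expm1 remedy are easy to demonstrate; Python exposes the dedicated routine as math.expm1:

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0     # suffers catastrophic cancellation near 0
accurate = math.expm1(x)      # computes e^x - 1 directly
print(naive)                  # only the leading digits are correct
print(accurate)               # correct to full double precision
# the two answers disagree far above the precision of the accurate one
assert abs(naive - accurate) > 1e-17
assert abs(accurate - x) < 1e-24   # e^x - 1 = x + x^2/2 + ... for tiny x
```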
In addition to base e, the IEEE 754-2008 standard defines similar exponential functions near 0 for base 2 and 10: 2 x − 1 {\displaystyle 2^{x}-1} and 10 x − 1 {\displaystyle 10^{x}-1} . A similar approach has been used for the logarithm; see log1p. An identity in terms of the hyperbolic tangent, expm1 ⁡ ( x ) = e x − 1 = 2 tanh ⁡ ( x / 2 ) 1 − tanh ⁡ ( x / 2 ) , {\displaystyle \operatorname {expm1} (x)=e^{x}-1={\frac {2\tanh(x/2)}{1-\tanh(x/2)}},} gives a high-precision value for small values of x on systems that do not implement expm1(x). === Continued fractions === The exponential function can also be computed with continued fractions. A continued fraction for ex can be obtained via an identity of Euler: e x = 1 + x 1 − x x + 2 − 2 x x + 3 − 3 x x + 4 − ⋱ {\displaystyle e^{x}=1+{\cfrac {x}{1-{\cfrac {x}{x+2-{\cfrac {2x}{x+3-{\cfrac {3x}{x+4-\ddots }}}}}}}}} The following generalized continued fraction for ez converges more quickly: e z = 1 + 2 z 2 − z + z 2 6 + z 2 10 + z 2 14 + ⋱ {\displaystyle e^{z}=1+{\cfrac {2z}{2-z+{\cfrac {z^{2}}{6+{\cfrac {z^{2}}{10+{\cfrac {z^{2}}{14+\ddots }}}}}}}}} or, by applying the substitution z = ⁠x/y⁠: e x y = 1 + 2 x 2 y − x + x 2 6 y + x 2 10 y + x 2 14 y + ⋱ {\displaystyle e^{\frac {x}{y}}=1+{\cfrac {2x}{2y-x+{\cfrac {x^{2}}{6y+{\cfrac {x^{2}}{10y+{\cfrac {x^{2}}{14y+\ddots }}}}}}}}} with a special case for z = 2: e 2 = 1 + 4 0 + 2 2 6 + 2 2 10 + 2 2 14 + ⋱ = 7 + 2 5 + 1 7 + 1 9 + 1 11 + ⋱ {\displaystyle e^{2}=1+{\cfrac {4}{0+{\cfrac {2^{2}}{6+{\cfrac {2^{2}}{10+{\cfrac {2^{2}}{14+\ddots }}}}}}}}=7+{\cfrac {2}{5+{\cfrac {1}{7+{\cfrac {1}{9+{\cfrac {1}{11+\ddots }}}}}}}}} This formula also converges, though more slowly, for z > 2. 
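The generalized continued fraction above can be evaluated from the bottom up; a sketch (the truncation depth is our own choice, large enough for double precision at small z):

```python
import math

def exp_cf(z, depth=12):
    """Evaluate 1 + 2z/(2 - z + z^2/(6 + z^2/(10 + z^2/(14 + ...))))."""
    value = 4 * depth + 2          # deepest partial denominator 4k + 2
    for k in range(depth - 1, 0, -1):
        value = (4 * k + 2) + z * z / value
    return 1 + 2 * z / (2 - z + z * z / value)

for z in (0.5, 1.0, 2.0):
    print(z, exp_cf(z), math.exp(z))
```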
For example: e 3 = 1 + 6 − 1 + 3 2 6 + 3 2 10 + 3 2 14 + ⋱ = 13 + 54 7 + 9 14 + 9 18 + 9 22 + ⋱ {\displaystyle e^{3}=1+{\cfrac {6}{-1+{\cfrac {3^{2}}{6+{\cfrac {3^{2}}{10+{\cfrac {3^{2}}{14+\ddots }}}}}}}}=13+{\cfrac {54}{7+{\cfrac {9}{14+{\cfrac {9}{18+{\cfrac {9}{22+\ddots }}}}}}}}} == See also == == Notes == == References == == External links == "Exponential function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Exponential_function
In mathematics, the inverse trigonometric functions (occasionally also called antitrigonometric, cyclometric, or arcus functions) are the inverse functions of the trigonometric functions, under suitably restricted domains. Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry. == Notation == Several notations for the inverse trigonometric functions exist. The most common convention is to name inverse trigonometric functions using an arc- prefix: arcsin(x), arccos(x), arctan(x), etc. (This convention is used throughout this article.) This notation arises from the following geometric relationships: when measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus in the unit circle, the cosine of x function is both the arc and the angle, because the arc of a circle of radius 1 is the same as the angle. Or, "the arc whose cosine is x" is the same as "the angle whose cosine is x", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians. In computer programming languages, the inverse trigonometric functions are often called by the abbreviated forms asin, acos, atan. The notations sin−1(x), cos−1(x), tan−1(x), etc., as introduced by John Herschel in 1813, are often used as well in English-language sources, much more than the also established sin[−1](x), cos[−1](x), tan[−1](x) – conventions consistent with the notation of an inverse function, that is useful (for example) to define the multivalued version of each inverse trigonometric function: tan − 1 ⁡ ( x ) = { arctan ⁡ ( x ) + π k ∣ k ∈ Z } . 
{\displaystyle \tan ^{-1}(x)=\{\arctan(x)+\pi k\mid k\in \mathbb {Z} \}~.} However, this might appear to conflict logically with the common semantics for expressions such as sin2(x) (although only sin2 x, without parentheses, is the really common use), which refer to numeric power rather than function composition, and therefore may result in confusion between notation for the reciprocal (multiplicative inverse) and inverse function. The confusion is somewhat mitigated by the fact that each of the reciprocal trigonometric functions has its own name — for example, (cos(x))−1 = sec(x). Nevertheless, certain authors advise against using it, since it is ambiguous. Another precarious convention used by a small number of authors is to use an uppercase first letter, along with a “−1” superscript: Sin−1(x), Cos−1(x), Tan−1(x), etc. Although it is intended to avoid confusion with the reciprocal, which should be represented by sin−1(x), cos−1(x), etc., or, better, by sin−1 x, cos−1 x, etc., it in turn creates yet another major source of ambiguity, especially since many popular high-level programming languages (e.g. Mathematica and MAGMA) use those very same capitalised representations for the standard trig functions, whereas others (Python, SymPy, NumPy, Matlab, MAPLE, etc.) use lower-case. Hence, since 2009, the ISO 80000-2 standard has specified solely the "arc" prefix for the inverse functions. == Basic concepts == === Principal values === Since none of the six trigonometric functions are one-to-one, they must be restricted in order to have inverse functions. Therefore, the result ranges of the inverse functions are proper (i.e. strict) subsets of the domains of the original functions. For example, using function in the sense of multivalued functions, just as the square root function y = x {\displaystyle y={\sqrt {x}}} could be defined from y 2 = x , {\displaystyle y^{2}=x,} the function y = arcsin ⁡ ( x ) {\displaystyle y=\arcsin(x)} is defined so that sin ⁡ ( y ) = x . 
{\displaystyle \sin(y)=x.} For a given real number x , {\displaystyle x,} with − 1 ≤ x ≤ 1 , {\displaystyle -1\leq x\leq 1,} there are multiple (in fact, countably infinitely many) numbers y {\displaystyle y} such that sin ⁡ ( y ) = x {\displaystyle \sin(y)=x} ; for example, sin ⁡ ( 0 ) = 0 , {\displaystyle \sin(0)=0,} but also sin ⁡ ( π ) = 0 , {\displaystyle \sin(\pi )=0,} sin ⁡ ( 2 π ) = 0 , {\displaystyle \sin(2\pi )=0,} etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value. These properties apply to all the inverse trigonometric functions. The principal inverses are listed in the following table. Note: Some authors define the range of arcsecant to be ( 0 ≤ y < π 2 {\textstyle 0\leq y<{\frac {\pi }{2}}} or π ≤ y < 3 π 2 {\textstyle \pi \leq y<{\frac {3\pi }{2}}} ), because the tangent function is nonnegative on this domain. This makes some computations more consistent. For example, using this range, tan ⁡ ( arcsec ⁡ ( x ) ) = x 2 − 1 , {\displaystyle \tan(\operatorname {arcsec}(x))={\sqrt {x^{2}-1}},} whereas with the range ( 0 ≤ y < π 2 {\textstyle 0\leq y<{\frac {\pi }{2}}} or π 2 < y ≤ π {\textstyle {\frac {\pi }{2}}<y\leq \pi } ), we would have to write tan ⁡ ( arcsec ⁡ ( x ) ) = ± x 2 − 1 , {\displaystyle \tan(\operatorname {arcsec}(x))=\pm {\sqrt {x^{2}-1}},} since tangent is nonnegative on 0 ≤ y < π 2 , {\textstyle 0\leq y<{\frac {\pi }{2}},} but nonpositive on π 2 < y ≤ π . {\textstyle {\frac {\pi }{2}}<y\leq \pi .} For a similar reason, the same authors define the range of arccosecant to be ( − π < y ≤ − π 2 {\textstyle (-\pi <y\leq -{\frac {\pi }{2}}} or 0 < y ≤ π 2 ) . {\textstyle 0<y\leq {\frac {\pi }{2}}).} ==== Domains ==== If x is allowed to be a complex number, then the range of y applies only to its real part. 
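In floating-point libraries the principal value described above is what asin returns; a short illustration of the multivalued solutions of sin y = x:

```python
import math

x = math.sin(3 * math.pi / 4)    # sin takes the same value at 3*pi/4
y = math.asin(x)                 # principal value lies in [-pi/2, pi/2]
print(y)                         # pi/4, not 3*pi/4
assert abs(y - math.pi / 4) < 1e-12

# all solutions of sin(y) = x: y = asin(x) + 2*pi*k or pi - asin(x) + 2*pi*k
for k in (-1, 0, 1):
    assert abs(math.sin(y + 2 * math.pi * k) - x) < 1e-12
    assert abs(math.sin(math.pi - y + 2 * math.pi * k) - x) < 1e-12
```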
The table below displays names and domains of the inverse trigonometric functions along with the range of their usual principal values in radians. The symbol R = ( − ∞ , ∞ ) {\displaystyle \mathbb {R} =(-\infty ,\infty )} denotes the set of all real numbers and Z = { … , − 2 , − 1 , 0 , 1 , 2 , … } {\displaystyle \mathbb {Z} =\{\ldots ,\,-2,\,-1,\,0,\,1,\,2,\,\ldots \}} denotes the set of all integers. The set of all integer multiples of π {\displaystyle \pi } is denoted by π Z := { π n : n ∈ Z } = { … , − 2 π , − π , 0 , π , 2 π , … } . {\displaystyle \pi \mathbb {Z} ~:=~\{\pi n\;:\;n\in \mathbb {Z} \}~=~\{\ldots ,\,-2\pi ,\,-\pi ,\,0,\,\pi ,\,2\pi ,\,\ldots \}.} The symbol ∖ {\displaystyle \,\setminus \,} denotes set subtraction so that, for instance, R ∖ ( − 1 , 1 ) = ( − ∞ , − 1 ] ∪ [ 1 , ∞ ) {\displaystyle \mathbb {R} \setminus (-1,1)=(-\infty ,-1]\cup [1,\infty )} is the set of points in R {\displaystyle \mathbb {R} } (that is, real numbers) that are not in the interval ( − 1 , 1 ) . {\displaystyle (-1,1).} The Minkowski sum notation π Z + ( 0 , π ) {\textstyle \pi \mathbb {Z} +(0,\pi )} and π Z + ( − π 2 , π 2 ) {\displaystyle \pi \mathbb {Z} +{\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}} that is used above to concisely write the domains of cot , csc , tan , and sec {\displaystyle \cot ,\csc ,\tan ,{\text{ and }}\sec } is now explained. Domain of cotangent cot {\displaystyle \cot } and cosecant csc {\displaystyle \csc } : The domains of cot {\displaystyle \,\cot \,} and csc {\displaystyle \,\csc \,} are the same. They are the set of all angles θ {\displaystyle \theta } at which sin ⁡ θ ≠ 0 , {\displaystyle \sin \theta \neq 0,} i.e. 
all real numbers that are not of the form π n {\displaystyle \pi n} for some integer n , {\displaystyle n,} π Z + ( 0 , π ) = ⋯ ∪ ( − 2 π , − π ) ∪ ( − π , 0 ) ∪ ( 0 , π ) ∪ ( π , 2 π ) ∪ ⋯ = R ∖ π Z {\displaystyle {\begin{aligned}\pi \mathbb {Z} +(0,\pi )&=\cdots \cup (-2\pi ,-\pi )\cup (-\pi ,0)\cup (0,\pi )\cup (\pi ,2\pi )\cup \cdots \\&=\mathbb {R} \setminus \pi \mathbb {Z} \end{aligned}}} Domain of tangent tan {\displaystyle \tan } and secant sec {\displaystyle \sec } : The domains of tan {\displaystyle \,\tan \,} and sec {\displaystyle \,\sec \,} are the same. They are the set of all angles θ {\displaystyle \theta } at which cos ⁡ θ ≠ 0 , {\displaystyle \cos \theta \neq 0,} π Z + ( − π 2 , π 2 ) = ⋯ ∪ ( − 3 π 2 , − π 2 ) ∪ ( − π 2 , π 2 ) ∪ ( π 2 , 3 π 2 ) ∪ ⋯ = R ∖ ( π 2 + π Z ) {\displaystyle {\begin{aligned}\pi \mathbb {Z} +\left(-{\tfrac {\pi }{2}},{\tfrac {\pi }{2}}\right)&=\cdots \cup {\bigl (}{-{\tfrac {3\pi }{2}}},{-{\tfrac {\pi }{2}}}{\bigr )}\cup {\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}\cup {\bigl (}{\tfrac {\pi }{2}},{\tfrac {3\pi }{2}}{\bigr )}\cup \cdots \\&=\mathbb {R} \setminus \left({\tfrac {\pi }{2}}+\pi \mathbb {Z} \right)\\\end{aligned}}} === Solutions to elementary trigonometric equations === Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of 2 π : {\displaystyle 2\pi :} Sine and cosecant begin their period at 2 π k − π 2 {\textstyle 2\pi k-{\frac {\pi }{2}}} (where k {\displaystyle k} is an integer), finish it at 2 π k + π 2 , {\textstyle 2\pi k+{\frac {\pi }{2}},} and then reverse themselves over 2 π k + π 2 {\textstyle 2\pi k+{\frac {\pi }{2}}} to 2 π k + 3 π 2 . {\textstyle 2\pi k+{\frac {3\pi }{2}}.} Cosine and secant begin their period at 2 π k , {\displaystyle 2\pi k,} finish it at 2 π k + π . {\displaystyle 2\pi k+\pi .} and then reverse themselves over 2 π k + π {\displaystyle 2\pi k+\pi } to 2 π k + 2 π . 
{\displaystyle 2\pi k+2\pi .} Tangent begins its period at 2 π k − π 2 , {\textstyle 2\pi k-{\frac {\pi }{2}},} finishes it at 2 π k + π 2 , {\textstyle 2\pi k+{\frac {\pi }{2}},} and then repeats it (forward) over 2 π k + π 2 {\textstyle 2\pi k+{\frac {\pi }{2}}} to 2 π k + 3 π 2 . {\textstyle 2\pi k+{\frac {3\pi }{2}}.} Cotangent begins its period at 2 π k , {\displaystyle 2\pi k,} finishes it at 2 π k + π , {\displaystyle 2\pi k+\pi ,} and then repeats it (forward) over 2 π k + π {\displaystyle 2\pi k+\pi } to 2 π k + 2 π . {\displaystyle 2\pi k+2\pi .} This periodicity is reflected in the general inverses, where k {\displaystyle k} is some integer. The following table shows how inverse trigonometric functions may be used to solve equalities involving the six standard trigonometric functions. It is assumed that the given values θ , {\displaystyle \theta ,} r , {\displaystyle r,} s , {\displaystyle s,} x , {\displaystyle x,} and y {\displaystyle y} all lie within appropriate ranges so that the relevant expressions below are well-defined. Note that "for some k ∈ Z {\displaystyle k\in \mathbb {Z} } " is just another way of saying "for some integer k . {\displaystyle k.} " The symbol ⟺ {\displaystyle \,\iff \,} is logical equality and indicates that if the left hand side is true then so is the right hand side and, conversely, if the right hand side is true then so is the left hand side (see this footnote for more details and an example illustrating this concept). where the first four solutions can be written in expanded form as: For example, if cos ⁡ θ = − 1 {\displaystyle \cos \theta =-1} then θ = π + 2 π k = − π + 2 π ( 1 + k ) {\displaystyle \theta =\pi +2\pi k=-\pi +2\pi (1+k)} for some k ∈ Z . 
{\displaystyle k\in \mathbb {Z} .} While if sin ⁡ θ = ± 1 {\displaystyle \sin \theta =\pm 1} then θ = π 2 + π k = − π 2 + π ( k + 1 ) {\textstyle \theta ={\frac {\pi }{2}}+\pi k=-{\frac {\pi }{2}}+\pi (k+1)} for some k ∈ Z , {\displaystyle k\in \mathbb {Z} ,} where k {\displaystyle k} will be even if sin ⁡ θ = 1 {\displaystyle \sin \theta =1} and it will be odd if sin ⁡ θ = − 1. {\displaystyle \sin \theta =-1.} The equations sec ⁡ θ = − 1 {\displaystyle \sec \theta =-1} and csc ⁡ θ = ± 1 {\displaystyle \csc \theta =\pm 1} have the same solutions as cos ⁡ θ = − 1 {\displaystyle \cos \theta =-1} and sin ⁡ θ = ± 1 , {\displaystyle \sin \theta =\pm 1,} respectively. In all equations above except for those just solved (i.e. except for sin {\displaystyle \sin } / csc ⁡ θ = ± 1 {\displaystyle \csc \theta =\pm 1} and cos {\displaystyle \cos } / sec ⁡ θ = − 1 {\displaystyle \sec \theta =-1} ), the integer k {\displaystyle k} in the solution's formula is uniquely determined by θ {\displaystyle \theta } (for fixed r , s , x , {\displaystyle r,s,x,} and y {\displaystyle y} ). With the help of integer parity Parity ⁡ ( h ) = { 0 if h is even 1 if h is odd {\displaystyle \operatorname {Parity} (h)={\begin{cases}0&{\text{if }}h{\text{ is even }}\\1&{\text{if }}h{\text{ is odd }}\\\end{cases}}} it is possible to write a solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} that doesn't involve the "plus or minus" ± {\displaystyle \,\pm \,} symbol: cos ⁡ θ = x {\displaystyle \cos \theta =x\quad } if and only if θ = ( − 1 ) h arccos ⁡ ( x ) + π h + π Parity ⁡ ( h ) {\displaystyle \quad \theta =(-1)^{h}\arccos(x)+\pi h+\pi \operatorname {Parity} (h)\quad } for some h ∈ Z . 
{\displaystyle h\in \mathbb {Z} .} And similarly for the secant function, sec ⁡ θ = r {\displaystyle \sec \theta =r\quad } if and only if θ = ( − 1 ) h arcsec ⁡ ( r ) + π h + π Parity ⁡ ( h ) {\displaystyle \quad \theta =(-1)^{h}\operatorname {arcsec}(r)+\pi h+\pi \operatorname {Parity} (h)\quad } for some h ∈ Z , {\displaystyle h\in \mathbb {Z} ,} where π h + π Parity ⁡ ( h ) {\displaystyle \pi h+\pi \operatorname {Parity} (h)} equals π h {\displaystyle \pi h} when the integer h {\displaystyle h} is even, and equals π h + π {\displaystyle \pi h+\pi } when it's odd. ==== Detailed example and explanation of the "plus or minus" symbol ± ==== The solutions to cos ⁡ θ = x {\displaystyle \cos \theta =x} and sec ⁡ θ = x {\displaystyle \sec \theta =x} involve the "plus or minus" symbol ± , {\displaystyle \,\pm ,\,} whose meaning is now clarified. Only the solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} will be discussed since the discussion for sec ⁡ θ = x {\displaystyle \sec \theta =x} is the same. We are given x {\displaystyle x} with − 1 ≤ x ≤ 1 {\displaystyle -1\leq x\leq 1} and we know that there is an angle θ {\displaystyle \theta } in some interval that satisfies cos ⁡ θ = x . {\displaystyle \cos \theta =x.} We want to find this θ . {\displaystyle \theta .} The table above indicates that the solution is θ = ± arccos ⁡ x + 2 π k for some k ∈ Z {\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} } which is a shorthand way of saying that (at least) one of the following statements is true: θ = arccos ⁡ x + 2 π k {\displaystyle \,\theta =\arccos x+2\pi k\,} for some integer k , {\displaystyle k,} or θ = − arccos ⁡ x + 2 π k {\displaystyle \,\theta =-\arccos x+2\pi k\,} for some integer k . 
{\displaystyle k.} As mentioned above, if arccos ⁡ x = π {\displaystyle \,\arccos x=\pi \,} (which by definition only happens when x = cos ⁡ π = − 1 {\displaystyle x=\cos \pi =-1} ) then both statements (1) and (2) hold, although with different values for the integer k {\displaystyle k} : if K {\displaystyle K} is the integer from statement (1), meaning that θ = π + 2 π K {\displaystyle \theta =\pi +2\pi K} holds, then the integer k {\displaystyle k} for statement (2) is K + 1 {\displaystyle K+1} (because θ = − π + 2 π ( 1 + K ) {\displaystyle \theta =-\pi +2\pi (1+K)} ). However, if x ≠ − 1 {\displaystyle x\neq -1} then the integer k {\displaystyle k} is unique and completely determined by θ . {\displaystyle \theta .} If arccos ⁡ x = 0 {\displaystyle \,\arccos x=0\,} (which by definition only happens when x = cos ⁡ 0 = 1 {\displaystyle x=\cos 0=1} ) then ± arccos ⁡ x = 0 {\displaystyle \,\pm \arccos x=0\,} (because + arccos ⁡ x = + 0 = 0 {\displaystyle \,+\arccos x=+0=0\,} and − arccos ⁡ x = − 0 = 0 {\displaystyle \,-\arccos x=-0=0\,} so in both cases ± arccos ⁡ x {\displaystyle \,\pm \arccos x\,} is equal to 0 {\displaystyle 0} ) and so the statements (1) and (2) happen to be identical in this particular case (and so both hold). Having considered the cases arccos ⁡ x = 0 {\displaystyle \,\arccos x=0\,} and arccos ⁡ x = π , {\displaystyle \,\arccos x=\pi ,\,} we now focus on the case where arccos ⁡ x ≠ 0 {\displaystyle \,\arccos x\neq 0\,} and arccos ⁡ x ≠ π , {\displaystyle \,\arccos x\neq \pi ,\,} so assume this from now on. The solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} is still θ = ± arccos ⁡ x + 2 π k for some k ∈ Z {\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} } which as before is shorthand for saying that one of statements (1) and (2) is true. 
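As an aside, both branches of this shorthand are easy to confirm numerically; a small Python sketch with the illustrative value x = 0.3:

```python
import math

x = 0.3
theta1 = math.acos(x)   # statement (1) with k = 0
theta2 = -math.acos(x)  # statement (2) with k = 0
assert math.isclose(math.cos(theta1), x)
assert math.isclose(math.cos(theta2), x)

# shifting either solution by 2*pi*k keeps it a solution:
for k in (-2, -1, 1, 2):
    assert math.isclose(math.cos(theta1 + 2 * math.pi * k), x)
    assert math.isclose(math.cos(theta2 + 2 * math.pi * k), x)
```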
However this time, because arccos ⁡ x ≠ 0 {\displaystyle \,\arccos x\neq 0\,} and 0 < arccos ⁡ x < π , {\displaystyle \,0<\arccos x<\pi ,\,} statements (1) and (2) are different and furthermore, exactly one of the two equalities holds (not both). Additional information about θ {\displaystyle \theta } is needed to determine which one holds. For example, suppose that x = 0 {\displaystyle x=0} and that all that is known about θ {\displaystyle \theta } is that − π ≤ θ ≤ π {\displaystyle \,-\pi \leq \theta \leq \pi \,} (and nothing more is known). Then arccos ⁡ x = arccos ⁡ 0 = π 2 {\displaystyle \arccos x=\arccos 0={\frac {\pi }{2}}} and moreover, in this particular case k = 0 {\displaystyle k=0} (for both the + {\displaystyle \,+\,} case and the − {\displaystyle \,-\,} case) and so consequently, θ = ± arccos ⁡ x + 2 π k = ± ( π 2 ) + 2 π ( 0 ) = ± π 2 . {\displaystyle \theta ~=~\pm \arccos x+2\pi k~=~\pm \left({\frac {\pi }{2}}\right)+2\pi (0)~=~\pm {\frac {\pi }{2}}.} This means that θ {\displaystyle \theta } could be either π / 2 {\displaystyle \,\pi /2\,} or − π / 2. {\displaystyle \,-\pi /2.} Without additional information it is not possible to determine which of these values θ {\displaystyle \theta } has. An example of some additional information that could determine the value of θ {\displaystyle \theta } would be knowing that the angle is above the x {\displaystyle x} -axis (in which case θ = π / 2 {\displaystyle \theta =\pi /2} ) or alternatively, knowing that it is below the x {\displaystyle x} -axis (in which case θ = − π / 2 {\displaystyle \theta =-\pi /2} ). ==== Equal identical trigonometric functions ==== The table below shows how two angles θ {\displaystyle \theta } and φ {\displaystyle \varphi } must be related if their values under a given trigonometric function are equal or negatives of each other. 
The vertical double arrow ⇕ {\displaystyle \Updownarrow } in the last row indicates that θ {\displaystyle \theta } and φ {\displaystyle \varphi } satisfy | sin ⁡ θ | = | sin ⁡ φ | {\displaystyle \left|\sin \theta \right|=\left|\sin \varphi \right|} if and only if they satisfy | cos ⁡ θ | = | cos ⁡ φ | . {\displaystyle \left|\cos \theta \right|=\left|\cos \varphi \right|.} Set of all solutions to elementary trigonometric equations Thus given a single solution θ {\displaystyle \theta } to an elementary trigonometric equation ( sin ⁡ θ = y {\displaystyle \sin \theta =y} is such an equation, for instance, and because sin ⁡ ( arcsin ⁡ y ) = y {\displaystyle \sin(\arcsin y)=y} always holds, θ := arcsin ⁡ y {\displaystyle \theta :=\arcsin y} is always a solution), the set of all solutions to it are: === Transforming equations === The equations above can be transformed by using the reflection and shift identities: These formulas imply, in particular, that the following hold: sin ⁡ θ = − sin ⁡ ( − θ ) = − sin ⁡ ( π + θ ) = − sin ⁡ ( π − θ ) = − cos ⁡ ( π 2 + θ ) = − cos ⁡ ( π 2 − θ ) = − cos ⁡ ( − π 2 − θ ) = − cos ⁡ ( − π 2 + θ ) = − cos ⁡ ( 3 π 2 − θ ) = − cos ⁡ ( − 3 π 2 + θ ) cos ⁡ θ = − cos ⁡ ( − θ ) = − cos ⁡ ( π + θ ) = − cos ⁡ ( π − θ ) = − sin ⁡ ( π 2 + θ ) = − sin ⁡ ( π 2 − θ ) = − sin ⁡ ( − π 2 − θ ) = − sin ⁡ ( − π 2 + θ ) = − sin ⁡ ( 3 π 2 − θ ) = − sin ⁡ ( − 3 π 2 + θ ) tan ⁡ θ = − tan ⁡ ( − θ ) = − tan ⁡ ( π + θ ) = − tan ⁡ ( π − θ ) = − cot ⁡ ( π 2 + θ ) = − cot ⁡ ( π 2 − θ ) = − cot ⁡ ( − π 2 − θ ) = − cot ⁡ ( − π 2 + θ ) = − cot ⁡ ( 3 π 2 − θ ) = − cot ⁡ ( − 3 π 2 + θ ) {\displaystyle {\begin{aligned}\sin \theta &=-\sin(-\theta )&&=-\sin(\pi +\theta )&&={\phantom {-}}\sin(\pi -\theta )\\&=-\cos \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cos \left({\frac {\pi }{2}}-\theta \right)&&=-\cos \left(-{\frac {\pi }{2}}-\theta \right)\\&={\phantom {-}}\cos \left(-{\frac {\pi }{2}}+\theta \right)&&=-\cos \left({\frac {3\pi }{2}}-\theta \right)&&=-\cos 
\left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\cos \theta &={\phantom {-}}\cos(-\theta )&&=-\cos(\pi +\theta )&&=-\cos(\pi -\theta )\\&={\phantom {-}}\sin \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\sin \left({\frac {\pi }{2}}-\theta \right)&&=-\sin \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\sin \left(-{\frac {\pi }{2}}+\theta \right)&&=-\sin \left({\frac {3\pi }{2}}-\theta \right)&&={\phantom {-}}\sin \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\tan \theta &=-\tan(-\theta )&&={\phantom {-}}\tan(\pi +\theta )&&=-\tan(\pi -\theta )\\&=-\cot \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {\pi }{2}}-\theta \right)&&={\phantom {-}}\cot \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\cot \left(-{\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {3\pi }{2}}-\theta \right)&&=-\cot \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\end{aligned}}} where swapping sin ↔ csc , {\displaystyle \sin \leftrightarrow \csc ,} swapping cos ↔ sec , {\displaystyle \cos \leftrightarrow \sec ,} and swapping tan ↔ cot {\displaystyle \tan \leftrightarrow \cot } gives the analogous equations for csc , sec , and cot , {\displaystyle \csc ,\sec ,{\text{ and }}\cot ,} respectively. 
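A few of these reflection and shift identities can be spot-checked numerically; a minimal sketch in Python (the `cot` helper is defined here because the math module has no cotangent):

```python
import math

def cot(t):
    # the math module has no cotangent, so define one
    return math.cos(t) / math.sin(t)

# spot-check several reflection and shift identities,
# sampling angles away from the poles of tan and cot:
for theta in (0.3, 1.1, 2.5, -0.7):
    assert math.isclose(math.sin(theta), -math.sin(-theta))
    assert math.isclose(math.sin(theta), -math.sin(math.pi + theta))
    assert math.isclose(math.sin(theta), math.cos(math.pi / 2 - theta))
    assert math.isclose(math.cos(theta), math.cos(-theta))
    assert math.isclose(math.cos(theta), -math.cos(math.pi - theta))
    assert math.isclose(math.cos(theta), math.sin(math.pi / 2 + theta))
    assert math.isclose(math.tan(theta), math.tan(math.pi + theta))
    assert math.isclose(math.tan(theta), cot(math.pi / 2 - theta))
```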
So for example, by using the equality sin ⁡ ( π 2 − θ ) = cos ⁡ θ , {\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=\cos \theta ,} the equation cos ⁡ θ = x {\displaystyle \cos \theta =x} can be transformed into sin ⁡ ( π 2 − θ ) = x , {\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=x,} which allows for the solution to the equation sin ⁡ φ = x {\displaystyle \;\sin \varphi =x\;} (where φ := π 2 − θ {\textstyle \varphi :={\frac {\pi }{2}}-\theta } ) to be used; that solution being: φ = ( − 1 ) k arcsin ⁡ ( x ) + π k for some k ∈ Z , {\displaystyle \varphi =(-1)^{k}\arcsin(x)+\pi k\;{\text{ for some }}k\in \mathbb {Z} ,} which becomes: π 2 − θ = ( − 1 ) k arcsin ⁡ ( x ) + π k for some k ∈ Z {\displaystyle {\frac {\pi }{2}}-\theta ~=~(-1)^{k}\arcsin(x)+\pi k\quad {\text{ for some }}k\in \mathbb {Z} } where using the fact that ( − 1 ) k = ( − 1 ) − k {\displaystyle (-1)^{k}=(-1)^{-k}} and substituting h := − k {\displaystyle h:=-k} proves that another solution to cos ⁡ θ = x {\displaystyle \;\cos \theta =x\;} is: θ = ( − 1 ) h + 1 arcsin ⁡ ( x ) + π h + π 2 for some h ∈ Z . {\displaystyle \theta ~=~(-1)^{h+1}\arcsin(x)+\pi h+{\frac {\pi }{2}}\quad {\text{ for some }}h\in \mathbb {Z} .} The substitution arcsin ⁡ x = π 2 − arccos ⁡ x {\displaystyle \;\arcsin x={\frac {\pi }{2}}-\arccos x\;} may be used to express the right hand side of the above formula in terms of arccos ⁡ x {\displaystyle \;\arccos x\;} instead of arcsin ⁡ x . {\displaystyle \;\arcsin x.\;} === Relationships between trigonometric functions and inverse trigonometric functions === Trigonometric functions of inverse trigonometric functions are tabulated below. A quick way to derive them is by considering the geometry of a right-angled triangle, with one side of length 1 and another side of length x , {\displaystyle x,} then applying the Pythagorean theorem and definitions of the trigonometric ratios. 
It is worth noting that for arcsecant and arccosecant, the diagram assumes that x {\displaystyle x} is positive, and thus the result has to be corrected through the use of absolute values and the signum (sgn) operation. === Relationships among the inverse trigonometric functions === Complementary angles: arccos ⁡ ( x ) = π 2 − arcsin ⁡ ( x ) arccot ⁡ ( x ) = π 2 − arctan ⁡ ( x ) arccsc ⁡ ( x ) = π 2 − arcsec ⁡ ( x ) {\displaystyle {\begin{aligned}\arccos(x)&={\frac {\pi }{2}}-\arcsin(x)\\[0.5em]\operatorname {arccot}(x)&={\frac {\pi }{2}}-\arctan(x)\\[0.5em]\operatorname {arccsc}(x)&={\frac {\pi }{2}}-\operatorname {arcsec}(x)\end{aligned}}} Negative arguments: arcsin ⁡ ( − x ) = − arcsin ⁡ ( x ) arccsc ⁡ ( − x ) = − arccsc ⁡ ( x ) arccos ⁡ ( − x ) = π − arccos ⁡ ( x ) arcsec ⁡ ( − x ) = π − arcsec ⁡ ( x ) arctan ⁡ ( − x ) = − arctan ⁡ ( x ) arccot ⁡ ( − x ) = π − arccot ⁡ ( x ) {\displaystyle {\begin{aligned}\arcsin(-x)&=-\arcsin(x)\\\operatorname {arccsc}(-x)&=-\operatorname {arccsc}(x)\\\arccos(-x)&=\pi -\arccos(x)\\\operatorname {arcsec}(-x)&=\pi -\operatorname {arcsec}(x)\\\arctan(-x)&=-\arctan(x)\\\operatorname {arccot}(-x)&=\pi -\operatorname {arccot}(x)\end{aligned}}} Reciprocal arguments: arcsin ⁡ ( 1 x ) = arccsc ⁡ ( x ) arccsc ⁡ ( 1 x ) = arcsin ⁡ ( x ) arccos ⁡ ( 1 x ) = arcsec ⁡ ( x ) arcsec ⁡ ( 1 x ) = arccos ⁡ ( x ) arctan ⁡ ( 1 x ) = arccot ⁡ ( x ) = π 2 − arctan ⁡ ( x ) , if x > 0 arctan ⁡ ( 1 x ) = arccot ⁡ ( x ) − π = − π 2 − arctan ⁡ ( x ) , if x < 0 arccot ⁡ ( 1 x ) = arctan ⁡ ( x ) = π 2 − arccot ⁡ ( x ) , if x > 0 arccot ⁡ ( 1 x ) = arctan ⁡ ( x ) + π = 3 π 2 − arccot ⁡ ( x ) , if x < 0 {\displaystyle {\begin{aligned}\arcsin \left({\frac {1}{x}}\right)&=\operatorname {arccsc}(x)&\\[0.3em]\operatorname {arccsc} \left({\frac {1}{x}}\right)&=\arcsin(x)&\\[0.3em]\arccos \left({\frac {1}{x}}\right)&=\operatorname {arcsec}(x)&\\[0.3em]\operatorname {arcsec} \left({\frac {1}{x}}\right)&=\arccos(x)&\\[0.3em]\arctan \left({\frac 
{1}{x}}\right)&=\operatorname {arccot}(x)&={\frac {\pi }{2}}-\arctan(x)\,,{\text{ if }}x>0\\[0.3em]\arctan \left({\frac {1}{x}}\right)&=\operatorname {arccot}(x)-\pi &=-{\frac {\pi }{2}}-\arctan(x)\,,{\text{ if }}x<0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&=\arctan(x)&={\frac {\pi }{2}}-\operatorname {arccot}(x)\,,{\text{ if }}x>0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&=\arctan(x)+\pi &={\frac {3\pi }{2}}-\operatorname {arccot}(x)\,,{\text{ if }}x<0\end{aligned}}} The identities above can be used with (and derived from) the fact that sin {\displaystyle \sin } and csc {\displaystyle \csc } are reciprocals (i.e. csc = 1 sin {\displaystyle \csc ={\tfrac {1}{\sin }}} ), as are cos {\displaystyle \cos } and sec , {\displaystyle \sec ,} and tan {\displaystyle \tan } and cot . {\displaystyle \cot .} Useful identities if one only has a fragment of a sine table: arcsin ⁡ ( x ) = 1 2 arccos ⁡ ( 1 − 2 x 2 ) , if 0 ≤ x ≤ 1 arcsin ⁡ ( x ) = arctan ⁡ ( x 1 − x 2 ) arccos ⁡ ( x ) = 1 2 arccos ⁡ ( 2 x 2 − 1 ) , if 0 ≤ x ≤ 1 arccos ⁡ ( x ) = arctan ⁡ ( 1 − x 2 x ) arccos ⁡ ( x ) = arcsin ⁡ ( 1 − x 2 ) , if 0 ≤ x ≤ 1 , from which you get arccos ( 1 − x 2 1 + x 2 ) = arcsin ⁡ ( 2 x 1 + x 2 ) , if 0 ≤ x ≤ 1 arcsin ( 1 − x 2 ) = π 2 − sgn ⁡ ( x ) arcsin ⁡ ( x ) arctan ⁡ ( x ) = arcsin ⁡ ( x 1 + x 2 ) arccot ⁡ ( x ) = arccos ⁡ ( x 1 + x 2 ) {\displaystyle {\begin{aligned}\arcsin(x)&={\frac {1}{2}}\arccos \left(1-2x^{2}\right)\,,{\text{ if }}0\leq x\leq 1\\\arcsin(x)&=\arctan \left({\frac {x}{\sqrt {1-x^{2}}}}\right)\\\arccos(x)&={\frac {1}{2}}\arccos \left(2x^{2}-1\right)\,,{\text{ if }}0\leq x\leq 1\\\arccos(x)&=\arctan \left({\frac {\sqrt {1-x^{2}}}{x}}\right)\\\arccos(x)&=\arcsin \left({\sqrt {1-x^{2}}}\right)\,,{\text{ if }}0\leq x\leq 1{\text{ , from which you get }}\\\arccos &\left({\frac {1-x^{2}}{1+x^{2}}}\right)=\arcsin \left({\frac {2x}{1+x^{2}}}\right)\,,{\text{ if }}0\leq x\leq 1\\\arcsin &\left({\sqrt {1-x^{2}}}\right)={\frac {\pi 
}{2}}-\operatorname {sgn}(x)\arcsin(x)\\\arctan(x)&=\arcsin \left({\frac {x}{\sqrt {1+x^{2}}}}\right)\\\operatorname {arccot}(x)&=\arccos \left({\frac {x}{\sqrt {1+x^{2}}}}\right)\end{aligned}}} Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real). A useful form that follows directly from the table above is arctan ⁡ ( x ) = arccos ⁡ ( 1 1 + x 2 ) , if x ≥ 0 {\displaystyle \arctan(x)=\arccos \left({\sqrt {\frac {1}{1+x^{2}}}}\right)\,,{\text{ if }}x\geq 0} . It is obtained by recognizing that cos ⁡ ( arctan ⁡ ( x ) ) = 1 1 + x 2 = cos ⁡ ( arccos ⁡ ( 1 1 + x 2 ) ) {\displaystyle \cos \left(\arctan \left(x\right)\right)={\sqrt {\frac {1}{1+x^{2}}}}=\cos \left(\arccos \left({\sqrt {\frac {1}{1+x^{2}}}}\right)\right)} . From the half-angle formula, tan ⁡ ( θ 2 ) = sin ⁡ ( θ ) 1 + cos ⁡ ( θ ) {\displaystyle \tan \left({\tfrac {\theta }{2}}\right)={\tfrac {\sin(\theta )}{1+\cos(\theta )}}} , we get: arcsin ⁡ ( x ) = 2 arctan ⁡ ( x 1 + 1 − x 2 ) arccos ⁡ ( x ) = 2 arctan ⁡ ( 1 − x 2 1 + x ) , if − 1 < x ≤ 1 arctan ⁡ ( x ) = 2 arctan ⁡ ( x 1 + 1 + x 2 ) {\displaystyle {\begin{aligned}\arcsin(x)&=2\arctan \left({\frac {x}{1+{\sqrt {1-x^{2}}}}}\right)\\[0.5em]\arccos(x)&=2\arctan \left({\frac {\sqrt {1-x^{2}}}{1+x}}\right)\,,{\text{ if }}-1<x\leq 1\\[0.5em]\arctan(x)&=2\arctan \left({\frac {x}{1+{\sqrt {1+x^{2}}}}}\right)\end{aligned}}} === Arctangent addition formula === arctan ⁡ ( u ) ± arctan ⁡ ( v ) = arctan ⁡ ( u ± v 1 ∓ u v ) ( mod π ) , u v ≠ 1 . {\displaystyle \arctan(u)\pm \arctan(v)=\arctan \left({\frac {u\pm v}{1\mp uv}}\right){\pmod {\pi }}\,,\quad uv\neq 1\,.} This is derived from the tangent addition formula tan ⁡ ( α ± β ) = tan ⁡ ( α ) ± tan ⁡ ( β ) 1 ∓ tan ⁡ ( α ) tan ⁡ ( β ) , {\displaystyle \tan(\alpha \pm \beta )={\frac {\tan(\alpha )\pm \tan(\beta )}{1\mp \tan(\alpha )\tan(\beta )}}\,,} by letting α = arctan ⁡ ( u ) , β = arctan ⁡ ( v ) . 
{\displaystyle \alpha =\arctan(u)\,,\quad \beta =\arctan(v)\,.} == In calculus == === Derivatives of inverse trigonometric functions === The derivatives for complex values of z are as follows: d d z arcsin ⁡ ( z ) = 1 1 − z 2 ; z ≠ − 1 , + 1 d d z arccos ⁡ ( z ) = − 1 1 − z 2 ; z ≠ − 1 , + 1 d d z arctan ⁡ ( z ) = 1 1 + z 2 ; z ≠ − i , + i d d z arccot ⁡ ( z ) = − 1 1 + z 2 ; z ≠ − i , + i d d z arcsec ⁡ ( z ) = 1 z 2 1 − 1 z 2 ; z ≠ − 1 , 0 , + 1 d d z arccsc ⁡ ( z ) = − 1 z 2 1 − 1 z 2 ; z ≠ − 1 , 0 , + 1 {\displaystyle {\begin{aligned}{\frac {d}{dz}}\arcsin(z)&{}={\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {d}{dz}}\arccos(z)&{}=-{\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {d}{dz}}\arctan(z)&{}={\frac {1}{1+z^{2}}}\;;&z&{}\neq -i,+i\\{\frac {d}{dz}}\operatorname {arccot}(z)&{}=-{\frac {1}{1+z^{2}}}\;;&z&{}\neq -i,+i\\{\frac {d}{dz}}\operatorname {arcsec}(z)&{}={\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\\{\frac {d}{dz}}\operatorname {arccsc}(z)&{}=-{\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\end{aligned}}} Only for real values of x: d d x arcsec ⁡ ( x ) = 1 | x | x 2 − 1 ; | x | > 1 d d x arccsc ⁡ ( x ) = − 1 | x | x 2 − 1 ; | x | > 1 {\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arcsec}(x)&{}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\\{\frac {d}{dx}}\operatorname {arccsc}(x)&{}=-{\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\end{aligned}}} These formulas can be derived in terms of the derivatives of trigonometric functions. For example, if x = sin ⁡ θ {\displaystyle x=\sin \theta } , then d x / d θ = cos ⁡ θ = 1 − x 2 , {\textstyle dx/d\theta =\cos \theta ={\sqrt {1-x^{2}}},} so d d x arcsin ⁡ ( x ) = d θ d x = 1 d x / d θ = 1 1 − x 2 . 
{\displaystyle {\frac {d}{dx}}\arcsin(x)={\frac {d\theta }{dx}}={\frac {1}{dx/d\theta }}={\frac {1}{\sqrt {1-x^{2}}}}.} === Expression as definite integrals === Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function as a definite integral: arcsin ⁡ ( x ) = ∫ 0 x 1 1 − z 2 d z , | x | ≤ 1 arccos ⁡ ( x ) = ∫ x 1 1 1 − z 2 d z , | x | ≤ 1 arctan ⁡ ( x ) = ∫ 0 x 1 z 2 + 1 d z , arccot ⁡ ( x ) = ∫ x ∞ 1 z 2 + 1 d z , arcsec ⁡ ( x ) = ∫ 1 x 1 z z 2 − 1 d z = π + ∫ − x − 1 1 z z 2 − 1 d z , x ≥ 1 arccsc ⁡ ( x ) = ∫ x ∞ 1 z z 2 − 1 d z = ∫ − ∞ − x 1 z z 2 − 1 d z , x ≥ 1 {\displaystyle {\begin{aligned}\arcsin(x)&{}=\int _{0}^{x}{\frac {1}{\sqrt {1-z^{2}}}}\,dz\;,&|x|&{}\leq 1\\\arccos(x)&{}=\int _{x}^{1}{\frac {1}{\sqrt {1-z^{2}}}}\,dz\;,&|x|&{}\leq 1\\\arctan(x)&{}=\int _{0}^{x}{\frac {1}{z^{2}+1}}\,dz\;,\\\operatorname {arccot}(x)&{}=\int _{x}^{\infty }{\frac {1}{z^{2}+1}}\,dz\;,\\\operatorname {arcsec}(x)&{}=\int _{1}^{x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz=\pi +\int _{-x}^{-1}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz\;,&x&{}\geq 1\\\operatorname {arccsc}(x)&{}=\int _{x}^{\infty }{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz=\int _{-\infty }^{-x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz\;,&x&{}\geq 1\\\end{aligned}}} When x equals 1, the integrals with limited domains are improper integrals, but still well-defined. === Infinite series === Similar to the sine and cosine functions, the inverse trigonometric functions can also be calculated using power series, as follows. For arcsine, the series can be derived by expanding its derivative, 1 1 − z 2 {\textstyle {\tfrac {1}{\sqrt {1-z^{2}}}}} , as a binomial series, and integrating term by term (using the integral definition as above). The series for arctangent can similarly be derived by expanding its derivative 1 1 + z 2 {\textstyle {\frac {1}{1+z^{2}}}} in a geometric series, and applying the integral definition above (see Leibniz series). 
arcsin ⁡ ( z ) = z + ( 1 2 ) z 3 3 + ( 1 ⋅ 3 2 ⋅ 4 ) z 5 5 + ( 1 ⋅ 3 ⋅ 5 2 ⋅ 4 ⋅ 6 ) z 7 7 + ⋯ = ∑ n = 0 ∞ ( 2 n − 1 ) ! ! ( 2 n ) ! ! z 2 n + 1 2 n + 1 = ∑ n = 0 ∞ ( 2 n ) ! ( 2 n n ! ) 2 z 2 n + 1 2 n + 1 ; | z | ≤ 1 {\displaystyle {\begin{aligned}\arcsin(z)&=z+\left({\frac {1}{2}}\right){\frac {z^{3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {z^{5}}{5}}+\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {z^{7}}{7}}+\cdots \\[5pt]&=\sum _{n=0}^{\infty }{\frac {(2n-1)!!}{(2n)!!}}{\frac {z^{2n+1}}{2n+1}}\\[5pt]&=\sum _{n=0}^{\infty }{\frac {(2n)!}{(2^{n}n!)^{2}}}{\frac {z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\end{aligned}}} arctan ⁡ ( z ) = z − z 3 3 + z 5 5 − z 7 7 + ⋯ = ∑ n = 0 ∞ ( − 1 ) n z 2 n + 1 2 n + 1 ; | z | ≤ 1 z ≠ i , − i {\displaystyle \arctan(z)=z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{5}}-{\frac {z^{7}}{7}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\qquad z\neq i,-i} Series for the other inverse trigonometric functions can be given in terms of these according to the relationships given above. For example, arccos ⁡ ( x ) = π / 2 − arcsin ⁡ ( x ) {\displaystyle \arccos(x)=\pi /2-\arcsin(x)} , arccsc ⁡ ( x ) = arcsin ⁡ ( 1 / x ) {\displaystyle \operatorname {arccsc}(x)=\arcsin(1/x)} , and so on. Another series is given by: 2 ( arcsin ⁡ ( x 2 ) ) 2 = ∑ n = 1 ∞ x 2 n n 2 ( 2 n n ) . {\displaystyle 2\left(\arcsin \left({\frac {x}{2}}\right)\right)^{2}=\sum _{n=1}^{\infty }{\frac {x^{2n}}{n^{2}{\binom {2n}{n}}}}.} Leonhard Euler found a series for the arctangent that converges more quickly than its Taylor series: arctan ⁡ ( z ) = z 1 + z 2 ∑ n = 0 ∞ ∏ k = 1 n 2 k z 2 ( 2 k + 1 ) ( 1 + z 2 ) . {\displaystyle \arctan(z)={\frac {z}{1+z^{2}}}\sum _{n=0}^{\infty }\prod _{k=1}^{n}{\frac {2kz^{2}}{(2k+1)(1+z^{2})}}.} (The term in the sum for n = 0 is the empty product, so is 1.) Alternatively, this can be expressed as arctan ⁡ ( z ) = ∑ n = 0 ∞ 2 2 n ( n ! ) 2 ( 2 n + 1 ) ! z 2 n + 1 ( 1 + z 2 ) n + 1 . 
{\displaystyle \arctan(z)=\sum _{n=0}^{\infty }{\frac {2^{2n}(n!)^{2}}{(2n+1)!}}{\frac {z^{2n+1}}{(1+z^{2})^{n+1}}}.} Another series for the arctangent function is given by arctan ⁡ ( z ) = i ∑ n = 1 ∞ 1 2 n − 1 ( 1 ( 1 + 2 i / z ) 2 n − 1 − 1 ( 1 − 2 i / z ) 2 n − 1 ) , {\displaystyle \arctan(z)=i\sum _{n=1}^{\infty }{\frac {1}{2n-1}}\left({\frac {1}{(1+2i/z)^{2n-1}}}-{\frac {1}{(1-2i/z)^{2n-1}}}\right),} where i = − 1 {\displaystyle i={\sqrt {-1}}} is the imaginary unit. ==== Continued fractions for arctangent ==== Two alternatives to the power series for arctangent are these generalized continued fractions: arctan ⁡ ( z ) = z 1 + ( 1 z ) 2 3 − 1 z 2 + ( 3 z ) 2 5 − 3 z 2 + ( 5 z ) 2 7 − 5 z 2 + ( 7 z ) 2 9 − 7 z 2 + ⋱ = z 1 + ( 1 z ) 2 3 + ( 2 z ) 2 5 + ( 3 z ) 2 7 + ( 4 z ) 2 9 + ⋱ {\displaystyle \arctan(z)={\frac {z}{1+{\cfrac {(1z)^{2}}{3-1z^{2}+{\cfrac {(3z)^{2}}{5-3z^{2}+{\cfrac {(5z)^{2}}{7-5z^{2}+{\cfrac {(7z)^{2}}{9-7z^{2}+\ddots }}}}}}}}}}={\frac {z}{1+{\cfrac {(1z)^{2}}{3+{\cfrac {(2z)^{2}}{5+{\cfrac {(3z)^{2}}{7+{\cfrac {(4z)^{2}}{9+\ddots }}}}}}}}}}} The second of these is valid in the cut complex plane. There are two cuts, from −i to the point at infinity, going down the imaginary axis, and from i to the point at infinity, going up the same axis. It works best for real numbers running from −1 to 1. The partial denominators are the odd natural numbers, and the partial numerators (after the first) are just (nz)2, with each perfect square appearing once. The first was developed by Leonhard Euler; the second by Carl Friedrich Gauss utilizing the Gaussian hypergeometric series. 
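Both Euler's accelerated series and Gauss's continued fraction are straightforward to evaluate numerically; a sketch in Python, truncating the continued fraction at a fixed depth and evaluating it from the inside out (`arctan_euler` and `arctan_cf` are illustrative names, not library functions):

```python
import math

def arctan_euler(z, terms=40):
    # Euler's series: arctan z = z/(1+z^2) * sum_n prod_{k<=n} 2k*w/(2k+1),
    # where w = z^2/(1+z^2); the n = 0 term is the empty product, 1
    w = z * z / (1 + z * z)
    total, prod = 0.0, 1.0
    for n in range(terms):
        total += prod
        prod *= 2 * (n + 1) * w / (2 * (n + 1) + 1)
    return z / (1 + z * z) * total

def arctan_cf(z, depth=20):
    # Gauss's continued fraction z/(1 + (1z)^2/(3 + (2z)^2/(5 + ...))),
    # truncated at a fixed depth and evaluated bottom-up
    acc = 2 * depth + 1  # innermost partial denominator
    for n in range(depth, 0, -1):
        acc = (2 * n - 1) + (n * z) ** 2 / acc
    return z / acc

# both agree with the library arctangent on [-1, 1]:
for z in (0.1, 0.5, 1.0):
    assert math.isclose(arctan_euler(z), math.atan(z), rel_tol=1e-9)
    assert math.isclose(arctan_cf(z), math.atan(z), rel_tol=1e-9)
```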
=== Indefinite integrals of inverse trigonometric functions === For real and complex values of z: ∫ arcsin ⁡ ( z ) d z = z arcsin ⁡ ( z ) + 1 − z 2 + C ∫ arccos ⁡ ( z ) d z = z arccos ⁡ ( z ) − 1 − z 2 + C ∫ arctan ⁡ ( z ) d z = z arctan ⁡ ( z ) − 1 2 ln ⁡ ( 1 + z 2 ) + C ∫ arccot ⁡ ( z ) d z = z arccot ⁡ ( z ) + 1 2 ln ⁡ ( 1 + z 2 ) + C ∫ arcsec ⁡ ( z ) d z = z arcsec ⁡ ( z ) − ln ⁡ [ z ( 1 + z 2 − 1 z 2 ) ] + C ∫ arccsc ⁡ ( z ) d z = z arccsc ⁡ ( z ) + ln ⁡ [ z ( 1 + z 2 − 1 z 2 ) ] + C {\displaystyle {\begin{aligned}\int \arcsin(z)\,dz&{}=z\,\arcsin(z)+{\sqrt {1-z^{2}}}+C\\\int \arccos(z)\,dz&{}=z\,\arccos(z)-{\sqrt {1-z^{2}}}+C\\\int \arctan(z)\,dz&{}=z\,\arctan(z)-{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arccot}(z)\,dz&{}=z\,\operatorname {arccot}(z)+{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arcsec}(z)\,dz&{}=z\,\operatorname {arcsec}(z)-\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\\\int \operatorname {arccsc}(z)\,dz&{}=z\,\operatorname {arccsc}(z)+\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\end{aligned}}} For real x ≥ 1: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − ln ⁡ ( x + x 2 − 1 ) + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + ln ⁡ ( x + x 2 − 1 ) + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\end{aligned}}} For all real x not between -1 and 1: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − sgn ⁡ ( x ) ln ⁡ | x + x 2 − 1 | + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + sgn ⁡ ( x ) ln ⁡ | x + x 2 − 1 | + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\operatorname {sgn}(x)\ln \left|x+{\sqrt {x^{2}-1}}\right|+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\operatorname {sgn}(x)\ln \left|x+{\sqrt 
{x^{2}-1}}\right|+C\end{aligned}}} The absolute value is necessary to compensate for both negative and positive values of the arcsecant and arccosecant functions. The signum function is also necessary due to the absolute values in the derivatives of the two functions, which create two different solutions for positive and negative values of x. These can be further simplified using the logarithmic definitions of the inverse hyperbolic functions: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − arcosh ⁡ ( | x | ) + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + arcosh ⁡ ( | x | ) + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\operatorname {arcosh} (|x|)+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\operatorname {arcosh} (|x|)+C\\\end{aligned}}} The absolute value in the argument of the arcosh function creates a negative half of its graph, making it identical to the signum logarithmic function shown above. All of these antiderivatives can be derived using integration by parts and the simple derivative forms shown above. ==== Example ==== Using ∫ u d v = u v − ∫ v d u {\displaystyle \int u\,dv=uv-\int v\,du} (i.e. integration by parts), set u = arcsin ⁡ ( x ) d v = d x d u = d x 1 − x 2 v = x {\displaystyle {\begin{aligned}u&=\arcsin(x)&dv&=dx\\du&={\frac {dx}{\sqrt {1-x^{2}}}}&v&=x\end{aligned}}} Then ∫ arcsin ⁡ ( x ) d x = x arcsin ⁡ ( x ) − ∫ x 1 − x 2 d x , {\displaystyle \int \arcsin(x)\,dx=x\arcsin(x)-\int {\frac {x}{\sqrt {1-x^{2}}}}\,dx,} which by the simple substitution w = 1 − x 2 , d w = − 2 x d x {\displaystyle w=1-x^{2},\ dw=-2x\,dx} yields the final result: ∫ arcsin ⁡ ( x ) d x = x arcsin ⁡ ( x ) + 1 − x 2 + C {\displaystyle \int \arcsin(x)\,dx=x\arcsin(x)+{\sqrt {1-x^{2}}}+C} == Extension to the complex plane == Since the inverse trigonometric functions are analytic functions, they can be extended from the real line to the complex plane. 
This results in functions with multiple sheets and branch points. One possible way of defining the extension is: arctan ⁡ ( z ) = ∫ 0 z d x 1 + x 2 z ≠ − i , + i {\displaystyle \arctan(z)=\int _{0}^{z}{\frac {dx}{1+x^{2}}}\quad z\neq -i,+i} where the part of the imaginary axis which does not lie strictly between the branch points (−i and +i) is the branch cut between the principal sheet and other sheets. The path of the integral must not cross a branch cut. For z not on a branch cut, a straight line path from 0 to z is such a path. For z on a branch cut, the path must approach from Re[x] > 0 for the upper branch cut and from Re[x] < 0 for the lower branch cut. The arcsine function may then be defined as: arcsin ⁡ ( z ) = arctan ⁡ ( z 1 − z 2 ) z ≠ − 1 , + 1 {\displaystyle \arcsin(z)=\arctan \left({\frac {z}{\sqrt {1-z^{2}}}}\right)\quad z\neq -1,+1} where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not lie strictly between −1 and +1 is the branch cut between the principal sheet of arcsin and other sheets; arccos ⁡ ( z ) = π 2 − arcsin ⁡ ( z ) z ≠ − 1 , + 1 {\displaystyle \arccos(z)={\frac {\pi }{2}}-\arcsin(z)\quad z\neq -1,+1} which has the same cut as arcsin; arccot ⁡ ( z ) = π 2 − arctan ⁡ ( z ) z ≠ − i , i {\displaystyle \operatorname {arccot}(z)={\frac {\pi }{2}}-\arctan(z)\quad z\neq -i,i} which has the same cut as arctan; arcsec ⁡ ( z ) = arccos ⁡ ( 1 z ) z ≠ − 1 , 0 , + 1 {\displaystyle \operatorname {arcsec}(z)=\arccos \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1} where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other sheets; arccsc ⁡ ( z ) = arcsin ⁡ ( 1 z ) z ≠ − 1 , 0 , + 1 {\displaystyle \operatorname {arccsc}(z)=\arcsin \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1} which has the same cut as arcsec. === Logarithmic forms === These functions may also be expressed using complex logarithms. 
This extends their domains to the complex plane in a natural fashion. The following identities for principal values of the functions hold everywhere that they are defined, even on their branch cuts. arcsin ⁡ ( z ) = − i ln ⁡ ( 1 − z 2 + i z ) = i ln ⁡ ( 1 − z 2 − i z ) = arccsc ⁡ ( 1 z ) arccos ⁡ ( z ) = − i ln ⁡ ( i 1 − z 2 + z ) = π 2 − arcsin ⁡ ( z ) = arcsec ⁡ ( 1 z ) arctan ⁡ ( z ) = − i 2 ln ⁡ ( i − z i + z ) = − i 2 ln ⁡ ( 1 + i z 1 − i z ) = arccot ⁡ ( 1 z ) arccot ⁡ ( z ) = − i 2 ln ⁡ ( z + i z − i ) = − i 2 ln ⁡ ( i z − 1 i z + 1 ) = arctan ⁡ ( 1 z ) arcsec ⁡ ( z ) = − i ln ⁡ ( i 1 − 1 z 2 + 1 z ) = π 2 − arccsc ⁡ ( z ) = arccos ⁡ ( 1 z ) arccsc ⁡ ( z ) = − i ln ⁡ ( 1 − 1 z 2 + i z ) = i ln ⁡ ( 1 − 1 z 2 − i z ) = arcsin ⁡ ( 1 z ) {\displaystyle {\begin{aligned}\arcsin(z)&{}=-i\ln \left({\sqrt {1-z^{2}}}+iz\right)=i\ln \left({\sqrt {1-z^{2}}}-iz\right)&{}=\operatorname {arccsc} \left({\frac {1}{z}}\right)\\[10pt]\arccos(z)&{}=-i\ln \left(i{\sqrt {1-z^{2}}}+z\right)={\frac {\pi }{2}}-\arcsin(z)&{}=\operatorname {arcsec} \left({\frac {1}{z}}\right)\\[10pt]\arctan(z)&{}=-{\frac {i}{2}}\ln \left({\frac {i-z}{i+z}}\right)=-{\frac {i}{2}}\ln \left({\frac {1+iz}{1-iz}}\right)&{}=\operatorname {arccot} \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccot}(z)&{}=-{\frac {i}{2}}\ln \left({\frac {z+i}{z-i}}\right)=-{\frac {i}{2}}\ln \left({\frac {iz-1}{iz+1}}\right)&{}=\arctan \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arcsec}(z)&{}=-i\ln \left(i{\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {1}{z}}\right)={\frac {\pi }{2}}-\operatorname {arccsc}(z)&{}=\arccos \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccsc}(z)&{}=-i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)=i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}-{\frac {i}{z}}\right)&{}=\arcsin \left({\frac {1}{z}}\right)\end{aligned}}} ==== Generalization ==== Because all of the inverse trigonometric functions output an angle of a right triangle, they can be generalized by using Euler's formula to 
form a right triangle in the complex plane. Algebraically, this gives us: c e i θ = c cos ⁡ ( θ ) + i c sin ⁡ ( θ ) {\displaystyle ce^{i\theta }=c\cos(\theta )+ic\sin(\theta )} or c e i θ = a + i b {\displaystyle ce^{i\theta }=a+ib} where a {\displaystyle a} is the adjacent side, b {\displaystyle b} is the opposite side, and c {\displaystyle c} is the hypotenuse. From here, we can solve for θ {\displaystyle \theta } . e ln ⁡ ( c ) + i θ = a + i b ln ⁡ c + i θ = ln ⁡ ( a + i b ) θ = Im ⁡ ( ln ⁡ ( a + i b ) ) {\displaystyle {\begin{aligned}e^{\ln(c)+i\theta }&=a+ib\\\ln c+i\theta &=\ln(a+ib)\\\theta &=\operatorname {Im} \left(\ln(a+ib)\right)\end{aligned}}} or θ = − i ln ⁡ ( a + i b c ) {\displaystyle \theta =-i\ln \left({\frac {a+ib}{c}}\right)} Simply taking the imaginary part works for any real-valued a {\displaystyle a} and b {\displaystyle b} , but if a {\displaystyle a} or b {\displaystyle b} is complex-valued, we have to use the final equation so that the real part of the result isn't excluded. Since the length of the hypotenuse doesn't change the angle, ignoring the real part of ln ⁡ ( a + b i ) {\displaystyle \ln(a+bi)} also removes c {\displaystyle c} from the equation. In the final equation, we see that the angle of the triangle in the complex plane can be found by inputting the lengths of each side. By setting one of the three sides equal to 1 and one of the remaining sides equal to our input z {\displaystyle z} , we obtain a formula for one of the inverse trig functions, for a total of six equations. 
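For real side lengths, the relation θ = Im(ln(a + ib)) derived above is just the principal argument of the complex number a + ib, which can be sketched numerically (the function name below is illustrative):

```python
import cmath
import math

def angle_from_sides(a, b):
    """Angle of the right triangle with adjacent side a and opposite side b,
    computed as the imaginary part of the complex logarithm of a + ib."""
    return cmath.log(complex(a, b)).imag

# The imaginary part of the principal logarithm of a + ib equals atan2(b, a),
# so the hypotenuse length c never needs to be computed explicitly.
for a, b in [(1.0, 1.0), (-2.0, 0.5), (0.0, 3.0)]:
    assert math.isclose(angle_from_sides(a, b), math.atan2(b, a))
```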
Because the inverse trig functions require only one input, we must put the final side of the triangle in terms of the other two using the Pythagorean Theorem relation a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} The table below shows the values of a, b, and c for each of the inverse trig functions and the equivalent expressions for θ {\displaystyle \theta } that result from plugging the values into the equations θ = − i ln ⁡ ( a + i b c ) {\displaystyle \theta =-i\ln \left({\tfrac {a+ib}{c}}\right)} above and simplifying. a b c − i ln ⁡ ( a + i b c ) θ θ a , b ∈ R arcsin ⁡ ( z ) 1 − z 2 z 1 − i ln ⁡ ( 1 − z 2 + i z 1 ) = − i ln ⁡ ( 1 − z 2 + i z ) Im ⁡ ( ln ⁡ ( 1 − z 2 + i z ) ) arccos ⁡ ( z ) z 1 − z 2 1 − i ln ⁡ ( z + i 1 − z 2 1 ) = − i ln ⁡ ( z + z 2 − 1 ) Im ⁡ ( ln ⁡ ( z + z 2 − 1 ) ) arctan ⁡ ( z ) 1 z 1 + z 2 − i ln ⁡ ( 1 + i z 1 + z 2 ) = − i 2 ln ⁡ ( i − z i + z ) Im ⁡ ( ln ⁡ ( 1 + i z ) ) arccot ⁡ ( z ) z 1 z 2 + 1 − i ln ⁡ ( z + i z 2 + 1 ) = − i 2 ln ⁡ ( z + i z − i ) Im ⁡ ( ln ⁡ ( z + i ) ) arcsec ⁡ ( z ) 1 z 2 − 1 z − i ln ⁡ ( 1 + i z 2 − 1 z ) = − i ln ⁡ ( 1 z + 1 z 2 − 1 ) Im ⁡ ( ln ⁡ ( 1 z + 1 z 2 − 1 ) ) arccsc ⁡ ( z ) z 2 − 1 1 z − i ln ⁡ ( z 2 − 1 + i z ) = − i ln ⁡ ( 1 − 1 z 2 + i z ) Im ⁡ ( ln ⁡ ( 1 − 1 z 2 + i z ) ) {\displaystyle {\begin{aligned}&a&&b&&c&&-i\ln \left({\frac {a+ib}{c}}\right)&&\theta &&\theta _{a,b\in \mathbb {R} }\\\arcsin(z)\ \ &{\sqrt {1-z^{2}}}&&z&&1&&-i\ln \left({\frac {{\sqrt {1-z^{2}}}+iz}{1}}\right)&&=-i\ln \left({\sqrt {1-z^{2}}}+iz\right)&&\operatorname {Im} \left(\ln \left({\sqrt {1-z^{2}}}+iz\right)\right)\\\arccos(z)\ \ &z&&{\sqrt {1-z^{2}}}&&1&&-i\ln \left({\frac {z+i{\sqrt {1-z^{2}}}}{1}}\right)&&=-i\ln \left(z+{\sqrt {z^{2}-1}}\right)&&\operatorname {Im} \left(\ln \left(z+{\sqrt {z^{2}-1}}\right)\right)\\\arctan(z)\ \ &1&&z&&{\sqrt {1+z^{2}}}&&-i\ln \left({\frac {1+iz}{\sqrt {1+z^{2}}}}\right)&&=-{\frac {i}{2}}\ln \left({\frac {i-z}{i+z}}\right)&&\operatorname {Im} \left(\ln 
\left(1+iz\right)\right)\\\operatorname {arccot}(z)\ \ &z&&1&&{\sqrt {z^{2}+1}}&&-i\ln \left({\frac {z+i}{\sqrt {z^{2}+1}}}\right)&&=-{\frac {i}{2}}\ln \left({\frac {z+i}{z-i}}\right)&&\operatorname {Im} \left(\ln \left(z+i\right)\right)\\\operatorname {arcsec}(z)\ \ &1&&{\sqrt {z^{2}-1}}&&z&&-i\ln \left({\frac {1+i{\sqrt {z^{2}-1}}}{z}}\right)&&=-i\ln \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z^{2}}}-1}}\right)&&\operatorname {Im} \left(\ln \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z^{2}}}-1}}\right)\right)\\\operatorname {arccsc}(z)\ \ &{\sqrt {z^{2}-1}}&&1&&z&&-i\ln \left({\frac {{\sqrt {z^{2}-1}}+i}{z}}\right)&&=-i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)&&\operatorname {Im} \left(\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)\right)\\\end{aligned}}} The particular form of the simplified expression can cause the output to differ from the usual principal branch of each of the inverse trig functions. The formulations given will output the usual principal branch when using the Im ⁡ ( ln ⁡ z ) ∈ ( − π , π ] {\displaystyle \operatorname {Im} \left(\ln z\right)\in (-\pi ,\pi ]} and Re ⁡ ( z ) ≥ 0 {\displaystyle \operatorname {Re} \left({\sqrt {z}}\right)\geq 0} principal branch for every function except arccotangent in the θ {\displaystyle \theta } column. Arccotangent in the θ {\displaystyle \theta } column will output on its usual principal branch by using the Im ⁡ ( ln ⁡ z ) ∈ [ 0 , 2 π ) {\displaystyle \operatorname {Im} \left(\ln z\right)\in [0,2\pi )} and Im ⁡ ( z ) ≥ 0 {\displaystyle \operatorname {Im} \left({\sqrt {z}}\right)\geq 0} convention. In this sense, all of the inverse trig functions can be thought of as specific cases of the complex-valued log function. Since these definitions work for any complex-valued z {\displaystyle z} , the definitions allow for hyperbolic angles as outputs and can be used to further define the inverse hyperbolic functions. 
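A brief numerical sketch using Python's cmath module (which implements the usual principal branches) can confirm two of the logarithmic forms; the sample point is arbitrary and chosen off the branch cuts:

```python
import cmath

z = 0.5 + 0.3j  # arbitrary point away from the branch cuts

# arcsin(z) = -i ln(sqrt(1 - z^2) + iz)
arcsin_log = -1j * cmath.log(cmath.sqrt(1 - z * z) + 1j * z)

# arctan(z) = -(i/2) ln((1 + iz) / (1 - iz))
arctan_log = (-1j / 2) * cmath.log((1 + 1j * z) / (1 - 1j * z))

# Away from the cuts, these match the library's principal-branch values.
assert abs(arcsin_log - cmath.asin(z)) < 1e-12
assert abs(arctan_log - cmath.atan(z)) < 1e-12
```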
It's possible to algebraically prove these relations by starting with the exponential forms of the trigonometric functions and solving for the inverse function. ==== Example proof ==== sin ⁡ ( ϕ ) = z ϕ = arcsin ⁡ ( z ) {\displaystyle {\begin{aligned}\sin(\phi )&=z\\\phi &=\arcsin(z)\end{aligned}}} Using the exponential definition of sine, and letting ξ = e i ϕ , {\displaystyle \xi =e^{i\phi },} z = e i ϕ − e − i ϕ 2 i 2 i z = ξ − 1 ξ 0 = ξ 2 − 2 i z ξ − 1 ξ = i z ± 1 − z 2 ϕ = − i ln ⁡ ( i z ± 1 − z 2 ) {\displaystyle {\begin{aligned}z&={\frac {e^{i\phi }-e^{-i\phi }}{2i}}\\[10mu]2iz&=\xi -{\frac {1}{\xi }}\\[5mu]0&=\xi ^{2}-2iz\xi -1\\[5mu]\xi &=iz\pm {\sqrt {1-z^{2}}}\\[5mu]\phi &=-i\ln \left(iz\pm {\sqrt {1-z^{2}}}\right)\end{aligned}}} (the positive branch is chosen) ϕ = arcsin ⁡ ( z ) = − i ln ⁡ ( i z + 1 − z 2 ) {\displaystyle \phi =\arcsin(z)=-i\ln \left(iz+{\sqrt {1-z^{2}}}\right)} == Applications == === Finding the angle of a right triangle === Inverse trigonometric functions are useful when trying to determine the remaining two angles of a right triangle when the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine and cosine, it follows that θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) . {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right).} Often, the hypotenuse is unknown and would need to be calculated before using arcsine or arccosine using the Pythagorean Theorem: a 2 + b 2 = h 2 {\displaystyle a^{2}+b^{2}=h^{2}} where h {\displaystyle h} is the length of the hypotenuse. Arctangent comes in handy in this situation, as the length of the hypotenuse is not needed. θ = arctan ⁡ ( opposite adjacent ) . {\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)\,.} For example, suppose a roof drops 8 feet as it runs out 20 feet. 
The roof makes an angle θ with the horizontal, where θ may be computed as follows: θ = arctan ⁡ ( opposite adjacent ) = arctan ⁡ ( rise run ) = arctan ⁡ ( 8 20 ) ≈ 21.8 ∘ . {\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)=\arctan \left({\frac {\text{rise}}{\text{run}}}\right)=\arctan \left({\frac {8}{20}}\right)\approx 21.8^{\circ }\,.} === In computer science and engineering === ==== Two-argument variant of arctangent ==== The two-argument atan2 function computes the arctangent of y/x given y and x, but with a range of (−π, π]. In other words, atan2(y, x) is the angle between the positive x-axis of a plane and the point (x, y) on it, with positive sign for counter-clockwise angles (upper half-plane, y > 0), and negative sign for clockwise angles (lower half-plane, y < 0). It first appeared in computer programming languages, but it is now also common in other fields of science and engineering. In terms of the standard arctan function, that is, with range (−π/2, π/2), it can be expressed as follows: atan2 ⁡ ( y , x ) = { arctan ⁡ ( y x ) x > 0 arctan ⁡ ( y x ) + π y ≥ 0 , x < 0 arctan ⁡ ( y x ) − π y < 0 , x < 0 π 2 y > 0 , x = 0 − π 2 y < 0 , x = 0 undefined y = 0 , x = 0 {\displaystyle \operatorname {atan2} (y,x)={\begin{cases}\arctan \left({\frac {y}{x}}\right)&\quad x>0\\\arctan \left({\frac {y}{x}}\right)+\pi &\quad y\geq 0,\;x<0\\\arctan \left({\frac {y}{x}}\right)-\pi &\quad y<0,\;x<0\\{\frac {\pi }{2}}&\quad y>0,\;x=0\\-{\frac {\pi }{2}}&\quad y<0,\;x=0\\{\text{undefined}}&\quad y=0,\;x=0\end{cases}}} It also equals the principal value of the argument of the complex number x + iy. This limited version of the function above may also be defined using the tangent half-angle formulae as follows: atan2 ⁡ ( y , x ) = 2 arctan ⁡ ( y x 2 + y 2 + x ) {\displaystyle \operatorname {atan2} (y,x)=2\arctan \left({\frac {y}{{\sqrt {x^{2}+y^{2}}}+x}}\right)} provided that either x > 0 or y ≠ 0. 
However, this fails if x ≤ 0 and y = 0, so the expression is unsuitable for computational use. The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as the C programming language, but a few authors may use the opposite convention (x, y), so some caution is warranted. (See variations at atan2 § Realizations of the function in common computer languages.) ==== Arctangent function with location parameter ==== In many applications, the solution y {\displaystyle y} of the equation x = tan ⁡ ( y ) {\displaystyle x=\tan(y)} should come as close as possible to a given value − ∞ < η < ∞ {\displaystyle -\infty <\eta <\infty } . The adequate solution is produced by the parameter-modified arctangent function y = arctan η ⁡ ( x ) := arctan ⁡ ( x ) + π rni ⁡ ( η − arctan ⁡ ( x ) π ) . {\displaystyle y=\arctan _{\eta }(x):=\arctan(x)+\pi \,\operatorname {rni} \left({\frac {\eta -\arctan(x)}{\pi }}\right)\,.} The function rni {\displaystyle \operatorname {rni} } rounds to the nearest integer. ==== Numerical accuracy ==== For angles near 0 and π, arccosine is ill-conditioned, and similarly with arcsine for angles near −π/2 and π/2. Computer applications thus need to consider the stability of inputs to these functions and the sensitivity of their calculations, or use alternate methods. == See also == == Notes == == References == Abramowitz, Milton; Stegun, Irene A., eds. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover Publications. ISBN 978-0-486-61272-0. == External links == Weisstein, Eric W. "Inverse Tangent". MathWorld.
Wikipedia/Inverse_trigonometric_functions
In mathematics, the inverse hyperbolic functions are inverses of the hyperbolic functions, analogous to the inverse circular functions. There are six in common use: inverse hyperbolic sine, inverse hyperbolic cosine, inverse hyperbolic tangent, inverse hyperbolic cosecant, inverse hyperbolic secant, and inverse hyperbolic cotangent. They are commonly denoted by the symbols for the hyperbolic functions, prefixed with arc- or ar- or with a superscript − 1 {\displaystyle {-1}} (for example arcsinh, arsinh, or sinh − 1 {\displaystyle \sinh ^{-1}} ). For a given value of a hyperbolic function, the inverse hyperbolic function provides the corresponding hyperbolic angle measure, for example arsinh ⁡ ( sinh ⁡ a ) = a {\displaystyle \operatorname {arsinh} (\sinh a)=a} and sinh ⁡ ( arsinh ⁡ x ) = x . {\displaystyle \sinh(\operatorname {arsinh} x)=x.} Hyperbolic angle measure is the length of an arc of a unit hyperbola x 2 − y 2 = 1 {\displaystyle x^{2}-y^{2}=1} as measured in the Lorentzian plane (not the length of a hyperbolic arc in the Euclidean plane), and twice the area of the corresponding hyperbolic sector. This is analogous to the way circular angle measure is the arc length of an arc of the unit circle in the Euclidean plane or twice the area of the corresponding circular sector. Alternately hyperbolic angle is the area of a sector of the hyperbola x y = 1. {\displaystyle xy=1.} Some authors call the inverse hyperbolic functions hyperbolic area functions. Hyperbolic functions occur in the calculation of angles and distances in hyperbolic geometry. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equations are important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity. 
== Notation == The earliest and most widely adopted symbols use the prefix arc- (that is: arcsinh, arccosh, arctanh, arcsech, arccsch, arccoth), by analogy with the inverse circular functions (arcsin, etc.). For a unit hyperbola ("Lorentzian circle") in the Lorentzian plane (pseudo-Euclidean plane of signature (1, 1)) or in the hyperbolic number plane, the hyperbolic angle measure (argument to the hyperbolic functions) is indeed the arc length of a hyperbolic arc. Also common is the notation sinh − 1 , {\displaystyle \sinh ^{-1},} cosh − 1 , {\displaystyle \cosh ^{-1},} etc., although care must be taken to avoid misinterpretations of the superscript −1 as an exponent. The standard convention is that sinh − 1 ⁡ x {\displaystyle \sinh ^{-1}x} or sinh − 1 ⁡ ( x ) {\displaystyle \sinh ^{-1}(x)} means the inverse function while ( sinh ⁡ x ) − 1 {\displaystyle (\sinh x)^{-1}} or sinh ⁡ ( x ) − 1 {\displaystyle \sinh(x)^{-1}} means the reciprocal 1 / sinh ⁡ x . {\displaystyle 1/\sinh x.} Especially inconsistent is the conventional use of positive integer superscripts to indicate an exponent rather than function composition, e.g. sinh 2 ⁡ x {\displaystyle \sinh ^{2}x} conventionally means ( sinh ⁡ x ) 2 {\displaystyle (\sinh x)^{2}} and not sinh ⁡ ( sinh ⁡ x ) . {\displaystyle \sinh(\sinh x).} Because the argument of hyperbolic functions is not the arc length of a hyperbolic arc in the Euclidean plane, some authors have condemned the prefix arc-, arguing that the prefix ar- (for area) or arg- (for argument) should be preferred. Following this recommendation, the ISO 80000-2 standard abbreviations use the prefix ar- (that is: arsinh, arcosh, artanh, arsech, arcsch, arcoth). In computer programming languages, inverse circular and hyperbolic functions are often named with the shorter prefix a- (asinh, etc.). This article will consistently adopt the prefix ar- for convenience. 
== Definitions in terms of logarithms == Since the hyperbolic functions are quadratic rational functions of the exponential function exp ⁡ x , {\displaystyle \exp x,} they may be solved using the quadratic formula and then written in terms of the natural logarithm. arsinh ⁡ x = ln ⁡ ( x + x 2 + 1 ) − ∞ < x < ∞ , arcosh ⁡ x = ln ⁡ ( x + x 2 − 1 ) 1 ≤ x < ∞ , artanh ⁡ x = 1 2 ln ⁡ 1 + x 1 − x − 1 < x < 1 , arcsch ⁡ x = ln ⁡ ( 1 x + 1 x 2 + 1 ) − ∞ < x < ∞ , x ≠ 0 , arsech ⁡ x = ln ⁡ ( 1 x + 1 x 2 − 1 ) 0 < x ≤ 1 , arcoth ⁡ x = 1 2 ln ⁡ x + 1 x − 1 − ∞ < x < − 1 or 1 < x < ∞ . {\displaystyle {\begin{aligned}\operatorname {arsinh} x&=\ln \left(x+{\sqrt {x^{2}+1}}\right)&-\infty &<x<\infty ,\\[10mu]\operatorname {arcosh} x&=\ln \left(x+{\sqrt {x^{2}-1}}\right)&1&\leq x<\infty ,\\[10mu]\operatorname {artanh} x&={\frac {1}{2}}\ln {\frac {1+x}{1-x}}&-1&<x<1,\\[10mu]\operatorname {arcsch} x&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}+1}}\right)&-\infty &<x<\infty ,\ x\neq 0,\\[10mu]\operatorname {arsech} x&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}-1}}\right)&0&<x\leq 1,\\[10mu]\operatorname {arcoth} x&={\frac {1}{2}}\ln {\frac {x+1}{x-1}}&-\infty &<x<-1\ \ {\text{or}}\ \ 1<x<\infty .\end{aligned}}} For complex arguments, the inverse circular and hyperbolic functions, the square root, and the natural logarithm are all multi-valued functions. 
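For real arguments, the closed forms above can be checked directly against the standard library (a sketch; the function names are chosen here for illustration):

```python
import math

def arsinh(x):
    return math.log(x + math.sqrt(x * x + 1))

def arcosh(x):  # requires x >= 1
    return math.log(x + math.sqrt(x * x - 1))

def artanh(x):  # requires -1 < x < 1
    return 0.5 * math.log((1 + x) / (1 - x))

# Note: library implementations (math.asinh etc.) rearrange these formulas to
# avoid cancellation for some arguments; this direct translation is only a sketch.
for x in (-2.0, 0.0, 0.5, 3.0):
    assert math.isclose(arsinh(x), math.asinh(x), abs_tol=1e-12)
for x in (1.5, 10.0):
    assert math.isclose(arcosh(x), math.acosh(x))
for x in (-0.9, 0.0, 0.7):
    assert math.isclose(artanh(x), math.atanh(x), abs_tol=1e-12)
```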
== Addition formulae == arsinh ⁡ u ± arsinh ⁡ v = arsinh ⁡ ( u 1 + v 2 ± v 1 + u 2 ) {\displaystyle \operatorname {arsinh} u\pm \operatorname {arsinh} v=\operatorname {arsinh} \left(u{\sqrt {1+v^{2}}}\pm v{\sqrt {1+u^{2}}}\right)} arcosh ⁡ u ± arcosh ⁡ v = arcosh ⁡ ( u v ± ( u 2 − 1 ) ( v 2 − 1 ) ) {\displaystyle \operatorname {arcosh} u\pm \operatorname {arcosh} v=\operatorname {arcosh} \left(uv\pm {\sqrt {(u^{2}-1)(v^{2}-1)}}\right)} artanh ⁡ u ± artanh ⁡ v = artanh ⁡ ( u ± v 1 ± u v ) {\displaystyle \operatorname {artanh} u\pm \operatorname {artanh} v=\operatorname {artanh} \left({\frac {u\pm v}{1\pm uv}}\right)} arcoth ⁡ u ± arcoth ⁡ v = arcoth ⁡ ( 1 ± u v u ± v ) {\displaystyle \operatorname {arcoth} u\pm \operatorname {arcoth} v=\operatorname {arcoth} \left({\frac {1\pm uv}{u\pm v}}\right)} arsinh ⁡ u + arcosh ⁡ v = arsinh ⁡ ( u v + ( 1 + u 2 ) ( v 2 − 1 ) ) = arcosh ⁡ ( v 1 + u 2 + u v 2 − 1 ) {\displaystyle {\begin{aligned}\operatorname {arsinh} u+\operatorname {arcosh} v&=\operatorname {arsinh} \left(uv+{\sqrt {(1+u^{2})(v^{2}-1)}}\right)\\&=\operatorname {arcosh} \left(v{\sqrt {1+u^{2}}}+u{\sqrt {v^{2}-1}}\right)\end{aligned}}} == Other identities == 2 arcosh ⁡ x = arcosh ⁡ ( 2 x 2 − 1 ) for x ≥ 1 4 arcosh ⁡ x = arcosh ⁡ ( 8 x 4 − 8 x 2 + 1 ) for x ≥ 1 2 arsinh ⁡ x = ± arcosh ⁡ ( 2 x 2 + 1 ) 4 arsinh ⁡ x = arcosh ⁡ ( 8 x 4 + 8 x 2 + 1 ) for x ≥ 0 {\displaystyle {\begin{aligned}2\operatorname {arcosh} x&=\operatorname {arcosh} (2x^{2}-1)&\quad {\hbox{ for }}x\geq 1\\4\operatorname {arcosh} x&=\operatorname {arcosh} (8x^{4}-8x^{2}+1)&\quad {\hbox{ for }}x\geq 1\\2\operatorname {arsinh} x&=\pm \operatorname {arcosh} (2x^{2}+1)\\4\operatorname {arsinh} x&=\operatorname {arcosh} (8x^{4}+8x^{2}+1)&\quad {\hbox{ for }}x\geq 0\end{aligned}}} ln ⁡ ( x ) = arcosh ⁡ ( x 2 + 1 2 x ) = arsinh ⁡ ( x 2 − 1 2 x ) = artanh ⁡ ( x 2 − 1 x 2 + 1 ) {\displaystyle \ln(x)=\operatorname {arcosh} \left({\frac {x^{2}+1}{2x}}\right)=\operatorname {arsinh} \left({\frac 
{x^{2}-1}{2x}}\right)=\operatorname {artanh} \left({\frac {x^{2}-1}{x^{2}+1}}\right)} == Composition of hyperbolic and inverse hyperbolic functions == sinh ⁡ ( arcosh ⁡ x ) = x 2 − 1 for | x | > 1 sinh ⁡ ( artanh ⁡ x ) = x 1 − x 2 for − 1 < x < 1 cosh ⁡ ( arsinh ⁡ x ) = 1 + x 2 cosh ⁡ ( artanh ⁡ x ) = 1 1 − x 2 for − 1 < x < 1 tanh ⁡ ( arsinh ⁡ x ) = x 1 + x 2 tanh ⁡ ( arcosh ⁡ x ) = x 2 − 1 x for | x | > 1 {\displaystyle {\begin{aligned}&\sinh(\operatorname {arcosh} x)={\sqrt {x^{2}-1}}\quad {\text{for}}\quad |x|>1\\&\sinh(\operatorname {artanh} x)={\frac {x}{\sqrt {1-x^{2}}}}\quad {\text{for}}\quad -1<x<1\\&\cosh(\operatorname {arsinh} x)={\sqrt {1+x^{2}}}\\&\cosh(\operatorname {artanh} x)={\frac {1}{\sqrt {1-x^{2}}}}\quad {\text{for}}\quad -1<x<1\\&\tanh(\operatorname {arsinh} x)={\frac {x}{\sqrt {1+x^{2}}}}\\&\tanh(\operatorname {arcosh} x)={\frac {\sqrt {x^{2}-1}}{x}}\quad {\text{for}}\quad |x|>1\end{aligned}}} == Composition of inverse hyperbolic and circular functions == arsinh ⁡ ( tan ⁡ α ) = artanh ⁡ ( sin ⁡ α ) = ln ⁡ ( 1 + sin ⁡ α cos ⁡ α ) = ± arcosh ⁡ ( 1 cos ⁡ α ) {\displaystyle \operatorname {arsinh} \left(\tan \alpha \right)=\operatorname {artanh} \left(\sin \alpha \right)=\ln \left({\frac {1+\sin \alpha }{\cos \alpha }}\right)=\pm \operatorname {arcosh} \left({\frac {1}{\cos \alpha }}\right)} ln ⁡ ( | tan ⁡ α | ) = − artanh ⁡ ( cos ⁡ 2 α ) {\displaystyle \ln \left(\left|\tan \alpha \right|\right)=-\operatorname {artanh} \left(\cos 2\alpha \right)} == Conversions == ln ⁡ x = artanh ⁡ ( x 2 − 1 x 2 + 1 ) = arsinh ⁡ ( x 2 − 1 2 x ) = ± arcosh ⁡ ( x 2 + 1 2 x ) {\displaystyle \ln x=\operatorname {artanh} \left({\frac {x^{2}-1}{x^{2}+1}}\right)=\operatorname {arsinh} \left({\frac {x^{2}-1}{2x}}\right)=\pm \operatorname {arcosh} \left({\frac {x^{2}+1}{2x}}\right)} artanh ⁡ x = arsinh ⁡ ( x 1 − x 2 ) = ± arcosh ⁡ ( 1 1 − x 2 ) {\displaystyle \operatorname {artanh} x=\operatorname {arsinh} \left({\frac {x}{\sqrt {1-x^{2}}}}\right)=\pm \operatorname 
{arcosh} \left({\frac {1}{\sqrt {1-x^{2}}}}\right)} arsinh ⁡ x = artanh ⁡ ( x 1 + x 2 ) = ± arcosh ⁡ ( 1 + x 2 ) {\displaystyle \operatorname {arsinh} x=\operatorname {artanh} \left({\frac {x}{\sqrt {1+x^{2}}}}\right)=\pm \operatorname {arcosh} \left({\sqrt {1+x^{2}}}\right)} arcosh ⁡ x = | arsinh ⁡ ( x 2 − 1 ) | = | artanh ⁡ ( x 2 − 1 x ) | {\displaystyle \operatorname {arcosh} x=\left|\operatorname {arsinh} \left({\sqrt {x^{2}-1}}\right)\right|=\left|\operatorname {artanh} \left({\frac {\sqrt {x^{2}-1}}{x}}\right)\right|} == Derivatives == d d x arsinh ⁡ x = 1 x 2 + 1 , for all real x d d x arcosh ⁡ x = 1 x 2 − 1 , for all real x > 1 d d x artanh ⁡ x = 1 1 − x 2 , for all real | x | < 1 d d x arcoth ⁡ x = 1 1 − x 2 , for all real | x | > 1 d d x arsech ⁡ x = − 1 x 1 − x 2 , for all real x ∈ ( 0 , 1 ) d d x arcsch ⁡ x = − 1 | x | 1 + x 2 , for all real x , except 0 {\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arsinh} x&{}={\frac {1}{\sqrt {x^{2}+1}}},{\text{ for all real }}x\\{\frac {d}{dx}}\operatorname {arcosh} x&{}={\frac {1}{\sqrt {x^{2}-1}}},{\text{ for all real }}x>1\\{\frac {d}{dx}}\operatorname {artanh} x&{}={\frac {1}{1-x^{2}}},{\text{ for all real }}|x|<1\\{\frac {d}{dx}}\operatorname {arcoth} x&{}={\frac {1}{1-x^{2}}},{\text{ for all real }}|x|>1\\{\frac {d}{dx}}\operatorname {arsech} x&{}={\frac {-1}{x{\sqrt {1-x^{2}}}}},{\text{ for all real }}x\in (0,1)\\{\frac {d}{dx}}\operatorname {arcsch} x&{}={\frac {-1}{|x|{\sqrt {1+x^{2}}}}},{\text{ for all real }}x{\text{, except }}0\\\end{aligned}}} These formulas can be derived in terms of the derivatives of hyperbolic functions. For example, if x = sinh ⁡ θ {\displaystyle x=\sinh \theta } , then d x / d θ = cosh ⁡ θ = 1 + x 2 , {\textstyle dx/d\theta =\cosh \theta ={\sqrt {1+x^{2}}},} so d d x arsinh ⁡ ( x ) = d θ d x = 1 d x / d θ = 1 1 + x 2 . 
{\displaystyle {\frac {d}{dx}}\operatorname {arsinh} (x)={\frac {d\theta }{dx}}={\frac {1}{dx/d\theta }}={\frac {1}{\sqrt {1+x^{2}}}}.} == Series expansions == Expansion series can be obtained for the above functions: arsinh ⁡ x = x − ( 1 2 ) x 3 3 + ( 1 ⋅ 3 2 ⋅ 4 ) x 5 5 − ( 1 ⋅ 3 ⋅ 5 2 ⋅ 4 ⋅ 6 ) x 7 7 ± ⋯ = ∑ n = 0 ∞ ( ( − 1 ) n ( 2 n ) ! 2 2 n ( n ! ) 2 ) x 2 n + 1 2 n + 1 , | x | < 1 {\displaystyle {\begin{aligned}\operatorname {arsinh} x&=x-\left({\frac {1}{2}}\right){\frac {x^{3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {x^{5}}{5}}-\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {x^{7}}{7}}\pm \cdots \\&=\sum _{n=0}^{\infty }\left({\frac {(-1)^{n}(2n)!}{2^{2n}(n!)^{2}}}\right){\frac {x^{2n+1}}{2n+1}},\qquad \left|x\right|<1\end{aligned}}} arcosh ⁡ x = ln ⁡ ( 2 x ) − ( ( 1 2 ) x − 2 2 + ( 1 ⋅ 3 2 ⋅ 4 ) x − 4 4 + ( 1 ⋅ 3 ⋅ 5 2 ⋅ 4 ⋅ 6 ) x − 6 6 + ⋯ ) = ln ⁡ ( 2 x ) − ∑ n = 1 ∞ ( ( 2 n ) ! 2 2 n ( n ! ) 2 ) x − 2 n 2 n , | x | > 1 {\displaystyle {\begin{aligned}\operatorname {arcosh} x&=\ln(2x)-\left(\left({\frac {1}{2}}\right){\frac {x^{-2}}{2}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {x^{-4}}{4}}+\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {x^{-6}}{6}}+\cdots \right)\\&=\ln(2x)-\sum _{n=1}^{\infty }\left({\frac {(2n)!}{2^{2n}(n!)^{2}}}\right){\frac {x^{-2n}}{2n}},\qquad \left|x\right|>1\end{aligned}}} artanh ⁡ x = x + x 3 3 + x 5 5 + x 7 7 + ⋯ = ∑ n = 0 ∞ x 2 n + 1 2 n + 1 , | x | < 1 {\displaystyle {\begin{aligned}\operatorname {artanh} x&=x+{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}+{\frac {x^{7}}{7}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{2n+1}},\qquad \left|x\right|<1\end{aligned}}} arcsch ⁡ x = arsinh ⁡ 1 x = x − 1 − ( 1 2 ) x − 3 3 + ( 1 ⋅ 3 2 ⋅ 4 ) x − 5 5 − ( 1 ⋅ 3 ⋅ 5 2 ⋅ 4 ⋅ 6 ) x − 7 7 ± ⋯ = ∑ n = 0 ∞ ( ( − 1 ) n ( 2 n ) ! 2 2 n ( n ! 
) 2 ) x − ( 2 n + 1 ) 2 n + 1 , | x | > 1 {\displaystyle {\begin{aligned}\operatorname {arcsch} x=\operatorname {arsinh} {\frac {1}{x}}&=x^{-1}-\left({\frac {1}{2}}\right){\frac {x^{-3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {x^{-5}}{5}}-\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {x^{-7}}{7}}\pm \cdots \\&=\sum _{n=0}^{\infty }\left({\frac {(-1)^{n}(2n)!}{2^{2n}(n!)^{2}}}\right){\frac {x^{-(2n+1)}}{2n+1}},\qquad \left|x\right|>1\end{aligned}}} arsech ⁡ x = arcosh ⁡ 1 x = ln ⁡ 2 x − ( ( 1 2 ) x 2 2 + ( 1 ⋅ 3 2 ⋅ 4 ) x 4 4 + ( 1 ⋅ 3 ⋅ 5 2 ⋅ 4 ⋅ 6 ) x 6 6 + ⋯ ) = ln ⁡ 2 x − ∑ n = 1 ∞ ( ( 2 n ) ! 2 2 n ( n ! ) 2 ) x 2 n 2 n , 0 < x ≤ 1 {\displaystyle {\begin{aligned}\operatorname {arsech} x=\operatorname {arcosh} {\frac {1}{x}}&=\ln {\frac {2}{x}}-\left(\left({\frac {1}{2}}\right){\frac {x^{2}}{2}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {x^{4}}{4}}+\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {x^{6}}{6}}+\cdots \right)\\&=\ln {\frac {2}{x}}-\sum _{n=1}^{\infty }\left({\frac {(2n)!}{2^{2n}(n!)^{2}}}\right){\frac {x^{2n}}{2n}},\qquad 0<x\leq 1\end{aligned}}} arcoth ⁡ x = artanh ⁡ 1 x = x − 1 + x − 3 3 + x − 5 5 + x − 7 7 + ⋯ = ∑ n = 0 ∞ x − ( 2 n + 1 ) 2 n + 1 , | x | > 1 {\displaystyle {\begin{aligned}\operatorname {arcoth} x=\operatorname {artanh} {\frac {1}{x}}&=x^{-1}+{\frac {x^{-3}}{3}}+{\frac {x^{-5}}{5}}+{\frac {x^{-7}}{7}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {x^{-(2n+1)}}{2n+1}},\qquad \left|x\right|>1\end{aligned}}} An asymptotic expansion for arsinh is given by arsinh ⁡ x = ln ⁡ ( 2 x ) + ∑ n = 1 ∞ ( − 1 ) n − 1 ( 2 n − 1 ) ! ! 2 n ( 2 n ) ! ! 
1 x 2 n {\displaystyle \operatorname {arsinh} x=\ln(2x)+\sum \limits _{n=1}^{\infty }{\left({-1}\right)^{n-1}{\frac {\left({2n-1}\right)!!}{2n\left({2n}\right)!!}}}{\frac {1}{x^{2n}}}} == Principal values in the complex plane == As functions of a complex variable, inverse hyperbolic functions are multivalued functions that are analytic except at a finite number of points. For such a function, it is common to define a principal value, which is a single valued analytic function which coincides with one specific branch of the multivalued function, over a domain consisting of the complex plane in which a finite number of arcs (usually half lines or line segments) have been removed. These arcs are called branch cuts. The principal value of the multifunction is chosen at a particular point and values elsewhere in the domain of definition are defined to agree with those found by analytic continuation. For example, for the square root, the principal value is defined as the square root that has a positive real part. This defines a single valued analytic function, which is defined everywhere, except for non-positive real values of the variables (where the two square roots have a zero real part). This principal value of the square root function is denoted x {\displaystyle {\sqrt {x}}} in what follows. Similarly, the principal value of the logarithm, denoted Log {\displaystyle \operatorname {Log} } in what follows, is defined as the value for which the imaginary part has the smallest absolute value. It is defined everywhere except for non-positive real values of the variable, for which two different values of the logarithm reach the minimum. For all inverse hyperbolic functions, the principal value may be defined in terms of principal values of the square root and the logarithm function. 
However, in some cases, the formulas of § Definitions in terms of logarithms do not give a correct principal value, because they yield a domain of definition that is too small and, in one case, non-connected.

=== Principal value of the inverse hyperbolic sine ===

The principal value of the inverse hyperbolic sine is given by {\displaystyle \operatorname {arsinh} z=\operatorname {Log} (z+{\sqrt {z^{2}+1}}\,)\,.} The argument of the square root is a non-positive real number if and only if z belongs to one of the intervals [i, +i∞) and (−i∞, −i] of the imaginary axis. If the argument of the logarithm is real, then it is positive. Thus this formula defines a principal value for arsinh, with branch cuts [i, +i∞) and (−i∞, −i]. This is optimal, as the branch cuts must connect the singular points i and −i to infinity.

=== Principal value of the inverse hyperbolic cosine ===

The formula for the inverse hyperbolic cosine given in § Inverse hyperbolic cosine is not convenient, since, similarly to the principal values of the logarithm and the square root, the principal value of arcosh would not be defined for imaginary z. Thus the square root has to be factorized, leading to {\displaystyle \operatorname {arcosh} z=\operatorname {Log} (z+{\sqrt {z+1}}{\sqrt {z-1}}\,)\,.} The principal values of the square roots are both defined, except if z belongs to the real interval (−∞, 1]. If the argument of the logarithm is real, then z is real and has the same sign. Thus, the above formula defines a principal value of arcosh outside the real interval (−∞, 1], which is thus the unique branch cut.
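Away from the branch cuts, these logarithmic formulas should agree with the principal values implemented by standard libraries. A small sketch in Python, assuming `cmath`'s `sqrt` and `log` use the usual principal branches (cut along the negative reals):

```python
import cmath

# Principal values from the logarithmic formulas above, using the
# principal branches of cmath.sqrt and cmath.log.
def arsinh(z):
    return cmath.log(z + cmath.sqrt(z * z + 1))

def arcosh(z):
    # Factorized square root, as in the text, so imaginary z is handled.
    return cmath.log(z + cmath.sqrt(z + 1) * cmath.sqrt(z - 1))

# Off the branch cuts these agree with the library implementations.
for z in (1 + 2j, -0.5 + 0.25j, 3 - 1j):
    assert abs(arsinh(z) - cmath.asinh(z)) < 1e-12
    assert abs(arcosh(z) - cmath.acosh(z)) < 1e-12
```

The test points are chosen off the cuts [i, +i∞) ∪ (−i∞, −i] for arsinh and (−∞, 1] for arcosh; on the cuts themselves, values from the two sides differ and floating-point signed zeros decide which side a library returns.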
=== Principal values of the inverse hyperbolic tangent and cotangent ===

The formulas given in § Definitions in terms of logarithms suggest {\displaystyle {\begin{aligned}\operatorname {artanh} z&={\frac {1}{2}}\operatorname {Log} \left({\frac {1+z}{1-z}}\right)\\\operatorname {arcoth} z&={\frac {1}{2}}\operatorname {Log} \left({\frac {z+1}{z-1}}\right)\end{aligned}}} for the definition of the principal values of the inverse hyperbolic tangent and cotangent. In these formulas, the argument of the logarithm is real if and only if z is real. For artanh, this argument is in the real interval (−∞, 0] if z belongs either to (−∞, −1] or to [1, ∞). For arcoth, the argument of the logarithm is in (−∞, 0] if and only if z belongs to the real interval [−1, 1]. Therefore, these formulas define convenient principal values, for which the branch cuts are (−∞, −1] and [1, ∞) for the inverse hyperbolic tangent, and [−1, 1] for the inverse hyperbolic cotangent. For better numerical evaluation near the branch cuts, some authors use the following definitions of the principal values, although the second one introduces a removable singularity at z = 0. The two definitions of artanh differ for real values of z with z > 1. Those of arcoth differ for real values of z with z ∈ [0, 1).
{\displaystyle {\begin{aligned}\operatorname {artanh} z&={\tfrac {1}{2}}\operatorname {Log} \left({1+z}\right)-{\tfrac {1}{2}}\operatorname {Log} \left({1-z}\right)\\\operatorname {arcoth} z&={\tfrac {1}{2}}\operatorname {Log} \left({1+{\frac {1}{z}}}\right)-{\tfrac {1}{2}}\operatorname {Log} \left({1-{\frac {1}{z}}}\right)\end{aligned}}}

=== Principal value of the inverse hyperbolic cosecant ===

For the inverse hyperbolic cosecant, the principal value is defined as {\displaystyle \operatorname {arcsch} z=\operatorname {Log} \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z^{2}}}+1}}\,\right)} . It is defined except when the arguments of the logarithm and the square root are non-positive real numbers. The principal value of the square root is thus defined outside the interval [−i, i] of the imaginary line. If the argument of the logarithm is real, then z is a non-zero real number, and this implies that the argument of the logarithm is positive. Thus, the principal value is defined by the above formula outside the branch cut, consisting of the interval [−i, i] of the imaginary line. (At z = 0, there is a singular point that is included in the branch cut.)

=== Principal value of the inverse hyperbolic secant ===

Here, as in the case of the inverse hyperbolic cosine, we have to factorize the square root. This gives the principal value {\displaystyle \operatorname {arsech} z=\operatorname {Log} \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z}}+1}}\,{\sqrt {{\frac {1}{z}}-1}}\right).} If the argument of a square root is real, then z is real, and it follows that both principal values of square roots are defined, except if z is real and belongs to one of the intervals (−∞, 0] and [1, +∞). If the argument of the logarithm is real and negative, then z is also real and negative.
It follows that the principal value of arsech is well defined by the above formula outside two branch cuts, the real intervals (−∞, 0] and [1, +∞). For z = 0, there is a singular point that is included in one of the branch cuts.

=== Graphical representation ===

In the following graphical representation of the principal values of the inverse hyperbolic functions, the branch cuts appear as discontinuities of the color. The fact that the whole branch cuts appear as discontinuities shows that these principal values may not be extended into analytic functions defined over larger domains. In other words, the above defined branch cuts are minimal.

== See also ==

Complex logarithm
Hyperbolic secant distribution
ISO 80000-2
List of integrals of inverse hyperbolic functions

== References ==

== Bibliography ==

Herbert Busemann and Paul J. Kelly (1953) Projective Geometry and Projective Metrics, page 207, Academic Press.

== External links ==

"Inverse hyperbolic functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Inverse_hyperbolic_functions
Algebra is a branch of mathematics that deals with abstract systems, known as algebraic structures, and the manipulation of expressions within those systems. It is a generalization of arithmetic that introduces variables and algebraic operations other than the standard arithmetic operations, such as addition and multiplication. Elementary algebra is the main form of algebra taught in schools. It examines mathematical statements using variables for unspecified values and seeks to determine for which values the statements are true. To do so, it uses different methods of transforming equations to isolate variables. Linear algebra is a closely related field that investigates linear equations and combinations of them called systems of linear equations. It provides methods to find the values that solve all equations in the system at the same time, and to study the set of these solutions. Abstract algebra studies algebraic structures, which consist of a set of mathematical objects together with one or several operations defined on that set. It is a generalization of elementary and linear algebra since it allows mathematical objects other than numbers and non-arithmetic operations. It distinguishes between different types of algebraic structures, such as groups, rings, and fields, based on the number of operations they use and the laws they follow, called axioms. Universal algebra and category theory provide general frameworks to investigate abstract patterns that characterize different classes of algebraic structures. Algebraic methods were first studied in the ancient period to solve specific problems in fields like geometry. Subsequent mathematicians examined general techniques to solve equations independent of their specific applications. They described equations and their solutions using words and abbreviations until the 16th and 17th centuries when a rigorous symbolic formalism was developed. 
In the mid-19th century, the scope of algebra broadened beyond a theory of equations to cover diverse types of algebraic operations and structures. Algebra is relevant to many branches of mathematics, such as geometry, topology, number theory, and calculus, and other fields of inquiry, like logic and the empirical sciences. == Definition and etymology == Algebra is the branch of mathematics that studies algebraic structures and the operations they use. An algebraic structure is a non-empty set of mathematical objects, such as the integers, together with algebraic operations defined on that set, like addition and multiplication. Algebra explores the laws, general characteristics, and types of algebraic structures. Within certain algebraic structures, it examines the use of variables in equations and how to manipulate these equations. Algebra is often understood as a generalization of arithmetic. Arithmetic studies operations like addition, subtraction, multiplication, and division, in a particular domain of numbers, such as the real numbers. Elementary algebra constitutes the first level of abstraction. Like arithmetic, it restricts itself to specific types of numbers and operations. It generalizes these operations by allowing indefinite quantities in the form of variables in addition to numbers. A higher level of abstraction is found in abstract algebra, which is not limited to a particular domain and examines algebraic structures such as groups and rings. It extends beyond typical arithmetic operations by also covering other types of operations. Universal algebra is still more abstract in that it is not interested in specific algebraic structures but investigates the characteristics of algebraic structures in general. The term "algebra" is sometimes used in a more narrow sense to refer only to elementary algebra or only to abstract algebra. 
When used as a countable noun, an algebra is a specific type of algebraic structure that involves a vector space equipped with a certain type of binary operation. Depending on the context, "algebra" can also refer to other algebraic structures, like a Lie algebra or an associative algebra. The word algebra comes from the Arabic term الجبر (al-jabr), which originally referred to the surgical treatment of bonesetting. In the 9th century, the term received a mathematical meaning when the Persian mathematician Muhammad ibn Musa al-Khwarizmi employed it to describe a method of solving equations and used it in the title of a treatise on algebra, al-Kitāb al-Mukhtaṣar fī Ḥisāb al-Jabr wal-Muqābalah [The Compendious Book on Calculation by Completion and Balancing] which was translated into Latin as Liber Algebrae et Almucabola. The word entered the English language in the 16th century from Italian, Spanish, and medieval Latin. Initially, its meaning was restricted to the theory of equations, that is, to the art of manipulating polynomial equations in view of solving them. This changed in the 19th century when the scope of algebra broadened to cover the study of diverse types of algebraic operations and structures together with their underlying axioms, the laws they follow. == Major branches == === Elementary algebra === Elementary algebra, also called school algebra, college algebra, and classical algebra, is the oldest and most basic form of algebra. It is a generalization of arithmetic that relies on variables and examines how mathematical statements may be transformed. Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithm. For example, the operation of addition combines two numbers, called the addends, into a third number, called the sum, as in 2 + 5 = 7 {\displaystyle 2+5=7} . 
Elementary algebra relies on the same operations while allowing variables in addition to regular numbers. Variables are symbols for unspecified or unknown quantities. They make it possible to state relationships for which one does not know the exact values and to express general laws that are true, independent of which numbers are used. For example, the equation 2 × 3 = 3 × 2 {\displaystyle 2\times 3=3\times 2} belongs to arithmetic and expresses an equality only for these specific numbers. By replacing the numbers with variables, it is possible to express a general law that applies to any possible combination of numbers, like the commutative property of multiplication, which is expressed in the equation a × b = b × a {\displaystyle a\times b=b\times a} . Algebraic expressions are formed by using arithmetic operations to combine variables and numbers. By convention, the lowercase letters ⁠ x {\displaystyle x} ⁠, ⁠ y {\displaystyle y} ⁠, and z {\displaystyle z} represent variables. In some cases, subscripts are added to distinguish variables, as in ⁠ x 1 {\displaystyle x_{1}} ⁠, ⁠ x 2 {\displaystyle x_{2}} ⁠, and ⁠ x 3 {\displaystyle x_{3}} ⁠. The lowercase letters ⁠ a {\displaystyle a} ⁠, ⁠ b {\displaystyle b} ⁠, and c {\displaystyle c} are usually used for constants and coefficients. The expression 5 x + 3 {\displaystyle 5x+3} is an algebraic expression created by multiplying the number 5 with the variable x {\displaystyle x} and adding the number 3 to the result. Other examples of algebraic expressions are 32 x y z {\displaystyle 32xyz} and 64 x 1 2 + 7 x 2 − c {\displaystyle 64x_{1}{}^{2}+7x_{2}-c} . Some algebraic expressions take the form of statements that relate two expressions to one another. An equation is a statement formed by comparing two expressions, saying that they are equal. This can be expressed using the equals sign (⁠ = {\displaystyle =} ⁠), as in ⁠ 5 x 2 + 6 x = 3 y + 4 {\displaystyle 5x^{2}+6x=3y+4} ⁠. 
Inequations involve a different type of comparison, saying that the two sides are different. This can be expressed using symbols such as the less-than sign (⁠ < {\displaystyle <} ⁠), the greater-than sign (⁠ > {\displaystyle >} ⁠), and the inequality sign (⁠ ≠ {\displaystyle \neq } ⁠). Unlike other expressions, statements can be true or false, and their truth value usually depends on the values of the variables. For example, the statement x 2 = 4 {\displaystyle x^{2}=4} is true if x {\displaystyle x} is either 2 or −2 and false otherwise. Equations with variables can be divided into identity equations and conditional equations. Identity equations are true for all values that can be assigned to the variables, such as the equation ⁠ 2 x + 5 x = 7 x {\displaystyle 2x+5x=7x} ⁠. Conditional equations are only true for some values. For example, the equation x + 4 = 9 {\displaystyle x+4=9} is only true if x {\displaystyle x} is 5. The main goal of elementary algebra is to determine the values for which a statement is true. This can be achieved by transforming and manipulating statements according to certain rules. A key principle guiding this process is that whatever operation is applied to one side of an equation also needs to be done to the other side. For example, if one subtracts 5 from the left side of an equation one also needs to subtract 5 from the right side to balance both sides. The goal of these steps is usually to isolate the variable one is interested in on one side, a process known as solving the equation for that variable. For example, the equation x − 7 = 4 {\displaystyle x-7=4} can be solved for x {\displaystyle x} by adding 7 to both sides, which isolates x {\displaystyle x} on the left side and results in the equation ⁠ x = 11 {\displaystyle x=11} ⁠. There are many other techniques used to solve equations. Simplification is employed to replace a complicated expression with an equivalent simpler one. 
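The balancing procedure described above, applying the same operation to both sides until the variable is isolated, can be sketched in code. A minimal sketch for equations of the form a·x + b = c, using exact rational arithmetic; the helper name `solve_linear` is an illustrative choice, not from the text:

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c by the balancing steps described above:
    subtract b from both sides, then divide both sides by a."""
    if a == 0:
        raise ValueError("not a conditional linear equation in x")
    rhs = Fraction(c) - Fraction(b)   # a*x = c - b
    return rhs / Fraction(a)          # x = (c - b) / a

# x - 7 = 4  ->  x = 11, matching the worked example.
print(solve_linear(1, -7, 4))  # -> 11
```

Using `Fraction` rather than floats keeps the result exact, so a solution like x = 1/3 is not silently rounded.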
For example, the expression 7 x − 3 x {\displaystyle 7x-3x} can be replaced with the expression 4 x {\displaystyle 4x} since 7 x − 3 x = ( 7 − 3 ) x = 4 x {\displaystyle 7x-3x=(7-3)x=4x} by the distributive property. For statements with several variables, substitution is a common technique to replace one variable with an equivalent expression that does not use this variable. For example, if one knows that y = 3 x {\displaystyle y=3x} then one can simplify the expression 7 x y {\displaystyle 7xy} to arrive at ⁠ 21 x 2 {\displaystyle 21x^{2}} ⁠. In a similar way, if one knows the value of one variable one may be able to use it to determine the value of other variables. Algebraic equations can be interpreted geometrically to describe spatial figures in the form of a graph. To do so, the different variables in the equation are understood as coordinates and the values that solve the equation are interpreted as points of a graph. For example, if x {\displaystyle x} is set to zero in the equation ⁠ y = 0.5 x − 1 {\displaystyle y=0.5x-1} ⁠, then y {\displaystyle y} must be −1 for the equation to be true. This means that the ( x , y ) {\displaystyle (x,y)} -pair ( 0 , − 1 ) {\displaystyle (0,-1)} is part of the graph of the equation. The ( x , y ) {\displaystyle (x,y)} -pair ⁠ ( 0 , 7 ) {\displaystyle (0,7)} ⁠, by contrast, does not solve the equation and is therefore not part of the graph. The graph encompasses the totality of ( x , y ) {\displaystyle (x,y)} -pairs that solve the equation. ==== Polynomials ==== A polynomial is an expression consisting of one or more terms that are added or subtracted from each other, like ⁠ x 4 + 3 x y 2 + 5 x 3 − 1 {\displaystyle x^{4}+3xy^{2}+5x^{3}-1} ⁠. Each term is either a constant, a variable, or a product of a constant and variables. Each variable can be raised to a positive integer power. A monomial is a polynomial with one term while two- and three-term polynomials are called binomials and trinomials. 
The degree of a polynomial is the maximal value (among its terms) of the sum of the exponents of the variables (4 in the above example). Polynomials of degree one are called linear polynomials. Linear algebra studies systems of linear polynomials. A polynomial is said to be univariate or multivariate, depending on whether it uses one or more variables. Factorization is a method used to simplify polynomials, making it easier to analyze them and determine the values for which they evaluate to zero. Factorization consists of rewriting a polynomial as a product of several factors. For example, the polynomial x 2 − 3 x − 10 {\displaystyle x^{2}-3x-10} can be factorized as ⁠ ( x + 2 ) ( x − 5 ) {\displaystyle (x+2)(x-5)} ⁠. The polynomial as a whole is zero if and only if one of its factors is zero, i.e., if x {\displaystyle x} is either −2 or 5. Before the 19th century, much of algebra was devoted to polynomial equations, that is, equations obtained by equating a polynomial to zero. The first attempts at solving polynomial equations sought to express the solutions in terms of nth roots. The solution of a second-degree polynomial equation of the form {\displaystyle ax^{2}+bx+c=0} is given by the quadratic formula {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac\ }}}{2a}}.} Solutions for the degrees 3 and 4 are given by the cubic and quartic formulas. There are no general solutions for higher degrees, as proven in the 19th century by the Abel–Ruffini theorem. Even when general solutions do not exist, approximate solutions can be found by numerical tools like the Newton–Raphson method. The fundamental theorem of algebra asserts that every univariate polynomial equation of positive degree with real or complex coefficients has at least one complex solution. Consequently, every polynomial of a positive degree can be factorized into linear polynomials.
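The quadratic formula and the factorization example above can be tied together in a short sketch: the roots produced by the formula for x² − 3x − 10 = 0 are exactly the zeros −2 and 5 read off from the factorization (x + 2)(x − 5).

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula;
    cmath.sqrt also handles a negative discriminant."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 - 3x - 10 factors as (x + 2)(x - 5), so the roots are 5 and -2.
r1, r2 = quadratic_roots(1, -3, -10)
print(r1, r2)
```

Using `cmath` rather than `math` means the same function returns the complex roots promised by the fundamental theorem of algebra, e.g. for x² + 1 = 0.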
The fundamental theorem of algebra was proved at the beginning of the 19th century, but this does not close the problem, since the theorem does not provide any way for computing the solutions.

=== Linear algebra ===

Linear algebra starts with the study of systems of linear equations. An equation is linear if it can be expressed in the form {\displaystyle a_{1}x_{1}+a_{2}x_{2}+...+a_{n}x_{n}=b} , where ⁠ a 1 {\displaystyle a_{1}} ⁠, ⁠ a 2 {\displaystyle a_{2}} ⁠, ..., a n {\displaystyle a_{n}} and b {\displaystyle b} are constants. Examples are {\displaystyle x_{1}-7x_{2}+3x_{3}=0} and {\displaystyle \textstyle {\frac {1}{4}}x-y=4} . A system of linear equations is a set of linear equations for which one is interested in common solutions. Matrices are rectangular arrays of values that were originally introduced to provide a compact and synthetic notation for systems of linear equations. For example, the system of equations {\displaystyle {\begin{aligned}9x_{1}+3x_{2}-13x_{3}&=0\\2.3x_{1}+7x_{3}&=9\\-5x_{1}-17x_{2}&=-3\end{aligned}}} can be written as {\displaystyle AX=B,} where ⁠ A {\displaystyle A} ⁠, X {\displaystyle X} and B {\displaystyle B} are the matrices {\displaystyle A={\begin{bmatrix}9&3&-13\\2.3&0&7\\-5&-17&0\end{bmatrix}},\quad X={\begin{bmatrix}x_{1}\\x_{2}\\x_{3}\end{bmatrix}},\quad B={\begin{bmatrix}0\\9\\-3\end{bmatrix}}.} Under some conditions on the number of rows and columns, matrices can be added, multiplied, and sometimes inverted. All methods for solving linear systems may be expressed as matrix manipulations using these operations.
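One such method, Gaussian elimination, can be sketched in a few lines of pure Python and applied to the 3 × 3 system displayed above. Exact `Fraction` arithmetic avoids rounding; the helper name `solve_system` is an illustrative choice:

```python
from fractions import Fraction

def solve_system(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting,
    using exact rational arithmetic."""
    n = len(A)
    # Build the augmented matrix [A | b] with Fraction entries.
    M = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # Pivot: pick a remaining row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot][col] == 0:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col] / M[col][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# The system from the text: 9x1 + 3x2 - 13x3 = 0, 2.3x1 + 7x3 = 9, -5x1 - 17x2 = -3.
A = [[9, 3, -13], [Fraction("2.3"), 0, 7], [-5, -17, 0]]
b = [0, 9, -3]
x = solve_system(A, b)
# Verify by substituting the solution back into each equation.
assert all(sum(a * xi for a, xi in zip(row, x)) == bi for row, bi in zip(A, b))
```

This is the elimination route; computing A⁻¹ explicitly, as described next in the text, gives the same solution as X = A⁻¹B but costs more work for a single right-hand side.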
For example, solving the above system consists of computing an inverse matrix {\displaystyle A^{-1}} such that {\displaystyle A^{-1}A=I,} where I {\displaystyle I} is the identity matrix. Then, multiplying both sides of the above matrix equation on the left by {\displaystyle A^{-1},} one gets the solution of the system of linear equations as {\displaystyle X=A^{-1}B.} Methods of solving systems of linear equations range from the introductory, like substitution and elimination, to more advanced techniques using matrices, such as Cramer's rule, Gaussian elimination, and LU decomposition. Some systems of equations are inconsistent, meaning that no solutions exist because the equations contradict each other. Consistent systems have either one unique solution or an infinite number of solutions. The study of vector spaces and linear maps forms a large part of linear algebra. A vector space is an algebraic structure formed by a set with an addition that makes it an abelian group and a scalar multiplication that is compatible with addition (see vector space for details). A linear map is a function between vector spaces that is compatible with addition and scalar multiplication. In the case of finite-dimensional vector spaces, vectors and linear maps can be represented by matrices. It follows that the theories of matrices and finite-dimensional vector spaces are essentially the same. In particular, vector spaces provide a third way for expressing and manipulating systems of linear equations. From this perspective, a matrix is a representation of a linear map: if one chooses a particular basis to describe the vectors being transformed, then the entries in the matrix give the results of applying the linear map to the basis vectors. Systems of equations can be interpreted as geometric figures. For systems with two variables, each equation represents a line in two-dimensional space.
The point where the two lines intersect is the solution of the full system because this is the only point that solves both the first and the second equation. For inconsistent systems, the two lines run parallel, meaning that there is no solution since they never intersect. If two equations are not independent then they describe the same line, meaning that every solution of one equation is also a solution of the other equation. These relations make it possible to seek solutions graphically by plotting the equations and determining where they intersect. The same principles also apply to systems of equations with more variables, with the difference being that the equations do not describe lines but higher dimensional figures. For instance, equations with three variables correspond to planes in three-dimensional space, and the points where all planes intersect solve the system of equations. === Abstract algebra === Abstract algebra, also called modern algebra, is the study of algebraic structures. An algebraic structure is a framework for understanding operations on mathematical objects, like the addition of numbers. While elementary algebra and linear algebra work within the confines of particular algebraic structures, abstract algebra takes a more general approach that compares how algebraic structures differ from each other and what types of algebraic structures there are, such as groups, rings, and fields. The key difference between these types of algebraic structures lies in the number of operations they use and the laws they obey. In mathematics education, abstract algebra refers to an advanced undergraduate course that mathematics majors take after completing courses in linear algebra. On a formal level, an algebraic structure is a set of mathematical objects, called the underlying set, together with one or several operations. 
Abstract algebra is primarily interested in binary operations, which take any two objects from the underlying set as inputs and map them to another object from this set as output. For example, the algebraic structure ⟨ N , + ⟩ {\displaystyle \langle \mathbb {N} ,+\rangle } has the natural numbers (⁠ N {\displaystyle \mathbb {N} } ⁠) as the underlying set and addition (⁠ + {\displaystyle +} ⁠) as its binary operation. The underlying set can contain mathematical objects other than numbers, and the operations are not restricted to regular arithmetic operations. For instance, the underlying set of the symmetry group of a geometric object is made up of geometric transformations, such as rotations, under which the object remains unchanged. Its binary operation is function composition, which takes two transformations as input and has the transformation resulting from applying the first transformation followed by the second as its output. ==== Group theory ==== Abstract algebra classifies algebraic structures based on the laws or axioms that its operations obey and the number of operations it uses. One of the most basic types is a group, which has one operation and requires that this operation is associative and has an identity element and inverse elements. An operation is associative if the order of several applications does not matter, i.e., if ( a ∘ b ) ∘ c {\displaystyle (a\circ b)\circ c} is the same as a ∘ ( b ∘ c ) {\displaystyle a\circ (b\circ c)} for all elements. An operation has an identity element or a neutral element if one element e exists that does not change the value of any other element, i.e., if ⁠ a ∘ e = e ∘ a = a {\displaystyle a\circ e=e\circ a=a} ⁠. An operation has inverse elements if for any element a {\displaystyle a} there exists a reciprocal element a − 1 {\displaystyle a^{-1}} that undoes ⁠ a {\displaystyle a} ⁠. 
If an element operates on its inverse then the result is the neutral element e, expressed formally as ⁠ a ∘ a − 1 = a − 1 ∘ a = e {\displaystyle a\circ a^{-1}=a^{-1}\circ a=e} ⁠. Every algebraic structure that fulfills these requirements is a group. For example, ⟨ Z , + ⟩ {\displaystyle \langle \mathbb {Z} ,+\rangle } is a group formed by the set of integers together with the operation of addition. The neutral element is 0 and the inverse element of any number a {\displaystyle a} is − a {\displaystyle -a} . The natural numbers with addition, by contrast, do not form a group since they contain no negative numbers and therefore lack inverse elements. Group theory examines the nature of groups, with basic theorems such as the fundamental theorem of finite abelian groups and the Feit–Thompson theorem. The latter was a key early step in one of the most important mathematical achievements of the 20th century: the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups. ==== Ring theory and field theory ==== A ring is an algebraic structure with two operations that work similarly to the addition and multiplication of numbers and are named and generally denoted similarly. A ring is a commutative group under addition: the addition of the ring is associative, commutative, and has an identity element and inverse elements. The multiplication is associative and distributive with respect to addition; that is, a ( b + c ) = a b + a c {\displaystyle a(b+c)=ab+ac} and ( b + c ) a = b a + c a . {\displaystyle (b+c)a=ba+ca.} Moreover, multiplication has an identity element generally denoted as 1. Multiplication need not be commutative; if it is commutative, one has a commutative ring. The ring of integers (⁠ Z {\displaystyle \mathbb {Z} } ⁠) is one of the simplest commutative rings.
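The symmetry-group example and the group axioms above can be combined into one small, exhaustive check: the four rotations of a square, encoded as permutations of its corners, form a group under function composition. A sketch (the tuple encoding is an illustrative choice):

```python
from itertools import product

# The four rotations of a square, written as permutations of its corners 0..3;
# entry i of a tuple says which corner ends up in position i.
rotations = [(0, 1, 2, 3), (3, 0, 1, 2), (2, 3, 0, 1), (1, 2, 3, 0)]
identity = (0, 1, 2, 3)

def compose(f, g):
    """Function composition of permutations: apply g first, then f."""
    return tuple(f[g[i]] for i in range(4))

# Closure: composing two rotations always yields a rotation.
assert all(compose(f, g) in rotations for f, g in product(rotations, repeat=2))
# Associativity: (f∘g)∘h == f∘(g∘h) for all triples.
assert all(compose(compose(f, g), h) == compose(f, compose(g, h))
           for f, g, h in product(rotations, repeat=3))
# Identity element: the "do nothing" rotation changes no element.
assert all(compose(f, identity) == f == compose(identity, f) for f in rotations)
# Inverse elements: every rotation can be undone by another rotation.
assert all(any(compose(f, g) == identity for g in rotations) for f in rotations)
```

Because the group is finite, every axiom can be verified by brute force; composing two quarter turns, for instance, gives the half turn.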
A field is a commutative ring such that ⁠ 1 ≠ 0 {\displaystyle 1\neq 0} ⁠ and each nonzero element has a multiplicative inverse. The ring of integers does not form a field because it lacks multiplicative inverses. For example, the multiplicative inverse of 7 {\displaystyle 7} is ⁠ 1 7 {\displaystyle {\tfrac {1}{7}}} ⁠, which is not an integer. The rational numbers, the real numbers, and the complex numbers each form a field with the operations of addition and multiplication. Ring theory is the study of rings, exploring concepts such as subrings, quotient rings, polynomial rings, and ideals as well as theorems such as Hilbert's basis theorem. Field theory is concerned with fields, examining field extensions, algebraic closures, and finite fields. Galois theory explores the relation between field theory and group theory, relying on the fundamental theorem of Galois theory. ==== Theories of interrelations among structures ==== Besides groups, rings, and fields, there are many other algebraic structures studied by algebra. They include magmas, semigroups, monoids, abelian groups, commutative rings, modules, lattices, vector spaces, algebras over a field, and associative and non-associative algebras. They differ from each other regarding the types of objects they describe and the requirements that their operations fulfill. Many are related to each other in that a basic structure can be turned into a more specialized structure by adding constraints. For example, a magma becomes a semigroup if its operation is associative. Homomorphisms are tools to examine structural features by comparing two algebraic structures. A homomorphism is a function from the underlying set of one algebraic structure to the underlying set of another algebraic structure that preserves certain structural characteristics. 
If the two algebraic structures use binary operations and have the form ⟨ A , ∘ ⟩ {\displaystyle \langle A,\circ \rangle } and ⟨ B , ⋆ ⟩ {\displaystyle \langle B,\star \rangle } then the function h : A → B {\displaystyle h:A\to B} is a homomorphism if it fulfills the following requirement: ⁠ h ( x ∘ y ) = h ( x ) ⋆ h ( y ) {\displaystyle h(x\circ y)=h(x)\star h(y)} ⁠. The existence of a homomorphism reveals that the operation ⋆ {\displaystyle \star } in the second algebraic structure plays the same role as the operation ∘ {\displaystyle \circ } does in the first algebraic structure. Isomorphisms are a special type of homomorphism that indicates a high degree of similarity between two algebraic structures. An isomorphism is a bijective homomorphism, meaning that it establishes a one-to-one relationship between the elements of the two algebraic structures. This implies that every element of the first algebraic structure is mapped to one unique element in the second structure without any unmapped elements in the second structure. Another tool of comparison is the relation between an algebraic structure and its subalgebra. The algebraic structure and its subalgebra use the same operations, which follow the same axioms. The only difference is that the underlying set of the subalgebra is a subset of the underlying set of the algebraic structure. All operations in the subalgebra are required to be closed in its underlying set, meaning that they only produce elements that belong to this set. For example, the set of even integers together with addition is a subalgebra of the full set of integers together with addition. This is the case because the sum of two even numbers is again an even number. But the set of odd integers together with addition is not a subalgebra because it is not closed: adding two odd numbers produces an even number, which is not part of the chosen subset. Universal algebra is the study of algebraic structures in general. 
As part of its general perspective, it is not concerned with the specific elements that make up the underlying sets and considers operations with more than two inputs, such as ternary operations. It provides a framework for investigating what structural features different algebraic structures have in common. One of those structural features concerns the identities that are true in different algebraic structures. In this context, an identity is a universal equation or an equation that is true for all elements of the underlying set. For example, commutativity is a universal equation that states that a ∘ b {\displaystyle a\circ b} is identical to b ∘ a {\displaystyle b\circ a} for all elements. A variety is a class of all algebraic structures that satisfy certain identities. For example, if two algebraic structures satisfy commutativity then they are both part of the corresponding variety. Category theory examines how mathematical objects are related to each other using the concept of categories. A category is a collection of objects together with a collection of morphisms or "arrows" between those objects. These two collections must satisfy certain conditions. For example, morphisms can be joined, or composed: if there exists a morphism from object a {\displaystyle a} to object ⁠ b {\displaystyle b} ⁠, and another morphism from object b {\displaystyle b} to object ⁠ c {\displaystyle c} ⁠, then there must also exist one from object a {\displaystyle a} to object ⁠ c {\displaystyle c} ⁠. Composition of morphisms is required to be associative, and there must be an "identity morphism" for every object. Categories are widely used in contemporary mathematics since they provide a unifying framework to describe and analyze many fundamental mathematical concepts. For example, sets can be described with the category of sets, and any group can be regarded as the morphisms of a category with just one object. 
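The homomorphism condition h(x ∘ y) = h(x) ⋆ h(y) described above can be verified mechanically for a concrete case. The following is a minimal sketch in plain Python; the parity map from the integers under addition to ⟨{0, 1}, addition mod 2⟩ is a standard textbook example, chosen here purely for illustration:

```python
# Parity map h: (Z, +) -> (Z/2, + mod 2), an illustrative homomorphism.
def h(x):
    return x % 2

# Homomorphism condition: h(x + y) must equal h(x) combined with h(y)
# under the second structure's operation (addition modulo 2).
is_hom = all(h(x + y) == (h(x) + h(y)) % 2
             for x in range(-50, 50)
             for y in range(-50, 50))
print(is_hom)        # True

# h is a homomorphism but not an isomorphism: it is not injective,
# since distinct integers can share the same image.
print(h(0) == h(2))  # True
```

The second check illustrates the distinction drawn above: a bijective homomorphism would be an isomorphism, but the parity map collapses infinitely many integers onto two elements.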
== History == The origin of algebra lies in attempts to solve mathematical problems involving arithmetic calculations and unknown quantities. These developments happened in the ancient period in Babylonia, Egypt, Greece, China, and India. One of the earliest documents on algebraic problems is the Rhind Mathematical Papyrus from ancient Egypt, which was written around 1650 BCE. It discusses solutions to linear equations, as expressed in problems like "A quantity; its fourth is added to it. It becomes fifteen. What is the quantity?" Babylonian clay tablets from around the same time explain methods to solve linear and quadratic polynomial equations, such as the method of completing the square. Many of these insights found their way to the ancient Greeks. Starting in the 6th century BCE, their main interest was geometry rather than algebra, but they employed algebraic methods to solve geometric problems. For example, they studied geometric figures while taking their lengths and areas as unknown quantities to be determined, as exemplified in Pythagoras' formulation of the difference of two squares method and later in Euclid's Elements. In the 3rd century CE, Diophantus provided a detailed treatment of how to solve algebraic equations in a series of books called Arithmetica. He was the first to experiment with symbolic notation to express polynomials. Diophantus's work influenced Arab development of algebra with many of his methods reflected in the concepts and techniques used in medieval Arabic algebra. In ancient China, The Nine Chapters on the Mathematical Art, a book composed over the period spanning from the 10th century BCE to the 2nd century CE, explored various techniques for solving algebraic equations, including the use of matrix-like constructs. There is no unanimity of opinion as to whether these early developments are part of algebra or only precursors. 
They offered solutions to algebraic problems but did not conceive them in an abstract and general manner, focusing instead on specific cases and applications. This changed with the Persian mathematician al-Khwarizmi, who published his The Compendious Book on Calculation by Completion and Balancing in 825 CE. It presents the first detailed treatment of general methods that can be used to manipulate linear and quadratic equations by "reducing" and "balancing" both sides. Other influential contributions to algebra came from the Arab mathematician Thābit ibn Qurra also in the 9th century and the Persian mathematician Omar Khayyam in the 11th and 12th centuries. In India, Brahmagupta investigated how to solve quadratic equations and systems of equations with several variables in the 7th century CE. Among his innovations were the use of zero and negative numbers in algebraic equations. The Indian mathematicians Mahāvīra in the 9th century and Bhāskara II in the 12th century further refined Brahmagupta's methods and concepts. In 1247, the Chinese mathematician Qin Jiushao wrote the Mathematical Treatise in Nine Sections, which includes an algorithm for the numerical evaluation of polynomials, including polynomials of higher degrees. The Italian mathematician Fibonacci brought al-Khwarizmi's ideas and techniques to Europe in books including his Liber Abaci. In 1545, the Italian polymath Gerolamo Cardano published his book Ars Magna, which covered many topics in algebra, discussed imaginary numbers, and was the first to present general methods for solving cubic and quartic equations. In the 16th and 17th centuries, the French mathematicians François Viète and René Descartes introduced letters and symbols to denote variables and operations, making it possible to express equations in a concise and abstract manner. Their predecessors had relied on verbal descriptions of problems and solutions.
Some historians see this development as a key turning point in the history of algebra and consider what came before it as the prehistory of algebra because it lacked the abstract nature based on symbolic manipulation. In the 17th and 18th centuries, many attempts were made to find general solutions to polynomials of degree five and higher. All of them failed. At the end of the 18th century, the German mathematician Carl Friedrich Gauss proved the fundamental theorem of algebra, which describes the existence of zeros of polynomials of any degree without providing a general solution. At the beginning of the 19th century, the Italian mathematician Paolo Ruffini and the Norwegian mathematician Niels Henrik Abel were able to show that no general solution exists for polynomials of degree five and higher. In response to and shortly after their findings, the French mathematician Évariste Galois developed what came later to be known as Galois theory, which offered a more in-depth analysis of the solutions of polynomials while also laying the foundation of group theory. Mathematicians soon realized the relevance of group theory to other fields and applied it to disciplines like geometry and number theory. Starting in the mid-19th century, interest in algebra shifted from the study of polynomials associated with elementary algebra towards a more general inquiry into algebraic structures, marking the emergence of abstract algebra. This approach explored the axiomatic basis of arbitrary algebraic operations. The invention of new algebraic systems based on different operations and elements accompanied this development, such as Boolean algebra, vector algebra, and matrix algebra. Influential early developments in abstract algebra were made by the German mathematicians David Hilbert, Ernst Steinitz, and Emmy Noether as well as the Austrian mathematician Emil Artin. 
They researched different forms of algebraic structures and categorized them based on their underlying axioms into types, like groups, rings, and fields. The idea of the even more general approach associated with universal algebra was conceived by the English mathematician Alfred North Whitehead in his 1898 book A Treatise on Universal Algebra. Starting in the 1930s, the American mathematician Garrett Birkhoff expanded these ideas and developed many of the foundational concepts of this field. The invention of universal algebra led to the emergence of various new areas focused on the algebraization of mathematics—that is, the application of algebraic methods to other branches of mathematics. Topological algebra arose in the early 20th century, studying algebraic structures such as topological groups and Lie groups. In the 1940s and 50s, homological algebra emerged, employing algebraic techniques to study homology. Around the same time, category theory was developed and has since played a key role in the foundations of mathematics. Other developments were the formulation of model theory and the study of free algebras. == Applications == The influence of algebra is wide-reaching, both within mathematics and in its applications to other fields. The algebraization of mathematics is the process of applying algebraic methods and principles to other branches of mathematics, such as geometry, topology, number theory, and calculus. It happens by employing symbols in the form of variables to express mathematical insights on a more general level, allowing mathematicians to develop formal models describing how objects interact and relate to each other. One application, found in geometry, is the use of algebraic statements to describe geometric figures. For example, the equation y = 3 x − 7 {\displaystyle y=3x-7} describes a line in two-dimensional space while the equation x 2 + y 2 + z 2 = 1 {\displaystyle x^{2}+y^{2}+z^{2}=1} corresponds to a sphere in three-dimensional space. 
Of special interest to algebraic geometry are algebraic varieties, which are solutions to systems of polynomial equations that can be used to describe more complex geometric figures. Algebraic reasoning can also solve geometric problems. For example, one can determine whether and where the line described by y = x + 1 {\displaystyle y=x+1} intersects with the circle described by x 2 + y 2 = 25 {\displaystyle x^{2}+y^{2}=25} by solving the system of equations made up of these two equations. Topology studies the properties of geometric figures or topological spaces that are preserved under operations of continuous deformation. Algebraic topology relies on algebraic theories such as group theory to classify topological spaces. For example, homotopy groups classify topological spaces based on the existence of loops or holes in them. Number theory is concerned with the properties of and relations between integers. Algebraic number theory applies algebraic methods and principles to this field of inquiry. Examples are the use of algebraic expressions to describe general laws, like Fermat's Last Theorem, and of algebraic structures to analyze the behavior of numbers, such as the ring of integers. The related field of combinatorics uses algebraic techniques to solve problems related to counting, arrangement, and combination of discrete objects. An example in algebraic combinatorics is the application of group theory to analyze graphs and symmetries. The insights of algebra are also relevant to calculus, which uses mathematical expressions to examine rates of change and accumulation. It relies on algebra, for instance, to understand how these expressions can be transformed and what role variables play in them. Algebraic logic employs the methods of algebra to describe and analyze the structures and patterns that underlie logical reasoning, exploring both the relevant mathematical structures themselves and their application to concrete problems of logic. 
It includes the study of Boolean algebra to describe propositional logic as well as the formulation and analysis of algebraic structures corresponding to more complex systems of logic. Algebraic methods are also commonly employed in other areas, like the natural sciences. For example, they are used to express scientific laws and solve equations in physics, chemistry, and biology. Similar applications are found in fields like economics, geography, engineering (including electronics and robotics), and computer science to express relationships, solve problems, and model systems. Linear algebra plays a central role in artificial intelligence and machine learning, for instance, by enabling the efficient processing and analysis of large datasets. Various fields rely on algebraic structures investigated by abstract algebra. For example, physical sciences like crystallography and quantum mechanics make extensive use of group theory, which is also employed to study puzzles such as Sudoku and Rubik's cubes, and origami. Both coding theory and cryptology rely on abstract algebra to solve problems associated with data transmission, like avoiding the effects of noise and ensuring data security. == Education == Algebra education mostly focuses on elementary algebra, which is one of the reasons why elementary algebra is also called school algebra. It is usually not introduced until secondary education since it requires mastery of the fundamentals of arithmetic while posing new cognitive challenges associated with abstract reasoning and generalization. It aims to familiarize students with the formal side of mathematics by helping them understand mathematical symbolism, for example, how variables can be used to represent unknown quantities. An additional difficulty for students lies in the fact that, unlike arithmetic calculations, algebraic expressions are often difficult to solve directly. 
Instead, students need to learn how to transform them according to certain laws, often to determine an unknown quantity. Some tools to introduce students to the abstract side of algebra rely on concrete models and visualizations of equations, including geometric analogies, manipulatives including sticks or cups, and "function machines" representing equations as flow diagrams. One method uses balance scales as a pictorial approach to help students grasp basic problems of algebra. The mass of some objects on the scale is unknown and represents variables. Solving an equation corresponds to adding and removing objects on both sides in such a way that the sides stay in balance until the only object remaining on one side is the object of unknown mass. Word problems are another tool to show how algebra is applied to real-life situations. For example, students may be presented with a situation in which Naomi's brother has twice as many apples as Naomi. Given that both together have twelve apples, students are then asked to find an algebraic equation that describes this situation (⁠ 2 x + x = 12 {\displaystyle 2x+x=12} ⁠) and to determine how many apples Naomi has (⁠ x = 4 {\displaystyle x=4} ⁠). At the university level, mathematics students encounter advanced algebra topics from linear and abstract algebra. Initial undergraduate courses in linear algebra focus on matrices, vector spaces, and linear maps. Upon completing them, students are usually introduced to abstract algebra, where they learn about algebraic structures like groups, rings, and fields, as well as the relations between them. The curriculum typically also covers specific instances of algebraic structures, such as the systems of rational numbers, the real numbers, and the polynomials. == See also == == References == === Notes === === Citations === === Sources === == External links ==
Wikipedia/Algebra
In mathematics, a quintic function is a function of the form g ( x ) = a x 5 + b x 4 + c x 3 + d x 2 + e x + f , {\displaystyle g(x)=ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f,\,} where a, b, c, d, e and f are members of a field, typically the rational numbers, the real numbers or the complex numbers, and a is nonzero. In other words, a quintic function is defined by a polynomial of degree five. Because they have an odd degree, normal quintic functions appear similar to normal cubic functions when graphed, except they may possess one additional local maximum and one additional local minimum. The derivative of a quintic function is a quartic function. Setting g(x) = 0 and assuming a ≠ 0 produces a quintic equation of the form: a x 5 + b x 4 + c x 3 + d x 2 + e x + f = 0. {\displaystyle ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f=0.\,} Solving quintic equations in terms of radicals (nth roots) was a major problem in algebra from the 16th century, when cubic and quartic equations were solved, until the first half of the 19th century, when the impossibility of such a general solution was proved with the Abel–Ruffini theorem. == Finding roots of a quintic equation == Finding the roots (zeros) of a given polynomial has been a prominent mathematical problem. Solving linear, quadratic, cubic and quartic equations in terms of radicals and elementary arithmetic operations on the coefficients can always be done, no matter whether the roots are rational or irrational, real or complex; there are formulas that yield the required solutions. However, there is no algebraic expression (that is, in terms of radicals) for the solutions of general quintic equations over the rationals; this statement is known as the Abel–Ruffini theorem, first asserted in 1799 and completely proven in 1824. This result also holds for equations of higher degree. An example of a quintic whose roots cannot be expressed in terms of radicals is x5 − x + 1 = 0. 
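Although the roots of such a quintic cannot be written in radicals, they are easy to approximate numerically. As a small sketch (assuming NumPy is available), `numpy.roots` computes all five roots of x5 − x + 1 via the eigenvalues of its companion matrix:

```python
import numpy as np

# Coefficients of x^5 - x + 1, highest degree first.
coeffs = [1, 0, 0, 0, -1, 1]
roots = np.roots(coeffs)

for r in sorted(roots, key=lambda w: (w.real, w.imag)):
    # The residual |p(r)| confirms each value is an approximate root.
    print(f"{r:+.6f}   |p(r)| = {abs(np.polyval(coeffs, r)):.1e}")
```

The output shows one real root (near −1.1673) and two conjugate pairs of complex roots, consistent with the shape of an odd-degree polynomial whose local minimum lies above the axis.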
Numerical approximations of the roots of a quintic can be computed with root-finding algorithms for polynomials. Although some quintics may be solved in terms of radicals, the solution is generally too complicated to be used in practice. == Solvable quintics == Some quintic equations can be solved in terms of radicals. These include the quintic equations defined by a polynomial that is reducible, such as x5 − x4 − x + 1 = (x2 + 1)(x + 1)(x − 1)2. For example, it has been shown that x 5 − x − r = 0 {\displaystyle x^{5}-x-r=0} has solutions in radicals if and only if it has an integer solution or r is one of ±15, ±22440, or ±2759640, in which cases the polynomial is reducible. As solving reducible quintic equations reduces immediately to solving polynomials of lower degree, only irreducible quintic equations are considered in the remainder of this section, and the term "quintic" will refer only to irreducible quintics. A solvable quintic is thus an irreducible quintic polynomial whose roots may be expressed in terms of radicals. To characterize solvable quintics, and more generally solvable polynomials of higher degree, Évariste Galois developed techniques which gave rise to group theory and Galois theory. Applying these techniques, Arthur Cayley found a general criterion for determining whether any given quintic is solvable. This criterion is the following.
Given the equation a x 5 + b x 4 + c x 3 + d x 2 + e x + f = 0 , {\displaystyle ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f=0,} the Tschirnhaus transformation x = y − ⁠b/5a⁠, which depresses the quintic (that is, removes the term of degree four), gives the equation y 5 + p y 3 + q y 2 + r y + s = 0 , {\displaystyle y^{5}+py^{3}+qy^{2}+ry+s=0,} where p = 5 a c − 2 b 2 5 a 2 q = 25 a 2 d − 15 a b c + 4 b 3 25 a 3 r = 125 a 3 e − 50 a 2 b d + 15 a b 2 c − 3 b 4 125 a 4 s = 3125 a 4 f − 625 a 3 b e + 125 a 2 b 2 d − 25 a b 3 c + 4 b 5 3125 a 5 {\displaystyle {\begin{aligned}p&={\frac {5ac-2b^{2}}{5a^{2}}}\\[4pt]q&={\frac {25a^{2}d-15abc+4b^{3}}{25a^{3}}}\\[4pt]r&={\frac {125a^{3}e-50a^{2}bd+15ab^{2}c-3b^{4}}{125a^{4}}}\\[4pt]s&={\frac {3125a^{4}f-625a^{3}be+125a^{2}b^{2}d-25ab^{3}c+4b^{5}}{3125a^{5}}}\end{aligned}}} Both quintics are solvable by radicals if and only if either they are factorisable into polynomials of lower degree with rational coefficients or the polynomial P2 − 1024 z Δ, named Cayley's resolvent, has a rational root in z, where P = z 3 − z 2 ( 20 r + 3 p 2 ) − z ( 8 p 2 r − 16 p q 2 − 240 r 2 + 400 s q − 3 p 4 ) − p 6 + 28 p 4 r − 16 p 3 q 2 − 176 p 2 r 2 − 80 p 2 s q + 224 p r q 2 − 64 q 4 + 4000 p s 2 + 320 r 3 − 1600 r s q {\displaystyle {\begin{aligned}P={}&z^{3}-z^{2}(20r+3p^{2})-z(8p^{2}r-16pq^{2}-240r^{2}+400sq-3p^{4})\\[4pt]&-p^{6}+28p^{4}r-16p^{3}q^{2}-176p^{2}r^{2}-80p^{2}sq+224prq^{2}-64q^{4}\\[4pt]&+4000ps^{2}+320r^{3}-1600rsq\end{aligned}}} and Δ = − 128 p 2 r 4 + 3125 s 4 − 72 p 4 q r s + 560 p 2 q r 2 s + 16 p 4 r 3 + 256 r 5 + 108 p 5 s 2 − 1600 q r 3 s + 144 p q 2 r 3 − 900 p 3 r s 2 + 2000 p r 2 s 2 − 3750 p q s 3 + 825 p 2 q 2 s 2 + 2250 q 2 r s 2 + 108 q 5 s − 27 q 4 r 2 − 630 p q 3 r s + 16 p 3 q 3 s − 4 p 3 q 2 r 2 . 
{\displaystyle {\begin{aligned}\Delta ={}&-128p^{2}r^{4}+3125s^{4}-72p^{4}qrs+560p^{2}qr^{2}s+16p^{4}r^{3}+256r^{5}+108p^{5}s^{2}\\[4pt]&-1600qr^{3}s+144pq^{2}r^{3}-900p^{3}rs^{2}+2000pr^{2}s^{2}-3750pqs^{3}+825p^{2}q^{2}s^{2}\\[4pt]&+2250q^{2}rs^{2}+108q^{5}s-27q^{4}r^{2}-630pq^{3}rs+16p^{3}q^{3}s-4p^{3}q^{2}r^{2}.\end{aligned}}} Cayley's result allows us to test if a quintic is solvable. If it is the case, finding its roots is a more difficult problem, which consists of expressing the roots in terms of radicals involving the coefficients of the quintic and the rational root of Cayley's resolvent. In 1888, George Paxton Young described how to solve a solvable quintic equation, without providing an explicit formula; in 2004, Daniel Lazard wrote out a three-page formula. === Quintics in Bring–Jerrard form === There are several parametric representations of solvable quintics of the form x5 + ax + b = 0, called the Bring–Jerrard form. During the second half of the 19th century, John Stuart Glashan, George Paxton Young, and Carl Runge gave such a parameterization: an irreducible quintic with rational coefficients in Bring–Jerrard form is solvable if and only if either a = 0 or it may be written x 5 + 5 μ 4 ( 4 ν + 3 ) ν 2 + 1 x + 4 μ 5 ( 2 ν + 1 ) ( 4 ν + 3 ) ν 2 + 1 = 0 {\displaystyle x^{5}+{\frac {5\mu ^{4}(4\nu +3)}{\nu ^{2}+1}}x+{\frac {4\mu ^{5}(2\nu +1)(4\nu +3)}{\nu ^{2}+1}}=0} where μ and ν are rational. In 1994, Blair Spearman and Kenneth S. Williams gave an alternative, x 5 + 5 e 4 ( 4 c + 3 ) c 2 + 1 x + − 4 e 5 ( 2 c − 11 ) c 2 + 1 = 0. {\displaystyle x^{5}+{\frac {5e^{4}(4c+3)}{c^{2}+1}}x+{\frac {-4e^{5}(2c-11)}{c^{2}+1}}=0.} The relationship between the 1885 and 1994 parameterizations can be seen by defining the expression b = 4 5 ( a + 20 ± 2 ( 20 − a ) ( 5 + a ) ) {\displaystyle b={\frac {4}{5}}\left(a+20\pm 2{\sqrt {(20-a)(5+a)}}\right)} where ⁠ a = 5 4 ν + 3 ν 2 + 1 {\displaystyle a=5{\tfrac {4\nu +3}{\nu ^{2}+1}}} ⁠. 
Using the negative case of the square root yields, after scaling variables, the first parametrization while the positive case gives the second. The substitution ⁠ c = − m ℓ 5 , {\displaystyle c=-{\tfrac {m}{\ell ^{5}}},} ⁠ ⁠ e = 1 ℓ {\displaystyle e={\tfrac {1}{\ell }}} ⁠ in the Spearman–Williams parameterization allows one to not exclude the special case a = 0, giving the following result: If a and b are rational numbers, the equation x5 + ax + b = 0 is solvable by radicals if either its left-hand side is a product of polynomials of degree less than 5 with rational coefficients or there exist two rational numbers ℓ and m such that a = 5 ℓ ( 3 ℓ 5 − 4 m ) m 2 + ℓ 10 b = 4 ( 11 ℓ 5 + 2 m ) m 2 + ℓ 10 . {\displaystyle a={\frac {5\ell (3\ell ^{5}-4m)}{m^{2}+\ell ^{10}}}\qquad b={\frac {4(11\ell ^{5}+2m)}{m^{2}+\ell ^{10}}}.} === Roots of a solvable quintic === A polynomial equation is solvable by radicals if its Galois group is a solvable group. In the case of irreducible quintics, the Galois group is a subgroup of the symmetric group S5 of all permutations of a five element set, which is solvable if and only if it is a subgroup of the group F5, of order 20, generated by the cyclic permutations (1 2 3 4 5) and (1 2 4 3). If the quintic is solvable, one of the solutions may be represented by an algebraic expression involving a fifth root and at most two square roots, generally nested. The other solutions may then be obtained either by changing the fifth root or by multiplying all the occurrences of the fifth root by the same power of a primitive 5th root of unity, such as − 10 − 2 5 + 5 − 1 4 . 
{\displaystyle {\frac {{\sqrt {-10-2{\sqrt {5}}}}+{\sqrt {5}}-1}{4}}.} In fact, all four primitive fifth roots of unity may be obtained by changing the signs of the square roots appropriately; namely, the expression α − 10 − 2 β 5 + β 5 − 1 4 , {\displaystyle {\frac {\alpha {\sqrt {-10-2\beta {\sqrt {5}}}}+\beta {\sqrt {5}}-1}{4}},} where α , β ∈ { − 1 , 1 } {\displaystyle \alpha ,\beta \in \{-1,1\}} , yields the four distinct primitive fifth roots of unity. It follows that one may need four different square roots for writing all the roots of a solvable quintic. Even for the first root that involves at most two square roots, the expression of the solutions in terms of radicals is usually highly complicated. However, when no square root is needed, the form of the first solution may be rather simple, as for the equation x5 − 5x4 + 30x3 − 50x2 + 55x − 21 = 0, for which the only real solution is x = 1 + 2 5 − ( 2 5 ) 2 + ( 2 5 ) 3 − ( 2 5 ) 4 . {\displaystyle x=1+{\sqrt[{5}]{2}}-\left({\sqrt[{5}]{2}}\right)^{2}+\left({\sqrt[{5}]{2}}\right)^{3}-\left({\sqrt[{5}]{2}}\right)^{4}.} An example of a more complicated (although small enough to be written here) solution is the unique real root of x5 − 5x + 12 = 0. Let a = √2φ−1, b = √2φ, and c = 4√5, where φ = ⁠1+√5/2⁠ is the golden ratio. Then the only real solution x = −1.84208... is given by − c x = ( a + c ) 2 ( b − c ) 5 + ( − a + c ) ( b − c ) 2 5 + ( a + c ) ( b + c ) 2 5 − ( − a + c ) 2 ( b + c ) 5 , {\displaystyle -cx={\sqrt[{5}]{(a+c)^{2}(b-c)}}+{\sqrt[{5}]{(-a+c)(b-c)^{2}}}+{\sqrt[{5}]{(a+c)(b+c)^{2}}}-{\sqrt[{5}]{(-a+c)^{2}(b+c)}}\,,} or, equivalently, by x = y 1 5 + y 2 5 + y 3 5 + y 4 5 , {\displaystyle x={\sqrt[{5}]{y_{1}}}+{\sqrt[{5}]{y_{2}}}+{\sqrt[{5}]{y_{3}}}+{\sqrt[{5}]{y_{4}}}\,,} where the yi are the four roots of the quartic equation y 4 + 4 y 3 + 4 5 y 2 − 8 5 3 y − 1 5 5 = 0 . 
{\displaystyle y^{4}+4y^{3}+{\frac {4}{5}}y^{2}-{\frac {8}{5^{3}}}y-{\frac {1}{5^{5}}}=0\,.} More generally, if an equation P(x) = 0 of prime degree p with rational coefficients is solvable in radicals, then one can define an auxiliary equation Q(y) = 0 of degree p − 1, also with rational coefficients, such that each root of P is the sum of p-th roots of the roots of Q. These p-th roots were introduced by Joseph-Louis Lagrange, and their products by p are commonly called Lagrange resolvents. The computation of Q and its roots can be used to solve P(x) = 0. However, these p-th roots may not be computed independently (this would provide p^(p−1) roots instead of p). Thus a correct solution needs to express all these p-th roots in terms of one of them. Galois theory shows that this is always theoretically possible, even if the resulting formula may be too large to be of any use. It is possible that some of the roots of Q are rational (as in the first example of this section) or some are zero. In these cases, the formula for the roots is much simpler, as for the solvable de Moivre quintic x 5 + 5 a x 3 + 5 a 2 x + b = 0 , {\displaystyle x^{5}+5ax^{3}+5a^{2}x+b=0\,,} where the auxiliary equation has two zero roots and reduces, by factoring them out, to the quadratic equation y 2 + b y − a 5 = 0 , {\displaystyle y^{2}+by-a^{5}=0\,,} such that the five roots of the de Moivre quintic are given by x k = ω k y i 5 − a ω k y i 5 , {\displaystyle x_{k}=\omega ^{k}{\sqrt[{5}]{y_{i}}}-{\frac {a}{\omega ^{k}{\sqrt[{5}]{y_{i}}}}},} where yi is any root of the auxiliary quadratic equation and ω is any of the four primitive 5th roots of unity. This can be easily generalized to construct a solvable septic and other odd degrees, not necessarily prime. === Other solvable quintics === There are infinitely many solvable quintics in Bring–Jerrard form which have been parameterized in a preceding section. 
Up to the scaling of the variable, there are exactly five solvable quintics of the shape x 5 + a x 2 + b {\displaystyle x^{5}+ax^{2}+b} , which are (where s is a scaling factor): x 5 − 2 s 3 x 2 − s 5 5 {\displaystyle x^{5}-2s^{3}x^{2}-{\frac {s^{5}}{5}}} x 5 − 100 s 3 x 2 − 1000 s 5 {\displaystyle x^{5}-100s^{3}x^{2}-1000s^{5}} x 5 − 5 s 3 x 2 − 3 s 5 {\displaystyle x^{5}-5s^{3}x^{2}-3s^{5}} x 5 − 5 s 3 x 2 + 15 s 5 {\displaystyle x^{5}-5s^{3}x^{2}+15s^{5}} x 5 − 25 s 3 x 2 − 300 s 5 {\displaystyle x^{5}-25s^{3}x^{2}-300s^{5}} Paxton Young (1888) gave a number of examples of solvable quintics: An infinite sequence of solvable quintics may be constructed, whose roots are sums of nth roots of unity, with n = 10k + 1 being a prime number: There are also two parameterized families of solvable quintics: The Kondo–Brumer quintic, x 5 + ( a − 3 ) x 4 + ( − a + b + 3 ) x 3 + ( a 2 − a − 1 − 2 b ) x 2 + b x + a = 0 {\displaystyle x^{5}+(a-3)\,x^{4}+(-a+b+3)\,x^{3}+(a^{2}-a-1-2b)\,x^{2}+b\,x+a=0} and the family depending on the parameters a , ℓ , m {\displaystyle a,\ell ,m} x 5 − 5 p ( 2 x 3 + a x 2 + b x ) − p c = 0 {\displaystyle x^{5}-5\,p\left(2\,x^{3}+a\,x^{2}+b\,x\right)-p\,c=0} where p = 1 4 [ ℓ 2 ( 4 m 2 + a 2 ) − m 2 ] , {\displaystyle p={\tfrac {1}{4}}\left[\,\ell ^{2}(4m^{2}+a^{2})-m^{2}\,\right]\;,} b = ℓ ( 4 m 2 + a 2 ) − 5 p − 2 m 2 , {\displaystyle b=\ell \,(4m^{2}+a^{2})-5p-2m^{2}\;,} c = 1 2 [ b ( a + 4 m ) − p ( a − 4 m ) − a 2 m ] . {\displaystyle c={\tfrac {1}{2}}\left[\,b(a+4m)-p(a-4m)-a^{2}m\,\right]\;.} === Casus irreducibilis === Analogously to cubic equations, there are solvable quintics which have five real roots all of whose solutions in radicals involve roots of complex numbers. This is casus irreducibilis for the quintic, which is discussed in Dummit.: p.17  Indeed, if an irreducible quintic has all roots real, no root can be expressed purely in terms of real radicals (as is true for all polynomial degrees that are not powers of 2). 
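Cayley's criterion from earlier in this section can be checked by direct computation. The sketch below (SymPy assumed) transcribes the formulas for P and Δ given above for a depressed quintic y5 + py3 + qy2 + ry + s and tests whether the resolvent P2 − 1024 z Δ has a rational root. The two test cases, x5 − 5x + 12 (solvable, discussed above) and x5 − x + 1 (not solvable by radicals), are both already in depressed form:

```python
from sympy import symbols, Poly

z = symbols('z')

def cayley_resolvent(p, q, r, s):
    """P^2 - 1024*z*Delta for the depressed quintic y^5 + p*y^3 + q*y^2 + r*y + s,
    with P and Delta transcribed from Cayley's criterion."""
    P = (z**3 - z**2*(20*r + 3*p**2)
         - z*(8*p**2*r - 16*p*q**2 - 240*r**2 + 400*s*q - 3*p**4)
         - p**6 + 28*p**4*r - 16*p**3*q**2 - 176*p**2*r**2 - 80*p**2*s*q
         + 224*p*r*q**2 - 64*q**4 + 4000*p*s**2 + 320*r**3 - 1600*r*s*q)
    Delta = (-128*p**2*r**4 + 3125*s**4 - 72*p**4*q*r*s + 560*p**2*q*r**2*s
             + 16*p**4*r**3 + 256*r**5 + 108*p**5*s**2 - 1600*q*r**3*s
             + 144*p*q**2*r**3 - 900*p**3*r*s**2 + 2000*p*r**2*s**2
             - 3750*p*q*s**3 + 825*p**2*q**2*s**2 + 2250*q**2*r*s**2
             + 108*q**5*s - 27*q**4*r**2 - 630*p*q**3*r*s + 16*p**3*q**3*s
             - 4*p**3*q**2*r**2)
    return P**2 - 1024*z*Delta

def has_rational_root(expr):
    # A polynomial over Q has a rational root iff it has a linear factor over Q.
    _, factors = Poly(expr, z, domain='QQ').factor_list()
    return any(f.degree() == 1 for f, _ in factors)

# x^5 - 5x + 12 (p=0, q=0, r=-5, s=12) is solvable: the resolvent has a rational root.
print(has_rational_root(cayley_resolvent(0, 0, -5, 12)))   # True
# x^5 - x + 1 (p=0, q=0, r=-1, s=1) is irreducible and not solvable by radicals.
print(has_rational_root(cayley_resolvent(0, 0, -1, 1)))    # False
```

For x5 − 5x + 12, the resolvent's rational root turns out to be z = 100 (with p = q = 0, P reduces to z3 + 100z2 + 6000z − 40000 and Δ to 64 000 000).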
== Beyond radicals == About 1835, Jerrard demonstrated that quintics can be solved by using ultraradicals (also known as Bring radicals), the unique real root of t5 + t − a = 0 for real numbers a. In 1858, Charles Hermite showed that the Bring radical could be characterized in terms of the Jacobi theta functions and their associated elliptic modular functions, using an approach similar to the more familiar approach of solving cubic equations by means of trigonometric functions. At around the same time, Leopold Kronecker, using group theory, developed a simpler way of deriving Hermite's result, as had Francesco Brioschi. Later, Felix Klein came up with a method that relates the symmetries of the icosahedron, Galois theory, and the elliptic modular functions that are featured in Hermite's solution, giving an explanation for why they should appear at all, and developed his own solution in terms of generalized hypergeometric functions. Similar phenomena occur in degree 7 (septic equations) and 11, as studied by Klein and discussed in Icosahedral symmetry § Related geometries. === Solving with Bring radicals === A Tschirnhaus transformation, which may be computed by solving a quartic equation, reduces the general quintic equation of the form x 5 + a 4 x 4 + a 3 x 3 + a 2 x 2 + a 1 x + a 0 = 0 {\displaystyle x^{5}+a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}=0\,} to the Bring–Jerrard normal form x5 − x + t = 0. The roots of this equation cannot be expressed by radicals. However, in 1858, Charles Hermite published the first known solution of this equation in terms of elliptic functions. At around the same time Francesco Brioschi and Leopold Kronecker came upon equivalent solutions. See Bring radical for details on these solutions and some related ones. == Application to celestial mechanics == Solving for the locations of the Lagrangian points of an astronomical orbit in which the masses of both objects are non-negligible involves solving a quintic. 
More precisely, the locations of L2 and L1 are the solutions to the following equations, where the gravitational forces of two masses on a third (for example, Sun and Earth on satellites such as Gaia and the James Webb Space Telescope at L2 and SOHO at L1) provide the satellite's centripetal force necessary to be in a synchronous orbit with Earth around the Sun: G m M S ( R ± r ) 2 ± G m M E r 2 = m ω 2 ( R ± r ) {\displaystyle {\frac {GmM_{S}}{(R\pm r)^{2}}}\pm {\frac {GmM_{E}}{r^{2}}}=m\omega ^{2}(R\pm r)} The ± sign corresponds to L2 and L1, respectively; G is the gravitational constant, ω the angular velocity, r the distance of the satellite to Earth, R the distance Sun to Earth (that is, the semi-major axis of Earth's orbit), and m, ME, and MS are the respective masses of satellite, Earth, and Sun. Using Kepler's Third Law ω 2 = 4 π 2 P 2 = G ( M S + M E ) R 3 {\displaystyle \omega ^{2}={\frac {4\pi ^{2}}{P^{2}}}={\frac {G(M_{S}+M_{E})}{R^{3}}}} and rearranging all terms yields the quintic a r 5 + b r 4 + c r 3 + d r 2 + e r + f = 0 {\displaystyle ar^{5}+br^{4}+cr^{3}+dr^{2}+er+f=0} with: a = ± ( M S + M E ) , b = + ( M S + M E ) 3 R , c = ± ( M S + M E ) 3 R 2 , d = + ( M E ∓ M E ) R 3 ( thus d = 0 for L 2 ) , e = − M E 2 R 4 , f = ∓ M E R 5 . {\displaystyle {\begin{aligned}&a=\pm (M_{S}+M_{E}),\\&b=+(M_{S}+M_{E})3R,\\&c=\pm (M_{S}+M_{E})3R^{2},\\&d=+(M_{E}\mp M_{E})R^{3}\ ({\text{thus }}d=0{\text{ for }}L_{2}),\\&e=-M_{E}2R^{4},\\&f=\mp M_{E}R^{5}.\end{aligned}}} Solving these two quintics yields r = 1.501 × 109 m for L2 and r = 1.491 × 109 m for L1. The Sun–Earth Lagrangian points L2 and L1 are usually given as 1.5 million km from Earth. 
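The quoted L2 distance can be reproduced numerically. In the sketch below (NumPy assumed; the mass and distance constants are rounded reference values), the L2 balance equation is cleared of denominators and then divided through by R5(MS + ME), giving a well-scaled dimensionless quintic in x = r/R:

```python
import numpy as np

# Assumed approximate constants: solar mass, Earth mass (kg), Sun-Earth distance (m).
M_S = 1.989e30
M_E = 5.972e24
R = 1.496e11

# Expanding G*M_S/(R+r)^2 + G*M_E/r^2 = omega^2*(R+r) with
# omega^2 = G*(M_S+M_E)/R^3 and substituting x = r/R gives the
# dimensionless quintic
#   x^5 + 3x^4 + 3x^3 - 2*mu*x - mu = 0,   mu = M_E/(M_S + M_E)
# (the x^2 term cancels, matching d = 0 for L2).
mu = M_E / (M_S + M_E)
coeffs = [1.0, 3.0, 3.0, 0.0, -2.0 * mu, -mu]

roots = np.roots(coeffs)
# By Descartes' rule of signs there is exactly one positive real root.
x = next(w.real for w in roots if abs(w.imag) < 1e-9 and w.real > 0)
r_L2 = x * R
print(f"L2 distance from Earth: {r_L2:.4g} m")   # close to 1.5e9 m
```

With these constants the positive real root gives r ≈ 1.501 × 109 m, matching the figure quoted above; the dimensionless form avoids the enormous spread of coefficient magnitudes that the raw quintic in r would have.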
If the mass of the smaller object (ME) is much smaller than the mass of the larger object (MS), then the quintic equation can be greatly reduced and L1 and L2 are at approximately the radius of the Hill sphere, given by: r ≈ R M E 3 M S 3 {\displaystyle r\approx R{\sqrt[{3}]{\frac {M_{E}}{3M_{S}}}}} That also yields r = 1.5 × 10^9 m for satellites at L1 and L2 in the Sun-Earth system. == See also == Sextic equation Septic function Theory of equations Principal equation form == Notes == == References == Charles Hermite, "Sur la résolution de l'équation du cinquième degré", Œuvres de Charles Hermite, 2:5–21, Gauthier-Villars, 1908. Klein, Felix (1888). Lectures on the Icosahedron and the Solution of Equations of the Fifth Degree. Translated by Morrice, George Gavin. Trübner & Co. ISBN 0-486-49528-0. Leopold Kronecker, "Sur la résolution de l'équation du cinquième degré, extrait d'une lettre adressée à M. Hermite", Comptes Rendus de l'Académie des Sciences, 46:1:1150–1152, 1858. Blair Spearman and Kenneth S. Williams, "Characterization of solvable quintics x^5 + ax + b", American Mathematical Monthly, 101:986–992 (1994). Ian Stewart, Galois Theory, 2nd Edition, Chapman and Hall, 1989. ISBN 0-412-34550-1. Discusses Galois theory in general, including a proof of insolvability of the general quintic. Jörg Bewersdorff, Galois theory for beginners: A historical perspective, American Mathematical Society, 2006. ISBN 0-8218-3817-2. Chapter 8 (The solution of equations of the fifth degree at the Wayback Machine (archived 31 March 2010)) gives a description of the solution of solvable quintics x^5 + cx + d. Victor S. Adamchik and David J. Jeffrey, "Polynomial transformations of Tschirnhaus, Bring and Jerrard", ACM SIGSAM Bulletin, Vol. 37, No. 3, September 2003, pp. 90–94. Ehrenfried Walter von Tschirnhaus, "A method for removing all intermediate terms from a given equation", ACM SIGSAM Bulletin, Vol. 37, No. 1, March 2003, pp. 1–3.
Lazard, Daniel (2004). "Solving quintics in radicals". In Olav Arnfinn Laudal; Ragni Piene (eds.). The Legacy of Niels Henrik Abel. Berlin. pp. 207–225. ISBN 3-540-43826-2. Archived from the original on January 6, 2005. Tóth, Gábor (2002), Finite Möbius groups, minimal immersions of spheres, and moduli == External links == MathWorld – Quintic Equation – more details on methods for solving quintics. Solving Solvable Quintics – a method for solving solvable quintics due to David S. Dummit. A method for removing all intermediate terms from a given equation – a recent English translation of Tschirnhaus' 1683 paper. Bruce Bartlett: The Quintic, the Icosahedron, and Elliptic Curves, AMS Notices (April 2024)
Wikipedia/Quintic_equation
In multilinear algebra, a tensor contraction is an operation on a tensor that arises from the canonical pairing of a vector space and its dual. In components, it is expressed as a sum of products of scalar components of the tensor(s) caused by applying the summation convention to a pair of dummy indices that are bound to each other in an expression. The contraction of a single mixed tensor occurs when a pair of literal indices (one a subscript, the other a superscript) of the tensor are set equal to each other and summed over. In Einstein notation this summation is built into the notation. The result is another tensor with order reduced by 2. Tensor contraction can be seen as a generalization of the trace. == Abstract formulation == Let V be a vector space over a field k. The core of the contraction operation, and the simplest case, is the canonical pairing of V with its dual vector space V∗. The pairing is the linear map from the tensor product of these two spaces to the field k: C : V ⊗ V ∗ → k {\displaystyle C:V\otimes V^{*}\rightarrow k} corresponding to the bilinear form ⟨ v , f ⟩ = f ( v ) {\displaystyle \langle v,f\rangle =f(v)} where f is in V∗ and v is in V. The map C defines the contraction operation on a tensor of type (1, 1), which is an element of V ⊗ V ∗ {\displaystyle V\otimes V^{*}} . Note that the result is a scalar (an element of k). In finite dimensions, using the natural isomorphism between V ⊗ V ∗ {\displaystyle V\otimes V^{*}} and the space of linear maps from V to V, one obtains a basis-free definition of the trace. In general, a tensor of type (m, n) (with m ≥ 1 and n ≥ 1) is an element of the vector space V ⊗ ⋯ ⊗ V ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ {\displaystyle V\otimes \cdots \otimes V\otimes V^{*}\otimes \cdots \otimes V^{*}} (where there are m factors V and n factors V∗). 
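In coordinates, the canonical pairing and the trace it induces on type (1, 1) tensors can be sketched with NumPy (purely illustrative; the components are arbitrary):

```python
import numpy as np

# Sketch: the canonical pairing C(v (x) f) = f(v) in components, and the
# induced basis-free trace of a type (1,1) tensor.
rng = np.random.default_rng(0)
n = 4

v = rng.standard_normal(n)   # vector in V
f = rng.standard_normal(n)   # covector in V*, components in the dual basis

# pairing <v, f> = f(v) = sum_i f_i v^i
assert np.isclose(np.einsum("i,i->", f, v), f @ v)

# the (1,1) tensor v (x) f contracts to the scalar f(v); viewed as the
# linear map x -> f(x) v, that scalar is exactly its trace
T = np.outer(v, f)
assert np.isclose(np.trace(T), f @ v)
```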
Applying the canonical pairing to the kth V factor and the lth V∗ factor, and using the identity on all other factors, defines the (k, l) contraction operation, which is a linear map that yields a tensor of type (m − 1, n − 1). By analogy with the (1, 1) case, the general contraction operation is sometimes called the trace. == Contraction in index notation == In tensor index notation, the basic contraction of a vector and a dual vector is denoted by f ~ ( v → ) = f γ v γ , {\displaystyle {\tilde {f}}({\vec {v}})=f_{\gamma }v^{\gamma },} which is shorthand for the explicit coordinate summation f γ v γ = f 1 v 1 + f 2 v 2 + ⋯ + f n v n {\displaystyle f_{\gamma }v^{\gamma }=f_{1}v^{1}+f_{2}v^{2}+\cdots +f_{n}v^{n}} (where vi are the components of v in a particular basis and fi are the components of f in the corresponding dual basis). Since a general mixed dyadic tensor is a linear combination of decomposable tensors of the form f ⊗ v {\displaystyle f\otimes v} , the explicit formula for the dyadic case follows: let T = T j i e i ⊗ e j {\displaystyle \mathbf {T} =T_{j}^{i}\mathbf {e} _{i}\otimes \mathbf {e} ^{j}} be a mixed dyadic tensor. Then its contraction is T j i e i ⋅ e j = T j i δ i j = T j j = T 1 1 + ⋯ + T n n {\displaystyle T_{j}^{i}\mathbf {e} _{i}\cdot \mathbf {e} ^{j}=T_{j}^{i}\delta _{i}{}^{j}=T_{j}^{j}=T_{1}^{1}+\cdots +T_{n}^{n}} . A general contraction is denoted by labeling one covariant index and one contravariant index with the same letter, summation over that index being implied by the summation convention. The resulting contracted tensor inherits the remaining indices of the original tensor. For example, contracting a tensor T of type (2,2) on the second and third indices to create a new tensor U of type (1,1) is written as T a b b c = ∑ b T a b b c = T a 1 1 c + T a 2 2 c + ⋯ + T a n n c = U a c . 
{\displaystyle T^{ab}{}_{bc}=\sum _{b}{T^{ab}{}_{bc}}=T^{a1}{}_{1c}+T^{a2}{}_{2c}+\cdots +T^{an}{}_{nc}=U^{a}{}_{c}.} By contrast, let T = e i ⊗ e j {\displaystyle \mathbf {T} =\mathbf {e} ^{i}\otimes \mathbf {e} ^{j}} be an unmixed dyadic tensor. This tensor does not contract; if its base vectors are dotted, the result is the contravariant metric tensor, g i j = e i ⋅ e j {\displaystyle g^{ij}=\mathbf {e} ^{i}\cdot \mathbf {e} ^{j}} , whose rank is 2. == Metric contraction == As in the previous example, contraction on a pair of indices that are either both contravariant or both covariant is not possible in general. However, in the presence of an inner product (also known as a metric) g, such contractions are possible. One uses the metric to raise or lower one of the indices, as needed, and then one uses the usual operation of contraction. The combined operation is known as metric contraction. == Application to tensor fields == Contraction is often applied to tensor fields over spaces (e.g. Euclidean space, manifolds, or schemes). Since contraction is a purely algebraic operation, it can be applied pointwise to a tensor field, e.g. if T is a (1,1) tensor field on Euclidean space, then in any coordinates, its contraction (a scalar field) U at a point x is given by U ( x ) = ∑ i T i i ( x ) {\displaystyle U(x)=\sum _{i}T_{i}^{i}(x)} Since the role of x is not complicated here, it is often suppressed, and the notation for tensor fields becomes identical to that for purely algebraic tensors. Over a Riemannian manifold, a metric (field of inner products) is available, and both metric and non-metric contractions are crucial to the theory. For example, the Ricci tensor is a non-metric contraction of the Riemann curvature tensor, and the scalar curvature is the unique metric contraction of the Ricci tensor. 
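The contractions above can be illustrated with numpy.einsum, whose repeated-subscript rule mirrors the summation convention (an illustrative sketch; all components are arbitrary):

```python
import numpy as np

# Sketch: index-notation contractions via einsum.
rng = np.random.default_rng(1)
n = 4

# mixed dyadic tensor T^i_j: its contraction T^j_j is the trace
T = rng.standard_normal((n, n))
assert np.isclose(np.einsum("ii->", T), np.trace(T))

# type (2,2) tensor with axes ordered (a, b, b', c): T^{ab}_{bc} -> U^a_c
T22 = rng.standard_normal((n, n, n, n))
U = np.einsum("abbc->ac", T22)
assert np.allclose(U, sum(T22[:, b, b, :] for b in range(n)))

# metric contraction of a (2,0) tensor S^{ij}: lower one index with a
# symmetric metric g_{ij}, then contract as usual
A = rng.standard_normal((n, n))
g = A @ A.T + n * np.eye(n)               # symmetric positive-definite
S = rng.standard_normal((n, n))
lowered = np.einsum("ik,kj->ij", S, g)    # S^i_j = S^{ik} g_{kj}
assert np.isclose(np.einsum("ii->", lowered), np.einsum("ij,ij->", S, g))
```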
One can also view contraction of a tensor field in the context of modules over an appropriate ring of functions on the manifold or the context of sheaves of modules over the structure sheaf; see the discussion at the end of this article. === Tensor divergence === As an application of the contraction of a tensor field, let V be a vector field on a Riemannian manifold (for example, Euclidean space). Let V α β {\displaystyle V^{\alpha }{}_{\beta }} be the covariant derivative of V (in some choice of coordinates). In the case of Cartesian coordinates in Euclidean space, one can write V α β = ∂ V α ∂ x β . {\displaystyle V^{\alpha }{}_{\beta }={\partial V^{\alpha } \over \partial x^{\beta }}.} Then changing index β to α causes the pair of indices to become bound to each other, so that the derivative contracts with itself to obtain the following sum: V α α = V 0 0 + ⋯ + V n n , {\displaystyle V^{\alpha }{}_{\alpha }=V^{0}{}_{0}+\cdots +V^{n}{}_{n},} which is the divergence div V. Then div ⁡ V = V α α = 0 {\displaystyle \operatorname {div} V=V^{\alpha }{}_{\alpha }=0} is a continuity equation for V. In general, one can define various divergence operations on higher-rank tensor fields, as follows. If T is a tensor field with at least one contravariant index, taking the covariant differential and contracting the chosen contravariant index with the new covariant index corresponding to the differential results in a new tensor of rank one lower than that of T. == Contraction of a pair of tensors == One can generalize the core contraction operation (vector with dual vector) in a slightly different way, by considering a pair of tensors T and U. The tensor product T ⊗ U {\displaystyle T\otimes U} is a new tensor, which, if it has at least one covariant and one contravariant index, can be contracted. The case where T is a vector and U is a dual vector is exactly the core operation introduced first in this article. 
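The divergence-as-contraction described above can be checked numerically on a flat grid, where covariant derivatives reduce to partial derivatives (the field and grid are arbitrary choices made for this sketch):

```python
import numpy as np

# Sketch: divergence as the contraction V^alpha_alpha of the derivative,
# on a flat 2-D Cartesian grid.
x = np.linspace(0.0, 1.0, 201)
y = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(x, y, indexing="ij")
V = np.stack([X**2, X * Y])        # field V = (x^2, x*y); div V = 3x

# J[alpha, beta] approximates dV^alpha / dx^beta at every grid point
J = np.stack([np.stack(np.gradient(V[a], x, y, edge_order=2))
              for a in range(2)])
div = np.einsum("aaij->ij", J)     # contract alpha with beta

assert np.allclose(div, 3 * X)     # matches the analytic divergence
```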
In tensor index notation, to contract two tensors with each other, one places them side by side (juxtaposed) as factors of the same term. This implements the tensor product, yielding a composite tensor. Contracting two indices in this composite tensor implements the desired contraction of the two tensors. For example, matrices can be represented as tensors of type (1,1) with the first index being contravariant and the second index being covariant. Let Λ α β {\displaystyle \Lambda ^{\alpha }{}_{\beta }} be the components of one matrix and let M β γ {\displaystyle \mathrm {M} ^{\beta }{}_{\gamma }} be the components of a second matrix. Then their multiplication is given by the following contraction, an example of the contraction of a pair of tensors: Λ α β M β γ = N α γ {\displaystyle \Lambda ^{\alpha }{}_{\beta }\mathrm {M} ^{\beta }{}_{\gamma }=\mathrm {N} ^{\alpha }{}_{\gamma }} . Also, the interior product of a vector with a differential form is a special case of the contraction of two tensors with each other. == More general algebraic contexts == Let R be a commutative ring and let M be a finite free module over R. Then contraction operates on the full (mixed) tensor algebra of M in exactly the same way as it does in the case of vector spaces over a field. (The key fact is that the canonical pairing is still perfect in this case.) More generally, let OX be a sheaf of commutative rings over a topological space X, e.g. OX could be the structure sheaf of a complex manifold, analytic space, or scheme. Let M be a locally free sheaf of modules over OX of finite rank. Then the dual of M is still well-behaved and contraction operations make sense in this context. == See also == Tensor product Partial trace Interior product Raising and lowering indices Musical isomorphism Ricci calculus == Notes == == References == Bishop, Richard L.; Goldberg, Samuel I. (1980). Tensor Analysis on Manifolds. New York: Dover. ISBN 0-486-64039-6. Menzel, Donald H. (1961). 
Mathematical Physics. New York: Dover. ISBN 0-486-60056-4.
Wikipedia/Tensor_contraction
In theoretical particle physics, the gluon field strength tensor is a second order tensor field characterizing the gluon interaction between quarks. The strong interaction is one of the fundamental interactions of nature, and the quantum field theory (QFT) to describe it is called quantum chromodynamics (QCD). Quarks interact with each other by the strong force due to their color charge, mediated by gluons. Gluons themselves possess color charge and can mutually interact. The gluon field strength tensor is a rank 2 tensor field on the spacetime with values in the adjoint bundle of the chromodynamical SU(3) gauge group (see vector bundle for necessary definitions). == Convention == Throughout this article, Latin indices (typically a, b, c, n) take values 1, 2, ..., 8 for the eight gluon color charges, while Greek indices (typically α, β, μ, ν) take values 0 for timelike components and 1, 2, 3 for spacelike components of four-vectors and four-dimensional spacetime tensors. In all equations, the summation convention is used on all color and tensor indices, unless the text explicitly states that there is no sum to be taken (e.g. “no sum”). == Definition == Below the definitions (and most of the notation) follow K. Yagi, T. Hatsuda, Y. Miake and Greiner, Schäfer. 
=== Tensor components === The tensor is denoted G, (or F, F, or some variant), and has components defined proportional to the commutator of the quark covariant derivative Dμ: G α β = ± 1 i g s [ D α , D β ] , {\displaystyle G_{\alpha \beta }=\pm {\frac {1}{ig_{\text{s}}}}[D_{\alpha },D_{\beta }]\,,} where: D μ = ∂ μ ± i g s t a A μ a , {\displaystyle D_{\mu }=\partial _{\mu }\pm ig_{\text{s}}t_{a}{\mathcal {A}}_{\mu }^{a}\,,} in which i is the imaginary unit; gs is the coupling constant of the strong force; ta = λa/2 are the Gell-Mann matrices λa divided by 2; a is a color index in the adjoint representation of SU(3) which take values 1, 2, ..., 8 for the eight generators of the group, namely the Gell-Mann matrices; μ is a spacetime index, 0 for timelike components and 1, 2, 3 for spacelike components; A μ = t a A μ a {\displaystyle {\mathcal {A}}_{\mu }=t_{a}{\mathcal {A}}_{\mu }^{a}} expresses the gluon field, a spin-1 gauge field or, in differential-geometric parlance, a connection in the SU(3) principal bundle; A μ {\displaystyle {\mathcal {A}}_{\mu }} are its four (coordinate-system dependent) components, that in a fixed gauge are 3×3 traceless Hermitian matrix-valued functions, while A μ a {\displaystyle {\mathcal {A}}_{\mu }^{a}} are 32 real-valued functions, the four components for each of the eight four-vector fields. Note that different authors choose different signs. 
Expanding the commutator gives; G α β = ∂ α A β − ∂ β A α ± i g s [ A α , A β ] {\displaystyle G_{\alpha \beta }=\partial _{\alpha }{\mathcal {A}}_{\beta }-\partial _{\beta }{\mathcal {A}}_{\alpha }\pm ig_{\text{s}}[{\mathcal {A}}_{\alpha },{\mathcal {A}}_{\beta }]} Substituting t a A α a = A α {\displaystyle t_{a}{\mathcal {A}}_{\alpha }^{a}={\mathcal {A}}_{\alpha }} and using the commutation relation [ t a , t b ] = i f a b c t c {\displaystyle [t_{a},t_{b}]=if_{ab}{}^{c}t_{c}} for the Gell-Mann matrices (with a relabeling of indices), in which f abc are the structure constants of SU(3), each of the gluon field strength components can be expressed as a linear combination of the Gell-Mann matrices as follows: G α β = ∂ α t a A β a − ∂ β t a A α a ± i g s [ t b , t c ] A α b A β c = t a ( ∂ α A β a − ∂ β A α a ± i 2 f b c a g s A α b A β c ) = t a G α β a , {\displaystyle {\begin{aligned}G_{\alpha \beta }&=\partial _{\alpha }t_{a}{\mathcal {A}}_{\beta }^{a}-\partial _{\beta }t_{a}{\mathcal {A}}_{\alpha }^{a}\pm ig_{\text{s}}\left[t_{b},t_{c}\right]{\mathcal {A}}_{\alpha }^{b}{\mathcal {A}}_{\beta }^{c}\\&=t_{a}\left(\partial _{\alpha }{\mathcal {A}}_{\beta }^{a}-\partial _{\beta }{\mathcal {A}}_{\alpha }^{a}\pm i^{2}f_{bc}{}^{a}g_{\text{s}}{\mathcal {A}}_{\alpha }^{b}{\mathcal {A}}_{\beta }^{c}\right)\\&=t_{a}G_{\alpha \beta }^{a}\\\end{aligned}}\,,} so that: G α β a = ∂ α A β a − ∂ β A α a ∓ g s f a b c A α b A β c , {\displaystyle G_{\alpha \beta }^{a}=\partial _{\alpha }{\mathcal {A}}_{\beta }^{a}-\partial _{\beta }{\mathcal {A}}_{\alpha }^{a}\mp g_{\text{s}}f^{a}{}_{bc}{\mathcal {A}}_{\alpha }^{b}{\mathcal {A}}_{\beta }^{c}\,,} where again a, b, c = 1, 2, ..., 8 are color indices. As with the gluon field, in a specific coordinate system and fixed gauge Gαβ are 3×3 traceless Hermitian matrix-valued functions, while Gaαβ are real-valued functions, the components of eight four-dimensional second order tensor fields. 
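The algebra used in this expansion can be verified numerically: the sketch below enters the standard Gell-Mann matrices by hand, forms t_a = λ_a/2, and extracts the structure constants via f_abc = −2i tr([t_a, t_b] t_c), which follows from the normalization tr(t_a t_b) = δ_ab/2.

```python
import numpy as np

# Sketch: Gell-Mann matrices, t_a = lambda_a / 2, and the SU(3)
# structure constants from [t_a, t_b] = i f_abc t_c.
l = np.zeros((8, 3, 3), dtype=complex)
l[0] = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
l[1] = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
l[2] = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]
l[3] = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
l[4] = [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]
l[5] = [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
l[6] = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)
t = l / 2

# generators are traceless and Hermitian, as stated above
assert np.allclose(np.trace(t, axis1=1, axis2=2), 0)
assert all(np.allclose(m, m.conj().T) for m in t)

f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = t[a] @ t[b] - t[b] @ t[a]
        for c in range(8):
            f[a, b, c] = (-2j * np.trace(comm @ t[c])).real

assert np.isclose(f[0, 1, 2], 1.0)   # standard normalization f_123 = 1
```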
=== Differential forms === The gluon color field can be described using the language of differential forms, specifically as an adjoint bundle-valued curvature 2-form (note that fibers of the adjoint bundle are the su(3) Lie algebra); G = d A ∓ g s A ∧ A , {\displaystyle \mathbf {G} =\mathrm {d} {\boldsymbol {\mathcal {A}}}\mp g_{\text{s}}\,{\boldsymbol {\mathcal {A}}}\wedge {\boldsymbol {\mathcal {A}}}\,,} where A {\displaystyle {\boldsymbol {\mathcal {A}}}} is the gluon field, a vector potential 1-form corresponding to G and ∧ is the (antisymmetric) wedge product of this algebra, producing the structure constants f abc. The Cartan-derivative of the field form (i.e. essentially the divergence of the field) would be zero in the absence of the "gluon terms", i.e. those A {\displaystyle {\boldsymbol {\mathcal {A}}}} which represent the non-abelian character of the SU(3). A more mathematically formal derivation of these same ideas (but a slightly altered setting) can be found in the article on metric connections. === Comparison with the electromagnetic tensor === This almost parallels the electromagnetic field tensor (also denoted F ) in quantum electrodynamics, given by the electromagnetic four-potential A describing a spin-1 photon; F α β = ∂ α A β − ∂ β A α , {\displaystyle F_{\alpha \beta }=\partial _{\alpha }A_{\beta }-\partial _{\beta }A_{\alpha }\,,} or in the language of differential forms: F = d A . {\displaystyle \mathbf {F} =\mathrm {d} \mathbf {A} \,.} The key difference between quantum electrodynamics and quantum chromodynamics is that the gluon field strength has extra terms which lead to self-interactions between the gluons and asymptotic freedom. This is a complication of the strong force making it inherently non-linear, contrary to the linear theory of the electromagnetic force. QCD is a non-abelian gauge theory. 
The word non-abelian in group-theoretical language means that the group operation is not commutative, making the corresponding Lie algebra non-trivial. == QCD Lagrangian density == Characteristic of field theories, the dynamics of the field strength are summarized by a suitable Lagrangian density and substitution into the Euler–Lagrange equation (for fields) obtains the equation of motion for the field. The Lagrangian density for massless quarks, bound by gluons, is: L = − 1 2 t r ( G α β G α β ) + ψ ¯ ( i D μ ) γ μ ψ {\displaystyle {\mathcal {L}}=-{\frac {1}{2}}\mathrm {tr} \left(G_{\alpha \beta }G^{\alpha \beta }\right)+{\bar {\psi }}\left(iD_{\mu }\right)\gamma ^{\mu }\psi } where "tr" denotes the trace of the 3×3 matrix GαβGαβ, and γμ are the 4×4 gamma matrices. In the fermionic term ψ ¯ ( i D μ ) γ μ ψ {\displaystyle {\bar {\psi }}\left(iD_{\mu }\right)\gamma ^{\mu }\psi } , both color and spinor indices are suppressed. With indices explicit, ψ i , α {\displaystyle \psi _{i,\alpha }} where i = 1 , … , 3 {\displaystyle i=1,\ldots ,3} are color indices and α = 1 , … , 4 {\displaystyle \alpha =1,\ldots ,4} are Dirac spinor indices. == Gauge transformations == In contrast to QED, the gluon field strength tensor is not gauge invariant by itself. Only the product of two field strength tensors, contracted over all indices, is gauge invariant. == Equations of motion == Treated as a classical field theory, the equations of motion for the quark fields are: ( i ℏ γ μ D μ − m c ) ψ = 0 {\displaystyle (i\hbar \gamma ^{\mu }D_{\mu }-mc)\psi =0} which is like the Dirac equation, and the equations of motion for the gluon (gauge) fields are: [ D μ , G μ ν ] = g s j ν {\displaystyle \left[D_{\mu },G^{\mu \nu }\right]=g_{\text{s}}j^{\nu }} which are similar to the Maxwell equations (when written in tensor notation). More specifically, these are the Yang–Mills equations for quark and gluon fields.
The color charge four-current is the source of the gluon field strength tensor, analogous to the electromagnetic four-current as the source of the electromagnetic tensor. It is given by j ν = t b j b ν , j b ν = ψ ¯ γ ν t b ψ , {\displaystyle j^{\nu }=t^{b}j_{b}^{\nu }\,,\quad j_{b}^{\nu }={\bar {\psi }}\gamma ^{\nu }t^{b}\psi ,} which is a conserved current since color charge is conserved. In other words, the color four-current must satisfy the continuity equation: D ν j ν = 0 . {\displaystyle D_{\nu }j^{\nu }=0\,.} == See also == Quark confinement Gell-Mann matrices Field (physics) Yang–Mills field Eightfold Way (physics) Einstein tensor Wilson loop Wess–Zumino gauge Quantum chromodynamics binding energy Ricci calculus Special unitary group == References == === Notes === === Further reading === ==== Books ==== H. Fritzsch (1982). Quarks: the stuff of matter. Allen lane. ISBN 978-0-7139-15334. B.R. Martin; G. Shaw (2009). Particle Physics. Manchester Physics Series (3rd ed.). John Wiley & Sons. ISBN 978-0-470-03294-7. S. Sarkar; H. Satz; B. Sinha (2009). The Physics of the Quark-Gluon Plasma: Introductory Lectures. Springer. ISBN 978-3642022852. J. Thanh Van Tran, ed. (1987). Hadrons, Quarks and Gluons: Proceedings of the Hadronic Session of the Twenty-Second Rencontre de Moriond, Les Arcs-Savoie-France. Atlantica Séguier Frontières. ISBN 978-2863320488. R. Alkofer; H. Reinhart (1995). Chiral Quark Dynamics. Springer. ISBN 978-3540601371. K. Chung (2008). Hadronic Production of ψ(2S) Cross Section and Polarization. ISBN 978-0549597742. J. Collins (2011). Foundations of Perturbative QCD. Cambridge University Press. ISBN 978-0521855334. W.N.A. Cottingham; D.A.A. Greenwood (1998). Standard Model of Particle Physics. Cambridge University Press. ISBN 978-0521588324. ==== Selected papers ==== J.P. Maa; Q. Wang; G.P. Zhang (2012). "QCD evolutions of twist-3 chirality-odd operators". Physics Letters B. 718 (4–5): 1358–1363. arXiv:1210.1006. Bibcode:2013PhLB..718.1358M. 
doi:10.1016/j.physletb.2012.12.007. S2CID 118575585. M. D’Elia, A. Di Giacomo, E. Meggiolaro (1997). "Field strength correlators in full QCD". Physics Letters B. 408 (1–4): 315–319. arXiv:hep-lat/9705032. Bibcode:1997PhLB..408..315D. doi:10.1016/S0370-2693(97)00814-9. S2CID 119533874. A. Di Giacomo; M. D’elia; H. Panagopoulos; E. Meggiolaro (1998). "Gauge Invariant Field Strength Correlators In QCD". arXiv:hep-lat/9808056. M. Neubert (1993). "A Virial Theorem for the Kinetic Energy of a Heavy Quark inside Hadrons". Physics Letters B. 322 (4): 419–424. arXiv:hep-ph/9311232. Bibcode:1994PhLB..322..419N. doi:10.1016/0370-2693(94)91174-6. S2CID 14214029. M. Neubert; N. Brambilla; H.G. Dosch; A. Vairo (1998). "Field strength correlators and dual effective dynamics in QCD". Physical Review D. 58 (3): 034010. arXiv:hep-ph/9802273. Bibcode:1998PhRvD..58c4010B. doi:10.1103/PhysRevD.58.034010. S2CID 1824834. M. Neubert (1996). "QCD sum-rule calculation of the kinetic energy and chromo-interaction of heavy quarks inside mesons" (PDF). Physics Letters B. == External links == K. Ellis (2005). "QCD" (PDF). Fermilab. Archived from the original on September 26, 2006. "Chapter 2: The QCD Lagrangian" (PDF). Technische Universität München. Retrieved 2013-10-17.
Wikipedia/Gluon_field_strength_tensor
In mathematics, a function space is a set of functions between two fixed sets. Often, the domain and/or codomain will have additional structure which is inherited by the function space. For example, the set of functions from any set X into a vector space has a natural vector space structure given by pointwise addition and scalar multiplication. In other scenarios, the function space might inherit a topological or metric structure, hence the name function space. == In linear algebra == Let F be a field and let X be any set. The functions X → F can be given the structure of a vector space over F where the operations are defined pointwise, that is, for any f, g : X → F, any x in X, and any c in F, define ( f + g ) ( x ) = f ( x ) + g ( x ) ( c ⋅ f ) ( x ) = c ⋅ f ( x ) {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(c\cdot f)(x)&=c\cdot f(x)\end{aligned}}} When the domain X has additional structure, one might consider instead the subset (or subspace) of all such functions which respect that structure. For example, if V and also X itself are vector spaces over F, the set of linear maps X → V forms a vector space over F with pointwise operations (often denoted Hom(X,V)). One such space is the dual space of X: the set of linear functionals X → F with addition and scalar multiplication defined pointwise. The cardinal dimension of a function space with no extra structure can be found by the Erdős–Kaplansky theorem. == Examples == Function spaces appear in various areas of mathematics: In set theory, the set of functions from X to Y may be denoted {X → Y} or Y^X. As a special case, the power set of a set X may be identified with the set of all functions from X to {0, 1}, denoted 2^X. The set of bijections from X to Y is denoted X ↔ Y {\displaystyle X\leftrightarrow Y} . The factorial notation X! may be used for permutations of a single set X.
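The pointwise operations from the linear-algebra description above can be sketched with plain closures; the helper names here are illustrative, and the last lines approximate the uniform (sup) norm discussed later in the article:

```python
# Sketch: the pointwise vector-space operations on functions X -> F,
# written as Python closures.
def add(f, g):
    return lambda x: f(x) + g(x)      # (f + g)(x) = f(x) + g(x)

def scale(c, f):
    return lambda x: c * f(x)         # (c . f)(x) = c * f(x)

f = lambda x: x * x
g = lambda x: 3.0

h = add(scale(2.0, f), g)             # h = 2f + g, again a function
assert h(5.0) == 2 * 25.0 + 3.0       # pointwise: 2*f(5) + g(5)

# restricting to continuous functions on [0, 1], the uniform norm
# ||y||_inf = max |y(x)| can be approximated on a sample grid
xs = [i / 100 for i in range(101)]
sup_norm = max(abs(h(x)) for x in xs)
assert sup_norm == h(1.0)             # 2x^2 + 3 is increasing on [0, 1]
```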
In functional analysis, the same is seen for continuous linear transformations, including topologies on the vector spaces in the above, and many of the major examples are function spaces carrying a topology; the best known examples include Hilbert spaces and Banach spaces. In functional analysis, the set of all functions from the natural numbers to some set X is called a sequence space. It consists of the set of all possible sequences of elements of X. In topology, one may attempt to put a topology on the space of continuous functions from a topological space X to another one Y, with utility depending on the nature of the spaces. A commonly used example is the compact-open topology, e.g. loop space. Also available is the product topology on the space of set theoretic functions (i.e. not necessarily continuous functions) YX. In this context, this topology is also referred to as the topology of pointwise convergence. In algebraic topology, the study of homotopy theory is essentially that of discrete invariants of function spaces; In the theory of stochastic processes, the basic technical problem is how to construct a probability measure on a function space of paths of the process (functions of time); In category theory, the function space is called an exponential object or map object. It appears in one way as the representation canonical bifunctor; but as (single) functor, of type [ X , − ] {\displaystyle [X,-]} , it appears as an adjoint functor to a functor of type − × X {\displaystyle -\times X} on objects; In functional programming and lambda calculus, function types are used to express the idea of higher-order functions In programming more generally, many higher-order function concepts occur with or without explicit typing, such as closures. In domain theory, the basic idea is to find constructions from partial orders that can model lambda calculus, by creating a well-behaved Cartesian closed category. 
In the representation theory of finite groups, given two finite-dimensional representations V and W of a group G, one can form a representation of G over the vector space of linear maps Hom(V,W) called the Hom representation. == Functional analysis == Functional analysis is organized around adequate techniques to bring function spaces as topological vector spaces within reach of the ideas that would apply to normed spaces of finite dimension. Here we use the real line as an example domain, but the spaces below exist on suitable open subsets Ω ⊆ R n {\displaystyle \Omega \subseteq \mathbb {R} ^{n}} C ( R ) {\displaystyle C(\mathbb {R} )} continuous functions endowed with the uniform norm topology C c ( R ) {\displaystyle C_{c}(\mathbb {R} )} continuous functions with compact support B ( R ) {\displaystyle B(\mathbb {R} )} bounded functions C 0 ( R ) {\displaystyle C_{0}(\mathbb {R} )} continuous functions which vanish at infinity C r ( R ) {\displaystyle C^{r}(\mathbb {R} )} continuous functions that have r continuous derivatives. C ∞ ( R ) {\displaystyle C^{\infty }(\mathbb {R} )} smooth functions C c ∞ ( R ) {\displaystyle C_{c}^{\infty }(\mathbb {R} )} smooth functions with compact support (i.e. 
the set of bump functions) C ω ( R ) {\displaystyle C^{\omega }(\mathbb {R} )} real analytic functions L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} , for 1 ≤ p ≤ ∞ {\displaystyle 1\leq p\leq \infty } , is the Lp space of measurable functions whose p-norm ‖ f ‖ p = ( ∫ R | f | p ) 1 / p {\textstyle \|f\|_{p}=\left(\int _{\mathbb {R} }|f|^{p}\right)^{1/p}} is finite S ( R ) {\displaystyle {\mathcal {S}}(\mathbb {R} )} , the Schwartz space of rapidly decreasing smooth functions and its continuous dual, S ′ ( R ) {\displaystyle {\mathcal {S}}'(\mathbb {R} )} tempered distributions D ( R ) {\displaystyle D(\mathbb {R} )} compact support in limit topology W k , p {\displaystyle W^{k,p}} Sobolev space of functions whose weak derivatives up to order k are in L p {\displaystyle L^{p}} O U {\displaystyle {\mathcal {O}}_{U}} holomorphic functions linear functions piecewise linear functions continuous functions, compact open topology all functions, space of pointwise convergence Hardy space Hölder space Càdlàg functions, also known as the Skorokhod space Lip 0 ( R ) {\displaystyle {\text{Lip}}_{0}(\mathbb {R} )} , the space of all Lipschitz functions on R {\displaystyle \mathbb {R} } that vanish at zero. == Uniform Norm == If y is an element of the function space C ( a , b ) {\displaystyle {\mathcal {C}}(a,b)} of all continuous functions that are defined on a closed interval [a, b], the norm ‖ y ‖ ∞ {\displaystyle \|y\|_{\infty }} defined on C ( a , b ) {\displaystyle {\mathcal {C}}(a,b)} is the maximum absolute value of y (x) for a ≤ x ≤ b, ‖ y ‖ ∞ ≡ max a ≤ x ≤ b | y ( x ) | where y ∈ C ( a , b ) {\displaystyle \|y\|_{\infty }\equiv \max _{a\leq x\leq b}|y(x)|\qquad {\text{where}}\ \ y\in {\mathcal {C}}(a,b)} is called the uniform norm or supremum norm ('sup norm'). == Bibliography == Kolmogorov, A. N., & Fomin, S. V. (1967). Elements of the theory of functions and functional analysis. Courier Dover Publications. Stein, Elias; Shakarchi, R. (2011). 
Functional Analysis: An Introduction to Further Topics in Analysis. Princeton University Press. == See also == List of mathematical functions Clifford algebra Tensor field Spectral theory Functional determinant == References ==
Wikipedia/Function_space
In mathematics, the modern component-free approach to the theory of a tensor views a tensor as an abstract object, expressing some definite type of multilinear concept. Their properties can be derived from their definitions, as linear maps or more generally; and the rules for manipulations of tensors arise as an extension of linear algebra to multilinear algebra. In differential geometry, an intrinsic geometric statement may be described by a tensor field on a manifold, and then doesn't need to make reference to coordinates at all. The same is true in general relativity, of tensor fields describing a physical property. The component-free approach is also used extensively in abstract algebra and homological algebra, where tensors arise naturally. == Definition via tensor products of vector spaces == Given a finite set {V1, ..., Vn} of vector spaces over a common field F, one may form their tensor product V1 ⊗ ... ⊗ Vn, an element of which is termed a tensor. A tensor on the vector space V is then defined to be an element of (i.e., a vector in) a vector space of the form: V ⊗ ⋯ ⊗ V ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ {\displaystyle V\otimes \cdots \otimes V\otimes V^{*}\otimes \cdots \otimes V^{*}} where V∗ is the dual space of V. If there are m copies of V and n copies of V∗ in our product, the tensor is said to be of type (m, n) and contravariant of order m and covariant of order n and of total order m + n. The tensors of order zero are just the scalars (elements of the field F), those of contravariant order 1 are the vectors in V, and those of covariant order 1 are the one-forms in V∗ (for this reason, the elements of the last two spaces are often called the contravariant and covariant vectors). The space of all tensors of type (m, n) is denoted T n m ( V ) = V ⊗ ⋯ ⊗ V ⏟ m ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ n . {\displaystyle T_{n}^{m}(V)=\underbrace {V\otimes \dots \otimes V} _{m}\otimes \underbrace {V^{*}\otimes \dots \otimes V^{*}} _{n}.} Example 1. 
The space of type (1, 1) tensors, T 1 1 ( V ) = V ⊗ V ∗ , {\displaystyle T_{1}^{1}(V)=V\otimes V^{*},} is isomorphic in a natural way to the space of linear transformations from V to V. Example 2. A bilinear form on a real vector space V, V × V → F , {\displaystyle V\times V\to F,} corresponds in a natural way to a type (0, 2) tensor in T 2 0 ( V ) = V ∗ ⊗ V ∗ . {\displaystyle T_{2}^{0}(V)=V^{*}\otimes V^{*}.} An example of such a bilinear form may be defined, termed the associated metric tensor, and is usually denoted g. == Tensor rank == A simple tensor (also called a tensor of rank one, elementary tensor or decomposable tensor) is a tensor that can be written as a product of tensors of the form T = a ⊗ b ⊗ ⋯ ⊗ d {\displaystyle T=a\otimes b\otimes \cdots \otimes d} where a, b, ..., d are nonzero and in V or V∗ – that is, if the tensor is nonzero and completely factorizable. Every tensor can be expressed as a sum of simple tensors. The rank of a tensor T is the minimum number of simple tensors that sum to T. The zero tensor has rank zero. A nonzero order 0 or 1 tensor always has rank 1. The rank of a non-zero order 2 or higher tensor is less than or equal to the product of the dimensions of all but the highest-dimensioned vectors in (a sum of products of) which the tensor can be expressed, which is dn−1 when each product is of n vectors from a finite-dimensional vector space of dimension d. The term rank of a tensor extends the notion of the rank of a matrix in linear algebra, although the term is also often used to mean the order (or degree) of a tensor. The rank of a matrix is the minimum number of column vectors needed to span the range of the matrix. A matrix thus has rank one if it can be written as an outer product of two nonzero vectors: A = v w T . {\displaystyle A=vw^{\mathrm {T} }.} The rank of a matrix A is the smallest number of such outer products that can be summed to produce it: A = v 1 w 1 T + ⋯ + v k w k T . 
{\displaystyle A=v_{1}w_{1}^{\mathrm {T} }+\cdots +v_{k}w_{k}^{\mathrm {T} }.} In indices, a tensor of rank 1 is a tensor of the form T i j … k ℓ … = a i b j ⋯ c k d ℓ ⋯ . {\displaystyle T_{ij\dots }^{k\ell \dots }=a_{i}b_{j}\cdots c^{k}d^{\ell }\cdots .} The rank of a tensor of order 2 agrees with the rank when the tensor is regarded as a matrix, and can be determined from Gaussian elimination for instance. The rank of an order 3 or higher tensor is however often very difficult to determine, and low rank decompositions of tensors are sometimes of great practical interest. In fact, the problem of finding the rank of an order 3 tensor over any finite field is NP-Complete, and over the rationals, is NP-Hard. Computational tasks such as the efficient multiplication of matrices and the efficient evaluation of polynomials can be recast as the problem of simultaneously evaluating a set of bilinear forms z k = ∑ i j T i j k x i y j {\displaystyle z_{k}=\sum _{ij}T_{ijk}x_{i}y_{j}} for given inputs xi and yj. If a low-rank decomposition of the tensor T is known, then an efficient evaluation strategy is known. == Universal property == The space T n m ( V ) {\displaystyle T_{n}^{m}(V)} can be characterized by a universal property in terms of multilinear mappings. Amongst the advantages of this approach are that it gives a way to show that many linear mappings are "natural" or "geometric" (in other words are independent of any choice of basis). Explicit computational information can then be written down using bases, and this order of priorities can be more convenient than proving a formula gives rise to a natural mapping. Another aspect is that tensor products are not used only for free modules, and the "universal" approach carries over more easily to more general situations. 
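The simultaneous evaluation of a set of bilinear forms described above can be sketched in plain Python. The tensor T is stored as a nested list, and the function and example names are illustrative:

```python
# Evaluate z_k = sum_{i,j} T[i][j][k] * x[i] * y[j] for a small order-3 tensor T.
# This is the naive evaluation strategy; a known low-rank decomposition of T
# (as in fast matrix multiplication algorithms) would reduce the work.

def evaluate_bilinear_forms(T, x, y):
    n_i, n_j, n_k = len(T), len(T[0]), len(T[0][0])
    z = [0.0] * n_k
    for i in range(n_i):
        for j in range(n_j):
            for k in range(n_k):
                z[k] += T[i][j][k] * x[i] * y[j]
    return z

# Example: T_ijk = 1 when i == j == k encodes the componentwise product z_k = x_k * y_k.
T = [[[1.0 if i == j == k else 0.0 for k in range(3)] for j in range(3)]
     for i in range(3)]
print(evaluate_bilinear_forms(T, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [4.0, 10.0, 18.0]
```

Here the rank of T (as a sum of simple tensors) governs how many multiplications an optimized evaluation would need, which is why low-rank decompositions are of practical interest.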
A scalar-valued function on a Cartesian product (or direct sum) of vector spaces f : V 1 × ⋯ × V N → F {\displaystyle f:V_{1}\times \cdots \times V_{N}\to F} is multilinear if it is linear in each argument. The space of all multilinear mappings from V1 × ... × VN to W is denoted LN(V1, ..., VN; W). When N = 1, a multilinear mapping is just an ordinary linear mapping, and the space of all linear mappings from V to W is denoted L(V; W). The universal characterization of the tensor product implies that, for each multilinear function f ∈ L m + n ( V ∗ , … , V ∗ ⏟ m , V , … , V ⏟ n ; W ) {\displaystyle f\in L^{m+n}(\underbrace {V^{*},\ldots ,V^{*}} _{m},\underbrace {V,\ldots ,V} _{n};W)} (where W can represent the field of scalars, a vector space, or a tensor space) there exists a unique linear function T f ∈ L ( V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ m ⊗ V ⊗ ⋯ ⊗ V ⏟ n ; W ) {\displaystyle T_{f}\in L(\underbrace {V^{*}\otimes \cdots \otimes V^{*}} _{m}\otimes \underbrace {V\otimes \cdots \otimes V} _{n};W)} such that f ( α 1 , … , α m , v 1 , … , v n ) = T f ( α 1 ⊗ ⋯ ⊗ α m ⊗ v 1 ⊗ ⋯ ⊗ v n ) {\displaystyle f(\alpha _{1},\ldots ,\alpha _{m},v_{1},\ldots ,v_{n})=T_{f}(\alpha _{1}\otimes \cdots \otimes \alpha _{m}\otimes v_{1}\otimes \cdots \otimes v_{n})} for all vi in V and αi in V∗. Using the universal property, it follows, when V is finite dimensional, that the space of (m, n)-tensors admits a natural isomorphism T n m ( V ) ≅ L ( V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ m ⊗ V ⊗ ⋯ ⊗ V ⏟ n ; F ) ≅ L m + n ( V ∗ , … , V ∗ ⏟ m , V , … , V ⏟ n ; F ) . {\displaystyle T_{n}^{m}(V)\cong L(\underbrace {V^{*}\otimes \cdots \otimes V^{*}} _{m}\otimes \underbrace {V\otimes \cdots \otimes V} _{n};F)\cong L^{m+n}(\underbrace {V^{*},\ldots ,V^{*}} _{m},\underbrace {V,\ldots ,V} _{n};F).} Each V in the definition of the tensor corresponds to a V∗ inside the argument of the linear maps, and vice versa. (Note that in the former case, there are m copies of V and n copies of V∗, and in the latter case vice versa). 
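The universal property can be made concrete for a bilinear form f(u, v) = uᵀGv: the induced map T_f acts linearly on the coordinate form of the simple tensor u ⊗ v (the outer-product matrix), and f = T_f ∘ ⊗. A minimal sketch in Python; the matrix G and all names are chosen for illustration:

```python
# A bilinear form f(u, v) = u^T G v factors through the tensor product:
# f(u, v) = T_f(u ⊗ v), where u ⊗ v is represented by the outer-product
# matrix M with M[i][j] = u[i] * v[j], and T_f(M) = sum_{ij} G[i][j] * M[i][j]
# is a *linear* function of M.

G = [[2.0, 1.0],
     [0.0, 3.0]]                  # illustrative matrix defining the bilinear form

def f(u, v):                      # the bilinear (multilinear) map
    return sum(G[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

def outer(u, v):                  # coordinate form of the simple tensor u ⊗ v
    return [[u[i] * v[j] for j in range(2)] for i in range(2)]

def T_f(M):                       # the induced linear map on (0,2) tensors
    return sum(G[i][j] * M[i][j] for i in range(2) for j in range(2))

u, v = [1.0, 2.0], [3.0, -1.0]
print(f(u, v), T_f(outer(u, v)))  # the two values agree
```

The point of the universal property is exactly this factorization: the bilinear data of f is entirely captured by the single linear map T_f on the tensor product space.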
In particular, one has T 0 1 ( V ) ≅ L ( V ∗ ; F ) ≅ V , T 1 0 ( V ) ≅ L ( V ; F ) = V ∗ , T 1 1 ( V ) ≅ L ( V ; V ) . {\displaystyle {\begin{aligned}T_{0}^{1}(V)&\cong L(V^{*};F)\cong V,\\T_{1}^{0}(V)&\cong L(V;F)=V^{*},\\T_{1}^{1}(V)&\cong L(V;V).\end{aligned}}} == Tensor fields == Differential geometry, physics and engineering must often deal with tensor fields on smooth manifolds. The term tensor is sometimes used as a shorthand for tensor field. A tensor field expresses the concept of a tensor that varies from point to point on the manifold. == References == Abraham, Ralph; Marsden, Jerrold E. (1985), Foundations of Mechanics (2nd ed.), Reading, Massachusetts: Addison-Wesley, ISBN 0-201-40840-6. Bourbaki, Nicolas (1989), Elements of Mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9. de Groote, H. F. (1987), Lectures on the Complexity of Bilinear Problems, Lecture Notes in Computer Science, vol. 245, Springer, ISBN 3-540-17205-X. Halmos, Paul (1974), Finite-dimensional Vector Spaces, Springer, ISBN 0-387-90093-4. Håstad, Johan (November 15, 1989), "Tensor Rank Is NP-Complete", Journal of Algorithms, 11 (4): 644–654, doi:10.1016/0196-6774(90)90014-6. Jeevanjee, Nadir (2011), "An Introduction to Tensors and Group Theory for Physicists", Physics Today, 65 (4): 64, Bibcode:2012PhT....65d..64P, doi:10.1063/PT.3.1523, ISBN 978-0-8176-4714-8. Knuth, Donald E. (1998) [1969], The Art of Computer Programming, vol. 2 (3rd ed.), Addison-Wesley, pp. 145–146, ISBN 978-0-201-89684-8. Hackbusch, Wolfgang (2012), Tensor Spaces and Numerical Tensor Calculus, Springer, p. 4, ISBN 978-3-642-28027-6.
Wikipedia/Component-free_treatment_of_tensors
In mathematics, the tensor bundle of a manifold is the direct sum of all tensor products of the tangent bundle and the cotangent bundle of that manifold. To do calculus on the tensor bundle a connection is needed, except for the special case of the exterior derivative of antisymmetric tensors. == Definition == A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle. == References == Lee, John M. (2012). Introduction to Smooth Manifolds. Graduate Texts in Mathematics. Vol. 218 (Second ed.). New York London: Springer-Verlag. ISBN 978-1-4419-9981-8. OCLC 808682771. Saunders, David J. (1989). The Geometry of Jet Bundles. London Mathematical Society Lecture Note Series. Vol. 142. Cambridge New York: Cambridge University Press. ISBN 978-0-521-36948-0. OCLC 839304386. Steenrod, Norman (5 April 1999). The Topology of Fibre Bundles. Princeton Mathematical Series. Vol. 14. Princeton, N.J.: Princeton University Press. ISBN 978-0-691-00548-5. OCLC 40734875. == See also == Fiber bundle – Continuous surjection satisfying a local triviality condition Spinor bundle – Geometric structure Tensor field – Assignment of a tensor continuously varying across a region of space
Wikipedia/Tensor_bundle
In mathematics, differential refers to several related notions derived from the early days of calculus, put on a rigorous footing, such as infinitesimal differences and the derivatives of functions. The term is used in various branches of mathematics such as calculus, differential geometry, algebraic geometry and algebraic topology. == Introduction == The term differential is used nonrigorously in calculus to refer to an infinitesimal ("infinitely small") change in some varying quantity. For example, if x is a variable, then a change in the value of x is often denoted Δx (pronounced delta x). The differential dx represents an infinitely small change in the variable x. The idea of an infinitely small or infinitely slow change is, intuitively, extremely useful, and there are a number of ways to make the notion mathematically precise. Using calculus, it is possible to relate the infinitely small changes of various variables to each other mathematically using derivatives. If y is a function of x, then the differential dy of y is related to dx by the formula d y = d y d x d x , {\displaystyle dy={\frac {dy}{dx}}\,dx,} where d y d x {\displaystyle {\frac {dy}{dx}}\,} denotes not 'dy divided by dx' as one would intuitively read, but 'the derivative of y with respect to x '. 
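The relation dy = (dy/dx) dx can be illustrated numerically: for a small change Δx, the differential f′(x)·Δx closely approximates the actual change Δy. A minimal sketch, with the function and step size chosen for illustration:

```python
# Compare the actual change Δy = f(x + Δx) - f(x) with the differential
# dy = f'(x) * Δx for f(x) = x**2, whose derivative is f'(x) = 2x.

def f(x):
    return x ** 2

def f_prime(x):
    return 2 * x

x, dx = 3.0, 1e-4
delta_y = f(x + dx) - f(x)        # actual change: 2*x*dx + dx**2
dy = f_prime(x) * dx              # differential (the linear part of the change)
print(delta_y, dy, delta_y - dy)  # the discrepancy is dx**2, negligible for small dx
```

As Δx shrinks, the ratio Δy/Δx tends to the derivative, which is the limit statement made precise below.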
This formula summarizes the idea that the derivative of y with respect to x is the limit of the ratio of differences Δy/Δx as Δx approaches zero: d y ( x ) d x = lim Δ x → 0 Δ y ( x ) Δ x {\displaystyle {\dfrac {\mathrm {d} y(x)}{\mathrm {d} x}}=\lim _{\Delta x\rightarrow 0}{\dfrac {\Delta y(x)}{\Delta x}}} The symbol d {\displaystyle d} may be written italicised ( d {\displaystyle d} ), slanted (d), or upright; the upright form emphasizes that d {\displaystyle \mathrm {d} } is an operator designation, like the summation operator ( ∑ ) {\displaystyle \left(\sum \right)} , the delta operator (the finite difference operator) ( Δ {\displaystyle \Delta } ), or the trigonometric functions ( sin , cos , tan {\displaystyle \sin ,\cos ,\tan } ). === Basic notions === In calculus, the differential represents a change in the linearization of a function. The total differential is its generalization for functions of multiple variables. In traditional approaches to calculus, differentials (e.g. dx, dy, dt, etc.) are interpreted as infinitesimals. There are several methods of defining infinitesimals rigorously, but it is sufficient to say that an infinitesimal number is smaller in absolute value than any positive real number, just as an infinitely large number is larger than any real number. The differential is another name for the Jacobian matrix of partial derivatives of a function from Rn to Rm (especially when this matrix is viewed as a linear map). More generally, the differential or pushforward refers to the derivative of a map between smooth manifolds and the pushforward operations it defines. The differential is also used to define the dual concept of pullback. Stochastic calculus provides a notion of stochastic differential and an associated calculus for stochastic processes. The integrator in a Stieltjes integral is represented as the differential of a function.
Formally, the differential appearing under the integral behaves exactly as a differential: thus, the integration by substitution and integration by parts formulae for Stieltjes integral correspond, respectively, to the chain rule and product rule for the differential. == History and usage == Infinitesimal quantities played a significant role in the development of calculus. Archimedes used them, even though he did not believe that arguments involving infinitesimals were rigorous. Isaac Newton referred to them as fluxions. However, it was Gottfried Leibniz who coined the term differentials for infinitesimal quantities and introduced the notation for them which is still used today. In Leibniz's notation, if x is a variable quantity, then dx denotes an infinitesimal change in the variable x. Thus, if y is a function of x, then the derivative of y with respect to x is often denoted dy/dx, which would otherwise be denoted (in the notation of Newton or Lagrange) ẏ or y′. The use of differentials in this form attracted much criticism, for instance in the famous pamphlet The Analyst by Bishop Berkeley. Nevertheless, the notation has remained popular because it suggests strongly the idea that the derivative of y at x is its instantaneous rate of change (the slope of the graph's tangent line), which may be obtained by taking the limit of the ratio Δy/Δx as Δx becomes arbitrarily small. Differentials are also compatible with dimensional analysis, where a differential such as dx has the same dimensions as the variable x. Calculus evolved into a distinct branch of mathematics during the 17th century CE, although there were antecedents going back to antiquity. The presentations of, e.g., Newton, Leibniz, were marked by non-rigorous definitions of terms like differential, fluent and "infinitely small". 
While many of the arguments in Bishop Berkeley's 1734 The Analyst are theological in nature, modern mathematicians acknowledge the validity of his argument against "the Ghosts of departed Quantities"; however, the modern approaches do not have the same technical issues. Despite the lack of rigor, immense progress was made in the 17th and 18th centuries. In the 19th century, Cauchy and others gradually developed the epsilon-delta approach to continuity, limits and derivatives, giving a solid conceptual foundation for calculus. In the 20th century, several new concepts in, e.g., multivariable calculus, differential geometry, seemed to encapsulate the intent of the old terms, especially differential; both differential and infinitesimal are used with new, more rigorous, meanings. Differentials are also used in the notation for integrals because an integral can be regarded as an infinite sum of infinitesimal quantities: the area under a graph is obtained by subdividing the graph into infinitely thin strips and summing their areas. In an expression such as ∫ f ( x ) d x , {\displaystyle \int f(x)\,dx,} the integral sign (which is a modified long s) denotes the infinite sum, f(x) denotes the "height" of a thin strip, and the differential dx denotes its infinitely thin width. == Approaches == There are several approaches for making the notion of differentials mathematically precise. Differentials as linear maps. This approach underlies the definition of the derivative and the exterior derivative in differential geometry. Differentials as nilpotent elements of commutative rings. This approach is popular in algebraic geometry. Differentials in smooth models of set theory. This approach is known as synthetic differential geometry or smooth infinitesimal analysis and is closely related to the algebraic geometric approach, except that ideas from topos theory are used to hide the mechanisms by which nilpotent infinitesimals are introduced.
Differentials as infinitesimals in hyperreal number systems, which are extensions of the real numbers that contain invertible infinitesimals and infinitely large numbers. This is the approach of nonstandard analysis pioneered by Abraham Robinson. These approaches are very different from each other, but they have in common the idea of being quantitative, i.e., saying not just that a differential is infinitely small, but how small it is. === Differentials as linear maps === There is a simple way to make precise sense of differentials, first used on the Real line by regarding them as linear maps. It can be used on R {\displaystyle \mathbb {R} } , R n {\displaystyle \mathbb {R} ^{n}} , a Hilbert space, a Banach space, or more generally, a topological vector space. The case of the Real line is the easiest to explain. This type of differential is also known as a covariant vector or cotangent vector, depending on context. ==== Differentials as linear maps on R ==== Suppose f ( x ) {\displaystyle f(x)} is a real-valued function on R {\displaystyle \mathbb {R} } . We can reinterpret the variable x {\displaystyle x} in f ( x ) {\displaystyle f(x)} as being a function rather than a number, namely the identity map on the real line, which takes a real number p {\displaystyle p} to itself: x ( p ) = p {\displaystyle x(p)=p} . Then f ( x ) {\displaystyle f(x)} is the composite of f {\displaystyle f} with x {\displaystyle x} , whose value at p {\displaystyle p} is f ( x ( p ) ) = f ( p ) {\displaystyle f(x(p))=f(p)} . The differential d ⁡ f {\displaystyle \operatorname {d} f} (which of course depends on f {\displaystyle f} ) is then a function whose value at p {\displaystyle p} (usually denoted d f p {\displaystyle df_{p}} ) is not a number, but a linear map from R {\displaystyle \mathbb {R} } to R {\displaystyle \mathbb {R} } . 
Since a linear map from R {\displaystyle \mathbb {R} } to R {\displaystyle \mathbb {R} } is given by a 1 × 1 {\displaystyle 1\times 1} matrix, it is essentially the same thing as a number, but the change in the point of view allows us to think of d f p {\displaystyle df_{p}} as an infinitesimal and compare it with the standard infinitesimal d x p {\displaystyle dx_{p}} , which is again just the identity map from R {\displaystyle \mathbb {R} } to R {\displaystyle \mathbb {R} } (a 1 × 1 {\displaystyle 1\times 1} matrix with entry 1 {\displaystyle 1} ). The identity map has the property that if ε {\displaystyle \varepsilon } is very small, then d x p ( ε ) {\displaystyle dx_{p}(\varepsilon )} is very small, which enables us to regard it as infinitesimal. The differential d f p {\displaystyle df_{p}} has the same property, because it is just a multiple of d x p {\displaystyle dx_{p}} , and this multiple is the derivative f ′ ( p ) {\displaystyle f'(p)} by definition. We therefore obtain that d f p = f ′ ( p ) d x p {\displaystyle df_{p}=f'(p)\,dx_{p}} , and hence d f = f ′ d x {\displaystyle df=f'\,dx} . Thus we recover the idea that f ′ {\displaystyle f'} is the ratio of the differentials d f {\displaystyle df} and d x {\displaystyle dx} . This would just be a trick were it not for the fact that: it captures the idea of the derivative of f {\displaystyle f} at p {\displaystyle p} as the best linear approximation to f {\displaystyle f} at p {\displaystyle p} ; it has many generalizations. 
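The point of view above, in which df_p is a linear map rather than a number, can be modeled directly: dx_p is the identity map, and df_p is the scalar multiple f′(p)·dx_p. A sketch in Python; the helper names are illustrative:

```python
# Represent the differential df_p of f at p as an actual linear map R -> R,
# namely eps |-> f'(p) * eps, and the standard infinitesimal dx_p as the
# identity map eps |-> eps.  Then df_p = f'(p) * dx_p as linear maps.

def differential(f_prime, p):
    """Return df_p: the best linear approximation to f(p + eps) - f(p)."""
    def df_p(eps):
        return f_prime(p) * eps
    return df_p

f = lambda x: x ** 3
f_prime = lambda x: 3 * x ** 2

p, eps = 2.0, 1e-5
df_p = differential(f_prime, p)
dx_p = lambda e: e                          # identity map: the standard infinitesimal

print(df_p(eps))                            # f'(2) * eps
print(f(p + eps) - f(p))                    # close to df_p(eps) for small eps
print(df_p(eps) == f_prime(p) * dx_p(eps))  # df_p is the multiple f'(p) of dx_p
```

The closure df_p is "essentially the same thing as a number" (its 1 × 1 matrix entry f′(p)), but treating it as a map is what generalizes to higher dimensions.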
==== Differentials as linear maps on Rn ==== If f {\displaystyle f} is a function from R n {\displaystyle \mathbb {R} ^{n}} to R {\displaystyle \mathbb {R} } , then we say that f {\displaystyle f} is differentiable at p ∈ R n {\displaystyle p\in \mathbb {R} ^{n}} if there is a linear map d f p {\displaystyle df_{p}} from R n {\displaystyle \mathbb {R} ^{n}} to R {\displaystyle \mathbb {R} } such that for any ε > 0 {\displaystyle \varepsilon >0} , there is a neighbourhood N {\displaystyle N} of p {\displaystyle p} such that for x ∈ N {\displaystyle x\in N} , | f ( x ) − f ( p ) − d f p ( x − p ) | < ε | x − p | . {\displaystyle \left|f(x)-f(p)-df_{p}(x-p)\right|<\varepsilon \left|x-p\right|.} We can now use the same trick as in the one-dimensional case and think of the expression f ( x 1 , x 2 , … , x n ) {\displaystyle f(x_{1},x_{2},\ldots ,x_{n})} as the composite of f {\displaystyle f} with the standard coordinates x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} on R n {\displaystyle \mathbb {R} ^{n}} (so that x j ( p ) {\displaystyle x_{j}(p)} is the j {\displaystyle j} -th component of p ∈ R n {\displaystyle p\in \mathbb {R} ^{n}} ). Then the differentials ( d x 1 ) p , ( d x 2 ) p , … , ( d x n ) p {\displaystyle \left(dx_{1}\right)_{p},\left(dx_{2}\right)_{p},\ldots ,\left(dx_{n}\right)_{p}} at a point p {\displaystyle p} form a basis for the vector space of linear maps from R n {\displaystyle \mathbb {R} ^{n}} to R {\displaystyle \mathbb {R} } and therefore, if f {\displaystyle f} is differentiable at p {\displaystyle p} , we can write d ⁡ f p {\displaystyle \operatorname {d} f_{p}} as a linear combination of these basis elements: d f p = ∑ j = 1 n D j f ( p ) ( d x j ) p . 
{\displaystyle df_{p}=\sum _{j=1}^{n}D_{j}f(p)\,(dx_{j})_{p}.} The coefficients D j f ( p ) {\displaystyle D_{j}f(p)} are (by definition) the partial derivatives of f {\displaystyle f} at p {\displaystyle p} with respect to x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} . Hence, if f {\displaystyle f} is differentiable on all of R n {\displaystyle \mathbb {R} ^{n}} , we can write, more concisely: d ⁡ f = ∂ f ∂ x 1 d x 1 + ∂ f ∂ x 2 d x 2 + ⋯ + ∂ f ∂ x n d x n . {\displaystyle \operatorname {d} f={\frac {\partial f}{\partial x_{1}}}\,dx_{1}+{\frac {\partial f}{\partial x_{2}}}\,dx_{2}+\cdots +{\frac {\partial f}{\partial x_{n}}}\,dx_{n}.} In the one-dimensional case this becomes d f = d f d x d x {\displaystyle df={\frac {df}{dx}}dx} as before. This idea generalizes straightforwardly to functions from R n {\displaystyle \mathbb {R} ^{n}} to R m {\displaystyle \mathbb {R} ^{m}} . Furthermore, it has the decisive advantage over other definitions of the derivative that it is invariant under changes of coordinates. This means that the same idea can be used to define the differential of smooth maps between smooth manifolds. Aside: Note that the existence of all the partial derivatives of f ( x ) {\displaystyle f(x)} at x {\displaystyle x} is a necessary condition for the existence of a differential at x {\displaystyle x} . However it is not a sufficient condition. For counterexamples, see Gateaux derivative. ==== Differentials as linear maps on a vector space ==== The same procedure works on a vector space with enough additional structure to reasonably talk about continuity. The most concrete case is a Hilbert space, also known as a complete inner product space, where the inner product and its associated norm define a suitable concept of distance. The same procedure works for a Banach space, also known as a complete normed vector space.
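The total differential formula df = Σⱼ (∂f/∂xⱼ) dxⱼ can likewise be checked numerically for a function of several variables. A sketch, with the function and displacement chosen for illustration:

```python
# For f(x1, x2) = x1**2 * x2, the differential at p is the linear map
# df_p(h) = (∂f/∂x1) h1 + (∂f/∂x2) h2, with partial derivatives
# ∂f/∂x1 = 2*x1*x2 and ∂f/∂x2 = x1**2.

def f(x1, x2):
    return x1 ** 2 * x2

def df(p, h):
    """Total differential of f at p applied to the displacement h."""
    x1, x2 = p
    partials = (2 * x1 * x2, x1 ** 2)          # (∂f/∂x1, ∂f/∂x2) at p
    return sum(d * hi for d, hi in zip(partials, h))

p, h = (1.0, 2.0), (1e-5, -2e-5)
linear_part = df(p, h)
actual_change = f(p[0] + h[0], p[1] + h[1]) - f(*p)
print(linear_part, actual_change)  # agree to first order in |h|
```

The coefficients of the differentials (dx₁)ₚ, (dx₂)ₚ in this linear map are exactly the partial derivatives, as in the displayed formula.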
However, for a more general topological vector space, some of the details are more abstract because there is no concept of distance. For the important case of a finite dimension, any inner product space is a Hilbert space, any normed vector space is a Banach space and any topological vector space is complete. As a result, you can define a coordinate system from an arbitrary basis and use the same technique as for R n {\displaystyle \mathbb {R} ^{n}} . === Differentials as germs of functions === This approach works on any differentiable manifold. If U and V are open sets containing p, f : U → R {\displaystyle f\colon U\to \mathbb {R} } is continuous, and g : V → R {\displaystyle g\colon V\to \mathbb {R} } is continuous, then f is equivalent to g at p, denoted f ∼ p g {\displaystyle f\sim _{p}g} , if and only if there is an open W ⊆ U ∩ V {\displaystyle W\subseteq U\cap V} containing p such that f ( x ) = g ( x ) {\displaystyle f(x)=g(x)} for every x in W. The germ of f at p, denoted [ f ] p {\displaystyle [f]_{p}} , is the set of all real continuous functions equivalent to f at p; if f is smooth at p then [ f ] p {\displaystyle [f]_{p}} is a smooth germ.
If U 1 {\displaystyle U_{1}} , U 2 {\displaystyle U_{2}} , V 1 {\displaystyle V_{1}} , and V 2 {\displaystyle V_{2}} are open sets containing p; f 1 : U 1 → R {\displaystyle f_{1}\colon U_{1}\to \mathbb {R} } , f 2 : U 2 → R {\displaystyle f_{2}\colon U_{2}\to \mathbb {R} } , g 1 : V 1 → R {\displaystyle g_{1}\colon V_{1}\to \mathbb {R} } and g 2 : V 2 → R {\displaystyle g_{2}\colon V_{2}\to \mathbb {R} } are smooth functions; f 1 ∼ p g 1 {\displaystyle f_{1}\sim _{p}g_{1}} ; f 2 ∼ p g 2 {\displaystyle f_{2}\sim _{p}g_{2}} ; and r is a real number; then r ∗ f 1 ∼ p r ∗ g 1 {\displaystyle r*f_{1}\sim _{p}r*g_{1}} , f 1 + f 2 : U 1 ∩ U 2 → R ∼ p g 1 + g 2 : V 1 ∩ V 2 → R {\displaystyle f_{1}+f_{2}\colon U_{1}\cap U_{2}\to \mathbb {R} \sim _{p}g_{1}+g_{2}\colon V_{1}\cap V_{2}\to \mathbb {R} } , and f 1 ∗ f 2 : U 1 ∩ U 2 → R ∼ p g 1 ∗ g 2 : V 1 ∩ V 2 → R {\displaystyle f_{1}*f_{2}\colon U_{1}\cap U_{2}\to \mathbb {R} \sim _{p}g_{1}*g_{2}\colon V_{1}\cap V_{2}\to \mathbb {R} } . This shows that the germs at p form an algebra. Define I p {\displaystyle {\mathcal {I}}_{p}} to be the set of all smooth germs vanishing at p and I p 2 {\displaystyle {\mathcal {I}}_{p}^{2}} to be the product of ideals I p I p {\displaystyle {\mathcal {I}}_{p}{\mathcal {I}}_{p}} . Then a differential at p (cotangent vector at p) is an element of I p / I p 2 {\displaystyle {\mathcal {I}}_{p}/{\mathcal {I}}_{p}^{2}} . The differential of a smooth function f at p, denoted d f p {\displaystyle \mathrm {d} f_{p}} , is [ f − f ( p ) ] p / I p 2 {\displaystyle [f-f(p)]_{p}/{\mathcal {I}}_{p}^{2}} . A similar approach is to define differential equivalence of first order in terms of derivatives in an arbitrary coordinate patch. Then the differential of f at p is the set of all functions differentially equivalent to f − f ( p ) {\displaystyle f-f(p)} at p.
=== Algebraic geometry === In algebraic geometry, differentials and other infinitesimal notions are handled in a very explicit way by accepting that the coordinate ring or structure sheaf of a space may contain nilpotent elements. The simplest example is the ring of dual numbers R[ε], where ε2 = 0. This can be motivated by the algebro-geometric point of view on the derivative of a function f from R to R at a point p. For this, note first that f − f(p) belongs to the ideal Ip of functions on R which vanish at p. If the derivative of f vanishes at p, then f − f(p) belongs to the square Ip2 of this ideal. Hence the derivative of f at p may be captured by the equivalence class [f − f(p)] in the quotient space Ip/Ip2, and the 1-jet of f (which encodes its value and its first derivative) is the equivalence class of f in the space of all functions modulo Ip2. Algebraic geometers regard this equivalence class as the restriction of f to a thickened version of the point p whose coordinate ring is not R (which is the quotient space of functions on R modulo Ip) but R[ε] which is the quotient space of functions on R modulo Ip2. Such a thickened point is a simple example of a scheme. ==== Algebraic geometry notions ==== Differentials are also important in algebraic geometry, and there are several important notions. Abelian differentials usually mean differential one-forms on an algebraic curve or Riemann surface. Quadratic differentials (which behave like "squares" of abelian differentials) are also important in the theory of Riemann surfaces. Kähler differentials provide a general notion of differential in algebraic geometry. === Synthetic differential geometry === Another approach to infinitesimals is the method of synthetic differential geometry or smooth infinitesimal analysis. This is closely related to the algebraic-geometric approach, except that the infinitesimals are more implicit and intuitive.
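The ring of dual numbers R[ε] from the algebraic-geometry passage above can be modeled directly in code, and doing so recovers the derivative automatically; this is the idea behind forward-mode automatic differentiation. A sketch (the class name and supported operators are illustrative, limited here to polynomials):

```python
# A dual number a + b*ε with ε*ε = 0.  Evaluating a polynomial f at
# p + ε yields f(p) + f'(p)*ε: the ε-coefficient is exactly the derivative,
# mirroring the equivalence class of f in the space of functions modulo Ip².

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # value part and ε-coefficient

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1 ε)(a2 + b2 ε) = a1 a2 + (a1 b2 + b1 a2) ε, since ε*ε = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1       # f'(x) = 6x + 2

result = f(Dual(4.0, 1.0))             # evaluate at the "thickened point" 4 + ε
print(result.a, result.b)              # value f(4) and derivative f'(4)
```

The nilpotency of ε is what truncates the computation at first order: all the information carried by `result` is exactly the 1-jet of f at the point.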
The main idea of this approach is to replace the category of sets with another category of smoothly varying sets which is a topos. In this category, one can define the real numbers, smooth functions, and so on, but the real numbers automatically contain nilpotent infinitesimals, so these do not need to be introduced by hand as in the algebraic geometric approach. However the logic in this new category is not identical to the familiar logic of the category of sets: in particular, the law of the excluded middle does not hold. This means that set-theoretic mathematical arguments only extend to smooth infinitesimal analysis if they are constructive (e.g., do not use proof by contradiction). Constructivists regard this disadvantage as a positive thing, since it forces one to find constructive arguments wherever they are available. === Nonstandard analysis === The final approach to infinitesimals again involves extending the real numbers, but in a less drastic way. In the nonstandard analysis approach there are no nilpotent infinitesimals, only invertible ones, which may be viewed as the reciprocals of infinitely large numbers. Such extensions of the real numbers may be constructed explicitly using equivalence classes of sequences of real numbers, so that, for example, the sequence (1, 1/2, 1/3, ..., 1/n, ...) represents an infinitesimal. The first-order logic of this new set of hyperreal numbers is the same as the logic for the usual real numbers, but the completeness axiom (which involves second-order logic) does not hold. Nevertheless, this suffices to develop an elementary and quite intuitive approach to calculus using infinitesimals, see transfer principle. == Differential geometry == The notion of a differential motivates several concepts in differential geometry (and differential topology). The differential (Pushforward) of a map between manifolds. Differential forms provide a framework which accommodates multiplication and differentiation of differentials. 
The exterior derivative is a notion of differentiation of differential forms which generalizes the differential of a function (which is a differential 1-form). Pullback is, in particular, a geometric name for the chain rule for composing a map between manifolds with a differential form on the target manifold. Covariant derivatives or differentials provide a general notion for differentiating of vector fields and tensor fields on a manifold, or, more generally, sections of a vector bundle: see Connection (vector bundle). This ultimately leads to the general concept of a connection. == Other meanings == The term differential has also been adopted in homological algebra and algebraic topology, because of the role the exterior derivative plays in de Rham cohomology: in a cochain complex ( C ∙ , d ∙ ) , {\displaystyle (C_{\bullet },d_{\bullet }),} the maps (or coboundary operators) di are often called differentials. Dually, the boundary operators in a chain complex are sometimes called codifferentials. The properties of the differential also motivate the algebraic notions of a derivation and a differential algebra. == See also == Differential equation Differential form Differential of a function == Notes == == Citations == == References == Apostol, Tom M. (1967), Calculus (2nd ed.), Wiley, ISBN 978-0-471-00005-1. Bell, John L. (1998), Invitation to Smooth Infinitesimal Analysis (PDF). Boyer, Carl B. (1991), "Archimedes of Syracuse", A History of Mathematics (2nd ed.), John Wiley & Sons, Inc., ISBN 978-0-471-54397-8. Darling, R. W. R. (1994), Differential forms and connections, Cambridge, UK: Cambridge University Press, Bibcode:1994dfc..book.....D, ISBN 978-0-521-46800-8. Eisenbud, David; Harris, Joe (1998), The Geometry of Schemes, Springer-Verlag, ISBN 978-0-387-98637-1 Keisler, H. Jerome (1986), Elementary Calculus: An Infinitesimal Approach (2nd ed.). Kock, Anders (2006), Synthetic Differential Geometry (PDF) (2nd ed.), Cambridge University Press. Lawvere, F.W. 
(1968), Outline of synthetic differential geometry (PDF) (published 1998). Moerdijk, I.; Reyes, Gonzalo E. (1991), Models for Smooth Infinitesimal Analysis, Springer-Verlag, ISBN 978-1-441-93095-8. Robinson, Abraham (1996), Non-standard analysis, Princeton University Press, ISBN 978-0-691-04490-3. Weisstein, Eric W. "Differentials". MathWorld.
Wikipedia/Differential_(infinitesimal)
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C ("CBLAS interface") and Fortran ("BLAS interface"). Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations typically take advantage of special floating-point hardware such as vector registers or SIMD instructions. BLAS originated as a Fortran library in 1979 and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. This Fortran library is known as the reference implementation (sometimes confusingly referred to as the BLAS library) and is not optimized for speed but is in the public domain. Most libraries that offer linear algebra routines conform to the BLAS interface, allowing library users to develop programs that are indifferent to the BLAS library being used. Many BLAS libraries have been developed, targeting various hardware platforms. Examples include cuBLAS (NVIDIA GPU, GPGPU), rocBLAS (AMD GPU), and OpenBLAS. Examples of CPU-based BLAS library branches include: OpenBLAS, BLIS (BLAS-like Library Instantiation Software), Arm Performance Libraries, ATLAS, and Intel Math Kernel Library (iMKL). AMD maintains a fork of BLIS that is optimized for the AMD platform. ATLAS is a portable library that automatically optimizes itself for an arbitrary architecture. iMKL is a freeware and proprietary vendor library optimized for x86 and x86-64 with a performance emphasis on Intel processors. 
OpenBLAS is an open-source library that is hand-optimized for many of the popular architectures. The LINPACK benchmarks rely heavily on the BLAS routine gemm for their performance measurements. Many numerical software applications use BLAS-compatible libraries to do linear algebra computations, including LAPACK, LINPACK, Armadillo, GNU Octave, Mathematica, MATLAB, NumPy, R, Julia and Lisp-Stat. == Background == With the advent of numerical programming, sophisticated subroutine libraries became useful. These libraries would contain subroutines for common high-level mathematical operations such as root finding, matrix inversion, and solving systems of equations. The language of choice was FORTRAN. The most prominent numerical programming library was IBM's Scientific Subroutine Package (SSP). These subroutine libraries allowed programmers to concentrate on their specific problems and avoid re-implementing well-known algorithms. The library routines would also be better than average implementations; matrix algorithms, for example, might use full pivoting to get better numerical accuracy. The libraries would also provide more efficient specialized routines; for example, a routine for solving an upper-triangular system of equations. The libraries would include single-precision and double-precision versions of some algorithms. Initially, these subroutines used hard-coded loops for their low-level operations. For example, if a subroutine needed to perform a matrix multiplication, then the subroutine would have three nested loops. Linear algebra programs have many common low-level operations (the so-called "kernel" operations, not related to operating systems). Between 1973 and 1977, several of these kernel operations were identified. These kernel operations became defined subroutines that math libraries could call. 
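For illustration (in Python rather than the FORTRAN of the era), the "hard-coded loops" described above amount to a triple loop like the following; this is exactly the kind of kernel that BLAS later factored out into a named, separately optimized subroutine:

```python
# Illustrative sketch only: the kind of hard-coded triple loop early numerical
# codes repeated everywhere, and that BLAS turned into a callable kernel.
def naive_matmul(A, B):
    """C = A * B for dense matrices stored as row-major lists of lists."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):          # rows of A
        for j in range(m):      # columns of B
            s = 0.0
            for p in range(k):  # inner product along the shared dimension
                s += A[i][p] * B[p][j]
            C[i][j] = s
    return C

print(naive_matmul([[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]))
# [[19.0, 22.0], [43.0, 50.0]]
```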
The kernel calls had advantages over hard-coded loops: the library routine would be more readable, there were fewer chances for bugs, and the kernel implementation could be optimized for speed. A specification for these kernel operations using scalars and vectors, the level-1 Basic Linear Algebra Subroutines (BLAS), was published in 1979. BLAS was used to implement the linear algebra subroutine library LINPACK. The BLAS abstraction allows customization for high performance. For example, LINPACK is a general purpose library that can be used on many different machines without modification. LINPACK could use a generic version of BLAS. To gain performance, different machines might use tailored versions of BLAS. As computer architectures became more sophisticated, vector machines appeared. BLAS for a vector machine could use the machine's fast vector operations. (While vector processors eventually fell out of favor, vector instructions in modern CPUs are essential for optimal performance in BLAS routines.) Other machine features became available and could also be exploited. Consequently, BLAS was augmented from 1984 to 1986 with level-2 kernel operations that concerned vector-matrix operations. Memory hierarchy was also recognized as something to exploit. Many computers have cache memory that is much faster than main memory; keeping matrix manipulations localized allows better usage of the cache. In 1987 and 1988, the level 3 BLAS were identified to do matrix-matrix operations. The level 3 BLAS encouraged block-partitioned algorithms. The LAPACK library uses level 3 BLAS. The original BLAS concerned only densely stored vectors and matrices. Further extensions to BLAS, such as for sparse matrices, have been addressed. 
== Functionality == BLAS functionality is categorized into three sets of routines called "levels", which correspond to both the chronological order of definition and publication, as well as the degree of the polynomial in the complexities of algorithms; Level 1 BLAS operations typically take linear time, O(n), Level 2 operations quadratic time and Level 3 operations cubic time. Modern BLAS implementations typically provide all three levels. === Level 1 === This level consists of all the routines described in the original presentation of BLAS (1979), which defined only vector operations on strided arrays: dot products, vector norms, a generalized vector addition of the form y ← α x + y {\displaystyle {\boldsymbol {y}}\leftarrow \alpha {\boldsymbol {x}}+{\boldsymbol {y}}} (called "axpy", "a x plus y") and several other operations. === Level 2 === This level contains matrix-vector operations including, among other things, a generalized matrix-vector multiplication (gemv): y ← α A x + β y {\displaystyle {\boldsymbol {y}}\leftarrow \alpha {\boldsymbol {A}}{\boldsymbol {x}}+\beta {\boldsymbol {y}}} as well as a solver for x in the linear equation T x = y {\displaystyle {\boldsymbol {T}}{\boldsymbol {x}}={\boldsymbol {y}}} with T being triangular. Design of the Level 2 BLAS started in 1984, with results published in 1988. The Level 2 subroutines are especially intended to improve performance of programs using BLAS on vector processors, where Level 1 BLAS are suboptimal "because they hide the matrix-vector nature of the operations from the compiler." === Level 3 === This level, formally published in 1990, contains matrix-matrix operations, including a "general matrix multiplication" (gemm), of the form C ← α A B + β C , {\displaystyle {\boldsymbol {C}}\leftarrow \alpha {\boldsymbol {A}}{\boldsymbol {B}}+\beta {\boldsymbol {C}},} where A and B can optionally be transposed or hermitian-conjugated inside the routine, and all three matrices may be strided. 
The ordinary matrix multiplication A B can be performed by setting α to one and C to an all-zeros matrix of the appropriate size. Also included in Level 3 are routines for computing B ← α T − 1 B , {\displaystyle {\boldsymbol {B}}\leftarrow \alpha {\boldsymbol {T}}^{-1}{\boldsymbol {B}},} where T is a triangular matrix, among other functionality. Due to the ubiquity of matrix multiplications in many scientific applications, including for the implementation of the rest of Level 3 BLAS, and because faster algorithms exist beyond the obvious repetition of matrix-vector multiplication, gemm is a prime target of optimization for BLAS implementers. E.g., by decomposing one or both of A, B into block matrices, gemm can be implemented recursively. This is one of the motivations for including the β parameter, so the results of previous blocks can be accumulated. Note that this decomposition requires the special case β = 1 which many implementations optimize for, thereby eliminating one multiplication for each value of C. This decomposition allows for better locality of reference both in space and time of the data used in the product. This, in turn, takes advantage of the cache on the system. For systems with more than one level of cache, the blocking can be applied a second time to the order in which the blocks are used in the computation. Both of these levels of optimization are used in implementations such as ATLAS. More recently, implementations by Kazushige Goto have shown that blocking only for the L2 cache, combined with careful amortizing of copying to contiguous memory to reduce TLB misses, is superior to ATLAS. A highly tuned implementation based on these ideas is part of the GotoBLAS, OpenBLAS and BLIS. 
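To make the three levels and the blocked accumulation concrete, here is a pure-Python sketch; the names daxpy, dgemv, and dgemm follow the BLAS double-precision naming convention, but these loops are only the reference semantics, not a real (optimized) BLAS implementation:

```python
def daxpy(alpha, x, y):
    """Level 1: y <- alpha*x + y (vector-vector, O(n) work)."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def dgemv(alpha, A, x, beta, y):
    """Level 2: y <- alpha*A*x + beta*y (matrix-vector, O(n^2) work)."""
    return [alpha * sum(a * xi for a, xi in zip(row, x)) + beta * yi
            for row, yi in zip(A, y)]

def dgemm(alpha, A, B, beta, C):
    """Level 3: C <- alpha*A*B + beta*C (matrix-matrix, O(n^3) work)."""
    n, k, m = len(A), len(B), len(B[0])
    return [[alpha * sum(A[i][p] * B[p][j] for p in range(k)) + beta * C[i][j]
             for j in range(m)]
            for i in range(n)]

def blocked_matmul(A, B, bs):
    """C = A*B computed block by block. Each partial block product is
    accumulated into C -- the beta = 1 case of dgemm -- which is why the
    beta parameter matters for block-partitioned algorithms."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, bs):
        for j0 in range(0, n, bs):
            for p0 in range(0, n, bs):
                # C[i0:i0+bs, j0:j0+bs] += A-block * B-block  (beta = 1)
                for i in range(i0, min(i0 + bs, n)):
                    for j in range(j0, min(j0 + bs, n)):
                        C[i][j] += sum(A[i][p] * B[p][j]
                                       for p in range(p0, min(p0 + bs, n)))
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
Z = [[0.0, 0.0], [0.0, 0.0]]
assert dgemm(1.0, A, B, 0.0, Z) == blocked_matmul(A, B, 1)  # both compute A*B
```

In a real BLAS the innermost block product would itself be a hand-tuned kernel operating on cache-sized tiles, which is what the ATLAS and GotoBLAS strategies described above optimize.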
A common variation of gemm is the gemm3m, which calculates a complex product using "three real matrix multiplications and five real matrix additions instead of the conventional four real matrix multiplications and two real matrix additions", an algorithm similar to Strassen algorithm first described by Peter Ungar. == Implementations == Accelerate Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK. Arm Performance Libraries Arm Performance Libraries, supporting Arm 64-bit AArch64-based processors, available from Arm. ATLAS Automatically Tuned Linear Algebra Software, an open source implementation of BLAS APIs for C and Fortran 77. BLIS BLAS-like Library Instantiation Software framework for rapid instantiation. Optimized for most modern CPUs. BLIS is a complete refactoring of the GotoBLAS that reduces the amount of code that must be written for a given platform. C++ AMP BLAS The C++ AMP BLAS Library is an open source implementation of BLAS for Microsoft's AMP language extension for Visual C++. cuBLAS Optimized BLAS for NVIDIA based GPU cards, requiring few additional library calls. NVBLAS Optimized BLAS for NVIDIA based GPU cards, providing only Level 3 functions, but as direct drop-in replacement for other BLAS libraries. clBLAS An OpenCL implementation of BLAS by AMD. Part of the AMD Compute Libraries. clBLAST A tuned OpenCL implementation of most of the BLAS api. Eigen BLAS A Fortran 77 and C BLAS library implemented on top of the MPL-licensed Eigen library, supporting x86, x86-64, ARM (NEON), and PowerPC architectures. ESSL IBM's Engineering and Scientific Subroutine Library, supporting the PowerPC architecture under AIX and Linux. GotoBLAS Kazushige Goto's BSD-licensed implementation of BLAS, tuned in particular for Intel Nehalem/Atom, VIA Nanoprocessor, AMD Opteron. GNU Scientific Library Multi-platform implementation of many numerical routines. Contains a CBLAS interface. 
HP MLIB HP's Math library supporting IA-64, PA-RISC, x86 and Opteron architecture under HP-UX and Linux. Intel MKL The Intel Math Kernel Library, supporting x86 32-bits and 64-bits, available free from Intel. Includes optimizations for Intel Pentium, Core and Intel Xeon CPUs and Intel Xeon Phi; support for Linux, Windows and macOS. MathKeisan NEC's math library, supporting NEC SX architecture under SUPER-UX, and Itanium under Linux Netlib BLAS The official reference implementation on Netlib, written in Fortran 77. Netlib CBLAS Reference C interface to the BLAS. It is also possible (and popular) to call the Fortran BLAS from C. OpenBLAS Optimized BLAS based on GotoBLAS, supporting x86, x86-64, MIPS and ARM processors. PDLIB/SX NEC's Public Domain Mathematical Library for the NEC SX-4 system. rocBLAS Implementation that runs on AMD GPUs via ROCm. SCSL SGI's Scientific Computing Software Library contains BLAS and LAPACK implementations for SGI's Irix workstations. Sun Performance Library Optimized BLAS and LAPACK for SPARC, Core and AMD64 architectures under Solaris 8, 9, and 10 as well as Linux. uBLAS A generic C++ template class library providing BLAS functionality. Part of the Boost library. It provides bindings to many hardware-accelerated libraries in a unifying notation. Moreover, uBLAS focuses on correctness of the algorithms using advanced C++ features. === Libraries using BLAS === Armadillo Armadillo is a C++ linear algebra library aiming towards a good balance between speed and ease of use. It employs template classes, and has optional links to BLAS/ATLAS and LAPACK. It is sponsored by NICTA (in Australia) and is licensed under a free license. LAPACK LAPACK is a higher level Linear Algebra library built upon BLAS. Like BLAS, a reference implementation exists, but many alternatives like libFlame and MKL exist. Mir An LLVM-accelerated generic numerical library for science and machine learning written in D. It provides generic linear algebra subprograms (GLAS). 
It can be built on a CBLAS implementation. == Similar libraries (not compatible with BLAS) == Elemental Elemental is an open source software for distributed-memory dense and sparse-direct linear algebra and optimization. HASEM is a C++ template library that can solve linear equations and compute eigenvalues. It is licensed under BSD License. LAMA The Library for Accelerated Math Applications (LAMA) is a C++ template library for writing numerical solvers targeting various kinds of hardware (e.g. GPUs through CUDA or OpenCL) on distributed memory systems, hiding the hardware-specific programming from the program developer. MTL4 The Matrix Template Library version 4 is a generic C++ template library providing sparse and dense BLAS functionality. MTL4 establishes an intuitive interface (similar to MATLAB) and broad applicability thanks to generic programming. == Sparse BLAS == Several extensions to BLAS for handling sparse matrices have been suggested over the course of the library's history; a small set of sparse matrix kernel routines was finally standardized in 2002. == Batched BLAS == The traditional BLAS functions have also been ported to architectures that support large amounts of parallelism such as GPUs. Here, the traditional BLAS functions typically provide good performance for large matrices. However, when computing, e.g., matrix–matrix products of many small matrices using the GEMM routine, these architectures show significant performance losses. To address this issue, a batched version of the BLAS function was specified in 2017. 
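The meaning of the batched interface can be sketched in a few lines of Python (this illustrates the semantics only, not the actual 2017 Batched BLAS API): one call applies the same GEMM, with shared α and β, to the k-th slices of three stacks of matrices, and a real implementation would execute the loop over k in parallel, e.g. one GPU thread block per product.

```python
def dgemm(alpha, A, B, beta, C):
    """C <- alpha*A*B + beta*C for one pair of dense matrices."""
    return [[alpha * sum(A[i][p] * B[p][j] for p in range(len(B)))
             + beta * C[i][j]
             for j in range(len(B[0]))]
            for i in range(len(A))]

def dgemm_batched(alpha, A_stack, B_stack, beta, C_stack):
    """C[k] <- alpha*A[k]*B[k] + beta*C[k] for every k in the stack.
    Sequential here; batched BLAS exists so that many small products
    like these can be dispatched to the hardware in a single call."""
    return [dgemm(alpha, Ak, Bk, beta, Ck)
            for Ak, Bk, Ck in zip(A_stack, B_stack, C_stack)]

# Two independent 2x2 products in one "call":
A_stack = [[[1.0, 0.0], [0.0, 1.0]], [[2.0, 0.0], [0.0, 2.0]]]
B_stack = [[[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 4.0]]]
C_stack = [[[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
out = dgemm_batched(1.0, A_stack, B_stack, 0.0, C_stack)
print(out[1])   # [[2.0, 4.0], [6.0, 8.0]] -- the second slice is 2*B
```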
Taking the GEMM routine from above as an example, the batched version performs the following computation simultaneously for many matrices: C [ k ] ← α A [ k ] B [ k ] + β C [ k ] ∀ k {\displaystyle {\boldsymbol {C}}[k]\leftarrow \alpha {\boldsymbol {A}}[k]{\boldsymbol {B}}[k]+\beta {\boldsymbol {C}}[k]\quad \forall k} The index k {\displaystyle k} in square brackets indicates that the operation is performed for all matrices k {\displaystyle k} in a stack. Often, this operation is implemented for a strided batched memory layout where all matrices follow concatenated in the arrays A {\displaystyle A} , B {\displaystyle B} and C {\displaystyle C} . Batched BLAS functions can be a versatile tool and allow e.g. a fast implementation of exponential integrators and Magnus integrators that handle long integration periods with many time steps. Here, the matrix exponentiation, the computationally expensive part of the integration, can be implemented in parallel for all time-steps by using Batched BLAS functions. == See also == List of numerical libraries Math Kernel Library, math library optimized for the Intel architecture; includes BLAS, LAPACK Numerical linear algebra, the type of problem BLAS solves == References == == Further reading == BLAST Forum (2001-08-21), Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard, Knoxville, TN: University of Tennessee Dodson, D. S.; Grimes, R. G. (1982), "Remark on algorithm 539: Basic Linear Algebra Subprograms for Fortran usage", ACM Trans. Math. Softw., 8 (4): 403–404, doi:10.1145/356012.356020, S2CID 43081631 Dodson, D. S. (1983), "Corrigendum: Remark on "Algorithm 539: Basic Linear Algebra Subroutines for FORTRAN usage"", ACM Trans. Math. Softw., 9: 140, doi:10.1145/356022.356032, S2CID 22163977 J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, Algorithm 656: An extended set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 14 (1988), pp. 18–32. J. J. Dongarra, J. Du Croz, I. S. 
Duff, and S. Hammarling, A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 16 (1990), pp. 1–17. J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, Algorithm 679: A set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw., 16 (1990), pp. 18–28. New BLAS L. S. Blackford, J. Demmel, J. Dongarra, I. Duff, S. Hammarling, G. Henry, M. Heroux, L. Kaufman, A. Lumsdaine, A. Petitet, R. Pozo, K. Remington, R. C. Whaley, An Updated Set of Basic Linear Algebra Subprograms (BLAS), ACM Trans. Math. Softw., 28-2 (2002), pp. 135–151. J. Dongarra, Basic Linear Algebra Subprograms Technical Forum Standard, International Journal of High Performance Applications and Supercomputing, 16(1) (2002), pp. 1–111, and International Journal of High Performance Applications and Supercomputing, 16(2) (2002), pp. 115–199. == External links == BLAS homepage on Netlib.org BLAS FAQ BLAS Quick Reference Guide from LAPACK Users' Guide Lawson Oral History One of the original authors of the BLAS discusses its creation in an oral history interview. Charles L. Lawson Oral history interview by Thomas Haigh, 6 and 7 November 2004, San Clemente, California. Society for Industrial and Applied Mathematics, Philadelphia, PA. Dongarra Oral History In an oral history interview, Jack Dongarra explores the early relationship of BLAS to LINPACK, the creation of higher level BLAS versions for new architectures, and his later work on the ATLAS system to automatically optimize BLAS for particular machines. Jack Dongarra, Oral history interview by Thomas Haigh, 26 April 2005, University of Tennessee, Knoxville TN. Society for Industrial and Applied Mathematics, Philadelphia, PA How does BLAS get such extreme performance? Ten naive 1000×1000 matrix multiplications (10^10 floating-point multiply-adds) take 15.77 seconds on a 2.6 GHz processor; a BLAS implementation takes 1.32 seconds. 
An Overview of the Sparse Basic Linear Algebra Subprograms: The New Standard from the BLAS Technical Forum
Wikipedia/Basic_Linear_Algebra_Subprograms
In pure and applied mathematics, quantum mechanics and computer graphics, a tensor operator generalizes the notion of operators which are scalars and vectors. A special class of these are spherical tensor operators which apply the notion of the spherical basis and spherical harmonics. The spherical basis closely relates to the description of angular momentum in quantum mechanics and spherical harmonic functions. The coordinate-free generalization of a tensor operator is known as a representation operator. == The general notion of scalar, vector, and tensor operators == In quantum mechanics, physical observables that are scalars, vectors, and tensors, must be represented by scalar, vector, and tensor operators, respectively. Whether something is a scalar, vector, or tensor depends on how it is viewed by two observers whose coordinate frames are related to each other by a rotation. Alternatively, one may ask how, for a single observer, a physical quantity transforms if the state of the system is rotated. Consider, for example, a system consisting of a molecule of mass M {\displaystyle M} , traveling with a definite center of mass momentum, p z ^ {\displaystyle p{\mathbf {\hat {z}} }} , in the z {\displaystyle z} direction. If we rotate the system by 90 ∘ {\displaystyle 90^{\circ }} about the y {\displaystyle y} axis, the momentum will change to p x ^ {\displaystyle p{\mathbf {\hat {x}} }} , which is in the x {\displaystyle x} direction. The center-of-mass kinetic energy of the molecule will, however, be unchanged at p 2 / 2 M {\displaystyle p^{2}/2M} . The kinetic energy is a scalar and the momentum is a vector, and these two quantities must be represented by a scalar and a vector operator, respectively. By the latter in particular, we mean an operator whose expected values in the initial and the rotated states are p z ^ {\displaystyle p{\mathbf {\hat {z}} }} and p x ^ {\displaystyle p{\mathbf {\hat {x}} }} . 
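The geometry in this example is just that of an ordinary rotation matrix, and it can be checked numerically (a sketch with the momentum taken along z with magnitude 1, rotated by 90° about y as above): the momentum components change, while p², and hence the kinetic energy p²/2M, does not.

```python
import math

def rot_y(theta):
    """3x3 matrix for a rotation by angle theta about the y axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def apply(R, v):
    """Matrix-vector product R v."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

p = [0.0, 0.0, 1.0]                   # momentum p*z-hat, with p = 1
p_rot = apply(rot_y(math.pi / 2), p)  # rotate the system 90 degrees about y
print([round(x, 12) for x in p_rot])  # [1.0, 0.0, 0.0] -- now along x-hat
# p^2 (and so the kinetic energy p^2/2M) is unchanged by the rotation:
print(abs(sum(x * x for x in p_rot) - sum(x * x for x in p)) < 1e-12)  # True
```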
The kinetic energy on the other hand must be represented by a scalar operator, whose expected value must be the same in the initial and the rotated states. In the same way, tensor quantities must be represented by tensor operators. An example of a tensor quantity (of rank two) is the electrical quadrupole moment of the above molecule. Likewise, the octupole and hexadecapole moments would be tensors of rank three and four, respectively. Other examples of scalar operators are the total energy operator (more commonly called the Hamiltonian), the potential energy, and the dipole-dipole interaction energy of two atoms. Examples of vector operators are the momentum, the position, the orbital angular momentum, L {\displaystyle {\mathbf {L} }} , and the spin angular momentum, S {\displaystyle {\mathbf {S} }} . (Fine print: Angular momentum is a vector as far as rotations are concerned, but unlike position or momentum it does not change sign under space inversion, and when one wishes to provide this information, it is said to be a pseudovector.) Scalar, vector and tensor operators can also be formed by products of operators. For example, the scalar product L ⋅ S {\displaystyle {\mathbf {L} }\cdot {\mathbf {S} }} of the two vector operators, L {\displaystyle {\mathbf {L} }} and S {\displaystyle {\mathbf {S} }} , is a scalar operator, which figures prominently in discussions of the spin–orbit interaction. Similarly, the quadrupole moment tensor of our example molecule has the nine components Q i j = ∑ α q α ( 3 r α , i r α , j − r α 2 δ i j ) . 
{\displaystyle Q_{ij}=\sum _{\alpha }q_{\alpha }\left(3r_{\alpha ,i}r_{\alpha ,j}-r_{\alpha }^{2}\delta _{ij}\right).} Here, the indices i {\displaystyle i} and j {\displaystyle j} can independently take on the values 1, 2, and 3 (or x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} ) corresponding to the three Cartesian axes, the index α {\displaystyle \alpha } runs over all particles (electrons and nuclei) in the molecule, q α {\displaystyle q_{\alpha }} is the charge on particle α {\displaystyle \alpha } , and r α , i {\displaystyle r_{\alpha ,i}} is the i {\displaystyle i} -th component of the position of this particle. Each term in the sum is a tensor operator. In particular, the nine products r α , i r α , j {\displaystyle r_{\alpha ,i}r_{\alpha ,j}} together form a second rank tensor, formed by taking the outer product of the vector operator r α {\displaystyle {\mathbf {r} }_{\alpha }} with itself. == Rotations of quantum states == === Quantum rotation operator === The rotation operator about the unit vector n (defining the axis of rotation) through angle θ is U [ R ( θ , n ^ ) ] = exp ⁡ ( − i θ ℏ n ^ ⋅ J ) {\displaystyle U[R(\theta ,{\hat {\mathbf {n} }})]=\exp \left(-{\frac {i\theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} \right)} where J = (Jx, Jy, Jz) are the rotation generators (also the angular momentum matrices): J x = ℏ 2 ( 0 1 0 1 0 1 0 1 0 ) J y = ℏ 2 ( 0 i 0 − i 0 i 0 − i 0 ) J z = ℏ ( − 1 0 0 0 0 0 0 0 1 ) {\displaystyle J_{x}={\frac {\hbar }{\sqrt {2}}}{\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix}}\,\quad J_{y}={\frac {\hbar }{\sqrt {2}}}{\begin{pmatrix}0&i&0\\-i&0&i\\0&-i&0\end{pmatrix}}\,\quad J_{z}=\hbar {\begin{pmatrix}-1&0&0\\0&0&0\\0&0&1\end{pmatrix}}} and let R ^ = R ^ ( θ , n ^ ) {\displaystyle {\widehat {R}}={\widehat {R}}(\theta ,{\hat {\mathbf {n} }})} be a rotation matrix. 
According to the Rodrigues' rotation formula, the rotation operator then amounts to U [ R ( θ , n ^ ) ] = 1 1 − i sin ⁡ θ ℏ n ^ ⋅ J − 1 − cos ⁡ θ ℏ 2 ( n ^ ⋅ J ) 2 . {\displaystyle U[R(\theta ,{\hat {\mathbf {n} }})]=1\!\!1-{\frac {i\sin \theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} -{\frac {1-\cos \theta }{\hbar ^{2}}}({\hat {\mathbf {n} }}\cdot \mathbf {J} )^{2}.} An operator Ω ^ {\displaystyle {\widehat {\Omega }}} is invariant under a unitary transformation U if Ω ^ = U † Ω ^ U ; {\displaystyle {\widehat {\Omega }}={U}^{\dagger }{\widehat {\Omega }}U;} in this case for the rotation U ^ ( R ) {\displaystyle {\widehat {U}}(R)} , Ω ^ = U ( R ) † Ω ^ U ( R ) = exp ⁡ ( i θ ℏ n ^ ⋅ J ) Ω ^ exp ⁡ ( − i θ ℏ n ^ ⋅ J ) . {\displaystyle {\widehat {\Omega }}={U(R)}^{\dagger }{\widehat {\Omega }}U(R)=\exp \left({\frac {i\theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} \right){\widehat {\Omega }}\exp \left(-{\frac {i\theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} \right).} === Angular momentum eigenkets === The orthonormal basis set for total angular momentum is | j , m ⟩ {\displaystyle |j,m\rangle } , where j is the total angular momentum quantum number and m is the magnetic angular momentum quantum number, which takes values −j, −j + 1, ..., j − 1, j. 
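The explicit spin-1 matrices quoted above can be checked numerically. The pure-Python sketch below (taking ħ = 1) verifies that they satisfy the angular momentum algebra [Jx, Jy] = iħJz, and that a component n̂·J along a unit axis satisfies (n̂·J)³ = ħ²(n̂·J); the latter identity is what truncates the exponential series for U to the three-term Rodrigues form.

```python
import math

# Spin-1 rotation generators in the basis ordering m = -1, 0, +1 (hbar = 1),
# matching the matrices quoted in the text.
s = 1 / math.sqrt(2)
Jx = [[0, s, 0], [s, 0, s], [0, s, 0]]
Jy = [[0, 1j*s, 0], [-1j*s, 0, 1j*s], [0, -1j*s, 0]]
Jz = [[-1, 0, 0], [0, 0, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

# [Jx, Jy] = i*hbar*Jz:
comm = sub(matmul(Jx, Jy), matmul(Jy, Jx))
iJz = [[1j * Jz[i][j] for j in range(3)] for i in range(3)]
print(close(comm, iJz))   # True

# (n.J)^3 = n.J for a unit axis, e.g. n = (1, 1, 1)/sqrt(3):
n = 1 / math.sqrt(3)
nJ = [[n * (Jx[i][j] + Jy[i][j] + Jz[i][j]) for j in range(3)]
      for i in range(3)]
print(close(matmul(nJ, matmul(nJ, nJ)), nJ))   # True
```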
A general state within the j subspace | ψ ⟩ = ∑ m c j m | j , m ⟩ {\displaystyle |\psi \rangle =\sum _{m}c_{jm}|j,m\rangle } rotates to a new state by: | ψ ¯ ⟩ = U ( R ) | ψ ⟩ = ∑ m c j m U ( R ) | j , m ⟩ {\displaystyle |{\bar {\psi }}\rangle =U(R)|\psi \rangle =\sum _{m}c_{jm}U(R)|j,m\rangle } Using the completeness condition: I = ∑ m ′ | j , m ′ ⟩ ⟨ j , m ′ | {\displaystyle I=\sum _{m'}|j,m'\rangle \langle j,m'|} we have | ψ ¯ ⟩ = I U ( R ) | ψ ⟩ = ∑ m m ′ c j m | j , m ′ ⟩ ⟨ j , m ′ | U ( R ) | j , m ⟩ {\displaystyle |{\bar {\psi }}\rangle =IU(R)|\psi \rangle =\sum _{mm'}c_{jm}|j,m'\rangle \langle j,m'|U(R)|j,m\rangle } Introducing the Wigner D matrix elements: D ( R ) m ′ m ( j ) = ⟨ j , m ′ | U ( R ) | j , m ⟩ {\displaystyle {D(R)}_{m'm}^{(j)}=\langle j,m'|U(R)|j,m\rangle } gives the matrix multiplication: | ψ ¯ ⟩ = ∑ m m ′ c j m D m ′ m ( j ) | j , m ′ ⟩ ⇒ | ψ ¯ ⟩ = D ( j ) | ψ ⟩ {\displaystyle |{\bar {\psi }}\rangle =\sum _{mm'}c_{jm}D_{m'm}^{(j)}|j,m'\rangle \quad \Rightarrow \quad |{\bar {\psi }}\rangle =D^{(j)}|\psi \rangle } For one basis ket: | j , m ¯ ⟩ = ∑ m ′ D ( R ) m ′ m ( j ) | j , m ′ ⟩ {\displaystyle |{\overline {j,m}}\rangle =\sum _{m'}{D(R)}_{m'm}^{(j)}|j,m'\rangle } For the case of orbital angular momentum, the eigenstates | ℓ , m ⟩ {\displaystyle |\ell ,m\rangle } of the orbital angular momentum operator L and solutions of Laplace's equation on a 3d sphere are spherical harmonics: Y ℓ m ( θ , ϕ ) = ⟨ θ , ϕ | ℓ , m ⟩ = ( 2 ℓ + 1 ) 4 π ( ℓ − m ) ! ( ℓ + m ) ! P ℓ m ( cos ⁡ θ ) e i m ϕ {\displaystyle Y_{\ell }^{m}(\theta ,\phi )=\langle \theta ,\phi |\ell ,m\rangle ={\sqrt {{(2\ell +1) \over 4\pi }{(\ell -m)! \over (\ell +m)!}}}\,P_{\ell }^{m}(\cos {\theta })\,e^{im\phi }} where Pℓm is an associated Legendre polynomial, ℓ is the orbital angular momentum quantum number, and m is the orbital magnetic quantum number which takes the values −ℓ, −ℓ + 1, ... 
ℓ − 1, ℓ. The formalism of spherical harmonics has wide applications in applied mathematics, and is closely related to the formalism of spherical tensors, as shown below. Spherical harmonics are functions of the polar and azimuthal angles, θ and ϕ respectively, which can be conveniently collected into a unit vector n(θ, ϕ) pointing in the direction of those angles; in the Cartesian basis it is: n ^ ( θ , ϕ ) = cos ⁡ ϕ sin ⁡ θ e x + sin ⁡ ϕ sin ⁡ θ e y + cos ⁡ θ e z {\displaystyle {\hat {\mathbf {n} }}(\theta ,\phi )=\cos \phi \sin \theta \mathbf {e} _{x}+\sin \phi \sin \theta \mathbf {e} _{y}+\cos \theta \mathbf {e} _{z}} So a spherical harmonic can also be written Y ℓ m = ⟨ n | ℓ m ⟩ {\displaystyle Y_{\ell }^{m}=\langle \mathbf {n} |\ell m\rangle } . Spherical harmonic states | ℓ , m ⟩ {\displaystyle |\ell ,m\rangle } rotate according to the inverse rotation matrix U ( R − 1 ) {\displaystyle U(R^{-1})} , while direction kets | n ^ ⟩ {\displaystyle |{\hat {\mathbf {n} }}\rangle } rotate by the initial rotation matrix U ^ ( R ) {\displaystyle {\widehat {U}}(R)} . 
| ℓ , m ¯ ⟩ = ∑ m ′ D m ′ m ( ℓ ) [ U ( R − 1 ) ] | ℓ , m ′ ⟩ , | n ^ ¯ ⟩ = U ( R ) | n ^ ⟩ {\displaystyle |{\overline {\ell ,m}}\rangle =\sum _{m'}D_{m'm}^{(\ell )}[U(R^{-1})]|\ell ,m'\rangle \,,\quad |{\overline {\hat {\mathbf {n} }}}\rangle =U(R)|{\hat {\mathbf {n} }}\rangle } == Rotation of tensor operators == We define the Rotation of an operator by requiring that the expectation value of the original operator A ^ {\displaystyle {\widehat {\mathbf {A} }}} with respect to the initial state be equal to the expectation value of the rotated operator with respect to the rotated state, ⟨ ψ ′ | A ′ ^ | ψ ′ ⟩ = ⟨ ψ | A ^ | ψ ⟩ {\displaystyle \langle \psi '|{\widehat {A'}}|\psi '\rangle =\langle \psi |{\widehat {A}}|\psi \rangle } Now as, | ψ ⟩ → | ψ ′ ⟩ = U ( R ) | ψ ⟩ , ⟨ ψ | → ⟨ ψ ′ | = ⟨ ψ | U † ( R ) {\displaystyle |\psi \rangle ~\rightarrow ~|\psi '\rangle =U(R)|\psi \rangle \,,\quad \langle \psi |~\rightarrow ~\langle \psi '|=\langle \psi |U^{\dagger }(R)} we have, ⟨ ψ | U † ( R ) A ^ ′ U ( R ) | ψ ⟩ = ⟨ ψ | A ^ | ψ ⟩ {\displaystyle \langle \psi |U^{\dagger }(R){\widehat {A}}'U(R)|\psi \rangle =\langle \psi |{\widehat {A}}|\psi \rangle } since, | ψ ⟩ {\displaystyle |\psi \rangle } is arbitrary, U † ( R ) A ^ ′ U ( R ) = A ^ {\displaystyle U^{\dagger }(R){\widehat {A}}'U(R)={\widehat {A}}} === Scalar operators === A scalar operator is invariant under rotations: U ( R ) † S ^ U ( R ) = S ^ {\displaystyle U(R)^{\dagger }{\widehat {S}}U(R)={\widehat {S}}} This is equivalent to saying a scalar operator commutes with the rotation generators: [ S ^ , J ^ ] = 0 {\displaystyle \left[{\widehat {S}},{\widehat {\mathbf {J} }}\right]=0} Examples of scalar operators include the energy operator: E ^ ψ = i ℏ ∂ ∂ t ψ {\displaystyle {\widehat {E}}\psi =i\hbar {\frac {\partial }{\partial t}}\psi } potential energy V (in the case of a central potential only) V ^ ( r , t ) ψ ( r , t ) = V ( r , t ) ψ ( r , t ) {\displaystyle {\widehat {V}}(r,t)\psi (\mathbf {r} ,t)=V(r,t)\psi 
(\mathbf {r} ,t)} kinetic energy T: T ^ ψ ( r , t ) = − ℏ 2 2 m ( ∇ 2 ψ ) ( r , t ) {\displaystyle {\widehat {T}}\psi (\mathbf {r} ,t)=-{\frac {\hbar ^{2}}{2m}}(\nabla ^{2}\psi )(\mathbf {r} ,t)} the spin–orbit coupling: L ^ ⋅ S ^ = L ^ x S ^ x + L ^ y S ^ y + L ^ z S ^ z . {\displaystyle {\widehat {\mathbf {L} }}\cdot {\widehat {\mathbf {S} }}={\widehat {L}}_{x}{\widehat {S}}_{x}+{\widehat {L}}_{y}{\widehat {S}}_{y}+{\widehat {L}}_{z}{\widehat {S}}_{z}\,.} === Vector operators === Vector operators (as well as pseudovector operators) are a set of 3 operators that can be rotated according to: U ( R ) † V ^ i U ( R ) = ∑ j R i j V ^ j {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{i}U(R)=\sum _{j}R_{ij}{\widehat {V}}_{j}} Any observable vector quantity of a quantum mechanical system should be invariant of the choice of frame of reference. The transformation of expectation value vector which applies for any wavefunction, ensures the above equality. In Dirac notation: ⟨ ψ ¯ | V ^ a | ψ ¯ ⟩ = ⟨ ψ | U ( R ) † V ^ a U ( R ) | ψ ⟩ = ∑ b R a b ⟨ ψ | V ^ b | ψ ⟩ {\displaystyle \langle {\bar {\psi }}|{\widehat {V}}_{a}|{\bar {\psi }}\rangle =\langle \psi |{U(R)}^{\dagger }{\widehat {V}}_{a}U(R)|\psi \rangle =\sum _{b}R_{ab}\langle \psi |{\widehat {V}}_{b}|\psi \rangle } where the RHS is due to the rotation transformation acting on the vector formed by expectation values. Since |Ψ⟩ is any quantum state, the same result follows: U ( R ) † V ^ a U ( R ) = ∑ b R a b V ^ b {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{a}U(R)=\sum _{b}R_{ab}{\widehat {V}}_{b}} Note that here, the term "vector" is used two different ways: kets such as |ψ⟩ are elements of abstract Hilbert spaces, while the vector operator is defined as a quantity whose components transform in a certain way under rotations. 
From the above relation for infinitesimal rotations and the Baker–Hausdorff lemma, by equating coefficients of order δ θ {\displaystyle \delta \theta } , one can derive the commutation relation with the rotation generator: [ V ^ a , J ^ b ] = ∑ c i ℏ ε a b c V ^ c {\displaystyle {\left[{\widehat {V}}_{a},{\widehat {J}}_{b}\right]=\sum _{c}i\hbar \varepsilon _{abc}{\widehat {V}}_{c}}} where εijk is the Levi-Civita symbol, which all vector operators must satisfy, by construction. The above commutator rule can also be used as an alternative definition for vector operators, as can be shown by using the Baker–Hausdorff lemma. As the symbol εijk is a pseudotensor, pseudovector operators are invariant up to a sign: +1 for proper rotations and −1 for improper rotations. Since a set of operators can be shown to form a vector operator by its commutation relations with the angular momentum components (which are the generators of rotation), examples include: the position operator: r ^ ψ = r ψ {\displaystyle {\widehat {\mathbf {r} }}\psi =\mathbf {r} \psi } the momentum operator: p ^ ψ = − i ℏ ∇ ψ {\displaystyle {\widehat {\mathbf {p} }}\psi =-i\hbar \nabla \psi } and pseudovector operators include the orbital angular momentum operator: L ^ ψ = − i ℏ r × ∇ ψ {\displaystyle {\widehat {\mathbf {L} }}\psi =-i\hbar \mathbf {r} \times \nabla \psi } as well as the spin operator S, and hence the total angular momentum J ^ = L ^ + S ^ .
{\displaystyle {\widehat {\mathbf {J} }}={\widehat {\mathbf {L} }}+{\widehat {\mathbf {S} }}\,.} ==== Scalar operators from vector operators ==== If V → {\displaystyle {\vec {V}}} and W → {\displaystyle {\vec {W}}} are two vector operators, the dot product between the two vector operators can be defined as: V → ⋅ W → = ∑ i = 1 3 V i ^ W i ^ {\displaystyle {\vec {V}}\cdot {\vec {W}}=\sum _{i=1}^{3}{\hat {V_{i}}}{\hat {W_{i}}}} Under rotation of coordinates, the newly defined operator transforms as: U ( R ) † ( V → ⋅ W → ) U ( R ) = U ( R ) † ( ∑ i = 1 3 V i ^ W i ^ ) U ( R ) = ∑ i = 1 3 ( U ( R ) † V ^ i U ( R ) ) ( U ( R ) † W ^ i U ( R ) ) = ∑ i = 1 3 ( ∑ j = 1 3 R i j V ^ j ⋅ ∑ k = 1 3 R i k W ^ k ) {\displaystyle {U(R)}^{\dagger }({\vec {V}}\cdot {\vec {W}})U(R)={U(R)}^{\dagger }\left(\sum _{i=1}^{3}{\hat {V_{i}}}{\hat {W_{i}}}\right)U(R)=\sum _{i=1}^{3}({U(R)}^{\dagger }{\hat {V}}_{i}U(R))({U(R)}^{\dagger }{\hat {W}}_{i}U(R))=\sum _{i=1}^{3}\left(\sum _{j=1}^{3}R_{ij}{\widehat {V}}_{j}\cdot \sum _{k=1}^{3}R_{ik}{\widehat {W}}_{k}\right)} Rearranging terms and using transpose of rotation matrix as its inverse property: U ( R ) † ( V → ⋅ W → ) U ( R ) = ∑ k = 1 3 ∑ j = 1 3 ( ∑ i = 1 3 R j i T R i k ) V ^ j W ^ k = ∑ k = 1 3 ∑ j = 1 3 δ j , k V ^ j W ^ k = ∑ i = 1 3 V ^ i W ^ i {\displaystyle {U(R)}^{\dagger }({\vec {V}}\cdot {\vec {W}})U(R)=\sum _{k=1}^{3}\sum _{j=1}^{3}\left(\sum _{i=1}^{3}R_{ji}^{T}R_{ik}\right){\widehat {V}}_{j}{\widehat {W}}_{k}=\sum _{k=1}^{3}\sum _{j=1}^{3}\delta _{j,k}{\widehat {V}}_{j}{\widehat {W}}_{k}=\sum _{i=1}^{3}{\widehat {V}}_{i}{\widehat {W}}_{i}} Where the RHS is the V → ⋅ W → {\displaystyle {\vec {V}}\cdot {\vec {W}}} operator originally defined. Since the dot product defined is invariant under rotation transformation, it is said to be a scalar operator. 
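The scalar-operator property can be illustrated numerically (a sketch with hypothetical values, assuming ħ = 1): for spin 1, the dot product J·J of the vector operator J with itself equals j(j+1)·I, commutes with every generator, and is unchanged by any rotation U(R):

```python
import numpy as np

s = 1 / np.sqrt(2)
# Standard spin-1 matrices in the Jz eigenbasis (hbar = 1)
jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# The dot product J.J of the vector operator J with itself
j2 = jx @ jx + jy @ jy + jz @ jz
assert np.allclose(j2, 2 * np.eye(3))       # j(j+1) = 2 for j = 1

# A scalar operator commutes with every rotation generator ...
for j in (jx, jy, jz):
    assert np.allclose(j2 @ j - j @ j2, 0)

# ... equivalently, it is invariant under any finite rotation U(R)
theta = 1.1                                  # arbitrary angle about z
U = np.diag(np.exp(-1j * theta * np.array([1.0, 0.0, -1.0])))  # exp(-i theta Jz)
assert np.allclose(U.conj().T @ j2 @ U, j2)
```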
=== Spherical vector operators === A vector operator in the spherical basis is V = (V+1, V0, V−1) where the components are: V + 1 = − 1 2 ( V x + i V y ) V − 1 = 1 2 ( V x − i V y ) , V 0 = V z , {\displaystyle V_{+1}=-{\frac {1}{\sqrt {2}}}(V_{x}+iV_{y})\,\quad V_{-1}={\frac {1}{\sqrt {2}}}(V_{x}-iV_{y})\,,\quad V_{0}=V_{z}\,,} using J ± = J x ± i J y , {\textstyle J_{\pm }=J_{x}\pm iJ_{y}\,,} the various commutators with the rotation generators and ladder operators are: [ J z , V + 1 ] = + ℏ V + 1 [ J z , V 0 ] = 0 V 0 [ J z , V − 1 ] = − ℏ V − 1 [ J + , V + 1 ] = 0 [ J + , V 0 ] = 2 ℏ V + 1 [ J + , V − 1 ] = 2 ℏ V 0 [ J − , V + 1 ] = 2 ℏ V 0 [ J − , V 0 ] = 2 ℏ V − 1 [ J − , V − 1 ] = 0 {\displaystyle {\begin{aligned}\left[J_{z},V_{+1}\right]&=+\hbar V_{+1}\\[1ex]\left[J_{z},V_{0}\right]&=0V_{0}\\[1ex]\left[J_{z},V_{-1}\right]&=-\hbar V_{-1}\\[2ex]\left[J_{+},V_{+1}\right]&=0\\[1ex]\left[J_{+},V_{0}\right]&={\sqrt {2}}\hbar V_{+1}\\[1ex]\left[J_{+},V_{-1}\right]&={\sqrt {2}}\hbar V_{0}\\[2ex]\left[J_{-},V_{+1}\right]&={\sqrt {2}}\hbar V_{0}\\[1ex]\left[J_{-},V_{0}\right]&={\sqrt {2}}\hbar V_{-1}\\[1ex]\left[J_{-},V_{-1}\right]&=0\\[1ex]\end{aligned}}} which are of similar form of J z | 1 , + 1 ⟩ = + ℏ | 1 , + 1 ⟩ J z | 1 , 0 ⟩ = 0 | 1 , 0 ⟩ J z | 1 , − 1 ⟩ = − ℏ | 1 , − 1 ⟩ J + | 1 , + 1 ⟩ = 0 J + | 1 , 0 ⟩ = 2 ℏ | 1 , + 1 ⟩ J + | 1 , − 1 ⟩ = 2 ℏ | 1 , 0 ⟩ J − | 1 , + 1 ⟩ = 2 ℏ | 1 , 0 ⟩ J − | 1 , 0 ⟩ = 2 ℏ | 1 , − 1 ⟩ J − | 1 , − 1 ⟩ = 0 {\displaystyle {\begin{aligned}J_{z}|1,+1\rangle &=+\hbar |1,+1\rangle \\[1ex]J_{z}|1,0\rangle &=0|1,0\rangle \\[1ex]J_{z}|1,-1\rangle &=-\hbar |1,-1\rangle \\[2ex]J_{+}|1,+1\rangle &=0\\[1ex]J_{+}|1,0\rangle &={\sqrt {2}}\hbar |1,+1\rangle \\[1ex]J_{+}|1,-1\rangle &={\sqrt {2}}\hbar |1,0\rangle \\[2ex]J_{-}|1,+1\rangle &={\sqrt {2}}\hbar |1,0\rangle \\[1ex]J_{-}|1,0\rangle &={\sqrt {2}}\hbar |1,-1\rangle \\[1ex]J_{-}|1,-1\rangle &=0\\[1ex]\end{aligned}}} In the spherical basis, the generators of rotation are: J ± 1 = ∓ 1 2 J ± 
, J 0 = J z {\displaystyle J_{\pm 1}=\mp {\frac {1}{\sqrt {2}}}J_{\pm }\,,\quad J_{0}=J_{z}} From the transformation of operators and Baker Hausdorff lemma: U ( R ) † V ^ q U ( R ) = V ^ q + i θ ℏ [ n ^ ⋅ J → , V ^ q ] + ∑ k = 2 ∞ ( i θ ℏ [ n ^ ⋅ J → , . ] ) k k ! V ^ q = e x p ( i θ ℏ n ^ ⋅ A d J → ) V ^ q {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{q}U(R)={\widehat {V}}_{q}+i{\frac {\theta }{\hbar }}\left[{\hat {n}}\cdot {\vec {J}},{\widehat {V}}_{q}\right]+\sum _{k=2}^{\infty }{\frac {\left(i{\frac {\theta }{\hbar }}[{\hat {n}}\cdot {\vec {J}},.]\right)^{k}}{k!}}{\widehat {V}}_{q}=exp\left({i{\frac {\theta }{\hbar }}{\hat {n}}\cdot Ad_{\vec {J}}}\right){\widehat {V}}_{q}} compared to U ( R ) | j , k ⟩ = | j , k ⟩ − i θ ℏ n ^ ⋅ J → | j , k ⟩ + ∑ k = 2 ∞ ( − i θ ℏ n ^ ⋅ J → ) k k ! | j , k ⟩ = e x p ( − i θ ℏ n ^ ⋅ J → ) | j , k ⟩ {\displaystyle U(R)|j,k\rangle =|j,k\rangle -i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}|j,k\rangle +\sum _{k=2}^{\infty }{\frac {\left(-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}\right)^{k}}{k!}}|j,k\rangle =exp\left({-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,k\rangle } it can be argued that the commutator with operator replaces the action of operator on state for transformations of operators as compared with that of states: U ( R ) | j , k ⟩ = exp ⁡ ( − i θ ℏ n ^ ⋅ J → ) | j , k ⟩ = ∑ j ′ , k ′ | j ′ , k ′ ⟩ ⟨ j ′ , k ′ | exp ⁡ ( − i θ ℏ n ^ ⋅ J → ) | j , k ⟩ = ∑ k ′ D k ′ k ( j ) ( R ) | j , k ′ ⟩ {\displaystyle U(R)|j,k\rangle =\exp \left({-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,k\rangle =\sum _{j',k'}|j',k'\rangle \langle j',k'|\exp \left({-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,k\rangle =\sum _{k'}D_{k'k}^{(j)}(R)|j,k'\rangle } The rotation transformation in the spherical basis (originally written in the Cartesian basis) is then, due to similarity of commutation and operator shown above: U ( R ) † V ^ q U ( R ) = ∑ q ′ D q ′ q ( 1 ) ( R − 1 
) V ^ q ′ {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{q}U(R)=\sum _{q'}{{D_{q'q}^{(1)}}(R^{-1})}{\widehat {V}}_{q'}} One can generalize the vector operator concept easily to tensorial operators, shown next. === Tensor operators === In general, a tensor operator is one that transforms according to a tensor: U ( R ) † T ^ p q r ⋯ a b c ⋯ U ( R ) = R p , α R q , β R r , γ ⋯ T ^ i j k ⋯ α β γ ⋯ R i , a − 1 R j , b − 1 R k , c − 1 ⋯ {\displaystyle U(R)^{\dagger }{\widehat {T}}_{pqr\cdots }^{abc\cdots }U(R)=R_{p,\alpha }R_{q,\beta }R_{r,\gamma }\cdots {\widehat {T}}_{ijk\cdots }^{\alpha \beta \gamma \cdots }R_{i,a}^{-1}R_{j,b}^{-1}R_{k,c}^{-1}\cdots } where the basis vectors are transformed by R − 1 {\displaystyle R^{-1}} or the vector components transform by R {\displaystyle R} . In the subsequent discussion surrounding tensor operators, the index notation regarding covariant/contravariant behavior is ignored entirely. Instead, contravariant components are implied by context. Hence for an n times contravariant tensor: U ( R ) † T ^ p q r ⋯ U ( R ) = R p i R q j R r k ⋯ T ^ i j k ⋯ {\displaystyle U(R)^{\dagger }{\widehat {T}}_{pqr\cdots }U(R)=R_{pi}R_{qj}R_{rk}\cdots {\widehat {T}}_{ijk\cdots }} ==== Examples of tensor operators ==== The quadrupole moment operator, Q i j = ∑ α q α ( 3 r α i r α j − r α 2 δ i j ) {\displaystyle Q_{ij}=\sum _{\alpha }q_{\alpha }(3r_{\alpha i}r_{\alpha j}-r_{\alpha }^{2}\delta _{ij})} Components of two vector operators can be multiplied to give another tensor operator.
T i j = V i W j {\displaystyle T_{ij}=V_{i}W_{j}} In general, a product of n vector operators will also give another tensor operator T p q r ⋯ k = V p ( 1 ) V q ( 2 ) V r ( 3 ) ⋯ V k ( n ) {\displaystyle T_{pqr\cdots k}=V_{p}^{(1)}V_{q}^{(2)}V_{r}^{(3)}\cdots V_{k}^{(n)}} or, T i 1 i 2 ⋯ j 1 j 2 ⋯ = V i 1 i 2 ⋯ W j 1 j 2 ⋯ {\displaystyle T_{i_{1}i_{2}\cdots j_{1}j_{2}\cdots }=V_{i_{1}i_{2}\cdots }W_{j_{1}j_{2}\cdots }} Note: In general, a tensor operator cannot be written as the tensor product of other tensor operators as given in the above example. ==== Tensor operator from vector operators ==== If V → {\displaystyle {\vec {V}}} and W → {\displaystyle {\vec {W}}} are two three-dimensional vector operators, then a rank 2 Cartesian dyadic tensor can be formed from the nine operators of the form T ^ i j = V i ^ W j ^ {\displaystyle {\hat {T}}_{ij}={\hat {V_{i}}}{\hat {W_{j}}}} , U ( R ) † T ^ i j U ( R ) = U ( R ) † ( V i ^ W j ^ ) U ( R ) = ( U ( R ) † V ^ i U ( R ) ) ( U ( R ) † W ^ j U ( R ) ) = ( ∑ l = 1 3 R i l V ^ l ⋅ ∑ k = 1 3 R j k W ^ k ) {\displaystyle {U(R)}^{\dagger }{\hat {T}}_{ij}U(R)={U(R)}^{\dagger }({\hat {V_{i}}}{\hat {W_{j}}})U(R)=({U(R)}^{\dagger }{\hat {V}}_{i}U(R))({U(R)}^{\dagger }{\hat {W}}_{j}U(R))=\left(\sum _{l=1}^{3}R_{il}{\hat {V}}_{l}\cdot \sum _{k=1}^{3}R_{jk}{\hat {W}}_{k}\right)} Rearranging terms, we get: U ( R ) † T ^ i j U ( R ) = ∑ k = 1 3 ∑ l = 1 3 ( R i l R j k T ^ l k ) {\displaystyle {U(R)}^{\dagger }{\hat {T}}_{ij}U(R)=\sum _{k=1}^{3}\sum _{l=1}^{3}\left(R_{il}R_{jk}{\hat {T}}_{lk}\right)} The RHS of the equation is the change of basis equation for twice contravariant tensors where the basis vectors are transformed by R − 1 {\displaystyle R^{-1}} or the vector components transform by R {\displaystyle R} which matches the transformation of vector operator components.
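The rank-2 transformation rule above mirrors the classical statement that an outer product of vectors transforms as T → R T Rᵀ. A quick numerical illustration with ordinary (commuting) vectors and a random proper rotation (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random proper rotation via QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q   # force det(R) = +1

v = rng.normal(size=3)
w = rng.normal(size=3)
T = np.outer(v, w)                       # T_ij = v_i w_j

# Components transform as T'_ij = sum_{l,k} R_il R_jk T_lk = (R T R^T)_ij
assert np.allclose(np.outer(R @ v, R @ w), R @ T @ R.T)

# The dot product (the trace part of T) is rotation invariant
assert np.isclose(np.dot(R @ v, R @ w), np.dot(v, w))
```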
Hence the operator tensor described forms a rank 2 tensor, in tensor representation, T ^ = V → ⊗ W → = ( V ^ i W ^ j ) ( e i ⊗ e j ) {\displaystyle {\hat {\mathbf {T} }}={\vec {V}}\otimes {\vec {W}}=({\hat {V}}_{i}{\hat {W}}_{j})(\mathbf {e} _{i}\otimes \mathbf {e} _{j})} Similarly, an n-times contravariant tensor operator can be formed from n vector operators. We observe that the subspace spanned by linear combinations of the rank two tensor components forms an invariant subspace, i.e. the subspace does not change under rotation, since each transformed component is itself a linear combination of the tensor components. However, this subspace is not irreducible, i.e. it can be further divided into invariant subspaces under rotation; a subspace that can be so divided is called reducible. In other words, there exist specific sets of different linear combinations of the components such that they transform into linear combinations of the same set under rotation. In the above example, we will show that the 9 independent tensor components can be divided into sets of 1, 3 and 5 combinations of operators that each form irreducible invariant subspaces. === Irreducible tensor operators === The subspace spanned by { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} can be divided into two subspaces; three independent antisymmetric components { A ^ i j } {\displaystyle \{{\hat {A}}_{ij}\}} and six independent symmetric components { S ^ i j } {\displaystyle \{{\hat {S}}_{ij}\}} , defined as A ^ i j = 1 2 ( T ^ i j − T ^ j i ) {\displaystyle {\hat {A}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}-{\hat {T}}_{ji})} and S ^ i j = 1 2 ( T ^ i j + T ^ j i ) {\displaystyle {\hat {S}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}+{\hat {T}}_{ji})} .
Using the { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} transformation under rotation formula, it can be shown that both { A ^ i j } {\displaystyle \{{\hat {A}}_{ij}\}} and { S ^ i j } {\displaystyle \{{\hat {S}}_{ij}\}} are transformed into a linear combination of members of their own sets. Although { A ^ i j } {\displaystyle \{{\hat {A}}_{ij}\}} is irreducible, the same cannot be said about { S ^ i j } {\displaystyle \{{\hat {S}}_{ij}\}} . The set of six independent symmetric components can be divided into five independent traceless symmetric components, while the invariant trace forms its own subspace. Hence, the invariant subspaces of { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} are formed respectively by: One invariant trace of the tensor, t ^ = ∑ k = 1 3 T ^ k k {\displaystyle {\hat {t}}=\sum _{k=1}^{3}{\hat {T}}_{kk}} Three linearly independent antisymmetric components from: A ^ i j = 1 2 ( T ^ i j − T ^ j i ) {\displaystyle {\hat {A}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}-{\hat {T}}_{ji})} Five linearly independent traceless symmetric components from S ^ i j = 1 2 ( T ^ i j + T ^ j i ) − 1 3 t ^ δ i j {\displaystyle {\hat {S}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}+{\hat {T}}_{ji})-{\frac {1}{3}}{\hat {t}}\delta _{ij}} If T ^ i j = V i ^ W j ^ {\displaystyle {\hat {T}}_{ij}={\hat {V_{i}}}{\hat {W_{j}}}} , the invariant subspaces of { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} formed are represented by: One invariant scalar operator V → ⋅ W → {\displaystyle {\vec {V}}\cdot {\vec {W}}} Three linearly independent components from 1 2 ( V ^ i W ^ j − V ^ j W ^ i ) {\displaystyle {\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}-{\hat {V}}_{j}{\hat {W}}_{i})} Five linearly independent components from 1 2 ( V ^ i W ^ j + V ^ j W ^ i ) − 1 3 ( V → ⋅ W → ) δ i j {\displaystyle {\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}+{\hat {V}}_{j}{\hat {W}}_{i})-{\frac {1}{3}}({\vec {V}}\cdot {\vec {W}})\delta _{ij}} From the above examples, the nine components { T ^ i j } {\displaystyle
\{{\hat {T}}_{ij}\}} are split into subspaces formed by one, three and five components. These numbers add up to the number of components of the original tensor in a manner similar to the dimension of vector subspaces adding to the dimension of the space that is a direct sum of these subspaces. Similarly, every element of { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} can be expressed in terms of a linear combination of components from its invariant subspaces: T ^ i j = 1 3 t ^ δ i j + A ^ i j + S ^ i j {\displaystyle {\hat {T}}_{ij}={\frac {1}{3}}{\hat {t}}\delta _{ij}+{\hat {A}}_{ij}+{\hat {S}}_{ij}} or T ^ i j = 1 3 ( V → ⋅ W → ) δ i j + ( 1 2 ( V ^ i W ^ j − V ^ j W ^ i ) ) + ( 1 2 ( V ^ i W ^ j + V ^ j W ^ i ) − 1 3 ( V → ⋅ W → ) δ i j ) = T ( 0 ) + T ( 1 ) + T ( 2 ) {\displaystyle {\hat {T}}_{ij}={\frac {1}{3}}({\vec {V}}\cdot {\vec {W}})\delta _{ij}+\left({\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}-{\hat {V}}_{j}{\hat {W}}_{i})\right)+\left({\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}+{\hat {V}}_{j}{\hat {W}}_{i})-{\frac {1}{3}}({\vec {V}}\cdot {\vec {W}})\delta _{ij}\right)=\mathbf {T} ^{(0)}+\mathbf {T} ^{(1)}+\mathbf {T} ^{(2)}} where: T ^ i j ( 0 ) = V ^ k W ^ k 3 δ i j {\displaystyle {\widehat {T}}_{ij}^{(0)}={\frac {{\widehat {V}}_{k}{\widehat {W}}_{k}}{3}}\delta _{ij}} T ^ i j ( 1 ) = 1 2 [ V ^ i W ^ j − V ^ j W ^ i ] = V ^ [ i W ^ j ] {\displaystyle {\widehat {T}}_{ij}^{(1)}={\frac {1}{2}}\left[{\widehat {V}}_{i}{\widehat {W}}_{j}-{\widehat {V}}_{j}{\widehat {W}}_{i}\right]={\widehat {V}}_{[i}{\widehat {W}}_{j]}} T ^ i j ( 2 ) = 1 2 ( V ^ i W ^ j + V ^ j W ^ i ) − 1 3 V ^ k W ^ k δ i j = V ^ ( i W ^ j ) − T i j ( 0 ) {\displaystyle {\widehat {T}}_{ij}^{(2)}={\tfrac {1}{2}}\left({\widehat {V}}_{i}{\widehat {W}}_{j}+{\widehat {V}}_{j}{\widehat {W}}_{i}\right)-{\tfrac {1}{3}}{\widehat {V}}_{k}{\widehat {W}}_{k}\delta _{ij}={\widehat {V}}_{(i}{\widehat {W}}_{j)}-T_{ij}^{(0)}} In general cartesian tensors of rank greater than 1 are reducible. 
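The 1 + 3 + 5 decomposition can be checked numerically (a sketch with hypothetical numbers): each part is preserved by the rotation T → R T Rᵀ, i.e. the trace part is unchanged and the antisymmetric and traceless-symmetric parts keep their type:

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.outer(rng.normal(size=3), rng.normal(size=3))   # T_ij = V_i W_j

I3 = np.eye(3)
T0 = (np.trace(T) / 3) * I3        # invariant trace part     (1 component)
T1 = 0.5 * (T - T.T)               # antisymmetric part       (3 components)
T2 = 0.5 * (T + T.T) - T0          # traceless symmetric part (5 components)
assert np.allclose(T, T0 + T1 + T2)

# A random proper rotation, built via QR
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q

def rotate(M):
    return R @ M @ R.T

# Each subspace is invariant under rotation:
assert np.allclose(rotate(T0), T0)              # trace part unchanged
assert np.allclose(rotate(T1), -rotate(T1).T)   # stays antisymmetric
assert np.allclose(rotate(T2), rotate(T2).T)    # stays symmetric ...
assert np.isclose(np.trace(rotate(T2)), 0)      # ... and traceless
```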
In quantum mechanics, this particular example bears resemblance to the addition of two spin one particles where both are 3 dimensional, hence the total space being 9 dimensional, can be formed by spin 0, spin 1 and spin 2 systems each having 1 dimensional, 3 dimensional and 5 dimensional space respectively. These three terms are irreducible, which means they cannot be decomposed further and still be tensors satisfying the defining transformation laws under which they must be invariant. Each of the irreducible representations T(0), T(1), T(2) ... transform like angular momentum eigenstates according to the number of independent components. It is possible that a given tensor may have one or more of these components vanish. For example, the quadrupole moment tensor is already symmetric and traceless, and hence has only 5 independent components to begin with. === Spherical tensor operators === Spherical tensor operators are generally defined as operators with the following transformation rule, under rotation of coordinate system: T ^ m ( j ) → U ( R ) † T ^ m ( j ) U ( R ) = ∑ m ′ D m ′ m ( j ) ( R − 1 ) T ^ m ′ ( j ) {\displaystyle {\widehat {T}}_{m}^{(j)}\rightarrow U(R)^{\dagger }{\widehat {T}}_{m}^{(j)}U(R)=\sum _{m'}D_{m'm}^{(j)}(R^{-1}){\widehat {T}}_{m'}^{(j)}} The commutation relations can be found by expanding LHS and RHS as: U ( R ) † T ^ m ( j ) U ( R ) = ( 1 + i ϵ n ^ ⋅ J → ℏ + O ( ϵ 2 ) ) T ^ m ( j ) ( 1 − i ϵ n ^ ⋅ J → ℏ + O ( ϵ 2 ) ) = ∑ m ′ ⟨ j , m ′ | ( 1 + i ϵ n ^ ⋅ J → ℏ + O ( ϵ 2 ) ) | j , m ⟩ T ^ m ′ ( j ) {\displaystyle U(R)^{\dagger }{\widehat {T}}_{m}^{(j)}U(R)=\left(1+{\frac {i\epsilon {\hat {n}}\cdot {\vec {J}}}{\hbar }}+{\mathcal {O}}(\epsilon ^{2})\right){\widehat {T}}_{m}^{(j)}\left(1-{\frac {i\epsilon {\hat {n}}\cdot {\vec {J}}}{\hbar }}+{\mathcal {O}}(\epsilon ^{2})\right)=\sum _{m'}\langle j,m'|\left(1+{\frac {i\epsilon {\hat {n}}\cdot {\vec {J}}}{\hbar }}+{\mathcal {O}}(\epsilon ^{2})\right)|j,m\rangle {\widehat {T}}_{m'}^{(j)}} 
Simplifying and applying limits to select only first order terms, we get: [ n ^ ⋅ J → , T ^ m ( j ) ] = ∑ m ′ T ^ m ′ ( j ) ⟨ j , m ′ | J → ⋅ n ^ | j , m ⟩ {\displaystyle {[{\hat {n}}\cdot {\vec {J}}},{\widehat {T}}_{m}^{(j)}]=\sum _{m'}{\widehat {T}}_{m'}^{(j)}\langle j,m'|{\vec {J}}\cdot {\hat {n}}|j,m\rangle } For choices of n ^ = x ^ ± i y ^ {\displaystyle {\hat {n}}={\hat {x}}\pm i{\hat {y}}} or n ^ = z ^ {\displaystyle {\hat {n}}={\hat {z}}} , we get: [ J ± , T ^ m ( j ) ] = ℏ ( j ∓ m ) ( j ± m + 1 ) T ^ m ± 1 ( j ) [ J z , T ^ m ( j ) ] = ℏ m T ^ m ( j ) {\displaystyle {\begin{aligned}\left[J_{\pm },{\widehat {T}}_{m}^{(j)}\right]&=\hbar {\sqrt {(j\mp m)(j\pm m+1)}}{\widehat {T}}_{m\pm 1}^{(j)}\\[1ex]\left[J_{z},{\widehat {T}}_{m}^{(j)}\right]&=\hbar m{\widehat {T}}_{m}^{(j)}\end{aligned}}} Note the similarity of the above to: J ± | j , m ⟩ = ℏ ( j ∓ m ) ( j ± m + 1 ) | j , m ± 1 ⟩ J z | j , m ⟩ = ℏ m | j , m ⟩ {\displaystyle {\begin{aligned}J_{\pm }|j,m\rangle &=\hbar {\sqrt {(j\mp m)(j\pm m+1)}}|j,m\pm 1\rangle \\[1ex]J_{z}|j,m\rangle &=\hbar m|j,m\rangle \end{aligned}}} Since J x {\displaystyle J_{x}} and J y {\displaystyle J_{y}} are linear combinations of J ± {\displaystyle J_{\pm }} , they share the same similarity due to linearity. 
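These commutation relations can be verified numerically in the simplest case, taking the spherical components of J itself as the j = 1 spherical tensor (a sketch assuming ħ = 1 and the standard spin-1 matrices):

```python
import numpy as np

s = 1 / np.sqrt(2)
jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
jp, jm = jx + 1j * jy, jx - 1j * jy    # ladder operators J+ and J-

def comm(a, b):
    return a @ b - b @ a

# Spherical components of the vector operator V = J (hbar = 1)
V = {+1: -(jx + 1j * jy) / np.sqrt(2),
      0: jz,
     -1: (jx - 1j * jy) / np.sqrt(2)}

# [Jz, V_q] = q V_q
for q in (+1, 0, -1):
    assert np.allclose(comm(jz, V[q]), q * V[q])

# [J+, V_q] = sqrt((j - q)(j + q + 1)) V_{q+1} with j = 1
for q in (0, -1):
    assert np.allclose(comm(jp, V[q]), np.sqrt((1 - q) * (1 + q + 1)) * V[q + 1])
assert np.allclose(comm(jp, V[+1]), 0)

# [J-, V_q] = sqrt((j + q)(j - q + 1)) V_{q-1}
for q in (+1, 0):
    assert np.allclose(comm(jm, V[q]), np.sqrt((1 + q) * (1 - q + 1)) * V[q - 1])
assert np.allclose(comm(jm, V[-1]), 0)
```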
If only the commutation relations hold, then using the following relation, | j , m ⟩ → U ( R ) | j , m ⟩ = exp ⁡ ( − i θ ℏ n ^ ⋅ J → ) | j , m ⟩ = ∑ m ′ D m ′ m ( j ) ( R ) | j , m ′ ⟩ {\displaystyle |j,m\rangle \rightarrow U(R)|j,m\rangle =\exp \left(-{i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,m\rangle =\sum _{m'}D_{m'm}^{(j)}(R)|j,m'\rangle } we find due to similarity of actions of J {\displaystyle J} on wavefunction | j , m ⟩ {\displaystyle |j,m\rangle } and the commutation relations on T ^ m ( j ) {\displaystyle {\widehat {T}}_{m}^{(j)}} , that: T ^ m ( j ) → U ( R ) † T ^ m ( j ) U ( R ) = exp ⁡ ( i θ ℏ n ^ ⋅ a d J → ) T ^ m ( j ) = ∑ m ′ D m ′ m ( j ) ( R − 1 ) T ^ m ′ ( j ) {\displaystyle {\widehat {T}}_{m}^{(j)}\rightarrow U(R)^{\dagger }{\widehat {T}}_{m}^{(j)}U(R)=\exp \left({i{\frac {\theta }{\hbar }}{\hat {n}}\cdot ad_{\vec {J}}}\right){\widehat {T}}_{m}^{(j)}=\sum _{m'}D_{m'm}^{(j)}(R^{-1}){\widehat {T}}_{m'}^{(j)}} where the exponential form is given by the Baker–Hausdorff lemma. Hence, the above commutation relations and the transformation property are equivalent definitions of spherical tensor operators. It can also be shown that { a d J ^ i } {\displaystyle \{ad_{{\hat {J}}_{i}}\}} transform like a vector due to their commutation relation. In the following section, the construction of spherical tensors is discussed; for example, the spherical vector operators shown earlier can be used to construct higher-order spherical tensor operators. In general, spherical tensor operators can be constructed from two perspectives. One way is to specify how spherical tensors transform under a physical rotation: a group-theoretical definition. A rotated angular momentum eigenstate can be decomposed into a linear combination of the initial eigenstates: the coefficients in the linear combination consist of Wigner rotation matrix entries.
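For a rotation about the z-axis the Wigner matrix is diagonal, D^(j)_{m'm}(R_z(θ)) = e^{−imθ} δ_{m'm}, so the transformation rule reduces to U† T_q U = e^{iqθ} T_q. A numerical sketch (not from the original text) with the spherical components of J for j = 1, assuming ħ = 1:

```python
import numpy as np

s = 1 / np.sqrt(2)
jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])

# Spherical components V_q of the vector operator J (hbar = 1)
V = {+1: -(jx + 1j * jy) / np.sqrt(2),
      0: np.diag([1.0, 0.0, -1.0]).astype(complex),
     -1: (jx - 1j * jy) / np.sqrt(2)}

theta = 0.9   # arbitrary illustrative angle
# U = exp(-i theta Jz), diagonal in the |1, m> basis with m = +1, 0, -1
U = np.diag(np.exp(-1j * theta * np.array([1.0, 0.0, -1.0])))

# D^(1)(Rz^-1) is diagonal with entries e^{+i q theta}
for q in (+1, 0, -1):
    assert np.allclose(U.conj().T @ V[q] @ U, np.exp(1j * q * theta) * V[q])
```

Each spherical component merely picks up a phase under a z-rotation, exactly as the diagonal Wigner matrix predicts.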
Or by continuing the previous example of the second order dyadic tensor T = a ⊗ b, casting each of a and b into the spherical basis and substituting into T gives the spherical tensor operators of the second order. ==== Construction using Clebsch–Gordan coefficients ==== Combination of two spherical tensors A q 1 ( k 1 ) {\displaystyle A_{q_{1}}^{(k_{1})}} and B q 2 ( k 2 ) {\displaystyle B_{q_{2}}^{(k_{2})}} in the following manner involving the Clebsch–Gordan coefficients can be proved to give another spherical tensor of the form: T q ( k ) = ∑ q 1 , q 2 ⟨ k 1 , k 2 ; q 1 , q 2 | k 1 , k 2 ; k , q ⟩ A q 1 ( k 1 ) B q 2 ( k 2 ) {\displaystyle T_{q}^{(k)}=\sum _{q_{1},q_{2}}\langle k_{1},k_{2};q_{1},q_{2}|k_{1},k_{2};k,q\rangle A_{q_{1}}^{(k_{1})}B_{q_{2}}^{(k_{2})}} This equation can be used to construct higher order spherical tensor operators, for example, second order spherical tensor operators using two first order spherical tensor operators, say A and B, discussed previously: T ^ ± 2 ( 2 ) = a ^ ± 1 b ^ ± 1 T ^ ± 1 ( 2 ) = 1 2 ( a ^ ± 1 b ^ 0 + a ^ 0 b ^ ± 1 ) T ^ 0 ( 2 ) = 1 6 ( a ^ + 1 b ^ − 1 + a ^ − 1 b ^ + 1 + 2 a ^ 0 b ^ 0 ) {\displaystyle {\begin{aligned}{\widehat {T}}_{\pm 2}^{(2)}&={\widehat {a}}_{\pm 1}{\widehat {b}}_{\pm 1}\\[1ex]{\widehat {T}}_{\pm 1}^{(2)}&={\tfrac {1}{\sqrt {2}}}\left({\widehat {a}}_{\pm 1}{\widehat {b}}_{0}+{\widehat {a}}_{0}{\widehat {b}}_{\pm 1}\right)\\[1ex]{\widehat {T}}_{0}^{(2)}&={\tfrac {1}{\sqrt {6}}}\left({\widehat {a}}_{+1}{\widehat {b}}_{-1}+{\widehat {a}}_{-1}{\widehat {b}}_{+1}+2{\widehat {a}}_{0}{\widehat {b}}_{0}\right)\end{aligned}}} Using the infinitesimal rotation operator and its Hermitian conjugate, one can derive the commutation relation in the spherical basis: [ J a , T ^ q ( 2 ) ] = ∑ q ′ D ( J a ) q q ′ ( 2 ) T ^ q ′ ( 2 ) = ∑ q ′ ⟨ j = 2 , m = q | J a | j = 2 , m = q ′ ⟩ T ^ q ′ ( 2 ) {\displaystyle \left[J_{a},{\widehat {T}}_{q}^{(2)}\right]=\sum _{q'}{D(J_{a})}_{qq'}^{(2)}{\widehat {T}}_{q'}^{(2)}=\sum 
_{q'}\langle j{=}2,m{=}q|J_{a}|j{=}2,m{=}q'\rangle {\widehat {T}}_{q'}^{(2)}} and the finite rotation transformation in the spherical basis can be verified: U ( R ) † T ^ q ( 2 ) U ( R ) = ∑ q ′ D ( R ) q q ′ ( 2 ) ∗ T ^ q ′ ( 2 ) {\displaystyle {U(R)}^{\dagger }{\widehat {T}}_{q}^{(2)}U(R)=\sum _{q'}{{D(R)}_{qq'}^{(2)}}^{*}{\widehat {T}}_{q'}^{(2)}} ==== Using spherical harmonics ==== Define an operator by its spectrum: Υ l m | r ⟩ = r l Y l m ( θ , ϕ ) | r ⟩ = Υ l m ( r → ) | r ⟩ {\displaystyle \Upsilon _{l}^{m}|r\rangle =r^{l}Y_{l}^{m}(\theta ,\phi )|r\rangle =\Upsilon _{l}^{m}({\vec {r}})|r\rangle } Since for spherical harmonics under rotation: Y ℓ = k m = q ( n ) = ⟨ n | k , q ⟩ → U ( R ) † Y ℓ = k m = q ( n ) U ( R ) = Y ℓ = k m = q ( R n ) = ⟨ n | D ( R ) † | k , q ⟩ = ∑ q ′ D q ′ , q ( k ) ( R − 1 ) Y ℓ = k m = q ′ ( n ) {\displaystyle Y_{\ell =k}^{m=q}(\mathbf {n} )=\langle \mathbf {n} |k,q\rangle \rightarrow U(R)^{\dagger }Y_{\ell =k}^{m=q}(\mathbf {n} )U(R)=Y_{\ell =k}^{m=q}(R\mathbf {n} )=\langle \mathbf {n} |D(R)^{\dagger }|k,q\rangle =\sum _{q'}D_{q',q}^{(k)}(R^{-1})Y_{\ell =k}^{m=q'}(\mathbf {n} )} It can also be shown that: Υ l m ( r → ) → U ( R ) † Υ l m ( r → ) U ( R ) = ∑ m ′ D m ′ , m ( l ) ( R − 1 ) Υ l m ′ ( r → ) {\displaystyle \Upsilon _{l}^{m}({\vec {r}})\rightarrow U(R)^{\dagger }\Upsilon _{l}^{m}({\vec {r}})U(R)=\sum _{m'}D_{m',m}^{(l)}(R^{-1})\Upsilon _{l}^{m'}({\vec {r}})} Then Υ l m ( V → ) {\displaystyle \Upsilon _{l}^{m}({\vec {V}})} , where V → {\displaystyle {\vec {V}}} is a vector operator, also transforms in the same manner, i.e. it is a spherical tensor operator. The process involves expressing Υ l m ( r → ) = r l Y l m ( θ , ϕ ) = Υ l m ( x , y , z ) {\displaystyle \Upsilon _{l}^{m}({\vec {r}})=r^{l}Y_{l}^{m}(\theta ,\phi )=\Upsilon _{l}^{m}(x,y,z)} in terms of x, y and z and replacing x, y and z with the operators Vx, Vy and Vz, which form a vector operator.
The resultant operator is hence a spherical tensor operator T ^ m ( l ) {\displaystyle {\hat {T}}_{m}^{(l)}} . This may include a constant factor due to the normalization of the spherical harmonics, which is immaterial in the context of operators. The Hermitian adjoint of a spherical tensor may be defined as ( T † ) q ( k ) = ( − 1 ) k − q ( T − q ( k ) ) † . {\displaystyle (T^{\dagger })_{q}^{(k)}=(-1)^{k-q}(T_{-q}^{(k)})^{\dagger }.} There is some arbitrariness in the choice of the phase factor: any factor containing (−1)±q will satisfy the commutation relations. The above choice of phase has the advantages of being real and that the tensor product of two commuting Hermitian operators is still Hermitian. Some authors define it with a different sign on q, without the k, or use only the floor of k. == Angular momentum and spherical harmonics == === Orbital angular momentum and spherical harmonics === Orbital angular momentum operators have the ladder operators: L ± = L x ± i L y {\displaystyle L_{\pm }=L_{x}\pm iL_{y}} which raise or lower the orbital magnetic quantum number mℓ by one unit. This has almost exactly the same form as the spherical basis, aside from constant multiplicative factors. === Spherical tensor operators and quantum spin === Spherical tensors can also be formed from algebraic combinations of the spin operators Sx, Sy, Sz, as matrices, for a spin system with total quantum number j = ℓ + s (and ℓ = 0). Spin operators have the ladder operators: S ± = S x ± i S y {\displaystyle S_{\pm }=S_{x}\pm iS_{y}} which raise or lower the spin magnetic quantum number ms by one unit. == Applications == Spherical bases have broad applications in pure and applied mathematics and physical sciences where spherical geometries occur. === Dipole radiative transitions in a single-electron atom (alkali) === The transition amplitude is proportional to matrix elements of the dipole operator between the initial and final states.
We use an electrostatic, spinless model for the atom and we consider the transition from the initial energy level Enℓ to final level En′ℓ′. These levels are degenerate, since the energy does not depend on the magnetic quantum number m or m′. The wave functions have the form, ψ n ℓ m ( r , θ , ϕ ) = R n ℓ ( r ) Y ℓ m ( θ , ϕ ) {\displaystyle \psi _{n\ell m}(r,\theta ,\phi )=R_{n\ell }(r)Y_{\ell m}(\theta ,\phi )} The dipole operator is proportional to the position operator of the electron, so we must evaluate matrix elements of the form, ⟨ n ′ ℓ ′ m ′ | r | n ℓ m ⟩ {\displaystyle \langle n'\ell 'm'|\mathbf {r} |n\ell m\rangle } where, the initial state is on the right and the final one on the left. The position operator r has three components, and the initial and final levels consist of 2ℓ + 1 and 2ℓ′ + 1 degenerate states, respectively. Therefore if we wish to evaluate the intensity of a spectral line as it would be observed, we really have to evaluate 3(2ℓ′+ 1)(2ℓ+ 1) matrix elements, for example, 3×3×5 = 45 in a 3d → 2p transition. This is actually an exaggeration, as we shall see, because many of the matrix elements vanish, but there are still many non-vanishing matrix elements to be calculated. A great simplification can be achieved by expressing the components of r, not with respect to the Cartesian basis, but with respect to the spherical basis. 
First we define, r q = e ^ q ⋅ r {\displaystyle r_{q}={\hat {\mathbf {e} }}_{q}\cdot \mathbf {r} } Next, by inspecting a table of the Yℓm′s, we find that for ℓ = 1 we have, r Y 11 ( θ , ϕ ) = − r 3 8 π sin ⁡ ( θ ) e i ϕ = 3 4 π ( − x + i y 2 ) r Y 10 ( θ , ϕ ) = r 3 4 π cos ⁡ ( θ ) = 3 4 π z r Y 1 − 1 ( θ , ϕ ) = r 3 8 π sin ⁡ ( θ ) e − i ϕ = 3 4 π ( x − i y 2 ) {\displaystyle {\begin{aligned}rY_{11}(\theta ,\phi )&=&&-r{\sqrt {\frac {3}{8\pi }}}\sin(\theta )e^{i\phi }&=&{\sqrt {\frac {3}{4\pi }}}\left(-{\frac {x+iy}{\sqrt {2}}}\right)\\rY_{10}(\theta ,\phi )&=&&r{\sqrt {\frac {3}{4\pi }}}\cos(\theta )&=&{\sqrt {\frac {3}{4\pi }}}z\\rY_{1-1}(\theta ,\phi )&=&&r{\sqrt {\frac {3}{8\pi }}}\sin(\theta )e^{-i\phi }&=&{\sqrt {\frac {3}{4\pi }}}\left({\frac {x-iy}{\sqrt {2}}}\right)\end{aligned}}} where, we have multiplied each Y1m by the radius r. On the right hand side we see the spherical components rq of the position vector r. The results can be summarized by, r Y 1 q ( θ , ϕ ) = 3 4 π r q {\displaystyle rY_{1q}(\theta ,\phi )={\sqrt {\frac {3}{4\pi }}}r_{q}} for q = 1, 0, −1, where q appears explicitly as a magnetic quantum number. This equation reveals a relationship between vector operators and the angular momentum value ℓ = 1, something we will have more to say about presently. Now the matrix elements become a product of a radial integral times an angular integral, ⟨ n ′ ℓ ′ m ′ | r q | n ℓ m ⟩ = ( ∫ 0 ∞ r 2 d r R n ′ ℓ ′ ∗ ( r ) r R n ℓ ( r ) ) ( 4 π 3 ∫ sin ⁡ ( θ ) d Ω Y ℓ ′ m ′ ∗ ( θ , ϕ ) Y 1 q ( θ , ϕ ) Y ℓ m ( θ , ϕ ) ) {\displaystyle \langle n'\ell 'm'|r_{q}|n\ell m\rangle =\left(\int _{0}^{\infty }r^{2}drR_{n'\ell '}^{*}(r)rR_{n\ell }(r)\right)\left({\sqrt {\frac {4\pi }{3}}}\int \sin {(\theta )}d\Omega Y_{\ell 'm'}^{*}(\theta ,\phi )Y_{1q}(\theta ,\phi )Y_{\ell m}(\theta ,\phi )\right)} We see that all the dependence on the three magnetic quantum numbers (m′,q,m) is contained in the angular part of the integral. 
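The dependence of the angular integral on (m′, q, m) is governed by Clebsch–Gordan coefficients ⟨ℓ′m′|ℓ1mq⟩. A SymPy sketch counting the nonvanishing coefficients for the 3d → 2p case (ℓ = 2, ℓ′ = 1) confirms that only nine of the 3(2ℓ+1)(2ℓ′+1) = 45 matrix elements survive the selection rule:

```python
from sympy import S
from sympy.physics.quantum.cg import CG

l, lp = 2, 1   # 3d -> 2p: initial ell = 2, final ell' = 1

nonzero = 0
for m in range(-l, l + 1):
    for q in (-1, 0, 1):
        for mp in range(-lp, lp + 1):
            # <l' m' | l 1 m q> in SymPy's ordering CG(j1, m1, j2, m2, j3, m3)
            coeff = CG(S(l), S(m), S(1), S(q), S(lp), S(mp)).doit()
            if coeff != 0:
                nonzero += 1
                assert mp == m + q   # selection rule m' = q + m

assert nonzero == 9   # the nine nonvanishing dipole matrix elements
```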
Moreover, the angular integral can be evaluated by the three-Yℓm formula, whereupon it becomes proportional to the Clebsch-Gordan coefficient, ⟨ ℓ ′ m ′ | ℓ 1 m q ⟩ {\displaystyle \langle \ell 'm'|\ell 1mq\rangle } The radial integral is independent of the three magnetic quantum numbers (m′, q, m), and the trick we have just used does not help us to evaluate it. But it is only one integral, and after it has been done, all the other integrals can be evaluated just by computing or looking up Clebsch–Gordan coefficients. The selection rule m′ = q + m in the Clebsch–Gordan coefficient means that many of the integrals vanish, so we have exaggerated the total number of integrals that need to be done. But had we worked with the Cartesian components ri of r, this selection rule might not have been obvious. In any case, even with the selection rule, there may still be many nonzero integrals to be done (nine, in the case 3d → 2p). The example we have just given of simplifying the calculation of matrix elements for a dipole transition is really an application of the Wigner–Eckart theorem, which we take up later in these notes. === Magnetic resonance === The spherical tensor formalism provides a common platform for treating coherence and relaxation in nuclear magnetic resonance. In NMR and EPR, spherical tensor operators are employed to express the quantum dynamics of particle spin, by means of an equation of motion for the density matrix entries, or to formulate dynamics in terms of an equation of motion in Liouville space. The Liouville space equation of motion governs the observable averages of spin variables. When relaxation is formulated using a spherical tensor basis in Liouville space, insight is gained because the relaxation matrix exhibits the cross-relaxation of spin observables directly. 
=== Image processing and computer graphics === == See also == Wigner–Eckart theorem Structure tensor Clebsch–Gordan coefficients for SU(3)
Wikipedia/Tensor_operator
In mathematics and theoretical physics, a tensor is antisymmetric or alternating on (or with respect to) an index subset if it alternates sign (+/−) when any two indices of the subset are interchanged. The index subset must generally either be all covariant or all contravariant. For example, T i j k … = − T j i k … = T j k i … = − T k j i … = T k i j … = − T i k j … {\displaystyle T_{ijk\dots }=-T_{jik\dots }=T_{jki\dots }=-T_{kji\dots }=T_{kij\dots }=-T_{ikj\dots }} holds when the tensor is antisymmetric with respect to its first three indices. If a tensor changes sign under exchange of each pair of its indices, then the tensor is completely (or totally) antisymmetric. A completely antisymmetric covariant tensor field of order k {\displaystyle k} may be referred to as a differential k {\displaystyle k} -form, and a completely antisymmetric contravariant tensor field may be referred to as a k {\displaystyle k} -vector field. == Antisymmetric and symmetric tensors == A tensor A that is antisymmetric on indices i {\displaystyle i} and j {\displaystyle j} has the property that the contraction with a tensor B that is symmetric on indices i {\displaystyle i} and j {\displaystyle j} is identically 0. For a general tensor U with components U i j k … {\displaystyle U_{ijk\dots }} and a pair of indices i {\displaystyle i} and j , {\displaystyle j,} U has symmetric and antisymmetric parts defined as: U ( i j ) k … = 1 2 ( U i j k … + U j i k … ) {\displaystyle U_{(ij)k\dots }={\frac {1}{2}}(U_{ijk\dots }+U_{jik\dots })} (symmetric part) U [ i j ] k … = 1 2 ( U i j k … − U j i k … ) {\displaystyle U_{[ij]k\dots }={\frac {1}{2}}(U_{ijk\dots }-U_{jik\dots })} (antisymmetric part) Similar definitions can be given for other pairs of indices. As the term "part" suggests, a tensor is the sum of its symmetric part and antisymmetric part for a given pair of indices, as in U i j k … = U ( i j ) k … + U [ i j ] k … . {\displaystyle U_{ijk\dots }=U_{(ij)k\dots }+U_{[ij]k\dots }.} == Notation == Anti-symmetrization is denoted by a pair of square brackets around the indices. For example, in arbitrary dimensions, for an order 2 covariant tensor M, M [ a b ] = 1 2 !
( M a b − M b a ) , {\displaystyle M_{[ab]}={\frac {1}{2!}}(M_{ab}-M_{ba}),} and for an order 3 covariant tensor T, T [ a b c ] = 1 3 ! ( T a b c − T a c b + T b c a − T b a c + T c a b − T c b a ) . {\displaystyle T_{[abc]}={\frac {1}{3!}}(T_{abc}-T_{acb}+T_{bca}-T_{bac}+T_{cab}-T_{cba}).} In any number of dimensions, these can be written as M [ a b ] = 1 2 ! δ a b c d M c d , T [ a b c ] = 1 3 ! δ a b c d e f T d e f . {\displaystyle {\begin{aligned}M_{[ab]}&={\frac {1}{2!}}\,\delta _{ab}^{cd}M_{cd},\\[2pt]T_{[abc]}&={\frac {1}{3!}}\,\delta _{abc}^{def}T_{def}.\end{aligned}}} where δ a b … c d … {\displaystyle \delta _{ab\dots }^{cd\dots }} is the generalized Kronecker delta, and the Einstein summation convention is in use. More generally, irrespective of the number of dimensions, antisymmetrization over p {\displaystyle p} indices may be expressed as T [ a 1 … a p ] = 1 p ! δ a 1 … a p b 1 … b p T b 1 … b p . {\displaystyle T_{[a_{1}\dots a_{p}]}={\frac {1}{p!}}\delta _{a_{1}\dots a_{p}}^{b_{1}\dots b_{p}}T_{b_{1}\dots b_{p}}.} In general, every tensor of rank 2 can be decomposed into a symmetric and anti-symmetric pair as: T i j = 1 2 ( T i j + T j i ) + 1 2 ( T i j − T j i ) . {\displaystyle T_{ij}={\frac {1}{2}}(T_{ij}+T_{ji})+{\frac {1}{2}}(T_{ij}-T_{ji}).} This decomposition is not in general true for tensors of rank 3 or more, which have more complex symmetries. == Examples == Totally antisymmetric tensors include: Trivially, all scalars and vectors (tensors of order 0 and 1) are totally antisymmetric (as well as being totally symmetric). The electromagnetic tensor, F μ ν {\displaystyle F_{\mu \nu }} in electromagnetism. The Riemannian volume form on a pseudo-Riemannian manifold.
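The rank-2 decomposition and the vanishing contraction of a symmetric with an antisymmetric tensor, both stated above, can be verified concretely. A minimal sketch in plain Python (the function names are illustrative), representing an order-2 tensor as a nested list:

```python
def sym_part(M):
    # Symmetric part: M_(ij) = (M_ij + M_ji) / 2
    n = len(M)
    return [[(M[i][j] + M[j][i]) / 2 for j in range(n)] for i in range(n)]

def antisym_part(M):
    # Antisymmetric part: M_[ij] = (M_ij - M_ji) / 2!
    n = len(M)
    return [[(M[i][j] - M[j][i]) / 2 for j in range(n)] for i in range(n)]

def contract(A, B):
    # Full contraction A_{ij} B^{ij} over both indices
    n = len(A)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
S, A = sym_part(M), antisym_part(M)

# The tensor is the sum of its symmetric and antisymmetric parts
assert all(M[i][j] == S[i][j] + A[i][j] for i in range(3) for j in range(3))
# Contracting a symmetric tensor with an antisymmetric one gives zero
assert abs(contract(S, A)) < 1e-12
```

The contraction vanishes because each off-diagonal pair of terms cancels and the diagonal of the antisymmetric part is zero, which is the index-free content of the statement above.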
== See also == Antisymmetric matrix – Form of a matrix Exterior algebra – Algebra associated to any vector space Levi-Civita symbol – Antisymmetric permutation object acting on tensors Ricci calculus – Tensor index notation for tensor-based calculations Symmetric tensor – Tensor invariant under permutations of vectors it acts on Symmetrization – Process that converts any function in n variables to a symmetric function in n variables == Notes == == References == Penrose, Roger (2007). The Road to Reality. Vintage Books. ISBN 978-0-679-77631-4. C.W. Misner; K.S. Thorne; J.A. Wheeler (1973). Gravitation. W.H. Freeman & Co. pp. 85–86, §3.5. ISBN 0-7167-0344-0. == External links == Antisymmetric Tensor – mathworld.wolfram.com
Wikipedia/Antisymmetric_tensor
The stress–energy tensor, sometimes called the stress–energy–momentum tensor or the energy–momentum tensor, is a tensor physical quantity that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. This density and flux of energy and momentum are the sources of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity. == Definition == The stress–energy tensor involves the use of superscripted variables (not exponents; see Tensor index notation and Einstein summation notation). If Cartesian coordinates in SI units are used, then the components of the position four-vector x are given by: [ x0, x1, x2, x3 ]. In traditional Cartesian coordinates these are instead customarily written [ t, x, y, z ], where t is coordinate time, and x, y, and z are coordinate distances. The stress–energy tensor is defined as the tensor Tαβ of order two that gives the flux of the αth component of the momentum vector across a surface with constant xβ coordinate. In the theory of relativity, this momentum vector is taken as the four-momentum. In general relativity, the stress–energy tensor is symmetric, T α β = T β α . {\displaystyle T^{\alpha \beta }=T^{\beta \alpha }.} In some alternative theories like Einstein–Cartan theory, the stress–energy tensor may not be perfectly symmetric because of a nonzero spin tensor, which geometrically corresponds to a nonzero torsion tensor. 
== Components == Because the stress–energy tensor is of order 2, its components can be displayed in 4 × 4 matrix form: T μ ν = ( T 00 T 01 T 02 T 03 T 10 T 11 T 12 T 13 T 20 T 21 T 22 T 23 T 30 T 31 T 32 T 33 ) , {\displaystyle T^{\mu \nu }={\begin{pmatrix}T^{00}&T^{01}&T^{02}&T^{03}\\T^{10}&T^{11}&T^{12}&T^{13}\\T^{20}&T^{21}&T^{22}&T^{23}\\T^{30}&T^{31}&T^{32}&T^{33}\end{pmatrix}}\,,} where the indices μ and ν take on the values 0, 1, 2, 3. In the following, k and ℓ range from 1 through 3: In solid state physics and fluid mechanics, the stress tensor is defined to be the spatial components of the stress–energy tensor in the proper frame of reference. In other words, the stress–energy tensor in engineering differs from the relativistic stress–energy tensor by a momentum-convective term. === Covariant and mixed forms === Most of this article works with the contravariant form, Tμν of the stress–energy tensor. However, it is often convenient to work with the covariant form, T μ ν = T α β g α μ g β ν , {\displaystyle T_{\mu \nu }=T^{\alpha \beta }g_{\alpha \mu }g_{\beta \nu },} or the mixed form, T μ ν = T μ α g α ν . {\displaystyle T^{\mu }{}_{\nu }=T^{\mu \alpha }g_{\alpha \nu }.} This article uses the spacelike sign convention (− + + +) for the metric signature. == Conservation law == === In special relativity === The stress–energy tensor is the conserved Noether current associated with spacetime translations. The divergence of the non-gravitational stress–energy is zero. In other words, non-gravitational energy and momentum are conserved, 0 = T μ ν ; ν ≡ ∇ ν T μ ν . {\displaystyle 0=T^{\mu \nu }{}_{;\nu }\ \equiv \ \nabla _{\nu }T^{\mu \nu }{}~.} When gravity is negligible and using a Cartesian coordinate system for spacetime, this may be expressed in terms of partial derivatives as 0 = T μ ν , ν ≡ ∂ ν T μ ν . 
{\displaystyle 0=T^{\mu \nu }{}_{,\nu }\ \equiv \ \partial _{\nu }T^{\mu \nu }~.} The integral form of the non-covariant formulation is 0 = ∫ ∂ N T μ ν d 3 s ν {\displaystyle 0=\int _{\partial N}T^{\mu \nu }\mathrm {d} ^{3}s_{\nu }} where N is any compact four-dimensional region of spacetime; ∂ N {\textstyle \partial N} is its boundary, a three-dimensional hypersurface; and d 3 s ν {\textstyle \mathrm {d} ^{3}s_{\nu }} is an element of the boundary regarded as the outward pointing normal. In flat spacetime and using Cartesian coordinates, if one combines this with the symmetry of the stress–energy tensor, one can show that angular momentum is also conserved: 0 = ( x α T μ ν − x μ T α ν ) , ν . {\displaystyle 0=(x^{\alpha }T^{\mu \nu }-x^{\mu }T^{\alpha \nu })_{,\nu }\,.} === In general relativity === When gravity is non-negligible or when using arbitrary coordinate systems, the divergence of the stress–energy still vanishes. But in this case, a coordinate-free definition of the divergence is used which incorporates the covariant derivative 0 = div ⁡ T = T μ ν ; ν = ∇ ν T μ ν = T μ ν , ν + Γ μ σ ν T σ ν + Γ ν σ ν T μ σ {\displaystyle 0=\operatorname {div} T=T^{\mu \nu }{}_{;\nu }=\nabla _{\nu }T^{\mu \nu }=T^{\mu \nu }{}_{,\nu }+\Gamma ^{\mu }{}_{\sigma \nu }T^{\sigma \nu }+\Gamma ^{\nu }{}_{\sigma \nu }T^{\mu \sigma }} where Γ μ σ ν {\textstyle \Gamma ^{\mu }{}_{\sigma \nu }} is the Christoffel symbol, which is the gravitational force field. Consequently, if ξ μ {\textstyle \xi ^{\mu }} is any Killing vector field, then the conservation law associated with the symmetry generated by the Killing vector field may be expressed as 0 = ∇ ν ( ξ μ T ν μ ) = 1 − g ∂ ν ( − g ξ μ T μ ν ) {\displaystyle 0=\nabla _{\nu }\left(\xi ^{\mu }T^{\nu }{}_{\mu }\right)={\frac {1}{\sqrt {-g}}}\partial _{\nu }\left({\sqrt {-g}}\ \xi ^{\mu }T_{\mu }^{\nu }\right)} The integral form of this is 0 = ∫ ∂ N ξ μ T ν μ − g d 3 s ν . 
{\displaystyle 0=\int _{\partial N}\xi ^{\mu }T^{\nu }{}_{\mu }{\sqrt {-g}}\ \mathrm {d} ^{3}s_{\nu }\,.} == In special relativity == In special relativity, the stress–energy tensor contains information about the energy and momentum densities of a given system, in addition to the momentum and energy flux densities. Given a Lagrangian density L {\textstyle {\mathcal {L}}} that is a function of a set of fields ϕ α {\textstyle \phi _{\alpha }} and their derivatives, but explicitly not of any of the spacetime coordinates, we can construct the canonical stress–energy tensor by looking at the total derivative with respect to one of the generalized coordinates of the system. So, with our condition ∂ L ∂ x ν = 0 {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial x^{\nu }}}=0} By using the chain rule, we then have d L d x ν = d ν L = ∂ L ∂ ( ∂ μ ϕ α ) ∂ ( ∂ μ ϕ α ) ∂ x ν + ∂ L ∂ ϕ α ∂ ϕ α ∂ x ν {\displaystyle {\frac {d{\mathcal {L}}}{dx^{\nu }}}=d_{\nu }{\mathcal {L}}={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}{\frac {\partial (\partial _{\mu }\phi _{\alpha })}{\partial x^{\nu }}}+{\frac {\partial {\mathcal {L}}}{\partial \phi _{\alpha }}}{\frac {\partial \phi _{\alpha }}{\partial x^{\nu }}}} Written in useful shorthand, d ν L = ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ∂ μ ϕ α + ∂ L ∂ ϕ α ∂ ν ϕ α {\displaystyle d_{\nu }{\mathcal {L}}={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\partial _{\mu }\phi _{\alpha }+{\frac {\partial {\mathcal {L}}}{\partial \phi _{\alpha }}}\partial _{\nu }\phi _{\alpha }} Then, we can use the Euler–Lagrange Equation: ∂ μ ( ∂ L ∂ ( ∂ μ ϕ α ) ) = ∂ L ∂ ϕ α {\displaystyle \partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\right)={\frac {\partial {\mathcal {L}}}{\partial \phi _{\alpha }}}} And then use the fact that partial derivatives commute so that we now have d ν L = ∂ L ∂ ( ∂ μ ϕ α ) ∂ μ ∂ ν ϕ α + ∂ μ ( ∂ L ∂ ( ∂ μ ϕ α ) ) ∂ ν ϕ 
α {\displaystyle d_{\nu }{\mathcal {L}}={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\mu }\partial _{\nu }\phi _{\alpha }+\partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\right)\partial _{\nu }\phi _{\alpha }} We can recognize the right hand side as a product rule. Writing it as the derivative of a product of functions tells us that d ν L = ∂ μ [ ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ϕ α ] {\displaystyle d_{\nu }{\mathcal {L}}=\partial _{\mu }\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\phi _{\alpha }\right]} Now, in flat space, one can write d ν L = ∂ μ [ δ ν μ L ] {\textstyle d_{\nu }{\mathcal {L}}=\partial _{\mu }[\delta _{\nu }^{\mu }{\mathcal {L}}]} . Doing this and moving it to the other side of the equation tells us that ∂ μ [ ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ϕ α ] − ∂ μ ( δ ν μ L ) = 0 {\displaystyle \partial _{\mu }\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\phi _{\alpha }\right]-\partial _{\mu }\left(\delta _{\nu }^{\mu }{\mathcal {L}}\right)=0} And upon regrouping terms, ∂ μ [ ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ϕ α − δ ν μ L ] = 0 {\displaystyle \partial _{\mu }\left[{\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\phi _{\alpha }-\delta _{\nu }^{\mu }{\mathcal {L}}\right]=0} This is to say that the divergence of the tensor in the brackets is 0. Indeed, with this, we define the stress–energy tensor: T μ ν ≡ ∂ L ∂ ( ∂ μ ϕ α ) ∂ ν ϕ α − δ ν μ L {\displaystyle T^{\mu }{}_{\nu }\equiv {\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\nu }\phi _{\alpha }-\delta _{\nu }^{\mu }{\mathcal {L}}} By construction it has the property that ∂ μ T μ ν = 0 {\displaystyle \partial _{\mu }T^{\mu }{}_{\nu }=0} Note that this divergenceless property of this tensor is equivalent to four continuity equations. 
That is, fields have at least four sets of quantities that obey the continuity equation. As an example, it can be seen that T 0 0 {\textstyle T^{0}{}_{0}} is the energy density of the system and that it is thus possible to obtain the Hamiltonian density from the stress–energy tensor. Indeed, since this is the case, observing that ∂ μ T μ 0 = 0 {\textstyle \partial _{\mu }T^{\mu }{}_{0}=0} , we then have ∂ H ∂ t + ∇ ⋅ ( ∂ L ∂ ∇ ϕ α ϕ ˙ α ) = 0 {\displaystyle {\frac {\partial {\mathcal {H}}}{\partial t}}+\nabla \cdot \left({\frac {\partial {\mathcal {L}}}{\partial \nabla \phi _{\alpha }}}{\dot {\phi }}_{\alpha }\right)=0} We can then conclude that the terms of ∂ L ∂ ∇ ϕ α ϕ ˙ α {\textstyle {\frac {\partial {\mathcal {L}}}{\partial \nabla \phi _{\alpha }}}{\dot {\phi }}_{\alpha }} represent the energy flux density of the system. === Trace === The trace of the stress–energy tensor is defined to be ⁠ T μ μ {\displaystyle T^{\mu }{}_{\mu }} ⁠, so T μ μ = ∂ L ∂ ( ∂ μ ϕ α ) ∂ μ ϕ α − δ μ μ L . {\displaystyle T^{\mu }{}_{\mu }={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\mu }\phi _{\alpha }-\delta _{\mu }^{\mu }{\mathcal {L}}.} Since ⁠ δ μ μ = 4 {\displaystyle \delta _{\mu }^{\mu }=4} ⁠, T μ μ = ∂ L ∂ ( ∂ μ ϕ α ) ∂ μ ϕ α − 4 L . {\displaystyle T^{\mu }{}_{\mu }={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{\alpha })}}\partial _{\mu }\phi _{\alpha }-4{\mathcal {L}}.} == In general relativity == In general relativity, the symmetric stress–energy tensor acts as the source of spacetime curvature, and is the current density associated with gauge transformations of gravity which are general curvilinear coordinate transformations. (If there is torsion, then the tensor is no longer symmetric. This corresponds to the case with a nonzero spin tensor in Einstein–Cartan gravity theory.) In general relativity, the partial derivatives used in special relativity are replaced by covariant derivatives. 
What this means is that the continuity equation no longer implies that the non-gravitational energy and momentum expressed by the tensor are absolutely conserved, i.e. the gravitational field can do work on matter and vice versa. In the classical limit of Newtonian gravity, this has a simple interpretation: kinetic energy is being exchanged with gravitational potential energy, which is not included in the tensor, and momentum is being transferred through the field to other bodies. In general relativity the Landau–Lifshitz pseudotensor is one way to define the gravitational field energy and momentum densities. Any such stress–energy pseudotensor can be made to vanish locally by a coordinate transformation. In curved spacetime, the spacelike integral now depends on the spacelike slice, in general. There is in fact no way to define a global energy–momentum vector in a general curved spacetime. === Einstein field equations === In general relativity, the stress–energy tensor is studied in the context of the Einstein field equations which are often written as G μ ν + Λ g μ ν = κ T μ ν , {\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },} where G μ ν = R μ ν − 1 2 R g μ ν {\textstyle G_{\mu \nu }=R_{\mu \nu }-{\tfrac {1}{2}}R\,g_{\mu \nu }} is the Einstein tensor, R μ ν {\textstyle R_{\mu \nu }} is the Ricci tensor, R = g α β R α β {\textstyle R=g^{\alpha \beta }R_{\alpha \beta }} is the scalar curvature, g μ ν {\textstyle g_{\mu \nu }\,} is the metric tensor, Λ is the cosmological constant (negligible at the scale of a galaxy or smaller), and κ = 8 π G / c 4 {\textstyle \kappa =8\pi G/c^{4}} is the Einstein gravitational constant.
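As a rough numerical aside (using approximate values for G and c, not exact CODATA figures), the Einstein gravitational constant κ = 8πG/c⁴ works out to about 2 × 10⁻⁴³ s² m⁻¹ kg⁻¹, which is why an enormous stress–energy is needed to curve spacetime appreciably:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (approximate)
c = 2.998e8     # speed of light, m/s (approximate)

# Einstein gravitational constant kappa = 8*pi*G/c^4, ~2.08e-43 s^2 m^-1 kg^-1
kappa = 8 * math.pi * G / c**4
assert 2.0e-43 < kappa < 2.2e-43
```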
== Stress–energy in special situations == === Isolated particle === In special relativity, the stress–energy of a non-interacting particle with rest mass m and trajectory x p ( t ) {\textstyle \mathbf {x} _{\text{p}}(t)} is: T α β ( x , t ) = m v α ( t ) v β ( t ) 1 − ( v / c ) 2 δ ( x − x p ( t ) ) = E c 2 v α ( t ) v β ( t ) δ ( x − x p ( t ) ) {\displaystyle T^{\alpha \beta }(\mathbf {x} ,t)={\frac {m\,v^{\alpha }(t)v^{\beta }(t)}{\sqrt {1-(v/c)^{2}}}}\;\,\delta \left(\mathbf {x} -\mathbf {x} _{\text{p}}(t)\right)={\frac {E}{c^{2}}}\;v^{\alpha }(t)v^{\beta }(t)\;\,\delta (\mathbf {x} -\mathbf {x} _{\text{p}}(t))} where v α {\textstyle v^{\alpha }} is the velocity vector (which should not be confused with four-velocity, since it is missing a γ {\textstyle \gamma } ) v α = ( 1 , d x p d t ( t ) ) , {\displaystyle v^{\alpha }=\left(1,{\frac {d\mathbf {x} _{\text{p}}}{dt}}(t)\right)\,,} δ {\textstyle \delta } is the Dirac delta function and E = p 2 c 2 + m 2 c 4 {\textstyle E={\sqrt {p^{2}c^{2}+m^{2}c^{4}}}} is the energy of the particle. Written in the language of classical physics, the stress–energy tensor would be (relativistic mass, momentum, the dyadic product of momentum and velocity) ( E c 2 , p , p v ) . {\displaystyle \left({\frac {E}{c^{2}}},\,\mathbf {p} ,\,\mathbf {p} \,\mathbf {v} \right)\,.} === Stress–energy of a fluid in equilibrium === For a perfect fluid in thermodynamic equilibrium, the stress–energy tensor takes on a particularly simple form T α β = ( ρ + p c 2 ) u α u β + p g α β {\displaystyle T^{\alpha \beta }\,=\left(\rho +{p \over c^{2}}\right)u^{\alpha }u^{\beta }+pg^{\alpha \beta }} where ρ {\textstyle \rho } is the mass–energy density (kilograms per cubic meter), p {\textstyle p} is the hydrostatic pressure (pascals), u α {\textstyle u^{\alpha }} is the fluid's four-velocity, and g α β {\textstyle g^{\alpha \beta }} is the matrix inverse of the metric tensor. Therefore, the trace is given by T α α = g α β T β α = 3 p − ρ c 2 . 
{\displaystyle T^{\alpha }{}_{\,\alpha }=g_{\alpha \beta }T^{\beta \alpha }=3p-\rho c^{2}\,.} The four-velocity satisfies u α u β g α β = − c 2 . {\displaystyle u^{\alpha }u^{\beta }g_{\alpha \beta }=-c^{2}\,.} In an inertial frame of reference comoving with the fluid, better known as the fluid's proper frame of reference, the four-velocity is u α = ( 1 , 0 , 0 , 0 ) , {\displaystyle u^{\alpha }=(1,0,0,0)\,,} the matrix inverse of the metric tensor is simply g α β = ( − 1 c 2 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle g^{\alpha \beta }\,=\left({\begin{matrix}-{\frac {1}{c^{2}}}&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{matrix}}\right)} and the stress–energy tensor is a diagonal matrix T α β = ( ρ 0 0 0 0 p 0 0 0 0 p 0 0 0 0 p ) . {\displaystyle T^{\alpha \beta }=\left({\begin{matrix}\rho &0&0&0\\0&p&0&0\\0&0&p&0\\0&0&0&p\end{matrix}}\right).} === Electromagnetic stress–energy tensor === The Hilbert stress–energy tensor of a source-free electromagnetic field is T μ ν = 1 μ 0 ( F μ α g α β F ν β − 1 4 g μ ν F δ γ F δ γ ) {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left(F^{\mu \alpha }g_{\alpha \beta }F^{\nu \beta }-{\frac {1}{4}}g^{\mu \nu }F_{\delta \gamma }F^{\delta \gamma }\right)} where F μ ν {\textstyle F_{\mu \nu }} is the electromagnetic field tensor. 
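As a consistency check on the perfect-fluid formulas above, the following sketch builds T^{αβ} in the comoving frame (the values of ρ and p are illustrative, and units with c = 1 are assumed) and verifies the diagonal form and the trace 3p − ρc²:

```python
c = 1.0                 # geometric units with c = 1 (values below are illustrative)
rho, p = 2.5, 0.3       # mass-energy density and pressure in the comoving frame

# Comoving-frame metric g_{ab} and its inverse g^{ab} (signature - + + +)
g     = [[-c**2, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
g_inv = [[-1 / c**2, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
u = [1.0, 0.0, 0.0, 0.0]   # four-velocity of the fluid in its own rest frame

# T^{ab} = (rho + p/c^2) u^a u^b + p g^{ab}
T = [[(rho + p / c**2) * u[a] * u[b] + p * g_inv[a][b]
      for b in range(4)] for a in range(4)]

assert abs(T[0][0] - rho) < 1e-12                        # energy density
assert all(abs(T[k][k] - p) < 1e-12 for k in (1, 2, 3))  # isotropic pressure

# Trace T^a_a = g_{ab} T^{ba} = 3p - rho c^2
trace = sum(g[a][b] * T[b][a] for a in range(4) for b in range(4))
assert abs(trace - (3 * p - rho * c**2)) < 1e-12
```

The p g^{00} term exactly cancels the p/c² contribution in T^{00}, leaving ρ on the diagonal, as in the matrix displayed above.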
=== Scalar field === The stress–energy tensor for a complex scalar field ϕ {\textstyle \phi } that satisfies the Klein–Gordon equation is T μ ν = ℏ 2 m ( g μ α g ν β + g μ β g ν α − g μ ν g α β ) ∂ α ϕ ¯ ∂ β ϕ − g μ ν m c 2 ϕ ¯ ϕ , {\displaystyle T^{\mu \nu }={\frac {\hbar ^{2}}{m}}\left(g^{\mu \alpha }g^{\nu \beta }+g^{\mu \beta }g^{\nu \alpha }-g^{\mu \nu }g^{\alpha \beta }\right)\partial _{\alpha }{\bar {\phi }}\partial _{\beta }\phi -g^{\mu \nu }mc^{2}{\bar {\phi }}\phi ,} and when the metric is flat (Minkowski in Cartesian coordinates) its components work out to be: T 00 = ℏ 2 m c 4 ( ∂ 0 ϕ ¯ ∂ 0 ϕ + c 2 ∂ k ϕ ¯ ∂ k ϕ ) + m ϕ ¯ ϕ , T 0 i = T i 0 = − ℏ 2 m c 2 ( ∂ 0 ϕ ¯ ∂ i ϕ + ∂ i ϕ ¯ ∂ 0 ϕ ) , a n d T i j = ℏ 2 m ( ∂ i ϕ ¯ ∂ j ϕ + ∂ j ϕ ¯ ∂ i ϕ ) − δ i j ( ℏ 2 m η α β ∂ α ϕ ¯ ∂ β ϕ + m c 2 ϕ ¯ ϕ ) . {\displaystyle {\begin{aligned}T^{00}&={\frac {\hbar ^{2}}{mc^{4}}}\left(\partial _{0}{\bar {\phi }}\partial _{0}\phi +c^{2}\partial _{k}{\bar {\phi }}\partial _{k}\phi \right)+m{\bar {\phi }}\phi ,\\T^{0i}=T^{i0}&=-{\frac {\hbar ^{2}}{mc^{2}}}\left(\partial _{0}{\bar {\phi }}\partial _{i}\phi +\partial _{i}{\bar {\phi }}\partial _{0}\phi \right),\ \mathrm {and} \\T^{ij}&={\frac {\hbar ^{2}}{m}}\left(\partial _{i}{\bar {\phi }}\partial _{j}\phi +\partial _{j}{\bar {\phi }}\partial _{i}\phi \right)-\delta _{ij}\left({\frac {\hbar ^{2}}{m}}\eta ^{\alpha \beta }\partial _{\alpha }{\bar {\phi }}\partial _{\beta }\phi +mc^{2}{\bar {\phi }}\phi \right).\end{aligned}}} == Variant definitions of stress–energy == There are a number of inequivalent definitions of non-gravitational stress–energy: === Hilbert stress–energy tensor === The Hilbert stress–energy tensor is defined as the functional derivative T μ ν = − 2 − g δ S m a t t e r δ g μ ν = − 2 − g ∂ ( − g L m a t t e r ) ∂ g μ ν = − 2 ∂ L m a t t e r ∂ g μ ν + g μ ν L m a t t e r , {\displaystyle T_{\mu \nu }={\frac {-2}{\sqrt {-g}}}{\frac {\delta S_{\mathrm {matter} }}{\delta g^{\mu \nu }}}={\frac {-2}{\sqrt 
{-g}}}{\frac {\partial \left({\sqrt {-g}}{\mathcal {L}}_{\mathrm {matter} }\right)}{\partial g^{\mu \nu }}}=-2{\frac {\partial {\mathcal {L}}_{\mathrm {matter} }}{\partial g^{\mu \nu }}}+g_{\mu \nu }{\mathcal {L}}_{\mathrm {matter} },} where S m a t t e r {\textstyle S_{\mathrm {matter} }} is the nongravitational part of the action, L m a t t e r {\textstyle {\mathcal {L}}_{\mathrm {matter} }} is the nongravitational part of the Lagrangian density, and the Euler–Lagrange equation has been used. This is symmetric and gauge-invariant. See Einstein–Hilbert action for more information. === Canonical stress–energy tensor === Noether's theorem implies that there is a conserved current associated with translations through space and time; for details see the section above on the stress–energy tensor in special relativity. This is called the canonical stress–energy tensor. Generally, this is not symmetric and if we have some gauge theory, it may not be gauge invariant because space-dependent gauge transformations do not commute with spatial translations. In general relativity, the translations are with respect to the coordinate system and as such, do not transform covariantly. See the section below on the gravitational stress–energy pseudotensor. === Belinfante–Rosenfeld stress–energy tensor === In the presence of spin or other intrinsic angular momentum, the canonical Noether stress–energy tensor fails to be symmetric. The Belinfante–Rosenfeld stress–energy tensor is constructed from the canonical stress–energy tensor and the spin current in such a way as to be symmetric and still conserved. In general relativity, this modified tensor agrees with the Hilbert stress–energy tensor. == Gravitational stress–energy == By the equivalence principle, gravitational stress–energy will always vanish locally at any chosen point in some chosen frame, therefore gravitational stress–energy cannot be expressed as a non-zero tensor; instead we have to use a pseudotensor. 
In general relativity, there are many possible distinct definitions of the gravitational stress–energy–momentum pseudotensor. These include the Einstein pseudotensor and the Landau–Lifshitz pseudotensor. The Landau–Lifshitz pseudotensor can be reduced to zero at any event in spacetime by choosing an appropriate coordinate system. == See also == == Notes == == References == == Further reading == Wyss, Walter (14 July 2005). "The energy–momentum tensor in classical field theory" (PDF). Universal Journal of Physics and Applications. Old and New Concepts of Physics [prior journal name]. II (3–4): 295–310. ISSN 2331-6543. == External links == Lecture, Stephan Waner Caltech Tutorial on Relativity — A simple discussion of the relation between the stress–energy tensor of general relativity and the metric
Wikipedia/Stress–energy_tensor
In linear algebra, linear transformations can be represented by matrices. If T {\displaystyle T} is a linear transformation mapping R n {\displaystyle \mathbb {R} ^{n}} to R m {\displaystyle \mathbb {R} ^{m}} and x {\displaystyle \mathbf {x} } is a column vector with n {\displaystyle n} entries, then there exists an m × n {\displaystyle m\times n} matrix A {\displaystyle A} , called the transformation matrix of T {\displaystyle T} , such that: T ( x ) = A x {\displaystyle T(\mathbf {x} )=A\mathbf {x} } Note that A {\displaystyle A} has m {\displaystyle m} rows and n {\displaystyle n} columns, whereas the transformation T {\displaystyle T} is from R n {\displaystyle \mathbb {R} ^{n}} to R m {\displaystyle \mathbb {R} ^{m}} . There are alternative expressions of transformation matrices involving row vectors that are preferred by some authors. == Uses == Matrices allow arbitrary linear transformations to be displayed in a consistent format, suitable for computation. This also allows transformations to be composed easily (by multiplying their matrices). Linear transformations are not the only ones that can be represented by matrices. Some transformations that are non-linear on an n-dimensional Euclidean space Rn can be represented as linear transformations on the n+1-dimensional space Rn+1. These include both affine transformations (such as translation) and projective transformations. For this reason, 4×4 transformation matrices are widely used in 3D computer graphics. These n+1-dimensional transformation matrices are called, depending on their application, affine transformation matrices, projective transformation matrices, or more generally non-linear transformation matrices. With respect to an n-dimensional matrix, an n+1-dimensional matrix can be described as an augmented matrix. 
In the physical sciences, an active transformation is one which actually changes the physical position of a system, and makes sense even in the absence of a coordinate system, whereas a passive transformation is a change in the coordinate description of the physical system (change of basis). The distinction between active and passive transformations is important. By default, mathematicians usually mean active transformations when they speak of a transformation, while physicists could mean either. Put differently, a passive transformation refers to the description of the same object as viewed from two different coordinate frames. == Finding the matrix of a transformation == If one has a linear transformation T ( x ) {\displaystyle T(x)} in functional form, it is easy to determine the transformation matrix A by transforming each of the vectors of the standard basis by T, then inserting the result into the columns of a matrix. In other words, A = [ T ( e 1 ) T ( e 2 ) ⋯ T ( e n ) ] {\displaystyle A={\begin{bmatrix}T(\mathbf {e} _{1})&T(\mathbf {e} _{2})&\cdots &T(\mathbf {e} _{n})\end{bmatrix}}} For example, the function T ( x ) = 5 x {\displaystyle T(x)=5x} is a linear transformation. Applying the above process (suppose that n = 2 in this case) reveals that: T ( x ) = 5 x = 5 I x = [ 5 0 0 5 ] x {\displaystyle T(\mathbf {x} )=5\mathbf {x} =5I\mathbf {x} ={\begin{bmatrix}5&0\\0&5\end{bmatrix}}\mathbf {x} } The matrix representation of vectors and operators depends on the chosen basis; a similar matrix will result from an alternate basis. Nevertheless, the method to find the components remains the same.
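The procedure just described, applying T to each standard basis vector and using the results as the columns of A, can be sketched as follows (plain Python, restricted to square n × n maps for brevity; `matrix_of` and `matvec` are our names, not standard):

```python
def matrix_of(T, n):
    # Apply T to each standard basis vector e_j; the results form the columns of A.
    cols = [T([1.0 if i == j else 0.0 for i in range(n)]) for j in range(n)]
    # Transpose the list of columns into a list of rows.
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def matvec(A, v):
    # Matrix-vector product A v.
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def scale5(v):
    # The text's example T(x) = 5x.
    return [5 * x for x in v]

A = matrix_of(scale5, 2)
assert A == [[5.0, 0.0], [0.0, 5.0]]    # A = 5I, matching the example above

v = [1.0, 2.0]
assert matvec(A, v) == scale5(v)        # A x reproduces T(x)
```

For a map from R^n to R^m the same construction works; the matrix simply becomes m × n rather than square.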
To elaborate, vector v {\displaystyle \mathbf {v} } can be represented in basis vectors, E = [ e 1 e 2 ⋯ e n ] {\displaystyle E={\begin{bmatrix}\mathbf {e} _{1}&\mathbf {e} _{2}&\cdots &\mathbf {e} _{n}\end{bmatrix}}} with coordinates [ v ] E = [ v 1 v 2 ⋯ v n ] T {\displaystyle [\mathbf {v} ]_{E}={\begin{bmatrix}v_{1}&v_{2}&\cdots &v_{n}\end{bmatrix}}^{\mathrm {T} }} : v = v 1 e 1 + v 2 e 2 + ⋯ + v n e n = ∑ i v i e i = E [ v ] E {\displaystyle \mathbf {v} =v_{1}\mathbf {e} _{1}+v_{2}\mathbf {e} _{2}+\cdots +v_{n}\mathbf {e} _{n}=\sum _{i}v_{i}\mathbf {e} _{i}=E[\mathbf {v} ]_{E}} Now, express the result of the transformation matrix A upon v {\displaystyle \mathbf {v} } , in the given basis: A ( v ) = A ( ∑ i v i e i ) = ∑ i v i A ( e i ) = [ A ( e 1 ) A ( e 2 ) ⋯ A ( e n ) ] [ v ] E = A ⋅ [ v ] E = [ e 1 e 2 ⋯ e n ] [ a 1 , 1 a 1 , 2 ⋯ a 1 , n a 2 , 1 a 2 , 2 ⋯ a 2 , n ⋮ ⋮ ⋱ ⋮ a n , 1 a n , 2 ⋯ a n , n ] [ v 1 v 2 ⋮ v n ] {\displaystyle {\begin{aligned}A(\mathbf {v} )&=A\left(\sum _{i}v_{i}\mathbf {e} _{i}\right)=\sum _{i}{v_{i}A(\mathbf {e} _{i})}\\&={\begin{bmatrix}A(\mathbf {e} _{1})&A(\mathbf {e} _{2})&\cdots &A(\mathbf {e} _{n})\end{bmatrix}}[\mathbf {v} ]_{E}=A\cdot [\mathbf {v} ]_{E}\\[3pt]&={\begin{bmatrix}\mathbf {e} _{1}&\mathbf {e} _{2}&\cdots &\mathbf {e} _{n}\end{bmatrix}}{\begin{bmatrix}a_{1,1}&a_{1,2}&\cdots &a_{1,n}\\a_{2,1}&a_{2,2}&\cdots &a_{2,n}\\\vdots &\vdots &\ddots &\vdots \\a_{n,1}&a_{n,2}&\cdots &a_{n,n}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}\end{aligned}}} The a i , j {\displaystyle a_{i,j}} elements of matrix A are determined for a given basis E by applying A to every e j = [ 0 0 ⋯ ( v j = 1 ) ⋯ 0 ] T {\displaystyle \mathbf {e} _{j}={\begin{bmatrix}0&0&\cdots &(v_{j}=1)&\cdots &0\end{bmatrix}}^{\mathrm {T} }} , and observing the response vector A e j = a 1 , j e 1 + a 2 , j e 2 + ⋯ + a n , j e n = ∑ i a i , j e i . 
{\displaystyle A\mathbf {e} _{j}=a_{1,j}\mathbf {e} _{1}+a_{2,j}\mathbf {e} _{2}+\cdots +a_{n,j}\mathbf {e} _{n}=\sum _{i}a_{i,j}\mathbf {e} _{i}.} This equation defines the wanted elements, a i , j {\displaystyle a_{i,j}} , of the j-th column of the matrix A. === Eigenbasis and diagonal matrix === For some operators there is a special basis in which the components form a diagonal matrix, so that applying the operator requires only n scalar multiplications. Being diagonal means that all coefficients a i , j {\displaystyle a_{i,j}} except a i , i {\displaystyle a_{i,i}} are zero, leaving only one term in the sum ∑ a i , j e i {\textstyle \sum a_{i,j}\mathbf {e} _{i}} above. The surviving diagonal elements, a i , i {\displaystyle a_{i,i}} , are known as eigenvalues and designated with λ i {\displaystyle \lambda _{i}} in the defining equation, which reduces to A e i = λ i e i {\displaystyle A\mathbf {e} _{i}=\lambda _{i}\mathbf {e} _{i}} . The resulting equation is known as the eigenvalue equation. The eigenvectors and eigenvalues are derived from it via the characteristic polynomial. With diagonalization, it is often possible to translate to and from eigenbases. == Examples in 2 dimensions == Most common geometric transformations that keep the origin fixed are linear, including rotation, scaling, shearing, reflection, and orthogonal projection; if an affine transformation is not a pure translation it keeps some point fixed, and that point can be chosen as the origin to make the transformation linear. In two dimensions, linear transformations can be represented using a 2×2 transformation matrix. === Stretching === A stretch in the xy-plane is a linear transformation which enlarges all distances in a particular direction by a constant factor but does not affect distances in the perpendicular direction. We only consider stretches along the x-axis and y-axis. A stretch along the x-axis has the form x' = kx; y' = y for some positive constant k.
(Note that if k > 1, then this really is a "stretch"; if k < 1, it is technically a "compression", but we still call it a stretch. Also, if k = 1, then the transformation is an identity, i.e. it has no effect.) The matrix associated with a stretch by a factor k along the x-axis is given by: [ k 0 0 1 ] {\displaystyle {\begin{bmatrix}k&0\\0&1\end{bmatrix}}} Similarly, a stretch by a factor k along the y-axis has the form x' = x; y' = ky, so the matrix associated with this transformation is [ 1 0 0 k ] {\displaystyle {\begin{bmatrix}1&0\\0&k\end{bmatrix}}} === Squeezing === If the two stretches above are combined with reciprocal values, then the transformation matrix represents a squeeze mapping: [ k 0 0 1 / k ] . {\displaystyle {\begin{bmatrix}k&0\\0&1/k\end{bmatrix}}.} A square with sides parallel to the axes is transformed to a rectangle that has the same area as the square. The reciprocal stretch and compression leave the area invariant. === Rotation === For rotation by an angle θ counterclockwise (positive direction) about the origin the functional form is x ′ = x cos ⁡ θ − y sin ⁡ θ {\displaystyle x'=x\cos \theta -y\sin \theta } and y ′ = x sin ⁡ θ + y cos ⁡ θ {\displaystyle y'=x\sin \theta +y\cos \theta } . 
Written in matrix form, this becomes: [ x ′ y ′ ] = [ cos ⁡ θ − sin ⁡ θ sin ⁡ θ cos ⁡ θ ] [ x y ] {\displaystyle {\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}} Similarly, for a rotation clockwise (negative direction) about the origin, the functional form is x ′ = x cos ⁡ θ + y sin ⁡ θ {\displaystyle x'=x\cos \theta +y\sin \theta } and y ′ = − x sin ⁡ θ + y cos ⁡ θ {\displaystyle y'=-x\sin \theta +y\cos \theta } the matrix form is: [ x ′ y ′ ] = [ cos ⁡ θ sin ⁡ θ − sin ⁡ θ cos ⁡ θ ] [ x y ] {\displaystyle {\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}} These formulae assume that the x axis points right and the y axis points up. === Shearing === For shear mapping (visually similar to slanting), there are two possibilities. A shear parallel to the x axis has x ′ = x + k y {\displaystyle x'=x+ky} and y ′ = y {\displaystyle y'=y} . Written in matrix form, this becomes: [ x ′ y ′ ] = [ 1 k 0 1 ] [ x y ] {\displaystyle {\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}1&k\\0&1\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}} A shear parallel to the y axis has x ′ = x {\displaystyle x'=x} and y ′ = y + k x {\displaystyle y'=y+kx} , which has matrix form: [ x ′ y ′ ] = [ 1 0 k 1 ] [ x y ] {\displaystyle {\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}1&0\\k&1\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}} === Reflection === For reflection about a line that goes through the origin, let l = ( l x , l y ) {\displaystyle \mathbf {l} =(l_{x},l_{y})} be a vector in the direction of the line. 
Then the transformation matrix is: A = 1 ‖ l ‖ 2 [ l x 2 − l y 2 2 l x l y 2 l x l y l y 2 − l x 2 ] {\displaystyle \mathbf {A} ={\frac {1}{\lVert \mathbf {l} \rVert ^{2}}}{\begin{bmatrix}l_{x}^{2}-l_{y}^{2}&2l_{x}l_{y}\\2l_{x}l_{y}&l_{y}^{2}-l_{x}^{2}\end{bmatrix}}} === Orthogonal projection === To project a vector orthogonally onto a line that goes through the origin, let u = ( u x , u y ) {\displaystyle \mathbf {u} =(u_{x},u_{y})} be a vector in the direction of the line. Then the transformation matrix is: A = 1 ‖ u ‖ 2 [ u x 2 u x u y u x u y u y 2 ] {\displaystyle \mathbf {A} ={\frac {1}{\lVert \mathbf {u} \rVert ^{2}}}{\begin{bmatrix}u_{x}^{2}&u_{x}u_{y}\\u_{x}u_{y}&u_{y}^{2}\end{bmatrix}}} As with reflections, the orthogonal projection onto a line that does not pass through the origin is an affine, not linear, transformation. Parallel projections are also linear transformations and can be represented simply by a matrix. However, perspective projections are not, and to represent these with a matrix, homogeneous coordinates can be used. == Examples in 3D computer graphics == === Rotation === The matrix to rotate an angle θ about any axis defined by unit vector (x,y,z) is [ x x ( 1 − cos ⁡ θ ) + cos ⁡ θ y x ( 1 − cos ⁡ θ ) − z sin ⁡ θ z x ( 1 − cos ⁡ θ ) + y sin ⁡ θ x y ( 1 − cos ⁡ θ ) + z sin ⁡ θ y y ( 1 − cos ⁡ θ ) + cos ⁡ θ z y ( 1 − cos ⁡ θ ) − x sin ⁡ θ x z ( 1 − cos ⁡ θ ) − y sin ⁡ θ y z ( 1 − cos ⁡ θ ) + x sin ⁡ θ z z ( 1 − cos ⁡ θ ) + cos ⁡ θ ] . 
{\displaystyle {\begin{bmatrix}xx(1-\cos \theta )+\cos \theta &yx(1-\cos \theta )-z\sin \theta &zx(1-\cos \theta )+y\sin \theta \\xy(1-\cos \theta )+z\sin \theta &yy(1-\cos \theta )+\cos \theta &zy(1-\cos \theta )-x\sin \theta \\xz(1-\cos \theta )-y\sin \theta &yz(1-\cos \theta )+x\sin \theta &zz(1-\cos \theta )+\cos \theta \end{bmatrix}}.} === Reflection === To reflect a point through a plane a x + b y + c z = 0 {\displaystyle ax+by+cz=0} (which goes through the origin), one can use A = I − 2 N N T {\displaystyle \mathbf {A} =\mathbf {I} -2\mathbf {NN} ^{\mathrm {T} }} , where I {\displaystyle \mathbf {I} } is the 3×3 identity matrix and N {\displaystyle \mathbf {N} } is the three-dimensional unit vector for the vector normal of the plane. If the L2 norm of a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} is unity, the transformation matrix can be expressed as: A = [ 1 − 2 a 2 − 2 a b − 2 a c − 2 a b 1 − 2 b 2 − 2 b c − 2 a c − 2 b c 1 − 2 c 2 ] {\displaystyle \mathbf {A} ={\begin{bmatrix}1-2a^{2}&-2ab&-2ac\\-2ab&1-2b^{2}&-2bc\\-2ac&-2bc&1-2c^{2}\end{bmatrix}}} Note that these are particular cases of a Householder reflection in two and three dimensions. A reflection about a line or plane that does not go through the origin is not a linear transformation — it is an affine transformation — as a 4×4 affine transformation matrix, it can be expressed as follows (assuming the normal is a unit vector): [ x ′ y ′ z ′ 1 ] = [ 1 − 2 a 2 − 2 a b − 2 a c − 2 a d − 2 a b 1 − 2 b 2 − 2 b c − 2 b d − 2 a c − 2 b c 1 − 2 c 2 − 2 c d 0 0 0 1 ] [ x y z 1 ] {\displaystyle {\begin{bmatrix}x'\\y'\\z'\\1\end{bmatrix}}={\begin{bmatrix}1-2a^{2}&-2ab&-2ac&-2ad\\-2ab&1-2b^{2}&-2bc&-2bd\\-2ac&-2bc&1-2c^{2}&-2cd\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}x\\y\\z\\1\end{bmatrix}}} where d = − p ⋅ N {\displaystyle d=-\mathbf {p} \cdot \mathbf {N} } for some point p {\displaystyle \mathbf {p} } on the plane, or equivalently, a x + b y + c z + d = 0 {\displaystyle ax+by+cz+d=0} . 
If the 4th component of the vector is 0 instead of 1, then only the vector's direction is reflected and its magnitude remains unchanged, as if it were mirrored through a parallel plane that passes through the origin. This is a useful property as it allows the transformation of both positional vectors and normal vectors with the same matrix. See homogeneous coordinates and affine transformations below for further explanation. == Composing and inverting transformations == One of the main motivations for using matrices to represent linear transformations is that transformations can then be easily composed and inverted. Composition is accomplished by matrix multiplication. Row and column vectors are operated upon by matrices, rows on the left and columns on the right. Since text reads from left to right, column vectors are preferred when transformation matrices are composed: If A and B are the matrices of two linear transformations, then the effect of first applying A and then B to a column vector x {\displaystyle \mathbf {x} } is given by: B ( A x ) = ( B A ) x . {\displaystyle \mathbf {B} (\mathbf {A} \mathbf {x} )=(\mathbf {BA} )\mathbf {x} .} In other words, the matrix of the combined transformation A followed by B is simply the product of the individual matrices. When A is an invertible matrix there is a matrix A−1 that represents a transformation that "undoes" A since its composition with A is the identity matrix. In some practical applications, inversion can be computed using general inversion algorithms or by performing inverse operations (that have obvious geometric interpretation, like rotating in opposite direction) and then composing them in reverse order. Reflection matrices are a special case because they are their own inverses and don't need to be separately calculated. == Other kinds of transformations == === Affine transformations === To represent affine transformations with matrices, we can use homogeneous coordinates. 
This means representing a 2-vector (x, y) as a 3-vector (x, y, 1), and similarly for higher dimensions. Using this system, translation can be expressed with matrix multiplication. The functional form x ′ = x + t x ; y ′ = y + t y {\displaystyle x'=x+t_{x};y'=y+t_{y}} becomes: [ x ′ y ′ 1 ] = [ 1 0 t x 0 1 t y 0 0 1 ] [ x y 1 ] . {\displaystyle {\begin{bmatrix}x'\\y'\\1\end{bmatrix}}={\begin{bmatrix}1&0&t_{x}\\0&1&t_{y}\\0&0&1\end{bmatrix}}{\begin{bmatrix}x\\y\\1\end{bmatrix}}.} All ordinary linear transformations are included in the set of affine transformations, and can be described as a simplified form of affine transformations. Therefore, any linear transformation can also be represented by a general transformation matrix. The latter is obtained by expanding the corresponding linear transformation matrix by one row and column, filling the extra space with zeros except for the lower-right corner, which must be set to 1. For example, the counter-clockwise rotation matrix from above becomes: [ cos ⁡ θ − sin ⁡ θ 0 sin ⁡ θ cos ⁡ θ 0 0 0 1 ] {\displaystyle {\begin{bmatrix}\cos \theta &-\sin \theta &0\\\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}} Using transformation matrices containing homogeneous coordinates, translations become linear, and thus can be seamlessly intermixed with all other types of transformations. The reason is that the real plane is mapped to the w = 1 plane in real projective space, and so translation in real Euclidean space can be represented as a shear in real projective space. Although a translation is a non-linear transformation in a 2-D or 3-D Euclidean space described by Cartesian coordinates (i.e. it can't be combined with other transformations while preserving commutativity and other properties), it becomes, in a 3-D or 4-D projective space described by homogeneous coordinates, a simple linear transformation (a shear). More affine transformations can be obtained by composition of two or more affine transformations. 
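Translation via homogeneous coordinates, and its composition with a rotation matrix padded as described above, can be sketched numerically. This is an illustrative sketch with assumed values for t_x, t_y and θ; the rotation occupies the upper-left 2×2 block.

```python
import numpy as np

tx, ty = 4.0, -2.0
T = np.array([[1.0, 0.0, tx],
              [0.0, 1.0, ty],
              [0.0, 0.0, 1.0]])      # homogeneous translation matrix

theta = np.pi / 2                    # 90 degrees counterclockwise
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # padded rotation

p = np.array([1.0, 1.0, 1.0])        # the point (1, 1) in homogeneous form

moved = T @ p                        # translate (1, 1) to (5, -1)
rot_then_move = (T @ R) @ p          # rotate first, then translate
```

Because translation is now a matrix product, it composes with rotation exactly like any other linear map; reversing the factor order would give a different result, which is why the order of composition matters.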
For example, given a translation T' with vector ( t x ′ , t y ′ ) , {\displaystyle (t'_{x},t'_{y}),} a rotation R by an angle θ counter-clockwise, a scaling S with factors ( s x , s y ) {\displaystyle (s_{x},s_{y})} and a translation T of vector ( t x , t y ) , {\displaystyle (t_{x},t_{y}),} the result M of T'RST is: [ s x cos ⁡ θ − s y sin ⁡ θ t x s x cos ⁡ θ − t y s y sin ⁡ θ + t x ′ s x sin ⁡ θ s y cos ⁡ θ t x s x sin ⁡ θ + t y s y cos ⁡ θ + t y ′ 0 0 1 ] {\displaystyle {\begin{bmatrix}s_{x}\cos \theta &-s_{y}\sin \theta &t_{x}s_{x}\cos \theta -t_{y}s_{y}\sin \theta +t'_{x}\\s_{x}\sin \theta &s_{y}\cos \theta &t_{x}s_{x}\sin \theta +t_{y}s_{y}\cos \theta +t'_{y}\\0&0&1\end{bmatrix}}} When using affine transformations, the homogeneous component of a coordinate vector (normally called w) will never be altered. One can therefore safely assume that it is always 1 and ignore it. However, this is not true when using perspective projections. === Perspective projection === Another type of transformation, of importance in 3D computer graphics, is the perspective projection. Whereas parallel projections are used to project points onto the image plane along parallel lines, the perspective projection projects points onto the image plane along lines that emanate from a single point, called the center of projection. This means that an object has a smaller projection when it is far away from the center of projection and a larger projection when it is closer (see also reciprocal function). The simplest perspective projection uses the origin as the center of projection, and the plane at z = 1 {\displaystyle z=1} as the image plane. The functional form of this transformation is then x ′ = x / z {\displaystyle x'=x/z} ; y ′ = y / z {\displaystyle y'=y/z} . 
We can express this in homogeneous coordinates as: [ x c y c z c w c ] = [ 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 ] [ x y z 1 ] = [ x y z z ] {\displaystyle {\begin{bmatrix}x_{c}\\y_{c}\\z_{c}\\w_{c}\end{bmatrix}}={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&1&0\end{bmatrix}}{\begin{bmatrix}x\\y\\z\\1\end{bmatrix}}={\begin{bmatrix}x\\y\\z\\z\end{bmatrix}}} After carrying out the matrix multiplication, the homogeneous component w c {\displaystyle w_{c}} will be equal to the value of z {\displaystyle z} and the other three will not change. Therefore, to map back into the real plane we must perform the homogeneous divide or perspective divide by dividing each component by w c {\displaystyle w_{c}} : [ x ′ y ′ z ′ 1 ] = 1 w c [ x c y c z c w c ] = [ x / z y / z 1 1 ] {\displaystyle {\begin{bmatrix}x'\\y'\\z'\\1\end{bmatrix}}={\frac {1}{w_{c}}}{\begin{bmatrix}x_{c}\\y_{c}\\z_{c}\\w_{c}\end{bmatrix}}={\begin{bmatrix}x/z\\y/z\\1\\1\end{bmatrix}}} More complicated perspective projections can be composed by combining this one with rotations, scales, translations, and shears to move the image plane and center of projection wherever they are desired. == See also == 3D projection Change of basis Image rectification Pose (computer vision) Rigid transformation Transformation (function) Transformation geometry == References == == External links == The Matrix Page Practical examples in POV-Ray Reference page - Rotation of axes Linear Transformation Calculator Transformation Applet - Generate matrices from 2D transformations and vice versa. Coordinate transformation under rotation in 2D Excel Fun - Build 3D graphics from a spreadsheet
Wikipedia/Transformation_matrix
Curvilinear coordinates can be formulated in tensor calculus, with important applications in physics and engineering, particularly for describing transportation of physical quantities and deformation of matter in fluid mechanics and continuum mechanics. == Vector and tensor algebra in three-dimensional curvilinear coordinates == Elementary vector and tensor algebra in curvilinear coordinates is used in some of the older scientific literature in mechanics and physics and can be indispensable to understanding work from the early and mid 1900s, for example the text by Green and Zerna. Some useful relations in the algebra of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, Naghdi, Simmonds, Green and Zerna, Basar and Weichert, and Ciarlet. === Coordinate transformations === Consider two coordinate systems with coordinate variables ( Z 1 , Z 2 , Z 3 ) {\displaystyle (Z^{1},Z^{2},Z^{3})} and ( Z 1 ´ , Z 2 ´ , Z 3 ´ ) {\displaystyle (Z^{\acute {1}},Z^{\acute {2}},Z^{\acute {3}})} , which we shall represent in short as just Z i {\displaystyle Z^{i}} and Z i ´ {\displaystyle Z^{\acute {i}}} respectively, always assuming that the index i {\displaystyle i} runs from 1 through 3. We shall assume that these coordinate systems are embedded in three-dimensional Euclidean space. The coordinates Z i {\displaystyle Z^{i}} and Z i ´ {\displaystyle Z^{\acute {i}}} may be used to describe each other, because as we move along a coordinate line in one coordinate system we can use the other to describe our position.
In this way, the coordinates Z i {\displaystyle Z^{i}} and Z i ´ {\displaystyle Z^{\acute {i}}} are functions of each other: Z i = f i ( Z 1 ´ , Z 2 ´ , Z 3 ´ ) {\displaystyle Z^{i}=f^{i}(Z^{\acute {1}},Z^{\acute {2}},Z^{\acute {3}})} for i = 1 , 2 , 3 {\displaystyle i=1,2,3} which can be written as Z i = Z i ( Z 1 ´ , Z 2 ´ , Z 3 ´ ) = Z i ( Z i ´ ) {\displaystyle Z^{i}=Z^{i}(Z^{\acute {1}},Z^{\acute {2}},Z^{\acute {3}})=Z^{i}(Z^{\acute {i}})} for i ´ , i = 1 , 2 , 3 {\displaystyle {\acute {i}},i=1,2,3} These three equations together are also called a coordinate transformation from Z i ´ {\displaystyle Z^{\acute {i}}} to Z i {\displaystyle Z^{i}} . Let us denote this transformation by T {\displaystyle T} . We will therefore represent the transformation from the coordinate system with coordinate variables Z i ´ {\displaystyle Z^{\acute {i}}} to the coordinate system with coordinates Z i {\displaystyle Z^{i}} as: Z = T ( z ´ ) {\displaystyle Z=T({\acute {z}})} Similarly, we can represent Z i ´ {\displaystyle Z^{\acute {i}}} as a function of Z i {\displaystyle Z^{i}} as follows: Z i ´ = g i ´ ( Z 1 , Z 2 , Z 3 ) {\displaystyle Z^{\acute {i}}=g^{\acute {i}}(Z^{1},Z^{2},Z^{3})} for i ´ = 1 , 2 , 3 {\displaystyle {\acute {i}}=1,2,3} and we can write the three equations more compactly as Z i ´ = Z i ´ ( Z 1 , Z 2 , Z 3 ) = Z i ´ ( Z i ) {\displaystyle Z^{\acute {i}}=Z^{\acute {i}}(Z^{1},Z^{2},Z^{3})=Z^{\acute {i}}(Z^{i})} for i ´ , i = 1 , 2 , 3 {\displaystyle {\acute {i}},i=1,2,3} These three equations together are also called a coordinate transformation from Z i {\displaystyle Z^{i}} to Z i ´ {\displaystyle Z^{\acute {i}}} . Let us denote this transformation by S {\displaystyle S} .
We will represent the transformation from the coordinate system with coordinate variables Z i {\displaystyle Z^{i}} to the coordinate system with coordinates Z i ´ {\displaystyle Z^{\acute {i}}} as: z ´ = S ( z ) {\displaystyle {\acute {z}}=S(z)} If the transformation T {\displaystyle T} is bijective then we call the image of the transformation, namely Z i {\displaystyle Z^{i}} , a set of admissible coordinates for Z i ´ {\displaystyle Z^{\acute {i}}} . If T {\displaystyle T} is linear the coordinate system Z i {\displaystyle Z^{i}} will be called an affine coordinate system, otherwise Z i {\displaystyle Z^{i}} is called a curvilinear coordinate system. ==== The Jacobian ==== Since the coordinates Z i {\displaystyle Z^{i}} and Z i ´ {\displaystyle Z^{\acute {i}}} are functions of each other, we can take the derivative of the coordinate variable Z i {\displaystyle Z^{i}} with respect to the coordinate variable Z i ´ {\displaystyle Z^{\acute {i}}} . Consider ∂ Z i ∂ Z i ´ = d e f J i ´ i {\displaystyle {\frac {\partial {Z^{i}}}{\partial {Z^{\acute {i}}}}}\;{\overset {\underset {\mathrm {def} }{}}{=}}\;J_{\acute {i}}^{i}} for i ´ , i = 1 , 2 , 3 {\displaystyle {\acute {i}},i=1,2,3} . These derivatives can be arranged in a matrix, say J {\displaystyle J} , in which J i ´ i {\displaystyle J_{\acute {i}}^{i}} is the element in the i {\displaystyle i} -th row and i ´ {\displaystyle {\acute {i}}} -th column: J = ( J 1 ´ 1 J 2 ´ 1 J 3 ´ 1 J 1 ´ 2 J 2 ´ 2 J 3 ´ 2 J 1 ´ 3 J 2 ´ 3 J 3 ´ 3 ) = ( ∂ Z 1 ∂ Z 1 ´ ∂ Z 1 ∂ Z 2 ´ ∂ Z 1 ∂ Z 3 ´ ∂ Z 2 ∂ Z 1 ´ ∂ Z 2 ∂ Z 2 ´ ∂ Z 2 ∂ Z 3 ´ ∂ Z 3 ∂ Z 1 ´ ∂ Z 3 ∂ Z 2 ´ ∂ Z 3 ∂ Z 3 ´ ) {\displaystyle J={\begin{pmatrix}J_{\acute {1}}^{1}&J_{\acute {2}}^{1}&J_{\acute {3}}^{1}\\J_{\acute {1}}^{2}&J_{\acute {2}}^{2}&J_{\acute {3}}^{2}\\J_{\acute {1}}^{3}&J_{\acute {2}}^{3}&J_{\acute {3}}^{3}\end{pmatrix}}={\begin{pmatrix}{\partial {Z^{1}} \over \partial {Z^{\acute {1}}}}&{\partial {Z^{1}} \over \partial {Z^{\acute {2}}}}&{\partial 
{Z^{1}} \over \partial {Z^{\acute {3}}}}\\{\partial {Z^{2}} \over \partial {Z^{\acute {1}}}}&{\partial {Z^{2}} \over \partial {Z^{\acute {2}}}}&{\partial {Z^{2}} \over \partial {Z^{\acute {3}}}}\\{\partial {Z^{3}} \over \partial {Z^{\acute {1}}}}&{\partial {Z^{3}} \over \partial {Z^{\acute {2}}}}&{\partial {Z^{3}} \over \partial {Z^{\acute {3}}}}\end{pmatrix}}} The resultant matrix is called the Jacobian matrix. === Vectors in curvilinear coordinates === Let ( b 1 , b 2 , b 3 ) {\displaystyle (\mathbf {b} _{1},\mathbf {b} _{2},\mathbf {b} _{3})} be an arbitrary basis for three-dimensional Euclidean space. In general, the basis vectors are neither unit vectors nor mutually orthogonal. However, they are required to be linearly independent. Then a vector v {\displaystyle \mathbf {v} } can be expressed as: 27  v = v k b k {\displaystyle \mathbf {v} =v^{k}\,\mathbf {b} _{k}} The components v k {\displaystyle v^{k}} are the contravariant components of the vector v {\displaystyle \mathbf {v} } . The reciprocal basis ( b 1 , b 2 , b 3 ) {\displaystyle (\mathbf {b} ^{1},\mathbf {b} ^{2},\mathbf {b} ^{3})} is defined by the relation : 28–29  b i ⋅ b j = δ j i {\displaystyle \mathbf {b} ^{i}\cdot \mathbf {b} _{j}=\delta _{j}^{i}} where δ j i {\displaystyle \delta _{j}^{i}} is the Kronecker delta. The vector v {\displaystyle \mathbf {v} } can also be expressed in terms of the reciprocal basis: v = v k b k {\displaystyle \mathbf {v} =v_{k}~\mathbf {b} ^{k}} The components v k {\displaystyle v_{k}} are the covariant components of the vector v {\displaystyle \mathbf {v} } . 
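The reciprocal basis and the two kinds of components can be computed concretely: if the columns of a matrix B are b_1, b_2, b_3, then the rows of B⁻¹ satisfy bⁱ · b_j = δⁱ_j by construction. The sketch below uses an arbitrary non-orthogonal (but linearly independent) basis, chosen purely for illustration.

```python
import numpy as np

# Columns of B are a non-orthogonal, linearly independent basis b_1, b_2, b_3.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Rows of inv(B) form the reciprocal basis: b^i . b_j = delta^i_j.
B_recip = np.linalg.inv(B)

v = np.array([2.0, 3.0, 4.0])        # a vector in Cartesian components

v_contra = B_recip @ v               # contravariant components v^k, so v = v^k b_k
v_cov = B.T @ v                      # covariant components v_k = v . b_k, so v = v_k b^k
```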
=== Second-order tensors in curvilinear coordinates === A second-order tensor can be expressed as S = S i j b i ⊗ b j = S j i b i ⊗ b j = S i j b i ⊗ b j = S i j b i ⊗ b j {\displaystyle {\boldsymbol {S}}=S^{ij}~\mathbf {b} _{i}\otimes \mathbf {b} _{j}=S_{~j}^{i}~\mathbf {b} _{i}\otimes \mathbf {b} ^{j}=S_{i}^{~j}~\mathbf {b} ^{i}\otimes \mathbf {b} _{j}=S_{ij}~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}} The components S i j {\displaystyle S^{ij}} are called the contravariant components, S j i {\displaystyle S_{~j}^{i}} the mixed right-covariant components, S i j {\displaystyle S_{i}^{~j}} the mixed left-covariant components, and S i j {\displaystyle S_{ij}} the covariant components of the second-order tensor. ==== Metric tensor and relations between components ==== The quantities g i j {\displaystyle g_{ij}} , g i j {\displaystyle g^{ij}} are defined as: 39  g i j = b i ⋅ b j = g j i ; g i j = b i ⋅ b j = g j i {\displaystyle g_{ij}=\mathbf {b} _{i}\cdot \mathbf {b} _{j}=g_{ji}~;~~g^{ij}=\mathbf {b} ^{i}\cdot \mathbf {b} ^{j}=g^{ji}} From the above equations we have v i = g i k v k ; v i = g i k v k ; b i = g i j b j ; b i = g i j b j {\displaystyle v^{i}=g^{ik}~v_{k}~;~~v_{i}=g_{ik}~v^{k}~;~~\mathbf {b} ^{i}=g^{ij}~\mathbf {b} _{j}~;~~\mathbf {b} _{i}=g_{ij}~\mathbf {b} ^{j}} The components of a vector are related by: 30–32  v ⋅ b i = v k b k ⋅ b i = v k δ k i = v i {\displaystyle \mathbf {v} \cdot \mathbf {b} ^{i}=v^{k}~\mathbf {b} _{k}\cdot \mathbf {b} ^{i}=v^{k}~\delta _{k}^{i}=v^{i}} v ⋅ b i = v k b k ⋅ b i = v k δ i k = v i {\displaystyle \mathbf {v} \cdot \mathbf {b} _{i}=v_{k}~\mathbf {b} ^{k}\cdot \mathbf {b} _{i}=v_{k}~\delta _{i}^{k}=v_{i}} Also, v ⋅ b i = v k b k ⋅ b i = g k i v k {\displaystyle \mathbf {v} \cdot \mathbf {b} _{i}=v^{k}~\mathbf {b} _{k}\cdot \mathbf {b} _{i}=g_{ki}~v^{k}} v ⋅ b i = v k b k ⋅ b i = g k i v k {\displaystyle \mathbf {v} \cdot \mathbf {b} ^{i}=v_{k}~\mathbf {b} ^{k}\cdot \mathbf {b} ^{i}=g^{ki}~v_{k}} The components of the 
second-order tensor are related by S i j = g i k S k j = g j k S k i = g i k g j l S k l {\displaystyle S^{ij}=g^{ik}~S_{k}^{~j}=g^{jk}~S_{~k}^{i}=g^{ik}~g^{jl}~S_{kl}} === The alternating tensor === In an orthonormal right-handed basis, the third-order alternating tensor is defined as E = ε i j k e i ⊗ e j ⊗ e k {\displaystyle {\boldsymbol {\mathcal {E}}}=\varepsilon _{ijk}~\mathbf {e} ^{i}\otimes \mathbf {e} ^{j}\otimes \mathbf {e} ^{k}} In a general curvilinear basis the same tensor may be expressed as E = E i j k b i ⊗ b j ⊗ b k = E i j k b i ⊗ b j ⊗ b k {\displaystyle {\boldsymbol {\mathcal {E}}}={\mathcal {E}}_{ijk}~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}\otimes \mathbf {b} ^{k}={\mathcal {E}}^{ijk}~\mathbf {b} _{i}\otimes \mathbf {b} _{j}\otimes \mathbf {b} _{k}} It can be shown that E i j k = [ b i , b j , b k ] = ( b i × b j ) ⋅ b k ; E i j k = [ b i , b j , b k ] {\displaystyle {\mathcal {E}}_{ijk}=\left[\mathbf {b} _{i},\mathbf {b} _{j},\mathbf {b} _{k}\right]=(\mathbf {b} _{i}\times \mathbf {b} _{j})\cdot \mathbf {b} _{k}~;~~{\mathcal {E}}^{ijk}=\left[\mathbf {b} ^{i},\mathbf {b} ^{j},\mathbf {b} ^{k}\right]} Now, b i × b j = J ε i j p b p = g ε i j p b p {\displaystyle \mathbf {b} _{i}\times \mathbf {b} _{j}=J~\varepsilon _{ijp}~\mathbf {b} ^{p}={\sqrt {g}}~\varepsilon _{ijp}~\mathbf {b} ^{p}} Hence, E i j k = J ε i j k = g ε i j k {\displaystyle {\mathcal {E}}_{ijk}=J~\varepsilon _{ijk}={\sqrt {g}}~\varepsilon _{ijk}} Similarly, we can show that E i j k = 1 J ε i j k = 1 g ε i j k {\displaystyle {\mathcal {E}}^{ijk}={\cfrac {1}{J}}~\varepsilon ^{ijk}={\cfrac {1}{\sqrt {g}}}~\varepsilon ^{ijk}} === Vector operations === ==== Identity map ==== The identity map I {\displaystyle \mathbf {I} } defined by I ⋅ v = v {\displaystyle \mathbf {I} \cdot \mathbf {v} =\mathbf {v} } can be shown to be:: 39  I = g i j b i ⊗ b j = g i j b i ⊗ b j = b i ⊗ b i = b i ⊗ b i {\displaystyle \mathbf {I} =g^{ij}\mathbf {b} _{i}\otimes \mathbf {b} _{j}=g_{ij}\mathbf {b} 
^{i}\otimes \mathbf {b} ^{j}=\mathbf {b} _{i}\otimes \mathbf {b} ^{i}=\mathbf {b} ^{i}\otimes \mathbf {b} _{i}} ==== Scalar (dot) product ==== The scalar product of two vectors in curvilinear coordinates is: 32  u ⋅ v = u i v i = u i v i = g i j u i v j = g i j u i v j {\displaystyle \mathbf {u} \cdot \mathbf {v} =u^{i}v_{i}=u_{i}v^{i}=g_{ij}u^{i}v^{j}=g^{ij}u_{i}v_{j}} ==== Vector (cross) product ==== The cross product of two vectors is given by:: 32–34  u × v = ε i j k u j v k e i {\displaystyle \mathbf {u} \times \mathbf {v} =\varepsilon _{ijk}u_{j}v_{k}\mathbf {e} _{i}} where ε i j k {\displaystyle \varepsilon _{ijk}} is the permutation symbol and e i {\displaystyle \mathbf {e} _{i}} is a Cartesian basis vector. In curvilinear coordinates, the equivalent expression is: u × v = [ ( b m × b n ) ⋅ b s ] u m v n b s = E s m n u m v n b s {\displaystyle \mathbf {u} \times \mathbf {v} =[(\mathbf {b} _{m}\times \mathbf {b} _{n})\cdot \mathbf {b} _{s}]u^{m}v^{n}\mathbf {b} ^{s}={\mathcal {E}}_{smn}u^{m}v^{n}\mathbf {b} ^{s}} where E i j k {\displaystyle {\mathcal {E}}_{ijk}} is the third-order alternating tensor. The cross product of two vectors is given by: u × v = ε i j k u ^ j v ^ k e i {\displaystyle \mathbf {u} \times \mathbf {v} =\varepsilon _{ijk}{\hat {u}}_{j}{\hat {v}}_{k}\mathbf {e} _{i}} where ε i j k {\displaystyle \varepsilon _{ijk}} is the permutation symbol and e i {\displaystyle \mathbf {e} _{i}} is a Cartesian basis vector. Therefore, e p × e q = ε i p q e i {\displaystyle \mathbf {e} _{p}\times \mathbf {e} _{q}=\varepsilon _{ipq}\mathbf {e} _{i}} and b m × b n = ∂ x ∂ q m × ∂ x ∂ q n = ∂ ( x p e p ) ∂ q m × ∂ ( x q e q ) ∂ q n = ∂ x p ∂ q m ∂ x q ∂ q n e p × e q = ε i p q ∂ x p ∂ q m ∂ x q ∂ q n e i . 
{\displaystyle \mathbf {b} _{m}\times \mathbf {b} _{n}={\frac {\partial \mathbf {x} }{\partial q^{m}}}\times {\frac {\partial \mathbf {x} }{\partial q^{n}}}={\frac {\partial (x_{p}\mathbf {e} _{p})}{\partial q^{m}}}\times {\frac {\partial (x_{q}\mathbf {e} _{q})}{\partial q^{n}}}={\frac {\partial x_{p}}{\partial q^{m}}}{\frac {\partial x_{q}}{\partial q^{n}}}\mathbf {e} _{p}\times \mathbf {e} _{q}=\varepsilon _{ipq}{\frac {\partial x_{p}}{\partial q^{m}}}{\frac {\partial x_{q}}{\partial q^{n}}}\mathbf {e} _{i}.} Hence, ( b m × b n ) ⋅ b s = ε i p q ∂ x p ∂ q m ∂ x q ∂ q n ∂ x i ∂ q s {\displaystyle (\mathbf {b} _{m}\times \mathbf {b} _{n})\cdot \mathbf {b} _{s}=\varepsilon _{ipq}{\frac {\partial x_{p}}{\partial q^{m}}}{\frac {\partial x_{q}}{\partial q^{n}}}{\frac {\partial x_{i}}{\partial q^{s}}}} Returning to the vector product and using the relations: u ^ j = ∂ x j ∂ q m u m , v ^ k = ∂ x k ∂ q n v n , e i = ∂ x i ∂ q s b s , {\displaystyle {\hat {u}}_{j}={\frac {\partial x_{j}}{\partial q^{m}}}u^{m},\quad {\hat {v}}_{k}={\frac {\partial x_{k}}{\partial q^{n}}}v^{n},\quad \mathbf {e} _{i}={\frac {\partial x_{i}}{\partial q^{s}}}\mathbf {b} ^{s},} gives us: u × v = ε i j k u ^ j v ^ k e i = ε i j k ∂ x j ∂ q m ∂ x k ∂ q n ∂ x i ∂ q s u m v n b s = [ ( b m × b n ) ⋅ b s ] u m v n b s = E s m n u m v n b s {\displaystyle \mathbf {u} \times \mathbf {v} =\varepsilon _{ijk}{\hat {u}}_{j}{\hat {v}}_{k}\mathbf {e} _{i}=\varepsilon _{ijk}{\frac {\partial x_{j}}{\partial q^{m}}}{\frac {\partial x_{k}}{\partial q^{n}}}{\frac {\partial x_{i}}{\partial q^{s}}}u^{m}v^{n}\mathbf {b} ^{s}=[(\mathbf {b} _{m}\times \mathbf {b} _{n})\cdot \mathbf {b} _{s}]u^{m}v^{n}\mathbf {b} ^{s}={\mathcal {E}}_{smn}u^{m}v^{n}\mathbf {b} ^{s}} === Tensor operations === ==== Identity map ==== The identity map I {\displaystyle {\mathsf {I}}} defined by I ⋅ v = v {\displaystyle {\mathsf {I}}\cdot \mathbf {v} =\mathbf {v} } can be shown to be: 39  I = g i j b i ⊗ b j = g i j b i ⊗ b j = b i ⊗ b i = 
b i ⊗ b i {\displaystyle {\mathsf {I}}=g^{ij}\mathbf {b} _{i}\otimes \mathbf {b} _{j}=g_{ij}\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}=\mathbf {b} _{i}\otimes \mathbf {b} ^{i}=\mathbf {b} ^{i}\otimes \mathbf {b} _{i}} ==== Action of a second-order tensor on a vector ==== The action v = S u {\displaystyle \mathbf {v} ={\boldsymbol {S}}\mathbf {u} } can be expressed in curvilinear coordinates as v i b i = S i j u j b i = S j i u j b i ; v i b i = S i j u j b i = S i j u j b i {\displaystyle v^{i}\mathbf {b} _{i}=S^{ij}u_{j}\mathbf {b} _{i}=S_{j}^{i}u^{j}\mathbf {b} _{i};\qquad v_{i}\mathbf {b} ^{i}=S_{ij}u^{j}\mathbf {b} ^{i}=S_{i}^{j}u_{j}\mathbf {b} ^{i}} ==== Inner product of two second-order tensors ==== The inner product of two second-order tensors U = S ⋅ T {\displaystyle {\boldsymbol {U}}={\boldsymbol {S}}\cdot {\boldsymbol {T}}} can be expressed in curvilinear coordinates as U i j b i ⊗ b j = S i k T . j k b i ⊗ b j = S i . k T k j b i ⊗ b j {\displaystyle U_{ij}\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}=S_{ik}T_{.j}^{k}\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}=S_{i}^{.k}T_{kj}\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}} Alternatively, U = S i j T . n m g j m b i ⊗ b n = S . m i T . n m b i ⊗ b n = S i j T j n b i ⊗ b n {\displaystyle {\boldsymbol {U}}=S^{ij}T_{.n}^{m}g_{jm}\mathbf {b} _{i}\otimes \mathbf {b} ^{n}=S_{.m}^{i}T_{.n}^{m}\mathbf {b} _{i}\otimes \mathbf {b} ^{n}=S^{ij}T_{jn}\mathbf {b} _{i}\otimes \mathbf {b} ^{n}} ==== Determinant of a second-order tensor ==== If S {\displaystyle {\boldsymbol {S}}} is a second-order tensor, then the determinant is defined by the relation [ S u , S v , S w ] = det S [ u , v , w ] {\displaystyle \left[{\boldsymbol {S}}\mathbf {u} ,{\boldsymbol {S}}\mathbf {v} ,{\boldsymbol {S}}\mathbf {w} \right]=\det {\boldsymbol {S}}\left[\mathbf {u} ,\mathbf {v} ,\mathbf {w} \right]} where u , v , w {\displaystyle \mathbf {u} ,\mathbf {v} ,\mathbf {w} } are arbitrary vectors and [ u , v , w ] := u ⋅ ( v × w ) . 
{\displaystyle \left[\mathbf {u} ,\mathbf {v} ,\mathbf {w} \right]:=\mathbf {u} \cdot (\mathbf {v} \times \mathbf {w} ).} === Relations between curvilinear and Cartesian basis vectors === Let ( e 1 , e 2 , e 3 ) {\displaystyle (\mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3})} be the usual Cartesian basis vectors for the Euclidean space of interest and let b i = F e i {\displaystyle \mathbf {b} _{i}={\boldsymbol {F}}\mathbf {e} _{i}} where F {\displaystyle {\boldsymbol {F}}} is a second-order transformation tensor that maps e i {\displaystyle \mathbf {e} _{i}} to b i {\displaystyle \mathbf {b} _{i}} . Then, b i ⊗ e i = ( F e i ) ⊗ e i = F ( e i ⊗ e i ) = F . {\displaystyle \mathbf {b} _{i}\otimes \mathbf {e} _{i}=({\boldsymbol {F}}\mathbf {e} _{i})\otimes \mathbf {e} _{i}={\boldsymbol {F}}(\mathbf {e} _{i}\otimes \mathbf {e} _{i})={\boldsymbol {F}}~.} From this relation we can show that b i = F − T e i ; g i j = [ F − 1 F − T ] i j ; g i j = [ g i j ] − 1 = [ F T F ] i j {\displaystyle \mathbf {b} ^{i}={\boldsymbol {F}}^{-{\rm {T}}}\mathbf {e} ^{i}~;~~g^{ij}=[{\boldsymbol {F}}^{-{\rm {1}}}{\boldsymbol {F}}^{-{\rm {T}}}]_{ij}~;~~g_{ij}=[g^{ij}]^{-1}=[{\boldsymbol {F}}^{\rm {T}}{\boldsymbol {F}}]_{ij}} Let J := det F {\displaystyle J:=\det {\boldsymbol {F}}} be the Jacobian of the transformation. Then, from the definition of the determinant, [ b 1 , b 2 , b 3 ] = det F [ e 1 , e 2 , e 3 ] . {\displaystyle \left[\mathbf {b} _{1},\mathbf {b} _{2},\mathbf {b} _{3}\right]=\det {\boldsymbol {F}}\left[\mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}\right]~.} Since [ e 1 , e 2 , e 3 ] = 1 {\displaystyle \left[\mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}\right]=1} we have J = det F = [ b 1 , b 2 , b 3 ] = b 1 ⋅ ( b 2 × b 3 ) {\displaystyle J=\det {\boldsymbol {F}}=\left[\mathbf {b} _{1},\mathbf {b} _{2},\mathbf {b} _{3}\right]=\mathbf {b} _{1}\cdot (\mathbf {b} _{2}\times \mathbf {b} _{3})} A number of interesting results can be derived using the above relations. 
First, consider g := det [ g i j ] {\displaystyle g:=\det[g_{ij}]} Then g = det [ F T ] ⋅ det [ F ] = J ⋅ J = J 2 {\displaystyle g=\det[{\boldsymbol {F}}^{\rm {T}}]\cdot \det[{\boldsymbol {F}}]=J\cdot J=J^{2}} Similarly, we can show that det [ g i j ] = 1 J 2 {\displaystyle \det[g^{ij}]={\cfrac {1}{J^{2}}}} Therefore, using the fact that [ g i j ] = [ g i j ] − 1 {\displaystyle [g^{ij}]=[g_{ij}]^{-1}} , ∂ g ∂ g i j = 2 J ∂ J ∂ g i j = g g i j {\displaystyle {\cfrac {\partial g}{\partial g_{ij}}}=2~J~{\cfrac {\partial J}{\partial g_{ij}}}=g~g^{ij}} Another interesting relation is derived below. Recall that b i ⋅ b j = δ j i ⇒ b 1 ⋅ b 1 = 1 , b 1 ⋅ b 2 = b 1 ⋅ b 3 = 0 ⇒ b 1 = A ( b 2 × b 3 ) {\displaystyle \mathbf {b} ^{i}\cdot \mathbf {b} _{j}=\delta _{j}^{i}\quad \Rightarrow \quad \mathbf {b} ^{1}\cdot \mathbf {b} _{1}=1,~\mathbf {b} ^{1}\cdot \mathbf {b} _{2}=\mathbf {b} ^{1}\cdot \mathbf {b} _{3}=0\quad \Rightarrow \quad \mathbf {b} ^{1}=A~(\mathbf {b} _{2}\times \mathbf {b} _{3})} where A {\displaystyle A} is an as-yet-undetermined constant. Then b 1 ⋅ b 1 = A b 1 ⋅ ( b 2 × b 3 ) = A J = 1 ⇒ A = 1 J {\displaystyle \mathbf {b} ^{1}\cdot \mathbf {b} _{1}=A~\mathbf {b} _{1}\cdot (\mathbf {b} _{2}\times \mathbf {b} _{3})=AJ=1\quad \Rightarrow \quad A={\cfrac {1}{J}}} This observation leads to the relations b 1 = 1 J ( b 2 × b 3 ) ; b 2 = 1 J ( b 3 × b 1 ) ; b 3 = 1 J ( b 1 × b 2 ) {\displaystyle \mathbf {b} ^{1}={\cfrac {1}{J}}(\mathbf {b} _{2}\times \mathbf {b} _{3})~;~~\mathbf {b} ^{2}={\cfrac {1}{J}}(\mathbf {b} _{3}\times \mathbf {b} _{1})~;~~\mathbf {b} ^{3}={\cfrac {1}{J}}(\mathbf {b} _{1}\times \mathbf {b} _{2})} In index notation, ε i j k b k = 1 J ( b i × b j ) = 1 g ( b i × b j ) {\displaystyle \varepsilon _{ijk}~\mathbf {b} ^{k}={\cfrac {1}{J}}(\mathbf {b} _{i}\times \mathbf {b} _{j})={\cfrac {1}{\sqrt {g}}}(\mathbf {b} _{i}\times \mathbf {b} _{j})} where ε i j k {\displaystyle \varepsilon _{ijk}} is the usual permutation symbol. 
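These cross-product relations can be checked numerically. The sketch below (NumPy; the non-orthogonal basis vectors are arbitrary illustrative choices) verifies the duality b^i · b_j = δ^i_j for the reciprocal basis built from cross products, and the identity g = det[g_ij] = J²:

```python
import numpy as np

# A skewed (non-orthogonal) covariant basis in Cartesian components -- arbitrary choices.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.5, 1.0, 0.0])
b3 = np.array([0.2, 0.3, 1.0])

J = np.dot(b1, np.cross(b2, b3))        # scalar triple product [b1, b2, b3]

# Reciprocal (contravariant) basis from the cross-product relations above.
B1 = np.cross(b2, b3) / J
B2 = np.cross(b3, b1) / J
B3 = np.cross(b1, b2) / J

# Duality: b^i . b_j = delta^i_j
delta = np.array([[np.dot(Bi, bj) for bj in (b1, b2, b3)] for Bi in (B1, B2, B3)])
assert np.allclose(delta, np.eye(3))

# g = det[g_ij] = J^2
g_cov = np.array([[np.dot(bi, bj) for bj in (b1, b2, b3)] for bi in (b1, b2, b3)])
assert np.isclose(np.linalg.det(g_cov), J**2)
```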
We have not identified an explicit expression for the transformation tensor F {\displaystyle {\boldsymbol {F}}} because an alternative form of the mapping between curvilinear and Cartesian bases is more useful. Assuming a sufficient degree of smoothness in the mapping (and a bit of abuse of notation), we have b i = ∂ x ∂ q i = ∂ x ∂ x j ∂ x j ∂ q i = e j ∂ x j ∂ q i {\displaystyle \mathbf {b} _{i}={\cfrac {\partial \mathbf {x} }{\partial q^{i}}}={\cfrac {\partial \mathbf {x} }{\partial x_{j}}}~{\cfrac {\partial x_{j}}{\partial q^{i}}}=\mathbf {e} _{j}~{\cfrac {\partial x_{j}}{\partial q^{i}}}} Similarly, e i = b j ∂ q j ∂ x i {\displaystyle \mathbf {e} _{i}=\mathbf {b} _{j}~{\cfrac {\partial q^{j}}{\partial x_{i}}}} From these results we have e k ⋅ b i = ∂ x k ∂ q i ⇒ ∂ x k ∂ q i b i = e k ⋅ ( b i ⊗ b i ) = e k {\displaystyle \mathbf {e} ^{k}\cdot \mathbf {b} _{i}={\frac {\partial x_{k}}{\partial q^{i}}}\quad \Rightarrow \quad {\frac {\partial x_{k}}{\partial q^{i}}}~\mathbf {b} ^{i}=\mathbf {e} ^{k}\cdot (\mathbf {b} _{i}\otimes \mathbf {b} ^{i})=\mathbf {e} ^{k}} and b k = ∂ q k ∂ x i e i {\displaystyle \mathbf {b} ^{k}={\frac {\partial q^{k}}{\partial x_{i}}}~\mathbf {e} ^{i}} == Vector and tensor calculus in three-dimensional curvilinear coordinates == Simmonds, in his book on tensor analysis, quotes Albert Einstein saying The magic of this theory will hardly fail to impose itself on anybody who has truly understood it; it represents a genuine triumph of the method of absolute differential calculus, founded by Gauss, Riemann, Ricci, and Levi-Civita. Vector and tensor calculus in general curvilinear coordinates is used in tensor analysis on four-dimensional curvilinear manifolds in general relativity, in the mechanics of curved shells, in examining the invariance properties of Maxwell's equations which has been of interest in metamaterials and in many other fields. 
Some useful relations in the calculus of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, Simmonds, Green and Zerna, Basar and Weichert, and Ciarlet. === Basic definitions === Let the position of a point in space be characterized by three coordinate variables ( q 1 , q 2 , q 3 ) {\displaystyle (q^{1},q^{2},q^{3})} . The coordinate curve q 1 {\displaystyle q^{1}} represents a curve on which q 2 {\displaystyle q^{2}} and q 3 {\displaystyle q^{3}} are constant. Let x {\displaystyle \mathbf {x} } be the position vector of the point relative to some origin. Then, assuming that such a mapping and its inverse exist and are continuous, we can write : 55  x = φ ( q 1 , q 2 , q 3 ) ; q i = ψ i ( x ) = [ φ − 1 ( x ) ] i {\displaystyle \mathbf {x} ={\boldsymbol {\varphi }}(q^{1},q^{2},q^{3})~;~~q^{i}=\psi ^{i}(\mathbf {x} )=[{\boldsymbol {\varphi }}^{-1}(\mathbf {x} )]^{i}} The fields ψ i ( x ) {\displaystyle \psi ^{i}(\mathbf {x} )} are called the curvilinear coordinate functions of the curvilinear coordinate system ψ ( x ) = φ − 1 ( x ) {\displaystyle {\boldsymbol {\psi }}(\mathbf {x} )={\boldsymbol {\varphi }}^{-1}(\mathbf {x} )} . The q i {\displaystyle q^{i}} coordinate curves are defined by the one-parameter family of functions given by x i ( α ) = φ ( α , q j , q k ) , i ≠ j ≠ k {\displaystyle \mathbf {x} _{i}(\alpha )={\boldsymbol {\varphi }}(\alpha ,q^{j},q^{k})~,~~i\neq j\neq k} with q j {\displaystyle q^{j}} , q k {\displaystyle q^{k}} fixed. 
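As a concrete illustration of the maps φ and ψ^i, the sketch below (NumPy; spherical coordinates are chosen purely as an example) implements a coordinate map together with its inverse coordinate functions and checks that ψ = φ⁻¹ at a sample point:

```python
import numpy as np

def phi_map(q):
    """x = phi(q^1, q^2, q^3): spherical coordinates (r, theta, phi)."""
    r, th, ph = q
    return np.array([r * np.sin(th) * np.cos(ph),
                     r * np.sin(th) * np.sin(ph),
                     r * np.cos(th)])

def psi_map(x):
    """q^i = psi^i(x): the inverse coordinate functions."""
    r = np.linalg.norm(x)
    return np.array([r, np.arccos(x[2] / r), np.arctan2(x[1], x[0])])

q = np.array([2.0, 0.7, 1.3])            # sample point with theta in (0, pi)
assert np.allclose(psi_map(phi_map(q)), q)   # psi = phi^{-1}
```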
=== Tangent vector to coordinate curves === The tangent vector to the curve x i {\displaystyle \mathbf {x} _{i}} at the point x i ( α ) {\displaystyle \mathbf {x} _{i}(\alpha )} (or to the coordinate curve q i {\displaystyle q_{i}} at the point x {\displaystyle \mathbf {x} } ) is d x i d α ≡ ∂ x ∂ q i {\displaystyle {\cfrac {\rm {{d}\mathbf {x} _{i}}}{\rm {{d}\alpha }}}\equiv {\cfrac {\partial \mathbf {x} }{\partial q^{i}}}} === Gradient === ==== Scalar field ==== Let f ( x ) {\displaystyle f(\mathbf {x} )} be a scalar field in space. Then f ( x ) = f [ φ ( q 1 , q 2 , q 3 ) ] = f φ ( q 1 , q 2 , q 3 ) {\displaystyle f(\mathbf {x} )=f[{\boldsymbol {\varphi }}(q^{1},q^{2},q^{3})]=f_{\varphi }(q^{1},q^{2},q^{3})} The gradient of the field f {\displaystyle f} is defined by [ ∇ f ( x ) ] ⋅ c = d d α f ( x + α c ) | α = 0 {\displaystyle [{\boldsymbol {\nabla }}f(\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\rm {d}}{\rm {{d}\alpha }}}f(\mathbf {x} +\alpha \mathbf {c} ){\biggr |}_{\alpha =0}} where c {\displaystyle \mathbf {c} } is an arbitrary constant vector. 
If the components c i {\displaystyle c^{i}} of c {\displaystyle \mathbf {c} } are defined such that q i + α c i = ψ i ( x + α c ) {\displaystyle q^{i}+\alpha ~c^{i}=\psi ^{i}(\mathbf {x} +\alpha ~\mathbf {c} )} then [ ∇ f ( x ) ] ⋅ c = d d α f φ ( q 1 + α c 1 , q 2 + α c 2 , q 3 + α c 3 ) | α = 0 = ∂ f φ ∂ q i c i = ∂ f ∂ q i c i {\displaystyle [{\boldsymbol {\nabla }}f(\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\rm {d}}{\rm {{d}\alpha }}}f_{\varphi }(q^{1}+\alpha ~c^{1},q^{2}+\alpha ~c^{2},q^{3}+\alpha ~c^{3}){\biggr |}_{\alpha =0}={\cfrac {\partial f_{\varphi }}{\partial q^{i}}}~c^{i}={\cfrac {\partial f}{\partial q^{i}}}~c^{i}} If we set f ( x ) = ψ i ( x ) {\displaystyle f(\mathbf {x} )=\psi ^{i}(\mathbf {x} )} , then since q i = ψ i ( x ) {\displaystyle q^{i}=\psi ^{i}(\mathbf {x} )} , we have [ ∇ ψ i ( x ) ] ⋅ c = ∂ ψ i ∂ q j c j = c i {\displaystyle [{\boldsymbol {\nabla }}\psi ^{i}(\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\partial \psi ^{i}}{\partial q^{j}}}~c^{j}=c^{i}} which provides a means of extracting the contravariant component of a vector c {\displaystyle \mathbf {c} } . If b i {\displaystyle \mathbf {b} _{i}} is the covariant (or natural) basis at a point, and if b i {\displaystyle \mathbf {b} ^{i}} is the contravariant (or reciprocal) basis at that point, then [ ∇ f ( x ) ] ⋅ c = ∂ f ∂ q i c i = ( ∂ f ∂ q i b i ) ( c i b i ) ⇒ ∇ f ( x ) = ∂ f ∂ q i b i {\displaystyle [{\boldsymbol {\nabla }}f(\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\partial f}{\partial q^{i}}}~c^{i}=\left({\cfrac {\partial f}{\partial q^{i}}}~\mathbf {b} ^{i}\right)\left(c^{i}~\mathbf {b} _{i}\right)\quad \Rightarrow \quad {\boldsymbol {\nabla }}f(\mathbf {x} )={\cfrac {\partial f}{\partial q^{i}}}~\mathbf {b} ^{i}} A brief rationale for this choice of basis is given in the next section. ==== Vector field ==== A similar process can be used to arrive at the gradient of a vector field f ( x ) {\displaystyle \mathbf {f} (\mathbf {x} )} . 
The gradient is given by [ ∇ f ( x ) ] ⋅ c = ∂ f ∂ q i c i {\displaystyle [{\boldsymbol {\nabla }}\mathbf {f} (\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\partial \mathbf {f} }{\partial q^{i}}}~c^{i}} If we consider the gradient of the position vector field r ( x ) = x {\displaystyle \mathbf {r} (\mathbf {x} )=\mathbf {x} } , then we can show that c = ∂ x ∂ q i c i = b i ( x ) c i ; b i ( x ) := ∂ x ∂ q i {\displaystyle \mathbf {c} ={\cfrac {\partial \mathbf {x} }{\partial q^{i}}}~c^{i}=\mathbf {b} _{i}(\mathbf {x} )~c^{i}~;~~\mathbf {b} _{i}(\mathbf {x} ):={\cfrac {\partial \mathbf {x} }{\partial q^{i}}}} The vector field b i {\displaystyle \mathbf {b} _{i}} is tangent to the q i {\displaystyle q^{i}} coordinate curve and forms a natural basis at each point on the curve. This basis, as discussed at the beginning of this article, is also called the covariant curvilinear basis. We can also define a reciprocal basis, or contravariant curvilinear basis, b i {\displaystyle \mathbf {b} ^{i}} . All the algebraic relations between the basis vectors, as discussed in the section on tensor algebra, apply for the natural basis and its reciprocal at each point x {\displaystyle \mathbf {x} } . 
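Returning briefly to the scalar case, the formula ∇f = (∂f/∂q^i) b^i can be verified numerically. A sketch in plane polar coordinates (the test function f(x, y) = xy is an arbitrary choice):

```python
import numpy as np

r, th = 1.8, 0.6
c, s = np.cos(th), np.sin(th)

# Covariant and contravariant (reciprocal) polar bases in Cartesian components.
b_r, b_th = np.array([c, s]), np.array([-r * s, r * c])
B_r, B_th = b_r, b_th / r**2    # b^i = g^{ij} b_j for the diagonal polar metric

# f(x, y) = x*y, written as f_phi(r, theta) = r^2 cos(theta) sin(theta)
df_dr = 2 * r * c * s
df_dth = r**2 * (c**2 - s**2)

# grad f = (df/dq^i) b^i, assembled in Cartesian components
grad_f = df_dr * B_r + df_dth * B_th

x, y = r * c, r * s
assert np.allclose(grad_f, [y, x])   # grad(x*y) = (y, x)
```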
Since c {\displaystyle \mathbf {c} } is arbitrary, we can write ∇ f ( x ) = ∂ f ∂ q i ⊗ b i {\displaystyle {\boldsymbol {\nabla }}\mathbf {f} (\mathbf {x} )={\cfrac {\partial \mathbf {f} }{\partial q^{i}}}\otimes \mathbf {b} ^{i}} Note that the contravariant basis vector b i {\displaystyle \mathbf {b} ^{i}} is perpendicular to the surface of constant ψ i {\displaystyle \psi ^{i}} and is given by b i = ∇ ψ i {\displaystyle \mathbf {b} ^{i}={\boldsymbol {\nabla }}\psi ^{i}} ==== Christoffel symbols of the first kind ==== The Christoffel symbols of the first kind are defined as b i , j = ∂ b i ∂ q j := Γ i j k b k ⇒ b i , j ⋅ b l = Γ i j l {\displaystyle \mathbf {b} _{i,j}={\frac {\partial \mathbf {b} _{i}}{\partial q^{j}}}:=\Gamma _{ijk}~\mathbf {b} ^{k}\quad \Rightarrow \quad \mathbf {b} _{i,j}\cdot \mathbf {b} _{l}=\Gamma _{ijl}} To express Γ i j k {\displaystyle \Gamma _{ijk}} in terms of g i j {\displaystyle g_{ij}} we note that g i j , k = ( b i ⋅ b j ) , k = b i , k ⋅ b j + b i ⋅ b j , k = Γ i k j + Γ j k i g i k , j = ( b i ⋅ b k ) , j = b i , j ⋅ b k + b i ⋅ b k , j = Γ i j k + Γ k j i g j k , i = ( b j ⋅ b k ) , i = b j , i ⋅ b k + b j ⋅ b k , i = Γ j i k + Γ k i j {\displaystyle {\begin{aligned}g_{ij,k}&=(\mathbf {b} _{i}\cdot \mathbf {b} _{j})_{,k}=\mathbf {b} _{i,k}\cdot \mathbf {b} _{j}+\mathbf {b} _{i}\cdot \mathbf {b} _{j,k}=\Gamma _{ikj}+\Gamma _{jki}\\g_{ik,j}&=(\mathbf {b} _{i}\cdot \mathbf {b} _{k})_{,j}=\mathbf {b} _{i,j}\cdot \mathbf {b} _{k}+\mathbf {b} _{i}\cdot \mathbf {b} _{k,j}=\Gamma _{ijk}+\Gamma _{kji}\\g_{jk,i}&=(\mathbf {b} _{j}\cdot \mathbf {b} _{k})_{,i}=\mathbf {b} _{j,i}\cdot \mathbf {b} _{k}+\mathbf {b} _{j}\cdot \mathbf {b} _{k,i}=\Gamma _{jik}+\Gamma _{kij}\end{aligned}}} Since b i , j = b j , i {\displaystyle \mathbf {b} _{i,j}=\mathbf {b} _{j,i}} we have Γ i j k = Γ j i k {\displaystyle \Gamma _{ijk}=\Gamma _{jik}} . 
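The defining relation Γ_ijk = b_{i,j} · b_k can be evaluated directly from a position map. A finite-difference sketch for plane polar coordinates (step sizes and the sample point are illustrative):

```python
import numpy as np

def x_of_q(q):
    """Position map for plane polar coordinates q = (r, theta)."""
    r, th = q
    return np.array([r * np.cos(th), r * np.sin(th)])

def basis(q, h=1e-5):
    """Covariant basis b_i = dx/dq^i by central differences."""
    return [(x_of_q(q + h * e) - x_of_q(q - h * e)) / (2 * h)
            for e in np.eye(2)]

def gamma_first_kind(q, h=1e-5):
    """Gamma_ijk = b_{i,j} . b_k (Christoffel symbols of the first kind)."""
    b = basis(q)
    G = np.zeros((2, 2, 2))
    for j, e in enumerate(np.eye(2)):
        b_plus, b_minus = basis(q + h * e), basis(q - h * e)
        for i in range(2):
            b_ij = (b_plus[i] - b_minus[i]) / (2 * h)   # b_{i,j}
            for k in range(2):
                G[i, j, k] = b_ij @ b[k]
    return G

r, th = 2.0, 0.7
G = gamma_first_kind(np.array([r, th]))
assert np.isclose(G[1, 1, 0], -r, atol=1e-4)    # Gamma_{theta theta r} = -r
assert np.isclose(G[0, 1, 1], r, atol=1e-4)     # Gamma_{r theta theta} = r
assert np.allclose(G, np.swapaxes(G, 0, 1), atol=1e-4)  # Gamma_ijk = Gamma_jik
```

The last assertion is the symmetry in the first two indices noted above.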
Using these to rearrange the above relations gives Γ i j k = 1 2 ( g i k , j + g j k , i − g i j , k ) = 1 2 [ ( b i ⋅ b k ) , j + ( b j ⋅ b k ) , i − ( b i ⋅ b j ) , k ] {\displaystyle \Gamma _{ijk}={\frac {1}{2}}(g_{ik,j}+g_{jk,i}-g_{ij,k})={\frac {1}{2}}[(\mathbf {b} _{i}\cdot \mathbf {b} _{k})_{,j}+(\mathbf {b} _{j}\cdot \mathbf {b} _{k})_{,i}-(\mathbf {b} _{i}\cdot \mathbf {b} _{j})_{,k}]} ==== Christoffel symbols of the second kind ==== The Christoffel symbols of the second kind are defined as Γ i j k = Γ j i k {\displaystyle \Gamma _{ij}^{k}=\Gamma _{ji}^{k}} in which ∂ b i ∂ q j = Γ i j k b k {\displaystyle {\cfrac {\partial \mathbf {b} _{i}}{\partial q^{j}}}=\Gamma _{ij}^{k}~\mathbf {b} _{k}} This implies that Γ i j k = ∂ b i ∂ q j ⋅ b k = − b i ⋅ ∂ b k ∂ q j {\displaystyle \Gamma _{ij}^{k}={\cfrac {\partial \mathbf {b} _{i}}{\partial q^{j}}}\cdot \mathbf {b} ^{k}=-\mathbf {b} _{i}\cdot {\cfrac {\partial \mathbf {b} ^{k}}{\partial q^{j}}}} Other relations that follow are ∂ b i ∂ q j = − Γ j k i b k ; ∇ b i = Γ i j k b k ⊗ b j ; ∇ b i = − Γ j k i b k ⊗ b j {\displaystyle {\cfrac {\partial \mathbf {b} ^{i}}{\partial q^{j}}}=-\Gamma _{jk}^{i}~\mathbf {b} ^{k}~;~~{\boldsymbol {\nabla }}\mathbf {b} _{i}=\Gamma _{ij}^{k}~\mathbf {b} _{k}\otimes \mathbf {b} ^{j}~;~~{\boldsymbol {\nabla }}\mathbf {b} ^{i}=-\Gamma _{jk}^{i}~\mathbf {b} ^{k}\otimes \mathbf {b} ^{j}} Another particularly useful relation, which shows that the Christoffel symbol depends only on the metric tensor and its derivatives, is Γ i j k = g k m 2 ( ∂ g m i ∂ q j + ∂ g m j ∂ q i − ∂ g i j ∂ q m ) {\displaystyle \Gamma _{ij}^{k}={\frac {g^{km}}{2}}\left({\frac {\partial g_{mi}}{\partial q^{j}}}+{\frac {\partial g_{mj}}{\partial q^{i}}}-{\frac {\partial g_{ij}}{\partial q^{m}}}\right)} ==== Explicit expression for the gradient of a vector field ==== The following expressions for the gradient of a vector field in curvilinear coordinates are quite useful. 
∇ v = [ ∂ v i ∂ q k + Γ l k i v l ] b i ⊗ b k = [ ∂ v i ∂ q k − Γ k i l v l ] b i ⊗ b k {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\mathbf {v} &=\left[{\cfrac {\partial v^{i}}{\partial q^{k}}}+\Gamma _{lk}^{i}~v^{l}\right]~\mathbf {b} _{i}\otimes \mathbf {b} ^{k}\\[8pt]&=\left[{\cfrac {\partial v_{i}}{\partial q^{k}}}-\Gamma _{ki}^{l}~v_{l}\right]~\mathbf {b} ^{i}\otimes \mathbf {b} ^{k}\end{aligned}}} ==== Representing a physical vector field ==== The vector field v {\displaystyle \mathbf {v} } can be represented as v = v i b i = v ^ i b ^ i {\displaystyle \mathbf {v} =v_{i}~\mathbf {b} ^{i}={\hat {v}}_{i}~{\hat {\mathbf {b} }}^{i}} where v i {\displaystyle v_{i}} are the covariant components of the field, v ^ i {\displaystyle {\hat {v}}_{i}} are the physical components, and (no summation) b ^ i = b i g i i {\displaystyle {\hat {\mathbf {b} }}^{i}={\cfrac {\mathbf {b} ^{i}}{\sqrt {g^{ii}}}}} is the normalized contravariant basis vector. === Second-order tensor field === The gradient of a second order tensor field can similarly be expressed as ∇ S = ∂ S ∂ q i ⊗ b i {\displaystyle {\boldsymbol {\nabla }}{\boldsymbol {S}}={\frac {\partial {\boldsymbol {S}}}{\partial q^{i}}}\otimes \mathbf {b} ^{i}} ==== Explicit expressions for the gradient ==== If we consider the expression for the tensor in terms of a contravariant basis, then ∇ S = ∂ ∂ q k [ S i j b i ⊗ b j ] ⊗ b k = [ ∂ S i j ∂ q k − Γ k i l S l j − Γ k j l S i l ] b i ⊗ b j ⊗ b k {\displaystyle {\boldsymbol {\nabla }}{\boldsymbol {S}}={\frac {\partial }{\partial q^{k}}}[S_{ij}~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}]\otimes \mathbf {b} ^{k}=\left[{\frac {\partial S_{ij}}{\partial q^{k}}}-\Gamma _{ki}^{l}~S_{lj}-\Gamma _{kj}^{l}~S_{il}\right]~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}\otimes \mathbf {b} ^{k}} We may also write ∇ S = [ ∂ S i j ∂ q k + Γ k l i S l j + Γ k l j S i l ] b i ⊗ b j ⊗ b k = [ ∂ S j i ∂ q k + Γ k l i S j l − Γ k j l S l i ] b i ⊗ b j ⊗ b k = [ ∂ S i j ∂ q k − Γ i k l S l j + Γ k 
l j S i l ] b i ⊗ b j ⊗ b k {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}{\boldsymbol {S}}&=\left[{\cfrac {\partial S^{ij}}{\partial q^{k}}}+\Gamma _{kl}^{i}~S^{lj}+\Gamma _{kl}^{j}~S^{il}\right]~\mathbf {b} _{i}\otimes \mathbf {b} _{j}\otimes \mathbf {b} ^{k}\\[8pt]&=\left[{\cfrac {\partial S_{~j}^{i}}{\partial q^{k}}}+\Gamma _{kl}^{i}~S_{~j}^{l}-\Gamma _{kj}^{l}~S_{~l}^{i}\right]~\mathbf {b} _{i}\otimes \mathbf {b} ^{j}\otimes \mathbf {b} ^{k}\\[8pt]&=\left[{\cfrac {\partial S_{i}^{~j}}{\partial q^{k}}}-\Gamma _{ik}^{l}~S_{l}^{~j}+\Gamma _{kl}^{j}~S_{i}^{~l}\right]~\mathbf {b} ^{i}\otimes \mathbf {b} _{j}\otimes \mathbf {b} ^{k}\end{aligned}}} ==== Representing a physical second-order tensor field ==== The physical components of a second-order tensor field can be obtained by using a normalized contravariant basis, i.e., S = S i j b i ⊗ b j = S ^ i j b ^ i ⊗ b ^ j {\displaystyle {\boldsymbol {S}}=S_{ij}~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}={\hat {S}}_{ij}~{\hat {\mathbf {b} }}^{i}\otimes {\hat {\mathbf {b} }}^{j}} where the hatted basis vectors have been normalized. This implies that (again no summation) S ^ i j = S i j g i i g j j {\displaystyle {\hat {S}}_{ij}=S_{ij}~{\sqrt {g^{ii}~g^{jj}}}} === Divergence === ==== Vector field ==== The divergence of a vector field v {\displaystyle \mathbf {v} } is defined as div ⁡ v = ∇ ⋅ v = tr ( ∇ v ) {\displaystyle \operatorname {div} ~\mathbf {v} ={\boldsymbol {\nabla }}\cdot \mathbf {v} ={\text{tr}}({\boldsymbol {\nabla }}\mathbf {v} )} In terms of components with respect to a curvilinear basis ∇ ⋅ v = ∂ v i ∂ q i + Γ ℓ i i v ℓ = [ ∂ v i ∂ q j − Γ j i ℓ v ℓ ] g i j {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\cfrac {\partial v^{i}}{\partial q^{i}}}+\Gamma _{\ell i}^{i}~v^{\ell }=\left[{\cfrac {\partial v_{i}}{\partial q^{j}}}-\Gamma _{ji}^{\ell }~v_{\ell }\right]~g^{ij}} An alternative equation for the divergence of a vector field is frequently used. 
To derive this relation recall that ∇ ⋅ v = ∂ v i ∂ q i + Γ ℓ i i v ℓ {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\frac {\partial v^{i}}{\partial q^{i}}}+\Gamma _{\ell i}^{i}~v^{\ell }} Now, Γ ℓ i i = Γ i ℓ i = g m i 2 [ ∂ g i m ∂ q ℓ + ∂ g ℓ m ∂ q i − ∂ g i l ∂ q m ] {\displaystyle \Gamma _{\ell i}^{i}=\Gamma _{i\ell }^{i}={\cfrac {g^{mi}}{2}}\left[{\frac {\partial g_{im}}{\partial q^{\ell }}}+{\frac {\partial g_{\ell m}}{\partial q^{i}}}-{\frac {\partial g_{il}}{\partial q^{m}}}\right]} Noting that, due to the symmetry of g {\displaystyle {\boldsymbol {g}}} , g m i ∂ g ℓ m ∂ q i = g m i ∂ g i ℓ ∂ q m {\displaystyle g^{mi}~{\frac {\partial g_{\ell m}}{\partial q^{i}}}=g^{mi}~{\frac {\partial g_{i\ell }}{\partial q^{m}}}} we have ∇ ⋅ v = ∂ v i ∂ q i + g m i 2 ∂ g i m ∂ q ℓ v ℓ {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\frac {\partial v^{i}}{\partial q^{i}}}+{\cfrac {g^{mi}}{2}}~{\frac {\partial g_{im}}{\partial q^{\ell }}}~v^{\ell }} Recall that if [ g i j ] {\displaystyle [g_{ij}]} is the matrix whose components are g i j {\displaystyle g_{ij}} , then the inverse of the matrix is [ g i j ] − 1 = [ g i j ] {\displaystyle [g_{ij}]^{-1}=[g^{ij}]} . The inverse of the matrix is given by [ g i j ] = [ g i j ] − 1 = A i j g ; g := det ( [ g i j ] ) = det g {\displaystyle [g^{ij}]=[g_{ij}]^{-1}={\cfrac {A^{ij}}{g}}~;~~g:=\det([g_{ij}])=\det {\boldsymbol {g}}} where A i j {\displaystyle A^{ij}} is the cofactor matrix of the components g i j {\displaystyle g_{ij}} . 
From matrix algebra we have g = det ( [ g i j ] ) = ∑ i g i j A i j ⇒ ∂ g ∂ g i j = A i j {\displaystyle g=\det([g_{ij}])=\sum _{i}g_{ij}~A^{ij}\quad \Rightarrow \quad {\frac {\partial g}{\partial g_{ij}}}=A^{ij}} Hence, [ g i j ] = 1 g ∂ g ∂ g i j {\displaystyle [g^{ij}]={\cfrac {1}{g}}~{\frac {\partial g}{\partial g_{ij}}}} Plugging this relation into the expression for the divergence gives ∇ ⋅ v = ∂ v i ∂ q i + 1 2 g ∂ g ∂ g m i ∂ g i m ∂ q ℓ v ℓ = ∂ v i ∂ q i + 1 2 g ∂ g ∂ q ℓ v ℓ {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\frac {\partial v^{i}}{\partial q^{i}}}+{\cfrac {1}{2g}}~{\frac {\partial g}{\partial g_{mi}}}~{\frac {\partial g_{im}}{\partial q^{\ell }}}~v^{\ell }={\frac {\partial v^{i}}{\partial q^{i}}}+{\cfrac {1}{2g}}~{\frac {\partial g}{\partial q^{\ell }}}~v^{\ell }} A little manipulation leads to the more compact form ∇ ⋅ v = 1 g ∂ ∂ q i ( v i g ) {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\cfrac {1}{\sqrt {g}}}~{\frac {\partial }{\partial q^{i}}}(v^{i}~{\sqrt {g}})} ==== Second-order tensor field ==== The divergence of a second-order tensor field is defined using ( ∇ ⋅ S ) ⋅ a = ∇ ⋅ ( S a ) {\displaystyle ({\boldsymbol {\nabla }}\cdot {\boldsymbol {S}})\cdot \mathbf {a} ={\boldsymbol {\nabla }}\cdot ({\boldsymbol {S}}\mathbf {a} )} where a {\displaystyle \mathbf {a} } is an arbitrary constant vector. 
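The compact formula ∇·v = (1/√g) ∂(v^i √g)/∂q^i obtained above is easy to sanity-check in plane polar coordinates, where √g = r. In the sketch below (NumPy; the test fields are arbitrary) it reproduces the known divergence of v = x e_x + y e_y and, applied to the contravariant components of a gradient, a known Laplacian:

```python
import numpy as np

h = 1e-6

def div_polar(vr, vth, r, th):
    """(1/sqrt g) d(v^i sqrt g)/dq^i with sqrt(g) = r, by central differences."""
    d_r = (vr(r + h, th) * (r + h) - vr(r - h, th) * (r - h)) / (2 * h)
    d_th = (vth(r, th + h) * r - vth(r, th - h) * r) / (2 * h)
    return (d_r + d_th) / r

# v = x e_x + y e_y has contravariant polar components v^r = r, v^theta = 0,
# and Cartesian divergence 2 everywhere.
assert np.isclose(div_polar(lambda r, th: r, lambda r, th: 0.0, 1.7, 0.3),
                  2.0, atol=1e-5)

# phi = x^2 + y^2 = r^2 has [grad phi]^i = g^{li} dphi/dq^l = (2r, 0);
# its Laplacian, the divergence of that field, should be 4.
assert np.isclose(div_polar(lambda r, th: 2 * r, lambda r, th: 0.0, 1.7, 0.3),
                  4.0, atol=1e-5)
```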
In curvilinear coordinates, ∇ ⋅ S = [ ∂ S i j ∂ q k − Γ k i l S l j − Γ k j l S i l ] g i k b j = [ ∂ S i j ∂ q i + Γ i l i S l j + Γ i l j S i l ] b j = [ ∂ S j i ∂ q i + Γ i l i S j l − Γ i j l S l i ] b j = [ ∂ S i j ∂ q k − Γ i k l S l j + Γ k l j S i l ] g i k b j {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&=\left[{\cfrac {\partial S_{ij}}{\partial q^{k}}}-\Gamma _{ki}^{l}~S_{lj}-\Gamma _{kj}^{l}~S_{il}\right]~g^{ik}~\mathbf {b} ^{j}\\[8pt]&=\left[{\cfrac {\partial S^{ij}}{\partial q^{i}}}+\Gamma _{il}^{i}~S^{lj}+\Gamma _{il}^{j}~S^{il}\right]~\mathbf {b} _{j}\\[8pt]&=\left[{\cfrac {\partial S_{~j}^{i}}{\partial q^{i}}}+\Gamma _{il}^{i}~S_{~j}^{l}-\Gamma _{ij}^{l}~S_{~l}^{i}\right]~\mathbf {b} ^{j}\\[8pt]&=\left[{\cfrac {\partial S_{i}^{~j}}{\partial q^{k}}}-\Gamma _{ik}^{l}~S_{l}^{~j}+\Gamma _{kl}^{j}~S_{i}^{~l}\right]~g^{ik}~\mathbf {b} _{j}\end{aligned}}} === Laplacian === ==== Scalar field ==== The Laplacian of a scalar field φ ( x ) {\displaystyle \varphi (\mathbf {x} )} is defined as ∇ 2 φ := ∇ ⋅ ( ∇ φ ) {\displaystyle \nabla ^{2}\varphi :={\boldsymbol {\nabla }}\cdot ({\boldsymbol {\nabla }}\varphi )} Using the alternative expression for the divergence of a vector field gives us ∇ 2 φ = 1 g ∂ ∂ q i ( [ ∇ φ ] i g ) {\displaystyle \nabla ^{2}\varphi ={\cfrac {1}{\sqrt {g}}}~{\frac {\partial }{\partial q^{i}}}([{\boldsymbol {\nabla }}\varphi ]^{i}~{\sqrt {g}})} Now ∇ φ = ∂ φ ∂ q l b l = g l i ∂ φ ∂ q l b i ⇒ [ ∇ φ ] i = g l i ∂ φ ∂ q l {\displaystyle {\boldsymbol {\nabla }}\varphi ={\frac {\partial \varphi }{\partial q^{l}}}~\mathbf {b} ^{l}=g^{li}~{\frac {\partial \varphi }{\partial q^{l}}}~\mathbf {b} _{i}\quad \Rightarrow \quad [{\boldsymbol {\nabla }}\varphi ]^{i}=g^{li}~{\frac {\partial \varphi }{\partial q^{l}}}} Therefore, ∇ 2 φ = 1 g ∂ ∂ q i ( g l i ∂ φ ∂ q l g ) {\displaystyle \nabla ^{2}\varphi ={\cfrac {1}{\sqrt {g}}}~{\frac {\partial }{\partial q^{i}}}\left(g^{li}~{\frac {\partial \varphi }{\partial 
q^{l}}}~{\sqrt {g}}\right)} === Curl of a vector field === The curl of a vector field v {\displaystyle \mathbf {v} } in covariant curvilinear coordinates can be written as ∇ × v = E r s t v s | r b t {\displaystyle {\boldsymbol {\nabla }}\times \mathbf {v} ={\mathcal {E}}^{rst}v_{s|r}~\mathbf {b} _{t}} where v s | r = v s , r − Γ s r i v i {\displaystyle v_{s|r}=v_{s,r}-\Gamma _{sr}^{i}~v_{i}} == Orthogonal curvilinear coordinates == Assume, for the purposes of this section, that the curvilinear coordinate system is orthogonal, i.e., b i ⋅ b j = { g i i if i = j 0 if i ≠ j , {\displaystyle \mathbf {b} _{i}\cdot \mathbf {b} _{j}={\begin{cases}g_{ii}&{\text{if }}i=j\\0&{\text{if }}i\neq j,\end{cases}}} or equivalently, b i ⋅ b j = { g i i if i = j 0 if i ≠ j , {\displaystyle \mathbf {b} ^{i}\cdot \mathbf {b} ^{j}={\begin{cases}g^{ii}&{\text{if }}i=j\\0&{\text{if }}i\neq j,\end{cases}}} where g i i = g i i − 1 {\displaystyle g^{ii}=g_{ii}^{-1}} . As before, b i , b j {\displaystyle \mathbf {b} _{i},\mathbf {b} _{j}} are covariant basis vectors and b i {\displaystyle \mathbf {b} ^{i}} , b j {\displaystyle \mathbf {b} ^{j}} are contravariant basis vectors. Also, let ( e 1 , e 2 , e 3 ) {\displaystyle (\mathbf {e} ^{1},\mathbf {e} ^{2},\mathbf {e} ^{3})} be a background, fixed, Cartesian basis. A list of orthogonal curvilinear coordinates is given below. === Metric tensor in orthogonal curvilinear coordinates === Let r ( x ) {\displaystyle \mathbf {r} (\mathbf {x} )} be the position vector of the point x {\displaystyle \mathbf {x} } with respect to the origin of the coordinate system. The notation can be simplified by noting that x {\displaystyle \mathbf {x} } = r ( x ) {\displaystyle \mathbf {r} (\mathbf {x} )} . At each point we can construct a small line element d x {\displaystyle \mathrm {d} \mathbf {x} } . 
The square of the length of the line element is the scalar product d x ⋅ d x {\displaystyle \mathrm {d} \mathbf {x} \cdot \mathrm {d} \mathbf {x} } and is called the metric of the space. Recall that the space of interest is assumed to be Euclidean when we talk of curvilinear coordinates. Let us express the position vector in terms of the background, fixed, Cartesian basis, i.e., x = ∑ i = 1 3 x i e i {\displaystyle \mathbf {x} =\sum _{i=1}^{3}x_{i}~\mathbf {e} _{i}} Using the chain rule, we can then express d x {\displaystyle \mathrm {d} \mathbf {x} } in terms of three-dimensional orthogonal curvilinear coordinates ( q 1 , q 2 , q 3 ) {\displaystyle (q^{1},q^{2},q^{3})} as d x = ∑ i = 1 3 ∑ j = 1 3 ( ∂ x i ∂ q j e i ) d q j {\displaystyle \mathrm {d} \mathbf {x} =\sum _{i=1}^{3}\sum _{j=1}^{3}\left({\cfrac {\partial x_{i}}{\partial q^{j}}}~\mathbf {e} _{i}\right)\mathrm {d} q^{j}} Therefore, the metric is given by d x ⋅ d x = ∑ i = 1 3 ∑ j = 1 3 ∑ k = 1 3 ∂ x i ∂ q j ∂ x i ∂ q k d q j d q k {\displaystyle \mathrm {d} \mathbf {x} \cdot \mathrm {d} \mathbf {x} =\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}{\cfrac {\partial x_{i}}{\partial q^{j}}}~{\cfrac {\partial x_{i}}{\partial q^{k}}}~\mathrm {d} q^{j}~\mathrm {d} q^{k}} The symmetric quantity g i j ( q i , q j ) = ∑ k = 1 3 ∂ x k ∂ q i ∂ x k ∂ q j = b i ⋅ b j {\displaystyle g_{ij}(q^{i},q^{j})=\sum _{k=1}^{3}{\cfrac {\partial x_{k}}{\partial q^{i}}}~{\cfrac {\partial x_{k}}{\partial q^{j}}}=\mathbf {b} _{i}\cdot \mathbf {b} _{j}} is called the fundamental (or metric) tensor of the Euclidean space in curvilinear coordinates. 
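The formula for g_ij in terms of partial derivatives of the Cartesian coordinates can be checked by building the metric from a position map with finite differences. A sketch for plane polar coordinates (all numerical choices are illustrative):

```python
import numpy as np

def x_of_q(r, theta):
    """Position map for plane polar coordinates."""
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def covariant_basis(r, theta, h=1e-6):
    """b_i = dx/dq^i by central differences."""
    b_r = (x_of_q(r + h, theta) - x_of_q(r - h, theta)) / (2 * h)
    b_th = (x_of_q(r, theta + h) - x_of_q(r, theta - h)) / (2 * h)
    return b_r, b_th

r, theta = 2.5, 1.1
b_r, b_th = covariant_basis(r, theta)
g = np.array([[b_r @ b_r, b_r @ b_th],
              [b_th @ b_r, b_th @ b_th]])

assert np.allclose(g, np.diag([1.0, r**2]), atol=1e-5)      # h_r = 1, h_theta = r
assert np.isclose(np.sqrt(np.linalg.det(g)), r, atol=1e-5)  # Jacobian = r
```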
Note also that g i j = ∂ x ∂ q i ⋅ ∂ x ∂ q j = ( ∑ k h k i e k ) ⋅ ( ∑ m h m j e m ) = ∑ k h k i h k j {\displaystyle g_{ij}={\cfrac {\partial \mathbf {x} }{\partial q^{i}}}\cdot {\cfrac {\partial \mathbf {x} }{\partial q^{j}}}=\left(\sum _{k}h_{ki}~\mathbf {e} _{k}\right)\cdot \left(\sum _{m}h_{mj}~\mathbf {e} _{m}\right)=\sum _{k}h_{ki}~h_{kj}} where h i j {\displaystyle h_{ij}} are the Lamé coefficients. If we define the scale factors, h i {\displaystyle h_{i}} , using b i ⋅ b i = g i i = ∑ k h k i 2 =: h i 2 ⇒ | ∂ x ∂ q i | = | b i | = g i i = h i {\displaystyle \mathbf {b} _{i}\cdot \mathbf {b} _{i}=g_{ii}=\sum _{k}h_{ki}^{2}=:h_{i}^{2}\quad \Rightarrow \quad \left|{\cfrac {\partial \mathbf {x} }{\partial q^{i}}}\right|=\left|\mathbf {b} _{i}\right|={\sqrt {g_{ii}}}=h_{i}} we get a relation between the fundamental tensor and the Lamé coefficients. ==== Example: Polar coordinates ==== If we consider polar coordinates for R 2 {\displaystyle \mathbb {R} ^{2}} , note that ( x , y ) = ( r cos ⁡ θ , r sin ⁡ θ ) {\displaystyle (x,y)=(r\cos \theta ,r\sin \theta )} ( r , θ ) {\displaystyle (r,\theta )} are the curvilinear coordinates, and the Jacobian determinant of the transformation ( r , θ ) → ( r cos ⁡ θ , r sin ⁡ θ ) {\displaystyle (r,\theta )\to (r\cos \theta ,r\sin \theta )} is r {\displaystyle r} . The orthogonal basis vectors are b r = ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle \mathbf {b} _{r}=(\cos \theta ,\sin \theta )} , b θ = ( − r sin ⁡ θ , r cos ⁡ θ ) {\displaystyle \mathbf {b} _{\theta }=(-r\sin \theta ,r\cos \theta )} . The normalized basis vectors are e r = ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle \mathbf {e} _{r}=(\cos \theta ,\sin \theta )} , e θ = ( − sin ⁡ θ , cos ⁡ θ ) {\displaystyle \mathbf {e} _{\theta }=(-\sin \theta ,\cos \theta )} and the scale factors are h r = 1 {\displaystyle h_{r}=1} and h θ = r {\displaystyle h_{\theta }=r} . 
The fundamental tensor is g 11 = 1 {\displaystyle g_{11}=1} , g 22 = r 2 {\displaystyle g_{22}=r^{2}} , g 12 = g 21 = 0 {\displaystyle g_{12}=g_{21}=0} . === Line and surface integrals === If we wish to use curvilinear coordinates for vector calculus calculations, adjustments need to be made in the calculation of line, surface and volume integrals. For simplicity, we again restrict the discussion to three dimensions and orthogonal curvilinear coordinates. However, the same arguments apply for n {\displaystyle n} -dimensional problems, though there are some additional terms in the expressions when the coordinate system is not orthogonal. ==== Line integrals ==== Normally in the calculation of line integrals we are interested in calculating ∫ C f d s = ∫ a b f ( x ( t ) ) | ∂ x ∂ t | d t {\displaystyle \int _{C}f\,ds=\int _{a}^{b}f(\mathbf {x} (t))\left|{\partial \mathbf {x} \over \partial t}\right|\;dt} where x ( t ) {\displaystyle \mathbf {x} (t)} parametrizes C {\displaystyle C} in Cartesian coordinates. In curvilinear coordinates, the term becomes | ∂ x ∂ t | = | ∑ i = 1 3 ∂ x ∂ q i ∂ q i ∂ t | {\displaystyle \left|{\partial \mathbf {x} \over \partial t}\right|=\left|\sum _{i=1}^{3}{\partial \mathbf {x} \over \partial q^{i}}{\partial q^{i} \over \partial t}\right|} by the chain rule. 
And from the definition of the Lamé coefficients, ∂ x ∂ q i = ∑ k h k i e k {\displaystyle {\partial \mathbf {x} \over \partial q^{i}}=\sum _{k}h_{ki}~\mathbf {e} _{k}} and thus | ∂ x ∂ t | = | ∑ k ( ∑ i h k i ∂ q i ∂ t ) e k | = ∑ i ∑ j ∑ k h k i h k j ∂ q i ∂ t ∂ q j ∂ t = ∑ i ∑ j g i j ∂ q i ∂ t ∂ q j ∂ t {\displaystyle {\begin{aligned}\left|{\partial \mathbf {x} \over \partial t}\right|&=\left|\sum _{k}\left(\sum _{i}h_{ki}~{\cfrac {\partial q^{i}}{\partial t}}\right)\mathbf {e} _{k}\right|\\[8pt]&={\sqrt {\sum _{i}\sum _{j}\sum _{k}h_{ki}~h_{kj}{\cfrac {\partial q^{i}}{\partial t}}{\cfrac {\partial q^{j}}{\partial t}}}}={\sqrt {\sum _{i}\sum _{j}g_{ij}~{\cfrac {\partial q^{i}}{\partial t}}{\cfrac {\partial q^{j}}{\partial t}}}}\end{aligned}}} Now, since g i j = 0 {\displaystyle g_{ij}=0} when i ≠ j {\displaystyle i\neq j} , we have | ∂ x ∂ t | = ∑ i g i i ( ∂ q i ∂ t ) 2 = ∑ i h i 2 ( ∂ q i ∂ t ) 2 {\displaystyle \left|{\partial \mathbf {x} \over \partial t}\right|={\sqrt {\sum _{i}g_{ii}~\left({\cfrac {\partial q^{i}}{\partial t}}\right)^{2}}}={\sqrt {\sum _{i}h_{i}^{2}~\left({\cfrac {\partial q^{i}}{\partial t}}\right)^{2}}}} and we can proceed normally. 
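The arc-length formula above can be sanity-checked numerically. The sketch below (my own code; the function names and the midpoint-rule integrator are not from the article) computes the speed |∂x/∂t| of the spiral r = t, θ = t two ways: from the polar scale factors h_r = 1, h_θ = r, and directly in Cartesian coordinates, then integrates both:

```python
import math

def speed_polar(t):
    # |dx/dt| from the curvilinear formula with h_r = 1, h_theta = r,
    # for the spiral r(t) = t, theta(t) = t
    r, dr_dt, dtheta_dt = t, 1.0, 1.0
    return math.sqrt(dr_dt ** 2 + (r * dtheta_dt) ** 2)

def speed_cartesian(t):
    # |dx/dt| computed directly from x = t cos(t), y = t sin(t)
    dx = math.cos(t) - t * math.sin(t)
    dy = math.sin(t) + t * math.cos(t)
    return math.hypot(dx, dy)

def arc_length(speed, a=0.0, b=1.0, n=1000):
    # midpoint rule for integrating the speed over [a, b]
    step = (b - a) / n
    return sum(speed(a + (i + 0.5) * step) for i in range(n)) * step

L_curv = arc_length(speed_polar)
L_cart = arc_length(speed_cartesian)
assert abs(L_curv - L_cart) < 1e-9   # the two descriptions agree
```

Both integrands equal √(1 + t²), so the two arc lengths agree to rounding error.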
==== Surface integrals ==== Likewise, if we are interested in a surface integral, the relevant calculation, with the parameterization of the surface in Cartesian coordinates is: ∫ S f d S = ∬ T f ( x ( s , t ) ) | ∂ x ∂ s × ∂ x ∂ t | d s d t {\displaystyle \int _{S}f\,dS=\iint _{T}f(\mathbf {x} (s,t))\left|{\partial \mathbf {x} \over \partial s}\times {\partial \mathbf {x} \over \partial t}\right|\,ds\,dt} Again, in curvilinear coordinates, we have | ∂ x ∂ s × ∂ x ∂ t | = | ( ∑ i ∂ x ∂ q i ∂ q i ∂ s ) × ( ∑ j ∂ x ∂ q j ∂ q j ∂ t ) | {\displaystyle \left|{\partial \mathbf {x} \over \partial s}\times {\partial \mathbf {x} \over \partial t}\right|=\left|\left(\sum _{i}{\partial \mathbf {x} \over \partial q^{i}}{\partial q^{i} \over \partial s}\right)\times \left(\sum _{j}{\partial \mathbf {x} \over \partial q^{j}}{\partial q^{j} \over \partial t}\right)\right|} and we make use of the definition of curvilinear coordinates again to yield ∂ x ∂ q i ∂ q i ∂ s = ∑ k ( ∑ i = 1 3 h k i ∂ q i ∂ s ) e k ; ∂ x ∂ q j ∂ q j ∂ t = ∑ m ( ∑ j = 1 3 h m j ∂ q j ∂ t ) e m {\displaystyle {\partial \mathbf {x} \over \partial q^{i}}{\partial q^{i} \over \partial s}=\sum _{k}\left(\sum _{i=1}^{3}h_{ki}~{\partial q^{i} \over \partial s}\right)\mathbf {e} _{k}~;~~{\partial \mathbf {x} \over \partial q^{j}}{\partial q^{j} \over \partial t}=\sum _{m}\left(\sum _{j=1}^{3}h_{mj}~{\partial q^{j} \over \partial t}\right)\mathbf {e} _{m}} Therefore, | ∂ x ∂ s × ∂ x ∂ t | = | ∑ k ∑ m ( ∑ i = 1 3 h k i ∂ q i ∂ s ) ( ∑ j = 1 3 h m j ∂ q j ∂ t ) e k × e m | = | ∑ p ∑ k ∑ m E k m p ( ∑ i = 1 3 h k i ∂ q i ∂ s ) ( ∑ j = 1 3 h m j ∂ q j ∂ t ) e p | {\displaystyle {\begin{aligned}\left|{\partial \mathbf {x} \over \partial s}\times {\partial \mathbf {x} \over \partial t}\right|&=\left|\sum _{k}\sum _{m}\left(\sum _{i=1}^{3}h_{ki}~{\partial q^{i} \over \partial s}\right)\left(\sum _{j=1}^{3}h_{mj}~{\partial q^{j} \over \partial t}\right)\mathbf {e} _{k}\times \mathbf {e} _{m}\right|\\[8pt]&=\left|\sum 
_{p}\sum _{k}\sum _{m}{\mathcal {E}}_{kmp}\left(\sum _{i=1}^{3}h_{ki}~{\partial q^{i} \over \partial s}\right)\left(\sum _{j=1}^{3}h_{mj}~{\partial q^{j} \over \partial t}\right)\mathbf {e} _{p}\right|\end{aligned}}} where E {\displaystyle {\mathcal {E}}} is the permutation symbol. In determinant form, the cross product in terms of curvilinear coordinates will be: | e 1 e 2 e 3 ∑ i h 1 i ∂ q i ∂ s ∑ i h 2 i ∂ q i ∂ s ∑ i h 3 i ∂ q i ∂ s ∑ j h 1 j ∂ q j ∂ t ∑ j h 2 j ∂ q j ∂ t ∑ j h 3 j ∂ q j ∂ t | {\displaystyle {\begin{vmatrix}\mathbf {e} _{1}&\mathbf {e} _{2}&\mathbf {e} _{3}\\&&\\\sum _{i}h_{1i}{\partial q^{i} \over \partial s}&\sum _{i}h_{2i}{\partial q^{i} \over \partial s}&\sum _{i}h_{3i}{\partial q^{i} \over \partial s}\\&&\\\sum _{j}h_{1j}{\partial q^{j} \over \partial t}&\sum _{j}h_{2j}{\partial q^{j} \over \partial t}&\sum _{j}h_{3j}{\partial q^{j} \over \partial t}\end{vmatrix}}} === Grad, curl, div, Laplacian === In orthogonal curvilinear coordinates of 3 dimensions, where b i = ∑ k g i k b k ; g i i = 1 g i i = 1 h i 2 {\displaystyle \mathbf {b} ^{i}=\sum _{k}g^{ik}~\mathbf {b} _{k}~;~~g^{ii}={\cfrac {1}{g_{ii}}}={\cfrac {1}{h_{i}^{2}}}} one can express the gradient of a scalar or vector field as ∇ φ = ∑ i ∂ φ ∂ q i b i = ∑ i ∑ j ∂ φ ∂ q i g i j b j = ∑ i 1 h i 2 ∂ φ ∂ q i b i ; ∇ v = ∑ i 1 h i 2 ∂ v ∂ q i ⊗ b i {\displaystyle \nabla \varphi =\sum _{i}{\partial \varphi \over \partial q^{i}}~\mathbf {b} ^{i}=\sum _{i}\sum _{j}{\partial \varphi \over \partial q^{i}}~g^{ij}~\mathbf {b} _{j}=\sum _{i}{\cfrac {1}{h_{i}^{2}}}~{\partial \varphi \over \partial q^{i}}~\mathbf {b} _{i}~;~~\nabla \mathbf {v} =\sum _{i}{\cfrac {1}{h_{i}^{2}}}~{\partial \mathbf {v} \over \partial q^{i}}\otimes \mathbf {b} _{i}} For an orthogonal basis g = g 11 g 22 g 33 = h 1 2 h 2 2 h 3 2 ⇒ g = h 1 h 2 h 3 {\displaystyle g=g_{11}~g_{22}~g_{33}=h_{1}^{2}~h_{2}^{2}~h_{3}^{2}\quad \Rightarrow \quad {\sqrt {g}}=h_{1}h_{2}h_{3}} The divergence of a vector field can then be written as ∇ ⋅ v =
1 h 1 h 2 h 3 ∂ ∂ q i ( h 1 h 2 h 3 v i ) {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\cfrac {1}{h_{1}h_{2}h_{3}}}~{\frac {\partial }{\partial q^{i}}}(h_{1}h_{2}h_{3}~v^{i})} Also, v i = g i k v k ⇒ v 1 = g 11 v 1 = v 1 h 1 2 ; v 2 = g 22 v 2 = v 2 h 2 2 ; v 3 = g 33 v 3 = v 3 h 3 2 {\displaystyle v^{i}=g^{ik}~v_{k}\quad \Rightarrow v^{1}=g^{11}~v_{1}={\cfrac {v_{1}}{h_{1}^{2}}}~;~~v^{2}=g^{22}~v_{2}={\cfrac {v_{2}}{h_{2}^{2}}}~;~~v^{3}=g^{33}~v_{3}={\cfrac {v_{3}}{h_{3}^{2}}}} Therefore, ∇ ⋅ v = 1 h 1 h 2 h 3 ∑ i ∂ ∂ q i ( h 1 h 2 h 3 h i 2 v i ) {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\cfrac {1}{h_{1}h_{2}h_{3}}}~\sum _{i}{\frac {\partial }{\partial q^{i}}}\left({\cfrac {h_{1}h_{2}h_{3}}{h_{i}^{2}}}~v_{i}\right)} We can get an expression for the Laplacian in a similar manner by noting that g l i ∂ φ ∂ q l = { g 11 ∂ φ ∂ q 1 , g 22 ∂ φ ∂ q 2 , g 33 ∂ φ ∂ q 3 } = { 1 h 1 2 ∂ φ ∂ q 1 , 1 h 2 2 ∂ φ ∂ q 2 , 1 h 3 2 ∂ φ ∂ q 3 } {\displaystyle g^{li}~{\frac {\partial \varphi }{\partial q^{l}}}=\left\{g^{11}~{\frac {\partial \varphi }{\partial q^{1}}},g^{22}~{\frac {\partial \varphi }{\partial q^{2}}},g^{33}~{\frac {\partial \varphi }{\partial q^{3}}}\right\}=\left\{{\cfrac {1}{h_{1}^{2}}}~{\frac {\partial \varphi }{\partial q^{1}}},{\cfrac {1}{h_{2}^{2}}}~{\frac {\partial \varphi }{\partial q^{2}}},{\cfrac {1}{h_{3}^{2}}}~{\frac {\partial \varphi }{\partial q^{3}}}\right\}} Then we have ∇ 2 φ = 1 h 1 h 2 h 3 ∑ i ∂ ∂ q i ( h 1 h 2 h 3 h i 2 ∂ φ ∂ q i ) {\displaystyle \nabla ^{2}\varphi ={\cfrac {1}{h_{1}h_{2}h_{3}}}~\sum _{i}{\frac {\partial }{\partial q^{i}}}\left({\cfrac {h_{1}h_{2}h_{3}}{h_{i}^{2}}}~{\frac {\partial \varphi }{\partial q^{i}}}\right)} The expressions for the gradient, divergence, and Laplacian can be directly extended to n {\displaystyle n} -dimensions. 
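The general orthogonal-coordinate Laplacian above can also be checked numerically. The sketch below (my own code; all helper names are hypothetical) applies the formula with the cylindrical scale factors (h₁, h₂, h₃) = (1, r, 1), which appear later in the article, via central finite differences to φ = r² + z² (that is, x² + y² + z²), whose Laplacian is 6 everywhere:

```python
def phi(q):
    r, theta, z = q
    return r * r + z * z          # x^2 + y^2 + z^2 written in cylindrical form

def laplacian_orthogonal(phi, q, scale, h=1e-4):
    # (1/(h1 h2 h3)) * sum_i d/dq^i [ (h1 h2 h3 / h_i^2) * dphi/dq^i ],
    # all derivatives approximated by central differences
    def dphi(point, i):
        qp, qm = list(point), list(point)
        qp[i] += h
        qm[i] -= h
        return (phi(qp) - phi(qm)) / (2 * h)

    def flux(point, i):
        hs = scale(point)
        return (hs[0] * hs[1] * hs[2] / hs[i] ** 2) * dphi(point, i)

    total = 0.0
    for i in range(3):
        qp, qm = list(q), list(q)
        qp[i] += h
        qm[i] -= h
        total += (flux(qp, i) - flux(qm, i)) / (2 * h)
    hs = scale(q)
    return total / (hs[0] * hs[1] * hs[2])

cyl_scale = lambda q: (1.0, q[0], 1.0)   # (h_r, h_theta, h_z) = (1, r, 1)
lap = laplacian_orthogonal(phi, [1.3, 0.7, -0.4], cyl_scale)
assert abs(lap - 6.0) < 1e-6             # Laplacian of x^2 + y^2 + z^2 is 6
```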
The curl of a vector field is given by ∇ × v = 1 h 1 h 2 h 3 ∑ i = 1 n e i ∑ j k ε i j k h i ∂ ( h k v k ) ∂ q j {\displaystyle \nabla \times \mathbf {v} ={\frac {1}{h_{1}h_{2}h_{3}}}\sum _{i=1}^{n}\mathbf {e} _{i}\sum _{jk}\varepsilon _{ijk}h_{i}{\frac {\partial (h_{k}v_{k})}{\partial q^{j}}}} where ε i j k {\displaystyle \varepsilon _{ijk}} is the Levi-Civita symbol. == Example: Cylindrical polar coordinates == For cylindrical coordinates we have ( x 1 , x 2 , x 3 ) = x = φ ( q 1 , q 2 , q 3 ) = φ ( r , θ , z ) = { r cos ⁡ θ , r sin ⁡ θ , z } {\displaystyle (x_{1},x_{2},x_{3})=\mathbf {x} ={\boldsymbol {\varphi }}(q^{1},q^{2},q^{3})={\boldsymbol {\varphi }}(r,\theta ,z)=\{r\cos \theta ,r\sin \theta ,z\}} and { ψ 1 ( x ) , ψ 2 ( x ) , ψ 3 ( x ) } = ( q 1 , q 2 , q 3 ) ≡ ( r , θ , z ) = { x 1 2 + x 2 2 , tan − 1 ⁡ ( x 2 / x 1 ) , x 3 } {\displaystyle \{\psi ^{1}(\mathbf {x} ),\psi ^{2}(\mathbf {x} ),\psi ^{3}(\mathbf {x} )\}=(q^{1},q^{2},q^{3})\equiv (r,\theta ,z)=\{{\sqrt {x_{1}^{2}+x_{2}^{2}}},\tan ^{-1}(x_{2}/x_{1}),x_{3}\}} where 0 < r < ∞ , 0 < θ < 2 π , − ∞ < z < ∞ {\displaystyle 0<r<\infty ~,~~0<\theta <2\pi ~,~~-\infty <z<\infty } Then the covariant and contravariant basis vectors are b 1 = e r = b 1 b 2 = r e θ = r 2 b 2 b 3 = e z = b 3 {\displaystyle {\begin{aligned}\mathbf {b} _{1}&=\mathbf {e} _{r}=\mathbf {b} ^{1}\\\mathbf {b} _{2}&=r~\mathbf {e} _{\theta }=r^{2}~\mathbf {b} ^{2}\\\mathbf {b} _{3}&=\mathbf {e} _{z}=\mathbf {b} ^{3}\end{aligned}}} where e r , e θ , e z {\displaystyle \mathbf {e} _{r},\mathbf {e} _{\theta },\mathbf {e} _{z}} are the unit vectors in the r , θ , z {\displaystyle r,\theta ,z} directions. Note that the components of the metric tensor are such that g i j = g i j = 0 ( i ≠ j ) ; g 11 = 1 , g 22 = 1 r , g 33 = 1 {\displaystyle g^{ij}=g_{ij}=0(i\neq j)~;~~{\sqrt {g^{11}}}=1,~{\sqrt {g^{22}}}={\cfrac {1}{r}},~{\sqrt {g^{33}}}=1} which shows that the basis is orthogonal. 
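The stated metric components can be verified by differentiating the cylindrical map itself. In the sketch below (my own helper names, not from the article) the covariant basis vectors b_i = ∂x/∂q^i are obtained by central differences, and their dot products reproduce g = diag(1, r², 1):

```python
import math

def x_of_q(r, theta, z):
    # the cylindrical map x = (r cos(theta), r sin(theta), z)
    return (r * math.cos(theta), r * math.sin(theta), z)

def covariant_basis(r, theta, z, h=1e-6):
    # b_i = dx/dq^i, approximated by central differences
    q = [r, theta, z]
    basis = []
    for i in range(3):
        qp, qm = list(q), list(q)
        qp[i] += h
        qm[i] -= h
        xp, xm = x_of_q(*qp), x_of_q(*qm)
        basis.append(tuple((a - b) / (2 * h) for a, b in zip(xp, xm)))
    return basis

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

r, theta, z = 2.0, 0.9, -1.5
b1, b2, b3 = covariant_basis(r, theta, z)
assert abs(dot(b1, b1) - 1.0) < 1e-6     # g_11 = 1
assert abs(dot(b2, b2) - r * r) < 1e-6   # g_22 = r^2
assert abs(dot(b3, b3) - 1.0) < 1e-6     # g_33 = 1
assert abs(dot(b1, b2)) < 1e-6           # off-diagonal components vanish
assert abs(dot(b2, b3)) < 1e-6
```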
The non-zero components of the Christoffel symbol of the second kind are Γ 12 2 = Γ 21 2 = 1 r ; Γ 22 1 = − r {\displaystyle \Gamma _{12}^{2}=\Gamma _{21}^{2}={\cfrac {1}{r}}~;~~\Gamma _{22}^{1}=-r} === Representing a physical vector field === The normalized contravariant basis vectors in cylindrical polar coordinates are b ^ 1 = e r ; b ^ 2 = e θ ; b ^ 3 = e z {\displaystyle {\hat {\mathbf {b} }}^{1}=\mathbf {e} _{r}~;~~{\hat {\mathbf {b} }}^{2}=\mathbf {e} _{\theta }~;~~{\hat {\mathbf {b} }}^{3}=\mathbf {e} _{z}} and the physical components of a vector v {\displaystyle \mathbf {v} } are ( v ^ 1 , v ^ 2 , v ^ 3 ) = ( v 1 , v 2 / r , v 3 ) =: ( v r , v θ , v z ) {\displaystyle ({\hat {v}}_{1},{\hat {v}}_{2},{\hat {v}}_{3})=(v_{1},v_{2}/r,v_{3})=:(v_{r},v_{\theta },v_{z})} === Gradient of a scalar field === The gradient of a scalar field, f ( x ) {\displaystyle f(\mathbf {x} )} , in cylindrical coordinates can now be computed from the general expression in curvilinear coordinates and has the form ∇ f = ∂ f ∂ r e r + 1 r ∂ f ∂ θ e θ + ∂ f ∂ z e z {\displaystyle {\boldsymbol {\nabla }}f={\cfrac {\partial f}{\partial r}}~\mathbf {e} _{r}+{\cfrac {1}{r}}~{\cfrac {\partial f}{\partial \theta }}~\mathbf {e} _{\theta }+{\cfrac {\partial f}{\partial z}}~\mathbf {e} _{z}} === Gradient of a vector field === Similarly, the gradient of a vector field, v ( x ) {\displaystyle \mathbf {v} (\mathbf {x} )} , in cylindrical coordinates can be shown to be ∇ v = ∂ v r ∂ r e r ⊗ e r + 1 r ( ∂ v r ∂ θ − v θ ) e r ⊗ e θ + ∂ v r ∂ z e r ⊗ e z + ∂ v θ ∂ r e θ ⊗ e r + 1 r ( ∂ v θ ∂ θ + v r ) e θ ⊗ e θ + ∂ v θ ∂ z e θ ⊗ e z + ∂ v z ∂ r e z ⊗ e r + 1 r ∂ v z ∂ θ e z ⊗ e θ + ∂ v z ∂ z e z ⊗ e z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\mathbf {v} &={\cfrac {\partial v_{r}}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left({\cfrac {\partial v_{r}}{\partial \theta }}-v_{\theta }\right)~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }+{\cfrac {\partial 
v_{r}}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\\[8pt]&+{\cfrac {\partial v_{\theta }}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left({\cfrac {\partial v_{\theta }}{\partial \theta }}+v_{r}\right)~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }+{\cfrac {\partial v_{\theta }}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\\[8pt]&+{\cfrac {\partial v_{z}}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}{\cfrac {\partial v_{z}}{\partial \theta }}~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }+{\cfrac {\partial v_{z}}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\end{aligned}}} === Divergence of a vector field === Using the equation for the divergence of a vector field in curvilinear coordinates, the divergence in cylindrical coordinates can be shown to be ∇ ⋅ v = ∂ v r ∂ r + 1 r ( ∂ v θ ∂ θ + v r ) + ∂ v z ∂ z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot \mathbf {v} &={\cfrac {\partial v_{r}}{\partial r}}+{\cfrac {1}{r}}\left({\cfrac {\partial v_{\theta }}{\partial \theta }}+v_{r}\right)+{\cfrac {\partial v_{z}}{\partial z}}\end{aligned}}} === Laplacian of a scalar field === The Laplacian is more easily computed by noting that ∇ 2 f = ∇ ⋅ ∇ f {\displaystyle {\boldsymbol {\nabla }}^{2}f={\boldsymbol {\nabla }}\cdot {\boldsymbol {\nabla }}f} . 
In cylindrical polar coordinates v = ∇ f = [ v r v θ v z ] = [ ∂ f ∂ r 1 r ∂ f ∂ θ ∂ f ∂ z ] {\displaystyle \mathbf {v} ={\boldsymbol {\nabla }}f=\left[v_{r}~~v_{\theta }~~v_{z}\right]=\left[{\cfrac {\partial f}{\partial r}}~~{\cfrac {1}{r}}{\cfrac {\partial f}{\partial \theta }}~~{\cfrac {\partial f}{\partial z}}\right]} Hence, ∇ ⋅ v = ∇ 2 f = ∂ 2 f ∂ r 2 + 1 r ( 1 r ∂ 2 f ∂ θ 2 + ∂ f ∂ r ) + ∂ 2 f ∂ z 2 = 1 r [ ∂ ∂ r ( r ∂ f ∂ r ) ] + 1 r 2 ∂ 2 f ∂ θ 2 + ∂ 2 f ∂ z 2 {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\boldsymbol {\nabla }}^{2}f={\cfrac {\partial ^{2}f}{\partial r^{2}}}+{\cfrac {1}{r}}\left({\cfrac {1}{r}}{\cfrac {\partial ^{2}f}{\partial \theta ^{2}}}+{\cfrac {\partial f}{\partial r}}\right)+{\cfrac {\partial ^{2}f}{\partial z^{2}}}={\cfrac {1}{r}}\left[{\cfrac {\partial }{\partial r}}\left(r{\cfrac {\partial f}{\partial r}}\right)\right]+{\cfrac {1}{r^{2}}}{\cfrac {\partial ^{2}f}{\partial \theta ^{2}}}+{\cfrac {\partial ^{2}f}{\partial z^{2}}}} === Representing a physical second-order tensor field === The physical components of a second-order tensor field are those obtained when the tensor is expressed in terms of a normalized contravariant basis. 
In cylindrical polar coordinates these components are: S ^ 11 = S 11 =: S r r , S ^ 12 = S 12 r =: S r θ , S ^ 13 = S 13 =: S r z S ^ 21 = S 21 r =: S θ r , S ^ 22 = S 22 r 2 =: S θ θ , S ^ 23 = S 23 r =: S θ z S ^ 31 = S 31 =: S z r , S ^ 32 = S 32 r =: S z θ , S ^ 33 = S 33 =: S z z {\displaystyle {\begin{aligned}{\hat {S}}_{11}&=S_{11}=:S_{rr},&{\hat {S}}_{12}&={\frac {S_{12}}{r}}=:S_{r\theta },&{\hat {S}}_{13}&=S_{13}=:S_{rz}\\[6pt]{\hat {S}}_{21}&={\frac {S_{21}}{r}}=:S_{\theta r},&{\hat {S}}_{22}&={\frac {S_{22}}{r^{2}}}=:S_{\theta \theta },&{\hat {S}}_{23}&={\frac {S_{23}}{r}}=:S_{\theta z}\\[6pt]{\hat {S}}_{31}&=S_{31}=:S_{zr},&{\hat {S}}_{32}&={\frac {S_{32}}{r}}=:S_{z\theta },&{\hat {S}}_{33}&=S_{33}=:S_{zz}\end{aligned}}} === Gradient of a second-order tensor field === Using the above definitions we can show that the gradient of a second-order tensor field in cylindrical polar coordinates can be expressed as ∇ S = ∂ S r r ∂ r e r ⊗ e r ⊗ e r + 1 r [ ∂ S r r ∂ θ − ( S θ r + S r θ ) ] e r ⊗ e r ⊗ e θ + ∂ S r r ∂ z e r ⊗ e r ⊗ e z + ∂ S r θ ∂ r e r ⊗ e θ ⊗ e r + 1 r [ ∂ S r θ ∂ θ + ( S r r − S θ θ ) ] e r ⊗ e θ ⊗ e θ + ∂ S r θ ∂ z e r ⊗ e θ ⊗ e z + ∂ S r z ∂ r e r ⊗ e z ⊗ e r + 1 r [ ∂ S r z ∂ θ − S θ z ] e r ⊗ e z ⊗ e θ + ∂ S r z ∂ z e r ⊗ e z ⊗ e z + ∂ S θ r ∂ r e θ ⊗ e r ⊗ e r + 1 r [ ∂ S θ r ∂ θ + ( S r r − S θ θ ) ] e θ ⊗ e r ⊗ e θ + ∂ S θ r ∂ z e θ ⊗ e r ⊗ e z + ∂ S θ θ ∂ r e θ ⊗ e θ ⊗ e r + 1 r [ ∂ S θ θ ∂ θ + ( S r θ + S θ r ) ] e θ ⊗ e θ ⊗ e θ + ∂ S θ θ ∂ z e θ ⊗ e θ ⊗ e z + ∂ S θ z ∂ r e θ ⊗ e z ⊗ e r + 1 r [ ∂ S θ z ∂ θ + S r z ] e θ ⊗ e z ⊗ e θ + ∂ S θ z ∂ z e θ ⊗ e z ⊗ e z + ∂ S z r ∂ r e z ⊗ e r ⊗ e r + 1 r [ ∂ S z r ∂ θ − S z θ ] e z ⊗ e r ⊗ e θ + ∂ S z r ∂ z e z ⊗ e r ⊗ e z + ∂ S z θ ∂ r e z ⊗ e θ ⊗ e r + 1 r [ ∂ S z θ ∂ θ + S z r ] e z ⊗ e θ ⊗ e θ + ∂ S z θ ∂ z e z ⊗ e θ ⊗ e z + ∂ S z z ∂ r e z ⊗ e z ⊗ e r + 1 r ∂ S z z ∂ θ e z ⊗ e z ⊗ e θ + ∂ S z z ∂ z e z ⊗ e z ⊗ e z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}{\boldsymbol 
{S}}&={\frac {\partial S_{rr}}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{rr}}{\partial \theta }}-(S_{\theta r}+S_{r\theta })\right]~\mathbf {e} _{r}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{rr}}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{r\theta }}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{r\theta }}{\partial \theta }}+(S_{rr}-S_{\theta \theta })\right]~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{r\theta }}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{rz}}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{rz}}{\partial \theta }}-S_{\theta z}\right]~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{rz}}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{\theta r}}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta r}}{\partial \theta }}+(S_{rr}-S_{\theta \theta })\right]~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{\theta r}}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{\theta \theta }}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta \theta }}{\partial \theta }}+(S_{r\theta }+S_{\theta r})\right]~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{\theta \theta }}{\partial z}}~\mathbf {e} _{\theta 
}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{\theta z}}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta z}}{\partial \theta }}+S_{rz}\right]~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{\theta z}}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{zr}}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{zr}}{\partial \theta }}-S_{z\theta }\right]~\mathbf {e} _{z}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{zr}}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{z\theta }}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{z\theta }}{\partial \theta }}+S_{zr}\right]~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{z\theta }}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{zz}}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}~{\frac {\partial S_{zz}}{\partial \theta }}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{zz}}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{z}\end{aligned}}} === Divergence of a second-order tensor field === The divergence of a second-order tensor field in cylindrical polar coordinates can be obtained from the expression for the gradient by collecting terms where the scalar product of the two outer vectors in the dyadic products is nonzero. 
Therefore, ∇ ⋅ S = ∂ S r r ∂ r e r + ∂ S r θ ∂ r e θ + ∂ S r z ∂ r e z + 1 r [ ∂ S r θ ∂ θ + ( S r r − S θ θ ) ] e r + 1 r [ ∂ S θ θ ∂ θ + ( S r θ + S θ r ) ] e θ + 1 r [ ∂ S θ z ∂ θ + S r z ] e z + ∂ S z r ∂ z e r + ∂ S z θ ∂ z e θ + ∂ S z z ∂ z e z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&={\frac {\partial S_{rr}}{\partial r}}~\mathbf {e} _{r}+{\frac {\partial S_{r\theta }}{\partial r}}~\mathbf {e} _{\theta }+{\frac {\partial S_{rz}}{\partial r}}~\mathbf {e} _{z}\\[8pt]&+{\cfrac {1}{r}}\left[{\frac {\partial S_{r\theta }}{\partial \theta }}+(S_{rr}-S_{\theta \theta })\right]~\mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta \theta }}{\partial \theta }}+(S_{r\theta }+S_{\theta r})\right]~\mathbf {e} _{\theta }+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta z}}{\partial \theta }}+S_{rz}\right]~\mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{zr}}{\partial z}}~\mathbf {e} _{r}+{\frac {\partial S_{z\theta }}{\partial z}}~\mathbf {e} _{\theta }+{\frac {\partial S_{zz}}{\partial z}}~\mathbf {e} _{z}\end{aligned}}} == See also == Covariance and contravariance Basic introduction to the mathematics of curved spacetime Orthogonal coordinates Frenet–Serret formulas Covariant derivative Tensor derivative (continuum mechanics) Curvilinear perspective Del in cylindrical and spherical coordinates == References == Notes Further reading == External links == Derivation of Unit Vectors in Curvilinear Coordinates MathWorld's page on Curvilinear Coordinates Prof. R. Brannon's E-Book on Curvilinear Coordinates
Wikipedia/Tensors_in_curvilinear_coordinates
Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one. Multivariable calculus may be thought of as an elementary part of calculus on Euclidean space. The special case of calculus in three-dimensional space is often called vector calculus. == Introduction == In single-variable calculus, operations like differentiation and integration are applied to functions of a single variable. In multivariate calculus these operations must be generalized to multiple variables, so the domain is multi-dimensional. Care is required in these generalizations, because of two key differences between 1D and higher-dimensional spaces: There are infinitely many ways to approach a single point in higher dimensions, as opposed to two (from the positive and the negative direction) in 1D; There are multiple extended objects associated with the dimension; for example, the graph of a function of one variable is a curve in the 2D Cartesian plane, but the graph of a function of two variables is a surface in 3D, while curves can also live in 3D space. The consequence of the first difference is a difference in the definition of the limit and of differentiation. Directional limits and derivatives define the limit and differential along a 1D parametrized curve, reducing the problem to the 1D case. Further higher-dimensional objects can be constructed from these operators. The consequence of the second difference is the existence of multiple types of integration, including line integrals, surface integrals and volume integrals. Due to the non-uniqueness of these integrals, an antiderivative or indefinite integral cannot be properly defined. == Limits == A study of limits and continuity in multivariable calculus yields many counterintuitive results not demonstrated by single-variable functions.
A limit along a path may be defined by considering a parametrised path s ( t ) : R → R n {\displaystyle s(t):\mathbb {R} \to \mathbb {R} ^{n}} in n-dimensional Euclidean space. Any function f ( x → ) : R n → R m {\displaystyle f({\overrightarrow {x}}):\mathbb {R} ^{n}\to \mathbb {R} ^{m}} can then be projected on the path as a 1D function f ( s ( t ) ) {\displaystyle f(s(t))} . The limit of f {\displaystyle f} to the point s ( t 0 ) {\displaystyle s(t_{0})} along the path s ( t ) {\displaystyle s(t)} can hence be defined as lim t → t 0 f ( s ( t ) ) . {\displaystyle \lim _{t\to t_{0}}f(s(t)).} Note that the value of this limit can be dependent on the form of s ( t ) {\displaystyle s(t)} , i.e. the path chosen, not just the point which the limit approaches.: 19–22  For example, consider the function f ( x , y ) = x 2 y x 4 + y 2 . {\displaystyle f(x,y)={\frac {x^{2}y}{x^{4}+y^{2}}}.} If the point ( 0 , 0 ) {\displaystyle (0,0)} is approached through the line y = k x {\displaystyle y=kx} , or in parametric form: x ( t ) = t , y ( t ) = k t . {\displaystyle x(t)=t,\,y(t)=kt.} Then the limit along the path will be: lim t → 0 f ( t , k t ) = lim t → 0 k t t 2 + k 2 = 0. {\displaystyle \lim _{t\to 0}f(t,kt)=\lim _{t\to 0}{\frac {kt}{t^{2}+k^{2}}}=0.} On the other hand, if the path y = ± x 2 {\displaystyle y=\pm x^{2}} (or parametrically, x ( t ) = t , y ( t ) = ± t 2 {\displaystyle x(t)=t,\,y(t)=\pm t^{2}} ) is chosen, then the limit becomes: lim t → 0 f ( t , ± t 2 ) = lim t → 0 ± t 4 2 t 4 = ± 1 2 . {\displaystyle \lim _{t\to 0}f(t,\pm t^{2})=\lim _{t\to 0}{\frac {\pm t^{4}}{2t^{4}}}=\pm {\frac {1}{2}}.} Since taking different paths towards the same point yields different values, a general limit at the point ( 0 , 0 ) {\displaystyle (0,0)} cannot be defined for the function. A general limit can be defined if the limits to a point along all possible paths converge to the same value, i.e. we say for a function f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} that the limit of f {\displaystyle f} to some point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} is L, if and only if lim t → t 0 f ( s ( t ) ) = L {\displaystyle \lim _{t\to t_{0}}f(s(t))=L} for all continuous functions s ( t ) : R → R n {\displaystyle s(t):\mathbb {R} \to \mathbb {R} ^{n}} such that s ( t 0 ) = x 0 {\displaystyle s(t_{0})=x_{0}} .
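The path dependence of this limit is easy to see numerically. The short snippet below (my own, with the arbitrary choice k = 2) evaluates the example function along the line y = kx, where the values tend to 0, and along the parabola y = x², where they sit at 1/2:

```python
def f(x, y):
    # the example function f(x, y) = x^2 y / (x^4 + y^2)
    return x * x * y / (x ** 4 + y ** 2)

# approach (0, 0) along the line y = 2x: values tend to 0
along_line = [f(t, 2 * t) for t in (0.1, 0.01, 0.001)]
# approach (0, 0) along the parabola y = x^2: values are constantly 1/2
along_parabola = [f(t, t * t) for t in (0.1, 0.01, 0.001)]

assert abs(along_line[-1]) < 1e-3
assert all(abs(v - 0.5) < 1e-9 for v in along_parabola)
```

Since the two families of approach values disagree, no general limit exists at the origin.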
=== Continuity === From the concept of limit along a path, we can then derive the definition for multivariate continuity in the same manner, that is: we say for a function f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} that f {\displaystyle f} is continuous at the point x 0 {\displaystyle x_{0}} , if and only if lim t → t 0 f ( s ( t ) ) = f ( x 0 ) {\displaystyle \lim _{t\to t_{0}}f(s(t))=f(x_{0})} for all continuous functions s ( t ) : R → R n {\displaystyle s(t):\mathbb {R} \to \mathbb {R} ^{n}} such that s ( t 0 ) = x 0 {\displaystyle s(t_{0})=x_{0}} . As with limits, being continuous along one path s ( t ) {\displaystyle s(t)} does not imply multivariate continuity. That continuity in each argument is not sufficient for multivariate continuity can be seen from the following example.: 17–19  For a real-valued function f : R 2 → R {\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} } with two real-valued parameters, f ( x , y ) {\displaystyle f(x,y)} , continuity of f {\displaystyle f} in x {\displaystyle x} for fixed y {\displaystyle y} and continuity of f {\displaystyle f} in y {\displaystyle y} for fixed x {\displaystyle x} does not imply continuity of f {\displaystyle f} . Consider f ( x , y ) = { y x − y if 0 ≤ y < x ≤ 1 x y − x if 0 ≤ x < y ≤ 1 1 − x if 0 < x = y 0 everywhere else . {\displaystyle f(x,y)={\begin{cases}{\frac {y}{x}}-y&{\text{if}}\quad 0\leq y<x\leq 1\\{\frac {x}{y}}-x&{\text{if}}\quad 0\leq x<y\leq 1\\1-x&{\text{if}}\quad 0<x=y\\0&{\text{everywhere else}}.\end{cases}}} It is easy to verify that this function is zero by definition on the boundary and outside of the quadrangle ( 0 , 1 ) × ( 0 , 1 ) {\displaystyle (0,1)\times (0,1)} . Furthermore, the functions defined for constant x {\displaystyle x} and y {\displaystyle y} and 0 ≤ a ≤ 1 {\displaystyle 0\leq a\leq 1} by g a ( x ) = f ( x , a ) {\displaystyle g_{a}(x)=f(x,a)\quad } and h a ( y ) = f ( a , y ) {\displaystyle \quad h_{a}(y)=f(a,y)\quad } are continuous.
Specifically, g 0 ( x ) = f ( x , 0 ) = 0 {\displaystyle g_{0}(x)=f(x,0)=0} and h 0 ( y ) = f ( 0 , y ) = 0 {\displaystyle h_{0}(y)=f(0,y)=0} for all x and y. Therefore, f ( 0 , 0 ) = 0 {\displaystyle f(0,0)=0} and moreover, along the coordinate axes, lim x → 0 f ( x , 0 ) = 0 {\displaystyle \lim _{x\to 0}f(x,0)=0} and lim y → 0 f ( 0 , y ) = 0 {\displaystyle \lim _{y\to 0}f(0,y)=0} . Therefore, the function is continuous in each individual argument. However, consider the parametric path x ( t ) = t , y ( t ) = t {\displaystyle x(t)=t,\,y(t)=t} . The parametric function becomes f ( t , t ) = 1 − t {\displaystyle f(t,t)=1-t} for 0 < t ≤ 1 {\displaystyle 0<t\leq 1} . Therefore, lim t → 0 f ( t , t ) = 1 ≠ 0 = f ( 0 , 0 ) . {\displaystyle \lim _{t\to 0}f(t,t)=1\neq 0=f(0,0).} It is hence clear that the function is not multivariate continuous, despite being continuous in both coordinates. === Theorems regarding multivariate limits and continuity === All properties of linearity and superposition from single-variable calculus carry over to multivariate calculus. Composition: If f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} and g : R m → R p {\displaystyle g:\mathbb {R} ^{m}\to \mathbb {R} ^{p}} are both multivariate continuous functions at the points x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} and f ( x 0 ) ∈ R m {\displaystyle f(x_{0})\in \mathbb {R} ^{m}} respectively, then g ∘ f : R n → R p {\displaystyle g\circ f:\mathbb {R} ^{n}\to \mathbb {R} ^{p}} is also a multivariate continuous function at the point x 0 {\displaystyle x_{0}} . Multiplication: If f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } and g : R n → R {\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} } are both continuous functions at the point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} , then f g : R n → R {\displaystyle fg:\mathbb {R} ^{n}\to \mathbb {R} } is continuous at x 0 {\displaystyle x_{0}} , and f / g : R n → R {\displaystyle f/g:\mathbb {R} ^{n}\to \mathbb {R} } is also continuous at x 0 {\displaystyle x_{0}} provided that g ( x 0 ) ≠ 0 {\displaystyle g(x_{0})\neq 0} .
If f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is a continuous function at point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} , then | f | {\displaystyle |f|} is also continuous at the same point. If f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} is Lipschitz continuous (with the appropriate normed spaces as needed) in the neighbourhood of the point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} , then f {\displaystyle f} is multivariate continuous at x 0 {\displaystyle x_{0}} . == Differentiation == === Directional derivative === The derivative of a single-variable function is defined as f ′ ( a ) = lim h → 0 f ( a + h ) − f ( a ) h . {\displaystyle f'(a)=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}.} Using the extension of limits discussed above, one can then extend the definition of the derivative to a scalar-valued function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } along some path s ( t ) : R → R n {\displaystyle s(t):\mathbb {R} \to \mathbb {R} ^{n}} : d d t f ( s ( t ) ) | t = t 0 = lim t → t 0 f ( s ( t ) ) − f ( s ( t 0 ) ) t − t 0 . {\displaystyle \left.{\frac {d}{dt}}f(s(t))\right|_{t=t_{0}}=\lim _{t\to t_{0}}{\frac {f(s(t))-f(s(t_{0}))}{t-t_{0}}}.} Unlike limits, for which the value depends on the exact form of the path s ( t ) {\displaystyle s(t)} , it can be shown that the derivative along the path depends only on the tangent vector of the path at s ( t 0 ) {\displaystyle s(t_{0})} , i.e. s ′ ( t 0 ) {\displaystyle s'(t_{0})} , provided that f {\displaystyle f} is Lipschitz continuous at s ( t 0 ) {\displaystyle s(t_{0})} , and that the limit exists for at least one such path. It is therefore possible to define the directional derivative as follows: The directional derivative of a scalar-valued function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } along the unit vector u ^ {\displaystyle {\hat {\mathbf {u}}}} at some point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} is ∇ u ^ f ( x 0 ) = lim t → 0 f ( x 0 + u ^ t ) − f ( x 0 ) t , {\displaystyle \nabla _{\hat {\mathbf {u}}}f(x_{0})=\lim _{t\to 0}{\frac {f(x_{0}+{\hat {\mathbf {u}}}t)-f(x_{0})}{t}},} or, when expressed in terms of ordinary differentiation, ∇ u ^ f ( x 0 ) = d d t f ( x 0 + u ^ t ) | t = 0 , {\displaystyle \nabla _{\hat {\mathbf {u}}}f(x_{0})=\left.{\frac {d}{dt}}f(x_{0}+{\hat {\mathbf {u}}}t)\right|_{t=0},} which is a well-defined expression because f ( x 0 + u ^ t ) {\displaystyle f(x_{0}+{\hat {\mathbf {u}}}t)} is a scalar function with one variable in t {\displaystyle t} .
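A short numeric sketch of the directional derivative (my own helper, using a central difference rather than the one-sided limit of the definition, and a hypothetical test function): for f(x, y) = x²y at (1, 2) the gradient is (4, 1), so the derivative along û = (3/5, 4/5) should be 4·0.6 + 1·0.8 = 3.2, and reversing û flips the sign:

```python
def f(p):
    x, y = p
    return x * x * y                   # hypothetical smooth test function

def directional_derivative(f, p, u, t=1e-6):
    # central-difference version of the limit defining the directional derivative
    fp = f([a + t * b for a, b in zip(p, u)])
    fm = f([a - t * b for a, b in zip(p, u)])
    return (fp - fm) / (2 * t)

p = [1.0, 2.0]
u = [0.6, 0.8]                         # unit vector (3/5, 4/5)
d = directional_derivative(f, p, u)
assert abs(d - 3.2) < 1e-6
# reversing the direction flips the sign of the derivative
assert abs(directional_derivative(f, p, [-0.6, -0.8]) + d) < 1e-6
```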
It is not possible to define a unique scalar derivative without a direction; it is clear for example that ∇ u ^ f ( x 0 ) = − ∇ − u ^ f ( x 0 ) {\displaystyle \nabla _{\hat {\mathbf {u}}}f(x_{0})=-\nabla _{-{\hat {\mathbf {u}}}}f(x_{0})} . It is also possible for directional derivatives to exist for some directions but not for others. === Partial derivative === The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant.: 26ff  A partial derivative may be thought of as the directional derivative of the function along a coordinate axis. Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator ( ∇ {\displaystyle \nabla } ) is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which directly varies from point to point in the domain of the function. Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable.: 654ff  == Multiple integration == The multiple integral extends the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. 
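As a small illustration of a double integral (a sketch assuming a plain iterated midpoint rule; the helper is my own, not a library API), the volume under z = x + y over the unit square is ∫₀¹∫₀¹ (x + y) dy dx = 1:

```python
def double_integral(f, ax, bx, ay, by, n=400):
    # iterated midpoint rule: sum over a grid of cell midpoints,
    # integrating over y inside and over x outside
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        for j in range(n):
            y = ay + (j + 0.5) * hy
            total += f(x, y)
    return total * hx * hy

# volume under z = x + y over the unit square [0,1] x [0,1] is exactly 1
vol = double_integral(lambda x, y: x + y, 0.0, 1.0, 0.0, 1.0)
assert abs(vol - 1.0) < 1e-6
```

Because the integrand is linear, the midpoint rule is exact on each cell, so the result matches the iterated (Fubini) evaluation up to rounding.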
Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral or iterated integral as long as the integrand is continuous throughout the domain of integration.: 367ff  The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves. === Fundamental theorem of calculus in multiple dimensions === In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the integral theorems of vector calculus:: 543ff  Gradient theorem Stokes' theorem Divergence theorem Green's theorem. In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds. == Applications and uses == Techniques of multivariable calculus are used to study many objects of interest in the material world. In particular, multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics. Multivariate calculus is used in the optimal control of continuous-time dynamic systems. It is used in regression analysis to derive formulas for estimating relationships among various sets of empirical data. Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. In economics, for example, consumer choice over a variety of goods, and producer choice over various inputs to use and outputs to produce, are modeled with multivariate calculus.
Non-deterministic, or stochastic systems can be studied using a different kind of mathematics, such as stochastic calculus. == See also == List of multivariable calculus topics Multivariate statistics == References == == External links == UC Berkeley video lectures on Multivariable Calculus, Fall 2009, Professor Edward Frenkel MIT video lectures on Multivariable Calculus, Fall 2007 Multivariable Calculus: A free online textbook by George Cain and James Herod Multivariable Calculus Online: A free online textbook by Jeff Knisley Multivariable Calculus – A Very Quick Review, Prof. Blair Perot, University of Massachusetts Amherst Multivariable Calculus, Online text by Dr. Jerry Shurman
Wikipedia/Multivariate_calculus
In mathematics, the tensor product V ⊗ W {\displaystyle V\otimes W} of two vector spaces V {\displaystyle V} and W {\displaystyle W} (over the same field) is a vector space to which is associated a bilinear map V × W → V ⊗ W {\displaystyle V\times W\rightarrow V\otimes W} that maps a pair ( v , w ) , v ∈ V , w ∈ W {\displaystyle (v,w),\ v\in V,w\in W} to an element of V ⊗ W {\displaystyle V\otimes W} denoted ⁠ v ⊗ w {\displaystyle v\otimes w} ⁠. An element of the form v ⊗ w {\displaystyle v\otimes w} is called the tensor product of v {\displaystyle v} and w {\displaystyle w} . An element of V ⊗ W {\displaystyle V\otimes W} is a tensor, and the tensor product of two vectors is sometimes called an elementary tensor or a decomposable tensor. The elementary tensors span V ⊗ W {\displaystyle V\otimes W} in the sense that every element of V ⊗ W {\displaystyle V\otimes W} is a sum of elementary tensors. If bases are given for V {\displaystyle V} and W {\displaystyle W} , a basis of V ⊗ W {\displaystyle V\otimes W} is formed by all tensor products of a basis element of V {\displaystyle V} and a basis element of W {\displaystyle W} . The tensor product of two vector spaces captures the properties of all bilinear maps in the sense that a bilinear map from V × W {\displaystyle V\times W} into another vector space Z {\displaystyle Z} factors uniquely through a linear map V ⊗ W → Z {\displaystyle V\otimes W\to Z} (see the section below titled 'Universal property'), i.e. the bilinear map is associated to a unique linear map from the tensor product V ⊗ W {\displaystyle V\otimes W} to Z {\displaystyle Z} . Tensor products are used in many application areas, including physics and engineering. For example, in general relativity, the gravitational field is described through the metric tensor, which is a tensor field with one tensor at each point of the space-time manifold, and each belonging to the tensor product of the cotangent space at the point with itself. 
== Definitions and constructions == The tensor product of two vector spaces is a vector space that is defined up to an isomorphism. There are several equivalent ways to define it. Most consist of defining explicitly a vector space that is called a tensor product, and, generally, the equivalence proof results almost immediately from the basic properties of the vector spaces that are so defined. The tensor product can also be defined through a universal property; see § Universal property, below. As for every universal property, all objects that satisfy the property are isomorphic through a unique isomorphism that is compatible with the universal property. When this definition is used, the other definitions may be viewed as constructions of objects satisfying the universal property and as proofs that there are objects satisfying the universal property, that is that tensor products exist. === From bases === Let V and W be two vector spaces over a field F, with respective bases B V {\displaystyle B_{V}} and ⁠ B W {\displaystyle B_{W}} ⁠. The tensor product V ⊗ W {\displaystyle V\otimes W} of V and W is a vector space that has as a basis the set of all v ⊗ w {\displaystyle v\otimes w} with v ∈ B V {\displaystyle v\in B_{V}} and ⁠ w ∈ B W {\displaystyle w\in B_{W}} ⁠. This definition can be formalized in the following way (this formalization is rarely used in practice, as the preceding informal definition is generally sufficient): V ⊗ W {\displaystyle V\otimes W} is the set of the functions from the Cartesian product B V × B W {\displaystyle B_{V}\times B_{W}} to F that have a finite number of nonzero values. The pointwise operations make V ⊗ W {\displaystyle V\otimes W} a vector space. The function that maps ( v , w ) {\displaystyle (v,w)} to 1 and the other elements of B V × B W {\displaystyle B_{V}\times B_{W}} to 0 is denoted ⁠ v ⊗ w {\displaystyle v\otimes w} ⁠. 
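The formalization above (finitely supported functions on B_V × B_W) is straightforward to model in code. A minimal sketch under illustrative assumptions (the dict encoding and the names `basis_tensor`, `add`, `scale` are not from the source): an element of V ⊗ W is stored as a dict from basis pairs to coefficients, and v ⊗ w for basis elements is the indicator function of the pair (v, w).

```python
def basis_tensor(v, w):
    # v (x) w for basis elements: the function that is 1 on (v, w) and 0 elsewhere
    return {(v, w): 1}

def add(t1, t2):
    # Pointwise addition of finitely supported functions, dropping zero coefficients
    out = dict(t1)
    for key, coeff in t2.items():
        out[key] = out.get(key, 0) + coeff
        if out[key] == 0:
            del out[key]
    return out

def scale(c, t):
    # Pointwise scalar multiplication
    return {key: c * coeff for key, coeff in t.items()} if c != 0 else {}

# A generic element of V (x) W is a finite sum of elementary tensors:
t = add(basis_tensor("e1", "f2"), scale(3, basis_tensor("e2", "f1")))
print(t)  # {('e1', 'f2'): 1, ('e2', 'f1'): 3}
```

Dropping zero coefficients keeps the support finite, which is exactly the "finite number of nonzero values" condition in the definition.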
The set { v ⊗ w ∣ v ∈ B V , w ∈ B W } {\displaystyle \{v\otimes w\mid v\in B_{V},w\in B_{W}\}} is then straightforwardly a basis of ⁠ V ⊗ W {\displaystyle V\otimes W} ⁠, which is called the tensor product of the bases B V {\displaystyle B_{V}} and ⁠ B W {\displaystyle B_{W}} ⁠. We can equivalently define V ⊗ W {\displaystyle V\otimes W} to be the set of bilinear forms on V × W {\displaystyle V\times W} that are nonzero at only a finite number of elements of ⁠ B V × B W {\displaystyle B_{V}\times B_{W}} ⁠. To see this, given ( x , y ) ∈ V × W {\displaystyle (x,y)\in V\times W} and a bilinear form ⁠ B : V × W → F {\displaystyle B:V\times W\to F} ⁠, we can decompose x {\displaystyle x} and y {\displaystyle y} in the bases B V {\displaystyle B_{V}} and B W {\displaystyle B_{W}} as: x = ∑ v ∈ B V x v v and y = ∑ w ∈ B W y w w , {\displaystyle x=\sum _{v\in B_{V}}x_{v}\,v\quad {\text{and}}\quad y=\sum _{w\in B_{W}}y_{w}\,w,} where only a finite number of x v {\displaystyle x_{v}} 's and y w {\displaystyle y_{w}} 's are nonzero, and find by the bilinearity of B {\displaystyle B} that: B ( x , y ) = ∑ v ∈ B V ∑ w ∈ B W x v y w B ( v , w ) {\displaystyle B(x,y)=\sum _{v\in B_{V}}\sum _{w\in B_{W}}x_{v}y_{w}\,B(v,w)} Hence, we see that the value of B {\displaystyle B} for any ( x , y ) ∈ V × W {\displaystyle (x,y)\in V\times W} is uniquely and totally determined by the values that it takes on ⁠ B V × B W {\displaystyle B_{V}\times B_{W}} ⁠. This lets us extend the maps v ⊗ w {\displaystyle v\otimes w} defined on B V × B W {\displaystyle B_{V}\times B_{W}} as before into bilinear maps v ⊗ w : V × W → F {\displaystyle v\otimes w:V\times W\to F} , by letting: ( v ⊗ w ) ( x , y ) := ∑ v ′ ∈ B V ∑ w ′ ∈ B W x v ′ y w ′ ( v ⊗ w ) ( v ′ , w ′ ) = x v y w . 
{\displaystyle (v\otimes w)(x,y):=\sum _{v'\in B_{V}}\sum _{w'\in B_{W}}x_{v'}y_{w'}\,(v\otimes w)(v',w')=x_{v}\,y_{w}.} Then we can express any bilinear form B {\displaystyle B} as a (potentially infinite) formal linear combination of the v ⊗ w {\displaystyle v\otimes w} maps according to: B = ∑ v ∈ B V ∑ w ∈ B W B ( v , w ) ( v ⊗ w ) {\displaystyle B=\sum _{v\in B_{V}}\sum _{w\in B_{W}}B(v,w)(v\otimes w)} making these maps similar to a Schauder basis for the vector space Hom ( V , W ; F ) {\displaystyle {\text{Hom}}(V,W;F)} of all bilinear forms on ⁠ V × W {\displaystyle V\times W} ⁠. To instead have it be a proper Hamel basis, it only remains to add the requirement that B {\displaystyle B} is nonzero at only a finite number of elements of ⁠ B V × B W {\displaystyle B_{V}\times B_{W}} ⁠, and consider the subspace of such maps instead. In either construction, the tensor product of two vectors is defined from their decomposition on the bases. More precisely, taking the basis decompositions of x ∈ V {\displaystyle x\in V} and y ∈ W {\displaystyle y\in W} as before: x ⊗ y = ( ∑ v ∈ B V x v v ) ⊗ ( ∑ w ∈ B W y w w ) = ∑ v ∈ B V ∑ w ∈ B W x v y w v ⊗ w . {\displaystyle {\begin{aligned}x\otimes y&={\biggl (}\sum _{v\in B_{V}}x_{v}\,v{\biggr )}\otimes {\biggl (}\sum _{w\in B_{W}}y_{w}\,w{\biggr )}\\[5mu]&=\sum _{v\in B_{V}}\sum _{w\in B_{W}}x_{v}y_{w}\,v\otimes w.\end{aligned}}} This definition is quite clearly derived from the coefficients of B ( v , w ) {\displaystyle B(v,w)} in the expansion by bilinearity of B ( x , y ) {\displaystyle B(x,y)} using the bases B V {\displaystyle B_{V}} and ⁠ B W {\displaystyle B_{W}} ⁠, as done above. It is then straightforward to verify that with this definition, the map ⊗ : ( x , y ) ↦ x ⊗ y {\displaystyle {\otimes }:(x,y)\mapsto x\otimes y} is a bilinear map from V × W {\displaystyle V\times W} to V ⊗ W {\displaystyle V\otimes W} satisfying the universal property that any construction of the tensor product satisfies (see below).
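In coordinates, the formula above says the tensor product of two vectors is just the table of pairwise products of their coefficients, (x ⊗ y)ᵢⱼ = xᵢyⱼ. A small Python sketch (the dict encoding and the name `tensor` are illustrative choices) computes these coefficients and checks bilinearity in the first argument.

```python
from itertools import product

def tensor(x, y):
    # Coefficients of x (x) y on the product basis: (x (x) y)_{ij} = x_i * y_j
    return {(i, j): xi * yj
            for (i, xi), (j, yj) in product(enumerate(x), enumerate(y))}

x = [1, 2]        # coordinates of x in a basis of V
y = [3, 0, 4]     # coordinates of y in a basis of W
t = tensor(x, y)
print(t[(1, 2)])  # x_1 * y_2 = 2 * 4 = 8

# Bilinearity in the first argument: (x + x') (x) y = x (x) y + x' (x) y
x2 = [5, -1]
lhs = tensor([a + b for a, b in zip(x, x2)], y)
rhs = {k: t[k] + tensor(x2, y)[k] for k in t}
print(lhs == rhs)  # True
```

The same coefficientwise check works for bilinearity in the second argument and for scalar multiples.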
If arranged into a rectangular array, the coordinate vector of x ⊗ y {\displaystyle x\otimes y} is the outer product of the coordinate vectors of x {\displaystyle x} and ⁠ y {\displaystyle y} ⁠. Therefore, the tensor product is a generalization of the outer product, that is, an abstraction of it beyond coordinate vectors. A limitation of this definition of the tensor product is that, if one changes bases, a different tensor product is defined. However, the decomposition on one basis of the elements of the other basis defines a canonical isomorphism between the two tensor products of vector spaces, which allows identifying them. Also, in contrast to the two following alternative definitions, this definition cannot be extended into a definition of the tensor product of modules over a ring. === As a quotient space === A construction of the tensor product that is basis independent can be obtained in the following way. Let V and W be two vector spaces over a field F. One considers first a vector space L that has the Cartesian product V × W {\displaystyle V\times W} as a basis. That is, the basis elements of L are the pairs ( v , w ) {\displaystyle (v,w)} with v ∈ V {\displaystyle v\in V} and ⁠ w ∈ W {\displaystyle w\in W} ⁠. To get such a vector space, one can define it as the vector space of the functions V × W → F {\displaystyle V\times W\to F} that have a finite number of nonzero values and identify ( v , w ) {\displaystyle (v,w)} with the function that takes the value 1 on ( v , w ) {\displaystyle (v,w)} and 0 otherwise. Let R be the linear subspace of L that is spanned by the relations that the tensor product must satisfy.
More precisely, R is spanned by the elements of one of the forms: ( v 1 + v 2 , w ) − ( v 1 , w ) − ( v 2 , w ) , ( v , w 1 + w 2 ) − ( v , w 1 ) − ( v , w 2 ) , ( s v , w ) − s ( v , w ) , ( v , s w ) − s ( v , w ) , {\displaystyle {\begin{aligned}(v_{1}+v_{2},w)&-(v_{1},w)-(v_{2},w),\\(v,w_{1}+w_{2})&-(v,w_{1})-(v,w_{2}),\\(sv,w)&-s(v,w),\\(v,sw)&-s(v,w),\end{aligned}}} where ⁠ v , v 1 , v 2 ∈ V {\displaystyle v,v_{1},v_{2}\in V} ⁠, w , w 1 , w 2 ∈ W {\displaystyle w,w_{1},w_{2}\in W} and ⁠ s ∈ F {\displaystyle s\in F} ⁠. Then, the tensor product is defined as the quotient space: V ⊗ W = L / R , {\displaystyle V\otimes W=L/R,} and the image of ( v , w ) {\displaystyle (v,w)} in this quotient is denoted ⁠ v ⊗ w {\displaystyle v\otimes w} ⁠. It is straightforward to prove that the result of this construction satisfies the universal property considered below. (A very similar construction can be used to define the tensor product of modules.) === Universal property === In this section, the universal property satisfied by the tensor product is described. As for every universal property, two objects that satisfy the property are related by a unique isomorphism. It follows that this is a (non-constructive) way to define the tensor product of two vector spaces. In this context, the preceding constructions of tensor products may be viewed as proofs of existence of the tensor product so defined. A consequence of this approach is that every property of the tensor product can be deduced from the universal property, and that, in practice, one may forget the method that has been used to prove its existence. 
The "universal-property definition" of the tensor product of two vector spaces is the following (recall that a bilinear map is a function that is separately linear in each of its arguments): The tensor product of two vector spaces V and W is a vector space denoted as ⁠ V ⊗ W {\displaystyle V\otimes W} ⁠, together with a bilinear map ⊗ : ( v , w ) ↦ v ⊗ w {\displaystyle {\otimes }:(v,w)\mapsto v\otimes w} from V × W {\displaystyle V\times W} to ⁠ V ⊗ W {\displaystyle V\otimes W} ⁠, such that, for every bilinear map ⁠ h : V × W → Z {\displaystyle h:V\times W\to Z} ⁠, there is a unique linear map ⁠ h ~ : V ⊗ W → Z {\displaystyle {\tilde {h}}:V\otimes W\to Z} ⁠, such that h = h ~ ∘ ⊗ {\displaystyle h={\tilde {h}}\circ {\otimes }} (that is, h ( v , w ) = h ~ ( v ⊗ w ) {\displaystyle h(v,w)={\tilde {h}}(v\otimes w)} for every v ∈ V {\displaystyle v\in V} and ⁠ w ∈ W {\displaystyle w\in W} ⁠). === Linearly disjoint === Like the universal property above, the following characterization may also be used to determine whether or not a given vector space and given bilinear map form a tensor product. For example, it follows immediately that if ⁠ X = C m {\displaystyle X=\mathbb {C} ^{m}} ⁠ and ⁠ Y = C n {\displaystyle Y=\mathbb {C} ^{n}} ⁠, where m {\displaystyle m} and n {\displaystyle n} are positive integers, then one may set Z = C m n {\displaystyle Z=\mathbb {C} ^{mn}} and define the bilinear map as T : C m × C n → C m n ( x , y ) = ( ( x 1 , … , x m ) , ( y 1 , … , y n ) ) ↦ ( x i y j ) j = 1 , … , n i = 1 , … , m {\displaystyle {\begin{aligned}T:\mathbb {C} ^{m}\times \mathbb {C} ^{n}&\to \mathbb {C} ^{mn}\\(x,y)=((x_{1},\ldots ,x_{m}),(y_{1},\ldots ,y_{n}))&\mapsto (x_{i}y_{j})_{\stackrel {i=1,\ldots ,m}{j=1,\ldots ,n}}\end{aligned}}} to form the tensor product of X {\displaystyle X} and ⁠ Y {\displaystyle Y} ⁠. Often, this map T {\displaystyle T} is denoted by ⊗ {\displaystyle \,\otimes \,} so that x ⊗ y = T ( x , y ) . 
{\displaystyle x\otimes y=T(x,y).} As another example, suppose that C S {\displaystyle \mathbb {C} ^{S}} is the vector space of all complex-valued functions on a set S {\displaystyle S} with addition and scalar multiplication defined pointwise (meaning that f + g {\displaystyle f+g} is the map s ↦ f ( s ) + g ( s ) {\displaystyle s\mapsto f(s)+g(s)} and c f {\displaystyle cf} is the map ⁠ s ↦ c f ( s ) {\displaystyle s\mapsto cf(s)} ⁠). Let S {\displaystyle S} and T {\displaystyle T} be any sets and for any f ∈ C S {\displaystyle f\in \mathbb {C} ^{S}} and ⁠ g ∈ C T {\displaystyle g\in \mathbb {C} ^{T}} ⁠, let f ⊗ g ∈ C S × T {\displaystyle f\otimes g\in \mathbb {C} ^{S\times T}} denote the function defined by ⁠ ( s , t ) ↦ f ( s ) g ( t ) {\displaystyle (s,t)\mapsto f(s)g(t)} ⁠. If X ⊆ C S {\displaystyle X\subseteq \mathbb {C} ^{S}} and Y ⊆ C T {\displaystyle Y\subseteq \mathbb {C} ^{T}} are vector subspaces then the vector subspace Z := span ⁡ { f ⊗ g : f ∈ X , g ∈ Y } {\displaystyle Z:=\operatorname {span} \left\{f\otimes g:f\in X,g\in Y\right\}} of C S × T {\displaystyle \mathbb {C} ^{S\times T}} together with the bilinear map: X × Y → Z ( f , g ) ↦ f ⊗ g {\displaystyle {\begin{alignedat}{4}\;&&X\times Y&&\;\to \;&Z\\[0.3ex]&&(f,g)&&\;\mapsto \;&f\otimes g\\\end{alignedat}}} form a tensor product of X {\displaystyle X} and ⁠ Y {\displaystyle Y} ⁠. == Properties == === Dimension === If V and W are vector spaces of finite dimension, then V ⊗ W {\displaystyle V\otimes W} is finite-dimensional, and its dimension is the product of the dimensions of V and W. This results from the fact that a basis of V ⊗ W {\displaystyle V\otimes W} is formed by taking all tensor products of a basis element of V and a basis element of W. 
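The universal property can be checked concretely for the coordinate map T above. A minimal Python sketch with real coordinates (the names `h`, `h_tilde`, and `tensor` are illustrative assumptions): an arbitrary bilinear map h on R² × R² factors through the flattening map (x, y) ↦ (xᵢyⱼ), because the induced linear map h̃ is completely determined by the values h(eᵢ, eⱼ) on elementary tensors of basis vectors.

```python
def tensor(x, y):
    # The map T: coordinates of x (x) y, flattened row-major into a vector
    return [xi * yj for xi in x for yj in y]

def h(x, y):
    # An arbitrary bilinear map h : R^2 x R^2 -> R (a weighted dot product)
    return 2 * x[0] * y[0] + 3 * x[0] * y[1] - x[1] * y[0] + 5 * x[1] * y[1]

def h_tilde(t):
    # The induced *linear* map on V (x) W, determined by its values
    # h(e_i, e_j) on elementary tensors of standard basis vectors.
    e = [[1, 0], [0, 1]]
    values = [h(e[i], e[j]) for i in range(2) for j in range(2)]
    return sum(c * v for c, v in zip(t, values))

x, y = [1, 2], [3, 4]
print(h(x, y), h_tilde(tensor(x, y)))  # equal: h = h_tilde composed with (x)
```

The point of the universal property is exactly this factorization: the bilinear data of h is repackaged as linear data on the tensor product.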
=== Associativity === The tensor product is associative in the sense that, given three vector spaces ⁠ U , V , W {\displaystyle U,V,W} ⁠, there is a canonical isomorphism: ( U ⊗ V ) ⊗ W ≅ U ⊗ ( V ⊗ W ) , {\displaystyle (U\otimes V)\otimes W\cong U\otimes (V\otimes W),} that maps ( u ⊗ v ) ⊗ w {\displaystyle (u\otimes v)\otimes w} to ⁠ u ⊗ ( v ⊗ w ) {\displaystyle u\otimes (v\otimes w)} ⁠. This allows omitting parentheses in the tensor product of more than two vector spaces or vectors. === Commutativity as vector space operation === The tensor product of two vector spaces V {\displaystyle V} and W {\displaystyle W} is commutative in the sense that there is a canonical isomorphism: V ⊗ W ≅ W ⊗ V , {\displaystyle V\otimes W\cong W\otimes V,} that maps v ⊗ w {\displaystyle v\otimes w} to ⁠ w ⊗ v {\displaystyle w\otimes v} ⁠. On the other hand, even when ⁠ V = W {\displaystyle V=W} ⁠, the tensor product of vectors is not commutative; that is ⁠ v ⊗ w ≠ w ⊗ v {\displaystyle v\otimes w\neq w\otimes v} ⁠, in general. The map x ⊗ y ↦ y ⊗ x {\displaystyle x\otimes y\mapsto y\otimes x} from V ⊗ V {\displaystyle V\otimes V} to itself induces a linear automorphism that is called a braiding map. More generally and as usual (see tensor algebra), let V ⊗ n {\displaystyle V^{\otimes n}} denote the tensor product of n copies of the vector space V. For every permutation s of the first n positive integers, the map: x 1 ⊗ ⋯ ⊗ x n ↦ x s ( 1 ) ⊗ ⋯ ⊗ x s ( n ) {\displaystyle x_{1}\otimes \cdots \otimes x_{n}\mapsto x_{s(1)}\otimes \cdots \otimes x_{s(n)}} induces a linear automorphism of ⁠ V ⊗ n → V ⊗ n {\displaystyle V^{\otimes n}\to V^{\otimes n}} ⁠, which is called a braiding map. == Tensor product of linear maps == Given a linear map ⁠ f : U → V {\displaystyle f:U\to V} ⁠, and a vector space W, the tensor product: f ⊗ W : U ⊗ W → V ⊗ W {\displaystyle f\otimes W:U\otimes W\to V\otimes W} is the unique linear map such that: ( f ⊗ W ) ( u ⊗ w ) = f ( u ) ⊗ w . 
{\displaystyle (f\otimes W)(u\otimes w)=f(u)\otimes w.} The tensor product W ⊗ f {\displaystyle W\otimes f} is defined similarly. Given two linear maps f : U → V {\displaystyle f:U\to V} and ⁠ g : W → Z {\displaystyle g:W\to Z} ⁠, their tensor product: f ⊗ g : U ⊗ W → V ⊗ Z {\displaystyle f\otimes g:U\otimes W\to V\otimes Z} is the unique linear map that satisfies: ( f ⊗ g ) ( u ⊗ w ) = f ( u ) ⊗ g ( w ) . {\displaystyle (f\otimes g)(u\otimes w)=f(u)\otimes g(w).} One has: f ⊗ g = ( f ⊗ Z ) ∘ ( U ⊗ g ) = ( V ⊗ g ) ∘ ( f ⊗ W ) . {\displaystyle f\otimes g=(f\otimes Z)\circ (U\otimes g)=(V\otimes g)\circ (f\otimes W).} In terms of category theory, this means that the tensor product is a bifunctor from the category of vector spaces to itself. If f and g are both injective or surjective, then the same is true for all above defined linear maps. In particular, the tensor product with a vector space is an exact functor; this means that every exact sequence is mapped to an exact sequence (tensor products of modules do not transform injections into injections, but they are right exact functors). By choosing bases of all vector spaces involved, the linear maps f and g can be represented by matrices. Then, depending on how the tensor v ⊗ w {\displaystyle v\otimes w} is vectorized, the matrix describing the tensor product f ⊗ g {\displaystyle f\otimes g} is the Kronecker product of the two matrices. 
For example, if U, V, W, and Z above are all two-dimensional and bases have been fixed for all of them, and f and g are given by the matrices: A = [ a 1 , 1 a 1 , 2 a 2 , 1 a 2 , 2 ] , B = [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] , {\displaystyle A={\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}},\qquad B={\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}},} respectively, then the tensor product of these two matrices is: [ a 1 , 1 a 1 , 2 a 2 , 1 a 2 , 2 ] ⊗ [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] = [ a 1 , 1 [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] a 1 , 2 [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] a 2 , 1 [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] a 2 , 2 [ b 1 , 1 b 1 , 2 b 2 , 1 b 2 , 2 ] ] = [ a 1 , 1 b 1 , 1 a 1 , 1 b 1 , 2 a 1 , 2 b 1 , 1 a 1 , 2 b 1 , 2 a 1 , 1 b 2 , 1 a 1 , 1 b 2 , 2 a 1 , 2 b 2 , 1 a 1 , 2 b 2 , 2 a 2 , 1 b 1 , 1 a 2 , 1 b 1 , 2 a 2 , 2 b 1 , 1 a 2 , 2 b 1 , 2 a 2 , 1 b 2 , 1 a 2 , 1 b 2 , 2 a 2 , 2 b 2 , 1 a 2 , 2 b 2 , 2 ] . {\displaystyle {\begin{aligned}{\begin{bmatrix}a_{1,1}&a_{1,2}\\a_{2,1}&a_{2,2}\\\end{bmatrix}}\otimes {\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&={\begin{bmatrix}a_{1,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{1,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\[3pt]a_{2,1}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}&a_{2,2}{\begin{bmatrix}b_{1,1}&b_{1,2}\\b_{2,1}&b_{2,2}\\\end{bmatrix}}\\\end{bmatrix}}\\&={\begin{bmatrix}a_{1,1}b_{1,1}&a_{1,1}b_{1,2}&a_{1,2}b_{1,1}&a_{1,2}b_{1,2}\\a_{1,1}b_{2,1}&a_{1,1}b_{2,2}&a_{1,2}b_{2,1}&a_{1,2}b_{2,2}\\a_{2,1}b_{1,1}&a_{2,1}b_{1,2}&a_{2,2}b_{1,1}&a_{2,2}b_{1,2}\\a_{2,1}b_{2,1}&a_{2,1}b_{2,2}&a_{2,2}b_{2,1}&a_{2,2}b_{2,2}\\\end{bmatrix}}.\end{aligned}}} The resultant rank is at most 4, and thus the resultant dimension is 4; rank here denotes the tensor rank, i.e. the number of requisite indices (while the matrix rank counts the number of degrees of freedom in the resulting array).
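The Kronecker-product description can be verified numerically. A pure-Python sketch (the helper names `kron`, `matvec`, `vec_tensor`, and `trace` are illustrative) checks the defining property (A ⊗ B)(u ⊗ w) = (Au) ⊗ (Bw), the on-the-nose associativity and general non-commutativity discussed earlier, and the standard identity Tr(A ⊗ B) = Tr(A) · Tr(B).

```python
def kron(A, B):
    # Kronecker product of matrices (nested lists): the matrix of f (x) g
    # in the tensor-product basis, with row-major index ordering.
    return [[a * b for a in row_a for b in row_b] for row_a in A for row_b in B]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vec_tensor(u, w):
    # u (x) w flattened row-major, matching kron's index ordering
    return [a * b for a in u for b in w]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 6]]
C = [[2, 0], [1, 1]]
u, w = [1, -1], [2, 3]

# Defining property: (A (x) B)(u (x) w) = (A u) (x) (B w)
print(matvec(kron(A, B), vec_tensor(u, w)) == vec_tensor(matvec(A, u), matvec(B, w)))
# Associativity holds exactly; commutativity fails in general
print(kron(kron(A, B), C) == kron(A, kron(B, C)))
print(kron(A, B) == kron(B, A))
# Trace is multiplicative over the Kronecker product
print(trace(kron(A, B)) == trace(A) * trace(B))
```

The vectorization convention matters: `vec_tensor` must use the same row-major ordering as `kron`, otherwise the defining property appears to fail by a permutation of coordinates.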
⁠ Tr ⁡ A ⊗ B = Tr ⁡ A × Tr ⁡ B {\displaystyle \operatorname {Tr} A\otimes B=\operatorname {Tr} A\times \operatorname {Tr} B} ⁠. A dyadic product is the special case of the tensor product between two vectors of the same dimension. == General tensors == For non-negative integers r and s a type ( r , s ) {\displaystyle (r,s)} tensor on a vector space V is an element of: T s r ( V ) = V ⊗ ⋯ ⊗ V ⏟ r ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ s = V ⊗ r ⊗ ( V ∗ ) ⊗ s . {\displaystyle T_{s}^{r}(V)=\underbrace {V\otimes \cdots \otimes V} _{r}\otimes \underbrace {V^{*}\otimes \cdots \otimes V^{*}} _{s}=V^{\otimes r}\otimes \left(V^{*}\right)^{\otimes s}.} Here V ∗ {\displaystyle V^{*}} is the dual vector space (which consists of all linear maps f from V to the ground field K). There is a product map, called the (tensor) product of tensors: T s r ( V ) ⊗ K T s ′ r ′ ( V ) → T s + s ′ r + r ′ ( V ) . {\displaystyle T_{s}^{r}(V)\otimes _{K}T_{s'}^{r'}(V)\to T_{s+s'}^{r+r'}(V).} It is defined by grouping all occurring "factors" V together: writing v i {\displaystyle v_{i}} for an element of V and f i {\displaystyle f_{i}} for an element of the dual space: ( v 1 ⊗ f 1 ) ⊗ ( v 1 ′ ) = v 1 ⊗ v 1 ′ ⊗ f 1 . {\displaystyle (v_{1}\otimes f_{1})\otimes (v'_{1})=v_{1}\otimes v'_{1}\otimes f_{1}.} If V is finite dimensional, then picking a basis of V and the corresponding dual basis of V ∗ {\displaystyle V^{*}} naturally induces a basis of T s r ( V ) {\displaystyle T_{s}^{r}(V)} (this basis is described in the article on Kronecker products). In terms of these bases, the components of a (tensor) product of two (or more) tensors can be computed. For example, if F and G are two covariant tensors of orders m and n respectively (i.e. F ∈ T m 0 {\displaystyle F\in T_{m}^{0}} and ⁠ G ∈ T n 0 {\displaystyle G\in T_{n}^{0}} ⁠), then the components of their tensor product are given by: ( F ⊗ G ) i 1 i 2 ⋯ i m + n = F i 1 i 2 ⋯ i m G i m + 1 i m + 2 i m + 3 ⋯ i m + n . 
{\displaystyle (F\otimes G)_{i_{1}i_{2}\cdots i_{m+n}}=F_{i_{1}i_{2}\cdots i_{m}}G_{i_{m+1}i_{m+2}i_{m+3}\cdots i_{m+n}}.} Thus, the components of the tensor product of two tensors are the ordinary product of the components of each tensor. Another example: let U be a tensor of type (1, 1) with components ⁠ U β α {\displaystyle U_{\beta }^{\alpha }} ⁠, and let V be a tensor of type ( 1 , 0 ) {\displaystyle (1,0)} with components ⁠ V γ {\displaystyle V^{\gamma }} ⁠. Then: ( U ⊗ V ) α β γ = U α β V γ {\displaystyle \left(U\otimes V\right)^{\alpha }{}_{\beta }{}^{\gamma }=U^{\alpha }{}_{\beta }V^{\gamma }} and: ( V ⊗ U ) μ ν σ = V μ U ν σ . {\displaystyle (V\otimes U)^{\mu \nu }{}_{\sigma }=V^{\mu }U^{\nu }{}_{\sigma }.} Tensors equipped with their product operation form an algebra, called the tensor algebra. === Evaluation map and tensor contraction === For tensors of type (1, 1) there is a canonical evaluation map: V ⊗ V ∗ → K {\displaystyle V\otimes V^{*}\to K} defined by its action on pure tensors: v ⊗ f ↦ f ( v ) . {\displaystyle v\otimes f\mapsto f(v).} More generally, for tensors of type ⁠ ( r , s ) {\displaystyle (r,s)} ⁠, with r, s > 0, there is a map, called tensor contraction: T s r ( V ) → T s − 1 r − 1 ( V ) . {\displaystyle T_{s}^{r}(V)\to T_{s-1}^{r-1}(V).} (The copies of V {\displaystyle V} and V ∗ {\displaystyle V^{*}} on which this map is to be applied must be specified.) On the other hand, if V {\displaystyle V} is finite-dimensional, there is a canonical map in the other direction (called the coevaluation map): { K → V ⊗ V ∗ λ ↦ ∑ i λ v i ⊗ v i ∗ {\displaystyle {\begin{cases}K\to V\otimes V^{*}\\\lambda \mapsto \sum _{i}\lambda v_{i}\otimes v_{i}^{*}\end{cases}}} where v 1 , … , v n {\displaystyle v_{1},\ldots ,v_{n}} is any basis of ⁠ V {\displaystyle V} ⁠, and v i ∗ {\displaystyle v_{i}^{*}} is its dual basis. This map does not depend on the choice of basis. 
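The evaluation map and tensor contraction are easy to realize in components. A small sketch (names are illustrative): for a type-(1, 1) tensor stored as a matrix of components Tⁱⱼ, contraction sums the diagonal, and on a pure tensor v ⊗ f it returns exactly f(v), the evaluation map extended linearly.

```python
def contract(T):
    # Contraction of a type-(1,1) tensor T^i_j over its two slots: sum_i T^i_i.
    # On a pure tensor v (x) f this returns f(v), i.e. the evaluation map.
    return sum(T[i][i] for i in range(len(T)))

v = [2, 3]    # element of V in coordinates
f = [5, -1]   # element of V* in the dual basis
T = [[vi * fj for fj in f] for vi in v]      # components of the pure tensor v (x) f
print(contract(T))                           # 7
print(sum(fi * vi for fi, vi in zip(f, v)))  # f(v) = 5*2 + (-1)*3 = 7, same number
```

Under the identification of type-(1, 1) tensors with endomorphisms, this contraction is precisely the trace of the corresponding matrix.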
The interplay of evaluation and coevaluation can be used to characterize finite-dimensional vector spaces without referring to bases. === Adjoint representation === The tensor product T s r ( V ) {\displaystyle T_{s}^{r}(V)} may be naturally viewed as a module for the Lie algebra E n d ( V ) {\displaystyle \mathrm {End} (V)} by means of the diagonal action: for simplicity let us assume ⁠ r = s = 1 {\displaystyle r=s=1} ⁠, then, for each ⁠ u ∈ E n d ( V ) {\displaystyle u\in \mathrm {End} (V)} ⁠, u ( a ⊗ b ) = u ( a ) ⊗ b − a ⊗ u ∗ ( b ) , {\displaystyle u(a\otimes b)=u(a)\otimes b-a\otimes u^{*}(b),} where u ∗ ∈ E n d ( V ∗ ) {\displaystyle u^{*}\in \mathrm {End} \left(V^{*}\right)} is the transpose of u, that is, in terms of the obvious pairing on ⁠ V ⊗ V ∗ {\displaystyle V\otimes V^{*}} ⁠, ⟨ u ( a ) , b ⟩ = ⟨ a , u ∗ ( b ) ⟩ . {\displaystyle \langle u(a),b\rangle =\langle a,u^{*}(b)\rangle .} There is a canonical isomorphism T 1 1 ( V ) → E n d ( V ) {\displaystyle T_{1}^{1}(V)\to \mathrm {End} (V)} given by: ( a ⊗ b ) ( x ) = ⟨ x , b ⟩ a . {\displaystyle (a\otimes b)(x)=\langle x,b\rangle a.} Under this isomorphism, every u in E n d ( V ) {\displaystyle \mathrm {End} (V)} may be first viewed as an endomorphism of T 1 1 ( V ) {\displaystyle T_{1}^{1}(V)} and then viewed as an endomorphism of ⁠ E n d ( V ) {\displaystyle \mathrm {End} (V)} ⁠. In fact it is the adjoint representation ad(u) of ⁠ E n d ( V ) {\displaystyle \mathrm {End} (V)} ⁠. == Linear maps as tensors == Given two finite dimensional vector spaces U, V over the same field K, denote the dual space of U as U*, and the K-vector space of all linear maps from U to V as Hom(U,V). There is an isomorphism: U ∗ ⊗ V ≅ H o m ( U , V ) , {\displaystyle U^{*}\otimes V\cong \mathrm {Hom} (U,V),} defined by an action of the pure tensor f ⊗ v ∈ U ∗ ⊗ V {\displaystyle f\otimes v\in U^{*}\otimes V} on an element of ⁠ U {\displaystyle U} ⁠, ( f ⊗ v ) ( u ) = f ( u ) v . 
{\displaystyle (f\otimes v)(u)=f(u)v.} Its "inverse" can be defined using a basis { u i } {\displaystyle \{u_{i}\}} and its dual basis { u i ∗ } {\displaystyle \{u_{i}^{*}\}} as in the section "Evaluation map and tensor contraction" above: { H o m ( U , V ) → U ∗ ⊗ V F ↦ ∑ i u i ∗ ⊗ F ( u i ) . {\displaystyle {\begin{cases}\mathrm {Hom} (U,V)\to U^{*}\otimes V\\F\mapsto \sum _{i}u_{i}^{*}\otimes F(u_{i}).\end{cases}}} This result implies: dim ⁡ ( U ⊗ V ) = dim ⁡ ( U ) dim ⁡ ( V ) , {\displaystyle \dim(U\otimes V)=\dim(U)\dim(V),} which automatically gives the important fact that { u i ⊗ v j } {\displaystyle \{u_{i}\otimes v_{j}\}} forms a basis of U ⊗ V {\displaystyle U\otimes V} where { u i } , { v j } {\displaystyle \{u_{i}\},\{v_{j}\}} are bases of U and V. Furthermore, given three vector spaces U, V, W the tensor product is linked to the vector space of all linear maps, as follows: H o m ( U ⊗ V , W ) ≅ H o m ( U , H o m ( V , W ) ) . {\displaystyle \mathrm {Hom} (U\otimes V,W)\cong \mathrm {Hom} (U,\mathrm {Hom} (V,W)).} This is an example of adjoint functors: the tensor product is "left adjoint" to Hom. == Tensor products of modules over a ring == The tensor product of two modules A and B over a commutative ring R is defined in exactly the same way as the tensor product of vector spaces over a field: A ⊗ R B := F ( A × B ) / G , {\displaystyle A\otimes _{R}B:=F(A\times B)/G,} where now F ( A × B ) {\displaystyle F(A\times B)} is the free R-module generated by the cartesian product and G is the R-module generated by these relations. More generally, the tensor product can be defined even if the ring is non-commutative. In this case A has to be a right-R-module and B is a left-R-module, and instead of the last two relations above, the relation: ( a r , b ) ∼ ( a , r b ) {\displaystyle (ar,b)\sim (a,rb)} is imposed. If R is non-commutative, this is no longer an R-module, but just an abelian group. 
The universal property also carries over, slightly modified: the map φ : A × B → A ⊗ R B {\displaystyle \varphi :A\times B\to A\otimes _{R}B} defined by ( a , b ) ↦ a ⊗ b {\displaystyle (a,b)\mapsto a\otimes b} is a middle linear map (referred to as "the canonical middle linear map"); that is, it satisfies: φ ( a + a ′ , b ) = φ ( a , b ) + φ ( a ′ , b ) φ ( a , b + b ′ ) = φ ( a , b ) + φ ( a , b ′ ) φ ( a r , b ) = φ ( a , r b ) {\displaystyle {\begin{aligned}\varphi (a+a',b)&=\varphi (a,b)+\varphi (a',b)\\\varphi (a,b+b')&=\varphi (a,b)+\varphi (a,b')\\\varphi (ar,b)&=\varphi (a,rb)\end{aligned}}} The first two properties make φ a bilinear map of the abelian group ⁠ A × B {\displaystyle A\times B} ⁠. For any middle linear map ψ {\displaystyle \psi } of ⁠ A × B {\displaystyle A\times B} ⁠, a unique group homomorphism f of A ⊗ R B {\displaystyle A\otimes _{R}B} satisfies ⁠ ψ = f ∘ φ {\displaystyle \psi =f\circ \varphi } ⁠, and this property determines φ {\displaystyle \varphi } within group isomorphism. See the main article for details. === Tensor product of modules over a non-commutative ring === Let A be a right R-module and B be a left R-module. Then the tensor product of A and B is an abelian group defined by: A ⊗ R B := F ( A × B ) / G {\displaystyle A\otimes _{R}B:=F(A\times B)/G} where F ( A × B ) {\displaystyle F(A\times B)} is a free abelian group over A × B {\displaystyle A\times B} and G is the subgroup of F ( A × B ) {\displaystyle F(A\times B)} generated by relations: ∀ a , a 1 , a 2 ∈ A , ∀ b , b 1 , b 2 ∈ B , for all r ∈ R : ( a 1 , b ) + ( a 2 , b ) − ( a 1 + a 2 , b ) , ( a , b 1 ) + ( a , b 2 ) − ( a , b 1 + b 2 ) , ( a r , b ) − ( a , r b ) . {\displaystyle {\begin{aligned}&\forall a,a_{1},a_{2}\in A,\forall b,b_{1},b_{2}\in B,{\text{ for all }}r\in R:\\&(a_{1},b)+(a_{2},b)-(a_{1}+a_{2},b),\\&(a,b_{1})+(a,b_{2})-(a,b_{1}+b_{2}),\\&(ar,b)-(a,rb).\\\end{aligned}}} The universal property can be stated as follows. 
Let G be an abelian group with a map q : A × B → G {\displaystyle q:A\times B\to G} that is bilinear, in the sense that: q ( a 1 + a 2 , b ) = q ( a 1 , b ) + q ( a 2 , b ) , q ( a , b 1 + b 2 ) = q ( a , b 1 ) + q ( a , b 2 ) , q ( a r , b ) = q ( a , r b ) . {\displaystyle {\begin{aligned}q(a_{1}+a_{2},b)&=q(a_{1},b)+q(a_{2},b),\\q(a,b_{1}+b_{2})&=q(a,b_{1})+q(a,b_{2}),\\q(ar,b)&=q(a,rb).\end{aligned}}} Then there is a unique map q ¯ : A ⊗ B → G {\displaystyle {\overline {q}}:A\otimes B\to G} such that q ¯ ( a ⊗ b ) = q ( a , b ) {\displaystyle {\overline {q}}(a\otimes b)=q(a,b)} for all a ∈ A {\displaystyle a\in A} and ⁠ b ∈ B {\displaystyle b\in B} ⁠. Furthermore, we can give A ⊗ R B {\displaystyle A\otimes _{R}B} a module structure under some extra conditions: If A is an (S,R)-bimodule, then A ⊗ R B {\displaystyle A\otimes _{R}B} is a left S-module, where ⁠ s ( a ⊗ b ) := ( s a ) ⊗ b {\displaystyle s(a\otimes b):=(sa)\otimes b} ⁠. If B is an (R,S)-bimodule, then A ⊗ R B {\displaystyle A\otimes _{R}B} is a right S-module, where ⁠ ( a ⊗ b ) s := a ⊗ ( b s ) {\displaystyle (a\otimes b)s:=a\otimes (bs)} ⁠. If A is an (S,R)-bimodule and B is an (R,T)-bimodule, then A ⊗ R B {\displaystyle A\otimes _{R}B} is an (S,T)-bimodule, where the left and right actions are defined in the same way as the previous two examples. If R is a commutative ring, then A and B are (R,R)-bimodules where r a := a r {\displaystyle ra:=ar} and ⁠ b r := r b {\displaystyle br:=rb} ⁠. By 3), we can conclude A ⊗ R B {\displaystyle A\otimes _{R}B} is an (R,R)-bimodule. === Computing the tensor product === For vector spaces, the tensor product V ⊗ W {\displaystyle V\otimes W} is quickly computed since bases of V and W immediately determine a basis of ⁠ V ⊗ W {\displaystyle V\otimes W} ⁠, as was mentioned above. For modules over a general (commutative) ring, not every module is free. For example, Z/nZ is not a free abelian group (Z-module).
The tensor product with Z/nZ is given by: M ⊗ Z Z / n Z = M / n M . {\displaystyle M\otimes _{\mathbf {Z} }\mathbf {Z} /n\mathbf {Z} =M/nM.} More generally, given a presentation of some R-module M, that is, a number of generators m i ∈ M , i ∈ I {\displaystyle m_{i}\in M,i\in I} together with relations: ∑ i ∈ I a j i m i = 0 , j ∈ J , a j i ∈ R , {\displaystyle \sum _{i\in I}a_{ji}m_{i}=0,\qquad j\in J,\quad a_{ji}\in R,} the tensor product can be computed as the following cokernel: M ⊗ R N = coker ⁡ ( N J → N I ) {\displaystyle M\otimes _{R}N=\operatorname {coker} \left(N^{J}\to N^{I}\right)} Here ⁠ N J = ⊕ j ∈ J N {\displaystyle N^{J}=\oplus _{j\in J}N} ⁠, and the map N J → N I {\displaystyle N^{J}\to N^{I}} is determined by sending some n ∈ N {\displaystyle n\in N} in the jth copy of N J {\displaystyle N^{J}} to the element with ith component a j i n {\displaystyle a_{ji}n} (in ⁠ N I {\displaystyle N^{I}} ⁠). Colloquially, this may be rephrased by saying that a presentation of M gives rise to a presentation of ⁠ M ⊗ R N {\displaystyle M\otimes _{R}N} ⁠. This is referred to by saying that the tensor product is a right exact functor. It is not in general left exact, that is, given an injective map of R-modules ⁠ M 1 → M 2 {\displaystyle M_{1}\to M_{2}} ⁠, the tensor product: M 1 ⊗ R N → M 2 ⊗ R N {\displaystyle M_{1}\otimes _{R}N\to M_{2}\otimes _{R}N} is not usually injective. For example, tensoring the (injective) map given by multiplication with n, n : Z → Z with Z/nZ yields the zero map 0 : Z/nZ → Z/nZ, which is not injective. Higher Tor functors measure the defect of the tensor product failing to be left exact. All higher Tor functors are assembled in the derived tensor product. == Tensor product of algebras == Let R be a commutative ring. The tensor product of R-modules applies, in particular, if A and B are R-algebras. In this case, the tensor product A ⊗ R B {\displaystyle A\otimes _{R}B} is an R-algebra itself by putting: ( a 1 ⊗ b 1 ) ⋅ ( a 2 ⊗ b 2 ) = ( a 1 ⋅ a 2 ) ⊗ ( b 1 ⋅ b 2 ) .
{\displaystyle (a_{1}\otimes b_{1})\cdot (a_{2}\otimes b_{2})=(a_{1}\cdot a_{2})\otimes (b_{1}\cdot b_{2}).} For example: R [ x ] ⊗ R R [ y ] ≅ R [ x , y ] . {\displaystyle R[x]\otimes _{R}R[y]\cong R[x,y].} A particular example is when A and B are fields containing a common subfield R. The tensor product of fields is closely related to Galois theory: if, say, A = R[x] / f(x), where f is some irreducible polynomial with coefficients in R, the tensor product can be calculated as: A ⊗ R B ≅ B [ x ] / f ( x ) {\displaystyle A\otimes _{R}B\cong B[x]/f(x)} where now f is interpreted as the same polynomial, but with its coefficients regarded as elements of B. In the larger field B, the polynomial may become reducible, which brings in Galois theory. For example, if A = B is a Galois extension of R, then: A ⊗ R A ≅ A [ x ] / f ( x ) {\displaystyle A\otimes _{R}A\cong A[x]/f(x)} is isomorphic (as an A-algebra) to ⁠ A deg ⁡ ( f ) {\displaystyle A^{\operatorname {deg} (f)}} ⁠. == Eigenconfigurations of tensors == Square matrices A {\displaystyle A} with entries in a field K {\displaystyle K} represent linear maps of vector spaces, say ⁠ K n → K n {\displaystyle K^{n}\to K^{n}} ⁠, and thus linear maps ψ : P n − 1 → P n − 1 {\displaystyle \psi :\mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}} of projective spaces over ⁠ K {\displaystyle K} ⁠. If A {\displaystyle A} is nonsingular then ψ {\displaystyle \psi } is well-defined everywhere, and the eigenvectors of A {\displaystyle A} correspond to the fixed points of ⁠ ψ {\displaystyle \psi } ⁠. The eigenconfiguration of A {\displaystyle A} consists of n {\displaystyle n} points in ⁠ P n − 1 {\displaystyle \mathbb {P} ^{n-1}} ⁠, provided A {\displaystyle A} is generic and K {\displaystyle K} is algebraically closed. The fixed points of nonlinear maps are the eigenvectors of tensors.
Let A = ( a i 1 i 2 ⋯ i d ) {\displaystyle A=(a_{i_{1}i_{2}\cdots i_{d}})} be a d {\displaystyle d} -dimensional tensor of format n × n × ⋯ × n {\displaystyle n\times n\times \cdots \times n} with entries ( a i 1 i 2 ⋯ i d ) {\displaystyle (a_{i_{1}i_{2}\cdots i_{d}})} lying in an algebraically closed field K {\displaystyle K} of characteristic zero. Such a tensor A ∈ ( K n ) ⊗ d {\displaystyle A\in (K^{n})^{\otimes d}} defines polynomial maps K n → K n {\displaystyle K^{n}\to K^{n}} and P n − 1 → P n − 1 {\displaystyle \mathbb {P} ^{n-1}\to \mathbb {P} ^{n-1}} with coordinates: ψ i ( x 1 , … , x n ) = ∑ j 2 = 1 n ∑ j 3 = 1 n ⋯ ∑ j d = 1 n a i j 2 j 3 ⋯ j d x j 2 x j 3 ⋯ x j d for i = 1 , … , n {\displaystyle \psi _{i}(x_{1},\ldots ,x_{n})=\sum _{j_{2}=1}^{n}\sum _{j_{3}=1}^{n}\cdots \sum _{j_{d}=1}^{n}a_{ij_{2}j_{3}\cdots j_{d}}x_{j_{2}}x_{j_{3}}\cdots x_{j_{d}}\;\;{\mbox{for }}i=1,\ldots ,n} Thus each of the n {\displaystyle n} coordinates of ψ {\displaystyle \psi } is a homogeneous polynomial ψ i {\displaystyle \psi _{i}} of degree d − 1 {\displaystyle d-1} in ⁠ x = ( x 1 , … , x n ) {\displaystyle \mathbf {x} =\left(x_{1},\ldots ,x_{n}\right)} ⁠. The eigenvectors of A {\displaystyle A} are the solutions of the constraint: rank ( x 1 x 2 ⋯ x n ψ 1 ( x ) ψ 2 ( x ) ⋯ ψ n ( x ) ) ≤ 1 {\displaystyle {\mbox{rank}}{\begin{pmatrix}x_{1}&x_{2}&\cdots &x_{n}\\\psi _{1}(\mathbf {x} )&\psi _{2}(\mathbf {x} )&\cdots &\psi _{n}(\mathbf {x} )\end{pmatrix}}\leq 1} and the eigenconfiguration is given by the variety of the 2 × 2 {\displaystyle 2\times 2} minors of this matrix. == Other examples of tensor products == === Topological tensor products === Hilbert spaces generalize finite-dimensional vector spaces to arbitrary dimensions. There is an analogous operation, also called the "tensor product," that makes Hilbert spaces a symmetric monoidal category. It is essentially constructed as the metric space completion of the algebraic tensor product discussed above. 
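A small numerical illustration of the two facts above: each coordinate ψ_i is homogeneous of degree d − 1, and for d = 2 (when ψ is linear) the rank-≤ 1 condition recovers ordinary eigenvectors. The tensor entries here are random and the matrix is a made-up example; this is a NumPy sketch, not part of the formal development:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 2, 3
T = rng.standard_normal((n, n, n))        # an order-3 tensor (d = 3)

def psi(x):
    # ψ_i(x) = Σ_{j,k} a_{ijk} x_j x_k — the coordinate maps defined above
    return np.einsum('ijk,j,k->i', T, x, x)

x = rng.standard_normal(n)
t = 1.7
# each ψ_i is homogeneous of degree d − 1 = 2
assert np.allclose(psi(t * x), t ** (d - 1) * psi(x))

# For d = 2 the map is linear, ψ(x) = A x, and the rank-≤1 condition on
# the matrix [x; ψ(x)] picks out ordinary eigenvectors:
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])                  # eigenvector of A (eigenvalue 3)
M = np.vstack([v, A @ v])
assert np.linalg.matrix_rank(M) == 1      # rank condition holds

y = np.array([1.0, 0.0])                  # not an eigenvector of this A
assert np.linalg.matrix_rank(np.vstack([y, A @ y])) == 2
```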
However, it does not satisfy the obvious analogue of the universal property defining tensor products; the morphisms for that property must be restricted to Hilbert–Schmidt operators. In situations where the imposition of an inner product is inappropriate, one can still attempt to complete the algebraic tensor product, as a topological tensor product. However, such a construction is no longer uniquely specified: in many cases, there are multiple natural topologies on the algebraic tensor product. === Tensor product of graded vector spaces === Some vector spaces can be decomposed into direct sums of subspaces. In such cases, the tensor product of two spaces can be decomposed into sums of products of the subspaces (in analogy to the way that multiplication distributes over addition). === Tensor product of representations === Two representations of a group on vector spaces can be combined into a representation on the tensor product of those spaces, with the group acting diagonally on the two factors. For representations of the general linear and symmetric groups, the decomposition of such a tensor product into irreducible representations is described by the Littlewood–Richardson rule. === Tensor product of quadratic forms === === Tensor product of multilinear forms === Given two multilinear forms f ( x 1 , … , x k ) {\displaystyle f(x_{1},\dots ,x_{k})} and g ( x 1 , … , x m ) {\displaystyle g(x_{1},\dots ,x_{m})} on a vector space V {\displaystyle V} over the field K {\displaystyle K} their tensor product is the multilinear form: ( f ⊗ g ) ( x 1 , … , x k + m ) = f ( x 1 , … , x k ) g ( x k + 1 , … , x k + m ) . {\displaystyle (f\otimes g)(x_{1},\dots ,x_{k+m})=f(x_{1},\dots ,x_{k})g(x_{k+1},\dots ,x_{k+m}).} This is a special case of the product of tensors if they are seen as multilinear maps (see also tensors as multilinear maps). Thus the components of the tensor product of multilinear forms can be computed by the Kronecker product.
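In coordinates, the claim that the components of f ⊗ g are computed by the Kronecker (outer) product of the component arrays can be checked directly. Here f is a bilinear form (k = 2) and g a linear form (m = 1) with random illustrative components:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
F = rng.standard_normal((n, n))      # components of a bilinear form f (k = 2)
G = rng.standard_normal(n)           # components of a linear form g (m = 1)

# Components of f ⊗ g: the outer (Kronecker-style) product of the arrays.
FG = np.multiply.outer(F, G)         # shape (n, n, n)

x1, x2, x3 = rng.standard_normal((3, n))
lhs = np.einsum('ijk,i,j,k->', FG, x1, x2, x3)    # (f ⊗ g)(x1, x2, x3)
rhs = np.einsum('ij,i,j->', F, x1, x2) * (G @ x3) # f(x1, x2) · g(x3)
assert np.isclose(lhs, rhs)
```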
=== Tensor product of sheaves of modules === === Tensor product of line bundles === === Tensor product of fields === === Tensor product of graphs === It should be mentioned that, though called "tensor product", this is not a tensor product of graphs in the above sense; actually it is the category-theoretic product in the category of graphs and graph homomorphisms. However it is actually the Kronecker tensor product of the adjacency matrices of the graphs. Compare also the section Tensor product of linear maps above. === Monoidal categories === The most general setting for the tensor product is the monoidal category. It captures the algebraic essence of tensoring, without making any specific reference to what is being tensored. Thus, all tensor products can be expressed as an application of the monoidal category to some particular setting, acting on some particular objects. == Quotient algebras == A number of important subspaces of the tensor algebra can be constructed as quotients: these include the exterior algebra, the symmetric algebra, the Clifford algebra, the Weyl algebra, and the universal enveloping algebra in general. The exterior algebra is constructed from the exterior product. Given a vector space V, the exterior product V ∧ V {\displaystyle V\wedge V} is defined as: V ∧ V := V ⊗ V / { v ⊗ v ∣ v ∈ V } . {\displaystyle V\wedge V:=V\otimes V{\big /}\{v\otimes v\mid v\in V\}.} When the underlying field of V does not have characteristic 2, then this definition is equivalent to: V ∧ V := V ⊗ V / { v 1 ⊗ v 2 + v 2 ⊗ v 1 ∣ ( v 1 , v 2 ) ∈ V 2 } . {\displaystyle V\wedge V:=V\otimes V{\big /}{\bigl \{}v_{1}\otimes v_{2}+v_{2}\otimes v_{1}\mid (v_{1},v_{2})\in V^{2}{\bigr \}}.} The image of v 1 ⊗ v 2 {\displaystyle v_{1}\otimes v_{2}} in the exterior product is usually denoted v 1 ∧ v 2 {\displaystyle v_{1}\wedge v_{2}} and satisfies, by construction, ⁠ v 1 ∧ v 2 = − v 2 ∧ v 1 {\displaystyle v_{1}\wedge v_{2}=-v_{2}\wedge v_{1}} ⁠. 
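The antisymmetry v1 ∧ v2 = −v2 ∧ v1 can be seen on representatives: in coordinates, a ⊗ b − b ⊗ a represents a ∧ b in the quotient defining the exterior product. A brief NumPy sketch (the vectors are random illustrative data):

```python
import numpy as np

def wedge_rep(a, b):
    # representative of a ∧ b in V ⊗ V: a ⊗ b − b ⊗ a
    return np.outer(a, b) - np.outer(b, a)

rng = np.random.default_rng(1)
v1, v2 = rng.standard_normal((2, 3))

wedge = wedge_rep(v1, v2)
assert np.allclose(wedge, -wedge_rep(v2, v1))   # v1 ∧ v2 = −v2 ∧ v1
assert np.allclose(wedge_rep(v1, v1), 0)        # v ∧ v = 0
assert np.allclose(wedge, -wedge.T)             # the representative is antisymmetric
```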
Similar constructions are possible for V ⊗ ⋯ ⊗ V {\displaystyle V\otimes \dots \otimes V} (n factors), giving rise to ⁠ Λ n V {\displaystyle \Lambda ^{n}V} ⁠, the nth exterior power of V. The latter notion is the basis of differential n-forms. The symmetric algebra is constructed in a similar manner, from the symmetric product: V ⊙ V := V ⊗ V / { v 1 ⊗ v 2 − v 2 ⊗ v 1 ∣ ( v 1 , v 2 ) ∈ V 2 } . {\displaystyle V\odot V:=V\otimes V{\big /}{\bigl \{}v_{1}\otimes v_{2}-v_{2}\otimes v_{1}\mid (v_{1},v_{2})\in V^{2}{\bigr \}}.} More generally: Sym n ⁡ V := V ⊗ ⋯ ⊗ V ⏟ n / ( ⋯ ⊗ v i ⊗ v i + 1 ⊗ ⋯ − ⋯ ⊗ v i + 1 ⊗ v i ⊗ … ) {\displaystyle \operatorname {Sym} ^{n}V:=\underbrace {V\otimes \dots \otimes V} _{n}{\big /}(\dots \otimes v_{i}\otimes v_{i+1}\otimes \dots -\dots \otimes v_{i+1}\otimes v_{i}\otimes \dots )} That is, in the symmetric algebra two adjacent vectors (and therefore all of them) can be interchanged. The resulting objects are called symmetric tensors. == Tensor product in programming == === Array programming languages === Array programming languages may have this pattern built in. For example, in APL the tensor product is expressed as ○.× (for example A ○.× B or A ○.× B ○.× C). In J the tensor product is the dyadic form of */ (for example a */ b or a */ b */ c). J's treatment also allows the representation of some tensor fields, as a and b may be functions instead of constants. This product of two functions is a derived function, and if a and b are differentiable, then a */ b is differentiable. However, these kinds of notation are not universally present in array languages. Other array languages may require explicit treatment of indices (for example, MATLAB), and/or may not support higher-order functions such as the Jacobian derivative (for example, Fortran/APL). 
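For readers outside the APL/J world, the same pattern is available in NumPy; shown here as an illustrative analogue of A ∘.× B and a */ b */ c, not a claim about the APL or J semantics:

```python
import numpy as np

# np.multiply.outer forms the tensor (outer) product of arrays, and
# iterating it mirrors the chained products above.
a = np.array([1, 2, 3])
b = np.array([10, 20])
c = np.array([1, -1])

ab = np.multiply.outer(a, b)        # shape (3, 2), entries a[i] * b[j]
abc = np.multiply.outer(ab, c)      # shape (3, 2, 2)

assert ab[2, 1] == a[2] * b[1]
assert abc[1, 0, 1] == a[1] * b[0] * c[1]
```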
== See also == Dyadics – Second order tensor in vector algebra Extension of scalars Monoidal category – Category admitting tensor products Tensor algebra – Universal construction in multilinear algebra Tensor contraction – Operation in mathematics Topological tensor product – Tensor product constructions for topological vector spaces == Notes == == References ==
Wikipedia/Tensor_product
In physics and materials science, elasticity is the ability of a body to resist a distorting influence and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate loads are applied to them; if the material is elastic, the object will return to its initial shape and size after removal. This is in contrast to plasticity, in which the object fails to do so and instead remains in its deformed state. The physical reasons for elastic behavior can be quite different for different materials. In metals, the atomic lattice changes size and shape when forces are applied (energy is added to the system). When forces are removed, the lattice goes back to the original lower energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied. Hooke's law states that the force required to deform elastic objects should be directly proportional to the distance of deformation, regardless of how large that distance becomes. This is known as perfect elasticity, in which a given object will return to its original shape no matter how strongly it is deformed. This is an ideal concept only; most materials which possess elasticity in practice remain purely elastic only up to very small deformations, after which plastic (permanent) deformation occurs. In engineering, the elasticity of a material is quantified by the elastic modulus such as the Young's modulus, bulk modulus or shear modulus which measure the amount of stress needed to achieve a unit of strain; a higher modulus indicates that the material is harder to deform. The SI unit of this modulus is the pascal (Pa). The material's elastic limit or yield strength is the maximum stress that can arise before the onset of plastic deformation. Its SI unit is also the pascal (Pa). 
== Overview == When an elastic material is deformed due to an external force, it experiences internal resistance to the deformation and restores it to its original state if the external force is no longer applied. There are various elastic moduli, such as Young's modulus, the shear modulus, and the bulk modulus, all of which are measures of the inherent elastic properties of a material as a resistance to deformation under an applied load. The various moduli apply to different kinds of deformation. For instance, Young's modulus applies to extension/compression of a body, whereas the shear modulus applies to its shear. Young's modulus and shear modulus are only for solids, whereas the bulk modulus is for solids, liquids, and gases. The elasticity of materials is described by a stress–strain curve, which shows the relation between stress (the average restorative internal force per unit area) and strain (the relative deformation). The curve is generally nonlinear, but it can (by use of a Taylor series) be approximated as linear for sufficiently small deformations (in which higher-order terms are negligible). If the material is isotropic, the linearized stress–strain relationship is called Hooke's law, which is often presumed to apply up to the elastic limit for most metals or crystalline materials whereas nonlinear elasticity is generally required to model large deformations of rubbery materials even in the elastic range. For even higher stresses, materials exhibit plastic behavior, that is, they deform irreversibly and do not return to their original shape after stress is no longer applied. For rubber-like materials such as elastomers, the slope of the stress–strain curve increases with stress, meaning that rubbers progressively become more difficult to stretch, while for most metals, the gradient decreases at very high stresses, meaning that they progressively become easier to stretch. 
Elasticity is not exhibited only by solids; non-Newtonian fluids, such as viscoelastic fluids, will also exhibit elasticity in certain conditions quantified by the Deborah number. In response to a small, rapidly applied and removed strain, these fluids may deform and then return to their original shape. Under larger strains, or strains applied for longer periods of time, these fluids may start to flow like a viscous liquid. Because the elasticity of a material is described in terms of a stress–strain relation, it is essential that the terms stress and strain be defined without ambiguity. Typically, two types of relation are considered. The first type deals with materials that are elastic only for small strains. The second deals with materials that are not limited to small strains. Clearly, the second type of relation is more general in the sense that it must include the first type as a special case. For small strains, the measure of stress that is used is the Cauchy stress while the measure of strain that is used is the infinitesimal strain tensor; the resulting (predicted) material behavior is termed linear elasticity, which (for isotropic media) is called the generalized Hooke's law. Cauchy elastic materials and hypoelastic materials are models that extend Hooke's law to allow for the possibility of large rotations, large distortions, and intrinsic or induced anisotropy. For more general situations, any of a number of stress measures can be used, and it is generally desired (but not required) that the elastic stress–strain relation be phrased in terms of a finite strain measure that is work conjugate to the selected stress measure, i.e., the time integral of the inner product of the stress measure with the rate of the strain measure should be equal to the change in internal energy for any adiabatic process that remains below the elastic limit. == Units == === International System === The SI unit for elasticity and the elastic modulus is the pascal (Pa). 
This unit is defined as force per unit area, generally a measurement of pressure, which in mechanics corresponds to stress. The pascal and therefore elasticity have the dimension L−1⋅M⋅T−2. For most commonly used engineering materials, the elastic modulus is on the scale of gigapascals (GPa, 109 Pa). == Linear elasticity == As noted above, for small deformations, most elastic materials such as springs exhibit linear elasticity and can be described by a linear relation between the stress and strain. This relationship is known as Hooke's law. A geometry-dependent version of the idea was first formulated by Robert Hooke in 1675 as a Latin anagram, "ceiiinosssttuv". He published the answer in 1678: "Ut tensio, sic vis" meaning "As the extension, so the force", a linear relationship commonly referred to as Hooke's law. This law can be stated as a relationship between tensile force F and corresponding extension displacement x {\displaystyle x} , F = k x , {\displaystyle F=kx,} where k is a constant known as the rate or spring constant. It can also be stated as a relationship between stress σ {\displaystyle \sigma } and strain ε {\displaystyle \varepsilon } : σ = E ε , {\displaystyle \sigma =E\varepsilon ,} where E is known as the Young's modulus. Although the general proportionality constant between stress and strain in three dimensions is a 4th-order tensor called stiffness, systems that exhibit symmetry, such as a one-dimensional rod, can often be reduced to applications of Hooke's law. == Finite elasticity == The elastic behavior of objects that undergo finite deformations has been described using a number of models, such as Cauchy elastic material models, Hypoelastic material models, and Hyperelastic material models. The deformation gradient (F) is the primary deformation measure used in finite strain theory. 
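The linear-elasticity relations above, F = kx and σ = Eε, are two views of the same law; for a uniform rod they are linked by k = EA/L. A numeric sketch with illustrative values (E ≈ 200 GPa, typical of steel, and the rod geometry are assumptions of the example):

```python
# Hooke's law in both of the forms given above, for a uniform rod.
E = 200e9          # Young's modulus, Pa (illustrative value for steel)
A = 1e-4           # cross-sectional area, m^2
L = 2.0            # rest length, m

k = E * A / L      # equivalent spring constant of the rod, N/m
x = 1e-3           # extension, m

F = k * x                  # force form, F = k x
strain = x / L             # ε
stress = E * strain        # σ = E ε
assert abs(stress * A - F) < 1e-9   # the two forms agree: σA = F
```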
=== Cauchy elastic materials === A material is said to be Cauchy-elastic if the Cauchy stress tensor σ is a function of the deformation gradient F alone: σ = G ( F ) {\displaystyle \ {\boldsymbol {\sigma }}={\mathcal {G}}({\boldsymbol {F}})} It is generally incorrect to state that Cauchy stress is a function of merely a strain tensor, as such a model lacks crucial information about material rotation needed to produce correct results for an anisotropic medium subjected to vertical extension in comparison to the same extension applied horizontally and then subjected to a 90-degree rotation; both these deformations have the same spatial strain tensors yet must produce different values of the Cauchy stress tensor. Even though the stress in a Cauchy-elastic material depends only on the state of deformation, the work done by stresses might depend on the path of deformation. Therefore, Cauchy elasticity includes non-conservative "non-hyperelastic" models (in which work of deformation is path dependent) as well as conservative "hyperelastic material" models (for which stress can be derived from a scalar "elastic potential" function). === Hypoelastic materials === A hypoelastic material can be rigorously defined as one that is modeled using a constitutive equation satisfying the following two criteria: The Cauchy stress σ {\displaystyle {\boldsymbol {\sigma }}} at time t {\displaystyle t} depends only on the order in which the body has occupied its past configurations, but not on the time rate at which these past configurations were traversed. As a special case, this criterion includes a Cauchy elastic material, for which the current stress depends only on the current configuration rather than the history of past configurations. 
There is a tensor-valued function G {\displaystyle G} such that σ ˙ = G ( σ , L ) , {\displaystyle {\dot {\boldsymbol {\sigma }}}=G({\boldsymbol {\sigma }},{\boldsymbol {L}})\,,} in which σ ˙ {\displaystyle {\dot {\boldsymbol {\sigma }}}} is the material rate of the Cauchy stress tensor, and L {\displaystyle {\boldsymbol {L}}} is the spatial velocity gradient tensor. If only these two original criteria are used to define hypoelasticity, then hyperelasticity would be included as a special case, which prompts some constitutive modelers to append a third criterion that specifically requires a hypoelastic model to not be hyperelastic (i.e., hypoelasticity implies that stress is not derivable from an energy potential). If this third criterion is adopted, it follows that a hypoelastic material might admit nonconservative adiabatic loading paths that start and end with the same deformation gradient but do not start and end at the same internal energy. Note that the second criterion requires only that the function G {\displaystyle G} exists. As detailed in the main hypoelastic material article, specific formulations of hypoelastic models typically employ so-called objective rates so that the G {\displaystyle G} function exists only implicitly and is typically needed explicitly only for numerical stress updates performed via direct integration of the actual (not objective) stress rate. === Hyperelastic materials === Hyperelastic materials (also called Green elastic materials) are conservative models that are derived from a strain energy density function (W). A model is hyperelastic if and only if it is possible to express the Cauchy stress tensor as a function of the deformation gradient via a relationship of the form σ = 1 J ∂ W ∂ F F T where J := det F . 
{\displaystyle {\boldsymbol {\sigma }}={\cfrac {1}{J}}~{\cfrac {\partial W}{\partial {\boldsymbol {F}}}}{\boldsymbol {F}}^{\textsf {T}}\quad {\text{where}}\quad J:=\det {\boldsymbol {F}}\,.} This formulation takes the energy potential (W) as a function of the deformation gradient ( F {\displaystyle {\boldsymbol {F}}} ). By also requiring satisfaction of material objectivity, the energy potential may be alternatively regarded as a function of the Cauchy-Green deformation tensor ( C := F T F {\displaystyle {\boldsymbol {C}}:={\boldsymbol {F}}^{\textsf {T}}{\boldsymbol {F}}} ), in which case the hyperelastic model may be written alternatively as σ = 2 J F ∂ W ∂ C F T where J := det F . {\displaystyle {\boldsymbol {\sigma }}={\cfrac {2}{J}}~{\boldsymbol {F}}{\cfrac {\partial W}{\partial {\boldsymbol {C}}}}{\boldsymbol {F}}^{\textsf {T}}\quad {\text{where}}\quad J:=\det {\boldsymbol {F}}\,.} == Applications == Linear elasticity is used widely in the design and analysis of structures such as beams, plates and shells, and sandwich composites. This theory is also the basis of much of fracture mechanics. Hyperelasticity is primarily used to determine the response of elastomer-based objects such as gaskets and of biological materials such as soft tissues and cell membranes. == Factors affecting elasticity == In a given isotropic solid, with known theoretical elasticity for the bulk material in terms of Young's modulus, the effective elasticity will be governed by porosity. Generally, a more porous material will exhibit lower stiffness. More specifically, the fraction of pores, their distribution at different sizes and the nature of the fluid with which they are filled give rise to different elastic behaviours in solids.
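Returning to the hyperelastic relation σ = (1/J)(∂W/∂F)Fᵀ above, it can be verified numerically for a specific choice of W. The compressible neo-Hookean energy used here is one common model choice, not the only one, and the Lamé-type parameters are illustrative; ∂W/∂F is approximated by central finite differences:

```python
import numpy as np

mu, lam = 1.0, 2.0   # illustrative material parameters

def W(F):
    # Compressible neo-Hookean energy (an assumed example of W):
    # W = μ/2 (tr(FᵀF) − 3) − μ ln J + λ/2 (ln J)²
    J = np.linalg.det(F)
    return (0.5 * mu * (np.trace(F.T @ F) - 3)
            - mu * np.log(J)
            + 0.5 * lam * np.log(J) ** 2)

def cauchy_stress_analytic(F):
    # Closed form obtained by differentiating W: σ = (μ(B − I) + λ ln J · I)/J
    J = np.linalg.det(F)
    B = F @ F.T
    I = np.eye(3)
    return (mu * (B - I) + lam * np.log(J) * I) / J

def cauchy_stress_numeric(F, h=1e-6):
    # σ = (1/J)(∂W/∂F)Fᵀ with ∂W/∂F by central differences
    dWdF = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            Fp, Fm = F.copy(), F.copy()
            Fp[i, j] += h
            Fm[i, j] -= h
            dWdF[i, j] = (W(Fp) - W(Fm)) / (2 * h)
    return dWdF @ F.T / np.linalg.det(F)

F = np.array([[1.1, 0.2, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.05]])   # an arbitrary deformation gradient, det F > 0
assert np.allclose(cauchy_stress_numeric(F), cauchy_stress_analytic(F), atol=1e-5)
```

The agreement between the finite-difference stress and the closed form is a consistency check on the general formula, not a statement about any particular material.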
For isotropic materials containing cracks, the presence of fractures affects the Young and the shear moduli perpendicular to the planes of the cracks, which decrease (Young's modulus faster than the shear modulus) as the fracture density increases, indicating that the presence of cracks makes bodies more brittle. Microscopically, the stress–strain relationship of materials is in general governed by the Helmholtz free energy, a thermodynamic quantity. Molecules settle in the configuration which minimizes the free energy, subject to constraints derived from their structure, and, depending on whether the energy or the entropy term dominates the free energy, materials can broadly be classified as energy-elastic and entropy-elastic. As such, microscopic factors affecting the free energy, such as the equilibrium distance between molecules, can affect the elasticity of materials: for instance, in inorganic materials, as the equilibrium distance between molecules at 0 K increases, the bulk modulus decreases. The effect of temperature on elasticity is difficult to isolate, because there are numerous factors affecting it. For instance, the bulk modulus of a material is dependent on the form of its lattice, its behavior under expansion, as well as the vibrations of the molecules, all of which are dependent on temperature. == See also == == Notes == == References == == External links == The Feynman Lectures on Physics Vol. II Ch. 38: Elasticity
Wikipedia/Elasticity_(physics)
In differential geometry, a tensor density or relative tensor is a generalization of the tensor field concept. A tensor density transforms as a tensor field when passing from one coordinate system to another (see tensor field), except that it is additionally multiplied or weighted by a power W {\displaystyle W} of the Jacobian determinant of the coordinate transition function or its absolute value. A tensor density with a single index is called a vector density. A distinction is made among (authentic) tensor densities, pseudotensor densities, even tensor densities and odd tensor densities. Sometimes tensor densities with a negative weight W {\displaystyle W} are called tensor capacity. A tensor density can also be regarded as a section of the tensor product of a tensor bundle with a density bundle. == Motivation == In physics and related fields, it is often useful to work with the components of an algebraic object rather than the object itself. An example would be decomposing a vector into a sum of basis vectors weighted by some coefficients such as v → = c 1 e → 1 + c 2 e → 2 + c 3 e → 3 {\displaystyle {\vec {v}}=c_{1}{\vec {e}}_{1}+c_{2}{\vec {e}}_{2}+c_{3}{\vec {e}}_{3}} where v → {\displaystyle {\vec {v}}} is a vector in 3-dimensional Euclidean space, c i ∈ R 1 and e → i {\displaystyle c_{i}\in \mathbb {R} ^{1}{\text{ and }}{\vec {e}}_{i}} are the usual standard basis vectors in Euclidean space. This is usually necessary for computational purposes, and can often be insightful when algebraic objects represent complex abstractions but their components have concrete interpretations. However, with this identification, one has to be careful to track changes of the underlying basis in which the quantity is expanded; it may in the course of a computation become expedient to change the basis while the vector v → {\displaystyle {\vec {v}}} remains fixed in physical space. 
More generally, if an algebraic object represents a geometric object, but is expressed in terms of a particular basis, then it is necessary, when the basis is changed, to also change the representation. Physicists will often call this representation of a geometric object a tensor if it transforms under a sequence of linear maps given a linear change of basis (although confusingly others call the underlying geometric object which hasn't changed under the coordinate transformation a "tensor", a convention this article strictly avoids). In general there are representations which transform in arbitrary ways depending on how the geometric invariant is reconstructed from the representation. In certain special cases it is convenient to use representations which transform almost like tensors, but with an additional, nonlinear factor in the transformation. A prototypical example is a matrix representing the cross product (area of spanned parallelogram) on R 2 . {\displaystyle \mathbb {R} ^{2}.} In the standard basis, the representation is given by u → × v → = [ u 1 u 2 ] [ 0 1 − 1 0 ] [ v 1 v 2 ] = u 1 v 2 − u 2 v 1 {\displaystyle {\vec {u}}\times {\vec {v}}={\begin{bmatrix}u_{1}&u_{2}\end{bmatrix}}{\begin{bmatrix}0&1\\-1&0\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}=u_{1}v_{2}-u_{2}v_{1}} If we now try to express this same expression in a basis other than the standard basis, then the components of the vectors will change, say according to [ u 1 ′ u 2 ′ ] T = A [ u 1 u 2 ] T {\textstyle {\begin{bmatrix}u'_{1}&u'_{2}\end{bmatrix}}^{\textsf {T}}=A{\begin{bmatrix}u_{1}&u_{2}\end{bmatrix}}^{\textsf {T}}} where A {\displaystyle A} is some 2 by 2 matrix of real numbers.
Given that the area of the spanned parallelogram is a geometric invariant, it cannot have changed under the change of basis, and so the new representation of this matrix must be: ( A − 1 ) T [ 0 1 − 1 0 ] A − 1 {\displaystyle \left(A^{-1}\right)^{\textsf {T}}{\begin{bmatrix}0&1\\-1&0\end{bmatrix}}A^{-1}} which, when expanded, is just the original expression but multiplied by the determinant of A − 1 , {\displaystyle A^{-1},} which is also 1 det A . {\textstyle {\frac {1}{\det A}}.} In fact this representation could be thought of as a two index tensor transformation, but instead, it is computationally easier to think of the tensor transformation rule as multiplication by 1 det A , {\textstyle {\frac {1}{\det A}},} rather than as 2 matrix multiplications (in fact, in higher dimensions the natural extension of this is n {\displaystyle n} matrix multiplications of n × n {\displaystyle n\times n} matrices, which for large n {\displaystyle n} is completely infeasible). Objects which transform in this way are called tensor densities because they arise naturally when considering problems regarding areas and volumes, and so are frequently used in integration. == Definition == Some authors classify tensor densities into the two types called (authentic) tensor densities and pseudotensor densities in this article. Other authors classify them differently, into the types called even tensor densities and odd tensor densities. When a tensor density weight is an integer there is an equivalence between these approaches that depends upon whether the integer is even or odd. Note that these classifications elucidate the different ways that tensor densities may transform somewhat pathologically under orientation-reversing coordinate transformations. Regardless of their classifications into these types, there is only one way that tensor densities transform under orientation-preserving coordinate transformations.
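The determinant factor in the area-form example from the Motivation section is easy to verify numerically. The following NumPy sketch, with an arbitrarily chosen change-of-components matrix A (an assumption for illustration), checks that the transformed representation equals the original matrix scaled by 1/det A, and that the computed area is basis independent:

```python
import numpy as np

# Matrix of the area form on R^2 in the standard basis: u x v = u^T M v
M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# A hypothetical change of components, u' = A u
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
Ainv = np.linalg.inv(A)

# New representation that keeps the area invariant
M_new = Ainv.T @ M @ Ainv

# It equals the original matrix scaled by det(A^-1) = 1/det(A)
assert np.allclose(M_new, M / np.linalg.det(A))

# Invariance: the computed area is the same in both bases
u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.isclose((A @ u) @ M_new @ (A @ v), u @ M @ v)
```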
In this article we have chosen the convention that assigns a weight of +2 to g = det ( g ρ σ ) {\displaystyle g=\det \left(g_{\rho \sigma }\right)} , the determinant of the metric tensor expressed with covariant indices. With this choice, classical densities, like charge density, will be represented by tensor densities of weight +1. Some authors use a sign convention for weights that is the negation of that presented here. In contrast to the meaning used in this article, in general relativity "pseudotensor" sometimes means an object that does not transform like a tensor or relative tensor of any weight. === Tensor and pseudotensor densities === For example, a mixed rank-two (authentic) tensor density of weight W {\displaystyle W} transforms as: T β α = ( det [ ∂ x ¯ ι ∂ x γ ] ) W ∂ x α ∂ x ¯ δ ∂ x ¯ ϵ ∂ x β T ¯ ϵ δ , {\displaystyle {\mathfrak {T}}_{\beta }^{\alpha }=\left(\det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right)^{W}\,{\frac {\partial {x}^{\alpha }}{\partial {\bar {x}}^{\delta }}}\,{\frac {\partial {\bar {x}}^{\epsilon }}{\partial {x}^{\beta }}}\,{\bar {\mathfrak {T}}}_{\epsilon }^{\delta }\,,} ((authentic) tensor density of (integer) weight W {\displaystyle W} ) where T ¯ {\displaystyle {\bar {\mathfrak {T}}}} is the rank-two tensor density in the x ¯ {\displaystyle {\bar {x}}} coordinate system, T {\displaystyle {\mathfrak {T}}} is the transformed tensor density in the x {\displaystyle {x}} coordinate system; and we use the Jacobian determinant. Because the determinant can be negative, which it is for an orientation-reversing coordinate transformation, this formula is applicable only when W {\displaystyle W} is an integer. (However, see even and odd tensor densities below.) We say that a tensor density is a pseudotensor density when there is an additional sign flip under an orientation-reversing coordinate transformation. 
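For a scalar (rank-zero) density the index factors drop out and the transformation laws reduce to multiplication by powers of the Jacobian determinant. A minimal NumPy sketch, using a hypothetical orientation-reversing linear coordinate change, contrasts an authentic tensor density with a pseudotensor density (the sign-flip behaviour just described):

```python
import numpy as np

# Jacobian matrix of a hypothetical linear coordinate change xbar = f(x);
# this one reverses orientation, so its determinant is negative.
J = np.array([[-1.0, 0.0],
              [ 0.0, 2.0]])
detJ = np.linalg.det(J)        # = -2
W = 3                          # an odd integer weight

sbar = 5.0                     # a scalar (rank-0) density in xbar coordinates

# Rank-0 versions of the two laws (no index factors for a scalar):
authentic = detJ**W * sbar                # (authentic) tensor density
pseudo = np.sign(detJ) * detJ**W * sbar   # pseudotensor density: extra sign

# The two agree under orientation-preserving maps; here they differ by a sign
assert np.isclose(authentic, -pseudo)
assert np.sign(detJ) == -1.0
```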
A mixed rank-two pseudotensor density of weight W {\displaystyle W} transforms as T β α = sgn ⁡ ( det [ ∂ x ¯ ι ∂ x γ ] ) ( det [ ∂ x ¯ ι ∂ x γ ] ) W ∂ x α ∂ x ¯ δ ∂ x ¯ ϵ ∂ x β T ¯ ϵ δ , {\displaystyle {\mathfrak {T}}_{\beta }^{\alpha }=\operatorname {sgn} \left(\det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right)\left(\det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right)^{W}\,{\frac {\partial {x}^{\alpha }}{\partial {\bar {x}}^{\delta }}}\,{\frac {\partial {\bar {x}}^{\epsilon }}{\partial {x}^{\beta }}}\,{\bar {\mathfrak {T}}}_{\epsilon }^{\delta }\,,} (pseudotensor density of (integer) weight W {\displaystyle W} ) where sgn( ⋅ {\displaystyle \cdot } ) is a function that returns +1 when its argument is positive or −1 when its argument is negative. === Even and odd tensor densities === The transformations for even and odd tensor densities have the benefit of being well defined even when W {\displaystyle W} is not an integer. Thus one can speak of, say, an odd tensor density of weight +2 or an even tensor density of weight −1/2. When W {\displaystyle W} is an even integer the above formula for an (authentic) tensor density can be rewritten as T β α = | det [ ∂ x ¯ ι ∂ x γ ] | W ∂ x α ∂ x ¯ δ ∂ x ¯ ϵ ∂ x β T ¯ ϵ δ . {\displaystyle {\mathfrak {T}}_{\beta }^{\alpha }=\left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ^{W}\,{\frac {\partial {x}^{\alpha }}{\partial {\bar {x}}^{\delta }}}\,{\frac {\partial {\bar {x}}^{\epsilon }}{\partial {x}^{\beta }}}\,{\bar {\mathfrak {T}}}_{\epsilon }^{\delta }\,.} (even tensor density of weight W {\displaystyle W} ) Similarly, when W {\displaystyle W} is an odd integer the formula for an (authentic) tensor density can be rewritten as T β α = sgn ⁡ ( det [ ∂ x ¯ ι ∂ x γ ] ) | det [ ∂ x ¯ ι ∂ x γ ] | W ∂ x α ∂ x ¯ δ ∂ x ¯ ϵ ∂ x β T ¯ ϵ δ . 
{\displaystyle {\mathfrak {T}}_{\beta }^{\alpha }=\operatorname {sgn} \left(\det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right)\left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ^{W}\,{\frac {\partial {x}^{\alpha }}{\partial {\bar {x}}^{\delta }}}\,{\frac {\partial {\bar {x}}^{\epsilon }}{\partial {x}^{\beta }}}\,{\bar {\mathfrak {T}}}_{\epsilon }^{\delta }\,.} (odd tensor density of weight W {\displaystyle W} ) === Weights of zero and one === A tensor density of any type that has weight zero is also called an absolute tensor. An (even) authentic tensor density of weight zero is also called an ordinary tensor. If a weight is not specified but the word "relative" or "density" is used in a context where a specific weight is needed, it is usually assumed that the weight is +1. === Algebraic properties === A linear combination (also known as a weighted sum) of tensor densities of the same type and weight W {\displaystyle W} is again a tensor density of that type and weight. A product of two tensor densities of any types, and with weights W 1 {\displaystyle W_{1}} and W 2 {\displaystyle W_{2}} , is a tensor density of weight W 1 + W 2 . {\displaystyle W_{1}+W_{2}.} A product of authentic tensor densities and pseudotensor densities will be an authentic tensor density when an even number of the factors are pseudotensor densities; it will be a pseudotensor density when an odd number of the factors are pseudotensor densities. Similarly, a product of even tensor densities and odd tensor densities will be an even tensor density when an even number of the factors are odd tensor densities; it will be an odd tensor density when an odd number of the factors are odd tensor densities. The contraction of indices on a tensor density with weight W {\displaystyle W} again yields a tensor density of weight W . 
{\displaystyle W.} Using the product and contraction rules above, one sees that raising and lowering indices using the metric tensor (weight 0) leaves the weight unchanged. === Matrix inversion and matrix determinant of tensor densities === If T α β {\displaystyle {\mathfrak {T}}_{\alpha \beta }} is a non-singular matrix and a rank-two tensor density of weight W {\displaystyle W} with covariant indices then its matrix inverse will be a rank-two tensor density of weight − W {\displaystyle -W} with contravariant indices. Similar statements apply when the two indices are contravariant or are mixed covariant and contravariant. If T α β {\displaystyle {\mathfrak {T}}_{\alpha \beta }} is a rank-two tensor density of weight W {\displaystyle W} with covariant indices then the matrix determinant det T α β {\displaystyle \det {\mathfrak {T}}_{\alpha \beta }} will have weight N W + 2 , {\displaystyle NW+2,} where N {\displaystyle N} is the number of space-time dimensions. If T α β {\displaystyle {\mathfrak {T}}^{\alpha \beta }} is a rank-two tensor density of weight W {\displaystyle W} with contravariant indices then the matrix determinant det T α β {\displaystyle \det {\mathfrak {T}}^{\alpha \beta }} will have weight N W − 2. {\displaystyle NW-2.} The matrix determinant det T β α {\displaystyle \det {\mathfrak {T}}_{~\beta }^{\alpha }} will have weight N W . {\displaystyle NW.} == General relativity == === Relation of Jacobian determinant and metric tensor === Any non-singular ordinary tensor T μ ν {\displaystyle T_{\mu \nu }} transforms as T μ ν = ∂ x ¯ κ ∂ x μ T ¯ κ λ ∂ x ¯ λ ∂ x ν , {\displaystyle T_{\mu \nu }={\frac {\partial {\bar {x}}^{\kappa }}{\partial {x}^{\mu }}}{\bar {T}}_{\kappa \lambda }{\frac {\partial {\bar {x}}^{\lambda }}{\partial {x}^{\nu }}}\,,} where the right-hand side can be viewed as the product of three matrices.
Taking the determinant of both sides of the equation (using that the determinant of a matrix product is the product of the determinants), dividing both sides by det ( T ¯ κ λ ) , {\displaystyle \det \left({\bar {T}}_{\kappa \lambda }\right),} and taking their square root gives | det [ ∂ x ¯ ι ∂ x γ ] | = det ( T μ ν ) det ( T ¯ κ λ ) . {\displaystyle \left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ={\sqrt {\frac {\det({T}_{\mu \nu })}{\det \left({\bar {T}}_{\kappa \lambda }\right)}}}\,.} When the tensor T {\displaystyle T} is the metric tensor, g κ λ , {\displaystyle {g}_{\kappa \lambda },} and x ¯ ι {\displaystyle {\bar {x}}^{\iota }} is a locally inertial coordinate system where g ¯ κ λ = η κ λ = {\displaystyle {\bar {g}}_{\kappa \lambda }=\eta _{\kappa \lambda }=} diag(−1,+1,+1,+1), the Minkowski metric, then det ( g ¯ κ λ ) = det ( η κ λ ) = {\displaystyle \det \left({\bar {g}}_{\kappa \lambda }\right)=\det(\eta _{\kappa \lambda })=} −1 and so | det [ ∂ x ¯ ι ∂ x γ ] | = − g , {\displaystyle \left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ={\sqrt {-{g}}}\,,} where g = det ( g μ ν ) {\displaystyle {g}=\det \left({g}_{\mu \nu }\right)} is the determinant of the metric tensor g μ ν . {\displaystyle {g}_{\mu \nu }.} === Use of metric tensor to manipulate tensor densities === Consequently, an even tensor density, T ν … μ … , {\displaystyle {\mathfrak {T}}_{\nu \dots }^{\mu \dots },} of weight W {\displaystyle W} , can be written in the form T ν … μ … = − g W T ν … μ … , {\displaystyle {\mathfrak {T}}_{\nu \dots }^{\mu \dots }={\sqrt {-g}}\;^{W}T_{\nu \dots }^{\mu \dots }\,,} where T ν … μ … {\displaystyle T_{\nu \dots }^{\mu \dots }\,} is an ordinary tensor. 
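Both the square-root relation just derived and the determinant-weight rule (N W + 2 for a rank-two covariant density) from the previous subsection can be checked numerically. A sketch with randomly generated, purely illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3

# J[k, m] = d xbar^k / d x^m, an arbitrary nonsingular Jacobian matrix
J = rng.standard_normal((N, N))

# An ordinary symmetric positive-definite tensor Tbar in the xbar coordinates
A = rng.standard_normal((N, N))
Tbar = A @ A.T + 3 * np.eye(N)

# Tensor transformation law: T_mu_nu = (dxbar^k/dx^mu) Tbar_kl (dxbar^l/dx^nu)
T = J.T @ Tbar @ J

# |det J| = sqrt(det T / det Tbar)
assert np.isclose(abs(np.linalg.det(J)),
                  np.sqrt(np.linalg.det(T) / np.linalg.det(Tbar)))

# The determinant of a rank-2 covariant tensor density of weight W
# is a scalar density of weight N*W + 2
W = 2
detJ = np.linalg.det(J)
T_dens = detJ**W * T   # covariant density version, weight W
assert np.isclose(np.linalg.det(T_dens),
                  detJ**(N * W + 2) * np.linalg.det(Tbar))
```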
In a locally inertial coordinate system, where g κ λ = η κ λ , {\displaystyle g_{\kappa \lambda }=\eta _{\kappa \lambda },} it will be the case that T ν … μ … {\displaystyle {\mathfrak {T}}_{\nu \dots }^{\mu \dots }} and T ν … μ … {\displaystyle T_{\nu \dots }^{\mu \dots }\,} will be represented with the same numbers. When using the metric connection (Levi-Civita connection), the covariant derivative of an even tensor density is defined as T ν … ; α μ … = − g W T ν … ; α μ … = − g W ( − g − W T ν … μ … ) ; α . {\displaystyle {\mathfrak {T}}_{\nu \dots ;\alpha }^{\mu \dots }={\sqrt {-g}}\;^{W}T_{\nu \dots ;\alpha }^{\mu \dots }={\sqrt {-g}}\;^{W}\left({\sqrt {-g}}\;^{-W}{\mathfrak {T}}_{\nu \dots }^{\mu \dots }\right)_{;\alpha }\,.} For an arbitrary connection, the covariant derivative is defined by adding an extra term, namely − W Γ δ α δ T ν … μ … {\displaystyle -W\,\Gamma _{~\delta \alpha }^{\delta }\,{\mathfrak {T}}_{\nu \dots }^{\mu \dots }} to the expression that would be appropriate for the covariant derivative of an ordinary tensor. Equivalently, the product rule is obeyed ( T ν … μ … S τ … σ … ) ; α = ( T ν … ; α μ … ) S τ … σ … + T ν … μ … ( S τ … ; α σ … ) , {\displaystyle \left({\mathfrak {T}}_{\nu \dots }^{\mu \dots }{\mathfrak {S}}_{\tau \dots }^{\sigma \dots }\right)_{;\alpha }=\left({\mathfrak {T}}_{\nu \dots ;\alpha }^{\mu \dots }\right){\mathfrak {S}}_{\tau \dots }^{\sigma \dots }+{\mathfrak {T}}_{\nu \dots }^{\mu \dots }\left({\mathfrak {S}}_{\tau \dots ;\alpha }^{\sigma \dots }\right)\,,} where, for the metric connection, the covariant derivative of any function of g κ λ {\displaystyle g_{\kappa \lambda }} is always zero, g κ λ ; α = 0 ( − g W ) ; α = ( − g W ) , α − W Γ δ α δ − g W = W 2 g κ λ g κ λ , α − g W − W Γ δ α δ − g W = 0 . 
{\displaystyle {\begin{aligned}g_{\kappa \lambda ;\alpha }&=0\\\left({\sqrt {-g}}\;^{W}\right)_{;\alpha }&=\left({\sqrt {-g}}\;^{W}\right)_{,\alpha }-W\Gamma _{~\delta \alpha }^{\delta }{\sqrt {-g}}\;^{W}={\frac {W}{2}}g^{\kappa \lambda }g_{\kappa \lambda ,\alpha }{\sqrt {-g}}\;^{W}-W\Gamma _{~\delta \alpha }^{\delta }{\sqrt {-g}}\;^{W}=0\,.\end{aligned}}} == Examples == The expression − g {\displaystyle {\sqrt {-g}}} is a scalar density. By the convention of this article it has a weight of +1. The density of electric current J μ {\displaystyle {\mathfrak {J}}^{\mu }} (for example, J 2 {\displaystyle {\mathfrak {J}}^{2}} is the amount of electric charge crossing the 3-volume element d x 3 d x 4 d x 1 {\displaystyle dx^{3}\,dx^{4}\,dx^{1}} divided by that element — do not use the metric in this calculation) is a contravariant vector density of weight +1. It is often written as J μ = J μ − g {\displaystyle {\mathfrak {J}}^{\mu }=J^{\mu }{\sqrt {-g}}} or J μ = ε μ α β γ J α β γ / 3 ! , {\displaystyle {\mathfrak {J}}^{\mu }=\varepsilon ^{\mu \alpha \beta \gamma }{\mathcal {J}}_{\alpha \beta \gamma }/3!,} where J μ {\displaystyle J^{\mu }\,} and the differential form J α β γ {\displaystyle {\mathcal {J}}_{\alpha \beta \gamma }} are absolute tensors, and where ε μ α β γ {\displaystyle \varepsilon ^{\mu \alpha \beta \gamma }} is the Levi-Civita symbol; see below. The density of Lorentz force f μ {\displaystyle {\mathfrak {f}}_{\mu }} (that is, the linear momentum transferred from the electromagnetic field to matter within a 4-volume element d x 1 d x 2 d x 3 d x 4 {\displaystyle dx^{1}\,dx^{2}\,dx^{3}\,dx^{4}} divided by that element — do not use the metric in this calculation) is a covariant vector density of weight +1. 
In N {\displaystyle N} -dimensional space-time, the Levi-Civita symbol may be regarded as either a rank- N {\displaystyle N} contravariant (odd) authentic tensor density of weight +1 ( ϵ α 1 ⋯ α N {\displaystyle \epsilon ^{\alpha _{1}\cdots \alpha _{N}}} ) or a rank- N {\displaystyle N} covariant (odd) authentic tensor density of weight −1 ( ϵ α 1 ⋯ α N {\displaystyle \epsilon _{\alpha _{1}\cdots \alpha _{N}}} ): ϵ α 1 ⋯ α N = ϵ ¯ β 1 ⋯ β N ∂ x α 1 ∂ x ¯ β 1 ⋯ ∂ x α N ∂ x ¯ β N ( det [ ∂ x ¯ β ∂ x α ] ) + 1 {\displaystyle \epsilon ^{\alpha _{1}\cdots \alpha _{N}}={\bar {\epsilon }}^{\beta _{1}\cdots \beta _{N}}{\frac {\partial x^{\alpha _{1}}}{\partial {\bar {x}}^{\beta _{1}}}}\cdots {\frac {\partial x^{\alpha _{N}}}{\partial {\bar {x}}^{\beta _{N}}}}\left(\det \left[{\frac {\partial {\bar {x}}^{\beta }}{\partial x^{\alpha }}}\right]\right)^{+1}} ϵ α 1 ⋯ α N = ϵ ¯ β 1 ⋯ β N ∂ x ¯ β 1 ∂ x α 1 ⋯ ∂ x ¯ β N ∂ x α N ( det [ ∂ x ¯ β ∂ x α ] ) − 1 . {\displaystyle \epsilon _{\alpha _{1}\cdots \alpha _{N}}={\bar {\epsilon }}_{\beta _{1}\cdots \beta _{N}}{\frac {\partial {\bar {x}}^{\beta _{1}}}{\partial x^{\alpha _{1}}}}\cdots {\frac {\partial {\bar {x}}^{\beta _{N}}}{\partial x^{\alpha _{N}}}}\left(\det \left[{\frac {\partial {\bar {x}}^{\beta }}{\partial x^{\alpha }}}\right]\right)^{-1}\,.} Notice that the Levi-Civita symbol (so regarded) does not obey the usual convention for raising or lowering of indices with the metric tensor. That is, it is true that ε α β γ δ g α κ g β λ g γ μ g δ ν = ε κ λ μ ν g , {\displaystyle \varepsilon ^{\alpha \beta \gamma \delta }\,g_{\alpha \kappa }\,g_{\beta \lambda }\,g_{\gamma \mu }g_{\delta \nu }\,=\,\varepsilon _{\kappa \lambda \mu \nu }\,g\,,} but in general relativity, where g = det ( g ρ σ ) {\displaystyle g=\det \left(g_{\rho \sigma }\right)} is always negative, this is never equal to ε κ λ μ ν .
{\displaystyle \varepsilon _{\kappa \lambda \mu \nu }.} The determinant of the metric tensor, g = det ( g ρ σ ) = 1 4 ! ε α β γ δ ε κ λ μ ν g α κ g β λ g γ μ g δ ν , {\displaystyle g=\det \left(g_{\rho \sigma }\right)={\frac {1}{4!}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\alpha \kappa }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }\,,} is an (even) authentic scalar density of weight +2, being the contraction of the product of 2 (odd) authentic tensor densities of weight +1 and four (even) authentic tensor densities of weight 0. == See also == == Notes == == References == Spivak, Michael (1999), A Comprehensive Introduction to Differential Geometry, Vol I (3rd ed.), p. 134. Kuptsov, L.P. (2001) [1994], "Tensor Density", Encyclopedia of Mathematics, EMS Press. Charles Misner; Kip S Thorne & John Archibald Wheeler (1973). Gravitation. W. H. Freeman. p. 501ff. ISBN 0-7167-0344-0.{{cite book}}: CS1 maint: multiple names: authors list (link) Weinberg, Steven (1972), Gravitation and Cosmology, John Wiley & sons, Inc, ISBN 0-471-92567-5
Wikipedia/Tensor_density
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects associated with a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix. Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors". Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor. == Definition == Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction. 
=== As multidimensional arrays === A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an n-dimensional space is represented by a one-dimensional array with n components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted Tij , where i and j are indices running from 1 to n, or also by T ij. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while Tij and T ij can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together. The total number of indices (m) required to identify each component uniquely is equal to the dimension or the number of ways of an array, which is why a tensor is sometimes referred to as an m-dimensional array or an m-way array. The total number of indices is also called the order, degree or rank of a tensor, although the term "rank" generally has another meaning in the context of matrices and tensors. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. 
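The correspondence between the order of a tensor and the dimensionality of its component array can be illustrated directly; a trivial NumPy sketch (the sizes are purely illustrative):

```python
import numpy as np

# On a 3-dimensional space, an order-2 tensor has components forming a
# 3 x 3 array; an order-3 tensor needs three indices, i.e. 3 x 3 x 3, etc.
T2 = np.zeros((3, 3))       # order 2: components T_ij (or T^i_j, T^ij)
T3 = np.zeros((3, 3, 3))    # order 3: components T_ijk

assert T2.ndim == 2 and T3.ndim == 3   # ndim matches the tensor's order
assert T3.size == 3**3                 # n^m components for order m in dim n
```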
The components of a vector can respond in two distinct ways to a change of basis (see Covariance and contravariance of vectors), where the new basis vectors e ^ i {\displaystyle \mathbf {\hat {e}} _{i}} are expressed in terms of the old basis vectors e j {\displaystyle \mathbf {e} _{j}} as, e ^ i = ∑ j = 1 n e j R i j = e j R i j . {\displaystyle \mathbf {\hat {e}} _{i}=\sum _{j=1}^{n}\mathbf {e} _{j}R_{i}^{j}=\mathbf {e} _{j}R_{i}^{j}.} Here R ji are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article. The components vi of a column vector v transform with the inverse of the matrix R, v ^ i = ( R − 1 ) j i v j , {\displaystyle {\hat {v}}^{i}=\left(R^{-1}\right)_{j}^{i}v^{j},} where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components, wi, of a covector (or row vector), w, transform with the matrix R itself, w ^ i = w j R i j . {\displaystyle {\hat {w}}_{i}=w_{j}R_{i}^{j}.} This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript). 
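The two transformation laws, and the basis independence they guarantee for the pairing of a covector with a vector, can be sketched with NumPy (R is an arbitrarily chosen invertible matrix, assumed for illustration):

```python
import numpy as np

# Change-of-basis matrix R: the new basis vectors, expressed in the old
# basis, are the columns of R (ehat_i = e_j R^j_i)
R = np.array([[1.0, 2.0],
              [0.0, 1.0]])
Rinv = np.linalg.inv(R)

v = np.array([3.0, 4.0])    # contravariant components of a vector
w = np.array([1.0, -2.0])   # covariant components of a covector

v_hat = Rinv @ v            # contravariant law: transform with R^-1
w_hat = w @ R               # covariant law: transform with R itself

# The scalar pairing w_i v^i is independent of the basis
assert np.isclose(w_hat @ v_hat, w @ v)
```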
As a simple example, the matrix of a linear operator with respect to a basis is a square array T {\displaystyle T} that transforms under a change of basis matrix R = ( R i j ) {\displaystyle R=\left(R_{i}^{j}\right)} by T ^ = R − 1 T R {\displaystyle {\hat {T}}=R^{-1}TR} . For the individual matrix entries, this transformation law has the form T ^ j ′ i ′ = ( R − 1 ) i i ′ T j i R j ′ j {\displaystyle {\hat {T}}_{j'}^{i'}=\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}} so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1). Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above: v = v ^ i e ^ i = ( ( R − 1 ) j i v j ) ( e k R i k ) = ( ( R − 1 ) j i R i k ) v j e k = δ j k v j e k = v k e k = v i e i {\displaystyle \mathbf {v} ={\hat {v}}^{i}\,\mathbf {\hat {e}} _{i}=\left(\left(R^{-1}\right)_{j}^{i}{v}^{j}\right)\left(\mathbf {e} _{k}R_{i}^{k}\right)=\left(\left(R^{-1}\right)_{j}^{i}R_{i}^{k}\right){v}^{j}\mathbf {e} _{k}=\delta _{j}^{k}{v}^{j}\mathbf {e} _{k}={v}^{k}\,\mathbf {e} _{k}={v}^{i}\,\mathbf {e} _{i}} , where δ j k {\displaystyle \delta _{j}^{k}} is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices (j into k in this example).
This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like v i e i {\displaystyle {v}^{i}\,\mathbf {e} _{i}} can immediately be seen to be geometrically identical in all coordinate systems. Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components ( T v ) i {\displaystyle (Tv)^{i}} are given by ( T v ) i = T j i v j {\displaystyle (Tv)^{i}=T_{j}^{i}v^{j}} . These components transform contravariantly, since ( T v ^ ) i ′ = T ^ j ′ i ′ v ^ j ′ = [ ( R − 1 ) i i ′ T j i R j ′ j ] [ ( R − 1 ) k j ′ v k ] = ( R − 1 ) i i ′ ( T v ) i . 
{\displaystyle \left({\widehat {Tv}}\right)^{i'}={\hat {T}}_{j'}^{i'}{\hat {v}}^{j'}=\left[\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}\right]\left[\left(R^{-1}\right)_{k}^{j'}v^{k}\right]=\left(R^{-1}\right)_{i}^{i'}(Tv)^{i}.} The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as, T ^ j 1 ′ , … , j q ′ i 1 ′ , … , i p ′ = ( R − 1 ) i 1 i 1 ′ ⋯ ( R − 1 ) i p i p ′ {\displaystyle {\hat {T}}_{j'_{1},\ldots ,j'_{q}}^{i'_{1},\ldots ,i'_{p}}=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}} T j 1 , … , j q i 1 , … , i p {\displaystyle T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}} R j 1 ′ j 1 ⋯ R j q ′ j q . {\displaystyle R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.} Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), p + q in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short. This discussion motivates the following formal definition: Definition. 
A tensor of type (p, q) is an assignment of a multidimensional array T j 1 … j q i 1 … i p [ f ] {\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}[\mathbf {f} ]} to each basis f = (e1, ..., en) of an n-dimensional vector space such that, if we apply the change of basis f ↦ f ⋅ R = ( e i R 1 i , … , e i R n i ) {\displaystyle \mathbf {f} \mapsto \mathbf {f} \cdot R=\left(\mathbf {e} _{i}R_{1}^{i},\dots ,\mathbf {e} _{i}R_{n}^{i}\right)} then the multidimensional array obeys the transformation law T j 1 ′ … j q ′ i 1 ′ … i p ′ [ f ⋅ R ] = ( R − 1 ) i 1 i 1 ′ ⋯ ( R − 1 ) i p i p ′ {\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}} T j 1 , … , j q i 1 , … , i p [ f ] {\displaystyle T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]} R j 1 ′ j 1 ⋯ R j q ′ j q . {\displaystyle R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.} The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci. An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If f = ( f 1 , … , f n ) {\displaystyle \mathbf {f} =(\mathbf {f} _{1},\dots ,\mathbf {f} _{n})} is an ordered basis, and R = ( R j i ) {\displaystyle R=\left(R_{j}^{i}\right)} is an invertible n × n {\displaystyle n\times n} matrix, then the action is given by f R = ( f i R 1 i , … , f i R n i ) . {\displaystyle \mathbf {f} R=\left(\mathbf {f} _{i}R_{1}^{i},\dots ,\mathbf {f} _{i}R_{n}^{i}\right).} Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let ρ {\displaystyle \rho } be a representation of GL(n) on W (that is, a group homomorphism ρ : GL ( n ) → GL ( W ) {\displaystyle \rho :{\text{GL}}(n)\to {\text{GL}}(W)} ). 
Then a tensor of type ρ {\displaystyle \rho } is an equivariant map T : F → W {\displaystyle T:F\to W} . Equivariance here means that T ( F R ) = ρ ( R − 1 ) T ( F ) . {\displaystyle T(FR)=\rho \left(R^{-1}\right)T(F).} When ρ {\displaystyle \rho } is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups. === As multilinear maps === A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. In this approach, a type (p, q) tensor T is defined as a multilinear map, T : V ∗ × ⋯ × V ∗ ⏟ p copies × V × ⋯ × V ⏟ q copies → R , {\displaystyle T:\underbrace {V^{*}\times \dots \times V^{*}} _{p{\text{ copies}}}\times \underbrace {V\times \dots \times V} _{q{\text{ copies}}}\rightarrow \mathbf {R} ,} where V∗ is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes V is a vector space over the real numbers, ⁠ R {\displaystyle \mathbb {R} } ⁠. More generally, V can be taken over any field F (e.g. the complex numbers), with F replacing ⁠ R {\displaystyle \mathbb {R} } ⁠ as the codomain of the multilinear maps. 
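As a concrete illustration of the multilinear-map definition, the standard dot product on R^3 is a type (0, 2) tensor; evaluating it on pairs of basis vectors yields its components. A NumPy sketch:

```python
import numpy as np

n = 3

# A type-(0,2) tensor as a multilinear map V x V -> R:
# here simply the standard dot product on R^3
def T(u, v):
    return float(np.dot(u, v))

# Components T_jk = T(e_j, e_k) with respect to the standard basis
e = np.eye(n)
comp = np.array([[T(e[j], e[k]) for k in range(n)] for j in range(n)])
assert np.allclose(comp, np.eye(n))   # dot product components: delta_jk

# Multilinearity recovers the map from its components
u, v = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.0, 2.0])
assert np.isclose(T(u, v), np.einsum('j,k,jk->', u, v, comp))
```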
By applying a multilinear map T of type (p, q) to a basis {ej} for V and a canonical cobasis {εi} for V∗, T j 1 … j q i 1 … i p ≡ T ( ε i 1 , … , ε i p , e j 1 , … , e j q ) , {\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\equiv T\left({\boldsymbol {\varepsilon }}^{i_{1}},\ldots ,{\boldsymbol {\varepsilon }}^{i_{p}},\mathbf {e} _{j_{1}},\ldots ,\mathbf {e} _{j_{q}}\right),} a (p + q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors. In viewing a tensor as a multilinear map, it is conventional to identify the double dual V∗∗ of the vector space V, i.e., the space of linear functionals on the dual vector space V∗, with the vector space V. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual. === Using tensor products === For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property as explained here and here. A type (p, q) tensor is defined in this context as an element of the tensor product of vector spaces, T ∈ V ⊗ ⋯ ⊗ V ⏟ p copies ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ q copies . 
{\displaystyle T\in \underbrace {V\otimes \dots \otimes V} _{p{\text{ copies}}}\otimes \underbrace {V^{*}\otimes \dots \otimes V^{*}} _{q{\text{ copies}}}.} A basis vi of V and a basis wj of W naturally induce a basis vi ⊗ wj of the tensor product V ⊗ W. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual basis {εj}, i.e. T = T j 1 … j q i 1 … i p e i 1 ⊗ ⋯ ⊗ e i p ⊗ ε j 1 ⊗ ⋯ ⊗ ε j q . {\displaystyle T=T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\;\mathbf {e} _{i_{1}}\otimes \cdots \otimes \mathbf {e} _{i_{p}}\otimes {\boldsymbol {\varepsilon }}^{j_{1}}\otimes \cdots \otimes {\boldsymbol {\varepsilon }}^{j_{q}}.} Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (p, q) tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps. This one-to-one correspondence can be achieved in the following way, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual: U ⊗ V ≅ ( U ∗ ∗ ) ⊗ ( V ∗ ∗ ) ≅ ( U ∗ ⊗ V ∗ ) ∗ ≅ Hom 2 ⁡ ( U ∗ × V ∗ ; F ) {\displaystyle U\otimes V\cong \left(U^{**}\right)\otimes \left(V^{**}\right)\cong \left(U^{*}\otimes V^{*}\right)^{*}\cong \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)} The last isomorphism uses the universal property of the tensor product: there is a one-to-one correspondence between bilinear maps in Hom 2 ⁡ ( U ∗ × V ∗ ; F ) {\displaystyle \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)} and linear maps in Hom ⁡ ( U ∗ ⊗ V ∗ ; F ) {\displaystyle \operatorname {Hom} \left(U^{*}\otimes V^{*};\mathbb {F} \right)} . Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. 
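The recovery of components from the induced basis can be sketched numerically. The toy example below (illustrative only; the coefficient array is chosen arbitrarily) builds a type (1, 1) tensor from outer products e_i ⊗ ε^j and checks that the coefficients reappear as its components, with contraction of the upper against the lower index giving the trace:

```python
import numpy as np

n = 3
e = np.eye(n)      # e[i] is the basis vector e_i of V
eps = np.eye(n)    # eps[j] is the dual basis covector eps^j (as an array of coefficients)

coeffs = np.array([[1.0, 2.0, 0.0],
                   [0.0, 5.0, 1.0],
                   [4.0, 0.0, 3.0]])   # components T^i_j, arbitrarily chosen

# T = sum_ij T^i_j  e_i (x) eps^j  -- realized as an n x n component array
T = sum(coeffs[i, j] * np.outer(e[i], eps[j])
        for i in range(n) for j in range(n))
assert np.allclose(T, coeffs)   # the coefficients are recovered as components

# Contracting the upper with the lower index yields a scalar: the trace
assert np.isclose(np.einsum('ii->', T), np.trace(coeffs))
```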
However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above. === Tensors in infinite dimensions === This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic. Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories. === Tensor fields === In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor. In this context, a coordinate basis is often chosen for the tangent vector space. 
The transformation law may then be expressed in terms of partial derivatives of the coordinate functions, x ¯ i ( x 1 , … , x n ) , {\displaystyle {\bar {x}}^{i}\left(x^{1},\ldots ,x^{n}\right),} defining a coordinate transformation, T ^ j 1 ′ … j q ′ i 1 ′ … i p ′ ( x ¯ 1 , … , x ¯ n ) = ∂ x ¯ i 1 ′ ∂ x i 1 ⋯ ∂ x ¯ i p ′ ∂ x i p ∂ x j 1 ∂ x ¯ j 1 ′ ⋯ ∂ x j q ∂ x ¯ j q ′ T j 1 … j q i 1 … i p ( x 1 , … , x n ) . {\displaystyle {\hat {T}}_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}\left({\bar {x}}^{1},\ldots ,{\bar {x}}^{n}\right)={\frac {\partial {\bar {x}}^{i'_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial {\bar {x}}^{i'_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial {\bar {x}}^{j'_{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial {\bar {x}}^{j'_{q}}}}T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\left(x^{1},\ldots ,x^{n}\right).} == History == The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century. The word "tensor" itself was introduced in 1846 by William Rowan Hamilton to describe something different from what is now meant by a tensor. Gibbs introduced dyadics and polyadic algebra, which are also tensors in the modern sense. The contemporary usage was introduced by Woldemar Voigt in 1898. Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented in 1892. It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications). In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense. 
In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Albert Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann. Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect, as when Einstein wrote: "I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot." Tensors and tensor fields were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics, and Hassler Whitney popularized the tensor product. From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic. 
Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s. == Examples == An elementary example of a mapping describable as a tensor is the dot product, which maps two vectors to a scalar. A more complex example is the Cauchy stress tensor T, which takes a directional unit vector v as input and maps it to the stress vector T(v), which is the force (per unit area) exerted by material on the negative side of the plane orthogonal to v against the material on the positive side of the plane, thus expressing a relationship between these two vectors, shown in the figure (right). The cross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. The totally anti-symmetric symbol ε i j k {\displaystyle \varepsilon _{ijk}} nevertheless allows a convenient handling of the cross product in equally oriented three dimensional coordinate systems. This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor. Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. 
Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this corresponds to moving diagonally up and to the left on the table. == Properties == Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that makes it possible to define tensors as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of the basis, thereby making only certain multidimensional arrays of numbers a tensor. Compare this with the array representing ε i j k {\displaystyle \varepsilon _{ijk}} , which is not a tensor, since it changes sign under transformations that change the orientation. Because the components of vectors and their duals transform differently under the change of their dual bases, there is a covariant and/or contravariant transformation law that relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of vectors (n, the contravariant indices) and dual vectors (m, the covariant indices) in the input and output of a tensor determine the type (or valence) of the tensor, a pair of natural numbers (n, m), which determines the precise form of the transformation law. The order of a tensor is the sum of these two numbers. The order (also degree or rank) of a tensor is thus the sum of the orders of its arguments plus the order of the resulting tensor. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array. 
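The correspondence between the order of a tensor and the dimensionality of its component array can be illustrated with numpy arrays (an informal sketch, not from the article; the order-3 example is the Levi-Civita ε-symbol discussed earlier):

```python
import numpy as np

scalar = np.float64(3.0)             # order 0: a single number, no indices
vector = np.array([1.0, 2.0, 3.0])   # order 1: one index
matrix = np.eye(3)                   # order 2: two indices (e.g. a linear map)
epsilon = np.zeros((3, 3, 3))        # order 3: three indices (the ε-symbol)

# Fill the totally antisymmetric ε-symbol: +1 on even, -1 on odd permutations
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    epsilon[i, j, k] = 1.0
for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    epsilon[i, j, k] = -1.0

# Order = number of indices = dimensionality of the component array
assert (np.ndim(scalar), vector.ndim, matrix.ndim, epsilon.ndim) == (0, 1, 2, 3)

# The ε-symbol expresses the cross product: (u × v)_i = ε_ijk u_j v_k
u, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
assert np.allclose(np.einsum('ijk,j,k->i', epsilon, u, v), np.cross(u, v))
```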
For example, in a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a 2-dimensional array), and therefore is a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. In this way, the tensor representing the scalar product, taking two vectors and resulting in a scalar, has order 2 + 0 = 2, the same as the stress tensor, which takes one vector and returns another: 1 + 1 = 2. The ε i j k {\displaystyle \varepsilon _{ijk}} -symbol, mapping two vectors to one vector, would have order 2 + 1 = 3. The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this. == Notation == There are several notational systems that are used to describe tensors and perform calculations involving them. === Ricci calculus === Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives. === Einstein summation convention === The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way. === Penrose graphical notation === Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. 
It is independent of basis elements, and requires no symbols for the indices. === Abstract index notation === The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation. === Component-free notation === A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces. == Operations == There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of a different type. === Tensor product === The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e., ( S ⊗ T ) ( v 1 , … , v n , v n + 1 , … , v n + m ) = S ( v 1 , … , v n ) T ( v n + 1 , … , v n + m ) , {\displaystyle (S\otimes T)(v_{1},\ldots ,v_{n},v_{n+1},\ldots ,v_{n+m})=S(v_{1},\ldots ,v_{n})T(v_{n+1},\ldots ,v_{n+m}),} which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e., ( S ⊗ T ) j 1 … j k j k + 1 … j k + m i 1 … i l i l + 1 … i l + n = S j 1 … j k i 1 … i l T j k + 1 … j k + m i l + 1 … i l + n . 
{\displaystyle (S\otimes T)_{j_{1}\ldots j_{k}j_{k+1}\ldots j_{k+m}}^{i_{1}\ldots i_{l}i_{l+1}\ldots i_{l+n}}=S_{j_{1}\ldots j_{k}}^{i_{1}\ldots i_{l}}T_{j_{k+1}\ldots j_{k+m}}^{i_{l+1}\ldots i_{l+n}}.} If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m). === Contraction === Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor T i j {\displaystyle T_{i}^{j}} can be contracted to a scalar through T i i {\displaystyle T_{i}^{i}} , where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace. The contraction is often used in conjunction with the tensor product to contract an index from each tensor. The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗ by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V∗ to a factor from V. For example, a tensor T ∈ V ⊗ V ⊗ V ∗ {\displaystyle T\in V\otimes V\otimes V^{*}} can be written as a linear combination T = v 1 ⊗ w 1 ⊗ α 1 + v 2 ⊗ w 2 ⊗ α 2 + ⋯ + v N ⊗ w N ⊗ α N . {\displaystyle T=v_{1}\otimes w_{1}\otimes \alpha _{1}+v_{2}\otimes w_{2}\otimes \alpha _{2}+\cdots +v_{N}\otimes w_{N}\otimes \alpha _{N}.} The contraction of T on the first and last slots is then the vector α 1 ( v 1 ) w 1 + α 2 ( v 2 ) w 2 + ⋯ + α N ( v N ) w N . 
{\displaystyle \alpha _{1}(v_{1})w_{1}+\alpha _{2}(v_{2})w_{2}+\cdots +\alpha _{N}(v_{N})w_{N}.} In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor T i j {\displaystyle T^{ij}} can be contracted to a scalar through T i j g i j {\displaystyle T^{ij}g_{ij}} (yet again assuming the summation convention). === Raising or lowering an index === When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with the lower index generally shown in the same position as the contracted upper index. This operation is quite graphically known as lowering an index. Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor. == Applications == === Continuum mechanics === Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. 
The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed. If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point. === Other examples from physics === Common applications include: Electromagnetic tensor (or Faraday tensor) in electromagnetism Finite deformation tensors for describing deformations and strain tensor for strain in continuum mechanics Permittivity and electric susceptibility are tensors in anisotropic media Four-tensors in general relativity (e.g. stress–energy tensor), used to represent momentum fluxes Spherical tensor operators are the eigenfunctions of the quantum angular momentum operator in spherical coordinates Diffusion tensors, the basis of diffusion tensor imaging, represent rates of diffusion in biological environments Quantum mechanics and quantum computing utilize tensor products for combination of quantum states === Computer vision and optics === The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. 
This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix. The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities: P i ε 0 = ∑ j χ i j ( 1 ) E j + ∑ j k χ i j k ( 2 ) E j E k + ∑ j k ℓ χ i j k ℓ ( 3 ) E j E k E ℓ + ⋯ . {\displaystyle {\frac {P_{i}}{\varepsilon _{0}}}=\sum _{j}\chi _{ij}^{(1)}E_{j}+\sum _{jk}\chi _{ijk}^{(2)}E_{j}E_{k}+\sum _{jk\ell }\chi _{ijk\ell }^{(3)}E_{j}E_{k}E_{\ell }+\cdots .\!} Here χ ( 1 ) {\displaystyle \chi ^{(1)}} is the linear susceptibility, χ ( 2 ) {\displaystyle \chi ^{(2)}} gives the Pockels effect and second harmonic generation, and χ ( 3 ) {\displaystyle \chi ^{(3)}} gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter. === Machine learning === The properties of tensors, especially tensor decomposition, have enabled their use in machine learning to embed higher dimensional data in artificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same. == Generalizations == === Tensor products of vector spaces === The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". 
For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense, and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces. A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring. === Tensors in infinite dimensions === The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces. Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual. Tensors thus live naturally on Banach manifolds and Fréchet manifolds. === Tensor densities === Suppose that a homogeneous medium fills R3, so that the density of the medium is described by a single scalar value ρ in kg⋅m−3. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region: m = ∫ Ω ρ d x d y d z , {\displaystyle m=\int _{\Omega }\rho \,dx\,dy\,dz,} where the Cartesian coordinates x, y, z are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100: x ′ = 100 x , y ′ = 100 y , z ′ = 100 z . {\displaystyle x'=100x,\quad y'=100y,\quad z'=100z.} The numerical value of the density ρ must then also transform by 100−3 m3/cm3 to compensate, so that the numerical value of the mass in kg is still given by the integral of ρ d x d y d z {\displaystyle \rho \,dx\,dy\,dz} . Thus ρ ′ = 100 − 3 ρ {\displaystyle \rho '=100^{-3}\rho } (in units of kg⋅cm−3). 
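This unit-change bookkeeping is easy to verify numerically. The toy calculation below (with arbitrarily chosen numbers, not from the article) checks that rescaling the coordinates by 100 while rescaling ρ by 100−3 leaves the mass unchanged:

```python
# Mass of a homogeneous cube, computed in two unit systems (toy check)
rho = 7.0        # density in kg / m^3 (arbitrary illustrative value)
side_m = 2.0     # cube side in metres

mass_m = rho * side_m ** 3               # integrating the constant rho over the cube

# Change units of length to centimetres: coordinates scale by 100, so the
# density (a scalar density of weight 1) must scale by 100**-3 to compensate
rho_cm = rho * 100 ** -3                 # now in kg / cm^3
side_cm = side_m * 100
mass_cm = rho_cm * side_cm ** 3

assert abs(mass_m - mass_cm) < 1e-9      # the mass in kg is invariant
assert abs(mass_m - 56.0) < 1e-9         # 7 * 2^3 = 56 kg
```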
More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold. A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition: T j 1 ′ … j q ′ i 1 ′ … i p ′ [ f ⋅ R ] = | det R | − w ( R − 1 ) i 1 i 1 ′ ⋯ ( R − 1 ) i p i p ′ T j 1 , … , j q i 1 , … , i p [ f ] R j 1 ′ j 1 ⋯ R j q ′ j q . {\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left|\det R\right|^{-w}\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.} Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism. Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. 
But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations comes from the logarithmic representation of the general linear group, a reducible but not semisimple representation, consisting of pairs (x, y) ∈ R2 with the transformation law ( x , y ) ↦ ( x + y log ⁡ | det R | , y ) . {\displaystyle (x,y)\mapsto (x+y\log \left|\det R\right|,y).} === Geometric objects === The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes. Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles. === Spinors === When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1. A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant. Spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. 
Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well. This article incorporates material from tensor on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Classical_treatment_of_tensors
In mathematics, a Clifford algebra is an algebra generated by a vector space with a quadratic form, and is a unital associative algebra with the additional structure of a distinguished subspace. As K-algebras, they generalize the real numbers, complex numbers, quaternions and several other hypercomplex number systems. The theory of Clifford algebras is intimately connected with the theory of quadratic forms and orthogonal transformations. Clifford algebras have important applications in a variety of fields including geometry, theoretical physics and digital image processing. They are named after the English mathematician William Kingdon Clifford (1845–1879). The most familiar Clifford algebras, the orthogonal Clifford algebras, are also referred to as (pseudo-)Riemannian Clifford algebras, as distinct from symplectic Clifford algebras. == Introduction and basic properties == A Clifford algebra is a unital associative algebra that contains and is generated by a vector space V over a field K, where V is equipped with a quadratic form Q : V → K. The Clifford algebra Cl(V, Q) is the "freest" unital associative algebra generated by V subject to the condition v 2 = Q ( v ) 1 for all v ∈ V , {\displaystyle v^{2}=Q(v)1\ {\text{ for all }}v\in V,} where the product on the left is that of the algebra, and the 1 on the right is the algebra's multiplicative identity (not to be confused with the multiplicative identity of K). The idea of being the "freest" or "most general" algebra subject to this identity can be formally expressed through the notion of a universal property, as done below. When V is a finite-dimensional real vector space and Q is nondegenerate, Cl(V, Q) may be identified by the label Clp,q(R), indicating that V has an orthogonal basis with p elements with ei2 = +1, q with ei2 = −1, and where R indicates that this is a Clifford algebra over the reals; i.e. coefficients of elements of the algebra are real numbers. 
This basis may be found by orthogonal diagonalization. The free algebra generated by V may be written as the tensor algebra ⨁n≥0 V ⊗ ⋯ ⊗ V, that is, the direct sum of the tensor product of n copies of V over all n. Therefore one obtains a Clifford algebra as the quotient of this tensor algebra by the two-sided ideal generated by elements of the form v ⊗ v − Q(v)1 for all elements v ∈ V. The product induced by the tensor product in the quotient algebra is written using juxtaposition (e.g. uv). Its associativity follows from the associativity of the tensor product. The Clifford algebra has a distinguished subspace V, being the image of the embedding map. Such a subspace cannot in general be uniquely determined given only a K-algebra that is isomorphic to the Clifford algebra. If 2 is invertible in the ground field K, then one can rewrite the fundamental identity above in the form u v + v u = 2 ⟨ u , v ⟩ 1 for all u , v ∈ V , {\displaystyle uv+vu=2\langle u,v\rangle 1\ {\text{ for all }}u,v\in V,} where ⟨ u , v ⟩ = 1 2 ( Q ( u + v ) − Q ( u ) − Q ( v ) ) {\displaystyle \langle u,v\rangle ={\frac {1}{2}}\left(Q(u+v)-Q(u)-Q(v)\right)} is the symmetric bilinear form associated with Q, via the polarization identity. Quadratic forms and Clifford algebras in characteristic 2 form an exceptional case in this respect. In particular, if char(K) = 2 it is not true that a quadratic form necessarily or uniquely determines a symmetric bilinear form that satisfies Q(v) = ⟨v, v⟩. Many of the statements in this article include the condition that the characteristic is not 2, and are false if this condition is removed. === As a quantization of the exterior algebra === Clifford algebras are closely related to exterior algebras. Indeed, if Q = 0 then the Clifford algebra Cl(V, Q) is just the exterior algebra ⋀V. Whenever 2 is invertible in the ground field K, there exists a canonical linear isomorphism between ⋀V and Cl(V, Q). 
That is, they are naturally isomorphic as vector spaces, but with different multiplications (in the case of characteristic two, they are still isomorphic as vector spaces, just not naturally). Clifford multiplication together with the distinguished subspace is strictly richer than the exterior product since it makes use of the extra information provided by Q. The Clifford algebra is a filtered algebra; the associated graded algebra is the exterior algebra. More precisely, Clifford algebras may be thought of as quantizations (cf. quantum group) of the exterior algebra, in the same way that the Weyl algebra is a quantization of the symmetric algebra. Weyl algebras and Clifford algebras admit a further structure of a *-algebra, and can be unified as even and odd terms of a superalgebra, as discussed in CCR and CAR algebras. == Universal property and construction == Let V be a vector space over a field K, and let Q : V → K be a quadratic form on V. In most cases of interest the field K is either the field of real numbers R, or the field of complex numbers C, or a finite field. A Clifford algebra Cl(V, Q) is a pair (B, i), where B is a unital associative algebra over K and i is a linear map i : V → B that satisfies i(v)2 = Q(v)1B for all v in V, defined by the following universal property: given any unital associative algebra A over K and any linear map j : V → A such that j ( v ) 2 = Q ( v ) 1 A for all v ∈ V {\displaystyle j(v)^{2}=Q(v)1_{A}{\text{ for all }}v\in V} (where 1A denotes the multiplicative identity of A), there is a unique algebra homomorphism f : B → A such that f ∘ i = j (i.e. such that the corresponding triangle of maps commutes). The quadratic form Q may be replaced by a (not necessarily symmetric) bilinear form ⟨⋅,⋅⟩ that has the property ⟨v, v⟩ = Q(v), v ∈ V, in which case an equivalent requirement on j is j ( v ) j ( v ) = ⟨ v , v ⟩ 1 A for all v ∈ V . 
{\displaystyle j(v)j(v)=\langle v,v\rangle 1_{A}\quad {\text{ for all }}v\in V.} When the characteristic of the field is not 2, this may be replaced by what is then an equivalent requirement, j ( v ) j ( w ) + j ( w ) j ( v ) = ( ⟨ v , w ⟩ + ⟨ w , v ⟩ ) 1 A for all v , w ∈ V , {\displaystyle j(v)j(w)+j(w)j(v)=(\langle v,w\rangle +\langle w,v\rangle )1_{A}\quad {\text{ for all }}v,w\in V,} where the bilinear form may additionally be restricted to being symmetric without loss of generality. A Clifford algebra as described above always exists and can be constructed as follows: start with the most general algebra that contains V, namely the tensor algebra T(V), and then enforce the fundamental identity by taking a suitable quotient. In our case we want to take the two-sided ideal IQ in T(V) generated by all elements of the form v ⊗ v − Q ( v ) 1 {\displaystyle v\otimes v-Q(v)1} for all v ∈ V {\displaystyle v\in V} and define Cl(V, Q) as the quotient algebra Cl ⁡ ( V , Q ) = T ( V ) / I Q . {\displaystyle \operatorname {Cl} (V,Q)=T(V)/I_{Q}.} The ring product inherited by this quotient is sometimes referred to as the Clifford product to distinguish it from the exterior product and the scalar product. It is then straightforward to show that Cl(V, Q) contains V and satisfies the above universal property, so that Cl is unique up to a unique isomorphism; thus one speaks of "the" Clifford algebra Cl(V, Q). It also follows from this construction that i is injective. One usually drops the i and considers V as a linear subspace of Cl(V, Q). The universal characterization of the Clifford algebra shows that the construction of Cl(V, Q) is functorial in nature. Namely, Cl can be considered as a functor from the category of vector spaces with quadratic forms (whose morphisms are linear maps that preserve the quadratic form) to the category of associative algebras. 
The universal property guarantees that linear maps between vector spaces (that preserve the quadratic form) extend uniquely to algebra homomorphisms between the associated Clifford algebras. == Basis and dimension == Since V comes equipped with a quadratic form Q, in characteristic not equal to 2 there exist bases for V that are orthogonal. An orthogonal basis is one such that for a symmetric bilinear form ⟨ e i , e j ⟩ = 0 {\displaystyle \langle e_{i},e_{j}\rangle =0} for i ≠ j {\displaystyle i\neq j} , and ⟨ e i , e i ⟩ = Q ( e i ) . {\displaystyle \langle e_{i},e_{i}\rangle =Q(e_{i}).} The fundamental Clifford identity implies that for an orthogonal basis e i e j = − e j e i {\displaystyle e_{i}e_{j}=-e_{j}e_{i}} for i ≠ j {\displaystyle i\neq j} , and e i 2 = Q ( e i ) . {\displaystyle e_{i}^{2}=Q(e_{i}).} This makes manipulation of orthogonal basis vectors quite simple. Given a product e i 1 e i 2 ⋯ e i k {\displaystyle e_{i_{1}}e_{i_{2}}\cdots e_{i_{k}}} of distinct orthogonal basis vectors of V, one can put them into a standard order while including an overall sign determined by the number of pairwise swaps needed to do so (i.e. the signature of the ordering permutation). If the dimension of V over K is n and {e1, ..., en} is an orthogonal basis of (V, Q), then Cl(V, Q) is free over K with a basis { e i 1 e i 2 ⋯ e i k ∣ 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n and 0 ≤ k ≤ n } . {\displaystyle \{e_{i_{1}}e_{i_{2}}\cdots e_{i_{k}}\mid 1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n{\text{ and }}0\leq k\leq n\}.} The empty product (k = 0) is defined as being the multiplicative identity element. For each value of k there are n choose k basis elements, so the total dimension of the Clifford algebra is dim ⁡ Cl ⁡ ( V , Q ) = ∑ k = 0 n ( n k ) = 2 n . 
{\displaystyle \dim \operatorname {Cl} (V,Q)=\sum _{k=0}^{n}{\binom {n}{k}}=2^{n}.} == Examples: real and complex Clifford algebras == The most important Clifford algebras are those over real and complex vector spaces equipped with nondegenerate quadratic forms. Each of the algebras Clp,q(R) and Cln(C) is isomorphic to A or A ⊕ A, where A is a full matrix ring with entries from R, C, or H. For a complete classification of these algebras see Classification of Clifford algebras. === Real numbers === Clifford algebras are also sometimes referred to as geometric algebras, most often over the real numbers. Every nondegenerate quadratic form on a finite-dimensional real vector space is equivalent to the standard diagonal form: Q ( v ) = v 1 2 + ⋯ + v p 2 − v p + 1 2 − ⋯ − v p + q 2 , {\displaystyle Q(v)=v_{1}^{2}+\dots +v_{p}^{2}-v_{p+1}^{2}-\dots -v_{p+q}^{2},} where n = p + q is the dimension of the vector space. The pair of integers (p, q) is called the signature of the quadratic form. The real vector space with this quadratic form is often denoted Rp,q. The Clifford algebra on Rp,q is denoted Clp,q(R). The symbol Cln(R) means either Cln,0(R) or Cl0,n(R), depending on whether the author prefers positive-definite or negative-definite spaces. A standard basis {e1, ..., en} for Rp,q consists of n = p + q mutually orthogonal vectors, p of which square to +1 and q of which square to −1. The algebra Clp,q(R) is generated by such a basis: p of its generators square to +1 and q of them square to −1. A few low-dimensional cases are: Cl0,0(R) is naturally isomorphic to R since there are no nonzero vectors. Cl0,1(R) is a two-dimensional algebra generated by e1 that squares to −1, and is algebra-isomorphic to C, the field of complex numbers. Cl1,0(R) is a two-dimensional algebra generated by e1 that squares to 1, and is algebra-isomorphic to the split-complex numbers. Cl0,2(R) is a four-dimensional algebra spanned by {1, e1, e2, e1e2}. 
The latter three elements all square to −1 and anticommute, and so the algebra is isomorphic to the quaternions H. Cl2,0(R) ≅ Cl1,1(R) is isomorphic to the algebra of split-quaternions. Cl0,3(R) is an 8-dimensional algebra isomorphic to the direct sum H ⊕ H, the split-biquaternions. Cl3,0(R) ≅ Cl1,2(R), also called the Pauli algebra, is isomorphic to the algebra of biquaternions. === Complex numbers === One can also study Clifford algebras on complex vector spaces. Every nondegenerate quadratic form on a complex vector space of dimension n is equivalent to the standard diagonal form Q ( z ) = z 1 2 + z 2 2 + ⋯ + z n 2 . {\displaystyle Q(z)=z_{1}^{2}+z_{2}^{2}+\dots +z_{n}^{2}.} Thus, for each dimension n, up to isomorphism there is only one Clifford algebra of a complex vector space with a nondegenerate quadratic form. We will denote the Clifford algebra on Cn with the standard quadratic form by Cln(C). For the first few cases one finds that Cl0(C) ≅ C, the complex numbers Cl1(C) ≅ C ⊕ C, the bicomplex numbers Cl2(C) ≅ M2(C), the biquaternions where Mn(C) denotes the algebra of n × n matrices over C. == Examples: constructing quaternions and dual quaternions == === Quaternions === In this section, Hamilton's quaternions are constructed as the even subalgebra of the Clifford algebra Cl3,0(R). Let the vector space V be real three-dimensional space R3, and the quadratic form be the usual quadratic form. Then, for v, w in R3 we have the bilinear form (or scalar product) v ⋅ w = v 1 w 1 + v 2 w 2 + v 3 w 3 . {\displaystyle v\cdot w=v_{1}w_{1}+v_{2}w_{2}+v_{3}w_{3}.} Now introduce the Clifford product of vectors v and w given by v w + w v = 2 ( v ⋅ w ) . 
{\displaystyle vw+wv=2(v\cdot w).} Denote a set of orthogonal unit vectors of R3 as {e1, e2, e3}, then the Clifford product yields the relations e 2 e 3 = − e 3 e 2 , e 1 e 3 = − e 3 e 1 , e 1 e 2 = − e 2 e 1 , {\displaystyle e_{2}e_{3}=-e_{3}e_{2},\,\,\,e_{1}e_{3}=-e_{3}e_{1},\,\,\,e_{1}e_{2}=-e_{2}e_{1},} and e 1 2 = e 2 2 = e 3 2 = 1. {\displaystyle e_{1}^{2}=e_{2}^{2}=e_{3}^{2}=1.} The general element of the Clifford algebra Cl3,0(R) is given by A = a 0 + a 1 e 1 + a 2 e 2 + a 3 e 3 + a 4 e 2 e 3 + a 5 e 1 e 3 + a 6 e 1 e 2 + a 7 e 1 e 2 e 3 . {\displaystyle A=a_{0}+a_{1}e_{1}+a_{2}e_{2}+a_{3}e_{3}+a_{4}e_{2}e_{3}+a_{5}e_{1}e_{3}+a_{6}e_{1}e_{2}+a_{7}e_{1}e_{2}e_{3}.} The linear combination of the even degree elements of Cl3,0(R) defines the even subalgebra Cl[0]3,0(R) with the general element q = q 0 + q 1 e 2 e 3 + q 2 e 1 e 3 + q 3 e 1 e 2 . {\displaystyle q=q_{0}+q_{1}e_{2}e_{3}+q_{2}e_{1}e_{3}+q_{3}e_{1}e_{2}.} The basis elements can be identified with the quaternion basis elements i, j, k as i = e 2 e 3 , j = e 1 e 3 , k = e 1 e 2 , {\displaystyle i=e_{2}e_{3},j=e_{1}e_{3},k=e_{1}e_{2},} which shows that the even subalgebra Cl[0]3,0(R) is Hamilton's real quaternion algebra. To see this, compute i 2 = ( e 2 e 3 ) 2 = e 2 e 3 e 2 e 3 = − e 2 e 2 e 3 e 3 = − 1 , {\displaystyle i^{2}=(e_{2}e_{3})^{2}=e_{2}e_{3}e_{2}e_{3}=-e_{2}e_{2}e_{3}e_{3}=-1,} and i j = e 2 e 3 e 1 e 3 = − e 2 e 3 e 3 e 1 = − e 2 e 1 = e 1 e 2 = k . {\displaystyle ij=e_{2}e_{3}e_{1}e_{3}=-e_{2}e_{3}e_{3}e_{1}=-e_{2}e_{1}=e_{1}e_{2}=k.} Finally, i j k = e 2 e 3 e 1 e 3 e 1 e 2 = − 1. {\displaystyle ijk=e_{2}e_{3}e_{1}e_{3}e_{1}e_{2}=-1.} === Dual quaternions === In this section, dual quaternions are constructed as the even subalgebra of a Clifford algebra of real four-dimensional space with a degenerate quadratic form. Let the vector space V be real four-dimensional space R4, and let the quadratic form Q be a degenerate form derived from the Euclidean metric on R3. 
For v, w in R4 introduce the degenerate bilinear form d ( v , w ) = v 1 w 1 + v 2 w 2 + v 3 w 3 . {\displaystyle d(v,w)=v_{1}w_{1}+v_{2}w_{2}+v_{3}w_{3}.} This degenerate scalar product projects distance measurements in R4 onto the R3 hyperplane. The Clifford product of vectors v and w is given by v w + w v = − 2 d ( v , w ) . {\displaystyle vw+wv=-2\,d(v,w).} Note the negative sign is introduced to simplify the correspondence with quaternions. Denote a set of mutually orthogonal unit vectors of R4 as {e1, e2, e3, e4}, then the Clifford product yields the relations e m e n = − e n e m , m ≠ n , {\displaystyle e_{m}e_{n}=-e_{n}e_{m},\,\,\,m\neq n,} and e 1 2 = e 2 2 = e 3 2 = − 1 , e 4 2 = 0. {\displaystyle e_{1}^{2}=e_{2}^{2}=e_{3}^{2}=-1,\,\,e_{4}^{2}=0.} The general element of the Clifford algebra Cl(R4, d) has 16 components. The linear combination of the even degree elements defines the even subalgebra Cl[0](R4, d) with the general element H = h 0 + h 1 e 2 e 3 + h 2 e 3 e 1 + h 3 e 1 e 2 + h 4 e 4 e 1 + h 5 e 4 e 2 + h 6 e 4 e 3 + h 7 e 1 e 2 e 3 e 4 . {\displaystyle H=h_{0}+h_{1}e_{2}e_{3}+h_{2}e_{3}e_{1}+h_{3}e_{1}e_{2}+h_{4}e_{4}e_{1}+h_{5}e_{4}e_{2}+h_{6}e_{4}e_{3}+h_{7}e_{1}e_{2}e_{3}e_{4}.} The basis elements can be identified with the quaternion basis elements i, j, k and the dual unit ε as i = e 2 e 3 , j = e 3 e 1 , k = e 1 e 2 , ε = e 1 e 2 e 3 e 4 . {\displaystyle i=e_{2}e_{3},j=e_{3}e_{1},k=e_{1}e_{2},\,\,\varepsilon =e_{1}e_{2}e_{3}e_{4}.} This provides the correspondence of Cl[0]0,3,1(R) with dual quaternion algebra. To see this, compute ε 2 = ( e 1 e 2 e 3 e 4 ) 2 = e 1 e 2 e 3 e 4 e 1 e 2 e 3 e 4 = − e 1 e 2 e 3 ( e 4 e 4 ) e 1 e 2 e 3 = 0 , {\displaystyle \varepsilon ^{2}=(e_{1}e_{2}e_{3}e_{4})^{2}=e_{1}e_{2}e_{3}e_{4}e_{1}e_{2}e_{3}e_{4}=-e_{1}e_{2}e_{3}(e_{4}e_{4})e_{1}e_{2}e_{3}=0,} and ε i = ( e 1 e 2 e 3 e 4 ) e 2 e 3 = e 1 e 2 e 3 e 4 e 2 e 3 = e 2 e 3 ( e 1 e 2 e 3 e 4 ) = i ε . 
{\displaystyle \varepsilon i=(e_{1}e_{2}e_{3}e_{4})e_{2}e_{3}=e_{1}e_{2}e_{3}e_{4}e_{2}e_{3}=e_{2}e_{3}(e_{1}e_{2}e_{3}e_{4})=i\varepsilon .} The exchanges of e1 and e4 alternate signs an even number of times, and show the dual unit ε commutes with the quaternion basis elements i, j, k. == Examples: in small dimension == Let K be any field of characteristic not 2. === Dimension 1 === For dim V = 1, if Q has diagonalization diag(a), that is, there is a non-zero vector x such that Q(x) = a, then Cl(V, Q) is algebra-isomorphic to a K-algebra generated by an element x that satisfies x2 = a, the quadratic algebra K[X] / (X2 − a). In particular, if a = 0 (that is, Q is the zero quadratic form) then Cl(V, Q) is algebra-isomorphic to the dual numbers algebra over K. If a is a non-zero square in K, then Cl(V, Q) ≃ K ⊕ K. Otherwise, Cl(V, Q) is isomorphic to the quadratic field extension K(√a) of K. === Dimension 2 === For dim V = 2, if Q has diagonalization diag(a, b) with non-zero a and b (which always exists if Q is non-degenerate), then Cl(V, Q) is isomorphic to a K-algebra generated by elements x and y that satisfy x2 = a, y2 = b and xy = −yx. Thus Cl(V, Q) is isomorphic to the (generalized) quaternion algebra (a, b)K. We retrieve Hamilton's quaternions when a = b = −1, since H = (−1, −1)R. As a special case, if some x in V satisfies Q(x) = 1, then Cl(V, Q) ≃ M2(K). == Properties == === Relation to the exterior algebra === Given a vector space V, one can construct the exterior algebra ⋀V, whose definition is independent of any quadratic form on V. It turns out that if K does not have characteristic 2 then there is a natural isomorphism between ⋀V and Cl(V, Q) considered as vector spaces (and there exists an isomorphism in characteristic two, which may not be natural). This is an algebra isomorphism if and only if Q = 0. One can thus consider the Clifford algebra Cl(V, Q) as an enrichment (or more precisely, a quantization, cf. 
the Introduction) of the exterior algebra on V with a multiplication that depends on Q (one can still define the exterior product independently of Q). The easiest way to establish the isomorphism is to choose an orthogonal basis {e1, ..., en} for V and extend it to a basis for Cl(V, Q) as described above. The map Cl(V, Q) → ⋀V is determined by e i 1 e i 2 ⋯ e i k ↦ e i 1 ∧ e i 2 ∧ ⋯ ∧ e i k . {\displaystyle e_{i_{1}}e_{i_{2}}\cdots e_{i_{k}}\mapsto e_{i_{1}}\wedge e_{i_{2}}\wedge \cdots \wedge e_{i_{k}}.} Note that this works only if the basis {e1, ..., en} is orthogonal. One can show that this map is independent of the choice of orthogonal basis and so gives a natural isomorphism. If the characteristic of K is 0, one can also establish the isomorphism by antisymmetrizing. Define functions fk : V × ⋯ × V → Cl(V, Q) by f k ( v 1 , … , v k ) = 1 k ! ∑ σ ∈ S k sgn ⁡ ( σ ) v σ ( 1 ) ⋯ v σ ( k ) {\displaystyle f_{k}(v_{1},\ldots ,v_{k})={\frac {1}{k!}}\sum _{\sigma \in \mathrm {S} _{k}}\operatorname {sgn}(\sigma )\,v_{\sigma (1)}\cdots v_{\sigma (k)}} where the sum is taken over the symmetric group on k elements, Sk. Since fk is alternating, it induces a unique linear map ⋀k V → Cl(V, Q). The direct sum of these maps gives a linear map between ⋀V and Cl(V, Q). This map can be shown to be a linear isomorphism, and it is natural. A more sophisticated way to view the relationship is to construct a filtration on Cl(V, Q). Recall that the tensor algebra T(V) has a natural filtration: F0 ⊂ F1 ⊂ F2 ⊂ ⋯, where Fk contains sums of tensors with order ≤ k. Projecting this down to the Clifford algebra gives a filtration on Cl(V, Q). The associated graded algebra Gr F ⁡ Cl ⁡ ( V , Q ) = ⨁ k F k / F k − 1 {\displaystyle \operatorname {Gr} _{F}\operatorname {Cl} (V,Q)=\bigoplus _{k}F^{k}/F^{k-1}} is naturally isomorphic to the exterior algebra ⋀V. 
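For k = 2 the antisymmetrizer above reduces to f2(u, v) = (uv − vu)/2. A quick numerical check in the Pauli-matrix model of Cl3,0(R) (a standard 2 × 2 complex representation; the helper name f2 is ours) shows it reproduces the Clifford product of orthogonal basis vectors and vanishes on parallel vectors, exactly like the wedge product:

```python
import numpy as np

# Pauli matrices: a model of Cl_{3,0}(R) with s[i] s[j] + s[j] s[i] = 2 delta_ij I
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def f2(u, v):
    """k = 2 antisymmetrized Clifford product, (uv - vu)/2."""
    return (u @ v - v @ u) / 2

assert np.allclose(f2(s[0], s[1]), s[0] @ s[1])   # equals the blade e1 e2
assert np.allclose(f2(s[0], s[0]), 0)             # vanishes on parallel vectors
```

On an orthogonal pair the Clifford and exterior products agree, which is why the antisymmetrized maps assemble into the vector-space isomorphism with ⋀V described above.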
Since the associated graded algebra of a filtered algebra is always isomorphic to the filtered algebra as filtered vector spaces (by choosing complements of Fk in Fk+1 for all k), this provides an isomorphism (although not a natural one) in any characteristic, even two. === Grading === In the following, assume that the characteristic is not 2. Clifford algebras are Z2-graded algebras (also known as superalgebras). Indeed, the linear map on V defined by v ↦ −v (reflection through the origin) preserves the quadratic form Q and so by the universal property of Clifford algebras extends to an algebra automorphism α : Cl ⁡ ( V , Q ) → Cl ⁡ ( V , Q ) . {\displaystyle \alpha :\operatorname {Cl} (V,Q)\to \operatorname {Cl} (V,Q).} Since α is an involution (i.e. it squares to the identity) one can decompose Cl(V, Q) into positive and negative eigenspaces of α Cl ⁡ ( V , Q ) = Cl [ 0 ] ⁡ ( V , Q ) ⊕ Cl [ 1 ] ⁡ ( V , Q ) {\displaystyle \operatorname {Cl} (V,Q)=\operatorname {Cl} ^{[0]}(V,Q)\oplus \operatorname {Cl} ^{[1]}(V,Q)} where Cl [ i ] ⁡ ( V , Q ) = { x ∈ Cl ⁡ ( V , Q ) ∣ α ( x ) = ( − 1 ) i x } . {\displaystyle \operatorname {Cl} ^{[i]}(V,Q)=\left\{x\in \operatorname {Cl} (V,Q)\mid \alpha (x)=(-1)^{i}x\right\}.} Since α is an automorphism it follows that: Cl [ i ] ⁡ ( V , Q ) Cl [ j ] ⁡ ( V , Q ) = Cl [ i + j ] ⁡ ( V , Q ) {\displaystyle \operatorname {Cl} ^{[i]}(V,Q)\operatorname {Cl} ^{[j]}(V,Q)=\operatorname {Cl} ^{[i+j]}(V,Q)} where the bracketed superscripts are read modulo 2. This gives Cl(V, Q) the structure of a Z2-graded algebra. The subspace Cl[0](V, Q) forms a subalgebra of Cl(V, Q), called the even subalgebra. The subspace Cl[1](V, Q) is called the odd part of Cl(V, Q) (it is not a subalgebra). This Z2-grading plays an important role in the analysis and application of Clifford algebras. The automorphism α is called the main involution or grade involution. Elements that are pure in this Z2-grading are simply said to be even or odd. Remark. 
The Clifford algebra is not a Z-graded algebra, but is Z-filtered, where Cl≤i(V, Q) is the subspace spanned by all products of at most i elements of V. Cl ⩽ i ⁡ ( V , Q ) ⋅ Cl ⩽ j ⁡ ( V , Q ) ⊂ Cl ⩽ i + j ⁡ ( V , Q ) . {\displaystyle \operatorname {Cl} ^{\leqslant i}(V,Q)\cdot \operatorname {Cl} ^{\leqslant j}(V,Q)\subset \operatorname {Cl} ^{\leqslant i+j}(V,Q).} The degree of a Clifford number usually refers to the degree in the Z-grading. The even subalgebra Cl[0](V, Q) of a Clifford algebra is itself isomorphic to a Clifford algebra. If V is the orthogonal direct sum of a vector a of nonzero norm Q(a) and a subspace U, then Cl[0](V, Q) is isomorphic to Cl(U, −Q(a)Q|U), where Q|U is the form Q restricted to U. In particular over the reals this implies that: Cl p , q [ 0 ] ⁡ ( R ) ≅ { Cl p , q − 1 ⁡ ( R ) q > 0 Cl q , p − 1 ⁡ ( R ) p > 0 {\displaystyle \operatorname {Cl} _{p,q}^{[0]}(\mathbf {R} )\cong {\begin{cases}\operatorname {Cl} _{p,q-1}(\mathbf {R} )&q>0\\\operatorname {Cl} _{q,p-1}(\mathbf {R} )&p>0\end{cases}}} In the negative-definite case this gives an inclusion Cl0,n − 1(R) ⊂ Cl0,n(R), which extends the sequence R ⊂ C ⊂ H ⊂ H ⊕ H ⊂ ⋯. Likewise, in the complex case, one can show that the even subalgebra of Cln(C) is isomorphic to Cln−1(C). === Antiautomorphisms === In addition to the automorphism α, there are two antiautomorphisms that play an important role in the analysis of Clifford algebras. Recall that the tensor algebra T(V) comes with an antiautomorphism that reverses the order in all products of vectors: v 1 ⊗ v 2 ⊗ ⋯ ⊗ v k ↦ v k ⊗ ⋯ ⊗ v 2 ⊗ v 1 . {\displaystyle v_{1}\otimes v_{2}\otimes \cdots \otimes v_{k}\mapsto v_{k}\otimes \cdots \otimes v_{2}\otimes v_{1}.} Since the ideal IQ is invariant under this reversal, this operation descends to an antiautomorphism of Cl(V, Q) called the transpose or reversal operation, denoted by xt. The transpose is an antiautomorphism: (xy)t = yt xt. 
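The sign that reversal produces on a degree-k blade can be obtained by counting the pairwise anticommuting swaps needed to reverse k orthogonal vector factors; combining it with the grade involution α gives the sign of their composite as well. The sketch below (function names are our own, not standard API) verifies that all three signs depend only on k mod 4:

```python
def reversal_sign(k):
    """Reversing v1...vk applies the permutation i -> k-1-i; each inversion
    of that permutation is one anticommuting swap, contributing a factor -1."""
    perm = list(range(k))[::-1]
    inversions = sum(1 for a in range(k) for b in range(a + 1, k)
                     if perm[a] > perm[b])
    return (-1) ** inversions          # equals (-1)**(k*(k-1)//2)

def grade_involution_sign(k):
    """alpha sends each vector factor to its negative, so a degree-k blade gets (-1)^k."""
    return (-1) ** k

def composite_sign(k):
    """Sign of the grade involution composed with reversal (Clifford conjugation)."""
    return grade_involution_sign(k) * reversal_sign(k)

# The three signs repeat with period 4 in k: +,-,+,- / +,+,-,- / +,-,-,+
table = {0: (1, 1, 1), 1: (-1, 1, -1), 2: (1, -1, -1), 3: (-1, -1, 1)}
for k in range(12):
    signs = (grade_involution_sign(k), reversal_sign(k), composite_sign(k))
    assert signs == table[k % 4]
```

Counting inversions directly (rather than using the closed form k(k−1)/2) makes the check non-circular: the parity comes out of the permutation itself.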
The transpose operation makes no use of the Z2-grading so we define a second antiautomorphism by composing α and the transpose. We call this operation Clifford conjugation, denoted x ¯ {\displaystyle {\bar {x}}} x ¯ = α ( x t ) = α ( x ) t . {\displaystyle {\bar {x}}=\alpha (x^{\mathrm {t} })=\alpha (x)^{\mathrm {t} }.} Of the two antiautomorphisms, the transpose is the more fundamental. Note that all of these operations are involutions. One can show that they act as ±1 on elements that are pure in the Z-grading. In fact, all three operations depend on only the degree modulo 4. That is, if x is pure with degree k then α ( x ) = ± x x t = ± x x ¯ = ± x {\displaystyle \alpha (x)=\pm x\qquad x^{\mathrm {t} }=\pm x\qquad {\bar {x}}=\pm x} where the signs are given by the following table:

k mod 4    0    1    2    3
α(x)       +x   −x   +x   −x
xt         +x   +x   −x   −x
x̄          +x   −x   −x   +x

=== Clifford scalar product === When the characteristic is not 2, the quadratic form Q on V can be extended to a quadratic form on all of Cl(V, Q) (which we also denote by Q). A basis-independent definition of one such extension is Q ( x ) = ⟨ x t x ⟩ 0 {\displaystyle Q(x)=\left\langle x^{\mathrm {t} }x\right\rangle _{0}} where ⟨a⟩0 denotes the scalar part of a (the degree-0 part in the Z-grading). One can show that Q ( v 1 v 2 ⋯ v k ) = Q ( v 1 ) Q ( v 2 ) ⋯ Q ( v k ) {\displaystyle Q(v_{1}v_{2}\cdots v_{k})=Q(v_{1})Q(v_{2})\cdots Q(v_{k})} where the vi are elements of V – this identity is not true for arbitrary elements of Cl(V, Q). The associated symmetric bilinear form on Cl(V, Q) is given by ⟨ x , y ⟩ = ⟨ x t y ⟩ 0 . {\displaystyle \langle x,y\rangle =\left\langle x^{\mathrm {t} }y\right\rangle _{0}.} One can check that this reduces to the original bilinear form when restricted to V. The bilinear form on all of Cl(V, Q) is nondegenerate if and only if it is nondegenerate on V. 
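That the extended bilinear form ⟨x, y⟩ = ⟨xt y⟩0 restricts to the original scalar product on V can be spot-checked numerically in the Pauli-matrix representation of Cl3,0(R), where a vector v embeds as v · σ, reversal acts trivially on vectors, and the scalar part is Re(tr(·))/2. The matrix model and helper names are our illustrative choices:

```python
import numpy as np

# Pauli matrices: sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij I, a model of Cl_{3,0}(R)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def embed(v):
    """v in R^3 as the degree-1 element v . sigma of the Pauli model."""
    return sum(c * m for c, m in zip(v, s))

def scalar_part(x):
    """Degree-0 part of x: the coefficient of the identity, Re(tr x)/2."""
    return np.real(np.trace(x)) / 2

rng = np.random.default_rng(1)
v, w = rng.normal(size=3), rng.normal(size=3)

# <v, w> = <v^t w>_0 recovers the Euclidean scalar product on V
# (v^t = v here, since reversal is trivial on vectors)
assert np.isclose(scalar_part(embed(v) @ embed(w)), v @ w)
```

The check works because embed(v) @ embed(w) equals (v · w)·I plus a traceless bivector part, so only the scalar product survives the trace.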
The operator of left (respectively right) Clifford multiplication by the transpose at of an element a is the adjoint of left (respectively right) Clifford multiplication by a with respect to this inner product. That is, ⟨ a x , y ⟩ = ⟨ x , a t y ⟩ , {\displaystyle \langle ax,y\rangle =\left\langle x,a^{\mathrm {t} }y\right\rangle ,} and ⟨ x a , y ⟩ = ⟨ x , y a t ⟩ . {\displaystyle \langle xa,y\rangle =\left\langle x,ya^{\mathrm {t} }\right\rangle .} == Structure of Clifford algebras == In this section we assume that the characteristic is not 2, that the vector space V is finite-dimensional, and that the associated symmetric bilinear form of Q is nondegenerate. A central simple algebra over K is a matrix algebra over a (finite-dimensional) division algebra with center K. For example, the central simple algebras over the reals are matrix algebras over either the reals or the quaternions. If V has even dimension then Cl(V, Q) is a central simple algebra over K. If V has even dimension then the even subalgebra Cl[0](V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K. If V has odd dimension then Cl(V, Q) is a central simple algebra over a quadratic extension of K or a sum of two isomorphic central simple algebras over K. If V has odd dimension then the even subalgebra Cl[0](V, Q) is a central simple algebra over K. The structure of Clifford algebras can be worked out explicitly using the following result. Suppose that U has even dimension and a non-singular bilinear form with discriminant d, and suppose that V is another vector space with a quadratic form. The Clifford algebra of U + V is isomorphic to the tensor product of the Clifford algebras of U and (−1)dim(U)/2dV, which is the space V with its quadratic form multiplied by (−1)dim(U)/2d. 
Over the reals, this implies in particular that Cl p + 2 , q ⁡ ( R ) = M 2 ( R ) ⊗ Cl q , p ⁡ ( R ) {\displaystyle \operatorname {Cl} _{p+2,q}(\mathbf {R} )=\mathrm {M} _{2}(\mathbf {R} )\otimes \operatorname {Cl} _{q,p}(\mathbf {R} )} Cl p + 1 , q + 1 ⁡ ( R ) = M 2 ( R ) ⊗ Cl p , q ⁡ ( R ) {\displaystyle \operatorname {Cl} _{p+1,q+1}(\mathbf {R} )=\mathrm {M} _{2}(\mathbf {R} )\otimes \operatorname {Cl} _{p,q}(\mathbf {R} )} Cl p , q + 2 ⁡ ( R ) = H ⊗ Cl q , p ⁡ ( R ) . {\displaystyle \operatorname {Cl} _{p,q+2}(\mathbf {R} )=\mathbf {H} \otimes \operatorname {Cl} _{q,p}(\mathbf {R} ).} These formulas can be used to find the structure of all real Clifford algebras and all complex Clifford algebras; see the classification of Clifford algebras. Notably, the Morita equivalence class of a Clifford algebra (its representation theory: the equivalence class of the category of modules over it) depends on only the signature (p − q) mod 8. This is an algebraic form of Bott periodicity. == Lipschitz group == The class of Lipschitz groups (a.k.a. Clifford groups or Clifford–Lipschitz groups) was discovered by Rudolf Lipschitz. In this section we assume that V is finite-dimensional and the quadratic form Q is nondegenerate. An action on the elements of a Clifford algebra by its group of units may be defined in terms of a twisted conjugation: twisted conjugation by x maps y ↦ α(x) y x−1, where α is the main involution defined above. The Lipschitz group Γ is defined to be the set of invertible elements x that stabilize the set of vectors under this action, meaning that for all v in V we have: α ( x ) v x − 1 ∈ V . {\displaystyle \alpha (x)vx^{-1}\in V.} This formula also defines an action of the Lipschitz group on the vector space V that preserves the quadratic form Q, and so gives a homomorphism from the Lipschitz group to the orthogonal group. 
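For a vector r with Q(r) ≠ 0, twisted conjugation v ↦ α(r) v r−1 acts on V as the Euclidean reflection in the hyperplane orthogonal to r. Here is a numerical spot-check in the Pauli-matrix model of Cl3,0(R); the embedding helpers and names are our own illustrative assumptions:

```python
import numpy as np

# Pauli matrices, a matrix model of Cl_{3,0}(R)
s = np.array([[[0, 1], [1, 0]],
              [[0, -1j], [1j, 0]],
              [[1, 0], [0, -1]]], dtype=complex)

def embed(v):
    """v in R^3 as v . sigma inside the Pauli model."""
    return np.einsum('i,ijk->jk', v, s)

def vector_part(m):
    """Coefficients of sigma_1..sigma_3 (valid when m is the image of a vector)."""
    return np.real(np.array([np.trace(m @ si) for si in s])) / 2

rng = np.random.default_rng(0)
r, v = rng.normal(size=3), rng.normal(size=3)

# twisted conjugation alpha(r) v r^{-1}; alpha(r) = -r since r has degree 1
R, V = embed(r), embed(v)
twisted = -R @ V @ np.linalg.inv(R)

# the Householder reflection of v in the hyperplane orthogonal to r
reflected = v - 2 * (r @ v) / (r @ r) * r

assert np.allclose(vector_part(twisted), reflected)
```

The identity behind the check is R V R = 2(r · v)R − (r · r)V, so dividing by R2 = (r · r)I and negating yields exactly the reflection formula.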
The Lipschitz group contains all elements r of V for which Q(r) is invertible in K, and these act on V by the corresponding reflections that take v to v − (⟨r, v⟩ + ⟨v, r⟩)r‍/‍Q(r). (In characteristic 2 these are called orthogonal transvections rather than reflections.) If V is a finite-dimensional real vector space with a non-degenerate quadratic form then the Lipschitz group maps onto the orthogonal group of V with respect to the form (by the Cartan–Dieudonné theorem) and the kernel consists of the nonzero elements of the field K. This leads to exact sequences 1 → K × → Γ → O V ⁡ ( K ) → 1 , {\displaystyle 1\rightarrow K^{\times }\rightarrow \Gamma \rightarrow \operatorname {O} _{V}(K)\rightarrow 1,} 1 → K × → Γ 0 → SO V ⁡ ( K ) → 1. {\displaystyle 1\rightarrow K^{\times }\rightarrow \Gamma ^{0}\rightarrow \operatorname {SO} _{V}(K)\rightarrow 1.} Over other fields or with indefinite forms, the map is not in general onto, and the failure is captured by the spinor norm. === Spinor norm === In arbitrary characteristic, the spinor norm Q is defined on the Lipschitz group by Q ( x ) = x t x . {\displaystyle Q(x)=x^{\mathrm {t} }x.} It is a homomorphism from the Lipschitz group to the group K× of non-zero elements of K. It coincides with the quadratic form Q of V when V is identified with a subspace of the Clifford algebra. Several authors define the spinor norm slightly differently, so that it differs from the one here by a factor of −1, 2, or −2 on Γ1. The difference is not very important in characteristic other than 2. The nonzero elements of K have spinor norm in the group (K×)2 of squares of nonzero elements of the field K. So when V is finite-dimensional and non-singular we get an induced map from the orthogonal group of V to the group K×‍/‍(K×)2, also called the spinor norm. The spinor norm of the reflection about r⊥, for any vector r, has image Q(r) in K×‍/‍(K×)2, and this property uniquely defines it on the orthogonal group. 
This gives exact sequences: 1 → { ± 1 } → Pin V ⁡ ( K ) → O V ⁡ ( K ) → K × / ( K × ) 2 , 1 → { ± 1 } → Spin V ⁡ ( K ) → SO V ⁡ ( K ) → K × / ( K × ) 2 . {\displaystyle {\begin{aligned}1\to \{\pm 1\}\to \operatorname {Pin} _{V}(K)&\to \operatorname {O} _{V}(K)\to K^{\times }/\left(K^{\times }\right)^{2},\\1\to \{\pm 1\}\to \operatorname {Spin} _{V}(K)&\to \operatorname {SO} _{V}(K)\to K^{\times }/\left(K^{\times }\right)^{2}.\end{aligned}}} Note that in characteristic 2 the group {±1} has just one element. From the point of view of Galois cohomology of algebraic groups, the spinor norm is a connecting homomorphism on cohomology. Writing μ2 for the algebraic group of square roots of 1 (over a field of characteristic not 2 it is roughly the same as a two-element group with trivial Galois action), the short exact sequence 1 → μ 2 → Pin V → O V → 1 {\displaystyle 1\to \mu _{2}\rightarrow \operatorname {Pin} _{V}\rightarrow \operatorname {O} _{V}\rightarrow 1} yields a long exact sequence on cohomology, which begins 1 → H 0 ( μ 2 ; K ) → H 0 ( Pin V ; K ) → H 0 ( O V ; K ) → H 1 ( μ 2 ; K ) . {\displaystyle 1\to H^{0}(\mu _{2};K)\to H^{0}(\operatorname {Pin} _{V};K)\to H^{0}(\operatorname {O} _{V};K)\to H^{1}(\mu _{2};K).} The 0th Galois cohomology group of an algebraic group with coefficients in K is just the group of K-valued points: H0(G; K) = G(K), and H1(μ2; K) ≅ K×‍/‍(K×)2, which recovers the previous sequence 1 → { ± 1 } → Pin V ⁡ ( K ) → O V ⁡ ( K ) → K × / ( K × ) 2 , {\displaystyle 1\to \{\pm 1\}\to \operatorname {Pin} _{V}(K)\to \operatorname {O} _{V}(K)\to K^{\times }/\left(K^{\times }\right)^{2},} where the spinor norm is the connecting homomorphism H0(OV; K) → H1(μ2; K). == Spin and pin groups == In this section we assume that V is finite-dimensional and its bilinear form is non-singular. 
The pin group PinV(K) is the subgroup of the Lipschitz group Γ of elements of spinor norm 1, and similarly the spin group SpinV(K) is the subgroup of elements of Dickson invariant 0 in PinV(K). When the characteristic is not 2, these are the elements of determinant 1. The spin group usually has index 2 in the pin group. Recall from the previous section that there is a homomorphism from the Lipschitz group onto the orthogonal group. We define the special orthogonal group to be the image of Γ0. If K does not have characteristic 2 this is just the group of elements of the orthogonal group of determinant 1. If K does have characteristic 2, then all elements of the orthogonal group have determinant 1, and the special orthogonal group is the set of elements of Dickson invariant 0. There is a homomorphism from the pin group to the orthogonal group. The image consists of the elements of spinor norm 1 ∈ K×‍/‍(K×)2. The kernel consists of the elements +1 and −1, and has order 2 unless K has characteristic 2. Similarly there is a homomorphism from the Spin group to the special orthogonal group of V. In the common case when V is a positive or negative definite space over the reals, the spin group maps onto the special orthogonal group, and is simply connected when V has dimension at least 3. Further the kernel of this homomorphism consists of 1 and −1. So in this case the spin group, Spin(n), is a double cover of SO(n). Note, however, that the simple connectedness of the spin group is not true in general: if V is Rp,q for p and q both at least 2 then the spin group is not simply connected. In this case the algebraic group Spinp,q is simply connected as an algebraic group, even though its group of real valued points Spinp,q(R) is not simply connected. This is a rather subtle point, which completely confused the authors of at least one standard book about spin groups. 
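The double cover Spin(3) → SO(3) can be made concrete with unit quaternions, which (as constructed earlier) realize the even subalgebra of Cl3,0(R): q and −q induce the same rotation, exhibiting the kernel {1, −1}. A minimal sketch with our own helper names:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotation_matrix(q):
    """SO(3) image of a unit quaternion q: v -> q v q^{-1} on pure quaternions."""
    conj = q * np.array([1, -1, -1, -1])   # q^{-1} for unit q
    cols = []
    for e in np.eye(3):
        v = np.concatenate(([0.0], e))     # basis vector as a pure quaternion
        cols.append(quat_mul(quat_mul(q, v), conj)[1:])
    return np.array(cols).T

q = np.array([np.cos(0.3), 0.0, np.sin(0.3), 0.0])   # a unit quaternion

R1, R2 = rotation_matrix(q), rotation_matrix(-q)
assert np.allclose(R1, R2)                 # q and -q give the same rotation
assert np.allclose(R1 @ R1.T, np.eye(3))   # the image is orthogonal ...
assert np.isclose(np.linalg.det(R1), 1.0)  # ... with determinant +1
```

The assertions verify exactly the statements above: the map lands in the special orthogonal group and is two-to-one, with ±q identified.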
== Spinors == Clifford algebras Clp,q(C), with p + q = 2n even, are matrix algebras that have a complex representation of dimension 2^n. By restricting to the group Pinp,q(R) we get a complex representation of the Pin group of the same dimension, called the spin representation. If we restrict this to the spin group Spinp,q(R) then it splits as the sum of two half spin representations (or Weyl representations) of dimension 2^(n−1). If p + q = 2n + 1 is odd then the Clifford algebra Clp,q(C) is a sum of two matrix algebras, each of which has a representation of dimension 2^n, and these are also both representations of the pin group Pinp,q(R). On restriction to the spin group Spinp,q(R) these become isomorphic, so the spin group has a complex spinor representation of dimension 2^n. More generally, spinor groups and pin groups over any field have similar representations whose exact structure depends on the structure of the corresponding Clifford algebras: whenever a Clifford algebra has a factor that is a matrix algebra over some division algebra, we get a corresponding representation of the pin and spin groups over that division algebra. For examples over the reals see the article on spinors. === Real spinors === To describe the real spin representations, one must know how the spin group sits inside its Clifford algebra. The pin group, Pinp,q, is the set of invertible elements in Clp,q that can be written as a product of unit vectors: P i n p , q = { v 1 v 2 ⋯ v r ∣ ∀ i ‖ v i ‖ = ± 1 } . {\displaystyle \mathrm {Pin} _{p,q}=\left\{v_{1}v_{2}\cdots v_{r}\mid \forall i\,\|v_{i}\|=\pm 1\right\}.} Comparing with the above concrete realizations of the Clifford algebras, the pin group corresponds to the products of arbitrarily many reflections: it is a cover of the full orthogonal group O(p, q). The spin group consists of those elements of Pinp,q that are products of an even number of unit vectors. 
Thus by the Cartan–Dieudonné theorem Spin is a cover of the group of proper rotations SO(p, q). Let α : Cl → Cl be the automorphism that is given by the mapping v ↦ −v acting on pure vectors. Then in particular, Spinp,q is the subgroup of Pinp,q whose elements are fixed by α. Let Cl p , q [ 0 ] = { x ∈ Cl p , q ∣ α ( x ) = x } . {\displaystyle \operatorname {Cl} _{p,q}^{[0]}=\{x\in \operatorname {Cl} _{p,q}\mid \alpha (x)=x\}.} (These are precisely the elements of even degree in Clp,q.) Then the spin group lies within Cl[0]p,q. The irreducible representations of Clp,q restrict to give representations of the pin group. Conversely, since the pin group is generated by unit vectors, all of its irreducible representations are induced in this manner. Thus the two representations coincide. For the same reasons, the irreducible representations of the spin group coincide with the irreducible representations of Cl[0]p,q. To classify the pin representations, one need only appeal to the classification of Clifford algebras. To find the spin representations (which are representations of the even subalgebra), one can first make use of either of the isomorphisms (see above) Cl p , q [ 0 ] ≈ Cl p , q − 1 , for q > 0 {\displaystyle \operatorname {Cl} _{p,q}^{[0]}\approx \operatorname {Cl} _{p,q-1},{\text{ for }}q>0} Cl p , q [ 0 ] ≈ Cl q , p − 1 , for p > 0 {\displaystyle \operatorname {Cl} _{p,q}^{[0]}\approx \operatorname {Cl} _{q,p-1},{\text{ for }}p>0} and realize a spin representation in signature (p, q) as a pin representation in either signature (p, q − 1) or (q, p − 1). == Applications == === Differential geometry === One of the principal applications of the exterior algebra is in differential geometry, where it is used to define the bundle of differential forms on a smooth manifold. In the case of a (pseudo-)Riemannian manifold, the tangent spaces come equipped with a natural quadratic form induced by the metric.
Thus, one can define a Clifford bundle in analogy with the exterior bundle. This has a number of important applications in Riemannian geometry. Perhaps more important is the link to a spin manifold, its associated spinor bundle and spinc manifolds. === Physics === Clifford algebras have numerous important applications in physics. Physicists usually consider a Clifford algebra to be an algebra that has a basis that is generated by the matrices γ0, ..., γ3, called Dirac matrices, which have the property that γ i γ j + γ j γ i = 2 η i j , {\displaystyle \gamma _{i}\gamma _{j}+\gamma _{j}\gamma _{i}=2\eta _{ij},} where η is the matrix of a quadratic form of signature (1, 3) (or (3, 1) corresponding to the two equivalent choices of metric signature). These are exactly the defining relations for the Clifford algebra Cl1,3(R), whose complexification is Cl1,3(R)C, which, by the classification of Clifford algebras, is isomorphic to the algebra of 4 × 4 complex matrices Cl4(C) ≈ M4(C). However, it is best to retain the notation Cl1,3(R)C, since any transformation that takes the bilinear form to the canonical form is not a Lorentz transformation of the underlying spacetime. The Clifford algebra of spacetime used in physics thus has more structure than Cl4(C). It has in addition a set of preferred transformations – Lorentz transformations. Whether complexification is necessary to begin with depends in part on conventions used and in part on how much one wants to incorporate straightforwardly, but complexification is most often necessary in quantum mechanics where the spin representation of the Lie algebra so(1, 3) sitting inside the Clifford algebra conventionally requires a complex Clifford algebra. For reference, the spin Lie algebra is given by σ μ ν = − i 4 [ γ μ , γ ν ] , [ σ μ ν , σ ρ τ ] = i ( η τ μ σ ρ ν + η ν τ σ μ ρ − η ρ μ σ τ ν − η ν ρ σ μ τ ) . 
{\displaystyle {\begin{aligned}\sigma ^{\mu \nu }&=-{\frac {i}{4}}\left[\gamma ^{\mu },\,\gamma ^{\nu }\right],\\\left[\sigma ^{\mu \nu },\,\sigma ^{\rho \tau }\right]&=i\left(\eta ^{\tau \mu }\sigma ^{\rho \nu }+\eta ^{\nu \tau }\sigma ^{\mu \rho }-\eta ^{\rho \mu }\sigma ^{\tau \nu }-\eta ^{\nu \rho }\sigma ^{\mu \tau }\right).\end{aligned}}} This is in the (3, 1) convention, hence fits in Cl3,1(R)C. The Dirac matrices were first written down by Paul Dirac when he was trying to write a relativistic first-order wave equation for the electron, and give an explicit isomorphism from the Clifford algebra to the algebra of complex matrices. The result was used to define the Dirac equation and introduce the Dirac operator. The entire Clifford algebra shows up in quantum field theory in the form of Dirac field bilinears. The use of Clifford algebras to describe quantum theory has been advanced among others by Mario Schönberg, by David Hestenes in terms of geometric calculus, by David Bohm and Basil Hiley and co-workers in the form of a hierarchy of Clifford algebras, and by Elio Conte et al. === Computer vision === Clifford algebras have been applied in the problem of action recognition and classification in computer vision. Rodriguez et al. propose a Clifford embedding to generalize traditional MACH filters to video (3D spatiotemporal volume), and vector-valued data such as optical flow. Vector-valued data is analyzed using the Clifford Fourier Transform. Based on these vectors, action filters are synthesized in the Clifford Fourier domain and recognition of actions is performed using Clifford correlation. The authors demonstrate the effectiveness of the Clifford embedding by recognizing actions typically performed in classic feature films and sports broadcast television. == Generalizations == While this article focuses on a Clifford algebra of a vector space over a field, the definition extends without change to a module over any unital, associative, commutative ring.
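The defining relation of the Dirac matrices quoted in the physics subsection above can be checked numerically. A minimal sketch in the standard Dirac representation (one concrete matrix choice among many; NumPy is assumed, and the relation is independent of the representation chosen):

```python
import numpy as np

# Pauli matrices, the 2x2 building blocks of the Dirac representation.
s = [np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]
Z = np.zeros((2, 2))
I2 = np.eye(2)

gamma = [np.block([[I2, Z], [Z, -I2]])]               # gamma^0
gamma += [np.block([[Z, sk], [-sk, Z]]) for sk in s]  # gamma^1, gamma^2, gamma^3
eta = np.diag([1.0, -1.0, -1.0, -1.0])                # metric, signature (1, 3)

# Verify gamma^mu gamma^nu + gamma^nu gamma^mu = 2 eta^{mu nu} I for all pairs.
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
```

The same matrices can be fed into the commutator formula for σ^{μν} above to generate the spin Lie algebra explicitly.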
Clifford algebras may be generalized to a form of degree higher than quadratic over a vector space.
Wikipedia/Clifford_algebra
Two-point tensors, or double vectors, are tensor-like quantities which transform as Euclidean vectors with respect to each of their indices. They are used in continuum mechanics to transform between reference ("material") and present ("configuration") coordinates. Examples include the deformation gradient and the first Piola–Kirchhoff stress tensor. As with many applications of tensors, Einstein summation notation is frequently used. To clarify this notation, capital indices are often used to indicate reference coordinates and lowercase for present coordinates. Thus, a two-point tensor will have one capital and one lower-case index; for example, AjM. == Continuum mechanics == A conventional tensor can be viewed as a transformation of vectors in one coordinate system to other vectors in the same coordinate system. In contrast, a two-point tensor transforms vectors from one coordinate system to another. That is, a conventional tensor, Q = Q p q ( e p ⊗ e q ) {\displaystyle \mathbf {Q} =Q_{pq}(\mathbf {e} _{p}\otimes \mathbf {e} _{q})} , actively transforms a vector u to a vector v such that v = Q u {\displaystyle \mathbf {v} =\mathbf {Q} \mathbf {u} } where v and u are measured in the same space and their coordinate representations are with respect to the same basis (denoted by the "e"). In contrast, a two-point tensor, G, will be written as G = G p q ( e p ⊗ E q ) {\displaystyle \mathbf {G} =G_{pq}(\mathbf {e} _{p}\otimes \mathbf {E} _{q})} and will transform a vector, U, in the E system to a vector, v, in the e system as v = G U {\displaystyle \mathbf {v} =\mathbf {GU} } . == The transformation law for a two-point tensor == Suppose we have two coordinate systems, one primed and one unprimed, in which a vector's components transform between them as v p ′ = Q p q v q {\displaystyle v'_{p}=Q_{pq}v_{q}} . For tensors, suppose we have T p q ( e p ⊗ e q ) {\displaystyle T_{pq}(e_{p}\otimes e_{q})} , a tensor in the system e i {\displaystyle e_{i}} .
In another system, let the same tensor be given by T p q ′ ( e p ′ ⊗ e q ′ ) {\displaystyle T'_{pq}(e'_{p}\otimes e'_{q})} . We can say T i j ′ = Q i p Q j r T p r {\displaystyle T'_{ij}=Q_{ip}Q_{jr}T_{pr}} . Then T ′ = Q T Q T {\displaystyle T'=QTQ^{\mathsf {T}}} is the routine tensor transformation. But a two-point tensor between these systems is just F p q ( e p ′ ⊗ e q ) {\displaystyle F_{pq}(e'_{p}\otimes e_{q})} which transforms as F ′ = Q F {\displaystyle F'=QF} . == Simple example == The most mundane example of a two-point tensor is the transformation tensor, the Q in the above discussion. Note that v p ′ = Q p q u q {\displaystyle v'_{p}=Q_{pq}u_{q}} . Writing out in full, u = u q e q {\displaystyle u=u_{q}e_{q}} and v = v p ′ e p ′ {\displaystyle v=v'_{p}e'_{p}} . This requires Q to be of the form Q p q ( e p ′ ⊗ e q ) {\displaystyle Q_{pq}(e'_{p}\otimes e_{q})} . By the definition of the tensor product, ( e p ′ ⊗ e q ) e r = δ q r e p ′ {\displaystyle (e'_{p}\otimes e_{q})e_{r}=\delta _{qr}e'_{p}} , so we can write v p ′ e p ′ = ( Q p q ( e p ′ ⊗ e q ) ) ( u r e r ) = Q p q u r ( e p ′ ⊗ e q ) e r = Q p q u r δ q r e p ′ = Q p q u q e p ′ {\displaystyle v'_{p}e'_{p}=(Q_{pq}(e'_{p}\otimes e_{q}))(u_{r}e_{r})=Q_{pq}u_{r}(e'_{p}\otimes e_{q})e_{r}=Q_{pq}u_{r}\delta _{qr}e'_{p}=Q_{pq}u_{q}e'_{p}} , recovering the component transformation v p ′ = Q p q u q {\displaystyle v'_{p}=Q_{pq}u_{q}} . == See also == Mixed tensor Covariance and contravariance of vectors
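The transformation laws above can be illustrated numerically. A minimal sketch (NumPy assumed; the particular matrices are arbitrary illustrative choices): an ordinary tensor transforms with Q on both indices, a two-point tensor with Q on only one.

```python
import numpy as np

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation between the bases

u = np.array([1.0, 2.0])
v_primed = Q @ u                         # v'_p = Q_pq u_q

T = np.array([[2.0, 1.0], [0.0, 3.0]])
T_primed = Q @ T @ Q.T                   # routine (one-point) tensor law

F = np.array([[1.0, 4.0], [2.0, 0.0]])  # two-point tensor: one primed and
F_primed = Q @ F                         # one unprimed index, so F' = Q F

# Consistency check: applying F to components U gives the same primed-frame
# vector whether the two-point tensor is transformed before or after.
U = np.array([0.5, -1.0])
assert np.allclose(F_primed @ U, Q @ (F @ U))
```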
Wikipedia/Two-point_tensor
In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), tensor calculus or tensor analysis developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century. The basis of modern tensor analysis was developed by Bernhard Riemann in a paper from 1861. A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays. A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the degree (or order) of the tensor. 
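The statement that a tensor is the sum of its components multiplied by the corresponding basis elements can be made concrete. A minimal numerical sketch (NumPy assumed) for a degree-2 tensor in the standard basis:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 5.0],
              [1.0, 0.0, 4.0]])          # components A_ij in the basis e
e = np.eye(3)                            # standard basis vectors e_i

# Reassemble the tensor as the sum of components times basis tensors
# e_i (outer) e_j; the result recovers the component array exactly.
T = sum(A[i, j] * np.outer(e[i], e[j])
        for i in range(3) for j in range(3))
assert np.allclose(T, A)
```

With two indices each running over three dimensions, there are 3 × 3 = 9 basis elements, matching the count of components.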
For compactness and convenience, the Ricci calculus incorporates Einstein notation, which implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules. == Applications == Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning. Working with Élie Cartan, a main proponent of the exterior calculus, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus: In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.
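The summation convention over repeated indices can be mirrored directly in code. A minimal sketch using NumPy's einsum (an implementation choice of this example, not something the article prescribes; einsum sums repeated labels but does not track upper versus lower placement):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))   # components A_ab
B = rng.normal(size=4)        # components B^b
C = rng.normal(size=4)        # components C^b

# A_ab B^b : the repeated index b is summed, a remains free.
v = np.einsum('ab,b->a', A, B)
assert np.allclose(v, A @ B)

# A_ab B^a C^b : both indices are summed, giving a scalar.
s = np.einsum('ab,a,b->', A, B, C)
assert np.isclose(s, B @ A @ C)
```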
== Notation for indices == === Basis-related distinctions === ==== Space and time coordinates ==== Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows: The lowercase Latin alphabet a, b, c, ... is used to indicate restriction to 3-dimensional Euclidean space; these indices take values 1, 2, 3 for the spatial components, and the time-like element, indicated by 0, is shown separately. The lowercase Greek alphabet α, β, γ, ... is used for 4-dimensional spacetime; these indices typically take the value 0 for time components and 1, 2, 3 for the spatial components. Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space. ==== Coordinate and index notation ==== The author(s) will usually make it clear whether a subscript is intended as an index or as a label. For example, in 3-D Euclidean space and using Cartesian coordinates, the coordinate vector A = (A1, A2, A3) = (Ax, Ay, Az) shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In the expression Ai, i is interpreted as an index ranging over the values 1, 2, 3, while the x, y, z subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label t. ==== Reference to basis ==== Indices themselves may be labelled using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′) as in: X ϕ ^ , Y λ ¯ , Z η ~ , T μ ′ {\displaystyle X_{\hat {\phi }}\,,Y_{\bar {\lambda }}\,,Z_{\tilde {\eta }}\,,T_{\mu '}} to denote a possibly different basis for that index.
An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in: v μ ′ = v ν L ν μ ′ . {\displaystyle v^{\mu '}=v^{\nu }L_{\nu }{}^{\mu '}.} This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor. === Upper and lower indices === Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look as such to the reader only familiar with other parts of mathematics. In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as a i j b j k {\displaystyle a_{ij}b_{jk}} for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained. ==== Covariant tensor components ==== A lower index (subscript) indicates covariance of the components with respect to that index: A α β γ ⋯ {\displaystyle A_{\alpha \beta \gamma \cdots }} ==== Contravariant tensor components ==== An upper index (superscript) indicates contravariance of the components with respect to that index: A α β γ ⋯ {\displaystyle A^{\alpha \beta \gamma \cdots }} ==== Mixed-variance tensor components ==== A tensor may have both upper and lower indices: A α β γ δ ⋯ . {\displaystyle A_{\alpha }{}^{\beta }{}_{\gamma }{}^{\delta \cdots }.} Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta). 
==== Tensor type and degree ==== The numbers of upper and lower indices of a tensor give its type: a tensor with p upper and q lower indices is said to be of type (p, q), or to be a type-(p, q) tensor. The number of indices of a tensor, regardless of variance, is called the degree of the tensor (alternatively, its valence, order or rank, although rank is ambiguous). Thus, a tensor of type (p, q) has degree p + q. ==== Summation convention ==== The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over: A α B α ≡ ∑ α A α B α or A α B α ≡ ∑ α A α B α . {\displaystyle A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\quad {\text{or}}\quad A^{\alpha }B_{\alpha }\equiv \sum _{\alpha }A^{\alpha }B_{\alpha }\,.} The operation implied by such a summation is called tensor contraction: A α B β → A α B α ≡ ∑ α A α B α . {\displaystyle A_{\alpha }B^{\beta }\rightarrow A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\,.} This summation may occur more than once within a term with a distinct symbol per pair of indices, for example: A α γ B α C γ β ≡ ∑ α ∑ γ A α γ B α C γ β . {\displaystyle A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\equiv \sum _{\alpha }\sum _{\gamma }A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\,.} Other combinations of repeated indices within a term are considered to be ill-formed, such as an index repeated twice in the same (upper or lower) position, or an index appearing more than twice in a term. The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis. ==== Multi-index notation ==== If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list: A i 1 ⋯ i n B i 1 ⋯ i n j 1 ⋯ j m C j 1 ⋯ j m ≡ A I B I J C J , {\displaystyle A_{i_{1}\cdots i_{n}}B^{i_{1}\cdots i_{n}j_{1}\cdots j_{m}}C_{j_{1}\cdots j_{m}}\equiv A_{I}B^{IJ}C_{J},} where I = i1 i2 ⋅⋅⋅ in and J = j1 j2 ⋅⋅⋅ jm.
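The multi-index shorthand A_I B^IJ C_J can also be mirrored with einsum, which contracts whole blocks of indices at once. A minimal sketch (NumPy assumed; here I = i1 i2 and J = j1, with dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(3, 3))        # A_{i1 i2}
B = rng.normal(size=(3, 3, 3))     # B^{i1 i2 j1}
C = rng.normal(size=3)             # C_{j1}

# All three indices are contracted, leaving a scalar.
s = np.einsum('ij,ijk,k->', A, B, C)

# The same scalar written as explicit nested sums over every index value:
explicit = sum(A[i, j] * B[i, j, k] * C[k]
               for i in range(3) for j in range(3) for k in range(3))
assert np.isclose(s, explicit)
```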
==== Sequential summation ==== A pair of vertical bars | ⋅ | around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices: A | α β γ | ⋯ B α β γ ⋯ = A α β γ ⋯ B | α β γ | ⋯ = ∑ α < β < γ A α β γ ⋯ B α β γ ⋯ {\displaystyle A_{|\alpha \beta \gamma |\cdots }B^{\alpha \beta \gamma \cdots }=A_{\alpha \beta \gamma \cdots }B^{|\alpha \beta \gamma |\cdots }=\sum _{\alpha <\beta <\gamma }A_{\alpha \beta \gamma \cdots }B^{\alpha \beta \gamma \cdots }} means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example: A | α β γ | | δ ϵ ⋯ λ | B α β γ δ ϵ ⋯ λ | μ ν ⋯ ζ | C μ ν ⋯ ζ = ∑ α < β < γ ∑ δ < ϵ < ⋯ < λ ∑ μ < ν < ⋯ < ζ A α β γ δ ϵ ⋯ λ B α β γ δ ϵ ⋯ λ μ ν ⋯ ζ C μ ν ⋯ ζ {\displaystyle {\begin{aligned}&A_{|\alpha \beta \gamma |}{}^{|\delta \epsilon \cdots \lambda |}B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda |\mu \nu \cdots \zeta |}C^{\mu \nu \cdots \zeta }\\[3pt]={}&\sum _{\alpha <\beta <\gamma }~\sum _{\delta <\epsilon <\cdots <\lambda }~\sum _{\mu <\nu <\cdots <\zeta }A_{\alpha \beta \gamma }{}^{\delta \epsilon \cdots \lambda }B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda \mu \nu \cdots \zeta }C^{\mu \nu \cdots \zeta }\end{aligned}}} When using multi-index notation, an underarrow is placed underneath the block of indices: A P ⇁ Q ⇁ B P Q R ⇁ C R = ∑ P ⇁ ∑ Q ⇁ ∑ R ⇁ A P Q B P Q R C R {\displaystyle A_{\underset {\rightharpoondown }{P}}{}^{\underset {\rightharpoondown }{Q}}B^{P}{}_{Q{\underset {\rightharpoondown }{R}}}C^{R}=\sum _{\underset {\rightharpoondown }{P}}\sum _{\underset {\rightharpoondown }{Q}}\sum _{\underset {\rightharpoondown }{R}}A_{P}{}^{Q}B^{P}{}_{QR}C^{R}} where P ⇁ = | α β γ | , Q ⇁ = | δ ϵ ⋯ λ | , R ⇁ = | μ ν ⋯ ζ | {\displaystyle {\underset {\rightharpoondown }{P}}=|\alpha 
\beta \gamma |\,,\quad {\underset {\rightharpoondown }{Q}}=|\delta \epsilon \cdots \lambda |\,,\quad {\underset {\rightharpoondown }{R}}=|\mu \nu \cdots \zeta |} ==== Raising and lowering indices ==== By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa: B γ β ⋯ = g γ α A α β ⋯ and A α β ⋯ = g α γ B γ β ⋯ {\displaystyle B^{\gamma }{}_{\beta \cdots }=g^{\gamma \alpha }A_{\alpha \beta \cdots }\quad {\text{and}}\quad A_{\alpha \beta \cdots }=g_{\alpha \gamma }B^{\gamma }{}_{\beta \cdots }} The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation. === Correlations between index positions and invariance === This table summarizes how the manipulation of covariant and contravariant indices fit in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation. The Kronecker delta is used, see also below. == General outlines for index notation and operations == Tensors are equal if and only if every corresponding component is equal; e.g., tensor A equals tensor B if and only if A α β γ = B α β γ {\displaystyle A^{\alpha }{}_{\beta \gamma }=B^{\alpha }{}_{\beta \gamma }} for all α, β, γ. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis). === Free and dummy indices === Indices not involved in contractions are called free indices. Indices used in contractions are termed dummy indices, or summation indices. === A tensor equation represents many ordinary (real-valued) equations === The components of tensors (like Aα, Bβγ etc.) are just real numbers. 
Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has n free indices, and if the dimensionality of the underlying vector space is m, the equality represents mn equations: each index takes on every value of a specific set of values. For instance, if A α B β γ C γ δ + D α β E δ = T α β δ {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }=T^{\alpha }{}_{\beta }{}_{\delta }} is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (α, β, δ), there are 43 = 64 equations. Three of these are: A 0 B 1 0 C 00 + A 0 B 1 1 C 10 + A 0 B 1 2 C 20 + A 0 B 1 3 C 30 + D 0 1 E 0 = T 0 1 0 A 1 B 0 0 C 00 + A 1 B 0 1 C 10 + A 1 B 0 2 C 20 + A 1 B 0 3 C 30 + D 1 0 E 0 = T 1 0 0 A 1 B 2 0 C 02 + A 1 B 2 1 C 12 + A 1 B 2 2 C 22 + A 1 B 2 3 C 32 + D 1 2 E 2 = T 1 2 2 . {\displaystyle {\begin{aligned}A^{0}B_{1}{}^{0}C_{00}+A^{0}B_{1}{}^{1}C_{10}+A^{0}B_{1}{}^{2}C_{20}+A^{0}B_{1}{}^{3}C_{30}+D^{0}{}_{1}{}E_{0}&=T^{0}{}_{1}{}_{0}\\A^{1}B_{0}{}^{0}C_{00}+A^{1}B_{0}{}^{1}C_{10}+A^{1}B_{0}{}^{2}C_{20}+A^{1}B_{0}{}^{3}C_{30}+D^{1}{}_{0}{}E_{0}&=T^{1}{}_{0}{}_{0}\\A^{1}B_{2}{}^{0}C_{02}+A^{1}B_{2}{}^{1}C_{12}+A^{1}B_{2}{}^{2}C_{22}+A^{1}B_{2}{}^{3}C_{32}+D^{1}{}_{2}{}E_{2}&=T^{1}{}_{2}{}_{2}.\end{aligned}}} This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation. === Indices are replaceable labels === Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). 
An example of a correct change is: A α B β γ C γ δ + D α β E δ → A λ B β μ C μ δ + D λ β E δ , {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\rightarrow A^{\lambda }B_{\beta }{}^{\mu }C_{\mu \delta }+D^{\lambda }{}_{\beta }{}E_{\delta }\,,} whereas an erroneous change is: A α B β γ C γ δ + D α β E δ ↛ A λ B β γ C μ δ + D α β E δ . {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\nrightarrow A^{\lambda }B_{\beta }{}^{\gamma }C_{\mu \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\,.} In the first replacement, λ replaced α and μ replaced γ everywhere, so the expression still has the same meaning. In the second, λ did not fully replace α, and μ did not fully replace γ (incidentally, the contraction on the γ index became a tensor product), which is entirely inconsistent for reasons shown next. === Indices are the same in every term === The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example: A α B β γ C γ δ + D α δ E β = T α β δ {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\delta }E_{\beta }=T^{\alpha }{}_{\beta }{}_{\delta }} as for an erroneous expression: A α B β γ C γ δ + D α β γ E δ . {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D_{\alpha }{}_{\beta }{}^{\gamma }E^{\delta }.} In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, α, β, δ line up throughout and γ occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. 
In the invalid expression, while β lines up, α and δ do not, and γ appears twice in one term (contraction) and once in another term, which is inconsistent. === Brackets and punctuation used once where implied === When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply. If the brackets enclose covariant indices – the rule applies only to all covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed intermediately between the brackets. Similarly if brackets enclose contravariant indices – the rule applies only to all enclosed contravariant indices, not to intermediately placed covariant indices. == Symmetric and antisymmetric parts == === Symmetric part of tensor === Parentheses, ( ), around multiple indices denotes the symmetrized part of the tensor. When symmetrizing p indices using σ to range over permutations of the numbers 1 to p, one takes a sum over the permutations of those indices ασ(i) for i = 1, 2, 3, ..., p, and then divides by the number of permutations: A ( α 1 α 2 ⋯ α p ) α p + 1 ⋯ α q = 1 p ! ∑ σ A α σ ( 1 ) ⋯ α σ ( p ) α p + 1 ⋯ α q . {\displaystyle A_{(\alpha _{1}\alpha _{2}\cdots \alpha _{p})\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {1}{p!}}\sum _{\sigma }A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\,.} For example, two symmetrizing indices mean there are two indices to permute and sum over: A ( α β ) γ ⋯ = 1 2 ! ( A α β γ ⋯ + A β α γ ⋯ ) {\displaystyle A_{(\alpha \beta )\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }+A_{\beta \alpha \gamma \cdots }\right)} while for three symmetrizing indices, there are three indices to sum over and permute: A ( α β γ ) δ ⋯ = 1 3 ! 
( A α β γ δ ⋯ + A γ α β δ ⋯ + A β γ α δ ⋯ + A α γ β δ ⋯ + A γ β α δ ⋯ + A β α γ δ ⋯ ) {\displaystyle A_{(\alpha \beta \gamma )\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }+A_{\alpha \gamma \beta \delta \cdots }+A_{\gamma \beta \alpha \delta \cdots }+A_{\beta \alpha \gamma \delta \cdots }\right)} The symmetrization is distributive over addition; A ( α ( B β ) γ ⋯ + C β ) γ ⋯ ) = A ( α B β ) γ ⋯ + A ( α C β ) γ ⋯ {\displaystyle A_{(\alpha }\left(B_{\beta )\gamma \cdots }+C_{\beta )\gamma \cdots }\right)=A_{(\alpha }B_{\beta )\gamma \cdots }+A_{(\alpha }C_{\beta )\gamma \cdots }} Indices are not part of the symmetrization when they are: not on the same level, for example; A ( α B β γ ) = 1 2 ! ( A α B β γ + A γ B β α ) {\displaystyle A_{(\alpha }B^{\beta }{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }+A_{\gamma }B^{\beta }{}_{\alpha }\right)} within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example; A ( α B | β | γ ) = 1 2 ! ( A α B β γ + A γ B β α ) {\displaystyle A_{(\alpha }B_{|\beta |}{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }+A_{\gamma }B_{\beta \alpha }\right)} Here the α and γ indices are symmetrized, β is not. === Antisymmetric or alternating part of tensor === Square brackets, [ ], around multiple indices denotes the antisymmetrized part of the tensor. For p antisymmetrizing indices – the sum over the permutations of those indices ασ(i) multiplied by the signature of the permutation sgn(σ) is taken, then divided by the number of permutations: A [ α 1 ⋯ α p ] α p + 1 ⋯ α q = 1 p ! 
∑ σ sgn ⁡ ( σ ) A α σ ( 1 ) ⋯ α σ ( p ) α p + 1 ⋯ α q = δ α 1 ⋯ α p β 1 … β p A β 1 ⋯ β p α p + 1 ⋯ α q {\displaystyle {\begin{aligned}&A_{[\alpha _{1}\cdots \alpha _{p}]\alpha _{p+1}\cdots \alpha _{q}}\\[3pt]={}&{\dfrac {1}{p!}}\sum _{\sigma }\operatorname {sgn}(\sigma )A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\\={}&\delta _{\alpha _{1}\cdots \alpha _{p}}^{\beta _{1}\dots \beta _{p}}A_{\beta _{1}\cdots \beta _{p}\alpha _{p+1}\cdots \alpha _{q}}\\\end{aligned}}} where δβ1⋅⋅⋅βpα1⋅⋅⋅αp is the generalized Kronecker delta of degree 2p, with scaling as defined below. For example, two antisymmetrizing indices imply: A [ α β ] γ ⋯ = 1 2 ! ( A α β γ ⋯ − A β α γ ⋯ ) {\displaystyle A_{[\alpha \beta ]\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }-A_{\beta \alpha \gamma \cdots }\right)} while three antisymmetrizing indices imply: A [ α β γ ] δ ⋯ = 1 3 ! ( A α β γ δ ⋯ + A γ α β δ ⋯ + A β γ α δ ⋯ − A α γ β δ ⋯ − A γ β α δ ⋯ − A β α γ δ ⋯ ) {\displaystyle A_{[\alpha \beta \gamma ]\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }-A_{\alpha \gamma \beta \delta \cdots }-A_{\gamma \beta \alpha \delta \cdots }-A_{\beta \alpha \gamma \delta \cdots }\right)} as for a more specific example, if F represents the electromagnetic tensor, then the equation 0 = F [ α β , γ ] = 1 3 ! ( F α β , γ + F γ α , β + F β γ , α − F β α , γ − F α γ , β − F γ β , α ) {\displaystyle 0=F_{[\alpha \beta ,\gamma ]}={\dfrac {1}{3!}}\left(F_{\alpha \beta ,\gamma }+F_{\gamma \alpha ,\beta }+F_{\beta \gamma ,\alpha }-F_{\beta \alpha ,\gamma }-F_{\alpha \gamma ,\beta }-F_{\gamma \beta ,\alpha }\right)\,} represents Gauss's law for magnetism and Faraday's law of induction. 
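The antisymmetrization over p indices can be implemented generically as a signed sum over permutations. A minimal sketch for p = 3 (NumPy assumed; antisymmetrize3 and sign are illustrative helpers, not standard API), checked against the explicit six-term expansion given above:

```python
import itertools
import numpy as np

def sign(perm):
    # Parity of a permutation, computed by counting inversions.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def antisymmetrize3(A):
    # Sum over all permutations of the three axes, weighted by sgn, over 3!.
    out = np.zeros_like(A)
    for perm in itertools.permutations(range(3)):
        out += sign(perm) * np.transpose(A, perm)
    return out / 6.0

A = np.random.default_rng(5).normal(size=(3, 3, 3))
expected = (A + np.transpose(A, (2, 0, 1)) + np.transpose(A, (1, 2, 0))
            - np.transpose(A, (0, 2, 1)) - np.transpose(A, (2, 1, 0))
            - np.transpose(A, (1, 0, 2))) / 6.0
assert np.allclose(antisymmetrize3(A), expected)

# The result changes sign under swapping any two of its indices.
assert np.allclose(np.transpose(antisymmetrize3(A), (1, 0, 2)),
                   -antisymmetrize3(A))
```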
As before, the antisymmetrization is distributive over addition; A [ α ( B β ] γ ⋯ + C β ] γ ⋯ ) = A [ α B β ] γ ⋯ + A [ α C β ] γ ⋯ {\displaystyle A_{[\alpha }\left(B_{\beta ]\gamma \cdots }+C_{\beta ]\gamma \cdots }\right)=A_{[\alpha }B_{\beta ]\gamma \cdots }+A_{[\alpha }C_{\beta ]\gamma \cdots }} As with symmetrization, indices are not antisymmetrized when they are: not on the same level, for example; A [ α B β γ ] = 1 2 ! ( A α B β γ − A γ B β α ) {\displaystyle A_{[\alpha }B^{\beta }{}_{\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }-A_{\gamma }B^{\beta }{}_{\alpha }\right)} within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example; A [ α B | β | γ ] = 1 2 ! ( A α B β γ − A γ B β α ) {\displaystyle A_{[\alpha }B_{|\beta |}{}_{\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }-A_{\gamma }B_{\beta \alpha }\right)} Here the α and γ indices are antisymmetrized, β is not. === Sum of symmetric and antisymmetric parts === Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices: A α β γ ⋯ = A ( α β ) γ ⋯ + A [ α β ] γ ⋯ {\displaystyle A_{\alpha \beta \gamma \cdots }=A_{(\alpha \beta )\gamma \cdots }+A_{[\alpha \beta ]\gamma \cdots }} as can be seen by adding the above expressions for A(αβ)γ⋅⋅⋅ and A[αβ]γ⋅⋅⋅. This decomposition does not hold, in general, for more than two indices. == Differentiation == For compactness, derivatives may be indicated by adding indices after a comma or semicolon. === Partial derivative === While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by xμ, but do not in general form the components of a vector.
In flat spacetime with linear coordinatization, a tuple of differences in coordinates, Δxμ, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below. To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable xγ, a comma is placed before an appended lower index of the coordinate variable. A α β ⋯ , γ = ∂ ∂ x γ A α β ⋯ {\displaystyle A_{\alpha \beta \cdots ,\gamma }={\dfrac {\partial }{\partial x^{\gamma }}}A_{\alpha \beta \cdots }} This may be repeated (without adding further commas): A α 1 α 2 ⋯ α p , α p + 1 ⋯ α q = ∂ ∂ x α q ⋯ ∂ ∂ x α p + 2 ∂ ∂ x α p + 1 A α 1 α 2 ⋯ α p . {\displaystyle A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}\,,\,\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {\partial }{\partial x^{\alpha _{q}}}}\cdots {\dfrac {\partial }{\partial x^{\alpha _{p+2}}}}{\dfrac {\partial }{\partial x^{\alpha _{p+1}}}}A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}}.} These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates x α , γ = δ γ α , {\displaystyle x^{\alpha }{}_{,\gamma }=\delta _{\gamma }^{\alpha },} where δ is the Kronecker delta. === Covariant derivative === The covariant derivative is only defined if a connection is defined. For any tensor field, a semicolon ( ; ) placed before an appended lower (covariant) index indicates covariant differentiation. 
Less common alternatives to the semicolon include a forward slash ( / ) or in three-dimensional curved space a single vertical bar ( | ). The covariant derivative of a scalar function, a contravariant vector and a covariant vector are: f ; β = f , β {\displaystyle f_{;\beta }=f_{,\beta }} A α ; β = A α , β + Γ α γ β A γ {\displaystyle A^{\alpha }{}_{;\beta }=A^{\alpha }{}_{,\beta }+\Gamma ^{\alpha }{}_{\gamma \beta }A^{\gamma }} A α ; β = A α , β − Γ γ α β A γ , {\displaystyle A_{\alpha ;\beta }=A_{\alpha ,\beta }-\Gamma ^{\gamma }{}_{\alpha \beta }A_{\gamma }\,,} where Γαγβ are the connection coefficients. For an arbitrary tensor: T α 1 ⋯ α r β 1 ⋯ β s ; γ = T α 1 ⋯ α r β 1 ⋯ β s , γ + Γ α 1 δ γ T δ α 2 ⋯ α r β 1 ⋯ β s + ⋯ + Γ α r δ γ T α 1 ⋯ α r − 1 δ β 1 ⋯ β s − Γ δ β 1 γ T α 1 ⋯ α r δ β 2 ⋯ β s − ⋯ − Γ δ β s γ T α 1 ⋯ α r β 1 ⋯ β s − 1 δ . {\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma }&\\=T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&+\,\Gamma ^{\alpha _{1}}{}_{\delta \gamma }T^{\delta \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}+\cdots +\Gamma ^{\alpha _{r}}{}_{\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\delta }{}_{\beta _{1}\cdots \beta _{s}}\\&-\,\Gamma ^{\delta }{}_{\beta _{1}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\delta \beta _{2}\cdots \beta _{s}}-\cdots -\Gamma ^{\delta }{}_{\beta _{s}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\delta }\,.\end{aligned}}} An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol ∇β. For the case of a vector field Aα: ∇ β A α = A α ; β . {\displaystyle \nabla _{\beta }A^{\alpha }=A^{\alpha }{}_{;\beta }\,.} The covariant formulation of the directional derivative of any tensor field along a vector vγ may be expressed as its contraction with the covariant derivative, e.g.: v γ A α ; γ . 
{\displaystyle v^{\gamma }A_{\alpha ;\gamma }\,.} The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly. This derivative is characterized by the product rule: ( A α β ⋯ B γ δ ⋯ ) ; ϵ = A α β ⋯ ; ϵ B γ δ ⋯ + A α β ⋯ B γ δ ⋯ ; ϵ . {\displaystyle (A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots })_{;\epsilon }=A^{\alpha }{}_{\beta \cdots ;\epsilon }B^{\gamma }{}_{\delta \cdots }+A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots ;\epsilon }\,.} ==== Connection types ==== A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection. A connection is a metric connection when the covariant derivative of the metric tensor vanishes: g μ ν ; ξ = 0 . {\displaystyle g_{\mu \nu ;\xi }=0\,.} An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: Tαβγ = 0) is a Levi-Civita connection. The Γαβγ for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind. === Exterior derivative === The exterior derivative of a totally antisymmetric type (0, s) tensor field with components Aα1⋅⋅⋅αs (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components:: 232–233  ( d A ) γ α 1 ⋯ α s = ∂ ∂ x [ γ A α 1 ⋯ α s ] = A [ α 1 ⋯ α s , γ ] . 
{\displaystyle (\mathrm {d} A)_{\gamma \alpha _{1}\cdots \alpha _{s}}={\frac {\partial }{\partial x^{[\gamma }}}A_{\alpha _{1}\cdots \alpha _{s}]}=A_{[\alpha _{1}\cdots \alpha _{s},\gamma ]}.} This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule. === Lie derivative === The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type (r, s) tensor field T along (the flow of) a contravariant vector field Xρ may be expressed using a coordinate basis as ( L X T ) α 1 ⋯ α r β 1 ⋯ β s = X γ T α 1 ⋯ α r β 1 ⋯ β s , γ − X α 1 , γ T γ α 2 ⋯ α r β 1 ⋯ β s − ⋯ − X α r , γ T α 1 ⋯ α r − 1 γ β 1 ⋯ β s + X γ , β 1 T α 1 ⋯ α r γ β 2 ⋯ β s + ⋯ + X γ , β s T α 1 ⋯ α r β 1 ⋯ β s − 1 γ . {\displaystyle {\begin{aligned}({\mathcal {L}}_{X}T)^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}&\\=X^{\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&-\,X^{\alpha _{1}}{}_{,\gamma }T^{\gamma \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -X^{\alpha _{r}}{}_{,\gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\gamma }{}_{\beta _{1}\cdots \beta _{s}}\\&+\,X^{\gamma }{}_{,\beta _{1}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\gamma \beta _{2}\cdots \beta _{s}}+\cdots +X^{\gamma }{}_{,\beta _{s}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\gamma }\,.\end{aligned}}} This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero: ( L X X ) α = X γ X α , γ − X α , γ X γ = 0 . 
{\displaystyle ({\mathcal {L}}_{X}X)^{\alpha }=X^{\gamma }X^{\alpha }{}_{,\gamma }-X^{\alpha }{}_{,\gamma }X^{\gamma }=0\,.} == Notable tensors == === Kronecker delta === The Kronecker delta is like the identity matrix when multiplied and contracted: δ β α A β = A α δ ν μ B μ = B ν . {\displaystyle {\begin{aligned}\delta _{\beta }^{\alpha }\,A^{\beta }&=A^{\alpha }\\\delta _{\nu }^{\mu }\,B_{\mu }&=B_{\nu }.\end{aligned}}} The components δαβ are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant. Its trace is the dimensionality of the space; for example, in four-dimensional spacetime, δ ρ ρ = δ 0 0 + δ 1 1 + δ 2 2 + δ 3 3 = 4. {\displaystyle \delta _{\rho }^{\rho }=\delta _{0}^{0}+\delta _{1}^{1}+\delta _{2}^{2}+\delta _{3}^{3}=4.} The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2p may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of p! on the right): δ β 1 ⋯ β p α 1 ⋯ α p = δ β 1 [ α 1 ⋯ δ β p α p ] , {\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}=\delta _{\beta _{1}}^{[\alpha _{1}}\cdots \delta _{\beta _{p}}^{\alpha _{p}]},} and acts as an antisymmetrizer on p indices: δ β 1 ⋯ β p α 1 ⋯ α p A β 1 ⋯ β p = A [ α 1 ⋯ α p ] . {\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}\,A^{\beta _{1}\cdots \beta _{p}}=A^{[\alpha _{1}\cdots \alpha _{p}]}.} === Torsion tensor === An affine connection has a torsion tensor Tαβγ: T α β γ = Γ α β γ − Γ α γ β − γ α β γ , {\displaystyle T^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\beta \gamma }-\Gamma ^{\alpha }{}_{\gamma \beta }-\gamma ^{\alpha }{}_{\beta \gamma },} where γαβγ are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis. 
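In a coordinate basis the γαβγ terms vanish, so the torsion tensor reduces to the antisymmetric part of the connection coefficients in their lower indices. A quick NumPy check with hypothetical connection coefficients (the array values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Hypothetical connection coefficients Gamma[a, b, c] = Γ^α_{βγ},
# in a coordinate basis where the Lie-bracket terms γ^α_{βγ} vanish.
Gamma = rng.standard_normal((n, n, n))

# T^α_{βγ} = Γ^α_{βγ} − Γ^α_{γβ}
T = Gamma - Gamma.transpose(0, 2, 1)

# A connection symmetric in its lower indices is torsion-free:
Gamma_sym = 0.5 * (Gamma + Gamma.transpose(0, 2, 1))
T_sym = Gamma_sym - Gamma_sym.transpose(0, 2, 1)
```

By construction T is antisymmetric in its lower indices, and T_sym vanishes identically.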
For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations Γ α β γ = Γ α γ β . {\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\gamma \beta }.} === Riemann curvature tensor === If this tensor is defined as R ρ σ μ ν = Γ ρ ν σ , μ − Γ ρ μ σ , ν + Γ ρ μ λ Γ λ ν σ − Γ ρ ν λ Γ λ μ σ , {\displaystyle R^{\rho }{}_{\sigma \mu \nu }=\Gamma ^{\rho }{}_{\nu \sigma ,\mu }-\Gamma ^{\rho }{}_{\mu \sigma ,\nu }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }\,,} then it is the commutator of the covariant derivative with itself: A ν ; ρ σ − A ν ; σ ρ = A β R β ν ρ σ , {\displaystyle A_{\nu ;\rho \sigma }-A_{\nu ;\sigma \rho }=A_{\beta }R^{\beta }{}_{\nu \rho \sigma }\,,} since the connection is torsionless, which means that the torsion tensor vanishes. This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows: T α 1 ⋯ α r β 1 ⋯ β s ; γ δ − T α 1 ⋯ α r β 1 ⋯ β s ; δ γ = − R α 1 ρ γ δ T ρ α 2 ⋯ α r β 1 ⋯ β s − ⋯ − R α r ρ γ δ T α 1 ⋯ α r − 1 ρ β 1 ⋯ β s + R σ β 1 γ δ T α 1 ⋯ α r σ β 2 ⋯ β s + ⋯ + R σ β s γ δ T α 1 ⋯ α r β 1 ⋯ β s − 1 σ {\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma \delta }&-T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\delta \gamma }\\&\!\!\!\!\!\!\!\!\!\!=-R^{\alpha _{1}}{}_{\rho \gamma \delta }T^{\rho \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -R^{\alpha _{r}}{}_{\rho \gamma \delta }T^{\alpha _{1}\cdots \alpha _{r-1}\rho }{}_{\beta _{1}\cdots \beta _{s}}\\&+R^{\sigma }{}_{\beta _{1}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\sigma \beta _{2}\cdots \beta _{s}}+\cdots +R^{\sigma }{}_{\beta _{s}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\sigma }\,\end{aligned}}} which are often referred to as the Ricci 
identities. === Metric tensor === The metric tensor gαβ is used for lowering indices and gives the length of any space-like curve length = ∫ y 1 y 2 g α β d x α d γ d x β d γ d γ , {\displaystyle {\text{length}}=\int _{y_{1}}^{y_{2}}{\sqrt {g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,} where γ is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve duration = ∫ t 1 t 2 − 1 c 2 g α β d x α d γ d x β d γ d γ , {\displaystyle {\text{duration}}=\int _{t_{1}}^{t_{2}}{\sqrt {{\frac {-1}{c^{2}}}g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,} where γ is any smooth strictly monotone parameterization of the trajectory. See also Line element. The inverse matrix gαβ of the metric tensor is another important tensor, used for raising indices: g α β g β γ = δ γ α . {\displaystyle g^{\alpha \beta }g_{\beta \gamma }=\delta _{\gamma }^{\alpha }\,.} == Sources == Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds (First Dover 1980 ed.), The Macmillan Company, ISBN 0-486-64039-6 Danielson, Donald A. (2003). Vectors and Tensors in Engineering and Physics (2/e ed.). Westview (Perseus). ISBN 978-0-8133-4080-7. Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Kluwer Academic Publishers (Springer). ISBN 1-4020-1015-X. Lovelock, David; Hanno Rund (1989) [1975]. Tensors, Differential Forms, and Variational Principles. Dover. ISBN 978-0-486-65840-7. C. Møller (1952), The Theory of Relativity (3rd ed.), Oxford University Press Synge J.L.; Schild A. (1949). Tensor Calculus. First Dover Publications 1978 edition. ISBN 978-0-486-63612-2. J.R. Tyldesley (1975), An introduction to Tensor Analysis: For Engineers and Applied Scientists, Longman, ISBN 0-582-44355-5 D.C.
Kay (1988), Tensor Calculus, Schaum's Outlines, McGraw Hill (USA), ISBN 0-07-033484-6 T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, ISBN 978-1107-602601 == Further reading == Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Springer. ISBN 1-4020-1015-X. Sokolnikoff, Ivan S (1951). Tensor Analysis: Theory and Applications to Geometry and Mechanics of Continua. Wiley. ISBN 0471810525. Borisenko, A.I.; Tarapov, I.E. (1979). Vector and Tensor Analysis with Applications (2nd ed.). Dover. ISBN 0486638332. Itskov, Mikhail (2015). Tensor Algebra and Tensor Analysis for Engineers: With Applications to Continuum Mechanics (2nd ed.). Springer. ISBN 9783319163420. Tyldesley, J. R. (1973). An introduction to Tensor Analysis: For Engineers and Applied Scientists. Longman. ISBN 0-582-44355-5. Kay, D. C. (1988). Tensor Calculus. Schaum’s Outlines. McGraw Hill. ISBN 0-07-033484-6. Grinfeld, P. (2014). Introduction to Tensor Analysis and the Calculus of Moving Surfaces. Springer. ISBN 978-1-4614-7866-9. == External links == Dullemond, Kees; Peeters, Kasper (1991–2010). "Introduction to Tensor Calculus" (PDF). Retrieved 17 May 2018.
Wikipedia/Ricci_calculus
In mathematics, the nonmetricity tensor in differential geometry is the covariant derivative of the metric tensor. It is therefore a tensor field of order three. It vanishes for the case of Riemannian geometry and can be used to study non-Riemannian spacetimes. == Definition == By components, it is defined as follows. Q μ α β = ∇ μ g α β {\displaystyle Q_{\mu \alpha \beta }=\nabla _{\mu }g_{\alpha \beta }} It measures the rate of change of the components of the metric tensor along the flow of a given vector field, since ∇ μ ≡ ∇ ∂ μ {\displaystyle \nabla _{\mu }\equiv \nabla _{\partial _{\mu }}} where { ∂ μ } μ = 0 , 1 , 2 , 3 {\displaystyle \{\partial _{\mu }\}_{\mu =0,1,2,3}} is the coordinate basis of vector fields of the tangent bundle, in the case of a 4-dimensional manifold. == Relation to connection == A connection Γ {\displaystyle \Gamma } is said to be compatible with the metric when its associated covariant derivative of the metric tensor (call it ∇ Γ {\displaystyle \nabla ^{\Gamma }} , for example) is zero, i.e. ∇ μ Γ g α β = 0. {\displaystyle \nabla _{\mu }^{\Gamma }g_{\alpha \beta }=0.} If the connection is also torsion-free (i.e. totally symmetric) then it is known as the Levi-Civita connection, which is the only connection that is both torsion-free and compatible with the metric tensor. From a geometrical point of view, a non-vanishing nonmetricity tensor for a metric tensor g {\displaystyle g} implies that the modulus of a vector in the tangent space at a point p {\displaystyle p} of the manifold changes when it is evaluated along the direction (flow) of another arbitrary vector field. == External links == Iosifidis, Damianos; Petkou, Anastasios C.; Tsagas, Christos G. (May 2019). "Torsion/nonmetricity duality in f(R) gravity". General Relativity and Gravitation. 51 (5): 66. arXiv:1810.06602. Bibcode:2019GReGr..51...66I. doi:10.1007/s10714-019-2539-9. ISSN 0001-7701. S2CID 53554290.
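As a concrete numerical check of the definition above, consider the Euclidean plane in polar coordinates (r, θ) with its Levi-Civita connection, a standard example chosen here for illustration and not taken from the article. In components, Q_{μαβ} = ∂_μ g_{αβ} − Γ^λ_{αμ} g_{λβ} − Γ^λ_{βμ} g_{αλ}, which should vanish for a metric-compatible connection. A NumPy sketch at the point r = 2:

```python
import numpy as np

r = 2.0
g = np.diag([1.0, r**2])                    # plane metric in polar coords (r, θ)

dg = np.zeros((2, 2, 2))                    # dg[mu, a, b] = ∂_mu g_{ab}
dg[0, 1, 1] = 2.0 * r                       # the only nonzero partial: ∂_r g_{θθ} = 2r

Gamma = np.zeros((2, 2, 2))                 # Gamma[l, a, m] = Γ^λ_{αμ}
Gamma[0, 1, 1] = -r                         # Γ^r_{θθ} = −r
Gamma[1, 0, 1] = Gamma[1, 1, 0] = 1.0 / r   # Γ^θ_{rθ} = Γ^θ_{θr} = 1/r

# Q_{μαβ} = ∂_μ g_{αβ} − Γ^λ_{αμ} g_{λβ} − Γ^λ_{βμ} g_{αλ}
Q = dg - np.einsum('lam,lb->mab', Gamma, g) - np.einsum('lbm,al->mab', Gamma, g)
```

Here Q comes out identically zero, as it must for the Levi-Civita connection; dropping the connection terms (setting Γ = 0) would leave the nonzero partial derivatives dg as the nonmetricity.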
Wikipedia/Nonmetricity_tensor
In continuum mechanics, the Cauchy stress tensor (symbol σ {\displaystyle {\boldsymbol {\sigma }}} , named after Augustin-Louis Cauchy), also called true stress tensor or simply stress tensor, completely defines the state of stress at a point inside a material in the deformed state, placement, or configuration. The second order tensor consists of nine components σ i j {\displaystyle \sigma _{ij}} and relates a unit-length direction vector e to the traction vector T(e) across an imaginary surface perpendicular to e: T ( e ) = e ⋅ σ or T j ( e ) = ∑ i σ i j e i . {\displaystyle \mathbf {T} ^{(\mathbf {e} )}=\mathbf {e} \cdot {\boldsymbol {\sigma }}\quad {\text{or}}\quad T_{j}^{(e)}=\sum _{i}\sigma _{ij}e_{i}.} The SI units of both stress tensor and traction vector are newton per square metre (N/m2) or pascal (Pa), corresponding to the stress scalar. The unit vector is dimensionless. The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle for stress. The Cauchy stress tensor is used for stress analysis of material bodies experiencing small deformations: it is a central concept in the linear theory of elasticity. For large deformations, also called finite deformations, other measures of stress are required, such as the Piola–Kirchhoff stress tensor, the Biot stress tensor, and the Kirchhoff stress tensor. According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration).
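The defining relation T_j^(e) = σ_ij e_i is straightforward to evaluate numerically. A minimal NumPy sketch with made-up symmetric stress components, splitting the traction into its normal and shear parts and confirming that the eigenvalues of σ (the principal stresses) are unchanged by the tensor transformation law:

```python
import numpy as np

# Illustrative symmetric stress components σ_ij, in Pa (values are made up)
sigma = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 1.0]]) * 1e6

e = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # unit normal of the cut plane

T = e @ sigma                        # T_j^{(e)} = σ_ij e_i
sigma_n = T @ e                      # normal component of the traction
tau = np.sqrt(T @ T - sigma_n**2)    # magnitude of the in-plane (shear) part

# The eigenvalues of σ are invariant under a change of coordinates,
# σ' = Q σ Qᵀ with Q orthogonal (here a random orthogonal matrix):
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
principal = np.linalg.eigvalsh(sigma)
principal_rot = np.linalg.eigvalsh(Q @ sigma @ Q.T)
```

Since σ is symmetric here, `e @ sigma` and `sigma @ e` agree; the general index placement T_j = σ_ij e_i corresponds to `e @ sigma`.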
At the same time, according to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components, instead of the original nine. However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, K n → 1 {\displaystyle K_{n}\rightarrow 1} , or the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers. There are certain invariants associated with the stress tensor, whose values do not depend upon the coordinate system chosen, or the area element upon which the stress tensor operates. These are the three eigenvalues of the stress tensor, which are called the principal stresses. == Euler–Cauchy stress principle – stress vector == The Euler–Cauchy stress principle states that upon any surface (real or imaginary) that divides the body, the action of one part of the body on the other is equivalent (equipollent) to the system of distributed forces and couples on the surface dividing the body, and it is represented by a field T ( n ) {\displaystyle \mathbf {T} ^{(\mathbf {n} )}} , called the traction vector, defined on the surface S {\displaystyle S} and assumed to depend continuously on the surface's unit vector n {\displaystyle \mathbf {n} } .: p.66–96  To formulate the Euler–Cauchy stress principle, consider an imaginary surface S {\displaystyle S} passing through an internal material point P {\displaystyle P} dividing the continuous body into two segments, as seen in Figure 2.1a or 2.1b (one may use either the cutting plane diagram or the diagram with the arbitrary volume inside the continuum enclosed by the surface S {\displaystyle S} ). 
Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces F {\displaystyle \mathbf {F} } and body forces b {\displaystyle \mathbf {b} } . Thus, the total force F {\displaystyle {\mathcal {F}}} applied to a body or to a portion of the body can be expressed as: F = b + F {\displaystyle {\mathcal {F}}=\mathbf {b} +\mathbf {F} } Only surface forces will be discussed in this article as they are relevant to the Cauchy stress tensor. When the body is subjected to external surface forces or contact forces F {\displaystyle \mathbf {F} } , following Euler's equations of motion, internal contact forces and moments are transmitted from point to point in the body, and from one segment to the other through the dividing surface S {\displaystyle S} , due to the mechanical contact of one portion of the continuum onto the other (Figure 2.1a and 2.1b). On an element of area Δ S {\displaystyle \Delta S} containing P {\displaystyle P} , with normal vector n {\displaystyle \mathbf {n} } , the force distribution is equipollent to a contact force Δ F {\displaystyle \Delta \mathbf {F} } exerted at point P and surface moment Δ M {\displaystyle \Delta \mathbf {M} } . In particular, the contact force is given by Δ F = T ( n ) Δ S {\displaystyle \Delta \mathbf {F} =\mathbf {T} ^{(\mathbf {n} )}\,\Delta S} where T ( n ) {\displaystyle \mathbf {T} ^{(\mathbf {n} )}} is the mean surface traction. Cauchy's stress principle asserts: p.47–102  that as Δ S {\displaystyle \Delta S} becomes very small and tends to zero the ratio Δ F / Δ S {\displaystyle \Delta \mathbf {F} /\Delta S} becomes d F / d S {\displaystyle d\mathbf {F} /dS} and the couple stress vector Δ M {\displaystyle \Delta \mathbf {M} } vanishes. 
In specific fields of continuum mechanics the couple stress is assumed not to vanish; however, classical branches of continuum mechanics address non-polar materials which do not consider couple stresses and body moments. The resultant vector d F / d S {\displaystyle d\mathbf {F} /dS} is defined as the surface traction, also called stress vector, traction, or traction vector, given by T ( n ) = T i ( n ) e i {\displaystyle \mathbf {T} ^{(\mathbf {n} )}=T_{i}^{(\mathbf {n} )}\mathbf {e} _{i}} at the point P {\displaystyle P} associated with a plane with a normal vector n {\displaystyle \mathbf {n} } : T i ( n ) = lim Δ S → 0 Δ F i Δ S = d F i d S . {\displaystyle T_{i}^{(\mathbf {n} )}=\lim _{\Delta S\to 0}{\frac {\Delta F_{i}}{\Delta S}}={dF_{i} \over dS}.} This equation means that the stress vector depends on its location in the body and the orientation of the plane on which it is acting. This implies that the balancing action of internal contact forces generates a contact force density or Cauchy traction field T ( n , x , t ) {\displaystyle \mathbf {T} (\mathbf {n} ,\mathbf {x} ,t)} that represents a distribution of internal contact forces throughout the volume of the body in a particular configuration of the body at a given time t {\displaystyle t} . It is not a vector field because it depends not only on the position x {\displaystyle \mathbf {x} } of a particular material point, but also on the local orientation of the surface element as defined by its normal vector n {\displaystyle \mathbf {n} } . Depending on the orientation of the plane under consideration, the stress vector may not necessarily be perpendicular to that plane, i.e.
parallel to n {\displaystyle \mathbf {n} } , and can be resolved into two components (Figure 2.1c): one normal to the plane, called normal stress σ n = lim Δ S → 0 Δ F n Δ S = d F n d S , {\displaystyle \mathbf {\sigma _{\mathrm {n} }} =\lim _{\Delta S\to 0}{\frac {\Delta F_{\mathrm {n} }}{\Delta S}}={\frac {dF_{\mathrm {n} }}{dS}},} where d F n {\displaystyle dF_{\mathrm {n} }} is the normal component of the force d F {\displaystyle d\mathbf {F} } to the differential area d S {\displaystyle dS} and the other parallel to this plane, called the shear stress τ = lim Δ S → 0 Δ F s Δ S = d F s d S , {\displaystyle \mathbf {\tau } =\lim _{\Delta S\to 0}{\frac {\Delta F_{\mathrm {s} }}{\Delta S}}={\frac {dF_{\mathrm {s} }}{dS}},} where d F s {\displaystyle dF_{\mathrm {s} }} is the tangential component of the force d F {\displaystyle d\mathbf {F} } to the differential surface area d S {\displaystyle dS} . The shear stress can be further decomposed into two mutually perpendicular vectors. === Cauchy's postulate === According to the Cauchy Postulate, the stress vector T ( n ) {\displaystyle \mathbf {T} ^{(\mathbf {n} )}} remains unchanged for all surfaces passing through the point P {\displaystyle P} and having the same normal vector n {\displaystyle \mathbf {n} } at P {\displaystyle P} , i.e., having a common tangent at P {\displaystyle P} . This means that the stress vector is a function of the normal vector n {\displaystyle \mathbf {n} } only, and is not influenced by the curvature of the internal surfaces. === Cauchy's fundamental lemma === A consequence of Cauchy's postulate is Cauchy's Fundamental Lemma, also called the Cauchy reciprocal theorem,: p.103–130  which states that the stress vectors acting on opposite sides of the same surface are equal in magnitude and opposite in direction. Cauchy's fundamental lemma is equivalent to Newton's third law of motion of action and reaction, and is expressed as − T ( n ) = T ( − n ) . 
{\displaystyle -\mathbf {T} ^{(\mathbf {n} )}=\mathbf {T} ^{(-\mathbf {n} )}.} == Cauchy's stress theorem—stress tensor == The state of stress at a point in the body is then defined by all the stress vectors T(n) associated with all planes (infinite in number) that pass through that point. However, according to Cauchy's fundamental theorem, also called Cauchy's stress theorem, merely by knowing the stress vectors on three mutually perpendicular planes, the stress vector on any other plane passing through that point can be found through coordinate transformation equations. Cauchy's stress theorem states that there exists a second-order tensor field σ(x, t), called the Cauchy stress tensor, independent of n, such that T is a linear function of n: T ( n ) = n ⋅ σ or T j ( n ) = σ i j n i . {\displaystyle \mathbf {T} ^{(\mathbf {n} )}=\mathbf {n} \cdot {\boldsymbol {\sigma }}\quad {\text{or}}\quad T_{j}^{(n)}=\sigma _{ij}n_{i}.} This equation implies that the stress vector T(n) at any point P in a continuum associated with a plane with normal unit vector n can be expressed as a function of the stress vectors on the planes perpendicular to the coordinate axes, i.e. in terms of the components σij of the stress tensor σ. To prove this expression, consider a tetrahedron with three faces oriented in the coordinate planes, and with an infinitesimal area dA oriented in an arbitrary direction specified by a normal unit vector n (Figure 2.2). The tetrahedron is formed by slicing the infinitesimal element along an arbitrary plane with unit normal n. The stress vector on this plane is denoted by T(n). The stress vectors acting on the faces of the tetrahedron are denoted as T(e1), T(e2), and T(e3), and are by definition the components σij of the stress tensor σ. This tetrahedron is sometimes called the Cauchy tetrahedron. The equilibrium of forces, i.e. 
Euler's first law of motion (Newton's second law of motion), gives: T ( n ) d A − T ( e 1 ) d A 1 − T ( e 2 ) d A 2 − T ( e 3 ) d A 3 = ρ ( h 3 d A ) a , {\displaystyle \mathbf {T} ^{(\mathbf {n} )}\,dA-\mathbf {T} ^{(\mathbf {e} _{1})}\,dA_{1}-\mathbf {T} ^{(\mathbf {e} _{2})}\,dA_{2}-\mathbf {T} ^{(\mathbf {e} _{3})}\,dA_{3}=\rho \left({\frac {h}{3}}dA\right)\mathbf {a} ,} where the right-hand-side represents the product of the mass enclosed by the tetrahedron and its acceleration: ρ is the density, a is the acceleration, and h is the height of the tetrahedron, considering the plane n as the base. The area of the faces of the tetrahedron perpendicular to the axes can be found by projecting dA into each face (using the dot product): d A 1 = ( n ⋅ e 1 ) d A = n 1 d A , {\displaystyle dA_{1}=\left(\mathbf {n} \cdot \mathbf {e} _{1}\right)dA=n_{1}\;dA,} d A 2 = ( n ⋅ e 2 ) d A = n 2 d A , {\displaystyle dA_{2}=\left(\mathbf {n} \cdot \mathbf {e} _{2}\right)dA=n_{2}\;dA,} d A 3 = ( n ⋅ e 3 ) d A = n 3 d A , {\displaystyle dA_{3}=\left(\mathbf {n} \cdot \mathbf {e} _{3}\right)dA=n_{3}\;dA,} and then substituting into the equation to cancel out dA: T ( n ) − T ( e 1 ) n 1 − T ( e 2 ) n 2 − T ( e 3 ) n 3 = ρ ( h 3 ) a . {\displaystyle \mathbf {T} ^{(\mathbf {n} )}-\mathbf {T} ^{(\mathbf {e} _{1})}n_{1}-\mathbf {T} ^{(\mathbf {e} _{2})}n_{2}-\mathbf {T} ^{(\mathbf {e} _{3})}n_{3}=\rho \left({\frac {h}{3}}\right)\mathbf {a} .} To consider the limiting case as the tetrahedron shrinks to a point, h must go to 0 (intuitively, the plane n is translated along n toward O). As a result, the right-hand-side of the equation approaches 0, so T ( n ) = T ( e 1 ) n 1 + T ( e 2 ) n 2 + T ( e 3 ) n 3 . 
{\displaystyle \mathbf {T} ^{(\mathbf {n} )}=\mathbf {T} ^{(\mathbf {e} _{1})}n_{1}+\mathbf {T} ^{(\mathbf {e} _{2})}n_{2}+\mathbf {T} ^{(\mathbf {e} _{3})}n_{3}.} Assuming a material element (see figure at the top of the page) with planes perpendicular to the coordinate axes of a Cartesian coordinate system, the stress vectors associated with each of the element planes, i.e. T(e1), T(e2), and T(e3) can be decomposed into a normal component and two shear components, i.e. components in the direction of the three coordinate axes. For the particular case of a surface with normal unit vector oriented in the direction of the x1-axis, denote the normal stress by σ11, and the two shear stresses as σ12 and σ13: T ( e 1 ) = T 1 ( e 1 ) e 1 + T 2 ( e 1 ) e 2 + T 3 ( e 1 ) e 3 = σ 11 e 1 + σ 12 e 2 + σ 13 e 3 , {\displaystyle \mathbf {T} ^{(\mathbf {e} _{1})}=T_{1}^{(\mathbf {e} _{1})}\mathbf {e} _{1}+T_{2}^{(\mathbf {e} _{1})}\mathbf {e} _{2}+T_{3}^{(\mathbf {e} _{1})}\mathbf {e} _{3}=\sigma _{11}\mathbf {e} _{1}+\sigma _{12}\mathbf {e} _{2}+\sigma _{13}\mathbf {e} _{3},} T ( e 2 ) = T 1 ( e 2 ) e 1 + T 2 ( e 2 ) e 2 + T 3 ( e 2 ) e 3 = σ 21 e 1 + σ 22 e 2 + σ 23 e 3 , {\displaystyle \mathbf {T} ^{(\mathbf {e} _{2})}=T_{1}^{(\mathbf {e} _{2})}\mathbf {e} _{1}+T_{2}^{(\mathbf {e} _{2})}\mathbf {e} _{2}+T_{3}^{(\mathbf {e} _{2})}\mathbf {e} _{3}=\sigma _{21}\mathbf {e} _{1}+\sigma _{22}\mathbf {e} _{2}+\sigma _{23}\mathbf {e} _{3},} T ( e 3 ) = T 1 ( e 3 ) e 1 + T 2 ( e 3 ) e 2 + T 3 ( e 3 ) e 3 = σ 31 e 1 + σ 32 e 2 + σ 33 e 3 , {\displaystyle \mathbf {T} ^{(\mathbf {e} _{3})}=T_{1}^{(\mathbf {e} _{3})}\mathbf {e} _{1}+T_{2}^{(\mathbf {e} _{3})}\mathbf {e} _{2}+T_{3}^{(\mathbf {e} _{3})}\mathbf {e} _{3}=\sigma _{31}\mathbf {e} _{1}+\sigma _{32}\mathbf {e} _{2}+\sigma _{33}\mathbf {e} _{3},} In index notation this is T ( e i ) = T j ( e i ) e j = σ i j e j . 
{\displaystyle \mathbf {T} ^{(\mathbf {e} _{i})}=T_{j}^{(\mathbf {e} _{i})}\mathbf {e} _{j}=\sigma _{ij}\mathbf {e} _{j}.} The nine components σij of the stress vectors are the components of a second-order Cartesian tensor called the Cauchy stress tensor, which can be used to completely define the state of stress at a point and is given by σ = σ i j = [ T ( e 1 ) T ( e 2 ) T ( e 3 ) ] = [ σ 11 σ 12 σ 13 σ 21 σ 22 σ 23 σ 31 σ 32 σ 33 ] ≡ [ σ x x σ x y σ x z σ y x σ y y σ y z σ z x σ z y σ z z ] ≡ [ σ x τ x y τ x z τ y x σ y τ y z τ z x τ z y σ z ] , {\displaystyle {\boldsymbol {\sigma }}=\sigma _{ij}=\left[{\begin{matrix}\mathbf {T} ^{(\mathbf {e} _{1})}\\\mathbf {T} ^{(\mathbf {e} _{2})}\\\mathbf {T} ^{(\mathbf {e} _{3})}\\\end{matrix}}\right]=\left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\\\end{matrix}}\right]\equiv \left[{\begin{matrix}\sigma _{xx}&\sigma _{xy}&\sigma _{xz}\\\sigma _{yx}&\sigma _{yy}&\sigma _{yz}\\\sigma _{zx}&\sigma _{zy}&\sigma _{zz}\\\end{matrix}}\right]\equiv \left[{\begin{matrix}\sigma _{x}&\tau _{xy}&\tau _{xz}\\\tau _{yx}&\sigma _{y}&\tau _{yz}\\\tau _{zx}&\tau _{zy}&\sigma _{z}\\\end{matrix}}\right],} where σ11, σ22, and σ33 are normal stresses, and σ12, σ13, σ21, σ23, σ31, and σ32 are shear stresses. The first index i indicates that the stress acts on a plane normal to the Xi -axis, and the second index j denotes the direction in which the stress acts (for example, σ12 implies that the stress acts on the plane normal to the first axis, X1, and acts along the second axis, X2). A stress component is positive if it acts in the positive direction of the coordinate axes, and if the plane where it acts has an outward normal vector pointing in the positive coordinate direction.
Thus, using the components of the stress tensor T ( n ) = T ( e 1 ) n 1 + T ( e 2 ) n 2 + T ( e 3 ) n 3 = ∑ i = 1 3 T ( e i ) n i = ( σ i j e j ) n i = σ i j n i e j {\displaystyle {\begin{aligned}\mathbf {T} ^{(\mathbf {n} )}&=\mathbf {T} ^{(\mathbf {e} _{1})}n_{1}+\mathbf {T} ^{(\mathbf {e} _{2})}n_{2}+\mathbf {T} ^{(\mathbf {e} _{3})}n_{3}\\&=\sum _{i=1}^{3}\mathbf {T} ^{(\mathbf {e} _{i})}n_{i}\\&=\left(\sigma _{ij}\mathbf {e} _{j}\right)n_{i}\\&=\sigma _{ij}n_{i}\mathbf {e} _{j}\end{aligned}}} or, equivalently, T j ( n ) = σ i j n i . {\displaystyle T_{j}^{(\mathbf {n} )}=\sigma _{ij}n_{i}.} Alternatively, in matrix form we have [ T 1 ( n ) T 2 ( n ) T 3 ( n ) ] = [ n 1 n 2 n 3 ] ⋅ [ σ 11 σ 12 σ 13 σ 21 σ 22 σ 23 σ 31 σ 32 σ 33 ] . {\displaystyle \left[{\begin{matrix}T_{1}^{(\mathbf {n} )}&T_{2}^{(\mathbf {n} )}&T_{3}^{(\mathbf {n} )}\end{matrix}}\right]=\left[{\begin{matrix}n_{1}&n_{2}&n_{3}\end{matrix}}\right]\cdot \left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\\\end{matrix}}\right].} The Voigt notation representation of the Cauchy stress tensor takes advantage of the symmetry of the stress tensor to express the stress as a six-dimensional vector of the form: σ = [ σ 1 σ 2 σ 3 σ 4 σ 5 σ 6 ] T ≡ [ σ 11 σ 22 σ 33 σ 23 σ 13 σ 12 ] T . {\displaystyle {\boldsymbol {\sigma }}={\begin{bmatrix}\sigma _{1}&\sigma _{2}&\sigma _{3}&\sigma _{4}&\sigma _{5}&\sigma _{6}\end{bmatrix}}^{\textsf {T}}\equiv {\begin{bmatrix}\sigma _{11}&\sigma _{22}&\sigma _{33}&\sigma _{23}&\sigma _{13}&\sigma _{12}\end{bmatrix}}^{\textsf {T}}.} The Voigt notation is used extensively in representing stress–strain relations in solid mechanics and for computational efficiency in numerical structural mechanics software. 
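The Cauchy relation and the Voigt repacking above can be sketched numerically; NumPy is assumed here, and the stress values are purely illustrative, not taken from the text:

```python
import numpy as np

# Illustrative symmetric stress state (e.g. in MPa); values are assumptions.
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])

# Unit normal of an arbitrary cutting plane.
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)

# Cauchy relation in the row-vector matrix form above: T_j = sigma_ij n_i.
T = n @ sigma

# Voigt vector [s11, s22, s33, s23, s13, s12], exploiting symmetry.
voigt = np.array([sigma[0, 0], sigma[1, 1], sigma[2, 2],
                  sigma[1, 2], sigma[0, 2], sigma[0, 1]])
```

Because the stress tensor is symmetric, `n @ sigma` and `sigma @ n` give the same traction vector.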
=== Transformation rule of the stress tensor === It can be shown that the stress tensor is a contravariant second order tensor, which is a statement of how it transforms under a change of the coordinate system. From an xi-system to an xi' -system, the components σij in the initial system are transformed into the components σij' in the new system according to the tensor transformation rule (Figure 2.4): σ i j ′ = a i m a j n σ m n or σ ′ = A σ A T , {\displaystyle \sigma '_{ij}=a_{im}a_{jn}\sigma _{mn}\quad {\text{or}}\quad {\boldsymbol {\sigma }}'=\mathbf {A} {\boldsymbol {\sigma }}\mathbf {A} ^{\textsf {T}},} where A is a rotation matrix with components aij. In matrix form this is [ σ 11 ′ σ 12 ′ σ 13 ′ σ 21 ′ σ 22 ′ σ 23 ′ σ 31 ′ σ 32 ′ σ 33 ′ ] = [ a 11 a 12 a 13 a 21 a 22 a 23 a 31 a 32 a 33 ] [ σ 11 σ 12 σ 13 σ 21 σ 22 σ 23 σ 31 σ 32 σ 33 ] [ a 11 a 21 a 31 a 12 a 22 a 32 a 13 a 23 a 33 ] . {\displaystyle \left[{\begin{matrix}\sigma '_{11}&\sigma '_{12}&\sigma '_{13}\\\sigma '_{21}&\sigma '_{22}&\sigma '_{23}\\\sigma '_{31}&\sigma '_{32}&\sigma '_{33}\\\end{matrix}}\right]=\left[{\begin{matrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\\\end{matrix}}\right]\left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\\\end{matrix}}\right]\left[{\begin{matrix}a_{11}&a_{21}&a_{31}\\a_{12}&a_{22}&a_{32}\\a_{13}&a_{23}&a_{33}\\\end{matrix}}\right].} Expanding the matrix operation, and simplifying terms using the symmetry of the stress tensor, gives σ 11 ′ = a 11 2 σ 11 + a 12 2 σ 22 + a 13 2 σ 33 + 2 a 11 a 12 σ 12 + 2 a 11 a 13 σ 13 + 2 a 12 a 13 σ 23 , σ 22 ′ = a 21 2 σ 11 + a 22 2 σ 22 + a 23 2 σ 33 + 2 a 21 a 22 σ 12 + 2 a 21 a 23 σ 13 + 2 a 22 a 23 σ 23 , σ 33 ′ = a 31 2 σ 11 + a 32 2 σ 22 + a 33 2 σ 33 + 2 a 31 a 32 σ 12 + 2 a 31 a 33 σ 13 + 2 a 32 a 33 σ 23 , σ 12 ′ = a 11 a 21 σ 11 + a 12 a 22 σ 22 + a 13 a 23 σ 33 + ( a 11 a 22 + a 12 a 21 ) σ 12 + ( a 12 a 
23 + a 13 a 22 ) σ 23 + ( a 11 a 23 + a 13 a 21 ) σ 13 , σ 23 ′ = a 21 a 31 σ 11 + a 22 a 32 σ 22 + a 23 a 33 σ 33 + ( a 21 a 32 + a 22 a 31 ) σ 12 + ( a 22 a 33 + a 23 a 32 ) σ 23 + ( a 21 a 33 + a 23 a 31 ) σ 13 , σ 13 ′ = a 11 a 31 σ 11 + a 12 a 32 σ 22 + a 13 a 33 σ 33 + ( a 11 a 32 + a 12 a 31 ) σ 12 + ( a 12 a 33 + a 13 a 32 ) σ 23 + ( a 11 a 33 + a 13 a 31 ) σ 13 . {\displaystyle {\begin{aligned}\sigma _{11}'={}&a_{11}^{2}\sigma _{11}+a_{12}^{2}\sigma _{22}+a_{13}^{2}\sigma _{33}+2a_{11}a_{12}\sigma _{12}+2a_{11}a_{13}\sigma _{13}+2a_{12}a_{13}\sigma _{23},\\\sigma _{22}'={}&a_{21}^{2}\sigma _{11}+a_{22}^{2}\sigma _{22}+a_{23}^{2}\sigma _{33}+2a_{21}a_{22}\sigma _{12}+2a_{21}a_{23}\sigma _{13}+2a_{22}a_{23}\sigma _{23},\\\sigma _{33}'={}&a_{31}^{2}\sigma _{11}+a_{32}^{2}\sigma _{22}+a_{33}^{2}\sigma _{33}+2a_{31}a_{32}\sigma _{12}+2a_{31}a_{33}\sigma _{13}+2a_{32}a_{33}\sigma _{23},\\\sigma _{12}'={}&a_{11}a_{21}\sigma _{11}+a_{12}a_{22}\sigma _{22}+a_{13}a_{23}\sigma _{33}\\&+(a_{11}a_{22}+a_{12}a_{21})\sigma _{12}+(a_{12}a_{23}+a_{13}a_{22})\sigma _{23}+(a_{11}a_{23}+a_{13}a_{21})\sigma _{13},\\\sigma _{23}'={}&a_{21}a_{31}\sigma _{11}+a_{22}a_{32}\sigma _{22}+a_{23}a_{33}\sigma _{33}\\&+(a_{21}a_{32}+a_{22}a_{31})\sigma _{12}+(a_{22}a_{33}+a_{23}a_{32})\sigma _{23}+(a_{21}a_{33}+a_{23}a_{31})\sigma _{13},\\\sigma _{13}'={}&a_{11}a_{31}\sigma _{11}+a_{12}a_{32}\sigma _{22}+a_{13}a_{33}\sigma _{33}\\&+(a_{11}a_{32}+a_{12}a_{31})\sigma _{12}+(a_{12}a_{33}+a_{13}a_{32})\sigma _{23}+(a_{11}a_{33}+a_{13}a_{31})\sigma _{13}.\end{aligned}}} The Mohr circle for stress is a graphical representation of this transformation of stresses. 
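The transformation rule σ′ = AσAᵀ can be checked for a concrete rotation; the stress values and the 30° rotation angle are illustrative choices, not from the text:

```python
import numpy as np

# Illustrative symmetric stress state (assumed values).
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])

# Rotation by 30 degrees about the x3-axis; row i holds the new basis
# vector e'_i expressed in the old basis, so a_ij = e'_i . e_j.
c, s = np.cos(np.pi / 6.0), np.sin(np.pi / 6.0)
A = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

# sigma'_ij = a_im a_jn sigma_mn, i.e. sigma' = A sigma A^T.
sigma_p = A @ sigma @ A.T
```

The rotated components change, but the trace (the first invariant) and the symmetry of the tensor are preserved.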
=== Normal and shear stresses === The magnitude of the normal stress component σn of any stress vector T(n) acting on an arbitrary plane with normal unit vector n at a given point, in terms of the components σij of the stress tensor σ, is the dot product of the stress vector and the normal unit vector: σ n = T ( n ) ⋅ n = T i ( n ) n i = σ i j n i n j . {\displaystyle {\begin{aligned}\sigma _{\mathrm {n} }&=\mathbf {T} ^{(\mathbf {n} )}\cdot \mathbf {n} \\&=T_{i}^{(\mathbf {n} )}n_{i}\\&=\sigma _{ij}n_{i}n_{j}.\end{aligned}}} The magnitude of the shear stress component τn, acting orthogonal to the vector n, can then be found using the Pythagorean theorem: τ n = ( T ( n ) ) 2 − σ n 2 = T i ( n ) T i ( n ) − σ n 2 , {\displaystyle {\begin{aligned}\tau _{\mathrm {n} }&={\sqrt {\left(T^{(\mathbf {n} )}\right)^{2}-\sigma _{\mathrm {n} }^{2}}}\\&={\sqrt {T_{i}^{(\mathbf {n} )}T_{i}^{(\mathbf {n} )}-\sigma _{\mathrm {n} }^{2}}},\end{aligned}}} where ( T ( n ) ) 2 = T i ( n ) T i ( n ) = ( σ i j n j ) ( σ i k n k ) = σ i j σ i k n j n k . 
{\displaystyle \left(T^{(\mathbf {n} )}\right)^{2}=T_{i}^{(\mathbf {n} )}T_{i}^{(\mathbf {n} )}=\left(\sigma _{ij}n_{j}\right)\left(\sigma _{ik}n_{k}\right)=\sigma _{ij}\sigma _{ik}n_{j}n_{k}.} == Balance laws – Cauchy's equations of motion == === Cauchy's first law of motion === According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations: σ j i , j + F i = 0 {\displaystyle \sigma _{ji,j}+F_{i}=0} , where σ j i , j = ∑ j ∂ j σ j i {\displaystyle \sigma _{ji,j}=\sum _{j}\partial _{j}\sigma _{ji}} For example, for a hydrostatic fluid in equilibrium conditions, the stress tensor takes on the form: σ i j = − p δ i j , {\displaystyle {\sigma _{ij}}=-p{\delta _{ij}},} where p {\displaystyle p} is the hydrostatic pressure, and δ i j {\displaystyle {\delta _{ij}}\ } is the kronecker delta. === Cauchy's second law of motion === According to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components, instead of the original nine: σ i j = σ j i {\displaystyle \sigma _{ij}=\sigma _{ji}} However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, K n → 1 {\displaystyle K_{n}\rightarrow 1} , or the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers. 
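The normal and shear decomposition of the traction vector from the "Normal and shear stresses" section above can be sketched numerically (NumPy assumed; the stress values are illustrative):

```python
import numpy as np

# Illustrative symmetric stress tensor (assumed values).
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])

n = np.array([1.0, 0.0, 0.0])        # plane normal to the x1-axis

T = n @ sigma                        # traction vector, T_j = sigma_ij n_i
sigma_n = T @ n                      # normal component, sigma_ij n_i n_j
tau_n = np.sqrt(T @ T - sigma_n**2)  # shear component via the Pythagorean theorem
```

For this choice of plane the normal component recovers σ11 and the shear component recovers the in-plane shear σ12, as expected.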
== Principal stresses and stress invariants == At every point in a stressed body there are at least three planes, called principal planes, with normal vectors n {\displaystyle \mathbf {n} } , called principal directions, where the corresponding stress vector is perpendicular to the plane, i.e., parallel or in the same direction as the normal vector n {\displaystyle \mathbf {n} } , and where there are no shear stresses τ n {\displaystyle \tau _{\mathrm {n} }} . The three stresses normal to these principal planes are called principal stresses. The components σ i j {\displaystyle \sigma _{ij}} of the stress tensor depend on the orientation of the coordinate system at the point under consideration. However, the stress tensor itself is a physical quantity and as such it is independent of the coordinate system chosen to represent it. There are certain invariants associated with every tensor which are also independent of the coordinate system. For example, a vector is a simple tensor of rank one. In three dimensions, it has three components. The value of these components will depend on the coordinate system chosen to represent the vector, but the magnitude of the vector is a physical quantity (a scalar) and is independent of the Cartesian coordinate system chosen to represent the vector (so long as it is orthonormal). Similarly, every second-rank tensor (such as the stress and the strain tensors) has three independent invariant quantities associated with it. One set of such invariants are the principal stresses of the stress tensor, which are just the eigenvalues of the stress tensor. Their direction vectors are the principal directions or eigenvectors.
A stress vector parallel to the normal unit vector n {\displaystyle \mathbf {n} } is given by: T ( n ) = λ n = σ n n {\displaystyle \mathbf {T} ^{(\mathbf {n} )}=\lambda \mathbf {n} =\mathbf {\sigma } _{\mathrm {n} }\mathbf {n} } where λ {\displaystyle \lambda } is a constant of proportionality, and in this particular case corresponds to the magnitudes σ n {\displaystyle \sigma _{\mathrm {n} }} of the normal stress vectors or principal stresses. Knowing that T i ( n ) = σ i j n j {\displaystyle T_{i}^{(n)}=\sigma _{ij}n_{j}} and n i = δ i j n j {\displaystyle n_{i}=\delta _{ij}n_{j}} , we have T i ( n ) = λ n i σ i j n j = λ n i σ i j n j − λ n i = 0 ( σ i j − λ δ i j ) n j = 0 {\displaystyle {\begin{aligned}T_{i}^{(n)}&=\lambda n_{i}\\\sigma _{ij}n_{j}&=\lambda n_{i}\\\sigma _{ij}n_{j}-\lambda n_{i}&=0\\\left(\sigma _{ij}-\lambda \delta _{ij}\right)n_{j}&=0\\\end{aligned}}} This is a homogeneous system of three linear equations in which n j {\displaystyle n_{j}} are the unknowns. To obtain a nontrivial (non-zero) solution for n j {\displaystyle n_{j}} , the determinant of the coefficient matrix must be equal to zero, i.e. the system is singular.
Thus, | σ i j − λ δ i j | = | σ 11 − λ σ 12 σ 13 σ 21 σ 22 − λ σ 23 σ 31 σ 32 σ 33 − λ | = 0 {\displaystyle \left|\sigma _{ij}-\lambda \delta _{ij}\right|={\begin{vmatrix}\sigma _{11}-\lambda &\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}-\lambda &\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}-\lambda \\\end{vmatrix}}=0} Expanding the determinant leads to the characteristic equation | σ i j − λ δ i j | = − λ 3 + I 1 λ 2 − I 2 λ + I 3 = 0 {\displaystyle \left|\sigma _{ij}-\lambda \delta _{ij}\right|=-\lambda ^{3}+I_{1}\lambda ^{2}-I_{2}\lambda +I_{3}=0} where I 1 = σ 11 + σ 22 + σ 33 = σ k k = tr ( σ ) I 2 = | σ 22 σ 23 σ 32 σ 33 | + | σ 11 σ 13 σ 31 σ 33 | + | σ 11 σ 12 σ 21 σ 22 | = σ 11 σ 22 + σ 22 σ 33 + σ 11 σ 33 − σ 12 2 − σ 23 2 − σ 31 2 = 1 2 ( σ i i σ j j − σ i j σ j i ) = 1 2 [ ( tr ( σ ) ) 2 − tr ( σ 2 ) ] I 3 = det ( σ i j ) = det ( σ ) = σ 11 σ 22 σ 33 + 2 σ 12 σ 23 σ 31 − σ 12 2 σ 33 − σ 23 2 σ 11 − σ 31 2 σ 22 {\displaystyle {\begin{aligned}I_{1}&=\sigma _{11}+\sigma _{22}+\sigma _{33}\\&=\sigma _{kk}={\text{tr}}({\boldsymbol {\sigma }})\\[4pt]I_{2}&={\begin{vmatrix}\sigma _{22}&\sigma _{23}\\\sigma _{32}&\sigma _{33}\\\end{vmatrix}}+{\begin{vmatrix}\sigma _{11}&\sigma _{13}\\\sigma _{31}&\sigma _{33}\\\end{vmatrix}}+{\begin{vmatrix}\sigma _{11}&\sigma _{12}\\\sigma _{21}&\sigma _{22}\\\end{vmatrix}}\\&=\sigma _{11}\sigma _{22}+\sigma _{22}\sigma _{33}+\sigma _{11}\sigma _{33}-\sigma _{12}^{2}-\sigma _{23}^{2}-\sigma _{31}^{2}\\&={\frac {1}{2}}\left(\sigma _{ii}\sigma _{jj}-\sigma _{ij}\sigma _{ji}\right)={\frac {1}{2}}\left[\left({\text{tr}}({\boldsymbol {\sigma }})\right)^{2}-{\text{tr}}\left({\boldsymbol {\sigma }}^{2}\right)\right]\\[4pt]I_{3}&=\det(\sigma _{ij})=\det({\boldsymbol {\sigma }})\\&=\sigma _{11}\sigma _{22}\sigma _{33}+2\sigma _{12}\sigma _{23}\sigma _{31}-\sigma _{12}^{2}\sigma _{33}-\sigma _{23}^{2}\sigma _{11}-\sigma _{31}^{2}\sigma _{22}\\\end{aligned}}} The characteristic equation has three real roots λ i 
{\displaystyle \lambda _{i}} ; the roots are real rather than complex because the stress tensor is symmetric. The principal stresses σ 1 = max ( λ 1 , λ 2 , λ 3 ) {\displaystyle \sigma _{1}=\max \left(\lambda _{1},\lambda _{2},\lambda _{3}\right)} , σ 3 = min ( λ 1 , λ 2 , λ 3 ) {\displaystyle \sigma _{3}=\min \left(\lambda _{1},\lambda _{2},\lambda _{3}\right)} and σ 2 = I 1 − σ 1 − σ 3 {\displaystyle \sigma _{2}=I_{1}-\sigma _{1}-\sigma _{3}} are functions of the eigenvalues λ i {\displaystyle \lambda _{i}} . The eigenvalues are the roots of the characteristic polynomial. The principal stresses are unique for a given stress tensor. Therefore, from the characteristic equation, the coefficients I 1 {\displaystyle I_{1}} , I 2 {\displaystyle I_{2}} and I 3 {\displaystyle I_{3}} , called the first, second, and third stress invariants, respectively, always have the same value regardless of the coordinate system's orientation. For each eigenvalue, there is a non-trivial solution for n j {\displaystyle n_{j}} in the equation ( σ i j − λ δ i j ) n j = 0 {\displaystyle \left(\sigma _{ij}-\lambda \delta _{ij}\right)n_{j}=0} . These solutions are the principal directions or eigenvectors defining the plane where the principal stresses act. The principal stresses and principal directions characterize the stress at a point and are independent of the orientation. A coordinate system with axes oriented to the principal directions implies that the normal stresses are the principal stresses and the stress tensor is represented by a diagonal matrix: σ i j = [ σ 1 0 0 0 σ 2 0 0 0 σ 3 ] {\displaystyle \sigma _{ij}={\begin{bmatrix}\sigma _{1}&0&0\\0&\sigma _{2}&0\\0&0&\sigma _{3}\end{bmatrix}}} The principal stresses can be combined to form the stress invariants, I 1 {\displaystyle I_{1}} , I 2 {\displaystyle I_{2}} , and I 3 {\displaystyle I_{3}} . The first and third invariants are the trace and determinant, respectively, of the stress tensor.
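Since the principal stresses are the eigenvalues of the symmetric stress tensor, they and the invariants can be computed directly; a sketch with illustrative values (NumPy assumed):

```python
import numpy as np

# Illustrative symmetric stress tensor (assumed values).
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])

# Principal stresses: eigenvalues of the symmetric tensor (ascending order).
lam = np.linalg.eigvalsh(sigma)
s3, s2, s1 = lam                     # so that sigma_1 >= sigma_2 >= sigma_3

# Stress invariants from the tensor components.
I1 = np.trace(sigma)
I2 = 0.5 * (np.trace(sigma)**2 - np.trace(sigma @ sigma))
I3 = np.linalg.det(sigma)
```

The elementary symmetric functions of the eigenvalues reproduce I1, I2 and I3, matching the expressions in the characteristic equation.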
Thus, I 1 = σ 1 + σ 2 + σ 3 I 2 = σ 1 σ 2 + σ 2 σ 3 + σ 3 σ 1 I 3 = σ 1 σ 2 σ 3 {\displaystyle {\begin{aligned}I_{1}&=\sigma _{1}+\sigma _{2}+\sigma _{3}\\I_{2}&=\sigma _{1}\sigma _{2}+\sigma _{2}\sigma _{3}+\sigma _{3}\sigma _{1}\\I_{3}&=\sigma _{1}\sigma _{2}\sigma _{3}\\\end{aligned}}} Because of its simplicity, the principal coordinate system is often useful when considering the state of the elastic medium at a particular point. Principal stresses are often expressed by the following equation for evaluating stresses in the x and y directions, or axial and bending stresses on a part.: p.58–59  The principal normal stresses can then be used to calculate the von Mises stress and ultimately the safety factor and margin of safety. σ 1 , σ 2 = σ x + σ y 2 ± ( σ x − σ y 2 ) 2 + τ x y 2 {\displaystyle \sigma _{1},\sigma _{2}={\frac {\sigma _{x}+\sigma _{y}}{2}}\pm {\sqrt {\left({\frac {\sigma _{x}-\sigma _{y}}{2}}\right)^{2}+\tau _{xy}^{2}}}} The part of the equation under the square root, taken with the plus and minus signs, equals the maximum and minimum in-plane shear stress: τ max , τ min = ± ( σ x − σ y 2 ) 2 + τ x y 2 {\displaystyle \tau _{\max },\tau _{\min }=\pm {\sqrt {\left({\frac {\sigma _{x}-\sigma _{y}}{2}}\right)^{2}+\tau _{xy}^{2}}}} == Maximum and minimum shear stresses == The maximum shear stress or maximum principal shear stress is equal to one-half the difference between the largest and smallest principal stresses, and acts on the plane that bisects the angle between the directions of the largest and smallest principal stresses, i.e. the plane of the maximum shear stress is oriented 45 ∘ {\displaystyle 45^{\circ }} from the principal stress planes.
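The plane-stress principal-stress and shear formulas above can be checked against a direct eigenvalue computation; the component values are illustrative:

```python
import numpy as np

# Illustrative plane-stress components (assumed values).
sx, sy, txy = 50.0, -20.0, 30.0

center = (sx + sy) / 2.0
radius = np.sqrt(((sx - sy) / 2.0)**2 + txy**2)

s1, s2 = center + radius, center - radius  # in-plane principal stresses
tau_max = radius                           # maximum in-plane shear stress
```

Geometrically, `center` and `radius` are the center and radius of the corresponding Mohr circle.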
The maximum shear stress is expressed as τ max = 1 2 | σ max − σ min | {\displaystyle \tau _{\max }={\frac {1}{2}}\left|\sigma _{\max }-\sigma _{\min }\right|} Assuming σ 1 ≥ σ 2 ≥ σ 3 {\displaystyle \sigma _{1}\geq \sigma _{2}\geq \sigma _{3}} then τ max = 1 2 | σ 1 − σ 3 | {\displaystyle \tau _{\max }={\frac {1}{2}}\left|\sigma _{1}-\sigma _{3}\right|} When the stress tensor is non zero the normal stress component acting on the plane for the maximum shear stress is non-zero and it is equal to σ n = 1 2 ( σ 1 + σ 3 ) {\displaystyle \sigma _{\text{n}}={\frac {1}{2}}\left(\sigma _{1}+\sigma _{3}\right)} == Stress deviator tensor == The stress tensor σ i j {\displaystyle \sigma _{ij}} can be expressed as the sum of two other stress tensors: a mean hydrostatic stress tensor or volumetric stress tensor or mean normal stress tensor, π δ i j {\displaystyle \pi \delta _{ij}} , which tends to change the volume of the stressed body; and a deviatoric component called the stress deviator tensor, s i j {\displaystyle s_{ij}} , which tends to distort it. So σ i j = s i j + π δ i j , {\displaystyle \sigma _{ij}=s_{ij}+\pi \delta _{ij},\,} where π {\displaystyle \pi } is the mean stress given by π = σ k k 3 = σ 11 + σ 22 + σ 33 3 = 1 3 I 1 . {\displaystyle \pi ={\frac {\sigma _{kk}}{3}}={\frac {\sigma _{11}+\sigma _{22}+\sigma _{33}}{3}}={\frac {1}{3}}I_{1}.\,} Pressure ( p {\displaystyle p} ) is generally defined as negative one-third the trace of the stress tensor minus any stress the divergence of the velocity contributes with, i.e. p = ζ ∇ ⋅ u → − π = ζ ∂ u k ∂ x k − π = ∑ k ζ ∂ u k ∂ x k − π , {\displaystyle p=\zeta \,\nabla \cdot {\vec {u}}-\pi =\zeta \,{\frac {\partial u_{k}}{\partial x_{k}}}-\pi =\sum _{k}\zeta \,{\frac {\partial u_{k}}{\partial x_{k}}}-\pi ,} where ζ {\displaystyle \zeta } is a proportionality constant (viz. 
the Volume viscosity), ∇ ⋅ {\displaystyle \nabla \cdot } is the divergence operator, x k {\displaystyle x_{k}} is the k:th Cartesian coordinate, u → {\displaystyle {\vec {u}}} is the flow velocity and u k {\displaystyle u_{k}} is the k:th Cartesian component of u → {\displaystyle {\vec {u}}} . The deviatoric stress tensor can be obtained by subtracting the hydrostatic stress tensor from the Cauchy stress tensor: s i j = σ i j − σ k k 3 δ i j , [ s 11 s 12 s 13 s 21 s 22 s 23 s 31 s 32 s 33 ] = [ σ 11 σ 12 σ 13 σ 21 σ 22 σ 23 σ 31 σ 32 σ 33 ] − [ π 0 0 0 π 0 0 0 π ] = [ σ 11 − π σ 12 σ 13 σ 21 σ 22 − π σ 23 σ 31 σ 32 σ 33 − π ] . {\displaystyle {\begin{aligned}s_{ij}&=\sigma _{ij}-{\frac {\sigma _{kk}}{3}}\delta _{ij},\,\\\left[{\begin{matrix}s_{11}&s_{12}&s_{13}\\s_{21}&s_{22}&s_{23}\\s_{31}&s_{32}&s_{33}\end{matrix}}\right]&=\left[{\begin{matrix}\sigma _{11}&\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}&\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}\end{matrix}}\right]-\left[{\begin{matrix}\pi &0&0\\0&\pi &0\\0&0&\pi \end{matrix}}\right]\\&=\left[{\begin{matrix}\sigma _{11}-\pi &\sigma _{12}&\sigma _{13}\\\sigma _{21}&\sigma _{22}-\pi &\sigma _{23}\\\sigma _{31}&\sigma _{32}&\sigma _{33}-\pi \end{matrix}}\right].\end{aligned}}} === Invariants of the stress deviator tensor === As it is a second order tensor, the stress deviator tensor also has a set of invariants, which can be obtained using the same procedure used to calculate the invariants of the stress tensor. It can be shown that the principal directions of the stress deviator tensor s i j {\displaystyle s_{ij}} are the same as the principal directions of the stress tensor σ i j {\displaystyle \sigma _{ij}} . 
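The decomposition into mean and deviatoric parts described above can be sketched as follows (illustrative values; NumPy assumed):

```python
import numpy as np

# Illustrative symmetric stress tensor (assumed values).
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])

pi_mean = np.trace(sigma) / 3.0      # mean (hydrostatic) stress
s = sigma - pi_mean * np.eye(3)      # stress deviator s_ij
```

By construction the deviator is traceless and the two parts sum back to the original tensor.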
Thus, the characteristic equation is | s i j − λ δ i j | = λ 3 − J 1 λ 2 − J 2 λ − J 3 = 0 , {\displaystyle \left|s_{ij}-\lambda \delta _{ij}\right|=\lambda ^{3}-J_{1}\lambda ^{2}-J_{2}\lambda -J_{3}=0,} where J 1 {\displaystyle J_{1}} , J 2 {\displaystyle J_{2}} and J 3 {\displaystyle J_{3}} are the first, second, and third deviatoric stress invariants, respectively. Their values are the same (invariant) regardless of the orientation of the coordinate system chosen. These deviatoric stress invariants can be expressed as a function of the components of s i j {\displaystyle s_{ij}} or its principal values s 1 {\displaystyle s_{1}} , s 2 {\displaystyle s_{2}} , and s 3 {\displaystyle s_{3}} , or alternatively, as a function of σ i j {\displaystyle \sigma _{ij}} or its principal values σ 1 {\displaystyle \sigma _{1}} , σ 2 {\displaystyle \sigma _{2}} , and σ 3 {\displaystyle \sigma _{3}} . Thus, J 1 = s k k = 0 , J 2 = 1 2 s i j s j i = 1 2 tr ⁡ ( s 2 ) = 1 2 ( s 1 2 + s 2 2 + s 3 2 ) = 1 6 [ ( σ 11 − σ 22 ) 2 + ( σ 22 − σ 33 ) 2 + ( σ 33 − σ 11 ) 2 ] + σ 12 2 + σ 23 2 + σ 31 2 = 1 6 [ ( σ 1 − σ 2 ) 2 + ( σ 2 − σ 3 ) 2 + ( σ 3 − σ 1 ) 2 ] = 1 3 I 1 2 − I 2 = 1 2 [ tr ⁡ ( σ 2 ) − 1 3 tr ⁡ ( σ ) 2 ] , J 3 = det ( s i j ) = 1 3 s i j s j k s k i = 1 3 tr ( s 3 ) = 1 3 ( s 1 3 + s 2 3 + s 3 3 ) = s 1 s 2 s 3 = 2 27 I 1 3 − 1 3 I 1 I 2 + I 3 = 1 3 [ tr ( σ 3 ) − tr ⁡ ( σ 2 ) tr ⁡ ( σ ) + 2 9 tr ⁡ ( σ ) 3 ] . 
{\displaystyle {\begin{aligned}J_{1}&=s_{kk}=0,\\[3pt]J_{2}&={\frac {1}{2}}s_{ij}s_{ji}={\frac {1}{2}}\operatorname {tr} \left({\boldsymbol {s}}^{2}\right)\\&={\frac {1}{2}}\left(s_{1}^{2}+s_{2}^{2}+s_{3}^{2}\right)\\&={\frac {1}{6}}\left[(\sigma _{11}-\sigma _{22})^{2}+(\sigma _{22}-\sigma _{33})^{2}+(\sigma _{33}-\sigma _{11})^{2}\right]+\sigma _{12}^{2}+\sigma _{23}^{2}+\sigma _{31}^{2}\\&={\frac {1}{6}}\left[(\sigma _{1}-\sigma _{2})^{2}+(\sigma _{2}-\sigma _{3})^{2}+(\sigma _{3}-\sigma _{1})^{2}\right]\\&={\frac {1}{3}}I_{1}^{2}-I_{2}={\frac {1}{2}}\left[\operatorname {tr} \left({\boldsymbol {\sigma }}^{2}\right)-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})^{2}\right],\\[3pt]J_{3}&=\det(s_{ij})\\&={\frac {1}{3}}s_{ij}s_{jk}s_{ki}={\frac {1}{3}}{\text{tr}}\left({\boldsymbol {s}}^{3}\right)\\&={\frac {1}{3}}\left(s_{1}^{3}+s_{2}^{3}+s_{3}^{3}\right)\\&=s_{1}s_{2}s_{3}\\&={\frac {2}{27}}I_{1}^{3}-{\frac {1}{3}}I_{1}I_{2}+I_{3}={\frac {1}{3}}\left[{\text{tr}}({\boldsymbol {\sigma }}^{3})-\operatorname {tr} \left({\boldsymbol {\sigma }}^{2}\right)\operatorname {tr} ({\boldsymbol {\sigma }})+{\frac {2}{9}}\operatorname {tr} ({\boldsymbol {\sigma }})^{3}\right].\,\end{aligned}}} Because s k k = 0 {\displaystyle s_{kk}=0} , the stress deviator tensor is in a state of pure shear. A quantity called the equivalent stress or von Mises stress is commonly used in solid mechanics. The equivalent stress is defined as σ vM = 3 J 2 = 1 2 [ ( σ 1 − σ 2 ) 2 + ( σ 2 − σ 3 ) 2 + ( σ 3 − σ 1 ) 2 ] . {\displaystyle \sigma _{\text{vM}}={\sqrt {3\,J_{2}}}={\sqrt {{\frac {1}{2}}~\left[(\sigma _{1}-\sigma _{2})^{2}+(\sigma _{2}-\sigma _{3})^{2}+(\sigma _{3}-\sigma _{1})^{2}\right]}}\,.} == Octahedral stresses == Considering the principal directions as the coordinate axes, a plane whose normal vector makes equal angles with each of the principal axes (i.e. having direction cosines equal to | 1 / 3 | {\displaystyle |1/{\sqrt {3}}|} ) is called an octahedral plane. 
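The two expressions given for the equivalent (von Mises) stress, via J2 and via the principal stresses, can be checked against each other numerically (illustrative values; NumPy assumed):

```python
import numpy as np

# Illustrative symmetric stress tensor (assumed values).
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, -20.0, 0.0],
                  [ 0.0,  0.0, 10.0]])

# J2 from the deviator ...
s = sigma - np.trace(sigma) / 3.0 * np.eye(3)
J2 = 0.5 * np.trace(s @ s)
vm_from_J2 = np.sqrt(3.0 * J2)

# ... and the same value from the principal stresses.
p = np.linalg.eigvalsh(sigma)
vm_from_principal = np.sqrt(0.5 * ((p[0] - p[1])**2
                                   + (p[1] - p[2])**2
                                   + (p[2] - p[0])**2))
```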
There are a total of eight octahedral planes (Figure 6). The normal and shear components of the stress tensor on these planes are called octahedral normal stress σ oct {\displaystyle \sigma _{\text{oct}}} and octahedral shear stress τ oct {\displaystyle \tau _{\text{oct}}} , respectively. The octahedral plane passing through the origin is known as the π-plane (π is not to be confused with the mean stress, denoted by π in the section above). On the π-plane, the mean stress vanishes, so the stress tensor reduces to its deviatoric part, σ i j = s i j {\textstyle \sigma _{ij}=s_{ij}} . Knowing that the stress tensor of point O (Figure 6) in the principal axes is σ i j = [ σ 1 0 0 0 σ 2 0 0 0 σ 3 ] {\displaystyle \sigma _{ij}={\begin{bmatrix}\sigma _{1}&0&0\\0&\sigma _{2}&0\\0&0&\sigma _{3}\end{bmatrix}}} the stress vector on an octahedral plane is then given by: T oct ( n ) = σ i j n i e j = σ 1 n 1 e 1 + σ 2 n 2 e 2 + σ 3 n 3 e 3 = 1 3 ( σ 1 e 1 + σ 2 e 2 + σ 3 e 3 ) {\displaystyle {\begin{aligned}\mathbf {T} _{\text{oct}}^{(\mathbf {n} )}&=\sigma _{ij}n_{i}\mathbf {e} _{j}\\&=\sigma _{1}n_{1}\mathbf {e} _{1}+\sigma _{2}n_{2}\mathbf {e} _{2}+\sigma _{3}n_{3}\mathbf {e} _{3}\\&={\frac {1}{\sqrt {3}}}(\sigma _{1}\mathbf {e} _{1}+\sigma _{2}\mathbf {e} _{2}+\sigma _{3}\mathbf {e} _{3})\end{aligned}}} The normal component of the stress vector at point O associated with the octahedral plane is σ oct = T i ( n ) n i = σ i j n i n j = σ 1 n 1 n 1 + σ 2 n 2 n 2 + σ 3 n 3 n 3 = 1 3 ( σ 1 + σ 2 + σ 3 ) = 1 3 I 1 {\displaystyle {\begin{aligned}\sigma _{\text{oct}}&=T_{i}^{(n)}n_{i}\\&=\sigma _{ij}n_{i}n_{j}\\&=\sigma _{1}n_{1}n_{1}+\sigma _{2}n_{2}n_{2}+\sigma _{3}n_{3}n_{3}\\&={\frac {1}{3}}(\sigma _{1}+\sigma _{2}+\sigma _{3})={\frac {1}{3}}I_{1}\end{aligned}}} which is the mean normal stress or hydrostatic stress. This value is the same in all eight octahedral planes.
The shear stress on the octahedral plane is then τ oct = T i ( n ) T i ( n ) − σ oct 2 = [ 1 3 ( σ 1 2 + σ 2 2 + σ 3 2 ) − 1 9 ( σ 1 + σ 2 + σ 3 ) 2 ] 1 2 = 1 3 [ ( σ 1 − σ 2 ) 2 + ( σ 2 − σ 3 ) 2 + ( σ 3 − σ 1 ) 2 ] 1 2 = 1 3 2 I 1 2 − 6 I 2 = 2 3 J 2 {\displaystyle {\begin{aligned}\tau _{\text{oct}}&={\sqrt {T_{i}^{(n)}T_{i}^{(n)}-\sigma _{\text{oct}}^{2}}}\\&=\left[{\frac {1}{3}}\left(\sigma _{1}^{2}+\sigma _{2}^{2}+\sigma _{3}^{2}\right)-{\frac {1}{9}}(\sigma _{1}+\sigma _{2}+\sigma _{3})^{2}\right]^{\frac {1}{2}}\\&={\frac {1}{3}}\left[(\sigma _{1}-\sigma _{2})^{2}+(\sigma _{2}-\sigma _{3})^{2}+(\sigma _{3}-\sigma _{1})^{2}\right]^{\frac {1}{2}}={\frac {1}{3}}{\sqrt {2I_{1}^{2}-6I_{2}}}={\sqrt {{\frac {2}{3}}J_{2}}}\end{aligned}}} == See also == Cauchy momentum equation Critical plane analysis Stress–energy tensor == Notes == == References ==
Wikipedia/Cauchy_stress_tensor
In mathematics and physics, a tensor field is a function assigning a tensor to each point of a region of a mathematical space (typically a Euclidean space or manifold) or of the physical space. Tensor fields are used in differential geometry, algebraic geometry, general relativity, in the analysis of stress and strain in materials, and in numerous applications in the physical sciences. As a tensor is a generalization of a scalar (a pure number representing a value, for example speed) and a vector (a magnitude and a direction, like velocity), a tensor field is a generalization of a scalar field and a vector field that assigns, respectively, a scalar or vector to each point of space. If a tensor A is defined on the set X(M) of vector fields over a manifold M, we call A a tensor field on M. A tensor field, in common usage, is often referred to in the shorter form "tensor". For example, the Riemann curvature tensor refers to a tensor field, as it associates a tensor to each point of a Riemannian manifold, a topological space. == Definition == Let M {\displaystyle M} be a manifold, for instance the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} . Definition. A tensor field of type ( p , q ) {\displaystyle (p,q)} is a section T ∈ Γ ( M , V ⊗ p ⊗ ( V ∗ ) ⊗ q ) {\displaystyle T\ \in \ \Gamma (M,V^{\otimes p}\otimes (V^{*})^{\otimes q})} where V = T M {\displaystyle V=TM} is the tangent bundle of M {\displaystyle M} (whose sections are called vector fields, or contravariant vector fields in physics), V ∗ = T ∗ M {\displaystyle V^{*}=T^{*}M} is its dual, the cotangent bundle (whose sections are called 1-forms, or covariant vector fields in physics), and ⊗ {\displaystyle \otimes } is the tensor product of vector bundles.
Equivalently, a tensor field is a collection of elements T x ∈ V x ⊗ p ⊗ ( V x ∗ ) ⊗ q {\displaystyle T_{x}\in V_{x}^{\otimes p}\otimes (V_{x}^{*})^{\otimes q}} for every point x ∈ M {\displaystyle x\in M} , where ⊗ {\displaystyle \otimes } now denotes the tensor product of vector spaces, such that it constitutes a smooth map T : M → V ⊗ p ⊗ ( V ∗ ) ⊗ q {\displaystyle T:M\rightarrow V^{\otimes p}\otimes (V^{*})^{\otimes q}} . The elements T x {\displaystyle T_{x}} are called tensors. Locally in a coordinate neighbourhood U {\displaystyle U} with coordinates x 1 , … x n {\displaystyle x^{1},\ldots x^{n}} we have a local basis (Vielbein) of vector fields ∂ 1 = ∂ ∂ x 1 , … , ∂ n = ∂ ∂ x n {\displaystyle \partial _{1}={\frac {\partial }{\partial x^{1}}},\ldots ,\partial _{n}={\frac {\partial }{\partial x^{n}}}} , and a dual basis of 1-forms d x 1 , … d x n {\displaystyle dx^{1},\ldots dx^{n}} so that d x i ( ∂ j ) = ∂ j x i = δ j i {\displaystyle dx^{i}(\partial _{j})=\partial _{j}x^{i}=\delta _{j}^{i}} . In the coordinate neighbourhood U {\displaystyle U} we then have T x = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ i 1 ⊗ ⋯ ⊗ ∂ i p ⊗ d x j 1 ⊗ ⋯ ⊗ d x j q {\displaystyle T_{x}=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n})\partial _{i_{1}}\otimes \cdots \otimes \partial _{i_{p}}\otimes dx^{j_{1}}\otimes \cdots \otimes dx^{j_{q}}} where here and below we use Einstein summation conventions.
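Einstein-summation expressions like the component formula above map directly onto numpy.einsum; a minimal sketch with made-up (1,1)-tensor components at a single point:

```python
import numpy as np

# Components of a type (1,1) tensor at one point x, in some chart;
# the numbers are purely illustrative.
T = np.arange(9.0).reshape(3, 3)   # T[i, j] = T^i_j
v = np.array([1.0, 0.0, 2.0])      # components v^j of a vector at x

# Einstein summation T^i_j v^j, written explicitly with einsum.
w = np.einsum('ij,j->i', T, v)

# The full contraction T^i_i is a scalar (the trace).
c = np.einsum('ii->', T)
```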
Note that if we choose different coordinate system y 1 … y n {\displaystyle y^{1}\ldots y^{n}} then ∂ ∂ x i = ∂ y k ∂ x i ∂ ∂ y k {\displaystyle {\frac {\partial }{\partial x^{i}}}={\frac {\partial y^{k}}{\partial x^{i}}}{\frac {\partial }{\partial y^{k}}}} and d x j = ∂ x j ∂ y ℓ d y ℓ {\displaystyle dx^{j}={\frac {\partial x^{j}}{\partial y^{\ell }}}dy^{\ell }} where the coordinates ( x 1 , … , x n ) {\displaystyle (x^{1},\ldots ,x^{n})} can be expressed in the coordinates ( y 1 , … y n {\displaystyle (y^{1},\ldots y^{n}} and vice versa, so that T x = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ ∂ x i 1 ⊗ ⋯ ⊗ ∂ ∂ x i p ⊗ d x j 1 ⊗ ⋯ ⊗ d x j q = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ y k 1 ∂ x i 1 ⋯ ∂ y k p ∂ x i p ∂ x j 1 ∂ y ℓ 1 ⋯ ∂ x j q ∂ y ℓ q ∂ ∂ y k 1 ⊗ ⋯ ⊗ ∂ ∂ y k p ⊗ d y ℓ 1 ⊗ ⋯ ⊗ d y ℓ q = T ℓ 1 , ⋯ ℓ q k 1 , … , k p ( y 1 , … y n ) ∂ ∂ y k 1 ⊗ ⋯ ⊗ ∂ ∂ y k p ⊗ d y ℓ 1 ⊗ ⋯ ⊗ d y ℓ q {\displaystyle {\begin{aligned}T_{x}&=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n}){\frac {\partial }{\partial x^{i_{1}}}}\otimes \cdots \otimes {\frac {\partial }{\partial x^{i_{p}}}}\otimes dx^{j_{1}}\otimes \cdots \otimes dx^{j_{q}}\\&=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n}){\frac {\partial y^{k_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial y^{k_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial y^{\ell _{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial y^{\ell _{q}}}}{\frac {\partial }{\partial y^{k_{1}}}}\otimes \cdots \otimes {\frac {\partial }{\partial y^{k_{p}}}}\otimes dy^{\ell _{1}}\otimes \cdots \otimes dy^{\ell _{q}}\\&=T_{\ell _{1},\cdots \ell _{q}}^{k_{1},\ldots ,k_{p}}(y^{1},\ldots y^{n}){\frac {\partial }{\partial y^{k_{1}}}}\otimes \cdots \otimes {\frac {\partial }{\partial y^{k_{p}}}}\otimes dy^{\ell _{1}}\otimes \cdots \otimes dy^{\ell _{q}}\\\end{aligned}}} i.e. 
T ℓ 1 , ⋯ ℓ q k 1 , … , k p ( y 1 , … y n ) = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ y k 1 ∂ x i 1 ⋯ ∂ y k p ∂ x i p ∂ x j 1 ∂ y ℓ 1 ⋯ ∂ x j q ∂ y ℓ q {\displaystyle T_{\ell _{1},\cdots \ell _{q}}^{k_{1},\ldots ,k_{p}}(y^{1},\ldots y^{n})=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n}){\frac {\partial y^{k_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial y^{k_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial y^{\ell _{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial y^{\ell _{q}}}}} The systems of indexed functions T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) {\displaystyle T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n})} (one system for each choice of coordinate system), connected by transformations as above, are the tensors in the definitions below. Remark: One can, more generally, take V {\displaystyle V} to be any vector bundle on M {\displaystyle M} , and V ∗ {\displaystyle V^{*}} its dual bundle. In that case M can be a more general topological space. These sections are called tensors of V {\displaystyle V} , or tensors for short if no confusion is possible. == Geometric introduction == Intuitively, a vector field is best visualized as an "arrow" attached to each point of a region, with variable length and direction. One example of a vector field on a curved space is a weather map showing horizontal wind velocity at each point of the Earth's surface. Now consider more complicated fields. For example, if the manifold is Riemannian, then it has a metric field g {\displaystyle g} , such that given any two vectors v , w {\displaystyle v,w} at point x {\displaystyle x} , their inner product is g x ( v , w ) {\displaystyle g_{x}(v,w)} . The field g {\displaystyle g} could be given in matrix form, but it depends on a choice of coordinates. It could instead be given as an ellipsoid of radius 1 at each point, which is coordinate-free. Applied to the Earth's surface, this is Tissot's indicatrix. 
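The transformation law above can be exercised in a small sketch (Python with SymPy; the radial field and the polar coordinate change are illustrative choices, not from the text): the contravariant components of x ∂/∂x + y ∂/∂y transform by the Jacobian ∂y^k/∂x^i into the components (r, 0), i.e. the field r ∂/∂r.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Polar coordinates expressed in Cartesian ones
r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)

# Contravariant components T^i of the radial field x d/dx + y d/dy
T = sp.Matrix([x, y])

# Jacobian dy^k/dx^i of the coordinate change (x, y) -> (r, theta)
J = sp.Matrix([[sp.diff(r, x), sp.diff(r, y)],
               [sp.diff(theta, x), sp.diff(theta, y)]])

# Transformation law for a (1,0) tensor: T'^k = (dy^k/dx^i) T^i
T_polar = (J * T).applyfunc(sp.simplify)

assert sp.simplify(T_polar[0] - r) == 0   # radial component is r
assert T_polar[1] == 0                    # angular component vanishes
```

The same Jacobian factors, one per index, implement the general (p, q) law displayed above.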
In general, we want to specify tensor fields in a coordinate-independent way: It should exist independently of latitude and longitude, or whatever particular "cartographic projection" we are using to introduce numerical coordinates. == Via coordinate transitions == Following Schouten (1951) and McConnell (1957), the concept of a tensor relies on a concept of a reference frame (or coordinate system), which may be fixed (relative to some background reference frame), but in general may be allowed to vary within some class of transformations of these coordinate systems. For example, coordinates belonging to the n-dimensional real coordinate space R n {\displaystyle \mathbb {R} ^{n}} may be subjected to arbitrary affine transformations: x k ↦ A j k x j + a k {\displaystyle x^{k}\mapsto A_{j}^{k}x^{j}+a^{k}} (with n-dimensional indices, summation implied). A covariant vector, or covector, is a system of functions v k {\displaystyle v_{k}} that transforms under this affine transformation by the rule v k ↦ v i A k i . {\displaystyle v_{k}\mapsto v_{i}A_{k}^{i}.} The list of Cartesian coordinate basis vectors e k {\displaystyle \mathbf {e} _{k}} transforms as a covector, since under the affine transformation e k ↦ A k i e i {\displaystyle \mathbf {e} _{k}\mapsto A_{k}^{i}\mathbf {e} _{i}} . A contravariant vector is a system of functions v k {\displaystyle v^{k}} of the coordinates that, under such an affine transformation undergoes a transformation v k ↦ ( A − 1 ) j k v j . {\displaystyle v^{k}\mapsto (A^{-1})_{j}^{k}v^{j}.} This is precisely the requirement needed to ensure that the quantity v k e k {\displaystyle v^{k}\mathbf {e} _{k}} is an invariant object that does not depend on the coordinate system chosen. More generally, the coordinates of a tensor of valence (p,q) have p upper indices and q lower indices, with the transformation law being T i 1 ⋯ i p j 1 ⋯ j q ↦ A i 1 ′ i 1 ⋯ A i p ′ i p T i 1 ′ ⋯ i p ′ j 1 ′ ⋯ j q ′ ( A − 1 ) j 1 j 1 ′ ⋯ ( A − 1 ) j q j q ′ . 
{\displaystyle {T^{i_{1}\cdots i_{p}}}_{j_{1}\cdots j_{q}}\mapsto A_{i'_{1}}^{i_{1}}\cdots A_{i'_{p}}^{i_{p}}{T^{i'_{1}\cdots i'_{p}}}_{j'_{1}\cdots j'_{q}}(A^{-1})_{j_{1}}^{j'_{1}}\cdots (A^{-1})_{j_{q}}^{j'_{q}}.} The concept of a tensor field may be obtained by specializing the allowed coordinate transformations to be smooth (or differentiable, analytic, etc.). A covector field is a function v k {\displaystyle v_{k}} of the coordinates that transforms by the Jacobian of the transition functions (in the given class). Likewise, a contravariant vector field v k {\displaystyle v^{k}} transforms by the inverse Jacobian. == Tensor bundles == A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle. The vector bundle is a natural idea of "vector space depending continuously (or smoothly) on parameters" – the parameters being the points of a manifold M. For example, a vector space of one dimension depending on an angle could look like a Möbius strip or alternatively like a cylinder. Given a vector bundle V over M, the corresponding field concept is called a section of the bundle: for m varying over M, a choice of vector vm in Vm, where Vm is the vector space "at" m. Since the tensor product concept is independent of any choice of basis, taking the tensor product of two vector bundles on M is routine. Starting with the tangent bundle (the bundle of tangent spaces) the whole apparatus explained at component-free treatment of tensors carries over in a routine way – again independently of coordinates, as mentioned in the introduction. We therefore can give a definition of tensor field, namely as a section of some tensor bundle. (There are vector bundles that are not tensor bundles: the Möbius band for instance.) 
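Returning to the affine transformations in the "Via coordinate transitions" section above, the complementary covariant and contravariant rules exist precisely so that contractions are invariant; a quick numerical check (Python with NumPy; the random data is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))      # generic (hence invertible) matrix A^i_k
A_inv = np.linalg.inv(A)

v = rng.normal(size=4)           # covector components v_k
w = rng.normal(size=4)           # contravariant components w^k

v_new = v @ A                    # covariant rule:      v_k -> v_i A^i_k
w_new = A_inv @ w                # contravariant rule:  w^k -> (A^{-1})^k_j w^j

# The pairing v_k w^k does not depend on the coordinate system
assert np.isclose(v_new @ w_new, v @ w)

# Likewise, for a valence (1,1) tensor T^i_j the contraction T^i_i is invariant
T = rng.normal(size=(4, 4))
T_new = np.einsum('ki,ij,jl->kl', A_inv, T, A)
assert np.isclose(np.trace(T_new), np.trace(T))
```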
This is then guaranteed geometric content, since everything has been done in an intrinsic way. More precisely, a tensor field assigns to any given point of the manifold a tensor in the space V ⊗ ⋯ ⊗ V ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ , {\displaystyle V\otimes \cdots \otimes V\otimes V^{*}\otimes \cdots \otimes V^{*},} where V is the tangent space at that point and V∗ is the cotangent space. See also tangent bundle and cotangent bundle. Given two tensor bundles E → M and F → M, a linear map A: Γ(E) → Γ(F) from the space of sections of E to sections of F can be considered itself as a tensor section of E ∗ ⊗ F {\displaystyle \scriptstyle E^{*}\otimes F} if and only if it satisfies A(fs) = fA(s), for each section s in Γ(E) and each smooth function f on M. Thus a tensor section is not only a linear map on the vector space of sections, but a C∞(M)-linear map on the module of sections. This property is used to check, for example, that even though the Lie derivative and covariant derivative are not tensors, the torsion and curvature tensors built from them are. == Notation == The notation for tensor fields can sometimes be confusingly similar to the notation for tensor spaces. Thus, the tangent bundle TM = T(M) might sometimes be written as T 0 1 ( M ) = T ( M ) = T M {\displaystyle T_{0}^{1}(M)=T(M)=TM} to emphasize that the tangent bundle is the range space of the (1,0) tensor fields (i.e., vector fields) on the manifold M. This should not be confused with the very similar looking notation T 0 1 ( V ) {\displaystyle T_{0}^{1}(V)} ; in the latter case, we just have one tensor space, whereas in the former, we have a tensor space defined for each point in the manifold M. Curly (script) letters are sometimes used to denote the set of infinitely-differentiable tensor fields on M. Thus, T n m ( M ) {\displaystyle {\mathcal {T}}_{n}^{m}(M)} are the sections of the (m,n) tensor bundle on M that are infinitely-differentiable. A tensor field is an element of this set. 
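The C∞(M)-linearity test A(fs) = fA(s) just described can be illustrated with a sketch (Python with SymPy; the matrix field and the operators are invented for illustration): pointwise multiplication by a matrix field passes the test, while a componentwise derivative fails it by exactly a Leibniz term.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)                     # an arbitrary smooth function
X = sp.Matrix([sp.Function('X1')(x, y),        # an arbitrary vector field
               sp.Function('X2')(x, y)])

# Pointwise-linear map: multiplication by a matrix field.
# It satisfies A(fX) = f A(X), so it defines a tensor section.
A = sp.Matrix([[x, y], [0, x * y]])
assert (A * (f * X) - f * (A * X)).expand() == sp.zeros(2, 1)

# Componentwise x-derivative: only R-linear; the Leibniz term (df/dx) X
# survives, so it is not a tensor section.
D = lambda V: V.applyfunc(lambda c: sp.diff(c, x))
defect = D(f * X) - f * D(X)
assert (defect - sp.diff(f, x) * X).expand() == sp.zeros(2, 1)
```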
== Tensor fields as multilinear forms == There is another more abstract (but often useful) way of characterizing tensor fields on a manifold M, which makes tensor fields into honest tensors (i.e. single multilinear mappings), though of a different type (although this is not usually why one often says "tensor" when one really means "tensor field"). First, we may consider the set of all smooth (C∞) vector fields on M, X ( M ) := T 0 1 ( M ) {\displaystyle {\mathfrak {X}}(M):={\mathcal {T}}_{0}^{1}(M)} (see the section on notation above) as a single space – a module over the ring of smooth functions, C∞(M), by pointwise scalar multiplication. The notions of multilinearity and tensor products extend easily to the case of modules over any commutative ring. As a motivating example, consider the space Ω 1 ( M ) = T 1 0 ( M ) {\displaystyle \Omega ^{1}(M)={\mathcal {T}}_{1}^{0}(M)} of smooth covector fields (1-forms), also a module over the smooth functions. These act on smooth vector fields to yield smooth functions by pointwise evaluation, namely, given a covector field ω and a vector field X, we define ω ~ ( X ) ( p ) := ω ( p ) ( X ( p ) ) . {\displaystyle {\tilde {\omega }}(X)(p):=\omega (p)(X(p)).} Because of the pointwise nature of everything involved, the action of ω ~ {\displaystyle {\tilde {\omega }}} on X is a C∞(M)-linear map, that is, ω ~ ( f X ) ( p ) = ω ( p ) ( ( f X ) ( p ) ) = ω ( p ) ( f ( p ) X ( p ) ) = f ( p ) ω ( p ) ( X ( p ) ) = ( f ω ) ( p ) ( X ( p ) ) = ( f ω ~ ) ( X ) ( p ) {\displaystyle {\tilde {\omega }}(fX)(p)=\omega (p)((fX)(p))=\omega (p)(f(p)X(p))=f(p)\omega (p)(X(p))=(f\omega )(p)(X(p))=(f{\tilde {\omega }})(X)(p)} for any p in M and smooth function f. Thus we can regard covector fields not just as sections of the cotangent bundle, but also as linear mappings of vector fields into functions. 
By the double-dual construction, vector fields can similarly be expressed as mappings of covector fields into functions (namely, we could start "natively" with covector fields and work up from there). In a complete parallel to the construction of ordinary single tensors (not tensor fields!) on M as multilinear maps on vectors and covectors, we can regard general (k,l) tensor fields on M as C∞(M)-multilinear maps defined on k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C∞(M). Now, given any arbitrary mapping T from a product of k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C∞(M), it turns out that it arises from a tensor field on M if and only if it is multilinear over C∞(M). Namely, the C∞(M)-module of tensor fields of type ( k , l ) {\displaystyle (k,l)} over M is canonically isomorphic to the C∞(M)-module of C∞(M)-multilinear forms Ω 1 ( M ) × … × Ω 1 ( M ) ⏟ l t i m e s × X ( M ) × … × X ( M ) ⏟ k t i m e s → C ∞ ( M ) . {\displaystyle \underbrace {\Omega ^{1}(M)\times \ldots \times \Omega ^{1}(M)} _{l\ \mathrm {times} }\times \underbrace {{\mathfrak {X}}(M)\times \ldots \times {\mathfrak {X}}(M)} _{k\ \mathrm {times} }\to C^{\infty }(M).} This kind of multilinearity implicitly expresses the fact that we're really dealing with a pointwise-defined object, i.e. a tensor field, as opposed to a function which, even when evaluated at a single point, depends on all the values of vector fields and 1-forms simultaneously. A frequent application of this general rule is showing that the Levi-Civita connection, which is a mapping of smooth vector fields ( X , Y ) ↦ ∇ X Y {\displaystyle (X,Y)\mapsto \nabla _{X}Y} taking a pair of vector fields to a vector field, does not define a tensor field on M. 
This is because it is only R {\displaystyle \mathbb {R} } -linear in Y (in place of full C∞(M)-linearity, it satisfies the Leibniz rule, ∇ X ( f Y ) = ( X f ) Y + f ∇ X Y {\displaystyle \nabla _{X}(fY)=(Xf)Y+f\nabla _{X}Y} ). Nevertheless, even though it is not a tensor field, it still qualifies as a geometric object with a component-free interpretation. == Applications == The curvature tensor is discussed in differential geometry and the stress–energy tensor is important in physics, and these two tensors are related by Einstein's theory of general relativity. In electromagnetism, the electric and magnetic fields are combined into an electromagnetic tensor field. Differential forms, used in defining integration on manifolds, are a type of tensor field. == Tensor calculus == In theoretical physics and other fields, differential equations posed in terms of tensor fields provide a very general way to express relationships that are both geometric in nature (guaranteed by the tensor nature) and conventionally linked to differential calculus. Even to formulate such equations requires a fresh notion, the covariant derivative. This handles the formulation of variation of a tensor field along a vector field. The original absolute differential calculus notion, which was later called tensor calculus, led to the isolation of the geometric concept of connection. == Twisting by a line bundle == An extension of the tensor field idea incorporates an extra line bundle L on M. If W is the tensor product bundle of V with L, then W is a bundle of vector spaces of just the same dimension as V. This allows one to define the concept of tensor density, a 'twisted' type of tensor field. A tensor density is the special case where L is the bundle of densities on a manifold, namely the determinant bundle of the cotangent bundle. 
(To be strictly accurate, one should also apply the absolute value to the transition functions – this makes little difference for an orientable manifold.) For a more traditional explanation see the tensor density article. One feature of the bundle of densities (again assuming orientability) L is that Ls is well-defined for real number values of s; this can be read from the transition functions, which take strictly positive real values. This means for example that we can take a half-density, the case where s = 1/2. In general we can take sections of W, the tensor product of V with Ls, and consider tensor density fields with weight s. Half-densities are applied in areas such as defining integral operators on manifolds, and geometric quantization. == Flat case == When M is a Euclidean space and all the fields are taken to be invariant by translations by the vectors of M, we get back to a situation where a tensor field is synonymous with a tensor 'sitting at the origin'. This does no great harm, and is often used in applications. As applied to tensor densities, it does make a difference. The bundle of densities cannot seriously be defined 'at a point'; and therefore a limitation of the contemporary mathematical treatment of tensors is that tensor densities are defined in a roundabout fashion. == Cocycles and chain rules == As an advanced explanation of the tensor concept, one can interpret the chain rule in the multivariable case, as applied to coordinate changes, also as the requirement for self-consistent concepts of tensor giving rise to tensor fields. Abstractly, we can identify the chain rule as a 1-cocycle. It gives the consistency required to define the tangent bundle in an intrinsic way. The other vector bundles of tensors have comparable cocycles, which come from applying functorial properties of tensor constructions to the chain rule itself; this is why they also are intrinsic (read, 'natural') concepts. 
What is usually spoken of as the 'classical' approach to tensors tries to read this backwards – and is therefore a heuristic, post hoc approach rather than truly a foundational one. Implicit in defining tensors by how they transform under a coordinate change is the kind of self-consistency the cocycle expresses. The construction of tensor densities is a 'twisting' at the cocycle level. Geometers have not been in any doubt about the geometric nature of tensor quantities; this kind of descent argument justifies abstractly the whole theory. == Generalizations == === Tensor densities === The concept of a tensor field can be generalized by considering objects that transform differently. An object that transforms as an ordinary tensor field under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian of the inverse coordinate transformation to the wth power, is called a tensor density with weight w. Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n-forms (where n is the dimension of the space), as opposed to taking their values in just R. Higher "weights" then just correspond to taking additional tensor products with this space in the range. A special case are the scalar densities. Scalar 1-densities are especially important because it makes sense to define their integral over a manifold. They appear, for instance, in the Einstein–Hilbert action in general relativity. The most common example of a scalar 1-density is the volume element, which in the presence of a metric tensor g is the square root of its determinant in coordinates, denoted det g {\displaystyle {\sqrt {\det g}}} . 
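The volume element just mentioned illustrates the density transformation law concretely; a short sketch (Python with SymPy, using the Euclidean plane in polar coordinates as an illustrative example): with g the identity in Cartesian coordinates, the polar components are diag(1, r²), det g′ = r² is the square of the Jacobian determinant, and √(det g′) = r recovers the familiar area element r dr dθ.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Cartesian coordinates as functions of the primed (polar) coordinates
x = r * sp.cos(th)
y = r * sp.sin(th)

# Jacobian dx/dx' of the coordinate change
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
               [sp.diff(y, r), sp.diff(y, th)]])

g = sp.eye(2)            # Euclidean metric in Cartesian coordinates
g_prime = J.T * g * J    # covariant order-2 transformation rule

# det(g') = (det dx/dx')^2 det(g): a scalar density of weight +2
assert sp.simplify(g_prime.det() - J.det()**2 * g.det()) == 0
assert sp.simplify(g_prime.det()) == r**2

# the volume element sqrt(det g') is r, giving r dr dtheta
assert sp.sqrt(sp.simplify(g_prime.det())) == r
```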
The metric tensor is a covariant tensor of order 2, and so its determinant scales by the square of the coordinate transition: det ( g ′ ) = ( det ∂ x ∂ x ′ ) 2 det ( g ) , {\displaystyle \det(g')=\left(\det {\frac {\partial x}{\partial x'}}\right)^{2}\det(g),} which is the transformation law for a scalar density of weight +2. More generally, any tensor density is the product of an ordinary tensor with a scalar density of the appropriate weight. In the language of vector bundles, the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles w times. While locally the more general transformation law can indeed be used to recognise these tensors, there is a global question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values. Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds, because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n-forms are distinct. For more on the intrinsic meaning, see Density on a manifold. == See also == Bitensor – Tensorial object depending on two points in a manifold Jet bundle – Construction in differential topology Ricci calculus – Tensor index notation for tensor-based calculations Spinor field – Geometric structure == Notes == == References == O'Neill, Barrett (1983). Semi-Riemannian Geometry With Applications to Relativity. Elsevier Science. ISBN 9780080570570. Frankel, T. (2012), The Geometry of Physics (3rd edition), Cambridge University Press, ISBN 978-1-107-60260-1. Lambourne [Open University], R.J.A. 
(2010), Relativity, Gravitation, and Cosmology, Cambridge University Press, Bibcode:2010rgc..book.....L, ISBN 978-0-521-13138-4. Lerner, R.G.; Trigg, G.L. (1991), Encyclopaedia of Physics (2nd Edition), VHC Publishers. McConnell, A. J. (1957), Applications of Tensor Analysis, Dover Publications, ISBN 9780486145020. McMahon, D. (2006), Relativity DeMystified, McGraw Hill (USA), ISBN 0-07-145545-0. C. Misner, K. S. Thorne, J. A. Wheeler (1973), Gravitation, W.H. Freeman & Co, ISBN 0-7167-0344-0. Parker, C.B. (1994), McGraw Hill Encyclopaedia of Physics (2nd Edition), McGraw Hill, ISBN 0-07-051400-3. Schouten, Jan Arnoldus (1951), Tensor Analysis for Physicists, Oxford University Press. Steenrod, Norman (5 April 1999). The Topology of Fibre Bundles. Princeton Mathematical Series. Vol. 14. Princeton, N.J.: Princeton University Press. ISBN 978-0-691-00548-5. OCLC 40734875.
Wikipedia/Tensor_field
In differential geometry, the Weyl curvature tensor, named after Hermann Weyl, is a measure of the curvature of spacetime or, more generally, a pseudo-Riemannian manifold. Like the Riemann curvature tensor, the Weyl tensor expresses the tidal force that a body feels when moving along a geodesic. The Weyl tensor differs from the Riemann curvature tensor in that it does not convey information on how the volume of the body changes, but rather only how the shape of the body is distorted by the tidal force. The Ricci curvature, or trace component of the Riemann tensor, contains precisely the information about how volumes change in the presence of tidal forces, so the Weyl tensor is the traceless component of the Riemann tensor. This tensor has the same symmetries as the Riemann tensor, but satisfies the extra condition that it is trace-free: metric contraction on any pair of indices yields zero. It is obtained from the Riemann tensor by subtracting a tensor that is a linear expression in the Ricci tensor. In general relativity, the Weyl curvature is the only part of the curvature that exists in free space—a solution of the vacuum Einstein equation—and it governs the propagation of gravitational waves through regions of space devoid of matter. More generally, the Weyl curvature is the only component of curvature for Ricci-flat manifolds and always governs the characteristics of the field equations of an Einstein manifold. In dimensions 2 and 3 the Weyl curvature tensor vanishes identically. In dimensions ≥ 4, the Weyl curvature is generally nonzero. If the Weyl tensor vanishes in dimension ≥ 4, then the metric is locally conformally flat: there exists a local coordinate system in which the metric tensor is proportional to a constant tensor. This fact was a key component of Nordström's theory of gravitation, which was a precursor of general relativity. == Definition == The Weyl tensor can be obtained from the full curvature tensor by subtracting out various traces. 
This is most easily done by writing the Riemann tensor as a (0,4) valence tensor (by contracting with the metric). The (0,4) valence Weyl tensor is then (Petersen 2006, p. 92) C = R − 1 n − 2 ( R i c − s n g ) ∧ ◯ g − s 2 n ( n − 1 ) g ∧ ◯ g {\displaystyle C=R-{\frac {1}{n-2}}\left(\mathrm {Ric} -{\frac {s}{n}}g\right){~\wedge \!\!\!\!\!\!\!\!\;\bigcirc ~}g-{\frac {s}{2n(n-1)}}g{~\wedge \!\!\!\!\!\!\!\!\;\bigcirc ~}g} where n is the dimension of the manifold, g is the metric, R is the Riemann tensor, Ric is the Ricci tensor, s is the scalar curvature, and h ∧ ◯ k {\displaystyle h{~\wedge \!\!\!\!\!\!\!\!\;\bigcirc ~}k} denotes the Kulkarni–Nomizu product of two symmetric (0,2) tensors: ( h ∧ ◯ k ) ( v 1 , v 2 , v 3 , v 4 ) = h ( v 1 , v 3 ) k ( v 2 , v 4 ) + h ( v 2 , v 4 ) k ( v 1 , v 3 ) − h ( v 1 , v 4 ) k ( v 2 , v 3 ) − h ( v 2 , v 3 ) k ( v 1 , v 4 ) {\displaystyle {\begin{aligned}(h{~\wedge \!\!\!\!\!\!\!\!\;\bigcirc ~}k)\left(v_{1},v_{2},v_{3},v_{4}\right)=\quad &h\left(v_{1},v_{3}\right)k\left(v_{2},v_{4}\right)+h\left(v_{2},v_{4}\right)k\left(v_{1},v_{3}\right)\\{}-{}&h\left(v_{1},v_{4}\right)k\left(v_{2},v_{3}\right)-h\left(v_{2},v_{3}\right)k\left(v_{1},v_{4}\right)\end{aligned}}} In tensor component notation, this can be written as C i k ℓ m = R i k ℓ m + 1 n − 2 ( R i m g k ℓ − R i ℓ g k m + R k ℓ g i m − R k m g i ℓ ) + 1 ( n − 1 ) ( n − 2 ) R ( g i ℓ g k m − g i m g k ℓ ) . {\displaystyle {\begin{aligned}C_{ik\ell m}=R_{ik\ell m}+{}&{\frac {1}{n-2}}\left(R_{im}g_{k\ell }-R_{i\ell }g_{km}+R_{k\ell }g_{im}-R_{km}g_{i\ell }\right)\\{}+{}&{\frac {1}{(n-1)(n-2)}}R\left(g_{i\ell }g_{km}-g_{im}g_{k\ell }\right).\ \end{aligned}}} The ordinary (1,3) valent Weyl tensor is then given by contracting the above with the inverse of the metric. The decomposition (1) expresses the Riemann tensor as an orthogonal direct sum, in the sense that | R | 2 = | C | 2 + | 1 n − 2 ( R i c − s n g ) ∧ ◯ g | 2 + | s 2 n ( n − 1 ) g ∧ ◯ g | 2 . 
{\displaystyle |R|^{2}=|C|^{2}+\left|{\frac {1}{n-2}}\left(\mathrm {Ric} -{\frac {s}{n}}g\right){~\wedge \!\!\!\!\!\!\!\!\;\bigcirc ~}g\right|^{2}+\left|{\frac {s}{2n(n-1)}}g{~\wedge \!\!\!\!\!\!\!\!\;\bigcirc ~}g\right|^{2}.} This decomposition, known as the Ricci decomposition, expresses the Riemann curvature tensor into its irreducible components under the action of the orthogonal group. In dimension 4, the Weyl tensor further decomposes into invariant factors for the action of the special orthogonal group, the self-dual and antiself-dual parts C+ and C−. The Weyl tensor can also be expressed using the Schouten tensor, which is a trace-adjusted multiple of the Ricci tensor, P = 1 n − 2 ( R i c − s 2 ( n − 1 ) g ) . {\displaystyle P={\frac {1}{n-2}}\left(\mathrm {Ric} -{\frac {s}{2(n-1)}}g\right).} Then C = R − P ∧ ◯ g . {\displaystyle C=R-P{~\wedge \!\!\!\!\!\!\!\!\;\bigcirc ~}g.} In indices, C a b c d = R a b c d − 2 n − 2 ( g a [ c R d ] b − g b [ c R d ] a ) + 2 ( n − 1 ) ( n − 2 ) R g a [ c g d ] b {\displaystyle C_{abcd}=R_{abcd}-{\frac {2}{n-2}}\left(g_{a[c}R_{d]b}-g_{b[c}R_{d]a}\right)+{\frac {2}{(n-1)(n-2)}}R~g_{a[c}g_{d]b}} where R a b c d {\displaystyle R_{abcd}} is the Riemann tensor, R a b {\displaystyle R_{ab}} is the Ricci tensor, R {\displaystyle R} is the Ricci scalar (the scalar curvature) and brackets around indices refers to the antisymmetric part. Equivalently, C a b c d = R a b c d − 4 S [ a [ c δ b ] d ] {\displaystyle {C_{ab}}^{cd}={R_{ab}}^{cd}-4S_{[a}^{[c}\delta _{b]}^{d]}} where S denotes the Schouten tensor. == Properties == === Conformal rescaling === The Weyl tensor has the special property that it is invariant under conformal changes to the metric. That is, if g μ ν ↦ g μ ν ′ = f g μ ν {\displaystyle g_{\mu \nu }\mapsto g'_{\mu \nu }=fg_{\mu \nu }} for some positive scalar function f {\displaystyle f} then the (1,3) valent Weyl tensor satisfies C ′ b c d a = C b c d a {\displaystyle {C'}_{\ \ bcd}^{a}=C_{\ \ bcd}^{a}} . 
For this reason the Weyl tensor is also called the conformal tensor. It follows that a necessary condition for a Riemannian manifold to be conformally flat is that the Weyl tensor vanish. In dimensions ≥ 4 this condition is sufficient as well. In dimension 3 the vanishing of the Cotton tensor is a necessary and sufficient condition for the Riemannian manifold being conformally flat. Any 2-dimensional (smooth) Riemannian manifold is conformally flat, a consequence of the existence of isothermal coordinates. Indeed, the existence of a conformally flat scale amounts to solving the overdetermined partial differential equation D d f − d f ⊗ d f + ( | d f | 2 + Δ f n − 2 ) g = Ric . {\displaystyle Ddf-df\otimes df+\left(|df|^{2}+{\frac {\Delta f}{n-2}}\right)g=\operatorname {Ric} .} In dimension ≥ 4, the vanishing of the Weyl tensor is the only integrability condition for this equation; in dimension 3, it is the Cotton tensor instead. === Symmetries === The Weyl tensor has the same symmetries as the Riemann tensor. This includes: C ( u , v ) = − C ( v , u ) ⟨ C ( u , v ) w , z ⟩ = − ⟨ C ( u , v ) z , w ⟩ C ( u , v ) w + C ( v , w ) u + C ( w , u ) v = 0. {\displaystyle {\begin{aligned}C(u,v)&=-C(v,u)\\\langle C(u,v)w,z\rangle &=-\langle C(u,v)z,w\rangle \\C(u,v)w+C(v,w)u+C(w,u)v&=0.\end{aligned}}} In addition, of course, the Weyl tensor is trace free: tr ⁡ C ( u , ⋅ ) v = 0 {\displaystyle \operatorname {tr} C(u,\cdot )v=0} for all u, v. In indices these four conditions are C a b c d = − C b a c d = − C a b d c C a b c d + C a c d b + C a d b c = 0 C a b a c = 0. {\displaystyle {\begin{aligned}C_{abcd}=-C_{bacd}&=-C_{abdc}\\C_{abcd}+C_{acdb}+C_{adbc}&=0\\{C^{a}}_{bac}&=0.\end{aligned}}} === Bianchi identity === Taking traces of the usual second Bianchi identity of the Riemann tensor eventually shows that ∇ a C a b c d = 2 ( n − 3 ) ∇ [ c S d ] b {\displaystyle \nabla _{a}{C^{a}}_{bcd}=2(n-3)\nabla _{[c}S_{d]b}} where S is the Schouten tensor. 
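The symmetry and trace conditions listed above can be exercised numerically (Python with NumPy; the flat metric and random symmetric tensors are illustrative stand-ins, not a geometric example): a Kulkarni–Nomizu product of two symmetric tensors serves as an algebraic curvature tensor, and feeding it through the component formula from the Definition section yields a trace-free result.

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
g = np.eye(n)                              # flat metric, purely for illustration

h = rng.normal(size=(n, n)); h = h + h.T   # two symmetric (0,2) tensors
k = rng.normal(size=(n, n)); k = k + k.T

# Kulkarni-Nomizu product: R_abcd = h_ac k_bd + h_bd k_ac - h_ad k_bc - h_bc k_ad
R = (np.einsum('ac,bd->abcd', h, k) + np.einsum('bd,ac->abcd', h, k)
     - np.einsum('ad,bc->abcd', h, k) - np.einsum('bc,ad->abcd', h, k))

# It has the algebraic symmetries of a curvature tensor
assert np.allclose(R, -np.einsum('bacd->abcd', R))     # R_abcd = -R_bacd
assert np.allclose(R, -np.einsum('abdc->abcd', R))     # R_abcd = -R_abdc
assert np.allclose(R + np.einsum('acdb->abcd', R)
                   + np.einsum('adbc->abcd', R), 0)    # first Bianchi identity

# Ricci contraction and scalar curvature (indices raised with g = identity)
Ric = np.einsum('ikim->km', R)
s = np.trace(Ric)

# Component formula for the Weyl tensor from the Definition section
C = (R
     + (np.einsum('im,kl->iklm', Ric, g) - np.einsum('il,km->iklm', Ric, g)
        + np.einsum('kl,im->iklm', Ric, g) - np.einsum('km,il->iklm', Ric, g))
       / (n - 2)
     + s * (np.einsum('il,km->iklm', g, g) - np.einsum('im,kl->iklm', g, g))
       / ((n - 1) * (n - 2)))

# Trace-free: metric contraction on a pair of indices yields zero
assert np.allclose(np.einsum('ikim->km', C), 0)
```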
The valence (0,3) tensor on the right-hand side is the Cotton tensor, apart from the initial factor. == See also == Curvature of Riemannian manifolds Christoffel symbols provides a coordinate expression for the Weyl tensor. Lanczos tensor Peeling theorem Petrov classification Plebanski tensor Weyl curvature hypothesis Weyl scalar == Notes == == References == Hawking, Stephen W.; Ellis, George F. R. (1973), The Large Scale Structure of Space-Time, Cambridge University Press, ISBN 0-521-09906-4 Petersen, Peter (2006), Riemannian geometry, Graduate Texts in Mathematics, vol. 171 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 0387292462, MR 2243772. Sharpe, R.W. (1997), Differential Geometry: Cartan's Generalization of Klein's Erlangen Program, Springer-Verlag, New York, ISBN 0-387-94732-9. Singer, I.M.; Thorpe, J.A. (1969), "The curvature of 4-dimensional Einstein spaces", Global Analysis (Papers in Honor of K. Kodaira), Univ. Tokyo Press, pp. 355–365 "Weyl tensor", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Grøn, Øyvind; Hervik, Sigbjørn (2007), Einstein's General Theory of Relativity, New York: Springer, ISBN 978-0-387-69199-2
Wikipedia/Weyl_tensor
In mathematics, differential forms provide a unified approach to define integrands over curves, surfaces, solids, and higher-dimensional manifolds. The modern notion of differential forms was pioneered by Élie Cartan. It has many applications, especially in geometry, topology and physics. For instance, the expression f ( x ) d x {\displaystyle f(x)\,dx} is an example of a 1-form, and can be integrated over an interval [ a , b ] {\displaystyle [a,b]} contained in the domain of f {\displaystyle f} : ∫ a b f ( x ) d x . {\displaystyle \int _{a}^{b}f(x)\,dx.} Similarly, the expression f ( x , y , z ) d x ∧ d y + g ( x , y , z ) d z ∧ d x + h ( x , y , z ) d y ∧ d z {\displaystyle f(x,y,z)\,dx\wedge dy+g(x,y,z)\,dz\wedge dx+h(x,y,z)\,dy\wedge dz} is a 2-form that can be integrated over a surface S {\displaystyle S} : ∫ S ( f ( x , y , z ) d x ∧ d y + g ( x , y , z ) d z ∧ d x + h ( x , y , z ) d y ∧ d z ) . {\displaystyle \int _{S}\left(f(x,y,z)\,dx\wedge dy+g(x,y,z)\,dz\wedge dx+h(x,y,z)\,dy\wedge dz\right).} The symbol ∧ {\displaystyle \wedge } denotes the exterior product, sometimes called the wedge product, of two differential forms. Likewise, a 3-form f ( x , y , z ) d x ∧ d y ∧ d z {\displaystyle f(x,y,z)\,dx\wedge dy\wedge dz} represents a volume element that can be integrated over a region of space. In general, a k-form is an object that may be integrated over a k-dimensional manifold, and is homogeneous of degree k in the coordinate differentials d x , d y , … . {\displaystyle dx,dy,\ldots .} On an n-dimensional manifold, a top-dimensional form (n-form) is called a volume form. The differential forms form an alternating algebra. This implies that d y ∧ d x = − d x ∧ d y {\displaystyle dy\wedge dx=-dx\wedge dy} and d x ∧ d x = 0. {\displaystyle dx\wedge dx=0.} This alternating property reflects the orientation of the domain of integration. 
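The alternating rules dy ∧ dx = −dx ∧ dy and dx ∧ dx = 0 can be seen concretely by evaluating wedge products of constant 1-forms on pairs of vectors (Python with NumPy; the random vectors are arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = rng.normal(size=3), rng.normal(size=3)   # constant 1-forms on R^3
u, v = rng.normal(size=3), rng.normal(size=3)          # a pair of vectors

# Wedge of two 1-forms evaluated on a pair of vectors:
# (alpha ^ beta)(u, v) = alpha(u) beta(v) - alpha(v) beta(u)
def wedge(a, b, u, v):
    return (a @ u) * (b @ v) - (a @ v) * (b @ u)

# beta ^ alpha = -(alpha ^ beta)
assert np.isclose(wedge(beta, alpha, u, v), -wedge(alpha, beta, u, v))
# alpha ^ alpha = 0
assert np.isclose(wedge(alpha, alpha, u, v), 0.0)
```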
The exterior derivative is an operation on differential forms that, given a k-form φ {\displaystyle \varphi } , produces a (k+1)-form d φ . {\displaystyle d\varphi .} This operation extends the differential of a function (a function can be considered as a 0-form, and its differential is d f ( x ) = f ′ ( x ) d x {\displaystyle df(x)=f'(x)\,dx} ). This allows expressing the fundamental theorem of calculus, the divergence theorem, Green's theorem, and Stokes' theorem as special cases of a single general result, the generalized Stokes theorem. Differential 1-forms are naturally dual to vector fields on a differentiable manifold, and the pairing between vector fields and 1-forms is extended to arbitrary differential forms by the interior product. The algebra of differential forms along with the exterior derivative defined on it is preserved by the pullback under smooth functions between two manifolds. This feature allows geometrically invariant information to be moved from one space to another via the pullback, provided that the information is expressed in terms of differential forms. As an example, the change of variables formula for integration becomes a simple statement that an integral is preserved under pullback. == History == Differential forms are part of the field of differential geometry, influenced by linear algebra. Although the notion of a differential is quite old, the initial attempt at an algebraic organization of differential forms is usually credited to Élie Cartan with reference to his 1899 paper. Some aspects of the exterior algebra of differential forms appears in Hermann Grassmann's 1844 work, Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (The Theory of Linear Extension, a New Branch of Mathematics). == Concept == Differential forms provide an approach to multivariable calculus that is independent of coordinates. === Integration and orientation === A differential k-form can be integrated over an oriented manifold of dimension k. 
A differential 1-form can be thought of as measuring an infinitesimal oriented length, or 1-dimensional oriented density. A differential 2-form can be thought of as measuring an infinitesimal oriented area, or 2-dimensional oriented density. And so on. Integration of differential forms is well-defined only on oriented manifolds. An example of a 1-dimensional manifold is an interval [a, b], and intervals can be given an orientation: they are positively oriented if a < b, and negatively oriented otherwise. If a < b then the integral of the differential 1-form f(x) dx over the interval [a, b] (with its natural positive orientation) is ∫ a b f ( x ) d x {\displaystyle \int _{a}^{b}f(x)\,dx} which is the negative of the integral of the same differential form over the same interval, when equipped with the opposite orientation. That is: ∫ b a f ( x ) d x = − ∫ a b f ( x ) d x . {\displaystyle \int _{b}^{a}f(x)\,dx=-\int _{a}^{b}f(x)\,dx.} This gives a geometrical context to the conventions for one-dimensional integrals, that the sign changes when the orientation of the interval is reversed. A standard explanation of this in one-variable integration theory is that, when the limits of integration are in the opposite order (b < a), the increment dx is negative in the direction of integration. More generally, an m-form is an oriented density that can be integrated over an m-dimensional oriented manifold. (For example, a 1-form can be integrated over an oriented curve, a 2-form can be integrated over an oriented surface, etc.) If M is an oriented m-dimensional manifold, and M′ is the same manifold with opposite orientation and ω is an m-form, then one has: ∫ M ω = − ∫ M ′ ω . {\displaystyle \int _{M}\omega =-\int _{M'}\omega \,.} These conventions correspond to interpreting the integrand as a differential form, integrated over a chain. 
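The one-dimensional sign convention can be seen directly in a numerical approximation. In this midpoint-rule sketch (an illustration only, with a hypothetical helper name), traversing the interval from b to a uses negative increments dx, so the sum comes out as the negative of the forward integral:

```python
def integrate_1form(f, a, b, n=100000):
    """Midpoint-rule approximation of the integral of f(x) dx from a to b.

    When a > b the increment dx is negative, so the same sample values
    are summed with the opposite sign -- the 1-dimensional case of
    orientation reversal.
    """
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x * x
forward = integrate_1form(f, 0.0, 1.0)   # approximately 1/3
backward = integrate_1form(f, 1.0, 0.0)  # approximately -1/3
assert abs(forward - 1.0 / 3.0) < 1e-6
assert abs(forward + backward) < 1e-9
```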
In measure theory, by contrast, one interprets the integrand as a function f with respect to a measure μ and integrates over a subset A, without any notion of orientation; one writes ∫ A f d μ = ∫ [ a , b ] f d μ {\textstyle \int _{A}f\,d\mu =\int _{[a,b]}f\,d\mu } to indicate integration over a subset A. This is a minor distinction in one dimension, but becomes subtler on higher-dimensional manifolds; see below for details. Making the notion of an oriented density precise, and thus of a differential form, involves the exterior algebra. The differentials of a set of coordinates, dx1, ..., dxn can be used as a basis for all 1-forms. Each of these represents a covector at each point on the manifold that may be thought of as measuring a small displacement in the corresponding coordinate direction. A general 1-form is a linear combination of these differentials at every point on the manifold: f 1 d x 1 + ⋯ + f n d x n , {\displaystyle f_{1}\,dx^{1}+\cdots +f_{n}\,dx^{n},} where the fk = fk(x1, ... , xn) are functions of all the coordinates. A differential 1-form is integrated along an oriented curve as a line integral. The expressions dxi ∧ dxj, where i < j can be used as a basis at every point on the manifold for all 2-forms. This may be thought of as an infinitesimal oriented square parallel to the xi–xj-plane. A general 2-form is a linear combination of these at every point on the manifold: ∑ 1 ≤ i < j ≤ n f i , j d x i ∧ d x j {\textstyle \sum _{1\leq i<j\leq n}f_{i,j}\,dx^{i}\wedge dx^{j}} , and it is integrated just like a surface integral. A fundamental operation defined on differential forms is the exterior product (the symbol is the wedge ∧). This is similar to the cross product from vector calculus, in that it is an alternating product. 
For instance, d x 1 ∧ d x 2 = − d x 2 ∧ d x 1 {\displaystyle dx^{1}\wedge dx^{2}=-dx^{2}\wedge dx^{1}} because the square whose first side is dx1 and second side is dx2 is to be regarded as having the opposite orientation as the square whose first side is dx2 and whose second side is dx1. This is why we only need to sum over expressions dxi ∧ dxj, with i < j; for example: a(dxi ∧ dxj) + b(dxj ∧ dxi) = (a − b) dxi ∧ dxj. The exterior product allows higher-degree differential forms to be built out of lower-degree ones, in much the same way that the cross product in vector calculus allows one to compute the area vector of a parallelogram from vectors pointing up the two sides. Alternating also implies that dxi ∧ dxi = 0, in the same way that the cross product of parallel vectors, whose magnitude is the area of the parallelogram spanned by those vectors, is zero. In higher dimensions, dxi1 ∧ ⋅⋅⋅ ∧ dxim = 0 if any two of the indices i1, ..., im are equal, in the same way that the "volume" enclosed by a parallelotope whose edge vectors are linearly dependent is zero. === Multi-index notation === A common notation for the wedge product of elementary k-forms is so called multi-index notation: in an n-dimensional context, for I = ( i 1 , i 2 , … , i k ) , 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n {\displaystyle I=(i_{1},i_{2},\ldots ,i_{k}),1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n} , we define d x I := d x i 1 ∧ ⋯ ∧ d x i k = ⋀ i ∈ I d x i {\textstyle dx^{I}:=dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}=\bigwedge _{i\in I}dx^{i}} . Another useful notation is obtained by defining the set of all strictly increasing multi-indices of length k, in a space of dimension n, denoted J k , n := { I = ( i 1 , … , i k ) : 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n } {\displaystyle {\mathcal {J}}_{k,n}:=\{I=(i_{1},\ldots ,i_{k}):1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n\}} . 
Then locally (wherever the coordinates apply), { d x I } I ∈ J k , n {\displaystyle \{dx^{I}\}_{I\in {\mathcal {J}}_{k,n}}} spans the space of differential k-forms in a manifold M of dimension n, when viewed as a module over the ring C∞(M) of smooth functions on M. Calculating the size of J k , n {\displaystyle {\mathcal {J}}_{k,n}} combinatorially shows that the rank of the module of k-forms on an n-dimensional manifold, and in general the dimension of the space of k-covectors on an n-dimensional vector space, is n choose k: | J k , n | = ( n k ) {\textstyle |{\mathcal {J}}_{k,n}|={\binom {n}{k}}} . This also demonstrates that there are no nonzero differential forms of degree greater than the dimension of the underlying manifold. === The exterior derivative === In addition to the exterior product, there is also the exterior derivative operator d. The exterior derivative of a differential form is a generalization of the differential of a function, in the sense that the exterior derivative of f ∈ C∞(M) = Ω0(M) is exactly the differential of f. When generalized to higher forms, if ω = f dxI is a simple k-form, then its exterior derivative dω is a (k + 1)-form defined by taking the differential of the coefficient functions: d ω = ∑ i = 1 n ∂ f ∂ x i d x i ∧ d x I .
{\displaystyle d\omega =\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,dx^{i}\wedge dx^{I}.} with extension to general k-forms through linearity: if τ = ∑ I ∈ J k , n a I d x I ∈ Ω k ( M ) {\textstyle \tau =\sum _{I\in {\mathcal {J}}_{k,n}}a_{I}\,dx^{I}\in \Omega ^{k}(M)} , then its exterior derivative is d τ = ∑ I ∈ J k , n ( ∑ j = 1 n ∂ a I ∂ x j d x j ) ∧ d x I ∈ Ω k + 1 ( M ) {\displaystyle d\tau =\sum _{I\in {\mathcal {J}}_{k,n}}\left(\sum _{j=1}^{n}{\frac {\partial a_{I}}{\partial x^{j}}}\,dx^{j}\right)\wedge dx^{I}\in \Omega ^{k+1}(M)} In R3, with the Hodge star operator, the exterior derivative corresponds to gradient, curl, and divergence, although this correspondence, like the cross product, does not generalize to higher dimensions, and should be treated with some caution. The exterior derivative itself applies in an arbitrary finite number of dimensions, and is a flexible and powerful tool with wide application in differential geometry, differential topology, and many areas in physics. Of note, although the above definition of the exterior derivative was given with respect to local coordinates, it can be defined in an entirely coordinate-free manner, as an antiderivation of degree 1 on the exterior algebra of differential forms. The benefit of this more general approach is that it allows for a natural coordinate-free approach to integration on manifolds. It also allows for a natural generalization of the fundamental theorem of calculus, called the (generalized) Stokes' theorem, which is a central result in the theory of integration on manifolds. === Differential calculus === Let U be an open set in Rn. A differential 0-form ("zero-form") is defined to be a smooth function f on U – the set of which is denoted C∞(U). If v is any vector in Rn, then f has a directional derivative ∂v f, which is another function on U whose value at a point p ∈ U is the rate of change (at p) of f in the v direction: ( ∂ v f ) ( p ) = d d t f ( p + t v ) | t = 0 .
{\displaystyle (\partial _{\mathbf {v} }f)(p)=\left.{\frac {d}{dt}}f(p+t\mathbf {v} )\right|_{t=0}.} (This notion can be extended pointwise to the case that v is a vector field on U by evaluating v at the point p in the definition.) In particular, if v = ej is the jth coordinate vector then ∂v f is the partial derivative of f with respect to the jth coordinate vector, i.e., ∂f / ∂xj, where x1, x2, ..., xn are the coordinate vectors in U. By their very definition, partial derivatives depend upon the choice of coordinates: if new coordinates y1, y2, ..., yn are introduced, then ∂ f ∂ x j = ∑ i = 1 n ∂ y i ∂ x j ∂ f ∂ y i . {\displaystyle {\frac {\partial f}{\partial x^{j}}}=\sum _{i=1}^{n}{\frac {\partial y^{i}}{\partial x^{j}}}{\frac {\partial f}{\partial y^{i}}}.} The first idea leading to differential forms is the observation that ∂v f (p) is a linear function of v: ( ∂ v + w f ) ( p ) = ( ∂ v f ) ( p ) + ( ∂ w f ) ( p ) ( ∂ c v f ) ( p ) = c ( ∂ v f ) ( p ) {\displaystyle {\begin{aligned}(\partial _{\mathbf {v} +\mathbf {w} }f)(p)&=(\partial _{\mathbf {v} }f)(p)+(\partial _{\mathbf {w} }f)(p)\\(\partial _{c\mathbf {v} }f)(p)&=c(\partial _{\mathbf {v} }f)(p)\end{aligned}}} for any vectors v, w and any real number c. At each point p, this linear map from Rn to R is denoted dfp and called the derivative or differential of f at p. Thus dfp(v) = ∂v f (p). Extended over the whole set, the object df can be viewed as a function that takes a vector field on U, and returns a real-valued function whose value at each point is the derivative along the vector field of the function f. Note that at each p, the differential dfp is not a real number, but a linear functional on tangent vectors, and a prototypical example of a differential 1-form. Since any vector v is a linear combination Σ vjej of its components, df is uniquely determined by dfp(ej) for each j and each p ∈ U, which are just the partial derivatives of f on U. 
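These two facts, that df_p(v) = ∂_v f(p) recovers the partial derivatives on the coordinate vectors and is linear in v, can be illustrated numerically. This is only a sketch under a stated assumption: the hypothetical helper below approximates the directional derivative by a central difference.

```python
def directional_derivative(f, p, v, h=1e-6):
    """(∂_v f)(p) = d/dt f(p + t v) |_{t=0}, via a central difference."""
    fp = f([pi + h * vi for pi, vi in zip(p, v)])
    fm = f([pi - h * vi for pi, vi in zip(p, v)])
    return (fp - fm) / (2 * h)

f = lambda q: q[0] ** 2 + 3 * q[0] * q[1]   # f(x, y) = x^2 + 3xy
p = [1.0, 2.0]
e1, e2 = [1.0, 0.0], [0.0, 1.0]             # coordinate vectors

# df_p(e_j) recovers the partials: ∂f/∂x = 2x + 3y = 8, ∂f/∂y = 3x = 3 at p
assert abs(directional_derivative(f, p, e1) - 8.0) < 1e-6
assert abs(directional_derivative(f, p, e2) - 3.0) < 1e-6

# linearity in v: ∂_{v+w} f = ∂_v f + ∂_w f
assert abs(directional_derivative(f, p, [1.0, 1.0]) - (8.0 + 3.0)) < 1e-6
```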
Thus df provides a way of encoding the partial derivatives of f. It can be decoded by noticing that the coordinates x1, x2, ..., xn are themselves functions on U, and so define differential 1-forms dx1, dx2, ..., dxn. Let f = xi. Since ∂xi / ∂xj = δij, the Kronecker delta function, it follows that ( d x i ) p ( e j ) = δ i j . Hence, for any smooth function f, d f = ∑ i = 1 n ∂ f ∂ x i d x i . ( ∗ ) {\displaystyle df=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,dx^{i}.\qquad (*)} The meaning of this expression is given by evaluating both sides at an arbitrary point p: on the right hand side, the sum is defined "pointwise", so that d f p = ∑ i = 1 n ∂ f ∂ x i ( p ) ( d x i ) p . {\displaystyle df_{p}=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}(p)(dx^{i})_{p}.} Applying both sides to ej, the result on each side is the jth partial derivative of f at p. Since p and j were arbitrary, this proves the formula (*). More generally, for any smooth functions gi and hi on U, we define the differential 1-form α = Σi gi dhi pointwise by α p = ∑ i g i ( p ) ( d h i ) p {\displaystyle \alpha _{p}=\sum _{i}g_{i}(p)(dh_{i})_{p}} for each p ∈ U. Any differential 1-form arises this way, and by using (*) it follows that any differential 1-form α on U may be expressed in coordinates as α = ∑ i = 1 n f i d x i {\displaystyle \alpha =\sum _{i=1}^{n}f_{i}\,dx^{i}} for some smooth functions fi on U. The second idea leading to differential forms arises from the following question: given a differential 1-form α on U, when does there exist a function f on U such that α = df? The above expansion reduces this question to the search for a function f whose partial derivatives ∂f / ∂xi are equal to n given functions fi. For n > 1, such a function does not always exist: any smooth function f satisfies ∂ 2 f ∂ x i ∂ x j = ∂ 2 f ∂ x j ∂ x i , {\displaystyle {\frac {\partial ^{2}f}{\partial x^{i}\,\partial x^{j}}}={\frac {\partial ^{2}f}{\partial x^{j}\,\partial x^{i}}},} so it will be impossible to find such an f unless ∂ f j ∂ x i − ∂ f i ∂ x j = 0 {\displaystyle {\frac {\partial f_{j}}{\partial x^{i}}}-{\frac {\partial f_{i}}{\partial x^{j}}}=0} for all i and j.
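This integrability condition can be probed numerically. In the sketch below (a hypothetical helper using central-difference partial derivatives, for illustration only), a 1-form α = Σ_j f_j dx^j is represented by a list of coefficient functions, and the quantities ∂f_j/∂x^i − ∂f_i/∂x^j are computed at a point:

```python
def d_of_1form(fs, p, h=1e-5):
    """Coefficients of dα for α = Σ_j f_j dx^j at the point p, in the
    basis dx^i ∧ dx^j with i < j: ∂f_j/∂x^i - ∂f_i/∂x^j,
    approximated by central differences."""
    n = len(p)
    def partial(f, i, q):
        qp, qm = list(q), list(q)
        qp[i] += h
        qm[i] -= h
        return (f(qp) - f(qm)) / (2 * h)
    return {(i, j): partial(fs[j], i, p) - partial(fs[i], j, p)
            for i in range(n) for j in range(i + 1, n)}

# α = df for f(x, y) = x²y, so f_0 = ∂f/∂x = 2xy and f_1 = ∂f/∂y = x²:
# the condition holds (α is "closed")
exact = [lambda q: 2 * q[0] * q[1], lambda q: q[0] ** 2]
assert all(abs(c) < 1e-6 for c in d_of_1form(exact, [1.3, -0.7]).values())

# α = -y dx + x dy is not of the form df: the obstruction is 2
rot = [lambda q: -q[1], lambda q: q[0]]
assert abs(d_of_1form(rot, [0.2, 0.5])[(0, 1)] - 2.0) < 1e-6
```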
The skew-symmetry of the left hand side in i and j suggests introducing an antisymmetric product ∧ on differential 1-forms, the exterior product, so that these equations can be combined into a single condition ∑ i , j = 1 n ∂ f j ∂ x i d x i ∧ d x j = 0 , {\displaystyle \sum _{i,j=1}^{n}{\frac {\partial f_{j}}{\partial x^{i}}}\,dx^{i}\wedge dx^{j}=0,} where ∧ is defined so that: d x i ∧ d x j = − d x j ∧ d x i . {\displaystyle dx^{i}\wedge dx^{j}=-dx^{j}\wedge dx^{i}.} This is an example of a differential 2-form. This 2-form is called the exterior derivative dα of α = ∑nj=1 fj dxj. It is given by d α = ∑ j = 1 n d f j ∧ d x j = ∑ i , j = 1 n ∂ f j ∂ x i d x i ∧ d x j . {\displaystyle d\alpha =\sum _{j=1}^{n}df_{j}\wedge dx^{j}=\sum _{i,j=1}^{n}{\frac {\partial f_{j}}{\partial x^{i}}}\,dx^{i}\wedge dx^{j}.} To summarize: dα = 0 is a necessary condition for the existence of a function f with α = df. Differential 0-forms, 1-forms, and 2-forms are special cases of differential forms. For each k, there is a space of differential k-forms, which can be expressed in terms of the coordinates as ∑ i 1 , i 2 … i k = 1 n f i 1 i 2 … i k d x i 1 ∧ d x i 2 ∧ ⋯ ∧ d x i k {\displaystyle \sum _{i_{1},i_{2}\ldots i_{k}=1}^{n}f_{i_{1}i_{2}\ldots i_{k}}\,dx^{i_{1}}\wedge dx^{i_{2}}\wedge \cdots \wedge dx^{i_{k}}} for a collection of functions fi1i2⋅⋅⋅ik. Antisymmetry, which was already present for 2-forms, makes it possible to restrict the sum to those sets of indices for which i1 < i2 < ... < ik−1 < ik. Differential forms can be multiplied together using the exterior product, and for any differential k-form α, there is a differential (k + 1)-form dα called the exterior derivative of α. Differential forms, the exterior product and the exterior derivative are independent of a choice of coordinates. Consequently, they may be defined on any smooth manifold M. 
One way to do this is to cover M with coordinate charts and define a differential k-form on M to be a family of differential k-forms on each chart which agree on the overlaps. However, there are more intrinsic definitions which make the independence of coordinates manifest. == Intrinsic definitions == Let M be a smooth manifold. A smooth differential form of degree k is a smooth section of the kth exterior power of the cotangent bundle of M. The set of all differential k-forms on a manifold M is a vector space, often denoted Ω k ( M ) {\displaystyle \Omega ^{k}(M)} . The definition of a differential form may be restated as follows. At any point p ∈ M {\displaystyle p\in M} , a k-form β {\displaystyle \beta } defines an element β p ∈ ⋀ k T p ∗ M , {\displaystyle \beta _{p}\in {\textstyle \bigwedge }^{k}T_{p}^{*}M,} where T p M {\displaystyle T_{p}M} is the tangent space to M at p and T p ∗ ( M ) {\displaystyle T_{p}^{*}(M)} is its dual space. This space is naturally isomorphic to the fiber at p of the dual bundle of the kth exterior power of the tangent bundle of M. That is, β p {\displaystyle \beta _{p}} is also a linear functional β p : ⋀ k T p M → R {\textstyle \beta _{p}\colon {\textstyle \bigwedge }^{k}T_{p}M\to \mathbf {R} } , i.e. the dual of the kth exterior power is isomorphic to the kth exterior power of the dual: ⋀ k T p ∗ M ≅ ( ⋀ k T p M ) ∗ {\displaystyle {\textstyle \bigwedge }^{k}T_{p}^{*}M\cong {\Big (}{\textstyle \bigwedge }^{k}T_{p}M{\Big )}^{*}} By the universal property of exterior powers, this is equivalently an alternating multilinear map: β p : ⨁ n = 1 k T p M → R . {\displaystyle \beta _{p}\colon \bigoplus _{n=1}^{k}T_{p}M\to \mathbf {R} .} Consequently, a differential k-form may be evaluated against any k-tuple of tangent vectors to the same point p of M. For example, a differential 1-form α assigns to each point p ∈ M {\displaystyle p\in M} a linear functional αp on T p M {\displaystyle T_{p}M} .
In the presence of an inner product on T p M {\displaystyle T_{p}M} (induced by a Riemannian metric on M), αp may be represented as the inner product with a tangent vector X p {\displaystyle X_{p}} . Differential 1-forms are sometimes called covariant vector fields, covector fields, or "dual vector fields", particularly within physics. The exterior algebra may be embedded in the tensor algebra by means of the alternation map. The alternation map is defined as a mapping Alt : ⨂ k T ∗ M → ⨂ k T ∗ M . {\displaystyle \operatorname {Alt} \colon {\bigotimes }^{k}T^{*}M\to {\bigotimes }^{k}T^{*}M.} For a tensor τ {\displaystyle \tau } at a point p, Alt ⁡ ( τ p ) ( x 1 , … , x k ) = 1 k ! ∑ σ ∈ S k sgn ⁡ ( σ ) τ p ( x σ ( 1 ) , … , x σ ( k ) ) , {\displaystyle \operatorname {Alt} (\tau _{p})(x_{1},\dots ,x_{k})={\frac {1}{k!}}\sum _{\sigma \in S_{k}}\operatorname {sgn}(\sigma )\tau _{p}(x_{\sigma (1)},\dots ,x_{\sigma (k)}),} where Sk is the symmetric group on k elements. The alternation map is constant on the cosets of the ideal in the tensor algebra generated by the symmetric 2-forms, and therefore descends to an embedding Alt : ⋀ k T ∗ M → ⨂ k T ∗ M . {\displaystyle \operatorname {Alt} \colon {\textstyle \bigwedge }^{k}T^{*}M\to {\bigotimes }^{k}T^{*}M.} This map exhibits β {\displaystyle \beta } as a totally antisymmetric covariant tensor field of rank k. The differential forms on M are in one-to-one correspondence with such tensor fields. == Operations == As well as the addition and multiplication by scalar operations which arise from the vector space structure, there are several other standard operations defined on differential forms. 
The most important operations are the exterior product of two differential forms, the exterior derivative of a single differential form, the interior product of a differential form and a vector field, the Lie derivative of a differential form with respect to a vector field and the covariant derivative of a differential form with respect to a vector field on a manifold with a defined connection. === Exterior product === The exterior product of a k-form α and an ℓ-form β, denoted α ∧ β, is a (k + ℓ)-form. At each point p of the manifold M, the forms α and β are elements of an exterior power of the cotangent space at p. When the exterior algebra is viewed as a quotient of the tensor algebra, the exterior product corresponds to the tensor product (modulo the equivalence relation defining the exterior algebra). The antisymmetry inherent in the exterior algebra means that when α ∧ β is viewed as a multilinear functional, it is alternating. However, when the exterior algebra is embedded as a subspace of the tensor algebra by means of the alternation map, the tensor product α ⊗ β is not alternating. There is an explicit formula which describes the exterior product in this situation. The exterior product is α ∧ β = Alt ⁡ ( α ⊗ β ) . {\displaystyle \alpha \wedge \beta =\operatorname {Alt} (\alpha \otimes \beta ).} If the embedding of ⋀ n T ∗ M {\displaystyle {\textstyle \bigwedge }^{n}T^{*}M} into ⨂ n T ∗ M {\displaystyle {\bigotimes }^{n}T^{*}M} is done via the map n ! Alt {\displaystyle n!\operatorname {Alt} } instead of Alt {\displaystyle \operatorname {Alt} } , the exterior product is α ∧ β = ( k + ℓ ) ! k ! ℓ ! Alt ⁡ ( α ⊗ β ) . {\displaystyle \alpha \wedge \beta ={\frac {(k+\ell )!}{k!\ell !}}\operatorname {Alt} (\alpha \otimes \beta ).} This description is useful for explicit computations. 
For example, if k = ℓ = 1, then α ∧ β is the 2-form whose value at a point p is the alternating bilinear form defined by ( α ∧ β ) p ( v , w ) = α p ( v ) β p ( w ) − α p ( w ) β p ( v ) {\displaystyle (\alpha \wedge \beta )_{p}(v,w)=\alpha _{p}(v)\beta _{p}(w)-\alpha _{p}(w)\beta _{p}(v)} for v, w ∈ TpM. The exterior product is bilinear: If α, β, and γ are any differential forms, and if f is any smooth function, then α ∧ ( β + γ ) = α ∧ β + α ∧ γ , {\displaystyle \alpha \wedge (\beta +\gamma )=\alpha \wedge \beta +\alpha \wedge \gamma ,} α ∧ ( f ⋅ β ) = f ⋅ ( α ∧ β ) . {\displaystyle \alpha \wedge (f\cdot \beta )=f\cdot (\alpha \wedge \beta ).} It is skew commutative (also known as graded commutative), meaning that it satisfies a variant of anticommutativity that depends on the degrees of the forms: if α is a k-form and β is an ℓ-form, then α ∧ β = ( − 1 ) k ℓ β ∧ α . {\displaystyle \alpha \wedge \beta =(-1)^{k\ell }\beta \wedge \alpha .} One also has the graded Leibniz rule: d ( α ∧ β ) = d α ∧ β + ( − 1 ) k α ∧ d β . {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta .} === Riemannian manifold === On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, the metric defines a fibre-wise isomorphism of the tangent and cotangent bundles. This makes it possible to convert vector fields to covector fields and vice versa. It also enables the definition of additional operations such as the Hodge star operator ⋆ : Ω k ( M ) → ∼ Ω n − k ( M ) {\displaystyle \star \colon \Omega ^{k}(M)\ {\stackrel {\sim }{\to }}\ \Omega ^{n-k}(M)} and the codifferential δ : Ω k ( M ) → Ω k − 1 ( M ) {\displaystyle \delta \colon \Omega ^{k}(M)\rightarrow \Omega ^{k-1}(M)} , which has degree −1 and is adjoint to the exterior differential d. 
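On Euclidean R^n with the standard orientation, the Hodge star just mentioned acts on a basis k-form dx^I by sending it to ±dx^J, where J is the complementary index set and the sign is that of the permutation (I, J) of (1, ..., n). This behavior can be sketched in a few lines of Python (the helper names are illustrative; this assumes the flat Euclidean metric, not a general Riemannian one):

```python
from itertools import combinations

def sign_of(seq):
    """Sign of the permutation that sorts seq (distinct entries)."""
    s = 1
    seq = list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def hodge_star(I, n):
    """Hodge star of the basis k-form dx^I on Euclidean R^n:
    ⋆(dx^I) = ε · dx^J, where J is the complementary index set and
    ε is the sign of the permutation (I, J) of (1, ..., n)."""
    J = tuple(i for i in range(1, n + 1) if i not in I)
    return sign_of(I + J), J

# in R^3: ⋆dx = dy ∧ dz, ⋆dy = -(dx ∧ dz) = dz ∧ dx, ⋆(dx ∧ dy) = dz
assert hodge_star((1,), 3) == (1, (2, 3))
assert hodge_star((2,), 3) == (-1, (1, 3))
assert hodge_star((1, 2), 3) == (1, (3,))

# on basis forms, applying ⋆ twice gives (-1)^{k(n-k)} times the identity
n, k = 4, 2
for I in combinations(range(1, n + 1), k):
    s1, J = hodge_star(I, n)
    s2, K = hodge_star(J, n)
    assert K == I and s1 * s2 == (-1) ** (k * (n - k))
```

This is the mechanism behind the R^3 identifications ⋆dx = dy ∧ dz and their cyclic variants used when relating d to gradient, curl, and divergence.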
==== Vector field structures ==== On a pseudo-Riemannian manifold, 1-forms can be identified with vector fields; vector fields have additional distinct algebraic structures, which are listed here for context and to avoid confusion. Firstly, each (co)tangent space generates a Clifford algebra, where the product of a (co)vector with itself is given by the value of a quadratic form – in this case, the natural one induced by the metric. This algebra is distinct from the exterior algebra of differential forms, which can be viewed as a Clifford algebra where the quadratic form vanishes (since the exterior product of any vector with itself is zero). Clifford algebras are thus non-anticommutative ("quantum") deformations of the exterior algebra. They are studied in geometric algebra. Another alternative is to consider vector fields as derivations. The (noncommutative) algebra of differential operators they generate is the Weyl algebra and is a noncommutative ("quantum") deformation of the symmetric algebra in the vector fields. === Exterior differential complex === One important property of the exterior derivative is that d2 = 0. This means that the exterior derivative defines a cochain complex: 0 → Ω 0 ( M ) → d Ω 1 ( M ) → d Ω 2 ( M ) → d Ω 3 ( M ) → ⋯ → Ω n ( M ) → 0. {\displaystyle 0\ \to \ \Omega ^{0}(M)\ {\stackrel {d}{\to }}\ \Omega ^{1}(M)\ {\stackrel {d}{\to }}\ \Omega ^{2}(M)\ {\stackrel {d}{\to }}\ \Omega ^{3}(M)\ \to \ \cdots \ \to \ \Omega ^{n}(M)\ \to \ 0.} This complex is called the de Rham complex, and its cohomology is by definition the de Rham cohomology of M. By the Poincaré lemma, the de Rham complex is locally exact except at Ω0(M). The kernel at Ω0(M) is the space of locally constant functions on M. Therefore, the complex is a resolution of the constant sheaf R, which in turn implies a form of de Rham's theorem: de Rham cohomology computes the sheaf cohomology of R. == Pullback == Suppose that f : M → N is smooth. 
The differential of f is a smooth map df : TM → TN between the tangent bundles of M and N. This map is also denoted f∗ and called the pushforward. For any point p ∈ M and any tangent vector v ∈ TpM, there is a well-defined pushforward vector f∗(v) in Tf(p)N. However, the same is not true of a vector field. If f is not injective, say because q ∈ N has two or more preimages, then the vector field may determine two or more distinct vectors in TqN. If f is not surjective, then there will be a point q ∈ N at which f∗ does not determine any tangent vector at all. Since a vector field on N determines, by definition, a unique tangent vector at every point of N, the pushforward of a vector field does not always exist. By contrast, it is always possible to pull back a differential form. A differential form on N may be viewed as a linear functional on each tangent space. Precomposing this functional with the differential df : TM → TN defines a linear functional on each tangent space of M and therefore a differential form on M. The existence of pullbacks is one of the key features of the theory of differential forms. It leads to the existence of pullback maps in other situations, such as pullback homomorphisms in de Rham cohomology. Formally, let f : M → N be smooth, and let ω be a smooth k-form on N. Then there is a differential form f∗ω on M, called the pullback of ω, which captures the behavior of ω as seen relative to f. To define the pullback, fix a point p of M and tangent vectors v1, ..., vk to M at p. The pullback of ω is defined by the formula ( f ∗ ω ) p ( v 1 , … , v k ) = ω f ( p ) ( f ∗ v 1 , … , f ∗ v k ) . {\displaystyle (f^{*}\omega )_{p}(v_{1},\ldots ,v_{k})=\omega _{f(p)}(f_{*}v_{1},\ldots ,f_{*}v_{k}).} There are several more abstract ways to view this definition. If ω is a 1-form on N, then it may be viewed as a section of the cotangent bundle T∗N of N. Using ∗ to denote a dual map, the dual to the differential of f is (df)∗ : T∗N → T∗M. 
The pullback of ω may be defined to be the composite M → f N → ω T ∗ N ⟶ ( d f ) ∗ T ∗ M . {\displaystyle M\ {\stackrel {f}{\to }}\ N\ {\stackrel {\omega }{\to }}\ T^{*}N\ {\stackrel {(df)^{*}}{\longrightarrow }}\ T^{*}M.} This is a section of the cotangent bundle of M and hence a differential 1-form on M. In full generality, let ⋀ k ( d f ) ∗ {\textstyle \bigwedge ^{k}(df)^{*}} denote the kth exterior power of the dual map to the differential. Then the pullback of a k-form ω is the composite M → f N → ω ⋀ k T ∗ N ⟶ ⋀ k ( d f ) ∗ ⋀ k T ∗ M . {\displaystyle M\ {\stackrel {f}{\to }}\ N\ {\stackrel {\omega }{\to }}\ {\textstyle \bigwedge }^{k}T^{*}N\ {\stackrel {{\bigwedge }^{k}(df)^{*}}{\longrightarrow }}\ {\textstyle \bigwedge }^{k}T^{*}M.} Another abstract way to view the pullback comes from viewing a k-form ω as a linear functional on tangent spaces. From this point of view, ω is a morphism of vector bundles ⋀ k T N → ω N × R , {\displaystyle {\textstyle \bigwedge }^{k}TN\ {\stackrel {\omega }{\to }}\ N\times \mathbf {R} ,} where N × R is the trivial rank one bundle on N. The composite map ⋀ k T M ⟶ ⋀ k d f ⋀ k T N → ω N × R {\displaystyle {\textstyle \bigwedge }^{k}TM\ {\stackrel {{\bigwedge }^{k}df}{\longrightarrow }}\ {\textstyle \bigwedge }^{k}TN\ {\stackrel {\omega }{\to }}\ N\times \mathbf {R} } defines a linear functional on each tangent space of M, and therefore it factors through the trivial bundle M × R. The vector bundle morphism ⋀ k T M → M × R {\textstyle {\textstyle \bigwedge }^{k}TM\to M\times \mathbf {R} } defined in this way is f∗ω. Pullback respects all of the basic operations on forms. If ω and η are forms and c is a real number, then f ∗ ( c ω ) = c ( f ∗ ω ) , f ∗ ( ω + η ) = f ∗ ω + f ∗ η , f ∗ ( ω ∧ η ) = f ∗ ω ∧ f ∗ η , f ∗ ( d ω ) = d ( f ∗ ω ) . 
{\displaystyle {\begin{aligned}f^{*}(c\omega )&=c(f^{*}\omega ),\\f^{*}(\omega +\eta )&=f^{*}\omega +f^{*}\eta ,\\f^{*}(\omega \wedge \eta )&=f^{*}\omega \wedge f^{*}\eta ,\\f^{*}(d\omega )&=d(f^{*}\omega ).\end{aligned}}} The pullback of a form can also be written in coordinates. Assume that x1, ..., xm are coordinates on M, that y1, ..., yn are coordinates on N, and that these coordinate systems are related by the formulas yi = fi(x1, ..., xm) for all i. Locally on N, ω can be written as ω = ∑ i 1 < ⋯ < i k ω i 1 ⋯ i k d y i 1 ∧ ⋯ ∧ d y i k , {\displaystyle \omega =\sum _{i_{1}<\cdots <i_{k}}\omega _{i_{1}\cdots i_{k}}\,dy^{i_{1}}\wedge \cdots \wedge dy^{i_{k}},} where, for each choice of i1, ..., ik, ωi1⋅⋅⋅ik is a real-valued function of y1, ..., yn. Using the linearity of pullback and its compatibility with exterior product, the pullback of ω has the formula f ∗ ω = ∑ i 1 < ⋯ < i k ( ω i 1 ⋯ i k ∘ f ) d f i 1 ∧ ⋯ ∧ d f i k . {\displaystyle f^{*}\omega =\sum _{i_{1}<\cdots <i_{k}}(\omega _{i_{1}\cdots i_{k}}\circ f)\,df_{i_{1}}\wedge \cdots \wedge df_{i_{k}}.} Each exterior derivative dfi can be expanded in terms of dx1, ..., dxm. The resulting k-form can be written using Jacobian matrices: f ∗ ω = ∑ i 1 < ⋯ < i k ∑ j 1 < ⋯ < j k ( ω i 1 ⋯ i k ∘ f ) ∂ ( f i 1 , … , f i k ) ∂ ( x j 1 , … , x j k ) d x j 1 ∧ ⋯ ∧ d x j k . {\displaystyle f^{*}\omega =\sum _{i_{1}<\cdots <i_{k}}\sum _{j_{1}<\cdots <j_{k}}(\omega _{i_{1}\cdots i_{k}}\circ f){\frac {\partial (f_{i_{1}},\ldots ,f_{i_{k}})}{\partial (x^{j_{1}},\ldots ,x^{j_{k}})}}\,dx^{j_{1}}\wedge \cdots \wedge dx^{j_{k}}.} Here, ∂ ( f i 1 , … , f i k ) ∂ ( x j 1 , … , x j k ) {\textstyle {\frac {\partial (f_{i_{1}},\ldots ,f_{i_{k}})}{\partial (x^{j_{1}},\ldots ,x^{j_{k}})}}} denotes the determinant of the matrix whose entries are ∂ f i m ∂ x j n {\textstyle {\frac {\partial f_{i_{m}}}{\partial x^{j_{n}}}}} , 1 ≤ m , n ≤ k {\displaystyle 1\leq m,n\leq k} . 
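For a top-degree form on R², the Jacobian formula above reduces to a single 2 × 2 determinant: the coefficient of dx¹ ∧ dx² in f*(dy¹ ∧ dy²) is ∂(f₁, f₂)/∂(x¹, x²). The sketch below checks this numerically for the polar-coordinate map, whose Jacobian determinant is r (the helper name is illustrative, and derivatives are approximated by central differences):

```python
import math

def pullback_2form_coeff(f, p, h=1e-6):
    """Coefficient of dx^1 ∧ dx^2 in f*(dy^1 ∧ dy^2) at the point p:
    the Jacobian determinant ∂(f_1, f_2)/∂(x^1, x^2), approximated
    by central differences."""
    def partial(component, i):
        qp, qm = list(p), list(p)
        qp[i] += h
        qm[i] -= h
        return (f(qp)[component] - f(qm)[component]) / (2 * h)
    return partial(0, 0) * partial(1, 1) - partial(0, 1) * partial(1, 0)

# polar coordinates: f(r, θ) = (r cos θ, r sin θ), so f*(dx ∧ dy) = r dr ∧ dθ
polar = lambda q: (q[0] * math.cos(q[1]), q[0] * math.sin(q[1]))
r, theta = 2.0, 0.9
assert abs(pullback_2form_coeff(polar, [r, theta]) - r) < 1e-4
```

This is exactly the factor that appears in the change of variables formula when an integral over the plane is rewritten in polar coordinates.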
== Integration == A differential k-form can be integrated over an oriented k-dimensional manifold. When the k-form is defined on an n-dimensional manifold with n > k, then the k-form can be integrated over oriented k-dimensional submanifolds. If k = 0, integration over oriented 0-dimensional submanifolds is just the summation of the integrand evaluated at points, according to the orientation of those points. Other values of k = 1, 2, 3, ... correspond to line integrals, surface integrals, volume integrals, and so on. There are several equivalent ways to formally define the integral of a differential form, all of which depend on reducing to the case of Euclidean space. === Integration on Euclidean space === Let U be an open subset of Rn. Give Rn its standard orientation and U the restriction of that orientation. Every smooth n-form ω on U has the form ω = f ( x ) d x 1 ∧ ⋯ ∧ d x n {\displaystyle \omega =f(x)\,dx^{1}\wedge \cdots \wedge dx^{n}} for some smooth function f : Rn → R. Such a function has an integral in the usual Riemann or Lebesgue sense. This allows us to define the integral of ω to be the integral of f: ∫ U ω = def ∫ U f ( x ) d x 1 ⋯ d x n . {\displaystyle \int _{U}\omega \ {\stackrel {\text{def}}{=}}\int _{U}f(x)\,dx^{1}\cdots dx^{n}.} Fixing an orientation is necessary for this to be well-defined. The skew-symmetry of differential forms means that the integral of, say, dx1 ∧ dx2 must be the negative of the integral of dx2 ∧ dx1. Riemann and Lebesgue integrals cannot see this dependence on the ordering of the coordinates, so they leave the sign of the integral undetermined. The orientation resolves this ambiguity. === Integration over chains === Let M be an n-manifold and ω an n-form on M. First, assume that there is a parametrization of M by an open subset of Euclidean space. That is, assume that there exists a diffeomorphism φ : D → M {\displaystyle \varphi \colon D\to M} where D ⊆ Rn. Give M the orientation induced by φ. 
Then (Rudin 1976) defines the integral of ω over M to be the integral of φ∗ω over D. In coordinates, this has the following expression. Fix an embedding of M in RI with coordinates x1, ..., xI. Then ω = ∑ i 1 < ⋯ < i n a i 1 , … , i n ( x ) d x i 1 ∧ ⋯ ∧ d x i n . {\displaystyle \omega =\sum _{i_{1}<\cdots <i_{n}}a_{i_{1},\ldots ,i_{n}}({\mathbf {x} })\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{n}}.} Suppose that φ is defined by φ ( u ) = ( x 1 ( u ) , … , x I ( u ) ) . {\displaystyle \varphi ({\mathbf {u} })=(x^{1}({\mathbf {u} }),\ldots ,x^{I}({\mathbf {u} })).} Then the integral may be written in coordinates as ∫ M ω = ∫ D ∑ i 1 < ⋯ < i n a i 1 , … , i n ( φ ( u ) ) ∂ ( x i 1 , … , x i n ) ∂ ( u 1 , … , u n ) d u 1 ⋯ d u n , {\displaystyle \int _{M}\omega =\int _{D}\sum _{i_{1}<\cdots <i_{n}}a_{i_{1},\ldots ,i_{n}}(\varphi ({\mathbf {u} })){\frac {\partial (x^{i_{1}},\ldots ,x^{i_{n}})}{\partial (u^{1},\dots ,u^{n})}}\,du^{1}\cdots du^{n},} where ∂ ( x i 1 , … , x i n ) ∂ ( u 1 , … , u n ) {\displaystyle {\frac {\partial (x^{i_{1}},\ldots ,x^{i_{n}})}{\partial (u^{1},\ldots ,u^{n})}}} is the determinant of the Jacobian. The Jacobian exists because φ is differentiable. In general, an n-manifold cannot be parametrized by an open subset of Rn. But such a parametrization is always possible locally, so it is possible to define integrals over arbitrary manifolds by defining them as sums of integrals over collections of local parametrizations. Moreover, it is also possible to define parametrizations of k-dimensional subsets for k < n, and this makes it possible to define integrals of k-forms. To make this precise, it is convenient to fix a standard domain D in Rk, usually a cube or a simplex. A k-chain is a formal sum of smooth embeddings D → M. That is, it is a collection of smooth embeddings, each of which is assigned an integer multiplicity. Each smooth embedding determines a k-dimensional submanifold of M. 
If the chain is c = ∑ i = 1 r m i φ i , {\displaystyle c=\sum _{i=1}^{r}m_{i}\varphi _{i},} then the integral of a k-form ω over c is defined to be the sum of the integrals over the terms of c: ∫ c ω = ∑ i = 1 r m i ∫ D φ i ∗ ω . {\displaystyle \int _{c}\omega =\sum _{i=1}^{r}m_{i}\int _{D}\varphi _{i}^{*}\omega .} This approach to defining integration does not assign a direct meaning to integration over the whole manifold M. However, it is still possible to assign such a meaning indirectly because every smooth manifold may be smoothly triangulated in an essentially unique way, and the integral over M may be defined to be the integral over the chain determined by a triangulation. === Integration using partitions of unity === There is another approach, expounded in (Dieudonné 1972), which does directly assign a meaning to integration over M, but this approach requires fixing an orientation of M. The integral of an n-form ω on an n-dimensional manifold is defined by working in charts. Suppose first that ω is supported on a single positively oriented chart. On this chart, it may be pulled back to an n-form on an open subset of Rn. Here, the form has a well-defined Riemann or Lebesgue integral as before. The change of variables formula and the assumption that the chart is positively oriented together ensure that the integral of ω is independent of the chosen chart. In the general case, use a partition of unity to write ω as a sum of n-forms, each of which is supported in a single positively oriented chart, and define the integral of ω to be the sum of the integrals of each term in the partition of unity. It is also possible to integrate k-forms on oriented k-dimensional submanifolds using this more intrinsic approach. The form is pulled back to the submanifold, where the integral is defined using charts as before. 
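The integral over a chain is the multiplicity-weighted sum of the integrals over its terms. As a sketch (the 1-form ω = x dy, the two embeddings, and their multiplicities are illustrative assumptions), the following computes the integral of ω over the chain c = 2φ₁ − φ₂:

```python
import sympy as sp

t = sp.symbols('t')

# 1-form omega = x dy on R^2 (an illustrative choice)
def integral_over_embedding(phi):
    """Pull omega = x dy back along phi : [0,1] -> R^2 and integrate."""
    x, y = phi
    return sp.integrate(x * sp.diff(y, t), (t, 0, 1))

# Chain c = 2*phi1 - phi2: embeddings with integer multiplicities
phi1 = (t, t**2)
phi2 = (t, t**3)
chain = [(2, phi1), (-1, phi2)]

total = sum(m * integral_over_embedding(phi) for m, phi in chain)
print(total)  # 2*(2/3) - 3/4 = 7/12
```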
For example, given a path γ(t) : [0, 1] → R2, integrating a 1-form on the path is simply pulling back the form to a form f(t) dt on [0, 1], and this integral is the integral of the function f(t) on the interval. === Integration along fibers === Fubini's theorem states that the integral over a set that is a product may be computed as an iterated integral over the two factors in the product. This suggests that the integral of a differential form over a product ought to be computable as an iterated integral as well. The geometric flexibility of differential forms ensures that this is possible not just for products, but in more general situations as well. Under some hypotheses, it is possible to integrate along the fibers of a smooth map, and the analog of Fubini's theorem is the case where this map is the projection from a product to one of its factors. Because integrating a differential form over a submanifold requires fixing an orientation, a prerequisite to integration along fibers is the existence of a well-defined orientation on those fibers. Let M and N be two orientable manifolds of pure dimensions m and n, respectively. Suppose that f : M → N is a surjective submersion. This implies that each fiber f−1(y) is (m − n)-dimensional and that, around each point of M, there is a chart on which f looks like the projection from a product onto one of its factors. Fix x ∈ M and set y = f(x). Suppose that ω x ∈ ⋀ m T x ∗ M , η y ∈ ⋀ n T y ∗ N , {\displaystyle {\begin{aligned}\omega _{x}&\in {\textstyle \bigwedge }^{m}T_{x}^{*}M,\\[2pt]\eta _{y}&\in {\textstyle \bigwedge }^{n}T_{y}^{*}N,\end{aligned}}} and that ηy does not vanish. Following (Dieudonné 1972), there is a unique σ x ∈ ⋀ m − n T x ∗ ( f − 1 ( y ) ) {\displaystyle \sigma _{x}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}(f^{-1}(y))} which may be thought of as the fibral part of ωx with respect to ηy. More precisely, define j : f−1(y) → M to be the inclusion. 
Then σx is defined by the property that ω x = ( f ∗ η y ) x ∧ σ x ′ ∈ ⋀ m T x ∗ M , {\displaystyle \omega _{x}=(f^{*}\eta _{y})_{x}\wedge \sigma '_{x}\in {\textstyle \bigwedge }^{m}T_{x}^{*}M,} where σ x ′ ∈ ⋀ m − n T x ∗ M {\displaystyle \sigma '_{x}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}M} is any (m − n)-covector for which σ x = j ∗ σ x ′ . {\displaystyle \sigma _{x}=j^{*}\sigma '_{x}.} The form σx may also be notated ωx / ηy. Moreover, for fixed y, σx varies smoothly with respect to x. That is, suppose that ω : f − 1 ( y ) → T ∗ M {\displaystyle \omega \colon f^{-1}(y)\to T^{*}M} is a smooth section of the projection map; we say that ω is a smooth differential m-form on M along f−1(y). Then there is a smooth differential (m − n)-form σ on f−1(y) such that, at each x ∈ f−1(y), σ x = ω x / η y . {\displaystyle \sigma _{x}=\omega _{x}/\eta _{y}.} This form is denoted ω / ηy. The same construction works if ω is an m-form in a neighborhood of the fiber, and the same notation is used. A consequence is that each fiber f−1(y) is orientable. In particular, a choice of orientation forms on M and N defines an orientation of every fiber of f. The analog of Fubini's theorem is as follows. As before, M and N are two orientable manifolds of pure dimensions m and n, and f : M → N is a surjective submersion. Fix orientations of M and N, and give each fiber of f the induced orientation. Let ω be an m-form on M, and let η be an n-form on N that is almost everywhere positive with respect to the orientation of N. Then, for almost every y ∈ N, the form ω / ηy is a well-defined integrable m − n form on f−1(y). Moreover, there is an integrable n-form on N defined by y ↦ ( ∫ f − 1 ( y ) ω / η y ) η y . {\displaystyle y\mapsto {\bigg (}\int _{f^{-1}(y)}\omega /\eta _{y}{\bigg )}\,\eta _{y}.} Denote this form by ( ∫ f − 1 ( y ) ω / η ) η . 
{\displaystyle {\bigg (}\int _{f^{-1}(y)}\omega /\eta {\bigg )}\,\eta .} Then (Dieudonné 1972) proves the generalized Fubini formula ∫ M ω = ∫ N ( ∫ f − 1 ( y ) ω / η ) η . {\displaystyle \int _{M}\omega =\int _{N}{\bigg (}\int _{f^{-1}(y)}\omega /\eta {\bigg )}\,\eta .} It is also possible to integrate forms of other degrees along the fibers of a submersion. Assume the same hypotheses as before, and let α be a compactly supported (m − n + k)-form on M. Then there is a k-form γ on N which is the result of integrating α along the fibers of f. The form γ is defined by specifying, at each y ∈ N, how γ pairs with each k-vector v at y, and the value of that pairing is an integral over f−1(y) that depends only on α, v, and the orientations of M and N. More precisely, at each y ∈ N, there is an isomorphism ⋀ k T y N → ⋀ n − k T y ∗ N {\displaystyle {\textstyle \bigwedge }^{k}T_{y}N\to {\textstyle \bigwedge }^{n-k}T_{y}^{*}N} defined by the interior product v ↦ v ⌟ ζ y , {\displaystyle \mathbf {v} \mapsto \mathbf {v} \,\lrcorner \,\zeta _{y},} for any choice of volume form ζ in the orientation of N. If x ∈ f−1(y), then a k-vector v at y determines an (n − k)-covector at x by pullback: f ∗ ( v ⌟ ζ y ) ∈ ⋀ n − k T x ∗ M . {\displaystyle f^{*}(\mathbf {v} \,\lrcorner \,\zeta _{y})\in {\textstyle \bigwedge }^{n-k}T_{x}^{*}M.} Each of these covectors has an exterior product against α, so there is an (m − n)-form βv on M along f−1(y) defined by ( β v ) x = ( α x ∧ f ∗ ( v ⌟ ζ y ) ) / ζ y ∈ ⋀ m − n T x ∗ M . {\displaystyle (\beta _{\mathbf {v} })_{x}=\left(\alpha _{x}\wedge f^{*}(\mathbf {v} \,\lrcorner \,\zeta _{y})\right){\big /}\zeta _{y}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}M.} This form depends on the orientation of N but not the choice of ζ. Then the k-form γ is uniquely defined by the property ⟨ γ y , v ⟩ = ∫ f − 1 ( y ) β v , {\displaystyle \langle \gamma _{y},\mathbf {v} \rangle =\int _{f^{-1}(y)}\beta _{\mathbf {v} },} and γ is smooth (Dieudonné 1972). 
This form is also denoted α♭ and is called the integral of α along the fibers of f. Integration along fibers is important for the construction of Gysin maps in de Rham cohomology. Integration along fibers satisfies the projection formula (Dieudonné 1972). If λ is any ℓ-form on N, then α ♭ ∧ λ = ( α ∧ f ∗ λ ) ♭ . {\displaystyle \alpha ^{\flat }\wedge \lambda =(\alpha \wedge f^{*}\lambda )^{\flat }.} === Stokes's theorem === The fundamental relationship between the exterior derivative and integration is given by Stokes' theorem: If ω is an (n − 1)-form with compact support on M and ∂M denotes the boundary of M with its induced orientation, then ∫ M d ω = ∫ ∂ M ω . {\displaystyle \int _{M}d\omega =\int _{\partial M}\omega .} A key consequence of this is that "the integral of a closed form over homologous chains is equal": If ω is a closed k-form and M and N are k-chains that are homologous (such that M − N is the boundary of a (k + 1)-chain W), then ∫ M ω = ∫ N ω {\displaystyle \textstyle {\int _{M}\omega =\int _{N}\omega }} , since the difference is the integral ∫ W d ω = ∫ W 0 = 0 {\displaystyle \textstyle \int _{W}d\omega =\int _{W}0=0} . For example, if ω = df is the derivative of a potential function on the plane or Rn, then the integral of ω over a path from a to b does not depend on the choice of path (the integral is f(b) − f(a)), since different paths with given endpoints are homotopic, hence homologous (a weaker condition). This case is called the gradient theorem, and generalizes the fundamental theorem of calculus. This path independence is very useful in contour integration. This theorem also underlies the duality between de Rham cohomology and the homology of chains. 
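The path independence asserted by the gradient theorem can be verified symbolically. In the sketch below (the potential f = x²y and the two particular paths are illustrative assumptions), the exact 1-form ω = df is integrated over two different paths from (0, 0) to (1, 1), and both integrals equal f(1, 1) − f(0, 0) = 1:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

# Potential function and the exact 1-form omega = df (illustrative choice)
f = x**2 * y
fx, fy = sp.diff(f, x), sp.diff(f, y)

def line_integral(px, py):
    """Integrate omega = f_x dx + f_y dy over the path t -> (px, py), t in [0, 1]."""
    integrand = fx.subs({x: px, y: py}) * sp.diff(px, t) \
              + fy.subs({x: px, y: py}) * sp.diff(py, t)
    return sp.integrate(integrand, (t, 0, 1))

# Two different paths from (0, 0) to (1, 1)
straight = line_integral(t, t)
parabola = line_integral(t, t**2)

print(straight, parabola)  # 1 1, both equal f(1,1) - f(0,0)
```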
=== Relation with measures === On a general differentiable manifold (without additional structure), differential forms cannot be integrated over subsets of the manifold; this is the key distinction between differential forms, which are integrated over chains or oriented submanifolds, and measures, which are integrated over subsets. The simplest example is attempting to integrate the 1-form dx over the interval [0, 1]. Assuming the usual distance (and thus measure) on the real line, this integral is either 1 or −1, depending on orientation: ∫ 0 1 d x = 1 {\textstyle \int _{0}^{1}dx=1} , while ∫ 1 0 d x = − ∫ 0 1 d x = − 1 {\textstyle \int _{1}^{0}dx=-\int _{0}^{1}dx=-1} . By contrast, the integral of the measure |dx| on the interval is unambiguously 1 (i.e. the integral of the constant function 1 with respect to this measure is 1). Similarly, under a change of coordinates a differential n-form changes by the Jacobian determinant J, while a measure changes by the absolute value of the Jacobian determinant, |J|, which further reflects the issue of orientation. For example, under the map x ↦ −x on the line, the differential form dx pulls back to −dx (orientation has reversed), while the Lebesgue measure, which here we denote |dx|, pulls back to |dx| (it does not change). In the presence of the additional data of an orientation, it is possible to integrate n-forms (top-dimensional forms) over the entire manifold or over compact subsets; integration over the entire manifold corresponds to integrating the form over the fundamental class of the manifold, [M]. Formally, in the presence of an orientation, one may identify n-forms with densities on a manifold; densities in turn define a measure, and thus can be integrated (Folland 1999, Section 11.4, pp. 361–362). On an orientable but not oriented manifold, there are two choices of orientation; either choice allows one to integrate n-forms over compact subsets, with the two choices differing by a sign. 
On a non-orientable manifold, n-forms and densities cannot be identified —notably, any top-dimensional form must vanish somewhere (there are no volume forms on non-orientable manifolds), but there are nowhere-vanishing densities— thus while one can integrate densities over compact subsets, one cannot integrate n-forms. One can instead identify densities with top-dimensional pseudoforms. Even in the presence of an orientation, there is in general no meaningful way to integrate k-forms over subsets for k < n because there is no consistent way to use the ambient orientation to orient k-dimensional subsets. Geometrically, a k-dimensional subset can be turned around in place, yielding the same subset with the opposite orientation; for example, the horizontal axis in a plane can be rotated by 180 degrees. Compare the Gram determinant of a set of k vectors in an n-dimensional space, which, unlike the determinant of n vectors, is always positive, corresponding to a squared number. An orientation of a k-submanifold is therefore extra data not derivable from the ambient manifold. On a Riemannian manifold, one may define a k-dimensional Hausdorff measure for any k (integer or real), which may be integrated over k-dimensional subsets of the manifold. A function times this Hausdorff measure can then be integrated over k-dimensional subsets, providing a measure-theoretic analog to integration of k-forms. The n-dimensional Hausdorff measure yields a density, as above. === Currents === The differential form analog of a distribution or generalized function is called a current. The space of k-currents on M is the dual space to an appropriate space of differential k-forms. Currents play the role of generalized domains of integration, similar to but even more flexible than chains. == Applications in physics == Differential forms arise in some important physical contexts. 
For example, in Maxwell's theory of electromagnetism, the Faraday 2-form, or electromagnetic field strength, is F = 1 2 f a b d x a ∧ d x b , {\displaystyle \mathbf {F} ={\frac {1}{2}}f_{ab}\,dx^{a}\wedge dx^{b}\,,} where the fab are formed from the electromagnetic fields E → {\displaystyle {\vec {E}}} and B → {\displaystyle {\vec {B}}} ; e.g., f12 = Ez/c, f23 = −Bz, or equivalent definitions. This form is a special case of the curvature form on the U(1) principal bundle on which both electromagnetism and general gauge theories may be described. The connection form for the principal bundle is the vector potential, typically denoted by A, when represented in some gauge. One then has F = d A . {\displaystyle \mathbf {F} =d\mathbf {A} .} The current 3-form is J = 1 6 j a ε a b c d d x b ∧ d x c ∧ d x d , {\displaystyle \mathbf {J} ={\frac {1}{6}}j^{a}\,\varepsilon _{abcd}\,dx^{b}\wedge dx^{c}\wedge dx^{d}\,,} where ja are the four components of the current density. (Here it is a matter of convention to write Fab instead of fab, i.e. to use capital letters, and to write Ja instead of ja. However, the vector resp. tensor components and the above-mentioned forms have different physical dimensions. Moreover, by decision of an international commission of the International Union of Pure and Applied Physics, the magnetic polarization vector has been called J → {\displaystyle {\vec {J}}} for several decades, and by some publishers J; i.e., the same name is used for different quantities.) Using the above-mentioned definitions, Maxwell's equations can be written very compactly in geometrized units as d F = 0 d ⋆ F = J , {\displaystyle {\begin{aligned}d{\mathbf {F} }&=\mathbf {0} \\d{\star \mathbf {F} }&=\mathbf {J} ,\end{aligned}}} where ⋆ {\displaystyle \star } denotes the Hodge star operator. Similar considerations describe the geometry of gauge theories in general. 
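The homogeneous equation dF = 0 is automatic once F = dA, since d ∘ d = 0. In components this is the cyclic identity ∂ₐF_bc + ∂_bF_ca + ∂_cF_ab = 0, which the following sketch verifies symbolically for an arbitrary potential (the component-wise setup and use of SymPy are assumptions for illustration, not the article's formalism):

```python
import sympy as sp
from itertools import combinations

# Spacetime coordinates and an arbitrary symbolic potential 1-form A_a dx^a
coords = sp.symbols('x0 x1 x2 x3')
A = [sp.Function(f'A{a}')(*coords) for a in range(4)]

# Field strength F = dA, i.e. F_ab = d_a A_b - d_b A_a
def F(a, b):
    return sp.diff(A[b], coords[a]) - sp.diff(A[a], coords[b])

# dF = 0 component-wise: d_a F_bc + d_b F_ca + d_c F_ab = 0 for all a < b < c
for a, b, c in combinations(range(4), 3):
    cyclic = sp.diff(F(b, c), coords[a]) \
           + sp.diff(F(c, a), coords[b]) \
           + sp.diff(F(a, b), coords[c])
    assert sp.simplify(cyclic) == 0

print("dF = 0 verified for all components")
```

The cancellation rests only on the equality of mixed partial derivatives, which is the component form of d ∘ d = 0.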
The 2-form ⋆ F {\displaystyle {\star }\mathbf {F} } , which is dual to the Faraday form, is also called Maxwell 2-form. Electromagnetism is an example of a U(1) gauge theory. Here the Lie group is U(1), the one-dimensional unitary group, which is in particular abelian. There are gauge theories, such as Yang–Mills theory, in which the Lie group is not abelian. In that case, one gets relations which are similar to those described here. The analog of the field F in such theories is the curvature form of the connection, which is represented in a gauge by a Lie algebra-valued one-form A. The Yang–Mills field F is then defined by F = d A + A ∧ A . {\displaystyle \mathbf {F} =d\mathbf {A} +\mathbf {A} \wedge \mathbf {A} .} In the abelian case, such as electromagnetism, A ∧ A = 0, but this does not hold in general. Likewise the field equations are modified by additional terms involving exterior products of A and F, owing to the structure equations of the gauge group. == Applications in geometric measure theory == Numerous minimality results for complex analytic manifolds are based on the Wirtinger inequality for 2-forms. A succinct proof may be found in Herbert Federer's classic text Geometric Measure Theory. The Wirtinger inequality is also a key ingredient in Gromov's inequality for complex projective space in systolic geometry. == See also == Closed and exact differential forms Complex differential form Vector-valued differential form Equivariant differential form Calculus on Manifolds Multilinear form Polynomial differential form Presymplectic form == Notes == == References == == External links == Weisstein, Eric W. "Differential form". MathWorld. Sjamaar, Reyer (2006), Manifolds and differential forms lecture notes (PDF), a course taught at Cornell University. Bachman, David (2003), A Geometric Approach to Differential Forms, arXiv:math/0306194, Bibcode:2003math......6194B, an undergraduate text. Needham, Tristan. 
Visual differential geometry and forms: a mathematical drama in five acts. Princeton University Press, 2021.
Wikipedia/Differential_forms
In mathematics, differential forms provide a unified approach to define integrands over curves, surfaces, solids, and higher-dimensional manifolds. The modern notion of differential forms was pioneered by Élie Cartan. It has many applications, especially in geometry, topology and physics. For instance, the expression f ( x ) d x {\displaystyle f(x)\,dx} is an example of a 1-form, and can be integrated over an interval [ a , b ] {\displaystyle [a,b]} contained in the domain of f {\displaystyle f} : ∫ a b f ( x ) d x . {\displaystyle \int _{a}^{b}f(x)\,dx.} Similarly, the expression f ( x , y , z ) d x ∧ d y + g ( x , y , z ) d z ∧ d x + h ( x , y , z ) d y ∧ d z {\displaystyle f(x,y,z)\,dx\wedge dy+g(x,y,z)\,dz\wedge dx+h(x,y,z)\,dy\wedge dz} is a 2-form that can be integrated over a surface S {\displaystyle S} : ∫ S ( f ( x , y , z ) d x ∧ d y + g ( x , y , z ) d z ∧ d x + h ( x , y , z ) d y ∧ d z ) . {\displaystyle \int _{S}\left(f(x,y,z)\,dx\wedge dy+g(x,y,z)\,dz\wedge dx+h(x,y,z)\,dy\wedge dz\right).} The symbol ∧ {\displaystyle \wedge } denotes the exterior product, sometimes called the wedge product, of two differential forms. Likewise, a 3-form f ( x , y , z ) d x ∧ d y ∧ d z {\displaystyle f(x,y,z)\,dx\wedge dy\wedge dz} represents a volume element that can be integrated over a region of space. In general, a k-form is an object that may be integrated over a k-dimensional manifold, and is homogeneous of degree k in the coordinate differentials d x , d y , … . {\displaystyle dx,dy,\ldots .} On an n-dimensional manifold, a top-dimensional form (n-form) is called a volume form. The differential forms form an alternating algebra. This implies that d y ∧ d x = − d x ∧ d y {\displaystyle dy\wedge dx=-dx\wedge dy} and d x ∧ d x = 0. {\displaystyle dx\wedge dx=0.} This alternating property reflects the orientation of the domain of integration. 
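The alternating property can be made concrete by representing a wedge of two 1-forms as an antisymmetrized bilinear map on pairs of vectors. The following sketch (the representation of covectors as coefficient arrays and the sample vectors are illustrative assumptions) checks dx¹ ∧ dx² = −dx² ∧ dx¹ and dx ∧ dx = 0 numerically:

```python
import numpy as np

def wedge2(a, b):
    """The 2-form a ^ b as a bilinear map: (a^b)(v, w) = a(v)b(w) - a(w)b(v)."""
    return lambda v, w: (a @ v) * (b @ w) - (a @ w) * (b @ v)

n = 3
dx = [np.eye(n)[i] for i in range(n)]  # dx^1, dx^2, dx^3 as coordinate covectors

form12 = wedge2(dx[0], dx[1])          # dx^1 ^ dx^2
form21 = wedge2(dx[1], dx[0])          # dx^2 ^ dx^1
form11 = wedge2(dx[0], dx[0])          # dx^1 ^ dx^1

v, w = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 4.0])
print(form12(v, w), form21(v, w))  # opposite signs: dx1^dx2 = -(dx2^dx1)
print(form11(v, w))                # 0.0: dx ^ dx = 0
```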
The exterior derivative is an operation on differential forms that, given a k-form φ {\displaystyle \varphi } , produces a (k+1)-form d φ . {\displaystyle d\varphi .} This operation extends the differential of a function (a function can be considered as a 0-form, and its differential is d f ( x ) = f ′ ( x ) d x {\displaystyle df(x)=f'(x)\,dx} ). This allows expressing the fundamental theorem of calculus, the divergence theorem, Green's theorem, and Stokes' theorem as special cases of a single general result, the generalized Stokes theorem. Differential 1-forms are naturally dual to vector fields on a differentiable manifold, and the pairing between vector fields and 1-forms is extended to arbitrary differential forms by the interior product. The algebra of differential forms along with the exterior derivative defined on it is preserved by the pullback under smooth functions between two manifolds. This feature allows geometrically invariant information to be moved from one space to another via the pullback, provided that the information is expressed in terms of differential forms. As an example, the change of variables formula for integration becomes a simple statement that an integral is preserved under pullback. == History == Differential forms are part of the field of differential geometry, influenced by linear algebra. Although the notion of a differential is quite old, the initial attempt at an algebraic organization of differential forms is usually credited to Élie Cartan with reference to his 1899 paper. Some aspects of the exterior algebra of differential forms appears in Hermann Grassmann's 1844 work, Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (The Theory of Linear Extension, a New Branch of Mathematics). == Concept == Differential forms provide an approach to multivariable calculus that is independent of coordinates. === Integration and orientation === A differential k-form can be integrated over an oriented manifold of dimension k. 
A differential 1-form can be thought of as measuring an infinitesimal oriented length, or 1-dimensional oriented density. A differential 2-form can be thought of as measuring an infinitesimal oriented area, or 2-dimensional oriented density. And so on. Integration of differential forms is well-defined only on oriented manifolds. An example of a 1-dimensional manifold is an interval [a, b], and intervals can be given an orientation: they are positively oriented if a < b, and negatively oriented otherwise. If a < b then the integral of the differential 1-form f(x) dx over the interval [a, b] (with its natural positive orientation) is ∫ a b f ( x ) d x {\displaystyle \int _{a}^{b}f(x)\,dx} which is the negative of the integral of the same differential form over the same interval, when equipped with the opposite orientation. That is: ∫ b a f ( x ) d x = − ∫ a b f ( x ) d x . {\displaystyle \int _{b}^{a}f(x)\,dx=-\int _{a}^{b}f(x)\,dx.} This gives a geometrical context to the conventions for one-dimensional integrals, that the sign changes when the orientation of the interval is reversed. A standard explanation of this in one-variable integration theory is that, when the limits of integration are in the opposite order (b < a), the increment dx is negative in the direction of integration. More generally, an m-form is an oriented density that can be integrated over an m-dimensional oriented manifold. (For example, a 1-form can be integrated over an oriented curve, a 2-form can be integrated over an oriented surface, etc.) If M is an oriented m-dimensional manifold, and M′ is the same manifold with opposite orientation and ω is an m-form, then one has: ∫ M ω = − ∫ M ′ ω . {\displaystyle \int _{M}\omega =-\int _{M'}\omega \,.} These conventions correspond to interpreting the integrand as a differential form, integrated over a chain. 
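The sign convention for reversed orientation is exactly the familiar rule for swapping the limits of a one-dimensional integral, as a quick symbolic check shows (the integrand x² is an arbitrary illustrative choice):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2  # an arbitrary integrand (illustrative)

forward = sp.integrate(f, (x, 0, 1))  # positively oriented [0, 1]
reverse = sp.integrate(f, (x, 1, 0))  # same interval, opposite orientation

print(forward, reverse)  # 1/3 -1/3
```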
In measure theory, by contrast, one interprets the integrand as a function f with respect to a measure μ and integrates over a subset A, without any notion of orientation; one writes ∫ A f d μ = ∫ [ a , b ] f d μ {\textstyle \int _{A}f\,d\mu =\int _{[a,b]}f\,d\mu } to indicate integration over a subset A. This is a minor distinction in one dimension, but becomes subtler on higher-dimensional manifolds; see below for details. Making the notion of an oriented density precise, and thus of a differential form, involves the exterior algebra. The differentials of a set of coordinates, dx1, ..., dxn can be used as a basis for all 1-forms. Each of these represents a covector at each point on the manifold that may be thought of as measuring a small displacement in the corresponding coordinate direction. A general 1-form is a linear combination of these differentials at every point on the manifold: f 1 d x 1 + ⋯ + f n d x n , {\displaystyle f_{1}\,dx^{1}+\cdots +f_{n}\,dx^{n},} where the fk = fk(x1, ... , xn) are functions of all the coordinates. A differential 1-form is integrated along an oriented curve as a line integral. The expressions dxi ∧ dxj, where i < j can be used as a basis at every point on the manifold for all 2-forms. This may be thought of as an infinitesimal oriented square parallel to the xi–xj-plane. A general 2-form is a linear combination of these at every point on the manifold: ∑ 1 ≤ i < j ≤ n f i , j d x i ∧ d x j {\textstyle \sum _{1\leq i<j\leq n}f_{i,j}\,dx^{i}\wedge dx^{j}} , and it is integrated just like a surface integral. A fundamental operation defined on differential forms is the exterior product (the symbol is the wedge ∧). This is similar to the cross product from vector calculus, in that it is an alternating product. 
For instance, d x 1 ∧ d x 2 = − d x 2 ∧ d x 1 {\displaystyle dx^{1}\wedge dx^{2}=-dx^{2}\wedge dx^{1}} because the square whose first side is dx1 and second side is dx2 is to be regarded as having the opposite orientation as the square whose first side is dx2 and whose second side is dx1. This is why we only need to sum over expressions dxi ∧ dxj, with i < j; for example: a(dxi ∧ dxj) + b(dxj ∧ dxi) = (a − b) dxi ∧ dxj. The exterior product allows higher-degree differential forms to be built out of lower-degree ones, in much the same way that the cross product in vector calculus allows one to compute the area vector of a parallelogram from vectors pointing up the two sides. Alternating also implies that dxi ∧ dxi = 0, in the same way that the cross product of parallel vectors, whose magnitude is the area of the parallelogram spanned by those vectors, is zero. In higher dimensions, dxi1 ∧ ⋅⋅⋅ ∧ dxim = 0 if any two of the indices i1, ..., im are equal, in the same way that the "volume" enclosed by a parallelotope whose edge vectors are linearly dependent is zero. === Multi-index notation === A common notation for the wedge product of elementary k-forms is so called multi-index notation: in an n-dimensional context, for I = ( i 1 , i 2 , … , i k ) , 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n {\displaystyle I=(i_{1},i_{2},\ldots ,i_{k}),1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n} , we define d x I := d x i 1 ∧ ⋯ ∧ d x i k = ⋀ i ∈ I d x i {\textstyle dx^{I}:=dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}=\bigwedge _{i\in I}dx^{i}} . Another useful notation is obtained by defining the set of all strictly increasing multi-indices of length k, in a space of dimension n, denoted J k , n := { I = ( i 1 , … , i k ) : 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n } {\displaystyle {\mathcal {J}}_{k,n}:=\{I=(i_{1},\ldots ,i_{k}):1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n\}} . 
Then locally (wherever the coordinates apply), { d x I } I ∈ J k , n {\displaystyle \{dx^{I}\}_{I\in {\mathcal {J}}_{k,n}}} spans the space of differential k-forms in a manifold M of dimension n, when viewed as a module over the ring C∞(M) of smooth functions on M. By calculating the size of J k , n {\displaystyle {\mathcal {J}}_{k,n}} combinatorially, the dimension of the module of k-forms on an n-dimensional manifold, and in general of the space of k-covectors on an n-dimensional vector space, is n choose k: | J k , n | = ( n k ) {\textstyle |{\mathcal {J}}_{k,n}|={\binom {n}{k}}} . This also demonstrates that there are no nonzero differential forms of degree greater than the dimension of the underlying manifold. === The exterior derivative === In addition to the exterior product, there is also the exterior derivative operator d. The exterior derivative of a differential form is a generalization of the differential of a function, in the sense that the exterior derivative of f ∈ C∞(M) = Ω0(M) is exactly the differential of f. When generalized to higher forms, if ω = f dxI is a simple k-form, then its exterior derivative dω is a (k + 1)-form defined by taking the differential of the coefficient functions: d ω = ∑ i = 1 n ∂ f ∂ x i d x i ∧ d x I . 
{\displaystyle d\omega =\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,dx^{i}\wedge dx^{I}.} with extension to general k-forms through linearity: if τ = ∑ I ∈ J k , n a I d x I ∈ Ω k ( M ) {\textstyle \tau =\sum _{I\in {\mathcal {J}}_{k,n}}a_{I}\,dx^{I}\in \Omega ^{k}(M)} , then its exterior derivative is d τ = ∑ I ∈ J k , n ( ∑ j = 1 n ∂ a I ∂ x j d x j ) ∧ d x I ∈ Ω k + 1 ( M ) {\displaystyle d\tau =\sum _{I\in {\mathcal {J}}_{k,n}}\left(\sum _{j=1}^{n}{\frac {\partial a_{I}}{\partial x^{j}}}\,dx^{j}\right)\wedge dx^{I}\in \Omega ^{k+1}(M)} In R3, with the Hodge star operator, the exterior derivative corresponds to gradient, curl, and divergence, although this correspondence, like the cross product, does not generalize to higher dimensions, and should be treated with some caution. The exterior derivative itself applies in an arbitrary finite number of dimensions, and is a flexible and powerful tool with wide application in differential geometry, differential topology, and many areas in physics. Of note, although the above definition of the exterior derivative was defined with respect to local coordinates, it can be defined in an entirely coordinate-free manner, as an antiderivation of degree 1 on the exterior algebra of differential forms. The benefit of this more general approach is that it allows for a natural coordinate-free approach to integrate on manifolds. It also allows for a natural generalization of the fundamental theorem of calculus, called the (generalized) Stokes' theorem, which is a central result in the theory of integration on manifolds. === Differential calculus === Let U be an open set in Rn. A differential 0-form ("zero-form") is defined to be a smooth function f on U – the set of which is denoted C∞(U). If v is any vector in Rn, then f has a directional derivative ∂v f, which is another function on U whose value at a point p ∈ U is the rate of change (at p) of f in the v direction: ( ∂ v f ) ( p ) = d d t f ( p + t v ) | t = 0 . 
{\displaystyle (\partial _{\mathbf {v} }f)(p)=\left.{\frac {d}{dt}}f(p+t\mathbf {v} )\right|_{t=0}.} (This notion can be extended pointwise to the case that v is a vector field on U by evaluating v at the point p in the definition.) In particular, if v = ej is the jth coordinate vector then ∂v f is the partial derivative of f with respect to the jth coordinate vector, i.e., ∂f / ∂xj, where x1, x2, ..., xn are the coordinate vectors in U. By their very definition, partial derivatives depend upon the choice of coordinates: if new coordinates y1, y2, ..., yn are introduced, then ∂ f ∂ x j = ∑ i = 1 n ∂ y i ∂ x j ∂ f ∂ y i . {\displaystyle {\frac {\partial f}{\partial x^{j}}}=\sum _{i=1}^{n}{\frac {\partial y^{i}}{\partial x^{j}}}{\frac {\partial f}{\partial y^{i}}}.} The first idea leading to differential forms is the observation that ∂v f (p) is a linear function of v: ( ∂ v + w f ) ( p ) = ( ∂ v f ) ( p ) + ( ∂ w f ) ( p ) ( ∂ c v f ) ( p ) = c ( ∂ v f ) ( p ) {\displaystyle {\begin{aligned}(\partial _{\mathbf {v} +\mathbf {w} }f)(p)&=(\partial _{\mathbf {v} }f)(p)+(\partial _{\mathbf {w} }f)(p)\\(\partial _{c\mathbf {v} }f)(p)&=c(\partial _{\mathbf {v} }f)(p)\end{aligned}}} for any vectors v, w and any real number c. At each point p, this linear map from Rn to R is denoted dfp and called the derivative or differential of f at p. Thus dfp(v) = ∂v f (p). Extended over the whole set, the object df can be viewed as a function that takes a vector field on U, and returns a real-valued function whose value at each point is the derivative along the vector field of the function f. Note that at each p, the differential dfp is not a real number, but a linear functional on tangent vectors, and a prototypical example of a differential 1-form. Since any vector v is a linear combination Σ vjej of its components, df is uniquely determined by dfp(ej) for each j and each p ∈ U, which are just the partial derivatives of f on U. 
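The defining limit of the directional derivative, its agreement with the differential applied to a vector, and its linearity in v can all be checked symbolically. In this sketch the function f, the point p, and the directions v, w are illustrative assumptions:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

f = x**2 * y + sp.sin(y)  # a sample smooth function on U (illustrative)

def directional(p, v):
    """(d_v f)(p) = d/dt f(p + t*v) evaluated at t = 0."""
    shifted = f.subs({x: p[0] + t * v[0], y: p[1] + t * v[1]})
    return sp.diff(shifted, t).subs(t, 0)

p, v, w = (1, 2), (3, -1), (0, 1)

# Agreement with df_p(v), i.e. the sum of partial derivatives weighted by v^j
dfp_v = sp.diff(f, x).subs({x: 1, y: 2}) * v[0] \
      + sp.diff(f, y).subs({x: 1, y: 2}) * v[1]
print(sp.simplify(directional(p, v) - dfp_v))  # 0

# Linearity in the direction vector
lhs = directional(p, (v[0] + w[0], v[1] + w[1]))
print(sp.simplify(lhs - directional(p, v) - directional(p, w)))  # 0
```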
Thus df provides a way of encoding the partial derivatives of f. It can be decoded by noticing that the coordinates x1, x2, ..., xn are themselves functions on U, and so define differential 1-forms dx1, dx2, ..., dxn. Let f = xi. Since ∂xi / ∂xj = δij, the Kronecker delta function, it follows that d f = ∑ i = 1 n ∂ f ∂ x i d x i . ( ∗ ) {\displaystyle df=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,dx^{i}.\qquad (*)} The meaning of this expression is given by evaluating both sides at an arbitrary point p: on the right hand side, the sum is defined "pointwise", so that d f p = ∑ i = 1 n ∂ f ∂ x i ( p ) ( d x i ) p . {\displaystyle df_{p}=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}(p)(dx^{i})_{p}.} Applying both sides to ej, the result on each side is the jth partial derivative of f at p. Since p and j were arbitrary, this proves the formula (*). More generally, for any smooth functions gi and hi on U, we define the differential 1-form α = Σi gi dhi pointwise by α p = ∑ i g i ( p ) ( d h i ) p {\displaystyle \alpha _{p}=\sum _{i}g_{i}(p)(dh_{i})_{p}} for each p ∈ U. Any differential 1-form arises this way, and by using (*) it follows that any differential 1-form α on U may be expressed in coordinates as α = ∑ i = 1 n f i d x i {\displaystyle \alpha =\sum _{i=1}^{n}f_{i}\,dx^{i}} for some smooth functions fi on U. The second idea leading to differential forms arises from the following question: given a differential 1-form α on U, when does there exist a function f on U such that α = df? The above expansion reduces this question to the search for a function f whose partial derivatives ∂f / ∂xi are equal to n given functions fi. For n > 1, such a function does not always exist: any smooth function f satisfies ∂ 2 f ∂ x i ∂ x j = ∂ 2 f ∂ x j ∂ x i , {\displaystyle {\frac {\partial ^{2}f}{\partial x^{i}\,\partial x^{j}}}={\frac {\partial ^{2}f}{\partial x^{j}\,\partial x^{i}}},} so it will be impossible to find such an f unless ∂ f j ∂ x i − ∂ f i ∂ x j = 0 {\displaystyle {\frac {\partial f_{j}}{\partial x^{i}}}-{\frac {\partial f_{i}}{\partial x^{j}}}=0} for all i and j. 
The skew-symmetry of the left hand side in i and j suggests introducing an antisymmetric product ∧ on differential 1-forms, the exterior product, so that these equations can be combined into a single condition ∑ i , j = 1 n ∂ f j ∂ x i d x i ∧ d x j = 0 , {\displaystyle \sum _{i,j=1}^{n}{\frac {\partial f_{j}}{\partial x^{i}}}\,dx^{i}\wedge dx^{j}=0,} where ∧ is defined so that: d x i ∧ d x j = − d x j ∧ d x i . {\displaystyle dx^{i}\wedge dx^{j}=-dx^{j}\wedge dx^{i}.} This is an example of a differential 2-form. This 2-form is called the exterior derivative dα of α = ∑nj=1 fj dxj. It is given by d α = ∑ j = 1 n d f j ∧ d x j = ∑ i , j = 1 n ∂ f j ∂ x i d x i ∧ d x j . {\displaystyle d\alpha =\sum _{j=1}^{n}df_{j}\wedge dx^{j}=\sum _{i,j=1}^{n}{\frac {\partial f_{j}}{\partial x^{i}}}\,dx^{i}\wedge dx^{j}.} To summarize: dα = 0 is a necessary condition for the existence of a function f with α = df. Differential 0-forms, 1-forms, and 2-forms are special cases of differential forms. For each k, there is a space of differential k-forms, which can be expressed in terms of the coordinates as ∑ i 1 , i 2 … i k = 1 n f i 1 i 2 … i k d x i 1 ∧ d x i 2 ∧ ⋯ ∧ d x i k {\displaystyle \sum _{i_{1},i_{2}\ldots i_{k}=1}^{n}f_{i_{1}i_{2}\ldots i_{k}}\,dx^{i_{1}}\wedge dx^{i_{2}}\wedge \cdots \wedge dx^{i_{k}}} for a collection of functions fi1i2⋅⋅⋅ik. Antisymmetry, which was already present for 2-forms, makes it possible to restrict the sum to those sets of indices for which i1 < i2 < ... < ik−1 < ik. Differential forms can be multiplied together using the exterior product, and for any differential k-form α, there is a differential (k + 1)-form dα called the exterior derivative of α. Differential forms, the exterior product and the exterior derivative are independent of a choice of coordinates. Consequently, they may be defined on any smooth manifold M. 
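On R², computing dα for a 1-form reduces to the single coefficient ∂f2/∂x − ∂f1/∂y of dx ∧ dy, so the necessary condition dα = 0 is easy to check symbolically; a minimal sketch using sympy (the example forms are my own choices):

```python
import sympy as sp

x, y = sp.symbols('x y')

def d_of_1form(f1, f2):
    # For alpha = f1 dx + f2 dy on R^2, the exterior derivative is
    # d(alpha) = (df2/dx - df1/dy) dx ^ dy; return that single coefficient.
    return sp.simplify(sp.diff(f2, x) - sp.diff(f1, y))

# alpha = y dx + x dy is closed (it equals d(xy)):
assert d_of_1form(y, x) == 0

# beta = -y dx + x dy is not closed: d(beta) = 2 dx ^ dy.
assert d_of_1form(-y, x) == 2

# For any 0-form f, d(df) = 0, which is exactly the symmetry of mixed partials:
f = sp.sin(x * y) + x ** 3
assert d_of_1form(sp.diff(f, x), sp.diff(f, y)) == 0
```

The last assertion is the n = 2 instance of the obstruction described above: a 1-form can be df only if its exterior derivative vanishes.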
One way to do this is to cover M with coordinate charts and to define a differential k-form on M to be a family of differential k-forms on each chart which agree on the overlaps. However, there are more intrinsic definitions which make the independence of coordinates manifest. == Intrinsic definitions == Let M be a smooth manifold. A smooth differential form of degree k is a smooth section of the kth exterior power of the cotangent bundle of M. The set of all differential k-forms on a manifold M is a vector space, often denoted Ω k ( M ) {\displaystyle \Omega ^{k}(M)} . The definition of a differential form may be restated as follows. At any point p ∈ M {\displaystyle p\in M} , a k-form β {\displaystyle \beta } defines an element β p ∈ ⋀ k T p ∗ M , {\displaystyle \beta _{p}\in {\textstyle \bigwedge }^{k}T_{p}^{*}M,} where T p M {\displaystyle T_{p}M} is the tangent space to M at p and T p ∗ ( M ) {\displaystyle T_{p}^{*}(M)} is its dual space. This space is naturally isomorphic to the fiber at p of the dual bundle of the kth exterior power of the tangent bundle of M. That is, β p {\displaystyle \beta _{p}} is also a linear functional β p : ⋀ k T p M → R {\textstyle \beta _{p}\colon {\textstyle \bigwedge }^{k}T_{p}M\to \mathbf {R} } , i.e. the dual of the kth exterior power is isomorphic to the kth exterior power of the dual: ⋀ k T p ∗ M ≅ ( ⋀ k T p M ) ∗ {\displaystyle {\textstyle \bigwedge }^{k}T_{p}^{*}M\cong {\Big (}{\textstyle \bigwedge }^{k}T_{p}M{\Big )}^{*}} By the universal property of exterior powers, this is equivalently an alternating multilinear map: β p : ⨁ n = 1 k T p M → R . {\displaystyle \beta _{p}\colon \bigoplus _{n=1}^{k}T_{p}M\to \mathbf {R} .} Consequently, a differential k-form may be evaluated against any k-tuple of tangent vectors to the same point p of M. For example, a differential 1-form α assigns to each point p ∈ M {\displaystyle p\in M} a linear functional αp on T p M {\displaystyle T_{p}M} .
In the presence of an inner product on T p M {\displaystyle T_{p}M} (induced by a Riemannian metric on M), αp may be represented as the inner product with a tangent vector X p {\displaystyle X_{p}} . Differential 1-forms are sometimes called covariant vector fields, covector fields, or "dual vector fields", particularly within physics. The exterior algebra may be embedded in the tensor algebra by means of the alternation map. The alternation map is defined as a mapping Alt : ⨂ k T ∗ M → ⨂ k T ∗ M . {\displaystyle \operatorname {Alt} \colon {\bigotimes }^{k}T^{*}M\to {\bigotimes }^{k}T^{*}M.} For a tensor τ {\displaystyle \tau } at a point p, Alt ⁡ ( τ p ) ( x 1 , … , x k ) = 1 k ! ∑ σ ∈ S k sgn ⁡ ( σ ) τ p ( x σ ( 1 ) , … , x σ ( k ) ) , {\displaystyle \operatorname {Alt} (\tau _{p})(x_{1},\dots ,x_{k})={\frac {1}{k!}}\sum _{\sigma \in S_{k}}\operatorname {sgn}(\sigma )\tau _{p}(x_{\sigma (1)},\dots ,x_{\sigma (k)}),} where Sk is the symmetric group on k elements. The alternation map is constant on the cosets of the ideal in the tensor algebra generated by the symmetric 2-forms, and therefore descends to an embedding Alt : ⋀ k T ∗ M → ⨂ k T ∗ M . {\displaystyle \operatorname {Alt} \colon {\textstyle \bigwedge }^{k}T^{*}M\to {\bigotimes }^{k}T^{*}M.} This map exhibits β {\displaystyle \beta } as a totally antisymmetric covariant tensor field of rank k. The differential forms on M are in one-to-one correspondence with such tensor fields. == Operations == As well as the addition and multiplication by scalar operations which arise from the vector space structure, there are several other standard operations defined on differential forms. 
The most important operations are the exterior product of two differential forms, the exterior derivative of a single differential form, the interior product of a differential form and a vector field, the Lie derivative of a differential form with respect to a vector field and the covariant derivative of a differential form with respect to a vector field on a manifold with a defined connection. === Exterior product === The exterior product of a k-form α and an ℓ-form β, denoted α ∧ β, is a (k + ℓ)-form. At each point p of the manifold M, the forms α and β are elements of an exterior power of the cotangent space at p. When the exterior algebra is viewed as a quotient of the tensor algebra, the exterior product corresponds to the tensor product (modulo the equivalence relation defining the exterior algebra). The antisymmetry inherent in the exterior algebra means that when α ∧ β is viewed as a multilinear functional, it is alternating. However, when the exterior algebra is embedded as a subspace of the tensor algebra by means of the alternation map, the tensor product α ⊗ β is not alternating. There is an explicit formula which describes the exterior product in this situation. The exterior product is α ∧ β = Alt ⁡ ( α ⊗ β ) . {\displaystyle \alpha \wedge \beta =\operatorname {Alt} (\alpha \otimes \beta ).} If the embedding of ⋀ n T ∗ M {\displaystyle {\textstyle \bigwedge }^{n}T^{*}M} into ⨂ n T ∗ M {\displaystyle {\bigotimes }^{n}T^{*}M} is done via the map n ! Alt {\displaystyle n!\operatorname {Alt} } instead of Alt {\displaystyle \operatorname {Alt} } , the exterior product is α ∧ β = ( k + ℓ ) ! k ! ℓ ! Alt ⁡ ( α ⊗ β ) . {\displaystyle \alpha \wedge \beta ={\frac {(k+\ell )!}{k!\ell !}}\operatorname {Alt} (\alpha \otimes \beta ).} This description is useful for explicit computations. 
For example, if k = ℓ = 1, then α ∧ β is the 2-form whose value at a point p is the alternating bilinear form defined by ( α ∧ β ) p ( v , w ) = α p ( v ) β p ( w ) − α p ( w ) β p ( v ) {\displaystyle (\alpha \wedge \beta )_{p}(v,w)=\alpha _{p}(v)\beta _{p}(w)-\alpha _{p}(w)\beta _{p}(v)} for v, w ∈ TpM. The exterior product is bilinear: If α, β, and γ are any differential forms, and if f is any smooth function, then α ∧ ( β + γ ) = α ∧ β + α ∧ γ , {\displaystyle \alpha \wedge (\beta +\gamma )=\alpha \wedge \beta +\alpha \wedge \gamma ,} α ∧ ( f ⋅ β ) = f ⋅ ( α ∧ β ) . {\displaystyle \alpha \wedge (f\cdot \beta )=f\cdot (\alpha \wedge \beta ).} It is skew commutative (also known as graded commutative), meaning that it satisfies a variant of anticommutativity that depends on the degrees of the forms: if α is a k-form and β is an ℓ-form, then α ∧ β = ( − 1 ) k ℓ β ∧ α . {\displaystyle \alpha \wedge \beta =(-1)^{k\ell }\beta \wedge \alpha .} One also has the graded Leibniz rule: d ( α ∧ β ) = d α ∧ β + ( − 1 ) k α ∧ d β . {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta .} === Riemannian manifold === On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, the metric defines a fibre-wise isomorphism of the tangent and cotangent bundles. This makes it possible to convert vector fields to covector fields and vice versa. It also enables the definition of additional operations such as the Hodge star operator ⋆ : Ω k ( M ) → ∼ Ω n − k ( M ) {\displaystyle \star \colon \Omega ^{k}(M)\ {\stackrel {\sim }{\to }}\ \Omega ^{n-k}(M)} and the codifferential δ : Ω k ( M ) → Ω k − 1 ( M ) {\displaystyle \delta \colon \Omega ^{k}(M)\rightarrow \Omega ^{k-1}(M)} , which has degree −1 and is adjoint to the exterior differential d. 
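The pointwise formula for the wedge of two 1-forms can be verified numerically, along with its alternating and graded-commutative behavior; a small sketch with covectors represented as numpy arrays (all the vectors are arbitrary illustrative choices):

```python
import numpy as np

def wedge_1forms(alpha, beta):
    # For 1-forms (covectors) alpha, beta on R^n, the 2-form alpha ^ beta is the
    # bilinear form (v, w) -> alpha(v) beta(w) - alpha(w) beta(v); as a matrix:
    return np.outer(alpha, beta) - np.outer(beta, alpha)

alpha = np.array([1.0, 2.0, 0.0])
beta = np.array([0.0, 1.0, 3.0])
v = np.array([1.0, 0.0, 2.0])
w = np.array([2.0, 1.0, 1.0])

ab = wedge_1forms(alpha, beta)
val = v @ ab @ w  # evaluate the 2-form on the pair (v, w)

# Matches the pointwise formula ...
assert np.isclose(val, (alpha @ v) * (beta @ w) - (alpha @ w) * (beta @ v))
# ... is alternating in its arguments ...
assert np.isclose(v @ ab @ w, -(w @ ab @ v))
assert np.isclose(v @ ab @ v, 0.0)
# ... and is graded-commutative: for two 1-forms, alpha ^ beta = -(beta ^ alpha).
assert np.allclose(ab, -wedge_1forms(beta, alpha))
```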
==== Vector field structures ==== On a pseudo-Riemannian manifold, 1-forms can be identified with vector fields; vector fields have additional distinct algebraic structures, which are listed here for context and to avoid confusion. Firstly, each (co)tangent space generates a Clifford algebra, where the product of a (co)vector with itself is given by the value of a quadratic form – in this case, the natural one induced by the metric. This algebra is distinct from the exterior algebra of differential forms, which can be viewed as a Clifford algebra where the quadratic form vanishes (since the exterior product of any vector with itself is zero). Clifford algebras are thus non-anticommutative ("quantum") deformations of the exterior algebra. They are studied in geometric algebra. Another alternative is to consider vector fields as derivations. The (noncommutative) algebra of differential operators they generate is the Weyl algebra and is a noncommutative ("quantum") deformation of the symmetric algebra in the vector fields. === Exterior differential complex === One important property of the exterior derivative is that d2 = 0. This means that the exterior derivative defines a cochain complex: 0 → Ω 0 ( M ) → d Ω 1 ( M ) → d Ω 2 ( M ) → d Ω 3 ( M ) → ⋯ → Ω n ( M ) → 0. {\displaystyle 0\ \to \ \Omega ^{0}(M)\ {\stackrel {d}{\to }}\ \Omega ^{1}(M)\ {\stackrel {d}{\to }}\ \Omega ^{2}(M)\ {\stackrel {d}{\to }}\ \Omega ^{3}(M)\ \to \ \cdots \ \to \ \Omega ^{n}(M)\ \to \ 0.} This complex is called the de Rham complex, and its cohomology is by definition the de Rham cohomology of M. By the Poincaré lemma, the de Rham complex is locally exact except at Ω0(M). The kernel at Ω0(M) is the space of locally constant functions on M. Therefore, the complex is a resolution of the constant sheaf R, which in turn implies a form of de Rham's theorem: de Rham cohomology computes the sheaf cohomology of R. == Pullback == Suppose that f : M → N is smooth. 
The differential of f is a smooth map df : TM → TN between the tangent bundles of M and N. This map is also denoted f∗ and called the pushforward. For any point p ∈ M and any tangent vector v ∈ TpM, there is a well-defined pushforward vector f∗(v) in Tf(p)N. However, the same is not true of a vector field. If f is not injective, say because q ∈ N has two or more preimages, then the vector field may determine two or more distinct vectors in TqN. If f is not surjective, then there will be a point q ∈ N at which f∗ does not determine any tangent vector at all. Since a vector field on N determines, by definition, a unique tangent vector at every point of N, the pushforward of a vector field does not always exist. By contrast, it is always possible to pull back a differential form. A differential form on N may be viewed as a linear functional on each tangent space. Precomposing this functional with the differential df : TM → TN defines a linear functional on each tangent space of M and therefore a differential form on M. The existence of pullbacks is one of the key features of the theory of differential forms. It leads to the existence of pullback maps in other situations, such as pullback homomorphisms in de Rham cohomology. Formally, let f : M → N be smooth, and let ω be a smooth k-form on N. Then there is a differential form f∗ω on M, called the pullback of ω, which captures the behavior of ω as seen relative to f. To define the pullback, fix a point p of M and tangent vectors v1, ..., vk to M at p. The pullback of ω is defined by the formula ( f ∗ ω ) p ( v 1 , … , v k ) = ω f ( p ) ( f ∗ v 1 , … , f ∗ v k ) . {\displaystyle (f^{*}\omega )_{p}(v_{1},\ldots ,v_{k})=\omega _{f(p)}(f_{*}v_{1},\ldots ,f_{*}v_{k}).} There are several more abstract ways to view this definition. If ω is a 1-form on N, then it may be viewed as a section of the cotangent bundle T∗N of N. Using ∗ to denote a dual map, the dual to the differential of f is (df)∗ : T∗N → T∗M. 
The pullback of ω may be defined to be the composite M → f N → ω T ∗ N ⟶ ( d f ) ∗ T ∗ M . {\displaystyle M\ {\stackrel {f}{\to }}\ N\ {\stackrel {\omega }{\to }}\ T^{*}N\ {\stackrel {(df)^{*}}{\longrightarrow }}\ T^{*}M.} This is a section of the cotangent bundle of M and hence a differential 1-form on M. In full generality, let ⋀ k ( d f ) ∗ {\textstyle \bigwedge ^{k}(df)^{*}} denote the kth exterior power of the dual map to the differential. Then the pullback of a k-form ω is the composite M → f N → ω ⋀ k T ∗ N ⟶ ⋀ k ( d f ) ∗ ⋀ k T ∗ M . {\displaystyle M\ {\stackrel {f}{\to }}\ N\ {\stackrel {\omega }{\to }}\ {\textstyle \bigwedge }^{k}T^{*}N\ {\stackrel {{\bigwedge }^{k}(df)^{*}}{\longrightarrow }}\ {\textstyle \bigwedge }^{k}T^{*}M.} Another abstract way to view the pullback comes from viewing a k-form ω as a linear functional on tangent spaces. From this point of view, ω is a morphism of vector bundles ⋀ k T N → ω N × R , {\displaystyle {\textstyle \bigwedge }^{k}TN\ {\stackrel {\omega }{\to }}\ N\times \mathbf {R} ,} where N × R is the trivial rank one bundle on N. The composite map ⋀ k T M ⟶ ⋀ k d f ⋀ k T N → ω N × R {\displaystyle {\textstyle \bigwedge }^{k}TM\ {\stackrel {{\bigwedge }^{k}df}{\longrightarrow }}\ {\textstyle \bigwedge }^{k}TN\ {\stackrel {\omega }{\to }}\ N\times \mathbf {R} } defines a linear functional on each tangent space of M, and therefore it factors through the trivial bundle M × R. The vector bundle morphism ⋀ k T M → M × R {\textstyle {\textstyle \bigwedge }^{k}TM\to M\times \mathbf {R} } defined in this way is f∗ω. Pullback respects all of the basic operations on forms. If ω and η are forms and c is a real number, then f ∗ ( c ω ) = c ( f ∗ ω ) , f ∗ ( ω + η ) = f ∗ ω + f ∗ η , f ∗ ( ω ∧ η ) = f ∗ ω ∧ f ∗ η , f ∗ ( d ω ) = d ( f ∗ ω ) . 
{\displaystyle {\begin{aligned}f^{*}(c\omega )&=c(f^{*}\omega ),\\f^{*}(\omega +\eta )&=f^{*}\omega +f^{*}\eta ,\\f^{*}(\omega \wedge \eta )&=f^{*}\omega \wedge f^{*}\eta ,\\f^{*}(d\omega )&=d(f^{*}\omega ).\end{aligned}}} The pullback of a form can also be written in coordinates. Assume that x1, ..., xm are coordinates on M, that y1, ..., yn are coordinates on N, and that these coordinate systems are related by the formulas yi = fi(x1, ..., xm) for all i. Locally on N, ω can be written as ω = ∑ i 1 < ⋯ < i k ω i 1 ⋯ i k d y i 1 ∧ ⋯ ∧ d y i k , {\displaystyle \omega =\sum _{i_{1}<\cdots <i_{k}}\omega _{i_{1}\cdots i_{k}}\,dy^{i_{1}}\wedge \cdots \wedge dy^{i_{k}},} where, for each choice of i1, ..., ik, ωi1⋅⋅⋅ik is a real-valued function of y1, ..., yn. Using the linearity of pullback and its compatibility with exterior product, the pullback of ω has the formula f ∗ ω = ∑ i 1 < ⋯ < i k ( ω i 1 ⋯ i k ∘ f ) d f i 1 ∧ ⋯ ∧ d f i k . {\displaystyle f^{*}\omega =\sum _{i_{1}<\cdots <i_{k}}(\omega _{i_{1}\cdots i_{k}}\circ f)\,df_{i_{1}}\wedge \cdots \wedge df_{i_{k}}.} Each exterior derivative dfi can be expanded in terms of dx1, ..., dxm. The resulting k-form can be written using Jacobian matrices: f ∗ ω = ∑ i 1 < ⋯ < i k ∑ j 1 < ⋯ < j k ( ω i 1 ⋯ i k ∘ f ) ∂ ( f i 1 , … , f i k ) ∂ ( x j 1 , … , x j k ) d x j 1 ∧ ⋯ ∧ d x j k . {\displaystyle f^{*}\omega =\sum _{i_{1}<\cdots <i_{k}}\sum _{j_{1}<\cdots <j_{k}}(\omega _{i_{1}\cdots i_{k}}\circ f){\frac {\partial (f_{i_{1}},\ldots ,f_{i_{k}})}{\partial (x^{j_{1}},\ldots ,x^{j_{k}})}}\,dx^{j_{1}}\wedge \cdots \wedge dx^{j_{k}}.} Here, ∂ ( f i 1 , … , f i k ) ∂ ( x j 1 , … , x j k ) {\textstyle {\frac {\partial (f_{i_{1}},\ldots ,f_{i_{k}})}{\partial (x^{j_{1}},\ldots ,x^{j_{k}})}}} denotes the determinant of the matrix whose entries are ∂ f i m ∂ x j n {\textstyle {\frac {\partial f_{i_{m}}}{\partial x^{j_{n}}}}} , 1 ≤ m , n ≤ k {\displaystyle 1\leq m,n\leq k} . 
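The coordinate formula for the pullback can be traced through on a concrete example; a sketch with sympy, using the polar-coordinate map as the illustrative f (the function names and the sample 1-form are my own choices):

```python
import sympy as sp

x, y, r, theta = sp.symbols('x y r theta', positive=True)

# The map f : (r, theta) -> (x, y) = (r cos(theta), r sin(theta)).
fx, fy = r * sp.cos(theta), r * sp.sin(theta)

def pullback_1form(w1, w2):
    # For omega = w1(x, y) dx + w2(x, y) dy on the target, the pullback is
    # f*omega = (w1 o f) d(fx) + (w2 o f) d(fy), expanded in dr and dtheta;
    # return the coefficients of dr and dtheta.
    a = w1.subs({x: fx, y: fy})
    b = w2.subs({x: fx, y: fy})
    c_r = sp.simplify(a * sp.diff(fx, r) + b * sp.diff(fy, r))
    c_theta = sp.simplify(a * sp.diff(fx, theta) + b * sp.diff(fy, theta))
    return c_r, c_theta

# omega = -y dx + x dy pulls back to r^2 dtheta, as direct expansion shows.
c_r, c_theta = pullback_1form(-y, x)
assert c_r == 0
assert sp.simplify(c_theta - r ** 2) == 0
```

The two coefficients are exactly the entries of the 1 × 2 Jacobian expression in the displayed formula, specialized to k = 1.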
== Integration == A differential k-form can be integrated over an oriented k-dimensional manifold. When the k-form is defined on an n-dimensional manifold with n > k, then the k-form can be integrated over oriented k-dimensional submanifolds. If k = 0, integration over oriented 0-dimensional submanifolds is just the summation of the integrand evaluated at points, according to the orientation of those points. Other values of k = 1, 2, 3, ... correspond to line integrals, surface integrals, volume integrals, and so on. There are several equivalent ways to formally define the integral of a differential form, all of which depend on reducing to the case of Euclidean space. === Integration on Euclidean space === Let U be an open subset of Rn. Give Rn its standard orientation and U the restriction of that orientation. Every smooth n-form ω on U has the form ω = f ( x ) d x 1 ∧ ⋯ ∧ d x n {\displaystyle \omega =f(x)\,dx^{1}\wedge \cdots \wedge dx^{n}} for some smooth function f : Rn → R. Such a function has an integral in the usual Riemann or Lebesgue sense. This allows us to define the integral of ω to be the integral of f: ∫ U ω = def ∫ U f ( x ) d x 1 ⋯ d x n . {\displaystyle \int _{U}\omega \ {\stackrel {\text{def}}{=}}\int _{U}f(x)\,dx^{1}\cdots dx^{n}.} Fixing an orientation is necessary for this to be well-defined. The skew-symmetry of differential forms means that the integral of, say, dx1 ∧ dx2 must be the negative of the integral of dx2 ∧ dx1. Riemann and Lebesgue integrals cannot see this dependence on the ordering of the coordinates, so they leave the sign of the integral undetermined. The orientation resolves this ambiguity. === Integration over chains === Let M be an n-manifold and ω an n-form on M. First, assume that there is a parametrization of M by an open subset of Euclidean space. That is, assume that there exists a diffeomorphism φ : D → M {\displaystyle \varphi \colon D\to M} where D ⊆ Rn. Give M the orientation induced by φ. 
Then (Rudin 1976) defines the integral of ω over M to be the integral of φ∗ω over D. In coordinates, this has the following expression. Fix an embedding of M in RI with coordinates x1, ..., xI. Then ω = ∑ i 1 < ⋯ < i n a i 1 , … , i n ( x ) d x i 1 ∧ ⋯ ∧ d x i n . {\displaystyle \omega =\sum _{i_{1}<\cdots <i_{n}}a_{i_{1},\ldots ,i_{n}}({\mathbf {x} })\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{n}}.} Suppose that φ is defined by φ ( u ) = ( x 1 ( u ) , … , x I ( u ) ) . {\displaystyle \varphi ({\mathbf {u} })=(x^{1}({\mathbf {u} }),\ldots ,x^{I}({\mathbf {u} })).} Then the integral may be written in coordinates as ∫ M ω = ∫ D ∑ i 1 < ⋯ < i n a i 1 , … , i n ( φ ( u ) ) ∂ ( x i 1 , … , x i n ) ∂ ( u 1 , … , u n ) d u 1 ⋯ d u n , {\displaystyle \int _{M}\omega =\int _{D}\sum _{i_{1}<\cdots <i_{n}}a_{i_{1},\ldots ,i_{n}}(\varphi ({\mathbf {u} })){\frac {\partial (x^{i_{1}},\ldots ,x^{i_{n}})}{\partial (u^{1},\dots ,u^{n})}}\,du^{1}\cdots du^{n},} where ∂ ( x i 1 , … , x i n ) ∂ ( u 1 , … , u n ) {\displaystyle {\frac {\partial (x^{i_{1}},\ldots ,x^{i_{n}})}{\partial (u^{1},\ldots ,u^{n})}}} is the determinant of the Jacobian. The Jacobian exists because φ is differentiable. In general, an n-manifold cannot be parametrized by an open subset of Rn. But such a parametrization is always possible locally, so it is possible to define integrals over arbitrary manifolds by defining them as sums of integrals over collections of local parametrizations. Moreover, it is also possible to define parametrizations of k-dimensional subsets for k < n, and this makes it possible to define integrals of k-forms. To make this precise, it is convenient to fix a standard domain D in Rk, usually a cube or a simplex. A k-chain is a formal sum of smooth embeddings D → M. That is, it is a collection of smooth embeddings, each of which is assigned an integer multiplicity. Each smooth embedding determines a k-dimensional submanifold of M. 
If the chain is c = ∑ i = 1 r m i φ i , {\displaystyle c=\sum _{i=1}^{r}m_{i}\varphi _{i},} then the integral of a k-form ω over c is defined to be the sum of the integrals over the terms of c: ∫ c ω = ∑ i = 1 r m i ∫ D φ i ∗ ω . {\displaystyle \int _{c}\omega =\sum _{i=1}^{r}m_{i}\int _{D}\varphi _{i}^{*}\omega .} This approach to defining integration does not assign a direct meaning to integration over the whole manifold M. However, it is still possible to assign such a meaning indirectly because every smooth manifold may be smoothly triangulated in an essentially unique way, and the integral over M may be defined to be the integral over the chain determined by a triangulation. === Integration using partitions of unity === There is another approach, expounded in (Dieudonné 1972), which does directly assign a meaning to integration over M, but this approach requires fixing an orientation of M. The integral of an n-form ω on an n-dimensional manifold is defined by working in charts. Suppose first that ω is supported on a single positively oriented chart. On this chart, it may be pulled back to an n-form on an open subset of Rn. Here, the form has a well-defined Riemann or Lebesgue integral as before. The change of variables formula and the assumption that the chart is positively oriented together ensure that the integral of ω is independent of the chosen chart. In the general case, use a partition of unity to write ω as a sum of n-forms, each of which is supported in a single positively oriented chart, and define the integral of ω to be the sum of the integrals of each term in the partition of unity. It is also possible to integrate k-forms on oriented k-dimensional submanifolds using this more intrinsic approach. The form is pulled back to the submanifold, where the integral is defined using charts as before. 
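A chain integral of this kind can be evaluated directly by pulling each term back to the standard interval [0, 1]; a minimal numeric sketch for the 1-form ω = x dy (the paths and multiplicities are illustrative choices, and the quadrature routine is scipy's):

```python
from scipy.integrate import quad

def integrate_over_path(x_of_t, dy_dt):
    # Pull omega = x dy back along t -> (x(t), y(t)) on [0, 1]: the pullback is
    # x(t) y'(t) dt, an ordinary one-variable integral.
    return quad(lambda t: x_of_t(t) * dy_dt(t), 0.0, 1.0)[0]

# Two smooth paths: phi1(t) = (t, t^2) and phi2(t) = (t, t).
I1 = integrate_over_path(lambda t: t, lambda t: 2 * t)  # integral over phi1 = 2/3
I2 = integrate_over_path(lambda t: t, lambda t: 1.0)    # integral over phi2 = 1/2

# The chain c = 2*phi1 - phi2 integrates omega with the given multiplicities:
integral_over_c = 2 * I1 - I2
assert abs(integral_over_c - 5.0 / 6.0) < 1e-8
```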
For example, given a path γ(t) : [0, 1] → R2, integrating a 1-form on the path is simply pulling back the form to a form f(t) dt on [0, 1], and this integral is the integral of the function f(t) on the interval. === Integration along fibers === Fubini's theorem states that the integral over a set that is a product may be computed as an iterated integral over the two factors in the product. This suggests that the integral of a differential form over a product ought to be computable as an iterated integral as well. The geometric flexibility of differential forms ensures that this is possible not just for products, but in more general situations as well. Under some hypotheses, it is possible to integrate along the fibers of a smooth map, and the analog of Fubini's theorem is the case where this map is the projection from a product to one of its factors. Because integrating a differential form over a submanifold requires fixing an orientation, a prerequisite to integration along fibers is the existence of a well-defined orientation on those fibers. Let M and N be two orientable manifolds of pure dimensions m and n, respectively. Suppose that f : M → N is a surjective submersion. This implies that each fiber f−1(y) is (m − n)-dimensional and that, around each point of M, there is a chart on which f looks like the projection from a product onto one of its factors. Fix x ∈ M and set y = f(x). Suppose that ω x ∈ ⋀ m T x ∗ M , η y ∈ ⋀ n T y ∗ N , {\displaystyle {\begin{aligned}\omega _{x}&\in {\textstyle \bigwedge }^{m}T_{x}^{*}M,\\[2pt]\eta _{y}&\in {\textstyle \bigwedge }^{n}T_{y}^{*}N,\end{aligned}}} and that ηy does not vanish. Following (Dieudonné 1972), there is a unique σ x ∈ ⋀ m − n T x ∗ ( f − 1 ( y ) ) {\displaystyle \sigma _{x}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}(f^{-1}(y))} which may be thought of as the fibral part of ωx with respect to ηy. More precisely, define j : f−1(y) → M to be the inclusion. 
Then σx is defined by the property that ω x = ( f ∗ η y ) x ∧ σ x ′ ∈ ⋀ m T x ∗ M , {\displaystyle \omega _{x}=(f^{*}\eta _{y})_{x}\wedge \sigma '_{x}\in {\textstyle \bigwedge }^{m}T_{x}^{*}M,} where σ x ′ ∈ ⋀ m − n T x ∗ M {\displaystyle \sigma '_{x}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}M} is any (m − n)-covector for which σ x = j ∗ σ x ′ . {\displaystyle \sigma _{x}=j^{*}\sigma '_{x}.} The form σx may also be notated ωx / ηy. Moreover, for fixed y, σx varies smoothly with respect to x. That is, suppose that ω : f − 1 ( y ) → T ∗ M {\displaystyle \omega \colon f^{-1}(y)\to T^{*}M} is a smooth section of the projection map; we say that ω is a smooth differential m-form on M along f−1(y). Then there is a smooth differential (m − n)-form σ on f−1(y) such that, at each x ∈ f−1(y), σ x = ω x / η y . {\displaystyle \sigma _{x}=\omega _{x}/\eta _{y}.} This form is denoted ω / ηy. The same construction works if ω is an m-form in a neighborhood of the fiber, and the same notation is used. A consequence is that each fiber f−1(y) is orientable. In particular, a choice of orientation forms on M and N defines an orientation of every fiber of f. The analog of Fubini's theorem is as follows. As before, M and N are two orientable manifolds of pure dimensions m and n, and f : M → N is a surjective submersion. Fix orientations of M and N, and give each fiber of f the induced orientation. Let ω be an m-form on M, and let η be an n-form on N that is almost everywhere positive with respect to the orientation of N. Then, for almost every y ∈ N, the form ω / ηy is a well-defined integrable m − n form on f−1(y). Moreover, there is an integrable n-form on N defined by y ↦ ( ∫ f − 1 ( y ) ω / η y ) η y . {\displaystyle y\mapsto {\bigg (}\int _{f^{-1}(y)}\omega /\eta _{y}{\bigg )}\,\eta _{y}.} Denote this form by ( ∫ f − 1 ( y ) ω / η ) η . 
{\displaystyle {\bigg (}\int _{f^{-1}(y)}\omega /\eta {\bigg )}\,\eta .} Then (Dieudonné 1972) proves the generalized Fubini formula ∫ M ω = ∫ N ( ∫ f − 1 ( y ) ω / η ) η . {\displaystyle \int _{M}\omega =\int _{N}{\bigg (}\int _{f^{-1}(y)}\omega /\eta {\bigg )}\,\eta .} It is also possible to integrate forms of other degrees along the fibers of a submersion. Assume the same hypotheses as before, and let α be a compactly supported (m − n + k)-form on M. Then there is a k-form γ on N which is the result of integrating α along the fibers of f. The form γ is defined by specifying, at each y ∈ N, how γ pairs with each k-vector v at y, and the value of that pairing is an integral over f−1(y) that depends only on α, v, and the orientations of M and N. More precisely, at each y ∈ N, there is an isomorphism ⋀ k T y N → ⋀ n − k T y ∗ N {\displaystyle {\textstyle \bigwedge }^{k}T_{y}N\to {\textstyle \bigwedge }^{n-k}T_{y}^{*}N} defined by the interior product v ↦ v ⌟ ζ y , {\displaystyle \mathbf {v} \mapsto \mathbf {v} \,\lrcorner \,\zeta _{y},} for any choice of volume form ζ in the orientation of N. If x ∈ f−1(y), then a k-vector v at y determines an (n − k)-covector at x by pullback: f ∗ ( v ⌟ ζ y ) ∈ ⋀ n − k T x ∗ M . {\displaystyle f^{*}(\mathbf {v} \,\lrcorner \,\zeta _{y})\in {\textstyle \bigwedge }^{n-k}T_{x}^{*}M.} Each of these covectors has an exterior product against α, so there is an (m − n)-form βv on M along f−1(y) defined by ( β v ) x = ( α x ∧ f ∗ ( v ⌟ ζ y ) ) / ζ y ∈ ⋀ m − n T x ∗ M . {\displaystyle (\beta _{\mathbf {v} })_{x}=\left(\alpha _{x}\wedge f^{*}(\mathbf {v} \,\lrcorner \,\zeta _{y})\right){\big /}\zeta _{y}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}M.} This form depends on the orientation of N but not the choice of ζ. Then the k-form γ is uniquely defined by the property ⟨ γ y , v ⟩ = ∫ f − 1 ( y ) β v , {\displaystyle \langle \gamma _{y},\mathbf {v} \rangle =\int _{f^{-1}(y)}\beta _{\mathbf {v} },} and γ is smooth (Dieudonné 1972).
This form is also denoted α♭ and called the integral of α along the fibers of f. Integration along fibers is important for the construction of Gysin maps in de Rham cohomology. Integration along fibers satisfies the projection formula (Dieudonné 1972). If λ is any ℓ-form on N, then α ♭ ∧ λ = ( α ∧ f ∗ λ ) ♭ . {\displaystyle \alpha ^{\flat }\wedge \lambda =(\alpha \wedge f^{*}\lambda )^{\flat }.} === Stokes's theorem === The fundamental relationship between the exterior derivative and integration is given by Stokes's theorem: if ω is an (n − 1)-form with compact support on M and ∂M denotes the boundary of M with its induced orientation, then ∫ M d ω = ∫ ∂ M ω . {\displaystyle \int _{M}d\omega =\int _{\partial M}\omega .} A key consequence of this is that "the integral of a closed form over homologous chains is equal": If ω is a closed k-form and M and N are k-chains that are homologous (such that M − N is the boundary of a (k + 1)-chain W), then ∫ M ω = ∫ N ω {\displaystyle \textstyle {\int _{M}\omega =\int _{N}\omega }} , since the difference is the integral ∫ W d ω = ∫ W 0 = 0 {\displaystyle \textstyle \int _{W}d\omega =\int _{W}0=0} . For example, if ω = df is the derivative of a potential function on the plane or Rn, then the integral of ω over a path from a to b does not depend on the choice of path (the integral is f(b) − f(a)), since different paths with given endpoints are homotopic, hence homologous (a weaker condition). This case is called the gradient theorem, and generalizes the fundamental theorem of calculus. This path independence is very useful in contour integration. This theorem also underlies the duality between de Rham cohomology and the homology of chains.
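Stokes's theorem can be spot-checked numerically in the classical Green's theorem case, comparing the integral of dω over the unit square with the integral of ω over its counterclockwise boundary; a small sketch (the 1-form and the scipy quadrature routines are my choices):

```python
from scipy.integrate import quad, dblquad

# omega = -y dx + x dy on R^2, so d(omega) = 2 dx ^ dy.
# Stokes: integral of d(omega) over the unit square
#       = integral of omega over its counterclockwise boundary.

lhs = dblquad(lambda y, x: 2.0, 0.0, 1.0, 0.0, 1.0)[0]  # integral of d(omega)

def edge(x, y, dx, dy):
    # Integral of omega along the parametrized edge t -> (x(t), y(t)), t in [0, 1].
    return quad(lambda t: -y(t) * dx(t) + x(t) * dy(t), 0.0, 1.0)[0]

rhs = (edge(lambda t: t,       lambda t: 0.0,     lambda t: 1.0,  lambda t: 0.0)    # bottom
     + edge(lambda t: 1.0,     lambda t: t,       lambda t: 0.0,  lambda t: 1.0)    # right
     + edge(lambda t: 1.0 - t, lambda t: 1.0,     lambda t: -1.0, lambda t: 0.0)    # top
     + edge(lambda t: 0.0,     lambda t: 1.0 - t, lambda t: 0.0,  lambda t: -1.0))  # left

assert abs(lhs - 2.0) < 1e-10
assert abs(lhs - rhs) < 1e-10
```

Reversing the orientation of the boundary would flip the sign of the right-hand side, consistent with the orientation conventions discussed above.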
=== Relation with measures === On a general differentiable manifold (without additional structure), differential forms cannot be integrated over subsets of the manifold; this is key to the distinction between differential forms, which are integrated over chains or oriented submanifolds, and measures, which are integrated over subsets. The simplest example is attempting to integrate the 1-form dx over the interval [0, 1]. Assuming the usual distance (and thus measure) on the real line, this integral is either 1 or −1, depending on orientation: ∫ 0 1 d x = 1 {\textstyle \int _{0}^{1}dx=1} , while ∫ 1 0 d x = − ∫ 0 1 d x = − 1 {\textstyle \int _{1}^{0}dx=-\int _{0}^{1}dx=-1} . By contrast, the integral of the measure |dx| on the interval is unambiguously 1 (i.e. the integral of the constant function 1 with respect to this measure is 1). Similarly, under a change of coordinates a differential n-form changes by the Jacobian determinant J, while a measure changes by the absolute value of the Jacobian determinant, |J|, which further reflects the issue of orientation. For example, under the map x ↦ −x on the line, the differential form dx pulls back to −dx, so the orientation is reversed, while the Lebesgue measure, which here we denote |dx|, pulls back to |dx| and does not change. In the presence of the additional data of an orientation, it is possible to integrate n-forms (top-dimensional forms) over the entire manifold or over compact subsets; integration over the entire manifold corresponds to integrating the form over the fundamental class of the manifold, [M]. Formally, in the presence of an orientation, one may identify n-forms with densities on a manifold; densities in turn define a measure, and thus can be integrated (Folland 1999, Section 11.4, pp. 361–362). On an orientable but not oriented manifold, there are two choices of orientation; either choice allows one to integrate n-forms over compact subsets, with the two choices differing by a sign.
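The sign behavior described above can be reproduced symbolically: the oriented integral of a form transforms with the Jacobian J itself, while a measure transforms with |J|; a minimal sympy sketch of the one-dimensional case:

```python
import sympy as sp

x, u = sp.symbols('x u')

# The 1-form dx integrated over [0, 1] depends on orientation:
assert sp.integrate(1, (x, 0, 1)) == 1
assert sp.integrate(1, (x, 1, 0)) == -1

# Under the change of variables x = -u, the Jacobian is J = dx/du = -1,
# so the form dx pulls back to -du ...
J = sp.diff(-u, u)
assert J == -1
# ... and the oriented integral transforms with J itself:
assert sp.integrate(1 * J, (u, 0, -1)) == sp.integrate(1, (x, 0, 1))
# A measure transforms with |J| instead, so integrating the constant
# function 1 over the image set [-1, 0] still gives 1:
assert sp.integrate(1 * sp.Abs(J), (u, -1, 0)) == 1
```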
On a non-orientable manifold, n-forms and densities cannot be identified (notably, any top-dimensional form must vanish somewhere, since there are no volume forms on non-orientable manifolds, but there are nowhere-vanishing densities); thus while one can integrate densities over compact subsets, one cannot integrate n-forms. One can instead identify densities with top-dimensional pseudoforms. Even in the presence of an orientation, there is in general no meaningful way to integrate k-forms over subsets for k < n because there is no consistent way to use the ambient orientation to orient k-dimensional subsets. Geometrically, a k-dimensional subset can be turned around in place, yielding the same subset with the opposite orientation; for example, the horizontal axis in a plane can be rotated by 180 degrees. Compare the Gram determinant of a set of k vectors in an n-dimensional space, which, unlike the determinant of n vectors, is always positive, corresponding to a squared number. An orientation of a k-submanifold is therefore extra data not derivable from the ambient manifold. On a Riemannian manifold, one may define a k-dimensional Hausdorff measure for any k (integer or real), which may be integrated over k-dimensional subsets of the manifold. A function times this Hausdorff measure can then be integrated over k-dimensional subsets, providing a measure-theoretic analog to integration of k-forms. The n-dimensional Hausdorff measure yields a density, as above. === Currents === The differential form analog of a distribution or generalized function is called a current. The space of k-currents on M is the dual space to an appropriate space of differential k-forms. Currents play the role of generalized domains of integration, similar to but even more flexible than chains. == Applications in physics == Differential forms arise in some important physical contexts.
For example, in Maxwell's theory of electromagnetism, the Faraday 2-form, or electromagnetic field strength, is F = 1 2 f a b d x a ∧ d x b , {\displaystyle \mathbf {F} ={\frac {1}{2}}f_{ab}\,dx^{a}\wedge dx^{b}\,,} where the fab are formed from the electromagnetic fields E → {\displaystyle {\vec {E}}} and B → {\displaystyle {\vec {B}}} ; e.g., f12 = Ez/c, f23 = −Bz, or equivalent definitions. This form is a special case of the curvature form on the U(1) principal bundle on which both electromagnetism and general gauge theories may be described. The connection form for the principal bundle is the vector potential, typically denoted by A, when represented in some gauge. One then has F = d A . {\displaystyle \mathbf {F} =d\mathbf {A} .} The current 3-form is J = 1 6 j a ε a b c d d x b ∧ d x c ∧ d x d , {\displaystyle \mathbf {J} ={\frac {1}{6}}j^{a}\,\varepsilon _{abcd}\,dx^{b}\wedge dx^{c}\wedge dx^{d}\,,} where ja are the four components of the current density. (Here it is a matter of convention to write Fab instead of fab, i.e. to use capital letters, and to write Ja instead of ja. However, the vector and tensor components and the above-mentioned forms have different physical dimensions. Moreover, by decision of an international commission of the International Union of Pure and Applied Physics, the magnetic polarization vector has been called J → {\displaystyle {\vec {J}}} for several decades, and by some publishers J; i.e., the same name is used for different quantities.) Using the above-mentioned definitions, Maxwell's equations can be written very compactly in geometrized units as d F = 0 d ⋆ F = J , {\displaystyle {\begin{aligned}d{\mathbf {F} }&=\mathbf {0} \\d{\star \mathbf {F} }&=\mathbf {J} ,\end{aligned}}} where ⋆ {\displaystyle \star } denotes the Hodge star operator. Similar considerations describe the geometry of gauge theories in general.
The 2-form ⋆ F {\displaystyle {\star }\mathbf {F} } , which is dual to the Faraday form, is also called Maxwell 2-form. Electromagnetism is an example of a U(1) gauge theory. Here the Lie group is U(1), the one-dimensional unitary group, which is in particular abelian. There are gauge theories, such as Yang–Mills theory, in which the Lie group is not abelian. In that case, one gets relations which are similar to those described here. The analog of the field F in such theories is the curvature form of the connection, which is represented in a gauge by a Lie algebra-valued one-form A. The Yang–Mills field F is then defined by F = d A + A ∧ A . {\displaystyle \mathbf {F} =d\mathbf {A} +\mathbf {A} \wedge \mathbf {A} .} In the abelian case, such as electromagnetism, A ∧ A = 0, but this does not hold in general. Likewise the field equations are modified by additional terms involving exterior products of A and F, owing to the structure equations of the gauge group. == Applications in geometric measure theory == Numerous minimality results for complex analytic manifolds are based on the Wirtinger inequality for 2-forms. A succinct proof may be found in Herbert Federer's classic text Geometric Measure Theory. The Wirtinger inequality is also a key ingredient in Gromov's inequality for complex projective space in systolic geometry. == See also == Closed and exact differential forms Complex differential form Vector-valued differential form Equivariant differential form Calculus on Manifolds Multilinear form Polynomial differential form Presymplectic form == Notes == == References == == External links == Weisstein, Eric W. "Differential form". MathWorld. Sjamaar, Reyer (2006), Manifolds and differential forms lecture notes (PDF), a course taught at Cornell University. Bachman, David (2003), A Geometric Approach to Differential Forms, arXiv:math/0306194, Bibcode:2003math......6194B, an undergraduate text. Needham, Tristan. 
Visual differential geometry and forms: a mathematical drama in five acts. Princeton University Press, 2021.
Wikipedia/Differential_form
In electromagnetism, the electromagnetic tensor or electromagnetic field tensor (sometimes called the field strength tensor, Faraday tensor or Maxwell bivector) is a mathematical object that describes the electromagnetic field in spacetime. The field tensor was developed by Arnold Sommerfeld after the four-dimensional tensor formulation of special relativity was introduced by Hermann Minkowski.: 22  The tensor allows related physical laws to be written concisely, and allows for the quantization of the electromagnetic field by the Lagrangian formulation described below. == Definition == The electromagnetic tensor, conventionally labelled F, is defined as the exterior derivative of the electromagnetic four-potential, A, a differential 1-form: F = d e f d A . {\displaystyle F\ {\stackrel {\mathrm {def} }{=}}\ \mathrm {d} A.} Therefore, F is a differential 2-form, that is, an antisymmetric rank-2 tensor field, on Minkowski space. In component form, F μ ν = ∂ μ A ν − ∂ ν A μ . {\displaystyle F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }.} where ∂ {\displaystyle \partial } is the four-gradient and A {\displaystyle A} is the four-potential. SI units for Maxwell's equations and the particle physicist's sign convention for the signature of Minkowski space (+ − − −) will be used throughout this article. === Relationship with the classical fields === The Faraday differential 2-form is given by F = ( E x / c ) d x ∧ d t + ( E y / c ) d y ∧ d t + ( E z / c ) d z ∧ d t + B x d y ∧ d z + B y d z ∧ d x + B z d x ∧ d y , {\displaystyle F=(E_{x}/c)\ dx\wedge dt+(E_{y}/c)\ dy\wedge dt+(E_{z}/c)\ dz\wedge dt+B_{x}\ dy\wedge dz+B_{y}\ dz\wedge dx+B_{z}\ dx\wedge dy,} where d t {\displaystyle dt} is the time element times the speed of light c {\displaystyle c} .
This is the exterior derivative of its 1-form antiderivative A = A x d x + A y d y + A z d z − ( ϕ / c ) d t {\displaystyle A=A_{x}\ dx+A_{y}\ dy+A_{z}\ dz-(\phi /c)\ dt} , where ϕ ( x → , t ) {\displaystyle \phi ({\vec {x}},t)} has − ∇ → ϕ = E → {\displaystyle -{\vec {\nabla }}\phi ={\vec {E}}} ( ϕ {\displaystyle \phi } is a scalar potential for the irrotational/conservative vector field E → {\displaystyle {\vec {E}}} ) and A → ( x → , t ) {\displaystyle {\vec {A}}({\vec {x}},t)} has ∇ → × A → = B → {\displaystyle {\vec {\nabla }}\times {\vec {A}}={\vec {B}}} ( A → {\displaystyle {\vec {A}}} is a vector potential for the solenoidal vector field B → {\displaystyle {\vec {B}}} ). Note that { d F = 0 ⋆ d ⋆ F = J {\displaystyle {\begin{cases}dF=0\\{\star }d{\star }F=J\end{cases}}} where d {\displaystyle d} is the exterior derivative, ⋆ {\displaystyle {\star }} is the Hodge star, J = − J x d x − J y d y − J z d z + ρ d t {\displaystyle J=-J_{x}\ dx-J_{y}\ dy-J_{z}\ dz+\rho \ dt} (where J → {\displaystyle {\vec {J}}} is the electric current density, and ρ {\displaystyle \rho } is the electric charge density) is the 4-current density 1-form, is the differential forms version of Maxwell's equations. The electric and magnetic fields can be obtained from the components of the electromagnetic tensor. The relationship is simplest in Cartesian coordinates: E i = c F 0 i , {\displaystyle E_{i}=cF_{0i},} where c is the speed of light, and B i = − 1 / 2 ϵ i j k F j k , {\displaystyle B_{i}=-1/2\epsilon _{ijk}F^{jk},} where ϵ i j k {\displaystyle \epsilon _{ijk}} is the Levi-Civita tensor. This gives the fields in a particular reference frame; if the reference frame is changed, the components of the electromagnetic tensor will transform covariantly, and the fields in the new frame will be given by the new components. 
In contravariant matrix form with metric signature (+,-,-,-), F μ ν = [ 0 − E x / c − E y / c − E z / c E x / c 0 − B z B y E y / c B z 0 − B x E z / c − B y B x 0 ] . {\displaystyle F^{\mu \nu }={\begin{bmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}.} The covariant form is given by index lowering, F μ ν = η α ν F β α η μ β = [ 0 E x / c E y / c E z / c − E x / c 0 − B z B y − E y / c B z 0 − B x − E z / c − B y B x 0 ] . {\displaystyle F_{\mu \nu }=\eta _{\alpha \nu }F^{\beta \alpha }\eta _{\mu \beta }={\begin{bmatrix}0&E_{x}/c&E_{y}/c&E_{z}/c\\-E_{x}/c&0&-B_{z}&B_{y}\\-E_{y}/c&B_{z}&0&-B_{x}\\-E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}.} The Faraday tensor's Hodge dual is G α β = 1 2 ϵ α β γ δ F γ δ = [ 0 − B x − B y − B z B x 0 E z / c − E y / c B y − E z / c 0 E x / c B z E y / c − E x / c 0 ] {\displaystyle {G^{\alpha \beta }={\frac {1}{2}}\epsilon ^{\alpha \beta \gamma \delta }F_{\gamma \delta }={\begin{bmatrix}0&-B_{x}&-B_{y}&-B_{z}\\B_{x}&0&E_{z}/c&-E_{y}/c\\B_{y}&-E_{z}/c&0&E_{x}/c\\B_{z}&E_{y}/c&-E_{x}/c&0\end{bmatrix}}}} From now on in this article, when the electric or magnetic fields are mentioned, a Cartesian coordinate system is assumed, and the electric and magnetic fields are with respect to the coordinate system's reference frame, as in the equations above. === Properties === The matrix form of the field tensor yields the following properties: Antisymmetry: F μ ν = − F ν μ {\displaystyle F^{\mu \nu }=-F^{\nu \mu }} Six independent components: In Cartesian coordinates, these are simply the three spatial components of the electric field (Ex, Ey, Ez) and magnetic field (Bx, By, Bz). Inner product: If one forms an inner product of the field strength tensor a Lorentz invariant is formed F μ ν F μ ν = 2 ( B 2 − E 2 c 2 ) {\displaystyle F_{\mu \nu }F^{\mu \nu }=2\left(B^{2}-{\frac {E^{2}}{c^{2}}}\right)} meaning this number does not change from one frame of reference to another. 
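The matrix form and the quadratic invariant above can be checked directly with a few lines of plain Python. In this sketch the field components and the toy value of c are arbitrary illustrative numbers (not physical data); the block builds F^μν in the (+,−,−,−) signature, lowers its indices with the Minkowski metric η, and verifies antisymmetry and F_μν F^μν = 2(B² − E²/c²).

```python
# Build F^{mu nu} from sample field components, lower both indices with
# eta, and check antisymmetry plus F_{mu nu} F^{mu nu} = 2 (B^2 - E^2/c^2).
# The values of E, B and c below are arbitrary toy numbers.

c = 2.0
E = (1.0, 2.0, 3.0)
B = (0.5, -0.4, 0.2)

Ex, Ey, Ez = (e / c for e in E)
Bx, By, Bz = B

F_up = [                      # F^{mu nu}, signature (+,-,-,-)
    [0.0, -Ex, -Ey, -Ez],
    [Ex, 0.0, -Bz, By],
    [Ey, Bz, 0.0, -Bx],
    [Ez, -By, Bx, 0.0],
]

# Antisymmetry F^{mu nu} = -F^{nu mu}
assert all(F_up[m][n] == -F_up[n][m] for m in range(4) for n in range(4))

eta = [[(1.0 if m == 0 else -1.0) if m == n else 0.0 for n in range(4)]
       for m in range(4)]

# Lower both indices: F_{mu nu} = eta_{mu a} F^{a b} eta_{b nu}
F_dn = [[sum(eta[m][a] * F_up[a][b] * eta[b][n]
             for a in range(4) for b in range(4))
         for n in range(4)] for m in range(4)]

invariant = sum(F_dn[m][n] * F_up[m][n] for m in range(4) for n in range(4))
expected = 2.0 * (sum(b * b for b in B) - sum(e * e for e in E) / c**2)
print(invariant, expected)    # the two values agree
```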
Pseudoscalar invariant: The product of the tensor F μ ν {\displaystyle F^{\mu \nu }} with its Hodge dual G μ ν {\displaystyle G^{\mu \nu }} gives a Lorentz invariant: G γ δ F γ δ = 1 2 ϵ α β γ δ F α β F γ δ = − 4 c B ⋅ E {\displaystyle G_{\gamma \delta }F^{\gamma \delta }={\frac {1}{2}}\epsilon _{\alpha \beta \gamma \delta }F^{\alpha \beta }F^{\gamma \delta }=-{\frac {4}{c}}\mathbf {B} \cdot \mathbf {E} \,} where ϵ α β γ δ {\displaystyle \epsilon _{\alpha \beta \gamma \delta }} is the rank-4 Levi-Civita symbol. The sign for the above depends on the convention used for the Levi-Civita symbol. The convention used here is ϵ 0123 = − 1 {\displaystyle \epsilon _{0123}=-1} . Determinant: det ( F ) = 1 c 2 ( B ⋅ E ) 2 {\displaystyle \det \left(F\right)={\frac {1}{c^{2}}}\left(\mathbf {B} \cdot \mathbf {E} \right)^{2}} which is proportional to the square of the above invariant. Trace: F = F μ μ = 0 {\displaystyle F={{F}^{\mu }}_{\mu }=0} which is equal to zero. === Significance === This tensor simplifies and reduces Maxwell's equations as four vector calculus equations into two tensor field equations. In electrostatics and electrodynamics, Gauss's law and Ampère's circuital law are respectively: ∇ ⋅ E = ρ ϵ 0 , ∇ × B − 1 c 2 ∂ E ∂ t = μ 0 J {\displaystyle \nabla \cdot \mathbf {E} ={\frac {\rho }{\epsilon _{0}}},\quad \nabla \times \mathbf {B} -{\frac {1}{c^{2}}}{\frac {\partial \mathbf {E} }{\partial t}}=\mu _{0}\mathbf {J} } and reduce to the inhomogeneous Maxwell equation: ∂ α F β α = − μ 0 J β {\displaystyle \partial _{\alpha }F^{\beta \alpha }=-\mu _{0}J^{\beta }} , where J α = ( c ρ , J ) {\displaystyle J^{\alpha }=(c\rho ,\mathbf {J} )} is the four-current. 
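The determinant identity det(F) = (1/c²)(B · E)² quoted above can likewise be verified, here with exact rational arithmetic via Python's Fraction type so that no floating-point tolerance is needed. The field components and the toy value of c are arbitrary sample numbers, not physical data.

```python
from fractions import Fraction
from itertools import permutations

# Exact check of det(F_{mu nu}) = (E . B)^2 / c^2 on sample values.

c = Fraction(3)                              # toy value of the speed of light
E = (Fraction(1), Fraction(2), Fraction(3))  # arbitrary sample field
B = (Fraction(2), Fraction(1), Fraction(4))

Ex, Ey, Ez = (e / c for e in E)
Bx, By, Bz = B

F = [                                        # F_{mu nu}, signature (+,-,-,-)
    [0, Ex, Ey, Ez],
    [-Ex, 0, -Bz, By],
    [-Ey, Bz, 0, -Bx],
    [-Ez, -By, Bx, 0],
]

def det4(M):
    """Determinant of a 4x4 matrix via the Leibniz permutation formula."""
    total = Fraction(0)
    for perm in permutations(range(4)):
        # permutation sign from the inversion count
        inv = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
        term = Fraction((-1) ** inv)
        for row, col in enumerate(perm):
            term = term * M[row][col]
        total += term
    return total

dot_EB = sum(e * b for e, b in zip(E, B))
print(det4(F) == (dot_EB / c) ** 2)          # True, exactly
```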
In magnetostatics and magnetodynamics, Gauss's law for magnetism and Maxwell–Faraday equation are respectively: ∇ ⋅ B = 0 , ∂ B ∂ t + ∇ × E = 0 {\displaystyle \nabla \cdot \mathbf {B} =0,\quad {\frac {\partial \mathbf {B} }{\partial t}}+\nabla \times \mathbf {E} =\mathbf {0} } which reduce to the Bianchi identity: ∂ γ F α β + ∂ α F β γ + ∂ β F γ α = 0 {\displaystyle \partial _{\gamma }F_{\alpha \beta }+\partial _{\alpha }F_{\beta \gamma }+\partial _{\beta }F_{\gamma \alpha }=0} or using the index notation with square brackets[note 1] for the antisymmetric part of the tensor: ∂ [ α F β γ ] = 0 {\displaystyle \partial _{[\alpha }F_{\beta \gamma ]}=0} Using the expression relating the Faraday tensor to the four-potential, one can prove that the above antisymmetric quantity turns to zero identically ( ≡ 0 {\displaystyle \equiv 0} ). This tensor equation reproduces the homogeneous Maxwell's equations. == Relativity == The field tensor derives its name from the fact that the electromagnetic field is found to obey the tensor transformation law, this general property of physical laws being recognised after the advent of special relativity. This theory stipulated that all the laws of physics should take the same form in all coordinate systems – this led to the introduction of tensors. The tensor formalism also leads to a mathematically simpler presentation of physical laws. The inhomogeneous Maxwell equation leads to the continuity equation: ∂ α J α = J α , α = 0 {\displaystyle \partial _{\alpha }J^{\alpha }=J^{\alpha }{}_{,\alpha }=0} implying conservation of charge. Maxwell's laws above can be generalised to curved spacetime by simply replacing partial derivatives with covariant derivatives: F [ α β ; γ ] = 0 {\displaystyle F_{[\alpha \beta ;\gamma ]}=0} and F α β ; α = μ 0 J β {\displaystyle F^{\alpha \beta }{}_{;\alpha }=\mu _{0}J^{\beta }} where the semicolon notation represents a covariant derivative, as opposed to a partial derivative. 
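The claim above, that the cyclic sum vanishes identically once F is built from a potential, can be illustrated numerically. The sketch below uses central finite differences, which are linear operators that commute just as partial derivatives do, so the discrete cyclic sum cancels to floating-point roundoff rather than merely to O(h²). The four-potential chosen is an arbitrary smooth sample, not a physical field.

```python
import math

# With F_ab = d_a A_b - d_b A_a, check that the cyclic sum
# d_c F_ab + d_a F_bc + d_b F_ca vanishes (to roundoff) at a sample point.

def A(mu, x):
    """Arbitrary smooth sample four-potential A_mu(t, x, y, z)."""
    t, X, Y, Z = x
    return [math.sin(X * Y) + t, X * Z, math.cos(t) * Y, X + Y * Z][mu]

h = 1e-2

def d(mu, f, x):
    """Central-difference approximation of the partial derivative along x^mu."""
    xp, xm = list(x), list(x)
    xp[mu] += h
    xm[mu] -= h
    return (f(xp) - f(xm)) / (2 * h)

def F(a, b, x):
    """F_ab = d_a A_b - d_b A_a."""
    return d(a, lambda y: A(b, y), x) - d(b, lambda y: A(a, y), x)

x0 = [0.3, 1.1, -0.7, 0.4]
residual = max(abs(d(g, lambda y: F(a, b, y), x0)
                   + d(a, lambda y: F(b, g, y), x0)
                   + d(b, lambda y: F(g, a, y), x0))
               for a in range(4) for b in range(4) for g in range(4))
print(residual)   # roundoff-sized, since the difference operators commute
```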
These equations are sometimes referred to as the curved space Maxwell equations. Again, the second equation implies charge conservation (in curved spacetime): J α ; α = 0 {\displaystyle J^{\alpha }{}_{;\alpha }\,=0} The stress-energy tensor of electromagnetism T μ ν = 1 μ 0 [ F μ α F ν α − 1 4 η μ ν F α β F α β ] , {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left[F^{\mu \alpha }F^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right]\,,} satisfies T α β , β + F α β J β = 0 . {\displaystyle {T^{\alpha \beta }}_{,\beta }+F^{\alpha \beta }J_{\beta }=0\,.} == Lagrangian formulation of classical electromagnetism == Classical electromagnetism and Maxwell's equations can be derived from the action: S = ∫ ( − 1 4 μ 0 F μ ν F μ ν − J μ A μ ) d 4 x {\displaystyle {\mathcal {S}}=\int \left(-{\begin{matrix}{\frac {1}{4\mu _{0}}}\end{matrix}}F_{\mu \nu }F^{\mu \nu }-J^{\mu }A_{\mu }\right)\mathrm {d} ^{4}x\,} where d 4 x {\displaystyle \mathrm {d} ^{4}x} is over space and time. This means the Lagrangian density is L = − 1 4 μ 0 F μ ν F μ ν − J μ A μ = − 1 4 μ 0 ( ∂ μ A ν − ∂ ν A μ ) ( ∂ μ A ν − ∂ ν A μ ) − J μ A μ = − 1 4 μ 0 ( ∂ μ A ν ∂ μ A ν − ∂ ν A μ ∂ μ A ν − ∂ μ A ν ∂ ν A μ + ∂ ν A μ ∂ ν A μ ) − J μ A μ {\displaystyle {\begin{aligned}{\mathcal {L}}&=-{\frac {1}{4\mu _{0}}}F_{\mu \nu }F^{\mu \nu }-J^{\mu }A_{\mu }\\&=-{\frac {1}{4\mu _{0}}}\left(\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }\right)\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right)-J^{\mu }A_{\mu }\\&=-{\frac {1}{4\mu _{0}}}\left(\partial _{\mu }A_{\nu }\partial ^{\mu }A^{\nu }-\partial _{\nu }A_{\mu }\partial ^{\mu }A^{\nu }-\partial _{\mu }A_{\nu }\partial ^{\nu }A^{\mu }+\partial _{\nu }A_{\mu }\partial ^{\nu }A^{\mu }\right)-J^{\mu }A_{\mu }\\\end{aligned}}} The two middle terms in the parentheses are the same, as are the two outer terms, so the Lagrangian density is L = − 1 2 μ 0 ( ∂ μ A ν ∂ μ A ν − ∂ ν A μ ∂ μ A ν ) − J μ A μ . 
{\displaystyle {\mathcal {L}}=-{\frac {1}{2\mu _{0}}}\left(\partial _{\mu }A_{\nu }\partial ^{\mu }A^{\nu }-\partial _{\nu }A_{\mu }\partial ^{\mu }A^{\nu }\right)-J^{\mu }A_{\mu }.} Substituting this into the Euler–Lagrange equation of motion for a field: ∂ μ ( ∂ L ∂ ( ∂ μ A ν ) ) − ∂ L ∂ A ν = 0 {\displaystyle \partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }A_{\nu })}}\right)-{\frac {\partial {\mathcal {L}}}{\partial A_{\nu }}}=0} So the Euler–Lagrange equation becomes: − ∂ μ 1 μ 0 ( ∂ μ A ν − ∂ ν A μ ) + J ν = 0. {\displaystyle -\partial _{\mu }{\frac {1}{\mu _{0}}}\left(\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }\right)+J^{\nu }=0.\,} The quantity in parentheses above is just the field tensor, so this finally simplifies to ∂ μ F μ ν = μ 0 J ν {\displaystyle \partial _{\mu }F^{\mu \nu }=\mu _{0}J^{\nu }} That equation is another way of writing the two inhomogeneous Maxwell's equations (namely, Gauss's law and Ampère's circuital law) using the substitutions: 1 c E i = − F 0 i ϵ i j k B k = − F i j {\displaystyle {\begin{aligned}{\frac {1}{c}}E^{i}&=-F^{0i}\\\epsilon ^{ijk}B_{k}&=-F^{ij}\end{aligned}}} where i, j, k take the values 1, 2, and 3. === Hamiltonian form === The Hamiltonian density can be obtained with the usual relation, H ( ϕ i , π i ) = π i ϕ ˙ i ( ϕ i , π i ) − L . {\displaystyle {\mathcal {H}}(\phi ^{i},\pi _{i})=\pi _{i}{\dot {\phi }}^{i}(\phi ^{i},\pi _{i})-{\mathcal {L}}\,.} Here ϕ i = A i {\displaystyle \phi ^{i}=A^{i}} are the fields and the momentum density of the EM field is π i = T 0 i = 1 μ 0 F 0 α F i α = 1 μ 0 c E × B . {\displaystyle \pi _{i}=T_{0i}={\frac {1}{\mu _{0}}}F_{0}{}^{\alpha }F_{i\alpha }={\frac {1}{\mu _{0}c}}\mathbf {E} \times \mathbf {B} \,.} such that the conserved quantity associated with translation from Noether's theorem is the total momentum P = ∑ α m α x ˙ α + 1 μ 0 c ∫ V d 3 x E × B . 
{\displaystyle \mathbf {P} =\sum _{\alpha }m_{\alpha }{\dot {\mathbf {x} }}_{\alpha }+{\frac {1}{\mu _{0}c}}\int _{\mathcal {V}}\mathrm {d} ^{3}x\,\mathbf {E} \times \mathbf {B} \,.} The Hamiltonian density for the electromagnetic field is related to the electromagnetic stress-energy tensor T μ ν = 1 μ 0 [ F μ α F ν α − 1 4 η μ ν F α β F α β ] . {\displaystyle T^{\mu \nu }={\frac {1}{\mu _{0}}}\left[F^{\mu \alpha }F^{\nu }{}_{\alpha }-{\frac {1}{4}}\eta ^{\mu \nu }F_{\alpha \beta }F^{\alpha \beta }\right]\,.} as H = T 00 = 1 2 ( ϵ 0 E 2 + 1 μ 0 B 2 ) = 1 8 π ( E 2 + B 2 ) . {\displaystyle {\mathcal {H}}=T_{00}={\frac {1}{2}}\left(\epsilon _{0}\mathbf {E} ^{2}+{\frac {1}{\mu _{0}}}\mathbf {B} ^{2}\right)={\frac {1}{8\pi }}\left(\mathbf {E} ^{2}+\mathbf {B} ^{2}\right)\,.} where we have neglected the energy density of matter, assuming only the EM field, and the last equality assumes the CGS system. The momentum of nonrelativistic charges interacting with the EM field in the Coulomb gauge ( ∇ ⋅ A = ∇ i A i = 0 {\displaystyle \nabla \cdot \mathbf {A} =\nabla _{i}A^{i}=0} ) is p α = m α x ˙ α + q α c A ( x α ) . {\displaystyle \mathbf {p} _{\alpha }=m_{\alpha }{\dot {\mathbf {x} }}_{\alpha }+{\frac {q_{\alpha }}{c}}\mathbf {A} (\mathbf {x} _{\alpha })\,.} The total Hamiltonian of the matter + EM field system is H = ∫ V d 3 x T 00 = H m a t + H e m . {\displaystyle H=\int _{\mathcal {V}}d^{3}x\,T_{00}=H_{\rm {mat}}+H_{\rm {em}}\,.} where for nonrelativistic point particles in the Coulomb gauge H m a t = ∑ α 1 2 m α | x ˙ α | 2 + ∑ α < β q α q β | x α − x β | = ∑ α 1 2 m α [ p α − q α c A ( x α ) ] 2 + ∑ α < β q α q β | x α − x β | .
{\displaystyle H_{\rm {mat}}=\sum _{\alpha }{\frac {1}{2}}m_{\alpha }|{\dot {\mathbf {x} }}_{\alpha }|^{2}+\sum _{\alpha <\beta }{\frac {q_{\alpha }q_{\beta }}{|\mathbf {x} _{\alpha }-\mathbf {x} _{\beta }|}}=\sum _{\alpha }{\frac {1}{2m_{\alpha }}}\left[\mathbf {p} _{\alpha }-{\frac {q_{\alpha }}{c}}\mathbf {A} (\mathbf {x} _{\alpha })\right]^{2}+\sum _{\alpha <\beta }{\frac {q_{\alpha }q_{\beta }}{|\mathbf {x} _{\alpha }-\mathbf {x} _{\beta }|}}\,.} where the last term is identically 1 8 π ∫ V d 3 x E ∥ 2 {\displaystyle {\frac {1}{8\pi }}\int _{\mathcal {V}}d^{3}x\mathbf {E} _{\parallel }^{2}} where E ∥ i = ∇ i A 0 {\displaystyle {E}_{\parallel i}={\nabla _{i}}A_{0}} and H e m = 1 8 π ∫ V d 3 x ( E ⊥ 2 + B 2 ) . {\displaystyle H_{\rm {em}}={\frac {1}{8\pi }}\int _{\mathcal {V}}d^{3}x\left(\mathbf {E} _{\perp }^{2}+\mathbf {B} ^{2}\right)\,.} where E ⊥ i = − 1 c ∂ 0 A i {\displaystyle {E}_{\perp i}=-{\frac {1}{c}}\partial _{0}A_{i}} .
Modern Problems in Classical Electrodynamics. Oxford University Press. ISBN 0-19-514665-4. Jackson, John D. (1999). Classical Electrodynamics. John Wiley & Sons, Inc. ISBN 0-471-30932-X. Peskin, Michael E.; Schroeder, Daniel V. (1995). An Introduction to Quantum Field Theory. Perseus Publishing. ISBN 0-201-50397-2.
Wikipedia/Electromagnetic_tensor
In physics and mathematics, a pseudotensor is usually a quantity that transforms like a tensor under an orientation-preserving coordinate transformation (e.g. a proper rotation) but additionally changes sign under an orientation-reversing coordinate transformation (e.g., an improper rotation), which is a transformation that can be expressed as a proper rotation followed by reflection. This is a generalization of a pseudovector. To evaluate the sign of a tensor or pseudotensor, it must be contracted with as many vectors as its rank, belonging to the space in which the rotation is made, while keeping the tensor coordinates unaffected (unlike in a change of basis). Under an improper rotation, a pseudotensor and a proper tensor of the same rank acquire different signs, depending on whether the rank is even or odd. Sometimes inversion of the axes is used as an example of an improper rotation to see the behaviour of a pseudotensor, but this works only if the dimension of the vector space is odd; otherwise inversion is a proper rotation without an additional reflection. There is a second meaning for pseudotensor (and likewise for pseudovector), restricted to general relativity. Tensors obey strict transformation laws, but pseudotensors in this sense are not so constrained. Consequently, the form of a pseudotensor will, in general, change as the frame of reference is altered. An equation containing pseudotensors which holds in one frame will not necessarily hold in a different frame. This makes pseudotensors of limited relevance because equations in which they appear are not invariant in form. Mathematical developments in the 1980s have allowed pseudotensors to be understood as sections of jet bundles. == Definition == Two quite different mathematical objects are called a pseudotensor in different contexts.
The first context is essentially a tensor multiplied by an extra sign factor, such that the pseudotensor changes sign under reflections when a normal tensor does not. According to one definition, a pseudotensor P of the type ( p , q ) {\displaystyle (p,q)} is a geometric object whose components in an arbitrary basis are enumerated by ( p + q ) {\displaystyle (p+q)} indices and obey the transformation rule P ^ j 1 … j p i 1 … i q = ( − 1 ) A A i 1 k 1 ⋯ A i q k q B l 1 j 1 ⋯ B l p j p P l 1 … l p k 1 … k q {\displaystyle {\hat {P}}_{\,j_{1}\ldots j_{p}}^{i_{1}\ldots i_{q}}=(-1)^{A}A^{i_{1}}{}_{k_{1}}\cdots A^{i_{q}}{}_{k_{q}}B^{l_{1}}{}_{j_{1}}\cdots B^{l_{p}}{}_{j_{p}}P_{l_{1}\ldots l_{p}}^{k_{1}\ldots k_{q}}} under a change of basis. Here P ^ j 1 … j p i 1 … i q , P l 1 … l p k 1 … k q {\displaystyle {\hat {P}}_{\,j_{1}\ldots j_{p}}^{i_{1}\ldots i_{q}},P_{l_{1}\ldots l_{p}}^{k_{1}\ldots k_{q}}} are the components of the pseudotensor in the new and old bases, respectively, A i q k q {\displaystyle A^{i_{q}}{}_{k_{q}}} is the transition matrix for the contravariant indices, B l p j p {\displaystyle B^{l_{p}}{}_{j_{p}}} is the transition matrix for the covariant indices, and ( − 1 ) A = s i g n ( det ( A i q k q ) ) = ± 1 . {\displaystyle (-1)^{A}=\mathrm {sign} \left(\det \left(A^{i_{q}}{}_{k_{q}}\right)\right)=\pm {1}.} This transformation rule differs from the rule for an ordinary tensor only by the presence of the factor ( − 1 ) A . {\displaystyle (-1)^{A}.} The second context where the word "pseudotensor" is used is general relativity. In that theory, one cannot describe the energy and momentum of the gravitational field by an energy–momentum tensor. Instead, one introduces objects that behave as tensors only with respect to restricted coordinate transformations. Strictly speaking, such objects are not tensors at all. A famous example of such a pseudotensor is the Landau–Lifshitz pseudotensor. 
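A standard concrete instance of the first definition is the Levi-Civita symbol. The sketch below (plain Python) transforms it as if it were an ordinary rank-3 covariant tensor under the reflection diag(−1, 1, 1) and finds all components negated; keeping the symbol's components fixed in every basis therefore requires exactly the extra sign(det A) factor in the pseudotensor transformation rule.

```python
from itertools import product

# Transform the 3-index Levi-Civita symbol with the ordinary covariant
# tensor rule under a reflection, and observe the overall sign flip.

def levi_civita(i, j, k):
    perm = (i, j, k)
    if len(set(perm)) < 3:
        return 0
    inv = sum(perm[a] > perm[b] for a in range(3) for b in range(a + 1, 3))
    return (-1) ** inv

A = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]   # improper rotation (a reflection)

def transform(eps, B):
    """Ordinary covariant rule: eps'_{ijk} = B^a_i B^b_j B^c_k eps_{abc}."""
    out = {}
    for i, j, k in product(range(3), repeat=3):
        out[(i, j, k)] = sum(B[a][i] * B[b][j] * B[c][k] * eps(a, b, c)
                             for a, b, c in product(range(3), repeat=3))
    return out

transformed = transform(levi_civita, A)
flipped = all(transformed[(i, j, k)] == -levi_civita(i, j, k)
              for i, j, k in product(range(3), repeat=3))
print(flipped)   # True: the plain tensor rule alone reverses the sign
```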
== Examples == On non-orientable manifolds, one cannot define a volume form globally due to the non-orientability, but one can define a volume element, which is formally a density, and may also be called a pseudo-volume form, due to the additional sign twist (tensoring with the sign bundle). The volume element is a pseudotensor density according to the first definition. A change of variables in multi-dimensional integration may be achieved through the incorporation of a factor of the absolute value of the determinant of the Jacobian matrix. The use of the absolute value introduces a sign change for improper coordinate transformations to compensate for the convention of keeping integration (volume) element positive; as such, an integrand is an example of a pseudotensor density according to the first definition. The Christoffel symbols of an affine connection on a manifold can be thought of as the correction terms to the partial derivatives of a coordinate expression of a vector field with respect to the coordinates to render it the vector field's covariant derivative. While the affine connection itself doesn't depend on the choice of coordinates, its Christoffel symbols do, making them a pseudotensor quantity according to the second definition. 
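The role of the absolute Jacobian in the change-of-variables rule above can be made concrete in one dimension. In the sketch below (arbitrary integrand, plain Python), substituting x = −u in ∫₀¹ f dx produces an integral over [−1, 0] weighted by |dx/du| = |−1| = 1, so the measure-theoretic value stays positive, whereas the differential form dx would instead pick up the signed Jacobian −1.

```python
# Midpoint-rule comparison of an integral before and after the
# orientation-reversing substitution x = -u, using |dx/du| as the weight.

def f(x):
    return x * x          # arbitrary sample integrand

n = 200000
h = 1.0 / n

# direct integral of f over [0, 1]
direct = sum(f((i + 0.5) * h) for i in range(n)) * h

# after x = -u: integrate f(-u) over u in [-1, 0] with weight |dx/du| = 1
substituted = sum(f(-(-1.0 + (i + 0.5) * h)) for i in range(n)) * h

print(direct, substituted)   # both approximate 1/3, with the same sign
```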
== See also == Action (physics) – Physical quantity of dimension energy × time Conservation law – Scientific law regarding conservation of a physical propertyPages displaying short descriptions of redirect targets General relativity – Theory of gravitation as curved spacetime Tensor – Algebraic object with geometric applications Tensor density – Generalization of tensor fields Tensor field – Assignment of a tensor continuously varying across a region of space Noether's theorem – Statement relating differentiable symmetries to conserved quantities Pseudovector – Physical quantity that changes sign with improper rotation Variational principle – Scientific principles enabling the use of the calculus of variations == References == == External links == Mathworld description for pseudotensor.
Wikipedia/Pseudotensor
In the mathematical field of differential geometry, a metric tensor (or simply metric) is an additional structure on a manifold M (such as a surface) that allows defining distances and angles, just as the inner product on a Euclidean space allows defining distances and angles there. More precisely, a metric tensor at a point p of M is a bilinear form defined on the tangent space at p (that is, a bilinear function that maps pairs of tangent vectors to real numbers), and a metric field on M consists of a metric tensor at each point p of M that varies smoothly with p. A metric tensor g is positive-definite if g(v, v) > 0 for every nonzero vector v. A manifold equipped with a positive-definite metric tensor is known as a Riemannian manifold. Such a metric tensor can be thought of as specifying infinitesimal distance on the manifold. On a Riemannian manifold M, the length of a smooth curve between two points p and q can be defined by integration, and the distance between p and q can be defined as the infimum of the lengths of all such curves; this makes M a metric space. Conversely, the metric tensor itself is the derivative of the distance function (taken in a suitable manner). While the notion of a metric tensor was known in some sense to mathematicians such as Gauss from the early 19th century, it was not until the early 20th century that its properties as a tensor were understood by, in particular, Gregorio Ricci-Curbastro and Tullio Levi-Civita, who first codified the notion of a tensor. The metric tensor is an example of a tensor field. The components of a metric tensor in a coordinate basis take on the form of a symmetric matrix whose entries transform covariantly under changes to the coordinate system. Thus a metric tensor is a covariant symmetric tensor. From the coordinate-independent point of view, a metric tensor field is defined to be a nondegenerate symmetric bilinear form on each tangent space that varies smoothly from point to point. 
== Introduction == Carl Friedrich Gauss in his 1827 Disquisitiones generales circa superficies curvas (General investigations of curved surfaces) considered a surface parametrically, with the Cartesian coordinates x, y, and z of points on the surface depending on two auxiliary variables u and v. Thus a parametric surface is (in today's terms) a vector-valued function r → ( u , v ) = ( x ( u , v ) , y ( u , v ) , z ( u , v ) ) {\displaystyle {\vec {r}}(u,\,v)={\bigl (}x(u,\,v),\,y(u,\,v),\,z(u,\,v){\bigr )}} depending on an ordered pair of real variables (u, v), and defined in an open set D in the uv-plane. One of the chief aims of Gauss's investigations was to deduce those features of the surface which could be described by a function which would remain unchanged if the surface underwent a transformation in space (such as bending the surface without stretching it), or a change in the particular parametric form of the same geometrical surface. One natural such invariant quantity is the length of a curve drawn along the surface. Another is the angle between a pair of curves drawn along the surface and meeting at a common point. A third such quantity is the area of a piece of the surface. The study of these invariants of a surface led Gauss to introduce the predecessor of the modern notion of the metric tensor. The metric tensor is [ E F F G ] {\textstyle {\begin{bmatrix}E&F\\F&G\end{bmatrix}}} in the description below; E, F, and G in the matrix can contain any number as long as the matrix is positive definite. === Arc length === If the variables u and v are taken to depend on a third variable, t, taking values in an interval [a, b], then r→(u(t), v(t)) will trace out a parametric curve in parametric surface M. 
The arc length of that curve is given by the integral s = ∫ a b ‖ d d t r → ( u ( t ) , v ( t ) ) ‖ d t = ∫ a b u ′ ( t ) 2 r → u ⋅ r → u + 2 u ′ ( t ) v ′ ( t ) r → u ⋅ r → v + v ′ ( t ) 2 r → v ⋅ r → v d t , {\displaystyle {\begin{aligned}s&=\int _{a}^{b}\left\|{\frac {d}{dt}}{\vec {r}}(u(t),v(t))\right\|\,dt\\[5pt]&=\int _{a}^{b}{\sqrt {u'(t)^{2}\,{\vec {r}}_{u}\cdot {\vec {r}}_{u}+2u'(t)v'(t)\,{\vec {r}}_{u}\cdot {\vec {r}}_{v}+v'(t)^{2}\,{\vec {r}}_{v}\cdot {\vec {r}}_{v}}}\,dt\,,\end{aligned}}} where ‖ ⋅ ‖ {\displaystyle \left\|\cdot \right\|} represents the Euclidean norm. Here the chain rule has been applied, and the subscripts denote partial derivatives: r → u = ∂ r → ∂ u , r → v = ∂ r → ∂ v . {\displaystyle {\vec {r}}_{u}={\frac {\partial {\vec {r}}}{\partial u}}\,,\quad {\vec {r}}_{v}={\frac {\partial {\vec {r}}}{\partial v}}\,.} The integrand is the restriction to the curve of the square root of the (quadratic) differential d s 2 = E d u 2 + 2 F d u d v + G d v 2 , {\displaystyle ds^{2}=E\,du^{2}+2F\,du\,dv+G\,dv^{2}\,,} (1) where E = r → u ⋅ r → u , F = r → u ⋅ r → v , G = r → v ⋅ r → v . {\displaystyle E={\vec {r}}_{u}\cdot {\vec {r}}_{u}\,,\quad F={\vec {r}}_{u}\cdot {\vec {r}}_{v}\,,\quad G={\vec {r}}_{v}\cdot {\vec {r}}_{v}\,.} (2) The quantity ds in (1) is called the line element, while ds2 is called the first fundamental form of M. Intuitively, it represents the principal part of the square of the displacement undergone by r→(u, v) when u is increased by du units, and v is increased by dv units. Using matrix notation, the first fundamental form becomes d s 2 = [ d u d v ] [ E F F G ] [ d u d v ] {\displaystyle ds^{2}={\begin{bmatrix}du&dv\end{bmatrix}}{\begin{bmatrix}E&F\\F&G\end{bmatrix}}{\begin{bmatrix}du\\dv\end{bmatrix}}} === Coordinate transformations === Suppose now that a different parameterization is selected, by allowing u and v to depend on another pair of variables u′ and v′. Then the analog of (2) for the new variables is E ′ = r → u ′ ⋅ r → u ′ , F ′ = r → u ′ ⋅ r → v ′ , G ′ = r → v ′ ⋅ r → v ′ . {\displaystyle E'={\vec {r}}_{u'}\cdot {\vec {r}}_{u'}\,,\quad F'={\vec {r}}_{u'}\cdot {\vec {r}}_{v'}\,,\quad G'={\vec {r}}_{v'}\cdot {\vec {r}}_{v'}\,.} The chain rule relates E′, F′, and G′ to E, F, and G via the matrix equation [ E ′ F ′ F ′ G ′ ] = J T [ E F F G ] J {\displaystyle {\begin{bmatrix}E'&F'\\F'&G'\end{bmatrix}}=J^{\mathsf {T}}{\begin{bmatrix}E&F\\F&G\end{bmatrix}}J} (3) where the superscript T denotes the matrix transpose. The matrix with the coefficients E, F, and G arranged in this way therefore transforms by the Jacobian matrix of the coordinate change J = [ ∂ u ∂ u ′ ∂ u ∂ v ′ ∂ v ∂ u ′ ∂ v ∂ v ′ ] . 
{\displaystyle J={\begin{bmatrix}{\frac {\partial u}{\partial u'}}&{\frac {\partial u}{\partial v'}}\\{\frac {\partial v}{\partial u'}}&{\frac {\partial v}{\partial v'}}\end{bmatrix}}\,.} A matrix which transforms in this way is one kind of what is called a tensor. The matrix [ E F F G ] {\displaystyle {\begin{bmatrix}E&F\\F&G\end{bmatrix}}} with the transformation law (3) is known as the metric tensor of the surface. === Invariance of arclength under coordinate transformations === Ricci-Curbastro & Levi-Civita (1900) first observed the significance of a system of coefficients E, F, and G, that transformed in this way on passing from one system of coordinates to another. The upshot is that the first fundamental form (1) is invariant under changes in the coordinate system, and that this follows exclusively from the transformation properties of E, F, and G. Indeed, by the chain rule, [ d u d v ] = [ ∂ u ∂ u ′ ∂ u ∂ v ′ ∂ v ∂ u ′ ∂ v ∂ v ′ ] [ d u ′ d v ′ ] {\displaystyle {\begin{bmatrix}du\\dv\end{bmatrix}}={\begin{bmatrix}{\dfrac {\partial u}{\partial u'}}&{\dfrac {\partial u}{\partial v'}}\\{\dfrac {\partial v}{\partial u'}}&{\dfrac {\partial v}{\partial v'}}\end{bmatrix}}{\begin{bmatrix}du'\\dv'\end{bmatrix}}} so that d s 2 = [ d u d v ] [ E F F G ] [ d u d v ] = [ d u ′ d v ′ ] [ ∂ u ∂ u ′ ∂ u ∂ v ′ ∂ v ∂ u ′ ∂ v ∂ v ′ ] T [ E F F G ] [ ∂ u ∂ u ′ ∂ u ∂ v ′ ∂ v ∂ u ′ ∂ v ∂ v ′ ] [ d u ′ d v ′ ] = [ d u ′ d v ′ ] [ E ′ F ′ F ′ G ′ ] [ d u ′ d v ′ ] = ( d s ′ ) 2 . 
{\displaystyle {\begin{aligned}ds^{2}&={\begin{bmatrix}du&dv\end{bmatrix}}{\begin{bmatrix}E&F\\F&G\end{bmatrix}}{\begin{bmatrix}du\\dv\end{bmatrix}}\\[6pt]&={\begin{bmatrix}du'&dv'\end{bmatrix}}{\begin{bmatrix}{\dfrac {\partial u}{\partial u'}}&{\dfrac {\partial u}{\partial v'}}\\[6pt]{\dfrac {\partial v}{\partial u'}}&{\dfrac {\partial v}{\partial v'}}\end{bmatrix}}^{\mathsf {T}}{\begin{bmatrix}E&F\\F&G\end{bmatrix}}{\begin{bmatrix}{\dfrac {\partial u}{\partial u'}}&{\dfrac {\partial u}{\partial v'}}\\[6pt]{\dfrac {\partial v}{\partial u'}}&{\dfrac {\partial v}{\partial v'}}\end{bmatrix}}{\begin{bmatrix}du'\\dv'\end{bmatrix}}\\[6pt]&={\begin{bmatrix}du'&dv'\end{bmatrix}}{\begin{bmatrix}E'&F'\\F'&G'\end{bmatrix}}{\begin{bmatrix}du'\\dv'\end{bmatrix}}\\[6pt]&=(ds')^{2}\,.\end{aligned}}} === Length and angle === Another interpretation of the metric tensor, also considered by Gauss, is that it provides a way in which to compute the length of tangent vectors to the surface, as well as the angle between two tangent vectors. In contemporary terms, the metric tensor allows one to compute the dot product of tangent vectors in a manner independent of the parametric description of the surface. Any tangent vector at a point of the parametric surface M can be written in the form p = p 1 r → u + p 2 r → v {\displaystyle \mathbf {p} =p_{1}{\vec {r}}_{u}+p_{2}{\vec {r}}_{v}} for suitable real numbers p1 and p2. If two tangent vectors are given: a = a 1 r → u + a 2 r → v b = b 1 r → u + b 2 r → v {\displaystyle {\begin{aligned}\mathbf {a} &=a_{1}{\vec {r}}_{u}+a_{2}{\vec {r}}_{v}\\\mathbf {b} &=b_{1}{\vec {r}}_{u}+b_{2}{\vec {r}}_{v}\end{aligned}}} then using the bilinearity of the dot product, a ⋅ b = a 1 b 1 r → u ⋅ r → u + a 1 b 2 r → u ⋅ r → v + a 2 b 1 r → v ⋅ r → u + a 2 b 2 r → v ⋅ r → v = a 1 b 1 E + a 1 b 2 F + a 2 b 1 F + a 2 b 2 G . = [ a 1 a 2 ] [ E F F G ] [ b 1 b 2 ] . 
{\displaystyle {\begin{aligned}\mathbf {a} \cdot \mathbf {b} &=a_{1}b_{1}{\vec {r}}_{u}\cdot {\vec {r}}_{u}+a_{1}b_{2}{\vec {r}}_{u}\cdot {\vec {r}}_{v}+a_{2}b_{1}{\vec {r}}_{v}\cdot {\vec {r}}_{u}+a_{2}b_{2}{\vec {r}}_{v}\cdot {\vec {r}}_{v}\\[8pt]&=a_{1}b_{1}E+a_{1}b_{2}F+a_{2}b_{1}F+a_{2}b_{2}G.\\[8pt]&={\begin{bmatrix}a_{1}&a_{2}\end{bmatrix}}{\begin{bmatrix}E&F\\F&G\end{bmatrix}}{\begin{bmatrix}b_{1}\\b_{2}\end{bmatrix}}\,.\end{aligned}}} This is plainly a function of the four variables a1, b1, a2, and b2. It is more profitably viewed, however, as a function that takes a pair of arguments a = [a1 a2] and b = [b1 b2] which are vectors in the uv-plane. That is, put g ( a , b ) = a 1 b 1 E + a 1 b 2 F + a 2 b 1 F + a 2 b 2 G . {\displaystyle g(\mathbf {a} ,\mathbf {b} )=a_{1}b_{1}E+a_{1}b_{2}F+a_{2}b_{1}F+a_{2}b_{2}G\,.} This is a symmetric function in a and b, meaning that g ( a , b ) = g ( b , a ) . {\displaystyle g(\mathbf {a} ,\mathbf {b} )=g(\mathbf {b} ,\mathbf {a} )\,.} It is also bilinear, meaning that it is linear in each variable a and b separately. That is, g ( λ a + μ a ′ , b ) = λ g ( a , b ) + μ g ( a ′ , b ) , and g ( a , λ b + μ b ′ ) = λ g ( a , b ) + μ g ( a , b ′ ) {\displaystyle {\begin{aligned}g\left(\lambda \mathbf {a} +\mu \mathbf {a} ',\mathbf {b} \right)&=\lambda g(\mathbf {a} ,\mathbf {b} )+\mu g\left(\mathbf {a} ',\mathbf {b} \right),\quad {\text{and}}\\g\left(\mathbf {a} ,\lambda \mathbf {b} +\mu \mathbf {b} '\right)&=\lambda g(\mathbf {a} ,\mathbf {b} )+\mu g\left(\mathbf {a} ,\mathbf {b} '\right)\end{aligned}}} for any vectors a, a′, b, and b′ in the uv plane, and any real numbers μ and λ. In particular, the length of a tangent vector a is given by ‖ a ‖ = g ( a , a ) {\displaystyle \left\|\mathbf {a} \right\|={\sqrt {g(\mathbf {a} ,\mathbf {a} )}}} and the angle θ between two vectors a and b is calculated by cos ⁡ ( θ ) = g ( a , b ) ‖ a ‖ ‖ b ‖ . 
{\displaystyle \cos(\theta )={\frac {g(\mathbf {a} ,\mathbf {b} )}{\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|}}\,.} === Area === The surface area is another numerical quantity which should depend only on the surface itself, and not on how it is parameterized. If the surface M is parameterized by the function r→(u, v) over the domain D in the uv-plane, then the surface area of M is given by the integral ∬ D | r → u × r → v | d u d v {\displaystyle \iint _{D}\left|{\vec {r}}_{u}\times {\vec {r}}_{v}\right|\,du\,dv} where × denotes the cross product, and the absolute value denotes the length of a vector in Euclidean space. By Lagrange's identity for the cross product, the integral can be written ∬ D ( r → u ⋅ r → u ) ( r → v ⋅ r → v ) − ( r → u ⋅ r → v ) 2 d u d v = ∬ D E G − F 2 d u d v = ∬ D det [ E F F G ] d u d v {\displaystyle {\begin{aligned}&\iint _{D}{\sqrt {\left({\vec {r}}_{u}\cdot {\vec {r}}_{u}\right)\left({\vec {r}}_{v}\cdot {\vec {r}}_{v}\right)-\left({\vec {r}}_{u}\cdot {\vec {r}}_{v}\right)^{2}}}\,du\,dv\\[5pt]={}&\iint _{D}{\sqrt {EG-F^{2}}}\,du\,dv\\[5pt]={}&\iint _{D}{\sqrt {\det {\begin{bmatrix}E&F\\F&G\end{bmatrix}}}}\,du\,dv\end{aligned}}} where det is the determinant. == Definition == Let M be a smooth manifold of dimension n; for instance a surface (in the case n = 2) or hypersurface in the Cartesian space R n + 1 {\displaystyle \mathbb {R} ^{n+1}} . At each point p ∈ M there is a vector space TpM, called the tangent space, consisting of all tangent vectors to the manifold at the point p. A metric tensor at p is a function gp(Xp, Yp) which takes as inputs a pair of tangent vectors Xp and Yp at p, and produces as an output a real number (scalar), so that the following conditions are satisfied: gp is bilinear. A function of two vector arguments is bilinear if it is linear separately in each argument. 
Thus if Up, Vp, Yp are three tangent vectors at p and a and b are real numbers, then g p ( a U p + b V p , Y p ) = a g p ( U p , Y p ) + b g p ( V p , Y p ) , and g p ( Y p , a U p + b V p ) = a g p ( Y p , U p ) + b g p ( Y p , V p ) . {\displaystyle {\begin{aligned}g_{p}(aU_{p}+bV_{p},Y_{p})&=ag_{p}(U_{p},Y_{p})+bg_{p}(V_{p},Y_{p})\,,\quad {\text{and}}\\g_{p}(Y_{p},aU_{p}+bV_{p})&=ag_{p}(Y_{p},U_{p})+bg_{p}(Y_{p},V_{p})\,.\end{aligned}}} gp is symmetric. A function of two vector arguments is symmetric provided that for all vectors Xp and Yp, g p ( X p , Y p ) = g p ( Y p , X p ) . {\displaystyle g_{p}(X_{p},Y_{p})=g_{p}(Y_{p},X_{p})\,.} gp is nondegenerate. A bilinear function is nondegenerate provided that, for every tangent vector Xp ≠ 0, the function Y p ↦ g p ( X p , Y p ) {\displaystyle Y_{p}\mapsto g_{p}(X_{p},Y_{p})} obtained by holding Xp constant and allowing Yp to vary is not identically zero. That is, for every Xp ≠ 0 there exists a Yp such that gp(Xp, Yp) ≠ 0. A metric tensor field g on M assigns to each point p of M a metric tensor gp in the tangent space at p in a way that varies smoothly with p. More precisely, given any open subset U of manifold M and any (smooth) vector fields X and Y on U, the real function g ( X , Y ) ( p ) = g p ( X p , Y p ) {\displaystyle g(X,Y)(p)=g_{p}(X_{p},Y_{p})} is a smooth function of p. == Components of the metric == The components of the metric in any basis of vector fields, or frame, f = (X1, ..., Xn) are given by g i j [ f ] = g ( X i , X j ) , i , j = 1 , … , n . {\displaystyle g_{ij}[\mathbf {f} ]=g\left(X_{i},X_{j}\right)\,,\quad i,j=1,\dots ,n\,.} (4) The n2 functions gij[f] form the entries of an n × n symmetric matrix, G[f].
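The component construction can be carried out concretely. A minimal sketch, taking g to be the ordinary dot product on R2 and an assumed (non-orthonormal) frame; the frame and component values are illustrative choices, not from the article:

```python
import numpy as np

# An assumed frame f = (X1, X2) of R^2, stored as the columns of a matrix.
X = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # X1 = (1, 0), X2 = (1, 2); not orthonormal

# Components of the metric (here the Euclidean dot product): g_ij = X_i . X_j
Gf = np.array([[X[:, i] @ X[:, j] for j in range(2)] for i in range(2)])

# G[f] is symmetric, as required of a metric tensor.
assert np.allclose(Gf, Gf.T)

# Evaluating g on components reproduces the direct dot product of the vectors:
vc, wc = np.array([2.0, 1.0]), np.array([-1.0, 3.0])   # components in f
v, w = X @ vc, X @ wc                                   # the actual vectors
assert np.isclose(vc @ Gf @ wc, v @ w)
```

Here the matrix G[f] encodes the metric entirely in terms of the chosen frame, which is the point of the component description.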
If v = ∑ i = 1 n v i X i , w = ∑ i = 1 n w i X i {\displaystyle v=\sum _{i=1}^{n}v^{i}X_{i}\,,\quad w=\sum _{i=1}^{n}w^{i}X_{i}} are two vectors at p ∈ U, then the value of the metric applied to v and w is determined by the coefficients (4) by bilinearity: g ( v , w ) = ∑ i , j = 1 n v i w j g ( X i , X j ) = ∑ i , j = 1 n v i w j g i j [ f ] {\displaystyle g(v,w)=\sum _{i,j=1}^{n}v^{i}w^{j}g\left(X_{i},X_{j}\right)=\sum _{i,j=1}^{n}v^{i}w^{j}g_{ij}[\mathbf {f} ]} Denoting the matrix (gij[f]) by G[f] and arranging the components of the vectors v and w into column vectors v[f] and w[f], g ( v , w ) = v [ f ] T G [ f ] w [ f ] = w [ f ] T G [ f ] v [ f ] {\displaystyle g(v,w)=\mathbf {v} [\mathbf {f} ]^{\mathsf {T}}G[\mathbf {f} ]\mathbf {w} [\mathbf {f} ]=\mathbf {w} [\mathbf {f} ]^{\mathsf {T}}G[\mathbf {f} ]\mathbf {v} [\mathbf {f} ]} where v[f]T and w[f]T denote the transpose of the vectors v[f] and w[f], respectively. Under a change of basis of the form f ↦ f ′ = ( ∑ k X k a k 1 , … , ∑ k X k a k n ) = f A {\displaystyle \mathbf {f} \mapsto \mathbf {f} '=\left(\sum _{k}X_{k}a_{k1},\dots ,\sum _{k}X_{k}a_{kn}\right)=\mathbf {f} A} for some invertible n × n matrix A = (aij), the matrix of components of the metric changes by A as well. That is, G [ f A ] = A T G [ f ] A {\displaystyle G[\mathbf {f} A]=A^{\mathsf {T}}G[\mathbf {f} ]A} or, in terms of the entries of this matrix, g i j [ f A ] = ∑ k , l = 1 n a k i g k l [ f ] a l j . {\displaystyle g_{ij}[\mathbf {f} A]=\sum _{k,l=1}^{n}a_{ki}g_{kl}[\mathbf {f} ]a_{lj}\,.} For this reason, the system of quantities gij[f] is said to transform covariantly with respect to changes in the frame f. === Metric in coordinates === A system of n real-valued functions (x1, ..., xn), giving a local coordinate system on an open set U in M, determines a basis of vector fields on U f = ( X 1 = ∂ ∂ x 1 , … , X n = ∂ ∂ x n ) . 
{\displaystyle \mathbf {f} =\left(X_{1}={\frac {\partial }{\partial x^{1}}},\dots ,X_{n}={\frac {\partial }{\partial x^{n}}}\right)\,.} The metric g has components relative to this frame given by g i j [ f ] = g ( ∂ ∂ x i , ∂ ∂ x j ) . {\displaystyle g_{ij}\left[\mathbf {f} \right]=g\left({\frac {\partial }{\partial x^{i}}},{\frac {\partial }{\partial x^{j}}}\right)\,.} Relative to a new system of local coordinates, say y i = y i ( x 1 , x 2 , … , x n ) , i = 1 , 2 , … , n {\displaystyle y^{i}=y^{i}(x^{1},x^{2},\dots ,x^{n}),\quad i=1,2,\dots ,n} the metric tensor will determine a different matrix of coefficients, g i j [ f ′ ] = g ( ∂ ∂ y i , ∂ ∂ y j ) . {\displaystyle g_{ij}\left[\mathbf {f} '\right]=g\left({\frac {\partial }{\partial y^{i}}},{\frac {\partial }{\partial y^{j}}}\right).} This new system of functions is related to the original gij(f) by means of the chain rule ∂ ∂ y i = ∑ k = 1 n ∂ x k ∂ y i ∂ ∂ x k {\displaystyle {\frac {\partial }{\partial y^{i}}}=\sum _{k=1}^{n}{\frac {\partial x^{k}}{\partial y^{i}}}{\frac {\partial }{\partial x^{k}}}} so that g i j [ f ′ ] = ∑ k , l = 1 n ∂ x k ∂ y i g k l [ f ] ∂ x l ∂ y j . {\displaystyle g_{ij}\left[\mathbf {f} '\right]=\sum _{k,l=1}^{n}{\frac {\partial x^{k}}{\partial y^{i}}}g_{kl}\left[\mathbf {f} \right]{\frac {\partial x^{l}}{\partial y^{j}}}.} Or, in terms of the matrices G[f] = (gij[f]) and G[f′] = (gij[f′]), G [ f ′ ] = ( ( D y ) − 1 ) T G [ f ] ( D y ) − 1 {\displaystyle G\left[\mathbf {f} '\right]=\left((Dy)^{-1}\right)^{\mathsf {T}}G\left[\mathbf {f} \right](Dy)^{-1}} where Dy denotes the Jacobian matrix of the coordinate change. === Signature of a metric === Associated to any metric tensor is the quadratic form defined in each tangent space by q m ( X m ) = g m ( X m , X m ) , X m ∈ T m M . {\displaystyle q_{m}(X_{m})=g_{m}(X_{m},X_{m})\,,\quad X_{m}\in T_{m}M.} If qm is positive for all non-zero Xm, then the metric is positive-definite at m. 
If the metric is positive-definite at every m ∈ M, then g is called a Riemannian metric. More generally, if the quadratic forms qm have constant signature independent of m, then the signature of g is this signature, and g is called a pseudo-Riemannian metric. If M is connected, then the signature of qm does not depend on m. By Sylvester's law of inertia, a basis of tangent vectors Xi can be chosen locally so that the quadratic form diagonalizes in the following manner q m ( ∑ i ξ i X i ) = ( ξ 1 ) 2 + ( ξ 2 ) 2 + ⋯ + ( ξ p ) 2 − ( ξ p + 1 ) 2 − ⋯ − ( ξ n ) 2 {\displaystyle q_{m}\left(\sum _{i}\xi ^{i}X_{i}\right)=\left(\xi ^{1}\right)^{2}+\left(\xi ^{2}\right)^{2}+\cdots +\left(\xi ^{p}\right)^{2}-\left(\xi ^{p+1}\right)^{2}-\cdots -\left(\xi ^{n}\right)^{2}} for some p between 1 and n. Any two such expressions of q (at the same point m of M) will have the same number p of positive signs. The signature of g is the pair of integers (p, n − p), signifying that there are p positive signs and n − p negative signs in any such expression. Equivalently, the metric has signature (p, n − p) if the matrix gij of the metric has p positive and n − p negative eigenvalues. Certain metric signatures which arise frequently in applications are: If g has signature (n, 0), then g is a Riemannian metric, and M is called a Riemannian manifold. Otherwise, g is a pseudo-Riemannian metric, and M is called a pseudo-Riemannian manifold (the term semi-Riemannian is also used). If M is four-dimensional with signature (1, 3) or (3, 1), then the metric is called Lorentzian. More generally, a metric tensor in dimension n other than 4 of signature (1, n − 1) or (n − 1, 1) is sometimes also called Lorentzian. If M is 2n-dimensional and g has signature (n, n), then the metric is called ultrahyperbolic. === Inverse metric === Let f = (X1, ..., Xn) be a basis of vector fields, and as above let G[f] be the matrix of coefficients g i j [ f ] = g ( X i , X j ) . 
{\displaystyle g_{ij}[\mathbf {f} ]=g\left(X_{i},X_{j}\right)\,.} One can consider the inverse matrix G[f]−1, which is identified with the inverse metric (or conjugate or dual metric). The inverse metric satisfies a transformation law when the frame f is changed by a matrix A via G [ f A ] − 1 = A − 1 G [ f ] − 1 ( A − 1 ) T . {\displaystyle G[\mathbf {f} A]^{-1}=A^{-1}G[\mathbf {f} ]^{-1}\left(A^{-1}\right)^{\mathsf {T}}\,.} (5) The inverse metric transforms contravariantly, or with respect to the inverse of the change of basis matrix A. Whereas the metric itself provides a way to measure the length of (or angle between) vector fields, the inverse metric supplies a means of measuring the length of (or angle between) covector fields; that is, fields of linear functionals. To see this, suppose that α is a covector field. To wit, for each point p, α determines a function αp defined on tangent vectors at p so that the following linearity condition holds for all tangent vectors Xp and Yp, and all real numbers a and b: α p ( a X p + b Y p ) = a α p ( X p ) + b α p ( Y p ) . {\displaystyle \alpha _{p}\left(aX_{p}+bY_{p}\right)=a\alpha _{p}\left(X_{p}\right)+b\alpha _{p}\left(Y_{p}\right)\,.} As p varies, α is assumed to be a smooth function in the sense that p ↦ α p ( X p ) {\displaystyle p\mapsto \alpha _{p}\left(X_{p}\right)} is a smooth function of p for any smooth vector field X. Any covector field α has components in the basis of vector fields f. These are determined by α i = α ( X i ) , i = 1 , 2 , … , n . {\displaystyle \alpha _{i}=\alpha \left(X_{i}\right)\,,\quad i=1,2,\dots ,n\,.} Denote the row vector of these components by α [ f ] = [ α 1 α 2 … α n ] . {\displaystyle \alpha [\mathbf {f} ]={\big \lbrack }{\begin{array}{cccc}\alpha _{1}&\alpha _{2}&\dots &\alpha _{n}\end{array}}{\big \rbrack }\,.} Under a change of f by a matrix A, α[f] changes by the rule α [ f A ] = α [ f ] A . {\displaystyle \alpha [\mathbf {f} A]=\alpha [\mathbf {f} ]A\,.} That is, the row vector of components α[f] transforms as a covariant vector.
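The contravariant transformation of the inverse metric, and the resulting frame-independence of the pairing of covectors it provides, can be checked numerically. A sketch with an arbitrarily chosen metric matrix, change-of-basis matrix, and covector components (all assumptions for illustration):

```python
import numpy as np

Gf = np.array([[2.0, 1.0],
               [1.0, 3.0]])       # metric components in a frame f (assumed)
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # invertible change-of-basis matrix (assumed)

GfA = A.T @ Gf @ A                # the metric transforms covariantly
Ainv = np.linalg.inv(A)

# The inverse metric transforms contravariantly:
#   G[fA]^{-1} = A^{-1} G[f]^{-1} (A^{-1})^T
assert np.allclose(np.linalg.inv(GfA), Ainv @ np.linalg.inv(Gf) @ Ainv.T)

# Covector rows transform covariantly (by A), so the pairing of two covectors
# through the inverse metric does not depend on the frame:
alpha = np.array([[1.0, -2.0]])   # row vectors of covector components (assumed)
beta = np.array([[0.5, 4.0]])
pair_f = alpha @ np.linalg.inv(Gf) @ beta.T
pair_fA = (alpha @ A) @ np.linalg.inv(GfA) @ (beta @ A).T
assert np.allclose(pair_f, pair_fA)
```

The cancellation at work is the same one written out symbolically just below: the factors of A from the covector rows absorb the factors of A−1 from the inverse metric.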
For a pair α and β of covector fields, define the inverse metric applied to these two covectors by g ( α , β ) = α [ f ] G [ f ] − 1 β [ f ] T . {\displaystyle g(\alpha ,\beta )=\alpha [\mathbf {f} ]G[\mathbf {f} ]^{-1}\beta [\mathbf {f} ]^{\mathsf {T}}\,.} (6) The resulting definition, although it involves the choice of basis f, does not actually depend on f in an essential way. Indeed, changing basis to fA gives α [ f A ] G [ f A ] − 1 β [ f A ] T = ( α [ f ] A ) ( A − 1 G [ f ] − 1 ( A − 1 ) T ) ( A T β [ f ] T ) = α [ f ] G [ f ] − 1 β [ f ] T . {\displaystyle {\begin{aligned}&\alpha [\mathbf {f} A]G[\mathbf {f} A]^{-1}\beta [\mathbf {f} A]^{\mathsf {T}}\\={}&\left(\alpha [\mathbf {f} ]A\right)\left(A^{-1}G[\mathbf {f} ]^{-1}\left(A^{-1}\right)^{\mathsf {T}}\right)\left(A^{\mathsf {T}}\beta [\mathbf {f} ]^{\mathsf {T}}\right)\\={}&\alpha [\mathbf {f} ]G[\mathbf {f} ]^{-1}\beta [\mathbf {f} ]^{\mathsf {T}}.\end{aligned}}} It follows that the right-hand side of equation (6) is unaffected by changing the basis f to any other basis fA whatsoever. Consequently, the equation may be assigned a meaning independently of the choice of basis. The entries of the inverse matrix G[f]−1 are denoted by gij, where the indices i and j have been raised to indicate the transformation law (5). === Raising and lowering indices === In a basis of vector fields f = (X1, ..., Xn), any smooth tangent vector field X can be written in the form X = v 1 [ f ] X 1 + v 2 [ f ] X 2 + ⋯ + v n [ f ] X n = f v [ f ] {\displaystyle X=v^{1}[\mathbf {f} ]X_{1}+v^{2}[\mathbf {f} ]X_{2}+\dots +v^{n}[\mathbf {f} ]X_{n}=\mathbf {f} \,v[\mathbf {f} ]} (7) for some uniquely determined smooth functions v1, ..., vn. Upon changing the basis f by a nonsingular matrix A, the coefficients vi change in such a way that equation (7) remains true. That is, X = f A v [ f A ] = f v [ f ] . {\displaystyle X=\mathbf {fA} v[\mathbf {fA} ]=\mathbf {f} v[\mathbf {f} ]\,.} Consequently, v[fA] = A−1v[f]. In other words, the components of a vector transform contravariantly (that is, inversely or in the opposite way) under a change of basis by the nonsingular matrix A. The contravariance of the components of v[f] is notationally designated by placing the indices of vi[f] in the upper position. A frame also allows covectors to be expressed in terms of their components.
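The opposite ("contra") transformation of vector components against the covariant transformation of the metric can also be verified directly: the two changes cancel, leaving the scalar g(v, w) frame-independent. A sketch with assumed component values and change-of-basis matrix:

```python
import numpy as np

Gf = np.array([[2.0, 1.0],
               [1.0, 3.0]])        # metric components in a frame f (assumed)
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])         # nonsingular change-of-basis matrix (assumed)

vf = np.array([2.0, -1.0])         # components of vectors v, w in frame f (assumed)
wf = np.array([0.0, 3.0])

# Components transform contravariantly, v[fA] = A^{-1} v[f], while the metric
# transforms covariantly, G[fA] = A^T G[f] A; the scalar g(v, w) is unchanged.
Ainv = np.linalg.inv(A)
vfA, wfA = Ainv @ vf, Ainv @ wf
GfA = A.T @ Gf @ A
assert np.isclose(vf @ Gf @ wf, vfA @ GfA @ wfA)
```

This is the numerical counterpart of the index conventions: upper (contravariant) indices on vi[f] transform against the basis, lower (covariant) indices with it.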
For the basis of vector fields f = (X1, ..., Xn) define the dual basis to be the linear functionals (θ1[f], ..., θn[f]) such that θ i [ f ] ( X j ) = { 1 i f i = j 0 i f i ≠ j . {\displaystyle \theta ^{i}[\mathbf {f} ](X_{j})={\begin{cases}1&\mathrm {if} \ i=j\\0&\mathrm {if} \ i\not =j.\end{cases}}} That is, θi[f](Xj) = δji, the Kronecker delta. Let θ [ f ] = [ θ 1 [ f ] θ 2 [ f ] ⋮ θ n [ f ] ] . {\displaystyle \theta [\mathbf {f} ]={\begin{bmatrix}\theta ^{1}[\mathbf {f} ]\\\theta ^{2}[\mathbf {f} ]\\\vdots \\\theta ^{n}[\mathbf {f} ]\end{bmatrix}}.} Under a change of basis f ↦ fA for a nonsingular matrix A, θ[f] transforms via θ [ f A ] = A − 1 θ [ f ] . {\displaystyle \theta [\mathbf {f} A]=A^{-1}\theta [\mathbf {f} ].} Any linear functional α on tangent vectors can be expanded in terms of the dual basis θ as α = a [ f ] θ [ f ] {\displaystyle \alpha =a[\mathbf {f} ]\theta [\mathbf {f} ]} (8) where a[f] denotes the row vector [ a1[f] ... an[f] ]. The components ai transform when the basis f is replaced by fA in such a way that equation (8) continues to hold. That is, α = a [ f A ] θ [ f A ] = a [ f ] θ [ f ] {\displaystyle \alpha =a[\mathbf {f} A]\theta [\mathbf {f} A]=a[\mathbf {f} ]\theta [\mathbf {f} ]} whence, because θ[fA] = A−1θ[f], it follows that a[fA] = a[f]A. That is, the components a transform covariantly (by the matrix A rather than its inverse). The covariance of the components of a[f] is notationally designated by placing the indices of ai[f] in the lower position. Now, the metric tensor gives a means to identify vectors and covectors as follows. Holding Xp fixed, the function g p ( X p , − ) : Y p ↦ g p ( X p , Y p ) {\displaystyle g_{p}(X_{p},-):Y_{p}\mapsto g_{p}(X_{p},Y_{p})} of tangent vector Yp defines a linear functional on the tangent space at p. This operation takes a vector Xp at a point p and produces a covector gp(Xp, −).
In a basis of vector fields f, if a vector field X has components v[f], then the components of the covector field g(X, −) in the dual basis are given by the entries of the row vector a [ f ] = v [ f ] T G [ f ] . {\displaystyle a[\mathbf {f} ]=v[\mathbf {f} ]^{\mathsf {T}}G[\mathbf {f} ].} Under a change of basis f ↦ fA, the right-hand side of this equation transforms via v [ f A ] T G [ f A ] = v [ f ] T ( A − 1 ) T A T G [ f ] A = v [ f ] T G [ f ] A {\displaystyle v[\mathbf {f} A]^{\mathsf {T}}G[\mathbf {f} A]=v[\mathbf {f} ]^{\mathsf {T}}\left(A^{-1}\right)^{\mathsf {T}}A^{\mathsf {T}}G[\mathbf {f} ]A=v[\mathbf {f} ]^{\mathsf {T}}G[\mathbf {f} ]A} so that a[fA] = a[f]A: a transforms covariantly. The operation of associating to the (contravariant) components of a vector field v[f] = [ v1[f] v2[f] ... vn[f] ]T the (covariant) components of the covector field a[f] = [ a1[f] a2[f] … an[f] ], where a i [ f ] = ∑ k = 1 n v k [ f ] g k i [ f ] {\displaystyle a_{i}[\mathbf {f} ]=\sum _{k=1}^{n}v^{k}[\mathbf {f} ]g_{ki}[\mathbf {f} ]} is called lowering the index. To raise the index, one applies the same construction but with the inverse metric instead of the metric. If a[f] = [ a1[f] a2[f] ... an[f] ] are the components of a covector in the dual basis θ[f], then the column vector v [ f ] = G [ f ] − 1 a [ f ] T {\displaystyle v[\mathbf {f} ]=G[\mathbf {f} ]^{-1}a[\mathbf {f} ]^{\mathsf {T}}} (9) has components which transform contravariantly: v [ f A ] = A − 1 v [ f ] . {\displaystyle v[\mathbf {f} A]=A^{-1}v[\mathbf {f} ].} Consequently, the quantity X = fv[f] does not depend on the choice of basis f in an essential way, and thus defines a vector field on M. The operation (9), associating to the (covariant) components of a covector a[f] the (contravariant) components of a vector v[f], is called raising the index. In components, (9) is v i [ f ] = ∑ k = 1 n g i k [ f ] a k [ f ] . 
{\displaystyle v^{i}[\mathbf {f} ]=\sum _{k=1}^{n}g^{ik}[\mathbf {f} ]a_{k}[\mathbf {f} ].} === Induced metric === Let U be an open set in ℝn, and let φ be a continuously differentiable function from U into the Euclidean space ℝm, where m > n. The mapping φ is called an immersion if its differential is injective at every point of U. The image of φ is called an immersed submanifold. More specifically, for m = 3, which means that the ambient Euclidean space is ℝ3, the induced metric tensor is called the first fundamental form. Suppose that φ is an immersion onto the submanifold M ⊂ Rm. The usual Euclidean dot product in ℝm is a metric which, when restricted to vectors tangent to M, gives a means for taking the dot product of these tangent vectors. This is called the induced metric. Suppose that v is a tangent vector at a point of U, say v = v 1 e 1 + ⋯ + v n e n {\displaystyle v=v^{1}\mathbf {e} _{1}+\dots +v^{n}\mathbf {e} _{n}} where ei are the standard coordinate vectors in ℝn. When φ is applied to U, the vector v goes over to the vector tangent to M given by φ ∗ ( v ) = ∑ i = 1 n ∑ a = 1 m v i ∂ φ a ∂ x i e a . {\displaystyle \varphi _{*}(v)=\sum _{i=1}^{n}\sum _{a=1}^{m}v^{i}{\frac {\partial \varphi ^{a}}{\partial x^{i}}}\mathbf {e} _{a}\,.} (This is called the pushforward of v along φ.) Given two such vectors, v and w, the induced metric is defined by g ( v , w ) = φ ∗ ( v ) ⋅ φ ∗ ( w ) . {\displaystyle g(v,w)=\varphi _{*}(v)\cdot \varphi _{*}(w).} It follows from a straightforward calculation that the matrix of the induced metric in the basis of coordinate vector fields e is given by G ( e ) = ( D φ ) T ( D φ ) {\displaystyle G(\mathbf {e} )=(D\varphi )^{\mathsf {T}}(D\varphi )} where Dφ is the Jacobian matrix: D φ = [ ∂ φ 1 ∂ x 1 ∂ φ 1 ∂ x 2 … ∂ φ 1 ∂ x n ∂ φ 2 ∂ x 1 ∂ φ 2 ∂ x 2 … ∂ φ 2 ∂ x n ⋮ ⋮ ⋱ ⋮ ∂ φ m ∂ x 1 ∂ φ m ∂ x 2 … ∂ φ m ∂ x n ] . 
{\displaystyle D\varphi ={\begin{bmatrix}{\frac {\partial \varphi ^{1}}{\partial x^{1}}}&{\frac {\partial \varphi ^{1}}{\partial x^{2}}}&\dots &{\frac {\partial \varphi ^{1}}{\partial x^{n}}}\\[1ex]{\frac {\partial \varphi ^{2}}{\partial x^{1}}}&{\frac {\partial \varphi ^{2}}{\partial x^{2}}}&\dots &{\frac {\partial \varphi ^{2}}{\partial x^{n}}}\\\vdots &\vdots &\ddots &\vdots \\{\frac {\partial \varphi ^{m}}{\partial x^{1}}}&{\frac {\partial \varphi ^{m}}{\partial x^{2}}}&\dots &{\frac {\partial \varphi ^{m}}{\partial x^{n}}}\end{bmatrix}}.} == Intrinsic definitions of a metric == The notion of a metric can be defined intrinsically using the language of fiber bundles and vector bundles. In these terms, a metric tensor is a function from the fiber product of the tangent bundle of M with itself to R such that the restriction of g to each fiber is a nondegenerate bilinear mapping g p : T p M × T p M → R . {\displaystyle g_{p}:\mathrm {T} _{p}M\times \mathrm {T} _{p}M\to \mathbf {R} .} The mapping (10) is required to be continuous, and often continuously differentiable, smooth, or real analytic, depending on the case of interest, and whether M can support such a structure. === Metric as a section of a bundle === By the universal property of the tensor product, any bilinear mapping (10) gives rise naturally to a section g⊗ of the dual of the tensor product bundle of TM with itself g ⊗ ∈ Γ ( ( T M ⊗ T M ) ∗ ) . {\displaystyle g_{\otimes }\in \Gamma \left((\mathrm {T} M\otimes \mathrm {T} M)^{*}\right).} The section g⊗ is defined on simple elements of TM ⊗ TM by g ⊗ ( v ⊗ w ) = g ( v , w ) {\displaystyle g_{\otimes }(v\otimes w)=g(v,w)} and is defined on arbitrary elements of TM ⊗ TM by extending linearly to linear combinations of simple elements. 
The original bilinear form g is symmetric if and only if g ⊗ ∘ τ = g ⊗ {\displaystyle g_{\otimes }\circ \tau =g_{\otimes }} where τ : T M ⊗ T M → ≅ T M ⊗ T M {\displaystyle \tau :\mathrm {T} M\otimes \mathrm {T} M{\stackrel {\cong }{\to }}TM\otimes TM} is the braiding map. Since M is finite-dimensional, there is a natural isomorphism ( T M ⊗ T M ) ∗ ≅ T ∗ M ⊗ T ∗ M , {\displaystyle (\mathrm {T} M\otimes \mathrm {T} M)^{*}\cong \mathrm {T} ^{*}M\otimes \mathrm {T} ^{*}M,} so that g⊗ is regarded also as a section of the bundle T*M ⊗ T*M of the cotangent bundle T*M with itself. Since g is symmetric as a bilinear mapping, it follows that g⊗ is a symmetric tensor. === Metric in a vector bundle === More generally, one may speak of a metric in a vector bundle. If E is a vector bundle over a manifold M, then a metric is a mapping g : E × M E → R {\displaystyle g:E\times _{M}E\to \mathbf {R} } from the fiber product of E to R which is bilinear in each fiber: g p : E p × E p → R . {\displaystyle g_{p}:E_{p}\times E_{p}\to \mathbf {R} .} Using duality as above, a metric is often identified with a section of the tensor product bundle E* ⊗ E*. === Tangent–cotangent isomorphism === The metric tensor gives a natural isomorphism from the tangent bundle to the cotangent bundle, sometimes called the musical isomorphism. This isomorphism is obtained by setting, for each tangent vector Xp ∈ TpM, S g X p = def g ( X p , − ) , {\displaystyle S_{g}X_{p}\,{\stackrel {\text{def}}{=}}\,g(X_{p},-),} the linear functional on TpM which sends a tangent vector Yp at p to gp(Xp,Yp). That is, in terms of the pairing [−, −] between TpM and its dual space T∗pM, [ S g X p , Y p ] = g p ( X p , Y p ) {\displaystyle [S_{g}X_{p},Y_{p}]=g_{p}(X_{p},Y_{p})} for all tangent vectors Xp and Yp. The mapping Sg is a linear transformation from TpM to T∗pM. It follows from the definition of non-degeneracy that the kernel of Sg is reduced to zero, and so by the rank–nullity theorem, Sg is a linear isomorphism. 
Furthermore, Sg is a symmetric linear transformation in the sense that [ S g X p , Y p ] = [ S g Y p , X p ] {\displaystyle [S_{g}X_{p},Y_{p}]=[S_{g}Y_{p},X_{p}]} for all tangent vectors Xp and Yp. Conversely, any linear isomorphism S : TpM → T∗pM defines a non-degenerate bilinear form on TpM by means of g S ( X p , Y p ) = [ S X p , Y p ] . {\displaystyle g_{S}(X_{p},Y_{p})=[SX_{p},Y_{p}]\,.} This bilinear form is symmetric if and only if S is symmetric. There is thus a natural one-to-one correspondence between symmetric bilinear forms on TpM and symmetric linear isomorphisms of TpM to the dual T∗pM. As p varies over M, Sg defines a section of the bundle Hom(TM, T*M) of vector bundle isomorphisms of the tangent bundle to the cotangent bundle. This section has the same smoothness as g: it is continuous, differentiable, smooth, or real-analytic according as g. The mapping Sg, which associates to every vector field on M a covector field on M gives an abstract formulation of "lowering the index" on a vector field. The inverse of Sg is a mapping T*M → TM which, analogously, gives an abstract formulation of "raising the index" on a covector field. The inverse S−1g defines a linear mapping S g − 1 : T ∗ M → T M {\displaystyle S_{g}^{-1}:\mathrm {T} ^{*}M\to \mathrm {T} M} which is nonsingular and symmetric in the sense that [ S g − 1 α , β ] = [ S g − 1 β , α ] {\displaystyle \left[S_{g}^{-1}\alpha ,\beta \right]=\left[S_{g}^{-1}\beta ,\alpha \right]} for all covectors α, β. Such a nonsingular symmetric mapping gives rise (by the tensor-hom adjunction) to a map T ∗ M ⊗ T ∗ M → R {\displaystyle \mathrm {T} ^{*}M\otimes \mathrm {T} ^{*}M\to \mathbf {R} } or by the double dual isomorphism to a section of the tensor product T M ⊗ T M . {\displaystyle \mathrm {T} M\otimes \mathrm {T} M.} == Arclength and the line element == Suppose that g is a Riemannian metric on M. 
In a local coordinate system xi, i = 1, 2, …, n, the metric tensor appears as a matrix, denoted here by G, whose entries are the components gij of the metric tensor relative to the coordinate vector fields. Let γ(t) be a piecewise-differentiable parametric curve in M, for a ≤ t ≤ b. The arclength of the curve is defined by L = ∫ a b ∑ i , j = 1 n g i j ( γ ( t ) ) ( d d t x i ∘ γ ( t ) ) ( d d t x j ∘ γ ( t ) ) d t . {\displaystyle L=\int _{a}^{b}{\sqrt {\sum _{i,j=1}^{n}g_{ij}(\gamma (t))\left({\frac {d}{dt}}x^{i}\circ \gamma (t)\right)\left({\frac {d}{dt}}x^{j}\circ \gamma (t)\right)}}\,dt\,.} In connection with this geometrical application, the quadratic differential form d s 2 = ∑ i , j = 1 n g i j ( p ) d x i d x j {\displaystyle ds^{2}=\sum _{i,j=1}^{n}g_{ij}(p)dx^{i}dx^{j}} is called the first fundamental form associated to the metric, while ds is the line element. When ds2 is pulled back to the image of a curve in M, it represents the square of the differential with respect to arclength. For a pseudo-Riemannian metric, the length formula above is not always defined, because the term under the square root may become negative. We generally only define the length of a curve when the quantity under the square root is always of one sign or the other. In this case, define L = ∫ a b | ∑ i , j = 1 n g i j ( γ ( t ) ) ( d d t x i ∘ γ ( t ) ) ( d d t x j ∘ γ ( t ) ) | d t . {\displaystyle L=\int _{a}^{b}{\sqrt {\left|\sum _{i,j=1}^{n}g_{ij}(\gamma (t))\left({\frac {d}{dt}}x^{i}\circ \gamma (t)\right)\left({\frac {d}{dt}}x^{j}\circ \gamma (t)\right)\right|}}\,dt\,.} While these formulas use coordinate expressions, they are in fact independent of the coordinates chosen; they depend only on the metric, and the curve along which the formula is integrated. 
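The coordinate arclength formula lends itself to direct numerical evaluation. A minimal sketch, assuming the standard example of polar coordinates (r, θ) on the Euclidean plane, where the metric components are g11 = 1, g12 = g21 = 0, g22 = r2 (this example metric is an assumption, not taken from the text):

```python
import math

def g(point):
    # Metric components of the Euclidean plane in polar coordinates (r, theta).
    r, theta = point
    return [[1.0, 0.0], [0.0, r * r]]

def arc_length(gamma, a, b, n=20000):
    # L = integral over [a, b] of sqrt( sum_ij g_ij(gamma(t)) x_i'(t) x_j'(t) ) dt,
    # approximated with the midpoint rule and central-difference derivatives.
    h = (b - a) / n
    L = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        dt = 1e-6
        p = gamma(t)
        dp = [(x1 - x0) / (2 * dt) for x1, x0 in zip(gamma(t + dt), gamma(t - dt))]
        gij = g(p)
        q = sum(gij[i][j] * dp[i] * dp[j] for i in range(2) for j in range(2))
        L += math.sqrt(q) * h
    return L

# Half of the unit circle: r(t) = 1, theta(t) = t for t in [0, pi]; length pi.
L = arc_length(lambda t: (1.0, t), 0.0, math.pi)
```

As the text notes, the answer depends only on the metric and the curve, not on the coordinates; repeating the computation for the same half-circle in Cartesian coordinates (with gij the identity) would give the same value.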
=== The energy, variational principles and geodesics === Given a segment of a curve, another frequently defined quantity is the (kinetic) energy of the curve: E = 1 2 ∫ a b ∑ i , j = 1 n g i j ( γ ( t ) ) ( d d t x i ∘ γ ( t ) ) ( d d t x j ∘ γ ( t ) ) d t . {\displaystyle E={\frac {1}{2}}\int _{a}^{b}\sum _{i,j=1}^{n}g_{ij}(\gamma (t))\left({\frac {d}{dt}}x^{i}\circ \gamma (t)\right)\left({\frac {d}{dt}}x^{j}\circ \gamma (t)\right)\,dt\,.} This usage comes from physics, specifically, classical mechanics, where the integral E can be seen to directly correspond to the kinetic energy of a point particle moving on the surface of a manifold. Thus, for example, in Jacobi's formulation of Maupertuis' principle, the metric tensor can be seen to correspond to the mass tensor of a moving particle. In many cases, whenever a calculation calls for the length to be used, a similar calculation using the energy may be done as well. This often leads to simpler formulas by avoiding the need for the square-root. Thus, for example, the geodesic equations may be obtained by applying variational principles to either the length or the energy. In the latter case, the geodesic equations are seen to arise from the principle of least action: they describe the motion of a "free particle" (a particle feeling no forces) that is confined to move on the manifold, but otherwise moves freely, with constant momentum, within the manifold. == Canonical measure and volume form == In analogy with the case of surfaces, a metric tensor on an n-dimensional paracompact manifold M gives rise to a natural way to measure the n-dimensional volume of subsets of the manifold. The resulting natural positive Borel measure allows one to develop a theory of integrating functions on the manifold by means of the associated Lebesgue integral. A measure can be defined, by the Riesz representation theorem, by giving a positive linear functional Λ on the space C0(M) of compactly supported continuous functions on M. 
More precisely, if M is a manifold with a (pseudo-)Riemannian metric tensor g, then there is a unique positive Borel measure μg such that for any coordinate chart (U, φ), Λ f = ∫ U f d μ g = ∫ φ ( U ) f ∘ φ − 1 ( x ) | det g | d x {\displaystyle \Lambda f=\int _{U}f\,d\mu _{g}=\int _{\varphi (U)}f\circ \varphi ^{-1}(x){\sqrt {\left|\det g\right|}}\,dx} for all f supported in U. Here det g is the determinant of the matrix formed by the components of the metric tensor in the coordinate chart. That Λ is well-defined on functions supported in coordinate neighborhoods is justified by Jacobian change of variables. It extends to a unique positive linear functional on C0(M) by means of a partition of unity. If M is also oriented, then it is possible to define a natural volume form from the metric tensor. In a positively oriented coordinate system (x1, ..., xn) the volume form is represented as ω = | det g | d x 1 ∧ ⋯ ∧ d x n {\displaystyle \omega ={\sqrt {\left|\det g\right|}}\,dx^{1}\wedge \cdots \wedge dx^{n}} where the dxi are the coordinate differentials and ∧ denotes the exterior product in the algebra of differential forms. The volume form also gives a way to integrate functions on the manifold, and this geometric integral agrees with the integral obtained by the canonical Borel measure. == Examples == === Euclidean metric === The most familiar example is that of elementary Euclidean geometry: the two-dimensional Euclidean metric tensor. In the usual Cartesian (x, y) coordinates, we can write g = [ 1 0 0 1 ] . {\displaystyle g={\begin{bmatrix}1&0\\0&1\end{bmatrix}}\,.} The length of a curve reduces to the formula: L = ∫ a b ( d x ) 2 + ( d y ) 2 . {\displaystyle L=\int _{a}^{b}{\sqrt {(dx)^{2}+(dy)^{2}}}\,.} The Euclidean metric in some other common coordinate systems can be written as follows. Polar coordinates (r, θ): x = r cos ⁡ θ y = r sin ⁡ θ J = [ cos ⁡ θ − r sin ⁡ θ sin ⁡ θ r cos ⁡ θ ] . 
{\displaystyle {\begin{aligned}x&=r\cos \theta \\y&=r\sin \theta \\J&={\begin{bmatrix}\cos \theta &-r\sin \theta \\\sin \theta &r\cos \theta \end{bmatrix}}\,.\end{aligned}}} So g = J T J = [ cos 2 ⁡ θ + sin 2 ⁡ θ − r sin ⁡ θ cos ⁡ θ + r sin ⁡ θ cos ⁡ θ − r cos ⁡ θ sin ⁡ θ + r cos ⁡ θ sin ⁡ θ r 2 sin 2 ⁡ θ + r 2 cos 2 ⁡ θ ] = [ 1 0 0 r 2 ] {\displaystyle g=J^{\mathsf {T}}J={\begin{bmatrix}\cos ^{2}\theta +\sin ^{2}\theta &-r\sin \theta \cos \theta +r\sin \theta \cos \theta \\-r\cos \theta \sin \theta +r\cos \theta \sin \theta &r^{2}\sin ^{2}\theta +r^{2}\cos ^{2}\theta \end{bmatrix}}={\begin{bmatrix}1&0\\0&r^{2}\end{bmatrix}}} by trigonometric identities. In general, in a Cartesian coordinate system xi on a Euclidean space, the partial derivatives ∂ / ∂xi are orthonormal with respect to the Euclidean metric. Thus the metric tensor is the Kronecker delta δij in this coordinate system. The metric tensor with respect to arbitrary (possibly curvilinear) coordinates qi is given by g i j = ∑ k l δ k l ∂ x k ∂ q i ∂ x l ∂ q j = ∑ k ∂ x k ∂ q i ∂ x k ∂ q j . {\displaystyle g_{ij}=\sum _{kl}\delta _{kl}{\frac {\partial x^{k}}{\partial q^{i}}}{\frac {\partial x^{l}}{\partial q^{j}}}=\sum _{k}{\frac {\partial x^{k}}{\partial q^{i}}}{\frac {\partial x^{k}}{\partial q^{j}}}.} ==== The round metric on a sphere ==== The unit sphere in ℝ3 comes equipped with a natural metric induced from the ambient Euclidean metric, through the process explained in the induced metric section. In standard spherical coordinates (θ, φ), with θ the colatitude, the angle measured from the z-axis, and φ the angle from the x-axis in the xy-plane, the metric takes the form g = [ 1 0 0 sin 2 ⁡ θ ] . {\displaystyle g={\begin{bmatrix}1&0\\0&\sin ^{2}\theta \end{bmatrix}}\,.} This is usually written in the form d s 2 = d θ 2 + sin 2 ⁡ θ d φ 2 . 
{\displaystyle ds^{2}=d\theta ^{2}+\sin ^{2}\theta \,d\varphi ^{2}\,.} === Lorentzian metrics from relativity === In flat Minkowski space (special relativity), with coordinates r μ → ( x 0 , x 1 , x 2 , x 3 ) = ( c t , x , y , z ) , {\displaystyle r^{\mu }\rightarrow \left(x^{0},x^{1},x^{2},x^{3}\right)=(ct,x,y,z)\,,} the metric is, depending on choice of metric signature, g = [ 1 0 0 0 0 − 1 0 0 0 0 − 1 0 0 0 0 − 1 ] or g = [ − 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ] . {\displaystyle g={\begin{bmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix}}\quad {\text{or}}\quad g={\begin{bmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}\,.} For a curve with—for example—constant time coordinate, the length formula with this metric reduces to the usual length formula. For a timelike curve, the length formula gives the proper time along the curve. In this case, the spacetime interval is written as d s 2 = c 2 d t 2 − d x 2 − d y 2 − d z 2 = d r μ d r μ = g μ ν d r μ d r ν . {\displaystyle ds^{2}=c^{2}dt^{2}-dx^{2}-dy^{2}-dz^{2}=dr^{\mu }dr_{\mu }=g_{\mu \nu }dr^{\mu }dr^{\nu }\,.} The Schwarzschild metric describes the spacetime around a spherically symmetric body, such as a planet, or a black hole. With coordinates ( x 0 , x 1 , x 2 , x 3 ) = ( c t , r , θ , φ ) , {\displaystyle \left(x^{0},x^{1},x^{2},x^{3}\right)=(ct,r,\theta ,\varphi )\,,} we can write the metric as g μ ν = [ ( 1 − 2 G M r c 2 ) 0 0 0 0 − ( 1 − 2 G M r c 2 ) − 1 0 0 0 0 − r 2 0 0 0 0 − r 2 sin 2 ⁡ θ ] , {\displaystyle g_{\mu \nu }={\begin{bmatrix}\left(1-{\frac {2GM}{rc^{2}}}\right)&0&0&0\\0&-\left(1-{\frac {2GM}{rc^{2}}}\right)^{-1}&0&0\\0&0&-r^{2}&0\\0&0&0&-r^{2}\sin ^{2}\theta \end{bmatrix}}\,,} where G (inside the matrix) is the gravitational constant and M represents the total mass–energy content of the central object. 
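The sign conventions above can be illustrated with a small numerical check. Using the (+, −, −, −) signature for flat Minkowski space, the interval ds² = gμν drμ drν is positive on timelike displacements, negative on spacelike ones, and zero on lightlike ones (the displacement vectors below are illustrative values in units with c = 1):

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -), acting on displacements
# dr = (c dt, dx, dy, dz); here we set c = 1 for simplicity.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def interval(dr):
    """The spacetime interval ds^2 = g_mu_nu dr^mu dr^nu."""
    return dr @ eta @ dr

timelike  = np.array([2.0, 1.0, 0.0, 0.0])   # time component dominates
spacelike = np.array([1.0, 2.0, 0.0, 0.0])   # space component dominates
lightlike = np.array([1.0, 1.0, 0.0, 0.0])   # on the light cone

assert interval(timelike) > 0                # ds^2 > 0: proper time defined
assert interval(spacelike) < 0               # ds^2 < 0: square root undefined
assert np.isclose(interval(lightlike), 0.0)  # null displacement
```

This is exactly the sign ambiguity the length formula for pseudo-Riemannian metrics must handle by taking the absolute value under the square root.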
== See also == Riemannian manifold Pseudo-Riemannian manifold Basic introduction to the mathematics of curved spacetime Clifford algebra Finsler manifold List of coordinate charts Ricci calculus Tissot's indicatrix, a technique to visualize the metric tensor == Notes == == References == Dodson, C. T. J.; Poston, T. (1991), Tensor geometry, Graduate Texts in Mathematics, vol. 130 (2nd ed.), Berlin, New York: Springer-Verlag, doi:10.1007/978-3-642-10514-2, ISBN 978-3-540-52018-4, MR 1223091 Gallot, Sylvestre; Hulin, Dominique; Lafontaine, Jacques (2004), Riemannian Geometry (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-20493-0. Gauss, Carl Friedrich (1827), General Investigations of Curved Surfaces, New York: Raven Press (published 1965) translated by A. M. Hiltebeitel and J. C. Morehead; "Disquisitiones generales circa superficies curvas", Commentationes Societatis Regiae Scientiarum Gottingesis Recentiores Vol. VI (1827), pp. 99–146. Hawking, S.W.; Ellis, G.F.R. (1973), The large scale structure of space-time, Cambridge University Press. Kay, David (1988), Schaum's Outline of Theory and Problems of Tensor Calculus, McGraw-Hill, ISBN 978-0-07-033484-7. Kline, Morris (1990), Mathematical thought from ancient to modern times, Volume 3, Oxford University Press. Lee, John (1997), Riemannian manifolds, Springer Verlag, ISBN 978-0-387-98322-6. Michor, Peter W. (2008), Topics in Differential Geometry, Graduate Studies in Mathematics, vol. 93, Providence: American Mathematical Society (to appear). Misner, Charles W.; Thorne, Kip S.; Wheeler, John A. (1973), Gravitation, W. H. Freeman, ISBN 0-7167-0344-0 Ricci-Curbastro, Gregorio; Levi-Civita, Tullio (1900), "Méthodes de calcul différentiel absolu et leurs applications", Mathematische Annalen, 54 (1): 125–201, doi:10.1007/BF01454201, ISSN 1432-1807, S2CID 120009332 Sternberg, S. (1983), Lectures on Differential Geometry (2nd ed.), New York: Chelsea Publishing Co., ISBN 0-8218-1385-4 Vaughn, Michael T. 
(2007), Introduction to mathematical physics (PDF), Weinheim: Wiley-VCH Verlag GmbH & Co., doi:10.1002/9783527618859, ISBN 978-3-527-40627-2, MR 2324500 Wells, Raymond (1980), Differential Analysis on Complex Manifolds, Berlin, New York: Springer-Verlag
Wikipedia/Metric_tensor
In mathematics, the modern component-free approach to the theory of a tensor views a tensor as an abstract object, expressing some definite type of multilinear concept. Their properties can be derived from their definitions, as linear maps or more generally; and the rules for manipulations of tensors arise as an extension of linear algebra to multilinear algebra. In differential geometry, an intrinsic geometric statement may be described by a tensor field on a manifold, and then doesn't need to make reference to coordinates at all. The same is true in general relativity, of tensor fields describing a physical property. The component-free approach is also used extensively in abstract algebra and homological algebra, where tensors arise naturally. == Definition via tensor products of vector spaces == Given a finite set {V1, ..., Vn} of vector spaces over a common field F, one may form their tensor product V1 ⊗ ... ⊗ Vn, an element of which is termed a tensor. A tensor on the vector space V is then defined to be an element of (i.e., a vector in) a vector space of the form: V ⊗ ⋯ ⊗ V ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ {\displaystyle V\otimes \cdots \otimes V\otimes V^{*}\otimes \cdots \otimes V^{*}} where V∗ is the dual space of V. If there are m copies of V and n copies of V∗ in our product, the tensor is said to be of type (m, n) and contravariant of order m and covariant of order n and of total order m + n. The tensors of order zero are just the scalars (elements of the field F), those of contravariant order 1 are the vectors in V, and those of covariant order 1 are the one-forms in V∗ (for this reason, the elements of the last two spaces are often called the contravariant and covariant vectors). The space of all tensors of type (m, n) is denoted T n m ( V ) = V ⊗ ⋯ ⊗ V ⏟ m ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ n . {\displaystyle T_{n}^{m}(V)=\underbrace {V\otimes \dots \otimes V} _{m}\otimes \underbrace {V^{*}\otimes \dots \otimes V^{*}} _{n}.} Example 1. 
The space of type (1, 1) tensors, T 1 1 ( V ) = V ⊗ V ∗ , {\displaystyle T_{1}^{1}(V)=V\otimes V^{*},} is isomorphic in a natural way to the space of linear transformations from V to V. Example 2. A bilinear form on a real vector space V, V × V → F , {\displaystyle V\times V\to F,} corresponds in a natural way to a type (0, 2) tensor in T 2 0 ( V ) = V ∗ ⊗ V ∗ . {\displaystyle T_{2}^{0}(V)=V^{*}\otimes V^{*}.} An example of such a bilinear form may be defined, termed the associated metric tensor, and is usually denoted g. == Tensor rank == A simple tensor (also called a tensor of rank one, elementary tensor or decomposable tensor) is a tensor that can be written as a product of tensors of the form T = a ⊗ b ⊗ ⋯ ⊗ d {\displaystyle T=a\otimes b\otimes \cdots \otimes d} where a, b, ..., d are nonzero and in V or V∗ – that is, if the tensor is nonzero and completely factorizable. Every tensor can be expressed as a sum of simple tensors. The rank of a tensor T is the minimum number of simple tensors that sum to T. The zero tensor has rank zero. A nonzero order 0 or 1 tensor always has rank 1. The rank of a non-zero order 2 or higher tensor is less than or equal to the product of the dimensions of all but the highest-dimensioned vectors in (a sum of products of) which the tensor can be expressed, which is dn−1 when each product is of n vectors from a finite-dimensional vector space of dimension d. The term rank of a tensor extends the notion of the rank of a matrix in linear algebra, although the term is also often used to mean the order (or degree) of a tensor. The rank of a matrix is the minimum number of column vectors needed to span the range of the matrix. A matrix thus has rank one if it can be written as an outer product of two nonzero vectors: A = v w T . {\displaystyle A=vw^{\mathrm {T} }.} The rank of a matrix A is the smallest number of such outer products that can be summed to produce it: A = v 1 w 1 T + ⋯ + v k w k T . 
{\displaystyle A=v_{1}w_{1}^{\mathrm {T} }+\cdots +v_{k}w_{k}^{\mathrm {T} }.} In indices, a tensor of rank 1 is a tensor of the form T i j … k ℓ … = a i b j ⋯ c k d ℓ ⋯ . {\displaystyle T_{ij\dots }^{k\ell \dots }=a_{i}b_{j}\cdots c^{k}d^{\ell }\cdots .} The rank of a tensor of order 2 agrees with the rank when the tensor is regarded as a matrix, and can be determined from Gaussian elimination for instance. The rank of an order 3 or higher tensor is however often very difficult to determine, and low rank decompositions of tensors are sometimes of great practical interest. In fact, the problem of finding the rank of an order 3 tensor over any finite field is NP-Complete, and over the rationals, is NP-Hard. Computational tasks such as the efficient multiplication of matrices and the efficient evaluation of polynomials can be recast as the problem of simultaneously evaluating a set of bilinear forms z k = ∑ i j T i j k x i y j {\displaystyle z_{k}=\sum _{ij}T_{ijk}x_{i}y_{j}} for given inputs xi and yj. If a low-rank decomposition of the tensor T is known, then an efficient evaluation strategy is known. == Universal property == The space T n m ( V ) {\displaystyle T_{n}^{m}(V)} can be characterized by a universal property in terms of multilinear mappings. Amongst the advantages of this approach are that it gives a way to show that many linear mappings are "natural" or "geometric" (in other words are independent of any choice of basis). Explicit computational information can then be written down using bases, and this order of priorities can be more convenient than proving a formula gives rise to a natural mapping. Another aspect is that tensor products are not used only for free modules, and the "universal" approach carries over more easily to more general situations. 
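Returning to the rank discussion above: for order-2 tensors, the identification of rank with the minimal number of outer products can be sketched concretely with the SVD, which always yields such a decomposition with exactly rank(A) nonzero terms (the vectors below are arbitrary illustrative values).

```python
import numpy as np

# A matrix (order-2 tensor) built as a sum of two outer products.
v1, w1 = np.array([1.0, 0.0, 2.0]), np.array([3.0, 1.0])
v2, w2 = np.array([0.0, 1.0, 1.0]), np.array([0.0, 2.0])
A = np.outer(v1, w1) + np.outer(v2, w2)

# Two independent outer products give a rank-2 matrix.
assert np.linalg.matrix_rank(A) == 2

# The SVD recovers a minimal decomposition A = sum_i s_i u_i v_i^T.
U, s, Vt = np.linalg.svd(A)
B = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(2))
assert np.allclose(A, B)
```

For order 3 and higher no such well-behaved decomposition algorithm exists, which is one way to see why tensor rank becomes computationally hard.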
A scalar-valued function on a Cartesian product (or direct sum) of vector spaces f : V 1 × ⋯ × V N → F {\displaystyle f:V_{1}\times \cdots \times V_{N}\to F} is multilinear if it is linear in each argument. The space of all multilinear mappings from V1 × ... × VN to W is denoted LN(V1, ..., VN; W). When N = 1, a multilinear mapping is just an ordinary linear mapping, and the space of all linear mappings from V to W is denoted L(V; W). The universal characterization of the tensor product implies that, for each multilinear function f ∈ L m + n ( V ∗ , … , V ∗ ⏟ m , V , … , V ⏟ n ; W ) {\displaystyle f\in L^{m+n}(\underbrace {V^{*},\ldots ,V^{*}} _{m},\underbrace {V,\ldots ,V} _{n};W)} (where W can represent the field of scalars, a vector space, or a tensor space) there exists a unique linear function T f ∈ L ( V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ m ⊗ V ⊗ ⋯ ⊗ V ⏟ n ; W ) {\displaystyle T_{f}\in L(\underbrace {V^{*}\otimes \cdots \otimes V^{*}} _{m}\otimes \underbrace {V\otimes \cdots \otimes V} _{n};W)} such that f ( α 1 , … , α m , v 1 , … , v n ) = T f ( α 1 ⊗ ⋯ ⊗ α m ⊗ v 1 ⊗ ⋯ ⊗ v n ) {\displaystyle f(\alpha _{1},\ldots ,\alpha _{m},v_{1},\ldots ,v_{n})=T_{f}(\alpha _{1}\otimes \cdots \otimes \alpha _{m}\otimes v_{1}\otimes \cdots \otimes v_{n})} for all vi in V and αi in V∗. Using the universal property, it follows, when V is finite dimensional, that the space of (m, n)-tensors admits a natural isomorphism T n m ( V ) ≅ L ( V ∗ ⊗ ⋯ ⊗ V ∗ ⏟ m ⊗ V ⊗ ⋯ ⊗ V ⏟ n ; F ) ≅ L m + n ( V ∗ , … , V ∗ ⏟ m , V , … , V ⏟ n ; F ) . {\displaystyle T_{n}^{m}(V)\cong L(\underbrace {V^{*}\otimes \cdots \otimes V^{*}} _{m}\otimes \underbrace {V\otimes \cdots \otimes V} _{n};F)\cong L^{m+n}(\underbrace {V^{*},\ldots ,V^{*}} _{m},\underbrace {V,\ldots ,V} _{n};F).} Each V in the definition of the tensor corresponds to a V∗ inside the argument of the linear maps, and vice versa. (Note that in the former case, there are m copies of V and n copies of V∗, and in the latter case vice versa). 
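The correspondence f ↔ T_f can be made concrete in coordinates. As a small sketch (the matrix M and vectors below are illustrative, not from the text): a bilinear form f(x, y) = xᵀMy on R² × R³ corresponds to a unique linear functional on the tensor product R² ⊗ R³ ≅ R⁶, acting on the simple tensor x ⊗ y, which in coordinates is the Kronecker product.

```python
import numpy as np

# An arbitrary bilinear form f(x, y) = x^T M y on R^2 x R^3.
M = np.array([[1.0, 2.0, 0.0],
              [0.5, -1.0, 3.0]])

def f(x, y):
    return x @ M @ y

# The induced linear functional T_f on R^2 (x) R^3 = R^6 is just M
# flattened row-major, applied to x (x) y = np.kron(x, y).
T_f = M.reshape(-1)

x, y = np.array([2.0, -1.0]), np.array([1.0, 0.0, 4.0])
assert np.isclose(f(x, y), T_f @ np.kron(x, y))
```

The linearity of T_f on the whole 6-dimensional space, determined by its values on simple tensors, is exactly what the universal property guarantees.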
In particular, one has T 0 1 ( V ) ≅ L ( V ∗ ; F ) ≅ V , T 1 0 ( V ) ≅ L ( V ; F ) = V ∗ , T 1 1 ( V ) ≅ L ( V ; V ) . {\displaystyle {\begin{aligned}T_{0}^{1}(V)&\cong L(V^{*};F)\cong V,\\T_{1}^{0}(V)&\cong L(V;F)=V^{*},\\T_{1}^{1}(V)&\cong L(V;V).\end{aligned}}} == Tensor fields == Differential geometry, physics and engineering must often deal with tensor fields on smooth manifolds. The term tensor is sometimes used as a shorthand for tensor field. A tensor field expresses the concept of a tensor that varies from point to point on the manifold. == References == Abraham, Ralph; Marsden, Jerrold E. (1985), Foundations of Mechanics (2nd ed.), Reading, Massachusetts: Addison-Wesley, ISBN 0-201-40840-6. Bourbaki, Nicolas (1989), Elements of Mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9. de Groote, H. F. (1987), Lectures on the Complexity of Bilinear Problems, Lecture Notes in Computer Science, vol. 245, Springer, ISBN 3-540-17205-X. Halmos, Paul (1974), Finite-dimensional Vector Spaces, Springer, ISBN 0-387-90093-4. Håstad, Johan (November 15, 1989), "Tensor Rank Is NP-Complete", Journal of Algorithms, 11 (4): 644–654, doi:10.1016/0196-6774(90)90014-6. Jeevanjee, Nadir (2011), "An Introduction to Tensors and Group Theory for Physicists", Physics Today, 65 (4): 64, Bibcode:2012PhT....65d..64P, doi:10.1063/PT.3.1523, ISBN 978-0-8176-4714-8. Knuth, Donald E. (1998) [1969], The Art of Computer Programming, vol. 2 (3rd ed.), Addison-Wesley, pp. 145–146, ISBN 978-0-201-89684-8. Hackbusch, Wolfgang (2012), Tensor Spaces and Numerical Tensor Calculus, Springer, p. 4, ISBN 978-3-642-28027-6.
Wikipedia/Tensor_(intrinsic_definition)
In linear algebra and functional analysis, a projection is a linear transformation P {\displaystyle P} from a vector space to itself (an endomorphism) such that P ∘ P = P {\displaystyle P\circ P=P} . That is, whenever P {\displaystyle P} is applied twice to any vector, it gives the same result as if it were applied once (i.e. P {\displaystyle P} is idempotent). It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object. == Definitions == A projection on a vector space V {\displaystyle V} is a linear operator P : V → V {\displaystyle P\colon V\to V} such that P 2 = P {\displaystyle P^{2}=P} . When V {\displaystyle V} has an inner product and is complete, i.e. when V {\displaystyle V} is a Hilbert space, the concept of orthogonality can be used. A projection P {\displaystyle P} on a Hilbert space V {\displaystyle V} is called an orthogonal projection if it satisfies ⟨ P x , y ⟩ = ⟨ x , P y ⟩ {\displaystyle \langle P\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,P\mathbf {y} \rangle } for all x , y ∈ V {\displaystyle \mathbf {x} ,\mathbf {y} \in V} . A projection on a Hilbert space that is not orthogonal is called an oblique projection. === Projection matrix === A square matrix P {\displaystyle P} is called a projection matrix if it is equal to its square, i.e. if P 2 = P {\displaystyle P^{2}=P} .: p. 38  A square matrix P {\displaystyle P} is called an orthogonal projection matrix if P 2 = P = P T {\displaystyle P^{2}=P=P^{\mathrm {T} }} for a real matrix, and respectively P 2 = P = P ∗ {\displaystyle P^{2}=P=P^{*}} for a complex matrix, where P T {\displaystyle P^{\mathrm {T} }} denotes the transpose of P {\displaystyle P} and P ∗ {\displaystyle P^{*}} denotes the adjoint or Hermitian transpose of P {\displaystyle P} .: p. 
223  A projection matrix that is not an orthogonal projection matrix is called an oblique projection matrix. The eigenvalues of a projection matrix must be 0 or 1. == Examples == === Orthogonal projection === For example, the function which maps the point ( x , y , z ) {\displaystyle (x,y,z)} in three-dimensional space R 3 {\displaystyle \mathbb {R} ^{3}} to the point ( x , y , 0 ) {\displaystyle (x,y,0)} is an orthogonal projection onto the xy-plane. This function is represented by the matrix P = [ 1 0 0 0 1 0 0 0 0 ] . {\displaystyle P={\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}}.} The action of this matrix on an arbitrary vector is P [ x y z ] = [ x y 0 ] . {\displaystyle P{\begin{bmatrix}x\\y\\z\end{bmatrix}}={\begin{bmatrix}x\\y\\0\end{bmatrix}}.} To see that P {\displaystyle P} is indeed a projection, i.e., P = P 2 {\displaystyle P=P^{2}} , we compute P 2 [ x y z ] = P [ x y 0 ] = [ x y 0 ] = P [ x y z ] . {\displaystyle P^{2}{\begin{bmatrix}x\\y\\z\end{bmatrix}}=P{\begin{bmatrix}x\\y\\0\end{bmatrix}}={\begin{bmatrix}x\\y\\0\end{bmatrix}}=P{\begin{bmatrix}x\\y\\z\end{bmatrix}}.} Observing that P T = P {\displaystyle P^{\mathrm {T} }=P} shows that the projection is an orthogonal projection. === Oblique projection === A simple example of a non-orthogonal (oblique) projection is P = [ 0 0 α 1 ] . {\displaystyle P={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}.} Via matrix multiplication, one sees that P 2 = [ 0 0 α 1 ] [ 0 0 α 1 ] = [ 0 0 α 1 ] = P . {\displaystyle P^{2}={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}{\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}=P.} showing that P {\displaystyle P} is indeed a projection. The projection P {\displaystyle P} is orthogonal if and only if α = 0 {\displaystyle \alpha =0} because only then P T = P . {\displaystyle P^{\mathrm {T} }=P.} == Properties and classification == === Idempotence === By definition, a projection P {\displaystyle P} is idempotent (i.e. 
P 2 = P {\displaystyle P^{2}=P} ). === Open map === Every projection is an open map onto its image, meaning that it maps each open set in the domain to an open set in the subspace topology of the image. That is, for any vector x {\displaystyle \mathbf {x} } and any ball B x {\displaystyle B_{\mathbf {x} }} (with positive radius) centered on x {\displaystyle \mathbf {x} } , there exists a ball B P x {\displaystyle B_{P\mathbf {x} }} (with positive radius) centered on P x {\displaystyle P\mathbf {x} } that is wholly contained in the image P ( B x ) {\displaystyle P(B_{\mathbf {x} })} . === Complementarity of image and kernel === Let W {\displaystyle W} be a finite-dimensional vector space and P {\displaystyle P} be a projection on W {\displaystyle W} . Suppose the subspaces U {\displaystyle U} and V {\displaystyle V} are the image and kernel of P {\displaystyle P} respectively. Then P {\displaystyle P} has the following properties: P {\displaystyle P} is the identity operator I {\displaystyle I} on U {\displaystyle U} : ∀ x ∈ U : P x = x . {\displaystyle \forall \mathbf {x} \in U:P\mathbf {x} =\mathbf {x} .} We have a direct sum W = U ⊕ V {\displaystyle W=U\oplus V} . Every vector x ∈ W {\displaystyle \mathbf {x} \in W} may be decomposed uniquely as x = u + v {\displaystyle \mathbf {x} =\mathbf {u} +\mathbf {v} } with u = P x {\displaystyle \mathbf {u} =P\mathbf {x} } and v = x − P x = ( I − P ) x {\displaystyle \mathbf {v} =\mathbf {x} -P\mathbf {x} =\left(I-P\right)\mathbf {x} } , and where u ∈ U , v ∈ V . {\displaystyle \mathbf {u} \in U,\mathbf {v} \in V.} The image and kernel of a projection are complementary, as are P {\displaystyle P} and Q = I − P {\displaystyle Q=I-P} . The operator Q {\displaystyle Q} is also a projection as the image and kernel of P {\displaystyle P} become the kernel and image of Q {\displaystyle Q} and vice versa. 
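These complementarity properties can be checked directly for the xy-plane projection from the examples section: every vector splits as x = u + v with u = Px in the image and v = (I − P)x in the kernel, and Q = I − P is itself a projection.

```python
import numpy as np

# Orthogonal projection onto the xy-plane, and its complement Q = I - P.
P = np.diag([1.0, 1.0, 0.0])
Q = np.eye(3) - P

x = np.array([3.0, -1.0, 5.0])
u, v = P @ x, Q @ x              # image and kernel components of x

assert np.allclose(u + v, x)     # direct-sum decomposition W = U ⊕ V
assert np.allclose(P @ u, u)     # P is the identity on its image
assert np.allclose(P @ v, 0.0)   # v lies in the kernel of P
assert np.allclose(Q @ Q, Q)     # Q is also a projection
```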
We say P {\displaystyle P} is a projection along V {\displaystyle V} onto U {\displaystyle U} (kernel/image) and Q {\displaystyle Q} is a projection along U {\displaystyle U} onto V {\displaystyle V} . === Spectrum === In infinite-dimensional vector spaces, the spectrum of a projection is contained in { 0 , 1 } {\displaystyle \{0,1\}} as ( λ I − P ) − 1 = 1 λ I + 1 λ ( λ − 1 ) P . {\displaystyle (\lambda I-P)^{-1}={\frac {1}{\lambda }}I+{\frac {1}{\lambda (\lambda -1)}}P.} Only 0 or 1 can be an eigenvalue of a projection. This implies that an orthogonal projection P {\displaystyle P} is always a positive semi-definite matrix. In general, the corresponding eigenspaces are (respectively) the kernel and range of the projection. Decomposition of a vector space into direct sums is not unique. Therefore, given a subspace V {\displaystyle V} , there may be many projections whose range (or kernel) is V {\displaystyle V} . If a projection is nontrivial it has minimal polynomial x 2 − x = x ( x − 1 ) {\displaystyle x^{2}-x=x(x-1)} , which factors into distinct linear factors, and thus P {\displaystyle P} is diagonalizable. === Product of projections === The product of projections is not in general a projection, even if they are orthogonal. If two projections commute then their product is a projection, but the converse is false: the product of two non-commuting projections may be a projection. If two orthogonal projections commute then their product is an orthogonal projection. If the product of two orthogonal projections is an orthogonal projection, then the two orthogonal projections commute (more generally: two self-adjoint endomorphisms commute if and only if their product is self-adjoint). === Orthogonal projections === When the vector space W {\displaystyle W} has an inner product and is complete (is a Hilbert space) the concept of orthogonality can be used. 
An orthogonal projection is a projection for which the range U {\displaystyle U} and the kernel V {\displaystyle V} are orthogonal subspaces. Thus, for every x {\displaystyle \mathbf {x} } and y {\displaystyle \mathbf {y} } in W {\displaystyle W} , ⟨ P x , ( y − P y ) ⟩ = ⟨ ( x − P x ) , P y ⟩ = 0 {\displaystyle \langle P\mathbf {x} ,(\mathbf {y} -P\mathbf {y} )\rangle =\langle (\mathbf {x} -P\mathbf {x} ),P\mathbf {y} \rangle =0} . Equivalently: ⟨ x , P y ⟩ = ⟨ P x , P y ⟩ = ⟨ P x , y ⟩ . {\displaystyle \langle \mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,\mathbf {y} \rangle .} A projection is orthogonal if and only if it is self-adjoint. Using the self-adjoint and idempotent properties of P {\displaystyle P} , for any x {\displaystyle \mathbf {x} } and y {\displaystyle \mathbf {y} } in W {\displaystyle W} we have P x ∈ U {\displaystyle P\mathbf {x} \in U} , y − P y ∈ V {\displaystyle \mathbf {y} -P\mathbf {y} \in V} , and ⟨ P x , y − P y ⟩ = ⟨ x , ( P − P 2 ) y ⟩ = 0 {\displaystyle \langle P\mathbf {x} ,\mathbf {y} -P\mathbf {y} \rangle =\langle \mathbf {x} ,\left(P-P^{2}\right)\mathbf {y} \rangle =0} where ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the inner product associated with W {\displaystyle W} . Therefore, P {\displaystyle P} and I − P {\displaystyle I-P} are orthogonal projections. 
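The equivalence between orthogonality and self-adjointness can be illustrated with the two earlier example matrices: the xy-plane projection satisfies ⟨Px, y⟩ = ⟨x, Py⟩ and its range is orthogonal to its kernel, while the oblique example (shown here with α = 2, an illustrative choice) fails self-adjointness.

```python
import numpy as np

# Self-adjointness <Px, y> = <x, Py> for the orthogonal example...
P = np.diag([1.0, 1.0, 0.0])
rng = np.random.default_rng(1)
x, y = rng.standard_normal(3), rng.standard_normal(3)

assert np.isclose((P @ x) @ y, x @ (P @ y))       # self-adjoint
assert np.isclose((P @ x) @ (y - P @ y), 0.0)     # range ⟂ kernel

# ...and its failure for the oblique projection with alpha = 2.
P_obl = np.array([[0.0, 0.0],
                  [2.0, 1.0]])
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose(P_obl @ P_obl, P_obl)                  # still a projection
assert not np.isclose((P_obl @ a) @ b, a @ (P_obl @ b))   # not self-adjoint
```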
The other direction, namely that if P {\displaystyle P} is orthogonal then it is self-adjoint, follows from the implication from ⟨ ( x − P x ) , P y ⟩ = ⟨ P x , ( y − P y ) ⟩ = 0 {\displaystyle \langle (\mathbf {x} -P\mathbf {x} ),P\mathbf {y} \rangle =\langle P\mathbf {x} ,(\mathbf {y} -P\mathbf {y} )\rangle =0} to ⟨ x , P y ⟩ = ⟨ P x , P y ⟩ = ⟨ P x , y ⟩ = ⟨ x , P ∗ y ⟩ {\displaystyle \langle \mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,P^{*}\mathbf {y} \rangle } for every x {\displaystyle x} and y {\displaystyle y} in W {\displaystyle W} ; thus P = P ∗ {\displaystyle P=P^{*}} . The existence of an orthogonal projection onto a closed subspace follows from the Hilbert projection theorem. ==== Properties and special cases ==== An orthogonal projection is a bounded operator. This is because for every v {\displaystyle \mathbf {v} } in the vector space we have, by the Cauchy–Schwarz inequality: ‖ P v ‖ 2 = ⟨ P v , P v ⟩ = ⟨ P v , v ⟩ ≤ ‖ P v ‖ ⋅ ‖ v ‖ {\displaystyle \left\|P\mathbf {v} \right\|^{2}=\langle P\mathbf {v} ,P\mathbf {v} \rangle =\langle P\mathbf {v} ,\mathbf {v} \rangle \leq \left\|P\mathbf {v} \right\|\cdot \left\|\mathbf {v} \right\|} Thus ‖ P v ‖ ≤ ‖ v ‖ {\displaystyle \left\|P\mathbf {v} \right\|\leq \left\|\mathbf {v} \right\|} . For finite-dimensional complex or real vector spaces, the standard inner product can be substituted for ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } . ===== Formulas ===== A simple case occurs when the orthogonal projection is onto a line. If u {\displaystyle \mathbf {u} } is a unit vector on the line, then the projection is given by the outer product P u = u u T . {\displaystyle P_{\mathbf {u} }=\mathbf {u} \mathbf {u} ^{\mathsf {T}}.} (If u {\displaystyle \mathbf {u} } is complex-valued, the transpose in the above equation is replaced by a Hermitian transpose). 
This operator leaves u invariant, and it annihilates all vectors orthogonal to u {\displaystyle \mathbf {u} } , proving that it is indeed the orthogonal projection onto the line containing u. A simple way to see this is to consider an arbitrary vector x {\displaystyle \mathbf {x} } as the sum of a component on the line (i.e. the projected vector we seek) and another perpendicular to it, x = x ∥ + x ⊥ {\displaystyle \mathbf {x} =\mathbf {x} _{\parallel }+\mathbf {x} _{\perp }} . Applying projection, we get P u x = u u T x ∥ + u u T x ⊥ = u ( sgn ⁡ ( u T x ∥ ) ‖ x ∥ ‖ ) + u ⋅ 0 = x ∥ {\displaystyle P_{\mathbf {u} }\mathbf {x} =\mathbf {u} \mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\parallel }+\mathbf {u} \mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\perp }=\mathbf {u} \left(\operatorname {sgn} \left(\mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\parallel }\right)\left\|\mathbf {x} _{\parallel }\right\|\right)+\mathbf {u} \cdot \mathbf {0} =\mathbf {x} _{\parallel }} by the properties of the dot product of parallel and perpendicular vectors. This formula can be generalized to orthogonal projections on a subspace of arbitrary dimension. Let u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} be an orthonormal basis of the subspace U {\displaystyle U} , with the assumption that the integer k ≥ 1 {\displaystyle k\geq 1} , and let A {\displaystyle A} denote the n × k {\displaystyle n\times k} matrix whose columns are u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} , i.e., A = [ u 1 ⋯ u k ] {\displaystyle A={\begin{bmatrix}\mathbf {u} _{1}&\cdots &\mathbf {u} _{k}\end{bmatrix}}} . Then the projection is given by: P A = A A T {\displaystyle P_{A}=AA^{\mathsf {T}}} which can be rewritten as P A = ∑ i ⟨ u i , ⋅ ⟩ u i . 
{\displaystyle P_{A}=\sum _{i}\langle \mathbf {u} _{i},\cdot \rangle \mathbf {u} _{i}.} The matrix A T {\displaystyle A^{\mathsf {T}}} is the partial isometry that vanishes on the orthogonal complement of U {\displaystyle U} , and A {\displaystyle A} is the isometry that embeds U {\displaystyle U} into the underlying vector space. The range of P A {\displaystyle P_{A}} is therefore the final space of A {\displaystyle A} . It is also clear that A A T {\displaystyle AA^{\mathsf {T}}} is the identity operator on U {\displaystyle U} . The orthonormality condition can also be dropped. If u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} is a (not necessarily orthonormal) basis with k ≥ 1 {\displaystyle k\geq 1} , and A {\displaystyle A} is the matrix with these vectors as columns, then the projection is: P A = A ( A T A ) − 1 A T . {\displaystyle P_{A}=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}.} The matrix A {\displaystyle A} still embeds U {\displaystyle U} into the underlying vector space but is no longer an isometry in general. The matrix ( A T A ) − 1 {\displaystyle \left(A^{\mathsf {T}}A\right)^{-1}} is a "normalizing factor" that recovers the norm. For example, the rank-1 operator u u T {\displaystyle \mathbf {u} \mathbf {u} ^{\mathsf {T}}} is not a projection if ‖ u ‖ ≠ 1. {\displaystyle \left\|\mathbf {u} \right\|\neq 1.} After dividing by u T u = ‖ u ‖ 2 , {\displaystyle \mathbf {u} ^{\mathsf {T}}\mathbf {u} =\left\|\mathbf {u} \right\|^{2},} we obtain the projection u ( u T u ) − 1 u T {\displaystyle \mathbf {u} \left(\mathbf {u} ^{\mathsf {T}}\mathbf {u} \right)^{-1}\mathbf {u} ^{\mathsf {T}}} onto the subspace spanned by u {\displaystyle u} . 
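As a numerical sketch of the formula just given (using NumPy; the basis columns below are illustrative, not taken from the article), one can check that P = A(AᵀA)⁻¹Aᵀ is idempotent, symmetric, and leaves the column space of A fixed:

```python
import numpy as np

# Orthogonal projection onto the column space of A via P = A (A^T A)^{-1} A^T.
# The basis columns are illustrative.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])   # two independent, non-orthonormal columns in R^3

P = A @ np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(P @ P, P)   # idempotent
assert np.allclose(P, P.T)     # symmetric, hence an orthogonal projection

v = A @ np.array([2.0, -3.0])  # any vector already in range(A)...
assert np.allclose(P @ v, v)   # ...is left unchanged
```

For a single unit column u this reduces to the outer product u uᵀ discussed above; the inverse factor (AᵀA)⁻¹ is exactly the "normalizing factor" the text describes.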
In the general case, we can have an arbitrary positive definite matrix D {\displaystyle D} defining an inner product ⟨ x , y ⟩ D = y † D x {\displaystyle \langle x,y\rangle _{D}=y^{\dagger }Dx} , and the projection P A {\displaystyle P_{A}} is given by P A x = argmin y ∈ range ⁡ ( A ) ⁡ ‖ x − y ‖ D 2 {\textstyle P_{A}x=\operatorname {argmin} _{y\in \operatorname {range} (A)}\left\|x-y\right\|_{D}^{2}} . Then P A = A ( A T D A ) − 1 A T D . {\displaystyle P_{A}=A\left(A^{\mathsf {T}}DA\right)^{-1}A^{\mathsf {T}}D.} When the range space of the projection is generated by a frame (i.e. the number of generators is greater than its dimension), the formula for the projection takes the form: P A = A A + {\displaystyle P_{A}=AA^{+}} . Here A + {\displaystyle A^{+}} stands for the Moore–Penrose pseudoinverse. This is just one of many ways to construct the projection operator. If [ A B ] {\displaystyle {\begin{bmatrix}A&B\end{bmatrix}}} is a non-singular matrix and A T B = 0 {\displaystyle A^{\mathsf {T}}B=0} (i.e., B {\displaystyle B} is the null space matrix of A {\displaystyle A} ), the following holds: I = [ A B ] [ A B ] − 1 [ A T B T ] − 1 [ A T B T ] = [ A B ] ( [ A T B T ] [ A B ] ) − 1 [ A T B T ] = [ A B ] [ A T A O O B T B ] − 1 [ A T B T ] = A ( A T A ) − 1 A T + B ( B T B ) − 1 B T {\displaystyle {\begin{aligned}I&={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}A&B\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\&={\begin{bmatrix}A&B\end{bmatrix}}\left({\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}{\begin{bmatrix}A&B\end{bmatrix}}\right)^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\&={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}A^{\mathsf {T}}A&O\\O&B^{\mathsf {T}}B\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\[4pt]&=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf 
{T}}+B\left(B^{\mathsf {T}}B\right)^{-1}B^{\mathsf {T}}\end{aligned}}} If the orthogonal condition is enhanced to A T W B = A T W T B = 0 {\displaystyle A^{\mathsf {T}}WB=A^{\mathsf {T}}W^{\mathsf {T}}B=0} with W {\displaystyle W} non-singular, the following holds: I = [ A B ] [ ( A T W A ) − 1 A T ( B T W B ) − 1 B T ] W . {\displaystyle I={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}\left(A^{\mathsf {T}}WA\right)^{-1}A^{\mathsf {T}}\\\left(B^{\mathsf {T}}WB\right)^{-1}B^{\mathsf {T}}\end{bmatrix}}W.} All these formulas also hold for complex inner product spaces, provided that the conjugate transpose is used instead of the transpose. Further details on sums of projectors can be found in Banerjee and Roy (2014). Also see Banerjee (2004) for application of sums of projectors in basic spherical trigonometry. === Oblique projections === The term oblique projections is sometimes used to refer to non-orthogonal projections. These projections are also used to represent spatial figures in two-dimensional drawings (see oblique projection), though not as frequently as orthogonal projections. Whereas calculating the fitted value of an ordinary least squares regression requires an orthogonal projection, calculating the fitted value of an instrumental variables regression requires an oblique projection. A projection is defined by its kernel and the basis vectors used to characterize its range (which is a complement of the kernel). When these basis vectors are orthogonal to the kernel, then the projection is an orthogonal projection. When these basis vectors are not orthogonal to the kernel, the projection is an oblique projection, or just a projection. ==== A matrix representation formula for a nonzero projection operator ==== Let P : V → V {\displaystyle P\colon V\to V} be a linear operator such that P 2 = P {\displaystyle P^{2}=P} and assume that P {\displaystyle P} is not the zero operator. 
Let the vectors u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} form a basis for the range of P {\displaystyle P} , and assemble these vectors in the n × k {\displaystyle n\times k} matrix A {\displaystyle A} . Then k ≥ 1 {\displaystyle k\geq 1} , otherwise k = 0 {\displaystyle k=0} and P {\displaystyle P} is the zero operator. The range and the kernel are complementary spaces, so the kernel has dimension n − k {\displaystyle n-k} . It follows that the orthogonal complement of the kernel has dimension k {\displaystyle k} . Let v 1 , … , v k {\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{k}} form a basis for the orthogonal complement of the kernel of the projection, and assemble these vectors in the matrix B {\displaystyle B} . Then the projection P {\displaystyle P} (with the condition k ≥ 1 {\displaystyle k\geq 1} ) is given by P = A ( B T A ) − 1 B T . {\displaystyle P=A\left(B^{\mathsf {T}}A\right)^{-1}B^{\mathsf {T}}.} This expression generalizes the formula for orthogonal projections given above. A standard proof of this expression is the following. For any vector x {\displaystyle \mathbf {x} } in the vector space V {\displaystyle V} , we can decompose x = x 1 + x 2 {\displaystyle \mathbf {x} =\mathbf {x} _{1}+\mathbf {x} _{2}} , where vector x 1 = P ( x ) {\displaystyle \mathbf {x} _{1}=P(\mathbf {x} )} is in the image of P {\displaystyle P} , and vector x 2 = x − P ( x ) . {\displaystyle \mathbf {x} _{2}=\mathbf {x} -P(\mathbf {x} ).} So P ( x 2 ) = P ( x ) − P 2 ( x ) = 0 {\displaystyle P(\mathbf {x} _{2})=P(\mathbf {x} )-P^{2}(\mathbf {x} )=\mathbf {0} } , and then x 2 {\displaystyle \mathbf {x} _{2}} is in the kernel of P {\displaystyle P} , which is the null space of B T {\displaystyle B^{\mathsf {T}}} . In other words, the vector x 1 {\displaystyle \mathbf {x} _{1}} is in the column space of A , {\displaystyle A,} so x 1 = A w {\displaystyle \mathbf {x} _{1}=A\mathbf {w} } for some k {\displaystyle k} -dimensional vector w {\displaystyle \mathbf {w} } and the vector x 2 {\displaystyle \mathbf {x} _{2}} satisfies B T x 2 = 0 {\displaystyle B^{\mathsf {T}}\mathbf {x} _{2}=\mathbf {0} } by the construction of B {\displaystyle B} . Putting these conditions together, we find a vector w {\displaystyle \mathbf {w} } so that B T ( x − A w ) = 0 {\displaystyle B^{\mathsf {T}}(\mathbf {x} -A\mathbf {w} )=\mathbf {0} } . Since matrices A {\displaystyle A} and B {\displaystyle B} are of full rank k {\displaystyle k} by their construction, the k × k {\displaystyle k\times k} -matrix B T A {\displaystyle B^{\mathsf {T}}A} is invertible. So the equation B T ( x − A w ) = 0 {\displaystyle B^{\mathsf {T}}(\mathbf {x} -A\mathbf {w} )=\mathbf {0} } gives the vector w = ( B T A ) − 1 B T x . {\displaystyle \mathbf {w} =(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}\mathbf {x} .} In this way, P x = x 1 = A w = A ( B T A ) − 1 B T x {\displaystyle P\mathbf {x} =\mathbf {x} _{1}=A\mathbf {w} =A(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}\mathbf {x} } for any vector x ∈ V {\displaystyle \mathbf {x} \in V} and hence P = A ( B T A ) − 1 B T {\displaystyle P=A(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}} . In the case that P {\displaystyle P} is an orthogonal projection, we can take A = B {\displaystyle A=B} , and it follows that P = A ( A T A ) − 1 A T {\displaystyle P=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}} . By using this formula, one can easily check that P = P T {\displaystyle P=P^{\mathsf {T}}} . In general, if the vector space is over the complex number field, one then uses the Hermitian transpose A ∗ {\displaystyle A^{*}} and has the formula P = A ( A ∗ A ) − 1 A ∗ {\displaystyle P=A\left(A^{*}A\right)^{-1}A^{*}} . 
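A small numerical sketch of the oblique formula P = A(BᵀA)⁻¹Bᵀ (using NumPy; the vectors are illustrative): here the range is the line spanned by A and the kernel is the set of vectors annihilated by Bᵀ.

```python
import numpy as np

# Oblique projection P = A (B^T A)^{-1} B^T: range(A) is the range of P,
# and the kernel consists of the vectors x with B^T x = 0.
A = np.array([[1.0], [1.0]])   # project onto the line y = x ...
B = np.array([[0.0], [1.0]])   # ... with kernel = vectors orthogonal to B (the x-axis)

P = A @ np.linalg.inv(B.T @ A) @ B.T

assert np.allclose(P @ P, P)       # idempotent, hence a projection
assert not np.allclose(P, P.T)     # not symmetric: the projection is oblique
assert np.allclose(P @ A, A)       # the range is left fixed

x = np.array([3.0, 0.0])           # B^T x = 0, so x is in the kernel
assert np.allclose(P @ x, 0.0)
```

Taking A = B in the same code recovers the symmetric (orthogonal) case described in the text.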
Recall that one can express the Moore–Penrose inverse of the matrix A {\displaystyle A} by A + = ( A ∗ A ) − 1 A ∗ {\displaystyle A^{+}=(A^{*}A)^{-1}A^{*}} since A {\displaystyle A} has full column rank, so P = A A + {\displaystyle P=AA^{+}} . ==== Singular values ==== I − P {\displaystyle I-P} is also an oblique projection. The singular values of P {\displaystyle P} and I − P {\displaystyle I-P} can be computed from an orthonormal basis of the range of A {\displaystyle A} . Let Q A {\displaystyle Q_{A}} be an orthonormal basis for the range of A {\displaystyle A} and let Q A ⊥ {\displaystyle Q_{A}^{\perp }} be an orthonormal basis for its orthogonal complement. Denote the singular values of the matrix Q A T A ( B T A ) − 1 B T Q A ⊥ {\displaystyle Q_{A}^{T}A(B^{T}A)^{-1}B^{T}Q_{A}^{\perp }} by the positive values γ 1 ≥ γ 2 ≥ … ≥ γ k {\displaystyle \gamma _{1}\geq \gamma _{2}\geq \ldots \geq \gamma _{k}} . With this, the singular values for P {\displaystyle P} are: σ i = { 1 + γ i 2 1 ≤ i ≤ k 0 otherwise {\displaystyle \sigma _{i}={\begin{cases}{\sqrt {1+\gamma _{i}^{2}}}&1\leq i\leq k\\0&{\text{otherwise}}\end{cases}}} and the singular values for I − P {\displaystyle I-P} are σ i = { 1 + γ i 2 1 ≤ i ≤ k 1 k + 1 ≤ i ≤ n − k 0 otherwise {\displaystyle \sigma _{i}={\begin{cases}{\sqrt {1+\gamma _{i}^{2}}}&1\leq i\leq k\\1&k+1\leq i\leq n-k\\0&{\text{otherwise}}\end{cases}}} This implies that the largest singular values of P {\displaystyle P} and I − P {\displaystyle I-P} are equal, and thus that the matrix norms of the two oblique projections are the same. However, their condition numbers need not be equal, since they satisfy the relation κ ( I − P ) = σ 1 1 ≥ σ 1 σ k = κ ( P ) {\displaystyle \kappa (I-P)={\frac {\sigma _{1}}{1}}\geq {\frac {\sigma _{1}}{\sigma _{k}}}=\kappa (P)} . 
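The equality of the largest singular values of P and I − P can be checked directly on a small illustrative oblique projection (NumPy; the matrices are not from the article):

```python
import numpy as np

# An oblique 2x2 projection: P and I - P share the same largest singular
# value, hence the same matrix (operator) norm.
A = np.array([[1.0], [1.0]])
B = np.array([[0.0], [1.0]])
P = A @ np.linalg.inv(B.T @ A) @ B.T   # oblique projection onto span(A)

sP = np.linalg.svd(P, compute_uv=False)
sQ = np.linalg.svd(np.eye(2) - P, compute_uv=False)

assert np.isclose(sP.max(), sQ.max())   # equal largest singular values
```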
=== Finding projection with an inner product === Let V {\displaystyle V} be a vector space (in this case a plane) spanned by orthogonal vectors u 1 , u 2 , … , u p {\displaystyle \mathbf {u} _{1},\mathbf {u} _{2},\dots ,\mathbf {u} _{p}} . Let y {\displaystyle \mathbf {y} } be a vector. One can define a projection of y {\displaystyle \mathbf {y} } onto V {\displaystyle V} as proj V ⁡ y = y ⋅ u i u i ⋅ u i u i {\displaystyle \operatorname {proj} _{V}\mathbf {y} ={\frac {\mathbf {y} \cdot \mathbf {u} ^{i}}{\mathbf {u} ^{i}\cdot \mathbf {u} ^{i}}}\mathbf {u} ^{i}} where repeated indices are summed over (Einstein summation notation). The vector y {\displaystyle \mathbf {y} } can be written as an orthogonal sum such that y = proj V ⁡ y + z {\displaystyle \mathbf {y} =\operatorname {proj} _{V}\mathbf {y} +\mathbf {z} } . proj V ⁡ y {\displaystyle \operatorname {proj} _{V}\mathbf {y} } is sometimes denoted as y ^ {\displaystyle {\hat {\mathbf {y} }}} . A theorem in linear algebra states that this z {\displaystyle \mathbf {z} } is orthogonal to V {\displaystyle V} and that its norm ‖ z ‖ {\displaystyle \|\mathbf {z} \|} is the smallest distance (the orthogonal distance) from y {\displaystyle \mathbf {y} } to V {\displaystyle V} ; this fact is commonly used in areas such as machine learning. == Canonical forms == Any projection P = P 2 {\displaystyle P=P^{2}} on a vector space of dimension d {\displaystyle d} over a field is a diagonalizable matrix, since its minimal polynomial divides x 2 − x {\displaystyle x^{2}-x} , which splits into distinct linear factors. Thus there exists a basis in which P {\displaystyle P} has the form P = I r ⊕ 0 d − r {\displaystyle P=I_{r}\oplus 0_{d-r}} where r {\displaystyle r} is the rank of P {\displaystyle P} . Here I r {\displaystyle I_{r}} is the identity matrix of size r {\displaystyle r} , 0 d − r {\displaystyle 0_{d-r}} is the zero matrix of size d − r {\displaystyle d-r} , and ⊕ {\displaystyle \oplus } is the direct sum operator. 
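A quick numerical illustration of this canonical form (NumPy; the projection matrix is illustrative): the eigenvalues of any projection are exactly 0 and 1, and the trace equals the rank r.

```python
import numpy as np

# Any projection is diagonalizable with eigenvalues 0 and 1; the number of
# unit eigenvalues equals the rank r, so trace(P) = r.
P = np.array([[0.0, 1.0],
              [0.0, 1.0]])            # an oblique projection: P @ P == P
assert np.allclose(P @ P, P)

eigvals = np.sort(np.linalg.eigvals(P).real)
assert np.allclose(eigvals, [0.0, 1.0])   # minimal polynomial divides x^2 - x
assert np.isclose(np.trace(P), 1.0)       # trace = rank = 1
```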
If the vector space is complex and equipped with an inner product, then there is an orthonormal basis in which the matrix of P is P = [ 1 σ 1 0 0 ] ⊕ ⋯ ⊕ [ 1 σ k 0 0 ] ⊕ I m ⊕ 0 s . {\displaystyle P={\begin{bmatrix}1&\sigma _{1}\\0&0\end{bmatrix}}\oplus \cdots \oplus {\begin{bmatrix}1&\sigma _{k}\\0&0\end{bmatrix}}\oplus I_{m}\oplus 0_{s}.} where σ 1 ≥ σ 2 ≥ ⋯ ≥ σ k > 0 {\displaystyle \sigma _{1}\geq \sigma _{2}\geq \dots \geq \sigma _{k}>0} . The integers k , s , m {\displaystyle k,s,m} and the real numbers σ i {\displaystyle \sigma _{i}} are uniquely determined. 2 k + s + m = d {\displaystyle 2k+s+m=d} . The factor I m ⊕ 0 s {\displaystyle I_{m}\oplus 0_{s}} corresponds to the maximal invariant subspace on which P {\displaystyle P} acts as an orthogonal projection (so that P itself is orthogonal if and only if k = 0 {\displaystyle k=0} ) and the σ i {\displaystyle \sigma _{i}} -blocks correspond to the oblique components. == Projections on normed vector spaces == When the underlying vector space X {\displaystyle X} is a (not necessarily finite-dimensional) normed vector space, analytic questions, irrelevant in the finite-dimensional case, need to be considered. Assume now X {\displaystyle X} is a Banach space. Many of the algebraic results discussed above survive the passage to this context. A given direct sum decomposition of X {\displaystyle X} into complementary subspaces still specifies a projection, and vice versa. If X {\displaystyle X} is the direct sum X = U ⊕ V {\displaystyle X=U\oplus V} , then the operator defined by P ( u + v ) = u {\displaystyle P(u+v)=u} is still a projection with range U {\displaystyle U} and kernel V {\displaystyle V} . It is also clear that P 2 = P {\displaystyle P^{2}=P} . Conversely, if P {\displaystyle P} is projection on X {\displaystyle X} , i.e. P 2 = P {\displaystyle P^{2}=P} , then it is easily verified that ( 1 − P ) 2 = ( 1 − P ) {\displaystyle (1-P)^{2}=(1-P)} . 
In other words, 1 − P {\displaystyle 1-P} is also a projection. The relation P 2 = P {\displaystyle P^{2}=P} implies 1 = P + ( 1 − P ) {\displaystyle 1=P+(1-P)} and X {\displaystyle X} is the direct sum rg ⁡ ( P ) ⊕ rg ⁡ ( 1 − P ) {\displaystyle \operatorname {rg} (P)\oplus \operatorname {rg} (1-P)} . However, in contrast to the finite-dimensional case, projections need not be continuous in general. If a subspace U {\displaystyle U} of X {\displaystyle X} is not closed in the norm topology, then the projection onto U {\displaystyle U} is not continuous. In other words, the range of a continuous projection P {\displaystyle P} must be a closed subspace. Furthermore, the kernel of a continuous projection (in fact, a continuous linear operator in general) is closed. Thus a continuous projection P {\displaystyle P} gives a decomposition of X {\displaystyle X} into two complementary closed subspaces: X = rg ⁡ ( P ) ⊕ ker ⁡ ( P ) = ker ⁡ ( 1 − P ) ⊕ ker ⁡ ( P ) {\displaystyle X=\operatorname {rg} (P)\oplus \ker(P)=\ker(1-P)\oplus \ker(P)} . The converse holds also, with an additional assumption. Suppose U {\displaystyle U} is a closed subspace of X {\displaystyle X} . If there exists a closed subspace V {\displaystyle V} such that X = U ⊕ V, then the projection P {\displaystyle P} with range U {\displaystyle U} and kernel V {\displaystyle V} is continuous. This follows from the closed graph theorem. Suppose xn → x and Pxn → y. One needs to show that P x = y {\displaystyle Px=y} . Since U {\displaystyle U} is closed and {Pxn} ⊂ U, y lies in U {\displaystyle U} , i.e. Py = y. Also, xn − Pxn = (I − P)xn → x − y. Because V {\displaystyle V} is closed and {(I − P)xn} ⊂ V, we have x − y ∈ V {\displaystyle x-y\in V} , i.e. P ( x − y ) = P x − P y = P x − y = 0 {\displaystyle P(x-y)=Px-Py=Px-y=0} , which proves the claim. The above argument makes use of the assumption that both U {\displaystyle U} and V {\displaystyle V} are closed. 
In general, given a closed subspace U {\displaystyle U} , there need not exist a complementary closed subspace V {\displaystyle V} , although for Hilbert spaces this can always be done by taking the orthogonal complement. For Banach spaces, a one-dimensional subspace always has a closed complementary subspace. This is an immediate consequence of Hahn–Banach theorem. Let U {\displaystyle U} be the linear span of u {\displaystyle u} . By Hahn–Banach, there exists a bounded linear functional φ {\displaystyle \varphi } such that φ(u) = 1. The operator P ( x ) = φ ( x ) u {\displaystyle P(x)=\varphi (x)u} satisfies P 2 = P {\displaystyle P^{2}=P} , i.e. it is a projection. Boundedness of φ {\displaystyle \varphi } implies continuity of P {\displaystyle P} and therefore ker ⁡ ( P ) = rg ⁡ ( I − P ) {\displaystyle \ker(P)=\operatorname {rg} (I-P)} is a closed complementary subspace of U {\displaystyle U} . == Applications and further considerations == Projections (orthogonal and otherwise) play a major role in algorithms for certain linear algebra problems: QR decomposition (see Householder transformation and Gram–Schmidt decomposition); Singular value decomposition Reduction to Hessenberg form (the first step in many eigenvalue algorithms) Linear regression Projective elements of matrix algebras are used in the construction of certain K-groups in Operator K-theory As stated above, projections are a special case of idempotents. Analytically, orthogonal projections are non-commutative generalizations of characteristic functions. Idempotents are used in classifying, for instance, semisimple algebras, while measure theory begins with considering characteristic functions of measurable sets. Therefore, as one can imagine, projections are very often encountered in the context of operator algebras. In particular, a von Neumann algebra is generated by its complete lattice of projections. 
== Generalizations == More generally, given a map between normed vector spaces T : V → W , {\displaystyle T\colon V\to W,} one can analogously ask for this map to be an isometry on the orthogonal complement of the kernel: that ( ker ⁡ T ) ⊥ → W {\displaystyle (\ker T)^{\perp }\to W} be an isometry (compare Partial isometry); in particular it must be onto. The case of an orthogonal projection is when W is a subspace of V. In Riemannian geometry, this is used in the definition of a Riemannian submersion. == See also == Centering matrix, which is an example of a projection matrix. Dykstra's projection algorithm to compute the projection onto an intersection of sets Invariant subspace Least-squares spectral analysis Orthogonalization Properties of trace == Notes == == References == Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388 Dunford, N.; Schwartz, J. T. (1958). Linear Operators, Part I: General Theory. Interscience. Meyer, Carl D. (2000). Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics. ISBN 978-0-89871-454-8. Brezinski, Claude: Projection Methods for Systems of Equations, North-Holland, ISBN 0-444-82777-3 (1997). == External links == MIT Linear Algebra Lecture on Projection Matrices on YouTube, from MIT OpenCourseWare Linear Algebra 15d: The Projection Transformation on YouTube, by Pavel Grinfeld. Planar Geometric Projections Tutorial – a simple-to-follow tutorial explaining the different types of planar geometric projections.
Wikipedia/Projection_(linear_algebra)
In general relativity, the metric tensor (in this context often abbreviated to simply the metric) is the fundamental object of study. The metric captures all the geometric and causal structure of spacetime, being used to define notions such as time, distance, volume, curvature, angle, and separation of the future and the past. In general relativity, the metric tensor plays the role of the gravitational potential in the classical theory of gravitation, although the physical content of the associated equations is entirely different. Gutfreund and Renn say "that in general relativity the gravitational potential is represented by the metric tensor." == Notation and conventions == This article works with a metric signature that is mostly positive (− + + +); see sign convention. The gravitation constant G {\displaystyle G} will be kept explicit. This article employs the Einstein summation convention, where repeated indices are automatically summed over. == Definition == Mathematically, spacetime is represented by a four-dimensional differentiable manifold M {\displaystyle M} and the metric tensor is given as a covariant, second-degree, symmetric tensor on M {\displaystyle M} , conventionally denoted by g {\displaystyle g} . Moreover, the metric is required to be nondegenerate with signature (− + + +). A manifold M {\displaystyle M} equipped with such a metric is a type of Lorentzian manifold. Explicitly, the metric tensor is a symmetric bilinear form on each tangent space of M {\displaystyle M} that varies in a smooth (or differentiable) manner from point to point. Given two tangent vectors u {\displaystyle u} and v {\displaystyle v} at a point x {\displaystyle x} in M {\displaystyle M} , the metric can be evaluated on u {\displaystyle u} and v {\displaystyle v} to give a real number: g x ( u , v ) = g x ( v , u ) ∈ R . {\displaystyle g_{x}(u,v)=g_{x}(v,u)\in \mathbb {R} .} This is a generalization of the dot product of ordinary Euclidean space. 
Unlike Euclidean space – where the dot product is positive definite – the metric is indefinite and gives each tangent space the structure of Minkowski space. == Local coordinates and matrix representations == Physicists usually work in local coordinates (i.e. coordinates defined on some local patch of M {\displaystyle M} ). In local coordinates x μ {\displaystyle x^{\mu }} (where μ {\displaystyle \mu } is an index that runs from 0 to 3) the metric can be written in the form g = g μ ν d x μ ⊗ d x ν . {\displaystyle g=g_{\mu \nu }dx^{\mu }\otimes dx^{\nu }.} The factors d x μ {\displaystyle dx^{\mu }} are one-form gradients of the scalar coordinate fields x μ {\displaystyle x^{\mu }} . The metric is thus a linear combination of tensor products of one-form gradients of coordinates. The coefficients g μ ν {\displaystyle g_{\mu \nu }} are a set of 16 real-valued functions (since the tensor g {\displaystyle g} is a tensor field, which is defined at all points of a spacetime manifold). In order for the metric to be symmetric g μ ν = g ν μ , {\displaystyle g_{\mu \nu }=g_{\nu \mu },} giving 10 independent coefficients. If the local coordinates are specified, or understood from context, the metric can be written as a 4 × 4 symmetric matrix with entries g μ ν {\displaystyle g_{\mu \nu }} . The nondegeneracy of g μ ν {\displaystyle g_{\mu \nu }} means that this matrix is non-singular (i.e. has non-vanishing determinant), while the Lorentzian signature of g {\displaystyle g} implies that the matrix has one negative and three positive eigenvalues. Physicists often refer to this matrix or the coordinates g μ ν {\displaystyle g_{\mu \nu }} themselves as the metric (see, however, abstract index notation). 
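The symmetry, nondegeneracy, and Lorentzian signature conditions on the component matrix can all be checked numerically. A minimal sketch (NumPy; the matrix below is an illustrative metric at a single point, with c = 1):

```python
import numpy as np

# A metric in local coordinates is a symmetric, non-singular 4x4 matrix
# with Lorentzian signature: one negative and three positive eigenvalues.
g = np.array([[-1.0, 0.1, 0.0, 0.0],
              [ 0.1, 1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 0.0, 1.0]])

assert np.allclose(g, g.T)                    # symmetric: 10 independent entries
assert not np.isclose(np.linalg.det(g), 0.0)  # nondegenerate
eig = np.linalg.eigvalsh(g)
assert (eig < 0).sum() == 1 and (eig > 0).sum() == 3   # signature (- + + +)
```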
With the quantities d x μ {\displaystyle dx^{\mu }} being regarded as the components of an infinitesimal coordinate displacement four-vector (not to be confused with the one-forms of the same notation above), the metric determines the invariant square of an infinitesimal line element, often referred to as an interval. The interval is often denoted d s 2 = g μ ν d x μ d x ν . {\displaystyle ds^{2}=g_{\mu \nu }dx^{\mu }dx^{\nu }.} The interval d s 2 {\displaystyle ds^{2}} imparts information about the causal structure of spacetime. When d s 2 < 0 {\displaystyle ds^{2}<0} , the interval is timelike and the square root of the absolute value of d s 2 {\displaystyle ds^{2}} is an incremental proper time. Only timelike intervals can be physically traversed by a massive object. When d s 2 = 0 {\displaystyle ds^{2}=0} , the interval is lightlike, and can only be traversed by (massless) things that move at the speed of light. When d s 2 > 0 {\displaystyle ds^{2}>0} , the interval is spacelike and the square root of d s 2 {\displaystyle ds^{2}} acts as an incremental proper length. Spacelike intervals cannot be traversed, since they connect events that are outside each other's light cones. Events can be causally related only if they are within each other's light cones. The components of the metric depend on the choice of local coordinate system. Under a change of coordinates x μ → x μ ¯ {\displaystyle x^{\mu }\to x^{\bar {\mu }}} , the metric components transform as g μ ¯ ν ¯ = ∂ x ρ ∂ x μ ¯ ∂ x σ ∂ x ν ¯ g ρ σ = Λ ρ μ ¯ Λ σ ν ¯ g ρ σ . {\displaystyle g_{{\bar {\mu }}{\bar {\nu }}}={\frac {\partial x^{\rho }}{\partial x^{\bar {\mu }}}}{\frac {\partial x^{\sigma }}{\partial x^{\bar {\nu }}}}g_{\rho \sigma }=\Lambda ^{\rho }{}_{\bar {\mu }}\,\Lambda ^{\sigma }{}_{\bar {\nu }}\,g_{\rho \sigma }.} == Properties == The metric tensor plays a key role in index manipulation. 
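The timelike / lightlike / spacelike trichotomy can be illustrated with the simplest case, the flat Minkowski metric (a sketch in NumPy; c = 3.0 is an illustrative value, not the physical speed of light):

```python
import numpy as np

# Classify displacements by the sign of ds^2 = eta_{mu nu} dx^mu dx^nu,
# with dx = (dt, dx, dy, dz) and the flat metric diag(-c^2, 1, 1, 1).
c = 3.0
eta = np.diag([-c**2, 1.0, 1.0, 1.0])

def interval(dx):
    return dx @ eta @ dx   # the invariant ds^2 for the displacement dx

assert interval(np.array([1.0, 1.0, 0.0, 0.0])) < 0    # timelike:  |dx| < c dt
assert interval(np.array([1.0, 3.0, 0.0, 0.0])) == 0   # lightlike: |dx| = c dt
assert interval(np.array([1.0, 5.0, 0.0, 0.0])) > 0    # spacelike: |dx| > c dt
```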
In index notation, the coefficients g μ ν {\displaystyle g_{\mu \nu }} of the metric tensor g {\displaystyle \mathbf {g} } provide a link between covariant and contravariant components of other tensors. Contracting the contravariant index of a tensor with one of a covariant metric tensor coefficient has the effect of lowering the index g μ ν A ν = A μ {\displaystyle g_{\mu \nu }A^{\nu }=A_{\mu }} and similarly a contravariant metric coefficient raises the index g μ ν A ν = A μ . {\displaystyle g^{\mu \nu }A_{\nu }=A^{\mu }.} Applying this property of raising and lowering indices to the metric tensor components themselves leads to the property g μ ν g ν λ = δ μ λ {\displaystyle g_{\mu \nu }g^{\nu \lambda }=\delta _{\mu }^{\lambda }} For a diagonal metric (one for which coefficients g μ ν = 0 , ∀ μ ≠ ν {\displaystyle g_{\mu \nu }=0,\,\forall \mu \neq \nu } ; i.e. the basis vectors are orthogonal to each other), this implies that a given covariant coefficient of the metric tensor is the inverse of the corresponding contravariant coefficient g 00 = ( g 00 ) − 1 , g 11 = ( g 11 ) − 1 {\displaystyle g_{00}=(g^{00})^{-1},g_{11}=(g^{11})^{-1}} , etc. == Examples == === Flat spacetime === The simplest example of a Lorentzian manifold is flat spacetime, which can be given as R4 with coordinates ( t , x , y , z ) {\displaystyle (t,x,y,z)} and the metric d s 2 = − c 2 d t 2 + d x 2 + d y 2 + d z 2 = η μ ν d x μ d x ν . {\displaystyle ds^{2}=-c^{2}dt^{2}+dx^{2}+dy^{2}+dz^{2}=\eta _{\mu \nu }dx^{\mu }dx^{\nu }.} These coordinates actually cover all of R4. The flat space metric (or Minkowski metric) is often denoted by the symbol η and is the metric used in special relativity. 
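The index raising and lowering rules stated earlier in this section, including the diagonal-metric property g₀₀ = (g⁰⁰)⁻¹, can be verified numerically with the flat metric just introduced (a NumPy sketch; c = 2.0 and the vector components are illustrative):

```python
import numpy as np

# Verify g_{mu nu} g^{nu lambda} = delta^lambda_mu and index raising/lowering
# for the diagonal flat metric diag(-c^2, 1, 1, 1).
c = 2.0
g = np.diag([-c**2, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

assert np.allclose(g @ g_inv, np.eye(4))                # g g^{-1} = identity
assert np.allclose(np.diag(g_inv), 1.0 / np.diag(g))    # diagonal-metric property

A_up = np.array([1.0, 2.0, 3.0, 4.0])    # contravariant components A^mu
A_down = g @ A_up                         # lowering: A_mu = g_{mu nu} A^nu
assert np.allclose(g_inv @ A_down, A_up)  # raising recovers A^mu
```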
In the above coordinates, the matrix representation of η is η = ( − c 2 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 ) {\displaystyle \eta ={\begin{pmatrix}-c^{2}&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}} (An alternative convention replaces coordinate t {\displaystyle t} by c t {\displaystyle ct} , and defines η {\displaystyle \eta } as in Minkowski space § Standard basis.) In spherical coordinates ( t , r , θ , ϕ ) {\displaystyle (t,r,\theta ,\phi )} , the flat space metric takes the form d s 2 = − c 2 d t 2 + d r 2 + r 2 d Ω 2 {\displaystyle ds^{2}=-c^{2}dt^{2}+dr^{2}+r^{2}d\Omega ^{2}} where d Ω 2 = d θ 2 + sin 2 ⁡ θ d ϕ 2 {\displaystyle d\Omega ^{2}=d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}} is the standard metric on the 2-sphere. === Black hole metrics === The Schwarzschild metric describes an uncharged, non-rotating black hole. There are also metrics that describe rotating and charged black holes. ==== Schwarzschild metric ==== Besides the flat space metric the most important metric in general relativity is the Schwarzschild metric which can be given in one set of local coordinates by d s 2 = − ( 1 − 2 G M r c 2 ) c 2 d t 2 + ( 1 − 2 G M r c 2 ) − 1 d r 2 + r 2 d Ω 2 {\displaystyle ds^{2}=-\left(1-{\frac {2GM}{rc^{2}}}\right)c^{2}dt^{2}+\left(1-{\frac {2GM}{rc^{2}}}\right)^{-1}dr^{2}+r^{2}d\Omega ^{2}} where, again, d Ω 2 {\displaystyle d\Omega ^{2}} is the standard metric on the 2-sphere. Here, G {\displaystyle G} is the gravitation constant and M {\displaystyle M} is a constant with the dimensions of mass. Its derivation can be found here. The Schwarzschild metric approaches the Minkowski metric as M {\displaystyle M} approaches zero (except at the origin where it is undefined). Similarly, when r {\displaystyle r} goes to infinity, the Schwarzschild metric approaches the Minkowski metric. 
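Both limits just described (M → 0 and r → ∞) can be checked numerically from the diagonal component matrix of the Schwarzschild metric. A sketch in NumPy, using geometrized units G = c = 1 (an assumption of this example; the sample values of r, θ, and M are illustrative):

```python
import numpy as np

# Schwarzschild metric components in coordinates (ct, r, theta, phi),
# with G = c = 1, so f = 1 - 2M/r.
def schwarzschild(M, r, theta):
    f = 1.0 - 2.0 * M / r
    return np.diag([-f, 1.0 / f, r**2, r**2 * np.sin(theta)**2])

r, theta = 10.0, np.pi / 3
flat = np.diag([-1.0, 1.0, r**2, r**2 * np.sin(theta)**2])  # flat, spherical coords

# M -> 0 gives exactly the flat metric:
assert np.allclose(schwarzschild(0.0, r, theta), flat)

# and at fixed M the (t, r) block approaches the flat one for large r:
assert np.allclose(schwarzschild(1.0, 1e9, theta)[:2, :2],
                   np.diag([-1.0, 1.0]), atol=1e-8)
```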
With coordinates ( x 0 , x 1 , x 2 , x 3 ) = ( c t , r , θ , φ ) , {\displaystyle \left(x^{0},x^{1},x^{2},x^{3}\right)=(ct,r,\theta ,\varphi )\,,} the metric can be written as g μ ν = [ − ( 1 − 2 G M r c 2 ) 0 0 0 0 ( 1 − 2 G M r c 2 ) − 1 0 0 0 0 r 2 0 0 0 0 r 2 sin 2 ⁡ θ ] . {\displaystyle g_{\mu \nu }={\begin{bmatrix}-\left(1-{\frac {2GM}{rc^{2}}}\right)&0&0&0\\0&\left(1-{\frac {2GM}{rc^{2}}}\right)^{-1}&0&0\\0&0&r^{2}&0\\0&0&0&r^{2}\sin ^{2}\theta \end{bmatrix}}\,.} Several other systems of coordinates have been devised for the Schwarzschild metric: Eddington–Finkelstein coordinates, Gullstrand–Painlevé coordinates, Kruskal–Szekeres coordinates, and Lemaître coordinates. ==== Rotating and charged black holes ==== The Schwarzschild solution supposes an object that is not rotating in space and is not charged. To account for charge, the metric must satisfy the Einstein field equations like before, as well as Maxwell's equations in a curved spacetime. A charged, non-rotating mass is described by the Reissner–Nordström metric. Rotating black holes are described by the Kerr metric (uncharged) and the Kerr–Newman metric (charged). === Other metrics === Other notable metrics are: Alcubierre metric, de Sitter/anti-de Sitter metrics, Friedmann–Lemaître–Robertson–Walker metric, Isotropic coordinates, Lemaître–Tolman metric, Peres metric, Rindler coordinates, Weyl–Lewis–Papapetrou coordinates, Gödel metric. Some of them are without the event horizon or can be without the gravitational singularity. == Volume == The metric g induces a natural volume form (up to a sign), which can be used to integrate over a region of a manifold. 
Given local coordinates x μ {\displaystyle x^{\mu }} for the manifold, the volume form can be written v o l g = ± | det ( g μ ν ) | d x 0 ∧ d x 1 ∧ d x 2 ∧ d x 3 {\displaystyle \mathrm {vol} _{g}=\pm {\sqrt {\left|\det(g_{\mu \nu })\right|}}\,dx^{0}\wedge dx^{1}\wedge dx^{2}\wedge dx^{3}} where det ( g μ ν ) {\displaystyle \det(g_{\mu \nu })} is the determinant of the matrix of components of the metric tensor for the given coordinate system. == Curvature == The metric g {\displaystyle g} completely determines the curvature of spacetime. According to the fundamental theorem of Riemannian geometry, there is a unique connection ∇ on any semi-Riemannian manifold that is compatible with the metric and torsion-free. This connection is called the Levi-Civita connection. The Christoffel symbols of this connection are given in terms of partial derivatives of the metric in local coordinates x μ {\displaystyle x^{\mu }} by the formula Γ λ μ ν = 1 2 g λ ρ ( ∂ g ρ μ ∂ x ν + ∂ g ρ ν ∂ x μ − ∂ g μ ν ∂ x ρ ) = 1 2 g λ ρ ( g ρ μ , ν + g ρ ν , μ − g μ ν , ρ ) {\displaystyle \Gamma ^{\lambda }{}_{\mu \nu }={\frac {1}{2}}g^{\lambda \rho }\left({\frac {\partial g_{\rho \mu }}{\partial x^{\nu }}}+{\frac {\partial g_{\rho \nu }}{\partial x^{\mu }}}-{\frac {\partial g_{\mu \nu }}{\partial x^{\rho }}}\right)={\frac {1}{2}}g^{\lambda \rho }\left(g_{\rho \mu ,\nu }+g_{\rho \nu ,\mu }-g_{\mu \nu ,\rho }\right)} (where commas indicate partial derivatives). The curvature of spacetime is then given by the Riemann curvature tensor which is defined in terms of the Levi-Civita connection ∇. In local coordinates this tensor is given by: R ρ σ μ ν = ∂ μ Γ ρ ν σ − ∂ ν Γ ρ μ σ + Γ ρ μ λ Γ λ ν σ − Γ ρ ν λ Γ λ μ σ . 
{\displaystyle {R^{\rho }}_{\sigma \mu \nu }=\partial _{\mu }\Gamma ^{\rho }{}_{\nu \sigma }-\partial _{\nu }\Gamma ^{\rho }{}_{\mu \sigma }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }.} The curvature is then expressible purely in terms of the metric g {\displaystyle g} and its derivatives. == Einstein's equations == One of the core ideas of general relativity is that the metric (and the associated geometry of spacetime) is determined by the matter and energy content of spacetime. Einstein's field equations: R μ ν − 1 2 R g μ ν = 8 π G c 4 T μ ν {\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }={\frac {8\pi G}{c^{4}}}\,T_{\mu \nu }} where the Ricci curvature tensor R ν ρ = d e f R μ ν μ ρ {\displaystyle R_{\nu \rho }\ {\stackrel {\mathrm {def} }{=}}\ {R^{\mu }}_{\nu \mu \rho }} and the scalar curvature R = d e f g μ ν R μ ν {\displaystyle R\ {\stackrel {\mathrm {def} }{=}}\ g^{\mu \nu }R_{\mu \nu }} relate the metric (and the associated curvature tensors) to the stress–energy tensor T μ ν {\displaystyle T_{\mu \nu }} . This tensor equation is a complicated set of nonlinear partial differential equations for the metric components. Exact solutions of Einstein's field equations are very difficult to find. == See also == Alternatives to general relativity Introduction to the mathematics of general relativity Mathematics of general relativity Ricci calculus == References == See general relativity resources for a list of references.
Wikipedia/Metric_tensor_(general_relativity)
In mathematics, specifically multilinear algebra, a dyadic or dyadic tensor is a second order tensor, written in a notation that fits in with vector algebra. There are numerous ways to multiply two Euclidean vectors. The dot product takes in two vectors and returns a scalar, while the cross product returns a pseudovector. Both of these have various significant geometric interpretations and are widely used in mathematics, physics, and engineering. The dyadic product takes in two vectors and returns a second order tensor called a dyadic in this context. A dyadic can be used to contain physical or geometric information, although in general there is no direct way of geometrically interpreting it. The dyadic product is distributive over vector addition, and associative with scalar multiplication. Therefore, the dyadic product is linear in both of its operands. In general, two dyadics can be added to get another dyadic, and multiplied by numbers to scale the dyadic. However, the product is not commutative; changing the order of the vectors results in a different dyadic. The formalism of dyadic algebra is an extension of vector algebra to include the dyadic product of vectors. The dyadic product is also associative with the dot and cross products with other vectors, which allows the dot, cross, and dyadic products to be combined to obtain other scalars, vectors, or dyadics. It also has some aspects of matrix algebra, as the numerical components of vectors can be arranged into row and column vectors, and those of second order tensors in square matrices. Also, the dot, cross, and dyadic products can all be expressed in matrix form. Dyadic expressions may closely resemble the matrix equivalents. The dot product of a dyadic with a vector gives another vector, and taking the dot product of this result gives a scalar derived from the dyadic. The effect that a given dyadic has on other vectors can provide indirect physical or geometric interpretations. 
Dyadic notation was first established by Josiah Willard Gibbs in 1884. The notation and terminology are relatively obsolete today. Its uses in physics include continuum mechanics and electromagnetism. In this article, upper-case bold variables denote dyadics (including dyads) whereas lower-case bold variables denote vectors. An alternative notation uses respectively double and single over- or underbars. == Definitions and terminology == === Dyadic, outer, and tensor products === A dyad is a tensor of order two and rank one, and is the dyadic product of two vectors (complex vectors in general), whereas a dyadic is a general tensor of order two (which may be full rank or not). There are several equivalent terms and notations for this product: the dyadic product of two vectors a {\displaystyle \mathbf {a} } and b {\displaystyle \mathbf {b} } is denoted by a b {\displaystyle \mathbf {a} \mathbf {b} } (juxtaposed; no symbols, multiplication signs, crosses, dots, etc.) the outer product of two column vectors a {\displaystyle \mathbf {a} } and b {\displaystyle \mathbf {b} } is denoted and defined as a ⊗ b {\displaystyle \mathbf {a} \otimes \mathbf {b} } or a b T {\displaystyle \mathbf {a} \mathbf {b} ^{\mathsf {T}}} , where T {\displaystyle {\mathsf {T}}} means transpose, the tensor product of two vectors a {\displaystyle \mathbf {a} } and b {\displaystyle \mathbf {b} } is denoted a ⊗ b {\displaystyle \mathbf {a} \otimes \mathbf {b} } , In the dyadic context they all have the same definition and meaning, and are used synonymously, although the tensor product is an instance of the more general and abstract use of the term. 
==== Three-dimensional Euclidean space ==== To illustrate the equivalent usage, consider three-dimensional Euclidean space, letting: a = a 1 i + a 2 j + a 3 k b = b 1 i + b 2 j + b 3 k {\displaystyle {\begin{aligned}\mathbf {a} &=a_{1}\mathbf {i} +a_{2}\mathbf {j} +a_{3}\mathbf {k} \\\mathbf {b} &=b_{1}\mathbf {i} +b_{2}\mathbf {j} +b_{3}\mathbf {k} \end{aligned}}} be two vectors where i, j, k (also denoted e1, e2, e3) are the standard basis vectors in this vector space (see also Cartesian coordinates). Then the dyadic product of a and b can be represented as a sum: a b = a 1 b 1 i i + a 1 b 2 i j + a 1 b 3 i k + a 2 b 1 j i + a 2 b 2 j j + a 2 b 3 j k + a 3 b 1 k i + a 3 b 2 k j + a 3 b 3 k k {\displaystyle {\begin{aligned}\mathbf {ab} =\qquad &a_{1}b_{1}\mathbf {ii} +a_{1}b_{2}\mathbf {ij} +a_{1}b_{3}\mathbf {ik} \\{}+{}&a_{2}b_{1}\mathbf {ji} +a_{2}b_{2}\mathbf {jj} +a_{2}b_{3}\mathbf {jk} \\{}+{}&a_{3}b_{1}\mathbf {ki} +a_{3}b_{2}\mathbf {kj} +a_{3}b_{3}\mathbf {kk} \end{aligned}}} or by extension from row and column vectors, a 3×3 matrix (also the result of the outer product or tensor product of a and b): a b ≡ a ⊗ b ≡ a b T = ( a 1 a 2 a 3 ) ( b 1 b 2 b 3 ) = ( a 1 b 1 a 1 b 2 a 1 b 3 a 2 b 1 a 2 b 2 a 2 b 3 a 3 b 1 a 3 b 2 a 3 b 3 ) . {\displaystyle \mathbf {ab} \equiv \mathbf {a} \otimes \mathbf {b} \equiv \mathbf {ab} ^{\mathsf {T}}={\begin{pmatrix}a_{1}\\a_{2}\\a_{3}\end{pmatrix}}{\begin{pmatrix}b_{1}&b_{2}&b_{3}\end{pmatrix}}={\begin{pmatrix}a_{1}b_{1}&a_{1}b_{2}&a_{1}b_{3}\\a_{2}b_{1}&a_{2}b_{2}&a_{2}b_{3}\\a_{3}b_{1}&a_{3}b_{2}&a_{3}b_{3}\end{pmatrix}}.} A dyad is a component of the dyadic (a monomial of the sum or equivalently an entry of the matrix) — the dyadic product of a pair of basis vectors scalar multiplied by a number. 
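In numpy terms this correspondence is direct (a small illustrative sketch; the component values are arbitrary): the dyadic product ab is the outer product of the coordinate vectors, and a dyad such as a₁b₂ ij contributes exactly the (1,2) entry of the matrix.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# ab == a (x) b == a b^T: the (i, j) entry is a_i * b_j
ab = np.outer(a, b)
assert ab.shape == (3, 3)
assert ab[0, 1] == a[0] * b[1]   # the coefficient of the basis dyad ij

# Equivalent column-vector-times-row-vector matrix product
assert np.array_equal(ab, a.reshape(3, 1) @ b.reshape(1, 3))
```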
Just as the standard basis (and unit) vectors i, j, k, have the representations: i = ( 1 0 0 ) , j = ( 0 1 0 ) , k = ( 0 0 1 ) {\displaystyle {\begin{aligned}\mathbf {i} &={\begin{pmatrix}1\\0\\0\end{pmatrix}},&\mathbf {j} &={\begin{pmatrix}0\\1\\0\end{pmatrix}},&\mathbf {k} &={\begin{pmatrix}0\\0\\1\end{pmatrix}}\end{aligned}}} (which can be transposed), the standard basis (and unit) dyads have the representation: i i = ( 1 0 0 0 0 0 0 0 0 ) , i j = ( 0 1 0 0 0 0 0 0 0 ) , i k = ( 0 0 1 0 0 0 0 0 0 ) j i = ( 0 0 0 1 0 0 0 0 0 ) , j j = ( 0 0 0 0 1 0 0 0 0 ) , j k = ( 0 0 0 0 0 1 0 0 0 ) k i = ( 0 0 0 0 0 0 1 0 0 ) , k j = ( 0 0 0 0 0 0 0 1 0 ) , k k = ( 0 0 0 0 0 0 0 0 1 ) {\displaystyle {\begin{aligned}\mathbf {ii} &={\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix}},&\mathbf {ij} &={\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}},&\mathbf {ik} &={\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix}}\\\mathbf {ji} &={\begin{pmatrix}0&0&0\\1&0&0\\0&0&0\end{pmatrix}},&\mathbf {jj} &={\begin{pmatrix}0&0&0\\0&1&0\\0&0&0\end{pmatrix}},&\mathbf {jk} &={\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix}}\\\mathbf {ki} &={\begin{pmatrix}0&0&0\\0&0&0\\1&0&0\end{pmatrix}},&\mathbf {kj} &={\begin{pmatrix}0&0&0\\0&0&0\\0&1&0\end{pmatrix}},&\mathbf {kk} &={\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}}\end{aligned}}} For a simple numerical example in the standard basis: A = 2 i j + 3 2 j i − 8 π j k + 2 2 3 k k = 2 ( 0 1 0 0 0 0 0 0 0 ) + 3 2 ( 0 0 0 1 0 0 0 0 0 ) − 8 π ( 0 0 0 0 0 1 0 0 0 ) + 2 2 3 ( 0 0 0 0 0 0 0 0 1 ) = ( 0 2 0 3 2 0 − 8 π 0 0 2 2 3 ) {\displaystyle {\begin{aligned}\mathbf {A} &=2\mathbf {ij} +{\frac {\sqrt {3}}{2}}\mathbf {ji} -8\pi \mathbf {jk} +{\frac {2{\sqrt {2}}}{3}}\mathbf {kk} \\[2pt]&=2{\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}}+{\frac {\sqrt {3}}{2}}{\begin{pmatrix}0&0&0\\1&0&0\\0&0&0\end{pmatrix}}-8\pi {\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix}}+{\frac {2{\sqrt 
{2}}}{3}}{\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}}\\[2pt]&={\begin{pmatrix}0&2&0\\{\frac {\sqrt {3}}{2}}&0&-8\pi \\0&0&{\frac {2{\sqrt {2}}}{3}}\end{pmatrix}}\end{aligned}}} ==== N-dimensional Euclidean space ==== If the Euclidean space is N-dimensional, and a = ∑ i = 1 N a i e i = a 1 e 1 + a 2 e 2 + … + a N e N b = ∑ j = 1 N b j e j = b 1 e 1 + b 2 e 2 + … + b N e N {\displaystyle {\begin{aligned}\mathbf {a} &=\sum _{i=1}^{N}a_{i}\mathbf {e} _{i}=a_{1}\mathbf {e} _{1}+a_{2}\mathbf {e} _{2}+{\ldots }+a_{N}\mathbf {e} _{N}\\\mathbf {b} &=\sum _{j=1}^{N}b_{j}\mathbf {e} _{j}=b_{1}\mathbf {e} _{1}+b_{2}\mathbf {e} _{2}+\ldots +b_{N}\mathbf {e} _{N}\end{aligned}}} where ei and ej are the standard basis vectors in N-dimensions (the index i on ei selects a specific vector, not a component of the vector as in ai), then in algebraic form their dyadic product is: a b = ∑ j = 1 N ∑ i = 1 N a i b j e i e j . {\displaystyle \mathbf {ab} =\sum _{j=1}^{N}\sum _{i=1}^{N}a_{i}b_{j}\mathbf {e} _{i}\mathbf {e} _{j}.} This is known as the nonion form of the dyadic. Their outer/tensor product in matrix form is: a b = a b T = ( a 1 a 2 ⋮ a N ) ( b 1 b 2 ⋯ b N ) = ( a 1 b 1 a 1 b 2 ⋯ a 1 b N a 2 b 1 a 2 b 2 ⋯ a 2 b N ⋮ ⋮ ⋱ ⋮ a N b 1 a N b 2 ⋯ a N b N ) . 
{\displaystyle \mathbf {ab} =\mathbf {ab} ^{\mathsf {T}}={\begin{pmatrix}a_{1}\\a_{2}\\\vdots \\a_{N}\end{pmatrix}}{\begin{pmatrix}b_{1}&b_{2}&\cdots &b_{N}\end{pmatrix}}={\begin{pmatrix}a_{1}b_{1}&a_{1}b_{2}&\cdots &a_{1}b_{N}\\a_{2}b_{1}&a_{2}b_{2}&\cdots &a_{2}b_{N}\\\vdots &\vdots &\ddots &\vdots \\a_{N}b_{1}&a_{N}b_{2}&\cdots &a_{N}b_{N}\end{pmatrix}}.} A dyadic polynomial A, otherwise known as a dyadic, is formed from multiple vectors ai and bj: A = ∑ i a i b i = a 1 b 1 + a 2 b 2 + a 3 b 3 + … {\displaystyle \mathbf {A} =\sum _{i}\mathbf {a} _{i}\mathbf {b} _{i}=\mathbf {a} _{1}\mathbf {b} _{1}+\mathbf {a} _{2}\mathbf {b} _{2}+\mathbf {a} _{3}\mathbf {b} _{3}+\ldots } A dyadic which cannot be reduced to a sum of less than N dyads is said to be complete. In this case, the forming vectors are non-coplanar, see Chen (1983). === Classification === The following table classifies dyadics: === Identities === The following identities are a direct consequence of the definition of the tensor product: == Dyadic algebra == === Product of dyadic and vector === There are four operations defined on a vector and dyadic, constructed from the products defined on vectors. === Product of dyadic and dyadic === There are five operations for a dyadic to another dyadic. Let a, b, c, d be real vectors. 
Then: Letting A = ∑ i a i b i , B = ∑ j c j d j {\displaystyle \mathbf {A} =\sum _{i}\mathbf {a} _{i}\mathbf {b} _{i},\quad \mathbf {B} =\sum _{j}\mathbf {c} _{j}\mathbf {d} _{j}} be two general dyadics, we have: ==== Double-dot product ==== The first definition of the double-dot product is the Frobenius inner product, tr ⁡ ( A B T ) = ∑ i , j tr ⁡ ( a i b i T d j c j T ) = ∑ i , j tr ⁡ ( c j T a i b i T d j ) = ∑ i , j ( a i ⋅ c j ) ( b i ⋅ d j ) = A ⋅ ⋅ B {\displaystyle {\begin{aligned}\operatorname {tr} \left(\mathbf {A} \mathbf {B} ^{\mathsf {T}}\right)&=\sum _{i,j}\operatorname {tr} \left(\mathbf {a} _{i}\mathbf {b} _{i}^{\mathsf {T}}\mathbf {d} _{j}\mathbf {c} _{j}^{\mathsf {T}}\right)\\&=\sum _{i,j}\operatorname {tr} \left(\mathbf {c} _{j}^{\mathsf {T}}\mathbf {a} _{i}\mathbf {b} _{i}^{\mathsf {T}}\mathbf {d} _{j}\right)\\&=\sum _{i,j}(\mathbf {a} _{i}\cdot \mathbf {c} _{j})(\mathbf {b} _{i}\cdot \mathbf {d} _{j})\\&=\mathbf {A} {}_{\centerdot }^{\centerdot }\mathbf {B} \end{aligned}}} Furthermore, since, A T = ∑ i , j ( a i b j T ) T = ∑ i , j b i a j T {\displaystyle {\begin{aligned}\mathbf {A} ^{\mathsf {T}}&=\sum _{i,j}\left(\mathbf {a} _{i}\mathbf {b} _{j}^{\mathsf {T}}\right)^{\mathsf {T}}\\&=\sum _{i,j}\mathbf {b} _{i}\mathbf {a} _{j}^{\mathsf {T}}\end{aligned}}} we get that, A ⋅ ⋅ B = A ⋅ ⋅ _ B T {\displaystyle \mathbf {A} {}_{\centerdot }^{\centerdot }\mathbf {B} =\mathbf {A} {\underline {{}_{\centerdot }^{\centerdot }}}\mathbf {B} ^{\mathsf {T}}} so the second possible definition of the double-dot product is just the first with an additional transposition on the second dyadic. For these reasons, the first definition of the double-dot product is preferred, though some authors still use the second. ==== Double-cross product ==== We can see that, for any dyad formed from two vectors a and b, its double cross product is zero. 
( a b ) × × ( a b ) = ( a × a ) ( b × b ) = 0 {\displaystyle \left(\mathbf {ab} \right){}_{\times }^{\times }\left(\mathbf {ab} \right)=\left(\mathbf {a} \times \mathbf {a} \right)\left(\mathbf {b} \times \mathbf {b} \right)=0} However, by definition, a dyadic double-cross product on itself will generally be non-zero. For example, a dyadic A composed of six different vectors A = ∑ i = 1 3 a i b i {\displaystyle \mathbf {A} =\sum _{i=1}^{3}\mathbf {a} _{i}\mathbf {b} _{i}} has a non-zero self-double-cross product of A × × A = 2 [ ( a 1 × a 2 ) ( b 1 × b 2 ) + ( a 2 × a 3 ) ( b 2 × b 3 ) + ( a 3 × a 1 ) ( b 3 × b 1 ) ] {\displaystyle \mathbf {A} {}_{\times }^{\times }\mathbf {A} =2\left[\left(\mathbf {a} _{1}\times \mathbf {a} _{2}\right)\left(\mathbf {b} _{1}\times \mathbf {b} _{2}\right)+\left(\mathbf {a} _{2}\times \mathbf {a} _{3}\right)\left(\mathbf {b} _{2}\times \mathbf {b} _{3}\right)+\left(\mathbf {a} _{3}\times \mathbf {a} _{1}\right)\left(\mathbf {b} _{3}\times \mathbf {b} _{1}\right)\right]} ==== Tensor contraction ==== The spur or expansion factor arises from the formal expansion of the dyadic in a coordinate basis by replacing each dyadic product by a dot product of vectors: | A | = A 11 i ⋅ i + A 12 i ⋅ j + A 13 i ⋅ k + A 21 j ⋅ i + A 22 j ⋅ j + A 23 j ⋅ k + A 31 k ⋅ i + A 32 k ⋅ j + A 33 k ⋅ k = A 11 + A 22 + A 33 {\displaystyle {\begin{aligned}|\mathbf {A} |=\qquad &A_{11}\mathbf {i} \cdot \mathbf {i} +A_{12}\mathbf {i} \cdot \mathbf {j} +A_{13}\mathbf {i} \cdot \mathbf {k} \\{}+{}&A_{21}\mathbf {j} \cdot \mathbf {i} +A_{22}\mathbf {j} \cdot \mathbf {j} +A_{23}\mathbf {j} \cdot \mathbf {k} \\{}+{}&A_{31}\mathbf {k} \cdot \mathbf {i} +A_{32}\mathbf {k} \cdot \mathbf {j} +A_{33}\mathbf {k} \cdot \mathbf {k} \\[6pt]=\qquad &A_{11}+A_{22}+A_{33}\end{aligned}}} in index notation this is the contraction of indices on the dyadic: | A | = ∑ i A i i {\displaystyle |\mathbf {A} |=\sum _{i}A_{i}{}^{i}} In three dimensions only, the rotation factor arises by 
replacing every dyadic product by a cross product ⟨ A ⟩ = A 11 i × i + A 12 i × j + A 13 i × k + A 21 j × i + A 22 j × j + A 23 j × k + A 31 k × i + A 32 k × j + A 33 k × k = A 12 k − A 13 j − A 21 k + A 23 i + A 31 j − A 32 i = ( A 23 − A 32 ) i + ( A 31 − A 13 ) j + ( A 12 − A 21 ) k {\displaystyle {\begin{aligned}\langle \mathbf {A} \rangle =\qquad &A_{11}\mathbf {i} \times \mathbf {i} +A_{12}\mathbf {i} \times \mathbf {j} +A_{13}\mathbf {i} \times \mathbf {k} \\{}+{}&A_{21}\mathbf {j} \times \mathbf {i} +A_{22}\mathbf {j} \times \mathbf {j} +A_{23}\mathbf {j} \times \mathbf {k} \\{}+{}&A_{31}\mathbf {k} \times \mathbf {i} +A_{32}\mathbf {k} \times \mathbf {j} +A_{33}\mathbf {k} \times \mathbf {k} \\[6pt]=\qquad &A_{12}\mathbf {k} -A_{13}\mathbf {j} -A_{21}\mathbf {k} \\{}+{}&A_{23}\mathbf {i} +A_{31}\mathbf {j} -A_{32}\mathbf {i} \\[6pt]=\qquad &\left(A_{23}-A_{32}\right)\mathbf {i} +\left(A_{31}-A_{13}\right)\mathbf {j} +\left(A_{12}-A_{21}\right)\mathbf {k} \\\end{aligned}}} In index notation this is the contraction of A with the Levi-Civita tensor ⟨ A ⟩ = ∑ j k ϵ i j k A j k . 
{\displaystyle \langle \mathbf {A} \rangle =\sum _{jk}{\epsilon _{i}}^{jk}A_{jk}.} == Unit dyadic == There exists a unit dyadic, denoted by I, such that, for any vector a, I ⋅ a = a ⋅ I = a {\displaystyle \mathbf {I} \cdot \mathbf {a} =\mathbf {a} \cdot \mathbf {I} =\mathbf {a} } Given a basis of 3 vectors a, b and c, with reciprocal basis a ^ , b ^ , c ^ {\displaystyle {\hat {\mathbf {a} }},{\hat {\mathbf {b} }},{\hat {\mathbf {c} }}} , the unit dyadic is expressed by I = a a ^ + b b ^ + c c ^ {\displaystyle \mathbf {I} =\mathbf {a} {\hat {\mathbf {a} }}+\mathbf {b} {\hat {\mathbf {b} }}+\mathbf {c} {\hat {\mathbf {c} }}} In the standard basis (for definitions of i, j, k see in the above section § Three-dimensional Euclidean space), I = i i + j j + k k {\displaystyle \mathbf {I} =\mathbf {ii} +\mathbf {jj} +\mathbf {kk} } Explicitly, the dot product to the right of the unit dyadic is I ⋅ a = ( i i + j j + k k ) ⋅ a = i ( i ⋅ a ) + j ( j ⋅ a ) + k ( k ⋅ a ) = i a x + j a y + k a z = a {\displaystyle {\begin{aligned}\mathbf {I} \cdot \mathbf {a} &=(\mathbf {i} \mathbf {i} +\mathbf {j} \mathbf {j} +\mathbf {k} \mathbf {k} )\cdot \mathbf {a} \\&=\mathbf {i} (\mathbf {i} \cdot \mathbf {a} )+\mathbf {j} (\mathbf {j} \cdot \mathbf {a} )+\mathbf {k} (\mathbf {k} \cdot \mathbf {a} )\\&=\mathbf {i} a_{x}+\mathbf {j} a_{y}+\mathbf {k} a_{z}\\&=\mathbf {a} \end{aligned}}} and to the left a ⋅ I = a ⋅ ( i i + j j + k k ) = ( a ⋅ i ) i + ( a ⋅ j ) j + ( a ⋅ k ) k = a x i + a y j + a z k = a {\displaystyle {\begin{aligned}\mathbf {a} \cdot \mathbf {I} &=\mathbf {a} \cdot (\mathbf {i} \mathbf {i} +\mathbf {j} \mathbf {j} +\mathbf {k} \mathbf {k} )\\&=(\mathbf {a} \cdot \mathbf {i} )\mathbf {i} +(\mathbf {a} \cdot \mathbf {j} )\mathbf {j} +(\mathbf {a} \cdot \mathbf {k} )\mathbf {k} \\&=a_{x}\mathbf {i} +a_{y}\mathbf {j} +a_{z}\mathbf {k} \\&=\mathbf {a} \end{aligned}}} The corresponding matrix is I = ( 1 0 0 0 1 0 0 0 1 ) {\displaystyle \mathbf {I} 
={\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\\\end{pmatrix}}} This can be put on more careful foundations (explaining what the logical content of "juxtaposing notation" could possibly mean) using the language of tensor products. If V is a finite-dimensional vector space, a dyadic tensor on V is an elementary tensor in the tensor product of V with its dual space. The tensor product of V and its dual space is isomorphic to the space of linear maps from V to V: a dyadic tensor vf is simply the linear map sending any w in V to f(w)v. When V is Euclidean n-space, we can use the inner product to identify the dual space with V itself, making a dyadic tensor an elementary tensor product of two vectors in Euclidean space. In this sense, the unit dyadic ij is the function from 3-space to itself sending a1i + a2j + a3k to a2i, and jj sends this sum to a2j. Now it is revealed in what (precise) sense ii + jj + kk is the identity: it sends a1i + a2j + a3k to itself because its effect is to sum each unit vector in the standard basis scaled by the coefficient of the vector in that basis. === Properties of unit dyadics === ( a × I ) ⋅ ( b × I ) = b a − ( a ⋅ b ) I I × ⋅ ( a b ) = b × a I × × A = ( A ⋅ ⋅ I ) I − A T I ⋅ ⋅ ( a b ) = ( I ⋅ a ) ⋅ b = a ⋅ b = t r ( a b ) {\displaystyle {\begin{aligned}\left(\mathbf {a} \times \mathbf {I} \right)\cdot \left(\mathbf {b} \times \mathbf {I} \right)&=\mathbf {ba} -\left(\mathbf {a} \cdot \mathbf {b} \right)\mathbf {I} \\\mathbf {I} {}_{\times }^{\,\centerdot }\left(\mathbf {ab} \right)&=\mathbf {b} \times \mathbf {a} \\\mathbf {I} {}_{\times }^{\times }\mathbf {A} &=(\mathbf {A} {}_{\,\centerdot }^{\,\centerdot }\mathbf {I} )\mathbf {I} -\mathbf {A} ^{\mathsf {T}}\\\mathbf {I} {}_{\,\centerdot }^{\,\centerdot }\left(\mathbf {ab} \right)&=\left(\mathbf {I} \cdot \mathbf {a} \right)\cdot \mathbf {b} =\mathbf {a} \cdot \mathbf {b} =\mathrm {tr} \left(\mathbf {ab} \right)\end{aligned}}} where "tr" denotes the trace. 
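These identities are easy to check numerically (a hedged numpy sketch with arbitrary test vectors; in matrix form, I · a is matrix–vector multiplication and the double-dot product is the Frobenius inner product described earlier):

```python
import numpy as np

I = np.eye(3)                      # the unit dyadic ii + jj + kk
a = np.array([2.0, -1.0, 3.0])
b = np.array([0.5, 4.0, -2.0])

# I . a = a . I = a
assert np.allclose(I @ a, a) and np.allclose(a @ I, a)

ab = np.outer(a, b)
# I : (ab) = a . b = tr(ab)  (double-dot with I contracts the dyad to a scalar)
assert np.isclose(np.sum(I * ab), np.dot(a, b))
assert np.isclose(np.trace(ab), np.dot(a, b))
```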
== Examples == === Vector projection and rejection === A nonzero vector a can always be split into two perpendicular components, one parallel (‖) to the direction of a unit vector n, and one perpendicular (⊥) to it; a = a ∥ + a ⊥ {\displaystyle \mathbf {a} =\mathbf {a} _{\parallel }+\mathbf {a} _{\perp }} The parallel component is found by vector projection, which is equivalent to the dot product of a with the dyadic nn, a ∥ = n ( n ⋅ a ) = ( n n ) ⋅ a {\displaystyle \mathbf {a} _{\parallel }=\mathbf {n} (\mathbf {n} \cdot \mathbf {a} )=(\mathbf {nn} )\cdot \mathbf {a} } and the perpendicular component is found from vector rejection, which is equivalent to the dot product of a with the dyadic I − nn, a ⊥ = a − n ( n ⋅ a ) = ( I − n n ) ⋅ a {\displaystyle \mathbf {a} _{\perp }=\mathbf {a} -\mathbf {n} (\mathbf {n} \cdot \mathbf {a} )=(\mathbf {I} -\mathbf {nn} )\cdot \mathbf {a} } === Rotation dyadic === ==== 2d rotations ==== The dyadic J = j i − i j = ( 0 − 1 1 0 ) {\displaystyle \mathbf {J} =\mathbf {ji} -\mathbf {ij} ={\begin{pmatrix}0&-1\\1&0\end{pmatrix}}} is a 90° anticlockwise rotation operator in 2d. It can be left-dotted with a vector r = xi + yj to produce the vector, ( j i − i j ) ⋅ ( x i + y j ) = x j i ⋅ i − x i j ⋅ i + y j i ⋅ j − y i j ⋅ j = − y i + x j , {\displaystyle (\mathbf {ji} -\mathbf {ij} )\cdot (x\mathbf {i} +y\mathbf {j} )=x\mathbf {ji} \cdot \mathbf {i} -x\mathbf {ij} \cdot \mathbf {i} +y\mathbf {ji} \cdot \mathbf {j} -y\mathbf {ij} \cdot \mathbf {j} =-y\mathbf {i} +x\mathbf {j} ,} in summary J ⋅ r = r r o t {\displaystyle \mathbf {J} \cdot \mathbf {r} =\mathbf {r} _{\mathrm {rot} }} or in matrix notation ( 0 − 1 1 0 ) ( x y ) = ( − y x ) . 
{\displaystyle {\begin{pmatrix}0&-1\\1&0\end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}={\begin{pmatrix}-y\\x\end{pmatrix}}.} For any angle θ, the 2d rotation dyadic for a rotation anti-clockwise in the plane is R = I cos ⁡ θ + J sin ⁡ θ = ( i i + j j ) cos ⁡ θ + ( j i − i j ) sin ⁡ θ = ( cos ⁡ θ − sin ⁡ θ sin ⁡ θ cos ⁡ θ ) {\displaystyle \mathbf {R} =\mathbf {I} \cos \theta +\mathbf {J} \sin \theta =(\mathbf {ii} +\mathbf {jj} )\cos \theta +(\mathbf {ji} -\mathbf {ij} )\sin \theta ={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\;\cos \theta \end{pmatrix}}} where I and J are as above, and the rotation of any 2d vector a = axi + ayj is a r o t = R ⋅ a {\displaystyle \mathbf {a} _{\mathrm {rot} }=\mathbf {R} \cdot \mathbf {a} } ==== 3d rotations ==== A general 3d rotation of a vector a, about an axis in the direction of a unit vector ω and anticlockwise through angle θ, can be performed using Rodrigues' rotation formula in the dyadic form a r o t = R ⋅ a , {\displaystyle \mathbf {a} _{\mathrm {rot} }=\mathbf {R} \cdot \mathbf {a} \,,} where the rotation dyadic is R = I cos ⁡ θ + Ω sin ⁡ θ + ω ω ( 1 − cos ⁡ θ ) , {\displaystyle \mathbf {R} =\mathbf {I} \cos \theta +{\boldsymbol {\Omega }}\sin \theta +{\boldsymbol {\omega \omega }}(1-\cos \theta )\,,} and the Cartesian entries of ω also form those of the dyadic Ω = ω x ( k j − j k ) + ω y ( i k − k i ) + ω z ( j i − i j ) , {\displaystyle {\boldsymbol {\Omega }}=\omega _{x}(\mathbf {kj} -\mathbf {jk} )+\omega _{y}(\mathbf {ik} -\mathbf {ki} )+\omega _{z}(\mathbf {ji} -\mathbf {ij} )\,,} The effect of Ω on a is the cross product Ω ⋅ a = ω × a {\displaystyle {\boldsymbol {\Omega }}\cdot \mathbf {a} ={\boldsymbol {\omega }}\times \mathbf {a} } which is the dyadic form of the cross-product matrix acting on a column vector.
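Both rotation dyadics can be sanity-checked numerically (an illustrative numpy sketch; the axis, angle, and test vectors are arbitrary choices):

```python
import numpy as np

theta = 0.9

# 2d rotation dyadic R = I cos(theta) + J sin(theta)
J = np.array([[0.0, -1.0], [1.0, 0.0]])
R2 = np.eye(2) * np.cos(theta) + J * np.sin(theta)
v = np.array([3.0, 1.0])
assert np.allclose(R2 @ v,
                   [3 * np.cos(theta) - 1 * np.sin(theta),
                    3 * np.sin(theta) + 1 * np.cos(theta)])

# 3d Rodrigues rotation dyadic about a unit axis w
w = np.array([1.0, 2.0, 2.0]) / 3.0          # unit vector
Omega = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])        # Omega . a = w x a
R3 = (np.eye(3) * np.cos(theta)
      + Omega * np.sin(theta)
      + np.outer(w, w) * (1 - np.cos(theta)))

a = np.array([0.3, -1.2, 0.7])
assert np.allclose(Omega @ a, np.cross(w, a))                  # Omega acts as w x (.)
assert np.allclose(np.linalg.norm(R3 @ a), np.linalg.norm(a))  # rotations preserve length
assert np.allclose(R3 @ w, w)                                  # the axis is fixed
```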
=== Lorentz transformation === In special relativity, the Lorentz boost with speed v in the direction of a unit vector n can be expressed as t ′ = γ ( t − v n ⋅ r c 2 ) {\displaystyle t'=\gamma \left(t-{\frac {v\mathbf {n} \cdot \mathbf {r} }{c^{2}}}\right)} r ′ = [ I + ( γ − 1 ) n n ] ⋅ r − γ v n t {\displaystyle \mathbf {r} '=[\mathbf {I} +(\gamma -1)\mathbf {nn} ]\cdot \mathbf {r} -\gamma v\mathbf {n} t} where γ = 1 1 − v 2 c 2 {\displaystyle \gamma ={\frac {1}{\sqrt {1-{\dfrac {v^{2}}{c^{2}}}}}}} is the Lorentz factor. == Related terms == Some authors generalize from the term dyadic to related terms triadic, tetradic and polyadic. == See also == Kronecker product Bivector Polyadic algebra Unit vector Multivector Differential form Quaternions Field (mathematics) == Notes == === Explanatory notes === === Citations === == References == P. Mitiguy (2009). "Vectors and dyadics" (PDF). Stanford, USA. Chapter 2 Spiegel, M.R.; Lipschutz, S.; Spellman, D. (2009). Vector analysis, Schaum's outlines. McGraw Hill. ISBN 978-0-07-161545-7. A.J.M. Spencer (1992). Continuum Mechanics. Dover Publications. ISBN 0-486-43594-6.. Morse, Philip M.; Feshbach, Herman (1953), "§1.6: Dyadics and other vector operators", Methods of theoretical physics, Volume 1, New York: McGraw-Hill, pp. 54–92, ISBN 978-0-07-043316-8, MR 0059774 {{citation}}: ISBN / Date incompatibility (help). Ismo V. Lindell (1996). Methods for Electromagnetic Field Analysis. Wiley-Blackwell. ISBN 978-0-7803-6039-6.. Hollis C. Chen (1983). Theory of Electromagnetic Wave - A Coordinate-free approach. McGraw Hill. ISBN 978-0-07-010688-8.. K. Cahill (2013). Physical Mathematics. Cambridge University Press. ISBN 978-1107005211. == External links == Vector Analysis, a Text-Book for the use of Students of Mathematics and Physics, Founded upon the Lectures of J. 
Willard Gibbs PhD LLD, Edwind Bidwell Wilson PhD Advanced Field Theory, I.V.Lindel Vector and Dyadic Analysis Introductory Tensor Analysis Nasa.gov, Foundations of Tensor Analysis for students of Physics and Engineering with an Introduction to the Theory of Relativity, J.C. Kolecki Nasa.gov, An introduction to Tensors for students of Physics and Engineering, J.C. Kolecki
Wikipedia/Dyadic_tensor
In mathematics, a module is a generalization of the notion of vector space in which the field of scalars is replaced by a (not necessarily commutative) ring. The concept of a module also generalizes the notion of an abelian group, since the abelian groups are exactly the modules over the ring of integers. Like a vector space, a module is an additive abelian group, and scalar multiplication is distributive over the operations of addition between elements of the ring or module and is compatible with the ring multiplication. Modules are very closely related to the representation theory of groups. They are also one of the central notions of commutative algebra and homological algebra, and are used widely in algebraic geometry and algebraic topology. == Introduction and definition == === Motivation === In a vector space, the set of scalars is a field and acts on the vectors by scalar multiplication, subject to certain axioms such as the distributive law. In a module, the scalars need only be a ring, so the module concept represents a significant generalization. In commutative algebra, both ideals and quotient rings are modules, so that many arguments about ideals or quotient rings can be combined into a single argument about modules. In non-commutative algebra, the distinction between left ideals, ideals, and modules becomes more pronounced, though some ring-theoretic conditions can be expressed either about left ideals or left modules. Much of the theory of modules consists of extending as many of the desirable properties of vector spaces as possible to the realm of modules over a "well-behaved" ring, such as a principal ideal domain. 
However, modules can be quite a bit more complicated than vector spaces. For instance, not all modules have a basis, and even for those that do (free modules), the number of elements in a basis need not be the same for all bases (that is, a free module may not have a unique rank) if the underlying ring does not satisfy the invariant basis number condition. Vector spaces, by contrast, always have a (possibly infinite) basis whose cardinality is then unique. (These last two assertions require the axiom of choice in general, but not in the case of finite-dimensional vector spaces, or certain well-behaved infinite-dimensional vector spaces such as Lp spaces.) === Formal definition === Suppose that R is a ring, and 1 is its multiplicative identity. A left R-module M consists of an abelian group (M, +) and an operation · : R × M → M such that for all r, s in R and x, y in M, we have r ⋅ ( x + y ) = r ⋅ x + r ⋅ y {\displaystyle r\cdot (x+y)=r\cdot x+r\cdot y} , ( r + s ) ⋅ x = r ⋅ x + s ⋅ x {\displaystyle (r+s)\cdot x=r\cdot x+s\cdot x} , ( r s ) ⋅ x = r ⋅ ( s ⋅ x ) {\displaystyle (rs)\cdot x=r\cdot (s\cdot x)} , 1 ⋅ x = x . {\displaystyle 1\cdot x=x.} The operation · is called scalar multiplication. Often the symbol · is omitted, but in this article we use it and reserve juxtaposition for multiplication in R. One may write RM to emphasize that M is a left R-module. A right R-module MR is defined similarly in terms of an operation · : M × R → M. Whether a module is a left module or a right module does not depend on whether the scalars are written on the left or on the right, but on property 3: if, in the above definition, property 3 is replaced by ( r s ) ⋅ x = s ⋅ ( r ⋅ x ) , {\displaystyle (rs)\cdot x=s\cdot (r\cdot x),} one gets a right-module, even if the scalars are written on the left. However, writing the scalars on the left for left-modules and on the right for right modules makes the manipulation of property 3 much easier.
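The four axioms are easy to test exhaustively on a toy example (a Python sketch; the abelian group Z/6Z with the action n · x = nx mod 6 is a standard Z-module):

```python
# The abelian group Z/6Z as a left module over the ring Z,
# with scalar multiplication n . x = (n * x) mod 6.
MOD = 6

def scal(r, x):
    """Scalar multiplication Z x Z/6Z -> Z/6Z."""
    return (r * x) % MOD

def add(x, y):
    """Group addition in Z/6Z."""
    return (x + y) % MOD

for r in range(-3, 4):
    for s in range(-3, 4):
        for x in range(MOD):
            for y in range(MOD):
                assert scal(r, add(x, y)) == add(scal(r, x), scal(r, y))  # 1: r.(x+y) = r.x + r.y
                assert scal(r + s, x) == add(scal(r, x), scal(s, x))      # 2: (r+s).x = r.x + s.x
                assert scal(r * s, x) == scal(r, scal(s, x))              # 3: (rs).x = r.(s.x)
assert all(scal(1, x) == x for x in range(MOD))                            # 4: 1.x = x
```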
Authors who do not require rings to be unital omit condition 4 in the definition above; they would call the structures defined above "unital left R-modules". In this article, consistent with the glossary of ring theory, all rings and modules are assumed to be unital. An (R,S)-bimodule is an abelian group together with both a left scalar multiplication · by elements of R and a right scalar multiplication ∗ by elements of S, making it simultaneously a left R-module and a right S-module, satisfying the additional condition (r · x) ∗ s = r ⋅ (x ∗ s) for all r in R, x in M, and s in S. If R is commutative, then left R-modules are the same as right R-modules and are simply called R-modules. Most often the scalars are written on the left in this case. == Examples == If K is a field, then K-modules are called K-vector spaces (vector spaces over K). If K is a field, and K[x] a univariate polynomial ring, then a K[x]-module M is a K-module with an additional action of x on M by a group homomorphism that commutes with the action of K on M. In other words, a K[x]-module is a K-vector space M combined with a linear map from M to M. Applying the structure theorem for finitely generated modules over a principal ideal domain to this example shows the existence of the rational and Jordan canonical forms. The concept of a Z-module agrees with the notion of an abelian group. That is, every abelian group is a module over the ring of integers Z in a unique way. For n > 0, let n ⋅ x = x + x + ... + x (n summands), 0 ⋅ x = 0, and (−n) ⋅ x = −(n ⋅ x). Such a module need not have a basis—groups containing torsion elements do not. (For example, in the group of integers modulo 3, one cannot find even one element that satisfies the definition of a linearly independent set, since when an integer such as 3 or 6 multiplies an element, the result is 0. However, if a finite field is considered as a module over the same finite field taken as a ring, it is a vector space and does have a basis.) 
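The parenthetical claim about the integers modulo 3 can be checked directly (a small Python sketch): every element x of Z/3Z satisfies 3 · x = 0 with 3 ≠ 0 in Z, so no singleton {x} is linearly independent over Z, whereas over the field Z/3Z any nonzero element is a basis.

```python
# Z/3Z as a Z-module: every element is a torsion element.
for x in range(3):
    assert (3 * x) % 3 == 0   # the nonzero integer scalar 3 kills x, so {x} is dependent over Z

# In contrast, Z/3Z as a module (i.e. vector space) over the field Z/3Z:
# any nonzero element generates everything, e.g. x = 2.
generated = {(c * 2) % 3 for c in range(3)}
assert generated == {0, 1, 2}
```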
The decimal fractions (including negative ones) form a module over the integers. Only singletons are linearly independent sets, but there is no singleton that can serve as a basis, so the module has no basis and no rank, in the usual sense of linear algebra. However this module has a torsion-free rank equal to 1. If R is any ring and n a natural number, then the cartesian product Rn is both a left and right R-module over R if we use the component-wise operations. Hence when n = 1, R is an R-module, where the scalar multiplication is just ring multiplication. The case n = 0 yields the trivial R-module {0} consisting only of its identity element. Modules of this type are called free and if R has invariant basis number (e.g. any commutative ring or field) the number n is then the rank of the free module. If Mn(R) is the ring of n × n matrices over a ring R, M is an Mn(R)-module, and ei is the n × n matrix with 1 in the (i, i)-entry (and zeros elsewhere), then eiM is an R-module, since reim = eirm ∈ eiM. So M breaks up as the direct sum of R-modules, M = e1M ⊕ ... ⊕ enM. Conversely, given an R-module M0, then M0⊕n is an Mn(R)-module. In fact, the category of R-modules and the category of Mn(R)-modules are equivalent. The special case is that the module M is just R as a module over itself, then Rn is an Mn(R)-module. If S is a nonempty set, M is a left R-module, and MS is the collection of all functions f : S → M, then with addition and scalar multiplication in MS defined pointwise by (f + g)(s) = f(s) + g(s) and (rf)(s) = rf(s), MS is a left R-module. The right R-module case is analogous. In particular, if R is commutative then the collection of R-module homomorphisms h : M → N (see below) is an R-module (and in fact a submodule of NM). If X is a smooth manifold, then the smooth functions from X to the real numbers form a ring C∞(X). The set of all smooth vector fields defined on X forms a module over C∞(X), and so do the tensor fields and the differential forms on X. 
More generally, the sections of any vector bundle form a projective module over C∞(X), and by Swan's theorem, every projective module is isomorphic to the module of sections of some vector bundle; the category of C∞(X)-modules and the category of vector bundles over X are equivalent. If R is any ring and I is any left ideal in R, then I is a left R-module, and analogously right ideals in R are right R-modules. If R is a ring, we can define the opposite ring Rop, which has the same underlying set and the same addition operation, but the opposite multiplication: if ab = c in R, then ba = c in Rop. Any left R-module M can then be seen to be a right module over Rop, and any right module over R can be considered a left module over Rop. Modules over a Lie algebra are (associative algebra) modules over its universal enveloping algebra. If R and S are rings with a ring homomorphism φ : R → S, then every S-module M is an R-module by defining rm = φ(r)m. In particular, S itself is such an R-module. == Submodules and homomorphisms == Suppose M is a left R-module and N is a subgroup of M. Then N is a submodule (or more explicitly an R-submodule) if for any n in N and any r in R, the product r ⋅ n (or n ⋅ r for a right R-module) is in N. If X is any subset of an R-module M, then the submodule spanned by X is defined to be ⟨ X ⟩ = ⋂ N ⊇ X N {\textstyle \langle X\rangle =\,\bigcap _{N\supseteq X}N} where N runs over the submodules of M that contain X, or explicitly { ∑ i = 1 k r i x i ∣ r i ∈ R , x i ∈ X } {\textstyle \left\{\sum _{i=1}^{k}r_{i}x_{i}\mid r_{i}\in R,x_{i}\in X\right\}} , which is important in the definition of tensor products of modules. The set of submodules of a given module M, together with the two binary operations + (the module spanned by the union of the arguments) and ∩, forms a lattice that satisfies the modular law: Given submodules U, N1, N2 of M such that N1 ⊆ N2, then the following two submodules are equal: (N1 + U) ∩ N2 = N1 + (U ∩ N2). 
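The modular law can be checked in the simplest setting, the submodules of the Z-module Z: every submodule is an ideal aZ, with aZ + bZ = gcd(a, b)Z and aZ ∩ bZ = lcm(a, b)Z. A small sketch (math.lcm needs Python 3.9+):

```python
from math import gcd, lcm

def plus(a, b):   # aZ + bZ = gcd(a, b)Z
    return gcd(a, b)

def cap(a, b):    # aZ ∩ bZ = lcm(a, b)Z
    return lcm(a, b)

N1, N2, U = 4, 2, 3       # N1 = 4Z is contained in N2 = 2Z
assert N1 % N2 == 0        # the containment hypothesis of the modular law
# modular law: (N1 + U) ∩ N2 == N1 + (U ∩ N2); both sides come out to 2Z
assert cap(plus(N1, U), N2) == plus(N1, cap(U, N2)) == 2
```
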
If M and N are left R-modules, then a map f : M → N is a homomorphism of R-modules if for any m, n in M and r, s in R, f ( r ⋅ m + s ⋅ n ) = r ⋅ f ( m ) + s ⋅ f ( n ) {\displaystyle f(r\cdot m+s\cdot n)=r\cdot f(m)+s\cdot f(n)} . This, like any homomorphism of mathematical objects, is just a mapping that preserves the structure of the objects. Another name for a homomorphism of R-modules is an R-linear map. A bijective module homomorphism f : M → N is called a module isomorphism, and the two modules M and N are called isomorphic. Two isomorphic modules are identical for all practical purposes, differing solely in the notation for their elements. The kernel of a module homomorphism f : M → N is the submodule of M consisting of all elements that are sent to zero by f, and the image of f is the submodule of N consisting of values f(m) for all elements m of M. The isomorphism theorems familiar from groups and vector spaces are also valid for R-modules. Given a ring R, the set of all left R-modules together with their module homomorphisms forms an abelian category, denoted by R-Mod (see category of modules). == Types of modules == Finitely generated: An R-module M is finitely generated if there exist finitely many elements x1, ..., xn in M such that every element of M is a linear combination of those elements with coefficients from the ring R. Cyclic: A module is called a cyclic module if it is generated by one element. Free: A free R-module is a module that has a basis, or equivalently, one that is isomorphic to a direct sum of copies of the ring R. These are the modules that behave very much like vector spaces. Projective: Projective modules are direct summands of free modules and share many of their desirable properties. Injective: Injective modules are defined dually to projective modules. Flat: A module is called flat if taking the tensor product of it with any exact sequence of R-modules preserves exactness.
Torsionless: A module is called torsionless if it embeds into its algebraic dual. Simple: A simple module S is a module that is not {0} and whose only submodules are {0} and S. Simple modules are sometimes called irreducible. Semisimple: A semisimple module is a direct sum (finite or not) of simple modules. Historically these modules are also called completely reducible. Indecomposable: An indecomposable module is a non-zero module that cannot be written as a direct sum of two non-zero submodules. Every simple module is indecomposable, but there are indecomposable modules that are not simple (e.g. uniform modules). Faithful: A faithful module M is one where the action of each r ≠ 0 in R on M is nontrivial (i.e. r ⋅ x ≠ 0 for some x in M). Equivalently, the annihilator of M is the zero ideal. Torsion-free: A torsion-free module is a module over a ring such that 0 is the only element annihilated by a regular element (non zero-divisor) of the ring; equivalently, over an integral domain, rm = 0 implies r = 0 or m = 0. Noetherian: A Noetherian module is a module that satisfies the ascending chain condition on submodules, that is, every increasing chain of submodules becomes stationary after finitely many steps. Equivalently, every submodule is finitely generated. Artinian: An Artinian module is a module that satisfies the descending chain condition on submodules, that is, every decreasing chain of submodules becomes stationary after finitely many steps. Graded: A graded module is a module with a decomposition as a direct sum M = ⨁x Mx over a graded ring R = ⨁x Rx such that RxMy ⊆ Mx+y for all x and y. Uniform: A uniform module is a module in which all pairs of nonzero submodules have nonzero intersection. == Further notions == === Relation to representation theory === A representation of a group G over a field k is a module over the group ring k[G].
If M is a left R-module, then the action of an element r in R is defined to be the map M → M that sends each x to rx (or xr in the case of a right module), and is necessarily a group endomorphism of the abelian group (M, +). The set of all group endomorphisms of M is denoted EndZ(M) and forms a ring under addition and composition, and sending a ring element r of R to its action actually defines a ring homomorphism from R to EndZ(M). Such a ring homomorphism R → EndZ(M) is called a representation of the abelian group M over the ring R; an alternative and equivalent way of defining left R-modules is to say that a left R-module is an abelian group M together with a representation of M over R. Such a representation R → EndZ(M) may also be called a ring action of R on M. A representation is called faithful if the map R → EndZ(M) is injective. In terms of modules, this means that if r is an element of R such that rx = 0 for all x in M, then r = 0. Every abelian group is a faithful module over the integers or over the ring of integers modulo n, Z/nZ, for some n. === Generalizations === A ring R corresponds to a preadditive category R with a single object. With this understanding, a left R-module is just a covariant additive functor from R to the category Ab of abelian groups, and right R-modules are contravariant additive functors. This suggests that, if C is any preadditive category, a covariant additive functor from C to Ab should be considered a generalized left module over C. These functors form a functor category C-Mod, which is the natural generalization of the module category R-Mod. Modules over commutative rings can be generalized in a different direction: take a ringed space (X, OX) and consider the sheaves of OX-modules (see sheaf of modules). These form a category OX-Mod, and play an important role in modern algebraic geometry. If X has only a single point, then this is a module category in the old sense over the commutative ring OX(X). 
One can also consider modules over a semiring. Modules over rings are abelian groups, but modules over semirings are only commutative monoids. Most applications of modules are still possible. In particular, for any semiring S, the matrices over S form a semiring over which the tuples of elements from S are a module (in this generalized sense only). This allows a further generalization of the concept of vector space incorporating the semirings from theoretical computer science. Over near-rings, one can consider near-ring modules, a nonabelian generalization of modules. == See also == Group ring Algebra (ring theory) Module (model theory) Module spectrum Annihilator == Notes == == References == F.W. Anderson and K.R. Fuller: Rings and Categories of Modules, Graduate Texts in Mathematics, Vol. 13, 2nd Ed., Springer-Verlag, New York, 1992, ISBN 0-387-97845-3, ISBN 3-540-97845-3 Nathan Jacobson. Structure of rings. Colloquium publications, Vol. 37, 2nd Ed., AMS Bookstore, 1964, ISBN 978-0-8218-1037-8 == External links == "Module", Encyclopedia of Mathematics, EMS Press, 2001 [1994] module at the nLab
Wikipedia/Module_(ring_theory)
In mathematics, differential forms provide a unified approach to define integrands over curves, surfaces, solids, and higher-dimensional manifolds. The modern notion of differential forms was pioneered by Élie Cartan. It has many applications, especially in geometry, topology and physics. For instance, the expression f ( x ) d x {\displaystyle f(x)\,dx} is an example of a 1-form, and can be integrated over an interval [ a , b ] {\displaystyle [a,b]} contained in the domain of f {\displaystyle f} : ∫ a b f ( x ) d x . {\displaystyle \int _{a}^{b}f(x)\,dx.} Similarly, the expression f ( x , y , z ) d x ∧ d y + g ( x , y , z ) d z ∧ d x + h ( x , y , z ) d y ∧ d z {\displaystyle f(x,y,z)\,dx\wedge dy+g(x,y,z)\,dz\wedge dx+h(x,y,z)\,dy\wedge dz} is a 2-form that can be integrated over a surface S {\displaystyle S} : ∫ S ( f ( x , y , z ) d x ∧ d y + g ( x , y , z ) d z ∧ d x + h ( x , y , z ) d y ∧ d z ) . {\displaystyle \int _{S}\left(f(x,y,z)\,dx\wedge dy+g(x,y,z)\,dz\wedge dx+h(x,y,z)\,dy\wedge dz\right).} The symbol ∧ {\displaystyle \wedge } denotes the exterior product, sometimes called the wedge product, of two differential forms. Likewise, a 3-form f ( x , y , z ) d x ∧ d y ∧ d z {\displaystyle f(x,y,z)\,dx\wedge dy\wedge dz} represents a volume element that can be integrated over a region of space. In general, a k-form is an object that may be integrated over a k-dimensional manifold, and is homogeneous of degree k in the coordinate differentials d x , d y , … . {\displaystyle dx,dy,\ldots .} On an n-dimensional manifold, a top-dimensional form (n-form) is called a volume form. The differential forms form an alternating algebra. This implies that d y ∧ d x = − d x ∧ d y {\displaystyle dy\wedge dx=-dx\wedge dy} and d x ∧ d x = 0. {\displaystyle dx\wedge dx=0.} This alternating property reflects the orientation of the domain of integration. 
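The alternating rules dy ∧ dx = −dx ∧ dy and dx ∧ dx = 0 can be mechanized in a few lines. This is an illustrative sketch, not a standard library API: forms are dictionaries keyed by strictly increasing index tuples, and wedging sorts indices while tracking the permutation sign:

```python
def sort_sign(indices):
    # bubble-sort the indices, tracking the sign of the permutation;
    # a repeated index means the form vanishes (dx^i ∧ dx^i = 0)
    idx, sign = list(indices), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):
        return (), 0
    return tuple(idx), sign

def wedge(a, b):
    # forms represented as {increasing index tuple: coefficient}
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            K, s = sort_sign(I + J)
            if s:
                out[K] = out.get(K, 0) + s * x * y
    return out

dx, dy = {(1,): 1}, {(2,): 1}
assert wedge(dx, dy) == {(1, 2): 1}
assert wedge(dy, dx) == {(1, 2): -1}   # dy ∧ dx = -dx ∧ dy
assert wedge(dx, dx) == {}             # dx ∧ dx = 0
```
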
The exterior derivative is an operation on differential forms that, given a k-form φ {\displaystyle \varphi } , produces a (k+1)-form d φ . {\displaystyle d\varphi .} This operation extends the differential of a function (a function can be considered as a 0-form, and its differential is d f ( x ) = f ′ ( x ) d x {\displaystyle df(x)=f'(x)\,dx} ). This allows expressing the fundamental theorem of calculus, the divergence theorem, Green's theorem, and Stokes' theorem as special cases of a single general result, the generalized Stokes theorem. Differential 1-forms are naturally dual to vector fields on a differentiable manifold, and the pairing between vector fields and 1-forms is extended to arbitrary differential forms by the interior product. The algebra of differential forms along with the exterior derivative defined on it is preserved by the pullback under smooth functions between two manifolds. This feature allows geometrically invariant information to be moved from one space to another via the pullback, provided that the information is expressed in terms of differential forms. As an example, the change of variables formula for integration becomes a simple statement that an integral is preserved under pullback. == History == Differential forms are part of the field of differential geometry, influenced by linear algebra. Although the notion of a differential is quite old, the initial attempt at an algebraic organization of differential forms is usually credited to Élie Cartan with reference to his 1899 paper. Some aspects of the exterior algebra of differential forms appears in Hermann Grassmann's 1844 work, Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (The Theory of Linear Extension, a New Branch of Mathematics). == Concept == Differential forms provide an approach to multivariable calculus that is independent of coordinates. === Integration and orientation === A differential k-form can be integrated over an oriented manifold of dimension k. 
A differential 1-form can be thought of as measuring an infinitesimal oriented length, or 1-dimensional oriented density. A differential 2-form can be thought of as measuring an infinitesimal oriented area, or 2-dimensional oriented density. And so on. Integration of differential forms is well-defined only on oriented manifolds. An example of a 1-dimensional manifold is an interval [a, b], and intervals can be given an orientation: they are positively oriented if a < b, and negatively oriented otherwise. If a < b then the integral of the differential 1-form f(x) dx over the interval [a, b] (with its natural positive orientation) is ∫ a b f ( x ) d x {\displaystyle \int _{a}^{b}f(x)\,dx} which is the negative of the integral of the same differential form over the same interval, when equipped with the opposite orientation. That is: ∫ b a f ( x ) d x = − ∫ a b f ( x ) d x . {\displaystyle \int _{b}^{a}f(x)\,dx=-\int _{a}^{b}f(x)\,dx.} This gives a geometrical context to the conventions for one-dimensional integrals, that the sign changes when the orientation of the interval is reversed. A standard explanation of this in one-variable integration theory is that, when the limits of integration are in the opposite order (b < a), the increment dx is negative in the direction of integration. More generally, an m-form is an oriented density that can be integrated over an m-dimensional oriented manifold. (For example, a 1-form can be integrated over an oriented curve, a 2-form can be integrated over an oriented surface, etc.) If M is an oriented m-dimensional manifold, and M′ is the same manifold with opposite orientation and ω is an m-form, then one has: ∫ M ω = − ∫ M ′ ω . {\displaystyle \int _{M}\omega =-\int _{M'}\omega \,.} These conventions correspond to interpreting the integrand as a differential form, integrated over a chain. 
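The sign convention ∫ from b to a = −∫ from a to b falls out automatically of a Riemann-sum sketch in which the increment h inherits the orientation of the interval:

```python
def integrate(f, a, b, n=10000):
    # midpoint Riemann sum; h is negative when the interval is negatively oriented
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: x ** 2
forward = integrate(f, 0.0, 1.0)    # positively oriented [0, 1]
backward = integrate(f, 1.0, 0.0)   # the same interval, opposite orientation
assert abs(forward - 1 / 3) < 1e-6
assert abs(forward + backward) < 1e-9   # reversing orientation flips the sign
```
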
In measure theory, by contrast, one interprets the integrand as a function f with respect to a measure μ and integrates over a subset A, without any notion of orientation; one writes ∫ A f d μ = ∫ [ a , b ] f d μ {\textstyle \int _{A}f\,d\mu =\int _{[a,b]}f\,d\mu } to indicate integration over a subset A. This is a minor distinction in one dimension, but becomes subtler on higher-dimensional manifolds; see below for details. Making the notion of an oriented density precise, and thus of a differential form, involves the exterior algebra. The differentials of a set of coordinates, dx1, ..., dxn can be used as a basis for all 1-forms. Each of these represents a covector at each point on the manifold that may be thought of as measuring a small displacement in the corresponding coordinate direction. A general 1-form is a linear combination of these differentials at every point on the manifold: f 1 d x 1 + ⋯ + f n d x n , {\displaystyle f_{1}\,dx^{1}+\cdots +f_{n}\,dx^{n},} where the fk = fk(x1, ... , xn) are functions of all the coordinates. A differential 1-form is integrated along an oriented curve as a line integral. The expressions dxi ∧ dxj, where i < j can be used as a basis at every point on the manifold for all 2-forms. This may be thought of as an infinitesimal oriented square parallel to the xi–xj-plane. A general 2-form is a linear combination of these at every point on the manifold: ∑ 1 ≤ i < j ≤ n f i , j d x i ∧ d x j {\textstyle \sum _{1\leq i<j\leq n}f_{i,j}\,dx^{i}\wedge dx^{j}} , and it is integrated just like a surface integral. A fundamental operation defined on differential forms is the exterior product (the symbol is the wedge ∧). This is similar to the cross product from vector calculus, in that it is an alternating product. 
For instance, d x 1 ∧ d x 2 = − d x 2 ∧ d x 1 {\displaystyle dx^{1}\wedge dx^{2}=-dx^{2}\wedge dx^{1}} because the square whose first side is dx1 and second side is dx2 is to be regarded as having the opposite orientation as the square whose first side is dx2 and whose second side is dx1. This is why we only need to sum over expressions dxi ∧ dxj, with i < j; for example: a(dxi ∧ dxj) + b(dxj ∧ dxi) = (a − b) dxi ∧ dxj. The exterior product allows higher-degree differential forms to be built out of lower-degree ones, in much the same way that the cross product in vector calculus allows one to compute the area vector of a parallelogram from vectors pointing up the two sides. Alternating also implies that dxi ∧ dxi = 0, in the same way that the cross product of parallel vectors, whose magnitude is the area of the parallelogram spanned by those vectors, is zero. In higher dimensions, dxi1 ∧ ⋅⋅⋅ ∧ dxim = 0 if any two of the indices i1, ..., im are equal, in the same way that the "volume" enclosed by a parallelotope whose edge vectors are linearly dependent is zero. === Multi-index notation === A common notation for the wedge product of elementary k-forms is so called multi-index notation: in an n-dimensional context, for I = ( i 1 , i 2 , … , i k ) , 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n {\displaystyle I=(i_{1},i_{2},\ldots ,i_{k}),1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n} , we define d x I := d x i 1 ∧ ⋯ ∧ d x i k = ⋀ i ∈ I d x i {\textstyle dx^{I}:=dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}=\bigwedge _{i\in I}dx^{i}} . Another useful notation is obtained by defining the set of all strictly increasing multi-indices of length k, in a space of dimension n, denoted J k , n := { I = ( i 1 , … , i k ) : 1 ≤ i 1 < i 2 < ⋯ < i k ≤ n } {\displaystyle {\mathcal {J}}_{k,n}:=\{I=(i_{1},\ldots ,i_{k}):1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n\}} . 
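The strictly increasing multi-indices making up this set are exactly what itertools.combinations enumerates, which also yields the count |J_{k,n}| = C(n, k):

```python
from itertools import combinations
from math import comb

n, k = 4, 2
# J_{k,n}: strictly increasing index tuples 1 <= i_1 < ... < i_k <= n
J = list(combinations(range(1, n + 1), k))
assert J[0] == (1, 2) and len(J) == comb(n, k) == 6
# no strictly increasing tuple of length > n exists,
# so there are no nonzero forms of degree above the dimension
assert list(combinations(range(1, n + 1), n + 1)) == []
```
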
Then locally (wherever the coordinates apply), { d x I } I ∈ J k , n {\displaystyle \{dx^{I}\}_{I\in {\mathcal {J}}_{k,n}}} spans the space of differential k-forms in a manifold M of dimension n, when viewed as a module over the ring C∞(M) of smooth functions on M. By calculating the size of J k , n {\displaystyle {\mathcal {J}}_{k,n}} combinatorially, one sees that the rank of the module of k-forms on an n-dimensional manifold, and in general the dimension of the space of k-covectors on an n-dimensional vector space, is n choose k: | J k , n | = ( n k ) {\textstyle |{\mathcal {J}}_{k,n}|={\binom {n}{k}}} . This also demonstrates that there are no nonzero differential forms of degree greater than the dimension of the underlying manifold. === The exterior derivative === In addition to the exterior product, there is also the exterior derivative operator d. The exterior derivative of a differential form is a generalization of the differential of a function, in the sense that the exterior derivative of f ∈ C∞(M) = Ω0(M) is exactly the differential of f. When generalized to higher forms, if ω = f dxI is a simple k-form, then its exterior derivative dω is a (k + 1)-form defined by taking the differential of the coefficient functions: d ω = ∑ i = 1 n ∂ f ∂ x i d x i ∧ d x I . 
{\displaystyle d\omega =\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,dx^{i}\wedge dx^{I}.} with extension to general k-forms through linearity: if τ = ∑ I ∈ J k , n a I d x I ∈ Ω k ( M ) {\textstyle \tau =\sum _{I\in {\mathcal {J}}_{k,n}}a_{I}\,dx^{I}\in \Omega ^{k}(M)} , then its exterior derivative is d τ = ∑ I ∈ J k , n ( ∑ j = 1 n ∂ a I ∂ x j d x j ) ∧ d x I ∈ Ω k + 1 ( M ) {\displaystyle d\tau =\sum _{I\in {\mathcal {J}}_{k,n}}\left(\sum _{j=1}^{n}{\frac {\partial a_{I}}{\partial x^{j}}}\,dx^{j}\right)\wedge dx^{I}\in \Omega ^{k+1}(M)} In R3, with the Hodge star operator, the exterior derivative corresponds to gradient, curl, and divergence, although this correspondence, like the cross product, does not generalize to higher dimensions, and should be treated with some caution. The exterior derivative itself applies in an arbitrary finite number of dimensions, and is a flexible and powerful tool with wide application in differential geometry, differential topology, and many areas in physics. Of note, although the above definition of the exterior derivative was defined with respect to local coordinates, it can be defined in an entirely coordinate-free manner, as an antiderivation of degree 1 on the exterior algebra of differential forms. The benefit of this more general approach is that it allows for a natural coordinate-free approach to integrate on manifolds. It also allows for a natural generalization of the fundamental theorem of calculus, called the (generalized) Stokes' theorem, which is a central result in the theory of integration on manifolds. === Differential calculus === Let U be an open set in Rn. A differential 0-form ("zero-form") is defined to be a smooth function f on U – the set of which is denoted C∞(U). If v is any vector in Rn, then f has a directional derivative ∂v f, which is another function on U whose value at a point p ∈ U is the rate of change (at p) of f in the v direction: ( ∂ v f ) ( p ) = d d t f ( p + t v ) | t = 0 . 
{\displaystyle (\partial _{\mathbf {v} }f)(p)=\left.{\frac {d}{dt}}f(p+t\mathbf {v} )\right|_{t=0}.} (This notion can be extended pointwise to the case that v is a vector field on U by evaluating v at the point p in the definition.) In particular, if v = ej is the jth coordinate vector then ∂v f is the partial derivative of f with respect to the jth coordinate vector, i.e., ∂f / ∂xj, where x1, x2, ..., xn are the coordinate vectors in U. By their very definition, partial derivatives depend upon the choice of coordinates: if new coordinates y1, y2, ..., yn are introduced, then ∂ f ∂ x j = ∑ i = 1 n ∂ y i ∂ x j ∂ f ∂ y i . {\displaystyle {\frac {\partial f}{\partial x^{j}}}=\sum _{i=1}^{n}{\frac {\partial y^{i}}{\partial x^{j}}}{\frac {\partial f}{\partial y^{i}}}.} The first idea leading to differential forms is the observation that ∂v f (p) is a linear function of v: ( ∂ v + w f ) ( p ) = ( ∂ v f ) ( p ) + ( ∂ w f ) ( p ) ( ∂ c v f ) ( p ) = c ( ∂ v f ) ( p ) {\displaystyle {\begin{aligned}(\partial _{\mathbf {v} +\mathbf {w} }f)(p)&=(\partial _{\mathbf {v} }f)(p)+(\partial _{\mathbf {w} }f)(p)\\(\partial _{c\mathbf {v} }f)(p)&=c(\partial _{\mathbf {v} }f)(p)\end{aligned}}} for any vectors v, w and any real number c. At each point p, this linear map from Rn to R is denoted dfp and called the derivative or differential of f at p. Thus dfp(v) = ∂v f (p). Extended over the whole set, the object df can be viewed as a function that takes a vector field on U, and returns a real-valued function whose value at each point is the derivative along the vector field of the function f. Note that at each p, the differential dfp is not a real number, but a linear functional on tangent vectors, and a prototypical example of a differential 1-form. Since any vector v is a linear combination Σ vjej of its components, df is uniquely determined by dfp(ej) for each j and each p ∈ U, which are just the partial derivatives of f on U. 
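The directional derivative and its linearity in v can be checked numerically; the central-difference helper below is an illustrative approximation, not an exact derivative:

```python
def directional(f, p, v, h=1e-6):
    # (d/dt) f(p + t v) at t = 0, via a central difference
    g = lambda t: f([pi + t * vi for pi, vi in zip(p, v)])
    return (g(h) - g(-h)) / (2 * h)

f = lambda x: x[0] ** 2 + 3 * x[1]          # f(x, y) = x^2 + 3y
p = [1.0, 2.0]
e1, e2 = [1.0, 0.0], [0.0, 1.0]
assert abs(directional(f, p, e1) - 2.0) < 1e-6   # df/dx = 2x = 2 at p
assert abs(directional(f, p, e2) - 3.0) < 1e-6   # df/dy = 3
# linearity: the derivative along e1 + e2 is the sum of the two
assert abs(directional(f, p, [1.0, 1.0]) - 5.0) < 1e-6
```
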
Thus df provides a way of encoding the partial derivatives of f. It can be decoded by noticing that the coordinates x1, x2, ..., xn are themselves functions on U, and so define differential 1-forms dx1, dx2, ..., dxn. Let f = xi. Since ∂xi / ∂xj = δij, the Kronecker delta function, it follows that d f = ∑ i = 1 n ∂ f ∂ x i d x i ( ∗ ) {\displaystyle df=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,dx^{i}\qquad (*)} The meaning of this expression is given by evaluating both sides at an arbitrary point p: on the right hand side, the sum is defined "pointwise", so that d f p = ∑ i = 1 n ∂ f ∂ x i ( p ) ( d x i ) p . {\displaystyle df_{p}=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}(p)(dx^{i})_{p}.} Applying both sides to ej, the result on each side is the jth partial derivative of f at p. Since p and j were arbitrary, this proves the formula (*). More generally, for any smooth functions gi and hi on U, we define the differential 1-form α = Σi gi dhi pointwise by α p = ∑ i g i ( p ) ( d h i ) p {\displaystyle \alpha _{p}=\sum _{i}g_{i}(p)(dh_{i})_{p}} for each p ∈ U. Any differential 1-form arises this way, and by using (*) it follows that any differential 1-form α on U may be expressed in coordinates as α = ∑ i = 1 n f i d x i {\displaystyle \alpha =\sum _{i=1}^{n}f_{i}\,dx^{i}} for some smooth functions fi on U. The second idea leading to differential forms arises from the following question: given a differential 1-form α on U, when does there exist a function f on U such that α = df? The above expansion reduces this question to the search for a function f whose partial derivatives ∂f / ∂xi are equal to n given functions fi. For n > 1, such a function does not always exist: any smooth function f satisfies ∂ 2 f ∂ x i ∂ x j = ∂ 2 f ∂ x j ∂ x i , {\displaystyle {\frac {\partial ^{2}f}{\partial x^{i}\,\partial x^{j}}}={\frac {\partial ^{2}f}{\partial x^{j}\,\partial x^{i}}},} so it will be impossible to find such an f unless ∂ f j ∂ x i − ∂ f i ∂ x j = 0 {\displaystyle {\frac {\partial f_{j}}{\partial x^{i}}}-{\frac {\partial f_{i}}{\partial x^{j}}}=0} for all i and j. 
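This integrability condition can be tested numerically on two sample 1-forms, one exact and one not (a finite-difference sketch with illustrative helper names):

```python
def partial(f, i, p, h=1e-6):
    # central-difference approximation to the i-th partial derivative at p
    q, r = list(p), list(p)
    q[i] += h
    r[i] -= h
    return (f(q) - f(r)) / (2 * h)

p = [1.3, -0.7]
# alpha = 2xy dx + x^2 dy = d(x^2 y): the obstruction vanishes
f1 = lambda u: 2 * u[0] * u[1]
f2 = lambda u: u[0] ** 2
assert abs(partial(f2, 0, p) - partial(f1, 1, p)) < 1e-6
# beta = -y dx + x dy: the obstruction is 2, so no f has beta = df
g1 = lambda u: -u[1]
g2 = lambda u: u[0]
assert abs(partial(g2, 0, p) - partial(g1, 1, p) - 2) < 1e-6
```
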
The skew-symmetry of the left hand side in i and j suggests introducing an antisymmetric product ∧ on differential 1-forms, the exterior product, so that these equations can be combined into a single condition ∑ i , j = 1 n ∂ f j ∂ x i d x i ∧ d x j = 0 , {\displaystyle \sum _{i,j=1}^{n}{\frac {\partial f_{j}}{\partial x^{i}}}\,dx^{i}\wedge dx^{j}=0,} where ∧ is defined so that: d x i ∧ d x j = − d x j ∧ d x i . {\displaystyle dx^{i}\wedge dx^{j}=-dx^{j}\wedge dx^{i}.} This is an example of a differential 2-form. This 2-form is called the exterior derivative dα of α = ∑nj=1 fj dxj. It is given by d α = ∑ j = 1 n d f j ∧ d x j = ∑ i , j = 1 n ∂ f j ∂ x i d x i ∧ d x j . {\displaystyle d\alpha =\sum _{j=1}^{n}df_{j}\wedge dx^{j}=\sum _{i,j=1}^{n}{\frac {\partial f_{j}}{\partial x^{i}}}\,dx^{i}\wedge dx^{j}.} To summarize: dα = 0 is a necessary condition for the existence of a function f with α = df. Differential 0-forms, 1-forms, and 2-forms are special cases of differential forms. For each k, there is a space of differential k-forms, which can be expressed in terms of the coordinates as ∑ i 1 , i 2 … i k = 1 n f i 1 i 2 … i k d x i 1 ∧ d x i 2 ∧ ⋯ ∧ d x i k {\displaystyle \sum _{i_{1},i_{2}\ldots i_{k}=1}^{n}f_{i_{1}i_{2}\ldots i_{k}}\,dx^{i_{1}}\wedge dx^{i_{2}}\wedge \cdots \wedge dx^{i_{k}}} for a collection of functions fi1i2⋅⋅⋅ik. Antisymmetry, which was already present for 2-forms, makes it possible to restrict the sum to those sets of indices for which i1 < i2 < ... < ik−1 < ik. Differential forms can be multiplied together using the exterior product, and for any differential k-form α, there is a differential (k + 1)-form dα called the exterior derivative of α. Differential forms, the exterior product and the exterior derivative are independent of a choice of coordinates. Consequently, they may be defined on any smooth manifold M. 
One way to do this is to cover M with coordinate charts and define a differential k-form on M to be a family of differential k-forms on each chart which agree on the overlaps. However, there are more intrinsic definitions which make the independence of coordinates manifest. == Intrinsic definitions == Let M be a smooth manifold. A smooth differential form of degree k is a smooth section of the kth exterior power of the cotangent bundle of M. The set of all differential k-forms on a manifold M is a vector space, often denoted Ω k ( M ) {\displaystyle \Omega ^{k}(M)} . The definition of a differential form may be restated as follows. At any point p ∈ M {\displaystyle p\in M} , a k-form β {\displaystyle \beta } defines an element β p ∈ ⋀ k T p ∗ M , {\displaystyle \beta _{p}\in {\textstyle \bigwedge }^{k}T_{p}^{*}M,} where T p M {\displaystyle T_{p}M} is the tangent space to M at p and T p ∗ ( M ) {\displaystyle T_{p}^{*}(M)} is its dual space. This space is naturally isomorphic to the fiber at p of the dual bundle of the kth exterior power of the tangent bundle of M. That is, β {\displaystyle \beta } is also a linear functional β p : ⋀ k T p M → R {\textstyle \beta _{p}\colon {\textstyle \bigwedge }^{k}T_{p}M\to \mathbf {R} } , i.e. the dual of the kth exterior power is isomorphic to the kth exterior power of the dual: ⋀ k T p ∗ M ≅ ( ⋀ k T p M ) ∗ {\displaystyle {\textstyle \bigwedge }^{k}T_{p}^{*}M\cong {\Big (}{\textstyle \bigwedge }^{k}T_{p}M{\Big )}^{*}} By the universal property of exterior powers, this is equivalently an alternating multilinear map: β p : ⨁ n = 1 k T p M → R . {\displaystyle \beta _{p}\colon \bigoplus _{n=1}^{k}T_{p}M\to \mathbf {R} .} Consequently, a differential k-form may be evaluated against any k-tuple of tangent vectors to the same point p of M. For example, a differential 1-form α assigns to each point p ∈ M {\displaystyle p\in M} a linear functional αp on T p M {\displaystyle T_{p}M} . 
In the presence of an inner product on T p M {\displaystyle T_{p}M} (induced by a Riemannian metric on M), αp may be represented as the inner product with a tangent vector X p {\displaystyle X_{p}} . Differential 1-forms are sometimes called covariant vector fields, covector fields, or "dual vector fields", particularly within physics. The exterior algebra may be embedded in the tensor algebra by means of the alternation map. The alternation map is defined as a mapping Alt : ⨂ k T ∗ M → ⨂ k T ∗ M . {\displaystyle \operatorname {Alt} \colon {\bigotimes }^{k}T^{*}M\to {\bigotimes }^{k}T^{*}M.} For a tensor τ {\displaystyle \tau } at a point p, Alt ⁡ ( τ p ) ( x 1 , … , x k ) = 1 k ! ∑ σ ∈ S k sgn ⁡ ( σ ) τ p ( x σ ( 1 ) , … , x σ ( k ) ) , {\displaystyle \operatorname {Alt} (\tau _{p})(x_{1},\dots ,x_{k})={\frac {1}{k!}}\sum _{\sigma \in S_{k}}\operatorname {sgn}(\sigma )\tau _{p}(x_{\sigma (1)},\dots ,x_{\sigma (k)}),} where Sk is the symmetric group on k elements. The alternation map is constant on the cosets of the ideal in the tensor algebra generated by the symmetric 2-forms, and therefore descends to an embedding Alt : ⋀ k T ∗ M → ⨂ k T ∗ M . {\displaystyle \operatorname {Alt} \colon {\textstyle \bigwedge }^{k}T^{*}M\to {\bigotimes }^{k}T^{*}M.} This map exhibits β {\displaystyle \beta } as a totally antisymmetric covariant tensor field of rank k. The differential forms on M are in one-to-one correspondence with such tensor fields. == Operations == As well as the addition and multiplication by scalar operations which arise from the vector space structure, there are several other standard operations defined on differential forms. 
The most important operations are the exterior product of two differential forms, the exterior derivative of a single differential form, the interior product of a differential form and a vector field, the Lie derivative of a differential form with respect to a vector field and the covariant derivative of a differential form with respect to a vector field on a manifold with a defined connection. === Exterior product === The exterior product of a k-form α and an ℓ-form β, denoted α ∧ β, is a (k + ℓ)-form. At each point p of the manifold M, the forms α and β are elements of an exterior power of the cotangent space at p. When the exterior algebra is viewed as a quotient of the tensor algebra, the exterior product corresponds to the tensor product (modulo the equivalence relation defining the exterior algebra). The antisymmetry inherent in the exterior algebra means that when α ∧ β is viewed as a multilinear functional, it is alternating. However, when the exterior algebra is embedded as a subspace of the tensor algebra by means of the alternation map, the tensor product α ⊗ β is not alternating. There is an explicit formula which describes the exterior product in this situation. The exterior product is α ∧ β = Alt ⁡ ( α ⊗ β ) . {\displaystyle \alpha \wedge \beta =\operatorname {Alt} (\alpha \otimes \beta ).} If the embedding of ⋀ n T ∗ M {\displaystyle {\textstyle \bigwedge }^{n}T^{*}M} into ⨂ n T ∗ M {\displaystyle {\bigotimes }^{n}T^{*}M} is done via the map n ! Alt {\displaystyle n!\operatorname {Alt} } instead of Alt {\displaystyle \operatorname {Alt} } , the exterior product is α ∧ β = ( k + ℓ ) ! k ! ℓ ! Alt ⁡ ( α ⊗ β ) . {\displaystyle \alpha \wedge \beta ={\frac {(k+\ell )!}{k!\ell !}}\operatorname {Alt} (\alpha \otimes \beta ).} This description is useful for explicit computations. 
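The two normalizations above differ only by the factor (k + ℓ)!/(k! ℓ!); for two 1-forms this factor is 2, which the sketch below verifies by antisymmetrizing the tensor product dx ⊗ dy directly:

```python
from itertools import permutations
from math import factorial

def sign(perm):
    # parity of a permutation via its inversion count
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

def alt(tau, k):
    # Alt(tau)(x_1, ..., x_k) = (1/k!) sum over sigma of sgn(sigma) tau(x_sigma)
    return lambda *xs: sum(
        sign(s) * tau(*(xs[i] for i in s)) for s in permutations(range(k))
    ) / factorial(k)

dx = lambda v: v[0]
dy = lambda v: v[1]
tensor = lambda v, w: dx(v) * dy(w)      # dx ⊗ dy, not alternating
v, w = (2.0, 3.0), (1.0, 5.0)

det = dx(v) * dy(w) - dx(w) * dy(v)      # (dx ∧ dy)(v, w) in the determinant convention
assert alt(tensor, 2)(v, w) == det / 2   # Alt(dx ⊗ dy) carries the extra 1/2
assert alt(tensor, 2)(v, w) == -alt(tensor, 2)(w, v)   # alternating, as claimed
```
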
For example, if k = ℓ = 1, then α ∧ β is the 2-form whose value at a point p is the alternating bilinear form defined by ( α ∧ β ) p ( v , w ) = α p ( v ) β p ( w ) − α p ( w ) β p ( v ) {\displaystyle (\alpha \wedge \beta )_{p}(v,w)=\alpha _{p}(v)\beta _{p}(w)-\alpha _{p}(w)\beta _{p}(v)} for v, w ∈ TpM. The exterior product is bilinear: If α, β, and γ are any differential forms, and if f is any smooth function, then α ∧ ( β + γ ) = α ∧ β + α ∧ γ , {\displaystyle \alpha \wedge (\beta +\gamma )=\alpha \wedge \beta +\alpha \wedge \gamma ,} α ∧ ( f ⋅ β ) = f ⋅ ( α ∧ β ) . {\displaystyle \alpha \wedge (f\cdot \beta )=f\cdot (\alpha \wedge \beta ).} It is skew commutative (also known as graded commutative), meaning that it satisfies a variant of anticommutativity that depends on the degrees of the forms: if α is a k-form and β is an ℓ-form, then α ∧ β = ( − 1 ) k ℓ β ∧ α . {\displaystyle \alpha \wedge \beta =(-1)^{k\ell }\beta \wedge \alpha .} One also has the graded Leibniz rule: d ( α ∧ β ) = d α ∧ β + ( − 1 ) k α ∧ d β . {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta .} === Riemannian manifold === On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, the metric defines a fibre-wise isomorphism of the tangent and cotangent bundles. This makes it possible to convert vector fields to covector fields and vice versa. It also enables the definition of additional operations such as the Hodge star operator ⋆ : Ω k ( M ) → ∼ Ω n − k ( M ) {\displaystyle \star \colon \Omega ^{k}(M)\ {\stackrel {\sim }{\to }}\ \Omega ^{n-k}(M)} and the codifferential δ : Ω k ( M ) → Ω k − 1 ( M ) {\displaystyle \delta \colon \Omega ^{k}(M)\rightarrow \Omega ^{k-1}(M)} , which has degree −1 and is adjoint to the exterior differential d. 
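The alternation-map description of the exterior product lends itself to direct computation. Below is a small numerical sketch (numpy; the function names are illustrative, not from any library) of α ∧ β = ((k + ℓ)!/(k! ℓ!)) Alt(α ⊗ β) for forms stored as totally antisymmetric arrays, checked against the explicit k = ℓ = 1 formula and graded commutativity:

```python
from itertools import permutations
from math import factorial
import numpy as np

def sgn(perm):
    """Sign of a permutation given as a sequence of indices."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def alt(tau):
    """Alternation map: average of sgn(sigma) * tau over permuted index orders."""
    k = tau.ndim
    out = np.zeros(tau.shape)
    for p in permutations(range(k)):
        out += sgn(p) * np.transpose(tau, p)
    return out / factorial(k)

def wedge(a, b):
    """alpha ∧ beta = ((k+l)! / (k! l!)) Alt(alpha ⊗ beta), for a k-form and an
    l-form on R^n stored as antisymmetric arrays of shape (n,)*k and (n,)*l."""
    k, l = a.ndim, b.ndim
    return factorial(k + l) // (factorial(k) * factorial(l)) * alt(np.tensordot(a, b, axes=0))

# Two 1-forms (covectors) on R^3:
alpha = np.array([1.0, 2.0, 0.0])
beta = np.array([0.0, 1.0, 3.0])
W = wedge(alpha, beta)  # a 2-form: W[i, j] = alpha_i beta_j - alpha_j beta_i
```

Evaluating W on tangent vectors v, w as `v @ W @ w` reproduces the bilinear form αp(v)βp(w) − αp(w)βp(v) from the k = ℓ = 1 case above.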
==== Vector field structures ==== On a pseudo-Riemannian manifold, 1-forms can be identified with vector fields; vector fields have additional distinct algebraic structures, which are listed here for context and to avoid confusion. Firstly, each (co)tangent space generates a Clifford algebra, where the product of a (co)vector with itself is given by the value of a quadratic form – in this case, the natural one induced by the metric. This algebra is distinct from the exterior algebra of differential forms, which can be viewed as a Clifford algebra where the quadratic form vanishes (since the exterior product of any vector with itself is zero). Clifford algebras are thus non-anticommutative ("quantum") deformations of the exterior algebra. They are studied in geometric algebra. Another alternative is to consider vector fields as derivations. The (noncommutative) algebra of differential operators they generate is the Weyl algebra and is a noncommutative ("quantum") deformation of the symmetric algebra in the vector fields. === Exterior differential complex === One important property of the exterior derivative is that d2 = 0. This means that the exterior derivative defines a cochain complex: 0 → Ω 0 ( M ) → d Ω 1 ( M ) → d Ω 2 ( M ) → d Ω 3 ( M ) → ⋯ → Ω n ( M ) → 0. {\displaystyle 0\ \to \ \Omega ^{0}(M)\ {\stackrel {d}{\to }}\ \Omega ^{1}(M)\ {\stackrel {d}{\to }}\ \Omega ^{2}(M)\ {\stackrel {d}{\to }}\ \Omega ^{3}(M)\ \to \ \cdots \ \to \ \Omega ^{n}(M)\ \to \ 0.} This complex is called the de Rham complex, and its cohomology is by definition the de Rham cohomology of M. By the Poincaré lemma, the de Rham complex is locally exact except at Ω0(M). The kernel at Ω0(M) is the space of locally constant functions on M. Therefore, the complex is a resolution of the constant sheaf R, which in turn implies a form of de Rham's theorem: de Rham cohomology computes the sheaf cohomology of R. == Pullback == Suppose that f : M → N is smooth. 
The differential of f is a smooth map df : TM → TN between the tangent bundles of M and N. This map is also denoted f∗ and called the pushforward. For any point p ∈ M and any tangent vector v ∈ TpM, there is a well-defined pushforward vector f∗(v) in Tf(p)N. However, the same is not true of a vector field. If f is not injective, say because q ∈ N has two or more preimages, then the vector field may determine two or more distinct vectors in TqN. If f is not surjective, then there will be a point q ∈ N at which f∗ does not determine any tangent vector at all. Since a vector field on N determines, by definition, a unique tangent vector at every point of N, the pushforward of a vector field does not always exist. By contrast, it is always possible to pull back a differential form. A differential form on N may be viewed as a linear functional on each tangent space. Precomposing this functional with the differential df : TM → TN defines a linear functional on each tangent space of M and therefore a differential form on M. The existence of pullbacks is one of the key features of the theory of differential forms. It leads to the existence of pullback maps in other situations, such as pullback homomorphisms in de Rham cohomology. Formally, let f : M → N be smooth, and let ω be a smooth k-form on N. Then there is a differential form f∗ω on M, called the pullback of ω, which captures the behavior of ω as seen relative to f. To define the pullback, fix a point p of M and tangent vectors v1, ..., vk to M at p. The pullback of ω is defined by the formula ( f ∗ ω ) p ( v 1 , … , v k ) = ω f ( p ) ( f ∗ v 1 , … , f ∗ v k ) . {\displaystyle (f^{*}\omega )_{p}(v_{1},\ldots ,v_{k})=\omega _{f(p)}(f_{*}v_{1},\ldots ,f_{*}v_{k}).} There are several more abstract ways to view this definition. If ω is a 1-form on N, then it may be viewed as a section of the cotangent bundle T∗N of N. Using ∗ to denote a dual map, the dual to the differential of f is (df)∗ : T∗N → T∗M. 
The pullback of ω may be defined to be the composite M → f N → ω T ∗ N ⟶ ( d f ) ∗ T ∗ M . {\displaystyle M\ {\stackrel {f}{\to }}\ N\ {\stackrel {\omega }{\to }}\ T^{*}N\ {\stackrel {(df)^{*}}{\longrightarrow }}\ T^{*}M.} This is a section of the cotangent bundle of M and hence a differential 1-form on M. In full generality, let ⋀ k ( d f ) ∗ {\textstyle \bigwedge ^{k}(df)^{*}} denote the kth exterior power of the dual map to the differential. Then the pullback of a k-form ω is the composite M → f N → ω ⋀ k T ∗ N ⟶ ⋀ k ( d f ) ∗ ⋀ k T ∗ M . {\displaystyle M\ {\stackrel {f}{\to }}\ N\ {\stackrel {\omega }{\to }}\ {\textstyle \bigwedge }^{k}T^{*}N\ {\stackrel {{\bigwedge }^{k}(df)^{*}}{\longrightarrow }}\ {\textstyle \bigwedge }^{k}T^{*}M.} Another abstract way to view the pullback comes from viewing a k-form ω as a linear functional on tangent spaces. From this point of view, ω is a morphism of vector bundles ⋀ k T N → ω N × R , {\displaystyle {\textstyle \bigwedge }^{k}TN\ {\stackrel {\omega }{\to }}\ N\times \mathbf {R} ,} where N × R is the trivial rank one bundle on N. The composite map ⋀ k T M ⟶ ⋀ k d f ⋀ k T N → ω N × R {\displaystyle {\textstyle \bigwedge }^{k}TM\ {\stackrel {{\bigwedge }^{k}df}{\longrightarrow }}\ {\textstyle \bigwedge }^{k}TN\ {\stackrel {\omega }{\to }}\ N\times \mathbf {R} } defines a linear functional on each tangent space of M, and therefore it factors through the trivial bundle M × R. The vector bundle morphism ⋀ k T M → M × R {\textstyle {\textstyle \bigwedge }^{k}TM\to M\times \mathbf {R} } defined in this way is f∗ω. Pullback respects all of the basic operations on forms. If ω and η are forms and c is a real number, then f ∗ ( c ω ) = c ( f ∗ ω ) , f ∗ ( ω + η ) = f ∗ ω + f ∗ η , f ∗ ( ω ∧ η ) = f ∗ ω ∧ f ∗ η , f ∗ ( d ω ) = d ( f ∗ ω ) . 
{\displaystyle {\begin{aligned}f^{*}(c\omega )&=c(f^{*}\omega ),\\f^{*}(\omega +\eta )&=f^{*}\omega +f^{*}\eta ,\\f^{*}(\omega \wedge \eta )&=f^{*}\omega \wedge f^{*}\eta ,\\f^{*}(d\omega )&=d(f^{*}\omega ).\end{aligned}}} The pullback of a form can also be written in coordinates. Assume that x1, ..., xm are coordinates on M, that y1, ..., yn are coordinates on N, and that these coordinate systems are related by the formulas yi = fi(x1, ..., xm) for all i. Locally on N, ω can be written as ω = ∑ i 1 < ⋯ < i k ω i 1 ⋯ i k d y i 1 ∧ ⋯ ∧ d y i k , {\displaystyle \omega =\sum _{i_{1}<\cdots <i_{k}}\omega _{i_{1}\cdots i_{k}}\,dy^{i_{1}}\wedge \cdots \wedge dy^{i_{k}},} where, for each choice of i1, ..., ik, ωi1⋅⋅⋅ik is a real-valued function of y1, ..., yn. Using the linearity of pullback and its compatibility with exterior product, the pullback of ω has the formula f ∗ ω = ∑ i 1 < ⋯ < i k ( ω i 1 ⋯ i k ∘ f ) d f i 1 ∧ ⋯ ∧ d f i k . {\displaystyle f^{*}\omega =\sum _{i_{1}<\cdots <i_{k}}(\omega _{i_{1}\cdots i_{k}}\circ f)\,df_{i_{1}}\wedge \cdots \wedge df_{i_{k}}.} Each exterior derivative dfi can be expanded in terms of dx1, ..., dxm. The resulting k-form can be written using Jacobian matrices: f ∗ ω = ∑ i 1 < ⋯ < i k ∑ j 1 < ⋯ < j k ( ω i 1 ⋯ i k ∘ f ) ∂ ( f i 1 , … , f i k ) ∂ ( x j 1 , … , x j k ) d x j 1 ∧ ⋯ ∧ d x j k . {\displaystyle f^{*}\omega =\sum _{i_{1}<\cdots <i_{k}}\sum _{j_{1}<\cdots <j_{k}}(\omega _{i_{1}\cdots i_{k}}\circ f){\frac {\partial (f_{i_{1}},\ldots ,f_{i_{k}})}{\partial (x^{j_{1}},\ldots ,x^{j_{k}})}}\,dx^{j_{1}}\wedge \cdots \wedge dx^{j_{k}}.} Here, ∂ ( f i 1 , … , f i k ) ∂ ( x j 1 , … , x j k ) {\textstyle {\frac {\partial (f_{i_{1}},\ldots ,f_{i_{k}})}{\partial (x^{j_{1}},\ldots ,x^{j_{k}})}}} denotes the determinant of the matrix whose entries are ∂ f i m ∂ x j n {\textstyle {\frac {\partial f_{i_{m}}}{\partial x^{j_{n}}}}} , 1 ≤ m , n ≤ k {\displaystyle 1\leq m,n\leq k} . 
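The Jacobian formula can be checked symbolically. The sketch below (sympy; the polar-coordinates map is an illustrative choice, not from the text) pulls back ω = dx ∧ dy along f(r, θ) = (r cos θ, r sin θ) and recovers the familiar coefficient r of dr ∧ dθ:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# The map f(r, theta) = (r cos theta, r sin theta), written in components:
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# For the 2-form omega = dx ∧ dy, the pullback formula reduces to a single
# Jacobian determinant ∂(x, y)/∂(r, theta) multiplying dr ∧ dtheta.
jacobian = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
                      [sp.diff(y, r), sp.diff(y, theta)]])
coeff = sp.simplify(jacobian.det())  # coefficient of dr ∧ dtheta in f*(dx ∧ dy)
```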
== Integration == A differential k-form can be integrated over an oriented k-dimensional manifold. When the k-form is defined on an n-dimensional manifold with n > k, then the k-form can be integrated over oriented k-dimensional submanifolds. If k = 0, integration over oriented 0-dimensional submanifolds is just the summation of the integrand evaluated at points, according to the orientation of those points. Other values of k = 1, 2, 3, ... correspond to line integrals, surface integrals, volume integrals, and so on. There are several equivalent ways to formally define the integral of a differential form, all of which depend on reducing to the case of Euclidean space. === Integration on Euclidean space === Let U be an open subset of Rn. Give Rn its standard orientation and U the restriction of that orientation. Every smooth n-form ω on U has the form ω = f ( x ) d x 1 ∧ ⋯ ∧ d x n {\displaystyle \omega =f(x)\,dx^{1}\wedge \cdots \wedge dx^{n}} for some smooth function f : Rn → R. Such a function has an integral in the usual Riemann or Lebesgue sense. This allows us to define the integral of ω to be the integral of f: ∫ U ω = def ∫ U f ( x ) d x 1 ⋯ d x n . {\displaystyle \int _{U}\omega \ {\stackrel {\text{def}}{=}}\int _{U}f(x)\,dx^{1}\cdots dx^{n}.} Fixing an orientation is necessary for this to be well-defined. The skew-symmetry of differential forms means that the integral of, say, dx1 ∧ dx2 must be the negative of the integral of dx2 ∧ dx1. Riemann and Lebesgue integrals cannot see this dependence on the ordering of the coordinates, so they leave the sign of the integral undetermined. The orientation resolves this ambiguity. === Integration over chains === Let M be an n-manifold and ω an n-form on M. First, assume that there is a parametrization of M by an open subset of Euclidean space. That is, assume that there exists a diffeomorphism φ : D → M {\displaystyle \varphi \colon D\to M} where D ⊆ Rn. Give M the orientation induced by φ. 
Then (Rudin 1976) defines the integral of ω over M to be the integral of φ∗ω over D. In coordinates, this has the following expression. Fix an embedding of M in RI with coordinates x1, ..., xI. Then ω = ∑ i 1 < ⋯ < i n a i 1 , … , i n ( x ) d x i 1 ∧ ⋯ ∧ d x i n . {\displaystyle \omega =\sum _{i_{1}<\cdots <i_{n}}a_{i_{1},\ldots ,i_{n}}({\mathbf {x} })\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{n}}.} Suppose that φ is defined by φ ( u ) = ( x 1 ( u ) , … , x I ( u ) ) . {\displaystyle \varphi ({\mathbf {u} })=(x^{1}({\mathbf {u} }),\ldots ,x^{I}({\mathbf {u} })).} Then the integral may be written in coordinates as ∫ M ω = ∫ D ∑ i 1 < ⋯ < i n a i 1 , … , i n ( φ ( u ) ) ∂ ( x i 1 , … , x i n ) ∂ ( u 1 , … , u n ) d u 1 ⋯ d u n , {\displaystyle \int _{M}\omega =\int _{D}\sum _{i_{1}<\cdots <i_{n}}a_{i_{1},\ldots ,i_{n}}(\varphi ({\mathbf {u} })){\frac {\partial (x^{i_{1}},\ldots ,x^{i_{n}})}{\partial (u^{1},\dots ,u^{n})}}\,du^{1}\cdots du^{n},} where ∂ ( x i 1 , … , x i n ) ∂ ( u 1 , … , u n ) {\displaystyle {\frac {\partial (x^{i_{1}},\ldots ,x^{i_{n}})}{\partial (u^{1},\ldots ,u^{n})}}} is the determinant of the Jacobian. The Jacobian exists because φ is differentiable. In general, an n-manifold cannot be parametrized by an open subset of Rn. But such a parametrization is always possible locally, so it is possible to define integrals over arbitrary manifolds by defining them as sums of integrals over collections of local parametrizations. Moreover, it is also possible to define parametrizations of k-dimensional subsets for k < n, and this makes it possible to define integrals of k-forms. To make this precise, it is convenient to fix a standard domain D in Rk, usually a cube or a simplex. A k-chain is a formal sum of smooth embeddings D → M. That is, it is a collection of smooth embeddings, each of which is assigned an integer multiplicity. Each smooth embedding determines a k-dimensional submanifold of M. 
If the chain is c = ∑ i = 1 r m i φ i , {\displaystyle c=\sum _{i=1}^{r}m_{i}\varphi _{i},} then the integral of a k-form ω over c is defined to be the sum of the integrals over the terms of c: ∫ c ω = ∑ i = 1 r m i ∫ D φ i ∗ ω . {\displaystyle \int _{c}\omega =\sum _{i=1}^{r}m_{i}\int _{D}\varphi _{i}^{*}\omega .} This approach to defining integration does not assign a direct meaning to integration over the whole manifold M. However, it is still possible to assign such a meaning indirectly because every smooth manifold may be smoothly triangulated in an essentially unique way, and the integral over M may be defined to be the integral over the chain determined by a triangulation. === Integration using partitions of unity === There is another approach, expounded in (Dieudonné 1972), which does directly assign a meaning to integration over M, but this approach requires fixing an orientation of M. The integral of an n-form ω on an n-dimensional manifold is defined by working in charts. Suppose first that ω is supported on a single positively oriented chart. On this chart, it may be pulled back to an n-form on an open subset of Rn. Here, the form has a well-defined Riemann or Lebesgue integral as before. The change of variables formula and the assumption that the chart is positively oriented together ensure that the integral of ω is independent of the chosen chart. In the general case, use a partition of unity to write ω as a sum of n-forms, each of which is supported in a single positively oriented chart, and define the integral of ω to be the sum of the integrals of each term in the partition of unity. It is also possible to integrate k-forms on oriented k-dimensional submanifolds using this more intrinsic approach. The form is pulled back to the submanifold, where the integral is defined using charts as before. 
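To illustrate the reduction to ordinary Euclidean integrals, the following sympy sketch integrates the 1-form ω = x dy over the unit circle, viewed as a 1-chain with a single embedding φ(t) = (cos t, sin t) of multiplicity 1 (an illustrative example, not from the text):

```python
import sympy as sp

t = sp.symbols('t')

# phi : [0, 2*pi] -> R^2 parametrizes the unit circle.
x, y = sp.cos(t), sp.sin(t)

# Pull back omega = x dy along phi: phi^*(x dy) = cos(t) d(sin t) = cos(t)**2 dt.
pullback = x * sp.diff(y, t)

# The integral of omega over the chain is the ordinary integral of the pullback.
integral = sp.integrate(pullback, (t, 0, 2 * sp.pi))  # equals pi
```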
For example, given a path γ(t) : [0, 1] → R2, integrating a 1-form on the path is simply pulling back the form to a form f(t) dt on [0, 1], and this integral is the integral of the function f(t) on the interval. === Integration along fibers === Fubini's theorem states that the integral over a set that is a product may be computed as an iterated integral over the two factors in the product. This suggests that the integral of a differential form over a product ought to be computable as an iterated integral as well. The geometric flexibility of differential forms ensures that this is possible not just for products, but in more general situations as well. Under some hypotheses, it is possible to integrate along the fibers of a smooth map, and the analog of Fubini's theorem is the case where this map is the projection from a product to one of its factors. Because integrating a differential form over a submanifold requires fixing an orientation, a prerequisite to integration along fibers is the existence of a well-defined orientation on those fibers. Let M and N be two orientable manifolds of pure dimensions m and n, respectively. Suppose that f : M → N is a surjective submersion. This implies that each fiber f−1(y) is (m − n)-dimensional and that, around each point of M, there is a chart on which f looks like the projection from a product onto one of its factors. Fix x ∈ M and set y = f(x). Suppose that ω x ∈ ⋀ m T x ∗ M , η y ∈ ⋀ n T y ∗ N , {\displaystyle {\begin{aligned}\omega _{x}&\in {\textstyle \bigwedge }^{m}T_{x}^{*}M,\\[2pt]\eta _{y}&\in {\textstyle \bigwedge }^{n}T_{y}^{*}N,\end{aligned}}} and that ηy does not vanish. Following (Dieudonné 1972), there is a unique σ x ∈ ⋀ m − n T x ∗ ( f − 1 ( y ) ) {\displaystyle \sigma _{x}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}(f^{-1}(y))} which may be thought of as the fibral part of ωx with respect to ηy. More precisely, define j : f−1(y) → M to be the inclusion. 
Then σx is defined by the property that ω x = ( f ∗ η y ) x ∧ σ x ′ ∈ ⋀ m T x ∗ M , {\displaystyle \omega _{x}=(f^{*}\eta _{y})_{x}\wedge \sigma '_{x}\in {\textstyle \bigwedge }^{m}T_{x}^{*}M,} where σ x ′ ∈ ⋀ m − n T x ∗ M {\displaystyle \sigma '_{x}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}M} is any (m − n)-covector for which σ x = j ∗ σ x ′ . {\displaystyle \sigma _{x}=j^{*}\sigma '_{x}.} The form σx may also be notated ωx / ηy. Moreover, for fixed y, σx varies smoothly with respect to x. That is, suppose that ω : f − 1 ( y ) → T ∗ M {\displaystyle \omega \colon f^{-1}(y)\to T^{*}M} is a smooth section of the projection map; we say that ω is a smooth differential m-form on M along f−1(y). Then there is a smooth differential (m − n)-form σ on f−1(y) such that, at each x ∈ f−1(y), σ x = ω x / η y . {\displaystyle \sigma _{x}=\omega _{x}/\eta _{y}.} This form is denoted ω / ηy. The same construction works if ω is an m-form in a neighborhood of the fiber, and the same notation is used. A consequence is that each fiber f−1(y) is orientable. In particular, a choice of orientation forms on M and N defines an orientation of every fiber of f. The analog of Fubini's theorem is as follows. As before, M and N are two orientable manifolds of pure dimensions m and n, and f : M → N is a surjective submersion. Fix orientations of M and N, and give each fiber of f the induced orientation. Let ω be an m-form on M, and let η be an n-form on N that is almost everywhere positive with respect to the orientation of N. Then, for almost every y ∈ N, the form ω / ηy is a well-defined integrable m − n form on f−1(y). Moreover, there is an integrable n-form on N defined by y ↦ ( ∫ f − 1 ( y ) ω / η y ) η y . {\displaystyle y\mapsto {\bigg (}\int _{f^{-1}(y)}\omega /\eta _{y}{\bigg )}\,\eta _{y}.} Denote this form by ( ∫ f − 1 ( y ) ω / η ) η . 
{\displaystyle {\bigg (}\int _{f^{-1}(y)}\omega /\eta {\bigg )}\,\eta .} Then (Dieudonné 1972) proves the generalized Fubini formula ∫ M ω = ∫ N ( ∫ f − 1 ( y ) ω / η ) η . {\displaystyle \int _{M}\omega =\int _{N}{\bigg (}\int _{f^{-1}(y)}\omega /\eta {\bigg )}\,\eta .} It is also possible to integrate forms of other degrees along the fibers of a submersion. Assume the same hypotheses as before, and let α be a compactly supported (m − n + k)-form on M. Then there is a k-form γ on N which is the result of integrating α along the fibers of f. The form γ is defined by specifying, at each y ∈ N, how γ pairs with each k-vector v at y, and the value of that pairing is an integral over f−1(y) that depends only on α, v, and the orientations of M and N. More precisely, at each y ∈ N, there is an isomorphism ⋀ k T y N → ⋀ n − k T y ∗ N {\displaystyle {\textstyle \bigwedge }^{k}T_{y}N\to {\textstyle \bigwedge }^{n-k}T_{y}^{*}N} defined by the interior product v ↦ v ⌟ ζ y , {\displaystyle \mathbf {v} \mapsto \mathbf {v} \,\lrcorner \,\zeta _{y},} for any choice of volume form ζ in the orientation of N. If x ∈ f−1(y), then a k-vector v at y determines an (n − k)-covector at x by pullback: f ∗ ( v ⌟ ζ y ) ∈ ⋀ n − k T x ∗ M . {\displaystyle f^{*}(\mathbf {v} \,\lrcorner \,\zeta _{y})\in {\textstyle \bigwedge }^{n-k}T_{x}^{*}M.} Each of these covectors has an exterior product against α, so there is an (m − n)-form βv on M along f−1(y) defined by ( β v ) x = ( α x ∧ f ∗ ( v ⌟ ζ y ) ) / ζ y ∈ ⋀ m − n T x ∗ M . {\displaystyle (\beta _{\mathbf {v} })_{x}=\left(\alpha _{x}\wedge f^{*}(\mathbf {v} \,\lrcorner \,\zeta _{y})\right){\big /}\zeta _{y}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}M.} This form depends on the orientation of N but not the choice of ζ. Then the k-form γ is uniquely defined by the property ⟨ γ y , v ⟩ = ∫ f − 1 ( y ) β v , {\displaystyle \langle \gamma _{y},\mathbf {v} \rangle =\int _{f^{-1}(y)}\beta _{\mathbf {v} },} and γ is smooth (Dieudonné 1972).
This form is also denoted α♭ and is called the integral of α along the fibers of f. Integration along fibers is important for the construction of Gysin maps in de Rham cohomology. Integration along fibers satisfies the projection formula (Dieudonné 1972). If λ is any ℓ-form on N, then α ♭ ∧ λ = ( α ∧ f ∗ λ ) ♭ . {\displaystyle \alpha ^{\flat }\wedge \lambda =(\alpha \wedge f^{*}\lambda )^{\flat }.} === Stokes's theorem === The fundamental relationship between the exterior derivative and integration is given by Stokes' theorem: If ω is an (n − 1)-form with compact support on M and ∂M denotes the boundary of M with its induced orientation, then ∫ M d ω = ∫ ∂ M ω . {\displaystyle \int _{M}d\omega =\int _{\partial M}\omega .} A key consequence of this is that "the integral of a closed form over homologous chains is equal": If ω is a closed k-form and M and N are k-chains that are homologous (such that M − N is the boundary of a (k + 1)-chain W), then ∫ M ω = ∫ N ω {\displaystyle \textstyle {\int _{M}\omega =\int _{N}\omega }} , since the difference is the integral ∫ W d ω = ∫ W 0 = 0 {\displaystyle \textstyle \int _{W}d\omega =\int _{W}0=0} . For example, if ω = df is the derivative of a potential function on the plane or Rn, then the integral of ω over a path from a to b does not depend on the choice of path (the integral is f(b) − f(a)), since different paths with given endpoints are homotopic, hence homologous (a weaker condition). This case is called the gradient theorem, and generalizes the fundamental theorem of calculus. This path independence is very useful in contour integration. This theorem also underlies the duality between de Rham cohomology and the homology of chains.
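Stokes' theorem can be verified directly in a simple case. With ω = x dy on the closed unit disk D, dω = dx ∧ dy, so both sides of the theorem equal the area π (a sympy sketch; the example is illustrative, not from the text):

```python
import sympy as sp

t, r, theta = sp.symbols('t r theta')

# Left side: ∫_D d(omega) = ∫_D dx ∧ dy, computed in polar coordinates
# (the pullback of dx ∧ dy under the polar map is r dr ∧ dtheta).
lhs = sp.integrate(r, (r, 0, 1), (theta, 0, 2 * sp.pi))

# Right side: ∫_{∂D} omega, pulling x dy back along the boundary circle
# (cos t, sin t), which gives cos(t)**2 dt.
rhs = sp.integrate(sp.cos(t) * sp.diff(sp.sin(t), t), (t, 0, 2 * sp.pi))
```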
=== Relation with measures === On a general differentiable manifold (without additional structure), differential forms cannot be integrated over subsets of the manifold; this distinction is key to the distinction between differential forms, which are integrated over chains or oriented submanifolds, and measures, which are integrated over subsets. The simplest example is attempting to integrate the 1-form dx over the interval [0, 1]. Assuming the usual distance (and thus measure) on the real line, this integral is either 1 or −1, depending on orientation: ∫ 0 1 d x = 1 {\textstyle \int _{0}^{1}dx=1} , while ∫ 1 0 d x = − ∫ 0 1 d x = − 1 {\textstyle \int _{1}^{0}dx=-\int _{0}^{1}dx=-1} . By contrast, the integral of the measure |dx| on the interval is unambiguously 1 (i.e. the integral of the constant function 1 with respect to this measure is 1). Similarly, under a change of coordinates a differential n-form changes by the Jacobian determinant J, while a measure changes by the absolute value of the Jacobian determinant, |J|, which further reflects the issue of orientation. For example, under the map x ↦ −x on the line, the differential form dx pulls back to −dx; orientation has reversed; while the Lebesgue measure, which here we denote |dx|, pulls back to |dx|; it does not change. In the presence of the additional data of an orientation, it is possible to integrate n-forms (top-dimensional forms) over the entire manifold or over compact subsets; integration over the entire manifold corresponds to integrating the form over the fundamental class of the manifold, [M]. Formally, in the presence of an orientation, one may identify n-forms with densities on a manifold; densities in turn define a measure, and thus can be integrated (Folland 1999, Section 11.4, pp. 361–362). On an orientable but not oriented manifold, there are two choices of orientation; either choice allows one to integrate n-forms over compact subsets, with the two choices differing by a sign. 
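The sign behavior contrasted above is easy to check symbolically (sympy; a minimal sketch):

```python
import sympy as sp

x = sp.symbols('x')

# The 1-form dx is sensitive to orientation:
forward = sp.integrate(1, (x, 0, 1))   # ∫_0^1 dx = 1
backward = sp.integrate(1, (x, 1, 0))  # ∫_1^0 dx = -1

# Under the orientation-reversing map u = -x, the form dx picks up the
# Jacobian J = -1, while a measure |dx| transforms by |J| = 1.
J = sp.diff(-x, x)
```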
On a non-orientable manifold, n-forms and densities cannot be identified: notably, any top-dimensional form must vanish somewhere (there are no volume forms on non-orientable manifolds), but there are nowhere-vanishing densities. Thus, while one can integrate densities over compact subsets, one cannot integrate n-forms. One can instead identify densities with top-dimensional pseudoforms. Even in the presence of an orientation, there is in general no meaningful way to integrate k-forms over subsets for k < n because there is no consistent way to use the ambient orientation to orient k-dimensional subsets. Geometrically, a k-dimensional subset can be turned around in place, yielding the same subset with the opposite orientation; for example, the horizontal axis in a plane can be rotated by 180 degrees. Compare the Gram determinant of a set of k vectors in an n-dimensional space, which, unlike the determinant of n vectors, is always positive, corresponding to a squared number. An orientation of a k-submanifold is therefore extra data not derivable from the ambient manifold. On a Riemannian manifold, one may define a k-dimensional Hausdorff measure for any k (integer or real), which may be integrated over k-dimensional subsets of the manifold. A function times this Hausdorff measure can then be integrated over k-dimensional subsets, providing a measure-theoretic analog to integration of k-forms. The n-dimensional Hausdorff measure yields a density, as above. === Currents === The differential form analog of a distribution or generalized function is called a current. The space of k-currents on M is the dual space to an appropriate space of differential k-forms. Currents play the role of generalized domains of integration, similar to but even more flexible than chains. == Applications in physics == Differential forms arise in some important physical contexts.
For example, in Maxwell's theory of electromagnetism, the Faraday 2-form, or electromagnetic field strength, is F = 1 2 f a b d x a ∧ d x b , {\displaystyle \mathbf {F} ={\frac {1}{2}}f_{ab}\,dx^{a}\wedge dx^{b}\,,} where the fab are formed from the electromagnetic fields E → {\displaystyle {\vec {E}}} and B → {\displaystyle {\vec {B}}} ; e.g., f12 = Ez/c, f23 = −Bz, or equivalent definitions. This form is a special case of the curvature form on the U(1) principal bundle on which both electromagnetism and general gauge theories may be described. The connection form for the principal bundle is the vector potential, typically denoted by A, when represented in some gauge. One then has F = d A . {\displaystyle \mathbf {F} =d\mathbf {A} .} The current 3-form is J = 1 6 j a ε a b c d d x b ∧ d x c ∧ d x d , {\displaystyle \mathbf {J} ={\frac {1}{6}}j^{a}\,\varepsilon _{abcd}\,dx^{b}\wedge dx^{c}\wedge dx^{d}\,,} where ja are the four components of the current density. (Here it is a matter of convention to write Fab instead of fab, i.e. to use capital letters, and to write Ja instead of ja. However, the vector (respectively, tensor) components and the above-mentioned forms have different physical dimensions. Moreover, by decision of an international commission of the International Union of Pure and Applied Physics, the magnetic polarization vector has been called J → {\displaystyle {\vec {J}}} for several decades, and by some publishers J; i.e., the same name is used for different quantities.) Using the above-mentioned definitions, Maxwell's equations can be written very compactly in geometrized units as d F = 0 d ⋆ F = J , {\displaystyle {\begin{aligned}d{\mathbf {F} }&=\mathbf {0} \\d{\star \mathbf {F} }&=\mathbf {J} ,\end{aligned}}} where ⋆ {\displaystyle \star } denotes the Hodge star operator. Similar considerations describe the geometry of gauge theories in general.
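The homogeneous equation dF = 0 is automatic once F = dA, since d² = 0. The sympy sketch below checks this componentwise for a generic potential (the component functions A0…A3 are placeholders, not a physical gauge choice):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]

# A generic potential 1-form A = A_a dx^a with arbitrary smooth components.
A = [sp.Function(f'A{a}')(*X) for a in range(4)]

# F = dA in components: F_ab = ∂_a A_b - ∂_b A_a.
F = [[sp.diff(A[b], X[a]) - sp.diff(A[a], X[b]) for b in range(4)]
     for a in range(4)]

def bianchi(a, b, c):
    """One component of dF: ∂_a F_bc + ∂_b F_ca + ∂_c F_ab.

    Vanishes identically by the symmetry of mixed partial derivatives."""
    return sp.simplify(sp.diff(F[b][c], X[a]) + sp.diff(F[c][a], X[b])
                       + sp.diff(F[a][b], X[c]))
```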
The 2-form ⋆ F {\displaystyle {\star }\mathbf {F} } , which is dual to the Faraday form, is also called Maxwell 2-form. Electromagnetism is an example of a U(1) gauge theory. Here the Lie group is U(1), the one-dimensional unitary group, which is in particular abelian. There are gauge theories, such as Yang–Mills theory, in which the Lie group is not abelian. In that case, one gets relations which are similar to those described here. The analog of the field F in such theories is the curvature form of the connection, which is represented in a gauge by a Lie algebra-valued one-form A. The Yang–Mills field F is then defined by F = d A + A ∧ A . {\displaystyle \mathbf {F} =d\mathbf {A} +\mathbf {A} \wedge \mathbf {A} .} In the abelian case, such as electromagnetism, A ∧ A = 0, but this does not hold in general. Likewise the field equations are modified by additional terms involving exterior products of A and F, owing to the structure equations of the gauge group. == Applications in geometric measure theory == Numerous minimality results for complex analytic manifolds are based on the Wirtinger inequality for 2-forms. A succinct proof may be found in Herbert Federer's classic text Geometric Measure Theory. The Wirtinger inequality is also a key ingredient in Gromov's inequality for complex projective space in systolic geometry. == See also == Closed and exact differential forms Complex differential form Vector-valued differential form Equivariant differential form Calculus on Manifolds Multilinear form Polynomial differential form Presymplectic form == Notes == == References == == External links == Weisstein, Eric W. "Differential form". MathWorld. Sjamaar, Reyer (2006), Manifolds and differential forms lecture notes (PDF), a course taught at Cornell University. Bachman, David (2003), A Geometric Approach to Differential Forms, arXiv:math/0306194, Bibcode:2003math......6194B, an undergraduate text. Needham, Tristan. 
Visual differential geometry and forms: a mathematical drama in five acts. Princeton University Press, 2021.
Wikipedia/Exterior_calculus
In differential geometry, the Einstein tensor (named after Albert Einstein; also known as the trace-reversed Ricci tensor) is used to express the curvature of a pseudo-Riemannian manifold. In general relativity, it occurs in the Einstein field equations for gravitation that describe spacetime curvature in a manner that is consistent with conservation of energy and momentum. == Definition == The Einstein tensor G {\displaystyle {\boldsymbol {G}}} is a tensor of order 2 defined over pseudo-Riemannian manifolds. In index-free notation it is defined as G = R − 1 2 g R , {\displaystyle {\boldsymbol {G}}={\boldsymbol {R}}-{\frac {1}{2}}{\boldsymbol {g}}R,} where R {\displaystyle {\boldsymbol {R}}} is the Ricci tensor, g {\displaystyle {\boldsymbol {g}}} is the metric tensor and R {\displaystyle R} is the scalar curvature, which is computed as the trace of the Ricci tensor R μ ν {\displaystyle R_{\mu \nu }} by ⁠ R = g μ ν R μ ν {\displaystyle R=g^{\mu \nu }R_{\mu \nu }} ⁠. In component form, the previous equation reads as G μ ν = R μ ν − 1 2 g μ ν R . {\displaystyle G_{\mu \nu }=R_{\mu \nu }-{1 \over 2}g_{\mu \nu }R.} The Einstein tensor is symmetric G μ ν = G ν μ {\displaystyle G_{\mu \nu }=G_{\nu \mu }} and, like the on shell stress–energy tensor, has zero divergence: ∇ μ G μ ν = 0 . {\displaystyle \nabla _{\mu }G^{\mu \nu }=0\,.} == Explicit form == The Ricci tensor depends only on the metric tensor, so the Einstein tensor can be defined directly with just the metric tensor. However, this expression is complex and rarely quoted in textbooks. 
The complexity of this expression can be shown using the formula for the Ricci tensor in terms of Christoffel symbols: G α β = R α β − 1 2 g α β R = R α β − 1 2 g α β g γ ζ R γ ζ = ( δ α γ δ β ζ − 1 2 g α β g γ ζ ) R γ ζ = ( δ α γ δ β ζ − 1 2 g α β g γ ζ ) ( Γ ϵ γ ζ , ϵ − Γ ϵ γ ϵ , ζ + Γ ϵ ϵ σ Γ σ γ ζ − Γ ϵ ζ σ Γ σ ϵ γ ) , G α β = ( g α γ g β ζ − 1 2 g α β g γ ζ ) ( Γ ϵ γ ζ , ϵ − Γ ϵ γ ϵ , ζ + Γ ϵ ϵ σ Γ σ γ ζ − Γ ϵ ζ σ Γ σ ϵ γ ) , {\displaystyle {\begin{aligned}G_{\alpha \beta }&=R_{\alpha \beta }-{\frac {1}{2}}g_{\alpha \beta }R\\&=R_{\alpha \beta }-{\frac {1}{2}}g_{\alpha \beta }g^{\gamma \zeta }R_{\gamma \zeta }\\&=\left(\delta _{\alpha }^{\gamma }\delta _{\beta }^{\zeta }-{\frac {1}{2}}g_{\alpha \beta }g^{\gamma \zeta }\right)R_{\gamma \zeta }\\&=\left(\delta _{\alpha }^{\gamma }\delta _{\beta }^{\zeta }-{\frac {1}{2}}g_{\alpha \beta }g^{\gamma \zeta }\right)\left(\Gamma ^{\epsilon }{}_{\gamma \zeta ,\epsilon }-\Gamma ^{\epsilon }{}_{\gamma \epsilon ,\zeta }+\Gamma ^{\epsilon }{}_{\epsilon \sigma }\Gamma ^{\sigma }{}_{\gamma \zeta }-\Gamma ^{\epsilon }{}_{\zeta \sigma }\Gamma ^{\sigma }{}_{\epsilon \gamma }\right),\\[2pt]G^{\alpha \beta }&=\left(g^{\alpha \gamma }g^{\beta \zeta }-{\frac {1}{2}}g^{\alpha \beta }g^{\gamma \zeta }\right)\left(\Gamma ^{\epsilon }{}_{\gamma \zeta ,\epsilon }-\Gamma ^{\epsilon }{}_{\gamma \epsilon ,\zeta }+\Gamma ^{\epsilon }{}_{\epsilon \sigma }\Gamma ^{\sigma }{}_{\gamma \zeta }-\Gamma ^{\epsilon }{}_{\zeta \sigma }\Gamma ^{\sigma }{}_{\epsilon \gamma }\right),\end{aligned}}} where δ β α {\displaystyle \delta _{\beta }^{\alpha }} is the Kronecker tensor and the Christoffel symbol Γ α β γ {\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma }} is defined as Γ α β γ = 1 2 g α ϵ ( g β ϵ , γ + g γ ϵ , β − g β γ , ϵ ) . 
{\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma }={\frac {1}{2}}g^{\alpha \epsilon }\left(g_{\beta \epsilon ,\gamma }+g_{\gamma \epsilon ,\beta }-g_{\beta \gamma ,\epsilon }\right).} and terms of the form Γ β γ , μ α {\displaystyle \Gamma _{\beta \gamma ,\mu }^{\alpha }} or g β γ , μ {\displaystyle g_{\beta \gamma ,\mu }} represent partial derivatives in the μ-direction, e.g.: Γ α β γ , μ = ∂ μ Γ α β γ = ∂ ∂ x μ Γ α β γ {\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma ,\mu }=\partial _{\mu }\Gamma ^{\alpha }{}_{\beta \gamma }={\frac {\partial }{\partial x^{\mu }}}\Gamma ^{\alpha }{}_{\beta \gamma }} Before cancellations, this formula results in 2 × ( 6 + 6 + 9 + 9 ) = 60 {\displaystyle 2\times (6+6+9+9)=60} individual terms. Cancellations bring this number down somewhat. In the special case of a locally inertial reference frame near a point, the first derivatives of the metric tensor vanish and the component form of the Einstein tensor is considerably simplified: G α β = g γ μ [ g γ [ β , μ ] α + g α [ μ , β ] γ − 1 2 g α β g ϵ σ ( g ϵ [ μ , σ ] γ + g γ [ σ , μ ] ϵ ) ] = g γ μ ( δ α ϵ δ β σ − 1 2 g ϵ σ g α β ) ( g ϵ [ μ , σ ] γ + g γ [ σ , μ ] ϵ ) , {\displaystyle {\begin{aligned}G_{\alpha \beta }&=g^{\gamma \mu }\left[g_{\gamma [\beta ,\mu ]\alpha }+g_{\alpha [\mu ,\beta ]\gamma }-{\frac {1}{2}}g_{\alpha \beta }g^{\epsilon \sigma }\left(g_{\epsilon [\mu ,\sigma ]\gamma }+g_{\gamma [\sigma ,\mu ]\epsilon }\right)\right]\\&=g^{\gamma \mu }\left(\delta _{\alpha }^{\epsilon }\delta _{\beta }^{\sigma }-{\frac {1}{2}}g^{\epsilon \sigma }g_{\alpha \beta }\right)\left(g_{\epsilon [\mu ,\sigma ]\gamma }+g_{\gamma [\sigma ,\mu ]\epsilon }\right),\end{aligned}}} where square brackets conventionally denote antisymmetrization over bracketed indices, i.e. g α [ β , γ ] ϵ = 1 2 ( g α β , γ ϵ − g α γ , β ϵ ) . 
{\displaystyle g_{\alpha [\beta ,\gamma ]\epsilon }\,={\frac {1}{2}}\left(g_{\alpha \beta ,\gamma \epsilon }-g_{\alpha \gamma ,\beta \epsilon }\right).} == Trace == The trace of the Einstein tensor can be computed by contracting the equation in the definition with the metric tensor ⁠ g μ ν {\displaystyle g^{\mu \nu }} ⁠. In n {\displaystyle n} dimensions (of arbitrary signature): g μ ν G μ ν = g μ ν R μ ν − 1 2 g μ ν g μ ν R G = R − 1 2 ( n R ) = 2 − n 2 R {\displaystyle {\begin{aligned}g^{\mu \nu }G_{\mu \nu }&=g^{\mu \nu }R_{\mu \nu }-{1 \over 2}g^{\mu \nu }g_{\mu \nu }R\\G&=R-{1 \over 2}(nR)={{2-n} \over 2}R\end{aligned}}} Therefore, in the special case of ⁠ n = 4 {\displaystyle n=4} ⁠ dimensions, ⁠ G = − R {\displaystyle G=-R} ⁠. That is, the trace of the Einstein tensor is the negative of the Ricci tensor's trace. Thus, another name for the Einstein tensor is the trace-reversed Ricci tensor. This n = 4 {\displaystyle n=4} case is especially relevant in the theory of general relativity. == Use in general relativity == The Einstein tensor allows the Einstein field equations to be written in the concise form: G μ ν + Λ g μ ν = κ T μ ν , {\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },} where Λ {\displaystyle \Lambda } is the cosmological constant and κ {\displaystyle \kappa } is the Einstein gravitational constant. From the explicit form of the Einstein tensor, the Einstein tensor is a nonlinear function of the metric tensor, but is linear in the second partial derivatives of the metric. As a symmetric order-2 tensor, the Einstein tensor has 10 independent components in a 4-dimensional space. It follows that the Einstein field equations are a set of 10 quasilinear second-order partial differential equations for the metric tensor. The contracted Bianchi identities can also be easily expressed with the aid of the Einstein tensor: ∇ μ G μ ν = 0. 
{\displaystyle \nabla _{\mu }G^{\mu \nu }=0.} The (contracted) Bianchi identities automatically ensure the covariant conservation of the stress–energy tensor in curved spacetimes: ∇ μ T μ ν = 0. {\displaystyle \nabla _{\mu }T^{\mu \nu }=0.} The physical significance of the Einstein tensor is highlighted by this identity. In terms of the densitized stress tensor contracted on a Killing vector ⁠ ξ μ {\displaystyle \xi ^{\mu }} ⁠, an ordinary conservation law holds: ∂ μ ( − g T μ ν ξ ν ) = 0. {\displaystyle \partial _{\mu }\left({\sqrt {-g}}\ T^{\mu }{}_{\nu }\xi ^{\nu }\right)=0.} == Uniqueness == David Lovelock has shown that, in a four-dimensional differentiable manifold, the Einstein tensor is the only tensorial and divergence-free function of the g μ ν {\displaystyle g_{\mu \nu }} and at most their first and second partial derivatives. However, the Einstein field equation is not the only equation which satisfies the three conditions: Resemble but generalize Newton–Poisson gravitational equation Apply to all coordinate systems, and Guarantee local covariant conservation of energy–momentum for any metric tensor. Many alternative theories have been proposed, such as the Einstein–Cartan theory, that also satisfy the above conditions. == See also == Contracted Bianchi identities Vermeil's theorem Mathematics of general relativity General relativity resources == Notes == == References == Ohanian, Hans C.; Remo Ruffini (1994). Gravitation and Spacetime (Second ed.). W. W. Norton & Company. ISBN 978-0-393-96501-8. Martin, John Legat (1995). General Relativity: A First Course for Physicists. Prentice Hall International Series in Physics and Applied Physics (Revised ed.). Prentice Hall. ISBN 978-0-13-291196-2.
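The trace computation above, g^{μν}G_{μν} = (2 − n)/2 · R, is purely algebraic and can be verified with a short numerical sketch; the random symmetric matrix below merely stands in for a Ricci tensor and is not derived from any actual metric.

```python
import numpy as np

n = 4
g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
g_inv = np.linalg.inv(g)

# A random symmetric matrix standing in for the Ricci tensor R_{mu nu}.
rng = np.random.default_rng(0)
R = rng.standard_normal((n, n))
R = 0.5 * (R + R.T)

R_scalar = np.einsum('ab,ab->', g_inv, R)   # scalar curvature R = g^{mu nu} R_{mu nu}
G = R - 0.5 * g * R_scalar                  # G_{mu nu} = R_{mu nu} - (1/2) g_{mu nu} R
G_trace = np.einsum('ab,ab->', g_inv, G)    # trace g^{mu nu} G_{mu nu}

# Trace identity: g^{mu nu} G_{mu nu} = (2 - n)/2 * R; for n = 4, G = -R.
assert np.isclose(G_trace, (2 - n) / 2 * R_scalar)
assert np.isclose(G_trace, -R_scalar)
```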
Wikipedia/Einstein_tensor
In mathematics, mathematical physics, and theoretical physics, the spin tensor is a quantity used to describe the rotational motion of particles in spacetime. The spin tensor has application in general relativity and special relativity, as well as quantum mechanics, relativistic quantum mechanics, and quantum field theory. The special Euclidean group SE(d) of direct isometries is generated by translations and rotations. Its Lie algebra is written s e ( d ) {\displaystyle {\mathfrak {se}}(d)} . This article uses Cartesian coordinates and tensor index notation. == Background on Noether currents == The Noether current for translations in space is momentum, while the current for increments in time is energy. These two statements combine into one in spacetime: a translation in spacetime, i.e. a displacement between two events, is generated by the four-momentum P. Conservation of four-momentum is given by the continuity equation: ∂ ν T μ ν = 0 , {\displaystyle \partial _{\nu }T^{\mu \nu }=0\,,} where T μ ν {\displaystyle T^{\mu \nu }\,} is the stress–energy tensor, and ∂ are partial derivatives that make up the four-gradient (in non-Cartesian coordinates this must be replaced by the covariant derivative). Integrating over space: ∫ d 3 x T μ 0 ( x → , t ) = P μ {\displaystyle \int d^{3}xT^{\mu 0}\left({\vec {x}},t\right)=P^{\mu }} gives the four-momentum vector at time t. The Noether current for a rotation about the point y is given by a tensor of 3rd order, denoted M y α β μ {\displaystyle M_{y}^{\alpha \beta \mu }} .
Because of the Lie algebra relations M y α β μ ( x ) = M 0 α β μ ( x ) + y α T β μ ( x ) − y β T α μ ( x ) , {\displaystyle M_{y}^{\alpha \beta \mu }(x)=M_{0}^{\alpha \beta \mu }(x)+y^{\alpha }T^{\beta \mu }(x)-y^{\beta }T^{\alpha \mu }(x)\,,} where the 0 subscript indicates the origin (unlike momentum, angular momentum depends on the origin), the integral: ∫ d 3 x M 0 μ ν ( x → , t ) {\displaystyle \int d^{3}xM_{0}^{\mu \nu }({\vec {x}},t)} gives the angular momentum tensor M μ ν {\displaystyle M^{\mu \nu }\,} at time t. == Definition == The spin tensor is defined at a point x to be the value of the Noether current at x of a rotation about x, S α β μ ( x ) = d e f M x α β μ ( x ) = M 0 α β μ ( x ) + x α T β μ ( x ) − x β T α μ ( x ) {\displaystyle S^{\alpha \beta \mu }(\mathbf {x} )\mathrel {\stackrel {\mathrm {def} }{=}} M_{x}^{\alpha \beta \mu }(\mathbf {x} )=M_{0}^{\alpha \beta \mu }(\mathbf {x} )+x^{\alpha }T^{\beta \mu }(\mathbf {x} )-x^{\beta }T^{\alpha \mu }(\mathbf {x} )} The continuity equation ∂ μ M 0 α β μ = 0 , {\displaystyle \partial _{\mu }M_{0}^{\alpha \beta \mu }=0\,,} implies: ∂ μ S α β μ = T β α − T α β ≠ 0 {\displaystyle \partial _{\mu }S^{\alpha \beta \mu }=T^{\beta \alpha }-T^{\alpha \beta }\neq 0} and therefore, the stress–energy tensor is not a symmetric tensor. The quantity S is the density of spin angular momentum (spin in this case is not only for a point-like particle, but also for an extended body), and M is the density of orbital angular momentum. The total angular momentum is always the sum of spin and orbital contributions. The relation: T i j − T j i {\displaystyle T_{ij}-T_{ji}} gives the torque density showing the rate of conversion between the orbital angular momentum and spin. == Examples == Examples of materials with a nonzero spin density are molecular fluids, the electromagnetic field and turbulent fluids. For molecular fluids, the individual molecules may be spinning. 
The electromagnetic field can have circularly polarized light. For turbulent fluids, we may arbitrarily make a distinction between long wavelength phenomena and short wavelength phenomena. A long wavelength vorticity may be converted via turbulence into tinier and tinier vortices transporting the angular momentum into smaller and smaller wavelengths while simultaneously reducing the vorticity. This can be approximated by the eddy viscosity. == See also == Belinfante–Rosenfeld stress–energy tensor Poincaré group Lorentz group Relativistic angular momentum Mathisson–Papapetrou–Dixon equations Pauli–Lubanski pseudovector == References == A. K. Raychaudhuri; S. Banerji; A. Banerjee (2003). General Relativity, Astrophysics, and Cosmology. Astronomy and astrophysics library. Springer. pp. 66–67. ISBN 978-038-740-628-2. J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. pp. 156–159, §5.11. ISBN 978-0-7167-0344-0. L. M. Butcher; A. Lasenby; M. Hobson (2012). "Localizing the Angular Momentum of Linear Gravity". Phys. Rev. D. 86 (8): 084012. arXiv:1210.0831. Bibcode:2012PhRvD..86h4012B. doi:10.1103/PhysRevD.86.084012. S2CID 119220791. T. Banks (2008). "Modern Quantum Field Theory: A Concise Introduction". Cambridge University Press. ISBN 978-113-947-389-7. S. Kopeikin, M.Efroimsky, G. Kaplan (2011). "Relativistic Celestial Mechanics of the Solar System". John Wiley & Sons. ISBN 978-352-763-457-6.{{cite news}}: CS1 maint: multiple names: authors list (link) W. F. Maher; J. D. Zund (1968). "A spinor approach to the Lanczos spin tensor". Il Nuovo Cimento A. 10. 57 (4). Springer: 638–648. Bibcode:1968NCimA..57..638M. doi:10.1007/BF02751371. S2CID 124665829. == External links == von Jan Steinhoff. "Canonical Formulation of Spin in General Relativity (Dissertation)" (PDF). Retrieved 2013-10-27.
Wikipedia/Spin_tensor
In engineering, physics, and chemistry, the study of transport phenomena concerns the exchange of mass, energy, charge, momentum and angular momentum between observed and studied systems. While it draws from fields as diverse as continuum mechanics and thermodynamics, it places a heavy emphasis on the commonalities between the topics covered. Mass, momentum, and heat transport all share a very similar mathematical framework, and the parallels between them are exploited in the study of transport phenomena to draw deep mathematical connections that often provide very useful tools in the analysis of one field that are directly derived from the others. The fundamental analysis in all three subfields of mass, heat, and momentum transfer is often grounded in the simple principle that the total sum of the quantities being studied must be conserved by the system and its environment. Thus, the different phenomena that lead to transport are each considered individually with the knowledge that the sum of their contributions must equal zero. This principle is useful for calculating many relevant quantities. For example, in fluid mechanics, a common use of transport analysis is to determine the velocity profile of a fluid flowing through a rigid volume. Transport phenomena are ubiquitous throughout the engineering disciplines. Some of the most common examples of transport analysis in engineering are seen in the fields of process, chemical, biological, and mechanical engineering, but the subject is a fundamental component of the curriculum in all disciplines involved in any way with fluid mechanics, heat transfer, and mass transfer. It is now considered to be a part of the engineering discipline as much as thermodynamics, mechanics, and electromagnetism. Transport phenomena encompass all agents of physical change in the universe.
Moreover, they are considered to be fundamental building blocks which developed the universe, and which are responsible for the success of all life on Earth. However, the scope here is limited to the relationship of transport phenomena to artificial engineered systems. == Overview == In physics, transport phenomena are all irreversible processes of statistical nature stemming from the random continuous motion of molecules, mostly observed in fluids. Every aspect of transport phenomena is grounded in two primary concepts: the conservation laws and the constitutive equations. The conservation laws, which in the context of transport phenomena are formulated as continuity equations, describe how the quantity being studied must be conserved. The constitutive equations describe how the quantity in question responds to various stimuli via transport. Prominent examples include Fourier's law of heat conduction and the Navier–Stokes equations, which describe, respectively, the response of heat flux to temperature gradients and the relationship between fluid flux and the forces applied to the fluid. These equations also demonstrate the deep connection between transport phenomena and thermodynamics, a connection that explains why transport phenomena are irreversible. Almost all of these physical phenomena ultimately involve systems seeking their lowest energy state in keeping with the principle of minimum energy. As they approach this state, they tend to achieve true thermodynamic equilibrium, at which point there are no longer any driving forces in the system and transport ceases. The various aspects of such equilibrium are directly connected to a specific transport: heat transfer is the system's attempt to achieve thermal equilibrium with its environment, just as mass and momentum transport move the system towards chemical and mechanical equilibrium.
Examples of transport processes include heat conduction (energy transfer), fluid flow (momentum transfer), molecular diffusion (mass transfer), radiation and electric charge transfer in semiconductors. Transport phenomena have wide application. For example, in solid state physics, the motion and interaction of electrons, holes and phonons are studied under "transport phenomena". Another example is in biomedical engineering, where some transport phenomena of interest are thermoregulation, perfusion, and microfluidics. In chemical engineering, transport phenomena are studied in reactor design, analysis of molecular or diffusive transport mechanisms, and metallurgy. The transport of mass, energy, and momentum can be affected by the presence of external sources: An odor dissipates more slowly (and may intensify) when the source of the odor remains present. The rate of cooling of a solid that is conducting heat depends on whether a heat source is applied. The gravitational force acting on a rain drop counteracts the resistance or drag imparted by the surrounding air. == Commonalities among phenomena == An important principle in the study of transport phenomena is analogy between phenomena. === Diffusion === There are some notable similarities in equations for momentum, energy, and mass transfer which can all be transported by diffusion, as illustrated by the following examples: Mass: the spreading and dissipation of odors in air is an example of mass diffusion. Energy: the conduction of heat in a solid material is an example of heat diffusion. Momentum: the drag experienced by a rain drop as it falls in the atmosphere is an example of momentum diffusion (the rain drop loses momentum to the surrounding air through viscous stresses and decelerates). The molecular transfer equations of Newton's law for fluid momentum, Fourier's law for heat, and Fick's law for mass are very similar. 
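The similarity among these three molecular transfer laws can be made concrete by writing each flux as a diffusivity times a gradient. The property values below are rough illustrative figures for air near room temperature, and the gradients are arbitrary; this is a sketch of the shared mathematical form, not reference data.

```python
# The three molecular flux laws share the form: flux = -(diffusivity) * gradient.
# Rough illustrative property values for air near room temperature:
nu = 1.5e-5        # momentum diffusivity (kinematic viscosity), m^2/s
alpha = 2.2e-5     # thermal diffusivity, m^2/s
D_ab = 2.6e-5      # mass diffusivity of water vapor in air, m^2/s
rho, cp = 1.2, 1006.0   # density, kg/m^3; specific heat, J/(kg*K)

# Arbitrary illustrative gradients:
dvx_dz = 10.0      # velocity gradient, 1/s
dT_dx = -50.0      # temperature gradient, K/m
dC_dy = -0.004     # concentration gradient, kg/m^4

tau_zx = -rho * nu * dvx_dz       # Newton's law of viscosity, Pa
q = -(rho * cp * alpha) * dT_dx   # Fourier's law, with k = rho*cp*alpha, W/m^2
j_a = -D_ab * dC_dy               # Fick's first law, kg/(m^2*s)
print(tau_zx, q, j_a)
```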
One can convert from one transport coefficient to another in order to compare all three different transport phenomena. A great deal of effort has been devoted in the literature to developing analogies among these three transport processes for turbulent transfer so as to allow prediction of one from any of the others. The Reynolds analogy assumes that the turbulent diffusivities are all equal and that the molecular diffusivities of momentum (μ/ρ) and mass (DAB) are negligible compared to the turbulent diffusivities. When liquids are present and/or drag is present, the analogy is not valid. Other analogies, such as von Karman's and Prandtl's, usually result in poor relations. The most successful and most widely used analogy is the Chilton and Colburn J-factor analogy. This analogy is based on experimental data for gases and liquids in both the laminar and turbulent regimes. Although it is based on experimental data, it can be shown to satisfy the exact solution derived from laminar flow over a flat plate. All of this information is used to predict transfer of mass. === Onsager reciprocal relations === In fluid systems described in terms of temperature, matter density, and pressure, it is known that temperature differences lead to heat flows from the warmer to the colder parts of the system; similarly, pressure differences will lead to matter flow from high-pressure to low-pressure regions (a "reciprocal relation"). What is remarkable is the observation that, when both pressure and temperature vary, temperature differences at constant pressure can cause matter flow (as in convection) and pressure differences at constant temperature can cause heat flow. The heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal. This equality was shown to be necessary by Lars Onsager using statistical mechanics as a consequence of the time reversibility of microscopic dynamics. 
The theory developed by Onsager is much more general than this example and capable of treating more than two thermodynamic forces at once. == Momentum transfer == In momentum transfer, the fluid is treated as a continuous distribution of matter. The study of momentum transfer, or fluid mechanics can be divided into two branches: fluid statics (fluids at rest), and fluid dynamics (fluids in motion). When a fluid is flowing in the x-direction parallel to a solid surface, the fluid has x-directed momentum, and its concentration is υxρ. By random diffusion of molecules there is an exchange of molecules in the z-direction. Hence the x-directed momentum has been transferred in the z-direction from the faster- to the slower-moving layer. The equation for momentum transfer is Newton's law of viscosity written as follows: τ z x = − ρ ν ∂ υ x ∂ z {\displaystyle \tau _{zx}=-\rho \nu {\frac {\partial \upsilon _{x}}{\partial z}}} where τzx is the flux of x-directed momentum in the z-direction, ν is μ/ρ, the momentum diffusivity, z is the distance of transport or diffusion, ρ is the density, and μ is the dynamic viscosity. Newton's law of viscosity is the simplest relationship between the flux of momentum and the velocity gradient. It may be useful to note that this is an unconventional use of the symbol τzx; the indices are reversed as compared with standard usage in solid mechanics, and the sign is reversed. == Mass transfer == When a system contains two or more components whose concentration vary from point to point, there is a natural tendency for mass to be transferred, minimizing any concentration difference within the system. Mass transfer in a system is governed by Fick's first law: 'Diffusion flux from higher concentration to lower concentration is proportional to the gradient of the concentration of the substance and the diffusivity of the substance in the medium.' Mass transfer can take place due to different driving forces. 
Some of them are: Mass can be transferred by the action of a pressure gradient (pressure diffusion) Forced diffusion occurs because of the action of some external force Diffusion can be caused by temperature gradients (thermal diffusion) Diffusion can be caused by differences in chemical potential This can be compared to Fick's law of diffusion, for a species A in a binary mixture consisting of A and B: J A y = − D A B ∂ C a ∂ y {\displaystyle J_{Ay}=-D_{AB}{\frac {\partial Ca}{\partial y}}} where D is the diffusivity constant. == Heat transfer == Many important engineered systems involve heat transfer. Some examples are the heating and cooling of process streams, phase changes, distillation, etc. The basic principle is the Fourier's law which is expressed as follows for a static system: q ″ = − k d T d x {\displaystyle q''=-k{\frac {dT}{dx}}} The net flux of heat through a system equals the conductivity times the rate of change of temperature with respect to position. For convective transport involving turbulent flow, complex geometries, or difficult boundary conditions, the heat transfer may be represented by a heat transfer coefficient. Q = h ⋅ A ⋅ Δ T {\displaystyle Q=h\cdot A\cdot {\Delta T}} where A is the surface area, Δ T {\displaystyle {\Delta T}} is the temperature driving force, Q is the heat flow per unit time, and h is the heat transfer coefficient. Within heat transfer, two principal types of convection can occur: Forced convection can occur in both laminar and turbulent flow. In the situation of laminar flow in circular tubes, several dimensionless numbers are used such as Nusselt number, Reynolds number, and Prandtl number. The commonly used equation is N u a = h a D k {\displaystyle Nu_{a}={\frac {h_{a}D}{k}}} . Natural or free convection is a function of Grashof and Prandtl numbers. The complexities of free convection heat transfer make it necessary to mainly use empirical relations from experimental data. 
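As a minimal numerical sketch of the convective relations above, Q = h·A·ΔT and Nu = hD/k; the coefficient, area, and temperature values are illustrative assumptions, not design data.

```python
# Convective heat flow Q = h*A*dT and the Nusselt number Nu = h*D/k,
# with illustrative values (not design data):
h = 25.0       # assumed heat transfer coefficient, W/(m^2*K)
A = 2.0        # surface area, m^2
dT = 15.0      # temperature driving force, K
Q = h * A * dT              # heat flow per unit time: 750 W

D = 0.05       # tube diameter, m
k = 0.026      # approximate thermal conductivity of air, W/(m*K)
Nu = h * D / k              # dimensionless Nusselt number
print(Q, Nu)
```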
Heat transfer is analyzed in packed beds, nuclear reactors and heat exchangers. == Heat and mass transfer analogy == The heat and mass analogy allows solutions for mass transfer problems to be obtained from known solutions to heat transfer problems. It arises from similar non-dimensional governing equations between heat and mass transfer. === Derivation === The non-dimensional energy equation for fluid flow in a boundary layer can simplify to the following, when heating from viscous dissipation and heat generation can be neglected: u ∗ ∂ T ∗ ∂ x ∗ + v ∗ ∂ T ∗ ∂ y ∗ = 1 R e L P r ∂ 2 T ∗ ∂ y ∗ 2 {\displaystyle {u^{*}{\frac {\partial T^{*}}{\partial x^{*}}}}+{v^{*}{\frac {\partial T^{*}}{\partial y^{*}}}}={\frac {1}{Re_{L}Pr}}{\frac {\partial ^{2}T^{*}}{\partial y^{*2}}}} Where u ∗ {\displaystyle {u^{*}}} and v ∗ {\displaystyle {v^{*}}} are the velocities in the x and y directions respectively normalized by the free stream velocity, x ∗ {\displaystyle {x^{*}}} and y ∗ {\displaystyle {y^{*}}} are the x and y coordinates non-dimensionalized by a relevant length scale, R e L {\displaystyle {Re_{L}}} is the Reynolds number, P r {\displaystyle {Pr}} is the Prandtl number, and T ∗ {\displaystyle {T^{*}}} is the non-dimensional temperature, which is defined by the local, minimum, and maximum temperatures: T ∗ = T − T m i n T m a x − T m i n {\displaystyle T^{*}={\frac {T-T_{min}}{T_{max}-T_{min}}}} The non-dimensional species transport equation for fluid flow in a boundary layer can be given as the following, assuming no bulk species generation: u ∗ ∂ C A ∗ ∂ x ∗ + v ∗ ∂ C A ∗ ∂ y ∗ = 1 R e L S c ∂ 2 C A ∗ ∂ y ∗ 2 {\displaystyle {u^{*}{\frac {\partial C_{A}^{*}}{\partial x^{*}}}}+{v^{*}{\frac {\partial C_{A}^{*}}{\partial y^{*}}}}={\frac {1}{Re_{L}Sc}}{\frac {\partial ^{2}C_{A}^{*}}{\partial y^{*2}}}} Where C A ∗ {\displaystyle {C_{A}^{*}}} is the non-dimensional concentration, and S c {\displaystyle {Sc}} is the Schmidt number.
Transport of heat is driven by temperature differences, while transport of species is due to concentration differences. They differ by the relative diffusion of their transport compared to the diffusion of momentum. For heat, the comparison is between viscous diffusivity ( ν {\displaystyle {\nu }} ) and thermal diffusivity ( α {\displaystyle {\alpha }} ), given by the Prandtl number. Meanwhile, for mass transfer, the comparison is between viscous diffusivity ( ν {\displaystyle {\nu }} ) and mass diffusivity ( D {\displaystyle {D}} ), given by the Schmidt number. In some cases direct analytic solutions can be found from these equations for the Nusselt and Sherwood numbers. In cases where experimental results are used, one can assume these equations underlie the observed transport. At an interface, the boundary conditions for both equations are also similar. For heat transfer at an interface, the no-slip condition allows us to equate conduction with convection, thus equating Fourier's law and Newton's law of cooling: q ″ = k d T d y = h ( T s − T b ) {\displaystyle q''=k{\frac {dT}{dy}}=h(T_{s}-T_{b})} Where q″ is the heat flux, k {\displaystyle {k}} is the thermal conductivity, h {\displaystyle {h}} is the heat transfer coefficient, and the subscripts s {\displaystyle {s}} and b {\displaystyle {b}} denote the surface and bulk values respectively. For mass transfer at an interface, we can equate Fick's law with Newton's law for convection, yielding: J = D d C d y = h m ( C m − C b ) {\displaystyle J=D{\frac {dC}{dy}}=h_{m}(C_{m}-C_{b})} Where J {\displaystyle {J}} is the mass flux [kg/s·m 2 {\displaystyle {m^{2}}} ], D {\displaystyle {D}} is the diffusivity of species A in fluid B, and h m {\displaystyle {h_{m}}} is the mass transfer coefficient. As we can see, q ″ {\displaystyle {q''}} and J {\displaystyle {J}} are analogous, k {\displaystyle {k}} and D {\displaystyle {D}} are analogous, while T {\displaystyle {T}} and C {\displaystyle {C}} are analogous.
=== Implementing the Analogy === Because the Nu and Sh equations are derived from these analogous governing equations, one can directly swap the Nu and Sh and the Pr and Sc numbers to convert these equations between mass and heat. In many situations, such as flow over a flat plate, the Nu and Sh numbers are functions of the Pr and Sc numbers raised to some exponent n {\displaystyle n} . Therefore, one can directly calculate these numbers from one another using: N u S h = P r n S c n {\displaystyle {\frac {Nu}{Sh}}={\frac {Pr^{n}}{Sc^{n}}}} Here n = 1 / 3 {\displaystyle n=1/3} can be used in most cases, which comes from the analytical solution for the Nusselt number for laminar flow over a flat plate. For best accuracy, n should be adjusted where correlations have a different exponent. We can take this further by substituting into this equation the definitions of the heat transfer coefficient, mass transfer coefficient, and Lewis number, yielding: h h m = k D L e n = ρ C p L e 1 − n {\displaystyle {\frac {h}{h_{m}}}={\frac {k}{DLe^{n}}}=\rho C_{p}Le^{1-n}} For fully developed turbulent flow, with n = 1/3, this becomes the Chilton–Colburn J-factor analogy. Said analogy also relates viscous forces and heat transfer, like the Reynolds analogy.
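A sketch of using the last relation to estimate a mass transfer coefficient from a measured heat transfer coefficient, taking n = 1/3; all property values are rough figures for dilute water vapor in air and are assumptions for illustration, not reference data.

```python
# Estimate a mass transfer coefficient h_m from a measured heat transfer
# coefficient h via h / h_m = rho * c_p * Le**(1 - n), with n = 1/3.
# All property values are rough figures for dilute water vapor in air:
rho = 1.2          # air density, kg/m^3
cp = 1006.0        # specific heat of air, J/(kg*K)
alpha = 2.2e-5     # thermal diffusivity of air, m^2/s
D_ab = 2.6e-5      # diffusivity of water vapor in air, m^2/s
Le = alpha / D_ab  # Lewis number

h = 30.0           # assumed measured heat transfer coefficient, W/(m^2*K)
n = 1.0 / 3.0
h_m = h / (rho * cp * Le ** (1 - n))   # mass transfer coefficient, m/s
print(h_m)
```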
Further, in multicomponent mixtures, the transport of one species is affected by the chemical potential gradients of other species. The heat and mass analogy may also break down in cases where the governing equations differ substantially. For instance, situations with substantial contributions from generation terms in the flow, such as bulk heat generation or bulk chemical reactions, may cause solutions to diverge. === Applications of the Heat-Mass Analogy === The analogy is useful both for using heat and mass transport to predict one another and for understanding systems which experience simultaneous heat and mass transfer. For example, predicting heat transfer coefficients around turbine blades is challenging and is often done by measuring the evaporation of a volatile compound and using the analogy. Many systems also experience simultaneous mass and heat transfer, and particularly common examples occur in processes with phase change, as the enthalpy of phase change often substantially influences heat transfer. Such examples include: evaporation at a water surface, transport of vapor in the air gap above a membrane distillation desalination membrane, and HVAC dehumidification equipment that combines heat transfer and selective membranes.
== See also == Constitutive equation Continuity equation Wave propagation Pulse Action potential Bioheat transfer == References == == External links == Transport Phenomena Archive Archived 2017-10-08 at the Wayback Machine in the Teaching Archives of the Materials Digital Library Pathway "Some Classical Transport Phenomena Problems with Solutions – Fluid Mechanics". "Some Classical Transport Phenomena Problems with Solutions – Heat Transfer". "Some Classical Transport Phenomena Problems with Solutions – Mass Transfer".
Wikipedia/Transport_phenomena
Multivariable calculus (also known as multivariate calculus) is the extension of calculus in one variable to calculus with functions of several variables: the differentiation and integration of functions involving multiple variables (multivariate), rather than just one. Multivariable calculus may be thought of as an elementary part of calculus on Euclidean space. The special case of calculus in three-dimensional space is often called vector calculus. == Introduction == In single-variable calculus, operations like differentiation and integration are applied to functions of a single variable. In multivariate calculus, these operations must be generalized to functions of multiple variables, and the domain is therefore multi-dimensional. Care is therefore required in these generalizations, because of two key differences between 1D and higher-dimensional spaces: There are infinitely many ways to approach a single point in higher dimensions, as opposed to two (from the positive and negative directions) in 1D; There are multiple extended objects associated with the dimension; for example, a function of one variable is represented as a curve on the 2D Cartesian plane, while a function of two variables is a surface in 3D, and curves can also live in 3D space. The consequence of the first difference is the difference in the definition of the limit and differentiation. Directional limits and derivatives define the limit and differential along a 1D parametrized curve, reducing the problem to the 1D case. Further higher-dimensional objects can be constructed from these operators. The consequence of the second difference is the existence of multiple types of integration, including line integrals, surface integrals and volume integrals. Due to the non-uniqueness of these integrals, an antiderivative or indefinite integral cannot be properly defined. == Limits == A study of limits and continuity in multivariable calculus yields many counterintuitive results not demonstrated by single-variable functions.
A limit along a path may be defined by considering a parametrised path s ( t ) : R → R n {\displaystyle s(t):\mathbb {R} \to \mathbb {R} ^{n}} in n-dimensional Euclidean space. Any function f ( x → ) : R n → R m {\displaystyle f({\overrightarrow {x}}):\mathbb {R} ^{n}\to \mathbb {R} ^{m}} can then be projected on the path as a 1D function f ( s ( t ) ) {\displaystyle f(s(t))} . The limit of f {\displaystyle f} to the point s ( t 0 ) {\displaystyle s(t_{0})} along the path s ( t ) {\displaystyle s(t)} can hence be defined as lim t → t 0 f ( s ( t ) ) . {\displaystyle \lim _{t\to t_{0}}f(s(t)).} Note that the value of this limit can be dependent on the form of s ( t ) {\displaystyle s(t)} , i.e. the path chosen, not just the point which the limit approaches.: 19–22  For example, consider the function f ( x , y ) = x 2 y x 4 + y 2 . {\displaystyle f(x,y)={\frac {x^{2}y}{x^{4}+y^{2}}}.} If the point ( 0 , 0 ) {\displaystyle (0,0)} is approached through the line y = k x {\displaystyle y=kx} , or in parametric form x ( t ) = t , y ( t ) = k t {\displaystyle x(t)=t,\,y(t)=kt} , then the limit along the path will be lim t → 0 f ( t , k t ) = lim t → 0 k t t 2 + k 2 = 0. {\displaystyle \lim _{t\to 0}f(t,kt)=\lim _{t\to 0}{\frac {kt}{t^{2}+k^{2}}}=0.} On the other hand, if the path y = ± x 2 {\displaystyle y=\pm x^{2}} (or parametrically, x ( t ) = t , y ( t ) = ± t 2 {\displaystyle x(t)=t,\,y(t)=\pm t^{2}} ) is chosen, then the limit becomes lim t → 0 f ( t , ± t 2 ) = ± 1 2 . {\displaystyle \lim _{t\to 0}f\left(t,\pm t^{2}\right)=\pm {\frac {1}{2}}.} Since taking different paths towards the same point yields different values, a general limit at the point ( 0 , 0 ) {\displaystyle (0,0)} cannot be defined for the function. A general limit can be defined if the limits to a point along all possible paths converge to the same value, i.e. we say for a function f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} that the limit of f {\displaystyle f} to some point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} is L, if and only if lim t → t 0 f ( s ( t ) ) = L {\displaystyle \lim _{t\to t_{0}}f(s(t))=L} for all continuous functions s ( t ) : R → R n {\displaystyle s(t):\mathbb {R} \to \mathbb {R} ^{n}} such that s ( t 0 ) = x 0 {\displaystyle s(t_{0})=x_{0}} .
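The path dependence in this example can be checked numerically. The following sketch (function and parameter names are illustrative) samples f near the origin along a straight line and along the parabola; the two paths approach different values:

```python
# Hypothetical sketch: sample f(x, y) = x^2*y / (x^4 + y^2) near (0, 0)
# along two different paths; the sampled values approach different limits.
def f(x, y):
    return x**2 * y / (x**4 + y**2)

def limit_along(path, t=1e-4):
    """Approximate the limit as t -> 0 by evaluating f at a small t."""
    x, y = path(t)
    return f(x, y)

line = lambda t, k=3.0: (t, k * t)   # y = k*x, here with k = 3
parabola = lambda t: (t, t**2)       # y = x^2

print(limit_along(line))       # close to 0
print(limit_along(parabola))   # 0.5 along this path
```

Shrinking `t` further drives the line value toward 0 while the parabola value stays at 1/2, which is exactly the obstruction to a general limit at the origin.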
=== Continuity === From the concept of limit along a path, we can then derive the definition for multivariate continuity in the same manner, that is: we say for a function f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} that f {\displaystyle f} is continuous at the point x 0 {\displaystyle x_{0}} , if and only if lim t → t 0 f ( s ( t ) ) = f ( x 0 ) {\displaystyle \lim _{t\to t_{0}}f(s(t))=f(x_{0})} for all continuous functions s ( t ) : R → R n {\displaystyle s(t):\mathbb {R} \to \mathbb {R} ^{n}} such that s ( t 0 ) = x 0 {\displaystyle s(t_{0})=x_{0}} . As with limits, being continuous along one path s ( t ) {\displaystyle s(t)} does not imply multivariate continuity. Continuity in each argument not being sufficient for multivariate continuity can also be seen from the following example.: 17–19  For example, for a real-valued function f : R 2 → R {\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} } with two real-valued parameters, f ( x , y ) {\displaystyle f(x,y)} , continuity of f {\displaystyle f} in x {\displaystyle x} for fixed y {\displaystyle y} and continuity of f {\displaystyle f} in y {\displaystyle y} for fixed x {\displaystyle x} does not imply continuity of f {\displaystyle f} . Consider f ( x , y ) = { y x − y if 0 ≤ y < x ≤ 1 x y − x if 0 ≤ x < y ≤ 1 1 − x if 0 < x = y 0 everywhere else . {\displaystyle f(x,y)={\begin{cases}{\frac {y}{x}}-y&{\text{if}}\quad 0\leq y<x\leq 1\\{\frac {x}{y}}-x&{\text{if}}\quad 0\leq x<y\leq 1\\1-x&{\text{if}}\quad 0<x=y\\0&{\text{everywhere else}}.\end{cases}}} It is easy to verify that this function is zero by definition on the boundary and outside of the quadrangle ( 0 , 1 ) × ( 0 , 1 ) {\displaystyle (0,1)\times (0,1)} . Furthermore, the functions defined for constant x {\displaystyle x} and y {\displaystyle y} and 0 ≤ a ≤ 1 {\displaystyle 0\leq a\leq 1} by g a ( x ) = f ( x , a ) {\displaystyle g_{a}(x)=f(x,a)\quad } and h a ( y ) = f ( a , y ) {\displaystyle \quad h_{a}(y)=f(a,y)\quad } are continuous.
Specifically, g 0 ( x ) = f ( x , 0 ) = 0 {\displaystyle g_{0}(x)=f(x,0)=0} and h 0 ( y ) = f ( 0 , y ) = 0 {\displaystyle h_{0}(y)=f(0,y)=0} for all x and y. Therefore, f ( 0 , 0 ) = 0 {\displaystyle f(0,0)=0} and moreover, along the coordinate axes, lim x → 0 f ( x , 0 ) = 0 {\displaystyle \lim _{x\to 0}f(x,0)=0} and lim y → 0 f ( 0 , y ) = 0 {\displaystyle \lim _{y\to 0}f(0,y)=0} . Therefore the function is continuous along both individual arguments. However, consider the parametric path x ( t ) = t , y ( t ) = t {\displaystyle x(t)=t,\,y(t)=t} . The parametric function becomes f ( t , t ) = 1 − t for 0 < t ≤ 1 {\displaystyle f(t,t)=1-t\quad {\text{for}}\quad 0<t\leq 1} . Therefore, lim t → 0 f ( t , t ) = 1 ≠ 0 = f ( 0 , 0 ) . {\displaystyle \lim _{t\to 0}f(t,t)=1\neq 0=f(0,0).} It is hence clear that the function is not multivariate continuous, despite being continuous in both coordinates. === Theorems regarding multivariate limits and continuity === All properties of linearity and superposition from single-variable calculus carry over to multivariate calculus. Composition: If f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} and g : R m → R p {\displaystyle g:\mathbb {R} ^{m}\to \mathbb {R} ^{p}} are both multivariate continuous functions at the points x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} and f ( x 0 ) ∈ R m {\displaystyle f(x_{0})\in \mathbb {R} ^{m}} respectively, then g ∘ f : R n → R p {\displaystyle g\circ f:\mathbb {R} ^{n}\to \mathbb {R} ^{p}} is also a multivariate continuous function at the point x 0 {\displaystyle x_{0}} . Multiplication: If f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } and g : R n → R {\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} } are both continuous functions at the point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} , then f g : R n → R {\displaystyle fg:\mathbb {R} ^{n}\to \mathbb {R} } is continuous at x 0 {\displaystyle x_{0}} , and f / g : R n → R {\displaystyle f/g:\mathbb {R} ^{n}\to \mathbb {R} } is also continuous at x 0 {\displaystyle x_{0}} provided that g ( x 0 ) ≠ 0 {\displaystyle g(x_{0})\neq 0} .
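The piecewise counterexample is easy to probe numerically. In this sketch (the sampling points are arbitrary), the function evaluates to 0 along each coordinate axis near the origin but stays near 1 on the diagonal x = y:

```python
# A numerical look at the piecewise counterexample: f is 0 along each
# coordinate axis near the origin, but near 1 on the diagonal x = y,
# so it cannot be multivariate continuous at (0, 0).
def f(x, y):
    if 0 <= y < x <= 1:
        return y / x - y
    if 0 <= x < y <= 1:
        return x / y - x
    if x == y and 0 < x:
        return 1 - x
    return 0.0

print(f(1e-9, 0.0), f(0.0, 1e-9))   # both 0.0 (along the axes)
print(f(1e-9, 1e-9))                # about 1.0 (along the diagonal)
```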
If f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is a continuous function at point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} , then | f | {\displaystyle |f|} is also continuous at the same point. If f : R n → R m {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} is Lipschitz continuous (with the appropriate normed spaces as needed) in the neighbourhood of the point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} , then f {\displaystyle f} is multivariate continuous at x 0 {\displaystyle x_{0}} . == Differentiation == === Directional derivative === The derivative of a single-variable function is defined as f ′ ( a ) = lim h → 0 f ( a + h ) − f ( a ) h . {\displaystyle f'(a)=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}.} Using the extension of limits discussed above, one can then extend the definition of the derivative to a scalar-valued function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } along some path s ( t ) : R → R n {\displaystyle s(t):\mathbb {R} \to \mathbb {R} ^{n}} : lim t → t 0 f ( s ( t ) ) − f ( s ( t 0 ) ) t − t 0 . {\displaystyle \lim _{t\to t_{0}}{\frac {f(s(t))-f(s(t_{0}))}{t-t_{0}}}.} Unlike limits, for which the value depends on the exact form of the path s ( t ) {\displaystyle s(t)} , it can be shown that the derivative along the path depends only on the tangent vector of the path at s ( t 0 ) {\displaystyle s(t_{0})} , i.e. s ′ ( t 0 ) {\displaystyle s'(t_{0})} , provided that f {\displaystyle f} is Lipschitz continuous at s ( t 0 ) {\displaystyle s(t_{0})} , and that the limit exists for at least one such path. It is therefore possible to generate the definition of the directional derivative as follows: The directional derivative of a scalar-valued function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } along the unit vector u ^ {\displaystyle {\hat {\mathbf {u}}}} at some point x 0 ∈ R n {\displaystyle x_{0}\in \mathbb {R} ^{n}} is ∇ u ^ f ( x 0 ) = lim t → 0 f ( x 0 + u ^ t ) − f ( x 0 ) t , {\displaystyle \nabla _{\hat {\mathbf {u}}}f(x_{0})=\lim _{t\to 0}{\frac {f(x_{0}+{\hat {\mathbf {u}}}t)-f(x_{0})}{t}},} or, when expressed in terms of ordinary differentiation, ∇ u ^ f ( x 0 ) = d d t f ( x 0 + u ^ t ) | t = 0 , {\displaystyle \nabla _{\hat {\mathbf {u}}}f(x_{0})=\left.{\frac {d}{dt}}f(x_{0}+{\hat {\mathbf {u}}}t)\right|_{t=0},} which is a well defined expression because f ( x 0 + u ^ t ) {\displaystyle f(x_{0}+{\hat {\mathbf {u}}}t)} is a scalar function with one variable in t {\displaystyle t} .
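The reduction to a one-variable function of t is also how directional derivatives are computed in practice. The sketch below (all names and the step size are illustrative choices) approximates the directional derivative by a central difference of t ↦ f(x0 + t·u):

```python
import math

# Illustrative sketch: approximate the directional derivative by a
# central difference of the one-variable function t -> f(x0 + t*u).
def directional_derivative(f, x0, u, h=1e-6):
    fp = f([a + h * b for a, b in zip(x0, u)])
    fm = f([a - h * b for a, b in zip(x0, u)])
    return (fp - fm) / (2 * h)

f = lambda x: x[0]**2 + 3 * x[0] * x[1]     # f(x, y) = x^2 + 3xy
u = (1 / math.sqrt(2), 1 / math.sqrt(2))    # unit vector along (1, 1)

# grad f = (2x + 3y, 3x) = (8, 3) at (1, 2), so D_u f = 11 / sqrt(2).
print(directional_derivative(f, (1.0, 2.0), u))
# Reversing the direction flips the sign of the derivative.
print(directional_derivative(f, (1.0, 2.0), (-u[0], -u[1])))
```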
It is not possible to define a unique scalar derivative without a direction; it is clear for example that ∇ u ^ f ( x 0 ) = − ∇ − u ^ f ( x 0 ) {\displaystyle \nabla _{\hat {\mathbf {u}}}f(x_{0})=-\nabla _{-{\hat {\mathbf {u}}}}f(x_{0})} . It is also possible for directional derivatives to exist for some directions but not for others. === Partial derivative === The partial derivative generalizes the notion of the derivative to higher dimensions. A partial derivative of a multivariable function is a derivative with respect to one variable with all other variables held constant.: 26ff  A partial derivative may be thought of as the directional derivative of the function along a coordinate axis. Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator ( ∇ {\displaystyle \nabla } ) is used to define the concepts of gradient, divergence, and curl in terms of partial derivatives. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which directly varies from point to point in the domain of the function. Differential equations containing partial derivatives are called partial differential equations or PDEs. These equations are generally more difficult to solve than ordinary differential equations, which contain derivatives with respect to only one variable.: 654ff  == Multiple integration == The multiple integral extends the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. 
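As a concrete illustration of partial derivatives assembled into a Jacobian matrix, here is a small numerical sketch (the example function F and the step size h are arbitrary choices, not from the text): each column is obtained by varying one variable while holding the others constant.

```python
# Sketch of a numerical Jacobian built from partial derivatives.
def jacobian(F, x, h=1e-6):
    m, n = len(F(x)), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h   # vary one variable ...
        xm[j] -= h   # ... holding the others constant
        Fp, Fm = F(xp), F(xm)
        for i in range(m):
            J[i][j] = (Fp[i] - Fm[i]) / (2 * h)
    return J

# F(x, y) = (x*y, x + y^2) has exact Jacobian [[y, x], [1, 2y]].
F = lambda x: (x[0] * x[1], x[0] + x[1]**2)
print(jacobian(F, [2.0, 3.0]))   # approximately [[3, 2], [1, 6]]
```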
Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral or iterated integral as long as the integrand is continuous throughout the domain of integration.: 367ff  The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves. === Fundamental theorem of calculus in multiple dimensions === In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral. The link between the derivative and the integral in multivariable calculus is embodied by the integral theorems of vector calculus:: 543ff  Gradient theorem Stokes' theorem Divergence theorem Green's theorem. In a more advanced study of multivariable calculus, it is seen that these four theorems are specific incarnations of a more general theorem, the generalized Stokes' theorem, which applies to the integration of differential forms over manifolds. == Applications and uses == Techniques of multivariable calculus are used to study many objects of interest in the material world. In particular, Multivariable calculus can be applied to analyze deterministic systems that have multiple degrees of freedom. Functions with independent variables corresponding to each of the degrees of freedom are often used to model these systems, and multivariable calculus provides tools for characterizing the system dynamics. Multivariate calculus is used in the optimal control of continuous time dynamic systems. It is used in regression analysis to derive formulas for estimating relationships among various sets of empirical data. Multivariable calculus is used in many fields of natural and social science and engineering to model and study high-dimensional systems that exhibit deterministic behavior. In economics, for example, consumer choice over a variety of goods, and producer choice over various inputs to use and outputs to produce, are modeled with multivariate calculus. 
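Evaluating a double integral one axis at a time, as Fubini's theorem licenses, can be sketched with a simple midpoint rule (the integrand and region below are arbitrary examples, not from the text):

```python
# Hypothetical sketch: evaluate a double integral as an iterated integral,
# integrating over y for each fixed x, then over x (midpoint rule).
def iterated_integral(f, ax, bx, ay, by, n=200):
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        x = ax + (i + 0.5) * hx
        # Inner integral over y for this fixed x.
        inner = sum(f(x, ay + (j + 0.5) * hy) for j in range(n)) * hy
        total += inner * hx
    return total

# Integral of x*y over [0,1] x [0,2] equals (1/2) * 2 = 1.
print(iterated_integral(lambda x, y: x * y, 0.0, 1.0, 0.0, 2.0))  # ≈ 1.0
```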
Non-deterministic, or stochastic systems can be studied using a different kind of mathematics, such as stochastic calculus. == See also == List of multivariable calculus topics Multivariate statistics == References == == External links == UC Berkeley video lectures on Multivariable Calculus, Fall 2009, Professor Edward Frenkel MIT video lectures on Multivariable Calculus, Fall 2007 Multivariable Calculus: A free online textbook by George Cain and James Herod Multivariable Calculus Online: A free online textbook by Jeff Knisley Multivariable Calculus – A Very Quick Review, Prof. Blair Perot, University of Massachusetts Amherst Multivariable Calculus, Online text by Dr. Jerry Shurman
Wikipedia/Multivariable_calculus
In mathematics and physics, vector is a term that refers to quantities that cannot be expressed by a single number (a scalar), or to elements of some vector spaces. Historically, vectors were introduced in geometry and physics (typically in mechanics) for quantities that have both a magnitude and a direction, such as displacements, forces and velocity. Such quantities are represented by geometric vectors in the same way as distances, masses and time are represented by real numbers. The term vector is also used, in some contexts, for tuples, which are finite sequences (of numbers or other objects) of a fixed length. Both geometric vectors and tuples can be added and scaled, and these vector operations led to the concept of a vector space, which is a set equipped with a vector addition and a scalar multiplication that satisfy some axioms generalizing the main properties of operations on the above sorts of vectors. A vector space formed by geometric vectors is called a Euclidean vector space, and a vector space formed by tuples is called a coordinate vector space. Many vector spaces are considered in mathematics, such as extension fields, polynomial rings, algebras and function spaces. The term vector is generally not used for elements of these vector spaces, and is generally reserved for geometric vectors, tuples, and elements of unspecified vector spaces (for example, when discussing general properties of vector spaces). == Vectors in Euclidean geometry == == Vector quantities == == Vector spaces == == Vectors in algebra == Every algebra over a field is a vector space, but elements of an algebra are generally not called vectors. However, in some cases, they are called vectors, mainly due to historical reasons. Vector quaternion, a quaternion with a zero real part Multivector or p-vector, an element of the exterior algebra of a vector space. Spinors, also called spin vectors, have been introduced for extending the notion of rotation vector. 
In fact, rotation vectors represent rotations well locally, but not globally, because a closed loop in the space of rotation vectors may induce a curve in the space of rotations that is not a loop. Also, the manifold of rotation vectors is simply connected, while the manifold of rotations is not. Spinors are elements of a vector subspace of some Clifford algebra. Witt vector, an infinite sequence of elements of a commutative ring, which belongs to an algebra over this ring, and has been introduced for handling carry propagation in the operations on p-adic numbers. == Data represented by vectors == The set R n {\displaystyle \mathbb {R} ^{n}} of tuples of n real numbers has a natural structure of vector space defined by component-wise addition and scalar multiplication. It is common to call these tuples vectors, even in contexts where vector-space operations do not apply. More generally, when some data can be represented naturally by vectors, they are often called vectors even when addition and scalar multiplication of vectors are not valid operations on these data. Here are some examples. Rotation vector, a Euclidean vector whose direction is that of the axis of a rotation and magnitude is the angle of the rotation. Burgers vector, a vector that represents the magnitude and direction of the lattice distortion of dislocation in a crystal lattice Interval vector, in musical set theory, an array that expresses the intervallic content of a pitch-class set Probability vector, in statistics, a vector with non-negative entries that sum to one. Random vector or multivariate random variable, in statistics, a set of real-valued random variables that may be correlated. However, a random vector may also refer to a random variable that takes its values in a vector space. Logical vector, a vector of 0s and 1s (Booleans).
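The component-wise operations that give tuples of real numbers their vector-space structure are straightforward to state in code; this is a minimal sketch with illustrative helper names:

```python
# Component-wise addition and scalar multiplication on R^n tuples,
# the two operations that make R^n a vector space.
def vec_add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def vec_scale(c, u):
    return tuple(c * a for a in u)

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(vec_add(u, v))       # (5.0, 7.0, 9.0)
print(vec_scale(2.0, u))   # (2.0, 4.0, 6.0)
```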
== Vectors in calculus == Calculus serves as a foundational mathematical tool in the realm of vectors, offering a framework for the analysis and manipulation of vector quantities in diverse scientific disciplines, notably physics and engineering. Vector-valued functions, where the output is a vector, are scrutinized using calculus to derive essential insights into motion within three-dimensional space. Vector calculus extends traditional calculus principles to vector fields, introducing operations like gradient, divergence, and curl, which find applications in physics and engineering contexts. Line integrals, crucial for calculating work along a path within force fields, and surface integrals, employed to determine quantities like flux, illustrate the practical utility of calculus in vector analysis. Volume integrals, essential for computations involving scalar or vector fields over three-dimensional regions, contribute to understanding mass distribution, charge density, and fluid flow rates. 
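The divergence and curl operations mentioned above are built from partial derivatives, so they can be sketched numerically with finite differences (an illustration under arbitrary choices of field and sample point, not a production implementation):

```python
# Finite-difference sketch of divergence and curl for a 3D vector field.
def partial(component, p, i, h=1e-6):
    pp, pm = list(p), list(p)
    pp[i] += h
    pm[i] -= h
    return (component(pp) - component(pm)) / (2 * h)

def divergence(F, p):
    return sum(partial(lambda q, i=i: F(q)[i], p, i) for i in range(3))

def curl(F, p):
    d = lambda i, j: partial(lambda q: F(q)[i], p, j)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

# F = (-y, x, 0): a rigid rotation field with div F = 0, curl F = (0, 0, 2).
F = lambda p: (-p[1], p[0], 0.0)
print(divergence(F, [1.0, 2.0, 3.0]))   # ≈ 0
print(curl(F, [1.0, 2.0, 3.0]))         # ≈ (0, 0, 2)
```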
== See also == Vector (disambiguation) === Vector spaces with more structure === Graded vector space, a type of vector space that includes the extra structure of gradation Normed vector space, a vector space on which a norm is defined Hilbert space Ordered vector space, a vector space equipped with a partial order Super vector space, name for a Z2-graded vector space Symplectic vector space, a vector space V equipped with a non-degenerate, skew-symmetric, bilinear form Topological vector space, a blend of topological structure with the algebraic concept of a vector space === Vector fields === A vector field is a vector-valued function that, generally, has a domain of the same dimension (as a manifold) as its codomain, Conservative vector field, a vector field that is the gradient of a scalar potential field Hamiltonian vector field, a vector field defined for any energy function or Hamiltonian Killing vector field, a vector field on a Riemannian manifold associated with a symmetry Solenoidal vector field, a vector field with zero divergence Vector potential, a vector field whose curl is a given vector field Vector flow, a set of closely related concepts of the flow determined by a vector field === See also === Ricci calculus Vector Analysis, a textbook on vector calculus by Wilson, first published in 1901, which did much to standardize the notation and vocabulary of three-dimensional linear algebra and vector calculus Vector bundle, a topological construction that makes precise the idea of a family of vector spaces parameterized by another space Vector calculus, a branch of mathematics concerned with differentiation and integration of vector fields Vector differential, or del, a vector differential operator represented by the nabla symbol ∇ {\displaystyle \nabla } Vector Laplacian, the vector Laplace operator, denoted by ∇ 2 {\displaystyle \nabla ^{2}} , is a differential operator defined over a vector field Vector notation, common notation used when working with 
vectors Vector operator, a type of differential operator used in vector calculus Vector product, or cross product, an operation on two vectors in a three-dimensional Euclidean space, producing a third three-dimensional Euclidean vector perpendicular to the original two Vector projection, also known as vector resolute or vector component, a linear mapping producing a vector parallel to a second vector Vector-valued function, a function that has a vector space as a codomain Vectorization (mathematics), a linear transformation that converts a matrix into a column vector Vector autoregression, an econometric model used to capture the evolution and the interdependencies between multiple time series Vector boson, a boson with the spin quantum number equal to 1 Vector measure, a function defined on a family of sets and taking vector values satisfying certain properties Vector meson, a meson with total spin 1 and odd parity Vector quantization, a quantization technique used in signal processing Vector soliton, a solitary wave with multiple components coupled together that maintains its shape during propagation Vector synthesis, a type of audio synthesis Phase vector == Notes == == References == Vectors - The Feynman Lectures on Physics Heinbockel, J. H. (2001). Introduction to Tensor Calculus and Continuum Mechanics. Trafford Publishing. ISBN 1-55369-133-4. Itô, Kiyosi (1993). Encyclopedic Dictionary of Mathematics (2nd ed.). MIT Press. ISBN 978-0-262-59020-4. Ivanov, A.B. (2001) [1994], "Vector", Encyclopedia of Mathematics, EMS Press Pedoe, Daniel (1988). Geometry: A comprehensive course. Dover. ISBN 0-486-65812-0.
Wikipedia/Vector_(mathematics_and_physics)
In mathematics, especially vector calculus and differential topology, a closed form is a differential form α whose exterior derivative is zero (dα = 0); and an exact form is a differential form, α, that is the exterior derivative of another differential form β, i.e. α = dβ. Thus, an exact form is in the image of d, and a closed form is in the kernel of d (also known as null space). For an exact form α, α = dβ for some differential form β of degree one less than that of α. The form β is called a "potential form" or "primitive" for α. Since the exterior derivative of a closed form is zero, β is not unique, but can be modified by the addition of any closed form of degree one less than that of α. Because d2 = 0, every exact form is necessarily closed. The question of whether every closed form is exact depends on the topology of the domain of interest. On a contractible domain, every closed form is exact by the Poincaré lemma. More general questions of this kind on an arbitrary differentiable manifold are the subject of de Rham cohomology, which allows one to obtain purely topological information using differential methods. == Examples == A simple example of a form that is closed but not exact is the 1-form d θ {\displaystyle d\theta } given by the derivative of argument on the punctured plane R 2 ∖ { 0 } {\displaystyle \mathbb {R} ^{2}\smallsetminus \{0\}} . Since θ {\displaystyle \theta } is not actually a function (see the next paragraph) d θ {\displaystyle d\theta } is not an exact form. Still, d θ {\displaystyle d\theta } has vanishing derivative and is therefore closed. Note that the argument θ {\displaystyle \theta } is only defined up to an integer multiple of 2 π {\displaystyle 2\pi } since a single point p {\displaystyle p} can be assigned different arguments r {\displaystyle r} , r + 2 π {\displaystyle r+2\pi } , etc. We can assign arguments in a locally consistent manner around p {\displaystyle p} , but not in a globally consistent manner. 
This is because if we trace a loop from p {\displaystyle p} counterclockwise around the origin and back to p {\displaystyle p} , the argument increases by 2 π {\displaystyle 2\pi } . Generally, the argument θ {\displaystyle \theta } changes by ∮ S 1 d θ {\displaystyle \oint _{S^{1}}d\theta } over a counter-clockwise oriented loop S 1 {\displaystyle S^{1}} . Even though the argument θ {\displaystyle \theta } is not technically a function, the different local definitions of θ {\displaystyle \theta } at a point p {\displaystyle p} differ from one another by constants. Since the derivative at p {\displaystyle p} only uses local data, and since functions that differ by a constant have the same derivative, the argument has a globally well-defined derivative " d θ {\displaystyle d\theta } ". The upshot is that d θ {\displaystyle d\theta } is a one-form on R 2 ∖ { 0 } {\displaystyle \mathbb {R} ^{2}\smallsetminus \{0\}} that is not actually the derivative of any well-defined function θ {\displaystyle \theta } . We say that d θ {\displaystyle d\theta } is not exact. Explicitly, d θ {\displaystyle d\theta } is given as: d θ = − y d x + x d y x 2 + y 2 , {\displaystyle d\theta ={\frac {-y\,dx+x\,dy}{x^{2}+y^{2}}},} which by inspection has derivative zero. Notice that if we restrict the domain to the right half-plane, we can write d θ = d ( tan − 1 ⁡ ( y / x ) ) {\displaystyle d\theta =d\left(\tan ^{-1}(y/x)\right)} , but the angle function θ = tan − 1 ⁡ ( y / x ) {\displaystyle \theta =\tan ^{-1}(y/x)} is neither smooth nor continuous over R 2 ∖ { 0 } {\displaystyle \mathbb {R} ^{2}\smallsetminus \{0\}} (as is any choice of angle function). Because d θ {\displaystyle d\theta } has vanishing derivative, we say that it is closed. On the other hand, for the one-form α = − y d x + x d y , {\displaystyle \alpha =-y\,dx+x\,dy,} d α ≠ 0 {\displaystyle d\alpha \neq 0} . Thus α {\displaystyle \alpha } is not even closed, never mind exact. 
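The loop integral that witnesses non-exactness can be approximated numerically. This sketch (the step count is an arbitrary choice) integrates dθ = (−y dx + x dy)/(x² + y²) once counterclockwise around the unit circle and recovers 2π:

```python
import math

# Numerically integrate d(theta) around the unit circle; the result
# 2*pi is the obstruction to d(theta) being exact.
def integrate_dtheta(n=1000):
    total = 0.0
    dt = 2 * math.pi / n
    for k in range(n):
        t = k * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t), math.cos(t)   # tangent of the path
        total += (-y * dx + x * dy) / (x**2 + y**2) * dt
    return total

print(integrate_dtheta())   # ≈ 6.2832 = 2*pi
```

On the unit circle the integrand is identically 1, so the Riemann sum is essentially exact for any step count.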
The form d θ {\displaystyle d\theta } generates the de Rham cohomology group H d R 1 ( R 2 ∖ { 0 } ) ≅ R , {\displaystyle H_{dR}^{1}(\mathbb {R} ^{2}\smallsetminus \{0\})\cong \mathbb {R} ,} meaning that any closed form ω {\displaystyle \omega } is the sum of an exact form d f {\displaystyle df} and a multiple of d θ {\displaystyle d\theta } : ω = d f + k d θ {\displaystyle \omega =df+k\ d\theta } , where k = 1 2 π ∮ S 1 ω {\textstyle k={\frac {1}{2\pi }}\oint _{S^{1}}\omega } accounts for a non-trivial contour integral around the origin, which is the only obstruction to a closed form on the punctured plane (locally the derivative of a potential function) being the derivative of a globally defined function. == Examples in low dimensions == Differential forms in R 2 {\displaystyle \mathbb {R} ^{2}} and R 3 {\displaystyle \mathbb {R} ^{3}} were well known in the mathematical physics of the nineteenth century. In the plane, 0-forms are just functions, and 2-forms are functions times the basic area element d x ∧ d y {\displaystyle dx\wedge dy} , so that it is the 1-forms α = f ( x , y ) d x + g ( x , y ) d y {\displaystyle \alpha =f(x,y)\,dx+g(x,y)\,dy} that are of real interest. The formula for the exterior derivative d {\displaystyle d} here is d α = ( g x − f y ) d x ∧ d y {\displaystyle d\alpha =(g_{x}-f_{y})\,dx\wedge dy} where the subscripts denote partial derivatives. Therefore the condition for α {\displaystyle \alpha } to be closed is f y = g x . {\displaystyle f_{y}=g_{x}.} In this case if h ( x , y ) {\displaystyle h(x,y)} is a function then d h = h x d x + h y d y . {\displaystyle dh=h_{x}\,dx+h_{y}\,dy.} The implication from 'exact' to 'closed' is then a consequence of the symmetry of second derivatives, with respect to x {\displaystyle x} and y {\displaystyle y} . 
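The closedness condition f_y = g_x can be spot-checked numerically for the components of dθ = (−y dx + x dy)/(x² + y²); the sample points below are arbitrary:

```python
# Spot-check of the closedness condition f_y = g_x for the 1-form
# alpha = f dx + g dy, using the components of d(theta).
def f(x, y):
    return -y / (x**2 + y**2)

def g(x, y):
    return x / (x**2 + y**2)

def closedness_defect(x, y, h=1e-6):
    f_y = (f(x, y + h) - f(x, y - h)) / (2 * h)   # partial of f in y
    g_x = (g(x + h, y) - g(x - h, y)) / (2 * h)   # partial of g in x
    return f_y - g_x

for p in [(1.0, 0.5), (-2.0, 3.0), (0.3, -0.7)]:
    print(closedness_defect(*p))   # all ≈ 0, so the form is closed
```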
The gradient theorem asserts that a 1-form is exact if and only if the line integral of the form depends only on the endpoints of the curve, or equivalently, if the integral around any smooth closed curve is zero. === Vector field analogies === On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, k-forms correspond to k-vector fields (by duality via the metric), so there is a notion of a vector field corresponding to a closed or exact form. In 3 dimensions, an exact vector field (thought of as a 1-form) is called a conservative vector field, meaning that it is the derivative (gradient) of a 0-form (smooth scalar field), called the scalar potential. A closed vector field (thought of as a 1-form) is one whose derivative (curl) vanishes, and is called an irrotational vector field. Thinking of a vector field as a 2-form instead, a closed vector field is one whose derivative (divergence) vanishes, and is called an incompressible flow (sometimes solenoidal vector field). The term incompressible is used because a non-zero divergence corresponds to the presence of sources and sinks in analogy with a fluid. The concepts of conservative and incompressible vector fields generalize to n dimensions, because gradient and divergence generalize to n dimensions; curl is defined only in three dimensions, thus the concept of irrotational vector field does not generalize in this way. == Poincaré lemma == The Poincaré lemma states that if B is an open ball in Rn, any closed p-form ω defined on B is exact, for any integer p with 1 ≤ p ≤ n. More generally, the lemma states that on a contractible open subset of a manifold (e.g., R n {\displaystyle \mathbb {R} ^{n}} ), a closed p-form, p > 0, is exact. == Formulation as cohomology == When the difference of two closed forms is an exact form, they are said to be cohomologous to each other. 
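The path-independence statement of the gradient theorem can be illustrated numerically: integrating the exact form df, with f(x, y) = x²y (an arbitrary example), along two different curves from (0, 0) to (1, 1) yields the same value f(1, 1) − f(0, 0) = 1.

```python
# Riemann-sum sketch of a line integral of the 1-form fx dx + fy dy
# along a parametrised path t -> path(t), t in [0, 1].
def line_integral(fx, fy, path, dpath, n=20000):
    total = 0.0
    for k in range(n):
        t = k / n
        x, y = path(t)
        dx, dy = dpath(t)
        total += (fx(x, y) * dx + fy(x, y) * dy) / n
    return total

fx = lambda x, y: 2 * x * y   # df = 2xy dx + x^2 dy for f = x^2*y
fy = lambda x, y: x * x

straight = (lambda t: (t, t), lambda t: (1.0, 1.0))     # line y = x
parab = (lambda t: (t, t * t), lambda t: (1.0, 2 * t))  # parabola y = x^2

print(line_integral(fx, fy, *straight))   # ≈ 1
print(line_integral(fx, fy, *parab))      # ≈ 1
```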
That is, if ζ and η are closed forms, and one can find some β such that ζ − η = d β {\displaystyle \zeta -\eta =d\beta } then one says that ζ and η are cohomologous to each other. Exact forms are sometimes said to be cohomologous to zero. The set of all forms cohomologous to a given form (and thus to each other) is called a de Rham cohomology class; the general study of such classes is known as cohomology. It makes no real sense to ask whether a 0-form (smooth function) is exact, since d increases degree by 1; but the clues from topology suggest that only the zero function should be called "exact". The cohomology classes are identified with locally constant functions. Using contracting homotopies similar to the one used in the proof of the Poincaré lemma, it can be shown that de Rham cohomology is homotopy-invariant. == Relevance to thermodynamics == Consider a thermodynamic system whose equilibrium states are specified by n {\displaystyle n} thermodynamic variables, x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} . The first law of thermodynamics can be stated as follows: In any process that results in an infinitesimal change of state where the internal energy of the system changes by an amount d U ( x 1 , x 2 , … , x n ) , {\displaystyle dU(x_{1},x_{2},\ldots ,x_{n}),} and an amount of work d W ( x 1 , x 2 , … , x n ) {\displaystyle dW(x_{1},x_{2},\ldots ,x_{n})} is done on the system, one must also supply an amount of heat d U − d W . {\displaystyle dU-dW.} The second law of thermodynamics is an empirical law of nature which says that there is no thermodynamic system for which d U = d W {\displaystyle dU=dW} in every circumstance, or in mathematical terms that, the differential form d U − d W {\displaystyle dU-dW} is not closed. Caratheodory's theorem further states that there exists an integrating denominator T {\displaystyle T} such that d S ≡ d U − d W T {\displaystyle dS\equiv {\frac {dU-dW}{T}}} is a closed 1-form. 
The integrating denominator T {\displaystyle T} is the temperature, and the state function S ( x 1 , x 2 , … , x n ) {\displaystyle S(x_{1},x_{2},\ldots ,x_{n})} is the equilibrium entropy. == Application in electrodynamics == In electrodynamics, the case of the magnetic field B → ( r ) {\displaystyle {\vec {B}}(\mathbf {r} )} produced by a stationary electrical current is important. There one deals with the vector potential A → ( r ) {\displaystyle {\vec {A}}(\mathbf {r} )} of this field. This case corresponds to k = 2, and the defining region is the full R 3 {\displaystyle \mathbb {R} ^{3}} . The current-density vector is j → {\displaystyle {\vec {j}}} . It corresponds to the current two-form I := j 1 ( x 1 , x 2 , x 3 ) d x 2 ∧ d x 3 + j 2 ( x 1 , x 2 , x 3 ) d x 3 ∧ d x 1 + j 3 ( x 1 , x 2 , x 3 ) d x 1 ∧ d x 2 . {\displaystyle \mathbf {I} :=j_{1}(x_{1},x_{2},x_{3})\,{\rm {d}}x_{2}\wedge {\rm {d}}x_{3}+j_{2}(x_{1},x_{2},x_{3})\,{\rm {d}}x_{3}\wedge {\rm {d}}x_{1}+j_{3}(x_{1},x_{2},x_{3})\,{\rm {d}}x_{1}\wedge {\rm {d}}x_{2}.} For the magnetic field B → {\displaystyle {\vec {B}}} one has analogous results: it corresponds to the induction two-form Φ B := B 1 d x 2 ∧ d x 3 + ⋯ {\displaystyle \Phi _{B}:=B_{1}{\rm {d}}x_{2}\wedge {\rm {d}}x_{3}+\cdots } , and can be derived from the vector potential A → {\displaystyle {\vec {A}}} , or the corresponding one-form A {\displaystyle \mathbf {A} } , B → = curl ⁡ A → = { ∂ A 3 ∂ x 2 − ∂ A 2 ∂ x 3 , ∂ A 1 ∂ x 3 − ∂ A 3 ∂ x 1 , ∂ A 2 ∂ x 1 − ∂ A 1 ∂ x 2 } , or Φ B = d A . 
{\displaystyle {\vec {B}}=\operatorname {curl} {\vec {A}}=\left\{{\frac {\partial A_{3}}{\partial x_{2}}}-{\frac {\partial A_{2}}{\partial x_{3}}},{\frac {\partial A_{1}}{\partial x_{3}}}-{\frac {\partial A_{3}}{\partial x_{1}}},{\frac {\partial A_{2}}{\partial x_{1}}}-{\frac {\partial A_{1}}{\partial x_{2}}}\right\},{\text{ or }}\Phi _{B}={\rm {d}}\mathbf {A} .} Thereby the vector potential A → {\displaystyle {\vec {A}}} corresponds to the potential one-form A := A 1 d x 1 + A 2 d x 2 + A 3 d x 3 . {\displaystyle \mathbf {A} :=A_{1}\,{\rm {d}}x_{1}+A_{2}\,{\rm {d}}x_{2}+A_{3}\,{\rm {d}}x_{3}.} The closedness of the magnetic-induction two-form corresponds to the property of the magnetic field that it is source-free: div ⁡ B → ≡ 0 {\displaystyle \operatorname {div} {\vec {B}}\equiv 0} , i.e., that there are no magnetic monopoles. In a special gauge, div ⁡ A → = ! 0 {\displaystyle \operatorname {div} {\vec {A}}{~{\stackrel {!}{=}}~}0} , this implies for i = 1, 2, 3 A i ( r → ) = ∫ μ 0 j i ( r → ′ ) d x 1 ′ d x 2 ′ d x 3 ′ 4 π | r → − r → ′ | . {\displaystyle A_{i}({\vec {r}})=\int {\frac {\mu _{0}j_{i}\left({\vec {r}}'\right)\,\,dx_{1}'\,dx_{2}'\,dx_{3}'}{4\pi |{\vec {r}}-{\vec {r}}'|}}\,.} (Here μ 0 {\displaystyle \mu _{0}} is the magnetic constant.) This equation is remarkable, because it corresponds completely to a well-known formula for the electrical field E → {\displaystyle {\vec {E}}} , namely for the electrostatic Coulomb potential φ ( x 1 , x 2 , x 3 ) {\displaystyle \varphi (x_{1},x_{2},x_{3})} of a charge density ρ ( x 1 , x 2 , x 3 ) {\displaystyle \rho (x_{1},x_{2},x_{3})} . At this point one can already guess that E → {\displaystyle {\vec {E}}} and B → , {\displaystyle {\vec {B}},} ρ {\displaystyle \rho } and j → , {\displaystyle {\vec {j}},} φ {\displaystyle \varphi } and A → {\displaystyle {\vec {A}}} can be unified into quantities with six and four nontrivial components, respectively, which is the basis of the relativistic invariance of the Maxwell equations.
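The closedness of the induction two-form can be checked concretely: for any smooth vector potential, the divergence of its curl vanishes identically, which is the component form of d(dA) = 0. A minimal symbolic sketch, assuming sympy is available; the particular potential below is a hypothetical choice made only for illustration:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# A hypothetical smooth vector potential, chosen only for illustration.
A = (x2 * x3, sp.sin(x1) * x3, x1**2 * x2)

# B = curl A, component by component
B = (sp.diff(A[2], x2) - sp.diff(A[1], x3),
     sp.diff(A[0], x3) - sp.diff(A[2], x1),
     sp.diff(A[1], x1) - sp.diff(A[0], x2))

# div B = 0: the induction two-form derived from a potential is closed,
# i.e. there are no magnetic monopoles.
div_B = sp.simplify(sp.diff(B[0], x1) + sp.diff(B[1], x2) + sp.diff(B[2], x3))
assert div_B == 0
```

The same cancellation happens for any choice of A, which is the point of the identity div curl ≡ 0.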
If the condition of stationarity is dropped, then in the equations for A i {\displaystyle A_{i}} one must add the time t as a fourth variable to the three space coordinates on the left-hand side, whereas on the right-hand side the so-called "retarded time", t ′ := t − | r → − r → ′ | c {\displaystyle t':=t-{\frac {|{\vec {r}}-{\vec {r}}'|}{c}}} , must be used in j i ′ {\displaystyle j_{i}'} , i.e. it is added to the argument of the current density. Finally, as before, one integrates over the three primed space coordinates. (As usual, c is the vacuum velocity of light.)
Wikipedia/Closed_and_exact_differential_forms
In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), tensor calculus or tensor analysis developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century. The basis of modern tensor analysis was developed by Bernhard Riemann in a paper from 1861. A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays. A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the degree (or order) of the tensor. 
For compactness and convenience, the Ricci calculus incorporates Einstein notation, which implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules. == Applications == Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning. Working with Élie Cartan, a main proponent of the exterior calculus, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus: "In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus."
== Notation for indices == === Basis-related distinctions === ==== Space and time coordinates ==== Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows: The lowercase Latin alphabet a, b, c, ... is used to indicate restriction to 3-dimensional Euclidean space; these indices take the values 1, 2, 3 for the spatial components, and the time-like element, indicated by 0, is shown separately. The lowercase Greek alphabet α, β, γ, ... is used for 4-dimensional spacetime; these indices typically take the value 0 for time components and 1, 2, 3 for the spatial components. Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space. ==== Coordinate and index notation ==== The author(s) will usually make it clear whether a subscript is intended as an index or as a label. For example, in 3-D Euclidean space and using Cartesian coordinates, the coordinate vector A = (A1, A2, A3) = (Ax, Ay, Az) shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In the expression Ai, i is interpreted as an index ranging over the values 1, 2, 3, while the x, y, z subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label t. ==== Reference to basis ==== Indices themselves may be labelled using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′) as in: X ϕ ^ , Y λ ¯ , Z η ~ , T μ ′ {\displaystyle X_{\hat {\phi }}\,,Y_{\bar {\lambda }}\,,Z_{\tilde {\eta }}\,,T_{\mu '}} to denote a possibly different basis for that index.
An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in: v μ ′ = v ν L ν μ ′ . {\displaystyle v^{\mu '}=v^{\nu }L_{\nu }{}^{\mu '}.} This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor. === Upper and lower indices === Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look as such to the reader only familiar with other parts of mathematics. In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as a i j b j k {\displaystyle a_{ij}b_{jk}} for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained. ==== Covariant tensor components ==== A lower index (subscript) indicates covariance of the components with respect to that index: A α β γ ⋯ {\displaystyle A_{\alpha \beta \gamma \cdots }} ==== Contravariant tensor components ==== An upper index (superscript) indicates contravariance of the components with respect to that index: A α β γ ⋯ {\displaystyle A^{\alpha \beta \gamma \cdots }} ==== Mixed-variance tensor components ==== A tensor may have both upper and lower indices: A α β γ δ ⋯ . {\displaystyle A_{\alpha }{}^{\beta }{}_{\gamma }{}^{\delta \cdots }.} Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta). 
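The remark about identity metrics can be made concrete: with all indices written in the lower position, the matrix-product formula a_ij b_jk is just a sum over the repeated index j. A short sketch, assuming numpy purely for illustration:

```python
import numpy as np

# With the metric equal to the identity, all indices may be written lowered;
# the matrix-product formula c_ik = a_ij b_jk sums over the repeated index j.
a = np.array([[1., 2.], [3., 4.]])
b = np.array([[5., 6.], [7., 8.]])

c = np.einsum('ij,jk->ik', a, b)   # explicit index-notation form
assert np.allclose(c, a @ b)       # agrees with ordinary matrix multiplication
```

The einsum index string makes the repeated index j and the free indices i, k visible in the same way index notation does.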
==== Tensor type and degree ==== The numbers of upper and lower indices of a tensor give its type: a tensor with p upper and q lower indices is said to be of type (p, q), or to be a type-(p, q) tensor. The number of indices of a tensor, regardless of variance, is called the degree of the tensor (alternatively, its valence, order or rank, although rank is ambiguous). Thus, a tensor of type (p, q) has degree p + q. ==== Summation convention ==== The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over: A α B α ≡ ∑ α A α B α or A α B α ≡ ∑ α A α B α . {\displaystyle A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\quad {\text{or}}\quad A^{\alpha }B_{\alpha }\equiv \sum _{\alpha }A^{\alpha }B_{\alpha }\,.} The operation implied by such a summation is called tensor contraction: A α B β → A α B α ≡ ∑ α A α B α . {\displaystyle A_{\alpha }B^{\beta }\rightarrow A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\,.} This summation may occur more than once within a term with a distinct symbol per pair of indices, for example: A α γ B α C γ β ≡ ∑ α ∑ γ A α γ B α C γ β . {\displaystyle A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\equiv \sum _{\alpha }\sum _{\gamma }A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\,.} Other combinations of repeated indices within a term (for example, an index repeated twice in the same upper or lower position, or an index occurring more than twice) are considered to be ill-formed. The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis. ==== Multi-index notation ==== If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list: A i 1 ⋯ i n B i 1 ⋯ i n j 1 ⋯ j m C j 1 ⋯ j m ≡ A I B I J C J , {\displaystyle A_{i_{1}\cdots i_{n}}B^{i_{1}\cdots i_{n}j_{1}\cdots j_{m}}C_{j_{1}\cdots j_{m}}\equiv A_{I}B^{IJ}C_{J},} where I = i1 i2 ⋅⋅⋅ in and J = j1 j2 ⋅⋅⋅ jm.
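The summation convention maps directly onto numpy's einsum, whose index strings name the repeated (summed) and free indices. A sketch assuming numpy; the component arrays carry no upper/lower distinction, which is bookkeeping that einsum cannot see:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # dimension of the vector space
A = rng.standard_normal(n)              # components A_alpha
B = rng.standard_normal(n)              # components B^alpha

# A_alpha B^alpha: the repeated index alpha is summed over, leaving a scalar.
s = np.einsum('a,a->', A, B)
assert np.isclose(s, sum(A[i] * B[i] for i in range(n)))

# Two summed pairs in one term: A_alpha^gamma B^alpha C_gamma^beta,
# leaving the single free index beta.
Am = rng.standard_normal((n, n))        # A_alpha^gamma
Cm = rng.standard_normal((n, n))        # C_gamma^beta
T = np.einsum('ag,a,gb->b', Am, B, Cm)
assert T.shape == (n,)                  # one free index remains
```

An ill-formed combination such as an index occurring three times is rejected by einsum's explicit-output mode for the same reason the notation excludes it: the result is not well defined as a tensor expression.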
==== Sequential summation ==== A pair of vertical bars | ⋅ | around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices: A | α β γ | ⋯ B α β γ ⋯ = A α β γ ⋯ B | α β γ | ⋯ = ∑ α < β < γ A α β γ ⋯ B α β γ ⋯ {\displaystyle A_{|\alpha \beta \gamma |\cdots }B^{\alpha \beta \gamma \cdots }=A_{\alpha \beta \gamma \cdots }B^{|\alpha \beta \gamma |\cdots }=\sum _{\alpha <\beta <\gamma }A_{\alpha \beta \gamma \cdots }B^{\alpha \beta \gamma \cdots }} means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example: A | α β γ | | δ ϵ ⋯ λ | B α β γ δ ϵ ⋯ λ | μ ν ⋯ ζ | C μ ν ⋯ ζ = ∑ α < β < γ ∑ δ < ϵ < ⋯ < λ ∑ μ < ν < ⋯ < ζ A α β γ δ ϵ ⋯ λ B α β γ δ ϵ ⋯ λ μ ν ⋯ ζ C μ ν ⋯ ζ {\displaystyle {\begin{aligned}&A_{|\alpha \beta \gamma |}{}^{|\delta \epsilon \cdots \lambda |}B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda |\mu \nu \cdots \zeta |}C^{\mu \nu \cdots \zeta }\\[3pt]={}&\sum _{\alpha <\beta <\gamma }~\sum _{\delta <\epsilon <\cdots <\lambda }~\sum _{\mu <\nu <\cdots <\zeta }A_{\alpha \beta \gamma }{}^{\delta \epsilon \cdots \lambda }B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda \mu \nu \cdots \zeta }C^{\mu \nu \cdots \zeta }\end{aligned}}} When using multi-index notation, an underarrow is placed underneath the block of indices: A P ⇁ Q ⇁ B P Q R ⇁ C R = ∑ P ⇁ ∑ Q ⇁ ∑ R ⇁ A P Q B P Q R C R {\displaystyle A_{\underset {\rightharpoondown }{P}}{}^{\underset {\rightharpoondown }{Q}}B^{P}{}_{Q{\underset {\rightharpoondown }{R}}}C^{R}=\sum _{\underset {\rightharpoondown }{P}}\sum _{\underset {\rightharpoondown }{Q}}\sum _{\underset {\rightharpoondown }{R}}A_{P}{}^{Q}B^{P}{}_{QR}C^{R}} where P ⇁ = | α β γ | , Q ⇁ = | δ ϵ ⋯ λ | , R ⇁ = | μ ν ⋯ ζ | {\displaystyle {\underset {\rightharpoondown }{P}}=|\alpha 
\beta \gamma |\,,\quad {\underset {\rightharpoondown }{Q}}=|\delta \epsilon \cdots \lambda |\,,\quad {\underset {\rightharpoondown }{R}}=|\mu \nu \cdots \zeta |} ==== Raising and lowering indices ==== By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa: B γ β ⋯ = g γ α A α β ⋯ and A α β ⋯ = g α γ B γ β ⋯ {\displaystyle B^{\gamma }{}_{\beta \cdots }=g^{\gamma \alpha }A_{\alpha \beta \cdots }\quad {\text{and}}\quad A_{\alpha \beta \cdots }=g_{\alpha \gamma }B^{\gamma }{}_{\beta \cdots }} The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation. === Correlations between index positions and invariance === This table summarizes how the manipulation of covariant and contravariant indices fit in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation. The Kronecker delta is used, see also below. == General outlines for index notation and operations == Tensors are equal if and only if every corresponding component is equal; e.g., tensor A equals tensor B if and only if A α β γ = B α β γ {\displaystyle A^{\alpha }{}_{\beta \gamma }=B^{\alpha }{}_{\beta \gamma }} for all α, β, γ. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis). === Free and dummy indices === Indices not involved in contractions are called free indices. Indices used in contractions are termed dummy indices, or summation indices. === A tensor equation represents many ordinary (real-valued) equations === The components of tensors (like Aα, Bβγ etc.) are just real numbers. 
Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has n free indices, and if the dimensionality of the underlying vector space is m, the equality represents m^n equations: each index takes on every value of a specific set of values. For instance, if A α B β γ C γ δ + D α β E δ = T α β δ {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }=T^{\alpha }{}_{\beta }{}_{\delta }} is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (α, β, δ), there are 4^3 = 64 equations. Three of these are: A 0 B 1 0 C 00 + A 0 B 1 1 C 10 + A 0 B 1 2 C 20 + A 0 B 1 3 C 30 + D 0 1 E 0 = T 0 1 0 A 1 B 0 0 C 00 + A 1 B 0 1 C 10 + A 1 B 0 2 C 20 + A 1 B 0 3 C 30 + D 1 0 E 0 = T 1 0 0 A 1 B 2 0 C 02 + A 1 B 2 1 C 12 + A 1 B 2 2 C 22 + A 1 B 2 3 C 32 + D 1 2 E 2 = T 1 2 2 . {\displaystyle {\begin{aligned}A^{0}B_{1}{}^{0}C_{00}+A^{0}B_{1}{}^{1}C_{10}+A^{0}B_{1}{}^{2}C_{20}+A^{0}B_{1}{}^{3}C_{30}+D^{0}{}_{1}{}E_{0}&=T^{0}{}_{1}{}_{0}\\A^{1}B_{0}{}^{0}C_{00}+A^{1}B_{0}{}^{1}C_{10}+A^{1}B_{0}{}^{2}C_{20}+A^{1}B_{0}{}^{3}C_{30}+D^{1}{}_{0}{}E_{0}&=T^{1}{}_{0}{}_{0}\\A^{1}B_{2}{}^{0}C_{02}+A^{1}B_{2}{}^{1}C_{12}+A^{1}B_{2}{}^{2}C_{22}+A^{1}B_{2}{}^{3}C_{32}+D^{1}{}_{2}{}E_{2}&=T^{1}{}_{2}{}_{2}.\end{aligned}}} This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation. === Indices are replaceable labels === Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below).
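Numerically, this relabelling invariance is easy to see: the two einsum calls below differ only in the letter chosen for the summed index, and produce identical arrays. A sketch assuming numpy, with component arrays standing in for the tensors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal(n)              # A^alpha
B = rng.standard_normal((n, n))         # B_beta^gamma
C = rng.standard_normal((n, n))         # C_gamma_delta

# A^alpha B_beta^gamma C_gamma_delta, contracting over gamma ...
t1 = np.einsum('a,bg,gd->abd', A, B, C)
# ... and the same term with the dummy index gamma renamed to mu:
t2 = np.einsum('a,bm,md->abd', A, B, C)
assert np.allclose(t1, t2)              # a dummy index is a replaceable label
```

Renaming only one occurrence of the dummy index, by contrast, changes the index string's meaning entirely, which mirrors the erroneous replacement discussed in the text.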
An example of a correct change is: A α B β γ C γ δ + D α β E δ → A λ B β μ C μ δ + D λ β E δ , {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\rightarrow A^{\lambda }B_{\beta }{}^{\mu }C_{\mu \delta }+D^{\lambda }{}_{\beta }{}E_{\delta }\,,} whereas an erroneous change is: A α B β γ C γ δ + D α β E δ ↛ A λ B β γ C μ δ + D α β E δ . {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\nrightarrow A^{\lambda }B_{\beta }{}^{\gamma }C_{\mu \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\,.} In the first replacement, λ replaced α and μ replaced γ everywhere, so the expression still has the same meaning. In the second, λ did not fully replace α, and μ did not fully replace γ (incidentally, the contraction on the γ index became a tensor product), which is entirely inconsistent for reasons shown next. === Indices are the same in every term === The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example: A α B β γ C γ δ + D α δ E β = T α β δ {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\delta }E_{\beta }=T^{\alpha }{}_{\beta }{}_{\delta }} as for an erroneous expression: A α B β γ C γ δ + D α β γ E δ . {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D_{\alpha }{}_{\beta }{}^{\gamma }E^{\delta }.} In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, α, β, δ line up throughout and γ occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. 
In the invalid expression, while β lines up, α and δ do not, and γ appears twice in one term (contraction) and once in another term, which is inconsistent. === Brackets and punctuation used once where implied === When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply. If the brackets enclose covariant indices – the rule applies only to all covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed intermediately between the brackets. Similarly if brackets enclose contravariant indices – the rule applies only to all enclosed contravariant indices, not to intermediately placed covariant indices. == Symmetric and antisymmetric parts == === Symmetric part of tensor === Parentheses, ( ), around multiple indices denotes the symmetrized part of the tensor. When symmetrizing p indices using σ to range over permutations of the numbers 1 to p, one takes a sum over the permutations of those indices ασ(i) for i = 1, 2, 3, ..., p, and then divides by the number of permutations: A ( α 1 α 2 ⋯ α p ) α p + 1 ⋯ α q = 1 p ! ∑ σ A α σ ( 1 ) ⋯ α σ ( p ) α p + 1 ⋯ α q . {\displaystyle A_{(\alpha _{1}\alpha _{2}\cdots \alpha _{p})\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {1}{p!}}\sum _{\sigma }A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\,.} For example, two symmetrizing indices mean there are two indices to permute and sum over: A ( α β ) γ ⋯ = 1 2 ! ( A α β γ ⋯ + A β α γ ⋯ ) {\displaystyle A_{(\alpha \beta )\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }+A_{\beta \alpha \gamma \cdots }\right)} while for three symmetrizing indices, there are three indices to sum over and permute: A ( α β γ ) δ ⋯ = 1 3 ! 
( A α β γ δ ⋯ + A γ α β δ ⋯ + A β γ α δ ⋯ + A α γ β δ ⋯ + A γ β α δ ⋯ + A β α γ δ ⋯ ) {\displaystyle A_{(\alpha \beta \gamma )\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }+A_{\alpha \gamma \beta \delta \cdots }+A_{\gamma \beta \alpha \delta \cdots }+A_{\beta \alpha \gamma \delta \cdots }\right)} The symmetrization is distributive over addition; A ( α ( B β ) γ ⋯ + C β ) γ ⋯ ) = A ( α B β ) γ ⋯ + A ( α C β ) γ ⋯ {\displaystyle A_{(\alpha }\left(B_{\beta )\gamma \cdots }+C_{\beta )\gamma \cdots }\right)=A_{(\alpha }B_{\beta )\gamma \cdots }+A_{(\alpha }C_{\beta )\gamma \cdots }} Indices are not part of the symmetrization when they are: not on the same level, for example; A ( α B β γ ) = 1 2 ! ( A α B β γ + A γ B β α ) {\displaystyle A_{(\alpha }B^{\beta }{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }+A_{\gamma }B^{\beta }{}_{\alpha }\right)} within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example; A ( α B | β | γ ) = 1 2 ! ( A α B β γ + A γ B β α ) {\displaystyle A_{(\alpha }B_{|\beta |}{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }+A_{\gamma }B_{\beta \alpha }\right)} Here the α and γ indices are symmetrized, β is not. === Antisymmetric or alternating part of tensor === Square brackets, [ ], around multiple indices denotes the antisymmetrized part of the tensor. For p antisymmetrizing indices – the sum over the permutations of those indices ασ(i) multiplied by the signature of the permutation sgn(σ) is taken, then divided by the number of permutations: A [ α 1 ⋯ α p ] α p + 1 ⋯ α q = 1 p ! 
∑ σ sgn ⁡ ( σ ) A α σ ( 1 ) ⋯ α σ ( p ) α p + 1 ⋯ α q = δ α 1 ⋯ α p β 1 … β p A β 1 ⋯ β p α p + 1 ⋯ α q {\displaystyle {\begin{aligned}&A_{[\alpha _{1}\cdots \alpha _{p}]\alpha _{p+1}\cdots \alpha _{q}}\\[3pt]={}&{\dfrac {1}{p!}}\sum _{\sigma }\operatorname {sgn}(\sigma )A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\\={}&\delta _{\alpha _{1}\cdots \alpha _{p}}^{\beta _{1}\dots \beta _{p}}A_{\beta _{1}\cdots \beta _{p}\alpha _{p+1}\cdots \alpha _{q}}\\\end{aligned}}} where δβ1⋅⋅⋅βpα1⋅⋅⋅αp is the generalized Kronecker delta of degree 2p, with scaling as defined below. For example, two antisymmetrizing indices imply: A [ α β ] γ ⋯ = 1 2 ! ( A α β γ ⋯ − A β α γ ⋯ ) {\displaystyle A_{[\alpha \beta ]\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }-A_{\beta \alpha \gamma \cdots }\right)} while three antisymmetrizing indices imply: A [ α β γ ] δ ⋯ = 1 3 ! ( A α β γ δ ⋯ + A γ α β δ ⋯ + A β γ α δ ⋯ − A α γ β δ ⋯ − A γ β α δ ⋯ − A β α γ δ ⋯ ) {\displaystyle A_{[\alpha \beta \gamma ]\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }-A_{\alpha \gamma \beta \delta \cdots }-A_{\gamma \beta \alpha \delta \cdots }-A_{\beta \alpha \gamma \delta \cdots }\right)} as for a more specific example, if F represents the electromagnetic tensor, then the equation 0 = F [ α β , γ ] = 1 3 ! ( F α β , γ + F γ α , β + F β γ , α − F β α , γ − F α γ , β − F γ β , α ) {\displaystyle 0=F_{[\alpha \beta ,\gamma ]}={\dfrac {1}{3!}}\left(F_{\alpha \beta ,\gamma }+F_{\gamma \alpha ,\beta }+F_{\beta \gamma ,\alpha }-F_{\beta \alpha ,\gamma }-F_{\alpha \gamma ,\beta }-F_{\gamma \beta ,\alpha }\right)\,} represents Gauss's law for magnetism and Faraday's law of induction. 
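These symmetrization and antisymmetrization formulas are straightforward to realize on component arrays. A numerical sketch, assuming numpy for illustration and taking all indices at the same level:

```python
import itertools
import math
import numpy as np

def perm_sign(p):
    """Signature sgn(p) of a permutation, computed by counting inversions."""
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def symmetrize(T):
    """A_(a1...ap): average of T over all permutations of its indices."""
    k = T.ndim
    return sum(np.transpose(T, p)
               for p in itertools.permutations(range(k))) / math.factorial(k)

def antisymmetrize(T):
    """A_[a1...ap]: signed average of T over all permutations of its indices."""
    k = T.ndim
    return sum(perm_sign(p) * np.transpose(T, p)
               for p in itertools.permutations(range(k))) / math.factorial(k)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
# On two indices, every tensor is the sum of its symmetric and antisymmetric parts.
assert np.allclose(A, symmetrize(A) + antisymmetrize(A))

T = rng.standard_normal((3, 3, 3))
# The antisymmetrized part changes sign under a swap of any two indices.
assert np.allclose(antisymmetrize(T), -antisymmetrize(T).transpose(1, 0, 2))
```

The 1/p! normalization in both helpers matches the convention used in the formulas above, so symmetrizing or antisymmetrizing twice is idempotent.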
As before, the antisymmetrization is distributive over addition; A [ α ( B β ] γ ⋯ + C β ] γ ⋯ ) = A [ α B β ] γ ⋯ + A [ α C β ] γ ⋯ {\displaystyle A_{[\alpha }\left(B_{\beta ]\gamma \cdots }+C_{\beta ]\gamma \cdots }\right)=A_{[\alpha }B_{\beta ]\gamma \cdots }+A_{[\alpha }C_{\beta ]\gamma \cdots }} As with symmetrization, indices are not antisymmetrized when they are: not on the same level, for example; A [ α B β γ ] = 1 2 ! ( A α B β γ − A γ B β α ) {\displaystyle A_{[\alpha }B^{\beta }{}_{\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }-A_{\gamma }B^{\beta }{}_{\alpha }\right)} within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example; A [ α B | β | γ ] = 1 2 ! ( A α B β γ − A γ B β α ) {\displaystyle A_{[\alpha }B_{|\beta |}{}_{\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }-A_{\gamma }B_{\beta \alpha }\right)} Here the α and γ indices are antisymmetrized, β is not. === Sum of symmetric and antisymmetric parts === Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices: A α β γ ⋯ = A ( α β ) γ ⋯ + A [ α β ] γ ⋯ {\displaystyle A_{\alpha \beta \gamma \cdots }=A_{(\alpha \beta )\gamma \cdots }+A_{[\alpha \beta ]\gamma \cdots }} as can be seen by adding the above expressions for A(αβ)γ⋅⋅⋅ and A[αβ]γ⋅⋅⋅. This does not hold for other than two indices. == Differentiation == For compactness, derivatives may be indicated by adding indices after a comma or semicolon. === Partial derivative === While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by xμ, but do not in general form the components of a vector. 
In flat spacetime with linear coordinatization, a tuple of differences in coordinates, Δxμ, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below. To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable xγ, a comma is placed before an appended lower index of the coordinate variable. A α β ⋯ , γ = ∂ ∂ x γ A α β ⋯ {\displaystyle A_{\alpha \beta \cdots ,\gamma }={\dfrac {\partial }{\partial x^{\gamma }}}A_{\alpha \beta \cdots }} This may be repeated (without adding further commas): A α 1 α 2 ⋯ α p , α p + 1 ⋯ α q = ∂ ∂ x α q ⋯ ∂ ∂ x α p + 2 ∂ ∂ x α p + 1 A α 1 α 2 ⋯ α p . {\displaystyle A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}\,,\,\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {\partial }{\partial x^{\alpha _{q}}}}\cdots {\dfrac {\partial }{\partial x^{\alpha _{p+2}}}}{\dfrac {\partial }{\partial x^{\alpha _{p+1}}}}A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}}.} These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates x α , γ = δ γ α , {\displaystyle x^{\alpha }{}_{,\gamma }=\delta _{\gamma }^{\alpha },} where δ is the Kronecker delta. === Covariant derivative === The covariant derivative is only defined if a connection is defined. For any tensor field, a semicolon ( ; ) placed before an appended lower (covariant) index indicates covariant differentiation. 
Less common alternatives to the semicolon include a forward slash ( / ) or in three-dimensional curved space a single vertical bar ( | ). The covariant derivative of a scalar function, a contravariant vector and a covariant vector are: f ; β = f , β {\displaystyle f_{;\beta }=f_{,\beta }} A α ; β = A α , β + Γ α γ β A γ {\displaystyle A^{\alpha }{}_{;\beta }=A^{\alpha }{}_{,\beta }+\Gamma ^{\alpha }{}_{\gamma \beta }A^{\gamma }} A α ; β = A α , β − Γ γ α β A γ , {\displaystyle A_{\alpha ;\beta }=A_{\alpha ,\beta }-\Gamma ^{\gamma }{}_{\alpha \beta }A_{\gamma }\,,} where Γαγβ are the connection coefficients. For an arbitrary tensor: T α 1 ⋯ α r β 1 ⋯ β s ; γ = T α 1 ⋯ α r β 1 ⋯ β s , γ + Γ α 1 δ γ T δ α 2 ⋯ α r β 1 ⋯ β s + ⋯ + Γ α r δ γ T α 1 ⋯ α r − 1 δ β 1 ⋯ β s − Γ δ β 1 γ T α 1 ⋯ α r δ β 2 ⋯ β s − ⋯ − Γ δ β s γ T α 1 ⋯ α r β 1 ⋯ β s − 1 δ . {\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma }&\\=T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&+\,\Gamma ^{\alpha _{1}}{}_{\delta \gamma }T^{\delta \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}+\cdots +\Gamma ^{\alpha _{r}}{}_{\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\delta }{}_{\beta _{1}\cdots \beta _{s}}\\&-\,\Gamma ^{\delta }{}_{\beta _{1}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\delta \beta _{2}\cdots \beta _{s}}-\cdots -\Gamma ^{\delta }{}_{\beta _{s}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\delta }\,.\end{aligned}}} An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol ∇β. For the case of a vector field Aα: ∇ β A α = A α ; β . {\displaystyle \nabla _{\beta }A^{\alpha }=A^{\alpha }{}_{;\beta }\,.} The covariant formulation of the directional derivative of any tensor field along a vector vγ may be expressed as its contraction with the covariant derivative, e.g.: v γ A α ; γ . 
{\displaystyle v^{\gamma }A_{\alpha ;\gamma }\,.} The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly. This derivative is characterized by the product rule: ( A α β ⋯ B γ δ ⋯ ) ; ϵ = A α β ⋯ ; ϵ B γ δ ⋯ + A α β ⋯ B γ δ ⋯ ; ϵ . {\displaystyle (A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots })_{;\epsilon }=A^{\alpha }{}_{\beta \cdots ;\epsilon }B^{\gamma }{}_{\delta \cdots }+A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots ;\epsilon }\,.} ==== Connection types ==== A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection. A connection is a metric connection when the covariant derivative of the metric tensor vanishes: g μ ν ; ξ = 0 . {\displaystyle g_{\mu \nu ;\xi }=0\,.} An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: Tαβγ = 0) is a Levi-Civita connection. The Γαβγ for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind. === Exterior derivative === The exterior derivative of a totally antisymmetric type (0, s) tensor field with components Aα1⋅⋅⋅αs (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components:: 232–233  ( d A ) γ α 1 ⋯ α s = ∂ ∂ x [ γ A α 1 ⋯ α s ] = A [ α 1 ⋯ α s , γ ] . 
{\displaystyle (\mathrm {d} A)_{\gamma \alpha _{1}\cdots \alpha _{s}}={\frac {\partial }{\partial x^{[\gamma }}}A_{\alpha _{1}\cdots \alpha _{s}]}=A_{[\alpha _{1}\cdots \alpha _{s},\gamma ]}.} This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule. === Lie derivative === The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type (r, s) tensor field T along (the flow of) a contravariant vector field Xρ may be expressed using a coordinate basis as ( L X T ) α 1 ⋯ α r β 1 ⋯ β s = X γ T α 1 ⋯ α r β 1 ⋯ β s , γ − X α 1 , γ T γ α 2 ⋯ α r β 1 ⋯ β s − ⋯ − X α r , γ T α 1 ⋯ α r − 1 γ β 1 ⋯ β s + X γ , β 1 T α 1 ⋯ α r γ β 2 ⋯ β s + ⋯ + X γ , β s T α 1 ⋯ α r β 1 ⋯ β s − 1 γ . {\displaystyle {\begin{aligned}({\mathcal {L}}_{X}T)^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}&\\=X^{\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&-\,X^{\alpha _{1}}{}_{,\gamma }T^{\gamma \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -X^{\alpha _{r}}{}_{,\gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\gamma }{}_{\beta _{1}\cdots \beta _{s}}\\&+\,X^{\gamma }{}_{,\beta _{1}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\gamma \beta _{2}\cdots \beta _{s}}+\cdots +X^{\gamma }{}_{,\beta _{s}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\gamma }\,.\end{aligned}}} This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero: ( L X X ) α = X γ X α , γ − X α , γ X γ = 0 . 
{\displaystyle ({\mathcal {L}}_{X}X)^{\alpha }=X^{\gamma }X^{\alpha }{}_{,\gamma }-X^{\alpha }{}_{,\gamma }X^{\gamma }=0\,.} == Notable tensors == === Kronecker delta === The Kronecker delta is like the identity matrix when multiplied and contracted: δ β α A β = A α δ ν μ B μ = B ν . {\displaystyle {\begin{aligned}\delta _{\beta }^{\alpha }\,A^{\beta }&=A^{\alpha }\\\delta _{\nu }^{\mu }\,B_{\mu }&=B_{\nu }.\end{aligned}}} The components δαβ are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant. Its trace is the dimensionality of the space; for example, in four-dimensional spacetime, δ ρ ρ = δ 0 0 + δ 1 1 + δ 2 2 + δ 3 3 = 4. {\displaystyle \delta _{\rho }^{\rho }=\delta _{0}^{0}+\delta _{1}^{1}+\delta _{2}^{2}+\delta _{3}^{3}=4.} The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2p may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of p! on the right): δ β 1 ⋯ β p α 1 ⋯ α p = δ β 1 [ α 1 ⋯ δ β p α p ] , {\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}=\delta _{\beta _{1}}^{[\alpha _{1}}\cdots \delta _{\beta _{p}}^{\alpha _{p}]},} and acts as an antisymmetrizer on p indices: δ β 1 ⋯ β p α 1 ⋯ α p A β 1 ⋯ β p = A [ α 1 ⋯ α p ] . {\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}\,A^{\beta _{1}\cdots \beta _{p}}=A^{[\alpha _{1}\cdots \alpha _{p}]}.} === Torsion tensor === An affine connection has a torsion tensor Tαβγ: T α β γ = Γ α β γ − Γ α γ β − γ α β γ , {\displaystyle T^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\beta \gamma }-\Gamma ^{\alpha }{}_{\gamma \beta }-\gamma ^{\alpha }{}_{\beta \gamma },} where γαβγ are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis. 
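These delta identities are easy to sanity-check numerically. The following sketch (NumPy; the test array A is an arbitrary example, not from the text) confirms that the trace of the delta equals the dimension and that the degree-4 generalized delta acts as an antisymmetrizer on a pair of indices:

```python
# Numerical check of the Kronecker-delta properties above: the trace of
# delta^alpha_beta equals the dimension, and the degree-4 generalized
# delta, delta^{ab}_{cd} = delta^{[a}_c delta^{b]}_d, antisymmetrizes
# a pair of indices. The array A is an arbitrary illustrative example.
import numpy as np

n = 4
delta = np.eye(n)
assert np.trace(delta) == n  # = 4 in four-dimensional spacetime

# Generalized delta (antisymmetrizer convention, i.e. without the
# additional p! factor mentioned above):
gen_delta = 0.5 * (np.einsum('ac,bd->abcd', delta, delta)
                   - np.einsum('ad,bc->abcd', delta, delta))

A = np.random.default_rng(0).standard_normal((n, n))
antisym = np.einsum('abcd,cd->ab', gen_delta, A)
assert np.allclose(antisym, 0.5 * (A - A.T))  # = A^{[ab]}
```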
For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations Γ α β γ = Γ α γ β . {\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\gamma \beta }.} === Riemann curvature tensor === If this tensor is defined as R ρ σ μ ν = Γ ρ ν σ , μ − Γ ρ μ σ , ν + Γ ρ μ λ Γ λ ν σ − Γ ρ ν λ Γ λ μ σ , {\displaystyle R^{\rho }{}_{\sigma \mu \nu }=\Gamma ^{\rho }{}_{\nu \sigma ,\mu }-\Gamma ^{\rho }{}_{\mu \sigma ,\nu }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }\,,} then it is the commutator of the covariant derivative with itself: A ν ; ρ σ − A ν ; σ ρ = A β R β ν ρ σ , {\displaystyle A_{\nu ;\rho \sigma }-A_{\nu ;\sigma \rho }=A_{\beta }R^{\beta }{}_{\nu \rho \sigma }\,,} since the connection is torsionless, which means that the torsion tensor vanishes. This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows: T α 1 ⋯ α r β 1 ⋯ β s ; γ δ − T α 1 ⋯ α r β 1 ⋯ β s ; δ γ = − R α 1 ρ γ δ T ρ α 2 ⋯ α r β 1 ⋯ β s − ⋯ − R α r ρ γ δ T α 1 ⋯ α r − 1 ρ β 1 ⋯ β s + R σ β 1 γ δ T α 1 ⋯ α r σ β 2 ⋯ β s + ⋯ + R σ β s γ δ T α 1 ⋯ α r β 1 ⋯ β s − 1 σ {\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma \delta }&-T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\delta \gamma }\\&\!\!\!\!\!\!\!\!\!\!=-R^{\alpha _{1}}{}_{\rho \gamma \delta }T^{\rho \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -R^{\alpha _{r}}{}_{\rho \gamma \delta }T^{\alpha _{1}\cdots \alpha _{r-1}\rho }{}_{\beta _{1}\cdots \beta _{s}}\\&+R^{\sigma }{}_{\beta _{1}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\sigma \beta _{2}\cdots \beta _{s}}+\cdots +R^{\sigma }{}_{\beta _{s}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\sigma }\,\end{aligned}}} which are often referred to as the Ricci 
identities. === Metric tensor === The metric tensor gαβ is used for lowering indices and gives the length of any space-like curve length = ∫ y 1 y 2 g α β d x α d γ d x β d γ d γ , {\displaystyle {\text{length}}=\int _{y_{1}}^{y_{2}}{\sqrt {g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,} where γ is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve duration = ∫ t 1 t 2 − 1 c 2 g α β d x α d γ d x β d γ d γ , {\displaystyle {\text{duration}}=\int _{t_{1}}^{t_{2}}{\sqrt {{\frac {-1}{c^{2}}}g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,} where γ is any smooth strictly monotone parameterization of the trajectory. See also Line element. The inverse matrix gαβ of the metric tensor is another important tensor, used for raising indices: g α β g β γ = δ γ α . {\displaystyle g^{\alpha \beta }g_{\beta \gamma }=\delta _{\gamma }^{\alpha }\,.} == See also == == Notes == == References == == Sources == Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds (First Dover 1980 ed.), The Macmillan Company, ISBN 0-486-64039-6 Danielson, Donald A. (2003). Vectors and Tensors in Engineering and Physics (2/e ed.). Westview (Perseus). ISBN 978-0-8133-4080-7. Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Kluwer Academic Publishers (Springer). ISBN 1-4020-1015-X. Lovelock, David; Hanno Rund (1989) [1975]. Tensors, Differential Forms, and Variational Principles. Dover. ISBN 978-0-486-65840-7. C. Møller (1952), The Theory of Relativity (3rd ed.), Oxford University Press Synge J.L.; Schild A. (1949). Tensor Calculus. first Dover Publications 1978 edition. ISBN 978-0-486-63612-2. J.R. Tyldesley (1975), An introduction to Tensor Analysis: For Engineers and Applied Scientists, Longman, ISBN 0-582-44355-5 D.C.

Kay (1988), Tensor Calculus, Schaum's Outlines, McGraw Hill (USA), ISBN 0-07-033484-6 T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, ISBN 978-1107-602601 == Further reading == Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Springer. ISBN 1-4020-1015-X. Sokolnikoff, Ivan S (1951). Tensor Analysis: Theory and Applications to Geometry and Mechanics of Continua. Wiley. ISBN 0471810525. Borisenko, A.I.; Tarapov, I.E. (1979). Vector and Tensor Analysis with Applications (2nd ed.). Dover. ISBN 0486638332. Itskov, Mikhail (2015). Tensor Algebra and Tensor Analysis for Engineers: With Applications to Continuum Mechanics (2nd ed.). Springer. ISBN 9783319163420. Tyldesley, J. R. (1973). An introduction to Tensor Analysis: For Engineers and Applied Scientists. Longman. ISBN 0-582-44355-5. Kay, D. C. (1988). Tensor Calculus. Schaum’s Outlines. McGraw Hill. ISBN 0-07-033484-6. Grinfeld, P. (2014). Introduction to Tensor Analysis and the Calculus of Moving Surfaces. Springer. ISBN 978-1-4614-7866-9. == External links == Dullemond, Kees; Peeters, Kasper (1991–2010). "Introduction to Tensor Calculus" (PDF). Retrieved 17 May 2018.
Wikipedia/Tensor_calculus
In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the part of the domain which is mapped to the zero vector of the co-domain; the kernel is always a linear subspace of the domain. That is, given a linear map L : V → W between two vector spaces V and W, the kernel of L is the vector space of all elements v of V such that L(v) = 0, where 0 denotes the zero vector in W, or more symbolically: ker ⁡ ( L ) = { v ∈ V ∣ L ( v ) = 0 } = L − 1 ( 0 ) . {\displaystyle \ker(L)=\left\{\mathbf {v} \in V\mid L(\mathbf {v} )=\mathbf {0} \right\}=L^{-1}(\mathbf {0} ).} == Properties == The kernel of L is a linear subspace of the domain V. In the linear map L : V → W , {\displaystyle L:V\to W,} two elements of V have the same image in W if and only if their difference lies in the kernel of L, that is, L ( v 1 ) = L ( v 2 ) if and only if L ( v 1 − v 2 ) = 0 . {\displaystyle L\left(\mathbf {v} _{1}\right)=L\left(\mathbf {v} _{2}\right)\quad {\text{ if and only if }}\quad L\left(\mathbf {v} _{1}-\mathbf {v} _{2}\right)=\mathbf {0} .} From this, it follows by the first isomorphism theorem that the image of L is isomorphic to the quotient of V by the kernel: im ⁡ ( L ) ≅ V / ker ⁡ ( L ) . {\displaystyle \operatorname {im} (L)\cong V/\ker(L).} In the case where V is finite-dimensional, this implies the rank–nullity theorem: dim ⁡ ( ker ⁡ L ) + dim ⁡ ( im ⁡ L ) = dim ⁡ ( V ) . {\displaystyle \dim(\ker L)+\dim(\operatorname {im} L)=\dim(V).} where the term rank refers to the dimension of the image of L, dim ⁡ ( im ⁡ L ) , {\displaystyle \dim(\operatorname {im} L),} while nullity refers to the dimension of the kernel of L, dim ⁡ ( ker ⁡ L ) . 
{\displaystyle \dim(\ker L).} That is, Rank ⁡ ( L ) = dim ⁡ ( im ⁡ L ) and Nullity ⁡ ( L ) = dim ⁡ ( ker ⁡ L ) , {\displaystyle \operatorname {Rank} (L)=\dim(\operatorname {im} L)\qquad {\text{ and }}\qquad \operatorname {Nullity} (L)=\dim(\ker L),} so that the rank–nullity theorem can be restated as Rank ⁡ ( L ) + Nullity ⁡ ( L ) = dim ⁡ ( domain ⁡ L ) . {\displaystyle \operatorname {Rank} (L)+\operatorname {Nullity} (L)=\dim \left(\operatorname {domain} L\right).} When V is an inner product space, the quotient V / ker ⁡ ( L ) {\displaystyle V/\ker(L)} can be identified with the orthogonal complement in V of ker ⁡ ( L ) {\displaystyle \ker(L)} . This is the generalization to linear operators of the row space, or coimage, of a matrix. == Generalization to modules == The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule. Here, the concepts of rank and nullity do not necessarily apply. == In functional analysis == If V and W are topological vector spaces such that W is finite-dimensional, then a linear operator L: V → W is continuous if and only if the kernel of L is a closed subspace of V. == Representation as matrix multiplication == Consider a linear map represented as an m × n matrix A with coefficients in a field K (typically R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } ), operating on column vectors x with n components over K. The kernel of this linear map is the set of solutions to the equation Ax = 0, where 0 is understood as the zero vector. The dimension of the kernel of A is called the nullity of A. In set-builder notation, N ⁡ ( A ) = Null ⁡ ( A ) = ker ⁡ ( A ) = { x ∈ K n ∣ A x = 0 } . 
{\displaystyle \operatorname {N} (A)=\operatorname {Null} (A)=\operatorname {ker} (A)=\left\{\mathbf {x} \in K^{n}\mid A\mathbf {x} =\mathbf {0} \right\}.} The matrix equation is equivalent to a homogeneous system of linear equations: A x = 0 ⇔ a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0 . {\displaystyle A\mathbf {x} =\mathbf {0} \;\;\Leftrightarrow \;\;{\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\;\cdots \;+\;&&a_{1n}x_{n}&&\;=\;&&&0\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\;\cdots \;+\;&&a_{2n}x_{n}&&\;=\;&&&0\\&&&&&&&&&&\vdots \ \;&&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\;\cdots \;+\;&&a_{mn}x_{n}&&\;=\;&&&0{\text{.}}\\\end{alignedat}}} Thus the kernel of A is the same as the solution set to the above homogeneous equations. === Subspace properties === The kernel of an m × n matrix A over a field K is a linear subspace of Kn. That is, the kernel of A, the set Null(A), has the following three properties: Null(A) always contains the zero vector, since A0 = 0. If x ∈ Null(A) and y ∈ Null(A), then x + y ∈ Null(A). This follows from the distributivity of matrix multiplication over addition. If x ∈ Null(A) and c ∈ K is a scalar, then cx ∈ Null(A), since A(cx) = c(Ax) = c0 = 0. === The row space of a matrix === The product Ax can be written in terms of the dot product of vectors as follows: A x = [ a 1 ⋅ x a 2 ⋅ x ⋮ a m ⋅ x ] . {\displaystyle A\mathbf {x} ={\begin{bmatrix}\mathbf {a} _{1}\cdot \mathbf {x} \\\mathbf {a} _{2}\cdot \mathbf {x} \\\vdots \\\mathbf {a} _{m}\cdot \mathbf {x} \end{bmatrix}}.} Here, a1, ... , am denote the rows of the matrix A. It follows that x is in the kernel of A if and only if x is orthogonal (or perpendicular) to each of the row vectors of A (since orthogonality is defined as having a dot product of 0). The row space, or coimage, of a matrix A is the span of the row vectors of A. 
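The three subspace properties can be illustrated directly in code. In this sketch (NumPy), the matrix A and the kernel elements x and y are hypothetical examples chosen so that Ax = Ay = 0:

```python
# Numerical illustration of the subspace properties of Null(A):
# it contains 0 and is closed under addition and scalar multiplication.
# The matrix A and the kernel elements x, y are illustrative examples.
import numpy as np

A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
x = np.array([1.0, 1.0, 1.0])     # A x = 0
y = np.array([2.0, 2.0, 2.0])     # A y = 0
c = -3.5

assert np.allclose(A @ np.zeros(3), 0)  # zero vector is in Null(A)
assert np.allclose(A @ x, 0) and np.allclose(A @ y, 0)
assert np.allclose(A @ (x + y), 0)      # closed under addition
assert np.allclose(A @ (c * x), 0)      # closed under scalar multiplication
```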
By the above reasoning, the kernel of A is the orthogonal complement to the row space. That is, a vector x lies in the kernel of A, if and only if it is perpendicular to every vector in the row space of A. The dimension of the row space of A is called the rank of A, and the dimension of the kernel of A is called the nullity of A. These quantities are related by the rank–nullity theorem rank ⁡ ( A ) + nullity ⁡ ( A ) = n . {\displaystyle \operatorname {rank} (A)+\operatorname {nullity} (A)=n.} === Left null space === The left null space, or cokernel, of a matrix A consists of all column vectors x such that xTA = 0T, where T denotes the transpose of a matrix. The left null space of A is the same as the kernel of AT. The left null space of A is the orthogonal complement to the column space of A, and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of A are the four fundamental subspaces associated with the matrix A. === Nonhomogeneous systems of linear equations === The kernel also plays a role in the solution to a nonhomogeneous system of linear equations: A x = b or a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m {\displaystyle A\mathbf {x} =\mathbf {b} \quad {\text{or}}\quad {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\;\cdots \;+\;&&a_{1n}x_{n}&&\;=\;&&&b_{1}\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\;\cdots \;+\;&&a_{2n}x_{n}&&\;=\;&&&b_{2}\\&&&&&&&&&&\vdots \ \;&&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\;\cdots \;+\;&&a_{mn}x_{n}&&\;=\;&&&b_{m}\\\end{alignedat}}} If u and v are two possible solutions to the above equation, then A ( u − v ) = A u − A v = b − b = 0 {\displaystyle A(\mathbf {u} -\mathbf {v} )=A\mathbf {u} -A\mathbf {v} =\mathbf {b} -\mathbf {b} =\mathbf {0} } Thus, the difference of any two solutions to the equation Ax = b lies in the kernel of A. 
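These orthogonality relations, together with rank–nullity, can be checked numerically on a rank-deficient example. The `null_space` helper below is an illustrative SVD-based sketch, not a library routine:

```python
# Checking the relations above for a rank-deficient example matrix:
# ker(A) is orthogonal to the row space, the left null space ker(A^T)
# is orthogonal to the column space, and rank + nullity = n.
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis (as columns) of ker(M), computed via the SVD."""
    U, s, Vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return Vt[rank:].T

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 x row 1, so rank(A) = 2
              [1.0, 0.0, 1.0]])

N = null_space(A)        # kernel
LN = null_space(A.T)     # left null space

assert np.allclose(A @ N, 0)     # kernel vectors are orthogonal to each row
assert np.allclose(LN.T @ A, 0)  # left-null vectors are orthogonal to each column
assert np.linalg.matrix_rank(A) + N.shape[1] == A.shape[1]  # rank-nullity
```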
It follows that any solution to the equation Ax = b can be expressed as the sum of a fixed solution v and an arbitrary element of the kernel. That is, the solution set to the equation Ax = b is { v + x ∣ A v = b ∧ x ∈ Null ⁡ ( A ) } , {\displaystyle \left\{\mathbf {v} +\mathbf {x} \mid A\mathbf {v} =\mathbf {b} \land \mathbf {x} \in \operatorname {Null} (A)\right\},} Geometrically, this says that the solution set to Ax = b is the translation of the kernel of A by the vector v. See also Fredholm alternative and flat (geometry). == Illustration == The following is a simple illustration of the computation of the kernel of a matrix (see § Computation by Gaussian elimination, below for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel. Consider the matrix A = [ 2 3 5 − 4 2 3 ] . {\displaystyle A={\begin{bmatrix}2&3&5\\-4&2&3\end{bmatrix}}.} The kernel of this matrix consists of all vectors (x, y, z) ∈ R3 for which [ 2 3 5 − 4 2 3 ] [ x y z ] = [ 0 0 ] , {\displaystyle {\begin{bmatrix}2&3&5\\-4&2&3\end{bmatrix}}{\begin{bmatrix}x\\y\\z\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}},} which can be expressed as a homogeneous system of linear equations involving x, y, and z: 2 x + 3 y + 5 z = 0 , − 4 x + 2 y + 3 z = 0. {\displaystyle {\begin{aligned}2x+3y+5z&=0,\\-4x+2y+3z&=0.\end{aligned}}} The same linear equations can also be written in matrix form as: [ 2 3 5 0 − 4 2 3 0 ] . {\displaystyle \left[{\begin{array}{ccc|c}2&3&5&0\\-4&2&3&0\end{array}}\right].} Through Gauss–Jordan elimination, the matrix can be reduced to: [ 1 0 1 / 16 0 0 1 13 / 8 0 ] . {\displaystyle \left[{\begin{array}{ccc|c}1&0&1/16&0\\0&1&13/8&0\end{array}}\right].} Rewriting the matrix in equation form yields: x = − 1 16 z y = − 13 8 z . 
{\displaystyle {\begin{aligned}x&=-{\frac {1}{16}}z\\y&=-{\frac {13}{8}}z.\end{aligned}}} The elements of the kernel can be further expressed in parametric vector form, as follows: [ x y z ] = c [ − 1 / 16 − 13 / 8 1 ] ( where c ∈ R ) {\displaystyle {\begin{bmatrix}x\\y\\z\end{bmatrix}}=c{\begin{bmatrix}-1/16\\-13/8\\1\end{bmatrix}}\quad ({\text{where }}c\in \mathbb {R} )} Since c is a free variable ranging over all real numbers, this can be expressed equally well as: [ x y z ] = c [ − 1 − 26 16 ] . {\displaystyle {\begin{bmatrix}x\\y\\z\end{bmatrix}}=c{\begin{bmatrix}-1\\-26\\16\end{bmatrix}}.} The kernel of A is precisely the solution set to these equations (in this case, a line through the origin in R3). Here, the vector (−1,−26,16)T constitutes a basis of the kernel of A. The nullity of A is therefore 1, as it is spanned by a single vector. The following dot products are zero: [ 2 3 5 ] [ − 1 − 26 16 ] = 0 a n d [ − 4 2 3 ] [ − 1 − 26 16 ] = 0 , {\displaystyle {\begin{bmatrix}2&3&5\end{bmatrix}}{\begin{bmatrix}-1\\-26\\16\end{bmatrix}}=0\quad \mathrm {and} \quad {\begin{bmatrix}-4&2&3\end{bmatrix}}{\begin{bmatrix}-1\\-26\\16\end{bmatrix}}=0,} which illustrates that vectors in the kernel of A are orthogonal to each of the row vectors of A. These two (linearly independent) row vectors span the row space of A—a plane orthogonal to the vector (−1,−26,16)T. With the rank 2 of A, the nullity 1 of A, and the dimension 3 of A, we have an illustration of the rank-nullity theorem. == Examples == If L: Rm → Rn, then the kernel of L is the solution set to a homogeneous system of linear equations. 
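The worked illustration above can be re-checked in a few lines:

```python
# Re-checking the illustration numerically: (-1, -26, 16) spans the
# kernel of A, and it is orthogonal to both rows of A.
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
v = np.array([-1.0, -26.0, 16.0])

assert np.allclose(A @ v, 0)          # v lies in the kernel
assert np.isclose(A[0] @ v, 0)        # orthogonal to row 1
assert np.isclose(A[1] @ v, 0)        # orthogonal to row 2
assert np.linalg.matrix_rank(A) == 2  # rank 2 + nullity 1 = 3 columns
```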
As in the above illustration, if L is the operator: L ( x 1 , x 2 , x 3 ) = ( 2 x 1 + 3 x 2 + 5 x 3 , − 4 x 1 + 2 x 2 + 3 x 3 ) {\displaystyle L(x_{1},x_{2},x_{3})=(2x_{1}+3x_{2}+5x_{3},\;-4x_{1}+2x_{2}+3x_{3})} then the kernel of L is the set of solutions to the equations 2 x 1 + 3 x 2 + 5 x 3 = 0 − 4 x 1 + 2 x 2 + 3 x 3 = 0 {\displaystyle {\begin{alignedat}{7}2x_{1}&\;+\;&3x_{2}&\;+\;&5x_{3}&\;=\;&0\\-4x_{1}&\;+\;&2x_{2}&\;+\;&3x_{3}&\;=\;&0\end{alignedat}}} Let C[0,1] denote the vector space of all continuous real-valued functions on the interval [0,1], and define L: C[0,1] → R by the rule L ( f ) = f ( 0.3 ) . {\displaystyle L(f)=f(0.3).} Then the kernel of L consists of all functions f ∈ C[0,1] for which f(0.3) = 0. Let C∞(R) be the vector space of all infinitely differentiable functions R → R, and let D: C∞(R) → C∞(R) be the differentiation operator: D ( f ) = d f d x . {\displaystyle D(f)={\frac {df}{dx}}.} Then the kernel of D consists of all functions in C∞(R) whose derivatives are zero, i.e. the set of all constant functions. Let R∞ be the direct product of infinitely many copies of R, and let s: R∞ → R∞ be the shift operator s ( x 1 , x 2 , x 3 , x 4 , … ) = ( x 2 , x 3 , x 4 , … ) . {\displaystyle s(x_{1},x_{2},x_{3},x_{4},\ldots )=(x_{2},x_{3},x_{4},\ldots ).} Then the kernel of s is the one-dimensional subspace consisting of all vectors (x1, 0, 0, 0, ...). If V is an inner product space and W is a subspace, the kernel of the orthogonal projection V → W is the orthogonal complement to W in V. == Computation by Gaussian elimination == A basis of the kernel of a matrix may be computed by Gaussian elimination. For this purpose, given an m × n matrix A, we construct first the row augmented matrix [ A I ] , {\displaystyle {\begin{bmatrix}A\\\hline I\end{bmatrix}},} where I is the n × n identity matrix. Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix [ B C ] . 
{\displaystyle {\begin{bmatrix}B\\\hline C\end{bmatrix}}.} A basis of the kernel of A consists in the non-zero columns of C such that the corresponding column of B is a zero column. In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero. For example, suppose that A = [ 1 0 − 3 0 2 − 8 0 1 5 0 − 1 4 0 0 0 1 7 − 9 0 0 0 0 0 0 ] . {\displaystyle A={\begin{bmatrix}1&0&-3&0&2&-8\\0&1&5&0&-1&4\\0&0&0&1&7&-9\\0&0&0&0&0&0\end{bmatrix}}.} Then [ A I ] = [ 1 0 − 3 0 2 − 8 0 1 5 0 − 1 4 0 0 0 1 7 − 9 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 ] . {\displaystyle {\begin{bmatrix}A\\\hline I\end{bmatrix}}={\begin{bmatrix}1&0&-3&0&2&-8\\0&1&5&0&-1&4\\0&0&0&1&7&-9\\0&0&0&0&0&0\\\hline 1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&1&0&0&0\\0&0&0&1&0&0\\0&0&0&0&1&0\\0&0&0&0&0&1\end{bmatrix}}.} Putting the upper part in column echelon form by column operations on the whole matrix gives [ B C ] = [ 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 3 − 2 8 0 1 0 − 5 1 − 4 0 0 0 1 0 0 0 0 1 0 − 7 9 0 0 0 0 1 0 0 0 0 0 0 1 ] . {\displaystyle {\begin{bmatrix}B\\\hline C\end{bmatrix}}={\begin{bmatrix}1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&1&0&0&0\\0&0&0&0&0&0\\\hline 1&0&0&3&-2&8\\0&1&0&-5&1&-4\\0&0&0&1&0&0\\0&0&1&0&-7&9\\0&0&0&0&1&0\\0&0&0&0&0&1\end{bmatrix}}.} The last three columns of B are zero columns. Therefore, the three last vectors of C, [ 3 − 5 1 0 0 0 ] , [ − 2 1 0 − 7 1 0 ] , [ 8 − 4 0 9 0 1 ] {\displaystyle \left[\!\!{\begin{array}{r}3\\-5\\1\\0\\0\\0\end{array}}\right],\;\left[\!\!{\begin{array}{r}-2\\1\\0\\-7\\1\\0\end{array}}\right],\;\left[\!\!{\begin{array}{r}8\\-4\\0\\9\\0\\1\end{array}}\right]} are a basis of the kernel of A. 
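The result of this example is easy to verify numerically:

```python
# Verifying the column-echelon computation above: the three vectors read
# off from C form a basis of the kernel of the 4 x 6 matrix A.
import numpy as np

A = np.array([[1, 0, -3, 0, 2, -8],
              [0, 1, 5, 0, -1, 4],
              [0, 0, 0, 1, 7, -9],
              [0, 0, 0, 0, 0, 0]], dtype=float)

kernel_basis = np.array([[3, -5, 1, 0, 0, 0],
                         [-2, 1, 0, -7, 1, 0],
                         [8, -4, 0, 9, 0, 1]], dtype=float).T  # as columns

assert np.allclose(A @ kernel_basis, 0)            # each vector is in ker(A)
assert np.linalg.matrix_rank(kernel_basis) == 3    # and they are independent
assert np.linalg.matrix_rank(A) + 3 == A.shape[1]  # rank-nullity: 3 + 3 = 6
```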
Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that [ A I ] {\displaystyle {\begin{bmatrix}A\\\hline I\end{bmatrix}}} reduces to [ B C ] {\displaystyle {\begin{bmatrix}B\\\hline C\end{bmatrix}}} means that there exists an invertible matrix P {\displaystyle P} such that [ A I ] P = [ B C ] , {\displaystyle {\begin{bmatrix}A\\\hline I\end{bmatrix}}P={\begin{bmatrix}B\\\hline C\end{bmatrix}},} with B {\displaystyle B} in column echelon form. Thus A P = B {\displaystyle AP=B} , I P = C {\displaystyle IP=C} , and A C = B {\displaystyle AC=B} . A column vector v {\displaystyle \mathbf {v} } belongs to the kernel of A {\displaystyle A} (that is A v = 0 {\displaystyle A\mathbf {v} =\mathbf {0} } ) if and only if B w = 0 , {\displaystyle B\mathbf {w} =\mathbf {0} ,} where w = P − 1 v = C − 1 v {\displaystyle \mathbf {w} =P^{-1}\mathbf {v} =C^{-1}\mathbf {v} } . As B {\displaystyle B} is in column echelon form, B w = 0 {\displaystyle B\mathbf {w} =\mathbf {0} } , if and only if the nonzero entries of w {\displaystyle \mathbf {w} } correspond to the zero columns of B {\displaystyle B} . By multiplying by C {\displaystyle C} , one may deduce that this is the case if and only if v = C w {\displaystyle \mathbf {v} =C\mathbf {w} } is a linear combination of the corresponding columns of C {\displaystyle C} . == Numerical computation == The problem of computing the kernel on a computer depends on the nature of the coefficients. === Exact coefficients === If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with Bareiss algorithm more efficiently than with Gaussian elimination. 
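As a sketch of exact-coefficient computation (here using SymPy's rational arithmetic as an illustration; the matrix reuses the example from the Illustration section above), the kernel comes out with no rounding error at all:

```python
# Exact-arithmetic kernel computation: with exact (rational) coefficients
# the kernel basis is obtained symbolically, free of rounding error.
# SymPy is used here purely as an illustration of the idea.
import sympy as sp

A = sp.Matrix([[2, 3, 5],
               [-4, 2, 3]])
basis = A.nullspace()            # list of exact kernel basis vectors
assert len(basis) == 1           # nullity 1
assert A * basis[0] == sp.zeros(2, 1)
```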
It is even more efficient to use modular arithmetic and the Chinese remainder theorem, which reduces the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication). For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity but are faster and behave better with modern computer hardware. === Floating point computation === For matrices whose entries are floating-point numbers, the problem of computing the kernel makes sense only for matrices whose number of rows equals their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if the matrix is well conditioned, i.e. has a low condition number. And even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large to obtain a meaningful result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the LAPACK library. == See also == == Notes and references == == Bibliography == == External links == "Kernel of a matrix", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Khan Academy, Introduction to the Null Space of a Matrix
Wikipedia/Kernel_(linear_algebra)
In the mathematical field of differential geometry, the Riemann curvature tensor or Riemann–Christoffel tensor (after Bernhard Riemann and Elwin Bruno Christoffel) is the most common way to express the curvature of Riemannian manifolds. It assigns a tensor to each point of a Riemannian manifold (i.e., it is a tensor field). It is a local invariant of Riemannian metrics that measures the failure of the second covariant derivatives to commute. A Riemannian manifold has zero curvature if and only if it is flat, i.e. locally isometric to Euclidean space. The curvature tensor can also be defined for any pseudo-Riemannian manifold, or indeed any manifold equipped with an affine connection. It is a central mathematical tool in the theory of general relativity, the modern theory of gravity. The curvature of spacetime is in principle observable via the geodesic deviation equation. The curvature tensor represents the tidal force experienced by a rigid body moving along a geodesic in a sense made precise by the Jacobi equation. == Definition == Let ( M , g ) {\displaystyle (M,g)} be a Riemannian or pseudo-Riemannian manifold, and X ( M ) {\displaystyle {\mathfrak {X}}(M)} be the space of all vector fields on M {\displaystyle M} . We define the Riemann curvature tensor as a map X ( M ) × X ( M ) × X ( M ) → X ( M ) {\displaystyle {\mathfrak {X}}(M)\times {\mathfrak {X}}(M)\times {\mathfrak {X}}(M)\rightarrow {\mathfrak {X}}(M)} by the following formula where ∇ {\displaystyle \nabla } is the Levi-Civita connection: R ( X , Y ) Z = ∇ X ∇ Y Z − ∇ Y ∇ X Z − ∇ [ X , Y ] Z {\displaystyle R(X,Y)Z=\nabla _{X}\nabla _{Y}Z-\nabla _{Y}\nabla _{X}Z-\nabla _{[X,Y]}Z} or equivalently R ( X , Y ) = [ ∇ X , ∇ Y ] − ∇ [ X , Y ] {\displaystyle R(X,Y)=[\nabla _{X},\nabla _{Y}]-\nabla _{[X,Y]}} where [ X , Y ] {\displaystyle [X,Y]} is the Lie bracket of vector fields and [ ∇ X , ∇ Y ] {\displaystyle [\nabla _{X},\nabla _{Y}]} is a commutator of differential operators. 
It turns out that the right-hand side actually only depends on the value of the vector fields X , Y , Z {\displaystyle X,Y,Z} at a given point, which is notable since the covariant derivative of a vector field also depends on the field values in a neighborhood of the point. Hence, R {\displaystyle R} is a ( 1 , 3 ) {\displaystyle (1,3)} -tensor field. For fixed X , Y {\displaystyle X,Y} , the linear transformation Z ↦ R ( X , Y ) Z {\displaystyle Z\mapsto R(X,Y)Z} is also called the curvature transformation or endomorphism. Occasionally, the curvature tensor is defined with the opposite sign. The curvature tensor measures noncommutativity of the covariant derivative, and as such is the integrability obstruction for the existence of an isometry with Euclidean space (called, in this context, flat space). Since the Levi-Civita connection is torsion-free, its curvature can also be expressed in terms of the second covariant derivative ∇ X , Y 2 Z = ∇ X ∇ Y Z − ∇ ∇ X Y Z {\textstyle \nabla _{X,Y}^{2}Z=\nabla _{X}\nabla _{Y}Z-\nabla _{\nabla _{X}Y}Z} which depends only on the values of X , Y {\displaystyle X,Y} at a point. The curvature can then be written as R ( X , Y ) = ∇ X , Y 2 − ∇ Y , X 2 {\displaystyle R(X,Y)=\nabla _{X,Y}^{2}-\nabla _{Y,X}^{2}} Thus, the curvature tensor measures the noncommutativity of the second covariant derivative. In abstract index notation, R d c a b Z c = ∇ a ∇ b Z d − ∇ b ∇ a Z d . {\displaystyle R^{d}{}_{cab}Z^{c}=\nabla _{a}\nabla _{b}Z^{d}-\nabla _{b}\nabla _{a}Z^{d}.} The Riemann curvature tensor is also the commutator of the covariant derivative of an arbitrary covector A ν {\displaystyle A_{\nu }} with itself: A ν ; ρ σ − A ν ; σ ρ = A β R β ν ρ σ . {\displaystyle A_{\nu ;\rho \sigma }-A_{\nu ;\sigma \rho }=A_{\beta }R^{\beta }{}_{\nu \rho \sigma }.} This formula is often called the Ricci identity. This is the classical method used by Ricci and Levi-Civita to obtain an expression for the Riemann curvature tensor. 
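The Ricci identity can be verified symbolically on a concrete example. The sketch below uses the unit 2-sphere with its standard Christoffel symbols and the usual coordinate expression of the curvature in terms of those symbols (an assumption consistent with the definition above); the covector field A is an arbitrary illustrative choice:

```python
# Symbolic check, on the unit 2-sphere, of the Ricci identity
# A_{nu;rho sigma} - A_{nu;sigma rho} = A_beta R^beta_{nu rho sigma},
# with R^rho_{sigma mu nu} built from the Christoffel symbols. The
# covector field A is an arbitrary illustrative example.
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = (theta, phi)
n = 2

# Christoffel symbols of the unit sphere: Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sp.Integer(0)] * n for _ in range(n)] for _ in range(n)]
Gamma[0][1][1] = -sp.sin(theta) * sp.cos(theta)
Gamma[1][0][1] = Gamma[1][1][0] = sp.cot(theta)

def riemann(rho, sig, mu, nu):
    """R^rho_{sig mu nu} from the Christoffel symbols."""
    return sp.simplify(
        sp.diff(Gamma[rho][nu][sig], coords[mu])
        - sp.diff(Gamma[rho][mu][sig], coords[nu])
        + sum(Gamma[rho][mu][l] * Gamma[l][nu][sig]
              - Gamma[rho][nu][l] * Gamma[l][mu][sig] for l in range(n)))

# Known curvature of the unit sphere: R^theta_{phi theta phi} = sin^2(theta)
assert sp.simplify(riemann(0, 1, 0, 1) - sp.sin(theta)**2) == 0

A = [sp.sin(theta) * sp.cos(phi), sp.exp(phi)]  # example covector A_nu

# First covariant derivative: T[nu][rho] = A_{nu;rho}
T = [[sp.diff(A[nu], coords[rho]) - sum(Gamma[d][nu][rho] * A[d] for d in range(n))
      for rho in range(n)] for nu in range(n)]

def second_cov(nu, rho, sig):
    """A_{nu;rho sigma}: covariant derivative of the (0,2) tensor T."""
    return (sp.diff(T[nu][rho], coords[sig])
            - sum(Gamma[d][nu][sig] * T[d][rho] for d in range(n))
            - sum(Gamma[d][rho][sig] * T[nu][d] for d in range(n)))

# Ricci identity, checked for every index combination:
for nu in range(n):
    for rho in range(n):
        for sig in range(n):
            lhs = second_cov(nu, rho, sig) - second_cov(nu, sig, rho)
            rhs = sum(A[b] * riemann(b, nu, rho, sig) for b in range(n))
            assert sp.simplify(lhs - rhs) == 0
```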
This identity can be generalized to get the commutators for two covariant derivatives of arbitrary tensors as follows ∇ δ ∇ γ T α 1 ⋯ α r β 1 ⋯ β s − ∇ γ ∇ δ T α 1 ⋯ α r β 1 ⋯ β s = R α 1 ρ δ γ T ρ α 2 ⋯ α r β 1 ⋯ β s + … + R α r ρ δ γ T α 1 ⋯ α r − 1 ρ β 1 ⋯ β s − R σ β 1 δ γ T α 1 ⋯ α r σ β 2 ⋯ β s − … − R σ β s δ γ T α 1 ⋯ α r β 1 ⋯ β s − 1 σ {\displaystyle {\begin{aligned}&\nabla _{\delta }\nabla _{\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\nabla _{\gamma }\nabla _{\delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}\\[3pt]={}&R^{\alpha _{1}}{}_{\rho \delta \gamma }T^{\rho \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}+\ldots +R^{\alpha _{r}}{}_{\rho \delta \gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\rho }{}_{\beta _{1}\cdots \beta _{s}}-R^{\sigma }{}_{\beta _{1}\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\sigma \beta _{2}\cdots \beta _{s}}-\ldots -R^{\sigma }{}_{\beta _{s}\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\sigma }\end{aligned}}} This formula also applies to tensor densities without alteration, because for the Levi-Civita (not generic) connection one gets: ∇ μ ( g ) ≡ ( g ) ; μ = 0 , {\displaystyle \nabla _{\mu }\left({\sqrt {g}}\right)\equiv \left({\sqrt {g}}\right)_{;\mu }=0,} where g = | det ( g μ ν ) | . {\displaystyle g=\left|\det \left(g_{\mu \nu }\right)\right|.} It is sometimes convenient to also define the purely covariant version of the curvature tensor by R σ μ ν ρ = g ρ ζ R ζ σ μ ν . {\displaystyle R_{\sigma \mu \nu \rho }=g_{\rho \zeta }R^{\zeta }{}_{\sigma \mu \nu }.} == Geometric meaning == === Informally === One can see the effects of curved space by comparing a tennis court and the Earth. Start at the lower right corner of the tennis court, with a racket held out towards north. 
Then while walking around the outline of the court, at each step make sure the tennis racket is maintained in the same orientation, parallel to its previous positions. Once the loop is complete the tennis racket will be parallel to its initial starting position. This is because tennis courts are built so the surface is flat. On the other hand, the surface of the Earth is curved: we can complete a loop on the surface of the Earth. Starting at the equator, point a tennis racket north along the surface of the Earth. Once again the tennis racket should always remain parallel to its previous position, using the local plane of the horizon as a reference. For this path, first walk to the north pole, then walk sideways (i.e. without turning), then down to the equator, and finally walk backwards to your starting position. Now the tennis racket will be pointing towards the west, even though when you began your journey it pointed north and you never turned your body. This process is akin to parallel transporting a vector along the path and the difference identifies how lines which appear "straight" are only "straight" locally. Each time a loop is completed the tennis racket will be deflected further from its initial position by an amount depending on the distance and the curvature of the surface. It is possible to identify paths along a curved surface where parallel transport works as it does on flat space. These are the geodesics of the space, for example any segment of a great circle of a sphere. The concept of a curved space in mathematics differs from conversational usage. For example, if the above process was completed on a cylinder one would find that it is not curved overall as the curvature around the cylinder cancels with the flatness along the cylinder, which is a consequence of Gaussian curvature and Gauss's Theorema Egregium. A familiar example of this is a floppy pizza slice, which will remain rigid along its length if it is curved along its width. 
The Riemann curvature tensor is a way to capture a measure of the intrinsic curvature. When you write it down in terms of its components (like writing down the components of a vector), it consists of a multi-dimensional array of sums and products of partial derivatives (some of those partial derivatives can be thought of as akin to capturing the curvature imposed upon someone walking in straight lines on a curved surface). === Formally === When a vector in a Euclidean space is parallel transported around a loop, it will again point in the initial direction after returning to its original position. However, this property does not hold in the general case. The Riemann curvature tensor directly measures the failure of this in a general Riemannian manifold. This failure is known as the non-holonomy of the manifold. Let x t {\displaystyle x_{t}} be a curve in a Riemannian manifold M {\displaystyle M} . Denote by τ x t : T x 0 M → T x t M {\displaystyle \tau _{x_{t}}:T_{x_{0}}M\to T_{x_{t}}M} the parallel transport map along x t {\displaystyle x_{t}} . The parallel transport maps are related to the covariant derivative by ∇ x ˙ 0 Y = lim h → 0 1 h ( τ x h − 1 ( Y x h ) − Y x 0 ) = d d t ( τ x t − 1 ( Y x t ) ) | t = 0 {\displaystyle \nabla _{{\dot {x}}_{0}}Y=\lim _{h\to 0}{\frac {1}{h}}\left(\tau _{x_{h}}^{-1}\left(Y_{x_{h}}\right)-Y_{x_{0}}\right)=\left.{\frac {d}{dt}}\left(\tau _{x_{t}}^{-1}(Y_{x_{t}})\right)\right|_{t=0}} for each vector field Y {\displaystyle Y} defined along the curve. Suppose that X {\displaystyle X} and Y {\displaystyle Y} are a pair of commuting vector fields. Each of these fields generates a one-parameter group of diffeomorphisms in a neighborhood of x 0 {\displaystyle x_{0}} . Denote by τ t X {\displaystyle \tau _{tX}} and τ t Y {\displaystyle \tau _{tY}} , respectively, the parallel transports along the flows of X {\displaystyle X} and Y {\displaystyle Y} for time t {\displaystyle t} . 
Parallel transport of a vector Z ∈ T x 0 M {\displaystyle Z\in T_{x_{0}}M} around the quadrilateral with sides t Y {\displaystyle tY} , s X {\displaystyle sX} , − t Y {\displaystyle -tY} , − s X {\displaystyle -sX} is given by τ s X − 1 τ t Y − 1 τ s X τ t Y Z . {\displaystyle \tau _{sX}^{-1}\tau _{tY}^{-1}\tau _{sX}\tau _{tY}Z.} The difference between this and Z {\displaystyle Z} measures the failure of parallel transport to return Z {\displaystyle Z} to its original position in the tangent space T x 0 M {\displaystyle T_{x_{0}}M} . Shrinking the loop by sending s , t → 0 {\displaystyle s,t\to 0} gives the infinitesimal description of this deviation: d d s d d t τ s X − 1 τ t Y − 1 τ s X τ t Y Z | s = t = 0 = ( ∇ X ∇ Y − ∇ Y ∇ X − ∇ [ X , Y ] ) Z = R ( X , Y ) Z {\displaystyle \left.{\frac {d}{ds}}{\frac {d}{dt}}\tau _{sX}^{-1}\tau _{tY}^{-1}\tau _{sX}\tau _{tY}Z\right|_{s=t=0}=\left(\nabla _{X}\nabla _{Y}-\nabla _{Y}\nabla _{X}-\nabla _{[X,Y]}\right)Z=R(X,Y)Z} where R {\displaystyle R} is the Riemann curvature tensor. == Coordinate expression == Converting to the tensor index notation, the Riemann curvature tensor is given by R ρ σ μ ν = d x ρ ( R ( ∂ μ , ∂ ν ) ∂ σ ) {\displaystyle R^{\rho }{}_{\sigma \mu \nu }=dx^{\rho }\left(R\left(\partial _{\mu },\partial _{\nu }\right)\partial _{\sigma }\right)} where ∂ μ = ∂ / ∂ x μ {\displaystyle \partial _{\mu }=\partial /\partial x^{\mu }} are the coordinate vector fields. The above expression can be written using Christoffel symbols: R ρ σ μ ν = ∂ μ Γ ρ ν σ − ∂ ν Γ ρ μ σ + Γ ρ μ λ Γ λ ν σ − Γ ρ ν λ Γ λ μ σ {\displaystyle R^{\rho }{}_{\sigma \mu \nu }=\partial _{\mu }\Gamma ^{\rho }{}_{\nu \sigma }-\partial _{\nu }\Gamma ^{\rho }{}_{\mu \sigma }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }} (See also List of formulas in Riemannian geometry). 
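The Christoffel-symbol formula above can be checked symbolically. The following sketch uses sympy, with the round metric on the unit 2-sphere chosen purely as an illustration (the metric, coordinate names, and helper functions are assumptions, not part of the article); for the unit sphere one expects R^θ_{φθφ} = sin²θ and, after the Ricci contraction of the next section, R_ab = g_ab.

```python
# A sketch checking the coordinate formula for the Riemann tensor with sympy,
# using the round metric on the unit 2-sphere as an illustrative example.
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])  # round metric on the unit sphere
ginv = g.inv()
n = 2

# Christoffel symbols: Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij})
Gamma = [[[sp.simplify(sum(
            sp.Rational(1, 2) * ginv[k, l]
            * (sp.diff(g[l, j], x[i]) + sp.diff(g[l, i], x[j]) - sp.diff(g[i, j], x[l]))
            for l in range(n)))
           for j in range(n)] for i in range(n)] for k in range(n)]

# R^rho_{sigma mu nu} = d_mu Gamma^rho_{nu sigma} - d_nu Gamma^rho_{mu sigma}
#                       + Gamma^rho_{mu lam} Gamma^lam_{nu sigma}
#                       - Gamma^rho_{nu lam} Gamma^lam_{mu sigma}
def riemann(rho, sig, mu, nu):
    expr = sp.diff(Gamma[rho][nu][sig], x[mu]) - sp.diff(Gamma[rho][mu][sig], x[nu])
    expr += sum(Gamma[rho][mu][lam] * Gamma[lam][nu][sig]
                - Gamma[rho][nu][lam] * Gamma[lam][mu][sig] for lam in range(n))
    return sp.simplify(expr)

print(riemann(0, 1, 0, 1))  # R^theta_{phi theta phi} = sin(theta)**2

# Contracting the first and third indices gives the Ricci tensor; for the
# unit sphere R_ab = g_ab, so the difference below simplifies to zero.
Ricci = sp.Matrix(2, 2, lambda s, v: sum(riemann(m, s, m, v) for m in range(n)))
print((Ricci - g).applyfunc(sp.simplify))  # the zero matrix
```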
== Symmetries and identities == The Riemann curvature tensor has the following symmetries and identities: skew symmetry in its first two arguments, R ( X , Y ) Z = − R ( Y , X ) Z {\displaystyle R(X,Y)Z=-R(Y,X)Z} ; skew symmetry in its last two slots, ⟨ R ( X , Y ) Z , W ⟩ = − ⟨ R ( X , Y ) W , Z ⟩ {\displaystyle \langle R(X,Y)Z,W\rangle =-\langle R(X,Y)W,Z\rangle } ; the first (algebraic) Bianchi identity, R ( X , Y ) Z + R ( Y , Z ) X + R ( Z , X ) Y = 0 {\displaystyle R(X,Y)Z+R(Y,Z)X+R(Z,X)Y=0} ; interchange symmetry, ⟨ R ( X , Y ) Z , W ⟩ = ⟨ R ( Z , W ) X , Y ⟩ {\displaystyle \langle R(X,Y)Z,W\rangle =\langle R(Z,W)X,Y\rangle } ; and the second (differential) Bianchi identity, ( ∇ X R ) ( Y , Z ) + ( ∇ Y R ) ( Z , X ) + ( ∇ Z R ) ( X , Y ) = 0 {\displaystyle (\nabla _{X}R)(Y,Z)+(\nabla _{Y}R)(Z,X)+(\nabla _{Z}R)(X,Y)=0} . Here the bracket ⟨ , ⟩ {\displaystyle \langle ,\rangle } refers to the inner product on the tangent space induced by the metric tensor, and the brackets and parentheses on the indices denote the antisymmetrization and symmetrization operators, respectively. If there is nonzero torsion, the Bianchi identities involve the torsion tensor. The first (algebraic) Bianchi identity was discovered by Ricci, but is often called the first Bianchi identity or algebraic Bianchi identity, because it looks similar to the differential Bianchi identity. The first three identities form a complete list of symmetries of the curvature tensor, i.e. given any tensor which satisfies the identities above, one can find a Riemannian manifold with such a curvature tensor at some point. Simple calculations show that such a tensor has n 2 ( n 2 − 1 ) / 12 {\displaystyle n^{2}\left(n^{2}-1\right)/12} independent components. Interchange symmetry follows from these. The algebraic symmetries are also equivalent to saying that R belongs to the image of the Young symmetrizer corresponding to the partition 2+2. On a Riemannian manifold one has the covariant derivative ∇ u R {\displaystyle \nabla _{u}R} and the Bianchi identity (often called the second Bianchi identity or differential Bianchi identity) takes the form of the last identity listed above. == Ricci curvature == The Ricci curvature tensor is the contraction of the first and third indices of the Riemann tensor. R a b ⏟ Ricci ≡ R c a c b = g c d R c a d b ⏟ Riemann {\displaystyle \underbrace {R_{ab}} _{\text{Ricci}}\equiv R^{c}{}_{acb}=g^{cd}\underbrace {R_{cadb}} _{\text{Riemann}}} == Special cases == === Surfaces === For a two-dimensional surface, the Bianchi identities imply that the Riemann tensor has only one independent component, which means that the Ricci scalar completely determines the Riemann tensor.
There is only one valid expression for the Riemann tensor which fits the required symmetries: R a b c d = f ( R ) ( g a c g d b − g a d g c b ) {\displaystyle R_{abcd}=f(R)\left(g_{ac}g_{db}-g_{ad}g_{cb}\right)} and by contracting with the metric twice we find the explicit form: R a b c d = K ( g a c g d b − g a d g c b ) , {\displaystyle R_{abcd}=K\left(g_{ac}g_{db}-g_{ad}g_{cb}\right),} where g a b {\displaystyle g_{ab}} is the metric tensor and K = R / 2 {\displaystyle K=R/2} is a function called the Gaussian curvature and a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} and d {\displaystyle d} take values either 1 or 2. The Riemann tensor has only one functionally independent component. The Gaussian curvature coincides with the sectional curvature of the surface. It is also exactly half the scalar curvature of the 2-manifold, while the Ricci curvature tensor of the surface is simply given by R a b = K g a b . {\displaystyle R_{ab}=Kg_{ab}.} === Space forms === A Riemannian manifold is a space form if its sectional curvature is equal to a constant K {\displaystyle K} . The Riemann tensor of a space form is given by R a b c d = K ( g a c g d b − g a d g c b ) . {\displaystyle R_{abcd}=K\left(g_{ac}g_{db}-g_{ad}g_{cb}\right).} Conversely, except in dimension 2, if the curvature of a Riemannian manifold has this form for some function K {\displaystyle K} , then the Bianchi identities imply that K {\displaystyle K} is constant and thus that the manifold is (locally) a space form. == See also == Introduction to the mathematics of general relativity Decomposition of the Riemann curvature tensor Curvature of Riemannian manifolds Ricci curvature tensor == Citations == == References ==
Wikipedia/Riemann_curvature_tensor
In mathematics and physics, Penrose graphical notation or tensor diagram notation is a (usually handwritten) visual depiction of multilinear functions or tensors proposed by Roger Penrose in 1971. A diagram in the notation consists of several shapes linked together by lines. The notation widely appears in modern quantum theory, particularly in matrix product states and quantum circuits. In particular, categorical quantum mechanics (which includes ZX-calculus) is a fully comprehensive reformulation of quantum theory in terms of Penrose diagrams. The notation has been studied extensively by Predrag Cvitanović, who used it, along with Feynman's diagrams and other related notations, in developing "birdtracks", a group-theoretical diagrammatic notation for classifying the classical Lie groups. Penrose's notation has also been generalized using representation theory to spin networks in physics, and with the presence of matrix groups to trace diagrams in linear algebra. == Interpretations == === Multilinear algebra === In the language of multilinear algebra, each shape represents a multilinear function. The lines attached to shapes represent the inputs or outputs of a function, and attaching shapes together in some way is essentially the composition of functions. === Tensors === In the language of tensor algebra, a particular tensor is associated with a particular shape with many lines projecting upwards and downwards, corresponding to abstract upper and lower indices of tensors respectively. Connecting lines between two shapes corresponds to contraction of indices. One advantage of this notation is that one does not have to invent new letters for new indices. This notation is also explicitly basis-independent. === Matrices === Each shape represents a matrix; tensor multiplication is done horizontally, and matrix multiplication is done vertically.
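Under the matrix reading just described, stacking shapes vertically composes linear maps (matrix product), while placing them side by side forms their tensor product, which for matrices is the Kronecker product. A small numpy sketch (the particular matrices are arbitrary illustrations):

```python
# Sketch of the matrix interpretation of Penrose diagrams with numpy:
# vertical stacking = composition (matrix product),
# horizontal juxtaposition = tensor product (Kronecker product).
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

vertical = A @ B            # apply B, then A
horizontal = np.kron(A, B)  # A tensor B, acting on a 4-dimensional space

# the tensor product acts factor-wise: (A ⊗ B)(x ⊗ y) = (A x) ⊗ (B y)
x = np.array([1., -1.])
y = np.array([2., 5.])
assert np.allclose(horizontal @ np.kron(x, y), np.kron(A @ x, B @ y))
print(vertical.shape, horizontal.shape)  # (2, 2) (4, 4)
```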
== Representation of special tensors == === Metric tensor === The metric tensor is represented by a U-shaped loop or an upside-down U-shaped loop, depending on the type of tensor that is used. === Levi-Civita tensor === The Levi-Civita antisymmetric tensor is represented by a thick horizontal bar with sticks pointing downwards or upwards, depending on the type of tensor that is used. === Structure constant === The structure constants ( γ a b c {\displaystyle {\gamma _{ab}}^{c}} ) of a Lie algebra are represented by a small triangle with one line pointing upwards and two lines pointing downwards. == Tensor operations == === Contraction of indices === Contraction of indices is represented by joining the index lines together. === Symmetrization === Symmetrization of indices is represented by a thick zigzag or wavy bar crossing the index lines horizontally. === Antisymmetrization === Antisymmetrization of indices is represented by a thick straight line crossing the index lines horizontally. === Determinant === The determinant is formed by applying antisymmetrization to the indices. === Covariant derivative === The covariant derivative ( ∇ {\displaystyle \nabla } ) is represented by a circle around the tensor(s) to be differentiated and a line joined from the circle pointing downwards to represent the lower index of the derivative. == Tensor manipulation == The diagrammatic notation is useful in manipulating tensor algebra. It usually involves a few simple "identities" of tensor manipulations. For example, ε a . . . c ε a . . . c = n ! {\displaystyle \varepsilon _{a...c}\varepsilon ^{a...c}=n!} , where n is the number of dimensions, is a common "identity". === Riemann curvature tensor === The Ricci and Bianchi identities given in terms of the Riemann curvature tensor illustrate the power of the notation. == Extensions == The notation has been extended with support for spinors and twistors.
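The ε-identity quoted under Tensor manipulation above can be checked numerically. In the sketch below (numpy assumed; Euclidean signature, so raised and lowered ε components agree, and the helper name levi_civita is illustrative), np.einsum plays the role of joining index lines in a diagram:

```python
# Sketch: checking the diagrammatic identity eps_{a...c} eps^{a...c} = n!
# for n = 3, with contraction expressed via np.einsum.
import itertools
import math
import numpy as np

def levi_civita(n):
    """Dense Levi-Civita symbol as an n-dimensional array."""
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        # sign of the permutation via counting inversions
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        eps[perm] = (-1) ** inv
    return eps

eps = levi_civita(3)
full_contraction = np.einsum('abc,abc->', eps, eps)  # join all three index lines
print(full_contraction)  # 6.0, i.e. 3!, as the identity predicts
```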
== See also == Abstract index notation Angular momentum diagrams (quantum mechanics) Braided monoidal category Categorical quantum mechanics uses tensor diagram notation Matrix product state uses Penrose graphical notation Ricci calculus Spin networks Trace diagram == Notes ==
Wikipedia/Penrose_graphical_notation
In mathematics, a symmetric tensor is an unmixed tensor that is invariant under a permutation of its vector arguments: T ( v 1 , v 2 , … , v r ) = T ( v σ 1 , v σ 2 , … , v σ r ) {\displaystyle T(v_{1},v_{2},\ldots ,v_{r})=T(v_{\sigma 1},v_{\sigma 2},\ldots ,v_{\sigma r})} for every permutation σ of the symbols {1, 2, ..., r}. Alternatively, a symmetric tensor of order r represented in coordinates as a quantity with r indices satisfies T i 1 i 2 ⋯ i r = T i σ 1 i σ 2 ⋯ i σ r . {\displaystyle T_{i_{1}i_{2}\cdots i_{r}}=T_{i_{\sigma 1}i_{\sigma 2}\cdots i_{\sigma r}}.} The space of symmetric tensors of order r on a finite-dimensional vector space V is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics. == Definition == Let V be a vector space and T ∈ V ⊗ k {\displaystyle T\in V^{\otimes k}} a tensor of order k. Then T is a symmetric tensor if τ σ T = T {\displaystyle \tau _{\sigma }T=T\,} for the braiding maps associated to every permutation σ on the symbols {1,2,...,k} (or equivalently for every transposition on these symbols). Given a basis {ei} of V, any symmetric tensor T of rank k can be written as T = ∑ i 1 , … , i k = 1 N T i 1 i 2 ⋯ i k e i 1 ⊗ e i 2 ⊗ ⋯ ⊗ e i k {\displaystyle T=\sum _{i_{1},\ldots ,i_{k}=1}^{N}T_{i_{1}i_{2}\cdots i_{k}}e^{i_{1}}\otimes e^{i_{2}}\otimes \cdots \otimes e^{i_{k}}} for some unique list of coefficients T i 1 i 2 ⋯ i k {\displaystyle T_{i_{1}i_{2}\cdots i_{k}}} (the components of the tensor in the basis) that are symmetric on the indices. 
That is to say T i σ 1 i σ 2 ⋯ i σ k = T i 1 i 2 ⋯ i k {\displaystyle T_{i_{\sigma 1}i_{\sigma 2}\cdots i_{\sigma k}}=T_{i_{1}i_{2}\cdots i_{k}}} for every permutation σ. The space of all symmetric tensors of order k defined on V is often denoted by Sk(V) or Symk(V). It is itself a vector space, and if V has dimension N then the dimension of Symk(V) is the binomial coefficient dim ⁡ Sym k ⁡ ( V ) = ( N + k − 1 k ) . {\displaystyle \dim \operatorname {Sym} ^{k}(V)={N+k-1 \choose k}.} We then construct Sym(V) as the direct sum of Symk(V) for k = 0,1,2,... Sym ⁡ ( V ) = ⨁ k = 0 ∞ Sym k ⁡ ( V ) . {\displaystyle \operatorname {Sym} (V)=\bigoplus _{k=0}^{\infty }\operatorname {Sym} ^{k}(V).} == Examples == There are many examples of symmetric tensors. Some include the metric tensor, g μ ν {\displaystyle g_{\mu \nu }} , the Einstein tensor, G μ ν {\displaystyle G_{\mu \nu }} , and the Ricci tensor, R μ ν {\displaystyle R_{\mu \nu }} . Many material properties and fields used in physics and engineering can be represented as symmetric tensor fields; for example: stress, strain, and anisotropic conductivity. Also, in diffusion MRI one often uses symmetric tensors to describe diffusion in the brain or other parts of the body. Ellipsoids are examples of algebraic varieties; and so, for general rank, symmetric tensors, in the guise of homogeneous polynomials, are used to define projective varieties, and are often studied as such. Given a Riemannian manifold ( M , g ) {\displaystyle (M,g)} equipped with its Levi-Civita connection ∇ {\displaystyle \nabla } , the covariant curvature tensor is a symmetric order 2 tensor over the vector space V = Ω 2 ( M ) = ⋀ 2 T ∗ M {\textstyle V=\Omega ^{2}(M)=\bigwedge ^{2}T^{*}M} of differential 2-forms.
This corresponds to the fact that, viewing R i j k ℓ ∈ ( T ∗ M ) ⊗ 4 {\displaystyle R_{ijk\ell }\in (T^{*}M)^{\otimes 4}} , we have the symmetry R i j k ℓ = R k ℓ i j {\displaystyle R_{ij\,k\ell }=R_{k\ell \,ij}} between the first and second pairs of arguments in addition to antisymmetry within each pair: R j i k ℓ = − R i j k ℓ = R i j ℓ k {\displaystyle R_{jik\ell }=-R_{ijk\ell }=R_{ij\ell k}} . == Symmetric part of a tensor == Suppose V {\displaystyle V} is a vector space over a field of characteristic 0. If T ∈ V⊗k is a tensor of order k {\displaystyle k} , then the symmetric part of T {\displaystyle T} is the symmetric tensor defined by Sym T = 1 k ! ∑ σ ∈ S k τ σ T , {\displaystyle \operatorname {Sym} \,T={\frac {1}{k!}}\sum _{\sigma \in {\mathfrak {S}}_{k}}\tau _{\sigma }T,} the summation extending over the symmetric group on k symbols. In terms of a basis, and employing the Einstein summation convention, if T = T i 1 i 2 ⋯ i k e i 1 ⊗ e i 2 ⊗ ⋯ ⊗ e i k , {\displaystyle T=T_{i_{1}i_{2}\cdots i_{k}}e^{i_{1}}\otimes e^{i_{2}}\otimes \cdots \otimes e^{i_{k}},} then Sym T = 1 k ! ∑ σ ∈ S k T i σ 1 i σ 2 ⋯ i σ k e i 1 ⊗ e i 2 ⊗ ⋯ ⊗ e i k . {\displaystyle \operatorname {Sym} \,T={\frac {1}{k!}}\sum _{\sigma \in {\mathfrak {S}}_{k}}T_{i_{\sigma 1}i_{\sigma 2}\cdots i_{\sigma k}}e^{i_{1}}\otimes e^{i_{2}}\otimes \cdots \otimes e^{i_{k}}.} The components of the tensor appearing on the right are often denoted by T ( i 1 i 2 ⋯ i k ) = 1 k ! ∑ σ ∈ S k T i σ 1 i σ 2 ⋯ i σ k {\displaystyle T_{(i_{1}i_{2}\cdots i_{k})}={\frac {1}{k!}}\sum _{\sigma \in {\mathfrak {S}}_{k}}T_{i_{\sigma 1}i_{\sigma 2}\cdots i_{\sigma k}}} with parentheses () around the indices being symmetrized. Square brackets [] are used to indicate anti-symmetrization. 
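The symmetrization formula translates directly into numpy, with np.transpose realizing each braiding map τ_σ; the array shapes and the helper name sym below are illustrative assumptions. The sketch also checks the dimension count dim Sym^k(V) = C(N + k − 1, k) from the Definition section by enumerating non-decreasing index lists:

```python
# Sketch: Sym T = (1/k!) * sum over permutations sigma of tau_sigma T,
# for a numpy array of order k, plus a check of the dimension formula.
import itertools
import math
import numpy as np

def sym(T):
    """Symmetric part of an order-k numpy array T."""
    k = T.ndim
    terms = (np.transpose(T, perm) for perm in itertools.permutations(range(k)))
    return sum(terms) / math.factorial(k)

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))
S = sym(T)

# S is invariant under every index permutation, and Sym is a projection
assert all(np.allclose(S, np.transpose(S, p)) for p in itertools.permutations(range(3)))
assert np.allclose(sym(S), S)

# independent components of a symmetric tensor: one per non-decreasing
# index list (i1 <= ... <= ik), matching the binomial coefficient
N, k = 3, 3
n_indep = sum(1 for _ in itertools.combinations_with_replacement(range(N), k))
assert n_indep == math.comb(N + k - 1, k) == 10
```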
== Symmetric product == If T is a simple tensor, given as a pure tensor product T = v 1 ⊗ v 2 ⊗ ⋯ ⊗ v r {\displaystyle T=v_{1}\otimes v_{2}\otimes \cdots \otimes v_{r}} then the symmetric part of T is the symmetric product of the factors: v 1 ⊙ v 2 ⊙ ⋯ ⊙ v r := 1 r ! ∑ σ ∈ S r v σ 1 ⊗ v σ 2 ⊗ ⋯ ⊗ v σ r . {\displaystyle v_{1}\odot v_{2}\odot \cdots \odot v_{r}:={\frac {1}{r!}}\sum _{\sigma \in {\mathfrak {S}}_{r}}v_{\sigma 1}\otimes v_{\sigma 2}\otimes \cdots \otimes v_{\sigma r}.} In general we can turn Sym(V) into an algebra by defining the commutative and associative product ⊙. Given two tensors T1 ∈ Symk1(V) and T2 ∈ Symk2(V), we use the symmetrization operator to define: T 1 ⊙ T 2 = Sym ⁡ ( T 1 ⊗ T 2 ) ( ∈ Sym k 1 + k 2 ⁡ ( V ) ) . {\displaystyle T_{1}\odot T_{2}=\operatorname {Sym} (T_{1}\otimes T_{2})\quad \left(\in \operatorname {Sym} ^{k_{1}+k_{2}}(V)\right).} It can be verified (as is done by Kostrikin and Manin) that the resulting product is in fact commutative and associative. In some cases the operator is omitted: T1T2 = T1 ⊙ T2. In some cases an exponential notation is used: v ⊙ k = v ⊙ v ⊙ ⋯ ⊙ v ⏟ k times = v ⊗ v ⊗ ⋯ ⊗ v ⏟ k times = v ⊗ k . {\displaystyle v^{\odot k}=\underbrace {v\odot v\odot \cdots \odot v} _{k{\text{ times}}}=\underbrace {v\otimes v\otimes \cdots \otimes v} _{k{\text{ times}}}=v^{\otimes k}.} Where v is a vector. Again, in some cases the ⊙ is left out: v k = v v ⋯ v ⏟ k times = v ⊙ v ⊙ ⋯ ⊙ v ⏟ k times . {\displaystyle v^{k}=\underbrace {v\,v\,\cdots \,v} _{k{\text{ times}}}=\underbrace {v\odot v\odot \cdots \odot v} _{k{\text{ times}}}.} == Decomposition == In analogy with the theory of symmetric matrices, a (real) symmetric tensor of order 2 can be "diagonalized". More precisely, for any tensor T ∈ Sym2(V), there is an integer r, non-zero unit vectors v1,...,vr ∈ V and weights λ1,...,λr such that T = ∑ i = 1 r λ i v i ⊗ v i . 
{\displaystyle T=\sum _{i=1}^{r}\lambda _{i}\,v_{i}\otimes v_{i}.} The minimum number r for which such a decomposition is possible is the (symmetric) rank of T. The vectors appearing in this minimal expression are the principal axes of the tensor, and generally have an important physical meaning. For example, the principal axes of the inertia tensor define the Poinsot's ellipsoid representing the moment of inertia. Also see Sylvester's law of inertia. For symmetric tensors of arbitrary order k, decompositions T = ∑ i = 1 r λ i v i ⊗ k {\displaystyle T=\sum _{i=1}^{r}\lambda _{i}\,v_{i}^{\otimes k}} are also possible. The minimum number r for which such a decomposition is possible is the symmetric rank of T. This minimal decomposition is called a Waring decomposition; it is a symmetric form of the tensor rank decomposition. For second-order tensors this corresponds to the rank of the matrix representing the tensor in any basis, and it is well known that the maximum rank is equal to the dimension of the underlying vector space. However, for higher orders this need not hold: the rank can be higher than the number of dimensions in the underlying vector space. Moreover, the rank and symmetric rank of a symmetric tensor may differ. == See also == Antisymmetric tensor Ricci calculus Schur polynomial Symmetric polynomial Transpose Young symmetrizer == Notes == == References == Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9. Bourbaki, Nicolas (1990), Elements of mathematics, Algebra II, Springer-Verlag, ISBN 3-540-19375-8. Greub, Werner Hildbert (1967), Multilinear algebra, Die Grundlehren der Mathematischen Wissenschaften, Band 136, Springer-Verlag New York, Inc., New York, MR 0224623. Sternberg, Shlomo (1983), Lectures on differential geometry, New York: Chelsea, ISBN 978-0-8284-0316-0. == External links == Cesar O. Aguilar, The Dimension of Symmetric k-tensors
Wikipedia/Symmetric_tensor
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible. Numerical linear algebra aims to solve problems of continuous mathematics using finite precision computers, so its applications to the natural and social sciences are as vast as the applications of continuous mathematics. It is often a fundamental part of engineering and computational science problems, such as image and signal processing, telecommunication, computational finance, materials science simulations, structural biology, data mining, bioinformatics, and fluid dynamics. Matrix methods are particularly used in finite difference methods, finite element methods, and the modeling of differential equations. Noting the broad applications of numerical linear algebra, Lloyd N. Trefethen and David Bau, III argue that it is "as fundamental to the mathematical sciences as calculus and differential equations",: x  even though it is a comparatively small field. 
Because many properties of matrices and vectors also apply to functions and operators, numerical linear algebra can also be viewed as a type of functional analysis which has a particular emphasis on practical algorithms.: ix  Common problems in numerical linear algebra include obtaining matrix decompositions like the singular value decomposition, the QR factorization, the LU factorization, or the eigendecomposition, which can then be used to answer common linear algebraic problems like solving linear systems of equations, locating eigenvalues, or least squares optimisation. Numerical linear algebra's central concern with developing algorithms that do not introduce errors when applied to real data on a finite precision computer is often achieved by iterative methods rather than direct ones. == History == Numerical linear algebra was developed by computer pioneers like John von Neumann, Alan Turing, James H. Wilkinson, Alston Scott Householder, George Forsythe, and Heinz Rutishauser, in order to apply the earliest computers to problems in continuous mathematics, such as ballistics problems and the solutions to systems of partial differential equations. The first serious attempt to minimize computer error in the application of algorithms to real data is John von Neumann and Herman Goldstine's work in 1947. The field has grown as technology has increasingly enabled researchers to solve complex problems on extremely large high-precision matrices, and some numerical algorithms have grown in prominence as technologies like parallel computing have made them practical approaches to scientific problems. == Matrix decompositions == === Partitioned matrices === For many problems in applied linear algebra, it is useful to adopt the perspective of a matrix as being a concatenation of column vectors. 
For example, when solving the linear system x = A − 1 b {\displaystyle x=A^{-1}b} , rather than understanding x as the product of A − 1 {\displaystyle A^{-1}} with b, it is helpful to think of x as the vector of coefficients in the linear expansion of b in the basis formed by the columns of A.: 8  Thinking of matrices as a concatenation of columns is also a practical approach for the purposes of matrix algorithms. This is because matrix algorithms frequently contain two nested loops: one over the columns of a matrix A, and another over the rows of A. For example, for matrices A m × n {\displaystyle A^{m\times n}} and vectors x n × 1 {\displaystyle x^{n\times 1}} and y m × 1 {\displaystyle y^{m\times 1}} , we could use the column partitioning perspective to compute y := Ax + y as a succession of column updates, y := y + x j a j {\displaystyle y:=y+x_{j}a_{j}} for j = 1, …, n, where a j {\displaystyle a_{j}} denotes the j-th column of A. === Singular value decomposition === The singular value decomposition of a matrix A m × n {\displaystyle A^{m\times n}} is A = U Σ V ∗ {\displaystyle A=U\Sigma V^{\ast }} where U and V are unitary, and Σ {\displaystyle \Sigma } is diagonal. The diagonal entries of Σ {\displaystyle \Sigma } are called the singular values of A. Because singular values are the square roots of the eigenvalues of A A ∗ {\displaystyle AA^{\ast }} , there is a tight connection between the singular value decomposition and eigenvalue decompositions. This means that most methods for computing the singular value decomposition are similar to eigenvalue methods;: 36  perhaps the most common method involves Householder procedures.: 253  === QR factorization === The QR factorization of a matrix A m × n {\displaystyle A^{m\times n}} is a matrix Q m × m {\displaystyle Q^{m\times m}} and a matrix R m × n {\displaystyle R^{m\times n}} so that A = QR, where Q is orthogonal and R is upper triangular.: 50 : 223  The two main algorithms for computing QR factorizations are the Gram–Schmidt process and the Householder transformation.
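As a sketch of the first of these, classical Gram–Schmidt can be written in a few lines of numpy and checked against the defining property A = QR (this textbook variant is known to be less numerically stable than modified Gram–Schmidt or Householder reflections; the function name and test matrix are illustrative assumptions):

```python
# Sketch: classical Gram-Schmidt computation of the reduced QR factorization.
import numpy as np

def gram_schmidt_qr(A):
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component of column j along q_i
            v -= R[i, j] * Q[:, i]        # remove it
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]             # normalize the remainder
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
Q, R = gram_schmidt_qr(A)
assert np.allclose(Q @ R, A)              # A = QR
assert np.allclose(Q.T @ Q, np.eye(4))    # orthonormal columns
assert np.allclose(R, np.triu(R))         # R upper triangular
```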
The QR factorization is often used to solve linear least-squares problems, and eigenvalue problems (by way of the iterative QR algorithm). === LU factorization === An LU factorization of a matrix A consists of a lower triangular matrix L and an upper triangular matrix U so that A = LU. The matrix U is found by an upper triangularization procedure which involves left-multiplying A by a series of matrices M 1 , … , M n − 1 {\displaystyle M_{1},\ldots ,M_{n-1}} to form the product M n − 1 ⋯ M 1 A = U {\displaystyle M_{n-1}\cdots M_{1}A=U} , so that equivalently L = M 1 − 1 ⋯ M n − 1 − 1 {\displaystyle L=M_{1}^{-1}\cdots M_{n-1}^{-1}} .: 147 : 96  === Eigenvalue decomposition === The eigenvalue decomposition of a matrix A m × m {\displaystyle A^{m\times m}} is A = X Λ X − 1 {\displaystyle A=X\Lambda X^{-1}} , where the columns of X are the eigenvectors of A, and Λ {\displaystyle \Lambda } is a diagonal matrix the diagonal entries of which are the corresponding eigenvalues of A.: 33  There is no direct method for finding the eigenvalue decomposition of an arbitrary matrix. Because it is not possible to write a program that finds the exact roots of an arbitrary polynomial in finite time, any general eigenvalue solver must necessarily be iterative.: 192  == Algorithms == === Gaussian elimination === From the numerical linear algebra perspective, Gaussian elimination is a procedure for factoring a matrix A into its LU factorization, which Gaussian elimination accomplishes by left-multiplying A by a succession of matrices L m − 1 ⋯ L 2 L 1 A = U {\displaystyle L_{m-1}\cdots L_{2}L_{1}A=U} until U is upper triangular and L is lower triangular, where L ≡ L 1 − 1 L 2 − 1 ⋯ L m − 1 − 1 {\displaystyle L\equiv L_{1}^{-1}L_{2}^{-1}\cdots L_{m-1}^{-1}} .: 148  Naive programs for Gaussian elimination are notoriously highly unstable, and produce huge errors when applied to matrices with many significant digits. 
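The elimination procedure just described can be sketched without pivoting as follows (numpy assumed; as the text notes, this naive variant is unstable in general and is shown only for illustration, with an arbitrary test matrix):

```python
# Sketch: textbook LU factorization without pivoting. Each pass zeroes
# column k below the diagonal (left-multiplication by M_k); the stored
# multipliers are the entries of L = M_1^{-1} ... M_{m-1}^{-1}.
import numpy as np

def lu_nopivot(A):
    A = A.astype(float).copy()
    m = A.shape[0]
    L = np.eye(m)
    for k in range(m - 1):
        L[k+1:, k] = A[k+1:, k] / A[k, k]                # multipliers
        A[k+1:, k:] -= np.outer(L[k+1:, k], A[k, k:])    # eliminate below pivot
    return L, np.triu(A)

M = np.array([[4., 3., 2.],
              [8., 7., 9.],
              [4., 6., 5.]])
L, U = lu_nopivot(M)
assert np.allclose(L @ U, M)
assert np.allclose(L, np.tril(L)) and np.allclose(U, np.triu(U))
```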
The simplest solution is to introduce pivoting, which produces a modified Gaussian elimination algorithm that is stable.: 151  === Solutions of linear systems === Numerical linear algebra characteristically approaches matrices as a concatenation of column vectors. In order to solve the linear system x = A − 1 b {\displaystyle x=A^{-1}b} , the traditional algebraic approach is to understand x as the product of A − 1 {\displaystyle A^{-1}} with b. Numerical linear algebra instead interprets x as the vector of coefficients of the linear expansion of b in the basis formed by the columns of A.: 8  Many different decompositions can be used to solve the linear problem, depending on the characteristics of the matrix A and the vectors x and b, which may make one factorization much easier to obtain than others. If A = QR is a QR factorization of A, then equivalently R x = Q ∗ b {\displaystyle Rx=Q^{\ast }b} . This is as easy to compute as a matrix factorization.: 54  If A = X Λ X − 1 {\displaystyle A=X\Lambda X^{-1}} is an eigendecomposition of A and we seek to solve Ax = b, then writing b ′ = X − 1 b {\displaystyle b'=X^{-1}b} and x ′ = X − 1 x {\displaystyle x'=X^{-1}x} reduces the system to b ′ = Λ x ′ {\displaystyle b'=\Lambda x'} .: 33  This is closely related to the solution of the linear system using the singular value decomposition, because the singular values of a matrix are the square roots of the eigenvalues of the Gram matrix A ∗ A {\displaystyle A^{*}A} (for a normal matrix, these are the absolute values of its eigenvalues). If A = LU is an LU factorization of A, then Ax = b can be solved using the triangular systems Ly = b and Ux = y.: 147 : 99  === Least squares optimisation === Matrix decompositions suggest a number of ways to solve the linear system r = b − Ax where we seek to minimize ‖r‖, as in the regression problem.
The QR approach solves this problem by computing the reduced QR factorization of A and rearranging to obtain R ^ x = Q ^ ∗ b {\displaystyle {\widehat {R}}x={\widehat {Q}}^{\ast }b} . This upper triangular system can then be solved for x. The SVD also suggests an algorithm for obtaining linear least squares. By computing the reduced SVD A = U ^ Σ ^ V ∗ {\displaystyle A={\widehat {U}}{\widehat {\Sigma }}V^{\ast }} and then computing the vector U ^ ∗ b {\displaystyle {\widehat {U}}^{\ast }b} , we reduce the least squares problem to a simple diagonal system.: 84  The fact that least squares solutions can be produced by the QR and SVD factorizations means that, in addition to the classical normal equations method for solving least squares problems, these problems can also be solved by methods that include the Gram–Schmidt algorithm and Householder methods. == Conditioning and stability == Suppose that a problem is a function f : X → Y {\displaystyle f:X\to Y} , where X is a normed vector space of data and Y is a normed vector space of solutions. For some data point x ∈ X {\displaystyle x\in X} , the problem is said to be ill-conditioned if a small perturbation in x produces a large change in the value of f(x). This is quantified by the condition number, which represents how well-conditioned a problem is, defined as κ ^ = lim δ → 0 sup ‖ δ x ‖ ≤ δ ‖ δ f ‖ ‖ δ x ‖ . {\displaystyle {\widehat {\kappa }}=\lim _{\delta \to 0}\sup _{\|\delta x\|\leq \delta }{\frac {\|\delta f\|}{\|\delta x\|}}.} Instability is the tendency of computer algorithms, which depend on floating-point arithmetic, to produce results that differ dramatically from the exact mathematical solution to a problem. When a matrix contains real data with many significant digits, many algorithms for solving problems like linear systems of equations or least squares optimisation may produce highly inaccurate results.
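The Hilbert matrix is a standard illustration of ill-conditioning: its condition number grows so fast with n that solving Ax = b in double precision quickly loses essentially all accuracy. A quick sketch (illustrative, not from the cited sources):

```python
import numpy as np

def hilbert(n):
    """The n x n Hilbert matrix H[i, j] = 1 / (i + j + 1), a classic
    example of an extremely ill-conditioned matrix."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

for n in (4, 8, 12):
    print(n, np.linalg.cond(hilbert(n)))
# The 2-norm condition number grows roughly exponentially with n;
# around n = 12 it approaches 1/eps, so almost no correct digits remain.
```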
Creating stable algorithms for ill-conditioned problems is a central concern in numerical linear algebra. One example is that the stability of Householder triangularization makes it a particularly robust solution method for linear systems, whereas the instability of the normal equations method for solving least squares problems is a reason to favour matrix decomposition methods such as the singular value decomposition. Some matrix decomposition methods may be unstable, but have straightforward modifications that make them stable; one example is the unstable Gram–Schmidt, which can easily be changed to produce the stable modified Gram–Schmidt.: 140  Another classical example is that Gaussian elimination is unstable, but becomes stable with the introduction of pivoting. == Iterative methods == There are two reasons that iterative algorithms are an important part of numerical linear algebra. First, many important numerical problems have no direct solution; in order to find the eigenvalues and eigenvectors of an arbitrary matrix, we can only adopt an iterative approach. Second, noniterative algorithms for an arbitrary m × m {\displaystyle m\times m} matrix require O ( m 3 ) {\displaystyle O(m^{3})} time, which is a surprisingly high floor given that matrices contain only m 2 {\displaystyle m^{2}} numbers. Iterative approaches can take advantage of several features of some matrices to reduce this time. For example, when a matrix is sparse, an iterative algorithm can skip many of the steps that a direct approach would necessarily follow, even when those steps would be redundant for a highly structured matrix.
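A minimal example of such an iterative method is the conjugate gradient iteration for symmetric positive definite systems. This is a standard textbook sketch (not code from the cited works); note that the matrix enters only through matrix-vector products, which is what lets sparsity be exploited:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for symmetric positive definite A.

    Each iteration needs only one product A @ p, so a sparse or
    structured A can be used without forming any factorization.
    """
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
# A @ x is approximately b
```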
The core of many iterative methods in numerical linear algebra is the projection of a matrix onto a lower dimensional Krylov subspace, which allows features of a high-dimensional matrix to be approximated by iteratively computing the equivalent features of similar matrices starting in a low dimension space and moving to successively higher dimensions. When A is symmetric and we wish to solve the linear problem Ax = b, the classical iterative approach is the conjugate gradient method. If A is not symmetric, then examples of iterative solutions to the linear problem are the generalized minimal residual method and CGN. If A is symmetric, then to solve the eigenvalue and eigenvector problem we can use the Lanczos algorithm, and if A is non-symmetric, then we can use Arnoldi iteration. == Software == Several programming languages use numerical linear algebra optimisation techniques and are designed to implement numerical linear algebra algorithms. These languages include MATLAB, Analytica, Maple, and Mathematica. Other programming languages which are not explicitly designed for numerical linear algebra have libraries that provide numerical linear algebra routines and optimisation; C and Fortran have packages like Basic Linear Algebra Subprograms and LAPACK, Python has the library NumPy, and Perl has the Perl Data Language. Many numerical linear algebra commands in R rely on these more fundamental libraries like LAPACK. More libraries can be found on the List of numerical libraries. == References == == Further reading == Dongarra, Jack; Hammarling, Sven (1990). "Evolution of Numerical Software for Dense Linear Algebra". In Cox, M. G.; Hammarling, S. (eds.). Reliable Numerical Computation. Oxford: Clarendon Press. pp. 297–327. ISBN 0-19-853564-3. Claude Brezinski, Gérard Meurant and Michela Redivo-Zaglia (2022): A Journey through the History of Numerical Linear Algebra, SIAM, ISBN 978-1-61197-722-6. Demmel, J. W. (1997): Applied Numerical Linear Algebra, SIAM. Ciarlet, P. 
G., Miara, B., & Thomas, J. M. (1989): Introduction to Numerical Linear Algebra and Optimization, Cambridge Univ. Press. Trefethen, Lloyd; Bau III, David (1997): Numerical Linear Algebra (1st ed.), SIAM, ISBN 978-0-89871-361-9. Golub, Gene H.; Van Loan, Charles F. (1996): Matrix Computations (3rd ed.), The Johns Hopkins University Press. ISBN 0-8018-5413-X. G. W. Stewart (1998): Matrix Algorithms Vol I: Basic Decompositions, SIAM, ISBN 0-89871-414-1. G. W. Stewart (2001): Matrix Algorithms Vol II: Eigensystems, SIAM, ISBN 0-89871-503-2. Varga, Richard S. (2000): Matrix Iterative Analysis, Springer. Yousef Saad (2003) : Iterative Methods for Sparse Linear Systems, 2nd Ed., SIAM, ISBN 978-0-89871534-7. Raf Vandebril, Marc Van Barel, and Nicola Mastronardi (2008): Matrix Computations and Semiseparable Matrices, Volume 1: Linear systems, Johns Hopkins Univ. Press, ISBN 978-0-8018-8714-7. Raf Vandebril, Marc Van Barel, and Nicola Mastronardi (2008): Matrix Computations and Semiseparable Matrices, Volume 2: Eigenvalue and Singular Value Methods, Johns Hopkins Univ. Press, ISBN 978-0-8018-9052-9. Higham, N. J. (2002): Accuracy and Stability of Numerical Algorithms, SIAM. Higham, N. J. (2008): Functions of Matrices: Theory and Computation, SIAM. David S. Watkins (2008): The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods, SIAM. Liesen, J., and Strakos, Z. (2012): Krylov Subspace Methods: Principles and Analysis, Oxford Univ. Press. == External links == Freely available software for numerical algebra on the web, composed by Jack Dongarra and Hatem Ltaief, University of Tennessee NAG Library of numerical linear algebra routines Numerical Linear Algebra Group on Twitter (Research group in the United Kingdom) siagla on Twitter (Activity group about numerical linear algebra in the Society for Industrial and Applied Mathematics) The GAMM Activity Group on Applied and Numerical Linear Algebra
Wikipedia/Numerical_linear_algebra
The following tables provide a comparison of linear algebra software libraries, either specialized or general purpose libraries with significant linear algebra coverage. == Dense linear algebra == === General information === === Matrix types and operations === Matrix types (special types like bidiagonal/tridiagonal are not listed): Real – general (nonsymmetric) real Complex – general (nonsymmetric) complex SPD – symmetric positive definite (real) HPD – Hermitian positive definite (complex) SY – symmetric (real) HE – Hermitian (complex) BND – band Operations: TF – triangular factorizations (LU, Cholesky) OF – orthogonal factorizations (QR, QL, generalized factorizations) EVP – eigenvalue problems SVD – singular value decomposition GEVP – generalized EVP GSVD – generalized SVD == References == == External links == scipy on GitHub armadillo on GitHub mathnet-numerics on GitHub
Wikipedia/Comparison_of_linear_algebra_libraries
In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix generated from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition. == Definition and illustration == === First minors === If A is a square matrix, then the minor of the entry in the i-th row and j-th column (also called the (i, j) minor, or a first minor) is the determinant of the submatrix formed by deleting the i-th row and j-th column. This number is often denoted Mi, j. The (i, j) cofactor is obtained by multiplying the minor by (−1)i + j. To illustrate these definitions, consider the following 3 × 3 matrix, [ 1 4 7 3 0 5 − 1 9 11 ] {\displaystyle {\begin{bmatrix}1&4&7\\3&0&5\\-1&9&11\\\end{bmatrix}}} To compute the minor M2,3 and the cofactor C2,3, we find the determinant of the above matrix with row 2 and column 3 removed. M 2 , 3 = det [ 1 4 ◻ ◻ ◻ ◻ − 1 9 ◻ ] = det [ 1 4 − 1 9 ] = 9 − ( − 4 ) = 13 {\displaystyle M_{2,3}=\det {\begin{bmatrix}1&4&\Box \\\Box &\Box &\Box \\-1&9&\Box \\\end{bmatrix}}=\det {\begin{bmatrix}1&4\\-1&9\\\end{bmatrix}}=9-(-4)=13} So the cofactor of the (2,3) entry is C 2 , 3 = ( − 1 ) 2 + 3 ( M 2 , 3 ) = − 13. {\displaystyle C_{2,3}=(-1)^{2+3}(M_{2,3})=-13.} === General definition === Let A be an m × n matrix and k an integer with 0 < k ≤ m, and k ≤ n. A k × k minor of A, also called minor determinant of order k of A or, if m = n, the (n − k)th minor determinant of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order") is the determinant of a k × k matrix obtained from A by deleting m − k rows and n − k columns. 
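Returning to the 3 × 3 example above, the first-minor and cofactor computations can be sketched directly in code (an illustrative NumPy sketch):

```python
import numpy as np

def minor(A, i, j):
    """First minor M_{i,j}: determinant of A with row i and column j
    removed (indices here are 0-based)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """Cofactor C_{i,j} = (-1)^{i+j} M_{i,j}."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0]])
# The (2, 3) entry in the article's 1-based notation is (1, 2) here:
m23 = minor(A, 1, 2)     # approximately 13
c23 = cofactor(A, 1, 2)  # approximately -13
```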
Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m − k rows and n − k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of ( m k ) ⋅ ( n k ) {\textstyle {m \choose k}\cdot {n \choose k}} minors of size k × k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor is just the determinant of the matrix. Let I = 1 ≤ i 1 < i 2 < ⋯ < i k ≤ m , J = 1 ≤ j 1 < j 2 < ⋯ < j k ≤ n , {\displaystyle {\begin{aligned}I&=1\leq i_{1}<i_{2}<\cdots <i_{k}\leq m,\\[2pt]J&=1\leq j_{1}<j_{2}<\cdots <j_{k}\leq n,\end{aligned}}} be ordered sequences (in natural order, as it is always assumed when talking about minors unless otherwise stated) of indexes. The minor det ( ( A i p , j q ) p , q = 1 , … , k ) {\textstyle \det {\bigl (}(\mathbf {A} _{i_{p},j_{q}})_{p,q=1,\ldots ,k}{\bigr )}} corresponding to these choices of indexes is denoted det I , J A {\displaystyle \det _{I,J}A} or det A I , J {\displaystyle \det \mathbf {A} _{I,J}} or [ A ] I , J {\displaystyle [\mathbf {A} ]_{I,J}} or M I , J {\displaystyle M_{I,J}} or M i 1 , i 2 , … , i k , j 1 , j 2 , … , j k {\displaystyle M_{i_{1},i_{2},\ldots ,i_{k},j_{1},j_{2},\ldots ,j_{k}}} or M ( i ) , ( j ) {\displaystyle M_{(i),(j)}} (where the (i) denotes the sequence of indexes I, etc.), depending on the source. 
Two notational conventions are in use in the literature: by the minor associated to ordered sequences of indexes I and J, some authors mean the determinant of the matrix formed as above, by taking the elements of the original matrix from the rows whose indexes are in I and the columns whose indexes are in J, whereas other authors mean the determinant of the matrix formed from the original matrix by deleting the rows in I and the columns in J; which convention is used should always be checked. In this article, we use the inclusive definition of choosing the elements from rows of I and columns of J. The exceptional case is the case of the first minor or the (i, j)-minor described above; in that case, the exclusive meaning M i , j = det ( ( A p , q ) p ≠ i , q ≠ j ) {\textstyle M_{i,j}=\det {\bigl (}\left(\mathbf {A} _{p,q}\right)_{p\neq i,q\neq j}{\bigr )}} is standard everywhere in the literature and is used in this article also. === Complement === The complement Bijk..., pqr... of a minor Mijk..., pqr... of a square matrix, A, is formed by the determinant of the matrix A from which all the rows (ijk...) and columns (pqr...) associated with Mijk..., pqr... have been removed. The complement of the first minor of an element aij is merely that element. == Applications of minors and cofactors == === Cofactor expansion of the determinant === The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an n × n matrix A = (aij), the determinant of A, denoted det(A), can be written as the sum of the cofactors of any row or column of the matrix multiplied by the entries that generated them.
In other words, defining C i j = ( − 1 ) i + j M i j {\displaystyle C_{ij}=(-1)^{i+j}M_{ij}} then the cofactor expansion along the j-th column gives: det ( A ) = a 1 j C 1 j + a 2 j C 2 j + a 3 j C 3 j + ⋯ + a n j C n j = ∑ i = 1 n a i j C i j = ∑ i = 1 n a i j ( − 1 ) i + j M i j {\displaystyle {\begin{aligned}\det(\mathbf {A} )&=a_{1j}C_{1j}+a_{2j}C_{2j}+a_{3j}C_{3j}+\cdots +a_{nj}C_{nj}\\[2pt]&=\sum _{i=1}^{n}a_{ij}C_{ij}\\[2pt]&=\sum _{i=1}^{n}a_{ij}(-1)^{i+j}M_{ij}\end{aligned}}} The cofactor expansion along the i-th row gives: det ( A ) = a i 1 C i 1 + a i 2 C i 2 + a i 3 C i 3 + ⋯ + a i n C i n = ∑ j = 1 n a i j C i j = ∑ j = 1 n a i j ( − 1 ) i + j M i j {\displaystyle {\begin{aligned}\det(\mathbf {A} )&=a_{i1}C_{i1}+a_{i2}C_{i2}+a_{i3}C_{i3}+\cdots +a_{in}C_{in}\\[2pt]&=\sum _{j=1}^{n}a_{ij}C_{ij}\\[2pt]&=\sum _{j=1}^{n}a_{ij}(-1)^{i+j}M_{ij}\end{aligned}}} === Inverse of a matrix === One can write down the inverse of an invertible matrix by computing its cofactors by using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors or, sometimes, comatrix): C = [ C 11 C 12 ⋯ C 1 n C 21 C 22 ⋯ C 2 n ⋮ ⋮ ⋱ ⋮ C n 1 C n 2 ⋯ C n n ] {\displaystyle \mathbf {C} ={\begin{bmatrix}C_{11}&C_{12}&\cdots &C_{1n}\\C_{21}&C_{22}&\cdots &C_{2n}\\\vdots &\vdots &\ddots &\vdots \\C_{n1}&C_{n2}&\cdots &C_{nn}\end{bmatrix}}} Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A: A − 1 = 1 det ⁡ ( A ) C T . {\displaystyle \mathbf {A} ^{-1}={\frac {1}{\operatorname {det} (\mathbf {A} )}}\mathbf {C} ^{\mathsf {T}}.} The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A. 
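The cofactor-matrix construction of the inverse can be mirrored directly in code. This illustrative NumPy sketch follows the formula literally; for matrices of any size it is far more expensive than standard solvers and is shown only to make the formula concrete:

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors C with C[i, j] = (-1)^{i+j} M_{i,j}."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

def inverse_via_adjugate(A):
    """A^{-1} = adj(A) / det(A), with adj(A) the transposed cofactor matrix."""
    return cofactor_matrix(A).T / np.linalg.det(A)

A = np.array([[1.0, 4.0, 7.0],
              [3.0, 0.0, 5.0],
              [-1.0, 9.0, 11.0]])
Ainv = inverse_via_adjugate(A)
# Ainv @ A is the 3 x 3 identity (up to rounding)
```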
The above formula can be generalized as follows: Let I = 1 ≤ i 1 < i 2 < … < i k ≤ n , J = 1 ≤ j 1 < j 2 < … < j k ≤ n , {\displaystyle {\begin{aligned}I&=1\leq i_{1}<i_{2}<\ldots <i_{k}\leq n,\\[2pt]J&=1\leq j_{1}<j_{2}<\ldots <j_{k}\leq n,\end{aligned}}} be ordered sequences (in natural order) of indexes (here A is an n × n matrix). Then [ A − 1 ] I , J = ± [ A ] J ′ , I ′ det A , {\displaystyle [\mathbf {A} ^{-1}]_{I,J}=\pm {\frac {[\mathbf {A} ]_{J',I'}}{\det \mathbf {A} }},} where I′, J′ denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to I, J, so that every index 1, ..., n appears exactly once in either I or I', but not in both (similarly for the J and J') and [A]I, J denotes the determinant of the submatrix of A formed by choosing the rows of the index set I and columns of index set J. Also, [ A ] I , J = det ( ( A i p , j q ) p , q = 1 , … , k ) . {\displaystyle [\mathbf {A} ]_{I,J}=\det {\bigl (}(A_{i_{p},j_{q}})_{p,q=1,\ldots ,k}{\bigr )}.} A simple proof can be given using wedge product. Indeed, [ A − 1 ] I , J ( e 1 ∧ … ∧ e n ) = ± ( A − 1 e j 1 ) ∧ … ∧ ( A − 1 e j k ) ∧ e i 1 ′ ∧ … ∧ e i n − k ′ , {\displaystyle {\bigl [}\mathbf {A} ^{-1}{\bigr ]}_{I,J}(e_{1}\wedge \ldots \wedge e_{n})=\pm (\mathbf {A} ^{-1}e_{j_{1}})\wedge \ldots \wedge (\mathbf {A} ^{-1}e_{j_{k}})\wedge e_{i'_{1}}\wedge \ldots \wedge e_{i'_{n-k}},} where e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} are the basis vectors. Acting by A on both sides, one gets [ A − 1 ] I , J det A ( e 1 ∧ … ∧ e n ) = ± ( e j 1 ) ∧ … ∧ ( e j k ) ∧ ( A e i 1 ′ ) ∧ … ∧ ( A e i n − k ′ ) = ± [ A ] J ′ , I ′ ( e 1 ∧ … ∧ e n ) . 
{\displaystyle {\begin{aligned}&\ {\bigl [}\mathbf {A} ^{-1}{\bigr ]}_{I,J}\det \mathbf {A} (e_{1}\wedge \ldots \wedge e_{n})\\[2pt]=&\ \pm (e_{j_{1}})\wedge \ldots \wedge (e_{j_{k}})\wedge (\mathbf {A} e_{i'_{1}})\wedge \ldots \wedge (\mathbf {A} e_{i'_{n-k}})\\[2pt]=&\ \pm [\mathbf {A} ]_{J',I'}(e_{1}\wedge \ldots \wedge e_{n}).\end{aligned}}} The sign can be worked out to be ( − 1 ) ∑ s = 1 k i s − ∑ s = 1 k j s , {\displaystyle (-1)^{\sum _{s=1}^{k}i_{s}-\sum _{s=1}^{k}j_{s}},} so the sign is determined by the sums of elements in I and J. === Other applications === If an m × n matrix with real entries (or entries from any other field) has rank r, then there exists at least one non-zero r × r minor, while all larger minors are zero. We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., n} with k elements, then we write [A]I, J for the k × k minor of A that corresponds to the rows with index in I and the columns with index in J. If I = J, then [A]I, J is called a principal minor. If the matrix that corresponds to a principal minor is a square upper-left submatrix of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k, also known as a leading principal submatrix), then the principal minor is called a leading principal minor (of order k) or corner (principal) minor (of order k). For an n × n square matrix, there are n leading principal minors. A basic minor of a matrix is the determinant of a square submatrix that is of maximal size with nonzero determinant. For Hermitian matrices, the leading principal minors can be used to test for positive definiteness and the principal minors can be used to test for positive semidefiniteness. See Sylvester's criterion for more details.
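Sylvester's criterion mentioned above is easy to state in code: check that every leading principal minor is positive. This is an illustrative sketch; numerically one would prefer attempting a Cholesky factorization instead of computing determinants:

```python
import numpy as np

def positive_definite_by_sylvester(A):
    """Sylvester's criterion: a Hermitian (here real symmetric) matrix is
    positive definite iff all n leading principal minors are positive."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
# Leading principal minors are 2, 3, 4: all positive, so A is positive definite.
```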
Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1, ..., m} with k elements and J is a subset of {1, ..., p} with k elements. Then [ A B ] I , J = ∑ K [ A ] I , K [ B ] K , J {\displaystyle [\mathbf {AB} ]_{I,J}=\sum _{K}[\mathbf {A} ]_{I,K}[\mathbf {B} ]_{K,J}\,} where the sum extends over all subsets K of {1, ..., n} with k elements. This formula is a straightforward extension of the Cauchy–Binet formula. == Multilinear algebra approach == A more systematic, algebraic treatment of minors is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the k-th exterior power map. If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix ( 1 4 3 − 1 2 1 ) {\displaystyle {\begin{pmatrix}1&4\\3&\!\!-1\\2&1\\\end{pmatrix}}} are −13 (from the first two rows), −7 (from the first and last row), and 5 (from the last two rows). Now consider the wedge product ( e 1 + 3 e 2 + 2 e 3 ) ∧ ( 4 e 1 − e 2 + e 3 ) {\displaystyle (\mathbf {e} _{1}+3\mathbf {e} _{2}+2\mathbf {e} _{3})\wedge (4\mathbf {e} _{1}-\mathbf {e} _{2}+\mathbf {e} _{3})} where the two expressions correspond to the two columns of our matrix. 
Using the properties of the wedge product, namely that it is bilinear and alternating, e i ∧ e i = 0 , {\displaystyle \mathbf {e} _{i}\wedge \mathbf {e} _{i}=0,} and antisymmetric, e i ∧ e j = − e j ∧ e i , {\displaystyle \mathbf {e} _{i}\wedge \mathbf {e} _{j}=-\mathbf {e} _{j}\wedge \mathbf {e} _{i},} we can simplify this expression to − 13 e 1 ∧ e 2 − 7 e 1 ∧ e 3 + 5 e 2 ∧ e 3 {\displaystyle -13\mathbf {e} _{1}\wedge \mathbf {e} _{2}-7\mathbf {e} _{1}\wedge \mathbf {e} _{3}+5\mathbf {e} _{2}\wedge \mathbf {e} _{3}} where the coefficients agree with the minors computed earlier. == A remark about different notation == In some books, instead of cofactor the term adjunct is used. Moreover, it is denoted as Aij and defined in the same way as cofactor: A i j = ( − 1 ) i + j M i j {\displaystyle \mathbf {A} _{ij}=(-1)^{i+j}\mathbf {M} _{ij}} Using this notation the inverse matrix is written this way: M − 1 = 1 det ( M ) [ A 11 A 21 ⋯ A n 1 A 12 A 22 ⋯ A n 2 ⋮ ⋮ ⋱ ⋮ A 1 n A 2 n ⋯ A n n ] {\displaystyle \mathbf {M} ^{-1}={\frac {1}{\det(M)}}{\begin{bmatrix}A_{11}&A_{21}&\cdots &A_{n1}\\A_{12}&A_{22}&\cdots &A_{n2}\\\vdots &\vdots &\ddots &\vdots \\A_{1n}&A_{2n}&\cdots &A_{nn}\end{bmatrix}}} Keep in mind that adjunct is not adjugate or adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator. == See also == Submatrix Compound matrix == References == == External links == MIT Linear Algebra Lecture on Cofactors at Google Video, from MIT OpenCourseWare PlanetMath entry of Cofactors Springer Encyclopedia of Mathematics entry for Minor
Wikipedia/Minor_(linear_algebra)
In linear algebra, the quotient of a vector space V {\displaystyle V} by a subspace N {\displaystyle N} is a vector space obtained by "collapsing" N {\displaystyle N} to zero. The space obtained is called a quotient space and is denoted V / N {\displaystyle V/N} (read " V {\displaystyle V} mod N {\displaystyle N} " or " V {\displaystyle V} by N {\displaystyle N} "). == Definition == Formally, the construction is as follows. Let V {\displaystyle V} be a vector space over a field K {\displaystyle \mathbb {K} } , and let N {\displaystyle N} be a subspace of V {\displaystyle V} . We define an equivalence relation ∼ {\displaystyle \sim } on V {\displaystyle V} by stating that x ∼ y {\displaystyle x\sim y} iff x − y ∈ N {\displaystyle x-y\in N} . That is, x {\displaystyle x} is related to y {\displaystyle y} if and only if one can be obtained from the other by adding an element of N {\displaystyle N} . This definition implies that any element of N {\displaystyle N} is related to the zero vector; more precisely, all the vectors in N {\displaystyle N} get mapped into the equivalence class of the zero vector. The equivalence class – or, in this case, the coset – of x {\displaystyle x} is defined as [ x ] := { x + n : n ∈ N } {\displaystyle [x]:=\{x+n:n\in N\}} and is often denoted using the shorthand [ x ] = x + N {\displaystyle [x]=x+N} . The quotient space V / N {\displaystyle V/N} is then defined as V / ∼ {\displaystyle V/_{\sim }} , the set of all equivalence classes induced by ∼ {\displaystyle \sim } on V {\displaystyle V} . Scalar multiplication and addition are defined on the equivalence classes by α [ x ] = [ α x ] {\displaystyle \alpha [x]=[\alpha x]} for all α ∈ K {\displaystyle \alpha \in \mathbb {K} } , and [ x ] + [ y ] = [ x + y ] {\displaystyle [x]+[y]=[x+y]} . It is not hard to check that these operations are well-defined (i.e. do not depend on the choice of representatives). 
These operations turn the quotient space V / N {\displaystyle V/N} into a vector space over K {\displaystyle \mathbb {K} } with N {\displaystyle N} being the zero class, [ 0 ] {\displaystyle [0]} . The mapping that associates to v ∈ V {\displaystyle v\in V} the equivalence class [ v ] {\displaystyle [v]} is known as the quotient map. Alternatively phrased, the quotient space V / N {\displaystyle V/N} is the set of all affine subsets of V {\displaystyle V} which are parallel to N {\displaystyle N} . == Examples == === Lines in Cartesian Plane === Let X = R2 be the standard Cartesian plane, and let Y be a line through the origin in X. Then the quotient space X/Y can be identified with the space of all lines in X which are parallel to Y. That is to say, the elements of the set X/Y are lines in X parallel to Y. Note that the points along any one such line will satisfy the equivalence relation because their difference vectors belong to Y. This gives a way to visualize quotient spaces geometrically. (By re-parameterising these lines, the quotient space can more conventionally be represented as the space of all points along a line through the origin that is not parallel to Y. Similarly, the quotient space for R3 by a line through the origin can again be represented as the set of all co-parallel lines, or alternatively be represented as the vector space consisting of a plane which only intersects the line at the origin.) === Subspaces of Cartesian Space === Another example is the quotient of Rn by the subspace spanned by the first m standard basis vectors. The space Rn consists of all n-tuples of real numbers (x1, ..., xn). The subspace, identified with Rm, consists of all n-tuples such that the last n − m entries are zero: (x1, ..., xm, 0, 0, ..., 0). Two vectors of Rn are in the same equivalence class modulo the subspace if and only if they are identical in the last n − m coordinates. The quotient space Rn/Rm is isomorphic to Rn−m in an obvious manner.
=== Polynomial Vector Space === Let P 3 ( R ) {\displaystyle {\mathcal {P}}_{3}(\mathbb {R} )} be the vector space of all cubic polynomials over the real numbers. Then P 3 ( R ) / ⟨ x 2 ⟩ {\displaystyle {\mathcal {P}}_{3}(\mathbb {R} )/\langle x^{2}\rangle } is a quotient space, where each element is the set corresponding to polynomials that differ by a quadratic term only. For example, one element of the quotient space is { x 3 + a x 2 − 2 x + 3 : a ∈ R } {\displaystyle \{x^{3}+ax^{2}-2x+3:a\in \mathbb {R} \}} , while another element of the quotient space is { a x 2 + 2.7 x : a ∈ R } {\displaystyle \{ax^{2}+2.7x:a\in \mathbb {R} \}} . === General Subspaces === More generally, if V is an (internal) direct sum of subspaces U and W, V = U ⊕ W {\displaystyle V=U\oplus W} then the quotient space V/U is naturally isomorphic to W. === Lebesgue Integrals === An important example of a functional quotient space is an Lp space. == Properties == There is a natural epimorphism from V to the quotient space V/U given by sending x to its equivalence class [x]. The kernel (or nullspace) of this epimorphism is the subspace U. This relationship is neatly summarized by the short exact sequence 0 → U → V → V / U → 0. {\displaystyle 0\to U\to V\to V/U\to 0.\,} If U is a subspace of V, the dimension of V/U is called the codimension of U in V. Since a basis of V may be constructed from a basis A of U and a basis B of V/U by adding a representative of each element of B to A, the dimension of V is the sum of the dimensions of U and V/U. If V is finite-dimensional, it follows that the codimension of U in V is the difference between the dimensions of V and U: c o d i m ( U ) = dim ⁡ ( V / U ) = dim ⁡ ( V ) − dim ⁡ ( U ) . {\displaystyle \mathrm {codim} (U)=\dim(V/U)=\dim(V)-\dim(U).} Let T : V → W be a linear operator. The kernel of T, denoted ker(T), is the set of all x in V such that Tx = 0. The kernel is a subspace of V. 
The first isomorphism theorem for vector spaces says that the quotient space V/ker(T) is isomorphic to the image of V in W. An immediate corollary, for finite-dimensional spaces, is the rank–nullity theorem: the dimension of V is equal to the dimension of the kernel (the nullity of T) plus the dimension of the image (the rank of T). The cokernel of a linear operator T : V → W is defined to be the quotient space W/im(T). == Quotient of a Banach space by a subspace == If X is a Banach space and M is a closed subspace of X, then the quotient X/M is again a Banach space. The quotient space is already endowed with a vector space structure by the construction of the previous section. We define a norm on X/M by ‖ [ x ] ‖ X / M = inf m ∈ M ‖ x − m ‖ X = inf m ∈ M ‖ x + m ‖ X = inf y ∈ [ x ] ‖ y ‖ X . {\displaystyle \|[x]\|_{X/M}=\inf _{m\in M}\|x-m\|_{X}=\inf _{m\in M}\|x+m\|_{X}=\inf _{y\in [x]}\|y\|_{X}.} === Examples === Let C[0,1] denote the Banach space of continuous real-valued functions on the interval [0,1] with the sup norm. Denote the subspace of all functions f ∈ C[0,1] with f(0) = 0 by M. Then the equivalence class of some function g is determined by its value at 0, and the quotient space C[0,1]/M is isomorphic to R. If X is a Hilbert space, then the quotient space X/M is isomorphic to the orthogonal complement of M. === Generalization to locally convex spaces === The quotient of a locally convex space by a closed subspace is again locally convex. Indeed, suppose that X is locally convex so that the topology on X is generated by a family of seminorms {pα | α ∈ A} where A is an index set. Let M be a closed subspace, and define seminorms qα on X/M by q α ( [ x ] ) = inf v ∈ [ x ] p α ( v ) . {\displaystyle q_{\alpha }([x])=\inf _{v\in [x]}p_{\alpha }(v).} Then X/M is a locally convex space, and the topology on it is the quotient topology. If, furthermore, X is metrizable, then so is X/M. If X is a Fréchet space, then so is X/M. 
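The finite-dimensional rank–nullity relation mentioned earlier has a direct numerical counterpart: for a matrix representing the map, the dimension of the domain splits into rank plus nullity. A quick illustrative check in NumPy:

```python
import numpy as np

# T maps R^4 -> R^3; its third row is the sum of the first two,
# so the image has dimension 2 and the kernel has dimension 4 - 2 = 2.
T = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])
rank = np.linalg.matrix_rank(T)   # dim im(T)
nullity = T.shape[1] - rank       # dim ker(T) = dim V - rank
# rank == 2 and nullity == 2, matching dim V = 4
```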
== See also == Quotient group Quotient module Quotient set Quotient space (topology) == References == == Sources == Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0. Dieudonné, Jean (1976), Treatise on Analysis, vol. 2, Academic Press, ISBN 978-0122155024 Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-90093-4. Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9. Roman, Steven (2005). Advanced Linear Algebra. Graduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-24766-1.
Wikipedia/Quotient_space_(linear_algebra)
In mathematics, an argument of a function is a value provided to obtain the function's result. It is also called an independent variable. For example, the binary function f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)=x^{2}+y^{2}} has two arguments, x {\displaystyle x} and y {\displaystyle y} , in an ordered pair ( x , y ) {\displaystyle (x,y)} . The hypergeometric function is an example of a four-argument function. The number of arguments that a function takes is called the arity of the function. A function that takes a single argument as input, such as f ( x ) = x 2 {\displaystyle f(x)=x^{2}} , is called a unary function. A function of two or more variables is considered to have a domain consisting of ordered pairs or tuples of argument values. The argument of a circular function is an angle. The argument of a hyperbolic function is a hyperbolic angle. A mathematical function has one or more arguments in the form of independent variables designated in the definition, which can also contain parameters. The independent variables are mentioned in the list of arguments that the function takes, whereas the parameters are not. For example, in the logarithmic function f ( x ) = log b ⁡ ( x ) , {\displaystyle f(x)=\log _{b}(x),} the base b {\displaystyle b} is considered a parameter. Sometimes, subscripts can be used to denote arguments. For example, we can use subscripts to denote the arguments with respect to which partial derivatives are taken. The use of the term "argument" in this sense developed from astronomy, which historically used tables to determine the spatial positions of planets from their positions in the sky (ephemerides). These tables were organized according to measured angles called arguments, literally "that which elucidates something else." 
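As a programming illustration (the helper names here are our own), arity corresponds to the number of parameters in a function's signature, and a parameter such as the base b of a logarithm can be fixed to produce a unary function of the remaining argument:

```python
import inspect
import math

# f(x, y) = x^2 + y^2 is a binary function: its arity is 2.
def f(x, y):
    return x**2 + y**2

arity = len(inspect.signature(f).parameters)

# In log_b(x) the base b is treated as a parameter rather than an argument:
# fixing b yields a unary function of the single argument x.
def make_log(b):
    return lambda x: math.log(x, b)

log2 = make_log(2)
print(arity, f(3, 4), round(log2(8), 12))   # 2 25 3.0
```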
== See also == Domain of a function – Mathematical concept Function prototype – Declaration of a function's name and type signature but not body Parameter (computer programming) – Representation of an argument in a function definition Propositional function Type signature – Defines the inputs and outputs for a function, subroutine or method Value (mathematics) – Notion in mathematics == References == == External links == Weisstein, Eric W. "Argument". MathWorld. Argument at PlanetMath.
Wikipedia/Argument_of_a_function
In tensor analysis, a mixed tensor is a tensor which is neither strictly covariant nor strictly contravariant; at least one of the indices of a mixed tensor will be a subscript (covariant) and at least one of the indices will be a superscript (contravariant). A mixed tensor of type or valence ( M N ) {\textstyle {\binom {M}{N}}} , also written "type (M, N)", with both M > 0 and N > 0, is a tensor which has M contravariant indices and N covariant indices. Such a tensor can be defined as a linear function which maps an (M + N)-tuple of M one-forms and N vectors to a scalar. == Changing the tensor type == Consider the following octet of related tensors: T α β γ , T α β γ , T α β γ , T α β γ , T α β γ , T α β γ , T α β γ , T α β γ . {\displaystyle T_{\alpha \beta \gamma },\ T_{\alpha \beta }{}^{\gamma },\ T_{\alpha }{}^{\beta }{}_{\gamma },\ T_{\alpha }{}^{\beta \gamma },\ T^{\alpha }{}_{\beta \gamma },\ T^{\alpha }{}_{\beta }{}^{\gamma },\ T^{\alpha \beta }{}_{\gamma },\ T^{\alpha \beta \gamma }.} The first one is covariant, the last one contravariant, and the remaining ones mixed. Notationally, these tensors differ from each other by the covariance/contravariance of their indices. A given contravariant index of a tensor can be lowered using the metric tensor gμν, and a given covariant index can be raised using the inverse metric tensor gμν. Thus, gμν could be called the index lowering operator and gμν the index raising operator. Generally, the covariant metric tensor, contracted with a tensor of type (M, N), yields a tensor of type (M − 1, N + 1), whereas its contravariant inverse, contracted with a tensor of type (M, N), yields a tensor of type (M + 1, N − 1). 
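The index-lowering and index-raising operations can be sketched numerically with einsum contractions. The metric below is an arbitrary symmetric invertible example of our own choosing, not from the text:

```python
import numpy as np

# Lower an index with g_{mu nu}, then raise it back with the inverse metric
# g^{mu nu}; the two operations compose to the identity because
# g^{mu lambda} g_{lambda nu} = delta^mu_nu.
g = np.eye(3) + 0.1 * np.ones((3, 3))          # covariant metric g_{mu nu}
g_inv = np.linalg.inv(g)                       # contravariant metric g^{mu nu}

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))                # components T^{alpha}_{beta}

T_low = np.einsum('ga,ab->gb', g, T)           # T_{gamma beta} = g_{gamma alpha} T^{alpha}_{beta}
T_back = np.einsum('dg,gb->db', g_inv, T_low)  # raise the index again

print(np.allclose(g_inv @ g, np.eye(3)),       # mixed metric = Kronecker delta
      np.allclose(T_back, T))                  # lowering then raising = identity
```

The first check is the statement that contracting the metric with its inverse yields the Kronecker delta; the second shows that lowering an index and raising it again recovers the original mixed tensor.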
=== Examples === As an example, a mixed tensor of type (1, 2) can be obtained by raising an index of a covariant tensor of type (0, 3), T α β λ = T α β γ g γ λ , {\displaystyle T_{\alpha \beta }{}^{\lambda }=T_{\alpha \beta \gamma }\,g^{\gamma \lambda },} where T α β λ {\displaystyle T_{\alpha \beta }{}^{\lambda }} is the same tensor as T α β γ {\displaystyle T_{\alpha \beta }{}^{\gamma }} , because T α β λ δ λ γ = T α β γ , {\displaystyle T_{\alpha \beta }{}^{\lambda }\,\delta _{\lambda }{}^{\gamma }=T_{\alpha \beta }{}^{\gamma },} with Kronecker δ acting here like an identity matrix. Likewise, T α λ γ = T α β γ g β λ , {\displaystyle T_{\alpha }{}^{\lambda }{}_{\gamma }=T_{\alpha \beta \gamma }\,g^{\beta \lambda },} T α λ ϵ = T α β γ g β λ g γ ϵ , {\displaystyle T_{\alpha }{}^{\lambda \epsilon }=T_{\alpha \beta \gamma }\,g^{\beta \lambda }\,g^{\gamma \epsilon },} T α β γ = g γ λ T α β λ , {\displaystyle T^{\alpha \beta }{}_{\gamma }=g_{\gamma \lambda }\,T^{\alpha \beta \lambda },} T α λ ϵ = g λ β g ϵ γ T α β γ . {\displaystyle T^{\alpha }{}_{\lambda \epsilon }=g_{\lambda \beta }\,g_{\epsilon \gamma }\,T^{\alpha \beta \gamma }.} Raising an index of the metric tensor is equivalent to contracting it with its inverse, yielding the Kronecker delta, g μ λ g λ ν = g μ ν = δ μ ν , {\displaystyle g^{\mu \lambda }\,g_{\lambda \nu }=g^{\mu }{}_{\nu }=\delta ^{\mu }{}_{\nu },} so any mixed version of the metric tensor will be equal to the Kronecker delta, which will also be mixed. == See also == Covariance and contravariance of vectors Einstein notation Ricci calculus Tensor (intrinsic definition) Two-point tensor == References == D.C. Kay (1988). Tensor Calculus. Schaum’s Outlines, McGraw Hill (USA). ISBN 0-07-033484-6. Wheeler, J.A.; Misner, C.; Thorne, K.S. (1973). "§3.5 Working with Tensors". Gravitation. W.H. Freeman & Co. pp. 85–86. ISBN 0-7167-0344-0. R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4. 
== External links == Index Gymnastics, Wolfram Alpha
Wikipedia/Mixed_tensor
In differential geometry, the torsion tensor is a tensor that is associated to any affine connection. The torsion tensor is a bilinear map of two input vectors X , Y {\displaystyle X,Y} , that produces an output vector T ( X , Y ) {\displaystyle T(X,Y)} representing the displacement within a tangent space when the tangent space is developed (or "rolled") along an infinitesimal parallelogram whose sides are X , Y {\displaystyle X,Y} . It is skew symmetric in its inputs, because developing over the parallelogram in the opposite sense produces the opposite displacement, similarly to how a screw moves in opposite ways when it is twisted in two directions. Torsion is particularly useful in the study of the geometry of geodesics. Given a system of parametrized geodesics, one can specify a class of affine connections having those geodesics, but differing by their torsions. There is a unique connection which absorbs the torsion, generalizing the Levi-Civita connection to other, possibly non-metric situations (such as Finsler geometry). The difference between a connection with torsion, and a corresponding connection without torsion is a tensor, called the contorsion tensor. Absorption of torsion also plays a fundamental role in the study of G-structures and Cartan's equivalence method. Torsion is also useful in the study of unparametrized families of geodesics, via the associated projective connection. In relativity theory, such ideas have been implemented in the form of Einstein–Cartan theory. == Definition == Let M be a manifold with an affine connection on the tangent bundle (aka covariant derivative) ∇. The torsion tensor (sometimes called the Cartan (torsion) tensor) of ∇ is the vector-valued 2-form defined on vector fields X and Y by T ( X , Y ) := ∇ X Y − ∇ Y X − [ X , Y ] {\displaystyle T(X,Y):=\nabla _{X}Y-\nabla _{Y}X-[X,Y]} where [X, Y] is the Lie bracket of two vector fields. By the Leibniz rule, T(fX, Y) = T(X, fY) = fT(X, Y) for any smooth function f. 
So T is tensorial, despite being defined in terms of the connection which is a first order differential operator: it gives a 2-form on tangent vectors, while the covariant derivative is only defined for vector fields. === Components of the torsion tensor === The components of the torsion tensor T c a b {\displaystyle T^{c}{}_{ab}} in terms of a local basis (e1, ..., en) of sections of the tangent bundle can be derived by setting X = ei, Y = ej and by introducing the commutator coefficients γkijek := [ei, ej]. The components of the torsion are then T k i j := Γ k i j − Γ k j i − γ k i j , i , j , k = 1 , 2 , … , n . {\displaystyle T^{k}{}_{ij}:=\Gamma ^{k}{}_{ij}-\Gamma ^{k}{}_{ji}-\gamma ^{k}{}_{ij},\quad i,j,k=1,2,\ldots ,n.} Here Γ k i j {\displaystyle {\Gamma ^{k}}_{ij}} are the connection coefficients defining the connection. If the basis is holonomic then the Lie brackets vanish, γ k i j = 0 {\displaystyle \gamma ^{k}{}_{ij}=0} . So T k i j = 2 Γ k [ i j ] {\displaystyle T^{k}{}_{ij}=2\Gamma ^{k}{}_{[ij]}} . In particular (see below), while the geodesic equations determine the symmetric part of the connection, the torsion tensor determines the antisymmetric part. === The torsion form === The torsion form, an alternative characterization of torsion, applies to the frame bundle FM of the manifold M. This principal bundle is equipped with a connection form ω, a gl(n)-valued one-form which maps vertical vectors to the generators of the right action in gl(n) and equivariantly intertwines the right action of GL(n) on the tangent bundle of FM with the adjoint representation on gl(n). The frame bundle also carries a canonical one-form θ, with values in Rn, defined at a frame u ∈ FxM (regarded as a linear function u : Rn → TxM) by θ ( X ) = u − 1 ( π ∗ ( X ) ) {\displaystyle \theta (X)=u^{-1}(\pi _{*}(X))} where π : FM → M is the projection mapping for the principal bundle and π∗ is its push-forward. The torsion form is then Θ = d θ + ω ∧ θ . 
{\displaystyle \Theta =d\theta +\omega \wedge \theta .} Equivalently, Θ = Dθ, where D is the exterior covariant derivative determined by the connection. The torsion form is a (horizontal) tensorial form with values in Rn, meaning that under the right action of g ∈ GL(n) it transforms equivariantly: R g ∗ Θ = g − 1 ⋅ Θ {\displaystyle R_{g}^{*}\Theta =g^{-1}\cdot \Theta } where g − 1 {\displaystyle g^{-1}} acts on the right-hand side by its canonical action on Rn. ==== Torsion form in a frame ==== The torsion form may be expressed in terms of a connection form on the base manifold M, written in a particular frame of the tangent bundle (e1, ..., en). The connection form expresses the exterior covariant derivative of these basic sections: D e i = e j ω j i . {\displaystyle D\mathbf {e} _{i}=\mathbf {e} _{j}{\omega ^{j}}_{i}.} The solder form for the tangent bundle (relative to this frame) is the dual basis θi ∈ T∗M of the ei, so that θi(ej) = δij (the Kronecker delta). Then the torsion 2-form has components Θ k = d θ k + ω k j ∧ θ j = T k i j θ i ∧ θ j . {\displaystyle \Theta ^{k}=d\theta ^{k}+{\omega ^{k}}_{j}\wedge \theta ^{j}={T^{k}}_{ij}\theta ^{i}\wedge \theta ^{j}.} In the rightmost expression, T k i j = θ k ( ∇ e i e j − ∇ e j e i − [ e i , e j ] ) {\displaystyle {T^{k}}_{ij}=\theta ^{k}\left(\nabla _{\mathbf {e} _{i}}\mathbf {e} _{j}-\nabla _{\mathbf {e} _{j}}\mathbf {e} _{i}-\left[\mathbf {e} _{i},\mathbf {e} _{j}\right]\right)} are the frame-components of the torsion tensor, as given in the previous definition. It can be easily shown that Θi transforms tensorially in the sense that if a different frame e ~ i = e j g j i {\displaystyle {\tilde {\mathbf {e} }}_{i}=\mathbf {e} _{j}{g^{j}}_{i}} for some invertible matrix-valued function (gji), then Θ ~ i = ( g − 1 ) i j Θ j . {\displaystyle {\tilde {\Theta }}^{i}={\left(g^{-1}\right)^{i}}_{j}\Theta ^{j}.} In other terms, Θ is a tensor of type (1, 2) (carrying one contravariant and two covariant indices). 
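The coordinate-frame components discussed above can be checked numerically: in a holonomic basis the commutator coefficients vanish, so the torsion components reduce to the antisymmetrized connection coefficients. The coefficients below are randomly chosen for illustration:

```python
import numpy as np

# In a coordinate basis, T^k_{ij} = Gamma^k_{ij} - Gamma^k_{ji}, i.e. twice
# the antisymmetric part of the connection coefficients.
rng = np.random.default_rng(1)
Gamma = rng.standard_normal((4, 4, 4))    # Gamma[k, i, j] = Gamma^k_{ij}

T = Gamma - np.swapaxes(Gamma, 1, 2)      # torsion components T^k_{ij}

# Torsion is antisymmetric in its lower indices, and it vanishes exactly
# when the connection coefficients are symmetric in i and j.
print(np.allclose(T, -np.swapaxes(T, 1, 2)))   # True
```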
Alternatively, the solder form can be characterized in a frame-independent fashion as the TM-valued one-form θ on M corresponding to the identity endomorphism of the tangent bundle under the duality isomorphism End(TM) ≈ TM ⊗ T∗M. Then the torsion 2-form is a section Θ ∈ Hom ( ⋀ 2 T M , T M ) {\displaystyle \Theta \in {\text{Hom}}\left({\textstyle \bigwedge }^{2}{\rm {T}}M,{\rm {T}}M\right)} given by Θ = D θ , {\displaystyle \Theta =D\theta ,} where D is the exterior covariant derivative. (See connection form for further details.) === Irreducible decomposition === The torsion tensor can be decomposed into two irreducible parts: a trace-free part and another part which contains the trace terms. Using the index notation, the trace of T is given by a i = T k i k , {\displaystyle a_{i}=T^{k}{}_{ik},} and the trace-free part is B i j k = T i j k + 1 n − 1 δ i j a k − 1 n − 1 δ i k a j , {\displaystyle B^{i}{}_{jk}=T^{i}{}_{jk}+{\frac {1}{n-1}}\delta ^{i}{}_{j}a_{k}-{\frac {1}{n-1}}\delta ^{i}{}_{k}a_{j},} where δij is the Kronecker delta. Intrinsically, one has T ∈ Hom ⁡ ( ⋀ 2 T M , T M ) . {\displaystyle T\in \operatorname {Hom} \left({\textstyle \bigwedge }^{2}{\rm {T}}M,{\rm {T}}M\right).} The trace of T, tr T, is an element of T∗M defined as follows. For each vector fixed X ∈ TM, T defines an element T(X) of Hom(TM, TM) via T ( X ) : Y ↦ T ( X ∧ Y ) . {\displaystyle T(X):Y\mapsto T(X\wedge Y).} Then (tr T)(X) is defined as the trace of this endomorphism. That is, ( tr T ) ( X ) = def tr ⁡ ( T ( X ) ) . {\displaystyle (\operatorname {tr} \,T)(X){\stackrel {\text{def}}{=}}\operatorname {tr} (T(X)).} The trace-free part of T is then T 0 = T − 1 n − 1 ι ( tr T ) , {\displaystyle T_{0}=T-{\frac {1}{n-1}}\iota (\operatorname {tr} \,T),} where ι denotes the interior product. == Curvature and the Bianchi identities == The curvature tensor of ∇ is a mapping TM × TM → End(TM) defined on vector fields X, Y, and Z by R ( X , Y ) Z = ∇ X ∇ Y Z − ∇ Y ∇ X Z − ∇ [ X , Y ] Z . 
{\displaystyle R(X,Y)Z=\nabla _{X}\nabla _{Y}Z-\nabla _{Y}\nabla _{X}Z-\nabla _{[X,Y]}Z.} For vectors at a point, this definition is independent of how the vectors are extended to vector fields away from the point (thus it defines a tensor, much like the torsion). The Bianchi identities relate the curvature and torsion as follows. Let S {\displaystyle {\mathfrak {S}}} denote the cyclic sum over X, Y, and Z. For instance, S ( R ( X , Y ) Z ) := R ( X , Y ) Z + R ( Y , Z ) X + R ( Z , X ) Y . {\displaystyle {\mathfrak {S}}\left(R\left(X,Y\right)Z\right):=R(X,Y)Z+R(Y,Z)X+R(Z,X)Y.} Then the following identities hold. Bianchi's first identity: S ( R ( X , Y ) Z ) = S ( T ( T ( X , Y ) , Z ) + ( ∇ X T ) ( Y , Z ) ) {\displaystyle {\mathfrak {S}}\left(R\left(X,Y\right)Z\right)={\mathfrak {S}}\left(T\left(T(X,Y),Z\right)+\left(\nabla _{X}T\right)\left(Y,Z\right)\right)} Bianchi's second identity: S ( ( ∇ X R ) ( Y , Z ) + R ( T ( X , Y ) , Z ) ) = 0 {\displaystyle {\mathfrak {S}}\left(\left(\nabla _{X}R\right)\left(Y,Z\right)+R\left(T\left(X,Y\right),Z\right)\right)=0} === The curvature form and Bianchi identities === The curvature form is the gl(n)-valued 2-form Ω = D ω = d ω + ω ∧ ω {\displaystyle \Omega =D\omega =d\omega +\omega \wedge \omega } where, again, D denotes the exterior covariant derivative. In terms of the curvature form and torsion form, the corresponding Bianchi identities are D Θ = Ω ∧ θ {\displaystyle D\Theta =\Omega \wedge \theta } D Ω = 0. {\displaystyle D\Omega =0.} Moreover, one can recover the curvature and torsion tensors from the curvature and torsion forms as follows.
At a point u of FxM, one has R ( X , Y ) Z = u ( 2 Ω ( π − 1 ( X ) , π − 1 ( Y ) ) ) ( u − 1 ( Z ) ) , T ( X , Y ) = u ( 2 Θ ( π − 1 ( X ) , π − 1 ( Y ) ) ) , {\displaystyle {\begin{aligned}R(X,Y)Z&=u\left(2\Omega \left(\pi ^{-1}(X),\pi ^{-1}(Y)\right)\right)\left(u^{-1}(Z)\right),\\T(X,Y)&=u\left(2\Theta \left(\pi ^{-1}(X),\pi ^{-1}(Y)\right)\right),\end{aligned}}} where again u : Rn → TxM is the function specifying the frame in the fibre, and the choice of lift of the vectors via π−1 is irrelevant since the curvature and torsion forms are horizontal (they vanish on the ambiguous vertical vectors). == Characterizations and interpretations == The torsion is a manner of characterizing the amount of slipping or twisting that a plane does when rolling along a surface or higher dimensional affine manifold. For example, consider rolling a plane along a small circle drawn on a sphere. If the plane does not slip or twist, then when the plane is rolled all the way along the circle, it will also trace a circle in the plane. It turns out that the plane will have rotated (despite there being no twist whilst rolling it), an effect due to the curvature of the sphere. But the curve traced out will still be a circle, and so in particular a closed curve that begins and ends at the same point. On the other hand, if the plane were rolled along the sphere but allowed to slip or twist in the process, then the path the circle traces on the plane could be a much more general curve that need not even be closed. The torsion is a way to quantify this additional slipping and twisting while rolling a plane along a curve. Thus the torsion tensor can be intuitively understood by taking a small parallelogram circuit with sides given by vectors v and w in a space, and rolling the tangent space along each of the four sides of the parallelogram, marking the point of contact as it goes.
When the circuit is completed, the marked curve will have been displaced out of the plane of the parallelogram by a vector, denoted T ( v , w ) {\displaystyle T(v,w)} . Thus the torsion tensor is a tensor: a (bilinear) function of two input vectors v and w that produces an output vector T ( v , w ) {\displaystyle T(v,w)} . It is skew symmetric in the arguments v and w, a reflection of the fact that traversing the circuit in the opposite sense undoes the original displacement, in much the same way that twisting a screw in opposite directions displaces the screw in opposite ways. The torsion tensor thus is related to, although distinct from, the torsion of a curve, as it appears in the Frenet–Serret formulas: the torsion of a connection measures a dislocation of a developed curve out of its plane, while the torsion of a curve is also a dislocation out of its osculating plane. In the geometry of surfaces, the geodesic torsion describes how a surface twists about a curve on the surface. The companion notion of curvature measures how moving frames roll along a curve without slipping or twisting. === Example === Consider the (flat) Euclidean space M = R 3 {\displaystyle M=\mathbb {R} ^{3}} . On it, we put a connection that is flat, but with non-zero torsion, defined on the standard Euclidean frame e 1 , e 2 , e 3 {\displaystyle e_{1},e_{2},e_{3}} by the (Euclidean) cross product: ∇ e i e j = e i × e j . {\displaystyle \nabla _{e_{i}}e_{j}=e_{i}\times e_{j}.} Consider now the parallel transport of the vector e 2 {\displaystyle e_{2}} along the e 1 {\displaystyle e_{1}} axis, starting at the origin. The parallel vector field X ( x ) = a ( x ) e 2 + b ( x ) e 3 {\displaystyle X(x)=a(x)e_{2}+b(x)e_{3}} thus satisfies X ( 0 ) = e 2 {\displaystyle X(0)=e_{2}} , and the differential equation 0 = X ˙ = ∇ e 1 X = a ˙ e 2 + b ˙ e 3 + a e 1 × e 2 + b e 1 × e 3 = ( a ˙ − b ) e 2 + ( b ˙ + a ) e 3 . 
{\displaystyle {\begin{aligned}0={\dot {X}}&=\nabla _{e_{1}}X={\dot {a}}e_{2}+{\dot {b}}e_{3}+ae_{1}\times e_{2}+be_{1}\times e_{3}\\&=({\dot {a}}-b)e_{2}+({\dot {b}}+a)e_{3}.\end{aligned}}} Thus a ˙ = b , b ˙ = − a {\displaystyle {\dot {a}}=b,{\dot {b}}=-a} , and the solution is X = cos ⁡ x e 2 − sin ⁡ x e 3 {\displaystyle X=\cos x\,e_{2}-\sin x\,e_{3}} . Now the tip of the vector X {\displaystyle X} , as it is transported along the e 1 {\displaystyle e_{1}} axis, traces out the helix x e 1 + cos ⁡ x e 2 − sin ⁡ x e 3 . {\displaystyle x\,e_{1}+\cos x\,e_{2}-\sin x\,e_{3}.} Thus we see that, in the presence of torsion, parallel transport tends to twist a frame around the direction of motion, analogously to the role played by torsion in the classical differential geometry of curves. === Development === One interpretation of the torsion involves the development of a curve. Suppose that a piecewise smooth closed loop γ : [ 0 , 1 ] → M {\displaystyle \gamma :[0,1]\to M} is given, based at the point p ∈ M {\displaystyle p\in M} , where γ ( 0 ) = γ ( 1 ) = p {\displaystyle \gamma (0)=\gamma (1)=p} . We assume that γ {\displaystyle \gamma } is homotopic to zero. The curve can be developed into the tangent space at p {\displaystyle p} in the following manner. Let θ i {\displaystyle \theta ^{i}} be a parallel coframe along γ {\displaystyle \gamma } , and let x i {\displaystyle x^{i}} be the coordinates on T p M {\displaystyle T_{p}M} induced by θ i ( p ) {\displaystyle \theta ^{i}(p)} . A development of γ {\displaystyle \gamma } is a curve γ ~ {\displaystyle {\tilde {\gamma }}} in T p M {\displaystyle T_{p}M} whose coordinates x i = x i ( t ) {\displaystyle x^{i}=x^{i}(t)} satisfy the differential equation d x i = γ ∗ θ i . {\displaystyle dx^{i}=\gamma ^{*}\theta ^{i}.} If the torsion is zero, then the developed curve γ ~ {\displaystyle {\tilde {\gamma }}} is also a closed loop (so that γ ~ ( 0 ) = γ ~ ( 1 ) {\displaystyle {\tilde {\gamma }}(0)={\tilde {\gamma }}(1)} ).
On the other hand, if the torsion is non-zero, then the developed curve may not be closed, so that γ ~ ( 0 ) ≠ γ ~ ( 1 ) {\displaystyle {\tilde {\gamma }}(0)\not ={\tilde {\gamma }}(1)} . Thus the development of a loop in the presence of torsion can become dislocated, analogously to a screw dislocation. The foregoing considerations can be made more quantitative by considering a small parallelogram, originating at the point p ∈ M {\displaystyle p\in M} , with sides v , w ∈ T p M {\displaystyle v,w\in T_{p}M} . Then the tangent bivector to the parallelogram is v ∧ w ∈ Λ 2 T p M {\displaystyle v\wedge w\in \Lambda ^{2}T_{p}M} . The development of this parallelogram, using the connection, is no longer closed in general, and the displacement in going around the loop is translation by the vector Θ ( v , w ) {\displaystyle \Theta (v,w)} , where Θ {\displaystyle \Theta } is the torsion tensor, up to higher order terms in v , w {\displaystyle v,w} . This displacement is directly analogous to the Burgers vector of crystallography. More generally, one can also transport a moving frame along the curve γ ~ {\displaystyle {\tilde {\gamma }}} . The linear transformation that the frame undergoes between t = 0 , t = 1 {\displaystyle t=0,t=1} is then determined by the curvature of the connection. Together, the linear transformation of the frame and the translation of the starting point from γ ~ ( 0 ) {\displaystyle {\tilde {\gamma }}(0)} to γ ~ ( 1 ) {\displaystyle {\tilde {\gamma }}(1)} comprise the holonomy of the connection. === The torsion of a filament === In materials science, and especially elasticity theory, ideas of torsion also play an important role. One problem models the growth of vines, focusing on the question of how vines manage to twist around objects. The vine itself is modeled as a pair of elastic filaments twisted around one another. In its energy-minimizing state, the vine naturally grows in the shape of a helix. 
But the vine may also be stretched out to maximize its extent (or length). In this case, the torsion of the vine is related to the torsion of the pair of filaments (or equivalently the surface torsion of the ribbon connecting the filaments), and it reflects the difference between the length-maximizing (geodesic) configuration of the vine and its energy-minimizing configuration. === Torsion and vorticity === In fluid dynamics, torsion is naturally associated to vortex lines. Suppose that a connection D {\displaystyle D} is given in three dimensions, with curvature 2-form Ω a b {\displaystyle \Omega _{a}^{b}} and torsion 2-form Θ a = D θ a {\displaystyle \Theta ^{a}=D\theta ^{a}} . Let η a b c {\displaystyle \eta _{abc}} be the skew-symmetric Levi-Civita tensor, and t a = 1 2 η a b c ∧ Ω b c , {\displaystyle t_{a}={\tfrac {1}{2}}\eta _{abc}\wedge \Omega ^{bc},} s a b = − η a b c ∧ Θ c . {\displaystyle s_{ab}=-\eta _{abc}\wedge \Theta ^{c}.} Then the Bianchi identities D Ω b a = 0 , D Θ a = Ω b a ∧ θ b {\displaystyle D\Omega _{b}^{a}=0,\quad D\Theta ^{a}=\Omega _{b}^{a}\wedge \theta ^{b}} imply that D t a = 0 {\displaystyle Dt_{a}=0} and D s a b = θ a ∧ t b − θ b ∧ t a . {\displaystyle Ds_{ab}=\theta _{a}\wedge t_{b}-\theta _{b}\wedge t_{a}.} These are the equations satisfied by an equilibrium continuous medium with moment density s a b {\displaystyle s_{ab}} . == Geodesics and the absorption of torsion == Suppose that γ(t) is a curve on M. Then γ is an affinely parametrized geodesic provided that ∇ γ ˙ ( t ) γ ˙ ( t ) = 0 {\displaystyle \nabla _{{\dot {\gamma }}(t)}{\dot {\gamma }}(t)=0} for all time t in the domain of γ. (Here the dot denotes differentiation with respect to t, which associates with γ the tangent vector pointing along it.) Each geodesic is uniquely determined by its initial tangent vector at time t = 0, γ ˙ ( 0 ) {\displaystyle {\dot {\gamma }}(0)} .
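As a concrete check of the geodesic equation, which in coordinates reads ẍ^k + Γ^k_ij ẋ^i ẋ^j = 0, one can integrate it for the flat plane in polar coordinates (r, θ), where the nonzero Christoffel symbols are Γ^r_θθ = −r and Γ^θ_rθ = Γ^θ_θr = 1/r; the resulting curve should be a straight line in Cartesian coordinates. The numerical scheme and initial data below are our own illustrative choices:

```python
import numpy as np

# Geodesic equations of the flat plane in polar coordinates:
#   r'' = r theta'^2,   theta'' = -(2/r) r' theta'.
def accel(r, rd, thd):
    return r * thd**2, -2.0 * rd * thd / r

# Start at (x, y) = (1, 0) with Cartesian velocity (0, 1); the geodesic is
# the straight line (1, t), so at t = 1 it should reach (1, 1).
r, th, rd, thd = 1.0, 0.0, 0.0, 1.0
h, steps = 5e-5, 20000                    # integrate to t = 1 by RK2 midpoint
for _ in range(steps):
    ar, ath = accel(r, rd, thd)
    rm, rdm, thdm = r + 0.5*h*rd, rd + 0.5*h*ar, thd + 0.5*h*ath
    arm, athm = accel(rm, rdm, thdm)
    r, th = r + h*rdm, th + h*thdm
    rd, thd = rd + h*arm, thd + h*athm

print(np.isclose(r*np.cos(th), 1.0), np.isclose(r*np.sin(th), 1.0))   # True True
```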
One application of the torsion of a connection involves the geodesic spray of the connection: roughly the family of all affinely parametrized geodesics. Torsion is the ambiguity of classifying connections in terms of their geodesic sprays: Two connections ∇ and ∇′ which have the same affinely parametrized geodesics (i.e., the same geodesic spray) differ only by torsion. More precisely, if X and Y are a pair of tangent vectors at p ∈ M, then let Δ ( X , Y ) = ∇ X Y ~ − ∇ X ′ Y ~ {\displaystyle \Delta (X,Y)=\nabla _{X}{\tilde {Y}}-\nabla '_{X}{\tilde {Y}}} be the difference of the two connections, calculated in terms of arbitrary extensions of X and Y away from p. By the Leibniz product rule, one sees that Δ does not actually depend on how X and Y are extended (so it defines a tensor on M). Let S and A be the symmetric and alternating parts of Δ: S ( X , Y ) = 1 2 ( Δ ( X , Y ) + Δ ( Y , X ) ) {\displaystyle S(X,Y)={\tfrac {1}{2}}\left(\Delta (X,Y)+\Delta (Y,X)\right)} A ( X , Y ) = 1 2 ( Δ ( X , Y ) − Δ ( Y , X ) ) {\displaystyle A(X,Y)={\tfrac {1}{2}}\left(\Delta (X,Y)-\Delta (Y,X)\right)} Then A ( X , Y ) = 1 2 ( T ( X , Y ) − T ′ ( X , Y ) ) {\displaystyle A(X,Y)={\tfrac {1}{2}}\left(T(X,Y)-T'(X,Y)\right)} is the difference of the torsion tensors. ∇ and ∇′ define the same families of affinely parametrized geodesics if and only if S(X, Y) = 0. In other words, the symmetric part of the difference of two connections determines whether they have the same parametrized geodesics, whereas the skew part of the difference is determined by the relative torsions of the two connections. Another consequence is: Given any affine connection ∇, there is a unique torsion-free connection ∇′ with the same family of affinely parametrized geodesics. The difference between these two connections is in fact a tensor, the contorsion tensor. This is a generalization of the fundamental theorem of Riemannian geometry to general affine (possibly non-metric) connections.
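The role of the symmetric part can be seen numerically: adding a purely antisymmetric (torsion-type) difference to the connection coefficients leaves the geodesic acceleration unchanged, because it is contracted with the symmetric product ẋ^i ẋ^j. The random coefficients below are illustrative:

```python
import numpy as np

# Two connections whose coefficients differ by an antisymmetric (in i, j)
# tensor give the same geodesic equation x''^k + Gamma^k_{ij} x'^i x'^j = 0,
# since contraction with the symmetric x'^i x'^j kills the torsion part.
rng = np.random.default_rng(3)
n = 3
Gamma = rng.standard_normal((n, n, n))     # Gamma[k, i, j] = Gamma^k_{ij}
A = rng.standard_normal((n, n, n))
A = A - np.swapaxes(A, 1, 2)               # antisymmetric in i, j
Gamma2 = Gamma + A                         # differs from Gamma by torsion only

v = rng.standard_normal(n)                 # a tangent vector x'
acc1 = -np.einsum('kij,i,j->k', Gamma, v, v)
acc2 = -np.einsum('kij,i,j->k', Gamma2, v, v)
print(np.allclose(acc1, acc2))             # True
```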
Picking out the unique torsion-free connection subordinate to a family of parametrized geodesics is known as absorption of torsion, and it is one of the stages of Cartan's equivalence method. == See also == Contorsion tensor Curtright field Curvature tensor Levi-Civita connection Torsion coefficient Torsion of curves == Notes == == References == Bishop, R.L.; Goldberg, S.I. (1980), Tensor analysis on manifolds, Dover Publications Cartan, É. (1923), "Sur les variétés à connexion affine, et la théorie de la relativité généralisée (première partie)", Annales Scientifiques de l'École Normale Supérieure, 40: 325–412, doi:10.24033/asens.751 Cartan, É. (1924), "Sur les variétés à connexion affine, et la théorie de la relativité généralisée (première partie) (Suite)", Annales Scientifiques de l'École Normale Supérieure, 41: 1–25, doi:10.24033/asens.753 Elzanowski, M.; Epstein, M. (1985), "Geometric characterization of hyperelastic uniformity", Archive for Rational Mechanics and Analysis, 88 (4): 347–357, Bibcode:1985ArRMA..88..347E, doi:10.1007/BF00250871, S2CID 120127682 Goriely, A.; Robertson-Tessi, M.; Tabor, M.; Vandiver, R. (2006), "Elastic growth models" (PDF), BIOMAT-2006, Springer-Verlag, archived from the original (PDF) on 2006-12-29 Hehl, F.W.; von der Heyde, P.; Kerlick, G.D.; Nester, J.M. (1976), "General relativity with spin and torsion: Foundations and prospects", Rev. Mod. Phys., 48 (3): 393–416, Bibcode:1976RvMP...48..393H, doi:10.1103/revmodphys.48.393 Kibble, T.W.B. (1961), "Lorentz invariance and the gravitational field", J. Math. Phys., 2 (2): 212–221, Bibcode:1961JMP.....2..212K, doi:10.1063/1.1703702 Kobayashi, S.; Nomizu, K. (1963), Foundations of Differential Geometry, vol. 1 & 2 (New ed.), Wiley-Interscience (published 1996), ISBN 0-471-15733-3 Poplawski, N.J. (2009), Spacetime and fields, arXiv:0911.0334, Bibcode:2009arXiv0911.0334P Schouten, J.A.
(1954), Ricci Calculus, Springer-Verlag Schrödinger, E. (1950), Space-Time Structure, Cambridge University Press Sciama, D.W. (1964), "The physical structure of general relativity", Rev. Mod. Phys., 36 (1): 463, Bibcode:1964RvMP...36..463S, doi:10.1103/RevModPhys.36.463 Spivak, M. (1999), A comprehensive introduction to differential geometry, Volume II, Houston, Texas: Publish or Perish, ISBN 0-914098-71-3 == External links == Bill Thurston (2011) Rolling without slipping interpretation of torsion, URL (version: 2011-01-27).
Wikipedia/Torsion_tensor
Continuum mechanics is a branch of mechanics that deals with the deformation of and transmission of forces through materials modeled as a continuous medium (also called a continuum) rather than as discrete particles. Continuum mechanics deals with deformable bodies, as opposed to rigid bodies. A continuum model assumes that the substance of the object completely fills the space it occupies. While ignoring the fact that matter is made of atoms, this provides a sufficiently accurate description of matter on length scales much greater than that of inter-atomic distances. The concept of a continuous medium allows for intuitive analysis of bulk matter by using differential equations that describe the behavior of such matter according to physical laws, such as mass conservation, momentum conservation, and energy conservation. Information about the specific material is expressed in constitutive relationships. Continuum mechanics treats the physical properties of solids and fluids independently of any particular coordinate system in which they are observed. These properties are represented by tensors, which are mathematical objects with the salient property of being independent of coordinate systems. This permits definition of physical properties at any point in the continuum, according to mathematically convenient continuous functions. The theories of elasticity, plasticity and fluid mechanics are based on the concepts of continuum mechanics. == Concept of a continuum == The concept of a continuum underlies the mathematical framework for studying large-scale forces and deformations in materials. Although materials are composed of discrete atoms and molecules, separated by empty space or microscopic cracks and crystallographic defects, physical phenomena can often be modeled by considering a substance distributed throughout some region of space. 
A continuum is a body that can be continually sub-divided into infinitesimal elements with local material properties defined at any particular point. Properties of the bulk material can therefore be described by continuous functions, and their evolution can be studied using the mathematics of calculus. Apart from the assumption of continuity, two other independent assumptions are often employed in the study of continuum mechanics. These are homogeneity (assumption of identical properties at all locations) and isotropy (assumption of directionally invariant vector properties). If these auxiliary assumptions are not globally applicable, the material may be segregated into sections where they are applicable in order to simplify the analysis. For more complex cases, one or both of these assumptions can be dropped. In these cases, computational methods are often used to solve the differential equations describing the evolution of material properties. == Major areas == An additional area of continuum mechanics comprises elastomeric foams, which exhibit a curious hyperbolic stress-strain relationship. The elastomer is a true continuum, but a homogeneous distribution of voids gives it unusual properties. == Formulation of models == Continuum mechanics models begin by assigning a region in three-dimensional Euclidean space to the material body B {\displaystyle {\mathcal {B}}} being modeled. The points within this region are called particles or material points. Different configurations or states of the body correspond to different regions in Euclidean space. The region corresponding to the body's configuration at time t {\displaystyle t} is labeled κ t ( B ) {\displaystyle \kappa _{t}({\mathcal {B}})} . 
A particular particle within the body in a particular configuration is characterized by a position vector x = ∑ i = 1 3 x i e i , {\displaystyle \mathbf {x} =\sum _{i=1}^{3}x_{i}\mathbf {e} _{i},} where e i {\displaystyle \mathbf {e} _{i}} are the coordinate vectors in some frame of reference chosen for the problem (See figure 1). This vector can be expressed as a function of the particle position X {\displaystyle \mathbf {X} } in some reference configuration, for example the configuration at the initial time, so that x = κ t ( X ) . {\displaystyle \mathbf {x} =\kappa _{t}(\mathbf {X} ).} This function needs to have various properties so that the model makes physical sense. κ t ( ⋅ ) {\displaystyle \kappa _{t}(\cdot )} needs to be: continuous in time, so that the body changes in a way which is realistic, globally invertible at all times, so that the body cannot intersect itself, orientation-preserving, as transformations which produce mirror reflections are not possible in nature. For the mathematical formulation of the model, κ t ( ⋅ ) {\displaystyle \kappa _{t}(\cdot )} is also assumed to be twice continuously differentiable, so that differential equations describing the motion may be formulated. == Forces in a continuum == A solid is a deformable body that possesses shear strength, sc. a solid can support shear forces (forces parallel to the material surface on which they act). Fluids, on the other hand, do not sustain shear forces. Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces F C {\displaystyle \mathbf {F} _{C}} and body forces F B {\displaystyle \mathbf {F} _{B}} . 
Thus, the total force F {\displaystyle {\mathcal {F}}} applied to a body or to a portion of the body can be expressed as: F = F C + F B {\displaystyle {\mathcal {F}}=\mathbf {F} _{C}+\mathbf {F} _{B}} === Surface forces === Surface forces or contact forces, expressed as force per unit area, can act either on the bounding surface of the body, as a result of mechanical contact with other bodies, or on imaginary internal surfaces that bound portions of the body, as a result of the mechanical interaction between the parts of the body to either side of the surface (Euler-Cauchy's stress principle). When a body is acted upon by external contact forces, internal contact forces are then transmitted from point to point inside the body to balance their action, according to Newton's third law of motion and the conservation of linear momentum and angular momentum (for continuous bodies these laws are called Euler's equations of motion). The internal contact forces are related to the body's deformation through constitutive equations. The internal contact forces may be mathematically described by how they relate to the motion of the body, independent of the body's material makeup. The distribution of internal contact forces throughout the volume of the body is assumed to be continuous. Therefore, there exists a contact force density or Cauchy traction field T ( n , x , t ) {\displaystyle \mathbf {T} (\mathbf {n} ,\mathbf {x} ,t)} that represents this distribution in a particular configuration of the body at a given time t {\displaystyle t\,\!} . It is not a vector field because it depends not only on the position x {\displaystyle \mathbf {x} } of a particular material point, but also on the local orientation of the surface element as defined by its normal vector n {\displaystyle \mathbf {n} } . 
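The dependence of the traction on the surface orientation can be made concrete with a small numerical sketch. The relation T(n) = σ·n is the Cauchy stress theorem, which this section presupposes rather than states; the stress values and normals below are illustrative assumptions.

```python
import numpy as np

# Hypothetical Cauchy stress tensor at one material point (e.g. in MPa).
# The relation T(n) = sigma @ n is the Cauchy stress theorem, assumed here.
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  5.0],
                  [ 0.0,  5.0, -30.0]])

def traction(sigma, n):
    """Traction vector on a surface element with unit normal n."""
    return sigma @ n

# Two different orientations through the same point give different tractions,
# which is why T(n, x, t) is not an ordinary vector field on the body.
n1 = np.array([1.0, 0.0, 0.0])
n2 = np.array([0.0, 1.0, 0.0])
print(traction(sigma, n1))   # [50. 10.  0.]
print(traction(sigma, n2))   # [10. 20.  5.]
```

The same point thus carries a whole family of traction vectors, one per surface orientation, as the text emphasizes.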
Any differential area d S {\displaystyle dS\,\!} with normal vector n {\displaystyle \mathbf {n} } of a given internal surface area S {\displaystyle S\,\!} , bounding a portion of the body, experiences a contact force d F C {\displaystyle d\mathbf {F} _{C}\,\!} arising from the contact between both portions of the body on each side of S {\displaystyle S\,\!} , and it is given by d F C = T ( n ) d S {\displaystyle d\mathbf {F} _{C}=\mathbf {T} ^{(\mathbf {n} )}\,dS} where T ( n ) {\displaystyle \mathbf {T} ^{(\mathbf {n} )}} is the surface traction, also called stress vector, traction, or traction vector. The stress vector is a frame-indifferent vector (see Euler-Cauchy's stress principle). The total contact force on the particular internal surface S {\displaystyle S\,\!} is then expressed as the sum (surface integral) of the contact forces on all differential surfaces d S {\displaystyle dS\,\!} : F C = ∫ S T ( n ) d S {\displaystyle \mathbf {F} _{C}=\int _{S}\mathbf {T} ^{(\mathbf {n} )}\,dS} In continuum mechanics a body is considered stress-free if the only forces present are those inter-atomic forces (ionic, metallic, and van der Waals forces) required to hold the body together and to keep its shape in the absence of all external influences, including gravitational attraction. Stresses generated during manufacture of the body to a specific configuration are also excluded when considering stresses in a body. Therefore, the stresses considered in continuum mechanics are only those produced by deformation of the body, sc. only relative changes in stress are considered, not the absolute values of stress. === Body forces === Body forces are forces originating from sources outside of the body that act on the volume (or mass) of the body. Saying that body forces are due to outside sources implies that the interaction between different parts of the body (internal forces) are manifested through the contact forces alone. 
These forces arise from the presence of the body in force fields, e.g. gravitational field (gravitational forces) or electromagnetic field (electromagnetic forces), or from inertial forces when bodies are in motion. As the mass of a continuous body is assumed to be continuously distributed, any force originating from the mass is also continuously distributed. Thus, body forces are specified by vector fields which are assumed to be continuous over the entire volume of the body, i.e. acting on every point in it. Body forces are represented by a body force density b ( x , t ) {\displaystyle \mathbf {b} (\mathbf {x} ,t)} (per unit of mass), which is a frame-indifferent vector field. In the case of gravitational forces, the intensity of the force depends on, or is proportional to, the mass density ρ ( x , t ) {\displaystyle \mathbf {\rho } (\mathbf {x} ,t)\,\!} of the material, and it is specified in terms of force per unit mass ( b i {\displaystyle b_{i}\,\!} ) or per unit volume ( p i {\displaystyle p_{i}\,\!} ). These two specifications are related through the material density by the equation ρ b i = p i {\displaystyle \rho b_{i}=p_{i}\,\!} . Similarly, the intensity of electromagnetic forces depends upon the strength (electric charge) of the electromagnetic field. The total body force applied to a continuous body is expressed as F B = ∫ V b d m = ∫ V ρ b d V {\displaystyle \mathbf {F} _{B}=\int _{V}\mathbf {b} \,dm=\int _{V}\rho \mathbf {b} \,dV} Body forces and contact forces acting on the body lead to corresponding moments of force (torques) relative to a given point. 
Thus, the total applied torque M {\displaystyle {\mathcal {M}}} about the origin is given by M = M C + M B {\displaystyle {\mathcal {M}}=\mathbf {M} _{C}+\mathbf {M} _{B}} In certain situations, not commonly considered in the analysis of the mechanical behavior of materials, it becomes necessary to include two other types of forces: these are couple stresses (surface couples, contact torques) and body moments. Couple stresses are moments per unit area applied on a surface. Body moments, or body couples, are moments per unit volume or per unit mass applied to the volume of the body. Both are important in the analysis of stress for a polarized dielectric solid under the action of an electric field, materials where the molecular structure is taken into consideration (e.g. bones), solids under the action of an external magnetic field, and the dislocation theory of metals. Materials that exhibit body couples and couple stresses in addition to moments produced exclusively by forces are called polar materials. Non-polar materials are then those materials with only moments of forces. In the classical branches of continuum mechanics the development of the theory of stresses is based on non-polar materials. Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) in the body can be given by F = ∫ V a d m = ∫ S T d S + ∫ V ρ b d V {\displaystyle {\mathcal {F}}=\int _{V}\mathbf {a} \,dm=\int _{S}\mathbf {T} \,dS+\int _{V}\rho \mathbf {b} \,dV} M = ∫ S r × T d S + ∫ V r × ρ b d V {\displaystyle {\mathcal {M}}=\int _{S}\mathbf {r} \times \mathbf {T} \,dS+\int _{V}\mathbf {r} \times \rho \mathbf {b} \,dV} == Kinematics: motion and deformation == A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. 
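The total-force expression F = ∫S T dS + ∫V ρ b dV above can be illustrated with a minimal sketch: a column of fluid at rest, where the net surface (contact) force on the bottom face must cancel the body force (weight). All numerical values are illustrative assumptions.

```python
# Hydrostatic equilibrium of a fluid column: a sketch of F = F_C + F_B.
rho, g = 1000.0, 9.81      # density (kg/m^3) and gravity (m/s^2), assumed
A, h = 0.5, 2.0            # base area (m^2) and column height (m), assumed

# Surface force: atmospheric pressure cancels on top and sides; gauge
# pressure p = rho*g*h acts upward on the bottom face of area A.
F_surface = rho * g * h * A

# Body force: the weight of the column, |F_B| = rho * g * V with V = A*h.
F_body = rho * g * (A * h)

print(F_surface, F_body)   # equal magnitudes: the column is in equilibrium
```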
Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ 0 ( B ) {\displaystyle \kappa _{0}({\mathcal {B}})} to a current or deformed configuration κ t ( B ) {\displaystyle \kappa _{t}({\mathcal {B}})} (Figure 2). The motion of a continuum body is a continuous time sequence of displacements. Thus, the material body will occupy different configurations at different times so that a particle occupies a series of points in space which describe a path line. There is continuity during motion or deformation of a continuum body in the sense that: The material points forming a closed curve at any instant will always form a closed curve at any subsequent time. The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within. It is convenient to identify a reference configuration or initial condition which all subsequent configurations are referenced from. The reference configuration need not be one that the body will ever occupy. Often, the configuration at t = 0 {\displaystyle t=0} is considered the reference configuration, κ 0 ( B ) {\displaystyle \kappa _{0}({\mathcal {B}})} . The components X i {\displaystyle X_{i}} of the position vector X {\displaystyle \mathbf {X} } of a particle, taken with respect to the reference configuration, are called the material or reference coordinates. When analyzing the motion or deformation of solids, or the flow of fluids, it is necessary to describe the sequence or evolution of configurations throughout time. One description for motion is made in terms of the material or referential coordinates, called material description or Lagrangian description. === Lagrangian description === In the Lagrangian description the position and physical properties of the particles are described in terms of the material or referential coordinates and time. 
In this case the reference configuration is the configuration at t = 0 {\displaystyle t=0} . An observer standing in the frame of reference observes the changes in the position and physical properties as the material body moves in space as time progresses. The results obtained are independent of the choice of initial time and reference configuration, κ 0 ( B ) {\displaystyle \kappa _{0}({\mathcal {B}})} . This description is normally used in solid mechanics. In the Lagrangian description, the motion of a continuum body is expressed by the mapping function χ ( ⋅ ) {\displaystyle \chi (\cdot )} (Figure 2), x = χ ( X , t ) {\displaystyle \mathbf {x} =\chi (\mathbf {X} ,t)} which is a mapping of the initial configuration κ 0 ( B ) {\displaystyle \kappa _{0}({\mathcal {B}})} onto the current configuration κ t ( B ) {\displaystyle \kappa _{t}({\mathcal {B}})} , giving a geometrical correspondence between them, i.e. giving the position vector x = x i e i {\displaystyle \mathbf {x} =x_{i}\mathbf {e} _{i}} that a particle X {\displaystyle X} , with a position vector X {\displaystyle \mathbf {X} } in the undeformed or reference configuration κ 0 ( B ) {\displaystyle \kappa _{0}({\mathcal {B}})} , will occupy in the current or deformed configuration κ t ( B ) {\displaystyle \kappa _{t}({\mathcal {B}})} at time t {\displaystyle t} . The components x i {\displaystyle x_{i}} are called the spatial coordinates. Physical and kinematic properties P i j … {\displaystyle P_{ij\ldots }} , i.e. thermodynamic properties and flow velocity, which describe or characterize features of the material body, are expressed as continuous functions of position and time, i.e. P i j … = P i j … ( X , t ) {\displaystyle P_{ij\ldots }=P_{ij\ldots }(\mathbf {X} ,t)} . 
The material derivative of any property P i j … {\displaystyle P_{ij\ldots }} of a continuum, which may be a scalar, vector, or tensor, is the time rate of change of that property for a specific group of particles of the moving continuum body. The material derivative is also known as the substantial derivative, or comoving derivative, or convective derivative. It can be thought of as the rate at which the property changes when measured by an observer traveling with that group of particles. In the Lagrangian description, the material derivative of P i j … {\displaystyle P_{ij\ldots }} is simply the partial derivative with respect to time, and the position vector X {\displaystyle \mathbf {X} } is held constant as it does not change with time. Thus, we have d d t [ P i j … ( X , t ) ] = ∂ ∂ t [ P i j … ( X , t ) ] {\displaystyle {\frac {d}{dt}}[P_{ij\ldots }(\mathbf {X} ,t)]={\frac {\partial }{\partial t}}[P_{ij\ldots }(\mathbf {X} ,t)]} The instantaneous position x {\displaystyle \mathbf {x} } is a property of a particle, and its material derivative is the instantaneous flow velocity v {\displaystyle \mathbf {v} } of the particle. Therefore, the flow velocity field of the continuum is given by v = x ˙ = d x d t = ∂ χ ( X , t ) ∂ t {\displaystyle \mathbf {v} ={\dot {\mathbf {x} }}={\frac {d\mathbf {x} }{dt}}={\frac {\partial \chi (\mathbf {X} ,t)}{\partial t}}} Similarly, the acceleration field is given by a = v ˙ = x ¨ = d 2 x d t 2 = ∂ 2 χ ( X , t ) ∂ t 2 {\displaystyle \mathbf {a} ={\dot {\mathbf {v} }}={\ddot {\mathbf {x} }}={\frac {d^{2}\mathbf {x} }{dt^{2}}}={\frac {\partial ^{2}\chi (\mathbf {X} ,t)}{\partial t^{2}}}} Continuity in the Lagrangian description is expressed by the spatial and temporal continuity of the mapping from the reference configuration to the current configuration of the material points. All physical quantities characterizing the continuum are described this way. 
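The Lagrangian velocity and acceleration formulas above can be checked symbolically: given a motion χ(X, t), differentiate in time with the material coordinates held fixed. The specific motion below is an illustrative assumption, not taken from the text.

```python
import sympy as sp

# Lagrangian kinematics: for an assumed motion chi(X, t), the flow
# velocity and acceleration are partial time derivatives at fixed X.
X1, X2, X3, t = sp.symbols('X1 X2 X3 t', real=True)
chi = sp.Matrix([X1 * sp.exp(t),   # illustrative motion, not from the text
                 X2 + t**2,
                 X3])

v = chi.diff(t)   # velocity field  v = d(chi)/dt at fixed X
a = v.diff(t)     # acceleration field  a = dv/dt at fixed X
print(list(v))    # [X1*exp(t), 2*t, 0]
print(list(a))    # [X1*exp(t), 2, 0]
```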
In this sense, the functions χ ( ⋅ ) {\displaystyle \chi (\cdot )} and P i j … ( ⋅ ) {\displaystyle P_{ij\ldots }(\cdot )} are single-valued and continuous, with continuous derivatives with respect to space and time to whatever order is required, usually to the second or third. === Eulerian description === Continuity allows for the inverse of χ ( ⋅ ) {\displaystyle \chi (\cdot )} to trace backwards where the particle currently located at x {\displaystyle \mathbf {x} } was located in the initial or reference configuration κ 0 ( B ) {\displaystyle \kappa _{0}({\mathcal {B}})} . In this case the description of motion is made in terms of the spatial coordinates, in which case it is called the spatial description or Eulerian description, i.e. the current configuration is taken as the reference configuration. The Eulerian description, introduced by d'Alembert, focuses on the current configuration κ t ( B ) {\displaystyle \kappa _{t}({\mathcal {B}})} , giving attention to what is occurring at a fixed point in space as time progresses, instead of giving attention to individual particles as they move through space and time. This approach is conveniently applied in the study of fluid flow where the kinematic property of greatest interest is the rate at which change is taking place rather than the shape of the body of fluid at a reference time. Mathematically, the motion of a continuum using the Eulerian description is expressed by the mapping function X = χ − 1 ( x , t ) {\displaystyle \mathbf {X} =\chi ^{-1}(\mathbf {x} ,t)} which provides a tracing of the particle which now occupies the position x {\displaystyle \mathbf {x} } in the current configuration κ t ( B ) {\displaystyle \kappa _{t}({\mathcal {B}})} to its original position X {\displaystyle \mathbf {X} } in the initial configuration κ 0 ( B ) {\displaystyle \kappa _{0}({\mathcal {B}})} . 
A necessary and sufficient condition for this inverse function to exist is that the determinant of the Jacobian matrix, often referred to simply as the Jacobian, should be different from zero. Thus, J = | ∂ χ i ∂ X J | = | ∂ x i ∂ X J | ≠ 0 {\displaystyle J=\left|{\frac {\partial \chi _{i}}{\partial X_{J}}}\right|=\left|{\frac {\partial x_{i}}{\partial X_{J}}}\right|\neq 0} In the Eulerian description, the physical properties P i j … {\displaystyle P_{ij\ldots }} are expressed as P i j … = P i j … ( X , t ) = P i j … [ χ − 1 ( x , t ) , t ] = p i j … ( x , t ) {\displaystyle P_{ij\ldots }=P_{ij\ldots }(\mathbf {X} ,t)=P_{ij\ldots }[\chi ^{-1}(\mathbf {x} ,t),t]=p_{ij\ldots }(\mathbf {x} ,t)} where the functional form of P i j … {\displaystyle P_{ij\ldots }} in the Lagrangian description is not the same as the form of p i j … {\displaystyle p_{ij\ldots }} in the Eulerian description. The material derivative of p i j … ( x , t ) {\displaystyle p_{ij\ldots }(\mathbf {x} ,t)} , using the chain rule, is then d d t [ p i j … ( x , t ) ] = ∂ ∂ t [ p i j … ( x , t ) ] + ∂ ∂ x k [ p i j … ( x , t ) ] d x k d t {\displaystyle {\frac {d}{dt}}[p_{ij\ldots }(\mathbf {x} ,t)]={\frac {\partial }{\partial t}}[p_{ij\ldots }(\mathbf {x} ,t)]+{\frac {\partial }{\partial x_{k}}}[p_{ij\ldots }(\mathbf {x} ,t)]{\frac {dx_{k}}{dt}}} The first term on the right-hand side of this equation gives the local rate of change of the property p i j … ( x , t ) {\displaystyle p_{ij\ldots }(\mathbf {x} ,t)} occurring at position x {\displaystyle \mathbf {x} } . The second term of the right-hand side is the convective rate of change and expresses the contribution of the particle changing position in space (motion). Continuity in the Eulerian description is expressed by the spatial and temporal continuity and continuous differentiability of the flow velocity field. 
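Both conditions above, the nonvanishing Jacobian that guarantees an inverse motion and the chain-rule split of the material derivative into local and convective rates, can be illustrated symbolically. The motion and field below are illustrative assumptions; the cross-check at the end recomputes the material derivative through the Lagrangian route and confirms the two agree.

```python
import sympy as sp

X1, X2, t = sp.symbols('X1 X2 t', real=True)
x1, x2 = sp.symbols('x1 x2', real=True)

# Assumed 2-D motion x = chi(X, t): uniform stretch along the 1-axis.
chi = sp.Matrix([X1 * (1 + t), X2])

# Jacobian determinant: nonzero for t > -1, so the motion is invertible.
J = chi.jacobian(sp.Matrix([X1, X2])).det()
print(J)   # t + 1

# Inverse motion X = chi^{-1}(x, t).
Xinv = sp.Matrix([x1 / (1 + t), x2])

# Material derivative of an Eulerian field p(x, t): local + convective.
p = x1**2 * t                       # illustrative Eulerian scalar field
v = sp.Matrix([x1 / (1 + t), 0])    # Eulerian velocity of this motion
Dp = p.diff(t) + sum(v[k] * p.diff(s) for k, s in enumerate((x1, x2)))

# Cross-check via the Lagrangian route: P(X, t) = p(chi(X, t), t),
# differentiate in t at fixed X, then map back to spatial coordinates.
P = p.subs({x1: chi[0], x2: chi[1]})
Dp_lagr = P.diff(t).subs({X1: Xinv[0], X2: Xinv[1]})
assert sp.simplify(Dp - Dp_lagr) == 0
print(sp.simplify(Dp))   # equals x1**2*(1 + 3*t)/(1 + t)
```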
All physical quantities are defined this way at each instant of time, in the current configuration, as a function of the vector position x {\displaystyle \mathbf {x} } . === Displacement field === The vector joining the positions of a particle P {\displaystyle P} in the undeformed configuration and deformed configuration is called the displacement vector u ( X , t ) = u i e i {\displaystyle \mathbf {u} (\mathbf {X} ,t)=u_{i}\mathbf {e} _{i}} , in the Lagrangian description, or U ( x , t ) = U J E J {\displaystyle \mathbf {U} (\mathbf {x} ,t)=U_{J}\mathbf {E} _{J}} , in the Eulerian description. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as u ( X , t ) = b + x ( X , t ) − X or u i = α i J b J + x i − α i J X J {\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {b} +\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=\alpha _{iJ}b_{J}+x_{i}-\alpha _{iJ}X_{J}} or in terms of the spatial coordinates as U ( x , t ) = b + x − X ( x , t ) or U J = b J + α J i x i − X J {\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {b} +\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=b_{J}+\alpha _{Ji}x_{i}-X_{J}\,} where α J i {\displaystyle \alpha _{Ji}} are the direction cosines between the material and spatial coordinate systems with unit vectors E J {\displaystyle \mathbf {E} _{J}} and e i {\displaystyle \mathbf {e} _{i}} , respectively. 
Thus E J ⋅ e i = α J i = α i J {\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\alpha _{Ji}=\alpha _{iJ}} and the relationship between u i {\displaystyle u_{i}} and U J {\displaystyle U_{J}} is then given by u i = α i J U J or U J = α J i u i {\displaystyle u_{i}=\alpha _{iJ}U_{J}\qquad {\text{or}}\qquad U_{J}=\alpha _{Ji}u_{i}} Knowing that e i = α i J E J {\displaystyle \mathbf {e} _{i}=\alpha _{iJ}\mathbf {E} _{J}} then u ( X , t ) = u i e i = u i ( α i J E J ) = U J E J = U ( x , t ) {\displaystyle \mathbf {u} (\mathbf {X} ,t)=u_{i}\mathbf {e} _{i}=u_{i}(\alpha _{iJ}\mathbf {E} _{J})=U_{J}\mathbf {E} _{J}=\mathbf {U} (\mathbf {x} ,t)} It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in b = 0 {\displaystyle \mathbf {b} =0} , and the direction cosines become Kronecker deltas, i.e. E J ⋅ e i = δ J i = δ i J {\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\delta _{Ji}=\delta _{iJ}} Thus, we have u ( X , t ) = x ( X , t ) − X or u i = x i − δ i J X J {\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=x_{i}-\delta _{iJ}X_{J}} or in terms of the spatial coordinates as U ( x , t ) = x − X ( x , t ) or U J = δ J i x i − X J {\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=\delta _{Ji}x_{i}-X_{J}} == Governing equations == Continuum mechanics deals with the behavior of materials that can be approximated as continuous for certain length and time scales. The equations that govern the mechanics of such materials include the balance laws for mass, momentum, and energy. Kinematic relations and constitutive equations are needed to complete the system of governing equations. Physical restrictions on the form of the constitutive relations can be applied by requiring that the second law of thermodynamics be satisfied under all conditions. 
In the continuum mechanics of solids, the second law of thermodynamics is satisfied if the Clausius–Duhem form of the entropy inequality is satisfied. The balance laws express the idea that the rate of change of a quantity (mass, momentum, energy) in a volume must arise from three causes: the physical quantity itself flows through the surface that bounds the volume, there is a source of the physical quantity on the surface of the volume, and/or there is a source of the physical quantity inside the volume. Let Ω {\displaystyle \Omega } be the body (an open subset of Euclidean space) and let ∂ Ω {\displaystyle \partial \Omega } be its surface (the boundary of Ω {\displaystyle \Omega } ). Let the motion of material points in the body be described by the map x = χ ( X ) = x ( X ) {\displaystyle \mathbf {x} ={\boldsymbol {\chi }}(\mathbf {X} )=\mathbf {x} (\mathbf {X} )} where X {\displaystyle \mathbf {X} } is the position of a point in the initial configuration and x {\displaystyle \mathbf {x} } is the location of the same point in the deformed configuration. The deformation gradient is given by F = ∂ x ∂ X = ∇ x . {\displaystyle {\boldsymbol {F}}={\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}=\nabla \mathbf {x} ~.} === Balance laws === Let f ( x , t ) {\displaystyle f(\mathbf {x} ,t)} be a physical quantity that is flowing through the body. Let g ( x , t ) {\displaystyle g(\mathbf {x} ,t)} be sources on the surface of the body and let h ( x , t ) {\displaystyle h(\mathbf {x} ,t)} be sources inside the body. Let n ( x , t ) {\displaystyle \mathbf {n} (\mathbf {x} ,t)} be the outward unit normal to the surface ∂ Ω {\displaystyle \partial \Omega } . Let v ( x , t ) {\displaystyle \mathbf {v} (\mathbf {x} ,t)} be the flow velocity of the physical particles that carry the physical quantity that is flowing. 
Also, let the speed at which the bounding surface ∂ Ω {\displaystyle \partial \Omega } is moving be u n {\displaystyle u_{n}} (in the direction n {\displaystyle \mathbf {n} } ). Then, balance laws can be expressed in the general form d d t [ ∫ Ω f ( x , t ) dV ] = ∫ ∂ Ω f ( x , t ) [ u n ( x , t ) − v ( x , t ) ⋅ n ( x , t ) ] dA + ∫ ∂ Ω g ( x , t ) dA + ∫ Ω h ( x , t ) dV . {\displaystyle {\cfrac {d}{dt}}\left[\int _{\Omega }f(\mathbf {x} ,t)~{\text{dV}}\right]=\int _{\partial \Omega }f(\mathbf {x} ,t)[u_{n}(\mathbf {x} ,t)-\mathbf {v} (\mathbf {x} ,t)\cdot \mathbf {n} (\mathbf {x} ,t)]~{\text{dA}}+\int _{\partial \Omega }g(\mathbf {x} ,t)~{\text{dA}}+\int _{\Omega }h(\mathbf {x} ,t)~{\text{dV}}~.} The functions f ( x , t ) {\displaystyle f(\mathbf {x} ,t)} , g ( x , t ) {\displaystyle g(\mathbf {x} ,t)} , and h ( x , t ) {\displaystyle h(\mathbf {x} ,t)} can be scalar valued, vector valued, or tensor valued - depending on the physical quantity that the balance equation deals with. If there are internal boundaries in the body, jump discontinuities also need to be specified in the balance laws. If we take the Eulerian point of view, it can be shown that the balance laws of mass, momentum, and energy for a solid can be written as (assuming the source term is zero for the mass and angular momentum equations) ρ ˙ + ρ ( ∇ ⋅ v ) = 0 Balance of Mass ρ v ˙ − ∇ ⋅ σ − ρ b = 0 Balance of Linear Momentum (Cauchy's first law of motion) σ = σ T Balance of Angular Momentum (Cauchy's second law of motion) ρ e ˙ − σ : ( ∇ v ) + ∇ ⋅ q − ρ s = 0 Balance of Energy. 
{\displaystyle {\begin{aligned}{\dot {\rho }}+\rho ({\boldsymbol {\nabla }}\cdot \mathbf {v} )&=0&&\qquad {\text{Balance of Mass}}\\\rho ~{\dot {\mathbf {v} }}-{\boldsymbol {\nabla }}\cdot {\boldsymbol {\sigma }}-\rho ~\mathbf {b} &=0&&\qquad {\text{Balance of Linear Momentum (Cauchy's first law of motion)}}\\{\boldsymbol {\sigma }}&={\boldsymbol {\sigma }}^{T}&&\qquad {\text{Balance of Angular Momentum (Cauchy's second law of motion)}}\\\rho ~{\dot {e}}-{\boldsymbol {\sigma }}:({\boldsymbol {\nabla }}\mathbf {v} )+{\boldsymbol {\nabla }}\cdot \mathbf {q} -\rho ~s&=0&&\qquad {\text{Balance of Energy.}}\end{aligned}}} In the above equations ρ ( x , t ) {\displaystyle \rho (\mathbf {x} ,t)} is the mass density (current), ρ ˙ {\displaystyle {\dot {\rho }}} is the material time derivative of ρ {\displaystyle \rho } , v ( x , t ) {\displaystyle \mathbf {v} (\mathbf {x} ,t)} is the particle velocity, v ˙ {\displaystyle {\dot {\mathbf {v} }}} is the material time derivative of v {\displaystyle \mathbf {v} } , σ ( x , t ) {\displaystyle {\boldsymbol {\sigma }}(\mathbf {x} ,t)} is the Cauchy stress tensor, b ( x , t ) {\displaystyle \mathbf {b} (\mathbf {x} ,t)} is the body force density, e ( x , t ) {\displaystyle e(\mathbf {x} ,t)} is the internal energy per unit mass, e ˙ {\displaystyle {\dot {e}}} is the material time derivative of e {\displaystyle e} , q ( x , t ) {\displaystyle \mathbf {q} (\mathbf {x} ,t)} is the heat flux vector, and s ( x , t ) {\displaystyle s(\mathbf {x} ,t)} is an energy source per unit mass. The operators used are defined below. With respect to the reference configuration (the Lagrangian point of view), the balance laws can be written as ρ det ( F ) − ρ 0 = 0 Balance of Mass ρ 0 x ¨ − ∇ ∘ ⋅ P − ρ 0 b = 0 Balance of Linear Momentum F ⋅ P T = P ⋅ F T Balance of Angular Momentum ρ 0 e ˙ − P T : F ˙ + ∇ ∘ ⋅ q − ρ 0 s = 0 Balance of Energy. 
{\displaystyle {\begin{aligned}\rho ~\det({\boldsymbol {F}})-\rho _{0}&=0&&\qquad {\text{Balance of Mass}}\\\rho _{0}~{\ddot {\mathbf {x} }}-{\boldsymbol {\nabla }}_{\circ }\cdot {\boldsymbol {P}}-\rho _{0}~\mathbf {b} &=0&&\qquad {\text{Balance of Linear Momentum}}\\{\boldsymbol {F}}\cdot {\boldsymbol {P}}^{T}&={\boldsymbol {P}}\cdot {\boldsymbol {F}}^{T}&&\qquad {\text{Balance of Angular Momentum}}\\\rho _{0}~{\dot {e}}-{\boldsymbol {P}}^{T}:{\dot {\boldsymbol {F}}}+{\boldsymbol {\nabla }}_{\circ }\cdot \mathbf {q} -\rho _{0}~s&=0&&\qquad {\text{Balance of Energy.}}\end{aligned}}} In the above, P {\displaystyle {\boldsymbol {P}}} is the first Piola-Kirchhoff stress tensor, and ρ 0 {\displaystyle \rho _{0}} is the mass density in the reference configuration. The first Piola-Kirchhoff stress tensor is related to the Cauchy stress tensor by P = J σ ⋅ F − T where J = det ( F ) {\displaystyle {\boldsymbol {P}}=J~{\boldsymbol {\sigma }}\cdot {\boldsymbol {F}}^{-T}~{\text{where}}~J=\det({\boldsymbol {F}})} We can alternatively define the nominal stress tensor N {\displaystyle {\boldsymbol {N}}} which is the transpose of the first Piola-Kirchhoff stress tensor such that N = P T = J F − 1 ⋅ σ . {\displaystyle {\boldsymbol {N}}={\boldsymbol {P}}^{T}=J~{\boldsymbol {F}}^{-1}\cdot {\boldsymbol {\sigma }}~.} Then the balance laws become ρ det ( F ) − ρ 0 = 0 Balance of Mass ρ 0 x ¨ − ∇ ∘ ⋅ N T − ρ 0 b = 0 Balance of Linear Momentum F ⋅ N = N T ⋅ F T Balance of Angular Momentum ρ 0 e ˙ − N : F ˙ + ∇ ∘ ⋅ q − ρ 0 s = 0 Balance of Energy. 
{\displaystyle {\begin{aligned}\rho ~\det({\boldsymbol {F}})-\rho _{0}&=0&&\qquad {\text{Balance of Mass}}\\\rho _{0}~{\ddot {\mathbf {x} }}-{\boldsymbol {\nabla }}_{\circ }\cdot {\boldsymbol {N}}^{T}-\rho _{0}~\mathbf {b} &=0&&\qquad {\text{Balance of Linear Momentum}}\\{\boldsymbol {F}}\cdot {\boldsymbol {N}}&={\boldsymbol {N}}^{T}\cdot {\boldsymbol {F}}^{T}&&\qquad {\text{Balance of Angular Momentum}}\\\rho _{0}~{\dot {e}}-{\boldsymbol {N}}:{\dot {\boldsymbol {F}}}+{\boldsymbol {\nabla }}_{\circ }\cdot \mathbf {q} -\rho _{0}~s&=0&&\qquad {\text{Balance of Energy.}}\end{aligned}}} ==== Operators ==== The operators in the above equations are defined as ∇ v = ∑ i , j = 1 3 ∂ v i ∂ x j e i ⊗ e j = v i , j e i ⊗ e j ; ∇ ⋅ v = ∑ i = 1 3 ∂ v i ∂ x i = v i , i ; ∇ ⋅ S = ∑ i , j = 1 3 ∂ S i j ∂ x j e i = S i j , j e i . {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\mathbf {v} &=\sum _{i,j=1}^{3}{\frac {\partial v_{i}}{\partial x_{j}}}\mathbf {e} _{i}\otimes \mathbf {e} _{j}=v_{i,j}\mathbf {e} _{i}\otimes \mathbf {e} _{j}~;\\[1ex]{\boldsymbol {\nabla }}\cdot \mathbf {v} &=\sum _{i=1}^{3}{\frac {\partial v_{i}}{\partial x_{i}}}=v_{i,i}~;\\[1ex]{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&=\sum _{i,j=1}^{3}{\frac {\partial S_{ij}}{\partial x_{j}}}~\mathbf {e} _{i}=S_{ij,j}~\mathbf {e} _{i}~.\end{aligned}}} where v {\displaystyle \mathbf {v} } is a vector field, S {\displaystyle {\boldsymbol {S}}} is a second-order tensor field, and e i {\displaystyle \mathbf {e} _{i}} are the components of an orthonormal basis in the current configuration. 
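The stress-measure relations stated above, P = J σ·F⁻ᵀ and N = Pᵀ = J F⁻¹·σ, together with the angular-momentum condition F·Pᵀ = P·Fᵀ and the double contraction A:B = Σᵢⱼ AᵢⱼBᵢⱼ = trace(A Bᵀ) used in the energy balance, can be verified numerically. The deformation gradient and Cauchy stress values below are illustrative assumptions.

```python
import numpy as np

# Illustrative deformation gradient (shear plus stretches) and a
# symmetric Cauchy stress; the numbers are assumptions for the demo.
F = np.array([[1.0, 0.3, 0.0],
              [0.0, 1.1, 0.0],
              [0.0, 0.0, 0.9]])
sigma = np.array([[40.0, 12.0,  0.0],
                  [12.0, 25.0,  3.0],
                  [ 0.0,  3.0, 10.0]])
J = np.linalg.det(F)

P = J * sigma @ np.linalg.inv(F).T    # first Piola-Kirchhoff stress
N = J * np.linalg.inv(F) @ sigma      # nominal stress

# N is the transpose of P, as stated above (sigma is symmetric).
assert np.allclose(N, P.T)

# Balance of angular momentum: F P^T = P F^T (both equal J*sigma).
assert np.allclose(F @ P.T, P @ F.T)

# Double contraction A:B = sum of entrywise products = trace(A B^T).
A, B = P, F
assert np.isclose((A * B).sum(), np.trace(A @ B.T))
print(round(J, 3))   # 0.99
```

Note that P itself is generally unsymmetric; only the combinations F·Pᵀ and the Cauchy stress recover symmetry.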
Also, ∇ ∘ v = ∑ i , j = 1 3 ∂ v i ∂ X j E i ⊗ E j = v i , j E i ⊗ E j ; ∇ ∘ ⋅ v = ∑ i = 1 3 ∂ v i ∂ X i = v i , i ; ∇ ∘ ⋅ S = ∑ i , j = 1 3 ∂ S i j ∂ X j E i = S i j , j E i {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}_{\circ }\mathbf {v} &=\sum _{i,j=1}^{3}{\frac {\partial v_{i}}{\partial X_{j}}}\mathbf {E} _{i}\otimes \mathbf {E} _{j}=v_{i,j}\mathbf {E} _{i}\otimes \mathbf {E} _{j}~;\\[1ex]{\boldsymbol {\nabla }}_{\circ }\cdot \mathbf {v} &=\sum _{i=1}^{3}{\frac {\partial v_{i}}{\partial X_{i}}}=v_{i,i}~;\\[1ex]{\boldsymbol {\nabla }}_{\circ }\cdot {\boldsymbol {S}}&=\sum _{i,j=1}^{3}{\frac {\partial S_{ij}}{\partial X_{j}}}~\mathbf {E} _{i}=S_{ij,j}~\mathbf {E} _{i}\end{aligned}}} where v {\displaystyle \mathbf {v} } is a vector field, S {\displaystyle {\boldsymbol {S}}} is a second-order tensor field, and E i {\displaystyle \mathbf {E} _{i}} are the components of an orthonormal basis in the reference configuration. The inner product is defined as A : B = ∑ i , j = 1 3 A i j B i j = trace ⁡ ( A B T ) . {\displaystyle {\boldsymbol {A}}:{\boldsymbol {B}}=\sum _{i,j=1}^{3}A_{ij}~B_{ij}=\operatorname {trace} ({\boldsymbol {A}}{\boldsymbol {B}}^{T})~.} === Clausius–Duhem inequality === The Clausius–Duhem inequality can be used to express the second law of thermodynamics for elastic-plastic materials. This inequality is a statement concerning the irreversibility of natural processes, especially when energy dissipation is involved. Just like in the balance laws in the previous section, we assume that there is a flux of a quantity, a source of the quantity, and an internal density of the quantity per unit mass. The quantity of interest in this case is the entropy. Thus, we assume that there is an entropy flux, an entropy source, an internal mass density ρ {\displaystyle \rho } and an internal specific entropy (i.e. entropy per unit mass) η {\displaystyle \eta } in the region of interest. 
Let Ω {\displaystyle \Omega } be such a region and let ∂ Ω {\displaystyle \partial \Omega } be its boundary. Then the second law of thermodynamics states that the rate of increase of η {\displaystyle \eta } in this region is greater than or equal to the sum of that supplied to Ω {\displaystyle \Omega } (as a flux or from internal sources) and the change of the internal entropy density ρ η {\displaystyle \rho \eta } due to material flowing in and out of the region. Let ∂ Ω {\displaystyle \partial \Omega } move with a flow velocity u n {\displaystyle u_{n}} and let particles inside Ω {\displaystyle \Omega } have velocities v {\displaystyle \mathbf {v} } . Let n {\displaystyle \mathbf {n} } be the unit outward normal to the surface ∂ Ω {\displaystyle \partial \Omega } . Let ρ {\displaystyle \rho } be the density of matter in the region, q ¯ {\displaystyle {\bar {q}}} be the entropy flux at the surface, and r {\displaystyle r} be the entropy source per unit mass. Then the entropy inequality may be written as d d t ( ∫ Ω ρ η dV ) ≥ ∫ ∂ Ω ρ η ( u n − v ⋅ n ) dA + ∫ ∂ Ω q ¯ dA + ∫ Ω ρ r dV . {\displaystyle {\cfrac {d}{dt}}\left(\int _{\Omega }\rho ~\eta ~{\text{dV}}\right)\geq \int _{\partial \Omega }\rho ~\eta ~(u_{n}-\mathbf {v} \cdot \mathbf {n} )~{\text{dA}}+\int _{\partial \Omega }{\bar {q}}~{\text{dA}}+\int _{\Omega }\rho ~r~{\text{dV}}.} The scalar entropy flux can be related to the vector flux at the surface by the relation q ¯ = − ψ ( x ) ⋅ n {\displaystyle {\bar {q}}=-{\boldsymbol {\psi }}(\mathbf {x} )\cdot \mathbf {n} } . 
Under the assumption of incrementally isothermal conditions, we have ψ ( x ) = q ( x ) T ; r = s T {\displaystyle {\boldsymbol {\psi }}(\mathbf {x} )={\cfrac {\mathbf {q} (\mathbf {x} )}{T}}~;~~r={\cfrac {s}{T}}} where q {\displaystyle \mathbf {q} } is the heat flux vector, s {\displaystyle s} is an energy source per unit mass, and T {\displaystyle T} is the absolute temperature of a material point at x {\displaystyle \mathbf {x} } at time t {\displaystyle t} . We then have the Clausius–Duhem inequality in integral form: d d t ( ∫ Ω ρ η dV ) ≥ ∫ ∂ Ω ρ η ( u n − v ⋅ n ) dA − ∫ ∂ Ω q ⋅ n T dA + ∫ Ω ρ s T dV . {\displaystyle {{\cfrac {d}{dt}}\left(\int _{\Omega }\rho ~\eta ~{\text{dV}}\right)\geq \int _{\partial \Omega }\rho ~\eta ~(u_{n}-\mathbf {v} \cdot \mathbf {n} )~{\text{dA}}-\int _{\partial \Omega }{\cfrac {\mathbf {q} \cdot \mathbf {n} }{T}}~{\text{dA}}+\int _{\Omega }{\cfrac {\rho ~s}{T}}~{\text{dV}}.}} We can show that the entropy inequality may be written in differential form as ρ η ˙ ≥ − ∇ ⋅ ( q T ) + ρ s T . {\displaystyle {\rho ~{\dot {\eta }}\geq -{\boldsymbol {\nabla }}\cdot \left({\cfrac {\mathbf {q} }{T}}\right)+{\cfrac {\rho ~s}{T}}.}} In terms of the Cauchy stress and the internal energy, the Clausius–Duhem inequality may be written as ρ ( e ˙ − T η ˙ ) − σ : ∇ v ≤ − q ⋅ ∇ T T . {\displaystyle {\rho ~({\dot {e}}-T~{\dot {\eta }})-{\boldsymbol {\sigma }}:{\boldsymbol {\nabla }}\mathbf {v} \leq -{\cfrac {\mathbf {q} \cdot {\boldsymbol {\nabla }}T}{T}}.}} == Validity == The validity of the continuum assumption may be verified by a theoretical analysis, in which either some clear periodicity is identified or statistical homogeneity and ergodicity of the microstructure exist. More specifically, the continuum hypothesis hinges on the concepts of a representative elementary volume and separation of scales based on the Hill–Mandel condition. 
This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations (linear and nonlinear elastic/inelastic or coupled fields) as well as a way of spatial and statistical averaging of the microstructure. When the separation of scales does not hold, or when one wants to establish a continuum of a finer resolution than the size of the representative volume element (RVE), a statistical volume element (SVE) is employed, which results in random continuum fields. The latter then provide a micromechanics basis for stochastic finite elements (SFE). The levels of SVE and RVE link continuum mechanics to statistical mechanics. Experimentally, the RVE can only be evaluated when the constitutive response is spatially homogeneous. == Applications == Continuum mechanics Solid mechanics Fluid mechanics Engineering Civil engineering Mechanical engineering Aerospace engineering Biomedical engineering Chemical engineering == See also == == Explanatory notes == == References == === Citations === === Works cited === Atanackovic, Teodor M.; Guran, Ardeshir (16 June 2000). Theory of Elasticity for Scientists and Engineers. Dover books on physics. Springer Science & Business Media. ISBN 978-0-8176-4072-9. Chadwick, Peter (1 January 1999). Continuum Mechanics: Concise Theory and Problems. Courier Corporation. ISBN 978-0-486-40180-5. Dienes, J. K.; Solem, J. C. (1999). "Nonlinear behavior of some hydrostatically stressed isotropic elastomeric foams". Acta Mechanica. 138 (3–4): 155–162. doi:10.1007/BF01291841. S2CID 120320672. Fung, Y. C. (1977). A First Course in Continuum Mechanics (2nd ed.). Prentice-Hall, Inc. ISBN 978-0-13-318311-5. Irgens, Fridtjov (10 January 2008). Continuum Mechanics. Springer Science & Business Media. ISBN 978-3-540-74298-2. Liu, I-Shih (28 May 2002). Continuum Mechanics. Springer Science & Business Media. ISBN 978-3-540-43019-3. Lubliner, Jacob (2008). Plasticity Theory (PDF) (Revised ed.). Dover Publications. 
ISBN 978-0-486-46290-5. Archived from the original (PDF) on 31 March 2010. Ostoja-Starzewski, M. (2008). "7-10". Microstructural randomness and scaling in mechanics of materials. CRC Press. ISBN 978-1-58488-417-0. Spencer, A. J. M. (1980). Continuum Mechanics. Longman Group Limited (London). p. 83. ISBN 978-0-582-44282-5. Roberts, A. J. (1994). A One-Dimensional Introduction to Continuum Mechanics. World Scientific. Smith, Donald R. (1993). "2". An introduction to continuum mechanics-after Truesdell and Noll. Solids mechanics and its applications. Vol. 22. Springer Science & Business Media. ISBN 978-90-481-4314-6. Wu, Han-Chin (20 December 2004). Continuum Mechanics and Plasticity. Taylor & Francis. ISBN 978-1-58488-363-0. === General references === Batra, R. C. (2006). Elements of Continuum Mechanics. Reston, VA: AIAA. Bertram, Albrecht (2012). Elasticity and Plasticity of Large Deformations - An Introduction (Third ed.). Springer. doi:10.1007/978-3-642-24615-9. ISBN 978-3-642-24615-9. S2CID 116496103. Chandramouli, P.N (2014). Continuum Mechanics. Yes Dee Publishing Pvt Ltd. ISBN 9789380381398. Archived from the original on 4 August 2018. Retrieved 24 March 2014. Eringen, A. Cemal (1980). Mechanics of Continua (2nd ed.). Krieger Pub Co. ISBN 978-0-88275-663-9. Chen, Youping; James D. Lee; Azim Eskandarian (2009). Meshless Methods in Solid Mechanics (First ed.). Springer New York. ISBN 978-1-4419-2148-2. Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 978-0-8493-9779-0. Dimitrienko, Yuriy (2011). Nonlinear Continuum Mechanics and Large Inelastic Deformations. Germany: Springer. ISBN 978-94-007-0033-8. Hutter, Kolumban; Klaus Jöhnk (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 978-3-540-20619-4. Gurtin, M. E. (1981). An Introduction to Continuum Mechanics. New York: Academic Press. Lai, W. Michael; David Rubin; Erhard Krempl (1996). 
Introduction to Continuum Mechanics (3rd ed.). Elsevier, Inc. ISBN 978-0-7506-2894-5. Archived from the original on 6 February 2009. Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 978-0-8493-1138-3. Malvern, Lawrence E. (1969). Introduction to the mechanics of a continuous medium. New Jersey: Prentice-Hall, Inc. Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 978-0-07-040663-6. Mase, G. Thomas; George E. Mase (1999). Continuum Mechanics for Engineers (Second ed.). CRC Press. ISBN 978-0-8493-1855-9. Maugin, G. A. (1999). The Thermomechanics of Nonlinear Irreversible Behaviors: An Introduction. Singapore: World Scientific. Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 978-0-521-83979-2. Ostoja-Starzewski, Martin (2008). Microstructural Randomness and Scaling in Mechanics of Materials. Boca Raton, FL: Chapman & Hall/CRC Press. ISBN 978-1-58488-417-0. Rees, David (2006). Basic Engineering Plasticity - An Introduction with Engineering and Manufacturing Applications. Butterworth-Heinemann. ISBN 978-0-7506-8025-7. Wright, T. W. (2002). The Physics and Mathematics of Adiabatic Shear Bands. Cambridge, UK: Cambridge University Press. == External links == "Objectivity in classical continuum mechanics: Motions, Eulerian and Lagrangian functions; Deformation gradient; Lie derivatives; Velocity-addition formula, Coriolis; Objectivity" by Gilles Leborgne, April 7, 2021: "Part IV Velocity-addition formula and Objectivity"
Wikipedia/Continuum_mechanics
In mathematics and particularly in algebra, a system of equations (either linear or nonlinear) is called consistent if there is at least one set of values for the unknowns that satisfies each equation in the system—that is, when substituted into each of the equations, they make each equation hold true as an identity. In contrast, a linear or nonlinear equation system is called inconsistent if there is no set of values for the unknowns that satisfies all of the equations. If a system of equations is inconsistent, then the equations cannot be true together, leading to contradictory information, such as the false statements 2 = 1, or x 3 + y 3 = 5 {\displaystyle x^{3}+y^{3}=5} and x 3 + y 3 = 6 {\displaystyle x^{3}+y^{3}=6} (which implies 5 = 6). Both types of equation system, inconsistent and consistent, can be any of overdetermined (having more equations than unknowns), underdetermined (having fewer equations than unknowns), or exactly determined. == Simple examples == === Underdetermined and consistent === The system x + y + z = 3 , x + y + 2 z = 4 {\displaystyle {\begin{aligned}x+y+z&=3,\\x+y+2z&=4\end{aligned}}} has an infinite number of solutions, all of them having z = 1 (as can be seen by subtracting the first equation from the second), and all of them therefore having x + y = 2 (with x and y otherwise free). The nonlinear system x 2 + y 2 + z 2 = 10 , x 2 + y 2 = 5 {\displaystyle {\begin{aligned}x^{2}+y^{2}+z^{2}&=10,\\x^{2}+y^{2}&=5\end{aligned}}} has an infinitude of solutions, all involving z = ± 5 . {\displaystyle z=\pm {\sqrt {5}}.} Since each of these systems has more than one solution, it is an indeterminate system. === Underdetermined and inconsistent === The system x + y + z = 3 , x + y + z = 4 {\displaystyle {\begin{aligned}x+y+z&=3,\\x+y+z&=4\end{aligned}}} has no solutions, as can be seen by subtracting the first equation from the second to obtain the impossible 0 = 1. 
The nonlinear system x 2 + y 2 + z 2 = 17 , x 2 + y 2 + z 2 = 14 {\displaystyle {\begin{aligned}x^{2}+y^{2}+z^{2}&=17,\\x^{2}+y^{2}+z^{2}&=14\end{aligned}}} has no solutions, because if one equation is subtracted from the other we obtain the impossible 0 = 3. === Exactly determined and consistent === The system x + y = 3 , x + 2 y = 5 {\displaystyle {\begin{aligned}x+y&=3,\\x+2y&=5\end{aligned}}} has exactly one solution: x = 1, y = 2. The nonlinear system x + y = 1 , x 2 + y 2 = 1 {\displaystyle {\begin{aligned}x+y&=1,\\x^{2}+y^{2}&=1\end{aligned}}} has the two solutions (x, y) = (1, 0) and (x, y) = (0, 1), while x 3 + y 3 + z 3 = 10 , x 3 + 2 y 3 + z 3 = 12 , 3 x 3 + 5 y 3 + 3 z 3 = 34 {\displaystyle {\begin{aligned}x^{3}+y^{3}+z^{3}&=10,\\x^{3}+2y^{3}+z^{3}&=12,\\3x^{3}+5y^{3}+3z^{3}&=34\end{aligned}}} has an infinite number of solutions because the third equation is the first equation plus twice the second one and hence contains no independent information; thus any value of z can be chosen and values of x and y can be found to satisfy the first two (and hence the third) equations. === Exactly determined and inconsistent === The system x + y = 3 , 4 x + 4 y = 10 {\displaystyle {\begin{aligned}x+y&=3,\\4x+4y&=10\end{aligned}}} has no solutions; the inconsistency can be seen by multiplying the first equation by 4 and subtracting the second equation to obtain the impossible 0 = 2. Likewise, x 3 + y 3 + z 3 = 10 , x 3 + 2 y 3 + z 3 = 12 , 3 x 3 + 5 y 3 + 3 z 3 = 32 {\displaystyle {\begin{aligned}x^{3}+y^{3}+z^{3}&=10,\\x^{3}+2y^{3}+z^{3}&=12,\\3x^{3}+5y^{3}+3z^{3}&=32\end{aligned}}} is an inconsistent system because the first equation plus twice the second minus the third contains the contradiction 0 = 2. 
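The exactly determined linear cases above can be checked mechanically; a minimal numpy sketch, solving the consistent system directly and exposing the inconsistent one through matrix ranks:

```python
import numpy as np

# The consistent system  x + y = 3,  x + 2y = 5  from above:
A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 5.0])
assert np.allclose(np.linalg.solve(A, b), [1.0, 2.0])   # x = 1, y = 2

# The inconsistent system  x + y = 3,  4x + 4y = 10:
# its coefficient matrix has rank 1, but appending the constants as an
# extra column raises the rank to 2, so no solution exists.
C = np.array([[1.0, 1.0], [4.0, 4.0]])
d = np.array([3.0, 10.0])
assert np.linalg.matrix_rank(C) == 1
assert np.linalg.matrix_rank(np.column_stack([C, d])) == 2
```
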
=== Overdetermined and consistent === The system x + y = 3 , x + 2 y = 7 , 4 x + 6 y = 20 {\displaystyle {\begin{aligned}x+y&=3,\\x+2y&=7,\\4x+6y&=20\end{aligned}}} has a solution, x = –1, y = 4, because the first two equations do not contradict each other and the third equation is redundant (since it contains the same information as can be obtained from the first two equations by multiplying each through by 2 and summing them). The system x + 2 y = 7 , 3 x + 6 y = 21 , 7 x + 14 y = 49 {\displaystyle {\begin{aligned}x+2y&=7,\\3x+6y&=21,\\7x+14y&=49\end{aligned}}} has an infinitude of solutions since all three equations give the same information as each other (as can be seen by multiplying through the first equation by either 3 or 7). Any value of y is part of a solution, with the corresponding value of x being 7 – 2y. The nonlinear system x 2 − 1 = 0 , y 2 − 1 = 0 , ( x − 1 ) ( y − 1 ) = 0 {\displaystyle {\begin{aligned}x^{2}-1&=0,\\y^{2}-1&=0,\\(x-1)(y-1)&=0\end{aligned}}} has the three solutions (x, y) = (1, –1), (–1, 1), (1, 1). === Overdetermined and inconsistent === The system x + y = 3 , x + 2 y = 7 , 4 x + 6 y = 21 {\displaystyle {\begin{aligned}x+y&=3,\\x+2y&=7,\\4x+6y&=21\end{aligned}}} is inconsistent because the last equation contradicts the information embedded in the first two, as seen by multiplying each of the first two through by 2 and summing them. The system x 2 + y 2 = 1 , x 2 + 2 y 2 = 2 , 2 x 2 + 3 y 2 = 4 {\displaystyle {\begin{aligned}x^{2}+y^{2}&=1,\\x^{2}+2y^{2}&=2,\\2x^{2}+3y^{2}&=4\end{aligned}}} is inconsistent because the sum of the first two equations contradicts the third one. == Criteria for consistency == As can be seen from the above examples, consistency versus inconsistency is a different issue from comparing the numbers of equations and unknowns. 
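The overdetermined linear examples above can be distinguished by the standard rank test (a linear system is consistent exactly when its coefficient matrix and augmented matrix have the same rank); a short numpy sketch:

```python
import numpy as np

def consistent(A, b):
    """Rank test: consistent iff rank(A) == rank of the augmented matrix [A | b]."""
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

# x + y = 3,  x + 2y = 7,  4x + 6y = 20 or 21  from the text:
A = np.array([[1.0, 1.0], [1.0, 2.0], [4.0, 6.0]])
assert consistent(A, np.array([3.0, 7.0, 20.0]))        # redundant third equation
assert not consistent(A, np.array([3.0, 7.0, 21.0]))    # contradictory third equation
```
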
=== Linear systems === A linear system is consistent if and only if its coefficient matrix has the same rank as does its augmented matrix (the coefficient matrix with an extra column added, that column being the column vector of constants). === Nonlinear systems === == References ==
Wikipedia/Consistent_equations
In algebra, the center of a ring R is the subring consisting of the elements x such that xy = yx for all elements y in R. It is a commutative ring and is denoted as Z(R); 'Z' stands for the German word Zentrum, meaning "center". If R is a ring, then R is an associative algebra over its center. Conversely, if R is an associative algebra over a commutative subring S, then S is a subring of the center of R, and if S happens to be the center of R, then the algebra R is called a central algebra. == Examples == The center of a commutative ring R is R itself. The center of a skew-field is a field. The center of the (full) matrix ring with entries in a commutative ring R consists of R-scalar multiples of the identity matrix. Let F be a field extension of a field k, and R an algebra over k. Then Z(R ⊗k F) = Z(R) ⊗k F. The center of the universal enveloping algebra of a Lie algebra plays an important role in the representation theory of Lie algebras. For example, a Casimir element is an element of such a center that is used to analyze Lie algebra representations. See also: Harish-Chandra isomorphism. The center of a simple algebra is a field. == See also == Center of a group Central simple algebra Morita equivalence == Notes == == References ==
Wikipedia/Center_(ring_theory)
In mathematics, especially functional analysis, a Banach algebra, named after Stefan Banach, is an associative algebra A {\displaystyle A} over the real or complex numbers (or over a non-Archimedean complete normed field) that at the same time is also a Banach space, that is, a normed space that is complete in the metric induced by the norm. The norm is required to satisfy ‖ x y ‖ ≤ ‖ x ‖ ‖ y ‖ for all x , y ∈ A . {\displaystyle \|x\,y\|\ \leq \|x\|\,\|y\|\quad {\text{ for all }}x,y\in A.} This ensures that the multiplication operation is continuous with respect to the metric topology. A Banach algebra is called unital if it has an identity element for the multiplication whose norm is 1 , {\displaystyle 1,} and commutative if its multiplication is commutative. Any Banach algebra A {\displaystyle A} (whether it is unital or not) can be embedded isometrically into a unital Banach algebra A e {\displaystyle A_{e}} so as to form a closed ideal of A e {\displaystyle A_{e}} . Often one assumes a priori that the algebra under consideration is unital because one can develop much of the theory by considering A e {\displaystyle A_{e}} and then applying the outcome in the original algebra. However, this is not the case all the time. For example, one cannot define all the trigonometric functions in a Banach algebra without identity. The theory of real Banach algebras can be very different from the theory of complex Banach algebras. For example, the spectrum of an element of a nontrivial complex Banach algebra can never be empty, whereas in a real Banach algebra it could be empty for some elements. Banach algebras can also be defined over fields of p {\displaystyle p} -adic numbers. This is part of p {\displaystyle p} -adic analysis. == Examples == The prototypical example of a Banach algebra is C 0 ( X ) {\displaystyle C_{0}(X)} , the space of (complex-valued) continuous functions, defined on a locally compact Hausdorff space X {\displaystyle X} , that vanish at infinity. 
C 0 ( X ) {\displaystyle C_{0}(X)} is unital if and only if X {\displaystyle X} is compact. The complex conjugation being an involution, C 0 ( X ) {\displaystyle C_{0}(X)} is in fact a C*-algebra. More generally, every C*-algebra is a Banach algebra by definition. The set of real (or complex) numbers is a Banach algebra with norm given by the absolute value. The set of all real or complex n {\displaystyle n} -by- n {\displaystyle n} matrices becomes a unital Banach algebra if we equip it with a sub-multiplicative matrix norm. Take the Banach space R n {\displaystyle \mathbb {R} ^{n}} (or C n {\displaystyle \mathbb {C} ^{n}} ) with norm ‖ x ‖ = max | x i | {\displaystyle \|x\|=\max _{}|x_{i}|} and define multiplication componentwise: ( x 1 , … , x n ) ( y 1 , … , y n ) = ( x 1 y 1 , … , x n y n ) . {\displaystyle \left(x_{1},\ldots ,x_{n}\right)\left(y_{1},\ldots ,y_{n}\right)=\left(x_{1}y_{1},\ldots ,x_{n}y_{n}\right).} The quaternions form a 4-dimensional real Banach algebra, with the norm being given by the absolute value of quaternions. The algebra of all bounded real- or complex-valued functions defined on some set (with pointwise multiplication and the supremum norm) is a unital Banach algebra. The algebra of all bounded continuous real- or complex-valued functions on some locally compact space (again with pointwise operations and supremum norm) is a Banach algebra. The algebra of all continuous linear operators on a Banach space E {\displaystyle E} (with functional composition as multiplication and the operator norm as norm) is a unital Banach algebra. The set of all compact operators on E {\displaystyle E} is a Banach algebra and closed ideal. It is without identity if dim ⁡ E = ∞ . 
{\displaystyle \dim E=\infty .} If G {\displaystyle G} is a locally compact Hausdorff topological group and μ {\displaystyle \mu } is its Haar measure, then the Banach space L 1 ( G ) {\displaystyle L^{1}(G)} of all μ {\displaystyle \mu } -integrable functions on G {\displaystyle G} becomes a Banach algebra under the convolution x y ( g ) = ∫ x ( h ) y ( h − 1 g ) d μ ( h ) {\displaystyle xy(g)=\int x(h)y\left(h^{-1}g\right)d\mu (h)} for x , y ∈ L 1 ( G ) . {\displaystyle x,y\in L^{1}(G).} Uniform algebra: A Banach algebra that is a subalgebra of the complex algebra C ( X ) {\displaystyle C(X)} with the supremum norm and that contains the constants and separates the points of X {\displaystyle X} (which must be a compact Hausdorff space). Natural Banach function algebra: A uniform algebra all of whose characters are evaluations at points of X . {\displaystyle X.} C*-algebra: A Banach algebra that is a closed *-subalgebra of the algebra of bounded operators on some Hilbert space. Measure algebra: A Banach algebra consisting of all Radon measures on some locally compact group, where the product of two measures is given by convolution of measures. The algebra of the quaternions H {\displaystyle \mathbb {H} } is a real Banach algebra, but it is not a complex algebra (and hence not a complex Banach algebra) for the simple reason that the center of the quaternions is the real numbers, which cannot contain a copy of the complex numbers. An affinoid algebra is a certain kind of Banach algebra over a nonarchimedean field. Affinoid algebras are the basic building blocks in rigid analytic geometry. == Properties == Several elementary functions that are defined via power series may be defined in any unital Banach algebra; examples include the exponential function and the trigonometric functions, and more generally any entire function. (In particular, the exponential map can be used to define abstract index groups.) 
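As a concrete instance, in the unital Banach algebra of 2 × 2 real matrices with the operator norm, the exponential power series converges for every element; a minimal numpy sketch (the particular matrix is an illustrative choice):

```python
import numpy as np

# exp(A) = sum_k A^k / k!  in the Banach algebra of 2x2 matrices.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # skew-symmetric rotation generator

term = np.eye(2)
expA = np.eye(2)
for k in range(1, 30):                      # partial sums of the power series
    term = term @ A / k
    expA = expA + term

# For this generator, exp(A) is the rotation matrix through one radian
# (in the orientation determined by A).
expected = np.array([[np.cos(1.0), np.sin(1.0)],
                     [-np.sin(1.0), np.cos(1.0)]])
assert np.allclose(expA, expected)
```

The same partial-sum construction defines exp, sin, cos, and any other entire function in an arbitrary unital Banach algebra.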
The formula for the geometric series remains valid in general unital Banach algebras. The binomial theorem also holds for two commuting elements of a Banach algebra. The set of invertible elements in any unital Banach algebra is an open set, and the inversion operation on this set is continuous (and hence is a homeomorphism), so that it forms a topological group under multiplication. If a Banach algebra has unit 1 , {\displaystyle \mathbf {1} ,} then 1 {\displaystyle \mathbf {1} } cannot be a commutator; that is, x y − y x ≠ 1 {\displaystyle xy-yx\neq \mathbf {1} } for any x , y ∈ A . {\displaystyle x,y\in A.} This is because x y {\displaystyle xy} and y x {\displaystyle yx} have the same spectrum except possibly 0. {\displaystyle 0.} The various algebras of functions given in the examples above have very different properties from standard examples of algebras such as the reals. For example: Every real Banach algebra that is a division algebra is isomorphic to the reals, the complexes, or the quaternions. Hence, the only complex Banach algebra that is a division algebra is the complexes. (This is known as the Gelfand–Mazur theorem.) Every unital real Banach algebra with no zero divisors, and in which every principal ideal is closed, is isomorphic to the reals, the complexes, or the quaternions. Every commutative real unital Noetherian Banach algebra with no zero divisors is isomorphic to the real or complex numbers. Every commutative real unital Noetherian Banach algebra (possibly having zero divisors) is finite-dimensional. Permanently singular elements in Banach algebras are topological divisors of zero; that is, considering extensions B {\displaystyle B} of a Banach algebra A {\displaystyle A} , some elements that are singular in the given algebra A {\displaystyle A} have a multiplicative inverse element in a Banach algebra extension B . 
{\displaystyle B.} Topological divisors of zero in A {\displaystyle A} are permanently singular in any Banach extension B {\displaystyle B} of A . {\displaystyle A.} == Spectral theory == Unital Banach algebras over the complex field provide a general setting to develop spectral theory. The spectrum of an element x ∈ A , {\displaystyle x\in A,} denoted by σ ( x ) {\displaystyle \sigma (x)} , consists of all those complex scalars λ {\displaystyle \lambda } such that x − λ 1 {\displaystyle x-\lambda \mathbf {1} } is not invertible in A . {\displaystyle A.} The spectrum of any element x {\displaystyle x} is a closed subset of the closed disc in C {\displaystyle \mathbb {C} } with radius ‖ x ‖ {\displaystyle \|x\|} and center 0 , {\displaystyle 0,} and thus is compact. Moreover, the spectrum σ ( x ) {\displaystyle \sigma (x)} of an element x {\displaystyle x} is non-empty and satisfies the spectral radius formula: sup { | λ | : λ ∈ σ ( x ) } = lim n → ∞ ‖ x n ‖ 1 / n . {\displaystyle \sup\{|\lambda |:\lambda \in \sigma (x)\}=\lim _{n\to \infty }\|x^{n}\|^{1/n}.} Given x ∈ A , {\displaystyle x\in A,} the holomorphic functional calculus allows one to define f ( x ) ∈ A {\displaystyle f(x)\in A} for any function f {\displaystyle f} holomorphic in a neighborhood of σ ( x ) . {\displaystyle \sigma (x).} Furthermore, the spectral mapping theorem holds: σ ( f ( x ) ) = f ( σ ( x ) ) . {\displaystyle \sigma (f(x))=f(\sigma (x)).} When the Banach algebra A {\displaystyle A} is the algebra L ( X ) {\displaystyle L(X)} of bounded linear operators on a complex Banach space X {\displaystyle X} (for example, the algebra of square matrices), the notion of the spectrum in A {\displaystyle A} coincides with the usual one in operator theory. For f ∈ C ( X ) {\displaystyle f\in C(X)} (with a compact Hausdorff space X {\displaystyle X} ), one sees that: σ ( f ) = { f ( t ) : t ∈ X } . 
{\displaystyle \sigma (f)=\{f(t):t\in X\}.} The norm of a normal element x {\displaystyle x} of a C*-algebra coincides with its spectral radius. This generalizes an analogous fact for normal operators. Let A {\displaystyle A} be a complex unital Banach algebra in which every non-zero element x {\displaystyle x} is invertible (a division algebra). For every a ∈ A , {\displaystyle a\in A,} there is λ ∈ C {\displaystyle \lambda \in \mathbb {C} } such that a − λ 1 {\displaystyle a-\lambda \mathbf {1} } is not invertible (because the spectrum of a {\displaystyle a} is not empty) hence a = λ 1 : {\displaystyle a=\lambda \mathbf {1} :} this algebra A {\displaystyle A} is naturally isomorphic to C {\displaystyle \mathbb {C} } (the complex case of the Gelfand–Mazur theorem). == Ideals and characters == Let A {\displaystyle A} be a unital commutative Banach algebra over C . {\displaystyle \mathbb {C} .} Since A {\displaystyle A} is then a commutative ring with unit, every non-invertible element of A {\displaystyle A} belongs to some maximal ideal of A . {\displaystyle A.} Since a maximal ideal m {\displaystyle {\mathfrak {m}}} in A {\displaystyle A} is closed, A / m {\displaystyle A/{\mathfrak {m}}} is a Banach algebra that is a field, and it follows from the Gelfand–Mazur theorem that there is a bijection between the set of all maximal ideals of A {\displaystyle A} and the set Δ ( A ) {\displaystyle \Delta (A)} of all nonzero homomorphisms from A {\displaystyle A} to C . {\displaystyle \mathbb {C} .} The set Δ ( A ) {\displaystyle \Delta (A)} is called the "structure space" or "character space" of A , {\displaystyle A,} and its members "characters". A character χ {\displaystyle \chi } is a linear functional on A {\displaystyle A} that is at the same time multiplicative, χ ( a b ) = χ ( a ) χ ( b ) , {\displaystyle \chi (ab)=\chi (a)\chi (b),} and satisfies χ ( 1 ) = 1. 
{\displaystyle \chi (\mathbf {1} )=1.} Every character is automatically continuous from A {\displaystyle A} to C , {\displaystyle \mathbb {C} ,} since the kernel of a character is a maximal ideal, which is closed. Moreover, the norm (that is, operator norm) of a character is one. Equipped with the topology of pointwise convergence on A {\displaystyle A} (that is, the topology induced by the weak-* topology of A ∗ {\displaystyle A^{*}} ), the character space, Δ ( A ) , {\displaystyle \Delta (A),} is a Hausdorff compact space. For any x ∈ A , {\displaystyle x\in A,} σ ( x ) = σ ( x ^ ) {\displaystyle \sigma (x)=\sigma ({\hat {x}})} where x ^ {\displaystyle {\hat {x}}} is the Gelfand representation of x {\displaystyle x} defined as follows: x ^ {\displaystyle {\hat {x}}} is the continuous function from Δ ( A ) {\displaystyle \Delta (A)} to C {\displaystyle \mathbb {C} } given by x ^ ( χ ) = χ ( x ) . {\displaystyle {\hat {x}}(\chi )=\chi (x).} The spectrum of x ^ , {\displaystyle {\hat {x}},} in the formula above, is the spectrum as element of the algebra C ( Δ ( A ) ) {\displaystyle C(\Delta (A))} of complex continuous functions on the compact space Δ ( A ) . {\displaystyle \Delta (A).} Explicitly, σ ( x ^ ) = { χ ( x ) : χ ∈ Δ ( A ) } . {\displaystyle \sigma ({\hat {x}})=\{\chi (x):\chi \in \Delta (A)\}.} As an algebra, a unital commutative Banach algebra is semisimple (that is, its Jacobson radical is zero) if and only if its Gelfand representation has trivial kernel. An important example of such an algebra is a commutative C*-algebra. In fact, when A {\displaystyle A} is a commutative unital C*-algebra, the Gelfand representation is then an isometric *-isomorphism between A {\displaystyle A} and C ( Δ ( A ) ) . 
{\displaystyle C(\Delta (A)).} == Banach *-algebras == A Banach *-algebra A {\displaystyle A} is a Banach algebra over the field of complex numbers, together with a map ∗ : A → A {\displaystyle {}^{*}:A\to A} that has the following properties: ( x ∗ ) ∗ = x {\displaystyle \left(x^{*}\right)^{*}=x} for all x ∈ A {\displaystyle x\in A} (so the map is an involution). ( x + y ) ∗ = x ∗ + y ∗ {\displaystyle (x+y)^{*}=x^{*}+y^{*}} for all x , y ∈ A . {\displaystyle x,y\in A.} ( λ x ) ∗ = λ ¯ x ∗ {\displaystyle (\lambda x)^{*}={\bar {\lambda }}x^{*}} for every λ ∈ C {\displaystyle \lambda \in \mathbb {C} } and every x ∈ A ; {\displaystyle x\in A;} here, λ ¯ {\displaystyle {\bar {\lambda }}} denotes the complex conjugate of λ . {\displaystyle \lambda .} ( x y ) ∗ = y ∗ x ∗ {\displaystyle (xy)^{*}=y^{*}x^{*}} for all x , y ∈ A . {\displaystyle x,y\in A.} In other words, a Banach *-algebra is a Banach algebra over C {\displaystyle \mathbb {C} } that is also a *-algebra. In most natural examples, one also has that the involution is isometric, that is, ‖ x ∗ ‖ = ‖ x ‖ for all x ∈ A . {\displaystyle \|x^{*}\|=\|x\|\quad {\text{ for all }}x\in A.} Some authors include this isometric property in the definition of a Banach *-algebra. A Banach *-algebra satisfying ‖ x ∗ x ‖ = ‖ x ∗ ‖ ‖ x ‖ {\displaystyle \|x^{*}x\|=\|x^{*}\|\|x\|} is a C*-algebra. == See also == Approximate identity – net in a normed algebra that acts as a substitute for an identity elementPages displaying wikidata descriptions as a fallback Kaplansky's conjecture – Numerous conjectures by mathematician Irving KaplanskyPages displaying short descriptions of redirect targets Operator algebra – Branch of functional analysis Shilov boundary == Notes == == References ==
Wikipedia/Banach_algebra