In mathematics, the Yoneda lemma is a fundamental result in category theory. It is an abstract result on functors of the type "morphisms into a fixed object". It is a vast generalisation of Cayley's theorem from group theory (viewing a group as a miniature category with just one object and only isomorphisms). It also generalizes the information-preserving relation between a term and its continuation-passing-style transformation from programming language theory. It allows the embedding of any locally small category into a category of functors (contravariant set-valued functors) defined on that category. It also clarifies how the embedded category, of representable functors and their natural transformations, relates to the other objects in the larger functor category. It is an important tool that underlies several modern developments in algebraic geometry and representation theory. It is named after Nobuo Yoneda.

== Generalities ==

The Yoneda lemma suggests that instead of studying the locally small category $\mathcal{C}$, one should study the category of all functors of $\mathcal{C}$ into $\mathbf{Set}$ (the category of sets with functions as morphisms). $\mathbf{Set}$ is a category we think we understand well, and a functor of $\mathcal{C}$ into $\mathbf{Set}$ can be seen as a "representation" of $\mathcal{C}$ in terms of known structures. The original category $\mathcal{C}$ is contained in this functor category, but new objects appear in the functor category which were absent and "hidden" in $\mathcal{C}$. Treating these new objects just like the old ones often unifies and simplifies the theory.

This approach is akin to (and in fact generalizes) the common method of studying a ring by investigating the modules over that ring. The ring takes the place of the category $\mathcal{C}$, and the category of modules over the ring is a category of functors defined on $\mathcal{C}$.

== Formal statement ==

Yoneda's lemma concerns functors from a fixed category $\mathcal{C}$ to the category of sets, $\mathbf{Set}$. If $\mathcal{C}$ is a locally small category (i.e. the hom-sets are actual sets and not proper classes), then each object $A$ of $\mathcal{C}$ gives rise to a functor to $\mathbf{Set}$ called a hom-functor, denoted

$$h_A(-) \equiv \mathrm{Hom}(A,-).$$

The (covariant) hom-functor $h_A$ sends $X \in \mathcal{C}$ to the set of morphisms $\mathrm{Hom}(A,X)$ and sends a morphism $f \colon X \to Y$ (where $Y \in \mathcal{C}$) to the map $f \circ -$ (composition with $f$ on the left) that sends a morphism $g$ in $\mathrm{Hom}(A,X)$ to the morphism $f \circ g$ in $\mathrm{Hom}(A,Y)$.
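The hom-functor is concrete enough to animate in a few lines of code. Below is a minimal sketch under an assumed toy encoding (objects as finite Python sets, morphisms as dicts); the names `all_maps` and `h_A_on_morphism` are illustrative, not any standard library API.

```python
# A toy model of the covariant hom-functor h_A = Hom(A, -): objects are
# finite sets, morphisms are dicts mapping each element of the source to
# an element of the target. Encoding and names are illustrative only.
from itertools import product

def compose(f, g):
    """Composition f . g of dict-encoded functions (apply g, then f)."""
    return {x: f[g[x]] for x in g}

def all_maps(A, X):
    """h_A(X) = Hom(A, X): every function from A to X."""
    A, X = sorted(A), sorted(X)
    return [dict(zip(A, values)) for values in product(X, repeat=len(A))]

def h_A_on_morphism(f):
    """h_A(f) = (f . -): post-composition with f, Hom(A, X) -> Hom(A, Y)."""
    return lambda g: compose(f, g)

A, X, Y = {0, 1}, {"a", "b"}, {"u"}
f = {"a": "u", "b": "u"}                     # a morphism f : X -> Y
for g in all_maps(A, X):                     # each g : A -> X
    print(g, "->", h_A_on_morphism(f)(g))    # f . g : A -> Y
```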
That is,

$$h_A(f) = \mathrm{Hom}(A,f), \quad\text{or}\quad h_A(f)(g) = f \circ g.$$

Yoneda's lemma says that for every functor $F \colon \mathcal{C} \to \mathbf{Set}$ and every object $A$ of $\mathcal{C}$, the natural transformations from $h_A$ to $F$ are in one-to-one correspondence with the elements of $F(A)$:

$$\mathrm{Nat}(h_A, F) \cong F(A).$$

Here the notation $\mathbf{Set}^{\mathcal{C}}$ denotes the category of functors from $\mathcal{C}$ to $\mathbf{Set}$.

Given a natural transformation $\Phi$ from $h_A$ to $F$, the corresponding element of $F(A)$ is $u = \Phi_A(\mathrm{id}_A)$; and given an element $u$ of $F(A)$, the corresponding natural transformation is given by $\Phi_X(f) = F(f)(u)$, which assigns to a morphism $f \colon A \to X$ a value in $F(X)$.

=== Contravariant version ===

There is a contravariant version of Yoneda's lemma, which concerns contravariant functors from $\mathcal{C}$ to $\mathbf{Set}$. This version involves the contravariant hom-functor

$$h^A(-) \equiv \mathrm{Hom}(-,A),$$

which sends $X$ to the hom-set $\mathrm{Hom}(X,A)$. Given an arbitrary contravariant functor $G$ from $\mathcal{C}$ to $\mathbf{Set}$, Yoneda's lemma asserts that

$$\mathrm{Nat}(h^A, G) \cong G(A).$$

=== Naturality ===

The bijections provided in the (covariant) Yoneda lemma (for each $A$ and $F$) are the components of a natural isomorphism between two functors from $\mathcal{C} \times \mathbf{Set}^{\mathcal{C}}$ to $\mathbf{Set}$. One of the two functors is the evaluation functor

$$-(-) \colon \mathcal{C} \times \mathbf{Set}^{\mathcal{C}} \to \mathbf{Set}, \qquad -(-) \colon (A,F) \mapsto F(A),$$

which sends a pair $(f, \Phi)$ of a morphism $f \colon A \to B$ in $\mathcal{C}$ and a natural transformation $\Phi \colon F \to G$ to the map

$$\Phi_B \circ F(f) = G(f) \circ \Phi_A \colon F(A) \to G(B).$$

This is enough to determine the other functor, since we know what the natural isomorphism is.
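Both directions of the bijection are short enough to write down directly. A minimal sketch, assuming a functor is given by its action on morphisms (`F_mor(f)` returning the function $F(f)$) and reusing the dict encoding of the earlier sketch; all names are illustrative.

```python
# The Yoneda bijection Nat(h_A, F) ~ F(A), in both directions.
# F_mor(f) must return the function F(f); id_A is the identity morphism on A.

def nat_from_element(F_mor, u):
    """u in F(A)  |->  Phi with components Phi_X(f) = F(f)(u)."""
    return lambda f: F_mor(f)(u)      # f : A -> X; the result lies in F(X)

def element_from_nat(Phi, id_A):
    """Phi : h_A -> F  |->  u = Phi_A(id_A) in F(A)."""
    return Phi(id_A)

# Round trip with F the identity functor on finite sets (F(f) applies f):
F_mor = lambda f: (lambda u: f[u])
id_A = {0: 0, 1: 1}                   # identity on A = {0, 1}
u = 1                                 # an element of F(A) = A
Phi = nat_from_element(F_mor, u)
assert element_from_nat(Phi, id_A) == u
print(Phi({0: "a", 1: "b"}))          # Phi_X at f : A -> X gives "b" in F(X)
```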
Under the second functor

$$\mathrm{Nat}(\mathrm{Hom}(-,-),-) \colon \mathcal{C} \times \mathbf{Set}^{\mathcal{C}} \to \mathbf{Set}, \qquad (A,F) \mapsto \mathrm{Nat}(\mathrm{Hom}(A,-),F),$$

the image of a pair $(f, \Phi)$ is the map

$$\mathrm{Nat}(\mathrm{Hom}(f,-),\Phi) = \mathrm{Nat}(\mathrm{Hom}(B,-),\Phi) \circ \mathrm{Nat}(\mathrm{Hom}(f,-),F) = \mathrm{Nat}(\mathrm{Hom}(f,-),G) \circ \mathrm{Nat}(\mathrm{Hom}(A,-),\Phi)$$

that sends a natural transformation $\Psi \colon \mathrm{Hom}(A,-) \to F$ to the natural transformation $\Phi \circ \Psi \circ \mathrm{Hom}(f,-) \colon \mathrm{Hom}(B,-) \to G$, whose components are

$$(\Phi \circ \Psi \circ \mathrm{Hom}(f,-))_C(g) = (\Phi \circ \Psi)_C(g \circ f) \qquad (g \colon B \to C).$$

=== Naming conventions ===

The use of $h_A$ for the covariant hom-functor and $h^A$ for the contravariant hom-functor is not completely standard. Many texts and articles either use the opposite convention or completely unrelated symbols for these two functors. However, most modern algebraic geometry texts, starting with Alexander Grothendieck's foundational EGA, use the convention in this article. The mnemonic "falling into something" can be helpful in remembering that $h_A$ is the covariant hom-functor: when the letter $A$ is falling (i.e. a subscript), $h_A$ assigns to an object $X$ the morphisms from $A$ into $X$.

=== Proof ===

Since $\Phi$ is a natural transformation, the naturality square for each morphism $f \colon A \to X$ commutes. This shows that the natural transformation $\Phi$ is completely determined by $u = \Phi_A(\mathrm{id}_A)$, since for each morphism $f \colon A \to X$ one has

$$\Phi_X(f) = (Ff)u.$$

Moreover, any element $u \in F(A)$ defines a natural transformation in this way. The proof in the contravariant case is completely analogous.

=== The Yoneda embedding ===

An important special case of Yoneda's lemma is when the functor $F$ from $\mathcal{C}$ to $\mathbf{Set}$ is another hom-functor $h_B$. In this case, the covariant version of Yoneda's lemma states that

$$\mathrm{Nat}(h_A, h_B) \cong \mathrm{Hom}(B,A).$$

That is, natural transformations between hom-functors are in one-to-one correspondence with morphisms (in the reverse direction) between the associated objects. Given a morphism $f \colon B \to A$, the associated natural transformation is denoted $\mathrm{Hom}(f,-)$.
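In the dict encoding used above, $\mathrm{Hom}(f,-)$ is nothing but precomposition; a two-line sketch (names illustrative):

```python
# The natural transformation Hom(f, -) : h_A -> h_B attached to f : B -> A
# acts component-wise by precomposition, g |-> g . f.

def compose(f, g):
    return {x: f[g[x]] for x in g}

def hom_f(f):
    """f : B -> A  |->  components Hom(A, X) -> Hom(B, X), g |-> g . f."""
    return lambda g: compose(g, f)

f = {0: "a", 1: "a"}       # f : B -> A with B = {0, 1}, A = {"a", "b"}
g = {"a": 10, "b": 20}     # g : A -> X in Hom(A, X)
print(hom_f(f)(g))         # g . f : B -> X, here {0: 10, 1: 10}
```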
Mapping each object $A$ in $\mathcal{C}$ to its associated hom-functor $h_A = \mathrm{Hom}(A,-)$ and each morphism $f \colon B \to A$ to the corresponding natural transformation $\mathrm{Hom}(f,-)$ determines a contravariant functor $h_\bullet$ from $\mathcal{C}$ to $\mathbf{Set}^{\mathcal{C}}$, the functor category of all (covariant) functors from $\mathcal{C}$ to $\mathbf{Set}$. One can interpret $h_\bullet$ as a covariant functor:

$$h_\bullet \colon \mathcal{C}^{\text{op}} \to \mathbf{Set}^{\mathcal{C}}.$$

The meaning of Yoneda's lemma in this setting is that the functor $h_\bullet$ is fully faithful, and therefore gives an embedding of $\mathcal{C}^{\mathrm{op}}$ in the category of functors to $\mathbf{Set}$. The collection of all functors $\{h_A \mid A \in \mathcal{C}\}$ is a subcategory of $\mathbf{Set}^{\mathcal{C}}$. Therefore, the Yoneda embedding implies that the category $\mathcal{C}^{\mathrm{op}}$ is isomorphic to the category $\{h_A \mid A \in \mathcal{C}\}$.

The contravariant version of Yoneda's lemma states that

$$\mathrm{Nat}(h^A, h^B) \cong \mathrm{Hom}(A,B).$$

Therefore, $h^\bullet$ gives rise to a covariant functor from $\mathcal{C}$ to the category of contravariant functors to $\mathbf{Set}$:

$$h^\bullet \colon \mathcal{C} \to \mathbf{Set}^{\mathcal{C}^{\mathrm{op}}}.$$

Yoneda's lemma then states that any locally small category $\mathcal{C}$ can be embedded in the category of contravariant functors from $\mathcal{C}$ to $\mathbf{Set}$ via $h^\bullet$. This is called the Yoneda embedding. The Yoneda embedding is sometimes denoted by よ, the hiragana yo.

=== Representable functor ===

The Yoneda embedding essentially states that for every (locally small) category, objects in that category can be represented by presheaves, in a full and faithful manner. That is,

$$\mathrm{Nat}(h^A, P) \cong P(A)$$

for a presheaf $P$. Many common categories are, in fact, categories of presheaves and, on closer inspection, prove to be categories of sheaves; since such examples are commonly topological in nature, they can be seen to be topoi in general. The Yoneda lemma provides a point of leverage by which the topological structure of a category can be studied and understood.

=== In terms of (co)end calculus ===

Given two categories $\mathbf{C}$ and $\mathbf{D}$ with two functors $F, G \colon \mathbf{C} \to \mathbf{D}$, natural transformations between them can be written as the following end.
$$\mathrm{Nat}(F,G) = \int_{c \in \mathbf{C}} \mathrm{Hom}_{\mathbf{D}}(Fc, Gc)$$

For any functors $K \colon \mathbf{C}^{\mathrm{op}} \to \mathbf{Set}$ and $H \colon \mathbf{C} \to \mathbf{Set}$, the following formulas are all formulations of the Yoneda lemma:

$$K \cong \int^{c \in \mathbf{C}} Kc \times \mathrm{Hom}_{\mathbf{C}}(-,c), \qquad K \cong \int_{c \in \mathbf{C}} (Kc)^{\mathrm{Hom}_{\mathbf{C}}(c,-)},$$

$$H \cong \int^{c \in \mathbf{C}} Hc \times \mathrm{Hom}_{\mathbf{C}}(c,-), \qquad H \cong \int_{c \in \mathbf{C}} (Hc)^{\mathrm{Hom}_{\mathbf{C}}(-,c)}.$$

== Preadditive categories, rings and modules ==

A preadditive category is a category where the morphism sets form abelian groups and the composition of morphisms is bilinear; examples are categories of abelian groups or modules. In a preadditive category, there is both a "multiplication" and an "addition" of morphisms, which is why preadditive categories are viewed as generalizations of rings. Rings are preadditive categories with one object.

The Yoneda lemma remains true for preadditive categories if we choose as our extension the category of additive contravariant functors from the original category into the category of abelian groups; these are functors which are compatible with the addition of morphisms and should be thought of as forming a module category over the original category. The Yoneda lemma then yields the natural procedure to enlarge a preadditive category so that the enlarged version remains preadditive — in fact, the enlarged version is an abelian category, a much more powerful condition. In the case of a ring $R$, the extended category is the category of all right modules over $R$, and the statement of the Yoneda lemma reduces to the well-known isomorphism

$$M \cong \mathrm{Hom}_R(R,M) \quad \text{for all right modules } M \text{ over } R.$$

== Relationship to Cayley's theorem ==

As stated above, the Yoneda lemma may be considered as a vast generalization of Cayley's theorem from group theory. To see this, let $\mathcal{C}$ be a category with a single object $*$ such that every morphism is an isomorphism (i.e. a groupoid with one object). Then $G = \mathrm{Hom}_{\mathcal{C}}(*,*)$ forms a group under the operation of composition, and any group can be realized as a category in this way.

In this context, a covariant functor $\mathcal{C} \to \mathbf{Set}$ consists of a set $X$ and a group homomorphism $G \to \mathrm{Perm}(X)$, where $\mathrm{Perm}(X)$ is the group of permutations of $X$; in other words, $X$ is a $G$-set. A natural transformation between such functors is the same thing as an equivariant map between $G$-sets: a set function $\alpha \colon X \to Y$ with the property that $\alpha(g \cdot x) = g \cdot \alpha(x)$ for all $g$ in $G$ and $x$ in $X$. (On the left side of this equation, the $\cdot$ denotes the action of $G$ on $X$, and on the right side the action on $Y$.)

Now the covariant hom-functor $\mathrm{Hom}_{\mathcal{C}}(*,-)$ corresponds to the action of $G$ on itself by left-multiplication (the contravariant version corresponds to right-multiplication). The Yoneda lemma with $F = \mathrm{Hom}_{\mathcal{C}}(*,-)$ states that

$$\mathrm{Nat}(\mathrm{Hom}_{\mathcal{C}}(*,-), \mathrm{Hom}_{\mathcal{C}}(*,-)) \cong \mathrm{Hom}_{\mathcal{C}}(*,*),$$

that is, the equivariant maps from this $G$-set to itself are in bijection with $G$. But it is easy to see that (1) these maps form a group under composition, which is a subgroup of $\mathrm{Perm}(G)$, and (2) the function which gives the bijection is a group homomorphism. (Going in the reverse direction, it associates to every $g$ in $G$ the equivariant map of right-multiplication by $g$.) Thus $G$ is isomorphic to a subgroup of $\mathrm{Perm}(G)$, which is the statement of Cayley's theorem.
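This bijection can be checked by brute force for a small group. A sketch, taking $G = \mathbb{Z}/4$ under addition (an arbitrary concrete choice): enumerating all set maps $G \to G$ and filtering by equivariance leaves exactly $|G|$ maps, namely the right-multiplications.

```python
# Cayley via Yoneda, numerically: the equivariant self-maps of G acting on
# itself by left-multiplication are exactly the right-multiplications.
from itertools import product

n = 4
G = list(range(n))
op = lambda a, b: (a + b) % n          # the group Z/4 (a concrete choice)

def is_equivariant(alpha):
    """alpha(g * x) == g * alpha(x) for the left-multiplication action."""
    return all(alpha[op(g, x)] == op(g, alpha[x]) for g in G for x in G)

maps = [dict(zip(G, images)) for images in product(G, repeat=n)]
equivariant = [a for a in maps if is_equivariant(a)]

print(len(equivariant))                                            # 4 = |G|
print(all(a == {g: op(g, a[0]) for g in G} for a in equivariant))  # True
```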
== History ==

Yoshiki Kinoshita stated in 1996 that the term "Yoneda lemma" was coined by Saunders Mac Lane following an interview he had with Yoneda in the Gare du Nord station.

== See also ==

Representation theorem
Completions in category theory
2-Yoneda lemma

== References ==

Yoneda lemma at the nLab

== External links ==

Mizar system proof: Wojciechowski, M. (1997). "Yoneda Embedding". Formalized Mathematics Journal. 6 (3): 377–380. CiteSeerX 10.1.1.73.7127.
Beurier, Erwan; Pastor, Dominique (July 2019). "A crash course on Category Theory".
Wikipedia/Yoneda_lemma
In mathematics, an n-group, or n-dimensional higher group, is a special kind of n-category that generalises the concept of group to higher-dimensional algebra. Here, $n$ may be any natural number or infinity. The thesis of Alexander Grothendieck's student Hoàng Xuân Sính was an in-depth study of 2-groups under the moniker "gr-category". The general definition of $n$-group is a matter of ongoing research. However, it is expected that every topological space will have a homotopy $n$-group at every point, which will encapsulate the Postnikov tower of the space up to the homotopy group $\pi_n$, or the entire Postnikov tower for $n = \infty$.

== Examples ==

=== Eilenberg–MacLane spaces ===

One of the principal examples of higher groups comes from the homotopy types of Eilenberg–MacLane spaces $K(A,n)$, since they are the fundamental building blocks for constructing higher groups, and homotopy types in general. For instance, every group $G$ can be turned into an Eilenberg–MacLane space $K(G,1)$ through a simplicial construction, and it behaves functorially. This construction gives an equivalence between groups and 1-groups. Note that some authors write $K(G,1)$ as $BG$, and for an abelian group $A$, $K(A,n)$ is written as $B^nA$.

=== 2-groups ===

The definition and many properties of 2-groups are already known. 2-groups can be described using crossed modules and their classifying spaces. Essentially, these are given by a quadruple $(\pi_1, \pi_2, t, \omega)$ where $\pi_1, \pi_2$ are groups with $\pi_2$ abelian, $t \colon \pi_1 \to \operatorname{Aut} \pi_2$ a group homomorphism, and $\omega \in H^3(B\pi_1, \pi_2)$ a cohomology class. These groups can be encoded as homotopy 2-types $X$ with $\pi_1 X = \pi_1$ and $\pi_2 X = \pi_2$, with the action coming from the action of $\pi_1 X$ on higher homotopy groups, and $\omega$ coming from the Postnikov tower, since there is a fibration

$$B^2\pi_2 \to X \to B\pi_1$$

coming from a map $B\pi_1 \to B^3\pi_2$. Note that this idea can be used to construct other higher groups with group data having trivial middle groups $\pi_1, e, \ldots, e, \pi_n$, where the fibration sequence is now

$$B^n\pi_n \to X \to B\pi_1,$$

coming from a map $B\pi_1 \to B^{n+1}\pi_n$ whose homotopy class is an element of $H^{n+1}(B\pi_1, \pi_n)$.

=== 3-groups ===

Another interesting and accessible class of examples, which requires homotopy-theoretic methods not accessible to strict groupoids, comes from looking at homotopy 3-types of groups. Essentially, these are given by a triple of groups $(\pi_1, \pi_2, \pi_3)$ with only the first group being non-abelian, together with some additional homotopy-theoretic data from the Postnikov tower.
If we take this 3-group as a homotopy 3-type $X$, the existence of universal covers gives us a homotopy type $\hat{X} \to X$ which fits into a fibration sequence

$$\hat{X} \to X \to B\pi_1,$$

giving a homotopy type $\hat{X}$ with trivial $\pi_1$ on which $\pi_1$ acts. These can be understood explicitly using the previous model of 2-groups, shifted up by degree (called delooping). Explicitly, $\hat{X}$ fits into a Postnikov tower with associated Serre fibration

$$B^3\pi_3 \to \hat{X} \to B^2\pi_2,$$

where the $B^3\pi_3$-bundle $\hat{X} \to B^2\pi_2$ comes from a map $B^2\pi_2 \to B^4\pi_3$, giving a cohomology class in $H^4(B^2\pi_2, \pi_3)$. Then, $X$ can be reconstructed using a homotopy quotient $\hat{X}/\!/\pi_1 \simeq X$.

=== n-groups ===

The previous construction gives the general idea of how to consider higher groups in general. For an n-group with groups $\pi_1, \pi_2, \ldots, \pi_n$, with all but the first abelian, we can consider the associated homotopy type $X$ and first consider the universal cover $\hat{X} \to X$. Then, this is a space with trivial $\pi_1(\hat{X}) = 0$, making it easier to construct the rest of the homotopy type using the Postnikov tower. Then, the homotopy quotient $\hat{X}/\!/\pi_1$ gives a reconstruction of $X$, showing that the data of an $n$-group is a higher group, or simple space, with trivial $\pi_1$, such that a group $G$ acts on it homotopy-theoretically. This observation is reflected in the fact that homotopy types are not realized by simplicial groups, but by simplicial groupoids (pg. 295), since the groupoid structure models the homotopy quotient $-/\!/\pi_1$.

Going through the construction of a 4-group $X$ is instructive because it gives the general idea for how to construct the groups in general. For simplicity, let's assume $\pi_1 = e$ is trivial, so the non-trivial groups are $\pi_2, \pi_3, \pi_4$. This gives a Postnikov tower

$$X \to X_3 \to B^2\pi_2 \to *,$$

where the first non-trivial map $X_3 \to B^2\pi_2$ is a fibration with fiber $B^3\pi_3$. Again, this is classified by a cohomology class in $H^4(B^2\pi_2, \pi_3)$. Now, to construct $X$ from $X_3$, there is an associated fibration

$$B^4\pi_4 \to X \to X_3,$$

given by a homotopy class $[X_3, B^5\pi_4] \cong H^5(X_3, \pi_4)$. In principle this cohomology group should be computable using the previous fibration $B^3\pi_3 \to X_3 \to B^2\pi_2$ with the Serre spectral sequence with the correct coefficients, namely $\pi_4$.
Doing this recursively, say for a 5-group, would require several spectral sequence computations, at worst $n!$ many spectral sequence computations for an $n$-group.

==== n-groups from sheaf cohomology ====

For a complex manifold $X$ with universal cover $\pi \colon \tilde{X} \to X$, and a sheaf of abelian groups $\mathcal{F}$ on $X$, for every $n \geq 0$ there exist canonical homomorphisms

$$\phi_n \colon H^n(\pi_1 X, H^0(\tilde{X}, \pi^*\mathcal{F})) \to H^n(X, \mathcal{F}),$$

giving a technique for relating n-groups constructed from a complex manifold $X$ and sheaf cohomology on $X$. This is particularly applicable for complex tori.

== See also ==

∞-groupoid
Crossed module
Homotopy hypothesis
Abelian 2-group

== References ==

Hoàng Xuân Sính, Gr-catégories, PhD thesis, (1973). "Thesis of Hoàng Xuân Sính (Gr-catégories)". Archived from the original on 2022-08-27.
Baez, John C.; Lauda, Aaron D. (2003). "Higher-Dimensional Algebra V: 2-Groups". arXiv:math/0307200v3.
Roberts, David Michael; Schreiber, Urs (2008). "The inner automorphism 3-group of a strict 2-group". Journal of Homotopy and Related Structures. 3: 193–244. arXiv:0708.1741.
"Classification of weak 3-groups". MathOverflow.
Jardine, J. F. (January 2001). "Stacks and the homotopy theory of simplicial sheaves". Homology, Homotopy and Applications. 3 (2): 361–384. doi:10.4310/HHA.2001.v3.n2.a5. S2CID 123554728.

=== Algebraic models for homotopy n-types ===

Blanc, David (1999). "Algebraic invariants for homotopy types". Mathematical Proceedings of the Cambridge Philosophical Society. 127 (3): 497–523. arXiv:math/9812035. Bibcode:1999MPCPS.127..497B. doi:10.1017/S030500419900393X. S2CID 17663055.
Arvasi, Z.; Ulualan, E. (2006). "On algebraic models for homotopy 3-types" (PDF). Journal of Homotopy and Related Structures. 1: 1–27. arXiv:math/0602180. Bibcode:2006math......2180A.
Brown, Ronald (1992). "Computing homotopy types using crossed n-cubes of groups". Adams Memorial Symposium on Algebraic Topology. pp. 187–210. arXiv:math/0109091. doi:10.1017/CBO9780511526305.014. ISBN 9780521420747. S2CID 2750149.
Joyal, André; Kock, Joachim (2007). "Weak units and homotopy 3-types". Categories in Algebra, Geometry and Mathematical Physics. Contemporary Mathematics. Vol. 431. pp. 257–276. doi:10.1090/conm/431/08277. ISBN 9780821839706. S2CID 13931985.
Algebraic models for homotopy n-types at the nLab - musings by Tim Porter discussing the pitfalls of modelling homotopy n-types with n-cubes

=== Cohomology of higher groups ===

Eilenberg, Samuel; MacLane, Saunders (1946). "Determination of the Second Homology and Cohomology Groups of a Space by Means of Homotopy Invariants". Proceedings of the National Academy of Sciences. 32 (11): 277–280. Bibcode:1946PNAS...32..277E. doi:10.1073/pnas.32.11.277. PMC 1078947. PMID 16588731.
Thomas, Sebastian (2009). "The third cohomology group classifies crossed module extensions". arXiv:0911.2861 [math.KT].
Thomas, Sebastian (January 2010). "On the second cohomology group of a simplicial group". Homology, Homotopy and Applications. 12 (2): 167–210. arXiv:0911.2864. doi:10.4310/HHA.2010.v12.n2.a6. S2CID 55449228.
Noohi, Behrang (2011). "Group cohomology with coefficients in a crossed module". Journal of the Institute of Mathematics of Jussieu.
10 (2): 359–404. arXiv:0902.0161. doi:10.1017/S1474748010000186. S2CID 7835760.

=== Cohomology of higher groups over a site ===

Note this is (slightly) distinct from the previous section, because it is about taking cohomology over a space $X$ with values in a higher group $\mathbb{G}_\bullet$, giving higher cohomology groups $\mathbb{H}^*(X, \mathbb{G}_\bullet)$. If we are considering $X$ as a homotopy type and assuming the homotopy hypothesis, then these are the same cohomology groups.

Jibladze, Mamuka; Pirashvili, Teimuraz (2011). "Cohomology with coefficients in stacks of Picard categories". arXiv:1101.2918 [math.AT].
Debremaeker, Raymond (2017). "Cohomology with values in a sheaf of crossed groups over a site". arXiv:1702.02128 [math.AG].
Wikipedia/N-group_(category_theory)
In mathematics, an automorphic L-function is a function $L(s, \pi, r)$ of a complex variable $s$, associated to an automorphic representation $\pi$ of a reductive group $G$ over a global field and a finite-dimensional complex representation $r$ of the Langlands dual group ${}^L G$ of $G$. It generalizes the Dirichlet L-series of a Dirichlet character and the Mellin transform of a modular form. Automorphic L-functions were introduced by Langlands (1967, 1970, 1971). Borel (1979) and Arthur & Gelbart (1991) gave surveys of automorphic L-functions.

== Properties ==

Automorphic $L$-functions should have the following properties (which have been proved in some cases but are still conjectural in other cases).

The L-function $L(s, \pi, r)$ should be a product over the places $v$ of $F$ of local $L$-functions:

$$L(s, \pi, r) = \prod_v L(s, \pi_v, r_v).$$

Here the automorphic representation $\pi = \otimes \pi_v$ is a tensor product of the representations $\pi_v$ of local groups.

The L-function is expected to have an analytic continuation as a meromorphic function of all complex $s$, and to satisfy a functional equation

$$L(s, \pi, r) = \epsilon(s, \pi, r)\, L(1-s, \pi, r^\lor),$$

where the factor $\epsilon(s, \pi, r)$ is a product of "local constants"

$$\epsilon(s, \pi, r) = \prod_v \epsilon(s, \pi_v, r_v, \psi_v),$$

almost all of which are 1.

== General linear groups ==

Godement & Jacquet (1972) constructed the automorphic L-functions for general linear groups with $r$ the standard representation (so-called standard L-functions) and verified analytic continuation and the functional equation, using a generalization of the method in Tate's thesis. Ubiquitous in the Langlands program are Rankin–Selberg products of representations of GL(m) and GL(n). The resulting Rankin–Selberg L-functions satisfy a number of analytic properties; their functional equation was first proved via the Langlands–Shahidi method.

In general, the Langlands functoriality conjectures imply that automorphic L-functions of a connected reductive group are equal to products of automorphic L-functions of general linear groups. A proof of Langlands functoriality would also lead towards a thorough understanding of the analytic properties of automorphic L-functions.
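The Euler-product shape above can be sanity-checked numerically in the simplest case the article mentions, a Dirichlet L-series (the GL(1) case). A small sketch, using the nontrivial character mod 4 as an arbitrary concrete choice; both the Dirichlet series and the truncated Euler product approach $L(2, \chi)$, which is Catalan's constant $\approx 0.9159656$.

```python
# Dirichlet series vs. Euler product for L(s, chi), chi the nontrivial
# character mod 4: chi(n) = +1, 0, -1, 0 as n = 1, 2, 3, 0 (mod 4).
# Purely illustrative of the product-over-places shape for GL(1).

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

chi = lambda n: {1: 1, 3: -1}.get(n % 4, 0)
s = 2.0

series = sum(chi(n) / n ** s for n in range(1, 200000))
euler = 1.0
for p in primes_up_to(100000):
    euler *= 1.0 / (1.0 - chi(p) / p ** s)

print(series, euler)   # both ~ 0.9159655941 (Catalan's constant)
```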
== See also ==

Grand Riemann hypothesis

== References ==

Arthur, James; Gelbart, Stephen (1991), "Lectures on automorphic L-functions", in Coates, John; Taylor, M. J. (eds.), L-functions and arithmetic (Durham, 1989) (PDF), London Math. Soc. Lecture Note Ser., vol. 153, Cambridge University Press, pp. 1–59, doi:10.1017/CBO9780511526053.003, ISBN 978-0-521-38619-7, MR 1110389
Borel, Armand (1979), "Automorphic L-functions", in Borel, Armand; Casselman, W. (eds.), Automorphic forms, representations and L-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 2, vol. XXXIII, Providence, R.I.: American Mathematical Society, pp. 27–61, doi:10.1090/pspum/033.2/546608, ISBN 978-0-8218-1437-6, MR 0546608
Cogdell, James W.; Kim, Henry H.; Murty, Maruti Ram (2004), Lectures on automorphic L-functions, Fields Institute Monographs, vol. 20, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-3516-6, MR 2071722
Gelbart, Stephen; Piatetski-Shapiro, Ilya; Rallis, Stephen (1987), Explicit Constructions of Automorphic L-Functions, Lecture Notes in Mathematics, vol. 1254, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0078125, ISBN 978-3-540-17848-4, MR 0892097
Godement, Roger; Jacquet, Hervé (1972), Zeta Functions of Simple Algebras, Lecture Notes in Mathematics, vol. 260, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0070263, ISBN 978-3-540-05797-0, MR 0342495
Jacquet, H.; Piatetski-Shapiro, I. I.; Shalika, J. A. (1983), "Rankin-Selberg Convolutions", Amer. J. Math., 105 (2): 367–464, doi:10.2307/2374264, JSTOR 2374264
Langlands, Robert (1967), Letter to Prof. Weil
Langlands, R. P. (1970), "Problems in the theory of automorphic forms", Lectures in modern analysis and applications, III, Lecture Notes in Math, vol. 170, Berlin, New York: Springer-Verlag, pp. 18–61, doi:10.1007/BFb0079065, ISBN 978-3-540-05284-5, MR 0302614
Langlands, Robert P. (1971) [1967], Euler products, Yale University Press, ISBN 978-0-300-01395-5, MR 0419366
Shahidi, F. (1981), "On certain "L"-functions", Amer. J. Math., 103 (2): 297–355, doi:10.2307/2374219, JSTOR 2374219
Wikipedia/Automorphic_L-function
Faltings's theorem is a result in arithmetic geometry, according to which a curve of genus greater than 1 over the field $\mathbb{Q}$ of rational numbers has only finitely many rational points. This was conjectured in 1922 by Louis Mordell, and known as the Mordell conjecture until its 1983 proof by Gerd Faltings. The conjecture was later generalized by replacing $\mathbb{Q}$ by any number field.

== Background ==

Let $C$ be a non-singular algebraic curve of genus $g$ over $\mathbb{Q}$. Then the set of rational points on $C$ may be determined as follows:

When $g = 0$, there are either no points or infinitely many. In such cases, $C$ may be handled as a conic section.
When $g = 1$, if there are any points, then $C$ is an elliptic curve and its rational points form a finitely generated abelian group. (This is Mordell's theorem, later generalized to the Mordell–Weil theorem.) Moreover, Mazur's torsion theorem restricts the structure of the torsion subgroup.
When $g > 1$, according to Faltings's theorem, $C$ has only a finite number of rational points.

== Proofs ==

Igor Shafarevich conjectured that there are only finitely many isomorphism classes of abelian varieties of fixed dimension and fixed polarization degree over a fixed number field with good reduction outside a fixed finite set of places. Aleksei Parshin showed that Shafarevich's finiteness conjecture would imply the Mordell conjecture, using what is now called Parshin's trick.

Gerd Faltings proved Shafarevich's finiteness conjecture using a known reduction to a case of the Tate conjecture, together with tools from algebraic geometry, including the theory of Néron models. The main idea of Faltings's proof is the comparison of Faltings heights and naive heights via Siegel modular varieties.

=== Later proofs ===

Paul Vojta gave a proof based on Diophantine approximation. Enrico Bombieri found a more elementary variant of Vojta's proof. Brian Lawrence and Akshay Venkatesh gave a proof based on p-adic Hodge theory, borrowing also some of the easier ingredients of Faltings's original proof.

== Consequences ==

Faltings's 1983 paper had as consequences a number of statements which had previously been conjectured:

The Mordell conjecture, that a curve of genus greater than 1 over a number field has only finitely many rational points;
The isogeny theorem, that abelian varieties with isomorphic Tate modules (as $\mathbb{Q}_\ell$-modules with Galois action) are isogenous.

A sample application of Faltings's theorem is to a weak form of Fermat's Last Theorem: for any fixed $n \geq 4$ there are at most finitely many primitive integer solutions (pairwise coprime solutions) to $a^n + b^n = c^n$, since for such $n$ the Fermat curve $x^n + y^n = 1$ has genus greater than 1.
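The genus computation behind this application is a one-liner: the Fermat curve is a smooth plane curve of degree $n$, so the degree-genus formula gives $g = (n-1)(n-2)/2$, which exceeds 1 exactly from $n = 4$ on. A quick check:

```python
# Degree-genus formula g = (n-1)(n-2)/2 for the smooth plane Fermat curve
# x^n + y^n = 1; Faltings applies as soon as g > 1, i.e. n >= 4.

genus = lambda n: (n - 1) * (n - 2) // 2
for n in range(2, 8):
    print(n, genus(n), genus(n) > 1)   # genus > 1 from n = 4 onward
```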
== Generalizations ==

Because of the Mordell–Weil theorem, Faltings's theorem can be reformulated as a statement about the intersection of a curve $C$ with a finitely generated subgroup $\Gamma$ of an abelian variety $A$. Generalizing by replacing $A$ by a semiabelian variety, $C$ by an arbitrary subvariety of $A$, and $\Gamma$ by an arbitrary finite-rank subgroup of $A$ leads to the Mordell–Lang conjecture, which was proved in 1995 by McQuillan following work of Laurent, Raynaud, Hindry, Vojta, and Faltings.

Another higher-dimensional generalization of Faltings's theorem is the Bombieri–Lang conjecture: if $X$ is a pseudo-canonical variety (i.e., a variety of general type) over a number field $k$, then $X(k)$ is not Zariski dense in $X$. Even more general conjectures have been put forth by Paul Vojta.

The Mordell conjecture for function fields was proved by Yuri Ivanovich Manin and by Hans Grauert. In 1990, Robert F. Coleman found and fixed a gap in Manin's proof.
Wikipedia/Mordell_conjecture
In mathematics, the Hardy–Ramanujan–Littlewood circle method is a technique of analytic number theory. It is named for G. H. Hardy, S. Ramanujan, and J. E. Littlewood, who developed it in a series of papers on Waring's problem.

== History ==

The initial idea is usually attributed to the work of Hardy with Srinivasa Ramanujan a few years earlier, in 1916 and 1917, on the asymptotics of the partition function. It was taken up by many other researchers, including Harold Davenport and I. M. Vinogradov, who modified the formulation slightly (moving from complex analysis to exponential sums), without changing the broad lines. Hundreds of papers followed, and as of 2022 the method still yields results. The method is the subject of a monograph by Vaughan (1997).

== Outline ==

The goal is to prove asymptotic behavior of a series: to show that $a_n \sim F(n)$ for some function $F$. This is done by taking the generating function of the series, then computing the residues about zero (essentially the Fourier coefficients). Technically, the generating function is scaled to have radius of convergence 1, so it has singularities on the unit circle; thus one cannot take the contour integral over the unit circle.

The circle method is specifically how to compute these residues, by partitioning the circle into minor arcs (the bulk of the circle) and major arcs (small arcs containing the most significant singularities), and then bounding the behavior on the minor arcs. The key insight is that, in many cases of interest (such as theta functions), the singularities occur at the roots of unity, and the significance of the singularities is in the order of the Farey sequence. Thus one can investigate the most significant singularities and, if fortunate, compute the integrals.

=== Setup ===

The circle in question was initially the unit circle in the complex plane. Assume that, for a sequence of complex numbers $a_n$ for $n = 0, 1, 2, 3, \ldots$, we want asymptotic information of the type $a_n \sim F(n)$, where we have some heuristic reason to guess the form taken by $F$ (an ansatz). We write

$$f(z) = \sum a_n z^n,$$

a power series generating function. The interesting cases are where $f$ has radius of convergence equal to 1, and we suppose that the problem as posed has been modified to present this situation.

=== Residues ===

From that formulation, it follows directly from the residue theorem that

$$I_n = \oint_C f(z)\, z^{-(n+1)}\, dz = 2\pi i\, a_n$$

for integers $n \geq 0$, where $C$ is a circle of radius $r$ centred at 0, for any $r$ with $0 < r < 1$; in other words, $I_n$ is a contour integral over the circle described, traversed once anticlockwise. We would like to take $r = 1$ directly, that is, to use the unit circle contour. In the complex analysis formulation this is problematic, since the values of $f$ may not be defined there.

=== Singularities on unit circle ===

The problem addressed by the circle method is to force the issue of taking $r = 1$, by a good understanding of the nature of the singularities $f$ exhibits on the unit circle. The fundamental insight is the role played by the Farey sequence of rational numbers, or equivalently by the roots of unity

$$\zeta = \exp\left(\frac{2\pi i r}{s}\right).$$

Here the denominator $s$, assuming that $r/s$ is in lowest terms, turns out to determine the relative importance of the singular behaviour of typical $f$ near $\zeta$.
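Before the arcs enter the picture, the residue formula from the Setup is easy to test numerically. A minimal sketch, taking $f(z) = 1/(1-z)$ (all Taylor coefficients equal to 1) and approximating the contour integral by quadrature on a circle of radius $r < 1$:

```python
# Numerical check of I_n = \oint_C f(z) z^{-(n+1)} dz = 2*pi*i*a_n.
# Parametrizing C as z = r e^{2 pi i k/steps} turns I_n / (2*pi*i) into
# the average of f(z) z^{-n} over the sample points.
import cmath

def coefficient(f, n, r=0.5, steps=4096):
    total = 0j
    for k in range(steps):
        z = r * cmath.exp(2j * cmath.pi * k / steps)
        total += f(z) / z ** n
    return total / steps              # approximates a_n = I_n / (2*pi*i)

f = lambda z: 1 / (1 - z)
print([round(coefficient(f, n).real, 6) for n in range(5)])  # ~[1.0]*5
```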
=== Method ===

The Hardy–Littlewood circle method, for the complex-analytic formulation, can then be expressed as follows. The contributions to the evaluation of $I_n$, as $r \to 1$, should be treated in two ways, traditionally called major arcs and minor arcs. We divide the roots of unity $\zeta$ into two classes, according to whether $s \leq N$ or $s > N$, where $N$ is a function of $n$ that is ours to choose conveniently. The integral $I_n$ is divided up into integrals each on some arc of the circle that is adjacent to $\zeta$, of length a function of $s$ (again, at our discretion). The arcs make up the whole circle; the sum of the integrals over the major arcs is to make up $2\pi i F(n)$ (realistically, this will happen up to a manageable remainder term). The sum of the integrals over the minor arcs is to be replaced by an upper bound, smaller in order than $F(n)$.

== Discussion ==

Stated boldly like this, it is not at all clear that this can be made to work. The insights involved are quite deep. One clear source is the theory of theta functions.

=== Waring's problem ===

In the context of Waring's problem, powers of theta functions are the generating functions for the sum-of-squares function. Their analytic behaviour is known in much more accurate detail than for the cubes, for example. It is the case, as the false-colour diagram indicates, that for a theta function the "most important" point on the boundary circle is at $z = 1$, followed by $z = -1$, and then the two complex cube roots of unity at 7 o'clock and 11 o'clock. After that it is the fourth roots of unity $i$ and $-i$ that matter most. While nothing in this guarantees that the analytical method will work, it does explain the rationale of using a Farey series-type criterion on roots of unity.

In the case of Waring's problem, one takes a sufficiently high power of the generating function to force the situation in which the singularities, organised into the so-called singular series, predominate. The less wasteful the estimates used on the rest, the finer the results. As Bryan Birch has put it, the method is inherently wasteful. That does not apply to the case of the partition function, which signalled the possibility that in a favourable situation the losses from estimates could be controlled.

=== Vinogradov trigonometric sums ===

Later, I. M. Vinogradov extended the technique, replacing the generating-function formulation $f(z)$ with a finite Fourier series, so that the relevant integral $I_n$ is a Fourier coefficient. Vinogradov applied finite sums to Waring's problem in 1926, and the general trigonometric sum method became known as "the circle method of Hardy, Littlewood and Ramanujan, in the form of Vinogradov's trigonometric sums". Essentially all this does is to discard the whole "tail" of the generating function, allowing $r$ in the limiting operation to be set directly to the value 1.

== Applications ==

Refinements of the method have allowed results to be proved about the solutions of homogeneous Diophantine equations, as long as the number of variables $k$ is large relative to the degree $d$ (see Birch's theorem for example). This turns out to be a contribution to the Hasse principle, capable of yielding quantitative information. If $d$ is fixed and $k$ is small, other methods are required, and indeed the Hasse principle tends to fail.
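For the partition function, the method's original application, the main term can be compared against exact values. A sketch: $p(n)$ computed exactly from Euler's pentagonal-number recurrence, against the Hardy–Ramanujan main term $p(n) \sim e^{\pi\sqrt{2n/3}}/(4n\sqrt{3})$.

```python
# Exact p(n) via Euler's pentagonal-number recurrence, compared with the
# Hardy-Ramanujan asymptotic main term.
import math

def partitions(N):
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
                if g <= n:
                    total += (-1) ** (k + 1) * p[n - g]
            k += 1
        p[n] = total
    return p

hr = lambda n: math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))
p = partitions(500)
for n in (50, 200, 500):
    print(n, p[n], p[n] / hr(n))   # the ratio slowly approaches 1
```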
== Rademacher's contour ==

In the special case when the circle method is applied to find the coefficients of a modular form of negative weight, Hans Rademacher found a modification of the contour that makes the series arising from the circle method converge to the exact result. To describe his contour, it is convenient to replace the unit circle by the upper half plane, by making the substitution $z = \exp(2\pi i \tau)$, so that the contour integral becomes an integral from $\tau = i$ to $\tau = 1 + i$. (The number $i$ could be replaced by any number on the upper half-plane, but $i$ is the most convenient choice.) Rademacher's contour is (more or less) given by the boundaries of all the Ford circles from 0 to 1, as shown in the diagram. The replacement of the line from $i$ to $1 + i$ by the boundaries of these circles is a non-trivial limiting process, which can be justified for modular forms that have negative weight, and with more care can also be justified for non-constant terms for the case of weight 0 (in other words, modular functions).

== References ==

Apostol, Tom M. (1990), Modular functions and Dirichlet series in number theory (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-97127-8
Mardzhanishvili, K. K. (1985), "Ivan Matveevich Vinogradov: a brief outline of his life and works", I. M. Vinogradov, Selected Works, Berlin
Rademacher, Hans (1943), "On the expansion of the partition function in a series", Annals of Mathematics, Second Series, 44 (3): 416–422, doi:10.2307/1968973, JSTOR 1968973, MR 0008618
Vaughan, R. C. (1997), The Hardy–Littlewood Method, Cambridge Tracts in Mathematics, vol. 125 (2nd ed.), Cambridge University Press, ISBN 978-0-521-57347-4

== Further reading ==

Wang, Yuan (1991). Diophantine equations and inequalities in algebraic number fields. Berlin: Springer-Verlag. doi:10.1007/978-3-642-58171-7. ISBN 9783642634895. OCLC 851809136.

== External links ==

Terence Tao, Heuristic limitations of the circle method, a blog post in 2012
Wikipedia/Hardy–Littlewood_method
In mathematics, auxiliary functions are an important construction in transcendental number theory. They are functions that appear in most proofs in this area of mathematics and that have specific, desirable properties, such as taking the value zero for many arguments, or having a zero of high order at some point.

== Definition ==

Auxiliary functions are not a rigorously defined kind of function; rather, they are functions which are either explicitly constructed or at least shown to exist, and which provide a contradiction to some assumed hypothesis, or otherwise prove the result in question. Creating a function during the course of a proof in order to prove the result is not a technique exclusive to transcendence theory, but the term "auxiliary function" usually refers to the functions created in this area.

== Explicit functions ==

=== Liouville's transcendence criterion ===

Because of the naming convention mentioned above, auxiliary functions can be dated back to their source simply by looking at the earliest results in transcendence theory. One of these first results was Liouville's proof that transcendental numbers exist, when he showed that the so-called Liouville numbers were transcendental. He did this by discovering a transcendence criterion which these numbers satisfied. To derive this criterion he started with a general algebraic number α and found some property that this number would necessarily satisfy. The auxiliary function he used in the course of proving this criterion was simply the minimal polynomial of α, which is the irreducible polynomial $f$ with integer coefficients such that $f(\alpha) = 0$. This function can be used to estimate how well the algebraic number α can be approximated by rational numbers $p/q$. Specifically, if α has degree $d$ at least two, then he showed that

$$\left| f\!\left(\frac{p}{q}\right) \right| \geq \frac{1}{q^d},$$

and also, using the mean value theorem, that there is some constant depending on α, say $c(\alpha)$, such that

$$\left| f\!\left(\frac{p}{q}\right) \right| \leq c(\alpha) \left| \alpha - \frac{p}{q} \right|.$$

Combining these results gives a property that the algebraic number must satisfy; therefore any number not satisfying this criterion must be transcendental.

The auxiliary function in Liouville's work is very simple: merely a polynomial that vanishes at a given algebraic number. This kind of property is usually the one that auxiliary functions satisfy. They either vanish or become very small at particular points, which is usually combined with the assumption that they do not vanish or cannot be too small, to derive a result.

=== Fourier's proof of the irrationality of e ===

Another simple, early occurrence is in Fourier's proof of the irrationality of e, though the notation used usually disguises this fact. Fourier's proof used the power series of the exponential function

$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}.$$

By truncating this power series after, say, $N + 1$ terms we get a polynomial with rational coefficients of degree $N$ which is in some sense "close" to the function $e^x$. Specifically, if we look at the auxiliary function defined by the remainder

$$R(x) = e^x - \sum_{n=0}^{N} \frac{x^n}{n!},$$

then this function, an exponential polynomial, should take small values for $x$ close to zero. If e is a rational number, then by letting $x = 1$ in the above formula we see that $R(1)$ is also a rational number. However, Fourier proved that $R(1)$ could not be rational by eliminating every possible denominator. Thus e cannot be rational.
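The denominator-elimination step can be illustrated numerically: $N!\,R(1) = \sum_{n>N} N!/n!$ is strictly between 0 and $1/N$, so if e had denominator $N$ then $N!\,R(1)$ would be a positive integer less than 1, which is impossible. A sketch in exact rational arithmetic (the truncation at 30 extra terms is an assumption that loses only a negligible tail):

```python
# 0 < N! * R(1) < 1/N, the inequality driving Fourier's proof.
from fractions import Fraction
from math import factorial

def scaled_tail(N, terms=30):
    """N! * sum_{n=N+1}^{N+terms} 1/n!, a sharp lower bound for N!*R(1)."""
    return sum(Fraction(factorial(N), factorial(n))
               for n in range(N + 1, N + terms + 1))

for N in (2, 5, 10):
    t = scaled_tail(N)
    print(N, float(t), 0 < t and float(t) < 1 / N)   # always True
```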
=== Hermite's proof of the irrationality of e^r ===

Hermite extended the work of Fourier by approximating the function $e^x$ not with a polynomial but with a rational function, that is, a quotient of two polynomials. In particular he chose polynomials $A(x)$ and $B(x)$ such that the auxiliary function $R$ defined by

$$R(x) = B(x)e^x - A(x)$$

could be made as small as he wanted around $x = 0$. But if $e^r$ were rational then $R(r)$ would have to be rational with a particular denominator, yet Hermite could make $R(r)$ too small to have such a denominator, hence a contradiction.

=== Hermite's proof of the transcendence of e ===

To prove that e was in fact transcendental, Hermite took his work one step further by approximating not just the function $e^x$, but also the functions $e^{kx}$ for integers $k = 1, \ldots, m$, where he assumed e was algebraic with degree $m$. By approximating $e^{kx}$ by rational functions with integer coefficients and with the same denominator, say $A_k(x)/B(x)$, he could define auxiliary functions $R_k(x)$ by

$$R_k(x) = B(x)e^{kx} - A_k(x).$$

For his contradiction Hermite supposed that e satisfied the polynomial equation with integer coefficients $a_0 + a_1 e + \cdots + a_m e^m = 0$. Multiplying this expression through by $B(1)$ he noticed that it implied

$$R = a_0 + a_1 R_1(1) + \cdots + a_m R_m(1) = a_1 A_1(1) + \cdots + a_m A_m(1).$$

The right hand side is an integer and so, by estimating the auxiliary functions and proving that $0 < |R| < 1$, he derived the necessary contradiction.

== Auxiliary functions from the pigeonhole principle ==

The auxiliary functions sketched above can all be explicitly calculated and worked with. A breakthrough by Axel Thue and Carl Ludwig Siegel in the twentieth century was the realisation that these functions don't necessarily need to be explicitly known – it can be enough to know they exist and have certain properties. Using the pigeonhole principle Thue, and later Siegel, managed to prove the existence of auxiliary functions which, for example, took the value zero at many different points, or took high order zeros at a smaller collection of points. Moreover, they proved it was possible to construct such functions without making the functions too large. Their auxiliary functions were not explicit functions, then, but by knowing that a certain function with certain properties existed, they used its properties to simplify the transcendence proofs of the nineteenth century and give several new results.

This method was picked up on and used by several other mathematicians, including Alexander Gelfond and Theodor Schneider, who used it independently to prove the Gelfond–Schneider theorem. Alan Baker also used the method in the 1960s for his work on linear forms in logarithms and ultimately Baker's theorem. Another example of the use of this method from the 1960s is outlined below, after a short illustration of the pigeonhole mechanism itself.
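The linear-algebra core of the pigeonhole argument is Siegel's lemma: an integer linear system with more unknowns than equations has a nonzero integer solution whose entries are small. A brute-force sketch with made-up coefficients, just to exhibit the phenomenon (Siegel's lemma supplies the general quantitative bound):

```python
# An underdetermined integer system A x = 0 (2 equations, 4 unknowns)
# has a nonzero solution inside a small box; we simply search for one.
# The matrix entries are arbitrary illustrative data.
from itertools import product

A = [[3, -1, 2, 5],
     [1, 4, -2, 1]]

B = 3                        # search the box |x_i| <= B
for x in product(range(-B, B + 1), repeat=4):
    if any(x) and all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A):
        print(x)             # a small nonzero kernel vector, e.g. (-3, 2, 3, 1)
        break
```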
=== Auxiliary polynomial theorem ===

Let β equal the cube root of $b/a$ in the equation $ax^3 + by^3 = c$, and assume $m$ is an integer that satisfies $m + 1 > 2n/3 \geq m \geq 3$, where $n$ is a positive integer. Then there exists

$$F(X,Y) = P(X) + Y\,Q(X)$$

such that

$$P(X) = \sum_{i=0}^{m+n} u_i X^i, \qquad Q(X) = \sum_{i=0}^{m+n} v_i X^i.$$

The auxiliary polynomial theorem states

$$\max_{0 \leq i \leq m+n} \left( |u_i|, |v_i| \right) \leq 2b^{9(m+n)}.$$

=== A theorem of Lang ===

In the 1960s Serge Lang proved a result using this non-explicit form of auxiliary functions. The theorem implies both the Hermite–Lindemann and Gelfond–Schneider theorems. The theorem deals with a number field $K$ and meromorphic functions $f_1, \ldots, f_N$ of order at most ρ, at least two of which are algebraically independent, and such that if we differentiate any of these functions then the result is a polynomial in all of the functions. Under these hypotheses the theorem states that if there are $m$ distinct complex numbers $\omega_1, \ldots, \omega_m$ such that $f_i(\omega_j)$ is in $K$ for all combinations of $i$ and $j$, then $m$ is bounded by

$$m \leq 20\rho\,[K : \mathbb{Q}].$$

To prove the result Lang took two algebraically independent functions from $f_1, \ldots, f_N$, say $f$ and $g$, and then created an auxiliary function which was simply a polynomial $F$ in $f$ and $g$. This auxiliary function could not be explicitly stated, since $f$ and $g$ are not explicitly known. But using Siegel's lemma Lang showed how to make $F$ in such a way that it vanished to a high order at the $m$ complex numbers $\omega_1, \ldots, \omega_m$. Because of this high order vanishing it can be shown that a high-order derivative of $F$ takes a value of small size at one of the $\omega_i$, "size" here referring to an algebraic property of a number. Using the maximum modulus principle Lang also found a separate way to estimate the absolute values of derivatives of $F$, and using standard results comparing the size of a number and its absolute value he showed that these estimates were contradicted unless the claimed bound on $m$ holds.

== Interpolation determinants ==

After the myriad of successes gleaned from using existent but not explicit auxiliary functions, in the 1990s Michel Laurent introduced the idea of interpolation determinants. These are alternants – determinants of matrices of the form

$$\mathcal{M} = \left( \varphi_i(\zeta_j) \right)_{1 \leq i,j \leq N}$$

where the $\varphi_i$ are a set of functions interpolated at a set of points $\zeta_j$. Since a determinant is just a polynomial in the entries of a matrix, these auxiliary functions succumb to study by analytic means. A problem with the method was the need to choose a basis before the matrix could be worked with. A development by Jean-Benoît Bost removed this problem with the use of Arakelov theory, and research in this area is ongoing. The example below gives an idea of the flavour of this approach.

=== A proof of the Hermite–Lindemann theorem ===

One of the simpler applications of this method is a proof of the real version of the Hermite–Lindemann theorem. That is, if α is a non-zero, real algebraic number, then $e^\alpha$ is transcendental.

First we let $k$ be some natural number and $n$ be a large multiple of $k$. The interpolation determinant considered is the determinant Δ of the $n^4 \times n^4$ matrix

$$\left( \left\{ \exp(j_2 x)\, x^{j_1 - 1} \right\}^{(i_1 - 1)} \Big|_{x = (i_2 - 1)\alpha} \right).$$
{\displaystyle \left(\{\exp(j_{2}x)x^{j_{1}-1}\}^{(i_{1}-1)}{\Big |}_{x=(i_{2}-1)\alpha }\right).} The rows of this matrix are indexed by 1 ≤ i1 ≤ n^4/k and 1 ≤ i2 ≤ k, while the columns are indexed by 1 ≤ j1 ≤ n^3 and 1 ≤ j2 ≤ n. So the functions in our matrix are monomials in x and e^x and their derivatives, and we are interpolating at the k points 0, α, 2α, ..., (k − 1)α. Assuming that e^α is algebraic we can form the number field Q(α, e^α) of degree m over Q, and then multiply Δ by a suitable denominator as well as all its images under the embeddings of the field Q(α, e^α) into C. For algebraic reasons this product is necessarily an integer, and using arguments relating to Wronskians it can be shown that it is non-zero, so its absolute value is an integer Ω ≥ 1. Using a version of the mean value theorem for matrices it is possible to get an analytic bound on Ω as well, and in fact using big-O notation we have Ω = O ( exp ⁡ ( ( m + 1 k − 3 2 ) n 8 log ⁡ n ) ) . {\displaystyle \Omega =O\left(\exp \left(\left({\frac {m+1}{k}}-{\frac {3}{2}}\right)n^{8}\log n\right)\right).} The number m is fixed by the degree of the field Q(α, e^α), but k is the number of points we are interpolating at, and so we can increase it at will. And once k > 2(m + 1)/3 we will have Ω → 0, eventually contradicting the established condition Ω ≥ 1. Thus e^α cannot be algebraic after all. == References == Waldschmidt, Michel. "An Introduction to Irrationality and Transcendence Methods" (PDF). Liouville, Joseph (1844). "Sur des classes très étendues de quantités dont la valeur n'est ni algébrique, ni même réductible à des irrationnelles algébriques". J. Math. Pures Appl. 18: 883–885, and 910–911. Hermite, Charles (1873). "Sur la fonction exponentielle". C. R. Acad. Sci. Paris. 77. Thue, Axel (1977). Selected Mathematical Papers. Oslo: Universitetsforlaget. Siegel, Carl Ludwig (1929). "Über einige Anwendungen diophantischer Approximationen". Abhandlungen Akad. Berlin. 1: 70. Siegel, Carl Ludwig (1932). "Über die Perioden elliptischer Funktionen". Journal für die reine und angewandte Mathematik. 1932 (167): 62–69. doi:10.1515/crll.1932.167.62. S2CID 199545608. Gel'fond, A. O. (1934). "Sur le septième Problème de D. Hilbert". Izv. Akad. Nauk SSSR. 7: 623–630. Schneider, Theodor (1934). "Transzendenzuntersuchungen periodischer Funktionen. I. Transzendenz von Potenzen". J. Reine Angew. Math. 172: 65–69. Baker, Alan; Wüstholz, G. (2007). Logarithmic Forms and Diophantine Geometry. New Mathematical Monographs. Vol. 9. Cambridge University Press. p. 198. Lang, Serge (1966). Introduction to Transcendental Numbers. Addison–Wesley Publishing Company. Laurent, Michel (1991). "Sur quelques résultats récents de transcendance". Astérisque. 198–200: 209–230. Bost, Jean-Benoît (1996). "Périodes et isogénies des variétés abéliennes sur les corps de nombres (d'après D. Masser et G. Wüstholz)". Astérisque. 237: 795. Pila, Jonathan (1993). "Geometric and arithmetic postulation of the exponential function". J. Austral. Math. Soc. A. 54: 111–127. doi:10.1017/s1446788700037022.
Wikipedia/Auxiliary_function
In mathematics, a generating function is a representation of an infinite sequence of numbers as the coefficients of a formal power series. Generating functions are often expressed in closed form (rather than as a series), by some expression involving operations on the formal series. There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed. Generating functions are sometimes called generating series, in that a series of terms can be said to be the generator of its sequence of term coefficients. == History == Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. George Pólya writes in Mathematics and plausible reasoning: "The name 'generating function' is due to Laplace. Yet, without giving it a name, Euler used the device of generating functions long before Laplace [..]. He applied this mathematical tool to several problems in Combinatory Analysis and the Theory of Numbers." == Definition == Pólya, in the same book, illustrates the idea with the image of a bag: "A generating function is a device somewhat similar to a bag. Instead of carrying many little objects detachedly, which could be embarrassing, we put them all in a bag, and then we have only one object to carry, the bag." Herbert Wilf, in generatingfunctionology, offers another picture: "A generating function is a clothesline on which we hang up a sequence of numbers for display." === Convergence === Unlike an ordinary series, the formal power series is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" remains an indeterminate. One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers. Thus generating functions are not functions in the formal sense of a mapping from a domain to a codomain. These expressions in terms of the indeterminate x may involve arithmetic operations, differentiation with respect to x and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function of x. Indeed, the closed form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of x, and which has the formal series as its series expansion; this explains the designation "generating functions". However, such an interpretation is not required to be possible, because formal series are not required to give a convergent series when a nonzero numeric value is substituted for x. === Limitations === Not all expressions that are meaningful as functions of x are meaningful as expressions designating formal series; negative and fractional powers of x, for example, have no corresponding formal power series. == Types == === Ordinary generating function (OGF) === When the term generating function is used without qualification, it is usually taken to mean an ordinary generating function. The ordinary generating function of a sequence an is: G ( a n ; x ) = ∑ n = 0 ∞ a n x n . 
{\displaystyle G(a_{n};x)=\sum _{n=0}^{\infty }a_{n}x^{n}.} If an is the probability mass function of a discrete random variable, then its ordinary generating function is called a probability-generating function. === Exponential generating function (EGF) === The exponential generating function of a sequence an is EG ⁡ ( a n ; x ) = ∑ n = 0 ∞ a n x n n ! . {\displaystyle \operatorname {EG} (a_{n};x)=\sum _{n=0}^{\infty }a_{n}{\frac {x^{n}}{n!}}.} Exponential generating functions are generally more convenient than ordinary generating functions for combinatorial enumeration problems that involve labelled objects. Another benefit of exponential generating functions is that they are useful in transferring linear recurrence relations to the realm of differential equations. For example, take the Fibonacci sequence {fn} that satisfies the linear recurrence relation fn+2 = fn+1 + fn. The corresponding exponential generating function has the form EF ⁡ ( x ) = ∑ n = 0 ∞ f n n ! x n {\displaystyle \operatorname {EF} (x)=\sum _{n=0}^{\infty }{\frac {f_{n}}{n!}}x^{n}} and its derivatives can readily be shown to satisfy the differential equation EF″(x) = EF′(x) + EF(x) as a direct analogue with the recurrence relation above. In this view, the factorial term n! is merely a counter-term to normalise the derivative operator acting on xn. === Poisson generating function === The Poisson generating function of a sequence an is PG ⁡ ( a n ; x ) = ∑ n = 0 ∞ a n e − x x n n ! = e − x EG ⁡ ( a n ; x ) . {\displaystyle \operatorname {PG} (a_{n};x)=\sum _{n=0}^{\infty }a_{n}e^{-x}{\frac {x^{n}}{n!}}=e^{-x}\,\operatorname {EG} (a_{n};x).} === Lambert series === The Lambert series of a sequence an is LG ⁡ ( a n ; x ) = ∑ n = 1 ∞ a n x n 1 − x n . {\displaystyle \operatorname {LG} (a_{n};x)=\sum _{n=1}^{\infty }a_{n}{\frac {x^{n}}{1-x^{n}}}.} Note that in a Lambert series the index n starts at 1, not at 0, as the first term would otherwise be undefined. The Lambert series coefficients in the power series expansions b n := [ x n ] LG ⁡ ( a n ; x ) {\displaystyle b_{n}:=[x^{n}]\operatorname {LG} (a_{n};x)} for integers n ≥ 1 are related by the divisor sum b n = ∑ d | n a d . {\displaystyle b_{n}=\sum _{d|n}a_{d}.} The main article provides several more classical, or at least well-known examples related to special arithmetic functions in number theory. As an example of a Lambert series identity not given in the main article, we can show that for |x|, |xq| < 1 we have that ∑ n = 1 ∞ q n x n 1 − x n = ∑ n = 1 ∞ q n x n 2 1 − q x n + ∑ n = 1 ∞ q n x n ( n + 1 ) 1 − x n , {\displaystyle \sum _{n=1}^{\infty }{\frac {q^{n}x^{n}}{1-x^{n}}}=\sum _{n=1}^{\infty }{\frac {q^{n}x^{n^{2}}}{1-qx^{n}}}+\sum _{n=1}^{\infty }{\frac {q^{n}x^{n(n+1)}}{1-x^{n}}},} where we have the special case identity for the generating function of the divisor function, d(n) ≡ σ0(n), given by ∑ n = 1 ∞ x n 1 − x n = ∑ n = 1 ∞ x n 2 ( 1 + x n ) 1 − x n . {\displaystyle \sum _{n=1}^{\infty }{\frac {x^{n}}{1-x^{n}}}=\sum _{n=1}^{\infty }{\frac {x^{n^{2}}\left(1+x^{n}\right)}{1-x^{n}}}.} === Bell series === The Bell series of a sequence an is an expression in terms of both an indeterminate x and a prime p and is given by: BG p ⁡ ( a n ; x ) = ∑ n = 0 ∞ a p n x n . {\displaystyle \operatorname {BG} _{p}(a_{n};x)=\sum _{n=0}^{\infty }a_{p^{n}}x^{n}.} === Dirichlet series generating functions (DGFs) === Formal Dirichlet series are often classified as generating functions, although they are not strictly formal power series. 
The Dirichlet series generating function of a sequence an is: DG ⁡ ( a n ; s ) = ∑ n = 1 ∞ a n n s . {\displaystyle \operatorname {DG} (a_{n};s)=\sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}}.} The Dirichlet series generating function is especially useful when an is a multiplicative function, in which case it has an Euler product expression in terms of the function's Bell series: DG ⁡ ( a n ; s ) = ∏ p BG p ⁡ ( a n ; p − s ) . {\displaystyle \operatorname {DG} (a_{n};s)=\prod _{p}\operatorname {BG} _{p}(a_{n};p^{-s})\,.} If an is a Dirichlet character then its Dirichlet series generating function is called a Dirichlet L-series. We also have a relation between the pair of coefficients in the Lambert series expansions above and their DGFs. Namely, we can prove that: [ x n ] LG ⁡ ( a n ; x ) = b n {\displaystyle [x^{n}]\operatorname {LG} (a_{n};x)=b_{n}} if and only if DG ⁡ ( a n ; s ) ζ ( s ) = DG ⁡ ( b n ; s ) , {\displaystyle \operatorname {DG} (a_{n};s)\zeta (s)=\operatorname {DG} (b_{n};s),} where ζ(s) is the Riemann zeta function. The sequence ak generated by a Dirichlet series generating function (DGF) corresponding to: DG ⁡ ( a k ; s ) = ζ ( s ) m {\displaystyle \operatorname {DG} (a_{k};s)=\zeta (s)^{m}} has the ordinary generating function: ∑ k = 1 k = n a k x k = x + ( m 1 ) ∑ 2 ≤ a ≤ n x a + ( m 2 ) ∑ a = 2 ∞ ∑ b = 2 ∞ a b ≤ n x a b + ( m 3 ) ∑ a = 2 ∞ ∑ b = 2 ∞ ∑ c = 2 ∞ a b c ≤ n x a b c + ( m 4 ) ∑ a = 2 ∞ ∑ b = 2 ∞ ∑ c = 2 ∞ ∑ d = 2 ∞ a b c d ≤ n x a b c d + ⋯ {\displaystyle \sum _{k=1}^{k=n}a_{k}x^{k}=x+{\binom {m}{1}}\sum _{2\leq a\leq n}x^{a}+{\binom {m}{2}}{\underset {ab\leq n}{\sum _{a=2}^{\infty }\sum _{b=2}^{\infty }}}x^{ab}+{\binom {m}{3}}{\underset {abc\leq n}{\sum _{a=2}^{\infty }\sum _{b=2}^{\infty }\sum _{c=2}^{\infty }}}x^{abc}+{\binom {m}{4}}{\underset {abcd\leq n}{\sum _{a=2}^{\infty }\sum _{b=2}^{\infty }\sum _{c=2}^{\infty }\sum _{d=2}^{\infty }}}x^{abcd}+\cdots } === Polynomial sequence generating functions === The idea of generating functions can be extended to sequences of other objects. Thus, for example, polynomial sequences of binomial type are generated by: e x f ( t ) = ∑ n = 0 ∞ p n ( x ) n ! t n {\displaystyle e^{xf(t)}=\sum _{n=0}^{\infty }{\frac {p_{n}(x)}{n!}}t^{n}} where pn(x) is a sequence of polynomials and f(t) is a function of a certain form. Sheffer sequences are generated in a similar way. See the main article generalized Appell polynomials for more information. Examples of polynomial sequences generated by more complex generating functions include: Appell polynomials, Chebyshev polynomials, difference polynomials, generalized Appell polynomials, and q-difference polynomials. === Other generating functions === Other sequences generated by more complex generating functions include: double exponential generating functions (e.g. the Bell numbers), and Hadamard products of generating functions and diagonal generating functions, together with their corresponding integral transformations. ==== Convolution polynomials ==== Knuth's article titled "Convolution Polynomials" defines a generalized class of convolution polynomial sequences by their special generating functions of the form F ( z ) x = exp ⁡ ( x log ⁡ F ( z ) ) = ∑ n = 0 ∞ f n ( x ) z n , {\displaystyle F(z)^{x}=\exp {\bigl (}x\log F(z){\bigr )}=\sum _{n=0}^{\infty }f_{n}(x)z^{n},} for some analytic function F with a power series expansion such that F(0) = 1. 
We say that a family of polynomials, f0, f1, f2, ..., forms a convolution family if deg fn ≤ n and if the following convolution condition holds for all x, y and for all n ≥ 0: f n ( x + y ) = f n ( x ) f 0 ( y ) + f n − 1 ( x ) f 1 ( y ) + ⋯ + f 1 ( x ) f n − 1 ( y ) + f 0 ( x ) f n ( y ) . {\displaystyle f_{n}(x+y)=f_{n}(x)f_{0}(y)+f_{n-1}(x)f_{1}(y)+\cdots +f_{1}(x)f_{n-1}(y)+f_{0}(x)f_{n}(y).} We see that for non-identically zero convolution families, this definition is equivalent to requiring that the sequence have an ordinary generating function of the first form given above. A sequence of convolution polynomials defined in the notation above has the following properties: the sequence n! · fn(x) is of binomial type; special values of the sequence include fn(1) = [zn] F(z) and fn(0) = δn,0; and for arbitrary (fixed) x , y , t ∈ C {\displaystyle x,y,t\in \mathbb {C} } , these polynomials satisfy convolution formulas of the form f n ( x + y ) = ∑ k = 0 n f k ( x ) f n − k ( y ) f n ( 2 x ) = ∑ k = 0 n f k ( x ) f n − k ( x ) x n f n ( x + y ) = ( x + y ) ∑ k = 0 n k f k ( x ) f n − k ( y ) ( x + y ) f n ( x + y + t n ) x + y + t n = ∑ k = 0 n x f k ( x + t k ) x + t k y f n − k ( y + t ( n − k ) ) y + t ( n − k ) . {\displaystyle {\begin{aligned}f_{n}(x+y)&=\sum _{k=0}^{n}f_{k}(x)f_{n-k}(y)\\f_{n}(2x)&=\sum _{k=0}^{n}f_{k}(x)f_{n-k}(x)\\xnf_{n}(x+y)&=(x+y)\sum _{k=0}^{n}kf_{k}(x)f_{n-k}(y)\\{\frac {(x+y)f_{n}(x+y+tn)}{x+y+tn}}&=\sum _{k=0}^{n}{\frac {xf_{k}(x+tk)}{x+tk}}{\frac {yf_{n-k}(y+t(n-k))}{y+t(n-k)}}.\end{aligned}}} For a fixed non-zero parameter t ∈ C {\displaystyle t\in \mathbb {C} } , we have modified generating functions for these convolution polynomial sequences given by x f n ( x + t n ) ( x + t n ) = [ z n ] F t ( z ) x , {\displaystyle {\frac {xf_{n}(x+tn)}{(x+tn)}}=\left[z^{n}\right]{\mathcal {F}}_{t}(z)^{x},} where 𝓕t(z) is implicitly defined by a functional equation of the form 𝓕t(z) = F(z𝓕t(z)^t). Moreover, we can use matrix methods (as in the reference) to prove that given two convolution polynomial sequences, ⟨ fn(x) ⟩ and ⟨ gn(x) ⟩, with respective corresponding generating functions, F(z)x and G(z)x, then for arbitrary t we have the identity [ z n ] ( G ( z ) F ( z G ( z ) t ) ) x = ∑ k = 0 n f k ( x ) g n − k ( x + t k ) . {\displaystyle \left[z^{n}\right]\left(G(z)F\left(zG(z)^{t}\right)\right)^{x}=\sum _{k=0}^{n}f_{k}(x)g_{n-k}(x+tk).} Examples of convolution polynomial sequences include the binomial power series, 𝓑t(z) = 1 + z𝓑t(z)^t, the so-termed tree polynomials, the Bell numbers, B(n), the Laguerre polynomials, and the Stirling convolution polynomials. == Ordinary generating functions == === Examples for simple sequences === Polynomials are a special case of ordinary generating functions, corresponding to finite sequences, or equivalently sequences that vanish after a certain point. These are important in that many finite sequences can usefully be interpreted as generating functions, such as the Poincaré polynomial and others. A fundamental generating function is that of the constant sequence 1, 1, 1, 1, 1, 1, 1, 1, 1, ..., whose ordinary generating function is the geometric series ∑ n = 0 ∞ x n = 1 1 − x . {\displaystyle \sum _{n=0}^{\infty }x^{n}={\frac {1}{1-x}}.} The left-hand side is the Maclaurin series expansion of the right-hand side. 
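This expansion is also easy to check with a computer algebra system, alongside the algebraic justifications that follow. A minimal sketch in Python using sympy (the truncation order 8 is an arbitrary illustrative choice):

```python
import sympy as sp

x = sp.symbols('x')

# Expand 1/(1 - x) as a Maclaurin series truncated at order 8; sympy
# returns 1 + x + x**2 + ... + x**7 + O(x**8).
expansion = sp.series(1 / (1 - x), x, 0, 8)
print(expansion)

# Stripping the order term and reading off coefficients recovers the
# constant sequence 1, 1, 1, ...
print([expansion.removeO().coeff(x, n) for n in range(8)])  # [1, 1, ..., 1]
```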
Alternatively, the equality can be justified by multiplying the power series on the left by 1 − x, and checking that the result is the constant power series 1 (in other words, that all coefficients except the one of x^0 are equal to 0). Moreover, there can be no other power series with this property. The left-hand side therefore designates the multiplicative inverse of 1 − x in the ring of power series. Expressions for the ordinary generating function of other sequences are easily derived from this one. For instance, the substitution x → ax gives the generating function for the geometric sequence 1, a, a^2, a^3, ... for any constant a: ∑ n = 0 ∞ ( a x ) n = 1 1 − a x . {\displaystyle \sum _{n=0}^{\infty }(ax)^{n}={\frac {1}{1-ax}}.} (The equality also follows directly from the fact that the left-hand side is the Maclaurin series expansion of the right-hand side.) In particular, ∑ n = 0 ∞ ( − 1 ) n x n = 1 1 + x . {\displaystyle \sum _{n=0}^{\infty }(-1)^{n}x^{n}={\frac {1}{1+x}}.} One can also introduce regular gaps in the sequence by replacing x by some power of x, so for instance for the sequence 1, 0, 1, 0, 1, 0, 1, 0, ... (which skips over x, x^3, x^5, ...) one gets the generating function ∑ n = 0 ∞ x 2 n = 1 1 − x 2 . {\displaystyle \sum _{n=0}^{\infty }x^{2n}={\frac {1}{1-x^{2}}}.} By squaring the initial generating function, or by finding the derivative of both sides with respect to x and making a change of running variable n → n + 1, one sees that the coefficients form the sequence 1, 2, 3, 4, 5, ..., so one has ∑ n = 0 ∞ ( n + 1 ) x n = 1 ( 1 − x ) 2 , {\displaystyle \sum _{n=0}^{\infty }(n+1)x^{n}={\frac {1}{(1-x)^{2}}},} and the third power has as coefficients the triangular numbers 1, 3, 6, 10, 15, 21, ... whose term n is the binomial coefficient ( n + 2 2 ) {\displaystyle {\tbinom {n+2}{2}}} , so that ∑ n = 0 ∞ ( n + 2 2 ) x n = 1 ( 1 − x ) 3 . {\displaystyle \sum _{n=0}^{\infty }{\binom {n+2}{2}}x^{n}={\frac {1}{(1-x)^{3}}}.} More generally, for any non-negative integer k and non-zero real value a, it is true that ∑ n = 0 ∞ a n ( n + k k ) x n = 1 ( 1 − a x ) k + 1 . {\displaystyle \sum _{n=0}^{\infty }a^{n}{\binom {n+k}{k}}x^{n}={\frac {1}{(1-ax)^{k+1}}}\,.} Since 2 ( n + 2 2 ) − 3 ( n + 1 1 ) + ( n 0 ) = 2 ( n + 1 ) ( n + 2 ) 2 − 3 ( n + 1 ) + 1 = n 2 , {\displaystyle 2{\binom {n+2}{2}}-3{\binom {n+1}{1}}+{\binom {n}{0}}=2{\frac {(n+1)(n+2)}{2}}-3(n+1)+1=n^{2},} one can find the ordinary generating function for the sequence 0, 1, 4, 9, 16, ... of square numbers by linear combination of binomial-coefficient generating functions: G ( n 2 ; x ) = ∑ n = 0 ∞ n 2 x n = 2 ( 1 − x ) 3 − 3 ( 1 − x ) 2 + 1 1 − x = x ( x + 1 ) ( 1 − x ) 3 . {\displaystyle G(n^{2};x)=\sum _{n=0}^{\infty }n^{2}x^{n}={\frac {2}{(1-x)^{3}}}-{\frac {3}{(1-x)^{2}}}+{\frac {1}{1-x}}={\frac {x(x+1)}{(1-x)^{3}}}.} We may alternatively generate this same sequence of squares as a sum of derivatives of the geometric series, in the following form: G ( n 2 ; x ) = ∑ n = 0 ∞ n 2 x n = ∑ n = 0 ∞ n ( n − 1 ) x n + ∑ n = 0 ∞ n x n = x 2 D 2 [ 1 1 − x ] + x D [ 1 1 − x ] = 2 x 2 ( 1 − x ) 3 + x ( 1 − x ) 2 = x ( x + 1 ) ( 1 − x ) 3 . {\displaystyle {\begin{aligned}G(n^{2};x)&=\sum _{n=0}^{\infty }n^{2}x^{n}\\[4px]&=\sum _{n=0}^{\infty }n(n-1)x^{n}+\sum _{n=0}^{\infty }nx^{n}\\[4px]&=x^{2}D^{2}\left[{\frac {1}{1-x}}\right]+xD\left[{\frac {1}{1-x}}\right]\\[4px]&={\frac {2x^{2}}{(1-x)^{3}}}+{\frac {x}{(1-x)^{2}}}={\frac {x(x+1)}{(1-x)^{3}}}.\end{aligned}}} By induction, we can similarly show for positive integers m ≥ 1 that n m = ∑ j = 0 m { m j } n ! 
( n − j ) ! , {\displaystyle n^{m}=\sum _{j=0}^{m}{\begin{Bmatrix}m\\j\end{Bmatrix}}{\frac {n!}{(n-j)!}},} where { m j } denotes a Stirling number of the second kind and where the generating function ∑ n = 0 ∞ n ! ( n − j ) ! z n = j ! ⋅ z j ( 1 − z ) j + 1 , {\displaystyle \sum _{n=0}^{\infty }{\frac {n!}{(n-j)!}}\,z^{n}={\frac {j!\cdot z^{j}}{(1-z)^{j+1}}},} so that we can form the analogous generating functions over the integral mth powers generalizing the result in the square case above. In particular, since we can write z k ( 1 − z ) k + 1 = ∑ i = 0 k ( k i ) ( − 1 ) k − i ( 1 − z ) i + 1 , {\displaystyle {\frac {z^{k}}{(1-z)^{k+1}}}=\sum _{i=0}^{k}{\binom {k}{i}}{\frac {(-1)^{k-i}}{(1-z)^{i+1}}},} we can apply a well-known finite sum identity involving the Stirling numbers to obtain that ∑ n = 0 ∞ n m z n = ∑ j = 0 m { m + 1 j + 1 } ( − 1 ) m − j j ! ( 1 − z ) j + 1 . {\displaystyle \sum _{n=0}^{\infty }n^{m}z^{n}=\sum _{j=0}^{m}{\begin{Bmatrix}m+1\\j+1\end{Bmatrix}}{\frac {(-1)^{m-j}j!}{(1-z)^{j+1}}}.} === Rational functions === The ordinary generating function of a sequence can be expressed as a rational function (the ratio of two finite-degree polynomials) if and only if the sequence is a linear recursive sequence with constant coefficients; this generalizes the examples above. Conversely, every sequence generated by a fraction of polynomials satisfies a linear recurrence with constant coefficients; these coefficients are identical to the coefficients of the fraction denominator polynomial (so they can be directly read off). This observation shows that it is easy to solve for the generating functions of sequences defined by a linear finite difference equation with constant coefficients, and hence for explicit closed-form formulas for the coefficients of these generating functions. The prototypical example here is to derive Binet's formula for the Fibonacci numbers via generating function techniques. We also notice that the class of rational generating functions precisely corresponds to the generating functions that enumerate quasi-polynomial sequences of the form f n = p 1 ( n ) ρ 1 n + ⋯ + p ℓ ( n ) ρ ℓ n , {\displaystyle f_{n}=p_{1}(n)\rho _{1}^{n}+\cdots +p_{\ell }(n)\rho _{\ell }^{n},} where the reciprocal roots, ρ i ∈ C {\displaystyle \rho _{i}\in \mathbb {C} } , are fixed scalars and where pi(n) is a polynomial in n for all 1 ≤ i ≤ ℓ. In general, Hadamard products of rational functions produce rational generating functions. Similarly, if F ( s , t ) := ∑ m , n ≥ 0 f ( m , n ) s m t n {\displaystyle F(s,t):=\sum _{m,n\geq 0}f(m,n)s^{m}t^{n}} is a bivariate rational generating function, then its corresponding diagonal generating function, diag ⁡ ( F ) := ∑ n = 0 ∞ f ( n , n ) z n , {\displaystyle \operatorname {diag} (F):=\sum _{n=0}^{\infty }f(n,n)z^{n},} is algebraic. For example, if we let F ( s , t ) := ∑ i , j ≥ 0 ( i + j i ) s i t j = 1 1 − s − t , {\displaystyle F(s,t):=\sum _{i,j\geq 0}{\binom {i+j}{i}}s^{i}t^{j}={\frac {1}{1-s-t}},} then this generating function's diagonal coefficient generating function is given by the well-known OGF formula diag ⁡ ( F ) = ∑ n = 0 ∞ ( 2 n n ) z n = 1 1 − 4 z . {\displaystyle \operatorname {diag} (F)=\sum _{n=0}^{\infty }{\binom {2n}{n}}z^{n}={\frac {1}{\sqrt {1-4z}}}.} This result is computed in many ways, including Cauchy's integral formula or contour integration, taking complex residues, or by direct manipulations of formal power series in two variables. 
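Returning to the one-variable setting, the correspondence between rational generating functions and constant-coefficient recurrences is easy to see computationally. A minimal sympy sketch (the truncation order is our own choice) recovers the Fibonacci numbers from their standard generating function z/(1 − z − z²), the example mentioned above in connection with Binet's formula:

```python
import sympy as sp

z = sp.symbols('z')

# OGF of the Fibonacci numbers (F0 = 0, F1 = 1); the denominator
# 1 - z - z**2 encodes the recurrence F(n) = F(n-1) + F(n-2).
G = z / (1 - z - z**2)

# Read the sequence back off as Maclaurin coefficients.
series = sp.series(G, z, 0, 12).removeO()
fib = [series.coeff(z, n) for n in range(12)]
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]

# The recurrence can be read directly off the denominator coefficients.
assert all(fib[n] == fib[n - 1] + fib[n - 2] for n in range(2, 12))
```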
=== Operations on generating functions === ==== Multiplication yields convolution ==== Multiplication of ordinary generating functions yields a discrete convolution (the Cauchy product) of the sequences. For example, the sequence of cumulative sums (compare to the slightly more general Euler–Maclaurin formula) ( a 0 , a 0 + a 1 , a 0 + a 1 + a 2 , … ) {\displaystyle (a_{0},a_{0}+a_{1},a_{0}+a_{1}+a_{2},\ldots )} of a sequence with ordinary generating function G(an; x) has the generating function G ( a n ; x ) ⋅ 1 1 − x {\displaystyle G(a_{n};x)\cdot {\frac {1}{1-x}}} because ⁠1/1 − x⁠ is the ordinary generating function for the sequence (1, 1, ...). See also the section on convolutions in the applications part of this article below for further examples of problem solving with generating-function convolutions and their interpretations. ==== Shifting sequence indices ==== For integers m ≥ 1, we have the following two analogous identities for the modified generating functions enumerating the shifted sequence variants of ⟨ gn − m ⟩ and ⟨ gn + m ⟩, respectively: z m G ( z ) = ∑ n = m ∞ g n − m z n G ( z ) − g 0 − g 1 z − ⋯ − g m − 1 z m − 1 z m = ∑ n = 0 ∞ g n + m z n . {\displaystyle {\begin{aligned}&z^{m}G(z)=\sum _{n=m}^{\infty }g_{n-m}z^{n}\\[4px]&{\frac {G(z)-g_{0}-g_{1}z-\cdots -g_{m-1}z^{m-1}}{z^{m}}}=\sum _{n=0}^{\infty }g_{n+m}z^{n}.\end{aligned}}} ==== Differentiation and integration of generating functions ==== We have the following respective power series expansions for the first derivative of a generating function and its integral: G ′ ( z ) = ∑ n = 0 ∞ ( n + 1 ) g n + 1 z n z ⋅ G ′ ( z ) = ∑ n = 0 ∞ n g n z n ∫ 0 z G ( t ) d t = ∑ n = 1 ∞ g n − 1 n z n . {\displaystyle {\begin{aligned}G'(z)&=\sum _{n=0}^{\infty }(n+1)g_{n+1}z^{n}\\[4px]z\cdot G'(z)&=\sum _{n=0}^{\infty }ng_{n}z^{n}\\[4px]\int _{0}^{z}G(t)\,dt&=\sum _{n=1}^{\infty }{\frac {g_{n-1}}{n}}z^{n}.\end{aligned}}} The differentiation–multiplication operation of the second identity can be repeated k times to multiply the sequence by n^k, but that requires alternating between differentiation and multiplication. If instead one performs k differentiations in sequence, the effect is to multiply by the kth falling factorial: z k G ( k ) ( z ) = ∑ n = 0 ∞ n k _ g n z n = ∑ n = 0 ∞ n ( n − 1 ) ⋯ ( n − k + 1 ) g n z n for all k ∈ N . {\displaystyle z^{k}G^{(k)}(z)=\sum _{n=0}^{\infty }n^{\underline {k}}g_{n}z^{n}=\sum _{n=0}^{\infty }n(n-1)\dotsb (n-k+1)g_{n}z^{n}\quad {\text{for all }}k\in \mathbb {N} .} Using the Stirling numbers of the second kind, that can be turned into another formula for multiplying by n k {\displaystyle n^{k}} as follows (see the main article on generating function transformations): ∑ j = 0 k { k j } z j F ( j ) ( z ) = ∑ n = 0 ∞ n k f n z n for all k ∈ N . {\displaystyle \sum _{j=0}^{k}{\begin{Bmatrix}k\\j\end{Bmatrix}}z^{j}F^{(j)}(z)=\sum _{n=0}^{\infty }n^{k}f_{n}z^{n}\quad {\text{for all }}k\in \mathbb {N} .} A negative-order reversal of this sequence-powers formula, corresponding to the operation of repeated integration, is given by the zeta series transformation and its generalizations, defined as a derivative-based transformation of generating functions, or alternately termwise, by performing an integral transformation on the sequence generating function. Related operations of performing fractional integration on a sequence generating function are discussed in the main article on generating function transformations. 
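The multiplication, differentiation, and partial-sum operations above can be checked directly on a concrete series. A minimal sympy sketch, using the sequence 2^n and a truncation order of our own choosing:

```python
import sympy as sp

z = sp.symbols('z')
N = 8  # illustrative truncation order

# Use G(z) = 1/(1 - 2z), the OGF of g_n = 2**n, as a test sequence.
G = 1 / (1 - 2*z)

def coeffs(f):
    """First N Maclaurin coefficients of f."""
    s = sp.series(f, z, 0, N).removeO()
    return [s.coeff(z, n) for n in range(N)]

# Multiplying by 1/(1 - z) convolves with (1, 1, 1, ...), giving the
# cumulative sums 2**0 + 2**1 + ... + 2**n = 2**(n+1) - 1.
print(coeffs(G / (1 - z)))              # [1, 3, 7, 15, 31, 63, 127, 255]

# z * G'(z) multiplies the nth coefficient by n.
print(coeffs(z * sp.diff(G, z)))        # [0, 2, 8, 24, 64, 160, 384, 896]

# z**2 * G''(z) multiplies by the falling factorial n(n - 1).
print(coeffs(z**2 * sp.diff(G, z, 2)))  # [0, 0, 8, 48, 192, 640, 1920, 5376]
```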
==== Enumerating arithmetic progressions of sequences ==== In this section we give formulas for generating functions enumerating the sequence {fan + b} given an ordinary generating function F(z), where a ≥ 2, 0 ≤ b < a, and a and b are integers (see the main article on transformations). For a = 2, this is simply the familiar decomposition of a function into even and odd parts (i.e., even and odd powers): ∑ n = 0 ∞ f 2 n z 2 n = F ( z ) + F ( − z ) 2 ∑ n = 0 ∞ f 2 n + 1 z 2 n + 1 = F ( z ) − F ( − z ) 2 . {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }f_{2n}z^{2n}&={\frac {F(z)+F(-z)}{2}}\\[4px]\sum _{n=0}^{\infty }f_{2n+1}z^{2n+1}&={\frac {F(z)-F(-z)}{2}}.\end{aligned}}} More generally, suppose that a ≥ 3 and that ωa = exp ⁠2πi/a⁠ denotes a primitive ath root of unity. Then, as an application of the discrete Fourier transform, we have the formula ∑ n = 0 ∞ f a n + b z a n + b = 1 a ∑ m = 0 a − 1 ω a − m b F ( ω a m z ) . {\displaystyle \sum _{n=0}^{\infty }f_{an+b}z^{an+b}={\frac {1}{a}}\sum _{m=0}^{a-1}\omega _{a}^{-mb}F\left(\omega _{a}^{m}z\right).} For integers m ≥ 1, another useful formula providing somewhat reversed floored arithmetic progressions — effectively repeating each coefficient m times — is given by the identity ∑ n = 0 ∞ f ⌊ n m ⌋ z n = 1 − z m 1 − z F ( z m ) = ( 1 + z + ⋯ + z m − 2 + z m − 1 ) F ( z m ) . {\displaystyle \sum _{n=0}^{\infty }f_{\left\lfloor {\frac {n}{m}}\right\rfloor }z^{n}={\frac {1-z^{m}}{1-z}}F(z^{m})=\left(1+z+\cdots +z^{m-2}+z^{m-1}\right)F(z^{m}).} === P-recursive sequences and holonomic generating functions === ==== Definitions ==== A formal power series (or function) F(z) is said to be holonomic if it satisfies a linear differential equation of the form c 0 ( z ) F ( r ) ( z ) + c 1 ( z ) F ( r − 1 ) ( z ) + ⋯ + c r ( z ) F ( z ) = 0 , {\displaystyle c_{0}(z)F^{(r)}(z)+c_{1}(z)F^{(r-1)}(z)+\cdots +c_{r}(z)F(z)=0,} where the coefficients ci(z) are in the field of rational functions, C ( z ) {\displaystyle \mathbb {C} (z)} . Equivalently, F ( z ) {\displaystyle F(z)} is holonomic if the vector space over C ( z ) {\displaystyle \mathbb {C} (z)} spanned by the set of all of its derivatives is finite-dimensional. Since we can clear denominators if need be in the previous equation, we may assume that the functions ci(z) are polynomials in z. This yields an equivalent condition: a generating function is holonomic if its coefficients satisfy a P-recurrence of the form c ^ s ( n ) f n + s + c ^ s − 1 ( n ) f n + s − 1 + ⋯ + c ^ 0 ( n ) f n = 0 , {\displaystyle {\widehat {c}}_{s}(n)f_{n+s}+{\widehat {c}}_{s-1}(n)f_{n+s-1}+\cdots +{\widehat {c}}_{0}(n)f_{n}=0,} for all large enough n ≥ n0 and where the ĉi(n) are fixed finite-degree polynomials in n. In other words, the properties that a sequence be P-recursive and have a holonomic generating function are equivalent. Holonomic functions are closed under the Hadamard product operation ⊙ on generating functions. ==== Examples ==== The functions e^z, log z, cos z, arcsin z, 1 + z {\displaystyle {\sqrt {1+z}}} , the dilogarithm function Li2(z), the generalized hypergeometric functions pFq(...; ...; z) and the functions defined by the power series ∑ n = 0 ∞ z n ( n ! ) 2 {\displaystyle \sum _{n=0}^{\infty }{\frac {z^{n}}{(n!)^{2}}}} and the non-convergent ∑ n = 0 ∞ n ! ⋅ z n {\displaystyle \sum _{n=0}^{\infty }n!\cdot z^{n}} are all holonomic. 
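As a small illustration of the equivalence between holonomic generating functions and P-recursive coefficient sequences, the following sympy sketch checks both sides for the Catalan numbers; the recurrence and the first-order differential equation used are standard facts about this sequence, stated here as assumptions of the sketch:

```python
import sympy as sp

n, z = sp.symbols('n z')

# The Catalan numbers C_n = binomial(2n, n)/(n + 1) are P-recursive:
# (n + 2) C_{n+1} - 2(2n + 1) C_n = 0, a recurrence with polynomial
# coefficients of exactly the shape displayed above.
C = lambda k: sp.binomial(2*k, k) / (k + 1)
print(sp.combsimp((n + 2)*C(n + 1) - 2*(2*n + 1)*C(n)))  # 0

# Correspondingly, their generating function (1 - sqrt(1 - 4z))/(2z) is
# holonomic: it satisfies z(1 - 4z) F'(z) + (1 - 2z) F(z) - 1 = 0.
F = (1 - sp.sqrt(1 - 4*z)) / (2*z)
print(sp.simplify(z*(1 - 4*z)*sp.diff(F, z) + (1 - 2*z)*F - 1))  # 0
```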
Examples of P-recursive sequences with holonomic generating functions include fn ≔ ⁠1/(n + 1)⁠ (2n n) and fn ≔ ⁠2^n/(n^2 + 1)⁠, whereas sequences such as n {\displaystyle {\sqrt {n}}} and log n are not P-recursive due to the nature of singularities in their corresponding generating functions. Similarly, functions with infinitely many singularities such as tan z, sec z, and Γ(z) are not holonomic functions. ==== Software for working with P-recursive sequences and holonomic generating functions ==== Tools for processing and working with P-recursive sequences in Mathematica include the software packages provided for non-commercial use on the RISC Combinatorics Group algorithmic combinatorics software site. Though the suite is mostly closed-source, particularly powerful tools in it are provided by the Guess package for guessing P-recurrences for arbitrary input sequences (useful for experimental mathematics and exploration) and the Sigma package, which is able to find P-recurrences for many sums and solve for closed-form solutions to P-recurrences involving generalized harmonic numbers. Other packages listed on this particular RISC site are targeted at working with holonomic generating functions specifically. === Relation to discrete-time Fourier transform === When the series converges absolutely, G ( a n ; e − i ω ) = ∑ n = 0 ∞ a n e − i ω n {\displaystyle G\left(a_{n};e^{-i\omega }\right)=\sum _{n=0}^{\infty }a_{n}e^{-i\omega n}} is the discrete-time Fourier transform of the sequence a0, a1, .... === Asymptotic growth of a sequence === In calculus, often the growth rate of the coefficients of a power series can be used to deduce a radius of convergence for the power series. The reverse can also hold; often the radius of convergence for a generating function can be used to deduce the asymptotic growth of the underlying sequence. For instance, if an ordinary generating function G(an; x) with a finite radius of convergence r can be written as G ( a n ; x ) = A ( x ) + B ( x ) ( 1 − x r ) − β x α {\displaystyle G(a_{n};x)={\frac {A(x)+B(x)\left(1-{\frac {x}{r}}\right)^{-\beta }}{x^{\alpha }}}} where each of A(x) and B(x) is a function that is analytic to a radius of convergence greater than r (or is entire), and where B(r) ≠ 0 then a n ∼ B ( r ) r α Γ ( β ) n β − 1 ( 1 r ) n ∼ B ( r ) r α ( n + β − 1 n ) ( 1 r ) n = B ( r ) r α ( ( β n ) ) ( 1 r ) n , {\displaystyle a_{n}\sim {\frac {B(r)}{r^{\alpha }\Gamma (\beta )}}\,n^{\beta -1}\left({\frac {1}{r}}\right)^{n}\sim {\frac {B(r)}{r^{\alpha }}}{\binom {n+\beta -1}{n}}\left({\frac {1}{r}}\right)^{n}={\frac {B(r)}{r^{\alpha }}}\left(\!\!{\binom {\beta }{n}}\!\!\right)\left({\frac {1}{r}}\right)^{n}\,,} using the gamma function, a binomial coefficient, or a multiset coefficient. Note that the limit as n goes to infinity of the ratio of an to any of these expressions is guaranteed to be 1, not merely that an is proportional to them. Often this approach can be iterated to generate several terms in an asymptotic series for an. In particular, G ( a n − B ( r ) r α ( n + β − 1 n ) ( 1 r ) n ; x ) = G ( a n ; x ) − B ( r ) r α ( 1 − x r ) − β . {\displaystyle G\left(a_{n}-{\frac {B(r)}{r^{\alpha }}}{\binom {n+\beta -1}{n}}\left({\frac {1}{r}}\right)^{n};x\right)=G(a_{n};x)-{\frac {B(r)}{r^{\alpha }}}\left(1-{\frac {x}{r}}\right)^{-\beta }\,.} The asymptotic growth of the coefficients of this generating function can then be sought via the finding of A, B, α, β, and r to describe the generating function, as above. 
Similar asymptotic analysis is possible for exponential generating functions; with an exponential generating function, it is ⁠an/n!⁠ that grows according to these asymptotic formulae. Generally, if the generating function of one sequence minus the generating function of a second sequence has a radius of convergence that is larger than the radius of convergence of the individual generating functions then the two sequences have the same asymptotic growth. ==== Asymptotic growth of the sequence of squares ==== As derived above, the ordinary generating function for the sequence of squares is: G ( n 2 ; x ) = x ( x + 1 ) ( 1 − x ) 3 . {\displaystyle G(n^{2};x)={\frac {x(x+1)}{(1-x)^{3}}}.} With r = 1, α = −1, β = 3, A(x) = 0, and B(x) = x + 1, we can verify that the coefficients grow like the squares, as expected: a n ∼ B ( r ) r α Γ ( β ) n β − 1 ( 1 r ) n = 1 + 1 1 − 1 Γ ( 3 ) n 3 − 1 ( 1 1 ) n = n 2 . {\displaystyle a_{n}\sim {\frac {B(r)}{r^{\alpha }\Gamma (\beta )}}\,n^{\beta -1}\left({\frac {1}{r}}\right)^{n}={\frac {1+1}{1^{-1}\,\Gamma (3)}}\,n^{3-1}\left({\frac {1}{1}}\right)^{n}=n^{2}.} ==== Asymptotic growth of the Catalan numbers ==== The ordinary generating function for the Catalan numbers is G ( C n ; x ) = 1 − 1 − 4 x 2 x . {\displaystyle G(C_{n};x)={\frac {1-{\sqrt {1-4x}}}{2x}}.} With r = ⁠1/4⁠, α = 1, β = −⁠1/2⁠, A(x) = ⁠1/2⁠, and B(x) = −⁠1/2⁠, we can conclude that, for the Catalan numbers: C n ∼ B ( r ) r α Γ ( β ) n β − 1 ( 1 r ) n = − 1 2 ( 1 4 ) 1 Γ ( − 1 2 ) n − 1 2 − 1 ( 1 1 4 ) n = 4 n n 3 2 π . {\displaystyle C_{n}\sim {\frac {B(r)}{r^{\alpha }\Gamma (\beta )}}\,n^{\beta -1}\left({\frac {1}{r}}\right)^{n}={\frac {-{\frac {1}{2}}}{\left({\frac {1}{4}}\right)^{1}\Gamma \left(-{\frac {1}{2}}\right)}}\,n^{-{\frac {1}{2}}-1}\left({\frac {1}{\,{\frac {1}{4}}\,}}\right)^{n}={\frac {4^{n}}{n^{\frac {3}{2}}{\sqrt {\pi }}}}.} === Bivariate and multivariate generating functions === Generating functions can be generalized to arrays with multiple indices, using several variables. Such non-polynomial multiple-sum examples are called multivariate generating functions, or super generating functions. For two variables, these are often called bivariate generating functions. ==== Bivariate case ==== The ordinary generating function of a two-dimensional array am,n (where n and m are natural numbers) is: G ( a m , n ; x , y ) = ∑ m , n = 0 ∞ a m , n x m y n . {\displaystyle G(a_{m,n};x,y)=\sum _{m,n=0}^{\infty }a_{m,n}x^{m}y^{n}.} For instance, since (1 + x)^n is the ordinary generating function for binomial coefficients for a fixed n, one may ask for a bivariate generating function that generates the binomial coefficients ( n k ) for all k and n. To do this, consider (1 + x)^n itself as a sequence in n, and find the generating function in y that has these sequence values as coefficients. Since the generating function for a^n is: 1 1 − a y , {\displaystyle {\frac {1}{1-ay}},} the generating function for the binomial coefficients is: ∑ n , k ( n k ) x k y n = 1 1 − ( 1 + x ) y = 1 1 − y − x y . {\displaystyle \sum _{n,k}{\binom {n}{k}}x^{k}y^{n}={\frac {1}{1-(1+x)y}}={\frac {1}{1-y-xy}}.} Other examples include the following two-variable generating functions for the binomial coefficients, the Stirling numbers, and the Eulerian numbers, where w and z denote the two variables: e z + w z = ∑ m , n ≥ 0 ( n m ) w m z n n ! e w ( e z − 1 ) = ∑ m , n ≥ 0 { n m } w m z n n ! 1 ( 1 − z ) w = ∑ m , n ≥ 0 [ n m ] w m z n n ! 1 − w e ( w − 1 ) z − w = ∑ m , n ≥ 0 ⟨ n m ⟩ w m z n n ! 
e w − e z w e z − z e w = ∑ m , n ≥ 0 ⟨ m + n + 1 m ⟩ w m z n ( m + n + 1 ) ! . {\displaystyle {\begin{aligned}e^{z+wz}&=\sum _{m,n\geq 0}{\binom {n}{m}}w^{m}{\frac {z^{n}}{n!}}\\[4px]e^{w(e^{z}-1)}&=\sum _{m,n\geq 0}{\begin{Bmatrix}n\\m\end{Bmatrix}}w^{m}{\frac {z^{n}}{n!}}\\[4px]{\frac {1}{(1-z)^{w}}}&=\sum _{m,n\geq 0}{\begin{bmatrix}n\\m\end{bmatrix}}w^{m}{\frac {z^{n}}{n!}}\\[4px]{\frac {1-w}{e^{(w-1)z}-w}}&=\sum _{m,n\geq 0}\left\langle {\begin{matrix}n\\m\end{matrix}}\right\rangle w^{m}{\frac {z^{n}}{n!}}\\[4px]{\frac {e^{w}-e^{z}}{we^{z}-ze^{w}}}&=\sum _{m,n\geq 0}\left\langle {\begin{matrix}m+n+1\\m\end{matrix}}\right\rangle {\frac {w^{m}z^{n}}{(m+n+1)!}}.\end{aligned}}} ==== Multivariate case ==== Multivariate generating functions arise in practice when calculating the number of contingency tables of non-negative integers with specified row and column totals. Suppose the table has r rows and c columns; the row sums are t1, t2 ... tr and the column sums are s1, s2 ... sc. Then, according to I. J. Good, the number of such tables is the coefficient of: x 1 t 1 ⋯ x r t r y 1 s 1 ⋯ y c s c {\displaystyle x_{1}^{t_{1}}\cdots x_{r}^{t_{r}}y_{1}^{s_{1}}\cdots y_{c}^{s_{c}}} in: ∏ i = 1 r ∏ j = 1 c 1 1 − x i y j . {\displaystyle \prod _{i=1}^{r}\prod _{j=1}^{c}{\frac {1}{1-x_{i}y_{j}}}.} === Representation by continued fractions (Jacobi-type J-fractions) === ==== Definitions ==== Expansions of (formal) Jacobi-type and Stieltjes-type continued fractions (J-fractions and S-fractions, respectively) whose hth rational convergents represent 2h-order accurate power series are another way to express the typically divergent ordinary generating functions for many special one- and two-variate sequences. The particular form of the Jacobi-type continued fractions (J-fractions) is expanded as in the following equation and has the corresponding power series expansion with respect to z for some specific, application-dependent component sequences, {abi} and {ci}, where z ≠ 0 denotes the formal variable in the second power series expansion given below: J [ ∞ ] ( z ) = 1 1 − c 1 z − ab 2 z 2 1 − c 2 z − ab 3 z 2 ⋱ = 1 + c 1 z + ( ab 2 + c 1 2 ) z 2 + ( 2 ab 2 c 1 + c 1 3 + ab 2 c 2 ) z 3 + ⋯ {\displaystyle {\begin{aligned}J^{[\infty ]}(z)&={\cfrac {1}{1-c_{1}z-{\cfrac {{\text{ab}}_{2}z^{2}}{1-c_{2}z-{\cfrac {{\text{ab}}_{3}z^{2}}{\ddots }}}}}}\\[4px]&=1+c_{1}z+\left({\text{ab}}_{2}+c_{1}^{2}\right)z^{2}+\left(2{\text{ab}}_{2}c_{1}+c_{1}^{3}+{\text{ab}}_{2}c_{2}\right)z^{3}+\cdots \end{aligned}}} The coefficients of z n {\displaystyle z^{n}} , denoted in shorthand by jn ≔ [zn] J[∞](z), in the previous equations correspond to matrix solutions of the equations: [ k 0 , 1 k 1 , 1 0 0 ⋯ k 0 , 2 k 1 , 2 k 2 , 2 0 ⋯ k 0 , 3 k 1 , 3 k 2 , 3 k 3 , 3 ⋯ ⋮ ⋮ ⋮ ⋮ ] = [ k 0 , 0 0 0 0 ⋯ k 0 , 1 k 1 , 1 0 0 ⋯ k 0 , 2 k 1 , 2 k 2 , 2 0 ⋯ ⋮ ⋮ ⋮ ⋮ ] ⋅ [ c 1 1 0 0 ⋯ ab 2 c 2 1 0 ⋯ 0 ab 3 c 3 1 ⋯ ⋮ ⋮ ⋮ ⋮ ] , {\displaystyle {\begin{bmatrix}k_{0,1}&k_{1,1}&0&0&\cdots \\k_{0,2}&k_{1,2}&k_{2,2}&0&\cdots \\k_{0,3}&k_{1,3}&k_{2,3}&k_{3,3}&\cdots \\\vdots &\vdots &\vdots &\vdots \end{bmatrix}}={\begin{bmatrix}k_{0,0}&0&0&0&\cdots \\k_{0,1}&k_{1,1}&0&0&\cdots \\k_{0,2}&k_{1,2}&k_{2,2}&0&\cdots \\\vdots &\vdots &\vdots &\vdots \end{bmatrix}}\cdot {\begin{bmatrix}c_{1}&1&0&0&\cdots \\{\text{ab}}_{2}&c_{2}&1&0&\cdots \\0&{\text{ab}}_{3}&c_{3}&1&\cdots \\\vdots &\vdots &\vdots &\vdots \end{bmatrix}},} where j0 ≡ k0,0 = 1, jn = k0,n for n ≥ 1, kr,s = 0 if r > s, and where for all integers p, q ≥ 0, we have an addition formula 
given by: j p + q = k 0 , p ⋅ k 0 , q + ∑ i = 1 min ( p , q ) ab 2 ⋯ ab i + 1 × k i , p ⋅ k i , q . {\displaystyle j_{p+q}=k_{0,p}\cdot k_{0,q}+\sum _{i=1}^{\min(p,q)}{\text{ab}}_{2}\cdots {\text{ab}}_{i+1}\times k_{i,p}\cdot k_{i,q}.} ==== Properties of the hth convergent functions ==== For h ≥ 0 (though in practice when h ≥ 2), we can define the rational hth convergents to the infinite J-fraction, J[∞](z), expanded by: Conv h ⁡ ( z ) := P h ( z ) Q h ( z ) = j 0 + j 1 z + ⋯ + j 2 h − 1 z 2 h − 1 + ∑ n = 2 h ∞ j ~ h , n z n {\displaystyle \operatorname {Conv} _{h}(z):={\frac {P_{h}(z)}{Q_{h}(z)}}=j_{0}+j_{1}z+\cdots +j_{2h-1}z^{2h-1}+\sum _{n=2h}^{\infty }{\widetilde {j}}_{h,n}z^{n}} component-wise through the sequences Ph(z) and Qh(z), defined recursively by: P h ( z ) = ( 1 − c h z ) P h − 1 ( z ) − ab h z 2 P h − 2 ( z ) + δ h , 1 Q h ( z ) = ( 1 − c h z ) Q h − 1 ( z ) − ab h z 2 Q h − 2 ( z ) + ( 1 − c 1 z ) δ h , 1 + δ 0 , 1 . {\displaystyle {\begin{aligned}P_{h}(z)&=(1-c_{h}z)P_{h-1}(z)-{\text{ab}}_{h}z^{2}P_{h-2}(z)+\delta _{h,1}\\Q_{h}(z)&=(1-c_{h}z)Q_{h-1}(z)-{\text{ab}}_{h}z^{2}Q_{h-2}(z)+(1-c_{1}z)\delta _{h,1}+\delta _{0,1}.\end{aligned}}} Moreover, the rationality of the convergent function Convh(z) for all h ≥ 2 implies additional finite difference equations and congruence properties satisfied by the sequence of jn, and for Mh ≔ ab2 ⋯ abh + 1 if h ‖ Mh then we have the congruence j n ≡ [ z n ] Conv h ⁡ ( z ) ( mod h ) , {\displaystyle j_{n}\equiv [z^{n}]\operatorname {Conv} _{h}(z){\pmod {h}},} for non-symbolic, determinate choices of the parameter sequences {abi} and {ci} when h ≥ 2, that is, when these sequences do not implicitly depend on an auxiliary parameter such as q, x, or R as in the examples contained in the table below. ==== Examples ==== The next table provides examples of closed-form formulas for the component sequences found computationally (and subsequently proved correct in the cited references) in several special cases of the prescribed sequences, jn, generated by the general expansions of the J-fractions defined in the first subsection. Here we define 0 < |a|, |b|, |q| < 1 and the parameters R , α ∈ Z + {\displaystyle R,\alpha \in \mathbb {Z} ^{+}} and x to be indeterminates with respect to these expansions, where the prescribed sequences enumerated by the expansions of these J-fractions are defined in terms of the q-Pochhammer symbol, Pochhammer symbol, and the binomial coefficients. The radii of convergence of these series corresponding to the definition of the Jacobi-type J-fractions given above are in general different from that of the corresponding power series expansions defining the ordinary generating functions of these sequences. == Examples == === Square numbers === Generating functions for the sequence of square numbers an = n^2 include the ordinary generating function G(n^2; x) = x(x + 1)/(1 − x)^3 derived above, the exponential generating function EG(n^2; x) = x(x + 1)e^x, the Bell series BG_p(n^2; x) = 1/(1 − p^2x), and the Dirichlet series generating function DG(n^2; s) = ζ(s − 2), where ζ(s) is the Riemann zeta function. == Applications == Generating functions are used to: Find a closed formula for a sequence given in a recurrence relation, for example, Fibonacci numbers. Find recurrence relations for sequences—the form of a generating function may suggest a recurrence formula. Find relationships between sequences—if the generating functions of two sequences have a similar form, then the sequences themselves may be related. Explore the asymptotic behaviour of sequences. Prove identities involving sequences. Solve enumeration problems in combinatorics and encode their solutions. Rook polynomials are an example of an application in combinatorics. Evaluate infinite sums. 
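As a tiny instance of the last item, the identity ∑ (n + 1)x^n = 1/(1 − x)^2 derived earlier gives closed forms for convergent numerical series. A minimal sympy sketch (the evaluation point x = 1/2 is our own choice, and lies inside the radius of convergence):

```python
import sympy as sp

x, n = sp.symbols('x n')

# From sum((n+1) x**n) = 1/(1-x)**2 it follows that
# sum(n x**n) = x/(1-x)**2; substituting x = 1/2 evaluates
# the infinite sum 1/2 + 2/4 + 3/8 + ...
closed_form = x / (1 - x)**2
print(closed_form.subs(x, sp.Rational(1, 2)))   # 2

# Cross-check with direct symbolic summation.
print(sp.Sum(n / 2**n, (n, 1, sp.oo)).doit())   # 2
```

The same substitution idea underlies several of the sum evaluations worked out in the next section.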
=== Various techniques: Evaluating sums and tackling other problems with generating functions === ==== Example 1: Formula for sums of harmonic numbers ==== Generating functions give us several methods to manipulate sums and to establish identities between sums. The simplest case occurs when sn = a0 + a1 + ⋯ + an. We then know that S(z) = ⁠A(z)/1 − z⁠ for the corresponding ordinary generating functions. For example, we can manipulate s n = ∑ k = 1 n H k , {\displaystyle s_{n}=\sum _{k=1}^{n}H_{k}\,,} where Hk = 1 + ⁠1/2⁠ + ⋯ + ⁠1/k⁠ are the harmonic numbers. Let H ( z ) = ∑ n = 1 ∞ H n z n {\displaystyle H(z)=\sum _{n=1}^{\infty }{H_{n}z^{n}}} be the ordinary generating function of the harmonic numbers. Then H ( z ) = 1 1 − z ∑ n = 1 ∞ z n n , {\displaystyle H(z)={\frac {1}{1-z}}\sum _{n=1}^{\infty }{\frac {z^{n}}{n}}\,,} and thus S ( z ) = ∑ n = 1 ∞ s n z n = 1 ( 1 − z ) 2 ∑ n = 1 ∞ z n n . {\displaystyle S(z)=\sum _{n=1}^{\infty }{s_{n}z^{n}}={\frac {1}{(1-z)^{2}}}\sum _{n=1}^{\infty }{\frac {z^{n}}{n}}\,.} Using 1 ( 1 − z ) 2 = ∑ n = 0 ∞ ( n + 1 ) z n , {\displaystyle {\frac {1}{(1-z)^{2}}}=\sum _{n=0}^{\infty }(n+1)z^{n}\,,} convolution with the numerator yields s n = ∑ k = 1 n n + 1 − k k = ( n + 1 ) H n − n , {\displaystyle s_{n}=\sum _{k=1}^{n}{\frac {n+1-k}{k}}=(n+1)H_{n}-n\,,} which can also be written as ∑ k = 1 n H k = ( n + 1 ) ( H n + 1 − 1 ) . {\displaystyle \sum _{k=1}^{n}{H_{k}}=(n+1)(H_{n+1}-1)\,.} ==== Example 2: Modified binomial coefficient sums and the binomial transform ==== As another example of using generating functions to relate sequences and manipulate sums, for an arbitrary sequence ⟨ fn ⟩ we define the two sequences of sums s n := ∑ m = 0 n ( n m ) f m 3 n − m s ~ n := ∑ m = 0 n ( n m ) ( m + 1 ) ( m + 2 ) ( m + 3 ) f m 3 n − m , {\displaystyle {\begin{aligned}s_{n}&:=\sum _{m=0}^{n}{\binom {n}{m}}f_{m}3^{n-m}\\[4px]{\tilde {s}}_{n}&:=\sum _{m=0}^{n}{\binom {n}{m}}(m+1)(m+2)(m+3)f_{m}3^{n-m}\,,\end{aligned}}} for all n ≥ 0, and seek to express the second sums in terms of the first. We suggest an approach by generating functions. First, we use the binomial transform to write the generating function for the first sum as S ( z ) = 1 1 − 3 z F ( z 1 − 3 z ) . {\displaystyle S(z)={\frac {1}{1-3z}}F\left({\frac {z}{1-3z}}\right).} Since the generating function for the sequence ⟨ (n + 1)(n + 2)(n + 3) fn ⟩ is given by 6 F ( z ) + 18 z F ′ ( z ) + 9 z 2 F ″ ( z ) + z 3 F ‴ ( z ) {\displaystyle 6F(z)+18zF'(z)+9z^{2}F''(z)+z^{3}F'''(z)} we may write the generating function for the second sum defined above in the form S ~ ( z ) = 6 ( 1 − 3 z ) F ( z 1 − 3 z ) + 18 z ( 1 − 3 z ) 2 F ′ ( z 1 − 3 z ) + 9 z 2 ( 1 − 3 z ) 3 F ″ ( z 1 − 3 z ) + z 3 ( 1 − 3 z ) 4 F ‴ ( z 1 − 3 z ) . {\displaystyle {\tilde {S}}(z)={\frac {6}{(1-3z)}}F\left({\frac {z}{1-3z}}\right)+{\frac {18z}{(1-3z)^{2}}}F'\left({\frac {z}{1-3z}}\right)+{\frac {9z^{2}}{(1-3z)^{3}}}F''\left({\frac {z}{1-3z}}\right)+{\frac {z^{3}}{(1-3z)^{4}}}F'''\left({\frac {z}{1-3z}}\right).} In particular, we may write this modified sum generating function in the form of a ( z ) ⋅ S ( z ) + b ( z ) ⋅ z S ′ ( z ) + c ( z ) ⋅ z 2 S ″ ( z ) + d ( z ) ⋅ z 3 S ‴ ( z ) , {\displaystyle a(z)\cdot S(z)+b(z)\cdot zS'(z)+c(z)\cdot z^{2}S''(z)+d(z)\cdot z^{3}S'''(z),} for a(z) = 6(1 − 3z)^3, b(z) = 18(1 − 3z)^3, c(z) = 9(1 − 3z)^3, and d(z) = (1 − 3z)^3, where (1 − 3z)^3 = 1 − 9z + 27z^2 − 27z^3. 
Finally, it follows that we may express the second sums through the first sums in the following form: s ~ n = [ z n ] ( 6 ( 1 − 3 z ) 3 ∑ n = 0 ∞ s n z n + 18 ( 1 − 3 z ) 3 ∑ n = 0 ∞ n s n z n + 9 ( 1 − 3 z ) 3 ∑ n = 0 ∞ n ( n − 1 ) s n z n + ( 1 − 3 z ) 3 ∑ n = 0 ∞ n ( n − 1 ) ( n − 2 ) s n z n ) = ( n + 1 ) ( n + 2 ) ( n + 3 ) s n − 9 n ( n + 1 ) ( n + 2 ) s n − 1 + 27 ( n − 1 ) n ( n + 1 ) s n − 2 − ( n − 2 ) ( n − 1 ) n s n − 3 . {\displaystyle {\begin{aligned}{\tilde {s}}_{n}&=[z^{n}]\left(6(1-3z)^{3}\sum _{n=0}^{\infty }s_{n}z^{n}+18(1-3z)^{3}\sum _{n=0}^{\infty }ns_{n}z^{n}+9(1-3z)^{3}\sum _{n=0}^{\infty }n(n-1)s_{n}z^{n}+(1-3z)^{3}\sum _{n=0}^{\infty }n(n-1)(n-2)s_{n}z^{n}\right)\\[4px]&=(n+1)(n+2)(n+3)s_{n}-9n(n+1)(n+2)s_{n-1}+27(n-1)n(n+1)s_{n-2}-(n-2)(n-1)ns_{n-3}.\end{aligned}}} ==== Example 3: Generating functions for mutually recursive sequences ==== In this example, we reformulate a generating function example given in Section 7.3 of Concrete Mathematics (see also Section 7.1 of the same reference for pretty pictures of generating function series). In particular, suppose that we seek the total number of ways (denoted Un) to tile a 3-by-n rectangle with unmarked 2-by-1 domino pieces. Let the auxiliary sequence, Vn, be defined as the number of ways to cover a 3-by-n rectangle-minus-corner section of the full rectangle. We seek to use these definitions to give a closed form formula for Un without breaking down this definition further to handle the cases of vertical versus horizontal dominoes. Notice that the ordinary generating functions for our two sequences correspond to the series: U ( z ) = 1 + 3 z 2 + 11 z 4 + 41 z 6 + ⋯ , V ( z ) = z + 4 z 3 + 15 z 5 + 56 z 7 + ⋯ . {\displaystyle {\begin{aligned}U(z)=1+3z^{2}+11z^{4}+41z^{6}+\cdots ,\\V(z)=z+4z^{3}+15z^{5}+56z^{7}+\cdots .\end{aligned}}} If we consider the possible configurations that can be given starting from the left edge of the 3-by-n rectangle, we are able to express the following mutually dependent, or mutually recursive, recurrence relations for our two sequences when n ≥ 2 defined as above where U0 = 1, U1 = 0, V0 = 0, and V1 = 1: U n = 2 V n − 1 + U n − 2 V n = U n − 1 + V n − 2 . {\displaystyle {\begin{aligned}U_{n}&=2V_{n-1}+U_{n-2}\\V_{n}&=U_{n-1}+V_{n-2}.\end{aligned}}} Since we have that for all integers m ≥ 0, the index-shifted generating functions satisfy z m G ( z ) = ∑ n = m ∞ g n − m z n , {\displaystyle z^{m}G(z)=\sum _{n=m}^{\infty }g_{n-m}z^{n}\,,} we can use the initial conditions specified above and the previous two recurrence relations to see that we have the next two equations relating the generating functions for these sequences given by U ( z ) = 2 z V ( z ) + z 2 U ( z ) + 1 V ( z ) = z U ( z ) + z 2 V ( z ) = z 1 − z 2 U ( z ) , {\displaystyle {\begin{aligned}U(z)&=2zV(z)+z^{2}U(z)+1\\V(z)&=zU(z)+z^{2}V(z)={\frac {z}{1-z^{2}}}U(z),\end{aligned}}} which then implies by solving the system of equations (and this is the particular trick to our method here) that U ( z ) = 1 − z 2 1 − 4 z 2 + z 4 = 1 3 − 3 ⋅ 1 1 − ( 2 + 3 ) z 2 + 1 3 + 3 ⋅ 1 1 − ( 2 − 3 ) z 2 . 
{\displaystyle U(z)={\frac {1-z^{2}}{1-4z^{2}+z^{4}}}={\frac {1}{3-{\sqrt {3}}}}\cdot {\frac {1}{1-\left(2+{\sqrt {3}}\right)z^{2}}}+{\frac {1}{3+{\sqrt {3}}}}\cdot {\frac {1}{1-\left(2-{\sqrt {3}}\right)z^{2}}}.} Thus, by performing algebraic simplifications on the second partial fraction expansion of the generating function in the previous equation, we find that U2n + 1 ≡ 0 and that U 2 n = ⌈ ( 2 + 3 ) n 3 − 3 ⌉ , {\displaystyle U_{2n}=\left\lceil {\frac {\left(2+{\sqrt {3}}\right)^{n}}{3-{\sqrt {3}}}}\right\rceil \,,} for all integers n ≥ 0. We also note that the same shifted generating function technique applied to the second-order recurrence for the Fibonacci numbers is the prototypical example of using generating functions to solve recurrence relations in one variable already covered, or at least hinted at, in the subsection on rational functions given above. === Convolution (Cauchy products) === A discrete convolution of the terms in two formal power series turns a product of generating functions into a generating function enumerating a convolved sum of the original sequence terms (see Cauchy product). Suppose that A(z) and B(z) are ordinary generating functions. C ( z ) = A ( z ) B ( z ) ⇔ [ z n ] C ( z ) = ∑ k = 0 n a k b n − k {\displaystyle C(z)=A(z)B(z)\Leftrightarrow [z^{n}]C(z)=\sum _{k=0}^{n}{a_{k}b_{n-k}}} Suppose instead that A(z) and B(z) are exponential generating functions. C ( z ) = A ( z ) B ( z ) ⇔ [ z n n ! ] C ( z ) = ∑ k = 0 n ( n k ) a k b n − k {\displaystyle C(z)=A(z)B(z)\Leftrightarrow \left[{\frac {z^{n}}{n!}}\right]C(z)=\sum _{k=0}^{n}{\binom {n}{k}}a_{k}b_{n-k}} Consider the triply convolved sequence resulting from the product of three ordinary generating functions C ( z ) = F ( z ) G ( z ) H ( z ) ⇔ [ z n ] C ( z ) = ∑ j + k + l = n f j g k h l {\displaystyle C(z)=F(z)G(z)H(z)\Leftrightarrow [z^{n}]C(z)=\sum _{j+k+l=n}f_{j}g_{k}h_{l}} Consider the m-fold convolution of a sequence with itself for some positive integer m ≥ 1 (see the example below for an application) C ( z ) = G ( z ) m ⇔ [ z n ] C ( z ) = ∑ k 1 + k 2 + ⋯ + k m = n g k 1 g k 2 ⋯ g k m {\displaystyle C(z)=G(z)^{m}\Leftrightarrow [z^{n}]C(z)=\sum _{k_{1}+k_{2}+\cdots +k_{m}=n}g_{k_{1}}g_{k_{2}}\cdots g_{k_{m}}} Multiplication of generating functions, or convolution of their underlying sequences, can correspond to a notion of independent events in certain counting and probability scenarios. For example, if we adopt the notational convention that the probability generating function, or pgf, of a random variable Z is denoted by GZ(z), then we can show that for any two random variables G X + Y ( z ) = G X ( z ) G Y ( z ) , {\displaystyle G_{X+Y}(z)=G_{X}(z)G_{Y}(z)\,,} if X and Y are independent. ==== Example: The money-changing problem ==== The number of ways to pay n ≥ 0 cents in coin denominations of values in the set {1, 5, 10, 25, 50} (i.e., in pennies, nickels, dimes, quarters, and half dollars, respectively), where we distinguish instances based upon the total number of each coin but not upon the order in which the coins are presented, is given by the ordinary generating function 1 1 − z 1 1 − z 5 1 1 − z 10 1 1 − z 25 1 1 − z 50 . {\displaystyle {\frac {1}{1-z}}{\frac {1}{1-z^{5}}}{\frac {1}{1-z^{10}}}{\frac {1}{1-z^{25}}}{\frac {1}{1-z^{50}}}\,.} When we also distinguish based upon the order in which the coins are presented (e.g., one penny then one nickel is distinct from one nickel then one penny), the ordinary generating function is 1 1 − z − z 5 − z 10 − z 25 − z 50 . 
{\displaystyle {\frac {1}{1-z-z^{5}-z^{10}-z^{25}-z^{50}}}\,.} If we allow the n cents to be paid in coins of any positive integer denomination, we arrive at the partition function ordinary generating function expanded by an infinite q-Pochhammer symbol product, ∏ n = 1 ∞ ( 1 − z n ) − 1 . {\displaystyle \prod _{n=1}^{\infty }\left(1-z^{n}\right)^{-1}\,.} When the order of the coins matters, the ordinary generating function is 1 1 − ∑ n = 1 ∞ z n = 1 − z 1 − 2 z . {\displaystyle {\frac {1}{1-\sum _{n=1}^{\infty }z^{n}}}={\frac {1-z}{1-2z}}\,.} ==== Example: Generating function for the Catalan numbers ==== An example where convolutions of generating functions are useful allows us to solve for a specific closed-form function representing the ordinary generating function for the Catalan numbers, Cn. In particular, this sequence has the combinatorial interpretation as being the number of ways to insert parentheses into the product x0 · x1 ·⋯· xn so that the order of multiplication is completely specified. For example, C2 = 2, which corresponds to the two expressions x0 · (x1 · x2) and (x0 · x1) · x2. It follows that the sequence satisfies a recurrence relation given by C n = ∑ k = 0 n − 1 C k C n − 1 − k + δ n , 0 = C 0 C n − 1 + C 1 C n − 2 + ⋯ + C n − 1 C 0 + δ n , 0 , n ≥ 0 , {\displaystyle C_{n}=\sum _{k=0}^{n-1}C_{k}C_{n-1-k}+\delta _{n,0}=C_{0}C_{n-1}+C_{1}C_{n-2}+\cdots +C_{n-1}C_{0}+\delta _{n,0}\,,\quad n\geq 0\,,} and so has a corresponding convolved generating function, C(z), satisfying C ( z ) = z ⋅ C ( z ) 2 + 1 . {\displaystyle C(z)=z\cdot C(z)^{2}+1\,.} Since C(0) = 1 must be finite, we take the minus sign in the quadratic formula and arrive at the formula for this generating function given by C ( z ) = 1 − 1 − 4 z 2 z = ∑ n = 0 ∞ 1 n + 1 ( 2 n n ) z n . {\displaystyle C(z)={\frac {1-{\sqrt {1-4z}}}{2z}}=\sum _{n=0}^{\infty }{\frac {1}{n+1}}{\binom {2n}{n}}z^{n}\,.} Note that the first equation implicitly defining C(z) above implies that C ( z ) = 1 1 − z ⋅ C ( z ) , {\displaystyle C(z)={\frac {1}{1-z\cdot C(z)}}\,,} which then leads to another "simple" (of form) continued fraction expansion of this generating function. ==== Example: Spanning trees of fans and convolutions of convolutions ==== A fan of order n is defined to be a graph on the vertices {0, 1, ..., n} with 2n − 1 edges connected according to the following rules: Vertex 0 is connected by a single edge to each of the other n vertices, and vertex k {\displaystyle k} is connected by a single edge to the next vertex k + 1 for all 1 ≤ k < n. A spanning tree is a subgraph of a graph which contains all of the original vertices and which contains enough edges to make this subgraph connected, but not so many edges that there is a cycle in the subgraph. We ask how many spanning trees fn of a fan of order n are possible for each n ≥ 1: a fan of order one has one spanning tree, a fan of order two has three, a fan of order three has eight, and so on. As an observation, we may approach the question by counting the number of ways to join adjacent sets of vertices. For example, when n = 4, we have that f4 = 4 + 3 · 1 + 2 · 2 + 1 · 3 + 2 · 1 · 1 + 1 · 2 · 1 + 1 · 1 · 2 + 1 · 1 · 1 · 1 = 21, which is a sum over the m-fold convolutions of the sequence gn = n = [zn] ⁠z/(1 − z)2⁠ for m ≔ 1, 2, 3, 4. 
==== Example: Spanning trees of fans and convolutions of convolutions ==== A fan of order n is defined to be a graph on the vertices {0, 1, ..., n} with 2n − 1 edges connected according to the following rules: Vertex 0 is connected by a single edge to each of the other n vertices, and vertex k {\displaystyle k} is connected by a single edge to the next vertex k + 1 for all 1 ≤ k < n. A spanning tree is a subgraph of a graph which contains all of the original vertices and which contains enough edges to make this subgraph connected, but not so many edges that there is a cycle in the subgraph. We ask how many spanning trees fn of a fan of order n are possible for each n ≥ 1: the fan of order one has one spanning tree, the fan of order two has three, the fan of order three has eight, and so on. As an observation, we may approach the question by counting the number of ways to join adjacent sets of vertices. For example, when n = 4, we have that f4 = 4 + 3 · 1 + 2 · 2 + 1 · 3 + 2 · 1 · 1 + 1 · 2 · 1 + 1 · 1 · 2 + 1 · 1 · 1 · 1 = 21, which is a sum over the m-fold convolutions of the sequence gn = n = [zn] ⁠z/(1 − z)2⁠ for m ≔ 1, 2, 3, 4. More generally, we may write a formula for this sequence as f n = ∑ m > 0 ∑ k 1 + k 2 + ⋯ + k m = n k 1 , k 2 , … , k m > 0 g k 1 g k 2 ⋯ g k m , {\displaystyle f_{n}=\sum _{m>0}\sum _{\scriptstyle k_{1}+k_{2}+\cdots +k_{m}=n \atop \scriptstyle k_{1},k_{2},\ldots ,k_{m}>0}g_{k_{1}}g_{k_{2}}\cdots g_{k_{m}}\,,} from which we see that the ordinary generating function for this sequence is given by the next sum of convolutions as F ( z ) = G ( z ) + G ( z ) 2 + G ( z ) 3 + ⋯ = G ( z ) 1 − G ( z ) = z ( 1 − z ) 2 − z = z 1 − 3 z + z 2 , {\displaystyle F(z)=G(z)+G(z)^{2}+G(z)^{3}+\cdots ={\frac {G(z)}{1-G(z)}}={\frac {z}{(1-z)^{2}-z}}={\frac {z}{1-3z+z^{2}}}\,,} from which we are able to extract an exact formula for the sequence by taking the partial fraction expansion of the last generating function. === Implicit generating functions and the Lagrange inversion formula === One often encounters generating functions specified by a functional equation, instead of an explicit specification. For example, the generating function T(z) for the number of binary trees on n nodes (leaves included) satisfies T ( z ) = z ( 1 + T ( z ) 2 ) {\displaystyle T(z)=z\left(1+T(z)^{2}\right)} The Lagrange inversion theorem is a tool used to explicitly evaluate solutions to such equations. Applying the above theorem to our functional equation yields (with ϕ ( z ) = 1 + z 2 {\textstyle \phi (z)=1+z^{2}} ): [ z n ] T ( z ) = [ z n − 1 ] 1 n ( 1 + z 2 ) n {\displaystyle [z^{n}]T(z)=[z^{n-1}]{\frac {1}{n}}(1+z^{2})^{n}} Via the binomial theorem expansion, for even n {\displaystyle n} , the formula returns 0 {\displaystyle 0} . This is expected, since the number of leaves of a binary tree is one more than the number of its internal nodes, so the total number of nodes is always odd. For odd n {\displaystyle n} , however, we get [ z n − 1 ] 1 n ( 1 + z 2 ) n = 1 n ( n n + 1 2 ) {\displaystyle [z^{n-1}]{\frac {1}{n}}(1+z^{2})^{n}={\frac {1}{n}}{\dbinom {n}{\frac {n+1}{2}}}} The expression becomes much neater if we let n {\displaystyle n} be the number of internal nodes, for then the expression just becomes the n {\displaystyle n} th Catalan number. === Introducing a free parameter (snake oil method) === Sometimes the sum sn is complicated, and it is not always easy to evaluate directly. The "free parameter" method is another method (called "snake oil" by H. Wilf) to evaluate these sums. Both methods discussed so far have n as limit in the summation. When n does not appear explicitly in the summation, we may consider n as a "free" parameter and treat sn as a coefficient of F(z) = Σ sn zn, change the order of the summations on n and k, and try to compute the inner sum. For example, if we want to compute s n = ∑ k = 0 ∞ ( n + k m + 2 k ) ( 2 k k ) ( − 1 ) k k + 1 , m , n ∈ N 0 , {\displaystyle s_{n}=\sum _{k=0}^{\infty }{{\binom {n+k}{m+2k}}{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}}\,,\quad m,n\in \mathbb {N} _{0}\,,} we can treat n as a "free" parameter, and set F ( z ) = ∑ n = 0 ∞ ( ∑ k = 0 ∞ ( n + k m + 2 k ) ( 2 k k ) ( − 1 ) k k + 1 ) z n . {\displaystyle F(z)=\sum _{n=0}^{\infty }{\left(\sum _{k=0}^{\infty }{{\binom {n+k}{m+2k}}{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}}\right)}z^{n}\,.} Interchanging summation ("snake oil") gives F ( z ) = ∑ k = 0 ∞ ( 2 k k ) ( − 1 ) k k + 1 z − k ∑ n = 0 ∞ ( n + k m + 2 k ) z n + k . {\displaystyle F(z)=\sum _{k=0}^{\infty }{{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}z^{-k}}\sum _{n=0}^{\infty }{{\binom {n+k}{m+2k}}z^{n+k}}\,.} Now the inner sum is ⁠zm + 2k/(1 − z)m + 2k + 1⁠.
Thus F ( z ) = z m ( 1 − z ) m + 1 ∑ k = 0 ∞ 1 k + 1 ( 2 k k ) ( − z ( 1 − z ) 2 ) k = z m ( 1 − z ) m + 1 ∑ k = 0 ∞ C k ( − z ( 1 − z ) 2 ) k where C k = k th Catalan number = z m ( 1 − z ) m + 1 1 − 1 + 4 z ( 1 − z ) 2 − 2 z ( 1 − z ) 2 = − z m − 1 2 ( 1 − z ) m − 1 ( 1 − 1 + z 1 − z ) = z m ( 1 − z ) m = z z m − 1 ( 1 − z ) m . {\displaystyle {\begin{aligned}F(z)&={\frac {z^{m}}{(1-z)^{m+1}}}\sum _{k=0}^{\infty }{{\frac {1}{k+1}}{\binom {2k}{k}}\left({\frac {-z}{(1-z)^{2}}}\right)^{k}}\\[4px]&={\frac {z^{m}}{(1-z)^{m+1}}}\sum _{k=0}^{\infty }{C_{k}\left({\frac {-z}{(1-z)^{2}}}\right)^{k}}&{\text{where }}C_{k}=k{\text{th Catalan number}}\\[4px]&={\frac {z^{m}}{(1-z)^{m+1}}}{\frac {1-{\sqrt {1+{\frac {4z}{(1-z)^{2}}}}}}{\frac {-2z}{(1-z)^{2}}}}\\[4px]&={\frac {-z^{m-1}}{2(1-z)^{m-1}}}\left(1-{\frac {1+z}{1-z}}\right)\\[4px]&={\frac {z^{m}}{(1-z)^{m}}}=z{\frac {z^{m-1}}{(1-z)^{m}}}\,.\end{aligned}}} Then we obtain s n = { ( n − 1 m − 1 ) for m ≥ 1 , [ n = 0 ] for m = 0 . {\displaystyle s_{n}={\begin{cases}\displaystyle {\binom {n-1}{m-1}}&{\text{for }}m\geq 1\,,\\{}[n=0]&{\text{for }}m=0\,.\end{cases}}} It is instructive to use the same method again for the sum, but this time take m as the free parameter instead of n. We thus set G ( z ) = ∑ m = 0 ∞ ( ∑ k = 0 ∞ ( n + k m + 2 k ) ( 2 k k ) ( − 1 ) k k + 1 ) z m . {\displaystyle G(z)=\sum _{m=0}^{\infty }\left(\sum _{k=0}^{\infty }{\binom {n+k}{m+2k}}{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}\right)z^{m}\,.} Interchanging summation ("snake oil") gives G ( z ) = ∑ k = 0 ∞ ( 2 k k ) ( − 1 ) k k + 1 z − 2 k ∑ m = 0 ∞ ( n + k m + 2 k ) z m + 2 k . {\displaystyle G(z)=\sum _{k=0}^{\infty }{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}z^{-2k}\sum _{m=0}^{\infty }{\binom {n+k}{m+2k}}z^{m+2k}\,.} Now the inner sum is (1 + z)n + k. Thus G ( z ) = ( 1 + z ) n ∑ k = 0 ∞ 1 k + 1 ( 2 k k ) ( − ( 1 + z ) z 2 ) k = ( 1 + z ) n ∑ k = 0 ∞ C k ( − ( 1 + z ) z 2 ) k where C k = k th Catalan number = ( 1 + z ) n 1 − 1 + 4 ( 1 + z ) z 2 − 2 ( 1 + z ) z 2 = ( 1 + z ) n z 2 − z z 2 + 4 + 4 z − 2 ( 1 + z ) = ( 1 + z ) n z 2 − z ( z + 2 ) − 2 ( 1 + z ) = ( 1 + z ) n − 2 z − 2 ( 1 + z ) = z ( 1 + z ) n − 1 . {\displaystyle {\begin{aligned}G(z)&=(1+z)^{n}\sum _{k=0}^{\infty }{\frac {1}{k+1}}{\binom {2k}{k}}\left({\frac {-(1+z)}{z^{2}}}\right)^{k}\\[4px]&=(1+z)^{n}\sum _{k=0}^{\infty }C_{k}\,\left({\frac {-(1+z)}{z^{2}}}\right)^{k}&{\text{where }}C_{k}=k{\text{th Catalan number}}\\[4px]&=(1+z)^{n}\,{\frac {1-{\sqrt {1+{\frac {4(1+z)}{z^{2}}}}}}{\frac {-2(1+z)}{z^{2}}}}\\[4px]&=(1+z)^{n}\,{\frac {z^{2}-z{\sqrt {z^{2}+4+4z}}}{-2(1+z)}}\\[4px]&=(1+z)^{n}\,{\frac {z^{2}-z(z+2)}{-2(1+z)}}\\[4px]&=(1+z)^{n}\,{\frac {-2z}{-2(1+z)}}=z(1+z)^{n-1}\,.\end{aligned}}} Thus we obtain s n = [ z m ] z ( 1 + z ) n − 1 = [ z m − 1 ] ( 1 + z ) n − 1 = ( n − 1 m − 1 ) , {\displaystyle s_{n}=\left[z^{m}\right]z(1+z)^{n-1}=\left[z^{m-1}\right](1+z)^{n-1}={\binom {n-1}{m-1}}\,,} for m ≥ 1 as before.
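The closed form just derived is easy to sanity-check numerically. A small Python sketch (exact rational arithmetic via fractions; the helper name is ours) evaluates the original sum directly and compares it with the binomial coefficient:

from math import comb
from fractions import Fraction

def snake_oil_sum(n, m):
    """s_n = sum_k C(n+k, m+2k) C(2k, k) (-1)^k / (k+1); terms vanish for k > n - m."""
    return sum(Fraction(comb(n + k, m + 2 * k) * comb(2 * k, k) * (-1) ** k, k + 1)
               for k in range(n + 1))

for n in range(1, 9):
    for m in range(1, 9):
        assert snake_oil_sum(n, m) == comb(n - 1, m - 1)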
If the "simpler" right-hand-side generating function, B(z), is a rational function of z, then the form of this sequence suggests that the sequence is eventually periodic modulo fixed particular cases of integer-valued m ≥ 2. For example, we can prove that the Euler numbers, ⟨ E n ⟩ = ⟨ 1 , 1 , 5 , 61 , 1385 , … ⟩ ⟼ ⟨ 1 , 1 , 2 , 1 , 2 , 1 , 2 , … ⟩ ( mod 3 ) , {\displaystyle \langle E_{n}\rangle =\langle 1,1,5,61,1385,\ldots \rangle \longmapsto \langle 1,1,2,1,2,1,2,\ldots \rangle {\pmod {3}}\,,} satisfy the following congruence modulo 3: ∑ n = 0 ∞ E n z n = 1 − z 2 1 + z 2 ( mod 3 ) . {\displaystyle \sum _{n=0}^{\infty }E_{n}z^{n}={\frac {1-z^{2}}{1+z^{2}}}{\pmod {3}}\,.} One useful method of obtaining congruences for sequences enumerated by special generating functions modulo any integers (i.e., not only prime powers pk) is given in the section on continued fraction representations of (even non-convergent) ordinary generating functions by J-fractions above. We cite one particular result related to generating series expanded through a representation by continued fraction from Lando's Lectures on Generating Functions as follows: Generating functions also have other uses in proving congruences for their coefficients. We cite the next two specific examples deriving special case congruences for the Stirling numbers of the first kind and for the partition function p(n) which show the versatility of generating functions in tackling problems involving integer sequences. ==== The Stirling numbers modulo small integers ==== The main article on the Stirling numbers generated by the finite products S n ( x ) := ∑ k = 0 n [ n k ] x k = x ( x + 1 ) ( x + 2 ) ⋯ ( x + n − 1 ) , n ≥ 1 , {\displaystyle S_{n}(x):=\sum _{k=0}^{n}{\begin{bmatrix}n\\k\end{bmatrix}}x^{k}=x(x+1)(x+2)\cdots (x+n-1)\,,\quad n\geq 1\,,} provides an overview of the congruences for these numbers derived strictly from properties of their generating function as in Section 4.6 of Wilf's stock reference Generatingfunctionology. We repeat the basic argument and notice that when reduces modulo 2, these finite product generating functions each satisfy S n ( x ) = [ x ( x + 1 ) ] ⋅ [ x ( x + 1 ) ] ⋯ = x ⌈ n 2 ⌉ ( x + 1 ) ⌊ n 2 ⌋ , {\displaystyle S_{n}(x)=[x(x+1)]\cdot [x(x+1)]\cdots =x^{\left\lceil {\frac {n}{2}}\right\rceil }(x+1)^{\left\lfloor {\frac {n}{2}}\right\rfloor }\,,} which implies that the parity of these Stirling numbers matches that of the binomial coefficient [ n k ] ≡ ( ⌊ n 2 ⌋ k − ⌈ n 2 ⌉ ) ( mod 2 ) , {\displaystyle {\begin{bmatrix}n\\k\end{bmatrix}}\equiv {\binom {\left\lfloor {\frac {n}{2}}\right\rfloor }{k-\left\lceil {\frac {n}{2}}\right\rceil }}{\pmod {2}}\,,} and consequently shows that [nk] is even whenever k < ⌊ ⁠n/2⁠ ⌋. Similarly, we can reduce the right-hand-side products defining the Stirling number generating functions modulo 3 to obtain slightly more complicated expressions providing that [ n m ] ≡ [ x m ] ( x ⌈ n 3 ⌉ ( x + 1 ) ⌈ n − 1 3 ⌉ ( x + 2 ) ⌊ n 3 ⌋ ) ( mod 3 ) ≡ ∑ k = 0 m ( ⌈ n − 1 3 ⌉ k ) ( ⌊ n 3 ⌋ m − k − ⌈ n 3 ⌉ ) × 2 ⌈ n 3 ⌉ + ⌊ n 3 ⌋ − ( m − k ) ( mod 3 ) . 
{\displaystyle {\begin{aligned}{\begin{bmatrix}n\\m\end{bmatrix}}&\equiv [x^{m}]\left(x^{\left\lceil {\frac {n}{3}}\right\rceil }(x+1)^{\left\lceil {\frac {n-1}{3}}\right\rceil }(x+2)^{\left\lfloor {\frac {n}{3}}\right\rfloor }\right)&&{\pmod {3}}\\&\equiv \sum _{k=0}^{m}{\begin{pmatrix}\left\lceil {\frac {n-1}{3}}\right\rceil \\k\end{pmatrix}}{\begin{pmatrix}\left\lfloor {\frac {n}{3}}\right\rfloor \\m-k-\left\lceil {\frac {n}{3}}\right\rceil \end{pmatrix}}\times 2^{\left\lceil {\frac {n}{3}}\right\rceil +\left\lfloor {\frac {n}{3}}\right\rfloor -(m-k)}&&{\pmod {3}}\,.\end{aligned}}} ==== Congruences for the partition function ==== In this example, we pull in some of the machinery of infinite products whose power series expansions generate the expansions of many special functions and enumerate partition functions. In particular, we recall that the partition function p(n) is generated by the reciprocal infinite q-Pochhammer symbol product (or z-Pochhammer product as the case may be) given by ∑ n = 0 ∞ p ( n ) z n = 1 ( 1 − z ) ( 1 − z 2 ) ( 1 − z 3 ) ⋯ = 1 + z + 2 z 2 + 3 z 3 + 5 z 4 + 7 z 5 + 11 z 6 + ⋯ . {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }p(n)z^{n}&={\frac {1}{\left(1-z\right)\left(1-z^{2}\right)\left(1-z^{3}\right)\cdots }}\\[4pt]&=1+z+2z^{2}+3z^{3}+5z^{4}+7z^{5}+11z^{6}+\cdots .\end{aligned}}} This partition function satisfies many known congruence properties, which notably include the following results though there are still many open questions about the forms of related integer congruences for the function: p ( 5 m + 4 ) ≡ 0 ( mod 5 ) p ( 7 m + 5 ) ≡ 0 ( mod 7 ) p ( 11 m + 6 ) ≡ 0 ( mod 11 ) p ( 25 m + 24 ) ≡ 0 ( mod 5 2 ) . {\displaystyle {\begin{aligned}p(5m+4)&\equiv 0{\pmod {5}}\\p(7m+5)&\equiv 0{\pmod {7}}\\p(11m+6)&\equiv 0{\pmod {11}}\\p(25m+24)&\equiv 0{\pmod {5^{2}}}\,.\end{aligned}}} We show how to use generating functions and manipulations of congruences for formal power series to give a highly elementary proof of the first of these congruences listed above. First, we observe that in the binomial coefficient generating function 1 ( 1 − z ) 5 = ∑ i = 0 ∞ ( 4 + i 4 ) z i , {\displaystyle {\frac {1}{(1-z)^{5}}}=\sum _{i=0}^{\infty }{\binom {4+i}{4}}z^{i}\,,} all of the coefficients are divisible by 5 except for those which correspond to the powers 1, z5, z10, ... and moreover in those cases the remainder of the coefficient is 1 modulo 5. Thus, 1 ( 1 − z ) 5 ≡ 1 1 − z 5 ( mod 5 ) , {\displaystyle {\frac {1}{(1-z)^{5}}}\equiv {\frac {1}{1-z^{5}}}{\pmod {5}}\,,} or equivalently 1 − z 5 ( 1 − z ) 5 ≡ 1 ( mod 5 ) . {\displaystyle {\frac {1-z^{5}}{(1-z)^{5}}}\equiv 1{\pmod {5}}\,.} It follows that ( 1 − z 5 ) ( 1 − z 10 ) ( 1 − z 15 ) ⋯ ( ( 1 − z ) ( 1 − z 2 ) ( 1 − z 3 ) ⋯ ) 5 ≡ 1 ( mod 5 ) . 
{\displaystyle {\frac {\left(1-z^{5}\right)\left(1-z^{10}\right)\left(1-z^{15}\right)\cdots }{\left((1-z)\left(1-z^{2}\right)\left(1-z^{3}\right)\cdots \right)^{5}}}\equiv 1{\pmod {5}}\,.} Using the infinite product expansions of z ⋅ ( 1 − z 5 ) ( 1 − z 10 ) ⋯ ( 1 − z ) ( 1 − z 2 ) ⋯ = z ⋅ ( ( 1 − z ) ( 1 − z 2 ) ⋯ ) 4 × ( 1 − z 5 ) ( 1 − z 10 ) ⋯ ( ( 1 − z ) ( 1 − z 2 ) ⋯ ) 5 , {\displaystyle z\cdot {\frac {\left(1-z^{5}\right)\left(1-z^{10}\right)\cdots }{\left(1-z\right)\left(1-z^{2}\right)\cdots }}=z\cdot \left((1-z)\left(1-z^{2}\right)\cdots \right)^{4}\times {\frac {\left(1-z^{5}\right)\left(1-z^{10}\right)\cdots }{\left(\left(1-z\right)\left(1-z^{2}\right)\cdots \right)^{5}}}\,,} it can be shown that the coefficient of z5m + 5 in z · ((1 − z)(1 − z2)⋯)4 is divisible by 5 for all m. Finally, since ∑ n = 1 ∞ p ( n − 1 ) z n = z ( 1 − z ) ( 1 − z 2 ) ⋯ = z ⋅ ( 1 − z 5 ) ( 1 − z 10 ) ⋯ ( 1 − z ) ( 1 − z 2 ) ⋯ × ( 1 + z 5 + z 10 + ⋯ ) ( 1 + z 10 + z 20 + ⋯ ) ⋯ {\displaystyle {\begin{aligned}\sum _{n=1}^{\infty }p(n-1)z^{n}&={\frac {z}{(1-z)\left(1-z^{2}\right)\cdots }}\\[6px]&=z\cdot {\frac {\left(1-z^{5}\right)\left(1-z^{10}\right)\cdots }{(1-z)\left(1-z^{2}\right)\cdots }}\times \left(1+z^{5}+z^{10}+\cdots \right)\left(1+z^{10}+z^{20}+\cdots \right)\cdots \end{aligned}}} we may equate the coefficients of z5m + 5 in the previous equations to prove our desired congruence result, namely that p(5m + 4) ≡ 0 (mod 5) for all m ≥ 0.
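This congruence, and the other Ramanujan congruences listed above, are easy to spot-check by machine. A short Python sketch using sympy's partition counter (assuming sympy is available):

from sympy import npartitions

# Ramanujan congruences: spot-check the first few hundred cases of each
assert all(npartitions(5 * m + 4) % 5 == 0 for m in range(300))
assert all(npartitions(7 * m + 5) % 7 == 0 for m in range(300))
assert all(npartitions(11 * m + 6) % 11 == 0 for m in range(300))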
=== Transformations of generating functions === There are a number of transformations of generating functions that provide other applications (see the main article). A transformation of a sequence's ordinary generating function (OGF) provides a method of converting the generating function for one sequence into a generating function enumerating another. These transformations typically involve integral formulas involving a sequence OGF (see integral transformations) or weighted sums over the higher-order derivatives of these functions (see derivative transformations). Generating function transformations can come into play when we seek to express a generating function for the sums s n := ∑ m = 0 n ( n m ) C n , m a m , {\displaystyle s_{n}:=\sum _{m=0}^{n}{\binom {n}{m}}C_{n,m}a_{m},} in the form of S(z) = g(z) A(f(z)) involving the original sequence generating function. For example, if the sums are s n := ∑ k = 0 ∞ ( n + k m + 2 k ) a k {\displaystyle s_{n}:=\sum _{k=0}^{\infty }{\binom {n+k}{m+2k}}a_{k}\,} then the generating function for the modified sum expressions is given by S ( z ) = z m ( 1 − z ) m + 1 A ( z ( 1 − z ) 2 ) {\displaystyle S(z)={\frac {z^{m}}{(1-z)^{m+1}}}A\left({\frac {z}{(1-z)^{2}}}\right)} (see also the binomial transform and the Stirling transform). There are also integral formulas for converting between a sequence's OGF, F(z), and its exponential generating function, or EGF, F̂(z), and vice versa, given by F ( z ) = ∫ 0 ∞ F ^ ( t z ) e − t d t , F ^ ( z ) = 1 2 π ∫ − π π F ( z e − i ϑ ) e e i ϑ d ϑ , {\displaystyle {\begin{aligned}F(z)&=\int _{0}^{\infty }{\hat {F}}(tz)e^{-t}\,dt\,,\\[4px]{\hat {F}}(z)&={\frac {1}{2\pi }}\int _{-\pi }^{\pi }F\left(ze^{-i\vartheta }\right)e^{e^{i\vartheta }}\,d\vartheta \,,\end{aligned}}} provided that these integrals converge for appropriate values of z. == Tables of special generating functions == An initial listing of special mathematical series is found here. A number of useful and special sequence generating functions are found in Sections 5.4 and 7.4 of Concrete Mathematics and in Section 2.5 of Wilf's Generatingfunctionology. Other special generating functions of note include the entries in the next table, which is by no means complete. == See also == Moment-generating function Probability-generating function Generating function transformation Stanley's reciprocity theorem Integer partition Combinatorial principles Cyclic sieving Z-transform Umbral calculus Coins in a fountain == Notes == == References == === Citations === Aigner, Martin (2007). A Course in Enumeration. Graduate Texts in Mathematics. Vol. 238. Springer. ISBN 978-3-540-39035-0. Doubilet, Peter; Rota, Gian-Carlo; Stanley, Richard (1972). "On the foundations of combinatorial theory. VI. The idea of generating function". Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability. 2: 267–318. Zbl 0267.05002. Reprinted in Rota, Gian-Carlo (1975). "3. The idea of generating function". Finite Operator Calculus. With the collaboration of P. Doubilet, C. Greene, D. Kahaner, A. Odlyzko and R. Stanley. Academic Press. pp. 83–134. ISBN 0-12-596650-4. Zbl 0328.05007. Flajolet, Philippe; Sedgewick, Robert (2009). Analytic Combinatorics. Cambridge University Press. ISBN 978-0-521-89806-5. Zbl 1165.05001. Goulden, Ian P.; Jackson, David M. (2004). Combinatorial Enumeration. Dover Publications. ISBN 978-0486435978. Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). "Chapter 7: Generating Functions". Concrete Mathematics. A foundation for computer science (2nd ed.). Addison-Wesley. pp. 320–380. ISBN 0-201-55802-5. Zbl 0836.00001. Lando, Sergei K. (2003). Lectures on Generating Functions. American Mathematical Society. ISBN 978-0-8218-3481-7. Wilf, Herbert S. (1994). Generatingfunctionology (2nd ed.). Academic Press. ISBN 0-12-751956-4. Zbl 0831.05001. == External links == "Introduction To Ordinary Generating Functions" by Mike Zabrocki, York University, Mathematics and Statistics "Generating function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Generating Functions, Power Indices and Coin Change at cut-the-knot "Generating Functions" by Ed Pegg Jr., Wolfram Demonstrations Project, 2007.
Wikipedia/Generating_function
A height function is a function that quantifies the complexity of mathematical objects. In Diophantine geometry, height functions quantify the size of solutions to Diophantine equations and are typically functions from a set of points on algebraic varieties (or a set of algebraic varieties) to the real numbers. For instance, the classical or naive height over the rational numbers is typically defined to be the maximum of the numerators and denominators of the coordinates (e.g. 7 for the coordinates (3/7, 1/2)), often taken on a logarithmic scale. == Significance == Height functions allow mathematicians to count objects, such as rational points, that are otherwise infinite in quantity. For instance, the set of rational numbers of naive height (the maximum of the numerator and denominator when expressed in lowest terms) below any given constant is finite despite the set of rational numbers being infinite. In this sense, height functions can be used to prove asymptotic results such as Baker's theorem in transcendental number theory, which was proved by Alan Baker (1966, 1967a, 1967b). In other cases, height functions can distinguish some objects based on their complexity. For instance, the subspace theorem proved by Wolfgang M. Schmidt (1972) demonstrates that points of small height (i.e. small complexity) in projective space lie in a finite number of hyperplanes and generalizes Siegel's theorem on integral points and solution of the S-unit equation. Height functions were crucial to the proofs of the Mordell–Weil theorem and Faltings's theorem by Weil (1929) and Faltings (1983) respectively. Several outstanding unsolved problems about the heights of rational points on algebraic varieties, such as the Manin conjecture and Vojta's conjecture, have far-reaching implications for problems in Diophantine approximation, Diophantine equations, arithmetic geometry, and mathematical logic. == History == An early form of height function was proposed by Giambattista Benedetti (c. 1563), who argued that the consonance of a musical interval could be measured by the product of its numerator and denominator (in reduced form); see Giambattista Benedetti § Music. Heights in Diophantine geometry were initially developed by André Weil and Douglas Northcott beginning in the 1920s. Innovations in the 1960s were the Néron–Tate height and the realization that heights were linked to projective representations in much the same way that ample line bundles are in other parts of algebraic geometry. In the 1970s, Suren Arakelov developed Arakelov heights in Arakelov theory. In 1983, Faltings developed his theory of Faltings heights in his proof of Faltings's theorem. == Height functions in Diophantine geometry == === Naive height === Classical or naive height is defined in terms of ordinary absolute value on homogeneous coordinates. It is typically a logarithmic scale and therefore can be viewed as being proportional to the "algebraic complexity" or number of bits needed to store a point. It is typically defined to be the logarithm of the maximum absolute value of the vector of coprime integers obtained by multiplying through by a lowest common denominator. This may be used to define the height of a point in projective space over Q, of a polynomial, regarded as a vector of coefficients, or of an algebraic number, via the height of its minimal polynomial.
The naive height of a rational number x = p/q (in lowest terms) is given by the multiplicative height H ( p / q ) = max { | p | , | q | } {\displaystyle H(p/q)=\max\{|p|,|q|\}} and the logarithmic height h ( p / q ) = log ⁡ H ( p / q ) {\displaystyle h(p/q)=\log H(p/q)} Therefore, the naive multiplicative and logarithmic heights of 4/10 are 5 and log(5), for example. The naive height H of an elliptic curve E given by y2 = x3 + Ax + B is defined to be H(E) = log max(4|A|3, 27|B|2). === Néron–Tate height === The Néron–Tate height, or canonical height, is a quadratic form on the Mordell–Weil group of rational points of an abelian variety defined over a global field. It is named after André Néron, who first defined it as a sum of local heights, and John Tate, who defined it globally in an unpublished work. === Weil height === Let X be a projective variety over a number field K. Let L be a line bundle on X. One defines the Weil height on X with respect to L as follows. First, suppose that L is very ample. A choice of basis of the space Γ ( X , L ) {\displaystyle \Gamma (X,L)} of global sections defines a morphism ϕ from X to projective space, and for all points p on X, one defines h L ( p ) := h ( ϕ ( p ) ) {\displaystyle h_{L}(p):=h(\phi (p))} , where h is the naive height on projective space. For fixed X and L, choosing a different basis of global sections changes h L {\displaystyle h_{L}} , but only by a bounded function of p. Thus h L {\displaystyle h_{L}} is well-defined up to addition of a function that is O(1). In general, one can write L as the difference of two very ample line bundles L1 and L2 on X and define h L := h L 1 − h L 2 , {\displaystyle h_{L}:=h_{L_{1}}-h_{L_{2}},} which again is well-defined up to O(1). ==== Arakelov height ==== The Arakelov height on a projective space over the field of algebraic numbers is a global height function with local contributions coming from Fubini–Study metrics on the Archimedean fields and the usual metric on the non-Archimedean fields. It is the usual Weil height equipped with a different metric. === Faltings height === The Faltings height of an abelian variety defined over a number field is a measure of its arithmetic complexity. It is defined in terms of the height of a metrized line bundle. It was introduced by Faltings (1983) in his proof of the Mordell conjecture. == Height functions in algebra == === Height of a polynomial === For a polynomial P of degree n given by P = a 0 + a 1 x + a 2 x 2 + ⋯ + a n x n , {\displaystyle P=a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n},} the height H(P) is defined to be the maximum of the magnitudes of its coefficients: H ( P ) = max i | a i | . {\displaystyle H(P)={\underset {i}{\max }}\,|a_{i}|.} One could similarly define the length L(P) as the sum of the magnitudes of the coefficients: L ( P ) = ∑ i = 0 n | a i | . {\displaystyle L(P)=\sum _{i=0}^{n}|a_{i}|.} ==== Relation to Mahler measure ==== The Mahler measure M(P) of P is also a measure of the complexity of P. The three functions H(P), L(P) and M(P) are related by the inequalities ( n ⌊ n / 2 ⌋ ) − 1 H ( P ) ≤ M ( P ) ≤ H ( P ) n + 1 ; {\displaystyle {\binom {n}{\lfloor n/2\rfloor }}^{-1}H(P)\leq M(P)\leq H(P){\sqrt {n+1}};} L ( P ) ≤ 2 n M ( P ) ≤ 2 n L ( P ) ; {\displaystyle L(P)\leq 2^{n}M(P)\leq 2^{n}L(P);} H ( P ) ≤ L ( P ) ≤ ( n + 1 ) H ( P ) {\displaystyle H(P)\leq L(P)\leq (n+1)H(P)} where ( n ⌊ n / 2 ⌋ ) {\displaystyle \scriptstyle {\binom {n}{\lfloor n/2\rfloor }}} is the binomial coefficient.
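For concreteness, here is a minimal Python sketch of the elementary height functions defined above (the helper names are ours, for illustration); it reproduces the H(4/10) = 5 example and checks one of the inequality chains relating H(P) and L(P).

from math import gcd

def naive_height(p, q):
    """Multiplicative height H(p/q), reducing to lowest terms first."""
    g = gcd(p, q)
    return max(abs(p) // g, abs(q) // g)

def poly_height(coeffs):   # H(P) = max_i |a_i|
    return max(abs(a) for a in coeffs)

def poly_length(coeffs):   # L(P) = sum_i |a_i|
    return sum(abs(a) for a in coeffs)

print(naive_height(4, 10))        # 5, as in the text
P = [1, -3, 0, 2]                 # P = 1 - 3x + 2x^3, so n = 3
n = len(P) - 1
assert poly_height(P) <= poly_length(P) <= (n + 1) * poly_height(P)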
== Height functions in automorphic forms == One of the conditions in the definition of an automorphic form on the general linear group of an adelic algebraic group is moderate growth, which is an asymptotic condition on the growth of a height function on the general linear group viewed as an affine variety. == Other height functions == The height of an irreducible rational number x = p/q, q > 0 is | p | + q {\displaystyle |p|+q} (this function is used for constructing a bijection between N {\displaystyle \mathbb {N} } and Q {\displaystyle \mathbb {Q} } ). == See also == abc conjecture Birch and Swinnerton-Dyer conjecture Elliptic Lehmer conjecture Heath-Brown–Moroz constant Height of a formal group law Height zeta function Raynaud's isogeny theorem == References == == Sources == Baker, Alan (1966). "Linear forms in the logarithms of algebraic numbers. I". Mathematika. 13 (2): 204–216. doi:10.1112/S0025579300003971. ISSN 0025-5793. MR 0220680. Baker, Alan (1967a). "Linear forms in the logarithms of algebraic numbers. II". Mathematika. 14: 102–107. doi:10.1112/S0025579300008068. ISSN 0025-5793. MR 0220680. Baker, Alan (1967b). "Linear forms in the logarithms of algebraic numbers. III". Mathematika. 14 (2): 220–228. doi:10.1112/S0025579300003843. ISSN 0025-5793. MR 0220680. Baker, Alan; Wüstholz, Gisbert (2007). Logarithmic Forms and Diophantine Geometry. New Mathematical Monographs. Vol. 9. Cambridge University Press. p. 3. ISBN 978-0-521-88268-2. Zbl 1145.11004. Bombieri, Enrico; Gubler, Walter (2006). Heights in Diophantine Geometry. New Mathematical Monographs. Vol. 4. Cambridge University Press. ISBN 978-0-521-71229-3. Zbl 1130.11034. Borwein, Peter (2002). Computational Excursions in Analysis and Number Theory. CMS Books in Mathematics. Springer-Verlag. pp. 2, 3, 14148. ISBN 0-387-95444-9. Zbl 1020.12001. Bump, Daniel (1998). Automorphic Forms and Representations. Cambridge Studies in Advanced Mathematics. Vol. 55. Cambridge University Press. p. 300. ISBN 9780521658188. Cornell, Gary; Silverman, Joseph H. (1986). Arithmetic geometry. New York: Springer. ISBN 0387963111. → Contains an English translation of Faltings (1983) Faltings, Gerd (1983). "Endlichkeitssätze für abelsche Varietäten über Zahlkörpern" [Finiteness theorems for abelian varieties over number fields]. Inventiones Mathematicae (in German). 73 (3): 349–366. Bibcode:1983InMat..73..349F. doi:10.1007/BF01388432. MR 0718935. S2CID 121049418. Faltings, Gerd (1991). "Diophantine approximation on abelian varieties". Annals of Mathematics. 123 (3): 549–576. doi:10.2307/2944319. JSTOR 2944319. MR 1109353. Fili, Paul; Petsche, Clayton; Pritsker, Igor (2017). "Energy integrals and small points for the Arakelov height". Archiv der Mathematik. 109 (5): 441–454. arXiv:1507.01900. doi:10.1007/s00013-017-1080-x. S2CID 119161942. Mahler, K. (1963). "On two extremum properties of polynomials". Illinois Journal of Mathematics. 7 (4): 681–701. doi:10.1215/ijm/1255645104. Zbl 0117.04003. Néron, André (1965). "Quasi-fonctions et hauteurs sur les variétés abéliennes". Annals of Mathematics (in French). 82 (2): 249–331. doi:10.2307/1970644. JSTOR 1970644. MR 0179173. Schinzel, Andrzej (2000). Polynomials with special regard to reducibility. Encyclopedia of Mathematics and Its Applications. Vol. 77. Cambridge: Cambridge University Press. p. 212. ISBN 0-521-66225-7. Zbl 0956.12001. Schmidt, Wolfgang M. (1972). "Norm form equations". Annals of Mathematics. Second Series. 96 (3): 526–551. doi:10.2307/1970824. JSTOR 1970824. MR 0314761. 
Lang, Serge (1988). Introduction to Arakelov theory. New York: Springer-Verlag. ISBN 0-387-96793-1. MR 0969124. Zbl 0667.14001. Lang, Serge (1997). Survey of Diophantine Geometry. Springer-Verlag. ISBN 3-540-61223-8. Zbl 0869.11051. Weil, André (1929). "L'arithmétique sur les courbes algébriques". Acta Mathematica. 52 (1): 281–315. doi:10.1007/BF02592688. MR 1555278. Silverman, Joseph H. (1994). Advanced Topics in the Arithmetic of Elliptic Curves. New York: Springer. ISBN 978-1-4612-0851-8. Vojta, Paul (1987). Diophantine Approximations and Value Distribution Theory. Lecture Notes in Mathematics. Vol. 1239. Berlin, New York: Springer-Verlag. doi:10.1007/BFb0072989. ISBN 978-3-540-17551-3. MR 0883451. Zbl 0609.14011. Kolmogorov, Andrey; Fomin, Sergei (1957). Elements of the Theory of Functions and Functional Analysis. New York: Graylock Press. == External links == Polynomial height at Mathworld
Wikipedia/Height_function
In number theory, the Elliott–Halberstam conjecture is a conjecture about the distribution of prime numbers in arithmetic progressions. It has many applications in sieve theory. It is named for Peter D. T. A. Elliott and Heini Halberstam, who stated a specific version of the conjecture in 1968. One version of the conjecture is as follows, and stating it requires some notation. Let π ( x ) {\displaystyle \pi (x)} , the prime-counting function, denote the number of primes less than or equal to x {\displaystyle x} . If q {\displaystyle q} is a positive integer and a {\displaystyle a} is coprime to q {\displaystyle q} , we let π ( x ; q , a ) {\displaystyle \pi (x;q,a)} denote the number of primes less than or equal to x {\displaystyle x} which are equal to a {\displaystyle a} modulo q {\displaystyle q} . Dirichlet's theorem on primes in arithmetic progressions then tells us that π ( x ; q , a ) ∼ π ( x ) φ ( q ) ( x → ∞ ) {\displaystyle \pi (x;q,a)\sim {\frac {\pi (x)}{\varphi (q)}}\ \ (x\rightarrow \infty )} where φ {\displaystyle \varphi } is Euler's totient function. If we then define the error function E ( x ; q ) = max gcd ( a , q ) = 1 | π ( x ; q , a ) − π ( x ) φ ( q ) | {\displaystyle E(x;q)=\max _{{\text{gcd}}(a,q)=1}\left|\pi (x;q,a)-{\frac {\pi (x)}{\varphi (q)}}\right|} where the max is taken over all a {\displaystyle a} coprime to q {\displaystyle q} , then the Elliott–Halberstam conjecture is the assertion that for every θ < 1 {\displaystyle \theta <1} and A > 0 {\displaystyle A>0} there exists a constant C > 0 {\displaystyle C>0} such that ∑ 1 ≤ q ≤ x θ E ( x ; q ) ≤ C x log A ⁡ x {\displaystyle \sum _{1\leq q\leq x^{\theta }}E(x;q)\leq {\frac {Cx}{\log ^{A}x}}} for all x > 2 {\displaystyle x>2} . This conjecture was proven for all θ < 1 / 2 {\displaystyle \theta <1/2} by Enrico Bombieri and A. I. Vinogradov (the Bombieri–Vinogradov theorem, sometimes known simply as "Bombieri's theorem"); this result is already quite useful, being an averaged form of the generalized Riemann hypothesis. It is known that the conjecture fails at the endpoint θ = 1 {\displaystyle \theta =1} . In 1986, Bombieri, Friedlander and Iwaniec generalized the Elliott–Halberstam conjecture, using Dirichlet convolution of arithmetic functions related to the von Mangoldt function. The Elliott–Halberstam conjecture has several consequences. A striking one is the result announced by Dan Goldston, János Pintz, and Cem Yıldırım, which shows (assuming this conjecture) that there are infinitely many pairs of primes which differ by at most 16. In November 2013, James Maynard showed that subject to the Elliott–Halberstam conjecture, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 12. In August 2014, the Polymath group showed that subject to the generalized Elliott–Halberstam conjecture, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 6. Without assuming any form of the conjecture, the lowest proven bound is 246.
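The quantities in the conjecture are concrete enough to compute for small x. The following Python sketch (a toy illustration only, with helper names of our choosing; sympy supplies the primes and the totient) tabulates the error term E(x; q) and the sum over q ≤ x^θ:

from math import gcd, log
from sympy import primerange, totient

def E(x, q, primes):
    """max over residues a coprime to q of |pi(x; q, a) - pi(x)/phi(q)|."""
    counts = {a: 0 for a in range(q) if gcd(a, q) == 1}
    for p in primes:
        if p % q in counts:
            counts[p % q] += 1
    phi = int(totient(q))
    return max(abs(c - len(primes) / phi) for c in counts.values())

x, theta, A = 10_000, 0.5, 2
primes = list(primerange(2, x + 1))
total = sum(E(x, q, primes) for q in range(1, int(x ** theta) + 1))
print(total, x / log(x) ** A)   # the conjectured bound is C * x / (log x)^A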
== Original conjecture == The original Elliott–Halberstam conjecture is not stated explicitly in their paper, but can be inferred from equation (1) on page 59 and the comment above the Theorem on page 62. It says that ∑ q ≤ X φ ( q ) max ( h , q ) = 1 ( π ( x , q , h ) − li ⁡ x φ ( q ) ) 2 ≪ x 2 log A ⁡ x {\displaystyle \sum _{q\leq X}\varphi (q)\max _{(h,q)=1}\left(\pi (x,q,h)-{\frac {\operatorname {li} x}{\varphi (q)}}\right)^{2}\ll {\frac {x^{2}}{\log ^{A}x}}} provided X < x ( log ⁡ x ) − A − 1 {\displaystyle X<x(\log x)^{-A-1}} , where li ⁡ x {\displaystyle \operatorname {li} x} denotes the logarithmic integral and φ {\displaystyle \varphi } the Euler function. == See also == Barban–Davenport–Halberstam theorem Sexy prime Siegel–Walfisz theorem == Notes ==
Wikipedia/Elliott–Halberstam_conjecture
Multiplicative number theory is a subfield of analytic number theory that deals with prime numbers and with factorization and divisors. The focus is usually on developing approximate formulas for counting these objects in various contexts. The prime number theorem is a key result in this subject. The Mathematics Subject Classification for multiplicative number theory is 11Nxx. == Scope == Multiplicative number theory deals primarily in asymptotic estimates for arithmetic functions. Historically the subject has been dominated by the prime number theorem, first by attempts to prove it and then by improvements in the error term. The Dirichlet divisor problem that estimates the average order of the divisor function d(n) and Gauss's circle problem that estimates the average order of the number of representations of a number as a sum of two squares are also classical problems, and again the focus is on improving the error estimates. The distribution of prime numbers among residue classes modulo an integer is an area of active research. Dirichlet's theorem on primes in arithmetic progressions shows that there are infinitely many primes in each coprime residue class, and the prime number theorem for arithmetic progressions shows that the primes are asymptotically equidistributed among the residue classes. The Bombieri–Vinogradov theorem gives a more precise measure of how evenly they are distributed. There is also much interest in the size of the smallest prime in an arithmetic progression; Linnik's theorem gives an estimate. The twin prime conjecture, namely that there are infinitely many primes p such that p+2 is also prime, is the subject of active research. Chen's theorem shows that there are infinitely many primes p such that p+2 is either prime or the product of two primes. == Methods == The methods belong primarily to analytic number theory, but elementary methods, especially sieve methods, are also very important. The large sieve and exponential sums are usually considered part of multiplicative number theory. The distribution of prime numbers is closely tied to the behavior of the Riemann zeta function and the Riemann hypothesis, and these subjects are studied both from a number theory viewpoint and a complex analysis viewpoint. == Standard texts == A large part of analytic number theory deals with multiplicative problems, and so most of its texts contain sections on multiplicative number theory. These are some well-known texts that deal specifically with multiplicative problems: Davenport, Harold (2000). Multiplicative Number Theory (3rd ed.). Berlin: Springer. ISBN 978-0-387-95097-6. Montgomery, Hugh; Robert C. Vaughan (2005). Multiplicative Number Theory I. Classical Theory. Cambridge: Cambridge University Press. ISBN 978-0-521-84903-6. == See also == Multiplicative combinatorics Additive combinatorics Additive number theory Sum-product phenomenon
Wikipedia/Multiplicative_number_theory
Maier's matrix method is a technique in analytic number theory by Helmut Maier that is used to demonstrate the existence of intervals of natural numbers within which the prime numbers are distributed with a certain property. In particular, it has been used to prove Maier's theorem (Maier 1985) and also the existence of chains of large gaps between consecutive primes (Maier 1981). The method uses estimates for the distribution of prime numbers in arithmetic progressions to prove the existence of a large set of intervals where the number of primes in the set is well understood and hence that at least one of the intervals contains primes in the required distribution. == The method == The method first selects a primorial and then constructs an interval in which the distribution of integers coprime to the primorial is well understood. By looking at copies of the interval translated by multiples of the primorial, an array (or matrix) of integers is formed in which the rows are the translated intervals and the columns are arithmetic progressions whose common difference is the primorial. By Dirichlet's theorem on arithmetic progressions the columns will contain many primes if and only if the integer in the original interval was coprime to the primorial. Good estimates for the number of small primes in these progressions, due to Gallagher (1970), allow one to estimate the number of primes in the matrix, which guarantees the existence of at least one row, or interval, containing at least a certain number of primes. == References == Maier, Helmut (1985), "Primes in short intervals", The Michigan Mathematical Journal, 32 (2): 221–225, doi:10.1307/mmj/1029003189 Maier, Helmut (1981), "Chains of large gaps between consecutive primes", Advances in Mathematics, 39 (3): 257–269, doi:10.1016/0001-8708(81)90003-7 Gallagher, Patrick (1970), "A large sieve density estimate near σ=1", Inventiones Mathematicae, 11 (4): 329–339, Bibcode:1970InMat..11..329G, doi:10.1007/BF01403187
Wikipedia/Maier's_matrix_method
In number theory, Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. It is written using the Greek letter phi as φ ( n ) {\displaystyle \varphi (n)} or ϕ ( n ) {\displaystyle \phi (n)} , and may also be called Euler's phi function. In other words, it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1. The integers k of this form are sometimes referred to as totatives of n. For example, the totatives of n = 9 are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9 are not, since gcd(9, 3) = gcd(9, 6) = 3 and gcd(9, 9) = 9. Therefore, φ(9) = 6. As another example, φ(1) = 1 since for n = 1 the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1. Euler's totient function is a multiplicative function, meaning that if two numbers m and n are relatively prime, then φ(mn) = φ(m)φ(n). This function gives the order of the multiplicative group of integers modulo n (the group of units of the ring Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } ). It is also used for defining the RSA encryption system. == History, terminology, and notation == Leonhard Euler introduced the function in 1763. However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letter π to denote it: he wrote πD for "the multitude of numbers less than D, and which have no common divisor with it". This definition varies from the current definition for the totient function at D = 1 but is otherwise the same. The now-standard notation φ(A) comes from Gauss's 1801 treatise Disquisitiones Arithmeticae, although Gauss did not use parentheses around the argument and wrote φA. Thus, it is often called Euler's phi function or simply the phi function. In 1879, J. J. Sylvester coined the term totient for this function, so it is also referred to as Euler's totient function, the Euler totient, or Euler's totient. Jordan's totient is a generalization of Euler's. The cototient of n is defined as n − φ(n). It counts the number of positive integers less than or equal to n that have at least one prime factor in common with n. == Computing Euler's totient function == There are several formulae for computing φ(n). === Euler's product formula === It states φ ( n ) = n ∏ p ∣ n ( 1 − 1 p ) , {\displaystyle \varphi (n)=n\prod _{p\mid n}\left(1-{\frac {1}{p}}\right),} where the product is over the distinct prime numbers dividing n. An equivalent formulation is φ ( n ) = p 1 k 1 − 1 ( p 1 − 1 ) p 2 k 2 − 1 ( p 2 − 1 ) ⋯ p r k r − 1 ( p r − 1 ) , {\displaystyle \varphi (n)=p_{1}^{k_{1}-1}(p_{1}{-}1)\,p_{2}^{k_{2}-1}(p_{2}{-}1)\cdots p_{r}^{k_{r}-1}(p_{r}{-}1),} where n = p 1 k 1 p 2 k 2 ⋯ p r k r {\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}}} is the prime factorization of n {\displaystyle n} (that is, p 1 , p 2 , … , p r {\displaystyle p_{1},p_{2},\ldots ,p_{r}} are distinct prime numbers). The proof of these formulae depends on two important facts. ==== Phi is a multiplicative function ==== This means that if gcd(m, n) = 1, then φ(m) φ(n) = φ(mn). Proof outline: Let A, B, C be the sets of positive integers which are coprime to and less than m, n, mn, respectively, so that |A| = φ(m), etc. Then there is a bijection between A × B and C by the Chinese remainder theorem. 
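As a quick computational check of the product formula and of multiplicativity, consider the following Python sketch (the helper names phi and phi_by_count are ours; sympy's factorint supplies the prime factorization):

from math import gcd
from sympy import factorint

def phi(n):
    """Euler's totient via the product formula n * prod_{p | n} (1 - 1/p)."""
    result = n
    for p in factorint(n):      # distinct prime divisors of n
        result -= result // p   # multiply by (1 - 1/p), exactly in integers
    return result

def phi_by_count(n):
    """Direct count of 1 <= k <= n with gcd(k, n) = 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

assert all(phi(n) == phi_by_count(n) for n in range(1, 200))
assert phi(9 * 4) == phi(9) * phi(4)   # multiplicativity, since gcd(9, 4) = 1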
==== Value of phi for a prime power argument ==== If p is prime and k ≥ 1, then φ ( p k ) = p k − p k − 1 = p k − 1 ( p − 1 ) = p k ( 1 − 1 p ) . {\displaystyle \varphi \left(p^{k}\right)=p^{k}-p^{k-1}=p^{k-1}(p-1)=p^{k}\left(1-{\tfrac {1}{p}}\right).} Proof: Since p is a prime number, the only possible values of gcd(pk, m) are 1, p, p2, ..., pk, and the only way to have gcd(pk, m) > 1 is if m is a multiple of p, that is, m ∈ {p, 2p, 3p, ..., pk − 1p = pk}, and there are pk − 1 such multiples not greater than pk. Therefore, the other pk − pk − 1 numbers are all relatively prime to pk. ==== Proof of Euler's product formula ==== The fundamental theorem of arithmetic states that if n > 1 there is a unique expression n = p 1 k 1 p 2 k 2 ⋯ p r k r , {\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}},} where p1 < p2 < ... < pr are prime numbers and each ki ≥ 1. (The case n = 1 corresponds to the empty product.) Repeatedly using the multiplicative property of φ and the formula for φ(pk) gives φ ( n ) = φ ( p 1 k 1 ) φ ( p 2 k 2 ) ⋯ φ ( p r k r ) = p 1 k 1 ( 1 − 1 p 1 ) p 2 k 2 ( 1 − 1 p 2 ) ⋯ p r k r ( 1 − 1 p r ) = p 1 k 1 p 2 k 2 ⋯ p r k r ( 1 − 1 p 1 ) ( 1 − 1 p 2 ) ⋯ ( 1 − 1 p r ) = n ( 1 − 1 p 1 ) ( 1 − 1 p 2 ) ⋯ ( 1 − 1 p r ) . {\displaystyle {\begin{array}{rcl}\varphi (n)&=&\varphi (p_{1}^{k_{1}})\,\varphi (p_{2}^{k_{2}})\cdots \varphi (p_{r}^{k_{r}})\\[.1em]&=&p_{1}^{k_{1}}\left(1-{\frac {1}{p_{1}}}\right)p_{2}^{k_{2}}\left(1-{\frac {1}{p_{2}}}\right)\cdots p_{r}^{k_{r}}\left(1-{\frac {1}{p_{r}}}\right)\\[.1em]&=&p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{r}^{k_{r}}\left(1-{\frac {1}{p_{1}}}\right)\left(1-{\frac {1}{p_{2}}}\right)\cdots \left(1-{\frac {1}{p_{r}}}\right)\\[.1em]&=&n\left(1-{\frac {1}{p_{1}}}\right)\left(1-{\frac {1}{p_{2}}}\right)\cdots \left(1-{\frac {1}{p_{r}}}\right).\end{array}}} This gives both versions of Euler's product formula. An alternative proof that does not require the multiplicative property instead uses the inclusion-exclusion principle applied to the set { 1 , 2 , … , n } {\displaystyle \{1,2,\ldots ,n\}} , excluding the sets of integers divisible by the prime divisors. ==== Example ==== φ ( 20 ) = φ ( 2 2 5 ) = 20 ( 1 − 1 2 ) ( 1 − 1 5 ) = 20 ⋅ 1 2 ⋅ 4 5 = 8. {\displaystyle \varphi (20)=\varphi (2^{2}5)=20\,(1-{\tfrac {1}{2}})\,(1-{\tfrac {1}{5}})=20\cdot {\tfrac {1}{2}}\cdot {\tfrac {4}{5}}=8.} In words: the distinct prime factors of 20 are 2 and 5; half of the twenty integers from 1 to 20 are divisible by 2, leaving ten; a fifth of those are divisible by 5, leaving eight numbers coprime to 20; these are: 1, 3, 7, 9, 11, 13, 17, 19. The alternative formula uses only integers: φ ( 20 ) = φ ( 2 2 5 1 ) = 2 2 − 1 ( 2 − 1 ) 5 1 − 1 ( 5 − 1 ) = 2 ⋅ 1 ⋅ 1 ⋅ 4 = 8. {\displaystyle \varphi (20)=\varphi (2^{2}5^{1})=2^{2-1}(2{-}1)\,5^{1-1}(5{-}1)=2\cdot 1\cdot 1\cdot 4=8.} === Fourier transform === The totient is the discrete Fourier transform of the gcd, evaluated at 1. Let F { x } [ m ] = ∑ k = 1 n x k ⋅ e − 2 π i m k n {\displaystyle {\mathcal {F}}\{\mathbf {x} \}[m]=\sum \limits _{k=1}^{n}x_{k}\cdot e^{{-2\pi i}{\frac {mk}{n}}}} where xk = gcd(k,n) for k ∈ {1, ..., n}. Then φ ( n ) = F { x } [ 1 ] = ∑ k = 1 n gcd ( k , n ) e − 2 π i k n . {\displaystyle \varphi (n)={\mathcal {F}}\{\mathbf {x} \}[1]=\sum \limits _{k=1}^{n}\gcd(k,n)e^{-2\pi i{\frac {k}{n}}}.} The real part of this formula is φ ( n ) = ∑ k = 1 n gcd ( k , n ) cos ⁡ 2 π k n . 
{\displaystyle \varphi (n)=\sum \limits _{k=1}^{n}\gcd(k,n)\cos {\tfrac {2\pi k}{n}}.} For example, using cos ⁡ π 5 = 5 + 1 4 {\displaystyle \cos {\tfrac {\pi }{5}}={\tfrac {{\sqrt {5}}+1}{4}}} and cos ⁡ 2 π 5 = 5 − 1 4 {\displaystyle \cos {\tfrac {2\pi }{5}}={\tfrac {{\sqrt {5}}-1}{4}}} : φ ( 10 ) = gcd ( 1 , 10 ) cos ⁡ 2 π 10 + gcd ( 2 , 10 ) cos ⁡ 4 π 10 + gcd ( 3 , 10 ) cos ⁡ 6 π 10 + ⋯ + gcd ( 10 , 10 ) cos ⁡ 20 π 10 = 1 ⋅ ( 5 + 1 4 ) + 2 ⋅ ( 5 − 1 4 ) + 1 ⋅ ( − 5 − 1 4 ) + 2 ⋅ ( − 5 + 1 4 ) + 5 ⋅ ( − 1 ) + 2 ⋅ ( − 5 + 1 4 ) + 1 ⋅ ( − 5 − 1 4 ) + 2 ⋅ ( 5 − 1 4 ) + 1 ⋅ ( 5 + 1 4 ) + 10 ⋅ ( 1 ) = 4. {\displaystyle {\begin{array}{rcl}\varphi (10)&=&\gcd(1,10)\cos {\tfrac {2\pi }{10}}+\gcd(2,10)\cos {\tfrac {4\pi }{10}}+\gcd(3,10)\cos {\tfrac {6\pi }{10}}+\cdots +\gcd(10,10)\cos {\tfrac {20\pi }{10}}\\&=&1\cdot ({\tfrac {{\sqrt {5}}+1}{4}})+2\cdot ({\tfrac {{\sqrt {5}}-1}{4}})+1\cdot (-{\tfrac {{\sqrt {5}}-1}{4}})+2\cdot (-{\tfrac {{\sqrt {5}}+1}{4}})+5\cdot (-1)\\&&+\ 2\cdot (-{\tfrac {{\sqrt {5}}+1}{4}})+1\cdot (-{\tfrac {{\sqrt {5}}-1}{4}})+2\cdot ({\tfrac {{\sqrt {5}}-1}{4}})+1\cdot ({\tfrac {{\sqrt {5}}+1}{4}})+10\cdot (1)\\&=&4.\end{array}}} Unlike the Euler product and the divisor sum formula, this one does not require knowing the factors of n. However, it does involve the calculation of the greatest common divisor of n and every positive integer less than n, which suffices to provide the factorization anyway. === Divisor sum === The property established by Gauss, that ∑ d ∣ n φ ( d ) = n , {\displaystyle \sum _{d\mid n}\varphi (d)=n,} where the sum is over all positive divisors d of n, can be proven in several ways. (See Arithmetical function for notational conventions.) One proof is to note that φ(d) is also equal to the number of possible generators of the cyclic group Cd ; specifically, if Cd = ⟨g⟩ with gd = 1, then gk is a generator for every k coprime to d. Since every element of Cn generates a cyclic subgroup, and each subgroup Cd ⊆ Cn is generated by precisely φ(d) elements of Cn, the formula follows. Equivalently, the formula can be derived by the same argument applied to the multiplicative group of the nth roots of unity and the primitive dth roots of unity. The formula can also be derived from elementary arithmetic. For example, let n = 20 and consider the positive fractions up to 1 with denominator 20: 1 20 , 2 20 , 3 20 , 4 20 , 5 20 , 6 20 , 7 20 , 8 20 , 9 20 , 10 20 , 11 20 , 12 20 , 13 20 , 14 20 , 15 20 , 16 20 , 17 20 , 18 20 , 19 20 , 20 20 . 
{\displaystyle {\tfrac {1}{20}},\,{\tfrac {2}{20}},\,{\tfrac {3}{20}},\,{\tfrac {4}{20}},\,{\tfrac {5}{20}},\,{\tfrac {6}{20}},\,{\tfrac {7}{20}},\,{\tfrac {8}{20}},\,{\tfrac {9}{20}},\,{\tfrac {10}{20}},\,{\tfrac {11}{20}},\,{\tfrac {12}{20}},\,{\tfrac {13}{20}},\,{\tfrac {14}{20}},\,{\tfrac {15}{20}},\,{\tfrac {16}{20}},\,{\tfrac {17}{20}},\,{\tfrac {18}{20}},\,{\tfrac {19}{20}},\,{\tfrac {20}{20}}.} Put them into lowest terms: 1 20 , 1 10 , 3 20 , 1 5 , 1 4 , 3 10 , 7 20 , 2 5 , 9 20 , 1 2 , 11 20 , 3 5 , 13 20 , 7 10 , 3 4 , 4 5 , 17 20 , 9 10 , 19 20 , 1 1 {\displaystyle {\tfrac {1}{20}},\,{\tfrac {1}{10}},\,{\tfrac {3}{20}},\,{\tfrac {1}{5}},\,{\tfrac {1}{4}},\,{\tfrac {3}{10}},\,{\tfrac {7}{20}},\,{\tfrac {2}{5}},\,{\tfrac {9}{20}},\,{\tfrac {1}{2}},\,{\tfrac {11}{20}},\,{\tfrac {3}{5}},\,{\tfrac {13}{20}},\,{\tfrac {7}{10}},\,{\tfrac {3}{4}},\,{\tfrac {4}{5}},\,{\tfrac {17}{20}},\,{\tfrac {9}{10}},\,{\tfrac {19}{20}},\,{\tfrac {1}{1}}} These twenty fractions are all the positive ⁠k/d⁠ ≤ 1 whose denominators are the divisors d = 1, 2, 4, 5, 10, 20. The fractions with 20 as denominator are those with numerators relatively prime to 20, namely ⁠1/20⁠, ⁠3/20⁠, ⁠7/20⁠, ⁠9/20⁠, ⁠11/20⁠, ⁠13/20⁠, ⁠17/20⁠, ⁠19/20⁠; by definition this is φ(20) fractions. Similarly, there are φ(10) fractions with denominator 10, and φ(5) fractions with denominator 5, etc. Thus the set of twenty fractions is split into subsets of size φ(d) for each d dividing 20. A similar argument applies for any n. Möbius inversion applied to the divisor sum formula gives φ ( n ) = ∑ d ∣ n μ ( d ) ⋅ n d = n ∑ d ∣ n μ ( d ) d , {\displaystyle \varphi (n)=\sum _{d\mid n}\mu \left(d\right)\cdot {\frac {n}{d}}=n\sum _{d\mid n}{\frac {\mu (d)}{d}},} where μ is the Möbius function, the multiplicative function defined by μ ( p ) = − 1 {\displaystyle \mu (p)=-1} and μ ( p k ) = 0 {\displaystyle \mu (p^{k})=0} for each prime p and k ≥ 2. This formula may also be derived from the product formula by multiplying out ∏ p ∣ n ( 1 − 1 p ) {\textstyle \prod _{p\mid n}(1-{\frac {1}{p}})} to get ∑ d ∣ n μ ( d ) d . {\textstyle \sum _{d\mid n}{\frac {\mu (d)}{d}}.} An example: φ ( 20 ) = μ ( 1 ) ⋅ 20 + μ ( 2 ) ⋅ 10 + μ ( 4 ) ⋅ 5 + μ ( 5 ) ⋅ 4 + μ ( 10 ) ⋅ 2 + μ ( 20 ) ⋅ 1 = 1 ⋅ 20 − 1 ⋅ 10 + 0 ⋅ 5 − 1 ⋅ 4 + 1 ⋅ 2 + 0 ⋅ 1 = 8. {\displaystyle {\begin{aligned}\varphi (20)&=\mu (1)\cdot 20+\mu (2)\cdot 10+\mu (4)\cdot 5+\mu (5)\cdot 4+\mu (10)\cdot 2+\mu (20)\cdot 1\\[.5em]&=1\cdot 20-1\cdot 10+0\cdot 5-1\cdot 4+1\cdot 2+0\cdot 1=8.\end{aligned}}} == Some values == The first 100 values (sequence A000010 in the OEIS) begin 1, 1, 2, 2, 4, 2, 6, 4, 6, 4, 10, 4, 12, 6, 8, 8, 16, 6, 18, 8, .... In a plot of φ(n) against n, the top line y = n − 1 is an upper bound valid for all n other than one, and attained if and only if n is a prime number. A simple lower bound is φ ( n ) ≥ n / 2 {\displaystyle \varphi (n)\geq {\sqrt {n/2}}} , which is rather loose: in fact, the lower envelope of the plot is proportional to ⁠n/log log n⁠. == Euler's theorem == This states that if a and n are relatively prime then a φ ( n ) ≡ 1 mod n . {\displaystyle a^{\varphi (n)}\equiv 1\mod n.} The special case where n is prime is known as Fermat's little theorem. This follows from Lagrange's theorem and the fact that φ(n) is the order of the multiplicative group of integers modulo n.
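Euler's theorem is easy to verify computationally with Python's built-in modular exponentiation (sympy's totient is used here for convenience):

from math import gcd
from sympy import totient

n = 20
for a in range(1, n):
    if gcd(a, n) == 1:
        assert pow(a, int(totient(n)), n) == 1   # a^phi(n) ≡ 1 (mod n)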
The RSA cryptosystem is based on this theorem: it implies that the inverse of the function a ↦ ae mod n, where e is the (public) encryption exponent, is the function b ↦ bd mod n, where d, the (private) decryption exponent, is the multiplicative inverse of e modulo φ(n). The difficulty of computing φ(n) without knowing the factorization of n is thus the difficulty of computing d: this is known as the RSA problem, which can be solved by factoring n. The owner of the private key knows the factorization, since an RSA private key is constructed by choosing n as the product of two (randomly chosen) large primes p and q. Only n is publicly disclosed, and given the difficulty of factoring large numbers, there is a reasonable guarantee that no one else knows the factorization.
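The following toy Python sketch (deliberately tiny primes; the specific numbers are illustrative only, and pow(e, -1, m) needs Python 3.8+) walks through the key generation and round trip just described:

from math import gcd

p, q = 61, 53
n = p * q                      # public modulus
phi_n = (p - 1) * (q - 1)      # phi(n), known only to the key owner
e = 17                         # public encryption exponent
assert gcd(e, phi_n) == 1
d = pow(e, -1, phi_n)          # private exponent: inverse of e modulo phi(n)

message = 1234
cipher = pow(message, e, n)            # encryption: m -> m^e mod n
assert pow(cipher, d, n) == message    # decryption recovers the message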
== Other formulae == a ∣ b ⟹ φ ( a ) ∣ φ ( b ) {\displaystyle a\mid b\implies \varphi (a)\mid \varphi (b)} m ∣ φ ( a m − 1 ) {\displaystyle m\mid \varphi (a^{m}-1)} φ ( m n ) = φ ( m ) φ ( n ) ⋅ d φ ( d ) where d = gcd ⁡ ( m , n ) {\displaystyle \varphi (mn)=\varphi (m)\varphi (n)\cdot {\frac {d}{\varphi (d)}}\quad {\text{where }}d=\operatorname {gcd} (m,n)} In particular: φ ( 2 m ) = { 2 φ ( m ) if m is even φ ( m ) if m is odd {\displaystyle \varphi (2m)={\begin{cases}2\varphi (m)&{\text{ if }}m{\text{ is even}}\\\varphi (m)&{\text{ if }}m{\text{ is odd}}\end{cases}}} φ ( n m ) = n m − 1 φ ( n ) {\displaystyle \varphi \left(n^{m}\right)=n^{m-1}\varphi (n)} φ ( lcm ⁡ ( m , n ) ) ⋅ φ ( gcd ⁡ ( m , n ) ) = φ ( m ) ⋅ φ ( n ) {\displaystyle \varphi (\operatorname {lcm} (m,n))\cdot \varphi (\operatorname {gcd} (m,n))=\varphi (m)\cdot \varphi (n)} Compare this to the formula lcm ⁡ ( m , n ) ⋅ gcd ⁡ ( m , n ) = m ⋅ n {\textstyle \operatorname {lcm} (m,n)\cdot \operatorname {gcd} (m,n)=m\cdot n} (see least common multiple). φ(n) is even for n ≥ 3. Moreover, if n has r distinct odd prime factors, 2r | φ(n) For any a > 1 and n > 6 such that 4 ∤ n there exists an l ≥ 2n such that l | φ(an − 1). φ ( n ) n = φ ( rad ⁡ ( n ) ) rad ⁡ ( n ) {\displaystyle {\frac {\varphi (n)}{n}}={\frac {\varphi (\operatorname {rad} (n))}{\operatorname {rad} (n)}}} where rad(n) is the radical of n (the product of all distinct primes dividing n). ∑ d ∣ n μ 2 ( d ) φ ( d ) = n φ ( n ) {\displaystyle \sum _{d\mid n}{\frac {\mu ^{2}(d)}{\varphi (d)}}={\frac {n}{\varphi (n)}}} ∑ 1 ≤ k ≤ n − 1 g c d ( k , n ) = 1 k = 1 2 n φ ( n ) for n > 1 {\displaystyle \sum _{1\leq k\leq n-1 \atop gcd(k,n)=1}\!\!k={\tfrac {1}{2}}n\varphi (n)\quad {\text{for }}n>1} ∑ k = 1 n φ ( k ) = 1 2 ( 1 + ∑ k = 1 n μ ( k ) ⌊ n k ⌋ 2 ) = 3 π 2 n 2 + O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 4 3 ) {\displaystyle \sum _{k=1}^{n}\varphi (k)={\tfrac {1}{2}}\left(1+\sum _{k=1}^{n}\mu (k)\left\lfloor {\frac {n}{k}}\right\rfloor ^{2}\right)={\frac {3}{\pi ^{2}}}n^{2}+O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {4}{3}}\right)} ∑ k = 1 n φ ( k ) = 3 π 2 n 2 + O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 1 3 ) {\displaystyle \sum _{k=1}^{n}\varphi (k)={\frac {3}{\pi ^{2}}}n^{2}+O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {1}{3}}\right)} [Liu (2016)] ∑ k = 1 n φ ( k ) k = ∑ k = 1 n μ ( k ) k ⌊ n k ⌋ = 6 π 2 n + O ( ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 4 3 ) {\displaystyle \sum _{k=1}^{n}{\frac {\varphi (k)}{k}}=\sum _{k=1}^{n}{\frac {\mu (k)}{k}}\left\lfloor {\frac {n}{k}}\right\rfloor ={\frac {6}{\pi ^{2}}}n+O\left((\log n)^{\frac {2}{3}}(\log \log n)^{\frac {4}{3}}\right)} ∑ k = 1 n k φ ( k ) = 315 ζ ( 3 ) 2 π 4 n − log ⁡ n 2 + O ( ( log ⁡ n ) 2 3 ) {\displaystyle \sum _{k=1}^{n}{\frac {k}{\varphi (k)}}={\frac {315\,\zeta (3)}{2\pi ^{4}}}n-{\frac {\log n}{2}}+O\left((\log n)^{\frac {2}{3}}\right)} ∑ k = 1 n 1 φ ( k ) = 315 ζ ( 3 ) 2 π 4 ( log ⁡ n + γ − ∑ p prime log ⁡ p p 2 − p + 1 ) + O ( ( log ⁡ n ) 2 3 n ) {\displaystyle \sum _{k=1}^{n}{\frac {1}{\varphi (k)}}={\frac {315\,\zeta (3)}{2\pi ^{4}}}\left(\log n+\gamma -\sum _{p{\text{ prime}}}{\frac {\log p}{p^{2}-p+1}}\right)+O\left({\frac {(\log n)^{\frac {2}{3}}}{n}}\right)} (where γ is the Euler–Mascheroni constant). === Menon's identity === In 1965 P. Kesava Menon proved ∑ gcd ( k , n ) = 1 1 ≤ k ≤ n gcd ( k − 1 , n ) = φ ( n ) d ( n ) , {\displaystyle \sum _{\stackrel {1\leq k\leq n}{\gcd(k,n)=1}}\!\!\!\!\gcd(k-1,n)=\varphi (n)d(n),} where d(n) = σ0(n) is the number of divisors of n. === Divisibility by any fixed positive integer === The following property, which is part of the "folklore" (i.e., apparently unpublished as a specific result; see the introduction of the article cited, in which it is stated as having "long been known"), has important consequences. For instance it rules out uniform distribution of the values of φ ( n ) {\displaystyle \varphi (n)} in the arithmetic progressions modulo q {\displaystyle q} for any integer q > 1 {\displaystyle q>1} . For every fixed positive integer q {\displaystyle q} , the relation q | φ ( n ) {\displaystyle q|\varphi (n)} holds for almost all n {\displaystyle n} , meaning for all but o ( x ) {\displaystyle o(x)} values of n ≤ x {\displaystyle n\leq x} as x → ∞ {\displaystyle x\rightarrow \infty } . This is an elementary consequence of the fact that the sum of the reciprocals of the primes congruent to 1 modulo q {\displaystyle q} diverges, which itself is a corollary of the proof of Dirichlet's theorem on arithmetic progressions. == Generating functions == The Dirichlet series for φ(n) may be written in terms of the Riemann zeta function as: ∑ n = 1 ∞ φ ( n ) n s = ζ ( s − 1 ) ζ ( s ) {\displaystyle \sum _{n=1}^{\infty }{\frac {\varphi (n)}{n^{s}}}={\frac {\zeta (s-1)}{\zeta (s)}}} where the left-hand side converges for ℜ ( s ) > 2 {\displaystyle \Re (s)>2} .
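A numerical spot-check of this Dirichlet series identity, sketched in Python with mpmath and sympy (the truncation point 20000 is arbitrary):

from mpmath import zeta
from sympy import totient

s = 3
partial = sum(int(totient(n)) / n**s for n in range(1, 20000))
print(partial, zeta(s - 1) / zeta(s))   # both ≈ 1.3684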
The Lambert series generating function is ∑ n = 1 ∞ φ ( n ) q n 1 − q n = q ( 1 − q ) 2 {\displaystyle \sum _{n=1}^{\infty }{\frac {\varphi (n)q^{n}}{1-q^{n}}}={\frac {q}{(1-q)^{2}}}} which converges for |q| < 1. Both of these are proved by elementary series manipulations and the formulae for φ(n). == Growth rate == In the words of Hardy & Wright, the order of φ(n) is "always 'nearly n'." First lim sup φ ( n ) n = 1 , {\displaystyle \lim \sup {\frac {\varphi (n)}{n}}=1,} but as n goes to infinity, for all δ > 0 φ ( n ) n 1 − δ → ∞ . {\displaystyle {\frac {\varphi (n)}{n^{1-\delta }}}\rightarrow \infty .} These two formulae can be proved by using little more than the formulae for φ(n) and the divisor sum function σ(n). In fact, during the proof of the second formula, the inequality 6 π 2 < φ ( n ) σ ( n ) n 2 < 1 , {\displaystyle {\frac {6}{\pi ^{2}}}<{\frac {\varphi (n)\sigma (n)}{n^{2}}}<1,} true for n > 1, is proved. We also have lim inf φ ( n ) n log ⁡ log ⁡ n = e − γ . {\displaystyle \lim \inf {\frac {\varphi (n)}{n}}\log \log n=e^{-\gamma }.} Here γ is Euler's constant, γ = 0.577215665..., so e^γ = 1.7810724... and e^−γ = 0.56145948.... Proving this does not quite require the prime number theorem. Since log log n goes to infinity, this formula shows that lim inf φ ( n ) n = 0. {\displaystyle \lim \inf {\frac {\varphi (n)}{n}}=0.} In fact, more is true. φ ( n ) > n e γ log ⁡ log ⁡ n + 3 log ⁡ log ⁡ n for n > 2 {\displaystyle \varphi (n)>{\frac {n}{e^{\gamma }\;\log \log n+{\frac {3}{\log \log n}}}}\quad {\text{for }}n>2} and φ ( n ) < n e γ log ⁡ log ⁡ n for infinitely many n . {\displaystyle \varphi (n)<{\frac {n}{e^{\gamma }\log \log n}}\quad {\text{for infinitely many }}n.} The second inequality was shown by Jean-Louis Nicolas. Ribenboim says "The method of proof is interesting, in that the inequality is shown first under the assumption that the Riemann hypothesis is true, secondly under the contrary assumption.": 173  For the average order, we have φ ( 1 ) + φ ( 2 ) + ⋯ + φ ( n ) = 3 n 2 π 2 + O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 4 3 ) as n → ∞ , {\displaystyle \varphi (1)+\varphi (2)+\cdots +\varphi (n)={\frac {3n^{2}}{\pi ^{2}}}+O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {4}{3}}\right)\quad {\text{as }}n\rightarrow \infty ,} due to Arnold Walfisz, its proof exploiting estimates on exponential sums due to I. M. Vinogradov and N. M. Korobov. By a combination of van der Corput's and Vinogradov's methods, H.-Q. Liu (On Euler's function. Proc. Roy. Soc. Edinburgh Sect. A 146 (2016), no. 4, 769–775) improved the error term to O ( n ( log ⁡ n ) 2 3 ( log ⁡ log ⁡ n ) 1 3 ) {\displaystyle O\left(n(\log n)^{\frac {2}{3}}(\log \log n)^{\frac {1}{3}}\right)} (this is currently the best known estimate of this type). The "Big O" stands for a quantity that is bounded by a constant times the function of n inside the parentheses (which is small compared to n^2). This result can be used to prove that the probability of two randomly chosen numbers being relatively prime is 6/π^2. == Ratio of consecutive values == In 1950 Somayajulu proved lim inf φ ( n + 1 ) φ ( n ) = 0 and lim sup φ ( n + 1 ) φ ( n ) = ∞ .
{\displaystyle {\begin{aligned}\lim \inf {\frac {\varphi (n+1)}{\varphi (n)}}&=0\quad {\text{and}}\\[5px]\lim \sup {\frac {\varphi (n+1)}{\varphi (n)}}&=\infty .\end{aligned}}} In 1954 Schinzel and Sierpiński strengthened this, proving that the set { φ ( n + 1 ) φ ( n ) , n = 1 , 2 , … } {\displaystyle \left\{{\frac {\varphi (n+1)}{\varphi (n)}},\;\;n=1,2,\ldots \right\}} is dense in the positive real numbers. They also proved that the set { φ ( n ) n , n = 1 , 2 , … } {\displaystyle \left\{{\frac {\varphi (n)}{n}},\;\;n=1,2,\ldots \right\}} is dense in the interval (0,1). == Totient number == A totient number is a value of Euler's totient function: that is, an m for which there is at least one n for which φ(n) = m. The valency or multiplicity of a totient number m is the number of solutions to this equation. A nontotient is a natural number which is not a totient number. Every odd integer exceeding 1 is trivially a nontotient. There are also infinitely many even nontotients, and indeed every positive integer has a multiple which is an even nontotient. The number of totient numbers up to a given limit x is x log ⁡ x e ( C + o ( 1 ) ) ( log ⁡ log ⁡ log ⁡ x ) 2 {\displaystyle {\frac {x}{\log x}}e^{{\big (}C+o(1){\big )}(\log \log \log x)^{2}}} for a constant C = 0.8178146.... If counted according to multiplicity, the number of totient numbers up to a given limit x is | { n : φ ( n ) ≤ x } | = ζ ( 2 ) ζ ( 3 ) ζ ( 6 ) ⋅ x + R ( x ) {\displaystyle {\Big \vert }\{n:\varphi (n)\leq x\}{\Big \vert }={\frac {\zeta (2)\zeta (3)}{\zeta (6)}}\cdot x+R(x)} where the error term R is of order at most x/(log x)^k for any positive k. It is known that the multiplicity of m exceeds m^δ infinitely often for any δ < 0.55655. === Ford's theorem === Ford (1999) proved that for every integer k ≥ 2 there is a totient number m of multiplicity k: that is, for which the equation φ(n) = m has exactly k solutions; this result had previously been conjectured by Wacław Sierpiński, and it had been obtained as a consequence of Schinzel's hypothesis H. Indeed, each multiplicity that occurs does so infinitely often. However, no number m is known with multiplicity k = 1. Carmichael's totient function conjecture is the statement that there is no such m. === Perfect totient numbers === A perfect totient number is an integer that is equal to the sum of its iterated totients. That is, we apply the totient function to a number n, apply it again to the resulting totient, and so on, until the number 1 is reached, and add together the resulting sequence of numbers; if the sum equals n, then n is a perfect totient number. == Applications == === Cyclotomy === In the last section of the Disquisitiones Gauss proves that a regular n-gon can be constructed with straightedge and compass if φ(n) is a power of 2. If n is a power of an odd prime number, the formula for the totient says its totient can be a power of two only if n is a first power and n − 1 is a power of 2. The primes that are one more than a power of 2 are called Fermat primes, and only five are known: 3, 5, 17, 257, and 65537. Fermat and Gauss knew of these. Nobody has been able to prove whether there are any more. Thus, a regular n-gon has a straightedge-and-compass construction if n is a product of distinct Fermat primes and any power of 2. The first few such n are 2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40,... (sequence A003401 in the OEIS).
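This criterion is easy to run mechanically: testing whether φ(n) is a power of 2 for n up to 40 recovers exactly the list just given. A small sketch (assuming sympy for totient):

```python
from sympy import totient

def is_power_of_two(k):
    return k & (k - 1) == 0   # true for 1, 2, 4, 8, ...

constructible = [n for n in range(2, 41) if is_power_of_two(int(totient(n)))]
print(constructible)
# [2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40]
```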
=== The RSA cryptosystem === Setting up an RSA system involves choosing large prime numbers p and q, computing n = pq and k = φ(n), and finding two numbers e and d such that ed ≡ 1 (mod k). The numbers n and e (the "encryption key") are released to the public, and d (the "decryption key") is kept private. A message, represented by an integer m, where 0 < m < n, is encrypted by computing S = m^e (mod n). It is decrypted by computing t = S^d (mod n). Euler's theorem can be used to show that if 0 < t < n, then t = m. The security of an RSA system would be compromised if the number n could be efficiently factored or if φ(n) could be efficiently computed without factoring n. == Unsolved problems == === Lehmer's conjecture === If p is prime, then φ(p) = p − 1. In 1932 D. H. Lehmer asked if there are any composite numbers n such that φ(n) divides n − 1. None are known. In 1933 he proved that if any such n exists, it must be odd, square-free, and divisible by at least seven primes (i.e. ω(n) ≥ 7). In 1980 Cohen and Hagis proved that n > 10^20 and that ω(n) ≥ 14. Further, Hagis showed that if 3 divides n then n > 10^1937042 and ω(n) ≥ 298848. === Carmichael's conjecture === This states that there is no number n with the property that for all other numbers m, m ≠ n, φ(m) ≠ φ(n). See Ford's theorem above. As stated in the main article, if there is a single counterexample to this conjecture, there must be infinitely many counterexamples, and the smallest one has at least ten billion digits in base 10. === Riemann hypothesis === The Riemann hypothesis is true if and only if the inequality n φ ( n ) < e γ log ⁡ log ⁡ n + e γ ( 4 + γ − log ⁡ 4 π ) log ⁡ n {\displaystyle {\frac {n}{\varphi (n)}}<e^{\gamma }\log \log n+{\frac {e^{\gamma }(4+\gamma -\log 4\pi )}{\sqrt {\log n}}}} is true for all n ≥ p_120569#, where γ is Euler's constant and p_120569# is the product of the first 120569 primes. == See also == Carmichael function (λ) Dedekind psi function (𝜓) Divisor function (σ) Duffin–Schaeffer conjecture Generalizations of Fermat's little theorem Highly composite number Multiplicative group of integers modulo n Ramanujan sum Totient summatory function (𝛷) == Notes == == References == == External links == "Totient function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Euler's Phi Function and the Chinese Remainder Theorem — proof that φ(n) is multiplicative Archived 2021-02-28 at the Wayback Machine Euler's totient function calculator in JavaScript — up to 20 digits Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions Archived 2021-01-16 at the Wayback Machine Plytage, Loomis, Polhill Summing Up The Euler Phi Function
Wikipedia/Totient_function
In mathematics, the Hardy–Ramanujan–Littlewood circle method is a technique of analytic number theory. It is named for G. H. Hardy, S. Ramanujan, and J. E. Littlewood, who developed it in a series of papers on Waring's problem. == History == The initial idea is usually attributed to the work of Hardy with Srinivasa Ramanujan a few years earlier, in 1916 and 1917, on the asymptotics of the partition function. It was taken up by many other researchers, including Harold Davenport and I. M. Vinogradov, who modified the formulation slightly (moving from complex analysis to exponential sums), without changing the broad lines. Hundreds of papers followed, and as of 2022 the method still yields results. The method is the subject of the monograph Vaughan (1997) by R. C. Vaughan. == Outline == The goal is to prove asymptotic behavior of a series: to show that a_n ~ F(n) for some function F. This is done by taking the generating function of the series, then computing the residues about zero (essentially the Fourier coefficients). Technically, the generating function is scaled to have radius of convergence 1, so it has singularities on the unit circle – thus one cannot take the contour integral over the unit circle. The circle method is specifically how to compute these residues, by partitioning the circle into minor arcs (the bulk of the circle) and major arcs (small arcs containing the most significant singularities), and then bounding the behavior on the minor arcs. The key insight is that, in many cases of interest (such as theta functions), the singularities occur at the roots of unity, and the significance of the singularities is in the order of the Farey sequence. Thus one can investigate the most significant singularities, and, if fortunate, compute the integrals. === Setup === The circle in question was initially the unit circle in the complex plane. Assuming the problem had first been formulated in the terms that for a sequence of complex numbers a_n for n = 0, 1, 2, 3, ..., we want some asymptotic information of the type a_n ~ F(n), where we have some heuristic reason to guess the form taken by F (an ansatz), we write f ( z ) = ∑ a n z n {\displaystyle f(z)=\sum a_{n}z^{n}} a power series generating function. The interesting cases are those where f has radius of convergence equal to 1, and we suppose that the problem as posed has been modified to present this situation. === Residues === From that formulation, it follows directly from the residue theorem that I n = ∮ C f ( z ) z − ( n + 1 ) d z = 2 π i a n {\displaystyle I_{n}=\oint _{C}f(z)z^{-(n+1)}\,dz=2\pi ia_{n}} for integers n ≥ 0, where C is a circle of radius r and centred at 0, for any r with 0 < r < 1; in other words, I n {\displaystyle I_{n}} is a contour integral, integrated over the circle described traversed once anticlockwise. We would like to take r = 1 directly, that is, to use the unit circle contour. In the complex analysis formulation this is problematic, since the values of f may not be defined there. === Singularities on unit circle === The problem addressed by the circle method is to force the issue of taking r = 1, by a good understanding of the nature of the singularities f exhibits on the unit circle. The fundamental insight is the role played by the Farey sequence of rational numbers, or equivalently by the roots of unity: ζ = exp ⁡ ( 2 π i r s ) .
{\displaystyle \zeta \ =\exp \left({\frac {2\pi ir}{s}}\right).} Here the denominator s, assuming that r/s is in lowest terms, turns out to determine the relative importance of the singular behaviour of typical f near ζ. === Method === The Hardy–Littlewood circle method, for the complex-analytic formulation, can then be expressed as follows. The contributions to the evaluation of I_n, as r → 1, should be treated in two ways, traditionally called major arcs and minor arcs. We divide the roots of unity ζ into two classes, according to whether s ≤ N or s > N, where N is a function of n that is ours to choose conveniently. The integral I_n is divided up into integrals each on some arc of the circle that is adjacent to ζ, of length a function of s (again, at our discretion). The arcs make up the whole circle; the sum of the integrals over the major arcs is to make up 2πiF(n) (realistically, this will happen up to a manageable remainder term). The sum of the integrals over the minor arcs is to be replaced by an upper bound, smaller in order than F(n). == Discussion == Stated boldly like this, it is not at all clear that this can be made to work. The insights involved are quite deep. One clear source is the theory of theta functions. === Waring's problem === In the context of Waring's problem, powers of theta functions are the generating functions for the sum of squares function. Their analytic behaviour is known in much more accurate detail than for the cubes, for example. It is the case that for a theta function the 'most important' point on the boundary circle is at z = 1; followed by z = −1, and then the two complex cube roots of unity at 7 o'clock and 11 o'clock. After that it is the fourth roots of unity i and −i that matter most. While nothing in this guarantees that the analytical method will work, it does explain the rationale of using a Farey series-type criterion on roots of unity. In the case of Waring's problem, one takes a sufficiently high power of the generating function to force the situation in which the singularities, organised into the so-called singular series, predominate. The less wasteful the estimates used on the rest, the finer the results. As Bryan Birch has put it, the method is inherently wasteful. That does not apply to the case of the partition function, which signalled the possibility that in a favourable situation the losses from estimates could be controlled. === Vinogradov trigonometric sums === Later, I. M. Vinogradov extended the technique, replacing the exponential sum formulation f(z) with a finite Fourier series, so that the relevant integral I_n is a Fourier coefficient. Vinogradov applied finite sums to Waring's problem in 1926, and the general trigonometric sum method became known as "the circle method of Hardy, Littlewood and Ramanujan, in the form of Vinogradov's trigonometric sums". Essentially all this does is to discard the whole 'tail' of the generating function, allowing the business of r in the limiting operation to be set directly to the value 1. == Applications == Refinements of the method have allowed results to be proved about the solutions of homogeneous Diophantine equations, as long as the number of variables k is large relative to the degree d (see Birch's theorem for example). This turns out to be a contribution to the Hasse principle, capable of yielding quantitative information. If d is fixed and k is small, other methods are required, and indeed the Hasse principle tends to fail.
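When the generating function is well behaved strictly inside the unit circle, the residue formula above can even be evaluated numerically with none of the circle method's subtlety: sampling f on a circle of radius r < 1 and taking a discrete Fourier transform recovers the coefficients. A sketch for the partition generating function (numpy assumed; the truncation level K and the radius r are arbitrary choices):

```python
import numpy as np

# Truncated generating function of the partition function:
# f(q) = prod_{k>=1} 1/(1 - q^k); the coefficient of q^n is p(n) for n <= K.
def f(q, K=60):
    out = np.ones_like(q)
    for k in range(1, K + 1):
        out = out / (1 - q**k)
    return out

M = 4096                       # sample points on the contour
r = 0.9                        # radius strictly inside the unit circle
theta = 2 * np.pi * np.arange(M) / M
samples = f(r * np.exp(1j * theta))

# a_n = (1 / 2 pi i) * contour integral of f(z) z^{-(n+1)} dz
#     ~ (1/M) * sum_j f(r e^{i theta_j}) e^{-i n theta_j} / r^n,
# i.e. a discrete Fourier transform of the samples.
coeffs = np.fft.fft(samples) / M
for n in range(1, 11):
    print(n, round((coeffs[n] / r**n).real))
# prints p(1)..p(10): 1, 2, 3, 5, 7, 11, 15, 22, 30, 42
```

The circle method is what replaces this brute-force quadrature when one needs asymptotics for a_n rather than its first few values.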
== Rademacher's contour == In the special case when the circle method is applied to find the coefficients of a modular form of negative weight, Hans Rademacher found a modification of the contour that makes the series arising from the circle method converge to the exact result. To describe his contour, it is convenient to replace the unit circle by the upper half plane, by making the substitution z = exp(2πiτ), so that the contour integral becomes an integral from τ = i to τ = 1 + i. (The number i could be replaced by any number on the upper half-plane, but i is the most convenient choice.) Rademacher's contour is (more or less) given by the boundaries of all the Ford circles from 0 to 1. The replacement of the line from i to 1 + i by the boundaries of these circles is a non-trivial limiting process, which can be justified for modular forms that have negative weight, and with more care can also be justified for non-constant terms for the case of weight 0 (in other words modular functions). == Notes == == References == Apostol, Tom M. (1990), Modular functions and Dirichlet series in number theory (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-97127-8 Mardzhanishvili, K. K. (1985), "Ivan Matveevich Vinogradov: a brief outline of his life and works", I. M. Vinogradov, Selected Works, Berlin Rademacher, Hans (1943), "On the expansion of the partition function in a series", Annals of Mathematics, Second Series, 44 (3): 416–422, doi:10.2307/1968973, JSTOR 1968973, MR 0008618 Vaughan, R. C. (1997), The Hardy–Littlewood Method, Cambridge Tracts in Mathematics, vol. 125 (2nd ed.), Cambridge University Press, ISBN 978-0-521-57347-4 == Further reading == Wang, Yuan (1991). Diophantine equations and inequalities in algebraic number fields. Berlin: Springer-Verlag. doi:10.1007/978-3-642-58171-7. ISBN 9783642634895. OCLC 851809136. == External links == Terence Tao, Heuristic limitations of the circle method, a blog post in 2012
Wikipedia/Hardy–Littlewood_circle_method
In mathematics, Montgomery's pair correlation conjecture is a conjecture made by Hugh Montgomery (1973) that the pair correlation between pairs of zeros of the Riemann zeta function (normalized to have unit average spacing) is 1 − ( sin ⁡ ( π u ) π u ) 2 , {\displaystyle 1-\left({\frac {\sin(\pi u)}{\pi u}}\right)^{\!2},} which, as Freeman Dyson pointed out to him, is the same as the pair correlation function of random Hermitian matrices. == Conjecture == Assume that the Riemann hypothesis is true, and let α ≤ β {\displaystyle \alpha \leq \beta } be fixed. Then the conjecture states lim T → ∞ # { ( γ , γ ′ ) : 0 < γ , γ ′ ≤ T and 2 π α / log ⁡ ( T ) ≤ γ − γ ′ ≤ 2 π β / log ⁡ ( T ) } T 2 π log ⁡ T = ∫ α β 1 − ( sin ⁡ ( π u ) π u ) 2 d u {\displaystyle \lim _{T\to \infty }{\frac {\#\{(\gamma ,\gamma '):0<\gamma ,\gamma '\leq T{\text{ and }}2\pi \alpha /\log(T)\leq \gamma -\gamma '\leq 2\pi \beta /\log(T)\}}{{\frac {T}{2\pi }}\log {T}}}=\int \limits _{\alpha }^{\beta }1-\left({\frac {\sin(\pi u)}{\pi u}}\right)^{2}\mathrm {d} u} and where each γ , γ ′ {\displaystyle \gamma ,\gamma '} is the imaginary part of a non-trivial zero of the Riemann zeta function, that is 1 2 + i γ {\displaystyle {\tfrac {1}{2}}+i\gamma } . == Explanation == Informally, this means that the chance of finding a zero in a very short interval of length 2πL/log(T) at a distance 2πu/log(T) from a zero 1/2+iT is about L times the expression above. (The factor 2π/log(T) is a normalization factor that can be thought of informally as the average spacing between zeros with imaginary part about T.) Odlyzko (1987) showed that the conjecture was supported by large-scale computer calculations of the zeros. The conjecture has been extended to correlations of more than two zeros, and also to zeta functions of automorphic representations (Rudnick & Sarnak 1996). In 1982 a student of Montgomery's, Ali Erhan Özlük, proved the pair correlation conjecture for some of Dirichlet's L-functions (Ozluk 1982). The connection with random unitary matrices could lead to a proof of the Riemann hypothesis (RH). The Hilbert–Pólya conjecture asserts that the zeros of the Riemann Zeta function correspond to the eigenvalues of a linear operator, and implies RH. Some people think this is a promising approach (Odlyzko 1987). Montgomery was studying the Fourier transform F(x) of the pair correlation function, and showed (assuming the Riemann hypothesis) that it was equal to |x| for |x| < 1. His methods were unable to determine it for |x| ≥ 1, but he conjectured that it was equal to 1 for these x, which implies that the pair correlation function is as above. He was also motivated by the notion that the Riemann hypothesis is not a brick wall, and one should feel free to make stronger conjectures. == F(α) conjecture or strong pair correlation conjecture == Let again 1 2 + i γ {\displaystyle {\tfrac {1}{2}}+i\gamma } and 1 2 + i γ ′ {\displaystyle {\tfrac {1}{2}}+i\gamma '} stand for non-trivial zeros of the Riemann zeta function. Montgomery introduced the function F ( α ) := F T ( α ) = ( T 2 π log ⁡ ( T ) ) − 1 ∑ 0 < γ , γ ′ ≤ T T i α ( γ − γ ′ ) w ( γ − γ ′ ) {\displaystyle F(\alpha ):=F_{T}(\alpha )=\left({\frac {T}{2\pi }}\log(T)\right)^{-1}\sum \limits _{0<\gamma ,\gamma '\leq T}T^{i\alpha (\gamma -\gamma ')}w(\gamma -\gamma ')} for T > 2 , α ∈ R {\displaystyle T>2,\;\alpha \in \mathbb {R} } and some weight function w ( u ) := 4 ( 4 + u 2 ) {\displaystyle w(u):={\tfrac {4}{(4+u^{2})}}} .
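The random-matrix side of the conjecture is easy to simulate. The sketch below (assuming numpy; the matrix size, spectral window and binning are arbitrary choices, and small edge effects near the window boundary are ignored) samples GUE matrices, unfolds the bulk eigenvalues to unit mean spacing, and compares the empirical pair correlation with 1 − (sin(πu)/(πu))^2:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 400, 100
diffs, npts = [], 0

for _ in range(trials):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2            # a GUE matrix
    ev = np.linalg.eigvalsh(H)
    # keep the bulk of the spectrum, where the semicircle density
    # is nearly constant and equal to sqrt(N)/pi
    bulk = ev[np.abs(ev) < 0.25 * np.sqrt(N)]
    u = bulk * np.sqrt(N) / np.pi       # unfold to unit mean spacing
    d = u[None, :] - u[:, None]
    diffs.append(d[(d > 0) & (d < 3)])
    npts += len(u)

diffs = np.concatenate(diffs)
hist, edges = np.histogram(diffs, bins=30, range=(0, 3))
width = edges[1] - edges[0]
for h, left in zip(hist, edges):
    uc = left + width / 2
    pred = 1 - np.sinc(uc) ** 2         # np.sinc(u) = sin(pi u)/(pi u)
    print(f"u = {uc:.2f}  empirical = {h / (npts * width):.3f}  conjectured = {pred:.3f}")
```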
Montgomery and Goldston proved, under the Riemann hypothesis, that for | α | ≤ 1 {\displaystyle |\alpha |\leq 1} the function F(α) satisfies, uniformly, F ( α ) = T − 2 | α | log ⁡ ( T ) ( 1 + o ( 1 ) ) + | α | + o ( 1 ) , T → ∞ . {\displaystyle F(\alpha )=T^{-2|\alpha |}\log(T)(1+{\mathcal {o}}(1))+|\alpha |+{\mathcal {o}}(1),\quad T\to \infty .} Montgomery conjectured (this is now known as the F(α) conjecture or strong pair correlation conjecture) that for | α | > 1 {\displaystyle |\alpha |>1} we have the uniform convergence F ( α ) = 1 + o ( 1 ) , T → ∞ {\displaystyle F(\alpha )=1+{\mathcal {o}}(1),\quad T\to \infty } for α {\displaystyle \alpha } in a bounded interval. == Numerical calculation by Odlyzko == In the 1980s, motivated by Montgomery's conjecture, Odlyzko began an intensive numerical study of the statistics of the zeros of ζ(s). He confirmed the distribution of the spacings between non-trivial zeros using detailed numerical calculations and demonstrated, using a Cray X-MP, that Montgomery's conjecture would be true and that the distribution would agree with the distribution of spacings of GUE random matrix eigenvalues. In 1987 he reported the calculations in Odlyzko (1987). For the non-trivial zeros 1/2 + iγ_n, let the normalized spacings be δ n = γ n + 1 − γ n 2 π log ⁡ γ n 2 π . {\displaystyle \delta _{n}={\frac {\gamma _{n+1}-\gamma _{n}}{2\pi }}\,{\log {\frac {\gamma _{n}}{2\pi }}}.} Then we would expect the following formula as the limit for M , N → ∞ {\displaystyle M,N\to \infty } : 1 M # { ( n , k ) ∣ N ≤ n ≤ N + M , k ≥ 0 , δ n + δ n + 1 + ⋯ + δ n + k ∈ [ α , β ] } ∼ ∫ α β ( 1 − ( sin ⁡ π u π u ) 2 ) d u {\displaystyle {\frac {1}{M}}\#\{(n,k)\mid N\leq n\leq N+M,\,k\geq 0,\,\delta _{n}+\delta _{n+1}+\cdots +\delta _{n+k}\in [\alpha ,\beta ]\}\sim \int _{\alpha }^{\beta }\left(1-{\biggl (}{\frac {\sin {\pi u}}{\pi u}}{\biggr )}^{2}\right)du} Based on a new algorithm developed by Odlyzko and Arnold Schönhage that allowed them to compute a value of ζ(1/2 + it) in an average time of t^ε steps, Odlyzko computed millions of zeros at heights around 10^20 and gave some evidence for the GUE conjecture. Plots of such data (for example, of the first 10^5 non-trivial zeros of the Riemann zeta function) show that, as more zeros are sampled, the distribution of their spacings approximates the shape of the GUE prediction ever more closely. == See also == Lehmer pair == References == Ozluk, A.E. (1982), Pair Correlation of Zeros of Dirichlet L-functions, Ph. D. Dissertation, Ann Arbor: Univ. of Michigan, MR 2632180 Katz, Nicholas M.; Sarnak, Peter (1999), "Zeroes of zeta functions and symmetry", Bulletin of the American Mathematical Society, New Series, 36 (1): 1–26, doi:10.1090/S0273-0979-99-00766-1, ISSN 0002-9904, MR 1640151 Montgomery, Hugh L. (1973), "The pair correlation of zeros of the zeta function", Analytic number theory, Proc. Sympos. Pure Math., vol. XXIV, Providence, R.I.: American Mathematical Society, pp. 181–193, MR 0337821 Odlyzko, A. M. (1987), "On the distribution of spacings between zeros of the zeta function", Mathematics of Computation, 48 (177): 273–308, doi:10.2307/2007890, ISSN 0025-5718, JSTOR 2007890, MR 0866115 Rudnick, Zeév; Sarnak, Peter (1996), "Zeros of principal L-functions and random matrix theory", Duke Mathematical Journal, 81 (2): 269–322, doi:10.1215/S0012-7094-96-08115-6, ISSN 0012-7094, MR 1395406
Wikipedia/Montgomery's_pair_correlation_conjecture
In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables. For example, { 3 x + 2 y − z = 1 2 x − 2 y + 4 z = − 2 − x + 1 2 y − z = 0 {\displaystyle {\begin{cases}3x+2y-z=1\\2x-2y+4z=-2\\-x+{\frac {1}{2}}y-z=0\end{cases}}} is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple ( x , y , z ) = ( 1 , − 2 , − 2 ) , {\displaystyle (x,y,z)=(1,-2,-2),} since it makes all three equations valid. Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers, but the theory and algorithms apply to coefficients and solutions in any field. For other algebraic structures, other theories have been developed. For coefficients and solutions in an integral domain, such as the ring of integers, see Linear equation over a ring. For coefficients and solutions that are polynomials, see Gröbner basis. For finding the "best" integer solutions among many, see Integer linear programming. For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry. == Elementary examples == === Trivial example === The system of one equation in one unknown 2 x = 4 {\displaystyle 2x=4} has the solution x = 2. {\displaystyle x=2.} However, most interesting linear systems have at least two equations. === Simple nontrivial example === The simplest kind of nontrivial linear system involves two equations and two variables: 2 x + 3 y = 6 4 x + 9 y = 15 . {\displaystyle {\begin{alignedat}{5}2x&&\;+\;&&3y&&\;=\;&&6&\\4x&&\;+\;&&9y&&\;=\;&&15&.\end{alignedat}}} One method for solving such a system is as follows. First, solve the top equation for x {\displaystyle x} in terms of y {\displaystyle y} : x = 3 − 3 2 y . {\displaystyle x=3-{\frac {3}{2}}y.} Now substitute this expression for x into the bottom equation: 4 ( 3 − 3 2 y ) + 9 y = 15. {\displaystyle 4\left(3-{\frac {3}{2}}y\right)+9y=15.} This results in a single equation involving only the variable y {\displaystyle y} . Solving gives y = 1 {\displaystyle y=1} , and substituting this back into the equation for x {\displaystyle x} yields x = 3 2 {\displaystyle x={\frac {3}{2}}} . This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra.) 
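The same answer drops out of a linear-algebra library. A minimal check of the example above (assuming numpy is available):

```python
import numpy as np

# The system 2x + 3y = 6, 4x + 9y = 15 in matrix form.
A = np.array([[2.0, 3.0],
              [4.0, 9.0]])
b = np.array([6.0, 15.0])
print(np.linalg.solve(A, b))   # [1.5 1. ], i.e. x = 3/2, y = 1
```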
== General form == A general system of m linear equations with n unknowns and coefficients can be written as { a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m , {\displaystyle {\begin{cases}a_{11}x_{1}+a_{12}x_{2}+\dots +a_{1n}x_{n}=b_{1}\\a_{21}x_{1}+a_{22}x_{2}+\dots +a_{2n}x_{n}=b_{2}\\\vdots \\a_{m1}x_{1}+a_{m2}x_{2}+\dots +a_{mn}x_{n}=b_{m},\end{cases}}} where x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} are the unknowns, a 11 , a 12 , … , a m n {\displaystyle a_{11},a_{12},\dots ,a_{mn}} are the coefficients of the system, and b 1 , b 2 , … , b m {\displaystyle b_{1},b_{2},\dots ,b_{m}} are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure. === Vector equation === One extremely helpful view is that each unknown is a weight for a column vector in a linear combination. x 1 [ a 11 a 21 ⋮ a m 1 ] + x 2 [ a 12 a 22 ⋮ a m 2 ] + ⋯ + x n [ a 1 n a 2 n ⋮ a m n ] = [ b 1 b 2 ⋮ b m ] {\displaystyle x_{1}{\begin{bmatrix}a_{11}\\a_{21}\\\vdots \\a_{m1}\end{bmatrix}}+x_{2}{\begin{bmatrix}a_{12}\\a_{22}\\\vdots \\a_{m2}\end{bmatrix}}+\dots +x_{n}{\begin{bmatrix}a_{1n}\\a_{2n}\\\vdots \\a_{mn}\end{bmatrix}}={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}} This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side (LHS) is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side (RHS), and otherwise not guaranteed. === Matrix equation === The vector equation is equivalent to a matrix equation of the form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is an m×n matrix, x is a column vector with n entries, and b is a column vector with m entries. A = [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a m 1 a m 2 ⋯ a m n ] , x = [ x 1 x 2 ⋮ x n ] , b = [ b 1 b 2 ⋮ b m ] . {\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}.} The number of vectors in a basis for the span is now expressed as the rank of the matrix. == Solution set == A solution of a linear system is an assignment of values to the variables x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways: The system has infinitely many solutions. The system has a unique solution. The system has no solution. 
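The span criterion can be tested numerically: a least-squares solve returns the closest point of the column span, so a (near-)zero residual signals that b lies in the span and the system is solvable. A sketch with numpy (the matrices are illustrative, not from the text):

```python
import numpy as np

# A's second column is 1.5 times its first, so the column span is a line.
A = np.array([[2.0, 3.0],
              [4.0, 6.0]])
for b in (np.array([6.0, 12.0]),    # lies in the span
          np.array([6.0, 15.0])):   # does not lie in the span
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # closest point of the span
    print(b, "consistent" if np.allclose(A @ x, b) else "inconsistent")
```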
=== Geometric interpretation === For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set. For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the whole line passing through these points. For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than n. === General behavior === In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations. In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system. In general, a system with the same number of equations and unknowns has a single unique solution. In general, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system. In the first case, the dimension of the solution set is, in general, equal to n − m, where n is the number of variables and m is the number of equations. The following pictures illustrate this trichotomy in the case of two variables: The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point. It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). A system of linear equations behaves differently from the general case if the equations are linearly dependent, or if it is inconsistent and has no more equations than unknowns. == Properties == === Independence === The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence. For example, the equations 3 x + 2 y = 6 and 6 x + 4 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;6x+4y=12} are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations.
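One way to detect such dependence mechanically is to compute the rank of the augmented matrix; dependent equations leave the rank below the number of equations. A quick check with numpy (assumed available):

```python
import numpy as np

# 3x + 2y = 6 and 6x + 4y = 12: one equation is twice the other,
# so the two rows of the augmented matrix span only a line.
aug = np.array([[3.0, 2.0, 6.0],
                [6.0, 4.0, 12.0]])
print(np.linalg.matrix_rank(aug))   # 1: the equations are dependent
```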
For a more complicated example, the equations x − 2 y = − 1 3 x + 5 y = 8 4 x + 3 y = 7 {\displaystyle {\begin{alignedat}{5}x&&\;-\;&&2y&&\;=\;&&-1&\\3x&&\;+\;&&5y&&\;=\;&&8&\\4x&&\;+\;&&3y&&\;=\;&&7&\end{alignedat}}} are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point. === Consistency === A linear system is inconsistent if it has no solution, and otherwise, it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, which may always be rewritten as the statement 0 = 1. For example, the equations 3 x + 2 y = 6 and 3 x + 2 y = 12 {\displaystyle 3x+2y=6\;\;\;\;{\text{and}}\;\;\;\;3x+2y=12} are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the xy-plane are a pair of parallel lines. It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations x + y = 1 2 x + y = 1 3 x + 2 y = 3 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&y&&\;=\;&&1&\\2x&&\;+\;&&y&&\;=\;&&1&\\3x&&\;+\;&&2y&&\;=\;&&3&\end{alignedat}}} are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Any two of these equations have a common solution. The same phenomenon can occur for any number of equations. In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent. Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1. === Equivalence === Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. It follows that two linear systems are equivalent if and only if they have the same solution set. == Solving a linear system == There are several algorithms for solving a system of linear equations. === Describing the solution === When the solution set is finite, it is reduced to a single element.
In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example ( x = 3 , y = − 2 , z = 6 ) {\displaystyle (x=3,\;y=-2,\;z=6)} . When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like ( 3 , − 2 , 6 ) {\displaystyle (3,\,-2,\,6)} for the previous example. To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, consider the following system: x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 {\displaystyle {\begin{alignedat}{7}x&&\;+\;&&3y&&\;-\;&&2z&&\;=\;&&5&\\3x&&\;+\;&&5y&&\;+\;&&6z&&\;=\;&&7&\end{alignedat}}} The solution set to this system can be described by the following equations: x = − 7 z − 1 and y = 3 z + 2 . {\displaystyle x=-7z-1\;\;\;\;{\text{and}}\;\;\;\;y=3z+2{\text{.}}} Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y. Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equations is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. An infinite solution of higher order may describe a plane, or a higher-dimensional set. Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows: y = − 3 7 x + 11 7 and z = − 1 7 x − 1 7 . {\displaystyle y=-{\frac {3}{7}}x+{\frac {11}{7}}\;\;\;\;{\text{and}}\;\;\;\;z=-{\frac {1}{7}}x-{\frac {1}{7}}{\text{.}}} Here x is the free variable, and y and z are dependent. === Elimination of variables === The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows: In the first equation, solve for one of the variables in terms of the others. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and unknown. Repeat steps 1 and 2 until the system is reduced to a single linear equation. Solve this equation, and then back-substitute until the entire solution is found. For example, consider the following system: { x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{cases}x+3y-2z=5\\3x+5y+6z=7\\2x+4y+3z=8\end{cases}}} Solving the first equation for x gives x = 5 + 2 z − 3 y {\displaystyle x=5+2z-3y} , and plugging this into the second and third equation yields { y = 3 z + 2 y = 7 2 z + 1 {\displaystyle {\begin{cases}y=3z+2\\y={\tfrac {7}{2}}z+1\end{cases}}} Since the left-hand sides of both of these equations equal y, we may equate them. We now have: 3 z + 2 = 7 2 z + 1 ⇒ z = 2 {\displaystyle {\begin{aligned}3z+2={\tfrac {7}{2}}z+1\\\Rightarrow z=2\end{aligned}}} Substituting z = 2 into the second or third equation gives y = 8, and substituting the values of y and z into the first equation yields x = −15. Therefore, the solution set is the ordered triple ( x , y , z ) = ( − 15 , 8 , 2 ) {\displaystyle (x,y,z)=(-15,8,2)} .
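The procedure just carried out by hand mechanizes directly. The following is a minimal sketch of elimination with back-substitution (it assumes the pivots encountered are nonzero; real implementations pivot, as discussed further below):

```python
def solve_by_elimination(A, b):
    """Gaussian elimination with back-substitution (no pivoting)."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # forward elimination: zero out entries below each pivot
    for i in range(n):
        for j in range(i + 1, n):
            m = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= m * A[i][k]
            b[j] -= m * b[i]
    # back-substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(solve_by_elimination(
    [[1.0, 3.0, -2.0], [3.0, 5.0, 6.0], [2.0, 4.0, 3.0]],
    [5.0, 7.0, 8.0]))
# [-15.0, 8.0, 2.0]
```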
=== Row reduction === In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] . {\displaystyle \left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]{\text{.}}} This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations: Type 1: Swap the positions of two rows. Type 2: Multiply a row by a nonzero scalar. Type 3: Add to one row a scalar multiple of another. Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination. The following computation shows Gauss–Jordan elimination applied to the matrix above: [ 1 3 − 2 5 3 5 6 7 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 2 4 3 8 ] ∼ [ 1 3 − 2 5 0 − 4 12 − 8 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 − 2 7 − 2 ] ∼ [ 1 3 − 2 5 0 1 − 3 2 0 0 1 2 ] ∼ [ 1 3 − 2 5 0 1 0 8 0 0 1 2 ] ∼ [ 1 3 0 9 0 1 0 8 0 0 1 2 ] ∼ [ 1 0 0 − 15 0 1 0 8 0 0 1 2 ] . {\displaystyle {\begin{aligned}\left[{\begin{array}{rrr|r}1&3&-2&5\\3&5&6&7\\2&4&3&8\end{array}}\right]&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\2&4&3&8\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&-4&12&-8\\0&-2&7&-2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&-2&7&-2\end{array}}\right]\\&\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&-3&2\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&-2&5\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&3&0&9\\0&1&0&8\\0&0&1&2\end{array}}\right]\sim \left[{\begin{array}{rrr|r}1&0&0&-15\\0&1&0&8\\0&0&1&2\end{array}}\right].\end{aligned}}} The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down. === Cramer's rule === Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system x + 3 y − 2 z = 5 3 x + 5 y + 6 z = 7 2 x + 4 y + 3 z = 8 {\displaystyle {\begin{alignedat}{7}x&\;+&\;3y&\;-&\;2z&\;=&\;5\\3x&\;+&\;5y&\;+&\;6z&\;=&\;7\\2x&\;+&\;4y&\;+&\;3z&\;=&\;8\end{alignedat}}} is given by x = | 5 3 − 2 7 5 6 8 4 3 | | 1 3 − 2 3 5 6 2 4 3 | , y = | 1 5 − 2 3 7 6 2 8 3 | | 1 3 − 2 3 5 6 2 4 3 | , z = | 1 3 5 3 5 7 2 4 8 | | 1 3 − 2 3 5 6 2 4 3 | . {\displaystyle x={\frac {\,{\begin{vmatrix}5&3&-2\\7&5&6\\8&4&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;y={\frac {\,{\begin{vmatrix}1&5&-2\\3&7&6\\2&8&3\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}},\;\;\;\;z={\frac {\,{\begin{vmatrix}1&3&5\\3&5&7\\2&4&8\end{vmatrix}}\,}{\,{\begin{vmatrix}1&3&-2\\3&5&6\\2&4&3\end{vmatrix}}\,}}.} For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms. Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. 
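For a small system, Cramer's rule is a one-loop computation once a determinant routine is available; here is a sketch with numpy, applied to the example system above:

```python
import numpy as np

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0, 6.0],
              [2.0, 4.0, 3.0]])
b = np.array([5.0, 7.0, 8.0])

det_A = np.linalg.det(A)
x = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b               # replace column i by the constant terms
    x.append(np.linalg.det(Ai) / det_A)
print(x)                       # [-15.0, 8.0, 2.0] up to rounding
```

Each variable costs one extra determinant, which is why the rule does not scale to large systems.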
(Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. === Matrix solution === If the equation system is expressed in the matrix form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } , the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n=m columns) and has full rank (all m rows are independent), then the system has a unique solution given by x = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} } where A − 1 {\displaystyle A^{-1}} is the inverse of A. More generally, regardless of whether m=n or not and regardless of the rank of A, all solutions (if any exist) are given using the Moore–Penrose inverse of A, denoted A + {\displaystyle A^{+}} , as follows: x = A + b + ( I − A + A ) w {\displaystyle \mathbf {x} =A^{+}\mathbf {b} +\left(I-A^{+}A\right)\mathbf {w} } where w {\displaystyle \mathbf {w} } is a vector of free parameters that ranges over all possible n×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 {\displaystyle \mathbf {w} =\mathbf {0} } satisfy A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } — that is, that A A + b = b . {\displaystyle AA^{+}\mathbf {b} =\mathbf {b} .} If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A + {\displaystyle A^{+}} simply equals A − 1 {\displaystyle A^{-1}} and the general solution equation simplifies to x = A − 1 b + ( I − A − 1 A ) w = A − 1 b + ( I − I ) w = A − 1 b {\displaystyle \mathbf {x} =A^{-1}\mathbf {b} +\left(I-A^{-1}A\right)\mathbf {w} =A^{-1}\mathbf {b} +\left(I-I\right)\mathbf {w} =A^{-1}\mathbf {b} } as previously stated, where w {\displaystyle \mathbf {w} } has completely dropped out of the solution, leaving only a single solution. In other cases, though, w {\displaystyle \mathbf {w} } remains and hence an infinitude of potential values of the free parameter vector w {\displaystyle \mathbf {w} } give an infinitude of solutions of the equation. === Other methods === While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b. If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. 
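The point about reusing the factorization is visible in library APIs: one factors A once, then solves cheaply for each new right-hand side. A sketch with scipy (assumed available):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0, 6.0],
              [2.0, 4.0, 3.0]])
lu, piv = lu_factor(A)          # LU decomposition with partial pivoting
# the factorization is reused for every right-hand side
for b in ([5.0, 7.0, 8.0], [1.0, 0.0, 0.0]):
    print(lu_solve((lu, piv), np.array(b)))
```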
Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications. A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. One example of an iterative method is the Jacobi method, where the matrix A {\displaystyle A} is split into its diagonal component D {\displaystyle D} and its non-diagonal component L + U {\displaystyle L+U} . An initial guess x ( 0 ) {\displaystyle {\mathbf {x}}^{(0)}} is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation: x ( k + 1 ) = D − 1 ( b − ( L + U ) x ( k ) ) {\displaystyle {\mathbf {x}}^{(k+1)}=D^{-1}({\mathbf {b}}-(L+U){\mathbf {x}}^{(k)})} When the difference between guesses x ( k ) {\displaystyle {\mathbf {x}}^{(k)}} and x ( k + 1 ) {\displaystyle {\mathbf {x}}^{(k+1)}} is sufficiently small, the algorithm is said to have converged on the solution. There is also a quantum algorithm for linear systems of equations. == Homogeneous systems == A system of linear equations is homogeneous if all of the constant terms are zero: a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0. {\displaystyle {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;=\;&&&0\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;=\;&&&0\\&&&&&&&&&&\vdots \;\ &&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;=\;&&&0.\\\end{alignedat}}} A homogeneous system is equivalent to a matrix equation of the form A x = 0 {\displaystyle A\mathbf {x} =\mathbf {0} } where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries. === Homogeneous solution set === Every homogeneous system has at least one solution, known as the zero (or trivial) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det(A) ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties: If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system. These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A. === Relation to nonhomogeneous systems === There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system: A x = b and A x = 0 . 
{\displaystyle A\mathbf {x} =\mathbf {b} \qquad {\text{and}}\qquad A\mathbf {x} =\mathbf {0} .} Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as { p + v : v is any solution to A x = 0 } . {\displaystyle \left\{\mathbf {p} +\mathbf {v} :\mathbf {v} {\text{ is any solution to }}A\mathbf {x} =\mathbf {0} \right\}.} Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A. == See also == Arrangement of hyperplanes Iterative refinement – Method to improve accuracy of numerical solutions to systems of linear equations Coates graph – A mathematical graph for solution of linear equations LAPACK – Software library for numerical linear algebra Linear equation over a ring Linear least squares – Least squares approximation of linear functions to data Matrix decomposition – Representation of a matrix as a product Matrix splitting – Representation of a matrix as a sum NAG Numerical Library – Software library of numerical-analysis algorithms Rybicki Press algorithm – An algorithm for inverting a matrix Simultaneous equations – Set of equations to be solved together == References == == Bibliography == Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0 Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3 Cullen, Charles G. (1990), Matrices and Linear Transformations, MA: Dover, ISBN 978-0-486-66328-9 Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8 Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9 Harrow, Aram W.; Hassidim, Avinatan; Lloyd, Seth (2009), "Quantum Algorithm for Linear Systems of Equations", Physical Review Letters, 103 (15): 150502, arXiv:0811.3171, Bibcode:2009PhRvL.103o0502H, doi:10.1103/PhysRevLett.103.150502, PMID 19905613, S2CID 5187993 Sterling, Mary J. (2009), Linear Algebra for Dummies, Indianapolis, Indiana: Wiley, ISBN 978-0-470-43090-3 Whitelaw, T. A. (1991), Introduction to Linear Algebra (2nd ed.), CRC Press, ISBN 0-7514-0159-5 == Further reading == Axler, Sheldon Jay (1997). Linear Algebra Done Right (2nd ed.). Springer-Verlag. ISBN 0-387-98259-0. Lay, David C. (August 22, 2005). Linear Algebra and Its Applications (3rd ed.). Addison Wesley. ISBN 978-0-321-28713-7. Meyer, Carl D. (February 15, 2001). Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics (SIAM). ISBN 978-0-89871-454-8. Archived from the original on March 1, 2001. Poole, David (2006). Linear Algebra: A Modern Introduction (2nd ed.). Brooks/Cole. ISBN 0-534-99845-3. Anton, Howard (2005). Elementary Linear Algebra (Applications Version) (9th ed.). Wiley International. Leon, Steven J. (2006).
Linear Algebra With Applications (7th ed.). Pearson Prentice Hall. Strang, Gilbert (2005). Linear Algebra and Its Applications. Peng, Richard; Vempala, Santosh S. (2024). "Solving Sparse Linear Systems Faster than Matrix Multiplication". Comm. ACM. 67 (7): 79–86. arXiv:2007.10254. doi:10.1145/3615679. == External links == Media related to System of linear equations at Wikimedia Commons
Wikipedia/Linear_equation_system
A system of polynomial equations (sometimes simply a polynomial system) is a set of simultaneous equations f1 = 0, ..., fh = 0 where the fi are polynomials in several variables, say x1, ..., xn, over some field k. A solution of a polynomial system is a set of values for the x_i which belong to some algebraically closed field extension K of k, and make all equations true. When k is the field of rational numbers, K is generally assumed to be the field of complex numbers, because each solution belongs to a field extension of k, which is isomorphic to a subfield of the complex numbers. This article is about the methods for solving, that is, finding all solutions or describing them. As these methods are designed for being implemented in a computer, emphasis is given to fields k in which computation (including equality testing) is easy and efficient, that is, the field of rational numbers and finite fields. Searching for solutions that belong to a specific set is a problem which is generally much more difficult, and is outside the scope of this article, except for the case of the solutions in a given finite field. For the case of solutions of which all components are integers or rational numbers, see Diophantine equation. == Definition == A simple example of a system of polynomial equations is x 2 + y 2 − 5 = 0 x y − 2 = 0. {\displaystyle {\begin{aligned}x^{2}+y^{2}-5&=0\\xy-2&=0.\end{aligned}}} Its solutions are the four pairs (x, y) = (1, 2), (2, 1), (-1, -2), (-2, -1). These solutions can easily be checked by substitution, but more work is needed for proving that there are no other solutions. The subject of this article is the study of generalizations of such examples, and the description of the methods that are used for computing the solutions. A system of polynomial equations, or polynomial system, is a collection of equations f 1 ( x 1 , … , x m ) = 0 ⋮ f n ( x 1 , … , x m ) = 0 , {\displaystyle {\begin{aligned}f_{1}\left(x_{1},\ldots ,x_{m}\right)&=0\\&\;\;\vdots \\f_{n}\left(x_{1},\ldots ,x_{m}\right)&=0,\end{aligned}}} where each fh is a polynomial in the indeterminates x1, ..., xm, with integer coefficients, or coefficients in some fixed field, often the field of rational numbers or a finite field. Other fields of coefficients, such as the real numbers, are less often used, as their elements cannot be represented in a computer (only approximations of real numbers can be used in computations, and these approximations are always rational numbers). A solution of a polynomial system is a tuple of values of (x1, ..., xm) that satisfies all equations of the polynomial system. The solutions are sought in the complex numbers, or more generally in an algebraically closed field containing the coefficients. In particular, in characteristic zero, all complex solutions are sought. Searching for the real or rational solutions is a much more difficult problem that is not considered in this article. The set of solutions is not always finite; for example, the solutions of the system x ( x − 1 ) = 0 x ( y − 1 ) = 0 {\displaystyle {\begin{aligned}x(x-1)&=0\\x(y-1)&=0\end{aligned}}} are a point (x,y) = (1,1) and a line x = 0. Even when the solution set is finite, there is, in general, no closed-form expression of the solutions (in the case of a single equation, this is the Abel–Ruffini theorem). The Barth surface is the geometric representation of the solutions of a polynomial system reduced to a single equation of degree 6 in 3 variables.
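Computer algebra systems can enumerate the solutions of such a zero-dimensional system directly. For the introductory example, a minimal sketch with sympy (assumed available):

```python
from sympy import symbols, solve

x, y = symbols('x y')
print(solve([x**2 + y**2 - 5, x*y - 2], [x, y]))
# the four pairs (1, 2), (2, 1), (-1, -2), (-2, -1), in some order
```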
Its numerous singular points are the solutions of a system of 4 equations of degree 5 in 3 variables. Such an overdetermined system has no solution in general (that is, when the coefficients are generic rather than specially chosen). If it has a finite number of solutions, this number is at most 5^3 = 125, by Bézout's theorem. However, it has been shown that, for the case of the singular points of a surface of degree 6, the maximum number of solutions is 65, and it is reached by the Barth surface. == Basic properties and definitions == A system is overdetermined if the number of equations is higher than the number of variables. A system is inconsistent if it has no complex solution (or, if the coefficients are not complex numbers, no solution in an algebraically closed field containing the coefficients). By Hilbert's Nullstellensatz this means that 1 is a linear combination (with polynomials as coefficients) of the left-hand sides of the equations. An overdetermined system constructed with random coefficients is almost always inconsistent, but particular overdetermined systems may nevertheless have solutions. For example, the system x^3 – 1 = 0, x^2 – 1 = 0 is overdetermined (having two equations but only one unknown), but it is not inconsistent since it has the solution x = 1. A system is underdetermined if the number of equations is lower than the number of variables. An underdetermined system is either inconsistent or has infinitely many complex solutions (or solutions in an algebraically closed field that contains the coefficients of the equations). This is a non-trivial result of commutative algebra that involves, in particular, Hilbert's Nullstellensatz and Krull's principal ideal theorem. A system is zero-dimensional if it has a finite number of complex solutions (or solutions in an algebraically closed field). This terminology comes from the fact that the algebraic variety of the solutions has dimension zero. A system with infinitely many solutions is said to be positive-dimensional. A zero-dimensional system with as many equations as variables is sometimes said to be well-behaved. Bézout's theorem asserts that a well-behaved system whose equations have degrees d1, ..., dn has at most d1⋅⋅⋅dn solutions. This bound is sharp. If all the degrees are equal to d, this bound becomes d^n and is exponential in the number of variables. (The fundamental theorem of algebra is the special case n = 1.) This exponential behavior makes solving polynomial systems difficult and explains why there are few solvers that are able to automatically solve systems with Bézout's bound higher than, say, 25 (three equations of degree 3 or five equations of degree 2 are beyond this bound). == What is solving? == The first thing to do for solving a polynomial system is to decide whether it is inconsistent, zero-dimensional or positive-dimensional. This may be done by the computation of a Gröbner basis of the left-hand sides of the equations. The system is inconsistent if this Gröbner basis is reduced to 1. The system is zero-dimensional if, for every variable, there is a leading monomial of some element of the Gröbner basis which is a pure power of this variable. For this test, the best monomial order (that is, the one that generally leads to the fastest computation) is usually the graded reverse lexicographic one (grevlex). If the system is positive-dimensional, it has infinitely many solutions. It is thus not possible to enumerate them.
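A minimal sketch of this decision procedure for the introductory example, again assuming SymPy, whose `is_zero_dimensional` flag packages a test of the kind just described on the leading monomials:

```python
# Deciding consistency and dimension via a grevlex Groebner basis.
from sympy import groebner, symbols

x, y = symbols("x y")
G = groebner([x**2 + y**2 - 5, x*y - 2], x, y, order="grevlex")
print(G.exprs)                 # reduced basis; a basis reduced to [1] would mean inconsistent
print(G.is_zero_dimensional)   # True: finitely many complex solutions
```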
It follows that, in this case, solving may only mean "finding a description of the solutions from which the relevant properties of the solutions are easy to extract". There is no commonly accepted such description. In fact there are many different "relevant properties", which involve almost every subfield of algebraic geometry. A natural example of such a question concerning positive-dimensional systems is the following: decide if a polynomial system over the rational numbers has a finite number of real solutions and compute them. A generalization of this question is find at least one solution in each connected component of the set of real solutions of a polynomial system. The classical algorithm for solving these question is cylindrical algebraic decomposition, which has a doubly exponential computational complexity and therefore cannot be used in practice, except for very small examples. For zero-dimensional systems, solving consists of computing all the solutions. There are two different ways of outputting the solutions. The most common way is possible only for real or complex solutions, and consists of outputting numeric approximations of the solutions. Such a solution is called numeric. A solution is certified if it is provided with a bound on the error of the approximations, and if this bound separates the different solutions. The other way of representing the solutions is said to be algebraic. It uses the fact that, for a zero-dimensional system, the solutions belong to the algebraic closure of the field k of the coefficients of the system. There are several ways to represent the solution in an algebraic closure, which are discussed below. All of them allow one to compute a numerical approximation of the solutions by solving one or several univariate equations. For this computation, it is preferable to use a representation that involves solving only one univariate polynomial per solution, because computing the roots of a polynomial which has approximate coefficients is a highly unstable problem. == Extensions == === Trigonometric equations === A trigonometric equation is an equation g = 0 where g is a trigonometric polynomial. Such an equation may be converted into a polynomial system by expanding the sines and cosines in it (using sum and difference formulas), replacing sin(x) and cos(x) by two new variables s and c and adding the new equation s2 + c2 – 1 = 0. For example, because of the identity cos ⁡ ( 3 x ) = 4 cos 3 ⁡ ( x ) − 3 cos ⁡ ( x ) , {\displaystyle \cos(3x)=4\cos ^{3}(x)-3\cos(x),} solving the equation sin 3 ⁡ ( x ) + cos ⁡ ( 3 x ) = 0 {\displaystyle \sin ^{3}(x)+\cos(3x)=0} is equivalent to solving the polynomial system { s 3 + 4 c 3 − 3 c = 0 s 2 + c 2 − 1 = 0. {\displaystyle {\begin{cases}s^{3}+4c^{3}-3c&=0\\s^{2}+c^{2}-1&=0.\end{cases}}} For each solution (c0, s0) of this system, there is a unique solution x of the equation such that 0 ≤ x < 2π. In the case of this simple example, it may be unclear whether the system is, or not, easier to solve than the equation. On more complicated examples, one lacks systematic methods for solving directly the equation, while software are available for automatically solving the corresponding system. === Solutions in a finite field === When solving a system over a finite field k with q elements, one is primarily interested in the solutions in k. As the elements of k are exactly the solutions of the equation xq – x = 0, it suffices, for restricting the solutions to k, to add the equation xiq – xi = 0 for each variable xi. 
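Returning to the trigonometric example above, the conversion and the resulting system can be handled mechanically. A sketch with SymPy; the roots of the eliminated sextic may come back as exact algebraic numbers (possibly RootOf objects):

```python
# sin(x)**3 + cos(3x) = 0 becomes a polynomial system in s = sin(x), c = cos(x).
from sympy import symbols, solve

s, c = symbols("s c")
system = [s**3 + 4*c**3 - 3*c,   # the expanded trigonometric equation
          s**2 + c**2 - 1]       # the added Pythagorean relation
for sol in solve(system, [s, c]):
    print(sol)   # each real pair (sin x, cos x) determines one x in [0, 2*pi)
```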
=== Coefficients in a number field or in a finite field with non-prime order === The elements of an algebraic number field are usually represented as polynomials in a generator of the field which satisfies some univariate polynomial equation. To work with a polynomial system whose coefficients belong to a number field, it suffices to consider this generator as a new variable and to add the equation of the generator to the equations of the system. Thus solving a polynomial system over a number field is reduced to solving another system over the rational numbers. For example, if a system contains 2 {\displaystyle {\sqrt {2}}} , a system over the rational numbers is obtained by adding the equation r2^2 – 2 = 0 and replacing 2 {\displaystyle {\sqrt {2}}} by r2 in the other equations. In the case of a finite field, the same transformation always allows one to suppose that the field k has prime order. == Algebraic representation of the solutions == === Regular chains === The usual way of representing the solutions is through zero-dimensional regular chains. Such a chain consists of a sequence of polynomials f1(x1), f2(x1, x2), ..., fn(x1, ..., xn) such that, for every i with 1 ≤ i ≤ n: fi is a polynomial in x1, ..., xi only, which has degree di > 0 in xi; and the coefficient of xi^di in fi is a polynomial in x1, ..., xi−1 which does not have any common zero with f1, ..., fi−1. To such a regular chain is associated a triangular system of equations { f 1 ( x 1 ) = 0 f 2 ( x 1 , x 2 ) = 0 ⋮ f n ( x 1 , x 2 , … , x n ) = 0. {\displaystyle {\begin{cases}f_{1}(x_{1})=0\\f_{2}(x_{1},x_{2})=0\\\quad \vdots \\f_{n}(x_{1},x_{2},\ldots ,x_{n})=0.\end{cases}}} The solutions of this system are obtained by solving the first univariate equation, substituting the solutions in the other equations, then solving the second equation, which is now univariate, and so on. The definition of regular chains implies that the univariate equation obtained from fi has degree di, and thus that the system has d1 ... dn solutions, provided that there is no multiple root in this resolution process (fundamental theorem of algebra). Every zero-dimensional system of polynomial equations is equivalent (i.e. has the same solutions) to a finite number of regular chains. Several regular chains may be needed, as is the case for the following system, which has three solutions. { x 2 − 1 = 0 ( x − 1 ) ( y − 1 ) = 0 y 2 − 1 = 0. {\displaystyle {\begin{cases}x^{2}-1=0\\(x-1)(y-1)=0\\y^{2}-1=0.\end{cases}}} There are several algorithms for computing a triangular decomposition of an arbitrary polynomial system (not necessarily zero-dimensional) into regular chains (or regular semi-algebraic systems). There is also an algorithm which is specific to the zero-dimensional case and is competitive, in this case, with the direct algorithms. It consists of first computing the Gröbner basis for the graded reverse lexicographic order (grevlex), then deducing the lexicographical Gröbner basis by the FGLM algorithm, and finally applying the Lextriangular algorithm. This representation of the solutions is fully convenient for coefficients in a finite field. However, for rational coefficients, two aspects have to be taken care of. First, the output may involve huge integers which may make the computation and the use of the result problematic. Second, to deduce the numeric values of the solutions from the output, one has to solve univariate polynomials with approximate coefficients, which is a highly unstable problem.
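Before turning to these two issues, here is a minimal illustration of back-substitution for the three-solution example above, along a decomposition into regular chains written out by hand (it is easy to verify directly); SymPy is assumed:

```python
# Back-substitution along two regular chains decomposing the example:
#   T1 = { x - 1 = 0, y**2 - 1 = 0 }   and   T2 = { x + 1 = 0, y - 1 = 0 }
from sympy import symbols, roots

x, y = symbols("x y")
chains = [(x - 1, y**2 - 1), (x + 1, y - 1)]

solutions = []
for f1, f2 in chains:
    for x0 in roots(f1, x, multiple=True):   # solve the univariate f1
        f2x = f2.subs(x, x0)                 # f2 becomes univariate in y
        for y0 in roots(f2x, y, multiple=True):
            solutions.append((x0, y0))
print(solutions)   # the three solutions (1, 1), (1, -1), (-1, 1)
```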
The first of these issues was addressed by Dahan and Schost: among the sets of regular chains that represent a given set of solutions, there is a set for which the coefficients are explicitly bounded in terms of the size of the input system, with a nearly optimal bound. This set, called the equiprojectable decomposition, depends only on the choice of the coordinates. This allows the use of modular methods for computing the equiprojectable decomposition efficiently. The second issue is generally solved by outputting regular chains of a special form, sometimes called shape lemma, for which all the di but the first one are equal to 1. For getting such regular chains, one may have to add a further variable, called the separating variable, which is given the index 0. The rational univariate representation, described below, allows one to compute such a special regular chain, satisfying the Dahan–Schost bound, by starting from either a regular chain or a Gröbner basis. === Rational univariate representation === The rational univariate representation or RUR is a representation of the solutions of a zero-dimensional polynomial system over the rational numbers which was introduced by F. Rouillier. A RUR of a zero-dimensional system consists of a linear combination x0 of the variables, called the separating variable, and a system of equations { h ( x 0 ) = 0 x 1 = g 1 ( x 0 ) / g 0 ( x 0 ) ⋮ x n = g n ( x 0 ) / g 0 ( x 0 ) , {\displaystyle {\begin{cases}h(x_{0})=0\\x_{1}=g_{1}(x_{0})/g_{0}(x_{0})\\\quad \vdots \\x_{n}=g_{n}(x_{0})/g_{0}(x_{0}),\end{cases}}} where h is a univariate polynomial in x0 of degree D and g0, ..., gn are univariate polynomials in x0 of degree less than D. Given a zero-dimensional polynomial system over the rational numbers, the RUR has the following properties. Almost every linear combination of the variables is a separating variable. When the separating variable is chosen, the RUR exists and is unique. In particular, h and the gi are defined independently of any algorithm to compute them. The solutions of the system are in one-to-one correspondence with the roots of h, and the multiplicity of each root of h equals the multiplicity of the corresponding solution. The solutions of the system are obtained by substituting the roots of h in the other equations. If h does not have any multiple root then g0 is the derivative of h. For example, for the system in the previous section, every linear combination of the variables, except the multiples of x, y and x + y, is a separating variable. If one chooses t = (x – y)/2 as the separating variable, then the RUR is { t 3 − t = 0 x = t 2 + 2 t − 1 3 t 2 − 1 y = t 2 − 2 t − 1 3 t 2 − 1 . {\displaystyle {\begin{cases}t^{3}-t=0\\x={\frac {t^{2}+2t-1}{3t^{2}-1}}\\y={\frac {t^{2}-2t-1}{3t^{2}-1}}.\\\end{cases}}} The RUR is uniquely defined for a given separating variable, independently of any algorithm, and it preserves the multiplicities of the roots. This is a notable difference from triangular decompositions (even the equiprojectable decomposition), which, in general, do not preserve multiplicities. The RUR shares with the equiprojectable decomposition the property of producing an output with coefficients of relatively small size. For zero-dimensional systems, the RUR allows retrieval of the numeric values of the solutions by solving a single univariate polynomial and substituting its roots into rational functions. This allows the production of certified approximations of the solutions to any given precision.
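A sketch of this recovery for the RUR of the example, assuming SymPy:

```python
# Solve the single univariate polynomial h(t) = t**3 - t once, then
# substitute each root into the rational functions for x and y.
from sympy import symbols, real_roots, simplify

t = symbols("t")
for r in real_roots(t**3 - t):
    x0 = simplify((r**2 + 2*r - 1) / (3*r**2 - 1))
    y0 = simplify((r**2 - 2*r - 1) / (3*r**2 - 1))
    print(r, x0, y0)   # t = -1, 0, 1 give (-1, 1), (1, 1), (1, -1)
```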
Moreover, the univariate polynomial h(x0) of the RUR may be factorized, and this gives a RUR for every irreducible factor. This provides the prime decomposition of the given ideal (that is, the primary decomposition of the radical of the ideal). In practice, this provides an output with much smaller coefficients, especially in the case of systems with high multiplicities. In contrast to triangular decompositions and equiprojectable decompositions, the RUR is not defined in positive dimension. == Solving numerically == === General solving algorithms === The general numerical algorithms which are designed for any system of nonlinear equations also work for polynomial systems. However, the specific methods will generally be preferred, as the general methods usually do not allow one to find all solutions. In particular, when a general method does not find any solution, this is usually not an indication that there is no solution. Nevertheless, two methods deserve to be mentioned here. Newton's method may be used if the number of equations is equal to the number of variables. It does not allow one to find all the solutions nor to prove that there is no solution. But it is very fast when starting from a point which is close to a solution. Therefore, it is a basic tool for the homotopy continuation method described below. Optimization is rarely used for solving polynomial systems, but it succeeded, circa 1970, in showing that a system of 81 quadratic equations in 56 variables is not inconsistent. With the other known methods, this remains beyond the possibilities of modern technology, as of 2022. This method consists simply in minimizing the sum of the squares of the equations. If zero is found as a local minimum, then it is attained at a solution. This method works for overdetermined systems, but provides no information if all the local minima that are found are positive. === Homotopy continuation method === This is a semi-numeric method which supposes that the number of equations is equal to the number of variables. This method is relatively old but it has been dramatically improved in recent decades. This method divides into three steps. First an upper bound on the number of solutions is computed. This bound has to be as sharp as possible. Therefore, it is computed by at least four different methods, and the best value, say N {\displaystyle N} , is kept. In the second step, a system g 1 = 0 , … , g n = 0 {\displaystyle g_{1}=0,\,\ldots ,\,g_{n}=0} of polynomial equations is generated which has exactly N {\displaystyle N} solutions that are easy to compute. This new system has the same number n {\displaystyle n} of variables and the same number n {\displaystyle n} of equations and the same general structure as the system to solve, f 1 = 0 , … , f n = 0 {\displaystyle f_{1}=0,\,\ldots ,\,f_{n}=0} . Then a homotopy between the two systems is considered. It consists, for example, of the straight line between the two systems, realized by the system ( 1 − t ) g 1 + t f 1 = 0 , … , ( 1 − t ) g n + t f n = 0 {\displaystyle (1-t)g_{1}+tf_{1}=0,\,\ldots ,\,(1-t)g_{n}+tf_{n}=0} , but other paths may be considered, in particular to avoid some singularities. The homotopy continuation consists in deforming the parameter t {\displaystyle t} from 0 to 1 and following the N {\displaystyle N} solutions during this deformation. This gives the desired solutions for t = 1 {\displaystyle t=1} .
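A toy sketch of the method on the running example, assuming NumPy; the start system g and the complex constant gamma below are illustrative choices, and real solvers use adaptive step sizes and certified tracking:

```python
# Track f = (x**2 + y**2 - 5, x*y - 2) from the start system
# g = (x**2 - 1, y**2 - 1), whose 4 = 2*2 roots (+-1, +-1) match the
# Bezout bound of f.  The factor gamma is the usual trick for keeping
# the paths away from real singularities.
import numpy as np

gamma = 0.6 + 0.8j

def F(z, t):
    x, y = z
    g = np.array([x**2 - 1, y**2 - 1])
    f = np.array([x**2 + y**2 - 5, x*y - 2])
    return (1 - t) * gamma * g + t * f

def J(z, t):
    x, y = z
    Jg = np.array([[2*x, 0], [0, 2*y]])
    Jf = np.array([[2*x, 2*y], [y, x]])
    return (1 - t) * gamma * Jg + t * Jf

def track(z, n_steps=200, newton_iters=5):
    for k in range(1, n_steps + 1):
        t = k / n_steps
        for _ in range(newton_iters):   # Newton correction at each step
            z = z - np.linalg.solve(J(z, t), F(z, t))
    return z

for sx in (1, -1):
    for sy in (1, -1):
        z = track(np.array([sx, sy], dtype=complex))
        # each path should end at one of the four solutions listed earlier
        print(np.round(z.real, 6))
```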
Here, following the solutions means that, if t 1 < t 2 {\displaystyle t_{1}<t_{2}} , the solutions for t = t 2 {\displaystyle t=t_{2}} are deduced from the solutions for t = t 1 {\displaystyle t=t_{1}} by Newton's method. The difficulty is to choose the value of t 2 − t 1 {\displaystyle t_{2}-t_{1}} well: if it is too large, the convergence of Newton's method may be slow, and the iteration may even jump from one solution path to another; if it is too small, the number of steps slows down the method. === Numerically solving from the rational univariate representation === To deduce the numeric values of the solutions from a RUR seems easy: it suffices to compute the roots of the univariate polynomial and to substitute them in the other equations. This is not so easy because the evaluation of a polynomial at the roots of another polynomial is highly unstable. The roots of the univariate polynomial have thus to be computed at a high precision, which may not be defined once and for all. There are two algorithms which fulfill this requirement. The Aberth method, implemented in MPSolve, computes all the complex roots to any precision. Uspensky's algorithm of Collins and Akritas, improved by Rouillier and Zimmermann and based on Descartes' rule of signs, computes the real roots, isolated in intervals of arbitrarily small width. It is implemented in Maple (functions fsolve and RootFinding[Isolate]). == Software packages == There are at least four software packages which can solve zero-dimensional systems automatically (by automatically, one means that no human intervention is needed between input and output, and thus that no knowledge of the method by the user is needed). There are also several other software packages which may be useful for solving zero-dimensional systems. Some of them are listed after the automatic solvers. The Maple function RootFinding[Isolate] takes as input any polynomial system over the rational numbers (if some coefficients are floating point numbers, they are converted to rational numbers) and outputs the real solutions represented either (optionally) as intervals of rational numbers or as floating point approximations of arbitrary precision. If the system is not zero-dimensional, this is signaled as an error. Internally, this solver, designed by F. Rouillier, first computes a Gröbner basis and then a rational univariate representation, from which the required approximations of the solutions are deduced. It works routinely for systems having up to a few hundred complex solutions. The rational univariate representation may be computed with the Maple function Groebner[RationalUnivariateRepresentation]. To extract all the complex solutions from a rational univariate representation, one may use MPSolve, which computes the complex roots of univariate polynomials to any precision. It is recommended to run MPSolve several times, doubling the precision each time, until the solutions remain stable, as the substitution of the roots into the equations in the input variables can be highly unstable. The second solver is PHCpack, written under the direction of J. Verschelde. PHCpack implements the homotopy continuation method. This solver computes the isolated complex solutions of polynomial systems having as many equations as variables. The third solver is Bertini, written by D. J. Bates, J. D. Hauenstein, A. J. Sommese, and C. W. Wampler. Bertini uses numerical homotopy continuation with adaptive precision.
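Returning to the real-root isolation described above: SymPy ships a Descartes-based isolating routine in the same spirit (an assumed stand-in here, not the Maple implementation):

```python
# Isolating intervals with rational endpoints for the real roots.
from sympy import Poly, Rational, symbols

z = symbols("z")
p = Poly(z**3 - z - 1, z)                   # exactly one real root (~1.3247)
print(p.intervals(eps=Rational(1, 10**6)))  # one interval of width < 1e-6
```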
Both PHCpack and Bertini are also capable of working with positive-dimensional solution sets, in addition to computing zero-dimensional ones. The fourth solver is the Maple library RegularChains, written by Marc Moreno-Maza and collaborators. It contains various functions for solving polynomial systems by means of regular chains. == See also == Elimination theory Systems of polynomial inequalities Triangular decomposition Wu's method of characteristic set
Wikipedia/Nonlinear_equation_system
A periodic function, also called a periodic waveform (or simply periodic wave), is a function that repeats its values at regular intervals or periods. The repeatable part of the function or waveform is called a cycle. For example, the trigonometric functions, which repeat at intervals of 2 π {\displaystyle 2\pi } radians, are periodic functions. Periodic functions are used throughout science to describe oscillations, waves, and other phenomena that exhibit periodicity. Any function that is not periodic is called aperiodic. == Definition == A function f is said to be periodic if, for some nonzero constant P, it is the case that f ( x + P ) = f ( x ) {\displaystyle f(x+P)=f(x)} for all values of x in the domain. A nonzero constant P for which this is the case is called a period of the function. If there exists a least positive constant P with this property, it is called the fundamental period (also primitive period, basic period, or prime period.) Often, "the" period of a function is used to mean its fundamental period. A function with period P will repeat on intervals of length P, and these intervals are sometimes also referred to as periods of the function. Geometrically, a periodic function can be defined as a function whose graph exhibits translational symmetry, i.e. a function f is periodic with period P if the graph of f is invariant under translation in the x-direction by a distance of P. This definition of periodicity can be extended to other geometric shapes and patterns, as well as be generalized to higher dimensions, such as periodic tessellations of the plane. A sequence can also be viewed as a function defined on the natural numbers, and for a periodic sequence these notions are defined accordingly. == Examples == === Real number examples === The sine function is periodic with period 2 π {\displaystyle 2\pi } , since sin ⁡ ( x + 2 π ) = sin ⁡ x {\displaystyle \sin(x+2\pi )=\sin x} for all values of x {\displaystyle x} . This function repeats on intervals of length 2 π {\displaystyle 2\pi } . Everyday examples are seen when the variable is time; for instance the hands of a clock or the phases of the moon show periodic behaviour. Periodic motion is motion in which the position(s) of the system are expressible as periodic functions, all with the same period. For a function on the real numbers or on the integers, that means that the entire graph can be formed from copies of one particular portion, repeated at regular intervals. A simple example of a periodic function is the function f {\displaystyle f} that gives the "fractional part" of its argument. Its period is 1. In particular, f ( 0.5 ) = f ( 1.5 ) = f ( 2.5 ) = ⋯ = 0.5 {\displaystyle f(0.5)=f(1.5)=f(2.5)=\cdots =0.5} The graph of the function f {\displaystyle f} is the sawtooth wave. The trigonometric functions sine and cosine are common periodic functions, with period 2 π {\displaystyle 2\pi } . The subject of Fourier series investigates the idea that an 'arbitrary' periodic function is a sum of trigonometric functions with matching periods. According to the definition above, some exotic functions, for example the Dirichlet function, are also periodic; in the case of the Dirichlet function, any nonzero rational number is a period. === Complex number examples === Using complex variables, we have the common periodic function: e i k x = cos ⁡ k x + i sin ⁡ k x . 
{\displaystyle e^{ikx}=\cos kx+i\,\sin kx.} Since the cosine and sine functions are both periodic with period 2 π {\displaystyle 2\pi } , the complex exponential is made up of cosine and sine waves. This means that Euler's formula (above) has the property that if L {\displaystyle L} is the period of the function, then L = 2 π k . {\displaystyle L={\frac {2\pi }{k}}.} ==== Double-periodic functions ==== A function whose domain is the complex numbers can have two incommensurate periods without being constant. The elliptic functions are such functions. ("Incommensurate" in this context means not real multiples of each other.) == Properties == A periodic function takes on each of its values many times. More specifically, if a function f {\displaystyle f} is periodic with period P {\displaystyle P} , then for all x {\displaystyle x} in the domain of f {\displaystyle f} and all positive integers n {\displaystyle n} , f ( x + n P ) = f ( x ) {\displaystyle f(x+nP)=f(x)} If f ( x ) {\displaystyle f(x)} is a function with period P {\displaystyle P} , then f ( a x ) {\displaystyle f(ax)} , where a {\displaystyle a} is a non-zero real number such that a x {\displaystyle ax} is within the domain of f {\displaystyle f} , is periodic with period P a {\textstyle {\frac {P}{a}}} . For example, f ( x ) = sin ⁡ ( x ) {\displaystyle f(x)=\sin(x)} has period 2 π {\displaystyle 2\pi } and, therefore, sin ⁡ ( 5 x ) {\displaystyle \sin(5x)} will have period 2 π 5 {\textstyle {\frac {2\pi }{5}}} . Some periodic functions can be described by Fourier series. For instance, for L2 functions, Carleson's theorem states that they have a pointwise (Lebesgue) almost everywhere convergent Fourier series. Fourier series can only be used for periodic functions, or for functions on a bounded (compact) interval. If f {\displaystyle f} is a periodic function with period P {\displaystyle P} that can be described by a Fourier series, the coefficients of the series can be described by an integral over an interval of length P {\displaystyle P} . Any function that is built only from periodic functions with the same period is also periodic, with a period equal to or smaller; this includes addition, subtraction, multiplication and division of periodic functions, and taking a power or a root of a periodic function (provided it is defined for all x {\displaystyle x} ). == Generalizations == === Antiperiodic functions === One subset of periodic functions is that of antiperiodic functions. This is a function f {\displaystyle f} such that f ( x + P ) = − f ( x ) {\displaystyle f(x+P)=-f(x)} for all x {\displaystyle x} . For example, the sine and cosine functions are π {\displaystyle \pi } -antiperiodic and 2 π {\displaystyle 2\pi } -periodic. While a P {\displaystyle P} -antiperiodic function is a 2 P {\displaystyle 2P} -periodic function, the converse is not necessarily true. === Bloch-periodic functions === A further generalization appears in the context of Bloch's theorems and Floquet theory, which govern the solution of various periodic differential equations. In this context, the solution (in one dimension) is typically a function of the form f ( x + P ) = e i k P f ( x ) , {\displaystyle f(x+P)=e^{ikP}f(x)~,} where k {\displaystyle k} is a real or complex number (the Bloch wavevector or Floquet exponent). Functions of this form are sometimes called Bloch-periodic in this context. A periodic function is the special case k = 0 {\displaystyle k=0} , and an antiperiodic function is the special case k = π / P {\displaystyle k=\pi /P} .
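A quick numerical spot-check of these statements for the sine function:

```python
import math

# sin is pi-antiperiodic: sin(x + pi) = -sin(x); applying this twice gives
# 2*pi-periodicity, the k = pi/P special case of the Bloch form above.
for x in (0.3, 1.7, -2.2):
    assert math.isclose(math.sin(x + math.pi), -math.sin(x))
    assert math.isclose(math.sin(x + 2 * math.pi), math.sin(x))
print("sine is pi-antiperiodic and 2*pi-periodic at the sampled points")
```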
Whenever k P / π {\displaystyle kP/\pi } is rational, a function of this Bloch form is also periodic. === Quotient spaces as domain === In signal processing, one encounters the problem that Fourier series represent periodic functions, and that Fourier series satisfy convolution theorems (i.e. convolution of Fourier series corresponds to multiplication of the represented periodic functions and vice versa), but periodic functions cannot be convolved with the usual definition, since the involved integrals diverge. A possible way out is to define a periodic function on a bounded but periodic domain. To this end, one can use the notion of a quotient space: R / Z = { x + Z : x ∈ R } = { { y : y ∈ R ∧ y − x ∈ Z } : x ∈ R } {\displaystyle {\mathbb {R} /\mathbb {Z} }=\{x+\mathbb {Z} :x\in \mathbb {R} \}=\{\{y:y\in \mathbb {R} \land y-x\in \mathbb {Z} \}:x\in \mathbb {R} \}} . That is, each element in R / Z {\displaystyle {\mathbb {R} /\mathbb {Z} }} is an equivalence class of real numbers that share the same fractional part. Thus a function like f : R / Z → R {\displaystyle f:{\mathbb {R} /\mathbb {Z} }\to \mathbb {R} } is a representation of a 1-periodic function. == Calculating period == Consider a real waveform consisting of superimposed frequencies, expressed in a set as ratios to a fundamental frequency f: F = 1⁄f [f1 f2 f3 ... fN], where all non-zero elements are ≥ 1 and at least one of the elements of the set is 1. To find the period, T, first find the least common denominator (LCD) of all the elements in the set. The period can then be found as T = LCD⁄f. Consider that for a simple sinusoid, T = 1⁄f. Therefore, the LCD can be seen as a periodicity multiplier. For the set representing all notes of the Western major scale, [1 9⁄8 5⁄4 4⁄3 3⁄2 5⁄3 15⁄8], the LCD is 24, therefore T = 24⁄f. For the set representing all notes of a major triad, [1 5⁄4 3⁄2], the LCD is 4, therefore T = 4⁄f. For the set representing all notes of a minor triad, [1 6⁄5 3⁄2], the LCD is 10, therefore T = 10⁄f. If no least common denominator exists, for instance if one of the above elements were irrational, then the wave would not be periodic.
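The computation of the LCD is easy to mechanize with exact rational arithmetic; a minimal sketch in Python:

```python
# Computing the periodicity multiplier (the LCD) for the sets above.
from fractions import Fraction
from math import lcm

def period_multiplier(ratios):
    return lcm(*(Fraction(r).denominator for r in ratios))

print(period_multiplier(["1", "9/8", "5/4", "4/3", "3/2", "5/3", "15/8"]))  # 24
print(period_multiplier(["1", "5/4", "3/2"]))                               # 4
print(period_multiplier(["1", "6/5", "3/2"]))                               # 10
```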
Wikipedia/Periodic_function
In trigonometry, trigonometric identities are equalities that involve trigonometric functions and are true for every value of the occurring variables for which both sides of the equality are defined. Geometrically, these are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities potentially involving angles but also involving side lengths or other lengths of a triangle. These identities are useful whenever expressions involving trigonometric functions need to be simplified. An important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. == Pythagorean identities == The basic relationship between the sine and cosine is given by the Pythagorean identity: sin 2 ⁡ θ + cos 2 ⁡ θ = 1 , {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1,} where sin 2 ⁡ θ {\displaystyle \sin ^{2}\theta } means ( sin ⁡ θ ) 2 {\displaystyle {(\sin \theta )}^{2}} and cos 2 ⁡ θ {\displaystyle \cos ^{2}\theta } means ( cos ⁡ θ ) 2 . {\displaystyle {(\cos \theta )}^{2}.} This can be viewed as a version of the Pythagorean theorem, and follows from the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} for the unit circle. This equation can be solved for either the sine or the cosine: sin ⁡ θ = ± 1 − cos 2 ⁡ θ , cos ⁡ θ = ± 1 − sin 2 ⁡ θ . {\displaystyle {\begin{aligned}\sin \theta &=\pm {\sqrt {1-\cos ^{2}\theta }},\\\cos \theta &=\pm {\sqrt {1-\sin ^{2}\theta }}.\end{aligned}}} where the sign depends on the quadrant of θ . {\displaystyle \theta .} Dividing this identity by sin 2 ⁡ θ {\displaystyle \sin ^{2}\theta } , cos 2 ⁡ θ {\displaystyle \cos ^{2}\theta } , or both yields the following identities: 1 + cot 2 ⁡ θ = csc 2 ⁡ θ 1 + tan 2 ⁡ θ = sec 2 ⁡ θ sec 2 ⁡ θ + csc 2 ⁡ θ = sec 2 ⁡ θ csc 2 ⁡ θ {\displaystyle {\begin{aligned}&1+\cot ^{2}\theta =\csc ^{2}\theta \\&1+\tan ^{2}\theta =\sec ^{2}\theta \\&\sec ^{2}\theta +\csc ^{2}\theta =\sec ^{2}\theta \csc ^{2}\theta \end{aligned}}} Using these identities, it is possible to express any trigonometric function in terms of any other, up to a plus or minus sign. == Reflections, shifts, and periodicity == By examining the unit circle, one can establish the following properties of the trigonometric functions. === Reflections === When the direction of a Euclidean vector is represented by an angle θ , {\displaystyle \theta ,} this is the angle determined by the free vector (starting at the origin) and the positive x {\displaystyle x} -unit vector. The same concept may also be applied to lines in a Euclidean space, where the angle is that determined by a parallel to the given line through the origin and the positive x {\displaystyle x} -axis. If a line (vector) with direction θ {\displaystyle \theta } is reflected about a line with direction α , {\displaystyle \alpha ,} then the direction angle θ ′ {\displaystyle \theta ^{\prime }} of this reflected line (vector) has the value θ ′ = 2 α − θ . {\displaystyle \theta ^{\prime }=2\alpha -\theta .} The values of the trigonometric functions of these angles θ , θ ′ {\displaystyle \theta ,\;\theta ^{\prime }} for specific angles α {\displaystyle \alpha } satisfy simple identities: either they are equal, or have opposite signs, or employ the complementary trigonometric function. These are also known as reduction formulae.
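These identities are easy to check symbolically; a minimal sketch with SymPy (assumed available), covering the Pythagorean identities above and one reduction formula:

```python
from sympy import symbols, sin, cos, cot, csc, trigsimp, simplify, pi

theta = symbols("theta")
print(trigsimp(sin(theta)**2 + cos(theta)**2))       # 1
print(trigsimp(1 + cot(theta)**2 - csc(theta)**2))   # 0
print(simplify(sin(pi - theta)))                     # sin(theta): a reduction formula
```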
=== Shifts and periodicity === === Signs === The sign of the trigonometric functions depends on the quadrant of the angle. If − π < θ ≤ π {\displaystyle {-\pi }<\theta \leq \pi } and sgn is the sign function, sgn ⁡ ( sin ⁡ θ ) = sgn ⁡ ( csc ⁡ θ ) = { + 1 if 0 < θ < π − 1 if − π < θ < 0 0 if θ ∈ { 0 , π } sgn ⁡ ( cos ⁡ θ ) = sgn ⁡ ( sec ⁡ θ ) = { + 1 if − 1 2 π < θ < 1 2 π − 1 if − π < θ < − 1 2 π or 1 2 π < θ < π 0 if θ ∈ { − 1 2 π , 1 2 π } sgn ⁡ ( tan ⁡ θ ) = sgn ⁡ ( cot ⁡ θ ) = { + 1 if − π < θ < − 1 2 π or 0 < θ < 1 2 π − 1 if − 1 2 π < θ < 0 or 1 2 π < θ < π 0 if θ ∈ { − 1 2 π , 0 , 1 2 π , π } {\displaystyle {\begin{aligned}\operatorname {sgn}(\sin \theta )=\operatorname {sgn}(\csc \theta )&={\begin{cases}+1&{\text{if}}\ \ 0<\theta <\pi \\-1&{\text{if}}\ \ {-\pi }<\theta <0\\0&{\text{if}}\ \ \theta \in \{0,\pi \}\end{cases}}\\[5mu]\operatorname {sgn}(\cos \theta )=\operatorname {sgn}(\sec \theta )&={\begin{cases}+1&{\text{if}}\ \ {-{\tfrac {1}{2}}\pi }<\theta <{\tfrac {1}{2}}\pi \\-1&{\text{if}}\ \ {-\pi }<\theta <-{\tfrac {1}{2}}\pi \ \ {\text{or}}\ \ {\tfrac {1}{2}}\pi <\theta <\pi \\0&{\text{if}}\ \ \theta \in {\bigl \{}{-{\tfrac {1}{2}}\pi },{\tfrac {1}{2}}\pi {\bigr \}}\end{cases}}\\[5mu]\operatorname {sgn}(\tan \theta )=\operatorname {sgn}(\cot \theta )&={\begin{cases}+1&{\text{if}}\ \ {-\pi }<\theta <-{\tfrac {1}{2}}\pi \ \ {\text{or}}\ \ 0<\theta <{\tfrac {1}{2}}\pi \\-1&{\text{if}}\ \ {-{\tfrac {1}{2}}\pi }<\theta <0\ \ {\text{or}}\ \ {\tfrac {1}{2}}\pi <\theta <\pi \\0&{\text{if}}\ \ \theta \in {\bigl \{}{-{\tfrac {1}{2}}\pi },0,{\tfrac {1}{2}}\pi ,\pi {\bigr \}}\end{cases}}\end{aligned}}} The trigonometric functions are periodic with common period 2 π , {\displaystyle 2\pi ,} so for values of θ outside the interval ( − π , π ] , {\displaystyle ({-\pi },\pi ],} they take repeating values (see § Shifts and periodicity above). == Angle sum and difference identities == These are also known as the angle addition and subtraction theorems (or formulae). sin ⁡ ( α + β ) = sin ⁡ α cos ⁡ β + cos ⁡ α sin ⁡ β sin ⁡ ( α − β ) = sin ⁡ α cos ⁡ β − cos ⁡ α sin ⁡ β cos ⁡ ( α + β ) = cos ⁡ α cos ⁡ β − sin ⁡ α sin ⁡ β cos ⁡ ( α − β ) = cos ⁡ α cos ⁡ β + sin ⁡ α sin ⁡ β {\displaystyle {\begin{aligned}\sin(\alpha +\beta )&=\sin \alpha \cos \beta +\cos \alpha \sin \beta \\\sin(\alpha -\beta )&=\sin \alpha \cos \beta -\cos \alpha \sin \beta \\\cos(\alpha +\beta )&=\cos \alpha \cos \beta -\sin \alpha \sin \beta \\\cos(\alpha -\beta )&=\cos \alpha \cos \beta +\sin \alpha \sin \beta \end{aligned}}} The angle difference identities for sin ⁡ ( α − β ) {\displaystyle \sin(\alpha -\beta )} and cos ⁡ ( α − β ) {\displaystyle \cos(\alpha -\beta )} can be derived from the angle sum versions by substituting − β {\displaystyle -\beta } for β {\displaystyle \beta } and using the facts that sin ⁡ ( − β ) = − sin ⁡ ( β ) {\displaystyle \sin(-\beta )=-\sin(\beta )} and cos ⁡ ( − β ) = cos ⁡ ( β ) {\displaystyle \cos(-\beta )=\cos(\beta )} . They can also be derived geometrically, by using a slightly modified version of the figure for the angle sum identities. Analogous sum and difference identities hold for the other trigonometric functions as well.
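The angle-sum identities can be expanded symbolically; a sketch with SymPy's expand_trig:

```python
from sympy import symbols, sin, cos, expand_trig

a, b = symbols("alpha beta")
print(expand_trig(sin(a + b)))   # sin(alpha)*cos(beta) + sin(beta)*cos(alpha)
print(expand_trig(cos(a + b)))   # -sin(alpha)*sin(beta) + cos(alpha)*cos(beta)
```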
=== Sines and cosines of sums of infinitely many angles === When the series ∑ i = 1 ∞ θ i {\textstyle \sum _{i=1}^{\infty }\theta _{i}} converges absolutely then sin ( ∑ i = 1 ∞ θ i ) = ∑ odd k ≥ 1 ( − 1 ) k − 1 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ( ∏ i ∈ A sin ⁡ θ i ∏ i ∉ A cos ⁡ θ i ) cos ( ∑ i = 1 ∞ θ i ) = ∑ even k ≥ 0 ( − 1 ) k 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ( ∏ i ∈ A sin ⁡ θ i ∏ i ∉ A cos ⁡ θ i ) . {\displaystyle {\begin{aligned}{\sin }{\biggl (}\sum _{i=1}^{\infty }\theta _{i}{\biggl )}&=\sum _{{\text{odd}}\ k\geq 1}(-1)^{\frac {k-1}{2}}\!\!\sum _{\begin{smallmatrix}A\subseteq \{\,1,2,3,\dots \,\}\\\left|A\right|=k\end{smallmatrix}}{\biggl (}\prod _{i\in A}\sin \theta _{i}\prod _{i\not \in A}\cos \theta _{i}{\biggr )}\\{\cos }{\biggl (}\sum _{i=1}^{\infty }\theta _{i}{\biggr )}&=\sum _{{\text{even}}\ k\geq 0}(-1)^{\frac {k}{2}}\,\sum _{\begin{smallmatrix}A\subseteq \{\,1,2,3,\dots \,\}\\\left|A\right|=k\end{smallmatrix}}{\biggl (}\prod _{i\in A}\sin \theta _{i}\prod _{i\not \in A}\cos \theta _{i}{\biggr )}.\end{aligned}}} Because the series ∑ i = 1 ∞ θ i {\textstyle \sum _{i=1}^{\infty }\theta _{i}} converges absolutely, it is necessarily the case that lim i → ∞ θ i = 0 , {\textstyle \lim _{i\to \infty }\theta _{i}=0,} lim i → ∞ sin ⁡ θ i = 0 , {\textstyle \lim _{i\to \infty }\sin \theta _{i}=0,} and lim i → ∞ cos ⁡ θ i = 1. {\textstyle \lim _{i\to \infty }\cos \theta _{i}=1.} In particular, in these two identities an asymmetry appears that is not seen in the case of sums of finitely many angles: in each product, there are only finitely many sine factors but there are cofinitely many cosine factors. Terms with infinitely many sine factors would necessarily be equal to zero. When only finitely many of the angles θ i {\displaystyle \theta _{i}} are nonzero then only finitely many of the terms on the right side are nonzero because all but finitely many sine factors vanish. Furthermore, in each term all but finitely many of the cosine factors are unity. 
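A numerical spot-check of the finite-angle version of this expansion (three angles, so only the odd subset sizes 1 and 3 contribute):

```python
import math
from itertools import combinations

def sin_of_sum(thetas):
    # sum over odd-size subsets A of sine factors, cosine factors elsewhere
    n, total = len(thetas), 0.0
    for k in range(1, n + 1, 2):
        sign = (-1) ** ((k - 1) // 2)
        for A in combinations(range(n), k):
            term = sign
            for i in range(n):
                term *= math.sin(thetas[i]) if i in A else math.cos(thetas[i])
            total += term
    return total

thetas = [0.3, 1.1, -0.4]
print(sin_of_sum(thetas))       # should agree with the line below
print(math.sin(sum(thetas)))
```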
=== Tangents and cotangents of sums === Let e k {\displaystyle e_{k}} (for k = 0 , 1 , 2 , 3 , … {\displaystyle k=0,1,2,3,\ldots } ) be the kth-degree elementary symmetric polynomial in the variables x i = tan ⁡ θ i {\displaystyle x_{i}=\tan \theta _{i}} for i = 0 , 1 , 2 , 3 , … , {\displaystyle i=0,1,2,3,\ldots ,} that is, e 0 = 1 e 1 = ∑ i x i = ∑ i tan ⁡ θ i e 2 = ∑ i < j x i x j = ∑ i < j tan ⁡ θ i tan ⁡ θ j e 3 = ∑ i < j < k x i x j x k = ∑ i < j < k tan ⁡ θ i tan ⁡ θ j tan ⁡ θ k ⋮ ⋮ {\displaystyle {\begin{aligned}e_{0}&=1\\[6pt]e_{1}&=\sum _{i}x_{i}&&=\sum _{i}\tan \theta _{i}\\[6pt]e_{2}&=\sum _{i<j}x_{i}x_{j}&&=\sum _{i<j}\tan \theta _{i}\tan \theta _{j}\\[6pt]e_{3}&=\sum _{i<j<k}x_{i}x_{j}x_{k}&&=\sum _{i<j<k}\tan \theta _{i}\tan \theta _{j}\tan \theta _{k}\\&\ \ \vdots &&\ \ \vdots \end{aligned}}} Then tan ( ∑ i θ i ) = sin ( ∑ i θ i ) / ∏ i cos ⁡ θ i cos ( ∑ i θ i ) / ∏ i cos ⁡ θ i = ∑ odd k ≥ 1 ( − 1 ) k − 1 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ∏ i ∈ A tan ⁡ θ i ∑ even k ≥ 0 ( − 1 ) k 2 ∑ A ⊆ { 1 , 2 , 3 , … } | A | = k ∏ i ∈ A tan ⁡ θ i = e 1 − e 3 + e 5 − ⋯ e 0 − e 2 + e 4 − ⋯ cot ( ∑ i θ i ) = e 0 − e 2 + e 4 − ⋯ e 1 − e 3 + e 5 − ⋯ {\displaystyle {\begin{aligned}{\tan }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {{\sin }{\bigl (}\sum _{i}\theta _{i}{\bigr )}/\prod _{i}\cos \theta _{i}}{{\cos }{\bigl (}\sum _{i}\theta _{i}{\bigr )}/\prod _{i}\cos \theta _{i}}}\\[10pt]&={\frac {\displaystyle \sum _{{\text{odd}}\ k\geq 1}(-1)^{\frac {k-1}{2}}\sum _{\begin{smallmatrix}A\subseteq \{1,2,3,\dots \}\\\left|A\right|=k\end{smallmatrix}}\prod _{i\in A}\tan \theta _{i}}{\displaystyle \sum _{{\text{even}}\ k\geq 0}~(-1)^{\frac {k}{2}}~~\sum _{\begin{smallmatrix}A\subseteq \{1,2,3,\dots \}\\\left|A\right|=k\end{smallmatrix}}\prod _{i\in A}\tan \theta _{i}}}={\frac {e_{1}-e_{3}+e_{5}-\cdots }{e_{0}-e_{2}+e_{4}-\cdots }}\\[10pt]{\cot }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {e_{0}-e_{2}+e_{4}-\cdots }{e_{1}-e_{3}+e_{5}-\cdots }}\end{aligned}}} using the sine and cosine sum formulae above. The number of terms on the right side depends on the number of terms on the left side. For example: tan ⁡ ( θ 1 + θ 2 ) = e 1 e 0 − e 2 = x 1 + x 2 1 − x 1 x 2 = tan ⁡ θ 1 + tan ⁡ θ 2 1 − tan ⁡ θ 1 tan ⁡ θ 2 , tan ⁡ ( θ 1 + θ 2 + θ 3 ) = e 1 − e 3 e 0 − e 2 = ( x 1 + x 2 + x 3 ) − ( x 1 x 2 x 3 ) 1 − ( x 1 x 2 + x 1 x 3 + x 2 x 3 ) , tan ⁡ ( θ 1 + θ 2 + θ 3 + θ 4 ) = e 1 − e 3 e 0 − e 2 + e 4 = ( x 1 + x 2 + x 3 + x 4 ) − ( x 1 x 2 x 3 + x 1 x 2 x 4 + x 1 x 3 x 4 + x 2 x 3 x 4 ) 1 − ( x 1 x 2 + x 1 x 3 + x 1 x 4 + x 2 x 3 + x 2 x 4 + x 3 x 4 ) + ( x 1 x 2 x 3 x 4 ) , {\displaystyle {\begin{aligned}\tan(\theta _{1}+\theta _{2})&={\frac {e_{1}}{e_{0}-e_{2}}}={\frac {x_{1}+x_{2}}{1\ -\ x_{1}x_{2}}}={\frac {\tan \theta _{1}+\tan \theta _{2}}{1\ -\ \tan \theta _{1}\tan \theta _{2}}},\\[8pt]\tan(\theta _{1}+\theta _{2}+\theta _{3})&={\frac {e_{1}-e_{3}}{e_{0}-e_{2}}}={\frac {(x_{1}+x_{2}+x_{3})\ -\ (x_{1}x_{2}x_{3})}{1\ -\ (x_{1}x_{2}+x_{1}x_{3}+x_{2}x_{3})}},\\[8pt]\tan(\theta _{1}+\theta _{2}+\theta _{3}+\theta _{4})&={\frac {e_{1}-e_{3}}{e_{0}-e_{2}+e_{4}}}\\[8pt]&={\frac {(x_{1}+x_{2}+x_{3}+x_{4})\ -\ (x_{1}x_{2}x_{3}+x_{1}x_{2}x_{4}+x_{1}x_{3}x_{4}+x_{2}x_{3}x_{4})}{1\ -\ (x_{1}x_{2}+x_{1}x_{3}+x_{1}x_{4}+x_{2}x_{3}+x_{2}x_{4}+x_{3}x_{4})\ +\ (x_{1}x_{2}x_{3}x_{4})}},\end{aligned}}} and so on. The case of only finitely many terms can be proved by mathematical induction. The case of infinitely many terms can be proved by using some elementary inequalities. 
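A numerical spot-check of the tangent formula in terms of the elementary symmetric polynomials, for four angles:

```python
import math
from itertools import combinations

def e(xs, k):
    # k-th elementary symmetric polynomial of the values xs (e_0 = 1)
    return sum(math.prod(c) for c in combinations(xs, k))

thetas = [0.2, 0.5, -0.3, 0.9]
xs = [math.tan(t) for t in thetas]
n = len(xs)
num = sum((-1) ** (k // 2) * e(xs, k) for k in range(1, n + 1, 2))  # e1 - e3 + ...
den = sum((-1) ** (k // 2) * e(xs, k) for k in range(0, n + 1, 2))  # e0 - e2 + ...
print(num / den)                # should agree with the line below
print(math.tan(sum(thetas)))
```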
=== Secants and cosecants of sums === sec ( ∑ i θ i ) = ∏ i sec ⁡ θ i e 0 − e 2 + e 4 − ⋯ csc ( ∑ i θ i ) = ∏ i sec ⁡ θ i e 1 − e 3 + e 5 − ⋯ {\displaystyle {\begin{aligned}{\sec }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {\prod _{i}\sec \theta _{i}}{e_{0}-e_{2}+e_{4}-\cdots }}\\[8pt]{\csc }{\Bigl (}\sum _{i}\theta _{i}{\Bigr )}&={\frac {\prod _{i}\sec \theta _{i}}{e_{1}-e_{3}+e_{5}-\cdots }}\end{aligned}}} where e k {\displaystyle e_{k}} is the kth-degree elementary symmetric polynomial in the n variables x i = tan ⁡ θ i , {\displaystyle x_{i}=\tan \theta _{i},} i = 1 , … , n , {\displaystyle i=1,\ldots ,n,} and the number of terms in the denominator and the number of factors in the product in the numerator depend on the number of terms in the sum on the left. The case of only finitely many terms can be proved by mathematical induction on the number of such terms. For example, sec ⁡ ( α + β + γ ) = sec ⁡ α sec ⁡ β sec ⁡ γ 1 − tan ⁡ α tan ⁡ β − tan ⁡ α tan ⁡ γ − tan ⁡ β tan ⁡ γ csc ⁡ ( α + β + γ ) = sec ⁡ α sec ⁡ β sec ⁡ γ tan ⁡ α + tan ⁡ β + tan ⁡ γ − tan ⁡ α tan ⁡ β tan ⁡ γ . {\displaystyle {\begin{aligned}\sec(\alpha +\beta +\gamma )&={\frac {\sec \alpha \sec \beta \sec \gamma }{1-\tan \alpha \tan \beta -\tan \alpha \tan \gamma -\tan \beta \tan \gamma }}\\[8pt]\csc(\alpha +\beta +\gamma )&={\frac {\sec \alpha \sec \beta \sec \gamma }{\tan \alpha +\tan \beta +\tan \gamma -\tan \alpha \tan \beta \tan \gamma }}.\end{aligned}}} === Ptolemy's theorem === Ptolemy's theorem is important in the history of trigonometric identities, as it is how results equivalent to the sum and difference formulas for sine and cosine were first proved. It states that in a cyclic quadrilateral A B C D {\displaystyle ABCD} , as shown in the accompanying figure, the sum of the products of the lengths of opposite sides is equal to the product of the lengths of the diagonals. In the special cases of one of the diagonals or sides being a diameter of the circle, this theorem gives rise directly to the angle sum and difference trigonometric identities. The relationship follows most easily when the circle is constructed to have a diameter of length one, as shown here. By Thales's theorem, ∠ D A B {\displaystyle \angle DAB} and ∠ D C B {\displaystyle \angle DCB} are both right angles. The right-angled triangles D A B {\displaystyle DAB} and D C B {\displaystyle DCB} both share the hypotenuse B D ¯ {\displaystyle {\overline {BD}}} of length 1. Thus, the sides A B ¯ = sin ⁡ α {\displaystyle {\overline {AB}}=\sin \alpha } , A D ¯ = cos ⁡ α {\displaystyle {\overline {AD}}=\cos \alpha } , B C ¯ = sin ⁡ β {\displaystyle {\overline {BC}}=\sin \beta } and C D ¯ = cos ⁡ β {\displaystyle {\overline {CD}}=\cos \beta } . By the inscribed angle theorem, the central angle subtended by the chord A C ¯ {\displaystyle {\overline {AC}}} at the circle's center is twice the angle ∠ A D C {\displaystyle \angle ADC} , i.e. 2 ( α + β ) {\displaystyle 2(\alpha +\beta )} . Therefore, each triangle of the symmetrical pair has the angle α + β {\displaystyle \alpha +\beta } at the center. Each of these triangles has a hypotenuse of length 1 2 {\textstyle {\frac {1}{2}}} , so the length of A C ¯ {\displaystyle {\overline {AC}}} is 2 × 1 2 sin ⁡ ( α + β ) {\textstyle 2\times {\frac {1}{2}}\sin(\alpha +\beta )} , i.e. simply sin ⁡ ( α + β ) {\displaystyle \sin(\alpha +\beta )} . The quadrilateral's other diagonal is the diameter of length 1, so the product of the diagonals' lengths is also sin ⁡ ( α + β ) {\displaystyle \sin(\alpha +\beta )} .
When these values are substituted into the statement of Ptolemy's theorem that | A C ¯ | ⋅ | B D ¯ | = | A B ¯ | ⋅ | C D ¯ | + | A D ¯ | ⋅ | B C ¯ | {\displaystyle |{\overline {AC}}|\cdot |{\overline {BD}}|=|{\overline {AB}}|\cdot |{\overline {CD}}|+|{\overline {AD}}|\cdot |{\overline {BC}}|} , this yields the angle sum trigonometric identity for sine: sin ⁡ ( α + β ) = sin ⁡ α cos ⁡ β + cos ⁡ α sin ⁡ β {\displaystyle \sin(\alpha +\beta )=\sin \alpha \cos \beta +\cos \alpha \sin \beta } . The angle difference formula for sin ⁡ ( α − β ) {\displaystyle \sin(\alpha -\beta )} can be similarly derived by letting the side C D ¯ {\displaystyle {\overline {CD}}} serve as a diameter instead of B D ¯ {\displaystyle {\overline {BD}}} . == Multiple-angle and half-angle formulae == === Multiple-angle formulae === ==== Double-angle formulae ==== Formulae for twice an angle. ==== Triple-angle formulae ==== Formulae for triple angles. ==== Multiple-angle formulae ==== Formulae for multiple angles. ==== Chebyshev method ==== The Chebyshev method is a recursive algorithm for finding the nth multiple angle formula knowing the ( n − 1 ) {\displaystyle (n-1)} th and ( n − 2 ) {\displaystyle (n-2)} th values. cos ⁡ ( n x ) {\displaystyle \cos(nx)} can be computed from cos ⁡ ( ( n − 1 ) x ) {\displaystyle \cos((n-1)x)} , cos ⁡ ( ( n − 2 ) x ) {\displaystyle \cos((n-2)x)} , and cos ⁡ ( x ) {\displaystyle \cos(x)} with cos ⁡ ( n x ) = 2 cos ⁡ x cos ⁡ ( ( n − 1 ) x ) − cos ⁡ ( ( n − 2 ) x ) . {\displaystyle \cos(nx)=2\cos x\cos((n-1)x)-\cos((n-2)x).} This can be proved by adding together the formulae cos ⁡ ( ( n − 1 ) x + x ) = cos ⁡ ( ( n − 1 ) x ) cos ⁡ x − sin ⁡ ( ( n − 1 ) x ) sin ⁡ x cos ⁡ ( ( n − 1 ) x − x ) = cos ⁡ ( ( n − 1 ) x ) cos ⁡ x + sin ⁡ ( ( n − 1 ) x ) sin ⁡ x {\displaystyle {\begin{aligned}\cos((n-1)x+x)&=\cos((n-1)x)\cos x-\sin((n-1)x)\sin x\\\cos((n-1)x-x)&=\cos((n-1)x)\cos x+\sin((n-1)x)\sin x\end{aligned}}} It follows by induction that cos ⁡ ( n x ) {\displaystyle \cos(nx)} is a polynomial of cos ⁡ x , {\displaystyle \cos x,} the so-called Chebyshev polynomial of the first kind, see Chebyshev polynomials#Trigonometric definition. Similarly, sin ⁡ ( n x ) {\displaystyle \sin(nx)} can be computed from sin ⁡ ( ( n − 1 ) x ) , {\displaystyle \sin((n-1)x),} sin ⁡ ( ( n − 2 ) x ) , {\displaystyle \sin((n-2)x),} and cos ⁡ x {\displaystyle \cos x} with sin ⁡ ( n x ) = 2 cos ⁡ x sin ⁡ ( ( n − 1 ) x ) − sin ⁡ ( ( n − 2 ) x ) {\displaystyle \sin(nx)=2\cos x\sin((n-1)x)-\sin((n-2)x)} This can be proved by adding formulae for sin ⁡ ( ( n − 1 ) x + x ) {\displaystyle \sin((n-1)x+x)} and sin ⁡ ( ( n − 1 ) x − x ) . {\displaystyle \sin((n-1)x-x).} Serving a purpose similar to that of the Chebyshev method, for the tangent we can write: tan ⁡ ( n x ) = tan ⁡ ( ( n − 1 ) x ) + tan ⁡ x 1 − tan ⁡ ( ( n − 1 ) x ) tan ⁡ x . 
{\displaystyle \tan(nx)={\frac {\tan((n-1)x)+\tan x}{1-\tan((n-1)x)\tan x}}\,.} === Half-angle formulae === sin ⁡ θ 2 = sgn ⁡ ( sin ⁡ θ 2 ) 1 − cos ⁡ θ 2 cos ⁡ θ 2 = sgn ⁡ ( cos ⁡ θ 2 ) 1 + cos ⁡ θ 2 tan ⁡ θ 2 = 1 − cos ⁡ θ sin ⁡ θ = sin ⁡ θ 1 + cos ⁡ θ = csc ⁡ θ − cot ⁡ θ = tan ⁡ θ 1 + sec ⁡ θ = sgn ⁡ ( sin ⁡ θ ) 1 − cos ⁡ θ 1 + cos ⁡ θ = − 1 + sgn ⁡ ( cos ⁡ θ ) 1 + tan 2 ⁡ θ tan ⁡ θ cot ⁡ θ 2 = 1 + cos ⁡ θ sin ⁡ θ = sin ⁡ θ 1 − cos ⁡ θ = csc ⁡ θ + cot ⁡ θ = sgn ⁡ ( sin ⁡ θ ) 1 + cos ⁡ θ 1 − cos ⁡ θ sec ⁡ θ 2 = sgn ⁡ ( cos ⁡ θ 2 ) 2 1 + cos ⁡ θ csc ⁡ θ 2 = sgn ⁡ ( sin ⁡ θ 2 ) 2 1 − cos ⁡ θ {\displaystyle {\begin{aligned}\sin {\frac {\theta }{2}}&=\operatorname {sgn} \left(\sin {\frac {\theta }{2}}\right){\sqrt {\frac {1-\cos \theta }{2}}}\\[3pt]\cos {\frac {\theta }{2}}&=\operatorname {sgn} \left(\cos {\frac {\theta }{2}}\right){\sqrt {\frac {1+\cos \theta }{2}}}\\[3pt]\tan {\frac {\theta }{2}}&={\frac {1-\cos \theta }{\sin \theta }}={\frac {\sin \theta }{1+\cos \theta }}=\csc \theta -\cot \theta ={\frac {\tan \theta }{1+\sec {\theta }}}\\[6mu]&=\operatorname {sgn}(\sin \theta ){\sqrt {\frac {1-\cos \theta }{1+\cos \theta }}}={\frac {-1+\operatorname {sgn}(\cos \theta ){\sqrt {1+\tan ^{2}\theta }}}{\tan \theta }}\\[3pt]\cot {\frac {\theta }{2}}&={\frac {1+\cos \theta }{\sin \theta }}={\frac {\sin \theta }{1-\cos \theta }}=\csc \theta +\cot \theta =\operatorname {sgn}(\sin \theta ){\sqrt {\frac {1+\cos \theta }{1-\cos \theta }}}\\\sec {\frac {\theta }{2}}&=\operatorname {sgn} \left(\cos {\frac {\theta }{2}}\right){\sqrt {\frac {2}{1+\cos \theta }}}\\\csc {\frac {\theta }{2}}&=\operatorname {sgn} \left(\sin {\frac {\theta }{2}}\right){\sqrt {\frac {2}{1-\cos \theta }}}\\\end{aligned}}} Also tan ⁡ η ± θ 2 = sin ⁡ η ± sin ⁡ θ cos ⁡ η + cos ⁡ θ tan ⁡ ( θ 2 + π 4 ) = sec ⁡ θ + tan ⁡ θ 1 − sin ⁡ θ 1 + sin ⁡ θ = | 1 − tan ⁡ θ 2 | | 1 + tan ⁡ θ 2 | {\displaystyle {\begin{aligned}\tan {\frac {\eta \pm \theta }{2}}&={\frac {\sin \eta \pm \sin \theta }{\cos \eta +\cos \theta }}\\[3pt]\tan \left({\frac {\theta }{2}}+{\frac {\pi }{4}}\right)&=\sec \theta +\tan \theta \\[3pt]{\sqrt {\frac {1-\sin \theta }{1+\sin \theta }}}&={\frac {\left|1-\tan {\frac {\theta }{2}}\right|}{\left|1+\tan {\frac {\theta }{2}}\right|}}\end{aligned}}} === Table === These can be shown by using either the sum and difference identities or the multiple-angle formulae. The fact that the triple-angle formula for sine and cosine only involves powers of a single function allows one to relate the geometric problem of a compass and straightedge construction of angle trisection to the algebraic problem of solving a cubic equation, which allows one to prove that trisection is in general impossible using the given tools. A formula for computing the trigonometric identities for the one-third angle exists, but it requires finding the zeroes of the cubic equation 4x3 − 3x + d = 0, where x {\displaystyle x} is the value of the cosine function at the one-third angle and d is the known value of the cosine function at the full angle. However, the discriminant of this equation is positive, so this equation has three real roots (of which only one is the solution for the cosine of the one-third angle). None of these solutions are reducible to a real algebraic expression, as they use intermediate complex numbers under the cube roots. == Power-reduction formulae == Obtained by solving the second and third versions of the cosine double-angle formula. 
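The lowest-order power-reduction formulae obtained this way can be checked symbolically; a sketch with SymPy:

```python
from sympy import symbols, sin, cos, trigsimp

th = symbols("theta")
print(trigsimp(sin(th)**2 - (1 - cos(2*th))/2))               # 0
print(trigsimp(cos(th)**2 - (1 + cos(2*th))/2))               # 0
print(trigsimp(sin(th)**2 * cos(th)**2 - (1 - cos(4*th))/8))  # 0
```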
Power-reduction formulae for general powers of sin ⁡ θ {\displaystyle \sin \theta } or cos ⁡ θ {\displaystyle \cos \theta } can be deduced using De Moivre's formula, Euler's formula and the binomial theorem. == Product-to-sum and sum-to-product identities == The product-to-sum identities or prosthaphaeresis formulae can be proven by expanding their right-hand sides using the angle addition theorems. Historically, the first four of these were known as Werner's formulas, after Johannes Werner who used them for astronomical calculations. See amplitude modulation for an application of the product-to-sum formulae, and beat (acoustics) and phase detector for applications of the sum-to-product formulae. === Product-to-sum identities === === Sum-to-product identities === The sum-to-product identities follow from the product-to-sum identities by substitution. === Hermite's cotangent identity === Charles Hermite demonstrated the following identity. Suppose a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} are complex numbers, no two of which differ by an integer multiple of π. Let A n , k = ∏ 1 ≤ j ≤ n j ≠ k cot ⁡ ( a k − a j ) {\displaystyle A_{n,k}=\prod _{\begin{smallmatrix}1\leq j\leq n\\j\neq k\end{smallmatrix}}\cot(a_{k}-a_{j})} (in particular, A 1 , 1 , {\displaystyle A_{1,1},} being an empty product, is 1). Then cot ⁡ ( z − a 1 ) ⋯ cot ⁡ ( z − a n ) = cos ⁡ n π 2 + ∑ k = 1 n A n , k cot ⁡ ( z − a k ) . {\displaystyle \cot(z-a_{1})\cdots \cot(z-a_{n})=\cos {\frac {n\pi }{2}}+\sum _{k=1}^{n}A_{n,k}\cot(z-a_{k}).} The simplest non-trivial example is the case n = 2: cot ⁡ ( z − a 1 ) cot ⁡ ( z − a 2 ) = − 1 + cot ⁡ ( a 1 − a 2 ) cot ⁡ ( z − a 1 ) + cot ⁡ ( a 2 − a 1 ) cot ⁡ ( z − a 2 ) . {\displaystyle \cot(z-a_{1})\cot(z-a_{2})=-1+\cot(a_{1}-a_{2})\cot(z-a_{1})+\cot(a_{2}-a_{1})\cot(z-a_{2}).} === Finite products of trigonometric functions === For coprime integers n, m ∏ k = 1 n ( 2 a + 2 cos ⁡ ( 2 π k m n + x ) ) = 2 ( T n ( a ) + ( − 1 ) n + m cos ⁡ ( n x ) ) {\displaystyle \prod _{k=1}^{n}\left(2a+2\cos \left({\frac {2\pi km}{n}}+x\right)\right)=2\left(T_{n}(a)+{(-1)}^{n+m}\cos(nx)\right)} where Tn is the Chebyshev polynomial. The following relationship holds for the sine function ∏ k = 1 n − 1 sin ⁡ ( k π n ) = n 2 n − 1 . {\displaystyle \prod _{k=1}^{n-1}\sin \left({\frac {k\pi }{n}}\right)={\frac {n}{2^{n-1}}}.} More generally for an integer n > 0 sin ⁡ ( n x ) = 2 n − 1 ∏ k = 0 n − 1 sin ⁡ ( k n π + x ) = 2 n − 1 ∏ k = 1 n sin ⁡ ( k n π − x ) . {\displaystyle \sin(nx)=2^{n-1}\prod _{k=0}^{n-1}\sin \left({\frac {k}{n}}\pi +x\right)=2^{n-1}\prod _{k=1}^{n}\sin \left({\frac {k}{n}}\pi -x\right).} or written in terms of the chord function crd ⁡ x ≡ 2 sin ⁡ 1 2 x {\textstyle \operatorname {crd} x\equiv 2\sin {\tfrac {1}{2}}x} , crd ⁡ ( n x ) = ∏ k = 1 n crd ⁡ ( k n 2 π − x ) . {\displaystyle \operatorname {crd} (nx)=\prod _{k=1}^{n}\operatorname {crd} \left({\frac {k}{n}}2\pi -x\right).} This comes from the factorization of the polynomial z n − 1 {\textstyle z^{n}-1} into linear factors (cf. root of unity): For any complex z and an integer n > 0, z n − 1 = ∏ k = 1 n ( z − exp ⁡ ( k n 2 π i ) ) . {\displaystyle z^{n}-1=\prod _{k=1}^{n}\left(z-\exp {\Bigl (}{\frac {k}{n}}2\pi i{\Bigr )}\right).} == Linear combinations == For some purposes it is important to know that any linear combination of sine waves of the same period or frequency but different phase shifts is also a sine wave with the same period or frequency, but a different phase shift.
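A quick numerical illustration of this fact, using the amplitude and phase formulas given in the next subsection:

```python
import math

# a*cos(x) + b*sin(x) equals a single shifted cosine c*cos(x + phi).
a, b = 3.0, -2.0
c = math.copysign(math.hypot(a, b), a)   # c = sgn(a) * sqrt(a**2 + b**2)
phi = math.atan(-b / a)                  # phi = arctan(-b/a)
for x in (0.0, 1.0, 2.5):
    print(a * math.cos(x) + b * math.sin(x), c * math.cos(x + phi))
```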
This property is useful in sinusoid data fitting, because the measured or observed data are linearly related to the a and b unknowns of the in-phase and quadrature components basis below, resulting in a simpler Jacobian, compared to that of c {\displaystyle c} and φ {\displaystyle \varphi } . === Sine and cosine === The linear combination, or harmonic addition, of sine and cosine waves is equivalent to a single sine wave with a phase shift and scaled amplitude, a cos ⁡ x + b sin ⁡ x = c cos ⁡ ( x + φ ) {\displaystyle a\cos x+b\sin x=c\cos(x+\varphi )} where c {\displaystyle c} and φ {\displaystyle \varphi } are defined as follows: c = sgn ⁡ ( a ) a 2 + b 2 , φ = arctan ( − b / a ) , {\displaystyle {\begin{aligned}c&=\operatorname {sgn}(a){\sqrt {a^{2}+b^{2}}},\\\varphi &={\arctan }{\bigl (}{-b/a}{\bigr )},\end{aligned}}} given that a ≠ 0. {\displaystyle a\neq 0.} === Arbitrary phase shift === More generally, for arbitrary phase shifts, we have a sin ⁡ ( x + θ a ) + b sin ⁡ ( x + θ b ) = c sin ⁡ ( x + φ ) {\displaystyle a\sin(x+\theta _{a})+b\sin(x+\theta _{b})=c\sin(x+\varphi )} where c {\displaystyle c} and φ {\displaystyle \varphi } satisfy: c 2 = a 2 + b 2 + 2 a b cos ⁡ ( θ a − θ b ) , tan ⁡ φ = a sin ⁡ θ a + b sin ⁡ θ b a cos ⁡ θ a + b cos ⁡ θ b . {\displaystyle {\begin{aligned}c^{2}&=a^{2}+b^{2}+2ab\cos \left(\theta _{a}-\theta _{b}\right),\\\tan \varphi &={\frac {a\sin \theta _{a}+b\sin \theta _{b}}{a\cos \theta _{a}+b\cos \theta _{b}}}.\end{aligned}}} === More than two sinusoids === The general case reads ∑ i a i sin ⁡ ( x + θ i ) = a sin ⁡ ( x + θ ) , {\displaystyle \sum _{i}a_{i}\sin(x+\theta _{i})=a\sin(x+\theta ),} where a 2 = ∑ i , j a i a j cos ⁡ ( θ i − θ j ) {\displaystyle a^{2}=\sum _{i,j}a_{i}a_{j}\cos(\theta _{i}-\theta _{j})} and tan ⁡ θ = ∑ i a i sin ⁡ θ i ∑ i a i cos ⁡ θ i . {\displaystyle \tan \theta ={\frac {\sum _{i}a_{i}\sin \theta _{i}}{\sum _{i}a_{i}\cos \theta _{i}}}.} == Lagrange's trigonometric identities == These identities, named after Joseph Louis Lagrange, are: ∑ k = 0 n sin ⁡ k θ = cos ⁡ 1 2 θ − cos ⁡ ( ( n + 1 2 ) θ ) 2 sin ⁡ 1 2 θ ∑ k = 1 n cos ⁡ k θ = − sin ⁡ 1 2 θ + sin ⁡ ( ( n + 1 2 ) θ ) 2 sin ⁡ 1 2 θ {\displaystyle {\begin{aligned}\sum _{k=0}^{n}\sin k\theta &={\frac {\cos {\tfrac {1}{2}}\theta -\cos \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{2\sin {\tfrac {1}{2}}\theta }}\\[5pt]\sum _{k=1}^{n}\cos k\theta &={\frac {-\sin {\tfrac {1}{2}}\theta +\sin \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{2\sin {\tfrac {1}{2}}\theta }}\end{aligned}}} for θ ≢ 0 ( mod 2 π ) . {\displaystyle \theta \not \equiv 0{\pmod {2\pi }}.} A related function is the Dirichlet kernel: D n ( θ ) = 1 + 2 ∑ k = 1 n cos ⁡ k θ = sin ⁡ ( ( n + 1 2 ) θ ) sin ⁡ 1 2 θ . {\displaystyle D_{n}(\theta )=1+2\sum _{k=1}^{n}\cos k\theta ={\frac {\sin \left(\left(n+{\tfrac {1}{2}}\right)\theta \right)}{\sin {\tfrac {1}{2}}\theta }}.} A similar identity is ∑ k = 1 n cos ⁡ ( 2 k − 1 ) α = sin ⁡ ( 2 n α ) 2 sin ⁡ α . {\displaystyle \sum _{k=1}^{n}\cos(2k-1)\alpha ={\frac {\sin(2n\alpha )}{2\sin \alpha }}.} The proof is the following. By using the angle sum and difference identities, sin ⁡ ( A + B ) − sin ⁡ ( A − B ) = 2 cos ⁡ A sin ⁡ B . 
{\displaystyle \sin(A+B)-\sin(A-B)=2\cos A\sin B.} Now consider the sum 2 sin ⁡ α ∑ k = 1 n cos ⁡ ( 2 k − 1 ) α = 2 sin ⁡ α cos ⁡ α + 2 sin ⁡ α cos ⁡ 3 α + 2 sin ⁡ α cos ⁡ 5 α + … + 2 sin ⁡ α cos ⁡ ( 2 n − 1 ) α {\displaystyle 2\sin \alpha \sum _{k=1}^{n}\cos(2k-1)\alpha =2\sin \alpha \cos \alpha +2\sin \alpha \cos 3\alpha +2\sin \alpha \cos 5\alpha +\ldots +2\sin \alpha \cos(2n-1)\alpha } By the above identity, this can be rewritten as a telescoping sum: 2 sin ⁡ α ∑ k = 1 n cos ⁡ ( 2 k − 1 ) α = ∑ k = 1 n ( sin ⁡ ( 2 k α ) − sin ⁡ ( 2 ( k − 1 ) α ) ) = ( sin ⁡ 2 α − sin ⁡ 0 ) + ( sin ⁡ 4 α − sin ⁡ 2 α ) + ( sin ⁡ 6 α − sin ⁡ 4 α ) + … + ( sin ⁡ ( 2 n α ) − sin ⁡ ( 2 ( n − 1 ) α ) ) = sin ⁡ ( 2 n α ) . {\displaystyle {\begin{aligned}&2\sin \alpha \sum _{k=1}^{n}\cos(2k-1)\alpha \\&\quad =\sum _{k=1}^{n}(\sin(2k\alpha )-\sin(2(k-1)\alpha ))\\&\quad =(\sin 2\alpha -\sin 0)+(\sin 4\alpha -\sin 2\alpha )+(\sin 6\alpha -\sin 4\alpha )+\ldots +(\sin(2n\alpha )-\sin(2(n-1)\alpha ))\\&\quad =\sin(2n\alpha ).\end{aligned}}} Dividing both sides by 2 sin ⁡ α {\displaystyle 2\sin \alpha } completes the proof. == Certain linear fractional transformations == If f ( x ) {\displaystyle f(x)} is given by the linear fractional transformation f ( x ) = ( cos ⁡ α ) x − sin ⁡ α ( sin ⁡ α ) x + cos ⁡ α , {\displaystyle f(x)={\frac {(\cos \alpha )x-\sin \alpha }{(\sin \alpha )x+\cos \alpha }},} and similarly g ( x ) = ( cos ⁡ β ) x − sin ⁡ β ( sin ⁡ β ) x + cos ⁡ β , {\displaystyle g(x)={\frac {(\cos \beta )x-\sin \beta }{(\sin \beta )x+\cos \beta }},} then f ( g ( x ) ) = g ( f ( x ) ) = ( cos ⁡ ( α + β ) ) x − sin ⁡ ( α + β ) ( sin ⁡ ( α + β ) ) x + cos ⁡ ( α + β ) . {\displaystyle f{\big (}g(x){\big )}=g{\big (}f(x){\big )}={\frac {{\big (}\cos(\alpha +\beta ){\big )}x-\sin(\alpha +\beta )}{{\big (}\sin(\alpha +\beta ){\big )}x+\cos(\alpha +\beta )}}.} More tersely stated, if for all α {\displaystyle \alpha } we let f α {\displaystyle f_{\alpha }} be what we called f {\displaystyle f} above, then f α ∘ f β = f α + β . {\displaystyle f_{\alpha }\circ f_{\beta }=f_{\alpha +\beta }.} If x {\displaystyle x} is the slope of a line, then f ( x ) {\displaystyle f(x)} is the slope of its rotation through an angle of − α . {\displaystyle -\alpha .} == Relation to the complex exponential function == Euler's formula states that, for any real number x: e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where i is the imaginary unit. Substituting −x for x gives us: e − i x = cos ⁡ ( − x ) + i sin ⁡ ( − x ) = cos ⁡ x − i sin ⁡ x . {\displaystyle e^{-ix}=\cos(-x)+i\sin(-x)=\cos x-i\sin x.} These two equations can be used to solve for cosine and sine in terms of the exponential function. Specifically, cos ⁡ x = e i x + e − i x 2 {\displaystyle \cos x={\frac {e^{ix}+e^{-ix}}{2}}} sin ⁡ x = e i x − e − i x 2 i {\displaystyle \sin x={\frac {e^{ix}-e^{-ix}}{2i}}} These formulae are useful for proving many other trigonometric identities. For example, that ei(θ+φ) = eiθ eiφ means that cos ⁡ ( θ + φ ) + i sin ⁡ ( θ + φ ) = ( cos ⁡ θ + i sin ⁡ θ ) ( cos ⁡ φ + i sin ⁡ φ ) . {\displaystyle \cos(\theta +\varphi )+i\sin(\theta +\varphi )=(\cos \theta +i\sin \theta )(\cos \varphi +i\sin \varphi ).} That the real part of the left hand side equals the real part of the right hand side is an angle addition formula for cosine. The equality of the imaginary parts gives an angle addition formula for sine. The following table expresses the trigonometric functions and their inverses in terms of the exponential function and the complex logarithm. == Relation to complex hyperbolic functions == Trigonometric functions may be deduced from hyperbolic functions with complex arguments. 
The formulae for the relations are shown below. sin ⁡ x = − i sinh ⁡ ( i x ) cos ⁡ x = cosh ⁡ ( i x ) tan ⁡ x = − i tanh ⁡ ( i x ) cot ⁡ x = i coth ⁡ ( i x ) sec ⁡ x = sech ⁡ ( i x ) csc ⁡ x = i csch ⁡ ( i x ) {\displaystyle {\begin{aligned}\sin x&=-i\sinh(ix)\\\cos x&=\cosh(ix)\\\tan x&=-i\tanh(ix)\\\cot x&=i\coth(ix)\\\sec x&=\operatorname {sech} (ix)\\\csc x&=i\operatorname {csch} (ix)\\\end{aligned}}} == Series expansion == When using a power series expansion to define trigonometric functions, the following identities are obtained: sin ⁡ x = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n + 1 ( 2 n + 1 ) ! , {\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n+1}}{(2n+1)!}},} cos ⁡ x = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n x 2 n ( 2 n ) ! . {\displaystyle \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n}}{(2n)!}}.} == Infinite product formulae == For applications to special functions, the following infinite product formulae for trigonometric functions are useful: sin ⁡ x = x ∏ n = 1 ∞ ( 1 − x 2 π 2 n 2 ) , cos ⁡ x = ∏ n = 1 ∞ ( 1 − x 2 π 2 ( n − 1 2 ) 2 ) , sinh ⁡ x = x ∏ n = 1 ∞ ( 1 + x 2 π 2 n 2 ) , cosh ⁡ x = ∏ n = 1 ∞ ( 1 + x 2 π 2 ( n − 1 2 ) 2 ) . {\displaystyle {\begin{aligned}\sin x&=x\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{\pi ^{2}n^{2}}}\right),&\cos x&=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{\pi ^{2}\left(n-{\frac {1}{2}}\right)\!{\vphantom {)}}^{2}}}\right),\\[10mu]\sinh x&=x\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{\pi ^{2}n^{2}}}\right),&\cosh x&=\prod _{n=1}^{\infty }\left(1+{\frac {x^{2}}{\pi ^{2}\left(n-{\frac {1}{2}}\right)\!{\vphantom {)}}^{2}}}\right).\end{aligned}}} == Inverse trigonometric functions == The following identities give the result of composing a trigonometric function with an inverse trigonometric function. sin ⁡ ( arcsin ⁡ x ) = x cos ⁡ ( arcsin ⁡ x ) = 1 − x 2 tan ⁡ ( arcsin ⁡ x ) = x 1 − x 2 sin ⁡ ( arccos ⁡ x ) = 1 − x 2 cos ⁡ ( arccos ⁡ x ) = x tan ⁡ ( arccos ⁡ x ) = 1 − x 2 x sin ⁡ ( arctan ⁡ x ) = x 1 + x 2 cos ⁡ ( arctan ⁡ x ) = 1 1 + x 2 tan ⁡ ( arctan ⁡ x ) = x sin ⁡ ( arccsc ⁡ x ) = 1 x cos ⁡ ( arccsc ⁡ x ) = x 2 − 1 x tan ⁡ ( arccsc ⁡ x ) = 1 x 2 − 1 sin ⁡ ( arcsec ⁡ x ) = x 2 − 1 x cos ⁡ ( arcsec ⁡ x ) = 1 x tan ⁡ ( arcsec ⁡ x ) = x 2 − 1 sin ⁡ ( arccot ⁡ x ) = 1 1 + x 2 cos ⁡ ( arccot ⁡ x ) = x 1 + x 2 tan ⁡ ( arccot ⁡ x ) = 1 x {\displaystyle {\begin{aligned}\sin(\arcsin x)&=x&\cos(\arcsin x)&={\sqrt {1-x^{2}}}&\tan(\arcsin x)&={\frac {x}{\sqrt {1-x^{2}}}}\\\sin(\arccos x)&={\sqrt {1-x^{2}}}&\cos(\arccos x)&=x&\tan(\arccos x)&={\frac {\sqrt {1-x^{2}}}{x}}\\\sin(\arctan x)&={\frac {x}{\sqrt {1+x^{2}}}}&\cos(\arctan x)&={\frac {1}{\sqrt {1+x^{2}}}}&\tan(\arctan x)&=x\\\sin(\operatorname {arccsc} x)&={\frac {1}{x}}&\cos(\operatorname {arccsc} x)&={\frac {\sqrt {x^{2}-1}}{x}}&\tan(\operatorname {arccsc} x)&={\frac {1}{\sqrt {x^{2}-1}}}\\\sin(\operatorname {arcsec} x)&={\frac {\sqrt {x^{2}-1}}{x}}&\cos(\operatorname {arcsec} x)&={\frac {1}{x}}&\tan(\operatorname {arcsec} x)&={\sqrt {x^{2}-1}}\\\sin(\operatorname {arccot} x)&={\frac {1}{\sqrt {1+x^{2}}}}&\cos(\operatorname {arccot} x)&={\frac {x}{\sqrt {1+x^{2}}}}&\tan(\operatorname {arccot} x)&={\frac {1}{x}}\\\end{aligned}}} Taking the multiplicative inverse of both sides of each equation above results in the equations for csc = 1 sin , sec = 1 cos , and cot = 1 tan . 
{\displaystyle \csc ={\frac {1}{\sin }},\;\sec ={\frac {1}{\cos }},{\text{ and }}\cot ={\frac {1}{\tan }}.} In each case, the right-hand side of the resulting formula is simply the reciprocal of the corresponding right-hand side above. For example, the equation for cot ⁡ ( arcsin ⁡ x ) {\displaystyle \cot(\arcsin x)} is: cot ⁡ ( arcsin ⁡ x ) = 1 tan ⁡ ( arcsin ⁡ x ) = 1 x 1 − x 2 = 1 − x 2 x {\displaystyle \cot(\arcsin x)={\frac {1}{\tan(\arcsin x)}}={\frac {1}{\frac {x}{\sqrt {1-x^{2}}}}}={\frac {\sqrt {1-x^{2}}}{x}}} while the equations for csc ⁡ ( arccos ⁡ x ) {\displaystyle \csc(\arccos x)} and sec ⁡ ( arccos ⁡ x ) {\displaystyle \sec(\arccos x)} are: csc ⁡ ( arccos ⁡ x ) = 1 sin ⁡ ( arccos ⁡ x ) = 1 1 − x 2 and sec ⁡ ( arccos ⁡ x ) = 1 cos ⁡ ( arccos ⁡ x ) = 1 x . {\displaystyle \csc(\arccos x)={\frac {1}{\sin(\arccos x)}}={\frac {1}{\sqrt {1-x^{2}}}}\qquad {\text{ and }}\quad \sec(\arccos x)={\frac {1}{\cos(\arccos x)}}={\frac {1}{x}}.} The following identities are implied by the reflection identities. They hold whenever x , r , s , − x , − r , and − s {\displaystyle x,r,s,-x,-r,{\text{ and }}-s} are in the domains of the relevant functions. π 2 = arcsin ⁡ ( x ) + arccos ⁡ ( x ) = arctan ⁡ ( r ) + arccot ⁡ ( r ) = arcsec ⁡ ( s ) + arccsc ⁡ ( s ) π = arccos ⁡ ( x ) + arccos ⁡ ( − x ) = arccot ⁡ ( r ) + arccot ⁡ ( − r ) = arcsec ⁡ ( s ) + arcsec ⁡ ( − s ) 0 = arcsin ⁡ ( x ) + arcsin ⁡ ( − x ) = arctan ⁡ ( r ) + arctan ⁡ ( − r ) = arccsc ⁡ ( s ) + arccsc ⁡ ( − s ) {\displaystyle {\begin{alignedat}{9}{\frac {\pi }{2}}~&=~\arcsin(x)&&+\arccos(x)~&&=~\arctan(r)&&+\operatorname {arccot}(r)~&&=~\operatorname {arcsec}(s)&&+\operatorname {arccsc}(s)\\[0.4ex]\pi ~&=~\arccos(x)&&+\arccos(-x)~&&=~\operatorname {arccot}(r)&&+\operatorname {arccot}(-r)~&&=~\operatorname {arcsec}(s)&&+\operatorname {arcsec}(-s)\\[0.4ex]0~&=~\arcsin(x)&&+\arcsin(-x)~&&=~\arctan(r)&&+\arctan(-r)~&&=~\operatorname {arccsc}(s)&&+\operatorname {arccsc}(-s)\\[1.0ex]\end{alignedat}}} Also, arctan ⁡ x + arctan ⁡ 1 x = { π 2 , if x > 0 − π 2 , if x < 0 arccot ⁡ x + arccot ⁡ 1 x = { π 2 , if x > 0 3 π 2 , if x < 0 {\displaystyle {\begin{aligned}\arctan x+\arctan {\dfrac {1}{x}}&={\begin{cases}{\frac {\pi }{2}},&{\text{if }}x>0\\-{\frac {\pi }{2}},&{\text{if }}x<0\end{cases}}\\\operatorname {arccot} x+\operatorname {arccot} {\dfrac {1}{x}}&={\begin{cases}{\frac {\pi }{2}},&{\text{if }}x>0\\{\frac {3\pi }{2}},&{\text{if }}x<0\end{cases}}\\\end{aligned}}} arccos ⁡ 1 x = arcsec ⁡ x and arcsec ⁡ 1 x = arccos ⁡ x {\displaystyle \arccos {\frac {1}{x}}=\operatorname {arcsec} x\qquad {\text{ and }}\qquad \operatorname {arcsec} {\frac {1}{x}}=\arccos x} arcsin ⁡ 1 x = arccsc ⁡ x and arccsc ⁡ 1 x = arcsin ⁡ x {\displaystyle \arcsin {\frac {1}{x}}=\operatorname {arccsc} x\qquad {\text{ and }}\qquad \operatorname {arccsc} {\frac {1}{x}}=\arcsin x} The arctangent function can be expanded as a finite telescoping sum of arctangents: arctan ⁡ ( n x ) = ∑ m = 1 n arctan ⁡ x 1 + ( m − 1 ) m x 2 {\displaystyle \arctan(nx)=\sum _{m=1}^{n}\arctan {\frac {x}{1+(m-1)mx^{2}}}} == Identities without variables == In terms of the arctangent function we have arctan ⁡ 1 2 = arctan ⁡ 1 3 + arctan ⁡ 1 7 . {\displaystyle \arctan {\frac {1}{2}}=\arctan {\frac {1}{3}}+\arctan {\frac {1}{7}}.} The curious identity known as Morrie's law, cos ⁡ 20 ∘ ⋅ cos ⁡ 40 ∘ ⋅ cos ⁡ 80 ∘ = 1 8 , {\displaystyle \cos 20^{\circ }\cdot \cos 40^{\circ }\cdot \cos 80^{\circ }={\frac {1}{8}},} is a special case of an identity that contains one variable: ∏ j = 0 k − 1 cos ⁡ ( 2 j x ) = sin ⁡ ( 2 k x ) 2 k sin ⁡ x . 
{\displaystyle \prod _{j=0}^{k-1}\cos \left(2^{j}x\right)={\frac {\sin \left(2^{k}x\right)}{2^{k}\sin x}}.} Similarly, sin ⁡ 20 ∘ ⋅ sin ⁡ 40 ∘ ⋅ sin ⁡ 80 ∘ = 3 8 {\displaystyle \sin 20^{\circ }\cdot \sin 40^{\circ }\cdot \sin 80^{\circ }={\frac {\sqrt {3}}{8}}} is a special case of an identity with x = 20 ∘ {\displaystyle x=20^{\circ }} : sin ⁡ x ⋅ sin ⁡ ( 60 ∘ − x ) ⋅ sin ⁡ ( 60 ∘ + x ) = sin ⁡ 3 x 4 . {\displaystyle \sin x\cdot \sin \left(60^{\circ }-x\right)\cdot \sin \left(60^{\circ }+x\right)={\frac {\sin 3x}{4}}.} For the case x = 15 ∘ {\displaystyle x=15^{\circ }} , sin ⁡ 15 ∘ ⋅ sin ⁡ 45 ∘ ⋅ sin ⁡ 75 ∘ = 2 8 , sin ⁡ 15 ∘ ⋅ sin ⁡ 75 ∘ = 1 4 . {\displaystyle {\begin{aligned}\sin 15^{\circ }\cdot \sin 45^{\circ }\cdot \sin 75^{\circ }&={\frac {\sqrt {2}}{8}},\\\sin 15^{\circ }\cdot \sin 75^{\circ }&={\frac {1}{4}}.\end{aligned}}} For the case x = 10 ∘ {\displaystyle x=10^{\circ }} , sin ⁡ 10 ∘ ⋅ sin ⁡ 50 ∘ ⋅ sin ⁡ 70 ∘ = 1 8 . {\displaystyle \sin 10^{\circ }\cdot \sin 50^{\circ }\cdot \sin 70^{\circ }={\frac {1}{8}}.} The same cosine identity is cos ⁡ x ⋅ cos ⁡ ( 60 ∘ − x ) ⋅ cos ⁡ ( 60 ∘ + x ) = cos ⁡ 3 x 4 . {\displaystyle \cos x\cdot \cos \left(60^{\circ }-x\right)\cdot \cos \left(60^{\circ }+x\right)={\frac {\cos 3x}{4}}.} Similarly, cos ⁡ 10 ∘ ⋅ cos ⁡ 50 ∘ ⋅ cos ⁡ 70 ∘ = 3 8 , cos ⁡ 15 ∘ ⋅ cos ⁡ 45 ∘ ⋅ cos ⁡ 75 ∘ = 2 8 , cos ⁡ 15 ∘ ⋅ cos ⁡ 75 ∘ = 1 4 . {\displaystyle {\begin{aligned}\cos 10^{\circ }\cdot \cos 50^{\circ }\cdot \cos 70^{\circ }&={\frac {\sqrt {3}}{8}},\\\cos 15^{\circ }\cdot \cos 45^{\circ }\cdot \cos 75^{\circ }&={\frac {\sqrt {2}}{8}},\\\cos 15^{\circ }\cdot \cos 75^{\circ }&={\frac {1}{4}}.\end{aligned}}} Similarly, tan ⁡ 50 ∘ ⋅ tan ⁡ 60 ∘ ⋅ tan ⁡ 70 ∘ = tan ⁡ 80 ∘ , tan ⁡ 40 ∘ ⋅ tan ⁡ 30 ∘ ⋅ tan ⁡ 20 ∘ = tan ⁡ 10 ∘ . {\displaystyle {\begin{aligned}\tan 50^{\circ }\cdot \tan 60^{\circ }\cdot \tan 70^{\circ }&=\tan 80^{\circ },\\\tan 40^{\circ }\cdot \tan 30^{\circ }\cdot \tan 20^{\circ }&=\tan 10^{\circ }.\end{aligned}}} The following is perhaps not as readily generalized to an identity containing variables (but see explanation below): cos ⁡ 24 ∘ + cos ⁡ 48 ∘ + cos ⁡ 96 ∘ + cos ⁡ 168 ∘ = 1 2 . {\displaystyle \cos 24^{\circ }+\cos 48^{\circ }+\cos 96^{\circ }+\cos 168^{\circ }={\frac {1}{2}}.} Degree measure ceases to be more felicitous than radian measure when we consider this identity with 21 in the denominators: cos ⁡ 2 π 21 + cos ⁡ ( 2 ⋅ 2 π 21 ) + cos ⁡ ( 4 ⋅ 2 π 21 ) + cos ⁡ ( 5 ⋅ 2 π 21 ) + cos ⁡ ( 8 ⋅ 2 π 21 ) + cos ⁡ ( 10 ⋅ 2 π 21 ) = 1 2 . {\displaystyle \cos {\frac {2\pi }{21}}+\cos \left(2\cdot {\frac {2\pi }{21}}\right)+\cos \left(4\cdot {\frac {2\pi }{21}}\right)+\cos \left(5\cdot {\frac {2\pi }{21}}\right)+\cos \left(8\cdot {\frac {2\pi }{21}}\right)+\cos \left(10\cdot {\frac {2\pi }{21}}\right)={\frac {1}{2}}.} The factors 1, 2, 4, 5, 8, 10 may start to make the pattern clear: they are those integers less than ⁠21/2⁠ that are relatively prime to (or have no prime factors in common with) 21. The last several examples are corollaries of a basic fact about the irreducible cyclotomic polynomials: the cosines are the real parts of the zeroes of those polynomials; the sum of the zeroes is the Möbius function evaluated at (in the very last case above) 21; only half of the zeroes are present above. The two identities preceding this last one arise in the same fashion with 21 replaced by 10 and 15, respectively. 
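These closed-form values are easy to spot-check numerically. The short sketch below (Python; variable names are illustrative) verifies Morrie's law and the cosine sum over the integers less than 21/2 that are coprime to 21:

```python
import math

# Morrie's law: cos 20 deg * cos 40 deg * cos 80 deg = 1/8
morrie = math.prod(math.cos(math.radians(d)) for d in (20, 40, 80))
assert math.isclose(morrie, 1 / 8)

# cos(2*pi*k/21) summed over k < 21/2 with gcd(k, 21) = 1 equals 1/2
ks = [k for k in range(1, 11) if math.gcd(k, 21) == 1]  # [1, 2, 4, 5, 8, 10]
total = sum(math.cos(2 * math.pi * k / 21) for k in ks)
assert math.isclose(total, 1 / 2)
```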
Other cosine identities include: 2 cos ⁡ π 3 = 1 , 2 cos ⁡ π 5 × 2 cos ⁡ 2 π 5 = 1 , 2 cos ⁡ π 7 × 2 cos ⁡ 2 π 7 × 2 cos ⁡ 3 π 7 = 1 , {\displaystyle {\begin{aligned}2\cos {\frac {\pi }{3}}&=1,\\2\cos {\frac {\pi }{5}}\times 2\cos {\frac {2\pi }{5}}&=1,\\2\cos {\frac {\pi }{7}}\times 2\cos {\frac {2\pi }{7}}\times 2\cos {\frac {3\pi }{7}}&=1,\end{aligned}}} and so forth for all odd numbers, and hence cos ⁡ π 3 + cos ⁡ π 5 × cos ⁡ 2 π 5 + cos ⁡ π 7 × cos ⁡ 2 π 7 × cos ⁡ 3 π 7 + ⋯ = 1. {\displaystyle \cos {\frac {\pi }{3}}+\cos {\frac {\pi }{5}}\times \cos {\frac {2\pi }{5}}+\cos {\frac {\pi }{7}}\times \cos {\frac {2\pi }{7}}\times \cos {\frac {3\pi }{7}}+\dots =1.} Many of those curious identities stem from more general facts like the following: ∏ k = 1 n − 1 sin ⁡ k π n = n 2 n − 1 {\displaystyle \prod _{k=1}^{n-1}\sin {\frac {k\pi }{n}}={\frac {n}{2^{n-1}}}} and ∏ k = 1 n − 1 cos ⁡ k π n = sin ⁡ π n 2 2 n − 1 . {\displaystyle \prod _{k=1}^{n-1}\cos {\frac {k\pi }{n}}={\frac {\sin {\frac {\pi n}{2}}}{2^{n-1}}}.} Combining these gives us ∏ k = 1 n − 1 tan ⁡ k π n = n sin ⁡ π n 2 {\displaystyle \prod _{k=1}^{n-1}\tan {\frac {k\pi }{n}}={\frac {n}{\sin {\frac {\pi n}{2}}}}} If n is an odd number ( n = 2 m + 1 {\displaystyle n=2m+1} ) we can make use of the symmetries to get ∏ k = 1 m tan ⁡ k π 2 m + 1 = 2 m + 1 {\displaystyle \prod _{k=1}^{m}\tan {\frac {k\pi }{2m+1}}={\sqrt {2m+1}}} The transfer function of the Butterworth low pass filter can be expressed in terms of polynomials and poles. By setting the frequency equal to the cutoff frequency, the following identity can be proved: ∏ k = 1 n sin ⁡ ( 2 k − 1 ) π 4 n = ∏ k = 1 n cos ⁡ ( 2 k − 1 ) π 4 n = 2 2 n {\displaystyle \prod _{k=1}^{n}\sin {\frac {\left(2k-1\right)\pi }{4n}}=\prod _{k=1}^{n}\cos {\frac {\left(2k-1\right)\pi }{4n}}={\frac {\sqrt {2}}{2^{n}}}} === Computing π === An efficient way to compute π to a large number of digits is based on the following identity without variables, due to Machin. This is known as a Machin-like formula: π 4 = 4 arctan ⁡ 1 5 − arctan ⁡ 1 239 {\displaystyle {\frac {\pi }{4}}=4\arctan {\frac {1}{5}}-\arctan {\frac {1}{239}}} or, alternatively, by using an identity of Leonhard Euler: π 4 = 5 arctan ⁡ 1 7 + 2 arctan ⁡ 3 79 {\displaystyle {\frac {\pi }{4}}=5\arctan {\frac {1}{7}}+2\arctan {\frac {3}{79}}} or by using Pythagorean triples: π = arccos ⁡ 4 5 + arccos ⁡ 5 13 + arccos ⁡ 16 65 = arcsin ⁡ 3 5 + arcsin ⁡ 12 13 + arcsin ⁡ 63 65 . {\displaystyle \pi =\arccos {\frac {4}{5}}+\arccos {\frac {5}{13}}+\arccos {\frac {16}{65}}=\arcsin {\frac {3}{5}}+\arcsin {\frac {12}{13}}+\arcsin {\frac {63}{65}}.} Others include: π 4 = arctan ⁡ 1 2 + arctan ⁡ 1 3 , {\displaystyle {\frac {\pi }{4}}=\arctan {\frac {1}{2}}+\arctan {\frac {1}{3}},} π = arctan ⁡ 1 + arctan ⁡ 2 + arctan ⁡ 3 , {\displaystyle \pi =\arctan 1+\arctan 2+\arctan 3,} π 4 = 2 arctan ⁡ 1 3 + arctan ⁡ 1 7 . {\displaystyle {\frac {\pi }{4}}=2\arctan {\frac {1}{3}}+\arctan {\frac {1}{7}}.} Generally, for numbers t1, ..., tn−1 ∈ (−1, 1) for which θn = Σn−1k=1 arctan tk ∈ (π/4, 3π/4), let tn = tan(π/2 − θn) = cot θn. This last expression can be computed directly using the formula for the cotangent of a sum of angles whose tangents are t1, ..., tn−1 and its value will be in (−1, 1). In particular, the computed tn will be rational whenever all the t1, ..., tn−1 values are rational. 
With these values, π 2 = ∑ k = 1 n arctan ⁡ ( t k ) π = ∑ k = 1 n sgn ⁡ ( t k ) arccos ⁡ ( 1 − t k 2 1 + t k 2 ) π = ∑ k = 1 n arcsin ⁡ ( 2 t k 1 + t k 2 ) π = ∑ k = 1 n arctan ⁡ ( 2 t k 1 − t k 2 ) , {\displaystyle {\begin{aligned}{\frac {\pi }{2}}&=\sum _{k=1}^{n}\arctan(t_{k})\\\pi &=\sum _{k=1}^{n}\operatorname {sgn}(t_{k})\arccos \left({\frac {1-t_{k}^{2}}{1+t_{k}^{2}}}\right)\\\pi &=\sum _{k=1}^{n}\arcsin \left({\frac {2t_{k}}{1+t_{k}^{2}}}\right)\\\pi &=\sum _{k=1}^{n}\arctan \left({\frac {2t_{k}}{1-t_{k}^{2}}}\right)\,,\end{aligned}}} where in all but the first expression, we have used tangent half-angle formulae. The first two formulae work even if one or more of the tk values is not within (−1, 1). Note that if t = p/q is rational, then the (2t, 1 − t2, 1 + t2) values in the above formulae are proportional to the Pythagorean triple (2pq, q2 − p2, q2 + p2). For example, for n = 3 terms, π 2 = arctan ⁡ ( a b ) + arctan ⁡ ( c d ) + arctan ⁡ ( b d − a c a d + b c ) {\displaystyle {\frac {\pi }{2}}=\arctan \left({\frac {a}{b}}\right)+\arctan \left({\frac {c}{d}}\right)+\arctan \left({\frac {bd-ac}{ad+bc}}\right)} for any a, b, c, d > 0. === An identity of Euclid === Euclid showed in Book XIII, Proposition 10 of his Elements that the area of the square on the side of a regular pentagon inscribed in a circle is equal to the sum of the areas of the squares on the sides of the regular hexagon and the regular decagon inscribed in the same circle. In the language of modern trigonometry, this says: sin 2 ⁡ 18 ∘ + sin 2 ⁡ 30 ∘ = sin 2 ⁡ 36 ∘ . {\displaystyle \sin ^{2}18^{\circ }+\sin ^{2}30^{\circ }=\sin ^{2}36^{\circ }.} Ptolemy used this proposition to compute some angles in his table of chords in Book I, chapter 11 of Almagest. == Composition of trigonometric functions == These identities involve a trigonometric function of a trigonometric function: cos ⁡ ( t sin ⁡ x ) = J 0 ( t ) + 2 ∑ k = 1 ∞ J 2 k ( t ) cos ⁡ ( 2 k x ) {\displaystyle \cos(t\sin x)=J_{0}(t)+2\sum _{k=1}^{\infty }J_{2k}(t)\cos(2kx)} sin ⁡ ( t sin ⁡ x ) = 2 ∑ k = 0 ∞ J 2 k + 1 ( t ) sin ⁡ ( ( 2 k + 1 ) x ) {\displaystyle \sin(t\sin x)=2\sum _{k=0}^{\infty }J_{2k+1}(t)\sin {\big (}(2k+1)x{\big )}} cos ⁡ ( t cos ⁡ x ) = J 0 ( t ) + 2 ∑ k = 1 ∞ ( − 1 ) k J 2 k ( t ) cos ⁡ ( 2 k x ) {\displaystyle \cos(t\cos x)=J_{0}(t)+2\sum _{k=1}^{\infty }(-1)^{k}J_{2k}(t)\cos(2kx)} sin ⁡ ( t cos ⁡ x ) = 2 ∑ k = 0 ∞ ( − 1 ) k J 2 k + 1 ( t ) cos ⁡ ( ( 2 k + 1 ) x ) {\displaystyle \sin(t\cos x)=2\sum _{k=0}^{\infty }(-1)^{k}J_{2k+1}(t)\cos {\big (}(2k+1)x{\big )}} where Ji are Bessel functions. == Further "conditional" identities for the case α + β + γ = 180° == A conditional trigonometric identity is a trigonometric identity that holds if specified conditions on the arguments to the trigonometric functions are satisfied. The following formulae apply to arbitrary plane triangles and follow from α + β + γ = 180 ∘ , {\displaystyle \alpha +\beta +\gamma =180^{\circ },} as long as the functions occurring in the formulae are well-defined (the latter applies only to the formulae in which tangents and cotangents occur). 
tan ⁡ α + tan ⁡ β + tan ⁡ γ = tan ⁡ α tan ⁡ β tan ⁡ γ 1 = cot ⁡ β cot ⁡ γ + cot ⁡ γ cot ⁡ α + cot ⁡ α cot ⁡ β cot ⁡ ( α 2 ) + cot ⁡ ( β 2 ) + cot ⁡ ( γ 2 ) = cot ⁡ ( α 2 ) cot ⁡ ( β 2 ) cot ⁡ ( γ 2 ) 1 = tan ⁡ ( β 2 ) tan ⁡ ( γ 2 ) + tan ⁡ ( γ 2 ) tan ⁡ ( α 2 ) + tan ⁡ ( α 2 ) tan ⁡ ( β 2 ) sin ⁡ α + sin ⁡ β + sin ⁡ γ = 4 cos ⁡ ( α 2 ) cos ⁡ ( β 2 ) cos ⁡ ( γ 2 ) − sin ⁡ α + sin ⁡ β + sin ⁡ γ = 4 cos ⁡ ( α 2 ) sin ⁡ ( β 2 ) sin ⁡ ( γ 2 ) cos ⁡ α + cos ⁡ β + cos ⁡ γ = 4 sin ⁡ ( α 2 ) sin ⁡ ( β 2 ) sin ⁡ ( γ 2 ) + 1 − cos ⁡ α + cos ⁡ β + cos ⁡ γ = 4 sin ⁡ ( α 2 ) cos ⁡ ( β 2 ) cos ⁡ ( γ 2 ) − 1 sin ⁡ ( 2 α ) + sin ⁡ ( 2 β ) + sin ⁡ ( 2 γ ) = 4 sin ⁡ α sin ⁡ β sin ⁡ γ − sin ⁡ ( 2 α ) + sin ⁡ ( 2 β ) + sin ⁡ ( 2 γ ) = 4 sin ⁡ α cos ⁡ β cos ⁡ γ cos ⁡ ( 2 α ) + cos ⁡ ( 2 β ) + cos ⁡ ( 2 γ ) = − 4 cos ⁡ α cos ⁡ β cos ⁡ γ − 1 − cos ⁡ ( 2 α ) + cos ⁡ ( 2 β ) + cos ⁡ ( 2 γ ) = − 4 cos ⁡ α sin ⁡ β sin ⁡ γ + 1 sin 2 ⁡ α + sin 2 ⁡ β + sin 2 ⁡ γ = 2 cos ⁡ α cos ⁡ β cos ⁡ γ + 2 − sin 2 ⁡ α + sin 2 ⁡ β + sin 2 ⁡ γ = 2 cos ⁡ α sin ⁡ β sin ⁡ γ cos 2 ⁡ α + cos 2 ⁡ β + cos 2 ⁡ γ = − 2 cos ⁡ α cos ⁡ β cos ⁡ γ + 1 − cos 2 ⁡ α + cos 2 ⁡ β + cos 2 ⁡ γ = − 2 cos ⁡ α sin ⁡ β sin ⁡ γ + 1 sin 2 ⁡ ( 2 α ) + sin 2 ⁡ ( 2 β ) + sin 2 ⁡ ( 2 γ ) = − 2 cos ⁡ ( 2 α ) cos ⁡ ( 2 β ) cos ⁡ ( 2 γ ) + 2 cos 2 ⁡ ( 2 α ) + cos 2 ⁡ ( 2 β ) + cos 2 ⁡ ( 2 γ ) = 2 cos ⁡ ( 2 α ) cos ⁡ ( 2 β ) cos ⁡ ( 2 γ ) + 1 1 = sin 2 ⁡ ( α 2 ) + sin 2 ⁡ ( β 2 ) + sin 2 ⁡ ( γ 2 ) + 2 sin ⁡ ( α 2 ) sin ⁡ ( β 2 ) sin ⁡ ( γ 2 ) {\displaystyle {\begin{aligned}\tan \alpha +\tan \beta +\tan \gamma &=\tan \alpha \tan \beta \tan \gamma \\1&=\cot \beta \cot \gamma +\cot \gamma \cot \alpha +\cot \alpha \cot \beta \\\cot \left({\frac {\alpha }{2}}\right)+\cot \left({\frac {\beta }{2}}\right)+\cot \left({\frac {\gamma }{2}}\right)&=\cot \left({\frac {\alpha }{2}}\right)\cot \left({\frac {\beta }{2}}\right)\cot \left({\frac {\gamma }{2}}\right)\\1&=\tan \left({\frac {\beta }{2}}\right)\tan \left({\frac {\gamma }{2}}\right)+\tan \left({\frac {\gamma }{2}}\right)\tan \left({\frac {\alpha }{2}}\right)+\tan \left({\frac {\alpha }{2}}\right)\tan \left({\frac {\beta }{2}}\right)\\\sin \alpha +\sin \beta +\sin \gamma &=4\cos \left({\frac {\alpha }{2}}\right)\cos \left({\frac {\beta }{2}}\right)\cos \left({\frac {\gamma }{2}}\right)\\-\sin \alpha +\sin \beta +\sin \gamma &=4\cos \left({\frac {\alpha }{2}}\right)\sin \left({\frac {\beta }{2}}\right)\sin \left({\frac {\gamma }{2}}\right)\\\cos \alpha +\cos \beta +\cos \gamma &=4\sin \left({\frac {\alpha }{2}}\right)\sin \left({\frac {\beta }{2}}\right)\sin \left({\frac {\gamma }{2}}\right)+1\\-\cos \alpha +\cos \beta +\cos \gamma &=4\sin \left({\frac {\alpha }{2}}\right)\cos \left({\frac {\beta }{2}}\right)\cos \left({\frac {\gamma }{2}}\right)-1\\\sin(2\alpha )+\sin(2\beta )+\sin(2\gamma )&=4\sin \alpha \sin \beta \sin \gamma \\-\sin(2\alpha )+\sin(2\beta )+\sin(2\gamma )&=4\sin \alpha \cos \beta \cos \gamma \\\cos(2\alpha )+\cos(2\beta )+\cos(2\gamma )&=-4\cos \alpha \cos \beta \cos \gamma -1\\-\cos(2\alpha )+\cos(2\beta )+\cos(2\gamma )&=-4\cos \alpha \sin \beta \sin \gamma +1\\\sin ^{2}\alpha +\sin ^{2}\beta +\sin ^{2}\gamma &=2\cos \alpha \cos \beta \cos \gamma +2\\-\sin ^{2}\alpha +\sin ^{2}\beta +\sin ^{2}\gamma &=2\cos \alpha \sin \beta \sin \gamma \\\cos ^{2}\alpha +\cos ^{2}\beta +\cos ^{2}\gamma &=-2\cos \alpha \cos \beta \cos \gamma +1\\-\cos ^{2}\alpha +\cos ^{2}\beta +\cos ^{2}\gamma &=-2\cos \alpha \sin \beta \sin \gamma +1\\\sin ^{2}(2\alpha )+\sin ^{2}(2\beta )+\sin ^{2}(2\gamma )&=-2\cos(2\alpha 
)\cos(2\beta )\cos(2\gamma )+2\\\cos ^{2}(2\alpha )+\cos ^{2}(2\beta )+\cos ^{2}(2\gamma )&=2\cos(2\alpha )\,\cos(2\beta )\,\cos(2\gamma )+1\\1&=\sin ^{2}\left({\frac {\alpha }{2}}\right)+\sin ^{2}\left({\frac {\beta }{2}}\right)+\sin ^{2}\left({\frac {\gamma }{2}}\right)+2\sin \left({\frac {\alpha }{2}}\right)\,\sin \left({\frac {\beta }{2}}\right)\,\sin \left({\frac {\gamma }{2}}\right)\end{aligned}}} == Historical shorthands == The versine, coversine, haversine, and exsecant were used in navigation. For example, the haversine formula was used to calculate the distance between two points on a sphere. They are rarely used today. == Miscellaneous == === Dirichlet kernel === The Dirichlet kernel Dn(x) is the function occurring on both sides of the next identity: 1 + 2 cos ⁡ x + 2 cos ⁡ ( 2 x ) + 2 cos ⁡ ( 3 x ) + ⋯ + 2 cos ⁡ ( n x ) = sin ⁡ ( ( n + 1 2 ) x ) sin ⁡ ( 1 2 x ) . {\displaystyle 1+2\cos x+2\cos(2x)+2\cos(3x)+\cdots +2\cos(nx)={\frac {\sin \left(\left(n+{\frac {1}{2}}\right)x\right)}{\sin \left({\frac {1}{2}}x\right)}}.} The convolution of any integrable function of period 2 π {\displaystyle 2\pi } with the Dirichlet kernel coincides with the function's n {\displaystyle n} th-degree Fourier approximation. The same holds for any measure or generalized function. === Tangent half-angle substitution === If we set t = tan ⁡ x 2 , {\displaystyle t=\tan {\frac {x}{2}},} then sin ⁡ x = 2 t 1 + t 2 ; cos ⁡ x = 1 − t 2 1 + t 2 ; e i x = 1 + i t 1 − i t ; d x = 2 d t 1 + t 2 , {\displaystyle \sin x={\frac {2t}{1+t^{2}}};\qquad \cos x={\frac {1-t^{2}}{1+t^{2}}};\qquad e^{ix}={\frac {1+it}{1-it}};\qquad dx={\frac {2\,dt}{1+t^{2}}},} where e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} sometimes abbreviated to cis x. When this substitution of t {\displaystyle t} for tan ⁠x/2⁠ is used in calculus, it follows that sin ⁡ x {\displaystyle \sin x} is replaced by ⁠2t/1 + t2⁠, cos ⁡ x {\displaystyle \cos x} is replaced by ⁠1 − t2/1 + t2⁠ and the differential dx is replaced by ⁠2 dt/1 + t2⁠. Thereby one converts rational functions of sin ⁡ x {\displaystyle \sin x} and cos ⁡ x {\displaystyle \cos x} to rational functions of t {\displaystyle t} in order to find their antiderivatives. === Viète's infinite product === cos ⁡ θ 2 ⋅ cos ⁡ θ 4 ⋅ cos ⁡ θ 8 ⋯ = ∏ n = 1 ∞ cos ⁡ θ 2 n = sin ⁡ θ θ = sinc ⁡ θ . {\displaystyle \cos {\frac {\theta }{2}}\cdot \cos {\frac {\theta }{4}}\cdot \cos {\frac {\theta }{8}}\cdots =\prod _{n=1}^{\infty }\cos {\frac {\theta }{2^{n}}}={\frac {\sin \theta }{\theta }}=\operatorname {sinc} \theta .} == See also == == References == == Bibliography == == External links == Values of sin and cos, expressed in surds, for integer multiples of 3° and of ⁠5+5/8⁠°, and for the same angles csc and sec and tan
Wikipedia/Trigonometric_equation
In mathematics, an implicit equation is a relation of the form R ( x 1 , … , x n ) = 0 , {\displaystyle R(x_{1},\dots ,x_{n})=0,} where R is a function of several variables (often a polynomial). For example, the implicit equation of the unit circle is x 2 + y 2 − 1 = 0. {\displaystyle x^{2}+y^{2}-1=0.} An implicit function is a function that is defined by an implicit equation relating one of the variables, considered as the value of the function, to the others, considered as the arguments.: 204–206  For example, the equation x 2 + y 2 − 1 = 0 {\displaystyle x^{2}+y^{2}-1=0} of the unit circle defines y as an implicit function of x if −1 ≤ x ≤ 1, and y is restricted to nonnegative values. The implicit function theorem provides conditions under which some kinds of implicit equations define implicit functions, namely those that are obtained by equating to zero multivariable functions that are continuously differentiable. == Examples == === Inverse functions === A common type of implicit function is an inverse function. Not all functions have a unique inverse function. If g is a function of x that has a unique inverse, then the inverse function of g, called g−1, is the unique function giving a solution of the equation y = g ( x ) {\displaystyle y=g(x)} for x in terms of y. This solution can then be written as x = g − 1 ( y ) . {\displaystyle x=g^{-1}(y)\,.} Defining g−1 as the inverse of g is an implicit definition. For some functions g, g−1(y) can be written out explicitly as a closed-form expression — for instance, if g(x) = 2x − 1, then g−1(y) = ⁠1/2⁠(y + 1). However, this is often impossible, or possible only by introducing a new notation (as in the product log example below). Intuitively, an inverse function is obtained from g by interchanging the roles of the dependent and independent variables. Example: The product log is an implicit function giving the solution for x of the equation y − xex = 0. === Algebraic functions === An algebraic function is a function that satisfies a polynomial equation whose coefficients are themselves polynomials. For example, an algebraic function in one variable x gives a solution for y of an equation a n ( x ) y n + a n − 1 ( x ) y n − 1 + ⋯ + a 0 ( x ) = 0 , {\displaystyle a_{n}(x)y^{n}+a_{n-1}(x)y^{n-1}+\cdots +a_{0}(x)=0\,,} where the coefficients ai(x) are polynomial functions of x. This algebraic function can be written as the right side of the solution equation y = f(x). Written like this, f is a multi-valued implicit function. Algebraic functions play an important role in mathematical analysis and algebraic geometry. A simple example of an algebraic function is given by the left side of the unit circle equation: x 2 + y 2 − 1 = 0 . {\displaystyle x^{2}+y^{2}-1=0\,.} Solving for y gives an explicit solution: y = ± 1 − x 2 . {\displaystyle y=\pm {\sqrt {1-x^{2}}}\,.} But even without specifying this explicit solution, it is possible to refer to the implicit solution of the unit circle equation as y = f(x), where f is the multi-valued implicit function. While explicit solutions can be found for equations that are quadratic, cubic, and quartic in y, the same is not in general true for quintic and higher degree equations, such as y 5 + 2 y 4 − 7 y 3 + 3 y 2 − 6 y − x = 0 . {\displaystyle y^{5}+2y^{4}-7y^{3}+3y^{2}-6y-x=0\,.} Nevertheless, one can still refer to the implicit solution y = f(x) involving the multi-valued implicit function f. 
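Although no closed-form expression exists, an implicit function such as the quintic above can still be evaluated numerically. The sketch below (plain Python; a minimal Newton iteration with illustrative names, not a production root-finder) computes one branch of y = f(x) at a sample point:

```python
def R(x, y):
    # left-hand side of y^5 + 2y^4 - 7y^3 + 3y^2 - 6y - x = 0
    return y**5 + 2*y**4 - 7*y**3 + 3*y**2 - 6*y - x

def Ry(y):
    # partial derivative of R with respect to y
    return 5*y**4 + 8*y**3 - 21*y**2 + 6*y - 6

def implicit_y(x, y0=0.0, tol=1e-12, max_iter=100):
    y = y0
    for _ in range(max_iter):
        step = R(x, y) / Ry(y)
        y -= step
        if abs(step) < tol:
            break
    return y

y = implicit_y(1.0)   # the branch of the multi-valued f near y = 0
print(y, R(1.0, y))   # the residual R(1, y) should be ~0
```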
== Caveats == Not every equation R(x, y) = 0 implies a graph of a single-valued function, the circle equation being one prominent example. Another example is an implicit function given by x − C(y) = 0 where C is a cubic polynomial having a "hump" in its graph. Thus, for an implicit function to be a true (single-valued) function it might be necessary to use just part of the graph. An implicit function can sometimes be successfully defined as a true function only after "zooming in" on some part of the x-axis and "cutting away" some unwanted function branches. Then an equation expressing y as an implicit function of the other variables can be written. The defining equation R(x, y) = 0 can also have other pathologies. For example, the equation x = 0 does not imply a function f(x) giving solutions for y at all; it is a vertical line. In order to avoid a problem like this, various constraints are frequently imposed on the allowable sorts of equations or on the domain. The implicit function theorem provides a uniform way of handling these sorts of pathologies. == Implicit differentiation == In calculus, a method called implicit differentiation makes use of the chain rule to differentiate implicitly defined functions. To differentiate an implicit function y(x), defined by an equation R(x, y) = 0, it is not generally possible to solve it explicitly for y and then differentiate. Instead, one can totally differentiate R(x, y) = 0 with respect to x and y and then solve the resulting linear equation for ⁠dy/dx⁠ to explicitly get the derivative in terms of x and y. Even when it is possible to explicitly solve the original equation, the formula resulting from total differentiation is, in general, much simpler and easier to use. === Examples === ==== Example 1 ==== Consider y + x + 5 = 0 . {\displaystyle y+x+5=0\,.} This equation is easy to solve for y, giving y = − x − 5 , {\displaystyle y=-x-5\,,} where the right side is the explicit form of the function y(x). Differentiation then gives ⁠dy/dx⁠ = −1. Alternatively, one can totally differentiate the original equation: d y d x + d x d x + d d x ( 5 ) = 0 ; d y d x + 1 + 0 = 0 . {\displaystyle {\begin{aligned}{\frac {dy}{dx}}+{\frac {dx}{dx}}+{\frac {d}{dx}}(5)&=0\,;\\[6px]{\frac {dy}{dx}}+1+0&=0\,.\end{aligned}}} Solving for ⁠dy/dx⁠ gives d y d x = − 1 , {\displaystyle {\frac {dy}{dx}}=-1\,,} the same answer as obtained previously. ==== Example 2 ==== An example of an implicit function for which implicit differentiation is easier than using explicit differentiation is the function y(x) defined by the equation x 4 + 2 y 2 = 8 . {\displaystyle x^{4}+2y^{2}=8\,.} To differentiate this explicitly with respect to x, one has first to get y ( x ) = ± 8 − x 4 2 , {\displaystyle y(x)=\pm {\sqrt {\frac {8-x^{4}}{2}}}\,,} and then differentiate this function. This creates two derivatives: one for y ≥ 0 and another for y < 0. It is substantially easier to implicitly differentiate the original equation: 4 x 3 + 4 y d y d x = 0 , {\displaystyle 4x^{3}+4y{\frac {dy}{dx}}=0\,,} giving d y d x = − 4 x 3 4 y = − x 3 y . {\displaystyle {\frac {dy}{dx}}={\frac {-4x^{3}}{4y}}=-{\frac {x^{3}}{y}}\,.} ==== Example 3 ==== Often, it is difficult or impossible to solve explicitly for y, and implicit differentiation is the only feasible method of differentiation. An example is the equation y 5 − y = x . {\displaystyle y^{5}-y=x\,.} It is impossible to algebraically express y explicitly as a function of x, and therefore one cannot find ⁠dy/dx⁠ by explicit differentiation. 
Using the implicit method, ⁠dy/dx⁠ can be obtained by differentiating the equation to obtain 5 y 4 d y d x − d y d x = d x d x , {\displaystyle 5y^{4}{\frac {dy}{dx}}-{\frac {dy}{dx}}={\frac {dx}{dx}}\,,} where ⁠dx/dx⁠ = 1. Factoring out ⁠dy/dx⁠ shows that ( 5 y 4 − 1 ) d y d x = 1 , {\displaystyle \left(5y^{4}-1\right){\frac {dy}{dx}}=1\,,} which yields the result d y d x = 1 5 y 4 − 1 , {\displaystyle {\frac {dy}{dx}}={\frac {1}{5y^{4}-1}}\,,} which is defined for y ≠ ± 1 5 4 and y ≠ ± i 5 4 . {\displaystyle y\neq \pm {\frac {1}{\sqrt[{4}]{5}}}\quad {\text{and}}\quad y\neq \pm {\frac {i}{\sqrt[{4}]{5}}}\,.} === General formula for derivative of implicit function === If R(x, y) = 0, the derivative of the implicit function y(x) is given by: §11.5  d y d x = − ∂ R ∂ x ∂ R ∂ y = − R x R y , {\displaystyle {\frac {dy}{dx}}=-{\frac {\,{\frac {\partial R}{\partial x}}\,}{\frac {\partial R}{\partial y}}}=-{\frac {R_{x}}{R_{y}}}\,,} where Rx and Ry indicate the partial derivatives of R with respect to x and y. The above formula comes from using the generalized chain rule to obtain the total derivative — with respect to x — of both sides of R(x, y) = 0: ∂ R ∂ x d x d x + ∂ R ∂ y d y d x = 0 , {\displaystyle {\frac {\partial R}{\partial x}}{\frac {dx}{dx}}+{\frac {\partial R}{\partial y}}{\frac {dy}{dx}}=0\,,} hence ∂ R ∂ x + ∂ R ∂ y d y d x = 0 , {\displaystyle {\frac {\partial R}{\partial x}}+{\frac {\partial R}{\partial y}}{\frac {dy}{dx}}=0\,,} which, when solved for ⁠dy/dx⁠, gives the expression above. == Implicit function theorem == Let R(x, y) be a differentiable function of two variables, and (a, b) be a pair of real numbers such that R(a, b) = 0. If ⁠∂R/∂y⁠ ≠ 0, then R(x, y) = 0 defines an implicit function that is differentiable in some small enough neighbourhood of (a, b); in other words, there is a differentiable function f that is defined and differentiable in some neighbourhood of a, such that R(x, f(x)) = 0 for x in this neighbourhood. The condition ⁠∂R/∂y⁠ ≠ 0 means that (a, b) is a regular point of the implicit curve of implicit equation R(x, y) = 0 where the tangent is not vertical. In a less technical language, implicit functions exist and can be differentiated, if the curve has a non-vertical tangent.: §11.5  == In algebraic geometry == Consider a relation of the form R(x1, …, xn) = 0, where R is a multivariable polynomial. The set of the values of the variables that satisfy this relation is called an implicit curve if n = 2 and an implicit surface if n = 3. The implicit equations are the basis of algebraic geometry, whose basic subjects of study are the simultaneous solutions of several implicit equations whose left-hand sides are polynomials. These sets of simultaneous solutions are called affine algebraic sets. == In differential equations == The solutions of differential equations generally appear expressed by an implicit function. == Applications in economics == === Marginal rate of substitution === In economics, when the level set R(x, y) = 0 is an indifference curve for the quantities x and y consumed of two goods, the absolute value of the implicit derivative ⁠dy/dx⁠ is interpreted as the marginal rate of substitution of the two goods: how much more of y one must receive in order to be indifferent to a loss of one unit of x. 
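For a concrete indifference curve, the implicit derivative and hence the marginal rate of substitution can be computed symbolically. The sketch below uses the SymPy library and a made-up curve xy = 10 (an assumption for illustration, not an example from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
R = x * y - 10                          # indifference curve R(x, y) = 0, i.e. xy = 10
dy_dx = -sp.diff(R, x) / sp.diff(R, y)  # implicit derivative -Rx/Ry
mrs = sp.Abs(dy_dx)                     # marginal rate of substitution |dy/dx|
print(dy_dx)                            # -y/x
print(mrs.subs({x: 2, y: 5}))           # 5/2 units of y per unit of x at (2, 5)
```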
=== Marginal rate of technical substitution === Similarly, sometimes the level set R(L, K) is an isoquant showing various combinations of utilized quantities L of labor and K of physical capital each of which would result in the production of the same given quantity of output of some good. In this case the absolute value of the implicit derivative ⁠dK/dL⁠ is interpreted as the marginal rate of technical substitution between the two factors of production: how much more capital the firm must use to produce the same amount of output with one less unit of labor. === Optimization === Often in economic theory, some function such as a utility function or a profit function is to be maximized with respect to a choice vector x even though the objective function has not been restricted to any specific functional form. The implicit function theorem guarantees that the first-order conditions of the optimization define an implicit function for each element of the optimal vector x* of the choice vector x. When profit is being maximized, typically the resulting implicit functions are the labor demand function and the supply functions of various goods. When utility is being maximized, typically the resulting implicit functions are the labor supply function and the demand functions for various goods. Moreover, the influence of the problem's parameters on x* — the partial derivatives of the implicit function — can be expressed as total derivatives of the system of first-order conditions found using total differentiation. == See also == == References == == Further reading == Binmore, K. G. (1983). "Implicit Functions". Calculus. New York: Cambridge University Press. pp. 198–211. ISBN 0-521-28952-1. Rudin, Walter (1976). Principles of Mathematical Analysis. Boston: McGraw-Hill. pp. 223–228. ISBN 0-07-054235-X. Simon, Carl P.; Blume, Lawrence (1994). "Implicit Functions and Their Derivatives". Mathematics for Economists. New York: W. W. Norton. pp. 334–371. ISBN 0-393-95733-0. == External links == Archived at Ghostarchive and the Wayback Machine: "Implicit Differentiation, What's Going on Here?". 3Blue1Brown. Essence of Calculus. May 3, 2017 – via YouTube.
Wikipedia/Implicit_equation
In mathematics, sine and cosine are trigonometric functions of an angle. The sine and cosine of an acute angle are defined in the context of a right triangle: for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse. For an angle θ {\displaystyle \theta } , the sine and cosine functions are denoted as sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( θ ) {\displaystyle \cos(\theta )} . The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle. More modern definitions express the sine and cosine as infinite series, or as the solutions of certain differential equations, allowing their extension to arbitrary positive and negative values and even to complex numbers. The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period. == Elementary descriptions == === Right-angled triangle definition === To define the sine and cosine of an acute angle α {\displaystyle \alpha } , start with a right triangle that contains an angle of measure α {\displaystyle \alpha } ; in the accompanying figure, angle α {\displaystyle \alpha } in a right triangle A B C {\displaystyle ABC} is the angle of interest. The three sides of the triangle are named as follows: The opposite side is the side opposite to the angle of interest; in this case, it is a {\displaystyle a} . The hypotenuse is the side opposite the right angle; in this case, it is h {\displaystyle h} . The hypotenuse is always the longest side of a right-angled triangle. The adjacent side is the remaining side; in this case, it is b {\displaystyle b} . It forms a side of (and is adjacent to) both the angle of interest and the right angle. Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse: sin ⁡ ( α ) = opposite hypotenuse , cos ⁡ ( α ) = adjacent hypotenuse . {\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.} The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, a reciprocal of a tangent function. These functions can be formulated as: tan ⁡ ( θ ) = sin ⁡ ( θ ) cos ⁡ ( θ ) = opposite adjacent , cot ⁡ ( θ ) = 1 tan ⁡ ( θ ) = adjacent opposite , csc ⁡ ( θ ) = 1 sin ⁡ ( θ ) = hypotenuse opposite , sec ⁡ ( θ ) = 1 cos ⁡ ( θ ) = hypotenuse adjacent . 
{\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}} === Special angle measures === As stated, the values sin ⁡ ( α ) {\displaystyle \sin(\alpha )} and cos ⁡ ( α ) {\displaystyle \cos(\alpha )} appear to depend on the choice of a right triangle containing an angle of measure α {\displaystyle \alpha } . However, this is not the case: all such triangles are similar, and so the ratios are the same for each of them. For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is 2 {\displaystyle {\sqrt {2}}} ; therefore, sin ⁡ 45 ∘ = cos ⁡ 45 ∘ = 2 2 {\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}} . For the five standard angles 0, π/6, π/4, π/3, and π/2 (that is, 0°, 30°, 45°, 60°, and 90°), the sine takes the values 0, 1/2, √2/2, √3/2, and 1, respectively, whether the angle is given in degrees, radians, or another unit system; the cosine takes the same values in the reverse order, since cos ⁡ α = sin ⁡ ( π 2 − α ) {\textstyle \cos \alpha =\sin \left({\frac {\pi }{2}}-\alpha \right)} . The sine and cosine of angles other than those five can be obtained by using a calculator. === Laws === The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. Given a triangle A B C {\displaystyle ABC} with sides a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , and angles opposite those sides α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } , the law states, sin ⁡ α a = sin ⁡ β b = sin ⁡ γ c . {\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.} This is equivalent to the equality of the first three expressions below: a sin ⁡ α = b sin ⁡ β = c sin ⁡ γ = 2 R , {\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,} where R {\displaystyle R} is the triangle's circumradius. The law of cosines is useful for computing the length of an unknown side if two other sides and an angle are known. The law states, a 2 + b 2 − 2 a b cos ⁡ ( γ ) = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}} In the case where γ = π / 2 {\displaystyle \gamma =\pi /2} , so that cos ⁡ ( γ ) = 0 {\displaystyle \cos(\gamma )=0} , the resulting equation becomes the Pythagorean theorem. === Vector definition === The cross product and dot product are operations on two vectors in Euclidean vector space. The sine and cosine functions can be defined in terms of the cross product and dot product. If a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } are vectors, and θ {\displaystyle \theta } is the angle between a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } , then sine and cosine can be defined as: sin ⁡ ( θ ) = | a × b | | a | | b | , cos ⁡ ( θ ) = a ⋅ b | a | | b | . 
{\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}} == Analytic descriptions == === Unit circle definition === The sine and cosine functions may also be defined in a more general way by using the unit circle, a circle of radius one centered at the origin ( 0 , 0 ) {\displaystyle (0,0)} , formulated as the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} in the Cartesian coordinate system. Let a line through the origin intersect the unit circle, making an angle of θ {\displaystyle \theta } with the positive half of the x {\displaystyle x} -axis. The x {\displaystyle x} - and y {\displaystyle y} -coordinates of this point of intersection are equal to cos ⁡ ( θ ) {\displaystyle \cos(\theta )} and sin ⁡ ( θ ) {\displaystyle \sin(\theta )} , respectively; that is, sin ⁡ ( θ ) = y , cos ⁡ ( θ ) = x . {\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.} This definition is consistent with the right-angled triangle definition of sine and cosine when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the opposite side of the triangle, which is simply the y {\displaystyle y} -coordinate. A similar argument shows that the cosine of an angle equals the adjacent side, which is simply the x {\displaystyle x} -coordinate, when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} , so the two definitions agree on that domain. ==== Graph of a function and its elementary properties ==== Using the unit circle definition has the advantage that it allows the graphs of the sine and cosine functions to be drawn. This can be done by rotating a point counterclockwise along the circumference of the circle through an angle equal to the input θ > 0 {\displaystyle \theta >0} . For the sine function, if the input is θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , the point has rotated counterclockwise exactly onto the y {\displaystyle y} -axis. If θ = π {\displaystyle \theta =\pi } , the point is halfway around the circle. If θ = 2 π {\displaystyle \theta =2\pi } , the point has returned to its starting position. Because the point always lies on the unit circle, both sine and cosine functions have range − 1 ≤ y ≤ 1 {\displaystyle -1\leq y\leq 1} . Extending the angle to the whole real line corresponds to letting the point rotate continuously. The cosine function is handled similarly, except that its value is read off the x {\displaystyle x} -coordinate of the rotating point. In other words, both sine and cosine functions are periodic: adding a full turn of 2 π {\displaystyle 2\pi } to the angle leaves their values unchanged. Mathematically, sin ⁡ ( θ + 2 π ) = sin ⁡ ( θ ) , cos ⁡ ( θ + 2 π ) = cos ⁡ ( θ ) . {\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).} A function f {\displaystyle f} is said to be odd if f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} , and is said to be even if f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} . The sine function is odd, whereas the cosine function is even. The sine and cosine functions have the same shape, shifted relative to each other by π 2 {\textstyle {\frac {\pi }{2}}} . This phase shift can be expressed as cos⁡(θ)=sin⁡(θ+π/2) or sin⁡(θ)=cos⁡(θ−π/2). This is distinct from the cofunction identities that follow below, which arise from right-triangle geometry and are not phase shifts: sin ⁡ ( θ ) = cos ⁡ ( π 2 − θ ) , cos ⁡ ( θ ) = sin ⁡ ( π 2 − θ ) . 
{\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}} Zero is the only real fixed point of the sine function; in other words, the only intersection of the sine function and the identity function occurs at the origin, where sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} . The only real fixed point of the cosine function is called the Dottie number. The Dottie number is the unique real root of the equation cos ⁡ ( x ) = x {\displaystyle \cos(x)=x} . The decimal expansion of the Dottie number is approximately 0.739085. ==== Continuity and differentiation ==== The sine and cosine functions are infinitely differentiable. The derivative of sine is cosine, and the derivative of cosine is negative sine: d d x sin ⁡ ( x ) = cos ⁡ ( x ) , d d x cos ⁡ ( x ) = − sin ⁡ ( x ) . {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).} Continuing to higher-order derivatives cycles through the same four functions: in particular, the fourth derivative of sine is the sine itself. These derivatives can be used in the first derivative test, according to which the monotonicity of a function on an interval is determined by the sign of its first derivative, and in the second derivative test, according to which the concavity of a function is determined by the sign of its second derivative. On each quarter of a period, both sine and cosine have a definite monotonicity and concavity: for example, on the interval 0 < x < π 2 {\textstyle 0<x<{\frac {\pi }{2}}} the sine is increasing and concave while the cosine is decreasing and concave. This information can be represented on a Cartesian coordinate system divided into four quadrants. Both sine and cosine functions can be defined by using differential equations. The pair of ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle (\cos \theta ,\sin \theta )} is the solution ( x ( θ ) , y ( θ ) ) {\displaystyle (x(\theta ),y(\theta ))} to the two-dimensional system of differential equations y ′ ( θ ) = x ( θ ) {\displaystyle y'(\theta )=x(\theta )} and x ′ ( θ ) = − y ( θ ) {\displaystyle x'(\theta )=-y(\theta )} with the initial conditions y ( 0 ) = 0 {\displaystyle y(0)=0} and x ( 0 ) = 1 {\displaystyle x(0)=1} . One could then interpret the unit circle in the above definitions as the phase space trajectory of this system of differential equations, starting from the initial conditions y ( 0 ) = 0 {\displaystyle y(0)=0} and x ( 0 ) = 1 {\displaystyle x(0)=1} . ==== Integral and the usage in mensuration ==== The area under these curves over a bounded interval can be obtained by using the integral. Their antiderivatives are: ∫ sin ⁡ ( x ) d x = − cos ⁡ ( x ) + C ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) + C , {\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,} where C {\displaystyle C} denotes the constant of integration. These antiderivatives may be applied to compute the mensuration properties of the sine and cosine curves over a given interval. 
For example, the arc length of the sine curve between 0 {\displaystyle 0} and t {\displaystyle t} is ∫ 0 t 1 + cos 2 ⁡ ( x ) d x = 2 E ⁡ ( t , 1 2 ) , {\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),} where E ⁡ ( φ , k ) {\displaystyle \operatorname {E} (\varphi ,k)} is the incomplete elliptic integral of the second kind with modulus k {\displaystyle k} . It cannot be expressed using elementary functions. In the case of a full period, its arc length is L = 4 2 π 3 Γ ( 1 / 4 ) 2 + Γ ( 1 / 4 ) 2 2 π = 2 π ϖ + 2 ϖ ≈ 7.6404 … {\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots } where Γ {\displaystyle \Gamma } is the gamma function and ϖ {\displaystyle \varpi } is the lemniscate constant. ==== Inverse functions ==== The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin − 1 {\displaystyle \sin ^{-1}} . The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos − 1 {\displaystyle \cos ^{-1}} . As sine and cosine are not injective, their inverses are not exact inverse functions, but partial inverse functions. For example, sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} , but also sin ⁡ ( π ) = 0 {\displaystyle \sin(\pi )=0} , sin ⁡ ( 2 π ) = 0 {\displaystyle \sin(2\pi )=0} , and so on. It follows that the arcsine function is multivalued: arcsin ⁡ ( 0 ) = 0 {\displaystyle \arcsin(0)=0} , but also arcsin ⁡ ( 0 ) = π {\displaystyle \arcsin(0)=\pi } , arcsin ⁡ ( 0 ) = 2 π {\displaystyle \arcsin(0)=2\pi } , and so on. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value. The standard range of principal values for arcsin is from − π 2 {\textstyle -{\frac {\pi }{2}}} to π 2 {\textstyle {\frac {\pi }{2}}} , and the standard range for arccos is from 0 {\displaystyle 0} to π {\displaystyle \pi } . The inverse function of both sine and cosine are defined as: θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) , {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),} where for some integer k {\displaystyle k} , sin ⁡ ( y ) = x ⟺ y = arcsin ⁡ ( x ) + 2 π k , or y = π − arcsin ⁡ ( x ) + 2 π k cos ⁡ ( y ) = x ⟺ y = arccos ⁡ ( x ) + 2 π k , or y = − arccos ⁡ ( x ) + 2 π k {\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}} By definition, both functions satisfy the equations: sin ⁡ ( arcsin ⁡ ( x ) ) = x cos ⁡ ( arccos ⁡ ( x ) ) = x {\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x} and arcsin ⁡ ( sin ⁡ ( θ ) ) = θ for − π 2 ≤ θ ≤ π 2 arccos ⁡ ( cos ⁡ ( θ ) ) = θ for 0 ≤ θ ≤ π {\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}} ==== Other identities ==== According to Pythagorean theorem, the squared hypotenuse is the sum of two squared legs of a right triangle. 
Dividing both sides by the squared hypotenuse results in the Pythagorean trigonometric identity: the sum of the squared sine and the squared cosine equals 1: sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1. {\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.} Sine and cosine satisfy the following double-angle formulas: sin ⁡ ( 2 θ ) = 2 sin ⁡ ( θ ) cos ⁡ ( θ ) , cos ⁡ ( 2 θ ) = cos 2 ⁡ ( θ ) − sin 2 ⁡ ( θ ) = 2 cos 2 ⁡ ( θ ) − 1 = 1 − 2 sin 2 ⁡ ( θ ) {\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}} The cosine double angle formula implies that sin2 and cos2 are, themselves, shifted and scaled sine waves. Specifically, sin 2 ⁡ ( θ ) = 1 − cos ⁡ ( 2 θ ) 2 cos 2 ⁡ ( θ ) = 1 + cos ⁡ ( 2 θ ) 2 {\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}} A graph of the sine and sine squared functions together shows that both curves have the same shape but different ranges of values and different periods: sine squared takes only positive values, and completes twice as many periods. === Series and polynomials === Both sine and cosine functions can be defined by using a Taylor series, a power series whose coefficients involve the higher-order derivatives of the function at a point. As mentioned in § Continuity and differentiation, the derivative of sine is cosine and the derivative of cosine is the negative of sine. This means the successive derivatives of sin ⁡ ( x ) {\displaystyle \sin(x)} are cos ⁡ ( x ) {\displaystyle \cos(x)} , − sin ⁡ ( x ) {\displaystyle -\sin(x)} , − cos ⁡ ( x ) {\displaystyle -\cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , continuing to repeat those four functions. The ( 4 n + k ) {\displaystyle (4n+k)} -th derivative, evaluated at the point 0, is: sin ( 4 n + k ) ⁡ ( 0 ) = { 0 when k = 0 1 when k = 1 0 when k = 2 − 1 when k = 3 {\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}} where the superscript represents repeated differentiation. This implies the following Taylor series expansion at x = 0 {\displaystyle x=0} . One can then use the theory of Taylor series to show that the following identities hold for all real numbers x {\displaystyle x} —where x {\displaystyle x} is the angle in radians. More generally, for all complex numbers: sin ⁡ ( x ) = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 {\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}} Taking the derivative of each term gives the Taylor series for cosine: cos ⁡ ( x ) = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n {\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}} A linear combination of sines and cosines of multiple angles is known as a trigonometric polynomial. Trigonometric polynomials have ample applications, for example in trigonometric interpolation and in the expansion of periodic functions known as the Fourier series. 
Let a n {\displaystyle a_{n}} and b n {\displaystyle b_{n}} be any coefficients; then the trigonometric polynomial of degree N {\displaystyle N} —denoted as T ( x ) {\displaystyle T(x)} —is defined as: T ( x ) = a 0 + ∑ n = 1 N a n cos ⁡ ( n x ) + ∑ n = 1 N b n sin ⁡ ( n x ) . {\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).} The trigonometric series can be defined analogously to the trigonometric polynomial, as its infinite version. Let A n {\displaystyle A_{n}} and B n {\displaystyle B_{n}} be any coefficients; then the trigonometric series can be defined as: 1 2 A 0 + ∑ n = 1 ∞ A n cos ⁡ ( n x ) + B n sin ⁡ ( n x ) . {\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).} In the case of a Fourier series with a given integrable function f {\displaystyle f} , the coefficients of a trigonometric series are: A n = 1 π ∫ 0 2 π f ( x ) cos ⁡ ( n x ) d x , B n = 1 π ∫ 0 2 π f ( x ) sin ⁡ ( n x ) d x . {\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}} == Complex numbers relationship == === Complex exponential function definitions === Both sine and cosine can be extended further via the complex numbers, a set of numbers composed of both real and imaginary parts. For a real number θ {\displaystyle \theta } , the definitions of both sine and cosine functions can be extended to the complex plane in terms of the exponential function as follows: sin ⁡ ( θ ) = e i θ − e − i θ 2 i , cos ⁡ ( θ ) = e i θ + e − i θ 2 , {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}} Alternatively, both functions can be defined in terms of Euler's formula: e i θ = cos ⁡ ( θ ) + i sin ⁡ ( θ ) , e − i θ = cos ⁡ ( θ ) − i sin ⁡ ( θ ) . {\displaystyle {\begin{aligned}e^{i\theta }&=\cos(\theta )+i\sin(\theta ),\\e^{-i\theta }&=\cos(\theta )-i\sin(\theta ).\end{aligned}}} When plotted, the function e i x {\displaystyle e^{ix}} for real values of x {\displaystyle x} traces out the unit circle in the complex plane. Both sine and cosine functions may be expressed as the imaginary and real parts of e i θ {\displaystyle e^{i\theta }} : sin ⁡ θ = Im ⁡ ( e i θ ) , cos ⁡ θ = Re ⁡ ( e i θ ) . {\displaystyle {\begin{aligned}\sin \theta &=\operatorname {Im} (e^{i\theta }),\\\cos \theta &=\operatorname {Re} (e^{i\theta }).\end{aligned}}} When z = x + i y {\displaystyle z=x+iy} for real values x {\displaystyle x} and y {\displaystyle y} , where i = − 1 {\displaystyle i={\sqrt {-1}}} , both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as: sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . 
{\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y,\\\cos z&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} === Polar coordinates === Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates ( r , θ ) {\displaystyle (r,\theta )} : z = r ( cos ⁡ ( θ ) + i sin ⁡ ( θ ) ) , {\displaystyle z=r(\cos(\theta )+i\sin(\theta )),} and the real and imaginary parts are Re ⁡ ( z ) = r cos ⁡ ( θ ) , Im ⁡ ( z ) = r sin ⁡ ( θ ) , {\displaystyle {\begin{aligned}\operatorname {Re} (z)&=r\cos(\theta ),\\\operatorname {Im} (z)&=r\sin(\theta ),\end{aligned}}} where r {\displaystyle r} and θ {\displaystyle \theta } represent the magnitude and angle of the complex number z {\displaystyle z} . For any real number θ {\displaystyle \theta } , Euler's formula in terms of polar coordinates is stated as z = r e i θ {\textstyle z=re^{i\theta }} . === Complex arguments === Applying the series definition of the sine and cosine to a complex argument, z, gives: sin ⁡ ( z ) = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! z 2 n + 1 = e i z − e − i z 2 i = sinh ⁡ ( i z ) i = − i sinh ⁡ ( i z ) cos ⁡ ( z ) = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! z 2 n = e i z + e − i z 2 = cosh ⁡ ( i z ) {\displaystyle {\begin{aligned}\sin(z)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}z^{2n+1}\\&={\frac {e^{iz}-e^{-iz}}{2i}}\\&={\frac {\sinh \left(iz\right)}{i}}\\&=-i\sinh \left(iz\right)\\\cos(z)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}z^{2n}\\&={\frac {e^{iz}+e^{-iz}}{2}}\\&=\cosh(iz)\\\end{aligned}}} where sinh and cosh are the hyperbolic sine and cosine. These are entire functions. It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of their argument: sin ⁡ ( x + i y ) = sin ⁡ ( x ) cos ⁡ ( i y ) + cos ⁡ ( x ) sin ⁡ ( i y ) = sin ⁡ ( x ) cosh ⁡ ( y ) + i cos ⁡ ( x ) sinh ⁡ ( y ) cos ⁡ ( x + i y ) = cos ⁡ ( x ) cos ⁡ ( i y ) − sin ⁡ ( x ) sin ⁡ ( i y ) = cos ⁡ ( x ) cosh ⁡ ( y ) − i sin ⁡ ( x ) sinh ⁡ ( y ) {\displaystyle {\begin{aligned}\sin(x+iy)&=\sin(x)\cos(iy)+\cos(x)\sin(iy)\\&=\sin(x)\cosh(y)+i\cos(x)\sinh(y)\\\cos(x+iy)&=\cos(x)\cos(iy)-\sin(x)\sin(iy)\\&=\cos(x)\cosh(y)-i\sin(x)\sinh(y)\\\end{aligned}}} ==== Partial fraction and product expansions of complex sine ==== Using the partial fraction expansion technique in complex analysis, one can find that the infinite series ∑ n = − ∞ ∞ ( − 1 ) n z − n = 1 z − 2 z ∑ n = 1 ∞ ( − 1 ) n n 2 − z 2 {\displaystyle \sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}} both converge and are equal to π sin ⁡ ( π z ) {\textstyle {\frac {\pi }{\sin(\pi z)}}} . Similarly, one can show that π 2 sin 2 ⁡ ( π z ) = ∑ n = − ∞ ∞ 1 ( z − n ) 2 . {\displaystyle {\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.} Using the product expansion technique, one can derive sin ⁡ ( π z ) = π z ∏ n = 1 ∞ ( 1 − z 2 n 2 ) . {\displaystyle \sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).} ==== Usage of complex sine ==== The function sin(z) is found in the functional equation for the Gamma function, Γ ( s ) Γ ( 1 − s ) = π sin ⁡ ( π s ) , {\displaystyle \Gamma (s)\Gamma (1-s)={\pi \over \sin(\pi s)},} which in turn is found in the functional equation for the Riemann zeta-function, ζ ( s ) = 2 ( 2 π ) s − 1 Γ ( 1 − s ) sin ⁡ ( π 2 s ) ζ ( 1 − s ) . 
{\displaystyle \zeta (s)=2(2\pi )^{s-1}\Gamma (1-s)\sin \left({\frac {\pi }{2}}s\right)\zeta (1-s).} As a holomorphic function, sin z is a 2D solution of Laplace's equation: Δ u ( x 1 , x 2 ) = 0. {\displaystyle \Delta u(x_{1},x_{2})=0.} The complex sine function is also related to the level curves of pendulums. === Complex graphs === == Background == === Etymology === The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string' or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā; sine and chord are closely related in a circle of unit diameter, see Ptolemy’s Theorem). This was transliterated in Arabic as jība, which is meaningless in that language and written as jb (جب). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb (جيب), which means 'bosom', 'pocket', or 'fold'. When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona, he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast'). Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage. The English form sine was introduced in Thomas Fale's 1593 Horologiographia. The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle' as cosinus in Edmund Gunter's Canon triangulorum (1620), which also includes a similar definition of cotangens. === History === While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period (Aryabhatiya and Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin. All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles. Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. In the early 17th century, the French mathematician Albert Girard published the first use of the abbreviations sin, cos, and tan; these were further promulgated by Euler (see below). The Opus palatinum de triangulis of Georg Joachim Rheticus, a student of Copernicus, was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x. Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722). 
Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviations sin., cos., tang., cot., sec., and cosec. == Software implementations == There is no standard algorithm for calculating sine and cosine. IEEE 754, the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs. Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10²²). A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in-between pick the closest pre-calculated value, or linearly interpolate between the two closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage. The CORDIC algorithm is commonly used in scientific calculators. The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos. Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h: sin(double), sinf(float), and sinl(long double). The parameter of each is a floating point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h, such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z). CPython's math functions call the C math library, and use a double-precision floating-point format. === Turns based implementations === Some software libraries provide implementations of sine and cosine using the input angle in half-turns, a half-turn being an angle of 180 degrees or π {\displaystyle \pi } radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases. These functions are called sinpi and cospi in MATLAB, OpenCL, R, Julia, CUDA, and ARM. For example, sinpi(x) would evaluate to sin ⁡ ( π x ) , {\displaystyle \sin(\pi x),} where x is expressed in half-turns, and consequently the final input to the function, πx, can be interpreted in radians by sin. The accuracy advantage stems from the ability to perfectly represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. 
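As an illustration of the half-turn convention, a minimal half-turn sine can be sketched in Python (an illustrative addition; sinpi here is our own helper named after the library functions above, not part of Python's standard math module). The range reduction happens on the half-turn value, before π enters at all:

import math

def sinpi(x):
    # Sine of pi*x, where x is measured in half-turns.
    # The period is exactly 2 half-turns, so "x mod 2" operates on
    # exactly representable values -- no approximation of pi yet.
    r = math.fmod(x, 2.0)  # r is in (-2, 2)
    if r > 1.0:
        r -= 2.0           # subtract a full period of 2 half-turns
    elif r < -1.0:
        r += 2.0           # add a full period of 2 half-turns
    # Fold into [-0.5, 0.5] via sin(pi - t) = sin(t) (and its odd mirror
    # for negative r), so the multiplication by pi acts on a small argument.
    if r > 0.5:
        r = 1.0 - r
    elif r < -0.5:
        r = -1.0 - r
    return math.sin(math.pi * r)

print(sinpi(0.5))   # 1.0: a quarter turn, represented exactly
print(sinpi(1e16))  # 0.0: an even number of half-turns, reduced losslessly

Computing the same value directly in radians as math.sin(1e16 * math.pi) would instead round 1e16·π to the nearest double first, losing the information that the angle is a whole number of periods.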
In contrast, representing 2 π {\displaystyle 2\pi } , π {\displaystyle \pi } , and π 2 {\textstyle {\frac {\pi }{2}}} in binary floating-point or binary scaled fixed-point always involves a loss of accuracy since irrational numbers cannot be represented with finitely many binary digits. Turns also have an accuracy advantage and an efficiency advantage for computing modulo one period. Modulo 1 turn or modulo 2 half-turns can be computed losslessly and efficiently in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary point scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π 2 {\textstyle {\frac {\pi }{2}}} involves inaccuracies in representing π 2 {\textstyle {\frac {\pi }{2}}} . For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution. If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π 2048 {\textstyle {\frac {\pi }{2048}}} would be incurred. == See also == == References == === Footnotes === === Citations === === Works cited === == External links == Media related to Sine function at Wikimedia Commons
Wikipedia/Sine_function
In mathematics, an elementary function is a function of a single variable (typically real or complex) that is defined as taking sums, products, roots and compositions of finitely many polynomial, rational, trigonometric, hyperbolic, and exponential functions, and their inverses (e.g., arcsin, log, or x^(1/n)). All elementary functions are continuous on their domains. Elementary functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. An algebraic treatment of elementary functions was started by Joseph Fels Ritt in the 1930s. Many textbooks and dictionaries do not give a precise definition of the elementary functions, and mathematicians differ on it. == Examples == === Basic examples === Elementary functions of a single variable x include: Constant functions: 2 , π , e , {\displaystyle 2,\ \pi ,\ e,} etc. Rational powers of x: x , x 2 , x ( x 1 2 ) , x 2 3 , {\displaystyle x,\ x^{2},\ {\sqrt {x}}\ (x^{\frac {1}{2}}),\ x^{\frac {2}{3}},} etc. Exponential functions: e x , a x {\displaystyle e^{x},\ a^{x}} Logarithms: log ⁡ x , log a ⁡ x {\displaystyle \log x,\ \log _{a}x} Trigonometric functions: sin ⁡ x , cos ⁡ x , tan ⁡ x , {\displaystyle \sin x,\ \cos x,\ \tan x,} etc. Inverse trigonometric functions: arcsin ⁡ x , arccos ⁡ x , {\displaystyle \arcsin x,\ \arccos x,} etc. Hyperbolic functions: sinh ⁡ x , cosh ⁡ x , {\displaystyle \sinh x,\ \cosh x,} etc. Inverse hyperbolic functions: arsinh ⁡ x , arcosh ⁡ x , {\displaystyle \operatorname {arsinh} x,\ \operatorname {arcosh} x,} etc. All functions obtained by adding, subtracting, multiplying or dividing a finite number of any of the previous functions All functions obtained by root extraction of a polynomial with coefficients in elementary functions All functions obtained by composing a finite number of any of the previously listed functions Certain elementary functions of a single complex variable z, such as z {\displaystyle {\sqrt {z}}} and log ⁡ z {\displaystyle \log z} , may be multivalued. Additionally, certain classes of functions may be obtained from others using the final two rules. For example, the exponential function e z {\displaystyle e^{z}} composed with addition, subtraction, and division provides the hyperbolic functions, while initial composition with i z {\displaystyle iz} instead provides the trigonometric functions. === Composite examples === Examples of elementary functions include: Addition, e.g. (x + 1) Multiplication, e.g. (2x) Polynomial functions e tan ⁡ x 1 + x 2 sin ⁡ ( 1 + ( log ⁡ x ) 2 ) {\displaystyle {\frac {e^{\tan x}}{1+x^{2}}}\sin \left({\sqrt {1+(\log x)^{2}}}\right)} − i log ⁡ ( x + i 1 − x 2 ) {\displaystyle -i\log \left(x+i{\sqrt {1-x^{2}}}\right)} The last function is equal to arccos ⁡ x {\displaystyle \arccos x} , the inverse cosine, in the entire complex plane. All monomials, polynomials, rational functions and algebraic functions are elementary. The absolute value function, for real x {\displaystyle x} , is also elementary as it can be expressed as the composition of a power and root of x {\displaystyle x} : | x | = x 2 {\textstyle |x|={\sqrt {x^{2}}}} . === Non-elementary functions === Many mathematicians exclude non-analytic functions such as the absolute value function or discontinuous functions such as the step function, but others allow them. Some have proposed extending the set to include, for example, the Lambert W function. 
Some examples of functions that are not elementary: tetration the gamma function non-elementary Liouvillian functions, including the exponential integral (Ei), logarithmic integral (Li or li) and Fresnel integrals (S and C). the error function, e r f ( x ) = 2 π ∫ 0 x e − t 2 d t , {\displaystyle \mathrm {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt,} whose non-elementarity may not be immediately obvious, but can be proven using the Risch algorithm. other nonelementary integrals, including the Dirichlet integral and elliptic integral. == Closure == It follows directly from the definition that the set of elementary functions is closed under arithmetic operations, root extraction and composition. The elementary functions are closed under differentiation. They are not closed under limits and infinite sums. Importantly, the elementary functions are not closed under integration, as shown by Liouville's theorem; see nonelementary integral. The Liouvillian functions are defined as the elementary functions and, recursively, the integrals of the Liouvillian functions. == Differential algebra == The mathematical definition of an elementary function, or a function in elementary form, is considered in the context of differential algebra. A differential algebra is an algebra with the extra operation of derivation (algebraic version of differentiation). Using the derivation operation, new equations can be written and their solutions used in extensions of the algebra. By starting with the field of rational functions, two special types of transcendental extensions (the logarithm and the exponential) can be added to the field, building a tower containing elementary functions. A differential field F is a field F0 (for example, rational functions over the rationals Q) together with a derivation map u → ∂u. (Here ∂u is a new function. Sometimes the notation u′ is used.) The derivation captures the properties of differentiation, so that for any two elements of the base field, the derivation is linear ∂ ( u + v ) = ∂ u + ∂ v {\displaystyle \partial (u+v)=\partial u+\partial v} and satisfies the Leibniz product rule ∂ ( u ⋅ v ) = ∂ u ⋅ v + u ⋅ ∂ v . {\displaystyle \partial (u\cdot v)=\partial u\cdot v+u\cdot \partial v\,.} An element h is a constant if ∂h = 0. If the base field is over the rationals, care must be taken when extending the field to add the needed transcendental constants. A function u of a differential extension F[u] of a differential field F is an elementary function over F if the function u is algebraic over F, or is an exponential, that is, ∂u = u ∂a for a ∈ F, or is a logarithm, that is, ∂u = ∂a / a for a ∈ F. (see also Liouville's theorem) == See also == Algebraic function – Mathematical function Closed-form expression – Mathematical formula involving a given set of operations Differential Galois theory – Study of Galois symmetry groups of differential fields Elementary function arithmetic – System of arithmetic in proof theory Liouville's theorem (differential algebra) – Says when antiderivatives of elementary functions can be expressed as elementary functions Tarski's high school algebra problem – Mathematical problem Transcendental function – Analytic function that does not satisfy a polynomial equation Tupper's self-referential formula – Formula that visually represents itself when graphed == Notes == == References == Liouville, Joseph (1833a). "Premier mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 124–148. 
Liouville, Joseph (1833b). "Second mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 149–193. Liouville, Joseph (1833c). "Note sur la détermination des intégrales dont la valeur est algébrique". Journal für die reine und angewandte Mathematik. 10: 347–359. Ritt, Joseph (1950). Differential Algebra. AMS. Rosenlicht, Maxwell (1972). "Integration in finite terms". American Mathematical Monthly. 79 (9): 963–972. doi:10.2307/2318066. JSTOR 2318066. == Further reading == Davenport, James H. (2007). "What Might "Understand a Function" Mean?". Towards Mechanized Mathematical Assistants. Lecture Notes in Computer Science. Vol. 4573. pp. 55–65. doi:10.1007/978-3-540-73086-6_5. ISBN 978-3-540-73083-5. S2CID 8049737. == External links == Elementary functions at Encyclopaedia of Mathematics Weisstein, Eric W. "Elementary function". MathWorld.
Wikipedia/Elementary_functions
An equation in mathematics is a formula stating that two expressions have the same value. Equation may also refer to: Chemical equation, a symbolic representation of a chemical reaction Equation of time, the difference between solar time, as shown by a sundial, and mean time, as shown by a clock that runs at constant speed Equation clock, a clock that contains a mechanism that embodies the equation of time, so the clock shows solar time Equation of state, a relationship between physical conditions and the state of a material Equation (band), an English folk band formed in 1996 Equation Group, a computer espionage group "The Equation", a 2008 episode of Fringe "[Equation]" (ΔMi−1 = −αΣn=1NDi[n] [Σj∈C[i]Fji[n − 1] + Fexti[n−1]]), the first B-side of "Windowlicker" by Aphex Twin, also known as "[Formula]" == See also == List of equations
Wikipedia/Equation_(disambiguation)
In applied mathematics, a transcendental equation is an equation over the real (or complex) numbers that is not algebraic, that is, at least one of its sides describes a transcendental function. Examples include: x = e − x x = cos ⁡ x 2 x = x 2 {\displaystyle {\begin{aligned}x&=e^{-x}\\x&=\cos x\\2^{x}&=x^{2}\end{aligned}}} A transcendental equation need not be an equation between elementary functions, although most published examples are. In some cases, a transcendental equation can be solved by transforming it into an equivalent algebraic equation. Some such transformations are sketched below; computer algebra systems may provide more elaborated transformations. In general, however, only approximate solutions can be found. == Transformation into an algebraic equation == Ad hoc methods exist for some classes of transcendental equations in one variable to transform them into algebraic equations which then might be solved. === Exponential equations === If the unknown, say x, occurs only in exponents: applying the natural logarithm to both sides may yield an algebraic equation, e.g. 4 x = 3 x 2 − 1 ⋅ 2 5 x {\displaystyle 4^{x}=3^{x^{2}-1}\cdot 2^{5x}} transforms to x ln ⁡ 4 = ( x 2 − 1 ) ln ⁡ 3 + 5 x ln ⁡ 2 {\displaystyle x\ln 4=(x^{2}-1)\ln 3+5x\ln 2} , which simplifies to x 2 ln ⁡ 3 + x ( 5 ln ⁡ 2 − ln ⁡ 4 ) − ln ⁡ 3 = 0 {\displaystyle x^{2}\ln 3+x(5\ln 2-\ln 4)-\ln 3=0} , which has the solutions x = − 3 ln ⁡ 2 ± 9 ( ln ⁡ 2 ) 2 + 4 ( ln ⁡ 3 ) 2 2 ln ⁡ 3 . {\displaystyle x={\frac {-3\ln 2\pm {\sqrt {9(\ln 2)^{2}+4(\ln 3)^{2}}}}{2\ln 3}}.} This will not work if addition occurs "at the base line", as in 4 x = 3 x 2 − 1 + 2 5 x . {\displaystyle 4^{x}=3^{x^{2}-1}+2^{5x}.} if all "base constants" can be written as integer or rational powers of some number q, then substituting y=q^x may succeed, e.g. 2 x − 1 + 4 x − 2 − 8 x − 2 = 0 {\displaystyle 2^{x-1}+4^{x-2}-8^{x-2}=0} transforms, using y=2^x, to 1 2 y + 1 16 y 2 − 1 64 y 3 = 0 {\displaystyle {\frac {1}{2}}y+{\frac {1}{16}}y^{2}-{\frac {1}{64}}y^{3}=0} which has the solutions y ∈ { 0 , − 4 , 8 } {\displaystyle y\in \{0,-4,8\}} , hence x = log 2 ⁡ 8 = 3 {\displaystyle x=\log _{2}8=3} is the only real solution. This will not work if squares or higher powers of x occur in an exponent, or if the "base constants" do not "share" a common q. sometimes, substituting y=xe^x may yield an algebraic equation; after the solutions for y are known, those for x can be obtained by applying the Lambert W function, e.g.: x 2 e 2 x + 2 = 3 x e x {\displaystyle x^{2}e^{2x}+2=3xe^{x}} transforms to y 2 + 2 = 3 y , {\displaystyle y^{2}+2=3y,} which has the solutions y ∈ { 1 , 2 } , {\displaystyle y\in \{1,2\},} hence x ∈ { W 0 ( 1 ) , W 0 ( 2 ) } {\displaystyle x\in \{W_{0}(1),W_{0}(2)\}} , where W 0 {\displaystyle W_{0}} denotes the principal real-valued branch of the multivalued W {\displaystyle W} function; the other real branch W − 1 {\displaystyle W_{-1}} contributes no further real solutions here, since it is real-valued only for arguments in [ − 1 / e , 0 ) {\displaystyle [-1/e,0)} . === Logarithmic equations === If the unknown x occurs only in arguments of a logarithm function: applying exponentiation to both sides may yield an algebraic equation, e.g. 2 log 5 ⁡ ( 3 x − 1 ) − log 5 ⁡ ( 12 x + 1 ) = 0 {\displaystyle 2\log _{5}(3x-1)-\log _{5}(12x+1)=0} transforms, using exponentiation to base 5 {\displaystyle 5} , to ( 3 x − 1 ) 2 12 x + 1 = 1 , {\displaystyle {\frac {(3x-1)^{2}}{12x+1}}=1,} which has the solutions x ∈ { 0 , 2 } . 
{\displaystyle x\in \{0,2\}.} If only real numbers are considered, x = 0 {\displaystyle x=0} is not a solution, as it leads to a non-real subexpression log 5 ⁡ ( − 1 ) {\displaystyle \log _{5}(-1)} in the given equation. This requires the original equation to consist of integer-coefficient linear combinations of logarithms w.r.t. a unique base, and the logarithm arguments to be polynomials in x. if all "logarithm calls" have a unique base b {\displaystyle b} and a unique argument expression f ( x ) , {\displaystyle f(x),} then substituting y = log b ⁡ ( f ( x ) ) {\displaystyle y=\log _{b}(f(x))} may lead to a simpler equation, e.g. 5 ln ⁡ ( sin ⁡ x 2 ) + 6 = 7 ln ⁡ ( sin ⁡ x 2 ) + 8 {\displaystyle 5\ln(\sin x^{2})+6=7{\sqrt {\ln(\sin x^{2})+8}}} transforms, using y = ln ⁡ ( sin ⁡ x 2 ) , {\displaystyle y=\ln(\sin x^{2}),} to 5 y + 6 = 7 y + 8 , {\displaystyle 5y+6=7{\sqrt {y+8}},} which is algebraic and has the single solution y = 89 25 {\displaystyle y={\frac {89}{25}}} . After that, applying inverse operations to the substitution equation yields x = arcsin ⁡ exp ⁡ y = arcsin ⁡ exp ⁡ 89 25 . {\displaystyle x={\sqrt {\arcsin \exp y}}={\sqrt {\arcsin \exp {\frac {89}{25}}}}.} === Trigonometric equations === If the unknown x occurs only as argument of trigonometric functions: applying Pythagorean identities and trigonometric sum and multiple formulas, arguments of the forms sin ⁡ ( n x + a ) , cos ⁡ ( m x + b ) , tan ⁡ ( l x + c ) , . . . {\displaystyle \sin(nx+a),\cos(mx+b),\tan(lx+c),...} with integer n , m , l , . . . {\displaystyle n,m,l,...} might all be transformed to arguments of the form, say, sin ⁡ x {\displaystyle \sin x} . After that, substituting y = sin ⁡ ( x ) {\displaystyle y=\sin(x)} yields an algebraic equation, e.g. sin ⁡ ( x + a ) = ( cos 2 ⁡ x ) − 1 {\displaystyle \sin(x+a)=(\cos ^{2}x)-1} transforms to ( sin ⁡ x ) ( cos ⁡ a ) + 1 − sin 2 ⁡ x ( sin ⁡ a ) = 1 − ( sin 2 ⁡ x ) − 1 {\displaystyle (\sin x)(\cos a)+{\sqrt {1-\sin ^{2}x}}(\sin a)=1-(\sin ^{2}x)-1} , and, after substitution, to y ( cos ⁡ a ) + 1 − y 2 ( sin ⁡ a ) = − y 2 {\displaystyle y(\cos a)+{\sqrt {1-y^{2}}}(\sin a)=-y^{2}} which is algebraic and can be solved. After that, applying x = 2 k π + arcsin ⁡ y {\displaystyle x=2k\pi +\arcsin y} (together with x = ( 2 k + 1 ) π − arcsin ⁡ y {\displaystyle x=(2k+1)\pi -\arcsin y} ) obtains the solutions. === Hyperbolic equations === If the unknown x occurs only in linear expressions inside arguments of hyperbolic functions, unfolding them by their defining exponential expressions and substituting y = exp ⁡ ( x ) {\displaystyle y=\exp(x)} yields an algebraic equation, e.g. 3 cosh ⁡ x = 4 + sinh ⁡ ( 2 x − 6 ) {\displaystyle 3\cosh x=4+\sinh(2x-6)} unfolds to 3 2 ( e x + 1 e x ) = 4 + 1 2 ( ( e x ) 2 e 6 − e 6 ( e x ) 2 ) , {\displaystyle {\frac {3}{2}}(e^{x}+{\frac {1}{e^{x}}})=4+{\frac {1}{2}}\left({\frac {(e^{x})^{2}}{e^{6}}}-{\frac {e^{6}}{(e^{x})^{2}}}\right),} which transforms to the equation 3 2 ( y + 1 y ) = 4 + 1 2 ( y 2 e 6 − e 6 y 2 ) , {\displaystyle {\frac {3}{2}}(y+{\frac {1}{y}})=4+{\frac {1}{2}}\left({\frac {y^{2}}{e^{6}}}-{\frac {e^{6}}{y^{2}}}\right),} which is algebraic and can be solved. Applying x = ln ⁡ y {\displaystyle x=\ln y} obtains the solutions of the original equation. == Approximate solutions == Approximate numerical solutions to transcendental equations can be found using numerical methods, analytical approximations, or graphical methods. 
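As a concrete sketch of such methods (an illustrative addition, not part of the article; fixed_point and newton are ad hoc helper names, not library functions), the direct iteration and Newton's method discussed below can both be applied to the classic equation x = cos x using Python's standard library:

import math

def fixed_point(f, x0, steps=100):
    # Direct iteration: feed the output of f back in as the next guess.
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

def newton(g, dg, x0, steps=20):
    # Newton's method for g(x) = 0, using the derivative dg of g.
    x = x0
    for _ in range(steps):
        x -= g(x) / dg(x)
    return x

# Solve x = cos(x), equivalently g(x) = cos(x) - x = 0.
print(fixed_point(math.cos, 1.0))                 # 0.7390851332... (slow, linear convergence)
print(newton(lambda x: math.cos(x) - x,
             lambda x: -math.sin(x) - 1.0, 1.0))  # 0.7390851332... (fast, quadratic convergence)

Both converge to the same root, approximately 0.739085; the fixed-point iteration needs dozens of steps for full precision while Newton's method needs only a handful.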
These equations can be solved by direct iteration: reordering the equation into the form x = f ( x ) {\displaystyle x=f(x)} , making an initial guess x 0 {\displaystyle x_{0}} , computing f ( x 0 ) {\displaystyle f(x_{0})} , which becomes x 1 {\displaystyle x_{1}} , and substituting it back into f ( x ) {\displaystyle f(x)} , etc. Convergence may be very slow. Some reorderings may diverge, so some other reordering that converges must be found. f ( x ) {\displaystyle f(x)} must be continuous and "sufficiently smooth" or the method may fail. Numerical methods for solving arbitrary equations are called root-finding algorithms. By rearranging the equation into the form f ( x ) = 0 {\displaystyle f(x)=0} , if f ( x ) {\displaystyle f(x)} is continuous and differentiable, Newton's method, which involves taking the derivative of f ( x ) {\displaystyle f(x)} , is a common iterative method of approximating a root; an initial guess x 0 {\displaystyle x_{0}} must be "sufficiently close" to the root of interest for the iteration to converge to it. In some cases, the equation can be well approximated using Taylor series near the zero. For example, for k ≈ 1 {\displaystyle k\approx 1} , the solutions of sin ⁡ x = k x {\displaystyle \sin x=kx} are approximately those of ( 1 − k ) x − x 3 / 6 = 0 {\displaystyle (1-k)x-x^{3}/6=0} , namely x = 0 {\displaystyle x=0} and x = ± 6 1 − k {\displaystyle x=\pm {\sqrt {6}}{\sqrt {1-k}}} . For a graphical solution, one method is to set each side of a single-variable transcendental equation equal to a dependent variable and plot the two graphs, using their intersecting points to find solutions. == Other solutions == Some systems of high-order transcendental equations can be solved by “separation” of the unknowns, reducing them to algebraic equations. The following can also be used when solving transcendental equations/inequalities: If x 0 {\displaystyle x_{0}} is a solution to the equation f ( x ) = g ( x ) {\displaystyle f(x)=g(x)} and f ( x ) ≤ c ≤ g ( x ) {\displaystyle f(x)\leq c\leq g(x)} , then this solution must satisfy f ( x 0 ) = g ( x 0 ) = c {\displaystyle f(x_{0})=g(x_{0})=c} . For example, we want to solve log 2 ⁡ ( 3 + 2 x − x 2 ) = tan 2 ⁡ ( π x 4 ) + cot 2 ⁡ ( π x 4 ) {\displaystyle \log _{2}\left(3+2x-x^{2}\right)=\tan ^{2}\left({\frac {\pi x}{4}}\right)+\cot ^{2}\left({\frac {\pi x}{4}}\right)} . The given equation is defined for − 1 < x < 3 {\displaystyle -1<x<3} . Let f ( x ) = log 2 ⁡ ( 3 + 2 x − x 2 ) {\displaystyle f(x)=\log _{2}\left(3+2x-x^{2}\right)} and g ( x ) = tan 2 ⁡ ( π x 4 ) + cot 2 ⁡ ( π x 4 ) {\displaystyle g(x)=\tan ^{2}\left({\frac {\pi x}{4}}\right)+\cot ^{2}\left({\frac {\pi x}{4}}\right)} . It is easy to show that f ( x ) ≤ 2 {\displaystyle f(x)\leq 2} and g ( x ) ≥ 2 {\displaystyle g(x)\geq 2} , so if there is a solution to the equation, it must satisfy f ( x ) = g ( x ) = 2 {\displaystyle f(x)=g(x)=2} . From f ( x ) = 2 {\displaystyle f(x)=2} we get x = 1 ∈ ( − 1 , 3 ) {\displaystyle x=1\in (-1,3)} . Indeed, f ( 1 ) = g ( 1 ) = 2 {\displaystyle f(1)=g(1)=2} and so x = 1 {\displaystyle x=1} is the only real solution to the equation. == See also == Mrs. Miniver's problem – Problem on areas of intersecting circles Goat grazing problem – Another problem on areas of intersecting circles == Notes == == References == John P. Boyd (2014). Solving Transcendental Equations: The Chebyshev Polynomial Proxy and Other Numerical Rootfinders, Perturbation Series, and Oracles. Other Titles in Applied Mathematics. 
Philadelphia: Society for Industrial and Applied Mathematics (SIAM). doi:10.1137/1.9781611973525. ISBN 978-1-61197-351-8.
Wikipedia/Transcendental_equation
In mathematics, an extraneous solution (or spurious solution) is one which emerges from the process of solving a problem but is not a valid solution to it. A missing solution is a valid one which is lost during the solution process. Both situations frequently result from performing operations that are not invertible for some or all values of the variables involved, which prevents the chain of logical implications from being bidirectional. == Extraneous solutions: multiplication == One of the basic principles of algebra is that one can multiply both sides of an equation by the same expression without changing the equation's solutions. However, strictly speaking, this is not true, in that multiplication by certain expressions may introduce new solutions that were not present before. For example, consider the following equation: x + 2 = 0. {\displaystyle x+2=0.} If we multiply both sides by zero, we get 0 = 0. {\displaystyle 0=0.} This is true for all values of x {\displaystyle x} , so the solution set is all real numbers. But clearly not all real numbers are solutions to the original equation. The problem is that multiplication by zero is not invertible: if we multiply by any nonzero value, we can reverse the step by dividing by the same value, but division by zero is not defined, so multiplication by zero cannot be reversed. More subtly, suppose we take the same equation and multiply both sides by x {\displaystyle x} . We get x ( x + 2 ) = ( 0 ) x , {\displaystyle x(x+2)=(0)x,} x 2 + 2 x = 0. {\displaystyle x^{2}+2x=0.} This quadratic equation has two solutions: x = − 2 {\displaystyle x=-2} and x = 0. {\displaystyle x=0.} But if 0 {\displaystyle 0} is substituted for x {\displaystyle x} in the original equation, the result is the invalid equation 2 = 0 {\displaystyle 2=0} . This counterintuitive result occurs because in the case where x = 0 {\displaystyle x=0} , multiplying both sides by x {\displaystyle x} multiplies both sides by zero, and so necessarily produces a true equation just as in the first example. In general, whenever we multiply both sides of an equation by an expression involving variables, we introduce extraneous solutions wherever that expression is equal to zero. But it is not correct to simply exclude these values, because they may have been legitimate solutions to the original equation. For example, suppose we multiply both sides of our original equation x + 2 = 0 {\displaystyle x+2=0} by x + 2. {\displaystyle x+2.} We get ( x + 2 ) ( x + 2 ) = 0 ( x + 2 ) , {\displaystyle (x+2)(x+2)=0(x+2),} x 2 + 4 x + 4 = 0 , {\displaystyle x^{2}+4x+4=0,} which has only one real solution: x = − 2 {\displaystyle x=-2} . This is a solution to the original equation, so it cannot be excluded, even though x + 2 = 0 {\displaystyle x+2=0} for this value of x {\displaystyle x} . == Extraneous solutions: rational == Extraneous solutions can arise naturally in problems involving fractions with variables in the denominator. For example, consider this equation: 1 x − 2 = 3 x + 2 − 6 x ( x − 2 ) ( x + 2 ) . {\displaystyle {\frac {1}{x-2}}={\frac {3}{x+2}}-{\frac {6x}{(x-2)(x+2)}}\,.} To begin solving, we multiply each side of the equation by the least common denominator of all the fractions contained in the equation. In this case, the least common denominator is ( x − 2 ) ( x + 2 ) {\displaystyle (x-2)(x+2)} . After performing these operations, the fractions are eliminated, and the equation becomes: x + 2 = 3 ( x − 2 ) − 6 x . 
{\displaystyle x+2=3(x-2)-6x\,.} Solving this yields the single solution x = − 2. {\displaystyle x=-2.} However, when we substitute the solution back into the original equation, we obtain: 1 − 2 − 2 = 3 − 2 + 2 − 6 ( − 2 ) ( − 2 − 2 ) ( − 2 + 2 ) . {\displaystyle {\frac {1}{-2-2}}={\frac {3}{-2+2}}-{\frac {6(-2)}{(-2-2)(-2+2)}}\,.} The equation then becomes: 1 − 4 = 3 0 + 12 0 . {\displaystyle {\frac {1}{-4}}={\frac {3}{0}}+{\frac {12}{0}}\,.} This equation is not valid, since one cannot divide by zero. Therefore, the solution x = − 2 {\displaystyle x=-2} is extraneous and not valid, and the original equation has no solution. For this specific example, it could be recognized that, for the value x = − 2 {\displaystyle x=-2} , the operation of multiplying by ( x − 2 ) ( x + 2 ) {\displaystyle (x-2)(x+2)} would be a multiplication by zero. However, it is not always simple to evaluate whether each operation already performed was valid for the solutions eventually obtained. Because of this, often the only simple, effective way to deal with multiplication by expressions involving variables is to substitute each of the solutions obtained into the original equation and confirm that this yields a valid equation. After discarding solutions that yield an invalid equation, we will have the correct set of solutions. In some cases, as in the above example, all solutions may be discarded, in which case the original equation has no solution. == Missing solutions: division == Extraneous solutions are not too difficult to deal with because they just require checking all solutions for validity. However, more insidious are missing solutions, which can occur when performing operations on expressions that are invalid for certain values of those expressions. For example, if we were solving the following equation, the correct solution is obtained by subtracting 4 {\displaystyle 4} from both sides, then dividing both sides by 2 {\displaystyle 2} : 2 x + 4 = 0 , {\displaystyle 2x+4=0,} 2 x = − 4 , {\displaystyle 2x=-4,} x = − 2. {\displaystyle x=-2.} By analogy, we might suppose we can solve the following equation by subtracting 2 x {\displaystyle 2x} from both sides, then dividing by x {\displaystyle x} : x 2 + 2 x = 0 , {\displaystyle x^{2}+2x=0,} x 2 = − 2 x , {\displaystyle x^{2}=-2x,} x = − 2. {\displaystyle x=-2.} The solution x = − 2 {\displaystyle x=-2} is in fact a valid solution to the original equation; but the other solution, x = 0 {\displaystyle x=0} , has disappeared. The problem is that we divided both sides by x {\displaystyle x} , which involves the indeterminate operation of dividing by zero when x = 0. {\displaystyle x=0.} It is generally possible (and advisable) to avoid dividing by any expression that can be zero; however, where this is necessary, it is sufficient to ensure that any values of the variables that make it zero also fail to satisfy the original equation. For example, suppose we have this equation: x + 2 = 0. {\displaystyle x+2=0.} It is valid to divide both sides by x − 2 {\displaystyle x-2} , obtaining the following equation: x + 2 x − 2 = 0. {\displaystyle {\frac {x+2}{x-2}}=0.} This is valid because the only value of x {\displaystyle x} that makes x − 2 {\displaystyle x-2} equal to zero is x = 2 , {\displaystyle x=2,} which is not a solution to the original equation. In some cases we are not interested in certain solutions; for example, we may only want solutions where x {\displaystyle x} is positive. 
In this case it is okay to divide by an expression that is only zero when x {\displaystyle x} is zero or negative, because this can only remove solutions we do not care about. == Other operations == Multiplication and division are not the only operations that can modify the solution set. For example, take the problem: x 2 = 4. {\displaystyle x^{2}=4.} If we take the positive square root of both sides, we get: x = 2. {\displaystyle x=2.} We are not taking the square root of any negative values here, since both x 2 {\displaystyle x^{2}} and 4 {\displaystyle 4} are necessarily positive. But we have lost the solution x = − 2. {\displaystyle x=-2.} The reason is that x {\displaystyle x} is actually not in general the positive square root of x 2 . {\displaystyle x^{2}.} If x {\displaystyle x} is negative, the positive square root of x 2 {\displaystyle x^{2}} is − x . {\displaystyle -x.} If the step is taken correctly, it leads instead to the equation: x 2 = 4 . {\displaystyle {\sqrt {x^{2}}}={\sqrt {4}}.} | x | = 2. {\displaystyle |x|=2.} x = ± 2. {\displaystyle x=\pm 2.} This equation has the same two solutions as the original one: x = 2 {\displaystyle x=2} and x = − 2. {\displaystyle x=-2.} We can also modify the solution set by squaring both sides, because this makes any negative values on either side of the equation positive, which can cause extraneous solutions. == See also == Invalid proof == References ==
Wikipedia/Extraneous_solution
In mathematics, sine and cosine are trigonometric functions of an angle. The sine and cosine of an acute angle are defined in the context of a right triangle: for the specified angle, its sine is the ratio of the length of the side opposite that angle to the length of the longest side of the triangle (the hypotenuse), and the cosine is the ratio of the length of the adjacent leg to that of the hypotenuse. For an angle θ {\displaystyle \theta } , the sine and cosine functions are denoted as sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( θ ) {\displaystyle \cos(\theta )} . The definitions of sine and cosine have been extended to any real value in terms of the lengths of certain line segments in a unit circle. More modern definitions express the sine and cosine as infinite series, or as the solutions of certain differential equations, allowing their extension to arbitrary positive and negative values and even to complex numbers. The sine and cosine functions are commonly used to model periodic phenomena such as sound and light waves, the position and velocity of harmonic oscillators, sunlight intensity and day length, and average temperature variations throughout the year. They can be traced to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period. == Elementary descriptions == === Right-angled triangle definition === To define the sine and cosine of an acute angle α {\displaystyle \alpha } , start with a right triangle that contains an angle of measure α {\displaystyle \alpha } ; in the accompanying figure, angle α {\displaystyle \alpha } in a right triangle A B C {\displaystyle ABC} is the angle of interest. The three sides of the triangle are named as follows: The opposite side is the side opposite to the angle of interest; in this case, it is a {\displaystyle a} . The hypotenuse is the side opposite the right angle; in this case, it is h {\displaystyle h} . The hypotenuse is always the longest side of a right-angled triangle. The adjacent side is the remaining side; in this case, it is b {\displaystyle b} . It forms a side of (and is adjacent to) both the angle of interest and the right angle. Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse, and the cosine of the angle is equal to the length of the adjacent side divided by the length of the hypotenuse: sin ⁡ ( α ) = opposite hypotenuse , cos ⁡ ( α ) = adjacent hypotenuse . {\displaystyle \sin(\alpha )={\frac {\text{opposite}}{\text{hypotenuse}}},\qquad \cos(\alpha )={\frac {\text{adjacent}}{\text{hypotenuse}}}.} The other trigonometric functions of the angle can be defined similarly; for example, the tangent is the ratio between the opposite and adjacent sides or equivalently the ratio between the sine and cosine functions. The reciprocal of sine is cosecant, which gives the ratio of the hypotenuse length to the length of the opposite side. Similarly, the reciprocal of cosine is secant, which gives the ratio of the hypotenuse length to that of the adjacent side. The cotangent function is the ratio between the adjacent and opposite sides, a reciprocal of a tangent function. These functions can be formulated as: tan ⁡ ( θ ) = sin ⁡ ( θ ) cos ⁡ ( θ ) = opposite adjacent , cot ⁡ ( θ ) = 1 tan ⁡ ( θ ) = adjacent opposite , csc ⁡ ( θ ) = 1 sin ⁡ ( θ ) = hypotenuse opposite , sec ⁡ ( θ ) = 1 cos ⁡ ( θ ) = hypotenuse adjacent . 
{\displaystyle {\begin{aligned}\tan(\theta )&={\frac {\sin(\theta )}{\cos(\theta )}}={\frac {\text{opposite}}{\text{adjacent}}},\\\cot(\theta )&={\frac {1}{\tan(\theta )}}={\frac {\text{adjacent}}{\text{opposite}}},\\\csc(\theta )&={\frac {1}{\sin(\theta )}}={\frac {\text{hypotenuse}}{\text{opposite}}},\\\sec(\theta )&={\frac {1}{\cos(\theta )}}={\frac {\textrm {hypotenuse}}{\textrm {adjacent}}}.\end{aligned}}} === Special angle measures === As stated, the values sin ⁡ ( α ) {\displaystyle \sin(\alpha )} and cos ⁡ ( α ) {\displaystyle \cos(\alpha )} appear to depend on the choice of a right triangle containing an angle of measure α {\displaystyle \alpha } . However, this is not the case as all such triangles are similar, and so the ratios are the same for each of them. For example, each leg of the 45-45-90 right triangle is 1 unit, and its hypotenuse is 2 {\displaystyle {\sqrt {2}}} ; therefore, sin ⁡ 45 ∘ = cos ⁡ 45 ∘ = 2 2 {\textstyle \sin 45^{\circ }=\cos 45^{\circ }={\frac {\sqrt {2}}{2}}} . The following table shows the special values of sine and cosine for each input in the domain 0 < α < π 2 {\textstyle 0<\alpha <{\frac {\pi }{2}}} , with the inputs expressed in various unit systems such as degrees and radians. The sine and cosine of angles other than these five special values can be obtained by using a calculator. === Laws === The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. Given a triangle A B C {\displaystyle ABC} with sides a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , and angles opposite those sides α {\displaystyle \alpha } , β {\displaystyle \beta } , and γ {\displaystyle \gamma } , the law states, sin ⁡ α a = sin ⁡ β b = sin ⁡ γ c . {\displaystyle {\frac {\sin \alpha }{a}}={\frac {\sin \beta }{b}}={\frac {\sin \gamma }{c}}.} This is equivalent to the equality of the first three expressions below: a sin ⁡ α = b sin ⁡ β = c sin ⁡ γ = 2 R , {\displaystyle {\frac {a}{\sin \alpha }}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma }}=2R,} where R {\displaystyle R} is the triangle's circumradius. The law of cosines is useful for computing the length of an unknown side if the two other sides and the angle between them are known. The law states, a 2 + b 2 − 2 a b cos ⁡ ( γ ) = c 2 {\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}} In the case where γ = π / 2 {\displaystyle \gamma =\pi /2} , for which cos ⁡ ( γ ) = 0 {\displaystyle \cos(\gamma )=0} , the resulting equation becomes the Pythagorean theorem. === Vector definition === The cross product and dot product are operations on two vectors in Euclidean vector space. The sine and cosine functions can be defined in terms of the cross product and dot product. If a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } are vectors, and θ {\displaystyle \theta } is the angle between a {\displaystyle \mathbb {a} } and b {\displaystyle \mathbb {b} } , then sine and cosine can be defined as: sin ⁡ ( θ ) = | a × b | | a | | b | , cos ⁡ ( θ ) = a ⋅ b | a | | b | . 
{\displaystyle {\begin{aligned}\sin(\theta )&={\frac {|\mathbb {a} \times \mathbb {b} |}{|a||b|}},\\\cos(\theta )&={\frac {\mathbb {a} \cdot \mathbb {b} }{|a||b|}}.\end{aligned}}} == Analytic descriptions == === Unit circle definition === The sine and cosine functions may also be defined in a more general way by using the unit circle, a circle of radius one centered at the origin ( 0 , 0 ) {\displaystyle (0,0)} , formulated as the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} in the Cartesian coordinate system. Let a line through the origin intersect the unit circle, making an angle of θ {\displaystyle \theta } with the positive half of the x {\displaystyle x} -axis. The x {\displaystyle x} - and y {\displaystyle y} -coordinates of this point of intersection are equal to cos ⁡ ( θ ) {\displaystyle \cos(\theta )} and sin ⁡ ( θ ) {\displaystyle \sin(\theta )} , respectively; that is, sin ⁡ ( θ ) = y , cos ⁡ ( θ ) = x . {\displaystyle \sin(\theta )=y,\qquad \cos(\theta )=x.} This definition is consistent with the right-angled triangle definition of sine and cosine when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} because the length of the hypotenuse of the unit circle is always 1; mathematically speaking, the sine of an angle equals the length of the opposite side of the triangle, which is simply the y {\displaystyle y} -coordinate. A similar argument can be made for the cosine function to show that the cosine of an angle equals the x {\displaystyle x} -coordinate when 0 < θ < π 2 {\textstyle 0<\theta <{\frac {\pi }{2}}} , even under the new definition using the unit circle. ==== Graph of a function and its elementary properties ==== Using the unit circle definition has the advantage of making it easy to draw a graph of the sine and cosine functions. This can be done by rotating a point counterclockwise along the circumference of the circle, according to the input θ > 0 {\displaystyle \theta >0} . For the sine function, if the input is θ = π 2 {\textstyle \theta ={\frac {\pi }{2}}} , the point is rotated counterclockwise and stops exactly on the y {\displaystyle y} -axis. If θ = π {\displaystyle \theta =\pi } , the point is halfway around the circle. If θ = 2 π {\displaystyle \theta =2\pi } , the point returns to its starting position. It follows that both sine and cosine functions have the range − 1 ≤ y ≤ 1 {\displaystyle -1\leq y\leq 1} . Extending the angle to the whole real domain, the point rotates counterclockwise continuously. This can be done similarly for the cosine function as well, although for cosine the x {\displaystyle x} -coordinate of the rotating point is tracked instead. In other words, both sine and cosine functions are periodic, meaning that increasing any angle by the circle's full circumference leaves their values unchanged. Mathematically, sin ⁡ ( θ + 2 π ) = sin ⁡ ( θ ) , cos ⁡ ( θ + 2 π ) = cos ⁡ ( θ ) . {\displaystyle \sin(\theta +2\pi )=\sin(\theta ),\qquad \cos(\theta +2\pi )=\cos(\theta ).} A function f {\displaystyle f} is said to be odd if f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} , and is said to be even if f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} . The sine function is odd, whereas the cosine function is even. The sine and cosine functions are similar, differing by a shift of π 2 {\textstyle {\frac {\pi }{2}}} . This phase shift can be expressed as cos⁡(θ)=sin⁡(θ+π/2) or sin⁡(θ)=cos⁡(θ−π/2). This is distinct from the cofunction identities that follow below, which arise from right-triangle geometry and are not phase shifts: sin ⁡ ( θ ) = cos ⁡ ( π 2 − θ ) , cos ⁡ ( θ ) = sin ⁡ ( π 2 − θ ) . 
{\displaystyle {\begin{aligned}\sin(\theta )&=\cos \left({\frac {\pi }{2}}-\theta \right),\\\cos(\theta )&=\sin \left({\frac {\pi }{2}}-\theta \right).\end{aligned}}} Zero is the only real fixed point of the sine function; in other words the only intersection of the sine function and the identity function is sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} . The only real fixed point of the cosine function is called the Dottie number. The Dottie number is the unique real root of the equation cos ⁡ ( x ) = x {\displaystyle \cos(x)=x} . The decimal expansion of the Dottie number is approximately 0.739085. ==== Continuity and differentiation ==== The sine and cosine functions are infinitely differentiable. The derivative of sine is cosine, and the derivative of cosine is negative sine: d d x sin ⁡ ( x ) = cos ⁡ ( x ) , d d x cos ⁡ ( x ) = − sin ⁡ ( x ) . {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x),\qquad {\frac {d}{dx}}\cos(x)=-\sin(x).} Continuing this process with higher-order derivatives cycles through the same four functions; the fourth derivative of sine is sine itself. These derivatives can be applied in the first derivative test, according to which the monotonicity of a function can be determined from the sign of its first derivative. They can also be applied in the second derivative test, according to which the concavity of a function can be determined from the sign of its second derivative. The following table shows that both sine and cosine functions have concavity and monotonicity—the positive sign ( + {\displaystyle +} ) denotes that a graph is increasing (going upward) and the negative sign ( − {\displaystyle -} ) that it is decreasing (going downward)—in certain intervals. This information can be represented as a Cartesian coordinate system divided into four quadrants. Both sine and cosine functions can be defined by using differential equations. The pair ( cos ⁡ θ , sin ⁡ θ ) {\displaystyle (\cos \theta ,\sin \theta )} is the solution ( x ( θ ) , y ( θ ) ) {\displaystyle (x(\theta ),y(\theta ))} to the two-dimensional system of differential equations y ′ ( θ ) = x ( θ ) {\displaystyle y'(\theta )=x(\theta )} and x ′ ( θ ) = − y ( θ ) {\displaystyle x'(\theta )=-y(\theta )} with the initial conditions y ( 0 ) = 0 {\displaystyle y(0)=0} and x ( 0 ) = 1 {\displaystyle x(0)=1} . One could interpret the unit circle in the above definitions as the phase space trajectory of this system of differential equations, starting from the given initial conditions. ==== Integral and the usage in mensuration ==== The area under their curves can be obtained by using the integral over a bounded interval. Their antiderivatives are: ∫ sin ⁡ ( x ) d x = − cos ⁡ ( x ) + C ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) + C , {\displaystyle \int \sin(x)\,dx=-\cos(x)+C\qquad \int \cos(x)\,dx=\sin(x)+C,} where C {\displaystyle C} denotes the constant of integration. These antiderivatives may be applied to compute the mensuration properties of the sine and cosine curves on a given interval. 
For example, the arc length of the sine curve between 0 {\displaystyle 0} and t {\displaystyle t} is ∫ 0 t 1 + cos 2 ⁡ ( x ) d x = 2 E ⁡ ( t , 1 2 ) , {\displaystyle \int _{0}^{t}\!{\sqrt {1+\cos ^{2}(x)}}\,dx={\sqrt {2}}\operatorname {E} \left(t,{\frac {1}{\sqrt {2}}}\right),} where E ⁡ ( φ , k ) {\displaystyle \operatorname {E} (\varphi ,k)} is the incomplete elliptic integral of the second kind with modulus k {\displaystyle k} . It cannot be expressed using elementary functions. In the case of a full period, its arc length is L = 4 2 π 3 Γ ( 1 / 4 ) 2 + Γ ( 1 / 4 ) 2 2 π = 2 π ϖ + 2 ϖ ≈ 7.6404 … {\displaystyle L={\frac {4{\sqrt {2\pi ^{3}}}}{\Gamma (1/4)^{2}}}+{\frac {\Gamma (1/4)^{2}}{\sqrt {2\pi }}}={\frac {2\pi }{\varpi }}+2\varpi \approx 7.6404\ldots } where Γ {\displaystyle \Gamma } is the gamma function and ϖ {\displaystyle \varpi } is the lemniscate constant. ==== Inverse functions ==== The inverse function of sine is arcsine or inverse sine, denoted as "arcsin", "asin", or sin − 1 {\displaystyle \sin ^{-1}} . The inverse function of cosine is arccosine, denoted as "arccos", "acos", or cos − 1 {\displaystyle \cos ^{-1}} . As sine and cosine are not injective, their inverses are not exact inverse functions, but partial inverse functions. For example, sin ⁡ ( 0 ) = 0 {\displaystyle \sin(0)=0} , but also sin ⁡ ( π ) = 0 {\displaystyle \sin(\pi )=0} , sin ⁡ ( 2 π ) = 0 {\displaystyle \sin(2\pi )=0} , and so on. It follows that the arcsine function is multivalued: arcsin ⁡ ( 0 ) = 0 {\displaystyle \arcsin(0)=0} , but also arcsin ⁡ ( 0 ) = π {\displaystyle \arcsin(0)=\pi } , arcsin ⁡ ( 0 ) = 2 π {\displaystyle \arcsin(0)=2\pi } , and so on. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value. The standard range of principal values for arcsin is from − π 2 {\textstyle -{\frac {\pi }{2}}} to π 2 {\textstyle {\frac {\pi }{2}}} , and the standard range for arccos is from 0 {\displaystyle 0} to π {\displaystyle \pi } . The inverse functions of sine and cosine are defined as: θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) , {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right),} where for some integer k {\displaystyle k} , sin ⁡ ( y ) = x ⟺ y = arcsin ⁡ ( x ) + 2 π k , or y = π − arcsin ⁡ ( x ) + 2 π k cos ⁡ ( y ) = x ⟺ y = arccos ⁡ ( x ) + 2 π k , or y = − arccos ⁡ ( x ) + 2 π k {\displaystyle {\begin{aligned}\sin(y)=x\iff &y=\arcsin(x)+2\pi k,{\text{ or }}\\&y=\pi -\arcsin(x)+2\pi k\\\cos(y)=x\iff &y=\arccos(x)+2\pi k,{\text{ or }}\\&y=-\arccos(x)+2\pi k\end{aligned}}} By definition, both functions satisfy the equations: sin ⁡ ( arcsin ⁡ ( x ) ) = x cos ⁡ ( arccos ⁡ ( x ) ) = x {\displaystyle \sin(\arcsin(x))=x\qquad \cos(\arccos(x))=x} and arcsin ⁡ ( sin ⁡ ( θ ) ) = θ for − π 2 ≤ θ ≤ π 2 arccos ⁡ ( cos ⁡ ( θ ) ) = θ for 0 ≤ θ ≤ π {\displaystyle {\begin{aligned}\arcsin(\sin(\theta ))=\theta \quad &{\text{for}}\quad -{\frac {\pi }{2}}\leq \theta \leq {\frac {\pi }{2}}\\\arccos(\cos(\theta ))=\theta \quad &{\text{for}}\quad 0\leq \theta \leq \pi \end{aligned}}} ==== Other identities ==== According to the Pythagorean theorem, the squared hypotenuse is the sum of two squared legs of a right triangle. 
Dividing both sides of this formula by the squared hypotenuse results in the Pythagorean trigonometric identity: the sum of a squared sine and a squared cosine equals 1: sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1. {\displaystyle \sin ^{2}(\theta )+\cos ^{2}(\theta )=1.} Sine and cosine satisfy the following double-angle formulas: sin ⁡ ( 2 θ ) = 2 sin ⁡ ( θ ) cos ⁡ ( θ ) , cos ⁡ ( 2 θ ) = cos 2 ⁡ ( θ ) − sin 2 ⁡ ( θ ) = 2 cos 2 ⁡ ( θ ) − 1 = 1 − 2 sin 2 ⁡ ( θ ) {\displaystyle {\begin{aligned}\sin(2\theta )&=2\sin(\theta )\cos(\theta ),\\\cos(2\theta )&=\cos ^{2}(\theta )-\sin ^{2}(\theta )\\&=2\cos ^{2}(\theta )-1\\&=1-2\sin ^{2}(\theta )\end{aligned}}} The cosine double angle formula implies that sin² and cos² are, themselves, shifted and scaled sine waves. Specifically, sin 2 ⁡ ( θ ) = 1 − cos ⁡ ( 2 θ ) 2 cos 2 ⁡ ( θ ) = 1 + cos ⁡ ( 2 θ ) 2 {\displaystyle \sin ^{2}(\theta )={\frac {1-\cos(2\theta )}{2}}\qquad \cos ^{2}(\theta )={\frac {1+\cos(2\theta )}{2}}} The graph shows both sine and sine squared functions, with the sine in blue and the sine squared in red. Both graphs have the same shape but with different ranges of values and different periods. Sine squared has only positive values, but twice the number of periods. === Series and polynomials === Both sine and cosine functions can be defined by using a Taylor series, a power series involving the higher-order derivatives. As mentioned in § Continuity and differentiation, the derivative of sine is cosine and the derivative of cosine is the negative of sine. This means the successive derivatives of sin ⁡ ( x ) {\displaystyle \sin(x)} are cos ⁡ ( x ) {\displaystyle \cos(x)} , − sin ⁡ ( x ) {\displaystyle -\sin(x)} , − cos ⁡ ( x ) {\displaystyle -\cos(x)} , sin ⁡ ( x ) {\displaystyle \sin(x)} , continuing to repeat those four functions. The ( 4 n + k ) {\displaystyle (4n+k)} -th derivative, evaluated at the point 0, is: sin ( 4 n + k ) ⁡ ( 0 ) = { 0 when k = 0 1 when k = 1 0 when k = 2 − 1 when k = 3 {\displaystyle \sin ^{(4n+k)}(0)={\begin{cases}0&{\text{when }}k=0\\1&{\text{when }}k=1\\0&{\text{when }}k=2\\-1&{\text{when }}k=3\end{cases}}} where the superscript represents repeated differentiation. This implies the following Taylor series expansion at x = 0 {\displaystyle x=0} . One can then use the theory of Taylor series to show that the following identities hold for all real numbers x {\displaystyle x} —where x {\displaystyle x} is the angle in radians. More generally, for all complex numbers: sin ⁡ ( x ) = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 {\displaystyle {\begin{aligned}\sin(x)&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\end{aligned}}} Taking the derivative of each term gives the Taylor series for cosine: cos ⁡ ( x ) = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n {\displaystyle {\begin{aligned}\cos(x)&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}\end{aligned}}} Sine and cosine functions of multiple angles may appear in a linear combination, resulting in a polynomial known as a trigonometric polynomial. Trigonometric polynomials have ample applications, for instance in interpolation and in the extension of periodic functions known as the Fourier series.
Let a n {\displaystyle a_{n}} and b n {\displaystyle b_{n}} be any coefficients; then the trigonometric polynomial of degree N {\displaystyle N} —denoted as T ( x ) {\displaystyle T(x)} —is defined as: T ( x ) = a 0 + ∑ n = 1 N a n cos ⁡ ( n x ) + ∑ n = 1 N b n sin ⁡ ( n x ) . {\displaystyle T(x)=a_{0}+\sum _{n=1}^{N}a_{n}\cos(nx)+\sum _{n=1}^{N}b_{n}\sin(nx).} The trigonometric series can be defined analogously to the trigonometric polynomial, as its infinite version. Let A n {\displaystyle A_{n}} and B n {\displaystyle B_{n}} be any coefficients; then the trigonometric series can be defined as: 1 2 A 0 + ∑ n = 1 ∞ A n cos ⁡ ( n x ) + B n sin ⁡ ( n x ) . {\displaystyle {\frac {1}{2}}A_{0}+\sum _{n=1}^{\infty }A_{n}\cos(nx)+B_{n}\sin(nx).} In the case of a Fourier series with a given integrable function f {\displaystyle f} , the coefficients of a trigonometric series are: A n = 1 π ∫ 0 2 π f ( x ) cos ⁡ ( n x ) d x , B n = 1 π ∫ 0 2 π f ( x ) sin ⁡ ( n x ) d x . {\displaystyle {\begin{aligned}A_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\cos(nx)\,dx,\\B_{n}&={\frac {1}{\pi }}\int _{0}^{2\pi }f(x)\sin(nx)\,dx.\end{aligned}}} == Complex numbers relationship == === Complex exponential function definitions === Both sine and cosine can be extended further via the complex numbers, a set of numbers composed of both real and imaginary parts. For a real number θ {\displaystyle \theta } , the definition of both sine and cosine functions can be extended in the complex plane in terms of an exponential function as follows: sin ⁡ ( θ ) = e i θ − e − i θ 2 i , cos ⁡ ( θ ) = e i θ + e − i θ 2 , {\displaystyle {\begin{aligned}\sin(\theta )&={\frac {e^{i\theta }-e^{-i\theta }}{2i}},\\\cos(\theta )&={\frac {e^{i\theta }+e^{-i\theta }}{2}},\end{aligned}}} Alternatively, both functions can be defined in terms of Euler's formula: e i θ = cos ⁡ ( θ ) + i sin ⁡ ( θ ) , e − i θ = cos ⁡ ( θ ) − i sin ⁡ ( θ ) . {\displaystyle {\begin{aligned}e^{i\theta }&=\cos(\theta )+i\sin(\theta ),\\e^{-i\theta }&=\cos(\theta )-i\sin(\theta ).\end{aligned}}} When plotted, the function e i x {\displaystyle e^{ix}} for real values of x {\displaystyle x} traces out the unit circle in the complex plane. Both sine and cosine functions may be simplified to the imaginary and real parts of e i θ {\displaystyle e^{i\theta }} as: sin ⁡ θ = Im ⁡ ( e i θ ) , cos ⁡ θ = Re ⁡ ( e i θ ) . {\displaystyle {\begin{aligned}\sin \theta &=\operatorname {Im} (e^{i\theta }),\\\cos \theta &=\operatorname {Re} (e^{i\theta }).\end{aligned}}} When z = x + i y {\displaystyle z=x+iy} for real values x {\displaystyle x} and y {\displaystyle y} , where i = − 1 {\displaystyle i={\sqrt {-1}}} , both sine and cosine functions can be expressed in terms of real sines, cosines, and hyperbolic functions as: sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y , cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y . 
{\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y,\\\cos z&=\cos x\cosh y-i\sin x\sinh y.\end{aligned}}} === Polar coordinates === Sine and cosine are used to connect the real and imaginary parts of a complex number with its polar coordinates ( r , θ ) {\displaystyle (r,\theta )} : z = r ( cos ⁡ ( θ ) + i sin ⁡ ( θ ) ) , {\displaystyle z=r(\cos(\theta )+i\sin(\theta )),} and the real and imaginary parts are Re ⁡ ( z ) = r cos ⁡ ( θ ) , Im ⁡ ( z ) = r sin ⁡ ( θ ) , {\displaystyle {\begin{aligned}\operatorname {Re} (z)&=r\cos(\theta ),\\\operatorname {Im} (z)&=r\sin(\theta ),\end{aligned}}} where r {\displaystyle r} and θ {\displaystyle \theta } represent the magnitude and angle of the complex number z {\displaystyle z} . For any real number θ {\displaystyle \theta } , Euler's formula in terms of polar coordinates is stated as z = r e i θ {\textstyle z=re^{i\theta }} . === Complex arguments === Applying the series definition of the sine and cosine to a complex argument, z, gives: sin ⁡ ( z ) = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! z 2 n + 1 = e i z − e − i z 2 i = sinh ⁡ ( i z ) i = − i sinh ⁡ ( i z ) cos ⁡ ( z ) = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! z 2 n = e i z + e − i z 2 = cosh ⁡ ( i z ) {\displaystyle {\begin{aligned}\sin(z)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}z^{2n+1}\\&={\frac {e^{iz}-e^{-iz}}{2i}}\\&={\frac {\sinh \left(iz\right)}{i}}\\&=-i\sinh \left(iz\right)\\\cos(z)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}z^{2n}\\&={\frac {e^{iz}+e^{-iz}}{2}}\\&=\cosh(iz)\\\end{aligned}}} where sinh and cosh are the hyperbolic sine and cosine. These are entire functions. It is also sometimes useful to express the complex sine and cosine functions in terms of the real and imaginary parts of their argument: sin ⁡ ( x + i y ) = sin ⁡ ( x ) cos ⁡ ( i y ) + cos ⁡ ( x ) sin ⁡ ( i y ) = sin ⁡ ( x ) cosh ⁡ ( y ) + i cos ⁡ ( x ) sinh ⁡ ( y ) cos ⁡ ( x + i y ) = cos ⁡ ( x ) cos ⁡ ( i y ) − sin ⁡ ( x ) sin ⁡ ( i y ) = cos ⁡ ( x ) cosh ⁡ ( y ) − i sin ⁡ ( x ) sinh ⁡ ( y ) {\displaystyle {\begin{aligned}\sin(x+iy)&=\sin(x)\cos(iy)+\cos(x)\sin(iy)\\&=\sin(x)\cosh(y)+i\cos(x)\sinh(y)\\\cos(x+iy)&=\cos(x)\cos(iy)-\sin(x)\sin(iy)\\&=\cos(x)\cosh(y)-i\sin(x)\sinh(y)\\\end{aligned}}} ==== Partial fraction and product expansions of complex sine ==== Using the partial fraction expansion technique in complex analysis, one can find that the infinite series ∑ n = − ∞ ∞ ( − 1 ) n z − n = 1 z − 2 z ∑ n = 1 ∞ ( − 1 ) n n 2 − z 2 {\displaystyle \sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{z-n}}={\frac {1}{z}}-2z\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{n^{2}-z^{2}}}} both converge and are equal to π sin ⁡ ( π z ) {\textstyle {\frac {\pi }{\sin(\pi z)}}} . Similarly, one can show that π 2 sin 2 ⁡ ( π z ) = ∑ n = − ∞ ∞ 1 ( z − n ) 2 . {\displaystyle {\frac {\pi ^{2}}{\sin ^{2}(\pi z)}}=\sum _{n=-\infty }^{\infty }{\frac {1}{(z-n)^{2}}}.} Using the product expansion technique, one can derive sin ⁡ ( π z ) = π z ∏ n = 1 ∞ ( 1 − z 2 n 2 ) . {\displaystyle \sin(\pi z)=\pi z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}}}\right).} ==== Usage of complex sine ==== sin(z) is found in the functional equation for the Gamma function, Γ ( s ) Γ ( 1 − s ) = π sin ⁡ ( π s ) , {\displaystyle \Gamma (s)\Gamma (1-s)={\pi \over \sin(\pi s)},} which in turn is found in the functional equation for the Riemann zeta-function, ζ ( s ) = 2 ( 2 π ) s − 1 Γ ( 1 − s ) sin ⁡ ( π 2 s ) ζ ( 1 − s ) . 
{\displaystyle \zeta (s)=2(2\pi )^{s-1}\Gamma (1-s)\sin \left({\frac {\pi }{2}}s\right)\zeta (1-s).} As a holomorphic function, sin z is a 2D solution of Laplace's equation: Δ u ( x 1 , x 2 ) = 0. {\displaystyle \Delta u(x_{1},x_{2})=0.} The complex sine function is also related to the level curves of pendulums. === Complex graphs === == Background == === Etymology === The word sine is derived, indirectly, from the Sanskrit word jyā 'bow-string' or more specifically its synonym jīvá (both adopted from Ancient Greek χορδή 'string; chord'), due to visual similarity between the arc of a circle with its corresponding chord and a bow with its string (see jyā, koti-jyā and utkrama-jyā; sine and chord are closely related in a circle of unit diameter, see Ptolemy’s Theorem). This was transliterated in Arabic as jība, which is meaningless in that language and written as jb (جب). Since Arabic is written without short vowels, jb was interpreted as the homograph jayb (جيب), which means 'bosom', 'pocket', or 'fold'. When the Arabic texts of Al-Battani and al-Khwārizmī were translated into Medieval Latin in the 12th century by Gerard of Cremona, he used the Latin equivalent sinus (which also means 'bay' or 'fold', and more specifically 'the hanging fold of a toga over the breast'). Gerard was probably not the first scholar to use this translation; Robert of Chester appears to have preceded him and there is evidence of even earlier usage. The English form sine was introduced in Thomas Fale's 1593 Horologiographia. The word cosine derives from an abbreviation of the Latin complementi sinus 'sine of the complementary angle' as cosinus in Edmund Gunter's Canon triangulorum (1620), which also includes a similar definition of cotangens. === History === While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The sine and cosine functions are closely related to the jyā and koṭi-jyā functions used in Indian astronomy during the Gupta period (Aryabhatiya and Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin. All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles. Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. In the early 17th century, the French mathematician Albert Girard published the first use of the abbreviations sin, cos, and tan; these were further promulgated by Euler (see below). The Opus palatinum de triangulis of Georg Joachim Rheticus, a student of Copernicus, was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596. In a paper published in 1682, Leibniz proved that sin x is not an algebraic function of x. Roger Cotes computed the derivative of sine in his Harmonia Mensurarum (1722). 
Leonhard Euler's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, also defining them as infinite series and presenting "Euler's formula", as well as the near-modern abbreviations sin., cos., tang., cot., sec., and cosec. == Software implementations == There is no standard algorithm for calculating sine and cosine. IEEE 754, the most widely used standard for the specification of reliable floating-point computation, does not address calculating trigonometric functions such as sine. The reason is that no efficient algorithm is known for computing sine and cosine with a specified accuracy, especially for large inputs. Algorithms for calculating sine may be balanced for such constraints as speed, accuracy, portability, or range of input values accepted. This can lead to different results for different algorithms, especially for special circumstances such as very large inputs, e.g. sin(10²²). A common programming optimization, used especially in 3D graphics, is to pre-calculate a table of sine values, for example one value per degree, then for values in-between pick the closest pre-calculated value, or linearly interpolate between the two closest values to approximate it. This allows results to be looked up from a table rather than being calculated in real time. With modern CPU architectures this method may offer no advantage. The CORDIC algorithm is commonly used in scientific calculators. The sine and cosine functions, along with other trigonometric functions, are widely available across programming languages and platforms. In computing, they are typically abbreviated to sin and cos. Some CPU architectures have a built-in instruction for sine, including the Intel x87 FPUs since the 80387. In programming languages, sin and cos are typically either a built-in function or found within the language's standard math library. For example, the C standard library defines sine functions within math.h: sin(double), sinf(float), and sinl(long double). The parameter of each is a floating point value, specifying the angle in radians. Each function returns the same data type as it accepts. Many other trigonometric functions are also defined in math.h, such as for cosine, arc sine, and hyperbolic sine (sinh). Similarly, Python defines math.sin(x) and math.cos(x) within the built-in math module. Complex sine and cosine functions are also available within the cmath module, e.g. cmath.sin(z). CPython's math functions call the C math library, and use a double-precision floating-point format. === Turns based implementations === Some software libraries provide implementations of sine and cosine using the input angle in half-turns, a half-turn being an angle of 180 degrees or π {\displaystyle \pi } radians. Representing angles in turns or half-turns has accuracy advantages and efficiency advantages in some cases. These functions are called sinpi and cospi in MATLAB, OpenCL, R, Julia, CUDA, and ARM. For example, sinpi(x) would evaluate to sin ⁡ ( π x ) , {\displaystyle \sin(\pi x),} where x is expressed in half-turns, and consequently the final input to the function, πx, can be interpreted in radians by sin. The accuracy advantage stems from the ability to perfectly represent key angles like full-turn, half-turn, and quarter-turn losslessly in binary floating-point or fixed-point. 
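The idea behind these turns-based functions can be sketched in a few lines of Python. This is a hypothetical helper written for illustration, not the implementation used by any of the libraries named above:

```python
import math

def sinpi(x: float) -> float:
    """Compute sin(pi * x), where x is given in half-turns.

    The argument reduction modulo 2 is exact in binary floating point,
    so even a huge argument like sinpi(10**22) loses no accuracy in
    the reduction step itself.
    """
    x = math.fmod(x, 2.0)      # exact reduction: fmod introduces no rounding error
    if x < 0.0:
        x += 2.0               # normalize to [0, 2)
    # Return exact values at the exactly representable quarter-turn boundaries.
    if x == 0.0 or x == 1.0:
        return 0.0
    if x == 0.5:
        return 1.0
    if x == 1.5:
        return -1.0
    return math.sin(math.pi * x)  # a single inexact multiplication by pi remains
```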
In contrast, representing 2 π {\displaystyle 2\pi } , π {\displaystyle \pi } , and π 2 {\textstyle {\frac {\pi }{2}}} in binary floating-point or binary scaled fixed-point always involves a loss of accuracy since irrational numbers cannot be represented with finitely many binary digits. Turns also have an accuracy advantage and an efficiency advantage for computing modulo one period. Reduction modulo 1 turn or modulo 2 half-turns can be computed losslessly and efficiently in both floating-point and fixed-point. For example, computing modulo 1 or modulo 2 for a binary-point-scaled fixed-point value requires only a bit shift or bitwise AND operation. In contrast, computing modulo π 2 {\textstyle {\frac {\pi }{2}}} involves inaccuracies in representing π 2 {\textstyle {\frac {\pi }{2}}} . For applications involving angle sensors, the sensor typically provides angle measurements in a form directly compatible with turns or half-turns. For example, an angle sensor may count from 0 to 4096 over one complete revolution. If half-turns are used as the unit for angle, then the value provided by the sensor directly and losslessly maps to a fixed-point data type with 11 bits to the right of the binary point. In contrast, if radians are used as the unit for storing the angle, then the inaccuracies and cost of multiplying the raw sensor integer by an approximation to π 2048 {\textstyle {\frac {\pi }{2048}}} would be incurred. == See also == == References == === Footnotes === === Citations === === Works cited === == External links == Media related to Sine function at Wikimedia Commons
Wikipedia/Cosine_function
In mathematics, the degree of a polynomial is the highest of the degrees of the polynomial's monomials (individual terms) with non-zero coefficients. The degree of a term is the sum of the exponents of the variables that appear in it, and thus is a non-negative integer. For a univariate polynomial, the degree of the polynomial is simply the highest exponent occurring in the polynomial. The term order has been used as a synonym of degree but, nowadays, may refer to several other concepts (see Order of a polynomial (disambiguation)). For example, the polynomial 7 x 2 y 3 + 4 x − 9 , {\displaystyle 7x^{2}y^{3}+4x-9,} which can also be written as 7 x 2 y 3 + 4 x 1 y 0 − 9 x 0 y 0 , {\displaystyle 7x^{2}y^{3}+4x^{1}y^{0}-9x^{0}y^{0},} has three terms. The first term has a degree of 5 (the sum of the powers 2 and 3), the second term has a degree of 1, and the last term has a degree of 0. Therefore, the polynomial has a degree of 5, which is the highest degree of any term. To determine the degree of a polynomial that is not in standard form, such as ( x + 1 ) 2 − ( x − 1 ) 2 {\displaystyle (x+1)^{2}-(x-1)^{2}} , one can put it in standard form by expanding the products (by distributivity) and combining the like terms; for example, ( x + 1 ) 2 − ( x − 1 ) 2 = 4 x {\displaystyle (x+1)^{2}-(x-1)^{2}=4x} is of degree 1, even though each summand has degree 2. However, this is not needed when the polynomial is written as a product of polynomials in standard form, because the degree of a product is the sum of the degrees of the factors. == Names of polynomials by degree == The following names are assigned to polynomials according to their degree: Special case – zero (see § Degree of the zero polynomial, below) Degree 0 – non-zero constant Degree 1 – linear Degree 2 – quadratic Degree 3 – cubic Degree 4 – quartic (or, if all terms have even degree, biquadratic) Degree 5 – quintic Degree 6 – sextic (or, less commonly, hexic) Degree 7 – septic (or, less commonly, heptic) Degree 8 – octic Degree 9 – nonic Degree 10 – decic Names for degree above three are based on Latin ordinal numbers, and end in -ic. This should be distinguished from the names used for the number of variables, the arity, which are based on Latin distributive numbers, and end in -ary. For example, a degree two polynomial in two variables, such as x 2 + x y + y 2 {\displaystyle x^{2}+xy+y^{2}} , is called a "binary quadratic": binary due to two variables, quadratic due to degree two. There are also names for the number of terms, which are also based on Latin distributive numbers, ending in -nomial; the common ones are monomial, binomial, and (less commonly) trinomial; thus x 2 + y 2 {\displaystyle x^{2}+y^{2}} is a "binary quadratic binomial". == Examples == The polynomial ( y − 3 ) ( 2 y + 6 ) ( − 4 y − 21 ) {\displaystyle (y-3)(2y+6)(-4y-21)} is a cubic polynomial: after multiplying out and collecting terms of the same degree, it becomes − 8 y 3 − 42 y 2 + 72 y + 378 {\displaystyle -8y^{3}-42y^{2}+72y+378} , with highest exponent 3. The polynomial ( 3 z 8 + z 5 − 4 z 2 + 6 ) + ( − 3 z 8 + 8 z 4 + 2 z 3 + 14 z ) {\displaystyle (3z^{8}+z^{5}-4z^{2}+6)+(-3z^{8}+8z^{4}+2z^{3}+14z)} is a quintic polynomial: upon combining like terms, the two terms of degree 8 cancel, leaving z 5 + 8 z 4 + 2 z 3 − 4 z 2 + 14 z + 6 {\displaystyle z^{5}+8z^{4}+2z^{3}-4z^{2}+14z+6} , with highest exponent 5. 
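These degree computations are easy to check mechanically. Below is a minimal sketch using the third-party SymPy library (the use of SymPy and its degree/Poly helpers is an assumption for illustration; any computer algebra system would do):

```python
from sympy import symbols, expand, degree, Poly

x, y = symbols('x y')

# Expanding puts a polynomial into standard form, revealing its degree.
p = expand((x + 1)**2 - (x - 1)**2)
print(p, degree(p, x))    # 4*x 1 -- degree 1, although each summand has degree 2

# The total degree of a multivariate polynomial is the maximum term degree.
print(Poly(7*x**2*y**3 + 4*x - 9, x, y).total_degree())  # 5

# The degree of a product is the sum of the degrees of the factors.
f, g = x**3 + x, x**2 + 1
print(degree(expand(f * g), x))  # 5, i.e. 3 + 2
```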
== Behavior under polynomial operations == The degree of the sum, the product or the composition of two polynomials is strongly related to the degree of the input polynomials. === Addition === The degree of the sum (or difference) of two polynomials is less than or equal to the greater of their degrees; that is, deg ⁡ ( P + Q ) ≤ max { deg ⁡ ( P ) , deg ⁡ ( Q ) } {\displaystyle \deg(P+Q)\leq \max\{\deg(P),\deg(Q)\}} and deg ⁡ ( P − Q ) ≤ max { deg ⁡ ( P ) , deg ⁡ ( Q ) } {\displaystyle \deg(P-Q)\leq \max\{\deg(P),\deg(Q)\}} . For example, the degree of ( x 3 + x ) − ( x 3 + x 2 ) = − x 2 + x {\displaystyle (x^{3}+x)-(x^{3}+x^{2})=-x^{2}+x} is 2, and 2 ≤ max{3, 3}. The equality always holds when the degrees of the polynomials are different. For example, the degree of ( x 3 + x ) + ( x 2 + 1 ) = x 3 + x 2 + x + 1 {\displaystyle (x^{3}+x)+(x^{2}+1)=x^{3}+x^{2}+x+1} is 3, and 3 = max{3, 2}. === Multiplication === The degree of the product of a polynomial by a non-zero scalar is equal to the degree of the polynomial; that is, deg ⁡ ( c P ) = deg ⁡ ( P ) {\displaystyle \deg(cP)=\deg(P)} . For example, the degree of 2 ( x 2 + 3 x − 2 ) = 2 x 2 + 6 x − 4 {\displaystyle 2(x^{2}+3x-2)=2x^{2}+6x-4} is 2, which is equal to the degree of x 2 + 3 x − 2 {\displaystyle x^{2}+3x-2} . Thus, the set of polynomials (with coefficients from a given field F) whose degrees are smaller than or equal to a given number n forms a vector space; for more, see Examples of vector spaces. More generally, the degree of the product of two polynomials over a field or an integral domain is the sum of their degrees: deg ⁡ ( P Q ) = deg ⁡ ( P ) + deg ⁡ ( Q ) {\displaystyle \deg(PQ)=\deg(P)+\deg(Q)} . For example, the degree of ( x 3 + x ) ( x 2 + 1 ) = x 5 + 2 x 3 + x {\displaystyle (x^{3}+x)(x^{2}+1)=x^{5}+2x^{3}+x} is 5 = 3 + 2. For polynomials over an arbitrary ring, the above rules may not be valid, because of cancellation that can occur when multiplying two nonzero constants. For example, in the ring Z / 4 Z {\displaystyle \mathbf {Z} /4\mathbf {Z} } of integers modulo 4, one has that deg ⁡ ( 2 x ) = deg ⁡ ( 1 + 2 x ) = 1 {\displaystyle \deg(2x)=\deg(1+2x)=1} , but deg ⁡ ( 2 x ( 1 + 2 x ) ) = deg ⁡ ( 2 x ) = 1 {\displaystyle \deg(2x(1+2x))=\deg(2x)=1} , which is not equal to the sum of the degrees of the factors. === Composition === The degree of the composition of two non-constant polynomials P {\displaystyle P} and Q {\displaystyle Q} over a field or integral domain is the product of their degrees: deg ⁡ ( P ∘ Q ) = deg ⁡ ( P ) deg ⁡ ( Q ) . {\displaystyle \deg(P\circ Q)=\deg(P)\deg(Q).} For example, if P = x 3 + x {\displaystyle P=x^{3}+x} has degree 3 and Q = x 2 − 1 {\displaystyle Q=x^{2}-1} has degree 2, then their composition is P ∘ Q = P ∘ ( x 2 − 1 ) = ( x 2 − 1 ) 3 + ( x 2 − 1 ) = x 6 − 3 x 4 + 4 x 2 − 2 , {\displaystyle P\circ Q=P\circ (x^{2}-1)=(x^{2}-1)^{3}+(x^{2}-1)=x^{6}-3x^{4}+4x^{2}-2,} which has degree 6. Note that for polynomials over an arbitrary ring, the degree of the composition may be less than the product of the degrees. For example, in Z / 4 Z , {\displaystyle \mathbf {Z} /4\mathbf {Z} ,} the composition of the polynomials 2 x {\displaystyle 2x} and 1 + 2 x {\displaystyle 1+2x} (both of degree 1) is the constant polynomial 2 x ∘ ( 1 + 2 x ) = 2 + 4 x = 2 , {\displaystyle 2x\circ (1+2x)=2+4x=2,} of degree 0. == Degree of the zero polynomial == The degree of the zero polynomial is either left undefined, or is defined to be negative (usually −1 or − ∞ {\displaystyle -\infty } ). 
Like any constant value, the value 0 can be considered as a (constant) polynomial, called the zero polynomial. It has no nonzero terms, and so, strictly speaking, it has no degree either. As such, its degree is usually undefined. The propositions for the degree of sums and products of polynomials in the above section do not apply, if any of the polynomials involved is the zero polynomial. It is convenient, however, to define the degree of the zero polynomial to be negative infinity, − ∞ , {\displaystyle -\infty ,} and to introduce the arithmetic rules max ( a , − ∞ ) = a , {\displaystyle \max(a,-\infty )=a,} and a + ( − ∞ ) = − ∞ . {\displaystyle a+(-\infty )=-\infty .} These examples illustrate how this extension satisfies the behavior rules above: The degree of the sum ( x 3 + x ) + ( 0 ) = x 3 + x {\displaystyle (x^{3}+x)+(0)=x^{3}+x} is 3. This satisfies the expected behavior, which is that 3 ≤ max ( 3 , − ∞ ) {\displaystyle 3\leq \max(3,-\infty )} . The degree of the difference ( x ) − ( x ) = 0 {\displaystyle (x)-(x)=0} is − ∞ {\displaystyle -\infty } . This satisfies the expected behavior, which is that − ∞ ≤ max ( 1 , 1 ) {\displaystyle -\infty \leq \max(1,1)} . The degree of the product ( 0 ) ( x 2 + 1 ) = 0 {\displaystyle (0)(x^{2}+1)=0} is − ∞ {\displaystyle -\infty } . This satisfies the expected behavior, which is that − ∞ = − ∞ + 2 {\displaystyle -\infty =-\infty +2} . == Computed from the function values == A number of formulae exist which will evaluate the degree of a polynomial function f. One based on asymptotic analysis is deg ⁡ f = lim x → ∞ log ⁡ | f ( x ) | log ⁡ x {\displaystyle \deg f=\lim _{x\rightarrow \infty }{\frac {\log |f(x)|}{\log x}}} ; this is the exact counterpart of the method of estimating the slope in a log–log plot. This formula generalizes the concept of degree to some functions that are not polynomials. For example: The degree of the multiplicative inverse, 1 / x {\displaystyle \ 1/x} , is −1. The degree of the square root, x {\displaystyle {\sqrt {x}}} , is 1/2. The degree of the logarithm, log ⁡ x {\displaystyle \ \log x} , is 0. The degree of the exponential function, exp ⁡ x {\displaystyle \exp x} , is ∞ . {\displaystyle \infty .} The formula also gives sensible results for many combinations of such functions, e.g., the degree of 1 + x x {\displaystyle {\frac {1+{\sqrt {x}}}{x}}} is − 1 / 2 {\displaystyle -1/2} . Another formula to compute the degree of f from its values is deg ⁡ f = lim x → ∞ x f ′ ( x ) f ( x ) {\displaystyle \deg f=\lim _{x\to \infty }{\frac {xf'(x)}{f(x)}}} ; this second formula follows from applying L'Hôpital's rule to the first formula. Intuitively though, it is more about exhibiting the degree d as the extra constant factor in the derivative d x d − 1 {\displaystyle dx^{d-1}} of x d {\displaystyle x^{d}} . A more fine grained (than a simple numeric degree) description of the asymptotics of a function can be had by using big O notation. In the analysis of algorithms, it is for example often relevant to distinguish between the growth rates of x {\displaystyle x} and x log ⁡ x {\displaystyle x\log x} , which would both come out as having the same degree according to the above formulae. == Extension to polynomials with two or more variables == For polynomials in two or more variables, the degree of a term is the sum of the exponents of the variables in the term; the degree (sometimes called the total degree) of the polynomial is again the maximum of the degrees of all terms in the polynomial. 
For example, the polynomial x²y² + 3x³ + 4y has degree 4, the same degree as the term x²y². However, a polynomial in variables x and y is a polynomial in x with coefficients which are polynomials in y, and also a polynomial in y with coefficients which are polynomials in x. The polynomial x 2 y 2 + 3 x 3 + 4 y = ( 3 ) x 3 + ( y 2 ) x 2 + ( 4 y ) = ( x 2 ) y 2 + ( 4 ) y + ( 3 x 3 ) {\displaystyle x^{2}y^{2}+3x^{3}+4y=(3)x^{3}+(y^{2})x^{2}+(4y)=(x^{2})y^{2}+(4)y+(3x^{3})} has degree 3 in x and degree 2 in y. == Degree function in abstract algebra == Given a ring R, the polynomial ring R[x] is the set of all polynomials in x that have coefficients in R. In the special case that R is also a field, the polynomial ring R[x] is a principal ideal domain and, more importantly to our discussion here, a Euclidean domain. It can be shown that the degree of a polynomial over a field satisfies all of the requirements of the norm function in a Euclidean domain. That is, given two polynomials f(x) and g(x), the degree of the product f(x)g(x) must be at least as large as each of the degrees of f and g individually. In fact, something stronger holds: deg ⁡ ( f ( x ) g ( x ) ) = deg ⁡ ( f ( x ) ) + deg ⁡ ( g ( x ) ) {\displaystyle \deg(f(x)g(x))=\deg(f(x))+\deg(g(x))} For an example of why the degree function may fail over a ring that is not a field, take the following. Let R = Z / 4 Z {\displaystyle \mathbb {Z} /4\mathbb {Z} } , the ring of integers modulo 4. This ring is not a field (and is not even an integral domain) because 2 × 2 = 4 ≡ 0 (mod 4). Therefore, let f(x) = g(x) = 2x + 1. Then, f(x)g(x) = 4x² + 4x + 1 = 1. Thus deg(f⋅g) = 0, which is smaller than the degrees of f and g (each of which had degree 1). Since the norm function is not defined for the zero element of the ring, we consider the degree of the polynomial f(x) = 0 to also be undefined so that it follows the rules of a norm in a Euclidean domain. == See also == Abel–Ruffini theorem Fundamental theorem of algebra == Notes == == References ==
Wikipedia/Octic_equation
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles. The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators. Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies. In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. 
The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light. == History == === Ancient world === Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one another, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures). === 19th century === Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments: Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: opposite charges attract, like charges repel. Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire. A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement. In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. 
Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic field strength, the oersted, is named in honor of his contributions to the field of electromagnetism. His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's development of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy. This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies. Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor is it clear whether current flowed across the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community. An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated: A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning being "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars." == A fundamental force == The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range. All other forces, known as non-fundamental forces (e.g., friction, contact forces), are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction. Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena. 
Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects. The effective forces generated by the momentum of electrons' movement are a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves. == Classical electrodynamics == In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments were conducted on 10 May 1752 by Thomas-François Dalibard of France, who used a 40-foot-tall (12 m) iron rod instead of a kite and successfully extracted electrical sparks from a cloud. One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation. A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, culminating in the publication of James Clerk Maxwell's treatise, which unified previous developments into a single theory and proposed that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law. One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. 
After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.) In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and, conversely, a moving electric field transforms to a field with a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.) Today few problems in electromagnetism remain unsolved. These include: the lack of magnetic monopoles, the Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields. == Extension to nonlinear phenomena == The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics. == Quantities and units == Common units related to electromagnetism include the ampere (electric current), the coulomb (electric charge), the volt (electric potential), the ohm (electric resistance), the farad (capacitance), the henry (inductance), the tesla (magnetic flux density), and the weber (magnetic flux). In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system. Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units. == Applications == The study of electromagnetism informs the design of electric circuits, magnetic circuits, and semiconductor devices. == See also == == References == == Further reading == === Web sources === === Textbooks === === General coverage === == External links == Magnetic Field Strength Converter Electromagnetic Force – from Eric Weisstein's World of Physics
Wikipedia/Electromagnetic_theory
Gauge theory gravity (GTG) is a theory of gravitation cast in the mathematical language of geometric algebra. To those familiar with general relativity, it is highly reminiscent of the tetrad formalism although there are significant conceptual differences. Most notably, the background in GTG is flat, Minkowski spacetime. The equivalence principle is not assumed, but instead follows from the fact that the gauge covariant derivative is minimally coupled. As in general relativity, equations structurally identical to the Einstein field equations are derivable from a variational principle. A spin tensor can also be supported in a manner similar to Einstein–Cartan–Sciama–Kibble theory. GTG was first proposed by Lasenby, Doran, and Gull in 1998 as a fulfillment of partial results presented in 1993. The theory has not been widely adopted by the rest of the physics community, who have mostly opted for differential geometry approaches like that of the related gauge gravitation theory. == Mathematical foundation == The foundation of GTG comes from two principles. First, position-gauge invariance demands that arbitrary local displacements of fields not affect the physical content of the field equations. Second, rotation-gauge invariance demands that arbitrary local rotations of fields not affect the physical content of the field equations. These principles lead to the introduction of a new pair of linear functions, the position-gauge field and the rotation-gauge field. A displacement by some arbitrary function f x ↦ x ′ = f ( x ) {\displaystyle x\mapsto x'=f(x)} gives rise to the position-gauge field defined by the mapping on its adjoint, h ¯ ( a , x ) ↦ h ¯ ′ ( a , x ) = h ¯ ( f ¯ − 1 ( a ) , f ( x ) ) , {\displaystyle {\bar {\mathsf {h}}}(a,x)\mapsto {\bar {\mathsf {h}}}'(a,x)={\bar {\mathsf {h}}}({\bar {\mathsf {f}}}^{-1}(a),f(x)),} which is linear in its first argument and a is a constant vector. Similarly, a rotation by some arbitrary rotor R gives rise to the rotation-gauge field Ω ¯ ( a , x ) ↦ Ω ¯ ′ ( a , x ) = R Ω ¯ ( a , x ) R † − 2 a ⋅ ∇ R R † . {\displaystyle {\bar {\mathsf {\Omega }}}(a,x)\mapsto {\bar {\mathsf {\Omega }}}'(a,x)=R{\bar {\mathsf {\Omega }}}(a,x)R^{\dagger }-2a\cdot \nabla RR^{\dagger }.} We can define two different covariant directional derivatives a ⋅ D = a ⋅ h ¯ ( ∇ ) + 1 2 Ω ( h ( a ) ) {\displaystyle a\cdot D=a\cdot {\bar {\mathsf {h}}}(\nabla )+{\tfrac {1}{2}}{\mathsf {\Omega }}({\mathsf {h}}(a))} a ⋅ D = a ⋅ h ¯ ( ∇ ) + Ω ( h ( a ) ) × {\displaystyle a\cdot {\mathcal {D}}=a\cdot {\bar {\mathsf {h}}}(\nabla )+{\mathsf {\Omega }}({\mathsf {h}}(a))\times } or with the specification of a coordinate system D μ = ∂ μ + 1 2 Ω μ {\displaystyle D_{\mu }=\partial _{\mu }+{\tfrac {1}{2}}\Omega _{\mu }} D μ = ∂ μ + Ω μ × , {\displaystyle {\mathcal {D}}_{\mu }=\partial _{\mu }+\Omega _{\mu }\times ,} where × denotes the commutator product. The first of these derivatives is better suited for dealing directly with spinors whereas the second is better suited for observables. The GTG analog of the Riemann tensor is built from the commutation rules of these derivatives. [ D μ , D ν ] ψ = 1 2 R μ ν ψ {\displaystyle [D_{\mu },D_{\nu }]\psi ={\tfrac {1}{2}}{\mathsf {R}}_{\mu \nu }\psi } R ( a ∧ b ) = R ( h ( a ∧ b ) ) {\displaystyle {\mathcal {R}}(a\wedge b)={\mathsf {R}}({\mathsf {h}}(a\wedge b))} == Field equations == The field equations are derived by postulating the Einstein–Hilbert action governs the evolution of the gauge fields, i.e. 
S = ∫ [ 1 2 κ ( R − 2 Λ ) + L M ] ( det h ) − 1 d 4 x . {\displaystyle S=\int \left[{1 \over 2\kappa }\left({\mathcal {R}}-2\Lambda \right)+{\mathcal {L}}_{\mathrm {M} }\right](\det {\mathsf {h}})^{-1}\,\mathrm {d} ^{4}x.} Setting the variation of the action with respect to the two gauge fields to zero results in the field equations G ( a ) − Λ a = κ T ( a ) {\displaystyle {\mathcal {G}}(a)-\Lambda a=\kappa {\mathcal {T}}(a)} D ∧ h ¯ ( a ) = κ S ⋅ h ¯ ( a ) , {\displaystyle {\mathcal {D}}\wedge {\bar {\mathsf {h}}}(a)=\kappa {\mathcal {S}}\cdot {\bar {\mathsf {h}}}(a),} where T {\displaystyle {\mathcal {T}}} is the covariant energy–momentum tensor and S {\displaystyle {\mathcal {S}}} is the covariant spin tensor. Importantly, these equations do not give an evolving curvature of spacetime but rather merely give the evolution of the gauge fields within flat spacetime. == Relation to general relativity == For those more familiar with general relativity, it is possible to define a metric tensor from the position-gauge field in a manner similar to tetrads. In the tetrad formalism, a set of four vectors { e ( a ) μ } {\displaystyle \{{e_{(a)}}^{\mu }\}} is introduced. The Greek index μ is raised or lowered by multiplying and contracting with the spacetime's metric tensor. The parenthetical Latin index (a) is a label for each of the four tetrads, which is raised and lowered as if it were multiplied and contracted with a separate Minkowski metric tensor. GTG, roughly, reverses the roles of these indices. The metric is implicitly assumed to be Minkowski in the selection of the spacetime algebra. The information contained in the other set of indices gets subsumed by the behavior of the gauge fields. We can make the associations g μ = h − 1 ( e μ ) {\displaystyle g_{\mu }={\mathsf {h}}^{-1}(e_{\mu })} g μ = h ¯ ( e μ ) {\displaystyle g^{\mu }={\bar {\mathsf {h}}}(e^{\mu })} for a covariant vector and contravariant vector in a curved spacetime, where now the unit vectors { e μ } {\displaystyle \{e_{\mu }\}} are the chosen coordinate basis. These can define the metric using the rule g μ ν = g μ ⋅ g ν . {\displaystyle g_{\mu \nu }=g_{\mu }\cdot g_{\nu }.} Following this procedure, it is possible to show that for the most part the observable predictions of GTG agree with Einstein–Cartan–Sciama–Kibble theory for non-vanishing spin and reduce to general relativity for vanishing spin. GTG does, however, make different predictions about global solutions. For example, in the study of a point mass, the choice of a "Newtonian gauge" yields a solution similar to the Schwarzschild metric in Gullstrand–Painlevé coordinates. General relativity permits an extension known as the Kruskal–Szekeres coordinates. GTG, on the other hand, forbids any such extension. == References == == External links == David Hestenes: Spacetime calculus for gravitation theory – an account of the mathematical formalism explicitly directed to GTG
Wikipedia/Gauge_theory_gravity
Geometric algebra is an extension of vector algebra, providing additional algebraic structures on vector spaces, with geometric interpretations. Whereas the cross product of vector algebra is specific to 3 dimensions, geometric algebra works in all dimensions and signatures, notably 3+1 spacetime as well as 2 dimensions. == Basic concepts and operations == Geometric algebra (GA) is an extension or completion of vector algebra (VA). The reader is herein assumed to be familiar with the basic concepts and operations of VA, and this article will mainly concern itself with operations in G 3 {\displaystyle {\mathcal {G}}_{3}} , the GA of 3D space (this article is not intended to be mathematically rigorous). In GA, vectors are not normally written boldface as the meaning is usually clear from the context. The fundamental difference is that GA provides a new product of vectors called the "geometric product". Elements of GA are graded multivectors: scalars are grade 0, usual vectors are grade 1, bivectors are grade 2 and the highest grade (3 in the 3D case) is traditionally called the pseudoscalar and designated I {\displaystyle I} . The ungeneralized 3D vector form of the geometric product is: a b = a ⋅ b + a ∧ b {\displaystyle ab=a\cdot b+a\wedge b} that is, the sum of the usual dot (inner) product and the outer (exterior) product (this last is closely related to the cross product and will be explained below). In VA, entities such as pseudovectors and pseudoscalars need to be bolted on, whereas in GA the equivalent bivector and pseudovector respectively exist naturally as subspaces of the algebra. For example, applying vector calculus in 2 dimensions, such as to compute torque or curl, requires adding an artificial 3rd dimension and extending the vector field to be constant in that dimension, or alternatively considering these to be scalars. The torque or curl is then a normal vector field in this 3rd dimension. By contrast, geometric algebra in 2 dimensions defines these as a pseudoscalar field (a bivector), without requiring a 3rd dimension. Similarly, the scalar triple product is ad hoc, and can instead be expressed uniformly using the exterior product and the geometric product. == Translations between formalisms == Here are some comparisons between standard R 3 {\displaystyle {\mathbb {R} }^{3}} vector relations and their corresponding exterior product and geometric product equivalents. All the exterior and geometric product equivalents here are good for more than three dimensions, and some also for two. In two dimensions the cross product is undefined even if what it describes (like torque) is perfectly well defined in a plane without introducing an arbitrary normal vector outside of the space. Many of these relationships only require the introduction of the exterior product to generalize, but since that may not be familiar to somebody with only a background in vector algebra and calculus, some examples are given. === Cross and exterior products === u × v {\displaystyle \mathbf {u} \times \mathbf {v} } is perpendicular to the plane containing u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } . u ∧ v {\displaystyle \mathbf {u} \wedge \mathbf {v} } is an oriented representation of the same plane. 
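The basic products are easy to experiment with on a computer. A minimal sketch follows, assuming the third-party Python package clifford and its Cl constructor (an assumption for illustration, not part of the article):

```python
from clifford import Cl

# Build G_3, the geometric algebra of Euclidean 3-space.
layout, blades = Cl(3)
e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

u = 2*e1 + e2      # the vector (2, 1, 0)
v = e2 + 3*e3      # the vector (0, 1, 3)

# The geometric product of two vectors is the sum of their dot (inner)
# product, a grade-0 scalar, and their outer product, a grade-2 bivector.
print(u * v)                       # scalar part 1, bivector part 2*e12 + 6*e13 + 3*e23
print((u | v) + (u ^ v) == u * v)  # True
```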
We have the pseudoscalar I = e 1 e 2 e 3 {\displaystyle I=e_{1}e_{2}e_{3}} (right handed orthonormal frame) and so e 1 I = I e 1 = e 2 e 3 {\displaystyle e_{1}I=Ie_{1}=e_{2}e_{3}} gives a bivector and I ( e 2 ∧ e 3 ) = I e 2 e 3 = − e 1 {\displaystyle I(e_{2}\wedge e_{3})=Ie_{2}e_{3}=-e_{1}} gives a vector perpendicular to the e 2 ∧ e 3 {\displaystyle e_{2}\wedge e_{3}} plane. This yields a convenient definition for the cross product of traditional vector algebra: u × v = − I ( u ∧ v ) {\displaystyle {u}\times {v}=-I({u}\wedge {v})} (this is antisymmetric). Also relevant is the distinction between polar and axial vectors in vector algebra, which appears naturally in geometric algebra as the distinction between vectors and bivectors (elements of grade two). The I {\displaystyle I} here is a unit pseudoscalar of Euclidean 3-space, which establishes a duality between the vectors and the bivectors, and is so named because of the expected property I 2 = ( e 1 e 2 e 3 ) 2 = e 1 e 2 e 3 e 1 e 2 e 3 = − e 1 e 2 e 1 e 3 e 2 e 3 = e 1 e 1 e 2 e 3 e 2 e 3 = − e 3 e 2 e 2 e 3 = − 1 {\displaystyle I^{2}=(e_{1}e_{2}e_{3})^{2}=e_{1}e_{2}e_{3}e_{1}e_{2}e_{3}=-e_{1}e_{2}e_{1}e_{3}e_{2}e_{3}=e_{1}e_{1}e_{2}e_{3}e_{2}e_{3}=-e_{3}e_{2}e_{2}e_{3}=-1} The equivalence of the R 3 {\displaystyle \mathbb {R} ^{3}} cross product and the exterior product expression above can be confirmed by direct multiplication of − I = − e 1 e 2 e 3 {\displaystyle -I=-{e_{1}}{e_{2}}{e_{3}}} with a determinant expansion of the exterior product u ∧ v = ∑ 1 ≤ i < j ≤ 3 ( u i v j − v i u j ) e i ∧ e j = ∑ 1 ≤ i < j ≤ 3 ( u i v j − v i u j ) e i e j {\displaystyle u\wedge v=\sum _{1\leq i<j\leq 3}(u_{i}v_{j}-v_{i}u_{j}){e_{i}}\wedge {e_{j}}=\sum _{1\leq i<j\leq 3}(u_{i}v_{j}-v_{i}u_{j}){e_{i}}{e_{j}}} See also Cross product as an exterior product. Essentially, the geometric product of a bivector and the pseudoscalar of Euclidean 3-space provides a method of calculation of the Hodge dual. === Cross and commutator products === The pseudovectors/bivectors of the geometric algebra of Euclidean 3-dimensional space form a 3-dimensional vector space themselves. Let the standard unit pseudovectors/bivectors of the subalgebra be i = e 2 e 3 {\displaystyle \mathbf {i} =\mathbf {e_{2}} \mathbf {e_{3}} } , j = e 1 e 3 {\displaystyle \mathbf {j} =\mathbf {e_{1}} \mathbf {e_{3}} } , and k = e 1 e 2 {\displaystyle \mathbf {k} =\mathbf {e_{1}} \mathbf {e_{2}} } , and the anti-commutative commutator product be defined as A × B = 1 2 ( A B − B A ) {\displaystyle A\times B={\tfrac {1}{2}}(AB-BA)} , where A B {\displaystyle AB} is the geometric product. The commutator product is distributive over addition and linear, as the geometric product is distributive over addition and linear.
From the definition of the commutator product, i {\displaystyle \mathbf {i} } , j {\displaystyle \mathbf {j} } and k {\displaystyle \mathbf {k} } satisfy the following equalities: i × j = 1 2 ( i j − j i ) = 1 2 ( ( e 2 e 3 e 1 e 3 − e 1 e 3 e 2 e 3 ) = 1 2 ( − e 2 e 3 e 3 e 1 + e 1 e 3 e 3 e 2 ) = 1 2 ( − e 2 e 1 + e 1 e 2 ) = 1 2 ( e 1 e 2 + e 1 e 2 ) = e 1 e 2 = k {\displaystyle \mathbf {i} \times \mathbf {j} ={\tfrac {1}{2}}(\mathbf {i} \mathbf {j} -\mathbf {j} \mathbf {i} )={\tfrac {1}{2}}((\mathbf {e_{2}} \mathbf {e_{3}} \mathbf {e_{1}} \mathbf {e_{3}} -\mathbf {e_{1}} \mathbf {e_{3}} \mathbf {e_{2}} \mathbf {e_{3}} )={\tfrac {1}{2}}(-\mathbf {e_{2}} \mathbf {e_{3}} \mathbf {e_{3}} \mathbf {e_{1}} +\mathbf {e_{1}} \mathbf {e_{3}} \mathbf {e_{3}} \mathbf {e_{2}} )={\tfrac {1}{2}}(-\mathbf {e_{2}} \mathbf {e_{1}} +\mathbf {e_{1}} \mathbf {e_{2}} )={\tfrac {1}{2}}(\mathbf {e_{1}} \mathbf {e_{2}} +\mathbf {e_{1}} \mathbf {e_{2}} )=\mathbf {e_{1}} \mathbf {e_{2}} =\mathbf {k} } j × k = 1 2 ( j k − k j ) = 1 2 ( ( e 1 e 3 e 1 e 2 − e 1 e 2 e 1 e 3 ) = 1 2 ( − e 3 e 1 e 1 e 2 + e 2 e 1 e 1 e 3 ) = 1 2 ( − e 3 e 2 + e 2 e 3 ) = 1 2 ( e 2 e 3 + e 2 e 3 ) = e 2 e 3 = i {\displaystyle \mathbf {j} \times \mathbf {k} ={\tfrac {1}{2}}(\mathbf {j} \mathbf {k} -\mathbf {k} \mathbf {j} )={\tfrac {1}{2}}((\mathbf {e_{1}} \mathbf {e_{3}} \mathbf {e_{1}} \mathbf {e_{2}} -\mathbf {e_{1}} \mathbf {e_{2}} \mathbf {e_{1}} \mathbf {e_{3}} )={\tfrac {1}{2}}(-\mathbf {e_{3}} \mathbf {e_{1}} \mathbf {e_{1}} \mathbf {e_{2}} +\mathbf {e_{2}} \mathbf {e_{1}} \mathbf {e_{1}} \mathbf {e_{3}} )={\tfrac {1}{2}}(-\mathbf {e_{3}} \mathbf {e_{2}} +\mathbf {e_{2}} \mathbf {e_{3}} )={\tfrac {1}{2}}(\mathbf {e_{2}} \mathbf {e_{3}} +\mathbf {e_{2}} \mathbf {e_{3}} )=\mathbf {e_{2}} \mathbf {e_{3}} =\mathbf {i} } k × i = 1 2 ( k i − i k ) = 1 2 ( e 1 e 2 e 2 e 3 − e 2 e 3 e 1 e 2 ) = 1 2 ( e 1 e 2 e 2 e 3 − e 3 e 2 e 2 e 1 ) = 1 2 ( e 1 e 3 − e 3 e 1 ) = 1 2 ( e 1 e 3 + e 1 e 3 ) = e 1 e 3 = j {\displaystyle \mathbf {k} \times \mathbf {i} ={\tfrac {1}{2}}(\mathbf {k} \mathbf {i} -\mathbf {i} \mathbf {k} )={\tfrac {1}{2}}(\mathbf {e_{1}} \mathbf {e_{2}} \mathbf {e_{2}} \mathbf {e_{3}} -\mathbf {e_{2}} \mathbf {e_{3}} \mathbf {e_{1}} \mathbf {e_{2}} )={\tfrac {1}{2}}(\mathbf {e_{1}} \mathbf {e_{2}} \mathbf {e_{2}} \mathbf {e_{3}} -\mathbf {e_{3}} \mathbf {e_{2}} \mathbf {e_{2}} \mathbf {e_{1}} )={\tfrac {1}{2}}(\mathbf {e_{1}} \mathbf {e_{3}} -\mathbf {e_{3}} \mathbf {e_{1}} )={\tfrac {1}{2}}(\mathbf {e_{1}} \mathbf {e_{3}} +\mathbf {e_{1}} \mathbf {e_{3}} )=\mathbf {e_{1}} \mathbf {e_{3}} =\mathbf {j} } which imply, by the anti-commutativity of the commutator product, that j × i = − k {\displaystyle \mathbf {j} \times \mathbf {i} =-\mathbf {k} } k × j = − i {\displaystyle \mathbf {k} \times \mathbf {j} =-\mathbf {i} } i × k = − j {\displaystyle \mathbf {i} \times \mathbf {k} =-\mathbf {j} } The anti-commutativity of the commutator product also implies that i × i = j × j = k × k = 0 {\displaystyle \mathbf {i} \times \mathbf {i} =\mathbf {j} \times \mathbf {j} =\mathbf {k} \times \mathbf {k} =0} These equalities and properties are sufficient to determine the commutator product of any two pseudovectors/bivectors A {\displaystyle \mathbf {A} } and B {\displaystyle \mathbf {B} } . 
As the pseudovectors/bivectors form a vector space, each pseudovector/bivector can be defined as the sum of three orthogonal components parallel to the standard basis pseudovectors/bivectors: A = ( A 1 i + A 2 j + A 3 k ) {\displaystyle \mathbf {A} =(A_{1}\mathbf {i} +A_{2}\mathbf {j} +A_{3}\mathbf {k} )} B = ( B 1 i + B 2 j + B 3 k ) {\displaystyle \mathbf {B} =(B_{1}\mathbf {i} +B_{2}\mathbf {j} +B_{3}\mathbf {k} )} Their commutator product A × B {\displaystyle \mathbf {A} \times \mathbf {B} } can be expanded using its distributive property: A × B = ( A 1 i + A 2 j + A 3 k ) × ( B 1 i + B 2 j + B 3 k ) = A 1 B 1 i × i + A 1 B 2 i × j + A 1 B 3 i × k + A 2 B 1 j × i + A 2 B 2 j × j + A 2 B 3 j × k + A 3 B 1 k × i + A 3 B 2 k × j + A 3 B 3 k × k = A 1 B 2 k − A 1 B 3 j − A 2 B 1 k + A 2 B 3 i + A 3 B 1 j − A 3 B 2 i = ( A 2 B 3 − A 3 B 2 ) i + ( A 3 B 1 − A 1 B 3 ) j + ( A 1 B 2 − A 2 B 1 ) k {\displaystyle {\begin{aligned}\mathbf {A} \times \mathbf {B} &=(A_{1}\mathbf {i} +A_{2}\mathbf {j} +A_{3}\mathbf {k} )\times (B_{1}\mathbf {i} +B_{2}\mathbf {j} +B_{3}\mathbf {k} )\\&=A_{1}B_{1}\mathbf {i} \times \mathbf {i} +A_{1}B_{2}\mathbf {i} \times \mathbf {j} +A_{1}B_{3}\mathbf {i} \times \mathbf {k} +A_{2}B_{1}\mathbf {j} \times \mathbf {i} +A_{2}B_{2}\mathbf {j} \times \mathbf {j} +A_{2}B_{3}\mathbf {j} \times \mathbf {k} +A_{3}B_{1}\mathbf {k} \times \mathbf {i} +A_{3}B_{2}\mathbf {k} \times \mathbf {j} +A_{3}B_{3}\mathbf {k} \times \mathbf {k} \\&=A_{1}B_{2}\mathbf {k} -A_{1}B_{3}\mathbf {j} -A_{2}B_{1}\mathbf {k} +A_{2}B_{3}\mathbf {i} +A_{3}B_{1}\mathbf {j} -A_{3}B_{2}\mathbf {i} =(A_{2}B_{3}-A_{3}B_{2})\mathbf {i} +(A_{3}B_{1}-A_{1}B_{3})\mathbf {j} +(A_{1}B_{2}-A_{2}B_{1})\mathbf {k} \end{aligned}}} which is precisely the cross product in vector algebra for pseudovectors. 
=== Norm of a vector === Ordinarily, ‖ u ‖ 2 = u ⋅ u {\displaystyle {\Vert \mathbf {u} \Vert }^{2}=\mathbf {u} \cdot \mathbf {u} } Making use of the geometric product and the fact that the exterior product of a vector with itself is zero: u u = ‖ u ‖ 2 = u 2 = u ⋅ u + u ∧ u = u ⋅ u {\displaystyle \mathbf {u} \,\mathbf {u} ={\Vert \mathbf {u} \Vert }^{2}={\mathbf {u} }^{2}=\mathbf {u} \cdot \mathbf {u} +\mathbf {u} \wedge \mathbf {u} =\mathbf {u} \cdot \mathbf {u} } === Lagrange identity === In three dimensions the product of two vector lengths can be expressed in terms of the dot and cross products ‖ u ‖ 2 ‖ v ‖ 2 = ( u ⋅ v ) 2 + ‖ u × v ‖ 2 {\displaystyle {\Vert \mathbf {u} \Vert }^{2}{\Vert \mathbf {v} \Vert }^{2}=({\mathbf {u} \cdot \mathbf {v} })^{2}+{\Vert \mathbf {u} \times \mathbf {v} \Vert }^{2}} The corresponding generalization expressed using the geometric product is ‖ u ‖ 2 ‖ v ‖ 2 = ( u ⋅ v ) 2 − ( u ∧ v ) 2 {\displaystyle {\Vert \mathbf {u} \Vert }^{2}{\Vert \mathbf {v} \Vert }^{2}=({\mathbf {u} \cdot \mathbf {v} })^{2}-(\mathbf {u} \wedge \mathbf {v} )^{2}} This follows from expanding the geometric product of a pair of vectors with its reverse ( u v ) ( v u ) = ( u ⋅ v + u ∧ v ) ( u ⋅ v − u ∧ v ) {\displaystyle (\mathbf {u} \mathbf {v} )(\mathbf {v} \mathbf {u} )=({\mathbf {u} \cdot \mathbf {v} }+{\mathbf {u} \wedge \mathbf {v} })({\mathbf {u} \cdot \mathbf {v} }-{\mathbf {u} \wedge \mathbf {v} })} === Determinant expansion of cross and wedge products === u × v = ∑ i < j | u i u j v i v j | e i × e j {\displaystyle \mathbf {u} \times \mathbf {v} =\sum _{i<j}{{\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}{\mathbf {e} }_{i}\times {\mathbf {e} }_{j}}} u ∧ v = ∑ i < j | u i u j v i v j | e i ∧ e j {\displaystyle \mathbf {u} \wedge \mathbf {v} =\sum _{i<j}{{\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}{\mathbf {e} }_{i}\wedge {\mathbf {e} }_{j}}} Linear algebra texts will often use the determinant for the solution of linear systems by Cramer's rule and for matrix inversion. An alternative treatment is to axiomatically introduce the wedge product, and then demonstrate that this can be used directly to solve linear systems. This is shown below, and does not require sophisticated math skills to understand. It is then possible to define determinants as nothing more than the coefficients of the wedge product in expansions over "unit k-vectors" ( e i ∧ e j {\displaystyle {\mathbf {e} }_{i}\wedge {\mathbf {e} }_{j}} terms), as above. A one-by-one determinant is the coefficient of e 1 {\displaystyle \mathbf {e} _{1}} for an R 1 {\displaystyle \mathbb {R} ^{1}} 1-vector. A two-by-two determinant is the coefficient of e 1 ∧ e 2 {\displaystyle \mathbf {e} _{1}\wedge \mathbf {e} _{2}} for an R 2 {\displaystyle \mathbb {R} ^{2}} bivector. A three-by-three determinant is the coefficient of e 1 ∧ e 2 ∧ e 3 {\displaystyle \mathbf {e} _{1}\wedge \mathbf {e} _{2}\wedge \mathbf {e} _{3}} for an R 3 {\displaystyle \mathbb {R} ^{3}} trivector. ... When linear system solution is introduced via the wedge product, Cramer's rule follows as a side-effect, and there is no need to lead up to the end results with definitions of minors, matrices, matrix invertibility, adjoints, cofactors, Laplace expansions, theorems on determinant multiplication and row and column exchanges, and so forth. === Matrix related === Matrix inversion (Cramer's rule) and determinants can be naturally expressed in terms of the wedge product.
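To make this concrete before the derivations that follow, here is a minimal Python sketch (function names are illustrative; standard library only) that solves x a + y b = c by wedging away one unknown at a time, exactly as developed in the next paragraphs; the 2×2 minors computed by wedge2 are the determinant coefficients just described.

```python
def wedge2(u, v):
    """Coefficients of u ^ v on the basis e_i ^ e_j (i < j): the 2x2 minors."""
    n = len(u)
    return {(i, j): u[i] * v[j] - u[j] * v[i]
            for i in range(n) for j in range(i + 1, n)}

def solve2(a, b, c):
    """Solve x*a + y*b = c for column vectors a, b, c (possibly overdetermined),
    assuming an exact solution exists and a ^ b != 0."""
    ab, cb, ac = wedge2(a, b), wedge2(c, b), wedge2(a, c)
    # Pick any nonzero component of a ^ b; for a consistent system all
    # componentwise ratios agree, so one component suffices.
    key = next(k for k, val in ab.items() if val != 0)
    return cb[key] / ab[key], ac[key] / ab[key]

# x + y = 3 and 2x - y = 3, written columnwise as x*(1,2) + y*(1,-1) = (3,3):
print(solve2([1, 2], [1, -1], [3, 3]))        # (2.0, 1.0)
# The overdetermined three-equation example worked later in this section:
print(solve2([1, 1, 0], [1, 1, 1], [1, 1, 2]))  # (-1.0, 2.0)
```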
The wedge product provides a direct way to solve linear equations, and is also useful in various geometric product calculations. Traditionally, instead of using the wedge product, Cramer's rule is presented as a generic algorithm that can be used to solve linear equations of the form A x = b {\displaystyle Ax=b} (or equivalently to invert a matrix). Namely, x = 1 | A | adj ⁡ ( A ) b . {\displaystyle x={\frac {1}{|A|}}\operatorname {adj} (A)b.} This is a useful theoretical result. For numerical problems row reduction with pivots and other methods are more stable and efficient. When the wedge product is coupled with the Clifford product and put into a natural geometric context, the fact that determinants are used in the expression of parallelogram areas and parallelepiped volumes in R N {\displaystyle {\mathbb {R} }^{N}} (and higher-dimensional generalizations thereof) also comes as a nice side-effect. As is also shown below, results such as Cramer's rule also follow directly from the wedge product's selection of non-identical elements. The result is then simple enough that it could be derived easily if required instead of having to remember or look up a rule. Two variables example [ a b ] [ x y ] = a x + b y = c . {\displaystyle {\begin{bmatrix}a&b\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=ax+by=c.} Taking the wedge product with b {\displaystyle b} on the right and with a {\displaystyle a} on the left, ( a x + b y ) ∧ b = ( a ∧ b ) x = c ∧ b {\displaystyle (ax+by)\wedge b=(a\wedge b)x=c\wedge b} a ∧ ( a x + b y ) = ( a ∧ b ) y = a ∧ c {\displaystyle a\wedge (ax+by)=(a\wedge b)y=a\wedge c} Provided a ∧ b ≠ 0 {\displaystyle a\wedge b\neq 0} the solution is [ x y ] = 1 a ∧ b [ c ∧ b a ∧ c ] . {\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}={\frac {1}{a\wedge b}}{\begin{bmatrix}c\wedge b\\a\wedge c\end{bmatrix}}.} For a , b ∈ R 2 {\displaystyle a,b\in {\mathbb {R} }^{2}} , this is Cramer's rule since the e 1 ∧ e 2 {\displaystyle {e}_{1}\wedge {e}_{2}} factors of the wedge products u ∧ v = | u 1 u 2 v 1 v 2 | e 1 ∧ e 2 {\displaystyle u\wedge v={\begin{vmatrix}u_{1}&u_{2}\\v_{1}&v_{2}\end{vmatrix}}{e}_{1}\wedge {e}_{2}} divide out. Similarly, for three or N variables, the same ideas hold [ a b c ] [ x y z ] = d {\displaystyle {\begin{bmatrix}a&b&c\end{bmatrix}}{\begin{bmatrix}x\\y\\z\end{bmatrix}}=d} [ x y z ] = 1 a ∧ b ∧ c [ d ∧ b ∧ c a ∧ d ∧ c a ∧ b ∧ d ] {\displaystyle {\begin{bmatrix}x\\y\\z\end{bmatrix}}={\frac {1}{a\wedge b\wedge c}}{\begin{bmatrix}d\wedge b\wedge c\\a\wedge d\wedge c\\a\wedge b\wedge d\end{bmatrix}}} Again, for the three variable three equation case this is Cramer's rule since the e 1 ∧ e 2 ∧ e 3 {\displaystyle {e}_{1}\wedge {e}_{2}\wedge {e}_{3}} factors of all the wedge products divide out, leaving the familiar determinants. A numeric example with three equations and two unknowns: in case there are more equations than variables, and the equations have a solution, each of the k-vector quotients will be a scalar. To illustrate, here is the solution of a simple example with three equations and two unknowns.
[ 1 1 0 ] x + [ 1 1 1 ] y = [ 1 1 2 ] {\displaystyle {\begin{bmatrix}1\\1\\0\end{bmatrix}}x+{\begin{bmatrix}1\\1\\1\end{bmatrix}}y={\begin{bmatrix}1\\1\\2\end{bmatrix}}} The right wedge product with ( 1 , 1 , 1 ) {\displaystyle (1,1,1)} solves for x {\displaystyle x} [ 1 1 0 ] ∧ [ 1 1 1 ] x = [ 1 1 2 ] ∧ [ 1 1 1 ] {\displaystyle {\begin{bmatrix}1\\1\\0\end{bmatrix}}\wedge {\begin{bmatrix}1\\1\\1\end{bmatrix}}x={\begin{bmatrix}1\\1\\2\end{bmatrix}}\wedge {\begin{bmatrix}1\\1\\1\end{bmatrix}}} and a left wedge product with ( 1 , 1 , 0 ) {\displaystyle (1,1,0)} solves for y {\displaystyle y} [ 1 1 0 ] ∧ [ 1 1 1 ] y = [ 1 1 0 ] ∧ [ 1 1 2 ] . {\displaystyle {\begin{bmatrix}1\\1\\0\end{bmatrix}}\wedge {\begin{bmatrix}1\\1\\1\end{bmatrix}}y={\begin{bmatrix}1\\1\\0\end{bmatrix}}\wedge {\begin{bmatrix}1\\1\\2\end{bmatrix}}.} Observe that both of these equations have the same wedge-product factor on the left, so it need be computed only once (if this factor were zero it would indicate that the system of equations has no solution). Collection of results for x {\displaystyle x} and y {\displaystyle y} yields a Cramer's rule-like form: [ x y ] = 1 ( 1 , 1 , 0 ) ∧ ( 1 , 1 , 1 ) [ ( 1 , 1 , 2 ) ∧ ( 1 , 1 , 1 ) ( 1 , 1 , 0 ) ∧ ( 1 , 1 , 2 ) ] . {\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}={\frac {1}{(1,1,0)\wedge (1,1,1)}}{\begin{bmatrix}(1,1,2)\wedge (1,1,1)\\(1,1,0)\wedge (1,1,2)\end{bmatrix}}.} Writing e i ∧ e j = e i j {\displaystyle {e}_{i}\wedge {e}_{j}={e}_{ij}} , we have the result: [ x y ] = 1 e 13 + e 23 [ − e 13 − e 23 2 e 13 + 2 e 23 ] = [ − 1 2 ] . {\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}={\frac {1}{{e}_{13}+{e}_{23}}}{\begin{bmatrix}{-{e}_{13}-{e}_{23}}\\{2{e}_{13}+2{e}_{23}}\\\end{bmatrix}}={\begin{bmatrix}-1\\2\end{bmatrix}}.} === Equation of a plane === For the plane of all points r {\displaystyle {\mathbf {r} }} passing through three independent points r 0 {\displaystyle {\mathbf {r} }_{0}} , r 1 {\displaystyle {\mathbf {r} }_{1}} , and r 2 {\displaystyle {\mathbf {r} }_{2}} , the normal form of the equation is ( ( r 2 − r 0 ) × ( r 1 − r 0 ) ) ⋅ ( r − r 0 ) = 0. {\displaystyle (({\mathbf {r} }_{2}-{\mathbf {r} }_{0})\times ({\mathbf {r} }_{1}-{\mathbf {r} }_{0}))\cdot ({\mathbf {r} }-{\mathbf {r} }_{0})=0.} The equivalent wedge product equation is ( r 2 − r 0 ) ∧ ( r 1 − r 0 ) ∧ ( r − r 0 ) = 0. {\displaystyle ({\mathbf {r} }_{2}-{\mathbf {r} }_{0})\wedge ({\mathbf {r} }_{1}-{\mathbf {r} }_{0})\wedge ({\mathbf {r} }-{\mathbf {r} }_{0})=0.} === Projection and rejection === Using the Gram–Schmidt process, a single vector can be decomposed into two components with respect to a reference vector, namely the projection onto a unit vector in a reference direction, and the difference between the vector and that projection.
With u ^ = u / ‖ u ‖ {\displaystyle {\hat {u}}=u/{\Vert u\Vert }} , the projection of v {\displaystyle v} onto u ^ {\displaystyle {\hat {u}}} is P r o j u ^ v = u ^ ( u ^ ⋅ v ) {\displaystyle \mathrm {Proj} _{\hat {u}}\,{v}={\hat {u}}({\hat {u}}\cdot v)} Orthogonal to that vector is the difference, designated the rejection, v − u ^ ( u ^ ⋅ v ) = 1 ‖ u ‖ 2 ( ‖ u ‖ 2 v − u ( u ⋅ v ) ) {\displaystyle v-{\hat {u}}({\hat {u}}\cdot v)={\frac {1}{{\Vert u\Vert }^{2}}}({\Vert u\Vert }^{2}v-u(u\cdot v))} The rejection can be expressed as a single geometric algebraic product in a few different ways u u 2 ( u v − u ⋅ v ) = 1 u ( u ∧ v ) = u ^ ( u ^ ∧ v ) = ( v ∧ u ^ ) u ^ {\displaystyle {\frac {u}{{u}^{2}}}(uv-u\cdot v)={\frac {1}{u}}(u\wedge v)={\hat {u}}({\hat {u}}\wedge v)=(v\wedge {\hat {u}}){\hat {u}}} The similarity in form between the projection and the rejection is notable. The sum of these recovers the original vector v = u ^ ( u ^ ⋅ v ) + u ^ ( u ^ ∧ v ) {\displaystyle v={\hat {u}}({\hat {u}}\cdot v)+{\hat {u}}({\hat {u}}\wedge v)} Here the projection is in its customary vector form. An alternate formulation is possible that puts the projection in a form that differs from the usual vector formulation v = 1 u ( u ⋅ v ) + 1 u ( u ∧ v ) = ( v ⋅ u ) 1 u + ( v ∧ u ) 1 u {\displaystyle v={\frac {1}{u}}({u}\cdot v)+{\frac {1}{u}}({u}\wedge v)=({v}\cdot u){\frac {1}{u}}+(v\wedge u){\frac {1}{u}}} Working backwards, it can be observed that this orthogonal decomposition can in fact be obtained more directly from the definition of the geometric product itself. v = u ^ u ^ v = u ^ ( u ^ ⋅ v + u ^ ∧ v ) {\displaystyle v={\hat {u}}{\hat {u}}v={\hat {u}}({\hat {u}}\cdot v+{\hat {u}}\wedge v)} With this approach, the original geometrical consideration is not necessarily obvious, but it is a much quicker way to get at the same algebraic result. However, the hint that one can work backwards, coupled with the knowledge that the wedge product can be used to solve sets of linear equations (as shown above), suggests that the problem of orthogonal decomposition can be posed directly: let v = a u + x {\displaystyle v=au+x} , where u ⋅ x = 0 {\displaystyle u\cdot x=0} . To discard the portions of v {\displaystyle v} that are collinear with u {\displaystyle u} , take the exterior product u ∧ v = u ∧ ( a u + x ) = u ∧ x {\displaystyle u\wedge v=u\wedge (au+x)=u\wedge x} Here the geometric product can be employed u ∧ v = u ∧ x = u x − u ⋅ x = u x {\displaystyle u\wedge v=u\wedge x=ux-u\cdot x=ux} Because the geometric product is invertible, this can be solved for x: x = 1 u ( u ∧ v ) . {\displaystyle x={\frac {1}{u}}(u\wedge v).} The same techniques can be applied to similar problems, such as calculation of the component of a vector in a plane and perpendicular to the plane. For three dimensions the projective and rejective components of a vector with respect to an arbitrary unit vector can be expressed in terms of the dot and cross product v = ( v ⋅ u ^ ) u ^ + u ^ × ( v × u ^ ) . {\displaystyle \mathbf {v} =(\mathbf {v} \cdot {\hat {\mathbf {u} }}){\hat {\mathbf {u} }}+{\hat {\mathbf {u} }}\times (\mathbf {v} \times {\hat {\mathbf {u} }}).} For the general case the same result can be written in terms of the dot and wedge product and the geometric product of that and the unit vector v = ( v ⋅ u ^ ) u ^ + ( v ∧ u ^ ) u ^ .
{\displaystyle \mathbf {v} =(\mathbf {v} \cdot {\hat {\mathbf {u} }}){\hat {\mathbf {u} }}+(\mathbf {v} \wedge {\hat {\mathbf {u} }}){\hat {\mathbf {u} }}.} It's also worthwhile to point out that this result can also be expressed using right or left vector division as defined by the geometric product: v = ( v ⋅ u ) 1 u + ( v ∧ u ) 1 u {\displaystyle \mathbf {v} =(\mathbf {v} \cdot \mathbf {u} ){\frac {1}{\mathbf {u} }}+(\mathbf {v} \wedge \mathbf {u} ){\frac {1}{\mathbf {u} }}} v = 1 u ( u ⋅ v ) + 1 u ( u ∧ v ) . {\displaystyle \mathbf {v} ={\frac {1}{\mathbf {u} }}(\mathbf {u} \cdot \mathbf {v} )+{\frac {1}{\mathbf {u} }}(\mathbf {u} \wedge \mathbf {v} ).} As with vector projection and rejection, higher-dimensional analogs of this calculation are also possible using the geometric product. As an example, one can calculate the component of a vector perpendicular to a plane and the projection of that vector onto the plane. Let w = a u + b v + x {\displaystyle w=au+bv+x} , where u ⋅ x = v ⋅ x = 0 {\displaystyle u\cdot x=v\cdot x=0} . As above, to discard the portions of w {\displaystyle w} that are collinear with u {\displaystyle u} or v {\displaystyle v} , take the wedge product w ∧ u ∧ v = ( a u + b v + x ) ∧ u ∧ v = x ∧ u ∧ v . {\displaystyle w\wedge u\wedge v=(au+bv+x)\wedge u\wedge v=x\wedge u\wedge v.} By analogy with the vector rejection calculation, one can guess that this quantity equals x ( u ∧ v ) {\displaystyle x(u\wedge v)} . One can also guess that there is a vector–bivector dot-product-like quantity that allows the calculation of the component of a vector that lies in the "direction of a plane". Both of these guesses are correct, and validating these facts is worthwhile. However, skipping ahead slightly, this to-be-proven fact allows for a nice closed form solution of the vector component outside of the plane: x = ( w ∧ u ∧ v ) 1 u ∧ v = 1 u ∧ v ( u ∧ v ∧ w ) . {\displaystyle x=(w\wedge u\wedge v){\frac {1}{u\wedge v}}={\frac {1}{u\wedge v}}(u\wedge v\wedge w).} Notice the similarities between this planar rejection result and the vector rejection result. To calculate the component of a vector outside of a plane we take the volume spanned by three vectors (trivector) and "divide out" the plane. Independent of any use of the geometric product, it can be shown that this rejection in terms of the standard basis is x = 1 ( A u , v ) 2 ∑ i < j < k | w i w j w k u i u j u k v i v j v k | | u i u j u k v i v j v k e i e j e k | {\displaystyle x={\frac {1}{(A_{u,v})^{2}}}\sum _{i<j<k}{\begin{vmatrix}w_{i}&w_{j}&w_{k}\\u_{i}&u_{j}&u_{k}\\v_{i}&v_{j}&v_{k}\\\end{vmatrix}}{\begin{vmatrix}u_{i}&u_{j}&u_{k}\\v_{i}&v_{j}&v_{k}\\{e}_{i}&{e}_{j}&{e}_{k}\\\end{vmatrix}}} where ( A u , v ) 2 = ∑ i < j | u i u j v i v j | 2 = − ( u ∧ v ) 2 {\displaystyle (A_{u,v})^{2}=\sum _{i<j}{\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}^{2}=-(u\wedge v)^{2}} is the squared area of the parallelogram formed by u {\displaystyle u} and v {\displaystyle v} .
The (squared) magnitude of x {\displaystyle x} is ‖ x ‖ 2 = x ⋅ w = 1 ( A u , v ) 2 ∑ i < j < k | w i w j w k u i u j u k v i v j v k | 2 {\displaystyle {\Vert x\Vert }^{2}=x\cdot w={\frac {1}{(A_{u,v})^{2}}}\sum _{i<j<k}{\begin{vmatrix}w_{i}&w_{j}&w_{k}\\u_{i}&u_{j}&u_{k}\\v_{i}&v_{j}&v_{k}\\\end{vmatrix}}^{2}} Thus, the (squared) volume of the parallelepiped (base area times perpendicular height) is ∑ i < j < k | w i w j w k u i u j u k v i v j v k | 2 {\displaystyle \sum _{i<j<k}{\begin{vmatrix}w_{i}&w_{j}&w_{k}\\u_{i}&u_{j}&u_{k}\\v_{i}&v_{j}&v_{k}\\\end{vmatrix}}^{2}} Note the similarity in form to the w, u, v trivector itself ∑ i < j < k | w i w j w k u i u j u k v i v j v k | e i ∧ e j ∧ e k , {\displaystyle \sum _{i<j<k}{\begin{vmatrix}w_{i}&w_{j}&w_{k}\\u_{i}&u_{j}&u_{k}\\v_{i}&v_{j}&v_{k}\\\end{vmatrix}}{e}_{i}\wedge {e}_{j}\wedge {e}_{k},} which, if one takes the set of e i ∧ e j ∧ e k {\displaystyle {e}_{i}\wedge {e}_{j}\wedge {e}_{k}} as a basis for the trivector space, suggests this is the natural way to define the measure of a trivector. Loosely speaking, the measure of a vector is a length, the measure of a bivector is an area, and the measure of a trivector is a volume. If a vector is factored directly into projective and rejective terms using the geometric product v = 1 u ( u ⋅ v + u ∧ v ) {\displaystyle v={\frac {1}{u}}(u\cdot v+u\wedge v)} , then it is not necessarily obvious that the rejection term, a product of a vector and a bivector, is even a vector. Expansion of the vector–bivector product in terms of the standard basis vectors has the following form. Let r = 1 u ( u ∧ v ) = u u 2 ( u ∧ v ) = 1 ‖ u ‖ 2 u ( u ∧ v ) {\displaystyle r={\frac {1}{u}}(u\wedge v)={\frac {u}{u^{2}}}(u\wedge v)={\frac {1}{{\Vert u\Vert }^{2}}}u(u\wedge v)} It can be shown that r = 1 ‖ u ‖ 2 ∑ i < j | u i u j v i v j | | u i u j e i e j | {\displaystyle r={\frac {1}{{\Vert {u}\Vert }^{2}}}\sum _{i<j}{\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}{\begin{vmatrix}u_{i}&u_{j}\\e_{i}&e_{j}\end{vmatrix}}} (a result that can be shown more easily straight from r = v − u ^ ( u ^ ⋅ v ) {\displaystyle r=v-{\hat {u}}({\hat {u}}\cdot v)} ). The rejective term is perpendicular to u {\displaystyle u} , since | u i u j u i u j | = 0 {\displaystyle {\begin{vmatrix}u_{i}&u_{j}\\u_{i}&u_{j}\end{vmatrix}}=0} implies r ⋅ u = 0 {\displaystyle r\cdot u=0} . The magnitude of r {\displaystyle r} is ‖ r ‖ 2 = r ⋅ v = 1 ‖ u ‖ 2 ∑ i < j | u i u j v i v j | 2 . {\displaystyle {\Vert r\Vert }^{2}=r\cdot v={\frac {1}{{\Vert {u}\Vert }^{2}}}\sum _{i<j}{\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}^{2}.} So, the quantity ‖ r ‖ 2 ‖ u ‖ 2 = ∑ i < j | u i u j v i v j | 2 {\displaystyle {\Vert r\Vert }^{2}{\Vert {u}\Vert }^{2}=\sum _{i<j}{\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}^{2}} is the squared area of the parallelogram formed by u {\displaystyle u} and v {\displaystyle v} . It is also noteworthy that the bivector can be expressed as u ∧ v = ∑ i < j | u i u j v i v j | e i ∧ e j . {\displaystyle u\wedge v=\sum _{i<j}{{\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}e_{i}\wedge e_{j}}.} Thus it is natural, if one considers each term e i ∧ e j {\displaystyle e_{i}\wedge e_{j}} as a basis vector of the bivector space, to define the (squared) "length" of that bivector as the (squared) area.
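Both the projection/rejection decomposition and the identification of the squared "length" of u ∧ v with the squared parallelogram area can be checked numerically in any dimension. A minimal Python sketch (standard library only; names illustrative) verifies that the rejection is perpendicular to u and that ‖r‖²‖u‖² equals the sum of squared 2×2 minors, as derived above.

```python
u = (2.0, 1.0, -2.0, 4.0)      # the identities hold in any dimension
v = (1.0, 4.0, 2.0, -3.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Projection of v onto u, and the rejection r = v - proj (perpendicular part).
scale = dot(u, v) / dot(u, u)
proj = tuple(scale * ui for ui in u)
r = tuple(vi - pi for vi, pi in zip(v, proj))

print(abs(dot(r, u)) < 1e-12)              # rejection is perpendicular to u

# |r|^2 |u|^2 equals the squared parallelogram area: the sum of squared
# 2x2 minors of (u, v), which is -(u ^ v)^2 in geometric algebra terms.
area_sq = sum((u[i] * v[j] - u[j] * v[i]) ** 2
              for i in range(len(u)) for j in range(i + 1, len(u)))
print(abs(dot(r, r) * dot(u, u) - area_sq) < 1e-9)   # True; both equal 650
```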
Going back to the geometric product expression for the length of the rejection 1 u ( u ∧ v ) {\displaystyle {\frac {1}{u}}(u\wedge v)} , we see that the length of the quotient, a vector, is in this case the "length" of the bivector divided by the length of the divisor. This may not be a general result for the length of the product of two k-vectors; however, it is a result that may help build some intuition about the significance of the algebraic operations. Namely, when a vector is divided out of the plane (parallelogram span) formed from it and another vector, what remains is the perpendicular component of the remaining vector, and its length is the planar area divided by the length of the vector that was divided out. === Area of the parallelogram defined by u and v === If A is the area of the parallelogram defined by u and v, then A 2 = ‖ u × v ‖ 2 = ∑ i < j | u i u j v i v j | 2 , {\displaystyle A^{2}={\Vert \mathbf {u} \times \mathbf {v} \Vert }^{2}=\sum _{i<j}{\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}^{2},} and A 2 = − ( u ∧ v ) 2 = ∑ i < j | u i u j v i v j | 2 . {\displaystyle A^{2}=-(\mathbf {u} \wedge \mathbf {v} )^{2}=\sum _{i<j}{\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}^{2}.} Note that this squaring of the bivector is carried out with the geometric product; this computation can alternatively be stated as the Gram determinant of the two vectors. === Angle between two vectors === ( sin ⁡ θ ) 2 = ‖ u × v ‖ 2 ‖ u ‖ 2 ‖ v ‖ 2 {\displaystyle ({\sin \theta })^{2}={\frac {{\Vert \mathbf {u} \times \mathbf {v} \Vert }^{2}}{{\Vert \mathbf {u} \Vert }^{2}{\Vert \mathbf {v} \Vert }^{2}}}} ( sin ⁡ θ ) 2 = − ( u ∧ v ) 2 u 2 v 2 {\displaystyle ({\sin \theta })^{2}=-{\frac {(\mathbf {u} \wedge \mathbf {v} )^{2}}{{\mathbf {u} }^{2}{\mathbf {v} }^{2}}}} === Volume of the parallelepiped formed by three vectors === In vector algebra, the volume of a parallelepiped is given by the square root of the squared norm of the scalar triple product: V 2 = ‖ ( u × v ) ⋅ w ‖ 2 = | u 1 u 2 u 3 v 1 v 2 v 3 w 1 w 2 w 3 | 2 {\displaystyle V^{2}={\Vert (\mathbf {u} \times \mathbf {v} )\cdot \mathbf {w} \Vert }^{2}={\begin{vmatrix}u_{1}&u_{2}&u_{3}\\v_{1}&v_{2}&v_{3}\\w_{1}&w_{2}&w_{3}\\\end{vmatrix}}^{2}} V 2 = − ( u ∧ v ∧ w ) 2 = − ( ∑ i < j < k | u i u j u k v i v j v k w i w j w k | e ^ i ∧ e ^ j ∧ e ^ k ) 2 = ∑ i < j < k | u i u j u k v i v j v k w i w j w k | 2 {\displaystyle V^{2}=-(\mathbf {u} \wedge \mathbf {v} \wedge \mathbf {w} )^{2}=-\left(\sum _{i<j<k}{\begin{vmatrix}u_{i}&u_{j}&u_{k}\\v_{i}&v_{j}&v_{k}\\w_{i}&w_{j}&w_{k}\\\end{vmatrix}}{\hat {\mathbf {e} }}_{i}\wedge {\hat {\mathbf {e} }}_{j}\wedge {\hat {\mathbf {e} }}_{k}\right)^{2}=\sum _{i<j<k}{\begin{vmatrix}u_{i}&u_{j}&u_{k}\\v_{i}&v_{j}&v_{k}\\w_{i}&w_{j}&w_{k}\\\end{vmatrix}}^{2}} ==== Product of a vector and a bivector ==== In order to justify the normal to a plane result above, a general examination of the product of a vector and bivector is required. Namely, w ( u ∧ v ) = ∑ i , j < k w i e i | u j u k v j v k | e j ∧ e k {\displaystyle w(u\wedge v)=\sum _{i,j<k}w_{i}{e}_{i}{\begin{vmatrix}u_{j}&u_{k}\\v_{j}&v_{k}\\\end{vmatrix}}{e}_{j}\wedge {e}_{k}} This has two parts, the vector part where i = j {\displaystyle i=j} or i = k {\displaystyle i=k} , and the trivector part where no indices are equal.
After some index summation trickery, and grouping terms and so forth, this is w ( u ∧ v ) = ∑ i < j ( w i e j − w j e i ) | u i u j v i v j | + ∑ i < j < k | w i w j w k u i u j u k v i v j v k | e i ∧ e j ∧ e k {\displaystyle w(u\wedge v)=\sum _{i<j}(w_{i}e_{j}-w_{j}e_{i}){\begin{vmatrix}u_{i}&u_{j}\\v_{i}&v_{j}\end{vmatrix}}+\sum _{i<j<k}{\begin{vmatrix}w_{i}&w_{j}&w_{k}\\u_{i}&u_{j}&u_{k}\\v_{i}&v_{j}&v_{k}\end{vmatrix}}{e}_{i}\wedge {e}_{j}\wedge {e}_{k}} The trivector term is w ∧ u ∧ v {\displaystyle w\wedge u\wedge v} . Expansion of ( u ∧ v ) w {\displaystyle (u\wedge v)w} yields the same trivector term (it is the completely symmetric part), and the vector term is negated. Like the geometric product of two vectors, this geometric product can be grouped into symmetric and antisymmetric parts, each of which is a pure k-vector. In analogy, the antisymmetric part of this product can be called a generalized dot product; it is, roughly speaking, the dot product of a "plane" (bivector) and a vector. The properties of this generalized dot product remain to be explored, but first here is a summary of the notation w ( u ∧ v ) = w ⋅ ( u ∧ v ) + w ∧ u ∧ v {\displaystyle w(u\wedge v)=w\cdot (u\wedge v)+w\wedge u\wedge v} ( u ∧ v ) w = − w ⋅ ( u ∧ v ) + w ∧ u ∧ v {\displaystyle (u\wedge v)w=-w\cdot (u\wedge v)+w\wedge u\wedge v} w ∧ u ∧ v = 1 2 ( w ( u ∧ v ) + ( u ∧ v ) w ) {\displaystyle w\wedge u\wedge v={\frac {1}{2}}(w(u\wedge v)+(u\wedge v)w)} w ⋅ ( u ∧ v ) = 1 2 ( w ( u ∧ v ) − ( u ∧ v ) w ) {\displaystyle w\cdot (u\wedge v)={\frac {1}{2}}(w(u\wedge v)-(u\wedge v)w)} Let w = x + y {\displaystyle w=x+y} , where x = a u + b v {\displaystyle x=au+bv} , and y ⋅ u = y ⋅ v = 0 {\displaystyle y\cdot u=y\cdot v=0} . Expressing the product of w {\displaystyle w} and u ∧ v {\displaystyle u\wedge v} in terms of these components gives w ( u ∧ v ) = x ( u ∧ v ) + y ( u ∧ v ) = x ⋅ ( u ∧ v ) + y ⋅ ( u ∧ v ) + y ∧ u ∧ v {\displaystyle w(u\wedge v)=x(u\wedge v)+y(u\wedge v)=x\cdot (u\wedge v)+y\cdot (u\wedge v)+y\wedge u\wedge v} With the conditions and definitions above, and some manipulation, it can be shown that the term y ⋅ ( u ∧ v ) = 0 {\displaystyle y\cdot (u\wedge v)=0} , which then justifies the previous solution of the normal to a plane problem. The vector term of the vector–bivector product is zero when the vector is perpendicular to the plane (bivector), and this vector–bivector "dot product" selects only the components that lie in the plane; so, in analogy to the vector–vector dot product, the name is justified by more than the fact that it is the non-wedge term of the geometric vector–bivector product.
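The vector (generalized dot product) part of w(u ∧ v) can be checked against the closed form w ⋅ (u ∧ v) = (w ⋅ u)v − (w ⋅ v)u, a standard identity that is consistent with, though not derived in, the expansion above. A minimal Python sketch (standard library only; names illustrative):

```python
u = (2.0, 3.0, 5.0)
v = (1.0, -1.0, 4.0)
w = (3.0, 2.0, -2.0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Vector part of w (u ^ v) from the determinant expansion above:
# sum over i < j of (w_i e_j - w_j e_i) * (u_i v_j - u_j v_i).
n = len(u)
expansion = [0.0] * n
for i in range(n):
    for j in range(i + 1, n):
        m = u[i] * v[j] - u[j] * v[i]
        expansion[j] += w[i] * m
        expansion[i] -= w[j] * m

# Closed form of the generalized dot product: (w . u) v - (w . v) u.
closed = [dot(w, u) * vk - dot(w, v) * uk for uk, vk in zip(u, v)]
print(expansion, closed)    # both [16.0, 19.0, 43.0]
```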
=== Derivative of a unit vector === It can be shown that a unit vector derivative can be expressed using the cross product d d t ( r ‖ r ‖ ) = 1 ‖ r ‖ 3 ( r × d r d t ) × r = ( r ^ × 1 ‖ r ‖ d r d t ) × r ^ {\displaystyle {\frac {d}{dt}}\left({\frac {\mathbf {r} }{\Vert \mathbf {r} \Vert }}\right)={\frac {1}{{\Vert \mathbf {r} \Vert }^{3}}}\left(\mathbf {r} \times {\frac {d\mathbf {r} }{dt}}\right)\times \mathbf {r} =\left({\hat {\mathbf {r} }}\times {\frac {1}{\Vert \mathbf {r} \Vert }}{\frac {d\mathbf {r} }{dt}}\right)\times {\hat {\mathbf {r} }}} The equivalent geometric product generalization is d d t ( r ‖ r ‖ ) = 1 ‖ r ‖ 3 r ( r ∧ d r d t ) = 1 r ( r ^ ∧ d r d t ) {\displaystyle {\frac {d}{dt}}\left({\frac {\mathbf {r} }{\Vert \mathbf {r} \Vert }}\right)={\frac {1}{{\Vert \mathbf {r} \Vert }^{3}}}\mathbf {r} \left(\mathbf {r} \wedge {\frac {d\mathbf {r} }{dt}}\right)={\frac {1}{\mathbf {r} }}\left({\hat {\mathbf {r} }}\wedge {\frac {d\mathbf {r} }{dt}}\right)} Thus this derivative is the component of 1 ‖ r ‖ d r d t {\displaystyle {\frac {1}{\Vert \mathbf {r} \Vert }}{\frac {d\mathbf {r} }{dt}}} in the direction perpendicular to r {\displaystyle \mathbf {r} } . In other words, this is 1 ‖ r ‖ d r d t {\displaystyle {\frac {1}{\Vert \mathbf {r} \Vert }}{\frac {d\mathbf {r} }{dt}}} minus the projection of that vector onto r ^ {\displaystyle {\hat {\mathbf {r} }}} . This intuitively makes sense (but a picture would help) since a unit vector is constrained to circular motion, and any change to a unit vector due to a change in its generating vector has to be in the direction of the rejection of r ^ {\displaystyle {\hat {\mathbf {r} }}} from d r d t {\displaystyle {\frac {d\mathbf {r} }{dt}}} . That rejection has to be scaled by 1/|r| to get the final result. When the objective isn't comparing to the cross product, it's also notable that this unit vector derivative can be written r d r ^ d t = r ^ ∧ d r d t {\displaystyle {\mathbf {r} }{\frac {d{\hat {\mathbf {r} }}}{dt}}={\hat {\mathbf {r} }}\wedge {\frac {d\mathbf {r} }{dt}}} == See also == Geometric algebra Bivector == Citations == == References and further reading == Vold, Terje G. (1993), "An introduction to Geometric Algebra with an Application in Rigid Body mechanics" (PDF), American Journal of Physics, 61 (6): 491, Bibcode:1993AmJPh..61..491V, doi:10.1119/1.17201 Gull, S.F.; Lasenby, A.N; Doran, C:J:L (1993), Imaginary Numbers are not Real – the Geometric Algebra of Spacetime (PDF)
Wikipedia/Comparison_of_vector_algebra_and_geometric_algebra
In mathematics, a quadratic form is a polynomial with terms all of degree two ("form" is another name for a homogeneous polynomial). For example, 4 x 2 + 2 x y − 3 y 2 {\displaystyle 4x^{2}+2xy-3y^{2}} is a quadratic form in the variables x and y. The coefficients usually belong to a fixed field K, such as the real or complex numbers, and one speaks of a quadratic form over K. Over the reals, a quadratic form is said to be definite if it takes the value zero only when all its variables are simultaneously zero; otherwise it is isotropic. Quadratic forms occupy a central place in various branches of mathematics, including number theory, linear algebra, group theory (orthogonal groups), differential geometry (the Riemannian metric, the second fundamental form), differential topology (intersection forms of manifolds, especially four-manifolds), Lie theory (the Killing form), and statistics (where the exponent of a zero-mean multivariate normal distribution has the quadratic form − x T Σ − 1 x {\displaystyle -\mathbf {x} ^{\mathsf {T}}{\boldsymbol {\Sigma }}^{-1}\mathbf {x} } ) Quadratic forms are not to be confused with quadratic equations, which have only one variable and may include terms of degree less than two. A quadratic form is a specific instance of the more general concept of forms. == Introduction == Quadratic forms are homogeneous quadratic polynomials in n variables. In the cases of one, two, and three variables they are called unary, binary, and ternary and have the following explicit form: q ( x ) = a x 2 (unary) q ( x , y ) = a x 2 + b x y + c y 2 (binary) q ( x , y , z ) = a x 2 + b x y + c y 2 + d y z + e z 2 + f x z (ternary) {\displaystyle {\begin{aligned}q(x)&=ax^{2}&&{\textrm {(unary)}}\\q(x,y)&=ax^{2}+bxy+cy^{2}&&{\textrm {(binary)}}\\q(x,y,z)&=ax^{2}+bxy+cy^{2}+dyz+ez^{2}+fxz&&{\textrm {(ternary)}}\end{aligned}}} where a, ..., f are the coefficients. The theory of quadratic forms and methods used in their study depend in a large measure on the nature of the coefficients, which may be real or complex numbers, rational numbers, or integers. In linear algebra, analytic geometry, and in the majority of applications of quadratic forms, the coefficients are real or complex numbers. In the algebraic theory of quadratic forms, the coefficients are elements of a certain field. In the arithmetic theory of quadratic forms, the coefficients belong to a fixed commutative ring, frequently the integers Z or the p-adic integers Zp. Binary quadratic forms have been extensively studied in number theory, in particular, in the theory of quadratic fields, continued fractions, and modular forms. The theory of integral quadratic forms in n variables has important applications to algebraic topology. Using homogeneous coordinates, a non-zero quadratic form in n variables defines an (n − 2)-dimensional quadric in the (n − 1)-dimensional projective space. This is a basic construction in projective geometry. In this way one may visualize 3-dimensional real quadratic forms as conic sections. An example is given by the three-dimensional Euclidean space and the square of the Euclidean norm expressing the distance between a point with coordinates (x, y, z) and the origin: q ( x , y , z ) = d ( ( x , y , z ) , ( 0 , 0 , 0 ) ) 2 = ‖ ( x , y , z ) ‖ 2 = x 2 + y 2 + z 2 . 
{\displaystyle q(x,y,z)=d((x,y,z),(0,0,0))^{2}=\left\|(x,y,z)\right\|^{2}=x^{2}+y^{2}+z^{2}.} A closely related notion with geometric overtones is a quadratic space, which is a pair (V, q), with V a vector space over a field K, and q : V → K a quadratic form on V. See § Definitions below for the definition of a quadratic form on a vector space. == History == The study of quadratic forms, in particular the question of whether a given integer can be the value of a quadratic form over the integers, dates back many centuries. One such case is Fermat's theorem on sums of two squares, which determines when an integer may be expressed in the form x2 + y2, where x, y are integers. This problem is related to the problem of finding Pythagorean triples, which appeared in the second millennium BCE. In 628, the Indian mathematician Brahmagupta wrote Brāhmasphuṭasiddhānta, which includes, among many other things, a study of equations of the form x2 − ny2 = c. He considered what is now called Pell's equation, x2 − ny2 = 1, and found a method for its solution. In Europe this problem was studied by Brouncker, Euler and Lagrange. In 1801 Gauss published Disquisitiones Arithmeticae, a major portion of which was devoted to a complete theory of binary quadratic forms over the integers. Since then, the concept has been generalized, and the connections with quadratic number fields, the modular group, and other areas of mathematics have been further elucidated. == Associated symmetric matrix == Any n × n matrix A determines a quadratic form qA in n variables by q A ( x 1 , … , x n ) = ∑ i = 1 n ∑ j = 1 n a i j x i x j = x T A x , {\displaystyle q_{A}(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{x_{i}}{x_{j}}=\mathbf {x} ^{\mathsf {T}}A\mathbf {x} ,} where A = (aij). === Example === Consider the case of quadratic forms in three variables x, y, z. The matrix A has the form A = [ a b c d e f g h k ] . {\displaystyle A={\begin{bmatrix}a&b&c\\d&e&f\\g&h&k\end{bmatrix}}.} The above formula gives q A ( x , y , z ) = a x 2 + e y 2 + k z 2 + ( b + d ) x y + ( c + g ) x z + ( f + h ) y z . {\displaystyle q_{A}(x,y,z)=ax^{2}+ey^{2}+kz^{2}+(b+d)xy+(c+g)xz+(f+h)yz.} So, two different matrices define the same quadratic form if and only if they have the same elements on the diagonal and the same values for the sums b + d, c + g and f + h. In particular, the quadratic form qA is defined by a unique symmetric matrix A = [ a b + d 2 c + g 2 b + d 2 e f + h 2 c + g 2 f + h 2 k ] . {\displaystyle A={\begin{bmatrix}a&{\frac {b+d}{2}}&{\frac {c+g}{2}}\\{\frac {b+d}{2}}&e&{\frac {f+h}{2}}\\{\frac {c+g}{2}}&{\frac {f+h}{2}}&k\end{bmatrix}}.} This generalizes to any number of variables as follows. === General case === Given a quadratic form qA over the real numbers, defined by the matrix A = (aij), the matrix B = ( a i j + a j i 2 ) = 1 2 ( A + A T ) {\displaystyle B=\left({\frac {a_{ij}+a_{ji}}{2}}\right)={\frac {1}{2}}(A+A^{\text{T}})} is symmetric, defines the same quadratic form as A, and is the unique symmetric matrix that defines qA. So, over the real numbers (and, more generally, over a field of characteristic different from two), there is a one-to-one correspondence between quadratic forms and symmetric matrices that determine them. == Real quadratic forms == A fundamental problem is the classification of real quadratic forms under a linear change of variables. 
Jacobi proved that, for every real quadratic form, there is an orthogonal diagonalization; that is, an orthogonal change of variables that puts the quadratic form in a "diagonal form" λ 1 x ~ 1 2 + λ 2 x ~ 2 2 + ⋯ + λ n x ~ n 2 , {\displaystyle \lambda _{1}{\tilde {x}}_{1}^{2}+\lambda _{2}{\tilde {x}}_{2}^{2}+\cdots +\lambda _{n}{\tilde {x}}_{n}^{2},} where the associated symmetric matrix is diagonal. Moreover, the coefficients λ1, λ2, ..., λn are determined uniquely up to a permutation. If the change of variables is given by an invertible matrix that is not necessarily orthogonal, one can suppose that all coefficients λi are 0, 1, or −1. Sylvester's law of inertia states that the numbers of each 0, 1, and −1 are invariants of the quadratic form, in the sense that any other diagonalization will contain the same number of each. The signature of the quadratic form is the triple (n0, n+, n−), where these components count the number of 0s, number of 1s, and the number of −1s, respectively. Sylvester's law of inertia shows that this is a well-defined quantity attached to the quadratic form. The case when all λi have the same sign is especially important: in this case the quadratic form is called positive definite (all 1) or negative definite (all −1). If none of the terms are 0, then the form is called nondegenerate; this includes positive definite, negative definite, and isotropic quadratic form (a mix of 1 and −1); equivalently, a nondegenerate quadratic form is one whose associated symmetric form is a nondegenerate bilinear form. A real vector space with an indefinite nondegenerate quadratic form of index (p, q) (denoting p 1s and q −1s) is often denoted as Rp,q particularly in the physical theory of spacetime. The discriminant of a quadratic form, concretely the class of the determinant of a representing matrix in K / (K×)2 (up to non-zero squares) can also be defined, and for a real quadratic form is a cruder invariant than signature, taking values of only "positive, zero, or negative". Zero corresponds to degenerate, while for a non-degenerate form it is the parity of the number of negative coefficients, (−1)n−. These results are reformulated in a different way below. Let q be a quadratic form defined on an n-dimensional real vector space. Let A be the matrix of the quadratic form q in a given basis. This means that A is a symmetric n × n matrix such that q ( v ) = x T A x , {\displaystyle q(v)=x^{\mathsf {T}}Ax,} where x is the column vector of coordinates of v in the chosen basis. Under a change of basis, the column x is multiplied on the left by an n × n invertible matrix S, and the symmetric square matrix A is transformed into another symmetric square matrix B of the same size according to the formula A → B = S T A S . {\displaystyle A\to B=S^{\mathsf {T}}AS.} Any symmetric matrix A can be transformed into a diagonal matrix B = ( λ 1 0 ⋯ 0 0 λ 2 ⋯ 0 ⋮ ⋮ ⋱ 0 0 0 ⋯ λ n ) {\displaystyle B={\begin{pmatrix}\lambda _{1}&0&\cdots &0\\0&\lambda _{2}&\cdots &0\\\vdots &\vdots &\ddots &0\\0&0&\cdots &\lambda _{n}\end{pmatrix}}} by a suitable choice of an orthogonal matrix S, and the diagonal entries of B are uniquely determined – this is Jacobi's theorem. If S is allowed to be any invertible matrix then B can be made to have only 0, 1, and −1 on the diagonal, and the number of the entries of each type (n0 for 0, n+ for 1, and n− for −1) depends only on A. 
This is one of the formulations of Sylvester's law of inertia and the numbers n+ and n− are called the positive and negative indices of inertia. Although their definition involved a choice of basis and consideration of the corresponding real symmetric matrix A, Sylvester's law of inertia means that they are invariants of the quadratic form q. The quadratic form q is positive definite if q(v) > 0 (similarly, negative definite if q(v) < 0) for every nonzero vector v. When q(v) assumes both positive and negative values, q is an isotropic quadratic form. The theorems of Jacobi and Sylvester show that any positive definite quadratic form in n variables can be brought to the sum of n squares by a suitable invertible linear transformation: geometrically, there is only one positive definite real quadratic form of every dimension. Its isometry group is a compact orthogonal group O(n). This stands in contrast with the case of isotropic forms, when the corresponding group, the indefinite orthogonal group O(p, q), is non-compact. Further, the isometry groups of Q and −Q are the same (O(p, q) ≈ O(q, p)), but the associated Clifford algebras (and hence pin groups) are different. == Definitions == A quadratic form over a field K is a map q : V → K from a finite-dimensional K-vector space to K such that q(av) = a2q(v) for all a ∈ K, v ∈ V and the function q(u + v) − q(u) − q(v) is bilinear. More concretely, an n-ary quadratic form over a field K is a homogeneous polynomial of degree 2 in n variables with coefficients in K: q ( x 1 , … , x n ) = ∑ i = 1 n ∑ j = 1 n a i j x i x j , a i j ∈ K . {\displaystyle q(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{x_{i}}{x_{j}},\quad a_{ij}\in K.} This formula may be rewritten using matrices: let x be the column vector with components x1, ..., xn and A = (aij) be the n × n matrix over K whose entries are the coefficients of q. Then q ( x ) = x T A x . {\displaystyle q(x)=x^{\mathsf {T}}Ax.} A vector v = (x1, ..., xn) is a null vector if q(v) = 0. Two n-ary quadratic forms φ and ψ over K are equivalent if there exists a nonsingular linear transformation C ∈ GL(n, K) such that ψ ( x ) = φ ( C x ) . {\displaystyle \psi (x)=\varphi (Cx).} Let the characteristic of K be different from 2. The coefficient matrix A of q may be replaced by the symmetric matrix (A + AT)/2 with the same quadratic form, so it may be assumed from the outset that A is symmetric. Moreover, a symmetric matrix A is uniquely determined by the corresponding quadratic form. Under an equivalence C, the symmetric matrix A of φ and the symmetric matrix B of ψ are related as follows: B = C T A C . {\displaystyle B=C^{\mathsf {T}}AC.} The associated bilinear form of a quadratic form q is defined by b q ( x , y ) = 1 2 ( q ( x + y ) − q ( x ) − q ( y ) ) = x T A y = y T A x . {\displaystyle b_{q}(x,y)={\tfrac {1}{2}}(q(x+y)-q(x)-q(y))=x^{\mathsf {T}}Ay=y^{\mathsf {T}}Ax.} Thus, bq is a symmetric bilinear form over K with matrix A. Conversely, any symmetric bilinear form b defines a quadratic form q ( x ) = b ( x , x ) , {\displaystyle q(x)=b(x,x),} and these two processes are the inverses of each other. As a consequence, over a field of characteristic not equal to 2, the theories of symmetric bilinear forms and of quadratic forms in n variables are essentially the same. 
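A small Python sketch (standard library only; names illustrative) of the correspondence just described, using the binary form 4x² + 2xy − 3y² from the introduction: a non-symmetric representing matrix is symmetrized without changing the form, and the polarization identity recovers xᵀBy.

```python
# q_A(x) = x^T A x for a (not necessarily symmetric) representing matrix A;
# B = (A + A^T)/2 is the unique symmetric matrix with the same quadratic form.
A = [[4.0, 2.0], [0.0, -3.0]]          # q(x, y) = 4x^2 + 2xy - 3y^2

def q(vec, M=A):
    n = len(vec)
    return sum(vec[i] * M[i][j] * vec[j] for i in range(n) for j in range(n))

B = [[(A[i][j] + A[j][i]) / 2 for j in range(2)] for i in range(2)]

x, y = [1.0, 2.0], [3.0, -1.0]
# Associated bilinear form via polarization: b_q(x, y) = (q(x+y)-q(x)-q(y))/2.
bq = (q([a + b for a, b in zip(x, y)]) - q(x) - q(y)) / 2
xBy = sum(x[i] * B[i][j] * y[j] for i in range(2) for j in range(2))
print(q(x, B) == q(x))    # True: symmetrization preserves the form
print(bq == xBy)          # True: polarization recovers x^T B y (= 23.0 here)
```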
=== Quadratic space === Given an n-dimensional vector space V over a field K, a quadratic form on V is a function Q : V → K that has the following property: for some basis, the function q that maps the coordinates of v ∈ V to Q(v) is a quadratic form. In particular, if V = Kn with its standard basis, one has q ( v 1 , … , v n ) = Q ( [ v 1 , … , v n ] ) for [ v 1 , … , v n ] ∈ K n . {\displaystyle q(v_{1},\ldots ,v_{n})=Q([v_{1},\ldots ,v_{n}])\quad {\text{for}}\quad [v_{1},\ldots ,v_{n}]\in K^{n}.} The change of basis formulas show that the property of being a quadratic form does not depend on the choice of a specific basis in V, although the quadratic form q depends on the choice of the basis. A finite-dimensional vector space with a quadratic form is called a quadratic space. The map Q is a homogeneous function of degree 2, which means that it has the property that, for all a in K and v in V: Q ( a v ) = a 2 Q ( v ) . {\displaystyle Q(av)=a^{2}Q(v).} When the characteristic of K is not 2, the bilinear map B : V × V → K over K is defined: B ( v , w ) = 1 2 ( Q ( v + w ) − Q ( v ) − Q ( w ) ) . {\displaystyle B(v,w)={\tfrac {1}{2}}(Q(v+w)-Q(v)-Q(w)).} This bilinear form B is symmetric. That is, B(x, y) = B(y, x) for all x, y in V, and it determines Q: Q(x) = B(x, x) for all x in V. When the characteristic of K is 2, so that 2 is not a unit, it is still possible to use a quadratic form to define a symmetric bilinear form B′(x, y) = Q(x + y) − Q(x) − Q(y). However, Q(x) can no longer be recovered from this B′ in the same way, since B′(x, x) = 0 for all x (and is thus alternating). Alternatively, there always exists a bilinear form B″ (not in general either unique or symmetric) such that B″(x, x) = Q(x). The pair (V, Q) consisting of a finite-dimensional vector space V over K and a quadratic map Q from V to K is called a quadratic space, and B as defined here is the associated symmetric bilinear form of Q. The notion of a quadratic space is a coordinate-free version of the notion of quadratic form. Sometimes, Q is also called a quadratic form. Two n-dimensional quadratic spaces (V, Q) and (V′, Q′) are isometric if there exists an invertible linear transformation T : V → V′ (isometry) such that Q ( v ) = Q ′ ( T v ) for all v ∈ V . {\displaystyle Q(v)=Q'(Tv){\text{ for all }}v\in V.} The isometry classes of n-dimensional quadratic spaces over K correspond to the equivalence classes of n-ary quadratic forms over K. === Generalization === Let R be a commutative ring, M be an R-module, and b : M × M → R be an R-bilinear form. A mapping q : M → R : v ↦ b(v, v) is the associated quadratic form of b, and B : M × M → R : (u, v) ↦ q(u + v) − q(u) − q(v) is the polar form of q. A quadratic form q : M → R may be characterized in the following equivalent ways: There exists an R-bilinear form b : M × M → R such that q(v) is the associated quadratic form. q(av) = a2q(v) for all a ∈ R and v ∈ M, and the polar form of q is R-bilinear. === Related concepts === Two elements v and w of V are called orthogonal if B(v, w) = 0. The kernel of a bilinear form B consists of the elements that are orthogonal to every element of V. Q is non-singular if the kernel of its associated bilinear form is {0}. If there exists a non-zero v in V such that Q(v) = 0, the quadratic form Q is isotropic, otherwise it is definite. This terminology also applies to vectors and subspaces of a quadratic space. If the restriction of Q to a subspace U of V is identically zero, then U is totally singular. 
The orthogonal group of a non-singular quadratic form Q is the group of the linear automorphisms of V that preserve Q: that is, the group of isometries of (V, Q) into itself. If a quadratic space (A, Q) has a product so that A is an algebra over a field, and satisfies ∀ x , y ∈ A Q ( x y ) = Q ( x ) Q ( y ) , {\displaystyle \forall x,y\in A\quad Q(xy)=Q(x)Q(y),} then it is a composition algebra. == Equivalence of forms == Every quadratic form q in n variables over a field of characteristic not equal to 2 is equivalent to a diagonal form q ( x ) = a 1 x 1 2 + a 2 x 2 2 + ⋯ + a n x n 2 . {\displaystyle q(x)=a_{1}x_{1}^{2}+a_{2}x_{2}^{2}+\cdots +a_{n}x_{n}^{2}.} Such a diagonal form is often denoted by ⟨a1, ..., an⟩. Classification of all quadratic forms up to equivalence can thus be reduced to the case of diagonal forms. == Geometric meaning == Using Cartesian coordinates in three dimensions, let x = (x, y, z)T, and let A be a symmetric 3-by-3 matrix. Then the geometric nature of the solution set of the equation xTAx + bTx = 1 depends on the eigenvalues of the matrix A. If all eigenvalues of A are non-zero, then the solution set is an ellipsoid or a hyperboloid. If all the eigenvalues are positive, then it is an ellipsoid; if all the eigenvalues are negative, then it is an imaginary ellipsoid (we get the equation of an ellipsoid but with imaginary radii); if some eigenvalues are positive and some are negative, then it is a hyperboloid. If there exist one or more eigenvalues λi = 0, then the shape depends on the corresponding bi. If the corresponding bi ≠ 0, then the solution set is a paraboloid (either elliptic or hyperbolic); if the corresponding bi = 0, then the dimension i degenerates and does not come into play, and the geometric meaning will be determined by other eigenvalues and other components of b. When the solution set is a paraboloid, whether it is elliptic or hyperbolic is determined by whether all other non-zero eigenvalues are of the same sign: if they are, then it is elliptic; otherwise, it is hyperbolic. == Integral quadratic forms == Quadratic forms over the ring of integers are called integral quadratic forms, whereas the corresponding modules are quadratic lattices (sometimes, simply lattices). They play an important role in number theory and topology. An integral quadratic form has integer coefficients, such as x2 + xy + y2; equivalently, given a lattice Λ in a vector space V (over a field with characteristic 0, such as Q or R), a quadratic form Q is integral with respect to Λ if and only if it is integer-valued on Λ, meaning Q(x, y) ∈ Z if x, y ∈ Λ. This is the current use of the term; in the past it was sometimes used differently, as detailed below. === Historical use === Historically there was some confusion and controversy over whether the notion of integral quadratic form should mean: twos in the quadratic form associated to a symmetric matrix with integer coefficients twos out a polynomial with integer coefficients (so the associated symmetric matrix may have half-integer coefficients off the diagonal) This debate was due to the confusion of quadratic forms (represented by polynomials) and symmetric bilinear forms (represented by matrices), and "twos out" is now the accepted convention; "twos in" is instead the theory of integral symmetric bilinear forms (integral symmetric matrices). 
In "twos in", binary quadratic forms are of the form ax2 + 2bxy + cy2, represented by the symmetric matrix ( a b b c ) {\displaystyle {\begin{pmatrix}a&b\\b&c\end{pmatrix}}} This is the convention Gauss uses in Disquisitiones Arithmeticae. In "twos out", binary quadratic forms are of the form ax2 + bxy + cy2, represented by the symmetric matrix ( a b / 2 b / 2 c ) . {\displaystyle {\begin{pmatrix}a&b/2\\b/2&c\end{pmatrix}}.} Several points of view mean that twos out has been adopted as the standard convention. Those include: better understanding of the 2-adic theory of quadratic forms, the 'local' source of the difficulty; the lattice point of view, which was generally adopted by the experts in the arithmetic of quadratic forms during the 1950s; the actual needs for integral quadratic form theory in topology for intersection theory; the Lie group and algebraic group aspects. === Universal quadratic forms === An integral quadratic form whose image consists of all the positive integers is sometimes called universal. Lagrange's four-square theorem shows that w2 + x2 + y2 + z2 is universal. Ramanujan generalized this to aw2 + bx2 + cy2 + dz2 and found 54 multisets {a, b, c, d} that can each generate all positive integers. There are also forms whose image consists of all but one of the positive integers. For example, {1, 2, 5, 5} has 15 as the exception. Recently, the 15 and 290 theorems have completely characterized universal integral quadratic forms: if all coefficients are integers, then it represents all positive integers if and only if it represents all integers up through 290; if it has an integral matrix, it represents all positive integers if and only if it represents all integers up through 15. == See also == ε-quadratic form Cubic form Discriminant of a quadratic form Hasse–Minkowski theorem Quadric Ramanujan's ternary quadratic form Square class Witt group Witt's theorem == Notes == == References == O'Meara, O.T. (2000), Introduction to Quadratic Forms, Berlin, New York: Springer-Verlag, ISBN 978-3-540-66564-9 Conway, John Horton; Fung, Francis Y. C. (1997), The Sensual (Quadratic) Form, Carus Mathematical Monographs, The Mathematical Association of America, ISBN 978-0-88385-030-5 Shafarevich, I. R.; Remizov, A. O. (2012). Linear Algebra and Geometry. Springer. ISBN 978-3-642-30993-9. == Further reading == Cassels, J.W.S. (1978). Rational Quadratic Forms. London Mathematical Society Monographs. Vol. 13. Academic Press. ISBN 0-12-163260-1. Zbl 0395.10029. Kitaoka, Yoshiyuki (1993). Arithmetic of quadratic forms. Cambridge Tracts in Mathematics. Vol. 106. Cambridge University Press. ISBN 0-521-40475-4. Zbl 0785.11021. Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. MR 2104929. Zbl 1068.11023. Milnor, J.; Husemoller, D. (1973). Symmetric Bilinear Forms. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 73. Springer-Verlag. ISBN 3-540-06009-X. Zbl 0292.10016. O'Meara, O.T. (1973). Introduction to quadratic forms. Die Grundlehren der mathematischen Wissenschaften. Vol. 117. Springer-Verlag. ISBN 3-540-66564-1. Zbl 0259.10018. Pfister, Albrecht (1995). Quadratic Forms with Applications to Algebraic Geometry and Topology. London Mathematical Society lecture note series. Vol. 217. Cambridge University Press. ISBN 0-521-46755-1. Zbl 0847.11014.
== External links == A. V. Malyshev (2001) [1994], "Quadratic form", Encyclopedia of Mathematics, EMS Press A. V. Malyshev (2001) [1994], "Binary quadratic form", Encyclopedia of Mathematics, EMS Press
Wikipedia/Nondegenerate_quadratic_form
In mathematics, a shuffle algebra is a Hopf algebra with a basis corresponding to words on some set, whose product is given by the shuffle product X ⧢ Y of two words X, Y: the sum of all ways of interlacing them. The interlacing is given by the riffle shuffle permutation. The shuffle algebra on a finite set is the graded dual of the universal enveloping algebra of the free Lie algebra on the set. Over the rational numbers, the shuffle algebra is isomorphic to the polynomial algebra in the Lyndon words. The shuffle product occurs in generic settings in non-commutative algebras; this is because it is able to preserve the relative order of factors being multiplied together - the riffle shuffle permutation. This can be contrasted with the divided power structure, which becomes appropriate when factors are commutative. == Shuffle product == The shuffle product of words of lengths m and n is a sum over the (m + n)!/(m! n!) ways of interleaving the two words, as shown in the following examples: ab ⧢ xy = abxy + axby + xaby + axyb + xayb + xyab aaa ⧢ aa = 10aaaaa It may be defined inductively by u ⧢ ε = ε ⧢ u = u ua ⧢ vb = (u ⧢ vb)a + (ua ⧢ v)b where ε is the empty word, a and b are single elements, and u and v are arbitrary words. The shuffle product was introduced by Eilenberg & Mac Lane (1953). The name "shuffle product" refers to the fact that the product can be thought of as a sum over all ways of riffle shuffling two words together: this is the riffle shuffle permutation. The product is commutative and associative. The shuffle product of two words in some alphabet is often denoted by the shuffle product symbol ⧢ (Unicode character U+29E2 SHUFFLE PRODUCT, derived from the Cyrillic letter ⟨ш⟩ sha). == Infiltration product == The closely related infiltration product was introduced by Chen, Fox & Lyndon (1958). It is defined inductively on words over an alphabet A by fa ↑ ga = (f ↑ ga)a + (fa ↑ g)a + (f ↑ g)a fa ↑ gb = (f ↑ gb)a + (fa ↑ g)b (for distinct letters a and b) For example: ab ↑ ab = ab + 2aab + 2abb + 4aabb + 2abab ab ↑ ba = aba + bab + abab + 2abba + 2baab + baba The infiltration product is also commutative and associative. == See also == Hopf algebra of permutations Zinbiel algebra == References == == External links == Shuffle product symbol
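The inductive definition above translates directly into a short program. The following Python sketch (a minimal implementation; the function name is illustrative) computes shuffle products as multisets of words and reproduces both examples given earlier:

from collections import Counter
from functools import lru_cache

@lru_cache(maxsize=None)
def shuffle(u, v):
    # u ⧢ ε = ε ⧢ u = u;  ua ⧢ vb = (u ⧢ vb)a + (ua ⧢ v)b
    if not u:
        return Counter({v: 1})
    if not v:
        return Counter({u: 1})
    result = Counter()
    for word, mult in shuffle(u[:-1], v).items():
        result[word + u[-1]] += mult
    for word, mult in shuffle(u, v[:-1]).items():
        result[word + v[-1]] += mult
    return result

print(shuffle("ab", "xy"))   # the six interleavings, each with coefficient 1
print(shuffle("aaa", "aa"))  # Counter({'aaaaa': 10})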
Wikipedia/Shuffle_algebra
In theoretical physics, twistor theory was proposed by Roger Penrose in 1967 as a possible path to quantum gravity and has evolved into a widely studied branch of theoretical and mathematical physics. Penrose's idea was that twistor space should be the basic arena for physics from which space-time itself should emerge. It has led to powerful mathematical tools that have applications to differential and integral geometry, nonlinear differential equations and representation theory, and in physics to general relativity, quantum field theory, and the theory of scattering amplitudes. Twistor theory arose in the context of the rapidly expanding mathematical developments in Einstein's theory of general relativity in the late 1950s and in the 1960s and carries a number of influences from that period. In particular, Roger Penrose has credited Ivor Robinson as an important early influence in the development of twistor theory, through his construction of so-called Robinson congruences. == Overview == Projective twistor space P T {\displaystyle \mathbb {PT} } is projective 3-space C P 3 {\displaystyle \mathbb {CP} ^{3}} , the simplest 3-dimensional compact algebraic variety. It has a physical interpretation as the space of massless particles with spin. It is the projectivisation of a 4-dimensional complex vector space, non-projective twistor space T {\displaystyle \mathbb {T} } , with a Hermitian form of signature (2, 2) and a holomorphic volume form. This can be most naturally understood as the space of chiral (Weyl) spinors for the conformal group S O ( 4 , 2 ) / Z 2 {\displaystyle SO(4,2)/\mathbb {Z} _{2}} of Minkowski space; it is the fundamental representation of the spin group S U ( 2 , 2 ) {\displaystyle SU(2,2)} of the conformal group. This definition can be extended to arbitrary dimensions except that beyond dimension four, one defines projective twistor space to be the space of projective pure spinors for the conformal group. In its original form, twistor theory encodes physical fields on Minkowski space in terms of complex analytic objects on twistor space via the Penrose transform. This is especially natural for massless fields of arbitrary spin. In the first instance these are obtained via contour integral formulae in terms of free holomorphic functions on regions in twistor space. The holomorphic twistor functions that give rise to solutions to the massless field equations can be more deeply understood as Čech representatives of analytic cohomology classes on regions in P T {\displaystyle \mathbb {PT} } . These correspondences have been extended to certain nonlinear fields, including self-dual gravity in Penrose's nonlinear graviton construction and self-dual Yang–Mills fields in the so-called Ward construction; the former gives rise to deformations of the underlying complex structure of regions in P T {\displaystyle \mathbb {PT} } , and the latter to certain holomorphic vector bundles over regions in P T {\displaystyle \mathbb {PT} } . These constructions have had wide applications, including inter alia the theory of integrable systems. The self-duality condition is a major limitation for incorporating the full nonlinearities of physical theories, although it does suffice for Yang–Mills–Higgs monopoles and instantons (see ADHM construction). An early attempt to overcome this restriction was the introduction of ambitwistors by Isenberg, Yasskin and Green, and their superspace extension, super-ambitwistors, by Edward Witten. 
Ambitwistor space is the space of complexified light rays or massless particles and can be regarded as a complexification or cotangent bundle of the original twistor description. By extending the ambitwistor correspondence to suitably defined formal neighborhoods, Isenberg, Yasskin and Green showed the equivalence between the vanishing of the curvature along such extended null lines and the full Yang–Mills field equations. Witten showed that a further extension, within the framework of super Yang–Mills theory, including fermionic and scalar fields, gave rise, in the case of N = 1 or 2 supersymmetry, to the constraint equations, while for N = 3 (or 4), the vanishing condition for supercurvature along super null lines (super ambitwistors) implied the full set of field equations, including those for the fermionic fields. This was subsequently shown to give a one-to-one equivalence between the null curvature constraint equations and the supersymmetric Yang–Mills field equations. Through dimensional reduction, it may also be deduced from the analogous super-ambitwistor correspondence for 10-dimensional, N = 1 super-Yang–Mills theory. Twistorial formulae for interactions beyond the self-dual sector also arose in Witten's twistor string theory, which is a quantum theory of holomorphic maps of a Riemann surface into twistor space. This gave rise to the remarkably compact RSV (Roiban, Spradlin and Volovich) formulae for tree-level S-matrices of Yang–Mills theories, but its gravity degrees of freedom gave rise to a version of conformal supergravity limiting its applicability; conformal gravity is an unphysical theory containing ghosts, but its interactions are combined with those of Yang–Mills theory in loop amplitudes calculated via twistor string theory. Despite its shortcomings, twistor string theory led to rapid developments in the study of scattering amplitudes. One was the so-called MHV formalism, loosely based on disconnected strings, which was later given a more basic foundation in terms of a twistor action for full Yang–Mills theory in twistor space. Another key development was the introduction of BCFW recursion. This has a natural formulation in twistor space that in turn led to remarkable formulations of scattering amplitudes in terms of Grassmann integral formulae and polytopes. These ideas have evolved more recently into the positive Grassmannian and amplituhedron. Twistor string theory was extended first by generalising the RSV Yang–Mills amplitude formula, and then by finding the underlying string theory. The extension to gravity was given by Cachazo & Skinner, and formulated as a twistor string theory for maximal supergravity by David Skinner. Analogous formulae were then found in all dimensions by Cachazo, He and Yuan for Yang–Mills theory and gravity and subsequently for a variety of other theories. They were then understood as string theories in ambitwistor space by Mason and Skinner in a general framework that includes the original twistor string and extends to give a number of new models and formulae. As string theories they have the same critical dimensions as conventional string theory; for example the type II supersymmetric versions are critical in ten dimensions and are equivalent to the full field theory of type II supergravities in ten dimensions (this is distinct from conventional string theories that also have a further infinite hierarchy of massive higher spin states that provide an ultraviolet completion).
They extend to give formulae for loop amplitudes and can be defined on curved backgrounds. == The twistor correspondence == Denote Minkowski space by M {\displaystyle M} , with coordinates x a = ( t , x , y , z ) {\displaystyle x^{a}=(t,x,y,z)} and Lorentzian metric η a b {\displaystyle \eta _{ab}} of signature ( 1 , 3 ) {\displaystyle (1,3)} . Introduce 2-component spinor indices A = 0 , 1 ; A ′ = 0 ′ , 1 ′ , {\displaystyle A=0,1;\;A'=0',1',} and set x A A ′ = 1 2 ( t − z x + i y x − i y t + z ) . {\displaystyle x^{AA'}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}t-z&x+iy\\x-iy&t+z\end{pmatrix}}.} Non-projective twistor space T {\displaystyle \mathbb {T} } is a four-dimensional complex vector space with coordinates denoted by Z α = ( ω A , π A ′ ) {\displaystyle Z^{\alpha }=\left(\omega ^{A},\,\pi _{A'}\right)} where ω A {\displaystyle \omega ^{A}} and π A ′ {\displaystyle \pi _{A'}} are two constant Weyl spinors. The hermitian form can be expressed by defining a complex conjugation from T {\displaystyle \mathbb {T} } to its dual T ∗ {\displaystyle \mathbb {T} ^{*}} by Z ¯ α = ( π ¯ A , ω ¯ A ′ ) {\displaystyle {\bar {Z}}_{\alpha }=\left({\bar {\pi }}_{A},\,{\bar {\omega }}^{A'}\right)} so that the Hermitian form can be expressed as Z α Z ¯ α = ω A π ¯ A + ω ¯ A ′ π A ′ . {\displaystyle Z^{\alpha }{\bar {Z}}_{\alpha }=\omega ^{A}{\bar {\pi }}_{A}+{\bar {\omega }}^{A'}\pi _{A'}.} This together with the holomorphic volume form, ε α β γ δ Z α d Z β ∧ d Z γ ∧ d Z δ {\displaystyle \varepsilon _{\alpha \beta \gamma \delta }Z^{\alpha }dZ^{\beta }\wedge dZ^{\gamma }\wedge dZ^{\delta }} is invariant under the group SU(2,2), a quadruple cover of the conformal group C(1,3) of compactified Minkowski spacetime. Points in Minkowski space are related to subspaces of twistor space through the incidence relation ω A = i x A A ′ π A ′ . {\displaystyle \omega ^{A}=ix^{AA'}\pi _{A'}.} The incidence relation is preserved under an overall re-scaling of the twistor, so usually one works in projective twistor space P T , {\displaystyle \mathbb {PT} ,} which is isomorphic as a complex manifold to C P 3 {\displaystyle \mathbb {CP} ^{3}} . A point x ∈ M {\displaystyle x\in M} thereby determines a line C P 1 {\displaystyle \mathbb {CP} ^{1}} in P T {\displaystyle \mathbb {PT} } parametrised by π A ′ . {\displaystyle \pi _{A'}.} A twistor Z α {\displaystyle Z^{\alpha }} is most easily understood in space-time for complex values of the coordinates, where it defines a totally null two-plane that is self-dual. Take x {\displaystyle x} to be real; then if Z α Z ¯ α {\displaystyle Z^{\alpha }{\bar {Z}}_{\alpha }} vanishes, x {\displaystyle x} lies on a light ray, whereas if Z α Z ¯ α {\displaystyle Z^{\alpha }{\bar {Z}}_{\alpha }} is non-vanishing, there are no solutions, and indeed then Z α {\displaystyle Z^{\alpha }} corresponds to a massless particle with spin that is not localised in real space-time (a numerical check of the incidence relation is given at the end of this article). == Variations == === Supertwistors === Supertwistors are a supersymmetric extension of twistors introduced by Alan Ferber in 1978. Non-projective twistor space is extended by fermionic coordinates where N {\displaystyle {\mathcal {N}}} is the number of supersymmetries so that a twistor is now given by ( ω A , π A ′ , η i ) , i = 1 , … , N {\displaystyle \left(\omega ^{A},\,\pi _{A'},\,\eta ^{i}\right),i=1,\ldots ,{\mathcal {N}}} with η i {\displaystyle \eta ^{i}} anticommuting.
The super conformal group S U ( 2 , 2 | N ) {\displaystyle SU(2,2|{\mathcal {N}})} naturally acts on this space and a supersymmetric version of the Penrose transform takes cohomology classes on supertwistor space to massless supersymmetric multiplets on super Minkowski space. The N = 4 {\displaystyle {\mathcal {N}}=4} case provides the target for Witten's original twistor string and the N = 8 {\displaystyle {\mathcal {N}}=8} case is that for Skinner's supergravity generalisation. === Higher dimensional generalization of the Klein correspondence === A higher dimensional generalization of the Klein correspondence underlying twistor theory, applicable to isotropic subspaces of conformally compactified (complexified) Minkowski space and its super-space extensions, was developed by J. Harnad and S. Shnider. === Hyperkähler manifolds === Hyperkähler manifolds of dimension 4 k {\displaystyle 4k} also admit a twistor correspondence with a twistor space of complex dimension 2 k + 1 {\displaystyle 2k+1} . === Palatial twistor theory === The nonlinear graviton construction encodes only anti-self-dual, i.e., left-handed fields. A first step towards the problem of modifying twistor space so as to encode a general gravitational field is the encoding of right-handed fields. Infinitesimally, these are encoded in twistor functions or cohomology classes of homogeneity −6. The task of using such twistor functions in a fully nonlinear way so as to obtain a right-handed nonlinear graviton has been referred to as the (gravitational) googly problem. (The word "googly" is a term used in the game of cricket for a ball bowled with right-handed helicity using the apparent action that would normally give rise to left-handed helicity.) The most recent proposal in this direction by Penrose in 2015 was based on noncommutative geometry on twistor space and referred to as palatial twistor theory. The theory is named after Buckingham Palace, where Michael Atiyah suggested to Penrose the use of a type of "noncommutative algebra", an important component of the theory. (The underlying twistor structure in palatial twistor theory was modeled not on the twistor space but on the non-commutative holomorphic twistor quantum algebra.) == See also == Background independence Complex spacetime History of loop quantum gravity Robinson congruences Spin network Twisted geometries == Notes == == References == Roger Penrose (2004), The Road to Reality, Alfred A. Knopf, ch. 33, pp. 958–1009. Roger Penrose and Wolfgang Rindler (1984), Spinors and Space-Time; vol. 1, Two-Spinor Calculus and Relativistic Fields, Cambridge University Press, Cambridge. Roger Penrose and Wolfgang Rindler (1986), Spinors and Space-Time; vol. 2, Spinor and Twistor Methods in Space-Time Geometry, Cambridge University Press, Cambridge. == Further reading == Atiyah, Michael; Dunajski, Maciej; Mason, Lionel J. (2017). "Twistor theory at fifty: from contour integrals to twistor strings". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 473 (2206): 20170530. arXiv:1704.07464. Bibcode:2017RSPSA.47370530A. doi:10.1098/rspa.2017.0530. PMC 5666237. PMID 29118667. S2CID 5735524. Baird, P., "An Introduction to Twistors." Huggett, S. and Tod, K. P. (1994). An Introduction to Twistor Theory, second edition. Cambridge University Press. ISBN 9780521456890. OCLC 831625586. Hughston, L. P. (1979) Twistors and Particles. Springer Lecture Notes in Physics 97, Springer-Verlag. ISBN 978-3-540-09244-5. Hughston, L. P. and Ward, R.
S., eds (1979) Advances in Twistor Theory. Pitman. ISBN 0-273-08448-8. Mason, L. J. and Hughston, L. P., eds (1990) Further Advances in Twistor Theory, Volume I: The Penrose Transform and its Applications. Pitman Research Notes in Mathematics Series 231, Longman Scientific and Technical. ISBN 0-582-00466-7. Mason, L. J., Hughston, L. P., and Kobak, P. K., eds (1995) Further Advances in Twistor Theory, Volume II: Integrable Systems, Conformal Geometry, and Gravitation. Pitman Research Notes in Mathematics Series 232, Longman Scientific and Technical. ISBN 0-582-00465-9. Mason, L. J., Hughston, L. P., Kobak, P. K., and Pulverer, K., eds (2001) Further Advances in Twistor Theory, Volume III: Curved Twistor Spaces. Research Notes in Mathematics 424, Chapman and Hall/CRC. ISBN 1-58488-047-3. Penrose, Roger (1967), "Twistor Algebra", Journal of Mathematical Physics, 8 (2): 345–366, Bibcode:1967JMP.....8..345P, doi:10.1063/1.1705200, MR 0216828, archived from the original on 2013-01-12 Penrose, Roger (1968), "Twistor Quantisation and Curved Space-time", International Journal of Theoretical Physics, 1 (1): 61–99, Bibcode:1968IJTP....1...61P, doi:10.1007/BF00668831, S2CID 123628735 Penrose, Roger (1969), "Solutions of the Zero-Rest-Mass Equations", Journal of Mathematical Physics, 10 (1): 38–39, Bibcode:1969JMP....10...38P, doi:10.1063/1.1664756, archived from the original on 2013-01-12 Penrose, Roger (1977), "The Twistor Programme", Reports on Mathematical Physics, 12 (1): 65–76, Bibcode:1977RpMP...12...65P, doi:10.1016/0034-4877(77)90047-7, MR 0465032 Penrose, Roger (1999). "The Central Programme of Twistor Theory". Chaos, Solitons and Fractals. 10 (2–3): 581–611. Bibcode:1999CSF....10..581P. doi:10.1016/S0960-0779(98)00333-6. Witten, Edward (2004), "Perturbative Gauge Theory as a String Theory in Twistor Space", Communications in Mathematical Physics, 252 (1–3): 189–258, arXiv:hep-th/0312171, Bibcode:2004CMaPh.252..189W, doi:10.1007/s00220-004-1187-3, S2CID 14300396 == External links == Penrose, Roger (1999), "Einstein's Equation and Twistor Theory: Recent Developments" Penrose, Roger; Hadrovich, Fedja. "Twistor Theory." Hadrovich, Fedja, "Twistor Primer." Penrose, Roger. "On the Origins of Twistor Theory." Jozsa, Richard (1976), "Applications of Sheaf Cohomology in Twistor Theory." Dunajski, Maciej (2009). "Twistor Theory and Differential Equations". J. Phys. A: Math. Theor. 42 (40): 404004. arXiv:0902.0274. Bibcode:2009JPhA...42N4004D. doi:10.1088/1751-8113/42/40/404004. S2CID 62774126. Andrew Hodges, Summary of recent developments. Huggett, Stephen (2005), "The Elements of Twistor Theory." Mason, L. J., "The twistor programme and twistor strings: From twistor strings to quantum gravity?" Sämann, Christian (2006). Aspects of Twistor Geometry and Supersymmetric Field Theories within Superstring Theory (PhD). Universität Hannover. arXiv:hep-th/0603098. Sparling, George (1999), "On Time Asymmetry." Spradlin, Marcus (2012). "Progress and Prospects in Twistor String Theory" (PDF). hdl:11299/130081. MathWorld: Twistors. Universe Review: "Twistor Theory." Twistor newsletter archives.
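As promised above, the incidence relation and the statement that twistors incident with real points are null can be checked numerically. The following numpy sketch (illustrative names; it assumes the naive identification of the conjugate spinors with component-wise complex conjugates) builds x^{AA'} from a random real point, imposes the incidence relation, and confirms that Z α Z̄ α vanishes:

import numpy as np

def x_matrix(t, x, y, z):
    # The Hermitian 2x2 matrix x^{AA'} built from real space-time coordinates.
    return np.array([[t - z, x + 1j * y],
                     [x - 1j * y, t + z]]) / np.sqrt(2)

rng = np.random.default_rng(0)
t, x, y, z = rng.normal(size=4)                    # a real point of Minkowski space
pi = rng.normal(size=2) + 1j * rng.normal(size=2)  # the spinor pi_{A'}

X = x_matrix(t, x, y, z)
omega = 1j * X @ pi        # incidence relation: omega^A = i x^{AA'} pi_{A'}

# Z^alpha Zbar_alpha = omega^A pibar_A + omegabar^{A'} pi_{A'}
s = omega @ np.conj(pi) + np.conj(omega) @ pi
print(np.isclose(s, 0))   # True: the twistor is null, as expected for a real point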
Wikipedia/Twistor_theory
Plane-based geometric algebra is an application of Clifford algebra to modelling planes, lines, points, and rigid transformations. Generally this is with the goal of solving applied problems involving these elements and their intersections, projections, and their angle from one another in 3D space. Originally growing out of research on spin groups, it was developed with applications to robotics in mind. It has since been applied to machine learning, rigid body dynamics, and computer science, especially computer graphics. It is usually combined with a duality operation into a system known as "Projective Geometric Algebra", see below. Plane-based geometric algebra takes planar reflections as basic elements, and constructs all other transformations and geometric objects out of them. Formally: it identifies planar reflections with the grade-1 elements of a Clifford Algebra, that is, elements that are written with a single subscript such as " e 1 {\displaystyle {\boldsymbol {e}}_{1}} ". With some rare exceptions described below, the algebra is almost always Cl3,0,1(R), meaning it has three basis grade-1 elements whose square is 1 {\displaystyle 1} and a single basis element whose square is 0 {\displaystyle 0} . Plane-based GA subsumes a large number of algebraic constructions applied in engineering, including the axis–angle representation of rotations, the quaternion and dual quaternion representations of rotations and translations, the Plücker representation of lines, the point normal representation of planes, and the homogeneous representation of points. Dual Quaternions then allow the screw, twist and wrench model of classical mechanics to be constructed. The plane-based approach to geometry may be contrasted with the approach that uses the cross product, in which points, translations, rotation axes, and plane normals are all modelled as "vectors". However, use of vectors in advanced engineering problems often requires subtle distinctions between different kinds of vector, including Gibbs vectors, pseudovectors and contravariant vectors. The latter two of these, in plane-based GA, map to the concepts of "rotation axis" and "point", with the distinction between them being made clear by the notation: rotation axes such as e 13 {\displaystyle {\boldsymbol {e}}_{13}} (two lower indices) are always notated differently than points such as e 123 {\displaystyle {\boldsymbol {e}}_{123}} (three lower indices). All objects considered below are still "vectors" in the technical sense that they are elements of vector spaces; however they are not (generally) vectors in the sense that one could usefully visualize them as arrows (or take their cross product). Therefore, to avoid conflict over the different algebraic and visual connotations coming from the word 'vector', this article avoids use of the word. == Construction == Plane-based geometric algebra starts with planes and then constructs lines and points by taking intersections of planes. Its canonical basis consists of the plane x = 0 {\displaystyle x=0} , which is labelled e 1 {\displaystyle {\boldsymbol {e}}_{1}} ; the plane y = 0 {\displaystyle y=0} , which is labelled e 2 {\displaystyle {\boldsymbol {e}}_{2}} ; and the plane z = 0 {\displaystyle z=0} , labelled e 3 {\displaystyle {\boldsymbol {e}}_{3}} . Other planes may be obtained as weighted sums of the basis planes. For example, e 2 + e 3 {\displaystyle {\boldsymbol {e}}_{2}+{\boldsymbol {e}}_{3}} would be the plane midway between the y- and z-plane.
In general, adding two geometric objects in plane-based GA gives a weighted average of them – adding points will give a point between them, as will adding lines, and indeed rotations. An operation that is as fundamental as addition is the geometric product. For example: e 1 e 23 = e 123 {\displaystyle {\boldsymbol {e}}_{1}{\boldsymbol {e}}_{23}={\boldsymbol {e}}_{123}} Here we take e 1 {\displaystyle {\boldsymbol {e}}_{1}} , which is a planar reflection in the x = 0 {\displaystyle x=0} plane, and e 23 {\displaystyle {\boldsymbol {e}}_{23}} , which is a 180-degree rotation around the x-axis. Their geometric product is e 123 {\displaystyle {\boldsymbol {e}}_{123}} , which is a point reflection in the origin, because that is the transformation that results from a 180-degree rotation followed by a planar reflection in a plane orthogonal to the rotation's axis. For any pair of elements A {\textstyle A} and B {\displaystyle B} , their geometric product A {\textstyle A} B {\displaystyle B} is the transformation B {\displaystyle B} followed by the transformation A {\textstyle A} . Note that transform composition is not transform application; for example e 1 e 23 {\displaystyle {\boldsymbol {e}}_{1}{\boldsymbol {e}}_{23}} is not " e 23 {\displaystyle {\boldsymbol {e}}_{23}} transformed by e 1 {\displaystyle {\boldsymbol {e}}_{1}} ", it is instead the transform e 23 {\displaystyle {\boldsymbol {e}}_{23}} followed by the transform e 1 {\displaystyle {\boldsymbol {e}}_{1}} . Transform application is implemented with the sandwich product, see below. This geometric interpretation is usually combined with the following assertion: e 1 e 1 = 1 e 2 e 2 = 1 e 3 e 3 = 1 e 0 e 0 = 0 {\displaystyle {\boldsymbol {e}}_{1}{\boldsymbol {e}}_{1}=1\qquad {\boldsymbol {e}}_{2}{\boldsymbol {e}}_{2}=1\qquad {\boldsymbol {e}}_{3}{\boldsymbol {e}}_{3}=1\qquad {\boldsymbol {e}}_{0}{\boldsymbol {e}}_{0}=0} The geometric interpretation of the first three defining equations is that if we perform the same planar reflection twice we get back to where we started; i.e. any grade-1 element (plane) multiplied by itself results in the identity function, " 1 {\displaystyle 1} ". The statement that e 0 e 0 = 0 {\displaystyle {\boldsymbol {e}}_{0}{\boldsymbol {e}}_{0}=0} is more subtle. === Elements at infinity === The algebraic element e 0 {\displaystyle {\boldsymbol {e}}_{0}} represents the plane at infinity. It behaves differently from any other plane – intuitively, it can be "approached but never reached". In 3 dimensions, e 0 {\displaystyle {\boldsymbol {e}}_{0}} can be visualized as the sky. Lying in it are the points called "vanishing points", or alternatively "ideal points", or "points at infinity". Parallel lines such as metal rails on a railway line meet one another at such points. Lines at infinity also exist; the horizon line is an example of such a line. For an observer standing on a plane, all planes parallel to the plane they stand on meet one another at the horizon line. Algebraically, if we take e 2 {\displaystyle {\boldsymbol {e}}_{2}} to be the ground, then e 2 + 5 e 0 {\displaystyle {\boldsymbol {e}}_{2}+5{\boldsymbol {e}}_{0}} will be a plane parallel to the ground (displaced 5 meters from it). These two parallel planes meet one another at the line-at-infinity e 02 {\displaystyle {\boldsymbol {e}}_{02}} . Most lines, for example e 23 {\displaystyle {\boldsymbol {e}}_{23}} , can act as axes for rotations; in fact they can be treated as imaginary quaternions.
But lines that lie in the plane-at-infinity e 0 {\displaystyle {\boldsymbol {e}}_{0}} , such as the line e 30 {\displaystyle {\boldsymbol {e}}_{30}} , cannot act as axes for a "rotation". Instead, these are axes for translations, and instead of having an algebra resembling complex numbers or quaternions, their algebraic behaviour is the same as the dual numbers, since they square to 0. Combining the three basis lines-through-the-origin e 23 {\displaystyle {\boldsymbol {e}}_{23}} , e 13 {\displaystyle {\boldsymbol {e}}_{13}} , e 12 {\displaystyle {\boldsymbol {e}}_{12}} , which square to − 1 {\displaystyle -1} , with the three basis lines at infinity e 10 {\displaystyle {\boldsymbol {e}}_{10}} , e 20 {\displaystyle {\boldsymbol {e}}_{20}} , e 30 {\displaystyle {\boldsymbol {e}}_{30}} gives the necessary elements for (Plücker) coordinates of lines. === Derivation of other operations from the geometric product === With the geometric product having been defined as transform composition, there are several practically useful operations that can be extracted from it, similar to how the dot product and cross product were extracted from the quaternion product. These include: Application of any rigid transformation (dual quaternion) or reflection T {\textstyle T} to any object, including points, lines, planes and indeed other rigid transformations, is T A T ~ {\textstyle TA{\tilde {T}}} , where A {\textstyle A} is the object to be transformed. This is known as group conjugation or colloquially as the "sandwich product". The meet (or "wedge product") ∧ {\displaystyle \wedge } , which is useful for taking intersections of objects; for example, the intersection of the plane P {\displaystyle P} with the line L {\displaystyle L} is the point P ∧ L {\displaystyle P\wedge L} . The inner product ⋅ {\displaystyle \cdot } , which is useful for taking projections of objects onto other objects; the projection of A {\displaystyle A} onto B {\displaystyle B} is ( A ⋅ B ) B ~ {\displaystyle (A\cdot B){\tilde {B}}} – this formula holds whether the objects are points, lines, or planes. The norm of A {\displaystyle A} is A A ~ {\displaystyle {\sqrt {A{\tilde {A}}}}} and is denoted ‖ A ‖ {\displaystyle \lVert A\rVert } . It can be used to take angles between most objects: the angle between A {\displaystyle A} and B {\displaystyle B} , whether they are lines or planes, is arccos ⁡ ( ‖ A ⋅ B ‖ ) {\displaystyle \arccos(\lVert A\cdot B\rVert )} . This assumes that A {\displaystyle A} and B {\displaystyle B} both have norm 1 {\displaystyle 1} , i.e. ‖ A ‖ = ‖ B ‖ = 1 {\displaystyle \lVert A\rVert =\lVert B\rVert =1} . Thus it can be seen that the inner product is a generalization of the dot product. The transformation from A {\displaystyle A} to B {\displaystyle B} is 1 + B A ~ {\displaystyle 1+B{\tilde {A}}} , with A {\displaystyle A} and B {\displaystyle B} being points, lines or planes; here, A ~ {\displaystyle {\tilde {A}}} is the reverse (essentially the inverse). This again assumes A {\displaystyle A} and B {\displaystyle B} have unit norm. The commutator product × {\displaystyle \times } , defined as 1 2 ( A B − B A ) {\textstyle {\frac {1}{2}}(AB-BA)} . This is related to the Lie bracket and Poisson bracket. If A {\textstyle A} is the logarithm of a transformation being undergone by object B {\displaystyle B} , we have that the derivative with respect to time B ˙ {\displaystyle {\dot {B}}} is A {\textstyle A} × {\displaystyle \times } B {\displaystyle B} .
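These operations all rest on the basis-blade geometric product, which is straightforward to implement. The following Python sketch (a minimal implementation with illustrative names, not taken from any library) multiplies multivectors of Cl3,0,1(R) by sorting index lists with sign flips and contracting repeated indices against the metric; it reproduces the plane-composition example worked through next.

def canon(indices, metric=(0, 1, 1, 1)):
    # Bring a word of basis 1-blade indices to sorted order.  Swapping two
    # adjacent distinct indices flips the sign (e_i e_j = -e_j e_i); a repeated
    # adjacent index contracts to its metric square (e_0^2 = 0, e_i^2 = 1).
    idx, sign = list(indices), 1
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(idx) - 1:
            if idx[i] == idx[i + 1]:
                sign *= metric[idx[i]]
                del idx[i:i + 2]
                changed = True
            elif idx[i] > idx[i + 1]:
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                sign = -sign
                changed = True
            i += 1
    return sign, tuple(idx)

def gp(A, B):
    # Geometric product of multivectors stored as {index tuple: coefficient}.
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            sign, blade = canon(a + b)
            if sign:
                out[blade] = out.get(blade, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v}

e1 = {(1,): 1}
plane = {(1,): 1, (2,): 1, (0,): 1}     # the plane e1 + e2 + e0
print(gp(e1, plane))
# {(): 1, (1, 2): 1, (0, 1): -1}, i.e. 1 + e12 + e10 (note e10 = -e01)
# Keeping only the grade-2 part (two indices) gives the intersection line:
print({k: v for k, v in gp(e1, plane).items() if len(k) == 2})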
For example, recall that e 1 {\displaystyle {\boldsymbol {e}}_{1}} is a plane, as is e 1 + e 2 + e 0 {\displaystyle {\boldsymbol {e}}_{1}+{\boldsymbol {e}}_{2}+{\boldsymbol {e}}_{0}} . Their geometric product is their "reflection composition" – a reflection in e 1 {\displaystyle {\boldsymbol {e}}_{1}} followed by a reflection in e 1 + e 2 + e 0 {\displaystyle {\boldsymbol {e}}_{1}+{\boldsymbol {e}}_{2}+{\boldsymbol {e}}_{0}} , which results in the dual quaternion 1 + e 12 + e 10 {\displaystyle 1+{\boldsymbol {e}}_{12}+{\boldsymbol {e}}_{10}} . But this may be more than is desired; if we wish to take only the intersection line of the two planes, we simply need to look at just the "grade-2 part" of this result, i.e. the part with two lower indices, e 12 + e 10 {\displaystyle {\boldsymbol {e}}_{12}+{\boldsymbol {e}}_{10}} . The information needed to specify the intersection line is contained inside the transform composition of the two planes, because a reflection in a pair of planes will result in a rotation around their intersection line. == Interpretation as algebra of reflections == The algebra of all distance-preserving transformations in 3D is called the Euclidean Group, E ( 3 ) {\displaystyle E(3)} . By the Cartan–Dieudonné theorem, any element of it, which includes rotations and translations, can be written as a series of reflections in planes. In plane-based GA, essentially all geometric objects can be thought of as a transformation. Planes such as e 1 {\displaystyle {\boldsymbol {e}}_{1}} are planar reflections, points such as e 123 {\displaystyle {\boldsymbol {e}}_{123}} are point reflections, and lines such as e 12 {\displaystyle {\boldsymbol {e}}_{12}} are line reflections - which in 3D are the same thing as 180-degree rotations. The identity transform is the unique object that is constructed out of zero reflections. All of these are elements of E ( 3 ) {\displaystyle E(3)} . Some elements of E ( 3 ) {\displaystyle E(3)} , for example rotations by any angle that is not 180 degrees, do not have a single specific geometric object which is used to visualize them; nevertheless, they can always be thought of as being made up of reflections, and can always be represented as a linear combination of objects in plane-based geometric algebra. For example, 0.8 + 0.6 e 12 {\displaystyle 0.8+0.6{\boldsymbol {e}}_{12}} is a rotation about the e 12 {\displaystyle {\boldsymbol {e}}_{12}} axis, and it can be written as a geometric product (a transform composition) of e 1 {\displaystyle {\boldsymbol {e}}_{1}} and 0.8 e 1 + 0.6 e 2 {\displaystyle 0.8{\boldsymbol {e}}_{1}+0.6{\boldsymbol {e}}_{2}} , both of which are planar reflections intersecting at the line e 12 {\displaystyle {\boldsymbol {e}}_{12}} . In fact, any rotation can be written as a composition of two planar reflections that pass through its axis; thus it can be called a 2-reflection. Rotoreflections, glide reflections, and point reflections can also always be written as compositions of 3 planar reflections and so are called 3-reflections. The upper limit of this for 3D is a screw motion, which is a 4-reflection. For this reason, when considering screw motions, it is necessary to use the grade-4 element of 3D plane-based GA, e 1230 {\displaystyle {\boldsymbol {e}}_{1230}} , which is the highest-grade element. === Geometric interpretation of geometric product as "cancelling out" reflections === A reflection in a plane followed by a reflection in the same plane results in no change.
The algebraic interpretation for this geometry is that grade-1 elements such as e 1 {\displaystyle {\boldsymbol {e}}_{1}} square to 1. This simple fact can be used to give a geometric interpretation for the general behaviour of the geometric product as a device that solves geometric problems by "cancelling mirrors". To give an example of the usefulness of this, suppose we wish to find a plane orthogonal to a certain line L in 3D and passing through a certain point P. L is a 2-reflection and P {\displaystyle P} is a 3-reflection, so taking their geometric product PL in some sense produces a 5-reflection; however, two of these reflections cancel, leaving a 3-reflection (sometimes known as a rotoreflection). In the plane-based geometric algebra notation, this rotoreflection can be thought of as a planar reflection "added to" a point reflection. The plane part of this rotoreflection is the plane that is orthogonal to the line L and passes through the original point P. A similar procedure can be used to find the line orthogonal to a plane and passing through a point, or the intersection of a line and a plane, or the intersection line of a plane with another plane. === Rotations and translations as even subalgebra === Rotations and translations are transformations that preserve distances and handedness (chirality), i.e. when they are applied to sets of objects, the relative distances between those objects do not change; nor does their handedness, which is to say that a right-handed glove will not turn into a left-handed glove. All transformations in 3D euclidean plane-based geometric algebra preserve distances, but reflections, rotoreflections, and transflections do not preserve handedness. Rotations and translations do preserve handedness, which in 3D plane-based GA implies that they can be written as a composition of an even number of reflections. A rotation can be thought of as a reflection in a plane followed by a reflection in another plane which is not parallel to the first (this composition gives the quaternions, set in the context of plane-based GA above). If the planes were parallel, composing their reflections would give a translation. Rotations and translations are both special cases of screw motions, i.e. a rotation around a line in space followed by a translation directed along the same line. This group is usually called SE(3), the group of Special (handedness-preserving) Euclidean (distance-preserving) transformations in 3 dimensions. This group has two commonly-used representations that allow them to be used in algebra and computation, one being the 4×4 matrices of real numbers, and the other being the Dual Quaternions. The Dual Quaternion representation (like the usual quaternions) is actually a double cover of SE(3). Since the Dual Quaternions are closed under multiplication and addition and are made from an even number of basis elements, they are called the even subalgebra of 3D euclidean (plane-based) geometric algebra. The word 'spinor' is sometimes used to describe this subalgebra. Describing rigid transformations using planes was a major goal in the work of Camille Jordan and Michel Chasles, since it allows the treatment to be dimension-independent. == Generalizations == === Inversive Geometry === Inversive geometry is the study of geometric objects and behaviours generated by inversions in circles and spheres. Reflections in planes are a special case of inversions in spheres, because a plane is a sphere with infinite radius.
Since plane-based geometric algebra is generated by composition of reflections, it is a special case of inversive geometry. Inversive geometry itself can be performed with the larger system known as Conformal Geometric Algebra (CGA), of which plane-based GA is a subalgebra. CGA is also usually applied to 3D space, and is able to model general spheres, circles, and conformal (angle-preserving) transformations, which include the transformations seen on the Poincaré disk. It can be difficult to see the connection between PGA and CGA, since CGA is often "point based", although some authors take a plane-based approach to CGA which makes the notations for plane-based GA and CGA identical. === Projective Geometric Algebra === Plane-based geometric algebra is able to represent all Euclidean transformations, but in practice it is almost always combined with a dual operation of some kind to create the larger system known as "Projective Geometric Algebra", PGA. Duality, as in other Clifford and Grassmann algebras, allows a definition of the regressive product; denoting the dual of x {\displaystyle x} as x ⋆ {\displaystyle x\star } , the regressive product ∨ {\displaystyle \vee } has the property that ( a ∨ b ) ⋆ = a ⋆ ∧ b ⋆ {\displaystyle (a\vee b)\star =a\star \wedge b\star } . This is extremely useful for engineering applications - in plane-based GA, the regressive product can join a point to another point to obtain a line, and can join a point and a line to obtain a plane. It has the further convenience that if any two elements (points, lines, or planes) have norm (see above) equal to 1 {\displaystyle 1} , the norm of their regressive product is equal to the distance between them. The join of several points is also known as their affine hull. ==== Variants of duality and terminology ==== There is variation across authors as to the precise definition given for ⋆ {\displaystyle \star } that is used above. No matter which definition is given, the regressive product gives completely identical results. Since it is therefore of mainly theoretical rather than practical interest, precise discussion of the dual is usually not included in introductory material on projective geometric algebra. The different approaches to defining x ⋆ {\displaystyle x\star } include: Stating that x ⋆ {\displaystyle x\star } is the right complement of x {\displaystyle x} with respect to the pseudoscalar (the pseudoscalar is the dimension-dependent wedge product of all basis 1-vectors). In 3D we therefore have x ∧ x ⋆ = e 1230 {\displaystyle x\wedge x\star ={\text{e}}_{1230}} ; in 2D we instead have x ∧ x ⋆ = e 120 {\displaystyle x\wedge x\star ={\text{e}}_{120}} . This approach relates elements of plane-based geometric algebra to other elements of plane-based geometric algebra (e.g., other euclidean transformations); for example, in 3D, a planar reflection (plane) would dualize to a point reflection (point). This was the original and still most common definition of the dual, and is sometimes referred to as the Hodge dual. The projective dual also maps planes to points, but it is not the case that both are reflections; instead, the projective dual switches between the space that plane-based geometric algebra operates in and a non-euclidean (but neither hyperbolic nor elliptic) space discussed by Klein. For example, planes in plane-based geometric algebra, which perform planar reflections, are mapped to points in dual space which are involved in non-trivial transformations known as collineations.
Therefore, x {\displaystyle x} and x ⋆ {\displaystyle x\star } cannot both be drawn in familiar Euclidean space. Different authors have termed the plane-based GA part of PGA "Euclidean space" and "Antispace". Conformal Geometric Algebra (CGA) is a larger system of which plane-based GA is a subalgebra. The connection is subtle. The join of three points in CGA is defined geometrically as a circle, whereas in PGA it is a plane, which demonstrates that they are different operations. PGA "points" have a fundamentally different algebraic representation than CGA points; to compare the two algebras, PGA points must be recognized as a special case of CGA point pairs, where the pair has one point at infinity ("point reflections"). General point pairs and circles are involved in non-Euclidean transformations (as are most CGA objects, including all duals of PGA objects). To work with both, authors either carefully convert between point reflections and CGA points or work within a PGA-isomorphic subalgebra within CGA - possibly multiple such subalgebras. The second form of duality, combined with the fact that geometric objects are represented homogeneously (meaning that multiplication by scalars does not change them), is the reason that the system is known as "Projective" Geometric Algebra. It should be clarified that projective geometric algebra does not include the full projective group; this is unlike 3D Conformal Geometric Algebra, which contains the full conformal group. === Projective geometric algebra of non-euclidean geometries and Classical Lie Groups in 3 dimensions === To a first approximation, the physical world is euclidean, i.e. most transformations are rigid; Projective Geometric Algebra is therefore usually based on Cl3,0,1(R), since rigid transformations can be modelled in this algebra. However, it is possible to model other spaces by slightly varying the algebra. In these systems, the points, planes, and lines have the same coordinates that they have in plane-based GA. But transformations like rotations and reflections will have very different effects on the geometry. In all cases below, the algebra is a double cover of the group of reflections, rotations, and rotoreflections in the space. All formulae from the euclidean case carry over to these other geometries – the meet still functions as a way of taking the intersection of objects; the geometric product still functions as a way of composing transformations; and in the hyperbolic case the inner product becomes able to measure hyperbolic angles. All three even subalgebras are classical Lie groups (after taking the quotient by scalars). The associated Lie algebra for each group is the grade-2 elements of the Clifford algebra, not taking the quotient by scalars. == References ==
Wikipedia/Plane-based_geometric_algebra
In mathematics, a rigid transformation (also called Euclidean transformation or Euclidean isometry) is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points. The rigid transformations include rotations, translations, reflections, or any sequence of these. Reflections are sometimes excluded from the definition of a rigid transformation by requiring that the transformation also preserve the handedness of objects in the Euclidean space. (A reflection would not preserve handedness; for instance, it would transform a left hand into a right hand.) To avoid ambiguity, a transformation that preserves handedness is known as a rigid motion, a Euclidean motion, or a proper rigid transformation. In dimension two, a rigid motion is either a translation or a rotation. In dimension three, every rigid motion can be decomposed as the composition of a rotation and a translation, and is thus sometimes called a rototranslation. In dimension three, all rigid motions are also screw motions (this is Chasles' theorem). In dimension at most three, any improper rigid transformation can be decomposed into an improper rotation followed by a translation, or into a sequence of reflections. Any object will keep the same shape and size after a proper rigid transformation. All rigid transformations are examples of affine transformations. The set of all (proper and improper) rigid transformations is a mathematical group called the Euclidean group, denoted E(n) for n-dimensional Euclidean spaces. The set of rigid motions is called the special Euclidean group, and denoted SE(n). In kinematics, rigid motions in a 3-dimensional Euclidean space are used to represent displacements of rigid bodies. According to Chasles' theorem, every rigid transformation can be expressed as a screw motion. == Formal definition == A rigid transformation is formally defined as a transformation that, when acting on any vector v, produces a transformed vector T(v) of the form T(v) = Rv + t, where RT = R−1 (i.e., R is an orthogonal transformation), and t is a vector giving the translation of the origin. A proper rigid transformation has, in addition, det R = 1, which means that R does not produce a reflection, and hence it represents a rotation (an orientation-preserving orthogonal transformation). Indeed, when an orthogonal transformation matrix produces a reflection, its determinant is −1. == Distance formula == A measure of distance between points, or metric, is needed in order to confirm that a transformation is rigid. The Euclidean distance formula for Rn is the generalization of the Pythagorean theorem. The formula gives the distance squared between two points X and Y as the sum of the squares of the distances along the coordinate axes, that is d ( X , Y ) 2 = ( X 1 − Y 1 ) 2 + ( X 2 − Y 2 ) 2 + ⋯ + ( X n − Y n ) 2 = ( X − Y ) ⋅ ( X − Y ) . {\displaystyle d\left(\mathbf {X} ,\mathbf {Y} \right)^{2}=\left(X_{1}-Y_{1}\right)^{2}+\left(X_{2}-Y_{2}\right)^{2}+\dots +\left(X_{n}-Y_{n}\right)^{2}=\left(\mathbf {X} -\mathbf {Y} \right)\cdot \left(\mathbf {X} -\mathbf {Y} \right).} where X = (X1, X2, ..., Xn) and Y = (Y1, Y2, ..., Yn), and the dot denotes the scalar product. Using this distance formula, a rigid transformation g : Rn → Rn has the property, d ( g ( X ) , g ( Y ) ) 2 = d ( X , Y ) 2 .
{\displaystyle d(g(\mathbf {X} ),g(\mathbf {Y} ))^{2}=d(\mathbf {X} ,\mathbf {Y} )^{2}.} == Translations and linear transformations == A translation of a vector space adds a vector d to every vector in the space, which means it is the transformation g(v) = v + d. It is easy to show that this is a rigid transformation by showing that the distance between translated vectors equals the distance between the original vectors: d ( v + d , w + d ) 2 = ( v + d − w − d ) ⋅ ( v + d − w − d ) = ( v − w ) ⋅ ( v − w ) = d ( v , w ) 2 . {\displaystyle d(\mathbf {v} +\mathbf {d} ,\mathbf {w} +\mathbf {d} )^{2}=(\mathbf {v} +\mathbf {d} -\mathbf {w} -\mathbf {d} )\cdot (\mathbf {v} +\mathbf {d} -\mathbf {w} -\mathbf {d} )=(\mathbf {v} -\mathbf {w} )\cdot (\mathbf {v} -\mathbf {w} )=d(\mathbf {v} ,\mathbf {w} )^{2}.} A linear transformation of a vector space, L : Rn → Rn, preserves linear combinations, L ( V ) = L ( a v + b w ) = a L ( v ) + b L ( w ) . {\displaystyle L(\mathbf {V} )=L(a\mathbf {v} +b\mathbf {w} )=aL(\mathbf {v} )+bL(\mathbf {w} ).} A linear transformation L can be represented by a matrix, which means L(v) = [L]v, where [L] is an n×n matrix. A linear transformation is a rigid transformation if it satisfies the condition, d ( [ L ] v , [ L ] w ) 2 = d ( v , w ) 2 , {\displaystyle d([L]\mathbf {v} ,[L]\mathbf {w} )^{2}=d(\mathbf {v} ,\mathbf {w} )^{2},} that is d ( [ L ] v , [ L ] w ) 2 = ( [ L ] v − [ L ] w ) ⋅ ( [ L ] v − [ L ] w ) = ( [ L ] ( v − w ) ) ⋅ ( [ L ] ( v − w ) ) . {\displaystyle d([L]\mathbf {v} ,[L]\mathbf {w} )^{2}=([L]\mathbf {v} -[L]\mathbf {w} )\cdot ([L]\mathbf {v} -[L]\mathbf {w} )=([L](\mathbf {v} -\mathbf {w} ))\cdot ([L](\mathbf {v} -\mathbf {w} )).} Now using the fact that the scalar product of two vectors v and w can be written as the matrix operation vTw, where T denotes the matrix transpose, we have d ( [ L ] v , [ L ] w ) 2 = ( v − w ) T [ L ] T [ L ] ( v − w ) . {\displaystyle d([L]\mathbf {v} ,[L]\mathbf {w} )^{2}=(\mathbf {v} -\mathbf {w} )^{\mathsf {T}}[L]^{\mathsf {T}}[L](\mathbf {v} -\mathbf {w} ).} Thus, the linear transformation L is rigid if its matrix satisfies the condition [ L ] T [ L ] = [ I ] , {\displaystyle [L]^{\mathsf {T}}[L]=[I],} where [I] is the identity matrix. Matrices that satisfy this condition are called orthogonal matrices. This condition actually requires the columns of these matrices to be orthogonal unit vectors. Matrices that satisfy this condition form a mathematical group under the operation of matrix multiplication called the orthogonal group of n×n matrices and denoted O(n). Compute the determinant of the condition for an orthogonal matrix to obtain det ( [ L ] T [ L ] ) = det [ L ] 2 = det [ I ] = 1 , {\displaystyle \det \left([L]^{\mathsf {T}}[L]\right)=\det[L]^{2}=\det[I]=1,} which shows that the matrix [L] can have a determinant of either +1 or −1. Orthogonal matrices with determinant −1 are reflections, and those with determinant +1 are rotations. Notice that the set of orthogonal matrices can be viewed as consisting of two manifolds in Rn×n separated by the set of singular matrices. The set of rotation matrices is called the special orthogonal group, and denoted SO(n). It is an example of a Lie group because it has the structure of a manifold. == See also == Deformation (mechanics) Motion (geometry) Rigid body dynamics == References ==
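As a quick numerical check of these conditions, here is a minimal numpy sketch (the particular rotation and translation are arbitrary choices made for illustration):

import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])               # a rotation about the z-axis
d = np.array([2.0, -1.0, 0.5])                # a translation vector

print(np.allclose(R.T @ R, np.eye(3)))        # True: [L]^T [L] = [I]
print(np.isclose(np.linalg.det(R), 1.0))      # True: determinant +1, a rotation

rng = np.random.default_rng(1)
v, w = rng.normal(size=3), rng.normal(size=3)
T = lambda u: R @ u + d                       # rigid transformation T(v) = Rv + d
print(np.isclose(np.linalg.norm(T(v) - T(w)), np.linalg.norm(v - w)))  # True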
Wikipedia/Rigid_transformation
In mathematics, a universal geometric algebra is a type of geometric algebra generated by real vector spaces endowed with an indefinite quadratic form. Some authors restrict this to the infinite-dimensional case. The universal geometric algebra G {\displaystyle {\mathcal {G}}} (n, n) of order 2^{2n} is defined as the Clifford algebra of 2n-dimensional pseudo-Euclidean space Rn, n. This algebra is also called the "mother algebra". It has a nondegenerate signature. The vectors in this space generate the algebra through the geometric product. This product makes the manipulation of vectors closely resemble the familiar rules of algebra, although it is non-commutative. When n = ∞, i.e. there are countably many dimensions, then G {\displaystyle {\mathcal {G}}} (∞, ∞) is called simply the universal geometric algebra (UGA), which contains vector spaces such as Rp, q and their respective geometric algebras G {\displaystyle {\mathcal {G}}} (p, q). UGA contains all finite-dimensional geometric algebras (GA). The elements of UGA are called multivectors. Every multivector can be written as the sum of several r-vectors. Some r-vectors are scalars (r = 0), vectors (r = 1) and bivectors (r = 2). One may generate a finite-dimensional GA by choosing a unit pseudoscalar (I). The set of all vectors that satisfy a ∧ I = 0 {\displaystyle a\wedge I=0} is a vector space. The geometric product of the vectors in this vector space then defines the GA, of which I is a member. Since every finite-dimensional GA has a unique I (up to a sign), one can define or characterize the GA by it. A pseudoscalar can be interpreted as an n-plane segment of unit area in an n-dimensional vector space. == Vector manifolds == A vector manifold is a special set of vectors in the UGA. These vectors generate a set of linear spaces tangent to the vector manifold. Vector manifolds were introduced to do calculus on manifolds so one can define (differentiable) manifolds as a set isomorphic to a vector manifold. The difference lies in that a vector manifold is algebraically rich while a manifold is not. Since this is the primary motivation for vector manifolds the following interpretation is rewarding. Consider a vector manifold as a special set of "points". These points are members of an algebra and so can be added and multiplied. These points generate a tangent space of definite dimension "at" each point. This tangent space generates a (unit) pseudoscalar which is a function of the points of the vector manifold. A vector manifold is characterized by its pseudoscalar. The pseudoscalar can be interpreted as a tangent oriented n-plane segment of unit area. Bearing this in mind, a manifold looks locally like Rn at every point. Although a vector manifold can be treated as a completely abstract object, a geometric algebra is created so that every element of the algebra represents a geometric object and algebraic operations such as adding and multiplying correspond to geometric transformations. Consider a set of vectors {x} = Mn in UGA. If this set of vectors generates a set of "tangent" simple (n + 1)-vectors, which is to say ∀ x ∈ M n : ∃ I n ( x ) = x ∧ A ( x ) ∣ I n ( x ) ∨ M n = x {\displaystyle \forall x\in M^{n}:\exists I_{n}(x)=x\wedge A(x)\mid I_{n}(x)\lor M_{n}=x} then Mn is a vector manifold, the value of A being that of a simple n-vector. If one interprets these vectors as points then In(x) is the pseudoscalar of an algebra tangent to Mn at x. In(x) can be interpreted as a unit area at an oriented n-plane: this is why it is labeled with n.
The function In gives a distribution of these tangent n-planes over Mn. A vector manifold is defined similarly to how a particular GA can be defined, by its unit pseudoscalar. The set {x} is not closed under addition and multiplication by scalars, so it is not a vector space. At every point the vectors generate a tangent space of definite dimension. The vectors in this tangent space are different from the vectors of the vector manifold. In comparison to the original set they are bivectors, but since they span a linear space, the tangent space, they are also referred to as vectors. Notice that the dimension of this space is the dimension of the manifold. This linear space generates an algebra and its unit pseudoscalar characterizes the vector manifold. This is the manner in which the set of abstract vectors {x} defines the vector manifold. Once the set of "points" generates the "tangent space", the "tangent algebra" and its "pseudoscalar" follow immediately. The unit pseudoscalar of the vector manifold is a (pseudoscalar-valued) function of the points on the vector manifold. If this function is smooth then one says that the vector manifold is smooth. A manifold can be defined as a set isomorphic to a vector manifold. The points of a manifold do not have any algebraic structure and pertain only to the set itself. This is the main difference between a vector manifold and a manifold that is isomorphic to it. A vector manifold is always a subset of Universal Geometric Algebra by definition and the elements can be manipulated algebraically. In contrast, a manifold is not a subset of any set other than itself, but the elements have no algebraic relation among them. The differential geometry of a manifold can be carried out in a vector manifold. All quantities relevant to differential geometry can be calculated from In(x) if it is a differentiable function. This is the original motivation behind its definition. Vector manifolds allow an approach to the differential geometry of manifolds alternative to the "build-up" approach where structures such as metrics, connections and fiber bundles are introduced as needed. The relevant structure of a vector manifold is its tangent algebra. The use of geometric calculus along with the definition of vector manifold allows the study of geometric properties of manifolds without using coordinates. == See also == Conformal geometric algebra == References == D. Hestenes, G. Sobczyk (1987-08-31). Clifford Algebra to Geometric Calculus: a Unified Language for Mathematics and Physics. Springer. ISBN 902-772-561-6. C. Doran, A. Lasenby (2003-05-29). "6.5 Embedded Surfaces and Vector Manifolds". Geometric Algebra for Physicists. Cambridge University Press. ISBN 0-521-715-954. L. Dorst, J. Lasenby (2011). "19". Guide to Geometric Algebra in Practice. Springer. ISBN 0-857-298-100. Hongbo Li (2008). Invariant Algebras And Geometric Reasoning. World Scientific. ISBN 981-270-808-1.
Wikipedia/Universal_geometric_algebra
In linear algebra, an orthogonal transformation is a linear transformation T : V → V on a real inner product space V that preserves the inner product. That is, for each pair u, v of elements of V, we have ⟨ u , v ⟩ = ⟨ T u , T v ⟩ . {\displaystyle \langle u,v\rangle =\langle Tu,Tv\rangle \,.} Since the lengths of vectors and the angles between them are defined through the inner product, orthogonal transformations preserve lengths of vectors and angles between them. In particular, orthogonal transformations map orthonormal bases to orthonormal bases. Orthogonal transformations are injective: if T v = 0 {\displaystyle Tv=0} then 0 = ⟨ T v , T v ⟩ = ⟨ v , v ⟩ {\displaystyle 0=\langle Tv,Tv\rangle =\langle v,v\rangle } , hence v = 0 {\displaystyle v=0} , so the kernel of T {\displaystyle T} is trivial. Orthogonal transformations in two- or three-dimensional Euclidean space are rigid rotations, reflections, or combinations of a rotation and a reflection (also known as improper rotations). Reflections are transformations that reverse the direction front to back, orthogonal to the mirror plane, like (real-world) mirrors do. The matrices corresponding to proper rotations (without reflection) have a determinant of +1. Transformations with reflection are represented by matrices with a determinant of −1. This allows the concept of rotation and reflection to be generalized to higher dimensions. In finite-dimensional spaces, the matrix representation (with respect to an orthonormal basis) of an orthogonal transformation is an orthogonal matrix. Its rows are mutually orthogonal vectors with unit norm, so that the rows constitute an orthonormal basis of V. The columns of the matrix form another orthonormal basis of V. If an orthogonal transformation is invertible (which is always the case when V is finite-dimensional) then its inverse T − 1 {\displaystyle T^{-1}} is another orthogonal transformation identical to the transpose of T {\displaystyle T} : T − 1 = T T {\displaystyle T^{-1}=T^{\mathtt {T}}} . == Examples == Consider the inner-product space ( R 2 , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle (\mathbb {R} ^{2},\langle \cdot ,\cdot \rangle )} with the standard Euclidean inner product and standard basis. Then, the matrix transformation T = [ cos ⁡ ( θ ) − sin ⁡ ( θ ) sin ⁡ ( θ ) cos ⁡ ( θ ) ] : R 2 → R 2 {\displaystyle T={\begin{bmatrix}\cos(\theta )&-\sin(\theta )\\\sin(\theta )&\cos(\theta )\end{bmatrix}}:\mathbb {R} ^{2}\to \mathbb {R} ^{2}} is orthogonal.
To see this, consider T e 1 = [ cos ⁡ ( θ ) sin ⁡ ( θ ) ] T e 2 = [ − sin ⁡ ( θ ) cos ⁡ ( θ ) ] {\displaystyle {\begin{aligned}Te_{1}={\begin{bmatrix}\cos(\theta )\\\sin(\theta )\end{bmatrix}}&&Te_{2}={\begin{bmatrix}-\sin(\theta )\\\cos(\theta )\end{bmatrix}}\end{aligned}}} Then, ⟨ T e 1 , T e 1 ⟩ = [ cos ⁡ ( θ ) sin ⁡ ( θ ) ] ⋅ [ cos ⁡ ( θ ) sin ⁡ ( θ ) ] = cos 2 ⁡ ( θ ) + sin 2 ⁡ ( θ ) = 1 ⟨ T e 1 , T e 2 ⟩ = [ cos ⁡ ( θ ) sin ⁡ ( θ ) ] ⋅ [ − sin ⁡ ( θ ) cos ⁡ ( θ ) ] = sin ⁡ ( θ ) cos ⁡ ( θ ) − sin ⁡ ( θ ) cos ⁡ ( θ ) = 0 ⟨ T e 2 , T e 2 ⟩ = [ − sin ⁡ ( θ ) cos ⁡ ( θ ) ] ⋅ [ − sin ⁡ ( θ ) cos ⁡ ( θ ) ] = sin 2 ⁡ ( θ ) + cos 2 ⁡ ( θ ) = 1 {\displaystyle {\begin{aligned}&\langle Te_{1},Te_{1}\rangle ={\begin{bmatrix}\cos(\theta )&\sin(\theta )\end{bmatrix}}\cdot {\begin{bmatrix}\cos(\theta )\\\sin(\theta )\end{bmatrix}}=\cos ^{2}(\theta )+\sin ^{2}(\theta )=1\\&\langle Te_{1},Te_{2}\rangle ={\begin{bmatrix}\cos(\theta )&\sin(\theta )\end{bmatrix}}\cdot {\begin{bmatrix}-\sin(\theta )\\\cos(\theta )\end{bmatrix}}=\sin(\theta )\cos(\theta )-\sin(\theta )\cos(\theta )=0\\&\langle Te_{2},Te_{2}\rangle ={\begin{bmatrix}-\sin(\theta )&\cos(\theta )\end{bmatrix}}\cdot {\begin{bmatrix}-\sin(\theta )\\\cos(\theta )\end{bmatrix}}=\sin ^{2}(\theta )+\cos ^{2}(\theta )=1\\\end{aligned}}} The previous example can be extended to construct orthogonal transformations in higher dimensions. For example, the following matrices define orthogonal transformations on ( R 3 , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle (\mathbb {R} ^{3},\langle \cdot ,\cdot \rangle )} : [ cos ⁡ ( θ ) − sin ⁡ ( θ ) 0 sin ⁡ ( θ ) cos ⁡ ( θ ) 0 0 0 1 ] , [ cos ⁡ ( θ ) 0 − sin ⁡ ( θ ) 0 1 0 sin ⁡ ( θ ) 0 cos ⁡ ( θ ) ] , [ 1 0 0 0 cos ⁡ ( θ ) − sin ⁡ ( θ ) 0 sin ⁡ ( θ ) cos ⁡ ( θ ) ] {\displaystyle {\begin{bmatrix}\cos(\theta )&-\sin(\theta )&0\\\sin(\theta )&\cos(\theta )&0\\0&0&1\end{bmatrix}},{\begin{bmatrix}\cos(\theta )&0&-\sin(\theta )\\0&1&0\\\sin(\theta )&0&\cos(\theta )\end{bmatrix}},{\begin{bmatrix}1&0&0\\0&\cos(\theta )&-\sin(\theta )\\0&\sin(\theta )&\cos(\theta )\end{bmatrix}}} == See also == Geometric transformation Improper rotation Linear transformation Orthogonal matrix Rigid transformation Unitary transformation == References ==
Wikipedia/Orthogonal_transformation
In mathematics, a Grassmann–Cayley algebra is the exterior algebra with an additional product, which may be called the shuffle product or the regressive product. It is the most general structure in which projective properties are expressed in a coordinate-free way. The technique is based on the work of German mathematician Hermann Grassmann on exterior algebra, and subsequently on the work of British mathematician Arthur Cayley on matrices and linear algebra. It is a form of modeling algebra for use in projective geometry. The technique uses subspaces as basic elements of computation, a formalism which allows the translation of synthetic geometric statements into invariant algebraic statements. This can create a useful framework for the modeling of conics and quadrics, among other forms, and in tensor mathematics. It also has a number of applications in robotics, particularly for the kinematical analysis of manipulators. == References == == External links == Geometric Algebra FAQ
Wikipedia/Grassmann–Cayley_algebra
Plane-based geometric algebra is an application of Clifford algebra to modelling planes, lines, points, and rigid transformations. Generally this is with the goal of solving applied problems involving these elements and their intersections, projections, and the angles between them in 3D space. Originally growing out of research on spin groups, it was developed with applications to robotics in mind. It has since been applied to machine learning, rigid body dynamics, and computer science, especially computer graphics. It is usually combined with a duality operation into a system known as "Projective Geometric Algebra", see below. Plane-based geometric algebra takes planar reflections as basic elements, and constructs all other transformations and geometric objects out of them. Formally: it identifies planar reflections with the grade-1 elements of a Clifford algebra, that is, elements that are written with a single subscript such as " e 1 {\displaystyle {\boldsymbol {e}}_{1}} ". With some rare exceptions described below, the algebra is almost always Cl3,0,1(R), meaning it has three basis grade-1 elements whose square is 1 {\displaystyle 1} and a single basis element whose square is 0 {\displaystyle 0} . Plane-based GA subsumes a large number of algebraic constructions applied in engineering, including the axis–angle representation of rotations, the quaternion and dual quaternion representations of rotations and translations, the Plücker representation of lines, the point normal representation of planes, and the homogeneous representation of points. Dual quaternions then allow the screw, twist and wrench model of classical mechanics to be constructed. The plane-based approach to geometry may be contrasted with the approach that uses the cross product, in which points, translations, rotation axes, and plane normals are all modelled as "vectors". However, the use of vectors in advanced engineering problems often requires subtle distinctions between different kinds of vector, including Gibbs vectors, pseudovectors and contravariant vectors. The latter two of these, in plane-based GA, map to the concepts of "rotation axis" and "point", with the distinction between them being made clear by the notation: rotation axes such as e 13 {\displaystyle {\boldsymbol {e}}_{13}} (two lower indices) are always notated differently than points such as e 123 {\displaystyle {\boldsymbol {e}}_{123}} (three lower indices). All objects considered below are still "vectors" in the technical sense that they are elements of vector spaces; however they are not (generally) vectors in the sense that one could usefully visualize them as arrows (or take their cross product). Therefore, to avoid conflict over the different algebraic and visual connotations of the word 'vector', this article avoids use of the word. == Construction == Plane-based geometric algebra starts with planes and then constructs lines and points by taking intersections of planes. Its canonical basis consists of the plane x = 0 {\displaystyle x=0} , which is labelled e 1 {\displaystyle {\boldsymbol {e}}_{1}} ; the plane y = 0 {\displaystyle y=0} , which is labelled e 2 {\displaystyle {\boldsymbol {e}}_{2}} ; and the plane z = 0 {\displaystyle z=0} , which is labelled e 3 {\displaystyle {\boldsymbol {e}}_{3}} . Other planes may be obtained as weighted sums of the basis planes. For example, e 2 + e 3 {\displaystyle {\boldsymbol {e}}_{2}+{\boldsymbol {e}}_{3}} would be the plane midway between the y = 0 and z = 0 planes.
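To make the weighted-sum construction concrete, here is a minimal sketch (Python with NumPy; the coordinate layout over the basis (e1, e2, e3, e0) is an illustrative assumption, since the article does not fix one): a plane ax + by + cz + d = 0 is stored as its four grade-1 coefficients, and sums of planes are again planes.

```python
import numpy as np

# Coefficients over the grade-1 basis (e1, e2, e3, e0): the plane
# a*x + b*y + c*z + d = 0 is stored as the array [a, b, c, d].
e1 = np.array([1.0, 0.0, 0.0, 0.0])  # the plane x = 0
e2 = np.array([0.0, 1.0, 0.0, 0.0])  # the plane y = 0
e3 = np.array([0.0, 0.0, 1.0, 0.0])  # the plane z = 0
e0 = np.array([0.0, 0.0, 0.0, 1.0])  # the plane at infinity

mid = e2 + e3         # the plane y + z = 0, midway between y = 0 and z = 0
lifted = e2 + 5 * e0  # the plane y + 5 = 0, parallel to the ground plane e2
print(mid, lifted)
```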
In general, adding two geometric objects in plane-based GA gives a weighted average of them – adding points will give a point between them, as will adding lines, and indeed rotations. An operation that is as fundamental as addition is the geometric product. For example: e 1 e 23 = e 123 {\displaystyle {\boldsymbol {e}}_{1}{\boldsymbol {e}}_{23}={\boldsymbol {e}}_{123}} Here we take e 1 {\displaystyle {\boldsymbol {e}}_{1}} , which is a planar reflection in the x = 0 {\displaystyle x=0} plane, and e 23 {\displaystyle {\boldsymbol {e}}_{23}} , which is a 180-degree rotation around the x-axis. Their geometric product is e 123 {\displaystyle {\boldsymbol {e}}_{123}} , which is a point reflection in the origin, because that is the transformation that results from a 180-degree rotation followed by a planar reflection in a plane orthogonal to the rotation's axis. For any pair of elements A {\textstyle A} and B {\displaystyle B} , their geometric product A {\textstyle A} B {\displaystyle B} is the transformation B {\displaystyle B} followed by the transformation A {\textstyle A} . Note that transform composition is not transform application; for example e 1 e 23 {\displaystyle {\boldsymbol {e}}_{1}{\boldsymbol {e}}_{23}} is not " e 23 {\displaystyle {\boldsymbol {e}}_{23}} transformed by e 1 {\displaystyle {\boldsymbol {e}}_{1}} ", it is instead the transform e 23 {\displaystyle {\boldsymbol {e}}_{23}} followed by the transform e 1 {\displaystyle {\boldsymbol {e}}_{1}} . Transform application is implemented with the sandwich product, see below. This geometric interpretation is usually combined with the following assertion: e 1 e 1 = 1 e 2 e 2 = 1 e 3 e 3 = 1 e 0 e 0 = 0 {\displaystyle {\boldsymbol {e}}_{1}{\boldsymbol {e}}_{1}=1\qquad {\boldsymbol {e}}_{2}{\boldsymbol {e}}_{2}=1\qquad {\boldsymbol {e}}_{3}{\boldsymbol {e}}_{3}=1\qquad {\boldsymbol {e}}_{0}{\boldsymbol {e}}_{0}=0} The geometric interpretation of the first three defining equations is that if we perform the same planar reflection twice we get back to where we started; i.e. any grade-1 element (plane) multiplied by itself results in the identity function, " 1 {\displaystyle 1} ". The statement that e 0 e 0 = 0 {\displaystyle {\boldsymbol {e}}_{0}{\boldsymbol {e}}_{0}=0} is more subtle. === Elements at infinity === The algebraic element e 0 {\displaystyle {\boldsymbol {e}}_{0}} represents the plane at infinity. It behaves differently from any other plane – intuitively, it can be "approached but never reached". In 3 dimensions, e 0 {\displaystyle {\boldsymbol {e}}_{0}} can be visualized as the sky. Lying in it are the points called "vanishing points", or alternatively "ideal points", or "points at infinity". Parallel lines such as metal rails on a railway line meet one another at such points. Lines at infinity also exist; the horizon line is an example of such a line. For an observer standing on a plane, all planes parallel to the plane they stand on meet one another at the horizon line. Algebraically, if we take e 2 {\displaystyle {\boldsymbol {e}}_{2}} to be the ground, then e 2 + 5 e 0 {\displaystyle {\boldsymbol {e}}_{2}+5{\boldsymbol {e}}_{0}} will be a plane parallel to the ground (displaced 5 meters from it). These two parallel planes meet one another at the line-at-infinity e 02 {\displaystyle {\boldsymbol {e}}_{02}} . Most lines, for example e 23 {\displaystyle {\boldsymbol {e}}_{23}} , can act as axes for rotations; in fact they can be treated as imaginary quaternions.
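The defining relations above are enough to compute geometric products mechanically. The following is a minimal, self-contained sketch (Python, written for illustration here rather than taken from any geometric algebra library; the blade representation and function names are invented) of the Cl3,0,1(R) geometric product on basis blades, reproducing e1e23 = e123 and the null square e0e0 = 0 from this section:

```python
from itertools import product

# Signature of Cl(3,0,1): e1^2 = e2^2 = e3^2 = 1 and e0^2 = 0.
SQUARES = {0: 0, 1: 1, 2: 1, 3: 1}

def blade_product(a, b):
    """Multiply basis blades given as tuples of indices, e.g. (1,) and (2, 3).
    Returns (sign, blade) with the blade's indices in ascending order."""
    s = list(a) + list(b)
    sign = 1
    # Bubble sort, flipping the sign once per transposition of basis vectors.
    for i in range(len(s)):
        for j in range(len(s) - 1 - i):
            if s[j] > s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]
                sign = -sign
    # Contract repeated indices using the metric (e0 e0 = 0 kills the term).
    out, k = [], 0
    while k < len(s):
        if k + 1 < len(s) and s[k] == s[k + 1]:
            sign *= SQUARES[s[k]]
            k += 2
        else:
            out.append(s[k])
            k += 1
    return sign, tuple(out)

def gp(A, B):
    """Geometric product of multivectors stored as {blade tuple: coefficient}."""
    C = {}
    for (ba, ca), (bb, cb) in product(A.items(), B.items()):
        sign, blade = blade_product(ba, bb)
        if sign:
            C[blade] = C.get(blade, 0.0) + sign * ca * cb
    return {k: v for k, v in C.items() if v}

e1, e23 = {(1,): 1.0}, {(2, 3): 1.0}
print(gp(e1, e23))                   # {(1, 2, 3): 1.0}: e1 e23 = e123
print(gp(e1, e1))                    # {(): 1.0}: reflecting twice is the identity
print(gp({(0,): 1.0}, {(0,): 1.0}))  # {}: e0 squares to 0
```

With this sketch, gp({(1,): 1.0}, {(1,): 1.0, (2,): 1.0, (0,): 1.0}) evaluates to {(): 1.0, (1, 2): 1.0, (0, 1): -1.0}, i.e. the dual quaternion 1 + e12 + e10 that appears in the reflection-composition example below (blades are stored with ascending indices, so e10 prints as −e01).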
Lines that lie in the plane-at-infinity e 0 {\displaystyle {\boldsymbol {e}}_{0}} , such as the line e 30 {\displaystyle {\boldsymbol {e}}_{30}} , however, cannot act as axes for a "rotation". Instead, these are axes for translations, and instead of having an algebra resembling the complex numbers or quaternions, their algebraic behaviour is the same as that of the dual numbers, since they square to 0. Combining the three basis lines-through-the-origin e 23 {\displaystyle {\boldsymbol {e}}_{23}} , e 13 {\displaystyle {\boldsymbol {e}}_{13}} , e 12 {\displaystyle {\boldsymbol {e}}_{12}} , which square to − 1 {\displaystyle -1} , with the three basis lines at infinity e 10 {\displaystyle {\boldsymbol {e}}_{10}} , e 20 {\displaystyle {\boldsymbol {e}}_{20}} , e 30 {\displaystyle {\boldsymbol {e}}_{30}} gives the necessary elements for (Plücker) coordinates of lines. === Derivation of other operations from the geometric product === With the geometric product having been defined as transform composition, there are several practically useful operations that can be extracted from it, similar to how the dot product and cross product were extracted from the quaternion product. These include: Application of any rigid transformation (dual quaternion) or reflection T {\textstyle T} to any object, including points, lines, planes and indeed other rigid transformations, is T A T ~ {\textstyle TA{\tilde {T}}} , where A {\textstyle A} is the object to be transformed. This is known as group conjugation or colloquially as the "sandwich product". The meet (or "wedge product") ∧ {\displaystyle \wedge } , which is useful for taking intersections of objects; for example, the intersection of the plane P {\displaystyle P} with the line L {\displaystyle L} is the point P ∧ L {\displaystyle P\wedge L} . The inner product ⋅ {\displaystyle \cdot } , which is useful for taking projections of objects onto other objects; the projection of A {\displaystyle A} onto B {\displaystyle B} is ( A ⋅ B ) B ~ {\displaystyle (A\cdot B){\tilde {B}}} – this formula holds whether the objects are points, lines, or planes. The norm of A {\displaystyle A} is A A ~ {\displaystyle {\sqrt {A{\tilde {A}}}}} and is denoted ‖ A ‖ {\displaystyle \lVert A\rVert } . It can be used to take angles between most objects: the angle between A {\displaystyle A} and B {\displaystyle B} , whether they are lines or planes, is arccos ⁡ ( ‖ A ⋅ B ‖ ) {\displaystyle \arccos(\lVert A\cdot B\rVert )} . This assumes that A {\displaystyle A} and B {\displaystyle B} both have norm 1 {\displaystyle 1} , i.e. ‖ A ‖ = ‖ B ‖ = 1 {\displaystyle \lVert A\rVert =\lVert B\rVert =1} . Thus it can be seen that the inner product is a generalization of the dot product. The transformation from A {\displaystyle A} to B {\displaystyle B} is 1 + B A ~ {\displaystyle 1+B{\tilde {A}}} , with A {\displaystyle A} and B {\displaystyle B} being points, lines or planes; here, A ~ {\displaystyle {\tilde {A}}} is the reverse (essentially the inverse). This again assumes A {\displaystyle A} and B {\displaystyle B} have unit norm. The commutator product × {\displaystyle \times } , defined as 1 2 ( A B − B A ) {\textstyle {\frac {1}{2}}(AB-BA)} . This is related to the Lie bracket and Poisson bracket. If A {\textstyle A} is the logarithm of a transformation being undergone by object B {\displaystyle B} , we have that the derivative with respect to time B ˙ {\displaystyle {\dot {B}}} is A {\textstyle A} × {\displaystyle \times } B {\displaystyle B} .
For example, recall that e 1 {\displaystyle {\boldsymbol {e}}_{1}} is a plane, as is e 1 + e 2 + e 0 {\displaystyle {\boldsymbol {e}}_{1}+{\boldsymbol {e}}_{2}+{\boldsymbol {e}}_{0}} . Their geometric product is their "reflection composition" – a reflection in e 1 {\displaystyle {\boldsymbol {e}}_{1}} followed by a reflection in e 1 + e 2 + e 0 {\displaystyle {\boldsymbol {e}}_{1}+{\boldsymbol {e}}_{2}+{\boldsymbol {e}}_{0}} , which results in the dual quaternion 1 + e 12 + e 10 {\displaystyle 1+{\boldsymbol {e}}_{12}+{\boldsymbol {e}}_{10}} . But this may be more than is desired; if we wish to take only the intersection line of the two planes, we simply need to look at just the "grade-2 part" of this result, i.e. the part with two lower indices, e 12 + e 10 {\displaystyle {\boldsymbol {e}}_{12}+{\boldsymbol {e}}_{10}} . The information needed to specify the intersection line is contained inside the transform composition of the two planes, because a reflection in a pair of planes results in a rotation around their intersection line. == Interpretation as algebra of reflections == The algebra of all distance-preserving transformations in 3D is called the Euclidean group, E ( 3 ) {\displaystyle E(3)} . By the Cartan–Dieudonné theorem, any element of it, which includes rotations and translations, can be written as a series of reflections in planes. In plane-based GA, essentially all geometric objects can be thought of as transformations. Planes such as e 1 {\displaystyle {\boldsymbol {e}}_{1}} are planar reflections, points such as e 123 {\displaystyle {\boldsymbol {e}}_{123}} are point reflections, and lines such as e 12 {\displaystyle {\boldsymbol {e}}_{12}} are line reflections – which in 3D are the same thing as 180-degree rotations. The identity transform is the unique object that is constructed out of zero reflections. All of these are elements of E ( 3 ) {\displaystyle E(3)} . Some elements of E ( 3 ) {\displaystyle E(3)} , for example rotations by any angle that is not 180 degrees, do not have a single specific geometric object which is used to visualize them; nevertheless, they can always be thought of as being made up of reflections, and can always be represented as a linear combination of objects in plane-based geometric algebra. For example, 0.8 + 0.6 e 12 {\displaystyle 0.8+0.6{\boldsymbol {e}}_{12}} is a rotation about the e 12 {\displaystyle {\boldsymbol {e}}_{12}} axis, and it can be written as a geometric product (a transform composition) of e 1 {\displaystyle {\boldsymbol {e}}_{1}} and 0.8 e 1 + 0.6 e 2 {\displaystyle 0.8{\boldsymbol {e}}_{1}+0.6{\boldsymbol {e}}_{2}} , both of which are planar reflections intersecting at the line e 12 {\displaystyle {\boldsymbol {e}}_{12}} . In fact, any rotation can be written as a composition of two planar reflections that pass through its axis; thus it can be called a 2-reflection. Rotoreflections, glide reflections, and point reflections can also always be written as compositions of 3 planar reflections and so are called 3-reflections. The upper limit of this for 3D is a screw motion, which is a 4-reflection. For this reason, when considering screw motions, it is necessary to use the grade-4 element of 3D plane-based GA, e 1230 {\displaystyle {\boldsymbol {e}}_{1230}} , which is the highest-grade element. === Geometric interpretation of geometric product as "cancelling out" reflections === A reflection in a plane followed by a reflection in the same plane results in no change.
The algebraic interpretation for this geometry is that grade-1 elements such as e 1 {\displaystyle {\boldsymbol {e}}_{1}} square to 1. This simple fact can be used to give a geometric interpretation for the general behaviour of the geometric product as a device that solves geometric problems by "cancelling mirrors". To give an example of the usefulness of this, suppose we wish to find a plane orthogonal to a certain line L in 3D and passing through a certain point P. L is a 2-reflection and P {\displaystyle P} is a 3-reflection, so taking their geometric product PL in some sense produces a 5-reflection; however, two of these reflections cancel, leaving a 3-reflection (sometimes known as a rotoreflection). In the plane-based geometric algebra notation, this rotoreflection can be thought of as a planar reflection "added to" a point reflection. The plane part of this rotoreflection is the plane that is orthogonal to the line L and passes through the original point P. A similar procedure can be used to find the line orthogonal to a plane and passing through a point, or the intersection of a line and a plane, or the intersection line of a plane with another plane. === Rotations and translations as even subalgebra === Rotations and translations are transformations that preserve distances and handedness (chirality), i.e. when they are applied to sets of objects, the relative distances between those objects do not change; nor does their handedness, which is to say that a right-handed glove will not turn into a left-handed glove. All transformations in 3D Euclidean plane-based geometric algebra preserve distances, but reflections, rotoreflections, and transflections do not preserve handedness. Rotations and translations do preserve handedness, which in 3D plane-based GA implies that they can be written as a composition of an even number of reflections. A rotation can be thought of as a reflection in a plane followed by a reflection in another plane which is not parallel to the first (these compositions of two reflections are the quaternions, set in the context of PGA above). If the planes were parallel, composing their reflections would give a translation. Rotations and translations are both special cases of screw motions, e.g. a rotation around a line in space followed by a translation directed along the same line. This group is usually called SE(3), the group of Special (handedness-preserving) Euclidean (distance-preserving) transformations in 3 dimensions. This group has two commonly used representations that allow them to be used in algebra and computation, one being the 4×4 matrices of real numbers, and the other being the dual quaternions. The dual quaternion representation (like the usual quaternions) is actually a double cover of SE(3). Since the dual quaternions are closed under multiplication and addition, and are made from an even number of basis elements, they are called the even subalgebra of 3D Euclidean (plane-based) geometric algebra. The word 'spinor' is sometimes used to describe this subalgebra. Describing rigid transformations using planes was a major goal in the work of Camille Jordan and Michel Chasles, since it allows the treatment to be dimension-independent. == Generalizations == === Inversive Geometry === Inversive geometry is the study of geometric objects and behaviours generated by inversions in circles and spheres. Reflections in planes are a special case of inversions in spheres, because a plane is a sphere with infinite radius.
Since plane-based geometric algebra is generated by composition of reflections, it is a special case of inversive geometry. Inversive geometry itself can be performed with the larger system known as Conformal Geometric Algebra (CGA), of which plane-based GA is a subalgebra. CGA is also usually applied to 3D space, and is able to model general spheres, circles, and conformal (angle-preserving) transformations, which include the transformations seen on the Poincaré disk. It can be difficult to see the connection between PGA and CGA, since CGA is often "point based", although some authors take a plane-based approach to CGA which makes the notations for plane-based GA and CGA identical. === Projective Geometric Algebra === Plane-based geometric algebra is able to represent all Euclidean transformations, but in practice it is almost always combined with a dual operation of some kind to create the larger system known as "Projective Geometric Algebra", PGA. Duality, as in other Clifford and Grassmann algebras, allows a definition of the regressive product; denoting the dual of x {\displaystyle x} as x ⋆ {\displaystyle x\star } , the regressive product ∨ {\displaystyle \vee } has the property that ( a ∨ b ) ⋆ = a ⋆ ∧ b ⋆ {\displaystyle (a\vee b)\star =a\star \wedge b\star } . This is extremely useful for engineering applications – in plane-based GA, the regressive product can join a point to another point to obtain a line, and can join a point and a line to obtain a plane. It has the further convenience that if any two elements (points, lines, or planes) have norm (see above) equal to 1 {\displaystyle 1} , the norm of their regressive product is equal to the distance between them. The join of several points is also known as their affine hull. ==== Variants of duality and terminology ==== There is variation across authors as to the precise definition given for ⋆ {\displaystyle \star } that is used above. No matter which definition is given, the regressive product gives completely identical results. It is therefore of mainly theoretical rather than practical interest, and precise discussion of the dual is usually not included in introductory material on projective geometric algebra. The different approaches to defining x ⋆ {\displaystyle x\star } include: Stating that x ⋆ {\displaystyle x\star } is the right complement of x {\displaystyle x} with respect to the pseudoscalar (the pseudoscalar is the dimension-dependent wedge product of all basis 1-vectors). In 3D therefore we have x ∧ x ⋆ = e 1230 {\displaystyle x\wedge x\star ={\text{e}}_{1230}} ; in 2D we instead have x ∧ x ⋆ = e 120 {\displaystyle x\wedge x\star ={\text{e}}_{120}} . This approach relates elements of plane-based geometric algebra to other elements of plane-based geometric algebra (e.g., other Euclidean transformations); for example in 3D, a planar reflection (plane) would dualize to a point reflection (point). This was the original and still most common definition of the dual, and is sometimes referred to as the Hodge dual. The projective dual also maps planes to points, but it is not the case that both are reflections; instead, the projective dual switches between the space that plane-based geometric algebra operates in and a non-Euclidean (but neither hyperbolic nor elliptic) space discussed by Klein. For example, planes in plane-based geometric algebra, which perform planar reflections, are mapped to points in the dual space which are involved in non-trivial transformations known as collineations.
Therefore, x {\displaystyle x} and x ⋆ {\displaystyle x\star } cannot both be drawn in familiar Euclidean space. Different authors have termed the plane-based GA part of PGA "Euclidean space" and "Antispace". Conformal Geometric Algebra (CGA) is a larger system of which plane-based GA is a subalgebra. The connection is subtle. The join of three points in CGA is defined geometrically as a circle, whereas in PGA it is a plane, which demonstrates that they are different operations. PGA "points" have a fundamentally different algebraic representation than CGA points; to compare the two algebras, PGA points must be recognized as a special case of CGA point pairs, where the pair has one point at infinity ("point reflections"). General point pairs and circles are involved in non-Euclidean transformations (as are most CGA objects, including all duals of PGA objects). To work with both, authors either carefully convert between point reflections and CGA points, or work within a PGA-isomorphic subalgebra within CGA – possibly multiple such subalgebras. The second form of duality, combined with the fact that geometric objects are represented homogeneously (meaning that multiplication by scalars does not change them), is the reason that the system is known as "Projective" Geometric Algebra. It should be clarified that projective geometric algebra does not include the full projective group; this is unlike 3D Conformal Geometric Algebra, which contains the full conformal group. === Projective geometric algebra of non-Euclidean geometries and classical Lie groups in 3 dimensions === To a first approximation, the physical world is Euclidean, i.e. most transformations are rigid; Projective Geometric Algebra is therefore usually based on Cl3,0,1(R), since rigid transformations can be modelled in this algebra. However, it is possible to model other spaces by slightly varying the algebra. In these systems, the points, planes, and lines have the same coordinates that they have in plane-based GA. But transformations like rotations and reflections will have very different effects on the geometry. In all cases below, the algebra is a double cover of the group of reflections, rotations, and rotoreflections in the space. All formulae from the Euclidean case carry over to these other geometries – the meet still functions as a way of taking the intersection of objects; the geometric product still functions as a way of composing transformations; and in the hyperbolic case the inner product becomes able to measure hyperbolic angles. All three even subalgebras are classical Lie groups (after taking the quotient by scalars). The associated Lie algebra for each group is the grade-2 elements of the Clifford algebra, not taking the quotient by scalars. == References ==
Wikipedia/Plane-based_Geometric_Algebra
Conformal geometric algebra (CGA) is the geometric algebra constructed over the resultant space of a map from points in an n-dimensional base space Rp,q to null vectors in Rp+1,q+1. This allows operations on the base space, including reflections, rotations and translations, to be represented using versors of the geometric algebra; and it is found that points, lines, planes, circles and spheres gain particularly natural and computationally amenable representations. The effect of the mapping is that generalized (i.e. including zero curvature) k-spheres in the base space map onto (k + 2)-blades, and that the effect of a translation (or any conformal mapping) of the base space corresponds to a rotation in the higher-dimensional space. In the algebra of this space, based on the geometric product of vectors, such transformations correspond to the algebra's characteristic sandwich operations, similar to the use of quaternions for spatial rotation in 3D, which combine very efficiently. A consequence of rotors representing transformations is that the representations of spheres, planes, circles and other geometrical objects, and equations connecting them, all transform covariantly. A geometric object (a k-sphere) can be synthesized as the wedge product of k + 2 linearly independent vectors representing points on the object; conversely, the object can be decomposed as the repeated wedge product of vectors representing k + 2 distinct points on its surface. Some intersection operations also acquire a tidy algebraic form: for example, for the Euclidean base space R3, applying the wedge product to the dual of the tetravectors representing two spheres produces the dual of the trivector representation of their circle of intersection. As this algebraic structure lends itself directly to effective computation, it facilitates exploration of the classical methods of projective geometry and inversive geometry in a concrete, easy-to-manipulate setting. It has also been used as an efficient structure to represent and facilitate calculations in screw theory. CGA has particularly been applied in connection with the projective mapping of the everyday Euclidean space R3 into a five-dimensional vector space R4,1, which has been investigated for applications in robotics and computer vision. It can be applied generally to any pseudo-Euclidean space – for example, mapping Minkowski space R3,1 to the space R4,2. == Construction of CGA == === Notation and terminology === In this article, the focus is on the algebra G ( 4 , 1 ) {\displaystyle {\mathcal {G}}(4,1)} as it is this particular algebra that has been the subject of most attention over time; other cases are briefly covered in a separate section. The space containing the objects being modelled is referred to here as the base space, and the algebraic space used to model these objects as the representation or conformal space. A homogeneous subspace refers to a linear subspace of the algebraic space. The terms for objects: point, line, circle, sphere, quasi-sphere, etc. are used to mean either the geometric object in the base space, or the homogeneous subspace of the representation space that represents that object, with the latter generally being intended unless indicated otherwise. Algebraically, any nonzero null element of the homogeneous subspace will be used, with one element being referred to as normalized by some criterion. Boldface lowercase Latin letters are used to represent position vectors from the origin to a point in the base space.
Italic symbols are used for other elements of the representation space. === Base and representation spaces === The base space R3 is represented by extending a basis for the displacements from a chosen origin and adding two basis vectors e− and e+ orthogonal to the base space and to each other, with e−2 = −1 and e+2 = +1, creating the representation space G ( 4 , 1 ) {\displaystyle {\mathcal {G}}(4,1)} . It is convenient to use two null vectors no and n∞ as basis vectors in place of e+ and e−, where no = (e− − e+)/2, and n∞ = e− + e+. It can be verified, where x is in the base space, that: n o 2 = 0 n o ⋅ n ∞ = − 1 n o ⋅ x = 0 n ∞ 2 = 0 n o ∧ n ∞ = e − e + n ∞ ⋅ x = 0 {\displaystyle {\begin{array}{lllll}{n_{\text{o}}}^{2}&=0\qquad n_{\text{o}}\cdot n_{\infty }&=-1\qquad &n_{\text{o}}\cdot \mathbf {x} &=0\\{n_{\infty }}^{2}&=0\qquad n_{\text{o}}\wedge n_{\infty }&=e_{-}e_{+}\qquad &n_{\infty }\cdot \mathbf {x} &=0\end{array}}} These properties lead to the following formulas for the basis vector coefficients of a general vector r in the representation space for a basis with elements ei orthogonal to every other basis element: The coefficient of no for r is −n∞ ⋅ r The coefficient of n∞ for r is −no ⋅ r The coefficient of ei for r is ei−1 ⋅ r. === Mapping between the base space and the representation space === The mapping from a vector in the base space (taken from the origin to a point in the affine space represented) is given by the formula: g : x ↦ n o + x + 1 2 x 2 n ∞ {\displaystyle g:\mathbf {x} \mapsto n_{\text{o}}+\mathbf {x} +{\tfrac {1}{2}}\mathbf {x} ^{2}n_{\infty }} Points and other objects that differ only by a nonzero scalar factor all map to the same object in the base space. When normalisation is desired, as for generating a simple reverse map of a point from the representation space to the base space or determining distances, the condition g(x) ⋅ n∞ = −1 may be used. The forward mapping is equivalent to: first conformally projecting x from e123 onto a unit 3-sphere in the space e+ ∧ e123 (in 5-D this is in the subspace r ⋅ (−no − ⁠1/2⁠n∞) = 0); then lifting this into a projective space, by adjoining e– = 1, and identifying all points on the same ray from the origin (in 5-D this is in the subspace r ⋅ (−no − ⁠1/2⁠n∞) = 1); then changing the normalisation, so the plane for the homogeneous projection is given by the no co-ordinate having a value 1, i.e. r ⋅ n∞ = −1. === Inverse mapping === An inverse mapping for X on the null cone is given (Perwass eqn 4.37) by X ↦ P n ∞ ∧ n o ⊥ ( X − X ⋅ n ∞ ) {\displaystyle X\mapsto {\mathcal {P}}_{n_{\infty }\wedge n_{\text{o}}}^{\perp }\left({\frac {X}{-X\cdot n_{\infty }}}\right)} This first gives a stereographic projection from the light-cone onto the plane r ⋅ n∞ = −1, and then throws away the no and n∞ parts, so that the overall result is to map all of the equivalent points αX = α(no + x + ⁠1/2⁠x2n∞) to x. === Origin and point at infinity === The point x = 0 in Rp,q maps to no in Rp+1,q+1, so no is identified as the (representation) vector of the point at the origin. A vector in Rp+1,q+1 with a nonzero n∞ coefficient, but a zero no coefficient, must (considering the inverse map) be the image of an infinite vector in Rp,q. The direction n∞ therefore represents the (conformal) point at infinity. This motivates the subscripts o and ∞ for identifying the null basis vectors. The choice of the origin is arbitrary: any other point may be chosen, as the representation is of an affine space.
The origin merely represents a reference point, and is algebraically equivalent to any other point. As with any translation, changing the origin corresponds to a rotation in the representation space. == Geometrical objects == === Basis === Together with I 5 = e 123 E {\displaystyle I_{5}=e_{123}E} and 1 {\displaystyle 1} , these are the 32 basis blades of the algebra. The flat point at the origin is written as an outer product because the geometric product is of mixed grade ( E = e + e − {\displaystyle E=e_{+}e_{-}} ). === As the solution of a pair of equations === Given any nonzero blade A of the representing space, the set of vectors that are solutions to a pair of homogeneous equations of the form X 2 = 0 {\displaystyle X^{2}=0} X ∧ A = 0 {\displaystyle X\wedge A=0} is the union of homogeneous 1-d subspaces of null vectors, and is thus a representation of a set of points in the base space. This leads to the choice of a blade A as being a useful way to represent a particular class of geometric objects. Specific cases for the blade A (independent of the number of dimensions of the space) when the base space is Euclidean space are: a scalar: the empty set; a vector: a single point; a bivector: a pair of points; a trivector: a generalized circle; a 4-vector: a generalized sphere; etc. These each may split into three cases according to whether A2 is positive, zero or negative, corresponding (in reversed order in some cases) to the object as listed, a degenerate case of a single point, or no points (where the nonzero solutions of X ∧ A exclude null vectors). The listed geometric objects (generalized n-spheres) become quasi-spheres in the more general case of the base space being pseudo-Euclidean. Flat objects may be identified by the point at infinity being included in the solutions. Thus, if n∞ ∧ A = 0, the object will be a line, plane, etc., for the blade A respectively being of grade 3, 4, etc. === As derived from points of the object === A blade A representing one of this class of objects may be found as the outer product of linearly independent vectors representing points on the object. In the base space, this linear independence manifests as each point lying outside the object defined by the other points. So, for example, a fourth point lying on the generalized circle defined by three distinct points cannot be used as a fourth point to define a sphere. === Loci of points === Points in e123 map onto the null cone (the null parabola, if we set r ⋅ n ∞ = − 1 {\displaystyle r\cdot n_{\infty }=-1} ). We can consider the locus of points in e123 such that, in conformal space, g ( x ) ⋅ A = 0 {\displaystyle g(\mathbf {x} )\cdot A=0} , for various types of geometrical object A. We start by observing that g ( a ) ⋅ g ( b ) = − 1 2 ‖ a − b ‖ 2 {\displaystyle g(\mathbf {a} )\cdot g(\mathbf {b} )=-{\tfrac {1}{2}}\|\mathbf {a} -\mathbf {b} \|^{2}} . Compare the situation for vectors: x ⋅ a = 0 means x is perpendicular to a, and x ⋅ (a ∧ b) = 0 means x is perpendicular to both a and b; while x ∧ a = 0 means x is parallel to a, and x ∧ (a ∧ b) = 0 means x is parallel to a or to b (or to some linear combination of them). The inner product and outer product representations are related by dualisation: x ∧ A = 0 if and only if x ⋅ A* = 0 (this holds when x is a vector and A is a blade of grade n − 1). ==== g(x) ⋅ A = 0 ==== A point: the locus of x in R3 is a point if A in R4,1 is a vector on the null cone. (N.B. because this is a homogeneous projective space, vectors of any length on a ray through the origin are equivalent, so g(x) ⋅ A = 0 is equivalent to g(x) ⋅ g(a) = 0.) A sphere: the locus of x is a sphere if A = S, a vector off the null cone.
If S = g ( a ) − 1 2 ρ 2 e ∞ {\displaystyle \mathbf {S} =g(\mathbf {a} )-{\tfrac {1}{2}}\rho ^{2}\mathbf {e} _{\infty }} then S ⋅ X = 0 implies − 1 2 ( a − x ) 2 + 1 2 ρ 2 = 0 {\displaystyle -{\tfrac {1}{2}}(\mathbf {a} -\mathbf {x} )^{2}+{\tfrac {1}{2}}\rho ^{2}=0} ; these are the points of the sphere of radius ρ centred on a. For a vector S off the null cone, which directions are hyperbolically orthogonal? (Compare the geometry of Lorentz transformations.) In 2+1 D, if S is (1, a, b) (using coordinates e−, {e+, ei}), the points hyperbolically orthogonal to S are those Euclideanly orthogonal to (−1, a, b), i.e. a plane; or, in n dimensions, a hyperplane through the origin. This would cut another plane not through the origin in a line (a hypersurface in an (n−2)-surface), and then the cone in two points (respectively, some sort of (n−3) conic surface). So the locus looks like some kind of conic: it is the surface that is the image of a sphere under g. A plane: the locus of x is a plane if A = P, a vector with a zero no component. In a homogeneous projective space such a vector P represents a vector on the plane no = 1 that would be infinitely far from the origin (i.e. infinitely far outside the null cone), so g(x) ⋅ P = 0 corresponds to x on a sphere of infinite radius, a plane. In particular: P = a ^ + α e ∞ {\displaystyle \mathbf {P} ={\hat {\mathbf {a} }}+\alpha \mathbf {e} _{\infty }} corresponds to x on a plane with normal a ^ {\displaystyle {\hat {\mathbf {a} }}} at an orthogonal distance α from the origin. P = g ( a ) − g ( b ) {\displaystyle \mathbf {P} =g(\mathbf {a} )-g(\mathbf {b} )} corresponds to the plane half way between a and b, with normal a − b. Analogous treatments yield circles, tangent planes, lines, lines at infinity, and point pairs. == Transformations == Reflections: It can be verified that forming P g(x) P gives a new direction on the null-cone, g(x′), where x′ corresponds to a reflection in the plane of points p in R3 that satisfy g(p) ⋅ P = 0. Moreover, g(x) ⋅ A = 0 implies P g(x) P ⋅ P A P = 0 (and similarly for the wedge product), so the effect of applying P sandwich-fashion to any of the quantities A in the section above is similarly to reflect the corresponding locus of points x. The circles, spheres, lines and planes corresponding to particular types of A are thus reflected in exactly the same way that applying P to g(x) reflects a point x. This reflection operation can be used to build up general translations and rotations. Translations: Reflection in two parallel planes gives a translation, g ( x ′ ) = P β P α g ( x ) P α P β {\displaystyle g(\mathbf {x} ^{\prime })=\mathbf {P} _{\beta }\mathbf {P} _{\alpha }\;g(\mathbf {x} )\;\mathbf {P} _{\alpha }\mathbf {P} _{\beta }} If P α = a ^ + α e ∞ {\displaystyle \mathbf {P} _{\alpha }={\hat {\mathbf {a} }}+\alpha \mathbf {e} _{\infty }} and P β = a ^ + β e ∞ {\displaystyle \mathbf {P} _{\beta }={\hat {\mathbf {a} }}+\beta \mathbf {e} _{\infty }} then x ′ = x + 2 ( β − α ) a ^ {\displaystyle \mathbf {x} ^{\prime }=\mathbf {x} +2(\beta -\alpha ){\hat {\mathbf {a} }}} Rotations: g ( x ′ ) = b ^ a ^ g ( x ) a ^ b ^ {\displaystyle g(\mathbf {x} ^{\prime })={\hat {\mathbf {b} }}{\hat {\mathbf {a} }}\;g(\mathbf {x} )\;{\hat {\mathbf {a} }}{\hat {\mathbf {b} }}} corresponds to an x′ that is rotated about the origin by an angle 2θ, where θ is the angle between a and b – the same effect that this rotor would have if applied directly to x.
General rotations: rotations about a general point can be achieved by first translating the point to the origin, then rotating around the origin, then translating the point back to its original position, i.e. a sandwiching by the operator T R T ~ {\displaystyle \mathbf {TR{\tilde {T}}} } so that g ( G x ) = T R T ~ g ( x ) T R ~ T ~ {\displaystyle g({\mathcal {G}}x)=\mathbf {TR{\tilde {T}}} \;g(\mathbf {x} )\;\mathbf {T{\tilde {R}}{\tilde {T}}} } Screws: the effect of a screw, or motor (a rotation about a general point, followed by a translation parallel to the axis of rotation), can be achieved by sandwiching g(x) by the operator M = T 2 T 1 R T 1 ~ {\displaystyle \mathbf {M} =\mathbf {T_{2}T_{1}R{\tilde {T_{1}}}} } . M can also be parametrised as M = T ′ R ′ {\displaystyle \mathbf {M} =\mathbf {T^{\prime }R^{\prime }} } (Chasles' theorem). Inversions: an inversion is a reflection in a sphere – various operations that can be achieved using such inversions are discussed at inversive geometry. In particular, the combination of inversion together with the Euclidean transformations translation and rotation is sufficient to express any conformal mapping – i.e. any mapping that universally preserves angles (Liouville's theorem). Dilations: two inversions with the same centre produce a dilation. == Generalizations == == History == == Conferences and journals == There is a vibrant and interdisciplinary community around Clifford and geometric algebras with a wide range of applications. The main conferences in this subject include the International Conference on Clifford Algebras and their Applications in Mathematical Physics (ICCA) and Applications of Geometric Algebra in Computer Science and Engineering (AGACSE) series. A main publication outlet is the Springer journal Advances in Applied Clifford Algebras. == Notes == == References == == Bibliography ==
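The mapping g and the distance identity g(a) ⋅ g(b) = −½‖a − b‖² described above are easy to check numerically. Below is a minimal sketch (Python with NumPy; the choice of coordinate order over the basis (e1, e2, e3, e+, e−) is an assumption made for this example):

```python
import numpy as np

# Coordinates over the basis (e1, e2, e3, e+, e-) of R^{4,1}; e+^2 = +1, e-^2 = -1.
METRIC = np.diag([1.0, 1.0, 1.0, 1.0, -1.0])
E_PLUS = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
E_MINUS = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
N_O = 0.5 * (E_MINUS - E_PLUS)  # null vector n_o (the origin)
N_INF = E_MINUS + E_PLUS        # null vector n_inf (the point at infinity)

def inner(u, v):
    """Pseudo-Euclidean inner product of R^{4,1}."""
    return u @ METRIC @ v

def up(x):
    """The conformal embedding g(x) = n_o + x + (1/2) x^2 n_inf."""
    x5 = np.concatenate([x, [0.0, 0.0]])
    return N_O + x5 + 0.5 * (x @ x) * N_INF

a, b = np.array([1.0, 2.0, 0.5]), np.array([-0.3, 0.7, 2.0])
assert np.isclose(inner(up(a), up(a)), 0.0)   # images are null vectors
assert np.isclose(inner(N_O, N_INF), -1.0)    # n_o . n_inf = -1
assert np.isclose(inner(up(a), up(b)),
                  -0.5 * np.sum((a - b) ** 2))  # -1/2 |a - b|^2
```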
Wikipedia/Conformal_geometric_algebra
Screw theory is the algebraic calculation of pairs of vectors, also known as dual vectors – such as angular and linear velocity, or forces and moments – that arise in the kinematics and dynamics of rigid bodies. Screw theory provides a mathematical formulation for the geometry of lines which is central to rigid body dynamics, where lines form the screw axes of spatial movement and the lines of action of forces. The pair of vectors that form the Plücker coordinates of a line define a unit screw, and general screws are obtained by multiplication by a pair of real numbers and addition of vectors. Important theorems of screw theory include: the transfer principle proves that geometric calculations for points using vectors have parallel geometric calculations for lines obtained by replacing vectors with screws; Chasles' theorem proves that any change between two rigid object poses can be performed by a single screw; Poinsot's theorem proves that rotations about a rigid object's major and minor – but not intermediate – axes are stable. Screw theory is an important tool in robot mechanics, mechanical design, computational geometry and multibody dynamics. This is in part because of the relationship between screws and dual quaternions which have been used to interpolate rigid-body motions. Based on screw theory, an efficient approach has also been developed for the type synthesis of parallel mechanisms (parallel manipulators or parallel robots). == Basic concepts == A spatial displacement of a rigid body can be defined by a rotation about a line and a translation along the same line, called a screw motion. This is known as Chasles' theorem. The six parameters that define a screw motion are the four independent components of the Plücker vector that defines the screw axis, together with the rotation angle about and linear slide along this line, and form a pair of vectors called a screw. For comparison, the six parameters that define a spatial displacement can also be given by three Euler angles that define the rotation and the three components of the translation vector. === Screw === A screw is a six-dimensional vector constructed from a pair of three-dimensional vectors, such as forces and torques, or linear and angular velocities, that arise in the study of spatial rigid body movement. The components of the screw define the Plücker coordinates of a line in space and the magnitudes of the vector along the line and moment about this line. === Twist === A twist is a screw used to represent the velocity of a rigid body as an angular velocity around an axis and a linear velocity along this axis. All points in the body have the same component of the velocity along the axis; however, the greater the distance from the axis, the greater the velocity in the plane perpendicular to this axis. Thus, the helicoidal field formed by the velocity vectors in a moving rigid body flattens out the further the points are radially from the twist axis. The points in a body undergoing a constant twist motion trace helices in the fixed frame. If this screw motion has zero pitch then the trajectories trace circles, and the movement is a pure rotation. If the screw motion has infinite pitch then the trajectories are all straight lines in the same direction. === Wrench === The force and torque vectors that arise in applying Newton's laws to a rigid body can be assembled into a screw called a wrench.
A force has a point of application and a line of action, therefore it defines the Plücker coordinates of a line in space and has zero pitch. A torque, on the other hand, is a pure moment that is not bound to a line in space and is an infinite pitch screw. For a general wrench, the ratio of the magnitude of the moment to the magnitude of the force defines the pitch of the screw. == Algebra of screws == Let a screw be an ordered pair S = ( S , V ) , {\displaystyle {\mathsf {S}}=(\mathbf {S} ,\mathbf {V} ),} where S and V are three-dimensional real vectors. The sum and difference of these ordered pairs are computed componentwise. Screws are often called dual vectors. Now, introduce the ordered pair of real numbers â = (a, b), called a dual scalar. Let the addition and subtraction of these numbers be componentwise, and define multiplication as a ^ c ^ = ( a , b ) ( c , d ) = ( a c , a d + b c ) . {\displaystyle {\hat {\mathsf {a}}}{\hat {\mathsf {c}}}=(a,b)(c,d)=(ac,ad+bc).} The multiplication of a screw S = (S, V) by the dual scalar â = (a, b) is computed componentwise to be, a ^ S = ( a , b ) ( S , V ) = ( a S , a V + b S ) . {\displaystyle {\hat {\mathsf {a}}}{\mathsf {S}}=(a,b)(\mathbf {S} ,\mathbf {V} )=(a\mathbf {S} ,a\mathbf {V} +b\mathbf {S} ).} Finally, introduce the dot and cross products of screws by the formulas: S ⋅ T = ( S , V ) ⋅ ( T , W ) = ( S ⋅ T , S ⋅ W + V ⋅ T ) , {\displaystyle {\mathsf {S}}\cdot {\mathsf {T}}=(\mathbf {S} ,\mathbf {V} )\cdot (\mathbf {T} ,\mathbf {W} )=(\mathbf {S} \cdot \mathbf {T} ,\,\,\mathbf {S} \cdot \mathbf {W} +\mathbf {V} \cdot \mathbf {T} ),} which is a dual scalar, and S × T = ( S , V ) × ( T , W ) = ( S × T , S × W + V × T ) , {\displaystyle {\mathsf {S}}\times {\mathsf {T}}=(\mathbf {S} ,\mathbf {V} )\times (\mathbf {T} ,\mathbf {W} )=(\mathbf {S} \times \mathbf {T} ,\,\,\mathbf {S} \times \mathbf {W} +\mathbf {V} \times \mathbf {T} ),} which is a screw. The dot and cross products of screws satisfy the identities of vector algebra, and allow computations that directly parallel computations in the algebra of vectors. Let the dual scalar ẑ = (φ, d) define a dual angle; then the infinite series definitions of sine and cosine yield the relations sin ⁡ z ^ = ( sin ⁡ φ , d cos ⁡ φ ) , cos ⁡ z ^ = ( cos ⁡ φ , − d sin ⁡ φ ) , {\displaystyle \sin {\hat {\mathsf {z}}}=(\sin \varphi ,d\cos \varphi ),\,\,\,\cos {\hat {\mathsf {z}}}=(\cos \varphi ,-d\sin \varphi ),} which are also dual scalars. In general, the function of a dual variable is defined to be f(ẑ) = (f(φ), df′(φ)), where df′(φ) is the derivative of f(φ). These definitions allow the following results: Unit screws are Plücker coordinates of a line and satisfy the relation | S | = S ⋅ S = 1 ; {\displaystyle |{\mathsf {S}}|={\sqrt {{\mathsf {S}}\cdot {\mathsf {S}}}}=1;} Let ẑ = (φ, d) be the dual angle, where φ is the angle between the axes of S and T around their common normal, and d is the distance between these axes along the common normal; then S ⋅ T = | S | | T | cos ⁡ z ^ ; {\displaystyle {\mathsf {S}}\cdot {\mathsf {T}}=\left|{\mathsf {S}}\right|\left|{\mathsf {T}}\right|\cos {\hat {\mathsf {z}}};} Let N be the unit screw that defines the common normal to the axes of S and T, and let ẑ = (φ, d) be the dual angle between these axes; then S × T = | S | | T | sin ⁡ z ^ N . {\displaystyle {\mathsf {S}}\times {\mathsf {T}}=\left|{\mathsf {S}}\right|\left|{\mathsf {T}}\right|\sin {\hat {\mathsf {z}}}{\mathsf {N}}.} == Wrench == A common example of a screw is the wrench associated with a force acting on a rigid body.
Let P be the point of application of the force F, and let P also denote the vector locating this point in a fixed frame. The wrench W = (F, P × F) is a screw. The resultant force and moment obtained from all the forces Fi, i = 1, ..., n, acting on a rigid body is simply the sum of the individual wrenches Wi, that is R = ∑ i = 1 n W i = ∑ i = 1 n ( F i , P i × F i ) . {\displaystyle {\mathsf {R}}=\sum _{i=1}^{n}{\mathsf {W}}_{i}=\sum _{i=1}^{n}(\mathbf {F} _{i},\mathbf {P} _{i}\times \mathbf {F} _{i}).} Notice that the case of two equal but opposite forces F and −F acting at points A and B, respectively, yields the resultant R = ( F − F , A × F − B × F ) = ( 0 , ( A − B ) × F ) . {\displaystyle {\mathsf {R}}=(\mathbf {F} -\mathbf {F} ,\mathbf {A} \times \mathbf {F} -\mathbf {B} \times \mathbf {F} )=(0,(\mathbf {A} -\mathbf {B} )\times \mathbf {F} ).} This shows that screws of the form M = ( 0 , M ) , {\displaystyle {\mathsf {M}}=(0,\mathbf {M} ),} can be interpreted as pure moments. == Twist == In order to define the twist of a rigid body, we must consider its movement defined by the parameterized set of spatial displacements, D(t) = ([A(t)], d(t)), where [A] is a rotation matrix and d is a translation vector. This causes a point p that is fixed in moving body coordinates to trace a curve P(t) in the fixed frame given by P ( t ) = [ A ( t ) ] p + d ( t ) . {\displaystyle \mathbf {P} (t)=[A(t)]\mathbf {p} +\mathbf {d} (t).} The velocity of P is V P ( t ) = [ d A ( t ) d t ] p + v ( t ) , {\displaystyle \mathbf {V} _{P}(t)=\left[{\frac {dA(t)}{dt}}\right]\mathbf {p} +\mathbf {v} (t),} where v is the velocity of the origin of the moving frame, that is dd/dt. Now substitute p = [AT](P − d) into this equation to obtain, V P ( t ) = [ Ω ] P + v − [ Ω ] d or V P ( t ) = ω × P + v + d × ω , {\displaystyle \mathbf {V} _{P}(t)=[\Omega ]\mathbf {P} +\mathbf {v} -[\Omega ]\mathbf {d} \quad {\text{or}}\quad \mathbf {V} _{P}(t)=\mathbf {\omega } \times \mathbf {P} +\mathbf {v} +\mathbf {d} \times \mathbf {\omega } ,} where [Ω] = [dA/dt][AT] is the angular velocity matrix and ω is the angular velocity vector. The screw T = ( ω → , v + d × ω → ) , {\displaystyle {\mathsf {T}}=({\vec {\omega }},\mathbf {v} +\mathbf {d} \times {\vec {\omega }}),\!} is the twist of the moving body. The vector V = v + d × ω is the velocity of the point in the body that corresponds with the origin of the fixed frame. There are two important special cases: (i) when d is constant, that is v = 0, the twist is a pure rotation about a line, given by L = ( ω , d × ω ) , {\displaystyle {\mathsf {L}}=(\omega ,\mathbf {d} \times \omega ),} and (ii) when [Ω] = 0, that is the body does not rotate but only slides in the direction v, the twist is a pure slide given by T = ( 0 , v ) . {\displaystyle {\mathsf {T}}=(0,\mathbf {v} ).} === Revolute joints === For a revolute joint, let the axis of rotation pass through the point q and be directed along the vector ω; then the twist for the joint is given by, ξ = { ω q × ω } . {\displaystyle \xi ={\begin{Bmatrix}{\boldsymbol {\omega }}\\q\times {\boldsymbol {\omega }}\end{Bmatrix}}.} === Prismatic joints === For a prismatic joint, let the vector v define the direction of the slide; then the twist for the joint is given by, ξ = { 0 v } .
{\displaystyle \xi ={\begin{Bmatrix}0\\v\end{Bmatrix}}.} == Coordinate transformation of screws == The coordinate transformations for screws are easily understood by beginning with the coordinate transformations of the Plücker vector of a line, which in turn are obtained from the transformations of the coordinates of points on the line. Let the displacement of a body be defined by D = ([A], d), where [A] is the rotation matrix and d is the translation vector. Consider the line in the body defined by the two points p and q, which has the Plücker coordinates, q = ( q − p , p × q ) , {\displaystyle {\mathsf {q}}=(\mathbf {q} -\mathbf {p} ,\mathbf {p} \times \mathbf {q} ),} then in the fixed frame we have the transformed point coordinates P = [A]p + d and Q = [A]q + d, which yield Q = ( Q − P , P × Q ) = ( [ A ] ( q − p ) , [ A ] ( p × q ) + d × [ A ] ( q − p ) ) {\displaystyle {\mathsf {Q}}=(\mathbf {Q} -\mathbf {P} ,\mathbf {P} \times \mathbf {Q} )=([A](\mathbf {q} -\mathbf {p} ),[A](\mathbf {p} \times \mathbf {q} )+\mathbf {d} \times [A](\mathbf {q} -\mathbf {p} ))} Thus, a spatial displacement defines a transformation for Plücker coordinates of lines given by { Q − P P × Q } = [ A 0 D A A ] { q − p p × q } . {\displaystyle {\begin{Bmatrix}\mathbf {Q} -\mathbf {P} \\\mathbf {P} \times \mathbf {Q} \end{Bmatrix}}={\begin{bmatrix}A&0\\DA&A\end{bmatrix}}{\begin{Bmatrix}\mathbf {q} -\mathbf {p} \\\mathbf {p} \times \mathbf {q} \end{Bmatrix}}.} The matrix [D] is the skew-symmetric matrix that performs the cross product operation, that is [D]y = d × y. The 6×6 matrix obtained from the spatial displacement D = ([A], d) can be assembled into the dual matrix [ A ^ ] = ( [ A ] , [ D A ] ) , {\displaystyle [{\hat {\mathsf {A}}}]=([A],[DA]),} which operates on a screw s = (s, v) to obtain, S = [ A ^ ] s , ( S , V ) = ( [ A ] , [ D A ] ) ( s , v ) = ( [ A ] s , [ A ] v + [ D A ] s ) . {\displaystyle {\mathsf {S}}=[{\hat {\mathsf {A}}}]{\mathsf {s}},\quad (\mathbf {S} ,\mathbf {V} )=([A],[DA])(\mathbf {s} ,\mathbf {v} )=([A]\mathbf {s} ,[A]\mathbf {v} +[DA]\mathbf {s} ).} The dual matrix [Â] = ([A], [DA]) has determinant 1 and is called a dual orthogonal matrix. == Twists as elements of a Lie algebra == Consider the movement of a rigid body defined by the parameterized 4×4 homogeneous transform, P ( t ) = [ T ( t ) ] p = { P 1 } = [ A ( t ) d ( t ) 0 1 ] { p 1 } . {\displaystyle {\textbf {P}}(t)=[T(t)]{\textbf {p}}={\begin{Bmatrix}{\textbf {P}}\\1\end{Bmatrix}}={\begin{bmatrix}A(t)&{\textbf {d}}(t)\\0&1\end{bmatrix}}{\begin{Bmatrix}{\textbf {p}}\\1\end{Bmatrix}}.} This notation does not distinguish between P = (X, Y, Z, 1), and P = (X, Y, Z), which is hopefully clear in context. The velocity of this movement is defined by computing the velocity of the trajectories of the points in the body, V P = [ T ˙ ( t ) ] p = { V P 0 } = [ A ˙ ( t ) d ˙ ( t ) 0 0 ] { p 1 } . {\displaystyle {\textbf {V}}_{P}=[{\dot {T}}(t)]{\textbf {p}}={\begin{Bmatrix}{\textbf {V}}_{P}\\0\end{Bmatrix}}={\begin{bmatrix}{\dot {A}}(t)&{\dot {\textbf {d}}}(t)\\0&0\end{bmatrix}}{\begin{Bmatrix}{\textbf {p}}\\1\end{Bmatrix}}.} The dot denotes the derivative with respect to time, and because p is constant its derivative is zero.
Substitute the inverse transform for p into the velocity equation to obtain the velocity of P by operating on its trajectory P(t), that is V P = [ T ˙ ( t ) ] [ T ( t ) ] − 1 P ( t ) = [ S ] P , {\displaystyle {\textbf {V}}_{P}=[{\dot {T}}(t)][T(t)]^{-1}{\textbf {P}}(t)=[S]{\textbf {P}},} where [ S ] = [ Ω − Ω d + d ˙ 0 0 ] = [ Ω d × ω + v 0 0 ] . {\displaystyle [S]={\begin{bmatrix}\Omega &-\Omega {\textbf {d}}+{\dot {\textbf {d}}}\\0&0\end{bmatrix}}={\begin{bmatrix}\Omega &\mathbf {d} \times \omega +\mathbf {v} \\0&0\end{bmatrix}}.} Recall that [Ω] is the angular velocity matrix. The matrix [S] is an element of the Lie algebra se(3) of the Lie group SE(3) of homogeneous transforms. The components of [S] are the components of the twist screw, and for this reason [S] is also often called a twist. From the definition of the matrix [S], we can formulate the ordinary differential equation, [ T ˙ ( t ) ] = [ S ] [ T ( t ) ] , {\displaystyle [{\dot {T}}(t)]=[S][T(t)],} and ask for the movement [T(t)] that has a constant twist matrix [S]. The solution is the matrix exponential [ T ( t ) ] = e [ S ] t . {\displaystyle [T(t)]=e^{[S]t}.} This formulation can be generalized such that given an initial configuration g(0) in SE(n), and a twist ξ in se(n), the homogeneous transformation to a new location and orientation can be computed with the formula, g ( θ ) = exp ⁡ ( ξ θ ) g ( 0 ) , {\displaystyle g(\theta )=\exp(\xi \theta )g(0),} where θ represents the parameters of the transformation. == Screws by reflection == In transformation geometry, the elemental concept of transformation is reflection. In planar transformations a translation is obtained by reflection in parallel lines, and rotation is obtained by reflection in a pair of intersecting lines. To produce a screw transformation from similar concepts one must use planes in space: the parallel planes must be perpendicular to the screw axis, which is the line of intersection of the intersecting planes that generate the rotation of the screw. Thus four reflections in planes effect a screw transformation. The tradition of inversive geometry borrows some of the ideas of projective geometry and provides a language of transformation that does not depend on analytic geometry. == Homography == The combination of a translation with a rotation effected by a screw displacement can be illustrated with the exponential mapping. Since ε2 = 0 for dual numbers, exp(aε) = 1 + aε, all other terms of the exponential series vanishing. Let F = {1 + εr : r ∈ H}, ε2 = 0. Note that F is stable under the rotation q → p−1qp and under the translation (1 + εr)(1 + εs) = 1 + ε(r + s) for any vector quaternions r and s. F is a 3-flat in the eight-dimensional space of dual quaternions. This 3-flat F represents space, and the homography constructed, restricted to F, is a screw displacement of space. Let a be half the angle of the desired turn about axis r, and br half the displacement on the screw axis. Then form z = exp((a + bε)r) and z* = exp((a − bε)r). Now the homography is [ q : 1 ] ( z 0 0 z ∗ ) = [ q z : z ∗ ] ∼ [ ( z ∗ ) − 1 q z : 1 ] .
{\displaystyle [q:1]{\begin{pmatrix}z&0\\0&z^{*}\end{pmatrix}}=[qz:z^{*}]\thicksim [(z^{*})^{-1}qz:1].} The inverse for z* is 1 exp ⁡ ( a r − b ε r ) = ( e a r e − b r ε ) − 1 = e b r ε e − a r , {\displaystyle {\frac {1}{\exp(ar-b\varepsilon r)}}=(e^{ar}e^{-br\varepsilon })^{-1}=e^{br\varepsilon }e^{-ar},} so, the homography sends q to ( e b ε r e − a r ) q ( e a r e b ε r ) = e b ε r ( e − a r q e a r ) e b ε r = e 2 b ε r ( e − a r q e a r ) . {\displaystyle (e^{b\varepsilon r}e^{-ar})q(e^{ar}e^{b\varepsilon r})=e^{b\varepsilon r}(e^{-ar}qe^{ar})e^{b\varepsilon r}=e^{2b\varepsilon r}(e^{-ar}qe^{ar}).} Now for any quaternion vector p, p* = −p, let q = 1 + pε ∈ F, where the required rotation and translation are effected. Evidently the group of units of the ring of dual quaternions is a Lie group. A subgroup has Lie algebra generated by the parameters a r and b s, where a, b ∈ R, and r, s ∈ H. These six parameters generate a subgroup of the units, the unit sphere. Of course it includes F and the 3-sphere of versors. == Work of forces acting on a rigid body == Consider the set of forces F1, F2 ... Fn acting on the points X1, X2 ... Xn in a rigid body. The trajectories of Xi, i = 1,...,n are defined by the movement of the rigid body with rotation [A(t)] and the translation d(t) of a reference point in the body, given by X i ( t ) = [ A ( t ) ] x i + d ( t ) i = 1 , … , n , {\displaystyle \mathbf {X} _{i}(t)=[A(t)]\mathbf {x} _{i}+\mathbf {d} (t)\quad i=1,\ldots ,n,} where xi are coordinates in the moving body. The velocity of each point Xi is V i = ω → × ( X i − d ) + v , {\displaystyle \mathbf {V} _{i}={\vec {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+\mathbf {v} ,} where ω is the angular velocity vector and v is the derivative of d(t). The work done by the forces over the displacement δri = viδt of each point is given by δ W = F 1 ⋅ V 1 δ t + F 2 ⋅ V 2 δ t + ⋯ + F n ⋅ V n δ t . {\displaystyle \delta W=\mathbf {F} _{1}\cdot \mathbf {V} _{1}\delta t+\mathbf {F} _{2}\cdot \mathbf {V} _{2}\delta t+\cdots +\mathbf {F} _{n}\cdot \mathbf {V} _{n}\delta t.} Define the velocities of each point in terms of the twist of the moving body to obtain δ W = ∑ i = 1 n F i ⋅ ( ω → × ( X i − d ) + v ) δ t . {\displaystyle \delta W=\sum _{i=1}^{n}\mathbf {F} _{i}\cdot ({\vec {\omega }}\times (\mathbf {X} _{i}-\mathbf {d} )+\mathbf {v} )\delta t.} Expand this equation and collect coefficients of ω and v to obtain δ W = ( ∑ i = 1 n F i ) ⋅ d × ω → δ t + ( ∑ i = 1 n F i ) ⋅ v δ t + ( ∑ i = 1 n X i × F i ) ⋅ ω → δ t = ( ∑ i = 1 n F i ) ⋅ ( v + d × ω → ) δ t + ( ∑ i = 1 n X i × F i ) ⋅ ω → δ t . 
{\displaystyle {\begin{aligned}\delta W&=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot \mathbf {d} \times {\vec {\omega }}\delta t+\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot \mathbf {v} \delta t+\left(\sum _{i=1}^{n}\mathbf {X} _{i}\times \mathbf {F} _{i}\right)\cdot {\vec {\omega }}\delta t\\[4pt]&=\left(\sum _{i=1}^{n}\mathbf {F} _{i}\right)\cdot (\mathbf {v} +\mathbf {d} \times {\vec {\omega }})\delta t+\left(\sum _{i=1}^{n}\mathbf {X} _{i}\times \mathbf {F} _{i}\right)\cdot {\vec {\omega }}\delta t.\end{aligned}}} Introduce the twist of the moving body and the wrench acting on it given by T = ( ω → , d × ω → + v ) = ( T , T ∘ ) , W = ( ∑ i = 1 n F i , ∑ i = 1 n X i × F i ) = ( W , W ∘ ) , {\displaystyle {\mathsf {T}}=({\vec {\omega }},\mathbf {d} \times {\vec {\omega }}+\mathbf {v} )=(\mathbf {T} ,\mathbf {T} ^{\circ }),\quad {\mathsf {W}}=\left(\sum _{i=1}^{n}\mathbf {F} _{i},\sum _{i=1}^{n}\mathbf {X} _{i}\times \mathbf {F} _{i}\right)=(\mathbf {W} ,\mathbf {W} ^{\circ }),} then work takes the form δ W = ( W ⋅ T ∘ + W ∘ ⋅ T ) δ t . {\displaystyle \delta W=(\mathbf {W} \cdot \mathbf {T} ^{\circ }+\mathbf {W} ^{\circ }\cdot \mathbf {T} )\delta t.} The 6×6 matrix [Π] is used to simplify the calculation of work using screws, so that δ W = ( W ⋅ T ∘ + W ∘ ⋅ T ) δ t = W [ Π ] T δ t , {\displaystyle \delta W=(\mathbf {W} \cdot \mathbf {T} ^{\circ }+\mathbf {W} ^{\circ }\cdot \mathbf {T} )\delta t={\mathsf {W}}[\Pi ]{\mathsf {T}}\delta t,} where [ Π ] = [ 0 I I 0 ] , {\displaystyle [\Pi ]={\begin{bmatrix}0&I\\I&0\end{bmatrix}},} and [I] is the 3×3 identity matrix. === Reciprocal screws === If the virtual work of a wrench on a twist is zero, then the forces and torque of the wrench are constraint forces relative to the twist. The wrench and twist are said to be reciprocal, that is if δ W = W [ Π ] T δ t = 0 , {\displaystyle \delta W={\mathsf {W}}[\Pi ]{\mathsf {T}}\delta t=0,} then the screws W and T are reciprocal. === Twists in robotics === In the study of robotic systems the components of the twist are often transposed to eliminate the need for the 6×6 matrix [Π] in the calculation of work. In this case the twist is defined to be T ˇ = ( d × ω → + v , ω → ) , {\displaystyle {\check {\mathsf {T}}}=(\mathbf {d} \times {\vec {\omega }}+\mathbf {v} ,{\vec {\omega }}),} so the calculation of work takes the form δ W = W ⋅ T ˇ δ t . {\displaystyle \delta W={\mathsf {W}}\cdot {\check {\mathsf {T}}}\delta t.} In this case, if δ W = W ⋅ T ˇ δ t = 0 , {\displaystyle \delta W={\mathsf {W}}\cdot {\check {\mathsf {T}}}\delta t=0,} then the wrench W is reciprocal to the twist T. == History == The mathematical framework was developed by Sir Robert Stawell Ball in 1876 for application in kinematics and statics of mechanisms (rigid body mechanics). Felix Klein saw screw theory as an application of elliptic geometry and his Erlangen Program. He also worked out elliptic geometry, and a fresh view of Euclidean geometry, with the Cayley–Klein metric. The use of a symmetric matrix for a von Staudt conic and metric, applied to screws, has been described by Harvey Lipkin. Other prominent contributors include Julius Plücker, W. K. Clifford, F. M. Dimentberg, Kenneth H. Hunt, J. R. Phillips. The homography idea in transformation geometry was advanced by Sophus Lie more than a century ago. Even earlier, William Rowan Hamilton displayed the versor form of unit quaternions as exp(a r)= cos a + r sin a. The idea is also in Euler's formula parametrizing the unit circle in the complex plane. 
William Kingdon Clifford initiated the use of dual quaternions for kinematics, followed by Aleksandr Kotelnikov, Eduard Study (Geometrie der Dynamen), and Wilhelm Blaschke. However, the point of view of Sophus Lie has recurred. In 1940, Julian Coolidge described the use of dual quaternions for screw displacements on page 261 of A History of Geometrical Methods. He notes the 1885 contribution of Arthur Buchheim. Coolidge based his description simply on the tools Hamilton had used for real quaternions. == See also == Screw axis Newton–Euler equations use screws to describe rigid body motions and loading. Twist (differential geometry) Twist (rational trigonometry) == References == == External links == Joe Rooney William Kingdon Clifford, Department of Design and Innovation, the Open University, London. Ravi Banavar notes on Robotics, Geometry and Control
Wikipedia/Screw_theory
In mathematics and theoretical physics, a superalgebra is a Z2-graded algebra. That is, it is an algebra over a commutative ring or field with a decomposition into "even" and "odd" pieces and a multiplication operator that respects the grading. The prefix super- comes from the theory of supersymmetry in theoretical physics. Superalgebras and their representations, supermodules, provide an algebraic framework for formulating supersymmetry. The study of such objects is sometimes called super linear algebra. Superalgebras also play an important role in the related field of supergeometry where they enter into the definitions of graded manifolds, supermanifolds and superschemes. == Formal definition == Let K be a commutative ring. In most applications, K is a field of characteristic 0, such as R or C. A superalgebra over K is a K-module A with a direct sum decomposition A = A 0 ⊕ A 1 {\displaystyle A=A_{0}\oplus A_{1}} together with a bilinear multiplication A × A → A such that A i A j ⊆ A i + j {\displaystyle A_{i}A_{j}\subseteq A_{i+j}} where the subscripts are read modulo 2, i.e. they are thought of as elements of Z2. A superring, or Z2-graded ring, is a superalgebra over the ring of integers Z. The elements of each of the Ai are said to be homogeneous. The parity of a homogeneous element x, denoted by |x|, is 0 or 1 according to whether it is in A0 or A1. Elements of parity 0 are said to be even and those of parity 1 to be odd. If x and y are both homogeneous then so is the product xy and | x y | = | x | + | y | {\displaystyle |xy|=|x|+|y|} . An associative superalgebra is one whose multiplication is associative and a unital superalgebra is one with a multiplicative identity element. The identity element in a unital superalgebra is necessarily even. Unless otherwise specified, all superalgebras in this article are assumed to be associative and unital. A commutative superalgebra (or supercommutative algebra) is one which satisfies a graded version of commutativity. Specifically, A is commutative if y x = ( − 1 ) | x | | y | x y {\displaystyle yx=(-1)^{|x||y|}xy\,} for all homogeneous elements x and y of A. There are superalgebras that are commutative in the ordinary sense, but not in the superalgebra sense. For this reason, commutative superalgebras are often called supercommutative in order to avoid confusion. == Sign conventions == When the Z2 grading arises as a "rollup" of a Z- or N-graded algebra into even and odd components, then two distinct (but essentially equivalent) sign conventions can be found in the literature. These can be called the "cohomological sign convention" and the "super sign convention". They differ in how the antipode (exchange of two elements) behaves. In the first case, one has an exchange map x y ↦ ( − 1 ) m n + p q y x {\displaystyle xy\mapsto (-1)^{mn+pq}yx} where m = deg ⁡ x {\displaystyle m=\deg x} is the degree (Z- or N-grading) of x {\displaystyle x} and p {\displaystyle p} the parity. Likewise, n = deg ⁡ y {\displaystyle n=\deg y} is the degree of y {\displaystyle y} with parity q . {\displaystyle q.} This convention is commonly seen in conventional mathematical settings, such as differential geometry and differential topology. The other convention is to take x y ↦ ( − 1 ) p q y x {\displaystyle xy\mapsto (-1)^{pq}yx} with the parities given as p = m mod 2 {\displaystyle p=m{\bmod {2}}} and q = n mod 2 {\displaystyle q=n{\bmod {2}}} .
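As a concrete check of the grading rule |xy| = |x| + |y| and the supercommutation sign, here is a minimal sketch (my own illustration, not from the source) of the exterior algebra on two generators, the standard supercommutative example mentioned among the examples below:

```python
from itertools import product

# Sketch: exterior algebra on generators e1, e2. Basis monomials are sorted
# tuples of distinct generator indices; parity is the length mod 2; a product
# with a repeated generator is zero.
def mul(a, b):
    """Return (sign, monomial) for the product of basis monomials a and b."""
    if set(a) & set(b):
        return 0, None
    # sign = (-1)**(number of transpositions needed to sort the word a + b)
    sign = (-1) ** sum(1 for i in a for j in b if j < i)
    return sign, tuple(sorted(a + b))

def parity(a):
    return len(a) % 2

basis = [(), (1,), (2,), (1, 2)]
for x, y in product(basis, repeat=2):
    sx, mx = mul(x, y)
    sy, my = mul(y, x)
    # graded commutativity: y x = (-1)**(|x||y|) x y
    assert (sy, my) == ((-1) ** (parity(x) * parity(y)) * sx, mx)
    if sx != 0:  # parity rule |xy| = |x| + |y| (mod 2)
        assert parity(mx) == (parity(x) + parity(y)) % 2
```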
The latter convention is more often seen in physics texts, and requires a parity functor to be judiciously employed to track isomorphisms. Detailed arguments are provided by Pierre Deligne. == Examples == Any algebra over a commutative ring K may be regarded as a purely even superalgebra over K; that is, by taking A1 to be trivial. Any Z- or N-graded algebra may be regarded as a superalgebra by reading the grading modulo 2. This includes examples such as tensor algebras and polynomial rings over K. In particular, any exterior algebra over K is a superalgebra. The exterior algebra is the standard example of a supercommutative algebra. The symmetric polynomials and alternating polynomials together form a superalgebra, being the even and odd parts, respectively. Note that this is a different grading from the grading by degree. Clifford algebras are superalgebras. They are generally noncommutative. The set of all endomorphisms (denoted E n d ( V ) ≡ H o m ( V , V ) {\displaystyle \mathbf {End} (V)\equiv \mathbf {Hom} (V,V)} , where the boldface H o m {\displaystyle \mathrm {Hom} } is referred to as internal H o m {\displaystyle \mathrm {Hom} } , composed of all linear maps) of a super vector space forms a superalgebra under composition. The set of all square supermatrices with entries in K forms a superalgebra denoted by Mp|q(K). This algebra may be identified with the algebra of endomorphisms of a free supermodule over K of rank p|q and is the internal Hom described above for this space. Lie superalgebras are a graded analog of Lie algebras. Lie superalgebras are nonunital and nonassociative; however, one may construct the analog of a universal enveloping algebra of a Lie superalgebra which is a unital, associative superalgebra. == Further definitions and constructions == === Even subalgebra === Let A be a superalgebra over a commutative ring K. The submodule A0, consisting of all even elements, is closed under multiplication and contains the identity of A and therefore forms a subalgebra of A, naturally called the even subalgebra. It forms an ordinary algebra over K. The set of all odd elements A1 is an A0-bimodule whose scalar multiplication is just multiplication in A. The product in A equips A1 with a bilinear form μ : A 1 ⊗ A 0 A 1 → A 0 {\displaystyle \mu :A_{1}\otimes _{A_{0}}A_{1}\to A_{0}} such that μ ( x ⊗ y ) ⋅ z = x ⋅ μ ( y ⊗ z ) {\displaystyle \mu (x\otimes y)\cdot z=x\cdot \mu (y\otimes z)} for all x, y, and z in A1. This follows from the associativity of the product in A. === Grade involution === There is a canonical involutive automorphism on any superalgebra called the grade involution. It is given on homogeneous elements by x ^ = ( − 1 ) | x | x {\displaystyle {\hat {x}}=(-1)^{|x|}x} and on arbitrary elements by x ^ = x 0 − x 1 {\displaystyle {\hat {x}}=x_{0}-x_{1}} where xi are the homogeneous parts of x. If A has no 2-torsion (in particular, if 2 is invertible) then the grade involution can be used to distinguish the even and odd parts of A: A i = { x ∈ A : x ^ = ( − 1 ) i x } . {\displaystyle A_{i}=\{x\in A:{\hat {x}}=(-1)^{i}x\}.} === Supercommutativity === The supercommutator on A is the binary operator given by [ x , y ] = x y − ( − 1 ) | x | | y | y x {\displaystyle [x,y]=xy-(-1)^{|x||y|}yx} on homogeneous elements, extended to all of A by linearity. Elements x and y of A are said to supercommute if [x, y] = 0. The supercenter of A is the set of all elements of A which supercommute with all elements of A: Z ( A ) = { a ∈ A : [ a , x ] = 0 for all x ∈ A } . 
{\displaystyle \mathrm {Z} (A)=\{a\in A:[a,x]=0{\text{ for all }}x\in A\}.} The supercenter of A is, in general, different from the center of A as an ungraded algebra. A commutative superalgebra is one whose supercenter is all of A. === Super tensor product === The graded tensor product of two superalgebras A and B may be regarded as a superalgebra A ⊗ B with a multiplication rule determined by: ( a 1 ⊗ b 1 ) ( a 2 ⊗ b 2 ) = ( − 1 ) | b 1 | | a 2 | ( a 1 a 2 ⊗ b 1 b 2 ) . {\displaystyle (a_{1}\otimes b_{1})(a_{2}\otimes b_{2})=(-1)^{|b_{1}||a_{2}|}(a_{1}a_{2}\otimes b_{1}b_{2}).} If either A or B is purely even, this is equivalent to the ordinary ungraded tensor product (except that the result is graded). However, in general, the super tensor product is distinct from the tensor product of A and B regarded as ordinary, ungraded algebras. == Generalizations and categorical definition == One can easily generalize the definition of superalgebras to include superalgebras over a commutative superring. The definition given above is then a specialization to the case where the base ring is purely even. Let R be a commutative superring. A superalgebra over R is an R-supermodule A with an R-bilinear multiplication A × A → A that respects the grading. Bilinearity here means that r ⋅ ( x y ) = ( r ⋅ x ) y = ( − 1 ) | r | | x | x ( r ⋅ y ) {\displaystyle r\cdot (xy)=(r\cdot x)y=(-1)^{|r||x|}x(r\cdot y)} for all homogeneous elements r ∈ R and x, y ∈ A. Equivalently, one may define a superalgebra over R as a superring A together with a superring homomorphism R → A whose image lies in the supercenter of A. One may also define superalgebras categorically. The category of all R-supermodules forms a monoidal category under the super tensor product with R serving as the unit object. An associative, unital superalgebra over R can then be defined as a monoid in the category of R-supermodules. That is, a superalgebra is an R-supermodule A with two (even) morphisms μ : A ⊗ A → A η : R → A {\displaystyle {\begin{aligned}\mu &:A\otimes A\to A\\\eta &:R\to A\end{aligned}}} for which the usual diagrams commute. == Notes == == References == Deligne, P.; Morgan, J. W. (1999). "Notes on Supersymmetry (following Joseph Bernstein)". Quantum Fields and Strings: A Course for Mathematicians. Vol. 1. American Mathematical Society. pp. 41–97. ISBN 0-8218-2012-5. Kac, V. G.; Martinez, C.; Zelmanov, E. (2001). Graded simple Jordan superalgebras of growth one. Memoirs of the AMS Series. Vol. 711. AMS Bookstore. ISBN 978-0-8218-2645-4. Manin, Y. I. (1997). Gauge Field Theory and Complex Geometry (2nd ed.). Berlin: Springer. ISBN 3-540-61378-1. Varadarajan, V. S. (2004). Supersymmetry for Mathematicians: An Introduction. Courant Lecture Notes in Mathematics. Vol. 11. American Mathematical Society. ISBN 978-0-8218-3574-6.
Wikipedia/Even_subalgebra
In mathematics, a rigid transformation (also called Euclidean transformation or Euclidean isometry) is a geometric transformation of a Euclidean space that preserves the Euclidean distance between every pair of points. The rigid transformations include rotations, translations, reflections, or any sequence of these. Reflections are sometimes excluded from the definition of a rigid transformation by requiring that the transformation also preserve the handedness of objects in the Euclidean space. (A reflection would not preserve handedness; for instance, it would transform a left hand into a right hand.) To avoid ambiguity, a transformation that preserves handedness is known as a rigid motion, a Euclidean motion, or a proper rigid transformation. In dimension two, a rigid motion is either a translation or a rotation. In dimension three, every rigid motion can be decomposed as the composition of a rotation and a translation, and is thus sometimes called a rototranslation. In dimension three, all rigid motions are also screw motions (this is Chasles' theorem). In dimension at most three, any improper rigid transformation can be decomposed into an improper rotation followed by a translation, or into a sequence of reflections. Any object will keep the same shape and size after a proper rigid transformation. All rigid transformations are examples of affine transformations. The set of all (proper and improper) rigid transformations is a mathematical group called the Euclidean group, denoted E(n) for n-dimensional Euclidean spaces. The set of rigid motions is called the special Euclidean group, and denoted SE(n). In kinematics, rigid motions in a 3-dimensional Euclidean space are used to represent displacements of rigid bodies. According to Chasles' theorem, every rigid transformation can be expressed as a screw motion. == Formal definition == A rigid transformation is formally defined as a transformation that, when acting on any vector v, produces a transformed vector T(v) of the form T(v) = Rv + t, where RT = R−1 (i.e., R is an orthogonal transformation), and t is a vector giving the translation of the origin. A proper rigid transformation has, in addition, det(R) = 1, which means that R does not produce a reflection, and hence it represents a rotation (an orientation-preserving orthogonal transformation). Indeed, when an orthogonal transformation matrix produces a reflection, its determinant is −1. == Distance formula == A measure of distance between points, or metric, is needed in order to confirm that a transformation is rigid. The Euclidean distance formula for Rn is the generalization of the Pythagorean theorem. The formula gives the distance squared between two points X and Y as the sum of the squares of the distances along the coordinate axes, that is d ( X , Y ) 2 = ( X 1 − Y 1 ) 2 + ( X 2 − Y 2 ) 2 + ⋯ + ( X n − Y n ) 2 = ( X − Y ) ⋅ ( X − Y ) . {\displaystyle d\left(\mathbf {X} ,\mathbf {Y} \right)^{2}=\left(X_{1}-Y_{1}\right)^{2}+\left(X_{2}-Y_{2}\right)^{2}+\dots +\left(X_{n}-Y_{n}\right)^{2}=\left(\mathbf {X} -\mathbf {Y} \right)\cdot \left(\mathbf {X} -\mathbf {Y} \right).} where X = (X1, X2, ..., Xn) and Y = (Y1, Y2, ..., Yn), and the dot denotes the scalar product. Using this distance formula, a rigid transformation g : Rn → Rn has the property, d ( g ( X ) , g ( Y ) ) 2 = d ( X , Y ) 2 . 
{\displaystyle d(g(\mathbf {X} ),g(\mathbf {Y} ))^{2}=d(\mathbf {X} ,\mathbf {Y} )^{2}.} == Translations and linear transformations == A translation of a vector space adds a vector d to every vector in the space, which means it is the transformation v ↦ v + d. It is easy to show that this is a rigid transformation by showing that the distance between translated vectors equals the distance between the original vectors: d ( v + d , w + d ) 2 = ( v + d − w − d ) ⋅ ( v + d − w − d ) = ( v − w ) ⋅ ( v − w ) = d ( v , w ) 2 . {\displaystyle d(\mathbf {v} +\mathbf {d} ,\mathbf {w} +\mathbf {d} )^{2}=(\mathbf {v} +\mathbf {d} -\mathbf {w} -\mathbf {d} )\cdot (\mathbf {v} +\mathbf {d} -\mathbf {w} -\mathbf {d} )=(\mathbf {v} -\mathbf {w} )\cdot (\mathbf {v} -\mathbf {w} )=d(\mathbf {v} ,\mathbf {w} )^{2}.} A linear transformation of a vector space, L : Rn → Rn, preserves linear combinations, L ( V ) = L ( a v + b w ) = a L ( v ) + b L ( w ) . {\displaystyle L(\mathbf {V} )=L(a\mathbf {v} +b\mathbf {w} )=aL(\mathbf {v} )+bL(\mathbf {w} ).} A linear transformation L can be represented by a matrix, which means L(v) = [L]v, where [L] is an n×n matrix. A linear transformation is a rigid transformation if it satisfies the condition, d ( [ L ] v , [ L ] w ) 2 = d ( v , w ) 2 , {\displaystyle d([L]\mathbf {v} ,[L]\mathbf {w} )^{2}=d(\mathbf {v} ,\mathbf {w} )^{2},} that is d ( [ L ] v , [ L ] w ) 2 = ( [ L ] v − [ L ] w ) ⋅ ( [ L ] v − [ L ] w ) = ( [ L ] ( v − w ) ) ⋅ ( [ L ] ( v − w ) ) . {\displaystyle d([L]\mathbf {v} ,[L]\mathbf {w} )^{2}=([L]\mathbf {v} -[L]\mathbf {w} )\cdot ([L]\mathbf {v} -[L]\mathbf {w} )=([L](\mathbf {v} -\mathbf {w} ))\cdot ([L](\mathbf {v} -\mathbf {w} )).} Now using the fact that the scalar product of two vectors v ⋅ w can be written as the matrix operation vTw, where T denotes the matrix transpose, we have d ( [ L ] v , [ L ] w ) 2 = ( v − w ) T [ L ] T [ L ] ( v − w ) . {\displaystyle d([L]\mathbf {v} ,[L]\mathbf {w} )^{2}=(\mathbf {v} -\mathbf {w} )^{\mathsf {T}}[L]^{\mathsf {T}}[L](\mathbf {v} -\mathbf {w} ).} Thus, the linear transformation L is rigid if its matrix satisfies the condition [ L ] T [ L ] = [ I ] , {\displaystyle [L]^{\mathsf {T}}[L]=[I],} where [I] is the identity matrix. Matrices that satisfy this condition are called orthogonal matrices. This condition actually requires the columns of these matrices to be orthogonal unit vectors. Matrices that satisfy this condition form a mathematical group under the operation of matrix multiplication called the orthogonal group of n×n matrices and denoted O(n). Compute the determinant of the condition for an orthogonal matrix to obtain det ( [ L ] T [ L ] ) = det [ L ] 2 = det [ I ] = 1 , {\displaystyle \det \left([L]^{\mathsf {T}}[L]\right)=\det[L]^{2}=\det[I]=1,} which shows that the matrix [L] can have a determinant of either +1 or −1. Orthogonal matrices with determinant −1 are reflections, and those with determinant +1 are rotations. Notice that the set of orthogonal matrices can be viewed as consisting of two manifolds in Rn×n separated by the set of singular matrices. The set of rotation matrices is called the special orthogonal group, and denoted SO(n). It is an example of a Lie group because it has the structure of a manifold. == See also == Deformation (mechanics) Motion (geometry) Rigid body dynamics == References ==
Wikipedia/Euclidean_transformation
In mathematics, an extraneous solution (or spurious solution) is one which emerges from the process of solving a problem but is not a valid solution to it. A missing solution is a valid one which is lost during the solution process. Both situations frequently result from performing operations that are not invertible for some or all values of the variables involved, which prevents the chain of logical implications from being bidirectional. == Extraneous solutions: multiplication == One of the basic principles of algebra is that one can multiply both sides of an equation by the same expression without changing the equation's solutions. However, strictly speaking, this is not true, in that multiplication by certain expressions may introduce new solutions that were not present before. For example, consider the following equation: x + 2 = 0. {\displaystyle x+2=0.} If we multiply both sides by zero, we get, 0 = 0. {\displaystyle 0=0.} This is true for all values of x {\displaystyle x} , so the solution set is all real numbers. But clearly not all real numbers are solutions to the original equation. The problem is that multiplication by zero is not invertible: if we multiply by any nonzero value, we can reverse the step by dividing by the same value, but division by zero is not defined, so multiplication by zero cannot be reversed. More subtly, suppose we take the same equation and multiply both sides by x {\displaystyle x} . We get x ( x + 2 ) = ( 0 ) x , {\displaystyle x(x+2)=(0)x,} x 2 + 2 x = 0. {\displaystyle x^{2}+2x=0.} This quadratic equation has two solutions: x = − 2 {\displaystyle x=-2} and x = 0. {\displaystyle x=0.} But if 0 {\displaystyle 0} is substituted for x {\displaystyle x} in the original equation, the result is the invalid equation 2 = 0 {\displaystyle 2=0} . This counterintuitive result occurs because in the case where x = 0 {\displaystyle x=0} , multiplying both sides by x {\displaystyle x} multiplies both sides by zero, and so necessarily produces a true equation just as in the first example. In general, whenever we multiply both sides of an equation by an expression involving variables, we introduce extraneous solutions wherever that expression is equal to zero. But it is not sufficient to exclude these values, because they may have been legitimate solutions to the original equation. For example, suppose we multiply both sides of our original equation x + 2 = 0 {\displaystyle x+2=0} by x + 2. {\displaystyle x+2.} We get ( x + 2 ) ( x + 2 ) = 0 ( x + 2 ) , {\displaystyle (x+2)(x+2)=0(x+2),} x 2 + 4 x + 4 = 0 , {\displaystyle x^{2}+4x+4=0,} which has only one real solution: x = − 2 {\displaystyle x=-2} . This is a solution to the original equation so cannot be excluded, even though x + 2 = 0 {\displaystyle x+2=0} for this value of x {\displaystyle x} . == Extraneous solutions: rational == Extraneous solutions can arise naturally in problems involving fractions with variables in the denominator. For example, consider this equation: 1 x − 2 = 3 x + 2 − 6 x ( x − 2 ) ( x + 2 ) . {\displaystyle {\frac {1}{x-2}}={\frac {3}{x+2}}-{\frac {6x}{(x-2)(x+2)}}\,.} To begin solving, we multiply each side of the equation by the least common denominator of all the fractions contained in the equation. In this case, the least common denominator is ( x − 2 ) ( x + 2 ) {\displaystyle (x-2)(x+2)} . After performing these operations, the fractions are eliminated, and the equation becomes: x + 2 = 3 ( x − 2 ) − 6 x . 
{\displaystyle x+2=3(x-2)-6x\,.} Solving this yields the single solution x = − 2. {\displaystyle x=-2.} However, when we substitute the solution back into the original equation, we obtain: 1 − 2 − 2 = 3 − 2 + 2 − 6 ( − 2 ) ( − 2 − 2 ) ( − 2 + 2 ) . {\displaystyle {\frac {1}{-2-2}}={\frac {3}{-2+2}}-{\frac {6(-2)}{(-2-2)(-2+2)}}\,.} The equation then becomes: 1 − 4 = 3 0 + 12 0 . {\displaystyle {\frac {1}{-4}}={\frac {3}{0}}+{\frac {12}{0}}\,.} This equation is not valid, since one cannot divide by zero. Therefore, the solution x = − 2 {\displaystyle x=-2} is extraneous and not valid, and the original equation has no solution. For this specific example, it could be recognized that (for the value x = − 2 {\displaystyle x=-2} ), the operation of multiplying by ( x − 2 ) ( x + 2 ) {\displaystyle (x-2)(x+2)} would be a multiplication by zero. However, it is not always simple to evaluate whether each operation already performed was allowed by the final answer. Because of this, often, the only simple effective way to deal with multiplication by expressions involving variables is to substitute each of the solutions obtained into the original equation and confirm that this yields a valid equation. After discarding solutions that yield an invalid equation, we will have the correct set of solutions. In some cases, as in the above example, all solutions may be discarded, in which case the original equation has no solution. == Missing solutions: division == Extraneous solutions are not too difficult to deal with because they just require checking all solutions for validity. However, more insidious are missing solutions, which can occur when performing operations on expressions that are invalid for certain values of those expressions. For example, if we were solving the following equation, the correct solution is obtained by subtracting 4 {\displaystyle 4} from both sides, then dividing both sides by 2 {\displaystyle 2} : 2 x + 4 = 0 , {\displaystyle 2x+4=0,} 2 x = − 4 , {\displaystyle 2x=-4,} x = − 2. {\displaystyle x=-2.} By analogy, we might suppose we can solve the following equation by subtracting 2 x {\displaystyle 2x} from both sides, then dividing by x {\displaystyle x} : x 2 + 2 x = 0 , {\displaystyle x^{2}+2x=0,} x 2 = − 2 x , {\displaystyle x^{2}=-2x,} x = − 2. {\displaystyle x=-2.} The solution x = − 2 {\displaystyle x=-2} is in fact a valid solution to the original equation; but the other solution, x = 0 {\displaystyle x=0} , has disappeared. The problem is that we divided both sides by x {\displaystyle x} , which involves the indeterminate operation of dividing by zero when x = 0. {\displaystyle x=0.} It is generally possible (and advisable) to avoid dividing by any expression that can be zero; however, where this is necessary, it is sufficient to ensure that any values of the variables that make it zero also fail to satisfy the original equation. For example, suppose we have this equation: x + 2 = 0. {\displaystyle x+2=0.} It is valid to divide both sides by x − 2 {\displaystyle x-2} , obtaining the following equation: x + 2 x − 2 = 0. {\displaystyle {\frac {x+2}{x-2}}=0.} This is valid because the only value of x {\displaystyle x} that makes x − 2 {\displaystyle x-2} equal to zero is x = 2 , {\displaystyle x=2,} which is not a solution to the original equation. In some cases we are not interested in certain solutions; for example, we may only want solutions where x {\displaystyle x} is positive. 
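The substitute-and-check step recommended above is easy to automate; here is a minimal sketch (my own, assuming the SymPy library) that solves the cleared form of the earlier rational equation and then discards any candidate that zeroes a denominator of the original:

```python
from sympy import symbols, solve, Eq

x = symbols('x')
# The rational example from above, after multiplying through by (x-2)(x+2):
cleared = Eq(x + 2, 3*(x - 2) - 6*x)
candidates = solve(cleared, x)               # [-2]

# Keep only candidates that do not zero a denominator of the original equation.
denominators = [x - 2, x + 2]
valid = [c for c in candidates
         if all(d.subs(x, c) != 0 for d in denominators)]
print(candidates, valid)    # [-2] [] -- the single root is extraneous
```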
When only positive solutions are of interest, for example, it is acceptable to divide by an expression that is only zero when x {\displaystyle x} is zero or negative, because this can only remove solutions we do not care about. == Other operations == Multiplication and division are not the only operations that can modify the solution set. For example, take the problem: x 2 = 4. {\displaystyle x^{2}=4.} If we take the positive square root of both sides, we get: x = 2. {\displaystyle x=2.} We are not taking the square root of any negative values here, since both x 2 {\displaystyle x^{2}} and 4 {\displaystyle 4} are necessarily positive. But we have lost the solution x = − 2. {\displaystyle x=-2.} The reason is that x {\displaystyle x} is actually not in general the positive square root of x 2 . {\displaystyle x^{2}.} If x {\displaystyle x} is negative, the positive square root of x 2 {\displaystyle x^{2}} is − x . {\displaystyle -x.} If the step is taken correctly, it leads instead to the equation: x 2 = 4 . {\displaystyle {\sqrt {x^{2}}}={\sqrt {4}}.} | x | = 2. {\displaystyle |x|=2.} x = ± 2. {\displaystyle x=\pm 2.} This equation has the same two solutions as the original one: x = 2 {\displaystyle x=2} and x = − 2. {\displaystyle x=-2.} We can also modify the solution set by squaring both sides, because this will make any negative values in the ranges of the equation positive, causing extraneous solutions. == See also == Invalid proof == References ==
Wikipedia/Extraneous_and_missing_solutions
Trial and error is a fundamental method of problem-solving characterized by repeated, varied attempts which are continued until success, or until the practitioner stops trying. According to W.H. Thorpe, the term was devised by C. Lloyd Morgan (1852–1936) after trying out similar phrases "trial and failure" and "trial and practice". Under Morgan's Canon, animal behaviour should be explained in the simplest possible way. Where behavior seems to imply higher mental processes, it might be explained by trial-and-error learning. An example is the skillful way in which his terrier Tony opened the garden gate, easily misunderstood as an insightful act by someone seeing the final behavior. Lloyd Morgan, however, had watched and recorded the series of approximations by which the dog had gradually learned the response, and could demonstrate that no insight was required to explain it. Edward Lee Thorndike, the initiator of the theory of trial-and-error learning, showed how to manage a trial-and-error experiment in the laboratory. In his famous experiment, a cat was placed in a series of puzzle boxes in order to study the law of effect in learning. He plotted learning curves which recorded the timing for each trial. Thorndike's key observation was that learning was promoted by positive results, which was later refined and extended by B. F. Skinner's operant conditioning. Trial and error is also a method of problem solving, repair, tuning, or obtaining knowledge. In the field of computer science, the method is called generate and test (brute force). In elementary algebra, when solving equations, it is called guess and check. This approach can be seen as one of the two basic approaches to problem-solving, contrasted with an approach using insight and theory. However, there are intermediate methods that, for example, use theory to guide the method, an approach known as guided empiricism. This way of thinking has become a mainstay of Karl Popper's critical rationalism. == Methodology == The trial and error approach is used most successfully with simple problems and in games, and it is often the last resort when no apparent rule applies. This does not mean that the approach is inherently careless, for an individual can be methodical in manipulating the variables in an attempt to sort through possibilities that could result in success. Nevertheless, this method is often used by people who have little knowledge in the problem area. The trial-and-error approach has been studied from its natural computational point of view. === Simplest applications === Ashby (1960, section 11/5) offers three simple strategies for dealing with the same basic exercise-problem, which have very different efficiencies. Suppose a collection of 1000 on/off switches has to be set to a particular combination by random-based testing, where each test is expected to take one second. [This is also discussed in Traill (1978–2006), section C1.2.] The strategies are:
- the perfectionist all-or-nothing method, with no attempt at holding partial successes; this would be expected to take more than 10^301 seconds [i.e., 2^1000 seconds, or 3.5×10^291 centuries],
- a serial test of switches, holding on to the partial successes (assuming that these are manifest), which would take 500 seconds on average, and
- parallel-but-individual testing of all switches simultaneously, which would take only one second.
Note the tacit assumption here that no intelligence or insight is brought to bear on the problem. 
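A minimal sketch (my own illustration, not from the source) that reproduces the rough orders of magnitude quoted for these three strategies, at one test per second:

```python
# Expected search times for the 1000-switch exercise, one test per second.
n = 1000
seconds_per_century = 60 * 60 * 24 * 365.25 * 100

# All-or-nothing testing of complete configurations: a uniformly random
# configuration is correct with probability 2**-n, so the expected number of
# one-second trials is 2**n -- on the order of 10**291 centuries.
print(2 ** n / seconds_per_century, "centuries")

# Serial testing, holding partial successes: on average half of the n switches
# start in the wrong state and each needs one corrective test (one reading of
# the 500-second figure quoted above).
print(n / 2, "seconds")

# Parallel-but-individual testing of all switches at once: a single step.
print(1, "second")
```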
However, the existence of different available strategies allows us to consider a separate ("superior") domain of processing — a "meta-level" above the mechanics of switch handling — where the various available strategies can be randomly chosen. Once again this is "trial and error", but of a different type. === Hierarchies === Ashby's book develops this "meta-level" idea, and extends it into a whole recursive sequence of levels, successively above each other in a systematic hierarchy. On this basis, he argues that human intelligence emerges from such organization: relying heavily on trial-and-error (at least initially at each new stage), but emerging with what we would call "intelligence" at the end of it all. Thus presumably the topmost level of the hierarchy (at any stage) will still depend on simple trial-and-error. Traill (1978–2006) suggests that this Ashby-hierarchy probably coincides with Piaget's well-known theory of developmental stages. [This work also discusses Ashby's 1000-switch example; see §C1.2]. After all, it is part of Piagetian doctrine that children learn first by actively doing in a more-or-less random way, and then hopefully learn from the consequences — which all has a certain resemblance to Ashby's random "trial-and-error". === Application === Traill (2008, esp. Table "S" on p. 31) follows Jerne and Popper in seeing this strategy as probably underlying all knowledge-gathering systems — at least in their initial phase. Four such systems are identified:
- natural selection, which "educates" the DNA of the species;
- the brain of the individual (just discussed);
- the "brain" of society-as-such (including the publicly held body of science); and
- the adaptive immune system.
== Features == Trial and error has a number of features:
- solution-oriented: trial and error makes no attempt to discover why a solution works, merely that it is a solution.
- problem-specific: trial and error makes no attempt to generalize a solution to other problems.
- non-optimal: trial and error is generally an attempt to find a solution, not all solutions, and not the best solution.
- needs little knowledge: trial and error can proceed where there is little or no knowledge of the subject.
It is possible to use trial and error to find all solutions or the best solution, when a testably finite number of possible solutions exist. To find all solutions, one simply makes a note when a solution is found and continues, rather than ending the process, until all candidates have been tried. To find the best solution, one finds all solutions by the method just described and then comparatively evaluates them based upon some predefined set of criteria, the existence of which is a condition for the possibility of finding a best solution. (Also, when only one solution can exist, as in assembling a jigsaw puzzle, then any solution found is the only solution and so is necessarily the best.) == Examples == Trial and error has traditionally been the main method of finding new drugs, such as antibiotics. Chemists simply try chemicals at random until they find one with the desired effect. In a more sophisticated version, chemists select a narrow range of chemicals it is thought may have some effect using a technique called structure–activity relationship. (The latter case can be alternatively considered as a changing of the problem rather than of the solution strategy: instead of "What chemical will work well as an antibiotic?" 
the problem in the sophisticated approach is "Which, if any, of the chemicals in this narrow range will work well as an antibiotic?") The method is used widely in many disciplines, such as polymer technology to find new polymer types or families. Trial and error is also commonly seen in player responses to video games: when faced with an obstacle or boss, players often form a number of strategies to surpass the obstacle or defeat the boss, with each strategy being carried out before the player either succeeds or quits the game. Sports teams also make use of trial and error to qualify for and/or progress through the playoffs and win the championship, attempting different strategies, plays, lineups and formations in hopes of defeating each and every opponent along the way to victory. This is especially crucial in playoff series in which multiple wins are required to advance, where a team that loses a game will have the opportunity to try new tactics to find a way to win, if they are not eliminated yet. The scientific method can be regarded as containing an element of trial and error in its formulation and testing of hypotheses. Also compare genetic algorithms, simulated annealing and reinforcement learning – all varieties of search which apply the basic idea of trial and error. Biological evolution can be considered as a form of trial and error. Random mutations and sexual genetic variations can be viewed as trials and poor reproductive fitness, or lack of improved fitness, as the error. Thus after a long time 'knowledge' of well-adapted genomes accumulates simply by virtue of them being able to reproduce. Bogosort, a conceptual sorting algorithm (that is extremely inefficient and impractical), can be viewed as a trial and error approach to sorting a list. However, typical simple examples of bogosort do not track which orders of the list have been tried and may try the same order any number of times, which violates one of the basic principles of trial and error. Trial and error is actually more efficient and practical than bogosort; unlike bogosort, it is guaranteed to halt in finite time on a finite list, and might even be a reasonable way to sort extremely short lists under some conditions. Jumping spiders of the genus Portia use trial and error to find new tactics against unfamiliar prey or in unusual situations, and remember the new tactics. Tests show that Portia fimbriata and Portia labiata can use trial and error in an artificial environment, where the spider's objective is to cross a miniature lagoon that is too wide for a simple jump, and must either jump then swim or only swim. == See also == == References == == Further reading == Ashby, W. R. (1960: Second Edition). Design for a Brain. Chapman & Hall: London. Traill, R.R. (1978–2006). Molecular explanation for intelligence…, Brunel University Thesis, HDL.handle.net Traill, R.R. (2008). Thinking by Molecule, Synapse, or both? — From Piaget's Schema, to the Selecting/Editing of ncRNA. Ondwelle: Melbourne. Ondwelle.com — or French version Ondwelle.com. Zippelius, R. (1991). Die experimentierende Methode im Recht (Trial and error in Jurisprudence), Academy of Science, Mainz, ISBN 3-515-05901-6
Wikipedia/Trial_and_error
In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically checking all possible candidates for whether or not each candidate satisfies the problem's statement. A brute-force algorithm that finds the divisors of a natural number n would enumerate all integers from 1 to n, and check whether each of them divides n without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard and for each arrangement, check whether each (queen) piece can attack any other. While a brute-force search is simple to implement and will always find a solution if it exists, implementation costs are proportional to the number of candidate solutions – which in many practical problems tends to grow very quickly as the size of the problem increases (§Combinatorial explosion). Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than processing speed. This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences or when using a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table – namely, check all entries of the latter, sequentially – is called linear search. == Implementing the brute-force search == === Basic algorithm === In order to apply brute-force search to a specific class of problems, one must implement four procedures: first, next, valid, and output. These procedures should take as a parameter the data P for the particular instance of the problem that is to be solved, and should do the following:
- first (P): generate a first candidate solution for P.
- next (P, c): generate the next candidate for P after the current one c.
- valid (P, c): check whether candidate c is a solution for P.
- output (P, c): use the solution c of P as appropriate to the application.
The next procedure must also tell when there are no more candidates for the instance P, after the current one c. A convenient way to do that is to return a "null candidate", some conventional data value Λ that is distinct from any real candidate. Likewise the first procedure should return Λ if there are no candidates at all for the instance P. The brute-force method is then expressed by the algorithm
c ← first(P)
while c ≠ Λ do
    if valid(P, c) then output(P, c)
    c ← next(P, c)
end while
For example, when looking for the divisors of an integer n, the instance data P is the number n. The call first(n) should return the integer 1 if n ≥ 1, or Λ otherwise; the call next(n,c) should return c + 1 if c < n, and Λ otherwise; and valid(n,c) should return true if and only if c is a divisor of n. 
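A direct transcription of this schema and the divisor example into runnable form (a sketch; the language, the helper names, and the use of None for the null candidate Λ are my own choices):

```python
# Brute-force search schema from the text, specialized to finding divisors of n.
def first(n):
    return 1 if n >= 1 else None        # None plays the role of Λ

def next_candidate(n, c):
    return c + 1 if c < n else None

def valid(n, c):
    return n % c == 0                    # c is a divisor of n

def output(n, c):
    print(c)

def brute_force(n):
    c = first(n)
    while c is not None:                 # while c ≠ Λ
        if valid(n, c):
            output(n, c)
        c = next_candidate(n, c)

brute_force(12)   # prints 1, 2, 3, 4, 6, 12
```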
(In fact, if we choose Λ to be n + 1, the tests n ≥ 1 and c < n are unnecessary.) The brute-force search algorithm above will call output for every candidate that is a solution to the given instance P. The algorithm is easily modified to stop after finding the first solution, or a specified number of solutions; or after testing a specified number of candidates, or after spending a given amount of CPU time. == Combinatorial explosion == The main disadvantage of the brute-force method is that, for many real-world problems, the number of natural candidates is prohibitively large. For instance, if we look for the divisors of a number as described above, the number of candidates tested will be the given number n. So if n has sixteen decimal digits, say, the search will require executing at least 10^15 computer instructions, which will take several days on a typical PC. If n is a random 64-bit natural number, which has about 19 decimal digits on the average, the search will take about 10 years. This steep growth in the number of candidates, as the size of the data increases, occurs in all sorts of problems. For instance, if we are seeking a particular rearrangement of 10 letters, then we have 10! = 3,628,800 candidates to consider, which a typical PC can generate and test in less than one second. However, adding one more letter – which is only a 10% increase in the data size – will multiply the number of candidates by 11, a 1000% increase. For 20 letters, the number of candidates is 20!, which is about 2.4×10^18 or 2.4 quintillion; and the search will take about 10 years. This unwelcome phenomenon is commonly called the combinatorial explosion, or the curse of dimensionality. One example of a case where combinatorial complexity leads to a solvability limit is in solving chess. Chess is not a solved game. In 2005, all chess game endings with six pieces or less were solved, showing the result of each position if played perfectly. It took ten more years to complete the tablebase with one more chess piece added, thus completing a 7-piece tablebase. Adding one more piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity. == Speeding up brute-force searches == One way to speed up a brute-force algorithm is to reduce the search space, that is, the set of candidate solutions, by using heuristics specific to the problem class. For example, in the eight queens problem the challenge is to place eight queens on a standard chessboard so that no queen attacks any other. Since each queen can be placed in any of the 64 squares, in principle there are 64^8 = 281,474,976,710,656 possibilities to consider. However, because the queens are all alike, and no two queens can be placed on the same square, the candidates are all possible ways of choosing a set of 8 squares from the set of all 64 squares; this means 64 choose 8 = 64!/(56!*8!) = 4,426,165,368 candidate solutions – about 1/60,000 of the previous estimate. Further, no arrangement with two queens on the same row or the same column can be a solution. Therefore, we can further restrict the set of candidates to those arrangements. As this example shows, a little bit of analysis will often lead to dramatic reductions in the number of candidate solutions, and may turn an intractable problem into a trivial one. 
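The row-and-column restriction just described can be pushed one step further into code: a minimal sketch (my own, not the article's) that enumerates only the 8! = 40,320 permutations of column indices, one queen per row, and keeps those with no diagonal attacks:

```python
from itertools import permutations

def queens(n=8):
    """Brute-force search over one-queen-per-row-and-column candidates."""
    solutions = []
    for cols in permutations(range(n)):   # cols[r] = column of the queen in row r
        # two queens share a diagonal iff their r + c or r - c values collide
        if len({r + c for r, c in enumerate(cols)}) == n and \
           len({r - c for r, c in enumerate(cols)}) == n:
            solutions.append(cols)
    return solutions

print(len(queens()))   # 92 solutions on the standard 8x8 board
```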
In some cases, the analysis may reduce the candidates to the set of all valid solutions; that is, it may yield an algorithm that directly enumerates all the desired solutions (or finds one solution, as appropriate), without wasting time with tests and the generation of invalid candidates. For example, for the problem "find all integers between 1 and 1,000,000 that are evenly divisible by 417" a naive brute-force solution would generate all integers in the range, testing each of them for divisibility. However, that problem can be solved much more efficiently by starting with 417 and repeatedly adding 417 until the number exceeds 1,000,000 – which takes only 2398 (= 1,000,000 ÷ 417) steps, and no tests. == Reordering the search space == In applications that require only one solution, rather than all solutions, the expected running time of a brute force search will often depend on the order in which the candidates are tested. As a general rule, one should test the most promising candidates first. For example, when searching for a proper divisor of a random number n, it is better to enumerate the candidate divisors in increasing order, from 2 to n − 1, than the other way around – because the probability that n is divisible by c is 1/c. Moreover, the probability of a candidate being valid is often affected by the previous failed trials. For example, consider the problem of finding a 1 bit in a given 1000-bit string P. In this case, the candidate solutions are the indices 1 to 1000, and a candidate c is valid if P[c] = 1. Now, suppose that the first bit of P is equally likely to be 0 or 1, but each bit thereafter is equal to the previous one with 90% probability. If the candidates are enumerated in increasing order, 1 to 1000, the number t of candidates examined before success will be about 6, on the average. On the other hand, if the candidates are enumerated in the order 1,11,21,31...991,2,12,22,32 etc., the expected value of t will be only a little more than 2. More generally, the search space should be enumerated in such a way that the next candidate is most likely to be valid, given that the previous trials were not. So if the valid solutions are likely to be "clustered" in some sense, then each new candidate should be as far as possible from the previous ones, in that same sense. The converse holds, of course, if the solutions are likely to be spread out more uniformly than expected by chance. == Alternatives to brute-force search == There are many other search methods, or metaheuristics, which are designed to take advantage of various kinds of partial knowledge one may have about the solution. Heuristics can also be used to make an early cutoff of parts of the search. One example of this is the minimax principle for searching game trees, that eliminates many subtrees at an early stage in the search. In certain fields, such as language parsing, techniques such as chart parsing can exploit constraints in the problem to reduce an exponential complexity problem into a polynomial complexity problem. In many cases, such as in Constraint Satisfaction Problems, one can dramatically reduce the search space by means of Constraint propagation, that is efficiently implemented in Constraint programming languages. The search space for problems can also be reduced by replacing the full problem with a simplified version. 
For example, in computer chess, rather than computing the full minimax tree of all possible moves for the remainder of the game, a more limited tree of minimax possibilities is computed, with the tree being pruned at a certain number of moves, and the remainder of the tree being approximated by a static evaluation function. == In cryptography == In cryptography, a brute-force attack involves systematically checking all possible keys until the correct key is found. This strategy can in theory be used against any encrypted data (except a one-time pad) by an attacker who is unable to take advantage of any weakness in an encryption system that would otherwise make his or her task easier. The key length used in the encryption determines the practical feasibility of performing a brute force attack, with longer keys exponentially more difficult to crack than shorter ones. Brute force attacks can be made less effective by obfuscating the data to be encoded, something that makes it more difficult for an attacker to recognise when he has cracked the code. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute force attack against it. == References == == See also == A brute-force algorithm to solve Sudoku puzzles. Brute-force attack Big O notation Iteration#Computing
Wikipedia/Brute-force_search
In mathematics, the inverse trigonometric functions (occasionally also called antitrigonometric, cyclometric, or arcus functions) are the inverse functions of the trigonometric functions, under suitably restricted domains. Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry. == Notation == Several notations for the inverse trigonometric functions exist. The most common convention is to name inverse trigonometric functions using an arc- prefix: arcsin(x), arccos(x), arctan(x), etc. (This convention is used throughout this article.) This notation arises from the following geometric relationships: when measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus in the unit circle, the cosine of x function is both the arc and the angle, because the arc of a circle of radius 1 is the same as the angle. Or, "the arc whose cosine is x" is the same as "the angle whose cosine is x", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians. In computer programming languages, the inverse trigonometric functions are often called by the abbreviated forms asin, acos, atan. The notations sin−1(x), cos−1(x), tan−1(x), etc., as introduced by John Herschel in 1813, are often used as well in English-language sources, much more than the also established sin[−1](x), cos[−1](x), tan[−1](x) – conventions consistent with the notation of an inverse function, that is useful (for example) to define the multivalued version of each inverse trigonometric function: tan − 1 ⁡ ( x ) = { arctan ⁡ ( x ) + π k ∣ k ∈ Z } . {\displaystyle \tan ^{-1}(x)=\{\arctan(x)+\pi k\mid k\in \mathbb {Z} \}~.} However, this might appear to conflict logically with the common semantics for expressions such as sin2(x) (although only sin2 x, without parentheses, is the really common use), which refer to numeric power rather than function composition, and therefore may result in confusion between notation for the reciprocal (multiplicative inverse) and inverse function. The confusion is somewhat mitigated by the fact that each of the reciprocal trigonometric functions has its own name — for example, (cos(x))−1 = sec(x). Nevertheless, certain authors advise against using it, since it is ambiguous. Another precarious convention used by a small number of authors is to use an uppercase first letter, along with a “−1” superscript: Sin−1(x), Cos−1(x), Tan−1(x), etc. Although it is intended to avoid confusion with the reciprocal, which should be represented by sin−1(x), cos−1(x), etc., or, better, by sin−1 x, cos−1 x, etc., it in turn creates yet another major source of ambiguity, especially since many popular high-level programming languages (e.g. Mathematica and MAGMA) use those very same capitalised representations for the standard trig functions, whereas others (Python, SymPy, NumPy, Matlab, MAPLE, etc.) use lower-case. Hence, since 2009, the ISO 80000-2 standard has specified solely the "arc" prefix for the inverse functions. == Basic concepts == === Principal values === Since none of the six trigonometric functions are one-to-one, they must be restricted in order to have inverse functions. Therefore, the result ranges of the inverse functions are proper (i.e. 
strict) subsets of the domains of the original functions. For example, using function in the sense of multivalued functions, just as the square root function y = x {\displaystyle y={\sqrt {x}}} could be defined from y 2 = x , {\displaystyle y^{2}=x,} the function y = arcsin ⁡ ( x ) {\displaystyle y=\arcsin(x)} is defined so that sin ⁡ ( y ) = x . {\displaystyle \sin(y)=x.} For a given real number x , {\displaystyle x,} with − 1 ≤ x ≤ 1 , {\displaystyle -1\leq x\leq 1,} there are multiple (in fact, countably infinitely many) numbers y {\displaystyle y} such that sin ⁡ ( y ) = x {\displaystyle \sin(y)=x} ; for example, sin ⁡ ( 0 ) = 0 , {\displaystyle \sin(0)=0,} but also sin ⁡ ( π ) = 0 , {\displaystyle \sin(\pi )=0,} sin ⁡ ( 2 π ) = 0 , {\displaystyle \sin(2\pi )=0,} etc. When only one value is desired, the function may be restricted to its principal branch. With this restriction, for each x {\displaystyle x} in the domain, the expression arcsin ⁡ ( x ) {\displaystyle \arcsin(x)} will evaluate only to a single value, called its principal value. These properties apply to all the inverse trigonometric functions. The principal inverses are listed in the following table. Note: Some authors define the range of arcsecant to be ( 0 ≤ y < π 2 {\textstyle 0\leq y<{\frac {\pi }{2}}} or π ≤ y < 3 π 2 {\textstyle \pi \leq y<{\frac {3\pi }{2}}} ), because the tangent function is nonnegative on this domain. This makes some computations more consistent. For example, using this range, tan ⁡ ( arcsec ⁡ ( x ) ) = x 2 − 1 , {\displaystyle \tan(\operatorname {arcsec}(x))={\sqrt {x^{2}-1}},} whereas with the range ( 0 ≤ y < π 2 {\textstyle 0\leq y<{\frac {\pi }{2}}} or π 2 < y ≤ π {\textstyle {\frac {\pi }{2}}<y\leq \pi } ), we would have to write tan ⁡ ( arcsec ⁡ ( x ) ) = ± x 2 − 1 , {\displaystyle \tan(\operatorname {arcsec}(x))=\pm {\sqrt {x^{2}-1}},} since tangent is nonnegative on 0 ≤ y < π 2 , {\textstyle 0\leq y<{\frac {\pi }{2}},} but nonpositive on π 2 < y ≤ π . {\textstyle {\frac {\pi }{2}}<y\leq \pi .} For a similar reason, the same authors define the range of arccosecant to be ( − π < y ≤ − π 2 {\textstyle (-\pi <y\leq -{\frac {\pi }{2}}} or 0 < y ≤ π 2 ) . {\textstyle 0<y\leq {\frac {\pi }{2}}).} ==== Domains ==== If x is allowed to be a complex number, then the range of y applies only to its real part. The table below displays names and domains of the inverse trigonometric functions along with the range of their usual principal values in radians. The symbol R = ( − ∞ , ∞ ) {\displaystyle \mathbb {R} =(-\infty ,\infty )} denotes the set of all real numbers and Z = { … , − 2 , − 1 , 0 , 1 , 2 , … } {\displaystyle \mathbb {Z} =\{\ldots ,\,-2,\,-1,\,0,\,1,\,2,\,\ldots \}} denotes the set of all integers. The set of all integer multiples of π {\displaystyle \pi } is denoted by π Z := { π n : n ∈ Z } = { … , − 2 π , − π , 0 , π , 2 π , … } . {\displaystyle \pi \mathbb {Z} ~:=~\{\pi n\;:\;n\in \mathbb {Z} \}~=~\{\ldots ,\,-2\pi ,\,-\pi ,\,0,\,\pi ,\,2\pi ,\,\ldots \}.} The symbol ∖ {\displaystyle \,\setminus \,} denotes set subtraction so that, for instance, R ∖ ( − 1 , 1 ) = ( − ∞ , − 1 ] ∪ [ 1 , ∞ ) {\displaystyle \mathbb {R} \setminus (-1,1)=(-\infty ,-1]\cup [1,\infty )} is the set of points in R {\displaystyle \mathbb {R} } (that is, real numbers) that are not in the interval ( − 1 , 1 ) . 
{\displaystyle (-1,1).} The Minkowski sum notation π Z + ( 0 , π ) {\textstyle \pi \mathbb {Z} +(0,\pi )} and π Z + ( − π 2 , π 2 ) {\displaystyle \pi \mathbb {Z} +{\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}} that is used above to concisely write the domains of cot , csc , tan , and sec {\displaystyle \cot ,\csc ,\tan ,{\text{ and }}\sec } is now explained. Domain of cotangent cot {\displaystyle \cot } and cosecant csc {\displaystyle \csc } : The domains of cot {\displaystyle \,\cot \,} and csc {\displaystyle \,\csc \,} are the same. They are the set of all angles θ {\displaystyle \theta } at which sin ⁡ θ ≠ 0 , {\displaystyle \sin \theta \neq 0,} i.e. all real numbers that are not of the form π n {\displaystyle \pi n} for some integer n , {\displaystyle n,} π Z + ( 0 , π ) = ⋯ ∪ ( − 2 π , − π ) ∪ ( − π , 0 ) ∪ ( 0 , π ) ∪ ( π , 2 π ) ∪ ⋯ = R ∖ π Z {\displaystyle {\begin{aligned}\pi \mathbb {Z} +(0,\pi )&=\cdots \cup (-2\pi ,-\pi )\cup (-\pi ,0)\cup (0,\pi )\cup (\pi ,2\pi )\cup \cdots \\&=\mathbb {R} \setminus \pi \mathbb {Z} \end{aligned}}} Domain of tangent tan {\displaystyle \tan } and secant sec {\displaystyle \sec } : The domains of tan {\displaystyle \,\tan \,} and sec {\displaystyle \,\sec \,} are the same. They are the set of all angles θ {\displaystyle \theta } at which cos ⁡ θ ≠ 0 , {\displaystyle \cos \theta \neq 0,} π Z + ( − π 2 , π 2 ) = ⋯ ∪ ( − 3 π 2 , − π 2 ) ∪ ( − π 2 , π 2 ) ∪ ( π 2 , 3 π 2 ) ∪ ⋯ = R ∖ ( π 2 + π Z ) {\displaystyle {\begin{aligned}\pi \mathbb {Z} +\left(-{\tfrac {\pi }{2}},{\tfrac {\pi }{2}}\right)&=\cdots \cup {\bigl (}{-{\tfrac {3\pi }{2}}},{-{\tfrac {\pi }{2}}}{\bigr )}\cup {\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )}\cup {\bigl (}{\tfrac {\pi }{2}},{\tfrac {3\pi }{2}}{\bigr )}\cup \cdots \\&=\mathbb {R} \setminus \left({\tfrac {\pi }{2}}+\pi \mathbb {Z} \right)\\\end{aligned}}} === Solutions to elementary trigonometric equations === Each of the trigonometric functions is periodic in the real part of its argument, running through all its values twice in each interval of 2 π : {\displaystyle 2\pi :} Sine and cosecant begin their period at 2 π k − π 2 {\textstyle 2\pi k-{\frac {\pi }{2}}} (where k {\displaystyle k} is an integer), finish it at 2 π k + π 2 , {\textstyle 2\pi k+{\frac {\pi }{2}},} and then reverse themselves over 2 π k + π 2 {\textstyle 2\pi k+{\frac {\pi }{2}}} to 2 π k + 3 π 2 . {\textstyle 2\pi k+{\frac {3\pi }{2}}.} Cosine and secant begin their period at 2 π k , {\displaystyle 2\pi k,} finish it at 2 π k + π . {\displaystyle 2\pi k+\pi .} and then reverse themselves over 2 π k + π {\displaystyle 2\pi k+\pi } to 2 π k + 2 π . {\displaystyle 2\pi k+2\pi .} Tangent begins its period at 2 π k − π 2 , {\textstyle 2\pi k-{\frac {\pi }{2}},} finishes it at 2 π k + π 2 , {\textstyle 2\pi k+{\frac {\pi }{2}},} and then repeats it (forward) over 2 π k + π 2 {\textstyle 2\pi k+{\frac {\pi }{2}}} to 2 π k + 3 π 2 . {\textstyle 2\pi k+{\frac {3\pi }{2}}.} Cotangent begins its period at 2 π k , {\displaystyle 2\pi k,} finishes it at 2 π k + π , {\displaystyle 2\pi k+\pi ,} and then repeats it (forward) over 2 π k + π {\displaystyle 2\pi k+\pi } to 2 π k + 2 π . {\displaystyle 2\pi k+2\pi .} This periodicity is reflected in the general inverses, where k {\displaystyle k} is some integer. The following table shows how inverse trigonometric functions may be used to solve equalities involving the six standard trigonometric functions. 
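The table referenced here did not survive extraction, but two of its rows are quoted later in the text: sin θ = y if and only if θ = (−1)^k arcsin(y) + πk, and cos θ = x if and only if θ = ±arccos(x) + 2πk, for some integer k. A minimal Python sketch (not from the article) spot-checks both general solutions:

```python
import math

def sin_solutions(y, ks=range(-2, 3)):
    # General solution of sin(theta) = y: theta = (-1)^k * arcsin(y) + pi*k.
    return [((-1) ** k) * math.asin(y) + math.pi * k for k in ks]

def cos_solutions(x, ks=range(-2, 3)):
    # General solution of cos(theta) = x: theta = +/- arccos(x) + 2*pi*k.
    return [s * math.acos(x) + 2 * math.pi * k for k in ks for s in (+1, -1)]

y = 0.5
for theta in sin_solutions(y):
    assert math.isclose(math.sin(theta), y, abs_tol=1e-12)

x = -0.3
for theta in cos_solutions(x):
    assert math.isclose(math.cos(theta), x, abs_tol=1e-12)

print("all general-solution candidates verified")
```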
It is assumed that the given values θ , {\displaystyle \theta ,} r , {\displaystyle r,} s , {\displaystyle s,} x , {\displaystyle x,} and y {\displaystyle y} all lie within appropriate ranges so that the relevant expressions below are well-defined. Note that "for some k ∈ Z {\displaystyle k\in \mathbb {Z} } " is just another way of saying "for some integer k . {\displaystyle k.} " The symbol ⟺ {\displaystyle \,\iff \,} is logical equality and indicates that if the left hand side is true then so is the right hand side and, conversely, if the right hand side is true then so is the left hand side (see this footnote for more details and an example illustrating this concept). where the first four solutions can be written in expanded form as: For example, if cos ⁡ θ = − 1 {\displaystyle \cos \theta =-1} then θ = π + 2 π k = − π + 2 π ( 1 + k ) {\displaystyle \theta =\pi +2\pi k=-\pi +2\pi (1+k)} for some k ∈ Z . {\displaystyle k\in \mathbb {Z} .} While if sin ⁡ θ = ± 1 {\displaystyle \sin \theta =\pm 1} then θ = π 2 + π k = − π 2 + π ( k + 1 ) {\textstyle \theta ={\frac {\pi }{2}}+\pi k=-{\frac {\pi }{2}}+\pi (k+1)} for some k ∈ Z , {\displaystyle k\in \mathbb {Z} ,} where k {\displaystyle k} will be even if sin ⁡ θ = 1 {\displaystyle \sin \theta =1} and it will be odd if sin ⁡ θ = − 1. {\displaystyle \sin \theta =-1.} The equations sec ⁡ θ = − 1 {\displaystyle \sec \theta =-1} and csc ⁡ θ = ± 1 {\displaystyle \csc \theta =\pm 1} have the same solutions as cos ⁡ θ = − 1 {\displaystyle \cos \theta =-1} and sin ⁡ θ = ± 1 , {\displaystyle \sin \theta =\pm 1,} respectively. In all equations above except for those just solved (i.e. except for sin {\displaystyle \sin } / csc ⁡ θ = ± 1 {\displaystyle \csc \theta =\pm 1} and cos {\displaystyle \cos } / sec ⁡ θ = − 1 {\displaystyle \sec \theta =-1} ), the integer k {\displaystyle k} in the solution's formula is uniquely determined by θ {\displaystyle \theta } (for fixed r , s , x , {\displaystyle r,s,x,} and y {\displaystyle y} ). With the help of integer parity Parity ⁡ ( h ) = { 0 if h is even 1 if h is odd {\displaystyle \operatorname {Parity} (h)={\begin{cases}0&{\text{if }}h{\text{ is even }}\\1&{\text{if }}h{\text{ is odd }}\\\end{cases}}} it is possible to write a solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} that doesn't involve the "plus or minus" ± {\displaystyle \,\pm \,} symbol: cos ⁡ θ = x {\displaystyle \cos \theta =x\quad } if and only if θ = ( − 1 ) h arccos ⁡ ( x ) + π h + π Parity ⁡ ( h ) {\displaystyle \quad \theta =(-1)^{h}\arccos(x)+\pi h+\pi \operatorname {Parity} (h)\quad } for some h ∈ Z . {\displaystyle h\in \mathbb {Z} .} And similarly for the secant function, sec ⁡ θ = r {\displaystyle \sec \theta =r\quad } if and only if θ = ( − 1 ) h arcsec ⁡ ( r ) + π h + π Parity ⁡ ( h ) {\displaystyle \quad \theta =(-1)^{h}\operatorname {arcsec}(r)+\pi h+\pi \operatorname {Parity} (h)\quad } for some h ∈ Z , {\displaystyle h\in \mathbb {Z} ,} where π h + π Parity ⁡ ( h ) {\displaystyle \pi h+\pi \operatorname {Parity} (h)} equals π h {\displaystyle \pi h} when the integer h {\displaystyle h} is even, and equals π h + π {\displaystyle \pi h+\pi } when it's odd. ==== Detailed example and explanation of the "plus or minus" symbol ± ==== The solutions to cos ⁡ θ = x {\displaystyle \cos \theta =x} and sec ⁡ θ = x {\displaystyle \sec \theta =x} involve the "plus or minus" symbol ± , {\displaystyle \,\pm ,\,} whose meaning is now clarified.
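Before the detailed example, a quick numerical check of the parity-based formula above; this minimal sketch (not from the article) verifies that θ = (−1)^h arccos(x) + πh + π·Parity(h) solves cos θ = x for a range of integers h:

```python
import math

def parity(h: int) -> int:
    return h % 2  # 0 if h is even, 1 if h is odd (Python's % is non-negative here)

def cos_solution(x: float, h: int) -> float:
    # theta = (-1)^h * arccos(x) + pi*h + pi*Parity(h)
    return ((-1) ** h) * math.acos(x) + math.pi * h + math.pi * parity(h)

x = 0.42
for h in range(-4, 5):
    theta = cos_solution(x, h)
    assert math.isclose(math.cos(theta), x, abs_tol=1e-12)
print("parity formula verified for h in [-4, 4]")
```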
Only the solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} will be discussed since the discussion for sec ⁡ θ = x {\displaystyle \sec \theta =x} is the same. We are given x {\displaystyle x} with − 1 ≤ x ≤ 1 {\displaystyle -1\leq x\leq 1} , and we know that there is an angle θ {\displaystyle \theta } in some interval that satisfies cos ⁡ θ = x . {\displaystyle \cos \theta =x.} We want to find this θ . {\displaystyle \theta .} The table above indicates that the solution is θ = ± arccos ⁡ x + 2 π k for some k ∈ Z {\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} } which is a shorthand way of saying that (at least) one of the following statements is true: θ = arccos ⁡ x + 2 π k {\displaystyle \,\theta =\arccos x+2\pi k\,} for some integer k , {\displaystyle k,} or θ = − arccos ⁡ x + 2 π k {\displaystyle \,\theta =-\arccos x+2\pi k\,} for some integer k . {\displaystyle k.} As mentioned above, if arccos ⁡ x = π {\displaystyle \,\arccos x=\pi \,} (which by definition only happens when x = cos ⁡ π = − 1 {\displaystyle x=\cos \pi =-1} ) then both statements (1) and (2) hold, although with different values for the integer k {\displaystyle k} : if K {\displaystyle K} is the integer from statement (1), meaning that θ = π + 2 π K {\displaystyle \theta =\pi +2\pi K} holds, then the integer k {\displaystyle k} for statement (2) is K + 1 {\displaystyle K+1} (because θ = − π + 2 π ( 1 + K ) {\displaystyle \theta =-\pi +2\pi (1+K)} ). However, if x ≠ − 1 {\displaystyle x\neq -1} then the integer k {\displaystyle k} is unique and completely determined by θ . {\displaystyle \theta .} If arccos ⁡ x = 0 {\displaystyle \,\arccos x=0\,} (which by definition only happens when x = cos ⁡ 0 = 1 {\displaystyle x=\cos 0=1} ) then ± arccos ⁡ x = 0 {\displaystyle \,\pm \arccos x=0\,} (because + arccos ⁡ x = + 0 = 0 {\displaystyle \,+\arccos x=+0=0\,} and − arccos ⁡ x = − 0 = 0 {\displaystyle \,-\arccos x=-0=0\,} so in both cases ± arccos ⁡ x {\displaystyle \,\pm \arccos x\,} is equal to 0 {\displaystyle 0} ) and so the statements (1) and (2) happen to be identical in this particular case (and so both hold). Having considered the cases arccos ⁡ x = 0 {\displaystyle \,\arccos x=0\,} and arccos ⁡ x = π , {\displaystyle \,\arccos x=\pi ,\,} we now focus on the case where arccos ⁡ x ≠ 0 {\displaystyle \,\arccos x\neq 0\,} and arccos ⁡ x ≠ π . {\displaystyle \,\arccos x\neq \pi .\,} Assume this from now on. The solution to cos ⁡ θ = x {\displaystyle \cos \theta =x} is still θ = ± arccos ⁡ x + 2 π k for some k ∈ Z {\displaystyle \,\theta =\pm \arccos x+2\pi k\,\quad {\text{ for some }}k\in \mathbb {Z} } which as before is shorthand for saying that one of statements (1) and (2) is true. However, this time, because arccos ⁡ x ≠ 0 {\displaystyle \,\arccos x\neq 0\,} and 0 < arccos ⁡ x < π , {\displaystyle \,0<\arccos x<\pi ,\,} statements (1) and (2) are different, and furthermore exactly one of the two equalities holds (not both). Additional information about θ {\displaystyle \theta } is needed to determine which one holds. For example, suppose that x = 0 {\displaystyle x=0} and that all that is known about θ {\displaystyle \theta } is that − π ≤ θ ≤ π {\displaystyle \,-\pi \leq \theta \leq \pi \,} (and nothing more is known).
Then arccos ⁡ x = arccos ⁡ 0 = π 2 {\displaystyle \arccos x=\arccos 0={\frac {\pi }{2}}} and moreover, in this particular case k = 0 {\displaystyle k=0} (for both the + {\displaystyle \,+\,} case and the − {\displaystyle \,-\,} case) and so consequently, θ = ± arccos ⁡ x + 2 π k = ± ( π 2 ) + 2 π ( 0 ) = ± π 2 . {\displaystyle \theta ~=~\pm \arccos x+2\pi k~=~\pm \left({\frac {\pi }{2}}\right)+2\pi (0)~=~\pm {\frac {\pi }{2}}.} This means that θ {\displaystyle \theta } could be either π / 2 {\displaystyle \,\pi /2\,} or − π / 2. {\displaystyle \,-\pi /2.} Without additional information it is not possible to determine which of these values θ {\displaystyle \theta } has. An example of some additional information that could determine the value of θ {\displaystyle \theta } would be knowing that the angle is above the x {\displaystyle x} -axis (in which case θ = π / 2 {\displaystyle \theta =\pi /2} ) or alternatively, knowing that it is below the x {\displaystyle x} -axis (in which case θ = − π / 2 {\displaystyle \theta =-\pi /2} ). ==== Equal identical trigonometric functions ==== The table below shows how two angles θ {\displaystyle \theta } and φ {\displaystyle \varphi } must be related if their values under a given trigonometric function are equal or negatives of each other. The vertical double arrow ⇕ {\displaystyle \Updownarrow } in the last row indicates that θ {\displaystyle \theta } and φ {\displaystyle \varphi } satisfy | sin ⁡ θ | = | sin ⁡ φ | {\displaystyle \left|\sin \theta \right|=\left|\sin \varphi \right|} if and only if they satisfy | cos ⁡ θ | = | cos ⁡ φ | . {\displaystyle \left|\cos \theta \right|=\left|\cos \varphi \right|.} Thus, given a single solution θ {\displaystyle \theta } to an elementary trigonometric equation ( sin ⁡ θ = y {\displaystyle \sin \theta =y} is such an equation, for instance, and because sin ⁡ ( arcsin ⁡ y ) = y {\displaystyle \sin(\arcsin y)=y} always holds, θ := arcsin ⁡ y {\displaystyle \theta :=\arcsin y} is always a solution), the set of all solutions to it is given in the following table: [Table: Set of all solutions to elementary trigonometric equations] === Transforming equations === The equations above can be transformed by using the reflection and shift identities: These formulas imply, in particular, that the following hold: sin ⁡ θ = − sin ⁡ ( − θ ) = − sin ⁡ ( π + θ ) = sin ⁡ ( π − θ ) = − cos ⁡ ( π 2 + θ ) = cos ⁡ ( π 2 − θ ) = − cos ⁡ ( − π 2 − θ ) = cos ⁡ ( − π 2 + θ ) = − cos ⁡ ( 3 π 2 − θ ) = − cos ⁡ ( − 3 π 2 + θ ) cos ⁡ θ = cos ⁡ ( − θ ) = − cos ⁡ ( π + θ ) = − cos ⁡ ( π − θ ) = sin ⁡ ( π 2 + θ ) = sin ⁡ ( π 2 − θ ) = − sin ⁡ ( − π 2 − θ ) = − sin ⁡ ( − π 2 + θ ) = − sin ⁡ ( 3 π 2 − θ ) = sin ⁡ ( − 3 π 2 + θ ) tan ⁡ θ = − tan ⁡ ( − θ ) = tan ⁡ ( π + θ ) = − tan ⁡ ( π − θ ) = − cot ⁡ ( π 2 + θ ) = cot ⁡ ( π 2 − θ ) = cot ⁡ ( − π 2 − θ ) = − cot ⁡ ( − π 2 + θ ) = cot ⁡ ( 3 π 2 − θ ) = − cot ⁡ ( − 3 π 2 + θ ) {\displaystyle {\begin{aligned}\sin \theta &=-\sin(-\theta )&&=-\sin(\pi +\theta )&&={\phantom {-}}\sin(\pi -\theta )\\&=-\cos \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cos \left({\frac {\pi }{2}}-\theta \right)&&=-\cos \left(-{\frac {\pi }{2}}-\theta \right)\\&={\phantom {-}}\cos \left(-{\frac {\pi }{2}}+\theta \right)&&=-\cos \left({\frac {3\pi }{2}}-\theta \right)&&=-\cos \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\cos \theta &={\phantom {-}}\cos(-\theta )&&=-\cos(\pi +\theta )&&=-\cos(\pi -\theta )\\&={\phantom {-}}\sin \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\sin \left({\frac {\pi }{2}}-\theta \right)&&=-\sin \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\sin \left(-{\frac {\pi }{2}}+\theta \right)&&=-\sin \left({\frac {3\pi }{2}}-\theta \right)&&={\phantom {-}}\sin \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\tan \theta &=-\tan(-\theta )&&={\phantom {-}}\tan(\pi +\theta )&&=-\tan(\pi -\theta )\\&=-\cot \left({\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {\pi }{2}}-\theta \right)&&={\phantom {-}}\cot \left(-{\frac {\pi }{2}}-\theta \right)\\&=-\cot \left(-{\frac {\pi }{2}}+\theta \right)&&={\phantom {-}}\cot \left({\frac {3\pi }{2}}-\theta \right)&&=-\cot \left(-{\frac {3\pi }{2}}+\theta \right)\\[0.3ex]\end{aligned}}} where swapping sin ↔ csc , {\displaystyle \sin \leftrightarrow \csc ,} swapping cos ↔ sec , {\displaystyle \cos \leftrightarrow \sec ,} and swapping tan ↔ cot {\displaystyle \tan \leftrightarrow \cot } gives the analogous equations for csc , sec , and cot , {\displaystyle \csc ,\sec ,{\text{ and }}\cot ,} respectively. So for example, by using the equality sin ⁡ ( π 2 − θ ) = cos ⁡ θ , {\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=\cos \theta ,} the equation cos ⁡ θ = x {\displaystyle \cos \theta =x} can be transformed into sin ⁡ ( π 2 − θ ) = x , {\textstyle \sin \left({\frac {\pi }{2}}-\theta \right)=x,} which allows for the solution to the equation sin ⁡ φ = x {\displaystyle \;\sin \varphi =x\;} (where φ := π 2 − θ {\textstyle \varphi :={\frac {\pi }{2}}-\theta } ) to be used; that solution being: φ = ( − 1 ) k arcsin ⁡ ( x ) + π k for some k ∈ Z , {\displaystyle \varphi =(-1)^{k}\arcsin(x)+\pi k\;{\text{ for some }}k\in \mathbb {Z} ,} which becomes: π 2 − θ = ( − 1 ) k arcsin ⁡ ( x ) + π k for some k ∈ Z {\displaystyle {\frac {\pi }{2}}-\theta ~=~(-1)^{k}\arcsin(x)+\pi k\quad {\text{ for some }}k\in \mathbb {Z} } where using the fact that ( − 1 ) k = ( − 1 ) − k {\displaystyle (-1)^{k}=(-1)^{-k}} and substituting h := − k {\displaystyle h:=-k} proves that another solution to cos ⁡ θ = x {\displaystyle \;\cos \theta =x\;} is: θ = ( − 1 ) h + 1 arcsin ⁡ ( x ) + π h + π 2 for some h ∈ Z . {\displaystyle \theta ~=~(-1)^{h+1}\arcsin(x)+\pi h+{\frac {\pi }{2}}\quad {\text{ for some }}h\in \mathbb {Z} .} The substitution arcsin ⁡ x = π 2 − arccos ⁡ x {\displaystyle \;\arcsin x={\frac {\pi }{2}}-\arccos x\;} may be used to express the right hand side of the above formula in terms of arccos ⁡ x {\displaystyle \;\arccos x\;} instead of arcsin ⁡ x . {\displaystyle \;\arcsin x.\;} === Relationships between trigonometric functions and inverse trigonometric functions === Trigonometric functions of inverse trigonometric functions are tabulated below. A quick way to derive them is by considering the geometry of a right-angled triangle, with one side of length 1 and another side of length x , {\displaystyle x,} then applying the Pythagorean theorem and definitions of the trigonometric ratios. It is worth noting that for arcsecant and arccosecant, the diagram assumes that x {\displaystyle x} is positive, and thus the result has to be corrected through the use of absolute values and the signum (sgn) operation.
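The table of these triangle-derived relationships did not survive extraction, but a couple of its standard entries, for example sin(arctan x) = x/√(1+x²) and, with the sgn correction just mentioned, tan(arcsec x) = sgn(x)·√(x²−1), can be checked numerically. A minimal sketch, assuming the usual principal branches (arcsec x computed as arccos(1/x)):

```python
import math

def sgn(x):
    return (x > 0) - (x < 0)

def arcsec(x):
    # Principal arcsecant for |x| >= 1, via the reciprocal-argument identity.
    return math.acos(1.0 / x)

# tan(arcsec(x)) = sgn(x) * sqrt(x^2 - 1) on the usual principal range.
for x in (-3.0, -1.5, 1.5, 3.0):
    lhs = math.tan(arcsec(x))
    rhs = sgn(x) * math.sqrt(x * x - 1)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)

# sin(arctan(x)) = x / sqrt(1 + x^2), directly from the right triangle.
for x in (-0.9, -0.2, 0.4, 2.5):
    assert math.isclose(math.sin(math.atan(x)), x / math.sqrt(1 + x * x), rel_tol=1e-12)

print("triangle-based identities verified")
```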
=== Relationships among the inverse trigonometric functions === Complementary angles: arccos ⁡ ( x ) = π 2 − arcsin ⁡ ( x ) arccot ⁡ ( x ) = π 2 − arctan ⁡ ( x ) arccsc ⁡ ( x ) = π 2 − arcsec ⁡ ( x ) {\displaystyle {\begin{aligned}\arccos(x)&={\frac {\pi }{2}}-\arcsin(x)\\[0.5em]\operatorname {arccot}(x)&={\frac {\pi }{2}}-\arctan(x)\\[0.5em]\operatorname {arccsc}(x)&={\frac {\pi }{2}}-\operatorname {arcsec}(x)\end{aligned}}} Negative arguments: arcsin ⁡ ( − x ) = − arcsin ⁡ ( x ) arccsc ⁡ ( − x ) = − arccsc ⁡ ( x ) arccos ⁡ ( − x ) = π − arccos ⁡ ( x ) arcsec ⁡ ( − x ) = π − arcsec ⁡ ( x ) arctan ⁡ ( − x ) = − arctan ⁡ ( x ) arccot ⁡ ( − x ) = π − arccot ⁡ ( x ) {\displaystyle {\begin{aligned}\arcsin(-x)&=-\arcsin(x)\\\operatorname {arccsc}(-x)&=-\operatorname {arccsc}(x)\\\arccos(-x)&=\pi -\arccos(x)\\\operatorname {arcsec}(-x)&=\pi -\operatorname {arcsec}(x)\\\arctan(-x)&=-\arctan(x)\\\operatorname {arccot}(-x)&=\pi -\operatorname {arccot}(x)\end{aligned}}} Reciprocal arguments: arcsin ⁡ ( 1 x ) = arccsc ⁡ ( x ) arccsc ⁡ ( 1 x ) = arcsin ⁡ ( x ) arccos ⁡ ( 1 x ) = arcsec ⁡ ( x ) arcsec ⁡ ( 1 x ) = arccos ⁡ ( x ) arctan ⁡ ( 1 x ) = arccot ⁡ ( x ) = π 2 − arctan ⁡ ( x ) , if x > 0 arctan ⁡ ( 1 x ) = arccot ⁡ ( x ) − π = − π 2 − arctan ⁡ ( x ) , if x < 0 arccot ⁡ ( 1 x ) = arctan ⁡ ( x ) = π 2 − arccot ⁡ ( x ) , if x > 0 arccot ⁡ ( 1 x ) = arctan ⁡ ( x ) + π = 3 π 2 − arccot ⁡ ( x ) , if x < 0 {\displaystyle {\begin{aligned}\arcsin \left({\frac {1}{x}}\right)&=\operatorname {arccsc}(x)&\\[0.3em]\operatorname {arccsc} \left({\frac {1}{x}}\right)&=\arcsin(x)&\\[0.3em]\arccos \left({\frac {1}{x}}\right)&=\operatorname {arcsec}(x)&\\[0.3em]\operatorname {arcsec} \left({\frac {1}{x}}\right)&=\arccos(x)&\\[0.3em]\arctan \left({\frac {1}{x}}\right)&=\operatorname {arccot}(x)&={\frac {\pi }{2}}-\arctan(x)\,,{\text{ if }}x>0\\[0.3em]\arctan \left({\frac {1}{x}}\right)&=\operatorname {arccot}(x)-\pi &=-{\frac {\pi }{2}}-\arctan(x)\,,{\text{ if }}x<0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&=\arctan(x)&={\frac {\pi }{2}}-\operatorname {arccot}(x)\,,{\text{ if }}x>0\\[0.3em]\operatorname {arccot} \left({\frac {1}{x}}\right)&=\arctan(x)+\pi &={\frac {3\pi }{2}}-\operatorname {arccot}(x)\,,{\text{ if }}x<0\end{aligned}}} The identities above can be used with (and derived from) the fact that sin {\displaystyle \sin } and csc {\displaystyle \csc } are reciprocals (i.e. csc = 1 sin {\displaystyle \csc ={\tfrac {1}{\sin }}} ), as are cos {\displaystyle \cos } and sec , {\displaystyle \sec ,} and tan {\displaystyle \tan } and cot . 
{\displaystyle \cot .} Useful identities if one only has a fragment of a sine table: arcsin ⁡ ( x ) = 1 2 arccos ⁡ ( 1 − 2 x 2 ) , if 0 ≤ x ≤ 1 arcsin ⁡ ( x ) = arctan ⁡ ( x 1 − x 2 ) arccos ⁡ ( x ) = 1 2 arccos ⁡ ( 2 x 2 − 1 ) , if 0 ≤ x ≤ 1 arccos ⁡ ( x ) = arctan ⁡ ( 1 − x 2 x ) arccos ⁡ ( x ) = arcsin ⁡ ( 1 − x 2 ) , if 0 ≤ x ≤ 1 , from which you get arccos ( 1 − x 2 1 + x 2 ) = arcsin ⁡ ( 2 x 1 + x 2 ) , if 0 ≤ x ≤ 1 arcsin ( 1 − x 2 ) = π 2 − sgn ⁡ ( x ) arcsin ⁡ ( x ) arctan ⁡ ( x ) = arcsin ⁡ ( x 1 + x 2 ) arccot ⁡ ( x ) = arccos ⁡ ( x 1 + x 2 ) {\displaystyle {\begin{aligned}\arcsin(x)&={\frac {1}{2}}\arccos \left(1-2x^{2}\right)\,,{\text{ if }}0\leq x\leq 1\\\arcsin(x)&=\arctan \left({\frac {x}{\sqrt {1-x^{2}}}}\right)\\\arccos(x)&={\frac {1}{2}}\arccos \left(2x^{2}-1\right)\,,{\text{ if }}0\leq x\leq 1\\\arccos(x)&=\arctan \left({\frac {\sqrt {1-x^{2}}}{x}}\right)\\\arccos(x)&=\arcsin \left({\sqrt {1-x^{2}}}\right)\,,{\text{ if }}0\leq x\leq 1{\text{ , from which you get }}\\\arccos &\left({\frac {1-x^{2}}{1+x^{2}}}\right)=\arcsin \left({\frac {2x}{1+x^{2}}}\right)\,,{\text{ if }}0\leq x\leq 1\\\arcsin &\left({\sqrt {1-x^{2}}}\right)={\frac {\pi }{2}}-\operatorname {sgn}(x)\arcsin(x)\\\arctan(x)&=\arcsin \left({\frac {x}{\sqrt {1+x^{2}}}}\right)\\\operatorname {arccot}(x)&=\arccos \left({\frac {x}{\sqrt {1+x^{2}}}}\right)\end{aligned}}} Whenever the square root of a complex number is used here, we choose the root with the positive real part (or positive imaginary part if the square was negative real). A useful form that follows directly from the table above is arctan ⁡ ( x ) = arccos ⁡ ( 1 1 + x 2 ) , if x ≥ 0 {\displaystyle \arctan(x)=\arccos \left({\sqrt {\frac {1}{1+x^{2}}}}\right)\,,{\text{ if }}x\geq 0} . It is obtained by recognizing that cos ⁡ ( arctan ⁡ ( x ) ) = 1 1 + x 2 = cos ⁡ ( arccos ⁡ ( 1 1 + x 2 ) ) {\displaystyle \cos \left(\arctan \left(x\right)\right)={\sqrt {\frac {1}{1+x^{2}}}}=\cos \left(\arccos \left({\sqrt {\frac {1}{1+x^{2}}}}\right)\right)} . From the half-angle formula, tan ⁡ ( θ 2 ) = sin ⁡ ( θ ) 1 + cos ⁡ ( θ ) {\displaystyle \tan \left({\tfrac {\theta }{2}}\right)={\tfrac {\sin(\theta )}{1+\cos(\theta )}}} , we get: arcsin ⁡ ( x ) = 2 arctan ⁡ ( x 1 + 1 − x 2 ) arccos ⁡ ( x ) = 2 arctan ⁡ ( 1 − x 2 1 + x ) , if − 1 < x ≤ 1 arctan ⁡ ( x ) = 2 arctan ⁡ ( x 1 + 1 + x 2 ) {\displaystyle {\begin{aligned}\arcsin(x)&=2\arctan \left({\frac {x}{1+{\sqrt {1-x^{2}}}}}\right)\\[0.5em]\arccos(x)&=2\arctan \left({\frac {\sqrt {1-x^{2}}}{1+x}}\right)\,,{\text{ if }}-1<x\leq 1\\[0.5em]\arctan(x)&=2\arctan \left({\frac {x}{1+{\sqrt {1+x^{2}}}}}\right)\end{aligned}}} === Arctangent addition formula === arctan ⁡ ( u ) ± arctan ⁡ ( v ) = arctan ⁡ ( u ± v 1 ∓ u v ) ( mod π ) , u v ≠ 1 . {\displaystyle \arctan(u)\pm \arctan(v)=\arctan \left({\frac {u\pm v}{1\mp uv}}\right){\pmod {\pi }}\,,\quad uv\neq 1\,.} This is derived from the tangent addition formula tan ⁡ ( α ± β ) = tan ⁡ ( α ) ± tan ⁡ ( β ) 1 ∓ tan ⁡ ( α ) tan ⁡ ( β ) , {\displaystyle \tan(\alpha \pm \beta )={\frac {\tan(\alpha )\pm \tan(\beta )}{1\mp \tan(\alpha )\tan(\beta )}}\,,} by letting α = arctan ⁡ ( u ) , β = arctan ⁡ ( v ) . 
{\displaystyle \alpha =\arctan(u)\,,\quad \beta =\arctan(v)\,.} == In calculus == === Derivatives of inverse trigonometric functions === The derivatives for complex values of z are as follows: d d z arcsin ⁡ ( z ) = 1 1 − z 2 ; z ≠ − 1 , + 1 d d z arccos ⁡ ( z ) = − 1 1 − z 2 ; z ≠ − 1 , + 1 d d z arctan ⁡ ( z ) = 1 1 + z 2 ; z ≠ − i , + i d d z arccot ⁡ ( z ) = − 1 1 + z 2 ; z ≠ − i , + i d d z arcsec ⁡ ( z ) = 1 z 2 1 − 1 z 2 ; z ≠ − 1 , 0 , + 1 d d z arccsc ⁡ ( z ) = − 1 z 2 1 − 1 z 2 ; z ≠ − 1 , 0 , + 1 {\displaystyle {\begin{aligned}{\frac {d}{dz}}\arcsin(z)&{}={\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {d}{dz}}\arccos(z)&{}=-{\frac {1}{\sqrt {1-z^{2}}}}\;;&z&{}\neq -1,+1\\{\frac {d}{dz}}\arctan(z)&{}={\frac {1}{1+z^{2}}}\;;&z&{}\neq -i,+i\\{\frac {d}{dz}}\operatorname {arccot}(z)&{}=-{\frac {1}{1+z^{2}}}\;;&z&{}\neq -i,+i\\{\frac {d}{dz}}\operatorname {arcsec}(z)&{}={\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\\{\frac {d}{dz}}\operatorname {arccsc}(z)&{}=-{\frac {1}{z^{2}{\sqrt {1-{\frac {1}{z^{2}}}}}}}\;;&z&{}\neq -1,0,+1\end{aligned}}} Only for real values of x: d d x arcsec ⁡ ( x ) = 1 | x | x 2 − 1 ; | x | > 1 d d x arccsc ⁡ ( x ) = − 1 | x | x 2 − 1 ; | x | > 1 {\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arcsec}(x)&{}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\\{\frac {d}{dx}}\operatorname {arccsc}(x)&{}=-{\frac {1}{|x|{\sqrt {x^{2}-1}}}}\;;&|x|>1\end{aligned}}} These formulas can be derived in terms of the derivatives of trigonometric functions. For example, if x = sin ⁡ θ {\displaystyle x=\sin \theta } , then d x / d θ = cos ⁡ θ = 1 − x 2 , {\textstyle dx/d\theta =\cos \theta ={\sqrt {1-x^{2}}},} so d d x arcsin ⁡ ( x ) = d θ d x = 1 d x / d θ = 1 1 − x 2 . {\displaystyle {\frac {d}{dx}}\arcsin(x)={\frac {d\theta }{dx}}={\frac {1}{dx/d\theta }}={\frac {1}{\sqrt {1-x^{2}}}}.} === Expression as definite integrals === Integrating the derivative and fixing the value at one point gives an expression for the inverse trigonometric function as a definite integral: arcsin ⁡ ( x ) = ∫ 0 x 1 1 − z 2 d z , | x | ≤ 1 arccos ⁡ ( x ) = ∫ x 1 1 1 − z 2 d z , | x | ≤ 1 arctan ⁡ ( x ) = ∫ 0 x 1 z 2 + 1 d z , arccot ⁡ ( x ) = ∫ x ∞ 1 z 2 + 1 d z , arcsec ⁡ ( x ) = ∫ 1 x 1 z z 2 − 1 d z = π + ∫ − x − 1 1 z z 2 − 1 d z , x ≥ 1 arccsc ⁡ ( x ) = ∫ x ∞ 1 z z 2 − 1 d z = ∫ − ∞ − x 1 z z 2 − 1 d z , x ≥ 1 {\displaystyle {\begin{aligned}\arcsin(x)&{}=\int _{0}^{x}{\frac {1}{\sqrt {1-z^{2}}}}\,dz\;,&|x|&{}\leq 1\\\arccos(x)&{}=\int _{x}^{1}{\frac {1}{\sqrt {1-z^{2}}}}\,dz\;,&|x|&{}\leq 1\\\arctan(x)&{}=\int _{0}^{x}{\frac {1}{z^{2}+1}}\,dz\;,\\\operatorname {arccot}(x)&{}=\int _{x}^{\infty }{\frac {1}{z^{2}+1}}\,dz\;,\\\operatorname {arcsec}(x)&{}=\int _{1}^{x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz=\pi +\int _{-x}^{-1}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz\;,&x&{}\geq 1\\\operatorname {arccsc}(x)&{}=\int _{x}^{\infty }{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz=\int _{-\infty }^{-x}{\frac {1}{z{\sqrt {z^{2}-1}}}}\,dz\;,&x&{}\geq 1\\\end{aligned}}} When x equals 1, the integrals with limited domains are improper integrals, but still well-defined. === Infinite series === Similar to the sine and cosine functions, the inverse trigonometric functions can also be calculated using power series, as follows. For arcsine, the series can be derived by expanding its derivative, 1 1 − z 2 {\textstyle {\tfrac {1}{\sqrt {1-z^{2}}}}} , as a binomial series, and integrating term by term (using the integral definition as above). 
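Both the derivative formulas and the definite-integral expressions above lend themselves to quick numerical verification. A minimal sketch (helper names are ad hoc) using a central difference for d/dx arcsin(x) and a midpoint Riemann sum for arctan(x) = ∫₀ˣ dz/(1+z²):

```python
import math

def central_diff(f, x, h=1e-6):
    # Two-sided difference quotient approximating f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

def midpoint_integral(f, a, b, n=100_000):
    # Midpoint Riemann sum approximating the definite integral of f on [a, b].
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

x = 0.6
# d/dx arcsin(x) = 1 / sqrt(1 - x^2)
assert math.isclose(central_diff(math.asin, x),
                    1 / math.sqrt(1 - x * x), rel_tol=1e-8)

# arctan(x) = integral from 0 to x of dz / (1 + z^2)
assert math.isclose(midpoint_integral(lambda z: 1 / (1 + z * z), 0.0, 1.7),
                    math.atan(1.7), rel_tol=1e-8)

print("derivative and integral expressions verified")
```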
The series for arctangent can similarly be derived by expanding its derivative 1 1 + z 2 {\textstyle {\frac {1}{1+z^{2}}}} in a geometric series, and applying the integral definition above (see Leibniz series). arcsin ⁡ ( z ) = z + ( 1 2 ) z 3 3 + ( 1 ⋅ 3 2 ⋅ 4 ) z 5 5 + ( 1 ⋅ 3 ⋅ 5 2 ⋅ 4 ⋅ 6 ) z 7 7 + ⋯ = ∑ n = 0 ∞ ( 2 n − 1 ) ! ! ( 2 n ) ! ! z 2 n + 1 2 n + 1 = ∑ n = 0 ∞ ( 2 n ) ! ( 2 n n ! ) 2 z 2 n + 1 2 n + 1 ; | z | ≤ 1 {\displaystyle {\begin{aligned}\arcsin(z)&=z+\left({\frac {1}{2}}\right){\frac {z^{3}}{3}}+\left({\frac {1\cdot 3}{2\cdot 4}}\right){\frac {z^{5}}{5}}+\left({\frac {1\cdot 3\cdot 5}{2\cdot 4\cdot 6}}\right){\frac {z^{7}}{7}}+\cdots \\[5pt]&=\sum _{n=0}^{\infty }{\frac {(2n-1)!!}{(2n)!!}}{\frac {z^{2n+1}}{2n+1}}\\[5pt]&=\sum _{n=0}^{\infty }{\frac {(2n)!}{(2^{n}n!)^{2}}}{\frac {z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\end{aligned}}} arctan ⁡ ( z ) = z − z 3 3 + z 5 5 − z 7 7 + ⋯ = ∑ n = 0 ∞ ( − 1 ) n z 2 n + 1 2 n + 1 ; | z | ≤ 1 z ≠ i , − i {\displaystyle \arctan(z)=z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{5}}-{\frac {z^{7}}{7}}+\cdots =\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{2n+1}}\,;\qquad |z|\leq 1\qquad z\neq i,-i} Series for the other inverse trigonometric functions can be given in terms of these according to the relationships given above. For example, arccos ⁡ ( x ) = π / 2 − arcsin ⁡ ( x ) {\displaystyle \arccos(x)=\pi /2-\arcsin(x)} , arccsc ⁡ ( x ) = arcsin ⁡ ( 1 / x ) {\displaystyle \operatorname {arccsc}(x)=\arcsin(1/x)} , and so on. Another series is given by: 2 ( arcsin ⁡ ( x 2 ) ) 2 = ∑ n = 1 ∞ x 2 n n 2 ( 2 n n ) . {\displaystyle 2\left(\arcsin \left({\frac {x}{2}}\right)\right)^{2}=\sum _{n=1}^{\infty }{\frac {x^{2n}}{n^{2}{\binom {2n}{n}}}}.} Leonhard Euler found a series for the arctangent that converges more quickly than its Taylor series: arctan ⁡ ( z ) = z 1 + z 2 ∑ n = 0 ∞ ∏ k = 1 n 2 k z 2 ( 2 k + 1 ) ( 1 + z 2 ) . {\displaystyle \arctan(z)={\frac {z}{1+z^{2}}}\sum _{n=0}^{\infty }\prod _{k=1}^{n}{\frac {2kz^{2}}{(2k+1)(1+z^{2})}}.} (The term in the sum for n = 0 is the empty product, so is 1.) Alternatively, this can be expressed as arctan ⁡ ( z ) = ∑ n = 0 ∞ 2 2 n ( n ! ) 2 ( 2 n + 1 ) ! z 2 n + 1 ( 1 + z 2 ) n + 1 . {\displaystyle \arctan(z)=\sum _{n=0}^{\infty }{\frac {2^{2n}(n!)^{2}}{(2n+1)!}}{\frac {z^{2n+1}}{(1+z^{2})^{n+1}}}.} Another series for the arctangent function is given by arctan ⁡ ( z ) = i ∑ n = 1 ∞ 1 2 n − 1 ( 1 ( 1 + 2 i / z ) 2 n − 1 − 1 ( 1 − 2 i / z ) 2 n − 1 ) , {\displaystyle \arctan(z)=i\sum _{n=1}^{\infty }{\frac {1}{2n-1}}\left({\frac {1}{(1+2i/z)^{2n-1}}}-{\frac {1}{(1-2i/z)^{2n-1}}}\right),} where i = − 1 {\displaystyle i={\sqrt {-1}}} is the imaginary unit. ==== Continued fractions for arctangent ==== Two alternatives to the power series for arctangent are these generalized continued fractions: arctan ⁡ ( z ) = z 1 + ( 1 z ) 2 3 − 1 z 2 + ( 3 z ) 2 5 − 3 z 2 + ( 5 z ) 2 7 − 5 z 2 + ( 7 z ) 2 9 − 7 z 2 + ⋱ = z 1 + ( 1 z ) 2 3 + ( 2 z ) 2 5 + ( 3 z ) 2 7 + ( 4 z ) 2 9 + ⋱ {\displaystyle \arctan(z)={\frac {z}{1+{\cfrac {(1z)^{2}}{3-1z^{2}+{\cfrac {(3z)^{2}}{5-3z^{2}+{\cfrac {(5z)^{2}}{7-5z^{2}+{\cfrac {(7z)^{2}}{9-7z^{2}+\ddots }}}}}}}}}}={\frac {z}{1+{\cfrac {(1z)^{2}}{3+{\cfrac {(2z)^{2}}{5+{\cfrac {(3z)^{2}}{7+{\cfrac {(4z)^{2}}{9+\ddots }}}}}}}}}}} The second of these is valid in the cut complex plane. There are two cuts, from −i to the point at infinity, going down the imaginary axis, and from i to the point at infinity, going up the same axis. It works best for real numbers running from −1 to 1. 
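As an illustration of how much faster Euler's series above converges than the raw Taylor series (the structure of the continued fractions is described in more detail just below), a minimal sketch; both implementations are straightforward translations of the formulas, not library code:

```python
import math

def arctan_taylor(z, terms):
    # Plain power series: sum of (-1)^n z^(2n+1) / (2n+1).
    return sum((-1) ** n * z ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

def arctan_euler(z, terms):
    # Euler: arctan z = z/(1+z^2) * sum_n prod_{k=1..n} 2k z^2 / ((2k+1)(1+z^2)).
    w = z * z / (1 + z * z)
    s, term = 0.0, 1.0        # 'term' is the running product; empty product = 1
    for n in range(terms):
        if n > 0:
            term *= 2 * n * w / (2 * n + 1)
        s += term
    return z / (1 + z * z) * s

z = 0.9
print("math.atan  :", math.atan(z))
print("taylor(20) :", arctan_taylor(z, 20))   # still off in the 4th decimal
print("euler(20)  :", arctan_euler(z, 20))    # accurate to ~1e-8
```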
The partial denominators are the odd natural numbers, and the partial numerators (after the first) are just (nz)2, with each perfect square appearing once. The first was developed by Leonhard Euler; the second by Carl Friedrich Gauss utilizing the Gaussian hypergeometric series. === Indefinite integrals of inverse trigonometric functions === For real and complex values of z: ∫ arcsin ⁡ ( z ) d z = z arcsin ⁡ ( z ) + 1 − z 2 + C ∫ arccos ⁡ ( z ) d z = z arccos ⁡ ( z ) − 1 − z 2 + C ∫ arctan ⁡ ( z ) d z = z arctan ⁡ ( z ) − 1 2 ln ⁡ ( 1 + z 2 ) + C ∫ arccot ⁡ ( z ) d z = z arccot ⁡ ( z ) + 1 2 ln ⁡ ( 1 + z 2 ) + C ∫ arcsec ⁡ ( z ) d z = z arcsec ⁡ ( z ) − ln ⁡ [ z ( 1 + z 2 − 1 z 2 ) ] + C ∫ arccsc ⁡ ( z ) d z = z arccsc ⁡ ( z ) + ln ⁡ [ z ( 1 + z 2 − 1 z 2 ) ] + C {\displaystyle {\begin{aligned}\int \arcsin(z)\,dz&{}=z\,\arcsin(z)+{\sqrt {1-z^{2}}}+C\\\int \arccos(z)\,dz&{}=z\,\arccos(z)-{\sqrt {1-z^{2}}}+C\\\int \arctan(z)\,dz&{}=z\,\arctan(z)-{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arccot}(z)\,dz&{}=z\,\operatorname {arccot}(z)+{\frac {1}{2}}\ln \left(1+z^{2}\right)+C\\\int \operatorname {arcsec}(z)\,dz&{}=z\,\operatorname {arcsec}(z)-\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\\\int \operatorname {arccsc}(z)\,dz&{}=z\,\operatorname {arccsc}(z)+\ln \left[z\left(1+{\sqrt {\frac {z^{2}-1}{z^{2}}}}\right)\right]+C\end{aligned}}} For real x ≥ 1: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − ln ⁡ ( x + x 2 − 1 ) + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + ln ⁡ ( x + x 2 − 1 ) + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\ln \left(x+{\sqrt {x^{2}-1}}\right)+C\end{aligned}}} For all real x not between -1 and 1: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − sgn ⁡ ( x ) ln ⁡ | x + x 2 − 1 | + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + sgn ⁡ ( x ) ln ⁡ | x + x 2 − 1 | + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\operatorname {sgn}(x)\ln \left|x+{\sqrt {x^{2}-1}}\right|+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\operatorname {sgn}(x)\ln \left|x+{\sqrt {x^{2}-1}}\right|+C\end{aligned}}} The absolute value is necessary to compensate for both negative and positive values of the arcsecant and arccosecant functions. The signum function is also necessary due to the absolute values in the derivatives of the two functions, which create two different solutions for positive and negative values of x. These can be further simplified using the logarithmic definitions of the inverse hyperbolic functions: ∫ arcsec ⁡ ( x ) d x = x arcsec ⁡ ( x ) − arcosh ⁡ ( | x | ) + C ∫ arccsc ⁡ ( x ) d x = x arccsc ⁡ ( x ) + arcosh ⁡ ( | x | ) + C {\displaystyle {\begin{aligned}\int \operatorname {arcsec}(x)\,dx&{}=x\,\operatorname {arcsec}(x)-\operatorname {arcosh} (|x|)+C\\\int \operatorname {arccsc}(x)\,dx&{}=x\,\operatorname {arccsc}(x)+\operatorname {arcosh} (|x|)+C\\\end{aligned}}} The absolute value in the argument of the arcosh function creates a negative half of its graph, making it identical to the signum logarithmic function shown above. All of these antiderivatives can be derived using integration by parts and the simple derivative forms shown above. ==== Example ==== Using ∫ u d v = u v − ∫ v d u {\displaystyle \int u\,dv=uv-\int v\,du} (i.e. 
integration by parts), set u = arcsin ⁡ ( x ) d v = d x d u = d x 1 − x 2 v = x {\displaystyle {\begin{aligned}u&=\arcsin(x)&dv&=dx\\du&={\frac {dx}{\sqrt {1-x^{2}}}}&v&=x\end{aligned}}} Then ∫ arcsin ⁡ ( x ) d x = x arcsin ⁡ ( x ) − ∫ x 1 − x 2 d x , {\displaystyle \int \arcsin(x)\,dx=x\arcsin(x)-\int {\frac {x}{\sqrt {1-x^{2}}}}\,dx,} which by the simple substitution w = 1 − x 2 , d w = − 2 x d x {\displaystyle w=1-x^{2},\ dw=-2x\,dx} yields the final result: ∫ arcsin ⁡ ( x ) d x = x arcsin ⁡ ( x ) + 1 − x 2 + C {\displaystyle \int \arcsin(x)\,dx=x\arcsin(x)+{\sqrt {1-x^{2}}}+C} == Extension to the complex plane == Since the inverse trigonometric functions are analytic functions, they can be extended from the real line to the complex plane. This results in functions with multiple sheets and branch points. One possible way of defining the extension is: arctan ⁡ ( z ) = ∫ 0 z d x 1 + x 2 z ≠ − i , + i {\displaystyle \arctan(z)=\int _{0}^{z}{\frac {dx}{1+x^{2}}}\quad z\neq -i,+i} where the part of the imaginary axis which does not lie strictly between the branch points (−i and +i) is the branch cut between the principal sheet and other sheets. The path of the integral must not cross a branch cut. For z not on a branch cut, a straight line path from 0 to z is such a path. For z on a branch cut, the path must approach from Re[x] > 0 for the upper branch cut and from Re[x] < 0 for the lower branch cut. The arcsine function may then be defined as: arcsin ⁡ ( z ) = arctan ⁡ ( z 1 − z 2 ) z ≠ − 1 , + 1 {\displaystyle \arcsin(z)=\arctan \left({\frac {z}{\sqrt {1-z^{2}}}}\right)\quad z\neq -1,+1} where (the square-root function has its cut along the negative real axis and) the part of the real axis which does not lie strictly between −1 and +1 is the branch cut between the principal sheet of arcsin and other sheets; arccos ⁡ ( z ) = π 2 − arcsin ⁡ ( z ) z ≠ − 1 , + 1 {\displaystyle \arccos(z)={\frac {\pi }{2}}-\arcsin(z)\quad z\neq -1,+1} which has the same cut as arcsin; arccot ⁡ ( z ) = π 2 − arctan ⁡ ( z ) z ≠ − i , i {\displaystyle \operatorname {arccot}(z)={\frac {\pi }{2}}-\arctan(z)\quad z\neq -i,i} which has the same cut as arctan; arcsec ⁡ ( z ) = arccos ⁡ ( 1 z ) z ≠ − 1 , 0 , + 1 {\displaystyle \operatorname {arcsec}(z)=\arccos \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1} where the part of the real axis between −1 and +1 inclusive is the cut between the principal sheet of arcsec and other sheets; arccsc ⁡ ( z ) = arcsin ⁡ ( 1 z ) z ≠ − 1 , 0 , + 1 {\displaystyle \operatorname {arccsc}(z)=\arcsin \left({\frac {1}{z}}\right)\quad z\neq -1,0,+1} which has the same cut as arcsec. === Logarithmic forms === These functions may also be expressed using complex logarithms. This extends their domains to the complex plane in a natural fashion. The following identities for principal values of the functions hold everywhere that they are defined, even on their branch cuts. 
arcsin ⁡ ( z ) = − i ln ⁡ ( 1 − z 2 + i z ) = i ln ⁡ ( 1 − z 2 − i z ) = arccsc ⁡ ( 1 z ) arccos ⁡ ( z ) = − i ln ⁡ ( i 1 − z 2 + z ) = π 2 − arcsin ⁡ ( z ) = arcsec ⁡ ( 1 z ) arctan ⁡ ( z ) = − i 2 ln ⁡ ( i − z i + z ) = − i 2 ln ⁡ ( 1 + i z 1 − i z ) = arccot ⁡ ( 1 z ) arccot ⁡ ( z ) = − i 2 ln ⁡ ( z + i z − i ) = − i 2 ln ⁡ ( i z − 1 i z + 1 ) = arctan ⁡ ( 1 z ) arcsec ⁡ ( z ) = − i ln ⁡ ( i 1 − 1 z 2 + 1 z ) = π 2 − arccsc ⁡ ( z ) = arccos ⁡ ( 1 z ) arccsc ⁡ ( z ) = − i ln ⁡ ( 1 − 1 z 2 + i z ) = i ln ⁡ ( 1 − 1 z 2 − i z ) = arcsin ⁡ ( 1 z ) {\displaystyle {\begin{aligned}\arcsin(z)&{}=-i\ln \left({\sqrt {1-z^{2}}}+iz\right)=i\ln \left({\sqrt {1-z^{2}}}-iz\right)&{}=\operatorname {arccsc} \left({\frac {1}{z}}\right)\\[10pt]\arccos(z)&{}=-i\ln \left(i{\sqrt {1-z^{2}}}+z\right)={\frac {\pi }{2}}-\arcsin(z)&{}=\operatorname {arcsec} \left({\frac {1}{z}}\right)\\[10pt]\arctan(z)&{}=-{\frac {i}{2}}\ln \left({\frac {i-z}{i+z}}\right)=-{\frac {i}{2}}\ln \left({\frac {1+iz}{1-iz}}\right)&{}=\operatorname {arccot} \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccot}(z)&{}=-{\frac {i}{2}}\ln \left({\frac {z+i}{z-i}}\right)=-{\frac {i}{2}}\ln \left({\frac {iz-1}{iz+1}}\right)&{}=\arctan \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arcsec}(z)&{}=-i\ln \left(i{\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {1}{z}}\right)={\frac {\pi }{2}}-\operatorname {arccsc}(z)&{}=\arccos \left({\frac {1}{z}}\right)\\[10pt]\operatorname {arccsc}(z)&{}=-i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)=i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}-{\frac {i}{z}}\right)&{}=\arcsin \left({\frac {1}{z}}\right)\end{aligned}}} ==== Generalization ==== Because all of the inverse trigonometric functions output an angle of a right triangle, they can be generalized by using Euler's formula to form a right triangle in the complex plane. Algebraically, this gives us: c e i θ = c cos ⁡ ( θ ) + i c sin ⁡ ( θ ) {\displaystyle ce^{i\theta }=c\cos(\theta )+ic\sin(\theta )} or c e i θ = a + i b {\displaystyle ce^{i\theta }=a+ib} where a {\displaystyle a} is the adjacent side, b {\displaystyle b} is the opposite side, and c {\displaystyle c} is the hypotenuse. From here, we can solve for θ {\displaystyle \theta } . e ln ⁡ ( c ) + i θ = a + i b ln ⁡ c + i θ = ln ⁡ ( a + i b ) θ = Im ⁡ ( ln ⁡ ( a + i b ) ) {\displaystyle {\begin{aligned}e^{\ln(c)+i\theta }&=a+ib\\\ln c+i\theta &=\ln(a+ib)\\\theta &=\operatorname {Im} \left(\ln(a+ib)\right)\end{aligned}}} or θ = − i ln ⁡ ( a + i b c ) {\displaystyle \theta =-i\ln \left({\frac {a+ib}{c}}\right)} Simply taking the imaginary part works for any real-valued a {\displaystyle a} and b {\displaystyle b} , but if a {\displaystyle a} or b {\displaystyle b} is complex-valued, we have to use the final equation so that the real part of the result isn't excluded. Since the length of the hypotenuse doesn't change the angle, ignoring the real part of ln ⁡ ( a + b i ) {\displaystyle \ln(a+bi)} also removes c {\displaystyle c} from the equation. In the final equation, we see that the angle of the triangle in the complex plane can be found by inputting the lengths of each side. By setting one of the three sides equal to 1 and one of the remaining sides equal to our input z {\displaystyle z} , we obtain a formula for one of the inverse trig functions, for a total of six equations. 
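The logarithmic forms above can be checked directly against the principal-branch implementations in Python's cmath module; a minimal sketch using two of the six identities:

```python
import cmath

def arcsin_log(z):
    # arcsin(z) = -i * ln(sqrt(1 - z^2) + i*z)
    return -1j * cmath.log(cmath.sqrt(1 - z * z) + 1j * z)

def arctan_log(z):
    # arctan(z) = -(i/2) * ln((1 + i*z) / (1 - i*z))
    return -0.5j * cmath.log((1 + 1j * z) / (1 - 1j * z))

# Test points chosen off the branch cuts.
for z in (0.3, -0.8, 0.5 + 0.25j, -1.2 + 2j):
    assert cmath.isclose(arcsin_log(z), cmath.asin(z), rel_tol=1e-12)
    assert cmath.isclose(arctan_log(z), cmath.atan(z), rel_tol=1e-12)

print("logarithmic forms agree with cmath")
```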
Because the inverse trig functions require only one input, we must put the final side of the triangle in terms of the other two using the Pythagorean Theorem relation a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} The table below shows the values of a, b, and c for each of the inverse trig functions and the equivalent expressions for θ {\displaystyle \theta } that result from plugging the values into the equations θ = − i ln ⁡ ( a + i b c ) {\displaystyle \theta =-i\ln \left({\tfrac {a+ib}{c}}\right)} above and simplifying. a b c − i ln ⁡ ( a + i b c ) θ θ a , b ∈ R arcsin ⁡ ( z ) 1 − z 2 z 1 − i ln ⁡ ( 1 − z 2 + i z 1 ) = − i ln ⁡ ( 1 − z 2 + i z ) Im ⁡ ( ln ⁡ ( 1 − z 2 + i z ) ) arccos ⁡ ( z ) z 1 − z 2 1 − i ln ⁡ ( z + i 1 − z 2 1 ) = − i ln ⁡ ( z + z 2 − 1 ) Im ⁡ ( ln ⁡ ( z + z 2 − 1 ) ) arctan ⁡ ( z ) 1 z 1 + z 2 − i ln ⁡ ( 1 + i z 1 + z 2 ) = − i 2 ln ⁡ ( i − z i + z ) Im ⁡ ( ln ⁡ ( 1 + i z ) ) arccot ⁡ ( z ) z 1 z 2 + 1 − i ln ⁡ ( z + i z 2 + 1 ) = − i 2 ln ⁡ ( z + i z − i ) Im ⁡ ( ln ⁡ ( z + i ) ) arcsec ⁡ ( z ) 1 z 2 − 1 z − i ln ⁡ ( 1 + i z 2 − 1 z ) = − i ln ⁡ ( 1 z + 1 z 2 − 1 ) Im ⁡ ( ln ⁡ ( 1 z + 1 z 2 − 1 ) ) arccsc ⁡ ( z ) z 2 − 1 1 z − i ln ⁡ ( z 2 − 1 + i z ) = − i ln ⁡ ( 1 − 1 z 2 + i z ) Im ⁡ ( ln ⁡ ( 1 − 1 z 2 + i z ) ) {\displaystyle {\begin{aligned}&a&&b&&c&&-i\ln \left({\frac {a+ib}{c}}\right)&&\theta &&\theta _{a,b\in \mathbb {R} }\\\arcsin(z)\ \ &{\sqrt {1-z^{2}}}&&z&&1&&-i\ln \left({\frac {{\sqrt {1-z^{2}}}+iz}{1}}\right)&&=-i\ln \left({\sqrt {1-z^{2}}}+iz\right)&&\operatorname {Im} \left(\ln \left({\sqrt {1-z^{2}}}+iz\right)\right)\\\arccos(z)\ \ &z&&{\sqrt {1-z^{2}}}&&1&&-i\ln \left({\frac {z+i{\sqrt {1-z^{2}}}}{1}}\right)&&=-i\ln \left(z+{\sqrt {z^{2}-1}}\right)&&\operatorname {Im} \left(\ln \left(z+{\sqrt {z^{2}-1}}\right)\right)\\\arctan(z)\ \ &1&&z&&{\sqrt {1+z^{2}}}&&-i\ln \left({\frac {1+iz}{\sqrt {1+z^{2}}}}\right)&&=-{\frac {i}{2}}\ln \left({\frac {i-z}{i+z}}\right)&&\operatorname {Im} \left(\ln \left(1+iz\right)\right)\\\operatorname {arccot}(z)\ \ &z&&1&&{\sqrt {z^{2}+1}}&&-i\ln \left({\frac {z+i}{\sqrt {z^{2}+1}}}\right)&&=-{\frac {i}{2}}\ln \left({\frac {z+i}{z-i}}\right)&&\operatorname {Im} \left(\ln \left(z+i\right)\right)\\\operatorname {arcsec}(z)\ \ &1&&{\sqrt {z^{2}-1}}&&z&&-i\ln \left({\frac {1+i{\sqrt {z^{2}-1}}}{z}}\right)&&=-i\ln \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z^{2}}}-1}}\right)&&\operatorname {Im} \left(\ln \left({\frac {1}{z}}+{\sqrt {{\frac {1}{z^{2}}}-1}}\right)\right)\\\operatorname {arccsc}(z)\ \ &{\sqrt {z^{2}-1}}&&1&&z&&-i\ln \left({\frac {{\sqrt {z^{2}-1}}+i}{z}}\right)&&=-i\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)&&\operatorname {Im} \left(\ln \left({\sqrt {1-{\frac {1}{z^{2}}}}}+{\frac {i}{z}}\right)\right)\\\end{aligned}}} The particular form of the simplified expression can cause the output to differ from the usual principal branch of each of the inverse trig functions. The formulations given will output the usual principal branch when using the Im ⁡ ( ln ⁡ z ) ∈ ( − π , π ] {\displaystyle \operatorname {Im} \left(\ln z\right)\in (-\pi ,\pi ]} and Re ⁡ ( z ) ≥ 0 {\displaystyle \operatorname {Re} \left({\sqrt {z}}\right)\geq 0} principal branch for every function except arccotangent in the θ {\displaystyle \theta } column. 
Arccotangent in the θ {\displaystyle \theta } column will output on its usual principal branch by using the Im ⁡ ( ln ⁡ z ) ∈ [ 0 , 2 π ) {\displaystyle \operatorname {Im} \left(\ln z\right)\in [0,2\pi )} and Im ⁡ ( z ) ≥ 0 {\displaystyle \operatorname {Im} \left({\sqrt {z}}\right)\geq 0} convention. In this sense, all of the inverse trig functions can be thought of as specific cases of the complex-valued log function. Since these definitions work for any complex-valued z {\displaystyle z} , the definitions allow for hyperbolic angles as outputs and can be used to further define the inverse hyperbolic functions. It's possible to algebraically prove these relations by starting with the exponential forms of the trigonometric functions and solving for the inverse function. ==== Example proof ==== sin ⁡ ( ϕ ) = z ϕ = arcsin ⁡ ( z ) {\displaystyle {\begin{aligned}\sin(\phi )&=z\\\phi &=\arcsin(z)\end{aligned}}} Using the exponential definition of sine, and letting ξ = e i ϕ , {\displaystyle \xi =e^{i\phi },} z = e i ϕ − e − i ϕ 2 i 2 i z = ξ − 1 ξ 0 = ξ 2 − 2 i z ξ − 1 ξ = i z ± 1 − z 2 ϕ = − i ln ⁡ ( i z ± 1 − z 2 ) {\displaystyle {\begin{aligned}z&={\frac {e^{i\phi }-e^{-i\phi }}{2i}}\\[10mu]2iz&=\xi -{\frac {1}{\xi }}\\[5mu]0&=\xi ^{2}-2iz\xi -1\\[5mu]\xi &=iz\pm {\sqrt {1-z^{2}}}\\[5mu]\phi &=-i\ln \left(iz\pm {\sqrt {1-z^{2}}}\right)\end{aligned}}} (the positive branch is chosen) ϕ = arcsin ⁡ ( z ) = − i ln ⁡ ( i z + 1 − z 2 ) {\displaystyle \phi =\arcsin(z)=-i\ln \left(iz+{\sqrt {1-z^{2}}}\right)} == Applications == === Finding the angle of a right triangle === Inverse trigonometric functions are useful when trying to determine the remaining two angles of a right triangle when the lengths of the sides of the triangle are known. Recalling the right-triangle definitions of sine and cosine, it follows that θ = arcsin ⁡ ( opposite hypotenuse ) = arccos ⁡ ( adjacent hypotenuse ) . {\displaystyle \theta =\arcsin \left({\frac {\text{opposite}}{\text{hypotenuse}}}\right)=\arccos \left({\frac {\text{adjacent}}{\text{hypotenuse}}}\right).} Often, the hypotenuse is unknown and would need to be calculated from the Pythagorean Theorem before arcsine or arccosine can be used: a 2 + b 2 = h 2 {\displaystyle a^{2}+b^{2}=h^{2}} where h {\displaystyle h} is the length of the hypotenuse. Arctangent comes in handy in this situation, as the length of the hypotenuse is not needed. θ = arctan ⁡ ( opposite adjacent ) . {\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)\,.} For example, suppose a roof drops 8 feet as it runs out 20 feet. The roof makes an angle θ with the horizontal, where θ may be computed as follows: θ = arctan ⁡ ( opposite adjacent ) = arctan ⁡ ( rise run ) = arctan ⁡ ( 8 20 ) ≈ 21.8 ∘ . {\displaystyle \theta =\arctan \left({\frac {\text{opposite}}{\text{adjacent}}}\right)=\arctan \left({\frac {\text{rise}}{\text{run}}}\right)=\arctan \left({\frac {8}{20}}\right)\approx 21.8^{\circ }\,.} === In computer science and engineering === ==== Two-argument variant of arctangent ==== The two-argument atan2 function computes the arctangent of y/x given y and x, but with a range of (−π, π]. In other words, atan2(y, x) is the angle between the positive x-axis of a plane and the point (x, y) on it, with positive sign for counter-clockwise angles (upper half-plane, y > 0), and negative sign for clockwise angles (lower half-plane, y < 0). It was first introduced in many computer programming languages, but it is now also common in other fields of science and engineering.
In terms of the standard arctan function, that is with range of (−π/2, π/2), it can be expressed as follows: atan2 ⁡ ( y , x ) = { arctan ⁡ ( y x ) x > 0 arctan ⁡ ( y x ) + π y ≥ 0 , x < 0 arctan ⁡ ( y x ) − π y < 0 , x < 0 π 2 y > 0 , x = 0 − π 2 y < 0 , x = 0 undefined y = 0 , x = 0 {\displaystyle \operatorname {atan2} (y,x)={\begin{cases}\arctan \left({\frac {y}{x}}\right)&\quad x>0\\\arctan \left({\frac {y}{x}}\right)+\pi &\quad y\geq 0,\;x<0\\\arctan \left({\frac {y}{x}}\right)-\pi &\quad y<0,\;x<0\\{\frac {\pi }{2}}&\quad y>0,\;x=0\\-{\frac {\pi }{2}}&\quad y<0,\;x=0\\{\text{undefined}}&\quad y=0,\;x=0\end{cases}}} It also equals the principal value of the argument of the complex number x + iy. This limited version of the function above may also be defined using the tangent half-angle formulae as follows: atan2 ⁡ ( y , x ) = 2 arctan ⁡ ( y x 2 + y 2 + x ) {\displaystyle \operatorname {atan2} (y,x)=2\arctan \left({\frac {y}{{\sqrt {x^{2}+y^{2}}}+x}}\right)} provided that either x > 0 or y ≠ 0. However, this fails when x ≤ 0 and y = 0, so the expression is unsuitable for computational use. The above argument order (y, x) seems to be the most common, and in particular is used in ISO standards such as the C programming language, but a few authors may use the opposite convention (x, y), so some caution is warranted. (See variations at atan2 § Realizations of the function in common computer languages.) ==== Arctangent function with location parameter ==== In many applications the solution y {\displaystyle y} of the equation x = tan ⁡ ( y ) {\displaystyle x=\tan(y)} should come as close as possible to a given value − ∞ < η < ∞ {\displaystyle -\infty <\eta <\infty } . The appropriate solution is produced by the parameter-modified arctangent function y = arctan η ⁡ ( x ) := arctan ⁡ ( x ) + π rni ⁡ ( η − arctan ⁡ ( x ) π ) . {\displaystyle y=\arctan _{\eta }(x):=\arctan(x)+\pi \,\operatorname {rni} \left({\frac {\eta -\arctan(x)}{\pi }}\right)\,.} The function rni {\displaystyle \operatorname {rni} } rounds to the nearest integer. ==== Numerical accuracy ==== For angles near 0 and π, arccosine is ill-conditioned, and similarly with arcsine for angles near −π/2 and π/2. Computer applications thus need to consider the stability of inputs to these functions and the sensitivity of their calculations, or use alternate methods. == References == Abramowitz, Milton; Stegun, Irene A., eds. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover Publications. ISBN 978-0-486-61272-0. == External links == Weisstein, Eric W. "Inverse Tangent". MathWorld.
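Pulling together the computational points above, the following minimal sketch (the names atan2_manual and arctan_eta are ad hoc, not standard APIs) implements atan2 from the piecewise arctan expression, the η-parameterized arctangent with Python's round standing in for rni, and the roof-pitch example:

```python
import math

def atan2_manual(y, x):
    # Piecewise definition of atan2 in terms of arctan, as given above.
    if x > 0:
        return math.atan(y / x)
    if x < 0 and y >= 0:
        return math.atan(y / x) + math.pi
    if x < 0 and y < 0:
        return math.atan(y / x) - math.pi
    if x == 0 and y > 0:
        return math.pi / 2
    if x == 0 and y < 0:
        return -math.pi / 2
    raise ValueError("atan2(0, 0) is undefined")

def arctan_eta(x, eta):
    # arctan_eta(x) = arctan(x) + pi * rni((eta - arctan(x)) / pi)
    a = math.atan(x)
    return a + math.pi * round((eta - a) / math.pi)

for y, x in [(1, 1), (1, -1), (-1, -1), (2, 0), (-2, 0), (0, -3)]:
    assert math.isclose(atan2_manual(y, x), math.atan2(y, x), rel_tol=1e-12)

# Roof example from the text: rise 8 over run 20.
print(f"roof angle: {math.degrees(math.atan(8 / 20)):.1f} degrees")  # ~21.8

# Solution of tan(y) = 1 closest to eta = 10 (i.e. pi/4 + 3*pi).
print(arctan_eta(1.0, 10.0))
```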
Wikipedia/Inverse_trigonometric_function
Solving the geodesic equations is a procedure used in mathematics, particularly Riemannian geometry, and in physics, particularly in general relativity, that results in obtaining geodesics. Physically, these represent the paths of (usually ideal) particles with no proper acceleration, their motion satisfying the geodesic equations. Because the particles are subject to no proper acceleration, the geodesics generally represent the straightest path between two points in a curved spacetime. == The differential geodesic equation == On an n-dimensional Riemannian manifold M {\displaystyle M} , the geodesic equation written in a coordinate chart with coordinates x a {\displaystyle x^{a}} is: d 2 x a d s 2 + Γ b c a d x b d s d x c d s = 0 {\displaystyle {\frac {d^{2}x^{a}}{ds^{2}}}+\Gamma _{bc}^{a}{\frac {dx^{b}}{ds}}{\frac {dx^{c}}{ds}}=0} where the coordinates xa(s) are regarded as the coordinates of a curve γ(s) in M {\displaystyle M} and Γ b c a {\displaystyle \Gamma _{bc}^{a}} are the Christoffel symbols. The Christoffel symbols are functions of the metric and are given by: Γ b c a = 1 2 g a d ( g c d , b + g b d , c − g b c , d ) {\displaystyle \Gamma _{bc}^{a}={\frac {1}{2}}g^{ad}\left(g_{cd,b}+g_{bd,c}-g_{bc,d}\right)} where the comma indicates a partial derivative with respect to the coordinates: g a b , c = ∂ g a b ∂ x c {\displaystyle g_{ab,c}={\frac {\partial {g_{ab}}}{\partial {x^{c}}}}} As the manifold has dimension n {\displaystyle n} , the geodesic equations are a system of n {\displaystyle n} ordinary differential equations for the n {\displaystyle n} coordinate variables. Thus, allied with initial conditions, the system can, according to the Picard–Lindelöf theorem, be solved. One can also use a Lagrangian approach to the problem: defining L = g μ ν d x μ d s d x ν d s {\displaystyle L={\sqrt {g_{\mu \nu }{\frac {dx^{\mu }}{ds}}{\frac {dx^{\nu }}{ds}}}}} and applying the Euler–Lagrange equation. == Heuristics == As the laws of physics can be written in any coordinate system, it is convenient to choose one that simplifies the geodesic equations. Mathematically, this means a coordinate chart is chosen in which the geodesic equations have a particularly tractable form. == Effective potentials == When the geodesic equations can be separated into terms containing only an undifferentiated variable and terms containing only its derivative, the former may be consolidated into an effective potential dependent only on position. In this case, many of the heuristic methods of analysing energy diagrams apply, in particular the location of turning points. == Solution techniques == Solving the geodesic equations means obtaining an exact solution, possibly even the general solution, of the geodesic equations. Most attacks secretly employ the point symmetry group of the system of geodesic equations. This often yields a result giving a family of solutions implicitly, but in many examples does yield the general solution in explicit form. In general relativity, to obtain timelike geodesics it is often simplest to start from the spacetime metric, after dividing by d s 2 {\displaystyle ds^{2}} to obtain the form − 1 = g μ ν x ˙ μ x ˙ ν {\displaystyle -1=g_{\mu \nu }{\dot {x}}^{\mu }{\dot {x}}^{\nu }} where the dot represents differentiation with respect to s {\displaystyle s} . Because timelike geodesics are maximal, one may apply the Euler–Lagrange equation directly, and thus obtain a set of equations equivalent to the geodesic equations. 
This method has the advantage of bypassing a tedious calculation of Christoffel symbols. == See also ==
- Geodesics of the Schwarzschild vacuum
- Mathematics of general relativity
- Transition from special relativity to general relativity
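As a concrete illustration of the geodesic equation above, the following minimal sketch (an assumed example, not from the article) integrates the system on the unit 2-sphere, whose only nonvanishing Christoffel symbols are Γ^θ_{φφ} = −sin θ cos θ and Γ^φ_{θφ} = Γ^φ_{φθ} = cot θ; a geodesic launched along the equator should stay on it:

```python
import math

def geodesic_rhs(state):
    # state = (theta, phi, dtheta, dphi) on the unit 2-sphere; the second-order
    # geodesic equation is rewritten as a first-order system.
    th, ph, dth, dph = state
    # theta'' = sin(theta) cos(theta) (phi')^2   (from Gamma^theta_{phi phi})
    ddth = math.sin(th) * math.cos(th) * dph * dph
    # phi''   = -2 cot(theta) theta' phi'        (from Gamma^phi_{theta phi})
    ddph = -2.0 * (math.cos(th) / math.sin(th)) * dth * dph
    return (dth, dph, ddth, ddph)

def rk4_step(state, h):
    def add(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = geodesic_rhs(state)
    k2 = geodesic_rhs(add(state, k1, h / 2))
    k3 = geodesic_rhs(add(state, k2, h / 2))
    k4 = geodesic_rhs(add(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start on the equator (theta = pi/2) moving in the phi direction.
state = (math.pi / 2, 0.0, 0.0, 1.0)
for _ in range(1000):
    state = rk4_step(state, 0.01)
print(f"theta after s = 10: {state[0]:.12f}  (should remain {math.pi/2:.12f})")
```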
Wikipedia/Solving_the_geodesic_equations
In logic and computer science, specifically automated reasoning, unification is an algorithmic process of solving equations between symbolic expressions, each of the form Left-hand side = Right-hand side. For example, using x,y,z as variables, and taking f to be an uninterpreted function, the singleton equation set { f(1,y) = f(x,2) } is a syntactic first-order unification problem that has the substitution { x ↦ 1, y ↦ 2 } as its only solution. Conventions differ on what values variables may assume and which expressions are considered equivalent. In first-order syntactic unification, variables range over first-order terms and equivalence is syntactic. This version of unification has a unique "best" answer and is used in logic programming and programming language type system implementation, especially in Hindley–Milner based type inference algorithms. In higher-order unification, possibly restricted to higher-order pattern unification, terms may include lambda expressions, and equivalence is up to beta-reduction. This version is used in proof assistants and higher-order logic programming, for example Isabelle, Twelf, and lambdaProlog. Finally, in semantic unification or E-unification, equality is subject to background knowledge and variables range over a variety of domains. This version is used in SMT solvers, term rewriting algorithms, and cryptographic protocol analysis. == Formal definition == A unification problem is a finite set E={ l1 ≐ r1, ..., ln ≐ rn } of equations to solve, where li, ri are in the set T {\displaystyle T} of terms or expressions. Depending on which expressions or terms are allowed to occur in an equation set or unification problem, and which expressions are considered equal, several frameworks of unification are distinguished. If higher-order variables, that is, variables representing functions, are allowed in an expression, the process is called higher-order unification, otherwise first-order unification. If a solution is required to make both sides of each equation literally equal, the process is called syntactic or free unification, otherwise semantic or equational unification, or E-unification, or unification modulo theory. If the right side of each equation is closed (no free variables), the problem is called (pattern) matching. The left side (with variables) of each equation is called the pattern. === Prerequisites === Formally, a unification approach presupposes:
- An infinite set V {\displaystyle V} of variables. For higher-order unification, it is convenient to choose V {\displaystyle V} disjoint from the set of lambda-term bound variables.
- A set T {\displaystyle T} of terms such that V ⊆ T {\displaystyle V\subseteq T} . For first-order unification, T {\displaystyle T} is usually the set of first-order terms (terms built from variable and function symbols). For higher-order unification T {\displaystyle T} consists of first-order terms and lambda terms (terms containing some higher-order variables).
- A mapping vars : T → P ( V ) {\displaystyle {\text{vars}}\colon T\rightarrow \mathbb {P} (V)} , assigning to each term t {\displaystyle t} the set vars ( t ) ⊊ V {\displaystyle {\text{vars}}(t)\subsetneq V} of free variables occurring in t {\displaystyle t} .
- A theory or equivalence relation ≡ {\displaystyle \equiv } on T {\displaystyle T} , indicating which terms are considered equal.
For first-order E-unification, ≡ {\displaystyle \equiv } reflects the background knowledge about certain function symbols; for example, if ⊕ {\displaystyle \oplus } is considered commutative, t ≡ u {\displaystyle t\equiv u} if u {\displaystyle u} results from t {\displaystyle t} by swapping the arguments of ⊕ {\displaystyle \oplus } at some (possibly all) occurrences. In the most typical case, where there is no background knowledge at all, only literally, or syntactically, identical terms are considered equal. In this case, ≡ is called the free theory (because it is a free object), the empty theory (because the set of equational sentences, or the background knowledge, is empty), the theory of uninterpreted functions (because unification is done on uninterpreted terms), or the theory of constructors (because all function symbols just build up data terms, rather than operating on them). For higher-order unification, usually t ≡ u {\displaystyle t\equiv u} if t {\displaystyle t} and u {\displaystyle u} are alpha equivalent. As an example of how the set of terms and theory affects the set of solutions, the syntactic first-order unification problem { y = cons(2,y) } has no solution over the set of finite terms. However, it has the single solution { y ↦ cons(2,cons(2,cons(2,...))) } over the set of infinite tree terms. Similarly, the semantic first-order unification problem { a⋅x = x⋅a } has each substitution of the form { x ↦ a⋅...⋅a } as a solution in a semigroup, i.e. if (⋅) is considered associative. But the same problem, viewed in an abelian group, where (⋅) is also considered commutative, has any substitution at all as a solution. As an example of higher-order unification, the singleton set { a = y(x) } is a syntactic second-order unification problem, since y is a function variable. One solution is { x ↦ a, y ↦ (identity function) }; another one is { y ↦ (constant function mapping each value to a), x ↦ (any value) }. === Substitution === A substitution is a mapping σ : V → T {\displaystyle \sigma :V\rightarrow T} from variables to terms; the notation { x 1 ↦ t 1 , . . . , x k ↦ t k } {\displaystyle \{x_{1}\mapsto t_{1},...,x_{k}\mapsto t_{k}\}} refers to a substitution mapping each variable x i {\displaystyle x_{i}} to the term t i {\displaystyle t_{i}} , for i = 1 , . . . , k {\displaystyle i=1,...,k} , and every other variable to itself; the x i {\displaystyle x_{i}} must be pairwise distinct. Applying that substitution to a term t {\displaystyle t} is written in postfix notation as t { x 1 ↦ t 1 , . . . , x k ↦ t k } {\displaystyle t\{x_{1}\mapsto t_{1},...,x_{k}\mapsto t_{k}\}} ; it means to (simultaneously) replace every occurrence of each variable x i {\displaystyle x_{i}} in the term t {\displaystyle t} by t i {\displaystyle t_{i}} . The result t τ {\displaystyle t\tau } of applying a substitution τ {\displaystyle \tau } to a term t {\displaystyle t} is called an instance of that term t {\displaystyle t} . As a first-order example, applying the substitution { x ↦ h(a,y), z ↦ b } to the term f(x,a,g(z),y) yields the instance f(h(a,y),a,g(b),y). === Generalization, specialization === If a term t {\displaystyle t} has an instance equivalent to a term u {\displaystyle u} , that is, if t σ ≡ u {\displaystyle t\sigma \equiv u} for some substitution σ {\displaystyle \sigma } , then t {\displaystyle t} is called more general than u {\displaystyle u} , and u {\displaystyle u} is called more special than, or subsumed by, t {\displaystyle t} .
For example, x ⊕ a {\displaystyle x\oplus a} is more general than a ⊕ b {\displaystyle a\oplus b} if ⊕ is commutative, since then ( x ⊕ a ) { x ↦ b } = b ⊕ a ≡ a ⊕ b {\displaystyle (x\oplus a)\{x\mapsto b\}=b\oplus a\equiv a\oplus b} . If ≡ is literal (syntactic) identity of terms, a term may be both more general and more special than another one only if both terms differ just in their variable names, not in their syntactic structure; such terms are called variants, or renamings of each other. For example, f ( x 1 , a , g ( z 1 ) , y 1 ) {\displaystyle f(x_{1},a,g(z_{1}),y_{1})} is a variant of f ( x 2 , a , g ( z 2 ) , y 2 ) {\displaystyle f(x_{2},a,g(z_{2}),y_{2})} , since f ( x 1 , a , g ( z 1 ) , y 1 ) { x 1 ↦ x 2 , y 1 ↦ y 2 , z 1 ↦ z 2 } = f ( x 2 , a , g ( z 2 ) , y 2 ) {\displaystyle f(x_{1},a,g(z_{1}),y_{1})\{x_{1}\mapsto x_{2},y_{1}\mapsto y_{2},z_{1}\mapsto z_{2}\}=f(x_{2},a,g(z_{2}),y_{2})} and f ( x 2 , a , g ( z 2 ) , y 2 ) { x 2 ↦ x 1 , y 2 ↦ y 1 , z 2 ↦ z 1 } = f ( x 1 , a , g ( z 1 ) , y 1 ) . {\displaystyle f(x_{2},a,g(z_{2}),y_{2})\{x_{2}\mapsto x_{1},y_{2}\mapsto y_{1},z_{2}\mapsto z_{1}\}=f(x_{1},a,g(z_{1}),y_{1}).} However, f ( x 1 , a , g ( z 1 ) , y 1 ) {\displaystyle f(x_{1},a,g(z_{1}),y_{1})} is not a variant of f ( x 2 , a , g ( x 2 ) , x 2 ) {\displaystyle f(x_{2},a,g(x_{2}),x_{2})} , since no substitution can transform the latter term into the former one. The latter term is therefore properly more special than the former one. For arbitrary ≡ {\displaystyle \equiv } , a term may be both more general and more special than a structurally different term. For example, if ⊕ is idempotent, that is, if always x ⊕ x ≡ x {\displaystyle x\oplus x\equiv x} , then the term x ⊕ y {\displaystyle x\oplus y} is more general than z {\displaystyle z} , and vice versa, although x ⊕ y {\displaystyle x\oplus y} and z {\displaystyle z} are of different structure. A substitution σ {\displaystyle \sigma } is more special than, or subsumed by, a substitution τ {\displaystyle \tau } if t σ {\displaystyle t\sigma } is subsumed by t τ {\displaystyle t\tau } for each term t {\displaystyle t} . We also say that τ {\displaystyle \tau } is more general than σ {\displaystyle \sigma } . More formally, take a nonempty infinite set V {\displaystyle V} of auxiliary variables such that no equation l i ≐ r i {\displaystyle l_{i}\doteq r_{i}} in the unification problem contains variables from V {\displaystyle V} . Then a substitution σ {\displaystyle \sigma } is subsumed by another substitution τ {\displaystyle \tau } if there is a substitution θ {\displaystyle \theta } such that for all terms X ∉ V {\displaystyle X\notin V} , X σ ≡ X τ θ {\displaystyle X\sigma \equiv X\tau \theta } . For instance { x ↦ a , y ↦ a } {\displaystyle \{x\mapsto a,y\mapsto a\}} is subsumed by τ = { x ↦ y } {\displaystyle \tau =\{x\mapsto y\}} , using θ = { y ↦ a } {\displaystyle \theta =\{y\mapsto a\}} , but σ = { x ↦ a } {\displaystyle \sigma =\{x\mapsto a\}} is not subsumed by τ = { x ↦ y } {\displaystyle \tau =\{x\mapsto y\}} , as f ( x , y ) σ = f ( a , y ) {\displaystyle f(x,y)\sigma =f(a,y)} is not an instance of f ( x , y ) τ = f ( y , y ) {\displaystyle f(x,y)\tau =f(y,y)} . === Solution set === A substitution σ is a solution of the unification problem E if liσ ≡ riσ for i = 1 , . . . , n {\displaystyle i=1,...,n} . Such a substitution is also called a unifier of E. 
For example, if ⊕ is associative, the unification problem { x ⊕ a ≐ a ⊕ x } has the solutions {x ↦ a}, {x ↦ a ⊕ a}, {x ↦ a ⊕ a ⊕ a}, etc., while the problem { x ⊕ a ≐ a } has no solution. For a given unification problem E, a set S of unifiers is called complete if each solution substitution is subsumed by some substitution in S. A complete substitution set always exists (e.g. the set of all solutions), but in some frameworks (such as unrestricted higher-order unification) the problem of determining whether any solution exists (i.e., whether the complete substitution set is nonempty) is undecidable. The set S is called minimal if none of its members subsumes another one. Depending on the framework, a complete and minimal substitution set may have zero, one, finitely many, or infinitely many members, or may not exist at all due to an infinite chain of redundant members. Thus, in general, unification algorithms compute a finite approximation of the complete set, which may or may not be minimal, although most algorithms avoid redundant unifiers when possible. For first-order syntactical unification, Martelli and Montanari gave an algorithm that reports unsolvability or computes a single unifier that by itself forms a complete and minimal substitution set, called the most general unifier. == Syntactic unification of first-order terms == Syntactic unification of first-order terms is the most widely used unification framework. It is based on T being the set of first-order terms (over some given set V of variables, C of constants and Fn of n-ary function symbols) and on ≡ being syntactic equality. In this framework, each solvable unification problem {l1 ≐ r1, ..., ln ≐ rn} has a complete, and obviously minimal, singleton solution set {σ}. Its member σ is called the most general unifier (mgu) of the problem. The terms on the left and the right hand side of each potential equation become syntactically equal when the mgu is applied, i.e. l1σ = r1σ ∧ ... ∧ lnσ = rnσ. Any unifier of the problem is subsumed by the mgu σ. The mgu is unique up to variants: if S1 and S2 are both complete and minimal solution sets of the same syntactical unification problem, then S1 = { σ1 } and S2 = { σ2 } for some substitutions σ1 and σ2, and xσ1 is a variant of xσ2 for each variable x occurring in the problem. For example, the unification problem { x ≐ z, y ≐ f(x) } has a unifier { x ↦ z, y ↦ f(z) }, because applying it to both sides of each equation yields syntactically equal terms: x and z both become z, and y and f(x) both become f(z). This is also the most general unifier. Other unifiers for the same problem are e.g. { x ↦ f(x1), y ↦ f(f(x1)), z ↦ f(x1) }, { x ↦ f(f(x1)), y ↦ f(f(f(x1))), z ↦ f(f(x1)) }, and so on; there are infinitely many similar unifiers. As another example, the problem g(x,x) ≐ f(y) has no solution with respect to ≡ being literal identity, since any substitution applied to the left and right hand side will keep the outermost g and f, respectively, and terms with different outermost function symbols are syntactically different. === Unification algorithms === Jacques Herbrand discussed the basic concepts of unification and sketched an algorithm in 1930, but most authors attribute the first unification algorithm to John Alan Robinson. Robinson's algorithm had worst-case exponential behavior in both time and space. Numerous authors have proposed more efficient unification algorithms.
Algorithms with worst-case linear-time behavior were discovered independently by Martelli & Montanari (1976) and Paterson & Wegman (1976). Baader & Snyder (2001) use a technique similar to Paterson–Wegman's, hence their algorithm is linear, but like most linear-time unification algorithms it is slower than the Robinson version on small inputs due to the overhead of preprocessing the inputs and postprocessing the output, such as the construction of a DAG representation. de Champeaux (2022) is also of linear complexity in the input size but is competitive with the Robinson algorithm on small inputs. The speedup is obtained by using an object-oriented representation of the predicate calculus that avoids the need for pre- and post-processing, instead making variable objects responsible for creating a substitution and for dealing with aliasing. de Champeaux claims that the ability to add functionality to predicate calculus represented as programmatic objects provides opportunities for optimizing other logic operations as well. The following algorithm is commonly presented and originates from Martelli & Montanari (1982). Given a finite set G = { s 1 ≐ t 1 , . . . , s n ≐ t n } {\displaystyle G=\{s_{1}\doteq t_{1},...,s_{n}\doteq t_{n}\}} of potential equations, the algorithm applies rules to transform it to an equivalent set of equations of the form { x1 ≐ u1, ..., xm ≐ um } where x1, ..., xm are distinct variables and u1, ..., um are terms containing none of the xi. A set of this form can be read as a substitution. If there is no solution the algorithm terminates with ⊥; other authors use "Ω", or "fail" in that case. The operation of substituting all occurrences of variable x in problem G with term t is denoted G {x ↦ t}. For simplicity, constant symbols are regarded as function symbols having zero arguments. In the standard presentation, the transformation rules are:
delete: G ∪ { t ≐ t } ⇒ G
decompose: G ∪ { f(s1,...,sk) ≐ f(t1,...,tk) } ⇒ G ∪ { s1 ≐ t1, ..., sk ≐ tk }
conflict: G ∪ { f(s1,...,sk) ≐ g(t1,...,tm) } ⇒ ⊥ if f ≠ g or k ≠ m
swap: G ∪ { f(s1,...,sk) ≐ x } ⇒ G ∪ { x ≐ f(s1,...,sk) }
eliminate: G ∪ { x ≐ t } ⇒ G{x ↦ t} ∪ { x ≐ t } if x ∈ vars(G) and x ∉ vars(t)
check: G ∪ { x ≐ f(s1,...,sk) } ⇒ ⊥ if x ∈ vars(f(s1,...,sk))
==== Occurs check ==== An attempt to unify a variable x with a term containing x as a strict subterm x ≐ f(..., x, ...) would lead to an infinite term as solution for x, since x would occur as a subterm of itself. In the set of (finite) first-order terms as defined above, the equation x ≐ f(..., x, ...) has no solution; hence the eliminate rule may only be applied if x ∉ vars(t). Since that additional check, called occurs check, slows down the algorithm, it is omitted e.g. in most Prolog systems. From a theoretical point of view, omitting the check amounts to solving equations over infinite trees; see the section Unification of infinite terms below. ==== Proof of termination ==== For the proof of termination of the algorithm consider a triple ⟨ n v a r , n l h s , n e q n ⟩ {\displaystyle \langle n_{var},n_{lhs},n_{eqn}\rangle } where nvar is the number of variables that occur more than once in the equation set, nlhs is the number of function symbols and constants on the left hand sides of potential equations, and neqn is the number of equations. When rule eliminate is applied, nvar decreases, since x is eliminated from G and kept only in { x ≐ t }. Applying any other rule can never increase nvar again. When rule decompose, conflict, or swap is applied, nlhs decreases, since at least the left hand side's outermost f disappears. Applying any of the remaining rules delete or check cannot increase nlhs, but decreases neqn. Hence, any rule application decreases the triple ⟨ n v a r , n l h s , n e q n ⟩ {\displaystyle \langle n_{var},n_{lhs},n_{eqn}\rangle } with respect to the lexicographical order, which is possible only a finite number of times.
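To make the rule set above concrete, here is a minimal Python sketch of syntactic first-order unification with the occurs check. The term representation (variables as Python strings, function applications as tuples of the form (f, arg1, ..., argk), constants as zero-argument compounds such as ('a',)) and the helper names unify, walk, and occurs are choices of this illustration, not part of the standard presentation:

# A minimal sketch of syntactic first-order unification (illustrative only).
# Variables are Python strings; compound terms are tuples (f, arg1, ..., argk);
# constants are 0-argument compounds such as ('a',).

def is_var(t):
    return isinstance(t, str)

def walk(t, subst):
    # Follow variable bindings to their current representative.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(x, t, subst):
    # Occurs check: does variable x occur in t under subst?
    t = walk(t, subst)
    if is_var(t):
        return t == x
    return any(occurs(x, arg, subst) for arg in t[1:])

def unify(s, t, subst=None):
    # Returns a most general unifier as a (triangular) dict, or None on failure.
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:                                  # delete
        return subst
    if is_var(s):                               # eliminate (with occurs check)
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):                               # swap
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):        # conflict
        return None
    for a, b in zip(s[1:], t[1:]):              # decompose
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# { x ≐ z, y ≐ f(x) }, encoded as one equation between pair terms:
print(unify(('pair', 'x', 'y'), ('pair', 'z', ('f', 'x'))))
# -> {'x': 'z', 'y': ('f', 'x')}; walking y's binding gives f(z)
print(unify('y', ('cons', ('2',), 'y')))        # -> None: occurs check fires

On the earlier example { x ≐ z, y ≐ f(x) }, the sketch returns a triangular substitution that resolves to the mgu { x ↦ z, y ↦ f(z) }; on y ≐ cons(2,y) it returns None, which is exactly the occurs check rejecting an infinite term.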
Conor McBride observes that "by expressing the structure which unification exploits" in a dependently typed language such as Epigram, Robinson's unification algorithm can be made recursive on the number of variables, in which case a separate termination proof becomes unnecessary. === Examples of syntactic unification of first-order terms === In the Prolog syntactical convention a symbol starting with an upper case letter is a variable name; a symbol that starts with a lowercase letter is a function symbol; the comma is used as the logical and operator. For mathematical notation, x,y,z are used as variables, f,g as function symbols, and a,b as constants. The most general unifier of a syntactic first-order unification problem of size n may have a size that is exponential in n (on the order of 2^n). For example, the problem ⁠ ( ( ( a ∗ z ) ∗ y ) ∗ x ) ∗ w ≐ w ∗ ( x ∗ ( y ∗ ( z ∗ a ) ) ) {\displaystyle (((a*z)*y)*x)*w\doteq w*(x*(y*(z*a)))} ⁠ has the most general unifier ⁠ { z ↦ a , y ↦ a ∗ a , x ↦ ( a ∗ a ) ∗ ( a ∗ a ) , w ↦ ( ( a ∗ a ) ∗ ( a ∗ a ) ) ∗ ( ( a ∗ a ) ∗ ( a ∗ a ) ) } {\displaystyle \{z\mapsto a,y\mapsto a*a,x\mapsto (a*a)*(a*a),w\mapsto ((a*a)*(a*a))*((a*a)*(a*a))\}} ⁠, in which the term bound to each successive variable doubles in size. In order to avoid exponential time complexity caused by such blow-up, advanced unification algorithms work on directed acyclic graphs (dags) rather than trees. === Application: unification in logic programming === The concept of unification is one of the main ideas behind logic programming. Specifically, unification is a basic building block of resolution, a rule of inference for determining formula satisfiability. In Prolog, the equality symbol = denotes first-order syntactic unification. It represents the mechanism of binding the contents of variables and can be viewed as a kind of one-time assignment. In Prolog: A variable can be unified with a constant, a term, or another variable, thus effectively becoming its alias. In many modern Prolog dialects and in first-order logic, a variable cannot be unified with a term that contains it; this is the so-called occurs check. Two constants can be unified only if they are identical. Similarly, a term can be unified with another term if the top function symbols and arities of the terms are identical and if the parameters can be unified simultaneously. Note that this is a recursive behavior. Most operations, including +, -, *, /, are not evaluated by =. So for example 1+2 = 3 is not satisfiable because they are syntactically different. The use of integer arithmetic constraints #= introduces a form of E-unification for which these operations are interpreted and evaluated. === Application: type inference === Type inference algorithms are typically based on unification, particularly Hindley–Milner type inference which is used by the functional languages Haskell and ML. For example, when attempting to infer the type of the Haskell expression True : ['x'], the compiler will use the type a -> [a] -> [a] of the list construction function (:), the type Bool of the first argument True, and the type [Char] of the second argument ['x']. The polymorphic type variable a will be unified with Bool and the second argument [a] will be unified with [Char]. a cannot be both Bool and Char at the same time, therefore this expression is not correctly typed. Like for Prolog, an algorithm for type inference can be given: Any type variable unifies with any type expression, and is instantiated to that expression. A specific theory might restrict this rule with an occurs check.
Two type constants unify only if they are the same type. Two type constructions unify only if they are applications of the same type constructor and all of their component types recursively unify. === Application: Feature Structure Unification === Unification has been used in different research areas of computational linguistics. == Order-sorted unification == Order-sorted logic allows one to assign a sort, or type, to each term, and to declare a sort s1 a subsort of another sort s2, commonly written as s1 ⊆ s2. For example, when reasoning about biological creatures, it is useful to declare a sort dog to be a subsort of a sort animal. Wherever a term of some sort s is required, a term of any subsort of s may be supplied instead. For example, assuming a function declaration mother: animal → animal, and a constant declaration lassie: dog, the term mother(lassie) is perfectly valid and has the sort animal. In order to supply the information that the mother of a dog is a dog in turn, another declaration mother: dog → dog may be issued; this is called function overloading, similar to overloading in programming languages. Walther gave a unification algorithm for terms in order-sorted logic, requiring for any two declared sorts s1, s2 their intersection s1 ∩ s2 to be declared, too: if x1 and x2 are variables of sorts s1 and s2, respectively, the equation x1 ≐ x2 has the solution { x1 = x, x2 = x }, where x: s1 ∩ s2. After incorporating this algorithm into a clause-based automated theorem prover, he could solve a benchmark problem by translating it into order-sorted logic, thereby shrinking it by an order of magnitude, as many unary predicates turned into sorts. Smolka generalized order-sorted logic to allow for parametric polymorphism. In his framework, subsort declarations are propagated to complex type expressions. As a programming example, a parametric sort list(X) may be declared (with X being a type parameter as in a C++ template), and from a subsort declaration int ⊆ float the relation list(int) ⊆ list(float) is automatically inferred, meaning that each list of integers is also a list of floats. Schmidt-Schauß generalized order-sorted logic to allow for term declarations. As an example, assuming subsort declarations even ⊆ int and odd ⊆ int, a term declaration like ∀ i : int. (i + i) : even makes it possible to declare a property of integer addition that could not be expressed by ordinary overloading. == Unification of infinite terms == Background on infinite trees:
B. Courcelle (1983). "Fundamental Properties of Infinite Trees". Theoret. Comput. Sci. 25 (2): 95–169. doi:10.1016/0304-3975(83)90059-2.
Michael J. Maher (Jul 1988). "Complete Axiomatizations of the Algebras of Finite, Rational and Infinite Trees". Proc. IEEE 3rd Annual Symp. on Logic in Computer Science, Edinburgh. pp. 348–357.
Joxan Jaffar; Peter J. Stuckey (1986). "Semantics of Infinite Tree Logic Programming". Theoretical Computer Science. 46: 141–158. doi:10.1016/0304-3975(86)90027-7.
Unification algorithm, Prolog II:
A. Colmerauer (1982). K.L. Clark; S.-A. Tarnlund (eds.). Prolog and Infinite Trees. Academic Press.
Alain Colmerauer (1984). "Equations and Inequations on Finite and Infinite Trees". In ICOT (ed.). Proc. Int. Conf. on Fifth Generation Computer Systems. pp. 85–99.
Applications:
Francis Giannesini; Jacques Cohen (1984). "Parser Generation and Grammar Manipulation using Prolog's Infinite Trees". Journal of Logic Programming. 1 (3): 253–265. doi:10.1016/0743-1066(84)90013-X.
== E-unification == E-unification is the problem of finding solutions to a given set of equations, taking into account some equational background knowledge E. The latter is given as a set of universal equalities. For some particular sets E, equation solving algorithms (a.k.a. E-unification algorithms) have been devised; for others it has been proven that no such algorithms can exist. For example, if a and b are distinct constants, the equation ⁠ x ∗ a ≐ y ∗ b {\displaystyle x*a\doteq y*b} ⁠ has no solution with respect to purely syntactic unification, where nothing is known about the operator ⁠ ∗ {\displaystyle *} ⁠. However, if ⁠ ∗ {\displaystyle *} ⁠ is known to be commutative, then the substitution {x ↦ b, y ↦ a} solves the above equation, since (x ∗ a) {x ↦ b, y ↦ a} = b ∗ a ≡ a ∗ b = (y ∗ b) {x ↦ b, y ↦ a}. The background knowledge E could state the commutativity of ⁠ ∗ {\displaystyle *} ⁠ by the universal equality "⁠ u ∗ v = v ∗ u {\displaystyle u*v=v*u} ⁠ for all u, v". === Particular background knowledge sets E === It is said that unification is decidable for a theory, if a unification algorithm has been devised for it that terminates for any input problem. It is said that unification is semi-decidable for a theory, if a unification algorithm has been devised for it that terminates for any solvable input problem, but may keep searching forever for solutions of an unsolvable input problem. Unification is decidable for the following theories:
A
A,C
A,C,I
A,C,Nl
A,I
A,Nl,Nr (monoid)
C
Boolean rings
Abelian groups, even if the signature is expanded by arbitrary additional symbols (but not axioms)
K4 modal algebras
Unification is semi-decidable for the following theories:
A,Dl,Dr
A,C,Dl
Commutative rings
=== One-sided paramodulation === If there is a convergent term rewriting system R available for E, the one-sided paramodulation algorithm can be used to enumerate all solutions of given equations. Starting with G being the unification problem to be solved and S being the identity substitution, rules are applied nondeterministically until the empty set appears as the actual G, in which case the actual S is a unifying substitution. Depending on the order the paramodulation rules are applied, on the choice of the actual equation from G, and on the choice of R's rules in mutate, different computation paths are possible. Only some lead to a solution, while others end at a G ≠ {} where no further rule is applicable (e.g. G = { f(...) ≐ g(...) }). For an example, a term rewrite system R defining the append operator of lists built from cons and nil is used, with rules (1) app(nil,z) → z and (2) app(x.y,z) → x.app(y,z), where cons(x,y) is written in infix notation as x.y for brevity; e.g. app(a.b.nil,c.d.nil) → a.app(b.nil,c.d.nil) → a.b.app(nil,c.d.nil) → a.b.c.d.nil demonstrates the concatenation of the lists a.b.nil and c.d.nil, employing rewrite rules 2, 2, and 1. The equational theory E corresponding to R is the congruence closure of R, both viewed as binary relations on terms. For example, app(a.b.nil,c.d.nil) ≡ a.b.c.d.nil ≡ app(a.b.c.d.nil,nil). The paramodulation algorithm enumerates solutions to equations with respect to that E when fed with the example R. A successful computation path exists for the unification problem { app(x,app(y,x)) ≐ a.a.nil }. To avoid variable name clashes, rewrite rules are consistently renamed each time before their use by rule mutate; v2, v3, ... are computer-generated variable names for this purpose.
Each time the mutate rule is applied along this path, a renamed copy of rewrite rule (1) or (2) is chosen; the path ends with the unifying substitution S = { y ↦ nil, x ↦ a.nil }. In fact, app(x,app(y,x)) {y↦nil, x↦ a.nil } = app(a.nil,app(nil,a.nil)) ≡ app(a.nil,a.nil) ≡ a.app(nil,a.nil) ≡ a.a.nil solves the given problem. A second successful computation path, obtainable by choosing "mutate(1), mutate(2), mutate(2), mutate(1)", leads to the substitution S = { y ↦ a.a.nil, x ↦ nil }; it is not shown here. No other path leads to a success. === Narrowing === If R is a convergent term rewriting system for E, an approach alternative to the previous section consists in successive application of "narrowing steps"; this will eventually enumerate all solutions of a given equation. A narrowing step consists in choosing a nonvariable subterm of the current term, syntactically unifying it with the left hand side of a rule from R, and replacing the instantiated rule's right hand side into the instantiated term. Formally, if l → r is a renamed copy of a rewrite rule from R, having no variables in common with a term s, and the subterm s|p is not a variable and is unifiable with l via the mgu σ, then s can be narrowed to the term t = sσ[rσ]p, i.e. to the term sσ, with the subterm at p replaced by rσ. The situation that s can be narrowed to t is commonly denoted as s ↝ t. Intuitively, a sequence of narrowing steps t1 ↝ t2 ↝ ... ↝ tn can be thought of as a sequence of rewrite steps t1 → t2 → ... → tn, but with the initial term t1 being further and further instantiated, as necessary to make each of the used rules applicable. The above example paramodulation computation corresponds to a narrowing sequence in which the starting term is instantiated step by step; the last term of that sequence, v2.v2.nil, can be syntactically unified with the original right hand side term a.a.nil. The narrowing lemma ensures that whenever an instance of a term s can be rewritten to a term t by a convergent term rewriting system, then s and t can be narrowed and rewritten to a term s′ and t′, respectively, such that t′ is an instance of s′. Formally: whenever sσ →∗ t holds for some substitution σ, then there exist terms s′, t′ such that s ↝∗ s′ and t →∗ t′ and s′ τ = t′ for some substitution τ. == Higher-order unification == Many applications require one to consider the unification of typed lambda-terms instead of first-order terms. Such unification is often called higher-order unification. Higher-order unification is undecidable, and such unification problems do not have most general unifiers. For example, the unification problem { f(a,b,a) ≐ d(b,a,c) }, where the only variable is f, has the solutions {f ↦ λx.λy.λz. d(y,x,c) }, {f ↦ λx.λy.λz. d(y,z,c) }, {f ↦ λx.λy.λz. d(y,a,c) }, {f ↦ λx.λy.λz. d(b,x,c) }, {f ↦ λx.λy.λz. d(b,z,c) } and {f ↦ λx.λy.λz. d(b,a,c) }. A well-studied branch of higher-order unification is the problem of unifying simply typed lambda terms modulo the equality determined by αβη conversions. Gérard Huet gave a semi-decidable (pre-)unification algorithm that allows a systematic search of the space of unifiers (generalizing the unification algorithm of Martelli–Montanari with rules for terms containing higher-order variables) and that seems to work sufficiently well in practice. Huet and Gilles Dowek have written articles surveying this topic. Several subsets of higher-order unification are well-behaved, in that they are decidable and have a most-general unifier for solvable problems.
One such subset is the previously described first-order terms. Higher-order pattern unification, due to Dale Miller, is another such subset. The higher-order logic programming languages λProlog and Twelf have switched from full higher-order unification to implementing only the pattern fragment; surprisingly, pattern unification is sufficient for almost all programs, if each non-pattern unification problem is suspended until a subsequent substitution puts the unification into the pattern fragment. A superset of pattern unification called functions-as-constructors unification is also well-behaved. The Zipperposition theorem prover has an algorithm integrating these well-behaved subsets into a full higher-order unification algorithm. In computational linguistics, one of the most influential theories of elliptical construction is that ellipses are represented by free variables whose values are then determined using Higher-Order Unification. For instance, the semantic representation of "Jon likes Mary and Peter does too" is like(j, m) ∧ R(p) and the value of R (the semantic representation of the ellipsis) is determined by the equation like(j, m) = R(j). The process of solving such equations is called Higher-Order Unification. Wayne Snyder gave a generalization of both higher-order unification and E-unification, i.e. an algorithm to unify lambda-terms modulo an equational theory. == See also ==
Rewriting
Admissible rule
Explicit substitution in lambda calculus
Mathematical equation solving
Dis-unification: solving inequations between symbolic expressions
Anti-unification: computing a least general generalization (lgg) of two terms, dual to computing a most general instance (mgu)
Subsumption lattice, a lattice having unification as meet and anti-unification as join
Ontology alignment (uses unification with semantic equivalence)
== Notes == == References == == Further reading ==
Franz Baader and Wayne Snyder (2001). "Unification Theory". In John Alan Robinson and Andrei Voronkov, editors, Handbook of Automated Reasoning, volume I, pages 447–533. Elsevier Science Publishers.
Gilles Dowek (2001). "Higher-order Unification and Matching". In Handbook of Automated Reasoning.
Franz Baader and Tobias Nipkow (1998). Term Rewriting and All That. Cambridge University Press.
Franz Baader and Jörg H. Siekmann (1993). "Unification Theory". In Handbook of Logic in Artificial Intelligence and Logic Programming.
Jean-Pierre Jouannaud and Claude Kirchner (1991). "Solving Equations in Abstract Algebras: A Rule-Based Survey of Unification". In Computational Logic: Essays in Honor of Alan Robinson.
Nachum Dershowitz and Jean-Pierre Jouannaud, Rewrite Systems, in: Jan van Leeuwen (ed.), Handbook of Theoretical Computer Science, volume B: Formal Models and Semantics, Elsevier, 1990, pp. 243–320.
Jörg H. Siekmann (1990). "Unification Theory". In Claude Kirchner (editor) Unification. Academic Press.
Kevin Knight (Mar 1989). "Unification: A Multidisciplinary Survey" (PDF). ACM Computing Surveys. 21 (1): 93–124. CiteSeerX 10.1.1.64.8967. doi:10.1145/62029.62030. S2CID 14619034.
Gérard Huet and Derek C. Oppen (1980). "Equations and Rewrite Rules: A Survey". Technical report. Stanford University.
Raulefs, Peter; Siekmann, Jörg; Szabó, P.; Unvericht, E. (1979). "A short survey on the state of the art in matching and unification problems". ACM SIGSAM Bulletin. 13 (2): 14–20. doi:10.1145/1089208.1089210. S2CID 17033087.
Claude Kirchner and Hélène Kirchner. Rewriting, Solving, Proving.
In preparation.
Wikipedia/Unification_(computer_science)
In mathematics, the Lambert W function, also called the omega function or product logarithm, is a multivalued function, namely the branches of the converse relation of the function f(w) = wew, where w is any complex number and ew is the exponential function. The function is named after Johann Lambert, who considered a related problem in 1758. Building on Lambert's work, Leonhard Euler described the W function per se in 1783. For each integer k there is one branch, denoted by Wk(z), which is a complex-valued function of one complex argument. W0 is known as the principal branch. These functions have the following property: if z and w are any complex numbers, then w e w = z {\displaystyle we^{w}=z} holds if and only if w = W k ( z ) for some integer k . {\displaystyle w=W_{k}(z)\ \ {\text{ for some integer }}k.} When dealing with real numbers only, the two branches W0 and W−1 suffice: for real numbers x and y the equation y e y = x {\displaystyle ye^{y}=x} can be solved for y only if x ≥ −⁠1/e⁠; the solution is y = W0(x) if x ≥ 0, while for −⁠1/e⁠ ≤ x < 0 there are the two values y = W0(x) and y = W−1(x). The Lambert W function's branches cannot be expressed in terms of elementary functions. It is useful in combinatorics, for instance, in the enumeration of trees. It can be used to solve various equations involving exponentials (e.g. the maxima of the Planck, Bose–Einstein, and Fermi–Dirac distributions) and also occurs in the solution of delay differential equations, such as y′(t) = a y(t − 1). In biochemistry, and in particular enzyme kinetics, a closed-form solution for the time-course kinetics analysis of Michaelis–Menten kinetics is described in terms of the Lambert W function. == Terminology == The notation convention chosen here (with W0 and W−1) follows the canonical reference on the Lambert W function by Corless, Gonnet, Hare, Jeffrey and Knuth. The name "product logarithm" can be understood as follows: since the inverse function of f(w) = ew is termed the logarithm, it makes sense to call the inverse "function" of the product wew the "product logarithm". (Technical note: like the complex logarithm, it is multivalued and thus W is described as a converse relation rather than inverse function.) It is related to the omega constant, which is equal to W0(1). == History == Lambert first considered the related Lambert's Transcendental Equation in 1758, which led to an article by Leonhard Euler in 1783 that discussed the special case of wew. The equation Lambert considered was x = x m + q . {\displaystyle x=x^{m}+q.} Euler transformed this equation into the form x a − x b = ( a − b ) c x a + b . {\displaystyle x^{a}-x^{b}=(a-b)cx^{a+b}.} Both authors derived a series solution for their equations. Once Euler had solved this equation, he considered the case ⁠ a = b {\displaystyle a=b} ⁠. Taking limits, he derived the equation ln ⁡ x = c x a . {\displaystyle \ln x=cx^{a}.} He then put ⁠ a = 1 {\displaystyle a=1} ⁠ and obtained a convergent series solution for the resulting equation, expressing ⁠ x {\displaystyle x} ⁠ in terms of ⁠ c {\displaystyle c} ⁠. After taking derivatives with respect to ⁠ x {\displaystyle x} ⁠ and some manipulation, the standard form of the Lambert function is obtained. In 1993, it was reported that the Lambert ⁠ W {\displaystyle W} ⁠ function provides an exact solution to the quantum-mechanical double-well Dirac delta function model for equal charges—a fundamental problem in physics.
Prompted by this, Rob Corless and developers of the Maple computer algebra system realized that "the Lambert W function has been widely used in many fields, but because of differing notation and the absence of a standard name, awareness of the function was not as high as it should have been." Another example where this function is found is in Michaelis–Menten kinetics. Although it was widely believed that the Lambert ⁠ W {\displaystyle W} ⁠ function cannot be expressed in terms of elementary (Liouvillian) functions, the first published proof did not appear until 2008. == Elementary properties, branches and range == There are countably many branches of the W function, denoted by Wk(z), for integer k; W0(z) being the main (or principal) branch. W0(z) is defined for all complex numbers z, while Wk(z) with k ≠ 0 is defined for all non-zero z, with W0(0) = 0 and limz→0 Wk(z) = −∞ for all k ≠ 0. The branch point for the principal branch is at z = −⁠1/e⁠, with a branch cut that extends to −∞ along the negative real axis. This branch cut separates the principal branch from the two branches W−1 and W1. In all branches Wk with k ≠ 0, there is a branch point at z = 0 and a branch cut along the entire negative real axis. The functions Wk(z), k ∈ Z, are all injective and their ranges are disjoint. The range of the entire multivalued function W is the complex plane. The image of the real axis is the union of the real axis and the quadratrix of Hippias, the parametric curve w = −t cot t + it. === Inverse === The ranges of the individual branches delineate the regions in the complex plane where the simple inverse relationship ⁠ W ( n , z e z ) = z {\displaystyle W(n,ze^{z})=z} ⁠ is true. ⁠ f = z e z {\displaystyle f=ze^{z}} ⁠ implies that there exists an ⁠ n {\displaystyle n} ⁠ such that ⁠ z = W ( n , f ) = W ( n , z e z ) {\displaystyle z=W(n,f)=W(n,ze^{z})} ⁠, where ⁠ n {\displaystyle n} ⁠ depends upon the value of ⁠ z {\displaystyle z} ⁠. The value of the integer ⁠ n {\displaystyle n} ⁠ changes abruptly when ⁠ z e z {\displaystyle ze^{z}} ⁠ is at the branch cut of ⁠ W ( n , z e z ) {\displaystyle W(n,ze^{z})} ⁠, which means that ⁠ z e z {\displaystyle ze^{z}} ⁠ ≤ 0, except for ⁠ n = 0 {\displaystyle n=0} ⁠ where it is ⁠ z e z {\displaystyle ze^{z}} ⁠ ≤ −1/⁠ e {\displaystyle e} ⁠. Defining ⁠ z = x + i y {\displaystyle z=x+iy} ⁠, where ⁠ x {\displaystyle x} ⁠ and ⁠ y {\displaystyle y} ⁠ are real, and expressing ⁠ e z {\displaystyle e^{z}} ⁠ in polar coordinates, it is seen that z e z = ( x + i y ) e x ( cos ⁡ y + i sin ⁡ y ) = e x ( x cos ⁡ y − y sin ⁡ y ) + i e x ( x sin ⁡ y + y cos ⁡ y ) {\displaystyle {\begin{aligned}ze^{z}&=(x+iy)e^{x}(\cos y+i\sin y)\\&=e^{x}(x\cos y-y\sin y)+ie^{x}(x\sin y+y\cos y)\\\end{aligned}}} For n ≠ 0 {\displaystyle n\neq 0} , the branch cut for ⁠ W ( n , z e z ) {\displaystyle W(n,ze^{z})} ⁠ is the non-positive real axis, so that x sin ⁡ y + y cos ⁡ y = 0 ⇒ x = − y / tan ⁡ ( y ) , {\displaystyle x\sin y+y\cos y=0\Rightarrow x=-y/\tan(y),} and ( x cos ⁡ y − y sin ⁡ y ) e x ≤ 0. {\displaystyle (x\cos y-y\sin y)e^{x}\leq 0.} For n = 0 {\displaystyle n=0} , the branch cut for ⁠ W ( n , z e z ) {\displaystyle W(n,ze^{z})} ⁠ is the real axis with − ∞ < z ≤ − 1 / e {\displaystyle -\infty <z\leq -1/e} , so that the inequality becomes ( x cos ⁡ y − y sin ⁡ y ) e x ≤ − 1 / e . 
{\displaystyle (x\cos y-y\sin y)e^{x}\leq -1/e.} Inside the regions bounded by the above, there are no discontinuous changes in ⁠ W ( n , z e z ) {\displaystyle W(n,ze^{z})} ⁠, and those regions specify where the ⁠ W {\displaystyle W} ⁠ function is simply invertible, i.e. ⁠ W ( n , z e z ) = z {\displaystyle W(n,ze^{z})=z} ⁠. == Calculus == === Derivative === By implicit differentiation, one can show that all branches of W satisfy the differential equation z ( 1 + W ) d W d z = W for z ≠ − 1 e . {\displaystyle z(1+W){\frac {dW}{dz}}=W\quad {\text{for }}z\neq -{\frac {1}{e}}.} (W is not differentiable for z = −⁠1/e⁠.) As a consequence, one gets the following formula for the derivative of W: d W d z = W ( z ) z ( 1 + W ( z ) ) for z ∉ { 0 , − 1 e } . {\displaystyle {\frac {dW}{dz}}={\frac {W(z)}{z(1+W(z))}}\quad {\text{for }}z\not \in \left\{0,-{\frac {1}{e}}\right\}.} Using the identity eW(z) = ⁠z/W(z)⁠ gives the following equivalent formula: d W d z = 1 z + e W ( z ) for z ≠ − 1 e . {\displaystyle {\frac {dW}{dz}}={\frac {1}{z+e^{W(z)}}}\quad {\text{for }}z\neq -{\frac {1}{e}}.} At the origin we have W 0 ′ ( 0 ) = 1. {\displaystyle W'_{0}(0)=1.} The n-th derivative of W is of the form: d n W d z n = P n ( W ( z ) ) ( z + e W ( z ) ) n ( W ( z ) + 1 ) n − 1 for n > 0 , z ≠ − 1 e , {\displaystyle {\frac {d^{n}W}{dz^{n}}}={\frac {P_{n}(W(z))}{(z+e^{W(z)})^{n}(W(z)+1)^{n-1}}}\quad {\text{for }}n>0,\,z\neq -{\frac {1}{e}},} where Pn is a polynomial function with coefficients defined in A042977. z is a root of Pn if and only if zez is a root of the n-th derivative of W. Taking the derivative of the n-th derivative of W yields: d n + 1 W d z n + 1 = ( W ( z ) + 1 ) P n ′ ( W ( z ) ) + ( 1 − 3 n − n W ( z ) ) P n ( W ( z ) ) ( z + e W ( z ) ) n + 1 ( W ( z ) + 1 ) n for n > 0 , z ≠ − 1 e . {\displaystyle {\frac {d^{n+1}W}{dz^{n+1}}}={\frac {(W(z)+1)P_{n}'(W(z))+(1-3n-nW(z))P_{n}(W(z))}{(z+e^{W(z)})^{n+1}(W(z)+1)^{n}}}\quad {\text{for }}n>0,\,z\neq -{\frac {1}{e}}.} This recurrence allows the formula for the n-th derivative to be proved by induction on n. === Integral === The function W(x), and many other expressions involving W(x), can be integrated using the substitution w = W(x), i.e. x = wew: ∫ W ( x ) d x = x W ( x ) − x + e W ( x ) + C = x ( W ( x ) − 1 + 1 W ( x ) ) + C . {\displaystyle {\begin{aligned}\int W(x)\,dx&=xW(x)-x+e^{W(x)}+C\\&=x\left(W(x)-1+{\frac {1}{W(x)}}\right)+C.\end{aligned}}} (The last equation is more common in the literature but is undefined at x = 0). One consequence of this (using the fact that W0(e) = 1) is the identity ∫ 0 e W 0 ( x ) d x = e − 1. {\displaystyle \int _{0}^{e}W_{0}(x)\,dx=e-1.} == Asymptotic expansions == The Taylor series of W0 around 0 can be found using the Lagrange inversion theorem and is given by W 0 ( x ) = ∑ n = 1 ∞ ( − n ) n − 1 n ! x n = x − x 2 + 3 2 x 3 − 16 6 x 4 + 125 24 x 5 − ⋯ . {\displaystyle W_{0}(x)=\sum _{n=1}^{\infty }{\frac {(-n)^{n-1}}{n!}}x^{n}=x-x^{2}+{\tfrac {3}{2}}x^{3}-{\tfrac {16}{6}}x^{4}+{\tfrac {125}{24}}x^{5}-\cdots .} The radius of convergence is ⁠1/e⁠, as may be seen by the ratio test. The function defined by this series can be extended to a holomorphic function defined on all complex numbers with a branch cut along the interval (−∞, −⁠1/e⁠]; this holomorphic function defines the principal branch of the Lambert W function.
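Since the derivative formula above gives dW/dz explicitly, W0 can be evaluated numerically by Newton's method applied to f(w) = wew − x, with the series just given suggesting a starting guess for small |x|. The following is a minimal pure-Python sketch; the function name lambert_w0, the starting guesses, and the tolerance are ad-hoc choices of this illustration, not a standard algorithm:

# Newton iteration for W0(x), solving w*exp(w) = x on the principal branch,
# using d/dw (w*exp(w)) = (1 + w)*exp(w).  Illustrative sketch only.
import math

def lambert_w0(x, tol=1e-12, max_iter=100):
    if x < -1.0 / math.e:
        raise ValueError("W0(x) is real only for x >= -1/e")
    # ad-hoc starting guess: series-like near 0, log asymptotics for larger x
    if abs(x) < 0.5:
        w = x                                      # W0(x) ~ x - x^2 + ... near 0
    elif x > 2.0:
        w = math.log(x) - math.log(math.log(x))    # leading asymptotic terms
    else:
        w = 0.5
    for _ in range(max_iter):
        ew = math.exp(w)
        step = (w * ew - x) / ((1.0 + w) * ew)
        w -= step
        if abs(step) < tol:
            break
    return w

print(lambert_w0(1.0))                 # omega constant, ~0.5671432904097838
print(lambert_w0(2.0 * math.log(2)))   # ln 2, since (ln 2)*e^(ln 2) = 2 ln 2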
For large values of x, W0 is asymptotic to W 0 ( x ) = L 1 − L 2 + L 2 L 1 + L 2 ( − 2 + L 2 ) 2 L 1 2 + L 2 ( 6 − 9 L 2 + 2 L 2 2 ) 6 L 1 3 + L 2 ( − 12 + 36 L 2 − 22 L 2 2 + 3 L 2 3 ) 12 L 1 4 + ⋯ = L 1 − L 2 + ∑ l = 0 ∞ ∑ m = 1 ∞ ( − 1 ) l [ l + m l + 1 ] m ! L 1 − l − m L 2 m , {\displaystyle {\begin{aligned}W_{0}(x)&=L_{1}-L_{2}+{\frac {L_{2}}{L_{1}}}+{\frac {L_{2}\left(-2+L_{2}\right)}{2L_{1}^{2}}}+{\frac {L_{2}\left(6-9L_{2}+2L_{2}^{2}\right)}{6L_{1}^{3}}}+{\frac {L_{2}\left(-12+36L_{2}-22L_{2}^{2}+3L_{2}^{3}\right)}{12L_{1}^{4}}}+\cdots \\[5pt]&=L_{1}-L_{2}+\sum _{l=0}^{\infty }\sum _{m=1}^{\infty }{\frac {(-1)^{l}\left[{\begin{smallmatrix}l+m\\l+1\end{smallmatrix}}\right]}{m!}}L_{1}^{-l-m}L_{2}^{m},\end{aligned}}} where L1 = ln x, L2 = ln ln x, and ⁠ [ l + m l + 1 ] {\displaystyle \left[{\begin{smallmatrix}l+m\\l+1\end{smallmatrix}}\right]} ⁠ is a non-negative Stirling number of the first kind. Keeping only the first two terms of the expansion, W 0 ( x ) = ln ⁡ x − ln ⁡ ln ⁡ x + o ( 1 ) . {\displaystyle W_{0}(x)=\ln x-\ln \ln x+o(1).} The other real branch, W−1, defined in the interval [−⁠1/e⁠, 0), has an approximation of the same form as x approaches zero, with in this case L1 = ln(−x) and L2 = ln(−ln(−x)). === Integer and complex powers === Integer powers of W0 also admit simple Taylor (or Laurent) series expansions at zero: W 0 ( x ) 2 = ∑ n = 2 ∞ − 2 ( − n ) n − 3 ( n − 2 ) ! x n = x 2 − 2 x 3 + 4 x 4 − 25 3 x 5 + 18 x 6 − ⋯ . {\displaystyle W_{0}(x)^{2}=\sum _{n=2}^{\infty }{\frac {-2\left(-n\right)^{n-3}}{(n-2)!}}x^{n}=x^{2}-2x^{3}+4x^{4}-{\tfrac {25}{3}}x^{5}+18x^{6}-\cdots .} More generally, for r ∈ Z, the Lagrange inversion formula gives W 0 ( x ) r = ∑ n = r ∞ − r ( − n ) n − r − 1 ( n − r ) ! x n , {\displaystyle W_{0}(x)^{r}=\sum _{n=r}^{\infty }{\frac {-r\left(-n\right)^{n-r-1}}{(n-r)!}}x^{n},} which is, in general, a Laurent series of order r. Equivalently, the latter can be written in the form of a Taylor expansion of powers of W0(x) / x: ( W 0 ( x ) x ) r = e − r W 0 ( x ) = ∑ n = 0 ∞ r ( n + r ) n − 1 n ! ( − x ) n , {\displaystyle \left({\frac {W_{0}(x)}{x}}\right)^{r}=e^{-rW_{0}(x)}=\sum _{n=0}^{\infty }{\frac {r\left(n+r\right)^{n-1}}{n!}}\left(-x\right)^{n},} which holds for any r ∈ C and |x| < ⁠1/e⁠. == Bounds and inequalities == A number of non-asymptotic bounds are known for the Lambert function. Hoorfar and Hassani showed that the following bound holds for x ≥ e: ln ⁡ x − ln ⁡ ln ⁡ x + ln ⁡ ln ⁡ x 2 ln ⁡ x ≤ W 0 ( x ) ≤ ln ⁡ x − ln ⁡ ln ⁡ x + e e − 1 ln ⁡ ln ⁡ x ln ⁡ x . {\displaystyle \ln x-\ln \ln x+{\frac {\ln \ln x}{2\ln x}}\leq W_{0}(x)\leq \ln x-\ln \ln x+{\frac {e}{e-1}}{\frac {\ln \ln x}{\ln x}}.} They also showed the general bound W 0 ( x ) ≤ ln ⁡ ( x + y 1 + ln ⁡ ( y ) ) , {\displaystyle W_{0}(x)\leq \ln \left({\frac {x+y}{1+\ln(y)}}\right),} for every y > 1 / e {\displaystyle y>1/e} and x ≥ − 1 / e {\displaystyle x\geq -1/e} , with equality only for x = y ln ⁡ ( y ) {\displaystyle x=y\ln(y)} . The bound allows many other bounds to be made, such as taking y = x + 1 {\displaystyle y=x+1} which gives the bound W 0 ( x ) ≤ ln ⁡ ( 2 x + 1 1 + ln ⁡ ( x + 1 ) ) . {\displaystyle W_{0}(x)\leq \ln \left({\frac {2x+1}{1+\ln(x+1)}}\right).} In 2013 it was proven that the branch W−1 can be bounded as follows: − 1 − 2 u − u < W − 1 ( − e − u − 1 ) < − 1 − 2 u − 2 3 u for u > 0. {\displaystyle -1-{\sqrt {2u}}-u<W_{-1}\left(-e^{-u-1}\right)<-1-{\sqrt {2u}}-{\tfrac {2}{3}}u\quad {\text{for }}u>0.} Roberto Iacono and John P. 
Boyd enhanced the bounds as follows: ln ⁡ ( x ln ⁡ x ) − ln ⁡ ( x ln ⁡ x ) 1 + ln ⁡ ( x ln ⁡ x ) ln ⁡ ( 1 − ln ⁡ ln ⁡ x ln ⁡ x ) ≤ W 0 ( x ) ≤ ln ⁡ ( x ln ⁡ x ) − ln ⁡ ( ( 1 − ln ⁡ ln ⁡ x ln ⁡ x ) ( 1 − ln ⁡ ( 1 − ln ⁡ ln ⁡ x ln ⁡ x ) 1 + ln ⁡ ( x ln ⁡ x ) ) ) . {\displaystyle \ln \left({\frac {x}{\ln x}}\right)-{\frac {\ln \left({\frac {x}{\ln x}}\right)}{1+\ln \left({\frac {x}{\ln x}}\right)}}\ln \left(1-{\frac {\ln \ln x}{\ln x}}\right)\leq W_{0}(x)\leq \ln \left({\frac {x}{\ln x}}\right)-\ln \left(\left(1-{\frac {\ln \ln x}{\ln x}}\right)\left(1-{\frac {\ln \left(1-{\frac {\ln \ln x}{\ln x}}\right)}{1+\ln \left({\frac {x}{\ln x}}\right)}}\right)\right).} == Identities == A few identities follow from the definition: W 0 ( x e x ) = x for x ≥ − 1 , W − 1 ( x e x ) = x for x ≤ − 1. {\displaystyle {\begin{aligned}W_{0}(xe^{x})&=x&{\text{for }}x&\geq -1,\\W_{-1}(xe^{x})&=x&{\text{for }}x&\leq -1.\end{aligned}}} Note that, since f(x) = xex is not injective, it does not always hold that W(f(x)) = x, much like with the inverse trigonometric functions. For fixed x < 0 and x ≠ −1, the equation xex = yey has two real solutions in y, one of which is of course y = x. Then, for i = 0 and x < −1, as well as for i = −1 and x ∈ (−1, 0), y = Wi(xex) is the other solution. Some other identities: W ( x ) e W ( x ) = x , therefore: e W ( x ) = x W ( x ) , e − W ( x ) = W ( x ) x , e n W ( x ) = ( x W ( x ) ) n . {\displaystyle {\begin{aligned}&W(x)e^{W(x)}=x,\quad {\text{therefore:}}\\[5pt]&e^{W(x)}={\frac {x}{W(x)}},\qquad e^{-W(x)}={\frac {W(x)}{x}},\qquad e^{nW(x)}=\left({\frac {x}{W(x)}}\right)^{n}.\end{aligned}}} ln ⁡ W 0 ( x ) = ln ⁡ x − W 0 ( x ) for x > 0. {\displaystyle \ln W_{0}(x)=\ln x-W_{0}(x)\quad {\text{for }}x>0.} W 0 ( x ln ⁡ x ) = ln ⁡ x and e W 0 ( x ln ⁡ x ) = x for 1 e ≤ x . {\displaystyle W_{0}\left(x\ln x\right)=\ln x\quad {\text{and}}\quad e^{W_{0}\left(x\ln x\right)}=x\quad {\text{for }}{\frac {1}{e}}\leq x.} W − 1 ( x ln ⁡ x ) = ln ⁡ x and e W − 1 ( x ln ⁡ x ) = x for 0 < x ≤ 1 e . {\displaystyle W_{-1}\left(x\ln x\right)=\ln x\quad {\text{and}}\quad e^{W_{-1}\left(x\ln x\right)}=x\quad {\text{for }}0<x\leq {\frac {1}{e}}.} W ( x ) = ln ⁡ x W ( x ) for x ≥ − 1 e , W ( n x n W ( x ) n − 1 ) = n W ( x ) for n , x > 0 {\displaystyle {\begin{aligned}&W(x)=\ln {\frac {x}{W(x)}}&&{\text{for }}x\geq -{\frac {1}{e}},\\[5pt]&W\left({\frac {nx^{n}}{W\left(x\right)^{n-1}}}\right)=nW(x)&&{\text{for }}n,x>0\end{aligned}}} (which can be extended to other n and x if the correct branch is chosen). W ( x ) + W ( y ) = W ( x y ( 1 W ( x ) + 1 W ( y ) ) ) for x , y > 0. {\displaystyle W(x)+W(y)=W\left(xy\left({\frac {1}{W(x)}}+{\frac {1}{W(y)}}\right)\right)\quad {\text{for }}x,y>0.} Substituting −ln x in the definition: W 0 ( − ln ⁡ x x ) = − ln ⁡ x for 0 < x ≤ e , W − 1 ( − ln ⁡ x x ) = − ln ⁡ x for x > e . {\displaystyle {\begin{aligned}W_{0}\left(-{\frac {\ln x}{x}}\right)&=-\ln x&{\text{for }}0&<x\leq e,\\[5pt]W_{-1}\left(-{\frac {\ln x}{x}}\right)&=-\ln x&{\text{for }}x&>e.\end{aligned}}} With Euler's iterated exponential h(x): h ( x ) = e − W ( − ln ⁡ x ) = W ( − ln ⁡ x ) − ln ⁡ x for x ≠ 1. 
{\displaystyle {\begin{aligned}h(x)&=e^{-W(-\ln x)}\\&={\frac {W(-\ln x)}{-\ln x}}\quad {\text{for }}x\neq 1.\end{aligned}}} == Special values == The following are special values of the principal branch: W 0 ( − π 2 ) = i π 2 {\displaystyle W_{0}\left(-{\frac {\pi }{2}}\right)={\frac {i\pi }{2}}} W 0 ( − 1 e ) = − 1 {\displaystyle W_{0}\left(-{\frac {1}{e}}\right)=-1} W 0 ( 2 ln ⁡ 2 ) = ln ⁡ 2 {\displaystyle W_{0}\left(2\ln 2\right)=\ln 2} W 0 ( x ln ⁡ x ) = ln ⁡ x ( x ⩾ 1 e ≈ 0.36788 ) {\displaystyle W_{0}\left(x\ln x\right)=\ln x\quad \left(x\geqslant {\tfrac {1}{e}}\approx 0.36788\right)} W 0 ( x x + 1 ln ⁡ x ) = x ln ⁡ x ( x > 0 ) {\displaystyle W_{0}\left(x^{x+1}\ln x\right)=x\ln x\quad \left(x>0\right)} W 0 ( 0 ) = 0 {\displaystyle W_{0}(0)=0} W 0 ( 1 ) = Ω = ( ∫ − ∞ ∞ d t ( e t − t ) 2 + π 2 ) − 1 − 1 ≈ 0.56714329 {\displaystyle W_{0}(1)=\Omega =\left(\int _{-\infty }^{\infty }{\frac {dt}{\left(e^{t}-t\right)^{2}+\pi ^{2}}}\right)^{\!-1}\!\!\!\!-\,1\approx 0.56714329\quad } (the omega constant) W 0 ( 1 ) = e − W 0 ( 1 ) = ln ⁡ 1 W 0 ( 1 ) = − ln ⁡ W 0 ( 1 ) {\displaystyle W_{0}(1)=e^{-W_{0}(1)}=\ln {\frac {1}{W_{0}(1)}}=-\ln W_{0}(1)} W 0 ( e ) = 1 {\displaystyle W_{0}(e)=1} W 0 ( e 1 + e ) = e {\displaystyle W_{0}\left(e^{1+e}\right)=e} W 0 ( e 2 ) = 1 2 {\displaystyle W_{0}\left({\frac {\sqrt {e}}{2}}\right)={\frac {1}{2}}} W 0 ( e n n ) = 1 n {\displaystyle W_{0}\left({\frac {\sqrt[{n}]{e}}{n}}\right)={\frac {1}{n}}} W 0 ( − 1 ) ≈ − 0.31813 + 1.33723 i {\displaystyle W_{0}(-1)\approx -0.31813+1.33723i} Special values of the branch W−1: W − 1 ( − ln ⁡ 2 2 ) = − ln ⁡ 4 {\displaystyle W_{-1}\left(-{\frac {\ln 2}{2}}\right)=-\ln 4} == Representations == The principal branch of the Lambert function can be represented by a proper integral, due to Poisson: − π 2 W 0 ( − x ) = ∫ 0 π sin ⁡ ( 3 2 t ) − x e cos ⁡ t sin ⁡ ( 5 2 t − sin ⁡ t ) 1 − 2 x e cos ⁡ t cos ⁡ ( t − sin ⁡ t ) + x 2 e 2 cos ⁡ t sin ⁡ ( 1 2 t ) d t for | x | < 1 e . {\displaystyle -{\frac {\pi }{2}}W_{0}(-x)=\int _{0}^{\pi }{\frac {\sin \left({\tfrac {3}{2}}t\right)-xe^{\cos t}\sin \left({\tfrac {5}{2}}t-\sin t\right)}{1-2xe^{\cos t}\cos(t-\sin t)+x^{2}e^{2\cos t}}}\sin \left({\tfrac {1}{2}}t\right)\,dt\quad {\text{for }}|x|<{\frac {1}{e}}.} Another representation of the principal branch was found by Kalugin–Jeffrey–Corless: W 0 ( x ) = 1 π ∫ 0 π ln ⁡ ( 1 + x sin ⁡ t t e t cot ⁡ t ) d t . {\displaystyle W_{0}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\ln \left(1+x{\frac {\sin t}{t}}e^{t\cot t}\right)dt.} The following continued fraction representation also holds for the principal branch: W 0 ( x ) = x 1 + x 1 + x 2 + 5 x 3 + 17 x 10 + 133 x 17 + 1927 x 190 + 13582711 x 94423 + ⋱ . {\displaystyle W_{0}(x)={\cfrac {x}{1+{\cfrac {x}{1+{\cfrac {x}{2+{\cfrac {5x}{3+{\cfrac {17x}{10+{\cfrac {133x}{17+{\cfrac {1927x}{190+{\cfrac {13582711x}{94423+\ddots }}}}}}}}}}}}}}}}.} Also, if |W0(x)| < 1: W 0 ( x ) = x exp ⁡ x exp ⁡ x ⋱ . {\displaystyle W_{0}(x)={\cfrac {x}{\exp {\cfrac {x}{\exp {\cfrac {x}{\ddots }}}}}}.} In turn, if |W0(x)| > 1, then W 0 ( x ) = ln ⁡ x ln ⁡ x ln ⁡ x ⋱ . 
{\displaystyle W_{0}(x)=\ln {\cfrac {x}{\ln {\cfrac {x}{\ln {\cfrac {x}{\ddots }}}}}}.} == Other formulas == === Definite integrals === There are several useful definite integral formulas involving the principal branch of the W function, including the following: ∫ 0 π W 0 ( 2 cot 2 ⁡ x ) sec 2 ⁡ x d x = 4 π , ∫ 0 ∞ W 0 ( x ) x x d x = 2 2 π , ∫ 0 ∞ W 0 ( 1 x 2 ) d x = 2 π , and more generally ∫ 0 ∞ W 0 ( 1 x N ) d x = N 1 − 1 N Γ ( 1 − 1 N ) for N > 0 {\displaystyle {\begin{aligned}&\int _{0}^{\pi }W_{0}\left(2\cot ^{2}x\right)\sec ^{2}x\,dx=4{\sqrt {\pi }},\\[5pt]&\int _{0}^{\infty }{\frac {W_{0}(x)}{x{\sqrt {x}}}}\,dx=2{\sqrt {2\pi }},\\[5pt]&\int _{0}^{\infty }W_{0}\left({\frac {1}{x^{2}}}\right)\,dx={\sqrt {2\pi }},{\text{ and more generally}}\\[5pt]&\int _{0}^{\infty }W_{0}\left({\frac {1}{x^{N}}}\right)\,dx=N^{1-{\frac {1}{N}}}\Gamma \left(1-{\frac {1}{N}}\right)\qquad {\text{for }}N>0\end{aligned}}} where Γ {\displaystyle \Gamma } denotes the gamma function. The first identity can be found by writing the Gaussian integral in polar coordinates. The second identity can be derived by making the substitution u = W0(x), which gives x = u e u , d x d u = ( u + 1 ) e u . {\displaystyle {\begin{aligned}x&=ue^{u},\\[5pt]{\frac {dx}{du}}&=(u+1)e^{u}.\end{aligned}}} Thus ∫ 0 ∞ W 0 ( x ) x x d x = ∫ 0 ∞ u u e u u e u ( u + 1 ) e u d u = ∫ 0 ∞ u + 1 u e u d u = ∫ 0 ∞ u + 1 u 1 e u d u = ∫ 0 ∞ u 1 2 e − u 2 d u + ∫ 0 ∞ u − 1 2 e − u 2 d u = 2 ∫ 0 ∞ ( 2 w ) 1 2 e − w d w + 2 ∫ 0 ∞ ( 2 w ) − 1 2 e − w d w ( u = 2 w ) = 2 2 ∫ 0 ∞ w 1 2 e − w d w + 2 ∫ 0 ∞ w − 1 2 e − w d w = 2 2 ⋅ Γ ( 3 2 ) + 2 ⋅ Γ ( 1 2 ) = 2 2 ( 1 2 π ) + 2 ( π ) = 2 2 π . {\displaystyle {\begin{aligned}\int _{0}^{\infty }{\frac {W_{0}(x)}{x{\sqrt {x}}}}\,dx&=\int _{0}^{\infty }{\frac {u}{ue^{u}{\sqrt {ue^{u}}}}}(u+1)e^{u}\,du\\[5pt]&=\int _{0}^{\infty }{\frac {u+1}{\sqrt {ue^{u}}}}du\\[5pt]&=\int _{0}^{\infty }{\frac {u+1}{\sqrt {u}}}{\frac {1}{\sqrt {e^{u}}}}du\\[5pt]&=\int _{0}^{\infty }u^{\tfrac {1}{2}}e^{-{\frac {u}{2}}}du+\int _{0}^{\infty }u^{-{\tfrac {1}{2}}}e^{-{\frac {u}{2}}}du\\[5pt]&=2\int _{0}^{\infty }(2w)^{\tfrac {1}{2}}e^{-w}\,dw+2\int _{0}^{\infty }(2w)^{-{\tfrac {1}{2}}}e^{-w}\,dw&&\quad (u=2w)\\[5pt]&=2{\sqrt {2}}\int _{0}^{\infty }w^{\tfrac {1}{2}}e^{-w}\,dw+{\sqrt {2}}\int _{0}^{\infty }w^{-{\tfrac {1}{2}}}e^{-w}\,dw\\[5pt]&=2{\sqrt {2}}\cdot \Gamma \left({\tfrac {3}{2}}\right)+{\sqrt {2}}\cdot \Gamma \left({\tfrac {1}{2}}\right)\\[5pt]&=2{\sqrt {2}}\left({\tfrac {1}{2}}{\sqrt {\pi }}\right)+{\sqrt {2}}\left({\sqrt {\pi }}\right)\\[5pt]&=2{\sqrt {2\pi }}.\end{aligned}}} The third identity may be derived from the second by making the substitution u = x−2 and the first can also be derived from the third by the substitution z = ⁠1/√2⁠ tan x. Deriving its generalization, the fourth identity, is only slightly more involved and can be done by substituting, in turn, u = x 1 N {\displaystyle u=x^{\frac {1}{N}}} , t = W 0 ( u ) {\displaystyle t=W_{0}(u)} , and z = t N {\displaystyle z={\frac {t}{N}}} , observing that one obtains two integrals matching the definition of the gamma function, and finally using the properties of the gamma function to collect terms and simplify. 
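The definite integral identities above are easy to spot-check numerically. The following is a small sketch assuming SciPy is available; scipy.special.lambertw evaluates the branches and scipy.integrate.quad performs the quadrature:

# Numerical spot-check of the third definite integral above:
# integral of W0(1/x^2) over (0, inf) should equal sqrt(2*pi) ~ 2.5066.
# quad may warn near the weak logarithmic singularity at 0; the value
# nevertheless agrees to several digits.
import numpy as np
from scipy.integrate import quad
from scipy.special import lambertw

val, _ = quad(lambda x: lambertw(1.0 / x**2).real, 0.0, np.inf, limit=200)
print(val, np.sqrt(2.0 * np.pi))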
Except for z along the branch cut (−∞, −⁠1/e⁠] (where the integral does not converge), the principal branch of the Lambert W function can be computed by the following integral: W 0 ( z ) = z 2 π ∫ − π π ( 1 − ν cot ⁡ ν ) 2 + ν 2 z + ν csc ⁡ ( ν ) e − ν cot ⁡ ν d ν = z π ∫ 0 π ( 1 − ν cot ⁡ ν ) 2 + ν 2 z + ν csc ⁡ ( ν ) e − ν cot ⁡ ν d ν , {\displaystyle {\begin{aligned}W_{0}(z)&={\frac {z}{2\pi }}\int _{-\pi }^{\pi }{\frac {\left(1-\nu \cot \nu \right)^{2}+\nu ^{2}}{z+\nu \csc \left(\nu \right)e^{-\nu \cot \nu }}}\,d\nu \\[5pt]&={\frac {z}{\pi }}\int _{0}^{\pi }{\frac {\left(1-\nu \cot \nu \right)^{2}+\nu ^{2}}{z+\nu \csc \left(\nu \right)e^{-\nu \cot \nu }}}\,d\nu ,\end{aligned}}} where the two integral expressions are equivalent due to the symmetry of the integrand. === Indefinite integrals === ∫ W ( x ) x d x = W ( x ) 2 2 + W ( x ) + C {\displaystyle \int {\frac {W(x)}{x}}\,dx\;=\;{\frac {W(x)^{2}}{2}}+W(x)+C} ∫ W ( A e B x ) d x = W ( A e B x ) 2 2 B + W ( A e B x ) B + C {\displaystyle \int W\left(Ae^{Bx}\right)\,dx\;=\;{\frac {W\left(Ae^{Bx}\right)^{2}}{2B}}+{\frac {W\left(Ae^{Bx}\right)}{B}}+C} ∫ W ( x ) x 2 d x = Ei ⁡ ( − W ( x ) ) − e − W ( x ) + C {\displaystyle \int {\frac {W(x)}{x^{2}}}\,dx\;=\;\operatorname {Ei} \left(-W(x)\right)-e^{-W(x)}+C} == Applications == === Solving equations === The Lambert W function is used to solve equations in which the unknown quantity occurs both in the base and in the exponent, or both inside and outside of a logarithm. The strategy is to convert such an equation into one of the form zez = w and then to solve for z using the W function. For example, the equation 3 x = 2 x + 2 {\displaystyle 3^{x}=2x+2} (where x is an unknown real number) can be solved by rewriting it as ( x + 1 ) 3 − x = 1 2 ( multiply by 3 − x / 2 ) ⇔ ( − x − 1 ) 3 − x − 1 = − 1 6 ( multiply by − 1 / 3 ) ⇔ ( ln ⁡ 3 ) ( − x − 1 ) e ( ln ⁡ 3 ) ( − x − 1 ) = − ln ⁡ 3 6 ( multiply by ln ⁡ 3 ) {\displaystyle {\begin{aligned}&(x+1)\ 3^{-x}={\frac {1}{2}}&({\mbox{multiply by }}3^{-x}/2)\\\Leftrightarrow \ &(-x-1)\ 3^{-x-1}=-{\frac {1}{6}}&({\mbox{multiply by }}{-}1/3)\\\Leftrightarrow \ &(\ln 3)(-x-1)\ e^{(\ln 3)(-x-1)}=-{\frac {\ln 3}{6}}&({\mbox{multiply by }}\ln 3)\end{aligned}}} This last equation has the desired form and the solutions for real x are: ( ln ⁡ 3 ) ( − x − 1 ) = W 0 ( − ln ⁡ 3 6 ) or ( ln ⁡ 3 ) ( − x − 1 ) = W − 1 ( − ln ⁡ 3 6 ) {\displaystyle (\ln 3)(-x-1)=W_{0}\left({\frac {-\ln 3}{6}}\right)\ \ \ {\textrm {or}}\ \ \ (\ln 3)(-x-1)=W_{-1}\left({\frac {-\ln 3}{6}}\right)} and thus: x = − 1 − W 0 ( − ln ⁡ 3 6 ) ln ⁡ 3 = − 0.79011 … or x = − 1 − W − 1 ( − ln ⁡ 3 6 ) ln ⁡ 3 = 1.44456 … {\displaystyle x=-1-{\frac {W_{0}\left(-{\frac {\ln 3}{6}}\right)}{\ln 3}}=-0.79011\ldots \ \ {\textrm {or}}\ \ x=-1-{\frac {W_{-1}\left(-{\frac {\ln 3}{6}}\right)}{\ln 3}}=1.44456\ldots } Generally, the solution to x = a + b e c x {\displaystyle x=a+b\,e^{cx}} is: x = a − 1 c W ( − b c e a c ) {\displaystyle x=a-{\frac {1}{c}}W(-bc\,e^{ac})} where a, b, and c are complex constants, with b and c not equal to zero, and the W function is of any integer order. 
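The worked example above can be checked numerically. The following is a sketch assuming SciPy is available; scipy.special.lambertw(z, k) evaluates the branch Wk, and the constants a, b, c in the second part are illustrative values, not from the text:

# Both real solutions of 3^x = 2x + 2 via the Lambert W function.
import numpy as np
from scipy.special import lambertw

ln3 = np.log(3.0)
arg = -ln3 / 6.0                          # the argument -(ln 3)/6 from above

x0 = -1.0 - lambertw(arg, 0).real / ln3   # principal branch: -0.79011...
x1 = -1.0 - lambertw(arg, -1).real / ln3  # branch W_{-1}:     1.44456...
for x in (x0, x1):
    print(x, 3.0**x, 2.0*x + 2.0)         # both sides of 3^x = 2x + 2 agree

# General form x = a + b*e^(c*x), solved by x = a - W(-b*c*e^(a*c))/c:
a, b, c = 1.0, 0.5, -2.0                  # illustrative constants (assumed)
x = a - lambertw(-b * c * np.exp(a * c)).real / c
print(np.isclose(x, a + b * np.exp(c * x)))  # True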
=== Inviscid flows === Applying the unusual accelerating traveling-wave Ansatz in the form of ρ ( η ) = ρ ( x − a t 2 2 ) {\displaystyle \rho (\eta )=\rho {\big (}x-{\frac {at^{2}}{2}}{\big )}} (where ρ {\displaystyle \rho } , η {\displaystyle \eta } , a, x and t are the density, the reduced variable, the acceleration, the spatial and the temporal variables) the fluid density of the corresponding Euler equation can be given with the help of the W function. === Viscous flows === Granular and debris flow fronts and deposits, and the fronts of viscous fluids in natural events and in laboratory experiments can be described by using the Lambert–Euler omega function as follows: H ( x ) = 1 + W ( ( H ( 0 ) − 1 ) e ( H ( 0 ) − 1 ) − x L ) , {\displaystyle H(x)=1+W\left((H(0)-1)e^{(H(0)-1)-{\frac {x}{L}}}\right),} where H(x) is the debris flow height, x is the channel downstream position, and L is the unified model parameter consisting of several physical and geometrical parameters of the flow, the flow height and the hydraulic pressure gradient. In pipe flow, the Lambert W function is part of the explicit formulation of the Colebrook equation for finding the Darcy friction factor. This factor is used to determine the pressure drop through a straight run of pipe when the flow is turbulent. === Time-dependent flow in simple branch hydraulic systems === The principal branch of the Lambert W function is employed in the field of mechanical engineering, in the study of time-dependent transfer of Newtonian fluids between two reservoirs with varying free surface levels, using centrifugal pumps. The Lambert W function provided an exact solution to the flow rate of fluid in both the laminar and turbulent regimes: Q turb = Q i ζ i W 0 [ ζ i e ( ζ i + β t / b ) ] Q lam = Q i ξ i W 0 [ ξ i e ( ξ i + β t / ( b − Γ 1 ) ) ] {\displaystyle {\begin{aligned}Q_{\text{turb}}&={\frac {Q_{i}}{\zeta _{i}}}W_{0}\left[\zeta _{i}\,e^{(\zeta _{i}+\beta t/b)}\right]\\Q_{\text{lam}}&={\frac {Q_{i}}{\xi _{i}}}W_{0}\left[\xi _{i}\,e^{\left(\xi _{i}+\beta t/(b-\Gamma _{1})\right)}\right]\end{aligned}}} where Q i {\displaystyle Q_{i}} is the initial flow rate and t {\displaystyle t} is time. === Neuroimaging === The Lambert W function is employed in the field of neuroimaging for linking cerebral blood flow and oxygen consumption changes within a brain voxel, to the corresponding blood oxygenation level dependent (BOLD) signal. === Chemical engineering === The Lambert W function is employed in the field of chemical engineering for modeling the porous electrode film thickness in a glassy carbon based supercapacitor for electrochemical energy storage. The Lambert W function provides an exact solution for a gas phase thermal activation process where growth of carbon film and combustion of the same film compete with each other. === Crystal growth === In crystal growth, the negative principal branch of the Lambert W function can be used to calculate the distribution coefficient, k {\textstyle k} , and the solute concentration in the melt, C L {\textstyle C_{L}} , from the Scheil equation: k = W 0 ( Z ) ln ⁡ ( 1 − f s ) C L = C 0 ( 1 − f s ) e W 0 ( Z ) Z = C S C 0 ( 1 − f s ) ln ⁡ ( 1 − f s ) {\displaystyle {\begin{aligned}&k={\frac {W_{0}(Z)}{\ln(1-f_{s})}}\\&C_{L}={\frac {C_{0}}{(1-f_{s})}}e^{W_{0}(Z)}\\&Z={\frac {C_{S}}{C_{0}}}(1-f_{s})\ln(1-f_{s})\end{aligned}}}
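As an illustration, these Scheil relations can be evaluated with SciPy's scipy.special.lambertw; the concentration ratio and fraction solidified below are arbitrary illustrative values, and the result is cross-checked against the classical Scheil form C_L = C_0(1 − f_s)^(k−1).

import numpy as np
from scipy.special import lambertw

CS_over_C0 = 0.5   # ratio of solid concentration to initial concentration (illustrative)
fs = 0.4           # fraction solidified (illustrative)
C0 = 1.0           # initial concentration

Z = CS_over_C0 * (1.0 - fs) * np.log(1.0 - fs)   # negative, and bounded below by -1/e here
W = np.real(lambertw(Z, 0))                      # principal branch at a negative argument
k = W / np.log(1.0 - fs)                         # distribution coefficient
CL = C0 / (1.0 - fs) * np.exp(W)                 # solute concentration in the melt

print(k, CL)
print(C0 * (1.0 - fs)**(k - 1.0))                # agrees with CL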
=== Materials science === The Lambert W function is employed in the field of epitaxial film growth for the determination of the critical dislocation onset film thickness. This is the calculated thickness of an epitaxial film, where due to thermodynamic principles the film will develop crystallographic dislocations in order to minimise the elastic energy stored in the film. Prior to the application of Lambert W to this problem, the critical thickness had to be determined by solving an implicit equation; the Lambert W function turns it into an explicit equation that can be handled analytically. === Semiconductor === It was shown that a W-function describes the relation between voltage, current and resistance in a diode. === Porous media === The Lambert W function has been employed in the field of fluid flow in porous media to model the tilt of an interface separating two gravitationally segregated fluids in a homogeneous tilted porous bed of constant dip and thickness where the heavier fluid, injected at the bottom end, displaces the lighter fluid that is produced at the same rate from the top end. The principal branch of the solution corresponds to stable displacements while the −1 branch applies if the displacement is unstable with the heavier fluid running underneath the lighter fluid. === Bernoulli numbers and Todd genus === The equation (linked with the generating functions of Bernoulli numbers and Todd genus): Y = X 1 − e X {\displaystyle Y={\frac {X}{1-e^{X}}}} can be solved by means of the two real branches W0 and W−1: X ( Y ) = { W − 1 ( Y e Y ) − W 0 ( Y e Y ) = Y − W 0 ( Y e Y ) for Y < − 1 , W 0 ( Y e Y ) − W − 1 ( Y e Y ) = Y − W − 1 ( Y e Y ) for − 1 < Y < 0. {\displaystyle X(Y)={\begin{cases}W_{-1}\left(Ye^{Y}\right)-W_{0}\left(Ye^{Y}\right)=Y-W_{0}\left(Ye^{Y}\right)&{\text{for }}Y<-1,\\W_{0}\left(Ye^{Y}\right)-W_{-1}\left(Ye^{Y}\right)=Y-W_{-1}\left(Ye^{Y}\right)&{\text{for }}-1<Y<0.\end{cases}}} This application shows that the branch difference of the W function can be employed in order to solve other transcendental equations; a numerical sketch is given below. === Statistics === The centroid of a set of histograms defined with respect to the symmetrized Kullback–Leibler divergence (also called the Jeffreys divergence) has a closed form using the Lambert W function. === Pooling of tests for infectious diseases === Solving for the optimal group size to pool tests so that at least one individual is infected involves the Lambert W function. === Exact solutions of the Schrödinger equation === The Lambert W function appears in a quantum-mechanical potential, which affords the fifth – next to those of the harmonic oscillator plus centrifugal, the Coulomb plus inverse square, the Morse, and the inverse square root potential – exact solution to the stationary one-dimensional Schrödinger equation in terms of the confluent hypergeometric functions. The potential is given as V = V 0 1 + W ( e − x σ ) . {\displaystyle V={\frac {V_{0}}{1+W\left(e^{-{\frac {x}{\sigma }}}\right)}}.} A peculiarity of the solution is that each of the two fundamental solutions that compose the general solution of the Schrödinger equation is given by a combination of two confluent hypergeometric functions of an argument proportional to z = W ( e − x σ ) . {\displaystyle z=W\left(e^{-{\frac {x}{\sigma }}}\right).} The Lambert W function also appears in the exact solution for the bound state energy of the one-dimensional Schrödinger equation with a double delta potential.
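The branch-difference inversion quoted in the Bernoulli numbers and Todd genus subsection can be checked numerically; the following sketch assumes SciPy.

import numpy as np
from scipy.special import lambertw

def X_of_Y(Y):
    # Y*exp(Y) lies in (-1/e, 0) for all Y < 0, so both real branches are available.
    branch = 0 if Y < -1.0 else -1          # the cases Y < -1 and -1 < Y < 0 from the text
    return Y - np.real(lambertw(Y * np.exp(Y), branch))

for Y in (-3.0, -1.5, -0.5, -0.1):
    X = X_of_Y(Y)
    print(Y, X, X / (1.0 - np.exp(X)))      # last column should reproduce Y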
=== Exact solution of QCD coupling constant === In quantum chromodynamics, the quantum field theory of the strong interaction, the coupling constant α s {\displaystyle \alpha _{\text{s}}} is computed perturbatively, the order n corresponding to Feynman diagrams including n quantum loops. The first order, n = 1, solution is exact (at that order) and analytical. At higher orders, n > 1, there is no exact and analytical solution and one typically uses an iterative method to furnish an approximate solution. However, for second order, n = 2, the Lambert W function provides an exact (if non-analytical) solution. === Exact solutions of the Einstein vacuum equations === In the Schwarzschild metric solution of the Einstein vacuum equations, the W function is needed to go from the Eddington–Finkelstein coordinates to the Schwarzschild coordinates. For this reason, it also appears in the construction of the Kruskal–Szekeres coordinates. === Resonances of the delta-shell potential === The s-wave resonances of the delta-shell potential can be written exactly in terms of the Lambert W function. === Thermodynamic equilibrium === If a reaction involves reactants and products having heat capacities that are constant with temperature, then the equilibrium constant K obeys ln ⁡ K = a T + b + c ln ⁡ T {\displaystyle \ln K={\frac {a}{T}}+b+c\ln T} for some constants a, b, and c. When c (equal to ⁠ΔCp/R⁠) is not zero, the value or values of T can be found where K equals a given value as follows, where L can be used for ln T. − a = ( b − ln ⁡ K ) T + c T ln ⁡ T = ( b − ln ⁡ K ) e L + c L e L − a c = ( b − ln ⁡ K c + L ) e L − a c e b − ln ⁡ K c = ( L + b − ln ⁡ K c ) e L + b − ln ⁡ K c L = W ( − a c e b − ln ⁡ K c ) + ln ⁡ K − b c T = exp ⁡ ( W ( − a c e b − ln ⁡ K c ) + ln ⁡ K − b c ) . {\displaystyle {\begin{aligned}-a&=(b-\ln K)T+cT\ln T\\&=(b-\ln K)e^{L}+cLe^{L}\\[5pt]-{\frac {a}{c}}&=\left({\frac {b-\ln K}{c}}+L\right)e^{L}\\[5pt]-{\frac {a}{c}}e^{\frac {b-\ln K}{c}}&=\left(L+{\frac {b-\ln K}{c}}\right)e^{L+{\frac {b-\ln K}{c}}}\\[5pt]L&=W\left(-{\frac {a}{c}}e^{\frac {b-\ln K}{c}}\right)+{\frac {\ln K-b}{c}}\\[5pt]T&=\exp \left(W\left(-{\frac {a}{c}}e^{\frac {b-\ln K}{c}}\right)+{\frac {\ln K-b}{c}}\right).\end{aligned}}} If a and c have the same sign there will be either two solutions or none (or one if the argument of W is exactly −⁠1/e⁠). (The upper solution may not be relevant.) If they have opposite signs, there will be one solution. === Phase separation of polymer mixtures === In the calculation of the phase diagram of thermodynamically incompatible polymer mixtures according to the Edmond-Ogston model, the solutions for binodal and tie-lines are formulated in terms of Lambert W functions. === Wien's displacement law in a D-dimensional universe === Wien's displacement law is expressed as ν max / T = α = c o n s t {\displaystyle \nu _{\max }/T=\alpha =\mathrm {const} } . With x = h ν max / k B T {\displaystyle x=h\nu _{\max }/k_{\mathrm {B} }T} and d ρ T ( x ) / d x = 0 {\displaystyle d\rho _{T}\left(x\right)/dx=0} , where ρ T {\displaystyle \rho _{T}} is the spectral energy density, one finds e − x = 1 − x D {\displaystyle e^{-x}=1-{\frac {x}{D}}} , where D {\displaystyle D} is the number of degrees of freedom for spatial translation. The solution x = D + W ( − D e − D ) {\displaystyle x=D+W\left(-De^{-D}\right)} shows that the spectral energy density is dependent on the dimensionality of the universe.
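The displacement-law root can be computed directly; the sketch below assumes SciPy, and D = 3 recovers the familiar value x = hν_max/(k_B T) ≈ 2.8214 of the ordinary three-dimensional case.

import numpy as np
from scipy.special import lambertw

def wien_x(D):
    # Nontrivial root of exp(-x) = 1 - x/D; the trivial root x = 0 sits on the other branch.
    return D + np.real(lambertw(-D * np.exp(-D), 0))

for D in (3, 4, 5):
    x = wien_x(D)
    print(D, x, np.exp(-x) - (1.0 - x / D))   # residual ~0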
=== AdS/CFT correspondence === The classical finite-size corrections to the dispersion relations of giant magnons, single spikes and GKP strings can be expressed in terms of the Lambert W function. === Epidemiology === In the t → ∞ limit of the SIR model, the proportion of susceptible and recovered individuals has a solution in terms of the Lambert W function. === Determination of the time of flight of a projectile === The total time of the journey of a projectile which experiences air resistance proportional to its velocity can be determined in exact form by using the Lambert W function. === Electromagnetic surface wave propagation === The transcendental equation that appears in the determination of the propagation wave number of an electromagnetic axially symmetric surface wave (a low-attenuation single TM01 mode) propagating in a cylindrical metallic wire gives rise to an equation like u ln u = v (where u and v clump together the geometrical and physical factors of the problem), which is solved by the Lambert W function. The first solution to this problem, due to Sommerfeld circa 1898, already contained an iterative method to determine the value of the Lambert W function. === Orthogonal trajectories of real ellipses === The family of ellipses x 2 + ( 1 − ε 2 ) y 2 = ε 2 {\displaystyle x^{2}+(1-\varepsilon ^{2})y^{2}=\varepsilon ^{2}} centered at ( 0 , 0 ) {\displaystyle (0,0)} is parameterized by eccentricity ε {\displaystyle \varepsilon } . The orthogonal trajectories of this family are given by the differential equation ( 1 y + y ) d y = ( 1 x − x ) d x {\displaystyle \left({\frac {1}{y}}+y\right)dy=\left({\frac {1}{x}}-x\right)dx} whose general solution is the family y 2 = W 0 ( x 2 exp ⁡ ( − 2 C − x 2 ) ) . {\displaystyle y^{2}=W_{0}\left(x^{2}\exp \left(-2C-x^{2}\right)\right).} == Generalizations == The standard Lambert W function expresses exact solutions to transcendental algebraic equations (in x) of the form: e − c x = a 0 ( x − r ) ( 1 ) {\displaystyle e^{-cx}=a_{0}(x-r)\qquad \qquad (1)} where a0, c and r are real constants. The solution is x = r + 1 c W ( c e − c r a 0 ) . {\displaystyle x=r+{\frac {1}{c}}W\left({\frac {c\,e^{-cr}}{a_{0}}}\right).}
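The stated solution of (1) is easy to confirm numerically; the constants in the sketch below (which assumes SciPy) are arbitrary.

import numpy as np
from scipy.special import lambertw

a0, c, r = 0.7, 1.3, -0.4
x = r + np.real(lambertw(c * np.exp(-c * r) / a0, 0)) / c
print(np.exp(-c * x), a0 * (x - r))   # the two sides of (1) should match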
Generalizations of the Lambert W function include: An application to general relativity and quantum mechanics (quantum gravity) in lower dimensions, in fact a link (unknown prior to 2007) between these two areas, where the right-hand side of (1) is replaced by a quadratic polynomial in x: e − c x = a 0 ( x − r 1 ) ( x − r 2 ) ( 2 ) {\displaystyle e^{-cx}=a_{0}(x-r_{1})(x-r_{2})\qquad \qquad (2)} where r1 and r2 are real distinct constants, the roots of the quadratic polynomial. Here, the solution is a function which has a single argument x but the terms like ri and a0 are parameters of that function. In this respect, the generalization resembles the hypergeometric function and the Meijer G function but it belongs to a different class of functions. When r1 = r2, both sides of (2) can be factored and reduced to (1) and thus the solution reduces to that of the standard W function. Analytical solutions of the eigenenergies of a special case of the quantum mechanical three-body problem, namely the (three-dimensional) hydrogen molecule-ion. Equation (2) expresses the equation governing the dilaton field, from which is derived the metric of the R = T or lineal two-body gravity problem in 1 + 1 dimensions (one spatial dimension and one time dimension) for the case of unequal rest masses, as well as the eigenenergies of the quantum-mechanical double-well Dirac delta function model for unequal charges in one dimension. Here the right-hand side of (1) is replaced by a ratio of infinite order polynomials in x: e − c x = a 0 ∏ i = 1 ∞ ( x − r i ) ∏ i = 1 ∞ ( x − s i ) ( 3 ) {\displaystyle e^{-cx}=a_{0}{\frac {\prod _{i=1}^{\infty }(x-r_{i})}{\prod _{i=1}^{\infty }(x-s_{i})}}\qquad \qquad (3)} where ri and si are distinct real constants and x is a function of the eigenenergy and the internuclear distance R. Equation (3) with its specialized cases expressed in (1) and (2) is related to a large class of delay differential equations. G. H. Hardy's notion of a "false derivative" provides exact multiple roots to special cases of (3). Applications of the Lambert W function in fundamental physical problems are not exhausted even for the standard case expressed in (1) as seen recently in the area of atomic, molecular, and optical physics. == Plots == Plots of the Lambert W function on the complex plane == Numerical evaluation == The W function may be approximated using Newton's method, with successive approximations to w = W(z) (so z = wew) being w j + 1 = w j − w j e w j − z e w j + w j e w j . {\displaystyle w_{j+1}=w_{j}-{\frac {w_{j}e^{w_{j}}-z}{e^{w_{j}}+w_{j}e^{w_{j}}}}.} The W function may also be approximated using Halley's method, w j + 1 = w j − w j e w j − z e w j ( w j + 1 ) − ( w j + 2 ) ( w j e w j − z ) 2 w j + 2 {\displaystyle w_{j+1}=w_{j}-{\frac {w_{j}e^{w_{j}}-z}{e^{w_{j}}\left(w_{j}+1\right)-{\dfrac {\left(w_{j}+2\right)\left(w_{j}e^{w_{j}}-z\right)}{2w_{j}+2}}}}} given in Corless et al. to compute W. For real x ≥ − 1 / e {\displaystyle x\geq -1/e} , it may be approximated by the quadratic-rate recursive formula of R. Iacono and J.P. Boyd: w n + 1 ( x ) = w n ( x ) 1 + w n ( x ) ( 1 + log ⁡ ( x w n ( x ) ) ) . {\displaystyle w_{n+1}(x)={\frac {w_{n}(x)}{1+w_{n}(x)}}\left(1+\log \left({\frac {x}{w_{n}(x)}}\right)\right).} Lajos Lóczi proved that, by using this iteration with an appropriate starting value w 0 ( x ) {\displaystyle w_{0}(x)} , one can determine the maximum number of iteration steps in advance for any precision. The appropriate starting values are as follows. For the principal branch W 0 : {\displaystyle W_{0}:} if x ∈ ( e , ∞ ) {\displaystyle x\in (e,\infty )} : w 0 ( x ) = log ⁡ ( x ) − log ⁡ ( log ⁡ ( x ) ) , {\displaystyle w_{0}(x)=\log(x)-\log(\log(x)),} if x ∈ ( 0 , e ) : {\displaystyle x\in (0,e):} w 0 ( x ) = x / e , {\displaystyle w_{0}(x)=x/e,} if x ∈ ( − 1 / e , 0 ) : {\displaystyle x\in (-1/e,0):} w 0 ( x ) = e x log ⁡ ( 1 + 1 + e x ) 1 + e x + 1 + e x , {\displaystyle w_{0}(x)={\frac {ex\log(1+{\sqrt {1+ex}})}{1+ex+{\sqrt {1+ex}}}},} For the branch W − 1 : {\displaystyle W_{-1}:} if x ∈ ( − 1 / 4 , 0 ) : {\displaystyle x\in (-1/4,0):} w 0 ( x ) = log ⁡ ( − x ) − log ⁡ ( − log ⁡ ( − x ) ) , {\displaystyle w_{0}(x)=\log(-x)-\log(-\log(-x)),} if x ∈ ( − 1 / e , − 1 / 4 ] : {\displaystyle x\in (-1/e,-1/4]:} w 0 ( x ) = − 1 − 2 1 + e x . {\displaystyle w_{0}(x)=-1-{\sqrt {2}}{\sqrt {1+ex}}.} The corresponding error bounds are: if x ∈ ( e , ∞ ) {\displaystyle x\in (e,\infty )} (Theorem 2.4): 0 < W 0 ( x ) − w n ( x ) < ( log ⁡ ( 1 + 1 / e ) ) 2 n , {\displaystyle 0<W_{0}(x)-w_{n}(x)<\left(\log(1+1/e)\right)^{2^{n}},} if x ∈ ( 0 , e ) {\displaystyle x\in (0,e)} (Theorem 2.9): 0 < W 0 ( x ) − w n ( x ) < ( 1 − 1 / e ) 2 n − 1 5 , {\displaystyle 0<W_{0}(x)-w_{n}(x)<{\frac {\left(1-1/e\right)^{2^{n}-1}}{5}},} if x ∈ ( − 1 / e , 0 ) : {\displaystyle x\in (-1/e,0):} for the principal branch W 0 {\displaystyle W_{0}} (Theorem 2.17): 0 < w n ( x ) − W 0 ( x ) < ( 1 / 10 ) 2 n , {\displaystyle 0<w_{n}(x)-W_{0}(x)<\left(1/10\right)^{2^{n}},} for the branch W − 1 {\displaystyle W_{-1}} (Theorem 2.23): 0 < W − 1 ( x ) − w n ( x ) < ( 1 / 2 ) 2 n . {\displaystyle 0<W_{-1}(x)-w_{n}(x)<\left(1/2\right)^{2^{n}}.}
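Both iterations are short to implement; the following sketch uses only the Python standard library, with Lóczi's starting values for the principal branch. The Halley starting guess is crude and not robust near the branch point, so this is an illustration rather than production code.

import math

def lambert_w0_halley(z, w=1.0, tol=1e-15, maxit=50):
    # Halley iteration from the formula above; w is a crude starting guess.
    for _ in range(maxit):
        e = math.exp(w)
        f = w * e - z
        step = f / (e * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        w -= step
        if abs(step) < tol * (1.0 + abs(w)):
            break
    return w

def lambert_w0_iacono_boyd(x, n=6):
    # Quadratic-rate recursion with Loczi's starting values for W_0.
    if x > math.e:
        w = math.log(x) - math.log(math.log(x))
    elif x > 0.0:
        w = x / math.e
    else:                                   # -1/e < x < 0
        s = math.sqrt(1.0 + math.e * x)
        w = math.e * x * math.log(1.0 + s) / (1.0 + math.e * x + s)
    for _ in range(n):
        w = w / (1.0 + w) * (1.0 + math.log(x / w))
    return w

for x in (-0.3, 0.5, 10.0):
    w = lambert_w0_iacono_boyd(x)
    print(x, w, w * math.exp(w))            # last column should reproduce x
print(lambert_w0_halley(1.0))               # omega constant, ~0.5671432904097838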
Toshio Fukushima has presented a fast method for approximating the real-valued parts of the principal and secondary branches of the W function without using any iteration. In this method the W function is evaluated as a conditional switch of rational functions on transformed variables: W 0 ( z ) = { X k ( x ) , ( z k − 1 ≤ z < z k , k = 1 , 2 , … , 17 ) , U k ( u ) , ( z k − 1 ≤ z < z k , k = 18 , 19 ) , {\displaystyle W_{0}(z)={\begin{cases}X_{k}(x),&(z_{k-1}\leq z<z_{k},\quad k=1,2,\ldots ,17),\\U_{k}(u),&(z_{k-1}\leq z<z_{k},\quad k=18,19),\end{cases}}} W − 1 ( z ) = { Y k ( y ) , ( z k − 1 ≤ z < z k , k = − 1 , − 2 , … , − 7 ) , V k ( v ) , ( z k − 1 ≤ z < z k , k = − 8 , − 9 , − 10 ) , {\displaystyle W_{-1}(z)={\begin{cases}Y_{k}(y),&(z_{k-1}\leq z<z_{k},\quad k=-1,-2,\ldots ,-7),\\V_{k}(v),&(z_{k-1}\leq z<z_{k},\quad k=-8,-9,-10),\end{cases}}} where x, u, y and v are transformations of z: x = z + 1 / e , u = ln ⁡ z , y = − z / ( x + 1 / e ) , v = ln ⁡ ( − z ) {\displaystyle x={\sqrt {z+1/e}},\quad u=\ln {z},\quad y=-z/(x+1/{\sqrt {e}}),\quad v=\ln(-z)} . Here X k ( x ) {\displaystyle X_{k}(x)} , U k ( u ) {\displaystyle U_{k}(u)} , Y k ( y ) {\displaystyle Y_{k}(y)} , and V k ( v ) {\displaystyle V_{k}(v)} are rational functions whose coefficients for different k-values are listed in the referenced paper together with the z k {\displaystyle z_{k}} values that determine the subdomains. With higher degree polynomials in these rational functions the method can approximate the W function more accurately. For example, when − 1 / e ≤ z ≤ 2.0082178115844727 {\displaystyle -1/e\leq z\leq 2.0082178115844727} , W 0 ( z ) {\displaystyle W_{0}(z)} can be approximated to 24 bits of accuracy on 64-bit floating point values as W 0 ( z ) ≈ X 1 ( x ) = ∑ i = 0 4 P i x i ∑ i = 0 3 Q i x i {\displaystyle W_{0}(z)\approx X_{1}(x)={\frac {\sum _{i=0}^{4}P_{i}x^{i}}{\sum _{i=0}^{3}Q_{i}x^{i}}}} where x is defined with the transformation above and the coefficients P i {\displaystyle P_{i}} and Q i {\displaystyle Q_{i}} are given in the referenced paper. Fukushima also offers an approximation with 50 bits of accuracy on 64-bit floats that uses 8th- and 7th-degree polynomials. == Software == The Lambert W function is implemented in many programming languages. == See also == Wright omega function Lambert's trinomial equation Lagrange inversion theorem Experimental mathematics Holstein–Herring method R = T model Ross' π lemma == Notes == == References == Corless, R.; Gonnet, G.; Hare, D.; Jeffrey, D.; Knuth, Donald (1996). "On the Lambert W function" (PDF). Advances in Computational Mathematics. 5: 329–359. doi:10.1007/BF02124750. ISSN 1019-7168. S2CID 29028411. Archived from the original (PDF) on 2010-12-14. Retrieved 2007-03-10. Chapeau-Blondeau, F.; Monir, A. (2002). "Evaluation of the Lambert W Function and Application to Generation of Generalized Gaussian Noise With Exponent 1/2" (PDF). IEEE Trans. Signal Process. 50 (9). doi:10.1109/TSP.2002.801912. Archived from the original (PDF) on 2012-03-28. Retrieved 2004-03-10. Francis; et al. (2000). "Quantitative General Theory for Periodic Breathing". Circulation. 102 (18): 2214–21. CiteSeerX 10.1.1.505.7194. doi:10.1161/01.cir.102.18.2214. PMID 11056095. S2CID 14410926. (Lambert function is used to solve delay-differential dynamics in human disease.) Hayes, B. (2005). "Why W?" (PDF). American Scientist. 93 (2): 104–108. doi:10.1511/2005.2.104. Archived (PDF) from the original on 2022-10-10.
Roy, R.; Olver, F. W. J. (2010), "Lambert W function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. Stewart, Seán M. (2005). "A New Elementary Function for Our Curricula?" (PDF). Australian Senior Mathematics Journal. 19 (2): 8–26. ISSN 0819-4564. Archived (PDF) from the original on 2022-10-10. Veberic, D., "Having Fun with Lambert W(x) Function" arXiv:1003.1628 (2010); Veberic, D. (2012). "Lambert W function for applications in physics". Computer Physics Communications. 183 (12): 2622–2628. arXiv:1209.0735. Bibcode:2012CoPhC.183.2622V. doi:10.1016/j.cpc.2012.07.008. S2CID 315088. Chatzigeorgiou, I. (2013). "Bounds on the Lambert function and their Application to the Outage Analysis of User Cooperation". IEEE Communications Letters. 17 (8): 1505–1508. arXiv:1601.04895. doi:10.1109/LCOMM.2013.070113.130972. S2CID 10062685. == External links == National Institute of Standards and Technology Digital Library – Lambert W MathWorld – Lambert W-Function Computing the Lambert W function Corless et al. Notes about Lambert W research GPL C++ implementation with Halley's and Fritsch's iteration. Special Functions of the GNU Scientific Library – GSL
Wikipedia/Lambert's_W_function
In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down. For example, consider the problem of minimizing the function x 2 + y 4 {\displaystyle x^{2}+y^{4}} with respect to the variables x {\displaystyle x} and y , {\displaystyle y,} subject to 1 ≤ x ≤ 10 {\displaystyle 1\leq x\leq 10} and 5 ≤ y ≤ 12. {\displaystyle 5\leq y\leq 12.\,} Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12. The feasible set of the problem is separate from the objective function, which states the criterion to be optimized and which in the above example is x 2 + y 4 . {\displaystyle x^{2}+y^{4}.} In many problems, the feasible set reflects a constraint that one or more variables must be non-negative. In pure integer programming problems, the feasible set is the set of integers (or some subset thereof). In linear programming problems, the feasible set is a convex polytope: a region in multidimensional space whose boundaries are formed by hyperplanes and whose corners are vertices. Constraint satisfaction is the process of finding a point in the feasible region. == Convex feasible set == A convex feasible set is one in which a line segment connecting any two feasible points goes through only other feasible points, and not through any points outside the feasible set. Convex feasible sets arise in many types of problems, including linear programming problems, and they are of particular interest because, if the problem has a convex objective function that is to be minimized, it will generally be easier to solve in the presence of a convex feasible set and any local optimum will also be a global optimum. == No feasible set == If the constraints of an optimization problem are mutually contradictory, there are no points that satisfy all the constraints and thus the feasible region is the empty set. In this case the problem has no solution and is said to be infeasible. == Bounded and unbounded feasible sets == Feasible sets may be bounded or unbounded. For example, the feasible set defined by the constraint set {x ≥ 0, y ≥ 0} is unbounded because in some directions there is no limit on how far one can go and still be in the feasible region. In contrast, the feasible set formed by the constraint set {x ≥ 0, y ≥ 0, x + 2y ≤ 4} is bounded because the extent of movement in any direction is limited by the constraints. In linear programming problems with n variables, a necessary but insufficient condition for the feasible set to be bounded is that the number of constraints be at least n + 1 (as illustrated by the above example). If the feasible set is unbounded, there may or may not be an optimum, depending on the specifics of the objective function. For example, if the feasible region is defined by the constraint set {x ≥ 0, y ≥ 0}, then the problem of maximizing x + y has no optimum since any candidate solution can be improved upon by increasing x or y; yet if the problem is to minimize x + y, then there is an optimum (specifically at (x, y) = (0, 0)). 
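The bounded and unbounded examples above can be explored with a linear programming solver. The sketch below assumes SciPy's scipy.optimize.linprog; since linprog minimizes, maximizing x + y is expressed as minimizing −(x + y).

from scipy.optimize import linprog

# Bounded feasible set {x >= 0, y >= 0, x + 2y <= 4}: maximize x + y.
res = linprog(c=[-1, -1], A_ub=[[1, 2]], b_ub=[4], bounds=[(0, None), (0, None)])
print(res.status, res.x)    # status 0 (optimum found), at the vertex (4, 0)

# Unbounded feasible set {x >= 0, y >= 0}: maximizing x + y has no optimum.
res = linprog(c=[-1, -1], bounds=[(0, None), (0, None)])
print(res.status)           # status 3: the problem is unbounded

# Minimizing x + y over the same unbounded set does have an optimum, at (0, 0).
res = linprog(c=[1, 1], bounds=[(0, None), (0, None)])
print(res.status, res.x)    # status 0, x = [0, 0]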
== Candidate solution == In optimization and other branches of mathematics, and in search algorithms (a topic in computer science), a candidate solution is a member of the set of possible solutions in the feasible region of a given problem. A candidate solution does not have to be a likely or reasonable solution to the problem; it is simply in the set that satisfies all constraints, that is, it is in the set of feasible solutions. Algorithms for solving various types of optimization problems often narrow the set of candidate solutions down to a subset of the feasible solutions, whose points remain as candidate solutions while the other feasible solutions are henceforth excluded as candidates. The space of all candidate solutions, before any feasible points have been excluded, is called the feasible region, feasible set, search space, or solution space. This is the set of all possible solutions that satisfy the problem's constraints. Constraint satisfaction is the process of finding a point in the feasible set. === Genetic algorithm === In the case of the genetic algorithm, the candidate solutions are the individuals in the population being evolved by the algorithm. === Calculus === In calculus, an optimal solution is sought using the first derivative test: the first derivative of the function being optimized is equated to zero, and any values of the choice variable(s) that satisfy this equation are viewed as candidate solutions (while those that do not are ruled out as candidates). There are several ways in which a candidate solution might not be an actual solution. First, it might give a minimum when a maximum is being sought (or vice versa), and second, it might give neither a minimum nor a maximum but rather a saddle point or an inflection point, at which a temporary pause in the local rise or fall of the function occurs. Such candidate solutions can often be ruled out by use of the second derivative test, the satisfaction of which is sufficient for the candidate solution to be at least locally optimal. Third, a candidate solution may be a local optimum but not a global optimum. In taking antiderivatives of monomials of the form x n , {\displaystyle x^{n},} the candidate solution using Cavalieri's quadrature formula would be 1 n + 1 x n + 1 + C . {\displaystyle {\tfrac {1}{n+1}}x^{n+1}+C.} This candidate solution is in fact correct except when n = − 1. {\displaystyle n=-1.} === Linear programming === In the simplex method for solving linear programming problems, a vertex of the feasible polytope is selected as the initial candidate solution and is tested for optimality; if it is rejected as the optimum, an adjacent vertex is considered as the next candidate solution. This process is continued until a candidate solution is found to be the optimum. == References ==
Wikipedia/Candidate_solutions
In probability theory, an experiment or trial (see below) is any procedure that can be infinitely repeated and has a well-defined set of possible outcomes, known as the sample space. An experiment is said to be random if it has more than one possible outcome, and deterministic if it has only one. A random experiment that has exactly two (mutually exclusive) possible outcomes is known as a Bernoulli trial. When an experiment is conducted, one (and only one) outcome results, although this outcome may be included in any number of events, all of which would be said to have occurred on that trial. After conducting many trials of the same experiment and pooling the results, an experimenter can begin to assess the empirical probabilities of the various outcomes and events that can occur in the experiment and apply the methods of statistical analysis. == Experiments and trials == Random experiments are often conducted repeatedly, so that the collective results may be subjected to statistical analysis. A fixed number of repetitions of the same experiment can be thought of as a composed experiment, in which case the individual repetitions are called trials. For example, if one were to toss the same coin one hundred times and record each result, each toss would be considered a trial within the experiment composed of all hundred tosses. == Mathematical description == A random experiment is described or modeled by a mathematical construct known as a probability space. A probability space is constructed and defined with a specific kind of experiment or trial in mind. A mathematical description of an experiment consists of three parts: A sample space, Ω (or S), which is the set of all possible outcomes. A set of events F {\displaystyle \scriptstyle {\mathcal {F}}} , where each event is a set containing zero or more outcomes. The assignment of probabilities to the events, that is, a function P mapping from events to probabilities. An outcome is the result of a single execution of the model. Since individual outcomes might be of little practical use, more complicated events are used to characterize groups of outcomes. The collection of all such events is a sigma-algebra F {\displaystyle \scriptstyle {\mathcal {F}}} . Finally, there is a need to specify each event's likelihood of happening; this is done using the probability measure function, P. Once an experiment has been performed and an outcome ω from the sample space Ω has resulted, all the events in F {\displaystyle \scriptstyle {\mathcal {F}}} that contain the selected outcome ω (recall that each event is a subset of Ω) are said to “have occurred”. The probability function P is defined in such a way that, if the experiment were to be repeated an infinite number of times, the relative frequencies of occurrence of each of the events would approach agreement with the values P assigns them. As a simple experiment, we may flip a coin twice. The sample space (where the order of the two flips is relevant) is {(H, T), (T, H), (T, T), (H, H)} where "H" means "heads" and "T" means "tails". Note that each of (H, T), (T, H), ... are possible outcomes of the experiment. We may define an event which occurs when a "heads" occurs in either of the two flips. This event contains all of the outcomes except (T, T). == See also == Probability space == References == == External links == Media related to Experiment (probability theory) at Wikimedia Commons
Wikipedia/Experiment_(probability_theory)
In mathematical analysis and in probability theory, a σ-algebra ("sigma algebra") is part of the formalism for defining sets that can be measured. In calculus and analysis, for example, σ-algebras are used to define the concept of sets with area or volume. In probability theory, they are used to define events with a well-defined probability. In this way, σ-algebras help to formalize the notion of size. In formal terms, a σ-algebra (also σ-field, where the σ comes from the German "Summe", meaning "sum") on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The ordered pair ( X , Σ ) {\displaystyle (X,\Sigma )} is called a measurable space. The set X is understood to be an ambient space (such as the 2D plane or the set of outcomes when rolling a six-sided die {1,2,3,4,5,6}), and the collection Σ is a choice of subsets declared to have a well-defined size. The closure requirements for σ-algebras are designed to capture our intuitive ideas about how sizes combine: if there is a well-defined probability that an event occurs, there should be a well-defined probability that it does not occur (closure under complements); if several sets have a well-defined size, so should their combination (countable unions); if several events have a well-defined probability of occurring, so should the event where they all occur simultaneously (countable intersections). The definition of σ-algebra resembles other mathematical structures such as a topology (which is required to be closed under all unions but only finite intersections, and which doesn't necessarily contain all complements of its sets) or a set algebra (which is closed only under finite unions and intersections). == Examples of σ-algebras == If X = { a , b , c , d } {\displaystyle X=\{a,b,c,d\}} one possible σ-algebra on X {\displaystyle X} is Σ = { ∅ , { a , b } , { c , d } , { a , b , c , d } } , {\displaystyle \Sigma =\{\varnothing ,\{a,b\},\{c,d\},\{a,b,c,d\}\},} where ∅ {\displaystyle \varnothing } is the empty set. In general, a finite algebra is always a σ-algebra. If { A 1 , A 2 , A 3 , … } , {\displaystyle \{A_{1},A_{2},A_{3},\ldots \},} is a countable partition of X {\displaystyle X} then the collection of all unions of sets in the partition (including the empty set) is a σ-algebra. A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding in all countable unions, countable intersections, and relative complements and continuing this process (by transfinite iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as the Borel hierarchy). == Motivation == There are at least three key motivators for σ-algebras: defining measures, manipulating limits of sets, and managing partial information characterized by sets. === Measure === A measure on X {\displaystyle X} is a function that assigns a non-negative real number to subsets of X ; {\displaystyle X;} this can be thought of as making precise a notion of "size" or "volume" for sets. We want the size of the union of disjoint sets to be the sum of their individual sizes, even for an infinite sequence of disjoint sets. One would like to assign a size to every subset of X , {\displaystyle X,} but in many natural settings, this is not possible. 
For example, the axiom of choice implies that when the size under consideration is the ordinary notion of length for subsets of the real line, then there exist sets for which no size exists, for example, the Vitali sets. For this reason, one considers instead a smaller collection of privileged subsets of X . {\displaystyle X.} These subsets will be called the measurable sets. They are closed under operations that one would expect for measurable sets, that is, the complement of a measurable set is a measurable set and the countable union of measurable sets is a measurable set. Non-empty collections of sets with these properties are called σ-algebras. === Limits of sets === Many uses of measure, such as the probability concept of almost sure convergence, involve limits of sequences of sets. For this, closure under countable unions and intersections is paramount. Set limits are defined as follows on σ-algebras. The limit supremum or outer limit of a sequence A 1 , A 2 , A 3 , … {\displaystyle A_{1},A_{2},A_{3},\ldots } of subsets of X {\displaystyle X} is lim sup n → ∞ A n = ⋂ n = 1 ∞ ⋃ m = n ∞ A m = ⋂ n = 1 ∞ A n ∪ A n + 1 ∪ ⋯ . {\displaystyle \limsup _{n\to \infty }A_{n}=\bigcap _{n=1}^{\infty }\bigcup _{m=n}^{\infty }A_{m}=\bigcap _{n=1}^{\infty }A_{n}\cup A_{n+1}\cup \cdots .} It consists of all points x {\displaystyle x} that are in infinitely many of these sets (or equivalently, that are in cofinally many of them). That is, x ∈ lim sup n → ∞ A n {\displaystyle x\in \limsup _{n\to \infty }A_{n}} if and only if there exists an infinite subsequence A n 1 , A n 2 , … {\displaystyle A_{n_{1}},A_{n_{2}},\ldots } (where n 1 < n 2 < ⋯ {\displaystyle n_{1}<n_{2}<\cdots } ) of sets that all contain x ; {\displaystyle x;} that is, such that x ∈ A n 1 ∩ A n 2 ∩ ⋯ . {\displaystyle x\in A_{n_{1}}\cap A_{n_{2}}\cap \cdots .} The limit infimum or inner limit of a sequence A 1 , A 2 , A 3 , … {\displaystyle A_{1},A_{2},A_{3},\ldots } of subsets of X {\displaystyle X} is lim inf n → ∞ A n = ⋃ n = 1 ∞ ⋂ m = n ∞ A m = ⋃ n = 1 ∞ A n ∩ A n + 1 ∩ ⋯ . {\displaystyle \liminf _{n\to \infty }A_{n}=\bigcup _{n=1}^{\infty }\bigcap _{m=n}^{\infty }A_{m}=\bigcup _{n=1}^{\infty }A_{n}\cap A_{n+1}\cap \cdots .} It consists of all points that are in all but finitely many of these sets (or equivalently, that are eventually in all of them). That is, x ∈ lim inf n → ∞ A n {\displaystyle x\in \liminf _{n\to \infty }A_{n}} if and only if there exists an index N ∈ N {\displaystyle N\in \mathbb {N} } such that A N , A N + 1 , … {\displaystyle A_{N},A_{N+1},\ldots } all contain x ; {\displaystyle x;} that is, such that x ∈ A N ∩ A N + 1 ∩ ⋯ . {\displaystyle x\in A_{N}\cap A_{N+1}\cap \cdots .} The inner limit is always a subset of the outer limit: lim inf n → ∞ A n ⊆ lim sup n → ∞ A n . {\displaystyle \liminf _{n\to \infty }A_{n}~\subseteq ~\limsup _{n\to \infty }A_{n}.} If these two sets are equal then their limit lim n → ∞ A n {\displaystyle \lim _{n\to \infty }A_{n}} exists and is equal to this common set: lim n → ∞ A n := lim inf n → ∞ A n = lim sup n → ∞ A n . {\displaystyle \lim _{n\to \infty }A_{n}:=\liminf _{n\to \infty }A_{n}=\limsup _{n\to \infty }A_{n}.} === Sub σ-algebras === In much of probability, especially when conditional expectation is involved, one is concerned with sets that represent only part of all the possible information that can be observed. 
This partial information can be characterized with a smaller σ-algebra which is a subset of the principal σ-algebra; it consists of the collection of subsets relevant only to and determined only by the partial information. Formally, if Σ , Σ ′ {\displaystyle \Sigma ,\Sigma '} are σ-algebras on X {\displaystyle X} , then Σ ′ {\displaystyle \Sigma '} is a sub σ-algebra of Σ {\displaystyle \Sigma } if Σ ′ ⊆ Σ {\displaystyle \Sigma '\subseteq \Sigma } . The Bernoulli process provides a simple example. This consists of a sequence of random coin flips, coming up Heads ( H {\displaystyle H} ) or Tails ( T {\displaystyle T} ), of unbounded length. The sample space Ω consists of all possible infinite sequences of H {\displaystyle H} or T : {\displaystyle T:} Ω = { H , T } ∞ = { ( x 1 , x 2 , x 3 , … ) : x i ∈ { H , T } , i ≥ 1 } . {\displaystyle \Omega =\{H,T\}^{\infty }=\{(x_{1},x_{2},x_{3},\dots ):x_{i}\in \{H,T\},i\geq 1\}.} The full sigma algebra can be generated from an ascending sequence of subalgebras, by considering the information that might be obtained after observing some or all of the first n {\displaystyle n} coin flips. This sequence of subalgebras is given by G n = { A × { H , T } ∞ : A ⊆ { H , T } n } {\displaystyle {\mathcal {G}}_{n}=\{A\times \{H,T\}^{\infty }:A\subseteq \{H,T\}^{n}\}} Each of these is finer than the last, and so can be ordered as a filtration G 0 ⊆ G 1 ⊆ G 2 ⊆ ⋯ ⊆ G ∞ {\displaystyle {\mathcal {G}}_{0}\subseteq {\mathcal {G}}_{1}\subseteq {\mathcal {G}}_{2}\subseteq \cdots \subseteq {\mathcal {G}}_{\infty }} The first subalgebra G 0 = { ∅ , Ω } {\displaystyle {\mathcal {G}}_{0}=\{\varnothing ,\Omega \}} is the trivial algebra: it has only two elements in it, the empty set and the total space. The second subalgebra G 1 {\displaystyle {\mathcal {G}}_{1}} has four elements: the two in G 0 {\displaystyle {\mathcal {G}}_{0}} plus two more: sequences that start with H {\displaystyle H} and sequences that start with T {\displaystyle T} . Each subalgebra is finer than the last. The n {\displaystyle n} 'th subalgebra contains 2 2 n {\displaystyle 2^{2^{n}}} elements: its atoms are the 2 n {\displaystyle 2^{n}} possible outcomes of the first n {\displaystyle n} flips, and its elements are all unions of these atoms, corresponding to partial information about those flips. The limiting algebra G ∞ {\displaystyle {\mathcal {G}}_{\infty }} is the smallest σ-algebra containing all the others. It is the algebra generated by the product topology or weak topology on the product space { H , T } ∞ . {\displaystyle \{H,T\}^{\infty }.} == Definition and properties == === Definition === Let X {\displaystyle X} be some set, and let P ( X ) {\displaystyle P(X)} represent its power set, the set of all subsets of X {\displaystyle X} . Then a subset Σ ⊆ P ( X ) {\displaystyle \Sigma \subseteq P(X)} is called a σ-algebra if and only if it satisfies the following three properties: X {\displaystyle X} is in Σ {\displaystyle \Sigma } . Σ {\displaystyle \Sigma } is closed under complementation: If some set A {\displaystyle A} is in Σ , {\displaystyle \Sigma ,} then so is its complement, X ∖ A . {\displaystyle X\setminus A.} Σ {\displaystyle \Sigma } is closed under countable unions: If A 1 , A 2 , A 3 , … {\displaystyle A_{1},A_{2},A_{3},\ldots } are in Σ , {\displaystyle \Sigma ,} then so is A = A 1 ∪ A 2 ∪ A 3 ∪ ⋯ . {\displaystyle A=A_{1}\cup A_{2}\cup A_{3}\cup \cdots .}
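On a finite set these three properties can be checked by brute force, since countable unions reduce to finite ones. The following Python sketch verifies the example σ-algebra from the beginning of the article, and the same closure idea computes the smallest σ-algebra containing a given family (anticipating the sections on generated σ-algebras below).

def is_sigma_algebra(X, collection):
    X = frozenset(X)
    sigma = {frozenset(s) for s in collection}
    if X not in sigma:
        return False                         # property 1: X is in Sigma
    if any(X - A not in sigma for A in sigma):
        return False                         # property 2: closed under complement
    return all(A | B in sigma for A in sigma for B in sigma)  # property 3: unions

X = {'a', 'b', 'c', 'd'}
print(is_sigma_algebra(X, [set(), {'a', 'b'}, {'c', 'd'}, X]))   # True
print(is_sigma_algebra(X, [set(), {'a'}, X]))   # False: complement {b, c, d} missing

def generate(X, family):
    # Close the family under complement and pairwise union until stable.
    X = frozenset(X)
    sigma = {frozenset(), X} | {frozenset(s) for s in family}
    while True:
        new = {X - A for A in sigma} | {A | B for A in sigma for B in sigma}
        if new <= sigma:
            return sigma
        sigma |= new

result = generate({1, 2, 3}, [{1}])
print(sorted(sorted(A) for A in result))    # [[], [1], [1, 2, 3], [2, 3]]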
It also follows that the empty set ∅ {\displaystyle \varnothing } is in Σ , {\displaystyle \Sigma ,} since by (1) X {\displaystyle X} is in Σ {\displaystyle \Sigma } and (2) asserts that its complement, the empty set, is also in Σ . {\displaystyle \Sigma .} Moreover, since { X , ∅ } {\displaystyle \{X,\varnothing \}} satisfies all 3 conditions, it follows that { X , ∅ } {\displaystyle \{X,\varnothing \}} is the smallest possible σ-algebra on X . {\displaystyle X.} The largest possible σ-algebra on X {\displaystyle X} is P ( X ) . {\displaystyle P(X).} Elements of the σ-algebra are called measurable sets. An ordered pair ( X , Σ ) , {\displaystyle (X,\Sigma ),} where X {\displaystyle X} is a set and Σ {\displaystyle \Sigma } is a σ-algebra over X , {\displaystyle X,} is called a measurable space. A function between two measurable spaces is called a measurable function if the preimage of every measurable set is measurable. The collection of measurable spaces forms a category, with the measurable functions as morphisms. Measures are defined as certain types of functions from a σ-algebra to [ 0 , ∞ ] . {\displaystyle [0,\infty ].} A σ-algebra is both a π-system and a Dynkin system (λ-system). The converse is true as well, by Dynkin's theorem (see below). === Dynkin's π-λ theorem === This theorem (or the related monotone class theorem) is an essential tool for proving many results about properties of specific σ-algebras. It capitalizes on the nature of two simpler classes of sets, namely the following. A π-system P {\displaystyle P} is a collection of subsets of X {\displaystyle X} that is closed under finitely many intersections, and A Dynkin system (or λ-system) D {\displaystyle D} is a collection of subsets of X {\displaystyle X} that contains X {\displaystyle X} and is closed under complement and under countable unions of disjoint subsets. Dynkin's π-λ theorem says, if P {\displaystyle P} is a π-system and D {\displaystyle D} is a Dynkin system that contains P , {\displaystyle P,} then the σ-algebra σ ( P ) {\displaystyle \sigma (P)} generated by P {\displaystyle P} is contained in D . {\displaystyle D.} Since certain π-systems are relatively simple classes, it may not be hard to verify that all sets in P {\displaystyle P} enjoy the property under consideration while, on the other hand, showing that the collection D {\displaystyle D} of all subsets with the property is a Dynkin system can also be straightforward. Dynkin's π-λ Theorem then implies that all sets in σ ( P ) {\displaystyle \sigma (P)} enjoy the property, avoiding the task of checking it for an arbitrary set in σ ( P ) . {\displaystyle \sigma (P).} One of the most fundamental uses of the π-λ theorem is to show equivalence of separately defined measures or integrals. For example, it is used to equate a probability for a random variable X {\displaystyle X} with the Lebesgue-Stieltjes integral typically associated with computing the probability: P ( X ∈ A ) = ∫ A F ( d x ) {\displaystyle \mathbb {P} (X\in A)=\int _{A}\,F(dx)} for all A {\displaystyle A} in the Borel σ-algebra on R , {\displaystyle \mathbb {R} ,} where F ( x ) {\displaystyle F(x)} is the cumulative distribution function for X , {\displaystyle X,} defined on R , {\displaystyle \mathbb {R} ,} while P {\displaystyle \mathbb {P} } is a probability measure, defined on a σ-algebra Σ {\displaystyle \Sigma } of subsets of some sample space Ω . 
{\displaystyle \Omega .} === Combining σ-algebras === Suppose { Σ α : α ∈ A } {\displaystyle \textstyle \left\{\Sigma _{\alpha }:\alpha \in {\mathcal {A}}\right\}} is a collection of σ-algebras on a space X . {\displaystyle X.} Meet The intersection of a collection of σ-algebras is a σ-algebra. To emphasize its character as a σ-algebra, it often is denoted by: ⋀ α ∈ A Σ α . {\displaystyle \bigwedge _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }.} Sketch of Proof: Let Σ ∗ {\displaystyle \Sigma ^{*}} denote the intersection. Since X {\displaystyle X} is in every Σ α , Σ ∗ {\displaystyle \Sigma _{\alpha },\Sigma ^{*}} is not empty. Closure under complement and countable unions for every Σ α {\displaystyle \Sigma _{\alpha }} implies the same must be true for Σ ∗ . {\displaystyle \Sigma ^{*}.} Therefore, Σ ∗ {\displaystyle \Sigma ^{*}} is a σ-algebra. Join The union of a collection of σ-algebras is not generally a σ-algebra, or even an algebra, but it generates a σ-algebra known as the join which typically is denoted ⋁ α ∈ A Σ α = σ ( ⋃ α ∈ A Σ α ) . {\displaystyle \bigvee _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }=\sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right).} A π-system that generates the join is P = { ⋂ i = 1 n A i : A i ∈ Σ α i , α i ∈ A , n ≥ 1 } . {\displaystyle {\mathcal {P}}=\left\{\bigcap _{i=1}^{n}A_{i}:A_{i}\in \Sigma _{\alpha _{i}},\alpha _{i}\in {\mathcal {A}},\ n\geq 1\right\}.} Sketch of Proof: By the case n = 1 , {\displaystyle n=1,} it is seen that each Σ α ⊂ P , {\displaystyle \Sigma _{\alpha }\subset {\mathcal {P}},} so ⋃ α ∈ A Σ α ⊆ P . {\displaystyle \bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\subseteq {\mathcal {P}}.} This implies σ ( ⋃ α ∈ A Σ α ) ⊆ σ ( P ) {\displaystyle \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right)\subseteq \sigma ({\mathcal {P}})} by the definition of a σ-algebra generated by a collection of subsets. On the other hand, P ⊆ σ ( ⋃ α ∈ A Σ α ) {\displaystyle {\mathcal {P}}\subseteq \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right)} which, by Dynkin's π-λ theorem, implies σ ( P ) ⊆ σ ( ⋃ α ∈ A Σ α ) . {\displaystyle \sigma ({\mathcal {P}})\subseteq \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right).} === σ-algebras for subspaces === Suppose Y {\displaystyle Y} is a subset of X {\displaystyle X} and let ( X , Σ ) {\displaystyle (X,\Sigma )} be a measurable space. The collection { Y ∩ B : B ∈ Σ } {\displaystyle \{Y\cap B:B\in \Sigma \}} is a σ-algebra of subsets of Y . {\displaystyle Y.} Suppose ( Y , Λ ) {\displaystyle (Y,\Lambda )} is a measurable space. The collection { A ⊆ X : A ∩ Y ∈ Λ } {\displaystyle \{A\subseteq X:A\cap Y\in \Lambda \}} is a σ-algebra of subsets of X . {\displaystyle X.} === Relation to σ-ring === A σ-algebra Σ {\displaystyle \Sigma } is just a σ-ring that contains the universal set X . {\displaystyle X.} A σ-ring need not be a σ-algebra, as for example measurable subsets of zero Lebesgue measure in the real line are a σ-ring, but not a σ-algebra since the real line has infinite measure and thus cannot be obtained by their countable union. If, instead of zero measure, one takes measurable subsets of finite Lebesgue measure, those are a ring but not a σ-ring, since the real line can be obtained by their countable union yet its measure is not finite. === Typographic note === σ-algebras are sometimes denoted using calligraphic capital letters, or the Fraktur typeface. 
Thus ( X , Σ ) {\displaystyle (X,\Sigma )} may be denoted as ( X , F ) {\displaystyle \scriptstyle (X,\,{\mathcal {F}})} or ( X , F ) . {\displaystyle \scriptstyle (X,\,{\mathfrak {F}}).} == Particular cases and examples == === Separable σ-algebras === A separable σ {\displaystyle \sigma } -algebra (or separable σ {\displaystyle \sigma } -field) is a σ {\displaystyle \sigma } -algebra F {\displaystyle {\mathcal {F}}} that is a separable space when considered as a metric space with metric ρ ( A , B ) = μ ( A △ B ) {\displaystyle \rho (A,B)=\mu (A{\mathbin {\triangle }}B)} for A , B ∈ F {\displaystyle A,B\in {\mathcal {F}}} and a given finite measure μ {\displaystyle \mu } (and with △ {\displaystyle \triangle } being the symmetric difference operator). Any σ {\displaystyle \sigma } -algebra generated by a countable collection of sets is separable, but the converse need not hold. For example, the Lebesgue σ {\displaystyle \sigma } -algebra is separable (since every Lebesgue measurable set is equivalent to some Borel set) but not countably generated (since its cardinality is higher than continuum). A separable measure space has a natural pseudometric that renders it separable as a pseudometric space. The distance between two sets is defined as the measure of the symmetric difference of the two sets. The symmetric difference of two distinct sets can have measure zero; hence the pseudometric as defined above need not be a true metric. However, if sets whose symmetric difference has measure zero are identified into a single equivalence class, the resulting quotient set can be properly metrized by the induced metric. If the measure space is separable, it can be shown that the corresponding metric space is, too. === Simple set-based examples === Let X {\displaystyle X} be any set. The family consisting only of the empty set and the set X , {\displaystyle X,} called the minimal or trivial σ-algebra over X . {\displaystyle X.} The power set of X , {\displaystyle X,} called the discrete σ-algebra. The collection { ∅ , A , X ∖ A , X } {\displaystyle \{\varnothing ,A,X\setminus A,X\}} is a simple σ-algebra generated by the subset A . {\displaystyle A.} The collection of subsets of X {\displaystyle X} which are countable or whose complements are countable is a σ-algebra (which is distinct from the power set of X {\displaystyle X} if and only if X {\displaystyle X} is uncountable). This is the σ-algebra generated by the singletons of X . {\displaystyle X.} Note: "countable" includes finite or empty. The collection of all unions of sets in a countable partition of X {\displaystyle X} is a σ-algebra. === Stopping time sigma-algebras === A stopping time τ {\displaystyle \tau } can define a σ {\displaystyle \sigma } -algebra F τ , {\displaystyle {\mathcal {F}}_{\tau },} the so-called stopping time sigma-algebra, which in a filtered probability space describes the information up to the random time τ {\displaystyle \tau } in the sense that, if the filtered probability space is interpreted as a random experiment, the maximum information that can be found out about the experiment from arbitrarily often repeating it until the time τ {\displaystyle \tau } is F τ . {\displaystyle {\mathcal {F}}_{\tau }.} == σ-algebras generated by families of sets == === σ-algebra generated by an arbitrary family === Let F {\displaystyle F} be an arbitrary family of subsets of X .
{\displaystyle X.} Then there exists a unique smallest σ-algebra which contains every set in F {\displaystyle F} (even though F {\displaystyle F} may or may not itself be a σ-algebra). It is, in fact, the intersection of all σ-algebras containing F . {\displaystyle F.} (See intersections of σ-algebras above.) This σ-algebra is denoted σ ( F ) {\displaystyle \sigma (F)} and is called the σ-algebra generated by F . {\displaystyle F.} If F {\displaystyle F} is empty, then σ ( ∅ ) = { ∅ , X } . {\displaystyle \sigma (\varnothing )=\{\varnothing ,X\}.} Otherwise σ ( F ) {\displaystyle \sigma (F)} consists of all the subsets of X {\displaystyle X} that can be made from elements of F {\displaystyle F} by a countable number of complement, union and intersection operations. For a simple example, consider the set X = { 1 , 2 , 3 } . {\displaystyle X=\{1,2,3\}.} Then the σ-algebra generated by the single subset { 1 } {\displaystyle \{1\}} is σ ( { 1 } ) = { ∅ , { 1 } , { 2 , 3 } , { 1 , 2 , 3 } } . {\displaystyle \sigma (\{1\})=\{\varnothing ,\{1\},\{2,3\},\{1,2,3\}\}.} By an abuse of notation, when a collection of subsets contains only one element, A , {\displaystyle A,} σ ( A ) {\displaystyle \sigma (A)} may be written instead of σ ( { A } ) ; {\displaystyle \sigma (\{A\});} in the prior example σ ( { 1 } ) {\displaystyle \sigma (\{1\})} instead of σ ( { { 1 } } ) . {\displaystyle \sigma (\{\{1\}\}).} Indeed, using σ ( A 1 , A 2 , … ) {\displaystyle \sigma \left(A_{1},A_{2},\ldots \right)} to mean σ ( { A 1 , A 2 , … } ) {\displaystyle \sigma \left(\left\{A_{1},A_{2},\ldots \right\}\right)} is also quite common. There are many families of subsets that generate useful σ-algebras. Some of these are presented here. === σ-algebra generated by a function === If f {\displaystyle f} is a function from a set X {\displaystyle X} to a set Y {\displaystyle Y} and B {\displaystyle B} is a σ {\displaystyle \sigma } -algebra of subsets of Y , {\displaystyle Y,} then the σ {\displaystyle \sigma } -algebra generated by the function f , {\displaystyle f,} denoted by σ ( f ) , {\displaystyle \sigma (f),} is the collection of all inverse images f − 1 ( S ) {\displaystyle f^{-1}(S)} of the sets S {\displaystyle S} in B . {\displaystyle B.} That is, σ ( f ) = { f − 1 ( S ) : S ∈ B } . {\displaystyle \sigma (f)=\left\{f^{-1}(S)\,:\,S\in B\right\}.} A function f {\displaystyle f} from a set X {\displaystyle X} to a set Y {\displaystyle Y} is measurable with respect to a σ-algebra Σ {\displaystyle \Sigma } of subsets of X {\displaystyle X} if and only if σ ( f ) {\displaystyle \sigma (f)} is a subset of Σ . {\displaystyle \Sigma .} One common situation, and understood by default if B {\displaystyle B} is not specified explicitly, is when Y {\displaystyle Y} is a metric or topological space and B {\displaystyle B} is the collection of Borel sets on Y . {\displaystyle Y.} If f {\displaystyle f} is a function from X {\displaystyle X} to R n {\displaystyle \mathbb {R} ^{n}} then σ ( f ) {\displaystyle \sigma (f)} is generated by the family of subsets which are inverse images of intervals/rectangles in R n : {\displaystyle \mathbb {R} ^{n}:} σ ( f ) = σ ( { f − 1 ( [ a 1 , b 1 ] × ⋯ × [ a n , b n ] ) : a i , b i ∈ R } ) . {\displaystyle \sigma (f)=\sigma \left(\left\{f^{-1}(\left[a_{1},b_{1}\right]\times \cdots \times \left[a_{n},b_{n}\right]):a_{i},b_{i}\in \mathbb {R} \right\}\right).} A useful property is the following. 
Assume f {\displaystyle f} is a measurable map from ( X , Σ X ) {\displaystyle \left(X,\Sigma _{X}\right)} to ( S , Σ S ) {\displaystyle \left(S,\Sigma _{S}\right)} and g {\displaystyle g} is a measurable map from ( X , Σ X ) {\displaystyle \left(X,\Sigma _{X}\right)} to ( T , Σ T ) . {\displaystyle \left(T,\Sigma _{T}\right).} If there exists a measurable map h {\displaystyle h} from ( T , Σ T ) {\displaystyle \left(T,\Sigma _{T}\right)} to ( S , Σ S ) {\displaystyle \left(S,\Sigma _{S}\right)} such that f ( x ) = h ( g ( x ) ) {\displaystyle f(x)=h(g(x))} for all x , {\displaystyle x,} then σ ( f ) ⊆ σ ( g ) . {\displaystyle \sigma (f)\subseteq \sigma (g).} If S {\displaystyle S} is finite or countably infinite or, more generally, ( S , Σ S ) {\displaystyle \left(S,\Sigma _{S}\right)} is a standard Borel space (for example, a separable complete metric space with its associated Borel sets), then the converse is also true. Examples of standard Borel spaces include R n {\displaystyle \mathbb {R} ^{n}} with its Borel sets and R ∞ {\displaystyle \mathbb {R} ^{\infty }} with the cylinder σ-algebra described below. === Borel and Lebesgue σ-algebras === An important example is the Borel algebra over any topological space: the σ-algebra generated by the open sets (or, equivalently, by the closed sets). This σ-algebra is not, in general, the whole power set. For a non-trivial example that is not a Borel set, see the Vitali set or Non-Borel sets. On the Euclidean space R n , {\displaystyle \mathbb {R} ^{n},} another σ-algebra is of importance: that of all Lebesgue measurable sets. This σ-algebra contains more sets than the Borel σ-algebra on R n {\displaystyle \mathbb {R} ^{n}} and is preferred in integration theory, as it gives a complete measure space. === Product σ-algebra === Let ( X 1 , Σ 1 ) {\displaystyle \left(X_{1},\Sigma _{1}\right)} and ( X 2 , Σ 2 ) {\displaystyle \left(X_{2},\Sigma _{2}\right)} be two measurable spaces. The σ-algebra for the corresponding product space X 1 × X 2 {\displaystyle X_{1}\times X_{2}} is called the product σ-algebra and is defined by Σ 1 × Σ 2 = σ ( { B 1 × B 2 : B 1 ∈ Σ 1 , B 2 ∈ Σ 2 } ) . {\displaystyle \Sigma _{1}\times \Sigma _{2}=\sigma \left(\left\{B_{1}\times B_{2}:B_{1}\in \Sigma _{1},B_{2}\in \Sigma _{2}\right\}\right).} Observe that { B 1 × B 2 : B 1 ∈ Σ 1 , B 2 ∈ Σ 2 } {\displaystyle \{B_{1}\times B_{2}:B_{1}\in \Sigma _{1},B_{2}\in \Sigma _{2}\}} is a π-system. The Borel σ-algebra for R n {\displaystyle \mathbb {R} ^{n}} is generated by half-infinite rectangles and by finite rectangles. For example, B ( R n ) = σ ( { ( − ∞ , b 1 ] × ⋯ × ( − ∞ , b n ] : b i ∈ R } ) = σ ( { ( a 1 , b 1 ] × ⋯ × ( a n , b n ] : a i , b i ∈ R } ) . {\displaystyle {\mathcal {B}}(\mathbb {R} ^{n})=\sigma \left(\left\{(-\infty ,b_{1}]\times \cdots \times (-\infty ,b_{n}]:b_{i}\in \mathbb {R} \right\}\right)=\sigma \left(\left\{\left(a_{1},b_{1}\right]\times \cdots \times \left(a_{n},b_{n}\right]:a_{i},b_{i}\in \mathbb {R} \right\}\right).} For each of these two examples, the generating family is a π-system. === σ-algebra generated by cylinder sets === Suppose X ⊆ R T = { f : f ( t ) ∈ R , t ∈ T } {\displaystyle X\subseteq \mathbb {R} ^{\mathbb {T} }=\{f:f(t)\in \mathbb {R} ,\ t\in \mathbb {T} \}} is a set of real-valued functions. Let B ( R ) {\displaystyle {\mathcal {B}}(\mathbb {R} )} denote the Borel subsets of R . 
{\displaystyle \mathbb {R} .} A cylinder subset of X {\displaystyle X} is a finitely restricted set defined as C t 1 , … , t n ( B 1 , … , B n ) = { f ∈ X : f ( t i ) ∈ B i , 1 ≤ i ≤ n } . {\displaystyle C_{t_{1},\dots ,t_{n}}(B_{1},\dots ,B_{n})=\left\{f\in X:f(t_{i})\in B_{i},1\leq i\leq n\right\}.} Each { C t 1 , … , t n ( B 1 , … , B n ) : B i ∈ B ( R ) , 1 ≤ i ≤ n } {\displaystyle \left\{C_{t_{1},\dots ,t_{n}}\left(B_{1},\dots ,B_{n}\right):B_{i}\in {\mathcal {B}}(\mathbb {R} ),1\leq i\leq n\right\}} is a π-system that generates a σ-algebra Σ t 1 , … , t n . {\displaystyle \textstyle \Sigma _{t_{1},\dots ,t_{n}}.} Then the family of subsets F X = ⋃ n = 1 ∞ ⋃ t i ∈ T , i ≤ n Σ t 1 , … , t n {\displaystyle {\mathcal {F}}_{X}=\bigcup _{n=1}^{\infty }\bigcup _{t_{i}\in \mathbb {T} ,i\leq n}\Sigma _{t_{1},\dots ,t_{n}}} is an algebra that generates the cylinder σ-algebra for X . {\displaystyle X.} This σ-algebra is a subalgebra of the Borel σ-algebra determined by the product topology of R T {\displaystyle \mathbb {R} ^{\mathbb {T} }} restricted to X . {\displaystyle X.} An important special case is when T {\displaystyle \mathbb {T} } is the set of natural numbers and X {\displaystyle X} is a set of real-valued sequences. In this case, it suffices to consider the cylinder sets C n ( B 1 , … , B n ) = ( B 1 × ⋯ × B n × R ∞ ) ∩ X = { ( x 1 , x 2 , … , x n , x n + 1 , … ) ∈ X : x i ∈ B i , 1 ≤ i ≤ n } , {\displaystyle C_{n}\left(B_{1},\dots ,B_{n}\right)=\left(B_{1}\times \cdots \times B_{n}\times \mathbb {R} ^{\infty }\right)\cap X=\left\{\left(x_{1},x_{2},\ldots ,x_{n},x_{n+1},\ldots \right)\in X:x_{i}\in B_{i},1\leq i\leq n\right\},} for which Σ n = σ ( { C n ( B 1 , … , B n ) : B i ∈ B ( R ) , 1 ≤ i ≤ n } ) {\displaystyle \Sigma _{n}=\sigma \left(\{C_{n}\left(B_{1},\dots ,B_{n}\right):B_{i}\in {\mathcal {B}}(\mathbb {R} ),1\leq i\leq n\}\right)} is a non-decreasing sequence of σ-algebras. === Ball σ-algebra === The ball σ-algebra is the smallest σ-algebra containing all the open (and/or closed) balls. This is never larger than the Borel σ-algebra. Note that the two σ-algebras are equal for separable spaces. For some nonseparable spaces, some maps are ball measurable even though they are not Borel measurable, making the ball σ-algebra useful in the analysis of such maps. === σ-algebra generated by random variable or vector === Suppose ( Ω , Σ , P ) {\displaystyle (\Omega ,\Sigma ,\mathbb {P} )} is a probability space. If Y : Ω → R n {\displaystyle \textstyle Y:\Omega \to \mathbb {R} ^{n}} is measurable with respect to the Borel σ-algebra on R n {\displaystyle \mathbb {R} ^{n}} then Y {\displaystyle Y} is called a random variable ( n = 1 {\displaystyle n=1} ) or random vector ( n > 1 {\displaystyle n>1} ). The σ-algebra generated by Y {\displaystyle Y} is σ ( Y ) = { Y − 1 ( A ) : A ∈ B ( R n ) } . {\displaystyle \sigma (Y)=\left\{Y^{-1}(A):A\in {\mathcal {B}}\left(\mathbb {R} ^{n}\right)\right\}.} === σ-algebra generated by a stochastic process === Suppose ( Ω , Σ , P ) {\displaystyle (\Omega ,\Sigma ,\mathbb {P} )} is a probability space and R T {\displaystyle \mathbb {R} ^{\mathbb {T} }} is the set of real-valued functions on T . {\displaystyle \mathbb {T} .} If Y : Ω → X ⊆ R T {\displaystyle \textstyle Y:\Omega \to X\subseteq \mathbb {R} ^{\mathbb {T} }} is measurable with respect to the cylinder σ-algebra σ ( F X ) {\displaystyle \sigma \left({\mathcal {F}}_{X}\right)} (see above) for X {\displaystyle X} then Y {\displaystyle Y} is called a stochastic process or random process.
The σ-algebra generated by Y {\displaystyle Y} is σ ( Y ) = { Y − 1 ( A ) : A ∈ σ ( F X ) } = σ ( { Y − 1 ( A ) : A ∈ F X } ) , {\displaystyle \sigma (Y)=\left\{Y^{-1}(A):A\in \sigma \left({\mathcal {F}}_{X}\right)\right\}=\sigma \left(\left\{Y^{-1}(A):A\in {\mathcal {F}}_{X}\right\}\right),} the σ-algebra generated by the inverse images of cylinder sets. == See also == Measurable function – Kind of mathematical function Sample space – Set of all possible outcomes or results of a statistical trial or experiment Sigma-additive set function – Mapping function Sigma-ring – Family of sets closed under countable unions == References == == External links == "Algebra of sets", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Sigma Algebra from PlanetMath.
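To make the finite example from the section on generated σ-algebras concrete, the following Python sketch (an illustration added to this text, not drawn from the cited literature) closes a family of subsets of a finite set under complement and pairwise union; on a finite set the countable operations reduce to finitely many, so the fixed-point iteration terminates and returns σ(F).

def generate_sigma_algebra(X, F):
    # Close F under complement and pairwise union (closure under
    # intersection then follows by De Morgan's laws), starting from
    # the empty set and the whole space.
    X = frozenset(X)
    sigma = {frozenset(), X} | {frozenset(S) for S in F}
    while True:
        new = set(sigma)
        new |= {X - S for S in sigma}                  # complements
        new |= {S | T for S in sigma for T in sigma}   # pairwise unions
        if new == sigma:
            return sigma
        sigma = new

# sigma({1}) on X = {1, 2, 3}: expect {}, {1}, {2, 3}, {1, 2, 3}
print(sorted(map(sorted, generate_sigma_algebra({1, 2, 3}, [{1}]))))

Running the sketch reproduces the four sets of the worked example; starting from a larger generating family shows how quickly the generated σ-algebra grows toward the power set.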
Wikipedia/Sigma-algebra
In mathematics, fuzzy measure theory considers generalized measures in which the additive property is replaced by the weaker property of monotonicity. The central concept of fuzzy measure theory is the fuzzy measure (also called a capacity), which was introduced by Choquet in 1953 and independently defined by Sugeno in 1974 in the context of fuzzy integrals. There exist a number of different classes of fuzzy measures, including plausibility/belief measures, possibility/necessity measures, and probability measures, which are a subset of classical measures. == Definitions == Let X {\displaystyle \mathbf {X} } be a universe of discourse, C {\displaystyle {\mathcal {C}}} be a class of subsets of X {\displaystyle \mathbf {X} } , and E , F ∈ C {\displaystyle E,F\in {\mathcal {C}}} . A function g : C → R {\displaystyle g:{\mathcal {C}}\to \mathbb {R} } where ∅ ∈ C ⇒ g ( ∅ ) = 0 {\displaystyle \emptyset \in {\mathcal {C}}\Rightarrow g(\emptyset )=0} E ⊆ F ⇒ g ( E ) ≤ g ( F ) {\displaystyle E\subseteq F\Rightarrow g(E)\leq g(F)} is called a fuzzy measure. A fuzzy measure is called normalized or regular if g ( X ) = 1 {\displaystyle g(\mathbf {X} )=1} . == Properties of fuzzy measures == A fuzzy measure is: additive if for any E , F ∈ C {\displaystyle E,F\in {\mathcal {C}}} such that E ∩ F = ∅ {\displaystyle E\cap F=\emptyset } , we have g ( E ∪ F ) = g ( E ) + g ( F ) {\displaystyle g(E\cup F)=g(E)+g(F)} ; supermodular if for any E , F ∈ C {\displaystyle E,F\in {\mathcal {C}}} , we have g ( E ∪ F ) + g ( E ∩ F ) ≥ g ( E ) + g ( F ) {\displaystyle g(E\cup F)+g(E\cap F)\geq g(E)+g(F)} ; submodular if for any E , F ∈ C {\displaystyle E,F\in {\mathcal {C}}} , we have g ( E ∪ F ) + g ( E ∩ F ) ≤ g ( E ) + g ( F ) {\displaystyle g(E\cup F)+g(E\cap F)\leq g(E)+g(F)} ; superadditive if for any E , F ∈ C {\displaystyle E,F\in {\mathcal {C}}} such that E ∩ F = ∅ {\displaystyle E\cap F=\emptyset } , we have g ( E ∪ F ) ≥ g ( E ) + g ( F ) {\displaystyle g(E\cup F)\geq g(E)+g(F)} ; subadditive if for any E , F ∈ C {\displaystyle E,F\in {\mathcal {C}}} such that E ∩ F = ∅ {\displaystyle E\cap F=\emptyset } , we have g ( E ∪ F ) ≤ g ( E ) + g ( F ) {\displaystyle g(E\cup F)\leq g(E)+g(F)} ; symmetric if for any E , F ∈ C {\displaystyle E,F\in {\mathcal {C}}} , we have | E | = | F | {\displaystyle |E|=|F|} implies g ( E ) = g ( F ) {\displaystyle g(E)=g(F)} ; Boolean if for any E ∈ C {\displaystyle E\in {\mathcal {C}}} , we have g ( E ) = 0 {\displaystyle g(E)=0} or g ( E ) = 1 {\displaystyle g(E)=1} . Understanding the properties of fuzzy measures is useful in applications. When a fuzzy measure is used to define a function such as the Sugeno integral or Choquet integral, these properties will be crucial in understanding the function's behavior. For instance, the Choquet integral with respect to an additive fuzzy measure reduces to the Lebesgue integral. In discrete cases, a symmetric fuzzy measure will result in the ordered weighted averaging (OWA) operator. Submodular fuzzy measures result in convex functions, while supermodular fuzzy measures result in concave functions when used to define a Choquet integral. == Möbius representation == Let g be a fuzzy measure. The Möbius representation of g is given by the set function M, where for every E ⊆ X {\displaystyle E\subseteq X} , M ( E ) = ∑ F ⊆ E ( − 1 ) | E ∖ F | g ( F ) . {\displaystyle M(E)=\sum _{F\subseteq E}(-1)^{|E\backslash F|}g(F).} The equivalent axioms in Möbius representation are: M ( ∅ ) = 0 {\displaystyle M(\emptyset )=0} .
∑ F ⊆ E | i ∈ F M ( F ) ≥ 0 {\displaystyle \sum _{F\subseteq E|i\in F}M(F)\geq 0} , for all E ⊆ X {\displaystyle E\subseteq \mathbf {X} } and all i ∈ E {\displaystyle i\in E} . A fuzzy measure in Möbius representation M is called normalized if ∑ E ⊆ X M ( E ) = 1. {\displaystyle \sum _{E\subseteq \mathbf {X} }M(E)=1.} The Möbius representation can be used to give an indication of which subsets of X interact with one another. For instance, an additive fuzzy measure has Möbius values all equal to zero except for singletons. The fuzzy measure g in standard representation can be recovered from the Möbius form using the Zeta transform: g ( E ) = ∑ F ⊆ E M ( F ) , ∀ E ⊆ X . {\displaystyle g(E)=\sum _{F\subseteq E}M(F),\forall E\subseteq \mathbf {X} .} == Simplification assumptions for fuzzy measures == Fuzzy measures are defined on a semiring of sets or monotone class, which may be as granular as the power set of X, and even in discrete cases the number of variables can be as large as 2 | X | {\displaystyle 2^{|X|}} . For this reason, in the context of multi-criteria decision analysis and other disciplines, simplification assumptions on the fuzzy measure have been introduced so that it is less computationally expensive to determine and use. For instance, when it is assumed the fuzzy measure is additive, it will hold that g ( E ) = ∑ i ∈ E g ( { i } ) {\displaystyle g(E)=\sum _{i\in E}g(\{i\})} and the values of the fuzzy measure can be evaluated from the values on X. Similarly, a symmetric fuzzy measure is defined uniquely by |X| values. Two important fuzzy measures that can be used are the Sugeno- or λ {\displaystyle \lambda } -fuzzy measure and k-additive measures, introduced by Sugeno and Grabisch respectively. === Sugeno λ-measure === The Sugeno λ {\displaystyle \lambda } -measure is a special case of fuzzy measures defined iteratively. It has the following definition: ==== Definition ==== Let X = { x 1 , … , x n } {\displaystyle \mathbf {X} =\left\lbrace x_{1},\dots ,x_{n}\right\rbrace } be a finite set and let λ ∈ ( − 1 , + ∞ ) {\displaystyle \lambda \in (-1,+\infty )} . A Sugeno λ {\displaystyle \lambda } -measure is a function g : 2 X → [ 0 , 1 ] {\displaystyle g:2^{X}\to [0,1]} such that g ( X ) = 1 {\displaystyle g(X)=1} . if A , B ⊆ X {\displaystyle A,B\subseteq \mathbf {X} } (alternatively A , B ∈ 2 X {\displaystyle A,B\in 2^{\mathbf {X} }} ) with A ∩ B = ∅ {\displaystyle A\cap B=\emptyset } then g ( A ∪ B ) = g ( A ) + g ( B ) + λ g ( A ) g ( B ) {\displaystyle g(A\cup B)=g(A)+g(B)+\lambda g(A)g(B)} . As a convention, the value of g at a singleton set { x i } {\displaystyle \left\lbrace x_{i}\right\rbrace } is called a density and is denoted by g i = g ( { x i } ) {\displaystyle g_{i}=g(\left\lbrace x_{i}\right\rbrace )} . In addition, we have that λ {\displaystyle \lambda } satisfies the property λ + 1 = ∏ i = 1 n ( 1 + λ g i ) {\displaystyle \lambda +1=\prod _{i=1}^{n}(1+\lambda g_{i})} . Tahani and Keller as well as Wang and Klir have shown that once the densities are known, it is possible to use the previous polynomial to obtain the value of λ {\displaystyle \lambda } uniquely. === k-additive fuzzy measure === The k-additive fuzzy measure limits the interaction between the subsets E ⊆ X {\displaystyle E\subseteq X} to size | E | ≤ k {\displaystyle |E|\leq k} . This drastically reduces the number of variables needed to define the fuzzy measure, and as k can be anything from 1 (in which case the fuzzy measure is additive) to | X | {\displaystyle |\mathbf {X} |} , it allows for a compromise between modelling ability and simplicity.
==== Definition ==== A discrete fuzzy measure g on a set X is called k-additive ( 1 ≤ k ≤ | X | {\displaystyle 1\leq k\leq |\mathbf {X} |} ) if its Möbius representation verifies M ( E ) = 0 {\displaystyle M(E)=0} whenever | E | > k {\displaystyle |E|>k} for any E ⊆ X {\displaystyle E\subseteq \mathbf {X} } , and there exists a subset F with k elements such that M ( F ) ≠ 0 {\displaystyle M(F)\neq 0} . == Shapley and interaction indices == In game theory, the Shapley value or Shapley index is used to indicate the contribution of each player to a game. Shapley values can be calculated for fuzzy measures in order to give some indication of the importance of each singleton. In the case of additive fuzzy measures, the Shapley value of each singleton is simply the fuzzy measure of that singleton. For a given fuzzy measure g, and | X | = n {\displaystyle |\mathbf {X} |=n} , the Shapley index for every i ∈ X {\displaystyle i\in \mathbf {X} } is: ϕ ( i ) = ∑ E ⊆ X ∖ { i } ( n − | E | − 1 ) ! | E | ! n ! [ g ( E ∪ { i } ) − g ( E ) ] . {\displaystyle \phi (i)=\sum _{E\subseteq \mathbf {X} \backslash \{i\}}{\frac {(n-|E|-1)!|E|!}{n!}}[g(E\cup \{i\})-g(E)].} The Shapley value is the vector ϕ ( g ) = ( ϕ ( 1 ) , … , ϕ ( n ) ) . {\displaystyle \mathbf {\phi } (g)=(\phi (1),\dots ,\phi (n)).} == See also == Probability theory Possibility theory == References == == Further reading == Beliakov, Pradera and Calvo, Aggregation Functions: A Guide for Practitioners, Springer, New York 2007. Wang, Zhenyuan, and George J. Klir, Fuzzy Measure Theory, Plenum Press, New York, 1991. == External links == Fuzzy Measure Theory at Fuzzy Image Processing Archived 2019-06-30 at the Wayback Machine
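The Sugeno λ-measure defined above is straightforward to compute with numerically. The Python sketch below is an illustration with made-up densities, not an implementation from the cited references: it finds the unique root λ > −1, λ ≠ 0 of the polynomial λ + 1 = ∏(1 + λ g_i) by bisection, then evaluates g on arbitrary subsets via the closed form g(A) = (∏_{i∈A}(1 + λ g_i) − 1)/λ, which follows by iterating the union rule.

import math

def solve_lambda(densities, iterations=200):
    # Root of f(lam) = prod(1 + lam*g_i) - (1 + lam).
    # Assumes at least two densities with 0 < g_i < 1.
    s = sum(densities)
    if abs(s - 1.0) < 1e-12:
        return 0.0                        # additive case: lambda = 0
    f = lambda lam: math.prod(1.0 + lam * gi for gi in densities) - (1.0 + lam)
    if s < 1.0:                           # unique root in (0, infinity)
        lo, hi = 1e-9, 1.0
        while f(hi) < 0.0:
            hi *= 2.0
    else:                                 # unique root in (-1, 0)
        lo, hi = -1.0 + 1e-9, -1e-9
    for _ in range(iterations):           # bisection on the sign change
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_measure(indices, densities, lam):
    # g(A) from the closed form; indices selects the elements of A
    if lam == 0.0:
        return sum(densities[i] for i in indices)
    prod = math.prod(1.0 + lam * densities[i] for i in indices)
    return (prod - 1.0) / lam

g = [0.2, 0.3, 0.1]                       # hypothetical densities g_i
lam = solve_lambda(g)
print(lam)                                # positive, since the densities sum to less than 1
print(sugeno_measure(range(3), g, lam))   # g(X) = 1, by the defining polynomial

Because the densities here sum to less than 1, the measure is superadditive and λ comes out positive; densities summing to more than 1 would give a λ in (−1, 0) and a subadditive measure.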
Wikipedia/Fuzzy_measure_theory
In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable X {\displaystyle X} , or just distribution function of X {\displaystyle X} , evaluated at x {\displaystyle x} , is the probability that X {\displaystyle X} will take a value less than or equal to x {\displaystyle x} . Every probability distribution supported on the real numbers, discrete or "mixed" as well as continuous, is uniquely identified by a right-continuous monotone increasing function (a càdlàg function) F : R → [ 0 , 1 ] {\displaystyle F\colon \mathbb {R} \rightarrow [0,1]} satisfying lim x → − ∞ F ( x ) = 0 {\displaystyle \lim _{x\rightarrow -\infty }F(x)=0} and lim x → ∞ F ( x ) = 1 {\displaystyle \lim _{x\rightarrow \infty }F(x)=1} . In the case of a scalar continuous distribution, it gives the area under the probability density function from negative infinity to x {\displaystyle x} . Cumulative distribution functions are also used to specify the distribution of multivariate random variables. == Definition == The cumulative distribution function of a real-valued random variable X {\displaystyle X} is the function given by: 77  F X ( x ) = P ⁡ ( X ≤ x ) , {\displaystyle F_{X}(x)=\operatorname {P} (X\leq x),} where the right-hand side represents the probability that the random variable X {\displaystyle X} takes on a value less than or equal to x {\displaystyle x} . The probability that X {\displaystyle X} lies in the semi-closed interval ( a , b ] {\displaystyle (a,b]} , where a < b {\displaystyle a<b} , is therefore: 84  P ⁡ ( a < X ≤ b ) = F X ( b ) − F X ( a ) . {\displaystyle \operatorname {P} (a<X\leq b)=F_{X}(b)-F_{X}(a).} In the definition above, the "less than or equal to" sign, "≤", is a convention, not a universally used one (e.g. Hungarian literature uses "<"), but the distinction is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation. If treating several random variables X , Y , … {\displaystyle X,Y,\ldots } etc., the corresponding letters are used as subscripts while, if treating only one, the subscript is usually omitted. It is conventional to use a capital F {\displaystyle F} for a cumulative distribution function, in contrast to the lower-case f {\displaystyle f} used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution uses Φ {\displaystyle \Phi } and ϕ {\displaystyle \phi } instead of F {\displaystyle F} and f {\displaystyle f} , respectively. The probability density function of a continuous random variable can be determined from the cumulative distribution function by differentiating using the Fundamental Theorem of Calculus; i.e. given F ( x ) {\displaystyle F(x)} , f ( x ) = d F ( x ) d x {\displaystyle f(x)={\frac {dF(x)}{dx}}} as long as the derivative exists. The CDF of a continuous random variable X {\displaystyle X} can be expressed as the integral of its probability density function f X {\displaystyle f_{X}} as follows: 86  F X ( x ) = ∫ − ∞ x f X ( t ) d t . {\displaystyle F_{X}(x)=\int _{-\infty }^{x}f_{X}(t)\,dt.} In the case of a random variable X {\displaystyle X} whose distribution has a discrete component at a value b {\displaystyle b} , P ⁡ ( X = b ) = F X ( b ) − lim x → b − F X ( x ) .
{\displaystyle \operatorname {P} (X=b)=F_{X}(b)-\lim _{x\to b^{-}}F_{X}(x).} If F X {\displaystyle F_{X}} is continuous at b {\displaystyle b} , this equals zero and there is no discrete component at b {\displaystyle b} . == Properties == Every cumulative distribution function F X {\displaystyle F_{X}} is non-decreasing: p. 78  and right-continuous,: p. 79  which makes it a càdlàg function. Furthermore, lim x → − ∞ F X ( x ) = 0 , lim x → + ∞ F X ( x ) = 1. {\displaystyle \lim _{x\to -\infty }F_{X}(x)=0,\quad \lim _{x\to +\infty }F_{X}(x)=1.} Every function with these three properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable. If X {\displaystyle X} is a purely discrete random variable, then it attains values x 1 , x 2 , … {\displaystyle x_{1},x_{2},\ldots } with probability p i = p ( x i ) {\displaystyle p_{i}=p(x_{i})} , and the CDF of X {\displaystyle X} will be discontinuous at the points x i {\displaystyle x_{i}} : F X ( x ) = P ⁡ ( X ≤ x ) = ∑ x i ≤ x P ⁡ ( X = x i ) = ∑ x i ≤ x p ( x i ) . {\displaystyle F_{X}(x)=\operatorname {P} (X\leq x)=\sum _{x_{i}\leq x}\operatorname {P} (X=x_{i})=\sum _{x_{i}\leq x}p(x_{i}).} If the CDF F X {\displaystyle F_{X}} of a real valued random variable X {\displaystyle X} is continuous, then X {\displaystyle X} is a continuous random variable; if furthermore F X {\displaystyle F_{X}} is absolutely continuous, then there exists a Lebesgue-integrable function f X ( x ) {\displaystyle f_{X}(x)} such that F X ( b ) − F X ( a ) = P ⁡ ( a < X ≤ b ) = ∫ a b f X ( x ) d x {\displaystyle F_{X}(b)-F_{X}(a)=\operatorname {P} (a<X\leq b)=\int _{a}^{b}f_{X}(x)\,dx} for all real numbers a {\displaystyle a} and b {\displaystyle b} . The function f X {\displaystyle f_{X}} is equal to the derivative of F X {\displaystyle F_{X}} almost everywhere, and it is called the probability density function of the distribution of X {\displaystyle X} . If X {\displaystyle X} has finite L1-norm, that is, the expectation of | X | {\displaystyle |X|} is finite, then the expectation is given by the Riemann–Stieltjes integral E [ X ] = ∫ − ∞ ∞ t d F X ( t ) {\displaystyle \mathbb {E} [X]=\int _{-\infty }^{\infty }t\,dF_{X}(t)} and for any x ≥ 0 {\displaystyle x\geq 0} , x ( 1 − F X ( x ) ) ≤ ∫ x ∞ t d F X ( t ) {\displaystyle x(1-F_{X}(x))\leq \int _{x}^{\infty }t\,dF_{X}(t)} as well as x F X ( − x ) ≤ ∫ − ∞ − x ( − t ) d F X ( t ) {\displaystyle xF_{X}(-x)\leq \int _{-\infty }^{-x}(-t)\,dF_{X}(t)} as shown in the diagram (consider the areas of the two red rectangles and their extensions to the right or left up to the graph of F X {\displaystyle F_{X}} ). In particular, we have lim x → − ∞ x F X ( x ) = 0 , lim x → + ∞ x ( 1 − F X ( x ) ) = 0. {\displaystyle \lim _{x\to -\infty }xF_{X}(x)=0,\quad \lim _{x\to +\infty }x(1-F_{X}(x))=0.} In addition, the (finite) expected value of the real-valued random variable X {\displaystyle X} can be defined on the graph of its cumulative distribution function as illustrated by the drawing in the definition of expected value for arbitrary real-valued random variables. == Examples == As an example, suppose X {\displaystyle X} is uniformly distributed on the unit interval [ 0 , 1 ] {\displaystyle [0,1]} . 
Then the CDF of X {\displaystyle X} is given by F X ( x ) = { 0 : x < 0 x : 0 ≤ x ≤ 1 1 : x > 1 {\displaystyle F_{X}(x)={\begin{cases}0&:\ x<0\\x&:\ 0\leq x\leq 1\\1&:\ x>1\end{cases}}} Suppose instead that X {\displaystyle X} takes only the discrete values 0 and 1, with equal probability. Then the CDF of X {\displaystyle X} is given by F X ( x ) = { 0 : x < 0 1 / 2 : 0 ≤ x < 1 1 : x ≥ 1 {\displaystyle F_{X}(x)={\begin{cases}0&:\ x<0\\1/2&:\ 0\leq x<1\\1&:\ x\geq 1\end{cases}}} Suppose X {\displaystyle X} is exponentially distributed. Then the CDF of X {\displaystyle X} is given by F X ( x ; λ ) = { 1 − e − λ x x ≥ 0 , 0 x < 0. {\displaystyle F_{X}(x;\lambda )={\begin{cases}1-e^{-\lambda x}&x\geq 0,\\0&x<0.\end{cases}}} Here λ > 0 is the parameter of the distribution, often called the rate parameter. Suppose X {\displaystyle X} is normally distributed. Then the CDF of X {\displaystyle X} is given by F ( t ; μ , σ ) = 1 σ 2 π ∫ − ∞ t exp ⁡ ( − ( x − μ ) 2 2 σ 2 ) d x . {\displaystyle F(t;\mu ,\sigma )={\frac {1}{\sigma {\sqrt {2\pi }}}}\int _{-\infty }^{t}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)\,dx.} Here the parameter μ {\displaystyle \mu } is the mean or expectation of the distribution, and σ {\displaystyle \sigma } is its standard deviation. A table of the CDF of the standard normal distribution is often used in statistical applications, where it is named the standard normal table, the unit normal table, or the Z table. Suppose X {\displaystyle X} is binomially distributed. Then the CDF of X {\displaystyle X} is given by F ( k ; n , p ) = Pr ( X ≤ k ) = ∑ i = 0 ⌊ k ⌋ ( n i ) p i ( 1 − p ) n − i {\displaystyle F(k;n,p)=\Pr(X\leq k)=\sum _{i=0}^{\lfloor k\rfloor }{n \choose i}p^{i}(1-p)^{n-i}} Here p {\displaystyle p} is the probability of success and the function denotes the discrete probability distribution of the number of successes in a sequence of n {\displaystyle n} independent experiments, and ⌊ k ⌋ {\displaystyle \lfloor k\rfloor } is the "floor" under k {\displaystyle k} , i.e. the greatest integer less than or equal to k {\displaystyle k} . == Derived functions == === Complementary cumulative distribution function (tail distribution) === Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf) or simply the tail distribution or exceedance, and is defined as F ¯ X ( x ) = P ⁡ ( X > x ) = 1 − F X ( x ) . {\displaystyle {\bar {F}}_{X}(x)=\operatorname {P} (X>x)=1-F_{X}(x).} This has applications in statistical hypothesis testing, for example, because the one-sided p-value is the probability of observing a test statistic at least as extreme as the one observed. Thus, provided that the test statistic, T, has a continuous distribution, the one-sided p-value is simply given by the ccdf: for an observed value t {\displaystyle t} of the test statistic p = P ⁡ ( T ≥ t ) = P ⁡ ( T > t ) = 1 − F T ( t ) . {\displaystyle p=\operatorname {P} (T\geq t)=\operatorname {P} (T>t)=1-F_{T}(t).} In survival analysis, F ¯ X ( x ) {\displaystyle {\bar {F}}_{X}(x)} is called the survival function and denoted S ( x ) {\displaystyle S(x)} , while the term reliability function is common in engineering. Properties For a non-negative continuous random variable having an expectation, Markov's inequality states that F ¯ X ( x ) ≤ E ⁡ ( X ) x .
{\displaystyle {\bar {F}}_{X}(x)\leq {\frac {\operatorname {E} (X)}{x}}.} As x → ∞ , F ¯ X ( x ) → 0 {\displaystyle x\to \infty ,{\bar {F}}_{X}(x)\to 0} , and in fact F ¯ X ( x ) = o ( 1 / x ) {\displaystyle {\bar {F}}_{X}(x)=o(1/x)} provided that E ⁡ ( X ) {\displaystyle \operatorname {E} (X)} is finite. Proof: Assuming X {\displaystyle X} has a density function f X {\displaystyle f_{X}} , for any c > 0 {\displaystyle c>0} E ⁡ ( X ) = ∫ 0 ∞ x f X ( x ) d x ≥ ∫ 0 c x f X ( x ) d x + c ∫ c ∞ f X ( x ) d x {\displaystyle \operatorname {E} (X)=\int _{0}^{\infty }xf_{X}(x)\,dx\geq \int _{0}^{c}xf_{X}(x)\,dx+c\int _{c}^{\infty }f_{X}(x)\,dx} Then, on recognizing F ¯ X ( c ) = ∫ c ∞ f X ( x ) d x {\displaystyle {\bar {F}}_{X}(c)=\int _{c}^{\infty }f_{X}(x)\,dx} and rearranging terms, 0 ≤ c F ¯ X ( c ) ≤ E ⁡ ( X ) − ∫ 0 c x f X ( x ) d x → 0 as c → ∞ {\displaystyle 0\leq c{\bar {F}}_{X}(c)\leq \operatorname {E} (X)-\int _{0}^{c}xf_{X}(x)\,dx\to 0{\text{ as }}c\to \infty } as claimed. For a random variable having an expectation, E ⁡ ( X ) = ∫ 0 ∞ F ¯ X ( x ) d x − ∫ − ∞ 0 F X ( x ) d x {\displaystyle \operatorname {E} (X)=\int _{0}^{\infty }{\bar {F}}_{X}(x)\,dx-\int _{-\infty }^{0}F_{X}(x)\,dx} and for a non-negative random variable the second term is 0. If the random variable can only take non-negative integer values, this is equivalent to E ⁡ ( X ) = ∑ n = 0 ∞ F ¯ X ( n ) . {\displaystyle \operatorname {E} (X)=\sum _{n=0}^{\infty }{\bar {F}}_{X}(n).} === Folded cumulative distribution === While the plot of a cumulative distribution F {\displaystyle F} often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over, that is F fold ( x ) = F ( x ) 1 { F ( x ) ≤ 0.5 } + ( 1 − F ( x ) ) 1 { F ( x ) > 0.5 } {\displaystyle F_{\text{fold}}(x)=F(x)1_{\{F(x)\leq 0.5\}}+(1-F(x))1_{\{F(x)>0.5\}}} where 1 { A } {\displaystyle 1_{\{A\}}} denotes the indicator function and the second summand is the survivor function, thus using two scales, one for the upslope and another for the downslope. This form of illustration emphasises the median, dispersion (specifically, the mean absolute deviation from the median) and skewness of the distribution or of the empirical results. === Inverse distribution function (quantile function) === If the CDF F is strictly increasing and continuous then F − 1 ( p ) , p ∈ [ 0 , 1 ] , {\displaystyle F^{-1}(p),p\in [0,1],} is the unique real number x {\displaystyle x} such that F ( x ) = p {\displaystyle F(x)=p} . This defines the inverse distribution function or quantile function. Some distributions do not have a unique inverse (for example if f X ( x ) = 0 {\displaystyle f_{X}(x)=0} for all a < x < b {\displaystyle a<x<b} , causing F X {\displaystyle F_{X}} to be constant). In this case, one may use the generalized inverse distribution function, which is defined as F − 1 ( p ) = inf { x ∈ R : F ( x ) ≥ p } , ∀ p ∈ [ 0 , 1 ] . {\displaystyle F^{-1}(p)=\inf\{x\in \mathbb {R} :F(x)\geq p\},\quad \forall p\in [0,1].} Example 1: The median is F − 1 ( 0.5 ) {\displaystyle F^{-1}(0.5)} . Example 2: Put τ = F − 1 ( 0.95 ) {\displaystyle \tau =F^{-1}(0.95)} . Then we call τ {\displaystyle \tau } the 95th percentile. 
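The generalized inverse can be evaluated directly from its definition. The following Python sketch (an illustration; the grid search and the two-point distribution are choices made here, not taken from the article's references) computes F−1(p) = inf{x : F(x) ≥ p} for the discrete 0/1 example given earlier, where the classical inverse does not exist because the CDF is flat and then jumps.

def cdf_two_point(x):
    # CDF of a random variable taking the values 0 and 1 with probability 1/2 each
    if x < 0:
        return 0.0
    if x < 1:
        return 0.5
    return 1.0

def generalized_inverse(F, p, xs):
    # inf{x : F(x) >= p}, searched over an increasing grid xs; exact here
    # because the grid contains the jump points 0 and 1
    for x in xs:
        if F(x) >= p:
            return x
    raise ValueError("p is not reached on the given grid")

xs = [i / 100 for i in range(-100, 201)]             # grid on [-1, 2]
print(generalized_inverse(cdf_two_point, 0.5, xs))   # 0.0, the median F^{-1}(0.5)
print(generalized_inverse(cdf_two_point, 0.75, xs))  # 1.0

The first call returns the median of the two-point distribution, and the second shows how any p above 0.5 is carried to the jump at 1, exactly as the infimum-based definition prescribes.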
Some useful properties of the inverse cdf (which are also preserved in the definition of the generalized inverse distribution function) are: F − 1 {\displaystyle F^{-1}} is nondecreasing F − 1 ( F ( x ) ) ≤ x {\displaystyle F^{-1}(F(x))\leq x} F ( F − 1 ( p ) ) ≥ p {\displaystyle F(F^{-1}(p))\geq p} F − 1 ( p ) ≤ x {\displaystyle F^{-1}(p)\leq x} if and only if p ≤ F ( x ) {\displaystyle p\leq F(x)} If Y {\displaystyle Y} has a U [ 0 , 1 ] {\displaystyle U[0,1]} distribution then F − 1 ( Y ) {\displaystyle F^{-1}(Y)} is distributed as F {\displaystyle F} . This is used in random number generation using the inverse transform sampling method. If { X α } {\displaystyle \{X_{\alpha }\}} is a collection of independent F {\displaystyle F} -distributed random variables defined on the same sample space, then there exist random variables Y α {\displaystyle Y_{\alpha }} such that Y α {\displaystyle Y_{\alpha }} is distributed as U [ 0 , 1 ] {\displaystyle U[0,1]} and F − 1 ( Y α ) = X α {\displaystyle F^{-1}(Y_{\alpha })=X_{\alpha }} with probability 1 for all α {\displaystyle \alpha } . The inverse of the cdf can be used to translate results obtained for the uniform distribution to other distributions. === Empirical distribution function === The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function. == Multivariate case == === Definition for two random variables === When dealing simultaneously with more than one random variable the joint cumulative distribution function can also be defined. For example, for a pair of random variables X , Y {\displaystyle X,Y} , the joint CDF F X Y {\displaystyle F_{XY}} is given by: p. 89  F X Y ( x , y ) = P ⁡ ( X ≤ x , Y ≤ y ) , {\displaystyle F_{XY}(x,y)=\operatorname {P} (X\leq x,Y\leq y),} where the right-hand side represents the probability that the random variable X {\displaystyle X} takes on a value less than or equal to x {\displaystyle x} and that Y {\displaystyle Y} takes on a value less than or equal to y {\displaystyle y} . Example of joint cumulative distribution function: For two continuous variables X and Y: Pr ( a < X < b and c < Y < d ) = ∫ a b ∫ c d f ( x , y ) d y d x ; {\displaystyle \Pr(a<X<b{\text{ and }}c<Y<d)=\int _{a}^{b}\int _{c}^{d}f(x,y)\,dy\,dx;} For two discrete random variables, it is beneficial to generate a table of probabilities and address the cumulative probability for each potential range of X and Y. Here is an example: given the joint probability mass function in tabular form, determine the joint cumulative distribution function.
Solution: using the given table of probabilities for each potential range of X and Y, the joint cumulative distribution function may be constructed in tabular form: === Definition for more than two random variables === For N {\displaystyle N} random variables X 1 , … , X N {\displaystyle X_{1},\ldots ,X_{N}} , the joint CDF F X 1 , … , X N {\displaystyle F_{X_{1},\ldots ,X_{N}}} is given by F X 1 , … , X N ( x 1 , … , x N ) = P ⁡ ( X 1 ≤ x 1 , … , X N ≤ x N ) . {\displaystyle F_{X_{1},\ldots ,X_{N}}(x_{1},\ldots ,x_{N})=\operatorname {P} (X_{1}\leq x_{1},\ldots ,X_{N}\leq x_{N}).} Interpreting the N {\displaystyle N} random variables as a random vector X = ( X 1 , … , X N ) T {\displaystyle \mathbf {X} =(X_{1},\ldots ,X_{N})^{T}} yields a shorter notation: F X ( x ) = P ⁡ ( X 1 ≤ x 1 , … , X N ≤ x N ) {\displaystyle F_{\mathbf {X} }(\mathbf {x} )=\operatorname {P} (X_{1}\leq x_{1},\ldots ,X_{N}\leq x_{N})} === Properties === Every multivariate CDF is: Monotonically non-decreasing for each of its variables, Right-continuous in each of its variables, 0 ≤ F X 1 … X n ( x 1 , … , x n ) ≤ 1 , {\displaystyle 0\leq F_{X_{1}\ldots X_{n}}(x_{1},\ldots ,x_{n})\leq 1,} lim x 1 , … , x n → + ∞ F X 1 … X n ( x 1 , … , x n ) = 1 {\displaystyle \lim _{x_{1},\ldots ,x_{n}\to +\infty }F_{X_{1}\ldots X_{n}}(x_{1},\ldots ,x_{n})=1} and lim x i → − ∞ F X 1 … X n ( x 1 , … , x n ) = 0 , {\displaystyle \lim _{x_{i}\to -\infty }F_{X_{1}\ldots X_{n}}(x_{1},\ldots ,x_{n})=0,} for all i. Not every function satisfying the above four properties is a multivariate CDF, unlike in the single dimension case. For example, let F ( x , y ) = 0 {\displaystyle F(x,y)=0} for x < 0 {\displaystyle x<0} or x + y < 1 {\displaystyle x+y<1} or y < 0 {\displaystyle y<0} and let F ( x , y ) = 1 {\displaystyle F(x,y)=1} otherwise. It is easy to see that the above conditions are met, and yet F {\displaystyle F} is not a CDF since if it was, then P ⁡ ( 1 3 < X ≤ 1 , 1 3 < Y ≤ 1 ) = − 1 {\textstyle \operatorname {P} \left({\frac {1}{3}}<X\leq 1,{\frac {1}{3}}<Y\leq 1\right)=-1} as explained below. The probability that a point belongs to a hyperrectangle is analogous to the 1-dimensional case: F X 1 , X 2 ( a , c ) + F X 1 , X 2 ( b , d ) − F X 1 , X 2 ( a , d ) − F X 1 , X 2 ( b , c ) = P ⁡ ( a < X 1 ≤ b , c < X 2 ≤ d ) = ∫ a b ∫ c d f X 1 , X 2 ( x 1 , x 2 ) d x 2 d x 1 {\displaystyle F_{X_{1},X_{2}}(a,c)+F_{X_{1},X_{2}}(b,d)-F_{X_{1},X_{2}}(a,d)-F_{X_{1},X_{2}}(b,c)=\operatorname {P} (a<X_{1}\leq b,c<X_{2}\leq d)=\int _{a}^{b}\int _{c}^{d}f_{X_{1},X_{2}}(x_{1},x_{2})\,dx_{2}\,dx_{1}} where the last equality holds when the pair has a joint density f X 1 , X 2 . {\displaystyle f_{X_{1},X_{2}}.} == Complex case == === Complex random variable === The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form P ( Z ≤ 1 + 2 i ) {\displaystyle P(Z\leq 1+2i)} make no sense. However, expressions of the form P ( ℜ ( Z ) ≤ 1 , ℑ ( Z ) ≤ 3 ) {\displaystyle P(\Re {(Z)}\leq 1,\Im {(Z)}\leq 3)} make sense. Therefore, we define the cumulative distribution of a complex random variable via the joint distribution of its real and imaginary parts: F Z ( z ) = F ℜ ( Z ) , ℑ ( Z ) ( ℜ ( z ) , ℑ ( z ) ) = P ( ℜ ( Z ) ≤ ℜ ( z ) , ℑ ( Z ) ≤ ℑ ( z ) ) .
{\displaystyle F_{Z}(z)=F_{\Re {(Z)},\Im {(Z)}}(\Re {(z)},\Im {(z)})=P(\Re {(Z)}\leq \Re {(z)},\Im {(Z)}\leq \Im {(z)}).} === Complex random vector === Generalizing the multivariate definition above yields F Z ( z ) = F ℜ ( Z 1 ) , ℑ ( Z 1 ) , … , ℜ ( Z n ) , ℑ ( Z n ) ( ℜ ( z 1 ) , ℑ ( z 1 ) , … , ℜ ( z n ) , ℑ ( z n ) ) = P ⁡ ( ℜ ( Z 1 ) ≤ ℜ ( z 1 ) , ℑ ( Z 1 ) ≤ ℑ ( z 1 ) , … , ℜ ( Z n ) ≤ ℜ ( z n ) , ℑ ( Z n ) ≤ ℑ ( z n ) ) {\displaystyle F_{\mathbf {Z} }(\mathbf {z} )=F_{\Re {(Z_{1})},\Im {(Z_{1})},\ldots ,\Re {(Z_{n})},\Im {(Z_{n})}}(\Re {(z_{1})},\Im {(z_{1})},\ldots ,\Re {(z_{n})},\Im {(z_{n})})=\operatorname {P} (\Re {(Z_{1})}\leq \Re {(z_{1})},\Im {(Z_{1})}\leq \Im {(z_{1})},\ldots ,\Re {(Z_{n})}\leq \Re {(z_{n})},\Im {(Z_{n})}\leq \Im {(z_{n})})} as the definition of the CDF of a complex random vector Z = ( Z 1 , … , Z N ) T {\displaystyle \mathbf {Z} =(Z_{1},\ldots ,Z_{N})^{T}} . == Use in statistical analysis == The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests. Such tests can assess whether there is evidence against a sample of data having arisen from a given distribution, or evidence against two samples of data having arisen from the same (unknown) population distribution. === Kolmogorov–Smirnov and Kuiper's tests === The Kolmogorov–Smirnov test is based on cumulative distribution functions and can be used to test whether two empirical distributions are different or whether an empirical distribution is different from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic, as in day of the week. For instance, Kuiper's test might be used to see if the number of tornadoes varies during the year or if sales of a product vary by day of the week or day of the month. == See also == Descriptive statistics Distribution fitting Ogive (statistics) == References == == External links == Media related to Cumulative distribution functions at Wikimedia Commons
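The empirical distribution function and Kolmogorov–Smirnov statistic discussed above can be computed in a few lines. The Python sketch below is an illustration with made-up sample values, not a substitute for a proper statistical library: it builds the ECDF of a sample and measures its largest absolute deviation from a hypothesized CDF, here a standard exponential.

import math

def ecdf(sample):
    # empirical distribution function: x -> (number of sample points <= x) / n
    pts = sorted(sample)
    n = len(pts)
    return lambda x: sum(1 for p in pts if p <= x) / n

def ks_statistic(sample, F):
    # sup_x |F_n(x) - F(x)|; for a step ECDF the supremum is attained at the
    # sample points, approached from the left or hit exactly
    pts = sorted(sample)
    n = len(pts)
    d = 0.0
    for i, x in enumerate(pts, start=1):
        d = max(d, abs(i / n - F(x)), abs((i - 1) / n - F(x)))
    return d

F_exp = lambda x: 1.0 - math.exp(-x) if x >= 0 else 0.0   # hypothesized CDF
sample = [0.1, 0.4, 0.5, 0.9, 1.7, 2.3]                   # made-up observations
print(ecdf(sample)(1.0))          # fraction of the sample at or below 1.0
print(ks_statistic(sample, F_exp))

Comparing the statistic to the critical values of the Kolmogorov distribution (or simply using a library test) would then complete a goodness-of-fit test of the kind described in the section above.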
Wikipedia/Cumulative_distribution_function
In probability theory, a tree diagram may be used to represent a probability space. A tree diagram may represent a series of independent events (such as a set of coin flips) or conditional probabilities (such as drawing cards from a deck, without replacing the cards). Each node on the diagram represents an event and is associated with the probability of that event. The root node represents the certain event and therefore has probability 1. Each set of sibling nodes represents an exclusive and exhaustive partition of the parent event. The probability associated with a node is the chance of that event occurring after the parent event occurs. The probability that the series of events leading to a particular node will occur is equal to the product of the probabilities of that node and of all its ancestors; for example, for two flips of a fair coin, the node "heads, then heads" has probability 0.5 × 0.5 = 0.25 (see the computational sketch below). == See also == Decision tree Markov chain == Notes == == References == Charles Henry Brase, Corrinne Pellillo Brase: Understanding Basic Statistics. Cengage Learning, 2012, ISBN 9781133713890, pp. 205–208 (online copy at Google) == External links == Media related to Probability trees at Wikimedia Commons tree diagrams - examples and applications Tree Diagrams
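The product rule described above is straightforward to mechanize. In the Python sketch below (an illustration, not taken from the references), a tree is stored as nested tuples of (label, probability given the parent, children); multiplying probabilities down each root-to-leaf path yields the distribution over outcomes, and since sibling probabilities sum to 1 at every node, the leaf probabilities sum to 1 as well.

# A probability tree for two flips of a fair coin.
tree = ("start", 1.0, [
    ("H", 0.5, [("HH", 0.5, []), ("HT", 0.5, [])]),
    ("T", 0.5, [("TH", 0.5, []), ("TT", 0.5, [])]),
])

def leaf_probabilities(node, acc=1.0):
    label, p, children = node
    prob = acc * p          # product of this node's and its ancestors' probabilities
    if not children:
        return {label: prob}
    out = {}
    for child in children:
        out.update(leaf_probabilities(child, prob))
    return out

print(leaf_probabilities(tree))   # {'HH': 0.25, 'HT': 0.25, 'TH': 0.25, 'TT': 0.25}

The same structure handles conditional probabilities (for instance, drawing cards without replacement) simply by giving siblings unequal probabilities that still sum to 1.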
Wikipedia/Tree_diagram_(probability_theory)
A likelihood function (often simply called the likelihood) measures how well a statistical model explains observed data by calculating the probability of seeing that data under different parameter values of the model. It is constructed from the joint probability distribution of the random variable that (presumably) generated the observations. When evaluated on the actual data points, it becomes a function solely of the model parameters. In maximum likelihood estimation, the argument that maximizes the likelihood function serves as a point estimate for the unknown parameter, while the Fisher information (often approximated by the likelihood's Hessian matrix at the maximum) gives an indication of the estimate's precision. In contrast, in Bayesian statistics, the estimate of interest is the converse of the likelihood, the so-called posterior probability of the parameter given the observed data, which is calculated via Bayes' rule. == Definition == The likelihood function, parameterized by a (possibly multivariate) parameter θ {\textstyle \theta } , is usually defined differently for discrete and continuous probability distributions (a more general definition is discussed below). Given a probability density or mass function x ↦ f ( x ∣ θ ) , {\displaystyle x\mapsto f(x\mid \theta ),} where x {\textstyle x} is a realization of the random variable X {\textstyle X} , the likelihood function is θ ↦ f ( x ∣ θ ) , {\displaystyle \theta \mapsto f(x\mid \theta ),} often written L ( θ ∣ x ) . {\displaystyle {\mathcal {L}}(\theta \mid x).} In other words, when f ( x ∣ θ ) {\textstyle f(x\mid \theta )} is viewed as a function of x {\textstyle x} with θ {\textstyle \theta } fixed, it is a probability density function, and when viewed as a function of θ {\textstyle \theta } with x {\textstyle x} fixed, it is a likelihood function. In the frequentist paradigm, the notation f ( x ∣ θ ) {\textstyle f(x\mid \theta )} is often avoided and instead f ( x ; θ ) {\textstyle f(x;\theta )} or f ( x , θ ) {\textstyle f(x,\theta )} are used to indicate that θ {\textstyle \theta } is regarded as a fixed unknown quantity rather than as a random variable being conditioned on. The likelihood function does not specify the probability that θ {\textstyle \theta } is the truth, given the observed sample X = x {\textstyle X=x} . Such an interpretation is a common error, with potentially disastrous consequences (see prosecutor's fallacy). === Discrete probability distribution === Let X {\textstyle X} be a discrete random variable with probability mass function p {\textstyle p} depending on a parameter θ {\textstyle \theta } . Then the function L ( θ ∣ x ) = p θ ( x ) = P θ ( X = x ) , {\displaystyle {\mathcal {L}}(\theta \mid x)=p_{\theta }(x)=P_{\theta }(X=x),} considered as a function of θ {\textstyle \theta } , is the likelihood function, given the outcome x {\textstyle x} of the random variable X {\textstyle X} . Sometimes the probability of "the value x {\textstyle x} of X {\textstyle X} for the parameter value θ {\textstyle \theta } " is written as P(X = x | θ) or P(X = x; θ). The likelihood is the probability that a particular outcome x {\textstyle x} is observed when the true value of the parameter is θ {\textstyle \theta } , equivalent to the probability mass on x {\textstyle x} ; it is not a probability density over the parameter θ {\textstyle \theta } . 
The likelihood, L ( θ ∣ x ) {\textstyle {\mathcal {L}}(\theta \mid x)} , should not be confused with P ( θ ∣ x ) {\textstyle P(\theta \mid x)} , which is the posterior probability of θ {\textstyle \theta } given the data x {\textstyle x} . ==== Example ==== Consider a simple statistical model of a coin flip: a single parameter p H {\textstyle p_{\text{H}}} that expresses the "fairness" of the coin. The parameter is the probability that a coin lands heads up ("H") when tossed. p H {\textstyle p_{\text{H}}} can take on any value within the range 0.0 to 1.0. For a perfectly fair coin, p H = 0.5 {\textstyle p_{\text{H}}=0.5} . Imagine flipping a fair coin twice, and observing two heads in two tosses ("HH"). Assuming that each successive coin flip is i.i.d., then the probability of observing HH is P ( HH ∣ p H = 0.5 ) = 0.5 2 = 0.25. {\displaystyle P({\text{HH}}\mid p_{\text{H}}=0.5)=0.5^{2}=0.25.} Equivalently, the likelihood of observing "HH" assuming p H = 0.5 {\textstyle p_{\text{H}}=0.5} is L ( p H = 0.5 ∣ HH ) = 0.25. {\displaystyle {\mathcal {L}}(p_{\text{H}}=0.5\mid {\text{HH}})=0.25.} This is not the same as saying that P ( p H = 0.5 ∣ H H ) = 0.25 {\textstyle P(p_{\text{H}}=0.5\mid HH)=0.25} , a conclusion which could only be reached via Bayes' theorem given knowledge about the marginal probabilities P ( p H = 0.5 ) {\textstyle P(p_{\text{H}}=0.5)} and P ( HH ) {\textstyle P({\text{HH}})} . Now suppose that the coin is not a fair coin, but instead that p H = 0.3 {\textstyle p_{\text{H}}=0.3} . Then the probability of two heads on two flips is P ( HH ∣ p H = 0.3 ) = 0.3 2 = 0.09. {\displaystyle P({\text{HH}}\mid p_{\text{H}}=0.3)=0.3^{2}=0.09.} Hence L ( p H = 0.3 ∣ HH ) = 0.09. {\displaystyle {\mathcal {L}}(p_{\text{H}}=0.3\mid {\text{HH}})=0.09.} More generally, for each value of p H {\textstyle p_{\text{H}}} , we can calculate the corresponding likelihood. The result of such calculations is displayed in Figure 1. The integral of L {\textstyle {\mathcal {L}}} over [0, 1] is 1/3; likelihoods need not integrate or sum to one over the parameter space. === Continuous probability distribution === Let X {\textstyle X} be a random variable following an absolutely continuous probability distribution with density function f {\textstyle f} (a function of x {\textstyle x} ) which depends on a parameter θ {\textstyle \theta } . Then the function L ( θ ∣ x ) = f θ ( x ) , {\displaystyle {\mathcal {L}}(\theta \mid x)=f_{\theta }(x),} considered as a function of θ {\textstyle \theta } , is the likelihood function (of θ {\textstyle \theta } , given the outcome X = x {\textstyle X=x} ). Again, L {\textstyle {\mathcal {L}}} is not a probability density or mass function over θ {\textstyle \theta } , despite being a function of θ {\textstyle \theta } given the observation X = x {\textstyle X=x} . ==== Relationship between the likelihood and probability density functions ==== The use of the probability density in specifying the likelihood function above is justified as follows. Given an observation x j {\textstyle x_{j}} , the likelihood for the interval [ x j , x j + h ] {\textstyle [x_{j},x_{j}+h]} , where h > 0 {\textstyle h>0} is a constant, is given by L ( θ ∣ x ∈ [ x j , x j + h ] ) {\textstyle {\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])} . 
Observe that a r g m a x θ ⁡ L ( θ ∣ x ∈ [ x j , x j + h ] ) = a r g m a x θ ⁡ 1 h L ( θ ∣ x ∈ [ x j , x j + h ] ) , {\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h]),} since h {\textstyle h} is positive and constant. Because a r g m a x θ ⁡ 1 h L ( θ ∣ x ∈ [ x j , x j + h ] ) = a r g m a x θ ⁡ 1 h Pr ( x j ≤ x ≤ x j + h ∣ θ ) = a r g m a x θ ⁡ 1 h ∫ x j x j + h f ( x ∣ θ ) d x , {\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}\Pr(x_{j}\leq x\leq x_{j}+h\mid \theta )=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx,} where f ( x ∣ θ ) {\textstyle f(x\mid \theta )} is the probability density function, it follows that a r g m a x θ ⁡ L ( θ ∣ x ∈ [ x j , x j + h ] ) = a r g m a x θ ⁡ 1 h ∫ x j x j + h f ( x ∣ θ ) d x . {\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx.} The first fundamental theorem of calculus provides that lim h → 0 + 1 h ∫ x j x j + h f ( x ∣ θ ) d x = f ( x j ∣ θ ) . {\displaystyle \lim _{h\to 0^{+}}{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx=f(x_{j}\mid \theta ).} Then a r g m a x θ ⁡ L ( θ ∣ x j ) = a r g m a x θ ⁡ [ lim h → 0 + L ( θ ∣ x ∈ [ x j , x j + h ] ) ] = a r g m a x θ ⁡ [ lim h → 0 + 1 h ∫ x j x j + h f ( x ∣ θ ) d x ] = a r g m a x θ ⁡ f ( x j ∣ θ ) . {\displaystyle {\begin{aligned}\mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x_{j})&=\mathop {\operatorname {arg\,max} } _{\theta }\left[\lim _{h\to 0^{+}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])\right]\\[4pt]&=\mathop {\operatorname {arg\,max} } _{\theta }\left[\lim _{h\to 0^{+}}{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx\right]\\[4pt]&=\mathop {\operatorname {arg\,max} } _{\theta }f(x_{j}\mid \theta ).\end{aligned}}} Therefore, a r g m a x θ ⁡ L ( θ ∣ x j ) = a r g m a x θ ⁡ f ( x j ∣ θ ) , {\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x_{j})=\mathop {\operatorname {arg\,max} } _{\theta }f(x_{j}\mid \theta ),} and so maximizing the probability density at x j {\textstyle x_{j}} amounts to maximizing the likelihood of the specific observation x j {\textstyle x_{j}} . === In general === In measure-theoretic probability theory, the density function is defined as the Radon–Nikodym derivative of the probability distribution relative to a common dominating measure. The likelihood function is this density interpreted as a function of the parameter, rather than the random variable. Thus, we can construct a likelihood function for any distribution, whether discrete, continuous, a mixture, or otherwise. (Likelihoods are comparable, e.g. for parameter estimation, only if they are Radon–Nikodym derivatives with respect to the same dominating measure.) The above discussion of the likelihood for discrete random variables uses the counting measure, under which the probability density at any outcome equals the probability of that outcome. 
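Returning to the coin-flip example above, the discrete-case definition can be evaluated directly. The short Python sketch below (an illustration, not part of the article's sources) computes L(p_H | HH) on a grid of parameter values, reproduces the two likelihood values quoted earlier, and locates the grid maximizer, which for two heads in two tosses is p_H = 1.

def likelihood(p, heads, tosses):
    # probability of the observed sequence for independent Bernoulli(p) flips
    return p ** heads * (1 - p) ** (tosses - heads)

grid = [i / 1000 for i in range(1001)]
L = [likelihood(p, heads=2, tosses=2) for p in grid]

print(likelihood(0.5, 2, 2))    # 0.25, as computed in the example
print(likelihood(0.3, 2, 2))    # 0.09
print(grid[max(range(len(L)), key=L.__getitem__)])   # 1.0, the maximizer of p**2 on [0, 1]

Note that the grid values of L sum to roughly 1/3 of the grid size times the spacing, echoing the remark that the likelihood integrates to 1/3 over [0, 1] and need not integrate to one.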
=== Likelihoods for mixed continuous–discrete distributions === The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses p k ( θ ) {\textstyle p_{k}(\theta )} and a density f ( x ∣ θ ) {\textstyle f(x\mid \theta )} , where the sum of all the p {\textstyle p} 's added to the integral of f {\textstyle f} is always one. Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with in the manner shown above. For an observation from the discrete component, the likelihood function is simply L ( θ ∣ x ) = p k ( θ ) , {\displaystyle {\mathcal {L}}(\theta \mid x)=p_{k}(\theta ),} where k {\textstyle k} is the index of the discrete probability mass corresponding to observation x {\textstyle x} , because maximizing the probability mass (or probability) at x {\textstyle x} amounts to maximizing the likelihood of the specific observation. The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation x {\textstyle x} , but not with the parameter θ {\textstyle \theta } . === Regularity conditions === In the context of parameter estimation, the likelihood function is usually assumed to obey certain conditions, known as regularity conditions. These conditions are assumed in various proofs involving likelihood functions, and need to be verified in each particular application. For maximum likelihood estimation, the existence of a global maximum of the likelihood function is of the utmost importance. By the extreme value theorem, it suffices that the likelihood function is continuous on a compact parameter space for the maximum likelihood estimator to exist. While the continuity assumption is usually met, the compactness assumption about the parameter space is often not, as the bounds of the true parameter values might be unknown. In that case, concavity of the likelihood function plays a key role.
More specifically, if the likelihood function is twice continuously differentiable on the k-dimensional parameter space Θ {\textstyle \Theta } assumed to be an open connected subset of R k , {\textstyle \mathbb {R} ^{k}\,,} there exists a unique maximum θ ^ ∈ Θ {\textstyle {\hat {\theta }}\in \Theta } if the matrix of second partials H ( θ ) ≡ [ ∂ 2 L ∂ θ i ∂ θ j ] i , j = 1 , 1 n i , n j {\displaystyle \mathbf {H} (\theta )\equiv \left[\,{\frac {\partial ^{2}L}{\,\partial \theta _{i}\,\partial \theta _{j}\,}}\,\right]_{i,j=1,1}^{n_{\mathrm {i} },n_{\mathrm {j} }}\;} is negative definite for every θ ∈ Θ {\textstyle \,\theta \in \Theta \,} at which the gradient ∇ L ≡ [ ∂ L ∂ θ i ] i = 1 n i {\textstyle \;\nabla L\equiv \left[\,{\frac {\partial L}{\,\partial \theta _{i}\,}}\,\right]_{i=1}^{n_{\mathrm {i} }}\;} vanishes, and if the likelihood function approaches a constant on the boundary of the parameter space, ∂ Θ , {\textstyle \;\partial \Theta \;,} i.e., lim θ → ∂ Θ L ( θ ) = 0 , {\displaystyle \lim _{\theta \to \partial \Theta }L(\theta )=0\;,} which may include the points at infinity if Θ {\textstyle \,\Theta \,} is unbounded. Mäkeläinen and co-authors prove this result using Morse theory while informally appealing to a mountain pass property. Mascarenhas restates their proof using the mountain pass theorem. In the proofs of consistency and asymptotic normality of the maximum likelihood estimator, additional assumptions are made about the probability densities that form the basis of a particular likelihood function. These conditions were first established by Chanda. In particular, for almost all x {\textstyle x} , and for all θ ∈ Θ , {\textstyle \,\theta \in \Theta \,,} ∂ log ⁡ f ∂ θ r , ∂ 2 log ⁡ f ∂ θ r ∂ θ s , ∂ 3 log ⁡ f ∂ θ r ∂ θ s ∂ θ t {\displaystyle {\frac {\partial \log f}{\partial \theta _{r}}}\,,\quad {\frac {\partial ^{2}\log f}{\partial \theta _{r}\partial \theta _{s}}}\,,\quad {\frac {\partial ^{3}\log f}{\partial \theta _{r}\,\partial \theta _{s}\,\partial \theta _{t}}}\,} exist for all r , s , t = 1 , 2 , … , k {\textstyle \,r,s,t=1,2,\ldots ,k\,} in order to ensure the existence of a Taylor expansion. Second, for almost all x {\textstyle x} and for every θ ∈ Θ {\textstyle \,\theta \in \Theta \,} it must be that | ∂ f ∂ θ r | < F r ( x ) , | ∂ 2 f ∂ θ r ∂ θ s | < F r s ( x ) , | ∂ 3 f ∂ θ r ∂ θ s ∂ θ t | < H r s t ( x ) {\displaystyle \left|{\frac {\partial f}{\partial \theta _{r}}}\right|<F_{r}(x)\,,\quad \left|{\frac {\partial ^{2}f}{\partial \theta _{r}\,\partial \theta _{s}}}\right|<F_{rs}(x)\,,\quad \left|{\frac {\partial ^{3}f}{\partial \theta _{r}\,\partial \theta _{s}\,\partial \theta _{t}}}\right|<H_{rst}(x)} where H {\textstyle H} is such that ∫ − ∞ ∞ H r s t ( z ) d z ≤ M < ∞ . {\textstyle \,\int _{-\infty }^{\infty }H_{rst}(z)\mathrm {d} z\leq M<\infty \;.} This boundedness of the derivatives is needed to allow for differentiation under the integral sign. And lastly, it is assumed that the information matrix, I ( θ ) = ∫ − ∞ ∞ ∂ log ⁡ f ∂ θ r ∂ log ⁡ f ∂ θ s f d z {\displaystyle \mathbf {I} (\theta )=\int _{-\infty }^{\infty }{\frac {\partial \log f}{\partial \theta _{r}}}\ {\frac {\partial \log f}{\partial \theta _{s}}}\ f\ \mathrm {d} z} is positive definite and | I ( θ ) | {\textstyle \,\left|\mathbf {I} (\theta )\right|\,} is finite. This ensures that the score has a finite variance. The above conditions are sufficient, but not necessary. 
That is, a model that does not meet these regularity conditions may or may not have a maximum likelihood estimator with the properties mentioned above. Further, in case of non-independently or non-identically distributed observations additional properties may need to be assumed. In Bayesian statistics, almost identical regularity conditions are imposed on the likelihood function in order to prove asymptotic normality of the posterior probability, and therefore to justify a Laplace approximation of the posterior in large samples. == Likelihood ratio and relative likelihood == === Likelihood ratio === A likelihood ratio is the ratio of any two specified likelihoods, frequently written as: Λ ( θ 1 : θ 2 ∣ x ) = L ( θ 1 ∣ x ) L ( θ 2 ∣ x ) . {\displaystyle \Lambda (\theta _{1}:\theta _{2}\mid x)={\frac {{\mathcal {L}}(\theta _{1}\mid x)}{{\mathcal {L}}(\theta _{2}\mid x)}}.} The likelihood ratio is central to likelihoodist statistics: the law of likelihood states that the degree to which data (considered as evidence) supports one parameter value versus another is measured by the likelihood ratio. In frequentist inference, the likelihood ratio is the basis for a test statistic, the so-called likelihood-ratio test. By the Neyman–Pearson lemma, this is the most powerful test for comparing two simple hypotheses at a given significance level. Numerous other tests can be viewed as likelihood-ratio tests or approximations thereof. The asymptotic distribution of the log-likelihood ratio, considered as a test statistic, is given by Wilks' theorem. The likelihood ratio is also of central importance in Bayesian inference, where it is known as the Bayes factor, and is used in Bayes' rule. Stated in terms of odds, Bayes' rule states that the posterior odds of two alternatives, A 1 {\displaystyle A_{1}} and A 2 {\displaystyle A_{2}} , given an event B {\displaystyle B} , are the prior odds times the likelihood ratio. As an equation: O ( A 1 : A 2 ∣ B ) = O ( A 1 : A 2 ) ⋅ Λ ( A 1 : A 2 ∣ B ) . {\displaystyle O(A_{1}:A_{2}\mid B)=O(A_{1}:A_{2})\cdot \Lambda (A_{1}:A_{2}\mid B).} The likelihood ratio is not directly used in AIC-based statistics. Instead, what is used is the relative likelihood of models (see below). In evidence-based medicine, likelihood ratios are used in diagnostic testing to assess the value of performing a diagnostic test. === Relative likelihood function === Since the actual value of the likelihood function depends on the sample, it is often convenient to work with a standardized measure. Suppose that the maximum likelihood estimate for the parameter θ is θ ^ {\textstyle {\hat {\theta }}} . Relative plausibilities of other θ values may be found by comparing the likelihoods of those other values with the likelihood of θ ^ {\textstyle {\hat {\theta }}} . The relative likelihood of θ is defined to be R ( θ ) = L ( θ ∣ x ) L ( θ ^ ∣ x ) . {\displaystyle R(\theta )={\frac {{\mathcal {L}}(\theta \mid x)}{{\mathcal {L}}({\hat {\theta }}\mid x)}}.} Thus, the relative likelihood is the likelihood ratio (discussed above) with the fixed denominator L ( θ ^ ) {\textstyle {\mathcal {L}}({\hat {\theta }})} . This corresponds to standardizing the likelihood to have a maximum of 1. ==== Likelihood region ==== A likelihood region is the set of all values of θ whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a p% likelihood region for θ is defined to be { θ : R ( θ ) ≥ p 100 } .
{\displaystyle \left\{\theta :R(\theta )\geq {\frac {p}{100}}\right\}.} If θ is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a likelihood interval. Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihoodist statistics: they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism). Given a model, likelihood intervals can be compared to confidence intervals. If θ is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for θ will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the e−2 likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1). == Likelihoods that eliminate nuisance parameters == In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered as nuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters, so that a likelihood can be written as a function of only the parameter (or parameters) of interest: the main approaches are profile, conditional, and marginal likelihoods. These approaches are also useful when a high-dimensional likelihood surface needs to be reduced to one or two parameters of interest in order to allow a graph. === Profile likelihood === It is possible to reduce the dimensions by concentrating the likelihood function for a subset of parameters by expressing the nuisance parameters as functions of the parameters of interest and replacing them in the likelihood function. In general, for a likelihood function depending on the parameter vector θ {\textstyle \mathbf {\theta } } that can be partitioned into θ = ( θ 1 : θ 2 ) {\textstyle \mathbf {\theta } =\left(\mathbf {\theta } _{1}:\mathbf {\theta } _{2}\right)} , and where a correspondence θ ^ 2 = θ ^ 2 ( θ 1 ) {\textstyle \mathbf {\hat {\theta }} _{2}=\mathbf {\hat {\theta }} _{2}\left(\mathbf {\theta } _{1}\right)} can be determined explicitly, concentration reduces the computational burden of the original maximization problem. For instance, in a linear regression with normally distributed errors, y = X β + u {\textstyle \mathbf {y} =\mathbf {X} \beta +u} , the coefficient vector could be partitioned into β = [ β 1 : β 2 ] {\textstyle \beta =\left[\beta _{1}:\beta _{2}\right]} (and consequently the design matrix X = [ X 1 : X 2 ] {\textstyle \mathbf {X} =\left[\mathbf {X} _{1}:\mathbf {X} _{2}\right]} ).
Maximizing with respect to β 2 {\textstyle \beta _{2}} yields an optimal value function β 2 ( β 1 ) = ( X 2 T X 2 ) − 1 X 2 T ( y − X 1 β 1 ) {\textstyle \beta _{2}(\beta _{1})=\left(\mathbf {X} _{2}^{\mathsf {T}}\mathbf {X} _{2}\right)^{-1}\mathbf {X} _{2}^{\mathsf {T}}\left(\mathbf {y} -\mathbf {X} _{1}\beta _{1}\right)} . Using this result, the maximum likelihood estimator for β 1 {\textstyle \beta _{1}} can then be derived as β ^ 1 = ( X 1 T ( I − P 2 ) X 1 ) − 1 X 1 T ( I − P 2 ) y {\displaystyle {\hat {\beta }}_{1}=\left(\mathbf {X} _{1}^{\mathsf {T}}\left(\mathbf {I} -\mathbf {P} _{2}\right)\mathbf {X} _{1}\right)^{-1}\mathbf {X} _{1}^{\mathsf {T}}\left(\mathbf {I} -\mathbf {P} _{2}\right)\mathbf {y} } where P 2 = X 2 ( X 2 T X 2 ) − 1 X 2 T {\textstyle \mathbf {P} _{2}=\mathbf {X} _{2}\left(\mathbf {X} _{2}^{\mathsf {T}}\mathbf {X} _{2}\right)^{-1}\mathbf {X} _{2}^{\mathsf {T}}} is the projection matrix of X 2 {\textstyle \mathbf {X} _{2}} . This result is known as the Frisch–Waugh–Lovell theorem. Since graphically the procedure of concentration is equivalent to slicing the likelihood surface along the ridge of values of the nuisance parameter β 2 {\textstyle \beta _{2}} that maximizes the likelihood function, creating an isometric profile of the likelihood function for a given β 1 {\textstyle \beta _{1}} , the result of this procedure is also known as the profile likelihood. In addition to being graphed, the profile likelihood can also be used to compute confidence intervals that often have better small-sample properties than those based on asymptotic standard errors calculated from the full likelihood. === Conditional likelihood === Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters. One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central hypergeometric distribution. This form of conditioning is also the basis for Fisher's exact test. === Marginal likelihood === Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components. === Partial likelihood === A partial likelihood is an adaptation of the full likelihood such that only a part of the parameters (the parameters of interest) occur in it. It is a key component of the proportional hazards model: using a restriction on the hazard function, the likelihood does not contain the shape of the hazard over time. == Products of likelihoods == The likelihood, given two or more independent events, is the product of the likelihoods of each of the individual events: Λ ( A ∣ X 1 ∧ X 2 ) = Λ ( A ∣ X 1 ) ⋅ Λ ( A ∣ X 2 ) . {\displaystyle \Lambda (A\mid X_{1}\land X_{2})=\Lambda (A\mid X_{1})\cdot \Lambda (A\mid X_{2}).} This follows from the definition of independence in probability: the probability of two independent events both happening, given a model, is the product of the individual probabilities. This is particularly important when the events are from independent and identically distributed random variables, such as independent observations or sampling with replacement. 
In such a situation, the likelihood function factors into a product of individual likelihood functions. The empty product has value 1, which corresponds to the likelihood, given no event, being 1: before any data, the likelihood is always 1. This is similar to a uniform prior in Bayesian statistics, but in likelihoodist statistics this is not an improper prior because likelihoods are not integrated. == Log-likelihood == The log-likelihood function is the logarithm of the likelihood function, often denoted by a lowercase l or ⁠ ℓ {\displaystyle \ell } ⁠, to contrast with the uppercase L or L {\textstyle {\mathcal {L}}} for the likelihood. Because logarithms are strictly increasing functions, maximizing the likelihood is equivalent to maximizing the log-likelihood. But for practical purposes it is more convenient to work with the log-likelihood function in maximum likelihood estimation, in particular since most common probability distributions—notably the exponential family—are only logarithmically concave, and concavity of the objective function plays a key role in the maximization. Given the independence of each event, the overall log-likelihood of intersection equals the sum of the log-likelihoods of the individual events. This is analogous to the fact that the overall log-probability is the sum of the log-probability of the individual events. In addition to this mathematical convenience, the additivity of the log-likelihood has an intuitive interpretation, often expressed as "support" from the data. When the parameters are estimated using the log-likelihood for the maximum likelihood estimation, each data point is used by being added to the total log-likelihood. As the data can be viewed as evidence that supports the estimated parameters, this process can be interpreted as "support from independent evidence adds", and the log-likelihood is the "weight of evidence". Interpreting negative log-probability as information content or surprisal, the support (log-likelihood) of a model, given an event, is the negative of the surprisal of the event, given the model: a model is supported by an event to the extent that the event is unsurprising, given the model. A logarithm of a likelihood ratio is equal to the difference of the log-likelihoods: log ⁡ L ( A ) L ( B ) = log ⁡ L ( A ) − log ⁡ L ( B ) = ℓ ( A ) − ℓ ( B ) . {\displaystyle \log {\frac {{\mathcal {L}}(A)}{{\mathcal {L}}(B)}}=\log {\mathcal {L}}(A)-\log {\mathcal {L}}(B)=\ell (A)-\ell (B).} Just as the likelihood given no event is 1, the log-likelihood given no event is 0, which corresponds to the value of the empty sum: without any data, there is no support for any model. === Graph === The graph of the log-likelihood is called the support curve (in the univariate case). In the multivariate case, the concept generalizes into a support surface over the parameter space. It has a relation to, but is distinct from, the support of a distribution. The term was coined by A. W. F. Edwards in the context of statistical hypothesis testing, i.e. whether or not the data "support" one hypothesis (or parameter value) being tested more than any other. The log-likelihood function being plotted is used in the computation of the score (the gradient of the log-likelihood) and Fisher information (the curvature of the log-likelihood). Thus, the graph has a direct interpretation in the context of maximum likelihood estimation and likelihood-ratio tests. 
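These quantities can be checked numerically. The following is a minimal sketch (an illustration added here, not part of the source article) for a hypothetical Bernoulli sample, where the maximum likelihood estimate, the score, and the Fisher information all have simple closed forms; the score and the likelihood equations are treated more formally in the next subsection.

```python
import numpy as np

# Hypothetical data: 7 successes in 10 Bernoulli trials.
x = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])
n, k = len(x), x.sum()

def log_likelihood(p):
    """Support curve: log-likelihood of p for an i.i.d. Bernoulli sample."""
    return k * np.log(p) + (n - k) * np.log(1 - p)

def score(p):
    """Gradient of the log-likelihood with respect to p."""
    return k / p - (n - k) / (1 - p)

def information(p):
    """Negative second derivative of the log-likelihood (observed information)."""
    return k / p**2 + (n - k) / (1 - p)**2

p_hat = k / n                # the MLE solves score(p) = 0
print(score(p_hat))          # ~0 (up to rounding): p_hat is a stationary point
print(information(p_hat))    # ~47.6, matching the closed form n / (p_hat * (1 - p_hat))

# The support curve itself can be evaluated on a grid and plotted:
grid = np.linspace(0.01, 0.99, 99)
print(grid[np.argmax(log_likelihood(grid))])  # ~0.7, agreeing with p_hat
```

The curvature printed here is the observed information at the maximum; its reciprocal, p̂(1 − p̂)/n, is the usual asymptotic variance estimate for p̂, which is the sense in which the curvature of the graph indicates the precision of the estimate.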
=== Likelihood equations === If the log-likelihood function is smooth, its gradient with respect to the parameter, known as the score and written s n ( θ ) ≡ ∇ θ ℓ n ( θ ) {\textstyle s_{n}(\theta )\equiv \nabla _{\theta }\ell _{n}(\theta )} , exists and allows for the application of differential calculus. The basic way to maximize a differentiable function is to find the stationary points (the points where the derivative is zero); since the derivative of a sum is just the sum of the derivatives, but the derivative of a product requires the product rule, it is easier to compute the stationary points of the log-likelihood of independent events than for the likelihood of independent events. The equations defined by the stationary point of the score function serve as estimating equations for the maximum likelihood estimator. s n ( θ ) = 0 {\displaystyle s_{n}(\theta )=\mathbf {0} } In that sense, the maximum likelihood estimator is implicitly defined by the value at 0 {\textstyle \mathbf {0} } of the inverse function s n − 1 : E d → Θ {\textstyle s_{n}^{-1}:\mathbb {E} ^{d}\to \Theta } , where E d {\textstyle \mathbb {E} ^{d}} is the d-dimensional Euclidean space, and Θ {\textstyle \Theta } is the parameter space. Using the inverse function theorem, it can be shown that s n − 1 {\textstyle s_{n}^{-1}} is well-defined in an open neighborhood about 0 {\textstyle \mathbf {0} } with probability going to one, and θ ^ n = s n − 1 ( 0 ) {\textstyle {\hat {\theta }}_{n}=s_{n}^{-1}(\mathbf {0} )} is a consistent estimate of θ {\textstyle \theta } . As a consequence there exists a sequence { θ ^ n } {\textstyle \left\{{\hat {\theta }}_{n}\right\}} such that s n ( θ ^ n ) = 0 {\textstyle s_{n}({\hat {\theta }}_{n})=\mathbf {0} } asymptotically almost surely, and θ ^ n → p θ 0 {\textstyle {\hat {\theta }}_{n}\xrightarrow {\text{p}} \theta _{0}} . A similar result can be established using Rolle's theorem. The second derivative evaluated at θ ^ {\textstyle {\hat {\theta }}} , known as Fisher information, determines the curvature of the likelihood surface, and thus indicates the precision of the estimate. === Exponential families === The log-likelihood is also particularly useful for exponential families of distributions, which include many of the common parametric probability distributions. The probability distribution function (and thus likelihood function) for exponential families contain products of factors involving exponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function. An exponential family is one whose probability density function is of the form (for some functions, writing ⟨ − , − ⟩ {\textstyle \langle -,-\rangle } for the inner product): p ( x ∣ θ ) = h ( x ) exp ⁡ ( ⟨ η ( θ ) , T ( x ) ⟩ − A ( θ ) ) . {\displaystyle p(x\mid {\boldsymbol {\theta }})=h(x)\exp {\Big (}\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }}){\Big )}.} Each of these terms has an interpretation, but simply switching from probability to likelihood and taking logarithms yields the sum: ℓ ( θ ∣ x ) = ⟨ η ( θ ) , T ( x ) ⟩ − A ( θ ) + log ⁡ h ( x ) . 
{\displaystyle \ell ({\boldsymbol {\theta }}\mid x)=\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }})+\log h(x).} The η ( θ ) {\textstyle {\boldsymbol {\eta }}({\boldsymbol {\theta }})} and h ( x ) {\textstyle h(x)} each correspond to a change of coordinates, so in these coordinates, the log-likelihood of an exponential family is given by the simple formula: ℓ ( η ∣ x ) = ⟨ η , T ( x ) ⟩ − A ( η ) . {\displaystyle \ell ({\boldsymbol {\eta }}\mid x)=\langle {\boldsymbol {\eta }},\mathbf {T} (x)\rangle -A({\boldsymbol {\eta }}).} In words, the log-likelihood of an exponential family is the inner product of the natural parameter ⁠ η {\displaystyle {\boldsymbol {\eta }}} ⁠ and the sufficient statistic ⁠ T ( x ) {\displaystyle \mathbf {T} (x)} ⁠, minus the normalization factor (log-partition function) ⁠ A ( η ) {\displaystyle A({\boldsymbol {\eta }})} ⁠. Thus, for example, the maximum likelihood estimate can be computed by taking derivatives of the sufficient statistic T and the log-partition function A. ==== Example: the gamma distribution ==== The gamma distribution is an exponential family with two parameters, α {\textstyle \alpha } and β {\textstyle \beta } . The likelihood function is L ( α , β ∣ x ) = β α Γ ( α ) x α − 1 e − β x . {\displaystyle {\mathcal {L}}(\alpha ,\beta \mid x)={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}x^{\alpha -1}e^{-\beta x}.} Finding the maximum likelihood estimate of β {\textstyle \beta } for a single observed value x {\textstyle x} looks rather daunting. Its logarithm is much simpler to work with: log ⁡ L ( α , β ∣ x ) = α log ⁡ β − log ⁡ Γ ( α ) + ( α − 1 ) log ⁡ x − β x . {\displaystyle \log {\mathcal {L}}(\alpha ,\beta \mid x)=\alpha \log \beta -\log \Gamma (\alpha )+(\alpha -1)\log x-\beta x.\,} To maximize the log-likelihood, we first take the partial derivative with respect to β {\textstyle \beta } : ∂ log ⁡ L ( α , β ∣ x ) ∂ β = α β − x . {\displaystyle {\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x)}{\partial \beta }}={\frac {\alpha }{\beta }}-x.} If there are a number of independent observations x 1 , … , x n {\textstyle x_{1},\ldots ,x_{n}} , then the joint log-likelihood will be the sum of individual log-likelihoods, and the derivative of this sum will be a sum of derivatives of each individual log-likelihood: ∂ log ⁡ L ( α , β ∣ x 1 , … , x n ) ∂ β = ∂ log ⁡ L ( α , β ∣ x 1 ) ∂ β + ⋯ + ∂ log ⁡ L ( α , β ∣ x n ) ∂ β = n α β − ∑ i = 1 n x i . {\displaystyle {\begin{aligned}&{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1},\ldots ,x_{n})}{\partial \beta }}\\&={\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1})}{\partial \beta }}+\cdots +{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{n})}{\partial \beta }}\\&={\frac {n\alpha }{\beta }}-\sum _{i=1}^{n}x_{i}.\end{aligned}}} To complete the maximization procedure for the joint log-likelihood, the equation is set to zero and solved for β {\textstyle \beta } : β ^ = α x ¯ . {\displaystyle {\widehat {\beta }}={\frac {\alpha }{\bar {x}}}.} Here β ^ {\textstyle {\widehat {\beta }}} denotes the maximum-likelihood estimate, and x ¯ = 1 n ∑ i = 1 n x i {\textstyle \textstyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} is the sample mean of the observations. == Background and interpretation == === Historical remarks === The term "likelihood" has been in use in English since at least late Middle English. 
Its formal use to refer to a specific function in mathematical statistics was proposed by Ronald Fisher, in two research papers published in 1921 and 1922. The 1921 paper introduced what is today called a "likelihood interval"; the 1922 paper introduced the term "method of maximum likelihood". Quoting Fisher: [I]n 1922, I proposed the term 'likelihood,' in view of the fact that, with respect to [the parameter], it is not a probability, and does not obey the laws of probability, while at the same time it bears to the problem of rational choice among the possible values of [the parameter] a relation similar to that which probability bears to the problem of predicting events in games of chance. . . . Whereas, however, in relation to psychological judgment, likelihood has some resemblance to probability, the two concepts are wholly distinct. . . . As Sir Ronald Fisher emphasized, the concept of likelihood should not be confused with probability: I stress this because in spite of the emphasis that I have always laid upon the difference between probability and likelihood there is still a tendency to treat likelihood as though it were a sort of probability. The first result is thus that there are two different measures of rational belief appropriate to different cases. Knowing the population we can express our incomplete knowledge of, or expectation of, the sample in terms of probability; knowing the sample we can express our incomplete knowledge of the population in terms of likelihood. Fisher's invention of statistical likelihood was in reaction against an earlier form of reasoning called inverse probability. His use of the term "likelihood" fixed the meaning of the term within mathematical statistics. A. W. F. Edwards (1972) established the axiomatic basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another. The support function is then the natural logarithm of the likelihood function. Both terms are used in phylogenetics, but were not adopted in a general treatment of the topic of statistical evidence. === Interpretations under different foundations === Among statisticians, there is no consensus about what the foundation of statistics should be. There are four main paradigms that have been proposed for the foundation: frequentism, Bayesianism, likelihoodism, and AIC-based. For each of the proposed foundations, the interpretation of likelihood is different. The four interpretations are described in the subsections below. ==== Frequentist interpretation ==== In frequentist statistics, the likelihood function is itself a statistic that summarizes a single sample from a population, whose calculated value depends on a choice of several parameters θ1 ... θp, where p is the count of parameters in some already-selected statistical model. The value of the likelihood serves as a figure of merit for the choice used for the parameters, and the parameter set with maximum likelihood is the best choice, given the data available. The specific calculation of the likelihood is the probability that the observed sample would be assigned, assuming that the model chosen and the values of the several parameters θ give an accurate approximation of the frequency distribution of the population that the observed sample was drawn from. Heuristically, it makes sense that a good choice of parameters is one that renders the sample actually observed the maximum possible post-hoc probability of having happened. Wilks' theorem quantifies the heuristic rule by showing that the difference in the logarithm of the likelihood generated by the estimate's parameter values and the logarithm of the likelihood generated by the population's "true" (but unknown) parameter values is asymptotically χ2 distributed. Each independent sample's maximum likelihood estimate is a separate estimate of the "true" parameter set describing the population sampled. Successive estimates from many independent samples will cluster together with the population's "true" set of parameter values hidden somewhere in their midst. The difference in the logarithms of the maximum likelihood and adjacent parameter sets' likelihoods may be used to draw a confidence region on a plot whose co-ordinates are the parameters θ1 ... θp. The region surrounds the maximum-likelihood estimate, and all points (parameter sets) within that region differ at most in log-likelihood by some fixed value. The χ2 distribution given by Wilks' theorem converts the region's log-likelihood differences into the "confidence" that the population's "true" parameter set lies inside. The art of choosing the fixed log-likelihood difference is to make the confidence acceptably high while keeping the region acceptably small (narrow range of estimates). As more data are observed, instead of being used to make independent estimates, they can be combined with the previous samples to make a single combined sample, and that large sample may be used for a new maximum likelihood estimate. As the size of the combined sample increases, the size of the likelihood region with the same confidence shrinks. Eventually, either the size of the confidence region is very nearly a single point, or the entire population has been sampled; in both cases, the estimated parameter set is essentially the same as the population parameter set. ==== Bayesian interpretation ==== In Bayesian inference, although one can speak about the likelihood of any proposition or random variable given another random variable (for example, the likelihood of a parameter value or of a statistical model, given specified data or other evidence; see marginal likelihood), the likelihood function remains the same entity, with the additional interpretations of (i) a conditional density of the data given the parameter (since the parameter is then a random variable) and (ii) a measure or amount of information brought by the data about the parameter value or even the model. Due to the introduction of a probability structure on the parameter space or on the collection of models, it is possible that a parameter value or a statistical model has a large likelihood value for given data, and yet has a low probability, or vice versa. This is often the case in medical contexts. 
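A small numeric illustration of this last point (the figures are invented for illustration, not drawn from the article): for a rare condition, the hypothesis "patient has the condition" can assign a positive test result a high likelihood while still having a low posterior probability, because the prior odds are small. The sketch below applies the odds form of Bayes' rule stated earlier in the article.

```python
# Hypothetical diagnostic-test numbers (assumed for illustration only).
prior = 0.01            # P(condition) in the population
p_pos_given_c = 0.95    # likelihood of a positive test if condition present
p_pos_given_not = 0.05  # likelihood of a positive test if condition absent

# Likelihood ratio (Bayes factor) of "condition" vs "no condition":
lr = p_pos_given_c / p_pos_given_not        # 19.0

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
prior_odds = prior / (1 - prior)            # about 0.0101
posterior_odds = prior_odds * lr            # about 0.192
posterior = posterior_odds / (1 + posterior_odds)
print(round(posterior, 3))                  # 0.161
```

Here the likelihood strongly favours the condition (0.95 against 0.05), yet the posterior probability of the condition is only about 16%.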
Following Bayes' rule, the likelihood when seen as a conditional density can be multiplied by the prior probability density of the parameter and then normalized, to give a posterior probability density. More generally, the likelihood of an unknown quantity X {\textstyle X} given another unknown quantity Y {\textstyle Y} is proportional to the probability of Y {\textstyle Y} given X {\textstyle X} . ==== Likelihoodist interpretation ==== ==== AIC-based interpretation ==== Under the AIC paradigm, likelihood is interpreted within the context of information theory. == See also == == Notes == == References == == Further reading == Azzalini, Adelchi (1996). "Likelihood". Statistical Inference Based on the Likelihood. Chapman and Hall. pp. 17–50. 
ISBN 0-412-60650-X. Boos, Dennis D.; Stefanski, L. A. (2013). "Likelihood Construction and Estimation". Essential Statistical Inference: Theory and Methods. New York: Springer. pp. 27–124. doi:10.1007/978-1-4614-4818-1_2. ISBN 978-1-4614-4817-4. Edwards, A. W. F. (1992) [1972]. Likelihood (Expanded ed.). Johns Hopkins University Press. ISBN 0-8018-4443-6. King, Gary (1989). "The Likelihood Model of Inference". Unifying Political Methodology: The Likelihood Theory of Statistical Inference. Cambridge University Press. pp. 59–94. ISBN 0-521-36697-6. Lindsey, J. K. (1996). "Likelihood". Parametric Statistical Inference. Oxford University Press. pp. 69–139. ISBN 0-19-852359-9. Richard, Mark; Vecer, Jan (1 February 2021). "Efficiency Testing of Prediction Markets: Martingale Approach, Likelihood Ratio and Bayes Factor Analysis". Risks. 9 (2): 31. doi:10.3390/risks9020031. hdl:10419/258120. Rohde, Charles A. (2014). Introductory Statistical Inference with the Likelihood Function. Berlin: Springer. ISBN 978-3-319-10460-7. Royall, Richard (1997). Statistical Evidence: A Likelihood Paradigm. London: Chapman & Hall. ISBN 0-412-04411-0. Ward, Michael D.; Ahlquist, John S. (2018). "The Likelihood Function: A Deeper Dive". Maximum Likelihood for Social Science: Strategies for Analysis. Cambridge University Press. pp. 21–28. ISBN 978-1-316-63682-4. == External links == Likelihood function at Planetmath "Log-likelihood". Statlect.
Wikipedia/Likelihood_function
In mathematics, a Borel set is any subset of a topological space that can be formed from its open sets (or, equivalently, from closed sets) through the operations of countable union, countable intersection, and relative complement. Borel sets are named after Émile Borel. For a topological space X, the collection of all Borel sets on X forms a σ-algebra, known as the Borel algebra or Borel σ-algebra. The Borel algebra on X is the smallest σ-algebra containing all open sets (or, equivalently, all closed sets). Borel sets are important in measure theory, since any measure defined on the open sets of a space, or on the closed sets of a space, must also be defined on all Borel sets of that space. Any measure defined on the Borel sets is called a Borel measure. Borel sets and the associated Borel hierarchy also play a fundamental role in descriptive set theory. In some contexts, Borel sets are defined to be generated by the compact sets of the topological space, rather than the open sets. The two definitions are equivalent for many well-behaved spaces, including all Hausdorff σ-compact spaces, but can be different in more pathological spaces. == Generating the Borel algebra == In the case that X is a metric space, the Borel algebra in the first sense may be described generatively as follows. For a collection T of subsets of X (that is, for any subset of the power set P(X) of X), let T σ {\displaystyle T_{\sigma }} be all countable unions of elements of T T δ {\displaystyle T_{\delta }} be all countable intersections of elements of T T δ σ = ( T δ ) σ . {\displaystyle T_{\delta \sigma }=(T_{\delta })_{\sigma }.} Now define by transfinite induction a sequence Gm, where m is an ordinal number, in the following manner: For the base case of the definition, let G 0 {\displaystyle G^{0}} be the collection of open subsets of X. If i is not a limit ordinal, then i has an immediately preceding ordinal i − 1. Let G i = [ G i − 1 ] δ σ . {\displaystyle G^{i}=[G^{i-1}]_{\delta \sigma }.} If i is a limit ordinal, set G i = ⋃ j < i G j . {\displaystyle G^{i}=\bigcup _{j<i}G^{j}.} The claim is that the Borel algebra is Gω1, where ω1 is the first uncountable ordinal number. That is, the Borel algebra can be generated from the class of open sets by iterating the operation G ↦ G δ σ {\displaystyle G\mapsto G_{\delta \sigma }} to the first uncountable ordinal. To prove this claim, note that any open set in a metric space is the union of an increasing sequence of closed sets. In particular, complementation of sets maps Gm into itself for any limit ordinal m; moreover if m is an uncountable limit ordinal, Gm is closed under countable unions. For each Borel set B, there is some countable ordinal αB such that B can be obtained by iterating the operation over αB. However, as B varies over all Borel sets, αB will vary over all the countable ordinals, and thus the first ordinal at which all the Borel sets are obtained is ω1, the first uncountable ordinal. The resulting sequence of sets is termed the Borel hierarchy. === Example === An important example, especially in the theory of probability, is the Borel algebra on the set of real numbers. It is the algebra on which the Borel measure is defined. Given a real random variable defined on a probability space, its probability distribution is by definition also a measure on the Borel algebra. The Borel algebra on the reals is the smallest σ-algebra on R that contains all the intervals. 
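The phrase "smallest σ-algebra containing" has a concrete computational meaning on a finite sample space, where countable unions reduce to finite ones. The following sketch (an illustration added here, not part of the article) computes the σ-algebra generated by a collection of subsets by closing it under complement and union:

```python
from itertools import combinations

def generated_sigma_algebra(omega, generators):
    """Smallest family of subsets of a finite set omega that contains the
    generators and is closed under complement and union; closure under
    intersection then follows by De Morgan's laws."""
    omega = frozenset(omega)
    family = {frozenset(), omega} | {frozenset(g) for g in generators}
    while True:
        new = {omega - a for a in family}                    # complements
        new |= {a | b for a, b in combinations(family, 2)}   # pairwise unions
        if new <= family:       # nothing further to add: family is closed
            return family
        family |= new

# The sigma-algebra on {1, 2, 3, 4} generated by the single set {1, 2}:
algebra = generated_sigma_algebra({1, 2, 3, 4}, [{1, 2}])
print(sorted(sorted(a) for a in algebra))
# [[], [1, 2], [1, 2, 3, 4], [3, 4]]
```

On infinite spaces such as R no such finite enumeration is possible, which is exactly why the transfinite construction described above is needed.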
In the construction by transfinite induction, it can be shown that, in each step, the number of sets is, at most, the cardinality of the continuum. So, the total number of Borel sets is less than or equal to ℵ 1 ⋅ 2 ℵ 0 = 2 ℵ 0 . {\displaystyle \aleph _{1}\cdot 2^{\aleph _{0}}\,=2^{\aleph _{0}}.} In fact, the cardinality of the collection of Borel sets is equal to that of the continuum (compare to the number of Lebesgue measurable sets that exist, which is strictly larger and equal to 2 2 ℵ 0 {\displaystyle 2^{2^{\aleph _{0}}}} ). == Standard Borel spaces and Kuratowski theorems == Let X be a topological space. The Borel space associated to X is the pair (X,B), where B is the σ-algebra of Borel sets of X. George Mackey defined a Borel space somewhat differently, writing that it is "a set together with a distinguished σ-field of subsets called its Borel sets." However, modern usage is to call the distinguished sub-algebra the measurable sets and such spaces measurable spaces. The reason for this distinction is that the Borel sets are the σ-algebra generated by open sets (of a topological space), whereas Mackey's definition refers to a set equipped with an arbitrary σ-algebra. There exist measurable spaces that are not Borel spaces, for any choice of topology on the underlying space. Measurable spaces form a category in which the morphisms are measurable functions between measurable spaces. A function f : X → Y {\displaystyle f:X\rightarrow Y} is measurable if it pulls back measurable sets, i.e., for all measurable sets B in Y, the set f − 1 ( B ) {\displaystyle f^{-1}(B)} is measurable in X. Theorem. Let X be a Polish space, that is, a topological space such that there is a metric d on X that defines the topology of X and that makes X a complete separable metric space. Then X as a Borel space is isomorphic to one of R, Z, or a finite space. (This result is reminiscent of Maharam's theorem.) Considered as Borel spaces, the real line R, the union of R with a countable set, and Rn are isomorphic. A standard Borel space is the Borel space associated to a Polish space. A standard Borel space is characterized up to isomorphism by its cardinality, and any uncountable standard Borel space has the cardinality of the continuum. For subsets of Polish spaces, Borel sets can be characterized as those sets that are the ranges of continuous injective maps defined on Polish spaces. Note, however, that the range of a continuous noninjective map may fail to be Borel. See analytic set. Every probability measure on a standard Borel space turns it into a standard probability space. == Non-Borel sets == An example of a subset of the reals that is non-Borel, due to Lusin, is described below. In contrast, an example of a non-measurable set cannot be exhibited, although the existence of such a set is implied, for example, by the axiom of choice. Every irrational number has a unique representation by an infinite simple continued fraction x = a 0 + 1 a 1 + 1 a 2 + 1 a 3 + 1 ⋱ {\displaystyle x=a_{0}+{\cfrac {1}{a_{1}+{\cfrac {1}{a_{2}+{\cfrac {1}{a_{3}+{\cfrac {1}{\ddots \,}}}}}}}}} where a 0 {\displaystyle a_{0}} is some integer and all the other numbers a k {\displaystyle a_{k}} are positive integers. 
Let A {\displaystyle A} be the set of all irrational numbers that correspond to sequences ( a 0 , a 1 , … ) {\displaystyle (a_{0},a_{1},\dots )} with the following property: there exists an infinite subsequence ( a k 0 , a k 1 , … ) {\displaystyle (a_{k_{0}},a_{k_{1}},\dots )} such that each element is a divisor of the next element. This set A {\displaystyle A} is not Borel. However, it is analytic (all Borel sets are also analytic), and complete in the class of analytic sets. For more details see descriptive set theory and the book by A. S. Kechris (see References), especially Exercise (27.2) on page 209, Definition (22.9) on page 169, Exercise (3.4)(ii) on page 14, and on page 196. It is important to note that, while the Zermelo–Fraenkel axioms (ZF) are sufficient to formalize the construction of A {\displaystyle A} , it cannot be proven in ZF alone that A {\displaystyle A} is non-Borel. In fact, it is consistent with ZF that R {\displaystyle \mathbb {R} } is a countable union of countable sets, so that any subset of R {\displaystyle \mathbb {R} } is a Borel set. Another non-Borel set is an inverse image f − 1 [ 0 ] {\displaystyle f^{-1}[0]} of an infinite parity function f : { 0 , 1 } ω → { 0 , 1 } {\displaystyle f\colon \{0,1\}^{\omega }\to \{0,1\}} . However, this is a proof of existence (via the axiom of choice), not an explicit example. == Alternative non-equivalent definitions == According to Paul Halmos, a subset of a locally compact Hausdorff topological space is called a Borel set if it belongs to the smallest σ-ring containing all compact sets. Norberg and Vervaat redefine the Borel algebra of a topological space X {\displaystyle X} as the σ {\displaystyle \sigma } -algebra generated by its open subsets and its compact saturated subsets. This definition is well-suited for applications in the case where X {\displaystyle X} is not Hausdorff. It coincides with the usual definition if X {\displaystyle X} is second countable or if every compact saturated subset is closed (which is the case in particular if X {\displaystyle X} is Hausdorff). == See also == Borel hierarchy Borel isomorphism Baire set Cylindrical σ-algebra Descriptive set theory – Subfield of mathematical logic Polish space – Concept in topology == Notes == == References == William Arveson, An Invitation to C*-algebras, Springer-Verlag, 1981. (See Chapter 3 for an excellent exposition of Polish topology) Richard Dudley, Real Analysis and Probability. Wadsworth, Brooks and Cole, 1989 Halmos, Paul R. (1950). Measure theory. D. van Nostrand Co. See especially Sect. 51 "Borel sets and Baire sets". Halsey Royden, Real Analysis, Prentice Hall, 1988 Alexander S. Kechris, Classical Descriptive Set Theory, Springer-Verlag, 1995 (Graduate texts in Math., vol. 156) == External links == "Borel set", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Formal definition of Borel Sets in the Mizar system, and the list of theorems Archived 2020-06-01 at the Wayback Machine that have been formally proved about it. Weisstein, Eric W. "Borel Set". MathWorld.
Wikipedia/Borel_algebra
This page lists articles related to probability theory. In particular, it lists many articles corresponding to specific probability distributions. Such articles are marked here by a code of the form (X:Y), which refers to number of random variables involved and the type of the distribution. For example (2:DC) indicates a distribution with two random variables, discrete or continuous. Other codes are just abbreviations for topics. The list of codes can be found in the table of contents. == Core probability: selected topics == Probability theory === Basic notions (bsc) === === Instructive examples (paradoxes) (iex) === === Moments (mnt) === === Inequalities (inq) === === Markov chains, processes, fields, networks (Mar) === === Gaussian random variables, vectors, functions (Gau) === === Conditioning (cnd) === === Specific distributions (spd) === === Empirical measure (emm) === === Limit theorems (lmt) === === Large deviations (lrd) === === Random graphs (rgr) === === Random matrices (rmt) === === Stochastic calculus (scl) === === Malliavin calculus (Mal) === === Random dynamical systems (rds) === Random dynamical system / scl Absorbing set Base flow Pullback attractor === Analytic aspects (including measure theoretic) (anl) === == Core probability: other articles, by number and type of random variables == === A single random variable (1:) === ==== Binary (1:B) ==== ==== Discrete (1:D) ==== ==== Continuous (1:C) ==== ==== Real-valued, arbitrary (1:R) ==== ==== Random point of a manifold (1:M) ==== Bertrand's paradox / (1:M) ==== General (random element of an abstract space) (1:G) ==== Pitman–Yor process / (1:G) Random compact set / (1:G) Random element / (1:G) === Two random variables (2:) === ==== Binary (2:B) ==== Coupling / (2:BRG) Craps principle / (2:B) ==== Discrete (2:D) ==== Kullback–Leibler divergence / (2:DCR) Mutual information / (23F:DC) ==== Continuous (2:C) ==== ==== Real-valued, arbitrary (2:R) ==== ==== General (random element of an abstract space) (2:G) ==== Coupling / (2:BRG) Lévy–Prokhorov metric / (2:G) Wasserstein metric / (2:G) === Three random variables (3:) === ==== Binary (3:B) ==== Pairwise independence / (3:B) (F:R) ==== Discrete (3:D) ==== Mutual information / (23F:DC) ==== Continuous (3:C) ==== Mutual information / (23F:DC) === Finitely many random variables (F:) === ==== Binary (F:B) ==== ==== Discrete (F:D) ==== ==== Continuous (F:C) ==== ==== Real-valued, arbitrary (F:R) ==== ==== General (random element of an abstract space) (F:G) ==== Finite-dimensional distribution / (FU:G) Hitting time / (FU:G) Stopped process / (FU:DG) === A large number of random variables (finite but tending to infinity) (L:) === ==== Binary (L:B) ==== Random walk / (FLS:BD) (U:C) ==== Discrete (L:D) ==== ==== Real-valued, arbitrary (L:R) ==== === An infinite sequence of random variables (S:) === ==== Binary (S:B) ==== ==== Discrete (S:D) ==== ==== Continuous (S:C) ==== ==== Real-valued, arbitrary (S:R) ==== ==== General (random element of an abstract space) (S:G) ==== === Uncountably many random variables (continuous-time processes etc) (U:) === ==== Discrete (U:D) ==== ==== Continuous (U:C) ==== ==== Real-valued, arbitrary (U:R) ==== ==== General (random element of an abstract space) (U:G) ==== == Around the core == === General aspects (grl) === === Foundations of probability theory (fnd) === === Gambling (gmb) === === Coincidence (cnc) === === Algorithmics (alg) === === Bayesian approach (Bay) === === Financial mathematics (fnc) === === Physics (phs) === === Genetics (gnt) === === 
Stochastic process (spr) === === Geometric probability (geo) === === Empirical findings (emp) === Benford's law Pareto principle === Historical (hst) === === Miscellany (msc) === == Counters of articles == Here k(n) means: n links to k articles. (Some articles are linked more than once.)
Wikipedia/Catalog_of_articles_in_probability_theory
In probability theory, an event is a subset of outcomes of an experiment (a subset of the sample space) to which a probability is assigned. A single outcome may be an element of many different events, and different events in an experiment are usually not equally likely, since they may include very different groups of outcomes. An event consisting of only a single outcome is called an elementary event or an atomic event; that is, it is a singleton set. An event that has more than one possible outcome is called a compound event. An event S {\displaystyle S} is said to occur if S {\displaystyle S} contains the outcome x {\displaystyle x} of the experiment (or trial) (that is, if x ∈ S {\displaystyle x\in S} ). The probability (with respect to some probability measure) that an event S {\displaystyle S} occurs is the probability that S {\displaystyle S} contains the outcome x {\displaystyle x} of an experiment (that is, it is the probability that x ∈ S {\displaystyle x\in S} ). An event defines a complementary event, namely the complementary set (the event not occurring), and together these define a Bernoulli trial: did the event occur or not? Typically, when the sample space is finite, any subset of the sample space is an event (that is, all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events (see § Events in probability spaces, below). == A simple example == If we assemble a deck of 52 playing cards with no jokers, and draw a single card from the deck, then the sample space is a 52-element set, as each card is a possible outcome. An event, however, is any subset of the sample space, including any singleton set (an elementary event), the empty set (an impossible event, with probability zero) and the sample space itself (a certain event, with probability one). Other events are proper subsets of the sample space that contain multiple elements. So, for example, potential events include: "Red and black at the same time without being a joker" (0 elements), "The 5 of Hearts" (1 element), "A King" (4 elements), "A Face card" (12 elements), "A Spade" (13 elements), "A Face card or a red suit" (32 elements), "A card" (52 elements). Since all events are sets, they are usually written as sets (for example, {1, 2, 3}), and represented graphically using Venn diagrams. In the situation where each outcome in the sample space Ω is equally likely, the probability P {\displaystyle P} of an event A {\displaystyle A} is the following formula: P ( A ) = | A | | Ω | ( alternatively: Pr ( A ) = | A | | Ω | ) {\displaystyle \mathrm {P} (A)={\frac {|A|}{|\Omega |}}\,\ \left({\text{alternatively:}}\ \Pr(A)={\frac {|A|}{|\Omega |}}\right)} This rule can readily be applied to each of the example events above. == Events in probability spaces == Defining all subsets of the sample space as events works well when there are only finitely many outcomes, but gives rise to problems when the sample space is infinite. For many standard probability distributions, such as the normal distribution, the sample space is the set of real numbers or some subset of the real numbers. Attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers 'badly behaved' sets, such as those that are nonmeasurable. 
Hence, it is necessary to restrict attention to a more limited family of subsets. For the standard tools of probability theory, such as joint and conditional probabilities, to work, it is necessary to use a σ-algebra, that is, a family closed under complementation and countable unions of its members. The most natural choice of σ-algebra is the Borel measurable sets, derived from unions and intersections of intervals. However, the larger class of Lebesgue measurable sets proves more useful in practice. In the general measure-theoretic description of probability spaces, an event may be defined as an element of a selected 𝜎-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the 𝜎-algebra is not an event, and does not have a probability. With a reasonable specification of the probability space, however, all events of interest are elements of the 𝜎-algebra. == A note on notation == Even though events are subsets of some sample space Ω , {\displaystyle \Omega ,} they are often written as predicates or indicators involving random variables. For example, if X {\displaystyle X} is a real-valued random variable defined on the sample space Ω , {\displaystyle \Omega ,} the event { ω ∈ Ω ∣ u < X ( ω ) ≤ v } {\displaystyle \{\omega \in \Omega \mid u<X(\omega )\leq v\}\,} can be written more conveniently as, simply, u < X ≤ v . {\displaystyle u<X\leq v\,.} This is especially common in formulas for a probability, such as Pr ( u < X ≤ v ) = F ( v ) − F ( u ) . {\displaystyle \Pr(u<X\leq v)=F(v)-F(u)\,.} The set u < X ≤ v {\displaystyle u<X\leq v} is an example of an inverse image under the mapping X {\displaystyle X} because ω ∈ X − 1 ( ( u , v ] ) {\displaystyle \omega \in X^{-1}((u,v])} if and only if u < X ( ω ) ≤ v . {\displaystyle u<X(\omega )\leq v.} == See also == Atom (measure theory) – A measurable set with positive measure that contains no subset of smaller positive measure Complementary event – Opposite of a probability event Elementary event – Event that contains only one outcome Independent event – When the occurrence of one event does not affect the likelihood of another Outcome (probability) – Possible result of an experiment or trial Pairwise independent events – Set of random variables of which any two are independent == Notes == == External links == "Random event", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Formal definition in the Mizar system.
Wikipedia/Event_(probability_theory)
Predictive modelling uses statistics to predict outcomes. Most often the event one wants to predict is in the future, but predictive modelling can be applied to any type of unknown event, regardless of when it occurred. For example, predictive models are often used to detect crimes and identify suspects, after the crime has taken place. In many cases, the model is chosen on the basis of detection theory to try to guess the probability of an outcome given a set amount of input data, for example, given an email, determining how likely it is to be spam. Models can use one or more classifiers in trying to determine the probability of a set of data belonging to another set. For example, a model might be used to determine whether an email is spam or "ham" (non-spam). Depending on definitional boundaries, predictive modelling is synonymous with, or largely overlapping with, the field of machine learning, as it is more commonly referred to in academic or research and development contexts. When deployed commercially, predictive modelling is often referred to as predictive analytics. Predictive modelling is often contrasted with causal modelling/analysis. In the former, one may be entirely satisfied to make use of indicators of, or proxies for, the outcome of interest. In the latter, one seeks to determine true cause-and-effect relationships. This distinction has given rise to a burgeoning literature in the fields of research methods and statistics and to the common statement that "correlation does not imply causation". == Models == Nearly any statistical model can be used for prediction purposes. Broadly speaking, there are two classes of predictive models: parametric and non-parametric. A third class, semi-parametric models, includes features of both. Parametric models make "specific assumptions with regard to one or more of the population parameters that characterize the underlying distribution(s)". Non-parametric models "typically involve fewer assumptions of structure and distributional form [than parametric models] but usually contain strong assumptions about independencies". == Applications == === Uplift modelling === Uplift modelling is a technique for modelling the change in probability caused by an action. Typically this is a marketing action such as an offer to buy a product, to use a product more or to re-sign a contract. For example, in a retention campaign you wish to predict the change in probability that a customer will remain a customer if they are contacted. A model of the change in probability allows the retention campaign to be targeted at those customers on whom the change in probability will be beneficial. This allows the retention programme to avoid triggering unnecessary churn or customer attrition without wasting money contacting people who would act anyway. === Archaeology === Predictive modelling in archaeology gets its foundations from Gordon Willey's mid-fifties work in the Virú Valley of Peru. Complete, intensive surveys were performed, and then the covariability between cultural remains and natural features such as slope and vegetation was determined. Development of quantitative methods and a greater availability of applicable data led to growth of the discipline in the 1960s and by the late 1980s, substantial progress had been made by major land managers worldwide. 
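Concretely, an archaeological predictive model of this kind often takes the form of a regression of recorded site presence on environmental covariates. The sketch below is a minimal illustration with invented survey data (all numbers are hypothetical, and scikit-learn's LogisticRegression stands in for whatever model a real project would choose):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical survey records: [slope in degrees, vegetation cover 0-1],
# with y = 1 where archaeological features were found, 0 where none were.
X = np.array([[2, 0.8], [3, 0.7], [15, 0.2], [20, 0.1],
              [5, 0.6], [18, 0.3], [1, 0.9], [12, 0.4]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score unsurveyed locations by predicted probability of site presence,
# i.e. their "archaeological sensitivity":
candidates = np.array([[4, 0.75], [17, 0.15]])
print(model.predict_proba(candidates)[:, 1])  # higher = more sensitive
```

Real applications fit such models to large survey databases and use many more proxies than the two shown here.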
Generally, predictive modelling in archaeology means establishing statistically valid causal or covariable relationships between natural proxies such as soil types, elevation, slope, vegetation, proximity to water, geology, geomorphology, etc., and the presence of archaeological features. Through analysis of these quantifiable attributes from land that has undergone archaeological survey, sometimes the "archaeological sensitivity" of unsurveyed areas can be anticipated based on the natural proxies in those areas. Large land managers in the United States, such as the Bureau of Land Management (BLM), the Department of Defense (DOD), and numerous highway and parks agencies, have successfully employed this strategy. By using predictive modelling in their cultural resource management plans, they are capable of making more informed decisions when planning for activities that have the potential to require ground disturbance and subsequently affect archaeological sites. === Customer relationship management === Predictive modelling is used extensively in analytical customer relationship management and data mining to produce customer-level models that describe the likelihood that a customer will take a particular action. The actions are usually sales, marketing and customer retention related. For example, a large consumer organization such as a mobile telecommunications operator will have a set of predictive models for product cross-sell, product deep-sell (or upselling) and churn. It is also now more common for such an organization to have a model of savability using an uplift model. This predicts the likelihood that a customer can be saved at the end of a contract period (the change in churn probability) as opposed to the standard churn prediction model. === Auto insurance === Predictive modelling is utilised in vehicle insurance to assign risk of incidents to policy holders from information obtained from policy holders. This is extensively employed in usage-based insurance solutions where predictive models utilise telemetry-based data to build a model of predictive risk for claim likelihood. Black-box auto insurance predictive models utilise GPS or accelerometer sensor input only. Some models include a wide range of predictive input beyond basic telemetry including advanced driving behaviour, independent crash records, road history, and user profiles to provide improved risk models. === Health care === In 2009 Parkland Health & Hospital System began analyzing electronic medical records in order to use predictive modeling to help identify patients at high risk of readmission. Initially, the hospital focused on patients with congestive heart failure, but the program has expanded to include patients with diabetes, acute myocardial infarction, and pneumonia. In 2018, Banerjee et al. proposed a deep learning model for estimating short-term life expectancy (>3 months) of the patients by analyzing free-text clinical notes in the electronic medical record, while maintaining the temporal visit sequence. The model was trained on a large dataset (10,293 patients) and validated on a separate dataset (1,818 patients). It achieved an area under the ROC (Receiver Operating Characteristic) curve of 0.89. To provide explainability, they developed an interactive graphical tool that may improve physician understanding of the basis for the model's predictions. 
The high accuracy and explainability of the PPES-Met model may enable the model to be used as a decision support tool to personalize metastatic cancer treatment and provide valuable assistance to physicians. The first clinical prediction model reporting guidelines were published in 2015 (Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD)), and have since been updated. Predictive modelling has been used to estimate surgery duration. === Algorithmic trading === Predictive modeling in trading is a modeling process wherein the probability of an outcome is predicted using a set of predictor variables. Predictive models can be built for different assets like stocks, futures, currencies, commodities etc. Predictive modeling is still extensively used by trading firms to devise strategies and trade. It utilizes mathematically advanced software to evaluate indicators on price, volume, open interest and other historical data, to discover repeatable patterns. === Lead tracking systems === Predictive modelling gives lead generators a head start by forecasting data-driven outcomes for each potential campaign. This method saves time and exposes potential blind spots to help clients make smarter decisions. === Notable failures of predictive modeling === Although not widely discussed by the mainstream predictive modeling community, predictive modeling is a methodology that has been widely used in the financial industry in the past and some of the major failures contributed to the 2008 financial crisis. These failures exemplify the danger of relying exclusively on models that are essentially backward looking in nature. The following examples are by no means a complete list: Bond rating. S&P, Moody's and Fitch quantify the probability of default of bonds with a discrete variable called the rating. The rating can take on discrete values from AAA down to D. The rating is a predictor of the risk of default based on a variety of variables associated with the borrower and historical macroeconomic data. The rating agencies failed with their ratings on the US$600 billion mortgage backed Collateralized Debt Obligation (CDO) market. Almost the entire AAA sector (and the super-AAA sector, a new rating the rating agencies provided to represent super safe investment) of the CDO market defaulted or was severely downgraded during 2008, many of which obtained their ratings less than a year previously. So far, no statistical models that attempt to predict equity market prices based on historical data are considered to consistently make correct predictions over the long term. One particularly memorable failure is that of Long Term Capital Management, a fund that hired highly qualified analysts, including a Nobel Memorial Prize in Economic Sciences winner, to develop a sophisticated statistical model that predicted the price spreads between different securities. The models produced impressive profits until a major debacle that caused the then Federal Reserve chairman Alan Greenspan to step in to broker a rescue plan by the Wall Street broker dealers in order to prevent a meltdown of the bond market. 
Unknown unknowns are an issue. In all data collection, the collector first defines the set of variables for which data is collected. However, no matter how extensive the collector considers his/her selection of the variables, there is always the possibility of new variables that have not been considered or even defined, yet are critical to the outcome. Algorithms can be defeated adversarially. After an algorithm becomes an accepted standard of measurement, it can be taken advantage of by people who understand the algorithm and have the incentive to fool or manipulate the outcome. This is what happened to the CDO rating described above. The CDO dealers actively tailored the inputs to the rating agencies' models to reach an AAA or super-AAA rating on the CDOs they were issuing, by cleverly manipulating variables that were "unknown" to the rating agencies' "sophisticated" models. == See also == Calibration (statistics) Prediction interval Predictive analytics Predictive inference Statistical learning theory Statistical model == References == == Further reading == Clarke, Bertrand S.; Clarke, Jennifer L. (2018), Predictive Statistics, Cambridge University Press Iglesias, Pilar; Sandoval, Mônica C.; Pereira, Carlos Alberto de Bragança (1993), "Predictive likelihood in finite populations", Brazilian Journal of Probability and Statistics, 7 (1): 65–82, JSTOR 43600831 Kelleher, John D.; Mac Namee, Brian; D'Arcy, Aoife (2015), Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples and Case Studies, MIT Press Kuhn, Max; Johnson, Kjell (2013), Applied Predictive Modeling, Springer Shmueli, G. (2010), "To explain or to predict?", Statistical Science, 25 (3): 289–310, arXiv:1101.0891, doi:10.1214/10-STS330, S2CID 15900983
Wikipedia/Predictive_modelling
In abstract algebra, an automorphism of a Lie algebra g {\displaystyle {\mathfrak {g}}} is an isomorphism from g {\displaystyle {\mathfrak {g}}} to itself, that is, a bijective linear map preserving the Lie bracket. The automorphisms of g {\displaystyle {\mathfrak {g}}} form a group denoted Aut ⁡ g {\displaystyle \operatorname {Aut} {\mathfrak {g}}} , the automorphism group of g {\displaystyle {\mathfrak {g}}} . == Inner and outer automorphisms == The subgroup of Aut ⁡ g {\displaystyle \operatorname {Aut} {\mathfrak {g}}} generated using the adjoint action e ad ⁡ ( x ) , x ∈ g {\displaystyle e^{\operatorname {ad} (x)},x\in {\mathfrak {g}}} is called the inner automorphism group of g {\displaystyle {\mathfrak {g}}} . The group is denoted Aut 0 ⁡ ( g ) {\displaystyle \operatorname {Aut} ^{0}({\mathfrak {g}})} . These form a normal subgroup in the group of automorphisms, and the quotient Aut ⁡ ( g ) / Aut 0 ⁡ ( g ) {\displaystyle \operatorname {Aut} ({\mathfrak {g}})/\operatorname {Aut} ^{0}({\mathfrak {g}})} is known as the outer automorphism group. === Diagram automorphisms === It is known that the outer automorphism group for a simple Lie algebra g {\displaystyle {\mathfrak {g}}} is isomorphic to the group of diagram automorphisms for the corresponding Dynkin diagram in the classification of Lie algebras. The only algebras with non-trivial outer automorphism group are therefore A n ( n ≥ 2 ) {\displaystyle A_{n}(n\geq 2)} , D n {\displaystyle D_{n}} and E 6 {\displaystyle E_{6}} . There are ways to concretely realize these automorphisms in the matrix representations of these algebras. For A n = s l ( n + 1 , C ) {\displaystyle A_{n}={\mathfrak {sl}}(n+1,\mathbb {C} )} , the automorphism can be realized as the negative transpose. For D n = s o ( 2 n ) {\displaystyle D_{n}={\mathfrak {so}}(2n)} , the automorphism is obtained by conjugating by an orthogonal matrix in O ( 2 n ) {\displaystyle O(2n)} with determinant −1. == Derivations == A derivation on a Lie algebra is a linear map δ : g → g {\displaystyle \delta :{\mathfrak {g}}\rightarrow {\mathfrak {g}}} satisfying the Leibniz rule δ [ X , Y ] = [ δ X , Y ] + [ X , δ Y ] . {\displaystyle \delta [X,Y]=[\delta X,Y]+[X,\delta Y].} The set of derivations on a Lie algebra g {\displaystyle {\mathfrak {g}}} is denoted Der ⁡ g {\displaystyle \operatorname {Der} {\mathfrak {g}}} , and is a subalgebra of the endomorphisms on g {\displaystyle {\mathfrak {g}}} , that is Der ⁡ g < End ⁡ g {\displaystyle \operatorname {Der} {\mathfrak {g}}<\operatorname {End} {\mathfrak {g}}} . They inherit a Lie algebra structure from the Lie algebra structure on the endomorphism algebra, and closure of the bracket follows from the Leibniz rule. Due to the Jacobi identity, it can be shown that the image of the adjoint representation ad : g → End ⁡ g {\displaystyle \operatorname {ad} :{\mathfrak {g}}\rightarrow \operatorname {End} {\mathfrak {g}}} lies in Der ⁡ g {\displaystyle \operatorname {Der} {\mathfrak {g}}} . Through the Lie group-Lie algebra correspondence, the Lie group of automorphisms Aut ⁡ g {\displaystyle \operatorname {Aut} {\mathfrak {g}}} corresponds to the Lie algebra of derivations Der ⁡ g {\displaystyle \operatorname {Der} {\mathfrak {g}}} . For g {\displaystyle {\mathfrak {g}}} finite-dimensional and semisimple, all derivations are inner. == Examples == For each g {\displaystyle g} in a Lie group G {\displaystyle G} , let Ad g {\displaystyle \operatorname {Ad} _{g}} denote the differential at the identity of the conjugation by g {\displaystyle g} . 
Then Ad g {\displaystyle \operatorname {Ad} _{g}} is an automorphism of g = Lie ⁡ ( G ) {\displaystyle {\mathfrak {g}}=\operatorname {Lie} (G)} , the adjoint action by g {\displaystyle g} . == Theorems == The Borel–Morozov theorem states that every solvable subalgebra of a complex semisimple Lie algebra g {\displaystyle {\mathfrak {g}}} can be mapped by an inner automorphism of g {\displaystyle {\mathfrak {g}}} into a subalgebra of h ⊕ ⨁ α > 0 g α =: h ⊕ g + {\displaystyle {\mathfrak {h}}\oplus \bigoplus _{\alpha >0}{\mathfrak {g}}_{\alpha }=:{\mathfrak {h}}\oplus {\mathfrak {g}}^{+}} , where h {\displaystyle {\mathfrak {h}}} is a Cartan subalgebra of g {\displaystyle {\mathfrak {g}}} and the g α {\displaystyle {\mathfrak {g}}_{\alpha }} are root spaces. In particular, it says that h ⊕ g + {\displaystyle {\mathfrak {h}}\oplus {\mathfrak {g}}^{+}} is a maximal solvable subalgebra (that is, a Borel subalgebra). == References == E. Cartan, Le principe de dualité et la théorie des groupes simples et semi-simples. Bull. Sc. math. 49, 1925, pp. 361–374. Humphreys, James (1972). Introduction to Lie algebras and Representation Theory. Springer. ISBN 0387900535. Serre, Jean-Pierre (2000), Algèbres de Lie semi-simples complexes [Complex Semisimple Lie Algebras], translated by Jones, G. A., Springer, ISBN 978-3-540-67827-4.
Wikipedia/Automorphism_of_a_Lie_algebra
In mathematics, a Lie algebra g {\displaystyle {\mathfrak {g}}} is nilpotent if its lower central series terminates in the zero subalgebra. The lower central series is the sequence of subalgebras g ≥ [ g , g ] ≥ [ g , [ g , g ] ] ≥ [ g , [ g , [ g , g ] ] ] ≥ . . . {\displaystyle {\mathfrak {g}}\geq [{\mathfrak {g}},{\mathfrak {g}}]\geq [{\mathfrak {g}},[{\mathfrak {g}},{\mathfrak {g}}]]\geq [{\mathfrak {g}},[{\mathfrak {g}},[{\mathfrak {g}},{\mathfrak {g}}]]]\geq ...} We write g 0 = g {\displaystyle {\mathfrak {g}}_{0}={\mathfrak {g}}} , and g n = [ g , g n − 1 ] {\displaystyle {\mathfrak {g}}_{n}=[{\mathfrak {g}},{\mathfrak {g}}_{n-1}]} for all n > 0 {\displaystyle n>0} . If the lower central series eventually arrives at the zero subalgebra, then the Lie algebra is called nilpotent. The lower central series for Lie algebras is analogous to the lower central series in group theory, and nilpotent Lie algebras are analogs of nilpotent groups. The nilpotent Lie algebras are precisely those that can be obtained from abelian Lie algebras by successive central extensions. Note that the definition means that, viewed as a non-associative non-unital algebra, a Lie algebra g {\displaystyle {\mathfrak {g}}} is nilpotent if it is nilpotent as an ideal. == Definition == Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra. One says that g {\displaystyle {\mathfrak {g}}} is nilpotent if the lower central series terminates, i.e. if g n = 0 {\displaystyle {\mathfrak {g}}_{n}=0} for some n ∈ N . {\displaystyle n\in \mathbb {N} .} Explicitly, this means that [ X 1 , [ X 2 , [ ⋯ [ X n , Y ] ⋯ ] ] ] = a d X 1 a d X 2 ⋯ a d X n Y = 0 {\displaystyle [X_{1},[X_{2},[\cdots [X_{n},Y]\cdots ]]]=\mathrm {ad} _{X_{1}}\mathrm {ad} _{X_{2}}\cdots \mathrm {ad} _{X_{n}}Y=0} ∀ X 1 , X 2 , … , X n , Y ∈ g , ( 1 ) {\displaystyle \forall X_{1},X_{2},\ldots ,X_{n},Y\in {\mathfrak {g}},\qquad (1)} so that adX1adX2 ⋅⋅⋅ adXn = 0. == Equivalent conditions == A very special consequence of (1) is that [ X , [ X , [ ⋯ [ X , Y ] ⋯ ] ] ] = a d X n Y ∈ g n = 0 ∀ X , Y ∈ g . ( 2 ) {\displaystyle [X,[X,[\cdots [X,Y]\cdots ]]]={\mathrm {ad} _{X}}^{n}Y\in {\mathfrak {g}}_{n}=0\quad \forall X,Y\in {\mathfrak {g}}.\qquad (2)} Thus (adX)n = 0 for all X ∈ g {\displaystyle X\in {\mathfrak {g}}} . That is, adX is a nilpotent endomorphism in the usual sense of linear endomorphisms (rather than of Lie algebras). We call such an element X in g {\displaystyle {\mathfrak {g}}} ad-nilpotent. Remarkably, if g {\displaystyle {\mathfrak {g}}} is finite dimensional, the apparently much weaker condition (2) is actually equivalent to (1), as stated by Engel's theorem (which we will not prove here): a finite-dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} is nilpotent if and only if all elements of g {\displaystyle {\mathfrak {g}}} are ad-nilpotent. A somewhat easier equivalent condition for the nilpotency of g {\displaystyle {\mathfrak {g}}} : g {\displaystyle {\mathfrak {g}}} is nilpotent if and only if a d g {\displaystyle \mathrm {ad} \,{\mathfrak {g}}} is nilpotent (as a Lie algebra). To see this, first observe that (1) implies that a d g {\displaystyle \mathrm {ad} \,{\mathfrak {g}}} is nilpotent, since the expansion of an (n − 1)-fold nested bracket will consist of terms of the form in (1). 
Conversely, one may write [ [ ⋯ [ X n , X n − 1 ] , ⋯ , X 2 ] , X 1 ] = a d [ ⋯ [ X n , X n − 1 ] , ⋯ , X 2 ] ( X 1 ) , {\displaystyle [[\cdots [X_{n},X_{n-1}],\cdots ,X_{2}],X_{1}]=\mathrm {ad} [\cdots [X_{n},X_{n-1}],\cdots ,X_{2}](X_{1}),} and since ad is a Lie algebra homomorphism, a d [ ⋯ [ X n , X n − 1 ] , ⋯ , X 2 ] = [ a d [ ⋯ [ X n , X n − 1 ] , ⋯ X 3 ] , a d X 2 ] = … = [ ⋯ [ a d X n , a d X n − 1 ] , ⋯ a d X 2 ] . {\displaystyle {\begin{aligned}\mathrm {ad} [\cdots [X_{n},X_{n-1}],\cdots ,X_{2}]&=[\mathrm {ad} [\cdots [X_{n},X_{n-1}],\cdots X_{3}],\mathrm {ad} _{X_{2}}]\\&=\ldots =[\cdots [\mathrm {ad} _{X_{n}},\mathrm {ad} _{X_{n-1}}],\cdots \mathrm {ad} _{X_{2}}].\end{aligned}}} If a d g {\displaystyle \mathrm {ad} \,{\mathfrak {g}}} is nilpotent, the last expression is zero for large enough n, and accordingly so is the first. But this implies (1), so g {\displaystyle {\mathfrak {g}}} is nilpotent. Also, a finite-dimensional Lie algebra is nilpotent if and only if there exists a descending chain of ideals g = g 0 ⊃ g 1 ⊃ ⋯ ⊃ g n = 0 {\displaystyle {\mathfrak {g}}={\mathfrak {g}}_{0}\supset {\mathfrak {g}}_{1}\supset \cdots \supset {\mathfrak {g}}_{n}=0} such that [ g , g i ] ⊂ g i + 1 {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}_{i}]\subset {\mathfrak {g}}_{i+1}} . == Examples == === Strictly upper triangular matrices === If g l ( k , R ) {\displaystyle {\mathfrak {gl}}(k,\mathbb {R} )} is the set of k × k matrices with entries in R {\displaystyle \mathbb {R} } , then the subalgebra consisting of strictly upper triangular matrices is a nilpotent Lie algebra. === Heisenberg algebras === A Heisenberg algebra is nilpotent. For example, in dimension 3, the commutator of two matrices [ [ 0 a b 0 0 c 0 0 0 ] , [ 0 a ′ b ′ 0 0 c ′ 0 0 0 ] ] = [ 0 0 a ″ 0 0 0 0 0 0 ] {\displaystyle \left[{\begin{bmatrix}0&a&b\\0&0&c\\0&0&0\end{bmatrix}},{\begin{bmatrix}0&a'&b'\\0&0&c'\\0&0&0\end{bmatrix}}\right]={\begin{bmatrix}0&0&a''\\0&0&0\\0&0&0\end{bmatrix}}} where a ″ = a c ′ − a ′ c {\displaystyle a''=ac'-a'c} . === Cartan subalgebras === A Cartan subalgebra c {\displaystyle {\mathfrak {c}}} of a Lie algebra l {\displaystyle {\mathfrak {l}}} is nilpotent and self-normalizing. The self-normalizing condition means that c {\displaystyle {\mathfrak {c}}} is its own normalizer in l {\displaystyle {\mathfrak {l}}} , that is, c = N l ( c ) = { x ∈ l : [ x , c ] ∈ c for all c ∈ c } {\displaystyle {\mathfrak {c}}=N_{\mathfrak {l}}({\mathfrak {c}})=\{x\in {\mathfrak {l}}:[x,c]\in {\mathfrak {c}}{\text{ for all }}c\in {\mathfrak {c}}\}} . For example, the diagonal matrices d ( n ) {\displaystyle {\mathfrak {d}}(n)} form a Cartan subalgebra of g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} ; the upper triangular matrices t ( n ) {\displaystyle {\mathfrak {t}}(n)} are likewise self-normalizing in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} , but for n ≥ 2 they are not nilpotent and hence not a Cartan subalgebra. === Other examples === If a Lie algebra g {\displaystyle {\mathfrak {g}}} has an automorphism of prime period with no fixed points except at 0, then g {\displaystyle {\mathfrak {g}}} is nilpotent. == Properties == === Nilpotent Lie algebras are solvable === Every nilpotent Lie algebra is solvable. This is useful in proving the solvability of a Lie algebra since, in practice, it is usually easier to prove nilpotency (when it holds!) rather than solvability. However, in general, the converse of this property is false. For example, the subalgebra of g l ( k , R ) {\displaystyle {\mathfrak {gl}}(k,\mathbb {R} )} (k ≥ 2) consisting of upper triangular matrices, b ( k , R ) {\displaystyle {\mathfrak {b}}(k,\mathbb {R} )} , is solvable but not nilpotent. 
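The contrast between solvability and nilpotency of b(k, ℝ) can be verified by direct computation. The following is a minimal numerical sketch, assuming Python with NumPy (the helper functions are ours, not taken from the cited references): it spans commutators of basis matrices and tracks the dimensions of the derived series and of the lower central series of b(3, ℝ).

```python
import numpy as np

def bracket(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

def span_basis(mats, tol=1e-10):
    """Return a basis of the linear span of a list of matrices, via SVD rank."""
    if not mats:
        return []
    M = np.array([m.flatten() for m in mats])
    _, sv, vt = np.linalg.svd(M)
    rank = int((sv > tol).sum())
    return [vt[r].reshape(mats[0].shape) for r in range(rank)]

def bracket_span(basis1, basis2):
    """Basis of the span of all brackets [a, b], a in basis1, b in basis2."""
    return span_basis([bracket(a, b) for a in basis1 for b in basis2])

def series_dims(basis, step):
    """Dimensions along the series basis, step(basis), step(step(basis)), ..."""
    dims, cur = [len(basis)], basis
    while True:
        cur = step(cur)
        dims.append(len(cur))
        if dims[-1] in (0, dims[-2]):
            return dims

# Basis of b(3, R): all upper triangular 3x3 matrices (dimension 6).
b3 = []
for i in range(3):
    for j in range(i, 3):
        E = np.zeros((3, 3))
        E[i, j] = 1.0
        b3.append(E)

# Derived series g, [g,g], [[g,g],[g,g]], ... reaches 0: solvable.
print(series_dims(b3, lambda g: bracket_span(g, g)))    # [6, 3, 1, 0]
# Lower central series g, [g,g], [g,[g,g]], ... stalls: not nilpotent.
print(series_dims(b3, lambda g: bracket_span(b3, g)))   # [6, 3, 3]
```

The derived series dimensions fall 6, 3, 1, 0, while the lower central series gets stuck at the strictly upper triangular part of dimension 3, which is exactly the distinction drawn above.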
=== Subalgebras and images === If a Lie algebra g {\displaystyle {\mathfrak {g}}} is nilpotent, then all subalgebras and homomorphic images are nilpotent. === Nilpotency of the quotient by the center === If the quotient algebra g / Z ( g ) {\displaystyle {\mathfrak {g}}/Z({\mathfrak {g}})} , where Z ( g ) {\displaystyle Z({\mathfrak {g}})} is the center of g {\displaystyle {\mathfrak {g}}} , is nilpotent, then so is g {\displaystyle {\mathfrak {g}}} . This is to say that a central extension of a nilpotent Lie algebra by a nilpotent Lie algebra is nilpotent. === Engel's theorem === Engel's theorem: A finite dimensional Lie algebra g {\displaystyle {\mathfrak {g}}} is nilpotent if and only if all elements of g {\displaystyle {\mathfrak {g}}} are ad-nilpotent. === Zero Killing form === The Killing form of a nilpotent Lie algebra is 0. === Have outer automorphisms === A nonzero nilpotent Lie algebra has an outer automorphism, that is, an automorphism that is not in the image of Ad. === Derived subalgebras of solvable Lie algebras === The derived subalgebra of a finite dimensional solvable Lie algebra over a field of characteristic 0 is nilpotent. == See also == Solvable Lie algebra == Notes == == References == Fulton, W.; Harris, J. (1991). Representation theory. A first course. Graduate Texts in Mathematics. Vol. 129. New York: Springer-Verlag. ISBN 978-0-387-97527-6. MR 1153249. Humphreys, James E. (1972). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9. New York: Springer-Verlag. ISBN 0-387-90053-5. Knapp, A. W. (2002). Lie groups beyond an introduction. Progress in Mathematics. Vol. 120 (2nd ed.). Boston·Basel·Berlin: Birkhäuser. ISBN 0-8176-4259-5. Serre, Jean-Pierre (2000), Algèbres de Lie semi-simples complexes [Complex Semisimple Lie Algebras], translated by Jones, G. A., Springer, ISBN 978-3-540-67827-4.
Wikipedia/Nilpotent_Lie_algebra
In mathematics, a restricted Lie algebra (or p-Lie algebra) is a Lie algebra over a field of characteristic p>0 together with an additional "pth power" operation. Most naturally occurring Lie algebras in characteristic p come with this structure, because the Lie algebra of a group scheme over a field of characteristic p is restricted. == Definition == Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra over a field k of characteristic p>0. The adjoint representation of g {\displaystyle {\mathfrak {g}}} is defined by ( ad X ) ( Y ) = [ X , Y ] {\displaystyle ({\text{ad }}X)(Y)=[X,Y]} for X , Y ∈ g {\displaystyle X,Y\in {\mathfrak {g}}} . A p-mapping on g {\displaystyle {\mathfrak {g}}} is a function from g {\displaystyle {\mathfrak {g}}} to itself, X ↦ X [ p ] {\displaystyle X\mapsto X^{[p]}} , satisfying: a d ( X [ p ] ) = ( a d X ) p {\displaystyle \mathrm {ad} (X^{[p]})=(\mathrm {ad} \;X)^{p}} for all X ∈ g {\displaystyle X\in {\mathfrak {g}}} , ( t X ) [ p ] = t p X [ p ] {\displaystyle (tX)^{[p]}=t^{p}X^{[p]}} for all t ∈ k {\displaystyle t\in k} and X ∈ g {\displaystyle X\in {\mathfrak {g}}} , ( X + Y ) [ p ] = X [ p ] + Y [ p ] + ∑ i = 1 p − 1 s i ( X , Y ) {\displaystyle (X+Y)^{[p]}=X^{[p]}+Y^{[p]}+\sum _{i=1}^{p-1}s_{i}(X,Y)} for all X , Y ∈ g {\displaystyle X,Y\in {\mathfrak {g}}} , where s i ( X , Y ) {\displaystyle s_{i}(X,Y)} is 1 / i {\displaystyle 1/i} times the coefficient of t i − 1 {\displaystyle t^{i-1}} in the formal expression ( a d ( t X + Y ) ) p − 1 ( X ) {\displaystyle (\mathrm {ad} \;(tX+Y))^{p-1}(X)} . Nathan Jacobson (1937) defined a restricted Lie algebra over k to be a Lie algebra over k together with a p-mapping. A Lie algebra is said to be restrictable if it has at least one p-mapping. By the first property above, in a restricted Lie algebra, the derivation ( a d X ) p {\displaystyle (\mathrm {ad} \;X)^{p}} of g {\displaystyle {\mathfrak {g}}} is inner for each X ∈ g {\displaystyle X\in {\mathfrak {g}}} . In fact, a Lie algebra is restrictable if and only if the derivation ( a d X ) p {\displaystyle (\mathrm {ad} \;X)^{p}} of g {\displaystyle {\mathfrak {g}}} is inner for each X ∈ g {\displaystyle X\in {\mathfrak {g}}} . For example: For p = 2, a restricted Lie algebra has ( X + Y ) [ 2 ] = X [ 2 ] + [ Y , X ] + Y [ 2 ] {\displaystyle (X+Y)^{[2]}=X^{[2]}+[Y,X]+Y^{[2]}} . For p = 3, a restricted Lie algebra has ( X + Y ) [ 3 ] = X [ 3 ] + 2 [ X , [ Y , X ] ] + [ Y , [ Y , X ] ] + Y [ 3 ] {\displaystyle (X+Y)^{[3]}=X^{[3]}+2[X,[Y,X]]+[Y,[Y,X]]+Y^{[3]}} . == Examples == For an associative algebra A over a field k of characteristic p>0, the commutator [ X , Y ] := X Y − Y X {\displaystyle [X,Y]:=XY-YX} and the p-mapping X [ p ] := X p {\displaystyle X^{[p]}:=X^{p}} make A into a restricted Lie algebra. In particular, taking A to be the ring of n x n matrices shows that the Lie algebra g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} of n x n matrices over k is a restricted Lie algebra, with the p-mapping being the pth power of a matrix. This "explains" the definition of a restricted Lie algebra: the complicated formula for ( X + Y ) [ p ] {\displaystyle (X+Y)^{[p]}} is needed to express the pth power of the sum of two matrices over k, ( X + Y ) p {\displaystyle (X+Y)^{p}} , given that X and Y typically do not commute. Let A be an algebra over a field k. (Here A is a possibly non-associative algebra.) 
Then the derivations of A over k form a Lie algebra Der k ( A ) {\displaystyle {\text{Der}}_{k}(A)} , with the Lie bracket being the commutator, [ D 1 , D 2 ] := D 1 D 2 − D 2 D 1 {\displaystyle [D_{1},D_{2}]:=D_{1}D_{2}-D_{2}D_{1}} . When k has characteristic p>0, then iterating a derivation p times yields a derivation, and this makes Der k ( A ) {\displaystyle {\text{Der}}_{k}(A)} into a restricted Lie algebra. If A has finite dimension as a vector space, then Der k ( A ) {\displaystyle {\text{Der}}_{k}(A)} is the Lie algebra of the automorphism group scheme of A over k; that indicates why spaces of derivations are a natural way to construct Lie algebras. Let G be a group scheme over a field k of characteristic p>0, and let L i e ( G ) {\displaystyle \mathrm {Lie} (G)} be the Zariski tangent space at the identity element of G. Then L i e ( G ) {\displaystyle \mathrm {Lie} (G)} is a restricted Lie algebra over k. This is essentially a special case of the previous example. Indeed, each element X of L i e ( G ) {\displaystyle \mathrm {Lie} (G)} determines a left-invariant vector field on G, and hence a left-invariant derivation on the ring of regular functions on G. The pth power of this derivation is again a left-invariant derivation, hence the derivation associated to an element X [ p ] {\displaystyle X^{[p]}} of L i e ( G ) {\displaystyle \mathrm {Lie} (G)} . Conversely, every restricted Lie algebra of finite dimension over k is the Lie algebra of a group scheme. In fact, G ↦ L i e ( G ) {\displaystyle G\mapsto \mathrm {Lie} (G)} is an equivalence of categories from finite group schemes G of height at most 1 over k (meaning that f p = 0 {\displaystyle f^{p}=0} for all regular functions f on G that vanish at the identity element) to restricted Lie algebras of finite dimension over k. In a sense, this means that Lie theory is less powerful in positive characteristic than in characteristic zero. In characteristic p>0, the multiplicative group G m {\displaystyle G_{m}} (of dimension 1) and its finite subgroup scheme μ p = { x ∈ G m : x p = 1 } {\displaystyle \mu _{p}=\{x\in G_{m}:x^{p}=1\}} have the same restricted Lie algebra, namely the vector space k with the p-mapping a [ p ] = a p {\displaystyle a^{[p]}=a^{p}} . More generally, the restricted Lie algebra of a group scheme G over k only depends on the kernel of the Frobenius homomorphism on G, which is a subgroup scheme of height at most 1. For another example, the Lie algebra of the additive group G a {\displaystyle G_{a}} is the vector space k with p-mapping equal to zero. The corresponding Frobenius kernel is the subgroup scheme α p = { x ∈ G a : x p = 0 } . {\displaystyle \alpha _{p}=\{x\in G_{a}:x^{p}=0\}.} For a scheme X over a field k of characteristic p>0, the space H 0 ( X , T X ) {\displaystyle H^{0}(X,TX)} of vector fields on X is a restricted Lie algebra over k. (If X is affine, so that X = Spec ( A ) {\displaystyle X={\text{Spec}}(A)} for a commutative k-algebra A, this is the Lie algebra of derivations of A over k. In general, one can informally think of H 0 ( X , T X ) {\displaystyle H^{0}(X,TX)} as the Lie algebra of the automorphism group of X over k.) An action of a group scheme G on X determines a homomorphism Lie ( G ) → H 0 ( X , T X ) {\displaystyle {\text{Lie}}(G)\to H^{0}(X,TX)} of restricted Lie algebras. 
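The first example above, matrices over k with X^[p] = X^p, lends itself to a direct numerical check of the first axiom of a p-mapping, ad(X^[p]) = (ad X)^p. Here is a minimal sketch, assuming Python with NumPy and working in gl(3, F₅); the helper names are ours.

```python
import numpy as np

p, n = 5, 3
rng = np.random.default_rng(0)

def mmul(A, B):
    """Matrix product over F_p."""
    return (A @ B) % p

def mat_power(A, k):
    """A^k over F_p by repeated multiplication."""
    R = np.eye(A.shape[0], dtype=np.int64)
    for _ in range(k):
        R = mmul(R, A)
    return R

def ad_matrix(X):
    """The n^2 x n^2 matrix of ad X = [X, -] acting on gl(n, F_p)."""
    m = n * n
    M = np.zeros((m, m), dtype=np.int64)
    for idx in range(m):
        E = np.zeros((n, n), dtype=np.int64)
        E[divmod(idx, n)] = 1
        M[:, idx] = ((mmul(X, E) - mmul(E, X)) % p).reshape(m)
    return M

X = rng.integers(0, p, size=(n, n))
lhs = ad_matrix(mat_power(X, p))   # ad(X^[p]), where X^[p] := X^p
rhs = mat_power(ad_matrix(X), p)   # (ad X)^p as an operator on gl(n, F_p)
print(np.array_equal(lhs, rhs))    # True: the first p-mapping axiom holds
```

The check succeeds because ad X = L_X − R_X with commuting left and right multiplication operators, and in characteristic p the mixed binomial terms of (L_X − R_X)^p vanish, leaving L_{X^p} − R_{X^p} = ad(X^p).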
== The choice of a p-mapping == Given two p-mappings on a Lie algebra g {\displaystyle {\mathfrak {g}}} , their difference is a p-linear function from g {\displaystyle {\mathfrak {g}}} to the center z ( g ) {\displaystyle {\mathfrak {z}}({\mathfrak {g}})} . (p-linearity means that f ( X + Y ) = f ( X ) + f ( Y ) {\displaystyle f(X+Y)=f(X)+f(Y)} and f ( t X ) = t p f ( X ) {\displaystyle f(tX)=t^{p}f(X)} .) Thus, if the center of g {\displaystyle {\mathfrak {g}}} is zero, then g {\displaystyle {\mathfrak {g}}} is a restricted Lie algebra in at most one way. In particular, this comment applies to any simple Lie algebra of characteristic p>0. == The restricted enveloping algebra == The functor that takes an associative algebra A over k to A as a restricted Lie algebra has a left adjoint g ↦ u ( g ) {\displaystyle {\mathfrak {g}}\mapsto u({\mathfrak {g}})} , called the restricted enveloping algebra. To construct this, let U ( g ) {\displaystyle U({\mathfrak {g}})} be the universal enveloping algebra of g {\displaystyle {\mathfrak {g}}} over k (ignoring the p-mapping of g {\displaystyle {\mathfrak {g}}} ). Let I be the two-sided ideal generated by the elements X p − X [ p ] {\displaystyle X^{p}-X^{[p]}} for X ∈ g {\displaystyle X\in {\mathfrak {g}}} ; then the restricted enveloping algebra is the quotient ring u ( g ) = U ( g ) / I {\displaystyle u({\mathfrak {g}})=U({\mathfrak {g}})/I} . It satisfies a form of the Poincaré–Birkhoff–Witt theorem: if e 1 , … , e n {\displaystyle e_{1},\ldots ,e_{n}} is a basis for g {\displaystyle {\mathfrak {g}}} as a k-vector space, then a basis for u ( g ) {\displaystyle u({\mathfrak {g}})} is given by all ordered products e 1 i 1 ⋯ e n i n {\displaystyle e_{1}^{i_{1}}\cdots e_{n}^{i_{n}}} with 0 ≤ i j ≤ p − 1 {\displaystyle 0\leq i_{j}\leq p-1} for each j. In particular, the map g → u ( g ) {\displaystyle {\mathfrak {g}}\to u({\mathfrak {g}})} is injective, and if g {\displaystyle {\mathfrak {g}}} has dimension n as a vector space, then u ( g ) {\displaystyle u({\mathfrak {g}})} has dimension p n {\displaystyle p^{n}} as a vector space. A restricted representation V of a restricted Lie algebra g {\displaystyle {\mathfrak {g}}} is a representation of g {\displaystyle {\mathfrak {g}}} as a Lie algebra such that X [ p ] ( v ) = X p ( v ) {\displaystyle X^{[p]}(v)=X^{p}(v)} for all X ∈ g {\displaystyle X\in {\mathfrak {g}}} and v ∈ V {\displaystyle v\in V} . Restricted representations of g {\displaystyle {\mathfrak {g}}} are equivalent to modules over the restricted enveloping algebra. == Classification of simple Lie algebras == The simple Lie algebras of finite dimension over an algebraically closed field of characteristic zero were classified by Wilhelm Killing and Élie Cartan in the 1880s and 1890s, using root systems. Namely, every simple Lie algebra is of type An, Bn, Cn, Dn, E6, E7, E8, F4, or G2. (For example, the simple Lie algebra of type An is the Lie algebra s l ( n + 1 ) {\displaystyle {\mathfrak {sl}}(n+1)} of (n+1) x (n+1) matrices of trace zero.) In characteristic p>0, the classification of simple algebraic groups is the same as in characteristic zero. Their Lie algebras are simple in most cases, and so there are simple Lie algebras An, Bn, Cn, Dn, E6, E7, E8, F4, G2, called (in this context) the classical simple Lie algebras. (Because they come from algebraic groups, the classical simple Lie algebras are restricted.) Surprisingly, there are also many other finite-dimensional simple Lie algebras in characteristic p>0. 
In particular, there are the simple Lie algebras of Cartan type, which are finite-dimensional analogs of infinite-dimensional Lie algebras in characteristic zero studied by Cartan. Namely, Cartan studied the Lie algebra of vector fields on a smooth manifold of dimension n, or the subalgebra of vector fields that preserve a volume form, a symplectic form, or a contact structure. In characteristic p>0, the simple Lie algebras of Cartan type include both restrictable and non-restrictable examples. Richard Earl Block and Robert Lee Wilson (1988) classified the restricted simple Lie algebras over an algebraically closed field of characteristic p>7. Namely, they are all of classical or Cartan type. Alexander Premet and Helmut Strade (2004) extended the classification to Lie algebras which need not be restricted, and to a larger range of characteristics. (In characteristic 5, Hayk Melikyan found another family of simple Lie algebras.) Namely, every simple Lie algebra over an algebraically closed field of characteristic p>3 is of classical, Cartan, or Melikyan type. == Jacobson's Galois correspondence == Jacobson's Galois correspondence for purely inseparable field extensions is expressed in terms of restricted Lie algebras. == Notes == == References == Block, Richard E.; Wilson, Robert Lee (1988), "Classification of the restricted simple Lie algebras", Journal of Algebra, 114 (1): 115–259, doi:10.1016/0021-8693(88)90216-5, ISSN 0021-8693, MR 0931904. Demazure, Michel; Gabriel, Pierre (1970), Groupes algébriques. Tome I: Géométrie algébrique, généralités, groupes commutatifs, Paris: Masson, ISBN 978-2225616662, MR 0302656 Jacobson, Nathan (1979) [1962], Lie algebras, Dover Publications, ISBN 0-486-63832-4, MR 0559927 Jantzen, Jens Carsten (2003) [1987], Representations of algebraic groups (2nd ed.), American Mathematical Society, ISBN 978-0-8218-3527-2, MR 2015057 Premet, Alexander; Strade, Helmut (2006), "Classification of finite dimensional simple Lie algebras in prime characteristics", Representations of algebraic groups, quantum groups, and Lie algebras, Contemporary Mathematics, vol. 413, Providence, RI: American Mathematical Society, pp. 185–214, arXiv:math/0601380, MR 2263096 Strade, Helmut; Farnsteiner, Rolf (1988), Modular Lie algebras and their representations, Marcel Dekker, ISBN 0-8247-7594-5, MR 0929682 Strade, Helmut (2004), Simple Lie algebras over fields of positive characteristic, vol. 1, Walter de Gruyter, ISBN 3-11-014211-2, MR 2059133
Wikipedia/Restricted_Lie_algebra
In mathematics, a Lie algebra g {\displaystyle {\mathfrak {g}}} is solvable if its derived series terminates in the zero subalgebra. The derived Lie algebra of the Lie algebra g {\displaystyle {\mathfrak {g}}} is the subalgebra of g {\displaystyle {\mathfrak {g}}} , denoted [ g , g ] {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]} that consists of all linear combinations of Lie brackets of pairs of elements of g {\displaystyle {\mathfrak {g}}} . The derived series is the sequence of subalgebras g ≥ [ g , g ] ≥ [ [ g , g ] , [ g , g ] ] ≥ [ [ [ g , g ] , [ g , g ] ] , [ [ g , g ] , [ g , g ] ] ] ≥ . . . {\displaystyle {\mathfrak {g}}\geq [{\mathfrak {g}},{\mathfrak {g}}]\geq [[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]\geq [[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]],[[{\mathfrak {g}},{\mathfrak {g}}],[{\mathfrak {g}},{\mathfrak {g}}]]]\geq ...} If the derived series eventually arrives at the zero subalgebra, then the Lie algebra is called solvable. The derived series for Lie algebras is analogous to the derived series for commutator subgroups in group theory, and solvable Lie algebras are analogs of solvable groups. Any nilpotent Lie algebra is a fortiori solvable but the converse is not true. The solvable Lie algebras and the semisimple Lie algebras form two large and generally complementary classes, as is shown by the Levi decomposition. The solvable Lie algebras are precisely those that can be obtained from semidirect products, starting from 0 and adding one dimension at a time. A maximal solvable subalgebra is called a Borel subalgebra. The largest solvable ideal of a Lie algebra is called the radical. == Characterizations == Let g {\displaystyle {\mathfrak {g}}} be a finite-dimensional Lie algebra over a field of characteristic 0. The following are equivalent. (i) g {\displaystyle {\mathfrak {g}}} is solvable. (ii) a d ( g ) {\displaystyle {\rm {ad}}({\mathfrak {g}})} , the adjoint representation of g {\displaystyle {\mathfrak {g}}} , is solvable. (iii) There is a finite sequence of ideals a i {\displaystyle {\mathfrak {a}}_{i}} of g {\displaystyle {\mathfrak {g}}} : g = a 0 ⊃ a 1 ⊃ . . . a r = 0 , [ a i , a i ] ⊂ a i + 1 ∀ i . {\displaystyle {\mathfrak {g}}={\mathfrak {a}}_{0}\supset {\mathfrak {a}}_{1}\supset ...{\mathfrak {a}}_{r}=0,\quad [{\mathfrak {a}}_{i},{\mathfrak {a}}_{i}]\subset {\mathfrak {a}}_{i+1}\,\,\forall i.} (iv) [ g , g ] {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]} is nilpotent. (v) For g {\displaystyle {\mathfrak {g}}} n {\displaystyle n} -dimensional, there is a finite sequence of subalgebras a i {\displaystyle {\mathfrak {a}}_{i}} of g {\displaystyle {\mathfrak {g}}} : g = a 0 ⊃ a 1 ⊃ . . . a n = 0 , dim ⁡ a i / a i + 1 = 1 ∀ i , {\displaystyle {\mathfrak {g}}={\mathfrak {a}}_{0}\supset {\mathfrak {a}}_{1}\supset ...{\mathfrak {a}}_{n}=0,\quad \operatorname {dim} {\mathfrak {a}}_{i}/{\mathfrak {a}}_{i+1}=1\,\,\forall i,} with each a i + 1 {\displaystyle {\mathfrak {a}}_{i+1}} an ideal in a i {\displaystyle {\mathfrak {a}}_{i}} . A sequence of this type is called an elementary sequence. (vi) There is a finite sequence of subalgebras g i {\displaystyle {\mathfrak {g}}_{i}} of g {\displaystyle {\mathfrak {g}}} , g = g 0 ⊃ g 1 ⊃ . . . 
g r = 0 , {\displaystyle {\mathfrak {g}}={\mathfrak {g}}_{0}\supset {\mathfrak {g}}_{1}\supset ...{\mathfrak {g}}_{r}=0,} such that g i + 1 {\displaystyle {\mathfrak {g}}_{i+1}} is an ideal in g i {\displaystyle {\mathfrak {g}}_{i}} and g i / g i + 1 {\displaystyle {\mathfrak {g}}_{i}/{\mathfrak {g}}_{i+1}} is abelian. (vii) The Killing form B {\displaystyle B} of g {\displaystyle {\mathfrak {g}}} satisfies B ( X , Y ) = 0 {\displaystyle B(X,Y)=0} for all X in g {\displaystyle {\mathfrak {g}}} and Y in [ g , g ] {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]} . This is Cartan's criterion for solvability. == Properties == Lie's Theorem states that if V {\displaystyle V} is a finite-dimensional vector space over an algebraically closed field of characteristic zero, and g {\displaystyle {\mathfrak {g}}} is a solvable Lie algebra, and if π {\displaystyle \pi } is a representation of g {\displaystyle {\mathfrak {g}}} over V {\displaystyle V} , then there exists a simultaneous eigenvector v ∈ V {\displaystyle v\in V} of the endomorphisms π ( X ) {\displaystyle \pi (X)} for all elements X ∈ g {\displaystyle X\in {\mathfrak {g}}} . Every Lie subalgebra and quotient of a solvable Lie algebra are solvable. Given a Lie algebra g {\displaystyle {\mathfrak {g}}} and an ideal h {\displaystyle {\mathfrak {h}}} in it, g {\displaystyle {\mathfrak {g}}} is solvable if and only if both h {\displaystyle {\mathfrak {h}}} and g / h {\displaystyle {\mathfrak {g}}/{\mathfrak {h}}} are solvable. The analogous statement is true for nilpotent Lie algebras provided h {\displaystyle {\mathfrak {h}}} is contained in the center. Thus, an extension of a solvable algebra by a solvable algebra is solvable, while a central extension of a nilpotent algebra by a nilpotent algebra is nilpotent. A solvable nonzero Lie algebra has a nonzero abelian ideal, the last nonzero term in the derived series. If a , b ⊂ g {\displaystyle {\mathfrak {a}},{\mathfrak {b}}\subset {\mathfrak {g}}} are solvable ideals, then so is a + b {\displaystyle {\mathfrak {a}}+{\mathfrak {b}}} . Consequently, if g {\displaystyle {\mathfrak {g}}} is finite-dimensional, then there is a unique solvable ideal r ⊂ g {\displaystyle {\mathfrak {r}}\subset {\mathfrak {g}}} containing all solvable ideals in g {\displaystyle {\mathfrak {g}}} . This ideal is the radical of g {\displaystyle {\mathfrak {g}}} . A solvable Lie algebra g {\displaystyle {\mathfrak {g}}} has a unique largest nilpotent ideal n {\displaystyle {\mathfrak {n}}} , called the nilradical, the set of all X ∈ g {\displaystyle X\in {\mathfrak {g}}} such that a d X {\displaystyle {\rm {ad}}_{X}} is nilpotent. If D is any derivation of g {\displaystyle {\mathfrak {g}}} , then D ( g ) ⊂ n {\displaystyle D({\mathfrak {g}})\subset {\mathfrak {n}}} . == Completely solvable Lie algebras == A Lie algebra g {\displaystyle {\mathfrak {g}}} is called completely solvable or split solvable if it has an elementary sequence (as in condition (v) above) of ideals in g {\displaystyle {\mathfrak {g}}} from 0 {\displaystyle 0} to g {\displaystyle {\mathfrak {g}}} . A finite-dimensional nilpotent Lie algebra is completely solvable, and a completely solvable Lie algebra is solvable. Over an algebraically closed field a solvable Lie algebra is completely solvable, but the 3 {\displaystyle 3} -dimensional real Lie algebra of the group of Euclidean isometries of the plane is solvable but not completely solvable. 
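The Euclidean example just mentioned can be made concrete; it also anticipates the eigenvalue criterion for split solvability stated next. The following minimal numerical sketch (assuming Python with NumPy; the basis names are ours) computes the derived series of the isometry algebra e(2) and the eigenvalues of ad_T: the derived series reaches zero, so the algebra is solvable, but ad_T has the non-real eigenvalues ±i, so the algebra is not completely solvable over ℝ.

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

# Basis of e(2): infinitesimal rotation T and translations Px, Py.
T  = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
Px = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
Py = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
basis = [T, Px, Py]

def span_dim(mats, tol=1e-10):
    M = np.array([m.flatten() for m in mats])
    return int((np.linalg.svd(M, compute_uv=False) > tol).sum())

# Derived series: dimensions 3 -> 2 -> 0, so e(2) is solvable.
g1 = [bracket(a, b) for a in basis for b in basis]
g2 = [bracket(a, b) for a in g1 for b in g1]
print(span_dim(basis), span_dim(g1), span_dim(g2))      # 3 2 0

# ad_T in the ordered basis (T, Px, Py): [T,Px] = -Py, [T,Py] = Px.
adT = np.array([[0.0, 0.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, -1.0, 0.0]])
print(np.linalg.eigvals(adT))   # 0 and +/- i: non-real eigenvalues
```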
A solvable Lie algebra g {\displaystyle {\mathfrak {g}}} is split solvable if and only if the eigenvalues of a d X {\displaystyle {\rm {ad}}_{X}} are in the base field k {\displaystyle k} for all X {\displaystyle X} in g {\displaystyle {\mathfrak {g}}} . == Examples == === Abelian Lie algebras === Every abelian Lie algebra a {\displaystyle {\mathfrak {a}}} is solvable by definition, since its commutator [ a , a ] = 0 {\displaystyle [{\mathfrak {a}},{\mathfrak {a}}]=0} . This includes the Lie algebra of diagonal matrices in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} , which for n = 3 {\displaystyle n=3} are of the form { [ ∗ 0 0 0 ∗ 0 0 0 ∗ ] } {\displaystyle \left\{{\begin{bmatrix}*&0&0\\0&*&0\\0&0&*\end{bmatrix}}\right\}} . The Lie algebra structure on any vector space V {\displaystyle V} given by the trivial bracket [ m , n ] = 0 {\displaystyle [m,n]=0} for any two elements m , n ∈ V {\displaystyle m,n\in V} gives another example. === Nilpotent Lie algebras === Another class of examples comes from nilpotent Lie algebras, since every nilpotent Lie algebra is a fortiori solvable. Some examples include the strictly upper triangular matrices, such as the class of matrices of the form { [ 0 ∗ ∗ 0 0 ∗ 0 0 0 ] } {\displaystyle \left\{{\begin{bmatrix}0&*&*\\0&0&*\\0&0&0\end{bmatrix}}\right\}} called the Lie algebra of strictly upper triangular matrices. In addition, the Lie algebra of upper triangular matrices in g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} forms a solvable Lie algebra. This includes matrices of the form { [ ∗ ∗ ∗ 0 ∗ ∗ 0 0 ∗ ] } {\displaystyle \left\{{\begin{bmatrix}*&*&*\\0&*&*\\0&0&*\end{bmatrix}}\right\}} and is denoted b k {\displaystyle {\mathfrak {b}}_{k}} . === Solvable but not split-solvable === Let g {\displaystyle {\mathfrak {g}}} be the set of matrices of the form X = ( 0 θ x − θ 0 y 0 0 0 ) , θ , x , y ∈ R . {\displaystyle X=\left({\begin{matrix}0&\theta &x\\-\theta &0&y\\0&0&0\end{matrix}}\right),\quad \theta ,x,y\in \mathbb {R} .} Then g {\displaystyle {\mathfrak {g}}} is solvable, but not split solvable. It is isomorphic to the Lie algebra of the group of translations and rotations in the plane. === Non-example === A semisimple Lie algebra l {\displaystyle {\mathfrak {l}}} is never solvable since its radical Rad ( l ) {\displaystyle {\text{Rad}}({\mathfrak {l}})} , which is the largest solvable ideal in l {\displaystyle {\mathfrak {l}}} , is trivial. == Solvable Lie groups == Because the term "solvable" is also used for solvable groups in group theory, there are several possible definitions of solvable Lie group. For a Lie group G {\displaystyle G} , one may require termination of the usual derived series of the group G {\displaystyle G} (as an abstract group), termination of the closures of the derived series, or that the Lie algebra of G {\displaystyle G} be solvable. == See also == Cartan's criterion Killing form Lie-Kolchin theorem Solvmanifold Dixmier mapping == Notes == == References == Fulton, W.; Harris, J. (1991). Representation theory. A first course. Graduate Texts in Mathematics. Vol. 129. New York: Springer-Verlag. ISBN 978-0-387-97527-6. MR 1153249. Humphreys, James E. (1972). Introduction to Lie Algebras and Representation Theory. Graduate Texts in Mathematics. Vol. 9. New York: Springer-Verlag. ISBN 0-387-90053-5. Knapp, A. W. (2002). Lie groups beyond an introduction. Progress in Mathematics. Vol. 120 (2nd ed.). Boston·Basel·Berlin: Birkhäuser. ISBN 0-8176-4259-5. Serre, Jean-Pierre (2001). Complex Semisimple Lie Algebras. Berlin: Springer. ISBN 3-5406-7827-1. 
== External links == EoM article Lie algebra, solvable EoM article Lie group, solvable
Wikipedia/Solvable_Lie_algebra
In the theory of Lie groups, Lie algebras and their representation theory, a Lie algebra extension e is an enlargement of a given Lie algebra g by another Lie algebra h. Extensions arise in several ways. There is the trivial extension obtained by taking a direct sum of two Lie algebras. Other types are the split extension and the central extension. Extensions may arise naturally, for instance, when forming a Lie algebra from projective group representations. Such a Lie algebra will contain central charges. Starting with a polynomial loop algebra over finite-dimensional simple Lie algebra and performing two extensions, a central extension and an extension by a derivation, one obtains a Lie algebra which is isomorphic with an untwisted affine Kac–Moody algebra. Using the centrally extended loop algebra one may construct a current algebra in two spacetime dimensions. The Virasoro algebra is the universal central extension of the Witt algebra. Central extensions are needed in physics, because the symmetry group of a quantized system usually is a central extension of the classical symmetry group, and in the same way the corresponding symmetry Lie algebra of the quantum system is, in general, a central extension of the classical symmetry algebra. Kac–Moody algebras have been conjectured to be symmetry groups of a unified superstring theory. The centrally extended Lie algebras play a dominant role in quantum field theory, particularly in conformal field theory, string theory and in M-theory. A large portion towards the end is devoted to background material for applications of Lie algebra extensions, both in mathematics and in physics, in areas where they are actually useful. A parenthetical link, (background material), is provided where it might be beneficial. == History == Due to the Lie correspondence, the theory, and consequently the history of Lie algebra extensions, is tightly linked to the theory and history of group extensions. A systematic study of group extensions was performed by the Austrian mathematician Otto Schreier in 1923 in his PhD thesis and later published. The problem posed for his thesis by Otto Hölder was "given two groups G and H, find all groups E having a normal subgroup N isomorphic to G such that the factor group E/N is isomorphic to H". Lie algebra extensions are most interesting and useful for infinite-dimensional Lie algebras. In 1967, Victor Kac and Robert Moody independently generalized the notion of classical Lie algebras, resulting in a new theory of infinite-dimensional Lie algebras, now called Kac–Moody algebras. They generalize the finite-dimensional simple Lie algebras and can often concretely be constructed as extensions. == Notation and proofs == Notational abuse to be found below includes eX for the exponential map exp given an argument, writing g for the element (g, eH) in a direct product G × H (eH is the identity in H), and analogously for Lie algebra direct sums (where also g + h and (g, h) are used interchangeably). Likewise for semidirect products and semidirect sums. Canonical injections (both for groups and Lie algebras) are used for implicit identifications. Furthermore, if G, H, ..., are groups, then the default names for elements of G, H, ..., are g, h, ..., and their Lie algebras are g, h, ... . The default names for elements of g, h, ..., are G, H, ... (just like for the groups!), partly to save scarce alphabetical resources but mostly to have a uniform notation. 
Lie algebras that are ingredients in an extension will, without comment, be taken to be over the same field. The summation convention applies, including sometimes when the indices involved are both upstairs or both downstairs. Caveat: Not all proofs and proof outlines below have universal validity. The main reason is that the Lie algebras are often infinite-dimensional, and then there may or may not be a Lie group corresponding to the Lie algebra. Moreover, even if such a group exists, it may not have the "usual" properties, e.g. the exponential map might not exist, and if it does, it might not have all the "usual" properties. In such cases, it is questionable whether the group should be endowed with the "Lie" qualifier. The literature is not uniform. For the explicit examples, the relevant structures are supposedly in place. == Definition == Lie algebra extensions are formalized in terms of short exact sequences. A short exact sequence is an exact sequence of length three, h ↪ i e ↠ s g , ( 1 ) {\displaystyle {\mathfrak {h}}\;{\overset {i}{\hookrightarrow }}\;{\mathfrak {e}}\;{\overset {s}{\twoheadrightarrow }}\;{\mathfrak {g}},\qquad (1)} such that i is a monomorphism, s is an epimorphism, and ker s = im i. From these properties of exact sequences, it follows that (the image of) h {\displaystyle {\mathfrak {h}}} is an ideal in e {\displaystyle {\mathfrak {e}}} . Moreover, g ≅ e / Im ⁡ i = e / Ker ⁡ s , {\displaystyle {\mathfrak {g}}\cong {\mathfrak {e}}/\operatorname {Im} i={\mathfrak {e}}/\operatorname {Ker} s,} but it is not necessarily the case that g {\displaystyle {\mathfrak {g}}} is isomorphic to a subalgebra of e {\displaystyle {\mathfrak {e}}} . This construction mirrors the analogous constructions in the closely related concept of group extensions. If the situation in (1) prevails, non-trivially and for Lie algebras over the same field, then one says that e {\displaystyle {\mathfrak {e}}} is an extension of g {\displaystyle {\mathfrak {g}}} by h {\displaystyle {\mathfrak {h}}} . == Properties == The defining property may be reformulated. The Lie algebra e {\displaystyle {\mathfrak {e}}} is an extension of g {\displaystyle {\mathfrak {g}}} by h {\displaystyle {\mathfrak {h}}} if 0 ↪ ι h ↪ i e ↠ s g ↠ σ 0 {\displaystyle 0\;{\overset {\iota }{\hookrightarrow }}{\mathfrak {h}}\;{\overset {i}{\hookrightarrow }}\;{\mathfrak {e}}\;{\overset {s}{\twoheadrightarrow }}\;{\mathfrak {g}}\;{\overset {\sigma }{\twoheadrightarrow }}\;0} is exact. Here the zeros on the ends represent the zero Lie algebra (containing only the zero vector 0) and the maps are the obvious ones; ι {\displaystyle \iota } maps 0 to 0 and σ {\displaystyle \sigma } maps all elements of g {\displaystyle {\mathfrak {g}}} to 0. With this definition, it follows automatically that i is a monomorphism and s is an epimorphism. An extension of g {\displaystyle {\mathfrak {g}}} by h {\displaystyle {\mathfrak {h}}} is not necessarily unique. Let e , e ′ {\displaystyle {\mathfrak {e}},{\mathfrak {e}}'} denote two extensions and let the primes below have the obvious interpretation. Then, if there exists a Lie algebra isomorphism f : e → e ′ {\displaystyle f\colon {\mathfrak {e}}\rightarrow {\mathfrak {e}}'} such that f ∘ i = i ′ , s ′ ∘ f = s , ( 2 ) {\displaystyle f\circ i=i',\quad s'\circ f=s,\qquad (2)} then the extensions e {\displaystyle {\mathfrak {e}}} and e ′ {\displaystyle {\mathfrak {e}}'} are said to be equivalent extensions. Equivalence of extensions is an equivalence relation. == Extension types == === Trivial === A Lie algebra extension h ↪ i t ↠ s g , {\displaystyle {\mathfrak {h}}\;{\overset {i}{\hookrightarrow }}\;{\mathfrak {t}}\;{\overset {s}{\twoheadrightarrow }}\;{\mathfrak {g}},} is trivial if there is a subspace i such that t = i ⊕ ker s and i is an ideal in t. 
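To see the definitions in a computable form, the following is a minimal sketch, assuming Python with NumPy; the block-diagonal realization and all names are ours, chosen to mirror the maps i and s above. It realizes an extension of g = sl(2, ℝ) by a one-dimensional abelian h (a trivial one, as it happens), with ker s = im i holding by construction, and checks numerically that i(h) is an ideal and that s is a Lie algebra homomorphism; the direct-sum construction below spells out the same bracket.

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

def blockdiag(H, G):
    """Realize the pair (H, G) in h + g as a block-diagonal matrix."""
    m, k = H.shape[0], G.shape[0]
    E = np.zeros((m + k, m + k))
    E[:m, :m], E[m:, m:] = H, G
    return E

def i(H):                # i : h -> e, H -> (H, 0)
    return blockdiag(H, np.zeros((2, 2)))

def s(E):                # s : e -> g, (H, G) -> G
    return E[1:, 1:]

# h = R (1x1, abelian), g = sl(2, R); e = h + g with the componentwise bracket.
H  = np.array([[2.0]])
G1 = np.array([[1.0, 0.0], [0.0, -1.0]])
G2 = np.array([[0.0, 1.0], [0.0, 0.0]])
E1 = blockdiag(H, G1)
E2 = blockdiag(np.zeros((1, 1)), G2)

print(np.allclose(s(i(H)), 0))            # True: im i is contained in ker s
print(np.allclose(bracket(E1, i(H)), 0))  # True: [e, i(h)] lies in i(h); h is abelian here
print(np.allclose(s(bracket(E1, E2)), bracket(s(E1), s(E2))))  # True: s is a homomorphism
```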
=== Split === A Lie algebra extension h ↪ i s ↠ s g , {\displaystyle {\mathfrak {h}}\;{\overset {i}{\hookrightarrow }}\;{\mathfrak {s}}\;{\overset {s}{\twoheadrightarrow }}\;{\mathfrak {g}},} is split if there is a subspace u such that s = u ⊕ ker s as a vector space and u is a subalgebra in s. An ideal is a subalgebra, but a subalgebra is not necessarily an ideal. A trivial extension is thus a split extension. === Central === Central extensions of a Lie algebra g by an abelian Lie algebra h can be obtained with the help of a so-called (nontrivial) 2-cocycle (background) on g. Non-trivial 2-cocycles occur in the context of projective representations (background) of Lie groups. This is alluded to further down. A Lie algebra extension h ↪ i e ↠ s g , {\displaystyle {\mathfrak {h}}\;{\overset {i}{\hookrightarrow }}\;{\mathfrak {e}}\;{\overset {s}{\twoheadrightarrow }}\;{\mathfrak {g}},} is a central extension if ker s is contained in the center Z(e) of e. Properties Since the center commutes with everything, h ≅ im i = ker s in this case is abelian. Given a central extension e of g, one may construct a 2-cocycle on g. Suppose e is a central extension of g by h. Let l be a linear map from g to e with the property that s ∘ l = Idg, i.e. l is a section of s. Use this section to define ε: g × g → e by ϵ ( G 1 , G 2 ) = l ( [ G 1 , G 2 ] ) − [ l ( G 1 ) , l ( G 2 ) ] , G 1 , G 2 ∈ g . {\displaystyle \epsilon (G_{1},G_{2})=l([G_{1},G_{2}])-[l(G_{1}),l(G_{2})],\quad G_{1},G_{2}\in {\mathfrak {g}}.} The map ε satisfies ϵ ( G 1 , [ G 2 , G 3 ] ) + ϵ ( G 2 , [ G 3 , G 1 ] ) + ϵ ( G 3 , [ G 1 , G 2 ] ) = 0 ∈ e . {\displaystyle \epsilon (G_{1},[G_{2},G_{3}])+\epsilon (G_{2},[G_{3},G_{1}])+\epsilon (G_{3},[G_{1},G_{2}])=0\in {\mathfrak {e}}.} To see this, use the definition of ε on the left hand side, then use the linearity of l. Use Jacobi identity on g to get rid of half of the six terms. Use the definition of ε again on terms l([Gi,Gj]) sitting inside three Lie brackets, bilinearity of Lie brackets, and the Jacobi identity on e, and then finally use on the three remaining terms that Im ε ⊂ ker s and that ker s ⊂ Z(e) so that ε(Gi, Gj) brackets to zero with everything. It then follows that φ = i−1 ∘ ε satisfies the corresponding relation, and if h in addition is one-dimensional, then φ is a 2-cocycle on g (via a trivial correspondence of h with the underlying field). A central extension 0 ↪ ι h ↪ i e ↠ s g ↠ σ 0 {\displaystyle 0\;{\overset {\iota }{\hookrightarrow }}{\mathfrak {h}}\;{\overset {i}{\hookrightarrow }}\;{\mathfrak {e}}\;{\overset {s}{\twoheadrightarrow }}\;{\mathfrak {g}}\;{\overset {\sigma }{\twoheadrightarrow }}\;0} is universal if for every other central extension 0 ↪ ι h ′ ↪ i ′ e ′ ↠ s ′ g ↠ σ 0 {\displaystyle 0\;{\overset {\iota }{\hookrightarrow }}{\mathfrak {h}}'\;{\overset {i'}{\hookrightarrow }}\;{\mathfrak {e}}'\;{\overset {s'}{\twoheadrightarrow }}\;{\mathfrak {g}}\;{\overset {\sigma }{\twoheadrightarrow }}\;0} there exist unique homomorphisms Φ : e → e ′ {\displaystyle \Phi :{\mathfrak {e}}\to {\mathfrak {e}}'} and Ψ : h → h ′ {\displaystyle \Psi :{\mathfrak {h}}\to {\mathfrak {h}}'} such that the diagram commutes, i.e. i' ∘ Ψ = Φ ∘ i and s' ∘ Φ = s. By universality, it is easy to conclude that such universal central extensions are unique up to isomorphism. == Construction == === By direct sum === Let g {\displaystyle {\mathfrak {g}}} , h {\displaystyle {\mathfrak {h}}} be Lie algebras over the same field F {\displaystyle F} . 
Define e = h × g , {\displaystyle {\mathfrak {e}}={\mathfrak {h}}\times {\mathfrak {g}},} and define addition pointwise on e {\displaystyle {\mathfrak {e}}} . Scalar multiplication is defined by α ( H , G ) = ( α H , α G ) , α ∈ F , H ∈ h , G ∈ g . {\displaystyle \alpha (H,G)=(\alpha H,\alpha G),\alpha \in F,H\in {\mathfrak {h}},G\in {\mathfrak {g}}.} With these definitions, h × g ≡ h ⊕ g {\displaystyle {\mathfrak {h}}\times {\mathfrak {g}}\equiv {\mathfrak {h}}\oplus {\mathfrak {g}}} is a vector space over F {\displaystyle F} . With the Lie bracket [ ( H 1 , G 1 ) , ( H 2 , G 2 ) ] = ( [ H 1 , H 2 ] , [ G 1 , G 2 ] ) , ( 3 ) {\displaystyle [(H_{1},G_{1}),(H_{2},G_{2})]=([H_{1},H_{2}],[G_{1},G_{2}]),\qquad (3)} e {\displaystyle {\mathfrak {e}}} is a Lie algebra. Define further i : h ↪ e ; H ↦ ( H , 0 ) , s : e ↠ g ; ( H , G ) ↦ G . {\displaystyle i:{\mathfrak {h}}\hookrightarrow {\mathfrak {e}};H\mapsto (H,0),\quad s:{\mathfrak {e}}\twoheadrightarrow {\mathfrak {g}};(H,G)\mapsto G.} It is clear that (1) holds as an exact sequence. This extension of g {\displaystyle {\mathfrak {g}}} by h {\displaystyle {\mathfrak {h}}} is called a trivial extension. It is, of course, nothing else than the Lie algebra direct sum. By symmetry of definitions, e {\displaystyle {\mathfrak {e}}} is an extension of h {\displaystyle {\mathfrak {h}}} by g {\displaystyle {\mathfrak {g}}} as well, but h ⊕ g ≠ g ⊕ h {\displaystyle {\mathfrak {h}}\oplus {\mathfrak {g}}\neq {\mathfrak {g}}\oplus {\mathfrak {h}}} . It is clear from (3) that the subalgebra 0 ⊕ g {\displaystyle 0\oplus {\mathfrak {g}}} is an ideal (Lie algebra). This property of the direct sum of Lie algebras is promoted to the definition of a trivial extension. === By semidirect sum === Inspired by the construction of a semidirect product (background) of groups using a homomorphism G → Aut(H), one can make the corresponding construct for Lie algebras. If ψ:g → Der h is a Lie algebra homomorphism, then define a Lie bracket on e = h ⊕ g {\displaystyle {\mathfrak {e}}={\mathfrak {h}}\oplus {\mathfrak {g}}} by [ H 1 + G 1 , H 2 + G 2 ] = [ H 1 , H 2 ] + ψ ( G 1 ) ( H 2 ) − ψ ( G 2 ) ( H 1 ) + [ G 1 , G 2 ] . ( 7 ) {\displaystyle [H_{1}+G_{1},H_{2}+G_{2}]=[H_{1},H_{2}]+\psi (G_{1})(H_{2})-\psi (G_{2})(H_{1})+[G_{1},G_{2}].\qquad (7)} With this Lie bracket, the Lie algebra so obtained is denoted e = h ⊕S g and is called the semidirect sum of h and g. By inspection of (7) one sees that 0 ⊕ g is a subalgebra of e and h ⊕ 0 is an ideal in e. Define i:h → e by H ↦ H ⊕ 0 and s:e → g by H ⊕ G ↦ G, H ∈ h, G ∈ g. It is clear that ker s = im i. Thus e is a Lie algebra extension of g by h. As with the trivial extension, this property generalizes to the definition of a split extension. Example Let G be the Lorentz group O(3, 1) and let T denote the translation group in 4 dimensions, isomorphic to ( R 4 {\displaystyle \mathbb {R} ^{4}} , +), and consider the multiplication rule of the Poincaré group P ( a 2 , Λ 2 ) ( a 1 , Λ 1 ) = ( a 2 + Λ 2 a 1 , Λ 2 Λ 1 ) , a 1 , a 2 ∈ T ⊂ P , Λ 1 , Λ 2 ∈ O ( 3 , 1 ) ⊂ P , {\displaystyle (a_{2},\Lambda _{2})(a_{1},\Lambda _{1})=(a_{2}+\Lambda _{2}a_{1},\Lambda _{2}\Lambda _{1}),\quad a_{1},a_{2}\in \mathrm {T} \subset \mathrm {P} ,\Lambda _{1},\Lambda _{2}\in \mathrm {O} (3,1)\subset \mathrm {P} ,} (where T and O(3, 1) are identified with their images in P). From this it follows immediately that, in the Poincaré group, (0, Λ)(a, I)(0, Λ−1) = (Λ a, I) ∈ T ⊂ P. Thus every Lorentz transformation Λ corresponds to an automorphism ΦΛ of T with inverse ΦΛ−1 and Φ is clearly a homomorphism. Now define P ¯ = T ⊗ S O ( 3 , 1 ) , {\displaystyle {\overline {\mathrm {P} }}=\mathrm {T} \otimes _{S}\mathrm {O} (3,1),} endowed with multiplication given by (4). Unwinding the definitions one finds that the multiplication is the same as the multiplication one started with and it follows that P ¯ = P {\displaystyle {\overline {\mathrm {P} }}=\mathrm {P} } . 
From (5') it follows that ΨΛ = AdΛ, and then from (6') it follows that ψλ = adλ for λ ∈ o(3, 1). === By derivation === Let δ be a derivation (background) of h and denote by g the one-dimensional Lie algebra spanned by δ. Define the Lie bracket on e = g ⊕ h by [ G 1 + H 1 , G 2 + H 2 ] = [ λ δ + H 1 , μ δ + H 2 ] = [ H 1 , H 2 ] + λ δ ( H 2 ) − μ δ ( H 1 ) . {\displaystyle [G_{1}+H_{1},G_{2}+H_{2}]=[\lambda \delta +H_{1},\mu \delta +H_{2}]=[H_{1},H_{2}]+\lambda \delta (H_{2})-\mu \delta (H_{1}).} It is obvious from the definition of the bracket that h is an ideal in e and that g is a subalgebra of e. Furthermore, g is complementary to h in e. Let i:h → e be given by H ↦ (0, H) and s:e → g by (G, H) ↦ G. It is clear that im i = ker s. Thus e is a split extension of g by h. Such an extension is called extension by a derivation. If ψ: g → der h is defined by ψ(μδ)(H) = μδ(H), then ψ is a Lie algebra homomorphism into der h. Hence this construction is a special case of a semidirect sum, for when starting from ψ and using the construction in the preceding section, the same Lie brackets result. === By 2-cocycle === If ε is a 2-cocycle (background) on a Lie algebra g and h is any one-dimensional vector space, let e = h ⊕ g (vector space direct sum) and define a Lie bracket on e by [ μ H + G 1 , ν H + G 2 ] = [ G 1 , G 2 ] + ε ( G 1 , G 2 ) H , μ , ν ∈ F . {\displaystyle [\mu H+G_{1},\nu H+G_{2}]=[G_{1},G_{2}]+\varepsilon (G_{1},G_{2})H,\quad \mu ,\nu \in F.} Here H is an arbitrary but fixed element of h. Antisymmetry follows from antisymmetry of the Lie bracket on g and antisymmetry of the 2-cocycle. The Jacobi identity follows from the corresponding properties of g and of ε. Thus e is a Lie algebra. Put G1 = 0 and it follows that μH ∈ Z(e). Also, it follows with i: μH ↦ (μH, 0) and s: (μH, G) ↦ G that Im i = ker s = {(μH, 0):μ ∈ F} ⊂ Z(e). Hence e is a central extension of g by h. It is called extension by a 2-cocycle. == Theorems == Below follow some results regarding central extensions and 2-cocycles. Theorem Let φ1 and φ2 be cohomologous 2-cocycles on a Lie algebra g and let e1 and e2 be respectively the central extensions constructed with these 2-cocycles. Then the central extensions e1 and e2 are equivalent extensions. Proof By definition, φ2 = φ1 + δf. Define ψ : G + μ c ∈ e 1 ↦ G + μ c + f ( G ) c ∈ e 2 . {\displaystyle \psi :G+\mu c\in {\mathfrak {e}}_{1}\mapsto G+\mu c+f(G)c\in {\mathfrak {e}}_{2}.} It follows from the definitions that ψ is a Lie algebra isomorphism and (2) holds. Corollary A cohomology class [Φ] ∈ H2(g, F) defines a central extension of g which is unique up to isomorphism. The trivial 2-cocycle gives the trivial extension, and since a 2-coboundary is cohomologous with the trivial 2-cocycle, one has Corollary A central extension defined by a coboundary is equivalent to a trivial central extension. Theorem A finite-dimensional simple Lie algebra has only trivial central extensions. Proof Since every central extension comes from a 2-cocycle φ, it suffices to show that every 2-cocycle is a coboundary. Suppose φ is a 2-cocycle on g. The task is to use this 2-cocycle to manufacture a 1-cochain f such that φ = δf. The first step is to, for each G1 ∈ g, use φ to define a linear map ρG1:g → F by ρ G 1 ( G 2 ) ≡ φ ( G 1 , G 2 ) {\displaystyle \rho _{G_{1}}(G_{2})\equiv \varphi (G_{1},G_{2})} . These linear maps are elements of g∗. 
Let ν:g∗ →g be the vector space isomorphism associated to the nondegenerate Killing form K, and define a linear map d:g → g by d ( G 1 ) ≡ ν ( ρ G 1 ) {\displaystyle d(G_{1})\equiv \nu (\rho _{G_{1}})} . This turns out to be a derivation (for a proof, see below). Since, for semisimple Lie algebras, all derivations are inner, one has d = adGd for some Gd ∈ g. Then φ ( G 1 , G 2 ) ≡ ρ G 1 ( G 2 ) = K ( ν ( ρ G 1 ) , G 2 ) ≡ K ( d ( G 1 ) , G 2 ) = K ( a d G d ( G 1 ) , G 2 ) = K ( [ G d , G 1 ] , G 2 ) = K ( G d , [ G 1 , G 2 ] ) . {\displaystyle \varphi (G_{1},G_{2})\equiv \rho _{G_{1}}(G_{2})=K(\nu (\rho _{G_{1}}),G_{2})\equiv K(d(G_{1}),G_{2})=K(\mathrm {ad} _{G_{d}}(G_{1}),G_{2})=K([G_{d},G_{1}],G_{2})=K(G_{d},[G_{1},G_{2}]).} Let f be the 1-cochain defined by f ( G ) = K ( G d , G ) . {\displaystyle f(G)=K(G_{d},G).} Then δ f ( G 1 , G 2 ) = f ( [ G 1 , G 2 ] ) = K ( G d , [ G 1 , G 2 ] ) = φ ( G 1 , G 2 ) , {\displaystyle \delta f(G_{1},G_{2})=f([G_{1},G_{2}])=K(G_{d},[G_{1},G_{2}])=\varphi (G_{1},G_{2}),} showing that φ is a coboundary. The observation that one can define a derivation d, given a symmetric non-degenerate associative form K and a 2-cocycle φ, by K ( ν ( ρ G 1 ) , G 2 ) ≡ K ( d ( G 1 ) , G 2 ) , {\displaystyle K(\nu (\rho _{G_{1}}),G_{2})\equiv K(d(G_{1}),G_{2}),} or using the symmetry of K and the antisymmetry of φ, K ( d ( G 1 ) , G 2 ) = − K ( G 1 , d ( G 2 ) ) , {\displaystyle K(d(G_{1}),G_{2})=-K(G_{1},d(G_{2})),} leads to a corollary. Corollary Let L : g × g → F {\displaystyle L:{\mathfrak {g}}\times {\mathfrak {g}}\rightarrow F} be a non-degenerate symmetric associative bilinear form and let d be a derivation satisfying L ( d ( G 1 ) , G 2 ) = − L ( G 1 , d ( G 2 ) ) , {\displaystyle L(d(G_{1}),G_{2})=-L(G_{1},d(G_{2})),} then φ defined by φ ( G 1 , G 2 ) = L ( d ( G 1 ) , G 2 ) {\displaystyle \varphi (G_{1},G_{2})=L(d(G_{1}),G_{2})} is a 2-cocycle. Proof The condition on d ensures the antisymmetry of φ. The Jacobi identity for 2-cocycles follows starting with φ ( [ G 1 , G 2 ] , G 3 ) = L ( d [ G 1 , G 2 ] , G 3 ) = L ( [ d ( G 1 ) , G 2 ] , G 3 ) + L ( [ G 1 , d ( G 2 ) ] , G 3 ) , {\displaystyle \varphi ([G_{1},G_{2}],G_{3})=L(d[G_{1},G_{2}],G_{3})=L([d(G_{1}),G_{2}],G_{3})+L([G_{1},d(G_{2})],G_{3}),} using symmetry of the form, the antisymmetry of the bracket, and once again the definition of φ in terms of L. If g is the Lie algebra of a Lie group G and e is a central extension of g, one may ask whether there is a Lie group E with Lie algebra e. The answer is, by Lie's third theorem, affirmative. But is there a central extension E of G with Lie algebra e? The answer to this question requires some machinery, and can be found in Tuynman & Wiegerinck (1987, Theorem 5.4). == Applications == The "negative" result of the preceding theorem indicates that one must, at least for semisimple Lie algebras, go to infinite-dimensional Lie algebras to find useful applications of central extensions. There are indeed such applications. Here will be presented affine Kac–Moody algebras and Virasoro algebras. These are extensions of polynomial loop-algebras and the Witt algebra respectively. === Polynomial loop algebra === Let g be a polynomial loop algebra (background), g = C [ λ , λ − 1 ] ⊗ g 0 , {\displaystyle {\mathfrak {g}}=\mathbb {C} [\lambda ,\lambda ^{-1}]\otimes {\mathfrak {g}}_{0},} where g0 is a complex finite-dimensional simple Lie algebra. The goal is to find a central extension of this algebra. Two of the theorems apply. On the one hand, if there is a 2-cocycle on g, then a central extension may be defined. 
On the other hand, if this 2-cocycle is acting on the g0 part (only), then the resulting extension is trivial. Moreover, derivations acting on g0 (only) cannot be used for definition of a 2-cocycle either because these derivations are all inner and the same problem results. One therefore looks for derivations on C[λ, λ−1]. One such set of derivations is d k ≡ λ k + 1 d d λ , k ∈ Z . {\displaystyle d_{k}\equiv \lambda ^{k+1}{\frac {d}{d\lambda }},\quad k\in \mathbb {Z} .} In order to manufacture a non-degenerate bilinear associative symmetric form L on g, attention is focused first on restrictions on the arguments, with m, n fixed. It is a theorem that every form satisfying the requirements is a multiple of the Killing form K on g0. This requires L ( λ m ⊗ G 1 , λ n ⊗ G 2 ) = γ m n K ( G 1 , G 2 ) . {\displaystyle L(\lambda ^{m}\otimes G_{1},\lambda ^{n}\otimes G_{2})=\gamma _{mn}K(G_{1},G_{2}).} Symmetry of K implies γ m n = γ n m , {\displaystyle \gamma _{mn}=\gamma _{nm},} and associativity yields γ m + k , n = γ m , k + n . {\displaystyle \gamma _{m+k,n}=\gamma _{m,k+n}.} With m = 0 one sees that γk,n = γ0,k+n. This last condition implies the preceding ones. Using this fact, define f(n) = γ0,n. The defining equation then becomes L ( λ m ⊗ G 1 , λ n ⊗ G 2 ) = f ( m + n ) K ( G 1 , G 2 ) . {\displaystyle L(\lambda ^{m}\otimes G_{1},\lambda ^{n}\otimes G_{2})=f(m+n)K(G_{1},G_{2}).} For every i ∈ Z {\displaystyle \mathbb {Z} } the definition f ( n ) = δ n i ⇔ γ m n = δ m + n , i {\displaystyle f(n)=\delta _{ni}\Leftrightarrow \gamma _{mn}=\delta _{m+n,i}} does define a symmetric associative bilinear form L i ( λ m ⊗ G 1 , λ n ⊗ G 2 ) = δ m + n , i K ( G 1 , G 2 ) . {\displaystyle L_{i}(\lambda ^{m}\otimes G_{1},\lambda ^{n}\otimes G_{2})=\delta _{m+n,i}K(G_{1},G_{2}).} These span a vector space of forms which have the right properties. Returning to the derivations at hand and the condition L i ( d k ( λ l ⊗ G 1 ) , λ m ⊗ G 2 ) = − L i ( λ l ⊗ G 1 , d k ( λ m ⊗ G 2 ) ) , {\displaystyle L_{i}(d_{k}(\lambda ^{l}\otimes G_{1}),\lambda ^{m}\otimes G_{2})=-L_{i}(\lambda ^{l}\otimes G_{1},d_{k}(\lambda ^{m}\otimes G_{2})),} one sees, using the definitions, that l δ k + l + m , i = − m δ k + l + m , i , {\displaystyle l\delta _{k+l+m,i}=-m\delta _{k+l+m,i},} or, with n = l + m, n δ k + n , i = 0. {\displaystyle n\delta _{k+n,i}=0.} This (and the antisymmetry condition) holds if k = i, in particular it holds when k = i = 0. Thus choose L = L0 and d = d0. With these choices, the premises in the corollary are satisfied. The 2-cocycle φ defined by φ ( P ( λ ) ⊗ G 1 , Q ( λ ) ⊗ G 2 ) = L ( λ d P d λ ⊗ G 1 , Q ( λ ) ⊗ G 2 ) {\displaystyle \varphi (P(\lambda )\otimes G_{1},Q(\lambda )\otimes G_{2})=L(\lambda {\frac {dP}{d\lambda }}\otimes G_{1},Q(\lambda )\otimes G_{2})} is finally employed to define a central extension of g, e = g ⊕ C C , {\displaystyle {\mathfrak {e}}={\mathfrak {g}}\oplus \mathbb {C} C,} with Lie bracket [ P ( λ ) ⊗ G 1 + μ C , Q ( λ ) ⊗ G 2 + ν C ] = P ( λ ) Q ( λ ) ⊗ [ G 1 , G 2 ] + φ ( P ( λ ) ⊗ G 1 , Q ( λ ) ⊗ G 2 ) C . 
{\displaystyle [P(\lambda )\otimes G_{1}+\mu C,Q(\lambda )\otimes G_{2}+\nu C]=P(\lambda )Q(\lambda )\otimes [G_{1},G_{2}]+\varphi (P(\lambda )\otimes G_{1},Q(\lambda )\otimes G_{2})C.} For basis elements, suitably normalized and with antisymmetric structure constants, one has [ λ l ⊗ G i + μ C , λ m ⊗ G j + ν C ] = λ l + m ⊗ [ G i , G j ] + φ ( λ l ⊗ G i , λ m ⊗ G j ) C = λ l + m ⊗ C i j k G k + L ( λ d λ l d λ ⊗ G i , λ m ⊗ G j ) C = λ l + m ⊗ C i j k G k + l L ( λ l ⊗ G i , λ m ⊗ G j ) C = λ l + m ⊗ C i j k G k + l δ l + m , 0 K ( G i , G j ) C = λ l + m ⊗ C i j k G k + l δ l + m , 0 C i k m C j m k C = λ l + m ⊗ C i j k G k + l δ l + m , 0 δ i j C . {\displaystyle {\begin{aligned}{}[\lambda ^{l}\otimes G_{i}+\mu C,\lambda ^{m}\otimes G_{j}+\nu C]&=\lambda ^{l+m}\otimes [G_{i},G_{j}]+\varphi (\lambda ^{l}\otimes G_{i},\lambda ^{m}\otimes G_{j})C\\&=\lambda ^{l+m}\otimes {C_{ij}}^{k}G_{k}+L(\lambda {\frac {d\lambda ^{l}}{d\lambda }}\otimes G_{i},\lambda ^{m}\otimes G_{j})C\\&=\lambda ^{l+m}\otimes {C_{ij}}^{k}G_{k}+lL(\lambda ^{l}\otimes G_{i},\lambda ^{m}\otimes G_{j})C\\&=\lambda ^{l+m}\otimes {C_{ij}}^{k}G_{k}+l\delta _{l+m,0}K(G_{i},G_{j})C\\&=\lambda ^{l+m}\otimes {C_{ij}}^{k}G_{k}+l\delta _{l+m,0}{C_{ik}}^{m}{C_{jm}}^{k}C=\lambda ^{l+m}\otimes {C_{ij}}^{k}G_{k}+l\delta _{l+m,0}\delta ^{ij}C.\end{aligned}}} This is a universal central extension of the polynomial loop algebra. A note on terminology In physics terminology, the algebra of above might pass for a Kac–Moody algebra, whilst it will probably not in mathematics terminology. An additional dimension, an extension by a derivation is required for this. Nonetheless, if, in a physical application, the eigenvalues of g0 or its representative are interpreted as (ordinary) quantum numbers, the additional superscript on the generators is referred to as the level. It is an additional quantum number. An additional operator whose eigenvalues are precisely the levels is introduced further below. === Current algebra === As an application of a central extension of polynomial loop algebra, a current algebra of a quantum field theory is considered (background). Suppose one has a current algebra, with the interesting commutator being with a Schwinger term. To construct this algebra mathematically, let g be the centrally extended polynomial loop algebra of the previous section with [ λ l ⊗ G i + μ C , λ m ⊗ G j + ν C ] = λ l + m ⊗ C i j k G k + l δ l + m , 0 δ i j C {\displaystyle [\lambda ^{l}\otimes G_{i}+\mu C,\lambda ^{m}\otimes G_{j}+\nu C]=\lambda ^{l+m}\otimes {C_{ij}}^{k}G_{k}+l\delta _{l+m,0}\delta _{ij}C} as one of the commutation relations, or, with a switch of notation (l→m, m→n, i→a, j→b, λm⊗Ga→Tma) with a factor of i under the physics convention, [ T a m , T b n ] = i C a b c T c m + n + m δ m + n , 0 δ a b C . {\displaystyle [T_{a}^{m},T_{b}^{n}]=i{C_{ab}}^{c}T_{c}^{m+n}+m\delta _{m+n,0}\delta _{ab}C.} Define using elements of g, J a ( x ) = ℏ L ∑ n = − ∞ ∞ e 2 π i n x L T a − n , x ∈ R . {\displaystyle J_{a}(x)={\frac {\hbar }{L}}\sum _{n=-\infty }^{\infty }e^{\frac {2\pi inx}{L}}T_{a}^{-n},x\in \mathbb {R} .} One notes that J a ( x + L ) = J a ( x ) {\displaystyle J_{a}(x+L)=J_{a}(x)} so that it is defined on a circle. Now compute the commutator, [ J a ( x ) , J b ( y ) ] = ( ℏ L ) 2 [ ∑ n = − ∞ ∞ e 2 π i n x L T a − n , ∑ m = − ∞ ∞ e 2 π i m y L T b − m ] = ( ℏ L ) 2 ∑ m , n = − ∞ ∞ e 2 π i n x L e 2 π i m y L [ T a − n , T b − m ] . 
{\displaystyle {\begin{aligned}[][J_{a}(x),J_{b}(y)]&=\left({\frac {\hbar }{L}}\right)^{2}\left[\sum _{n=-\infty }^{\infty }e^{\frac {2\pi inx}{L}}T_{a}^{-n},\sum _{m=-\infty }^{\infty }e^{\frac {2\pi imy}{L}}T_{b}^{-m}\right]\\&=\left({\frac {\hbar }{L}}\right)^{2}\sum _{m,n=-\infty }^{\infty }e^{\frac {2\pi inx}{L}}e^{\frac {2\pi imy}{L}}[T_{a}^{-n},T_{b}^{-m}].\end{aligned}}} For simplicity, switch coordinates so that y → 0, x → x − y ≡ z and use the commutation relations, [ J a ( z ) , J b ( 0 ) ] = ( ℏ L ) 2 ∑ m , n = − ∞ ∞ e 2 π i n z L [ i C a b c T c − m − n + m δ m + n , 0 δ a b C ] = ( ℏ L ) 2 ∑ m = − ∞ ∞ e 2 π i ( − m ) z L ∑ l = − ∞ ∞ i e 2 π i ( l ) z L C a b c T c − l + ( ℏ L ) 2 ∑ m , n = − ∞ ∞ e 2 π i n z L m δ m + n , 0 δ a b C = ( ℏ L ) ∑ m = − ∞ ∞ e 2 π i m z L i C a b c J c ( z ) − ( ℏ L ) 2 ∑ n = − ∞ ∞ e 2 π i n z L n δ a b C {\displaystyle {\begin{aligned}[][J_{a}(z),J_{b}(0)]&=\left({\frac {\hbar }{L}}\right)^{2}\sum _{m,n=-\infty }^{\infty }e^{\frac {2\pi inz}{L}}[i{C_{ab}}^{c}T_{c}^{-m-n}+m\delta _{m+n,0}\delta _{ab}C]\\&=\left({\frac {\hbar }{L}}\right)^{2}\sum _{m=-\infty }^{\infty }e^{\frac {2\pi i(-m)z}{L}}\sum _{l=-\infty }^{\infty }ie^{\frac {2\pi i(l)z}{L}}{C_{ab}}^{c}T_{c}^{-l}+\left({\frac {\hbar }{L}}\right)^{2}\sum _{m,n=-\infty }^{\infty }e^{\frac {2\pi inz}{L}}m\delta _{m+n,0}\delta _{ab}C\\&=\left({\frac {\hbar }{L}}\right)\sum _{m=-\infty }^{\infty }e^{\frac {2\pi imz}{L}}i{C_{ab}}^{c}J_{c}(z)-\left({\frac {\hbar }{L}}\right)^{2}\sum _{n=-\infty }^{\infty }e^{\frac {2\pi inz}{L}}n\delta _{ab}C\end{aligned}}} Now employ the Poisson summation formula, 1 L ∑ n = − ∞ ∞ e − 2 π i n z L = 1 L ∑ n = − ∞ ∞ δ ( z + n L ) = δ ( z ) {\displaystyle {\frac {1}{L}}\sum _{n=-\infty }^{\infty }e^{\frac {-2\pi inz}{L}}={\frac {1}{L}}\sum _{n=-\infty }^{\infty }\delta (z+nL)=\delta (z)} for z in the interval (0, L) and differentiate it to yield − 2 π i L 2 ∑ n = − ∞ ∞ n e − 2 π i n z L = δ ′ ( z ) , {\displaystyle -{\frac {2\pi i}{L^{2}}}\sum _{n=-\infty }^{\infty }ne^{\frac {-2\pi inz}{L}}=\delta '(z),} and finally [ J a ( x − y ) , J b ( 0 ) ] = i ℏ C a b c J c ( x − y ) δ ( x − y ) + i ℏ 2 2 π δ a b C δ ′ ( x − y ) , {\displaystyle [J_{a}(x-y),J_{b}(0)]=i\hbar {C_{ab}}^{c}J_{c}(x-y)\delta (x-y)+{\frac {i\hbar ^{2}}{2\pi }}\delta _{ab}C\delta '(x-y),} or [ J a ( x ) , J b ( y ) ] = i ℏ C a b c J c ( x ) δ ( x − y ) + i ℏ 2 2 π δ a b C δ ′ ( x − y ) , {\displaystyle [J_{a}(x),J_{b}(y)]=i\hbar {C_{ab}}^{c}J_{c}(x)\delta (x-y)+{\frac {i\hbar ^{2}}{2\pi }}\delta _{ab}C\delta '(x-y),} since the delta functions arguments only ensure that the arguments of the left and right arguments of the commutator are equal (formally δ(z) = δ(z − 0) ↦ δ((x −y) − 0) = δ(x −y)). By comparison with CA10, this is a current algebra in two spacetime dimensions, including a Schwinger term, with the space dimension curled up into a circle. In the classical setting of quantum field theory, this is perhaps of little use, but with the advent of string theory where fields live on world sheets of strings, and spatial dimensions are curled up, there may be relevant applications. === Kac–Moody algebra === The derivation d0 used in the construction of the 2-cocycle φ in the previous section can be extended to a derivation D on the centrally extended polynomial loop algebra, here denoted by g in order to realize a Kac–Moody algebra (background). Simply set D ( P ( λ ) ⊗ G + μ C ) = λ d P ( λ ) d λ ⊗ G . 
{\displaystyle D(P(\lambda )\otimes G+\mu C)=\lambda {\frac {dP(\lambda )}{d\lambda }}\otimes G.} Next, define as a vector space e = C D + g . {\displaystyle {\mathfrak {e}}=\mathbb {C} D+{\mathfrak {g}}.} The Lie bracket on e is, according to the standard construction with a derivation, given on a basis by [ λ m ⊗ G 1 + μ C + ν D , λ n ⊗ G 2 + μ ′ C + ν ′ D ] = λ m + n ⊗ [ G 1 , G 2 ] + m δ m + n , 0 K ( G 1 , G 2 ) C + ν D ( λ n ⊗ G 2 ) − ν ′ D ( λ m ⊗ G 1 ) = λ m + n ⊗ [ G 1 , G 2 ] + m δ m + n , 0 K ( G 1 , G 2 ) C + ν n λ n ⊗ G 2 − ν ′ m λ m ⊗ G 1 . {\displaystyle {\begin{aligned}{}[\lambda ^{m}\otimes G_{1}+\mu C+\nu D,\lambda ^{n}\otimes G_{2}+\mu 'C+\nu 'D]&=\lambda ^{m+n}\otimes [G_{1},G_{2}]+m\delta _{m+n,0}K(G_{1},G_{2})C+\nu D(\lambda ^{n}\otimes G_{2})-\nu 'D(\lambda ^{m}\otimes G_{1})\\&=\lambda ^{m+n}\otimes [G_{1},G_{2}]+m\delta _{m+n,0}K(G_{1},G_{2})C+\nu n\lambda ^{n}\otimes G_{2}-\nu 'm\lambda ^{m}\otimes G_{1}.\end{aligned}}} For convenience, define G i m ↔ λ m ⊗ G i . {\displaystyle G_{i}^{m}\leftrightarrow \lambda ^{m}\otimes G_{i}.} In addition, assume the basis on the underlying finite-dimensional simple Lie algebra has been chosen so that the structure coefficients are antisymmetric in all indices and that the basis is appropriately normalized. Then one immediately verifies, through the definitions, the following commutation relations. [ G i m , G j n ] = C i j k G k m + n + m δ i j δ m + n , 0 C , [ C , G i m ] = 0 , 1 ≤ i , j ≤ N , m , n ∈ Z [ D , G i m ] = m G i m [ D , C ] = 0. {\displaystyle {\begin{aligned}{}[G_{i}^{m},G_{j}^{n}]&={C_{ij}}^{k}G_{k}^{m+n}+m\delta _{ij}\delta _{m+n,0}C,\\{}[C,G_{i}^{m}]&=0,\quad 1\leq i,j\leq N,\quad m,n\in \mathbb {Z} \\{}[D,G_{i}^{m}]&=mG_{i}^{m}\\{}[D,C]&=0.\end{aligned}}} These are precisely the short-hand description of an untwisted affine Kac–Moody algebra. To recapitulate, begin with a finite-dimensional simple Lie algebra. Define a space of formal Laurent polynomials with coefficients in the finite-dimensional simple Lie algebra. With the aid of a symmetric non-degenerate associative bilinear form and a derivation, a 2-cocycle is defined, subsequently used in the standard prescription for a central extension by a 2-cocycle. Extend the derivation to this new space, use the standard prescription for a split extension by a derivation, and an untwisted affine Kac–Moody algebra is obtained. === Virasoro algebra === The purpose is to construct the Virasoro algebra (named after Miguel Angel Virasoro) as a central extension by a 2-cocycle φ of the Witt algebra W (background). For details see Schottenloher. The Jacobi identity for 2-cocycles, written out for basis elements with η l m ≡ φ ( d l , d m ) , {\displaystyle \eta _{lm}\equiv \varphi (d_{l},d_{m}),} yields ( m − p ) η l , m + p + ( p − l ) η m , p + l + ( l − m ) η p , l + m = 0. ( V 10 ) {\displaystyle (m-p)\eta _{l,m+p}+(p-l)\eta _{m,p+l}+(l-m)\eta _{p,l+m}=0.\qquad \mathrm {(V10)} } Letting l = 0 {\displaystyle l=0} and using antisymmetry of η one obtains ( m + p ) η m p = ( m − p ) η m + p , 0 . {\displaystyle (m+p)\eta _{mp}=(m-p)\eta _{m+p,0}.} In the extension, the commutation relations for the element d0 are [ d 0 + μ C , d m + ν C ] φ = − m d m + η 0 m C = − m ( d m − η 0 m m C ) . {\displaystyle [d_{0}+\mu C,d_{m}+\nu C]_{\varphi }=-md_{m}+\eta _{0m}C=-m(d_{m}-{\frac {\eta _{0m}}{m}}C).} It is desirable to get rid of the central charge on the right hand side. To do this define f : W → C ; d m → φ ( d 0 , d m ) m = η 0 m m .
{\displaystyle f:W\to \mathbb {C} ;d_{m}\to {\frac {\varphi (d_{0},d_{m})}{m}}={\frac {\eta _{0m}}{m}}.} Then, using f as a 1-cochain, η 0 n ′ = φ ′ ( d 0 , d n ) = φ ( d 0 , d n ) + δ f ( d 0 , d n ) = φ ( d 0 , d n ) − n η 0 n n = 0 , {\displaystyle \eta '_{0n}=\varphi '(d_{0},d_{n})=\varphi (d_{0},d_{n})+\delta f(d_{0},d_{n})=\varphi (d_{0},d_{n})-n{\frac {\eta _{0n}}{n}}=0,} so with this 2-cocycle, equivalent to the previous one, one has [ d 0 + μ C , d m + ν C ] φ ′ = − m d m . {\displaystyle [d_{0}+\mu C,d_{m}+\nu C]_{\varphi '}=-md_{m}.} With this new 2-cocycle (skip the prime) the condition becomes ( m + p ) η m p = ( m − p ) η m + p , 0 = 0 , {\displaystyle (m+p)\eta _{mp}=(m-p)\eta _{m+p,0}=0,} and thus η m p = a ( m ) δ m , − p , a ( − m ) = − a ( m ) , {\displaystyle \eta _{mp}=a(m)\delta _{m,-p},\quad a(-m)=-a(m),} where the last condition is due to the antisymmetry of the Lie bracket. With this, and with l + m + p = 0 (cutting out a "plane" in Z 3 {\displaystyle \mathbb {Z} ^{3}} ), (V10) yields ( 2 m + p ) a ( p ) + ( m − p ) a ( m + p ) − ( m + 2 p ) a ( m ) = 0 , {\displaystyle (2m+p)a(p)+(m-p)a(m+p)-(m+2p)a(m)=0,} which with p = 1 (cutting out a "line" in Z 2 {\displaystyle \mathbb {Z} ^{2}} ) becomes ( m − 1 ) a ( m + 1 ) − ( m + 2 ) a ( m ) + ( 2 m + 1 ) a ( 1 ) = 0. {\displaystyle (m-1)a(m+1)-(m+2)a(m)+(2m+1)a(1)=0.} This is a difference equation generally solved by a ( m ) = α m + β m 3 . {\displaystyle a(m)=\alpha m+\beta m^{3}.} The commutator in the extension on elements of W is then [ d l , d m ] = ( l − m ) d l + m + ( α m + β m 3 ) δ l , − m C . {\displaystyle [d_{l},d_{m}]=(l-m)d_{l+m}+(\alpha m+\beta m^{3})\delta _{l,-m}C.} With β = 0 it is possible to change basis (or modify the 2-cocycle by a 2-coboundary) so that [ d l ′ , d m ′ ] = ( l − m ) d l + m ′ , {\displaystyle [d'_{l},d'_{m}]=(l-m)d'_{l+m},} with the central charge absent altogether, and the extension is hence trivial. (This was not (generally) the case with the previous modification, where only d0 obtained the original relations.) With β ≠ 0, under the change of basis d l ′ = d l + δ 0 l γ − α 2 C , {\displaystyle d'_{l}=d_{l}+\delta _{0l}{\frac {\gamma -\alpha }{2}}C,} the commutation relations take the form [ d l ′ , d m ′ ] = ( l − m ) d l + m ′ + ( γ m + β m 3 ) δ l , − m C , {\displaystyle [d'_{l},d'_{m}]=(l-m)d'_{l+m}+(\gamma m+\beta m^{3})\delta _{l,-m}C,} showing that the part linear in m is trivial. It also shows that H2(W, C {\displaystyle \mathbb {C} } ) is one-dimensional (corresponding to the choice of β). The conventional choice is to take α = −β = 1⁄12, still retaining freedom by absorbing an arbitrary factor into the arbitrary object C. The Virasoro algebra V is then V = W + C C , {\displaystyle {\mathcal {V}}={\mathcal {W}}+\mathbb {C} C,} with commutation relations [ d l , d m ] = ( l − m ) d l + m + 1 12 ( l 3 − l ) δ l + m , 0 C . {\displaystyle [d_{l},d_{m}]=(l-m)d_{l+m}+{\frac {1}{12}}(l^{3}-l)\delta _{l+m,0}C.} ==== Bosonic open strings ==== The relativistic classical open string (background) is subject to quantization. This roughly amounts to taking the position and the momentum of the string and promoting them to operators on the space of states of open strings. Since strings are extended objects, this results in a continuum of operators depending on the parameter σ. The following commutation relations are postulated in the Heisenberg picture. [ X I ( τ , σ ) , P τ J ( τ , σ ′ ) ] = i η I J δ ( σ − σ ′ ) , [ x 0 − ( τ ) , p + ( τ ) ] = − i .
{\displaystyle {\begin{aligned}{}[X^{I}(\tau ,\sigma ),{\mathcal {P}}^{\tau J}(\tau ,\sigma ')]&=i\eta ^{IJ}\delta (\sigma -\sigma '),\\{}[x_{0}^{-}(\tau ),p^{+}(\tau )]&=-i.\end{aligned}}} All other commutators vanish. Because of the continuum of operators, and because of the delta functions, it is desirable to express these relations instead in terms of the quantized versions of the Virasoro modes, the Virasoro operators. These are calculated to satisfy [ α m I , α n J ] = m η I J δ m + n , 0 {\displaystyle [\alpha _{m}^{I},\alpha _{n}^{J}]=m\eta ^{IJ}\delta _{m+n,0}} They are interpreted as creation and annihilation operators acting on Hilbert space, increasing or decreasing the quantum of their respective modes. If the index is negative, the operator is a creation operator, otherwise it is an annihilation operator. (If it is zero, it is proportional to the total momentum operator.) Since the light cone plus and minus modes were expressed in terms of the transverse Virasoro modes, one must consider the commutation relations between the Virasoro operators. These were defined classically (when they were modes) as L n = 1 2 ∑ p ∈ Z α n − p I α p I . {\displaystyle L_{n}={\frac {1}{2}}\sum _{p\in \mathbb {Z} }\alpha _{n-p}^{I}\alpha _{p}^{I}.} Since, in the quantized theory, the alphas are operators, the ordering of the factors matters. In view of the commutation relation between the mode operators, it will only matter for the operator L0 (for which m + n = 0). L0 is chosen normal ordered, L 0 = 1 2 α 0 I α 0 I + ∑ p = 1 ∞ α − p I α p I = α ′ p I p I + ∑ p = 1 ∞ p a p I † a p I + c , {\displaystyle L_{0}={\frac {1}{2}}\alpha _{0}^{I}\alpha _{0}^{I}+\sum _{p=1}^{\infty }\alpha _{-p}^{I}\alpha _{p}^{I}=\alpha 'p^{I}p^{I}+\sum _{p=1}^{\infty }pa_{p}^{I\dagger }a_{p}^{I}+c,} where c is a possible ordering constant. One obtains after a somewhat lengthy calculation the relations [ L m , L n ] = ( m − n ) L m + n , m + n ≠ 0. {\displaystyle [L_{m},L_{n}]=(m-n)L_{m+n},\quad m+n\neq 0.} If one allowed m + n = 0 above, then one would have precisely the commutation relations of the Witt algebra. Instead one has [ L m , L n ] = ( m − n ) L m + n + D − 2 12 ( m 3 − m ) δ m + n , 0 , ∀ m , n ∈ Z . {\displaystyle [L_{m},L_{n}]=(m-n)L_{m+n}+{\frac {D-2}{12}}(m^{3}-m)\delta _{m+n,0},\quad \forall m,n\in \mathbb {Z} .} Upon identification of the generic central element C with (D − 2) times the identity operator, this is the Virasoro algebra, the universal central extension of the Witt algebra. The operator L0 enters the theory as the Hamiltonian, modulo an additive constant. Moreover, the Virasoro operators enter into the definition of the Lorentz generators of the theory. It is perhaps the most important algebra in string theory. The consistency of the Lorentz generators fixes the spacetime dimensionality to 26. While the theory presented here (for relative simplicity of exposition) is unphysical, or at the very least incomplete (it has, for instance, no fermions), the Virasoro algebra arises in the same way in the more viable superstring theory and M-theory. === Group extension === A projective representation Π(G) of a Lie group G (background) can be used to define a so-called group extension Gex. In quantum mechanics, Wigner's theorem asserts that if G is a symmetry group, then it will be represented projectively on Hilbert space by unitary or antiunitary operators. This is often dealt with by passing to the universal covering group of G and taking it as the symmetry group.
This works nicely for the rotation group SO(3) and the Lorentz group O(3, 1), but it does not work when the symmetry group is the Galilean group. In this case one has to pass to its central extension, the Bargmann group, which is the symmetry group of the Schrödinger equation. Likewise, if G = R 2 {\displaystyle \mathbb {R} ^{2}} , the group of translations in position and momentum space, one has to pass to its central extension, the Heisenberg group. Let ω be the 2-cocycle on G induced by Π. Define G e x = C ∗ × G = { ( λ , g ) | λ ∈ C ∗ , g ∈ G } {\displaystyle G_{\mathrm {ex} }=\mathbb {C} ^{*}\times G=\{(\lambda ,g)|\lambda \in \mathbb {C} ^{*},g\in G\}} as a set and let the multiplication be defined by ( λ 1 , g 1 ) ( λ 2 , g 2 ) = ( λ 1 λ 2 ω ( g 1 , g 2 ) , g 1 g 2 ) . {\displaystyle (\lambda _{1},g_{1})(\lambda _{2},g_{2})=(\lambda _{1}\lambda _{2}\omega (g_{1},g_{2}),g_{1}g_{2}).} Associativity holds since ω is a 2-cocycle on G. One has for the unit element ( 1 , e ) ( λ , g ) = ( λ ω ( e , g ) , g ) = ( λ , g ) = ( λ , g ) ( 1 , e ) , {\displaystyle (1,e)(\lambda ,g)=(\lambda \omega (e,g),g)=(\lambda ,g)=(\lambda ,g)(1,e),} and for the inverse ( λ , g ) − 1 = ( 1 λ ω ( g , g − 1 ) , g − 1 ) . {\displaystyle (\lambda ,g)^{-1}=\left({\frac {1}{\lambda \omega (g,g^{-1})}},g^{-1}\right).} The set { ( λ , e ) | λ ∈ C ∗ } {\displaystyle \{(\lambda ,e)|\lambda \in \mathbb {C} ^{*}\}} is an abelian subgroup of Gex. This means that Gex is not semisimple. The center of Gex, Z(Gex) = {z ∈ Gex|zg = gz ∀g ∈ Gex}, includes this subgroup. The center may be larger. At the level of Lie algebras it can be shown that the Lie algebra gex of Gex is given by g e x = C C ⊕ g , {\displaystyle {\mathfrak {g}}_{\mathrm {ex} }=\mathbb {C} C\oplus {\mathfrak {g}},} as a vector space and endowed with the Lie bracket [ μ C + G 1 , ν C + G 2 ] = [ G 1 , G 2 ] + η ( G 1 , G 2 ) C . {\displaystyle [\mu C+G_{1},\nu C+G_{2}]=[G_{1},G_{2}]+\eta (G_{1},G_{2})C.} Here η is a 2-cocycle on g. This 2-cocycle can be obtained from ω, albeit in a highly nontrivial way. Now by using the projective representation Π one may define a map Πex by Π e x ( ( λ , g ) ) = λ Π ( g ) . {\displaystyle \Pi _{\mathrm {ex} }((\lambda ,g))=\lambda \Pi (g).} It has the properties Π e x ( ( λ 1 , g 1 ) ) Π e x ( ( λ 2 , g 2 ) ) = λ 1 λ 2 Π ( g 1 ) Π ( g 2 ) = λ 1 λ 2 ω ( g 1 , g 2 ) Π ( g 1 g 2 ) = Π e x ( λ 1 λ 2 ω ( g 1 , g 2 ) , g 1 g 2 ) = Π e x ( ( λ 1 , g 1 ) ( λ 2 , g 2 ) ) , {\displaystyle \Pi _{\mathrm {ex} }((\lambda _{1},g_{1}))\Pi _{\mathrm {ex} }((\lambda _{2},g_{2}))=\lambda _{1}\lambda _{2}\Pi (g_{1})\Pi (g_{2})=\lambda _{1}\lambda _{2}\omega (g_{1},g_{2})\Pi (g_{1}g_{2})=\Pi _{\mathrm {ex} }(\lambda _{1}\lambda _{2}\omega (g_{1},g_{2}),g_{1}g_{2})=\Pi _{\mathrm {ex} }((\lambda _{1},g_{1})(\lambda _{2},g_{2})),} so Πex(Gex) is a bona fide representation of Gex. In the context of Wigner's theorem, the situation may be depicted as follows (replace C ∗ {\displaystyle \mathbb {C} ^{*}} by U(1)); let SH denote the unit sphere in Hilbert space H, and let (·,·) be its inner product. Let PH denote ray space and [·,·] the ray product. Let moreover a wiggly arrow denote a group action. Then the diagram commutes, i.e. π 2 ∘ Π e x ( ( λ , g ) ) ( ψ ) = Π ∘ π ( g ) ( π 1 ( ψ ) ) , ψ ∈ S H . {\displaystyle \pi _{2}\circ \Pi _{\mathrm {ex} }((\lambda ,g))(\psi )=\Pi \circ \pi (g)(\pi _{1}(\psi )),\quad \psi \in S{\mathcal {H}}.} Moreover, in the same way that G is a symmetry of PH preserving [·,·], Gex is a symmetry of SH preserving (·,·). The fibers of π2 are all circles.
These circles are left invariant under the action of U(1). The action of U(1) on these fibers is transitive with no fixed point. The conclusion is that SH is a principal fiber bundle over PH with structure group U(1). == Background material == In order to adequately discuss extensions, structure that goes beyond the defining properties of a Lie algebra is needed. Rudimentary facts about these are collected here for quick reference. === Derivations === A derivation δ on a Lie algebra g is a map δ : g → g {\displaystyle \delta :{\mathfrak {g}}\rightarrow {\mathfrak {g}}} such that the Leibniz rule δ [ G 1 , G 2 ] = [ δ G 1 , G 2 ] + [ G 1 , δ G 2 ] {\displaystyle \delta [G_{1},G_{2}]=[\delta G_{1},G_{2}]+[G_{1},\delta G_{2}]} holds. The set of derivations on a Lie algebra g is denoted der g. It is itself a Lie algebra under the Lie bracket [ δ 1 , δ 2 ] = δ 1 ∘ δ 2 − δ 2 ∘ δ 1 . {\displaystyle [\delta _{1},\delta _{2}]=\delta _{1}\circ \delta _{2}-\delta _{2}\circ \delta _{1}.} It is the Lie algebra of the group Aut g of automorphisms of g. One has to show δ [ G 1 , G 2 ] = [ δ G 1 , G 2 ] + [ G 1 , δ G 2 ] ⇔ e t δ [ G 1 , G 2 ] = [ e t δ G 1 , e t δ G 2 ] , ∀ t ∈ R . {\displaystyle \delta [G_{1},G_{2}]=[\delta G_{1},G_{2}]+[G_{1},\delta G_{2}]\Leftrightarrow e^{t\delta }[G_{1},G_{2}]=[e^{t\delta }G_{1},e^{t\delta }G_{2}],\quad \forall t\in \mathbb {R} .} If the rhs holds, differentiate it and set t = 0, implying that the lhs holds. If the lhs holds (A), write the rhs as [ G 1 , G 2 ] = ? e − t δ [ e t δ G 1 , e t δ G 2 ] , {\displaystyle [G_{1},G_{2}]\;{\overset {?}{=}}\;e^{-t\delta }[e^{t\delta }G_{1},e^{t\delta }G_{2}],} and differentiate the rhs of this expression. It is, using (A), identically zero. Hence the rhs of this expression is independent of t and equals its value for t = 0, which is the lhs of this expression. If G ∈ g, then adG, acting by adG1(G2) = [G1, G2], is a derivation. The set {adG : G ∈ g} is the set of inner derivations on g. For finite-dimensional simple Lie algebras all derivations are inner derivations. === Semidirect product (groups) === Consider two Lie groups G and H and Aut H, the automorphism group of H. The latter is the group of isomorphisms of H. If there is a Lie group homomorphism Φ:G → Aut H, then for each g ∈ G there is a Φ(g) ≡ Φg ∈ Aut H with the property Φgg' = ΦgΦg', g,g' ∈ G. Denote with E the set H × G and define multiplication by ( h , g ) ( h ′ , g ′ ) = ( h Φ g ( h ′ ) , g g ′ ) . ( 4 ) {\displaystyle (h,g)(h',g')=(h\Phi _{g}(h'),gg').\qquad \mathrm {(4)} } Then E is a group with identity (eH, eG) and the inverse is given by (h, g)−1 = (Φg−1(h−1), g−1). Using the expression for the inverse and equation (4) it is seen that H is normal in E. Denote the group with this semidirect product as E = H ⊗S G. Conversely, if E = H ⊗S G is a given semidirect product expression of the group E, then by definition H is normal in E and Cg ∈ Aut H for each g ∈ G where Cg (h) ≡ ghg−1 and the map Φ:g ↦ Cg is a homomorphism. Now make use of the Lie correspondence. The maps Φg:H → H, g ∈ G each induce, at the level of Lie algebras, a map Ψg:h → h. This map is computed by Ψ g ( H ) = d d t Φ g ( e t H ) | t = 0 , H ∈ h . {\displaystyle \Psi _{g}(H)=\left.{\frac {d}{dt}}\Phi _{g}(e^{tH})\right|_{t=0},\quad H\in {\mathfrak {h}}.} For instance, if G and H are both subgroups of a larger group E and Φg(h) = ghg−1, then Ψ g ( H ) = g H g − 1 , {\displaystyle \Psi _{g}(H)=gHg^{-1},} and one recognizes Ψ as the adjoint action Ad of E on h restricted to G. Now Ψ:G → Aut h [ ⊂ GL(h) if h is finite-dimensional] is a homomorphism, and appealing once more to the Lie correspondence, there is a unique Lie algebra homomorphism ψ:g → Lie(Aut h) = Der h ⊂ gl(h). This map is (formally) given by ψ G = d d t Ψ e t G | t = 0 , G ∈ g ; {\displaystyle \psi _{G}=\left.{\frac {d}{dt}}\Psi _{e^{tG}}\right|_{t=0},\quad G\in {\mathfrak {g}};} for example, if Ψ = Ad, then (formally) ψ G = d d t A d e t G | t = 0 = a d G , {\displaystyle \psi _{G}=\left.{\frac {d}{dt}}\mathrm {Ad} _{e^{tG}}\right|_{t=0}=\mathrm {ad} _{G},} where a relationship between Ad and the adjoint action ad, rigorously proved here, is used.
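The formal derivative defining ψ can be probed concretely for matrix groups. The following Python sketch (an illustrative addition, not part of the source material) checks numerically, for random 3 × 3 matrices, that the derivative of Ad at the identity is the adjoint map ad, i.e. that the derivative of e^{tX}Ye^{−tX} at t = 0 equals [X, Y]:

import numpy as np
from scipy.linalg import expm

# Illustrative check (assumed example): d/dt (e^{tX} Y e^{-tX}) at t = 0
# equals the bracket [X, Y] = XY - YX, i.e. the derivative of Ad is ad.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 3, 3))

t = 1e-6  # step for a central finite-difference approximation
Ad_plus = expm(t * X) @ Y @ expm(-t * X)
Ad_minus = expm(-t * X) @ Y @ expm(t * X)
derivative = (Ad_plus - Ad_minus) / (2 * t)

bracket = X @ Y - Y @ X
assert np.allclose(derivative, bracket, atol=1e-6)

Running the same sketch with Y ranging over a basis of a matrix Lie algebra recovers the matrix of adX in that basis.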
Lie algebra The Lie algebra is, as a vector space, e = h ⊕ g. This is clear since GH generates E and G ∩ H = {(eH, eG)}. The Lie bracket is given by [ H 1 + G 1 , H 2 + G 2 ] e = [ H 1 , H 2 ] h + ψ G 1 ( H 2 ) − ψ G 2 ( H 1 ) + [ G 1 , G 2 ] g . {\displaystyle [H_{1}+G_{1},H_{2}+G_{2}]_{\mathfrak {e}}=[H_{1},H_{2}]_{\mathfrak {h}}+\psi _{G_{1}}(H_{2})-\psi _{G_{2}}(H_{1})+[G_{1},G_{2}]_{\mathfrak {g}}.} === Cohomology === For the present purposes, consideration of a limited portion of the theory of Lie algebra cohomology suffices. The definitions are not the most general possible, or even the most common ones, but the objects they refer to are authentic instances of the more general definitions. 2-cocycles The objects of primary interest are the 2-cocycles on g, defined as bilinear functions, ϕ : g × g → F , {\displaystyle \phi :{\mathfrak {g}}\times {\mathfrak {g}}\rightarrow F,} that are alternating, ϕ ( G 1 , G 2 ) = − ϕ ( G 2 , G 1 ) , {\displaystyle \phi (G_{1},G_{2})=-\phi (G_{2},G_{1}),} and have a property resembling the Jacobi identity, called the Jacobi identity for 2-cocycles, ϕ ( G 1 , [ G 2 , G 3 ] ) + ϕ ( G 2 , [ G 3 , G 1 ] ) + ϕ ( G 3 , [ G 1 , G 2 ] ) = 0. {\displaystyle \phi (G_{1},[G_{2},G_{3}])+\phi (G_{2},[G_{3},G_{1}])+\phi (G_{3},[G_{1},G_{2}])=0.} The set of all 2-cocycles on g is denoted Z2(g, F). 2-cocycles from 1-cochains Some 2-cocycles can be obtained from 1-cochains. A 1-cochain on g is simply a linear map, f : g → F {\displaystyle f:{\mathfrak {g}}\rightarrow F} The set of all such maps is denoted C1(g, F) and, of course (in at least the finite-dimensional case), C1(g, F) ≅ g*. Using a 1-cochain f, a 2-cocycle δf may be defined by δ f ( G 1 , G 2 ) = f ( [ G 1 , G 2 ] ) . {\displaystyle \delta f(G_{1},G_{2})=f([G_{1},G_{2}]).} The alternating property is immediate and the Jacobi identity for 2-cocycles is (as usual) shown by writing it out and using the definition and properties of the ingredients (here the Jacobi identity on g and the linearity of f). The linear map δ:C1(g, F) → Z2(g, F) is called the coboundary operator (here restricted to C1(g, F)). The second cohomology group Denote the image of C1(g, F) under δ by B2(g, F). The quotient H 2 ( g , F ) = Z 2 ( g , F ) / B 2 ( g , F ) {\displaystyle H^{2}({\mathfrak {g}},\mathbb {F} )=Z^{2}({\mathfrak {g}},\mathbb {F} )/B^{2}({\mathfrak {g}},\mathbb {F} )} is called the second cohomology group of g. Elements of H2(g, F) are equivalence classes of 2-cocycles, and two 2-cocycles φ1 and φ2 are called equivalent cocycles if they differ by a 2-coboundary, i.e. if φ1 = φ2 + δf for some f ∈ C1(g, F). Equivalent 2-cocycles are called cohomologous. The equivalence class of φ ∈ Z2(g, F) is denoted [φ] ∈ H2. These notions generalize in several directions. For this, see the main articles. === Structure constants === Let B be a Hamel basis for g. Then each G ∈ g has a unique expression as G = ∑ α ∈ A c α G α , c α ∈ F , G α ∈ B {\displaystyle G=\sum _{\alpha \in A}c_{\alpha }G_{\alpha },\quad c_{\alpha }\in F,G_{\alpha }\in B} for some indexing set A of suitable size. In this expansion, only finitely many cα are nonzero. In the sequel it is (for simplicity) assumed that the basis is countable, Latin letters are used for the indices, and the indexing set can be taken to be N ∗ {\displaystyle \mathbb {N} ^{*}} = {1, 2, ...}.
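In such a basis the cohomological notions above become concrete. The following Python sketch (an illustrative addition, not from the source) takes the su(2) structure constants Cijk = εijk and a random 1-cochain f, and verifies numerically that the 2-coboundary δf(Gi, Gj) = f([Gi, Gj]) is alternating and satisfies the Jacobi identity for 2-cocycles:

import numpy as np

# Illustrative check (assumed example): on su(2), with [G_i, G_j] = eps_{ijk} G_k,
# the 2-coboundary of any 1-cochain f is an alternating 2-cocycle.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

rng = np.random.default_rng(1)
f = rng.standard_normal(3)            # a 1-cochain, i.e. a linear map g -> F
phi = np.einsum('ijk,k->ij', eps, f)  # phi(G_i, G_j) = f([G_i, G_j])

assert np.allclose(phi, -phi.T)       # the alternating property
# Jacobi identity for 2-cocycles: phi(G_i, [G_j, G_k]) + cyclic = 0
jacobi = (np.einsum('jkl,il->ijk', eps, phi)
          + np.einsum('kil,jl->ijk', eps, phi)
          + np.einsum('ijl,kl->ijk', eps, phi))
assert np.allclose(jacobi, 0.0)

The check succeeds for any choice of structure constants, since it only uses the Jacobi identity on g and the linearity of f.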
One immediately has [ G i , G j ] = C i j k G k {\displaystyle [G_{i},G_{j}]={C_{ij}}^{k}G_{k}} for the basis elements, where the summation symbol has been suppressed; the summation convention applies. The placement of the indices in the structure constants (up or down) is immaterial. The following theorem is useful: Theorem: There is a basis such that the structure constants are antisymmetric in all indices if and only if the Lie algebra is a direct sum of simple compact Lie algebras and u(1) Lie algebras. This is the case if and only if there is a real positive definite metric g on g satisfying the invariance condition g α β C β γ δ = − g γ β C β α δ . {\displaystyle g_{\alpha \beta }{C^{\beta }}_{\gamma \delta }=-g_{\gamma \beta }{C^{\beta }}_{\alpha \delta }.} in any basis. This last condition is necessary on physical grounds for non-Abelian gauge theories in quantum field theory. Thus one can produce an infinite list of possible gauge theories using the Cartan catalog of simple Lie algebras in their compact form (e.g., sl(n, C {\displaystyle \mathbb {C} } ) → su(n), etc.). One such gauge theory is the U(1) × SU(2) × SU(3) gauge theory of the standard model with Lie algebra u(1) ⊕ su(2) ⊕ su(3). === Killing form === The Killing form is a symmetric bilinear form on g defined by K ( G 1 , G 2 ) = t r a c e ( a d G 1 a d G 2 ) . {\displaystyle K(G_{1},G_{2})=\mathrm {trace} (\mathrm {ad} _{G_{1}}\mathrm {ad} _{G_{2}}).} Here adG is viewed as a matrix operating on the vector space g. The key fact needed is that if g is semisimple, then, by Cartan's criterion, K is non-degenerate. In such a case K may be used to identify g and g∗. If λ ∈ g∗, then there is a ν(λ) = Gλ ∈ g such that ⟨ λ , G ⟩ = K ( G λ , G ) ∀ G ∈ g . {\displaystyle \langle \lambda ,G\rangle =K(G_{\lambda },G)\quad \forall G\in {\mathfrak {g}}.} This resembles the Riesz representation theorem and the proof is virtually the same. The Killing form has the property K ( [ G 1 , G 2 ] , G 3 ) = K ( G 1 , [ G 2 , G 3 ] ) , {\displaystyle K([G_{1},G_{2}],G_{3})=K(G_{1},[G_{2},G_{3}]),} which is referred to as associativity. By defining gαβ = K(Gα, Gβ) and expanding the inner brackets in terms of structure constants, one finds that the Killing form satisfies the invariance condition above. === Loop algebra === A loop group is taken as a group of smooth maps from the unit circle S1 into a Lie group G, with the group structure defined by the group structure on G. The Lie algebra of a loop group is then a vector space of mappings from S1 into the Lie algebra g of G. Any subalgebra of such a Lie algebra is referred to as a loop algebra. Attention here is focused on polynomial loop algebras of the form { h : S 1 → g | h ( λ ) = ∑ λ n G n , n ∈ Z , λ = e i θ ∈ S 1 , G n ∈ g } . {\displaystyle \{h:S^{1}\to {\mathfrak {g}}|h(\lambda )=\sum \lambda ^{n}G_{n},n\in \mathbb {Z} ,\lambda =e^{i\theta }\in S^{1},G_{n}\in {\mathfrak {g}}\}.} A little thought confirms that these are loops in g as θ goes from 0 to 2π. The operations are the ones defined pointwise by the operations in g. This algebra is isomorphic with the algebra C [ λ , λ − 1 ] ⊗ g , {\displaystyle \mathbb {C} [\lambda ,\lambda ^{-1}]\otimes {\mathfrak {g}},} where C[λ, λ−1] is the algebra of Laurent polynomials, ∑ λ k G k ↔ ∑ λ k ⊗ G k . {\displaystyle \sum \lambda ^{k}G_{k}\leftrightarrow \sum \lambda ^{k}\otimes G_{k}.} The Lie bracket is [ P ( λ ) ⊗ G 1 , Q ( λ ) ⊗ G 2 ] = P ( λ ) Q ( λ ) ⊗ [ G 1 , G 2 ] .
{\displaystyle [P(\lambda )\otimes G_{1},Q(\lambda )\otimes G_{2}]=P(\lambda )Q(\lambda )\otimes [G_{1},G_{2}].} In this latter view the elements can be considered as polynomials with (constant!) coefficients in g. In terms of a basis and structure constants, [ λ m ⊗ G i , λ n ⊗ G j ] = C i j k λ m + n ⊗ G k . {\displaystyle [\lambda ^{m}\otimes G_{i},\lambda ^{n}\otimes G_{j}]={C_{ij}}^{k}\lambda ^{m+n}\otimes G_{k}.} It is also common to have a different notation, λ m ⊗ G i ≅ λ m G i ↔ T i m ( λ ) ≡ T i m , {\displaystyle \lambda ^{m}\otimes G_{i}\cong \lambda ^{m}G_{i}\leftrightarrow T_{i}^{m}(\lambda )\equiv T_{i}^{m},} where the omission of λ should be kept in mind to avoid confusion; the elements really are functions S1 → g. The Lie bracket is then [ T i m , T j n ] = C i j k T k m + n , {\displaystyle [T_{i}^{m},T_{j}^{n}]={C_{ij}}^{k}T_{k}^{m+n},} which is recognizable as one of the commutation relations in an untwisted affine Kac–Moody algebra, to be introduced later, without the central term. With m = n = 0, a subalgebra isomorphic to g is obtained. It generates (as seen by tracing backwards in the definitions) the set of constant maps from S1 into G, which is obviously isomorphic with G when exp is onto (which is the case when G is compact). If G is compact, then a basis (Gk) for g may be chosen such that the Gk are skew-Hermitian. As a consequence, T i n † = ( λ n G i ) † = − λ − n G i = − T i − n . {\displaystyle T_{i}^{n\dagger }=(\lambda ^{n}G_{i})^{\dagger }=-\lambda ^{-n}G_{i}=-T_{i}^{-n}.} Such a representation is called unitary because the representatives H ( λ ) = e θ n k T k − n ∈ G {\displaystyle H(\lambda )=e^{\theta _{n}^{k}T_{k}^{-n}}\in G} are unitary. Here, the minus on the lower index of T is conventional, the summation convention applies, and the λ is (by the definition) buried in the Ts on the right-hand side. === Current algebra (physics) === Current algebras arise in quantum field theories as a consequence of global gauge symmetry. Conserved currents occur in classical field theories whenever the Lagrangian respects a continuous symmetry. This is the content of Noether's theorem. Most (perhaps all) modern quantum field theories can be formulated in terms of classical Lagrangians (prior to quantization), so Noether's theorem applies in the quantum case as well. Upon quantization, the conserved currents are promoted to position-dependent operators on Hilbert space. These operators are subject to commutation relations, generally forming an infinite-dimensional Lie algebra. A model illustrating this is presented below. To enhance the flavor of physics, factors of i will appear here and there, in contrast to the mathematical conventions. Consider a column vector Φ of scalar fields (Φ1, Φ2, ..., ΦN). Let the Lagrangian density be L = ∂ μ ϕ † ∂ μ ϕ − m 2 ϕ † ϕ . {\displaystyle {\mathcal {L}}=\partial _{\mu }\phi ^{\dagger }\partial ^{\mu }\phi -m^{2}\phi ^{\dagger }\phi .} This Lagrangian is invariant under the transformation ϕ ↦ e − i ∑ a = 1 r α a F a ϕ , {\displaystyle \phi \mapsto e^{-i\sum _{a=1}^{r}\alpha ^{a}F_{a}}\phi ,} where {F1, F2, ..., Fr} are generators of either U(N) or a closed subgroup thereof, satisfying [ F a , F b ] = i C a b c F c . {\displaystyle [F_{a},F_{b}]=i{C_{ab}}^{c}F_{c}.} Noether's theorem asserts the existence of r conserved currents, J a μ = − π μ i F a ϕ , π k μ = ∂ L ∂ ( ∂ μ ϕ k ) , {\displaystyle J_{a}^{\mu }=-\pi ^{\mu }iF_{a}\phi ,\quad \pi ^{k\mu }={\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{k})}},} where πk0 ≡ πk is the momentum canonically conjugate to Φk.
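For the simplest non-abelian case the postulated generator relations can be verified directly. A minimal Python sketch (an illustrative addition, not from the source) uses Fa = σa/2, the halved Pauli matrices generating su(2) ⊂ u(2), for which Cabc = εabc:

import numpy as np

# Illustrative check (assumed example): the physics-convention relations
# [F_a, F_b] = i C_{ab}^c F_c hold for F_a = sigma_a / 2 with C_{ab}^c = eps_{abc}.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
F = [s / 2 for s in sigma]

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[b, a, c] = 1.0, -1.0

for a in range(3):
    for b in range(3):
        commutator = F[a] @ F[b] - F[b] @ F[a]
        expected = 1j * sum(eps[a, b, c] * F[c] for c in range(3))
        assert np.allclose(commutator, expected)

Note the factor of i, in keeping with the physics convention of Hermitian generators.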
The reason these currents are said to be conserved is that ∂ μ J a μ = 0 , {\displaystyle \partial _{\mu }J_{a}^{\mu }=0,} and consequently Q a ( t ) = ∫ J a 0 d 3 x = c o n s t ≡ Q a , {\displaystyle Q_{a}(t)=\int J_{a}^{0}d^{3}x=\mathrm {const} \equiv Q_{a},} the charge associated to the charge density Ja0, is constant in time. This (so far classical) theory is quantized by promoting the fields and their conjugates to operators on Hilbert space and by postulating (bosonic quantization) the commutation relations [ ϕ k ( t , x ) , π l ( t , y ) ] = i δ ( x − y ) δ k l , [ ϕ k ( t , x ) , ϕ l ( t , y ) ] = [ π k ( t , x ) , π l ( t , y ) ] = 0. {\displaystyle {\begin{aligned}{}[\phi _{k}(t,x),\pi ^{l}(t,y)]&=i\delta (x-y)\delta _{k}^{l},\\{}[\phi _{k}(t,x),\phi _{l}(t,y)]&=[\pi ^{k}(t,x),\pi ^{l}(t,y)]=0.\end{aligned}}} The currents accordingly become operators, J a μ ( t , x ) = − π μ ( t , x ) i F a ϕ ( t , x ) . {\displaystyle J_{a}^{\mu }(t,x)=-\pi ^{\mu }(t,x)iF_{a}\phi (t,x).} They satisfy, using the above postulated relations, the definitions and integration over space, the commutation relations [ J a 0 ( t , x ) , J b 0 ( t , y ) ] = i δ ( x − y ) C a b c J c 0 ( t , x ) [ Q a , Q b ] = i C a b c Q c [ Q a , J b μ ( t , x ) ] = i C a b c J c μ ( t , x ) , {\displaystyle {\begin{aligned}{}[J_{a}^{0}(t,\mathbf {x} ),J_{b}^{0}(t,\mathbf {y} )]&=i\delta (\mathbf {x} -\mathbf {y} ){C_{ab}}^{c}J_{c}^{0}(t,\mathbf {x} )\\{}[Q_{a},Q_{b}]&=i{C_{ab}}^{c}Q_{c}\\{}[Q_{a},J_{b}^{\mu }(t,\mathbf {x} )]&=i{C_{ab}}^{c}J_{c}^{\mu }(t,\mathbf {x} ),\end{aligned}}} where the speed of light and the reduced Planck constant have been set to unity. The last commutation relation does not follow from the postulated commutation relations (these are fixed only for πk0, not for πk1, πk2, πk3), except for μ = 0. For μ = 1, 2, 3, the Lorentz transformation behavior is used to deduce the conclusion. The next commutator to consider is [ J a 0 ( t , x ) , J b i ( t , y ) ] = i C a b c J c i ( t , x ) δ ( x − y ) + S a b i j ∂ j δ ( x − y ) + . . . . {\displaystyle [J_{a}^{0}(t,\mathbf {x} ),J_{b}^{i}(t,\mathbf {y} )]=i{C_{ab}}^{c}J_{c}^{i}(t,\mathbf {x} )\delta (\mathbf {x} -\mathbf {y} )+S_{ab}^{ij}\partial _{j}\delta (\mathbf {x} -\mathbf {y} )+....} The presence of the delta functions and their derivatives is explained by the requirement of microcausality, which implies that the commutator vanishes when x ≠ y. Thus the commutator must be a distribution supported at x = y. The first term is fixed due to the requirement that the equation should, when integrated over x, reduce to the last equation before it. The following terms are the Schwinger terms. They integrate to zero, but it can be shown quite generally that they must be nonzero. === Affine Kac–Moody algebra === Let g be an N-dimensional complex simple Lie algebra with a dedicated suitably normalized basis such that the structure constants are antisymmetric in all indices, with commutation relations [ G i , G j ] = C i j k G k , 1 ≤ i , j ≤ N . {\displaystyle [G_{i},G_{j}]={C_{ij}}^{k}G_{k},\quad 1\leq i,j\leq N.} An untwisted affine Kac–Moody algebra g ¯ {\displaystyle {\overline {\mathfrak {g}}}} is obtained by copying the basis for each n ∈ Z {\displaystyle \mathbb {Z} } (regarding the copies as distinct), setting g ¯ = F C ⊕ F D ⊕ ⨁ 1 ≤ i ≤ N , m ∈ Z F G m i {\displaystyle {\overline {\mathfrak {g}}}=FC\oplus FD\oplus \bigoplus _{1\leq i\leq N,m\in \mathbb {Z} }FG_{m}^{i}} as a vector space and assigning the commutation relations [ G i m , G j n ] = C i j k G k m + n + m δ i j δ m + n , 0 C , [ C , G i m ] = 0 , 1 ≤ i , j ≤ N , m , n ∈ Z [ D , G i m ] = m G i m [ D , C ] = 0.
{\displaystyle {\begin{aligned}{}[G_{i}^{m},G_{j}^{n}]&={C_{ij}}^{k}G_{k}^{m+n}+m\delta _{ij}\delta _{m+n,0}C,\\{}[C,G_{i}^{m}]&=0,\quad 1\leq i,j\leq N,\quad m,n\in \mathbb {Z} \\{}[D,G_{i}^{m}]&=mG_{i}^{m}\\{}[D,C]&=0.\end{aligned}}} If C = D = 0, then the subalgebra spanned by the Gmi is obviously identical to the polynomial loop algebra above. === Witt algebra === The Witt algebra, named after Ernst Witt, is the complexification of the Lie algebra VectS1 of smooth vector fields on the circle S1. In coordinates, such vector fields may be written X = f ( φ ) d d φ , {\displaystyle X=f(\varphi ){\frac {d}{d\varphi }},} and the Lie bracket is the Lie bracket of vector fields, on S1 simply given by [ X , Y ] = [ f d d φ , g d d φ ] = ( f d g d φ − g d f d φ ) d d φ . {\displaystyle [X,Y]=\left[f{\frac {d}{d\varphi }},g{\frac {d}{d\varphi }}\right]=\left(f{\frac {dg}{d\varphi }}-g{\frac {df}{d\varphi }}\right){\frac {d}{d\varphi }}.} The algebra is denoted W = VectS1 + iVectS1. A basis for W is given by the set { d n , n ∈ Z } = { i e i n φ d d φ = − z n + 1 d d z | n ∈ Z } . {\displaystyle \{d_{n},n\in \mathbb {Z} \}=\left\{\left.ie^{in\varphi }{\frac {d}{d\varphi }}=-z^{n+1}{\frac {d}{dz}}\right|n\in \mathbb {Z} \right\}.} This basis satisfies [ d m , d n ] = ( m − n ) d m + n . {\displaystyle [d_{m},d_{n}]=(m-n)d_{m+n}.} This Lie algebra has a useful central extension, the Virasoro algebra. It has 3-dimensional subalgebras isomorphic with su(1, 1) and sl(2, R {\displaystyle \mathbb {R} } ). For each n ≠ 0, the set {d0, d−n, dn} spans a subalgebra isomorphic to su(1, 1) ≅ sl(2, R {\displaystyle \mathbb {R} } ). === Projective representation === If M is a matrix Lie group, then elements X of its Lie algebra m can be given by X = d d t ( g ( t ) ) | t = 0 , {\displaystyle X={\frac {d}{dt}}\left.(g(t))\right|_{t=0},} where g is a differentiable path in M that goes through the identity element at t = 0. Commutators of elements of the Lie algebra can be computed as [ X 1 , X 2 ] = d d t | t = 0 d d s | s = 0 e t X 1 e s X 2 e − t X 1 . {\displaystyle [X_{1},X_{2}]=\left.{\frac {d}{dt}}\right|_{t=0}\left.{\frac {d}{ds}}\right|_{s=0}e^{tX_{1}}e^{sX_{2}}e^{-tX_{1}}.} Likewise, given a group representation U(M), its Lie algebra u(m) is computed by [ Y 1 , Y 2 ] = d d t | t = 0 d d s | s = 0 U ( e t X 1 ) U ( e s X 2 ) U ( e − t X 1 ) = d d t | t = 0 d d s | s = 0 U ( e t X 1 e s X 2 e − t X 1 ) , {\displaystyle {\begin{aligned}[][Y_{1},Y_{2}]&=\left.{\frac {d}{dt}}\right|_{t=0}\left.{\frac {d}{ds}}\right|_{s=0}U(e^{tX_{1}})U(e^{sX_{2}})U(e^{-tX_{1}})\\&=\left.{\frac {d}{dt}}\right|_{t=0}\left.{\frac {d}{ds}}\right|_{s=0}U(e^{tX_{1}}e^{sX_{2}}e^{-tX_{1}})\end{aligned}},} where Y 1 = d d t | t = 0 U ( e t X 1 ) {\displaystyle Y_{1}=\left.{\frac {d}{dt}}\right|_{t=0}U(e^{tX_{1}})} and Y 2 = d d s | s = 0 U ( e s X 2 ) {\displaystyle Y_{2}=\left.{\frac {d}{ds}}\right|_{s=0}U(e^{sX_{2}})} . Then there is a Lie algebra isomorphism between m and u(m) sending bases to bases, so that u is a faithful representation of m. If however U(M) is an admissible set of representatives of a projective unitary representation, i.e. a unitary representation up to a phase factor, then the Lie algebra, as computed from the group representation, is not isomorphic to m. For U, the multiplication rule reads U ( g 1 ) U ( g 2 ) = ω ( g 1 , g 2 ) U ( g 1 g 2 ) = e i ξ ( g 1 , g 2 ) U ( g 1 g 2 ) .
{\displaystyle U(g_{1})U(g_{2})=\omega (g_{1},g_{2})U(g_{1}g_{2})=e^{i\xi (g_{1},g_{2})}U(g_{1}g_{2}).} The function ω,often required to be smooth, satisfies ω ( g , e ) = ω ( e , g ) = 1 , ω ( g 1 , g 2 g 3 ) ω ( g 2 , g 3 ) = ω ( g 1 , g 2 ) ω ( g 1 g 2 , g 3 ) ω ( g , g − 1 ) = ω ( g − 1 , g ) . {\displaystyle {\begin{aligned}\omega (g,e)&=\omega (e,g)=1,\\\omega (g_{1},g_{2}g_{3})\omega (g_{2},g_{3})&=\omega (g_{1},g_{2})\omega (g_{1}g_{2},g_{3})\\\omega (g,g^{-1})&=\omega (g^{-1},g).\end{aligned}}} It is called a 2-cocycle on M. From the above equalities, ( U ( g ) ) − 1 = 1 ω ( g , g − 1 ) U ( g − 1 ) {\displaystyle (U(g))^{-1}={\frac {1}{\omega (g,g^{-1})}}U(g^{-1})} , so one has [ Y 1 , Y 2 ] = d d t | t = 0 d d s | s = 0 U ( e t X 1 ) U ( e s X 2 ) ( U ( e t X 1 ) ) − 1 = d d t | t = 0 d d s | s = 0 1 ω ( e t X 1 , e − t X 1 ) U ( e t X 1 ) U ( e s X 2 ) U ( e − t X 1 ) = d d t | t = 0 d d s | s = 0 ω ( e t X 1 , e s X 2 ) ω ( e t X 1 e s X 2 , e − t X 1 ) ω ( e t X 1 , e − t X 1 ) U ( e t X 1 e s X 2 e − t X 1 ) ≡ d d t | t = 0 d d s | s = 0 Ω ( e t X 1 , e s X 2 ) U ( e t X 1 e s X 2 e − t X 1 ) = d d t | t = 0 d d s | s = 0 U ( e t X 1 e s X 2 e − t X 1 ) + d d t | t = 0 d d s | s = 0 Ω ( e t X 1 , e s X 2 ) I , {\displaystyle {\begin{aligned}[][Y_{1},Y_{2}]&=\left.{\frac {d}{dt}}\right|_{t=0}\left.{\frac {d}{ds}}\right|_{s=0}U(e^{tX_{1}})U(e^{sX_{2}})(U(e^{tX_{1}}))^{-1}\\&=\left.{\frac {d}{dt}}\right|_{t=0}\left.{\frac {d}{ds}}\right|_{s=0}{\frac {1}{\omega (e^{tX_{1}},e^{-tX_{1}})}}U(e^{tX_{1}})U(e^{sX_{2}})U(e^{-tX_{1}})\\&=\left.{\frac {d}{dt}}\right|_{t=0}\left.{\frac {d}{ds}}\right|_{s=0}{\frac {\omega (e^{tX_{1}},e^{sX_{2}})\omega (e^{tX_{1}}e^{sX_{2}},e^{-tX_{1}})}{\omega (e^{tX_{1}},e^{-tX_{1}})}}U(e^{tX_{1}}e^{sX_{2}}e^{-tX_{1}})\\&\equiv \left.{\frac {d}{dt}}\right|_{t=0}\left.{\frac {d}{ds}}\right|_{s=0}\Omega (e^{tX_{1}},e^{sX_{2}})U(e^{tX_{1}}e^{sX_{2}}e^{-tX_{1}})\\&=\left.{\frac {d}{dt}}\right|_{t=0}\left.{\frac {d}{ds}}\right|_{s=0}U(e^{tX_{1}}e^{sX_{2}}e^{-tX_{1}})+\left.{\frac {d}{dt}}\right|_{t=0}\left.{\frac {d}{ds}}\right|_{s=0}\Omega (e^{tX_{1}},e^{sX_{2}})I,\end{aligned}}} because both Ω and U evaluate to the identity at t = 0. For an explanation of the phase factors ξ, see Wigner's theorem. The commutation relations in m for a basis, [ X i , X j ] = C i j k X k {\displaystyle [X_{i},X_{j}]={C_{ij}^{k}}X_{k}} become in u [ Y i , Y j ] = C i j k Y k + D i j I , {\displaystyle [Y_{i},Y_{j}]={C_{ij}^{k}}Y_{k}+D_{ij}I,} so in order for u to be closed under the bracket (and hence have a chance of actually being a Lie algebra) a central charge I must be included. === Relativistic classical string theory === A classical relativistic string traces out a world sheet in spacetime, just like a point particle traces out a world line. This world sheet can locally be parametrized using two parameters σ and τ. Points xμ in spacetime can, in the range of the parametrization, be written xμ = xμ(σ, τ). One uses a capital X to denote points in spacetime actually being on the world sheet of the string. Thus the string parametrization is given by (σ, τ) ↦(X0(σ, τ), X1(σ, τ), X2(σ, τ), X3(σ, τ)). The inverse of the parametrization provides a local coordinate system on the world sheet in the sense of manifolds. 
The equations of motion of a classical relativistic string derived in the Lagrangian formalism from the Nambu–Goto action are ∂ P μ τ ∂ τ + ∂ P μ σ ∂ σ = 0 , P μ τ = − T 0 c ( X ˙ ⋅ X ′ ) X μ ′ − ( X ′ ) 2 X ˙ μ ( X ˙ ⋅ X ′ ) 2 − ( X ˙ ) 2 ( X ′ ) 2 , P μ σ = − T 0 c ( X ˙ ⋅ X ′ ) X μ ′ − ( X ˙ ) 2 X μ ′ ( X ˙ ⋅ X ′ ) 2 − ( X ˙ ) 2 ( X ′ ) 2 . {\displaystyle {\frac {\partial {\mathcal {P}}_{\mu }^{\tau }}{\partial \tau }}+{\frac {\partial {\mathcal {P}}_{\mu }^{\sigma }}{\partial \sigma }}=0,\quad {\mathcal {P}}_{\mu }^{\tau }=-{\frac {T_{0}}{c}}{\frac {({\dot {X}}\cdot X')X'_{\mu }-(X')^{2}{\dot {X}}_{\mu }}{\sqrt {({\dot {X}}\cdot X')^{2}-({\dot {X}})^{2}(X')^{2}}}},\quad {\mathcal {P}}_{\mu }^{\sigma }=-{\frac {T_{0}}{c}}{\frac {({\dot {X}}\cdot X')X'_{\mu }-({\dot {X}})^{2}X'_{\mu }}{\sqrt {({\dot {X}}\cdot X')^{2}-({\dot {X}})^{2}(X')^{2}}}}.} A dot over a quantity denotes differentiation with respect to τ and a prime differentiation with respect to σ. A dot between quantities denotes the relativistic inner product. These rather formidable equations simplify considerably with a clever choice of parametrization called the light cone gauge. In this gauge, the equations of motion become X ¨ μ − X μ ″ = 0 , {\displaystyle {\ddot {X}}^{\mu }-{X^{\mu }}''=0,} the ordinary wave equation. The price to be paid is that the light cone gauge imposes constraints, X ˙ μ ⋅ X μ ′ = 0 , ( X ˙ ) 2 + ( X ′ ) 2 = 0 , {\displaystyle {\dot {X}}^{\mu }\cdot {X^{\mu }}'=0,\quad ({\dot {X}})^{2}+(X')^{2}=0,} so that one cannot simply take arbitrary solutions of the wave equation to represent the strings. The strings considered here are open strings, i.e. they don't close up on themselves. This means that the Neumann boundary conditions have to be imposed on the endpoints. With this, the general solution of the wave equation (excluding constraints) is given by X μ ( σ , τ ) = x 0 μ + 2 α ′ p 0 μ τ − i 2 α ′ ∑ n = 1 ( a n μ ∗ e i n τ − a n μ e − i n τ ) cos ⁡ n σ n , {\displaystyle X^{\mu }(\sigma ,\tau )=x_{0}^{\mu }+2\alpha 'p_{0}^{\mu }\tau -i{\sqrt {2\alpha '}}\sum _{n=1}\left(a_{n}^{\mu *}e^{in\tau }-a_{n}^{\mu }e^{-in\tau }\right){\frac {\cos n\sigma }{\sqrt {n}}},} where α' is the slope parameter of the string (related to the string tension). The quantities x0 and p0 are (roughly) string position from the initial condition and string momentum. If all the αμn are zero, the solution represents the motion of a classical point particle. This is rewritten, first defining α 0 μ = 2 α ′ a μ , α n μ = a n μ n , α − n μ = a n μ ∗ n , {\displaystyle \alpha _{0}^{\mu }={\sqrt {2\alpha '}}a_{\mu },\quad \alpha _{n}^{\mu }=a_{n}^{\mu }{\sqrt {n}},\quad \alpha _{-n}^{\mu }=a_{n}^{\mu *}{\sqrt {n}},} and then writing X μ ( σ , τ ) = x 0 μ + 2 α ′ α 0 μ τ + i 2 α ′ ∑ n ≠ 0 1 n α n μ e − i n τ cos ⁡ n σ . {\displaystyle X^{\mu }(\sigma ,\tau )=x_{0}^{\mu }+{\sqrt {2\alpha '}}\alpha _{0}^{\mu }\tau +i{\sqrt {2\alpha '}}\sum _{n\neq 0}{\frac {1}{n}}\alpha _{n}^{\mu }e^{-in\tau }\cos n\sigma .} In order to satisfy the constraints, one passes to light cone coordinates. For I = 2, 3, ...d, where d is the number of space dimensions, set X I ( σ , τ ) = x 0 I + 2 α ′ α 0 I τ + i 2 α ′ ∑ n ≠ 0 1 n α n I e − i n τ cos ⁡ n σ , X + ( σ , τ ) = 2 α ′ α 0 + τ , X − ( σ , τ ) = x 0 − + 2 α ′ α 0 − τ + i 2 α ′ ∑ n ≠ 0 1 n α n − e − i n τ cos ⁡ n σ . 
{\displaystyle {\begin{aligned}X^{I}(\sigma ,\tau )&=x_{0}^{I}+{\sqrt {2\alpha '}}\alpha _{0}^{I}\tau +i{\sqrt {2\alpha '}}\sum _{n\neq 0}{\frac {1}{n}}\alpha _{n}^{I}e^{-in\tau }\cos n\sigma ,\\X^{+}(\sigma ,\tau )&={\sqrt {2\alpha '}}\alpha _{0}^{+}\tau ,\\X^{-}(\sigma ,\tau )&=x_{0}^{-}+{\sqrt {2\alpha '}}\alpha _{0}^{-}\tau +i{\sqrt {2\alpha '}}\sum _{n\neq 0}{\frac {1}{n}}\alpha _{n}^{-}e^{-in\tau }\cos n\sigma .\end{aligned}}} Not all αnμ, n ∈ Z {\displaystyle \mathbb {Z} } , μ ∈ {+, −, 2, 3, ..., d} are independent. Some are zero (hence missing in the equations above), and the "minus coefficients" satisfy 2 α ′ α n − = 1 2 p + ∑ p ∈ Z α n − p I α p I . {\displaystyle {\sqrt {2\alpha '}}\alpha _{n}^{-}={\frac {1}{2p^{+}}}\sum _{p\in \mathbb {Z} }\alpha _{n-p}^{I}\alpha _{p}^{I}.} The quantity on the left is given a name, 2 α ′ α n − ≡ 1 p + L n , L n = 1 2 ∑ p ∈ Z α n − p I α p I , {\displaystyle {\sqrt {2\alpha '}}\alpha _{n}^{-}\equiv {\frac {1}{p^{+}}}L_{n},\quad L_{n}={\frac {1}{2}}\sum _{p\in \mathbb {Z} }\alpha _{n-p}^{I}\alpha _{p}^{I},} the transverse Virasoro mode. When the theory is quantized, the alphas, and hence the Ln become operators. == See also == Group cohomology Group contraction (Inönu–Wigner contraction) Group extension Lie algebra cohomology == Remarks == == Notes == == References == === Books === Bäuerle, G.G.A; de Kerf, E.A. (1990). A. van Groesen; E.M. de Jager (eds.). Lie algebras. Part 1. Finite and infinite dimensional Lie algebras and their application in physics. Studies in mathematical physics. Vol. 1. North-Holland. ISBN 978-0-444-88776-4. MR 1085715. Bäuerle, G.G.A; de Kerf, E.A.; ten Kroode, A. P. E. (1997). A. van Groesen; E.M. de Jager (eds.). Lie algebras. Part 2. Finite and infinite dimensional Lie algebras and their application in physics. Studies in mathematical physics. Vol. 7. North-Holland. ISBN 978-0-444-82836-1. MR 1489232 – via ScienceDirect. Goddard, P.; Olive, D., eds. (1988). Kac–Moody and Virasoro algebras, A reprint Volume for Physicists. Advanced Series in Mathematical Physics. Vol. 3. Singapore: World Scientific Publishing. ISBN 978-9971-50-419-9. Goldin, G.A. (2006). Françoise, J-P.; Naber, G. L.; Tsun, T. S. (eds.). Encyclopedia of Mathematical Physics. Current Algebra. ISBN 978-0-12-512666-3. Green, M.B.; Schwarz, J.H.; Witten, E. (1987). Superstring theory. Vol. l. Cambridge University Press. ISBN 9781107029118. Greiner, W.; Reinhardt, J. (1996). Field Quantization. Springer Publishing. ISBN 978-3-540-59179-5. Humphreys, J. E. (1972). Introduction to Lie Algebras and Representation Theory (3rd ed.). Berlin·Heidelberg·New York: Springer-Verlag. ISBN 978-3-540-90053-5. Kac, V.G. (1990). Infinite-dimensional Lie algebras (3rd ed.). Cambridge University Press. ISBN 978-0-521-37215-2. Knapp, A. (2002). bass, H.; Oesterlé, J.; Weinstein, A. (eds.). Lie groups beyond an introduction. Progress in mathematics. Vol. 140 (2nd ed.). Boston·Basel·Berlin: Birkhäuser. ISBN 978-0-8176-4259-4. Rossmann, Wulf (2002). Lie Groups - An Introduction Through Linear Groups. Oxford Graduate Texts in Mathematics. Oxford Science Publications. ISBN 0-19-859683-9. Schottenloher, M. (2008) [1997]. A Mathematical Introduction to Conformal Field Theory (2nd ed.). Berlin, Heidelberg: Springer-Verlag. ISBN 978-3-540-68625-5. Weinberg, S. (2002). The Quantum Theory of Fields. Vol. I. Cambridge University Press. ISBN 978-0-521-55001-7. Weinberg, S. (1996). The Quantum Theory of Fields. Vol. II. Cambridge University Press. ISBN 978-0-521-55002-4. Zwiebach, B. 
(2004). A First Course in String Theory. Cambridge University Press. ISBN 0-521-83143-1. MR 2069234. === Journals === Bargmann, V. (1954). "On unitary ray representations of continuous groups". Ann. of Math. 59 (1): 1–46. doi:10.2307/1969831. JSTOR 1969831. Dolan, L. (1995). "The Beacon of Kac–Moody Symmetry for Physics". Notices of the AMS. 42 (12): 1489–1495. arXiv:hep-th/9601117. Bibcode:1996hep.th....1117D. ISSN 0002-9920. Kac, V. G. (1967r). "[Simple graded Lie algebras of finite growth]". Funkt. Analis I Ego Prilozh (in Russian). 1 (4): 82–83. Kac, V. G. (1967e). "Simple graded Lie algebras of finite growth". Funct. Anal. Appl. 1: 328–329. (English translation) Goddard, P.; Olive, D. (1986). "Kac–Moody and Virasoro algebras in relation to quantum physics". Int. J. Mod. Phys. A. 1 (2): 303–414. Bibcode:1986IJMPA...1..303G. doi:10.1142/S0217751X86000149. This can be found in Kac–Moody and Virasoro algebras, A reprint Volume for Physicists Moody, R. V. (1967). "Lie algebras associated with generalized Cartan matrices". Bull. Amer. Math. Soc. 73 (2): 217–221. doi:10.1090/S0002-9904-1967-11688-4. MR 0207783. Zbl 0154.27303. (open access) Schreier, O. (1926). "Uber die Erweiterung von Gruppen I" [On the theory of group extensions I]. Monatshefte für Mathematik (in German). 34 (1): 165–180. doi:10.1007/BF01694897. hdl:10338.dmlcz/127714. S2CID 124731047. Schreier, O. (1925). "Uber die Erweiterung von Gruppen II" [On the theory of group extensions II]. Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg (in German). 4 (1): 321–346. doi:10.1007/BF02950735. hdl:10338.dmlcz/140420. S2CID 122947636. Virasoro, M. A. (1970). "Subsidiary conditions and ghosts in dual-resonance models". Phys. Rev. D. 1 (10): 2933–2936. Bibcode:1970PhRvD...1.2933V. doi:10.1103/PhysRevD.1.2933. Tuynman, G.M.; Wiegerinck, W.A.J.J. (1987). "Central extensions and physics". Journal of Geometry and Physics. 4 (2): 207–258. Bibcode:1987JGP.....4..207T. doi:10.1016/0393-0440(87)90027-1. === Web === MacTutor (2015). "Schreier biography". MacTutor History of Mathematics. Retrieved 2015-03-08.
Wikipedia/Lie_algebra_extension
In mathematics, in particular abstract algebra and topology, a differential graded Lie algebra (or dg Lie algebra, or dgla) is a graded vector space with added Lie algebra and chain complex structures that are compatible. Such objects have applications in deformation theory and rational homotopy theory. == Definition == A differential graded Lie algebra is a graded vector space L = ⨁ L i {\displaystyle L=\bigoplus L_{i}} over a field of characteristic zero together with a bilinear map [ ⋅ , ⋅ ] : L i ⊗ L j → L i + j {\displaystyle [\cdot ,\cdot ]\colon L_{i}\otimes L_{j}\to L_{i+j}} and a differential d : L i → L i − 1 {\displaystyle d:L_{i}\to L_{i-1}} satisfying [ x , y ] = ( − 1 ) | x | | y | + 1 [ y , x ] , {\displaystyle [x,y]=(-1)^{|x||y|+1}[y,x],} the graded Jacobi identity: ( − 1 ) | x | | z | [ x , [ y , z ] ] + ( − 1 ) | y | | x | [ y , [ z , x ] ] + ( − 1 ) | z | | y | [ z , [ x , y ] ] = 0 , {\displaystyle (-1)^{|x||z|}[x,[y,z]]+(-1)^{|y||x|}[y,[z,x]]+(-1)^{|z||y|}[z,[x,y]]=0,} and the graded Leibniz rule: d [ x , y ] = [ d x , y ] + ( − 1 ) | x | [ x , d y ] {\displaystyle d[x,y]=[dx,y]+(-1)^{|x|}[x,dy]} for any homogeneous elements x, y and z in L. Notice here that the differential lowers the degree and so this differential graded Lie algebra is considered to be homologically graded. If instead the differential raised degree the differential graded Lie algebra is said to be cohomologically graded (usually to reinforce this point the grading is written in superscript: L i {\displaystyle L^{i}} ). The choice of cohomological grading usually depends upon personal preference or the situation as they are equivalent: a homologically graded space can be made into a cohomological one via setting L i = L − i {\displaystyle L^{i}=L_{-i}} . Alternative equivalent definitions of a differential graded Lie algebra include: a Lie algebra object internal to the category of chain complexes; a strict L ∞ {\displaystyle L_{\infty }} -algebra. A morphism of differential graded Lie algebras is a graded linear map f : L → L ′ {\displaystyle f:L\to L^{\prime }} that commutes with the bracket and the differential, i.e., f [ x , y ] L = [ f ( x ) , f ( y ) ] L ′ {\displaystyle f[x,y]_{L}=[f(x),f(y)]_{L^{\prime }}} and f ( d L x ) = d L ′ f ( x ) {\displaystyle f(d_{L}x)=d_{L^{\prime }}f(x)} . Differential graded Lie algebras and their morphisms define a category. == Products and coproducts == The product of two differential graded Lie algebras, L × L ′ {\displaystyle L\times L^{\prime }} , is defined as follows: take the direct sum of the two graded vector spaces L ⊕ L ′ {\displaystyle L\oplus L^{\prime }} , and equip it with the bracket [ ( x , x ′ ) , ( y , y ′ ) ] = ( [ x , y ] , [ x ′ , y ′ ] ) {\displaystyle [(x,x^{\prime }),(y,y^{\prime })]=([x,y],[x^{\prime },y^{\prime }])} and differential D ( x , x ′ ) = ( d x , d ′ x ′ ) {\displaystyle D(x,x^{\prime })=(dx,d^{\prime }x^{\prime })} . The coproduct of two differential graded Lie algebras, L ∗ L ′ {\displaystyle L*L^{\prime }} , is often called the free product. It is defined as the free graded Lie algebra on the two underlying vector spaces with the unique differential extending the two original ones modulo the relations present in either of the two original Lie algebras. == Connection to deformation theory == The main application is to the deformation theory over fields of characteristic zero (in particular over the complex numbers.) The idea goes back to Daniel Quillen's work on rational homotopy theory. 
One way to formulate this thesis (due to Vladimir Drinfeld, Boris Feigin, Pierre Deligne, Maxim Kontsevich, and others) might be: Any reasonable formal deformation problem in characteristic zero can be described by Maurer–Cartan elements of an appropriate differential graded Lie algebra. A Maurer–Cartan element is a degree −1 element, x ∈ L − 1 {\displaystyle x\in L_{-1}} , that is a solution to the Maurer–Cartan equation: d x + 1 2 [ x , x ] = 0. {\displaystyle dx+{\frac {1}{2}}[x,x]=0.} == See also == Differential graded algebra (DGA) Simplicial Lie algebra Homotopy Lie algebra == References == Quillen, Daniel (1969), "Rational homotopy theory", Annals of Mathematics, 90 (2): 205–295, doi:10.2307/1970725, JSTOR 1970725, MR 0258031 == Further reading == Jacob Lurie, Formal moduli problems, section 2.1 == External links == differential graded Lie algebra at the nLab model structure on dg Lie algebras at the nLab
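As a toy illustration of the Maurer–Cartan equation (a sketch under assumptions, not drawn from the references above): consider the two-dimensional homologically graded dgla with L−1 spanned by x, L−2 spanned by y, dx = αy and [x, x] = βy, all other brackets and differentials zero; since |x| is odd, a nonzero [x, x] is compatible with graded antisymmetry, and the axioms are easily checked. For elements tx of degree −1 the Maurer–Cartan equation becomes a scalar quadratic, solved here with sympy:

import sympy as sp

t = sp.symbols('t')
alpha, beta = sp.symbols('alpha beta', nonzero=True)
# d(t*x) + (1/2)[t*x, t*x] = (alpha*t + (beta/2)*t**2) * y, so the
# Maurer-Cartan elements among the multiples of x are roots of a quadratic:
sols = sp.solve(sp.Eq(alpha * t + sp.Rational(1, 2) * beta * t ** 2, 0), t)
print(sols)   # the trivial solution 0 and the deformation -2*alpha/beta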
Wikipedia/Differential_graded_Lie_algebra
In mathematics, a Lie algebra has been generalized in several ways. == Graded Lie algebra and Lie superalgebra == A graded Lie algebra is a Lie algebra with grading. When the grading is Z / 2 {\displaystyle \mathbb {Z} /2} , it is also known as a Lie superalgebra. == Lie-isotopic algebra == A Lie-isotopic algebra is a generalization of Lie algebras proposed by physicist R. M. Santilli in 1978. === Definition === Recall that a finite-dimensional Lie algebra L {\displaystyle L} with generators X 1 , X 2 , . . . , X n {\displaystyle X_{1},X_{2},...,X_{n}} and commutation rules [ X i X j ] = X i X j − X j X i = C i j k X k , {\displaystyle [X_{i}X_{j}]=X_{i}X_{j}-X_{j}X_{i}=C_{ij}^{k}X_{k},} can be defined (particularly in physics) as the totally antisymmetric algebra A ( L ) − {\displaystyle A(L)^{-}} attached to the universal enveloping associative algebra A ( L ) = { X 1 , X 2 , . . . , X n ; X i X j , i , j = 1 , . . . , n ; 1 } {\displaystyle A(L)=\{X_{1},X_{2},...,X_{n};X_{i}X_{j},i,j=1,...,n;1\}} equipped with the associative product X i × X j {\displaystyle X_{i}\times X_{j}} over a numeric field F {\displaystyle F} with multiplicative unit 1 {\displaystyle 1} . Consider now the axiom-preserving lifting of A ( L ) {\displaystyle A(L)} into the form A ∗ ( L ∗ ) = { X 1 , X 2 , . . . , X n ; X i × X j , i , j = 1 , . . . , n ; 1 ∗ } {\displaystyle A^{*}(L^{*})=\{X_{1},X_{2},...,X_{n};X_{i}\times X_{j},i,j=1,...,n;1^{*}\}} , called the universal enveloping isoassociative algebra, with isoproduct X i × X j = X i T ∗ X j , {\displaystyle X_{i}\times X_{j}=X_{i}T^{*}X_{j},} verifying the isoassociative law ( X i × X j ) × X k = X i × ( X j × X k ) {\displaystyle (X_{i}\times X_{j})\times X_{k}=X_{i}\times (X_{j}\times X_{k})} and multiplicative isounit 1 ∗ = 1 / T ∗ , 1 ∗ × X k = X k × 1 ∗ = X k ∀ X k ∈ A ∗ ( L ∗ ) {\displaystyle 1^{*}=1/T^{*},1^{*}\times X_{k}=X_{k}\times 1^{*}=X_{k}\ \forall X_{k}\in A^{*}(L^{*})} where T ∗ {\displaystyle T^{*}} , called the isotopic element, is not necessarily an element of A ( L ) {\displaystyle A(L)} and is restricted only by the condition of being positive-definite, T ∗ > 0 {\displaystyle T^{*}>0} , but may otherwise have any desired dependence on local variables; the products X i T ∗ {\displaystyle X_{i}T^{*}} , T ∗ X j {\displaystyle T^{*}X_{j}} , etc. are conventional associative products in A ( L ) {\displaystyle A(L)} . Then a Lie-isotopic algebra L ∗ {\displaystyle L^{*}} can be defined as the totally antisymmetric algebra attached to the enveloping isoassociative algebra, L ∗ = A ∗ ( L ∗ ) − {\displaystyle L^{*}=A^{*}(L^{*})^{-}} , with isocommutation rules [ X i , X j ] ∗ = X i × X j − X j × X i = X i T ∗ X j − X j T ∗ X i = C i j ∗ k X k . {\displaystyle [X_{i},X_{j}]^{*}=X_{i}\times X_{j}-X_{j}\times X_{i}=X_{i}T^{*}X_{j}-X_{j}T^{*}X_{i}=C_{ij}^{*k}X_{k}.} It is evident that: 1) The isoproduct and the isounit coincide at the abstract level with the conventional product and unit; 2) The isocommutators [ X i , X j ] ∗ {\displaystyle [X_{i},X_{j}]^{*}} verify Lie's axioms; 3) In view of the infinitely many possible isotopic elements T ∗ {\displaystyle T^{*}} (numbers, functions, matrices, operators, etc.), any given Lie algebra L {\displaystyle L} admits an infinite class of isotopes; 4) Lie-isotopic algebras are called regular whenever C i j ∗ k = C i j k {\displaystyle C_{ij}^{*k}=C_{ij}^{k}} , and irregular whenever C i j ∗ k ≠ C i j k {\displaystyle C_{ij}^{*k}\neq C_{ij}^{k}} .
5) All regular Lie-isotopes L ∗ {\displaystyle L^{*}} are evidently isomorphic to L {\displaystyle L} . However, the relationship between irregular isotopes L ∗ {\displaystyle L^{*}} and L {\displaystyle L} does not appear to have been studied to date (Jan. 20, 2024). An illustration of the applications of Lie-isotopic algebras in physics is given by the isotopes S U ∗ ( 2 ) {\displaystyle SU^{*}(2)} of the S U ( 2 ) {\displaystyle SU(2)} -spin symmetry, whose fundamental representation on a Hilbert space H {\displaystyle H} over the field of complex numbers C {\displaystyle C} can be obtained via the nonunitary transformation of the fundamental representation of S U ( 2 ) {\displaystyle SU(2)} (Pauli matrices) σ k ∗ = U σ k U † , {\displaystyle \sigma _{k}^{*}=U\sigma _{k}U^{\dagger },} U U † = I ∗ = D i a g . ( λ − 1 , λ ) , D e t 1 ∗ = 1 , {\displaystyle UU^{\dagger }=I^{*}=Diag.(\lambda ^{-1},\lambda ),Det1^{*}=1,} σ 1 ∗ = ( 0 λ λ − 1 0 ) , σ 2 ∗ = ( 0 − i λ i λ − 1 0 ) , σ 3 ∗ = ( λ − 1 0 0 − λ ) , {\displaystyle \sigma _{1}^{*}=\left(\!{\begin{array}{cc}0&\lambda \\\lambda ^{-1}&0\end{array}}\!\right),\sigma _{2}^{*}=\left(\!{\begin{array}{cc}0&-i\!\lambda \\i\!\lambda ^{-1}&0\end{array}}\!\right),\sigma _{3}^{*}=\left(\!{\begin{array}{cc}\lambda ^{-1}&0\\0&-\lambda \end{array}}\!\right),} providing an explicit and concrete realization of Bohm's hidden variable λ {\displaystyle \lambda } , which is 'hidden' in the abstract axiom of associativity and allows an exact representation of the deuteron magnetic moment. == Lie n-algebra == == Quasi-Lie algebra == A quasi-Lie algebra in abstract algebra is just like a Lie algebra, but with the usual axiom [ x , x ] = 0 {\displaystyle [x,x]=0} replaced by [ x , y ] = − [ y , x ] {\displaystyle [x,y]=-[y,x]} (anti-symmetry). In characteristic other than 2, these are equivalent (in the presence of bilinearity), so this distinction doesn't arise when considering real or complex Lie algebras. It can, however, become important when considering Lie algebras over the integers. In a quasi-Lie algebra, 2 [ x , x ] = 0. {\displaystyle 2[x,x]=0.} Therefore, the bracket of any element with itself is 2-torsion, if it does not actually vanish. See also: Whitehead product. == References == Serre, Jean-Pierre (2006). Lie Algebras and Lie Groups. 1964 lectures given at Harvard University. Lecture Notes in Mathematics. Vol. 1500 (Corrected 5th printing of the 2nd (1992) ed.). Berlin: Springer-Verlag. doi:10.1007/978-3-540-70634-2. ISBN 3-540-55008-9. MR 2179691. == Further reading == https://www.researchgate.net/publication/250736074_Some_remarks_on_Lie-isotopic_lifting_of_Minkowski_metric https://onlinelibrary.wiley.com/doi/abs/10.1002/(SICI)1099-1476(19961125)19:17%3C1349::AID-MMA823%3E3.0.CO;2-B
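The stated isocommutation rules can be verified numerically for these matrices. In the sketch below (illustrative only; numpy assumed), the isotopic element is taken to be T ∗ = diag(λ, λ−1) = (I ∗)−1, consistent with the isounit displayed above, and λ = 2 is a sample value of the hidden variable; antisymmetry and the Jacobi identity hold because the isoproduct A × B = A T ∗ B is associative, and the isotopy is regular in the sense of point 4):

import numpy as np

lam = 2.0                                  # sample hidden-variable value (assumed)
Istar = np.diag([1 / lam, lam])            # isounit 1* = Diag(1/lam, lam)
T = np.diag([lam, 1 / lam])                # isotopic element T* = (1*)^{-1}

s1 = np.array([[0, lam], [1 / lam, 0]], dtype=complex)
s2 = np.array([[0, -1j * lam], [1j / lam, 0]])
s3 = np.array([[1 / lam, 0], [0, -lam]], dtype=complex)

def isobr(A, B):                           # [A, B]* = A T* B - B T* A
    return A @ T @ B - B @ T @ A

print(np.allclose(Istar @ T @ s1, s1))     # 1* x X = X: T* inverts the isounit
print(np.allclose(isobr(s1, s2), -isobr(s2, s1)))        # antisymmetry
jac = (isobr(s1, isobr(s2, s3)) + isobr(s2, isobr(s3, s1))
       + isobr(s3, isobr(s1, s2)))
print(np.allclose(jac, 0))                 # Jacobi identity
print(np.allclose(isobr(s1, s2), 2j * s3)) # regular: [s1, s2]* = 2i s3, as for su(2)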
Wikipedia/Quasi-Lie_algebra
In mathematics, the adjoint representation (or adjoint action) of a Lie group G is a way of representing the elements of the group as linear transformations of the group's Lie algebra, considered as a vector space. For example, if G is G L ( n , R ) {\displaystyle \mathrm {GL} (n,\mathbb {R} )} , the Lie group of real n-by-n invertible matrices, then the adjoint representation is the group homomorphism that sends an invertible n-by-n matrix g {\displaystyle g} to an endomorphism of the vector space of all linear transformations of R n {\displaystyle \mathbb {R} ^{n}} defined by: x ↦ g x g − 1 {\displaystyle x\mapsto gxg^{-1}} . For any Lie group, this natural representation is obtained by linearizing (i.e. taking the differential of) the action of G on itself by conjugation. The adjoint representation can be defined for linear algebraic groups over arbitrary fields. == Definition == Let G be a Lie group, and let Ψ : G → Aut ⁡ ( G ) {\displaystyle \Psi :G\to \operatorname {Aut} (G)} be the mapping g ↦ Ψg, with Aut(G) the automorphism group of G and Ψg: G → G given by the inner automorphism (conjugation) Ψ g ( h ) = g h g − 1 . {\displaystyle \Psi _{g}(h)=ghg^{-1}~.} This Ψ is a Lie group homomorphism. For each g in G, define Adg to be the derivative of Ψg at the origin: Ad g = ( d Ψ g ) e : T e G → T e G {\displaystyle \operatorname {Ad} _{g}=(d\Psi _{g})_{e}:T_{e}G\rightarrow T_{e}G} where d is the differential and g = T e G {\displaystyle {\mathfrak {g}}=T_{e}G} is the tangent space at the origin e (e being the identity element of the group G). Since Ψ g {\displaystyle \Psi _{g}} is a Lie group automorphism, Adg is a Lie algebra automorphism; i.e., an invertible linear transformation of g {\displaystyle {\mathfrak {g}}} to itself that preserves the Lie bracket. Moreover, since g ↦ Ψ g {\displaystyle g\mapsto \Psi _{g}} is a group homomorphism, g ↦ Ad g {\displaystyle g\mapsto \operatorname {Ad} _{g}} too is a group homomorphism. Hence, the map A d : G → A u t ( g ) , g ↦ A d g {\displaystyle \mathrm {Ad} \colon G\to \mathrm {Aut} ({\mathfrak {g}}),\,g\mapsto \mathrm {Ad} _{g}} is a group representation called the adjoint representation of G. If G is an immersed Lie subgroup of the general linear group G L n ( C ) {\displaystyle \mathrm {GL} _{n}(\mathbb {C} )} (called an immersely linear Lie group), then the Lie algebra g {\displaystyle {\mathfrak {g}}} consists of matrices and the exponential map is the matrix exponential exp ⁡ ( X ) = e X {\displaystyle \operatorname {exp} (X)=e^{X}} for matrices X with small operator norms. We will compute the derivative of Ψ g {\displaystyle \Psi _{g}} at e {\displaystyle e} . For g in G and small X in g {\displaystyle {\mathfrak {g}}} , the curve t → exp ⁡ ( t X ) {\displaystyle t\to \exp(tX)} has derivative X {\displaystyle X} at t = 0; one then gets: Ad g ⁡ ( X ) = ( d Ψ g ) e ( X ) = ( Ψ g ∘ exp ⁡ ( t X ) ) ′ ( 0 ) = ( g exp ⁡ ( t X ) g − 1 ) ′ ( 0 ) = g X g − 1 {\displaystyle \operatorname {Ad} _{g}(X)=(d\Psi _{g})_{e}(X)=(\Psi _{g}\circ \exp(tX))'(0)=(g\exp(tX)g^{-1})'(0)=gXg^{-1}} where on the right we have the products of matrices. If G ⊂ G L n ( C ) {\displaystyle G\subset \mathrm {GL} _{n}(\mathbb {C} )} is a closed subgroup (that is, G is a matrix Lie group), then this formula is valid for all g in G and all X in g {\displaystyle {\mathfrak {g}}} . Succinctly, an adjoint representation is an isotropy representation associated to the conjugation action of G around the identity element of G.
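For a matrix group these properties are easy to test numerically. The sketch below (illustrative only; numpy and scipy assumed) checks, for G = GL(3, R), that Ad is a group homomorphism whose values preserve the commutator bracket, and, anticipating the next subsection, that Ad exp(X) = exp(ad X) with ad X Y = XY − YX:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 3
g, h, x, y = (rng.standard_normal((n, n)) for _ in range(4))  # generic, hence invertible
I = np.eye(n)

def Ad(a):
    ai = np.linalg.inv(a)
    return lambda m: a @ m @ ai

br = lambda a, b: a @ b - b @ a

print(np.allclose(Ad(g @ h)(x), Ad(g)(Ad(h)(x))))            # Ad_{gh} = Ad_g o Ad_h
print(np.allclose(Ad(g)(br(x, y)), br(Ad(g)(x), Ad(g)(y))))  # Ad_g preserves the bracket
# [ad_x, ad_y] = ad_{[x, y]}: the Jacobi identity restated
z = rng.standard_normal((n, n))
print(np.allclose(br(x, br(y, z)) - br(y, br(x, z)), br(br(x, y), z)))
# Ad_{exp(x)} = exp(ad_x): represent ad_x on the row-major vec(y)
adx = np.kron(x, I) - np.kron(I, x.T)
lhs = expm(x) @ y @ expm(-x)
rhs = (expm(adx) @ y.reshape(-1)).reshape(n, n)
print(np.allclose(lhs, rhs))               # expected: True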
=== Derivative of Ad === One may always pass from a representation of a Lie group G to a representation of its Lie algebra by taking the derivative at the identity. Taking the derivative of the adjoint map A d : G → A u t ( g ) {\displaystyle \mathrm {Ad} :G\to \mathrm {Aut} ({\mathfrak {g}})} at the identity element gives the adjoint representation of the Lie algebra g = Lie ⁡ ( G ) {\displaystyle {\mathfrak {g}}=\operatorname {Lie} (G)} of G: a d : g → D e r ( g ) x ↦ ad x = d ( Ad ) e ( x ) {\displaystyle {\begin{aligned}\mathrm {ad} :&\,{\mathfrak {g}}\to \mathrm {Der} ({\mathfrak {g}})\\&\,x\mapsto \operatorname {ad} _{x}=d(\operatorname {Ad} )_{e}(x)\end{aligned}}} where D e r ( g ) = Lie ⁡ ( Aut ⁡ ( g ) ) {\displaystyle \mathrm {Der} ({\mathfrak {g}})=\operatorname {Lie} (\operatorname {Aut} ({\mathfrak {g}}))} is the Lie algebra of A u t ( g ) {\displaystyle \mathrm {Aut} ({\mathfrak {g}})} which may be identified with the derivation algebra of g {\displaystyle {\mathfrak {g}}} . One can show that a d x ( y ) = [ x , y ] {\displaystyle \mathrm {ad} _{x}(y)=[x,y]\,} for all x , y ∈ g {\displaystyle x,y\in {\mathfrak {g}}} , where the right hand side is given (induced) by the Lie bracket of vector fields. Indeed, recall that, viewing g {\displaystyle {\mathfrak {g}}} as the Lie algebra of left-invariant vector fields on G, the bracket on g {\displaystyle {\mathfrak {g}}} is given as: for left-invariant vector fields X, Y, [ X , Y ] = lim t → 0 1 t ( d φ − t ( Y ) − Y ) {\displaystyle [X,Y]=\lim _{t\to 0}{1 \over t}(d\varphi _{-t}(Y)-Y)} where φ t : G → G {\displaystyle \varphi _{t}:G\to G} denotes the flow generated by X. As it turns out, φ t ( g ) = g φ t ( e ) {\displaystyle \varphi _{t}(g)=g\varphi _{t}(e)} , roughly because both sides satisfy the same ODE defining the flow. That is, φ t = R φ t ( e ) {\displaystyle \varphi _{t}=R_{\varphi _{t}(e)}} where R h {\displaystyle R_{h}} denotes the right multiplication by h ∈ G {\displaystyle h\in G} . On the other hand, since Ψ g = R g − 1 ∘ L g {\displaystyle \Psi _{g}=R_{g^{-1}}\circ L_{g}} , by the chain rule, Ad g ⁡ ( Y ) = d ( R g − 1 ∘ L g ) ( Y ) = d R g − 1 ( d L g ( Y ) ) = d R g − 1 ( Y ) {\displaystyle \operatorname {Ad} _{g}(Y)=d(R_{g^{-1}}\circ L_{g})(Y)=dR_{g^{-1}}(dL_{g}(Y))=dR_{g^{-1}}(Y)} as Y is left-invariant. Hence, [ X , Y ] = lim t → 0 1 t ( Ad φ t ( e ) ⁡ ( Y ) − Y ) {\displaystyle [X,Y]=\lim _{t\to 0}{1 \over t}(\operatorname {Ad} _{\varphi _{t}(e)}(Y)-Y)} , which is what was needed to show. Thus, a d x {\displaystyle \mathrm {ad} _{x}} coincides with the one defined in § Adjoint representation of a Lie algebra below. Ad and ad are related through the exponential map: specifically, Adexp(x) = exp(adx) for all x in the Lie algebra. This is a consequence of the general result relating Lie group and Lie algebra homomorphisms via the exponential map. If G is an immersely linear Lie group, then the above computation simplifies: indeed, as noted earlier, Ad g ⁡ ( Y ) = g Y g − 1 {\displaystyle \operatorname {Ad} _{g}(Y)=gYg^{-1}} and thus with g = e t X {\displaystyle g=e^{tX}} , Ad e t X ⁡ ( Y ) = e t X Y e − t X {\displaystyle \operatorname {Ad} _{e^{tX}}(Y)=e^{tX}Ye^{-tX}} . Taking the derivative of this at t = 0 {\displaystyle t=0} , we have: ad X ⁡ Y = X Y − Y X {\displaystyle \operatorname {ad} _{X}Y=XY-YX} . The general case can also be deduced from the linear case: indeed, let G ′ {\displaystyle G'} be an immersely linear Lie group having the same Lie algebra as that of G.
Then the derivative of Ad at the identity element for G and that for G' coincide; hence, without loss of generality, G can be assumed to be G'. The upper-case/lower-case notation is used extensively in the literature. Thus, for example, a vector x in the algebra g {\displaystyle {\mathfrak {g}}} generates a vector field X in the group G. Similarly, the adjoint map adxy = [x,y] of vectors in g {\displaystyle {\mathfrak {g}}} is homomorphic to the Lie derivative LXY = [X,Y] of vector fields on the group G considered as a manifold. Further see the derivative of the exponential map. == Adjoint representation of a Lie algebra == Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra over some field. Given an element x of a Lie algebra g {\displaystyle {\mathfrak {g}}} , one defines the adjoint action of x on g {\displaystyle {\mathfrak {g}}} as the map ad x : g → g with ad x ⁡ ( y ) = [ x , y ] {\displaystyle \operatorname {ad} _{x}:{\mathfrak {g}}\to {\mathfrak {g}}\qquad {\text{with}}\qquad \operatorname {ad} _{x}(y)=[x,y]} for all y in g {\displaystyle {\mathfrak {g}}} . It is called the adjoint endomorphism or adjoint action. ( ad x {\displaystyle \operatorname {ad} _{x}} is also often denoted as ad ⁡ ( x ) {\displaystyle \operatorname {ad} (x)} .) Since a bracket is bilinear, this determines the linear mapping ad : g → g l ( g ) = ( End ⁡ ( g ) , [ , ] ) {\displaystyle \operatorname {ad} :{\mathfrak {g}}\to {\mathfrak {gl}}({\mathfrak {g}})=(\operatorname {End} ({\mathfrak {g}}),[\;,\;])} given by x ↦ adx. Within End ( g ) {\displaystyle \operatorname {End} ({\mathfrak {g}})} , the bracket is, by definition, given by the commutator of the two operators: [ T , S ] = T ∘ S − S ∘ T {\displaystyle [T,S]=T\circ S-S\circ T} where ∘ {\displaystyle \circ } denotes composition of linear maps. Using the above definition of the bracket, the Jacobi identity [ x , [ y , z ] ] + [ y , [ z , x ] ] + [ z , [ x , y ] ] = 0 {\displaystyle [x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0} takes the form ( [ ad x , ad y ] ) ( z ) = ( ad [ x , y ] ) ( z ) {\displaystyle \left([\operatorname {ad} _{x},\operatorname {ad} _{y}]\right)(z)=\left(\operatorname {ad} _{[x,y]}\right)(z)} where x, y, and z are arbitrary elements of g {\displaystyle {\mathfrak {g}}} . This last identity says that ad is a Lie algebra homomorphism; i.e., a linear mapping that takes brackets to brackets. Hence, ad is a representation of a Lie algebra and is called the adjoint representation of the algebra g {\displaystyle {\mathfrak {g}}} . If g {\displaystyle {\mathfrak {g}}} is finite-dimensional and a basis for it is chosen, then g l ( g ) {\displaystyle {\mathfrak {gl}}({\mathfrak {g}})} is the Lie algebra of square matrices and the composition corresponds to matrix multiplication. In a more module-theoretic language, the construction says that g {\displaystyle {\mathfrak {g}}} is a module over itself. The kernel of ad is the center of g {\displaystyle {\mathfrak {g}}} (that's just rephrasing the definition). On the other hand, for each element z in g {\displaystyle {\mathfrak {g}}} , the linear mapping δ = ad z {\displaystyle \delta =\operatorname {ad} _{z}} obeys the Leibniz law: δ ( [ x , y ] ) = [ δ ( x ) , y ] + [ x , δ ( y ) ] {\displaystyle \delta ([x,y])=[\delta (x),y]+[x,\delta (y)]} for all x and y in the algebra (the restatement of the Jacobi identity).
That is to say, adz is a derivation and the image of g {\displaystyle {\mathfrak {g}}} under ad is a subalgebra of Der ( g ) {\displaystyle \operatorname {Der} ({\mathfrak {g}})} , the space of all derivations of g {\displaystyle {\mathfrak {g}}} . When g = Lie ⁡ ( G ) {\displaystyle {\mathfrak {g}}=\operatorname {Lie} (G)} is the Lie algebra of a Lie group G, ad is the differential of Ad at the identity element of G. There is the following formula similar to the Leibniz formula: for scalars α , β {\displaystyle \alpha ,\beta } and Lie algebra elements x , y , z {\displaystyle x,y,z} , ( ad x − α − β ) n [ y , z ] = ∑ i = 0 n ( n i ) [ ( ad x − α ) i y , ( ad x − β ) n − i z ] . {\displaystyle (\operatorname {ad} _{x}-\alpha -\beta )^{n}[y,z]=\sum _{i=0}^{n}{\binom {n}{i}}\left[(\operatorname {ad} _{x}-\alpha )^{i}y,(\operatorname {ad} _{x}-\beta )^{n-i}z\right].} == Structure constants == The explicit matrix elements of the adjoint representation are given by the structure constants of the algebra. That is, let {ei} be a set of basis vectors for the algebra, with [ e i , e j ] = ∑ k c i j k e k . {\displaystyle [e^{i},e^{j}]=\sum _{k}{c^{ij}}_{k}e^{k}.} Then the matrix elements for adei are given by [ ad e i ] k j = c i j k . {\displaystyle {\left[\operatorname {ad} _{e^{i}}\right]_{k}}^{j}={c^{ij}}_{k}~.} Thus, for example, the adjoint representation of su(2) is the defining representation of so(3) (see the sketch at the end of this section). == Examples == If G is abelian of dimension n, the adjoint representation of G is the trivial n-dimensional representation. If G is a matrix Lie group (i.e. a closed subgroup of G L ( n , C ) {\displaystyle \mathrm {GL} (n,\mathbb {C} )} ), then its Lie algebra is an algebra of n×n matrices with the commutator for a Lie bracket (i.e. a subalgebra of g l n ( C ) {\displaystyle {\mathfrak {gl}}_{n}(\mathbb {C} )} ). In this case, the adjoint map is given by Adg(x) = gxg−1. If G is SL(2, R) (real 2×2 matrices with determinant 1), the Lie algebra of G consists of real 2×2 matrices with trace 0. The representation is equivalent to that given by the action of G by linear substitution on the space of binary (i.e., 2 variable) quadratic forms. == Properties == The following summarizes the maps mentioned in the definition: Ψ : G → Aut(G) and Ad : G → Aut( g {\displaystyle {\mathfrak {g}}} ) are Lie group homomorphisms, and ad : g → Der ( g ) {\displaystyle {\mathfrak {g}}\to \operatorname {Der} ({\mathfrak {g}})} is a Lie algebra homomorphism. The image of G under the adjoint representation is denoted by Ad(G). If G is connected, the kernel of the adjoint representation coincides with the kernel of Ψ which is just the center of G. Therefore, the adjoint representation of a connected Lie group G is faithful if and only if G is centerless. More generally, if G is not connected, then the kernel of the adjoint map is the centralizer of the identity component G0 of G. By the first isomorphism theorem we have A d ( G ) ≅ G / Z G ( G 0 ) . {\displaystyle \mathrm {Ad} (G)\cong G/Z_{G}(G_{0}).} Given a finite-dimensional real Lie algebra g {\displaystyle {\mathfrak {g}}} , by Lie's third theorem, there is a connected Lie group Int ⁡ ( g ) {\displaystyle \operatorname {Int} ({\mathfrak {g}})} whose Lie algebra is the image of the adjoint representation of g {\displaystyle {\mathfrak {g}}} (i.e., Lie ⁡ ( Int ⁡ ( g ) ) = ad ⁡ ( g ) {\displaystyle \operatorname {Lie} (\operatorname {Int} ({\mathfrak {g}}))=\operatorname {ad} ({\mathfrak {g}})} .) It is called the adjoint group of g {\displaystyle {\mathfrak {g}}} .
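Returning to the structure constants, the su(2)/so(3) identification mentioned above can be checked by computing the constants from a basis and assembling the matrices of ad. In the sketch below (illustrative only; numpy assumed) the basis e i = −(i/2)σ i is used, so that [e i, e j] = ε ijk e k, with the constants written here as c[i, j, k]:

import numpy as np

s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
e = [-0.5j * m for m in s]                  # basis of su(2): [e_i, e_j] = eps_{ijk} e_k
br = lambda a, b: a @ b - b @ a

# structure constants via the trace pairing tr(e_i e_j) = -(1/2) delta_ij
c = np.array([[[(-2 * np.trace(br(e[i], e[j]) @ e[k])).real
                for k in range(3)] for j in range(3)] for i in range(3)])

# matrix of ad_{e_i} in this basis: (ad_{e_i})_{kj} = c[i, j, k]
ad = np.array([c[i].T for i in range(3)])

# defining generators of so(3): (L_i)_{jk} = -eps_{ijk}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
L = -eps

print(np.allclose(ad, L))   # expected: True, ad of su(2) is the defining rep of so(3)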
If g {\displaystyle {\mathfrak {g}}} is the Lie algebra of a connected Lie group G, then Int ⁡ ( g ) {\displaystyle \operatorname {Int} ({\mathfrak {g}})} is the image of the adjoint representation of G: Int ⁡ ( g ) = Ad ⁡ ( G ) {\displaystyle \operatorname {Int} ({\mathfrak {g}})=\operatorname {Ad} (G)} . == Roots of a semisimple Lie group == If G is semisimple, the non-zero weights of the adjoint representation form a root system. (In general, one needs to pass to the complexification of the Lie algebra before proceeding.) To see how this works, consider the case G = SL(n, R). We can take the group of diagonal matrices diag(t1, ..., tn) as our maximal torus T. Conjugation by an element of T sends [ a 11 a 12 ⋯ a 1 n a 21 a 22 ⋯ a 2 n ⋮ ⋮ ⋱ ⋮ a n 1 a n 2 ⋯ a n n ] ↦ [ a 11 t 1 t 2 − 1 a 12 ⋯ t 1 t n − 1 a 1 n t 2 t 1 − 1 a 21 a 22 ⋯ t 2 t n − 1 a 2 n ⋮ ⋮ ⋱ ⋮ t n t 1 − 1 a n 1 t n t 2 − 1 a n 2 ⋯ a n n ] . {\displaystyle {\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{n1}&a_{n2}&\cdots &a_{nn}\\\end{bmatrix}}\mapsto {\begin{bmatrix}a_{11}&t_{1}t_{2}^{-1}a_{12}&\cdots &t_{1}t_{n}^{-1}a_{1n}\\t_{2}t_{1}^{-1}a_{21}&a_{22}&\cdots &t_{2}t_{n}^{-1}a_{2n}\\\vdots &\vdots &\ddots &\vdots \\t_{n}t_{1}^{-1}a_{n1}&t_{n}t_{2}^{-1}a_{n2}&\cdots &a_{nn}\\\end{bmatrix}}.} Thus, T acts trivially on the diagonal part of the Lie algebra of G and with eigenvalues titj−1 on the various off-diagonal entries. The roots of G are the weights diag(t1, ..., tn) → titj−1. This accounts for the standard description of the root system of G = SLn(R) as the set of vectors of the form ei−ej. === Example SL(2, R) === When computing the root system for one of the simplest cases of Lie groups, the group SL(2, R) of 2×2 real matrices with determinant 1 consists of the set of matrices of the form: [ a b c d ] {\displaystyle {\begin{bmatrix}a&b\\c&d\\\end{bmatrix}}} with a, b, c, d real and ad − bc = 1. A maximal connected abelian Lie subgroup, playing the role of a maximal torus T, is given by the subset of all matrices of the form [ t 1 0 0 t 2 ] = [ t 1 0 0 1 / t 1 ] = [ exp ⁡ ( θ ) 0 0 exp ⁡ ( − θ ) ] {\displaystyle {\begin{bmatrix}t_{1}&0\\0&t_{2}\\\end{bmatrix}}={\begin{bmatrix}t_{1}&0\\0&1/t_{1}\\\end{bmatrix}}={\begin{bmatrix}\exp(\theta )&0\\0&\exp(-\theta )\\\end{bmatrix}}} with t 1 t 2 = 1 {\displaystyle t_{1}t_{2}=1} . The Lie algebra of the maximal torus is the Cartan subalgebra consisting of the matrices [ θ 0 0 − θ ] = θ [ 1 0 0 0 ] − θ [ 0 0 0 1 ] = θ ( e 1 − e 2 ) .
{\displaystyle {\begin{bmatrix}\theta &0\\0&-\theta \\\end{bmatrix}}=\theta {\begin{bmatrix}1&0\\0&0\\\end{bmatrix}}-\theta {\begin{bmatrix}0&0\\0&1\\\end{bmatrix}}=\theta (e_{1}-e_{2}).} If we conjugate an element of SL(2, R) by an element of the maximal torus we obtain [ t 1 0 0 1 / t 1 ] [ a b c d ] [ 1 / t 1 0 0 t 1 ] = [ a t 1 b t 1 c / t 1 d / t 1 ] [ 1 / t 1 0 0 t 1 ] = [ a b t 1 2 c t 1 − 2 d ] {\displaystyle {\begin{bmatrix}t_{1}&0\\0&1/t_{1}\\\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\\\end{bmatrix}}{\begin{bmatrix}1/t_{1}&0\\0&t_{1}\\\end{bmatrix}}={\begin{bmatrix}at_{1}&bt_{1}\\c/t_{1}&d/t_{1}\\\end{bmatrix}}{\begin{bmatrix}1/t_{1}&0\\0&t_{1}\\\end{bmatrix}}={\begin{bmatrix}a&bt_{1}^{2}\\ct_{1}^{-2}&d\\\end{bmatrix}}} The matrices [ 1 0 0 0 ] [ 0 0 0 1 ] [ 0 1 0 0 ] [ 0 0 1 0 ] {\displaystyle {\begin{bmatrix}1&0\\0&0\\\end{bmatrix}}{\begin{bmatrix}0&0\\0&1\\\end{bmatrix}}{\begin{bmatrix}0&1\\0&0\\\end{bmatrix}}{\begin{bmatrix}0&0\\1&0\\\end{bmatrix}}} are then 'eigenvectors' of the conjugation operation with eigenvalues 1 , 1 , t 1 2 , t 1 − 2 {\displaystyle 1,1,t_{1}^{2},t_{1}^{-2}} . The function Λ which gives t 1 2 {\displaystyle t_{1}^{2}} is a multiplicative character, or homomorphism from the group's torus to the underlying field R. The function λ giving θ is a weight of the Lie algebra with weight space given by the span of the matrices. It is instructive to verify the multiplicativity of the character and the linearity of the weight. It can further be proved that the differential of Λ can be used to create a weight. It is also educational to consider the case of SL(3, R). == Variants and analogues == The adjoint representation can also be defined for algebraic groups over any field. The co-adjoint representation is the contragredient representation of the adjoint representation. Alexandre Kirillov observed that the orbit of any vector in a co-adjoint representation is a symplectic manifold. According to the philosophy in representation theory known as the orbit method (see also the Kirillov character formula), the irreducible representations of a Lie group G should be indexed in some way by its co-adjoint orbits. This relationship is closest in the case of nilpotent Lie groups. == See also == Adjoint bundle – Lie algebra bundle associated to any principal bundle by the adjoint representation == Notes == == References == Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Kobayashi, Shoshichi; Nomizu, Katsumi (1996). Foundations of Differential Geometry, Vol. 1 (New ed.). Wiley-Interscience. ISBN 978-0-471-15733-5. Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666.
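Following the suggestion above, the torus action on sl(3, R) is easy to inspect numerically (a sketch; numpy assumed): conjugating the elementary matrix E ij by diag(t 1, t 2, t 3) rescales it by t i t j −1, exhibiting the root vectors and the roots e i − e j:

import numpy as np

t = np.array([2.0, 3.0, 1 / 6.0])          # torus element diag(t1, t2, t3), det = 1
g, gi = np.diag(t), np.diag(1 / t)

ok = True
for i in range(3):
    for j in range(3):
        if i != j:
            E = np.zeros((3, 3))
            E[i, j] = 1.0                  # root vector for the root e_i - e_j
            ok &= np.allclose(g @ E @ gi, (t[i] / t[j]) * E)
print(ok)   # expected: True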
Wikipedia/Adjoint_representation_of_a_Lie_algebra
In mathematics, a graded Lie algebra is a Lie algebra endowed with a gradation which is compatible with the Lie bracket. In other words, a graded Lie algebra is a Lie algebra which is also a nonassociative graded algebra under the bracket operation. A choice of Cartan decomposition endows any semisimple Lie algebra with the structure of a graded Lie algebra. Any parabolic Lie algebra is also a graded Lie algebra. A graded Lie superalgebra extends the notion of a graded Lie algebra in such a way that the Lie bracket is no longer assumed to be necessarily anticommutative. These arise in the study of derivations on graded algebras, in the deformation theory of Murray Gerstenhaber, Kunihiko Kodaira, and Donald C. Spencer, and in the theory of Lie derivatives. A supergraded Lie superalgebra is a further generalization of this notion to the category of superalgebras in which a graded Lie superalgebra is endowed with an additional super Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } -gradation. These arise when one forms a graded Lie superalgebra in a classical (non-supersymmetric) setting, and then tensorizes to obtain the supersymmetric analog. Still greater generalizations are possible to Lie algebras over a class of braided monoidal categories equipped with a coproduct and some notion of a gradation compatible with the braiding in the category. For hints in this direction, see Lie superalgebra#Category-theoretic definition. == Graded Lie algebras == In its most basic form, a graded Lie algebra is an ordinary Lie algebra g {\displaystyle {\mathfrak {g}}} , together with a gradation of vector spaces g = ⨁ i ∈ Z g i , {\displaystyle {\mathfrak {g}}=\bigoplus _{i\in {\mathbb {Z} }}{\mathfrak {g}}_{i},} such that the Lie bracket respects this gradation: [ g i , g j ] ⊆ g i + j . {\displaystyle [{\mathfrak {g}}_{i},{\mathfrak {g}}_{j}]\subseteq {\mathfrak {g}}_{i+j}.} The universal enveloping algebra of a graded Lie algebra inherits the grading. === Examples === ==== sl(2) ==== For example, the Lie algebra s l ( 2 ) {\displaystyle {\mathfrak {sl}}(2)} of trace-free 2 × 2 matrices is graded by the generators: X = ( 0 1 0 0 ) , Y = ( 0 0 1 0 ) , and H = ( 1 0 0 − 1 ) . {\displaystyle X={\begin{pmatrix}0&1\\0&0\end{pmatrix}},\quad Y={\begin{pmatrix}0&0\\1&0\end{pmatrix}},\quad {\textrm {and}}\quad H={\begin{pmatrix}1&0\\0&-1\end{pmatrix}}.} These satisfy the relations [ X , Y ] = H {\displaystyle [X,Y]=H} , [ H , X ] = 2 X {\displaystyle [H,X]=2X} , and [ H , Y ] = − 2 Y {\displaystyle [H,Y]=-2Y} . Hence with g − 1 = span ( X ) {\displaystyle {\mathfrak {g}}_{-1}={\textrm {span}}(X)} , g 0 = span ( H ) {\displaystyle {\mathfrak {g}}_{0}={\textrm {span}}(H)} , and g 1 = span ( Y ) {\displaystyle {\mathfrak {g}}_{1}={\textrm {span}}(Y)} , the decomposition s l ( 2 ) = g − 1 ⊕ g 0 ⊕ g 1 {\displaystyle {\mathfrak {sl}}(2)={\mathfrak {g}}_{-1}\oplus {\mathfrak {g}}_{0}\oplus {\mathfrak {g}}_{1}} presents s l ( 2 ) {\displaystyle {\mathfrak {sl}}(2)} as a graded Lie algebra. ==== Free Lie algebra ==== The free Lie algebra on a set X naturally has a grading, given by the minimum number of terms needed to generate the group element. This arises for example as the associated graded Lie algebra to the lower central series of a free group. 
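The bracket relations in the sl(2) example above, and with them the containments [g i, g j] ⊆ g i+j, can be checked directly (a sketch; numpy assumed); note in particular that the components of degree ±2 are zero, which forces the brackets [X, X] and [Y, Y] to vanish:

import numpy as np

X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
br = lambda a, b: a @ b - b @ a

print(np.allclose(br(X, Y), H))          # [g_{-1}, g_1] lands in g_0
print(np.allclose(br(H, X), 2 * X))      # [g_0, g_{-1}] lands in g_{-1}
print(np.allclose(br(H, Y), -2 * Y))     # [g_0, g_1] lands in g_1
print(np.allclose(br(X, X), 0), np.allclose(br(Y, Y), 0))  # degrees -2 and 2 are absent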
=== Generalizations === If Γ {\displaystyle \Gamma } is any commutative monoid, then the notion of a Γ {\displaystyle \Gamma } -graded Lie algebra generalizes that of an ordinary ( Z {\displaystyle \mathbb {Z} } -) graded Lie algebra so that the defining relations hold with the integers Z {\displaystyle \mathbb {Z} } replaced by Γ {\displaystyle \Gamma } . In particular, any semisimple Lie algebra is graded by the root spaces of its adjoint representation. == Graded Lie superalgebras == A graded Lie superalgebra over a field k (not of characteristic 2) consists of a graded vector space E over k, along with a bilinear bracket operation [ − , − ] : E ⊗ k E → E {\displaystyle [-,-]:E\otimes _{k}E\to E} such that the following axioms are satisfied. [-, -] respects the gradation of E: [ E i , E j ] ⊆ E i + j . {\displaystyle [E_{i},E_{j}]\subseteq E_{i+j}.} (Symmetry) For all x in Ei and y in Ej, [ x , y ] = − ( − 1 ) i j [ y , x ] {\displaystyle [x,y]=-(-1)^{ij}\,[y,x]} (Jacobi identity) For all x in Ei, y in Ej, and z in Ek, ( − 1 ) i k [ x , [ y , z ] ] + ( − 1 ) i j [ y , [ z , x ] ] + ( − 1 ) j k [ z , [ x , y ] ] = 0. {\displaystyle (-1)^{ik}[x,[y,z]]+(-1)^{ij}[y,[z,x]]+(-1)^{jk}[z,[x,y]]=0.} (If k has characteristic 3, then the Jacobi identity must be supplemented with the condition [ x , [ x , x ] ] = 0 {\displaystyle [x,[x,x]]=0} for all x in Eodd.) Note, for instance, that when E carries the trivial gradation, a graded Lie superalgebra over k is just an ordinary Lie algebra. When the gradation of E is concentrated in even degrees, one recovers the definition of a (Z-)graded Lie algebra. === Examples and Applications === The most basic example of a graded Lie superalgebra occurs in the study of derivations of graded algebras. If A is a graded k-algebra with gradation A = ⨁ i ∈ Z A i , {\displaystyle A=\bigoplus _{i\in \mathbb {Z} }A_{i},} then a graded k-derivation d on A of degree l is defined by d x = 0 {\displaystyle dx=0} for x ∈ k {\displaystyle x\in k} , d : A i → A i + l {\displaystyle d\colon A_{i}\to A_{i+l}} , and d ( x y ) = ( d x ) y + ( − 1 ) i l x ( d y ) {\displaystyle d(xy)=(dx)y+(-1)^{il}x(dy)} for x ∈ A i {\displaystyle x\in A_{i}} . The space of all graded derivations of degree l is denoted by Der l ⁡ ( A ) {\displaystyle \operatorname {Der} _{l}(A)} , and the direct sum of these spaces, Der ⁡ ( A ) = ⨁ l Der l ⁡ ( A ) , {\displaystyle \operatorname {Der} (A)=\bigoplus _{l}\operatorname {Der} _{l}(A),} carries the structure of an A-module. This generalizes the notion of a derivation of commutative algebras to the graded category. On Der(A), one can define a bracket via: [ d , δ ] = d δ − ( − 1 ) i j δ d {\displaystyle [d,\delta ]=d\delta -(-1)^{ij}\delta d} , for d ∈ Der i ⁡ ( A ) {\displaystyle d\in \operatorname {Der} _{i}(A)} and δ ∈ Der j ⁡ ( A ) {\displaystyle \delta \in \operatorname {Der} _{j}(A)} . Equipped with this structure, Der(A) inherits the structure of a graded Lie superalgebra over k. Further examples: The Frölicher–Nijenhuis bracket is an example of a graded Lie algebra arising naturally in the study of connections in differential geometry. The Nijenhuis–Richardson bracket arises in connection with the deformations of Lie algebras. === Generalizations === The notion of a graded Lie superalgebra can be generalized so that the grading is not just over the integers. Specifically, a signed semiring consists of a pair ( Γ , ϵ ) {\displaystyle (\Gamma ,\epsilon )} , where Γ {\displaystyle \Gamma } is a semiring and ϵ : Γ → Z / 2 Z {\displaystyle \epsilon \colon \Gamma \to \mathbb {Z} /2\mathbb {Z} } is a homomorphism of additive groups.
Then a graded Lie superalgebra over a signed semiring consists of a vector space E graded with respect to the additive structure on Γ {\displaystyle \Gamma } , and a bilinear bracket [-, -] which respects the grading on E and in addition satisfies: [ x , y ] = − ( − 1 ) ϵ ( deg ⁡ x ) ϵ ( deg ⁡ y ) [ y , x ] {\displaystyle [x,y]=-(-1)^{\epsilon (\deg x)\epsilon (\deg y)}[y,x]} for all homogeneous elements x and y, and ( − 1 ) ϵ ( deg ⁡ x ) ϵ ( deg ⁡ z ) [ x , [ y , z ] ] + ( − 1 ) ϵ ( deg ⁡ y ) ϵ ( deg ⁡ x ) [ y , [ z , x ] ] + ( − 1 ) ϵ ( deg ⁡ z ) ϵ ( deg ⁡ y ) [ z , [ x , y ] ] = 0. {\displaystyle (-1)^{\epsilon (\deg x)\epsilon (\deg z)}[x,[y,z]]+(-1)^{\epsilon (\deg y)\epsilon (\deg x)}[y,[z,x]]+(-1)^{\epsilon (\deg z)\epsilon (\deg y)}[z,[x,y]]=0.} Further examples: A Lie superalgebra is a graded Lie superalgebra over the signed semiring ( Z / 2 Z , ϵ ) {\displaystyle (\mathbb {Z} /2\mathbb {Z} ,\epsilon )} , where ϵ {\displaystyle \epsilon } is the identity map for the additive structure on the ring Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } . == Notes == == References == Nijenhuis, Albert; Richardson Jr., Roger W. (1966). "Cohomology and deformations in graded Lie algebras". Bulletin of the American Mathematical Society. 72 (1): 1–29. doi:10.1090/s0002-9904-1966-11401-5. MR 0195995. == See also == Differential graded Lie algebra Graded (mathematics) Lie algebra-valued form
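Since graded derivations sit inside the graded endomorphisms of A, the bracket [d, δ] = dδ − (−1)^{ij} δd introduced earlier is the supercommutator, and its sign rules can be sanity-checked on the simplest graded endomorphism algebra, that of a (1|1)-dimensional graded vector space (diagonal blocks even, off-diagonal blocks odd). The following sketch (illustrative only; numpy assumed) verifies the graded symmetry and Jacobi axioms on random homogeneous elements:

import numpy as np

rng = np.random.default_rng(4)

def homog(p):                # random homogeneous element of gl(1|1)
    A = np.zeros((2, 2))
    if p == 0:
        A[0, 0], A[1, 1] = rng.standard_normal(2)   # even: diagonal blocks
    else:
        A[0, 1], A[1, 0] = rng.standard_normal(2)   # odd: off-diagonal blocks
    return A

def sbr(A, pa, B, pb):       # supercommutator [A, B] = AB - (-1)^{pa pb} BA
    return A @ B - (-1) ** (pa * pb) * B @ A

ok = True
for pa in (0, 1):
    for pb in (0, 1):
        A, B = homog(pa), homog(pb)
        # graded symmetry: [A, B] = -(-1)^{pa pb} [B, A]
        ok &= np.allclose(sbr(A, pa, B, pb),
                          -(-1) ** (pa * pb) * sbr(B, pb, A, pa))
        for pc in (0, 1):
            C = homog(pc)
            # graded Jacobi identity with the signs stated above
            jac = ((-1) ** (pa * pc) * sbr(A, pa, sbr(B, pb, C, pc), (pb + pc) % 2)
                   + (-1) ** (pa * pb) * sbr(B, pb, sbr(C, pc, A, pa), (pc + pa) % 2)
                   + (-1) ** (pb * pc) * sbr(C, pc, sbr(A, pa, B, pb), (pa + pb) % 2))
            ok &= np.allclose(jac, 0)
print(ok)   # expected: True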
Wikipedia/Graded_Lie_algebra
In mathematics, a complex Lie algebra is a Lie algebra over the complex numbers. Given a complex Lie algebra g {\displaystyle {\mathfrak {g}}} , its conjugate g ¯ {\displaystyle {\overline {\mathfrak {g}}}} is a complex Lie algebra with the same underlying real vector space but with i = − 1 {\displaystyle i={\sqrt {-1}}} acting as − i {\displaystyle -i} instead. As a real Lie algebra, a complex Lie algebra g {\displaystyle {\mathfrak {g}}} is trivially isomorphic to its conjugate. A complex Lie algebra is isomorphic to its conjugate if and only if it admits a real form (and is said to be defined over the real numbers). == Real form == Given a complex Lie algebra g {\displaystyle {\mathfrak {g}}} , a real Lie algebra g 0 {\displaystyle {\mathfrak {g}}_{0}} is said to be a real form of g {\displaystyle {\mathfrak {g}}} if the complexification g 0 ⊗ R C {\displaystyle {\mathfrak {g}}_{0}\otimes _{\mathbb {R} }\mathbb {C} } is isomorphic to g {\displaystyle {\mathfrak {g}}} . A real form g 0 {\displaystyle {\mathfrak {g}}_{0}} is abelian (resp. nilpotent, solvable, semisimple) if and only if g {\displaystyle {\mathfrak {g}}} is abelian (resp. nilpotent, solvable, semisimple). On the other hand, a real form g 0 {\displaystyle {\mathfrak {g}}_{0}} is simple if and only if either g {\displaystyle {\mathfrak {g}}} is simple or g {\displaystyle {\mathfrak {g}}} is of the form s × s ¯ {\displaystyle {\mathfrak {s}}\times {\overline {\mathfrak {s}}}} where s , s ¯ {\displaystyle {\mathfrak {s}},{\overline {\mathfrak {s}}}} are simple and are the conjugates of each other. The existence of a real form in a complex Lie algebra g {\displaystyle {\mathfrak {g}}} implies that g {\displaystyle {\mathfrak {g}}} is isomorphic to its conjugate; indeed, if g = g 0 ⊗ R C = g 0 ⊕ i g 0 {\displaystyle {\mathfrak {g}}={\mathfrak {g}}_{0}\otimes _{\mathbb {R} }\mathbb {C} ={\mathfrak {g}}_{0}\oplus i{\mathfrak {g}}_{0}} , then let τ : g → g ¯ {\displaystyle \tau :{\mathfrak {g}}\to {\overline {\mathfrak {g}}}} denote the R {\displaystyle \mathbb {R} } -linear isomorphism induced by complex conjugate and then τ ( i ( x + i y ) ) = τ ( i x − y ) = − i x − y = − i τ ( x + i y ) {\displaystyle \tau (i(x+iy))=\tau (ix-y)=-ix-y=-i\tau (x+iy)} , which is to say τ {\displaystyle \tau } is in fact a C {\displaystyle \mathbb {C} } -linear isomorphism. Conversely, suppose there is a C {\displaystyle \mathbb {C} } -linear isomorphism τ : g → ∼ g ¯ {\displaystyle \tau :{\mathfrak {g}}{\overset {\sim }{\to }}{\overline {\mathfrak {g}}}} ; without loss of generality, we can assume it is the identity function on the underlying real vector space. Then define g 0 = { z ∈ g | τ ( z ) = z } {\displaystyle {\mathfrak {g}}_{0}=\{z\in {\mathfrak {g}}|\tau (z)=z\}} , which is clearly a real Lie algebra. Each element z {\displaystyle z} in g {\displaystyle {\mathfrak {g}}} can be written uniquely as z = 2 − 1 ( z + τ ( z ) ) + i 2 − 1 ( i τ ( z ) − i z ) {\displaystyle z=2^{-1}(z+\tau (z))+i2^{-1}(i\tau (z)-iz)} . Here, τ ( i τ ( z ) − i z ) = − i z + i τ ( z ) {\displaystyle \tau (i\tau (z)-iz)=-iz+i\tau (z)} and similarly τ {\displaystyle \tau } fixes z + τ ( z ) {\displaystyle z+\tau (z)} . Hence, g = g 0 ⊕ i g 0 {\displaystyle {\mathfrak {g}}={\mathfrak {g}}_{0}\oplus i{\mathfrak {g}}_{0}} ; i.e., g 0 {\displaystyle {\mathfrak {g}}_{0}} is a real form. 
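The argument can be made concrete for g 0 = su(2) inside g = sl(2, C) (an illustrative sketch; numpy assumed): the traceless anti-Hermitian matrices form a real Lie algebra, and together with their multiples by i they span the six-real-dimensional space of traceless complex 2 × 2 matrices, exhibiting g = g 0 ⊕ i g 0:

import numpy as np

su2 = [np.array([[1j, 0], [0, -1j]]),        # a traceless anti-Hermitian basis
       np.array([[0, 1], [-1, 0]], complex),
       np.array([[0, 1j], [1j, 0]])]

def coords(m):                               # a traceless 2x2 matrix as a vector in R^6
    v = np.array([m[0, 1], m[1, 0], m[0, 0]])   # m[1, 1] = -m[0, 0] is redundant
    return np.concatenate([v.real, v.imag])

vecs = np.array([coords(w) for b in su2 for w in (b, 1j * b)])
print(np.linalg.matrix_rank(vecs))           # expected: 6, so sl(2,C) = su(2) + i su(2)

# su(2) is closed under the bracket over the reals:
br = lambda a, b: a @ b - b @ a
print(all(np.allclose(br(a, c), -br(a, c).conj().T)
          and abs(np.trace(br(a, c))) < 1e-12
          for a in su2 for c in su2))        # brackets stay anti-Hermitian and traceless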
== Complex Lie algebra of a complex Lie group == Let g {\displaystyle {\mathfrak {g}}} be a semisimple complex Lie algebra that is the Lie algebra of a complex Lie group G {\displaystyle G} . Let h {\displaystyle {\mathfrak {h}}} be a Cartan subalgebra of g {\displaystyle {\mathfrak {g}}} and H {\displaystyle H} the Lie subgroup corresponding to h {\displaystyle {\mathfrak {h}}} ; the conjugates of H {\displaystyle H} are called Cartan subgroups. Suppose there is the decomposition g = n − ⊕ h ⊕ n + {\displaystyle {\mathfrak {g}}={\mathfrak {n}}^{-}\oplus {\mathfrak {h}}\oplus {\mathfrak {n}}^{+}} given by a choice of positive roots. Then the exponential map defines an isomorphism from n + {\displaystyle {\mathfrak {n}}^{+}} to a closed subgroup U ⊂ G {\displaystyle U\subset G} . The Lie subgroup B ⊂ G {\displaystyle B\subset G} corresponding to the Borel subalgebra b = h ⊕ n + {\displaystyle {\mathfrak {b}}={\mathfrak {h}}\oplus {\mathfrak {n}}^{+}} is closed and is the semidirect product of H {\displaystyle H} and U {\displaystyle U} ; the conjugates of B {\displaystyle B} are called Borel subgroups. == Notes == == References == Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Knapp, A. W. (2002). Lie groups beyond an introduction. Progress in Mathematics. Vol. 120 (2nd ed.). Boston·Basel·Berlin: Birkhäuser. ISBN 0-8176-4259-5.. Serre, Jean-Pierre (2001). Complex Semisimple Lie Algebras. Berlin: Springer. ISBN 3-5406-7827-1.
Wikipedia/Complex_Lie_algebra
In mathematics, Lie algebra cohomology is a cohomology theory for Lie algebras. It was first introduced in 1929 by Élie Cartan to study the topology of Lie groups and homogeneous spaces by relating cohomological methods of Georges de Rham to properties of the Lie algebra. It was later extended by Claude Chevalley and Samuel Eilenberg (1948) to coefficients in an arbitrary Lie module. == Motivation == If G {\displaystyle G} is a compact simply connected Lie group, then it is determined by its Lie algebra, so it should be possible to calculate its cohomology from the Lie algebra. This can be done as follows. Its cohomology is the de Rham cohomology of the complex of differential forms on G {\displaystyle G} . Using an averaging process, this complex can be replaced by the complex of left-invariant differential forms. The left-invariant forms, meanwhile, are determined by their values at the identity, so that the space of left-invariant differential forms can be identified with the exterior algebra of the Lie algebra, with a suitable differential. The construction of this differential on an exterior algebra makes sense for any Lie algebra, so it is used to define Lie algebra cohomology for all Lie algebras. More generally one uses a similar construction to define Lie algebra cohomology with coefficients in a module. If G {\displaystyle G} is a simply connected noncompact Lie group, the Lie algebra cohomology of the associated Lie algebra g {\displaystyle {\mathfrak {g}}} does not necessarily reproduce the de Rham cohomology of G {\displaystyle G} . The reason for this is that the passage from the complex of all differential forms to the complex of left-invariant differential forms uses an averaging process that only makes sense for compact groups. == Definition == Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra over a commutative ring R with universal enveloping algebra U g {\displaystyle U{\mathfrak {g}}} , and let M be a representation of g {\displaystyle {\mathfrak {g}}} (equivalently, a U g {\displaystyle U{\mathfrak {g}}} -module). Considering R as a trivial representation of g {\displaystyle {\mathfrak {g}}} , one defines the cohomology groups H n ( g ; M ) := E x t U g n ( R , M ) {\displaystyle \mathrm {H} ^{n}({\mathfrak {g}};M):=\mathrm {Ext} _{U{\mathfrak {g}}}^{n}(R,M)} (see Ext functor for the definition of Ext). Equivalently, these are the right derived functors of the left exact invariant submodule functor M ↦ M g := { m ∈ M ∣ x m = 0 for all x ∈ g } . {\displaystyle M\mapsto M^{\mathfrak {g}}:=\{m\in M\mid xm=0\ {\text{ for all }}x\in {\mathfrak {g}}\}.} Analogously, one can define Lie algebra homology as H n ( g ; M ) := T o r n U g ( R , M ) {\displaystyle \mathrm {H} _{n}({\mathfrak {g}};M):=\mathrm {Tor} _{n}^{U{\mathfrak {g}}}(R,M)} (see Tor functor for the definition of Tor), which is equivalent to the left derived functors of the right exact coinvariants functor M ↦ M g := M / g M . {\displaystyle M\mapsto M_{\mathfrak {g}}:=M/{\mathfrak {g}}M.} Some important basic results about the cohomology of Lie algebras include Whitehead's lemmas, Weyl's theorem, and the Levi decomposition theorem. == Chevalley–Eilenberg complex == Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra over a field k {\displaystyle k} , with a left action on the g {\displaystyle {\mathfrak {g}}} -module M {\displaystyle M} . 
The elements of the Chevalley–Eilenberg complex H o m k ( Λ ∙ g , M ) {\displaystyle \mathrm {Hom} _{k}(\Lambda ^{\bullet }{\mathfrak {g}},M)} are called cochains from g {\displaystyle {\mathfrak {g}}} to M {\displaystyle M} . A homogeneous n {\displaystyle n} -cochain from g {\displaystyle {\mathfrak {g}}} to M {\displaystyle M} is thus an alternating k {\displaystyle k} -multilinear function f : Λ n g → M {\displaystyle f\colon \Lambda ^{n}{\mathfrak {g}}\to M} . When g {\displaystyle {\mathfrak {g}}} is finitely generated as vector space, the Chevalley–Eilenberg complex is canonically isomorphic to the tensor product M ⊗ Λ ∙ g ∗ {\displaystyle M\otimes \Lambda ^{\bullet }{\mathfrak {g}}^{*}} , where g ∗ {\displaystyle {\mathfrak {g}}^{*}} denotes the dual vector space of g {\displaystyle {\mathfrak {g}}} . The Lie bracket [ ⋅ , ⋅ ] : Λ 2 g → g {\displaystyle [\cdot ,\cdot ]\colon \Lambda ^{2}{\mathfrak {g}}\rightarrow {\mathfrak {g}}} on g {\displaystyle {\mathfrak {g}}} induces a transpose application d g ( 1 ) : g ∗ → Λ 2 g ∗ {\displaystyle d_{\mathfrak {g}}^{(1)}\colon {\mathfrak {g}}^{*}\rightarrow \Lambda ^{2}{\mathfrak {g}}^{*}} by duality. The latter is sufficient to define a derivation d g {\displaystyle d_{\mathfrak {g}}} of the complex of cochains from g {\displaystyle {\mathfrak {g}}} to k {\displaystyle k} by extending d g ( 1 ) {\displaystyle d_{\mathfrak {g}}^{(1)}} according to the graded Leibniz rule. It follows from the Jacobi identity that d g {\displaystyle d_{\mathfrak {g}}} satisfies d g 2 = 0 {\displaystyle d_{\mathfrak {g}}^{2}=0} and is in fact a differential. In this setting, k {\displaystyle k} is viewed as a trivial g {\displaystyle {\mathfrak {g}}} -module while k ∼ Λ 0 g ∗ ⊆ K e r ( d g ) {\displaystyle k\sim \Lambda ^{0}{\mathfrak {g}}^{*}\subseteq \mathrm {Ker} (d_{\mathfrak {g}})} may be thought of as constants. In general, let γ ∈ H o m ( g , End ⁡ M ) {\displaystyle \gamma \in \mathrm {Hom} ({\mathfrak {g}},\operatorname {End} M)} denote the left action of g {\displaystyle {\mathfrak {g}}} on M {\displaystyle M} and regard it as an application d γ ( 0 ) : M → M ⊗ g ∗ {\displaystyle d_{\gamma }^{(0)}\colon M\rightarrow M\otimes {\mathfrak {g}}^{*}} . The Chevalley–Eilenberg differential d {\displaystyle d} is then the unique derivation extending d γ ( 0 ) {\displaystyle d_{\gamma }^{(0)}} and d g ( 1 ) {\displaystyle d_{\mathfrak {g}}^{(1)}} according to the graded Leibniz rule, the nilpotency condition d 2 = 0 {\displaystyle d^{2}=0} following from the Lie algebra homomorphism from g {\displaystyle {\mathfrak {g}}} to End ⁡ M {\displaystyle \operatorname {End} M} and the Jacobi identity in g {\displaystyle {\mathfrak {g}}} . Explicitly, the differential of the n {\displaystyle n} -cochain f {\displaystyle f} is the ( n + 1 ) {\displaystyle (n+1)} -cochain d f {\displaystyle df} given by: ( d f ) ( x 1 , … , x n + 1 ) = ∑ i ( − 1 ) i + 1 x i f ( x 1 , … , x ^ i , … , x n + 1 ) + ∑ i < j ( − 1 ) i + j f ( [ x i , x j ] , x 1 , … , x ^ i , … , x ^ j , … , x n + 1 ) , {\displaystyle {\begin{aligned}(df)\left(x_{1},\ldots ,x_{n+1}\right)=&\sum _{i}(-1)^{i+1}x_{i}\,f\left(x_{1},\ldots ,{\hat {x}}_{i},\ldots ,x_{n+1}\right)+\\&\sum _{i<j}(-1)^{i+j}f\left(\left[x_{i},x_{j}\right],x_{1},\ldots ,{\hat {x}}_{i},\ldots ,{\hat {x}}_{j},\ldots ,x_{n+1}\right)\,,\end{aligned}}} where the caret signifies omitting that argument. 
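For trivial coefficients the formula collapses: d vanishes on 0-cochains, and on a 1-cochain f one has (df)(x, y) = −f([x, y]). The sketch below (illustrative only; numpy assumed) encodes this via structure constants and recovers dim H1 = dim g/[g, g] for two 3-dimensional examples, sl(2) and the Heisenberg algebra:

import numpy as np
from itertools import combinations

def dim_H1(c):
    """dim H^1(g; R) for trivial coefficients, from structure constants
    c[i, j, k] with [e_i, e_j] = sum_k c[i, j, k] e_k."""
    n = c.shape[0]
    # d on 1-cochains: (df)(e_i, e_j) = -f([e_i, e_j]), one row per pair i < j
    d1 = np.array([-c[i, j, :] for i, j in combinations(range(n), 2)])
    # d on 0-cochains is zero (trivial action), so H^1 = ker(d1)
    return n - np.linalg.matrix_rank(d1)

# sl(2) with basis (X, Y, H): [X, Y] = H, [H, X] = 2X, [H, Y] = -2Y
sl2 = np.zeros((3, 3, 3))
sl2[0, 1, 2], sl2[1, 0, 2] = 1.0, -1.0
sl2[2, 0, 0], sl2[0, 2, 0] = 2.0, -2.0
sl2[2, 1, 1], sl2[1, 2, 1] = -2.0, 2.0

# Heisenberg algebra with basis (p, q, z): [p, q] = z, z central
heis = np.zeros((3, 3, 3))
heis[0, 1, 2], heis[1, 0, 2] = 1.0, -1.0

print(dim_H1(sl2))    # expected: 0   ([g, g] = g)
print(dim_H1(heis))   # expected: 2   ([g, g] = span(z))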
When G {\displaystyle G} is a real Lie group with Lie algebra g {\displaystyle {\mathfrak {g}}} , the Chevalley–Eilenberg complex may also be canonically identified with the space of left-invariant forms with values in M {\displaystyle M} , denoted by Ω ∙ ( G , M ) G {\displaystyle \Omega ^{\bullet }(G,M)^{G}} . The Chevalley–Eilenberg differential may then be thought of as a restriction of the covariant derivative on the trivial fiber bundle G × M → G {\displaystyle G\times M\rightarrow G} , equipped with the equivariant connection γ ~ ∈ Ω 1 ( G , End ⁡ M ) {\displaystyle {\tilde {\gamma }}\in \Omega ^{1}(G,\operatorname {End} M)} associated with the left action γ ∈ H o m ( g , End ⁡ M ) {\displaystyle \gamma \in \mathrm {Hom} ({\mathfrak {g}},\operatorname {End} M)} of g {\displaystyle {\mathfrak {g}}} on M {\displaystyle M} . In the particular case where M = k = R {\displaystyle M=k=\mathbb {R} } is equipped with the trivial action of g {\displaystyle {\mathfrak {g}}} , the Chevalley–Eilenberg differential coincides with the restriction of the de Rham differential on Ω ∙ ( G ) {\displaystyle \Omega ^{\bullet }(G)} to the subspace of left-invariant differential forms. == Cohomology in small dimensions == The zeroth cohomology group is (by definition) the invariants of the Lie algebra acting on the module: H 0 ( g ; M ) = M g = { m ∈ M ∣ x m = 0 for all x ∈ g } . {\displaystyle H^{0}({\mathfrak {g}};M)=M^{\mathfrak {g}}=\{m\in M\mid xm=0\ {\text{ for all }}x\in {\mathfrak {g}}\}.} The first cohomology group is the space Der of derivations modulo the space Ider of inner derivations H 1 ( g ; M ) = D e r ( g , M ) / I d e r ( g , M ) {\displaystyle H^{1}({\mathfrak {g}};M)=\mathrm {Der} ({\mathfrak {g}},M)/\mathrm {Ider} ({\mathfrak {g}},M)\,} , where a derivation is a map d {\displaystyle d} from the Lie algebra to M {\displaystyle M} such that d [ x , y ] = x d y − y d x {\displaystyle d[x,y]=xdy-ydx~} and is called inner if it is given by d x = x a {\displaystyle dx=xa~} for some a {\displaystyle a} in M {\displaystyle M} . The second cohomology group H 2 ( g ; M ) {\displaystyle H^{2}({\mathfrak {g}};M)} is the space of equivalence classes of Lie algebra extensions 0 → M → h → g → 0 {\displaystyle 0\rightarrow M\rightarrow {\mathfrak {h}}\rightarrow {\mathfrak {g}}\rightarrow 0} of the Lie algebra by the module M {\displaystyle M} . Similarly, any element of the cohomology group H n + 1 ( g ; M ) {\displaystyle H^{n+1}({\mathfrak {g}};M)} gives an equivalence class of ways to extend the Lie algebra g {\displaystyle {\mathfrak {g}}} to a "Lie n {\displaystyle n} -algebra" with g {\displaystyle {\mathfrak {g}}} in grade zero and M {\displaystyle M} in grade n {\displaystyle n} . A Lie n {\displaystyle n} -algebra is a homotopy Lie algebra with nonzero terms only in degrees 0 through n {\displaystyle n} . == Examples == === Cohomology on the trivial module === When M = R {\displaystyle M=\mathbb {R} } , as mentioned earlier the Chevalley–Eilenberg complex coincides with the de-Rham complex for a corresponding compact Lie group. In this case M {\displaystyle M} carries the trivial action of g {\displaystyle {\mathfrak {g}}} , so x a = 0 {\displaystyle xa=0} for every x ∈ g , a ∈ M {\displaystyle x\in {\mathfrak {g}},a\in M} . The zeroth cohomology group is M {\displaystyle M} . 
First cohomology: given a derivation D {\displaystyle D} , x D y = 0 {\displaystyle xDy=0} for all x {\displaystyle x} and y {\displaystyle y} , so derivations satisfy D ( [ x , y ] ) = 0 {\displaystyle D([x,y])=0} for all commutators, so the ideal [ g , g ] {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]} is contained in the kernel of D {\displaystyle D} . If [ g , g ] = g {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]={\mathfrak {g}}} , as is the case for simple Lie algebras, then D ≡ 0 {\displaystyle D\equiv 0} , so the space of derivations is trivial and the first cohomology vanishes. If g {\displaystyle {\mathfrak {g}}} is abelian, that is, [ g , g ] = 0 {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]=0} , then any linear functional D : g → M {\displaystyle D:{\mathfrak {g}}\rightarrow M} is in fact a derivation, and the set of inner derivations is trivial as they satisfy D x = x a = 0 {\displaystyle Dx=xa=0} for any a ∈ M {\displaystyle a\in M} . Then the first cohomology group in this case is M dim g {\displaystyle M^{{\text{dim}}{\mathfrak {g}}}} . In light of the de Rham correspondence, this shows the importance of the compact assumption, as this is the first cohomology group of the n {\displaystyle n} -torus viewed as an abelian group, and R n {\displaystyle \mathbb {R} ^{n}} can also be viewed as an abelian group of dimension n {\displaystyle n} , but R n {\displaystyle \mathbb {R} ^{n}} has trivial cohomology. Second cohomology: The second cohomology group is the space of equivalence classes of central extensions 0 → h → e → g → 0. {\displaystyle 0\rightarrow {\mathfrak {h}}\rightarrow {\mathfrak {e}}\rightarrow {\mathfrak {g}}\rightarrow 0.} Finite-dimensional simple Lie algebras have only trivial central extensions. === Cohomology on the adjoint module === When M = g {\displaystyle M={\mathfrak {g}}} , the action is the adjoint action, x ⋅ y = [ x , y ] = ad ( x ) y {\displaystyle x\cdot y=[x,y]={\text{ad}}(x)y} . The zeroth cohomology group is the center z ( g ) {\displaystyle {\mathfrak {z}}({\mathfrak {g}})} . First cohomology: the inner derivations are given by D x = x y = [ x , y ] = − ad ( y ) x {\displaystyle Dx=xy=[x,y]=-{\text{ad}}(y)x} , so they are precisely the image of ad : g → End ⁡ g . {\displaystyle {\text{ad}}:{\mathfrak {g}}\rightarrow \operatorname {End} {\mathfrak {g}}.} The first cohomology group is the space of outer derivations. == See also == BRST formalism in theoretical physics. Gelfand–Fuks cohomology == References ==
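The statement that the zeroth cohomology on the adjoint module is the center can likewise be checked numerically (a sketch; numpy and scipy assumed): for the Heisenberg algebra, the common kernel of the operators ad e i is exactly the central direction z:

import numpy as np
from scipy.linalg import null_space

# Heisenberg algebra, basis (p, q, z): [p, q] = z
c = np.zeros((3, 3, 3))
c[0, 1, 2], c[1, 0, 2] = 1.0, -1.0

# ad_x on coordinates: (ad_x y)_k = sum_{i,j} x_i y_j c[i, j, k]
ad = lambda x: np.einsum('i,ijk->kj', x, c)

# H^0(g; g) = center: the common kernel of ad_{e_0}, ad_{e_1}, ad_{e_2}
center = null_space(np.vstack([ad(e) for e in np.eye(3)]))
print(center.round(6))   # expected: a single column along the z-axis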
Wikipedia/Lie_algebra_cohomology
In mathematics, an exceptional Lie algebra is a complex simple Lie algebra whose Dynkin diagram is of exceptional (nonclassical) type. There are exactly five of them: g 2 , f 4 , e 6 , e 7 , e 8 {\displaystyle {\mathfrak {g}}_{2},{\mathfrak {f}}_{4},{\mathfrak {e}}_{6},{\mathfrak {e}}_{7},{\mathfrak {e}}_{8}} ; their respective dimensions are 14, 52, 78, 133, 248. The corresponding Dynkin diagrams are those of types G2, F4, E6, E7 and E8. In contrast, simple Lie algebras that are not exceptional are called classical Lie algebras (there are infinitely many of them). == Construction == There is no simple, universally accepted way to construct exceptional Lie algebras; in fact, they were discovered only in the process of the classification program. Here are some constructions: § 22.1-2 of (Fulton & Harris 1991) give a detailed construction of g 2 {\displaystyle {\mathfrak {g}}_{2}} . Exceptional Lie algebras may be realized as the derivation algebras of appropriate nonassociative algebras. Construct e 8 {\displaystyle {\mathfrak {e}}_{8}} first and then find e 6 , e 7 {\displaystyle {\mathfrak {e}}_{6},{\mathfrak {e}}_{7}} as subalgebras. Tits has given a uniform construction of the five exceptional Lie algebras. == References == == Further reading == https://www.encyclopediaofmath.org/index.php/Lie_algebra,_exceptional http://math.ucr.edu/home/baez/octonions/node13.html
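As a quick arithmetic check of the dimensions listed above, one can use the standard fact (assumed here; it is not stated in the article) that the dimension of a simple Lie algebra equals its rank plus the number of its roots. A minimal sketch:

    # Sketch: dim = rank + number of roots (the root counts are standard facts).
    data = {"g2": (2, 12), "f4": (4, 48), "e6": (6, 72), "e7": (7, 126), "e8": (8, 240)}
    dims = {name: rank + roots for name, (rank, roots) in data.items()}
    assert dims == {"g2": 14, "f4": 52, "e6": 78, "e7": 133, "e8": 248}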
Wikipedia/Exceptional_Lie_algebra
In mathematics, an orthogonal symmetric Lie algebra is a pair ( g , s ) {\displaystyle ({\mathfrak {g}},s)} consisting of a real Lie algebra g {\displaystyle {\mathfrak {g}}} and an automorphism s {\displaystyle s} of g {\displaystyle {\mathfrak {g}}} of order 2 {\displaystyle 2} such that the eigenspace u {\displaystyle {\mathfrak {u}}} of s corresponding to 1 (i.e., the set u {\displaystyle {\mathfrak {u}}} of fixed points) is a compact subalgebra. If "compactness" is omitted, it is called a symmetric Lie algebra. An orthogonal symmetric Lie algebra is said to be effective if u {\displaystyle {\mathfrak {u}}} intersects the center of g {\displaystyle {\mathfrak {g}}} trivially. In practice, effectiveness is often assumed; we do this in this article as well. The canonical example is the Lie algebra of a symmetric space, s {\displaystyle s} being the differential of a symmetry. Let ( g , s ) {\displaystyle ({\mathfrak {g}},s)} be an effective orthogonal symmetric Lie algebra, and let p {\displaystyle {\mathfrak {p}}} denote the -1 eigenspace of s {\displaystyle s} . We say that ( g , s ) {\displaystyle ({\mathfrak {g}},s)} is of compact type if g {\displaystyle {\mathfrak {g}}} is compact and semisimple. If instead g {\displaystyle {\mathfrak {g}}} is noncompact and semisimple, and g = u + p {\displaystyle {\mathfrak {g}}={\mathfrak {u}}+{\mathfrak {p}}} is a Cartan decomposition, then ( g , s ) {\displaystyle ({\mathfrak {g}},s)} is of noncompact type. If p {\displaystyle {\mathfrak {p}}} is an Abelian ideal of g {\displaystyle {\mathfrak {g}}} , then ( g , s ) {\displaystyle ({\mathfrak {g}},s)} is said to be of Euclidean type. Every effective, orthogonal symmetric Lie algebra decomposes into a direct sum of ideals g 0 {\displaystyle {\mathfrak {g}}_{0}} , g − {\displaystyle {\mathfrak {g}}_{-}} and g + {\displaystyle {\mathfrak {g}}_{+}} , each invariant under s {\displaystyle s} and orthogonal with respect to the Killing form of g {\displaystyle {\mathfrak {g}}} , and such that if s 0 {\displaystyle s_{0}} , s − {\displaystyle s_{-}} and s + {\displaystyle s_{+}} denote the restriction of s {\displaystyle s} to g 0 {\displaystyle {\mathfrak {g}}_{0}} , g − {\displaystyle {\mathfrak {g}}_{-}} and g + {\displaystyle {\mathfrak {g}}_{+}} , respectively, then ( g 0 , s 0 ) {\displaystyle ({\mathfrak {g}}_{0},s_{0})} , ( g − , s − ) {\displaystyle ({\mathfrak {g}}_{-},s_{-})} and ( g + , s + ) {\displaystyle ({\mathfrak {g}}_{+},s_{+})} are effective orthogonal symmetric Lie algebras of Euclidean type, compact type and noncompact type, respectively. == References == Helgason, Sigurdur (2001). Differential Geometry, Lie Groups, and Symmetric Spaces. American Mathematical Society. ISBN 978-0-8218-2848-9.
Wikipedia/Orthogonal_symmetric_Lie_algebra
In mathematics, a Lie coalgebra is the dual structure to a Lie algebra. In finite dimensions, these are dual objects: the dual vector space to a Lie algebra naturally has the structure of a Lie coalgebra, and conversely. == Definition == Let E {\displaystyle E} be a vector space over a field k {\displaystyle \mathbb {k} } equipped with a linear mapping d : E → E ∧ E {\displaystyle d\colon E\to E\wedge E} from E {\displaystyle E} to the exterior product of E {\displaystyle E} with itself. It is possible to extend d {\displaystyle d} uniquely to a graded derivation (this means that, for any homogeneous elements a , b {\displaystyle a,b} of the exterior algebra of E {\displaystyle E} , d ( a ∧ b ) = ( d a ) ∧ b + ( − 1 ) deg ⁡ a a ∧ ( d b ) {\displaystyle d(a\wedge b)=(da)\wedge b+(-1)^{\deg a}a\wedge (db)} ) of degree 1 on the exterior algebra of E {\displaystyle E} : d : ⋀ ∙ E → ⋀ ∙ + 1 E . {\displaystyle d\colon \bigwedge ^{\bullet }E\rightarrow \bigwedge ^{\bullet +1}E.} Then the pair ( E , d ) {\displaystyle (E,d)} is said to be a Lie coalgebra if d 2 = 0 {\displaystyle d^{2}=0} , i.e., if the graded components of the exterior algebra with derivation ( ⋀ ∗ E , d ) {\textstyle (\bigwedge ^{*}E,d)} form a cochain complex: E → d E ∧ E → d ⋀ 3 E → d ⋯ {\displaystyle E\ \xrightarrow {d} \ E\wedge E\ \xrightarrow {d} \ \bigwedge ^{3}E\xrightarrow {d} \ \cdots } === Relation to de Rham complex === Just as the exterior algebra (and tensor algebra) of vector fields on a manifold forms a Lie algebra (over the base field k {\displaystyle \mathbb {k} } ), the de Rham complex of differential forms on a manifold forms a Lie coalgebra (over the base field k {\displaystyle \mathbb {k} } ). Further, there is a pairing between vector fields and differential forms. However, the situation is subtler: the Lie bracket is not linear over the algebra of smooth functions C ∞ ( M ) {\displaystyle C^{\infty }(M)} (the error is the Lie derivative), nor is the exterior derivative: d ( f g ) = ( d f ) g + f ( d g ) ≠ f ( d g ) {\displaystyle d(fg)=(df)g+f(dg)\neq f(dg)} (it is a derivation, not linear over functions): they are not tensors. They are not linear over functions, but they behave in a consistent way, which is not captured simply by the notion of Lie algebra and Lie coalgebra. Further, in the de Rham complex, the derivation is not only defined for Ω 1 → Ω 2 {\displaystyle \Omega ^{1}\to \Omega ^{2}} , but is also defined for C ∞ ( M ) → Ω 1 ( M ) {\displaystyle C^{\infty }(M)\to \Omega ^{1}(M)} . == The Lie algebra on the dual == A Lie algebra structure on a vector space is a map [ ⋅ , ⋅ ] : g × g → g {\displaystyle [\cdot ,\cdot ]\colon {\mathfrak {g}}\times {\mathfrak {g}}\to {\mathfrak {g}}} which is skew-symmetric, and satisfies the Jacobi identity. Equivalently, a map [ ⋅ , ⋅ ] : g ∧ g → g {\displaystyle [\cdot ,\cdot ]\colon {\mathfrak {g}}\wedge {\mathfrak {g}}\to {\mathfrak {g}}} that satisfies the Jacobi identity.
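A concrete instance of the Lie algebra axioms just stated (an illustration added here, not from the original article) is R3 with the cross product. The following minimal sketch, assuming numpy, checks skew-symmetry and the Jacobi identity on random vectors:

    # Sketch: (R^3, cross product) is a Lie algebra.
    import numpy as np

    rng = np.random.default_rng(1)
    x, y, z = rng.standard_normal((3, 3))        # three random vectors in R^3
    assert np.allclose(np.cross(x, y), -np.cross(y, x))          # skew-symmetry
    jacobi = (np.cross(np.cross(x, y), z)
              + np.cross(np.cross(y, z), x)
              + np.cross(np.cross(z, x), y))
    assert np.allclose(jacobi, 0)                                # Jacobi identity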
Dually, a Lie coalgebra structure on a vector space E is a linear map d : E → E ⊗ E {\displaystyle d\colon E\to E\otimes E} which is antisymmetric (this means that it satisfies τ ∘ d = − d {\displaystyle \tau \circ d=-d} , where τ {\displaystyle \tau } is the canonical flip E ⊗ E → E ⊗ E {\displaystyle E\otimes E\to E\otimes E} ) and satisfies the so-called cocycle condition (also known as the co-Leibniz rule) ( d ⊗ i d ) ∘ d = ( i d ⊗ d ) ∘ d + ( i d ⊗ τ ) ∘ ( d ⊗ i d ) ∘ d {\displaystyle \left(d\otimes \mathrm {id} \right)\circ d=\left(\mathrm {id} \otimes d\right)\circ d+\left(\mathrm {id} \otimes \tau \right)\circ \left(d\otimes \mathrm {id} \right)\circ d} . Due to the antisymmetry condition, the map d : E → E ⊗ E {\displaystyle d\colon E\to E\otimes E} can also be written as a map d : E → E ∧ E {\displaystyle d\colon E\to E\wedge E} . The dual of the Lie bracket of a Lie algebra g {\displaystyle {\mathfrak {g}}} yields a map (the cocommutator) [ ⋅ , ⋅ ] ∗ : g ∗ → ( g ∧ g ) ∗ ≅ g ∗ ∧ g ∗ {\displaystyle [\cdot ,\cdot ]^{*}\colon {\mathfrak {g}}^{*}\to ({\mathfrak {g}}\wedge {\mathfrak {g}})^{*}\cong {\mathfrak {g}}^{*}\wedge {\mathfrak {g}}^{*}} where the isomorphism ≅ {\displaystyle \cong } holds in finite dimension; dually for the dual of Lie comultiplication. In this context, the Jacobi identity corresponds to the cocycle condition. More explicitly, let E {\displaystyle E} be a Lie coalgebra over a field of characteristic neither 2 nor 3. The dual space E ∗ {\displaystyle E^{*}} carries the structure of a bracket defined by α ( [ x , y ] ) = d α ( x ∧ y ) {\displaystyle \alpha ([x,y])=d\alpha (x\wedge y)} , for all α ∈ E {\displaystyle \alpha \in E} and x , y ∈ E ∗ {\displaystyle x,y\in E^{*}} . We show that this endows E ∗ {\displaystyle E^{*}} with a Lie bracket. It suffices to check the Jacobi identity. For any x , y , z ∈ E ∗ {\displaystyle x,y,z\in E^{*}} and α ∈ E {\displaystyle \alpha \in E} , d 2 α ( x ∧ y ∧ z ) = 1 3 d 2 α ( x ∧ y ∧ z + y ∧ z ∧ x + z ∧ x ∧ y ) = 1 3 ( d α ( [ x , y ] ∧ z ) + d α ( [ y , z ] ∧ x ) + d α ( [ z , x ] ∧ y ) ) , {\displaystyle {\begin{aligned}d^{2}\alpha (x\wedge y\wedge z)&={\frac {1}{3}}d^{2}\alpha (x\wedge y\wedge z+y\wedge z\wedge x+z\wedge x\wedge y)\\&={\frac {1}{3}}\left(d\alpha ([x,y]\wedge z)+d\alpha ([y,z]\wedge x)+d\alpha ([z,x]\wedge y)\right),\end{aligned}}} where the latter step follows from the standard identification of the dual of a wedge product with the wedge product of the duals. Finally, this gives d 2 α ( x ∧ y ∧ z ) = 1 3 ( α ( [ [ x , y ] , z ] ) + α ( [ [ y , z ] , x ] ) + α ( [ [ z , x ] , y ] ) ) . {\displaystyle d^{2}\alpha (x\wedge y\wedge z)={\frac {1}{3}}\left(\alpha ([[x,y],z])+\alpha ([[y,z],x])+\alpha ([[z,x],y])\right).} Since d 2 = 0 {\displaystyle d^{2}=0} , it follows that α ( [ [ x , y ] , z ] + [ [ y , z ] , x ] + [ [ z , x ] , y ] ) = 0 {\displaystyle \alpha ([[x,y],z]+[[y,z],x]+[[z,x],y])=0} , for any α {\displaystyle \alpha } , x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} . Thus, by the double-duality isomorphism (more precisely, by the double-duality monomorphism, since the vector space need not be finite-dimensional), the Jacobi identity is satisfied. In particular, note that this proof demonstrates that the condition d 2 = 0 {\displaystyle d^{2}=0} is in a sense dual to the Jacobi identity. == References == Michaelis, Walter (1980), "Lie coalgebras", Advances in Mathematics, 38 (1): 1–54, doi:10.1016/0001-8708(80)90056-0, ISSN 0001-8708, MR 0594993
Wikipedia/Lie_coalgebra
In the mathematical field of Lie theory, the radical of a Lie algebra g {\displaystyle {\mathfrak {g}}} is the largest solvable ideal of g . {\displaystyle {\mathfrak {g}}.} The radical, denoted by r a d ( g ) {\displaystyle {\rm {rad}}({\mathfrak {g}})} , fits into the exact sequence 0 → r a d ( g ) → g → g / r a d ( g ) → 0 {\displaystyle 0\to {\rm {rad}}({\mathfrak {g}})\to {\mathfrak {g}}\to {\mathfrak {g}}/{\rm {rad}}({\mathfrak {g}})\to 0} , where g / r a d ( g ) {\displaystyle {\mathfrak {g}}/{\rm {rad}}({\mathfrak {g}})} is semisimple. When the ground field has characteristic zero and g {\displaystyle {\mathfrak {g}}} has finite dimension, Levi's theorem states that this exact sequence splits; i.e., there exists a (necessarily semisimple) subalgebra of g {\displaystyle {\mathfrak {g}}} that is isomorphic to the semisimple quotient g / r a d ( g ) {\displaystyle {\mathfrak {g}}/{\rm {rad}}({\mathfrak {g}})} via the restriction of the quotient map g → g / r a d ( g ) . {\displaystyle {\mathfrak {g}}\to {\mathfrak {g}}/{\rm {rad}}({\mathfrak {g}}).} A similar notion is a Borel subalgebra, which is a (not necessarily unique) maximal solvable subalgebra. == Definition == Let k {\displaystyle k} be a field and let g {\displaystyle {\mathfrak {g}}} be a finite-dimensional Lie algebra over k {\displaystyle k} . There exists a unique maximal solvable ideal, called the radical, for the following reason. First, let a {\displaystyle {\mathfrak {a}}} and b {\displaystyle {\mathfrak {b}}} be two solvable ideals of g {\displaystyle {\mathfrak {g}}} . Then a + b {\displaystyle {\mathfrak {a}}+{\mathfrak {b}}} is again an ideal of g {\displaystyle {\mathfrak {g}}} , and it is solvable because it is an extension of ( a + b ) / a ≃ b / ( a ∩ b ) {\displaystyle ({\mathfrak {a}}+{\mathfrak {b}})/{\mathfrak {a}}\simeq {\mathfrak {b}}/({\mathfrak {a}}\cap {\mathfrak {b}})} by a {\displaystyle {\mathfrak {a}}} . Now consider the sum of all the solvable ideals of g {\displaystyle {\mathfrak {g}}} . It is nonempty since { 0 } {\displaystyle \{0\}} is a solvable ideal, and it is a solvable ideal by the sum property just derived. Clearly it is the unique maximal solvable ideal. == Related concepts == A Lie algebra is semisimple if and only if its radical is 0 {\displaystyle 0} . A Lie algebra is reductive if and only if its radical equals its center. == See also == Levi decomposition == References ==
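The notion of solvability used in the definition above can be made concrete. The following numerical sketch (an illustration added here, not from the original article; numpy is assumed) computes the derived series of the Lie algebra of 3x3 upper triangular matrices and watches its dimension drop to zero, which is exactly solvability:

    # Sketch: the 3x3 upper triangular matrices form a solvable Lie algebra
    # (spans of brackets are extracted with an SVD).
    import numpy as np
    from itertools import product

    def E(i, j):
        m = np.zeros((3, 3)); m[i, j] = 1.0; return m

    basis = [E(i, j) for i in range(3) for j in range(i, 3)]   # dimension 6

    def derived(bas):
        # return a basis of the span of all brackets [a, b] with a, b in bas
        rows = np.array([(a @ b - b @ a).ravel() for a, b in product(bas, repeat=2)])
        _, s, vt = np.linalg.svd(rows)
        return [vt[k].reshape(3, 3) for k in range(int((s > 1e-10).sum()))]

    dims, cur = [], basis
    while cur:
        dims.append(len(cur))
        cur = derived(cur)
    print(dims)   # [6, 3, 1]; the series then hits 0, so the algebra is solvable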
Wikipedia/Radical_of_a_Lie_algebra
In mathematics, an infinitesimal transformation is a limiting form of small transformation. For example one may talk about an infinitesimal rotation of a rigid body, in three-dimensional space. This is conventionally represented by a 3×3 skew-symmetric matrix A. It is not the matrix of an actual rotation in space; but for small real values of a parameter ε the transformation T = I + ε A {\displaystyle T=I+\varepsilon A} is a small rotation, up to quantities of order ε2. == History == A comprehensive theory of infinitesimal transformations was first given by Sophus Lie. This was at the heart of his work, on what are now called Lie groups and their accompanying Lie algebras; and the identification of their role in geometry and especially the theory of differential equations. The properties of an abstract Lie algebra are exactly those definitive of infinitesimal transformations, just as the axioms of group theory embody symmetry. The term "Lie algebra" was introduced in 1934 by Hermann Weyl, for what had until then been known as the algebra of infinitesimal transformations of a Lie group. == Examples == For example, in the case of infinitesimal rotations, the Lie algebra structure is that provided by the cross product, once a skew-symmetric matrix has been identified with a 3-vector. This amounts to choosing an axis vector for the rotations; the defining Jacobi identity is a well-known property of cross products. The earliest example of an infinitesimal transformation that may have been recognised as such was in Euler's theorem on homogeneous functions. Here it is stated that a function F of n variables x1, ..., xn that is homogeneous of degree r, satisfies Θ F = r F {\displaystyle \Theta F=rF\,} with Θ = ∑ i x i ∂ ∂ x i , {\displaystyle \Theta =\sum _{i}x_{i}{\partial \over \partial x_{i}},} the Theta operator. That is, from the property F ( λ x 1 , … , λ x n ) = λ r F ( x 1 , … , x n ) {\displaystyle F(\lambda x_{1},\dots ,\lambda x_{n})=\lambda ^{r}F(x_{1},\dots ,x_{n})\,} it is possible to differentiate with respect to λ and then set λ equal to 1. This then becomes a necessary condition on a smooth function F to have the homogeneity property; it is also sufficient (by using Schwartz distributions one can reduce the mathematical analysis considerations here). This setting is typical, in that there is a one-parameter group of scalings operating; and the information is coded in an infinitesimal transformation that is a first-order differential operator. == Operator version of Taylor's theorem == The operator equation e t D f ( x ) = f ( x + t ) {\displaystyle e^{tD}f(x)=f(x+t)\,} where D = d d x {\displaystyle D={d \over dx}} is an operator version of Taylor's theorem — and is therefore only valid under caveats about f being an analytic function. Concentrating on the operator part, it shows that D is an infinitesimal transformation, generating translations of the real line via the exponential. In Lie's theory, this is generalised a long way. Any connected Lie group can be built up by means of its infinitesimal generators (a basis for the Lie algebra of the group); with explicit if not always useful information given in the Baker–Campbell–Hausdorff formula. == References == "Lie algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Sophus Lie (1893) Vorlesungen über Continuierliche Gruppen, English translation by D.H. Delphenich, §8, link from Neo-classical Physics.
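Both formulas in this article lend themselves to a direct symbolic check. The sketch below (added here for illustration, not part of the original text; sympy and hand-picked polynomials are assumed) verifies Euler's homogeneous function theorem for a degree-3 polynomial, and confirms that the truncated series for e^{tD} reproduces the translation f(x + t) exactly on polynomials:

    # Sketch: theta operator on a homogeneous polynomial, and e^{tD} as translation.
    import sympy as sp

    x, y, t = sp.symbols('x y t')
    F = x**2*y + 3*x*y**2                                  # homogeneous of degree 3
    theta_F = x*sp.diff(F, x) + y*sp.diff(F, y)
    assert sp.expand(theta_F - 3*F) == 0                   # Theta F = r F with r = 3

    f = x**3 - 2*x + 5                                     # finite Taylor series
    etD_f = sum(t**k/sp.factorial(k) * sp.diff(f, x, k) for k in range(4))
    assert sp.expand(etD_f - f.subs(x, x + t)) == 0        # e^{tD} f(x) = f(x + t)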
Wikipedia/Infinitesimal_transformation
In mathematics, the name symplectic group can refer to two different, but closely related, collections of mathematical groups, denoted Sp(2n, F) and Sp(n) for positive integer n and field F (usually C or R). The latter is called the compact symplectic group and is also denoted by U S p ( n ) {\displaystyle \mathrm {USp} (n)} . Many authors prefer slightly different notations, usually differing by factors of 2. The notation used here is consistent with the size of the most common matrices which represent the groups. In Cartan's classification of the simple Lie algebras, the Lie algebra of the complex group Sp(2n, C) is denoted Cn, and Sp(n) is the compact real form of Sp(2n, C). Note that when we refer to the (compact) symplectic group it is implied that we are talking about the collection of (compact) symplectic groups, indexed by their dimension n. The name "symplectic group" was coined by Hermann Weyl as a replacement for the previous confusing names (line) complex group and Abelian linear group, and is the Greek analog of "complex". The metaplectic group is a double cover of the symplectic group over R; it has analogues over other local fields, finite fields, and adele rings. == Sp(2n, F) == The symplectic group is a classical group defined as the set of linear transformations of a 2n-dimensional vector space over the field F which preserve a non-degenerate skew-symmetric bilinear form. Such a vector space is called a symplectic vector space, and the symplectic group of an abstract symplectic vector space V is denoted Sp(V). Upon fixing a basis for V, the symplectic group becomes the group of 2n × 2n symplectic matrices, with entries in F, under the operation of matrix multiplication. This group is denoted either Sp(2n, F) or Sp(n, F). If the bilinear form is represented by the nonsingular skew-symmetric matrix Ω, then Sp ⁡ ( 2 n , F ) = { M ∈ M 2 n × 2 n ( F ) : M T Ω M = Ω } , {\displaystyle \operatorname {Sp} (2n,F)=\{M\in M_{2n\times 2n}(F):M^{\mathrm {T} }\Omega M=\Omega \},} where MT is the transpose of M. Often Ω is defined to be Ω = ( 0 I n − I n 0 ) , {\displaystyle \Omega ={\begin{pmatrix}0&I_{n}\\-I_{n}&0\\\end{pmatrix}},} where In is the identity matrix. In this case, Sp(2n, F) can be expressed as those block matrices ( A B C D ) {\displaystyle ({\begin{smallmatrix}A&B\\C&D\end{smallmatrix}})} , where A , B , C , D ∈ M n × n ( F ) {\displaystyle A,B,C,D\in M_{n\times n}(F)} , satisfying the three equations: − C T A + A T C = 0 , − C T B + A T D = I n , − D T B + B T D = 0. {\displaystyle {\begin{aligned}-C^{\mathrm {T} }A+A^{\mathrm {T} }C&=0,\\-C^{\mathrm {T} }B+A^{\mathrm {T} }D&=I_{n},\\-D^{\mathrm {T} }B+B^{\mathrm {T} }D&=0.\end{aligned}}} Since all symplectic matrices have determinant 1, the symplectic group is a subgroup of the special linear group SL(2n, F). When n = 1, the symplectic condition on a matrix is satisfied if and only if the determinant is one, so that Sp(2, F) = SL(2, F). For n > 1, there are additional conditions, i.e. Sp(2n, F) is then a proper subgroup of SL(2n, F). Typically, the field F is the field of real numbers R or complex numbers C. In these cases Sp(2n, F) is a real or complex Lie group of real or complex dimension n(2n + 1), respectively. These groups are connected but non-compact. The center of Sp(2n, F) consists of the matrices I2n and −I2n as long as the characteristic of the field is not 2. Since the center of Sp(2n, F) is discrete and its quotient modulo the center is a simple group, Sp(2n, F) is considered a simple Lie group. 
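The claim above that Sp(2, F) = SL(2, F) can be verified symbolically. A minimal sketch (an illustration added here, assuming sympy and the standard choice of Ω): for a general 2 x 2 matrix M, the product M^T Ω M equals (det M) Ω identically, so the symplectic condition holds precisely when det M = 1.

    # Sketch: in the 2x2 case, M^T.Omega.M = (det M).Omega identically.
    import sympy as sp

    a, b, c, d = sp.symbols('a b c d')
    M = sp.Matrix([[a, b], [c, d]])
    Omega = sp.Matrix([[0, 1], [-1, 0]])
    assert (M.T * Omega * M - M.det() * Omega).expand() == sp.zeros(2, 2)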
The real rank of the corresponding Lie algebra, and hence of the Lie group Sp(2n, F), is n. The Lie algebra of Sp(2n, F) is the set s p ( 2 n , F ) = { X ∈ M 2 n × 2 n ( F ) : Ω X + X T Ω = 0 } , {\displaystyle {\mathfrak {sp}}(2n,F)=\{X\in M_{2n\times 2n}(F):\Omega X+X^{\mathrm {T} }\Omega =0\},} equipped with the commutator as its Lie bracket. For the standard skew-symmetric bilinear form Ω = ( 0 I − I 0 ) {\displaystyle \Omega =({\begin{smallmatrix}0&I\\-I&0\end{smallmatrix}})} , this Lie algebra is the set of all block matrices ( A B C D ) {\displaystyle ({\begin{smallmatrix}A&B\\C&D\end{smallmatrix}})} subject to the conditions A = − D T , B = B T , C = C T . {\displaystyle {\begin{aligned}A&=-D^{\mathrm {T} },\\B&=B^{\mathrm {T} },\\C&=C^{\mathrm {T} }.\end{aligned}}} === Sp(2n, C) === The symplectic group over the field of complex numbers is a non-compact, simply connected, simple Lie group. Its definition involves no complex conjugation (contrary to what one might naively expect); it is exactly the same as the real definition, apart from the change of field. === Sp(2n, R) === Sp(2n, C) is the complexification of the real group Sp(2n, R). Sp(2n, R) is a real, non-compact, connected, simple Lie group. It has a fundamental group isomorphic to the group of integers under addition. As a real form of a simple Lie group, its Lie algebra is a splittable Lie algebra. Some further properties of Sp(2n, R): The exponential map from the Lie algebra sp(2n, R) to the group Sp(2n, R) is not surjective. However, any element of the group can be represented as the product of two exponentials. In other words, ∀ S ∈ Sp ⁡ ( 2 n , R ) ∃ X , Y ∈ s p ( 2 n , R ) S = e X e Y . {\displaystyle \forall S\in \operatorname {Sp} (2n,\mathbf {R} )\,\,\exists X,Y\in {\mathfrak {sp}}(2n,\mathbf {R} )\,\,S=e^{X}e^{Y}.} For all S in Sp(2n, R): S = O Z O ′ such that O , O ′ ∈ Sp ⁡ ( 2 n , R ) ∩ SO ⁡ ( 2 n ) ≅ U ( n ) and Z = ( D 0 0 D − 1 ) . {\displaystyle S=OZO'\quad {\text{such that}}\quad O,O'\in \operatorname {Sp} (2n,\mathbf {R} )\cap \operatorname {SO} (2n)\cong U(n)\quad {\text{and}}\quad Z={\begin{pmatrix}D&0\\0&D^{-1}\end{pmatrix}}.} The matrix D is positive-definite and diagonal. The set of such Zs forms a non-compact subgroup of Sp(2n, R) whereas U(n) forms a compact subgroup. This decomposition is known as 'Euler' or 'Bloch–Messiah' decomposition. Further properties of symplectic matrices can be found in the article on symplectic matrices. As a Lie group, Sp(2n, R) has a manifold structure. The manifold for Sp(2n, R) is diffeomorphic to the Cartesian product of the unitary group U(n) with a vector space of dimension n(n+1). === Infinitesimal generators === The members of the symplectic Lie algebra sp(2n, F) are the Hamiltonian matrices. These are the matrices Q {\displaystyle Q} such that Q = ( A B C − A T ) {\displaystyle Q={\begin{pmatrix}A&B\\C&-A^{\mathrm {T} }\end{pmatrix}}} where B and C are symmetric matrices. See classical group for a derivation. === Example of symplectic matrices === For Sp(2, R), the group of 2 × 2 matrices with determinant 1, the three symplectic (0, 1)-matrices are: ( 1 0 0 1 ) , ( 1 0 1 1 ) and ( 1 1 0 1 ) . {\displaystyle {\begin{pmatrix}1&0\\0&1\end{pmatrix}},\quad {\begin{pmatrix}1&0\\1&1\end{pmatrix}}\quad {\text{and}}\quad {\begin{pmatrix}1&1\\0&1\end{pmatrix}}.} ==== Sp(2n, R) ==== It turns out that Sp ⁡ ( 2 n , R ) {\displaystyle \operatorname {Sp} (2n,\mathbf {R} )} has a fairly explicit description using generators.
If we let Sym ⁡ ( n ) {\displaystyle \operatorname {Sym} (n)} denote the symmetric n × n {\displaystyle n\times n} matrices, then Sp ⁡ ( 2 n , R ) {\displaystyle \operatorname {Sp} (2n,\mathbf {R} )} is generated by D ( n ) ∪ N ( n ) ∪ { Ω } , {\displaystyle D(n)\cup N(n)\cup \{\Omega \},} where D ( n ) = { [ A 0 0 ( A T ) − 1 ] | A ∈ GL ⁡ ( n , R ) } N ( n ) = { [ I n B 0 I n ] | B ∈ Sym ⁡ ( n ) } {\displaystyle {\begin{aligned}D(n)&=\left\{\left.{\begin{bmatrix}A&0\\0&(A^{T})^{-1}\end{bmatrix}}\,\right|\,A\in \operatorname {GL} (n,\mathbf {R} )\right\}\\[6pt]N(n)&=\left\{\left.{\begin{bmatrix}I_{n}&B\\0&I_{n}\end{bmatrix}}\,\right|\,B\in \operatorname {Sym} (n)\right\}\end{aligned}}} are subgroups of Sp ⁡ ( 2 n , R ) {\displaystyle \operatorname {Sp} (2n,\mathbf {R} )} . === Relationship with symplectic geometry === Symplectic geometry is the study of symplectic manifolds. The tangent space at any point on a symplectic manifold is a symplectic vector space. As noted earlier, structure-preserving transformations of a symplectic vector space form a group, and this group is Sp(2n, F), depending on the dimension of the space and the field over which it is defined. A symplectic vector space is itself a symplectic manifold. A transformation under an action of the symplectic group is thus, in a sense, a linearised version of a symplectomorphism, which is a more general structure-preserving transformation on a symplectic manifold. == Sp(n) == The compact symplectic group Sp(n) is the intersection of Sp(2n, C) with the 2 n × 2 n {\displaystyle 2n\times 2n} unitary group: Sp ⁡ ( n ) := Sp ⁡ ( 2 n ; C ) ∩ U ⁡ ( 2 n ) = Sp ⁡ ( 2 n ; C ) ∩ SU ⁡ ( 2 n ) . {\displaystyle \operatorname {Sp} (n):=\operatorname {Sp} (2n;\mathbf {C} )\cap \operatorname {U} (2n)=\operatorname {Sp} (2n;\mathbf {C} )\cap \operatorname {SU} (2n).} It is sometimes written as USp(2n). Alternatively, Sp(n) can be described as the subgroup of GL(n, H) (invertible quaternionic matrices) that preserves the standard hermitian form on Hn: ⟨ x , y ⟩ = x ¯ 1 y 1 + ⋯ + x ¯ n y n . {\displaystyle \langle x,y\rangle ={\bar {x}}_{1}y_{1}+\cdots +{\bar {x}}_{n}y_{n}.} That is, Sp(n) is just the quaternionic unitary group, U(n, H). Indeed, it is sometimes called the hyperunitary group. Also, Sp(1) is the group of quaternions of norm 1, equivalent to SU(2) and topologically a 3-sphere S3. Note that Sp(n) is not a symplectic group in the sense of the previous section—it does not preserve a non-degenerate skew-symmetric H-bilinear form on Hn: there is no such form except the zero form. Rather, it is isomorphic to a subgroup of Sp(2n, C), and so does preserve a complex symplectic form in a vector space of twice the dimension. As explained below, the Lie algebra of Sp(n) is the compact real form of the complex symplectic Lie algebra sp(2n, C). Sp(n) is a real Lie group with (real) dimension n(2n + 1). It is compact and simply connected. The Lie algebra of Sp(n) is given by the quaternionic skew-Hermitian matrices, the set of n-by-n quaternionic matrices that satisfy A + A † = 0 {\displaystyle A+A^{\dagger }=0} where A† is the conjugate transpose of A (here one takes the quaternionic conjugate). The Lie bracket is given by the commutator.
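Returning to the defining condition of the symplectic Lie algebra given earlier, here is a numerical sketch (an illustration added here, assuming numpy/scipy and the standard Ω): a block matrix X = [[A, B], [C, -A^T]] with B, C symmetric satisfies ΩX + X^TΩ = 0, and its exponential is a symplectic matrix.

    # Sketch: membership in sp(2n, R), and exp mapping sp into Sp.
    import numpy as np
    from scipy.linalg import expm

    n = 3
    rng = np.random.default_rng(2)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n)); B = B + B.T           # symmetric
    C = rng.standard_normal((n, n)); C = C + C.T           # symmetric
    X = np.block([[A, B], [C, -A.T]])
    Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-np.eye(n), np.zeros((n, n))]])

    assert np.allclose(Omega @ X + X.T @ Omega, 0)         # X lies in sp(2n, R)
    S = expm(X)
    assert np.allclose(S.T @ Omega @ S, Omega)             # exp(X) lies in Sp(2n, R)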
=== Important subgroups === Some main subgroups are: Sp ⁡ ( n ) ⊃ Sp ⁡ ( n − 1 ) {\displaystyle \operatorname {Sp} (n)\supset \operatorname {Sp} (n-1)} Sp ⁡ ( n ) ⊃ U ⁡ ( n ) {\displaystyle \operatorname {Sp} (n)\supset \operatorname {U} (n)} Sp ⁡ ( 2 ) ⊃ O ⁡ ( 4 ) {\displaystyle \operatorname {Sp} (2)\supset \operatorname {O} (4)} Conversely, it is itself a subgroup of some other groups: SU ⁡ ( 2 n ) ⊃ Sp ⁡ ( n ) {\displaystyle \operatorname {SU} (2n)\supset \operatorname {Sp} (n)} F 4 ⊃ Sp ⁡ ( 4 ) {\displaystyle \operatorname {F} _{4}\supset \operatorname {Sp} (4)} G 2 ⊃ Sp ⁡ ( 1 ) {\displaystyle \operatorname {G} _{2}\supset \operatorname {Sp} (1)} There are also the isomorphisms of the Lie algebras sp(2) = so(5) and sp(1) = so(3) = su(2). == Relationship between the symplectic groups == Every complex semisimple Lie algebra has a split real form and a compact real form; the complex algebra is called the complexification of either of these two real forms. The Lie algebra of Sp(2n, C) is semisimple and is denoted sp(2n, C). Its split real form is sp(2n, R) and its compact real form is sp(n). These correspond to the Lie groups Sp(2n, R) and Sp(n), respectively. The algebras sp(p, n − p), which are the Lie algebras of the groups Sp(p, n − p), are the indefinite-signature analogues of the compact form. == Physical significance == === Classical mechanics === The non-compact symplectic group Sp(2n, R) comes up in classical physics as the symmetries of canonical coordinates preserving the Poisson bracket. Consider a system of n particles evolving under Hamilton's equations, whose position in phase space at a given time is denoted by the vector of canonical coordinates, z = ( q 1 , … , q n , p 1 , … , p n ) T . {\displaystyle \mathbf {z} =(q^{1},\ldots ,q^{n},p_{1},\ldots ,p_{n})^{\mathrm {T} }.} The elements of the group Sp(2n, R) are, in a certain sense, canonical transformations on this vector, i.e. they preserve the form of Hamilton's equations. If Z = Z ( z , t ) = ( Q 1 , … , Q n , P 1 , … , P n ) T {\displaystyle \mathbf {Z} =\mathbf {Z} (\mathbf {z} ,t)=(Q^{1},\ldots ,Q^{n},P_{1},\ldots ,P_{n})^{\mathrm {T} }} are new canonical coordinates, then, with a dot denoting time derivative, Z ˙ = M ( z , t ) z ˙ , {\displaystyle {\dot {\mathbf {Z} }}=M({\mathbf {z} },t){\dot {\mathbf {z} }},} where M ( z , t ) ∈ Sp ⁡ ( 2 n , R ) {\displaystyle M(\mathbf {z} ,t)\in \operatorname {Sp} (2n,\mathbf {R} )} for all t and all z in phase space. For the special case of a Riemannian manifold, Hamilton's equations describe the geodesics on that manifold. The coordinates q i {\displaystyle q^{i}} live on the underlying manifold, and the momenta p i {\displaystyle p_{i}} live in the cotangent bundle. This is the reason why these are conventionally written with upper and lower indexes; it is to distinguish their locations. The corresponding Hamiltonian consists purely of the kinetic energy: it is H = 1 2 g i j ( q ) p i p j {\displaystyle H={\tfrac {1}{2}}g^{ij}(q)p_{i}p_{j}} where g i j {\displaystyle g^{ij}} is the inverse of the metric tensor g i j {\displaystyle g_{ij}} on the Riemannian manifold. In fact, the cotangent bundle of any smooth manifold can be given a symplectic structure in a canonical way, with the symplectic form defined as the exterior derivative of the tautological one-form. === Quantum mechanics === Consider a system of n particles whose quantum state encodes its position and momentum. These coordinates are continuous variables and hence the Hilbert space, in which the state lives, is infinite-dimensional.
This often makes the analysis of this situation tricky. An alternative approach is to consider the evolution of the position and momentum operators under the Heisenberg equation in phase space. Construct a vector of canonical coordinates, z ^ = ( q ^ 1 , … , q ^ n , p ^ 1 , … , p ^ n ) T . {\displaystyle \mathbf {\hat {z}} =({\hat {q}}^{1},\ldots ,{\hat {q}}^{n},{\hat {p}}_{1},\ldots ,{\hat {p}}_{n})^{\mathrm {T} }.} The canonical commutation relation can be expressed simply as [ z ^ , z ^ T ] = i ℏ Ω {\displaystyle [\mathbf {\hat {z}} ,\mathbf {\hat {z}} ^{\mathrm {T} }]=i\hbar \Omega } where Ω = ( 0 I n − I n 0 ) {\displaystyle \Omega ={\begin{pmatrix}\mathbf {0} &I_{n}\\-I_{n}&\mathbf {0} \end{pmatrix}}} and In is the n × n identity matrix. Many physical situations only require quadratic Hamiltonians, i.e. Hamiltonians of the form H ^ = 1 2 z ^ T K z ^ {\displaystyle {\hat {H}}={\frac {1}{2}}\mathbf {\hat {z}} ^{\mathrm {T} }K\mathbf {\hat {z}} } where K is a 2n × 2n real, symmetric matrix. This turns out to be a useful restriction and allows us to rewrite the Heisenberg equation as d z ^ d t = Ω K z ^ {\displaystyle {\frac {d\mathbf {\hat {z}} }{dt}}=\Omega K\mathbf {\hat {z}} } The solution to this equation must preserve the canonical commutation relation. It can be shown that the time evolution of this system is equivalent to an action of the real symplectic group, Sp(2n, R), on the phase space. == See also == Hamiltonian mechanics Metaplectic group Orthogonal group Paramodular group Projective unitary group Representations of classical Lie groups Symplectic manifold, Symplectic matrix, Symplectic vector space, Symplectic representation Unitary group Θ10 == Notes == == References == Arnold, V. I. (1989), Mathematical Methods of Classical Mechanics, Graduate Texts in Mathematics, vol. 60 (second ed.), Springer-Verlag, ISBN 0-387-96890-3 Hall, Brian C. (2015), Lie groups, Lie algebras, and representations: An elementary introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666 Fulton, W.; Harris, J. (1991), Representation Theory, A first Course, Graduate Texts in Mathematics, vol. 129, Springer-Verlag, ISBN 978-0-387-97495-8. Goldstein, H. (1980) [1950]. "Chapter 7". Classical Mechanics (2nd ed.). Reading MA: Addison-Wesley. ISBN 0-201-02918-9. Lee, J. M. (2003), Introduction to Smooth manifolds, Graduate Texts in Mathematics, vol. 218, Springer-Verlag, ISBN 0-387-95448-1 Rossmann, Wulf (2002), Lie Groups – An Introduction Through Linear Groups, Oxford Graduate Texts in Mathematics, Oxford Science Publications, ISBN 0-19-859683-9 Ferraro, Alessandro; Olivares, Stefano; Paris, Matteo G. A. (March 2005), "Gaussian states in continuous variable quantum information", arXiv:quant-ph/0503237.
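The claim above, that quadratic Hamiltonians generate symplectic time evolution, can be checked numerically. A minimal sketch (added here for illustration; the symmetric K is a made-up example, and numpy/scipy are assumed): the solution of dz/dt = ΩKz is z(t) = exp(tΩK) z(0), and exp(tΩK) preserves Ω.

    # Sketch: exp(t.Omega.K) is symplectic for symmetric K.
    import numpy as np
    from scipy.linalg import expm

    n, t = 2, 0.7
    rng = np.random.default_rng(3)
    K = rng.standard_normal((2*n, 2*n)); K = K + K.T       # symmetric Hamiltonian matrix
    Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-np.eye(n), np.zeros((n, n))]])
    S = expm(t * Omega @ K)
    assert np.allclose(S.T @ Omega @ S, Omega)             # evolution lies in Sp(2n, R)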
Wikipedia/Symplectic_Lie_algebra
In mathematics, a Lie bialgebra is the Lie-theoretic case of a bialgebra: it is a set with a Lie algebra and a Lie coalgebra structure which are compatible. It is a bialgebra where the comultiplication is skew-symmetric and satisfies a dual Jacobi identity, so that the dual vector space is a Lie algebra, and where the comultiplication is moreover a 1-cocycle, so that the multiplication and comultiplication are compatible. The cocycle condition implies that, in practice, one studies only classes of bialgebras that are cohomologous to a Lie bialgebra on a coboundary. They are also called Poisson–Hopf algebras, and are the Lie algebras of Poisson–Lie groups. Lie bialgebras occur naturally in the study of the Yang–Baxter equations. == Definition == A vector space g {\displaystyle {\mathfrak {g}}} is a Lie bialgebra if it is a Lie algebra, and there is the structure of a Lie algebra also on the dual vector space g ∗ {\displaystyle {\mathfrak {g}}^{*}} which is compatible. More precisely, the Lie algebra structure on g {\displaystyle {\mathfrak {g}}} is given by a Lie bracket [ , ] : g ⊗ g → g {\displaystyle [\ ,\ ]:{\mathfrak {g}}\otimes {\mathfrak {g}}\to {\mathfrak {g}}} and the Lie algebra structure on g ∗ {\displaystyle {\mathfrak {g}}^{*}} is given by a Lie bracket δ ∗ : g ∗ ⊗ g ∗ → g ∗ {\displaystyle \delta ^{*}:{\mathfrak {g}}^{*}\otimes {\mathfrak {g}}^{*}\to {\mathfrak {g}}^{*}} . Then the map dual to δ ∗ {\displaystyle \delta ^{*}} is called the cocommutator, δ : g → g ⊗ g {\displaystyle \delta :{\mathfrak {g}}\to {\mathfrak {g}}\otimes {\mathfrak {g}}} and the compatibility condition is the following cocycle relation: δ ( [ X , Y ] ) = ( ad X ⊗ 1 + 1 ⊗ ad X ) δ ( Y ) − ( ad Y ⊗ 1 + 1 ⊗ ad Y ) δ ( X ) {\displaystyle \delta ([X,Y])=\left(\operatorname {ad} _{X}\otimes 1+1\otimes \operatorname {ad} _{X}\right)\delta (Y)-\left(\operatorname {ad} _{Y}\otimes 1+1\otimes \operatorname {ad} _{Y}\right)\delta (X)} where ad X ⁡ Y = [ X , Y ] {\displaystyle \operatorname {ad} _{X}Y=[X,Y]} is the adjoint. Note that this definition is symmetric and g ∗ {\displaystyle {\mathfrak {g}}^{*}} is also a Lie bialgebra, the dual Lie bialgebra. == Example == Let g {\displaystyle {\mathfrak {g}}} be any semisimple Lie algebra. To specify a Lie bialgebra structure we thus need to specify a compatible Lie algebra structure on the dual vector space. Choose a Cartan subalgebra t ⊂ g {\displaystyle {\mathfrak {t}}\subset {\mathfrak {g}}} and a set of positive roots. Let b ± ⊂ g {\displaystyle {\mathfrak {b}}_{\pm }\subset {\mathfrak {g}}} be the corresponding opposite Borel subalgebras, so that t = b − ∩ b + {\displaystyle {\mathfrak {t}}={\mathfrak {b}}_{-}\cap {\mathfrak {b}}_{+}} and there is a natural projection π : b ± → t {\displaystyle \pi :{\mathfrak {b}}_{\pm }\to {\mathfrak {t}}} . Then define a Lie algebra g ′ := { ( X − , X + ) ∈ b − × b + | π ( X − ) + π ( X + ) = 0 } {\displaystyle {\mathfrak {g'}}:=\{(X_{-},X_{+})\in {\mathfrak {b}}_{-}\times {\mathfrak {b}}_{+}\ {\bigl \vert }\ \pi (X_{-})+\pi (X_{+})=0\}} which is a subalgebra of the product b − × b + {\displaystyle {\mathfrak {b}}_{-}\times {\mathfrak {b}}_{+}} , and has the same dimension as g {\displaystyle {\mathfrak {g}}} . Now identify g ′ {\displaystyle {\mathfrak {g'}}} with the dual of g {\displaystyle {\mathfrak {g}}} via the pairing ⟨ ( X − , X + ) , Y ⟩ := K ( X + − X − , Y ) {\displaystyle \langle (X_{-},X_{+}),Y\rangle :=K(X_{+}-X_{-},Y)} where Y ∈ g {\displaystyle Y\in {\mathfrak {g}}} and K {\displaystyle K} is the Killing form.
This defines a Lie bialgebra structure on g {\displaystyle {\mathfrak {g}}} , and is the "standard" example: it underlies the Drinfeld-Jimbo quantum group. Note that g ′ {\displaystyle {\mathfrak {g'}}} is solvable, whereas g {\displaystyle {\mathfrak {g}}} is semisimple. == Relation to Poisson–Lie groups == The Lie algebra g {\displaystyle {\mathfrak {g}}} of a Poisson–Lie group G has a natural structure of Lie bialgebra. In brief, the Lie group structure gives the Lie bracket on g {\displaystyle {\mathfrak {g}}} as usual, and the linearisation of the Poisson structure on G gives the Lie bracket on g ∗ {\displaystyle {\mathfrak {g^{*}}}} (recalling that a linear Poisson structure on a vector space is the same thing as a Lie bracket on the dual vector space). In more detail, let G be a Poisson–Lie group, with f 1 , f 2 ∈ C ∞ ( G ) {\displaystyle f_{1},f_{2}\in C^{\infty }(G)} being two smooth functions on the group manifold. Let ξ = ( d f ) e {\displaystyle \xi =(df)_{e}} be the differential at the identity element. Clearly, ξ ∈ g ∗ {\displaystyle \xi \in {\mathfrak {g}}^{*}} . The Poisson structure on the group then induces a bracket on g ∗ {\displaystyle {\mathfrak {g}}^{*}} , as [ ξ 1 , ξ 2 ] = ( d { f 1 , f 2 } ) e {\displaystyle [\xi _{1},\xi _{2}]=(d\{f_{1},f_{2}\})_{e}\,} where { , } {\displaystyle \{,\}} is the Poisson bracket. Let η {\displaystyle \eta } be the Poisson bivector on the manifold, and define η R {\displaystyle \eta ^{R}} to be the right-translate of the bivector to the identity element in G. Then one has that η R : G → g ⊗ g {\displaystyle \eta ^{R}:G\to {\mathfrak {g}}\otimes {\mathfrak {g}}} The cocommutator is then the tangent map: δ = T e η R {\displaystyle \delta =T_{e}\eta ^{R}\,} so that [ ξ 1 , ξ 2 ] = δ ∗ ( ξ 1 ⊗ ξ 2 ) {\displaystyle [\xi _{1},\xi _{2}]=\delta ^{*}(\xi _{1}\otimes \xi _{2})} is the dual of the cocommutator. == See also == Lie coalgebra Manin triple == References == H.-D. Doebner, J.-D. Hennig, eds, Quantum groups, Proceedings of the 8th International Workshop on Mathematical Physics, Arnold Sommerfeld Institute, Clausthal, FRG, 1989, Springer-Verlag Berlin, ISBN 3-540-53503-9. Vyjayanthi Chari and Andrew Pressley, A Guide to Quantum Groups, (1994), Cambridge University Press, Cambridge ISBN 0-521-55884-0. Beisert, N.; Spill, F. (2009). "The classical r-matrix of AdS/CFT and its Lie bialgebra structure". Communications in Mathematical Physics. 285 (2): 537–565. arXiv:0708.1762. Bibcode:2009CMaPh.285..537B. doi:10.1007/s00220-008-0578-2. S2CID 8946457.
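The cocycle relation displayed in the definition above can be tested numerically in a small example. The following sketch (an illustration added here, not from the original article; numpy is assumed) takes g = so(3), realised as R3 with the cross product, and a coboundary cocommutator δ(X) = (ad_X ⊗ 1 + 1 ⊗ ad_X)r for a fixed skew tensor r; such a δ satisfies the cocycle relation automatically, which the code confirms on random inputs:

    # Sketch: a coboundary cocommutator on so(3) satisfies the cocycle relation
    # (tensors in g (x) g are stored as 3x3 arrays).
    import numpy as np

    rng = np.random.default_rng(4)
    hat = lambda v: np.array([[0, -v[2], v[1]],
                              [v[2], 0, -v[0]],
                              [-v[1], v[0], 0]])        # ad_v for the cross product
    ad2 = lambda v, T: hat(v) @ T + T @ hat(v).T        # (ad_v (x) 1 + 1 (x) ad_v) T

    r = rng.standard_normal((3, 3)); r = r - r.T        # skew "r-matrix" in g (x) g
    delta = lambda v: ad2(v, r)                         # coboundary cocommutator

    X, Y = rng.standard_normal((2, 3))
    lhs = delta(np.cross(X, Y))                         # delta([X, Y])
    rhs = ad2(X, delta(Y)) - ad2(Y, delta(X))
    assert np.allclose(lhs, rhs)                        # the cocycle relation holds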
Wikipedia/Lie_bialgebra
In mathematics, the Heisenberg group H {\displaystyle H} , named after Werner Heisenberg, is the group of 3×3 upper triangular matrices of the form ( 1 a c 0 1 b 0 0 1 ) {\displaystyle {\begin{pmatrix}1&a&c\\0&1&b\\0&0&1\\\end{pmatrix}}} under the operation of matrix multiplication. Elements a, b and c can be taken from any commutative ring with identity, often taken to be the ring of real numbers (resulting in the "continuous Heisenberg group") or the ring of integers (resulting in the "discrete Heisenberg group"). The continuous Heisenberg group arises in the description of one-dimensional quantum mechanical systems, especially in the context of the Stone–von Neumann theorem. More generally, one can consider Heisenberg groups associated to n-dimensional systems, and most generally, to any symplectic vector space. == Three-dimensional case == In the three-dimensional case, the product of two Heisenberg matrices is given by ( 1 a c 0 1 b 0 0 1 ) ( 1 a ′ c ′ 0 1 b ′ 0 0 1 ) = ( 1 a + a ′ c + a b ′ + c ′ 0 1 b + b ′ 0 0 1 ) . {\displaystyle {\begin{pmatrix}1&a&c\\0&1&b\\0&0&1\\\end{pmatrix}}{\begin{pmatrix}1&a'&c'\\0&1&b'\\0&0&1\\\end{pmatrix}}={\begin{pmatrix}1&a+a'&c+ab'+c'\\0&1&b+b'\\0&0&1\\\end{pmatrix}}.} As one can see from the term ab′, the group is non-abelian. The neutral element of the Heisenberg group is the identity matrix, and inverses are given by ( 1 a c 0 1 b 0 0 1 ) − 1 = ( 1 − a a b − c 0 1 − b 0 0 1 ) . {\displaystyle {\begin{pmatrix}1&a&c\\0&1&b\\0&0&1\\\end{pmatrix}}^{-1}={\begin{pmatrix}1&-a&ab-c\\0&1&-b\\0&0&1\\\end{pmatrix}}.} The group is a subgroup of the 2-dimensional affine group Aff(2): ( 1 a c 0 1 b 0 0 1 ) {\displaystyle {\begin{pmatrix}1&a&c\\0&1&b\\0&0&1\\\end{pmatrix}}} acting on ( x → , 1 ) {\displaystyle ({\vec {x}},1)} corresponds to the affine transform ( 1 a 0 1 ) x → + ( c b ) . {\displaystyle {\begin{pmatrix}1&a\\0&1\end{pmatrix}}{\vec {x}}+{\begin{pmatrix}c\\b\end{pmatrix}}.} There are several prominent examples of the three-dimensional case. === Continuous Heisenberg group === If a, b, c, are real numbers (in the ring R), then one has the continuous Heisenberg group H3(R). It is a nilpotent real Lie group of dimension 3. In addition to the representation as real 3×3 matrices, the continuous Heisenberg group also has several different representations in terms of function spaces. By Stone–von Neumann theorem, there is, up to isomorphism, a unique irreducible unitary representation of H in which its centre acts by a given nontrivial character. This representation has several important realizations, or models. In the Schrödinger model, the Heisenberg group acts on the space of square integrable functions. In the theta representation, it acts on the space of holomorphic functions on the upper half-plane; it is so named for its connection with the theta functions. === Discrete Heisenberg group === If a, b, c are integers (in the ring Z), then one has the discrete Heisenberg group H3(Z). It is a non-abelian nilpotent group. It has two generators: x = ( 1 1 0 0 1 0 0 0 1 ) , y = ( 1 0 0 0 1 1 0 0 1 ) {\displaystyle x={\begin{pmatrix}1&1&0\\0&1&0\\0&0&1\end{pmatrix}},\quad y={\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}}} and relations z = x y x − 1 y − 1 , x z = z x , y z = z y , {\displaystyle z=xyx^{-1}y^{-1},\quad xz=zx,\quad yz=zy,} where z = ( 1 0 1 0 1 0 0 0 1 ) {\displaystyle z={\begin{pmatrix}1&0&1\\0&1&0\\0&0&1\end{pmatrix}}} is the generator of the center of H3. (Note that the inverses of x, y, and z replace the 1 above the diagonal with −1.) 
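The formulas just given are easy to verify symbolically. A minimal sketch (added here for illustration; sympy is assumed) checks the product and inverse formulas and the generator relations z = xyx−1y−1, xz = zx, yz = zy:

    # Sketch: product, inverse and generator relations in the Heisenberg group.
    import sympy as sp

    def H(a, b, c):
        return sp.Matrix([[1, a, c], [0, 1, b], [0, 0, 1]])

    a, b, c, a2, b2, c2 = sp.symbols('a b c a2 b2 c2')
    assert (H(a, b, c)*H(a2, b2, c2) - H(a + a2, b + b2, c + a*b2 + c2)).expand() == sp.zeros(3, 3)
    assert (H(a, b, c).inv() - H(-a, -b, a*b - c)).expand() == sp.zeros(3, 3)

    x, y = H(1, 0, 0), H(0, 1, 0)
    z = x*y*x.inv()*y.inv()
    assert z == H(0, 0, 1) and x*z == z*x and y*z == z*y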
By Bass's theorem, it has a polynomial growth rate of order 4. One can generate any element through ( 1 a c 0 1 b 0 0 1 ) = y b z c x a . {\displaystyle {\begin{pmatrix}1&a&c\\0&1&b\\0&0&1\end{pmatrix}}=y^{b}z^{c}x^{a}.} === Heisenberg group modulo an odd prime p === If one takes a, b, c in Z/p Z for an odd prime p, then one has the Heisenberg group modulo p. It is a group of order p3 with generators x, y and relations z = x y x − 1 y − 1 , x p = y p = z p = 1 , x z = z x , y z = z y . {\displaystyle z=xyx^{-1}y^{-1},\quad x^{p}=y^{p}=z^{p}=1,\quad xz=zx,\quad yz=zy.} Analogues of Heisenberg groups over finite fields of odd prime order p are called extra special groups, or more properly, extra special groups of exponent p. More generally, if the derived subgroup of a group G is contained in the center Z of G, then the map G/Z × G/Z → Z is a skew-symmetric bilinear operator on abelian groups. However, requiring that G/Z be a finite vector space requires the Frattini subgroup of G to be contained in the center, and requiring that Z be a one-dimensional vector space over Z/p Z requires that Z have order p, so if G is not abelian, then G is extra special. If G is extra special but does not have exponent p, then the general construction below applied to the symplectic vector space G/Z does not yield a group isomorphic to G. === Heisenberg group modulo 2 === The Heisenberg group modulo 2 is of order 8 and is isomorphic to the dihedral group D4 (the symmetries of a square). Observe that if x = ( 1 1 0 0 1 0 0 0 1 ) , y = ( 1 0 0 0 1 1 0 0 1 ) , {\displaystyle x={\begin{pmatrix}1&1&0\\0&1&0\\0&0&1\end{pmatrix}},\quad y={\begin{pmatrix}1&0&0\\0&1&1\\0&0&1\end{pmatrix}},} then x y = ( 1 1 1 0 1 1 0 0 1 ) , {\displaystyle xy={\begin{pmatrix}1&1&1\\0&1&1\\0&0&1\end{pmatrix}},} and y x = ( 1 1 0 0 1 1 0 0 1 ) . {\displaystyle yx={\begin{pmatrix}1&1&0\\0&1&1\\0&0&1\end{pmatrix}}.} The elements x and y correspond to reflections (with 45° between them), whereas xy and yx correspond to rotations by 90°. The other reflections are xyx and yxy, and rotation by 180° is xyxy (= yxyx). == Heisenberg algebra == The Lie algebra h {\displaystyle {\mathfrak {h}}} of the Heisenberg group H {\displaystyle H} (over the real numbers) is known as the Heisenberg algebra. It may be represented using the space of 3×3 matrices of the form ( 0 a c 0 0 b 0 0 0 ) {\displaystyle {\begin{pmatrix}0&a&c\\0&0&b\\0&0&0\end{pmatrix}}} with a , b , c ∈ R {\displaystyle a,b,c\in \mathbb {R} } . The following three elements form a basis for h {\displaystyle {\mathfrak {h}}} : X = ( 0 1 0 0 0 0 0 0 0 ) , Y = ( 0 0 0 0 0 1 0 0 0 ) , Z = ( 0 0 1 0 0 0 0 0 0 ) . {\displaystyle X={\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}},\quad Y={\begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix}},\quad Z={\begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix}}.} These basis elements satisfy the commutation relations [ X , Y ] = Z , [ X , Z ] = 0 , [ Y , Z ] = 0. {\displaystyle [X,Y]=Z,\quad [X,Z]=0,\quad [Y,Z]=0.} The name "Heisenberg group" is motivated by the preceding relations, which have the same form as the canonical commutation relations in quantum mechanics: [ x ^ , p ^ ] = i ℏ I , [ x ^ , i ℏ I ] = 0 , [ p ^ , i ℏ I ] = 0 , {\displaystyle [{\hat {x}},{\hat {p}}]=i\hbar I,\quad [{\hat {x}},i\hbar I]=0,\quad [{\hat {p}},i\hbar I]=0,} where x ^ {\displaystyle {\hat {x}}} is the position operator, p ^ {\displaystyle {\hat {p}}} is the momentum operator, and ℏ {\displaystyle \hbar } is the Planck constant.
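A direct numerical check of these basis relations (a sketch added here, not part of the original article; numpy is assumed):

    # Sketch: [X, Y] = Z and Z is central in the Heisenberg algebra.
    import numpy as np

    X = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 0]])
    Y = np.array([[0., 0, 0], [0, 0, 1], [0, 0, 0]])
    Z = np.array([[0., 0, 1], [0, 0, 0], [0, 0, 0]])
    br = lambda u, v: u @ v - v @ u

    assert np.allclose(br(X, Y), Z)
    assert np.allclose(br(X, Z), 0) and np.allclose(br(Y, Z), 0)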
The Heisenberg group H has the special property that the exponential map is a one-to-one and onto map from the Lie algebra h {\displaystyle {\mathfrak {h}}} to the group H: exp ⁡ ( 0 a c 0 0 b 0 0 0 ) = ( 1 a c + a b 2 0 1 b 0 0 1 ) . {\displaystyle \exp {\begin{pmatrix}0&a&c\\0&0&b\\0&0&0\end{pmatrix}}={\begin{pmatrix}1&a&c+{\frac {ab}{2}}\\0&1&b\\0&0&1\end{pmatrix}}.} === In conformal field theory === In conformal field theory, the term Heisenberg algebra is used to refer to an infinite-dimensional generalization of the above algebra. It is spanned by elements a n , n ∈ Z {\displaystyle a_{n},n\in \mathbb {Z} } with commutation relations [ a n , a m ] = δ n + m , 0 . {\displaystyle [a_{n},a_{m}]=\delta _{n+m,0}.} Under a rescaling, this is simply a countably-infinite number of copies of the above algebra. == Higher dimensions == More general Heisenberg groups H 2 n + 1 {\displaystyle H_{2n+1}} may be defined for higher dimensions in Euclidean space, and more generally on symplectic vector spaces. The simplest general case is the real Heisenberg group of dimension 2 n + 1 {\displaystyle 2n+1} , for any integer n ≥ 1 {\displaystyle n\geq 1} . As a group of matrices, H 2 n + 1 {\displaystyle H_{2n+1}} (or H 2 n + 1 ( R ) {\displaystyle H_{2n+1}(\mathbb {R} )} to indicate that this is the Heisenberg group over the field R {\displaystyle \mathbb {R} } of real numbers) is defined as the group ( n + 2 ) × ( n + 2 ) {\displaystyle (n+2)\times (n+2)} matrices with entries in R {\displaystyle \mathbb {R} } and having the form [ 1 a c 0 I n b 0 0 1 ] , {\displaystyle {\begin{bmatrix}1&\mathbf {a} &c\\\mathbf {0} &I_{n}&\mathbf {b} \\0&\mathbf {0} &1\end{bmatrix}},} where a is a row vector of length n, b is a column vector of length n, In is the identity matrix of size n. === Group structure === This is indeed a group, as is shown by the multiplication: [ 1 a c 0 I n b 0 0 1 ] ⋅ [ 1 a ′ c ′ 0 I n b ′ 0 0 1 ] = [ 1 a + a ′ c + c ′ + a ⋅ b ′ 0 I n b + b ′ 0 0 1 ] {\displaystyle {\begin{bmatrix}1&\mathbf {a} &c\\\mathbf {0} &I_{n}&\mathbf {b} \\0&\mathbf {0} &1\end{bmatrix}}\cdot {\begin{bmatrix}1&\mathbf {a} '&c'\\\mathbf {0} &I_{n}&\mathbf {b} '\\0&\mathbf {0} &1\end{bmatrix}}={\begin{bmatrix}1&\mathbf {a} +\mathbf {a} '&c+c'+\mathbf {a} \cdot \mathbf {b} '\\\mathbf {0} &I_{n}&\mathbf {b} +\mathbf {b} '\\0&\mathbf {0} &1\end{bmatrix}}} and [ 1 a c 0 I n b 0 0 1 ] ⋅ [ 1 − a − c + a ⋅ b 0 I n − b 0 0 1 ] = [ 1 0 0 0 I n 0 0 0 1 ] . {\displaystyle {\begin{bmatrix}1&\mathbf {a} &c\\\mathbf {0} &I_{n}&\mathbf {b} \\0&\mathbf {0} &1\end{bmatrix}}\cdot {\begin{bmatrix}1&-\mathbf {a} &-c+\mathbf {a} \cdot \mathbf {b} \\\mathbf {0} &I_{n}&-\mathbf {b} \\0&\mathbf {0} &1\end{bmatrix}}={\begin{bmatrix}1&\mathbf {0} &0\\\mathbf {0} &I_{n}&\mathbf {0} \\0&\mathbf {0} &1\end{bmatrix}}.} === Lie algebra === The Heisenberg group is a simply-connected Lie group whose Lie algebra consists of matrices [ 0 a c 0 0 n b 0 0 0 ] , {\displaystyle {\begin{bmatrix}0&\mathbf {a} &c\\\mathbf {0} &0_{n}&\mathbf {b} \\0&\mathbf {0} &0\end{bmatrix}},} where a is a row vector of length n, b is a column vector of length n, 0n is the zero matrix of size n. 
By letting e1, ..., en be the canonical basis of Rn and setting p i = [ 0 e i T 0 0 0 n 0 0 0 0 ] , q j = [ 0 0 0 0 0 n e j 0 0 0 ] , z = [ 0 0 1 0 0 n 0 0 0 0 ] , {\displaystyle p_{i}={\begin{bmatrix}0&\operatorname {e} _{i}^{\mathrm {T} }&0\\\mathbf {0} &0_{n}&\mathbf {0} \\0&\mathbf {0} &0\end{bmatrix}},\quad q_{j}={\begin{bmatrix}0&\mathbf {0} &0\\\mathbf {0} &0_{n}&\operatorname {e} _{j}\\0&\mathbf {0} &0\end{bmatrix}},\quad z={\begin{bmatrix}0&\mathbf {0} &1\\\mathbf {0} &0_{n}&\mathbf {0} \\0&\mathbf {0} &0\end{bmatrix}},} the associated Lie algebra can be characterized by the canonical commutation relations [ p i , q j ] = δ i j z , [ p i , z ] = [ q j , z ] = 0 , {\displaystyle [p_{i},q_{j}]=\delta _{ij}z,\quad [p_{i},z]=[q_{j},z]=0,} (1) where p1, ..., pn, q1, ..., qn, z are the algebra generators. In particular, z is a central element of the Heisenberg Lie algebra. Note that the Lie algebra of the Heisenberg group is nilpotent. === Exponential map === Let u = [ 0 a c 0 0 n b 0 0 0 ] , {\displaystyle u={\begin{bmatrix}0&\mathbf {a} &c\\\mathbf {0} &0_{n}&\mathbf {b} \\0&\mathbf {0} &0\end{bmatrix}},} which fulfills u 3 = 0 n + 2 {\displaystyle u^{3}=0_{n+2}} . The exponential map evaluates to exp ⁡ ( u ) = ∑ k = 0 ∞ 1 k ! u k = I n + 2 + u + 1 2 u 2 = [ 1 a c + 1 2 a ⋅ b 0 I n b 0 0 1 ] . {\displaystyle \exp(u)=\sum _{k=0}^{\infty }{\frac {1}{k!}}u^{k}=I_{n+2}+u+{\tfrac {1}{2}}u^{2}={\begin{bmatrix}1&\mathbf {a} &c+{\frac {1}{2}}\mathbf {a} \cdot \mathbf {b} \\\mathbf {0} &I_{n}&\mathbf {b} \\0&\mathbf {0} &1\end{bmatrix}}.} The exponential map of any nilpotent Lie algebra is a diffeomorphism between the Lie algebra and the unique associated connected, simply-connected Lie group. This discussion (aside from statements referring to dimension and Lie group) further applies if we replace R by any commutative ring A. The corresponding group is denoted Hn(A). Under the additional assumption that the prime 2 is invertible in the ring A, the exponential map is also defined, since it reduces to a finite sum and has the form above (e.g. A could be a ring Z/p Z with an odd prime p or any field of characteristic 0). == Representation theory == The unitary representation theory of the Heisenberg group is fairly simple – later generalized by Mackey theory – and was the motivation for its introduction in quantum physics, as discussed below. For each nonzero real number ℏ {\displaystyle \hbar } , we can define an irreducible unitary representation Π ℏ {\displaystyle \Pi _{\hbar }} of H 2 n + 1 {\displaystyle H_{2n+1}} acting on the Hilbert space L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} by the formula [ Π ℏ ( 1 a c 0 I n b 0 0 1 ) ψ ] ( x ) = e i ℏ c e i b ⋅ x ψ ( x + ℏ a ) . {\displaystyle \left[\Pi _{\hbar }{\begin{pmatrix}1&\mathbf {a} &c\\0&I_{n}&\mathbf {b} \\0&0&1\end{pmatrix}}\psi \right](x)=e^{i\hbar c}e^{ib\cdot x}\psi (x+\hbar a).} This representation is known as the Schrödinger representation. The motivation for this representation is the action of the exponentiated position and momentum operators in quantum mechanics. The parameter a {\displaystyle a} describes translations in position space, the parameter b {\displaystyle b} describes translations in momentum space, and the parameter c {\displaystyle c} gives an overall phase factor. The phase factor is needed to obtain a group of operators, since translations in position space and translations in momentum space do not commute.
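The closed form of the exponential map above is easy to confirm numerically. A minimal sketch (an illustration added here; numpy/scipy and random a, b, c are assumed):

    # Sketch: for the nilpotent u (u^3 = 0), exp(u) = I + u + u^2/2,
    # and the top-right entry of exp(u) is c + a.b/2.
    import numpy as np
    from scipy.linalg import expm

    n = 3
    rng = np.random.default_rng(5)
    a, b, c = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal()

    u = np.zeros((n + 2, n + 2))
    u[0, 1:n+1] = a          # row vector a
    u[1:n+1, n+1] = b        # column vector b
    u[0, n+1] = c

    assert np.allclose(u @ u @ u, 0)                      # u^3 = 0
    E = expm(u)
    assert np.allclose(E, np.eye(n + 2) + u + 0.5*(u @ u))
    assert np.isclose(E[0, n+1], c + 0.5*np.dot(a, b))    # entry c + a.b/2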
The key result is the Stone–von Neumann theorem, which states that every (strongly continuous) irreducible unitary representation of the Heisenberg group in which the center acts nontrivially is equivalent to Π ℏ {\displaystyle \Pi _{\hbar }} for some ℏ {\displaystyle \hbar } . Alternatively, that they are all equivalent to the Weyl algebra (or CCR algebra) on a symplectic space of dimension 2n. Since the Heisenberg group is a one-dimensional central extension of R 2 n {\displaystyle \mathbb {R} ^{2n}} , its irreducible unitary representations can be viewed as irreducible unitary projective representations of R 2 n {\displaystyle \mathbb {R} ^{2n}} . Conceptually, the representation given above constitutes the quantum-mechanical counterpart to the group of translational symmetries on the classical phase space, R 2 n {\displaystyle \mathbb {R} ^{2n}} . The fact that the quantum version is only a projective representation of R 2 n {\displaystyle \mathbb {R} ^{2n}} is suggested already at the classical level. The Hamiltonian generators of translations in phase space are the position and momentum functions. The span of these functions does not form a Lie algebra under the Poisson bracket, however, because { x i , p j } = δ i , j . {\displaystyle \{x_{i},p_{j}\}=\delta _{i,j}.} Rather, the span of the position and momentum functions and the constants forms a Lie algebra under the Poisson bracket. This Lie algebra is a one-dimensional central extension of the commutative Lie algebra R 2 n {\displaystyle \mathbb {R} ^{2n}} , isomorphic to the Lie algebra of the Heisenberg group. == On symplectic vector spaces == The general abstraction of a Heisenberg group is constructed from any symplectic vector space. For example, let (V, ω) be a finite-dimensional real symplectic vector space (so ω is a nondegenerate skew symmetric bilinear form on V). The Heisenberg group H(V) on (V, ω) (or simply V for brevity) is the set V×R endowed with the group law ( v , t ) ⋅ ( v ′ , t ′ ) = ( v + v ′ , t + t ′ + 1 2 ω ( v , v ′ ) ) . {\displaystyle (v,t)\cdot \left(v',t'\right)=\left(v+v',t+t'+{\frac {1}{2}}\omega \left(v,v'\right)\right).} The Heisenberg group is a central extension of the additive group V. Thus there is an exact sequence 0 → R → H ( V ) → V → 0. {\displaystyle 0\to \mathbf {R} \to H(V)\to V\to 0.} Any symplectic vector space admits a Darboux basis {ej, fk}1 ≤ j,k ≤ n satisfying ω(ej, fk) = δjk and where 2n is the dimension of V (the dimension of V is necessarily even). In terms of this basis, every vector decomposes as v = q a e a + p a f a . {\displaystyle v=q^{a}\mathbf {e} _{a}+p_{a}\mathbf {f} ^{a}.} The qa and pa are canonically conjugate coordinates. If {ej, fk}1 ≤ j,k ≤ n is a Darboux basis for V, then let {E} be a basis for R, and {ej, fk, E}1 ≤ j,k ≤ n is the corresponding basis for V×R. A vector in H(V) is then given by v = q a e a + p a f a + t E {\displaystyle v=q^{a}\mathbf {e} _{a}+p_{a}\mathbf {f} ^{a}+tE} and the group law becomes ( p , q , t ) ⋅ ( p ′ , q ′ , t ′ ) = ( p + p ′ , q + q ′ , t + t ′ + 1 2 ( p q ′ − p ′ q ) ) . {\displaystyle (p,q,t)\cdot \left(p',q',t'\right)=\left(p+p',q+q',t+t'+{\frac {1}{2}}(pq'-p'q)\right).} Because the underlying manifold of the Heisenberg group is a linear space, vectors in the Lie algebra can be canonically identified with vectors in the group. 
The Lie algebra of the Heisenberg group is given by the commutation relation [ ( v 1 , t 1 ) , ( v 2 , t 2 ) ] = ω ( v 1 , v 2 ) {\displaystyle {\begin{bmatrix}(v_{1},t_{1}),(v_{2},t_{2})\end{bmatrix}}=\omega (v_{1},v_{2})} or written in terms of the Darboux basis [ e a , f b ] = δ a b {\displaystyle \left[\mathbf {e} _{a},\mathbf {f} ^{b}\right]=\delta _{a}^{b}} and all other commutators vanish. It is also possible to define the group law in a different way but which yields a group isomorphic to the group we have just defined. To avoid confusion, we will use u instead of t, so a vector is given by v = q a e a + p a f a + u E {\displaystyle v=q^{a}\mathbf {e} _{a}+p_{a}\mathbf {f} ^{a}+uE} and the group law is ( p , q , u ) ⋅ ( p ′ , q ′ , u ′ ) = ( p + p ′ , q + q ′ , u + u ′ + p q ′ ) . {\displaystyle (p,q,u)\cdot \left(p',q',u'\right)=\left(p+p',q+q',u+u'+pq'\right).} An element of the group v = q a e a + p a f a + u E {\displaystyle v=q^{a}\mathbf {e} _{a}+p_{a}\mathbf {f} ^{a}+uE} can then be expressed as a matrix [ 1 p u 0 I n q 0 0 1 ] {\displaystyle {\begin{bmatrix}1&p&u\\0&I_{n}&q\\0&0&1\end{bmatrix}}} , which gives a faithful matrix representation of H(V). The u in this formulation is related to t in our previous formulation by u = t + 1 2 p q {\displaystyle u=t+{\tfrac {1}{2}}pq} , so that the t value for the product comes to u + u ′ + p q ′ − 1 2 ( p + p ′ ) ( q + q ′ ) = t + 1 2 p q + t ′ + 1 2 p ′ q ′ + p q ′ − 1 2 ( p + p ′ ) ( q + q ′ ) = t + t ′ + 1 2 ( p q ′ − p ′ q ) {\displaystyle {\begin{aligned}&u+u'+pq'-{\frac {1}{2}}\left(p+p'\right)\left(q+q'\right)\\={}&t+{\frac {1}{2}}pq+t'+{\frac {1}{2}}p'q'+pq'-{\frac {1}{2}}\left(p+p'\right)\left(q+q'\right)\\={}&t+t'+{\frac {1}{2}}\left(pq'-p'q\right)\end{aligned}}} , as before. The isomorphism to the group using upper triangular matrices relies on the decomposition of V into a Darboux basis, which amounts to a choice of isomorphism V ≅ U ⊕ U*. Although the new group law yields a group isomorphic to the one given higher up, the group with this law is sometimes referred to as the polarized Heisenberg group as a reminder that this group law relies on a choice of basis (a choice of a Lagrangian subspace of V is a polarization). To any Lie algebra, there is a unique connected, simply connected Lie group G. All other connected Lie groups with the same Lie algebra as G are of the form G/N where N is a central discrete group in G. In this case, the center of H(V) is R and the only discrete subgroups are isomorphic to Z. Thus H(V)/Z is another Lie group which shares this Lie algebra. Of note about this Lie group is that it admits no faithful finite-dimensional representations; it is not isomorphic to any matrix group. It does however have a well-known family of infinite-dimensional unitary representations. == Connection with the Weyl algebra == The Lie algebra h n {\displaystyle {\mathfrak {h}}_{n}} of the Heisenberg group was described above, (1), as a Lie algebra of matrices. The Poincaré–Birkhoff–Witt theorem applies to determine the universal enveloping algebra U ( h n ) {\displaystyle U({\mathfrak {h}}_{n})} . Among other properties, the universal enveloping algebra is an associative algebra into which h n {\displaystyle {\mathfrak {h}}_{n}} injectively imbeds. 
By the Poincaré–Birkhoff–Witt theorem, it is thus the free vector space generated by the monomials z j p 1 k 1 p 2 k 2 ⋯ p n k n q 1 ℓ 1 q 2 ℓ 2 ⋯ q n ℓ n , {\displaystyle z^{j}p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{n}^{k_{n}}q_{1}^{\ell _{1}}q_{2}^{\ell _{2}}\cdots q_{n}^{\ell _{n}}~,} where the exponents are all non-negative. Consequently, U ( h n ) {\displaystyle U({\mathfrak {h}}_{n})} consists of real polynomials ∑ j , k → , ℓ → c j k → ℓ → z j p 1 k 1 p 2 k 2 ⋯ p n k n q 1 ℓ 1 q 2 ℓ 2 ⋯ q n ℓ n , {\displaystyle \sum _{j,{\vec {k}},{\vec {\ell }}}c_{j{\vec {k}}{\vec {\ell }}}\,\,z^{j}p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{n}^{k_{n}}q_{1}^{\ell _{1}}q_{2}^{\ell _{2}}\cdots q_{n}^{\ell _{n}}~,} with the commutation relations p k p ℓ = p ℓ p k , q k q ℓ = q ℓ q k , p k q ℓ − q ℓ p k = δ k ℓ z , z p k − p k z = 0 , z q k − q k z = 0 . {\displaystyle p_{k}p_{\ell }=p_{\ell }p_{k},\quad q_{k}q_{\ell }=q_{\ell }q_{k},\quad p_{k}q_{\ell }-q_{\ell }p_{k}=\delta _{k\ell }z,\quad zp_{k}-p_{k}z=0,\quad zq_{k}-q_{k}z=0~.} The algebra U ( h n ) {\displaystyle U({\mathfrak {h}}_{n})} is closely related to the algebra of differential operators on R n {\displaystyle \mathbb {R} ^{n}} with polynomial coefficients, since any such operator has a unique representation in the form P = ∑ k → , ℓ → c k → ℓ → ∂ x 1 k 1 ∂ x 2 k 2 ⋯ ∂ x n k n x 1 ℓ 1 x 2 ℓ 2 ⋯ x n ℓ n . {\displaystyle P=\sum _{{\vec {k}},{\vec {\ell }}}c_{{\vec {k}}{\vec {\ell }}}\,\,\partial _{x_{1}}^{k_{1}}\partial _{x_{2}}^{k_{2}}\cdots \partial _{x_{n}}^{k_{n}}x_{1}^{\ell _{1}}x_{2}^{\ell _{2}}\cdots x_{n}^{\ell _{n}}~.} This algebra is called the Weyl algebra. It follows from abstract nonsense that the Weyl algebra Wn is a quotient of U ( h n ) {\displaystyle U({\mathfrak {h}}_{n})} . However, this is also easy to see directly from the above representations; viz. by the mapping z j p 1 k 1 p 2 k 2 ⋯ p n k n q 1 ℓ 1 q 2 ℓ 2 ⋯ q n ℓ n ↦ ∂ x 1 k 1 ∂ x 2 k 2 ⋯ ∂ x n k n x 1 ℓ 1 x 2 ℓ 2 ⋯ x n ℓ n . {\displaystyle z^{j}p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots p_{n}^{k_{n}}q_{1}^{\ell _{1}}q_{2}^{\ell _{2}}\cdots q_{n}^{\ell _{n}}\,\mapsto \,\partial _{x_{1}}^{k_{1}}\partial _{x_{2}}^{k_{2}}\cdots \partial _{x_{n}}^{k_{n}}x_{1}^{\ell _{1}}x_{2}^{\ell _{2}}\cdots x_{n}^{\ell _{n}}~.} == Applications == === Weyl's parameterization of quantum mechanics === The application that led Hermann Weyl to an explicit realization of the Heisenberg group was the question of why the Schrödinger picture and Heisenberg picture are physically equivalent. Abstractly, the reason is the Stone–von Neumann theorem: there is a unique unitary representation with given action of the central Lie algebra element z, up to a unitary equivalence: the nontrivial elements of the algebra are all equivalent to the usual position and momentum operators. Thus, the Schrödinger picture and Heisenberg picture are equivalent – they are just different ways of realizing this essentially unique representation. === Theta representation === The same uniqueness result was used by David Mumford for discrete Heisenberg groups, in his theory of equations defining abelian varieties. This is a large generalization of the approach used in Jacobi's elliptic functions, which is the case of the modulo 2 Heisenberg group, of order 8. The simplest case is the theta representation of the Heisenberg group, of which the discrete case gives the theta function. === Fourier analysis === The Heisenberg group also occurs in Fourier analysis, where it is used in some formulations of the Stone–von Neumann theorem. 
In this case, the Heisenberg group can be understood to act on the space of square integrable functions; the result is a representation of the Heisenberg groups sometimes called the Weyl representation. == As a sub-Riemannian manifold == The three-dimensional Heisenberg group H3(R) on the reals can also be understood to be a smooth manifold, and specifically, a simple example of a sub-Riemannian manifold. Given a point p = (x, y, z) in R3, define a differential 1-form Θ at this point as Θ p = d z − 1 2 ( x d y − y d x ) . {\displaystyle \Theta _{p}=dz-{\frac {1}{2}}\left(xdy-ydx\right).} This one-form belongs to the cotangent bundle of R3; that is, Θ p : T p R 3 → R {\displaystyle \Theta _{p}:T_{p}\mathbf {R} ^{3}\to \mathbf {R} } is a map on the tangent bundle. Let H p = { v ∈ T p R 3 ∣ Θ p ( v ) = 0 } . {\displaystyle H_{p}=\left\{v\in T_{p}\mathbf {R} ^{3}\mid \Theta _{p}(v)=0\right\}.} It can be seen that H is a subbundle of the tangent bundle TR3. A cometric on H is given by projecting vectors to the two-dimensional space spanned by vectors in the x and y direction. That is, given vectors v = ( v 1 , v 2 , v 3 ) {\displaystyle v=(v_{1},v_{2},v_{3})} and w = ( w 1 , w 2 , w 3 ) {\displaystyle w=(w_{1},w_{2},w_{3})} in TR3, the inner product is given by ⟨ v , w ⟩ = v 1 w 1 + v 2 w 2 . {\displaystyle \langle v,w\rangle =v_{1}w_{1}+v_{2}w_{2}.} The resulting structure turns H into the manifold of the Heisenberg group. An orthonormal frame on the manifold is given by the Lie vector fields X = ∂ ∂ x − 1 2 y ∂ ∂ z , Y = ∂ ∂ y + 1 2 x ∂ ∂ z , Z = ∂ ∂ z , {\displaystyle {\begin{aligned}X&={\frac {\partial }{\partial x}}-{\frac {1}{2}}y{\frac {\partial }{\partial z}},\\Y&={\frac {\partial }{\partial y}}+{\frac {1}{2}}x{\frac {\partial }{\partial z}},\\Z&={\frac {\partial }{\partial z}},\end{aligned}}} which obey the relations [X, Y] = Z and [X, Z] = [Y, Z] = 0. Being Lie vector fields, these form a left-invariant basis for the group action. The geodesics on the manifold are spirals, projecting down to circles in two dimensions. That is, if γ ( t ) = ( x ( t ) , y ( t ) , z ( t ) ) {\displaystyle \gamma (t)=(x(t),y(t),z(t))} is a geodesic curve, then the curve c ( t ) = ( x ( t ) , y ( t ) ) {\displaystyle c(t)=(x(t),y(t))} is an arc of a circle, and z ( t ) = 1 2 ∫ c x d y − y d x {\displaystyle z(t)={\frac {1}{2}}\int _{c}xdy-ydx} with the integral limited to the two-dimensional plane. That is, the height of the curve is proportional to the area of the circle subtended by the circular arc, which follows by Green's theorem. == Heisenberg group of a locally compact abelian group == It is more generally possible to define the Heisenberg group of a locally compact abelian group K, equipped with a Haar measure. Such a group has a Pontrjagin dual K ^ {\displaystyle {\hat {K}}} , consisting of all continuous U ( 1 ) {\displaystyle U(1)} -valued characters on K, which is also a locally compact abelian group if endowed with the compact-open topology. The Heisenberg group associated with the locally compact abelian group K is the subgroup of the unitary group of L 2 ( K ) {\displaystyle L^{2}(K)} generated by translations from K and multiplications by elements of K ^ {\displaystyle {\hat {K}}} . In more detail, the Hilbert space L 2 ( K ) {\displaystyle L^{2}(K)} consists of square-integrable complex-valued functions f {\displaystyle f} on K. 
The translations in K form a unitary representation of K as operators on L 2 ( K ) {\displaystyle L^{2}(K)} : ( T x f ) ( y ) = f ( x + y ) {\displaystyle (T_{x}f)(y)=f(x+y)} for x , y ∈ K {\displaystyle x,y\in K} . So too do the multiplications by characters: ( M χ f ) ( y ) = χ ( y ) f ( y ) {\displaystyle (M_{\chi }f)(y)=\chi (y)f(y)} for χ ∈ K ^ {\displaystyle \chi \in {\hat {K}}} . These operators do not commute, and instead satisfy ( T x M χ T x − 1 M χ − 1 f ) ( y ) = χ ( x ) ¯ f ( y ) {\displaystyle \left(T_{x}M_{\chi }T_{x}^{-1}M_{\chi }^{-1}f\right)(y)={\overline {\chi (x)}}f(y)} multiplication by a fixed unit modulus complex number. So the Heisenberg group H ( K ) {\displaystyle H(K)} associated with K is a type of central extension of K × K ^ {\displaystyle K\times {\hat {K}}} , via an exact sequence of groups: 1 → U ( 1 ) → H ( K ) → K × K ^ → 0. {\displaystyle 1\to U(1)\to H(K)\to K\times {\hat {K}}\to 0.} More general Heisenberg groups are described by 2-cocyles in the cohomology group H 2 ( K , U ( 1 ) ) {\displaystyle H^{2}(K,U(1))} . The existence of a duality between K {\displaystyle K} and K ^ {\displaystyle {\hat {K}}} gives rise to a canonical cocycle, but there are generally others. The Heisenberg group acts irreducibly on L 2 ( K ) {\displaystyle L^{2}(K)} . Indeed, the continuous characters separate points so any unitary operator of L 2 ( K ) {\displaystyle L^{2}(K)} that commutes with them is an L ∞ {\displaystyle L^{\infty }} multiplier. But commuting with translations implies that the multiplier is constant. A version of the Stone–von Neumann theorem, proved by George Mackey, holds for the Heisenberg group H ( K ) {\displaystyle H(K)} . The Fourier transform is the unique intertwiner between the representations of L 2 ( K ) {\displaystyle L^{2}(K)} and L 2 ( K ^ ) {\displaystyle L^{2}\left({\hat {K}}\right)} . See the discussion at Stone–von Neumann theorem#Relation to the Fourier transform for details. == See also == Canonical commutation relations Wigner–Weyl transform Stone–von Neumann theorem Projective representation Geometrization conjecture == Notes == == References == Binz, Ernst; Pods, Sonja (2008). Geometry of Heisenberg Groups. American Mathematical Society. ISBN 978-0-8218-4495-3. Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, Bibcode:2013qtm..book.....H, ISBN 978-1461471158 Hall, Brian C. (2015). Lie Groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Vol. 222 (second ed.). Springer. ISBN 978-3319134666. Howe, Roger (1980). "On the role of the Heisenberg group in harmonic analysis". Bulletin of the American Mathematical Society. 3 (2): 821–843. doi:10.1090/s0273-0979-1980-14825-9. MR 0578375. Kirillov, Alexandre A. (2004). "Ch. 2: "Representations and Orbits of the Heisenberg Group". Lectures on the Orbit Method. American Mathematical Society. ISBN 0-8218-3530-0. Mackey, George (1976). The theory of Unitary Group Representations. Chicago Lectures in Mathematics. University of Chicago Press. ISBN 978-0226500522. == External links == Groupprops, The Group Properties Wiki Unitriangular matrix group UT(3,p)
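As a toy instance of this construction, take K = Z/N with counting measure, so that L²(K) is just Cᴺ: translations become cyclic shifts and the characters are χₖ(y) = e^{2πiky/N}. The numpy sketch below (our own variable names; whether the resulting scalar is χ(x) or its conjugate depends on the composition convention, so the check allows either) confirms that the group commutator of a translation and a modulation is a unit-modulus scalar operator, the 2-cocycle of the extension.

import numpy as np

N = 12
# (T f)(y) = f(y + 1): cyclic translation by x = 1 on L^2(Z/N) = C^N
T = np.roll(np.eye(N), -1, axis=0)
chi = np.exp(2j * np.pi * np.arange(N) / N)   # character chi(y) = e^{2 pi i y / N}
M = np.diag(chi)                              # modulation operator

C = T @ M @ np.linalg.inv(T) @ np.linalg.inv(M)
scalar = C[0, 0]
assert np.allclose(C, scalar * np.eye(N))     # the commutator is a scalar operator
assert np.isclose(abs(scalar), 1.0)
assert np.isclose(scalar, chi[1]) or np.isclose(scalar, np.conj(chi[1]))
print("T and M commute up to the unit scalar", scalar)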
Wikipedia/Heisenberg_algebra
This mathematics-related list provides Mubarakzyanov's classification of low-dimensional real Lie algebras, published in Russian in 1963. It complements the article on Lie algebra in the area of abstract algebra. An English version and review of this classification was published by Popovych et al. in 2003. == Mubarakzyanov's Classification == Let g n {\displaystyle {\mathfrak {g}}_{n}} be n {\displaystyle n} -dimensional Lie algebra over the field of real numbers with generators e 1 , … , e n {\displaystyle e_{1},\dots ,e_{n}} , n ≤ 4 {\displaystyle n\leq 4} . For each algebra g {\displaystyle {\mathfrak {g}}} we adduce only non-zero commutators between basis elements. === One-dimensional === g 1 {\displaystyle {\mathfrak {g}}_{1}} , abelian. === Two-dimensional === 2 g 1 {\displaystyle 2{\mathfrak {g}}_{1}} , abelian R 2 {\displaystyle \mathbb {R} ^{2}} ; g 2.1 {\displaystyle {\mathfrak {g}}_{2.1}} , solvable a f f ( 1 ) = { ( a b 0 0 ) : a , b ∈ R } {\displaystyle {\mathfrak {aff}}(1)=\left\{{\begin{pmatrix}a&b\\0&0\end{pmatrix}}\,:\,a,b\in \mathbb {R} \right\}} , [ e 1 , e 2 ] = e 1 . {\displaystyle [e_{1},e_{2}]=e_{1}.} === Three-dimensional === 3 g 1 {\displaystyle 3{\mathfrak {g}}_{1}} , abelian, Bianchi I; g 2.1 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{2.1}\oplus {\mathfrak {g}}_{1}} , decomposable solvable, Bianchi III; g 3.1 {\displaystyle {\mathfrak {g}}_{3.1}} , Heisenberg–Weyl algebra, nilpotent, Bianchi II, [ e 2 , e 3 ] = e 1 ; {\displaystyle [e_{2},e_{3}]=e_{1};} g 3.2 {\displaystyle {\mathfrak {g}}_{3.2}} , solvable, Bianchi IV, [ e 1 , e 3 ] = e 1 , [ e 2 , e 3 ] = e 1 + e 2 ; {\displaystyle [e_{1},e_{3}]=e_{1},\quad [e_{2},e_{3}]=e_{1}+e_{2};} g 3.3 {\displaystyle {\mathfrak {g}}_{3.3}} , solvable, Bianchi V, [ e 1 , e 3 ] = e 1 , [ e 2 , e 3 ] = e 2 ; {\displaystyle [e_{1},e_{3}]=e_{1},\quad [e_{2},e_{3}]=e_{2};} g 3.4 {\displaystyle {\mathfrak {g}}_{3.4}} , solvable, Bianchi VI, Poincaré algebra p ( 1 , 1 ) {\displaystyle {\mathfrak {p}}(1,1)} when α = − 1 {\displaystyle \alpha =-1} , [ e 1 , e 3 ] = e 1 , [ e 2 , e 3 ] = α e 2 , − 1 ≤ α < 1 , α ≠ 0 ; {\displaystyle [e_{1},e_{3}]=e_{1},\quad [e_{2},e_{3}]=\alpha e_{2},\quad -1\leq \alpha <1,\quad \alpha \neq 0;} g 3.5 {\displaystyle {\mathfrak {g}}_{3.5}} , solvable, Bianchi VII, [ e 1 , e 3 ] = β e 1 − e 2 , [ e 2 , e 3 ] = e 1 + β e 2 , β ≥ 0 ; {\displaystyle [e_{1},e_{3}]=\beta e_{1}-e_{2},\quad [e_{2},e_{3}]=e_{1}+\beta e_{2},\quad \beta \geq 0;} g 3.6 {\displaystyle {\mathfrak {g}}_{3.6}} , simple, Bianchi VIII, s l ( 2 , R ) , {\displaystyle {\mathfrak {sl}}(2,\mathbb {R} ),} [ e 1 , e 2 ] = e 1 , [ e 2 , e 3 ] = e 3 , [ e 1 , e 3 ] = 2 e 2 ; {\displaystyle [e_{1},e_{2}]=e_{1},\quad [e_{2},e_{3}]=e_{3},\quad [e_{1},e_{3}]=2e_{2};} g 3.7 {\displaystyle {\mathfrak {g}}_{3.7}} , simple, Bianchi IX, s o ( 3 ) , {\displaystyle {\mathfrak {so}}(3),} [ e 2 , e 3 ] = e 1 , [ e 3 , e 1 ] = e 2 , [ e 1 , e 2 ] = e 3 . {\displaystyle [e_{2},e_{3}]=e_{1},\quad [e_{3},e_{1}]=e_{2},\quad [e_{1},e_{2}]=e_{3}.} Algebra g 3.3 {\displaystyle {\mathfrak {g}}_{3.3}} can be considered as an extreme case of g 3.5 {\displaystyle {\mathfrak {g}}_{3.5}} , when β → ∞ {\displaystyle \beta \rightarrow \infty } , forming contraction of Lie algebra. Over the field C {\displaystyle {\mathbb {C} }} algebras g 3.5 {\displaystyle {\mathfrak {g}}_{3.5}} , g 3.7 {\displaystyle {\mathfrak {g}}_{3.7}} are isomorphic to g 3.4 {\displaystyle {\mathfrak {g}}_{3.4}} and g 3.6 {\displaystyle {\mathfrak {g}}_{3.6}} , respectively. 
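The brackets listed above lend themselves to mechanical verification. The sketch below (sympy; the function check_jacobi and the dictionary encoding of structure constants are our own conventions, with 0-based indices) confirms the Jacobi identity for a few of the three-dimensional algebras, keeping the parameter α symbolic for g₃.₄.

import sympy as sp
from itertools import permutations

def check_jacobi(dim, brackets):
    # brackets: {(i, j): {k: coeff}} for i < j, meaning [e_i, e_j] = sum_k coeff * e_k
    def b(i, j):
        if i == j:
            return {}
        if i < j:
            return brackets.get((i, j), {})
        return {k: -c for k, c in brackets.get((j, i), {}).items()}
    def ad(i, vec):          # [e_i, vec] for vec a coefficient dictionary
        out = {}
        for j, cj in vec.items():
            for k, c in b(i, j).items():
                out[k] = out.get(k, 0) + cj * c
        return out
    for i, j, k in permutations(range(dim), 3):
        total = {}
        for a, inner in ((i, b(j, k)), (j, b(k, i)), (k, b(i, j))):
            for m, c in ad(a, inner).items():
                total[m] = total.get(m, 0) + c
        assert all(sp.expand(c) == 0 for c in total.values()), (i, j, k)

alpha = sp.symbols("alpha")
check_jacobi(3, {(0, 2): {0: 1}, (1, 2): {0: 1, 1: 1}})            # g_{3.2}
check_jacobi(3, {(0, 2): {0: 1}, (1, 2): {1: alpha}})              # g_{3.4}
check_jacobi(3, {(0, 1): {0: 1}, (1, 2): {2: 1}, (0, 2): {1: 2}})  # g_{3.6} = sl(2, R)
print("Jacobi identity verified")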
=== Four-dimensional === 4 g 1 {\displaystyle 4{\mathfrak {g}}_{1}} , abelian; g 2.1 ⊕ 2 g 1 {\displaystyle {\mathfrak {g}}_{2.1}\oplus 2{\mathfrak {g}}_{1}} , decomposable solvable, [ e 1 , e 2 ] = e 1 ; {\displaystyle [e_{1},e_{2}]=e_{1};} 2 g 2.1 {\displaystyle 2{\mathfrak {g}}_{2.1}} , decomposable solvable, [ e 1 , e 2 ] = e 1 [ e 3 , e 4 ] = e 3 ; {\displaystyle [e_{1},e_{2}]=e_{1}\quad [e_{3},e_{4}]=e_{3};} g 3.1 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.1}\oplus {\mathfrak {g}}_{1}} , decomposable nilpotent, [ e 2 , e 3 ] = e 1 ; {\displaystyle [e_{2},e_{3}]=e_{1};} g 3.2 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.2}\oplus {\mathfrak {g}}_{1}} , decomposable solvable, [ e 1 , e 3 ] = e 1 , [ e 2 , e 3 ] = e 1 + e 2 ; {\displaystyle [e_{1},e_{3}]=e_{1},\quad [e_{2},e_{3}]=e_{1}+e_{2};} g 3.3 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.3}\oplus {\mathfrak {g}}_{1}} , decomposable solvable, [ e 1 , e 3 ] = e 1 , [ e 2 , e 3 ] = e 2 ; {\displaystyle [e_{1},e_{3}]=e_{1},\quad [e_{2},e_{3}]=e_{2};} g 3.4 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.4}\oplus {\mathfrak {g}}_{1}} , decomposable solvable, [ e 1 , e 3 ] = e 1 , [ e 2 , e 3 ] = α e 2 , − 1 ≤ α < 1 , α ≠ 0 ; {\displaystyle [e_{1},e_{3}]=e_{1},\quad [e_{2},e_{3}]=\alpha e_{2},\quad -1\leq \alpha <1,\quad \alpha \neq 0;} g 3.5 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.5}\oplus {\mathfrak {g}}_{1}} , decomposable solvable, [ e 1 , e 3 ] = β e 1 − e 2 [ e 2 , e 3 ] = e 1 + β e 2 , β ≥ 0 ; {\displaystyle [e_{1},e_{3}]=\beta e_{1}-e_{2}\quad [e_{2},e_{3}]=e_{1}+\beta e_{2},\quad \beta \geq 0;} g 3.6 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.6}\oplus {\mathfrak {g}}_{1}} , unsolvable, [ e 1 , e 2 ] = e 1 , [ e 2 , e 3 ] = e 3 , [ e 1 , e 3 ] = 2 e 2 ; {\displaystyle [e_{1},e_{2}]=e_{1},\quad [e_{2},e_{3}]=e_{3},\quad [e_{1},e_{3}]=2e_{2};} g 3.7 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.7}\oplus {\mathfrak {g}}_{1}} , unsolvable, [ e 1 , e 2 ] = e 3 , [ e 2 , e 3 ] = e 1 , [ e 3 , e 1 ] = e 2 ; {\displaystyle [e_{1},e_{2}]=e_{3},\quad [e_{2},e_{3}]=e_{1},\quad [e_{3},e_{1}]=e_{2};} g 4.1 {\displaystyle {\mathfrak {g}}_{4.1}} , indecomposable nilpotent, [ e 2 , e 4 ] = e 1 , [ e 3 , e 4 ] = e 2 ; {\displaystyle [e_{2},e_{4}]=e_{1},\quad [e_{3},e_{4}]=e_{2};} g 4.2 {\displaystyle {\mathfrak {g}}_{4.2}} , indecomposable solvable, [ e 1 , e 4 ] = β e 1 , [ e 2 , e 4 ] = e 2 , [ e 3 , e 4 ] = e 2 + e 3 , β ≠ 0 ; {\displaystyle [e_{1},e_{4}]=\beta e_{1},\quad [e_{2},e_{4}]=e_{2},\quad [e_{3},e_{4}]=e_{2}+e_{3},\quad \beta \neq 0;} g 4.3 {\displaystyle {\mathfrak {g}}_{4.3}} , indecomposable solvable, [ e 1 , e 4 ] = e 1 , [ e 3 , e 4 ] = e 2 ; {\displaystyle [e_{1},e_{4}]=e_{1},\quad [e_{3},e_{4}]=e_{2};} g 4.4 {\displaystyle {\mathfrak {g}}_{4.4}} , indecomposable solvable, [ e 1 , e 4 ] = e 1 , [ e 2 , e 4 ] = e 1 + e 2 , [ e 3 , e 4 ] = e 2 + e 3 ; {\displaystyle [e_{1},e_{4}]=e_{1},\quad [e_{2},e_{4}]=e_{1}+e_{2},\quad [e_{3},e_{4}]=e_{2}+e_{3};} g 4.5 {\displaystyle {\mathfrak {g}}_{4.5}} , indecomposable solvable, [ e 1 , e 4 ] = α e 1 , [ e 2 , e 4 ] = β e 2 , [ e 3 , e 4 ] = γ e 3 , α β γ ≠ 0 ; {\displaystyle [e_{1},e_{4}]=\alpha e_{1},\quad [e_{2},e_{4}]=\beta e_{2},\quad [e_{3},e_{4}]=\gamma e_{3},\quad \alpha \beta \gamma \neq 0;} g 4.6 {\displaystyle {\mathfrak {g}}_{4.6}} , indecomposable solvable, [ e 1 , e 4 ] = α e 1 , [ e 2 , e 4 ] = β e 2 − e 3 , [ e 3 , e 4 ] = e 2 + β e 3 , α > 0 ; {\displaystyle [e_{1},e_{4}]=\alpha e_{1},\quad [e_{2},e_{4}]=\beta e_{2}-e_{3},\quad [e_{3},e_{4}]=e_{2}+\beta e_{3},\quad \alpha >0;} g 4.7 {\displaystyle 
{\mathfrak {g}}_{4.7}} , indecomposable solvable, [ e 2 , e 3 ] = e 1 , [ e 1 , e 4 ] = 2 e 1 , [ e 2 , e 4 ] = e 2 , [ e 3 , e 4 ] = e 2 + e 3 ; {\displaystyle [e_{2},e_{3}]=e_{1},\quad [e_{1},e_{4}]=2e_{1},\quad [e_{2},e_{4}]=e_{2},\quad [e_{3},e_{4}]=e_{2}+e_{3};} g 4.8 {\displaystyle {\mathfrak {g}}_{4.8}} , indecomposable solvable, [ e 2 , e 3 ] = e 1 , [ e 1 , e 4 ] = ( 1 + β ) e 1 , [ e 2 , e 4 ] = e 2 , [ e 3 , e 4 ] = β e 3 , − 1 ≤ β ≤ 1 ; {\displaystyle [e_{2},e_{3}]=e_{1},\quad [e_{1},e_{4}]=(1+\beta )e_{1},\quad [e_{2},e_{4}]=e_{2},\quad [e_{3},e_{4}]=\beta e_{3},\quad -1\leq \beta \leq 1;} g 4.9 {\displaystyle {\mathfrak {g}}_{4.9}} , indecomposable solvable, [ e 2 , e 3 ] = e 1 , [ e 1 , e 4 ] = 2 α e 1 , [ e 2 , e 4 ] = α e 2 − e 3 , [ e 3 , e 4 ] = e 2 + α e 3 , α ≥ 0 ; {\displaystyle [e_{2},e_{3}]=e_{1},\quad [e_{1},e_{4}]=2\alpha e_{1},\quad [e_{2},e_{4}]=\alpha e_{2}-e_{3},\quad [e_{3},e_{4}]=e_{2}+\alpha e_{3},\quad \alpha \geq 0;} g 4.10 {\displaystyle {\mathfrak {g}}_{4.10}} , indecomposable solvable, [ e 1 , e 3 ] = e 1 , [ e 2 , e 3 ] = e 2 , [ e 1 , e 4 ] = − e 2 , [ e 2 , e 4 ] = e 1 . {\displaystyle [e_{1},e_{3}]=e_{1},\quad [e_{2},e_{3}]=e_{2},\quad [e_{1},e_{4}]=-e_{2},\quad [e_{2},e_{4}]=e_{1}.} Algebra g 4.3 {\displaystyle {\mathfrak {g}}_{4.3}} can be considered as an extreme case of g 4.2 {\displaystyle {\mathfrak {g}}_{4.2}} , when β → 0 {\displaystyle \beta \rightarrow 0} , forming contraction of Lie algebra. Over the field C {\displaystyle {\mathbb {C} }} algebras g 3.5 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.5}\oplus {\mathfrak {g}}_{1}} , g 3.7 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.7}\oplus {\mathfrak {g}}_{1}} , g 4.6 {\displaystyle {\mathfrak {g}}_{4.6}} , g 4.9 {\displaystyle {\mathfrak {g}}_{4.9}} , g 4.10 {\displaystyle {\mathfrak {g}}_{4.10}} are isomorphic to g 3.4 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.4}\oplus {\mathfrak {g}}_{1}} , g 3.6 ⊕ g 1 {\displaystyle {\mathfrak {g}}_{3.6}\oplus {\mathfrak {g}}_{1}} , g 4.5 {\displaystyle {\mathfrak {g}}_{4.5}} , g 4.8 {\displaystyle {\mathfrak {g}}_{4.8}} , 2 g 2.1 {\displaystyle {2{\mathfrak {g}}}_{2.1}} , respectively. == See also == Table of Lie groups Simple Lie group#Full classification == Notes == == References == Mubarakzyanov, G.M. (1963). "On solvable Lie algebras". Izv. Vys. Ucheb. Zaved. Matematika (in Russian). 1 (32): 114–123. MR 0153714. Zbl 0166.04104. Popovych, R.O.; Boyko, V.M.; Nesterenko, M.O.; Lutfullin, M.W.; et al. (2003). "Realizations of real low-dimensional Lie algebras". J. Phys. A: Math. Gen. 36 (26): 7337–7360. arXiv:math-ph/0301029. Bibcode:2003JPhA...36.7337P. doi:10.1088/0305-4470/36/26/309. S2CID 9800361.
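The same kind of mechanical check works in dimension four. Here is a self-contained sympy version for g₄.₈ with β symbolic; the tensor encoding C[i][j][k], meaning [e_i, e_j] = Σₖ C[i][j][k] e_k, is our own.

import sympy as sp
from itertools import permutations

beta = sp.symbols("beta")
C = [[[0] * 4 for _ in range(4)] for _ in range(4)]   # structure constants

def setb(i, j, vec):          # record [e_i, e_j] = sum_k vec[k] e_k, antisymmetrically
    for k, c in vec.items():
        C[i][j][k] = c
        C[j][i][k] = -c

setb(1, 2, {0: 1})            # [e2, e3] = e1            (indices are 0-based)
setb(0, 3, {0: 1 + beta})     # [e1, e4] = (1 + beta) e1
setb(1, 3, {1: 1})            # [e2, e4] = e2
setb(2, 3, {2: beta})         # [e3, e4] = beta e3

for i, j, k in permutations(range(4), 3):
    for m in range(4):
        s = sum(C[i][j][l] * C[l][k][m]
                + C[j][k][l] * C[l][i][m]
                + C[k][i][l] * C[l][j][m] for l in range(4))
        assert sp.expand(s) == 0
print("g_{4.8} satisfies the Jacobi identity for every beta")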
Wikipedia/Classification_of_low-dimensional_real_Lie_algebras
In mathematics, a (right) Leibniz algebra, named after Gottfried Wilhelm Leibniz, sometimes called a Loday algebra, after Jean-Louis Loday, is a module L over a commutative ring R with a bilinear product [ _ , _ ] satisfying the Leibniz identity [ [ a , b ] , c ] = [ a , [ b , c ] ] + [ [ a , c ] , b ] . {\displaystyle [[a,b],c]=[a,[b,c]]+[[a,c],b].\,} In other words, right multiplication by any element c is a derivation. If in addition the bracket is alternating ([a, a] = 0) then the Leibniz algebra is a Lie algebra. Indeed, in this case [a, b] = −[b, a] and the Leibniz identity is equivalent to Jacobi's identity ([a, [b, c]] + [c, [a, b]] + [b, [c, a]] = 0). Conversely any Lie algebra is obviously a Leibniz algebra. In this sense, Leibniz algebras can be seen as a non-commutative generalization of Lie algebras. The investigation of which theorems and properties of Lie algebras are still valid for Leibniz algebras is a recurrent theme in the literature. For instance, it has been shown that Engel's theorem still holds for Leibniz algebras and that a weaker version of the Levi–Malcev theorem also holds. The tensor module, T(V) , of any vector space V can be turned into a Loday algebra such that [ a 1 ⊗ ⋯ ⊗ a n , x ] = a 1 ⊗ ⋯ a n ⊗ x for a 1 , … , a n , x ∈ V . {\displaystyle [a_{1}\otimes \cdots \otimes a_{n},x]=a_{1}\otimes \cdots a_{n}\otimes x\quad {\text{for }}a_{1},\ldots ,a_{n},x\in V.} This is the free Loday algebra over V. Leibniz algebras were discovered in 1965 by A. Bloh, who called them D-algebras. They attracted interest after Jean-Louis Loday noticed that the classical Chevalley–Eilenberg boundary map in the exterior module of a Lie algebra can be lifted to the tensor module which yields a new chain complex. In fact this complex is well-defined for any Leibniz algebra. The homology HL(L) of this chain complex is known as Leibniz homology. If L is the Lie algebra of (infinite) matrices over an associative R-algebra A then the Leibniz homology of L is the tensor algebra over the Hochschild homology of A. A Zinbiel algebra is the Koszul dual concept to a Leibniz algebra. It has as defining identity: ( a ∘ b ) ∘ c = a ∘ ( b ∘ c ) + a ∘ ( c ∘ b ) . {\displaystyle (a\circ b)\circ c=a\circ (b\circ c)+a\circ (c\circ b).} == Notes == == References ==
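To make the definition concrete, consider the two-dimensional toy example with [x, x] = y and all other basis brackets zero (our example, not taken from the text above). The sketch below checks the right Leibniz identity on all basis triples; since [x, x] ≠ 0, the bracket is not alternating, so this Leibniz algebra is not a Lie algebra.

from itertools import product

def bra(a, b):                      # bracket on basis symbols, as a coefficient dict
    return {"y": 1} if (a, b) == ("x", "x") else {}

def bra_vec(u, v):                  # bilinear extension to coefficient dicts
    out = {}
    for (a, ca), (b, cb) in product(u.items(), v.items()):
        for k, c in bra(a, b).items():
            out[k] = out.get(k, 0) + ca * cb * c
    return {k: c for k, c in out.items() if c != 0}

e = lambda a: {a: 1}
for a, b, c in product("xy", repeat=3):
    lhs = bra_vec(bra_vec(e(a), e(b)), e(c))            # [[a, b], c]
    rhs = {}
    for t in (bra_vec(e(a), bra_vec(e(b), e(c))),       # [a, [b, c]]
              bra_vec(bra_vec(e(a), e(c)), e(b))):      # [[a, c], b]
        for k, v in t.items():
            rhs[k] = rhs.get(k, 0) + v
    assert lhs == {k: v for k, v in rhs.items() if v != 0}
assert bra("x", "x") == {"y": 1}    # [x, x] != 0: not alternating, hence not Lie
print("Leibniz identity holds; bracket not alternating")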
Wikipedia/Leibniz_algebra
In mathematics, the Virasoro algebra is a complex Lie algebra and the unique nontrivial central extension of the Witt algebra. It is widely used in two-dimensional conformal field theory and in string theory. It is named after Miguel Ángel Virasoro. == Structure == The Virasoro algebra is spanned by generators Ln for n ∈ ℤ and the central charge c. These generators satisfy [ c , L n ] = 0 {\displaystyle [c,L_{n}]=0} and The factor of 1 12 {\displaystyle {\frac {1}{12}}} is merely a matter of convention. For a derivation of the algebra as the unique central extension of the Witt algebra, see derivation of the Virasoro algebra or Schottenloher, Thm. 5.1, pp. 79. The Virasoro algebra has a presentation in terms of two generators (e.g. L3 and L−2) and six relations. The generators L n > 0 {\displaystyle L_{n>0}} are called annihilation modes, while L n < 0 {\displaystyle L_{n<0}} are creation modes. A basis of creation generators of the Virasoro algebra's universal enveloping algebra is the set L = { L − n 1 L − n 2 ⋯ L − n k } k ∈ N 0 < n 1 ≤ n 2 ≤ ⋯ n k {\displaystyle {\mathcal {L}}={\Big \{}L_{-n_{1}}L_{-n_{2}}\cdots L_{-n_{k}}{\Big \}}_{\begin{array}{l}k\in \mathbb {N} \\0<n_{1}\leq n_{2}\leq \cdots n_{k}\end{array}}} For L ∈ L {\displaystyle L\in {\mathcal {L}}} , let | L | = ∑ i = 1 k n i {\displaystyle |L|=\sum _{i=1}^{k}n_{i}} , then [ L 0 , L ] = | L | L {\displaystyle [L_{0},L]=|L|L} . == Representation theory == In any indecomposable representation of the Virasoro algebra, the central generator c {\displaystyle c} of the algebra takes a constant value, also denoted c {\displaystyle c} and called the representation's central charge. A vector v {\displaystyle v} in a representation of the Virasoro algebra has conformal dimension (or conformal weight) h {\displaystyle h} if it is an eigenvector of L 0 {\displaystyle L_{0}} with eigenvalue h {\displaystyle h} : L 0 v = h v {\displaystyle L_{0}v=hv} An L 0 {\displaystyle L_{0}} -eigenvector v {\displaystyle v} is called a primary state (of dimension h {\displaystyle h} ) if it is annihilated by the annihilation modes, L n > 0 v = 0 {\displaystyle L_{n>0}v=0} === Highest weight representations === A highest weight representation of the Virasoro algebra is a representation generated by a primary state v {\displaystyle v} . A highest weight representation is spanned by the L 0 {\displaystyle L_{0}} -eigenstates { L v } L ∈ L {\displaystyle \{Lv\}_{L\in {\mathcal {L}}}} . The conformal dimension of L v {\displaystyle Lv} is h + | L | {\displaystyle h+|L|} , where | L | ∈ N {\displaystyle |L|\in \mathbb {N} } is called the level of L v {\displaystyle Lv} . Any state whose level is not zero is called a descendant state of v {\displaystyle v} . For any h , c ∈ C {\displaystyle h,c\in \mathbb {C} } , the Verma module V c , h {\displaystyle {\mathcal {V}}_{c,h}} of central charge c {\displaystyle c} and conformal dimension h {\displaystyle h} is the representation whose basis is { L v } L ∈ L {\displaystyle \{Lv\}_{L\in {\mathcal {L}}}} , for v {\displaystyle v} a primary state of dimension h {\displaystyle h} . The Verma module is the largest possible highest weight representation. The Verma module is indecomposable, and for generic values of h , c ∈ C {\displaystyle h,c\in \mathbb {C} } it is also irreducible. When it is reducible, there exist other highest weight representations with these values of h , c ∈ C {\displaystyle h,c\in \mathbb {C} } , called degenerate representations, which are quotients of the Verma module. 
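As an aside on the defining relations before continuing with degenerate representations: written out with the 1/12 normalization mentioned above, the commutators are [L_m, L_n] = (m − n)L_{m+n} + (c/12)(m³ − m)δ_{m+n,0}. The sympy sketch below (our own encoding of Lie-algebra elements as generator-to-coefficient dictionaries) verifies the Jacobi identity for these relations on a window of mode indices, keeping the central charge c symbolic.

import sympy as sp
from itertools import product

c = sp.symbols("c")

def bracket(m, n):                  # [L_m, L_n] as {generator: coefficient}
    out = {("L", m + n): sp.Integer(m - n)}
    if m + n == 0:
        out["c"] = c / 12 * (m**3 - m)
    return out

def ad(m, vec):                     # [L_m, vec]; the central element c is killed
    out = {}
    for g, coeff in vec.items():
        if g == "c":
            continue
        for h, d in bracket(m, g[1]).items():
            out[h] = sp.expand(out.get(h, 0) + coeff * d)
    return out

for m, n, p in product(range(-3, 4), repeat=3):
    total = {}
    for a, inner in ((m, bracket(n, p)), (n, bracket(p, m)), (p, bracket(m, n))):
        for g, coeff in ad(a, inner).items():
            total[g] = sp.expand(total.get(g, 0) + coeff)
    assert all(v == 0 for v in total.values()), (m, n, p)
print("Jacobi identity holds for the Virasoro relations")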
In particular, the unique irreducible highest weight representation with these values of h , c ∈ C {\displaystyle h,c\in \mathbb {C} } is the quotient of the Verma module by its maximal submodule. A Verma module is irreducible if and only if it has no singular vectors. === Singular vectors === A singular vector or null vector of a highest weight representation is a state that is both descendant and primary. A sufficient condition for the Verma module V c , h {\displaystyle {\mathcal {V}}_{c,h}} to have a singular vector is h = h r , s ( c ) {\displaystyle h=h_{r,s}(c)} for some r , s ∈ N ∗ {\displaystyle r,s\in \mathbb {N} ^{*}} , where h r , s ( c ) = 1 4 ( ( β r − β − 1 s ) 2 − ( β − β − 1 ) 2 ) , where c = 1 − 6 ( β − β − 1 ) 2 . {\displaystyle h_{r,s}(c)={\frac {1}{4}}{\Big (}(\beta r-\beta ^{-1}s)^{2}-(\beta -\beta ^{-1})^{2}{\Big )}\ ,\quad {\text{where}}\quad c=1-6(\beta -\beta ^{-1})^{2}\ .} Then the singular vector has level r s {\displaystyle rs} and conformal dimension h r , s + r s = h r , − s {\displaystyle h_{r,s}+rs=h_{r,-s}} Here are the values of h r , s ( c ) {\displaystyle h_{r,s}(c)} for r s ≤ 4 {\displaystyle rs\leq 4} , together with the corresponding singular vectors, written as L r , s v {\displaystyle L_{r,s}v} for v {\displaystyle v} the primary state of V c , h r , s ( c ) {\displaystyle {\mathcal {V}}_{c,h_{r,s}(c)}} : r , s h r , s L r , s 1 , 1 0 L − 1 2 , 1 − 1 2 + 3 4 β 2 L − 1 2 − β 2 L − 2 1 , 2 − 1 2 + 3 4 β − 2 L − 1 2 − β − 2 L − 2 3 , 1 − 1 + 2 β 2 L − 1 3 − 4 β 2 L − 1 L − 2 + 2 β 2 ( 2 β 2 + 1 ) L − 3 1 , 3 − 1 + 2 β − 2 L − 1 3 − 4 β − 2 L − 1 L − 2 + 2 β − 2 ( 2 β − 2 + 1 ) L − 3 4 , 1 − 3 2 + 15 4 β 2 L − 1 4 − 10 β 2 L − 1 2 L − 2 + 2 β 2 ( 12 β 2 + 5 ) L − 1 L − 3 + 9 β 4 L − 2 2 − 6 β 2 ( 6 β 4 + 4 β 2 + 1 ) L − 4 2 , 2 3 4 ( β − β − 1 ) 2 L − 1 4 − 2 ( β 2 + β − 2 ) L − 1 2 L − 2 + ( β 2 − β − 2 ) 2 L − 2 2 + 2 ( 1 + ( β + β − 1 ) 2 ) L − 1 L − 3 − 2 ( β + β − 1 ) 2 L − 4 1 , 4 − 3 2 + 15 4 β − 2 L − 1 4 − 10 β − 2 L − 1 2 L − 2 + 2 β − 2 ( 12 β − 2 + 5 ) L − 1 L − 3 + 9 β − 4 L − 2 2 − 6 β − 2 ( 6 β − 4 + 4 β − 2 + 1 ) L − 4 {\displaystyle {\begin{array}{|c|c|l|}\hline r,s&h_{r,s}&L_{r,s}\\\hline \hline 1,1&0&L_{-1}\\\hline 2,1&-{\frac {1}{2}}+{\frac {3}{4}}\beta ^{2}&L_{-1}^{2}-\beta ^{2}L_{-2}\\\hline 1,2&-{\frac {1}{2}}+{\frac {3}{4}}\beta ^{-2}&L_{-1}^{2}-\beta ^{-2}L_{-2}\\\hline 3,1&-1+2\beta ^{2}&L_{-1}^{3}-4\beta ^{2}L_{-1}L_{-2}+2\beta ^{2}(2\beta ^{2}+1)L_{-3}\\\hline 1,3&-1+2\beta ^{-2}&L_{-1}^{3}-4\beta ^{-2}L_{-1}L_{-2}+2\beta ^{-2}(2\beta ^{-2}+1)L_{-3}\\\hline 4,1&-{\frac {3}{2}}+{\frac {15}{4}}\beta ^{2}&{\begin{array}{r}L_{-1}^{4}-10\beta ^{2}L_{-1}^{2}L_{-2}+2\beta ^{2}\left(12\beta ^{2}+5\right)L_{-1}L_{-3}\\+9\beta ^{4}L_{-2}^{2}-6\beta ^{2}\left(6\beta ^{4}+4\beta ^{2}+1\right)L_{-4}\end{array}}\\\hline 2,2&{\frac {3}{4}}\left(\beta -\beta ^{-1}\right)^{2}&{\begin{array}{l}L_{-1}^{4}-2\left(\beta ^{2}+\beta ^{-2}\right)L_{-1}^{2}L_{-2}+\left(\beta ^{2}-\beta ^{-2}\right)^{2}L_{-2}^{2}\\+2\left(1+\left(\beta +\beta ^{-1}\right)^{2}\right)L_{-1}L_{-3}-2\left(\beta +\beta ^{-1}\right)^{2}L_{-4}\end{array}}\\\hline 1,4&-{\frac {3}{2}}+{\frac {15}{4}}\beta ^{-2}&{\begin{array}{r}L_{-1}^{4}-10\beta ^{-2}L_{-1}^{2}L_{-2}+2\beta ^{-2}\left(12\beta ^{-2}+5\right)L_{-1}L_{-3}\\+9\beta ^{-4}L_{-2}^{2}-6\beta ^{-2}\left(6\beta ^{-4}+4\beta ^{-2}+1\right)L_{-4}\end{array}}\\\hline \end{array}}} Singular vectors for arbitrary r , s ∈ N ∗ {\displaystyle r,s\in \mathbb {N} ^{*}} may be computed using various algorithms, and their 
explicit expressions are known. If β 2 ∉ Q {\displaystyle \beta ^{2}\notin \mathbb {Q} } , then V c , h {\displaystyle {\mathcal {V}}_{c,h}} has a singular vector at level N {\displaystyle N} if and only if h = h r , s ( c ) {\displaystyle h=h_{r,s}(c)} with N = r s {\displaystyle N=rs} . If β 2 ∈ Q {\displaystyle \beta ^{2}\in \mathbb {Q} } , there can also exist a singular vector at level N {\displaystyle N} if N = r s + r ′ s ′ {\displaystyle N=rs+r's'} with h = h r , s ( c ) {\displaystyle h=h_{r,s}(c)} and h + r s = h r ′ , s ′ ( c ) {\displaystyle h+rs=h_{r',s'}(c)} . This singular vector is now a descendant of another singular vector at level r s {\displaystyle rs} . The integers r , s {\displaystyle r,s} that appear in h r , s ( c ) {\displaystyle h_{r,s}(c)} are called Kac indices. It can be useful to use non-integer Kac indices for parametrizing the conformal dimensions of Verma modules that do not have singular vectors, for example in the critical random cluster model. === Shapovalov form === For any c , h ∈ C {\displaystyle c,h\in \mathbb {C} } , the involution L n ↦ L ∗ = L − n {\displaystyle L_{n}\mapsto L^{*}=L_{-n}} defines an automorphism of the Virasoro algebra and of its universal enveloping algebra. Then the Shapovalov form is the symmetric bilinear form on the Verma module V c , h {\displaystyle {\mathcal {V}}_{c,h}} such that ( L v , L ′ v ) = S L , L ′ ( c , h ) {\displaystyle (Lv,L'v)=S_{L,L'}(c,h)} , where the numbers S L , L ′ ( c , h ) {\displaystyle S_{L,L'}(c,h)} are defined by L ∗ L ′ v = | L | = | L ′ | S L , L ′ ( c , h ) v {\displaystyle L^{*}L'v{\underset {|L|=|L'|}{=}}S_{L,L'}(c,h)v} and S L , L ′ ( c , h ) = | L | ≠ | L ′ | 0 {\displaystyle S_{L,L'}(c,h){\underset {|L|\neq |L'|}{=}}0} . The inverse Shapovalov form is relevant to computing Virasoro conformal blocks, and can be determined in terms of singular vectors. The determinant of the Shapovalov form at a given level N {\displaystyle N} is given by the Kac determinant formula, det ( S L , L ′ ( c , h ) ) L , L ′ ∈ L | L | = | L ′ | = N = A N ∏ 1 ≤ r , s ≤ N ( h − h r , s ( c ) ) p ( N − r s ) , {\displaystyle \det \left(S_{L,L'}(c,h)\right)_{\begin{array}{l}L,L'\in {\mathcal {L}}\\|L|=|L'|=N\end{array}}=A_{N}\prod _{1\leq r,s\leq N}{\big (}h-h_{r,s}(c){\big )}^{p(N-rs)},} where p ( N ) {\displaystyle p(N)} is the partition function, and A N {\displaystyle A_{N}} is a positive constant that does not depend on h {\displaystyle h} or c {\displaystyle c} . === Hermitian form and unitarity === If c , h ∈ R {\displaystyle c,h\in \mathbb {R} } , a highest weight representation with conformal dimension h {\displaystyle h} has a unique Hermitian form such that the Hermitian adjoint of L n {\displaystyle L_{n}} is L n † = L − n {\displaystyle L_{n}^{\dagger }=L_{-n}} and the norm of the primary state v {\displaystyle v} is one. In the basis ( L v ) L ∈ L {\displaystyle (Lv)_{L\in {\mathcal {L}}}} , the Hermitian form on the Verma module V c , h {\displaystyle {\mathcal {V}}_{c,h}} has the same matrix as the Shapovalov form S L , L ′ ( c , h ) {\displaystyle S_{L,L'}(c,h)} , now interpreted as a Gram matrix. The representation is called unitary if that Hermitian form is positive definite. Since any singular vector has zero norm, all unitary highest weight representations are irreducible. 
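At level 2 the Shapovalov form can be written down by hand from the commutation relations: in the basis {L₋₁²v, L₋₂v} one computes ⟨L₋₁²v, L₋₁²v⟩ = 8h² + 4h, ⟨L₋₁²v, L₋₂v⟩ = 6h (from L₁L₋₂v = 3L₋₁v) and ⟨L₋₂v, L₋₂v⟩ = 4h + c/2, while the Kac formula predicts the determinant 32 h (h − h₁,₂)(h − h₂,₁), with A₂ = 32 and the factor h coming from h₁,₁ = 0. A sympy sketch confirming this, in the β-parametrization used above:

import sympy as sp

h, beta = sp.symbols("h beta", positive=True)
c = 1 - 6 * (beta - 1 / beta)**2

# Gram matrix at level 2 in the basis {L_{-1}^2 v, L_{-2} v}
S = sp.Matrix([[8 * h**2 + 4 * h, 6 * h],
               [6 * h, 4 * h + c / 2]])

h12 = sp.Rational(-1, 2) + sp.Rational(3, 4) / beta**2   # h_{1,2}
h21 = sp.Rational(-1, 2) + sp.Rational(3, 4) * beta**2   # h_{2,1}
kac = 32 * h * (h - h12) * (h - h21)                     # A_2 = 32, p(1) = p(0) = 1
assert sp.expand(S.det() - kac) == 0
print("level-2 Kac determinant matches")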
An irreducible highest weight representation is unitary if and only if either c ≥ 1 {\displaystyle c\geq 1} with h ≥ 0 {\displaystyle h\geq 0} , or c ∈ { 1 − 6 m ( m + 1 ) } m = 2 , 3 , 4 , … = { 0 , 1 2 , 7 10 , 4 5 , 6 7 , 25 28 , … } {\displaystyle c\in \left\{1-{\frac {6}{m(m+1)}}\right\}_{m=2,3,4,\ldots }=\left\{0,{\frac {1}{2}},{\frac {7}{10}},{\frac {4}{5}},{\frac {6}{7}},{\frac {25}{28}},\ldots \right\}} with h ∈ { h r , s ( c ) = ( ( m + 1 ) r − m s ) 2 − 1 4 m ( m + 1 ) } r = 1 , 2 , . . . , m − 1 s = 1 , 2 , . . . , m {\displaystyle h\in \left\{h_{r,s}(c)={\frac {{\big (}(m+1)r-ms{\big )}^{2}-1}{4m(m+1)}}\right\}_{\begin{array}{l}r=1,2,...,m-1\\s=1,2,...,m\end{array}}} Daniel Friedan, Zongan Qiu, and Stephen Shenker showed that these conditions are necessary, and Peter Goddard, Adrian Kent, and David Olive used the coset construction or GKO construction (identifying unitary representations of the Virasoro algebra within tensor products of unitary representations of affine Kac–Moody algebras) to show that they are sufficient. === Characters === The character of a representation R {\displaystyle {\mathcal {R}}} of the Virasoro algebra is the function χ R ( q ) = Tr R ⁡ q L 0 − c 24 . {\displaystyle \chi _{\mathcal {R}}(q)=\operatorname {Tr} _{\mathcal {R}}q^{L_{0}-{\frac {c}{24}}}.} The character of the Verma module V c , h {\displaystyle {\mathcal {V}}_{c,h}} is χ V c , h ( q ) = q h − c 24 ∏ n = 1 ∞ ( 1 − q n ) = q h − c − 1 24 η ( q ) = q h − c 24 ( 1 + q + 2 q 2 + 3 q 3 + 5 q 4 + ⋯ ) , {\displaystyle \chi _{{\mathcal {V}}_{c,h}}(q)={\frac {q^{h-{\frac {c}{24}}}}{\prod _{n=1}^{\infty }(1-q^{n})}}={\frac {q^{h-{\frac {c-1}{24}}}}{\eta (q)}}=q^{h-{\frac {c}{24}}}\left(1+q+2q^{2}+3q^{3}+5q^{4}+\cdots \right),} where η {\displaystyle \eta } is the Dedekind eta function. For any c ∈ C {\displaystyle c\in \mathbb {C} } and for r , s ∈ N ∗ {\displaystyle r,s\in \mathbb {N} ^{*}} , the Verma module V c , h r , s {\displaystyle {\mathcal {V}}_{c,h_{r,s}}} is reducible due to the existence of a singular vector at level r s {\displaystyle rs} . This singular vector generates a submodule, which is isomorphic to the Verma module V c , h r , s + r s {\displaystyle {\mathcal {V}}_{c,h_{r,s}+rs}} . The quotient of V c , h r , s {\displaystyle {\mathcal {V}}_{c,h_{r,s}}} by this submodule is irreducible if V c , h r , s {\displaystyle {\mathcal {V}}_{c,h_{r,s}}} does not have other singular vectors, and its character is χ V c , h r , s / V c , h r , s + r s = χ V c , h r , s − χ V c , h r , s + r s = ( 1 − q r s ) χ V c , h r , s . {\displaystyle \chi _{{\mathcal {V}}_{c,h_{r,s}}/{\mathcal {V}}_{c,h_{r,s}+rs}}=\chi _{{\mathcal {V}}_{c,h_{r,s}}}-\chi _{{\mathcal {V}}_{c,h_{r,s}+rs}}=(1-q^{rs})\chi _{{\mathcal {V}}_{c,h_{r,s}}}.} Let c = c p , p ′ {\displaystyle c=c_{p,p'}} with 2 ≤ p < p ′ {\displaystyle 2\leq p<p'} and p , p ′ {\displaystyle p,p'} coprime, and 1 ≤ r ≤ p − 1 {\displaystyle 1\leq r\leq p-1} and 1 ≤ s ≤ p ′ − 1 {\displaystyle 1\leq s\leq p'-1} . (Then ( r , s ) {\displaystyle (r,s)} is in the Kac table of the corresponding minimal model). The Verma module V c , h r , s {\displaystyle {\mathcal {V}}_{c,h_{r,s}}} has infinitely many singular vectors, and is therefore reducible with infinitely many submodules. This Verma module has an irreducible quotient by its largest nontrivial submodule. (The spectrums of minimal models are built from such irreducible representations.) 
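A quick series check of the Verma-module character just quoted, before turning to the irreducible quotients: truncating the product ∏(1 − qⁿ) is enough to read off the partition numbers p(n). (sympy; the truncation order N is our choice.)

import sympy as sp

q = sp.symbols("q")
N = 8
denom = sp.Mul(*[(1 - q**n) for n in range(1, N + 1)])
series = sp.series(1 / denom, q, 0, N).removeO()
coeffs = [series.coeff(q, n) for n in range(N)]
assert coeffs == [1, 1, 2, 3, 5, 7, 11, 15]    # p(0), ..., p(7)
print("character coefficients are the partition numbers:", coeffs)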
The character of the irreducible quotient is χ V c , h r , s / ( V c , h r , s + r s + V c , h r , s + ( p − r ) ( p ′ − s ) ) = ∑ k ∈ Z ( χ V c , 1 4 p p ′ ( ( p ′ r − p s + 2 k p p ′ ) 2 − ( p − p ′ ) 2 ) − χ V c , 1 4 p p ′ ( ( p ′ r + p s + 2 k p p ′ ) 2 − ( p − p ′ ) 2 ) ) . {\displaystyle {\begin{aligned}&\chi _{{\mathcal {V}}_{c,h_{r,s}}/({\mathcal {V}}_{c,h_{r,s}+rs}+{\mathcal {V}}_{c,h_{r,s}+(p-r)(p'-s)})}\\&=\sum _{k\in \mathbb {Z} }\left(\chi _{{\mathcal {V}}_{c,{\frac {1}{4pp'}}\left((p'r-ps+2kpp')^{2}-(p-p')^{2}\right)}}-\chi _{{\mathcal {V}}_{c,{\frac {1}{4pp'}}\left((p'r+ps+2kpp')^{2}-(p-p')^{2}\right)}}\right).\end{aligned}}} This expression is an infinite sum because the submodules V c , h r , s + r s {\displaystyle {\mathcal {V}}_{c,h_{r,s}+rs}} and V c , h r , s + ( p − r ) ( p ′ − s ) {\displaystyle {\mathcal {V}}_{c,h_{r,s}+(p-r)(p'-s)}} have a nontrivial intersection, which is itself a complicated submodule. == Applications == === Conformal field theory === In two dimensions, the algebra of local conformal transformations is made of two copies of the Witt algebra. It follows that the symmetry algebra of two-dimensional conformal field theory is the Virasoro algebra. Technically, the conformal bootstrap approach to two-dimensional CFT relies on Virasoro conformal blocks, special functions that include and generalize the characters of representations of the Virasoro algebra. === String theory === Since the Virasoro algebra comprises the generators of the conformal group of the worldsheet, the stress tensor in string theory obeys the commutation relations of (two copies of) the Virasoro algebra. This is because the conformal group decomposes into separate diffeomorphisms of the forward and back lightcones. Diffeomorphism invariance of the worldsheet implies additionally that the stress tensor vanishes. This is known as the Virasoro constraint, and in the quantum theory, cannot be applied to all the states in the theory, but rather only on the physical states (compare Gupta–Bleuler formalism). == Generalizations == === Super Virasoro algebras === There are two supersymmetric N = 1 extensions of the Virasoro algebra, called the Neveu–Schwarz algebra and the Ramond algebra. Their theory is similar to that of the Virasoro algebra, now involving Grassmann numbers. There are further extensions of these algebras with more supersymmetry, such as the N = 2 superconformal algebra. === W-algebras === W-algebras are associative algebras which contain the Virasoro algebra, and which play an important role in two-dimensional conformal field theory. Among W-algebras, the Virasoro algebra has the particularity of being a Lie algebra. === Affine Lie algebras === The Virasoro algebra is a subalgebra of the universal enveloping algebra of any affine Lie algebra, as shown by the Sugawara construction. In this sense, affine Lie algebras are extensions of the Virasoro algebra. === Meromorphic vector fields on Riemann surfaces === The Virasoro algebra is a central extension of the Lie algebra of meromorphic vector fields with two poles on a genus 0 Riemann surface. On a higher-genus compact Riemann surface, the Lie algebra of meromorphic vector fields with two poles also has a central extension, which is a generalization of the Virasoro algebra. This can be further generalized to supermanifolds. 
=== Vertex algebras and conformal algebras === The Virasoro algebra also has vertex algebraic and conformal algebraic counterparts, which basically come from arranging all the basis elements into generating series and working with single objects. == History == The Witt algebra (the Virasoro algebra without the central extension) was discovered by É. Cartan (1909). Its analogues over finite fields were studied by E. Witt in about the 1930s. The central extension of the Witt algebra that gives the Virasoro algebra was first found (in characteristic p > 0) by R. E. Block (1966, page 381) and independently rediscovered (in characteristic 0) by I. M. Gelfand and Dmitry Fuchs (1969). The physicist Miguel Ángel Virasoro (1970) wrote down some operators generating the Virasoro algebra (later known as the Virasoro operators) while studying dual resonance models, though he did not find the central extension. The central extension giving the Virasoro algebra was rediscovered in physics shortly after by J. H. Weis, according to Brower and Thorn (1971, footnote on page 167). == See also == == References == == Further reading == Iohara, Kenji; Koga, Yoshiyuki (2011), Representation theory of the Virasoro algebra, Springer Monographs in Mathematics, London: Springer-Verlag London Ltd., doi:10.1007/978-0-85729-160-8, ISBN 978-0-85729-159-2, MR 2744610 Victor Kac (2001) [1994], "Virasoro algebra", Encyclopedia of Mathematics, EMS Press V. G. Kac, A. K. Raina, Bombay lectures on highest weight representations, World Sci. (1987) ISBN 9971-5-0395-6. Dobrev, V. K. (1986). "Multiplet classification of the indecomposable highest weight modules over the Neveu-Schwarz and Ramond superalgebras". Lett. Math. Phys. 11 (3): 225–234. Bibcode:1986LMaPh..11..225D. doi:10.1007/bf00400220. S2CID 122201087. & correction: ibid. 13 (1987) 260. V. K. Dobrev, "Characters of the irreducible highest weight modules over the Virasoro and super-Virasoro algebras", Suppl. Rendiconti del Circolo Matematico di Palermo, Serie II, Numero 14 (1987) 25-42. Antony Wassermann (2010). "Lecture notes on Kac-Moody and Virasoro algebras". arXiv:1004.1287 [math.RT]. Antony Wassermann (2010). "Direct proofs of the Feigin-Fuchs character formula for unitary representations of the Virasoro algebra". arXiv:1012.6003 [math.RT].
Wikipedia/Virasoro_algebra
The classical Lie algebras are finite-dimensional Lie algebras over a field which can be classified into four types A n {\displaystyle A_{n}} , B n {\displaystyle B_{n}} , C n {\displaystyle C_{n}} and D n {\displaystyle D_{n}} , where for g l ( n ) {\displaystyle {\mathfrak {gl}}(n)} the general linear Lie algebra and I n {\displaystyle I_{n}} the n × n {\displaystyle n\times n} identity matrix: A n := s l ( n + 1 ) = { x ∈ g l ( n + 1 ) : tr ( x ) = 0 } {\displaystyle A_{n}:={\mathfrak {sl}}(n+1)=\{x\in {\mathfrak {gl}}(n+1):{\text{tr}}(x)=0\}} , the special linear Lie algebra; B n := o ( 2 n + 1 ) = { x ∈ g l ( 2 n + 1 ) : x + x T = 0 } {\displaystyle B_{n}:={\mathfrak {o}}(2n+1)=\{x\in {\mathfrak {gl}}(2n+1):x+x^{T}=0\}} , the odd orthogonal Lie algebra; C n := s p ( 2 n ) = { x ∈ g l ( 2 n ) : J n x + x T J n = 0 , J n = ( 0 I n − I n 0 ) } {\displaystyle C_{n}:={\mathfrak {sp}}(2n)=\{x\in {\mathfrak {gl}}(2n):J_{n}x+x^{T}J_{n}=0,J_{n}={\begin{pmatrix}0&I_{n}\\-I_{n}&0\end{pmatrix}}\}} , the symplectic Lie algebra; and D n := o ( 2 n ) = { x ∈ g l ( 2 n ) : x + x T = 0 } {\displaystyle D_{n}:={\mathfrak {o}}(2n)=\{x\in {\mathfrak {gl}}(2n):x+x^{T}=0\}} , the even orthogonal Lie algebra. Except for the low-dimensional cases D 1 = s o ( 2 ) {\displaystyle D_{1}={\mathfrak {so}}(2)} and D 2 = s o ( 4 ) {\displaystyle D_{2}={\mathfrak {so}}(4)} , the classical Lie algebras are simple. The Moyal algebra is an infinite-dimensional Lie algebra that contains all classical Lie algebras as subalgebras. == See also == Simple Lie algebra Classical group == References ==
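Each family really is a Lie subalgebra of gl: the defining linear condition is preserved by the commutator. A numerical sketch (numpy; the projection helpers are our own), exploiting the fact that tr[x, y] = 0 for any two matrices, which is why sl is closed for free:

import numpy as np

rng = np.random.default_rng(0)
n = 3
comm = lambda x, y: x @ y - y @ x

# B_n / D_n types: skew-symmetric matrices are closed under the commutator
a, b = rng.normal(size=(2, 2 * n, 2 * n))
a, b = a - a.T, b - b.T
z = comm(a, b)
assert np.allclose(z + z.T, 0)

# C_n type: the condition J x + x^T J = 0 is preserved as well
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
Jinv = np.linalg.inv(J)
proj = lambda x: x - Jinv @ x.T @ J            # lands in sp(2n)
u, v = map(proj, rng.normal(size=(2, 2 * n, 2 * n)))
assert np.allclose(J @ u + u.T @ J, 0)         # sanity: u is in sp(2n)
w = comm(u, v)
assert np.allclose(J @ w + w.T @ J, 0)

# A_n type: tr(xy - yx) = 0 always, so traceless matrices are closed
x, y = rng.normal(size=(2, n + 1, n + 1))
assert abs(np.trace(comm(x, y))) < 1e-12
print("sl, o, sp are closed under [x, y] = xy - yx")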
Wikipedia/Classical_Lie_algebra
In abstract algebra, the center of a group G is the set of elements that commute with every element of G. It is denoted Z(G), from German Zentrum, meaning center. In set-builder notation, Z(G) = {z ∈ G | ∀g ∈ G, zg = gz}. The center is a normal subgroup, Z ( G ) ◃ G {\displaystyle Z(G)\triangleleft G} , and also a characteristic subgroup, but is not necessarily fully characteristic. The quotient group, G / Z(G), is isomorphic to the inner automorphism group, Inn(G). A group G is abelian if and only if Z(G) = G. At the other extreme, a group is said to be centerless if Z(G) is trivial; i.e., consists only of the identity element. The elements of the center are central elements. == As a subgroup == The center of G is always a subgroup of G. In particular: Z(G) contains the identity element of G, because it commutes with every element of g, by definition: eg = g = ge, where e is the identity; If x and y are in Z(G), then so is xy, by associativity: (xy)g = x(yg) = x(gy) = (xg)y = (gx)y = g(xy) for each g ∈ G; i.e., Z(G) is closed; If x is in Z(G), then so is x−1 as, for all g in G, x−1 commutes with g: (gx = xg) ⇒ (x−1gxx−1 = x−1xgx−1) ⇒ (x−1g = gx−1). Furthermore, the center of G is always an abelian and normal subgroup of G. Since all elements of Z(G) commute, it is closed under conjugation. A group homomorphism f : G → H might not restrict to a homomorphism between their centers. The image elements f (g) commute with the image f ( G ), but they need not commute with all of H unless f is surjective. Thus the center mapping G → Z ( G ) {\displaystyle G\to Z(G)} is not a functor between categories Grp and Ab, since it does not induce a map of arrows. == Conjugacy classes and centralizers == By definition, an element is central whenever its conjugacy class contains only the element itself; i.e. Cl(g) = {g}. The center is the intersection of all the centralizers of elements of G: Z ( G ) = ⋂ g ∈ G Z G ( g ) . {\displaystyle Z(G)=\bigcap _{g\in G}Z_{G}(g).} As centralizers are subgroups, this again shows that the center is a subgroup. == Conjugation == Consider the map f : G → Aut(G), from G to the automorphism group of G defined by f(g) = ϕg, where ϕg is the automorphism of G defined by f(g)(h) = ϕg(h) = ghg−1. The function, f is a group homomorphism, and its kernel is precisely the center of G, and its image is called the inner automorphism group of G, denoted Inn(G). By the first isomorphism theorem we get, G/Z(G) ≃ Inn(G). The cokernel of this map is the group Out(G) of outer automorphisms, and these form the exact sequence 1 ⟶ Z(G) ⟶ G ⟶ Aut(G) ⟶ Out(G) ⟶ 1. == Examples == The center of an abelian group, G, is all of G. The center of the Heisenberg group, H, is the set of matrices of the form: ( 1 0 z 0 1 0 0 0 1 ) {\displaystyle {\begin{pmatrix}1&0&z\\0&1&0\\0&0&1\end{pmatrix}}} The center of a nonabelian simple group is trivial. The center of the dihedral group, Dn, is trivial for odd n ≥ 3. For even n ≥ 4, the center consists of the identity element together with the 180° rotation of the polygon. The center of the quaternion group, Q8 = {1, −1, i, −i, j, −j, k, −k}, is {1, −1}. The center of the symmetric group, Sn, is trivial for n ≥ 3. The center of the alternating group, An, is trivial for n ≥ 4. The center of the general linear group over a field F, GLn(F), is the collection of scalar matrices, { sIn ∣ s ∈ F \ {0} }. The center of the orthogonal group, On(F) is {In, −In}. 
The center of the special orthogonal group, SO(n) is the whole group when n = 2, and otherwise {In, −In} when n is even, and trivial when n is odd. The center of the unitary group, U ( n ) {\displaystyle U(n)} is { e i θ ⋅ I n ∣ θ ∈ [ 0 , 2 π ) } {\displaystyle \left\{e^{i\theta }\cdot I_{n}\mid \theta \in [0,2\pi )\right\}} . The center of the special unitary group, SU ⁡ ( n ) {\displaystyle \operatorname {SU} (n)} is { e i θ ⋅ I n ∣ θ = 2 k π n , k = 0 , 1 , … , n − 1 } {\textstyle \left\lbrace e^{i\theta }\cdot I_{n}\mid \theta ={\frac {2k\pi }{n}},k=0,1,\dots ,n-1\right\rbrace } . The center of the multiplicative group of non-zero quaternions is the multiplicative group of non-zero real numbers. Using the class equation, one can prove that the center of any non-trivial finite p-group is non-trivial. If the quotient group G/Z(G) is cyclic, G is abelian (and hence G = Z(G), so G/Z(G) is trivial). The center of the Rubik's Cube group consists of two elements – the identity (i.e. the solved state) and the superflip. The center of the Pocket Cube group is trivial. The center of the Megaminx group has order 2, and the center of the Kilominx group is trivial. == Higher centers == Quotienting out by the center of a group yields a sequence of groups called the upper central series: (G0 = G) ⟶ (G1 = G0/Z(G0)) ⟶ (G2 = G1/Z(G1)) ⟶ ⋯ The kernel of the map G → Gi is the ith center of G (second center, third center, etc.), denoted Zi(G). Concretely, the (i+1)-st center comprises the elements that commute with all elements up to an element of the ith center. Following this definition, one can define the 0th center of a group to be the identity subgroup. This can be continued to transfinite ordinals by transfinite induction; the union of all the higher centers is called the hypercenter. The ascending chain of subgroups 1 ≤ Z(G) ≤ Z2(G) ≤ ⋯ stabilizes at i (equivalently, Zi(G) = Zi+1(G)) if and only if Gi is centerless. === Examples === For a centerless group, all higher centers are zero, which is the case Z0(G) = Z1(G) of stabilization. By Grün's lemma, the quotient of a perfect group by its center is centerless, hence all higher centers equal the center. This is a case of stabilization at Z1(G) = Z2(G). == See also == Center (algebra) Center (ring theory) Centralizer and normalizer Conjugacy class == Notes == == References == Fraleigh, John B. (2014). A First Course in Abstract Algebra (7 ed.). Pearson. ISBN 978-1-292-02496-7. == External links == "Centre of a group", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
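The centers listed above are easy to verify by brute force for small groups. The sketch below (plain Python; the permutation encoding is ours) generates the dihedral group D4 from a rotation and a reflection acting on the vertices {0, 1, 2, 3} of a square, then computes its center, which should be the identity together with the 180° rotation, as stated in the examples.

def compose(p, q):                 # (p . q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

r = (1, 2, 3, 0)                   # rotation by 90 degrees
s = (0, 3, 2, 1)                   # a reflection
G = {(0, 1, 2, 3)}                 # start from the identity and close up
while True:
    new = {compose(a, b) for a in G for b in G} | {r, s}
    if new <= G:
        break
    G |= new

Z = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}
assert len(G) == 8 and Z == {(0, 1, 2, 3), (2, 3, 0, 1)}
print("Z(D4) = {identity, rotation by 180}")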
Wikipedia/Center_(group_theory)
In mathematics, a quasi-Frobenius Lie algebra ( g , [ , ] , β ) {\displaystyle ({\mathfrak {g}},[\,\,\,,\,\,\,],\beta )} over a field k {\displaystyle k} is a Lie algebra ( g , [ , ] ) {\displaystyle ({\mathfrak {g}},[\,\,\,,\,\,\,])} equipped with a nondegenerate skew-symmetric bilinear form β : g × g → k {\displaystyle \beta :{\mathfrak {g}}\times {\mathfrak {g}}\to k} , which is a Lie algebra 2-cocycle of g {\displaystyle {\mathfrak {g}}} with values in k {\displaystyle k} . In other words, β ( [ X , Y ] , Z ) + β ( [ Z , X ] , Y ) + β ( [ Y , Z ] , X ) = 0 {\displaystyle \beta \left(\left[X,Y\right],Z\right)+\beta \left(\left[Z,X\right],Y\right)+\beta \left(\left[Y,Z\right],X\right)=0} for all X {\displaystyle X} , Y {\displaystyle Y} , Z {\displaystyle Z} in g {\displaystyle {\mathfrak {g}}} . If β {\displaystyle \beta } is a coboundary, which means that there exists a linear form f : g → k {\displaystyle f:{\mathfrak {g}}\to k} such that β ( X , Y ) = f ( [ X , Y ] ) , {\displaystyle \beta (X,Y)=f(\left[X,Y\right]),} then ( g , [ , ] , β ) {\displaystyle ({\mathfrak {g}},[\,\,\,,\,\,\,],\beta )} is called a Frobenius Lie algebra. == Equivalence with pre-Lie algebras with nondegenerate invariant skew-symmetric bilinear form == If ( g , [ , ] , β ) {\displaystyle ({\mathfrak {g}},[\,\,\,,\,\,\,],\beta )} is a quasi-Frobenius Lie algebra, one can define on g {\displaystyle {\mathfrak {g}}} another bilinear product ◃ {\displaystyle \triangleleft } by the formula β ( [ X , Y ] , Z ) = β ( Z ◃ Y , X ) {\displaystyle \beta \left(\left[X,Y\right],Z\right)=\beta \left(Z\triangleleft Y,X\right)} . Then one has [ X , Y ] = X ◃ Y − Y ◃ X {\displaystyle \left[X,Y\right]=X\triangleleft Y-Y\triangleleft X} and ( g , ◃ ) {\displaystyle ({\mathfrak {g}},\triangleleft )} is a pre-Lie algebra. == See also == Lie coalgebra Lie bialgebra Lie algebra cohomology Frobenius algebra Quasi-Frobenius ring == References == Jacobson, Nathan, Lie algebras, Republication of the 1962 original. Dover Publications, Inc., New York, 1979. ISBN 0-486-63832-4 Vyjayanthi Chari and Andrew Pressley, A Guide to Quantum Groups, (1994), Cambridge University Press, Cambridge ISBN 0-521-55884-0.
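The smallest interesting example is the two-dimensional nonabelian Lie algebra aff(1), with [e₁, e₂] = e₁: taking β(e₁, e₂) = 1 gives a nondegenerate skew form, and β = f∘[·,·] for the linear form f with f(e₁) = 1, f(e₂) = 0, so this is even a Frobenius Lie algebra. A small sketch verifying the 2-cocycle identity and the coboundary property on all basis triples (the encodings below are our own):

from itertools import product

def bracket(i, j):                  # coefficients of [e_i, e_j] in (e1, e2)
    if (i, j) == (1, 2):
        return (1, 0)
    if (i, j) == (2, 1):
        return (-1, 0)
    return (0, 0)

B = {(1, 2): 1, (2, 1): -1, (1, 1): 0, (2, 2): 0}   # beta on basis pairs
f = (1, 0)                                          # f(e1), f(e2)

def beta_vec(vec, k):               # beta(sum_i vec_i e_i, e_k)
    return sum(vec[i - 1] * B[(i, k)] for i in (1, 2))

for X, Y, Z in product((1, 2), repeat=3):
    cocycle = (beta_vec(bracket(X, Y), Z)
               + beta_vec(bracket(Z, X), Y)
               + beta_vec(bracket(Y, Z), X))
    assert cocycle == 0
    assert B[(X, Y)] == sum(bracket(X, Y)[i] * f[i] for i in range(2))
print("aff(1) is a (quasi-)Frobenius Lie algebra with beta = f([.,.])")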
Wikipedia/Quasi-Frobenius_Lie_algebra
In mathematics, a Lie algebra is reductive if its adjoint representation is completely reducible, hence the name. More concretely, a Lie algebra is reductive if it is a direct sum of a semisimple Lie algebra and an abelian Lie algebra: g = s ⊕ a ; {\displaystyle {\mathfrak {g}}={\mathfrak {s}}\oplus {\mathfrak {a}};} there are alternative characterizations, given below. == Examples == The most basic example is the Lie algebra g l n {\displaystyle {\mathfrak {gl}}_{n}} of n × n {\displaystyle n\times n} matrices with the commutator as Lie bracket, or more abstractly as the endomorphism algebra of an n-dimensional vector space, g l ( V ) . {\displaystyle {\mathfrak {gl}}(V).} This is the Lie algebra of the general linear group GL(n), and is reductive as it decomposes as g l n = s l n ⊕ k , {\displaystyle {\mathfrak {gl}}_{n}={\mathfrak {sl}}_{n}\oplus {\mathfrak {k}},} corresponding to traceless matrices and scalar matrices. Any semisimple Lie algebra or abelian Lie algebra is a fortiori reductive. Over the real numbers, compact Lie algebras are reductive. == Definitions == A Lie algebra g {\displaystyle {\mathfrak {g}}} over a field of characteristic 0 is called reductive if any of the following equivalent conditions are satisfied: The adjoint representation (the action by bracketing) of g {\displaystyle {\mathfrak {g}}} is completely reducible (a direct sum of irreducible representations). g {\displaystyle {\mathfrak {g}}} admits a faithful, completely reducible, finite-dimensional representation. The radical of g {\displaystyle {\mathfrak {g}}} equals the center: r ( g ) = z ( g ) . {\displaystyle {\mathfrak {r}}({\mathfrak {g}})={\mathfrak {z}}({\mathfrak {g}}).} The radical always contains the center, but need not equal it. g {\displaystyle {\mathfrak {g}}} is the direct sum of a semisimple ideal s 0 {\displaystyle {\mathfrak {s}}_{0}} and its center z ( g ) : {\displaystyle {\mathfrak {z}}({\mathfrak {g}}):} g = s 0 ⊕ z ( g ) . {\displaystyle {\mathfrak {g}}={\mathfrak {s}}_{0}\oplus {\mathfrak {z}}({\mathfrak {g}}).} Compare to the Levi decomposition, which decomposes a Lie algebra as its radical (which is solvable, not abelian in general) and a Levi subalgebra (which is semisimple). g {\displaystyle {\mathfrak {g}}} is a direct sum of a semisimple Lie algebra s {\displaystyle {\mathfrak {s}}} and an abelian Lie algebra a {\displaystyle {\mathfrak {a}}} : g = s ⊕ a . {\displaystyle {\mathfrak {g}}={\mathfrak {s}}\oplus {\mathfrak {a}}.} g {\displaystyle {\mathfrak {g}}} is a direct sum of prime ideals: g = ∑ g i . {\displaystyle {\mathfrak {g}}=\textstyle {\sum {\mathfrak {g}}_{i}}.} Some of these equivalences are easily seen. For example, the center and radical of s ⊕ a {\displaystyle {\mathfrak {s}}\oplus {\mathfrak {a}}} is a , {\displaystyle {\mathfrak {a}},} while if the radical equals the center the Levi decomposition yields a decomposition g = s 0 ⊕ z ( g ) . {\displaystyle {\mathfrak {g}}={\mathfrak {s}}_{0}\oplus {\mathfrak {z}}({\mathfrak {g}}).} Further, simple Lie algebras and the trivial 1-dimensional Lie algebra k {\displaystyle {\mathfrak {k}}} are prime ideals. == Properties == Reductive Lie algebras are a generalization of semisimple Lie algebras, and share many properties with them: many properties of semisimple Lie algebras depend only on the fact that they are reductive. Notably, the unitarian trick of Hermann Weyl works for reductive Lie algebras. 
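A numerical illustration of the basic example gl(n) = sl(n) ⊕ k above: split a matrix into its traceless and scalar parts and check that the scalar summand is central (numpy; the names are ours).

import numpy as np

rng = np.random.default_rng(1)
n = 4
x = rng.normal(size=(n, n))
scalar = np.trace(x) / n * np.eye(n)    # central summand
traceless = x - scalar                  # sl(n) summand
assert np.isclose(np.trace(traceless), 0)
assert np.allclose(scalar + traceless, x)
y = rng.normal(size=(n, n))
assert np.allclose(scalar @ y - y @ scalar, 0)   # scalars commute with everything
print("gl(n) decomposes as sl(n) plus scalars")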
The associated reductive Lie groups are of significant interest: the Langlands program is based on the premise that what can be done for one reductive Lie group should be done for all of them. The class of Lie algebras that are both reductive and solvable is exactly the class of abelian Lie algebras (by contrast, a Lie algebra that is both semisimple and solvable is trivial). == References == == External links == Lie algebra, reductive, A.L. Onishchik, in Encyclopaedia of Mathematics, ISBN 1-4020-0609-8, SpringerLink
Wikipedia/Reductive_Lie_algebra
In mathematics, specifically linear algebra, a degenerate bilinear form f (x, y ) on a vector space V is a bilinear form such that the map from V to V∗ (the dual space of V ) given by v ↦ (x ↦ f (x, v )) is not an isomorphism. An equivalent definition when V is finite-dimensional is that it has a non-trivial kernel: there exist some non-zero x in V such that f ( x , y ) = 0 {\displaystyle f(x,y)=0\,} for all y ∈ V . {\displaystyle \,y\in V.} == Nondegenerate forms == A nondegenerate or nonsingular form is a bilinear form that is not degenerate, meaning that v ↦ ( x ↦ f ( x , v ) ) {\displaystyle v\mapsto (x\mapsto f(x,v))} is an isomorphism, or equivalently in finite dimensions, if and only if f ( x , y ) = 0 {\displaystyle f(x,y)=0} for all y ∈ V {\displaystyle y\in V} implies that x = 0 {\displaystyle x=0} . == Using the determinant == If V is finite-dimensional then, relative to some basis for V, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero – if and only if the matrix is singular, and accordingly degenerate forms are also called singular forms. Likewise, a nondegenerate form is one for which the associated matrix is non-singular, and accordingly nondegenerate forms are also referred to as non-singular forms. These statements are independent of the chosen basis. == Related notions == If for a quadratic form Q there is a non-zero vector v ∈ V such that Q(v) = 0, then Q is an isotropic quadratic form. If Q has the same sign for all non-zero vectors, it is a definite quadratic form or an anisotropic quadratic form. There is the closely related notion of a unimodular form and a perfect pairing; these agree over fields but not over general rings. == Examples == The study of real, quadratic algebras shows the distinction between types of quadratic forms. The product zz* is a quadratic form for each of the complex numbers, split-complex numbers, and dual numbers. For z = x + ε y, the dual number form is x2 which is a degenerate quadratic form. The split-complex case is an isotropic form, and the complex case is a definite form. The most important examples of nondegenerate forms are inner products and symplectic forms. Symmetric nondegenerate forms are important generalizations of inner products, in that often all that is required is that the map V → V ∗ {\displaystyle V\to V^{*}} be an isomorphism, not positivity. For example, a manifold with an inner product structure on its tangent spaces is a Riemannian manifold, while relaxing this to a symmetric nondegenerate form yields a pseudo-Riemannian manifold. == Infinite dimensions == Note that in an infinite-dimensional space, we can have a bilinear form ƒ for which v ↦ ( x ↦ f ( x , v ) ) {\displaystyle v\mapsto (x\mapsto f(x,v))} is injective but not surjective. For example, on the space of continuous functions on a closed bounded interval, the form f ( ϕ , ψ ) = ∫ ψ ( x ) ϕ ( x ) d x {\displaystyle f(\phi ,\psi )=\int \psi (x)\phi (x)\,dx} is not surjective: for instance, the Dirac delta functional is in the dual space but not of the required form. On the other hand, this bilinear form satisfies f ( ϕ , ψ ) = 0 {\displaystyle f(\phi ,\psi )=0} for all ϕ {\displaystyle \phi } implies that ψ = 0. {\displaystyle \psi =0.\,} In such a case where ƒ satisfies injectivity (but not necessarily surjectivity), ƒ is said to be weakly nondegenerate. == Terminology == If f vanishes identically on all vectors it is said to be totally degenerate. 
Given any bilinear form f on V the set of vectors { x ∈ V ∣ f ( x , y ) = 0 for all y ∈ V } {\displaystyle \{x\in V\mid f(x,y)=0{\mbox{ for all }}y\in V\}} forms a totally degenerate subspace of V. The map f is nondegenerate if and only if this subspace is trivial. Geometrically, an isotropic line of the quadratic form corresponds to a point of the associated quadric hypersurface in projective space. Such a line is additionally isotropic for the bilinear form if and only if the corresponding point is a singularity. Hence, over an algebraically closed field, Hilbert's Nullstellensatz guarantees that the quadratic form always has isotropic lines, while the bilinear form has them if and only if the surface is singular. == See also == Indefinite inner product space – generalization of Hilbert space with indefinite signature Dual system Linear form – Linear map from a vector space to its field of scalars == References ==
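To make the determinant test and the totally degenerate subspace concrete, here is a minimal numpy sketch (illustrative only; the example form and the function names are not from the article). A bilinear form is represented by its Gram matrix A with A[i, j] = f(e_i, e_j) relative to a chosen basis; degeneracy is the vanishing of det A, and the subspace { x ∈ V ∣ f(x, y) = 0 for all y } is the null space of the transpose of A.

```python
# Minimal sketch (illustrative): degeneracy of a bilinear form on a
# finite-dimensional space, via its Gram matrix relative to a basis.
import numpy as np

def is_degenerate(A, tol=1e-12):
    """A bilinear form is degenerate iff its Gram matrix is singular."""
    return abs(np.linalg.det(A)) < tol

def radical(A, tol=1e-12):
    """Basis (as columns) of {x : f(x, y) = 0 for all y}.

    f(x, y) = x^T A y vanishes for all y exactly when A^T x = 0,
    so this is the null space of A^T, read off from the SVD.
    """
    _, s, vh = np.linalg.svd(A.T)
    rank = int((s > tol).sum())
    return vh[rank:].T

# Polarization of the degenerate dual-number form x^2 on R^2 (z = x + eps y):
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(is_degenerate(A))   # True: det A = 0
print(radical(A))         # spanned by (0, 1), the "epsilon direction"
```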
Wikipedia/Nondegenerate_form
In mathematics, a pre-Lie algebra is an algebraic structure on a vector space that describes some properties of objects such as rooted trees and vector fields on affine space. The notion of a pre-Lie algebra was introduced by Murray Gerstenhaber in his work on deformations of algebras. Pre-Lie algebras have also been studied under other names, among them left-symmetric algebras, right-symmetric algebras, and Vinberg algebras. == Definition == A pre-Lie algebra ( V , ◃ ) {\displaystyle (V,\triangleleft )} is a vector space V {\displaystyle V} with a linear map ◃ : V ⊗ V → V {\displaystyle \triangleleft :V\otimes V\to V} , satisfying the relation ( x ◃ y ) ◃ z − x ◃ ( y ◃ z ) = ( x ◃ z ) ◃ y − x ◃ ( z ◃ y ) . {\displaystyle (x\triangleleft y)\triangleleft z-x\triangleleft (y\triangleleft z)=(x\triangleleft z)\triangleleft y-x\triangleleft (z\triangleleft y).} This identity can be seen as the invariance of the associator ( x , y , z ) = ( x ◃ y ) ◃ z − x ◃ ( y ◃ z ) {\displaystyle (x,y,z)=(x\triangleleft y)\triangleleft z-x\triangleleft (y\triangleleft z)} under the exchange of the two variables y {\displaystyle y} and z {\displaystyle z} . Every associative algebra is hence also a pre-Lie algebra, as the associator vanishes identically. Although weaker than associativity, the defining relation of a pre-Lie algebra still implies that the commutator x ◃ y − y ◃ x {\displaystyle x\triangleleft y-y\triangleleft x} is a Lie bracket. In particular, the Jacobi identity for the commutator follows from cycling the x , y , z {\displaystyle x,y,z} terms in the defining relation for pre-Lie algebras, above. == Examples == === Vector fields on an affine space === Let U ⊂ R n {\displaystyle U\subset \mathbb {R} ^{n}} be an open subset of R n {\displaystyle \mathbb {R} ^{n}} , parameterised by variables x 1 , ⋯ , x n {\displaystyle x_{1},\cdots ,x_{n}} . Given vector fields u = u i ∂ x i {\displaystyle u=u_{i}\partial _{x_{i}}} , v = v j ∂ x j {\displaystyle v=v_{j}\partial _{x_{j}}} we define u ◃ v = v j ∂ u i ∂ x j ∂ x i {\displaystyle u\triangleleft v=v_{j}{\frac {\partial u_{i}}{\partial x_{j}}}\partial _{x_{i}}} . The difference between ( u ◃ v ) ◃ w {\displaystyle (u\triangleleft v)\triangleleft w} and u ◃ ( v ◃ w ) {\displaystyle u\triangleleft (v\triangleleft w)} is ( u ◃ v ) ◃ w − u ◃ ( v ◃ w ) = v j w k ∂ 2 u i ∂ x j ∂ x k ∂ x i {\displaystyle (u\triangleleft v)\triangleleft w-u\triangleleft (v\triangleleft w)=v_{j}w_{k}{\frac {\partial ^{2}u_{i}}{\partial x_{j}\partial x_{k}}}\partial _{x_{i}}} which is symmetric in v {\displaystyle v} and w {\displaystyle w} . Thus ◃ {\displaystyle \triangleleft } defines a pre-Lie algebra structure. Given a manifold M {\displaystyle M} and homeomorphisms ϕ , ϕ ′ {\displaystyle \phi ,\phi '} from U , U ′ ⊂ R n {\displaystyle U,U'\subset \mathbb {R} ^{n}} to overlapping open subsets of M {\displaystyle M} , they each define a pre-Lie algebra structure ◃ , ◃ ′ {\displaystyle \triangleleft ,\triangleleft '} on vector fields defined on the overlap. Whilst ◃ {\displaystyle \triangleleft } need not agree with ◃ ′ {\displaystyle \triangleleft '} , their commutators do agree: u ◃ v − v ◃ u = u ◃ ′ v − v ◃ ′ u = [ v , u ] {\displaystyle u\triangleleft v-v\triangleleft u=u\triangleleft 'v-v\triangleleft 'u=[v,u]} , the Lie bracket of v {\displaystyle v} and u {\displaystyle u} . === Rooted trees === Let T {\displaystyle \mathbb {T} } be the free vector space spanned by all rooted trees.
One can introduce a bilinear product ↶ {\displaystyle \curvearrowleft } on T {\displaystyle \mathbb {T} } as follows. Let τ 1 {\displaystyle \tau _{1}} and τ 2 {\displaystyle \tau _{2}} be two rooted trees. τ 1 ↶ τ 2 = ∑ s ∈ V e r t i c e s ( τ 1 ) τ 1 ∘ s τ 2 {\displaystyle \tau _{1}\curvearrowleft \tau _{2}=\sum _{s\in \mathrm {Vertices} (\tau _{1})}\tau _{1}\circ _{s}\tau _{2}} where τ 1 ∘ s τ 2 {\displaystyle \tau _{1}\circ _{s}\tau _{2}} is the rooted tree obtained by adding to the disjoint union of τ 1 {\displaystyle \tau _{1}} and τ 2 {\displaystyle \tau _{2}} an edge going from the vertex s {\displaystyle s} of τ 1 {\displaystyle \tau _{1}} to the root vertex of τ 2 {\displaystyle \tau _{2}} . Then ( T , ↶ ) {\displaystyle (\mathbb {T} ,\curvearrowleft )} is a free pre-Lie algebra on one generator. More generally, the free pre-Lie algebra on any set of generators is constructed the same way from trees with each vertex labelled by one of the generators. == References ==
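As an illustration of the grafting product, here is a minimal Python sketch (not from the article; the tuple encoding of trees is an assumption made for the example). A rooted tree is encoded as the sorted tuple of its root's subtrees, so the single-vertex tree is the empty tuple, and τ1 ↶ τ2 is computed as the formal sum of all ways of attaching τ2 by an edge below a vertex of τ1.

```python
# Minimal sketch (illustrative): the grafting product on rooted trees.
# A tree is the sorted tuple of its root's subtrees; () is the one-vertex tree.
from collections import Counter

def graft_at_each_vertex(t1, t2):
    """Yield t1 with t2 attached by a new edge, once per vertex s of t1."""
    yield tuple(sorted(t1 + (t2,)))                 # attach at the root of t1
    for i, child in enumerate(t1):                  # or at a vertex of a subtree
        for g in graft_at_each_vertex(child, t2):
            yield tuple(sorted(t1[:i] + (g,) + t1[i + 1:]))

def graft(t1, t2):
    """The sum over vertices defining the pre-Lie product, as {tree: coeff}."""
    return Counter(graft_at_each_vertex(t1, t2))

dot = ()            # the single-vertex tree
two = (dot,)        # the 2-vertex tree
print(graft(dot, dot))   # one term: the 2-vertex tree ((),)
print(graft(two, dot))   # the cherry ((), ()) and the 3-vertex path (((),),), once each
```

The two terms in the last line are the sum over the two vertices of the 2-vertex tree, matching the defining formula for τ1 ↶ τ2.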
Wikipedia/Pre-Lie_algebra
In mathematics, a free Lie algebra over a field K is a Lie algebra generated by a set X, without any imposed relations other than the defining relations of alternating K-bilinearity and the Jacobi identity. == Definition == The definition of the free Lie algebra generated by a set X is as follows: Let X be a set and i : X → L {\displaystyle i\colon X\to L} a morphism of sets (function) from X into a Lie algebra L. The Lie algebra L is called free on X if i {\displaystyle i} is the universal morphism; that is, if for any Lie algebra A with a morphism of sets f : X → A {\displaystyle f\colon X\to A} , there is a unique Lie algebra morphism g : L → A {\displaystyle g\colon L\to A} such that f = g ∘ i {\displaystyle f=g\circ i} . Given a set X, one can show that there exists a unique free Lie algebra L ( X ) {\displaystyle L(X)} generated by X. In the language of category theory, the functor sending a set X to the Lie algebra generated by X is the free functor from the category of sets to the category of Lie algebras. That is, it is left adjoint to the forgetful functor. The free Lie algebra on a set X is naturally graded. The 1-graded component of the free Lie algebra is just the free vector space on that set. One can alternatively define a free Lie algebra on a vector space V as left adjoint to the forgetful functor from Lie algebras over a field K to vector spaces over the field K – forgetting the Lie algebra structure, but remembering the vector space structure. == Universal enveloping algebra == The universal enveloping algebra of a free Lie algebra on a set X is the free associative algebra generated by X. By the Poincaré–Birkhoff–Witt theorem it is the "same size" as the symmetric algebra of the free Lie algebra (meaning that if both sides are graded by giving elements of X degree 1 then they are isomorphic as graded vector spaces). This can be used to describe the dimension of the piece of the free Lie algebra of any given degree. Ernst Witt showed that the number of basic commutators of degree k in the free Lie algebra on an m-element set is given by the necklace polynomial: M m ( k ) = 1 k ∑ d | k μ ( d ) ⋅ m k / d , {\displaystyle M_{m}(k)={\frac {1}{k}}\sum _{d|k}\mu (d)\cdot m^{k/d},} where μ {\displaystyle \mu } is the Möbius function. The graded dual of the universal enveloping algebra of a free Lie algebra on a finite set is the shuffle algebra. This essentially follows because universal enveloping algebras have the structure of a Hopf algebra, and the shuffle product describes the action of comultiplication in this algebra. See tensor algebra for a detailed exposition of the inter-relation between the shuffle product and comultiplication. == Hall sets == An explicit basis of the free Lie algebra can be given in terms of a Hall set, which is a particular kind of subset inside the free magma on X. Elements of the free magma are binary trees, with their leaves labelled by elements of X. Hall sets were introduced by Marshall Hall (1950) based on work of Philip Hall on groups. Subsequently, Wilhelm Magnus showed that they arise as the graded Lie algebra associated with the filtration on a free group given by the lower central series. This correspondence was motivated by commutator identities in group theory due to Philip Hall and Witt. == Lyndon basis == The Lyndon words are a special case of the Hall words, and so in particular there is a basis of the free Lie algebra corresponding to Lyndon words. This is called the Lyndon basis, named after Roger Lyndon. 
(This is also called the Chen–Fox–Lyndon basis or the Lyndon–Shirshov basis, and is essentially the same as the Shirshov basis.) There is a bijection γ from the Lyndon words in an ordered alphabet to a basis of the free Lie algebra on this alphabet defined as follows: If a word w has length 1 then γ ( w ) = w {\displaystyle \gamma (w)=w} (considered as a generator of the free Lie algebra). If w has length at least 2, then write w = u v {\displaystyle w=uv} for Lyndon words u, v with v as long as possible (the "standard factorization"). Then γ ( w ) = [ γ ( u ) , γ ( v ) ] {\displaystyle \gamma (w)=[\gamma (u),\gamma (v)]} . == Shirshov–Witt theorem == Anatoly Širšov (1953) and Witt (1956) showed that any Lie subalgebra of a free Lie algebra is itself a free Lie algebra. == Applications == Serre's theorem on a semisimple Lie algebra uses a free Lie algebra to construct a semisimple algebra out of generators and relations. The Milnor invariants of a link group are related to the free Lie algebra on the components of the link, as discussed in that article. See also Lie operad for the use of a free Lie algebra in the construction of the operad. == See also == Free object Free algebra Free group == References == Bakhturin, Yu.A. (2001) [1994], "Free Lie algebra over a ring", Encyclopedia of Mathematics, EMS Press Bourbaki, Nicolas (1989). "Chapter II: Free Lie Algebras". Lie Groups and Lie Algebras. Springer. ISBN 0-387-50218-1. Chen, Kuo-Tsai; Fox, Ralph H.; Lyndon, Roger C. (1958), "Free differential calculus. IV. The quotient groups of the lower central series", Annals of Mathematics, Second Series, 68 (1): 81–95, doi:10.2307/1970044, ISSN 0003-486X, JSTOR 1970044, MR 0102539 Hall, Marshall (1950), "A basis for free Lie rings and higher commutators in free groups", Proceedings of the American Mathematical Society, 1 (5): 575–581, doi:10.1090/S0002-9939-1950-0038336-7, ISSN 0002-9939, MR 0038336 Lothaire, M. (1997), Combinatorics on words, Encyclopedia of Mathematics and Its Applications, vol. 17 (2nd ed., with contributions by D. Perrin, Christophe Reutenauer, J. Berstel, J. E. Pin, G. Pirillo, D. Foata, J. Sakarovitch, I. Simon, Marcel-Paul Schützenberger, C. Choffrut, R. Cori, Roger Lyndon and Gian-Carlo Rota; foreword by Roger Lyndon), Cambridge University Press, pp. 76–91, 98, ISBN 0-521-59924-5, Zbl 0874.20040 Magnus, Wilhelm (1937), "Über Beziehungen zwischen höheren Kommutatoren", Journal für die Reine und Angewandte Mathematik (in German), 1937 (177): 105–115, doi:10.1515/crll.1937.177.105, ISSN 0075-4102, JFM 63.0065.01, S2CID 199546158 Magnus, Wilhelm; Karrass, Abraham; Solitar, Donald (2004). Combinatorial group theory (Reprint of the 1976 second ed.). Mineola, NY: Dover. ISBN 0-486-43830-9. MR 2109550. Melançon, Guy (2001) [1994], "Hall set", Encyclopedia of Mathematics, EMS Press Melançon, Guy (2001) [1994], "Hall word", Encyclopedia of Mathematics, EMS Press Melançon, Guy (2001) [1994], "Shirshov basis", Encyclopedia of Mathematics, EMS Press Reutenauer, Christophe (1993), Free Lie algebras, London Mathematical Society Monographs. New Series, vol. 7, The Clarendon Press Oxford University Press, ISBN 978-0-19-853679-6, MR 1231799 Širšov, Anatoliĭ I. (1953), "Subalgebras of free Lie algebras", Mat. Sbornik, New Series, 33 (75): 441–452, MR 0059892 Širšov, Anatoliĭ I. (1958), "On free Lie rings", Mat. Sbornik, New Series, 45 (2): 113–122, MR 0099356 Bokut, Leonid A.; Latyshev, Victor; Shestakov, Ivan; Zelmanov, Efim, eds. (2009). Selected works of A.I. Shirshov.
Translated by Bremner, Murray; Kochetov, Mikhail V. Basel, Boston, Berlin: Birkhäuser. MR 2547481. Witt, Ernst (1956). "Die Unterringe der freien Lieschen Ringe". Mathematische Zeitschrift. 64: 195–216. doi:10.1007/BF01166568. ISSN 0025-5874. MR 0077525. S2CID 119607181.
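Witt's formula above is straightforward to evaluate. The following self-contained Python sketch (illustrative, not taken from the references) computes M_m(k) with a hand-rolled Möbius function; for two generators it reproduces the well-known dimensions 2, 1, 2, 3, 6, 9 of the graded pieces in degrees 1 through 6.

```python
# Minimal sketch (illustrative): dimensions of the graded pieces of the
# free Lie algebra on m generators, via Witt's necklace polynomial.

def mobius(n: int) -> int:
    """Moebius function mu(n), by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:   # square factor => mu(n) = 0
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def witt(m: int, k: int) -> int:
    """Number of basic commutators of degree k on m generators."""
    return sum(mobius(d) * m ** (k // d) for d in range(1, k + 1) if k % d == 0) // k

print([witt(2, k) for k in range(1, 7)])   # [2, 1, 2, 3, 6, 9]
```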
Wikipedia/Free_Lie_algebra
In mathematics, a degenerate case is a limiting case of a class of objects which appears to be qualitatively different from (and usually simpler than) the rest of the class; "degeneracy" is the condition of being a degenerate case. The definitions of many classes of composite or structured objects often implicitly include inequalities. For example, the angles and the side lengths of a triangle are supposed to be positive. The limiting cases, where one or several of these inequalities become equalities, are degeneracies. In the case of triangles, one has a degenerate triangle if at least one side length or angle is zero. Equivalently, it becomes a "line segment". Often, the degenerate cases are the exceptional cases where changes to the usual dimension or the cardinality of the object (or of some part of it) occur. For example, a triangle is an object of dimension two, and a degenerate triangle is contained in a line, which makes its dimension one. This is similar to the case of a circle (taken together with its interior, as a disk), whose dimension shrinks from two to zero as it degenerates into a point. As another example, the solution set of a system of equations that depends on parameters generally has a fixed cardinality and dimension, but cardinality and/or dimension may be different for some exceptional values, called degenerate cases. In such a degenerate case, the solution set is said to be degenerate. For some classes of composite objects, the degenerate cases depend on the properties that are specifically studied. In particular, the class of objects may often be defined or characterized by systems of equations. In most scenarios, a given class of objects may be defined by several different systems of equations, and these different systems of equations may lead to different degenerate cases, while characterizing the same non-degenerate cases. This may be the reason why there is no general definition of degeneracy, despite the fact that the concept is widely used and defined (if needed) in each specific situation. A degenerate case thus has special features which make it non-generic, or a special case. However, not all non-generic or special cases are degenerate. For example, right triangles, isosceles triangles and equilateral triangles are non-generic and non-degenerate. In fact, degenerate cases often correspond to singularities, either in the object or in some configuration space. For example, a conic section is degenerate if and only if it has singular points (e.g., point, line, intersecting lines). == In geometry == === Conic section === A degenerate conic is a conic section (a second-degree plane curve, defined by a polynomial equation of degree two) that fails to be an irreducible curve. A point is a degenerate circle, namely one with radius 0. A line is a degenerate case of a parabola, arising when the cutting plane is tangent to the cone. In inversive geometry, a line is a degenerate case of a circle, with infinite radius. Two parallel lines also form a degenerate parabola. A line segment can be viewed as a degenerate case of an ellipse in which the semiminor axis goes to zero, the foci go to the endpoints, and the eccentricity goes to one. A circle can be thought of as a degenerate ellipse, as the eccentricity approaches 0 and the foci merge. An ellipse can also degenerate into a single point. A hyperbola can degenerate into two lines crossing at a point, through a family of hyperbolae having those lines as common asymptotes.
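These degenerations of conics can be detected computationally by a standard criterion (not stated in the passage above, but consistent with it): the conic ax² + bxy + cy² + dx + ey + f = 0 is degenerate exactly when the 3×3 symmetric matrix of the associated quadratic form in (x, y, 1) has zero determinant. A minimal numpy sketch:

```python
# Minimal sketch (standard criterion, illustrative): a conic
# a x^2 + b xy + c y^2 + d x + e y + f = 0 is degenerate iff det M = 0,
# where M is the symmetric matrix of the quadratic form in (x, y, 1).
import numpy as np

def conic_is_degenerate(a, b, c, d, e, f, tol=1e-12):
    M = np.array([[a,     b / 2, d / 2],
                  [b / 2, c,     e / 2],
                  [d / 2, e / 2, f    ]])
    return abs(np.linalg.det(M)) < tol

print(conic_is_degenerate(1, 0, -1, 0, 0, 0))  # True: x^2 - y^2 = 0, two crossing lines
print(conic_is_degenerate(1, 0, 1, 0, 0, -1))  # False: the unit circle
```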
=== Triangle === A degenerate triangle is a "flat" triangle in the sense that it is contained in a line segment. It thus has collinear vertices and zero area. If the three vertices are all distinct, it has two 0° angles and one 180° angle. If two vertices are equal, it has one 0° angle and two undefined angles. If all three vertices are equal, all three angles are undefined. === Rectangle === A rectangle with one pair of opposite sides of length zero degenerates to a line segment, with zero area. If both of the rectangle's pairs of opposite sides have length zero, the rectangle degenerates to a point. === Hyperrectangle === A hyperrectangle is the n-dimensional analog of a rectangle. If its side along any of the n axes has length zero, it degenerates to a lower-dimensional hyperrectangle, all the way down to a point if the sides aligned with every axis have length zero. === Convex polygon === A convex polygon is degenerate if at least two consecutive sides coincide at least partially, or at least one side has zero length, or at least one angle is 180°. Thus a degenerate convex polygon of n sides looks like a polygon with fewer sides. In the case of triangles, this definition coincides with the one that has been given above. === Convex polyhedron === A convex polyhedron is degenerate if either two adjacent facets are coplanar or two edges are aligned. In the case of a tetrahedron, this is equivalent to saying that all of its vertices lie in the same plane, giving it a volume of zero. === Standard torus === In contexts where self-intersection is allowed, a double-covered sphere is a degenerate standard torus where the axis of revolution passes through the center of the generating circle, rather than outside it. A torus degenerates to a circle when its minor radius goes to 0. === Sphere === When the radius of a sphere goes to zero, the resulting degenerate sphere of zero volume is a point. === Other === See general position for other examples. == Elsewhere == A set containing a single point is a degenerate continuum. Objects such as the digon and monogon can be viewed as degenerate cases of polygons: valid in a general abstract mathematical sense, but not part of the original Euclidean conception of polygons. A random variable which can only take one value has a degenerate distribution; if that value is the real number 0, then its probability density is the Dirac delta function. A root of a polynomial is sometimes said to be degenerate if it is a multiple root, since generically the n roots of an nth degree polynomial are all distinct. This usage carries over to eigenproblems: a degenerate eigenvalue is a multiple root of the characteristic polynomial. In quantum mechanics, any such multiplicity in the eigenvalues of the Hamiltonian operator gives rise to degenerate energy levels. Usually any such degeneracy indicates some underlying symmetry in the system. == See also == Degeneracy (graph theory) Degenerate form Trivial (mathematics) Pathological (mathematics) Vacuous truth == References ==
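The zero-area characterization in the Triangle subsection above has an immediate computational form via the shoelace formula; a minimal sketch (illustrative):

```python
# Minimal sketch (illustrative): a triangle is degenerate iff its signed
# area (shoelace formula) vanishes, i.e. iff its vertices are collinear.

def is_degenerate_triangle(p, q, r, tol=1e-12):
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    signed_area = 0.5 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    return abs(signed_area) < tol

print(is_degenerate_triangle((0, 0), (1, 1), (2, 2)))  # True: collinear vertices
print(is_degenerate_triangle((0, 0), (1, 0), (0, 1)))  # False
```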
Wikipedia/Degenerate_(mathematics)
This article summarizes several identities in exterior calculus, a mathematical notation used in differential geometry. == Notation == The following summarizes short definitions and notations that are used in this article. === Manifold === M {\displaystyle M} , N {\displaystyle N} are n {\displaystyle n} -dimensional smooth manifolds, where n ∈ N {\displaystyle n\in \mathbb {N} } . That is, differentiable manifolds that can be differentiated enough times for the purposes on this page. p ∈ M {\displaystyle p\in M} , q ∈ N {\displaystyle q\in N} denote one point on each of the manifolds. The boundary of a manifold M {\displaystyle M} is a manifold ∂ M {\displaystyle \partial M} , which has dimension n − 1 {\displaystyle n-1} . An orientation on M {\displaystyle M} induces an orientation on ∂ M {\displaystyle \partial M} . We usually denote a submanifold by Σ ⊂ M {\displaystyle \Sigma \subset M} . === Tangent and cotangent bundles === T M {\displaystyle TM} , T ∗ M {\displaystyle T^{*}M} denote the tangent bundle and cotangent bundle, respectively, of the smooth manifold M {\displaystyle M} . T p M {\displaystyle T_{p}M} , T q N {\displaystyle T_{q}N} denote the tangent spaces of M {\displaystyle M} , N {\displaystyle N} at the points p {\displaystyle p} , q {\displaystyle q} , respectively. T p ∗ M {\displaystyle T_{p}^{*}M} denotes the cotangent space of M {\displaystyle M} at the point p {\displaystyle p} . Sections of the tangent bundles, also known as vector fields, are typically denoted as X , Y , Z ∈ Γ ( T M ) {\displaystyle X,Y,Z\in \Gamma (TM)} such that at a point p ∈ M {\displaystyle p\in M} we have X | p , Y | p , Z | p ∈ T p M {\displaystyle X|_{p},Y|_{p},Z|_{p}\in T_{p}M} . Sections of the cotangent bundle, also known as differential 1-forms (or covector fields), are typically denoted as α , β ∈ Γ ( T ∗ M ) {\displaystyle \alpha ,\beta \in \Gamma (T^{*}M)} such that at a point p ∈ M {\displaystyle p\in M} we have α | p , β | p ∈ T p ∗ M {\displaystyle \alpha |_{p},\beta |_{p}\in T_{p}^{*}M} . An alternative notation for Γ ( T ∗ M ) {\displaystyle \Gamma (T^{*}M)} is Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} . === Differential k-forms === Differential k {\displaystyle k} -forms, which we refer to simply as k {\displaystyle k} -forms here, are differential forms defined on T M {\displaystyle TM} . We denote the set of all k {\displaystyle k} -forms as Ω k ( M ) {\displaystyle \Omega ^{k}(M)} . For 0 ≤ k , l , m ≤ n {\displaystyle 0\leq k,\ l,\ m\leq n} we usually write α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} , β ∈ Ω l ( M ) {\displaystyle \beta \in \Omega ^{l}(M)} , γ ∈ Ω m ( M ) {\displaystyle \gamma \in \Omega ^{m}(M)} . 0 {\displaystyle 0} -forms f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} are just scalar functions C ∞ ( M ) {\displaystyle C^{\infty }(M)} on M {\displaystyle M} . 1 ∈ Ω 0 ( M ) {\displaystyle \mathbf {1} \in \Omega ^{0}(M)} denotes the constant 0 {\displaystyle 0} -form equal to 1 {\displaystyle 1} everywhere. === Omitted elements of a sequence === When we are given ( k + 1 ) {\displaystyle (k+1)} inputs X 0 , … , X k {\displaystyle X_{0},\ldots ,X_{k}} and a k {\displaystyle k} -form α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} we denote omission of the i {\displaystyle i} th entry by writing α ( X 0 , … , X ^ i , … , X k ) := α ( X 0 , … , X i − 1 , X i + 1 , … , X k ) . 
{\displaystyle \alpha (X_{0},\ldots ,{\hat {X}}_{i},\ldots ,X_{k}):=\alpha (X_{0},\ldots ,X_{i-1},X_{i+1},\ldots ,X_{k}).} === Exterior product === The exterior product is also known as the wedge product. It is denoted by ∧ : Ω k ( M ) × Ω l ( M ) → Ω k + l ( M ) {\displaystyle \wedge :\Omega ^{k}(M)\times \Omega ^{l}(M)\rightarrow \Omega ^{k+l}(M)} . The exterior product of a k {\displaystyle k} -form α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} and an l {\displaystyle l} -form β ∈ Ω l ( M ) {\displaystyle \beta \in \Omega ^{l}(M)} produces a ( k + l ) {\displaystyle (k+l)} -form α ∧ β ∈ Ω k + l ( M ) {\displaystyle \alpha \wedge \beta \in \Omega ^{k+l}(M)} . It can be written using the set S ( k , k + l ) {\displaystyle S(k,k+l)} of all permutations σ {\displaystyle \sigma } of { 1 , … , k + l } {\displaystyle \{1,\ldots ,k+l\}} such that σ ( 1 ) < … < σ ( k ) , σ ( k + 1 ) < … < σ ( k + l ) {\displaystyle \sigma (1)<\ldots <\sigma (k),\ \sigma (k+1)<\ldots <\sigma (k+l)} as ( α ∧ β ) ( X 1 , … , X k + l ) = ∑ σ ∈ S ( k , k + l ) sign ( σ ) α ( X σ ( 1 ) , … , X σ ( k ) ) ⊗ β ( X σ ( k + 1 ) , … , X σ ( k + l ) ) . {\displaystyle (\alpha \wedge \beta )(X_{1},\ldots ,X_{k+l})=\sum _{\sigma \in S(k,k+l)}{\text{sign}}(\sigma )\alpha (X_{\sigma (1)},\ldots ,X_{\sigma (k)})\otimes \beta (X_{\sigma (k+1)},\ldots ,X_{\sigma (k+l)}).} === Directional derivative === The directional derivative of a 0-form f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} along a section X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} is a 0-form denoted ∂ X f . {\displaystyle \partial _{X}f.} === Exterior derivative === The exterior derivative d k : Ω k ( M ) → Ω k + 1 ( M ) {\displaystyle d_{k}:\Omega ^{k}(M)\rightarrow \Omega ^{k+1}(M)} is defined for all 0 ≤ k ≤ n {\displaystyle 0\leq k\leq n} . We generally omit the subscript when it is clear from the context. For a 0 {\displaystyle 0} -form f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} we have d 0 f ∈ Ω 1 ( M ) {\displaystyle d_{0}f\in \Omega ^{1}(M)} as the 1 {\displaystyle 1} -form that gives the directional derivative, i.e., for the section X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} we have ( d 0 f ) ( X ) = ∂ X f {\displaystyle (d_{0}f)(X)=\partial _{X}f} , the directional derivative of f {\displaystyle f} along X {\displaystyle X} . For 0 < k ≤ n {\displaystyle 0<k\leq n} , ( d k ω ) ( X 0 , … , X k ) = ∑ 0 ≤ j ≤ k ( − 1 ) j d 0 ( ω ( X 0 , … , X ^ j , … , X k ) ) ( X j ) + ∑ 0 ≤ i < j ≤ k ( − 1 ) i + j ω ( [ X i , X j ] , X 0 , … , X ^ i , … , X ^ j , … , X k ) . {\displaystyle (d_{k}\omega )(X_{0},\ldots ,X_{k})=\sum _{0\leq j\leq k}(-1)^{j}d_{0}(\omega (X_{0},\ldots ,{\hat {X}}_{j},\ldots ,X_{k}))(X_{j})+\sum _{0\leq i<j\leq k}(-1)^{i+j}\omega ([X_{i},X_{j}],X_{0},\ldots ,{\hat {X}}_{i},\ldots ,{\hat {X}}_{j},\ldots ,X_{k}).} === Lie bracket === The Lie bracket of sections X , Y ∈ Γ ( T M ) {\displaystyle X,Y\in \Gamma (TM)} is defined as the unique section [ X , Y ] ∈ Γ ( T M ) {\displaystyle [X,Y]\in \Gamma (TM)} that satisfies ∀ f ∈ Ω 0 ( M ) ⇒ ∂ [ X , Y ] f = ∂ X ∂ Y f − ∂ Y ∂ X f . {\displaystyle \forall f\in \Omega ^{0}(M)\Rightarrow \partial _{[X,Y]}f=\partial _{X}\partial _{Y}f-\partial _{Y}\partial _{X}f.} === Tangent maps === If ϕ : M → N {\displaystyle \phi :M\rightarrow N} is a smooth map, then d ϕ | p : T p M → T ϕ ( p ) N {\displaystyle d\phi |_{p}:T_{p}M\rightarrow T_{\phi (p)}N} defines a tangent map from M {\displaystyle M} to N {\displaystyle N} .
It is defined through curves γ {\displaystyle \gamma } on M {\displaystyle M} with derivative γ ′ ( 0 ) = X ∈ T p M {\displaystyle \gamma '(0)=X\in T_{p}M} such that d ϕ ( X ) := ( ϕ ∘ γ ) ′ . {\displaystyle d\phi (X):=(\phi \circ \gamma )'.} Note that ϕ {\displaystyle \phi } is a 0 {\displaystyle 0} -form with values in N {\displaystyle N} . === Pull-back === If ϕ : M → N {\displaystyle \phi :M\rightarrow N} is a smooth map, then the pull-back of a k {\displaystyle k} -form α ∈ Ω k ( N ) {\displaystyle \alpha \in \Omega ^{k}(N)} is defined such that for any k {\displaystyle k} -dimensional submanifold Σ ⊂ M {\displaystyle \Sigma \subset M} ∫ Σ ϕ ∗ α = ∫ ϕ ( Σ ) α . {\displaystyle \int _{\Sigma }\phi ^{*}\alpha =\int _{\phi (\Sigma )}\alpha .} The pull-back can also be expressed as ( ϕ ∗ α ) ( X 1 , … , X k ) = α ( d ϕ ( X 1 ) , … , d ϕ ( X k ) ) . {\displaystyle (\phi ^{*}\alpha )(X_{1},\ldots ,X_{k})=\alpha (d\phi (X_{1}),\ldots ,d\phi (X_{k})).} === Interior product === Also known as the interior derivative, the interior product given a section Y ∈ Γ ( T M ) {\displaystyle Y\in \Gamma (TM)} is a map ι Y : Ω k + 1 ( M ) → Ω k ( M ) {\displaystyle \iota _{Y}:\Omega ^{k+1}(M)\rightarrow \Omega ^{k}(M)} that effectively substitutes the first input of a ( k + 1 ) {\displaystyle (k+1)} -form with Y {\displaystyle Y} . If α ∈ Ω k + 1 ( M ) {\displaystyle \alpha \in \Omega ^{k+1}(M)} and X i ∈ Γ ( T M ) {\displaystyle X_{i}\in \Gamma (TM)} then ( ι Y α ) ( X 1 , … , X k ) = α ( Y , X 1 , … , X k ) . {\displaystyle (\iota _{Y}\alpha )(X_{1},\ldots ,X_{k})=\alpha (Y,X_{1},\ldots ,X_{k}).} === Metric tensor === Given a nondegenerate bilinear form g p ( ⋅ , ⋅ ) {\displaystyle g_{p}(\cdot ,\cdot )} on each T p M {\displaystyle T_{p}M} that is continuous on M {\displaystyle M} , the manifold becomes a pseudo-Riemannian manifold. We denote the metric tensor g {\displaystyle g} , defined pointwise by g ( X , Y ) | p = g p ( X | p , Y | p ) {\displaystyle g(X,Y)|_{p}=g_{p}(X|_{p},Y|_{p})} . We call s = sign ⁡ ( g ) {\displaystyle s=\operatorname {sign} (g)} the signature of the metric. A Riemannian manifold has s = 1 {\displaystyle s=1} , whereas Minkowski space has s = − 1 {\displaystyle s=-1} . === Musical isomorphisms === The metric tensor g ( ⋅ , ⋅ ) {\displaystyle g(\cdot ,\cdot )} induces duality mappings between vector fields and one-forms: these are the musical isomorphisms flat ♭ {\displaystyle \flat } and sharp ♯ {\displaystyle \sharp } . A section A ∈ Γ ( T M ) {\displaystyle A\in \Gamma (TM)} corresponds to the unique one-form A ♭ ∈ Ω 1 ( M ) {\displaystyle A^{\flat }\in \Omega ^{1}(M)} such that for all sections X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} , we have: A ♭ ( X ) = g ( A , X ) . {\displaystyle A^{\flat }(X)=g(A,X).} A one-form α ∈ Ω 1 ( M ) {\displaystyle \alpha \in \Omega ^{1}(M)} corresponds to the unique vector field α ♯ ∈ Γ ( T M ) {\displaystyle \alpha ^{\sharp }\in \Gamma (TM)} such that for all X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} , we have: α ( X ) = g ( α ♯ , X ) . 
{\displaystyle \alpha (X)=g(\alpha ^{\sharp },X).} These mappings extend via multilinearity to mappings from k {\displaystyle k} -vector fields to k {\displaystyle k} -forms and k {\displaystyle k} -forms to k {\displaystyle k} -vector fields through ( A 1 ∧ A 2 ∧ ⋯ ∧ A k ) ♭ = A 1 ♭ ∧ A 2 ♭ ∧ ⋯ ∧ A k ♭ {\displaystyle (A_{1}\wedge A_{2}\wedge \cdots \wedge A_{k})^{\flat }=A_{1}^{\flat }\wedge A_{2}^{\flat }\wedge \cdots \wedge A_{k}^{\flat }} ( α 1 ∧ α 2 ∧ ⋯ ∧ α k ) ♯ = α 1 ♯ ∧ α 2 ♯ ∧ ⋯ ∧ α k ♯ . {\displaystyle (\alpha _{1}\wedge \alpha _{2}\wedge \cdots \wedge \alpha _{k})^{\sharp }=\alpha _{1}^{\sharp }\wedge \alpha _{2}^{\sharp }\wedge \cdots \wedge \alpha _{k}^{\sharp }.} === Hodge star === For an n-manifold M, the Hodge star operator ⋆ : Ω k ( M ) → Ω n − k ( M ) {\displaystyle {\star }:\Omega ^{k}(M)\rightarrow \Omega ^{n-k}(M)} is a duality mapping taking a k {\displaystyle k} -form α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} to an ( n − k ) {\displaystyle (n{-}k)} -form ( ⋆ α ) ∈ Ω n − k ( M ) {\displaystyle ({\star }\alpha )\in \Omega ^{n-k}(M)} . It can be defined in terms of an oriented frame ( X 1 , … , X n ) {\displaystyle (X_{1},\ldots ,X_{n})} for T M {\displaystyle TM} , orthonormal with respect to the given metric tensor g {\displaystyle g} : ( ⋆ α ) ( X 1 , … , X n − k ) = α ( X n − k + 1 , … , X n ) . {\displaystyle ({\star }\alpha )(X_{1},\ldots ,X_{n-k})=\alpha (X_{n-k+1},\ldots ,X_{n}).} === Co-differential operator === The co-differential operator δ : Ω k ( M ) → Ω k − 1 ( M ) {\displaystyle \delta :\Omega ^{k}(M)\rightarrow \Omega ^{k-1}(M)} on an n {\displaystyle n} dimensional manifold M {\displaystyle M} is defined by δ := ( − 1 ) k ⋆ − 1 d ⋆ = ( − 1 ) n k + n + 1 ⋆ d ⋆ . {\displaystyle \delta :=(-1)^{k}{\star }^{-1}d{\star }=(-1)^{nk+n+1}{\star }d{\star }.} The Hodge–Dirac operator, d + δ {\displaystyle d+\delta } , is a Dirac operator studied in Clifford analysis. === Oriented manifold === An n {\displaystyle n} -dimensional orientable manifold M is a manifold that can be equipped with a choice of an n-form μ ∈ Ω n ( M ) {\displaystyle \mu \in \Omega ^{n}(M)} that is continuous and nonzero everywhere on M. === Volume form === On an orientable manifold M {\displaystyle M} the canonical choice of a volume form given a metric tensor g {\displaystyle g} and an orientation is d e t := | det g | d X 1 ♭ ∧ … ∧ d X n ♭ {\displaystyle \mathbf {det} :={\sqrt {|\det g|}}\;dX_{1}^{\flat }\wedge \ldots \wedge dX_{n}^{\flat }} for any basis d X 1 , … , d X n {\displaystyle dX_{1},\ldots ,dX_{n}} ordered to match the orientation. === Area form === Given a volume form d e t {\displaystyle \mathbf {det} } and a unit normal vector N {\displaystyle N} we can also define an area form σ := ι N det {\displaystyle \sigma :=\iota _{N}{\textbf {det}}} on the boundary ∂ M . {\displaystyle \partial M.} === Bilinear form on k-forms === A generalization of the metric tensor, the symmetric bilinear form between two k {\displaystyle k} -forms α , β ∈ Ω k ( M ) {\displaystyle \alpha ,\beta \in \Omega ^{k}(M)} , is defined pointwise on M {\displaystyle M} by ⟨ α , β ⟩ | p := ⋆ ( α ∧ ⋆ β ) | p . {\displaystyle \langle \alpha ,\beta \rangle |_{p}:={\star }(\alpha \wedge {\star }\beta )|_{p}.} The L 2 {\displaystyle L^{2}} -bilinear form for the space of k {\displaystyle k} -forms Ω k ( M ) {\displaystyle \Omega ^{k}(M)} is defined by ⟨ ⟨ α , β ⟩ ⟩ := ∫ M α ∧ ⋆ β . 
{\displaystyle \langle \!\langle \alpha ,\beta \rangle \!\rangle :=\int _{M}\alpha \wedge {\star }\beta .} In the case of a Riemannian manifold, each is an inner product (i.e. is positive-definite). === Lie derivative === We define the Lie derivative L : Ω k ( M ) → Ω k ( M ) {\displaystyle {\mathcal {L}}:\Omega ^{k}(M)\rightarrow \Omega ^{k}(M)} through Cartan's magic formula for a given section X ∈ Γ ( T M ) {\displaystyle X\in \Gamma (TM)} as L X = d ∘ ι X + ι X ∘ d . {\displaystyle {\mathcal {L}}_{X}=d\circ \iota _{X}+\iota _{X}\circ d.} It describes the change of a k {\displaystyle k} -form along a flow ϕ t {\displaystyle \phi _{t}} associated to the section X {\displaystyle X} . === Laplace–Beltrami operator === The Laplacian Δ : Ω k ( M ) → Ω k ( M ) {\displaystyle \Delta :\Omega ^{k}(M)\rightarrow \Omega ^{k}(M)} is defined as Δ = − ( d δ + δ d ) {\displaystyle \Delta =-(d\delta +\delta d)} . == Important definitions == === Definitions on Ωk(M) === α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} is called... closed if d α = 0 {\displaystyle d\alpha =0} exact if α = d β {\displaystyle \alpha =d\beta } for some β ∈ Ω k − 1 {\displaystyle \beta \in \Omega ^{k-1}} coclosed if δ α = 0 {\displaystyle \delta \alpha =0} coexact if α = δ β {\displaystyle \alpha =\delta \beta } for some β ∈ Ω k + 1 {\displaystyle \beta \in \Omega ^{k+1}} harmonic if closed and coclosed === Cohomology === The k {\displaystyle k} -th cohomology of a manifold M {\displaystyle M} and its exterior derivative operators d 0 , … , d n − 1 {\displaystyle d_{0},\ldots ,d_{n-1}} is given by H k ( M ) := ker ( d k ) im ( d k − 1 ) {\displaystyle H^{k}(M):={\frac {{\text{ker}}(d_{k})}{{\text{im}}(d_{k-1})}}} Two closed k {\displaystyle k} -forms α , β ∈ Ω k ( M ) {\displaystyle \alpha ,\beta \in \Omega ^{k}(M)} are in the same cohomology class if their difference is an exact form, i.e. [ α ] = [ β ] ⟺ α − β = d η for some η ∈ Ω k − 1 ( M ) {\displaystyle [\alpha ]=[\beta ]\ \ \Longleftrightarrow \ \ \alpha {-}\beta =d\eta \ {\text{ for some }}\eta \in \Omega ^{k-1}(M)} A closed orientable surface of genus g {\displaystyle g} has first cohomology of dimension 2 g {\displaystyle 2g} , and by Hodge theory each cohomology class is represented by a unique harmonic 1 {\displaystyle 1} -form.
=== Dirichlet energy === Given α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} , its Dirichlet energy is E D ( α ) := 1 2 ⟨ ⟨ d α , d α ⟩ ⟩ + 1 2 ⟨ ⟨ δ α , δ α ⟩ ⟩ {\displaystyle {\mathcal {E}}_{\text{D}}(\alpha ):={\dfrac {1}{2}}\langle \!\langle d\alpha ,d\alpha \rangle \!\rangle +{\dfrac {1}{2}}\langle \!\langle \delta \alpha ,\delta \alpha \rangle \!\rangle } == Properties == === Exterior derivative properties === ∫ Σ d α = ∫ ∂ Σ α {\displaystyle \int _{\Sigma }d\alpha =\int _{\partial \Sigma }\alpha } ( Stokes' theorem ) d ∘ d = 0 {\displaystyle d\circ d=0} ( cochain complex ) d ( α ∧ β ) = d α ∧ β + ( − 1 ) k α ∧ d β {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta } for α ∈ Ω k ( M ) , β ∈ Ω l ( M ) {\displaystyle \alpha \in \Omega ^{k}(M),\ \beta \in \Omega ^{l}(M)} ( Leibniz rule ) d f ( X ) = ∂ X f {\displaystyle df(X)=\partial _{X}f} for f ∈ Ω 0 ( M ) , X ∈ Γ ( T M ) {\displaystyle f\in \Omega ^{0}(M),\ X\in \Gamma (TM)} ( directional derivative ) d α = 0 {\displaystyle d\alpha =0} for α ∈ Ω n ( M ) , dim ( M ) = n {\displaystyle \alpha \in \Omega ^{n}(M),\ {\text{dim}}(M)=n} === Exterior product properties === α ∧ β = ( − 1 ) k l β ∧ α {\displaystyle \alpha \wedge \beta =(-1)^{kl}\beta \wedge \alpha } for α ∈ Ω k ( M ) , β ∈ Ω l ( M ) {\displaystyle \alpha \in \Omega ^{k}(M),\ \beta \in \Omega ^{l}(M)} ( alternating ) ( α ∧ β ) ∧ γ = α ∧ ( β ∧ γ ) {\displaystyle (\alpha \wedge \beta )\wedge \gamma =\alpha \wedge (\beta \wedge \gamma )} ( associativity ) ( λ α ) ∧ β = λ ( α ∧ β ) {\displaystyle (\lambda \alpha )\wedge \beta =\lambda (\alpha \wedge \beta )} for λ ∈ R {\displaystyle \lambda \in \mathbb {R} } ( compatibility of scalar multiplication ) α ∧ ( β 1 + β 2 ) = α ∧ β 1 + α ∧ β 2 {\displaystyle \alpha \wedge (\beta _{1}+\beta _{2})=\alpha \wedge \beta _{1}+\alpha \wedge \beta _{2}} ( distributivity over addition ) α ∧ α = 0 {\displaystyle \alpha \wedge \alpha =0} for α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} when k {\displaystyle k} is odd or rank ⁡ α ≤ 1 {\displaystyle \operatorname {rank} \alpha \leq 1} . The rank of a k {\displaystyle k} -form α {\displaystyle \alpha } means the minimum number of monomial terms (exterior products of one-forms) that must be summed to produce α {\displaystyle \alpha } . 
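The cochain-complex identity d ∘ d = 0 and the Leibniz rule listed above can be spot-checked symbolically in coordinates on R³. The sketch below is illustrative and uses an ad-hoc encoding (a 1-form as its coefficient list, a 2-form in the basis dy∧dz, dz∧dx, dx∧dy); it is not part of the article.

```python
# Minimal sketch (illustrative): d(df) = 0 and d(fg) = f dg + g df on R^3,
# with a 1-form stored as its coefficient list [w1, w2, w3].
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def d0(f):
    """Exterior derivative of a 0-form: the 1-form with components df/dx_i."""
    return [sp.diff(f, c) for c in coords]

def d1(w):
    """Exterior derivative of a 1-form, in the basis (dy^dz, dz^dx, dx^dy)."""
    return [sp.diff(w[2], y) - sp.diff(w[1], z),
            sp.diff(w[0], z) - sp.diff(w[2], x),
            sp.diff(w[1], x) - sp.diff(w[0], y)]

f = x**2 * sp.sin(y) + sp.exp(z) * y
g = sp.cos(x * z) + y**3

assert all(sp.simplify(c) == 0 for c in d1(d0(f)))      # d o d = 0 on 0-forms
assert all(sp.simplify(a - (f * b + g * c)) == 0        # Leibniz rule
           for a, b, c in zip(d0(f * g), d0(g), d0(f)))
print("d(df) = 0 and d(fg) = f dg + g df verified")
```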
=== Pull-back properties === d ( ϕ ∗ α ) = ϕ ∗ ( d α ) {\displaystyle d(\phi ^{*}\alpha )=\phi ^{*}(d\alpha )} ( commutative with d {\displaystyle d} ) ϕ ∗ ( α ∧ β ) = ( ϕ ∗ α ) ∧ ( ϕ ∗ β ) {\displaystyle \phi ^{*}(\alpha \wedge \beta )=(\phi ^{*}\alpha )\wedge (\phi ^{*}\beta )} ( distributes over ∧ {\displaystyle \wedge } ) ( ϕ 1 ∘ ϕ 2 ) ∗ = ϕ 2 ∗ ϕ 1 ∗ {\displaystyle (\phi _{1}\circ \phi _{2})^{*}=\phi _{2}^{*}\phi _{1}^{*}} ( contravariant ) ϕ ∗ f = f ∘ ϕ {\displaystyle \phi ^{*}f=f\circ \phi } for f ∈ Ω 0 ( N ) {\displaystyle f\in \Omega ^{0}(N)} ( function composition ) === Musical isomorphism properties === ( X ♭ ) ♯ = X {\displaystyle (X^{\flat })^{\sharp }=X} ( α ♯ ) ♭ = α {\displaystyle (\alpha ^{\sharp })^{\flat }=\alpha } === Interior product properties === ι X ∘ ι X = 0 {\displaystyle \iota _{X}\circ \iota _{X}=0} ( nilpotent ) ι X ∘ ι Y = − ι Y ∘ ι X {\displaystyle \iota _{X}\circ \iota _{Y}=-\iota _{Y}\circ \iota _{X}} ι X ( α ∧ β ) = ( ι X α ) ∧ β + ( − 1 ) k α ∧ ( ι X β ) {\displaystyle \iota _{X}(\alpha \wedge \beta )=(\iota _{X}\alpha )\wedge \beta +(-1)^{k}\alpha \wedge (\iota _{X}\beta )} for α ∈ Ω k ( M ) , β ∈ Ω l ( M ) {\displaystyle \alpha \in \Omega ^{k}(M),\ \beta \in \Omega ^{l}(M)} ( Leibniz rule ) ι X α = α ( X ) {\displaystyle \iota _{X}\alpha =\alpha (X)} for α ∈ Ω 1 ( M ) {\displaystyle \alpha \in \Omega ^{1}(M)} ι X f = 0 {\displaystyle \iota _{X}f=0} for f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} ι X ( f α ) = f ι X α {\displaystyle \iota _{X}(f\alpha )=f\iota _{X}\alpha } for f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} === Hodge star properties === ⋆ ( λ 1 α + λ 2 β ) = λ 1 ( ⋆ α ) + λ 2 ( ⋆ β ) {\displaystyle {\star }(\lambda _{1}\alpha +\lambda _{2}\beta )=\lambda _{1}({\star }\alpha )+\lambda _{2}({\star }\beta )} for λ 1 , λ 2 ∈ R {\displaystyle \lambda _{1},\lambda _{2}\in \mathbb {R} } ( linearity ) ⋆ ⋆ α = s ( − 1 ) k ( n − k ) α {\displaystyle {\star }{\star }\alpha =s(-1)^{k(n-k)}\alpha } for α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} , n = dim ⁡ ( M ) {\displaystyle n=\dim(M)} , and s = sign ⁡ ( g ) {\displaystyle s=\operatorname {sign} (g)} the sign of the metric ⋆ ( − 1 ) = s ( − 1 ) k ( n − k ) ⋆ {\displaystyle {\star }^{(-1)}=s(-1)^{k(n-k)}{\star }} ( inversion ) ⋆ ( f α ) = f ( ⋆ α ) {\displaystyle {\star }(f\alpha )=f({\star }\alpha )} for f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} ( commutative with 0 {\displaystyle 0} -forms ) ⟨ ⟨ α , α ⟩ ⟩ = ⟨ ⟨ ⋆ α , ⋆ α ⟩ ⟩ {\displaystyle \langle \!\langle \alpha ,\alpha \rangle \!\rangle =\langle \!\langle {\star }\alpha ,{\star }\alpha \rangle \!\rangle } for α ∈ Ω 1 ( M ) {\displaystyle \alpha \in \Omega ^{1}(M)} ( Hodge star preserves 1 {\displaystyle 1} -form norm ) ⋆ 1 = d e t {\displaystyle {\star }\mathbf {1} =\mathbf {det} } ( Hodge dual of constant function 1 is the volume form ) === Co-differential operator properties === δ ∘ δ = 0 {\displaystyle \delta \circ \delta =0} ( nilpotent ) ⋆ δ = ( − 1 ) k d ⋆ {\displaystyle {\star }\delta =(-1)^{k}d{\star }} and ⋆ d = ( − 1 ) k + 1 δ ⋆ {\displaystyle {\star }d=(-1)^{k+1}\delta {\star }} ( Hodge adjoint to d {\displaystyle d} ) ⟨ ⟨ d α , β ⟩ ⟩ = ⟨ ⟨ α , δ β ⟩ ⟩ {\displaystyle \langle \!\langle d\alpha ,\beta \rangle \!\rangle =\langle \!\langle \alpha ,\delta \beta \rangle \!\rangle } if ∂ M = 0 {\displaystyle \partial M=0} ( δ {\displaystyle \delta } adjoint to d {\displaystyle d} ) In general, ∫ M d α ∧ ⋆ β = ∫ ∂ M α ∧ ⋆ β + ∫ M α ∧ ⋆ δ β {\displaystyle \int _{M}d\alpha \wedge \star \beta =\int _{\partial 
M}\alpha \wedge \star \beta +\int _{M}\alpha \wedge \star \delta \beta } δ f = 0 {\displaystyle \delta f=0} for f ∈ Ω 0 ( M ) {\displaystyle f\in \Omega ^{0}(M)} === Lie derivative properties === d ∘ L X = L X ∘ d {\displaystyle d\circ {\mathcal {L}}_{X}={\mathcal {L}}_{X}\circ d} ( commutative with d {\displaystyle d} ) ι X ∘ L X = L X ∘ ι X {\displaystyle \iota _{X}\circ {\mathcal {L}}_{X}={\mathcal {L}}_{X}\circ \iota _{X}} ( commutative with ι X {\displaystyle \iota _{X}} ) L X ( ι Y α ) = ι [ X , Y ] α + ι Y L X α {\displaystyle {\mathcal {L}}_{X}(\iota _{Y}\alpha )=\iota _{[X,Y]}\alpha +\iota _{Y}{\mathcal {L}}_{X}\alpha } L X ( α ∧ β ) = ( L X α ) ∧ β + α ∧ ( L X β ) {\displaystyle {\mathcal {L}}_{X}(\alpha \wedge \beta )=({\mathcal {L}}_{X}\alpha )\wedge \beta +\alpha \wedge ({\mathcal {L}}_{X}\beta )} ( Leibniz rule ) == Exterior calculus identities == ι X ( ⋆ 1 ) = ⋆ X ♭ {\displaystyle \iota _{X}({\star }\mathbf {1} )={\star }X^{\flat }} ι X ( ⋆ α ) = ( − 1 ) k ⋆ ( X ♭ ∧ α ) {\displaystyle \iota _{X}({\star }\alpha )=(-1)^{k}{\star }(X^{\flat }\wedge \alpha )} if α ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k}(M)} ι X ( ϕ ∗ α ) = ϕ ∗ ( ι d ϕ ( X ) α ) {\displaystyle \iota _{X}(\phi ^{*}\alpha )=\phi ^{*}(\iota _{d\phi (X)}\alpha )} ν , μ ∈ Ω n ( M ) , μ non-zero ⇒ ∃ f ∈ Ω 0 ( M ) : ν = f μ {\displaystyle \nu ,\mu \in \Omega ^{n}(M),\mu {\text{ non-zero }}\ \Rightarrow \ \exists \ f\in \Omega ^{0}(M):\ \nu =f\mu } X ♭ ∧ ⋆ Y ♭ = g ( X , Y ) ( ⋆ 1 ) {\displaystyle X^{\flat }\wedge {\star }Y^{\flat }=g(X,Y)({\star }\mathbf {1} )} ( bilinear form ) [ X , [ Y , Z ] ] + [ Y , [ Z , X ] ] + [ Z , [ X , Y ] ] = 0 {\displaystyle [X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0} ( Jacobi identity ) === Dimensions === If n = dim ⁡ M {\displaystyle n=\dim M} dim ⁡ Ω k ( M ) = ( n k ) {\displaystyle \dim \Omega ^{k}(M)={\binom {n}{k}}} for 0 ≤ k ≤ n {\displaystyle 0\leq k\leq n} dim ⁡ Ω k ( M ) = 0 {\displaystyle \dim \Omega ^{k}(M)=0} for k < 0 , k > n {\displaystyle k<0,\ k>n} If X 1 , … , X n ∈ Γ ( T M ) {\displaystyle X_{1},\ldots ,X_{n}\in \Gamma (TM)} is a basis, then a basis of Ω k ( M ) {\displaystyle \Omega ^{k}(M)} is { X σ ( 1 ) ♭ ∧ … ∧ X σ ( k ) ♭ : σ ∈ S ( k , n ) } {\displaystyle \{X_{\sigma (1)}^{\flat }\wedge \ldots \wedge X_{\sigma (k)}^{\flat }\ :\ \sigma \in S(k,n)\}} === Exterior products === Let α , β , γ , α i ∈ Ω 1 ( M ) {\displaystyle \alpha ,\beta ,\gamma ,\alpha _{i}\in \Omega ^{1}(M)} and X , Y , Z , X i {\displaystyle X,Y,Z,X_{i}} be vector fields. 
α ( X ) = det [ α ( X ) ] {\displaystyle \alpha (X)=\det {\begin{bmatrix}\alpha (X)\\\end{bmatrix}}} ( α ∧ β ) ( X , Y ) = det [ α ( X ) α ( Y ) β ( X ) β ( Y ) ] {\displaystyle (\alpha \wedge \beta )(X,Y)=\det {\begin{bmatrix}\alpha (X)&\alpha (Y)\\\beta (X)&\beta (Y)\\\end{bmatrix}}} ( α ∧ β ∧ γ ) ( X , Y , Z ) = det [ α ( X ) α ( Y ) α ( Z ) β ( X ) β ( Y ) β ( Z ) γ ( X ) γ ( Y ) γ ( Z ) ] {\displaystyle (\alpha \wedge \beta \wedge \gamma )(X,Y,Z)=\det {\begin{bmatrix}\alpha (X)&\alpha (Y)&\alpha (Z)\\\beta (X)&\beta (Y)&\beta (Z)\\\gamma (X)&\gamma (Y)&\gamma (Z)\end{bmatrix}}} ( α 1 ∧ … ∧ α l ) ( X 1 , … , X l ) = det [ α 1 ( X 1 ) α 1 ( X 2 ) … α 1 ( X l ) α 2 ( X 1 ) α 2 ( X 2 ) … α 2 ( X l ) ⋮ ⋮ ⋱ ⋮ α l ( X 1 ) α l ( X 2 ) … α l ( X l ) ] {\displaystyle (\alpha _{1}\wedge \ldots \wedge \alpha _{l})(X_{1},\ldots ,X_{l})=\det {\begin{bmatrix}\alpha _{1}(X_{1})&\alpha _{1}(X_{2})&\dots &\alpha _{1}(X_{l})\\\alpha _{2}(X_{1})&\alpha _{2}(X_{2})&\dots &\alpha _{2}(X_{l})\\\vdots &\vdots &\ddots &\vdots \\\alpha _{l}(X_{1})&\alpha _{l}(X_{2})&\dots &\alpha _{l}(X_{l})\end{bmatrix}}} === Projection and rejection === ( − 1 ) k ι X ⋆ α = ⋆ ( X ♭ ∧ α ) {\displaystyle (-1)^{k}\iota _{X}{\star }\alpha ={\star }(X^{\flat }\wedge \alpha )} ( interior product ι X ⋆ {\displaystyle \iota _{X}{\star }} dual to wedge X ♭ ∧ {\displaystyle X^{\flat }\wedge } ) ( ι X α ) ∧ ⋆ β = α ∧ ⋆ ( X ♭ ∧ β ) {\displaystyle (\iota _{X}\alpha )\wedge {\star }\beta =\alpha \wedge {\star }(X^{\flat }\wedge \beta )} for α ∈ Ω k + 1 ( M ) , β ∈ Ω k ( M ) {\displaystyle \alpha \in \Omega ^{k+1}(M),\beta \in \Omega ^{k}(M)} If | X | = 1 , α ∈ Ω k ( M ) {\displaystyle |X|=1,\ \alpha \in \Omega ^{k}(M)} , then ι X ∘ ( X ♭ ∧ ) : Ω k ( M ) → Ω k ( M ) {\displaystyle \iota _{X}\circ (X^{\flat }\wedge ):\Omega ^{k}(M)\rightarrow \Omega ^{k}(M)} is the projection of α {\displaystyle \alpha } onto the orthogonal complement of X {\displaystyle X} . ( X ♭ ∧ ) ∘ ι X : Ω k ( M ) → Ω k ( M ) {\displaystyle (X^{\flat }\wedge )\circ \iota _{X}:\Omega ^{k}(M)\rightarrow \Omega ^{k}(M)} is the rejection of α {\displaystyle \alpha } , the remainder of the projection. thus ι X ∘ ( X ♭ ∧ ) + ( X ♭ ∧ ) ∘ ι X = id {\displaystyle \iota _{X}\circ (X^{\flat }\wedge )+(X^{\flat }\wedge )\circ \iota _{X}={\text{id}}} ( projection–rejection decomposition ) Given the boundary ∂ M {\displaystyle \partial M} with unit normal vector N {\displaystyle N} t := ι N ∘ ( N ♭ ∧ ) {\displaystyle \mathbf {t} :=\iota _{N}\circ (N^{\flat }\wedge )} extracts the tangential component of the boundary. n := ( id − t ) {\displaystyle \mathbf {n} :=({\text{id}}-\mathbf {t} )} extracts the normal component of the boundary. 
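The interior-product Leibniz rule and the projection–rejection decomposition above can be verified numerically by modelling k-forms on Euclidean R^n as fully antisymmetric arrays, so that ♭ and ♯ act as the identity on components. The conventions below (Alt projector, shuffle coefficient, contraction in the first slot) are hand-rolled for this illustrative sketch:

```python
# Minimal sketch (illustrative): k-forms on Euclidean R^n as antisymmetric
# arrays; checks iota_X(a^b) = (iota_X a)^b + (-1)^k a^(iota_X b) and the
# projection-rejection decomposition for |X| = 1.
import itertools, math
import numpy as np

def perm_sign(p):
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def alt(T):
    """Full antisymmetrisation (the 'Alt' projector) of a covariant tensor."""
    out = np.zeros_like(T)
    for p in itertools.permutations(range(T.ndim)):
        out = out + perm_sign(p) * np.transpose(T, p)
    return out / math.factorial(T.ndim)

def wedge(a, b):
    k, l = a.ndim, b.ndim
    coeff = math.factorial(k + l) // (math.factorial(k) * math.factorial(l))
    return coeff * alt(np.multiply.outer(a, b))

def interior(X, a):
    """iota_X: plug the vector X into the first slot of the form a."""
    return np.tensordot(X, a, axes=(0, 0))

rng = np.random.default_rng(0)
n = 4
a = alt(rng.normal(size=(n, n)))        # a random 2-form
b = rng.normal(size=n)                  # a random 1-form
X = rng.normal(size=n)
X /= np.linalg.norm(X)                  # |X| = 1

lhs = interior(X, wedge(a, b))          # Leibniz rule, k = deg(a) = 2
rhs = wedge(interior(X, a), b) + (-1) ** a.ndim * wedge(a, interior(X, b))
assert np.allclose(lhs, rhs)

proj = interior(X, wedge(X, a))         # iota_X (X^flat ^ a)
rej = wedge(X, interior(X, a))          # X^flat ^ (iota_X a)
assert np.allclose(proj + rej, a)       # projection + rejection = identity
print("Leibniz rule and projection-rejection decomposition verified")
```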
=== Sum expressions === ( d α ) ( X 0 , … , X k ) = ∑ 0 ≤ j ≤ k ( − 1 ) j d ( α ( X 0 , … , X ^ j , … , X k ) ) ( X j ) + ∑ 0 ≤ i < j ≤ k ( − 1 ) i + j α ( [ X i , X j ] , X 0 , … , X ^ i , … , X ^ j , … , X k ) {\displaystyle (d\alpha )(X_{0},\ldots ,X_{k})=\sum _{0\leq j\leq k}(-1)^{j}d(\alpha (X_{0},\ldots ,{\hat {X}}_{j},\ldots ,X_{k}))(X_{j})+\sum _{0\leq i<j\leq k}(-1)^{i+j}\alpha ([X_{i},X_{j}],X_{0},\ldots ,{\hat {X}}_{i},\ldots ,{\hat {X}}_{j},\ldots ,X_{k})} ( d α ) ( X 1 , … , X k ) = ∑ i = 1 k ( − 1 ) i + 1 ( ∇ X i α ) ( X 1 , … , X ^ i , … , X k ) {\displaystyle (d\alpha )(X_{1},\ldots ,X_{k})=\sum _{i=1}^{k}(-1)^{i+1}(\nabla _{X_{i}}\alpha )(X_{1},\ldots ,{\hat {X}}_{i},\ldots ,X_{k})} ( δ α ) ( X 1 , … , X k − 1 ) = − ∑ i = 1 n ( ι E i ( ∇ E i α ) ) ( X 1 , … , X k − 1 ) {\displaystyle (\delta \alpha )(X_{1},\ldots ,X_{k-1})=-\sum _{i=1}^{n}(\iota _{E_{i}}(\nabla _{E_{i}}\alpha ))(X_{1},\ldots ,X_{k-1})} given a positively oriented orthonormal frame E 1 , … , E n {\displaystyle E_{1},\ldots ,E_{n}} . ( L Y α ) ( X 1 , … , X k ) = ( ∇ Y α ) ( X 1 , … , X k ) − ∑ i = 1 k α ( X 1 , … , ∇ X i Y , … , X k ) {\displaystyle ({\mathcal {L}}_{Y}\alpha )(X_{1},\ldots ,X_{k})=(\nabla _{Y}\alpha )(X_{1},\ldots ,X_{k})-\sum _{i=1}^{k}\alpha (X_{1},\ldots ,\nabla _{X_{i}}Y,\ldots ,X_{k})} === Hodge decomposition === If ∂ M = ∅ {\displaystyle \partial M=\emptyset } , ω ∈ Ω k ( M ) ⇒ ∃ α ∈ Ω k − 1 , β ∈ Ω k + 1 , γ ∈ Ω k ( M ) , d γ = 0 , δ γ = 0 {\displaystyle \omega \in \Omega ^{k}(M)\Rightarrow \exists \alpha \in \Omega ^{k-1},\ \beta \in \Omega ^{k+1},\ \gamma \in \Omega ^{k}(M),\ d\gamma =0,\ \delta \gamma =0} such that ω = d α + δ β + γ {\displaystyle \omega =d\alpha +\delta \beta +\gamma } === Poincaré lemma === The Poincaré lemma states that if M {\displaystyle M} is contractible, then every closed ω ∈ Ω k ( M ) {\displaystyle \omega \in \Omega ^{k}(M)} with k ≥ 1 {\displaystyle k\geq 1} is exact; equivalently, M {\displaystyle M} then has trivial cohomology H k ( M ) = { 0 } {\displaystyle H^{k}(M)=\{0\}} for all k ≥ 1 {\displaystyle k\geq 1} . == Relations to vector calculus == === Identities in Euclidean 3-space === Let g ( X , Y ) := ⟨ X , Y ⟩ = X ⋅ Y {\displaystyle g(X,Y):=\langle X,Y\rangle =X\cdot Y} be the Euclidean metric, and let ∇ = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) {\displaystyle \nabla =\left({\partial \over \partial x},{\partial \over \partial y},{\partial \over \partial z}\right)} be the usual differential operator on R 3 {\displaystyle \mathbb {R} ^{3}} . ι X α = g ( X , α ♯ ) = X ⋅ α ♯ {\displaystyle \iota _{X}\alpha =g(X,\alpha ^{\sharp })=X\cdot \alpha ^{\sharp }} for α ∈ Ω 1 ( M ) {\displaystyle \alpha \in \Omega ^{1}(M)} .
d e t ( X , Y , Z ) = ⟨ X , Y × Z ⟩ = ⟨ X × Y , Z ⟩ {\displaystyle \mathbf {det} (X,Y,Z)=\langle X,Y\times Z\rangle =\langle X\times Y,Z\rangle } ( scalar triple product ) X × Y = ( ⋆ ( X ♭ ∧ Y ♭ ) ) ♯ {\displaystyle X\times Y=({\star }(X^{\flat }\wedge Y^{\flat }))^{\sharp }} ( cross product ) ι X α = − ( X × A ) ♭ {\displaystyle \iota _{X}\alpha =-(X\times A)^{\flat }} if α ∈ Ω 2 ( M ) , A = ( ⋆ α ) ♯ {\displaystyle \alpha \in \Omega ^{2}(M),\ A=({\star }\alpha )^{\sharp }} X ⋅ Y = ⋆ ( X ♭ ∧ ⋆ Y ♭ ) {\displaystyle X\cdot Y={\star }(X^{\flat }\wedge {\star }Y^{\flat })} ( scalar product ) ∇ f = ( d f ) ♯ {\displaystyle \nabla f=(df)^{\sharp }} ( gradient ) X ⋅ ∇ f = d f ( X ) {\displaystyle X\cdot \nabla f=df(X)} ( directional derivative ) ∇ ⋅ X = ⋆ d ⋆ X ♭ = − δ X ♭ {\displaystyle \nabla \cdot X={\star }d{\star }X^{\flat }=-\delta X^{\flat }} ( divergence ) ∇ × X = ( ⋆ d X ♭ ) ♯ {\displaystyle \nabla \times X=({\star }dX^{\flat })^{\sharp }} ( curl ) ⟨ X , N ⟩ σ = ⋆ X ♭ {\displaystyle \langle X,N\rangle \sigma ={\star }X^{\flat }} where N {\displaystyle N} is the unit normal vector of ∂ M {\displaystyle \partial M} and σ = ι N d e t {\displaystyle \sigma =\iota _{N}\mathbf {det} } is the area form on ∂ M {\displaystyle \partial M} . ∫ Σ d ⋆ X ♭ = ∫ ∂ Σ ⋆ X ♭ = ∫ ∂ Σ ⟨ X , N ⟩ σ {\displaystyle \int _{\Sigma }d{\star }X^{\flat }=\int _{\partial \Sigma }{\star }X^{\flat }=\int _{\partial \Sigma }\langle X,N\rangle \sigma } ( divergence theorem ) === Lie derivatives === L X f = X ⋅ ∇ f {\displaystyle {\mathcal {L}}_{X}f=X\cdot \nabla f} ( 0 {\displaystyle 0} -forms ) L X α = ( ∇ X α ♯ ) ♭ + g ( α ♯ , ∇ X ) {\displaystyle {\mathcal {L}}_{X}\alpha =(\nabla _{X}\alpha ^{\sharp })^{\flat }+g(\alpha ^{\sharp },\nabla X)} ( 1 {\displaystyle 1} -forms ) ⋆ L X β = ( ∇ X B − ∇ B X + ( div X ) B ) ♭ {\displaystyle {\star }{\mathcal {L}}_{X}\beta =\left(\nabla _{X}B-\nabla _{B}X+({\text{div}}X)B\right)^{\flat }} if B = ( ⋆ β ) ♯ {\displaystyle B=({\star }\beta )^{\sharp }} ( 2 {\displaystyle 2} -forms on 3 {\displaystyle 3} -manifolds ) ⋆ L X ρ = d q ( X ) + ( div X ) q {\displaystyle {\star }{\mathcal {L}}_{X}\rho =dq(X)+({\text{div}}X)q} if ρ = ⋆ q ∈ Ω 0 ( M ) {\displaystyle \rho ={\star }q\in \Omega ^{0}(M)} ( n {\displaystyle n} -forms ) L X ( d e t ) = ( div ( X ) ) d e t {\displaystyle {\mathcal {L}}_{X}(\mathbf {det} )=({\text{div}}(X))\mathbf {det} } == References ==
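As a numerical check of the cross product identity in the list above, one can implement ⋆ on 2-forms over Euclidean R³ with the Levi-Civita symbol; with the Euclidean metric, ♭ and ♯ are the identity on components. The ½ normalization below matches the determinant-style convention (X♭∧Y♭)(e_i, e_j) = X_i Y_j − X_j Y_i used earlier; the sketch is illustrative:

```python
# Minimal sketch (illustrative): X x Y = (star(X^flat ^ Y^flat))^sharp on
# Euclidean R^3, where flat/sharp act as the identity on components.
import numpy as np

eps = np.zeros((3, 3, 3))                      # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

def wedge11(a, b):
    """Component matrix W[i, j] = (a^flat ^ b^flat)(e_i, e_j)."""
    return np.outer(a, b) - np.outer(b, a)

def star2(W):
    """Hodge star of a 2-form on Euclidean R^3, as a 1-form's components."""
    return 0.5 * np.einsum('ijk,ij->k', eps, W)

rng = np.random.default_rng(1)
X, Y = rng.normal(size=3), rng.normal(size=3)
assert np.allclose(star2(wedge11(X, Y)), np.cross(X, Y))
print("X x Y = (star(X^flat ^ Y^flat))^sharp verified")
```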
Wikipedia/Exterior_calculus_identities
In mathematics, Lie algebra cohomology is a cohomology theory for Lie algebras. It was first introduced in 1929 by Élie Cartan to study the topology of Lie groups and homogeneous spaces by relating cohomological methods of Georges de Rham to properties of the Lie algebra. It was later extended by Claude Chevalley and Samuel Eilenberg (1948) to coefficients in an arbitrary Lie module. == Motivation == If G {\displaystyle G} is a compact simply connected Lie group, then it is determined by its Lie algebra, so it should be possible to calculate its cohomology from the Lie algebra. This can be done as follows. Its cohomology is the de Rham cohomology of the complex of differential forms on G {\displaystyle G} . Using an averaging process, this complex can be replaced by the complex of left-invariant differential forms. The left-invariant forms, meanwhile, are determined by their values at the identity, so that the space of left-invariant differential forms can be identified with the exterior algebra of the Lie algebra, with a suitable differential. The construction of this differential on an exterior algebra makes sense for any Lie algebra, so it is used to define Lie algebra cohomology for all Lie algebras. More generally one uses a similar construction to define Lie algebra cohomology with coefficients in a module. If G {\displaystyle G} is a simply connected noncompact Lie group, the Lie algebra cohomology of the associated Lie algebra g {\displaystyle {\mathfrak {g}}} does not necessarily reproduce the de Rham cohomology of G {\displaystyle G} . The reason for this is that the passage from the complex of all differential forms to the complex of left-invariant differential forms uses an averaging process that only makes sense for compact groups. == Definition == Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra over a commutative ring R with universal enveloping algebra U g {\displaystyle U{\mathfrak {g}}} , and let M be a representation of g {\displaystyle {\mathfrak {g}}} (equivalently, a U g {\displaystyle U{\mathfrak {g}}} -module). Considering R as a trivial representation of g {\displaystyle {\mathfrak {g}}} , one defines the cohomology groups H n ( g ; M ) := E x t U g n ( R , M ) {\displaystyle \mathrm {H} ^{n}({\mathfrak {g}};M):=\mathrm {Ext} _{U{\mathfrak {g}}}^{n}(R,M)} (see Ext functor for the definition of Ext). Equivalently, these are the right derived functors of the left exact invariant submodule functor M ↦ M g := { m ∈ M ∣ x m = 0 for all x ∈ g } . {\displaystyle M\mapsto M^{\mathfrak {g}}:=\{m\in M\mid xm=0\ {\text{ for all }}x\in {\mathfrak {g}}\}.} Analogously, one can define Lie algebra homology as H n ( g ; M ) := T o r n U g ( R , M ) {\displaystyle \mathrm {H} _{n}({\mathfrak {g}};M):=\mathrm {Tor} _{n}^{U{\mathfrak {g}}}(R,M)} (see Tor functor for the definition of Tor), which is equivalent to the left derived functors of the right exact coinvariants functor M ↦ M g := M / g M . {\displaystyle M\mapsto M_{\mathfrak {g}}:=M/{\mathfrak {g}}M.} Some important basic results about the cohomology of Lie algebras include Whitehead's lemmas, Weyl's theorem, and the Levi decomposition theorem. == Chevalley–Eilenberg complex == Let g {\displaystyle {\mathfrak {g}}} be a Lie algebra over a field k {\displaystyle k} , with a left action on the g {\displaystyle {\mathfrak {g}}} -module M {\displaystyle M} . 
The elements of the Chevalley–Eilenberg complex H o m k ( Λ ∙ g , M ) {\displaystyle \mathrm {Hom} _{k}(\Lambda ^{\bullet }{\mathfrak {g}},M)} are called cochains from g {\displaystyle {\mathfrak {g}}} to M {\displaystyle M} . A homogeneous n {\displaystyle n} -cochain from g {\displaystyle {\mathfrak {g}}} to M {\displaystyle M} is thus an alternating k {\displaystyle k} -multilinear function f : Λ n g → M {\displaystyle f\colon \Lambda ^{n}{\mathfrak {g}}\to M} . When g {\displaystyle {\mathfrak {g}}} is finite-dimensional as a vector space, the Chevalley–Eilenberg complex is canonically isomorphic to the tensor product M ⊗ Λ ∙ g ∗ {\displaystyle M\otimes \Lambda ^{\bullet }{\mathfrak {g}}^{*}} , where g ∗ {\displaystyle {\mathfrak {g}}^{*}} denotes the dual vector space of g {\displaystyle {\mathfrak {g}}} . The Lie bracket [ ⋅ , ⋅ ] : Λ 2 g → g {\displaystyle [\cdot ,\cdot ]\colon \Lambda ^{2}{\mathfrak {g}}\rightarrow {\mathfrak {g}}} on g {\displaystyle {\mathfrak {g}}} induces a transpose map d g ( 1 ) : g ∗ → Λ 2 g ∗ {\displaystyle d_{\mathfrak {g}}^{(1)}\colon {\mathfrak {g}}^{*}\rightarrow \Lambda ^{2}{\mathfrak {g}}^{*}} by duality. The latter is sufficient to define a derivation d g {\displaystyle d_{\mathfrak {g}}} of the complex of cochains from g {\displaystyle {\mathfrak {g}}} to k {\displaystyle k} by extending d g ( 1 ) {\displaystyle d_{\mathfrak {g}}^{(1)}} according to the graded Leibniz rule. It follows from the Jacobi identity that d g {\displaystyle d_{\mathfrak {g}}} satisfies d g 2 = 0 {\displaystyle d_{\mathfrak {g}}^{2}=0} and is in fact a differential. In this setting, k {\displaystyle k} is viewed as a trivial g {\displaystyle {\mathfrak {g}}} -module while k ∼ Λ 0 g ∗ ⊆ K e r ( d g ) {\displaystyle k\sim \Lambda ^{0}{\mathfrak {g}}^{*}\subseteq \mathrm {Ker} (d_{\mathfrak {g}})} may be thought of as constants. In general, let γ ∈ H o m ( g , End ⁡ M ) {\displaystyle \gamma \in \mathrm {Hom} ({\mathfrak {g}},\operatorname {End} M)} denote the left action of g {\displaystyle {\mathfrak {g}}} on M {\displaystyle M} and regard it as a map d γ ( 0 ) : M → M ⊗ g ∗ {\displaystyle d_{\gamma }^{(0)}\colon M\rightarrow M\otimes {\mathfrak {g}}^{*}} . The Chevalley–Eilenberg differential d {\displaystyle d} is then the unique derivation extending d γ ( 0 ) {\displaystyle d_{\gamma }^{(0)}} and d g ( 1 ) {\displaystyle d_{\mathfrak {g}}^{(1)}} according to the graded Leibniz rule, the nilpotency condition d 2 = 0 {\displaystyle d^{2}=0} following from the Lie algebra homomorphism from g {\displaystyle {\mathfrak {g}}} to End ⁡ M {\displaystyle \operatorname {End} M} and the Jacobi identity in g {\displaystyle {\mathfrak {g}}} . Explicitly, the differential of the n {\displaystyle n} -cochain f {\displaystyle f} is the ( n + 1 ) {\displaystyle (n+1)} -cochain d f {\displaystyle df} given by: ( d f ) ( x 1 , … , x n + 1 ) = ∑ i ( − 1 ) i + 1 x i f ( x 1 , … , x ^ i , … , x n + 1 ) + ∑ i < j ( − 1 ) i + j f ( [ x i , x j ] , x 1 , … , x ^ i , … , x ^ j , … , x n + 1 ) , {\displaystyle {\begin{aligned}(df)\left(x_{1},\ldots ,x_{n+1}\right)=&\sum _{i}(-1)^{i+1}x_{i}\,f\left(x_{1},\ldots ,{\hat {x}}_{i},\ldots ,x_{n+1}\right)+\\&\sum _{i<j}(-1)^{i+j}f\left(\left[x_{i},x_{j}\right],x_{1},\ldots ,{\hat {x}}_{i},\ldots ,{\hat {x}}_{j},\ldots ,x_{n+1}\right)\,,\end{aligned}}} where the caret signifies omitting that argument.
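With trivial coefficients the first sum in the formula above drops out, and the differential on each Λ^k g* becomes a matrix determined by the structure constants. The following numpy sketch (illustrative; the encoding and names are my own, not from the literature) builds these matrices and recovers the Betti numbers (1, 0, 0, 1) of so(3), consistent with Whitehead's lemmas for semisimple Lie algebras:

```python
# Minimal sketch (illustrative): the Chevalley-Eilenberg differential on
# Lambda^k g* with trivial coefficients, from structure constants
# struct[i, j, l] where [e_i, e_j] = sum_l struct[i, j, l] e_l.
import itertools
from math import comb
import numpy as np

def ce_matrix(struct, k):
    n = len(struct)
    dom = list(itertools.combinations(range(n), k))       # basis of Lambda^k g*
    cod = list(itertools.combinations(range(n), k + 1))   # basis of Lambda^(k+1) g*
    col = {idx: c for c, idx in enumerate(dom)}
    D = np.zeros((len(cod), len(dom)))
    for r, idx in enumerate(cod):
        for a, b in itertools.combinations(range(k + 1), 2):
            rest = tuple(v for m, v in enumerate(idx) if m not in (a, b))
            for l in range(n):
                c = struct[idx[a], idx[b], l]             # [e_ia, e_ib] component
                if c == 0 or l in rest:
                    continue
                swaps = sum(v < l for v in rest)          # sign of sorting l into rest
                D[r, col[tuple(sorted(rest + (l,)))]] += (-1) ** (a + b) * (-1) ** swaps * c
    return D

def betti(struct):
    n = len(struct)
    rank = [np.linalg.matrix_rank(m) if m.size else 0
            for m in (ce_matrix(struct, k) for k in range(n + 1))]
    return [comb(n, k) - rank[k] - (rank[k - 1] if k else 0) for k in range(n + 1)]

# so(3): [e0, e1] = e2, [e1, e2] = e0, [e2, e0] = e1
struct = np.zeros((3, 3, 3))
for i, j, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    struct[i, j, l], struct[j, i, l] = 1, -1

print(betti(struct))   # [1, 0, 0, 1]: H^0 = H^3 = R, H^1 = H^2 = 0
```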
When G {\displaystyle G} is a real Lie group with Lie algebra g {\displaystyle {\mathfrak {g}}} , the Chevalley–Eilenberg complex may also be canonically identified with the space of left-invariant forms with values in M {\displaystyle M} , denoted by Ω ∙ ( G , M ) G {\displaystyle \Omega ^{\bullet }(G,M)^{G}} . The Chevalley–Eilenberg differential may then be thought of as a restriction of the covariant derivative on the trivial fiber bundle G × M → G {\displaystyle G\times M\rightarrow G} , equipped with the equivariant connection γ ~ ∈ Ω 1 ( G , End ⁡ M ) {\displaystyle {\tilde {\gamma }}\in \Omega ^{1}(G,\operatorname {End} M)} associated with the left action γ ∈ H o m ( g , End ⁡ M ) {\displaystyle \gamma \in \mathrm {Hom} ({\mathfrak {g}},\operatorname {End} M)} of g {\displaystyle {\mathfrak {g}}} on M {\displaystyle M} . In the particular case where M = k = R {\displaystyle M=k=\mathbb {R} } is equipped with the trivial action of g {\displaystyle {\mathfrak {g}}} , the Chevalley–Eilenberg differential coincides with the restriction of the de Rham differential on Ω ∙ ( G ) {\displaystyle \Omega ^{\bullet }(G)} to the subspace of left-invariant differential forms. == Cohomology in small dimensions == The zeroth cohomology group is (by definition) the invariants of the Lie algebra acting on the module: H 0 ( g ; M ) = M g = { m ∈ M ∣ x m = 0 for all x ∈ g } . {\displaystyle H^{0}({\mathfrak {g}};M)=M^{\mathfrak {g}}=\{m\in M\mid xm=0\ {\text{ for all }}x\in {\mathfrak {g}}\}.} The first cohomology group is the space Der of derivations modulo the space Ider of inner derivations H 1 ( g ; M ) = D e r ( g , M ) / I d e r ( g , M ) {\displaystyle H^{1}({\mathfrak {g}};M)=\mathrm {Der} ({\mathfrak {g}},M)/\mathrm {Ider} ({\mathfrak {g}},M)\,} , where a derivation is a map d {\displaystyle d} from the Lie algebra to M {\displaystyle M} such that d [ x , y ] = x d y − y d x {\displaystyle d[x,y]=xdy-ydx~} and is called inner if it is given by d x = x a {\displaystyle dx=xa~} for some a {\displaystyle a} in M {\displaystyle M} . The second cohomology group H 2 ( g ; M ) {\displaystyle H^{2}({\mathfrak {g}};M)} is the space of equivalence classes of Lie algebra extensions 0 → M → h → g → 0 {\displaystyle 0\rightarrow M\rightarrow {\mathfrak {h}}\rightarrow {\mathfrak {g}}\rightarrow 0} of the Lie algebra by the module M {\displaystyle M} . Similarly, any element of the cohomology group H n + 1 ( g ; M ) {\displaystyle H^{n+1}({\mathfrak {g}};M)} gives an equivalence class of ways to extend the Lie algebra g {\displaystyle {\mathfrak {g}}} to a "Lie n {\displaystyle n} -algebra" with g {\displaystyle {\mathfrak {g}}} in grade zero and M {\displaystyle M} in grade n {\displaystyle n} . A Lie n {\displaystyle n} -algebra is a homotopy Lie algebra with nonzero terms only in degrees 0 through n {\displaystyle n} . == Examples == === Cohomology on the trivial module === When M = R {\displaystyle M=\mathbb {R} } , as mentioned earlier the Chevalley–Eilenberg complex coincides with the de-Rham complex for a corresponding compact Lie group. In this case M {\displaystyle M} carries the trivial action of g {\displaystyle {\mathfrak {g}}} , so x a = 0 {\displaystyle xa=0} for every x ∈ g , a ∈ M {\displaystyle x\in {\mathfrak {g}},a\in M} . The zeroth cohomology group is M {\displaystyle M} . 
First cohomology: given a derivation D {\displaystyle D} , the triviality of the action means that x D y = 0 {\displaystyle xDy=0} for all x {\displaystyle x} and y {\displaystyle y} , so every derivation satisfies D ( [ x , y ] ) = 0 {\displaystyle D([x,y])=0} ; hence the ideal [ g , g ] {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]} is contained in the kernel of D {\displaystyle D} . If [ g , g ] = g {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]={\mathfrak {g}}} , as is the case for simple Lie algebras, then D ≡ 0 {\displaystyle D\equiv 0} , so the space of derivations, and with it the first cohomology group, is trivial. If g {\displaystyle {\mathfrak {g}}} is abelian, that is, [ g , g ] = 0 {\displaystyle [{\mathfrak {g}},{\mathfrak {g}}]=0} , then any linear map D : g → M {\displaystyle D:{\mathfrak {g}}\rightarrow M} is in fact a derivation, and the set of inner derivations is trivial, as they satisfy D x = x a = 0 {\displaystyle Dx=xa=0} for any a ∈ M {\displaystyle a\in M} . The first cohomology group in this case is therefore M dim g {\displaystyle M^{{\text{dim}}{\mathfrak {g}}}} . In light of the de Rham correspondence, this shows the importance of the compactness assumption: this is the first cohomology group of the n {\displaystyle n} -torus, whose Lie algebra is the abelian Lie algebra of dimension n {\displaystyle n} , while R n {\displaystyle \mathbb {R} ^{n}} , a noncompact group with the same Lie algebra, has trivial de Rham cohomology in positive degrees. Second cohomology: The second cohomology group is the space of equivalence classes of central extensions 0 → h → e → g → 0. {\displaystyle 0\rightarrow {\mathfrak {h}}\rightarrow {\mathfrak {e}}\rightarrow {\mathfrak {g}}\rightarrow 0.} Finite-dimensional simple Lie algebras have only trivial central extensions. === Cohomology on the adjoint module === When M = g {\displaystyle M={\mathfrak {g}}} , the action is the adjoint action, x ⋅ y = [ x , y ] = ad ( x ) y {\displaystyle x\cdot y=[x,y]={\text{ad}}(x)y} . The zeroth cohomology group is the center z ( g ) {\displaystyle {\mathfrak {z}}({\mathfrak {g}})} . First cohomology: the inner derivations are given by D x = x y = [ x , y ] = − ad ( y ) x {\displaystyle Dx=xy=[x,y]=-{\text{ad}}(y)x} , so they are precisely the image of ad : g → End ⁡ g . {\displaystyle {\text{ad}}:{\mathfrak {g}}\rightarrow \operatorname {End} {\mathfrak {g}}.} The first cohomology group is the space of outer derivations. == See also == BRST formalism in theoretical physics Gelfand–Fuks cohomology == References ==
Wikipedia/Lie_algebra_homology
In the theory of multivariate polynomials, Buchberger's algorithm is a method for transforming a given set of polynomials into a Gröbner basis, which is another set of polynomials that have the same common zeros and are more convenient for extracting information on these common zeros. It was introduced by Bruno Buchberger simultaneously with the definition of Gröbner bases. The Euclidean algorithm for computing the polynomial greatest common divisor is a special case of Buchberger's algorithm restricted to polynomials of a single variable. Gaussian elimination of a system of linear equations is another special case where the degree of all polynomials equals one. For other Gröbner basis algorithms, see Gröbner basis § Algorithms and implementations. == Algorithm == A crude version of this algorithm to find a basis for an ideal I of a polynomial ring R proceeds as follows:
Input: A set of polynomials F that generates I
Output: A Gröbner basis G for I
1. G := F
2. For every fi, fj in G, denote by gi the leading term of fi with respect to the given monomial ordering, and by aij the least common multiple of gi and gj.
3. Choose two polynomials in G and let Sij = (aij / gi) fi − (aij / gj) fj. (Note that the leading terms here will cancel by construction.)
4. Reduce Sij with the multivariate division algorithm relative to the set G until the result is not further reducible. If the result is non-zero, add it to G.
5. Repeat steps 2–4 until all possible pairs are considered, including those involving the new polynomials added in step 4.
6. Output G
The polynomial Sij is commonly referred to as the S-polynomial, where S refers to subtraction (Buchberger) or syzygy (others). The pair of polynomials with which it is associated is commonly referred to as a critical pair. There are numerous ways to improve this algorithm beyond what has been stated above. For example, one could reduce all the new elements of F relative to each other before adding them. If the leading terms of fi and fj share no variables in common, then Sij will always reduce to 0 (if we use only fi and fj for reduction), so we needn't calculate it at all. The algorithm terminates because it consistently increases the size of the monomial ideal generated by the leading terms of our set F, and Dickson's lemma (or the Hilbert basis theorem) guarantees that any such ascending chain must eventually become constant. == Complexity == The computational complexity of Buchberger's algorithm is very difficult to estimate, because of the number of choices that may dramatically change the computation time. Nevertheless, T. W. Dubé has proved that the degrees of the elements of a reduced Gröbner basis are always bounded by 2 ( d 2 2 + d ) 2 n − 2 {\displaystyle 2\left({\frac {d^{2}}{2}}+d\right)^{2^{n-2}}} , where n is the number of variables, and d the maximal total degree of the input polynomials. This allows, in theory, the use of linear algebra over the vector space of the polynomials of degree bounded by this value, giving an algorithm of complexity d 2 n + o ( 1 ) {\displaystyle d^{2^{n+o(1)}}} . On the other hand, there are examples where the Gröbner basis contains elements of degree d 2 Ω ( n ) {\displaystyle d^{2^{\Omega (n)}}} , and the above upper bound of complexity is optimal. Nevertheless, such examples are extremely rare. Since its discovery, many variants of Buchberger's algorithm have been introduced to improve its efficiency.
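Before turning to those variants, here is a short Python sketch of the crude version above, added here for illustration; SymPy supplies the polynomial arithmetic, and the function names s_polynomial and buchberger, the naive pair-selection strategy, and the example ideal are all choices of this sketch rather than part of the algorithm's specification.

```python
from sympy import symbols, expand, lcm, LT, reduced, groebner

x, y = symbols('x y')

def s_polynomial(f, g, order='grevlex'):
    """S-polynomial (a/lt(f)) f - (a/lt(g)) g, where a is the lcm of the
    two leading terms; the leading terms cancel by construction."""
    a = lcm(LT(f, order=order), LT(g, order=order))
    return expand(a / LT(f, order=order) * f - a / LT(g, order=order) * g)

def buchberger(F, order='grevlex'):
    """Crude Buchberger loop (steps 1-6 above): reduce each S-polynomial
    modulo the current basis and keep the nonzero remainders."""
    G = list(F)
    pairs = [(i, j) for i in range(len(G)) for j in range(i + 1, len(G))]
    while pairs:
        i, j = pairs.pop()
        _, r = reduced(s_polynomial(G[i], G[j], order), G, order=order)
        if r != 0:                       # new basis element: add new pairs
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G

print(buchberger([x**2 + y, x*y - 1]))
# For comparison, SymPy's built-in (improved) implementation:
print(groebner([x**2 + y, x*y - 1], x, y, order='grevlex'))
```

The toy loop recomputes and reduces far more S-polynomials than necessary; criteria for discarding pairs in advance, such as the disjoint-leading-terms criterion mentioned above, together with careful pair selection, are what make production implementations practical.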
Faugère's F4 and F5 algorithms are presently the most efficient algorithms for computing Gröbner bases, and allow the routine computation of Gröbner bases consisting of several hundred polynomials, each having several hundred terms with coefficients of several hundred digits. == Implementations == In the SymPy library for Python, the (improved) Buchberger algorithm is implemented as sympy.polys.polytools.groebner(). There is an implementation of Buchberger's algorithm that has been proved correct within the proof assistant Coq. == See also == Knuth–Bendix completion algorithm Quine–McCluskey algorithm – analogous algorithm for Boolean algebra == References == == Further reading == Buchberger, B. (August 1976). "Theoretical Basis for the Reduction of Polynomials to Canonical Forms". ACM SIGSAM Bulletin. 10 (3). ACM: 19–29. doi:10.1145/1088216.1088219. MR 0463136. S2CID 15179417. Cox, David; Little, John; O'Shea, Donal (1997). Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer. ISBN 0-387-94680-2. Gerdt, Vladimir P.; Blinkov, Yuri A. (1998). "Involutive Bases of Polynomial Ideals". Mathematics and Computers in Simulation. 45: 519ff. == External links == "Buchberger algorithm", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Buchberger's algorithm on Scholarpedia Weisstein, Eric W. "Buchberger's Algorithm". MathWorld.
Wikipedia/Buchberger's_algorithm
REDUCE is a general-purpose computer algebra system originally geared towards applications in physics. The development of REDUCE was started in 1963 by Anthony C. Hearn; since then, many scientists from all over the world have contributed to its development. REDUCE was open-sourced in December 2008 and is available for free under a modified BSD license on SourceForge. Previously it had cost $695. REDUCE is written entirely in its own Lisp dialect called Standard Lisp, expressed in an ALGOL-like syntax called RLISP that is also used as the basis for REDUCE's user-level language. Implementations of REDUCE are available on most variants of Unix, Linux, Microsoft Windows, or Apple Macintosh systems by using an underlying Portable Standard Lisp (PSL) or Codemist Standard Lisp (CSL) implementation. CSL REDUCE offers a graphical user interface. REDUCE can also be built on other Lisps, such as Common Lisp. == Features ==
- arbitrary precision integer, rational, complex and floating-point arithmetic
- expressions and functions involving one or more variables
- algorithms for polynomials, rational and transcendental functions
- facilities for the solution of a variety of algebraic equations
- automatic and user-controlled simplification of expressions
- substitutions and pattern matching in a wide variety of forms
- symbolic differentiation, indefinite and definite integration
- solution of ordinary differential equations
- computations with a wide variety of special functions
- general matrix and non-commutative algebra
- plotting in 2 and 3 dimensions of graphs of functions, arbitrary points, lines and curves
- Dirac matrix calculations of interest in high energy physics
- quantifier elimination and decision for interpreted first-order logic
- powerful intuitive user-level programming language
== Syntax == The REDUCE language is a high-level structured programming language based on ALGOL 60 (but with Standard Lisp semantics), although it does not support all ALGOL 60 syntax. It is similar to Pascal, which evolved from ALGOL 60, and Modula, which evolved from Pascal. REDUCE is a free-form language, meaning that spacing and line breaks are not significant, but consequently input statements must be separated from each other and all input must be terminated with either a semi-colon (;) or a dollar sign ($). The difference is that if the input results in a useful (non-nil) value then it will be output if the separator is a semi-colon (;) but hidden if it is a dollar sign ($). The assignment operator is colon-equal (:=), which in its simplest usage assigns to the variable on its left the value of the expression on its right. However, a REDUCE variable can have no value, in which case it is displayed as its name, in order to allow mathematical expressions involving indeterminates to be constructed and manipulated. The simplest way to use REDUCE is interactively: type input after the last input prompt, terminate it with a semi-colon and press the Return or Enter key; REDUCE processes the input and displays the result. === Identifiers and strings === Programming languages use identifiers to name constructs such as variables and functions, and strings to store text. A REDUCE identifier must begin with a letter and can be followed by letters, digits and underscore characters (_). A REDUCE identifier can also include any character anywhere if it is input preceded by an exclamation mark (!). A REDUCE string is any sequence of characters delimited when input by double quote characters (").
A double quote can be included in a string by entering two double quotes; no other escape mechanism is implemented within strings. An identifier can be used instead of a string in most situations in REDUCE, such as to represent a file name. REDUCE source code was originally written in all upper-case letters, as were all programming languages in the 1960s. (Hence, the name REDUCE is normally written in all upper-case.) However, modern REDUCE is case-insensitive (by default), which means that it ignores the case of letters, and it is normally written in lower-case. (The REDUCE source code has been converted to lower case.) The exceptions to this rule are that case is preserved within strings and when letters in identifiers are preceded by an exclamation mark (!). Hence, it is conventional to use snake-case (e.g. long_name) rather than camel-case (e.g. longName) for REDUCE identifiers, because camel-case gets lost without also using exclamation marks. === Hello World programs === A REDUCE "Hello, World!" program is almost as short as such a program could possibly be: because a string is a (non-nil) value, and values terminated by a semi-colon are displayed, the program can consist of just the string itself followed by a semi-colon, and REDUCE displays the string as output. Another REDUCE "Hello, World!" program, slightly longer than this version, uses an identifier rather than a string; CSL REDUCE displays the same output as the string version. (Other REDUCE GUIs may italicise this output on the grounds that it is an identifier rather than a string.) === Statements and expressions === Because REDUCE inherits Lisp semantics, all programming constructs have values. Therefore, the only distinction between statements and expressions is that the value of an expression is used but the value of a statement is not. The terms statement and expression are interchangeable, although a few constructs always return the Lisp value nil and so are always used as statements. There are two ways to group several statements or expressions into a single unit that is syntactically equivalent to a single statement or expression, which is necessary to facilitate structured programming. One is the begin...end construct inherited from ALGOL 60, which is called a block or compound statement. Its value is the value of the expression following the (optional) keyword return. The other uses the bracketing syntax <<...>>, which is called a group statement. Its value is the value of the last (unterminated) expression in it. Both are illustrated in the procedural programming example below. === Structured programming === REDUCE supports conditional and repetition statements, some of which are controlled by a boolean expression, which is any expression whose value can be either true or false, such as x > 0 {\displaystyle x>0} . (The REDUCE user-level language does not explicitly support constants representing true or false although, as in C and related languages, 0 has the boolean value false, whereas 1 and many other non-zero values have the boolean value true.) ==== Conditional statements: if ... then ... else ==== The conditional statement has the form if boolean expression then statement, which can optionally be followed by else statement. For example, a conditional statement can ensure that the value of n {\displaystyle n} , assumed to be numerical, is positive, effectively implementing the absolute value function; and a conditional statement used as an expression can avoid an error that would otherwise be caused by dividing by 0. ==== Repetition statements: for ... ==== The for statement is a flexible loop construct that executes statement repeatedly a number of times that must be known in advance.
One version has the form for variable := initial step increment until final do statement, where variable names a variable whose value can be used within statement, and initial, increment and final are numbers (preferably integers). The value of variable is initialized to initial and statement is executed, then the value of variable is repeatedly increased by increment and the statement executed again, provided the value of variable is not greater than final. The common special case "initial step 1 until final" can be abbreviated as "initial : final". A for statement of this form can compute the value of n ! {\displaystyle n!} as the value of a variable fac, by initializing fac to 1 and multiplying it by each integer from 1 up to n in turn. Another version of the for statement iterates over a list, and the keyword do can be replaced by product, sum, collect or join, in which case the for statement becomes an expression and the controlled statement is treated as an expression. With product, the value is the product of the values of the controlled statement; with sum, the value is the sum of the values of the controlled statement; with collect, the value is the values of the controlled statement collected into a list; with join, the value is the values of the controlled statement, which must be lists, joined into one list. Using product, a for statement can compute the value of n ! {\displaystyle n!} much more succinctly and elegantly than the explicit loop just described. ==== Repetition statements: while ... do; repeat ... until ==== The two loop statements, while boolean expression do statement and repeat statement until boolean expression, are closely related to the conditional statement and execute statement repeatedly a number of times that need not be known in advance. Their difference is that while repetition stops when boolean expression becomes false, whereas repeat repetition stops when boolean expression becomes true. Also, repeat always executes statement at least once and it can be used to initialize boolean expression, whereas when using while, boolean expression must be initialized before entering the loop. A while statement can likewise compute the value of n ! {\displaystyle n!} as the value of the variable fac, for example by treating the assignment n := n - 1 as an expression and using its value. === Comments === REDUCE has three comment conventions. It inherits the comment statement from ALGOL 60, which looks like this: comment This is a multi-line comment that ends at the next separator, so it cannot contain separators; Comment statements mostly appear in older code. It inherits the %... comment from Standard Lisp, which looks like this: % This is a single-line comment that ends at the end of the line. % It can appear on a line after code and % can contain the separators ";" and "$". %... comments are analogous to C++ //... comments and are the most commonly used form of comment. REDUCE also supports a C-style /*...*/ comment that looks like this: /* This is a multi-line comment that can appear anywhere a space could and can contain the separators ";" and "$". */ == Programming paradigms == REDUCE's user-level language supports several programming paradigms, as illustrated in the algebraic programming examples below. Since it is based on Lisp, which is a functional programming language, REDUCE supports functional programming and all statements have values (although they are not always useful). REDUCE also supports procedural programming by ignoring statement values. Algebraic computation usually proceeds by transforming a mathematical expression into an equivalent but different form. This is called simplification, even though the result might be much longer.
(The name REDUCE is a pun on this problem of intermediate expression swell!) In REDUCE, simplification occurs automatically when an expression is entered or computed, controlled by simplification rules and switches. In this way, REDUCE supports rule-based programming, which is the classic REDUCE programming paradigm. In early versions of REDUCE, rules and switches could only be set globally, but modern REDUCE also supports local setting of rules and switches, meaning that they control the simplification of only one expression. REDUCE programs often contain a mix of programming paradigms. == Algebraic programming examples == As a simple programming example, consider the problem of computing the n {\displaystyle n} th Taylor polynomial of the function f ( x ) {\displaystyle f(x)} about the point x = a {\displaystyle x=a} , which is given by the formula ∑ r = 0 n f ( r ) ( a ) r ! ( x − a ) r {\displaystyle \sum _{r=0}^{n}{\frac {f^{(r)}(a)}{r!}}(x-a)^{r}} . Here, f ( r ) {\displaystyle f^{(r)}} denotes the r {\displaystyle r} th derivative of f {\displaystyle f} evaluated at the point a {\displaystyle a} and r ! {\displaystyle r!} denotes the factorial of r {\displaystyle r} . (However, note that REDUCE includes sophisticated facilities for power-series expansion.) As an example of functional programming in REDUCE, the 5th Taylor polynomial of sin ⁡ x {\displaystyle \sin x} about 0 can be computed with a single for-sum expression, in which the control variable r takes values from 0 through 5 in steps of 1, df is the REDUCE differentiation operator, and the operator sub performs substitution of its first argument into its second. Such code is very similar to the mathematical formula above (with n = 5 {\displaystyle n=5} and a = 0 {\displaystyle a=0} ). The output produced by default is correct, but it doesn't look much like a Taylor series; that can be fixed by changing a few output-control switches and then evaluating the special variable ws, which stands for workspace and holds the last non-empty output expression. As an example of procedural programming in REDUCE, a procedure can compute the general Taylor polynomial for functions that are well-behaved at the expansion point a {\displaystyle a} . Such a procedure may be named my_taylor because REDUCE already includes an operator called taylor. All the text following a % sign up to the end of the line is a comment. The keyword scalar introduces and initializes two local variables, result and mul. The keywords begin and end delimit a block of code that may include local variables and may return a value, whereas the symbols << and >> delimit a group of statements without introducing local variables. The procedure may then be called to compute the same Taylor polynomial as above. == File and package handling == REDUCE GUIs provide menu support for some or all of the file and package handling described below. === File handling === In order to develop non-trivial computations, it is convenient to store source code in a file and have REDUCE read it instead of interactive input. REDUCE input should be plain text (not rich text as produced by word-processing applications). REDUCE filenames are arbitrary.
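For readers without a REDUCE installation, the my_taylor computation described above can be mirrored in Python with SymPy. This is an analogue added here for illustration, not REDUCE code; it follows the displayed Taylor formula directly and merely reuses the name my_taylor from the text.

```python
from sympy import symbols, diff, factorial, sin

x = symbols('x')

def my_taylor(f, x, a, n):
    """n-th Taylor polynomial of f about x = a, following the formula
    sum_{r=0}^{n} f^(r)(a)/r! * (x - a)**r (f assumed well-behaved at a)."""
    return sum(diff(f, x, r).subs(x, a) / factorial(r) * (x - a)**r
               for r in range(n + 1))

# 5th Taylor polynomial of sin(x) about 0:
print(my_taylor(sin(x), x, 0, 5))   # x - x**3/6 + x**5/120, up to term order
```

In REDUCE itself, such a program would normally be stored in an input file, whose handling is described next.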
The REDUCE source code uses the filename extension .red for the main source code and .tst for the test files, and for that reason REDUCE GUIs such as CSL REDUCE normally offer to input files with those extensions by default, but on platforms such as Microsoft Windows the extension .txt may be more convenient. It is recommended to end a REDUCE input file with the line ;end; as an end-of-file marker. This is something of a historical quirk but it avoids potential warning messages. Apart from that, an input file can contain whatever might be entered interactively into REDUCE. The command in file1, file2, ... inputs each of the named files in succession into REDUCE, essentially as if their contents had been entered interactively, after which REDUCE waits for further interactive input. If the separator used to terminate this command is a semi-colon (;) then the file content is echoed as output; if the separator is a dollar sign ($) then the file content is not echoed. REDUCE filenames can be either absolute or relative to the current directory; when using a REDUCE GUI, absolute filenames are safer because it is not obvious what the current directory is! Filenames can be specified as either strings or identifiers; strings (in double quotes) are usually more convenient because otherwise filename elements such as directory separators and dots must be escaped with an exclamation mark (!). Note that the Microsoft Windows directory or folder separator, backslash (\), does not need to be doubled in REDUCE strings because backslash is not an escape character in REDUCE, but REDUCE on Microsoft Windows also accepts forward slash (/) as the directory separator. REDUCE output can be directed to a file instead of the interactive display by executing the command out file; Output redirection can be terminated permanently by executing the command shut file; or temporarily by executing the command out t; There are similar mechanisms for directing a compiled version of the REDUCE input to a file and loading compiled code, which is the basis for building REDUCE and can be used to extend it. === Loading packages === REDUCE is composed of a number of packages; some are pre-loaded, some are auto-loaded when needed, and some must be explicitly loaded before they can be used. The command load_package package1, package2, ... loads each of the named packages in succession into REDUCE. Package names are not filenames; they are simple identifiers that do not need any exclamation marks, so they are normally input as identifiers, although they can be input as strings. A package consists of one or more files of compiled Lisp code, and the load_package command ensures that the right files are loaded in the right order. The precise filenames and locations depend on the version of Lisp on which REDUCE is built, but the package names are always the same. == Types and variable scope == REDUCE inherits dynamic scoping from Lisp, which means that data have types but variables themselves do not: the type of a variable is the type of the data assigned to it. The simplest REDUCE data types are Standard Lisp atomic types such as identifiers, machine numbers (i.e. "small" integers and floating-point numbers supported directly by the computer hardware), and strings. Most other REDUCE data types are represented internally as Lisp lists whose first element (car) indicates the data type.
For example, a 2-by-2 matrix whose rows are (1, 2) and (3, 4) has the internal representation (mat (1 2) (3 4)). The main algebraic objects used in REDUCE are quotients of two possibly-multivariate polynomials, the indeterminates of which, called kernels, may in fact be functions of one or more variables, e.g. z = ( x + y 2 ) / f ( x , y ) {\displaystyle z=(x+y^{2})/f(x,y)} . REDUCE uses two representations for such algebraic objects. One is called prefix form, which is just the Standard Lisp code for the expression and is convenient for operations such as input and output; e.g. for z {\displaystyle z} it is (quotient (plus x (expt y 2)) (f x y)) The other is called standard quotient form, which is better for performing algebraic manipulations such as addition; e.g. for z {\displaystyle z} it is (!*sq ((((x . 1) . 1) ((y . 2) . 1)) (((f x y) . 1) . 1)) t) REDUCE converts between these two representations as necessary, but tries to retain standard quotient form as much as possible to avoid the conversion overhead. Because variables have no types, there are no variable type declarations in REDUCE, but there are variable scope declarations. The scope of a variable refers to the range of a program throughout which it has the same significance. By default, REDUCE variables are automatically global in scope, meaning that they have the same significance everywhere, i.e. once a variable has been assigned a value, it will evaluate to that same value everywhere. Variables can be declared to have scope limited to a particular block of code by delimiting that block of code by the keywords begin and end, and declaring the variables scalar at the start of the block, using the following syntax (as illustrated in the algebraic programming examples above): begin scalar variable1, variable2, ...; statements end Each variable so declared can optionally be followed by an assignment operator (:=) and an initial value. The keyword scalar should be read as meaning local. (The reason for the name scalar is buried in the history of REDUCE, but it was probably chosen to distinguish local variables from the relativistic 4-vectors and Dirac gamma matrices defined in the high-energy physics package, which was the original core of REDUCE.) The scalar keyword can be replaced by integer or real. The difference is that integer variables are initialized by default to 0, whereas scalar and real variables are initialized by default to the Lisp value nil (which has the algebraic value 0 anyway). This distinction is more significant in the REDUCE implementation language, RLISP, also known as symbolic or lisp mode. Otherwise, it is useful as documentation of the intended use of local variables. There are two other variable declarations that are used only in the implementation of REDUCE, i.e. in symbolic mode. The REDUCE begin...end block described above is translated into a Standard Lisp prog form by the REDUCE parser, and all Standard Lisp variables should either be bound in prog forms, or declared global or fluid. In RLISP, these declarations look like this: fluid '(variable1 variable2 ...) global '(variable1 variable2 ...) A global variable cannot be rebound in a prog form, whereas a fluid variable can. This distinction is normally only significant to a Lisp compiler and is used to maximize efficiency; in interpreted code these declarations can be skipped and undeclared variables are effectively fluid.
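To illustrate the prefix form quoted above outside of Lisp, here is a small Python sketch (an illustration added here, with to_infix and z_prefix being names chosen for this sketch) that mirrors the nested-list representation as tuples and renders it in conventional notation.

```python
# Hypothetical mirror of REDUCE's prefix form as nested Python tuples,
# for the expression z = (x + y^2)/f(x, y) quoted in the text.
z_prefix = ('quotient',
            ('plus', 'x', ('expt', 'y', 2)),
            ('f', 'x', 'y'))

def to_infix(e):
    """Render a prefix-form expression as a conventional infix string."""
    if not isinstance(e, tuple):
        return str(e)
    op, *args = e
    ops = {'plus': ' + ', 'quotient': '/', 'expt': '^'}
    if op in ops:
        return '(' + ops[op].join(to_infix(a) for a in args) + ')'
    return f"{op}({', '.join(to_infix(a) for a in args)})"

print(to_infix(z_prefix))   # ((x + y^2)/f(x, y))
```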
== Graphics == REDUCE supports graphical display via gnuplot, which is an independent portable open-source graphics package that is included in all REDUCE binary distributions. The REDUCE GNUPLOT package supports the display of curves or surfaces defined by formulas and/or data sets via the command plot(...). This command exposes some, but not all, of the capabilities of gnuplot. The REDUCE TURTLE and LOGOTURTLE packages are built on the REDUCE GNUPLOT package and support turtle graphics in two dimensions; the LOGOTURTLE package also exposes additional capabilities of gnuplot, such as control of colour and line thickness, filling and text annotations. == Available implementations and supported platforms == REDUCE is available from SourceForge. Binary distributions are released a few times a year with no fixed schedule as snapshots of the Subversion repository, and also offer compressed archive snapshots of the full source code. SourceForge can be set up to notify users when a new release is available. In 2024, binary distributions were released for 64-bit versions of macOS, Linux (Debian and Red Hat based systems) and Microsoft Windows. The installers either include or are available for both CSL- and PSL-REDUCE, and may include the REDUCE source code. REDUCE can be built from the source code on a larger range of platforms and on other Lisp systems, such as Common Lisp. == Other software that uses REDUCE == The following projects use REDUCE: ALLTYPES (ALgebraic Language and TYPe System) is a computer algebra type system with particular emphasis on differential algebra and differential equations; DAISY (Differential Algebra for Identifiability of SYstems) is a software tool to perform structural identifiability analysis for linear and nonlinear dynamic models described by polynomial or rational ODE equations; MTT (Model Transformation Tools) is a set of tools for modeling dynamic physical systems using the bond-graph methodology; Reduce.jl is a symbolic parser for Julia language term rewriting using REDUCE algebra; Redlog (REDUCE Logic System) provides more than 100 functions on first-order formulas and was originally independent but is now available as a REDUCE package; Pure is a programming language, which has bindings for REDUCE, providing a very interesting environment for doing computer-powered science. == See also == List of computer algebra systems ALTRAN REDUCE Meets CAMAL - J. P. Fitch [1] == References == == External links == REDUCE on SourceForge Anthony C. Hearn at al., REDUCE User's Manual [ HTML | PDF ]. Anthony C. Hearn, "REDUCE: The First Forty Years". Invited paper presented at the A3L Conference in Honor of the 60th Birthday of Volker Weispfenning, April 2005. Andrey Grozin, "TeXmacs-Reduce interface", April 2012.
Wikipedia/REDUCE_(computer_algebra_system)
Algebraic geometry is a branch of mathematics which uses abstract algebraic techniques, mainly from commutative algebra, to solve geometrical problems. Classically, it studies zeros of multivariate polynomials; the modern approach generalizes this in a few different aspects. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves, and quartic curves like lemniscates and Cassini ovals. These are plane algebraic curves. A point of the plane lies on an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of points of special interest like singular points, inflection points and points at infinity. More advanced questions involve the topology of the curve and the relationship between curves defined by different equations. Algebraic geometry occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex analysis, topology and number theory. As a study of systems of polynomial equations in several variables, the subject of algebraic geometry begins with finding specific solutions via equation solving, and then proceeds to understand the intrinsic properties of the totality of solutions of a system of equations. This understanding requires both conceptual theory and computational technique. In the 20th century, algebraic geometry split into several subareas. The mainstream of algebraic geometry is devoted to the study of the complex points of the algebraic varieties and more generally to the points with coordinates in an algebraically closed field. Real algebraic geometry is the study of the real algebraic varieties. Diophantine geometry and, more generally, arithmetic geometry is the study of algebraic varieties over fields that are not algebraically closed and, specifically, over fields of interest in algebraic number theory, such as the field of rational numbers, number fields, finite fields, function fields, and p-adic fields. A large part of singularity theory is devoted to the singularities of algebraic varieties. Computational algebraic geometry is an area that has emerged at the intersection of algebraic geometry and computer algebra, with the rise of computers. It consists mainly of algorithm design and software development for the study of properties of explicitly given algebraic varieties. Much of the development of the mainstream of algebraic geometry in the 20th century occurred within an abstract algebraic framework, with increasing emphasis being placed on "intrinsic" properties of algebraic varieties not dependent on any particular way of embedding the variety in an ambient coordinate space; this parallels developments in topology, differential and complex geometry. One key achievement of this abstract algebraic geometry is Grothendieck's scheme theory which allows one to use sheaf theory to study algebraic varieties in a way which is very similar to its use in the study of differential and analytic manifolds. This is obtained by extending the notion of point: In classical algebraic geometry, a point of an affine variety may be identified, through Hilbert's Nullstellensatz, with a maximal ideal of the coordinate ring, while the points of the corresponding affine scheme are all prime ideals of this ring. 
This means that a point of such a scheme may be either a usual point or a subvariety. This approach also enables a unification of the language and the tools of classical algebraic geometry, mainly concerned with complex points, and of algebraic number theory. Wiles' proof of the longstanding conjecture called Fermat's Last Theorem is an example of the power of this approach. == Basic notions == === Zeros of simultaneous polynomials === In classical algebraic geometry, the main objects of interest are the vanishing sets of collections of polynomials, meaning the set of all points that simultaneously satisfy one or more polynomial equations. For instance, the two-dimensional sphere of radius 1 in three-dimensional Euclidean space R3 could be defined as the set of all points ( x , y , z ) {\displaystyle (x,y,z)} with x 2 + y 2 + z 2 − 1 = 0. {\displaystyle x^{2}+y^{2}+z^{2}-1=0.\,} A "slanted" circle in R3 can be defined as the set of all points ( x , y , z ) {\displaystyle (x,y,z)} which satisfy the two polynomial equations x 2 + y 2 + z 2 − 1 = 0 , {\displaystyle x^{2}+y^{2}+z^{2}-1=0,\,} x + y + z = 0. {\displaystyle x+y+z=0.\,} === Affine varieties === First we start with a field k. In classical algebraic geometry, this field was always the complex numbers C, but many of the same results are true if we assume only that k is algebraically closed. We consider the affine space of dimension n over k, denoted An(k) (or more simply An, when k is clear from the context). When one fixes a coordinate system, one may identify An(k) with kn. The purpose of not working with kn is to emphasize that one "forgets" the vector space structure that kn carries. A function f : An → A1 is said to be polynomial (or regular) if it can be written as a polynomial, that is, if there is a polynomial p in k[x1,...,xn] such that f(M) = p(t1,...,tn) for every point M with coordinates (t1,...,tn) in An. The property of a function to be polynomial (or regular) does not depend on the choice of a coordinate system in An. When a coordinate system is chosen, the regular functions on the affine n-space may be identified with the ring of polynomial functions in n variables over k. Therefore, the set of the regular functions on An is a ring, which is denoted k[An]. We say that a polynomial vanishes at a point if evaluating it at that point gives zero. Let S be a set of polynomials in k[An]. The vanishing set of S (or vanishing locus or zero set) is the set V(S) of all points in An where every polynomial in S vanishes. Symbolically, V ( S ) = { ( t 1 , … , t n ) ∣ p ( t 1 , … , t n ) = 0 for all p ∈ S } . {\displaystyle V(S)=\{(t_{1},\dots ,t_{n})\mid p(t_{1},\dots ,t_{n})=0{\text{ for all }}p\in S\}.\,} A subset of An which is V(S), for some S, is called an algebraic set. The V stands for variety (a specific type of algebraic set to be defined below). Given a subset U of An, can one recover the set of polynomials which generate it? If U is any subset of An, define I(U) to be the set of all polynomials whose vanishing set contains U. The I stands for ideal: if two polynomials f and g both vanish on U, then f+g vanishes on U, and if h is any polynomial, then hf vanishes on U, so I(U) is always an ideal of the polynomial ring k[An]. Two natural questions to ask are: Given a subset U of An, when is U = V(I(U))? Given a set S of polynomials, when is S = I(V(S))? 
The answer to the first question is provided by introducing the Zariski topology, a topology on An whose closed sets are the algebraic sets, and which directly reflects the algebraic structure of k[An]. Then U = V(I(U)) if and only if U is an algebraic set or equivalently a Zariski-closed set. The answer to the second question is given by Hilbert's Nullstellensatz. In one of its forms, it says that I(V(S)) is the radical of the ideal generated by S. In more abstract language, there is a Galois connection, giving rise to two closure operators; they can be identified, and naturally play a basic role in the theory; the example is elaborated at Galois connection. For various reasons we may not always want to work with the entire ideal corresponding to an algebraic set U. Hilbert's basis theorem implies that ideals in k[An] are always finitely generated. An algebraic set is called irreducible if it cannot be written as the union of two smaller algebraic sets. Any algebraic set is a finite union of irreducible algebraic sets and this decomposition is unique. Thus its elements are called the irreducible components of the algebraic set. An irreducible algebraic set is also called a variety. It turns out that an algebraic set is a variety if and only if it may be defined as the vanishing set of a prime ideal of the polynomial ring. Some authors do not make a clear distinction between algebraic sets and varieties and use irreducible variety to make the distinction when needed. === Regular functions === Just as continuous functions are the natural maps on topological spaces and smooth functions are the natural maps on differentiable manifolds, there is a natural class of functions on an algebraic set, called regular functions or polynomial functions. A regular function on an algebraic set V contained in An is the restriction to V of a regular function on An. For an algebraic set defined over the field of the complex numbers, the regular functions are smooth and even analytic. It may seem unnaturally restrictive to require that a regular function always extend to the ambient space, but it is very similar to the situation in a normal topological space, where the Tietze extension theorem guarantees that a continuous function on a closed subset always extends to the ambient topological space. Just as with the regular functions on affine space, the regular functions on V form a ring, which we denote by k[V]. This ring is called the coordinate ring of V. Since regular functions on V come from regular functions on An, there is a relationship between the coordinate rings. Specifically, if a regular function on V is the restriction of two functions f and g in k[An], then f − g is a polynomial function which is zero on V and thus belongs to I(V). Thus k[V] may be identified with k[An]/I(V). === Morphism of affine varieties === Using regular functions from an affine variety to A1, we can define regular maps from one affine variety to another. First we will define a regular map from a variety into affine space: Let V be a variety contained in An. Choose m regular functions on V, and call them f1, ..., fm. We define a regular map f from V to Am by letting f = (f1, ..., fm). In other words, each fi determines one coordinate of the range of f. If V′ is a variety contained in Am, we say that f is a regular map from V to V′ if the range of f is contained in V′. The definition of regular maps applies also to algebraic sets.
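As a concrete illustration (a sketch added here, not part of the original text), the ideal–variety dictionary above can be explored with SymPy's Gröbner bases: by the Nullstellensatz, a polynomial f vanishes on V(S) over an algebraically closed field exactly when f lies in the radical of the ideal generated by S, and the classical Rabinowitsch trick reduces this radical-membership test to a single Gröbner basis computation. The function name vanishes_on_variety and the example system, the "slanted" circle from earlier in the article, are choices of this sketch.

```python
from sympy import symbols, groebner

x, y, z, t = symbols('x y z t')
slanted_circle = [x**2 + y**2 + z**2 - 1, x + y + z]

def vanishes_on_variety(f, S, gens):
    """Radical membership via the Rabinowitsch trick: f is in sqrt(I)
    iff 1 lies in the ideal generated by S together with 1 - t*f."""
    G = groebner(list(S) + [1 - t*f], *gens, t, order='lex')
    return 1 in G.exprs

# A combination of the generators vanishes on the slanted circle:
print(vanishes_on_variety(x**2 + y**2 + z**2 + x + y + z - 1,
                          slanted_circle, (x, y, z)))   # True
# x - y does not vanish on it:
print(vanishes_on_variety(x - y, slanted_circle, (x, y, z)))   # False
```

Tests of this kind are the computational face of the correspondence between ideals and algebraic sets.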
The regular maps are also called morphisms, as they make the collection of all affine algebraic sets into a category, where the objects are the affine algebraic sets and the morphisms are the regular maps. The affine varieties form a subcategory of the category of algebraic sets. Given a regular map g from V to V′ and a regular function f in k[V′], the composition f ∘ g belongs to k[V]. The map f → f ∘ g is a ring homomorphism from k[V′] to k[V]. Conversely, every ring homomorphism from k[V′] to k[V] defines a regular map from V to V′. This defines an equivalence of categories between the category of algebraic sets and the opposite category of the finitely generated reduced k-algebras. This equivalence is one of the starting points of scheme theory. === Rational function and birational equivalence === In contrast to the preceding sections, this section concerns only varieties and not algebraic sets. On the other hand, the definitions extend naturally to projective varieties (next section), as an affine variety and its projective completion have the same field of functions. If V is an affine variety, its coordinate ring is an integral domain and thus has a field of fractions, which is denoted k(V) and called the field of the rational functions on V or, for short, the function field of V. Its elements are the restrictions to V of the rational functions over the affine space containing V. The domain of a rational function f is not V but the complement of the subvariety (a hypersurface) where the denominator of f vanishes. As with regular maps, one may define a rational map from a variety V to a variety V'. As with the regular maps, the rational maps from V to V' may be identified with the field homomorphisms from k(V') to k(V). Two affine varieties are birationally equivalent if there are two rational functions between them which are inverse to each other on the regions where both are defined. Equivalently, they are birationally equivalent if their function fields are isomorphic. An affine variety is a rational variety if it is birationally equivalent to an affine space. This means that the variety admits a rational parameterization, that is, a parametrization by rational functions. For example, the circle of equation x 2 + y 2 − 1 = 0 {\displaystyle x^{2}+y^{2}-1=0} is a rational curve, as it has the parametric equation x = 2 t 1 + t 2 {\displaystyle x={\frac {2\,t}{1+t^{2}}}} y = 1 − t 2 1 + t 2 , {\displaystyle y={\frac {1-t^{2}}{1+t^{2}}}\,,} which may also be viewed as a rational map from the line to the circle. The problem of resolution of singularities is to know whether every algebraic variety is birationally equivalent to a variety whose projective completion is nonsingular (see also smooth completion). It was solved in the affirmative in characteristic 0 by Heisuke Hironaka in 1964 and is still unsolved in positive characteristic. === Projective variety === Just as the formulas for the roots of second, third, and fourth degree polynomials suggest extending real numbers to the more algebraically complete setting of the complex numbers, many properties of algebraic varieties suggest extending affine space to a more geometrically complete projective space. Whereas the complex numbers are obtained by adding the number i, a root of the polynomial x2 + 1, projective space is obtained by adding in appropriate points "at infinity", points where parallel lines may meet. To see how this might come about, consider the variety V(y − x2). If we draw it, we get a parabola.
As x goes to positive infinity, the slope of the line from the origin to the point (x, x2) also goes to positive infinity. As x goes to negative infinity, the slope of the same line goes to negative infinity. Compare this to the variety V(y − x3). This is a cubic curve. As x goes to positive infinity, the slope of the line from the origin to the point (x, x3) goes to positive infinity just as before. But unlike before, as x goes to negative infinity, the slope of the same line goes to positive infinity as well, the exact opposite of the parabola. So the behavior "at infinity" of V(y − x3) is different from the behavior "at infinity" of V(y − x2). The consideration of the projective completion of the two curves, which is their prolongation "at infinity" in the projective plane, allows us to quantify this difference: the point at infinity of the parabola is a regular point, whose tangent is the line at infinity, while the point at infinity of the cubic curve is a cusp. Also, both curves are rational, as they are parameterized by x, and the Riemann–Roch theorem implies that the cubic curve must have a singularity, which must be at infinity, as all its points in the affine space are regular. Thus many of the properties of algebraic varieties, including birational equivalence and all the topological properties, depend on the behavior "at infinity" and so it is natural to study the varieties in projective space. Furthermore, the introduction of projective techniques made many theorems in algebraic geometry simpler and sharper: for example, Bézout's theorem on the number of intersection points between two varieties can be stated in its sharpest form only in projective space. For these reasons, projective space plays a fundamental role in algebraic geometry. Nowadays, the projective space Pn of dimension n is usually defined as the set of the lines passing through a point, considered as the origin, in the affine space of dimension n + 1, or equivalently as the set of the vector lines in a vector space of dimension n + 1. When a coordinate system has been chosen in the space of dimension n + 1, all the points of a line have the same set of coordinates, up to the multiplication by an element of k. This defines the homogeneous coordinates of a point of Pn as a sequence of n + 1 elements of the base field k, defined up to the multiplication by a nonzero element of k (the same for the whole sequence). A polynomial in n + 1 variables vanishes at all points of a line passing through the origin if and only if it is homogeneous. In this case, one says that the polynomial vanishes at the corresponding point of Pn. This allows us to define a projective algebraic set in Pn as the set V(f1, ..., fk), where a finite set of homogeneous polynomials {f1, ..., fk} vanishes. As for affine algebraic sets, there is a bijection between the projective algebraic sets and the reduced homogeneous ideals which define them. The projective varieties are the projective algebraic sets whose defining ideal is prime. In other words, a projective variety is a projective algebraic set whose homogeneous coordinate ring is an integral domain, the projective coordinate ring being defined as the quotient of the graded ring of the polynomials in n + 1 variables by the homogeneous (reduced) ideal defining the variety. Every projective algebraic set may be uniquely decomposed into a finite union of projective varieties. The only regular functions which may be defined properly on a projective variety are the constant functions.
Thus this notion is not used in projective situations. On the other hand, the field of the rational functions or function field is a useful notion, which, similarly to the affine case, is defined as the set of the quotients of two homogeneous elements of the same degree in the homogeneous coordinate ring. == Real algebraic geometry == Real algebraic geometry is the study of real algebraic varieties. The fact that the field of the real numbers is an ordered field cannot be ignored in such a study. For example, the curve of equation x 2 + y 2 − a = 0 {\displaystyle x^{2}+y^{2}-a=0} is a circle if a > 0 {\displaystyle a>0} , but has no real points if a < 0 {\displaystyle a<0} . Real algebraic geometry also investigates, more broadly, semi-algebraic sets, which are the solutions of systems of polynomial inequalities. For example, neither branch of the hyperbola of equation x y − 1 = 0 {\displaystyle xy-1=0} is a real algebraic variety. However, the branch in the first quadrant is a semi-algebraic set defined by x y − 1 = 0 {\displaystyle xy-1=0} and x > 0 {\displaystyle x>0} . One open problem in real algebraic geometry is the following part of Hilbert's sixteenth problem: Decide which respective positions are possible for the ovals of a nonsingular plane curve of degree 8. == Computational algebraic geometry == One may date the origin of computational algebraic geometry to meeting EUROSAM'79 (International Symposium on Symbolic and Algebraic Manipulation) held at Marseille, France, in June 1979. At this meeting, Dennis S. Arnon showed that George E. Collins's Cylindrical algebraic decomposition (CAD) allows the computation of the topology of semi-algebraic sets, Bruno Buchberger presented Gröbner bases and his algorithm to compute them, and Daniel Lazard presented a new algorithm for solving systems of homogeneous polynomial equations with a computational complexity which is essentially polynomial in the expected number of solutions and thus singly exponential in the number of the unknowns. This algorithm is strongly related to Macaulay's multivariate resultant. Since then, most results in this area are related to one or several of these items either by using or improving one of these algorithms, or by finding algorithms whose complexity is singly exponential in the number of the variables. A body of mathematical theory complementary to symbolic methods called numerical algebraic geometry has been developed over the last several decades. The main computational method is homotopy continuation. This supports, for example, a model of floating-point computation for solving problems of algebraic geometry. === Gröbner basis === A Gröbner basis is a system of generators of a polynomial ideal whose computation allows the deduction of many properties of the affine algebraic variety defined by the ideal. Given an ideal I defining an algebraic set V: V is empty (over an algebraically closed extension of the basis field) if and only if the Gröbner basis for any monomial ordering is reduced to {1}. By means of the Hilbert series, one may compute the dimension and the degree of V from any Gröbner basis of I for a monomial ordering refining the total degree. If the dimension of V is 0, then one may compute the points (finite in number) of V from any Gröbner basis of I (see Systems of polynomial equations). A Gröbner basis computation allows one to remove from V all irreducible components which are contained in a given hypersurface. 
A Gröbner basis computation allows one to compute the Zariski closure of the image of V by the projection on the first k coordinates, and the subset of the image where the projection is not proper. More generally, Gröbner basis computations allow one to compute the Zariski closure of the image and the critical points of a rational function of V into another affine variety. Gröbner basis computations do not allow one to compute directly the primary decomposition of I or the prime ideals defining the irreducible components of V, but most algorithms for this involve Gröbner basis computation. The algorithms which are not based on Gröbner bases use regular chains but may need Gröbner bases in some exceptional situations. Gröbner bases are deemed to be difficult to compute. In fact they may contain, in the worst case, polynomials whose degree is doubly exponential in the number of variables and a number of polynomials which is also doubly exponential. However, this is only a worst-case complexity, and the complexity bound of Lazard's algorithm of 1979 may frequently apply. Faugère's F5 algorithm realizes this complexity, as it may be viewed as an improvement of Lazard's 1979 algorithm. It follows that the best implementations allow one to compute almost routinely with algebraic sets of degree more than 100. This means that, presently, the difficulty of computing a Gröbner basis is strongly related to the intrinsic difficulty of the problem. === Cylindrical algebraic decomposition (CAD) === CAD is an algorithm which was introduced in 1973 by G. Collins to implement with an acceptable complexity the Tarski–Seidenberg theorem on quantifier elimination over the real numbers. This theorem concerns the formulas of the first-order logic whose atomic formulas are polynomial equalities or inequalities between polynomials with real coefficients. These formulas are thus the formulas which may be constructed from the atomic formulas by the logical operators and (∧), or (∨), not (¬), for all (∀), and there exists (∃). Tarski's theorem asserts that, from such a formula, one may compute an equivalent formula without quantifiers (∀, ∃). The complexity of CAD is doubly exponential in the number of variables. This means that CAD allows one, in theory, to solve every problem of real algebraic geometry which may be expressed by such a formula; this is almost every problem concerning explicitly given varieties and semi-algebraic sets. While Gröbner basis computation has doubly exponential complexity only in rare cases, CAD almost always has this high complexity. This implies that, unless most polynomials appearing in the input are linear, it may not solve problems with more than four variables. Since 1973, most of the research on this subject has been devoted either to improving CAD or to finding alternative algorithms in special cases of general interest. As an example of the state of the art, there are efficient algorithms to find at least one point in every connected component of a semi-algebraic set, and thus to test whether a semi-algebraic set is empty. On the other hand, CAD is still, in practice, the best algorithm to count the number of connected components. === Asymptotic complexity vs. practical efficiency === The basic general algorithms of computational geometry have a doubly exponential worst-case complexity.
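Before making that precise, here is a small SymPy illustration (added here; the example systems are arbitrary choices) of two of the Gröbner-basis applications listed earlier: testing emptiness and solving a zero-dimensional system.

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Emptiness over an algebraically closed field: V(S) is empty iff the
# reduced Groebner basis is {1} (for any monomial ordering).
print(groebner([x**2 + 1, x + y, y**2], x, y).exprs)   # [1]: no common zeros

# Zero-dimensional case: a lex Groebner basis is triangular, so the
# finitely many points can be read off by back-substitution.
print(groebner([x**2 + y**2 - 1, x - y], x, y, order='lex').exprs)
# expected: [x - y, 2*y**2 - 1], i.e. y = +-1/sqrt(2) and x = y
```

Such small examples run instantly, but they say nothing about the worst-case behavior just mentioned.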
=== Asymptotic complexity vs. practical efficiency === The basic general algorithms of computational geometry have a doubly exponential worst-case complexity. More precisely, if d is the maximal degree of the input polynomials and n the number of variables, then their complexity is at most d 2 c n {\displaystyle d^{2^{cn}}} for some constant c, and, for some inputs, the complexity is at least d 2 c ′ n {\displaystyle d^{2^{c'n}}} for another constant c′. During the last 20 years of the 20th century, various algorithms were introduced to solve specific subproblems with a better complexity. Most of these algorithms have a complexity d O ( n 2 ) {\displaystyle d^{O(n^{2})}} . Among those algorithms which solve a subproblem of the problems solved by Gröbner bases, one may cite testing whether an affine variety is empty and solving nonhomogeneous polynomial systems which have a finite number of solutions. Such algorithms are rarely implemented because, on most entries, Faugère's F4 and F5 algorithms have a better practical efficiency and probably a similar or better complexity (probably because the evaluation of the complexity of Gröbner basis algorithms on a particular class of entries is a difficult task which has been done only in a few special cases). The main algorithms of real algebraic geometry which solve a problem solved by CAD are related to the topology of semi-algebraic sets. One may cite counting the number of connected components, testing whether two points are in the same component, and computing a Whitney stratification of a real algebraic set. They have a complexity of d O ( n 2 ) {\displaystyle d^{O(n^{2})}} , but the constant hidden in the O notation is so high that using them to solve any nontrivial problem effectively solved by CAD is impossible even if one could use all the existing computing power in the world. Therefore, these algorithms have never been implemented, and it is an active research area to search for algorithms that combine good asymptotic complexity with good practical efficiency. == Abstract modern viewpoint == The modern approaches to algebraic geometry redefine and effectively extend the range of basic objects in various levels of generality to schemes, formal schemes, ind-schemes, algebraic spaces, algebraic stacks, and so on. The need for this arises already from the useful ideas within the theory of varieties; for example, the formal functions of Zariski can be accommodated by introducing nilpotent elements in structure rings; considering spaces of loops and arcs, constructing quotients by group actions, and developing formal grounds for natural intersection theory and deformation theory lead to some of the further extensions. Most remarkably, in the early 1960s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra, which are locally ringed spaces forming a category antiequivalent to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k and the category of finitely generated reduced k-algebras. The gluing is along the Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set-theoretic sense is then replaced by a Grothendieck topology.
Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology, and the two flat Grothendieck topologies: fppf and fpqc; nowadays some other examples have become prominent, including the Nisnevich topology. Sheaves can furthermore be generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks. Sometimes, other algebraic sites replace the category of affine schemes. For example, Nikolai Durov has introduced commutative algebraic monads as a generalization of local objects in a generalized algebraic geometry. Versions of a tropical geometry, of an absolute geometry over a field of one element, and an algebraic analogue of Arakelov's geometry were realized in this setup. Another formal generalization is possible to universal algebraic geometry, in which every variety of algebras has its own algebraic geometry. The term variety of algebras should not be confused with algebraic variety. The language of schemes, stacks, and generalizations has proved to be a valuable way of dealing with geometric concepts and has become a cornerstone of modern algebraic geometry. Algebraic stacks can be further generalized, and for many practical questions like deformation theory and intersection theory, this is often the most natural approach. One can extend the Grothendieck site of affine schemes to a higher-categorical site of derived affine schemes by replacing the commutative rings with an infinity category of differential graded commutative algebras, or of simplicial commutative rings, or a similar category with an appropriate variant of a Grothendieck topology. One can also replace presheaves of sets with presheaves of simplicial sets (or of infinity groupoids). Then, in the presence of an appropriate homotopic machinery, one can develop a notion of derived stack as such a presheaf on the infinity category of derived affine schemes which satisfies certain infinite-categorical versions of the sheaf axioms (and, to be algebraic, inductively a sequence of representability conditions). Quillen model categories, Segal categories, and quasicategories are some of the most-used tools to formalize this, yielding derived algebraic geometry, introduced by the school of Carlos Simpson, including André Hirschowitz, Bertrand Toën, Gabriele Vezzosi, Michel Vaquié, and others, and developed further by Jacob Lurie, Bertrand Toën, and Gabriele Vezzosi. Another (noncommutative) version of derived algebraic geometry, using A-infinity categories, has been developed from the early 1990s by Maxim Kontsevich and followers. == History == === Before the 16th century === Some of the roots of algebraic geometry date back to the work of the Hellenistic Greeks from the 5th century BC. The Delian problem, for instance, was to construct a length x so that the cube of side x contained the same volume as the rectangular box a²b for given sides a and b. Menaechmus (c. 350 BC) considered the problem geometrically by intersecting the pair of plane conics ay = x² and xy = ab. In the 3rd century BC, Archimedes and Apollonius systematically studied additional problems on conic sections using coordinates.
Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter, and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding ordinates using geometric methods involving parabolas and other curves. Medieval mathematicians, including Omar Khayyám, Leonardo of Pisa, Gersonides and Nicole Oresme, solved certain cubic and quadratic equations by purely algebraic means and then interpreted the results geometrically. The Persian mathematician Omar Khayyám (born 1048 AD) believed that there was a relationship between arithmetic, algebra, and geometry. This was criticized by Jeffrey Oaks, who claims that the study of curves by means of equations originated with Descartes in the seventeenth century. === Renaissance === Such techniques of applying geometrical constructions to algebraic problems were also adopted by a number of Renaissance mathematicians such as Gerolamo Cardano and Niccolò Fontana "Tartaglia" in their studies of the cubic equation. The geometrical approach to construction problems, rather than the algebraic one, was favored by most 16th- and 17th-century mathematicians, notably Blaise Pascal, who argued against the use of algebraic and analytical methods in geometry. The French mathematicians Franciscus Vieta and later René Descartes and Pierre de Fermat revolutionized the conventional way of thinking about construction problems through the introduction of coordinate geometry. They were interested primarily in the properties of algebraic curves, such as those defined by Diophantine equations (in the case of Fermat), and the algebraic reformulation of the classical Greek works on conics and cubics (in the case of Descartes). During the same period, Blaise Pascal and Gérard Desargues approached geometry from a different perspective, developing the synthetic notions of projective geometry. Pascal and Desargues also studied curves, but from the purely geometrical point of view: the analog of the Greek ruler-and-compass construction. Ultimately, the analytic geometry of Descartes and Fermat won out, for it supplied the 18th-century mathematicians with the concrete quantitative tools needed to study physical problems using the new calculus of Newton and Leibniz. However, by the end of the 18th century, most of the algebraic character of coordinate geometry was subsumed by the calculus of infinitesimals of Lagrange and Euler. === 19th and early 20th century === It took the simultaneous 19th-century developments of non-Euclidean geometry and Abelian integrals to bring the old algebraic ideas back into the geometrical fold. The first of these new developments was seized upon by Edmond Laguerre and Arthur Cayley, who attempted to ascertain the generalized metric properties of projective space. Cayley introduced the idea of homogeneous polynomial forms, and more specifically quadratic forms, on projective space. Subsequently, Felix Klein studied projective geometry (along with other types of geometry) from the viewpoint that the geometry on a space is encoded in a certain class of transformations on the space.
By the end of the 19th century, projective geometers were studying more general kinds of transformations on figures in projective space. Rather than the projective linear transformations which were normally regarded as giving the fundamental Kleinian geometry on projective space, they concerned themselves also with the higher-degree birational transformations. This weaker notion of congruence would later lead members of the 20th-century Italian school of algebraic geometry to classify algebraic surfaces up to birational isomorphism. The second early-19th-century development, that of Abelian integrals, would lead Bernhard Riemann to the development of Riemann surfaces. In the same period began the algebraization of algebraic geometry through commutative algebra. The prominent results in this direction are Hilbert's basis theorem and Hilbert's Nullstellensatz, which are the basis of the connection between algebraic geometry and commutative algebra, and Macaulay's multivariate resultant, which is the basis of elimination theory. Probably because of the size of the computation implied by multivariate resultants, elimination theory was forgotten during the middle of the 20th century until it was renewed by singularity theory and computational algebraic geometry. === 20th century === B. L. van der Waerden, Oscar Zariski, and André Weil developed a foundation for algebraic geometry based on contemporary commutative algebra, including valuation theory and the theory of ideals. One of the goals was to give a rigorous framework for proving the results of the Italian school of algebraic geometry. In particular, this school systematically used the notion of generic point without any precise definition, which was first given by these authors during the 1930s. In the 1950s and 1960s, Jean-Pierre Serre and Alexander Grothendieck recast the foundations making use of sheaf theory. Later, from about 1960, and largely led by Grothendieck, the idea of schemes was worked out, in conjunction with a very refined apparatus of homological techniques. After a decade of rapid development the field stabilized in the 1970s, and new applications were made, both to number theory and to more classical geometric questions on algebraic varieties, singularities, moduli, and formal moduli. An important class of varieties, not easily understood directly from their defining equations, consists of the abelian varieties, which are the projective varieties whose points form an abelian group. The prototypical examples are the elliptic curves, which have a rich theory. They were instrumental in the proof of Fermat's Last Theorem and are also used in elliptic-curve cryptography. In parallel with the abstract trend of algebraic geometry, which is concerned with general statements about varieties, methods for effective computation with concretely given varieties have also been developed, which led to the new area of computational algebraic geometry. One of the founding methods of this area is the theory of Gröbner bases, introduced by Bruno Buchberger in 1965. Another founding method, more specially devoted to real algebraic geometry, is the cylindrical algebraic decomposition, introduced by George E. Collins in 1973. See also: derived algebraic geometry. == Analytic geometry == An analytic variety over the field of real or complex numbers is defined locally as the set of common solutions of several equations involving analytic functions.
It is analogous to the concept of algebraic variety in that it carries a structure sheaf of analytic functions instead of regular functions. Any complex manifold is a complex analytic variety. Since analytic varieties may have singular points, not all complex analytic varieties are manifolds. Over a non-archimedean field, analytic geometry is studied via rigid analytic spaces. Modern analytic geometry over the field of complex numbers is closely related to complex algebraic geometry, as has been shown by Jean-Pierre Serre in his paper GAGA, whose name is an abbreviation of the French for algebraic geometry and analytic geometry (Géométrie algébrique et géométrie analytique). The GAGA results over the field of complex numbers may be extended to rigid analytic spaces over non-archimedean fields. == Applications == Algebraic geometry now finds applications in statistics, control theory, robotics, error-correcting codes, phylogenetics and geometric modelling. There are also connections to string theory, game theory, graph matchings, solitons and integer programming. == See also == == Notes == == References == === Sources === Kline, M. (1972). Mathematical Thought from Ancient to Modern Times. Vol. 1. Oxford University Press. ISBN 0195061357. == Further reading == Some classic textbooks that predate schemes van der Waerden, B. L. (1945). Einfuehrung in die algebraische Geometrie. Dover. Hodge, W. V. D.; Pedoe, Daniel (1994). Methods of Algebraic Geometry Volume 1. Cambridge University Press. ISBN 978-0-521-46900-5. Zbl 0796.14001. Hodge, W. V. D.; Pedoe, Daniel (1994). Methods of Algebraic Geometry Volume 2. Cambridge University Press. ISBN 978-0-521-46901-2. Zbl 0796.14002. Hodge, W. V. D.; Pedoe, Daniel (1994). Methods of Algebraic Geometry Volume 3. Cambridge University Press. ISBN 978-0-521-46775-9. Zbl 0796.14003. Modern textbooks that do not use the language of schemes Garrity, Thomas; et al. (2013). Algebraic Geometry A Problem Solving Approach. American Mathematical Society. ISBN 978-0-821-89396-8. Griffiths, Phillip; Harris, Joe (1994). Principles of Algebraic Geometry. Wiley-Interscience. ISBN 978-0-471-05059-9. Zbl 0836.14001. Harris, Joe (1995). Algebraic Geometry A First Course. Springer-Verlag. ISBN 978-0-387-97716-4. Zbl 0779.14001. Mumford, David (1995). Algebraic Geometry I Complex Projective Varieties (2nd ed.). Springer-Verlag. ISBN 978-3-540-58657-9. Zbl 0821.14001. Reid, Miles (1988). Undergraduate Algebraic Geometry. Cambridge University Press. ISBN 978-0-521-35662-6. Zbl 0701.14001. Shafarevich, Igor (1995). Basic Algebraic Geometry I Varieties in Projective Space (2nd ed.). Springer-Verlag. ISBN 978-0-387-54812-8. Zbl 0797.14001. Textbooks in computational algebraic geometry Cox, David A.; Little, John; O'Shea, Donal (1997). Ideals, Varieties, and Algorithms (2nd ed.). Springer-Verlag. ISBN 978-0-387-94680-1. Zbl 0861.13012. Schenck, Hal (2003). Computational Algebraic Geometry. Cambridge University Press. Basu, Saugata; Pollack, Richard; Roy, Marie-Françoise (2006). Algorithms in real algebraic geometry. Springer-Verlag. González-Vega, Laureano; Recio, Tómas (1996). Algorithms in algebraic geometry and applications. Birkhäuser. Elkadi, Mohamed; Mourrain, Bernard; Piene, Ragni, eds. (2006). Algebraic geometry and geometric modeling. Springer-Verlag. Dickenstein, Alicia; Schreyer, Frank-Olaf; Sommese, Andrew J., eds. (2008). Algorithms in Algebraic Geometry. The IMA Volumes in Mathematics and its Applications. Vol. 146. Springer. ISBN 9780387751559. LCCN 2007938208. Cox, David A.; Little, John B.; O'Shea, Donal (1998).
Using algebraic geometry. Springer-Verlag. Caviness, Bob F.; Johnson, Jeremy R. (1998). Quantifier elimination and cylindrical algebraic decomposition. Springer-Verlag. Textbooks and references for schemes Eisenbud, David; Harris, Joe (1998). The Geometry of Schemes. Springer-Verlag. ISBN 978-0-387-98637-1. Zbl 0960.14002. Grothendieck, Alexander (1960). Éléments de géométrie algébrique. Publications Mathématiques de l'IHÉS. Zbl 0118.36206. Grothendieck, Alexander; Dieudonné, Jean Alexandre (1971). Éléments de géométrie algébrique. Vol. 1 (2nd ed.). Springer-Verlag. ISBN 978-3-540-05113-8. Zbl 0203.23301. Hartshorne, Robin (1977). Algebraic Geometry. Springer-Verlag. ISBN 978-0-387-90244-9. Zbl 0367.14001. Mumford, David (1999). The Red Book of Varieties and Schemes Includes the Michigan Lectures on Curves and Their Jacobians (2nd ed.). Springer-Verlag. ISBN 978-3-540-63293-1. Zbl 0945.14001. Shafarevich, Igor (1995). Basic Algebraic Geometry II Schemes and complex manifolds (2nd ed.). Springer-Verlag. ISBN 978-3-540-57554-2. Zbl 0797.14002. == External links == Foundations of Algebraic Geometry by Ravi Vakil, 808 pp. Algebraic geometry entry on PlanetMath English translation of the van der Waerden textbook Dieudonné, Jean (March 3, 1972). "The History of Algebraic Geometry". Talk at the Department of Mathematics of the University of Wisconsin–Milwaukee. Archived from the original on 2021-11-22 – via YouTube. The Stacks Project, an open source textbook and reference work on algebraic stacks and algebraic geometry Adjectives Project, an online database for searching examples of schemes and morphisms based on their properties
Wikipedia/Computational_algebraic_geometry
In computer programming, a function (also procedure, method, subroutine, routine, or subprogram) is a callable unit of software logic that has a well-defined interface and behavior and can be invoked multiple times. Callable units provide a powerful programming tool. The primary purpose is to allow for the decomposition of a large and/or complicated problem into chunks that have relatively low cognitive load and to assign the chunks meaningful names (unless they are anonymous). Judicious application can reduce the cost of developing and maintaining software, while increasing its quality and reliability. Callable units are present at multiple levels of abstraction in the programming environment. For example, a programmer may write a function in source code that is compiled to machine code that implements similar semantics. There is a callable unit in the source code and an associated one in the machine code, but they are different kinds of callable units – with different implications and features. == Terminology == Some programming languages, such as COBOL and BASIC, make a distinction between functions that return a value (typically called "functions") and those that do not (typically called "subprogram", "subroutine", or "procedure"). Other programming languages, such as C, C++, and Rust, only use the term "function" irrespective of whether they return a value or not. Some object-oriented languages, such as Java and C#, refer to functions inside classes as "methods". == History == The idea of a callable unit was initially conceived by John Mauchly and Kathleen Antonelli during their work on ENIAC and recorded in a January 1947 Harvard symposium on "Preparation of Problems for EDVAC-type Machines." Maurice Wilkes, David Wheeler, and Stanley Gill are generally credited with the formal invention of this concept, which they termed a closed sub-routine, contrasted with an open subroutine or macro. However, Alan Turing had discussed subroutines in a paper of 1945 on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack. The idea of a subroutine was worked out after computing machines had already existed for some time. The arithmetic and conditional jump instructions were planned ahead of time and have changed relatively little, but the special instructions used for procedure calls have changed greatly over the years. The earliest computers and microprocessors, such as the Manchester Baby and the RCA 1802, did not have a single subroutine call instruction. Subroutines could be implemented, but they required programmers to use the call sequence—a series of instructions—at each call site. Subroutines were implemented in Konrad Zuse's Z4 in 1945. In 1945, Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines. In January 1947 John Mauchly presented general notes at 'A Symposium of Large Scale Digital Calculating Machinery' under the joint sponsorship of Harvard University and the Bureau of Ordnance, United States Navy. Here he discusses serial and parallel operation suggesting ...the structure of the machine need not be complicated one bit. 
It is possible, since all the logical characteristics essential to this procedure are available, to evolve a coding instruction for placing the subroutines in the memory at places known to the machine, and in such a way that they may easily be called into use. In other words, one can designate subroutine A as division and subroutine B as complex multiplication and subroutine C as the evaluation of a standard error of a sequence of numbers, and so on through the list of subroutines needed for a particular problem. ... All these subroutines will then be stored in the machine, and all one needs to do is make a brief reference to them by number, as they are indicated in the coding. Kay McNulty had worked closely with John Mauchly on the ENIAC team and developed an idea for subroutines for the ENIAC computer she was programming during World War II. She and the other ENIAC programmers used the subroutines to help calculate missile trajectories. Goldstine and von Neumann wrote a paper dated 16 August 1948 discussing the use of subroutines. Some very early computers and microprocessors, such as the IBM 1620, the Intel 4004 and Intel 8008, and the PIC microcontrollers, have a single-instruction subroutine call that uses a dedicated hardware stack to store return addresses—such hardware supports only a few levels of subroutine nesting, but can support recursive subroutines. Machines before the mid-1960s—such as the UNIVAC I, the PDP-1, and the IBM 1130—typically used a calling convention which saved the instruction counter in the first memory location of the called subroutine. This allowed arbitrarily deep levels of subroutine nesting but did not support recursive subroutines. The IBM System/360 had a subroutine call instruction that placed the saved instruction counter value into a general-purpose register; this can be used to support arbitrarily deep subroutine nesting and recursive subroutines. The Burroughs B5000 (1961) is one of the first computers to store subroutine return data on a stack. The DEC PDP-6 (1964) is one of the first accumulator-based machines to have a subroutine call instruction that saved the return address in a stack addressed by an accumulator or index register. The later PDP-10 (1966), PDP-11 (1970) and VAX-11 (1976) lines followed suit; this feature also supports both arbitrarily deep subroutine nesting and recursive subroutines. === Language support === In the very early assemblers, subroutine support was limited. Subroutines were not explicitly separated from each other or from the main program, and indeed the source code of a subroutine could be interspersed with that of other subprograms. Some assemblers would offer predefined macros to generate the call and return sequences. By the 1960s, assemblers usually had much more sophisticated support for both inline and separately assembled subroutines that could be linked together. One of the first programming languages to support user-written subroutines and functions was FORTRAN II. The IBM FORTRAN II compiler was released in 1958. ALGOL 58 and other early programming languages also supported procedural programming. === Libraries === Even with this cumbersome approach, subroutines proved very useful. They allowed the use of the same code in many different programs. Memory was a very scarce resource on early computers, and subroutines allowed significant savings in the size of programs. Many early computers loaded the program instructions into memory from a punched paper tape.
Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program (or "mainline"); and the same subroutine tape could then be used by many different programs. A similar approach was used in computers that loaded program instructions from punched cards. The name subroutine library originally meant a library, in the literal sense, which kept indexed collections of tapes or decks of cards for collective use. === Return by indirect jump === To remove the need for self-modifying code, computer designers eventually provided an indirect jump instruction, whose operand, instead of being the return address itself, was the location of a variable or processor register containing the return address. On those computers, instead of modifying the function's return jump, the calling program would store the return address in a variable so that when the function completed, it would execute an indirect jump that would direct execution to the location given by the predefined variable. === Jump to subroutine === Another advance was the jump to subroutine instruction, which combined the saving of the return address with the calling jump, thereby minimizing overhead significantly. In the IBM System/360, for example, the branch instructions BAL or BALR, designed for procedure calling, would save the return address in a processor register specified in the instruction, by convention register 14. To return, the subroutine had only to execute an indirect branch instruction (BR) through that register. If the subroutine needed that register for some other purpose (such as calling another subroutine), it would save the register's contents to a private memory location or a register stack. In systems such as the HP 2100, the JSB instruction would perform a similar task, except that the return address was stored in the memory location that was the target of the branch. Execution of the procedure would actually begin at the next memory location. In the HP 2100 assembly language, one would write, for example, the following to call a subroutine called MYSUB from the main program, with the subroutine coded as in the second listing below.
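(These listings are a reconstruction based on the explanation that follows; the original listings are not preserved in this text.) In the main program:

           ...
           JSB MYSUB    (Calls subroutine MYSUB.)
     BB    ...          (Will return here after MYSUB is done.)

The subroutine:

     MYSUB NOP          (Storage for MYSUB's return address.)
     AA    ...          (Start of the subroutine body.)
           ...
           JMP MYSUB, I (Returns to the calling program.)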
The JSB instruction placed the address of the NEXT instruction (namely, BB) into the location specified as its operand (namely, MYSUB), and then branched to the NEXT location after that (namely, AA = MYSUB + 1). The subroutine could then return to the main program by executing the indirect jump JMP MYSUB, I which branched to the location stored at location MYSUB. Compilers for Fortran and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls. Incidentally, a similar method was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the return address. Since circular references are not allowed for natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC. === Call stack === Most modern implementations of a function call use a call stack, a special case of the stack data structure, to implement function calls and returns. Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address. The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in reduced instruction set computing (RISC) and very long instruction word (VLIW) architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose. The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter. Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information (like return addresses and loop counters) and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs while the latter was more directly accessible. When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, those that have been called but have not yet returned). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs that include thousands of functions, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management. However, another advantage of the call stack method is that it allows recursive function calls, since each nested call to the same procedure gets a separate instance of its private data. In a multi-threaded environment, there is generally more than one stack. An environment that fully supports coroutines or lazy evaluation may use data structures other than stacks to store their activation records. ==== Delayed stacking ==== One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer (and, in some architectures, checking for stack overflow), and accessing the local variables and parameters by frame-relative addresses, instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both. This overhead is most obvious and objectionable in leaf procedures or leaf functions, which return without making any procedure calls themselves. To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump.
If the procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers (such as the return address) that will be needed after Q returns. == Features == In general, a callable unit is a list of instructions that, starting at the first instruction, executes sequentially except as directed via its internal logic. It can be invoked (called) many times during the execution of a program. Execution continues at the next instruction after the call instruction when it returns control. == Implementations == The features of implementations of callable units evolved over time and vary by context. This section describes features of the various common implementations. === General characteristics === Most modern programming languages provide features to define and call functions, including syntax to do the following:
Delimit the implementation of a function from the rest of the program
Assign an identifier, a name, to a function
Define formal parameters with a name and data type for each
Assign a data type to the return value, if any
Specify a return value in the function body
Call a function
Provide actual parameters that correspond to a called function's formal parameters
Return control to the caller at the point of call
Consume the return value in the caller
Dispose of the values returned by a call
Provide a private naming scope for variables
Identify variables outside the function that are accessible within it
Propagate an exceptional condition out of a function and handle it in the calling context
Package functions into a container such as a module, library, object, or class
=== Naming === Some languages, such as Pascal, Fortran, Ada and many dialects of BASIC, use a different name for a callable unit that returns a value (function or subprogram) vs. one that does not (subroutine or procedure). Other languages, such as C, C++, C# and Lisp, use only one name for a callable unit, function. The C-family languages use the keyword void to indicate no return value. === Call syntax === If declared to return a value, a call can be embedded in an expression in order to consume the return value. For example, a square root callable unit might be called like y = sqrt(x). A callable unit that does not return a value is called as a stand-alone statement like print("hello"). This syntax can also be used for a callable unit that returns a value, but the return value will be ignored. Some older languages require a keyword for calls that do not consume a return value, like CALL print("hello"). === Parameters === Most implementations, especially in modern languages, support parameters, which the callable declares as formal parameters. A caller passes actual parameters, a.k.a. arguments, to match. Different programming languages provide different conventions for passing arguments. === Return value === In some languages, such as BASIC, a callable has different syntax (i.e. keyword) for a callable that returns a value vs. one that does not. In other languages, the syntax is the same regardless. In some of these languages an extra keyword is used to declare no return value; for example void in C, C++ and C#. In some languages, such as Python, the difference is whether the body contains a return statement with a value, and a particular callable may return with or without a value based on control flow.
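As a minimal Python illustration of this last point (the function name here is invented for the example):

    def first_positive(values):
        for v in values:
            if v > 0:
                return v  # returns with a value on this path
        # falling off the end returns without a value (None in Python)

    print(first_positive([-2, 3, 5]))  # prints 3
    print(first_positive([-2, -3]))   # prints None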
=== Side effects === In many contexts, a callable may have side effect behavior such as modifying passed or global data, reading from or writing to a peripheral device, accessing a file, halting the program or the machine, or temporarily pausing program execution. Side effects are considered undesirable by Robert C. Martin, who is known for promoting design principles. Martin argues that side effects can result in temporal coupling or order dependencies. In strictly functional programming languages such as Haskell, a function can have no side effects, which means it cannot change the state of the program. Functions always return the same result for the same input. Such languages typically only support functions that return a value, since there is no value in a function that has neither return value nor side effect. === Local variables === Most contexts support local variables – memory owned by a callable to hold intermediate values. These variables are typically stored in the call's activation record on the call stack along with other information such as the return address. === Nested call – recursion === If supported by the language, a callable may call itself, causing its execution to suspend while another nested execution of the same callable executes. Recursion is a useful means to simplify some complex algorithms and break down complex problems. Recursive languages provide a new copy of local variables on each call. If the programmer desires the recursive callable to use the same variables instead of using locals, they typically declare them in a shared context such as static or global. Languages going back to ALGOL, PL/I and C, as well as modern languages, almost invariably use a call stack, usually supported by the instruction set, to provide an activation record for each call. That way, a nested call can modify its local variables without affecting any of the suspended calls' variables. Recursion allows direct implementation of functionality defined by mathematical induction and recursive divide and conquer algorithms. Here is an example of a recursive function in C/C++ to find Fibonacci numbers:
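(The original listing is not preserved in this text; the following is the standard naive doubly recursive version.)

    unsigned int fib(unsigned int n) {
        if (n <= 1)
            return n;                    // base cases: fib(0) = 0, fib(1) = 1
        return fib(n - 1) + fib(n - 2);  // each call suspends while its nested calls run
    }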
Early languages like Fortran did not initially support recursion because only one set of variables and return address were allocated for each callable. Early computer instruction sets made storing return addresses and variables on a stack difficult. Machines with index registers or general-purpose registers, e.g., CDC 6000 series, PDP-6, GE 635, System/360, UNIVAC 1100 series, could use one of those registers as a stack pointer. === Nested scope === Some languages, e.g., Ada, Pascal, PL/I, Python, support declaring and defining a function inside another, e.g., inside a function body, such that the name of the inner function is visible only within the body of the outer. === Reentrancy === If a callable can be executed properly even when another execution of the same callable is already in progress, that callable is said to be reentrant. A reentrant callable is also useful in multi-threaded situations since multiple threads can call the same callable without fear of interfering with each other. In the IBM CICS transaction processing system, quasi-reentrant was a slightly less restrictive, but similar, requirement for application programs that were shared by many threads. === Overloading === Some languages support overloading – allowing multiple callables with the same name in the same scope, operating on different types of input. Consider the square root function applied to real number, complex number and matrix input. The algorithm for each type of input is different, and the return value may have a different type. By writing three separate callables with the same name, i.e. sqrt, the resulting code may be easier to write and to maintain, since each one has a name that is relatively easy to understand and to remember, instead of longer and more complicated names like sqrt_real, sqrt_complex, sqrt_matrix. Overloading is supported in many languages that support strong typing. Often the compiler selects the overload to call based on the types of the input arguments, or it fails if the input arguments do not select an overload. Older and weakly-typed languages generally do not support overloading. Here is an example of overloading in C++, two functions Area that accept different types:
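(A reconstructed sketch; the function name Area follows the text, while the particular parameter choices are illustrative.)

    // Area of a square, given the length of a side as an integer:
    int Area(int side) {
        return side * side;
    }

    // Area of a circle, given the radius as a floating-point number:
    double Area(double radius) {
        return 3.14159265358979 * radius * radius;
    }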
PL/I has the GENERIC attribute to define a generic name for a set of entry references called with different types of arguments. Example: DECLARE gen_name GENERIC( name WHEN(FIXED BINARY), flame WHEN(FLOAT), pathname OTHERWISE); Multiple argument definitions may be specified for each entry. A call to "gen_name" will result in a call to "name" when the argument is FIXED BINARY, to "flame" when it is FLOAT, etc. If the argument matches none of the choices, "pathname" will be called. === Closure === A closure is a callable plus values of some of its variables captured from the environment in which it was created. Closures were a notable feature of the Lisp programming language, introduced by John McCarthy. Depending on the implementation, closures can serve as a mechanism for side-effects. === Exception reporting === Besides its happy path behavior, a callable may need to inform the caller about an exceptional condition that occurred during its execution. Most modern languages support exceptions, which allow for exceptional control flow that pops the call stack until an exception handler is found to handle the condition. Languages that do not support exceptions can use the return value to indicate success or failure of a call. Another approach is to use a well-known location like a global variable for success indication. A callable writes the value and the caller reads it after a call. In the IBM System/360, where a return code was expected from a subroutine, the return value was often designed to be a multiple of 4—so that it could be used as a direct branch table index into a branch table often located immediately after the call instruction to avoid extra conditional tests, further improving efficiency. In System/360 assembly language, the caller would, for example, follow the call with a branch into such a table of unconditional branch instructions, indexed by the returned code. === Call overhead === A call has runtime overhead, which may include but is not limited to:
Allocating and reclaiming call stack storage
Saving and restoring processor registers
Copying input variables
Copying values after the call into the caller's context
Automatic testing of the return code
Handling of exceptions
Dispatching, such as for a virtual method in an object-oriented language
Various techniques are employed to minimize the runtime cost of calls. ==== Compiler optimization ==== Some optimizations for minimizing call overhead may seem straightforward, but cannot be used if the callable has side effects. For example, in the expression (f(x)-1)/(f(x)+1), the function f cannot be called only once with its value used two times, since the two calls may return different results. Moreover, in the few languages which define the order of evaluation of the division operator's operands, the value of x must be fetched again before the second call, since the first call may have changed it. Determining whether a callable has a side effect is difficult – indeed, undecidable by virtue of Rice's theorem. So, while this optimization is safe in a purely functional programming language, a compiler for a language not limited to functional programming typically assumes the worst case: every callable may have side effects. ==== Inlining ==== Inlining eliminates calls for particular callables. The compiler replaces each call with the compiled code of the callable. Not only does this avoid the call overhead, but it also allows the compiler to optimize the code of the caller more effectively by taking into account the context and arguments at that call. Inlining, however, usually increases the compiled code size, except when the callable is called only once or its body is very short, like one line. === Sharing === Callables can be defined within a program, or separately in a library that can be used by multiple programs. === Inter-operability === A compiler translates call and return statements into machine instructions according to a well-defined calling convention. For code compiled by the same or a compatible compiler, functions can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue. === Built-in functions === A built-in function, or builtin function, or intrinsic function, is a function for which the compiler generates code at compile time or provides in a way other than for other functions. A built-in function does not need to be defined like other functions since it is built in to the programming language. == Programming == === Trade-offs === ==== Advantages ==== Advantages of breaking a program into functions include:
Decomposing a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures
Reducing duplicate code within a program
Enabling reuse of code across multiple programs
Dividing a large programming task among various programmers or various stages of a project
Hiding implementation details from users of the function
Improving readability of code by replacing a block of code with a function call where a descriptive function name serves to describe the block of code; this makes the calling code concise and readable even if the function is not meant to be reused
Improving traceability (i.e. most languages offer ways to obtain the call trace which includes the names of the involved functions and perhaps even more information such as file names and line numbers); by not decomposing the code into functions, debugging would be severely impaired
==== Disadvantages ==== Compared to using in-line code, invoking a function imposes some computational overhead in the call mechanism. A function typically requires standard housekeeping code – both at the entry to, and exit from, the function (function prologue and epilogue – usually saving general purpose registers and return address as a minimum). === Conventions === Many programming conventions have been developed regarding callables. With respect to naming, many developers name a callable with a phrase starting with a verb when it does a certain task, with an adjective when it makes an inquiry, and with a noun when it is used to substitute variables. Some programmers suggest that a callable should perform exactly one task, and if it performs more than one task, it should be split up into multiple callables. They argue that callables are key components in software maintenance, and their roles in the program must remain distinct. Proponents of modular programming advocate that each callable should have minimal dependency on the rest of the codebase. For example, the use of global variables is generally deemed unwise, because it adds coupling between all callables that use the global variables. If such coupling is not necessary, they advise refactoring callables to accept passed parameters instead. == Examples == === Early BASIC === Early BASIC variants require each line to have a unique number (a line number) that orders the lines for execution; they provide no separation of the code that is callable, no mechanism for passing arguments or returning a value, and all variables are global. They provide the command GOSUB, where sub is short for sub procedure, subprocedure or subroutine. Control jumps to the specified line number and then continues on the next line on return. The following code repeatedly asks the user to enter a number and reports the square root of the value; lines 100-130 are the callable.
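(A reconstruction consistent with the description above; the exact original listing is not preserved in this text.)

    10 REM MAIN PROGRAM LOOP
    20 GOSUB 100
    30 GOTO 20
    100 REM SUBROUTINE: ASK FOR A NUMBER, REPORT ITS SQUARE ROOT
    110 INPUT "ENTER A NUMBER"; N
    120 PRINT "SQUARE ROOT ="; SQR(N)
    130 RETURN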
=== Small Basic === In Microsoft Small Basic, targeted at the student first learning how to program in a text-based language, a callable unit is called a subroutine. The Sub keyword denotes the start of a subroutine and is followed by a name identifier; subsequent lines are the body, which ends with the EndSub keyword. A subroutine named SayHello, for example, can then be called as SayHello(). === Visual Basic === In later versions of Visual Basic (VB), including the latest product line and VB6, the term procedure is used for the callable unit concept. The keyword Sub is used for a procedure that returns no value and Function for one that returns a value. When used in the context of a class, a procedure is a method. Each parameter has a data type that can be specified; if it is not, it defaults to Object for later versions based on .NET and Variant for VB6. VB supports parameter passing conventions by value and by reference via the keywords ByVal and ByRef, respectively. Unless ByRef is specified, an argument is passed ByVal; therefore, ByVal is rarely explicitly specified. For a simple type like a number, these conventions are relatively clear: passing ByRef allows the procedure to modify the passed variable, whereas passing ByVal does not. For an object, the semantics can confuse programmers, since an object is always treated as a reference: passing an object ByVal copies the reference, not the state of the object, so the called procedure can modify the state of the object via its methods yet cannot modify the object reference of the actual parameter. A Sub procedure, say DoSomething, does not return a value and has to be called stand-alone, like DoSomething. A Function procedure, say GiveMeFive returning the value 5, can have its call be part of an expression, like y = x + GiveMeFive(). A Sub procedure AddTwo with a ByRef parameter has a side effect – it modifies the variable passed by reference – and could be called for a variable v like AddTwo(v); if v is 5 before the call, it will be 7 after. === C and C++ === In C and C++, a callable unit is called a function. A function definition starts with the name of the type of value that it returns, or void to indicate that it does not return a value. This is followed by the function name, formal arguments in parentheses, and body lines in braces. In C++, a function declared in a class (as non-static) is called a member function or method. A function outside of a class can be called a free function to distinguish it from a member function. Four sketches below illustrate the common cases. The first, doSomething, does not return a value and is always called stand-alone, like doSomething(). The second, giveMeFive, returns the integer value 5; the call can be stand-alone or in an expression like y = x + giveMeFive(). The third, addTwo, has a side effect – it adds 2 to the value passed by address – and could be called for a variable v as addTwo(&v), where the ampersand (&) tells the compiler to pass the address of the variable; if v is 5 before the call, it will be 7 after. The fourth requires C++ – it would not compile as C – and has the same behavior as the third, but passes the actual parameter by reference rather than passing its address; a call such as addTwo(v) does not include an ampersand, since the compiler handles passing by reference without special syntax in the call.
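(Reconstructed sketches; the function names follow the descriptions above, and the last one requires C++.)

    void doSomething() {
        // Performs its task; returns no value.
    }

    int giveMeFive() {
        return 5;  // returns the integer value 5
    }

    void addTwo(int *pi) {
        *pi += 2;  // side effect: modifies the caller's variable through its address
    }

    void addTwo(int &i) {  // C++ only: the parameter is passed by reference
        i += 2;            // same behavior, but no ampersand is needed at the call site
    }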
=== PL/I === In PL/I a called procedure may be passed a descriptor providing information about the argument, such as string lengths and array bounds. This allows the procedure to be more general and eliminates the need for the programmer to pass such information. By default PL/I passes arguments by reference. A (trivial) function to change the sign of each element of a two-dimensional array might look like:

    change_sign: procedure(array);
      declare array(*,*) float;
      array = -array;
    end change_sign;

This could be called with various arrays as follows:

    /* first array bounds from -5 to +10 and 3 to 9 */
    declare array1 (-5:10, 3:9) float;
    /* second array bounds from 1 to 16 and 1 to 16 */
    declare array2 (16,16) float;
    call change_sign(array1);
    call change_sign(array2);

=== Python === In Python, the keyword def denotes the start of a function definition. The statements of the function body follow, indented, on subsequent lines and end at the line that is indented the same as the first line, or at the end of the file.
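(A reconstruction matching the description that follows; the original listing is not preserved in this text.)

    def greet(name):
        return "Welcome " + name

    def greet_martin():
        print(greet("Martin"))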
The first function returns greeting text that includes the name passed by the caller. The second function calls the first and is called like greet_martin() to write "Welcome Martin" to the console. === Prolog === In the procedural interpretation of logic programs, logical implications behave as goal-reduction procedures. A rule (or clause) of the form: A :- B which has the logical reading: A if B behaves as a procedure that reduces goals that unify with A to subgoals that are instances of B. Consider, for example, a Prolog program consisting of the fact mother_child(elizabeth, charles) and the rule parent_child(X, Y) :- mother_child(X, Y). Notice that the motherhood function, X = mother(Y), is represented by a relation, as in a relational database. However, relations in Prolog function as callable units. For example, the procedure call ?- parent_child(X, charles) produces the output X = elizabeth. But the same procedure can be called with other input-output patterns; for example, ?- parent_child(elizabeth, Y) asks for the children of elizabeth. == See also ==
Asynchronous procedure call, a subprogram that is called after its parameters are set by other activities
Command–query separation (CQS)
Compound operation
Coroutines, subprograms that call each other as if both were the main programs
Evaluation strategy
Event handler, a subprogram that is called in response to an input event or interrupt
Function (mathematics)
Functional programming
Fused operation
Intrinsic function
Lambda function (computer programming), a function that is not bound to an identifier
Logic programming
Modular programming
Operator overloading
Protected procedure
Transclusion
== References ==
Wikipedia/Function_(computer_science)
Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology (MIT) formed by the 2003 merger of the Laboratory for Computer Science (LCS) and the Artificial Intelligence Laboratory (AI Lab). Housed within the Ray and Maria Stata Center, CSAIL is the largest on-campus laboratory as measured by research scope and membership. It is part of the Schwarzman College of Computing but is also overseen by the MIT Vice President of Research. == Research activities == CSAIL's research activities are organized around a number of semi-autonomous research groups, each of which is headed by one or more professors or research scientists. These groups are divided up into seven general areas of research:
Artificial intelligence
Computational biology
Graphics and vision
Language and learning
Theory of computation
Robotics
Systems (includes computer architecture, databases, distributed systems, networks and networked systems, operating systems, programming methodology, and software engineering, among others)
== History == Computing research at MIT began with Vannevar Bush's research into a differential analyzer and Claude Shannon's electronic Boolean algebra in the 1930s, the wartime MIT Radiation Laboratory, the post-war Project Whirlwind and Research Laboratory of Electronics (RLE), and MIT Lincoln Laboratory's SAGE in the early 1950s. At MIT, research in the field of artificial intelligence began in the late 1950s. === Project MAC === On July 1, 1963, Project MAC (the Project on Mathematics and Computation, later backronymed to Multiple Access Computer, Machine Aided Cognitions, or Man and Computer) was launched with a $2 million grant from the Defense Advanced Research Projects Agency (DARPA). Project MAC's original director was Robert Fano of MIT's Research Laboratory of Electronics (RLE). Fano decided to call MAC a "project" rather than a "laboratory" for reasons of internal MIT politics – if MAC had been called a laboratory, then it would have been more difficult to raid other MIT departments for research staff. The program manager responsible for the DARPA grant was J. C. R. Licklider, who had previously been at MIT conducting research in RLE, and would later succeed Fano as director of Project MAC. Project MAC would become famous for groundbreaking research in operating systems, artificial intelligence, and the theory of computation. Its contemporaries included Project Genie at Berkeley, the Stanford Artificial Intelligence Laboratory, and (somewhat later) University of Southern California's (USC's) Information Sciences Institute. An "AI Group" including Marvin Minsky (the director), John McCarthy (inventor of Lisp), and a talented community of computer programmers was incorporated into Project MAC. They were interested principally in the problems of vision, mechanical motion and manipulation, and language, which they viewed as the keys to more intelligent machines. In the 1960s and 1970s the AI Group developed a time-sharing operating system called Incompatible Timesharing System (ITS) which ran on PDP-6 and later PDP-10 computers. The early Project MAC community included Fano, Minsky, Licklider, Fernando J. Corbató, and a community of computer programmers and enthusiasts among others who drew their inspiration from former colleague John McCarthy. These founders envisioned the creation of a computer utility whose computational power would be as reliable as an electric utility.
To this end, Corbató brought the first computer time-sharing system, the Compatible Time-Sharing System (CTSS), with him from the MIT Computation Center, using the DARPA funding to purchase an IBM 7094 for research use. One of the early focuses of Project MAC would be the development of a successor to CTSS, Multics, which was to be the first high-availability computer system, developed as part of an industry consortium including General Electric and Bell Laboratories.

In 1966, Scientific American featured Project MAC in its September thematic issue devoted to computer science, which was later published in book form. At the time, the system was described as having approximately 100 TTY terminals, mostly on campus but with a few in private homes. Only 30 users could be logged in at the same time. The project enlisted students in various classes to use the terminals simultaneously in problem solving, simulations, and multi-terminal communications as tests for the multi-access computing software being developed.

=== AI Lab and LCS ===
In the late 1960s, Minsky's artificial intelligence group was seeking more space and was unable to get satisfaction from project director Licklider. Minsky found that although Project MAC as a single entity could not get the additional space he wanted, he could split off to form his own laboratory and then be entitled to more office space. As a result, the MIT AI Lab was formed in 1970, and many of Minsky's AI colleagues left Project MAC to join him in the new laboratory, while most of the remaining members went on to form the Laboratory for Computer Science. Talented programmers such as Richard Stallman, who used TECO to develop EMACS, flourished in the AI Lab during this time. Those researchers who did not join the smaller AI Lab formed the Laboratory for Computer Science and continued their research into operating systems, programming languages, distributed systems, and the theory of computation. Two professors, Hal Abelson and Gerald Jay Sussman, chose to remain neutral; their group was referred to variously as Switzerland and Project MAC for the next 30 years.

Among much else, the AI Lab led to the invention of Lisp machines and their attempted commercialization by two companies in the 1980s: Symbolics and Lisp Machines Inc. This divided the AI Lab into "camps", which resulted in the hiring away of many of its talented programmers. The incident inspired Richard Stallman's later work on the GNU Project. "Nobody had envisioned that the AI lab's hacker group would be wiped out, but it was." ... "That is the basis for the free software movement — the experience I had, the life that I've lived at the MIT AI lab — to be working on human knowledge, and not be standing in the way of anybody's further using and further disseminating human knowledge".

=== CSAIL ===
On the fortieth anniversary of Project MAC's establishment, July 1, 2003, LCS was merged with the AI Lab to form the MIT Computer Science and Artificial Intelligence Laboratory, or CSAIL. This merger created the largest laboratory on the MIT campus (over 600 personnel) and was regarded as a reuniting of the diversified elements of Project MAC.

In 2018, CSAIL launched a five-year collaboration program with iFlytek, a company sanctioned the following year for allegedly using its technology for surveillance and human rights abuses in Xinjiang. In October 2019, MIT announced that it would review its partnerships with sanctioned firms such as iFlytek and SenseTime.
In April 2020, the agreement with iFlytek was terminated. CSAIL moved from the School of Engineering to the newly formed Schwarzman College of Computing by February 2020.

== Offices ==
From 1963 to 2004, Project MAC, LCS, the AI Lab, and CSAIL had their offices at 545 Technology Square, taking over more and more floors of the building over the years. In 2004, CSAIL moved to the new Ray and Maria Stata Center, which was built specifically to house it and other departments.

== Outreach activities ==
The IMARA group (from the Swahili word for "power") sponsors a variety of outreach programs that bridge the global digital divide. Its aim is to find and implement long-term, sustainable solutions which will increase the availability of educational technology and resources to domestic and international communities. These projects are run under the aegis of CSAIL and staffed by MIT volunteers who give training and install and donate computer setups in greater Boston, Massachusetts, Kenya, Native American tribal reservations in the American Southwest such as the Navajo Nation, the Middle East, and the Fiji Islands. The CommuniTech project strives to empower under-served communities through sustainable technology and education; it does this through the MIT Used Computer Factory (UCF), which provides refurbished computers to under-served families, and through the Families Accessing Computer Technology (FACT) classes, which train those families to become familiar and comfortable with computer technology.

== Notable researchers ==
(Including members and alumni of CSAIL's predecessor laboratories)
MacArthur Fellows Tim Berners-Lee, Erik Demaine, Dina Katabi, Daniela L. Rus, Regina Barzilay, Peter Shor, Richard Stallman, and Joshua Tenenbaum
Turing Award recipients Leonard M. Adleman, Fernando J. Corbató, Shafi Goldwasser, Butler W. Lampson, John McCarthy, Silvio Micali, Marvin Minsky, Ronald L. Rivest, Adi Shamir, Barbara Liskov, and Michael Stonebraker
IJCAI Computers and Thought Award recipients Terry Winograd, Patrick Winston, David Marr, Gerald Jay Sussman, and Rodney Brooks
Rolf Nevanlinna Prize recipients Madhu Sudan, Peter Shor, and Constantinos Daskalakis
Gödel Prize recipients Shafi Goldwasser (two-time recipient), Silvio Micali, Maurice Herlihy, Charles Rackoff, Johan Håstad, Peter Shor, and Madhu Sudan
Grace Murray Hopper Award recipients Robert Metcalfe, Shafi Goldwasser, Guy L. Steele, Jr., Richard Stallman, and W. Daniel Hillis
Textbook authors Harold Abelson and Gerald Jay Sussman, Richard Stallman, Thomas H. Cormen, Charles E. Leiserson, Patrick Winston, Ronald L. Rivest, Barbara Liskov, John Guttag, Jerome H. Saltzer, Frans Kaashoek, Clifford Stein, and Nancy Lynch
David D. Clark, former chief protocol architect for the Internet; co-author, with Jerome H. Saltzer (also a CSAIL member) and David P. Reed, of the influential paper "End-to-End Arguments in System Design"
Eric Grimson, expert on computer vision and its applications to medicine, appointed Chancellor of MIT in March 2011
Bob Frankston, co-developer of VisiCalc, the first computer spreadsheet
Seymour Papert, inventor of the Logo programming language
Joseph Weizenbaum, creator of the ELIZA computer-simulated therapist

=== Notable alumni ===
Robert Metcalfe, who later invented Ethernet at Xerox PARC and went on to found 3Com
Marc Raibert, who created the robot company Boston Dynamics
Drew Houston, co-founder of Dropbox
Colin Angle and Helen Greiner, who, with previous CSAIL director Rodney Brooks, founded iRobot
Jeremy Wertheimer, who developed the ITA Software flight-search technology used by travel websites like Kayak and Orbitz
Max Krohn, co-founder of OkCupid

== Directors ==
Directors of Project MAC:
Robert Fano, 1963–1968
J. C. R. Licklider, 1968–1971
Edward Fredkin, 1971–1974
Michael Dertouzos, 1974–1975
Directors of the Artificial Intelligence Laboratory:
Marvin Minsky, 1970–1972
Patrick Winston, 1972–1997
Rodney Brooks, 1997–2003
Directors of the Laboratory for Computer Science:
Michael Dertouzos, 1975–2001
Victor Zue, 2001–2003
Directors of CSAIL:
Rodney Brooks, 2003–2007
Victor Zue, 2007–2011
Anant Agarwal, 2011–2012
Daniela L. Rus, 2012–

== CSAIL Alliances ==
CSAIL Alliances is the industry connection arm of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). CSAIL Alliances offers companies programs to connect with the research, faculty, students, and startups of CSAIL by providing organizations with opportunities to learn about the research, engage with students, explore collaborations with researchers, and join research initiatives such as FinTech at CSAIL, MIT Future of Data, and Machine Learning Applications.

== See also ==

== References ==

== Further reading ==
Chiou et al., "A Marriage of Convenience: The Founding of the MIT Artificial Intelligence Laboratory" (PDF) – includes important information on the Incompatible Timesharing System
Weizenbaum. Rebel at Work: a documentary film with and about Joseph Weizenbaum
Garfinkel, Simson (1999). Abelson, Hal (ed.). Architects of the Information Society: Thirty-Five Years of the Laboratory for Computer Science at MIT. Cambridge, Massachusetts: MIT Press. ISBN 0-262-07196-7.

== External links ==
Official website of CSAIL, successor of the AI Lab
Wikipedia/MIT_Computer_Science_and_Artificial_Intelligence_Laboratory
In computational number theory and computational algebra, Pollard's kangaroo algorithm (also Pollard's lambda algorithm, see Naming below) is an algorithm for solving the discrete logarithm problem. The algorithm was introduced in 1978 by the number theorist John M. Pollard, in the same paper as his better-known Pollard's rho algorithm for solving the same problem. Although Pollard described the application of his algorithm to the discrete logarithm problem in the multiplicative group of units modulo a prime p, it is in fact a generic discrete logarithm algorithm – it will work in any finite cyclic group.

== Algorithm ==
Suppose G is a finite cyclic group of order n generated by the element α, and we seek the discrete logarithm x of the element β to the base α. In other words, one seeks x ∈ Z_n such that α^x = β. The lambda algorithm allows one to search for x in some interval [a, …, b] ⊂ Z_n. One may search the entire range of possible logarithms by setting a = 0 and b = n − 1.

1. Choose a set S of positive integers of mean roughly √(b − a) and define a pseudorandom map f : G → S.

2. Choose an integer N and compute a sequence of group elements {x_0, x_1, …, x_N} according to:
x_0 = α^b
x_{i+1} = x_i · α^{f(x_i)} for i = 0, 1, …, N − 1

3. Compute
d = Σ_{i=0}^{N−1} f(x_i).
Observe that:
x_N = x_0 · α^d = α^{b+d}.

4. Begin computing a second sequence of group elements {y_0, y_1, …} according to:
y_0 = β
y_{i+1} = y_i · α^{f(y_i)} for i = 0, 1, 2, …
and a corresponding sequence of integers {d_0, d_1, …} according to:
d_n = Σ_{i=0}^{n−1} f(y_i).
Observe that:
y_i = y_0 · α^{d_i} = β · α^{d_i} for each i.

5. Stop computing terms of {y_i} and {d_i} when either of the following conditions is met:
A) y_j = x_N for some j. If the sequences {x_i} and {y_j} "collide" in this manner, then we have
x_N = y_j ⇒ α^{b+d} = β · α^{d_j} ⇒ β = α^{b+d−d_j} ⇒ x ≡ b + d − d_j (mod n),
and so we are done.
B) d_i > b − a + d. If this occurs, then the algorithm has failed to find x. Subsequent attempts can be made by changing the choice of S and/or f.
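The steps above translate directly into code. Below is a minimal sketch for the multiplicative group modulo a prime p, assuming α generates the full group (so the group order n is p − 1). The choice of S as small powers of two and the constant 4 in the tame kangaroo's jump count are illustrative assumptions, not Pollard's prescriptions:

import math

def kangaroo(alpha, beta, p, a, b):
    # Discrete log of beta to base alpha in Z_p^*, searched in [a, b].
    # Step 1: pseudorandom map f: G -> S, jump sizes averaging
    # on the order of sqrt(b - a).
    m = max(1, math.isqrt(b - a).bit_length())
    S = [2 ** i for i in range(m)]
    f = lambda g: S[g % len(S)]
    # Steps 2-3: tame kangaroo takes N jumps from alpha^b and
    # lands on the trap x_N = alpha^(b + d).
    N = 4 * math.isqrt(b - a) + 1
    x = pow(alpha, b, p)
    d = 0
    for _ in range(N):
        s = f(x)
        x = x * pow(alpha, s, p) % p
        d += s
    # Steps 4-5: wild kangaroo jumps from beta until it hits the
    # trap (condition A) or passes it (condition B).
    y, dy = beta % p, 0
    while dy <= b - a + d:
        if y == x:                       # condition A: collision with x_N
            return (b + d - dy) % (p - 1)  # group order n = p - 1
        s = f(y)
        y = y * pow(alpha, s, p) % p
        dy += s
    return None                          # condition B: failed; vary S or f

For example, kangaroo(3, pow(3, 77, 1019), 1019, 0, 200) recovers 77 when the walks collide; a return value of None means the attempt failed and should be retried with a different S or f.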
== Complexity ==
Pollard gives the time complexity of the algorithm as O(√(b − a)) group operations, using a probabilistic argument based on the assumption that f acts pseudorandomly. Since a and b can be represented using O(log b) bits, this is exponential in the problem size, though still a significant improvement over the trivial brute-force algorithm that takes time O(b − a): for an interval of size 2^40, for example, on the order of 2^20 group operations suffice rather than 2^40. For an example of a subexponential time discrete logarithm algorithm, see the index calculus algorithm.

== Naming ==
The algorithm is well known by two names. The first is "Pollard's kangaroo algorithm". This name is a reference to an analogy used in the paper presenting the algorithm, where the algorithm is explained in terms of using a tame kangaroo to trap a wild kangaroo. Pollard has explained that this analogy was inspired by a "fascinating" article published in the same issue of Scientific American as an exposition of the RSA public key cryptosystem. The article described an experiment in which a kangaroo's "energetic cost of locomotion, measured in terms of oxygen consumption at various speeds, was determined by placing kangaroos on a treadmill".

The second is "Pollard's lambda algorithm". Much like the name of another of Pollard's discrete logarithm algorithms, Pollard's rho algorithm, this name refers to the similarity between a visualisation of the algorithm and the Greek letter lambda (λ). The shorter stroke of the letter lambda corresponds to the sequence {x_i}, since it starts from the position b to the right of x. Accordingly, the longer stroke corresponds to the sequence {y_i}, which "collides with" the first sequence (just as the strokes of a lambda intersect) and then follows it. Pollard has expressed a preference for the name "kangaroo algorithm", as this avoids confusion with some parallel versions of his rho algorithm, which have also been called "lambda algorithms".

== See also ==
Dynkin's card trick
Kruskal count
Rainbow table

== References ==

== Further reading ==
Montenegro, Ravi; Tetali, Prasad V. (2010-11-07) [2009-05-31]. How Long Does it Take to Catch a Wild Kangaroo? (PDF). Proceedings of the forty-first annual ACM symposium on Theory of computing (STOC 2009). pp. 553–560. arXiv:0812.0789. doi:10.1145/1536414.1536490. S2CID 12797847.
Wikipedia/Pollard's_kangaroo_algorithm
In computer algebra, the Faugère F4 algorithm, by Jean-Charles Faugère, computes the Gröbner basis of an ideal of a multivariate polynomial ring. The algorithm uses the same mathematical principles as the Buchberger algorithm, but computes many normal forms in one go by forming a generally sparse matrix and using fast linear algebra to do the reductions in parallel.

The Faugère F5 algorithm first calculates the Gröbner basis of a pair of generator polynomials of the ideal. It then uses this basis to reduce the size of the initial matrices of generators for the next larger basis: if Gprev is an already computed Gröbner basis (f2, …, fm) and we want to compute a Gröbner basis of (f1) + Gprev, then we construct matrices whose rows are m f1 such that m is a monomial not divisible by the leading term of an element of Gprev. This strategy allows the algorithm to apply two new criteria based on what Faugère calls signatures of polynomials. Thanks to these criteria, the algorithm can compute Gröbner bases for a large class of interesting polynomial systems, called regular sequences, without ever simplifying a single polynomial to zero – the most time-consuming operation in algorithms that compute Gröbner bases. It is also very effective for a large number of non-regular sequences.

== Implementations ==
The Faugère F4 algorithm is implemented in:
FGb, Faugère's own implementation, which includes interfaces for using it from C/C++ or Maple;
the Maple computer algebra system, as the option method=fgb of the function Groebner[gbasis];
the Magma computer algebra system;
the SageMath computer algebra system.
Study versions of the Faugère F5 algorithm are implemented in:
the SINGULAR computer algebra system;
the SageMath computer algebra system;
the SymPy Python package.

== Applications ==
The previously intractable "cyclic 10" problem was solved by F5, as were a number of systems related to cryptography; for example HFE and C*.

== References ==
Faugère, J.-C. (June 1999). "A new efficient algorithm for computing Gröbner bases (F4)" (PDF). Journal of Pure and Applied Algebra. 139 (1): 61–88. doi:10.1016/S0022-4049(99)00005-5. ISSN 0022-4049.
Faugère, J.-C. (July 2002). "A new efficient algorithm for computing Gröbner bases without reduction to zero (F5)". Proceedings of the 2002 international symposium on Symbolic and algebraic computation (PDF). ACM Press. pp. 75–83. CiteSeerX 10.1.1.188.651. doi:10.1145/780506.780516. ISBN 978-1-58113-484-1. S2CID 15833106.
Till Stegers. Faugère's F5 Algorithm Revisited. Diplom-Mathematiker thesis, advisor Johannes Buchmann, Technische Universität Darmstadt, September 2005 (revised April 27, 2007). Many references, including links to available implementations.

== External links ==
Faugère's home page (includes pdf reprints of additional papers)
An introduction to the F4 algorithm.
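Since the Implementations section above names SymPy among the study implementations, a brief usage sketch is possible. The polynomial system below (the small "cyclic 3" relative of the cyclic 10 benchmark) is an illustrative choice, and the method name 'f5b' is assumed to select SymPy's F5-based routine, which is worth verifying against the installed version:

from sympy import groebner, symbols

x, y, z = symbols('x y z')
# The "cyclic 3" system, a tiny relative of the cyclic 10 benchmark.
F = [x + y + z, x*y + y*z + z*x, x*y*z - 1]
# Request SymPy's F5-based method; omit method= to use the default.
G = groebner(F, x, y, z, order='lex', method='f5b')
print(list(G))   # the reduced lex Groebner basis of the ideal (F)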
Wikipedia/Faugère_F4_algorithm