Hugo Hadwiger (23 December 1908 in Karlsruhe, Germany – 29 October 1981 in Bern, Switzerland)[1] was a Swiss mathematician, known for his work in geometry, combinatorics, and cryptography. Although born in Karlsruhe, Germany, Hadwiger grew up in Bern, Switzerland.[2] He did his undergraduate studies at the University of Bern, where he majored in mathematics but also studied physics and actuarial science.[2] He continued at Bern for his graduate studies, and received his Ph.D. in 1936 under the supervision of Willy Scherrer.[3] He was for more than forty years a professor of mathematics at Bern.[4]

Hadwiger's theorem in integral geometry classifies the isometry-invariant valuations on compact convex sets in d-dimensional Euclidean space. According to this theorem, any such valuation can be expressed as a linear combination of the intrinsic volumes; for instance, in two dimensions, the intrinsic volumes are the area, the perimeter, and the Euler characteristic.[5]

The Hadwiger–Finsler inequality, proven by Hadwiger with Paul Finsler, is an inequality relating the side lengths and area of any triangle in the Euclidean plane.[6] It generalizes Weitzenböck's inequality and was generalized in turn by Pedoe's inequality. In the same 1937 paper in which Hadwiger and Finsler published this inequality, they also published the Finsler–Hadwiger theorem on a square derived from two other squares that share a vertex.

Hadwiger's name is also associated with several important unsolved problems in mathematics. Hadwiger proved a theorem characterizing eutactic stars, systems of points in Euclidean space formed by orthogonal projection of higher-dimensional cross polytopes.
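For reference, the standard statement of the Hadwiger–Finsler inequality mentioned above: for a triangle with side lengths $a$, $b$, $c$ and area $T$,

```latex
a^2 + b^2 + c^2 \;\ge\; 4\sqrt{3}\,T + (a - b)^2 + (b - c)^2 + (c - a)^2 .
```

Dropping the nonnegative squared-difference terms recovers Weitzenböck's inequality, $a^2 + b^2 + c^2 \ge 4\sqrt{3}\,T$.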
He found a higher-dimensional generalization of the space-filling Hill tetrahedra.[15] And his 1957 book Vorlesungen über Inhalt, Oberfläche und Isoperimetrie was foundational for the theory of Minkowski functionals, used in mathematical morphology.

Hadwiger was one of the principal developers of a Swiss rotor machine for encrypting military communications, known as NEMA. The Swiss, fearing that the Germans and Allies could read messages transmitted on their Enigma cipher machines, enhanced the system by using ten rotors instead of five. The system was used by the Swiss army and air force between 1947 and 1992.[16]

Asteroid 2151 Hadwiger, discovered in 1977 by Paul Wild, is named after Hadwiger.[4]

The first article in the "Research Problems" section of the American Mathematical Monthly was dedicated by Victor Klee to Hadwiger, on the occasion of his 60th birthday, in honor of Hadwiger's work editing a column on unsolved problems in the journal Elemente der Mathematik.[2]
https://en.wikipedia.org/wiki/Hugo_Hadwiger
Mathematical morphology (MM) is a theory and technique for the analysis and processing of geometrical structures, based on set theory, lattice theory, topology, and random functions. MM is most commonly applied to digital images, but it can be employed as well on graphs, surface meshes, solids, and many other spatial structures.

Topological and geometrical continuous-space concepts such as size, shape, convexity, connectivity, and geodesic distance were introduced by MM on both continuous and discrete spaces. MM is also the foundation of morphological image processing, which consists of a set of operators that transform images according to the above characterizations. The basic morphological operators are erosion, dilation, opening and closing.

MM was originally developed for binary images, and was later extended to grayscale functions and images. The subsequent generalization to complete lattices is widely accepted today as MM's theoretical foundation.

Mathematical morphology was developed in 1964 by the collaborative work of Georges Matheron and Jean Serra, at the École des Mines de Paris, France. Matheron supervised the PhD thesis of Serra, devoted to the quantification of mineral characteristics from thin cross sections, and this work resulted in a novel practical approach, as well as theoretical advancements in integral geometry and topology. In 1968, the Centre de Morphologie Mathématique was founded by the École des Mines de Paris in Fontainebleau, France, led by Matheron and Serra.

During the rest of the 1960s and most of the 1970s, MM dealt essentially with binary images, treated as sets, and generated a large number of binary operators and techniques: hit-or-miss transform, dilation, erosion, opening, closing, granulometry, thinning, skeletonization, ultimate erosion, conditional bisector, and others. A random approach was also developed, based on novel image models. Most of the work in that period was developed in Fontainebleau.
From the mid-1970s to mid-1980s, MM was generalized to grayscale functions and images as well. Besides extending the main concepts (such as dilation, erosion, etc.) to functions, this generalization yielded new operators, such as morphological gradients, the top-hat transform and the watershed (MM's main segmentation approach).

In the 1980s and 1990s, MM gained wider recognition, as research centers in several countries began to adopt and investigate the method. MM started to be applied to a large number of imaging problems and applications, especially in the field of non-linear filtering of noisy images.

In 1986, Serra further generalized MM, this time to a theoretical framework based on complete lattices. This generalization brought flexibility to the theory, enabling its application to a much larger number of structures, including color images, video, graphs, meshes, etc. At the same time, Matheron and Serra also formulated a theory for morphological filtering, based on the new lattice framework.

The 1990s and 2000s also saw further theoretical advancements, including the concepts of connections and levelings.

In 1993, the first International Symposium on Mathematical Morphology (ISMM) took place in Barcelona, Spain. Since then, ISMMs have been organized every 2–3 years: Fontainebleau, France (1994); Atlanta, USA (1996); Amsterdam, Netherlands (1998); Palo Alto, CA, USA (2000); Sydney, Australia (2002); Paris, France (2005); Rio de Janeiro, Brazil (2007); Groningen, Netherlands (2009); Intra (Verbania), Italy (2011); Uppsala, Sweden (2013); Reykjavík, Iceland (2015); Fontainebleau, France (2017); and Saarbrücken, Germany (2019).[1]

In binary morphology, an image is viewed as a subset of a Euclidean space $\mathbb{R}^d$ or the integer grid $\mathbb{Z}^d$, for some dimension d. The basic idea in binary morphology is to probe an image with a simple, pre-defined shape, drawing conclusions on how this shape fits or misses the shapes in the image.
This simple "probe" is called the structuring element, and is itself a binary image (i.e., a subset of the space or grid). Typical examples of widely used structuring elements (denoted by B) include a disc of a given radius, a square, and a cross, each centered at the origin.

The basic operations are shift-invariant (translation-invariant) operators strongly related to Minkowski addition.

Let E be a Euclidean space or an integer grid, and A a binary image in E. The erosion of the binary image A by the structuring element B is defined by

$A \ominus B = \{ z \in E \mid B_z \subseteq A \},$

where $B_z$ is the translation of B by the vector z, i.e., $B_z = \{ b + z \mid b \in B \}$, $\forall z \in E$. When the structuring element B has a center (e.g., B is a disk or a square), and this center is located on the origin of E, then the erosion of A by B can be understood as the locus of points reached by the center of B when B moves inside A. For example, the erosion of a square of side 10, centered at the origin, by a disc of radius 2, also centered at the origin, is a square of side 6 centered at the origin.

The erosion of A by B is also given by the expression $A \ominus B = \bigcap_{b \in B} A_{-b}$.

Example application: Assume we have received a fax of a dark photocopy. Everything looks as if it was written with a pen that is bleeding. Erosion allows thicker lines to become thin and detects the hole inside the letter "o".

The dilation of A by the structuring element B is defined by

$A \oplus B = \bigcup_{b \in B} A_b.$

The dilation is commutative, and is also given by $A \oplus B = B \oplus A = \bigcup_{a \in A} B_a$. If B has a center on the origin, as before, then the dilation of A by B can be understood as the locus of the points covered by B when the center of B moves inside A. In the above example, the dilation of the square of side 10 by the disk of radius 2 is a square of side 14, with rounded corners, centered at the origin. The radius of the rounded corners is 2.
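The set formulas for erosion and dilation translate almost directly into code. Below is a minimal illustrative sketch (not a library API; the names `erode` and `dilate` are ours), representing a binary image as a set of integer grid points and reproducing the square-and-disc example from the text:

```python
def dilate(A, B):
    """A ⊕ B = union of pairwise sums {a + b : a in A, b in B}."""
    return {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}

def erode(A, B):
    """A ⊖ B = intersection over b in B of A translated by -b."""
    return set.intersection(
        *({(a[0] - b[0], a[1] - b[1]) for a in A} for b in B))

# A discrete "square of side 10" (points -5..5) and a "disc of radius 2".
square = {(x, y) for x in range(-5, 6) for y in range(-5, 6)}
disk = {(x, y) for x in range(-2, 3) for y in range(-2, 3) if x * x + y * y <= 4}

# Erosion shrinks the square by the disc radius on each side: points -3..3,
# the discrete analogue of the side-6 square described in the text.
assert erode(square, disk) == {(x, y) for x in range(-3, 4) for y in range(-3, 4)}
```

The dilated square correspondingly extends to coordinates up to 7 on each axis, with the far corners (such as (7, 7)) missing, matching the rounded-corner description.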
The dilation can also be obtained by $A \oplus B = \{ z \in E \mid (B^s)_z \cap A \neq \varnothing \}$, where $B^s$ denotes the symmetric of B, that is, $B^s = \{ x \in E \mid -x \in B \}$.

Example application: dilation is the dual operation of erosion. Figures that are very lightly drawn become thick when dilated. The easiest way to describe it is to imagine the same fax/text written with a thicker pen.

The opening of A by B is obtained by the erosion of A by B, followed by dilation of the resulting image by B:

$A \circ B = (A \ominus B) \oplus B.$

The opening is also given by $A \circ B = \bigcup_{B_x \subseteq A} B_x$, which means that it is the locus of translations of the structuring element B inside the image A. In the case of the square of side 10, and a disc of radius 2 as the structuring element, the opening is a square of side 10 with rounded corners, where the corner radius is 2.

Example application: Assume someone has written a note on non-soaking paper and the writing looks as if it is growing tiny hairy roots all over. Opening essentially removes the outer tiny "hairline" leaks and restores the text. The side effect is that it rounds things off; the sharp edges start to disappear.

The closing of A by B is obtained by the dilation of A by B, followed by erosion of the resulting structure by B:

$A \bullet B = (A \oplus B) \ominus B.$

The closing can also be obtained by $A \bullet B = (A^c \circ B^s)^c$, where $X^c$ denotes the complement of X relative to E (that is, $X^c = \{ x \in E \mid x \notin X \}$). The above means that the closing is the complement of the locus of translations of the symmetric of the structuring element outside the image A.
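The compositions defining opening and closing can be sketched the same way. Again this is a set-based illustration under our own names, not a library API; the integer grid stands in for the Euclidean plane:

```python
def dilate(A, B):
    """A ⊕ B: union of pairwise sums."""
    return {(a[0] + b[0], a[1] + b[1]) for a in A for b in B}

def erode(A, B):
    """A ⊖ B: intersection over b in B of A translated by -b."""
    return set.intersection(
        *({(a[0] - b[0], a[1] - b[1]) for a in A} for b in B))

def opening(A, B):
    """A ∘ B = (A ⊖ B) ⊕ B: erosion, then dilation."""
    return dilate(erode(A, B), B)

def closing(A, B):
    """A • B = (A ⊕ B) ⊖ B: dilation, then erosion."""
    return erode(dilate(A, B), B)

square = {(x, y) for x in range(-5, 6) for y in range(-5, 6)}
disk = {(x, y) for x in range(-2, 3) for y in range(-2, 3) if x * x + y * y <= 4}

opened = opening(square, disk)
assert (5, 5) not in opened and (5, 0) in opened  # corners rounded off
assert closing(square, disk) == square            # a convex set is unchanged
```

The assertions mirror the text: opening rounds off the square's corners, while closing a convex set (which has no concavities to fill) returns it unchanged.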
Here are some properties of the basic binary morphological operators (dilation, erosion, opening and closing): all of them are translation invariant and increasing; dilation is commutative and associative; erosion and dilation are dual with respect to complementation; the opening is anti-extensive and the closing is extensive; and both opening and closing are idempotent.

In grayscale morphology, images are functions mapping a Euclidean space or grid E into $\mathbb{R} \cup \{\infty, -\infty\}$, where $\mathbb{R}$ is the set of reals, $\infty$ is an element larger than any real number, and $-\infty$ is an element smaller than any real number.

Grayscale structuring elements are also functions of the same format, called "structuring functions". Denoting an image by f(x), the structuring function by b(x) and the support of b by B, the grayscale dilation of f by b is given by

$(f \oplus b)(x) = \sup_{y \in E} \,[\, f(y) + b(x - y) \,],$

where "sup" denotes the supremum. Similarly, the erosion of f by b is given by

$(f \ominus b)(x) = \inf_{y \in E} \,[\, f(y) - b(y - x) \,],$

where "inf" denotes the infimum. Just like in binary morphology, the opening and closing are given respectively by

$f \circ b = (f \ominus b) \oplus b \quad \text{and} \quad f \bullet b = (f \oplus b) \ominus b.$

It is common to use flat structuring elements in morphological applications. Flat structuring functions are functions b(x) in the form

$b(x) = \begin{cases} 0, & x \in B, \\ -\infty, & \text{otherwise}, \end{cases}$

where $B \subseteq E$. In this case, the dilation and erosion are greatly simplified, and given respectively by

$(f \oplus b)(x) = \sup_{z \in B^s} f(x + z) \quad \text{and} \quad (f \ominus b)(x) = \inf_{z \in B} f(x + z).$

In the bounded, discrete case (E is a grid and B is bounded), the supremum and infimum operators can be replaced by the maximum and minimum. Thus, dilation and erosion are particular cases of order statistics filters, with dilation returning the maximum value within a moving window (the symmetric of the structuring function support B), and erosion returning the minimum value within the moving window B.

In the case of a flat structuring element, the morphological operators depend only on the relative ordering of pixel values, regardless of their numerical values, and are therefore especially suited to the processing of binary images and grayscale images whose light transfer function is not known.

By combining these operators one can obtain algorithms for many image processing tasks, such as feature detection, image segmentation, image sharpening, image filtering, and classification.
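In the bounded, discrete, flat case just described, dilation and erosion are exactly moving-window maximum and minimum filters. A minimal 1-D sketch (our own helper names; the window is clipped at the signal boundary, which is one common convention):

```python
def flat_erosion(f, B):
    """(f ⊖ b)(x) = min over z in B of f(x + z), for a flat function on B."""
    n = len(f)
    return [min(f[x + z] for z in B if 0 <= x + z < n) for x in range(n)]

def flat_dilation(f, B):
    """(f ⊕ b)(x) = max over z in the symmetric of B of f(x + z),
    i.e. the max of f(x - z) for z in B."""
    n = len(f)
    return [max(f[x - z] for z in B if 0 <= x - z < n) for x in range(n)]

signal = [0, 0, 5, 0, 0]
B = [-1, 0, 1]                                       # symmetric width-3 window
assert flat_dilation(signal, B) == [0, 5, 5, 5, 0]   # the spike spreads
assert flat_erosion(signal, B) == [0, 0, 0, 0, 0]    # the spike is removed
```

The example shows the order-statistics view: dilation propagates bright peaks across the window, erosion suppresses them.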
Along this line one should also look into continuous morphology.[2]

Complete lattices are partially ordered sets where every subset has an infimum and a supremum. In particular, a complete lattice contains a least element and a greatest element (also denoted the "universe").

Let $(L, \leq)$ be a complete lattice, with infimum and supremum symbolized by $\wedge$ and $\vee$, respectively. Its universe and least element are symbolized by U and $\emptyset$, respectively. Moreover, let $\{X_i\}$ be a collection of elements from L.

A dilation is any operator $\delta \colon L \to L$ that distributes over the supremum and preserves the least element, i.e.:

$\delta\left(\bigvee_i X_i\right) = \bigvee_i \delta(X_i), \qquad \delta(\emptyset) = \emptyset.$

An erosion is any operator $\varepsilon \colon L \to L$ that distributes over the infimum and preserves the universe, i.e.:

$\varepsilon\left(\bigwedge_i X_i\right) = \bigwedge_i \varepsilon(X_i), \qquad \varepsilon(U) = U.$

Dilations and erosions form Galois connections. That is, for every dilation $\delta$ there is one and only one erosion $\varepsilon$ that satisfies

$\delta(X) \leq Y \iff X \leq \varepsilon(Y)$

for all $X, Y \in L$. Similarly, for every erosion there is one and only one dilation satisfying the above connection. Furthermore, if two operators satisfy the connection, then $\delta$ must be a dilation, and $\varepsilon$ an erosion.

Pairs of erosions and dilations satisfying the above connection are called "adjunctions", and the erosion is said to be the adjoint erosion of the dilation, and vice versa.

For every adjunction $(\varepsilon, \delta)$, the morphological opening $\gamma \colon L \to L$ and morphological closing $\phi \colon L \to L$ are defined as follows:

$\gamma = \delta \varepsilon \quad \text{and} \quad \phi = \varepsilon \delta.$

The morphological opening and closing are particular cases of algebraic opening (or simply opening) and algebraic closing (or simply closing). Algebraic openings are operators in L that are idempotent, increasing, and anti-extensive.
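On a finite power-set lattice the adjunction property can be checked exhaustively. The sketch below is our own toy construction (a universe of integers mod 7, so translation needs no boundary handling); it verifies $\delta(X) \leq Y \iff X \leq \varepsilon(Y)$ for every pair of subsets:

```python
from itertools import chain, combinations

E = range(7)   # toy universe: integers mod 7
B = {0, 1}     # structuring element

def dilate(X):
    """δ(X) = {x + b mod 7}: distributes over unions and sends ∅ to ∅."""
    return {(x + b) % 7 for x in X for b in B}

def erode(Y):
    """ε(Y) = {z : z + b mod 7 in Y for all b in B}: the adjoint erosion."""
    return {z for z in E if all((z + b) % 7 in Y for b in B)}

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Galois connection: δ(X) ⊆ Y  ⇔  X ⊆ ε(Y), checked over all 2^7 × 2^7 pairs.
assert all((dilate(set(X)) <= set(Y)) == (set(X) <= erode(set(Y)))
           for X in subsets(E) for Y in subsets(E))
```

The same loop can be reused to confirm derived facts, e.g. that the opening $\gamma = \delta\varepsilon$ built from this adjunction is idempotent.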
Algebraic closings are operators in L that are idempotent, increasing, and extensive.

Binary morphology is a particular case of lattice morphology, where L is the power set of E (Euclidean space or grid), that is, L is the set of all subsets of E, and $\leq$ is set inclusion. In this case, the infimum is set intersection, and the supremum is set union.

Similarly, grayscale morphology is another particular case, where L is the set of functions mapping E into $\mathbb{R} \cup \{\infty, -\infty\}$, and $\leq$, $\vee$, and $\wedge$ are the point-wise order, supremum, and infimum, respectively. That is, if f and g are functions in L, then $f \leq g$ if and only if $f(x) \leq g(x)$ for all $x \in E$; the infimum $f \wedge g$ is given by $(f \wedge g)(x) = f(x) \wedge g(x)$; and the supremum $f \vee g$ is given by $(f \vee g)(x) = f(x) \vee g(x)$.
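The pointwise lattice structure on grayscale images is easy to see concretely. A small sketch with two discrete "images" represented as lists:

```python
# Pointwise lattice operations on grayscale images (functions on a grid):
# (f ∨ g)(x) = max(f(x), g(x)) and (f ∧ g)(x) = min(f(x), g(x)).
f = [3, 1, 4, 1, 5]
g = [2, 7, 1, 8, 2]

sup = [max(a, b) for a, b in zip(f, g)]    # pointwise supremum f ∨ g
inf = [min(a, b) for a, b in zip(f, g)]    # pointwise infimum  f ∧ g

# f ∧ g ≤ f in the pointwise order, as the lattice axioms require.
assert all(a <= b for a, b in zip(inf, f))
```

This is exactly the order used above: an image is "below" another when it is darker at every pixel.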
https://en.wikipedia.org/wiki/Morphological_image_processing
In mathematics, a topological vector space (also called a linear topological space and commonly abbreviated TVS or t.v.s.) is one of the basic structures investigated in functional analysis. A topological vector space is a vector space that is also a topological space with the property that the vector space operations (vector addition and scalar multiplication) are also continuous functions. Such a topology is called a vector topology, and every topological vector space has a uniform topological structure, allowing a notion of uniform convergence and completeness. Some authors also require that the space is a Hausdorff space (although this article does not). One of the most widely studied categories of TVSs are locally convex topological vector spaces. This article focuses on TVSs that are not necessarily locally convex. Other well-known examples of TVSs include Banach spaces, Hilbert spaces and Sobolev spaces.

Many topological vector spaces are spaces of functions, or linear operators acting on topological vector spaces, and the topology is often defined so as to capture a particular notion of convergence of sequences of functions.

In this article, the scalar field of a topological vector space will be assumed to be either the complex numbers $\mathbb{C}$ or the real numbers $\mathbb{R}$, unless clearly stated otherwise.

Every normed vector space has a natural topological structure: the norm induces a metric and the metric induces a topology. This is a topological vector space because addition and scalar multiplication are jointly continuous with respect to the norm topology. Thus all Banach spaces and Hilbert spaces are examples of topological vector spaces.

There are topological vector spaces whose topology is not induced by a norm, but which are still of interest in analysis. Examples of such spaces are spaces of holomorphic functions on an open domain, spaces of infinitely differentiable functions, the Schwartz spaces, and spaces of test functions and the spaces of distributions on them.[1] These are all examples of Montel spaces.
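For the norm-induced topology described above, the continuity of both vector space operations follows from two standard estimates (a sketch of the usual argument, writing $\|\cdot\|$ for the norm):

```latex
\|(x + y) - (x_0 + y_0)\| \le \|x - x_0\| + \|y - y_0\|,
\qquad
\|s x - s_0 x_0\| \le |s|\,\|x - x_0\| + |s - s_0|\,\|x_0\|.
```

The first bound makes addition jointly continuous; the second makes scalar multiplication continuous at each point $(s_0, x_0)$.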
An infinite-dimensional Montel space is never normable. The existence of a norm for a given topological vector space is characterized by Kolmogorov's normability criterion. A topological field is a topological vector space over each of its subfields.

A topological vector space (TVS) X is a vector space over a topological field $\mathbb{K}$ (most often the real or complex numbers with their standard topologies) that is endowed with a topology such that vector addition $+ \colon X \times X \to X$ and scalar multiplication $\cdot \colon \mathbb{K} \times X \to X$ are continuous functions (where the domains of these functions are endowed with product topologies). Such a topology is called a vector topology or a TVS topology on X.

Every topological vector space is also a commutative topological group under addition.

Hausdorff assumption

Many authors (for example, Walter Rudin), but not this page, require the topology on X to be T1; it then follows that the space is Hausdorff, and even Tychonoff. A topological vector space is said to be separated if it is Hausdorff; importantly, "separated" does not mean separable. The topological and linear algebraic structures can be tied together even more closely with additional assumptions, the most common of which are listed below.

Category and morphisms

The category of topological vector spaces over a given topological field $\mathbb{K}$ is commonly denoted $\mathrm{TVS}_{\mathbb{K}}$ or $\mathrm{TVect}_{\mathbb{K}}$. The objects are the topological vector spaces over $\mathbb{K}$ and the morphisms are the continuous $\mathbb{K}$-linear maps from one object to another.
A topological vector space homomorphism (abbreviated TVS homomorphism), also called a topological homomorphism,[2][3] is a continuous linear map $u \colon X \to Y$ between topological vector spaces (TVSs) such that the induced map $u \colon X \to \operatorname{Im} u$ is an open mapping when $\operatorname{Im} u := u(X)$, the range or image of u, is given the subspace topology induced by Y.

A topological vector space embedding (abbreviated TVS embedding), also called a topological monomorphism, is an injective topological homomorphism. Equivalently, a TVS embedding is a linear map that is also a topological embedding.[2]

A topological vector space isomorphism (abbreviated TVS isomorphism), also called a topological vector isomorphism[4] or an isomorphism in the category of TVSs, is a bijective linear homeomorphism. Equivalently, it is a surjective TVS embedding.[2]

Many properties of TVSs that are studied, such as local convexity, metrizability, completeness, and normability, are invariant under TVS isomorphisms.

A necessary condition for a vector topology

A collection $\mathcal{N}$ of subsets of a vector space is called additive[5] if for every $N \in \mathcal{N}$, there exists some $U \in \mathcal{N}$ such that $U + U \subseteq N$.

Characterization of continuity of addition at 0[5] — If $(X, +)$ is a group (as all vector spaces are), $\tau$ is a topology on X, and $X \times X$ is endowed with the product topology, then the addition map $X \times X \to X$ (defined by $(x, y) \mapsto x + y$) is continuous at the origin of $X \times X$ if and only if the set of neighborhoods of the origin in $(X, \tau)$ is additive. This statement remains true if the word "neighborhood" is replaced by "open neighborhood."
All of the above conditions are consequently necessary for a topology to form a vector topology. Since every vector topology is translation invariant (which means that for all $x_0 \in X$, the map $X \to X$ defined by $x \mapsto x_0 + x$ is a homeomorphism), to define a vector topology it suffices to define a neighborhood basis (or subbasis) for it at the origin.

Theorem[6] (Neighborhood filter of the origin) — Suppose that X is a real or complex vector space. If $\mathcal{B}$ is a non-empty additive collection of balanced and absorbing subsets of X, then $\mathcal{B}$ is a neighborhood base at 0 for a vector topology on X. That is, the assumptions are that $\mathcal{B}$ is a filter base that satisfies the following conditions:

1. Every $B \in \mathcal{B}$ is a balanced and absorbing set.
2. $\mathcal{B}$ is additive: for every $B \in \mathcal{B}$ there exists some $U \in \mathcal{B}$ such that $U + U \subseteq B$.

If $\mathcal{B}$ satisfies the above two conditions but is not a filter base, then it will form a neighborhood subbasis at 0 (rather than a neighborhood basis) for a vector topology on X.

In general, the set of all balanced and absorbing subsets of a vector space does not satisfy the conditions of this theorem and does not form a neighborhood basis at the origin for any vector topology.[5]

Let X be a vector space and let $U_\bullet = (U_i)_{i=1}^{\infty}$ be a sequence of subsets of X. Each set in the sequence $U_\bullet$ is called a knot of $U_\bullet$ and, for every index i, $U_i$ is called the i-th knot of $U_\bullet$. The set $U_1$ is called the beginning of $U_\bullet$. The sequence $U_\bullet$ is said to be summative if $U_{i+1} + U_{i+1} \subseteq U_i$ for every index i, and is called a string if it is summative and every knot is a balanced and absorbing set.[7][8][9]

If U is an absorbing disk in a vector space X, then the sequence defined by $U_i := 2^{1-i} U$ forms a string beginning with $U_1 = U$. This is called the natural string of U.[7] Moreover, if a vector space X has countable dimension, then every string contains an absolutely convex string.

Summative sequences of sets have the particularly nice property that they define non-negative continuous real-valued subadditive functions. These functions can then be used to prove many of the basic properties of topological vector spaces.

Theorem ($\mathbb{R}$-valued function induced by a string) — Let $U_\bullet = (U_i)_{i=0}^{\infty}$ be a collection of subsets of a vector space such that $0 \in U_i$ and $U_{i+1} + U_{i+1} \subseteq U_i$ for all $i \geq 0$. For all $u \in U_0$, let

$\mathbb{S}(u) := \left\{ n_\bullet = (n_1, \ldots, n_k) : k \geq 1,\ n_i \geq 0 \text{ for all } i,\ \text{and } u \in U_{n_1} + \cdots + U_{n_k} \right\}.$

Define $f \colon X \to [0, 1]$ by $f(x) = 1$ if $x \notin U_0$ and otherwise

$f(x) := \inf \left\{ 2^{-n_1} + \cdots + 2^{-n_k} : n_\bullet = (n_1, \ldots, n_k) \in \mathbb{S}(x) \right\}.$

Then f is subadditive (meaning $f(x + y) \leq f(x) + f(y)$ for all $x, y \in X$) and $f = 0$ on $\bigcap_{i \geq 0} U_i$; so in particular, $f(0) = 0$. If all $U_i$ are symmetric sets then $f(-x) = f(x)$, and if all $U_i$ are balanced then $f(sx) \leq f(x)$ for all scalars s such that $|s| \leq 1$ and all $x \in X$. If X is a topological vector space and
if all $U_i$ are neighborhoods of the origin, then f is continuous; if in addition X is Hausdorff and $U_\bullet$ forms a basis of balanced neighborhoods of the origin in X, then $d(x, y) := f(x - y)$ is a metric defining the vector topology on X.

A proof of the above theorem is given in the article on metrizable topological vector spaces.

If $U_\bullet = (U_i)_{i \in \mathbb{N}}$ and $V_\bullet = (V_i)_{i \in \mathbb{N}}$ are two collections of subsets of a vector space X and if s is a scalar, then by definition:[7] $U_\bullet + V_\bullet := (U_i + V_i)_{i \in \mathbb{N}}$, $sU_\bullet := (sU_i)_{i \in \mathbb{N}}$, and $U_\bullet \subseteq V_\bullet$ if and only if $U_i \subseteq V_i$ for every index i.

If $\mathbb{S}$ is a collection of sequences of subsets of X, then $\mathbb{S}$ is said to be directed (downwards) under inclusion, or simply directed downward, if $\mathbb{S}$ is not empty and for all $U_\bullet, V_\bullet \in \mathbb{S}$ there exists some $W_\bullet \in \mathbb{S}$ such that $W_\bullet \subseteq U_\bullet$ and $W_\bullet \subseteq V_\bullet$ (said differently, if and only if $\mathbb{S}$ is a prefilter with respect to the containment $\subseteq$ defined above).

Notation: Let $\operatorname{Knots} \mathbb{S} := \bigcup_{U_\bullet \in \mathbb{S}} \operatorname{Knots} U_\bullet$ be the set of all knots of all strings in $\mathbb{S}$.

Defining vector topologies using collections of strings is particularly useful for defining classes of TVSs that are not necessarily locally convex.
Theorem[7] (Topology induced by strings) — If $(X, \tau)$ is a topological vector space, then there exists a set $\mathbb{S}$[proof 1] of neighborhood strings in X that is directed downward and such that the set of all knots of all strings in $\mathbb{S}$ is a neighborhood basis at the origin for $(X, \tau)$. Such a collection of strings is said to be $\tau$-fundamental.

Conversely, if X is a vector space and if $\mathbb{S}$ is a collection of strings in X that is directed downward, then the set $\operatorname{Knots} \mathbb{S}$ of all knots of all strings in $\mathbb{S}$ forms a neighborhood basis at the origin for a vector topology on X. In this case, this topology is denoted by $\tau_{\mathbb{S}}$ and is called the topology generated by $\mathbb{S}$.

If $\mathbb{S}$ is the set of all topological strings in a TVS $(X, \tau)$, then $\tau_{\mathbb{S}} = \tau$.[7] A Hausdorff TVS is metrizable if and only if its topology can be induced by a single topological string.[10]

A vector space is an abelian group with respect to the operation of addition, and in a topological vector space the inverse operation is always continuous (since it is the same as multiplication by $-1$). Hence, every topological vector space is an abelian topological group. Every TVS is completely regular, but a TVS need not be normal.[11]

Let X be a topological vector space.
Given a subspace $M \subseteq X$, the quotient space $X/M$ with the usual quotient topology is a Hausdorff topological vector space if and only if M is closed.[note 2] This permits the following construction: given a topological vector space X (that is possibly not Hausdorff), form the quotient space $X/M$ where M is the closure of $\{0\}$. $X/M$ is then a Hausdorff topological vector space that can be studied instead of X.

One of the most used properties of vector topologies is that every vector topology is translation invariant: for every $x_0 \in X$, the translation map $x \mapsto x_0 + x$ is a homeomorphism of X onto itself.

Scalar multiplication by a non-zero scalar is a TVS-isomorphism. This means that if $s \neq 0$, then the linear map $X \to X$ defined by $x \mapsto sx$ is a homeomorphism. Using $s = -1$ produces the negation map $X \to X$ defined by $x \mapsto -x$, which is consequently a linear homeomorphism and thus a TVS-isomorphism.

If $x \in X$ and S is any subset of X, then $\operatorname{cl}_X(x + S) = x + \operatorname{cl}_X S$,[6] and moreover, if $0 \in S$, then $x + S$ is a neighborhood (resp. open neighborhood, closed neighborhood) of x in X if and only if the same is true of S at the origin.

A subset E of a vector space X is said to be balanced if $sE \subseteq E$ for every scalar s with $|s| \leq 1$, and absorbing if for every $x \in X$ there exists a real $r > 0$ such that $x \in tE$ for all scalars t with $|t| \geq r$.

Every neighborhood of the origin is an absorbing set and contains an open balanced neighborhood of 0,[6] so every topological vector space has a local base of absorbing and balanced sets. The origin even has a neighborhood basis consisting of closed balanced neighborhoods of 0; if the space is locally convex, then it also has a neighborhood basis consisting of closed convex balanced neighborhoods of the origin.
Bounded subsets

A subset E of a topological vector space X is bounded[13] if for every neighborhood V of the origin there exists a scalar t such that $E \subseteq tV$.

The definition of boundedness can be weakened a bit: E is bounded if and only if every countable subset of it is bounded. A set is bounded if and only if each of its subsequences is a bounded set.[14] Also, E is bounded if and only if for every balanced neighborhood V of the origin, there exists t such that $E \subseteq tV$. Moreover, when X is locally convex, boundedness can be characterized by seminorms: the subset E is bounded if and only if every continuous seminorm p is bounded on E.[15]

Every totally bounded set is bounded.[14] If M is a vector subspace of a TVS X, then a subset of M is bounded in M if and only if it is bounded in X.[14]

Birkhoff–Kakutani theorem — If $(X, \tau)$ is a topological vector space, then the following four conditions are equivalent:[16][note 3]

By the Birkhoff–Kakutani theorem, it follows that there is an equivalent metric that is translation-invariant.

A TVS is pseudometrizable if and only if it has a countable neighborhood basis at the origin, or equivalently, if and only if its topology is generated by an F-seminorm. A TVS is metrizable if and only if it is Hausdorff and pseudometrizable.

More strongly: a topological vector space is said to be normable if its topology can be induced by a norm. A topological vector space is normable if and only if it is Hausdorff and has a convex bounded neighborhood of the origin.[17]

Let $\mathbb{K}$ be a non-discrete locally compact topological field, for example the real or complex numbers.
AHausdorfftopological vector space overK{\displaystyle \mathbb {K} }is locally compact if and only if it isfinite-dimensional, that is, isomorphic toKn{\displaystyle \mathbb {K} ^{n}}for some natural numbern.{\displaystyle n.}[18] Thecanonical uniformity[19]on a TVS(X,τ){\displaystyle (X,\tau )}is the unique translation-invariantuniformitythat induces the topologyτ{\displaystyle \tau }onX.{\displaystyle X.} Every TVS is assumed to be endowed with this canonical uniformity, which makes all TVSs intouniform spaces. This allows one to talk about related notions such ascompleteness,uniform convergence, Cauchy nets, anduniform continuity, etc., which are always assumed to be with respect to this uniformity (unless indicated otherwise). This implies that every Hausdorff topological vector space isTychonoff.[20]A subset of a TVS iscompactif and only if it is complete andtotally bounded(for Hausdorff TVSs, a set being totally bounded is equivalent to it beingprecompact). But if the TVS is not Hausdorff then there exist compact subsets that are not closed. However, the closure of a compact subset of a non-Hausdorff TVS is again compact (so compact subsets arerelatively compact). With respect to this uniformity, anet(orsequence)x∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}isCauchyif and only if for every neighborhoodV{\displaystyle V}of0,{\displaystyle 0,}there exists some indexn{\displaystyle n}such thatxi−xj∈V{\displaystyle x_{i}-x_{j}\in V}wheneveri≥n{\displaystyle i\geq n}andj≥n.{\displaystyle j\geq n.} EveryCauchy sequenceis bounded, although Cauchy nets and Cauchy filters may not be bounded. A topological vector space where every Cauchy sequence converges is calledsequentially complete; in general, it may not be complete (in the sense that all Cauchy filters converge). The vector space operation of addition is uniformly continuous and anopen map.
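The net-Cauchy condition above can be made concrete in the simplest TVS, the real line with its usual topology, where the basic neighborhoods of the origin are the intervals V = (−ε, ε). The following is only a finite numerical sketch for the specific sequence x_i = 1/i (the function name is illustrative, not standard terminology):

```python
# Illustration of the Cauchy condition phrased via neighborhoods of 0:
# a sequence (x_i) is Cauchy when for every neighborhood V of the origin
# there is an index n with x_i - x_j in V for all i, j >= n.
# Here X = R, x_i = 1/i, and V = (-eps, eps).

def tail_index(eps):
    """Return an index n past which all differences x_i - x_j = 1/i - 1/j
    lie in (-eps, eps).  Since 0 < x_i <= x_n for all i >= n, every such
    difference has absolute value below x_n = 1/n, so any n > 1/eps works."""
    n = int(1 / eps) + 1
    # spot-check the claim on a finite window of indices
    assert all(abs(1 / i - 1 / j) < eps
               for i in range(n, n + 100) for j in range(n, n + 100))
    return n
```

For example, `tail_index(0.5)` returns 3: every difference 1/i − 1/j with i, j ≥ 3 has absolute value below 1/3 < 0.5.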
Scalar multiplication isCauchy continuousbut in general, it is almost never uniformly continuous. Because of this, every topological vector space can be completed and is thus adenselinear subspaceof acomplete topological vector space. LetX{\displaystyle X}be a real or complex vector space. Trivial topology Thetrivial topologyorindiscrete topology{X,∅}{\displaystyle \{X,\varnothing \}}is always a TVS topology on any vector spaceX{\displaystyle X}and it is the coarsest TVS topology possible. An important consequence of this is that the intersection of any collection of TVS topologies onX{\displaystyle X}always contains a TVS topology. Any vector space (including those that are infinite dimensional) endowed with the trivial topology is a compact (and thuslocally compact)completepseudometrizableseminormablelocally convextopological vector space. It isHausdorffif and only ifdim⁡X=0.{\displaystyle \dim X=0.} Finest vector topology There exists a TVS topologyτf{\displaystyle \tau _{f}}onX,{\displaystyle X,}called thefinest vector topologyonX,{\displaystyle X,}that is finer than every other TVS-topology onX{\displaystyle X}(that is, any TVS-topology onX{\displaystyle X}is necessarily a subset ofτf{\displaystyle \tau _{f}}).[23][24]Every linear map from(X,τf){\displaystyle \left(X,\tau _{f}\right)}into another TVS is necessarily continuous. IfX{\displaystyle X}has an uncountableHamel basisthenτf{\displaystyle \tau _{f}}isnotlocally convexandnotmetrizable.[24] ACartesian productof a family of topological vector spaces, when endowed with theproduct topology, is a topological vector space. Consider for instance the setX{\displaystyle X}of all functionsf:R→R{\displaystyle f:\mathbb {R} \to \mathbb {R} }whereR{\displaystyle \mathbb {R} }carries its usualEuclidean topology. 
This setX{\displaystyle X}is a real vector space (where addition and scalar multiplication are defined pointwise, as usual) that can be identified with (and indeed, is often defined to be) theCartesian productRR,{\displaystyle \mathbb {R} ^{\mathbb {R} },}which carries the naturalproduct topology. With this product topology,X:=RR{\displaystyle X:=\mathbb {R} ^{\mathbb {R} }}becomes a topological vector space whose topology is calledthe topology ofpointwise convergenceonR.{\displaystyle \mathbb {R} .}The reason for this name is the following: if(fn)n=1∞{\displaystyle \left(f_{n}\right)_{n=1}^{\infty }}is asequence(or more generally, anet) of elements inX{\displaystyle X}and iff∈X{\displaystyle f\in X}thenfn{\displaystyle f_{n}}convergestof{\displaystyle f}inX{\displaystyle X}if and only if for every real numberx,{\displaystyle x,}fn(x){\displaystyle f_{n}(x)}converges tof(x){\displaystyle f(x)}inR.{\displaystyle \mathbb {R} .}This TVS iscomplete,Hausdorff, andlocally convexbut notmetrizableand consequently notnormable; indeed, every neighborhood of the origin in the product topology contains lines (that is, 1-dimensional vector subspaces, which are subsets of the formRf:={rf:r∈R}{\displaystyle \mathbb {R} f:=\{rf:r\in \mathbb {R} \}}withf≠0{\displaystyle f\neq 0}). ByF. Riesz's theorem, a Hausdorff topological vector space is finite-dimensional if and only if it islocally compact, which happens if and only if it has a compactneighborhoodof the origin. LetK{\displaystyle \mathbb {K} }denoteR{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }and endowK{\displaystyle \mathbb {K} }with its usual Hausdorff normedEuclidean topology.
LetX{\displaystyle X}be a vector space overK{\displaystyle \mathbb {K} }of finite dimensionn:=dim⁡X{\displaystyle n:=\dim X}and so thatX{\displaystyle X}is vector space isomorphic toKn{\displaystyle \mathbb {K} ^{n}}(explicitly, this means that there exists alinear isomorphismbetween the vector spacesX{\displaystyle X}andKn{\displaystyle \mathbb {K} ^{n}}). This finite-dimensional vector spaceX{\displaystyle X}always has a uniqueHausdorffvector topology, which makes it TVS-isomorphic toKn,{\displaystyle \mathbb {K} ^{n},}whereKn{\displaystyle \mathbb {K} ^{n}}is endowed with the usual Euclidean topology (which is the same as theproduct topology). This Hausdorff vector topology is also the (unique)finestvector topology onX.{\displaystyle X.}X{\displaystyle X}has a unique vector topology if and only ifdim⁡X=0.{\displaystyle \dim X=0.}Ifdim⁡X≠0{\displaystyle \dim X\neq 0}then althoughX{\displaystyle X}does not have a unique vector topology, it does have a uniqueHausdorffvector topology. The proof of this dichotomy (i.e. that a vector topology is either trivial or isomorphic toK{\displaystyle \mathbb {K} }) is straightforward so only an outline with the important observations is given. As usual,K{\displaystyle \mathbb {K} }is assumed to have the (normed) Euclidean topology.
LetBr:={a∈K:|a|<r}{\displaystyle B_{r}:=\{a\in \mathbb {K} :|a|<r\}}for allr>0.{\displaystyle r>0.}LetX{\displaystyle X}be a1{\displaystyle 1}-dimensional vector space overK.{\displaystyle \mathbb {K} .}IfS⊆X{\displaystyle S\subseteq X}andB⊆K{\displaystyle B\subseteq \mathbb {K} }is a ball centered at0{\displaystyle 0}thenB⋅S=X{\displaystyle B\cdot S=X}wheneverS{\displaystyle S}contains an "unbounded sequence", by which it is meant a sequence of the form(aix)i=1∞{\displaystyle \left(a_{i}x\right)_{i=1}^{\infty }}where0≠x∈X{\displaystyle 0\neq x\in X}and(ai)i=1∞⊆K{\displaystyle \left(a_{i}\right)_{i=1}^{\infty }\subseteq \mathbb {K} }is unbounded in normed spaceK{\displaystyle \mathbb {K} }(in the usual sense). Any vector topology onX{\displaystyle X}will be translation invariant and invariant under non-zero scalar multiplication, and for every0≠x∈X,{\displaystyle 0\neq x\in X,}the mapMx:K→X{\displaystyle M_{x}:\mathbb {K} \to X}given byMx(a):=ax{\displaystyle M_{x}(a):=ax}is a continuous linear bijection. BecauseX=Kx{\displaystyle X=\mathbb {K} x}for any suchx,{\displaystyle x,}every subset ofX{\displaystyle X}can be written asFx=Mx(F){\displaystyle Fx=M_{x}(F)}for some unique subsetF⊆K.{\displaystyle F\subseteq \mathbb {K} .}And if this vector topology onX{\displaystyle X}has a neighborhoodW{\displaystyle W}of the origin that is not equal to all ofX,{\displaystyle X,}then the continuity of scalar multiplicationK×X→X{\displaystyle \mathbb {K} \times X\to X}at the origin guarantees the existence of an open ballBr⊆K{\displaystyle B_{r}\subseteq \mathbb {K} }centered at0{\displaystyle 0}and an open neighborhoodS{\displaystyle S}of the origin inX{\displaystyle X}such thatBr⋅S⊆W≠X,{\displaystyle B_{r}\cdot S\subseteq W\neq X,}which implies thatS{\displaystyle S}doesnotcontain any "unbounded sequence". 
This implies that for every0≠x∈X,{\displaystyle 0\neq x\in X,}there exists some positive integern{\displaystyle n}such thatS⊆Bnx.{\displaystyle S\subseteq B_{n}x.}From this, it can be deduced that ifX{\displaystyle X}does not carry the trivial topology and if0≠x∈X,{\displaystyle 0\neq x\in X,}then for any ballB⊆K{\displaystyle B\subseteq \mathbb {K} }centered at 0 inK,{\displaystyle \mathbb {K} ,}Mx(B)=Bx{\displaystyle M_{x}(B)=Bx}contains an open neighborhood of the origin inX,{\displaystyle X,}which then proves thatMx{\displaystyle M_{x}}is a linearhomeomorphism.Q.E.D.◼{\displaystyle \blacksquare } Discrete and cofinite topologies IfX{\displaystyle X}is a non-trivial vector space (that is, of non-zero dimension) then thediscrete topologyonX{\displaystyle X}(which is alwaysmetrizable) isnota TVS topology because despite making addition and negation continuous (which makes it into atopological groupunder addition), it fails to make scalar multiplication continuous. Thecofinite topologyonX{\displaystyle X}(where a subset is open if and only if its complement is finite) is alsonota TVS topology onX.{\displaystyle X.} A linear operator between two topological vector spaces which is continuous at one point is continuous on the whole domain. Moreover, a linear operatorf{\displaystyle f}is continuous iff(U){\displaystyle f(U)}is bounded (as defined above) for some neighborhoodU{\displaystyle U}of the origin. Ahyperplanein a topological vector spaceX{\displaystyle X}is either dense or closed. Alinear functionalf{\displaystyle f}on a topological vector spaceX{\displaystyle X}has either dense or closed kernel. Moreover,f{\displaystyle f}is continuous if and only if itskernelisclosed. Depending on the application additional constraints are usually enforced on the topological structure of the space.
In fact, several principal results in functional analysis fail to hold in general for topological vector spaces: theclosed graph theorem, theopen mapping theorem, and the fact that the dual space of the space separates points in the space. Every topological vector space has acontinuous dual space—the setX′{\displaystyle X'}of all continuous linear functionals, that is,continuous linear mapsfrom the space into the base fieldK.{\displaystyle \mathbb {K} .}A topology on the dual can be defined to be the coarsest topology such that each point evaluationX′→K{\displaystyle X'\to \mathbb {K} }is continuous. This turns the dual into a locally convex topological vector space. This topology is called theweak-* topology.[27]This may not be the onlynatural topologyon the dual space; for instance, the dual of a normed space has a natural norm defined on it. However, it is very important in applications because of its compactness properties (seeBanach–Alaoglu theorem).
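As an informal finite-dimensional sketch of the weak-* topology: with X = R², the continuous dual can be identified with R² via the dot product, point evaluations become f ↦ f(x) for fixed x, and weak-* convergence of functionals is exactly convergence of each point evaluation. The helper names below are illustrative, not part of any standard API:

```python
# In X = R^2 every continuous linear functional has the form x |-> <a, x>
# for a coefficient vector a; the weak-* topology on the dual is the
# topology of pointwise convergence of these point evaluations.

def evaluate(a, x):
    """Point evaluation of the functional with coefficients a at the vector x."""
    return sum(ai * xi for ai, xi in zip(a, x))

def converges_weak_star_to_zero(coeff_seq, sample_points, tol=1e-6):
    """Crude sampled check: a far-out tail functional is small at each x."""
    a_tail = coeff_seq(10**7)
    return all(abs(evaluate(a_tail, x)) < tol for x in sample_points)

# f_n has coefficients (1/n, 0); it tends to the zero functional weak-*.
f = lambda n: (1.0 / n, 0.0)
assert converges_weak_star_to_zero(f, [(1.0, 1.0), (-2.0, 3.0), (5.0, 0.5)])
```

This only samples finitely many points and one tail index, so it illustrates the definition rather than verifying convergence.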
Caution: WheneverX{\displaystyle X}is a non-normable locally convex space, then the pairing mapX′×X→K{\displaystyle X'\times X\to \mathbb {K} }is never continuous, no matter which vector space topology one chooses onX′.{\displaystyle X'.}A topological vector space has a non-trivial continuous dual space if and only if it has a proper convex neighborhood of the origin.[28] For any subsetS⊆X{\displaystyle S\subseteq X}of a TVSX,{\displaystyle X,}theconvex(resp.balanced,disked, closed convex, closed balanced, closed disked)hullofS{\displaystyle S}is the smallest subset ofX{\displaystyle X}that has this property and containsS.{\displaystyle S.}The closure (respectively, interior,convex hull, balanced hull, disked hull) of a setS{\displaystyle S}is sometimes denoted byclX⁡S{\displaystyle \operatorname {cl} _{X}S}(respectively,IntX⁡S,{\displaystyle \operatorname {Int} _{X}S,}co⁡S,{\displaystyle \operatorname {co} S,}bal⁡S,{\displaystyle \operatorname {bal} S,}cobal⁡S{\displaystyle \operatorname {cobal} S}). Theconvex hullco⁡S{\displaystyle \operatorname {co} S}of a subsetS{\displaystyle S}is equal to the set of allconvex combinationsof elements inS,{\displaystyle S,}which are finitelinear combinationsof the formt1s1+⋯+tnsn{\displaystyle t_{1}s_{1}+\cdots +t_{n}s_{n}}wheren≥1{\displaystyle n\geq 1}is an integer,s1,…,sn∈S{\displaystyle s_{1},\ldots ,s_{n}\in S}andt1,…,tn∈[0,1]{\displaystyle t_{1},\ldots ,t_{n}\in [0,1]}sum to1.{\displaystyle 1.}[29]The intersection of any family of convex sets is convex and the convex hull of a subset is equal to the intersection of all convex sets that contain it.[29] Properties of neighborhoods and open sets Every TVS isconnected[6]andlocally connected[30]and any connected open subset of a TVS isarcwise connected.
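The description of the convex hull via convex combinations above can be illustrated numerically for S = {(0,0), (1,0), (0,1)} in R², whose convex hull is the closed triangle {x ≥ 0, y ≥ 0, x + y ≤ 1}. This is a rough random-sampling sketch, not a general hull algorithm:

```python
import random

# Convex combinations t1*s1 + ... + tn*sn with ti in [0, 1] summing to 1,
# for the three vertices of the standard triangle in R^2.
S = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]

def convex_combination(points, weights):
    assert abs(sum(weights) - 1.0) < 1e-9 and min(weights) >= 0.0
    return tuple(sum(w * p[k] for w, p in zip(weights, points))
                 for k in range(len(points[0])))

def in_hull_of_S(p, tol=1e-9):
    """Membership in co(S) = the triangle x >= 0, y >= 0, x + y <= 1."""
    x, y = p
    return x >= -tol and y >= -tol and x + y <= 1.0 + tol

# every sampled convex combination lands inside the triangle
random.seed(0)
for _ in range(1000):
    raw = [random.random() for _ in S]
    total = sum(raw)
    weights = [r / total for r in raw]
    assert in_hull_of_S(convex_combination(S, weights))
```

Conversely, every point of the triangle is such a combination (its barycentric coordinates), which is the other inclusion in the characterization.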
IfS⊆X{\displaystyle S\subseteq X}andU{\displaystyle U}is an open subset ofX{\displaystyle X}thenS+U{\displaystyle S+U}is an open set inX{\displaystyle X}[6]and ifS⊆X{\displaystyle S\subseteq X}has non-empty interior thenS−S{\displaystyle S-S}is a neighborhood of the origin.[6] The open convex subsets of a TVSX{\displaystyle X}(not necessarily Hausdorff or locally convex) are exactly those that are of the formz+{x∈X:p(x)<1}={x∈X:p(x−z)<1}{\displaystyle z+\{x\in X:p(x)<1\}~=~\{x\in X:p(x-z)<1\}}for somez∈X{\displaystyle z\in X}and some positive continuoussublinear functionalp{\displaystyle p}onX.{\displaystyle X.}[28] IfK{\displaystyle K}is anabsorbingdiskin a TVSX{\displaystyle X}and ifp:=pK{\displaystyle p:=p_{K}}is theMinkowski functionalofK{\displaystyle K}then[31]IntX⁡K⊆{x∈X:p(x)<1}⊆K⊆{x∈X:p(x)≤1}⊆clX⁡K{\displaystyle \operatorname {Int} _{X}K~\subseteq ~\{x\in X:p(x)<1\}~\subseteq ~K~\subseteq ~\{x\in X:p(x)\leq 1\}~\subseteq ~\operatorname {cl} _{X}K}where importantly, it wasnotassumed thatK{\displaystyle K}had any topological properties nor thatp{\displaystyle p}was continuous (which happens if and only ifK{\displaystyle K}is a neighborhood of the origin). 
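The sandwich Int K ⊆ {p < 1} ⊆ K ⊆ {p ≤ 1} ⊆ cl K above can be checked numerically for one concrete absorbing disk: K = [−1, 1]² in R², whose Minkowski functional has the closed form p(x) = max(|x₁|, |x₂|). The bisection below is an illustrative approximation of the infimum defining p, not a library routine:

```python
# Minkowski functional p_K(x) = inf{t > 0 : x in t*K} for K = [-1, 1]^2,
# approximated by bisection on t.  For this K the exact value is
# max(|x1|, |x2|) (the sup norm), which the approximation should match.

def p_K(x, hi=1e6, steps=80):
    in_tK = lambda t: all(abs(c) <= t for c in x)   # is x in t*K ?
    lo = 0.0
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        lo, hi = (lo, mid) if in_tK(mid) else (mid, hi)
    return hi                                        # upper estimate of the inf

for x in [(0.3, -0.9), (1.0, 0.2), (1.5, 0.0), (-0.999, 0.999)]:
    exact = max(abs(c) for c in x)
    assert abs(p_K(x) - exact) < 1e-9                # matches the sup norm
    if p_K(x) < 1.0:                                 # {p < 1} subset of K
        assert all(abs(c) <= 1.0 for c in x)
    if all(abs(c) <= 1.0 for c in x):                # K subset of {p <= 1}
        assert p_K(x) <= 1.0 + 1e-9
```

Here p happens to be continuous because this K is a neighborhood of the origin; as the text notes, the sandwich itself needs no such assumption.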
Letτ{\displaystyle \tau }andν{\displaystyle \nu }be two vector topologies onX.{\displaystyle X.}Thenτ⊆ν{\displaystyle \tau \subseteq \nu }if and only if whenever a netx∙=(xi)i∈I{\displaystyle x_{\bullet }=\left(x_{i}\right)_{i\in I}}inX{\displaystyle X}converges to0{\displaystyle 0}in(X,ν){\displaystyle (X,\nu )}thenx∙→0{\displaystyle x_{\bullet }\to 0}in(X,τ).{\displaystyle (X,\tau ).}[32] LetN{\displaystyle {\mathcal {N}}}be a neighborhood basis of the origin inX,{\displaystyle X,}letS⊆X,{\displaystyle S\subseteq X,}and letx∈X.{\displaystyle x\in X.}Thenx∈clX⁡S{\displaystyle x\in \operatorname {cl} _{X}S}if and only if there exists a nets∙=(sN)N∈N{\displaystyle s_{\bullet }=\left(s_{N}\right)_{N\in {\mathcal {N}}}}inS{\displaystyle S}(indexed byN{\displaystyle {\mathcal {N}}}) such thats∙→x{\displaystyle s_{\bullet }\to x}inX.{\displaystyle X.}[33]This shows, in particular, that it will often suffice to consider nets indexed by a neighborhood basis of the origin rather than nets on arbitrary directed sets.
IfX{\displaystyle X}is a TVS that is of thesecond categoryin itself (that is, anonmeager space) then any closed convexabsorbingsubset ofX{\displaystyle X}is a neighborhood of the origin.[34]This is no longer guaranteed if the set is not convex (a counter-example exists even inX=R2{\displaystyle X=\mathbb {R} ^{2}}) or ifX{\displaystyle X}is not of the second category in itself.[34] Interior IfR,S⊆X{\displaystyle R,S\subseteq X}andS{\displaystyle S}has non-empty interior thenIntX⁡S=IntX⁡(clX⁡S)andclX⁡S=clX⁡(IntX⁡S){\displaystyle \operatorname {Int} _{X}S~=~\operatorname {Int} _{X}\left(\operatorname {cl} _{X}S\right)~{\text{ and }}~\operatorname {cl} _{X}S~=~\operatorname {cl} _{X}\left(\operatorname {Int} _{X}S\right)}andIntX⁡(R)+IntX⁡(S)⊆R+IntX⁡S⊆IntX⁡(R+S).{\displaystyle \operatorname {Int} _{X}(R)+\operatorname {Int} _{X}(S)~\subseteq ~R+\operatorname {Int} _{X}S\subseteq \operatorname {Int} _{X}(R+S).} Thetopological interiorof adiskis not empty if and only if this interior contains the origin.[35]More generally, ifS{\displaystyle S}is abalancedset with non-empty interiorIntX⁡S≠∅{\displaystyle \operatorname {Int} _{X}S\neq \varnothing }in a TVSX{\displaystyle X}then{0}∪IntX⁡S{\displaystyle \{0\}\cup \operatorname {Int} _{X}S}will necessarily be balanced;[6]consequently,IntX⁡S{\displaystyle \operatorname {Int} _{X}S}will be balanced if and only if it contains the origin.[proof 2]For this (i.e.0∈IntX⁡S{\displaystyle 0\in \operatorname {Int} _{X}S}) to be true, it suffices forS{\displaystyle S}to also be convex (in addition to being balanced and having non-empty interior).[6]The conclusion0∈IntX⁡S{\displaystyle 0\in \operatorname {Int} _{X}S}could be false ifS{\displaystyle S}is not also convex;[35]for example, inX:=R2,{\displaystyle X:=\mathbb {R} ^{2},}the interior of the closed and balanced setS:={(x,y):xy≥0}{\displaystyle S:=\{(x,y):xy\geq 0\}}is{(x,y):xy>0}.{\displaystyle \{(x,y):xy>0\}.} IfC{\displaystyle C}is convex and0<t≤1,{\displaystyle 0<t\leq
1,}then[36]tInt⁡C+(1−t)cl⁡C⊆Int⁡C.{\displaystyle t\operatorname {Int} C+(1-t)\operatorname {cl} C~\subseteq ~\operatorname {Int} C.}Explicitly, this means that ifC{\displaystyle C}is a convex subset of a TVSX{\displaystyle X}(not necessarily Hausdorff or locally convex),y∈intX⁡C,{\displaystyle y\in \operatorname {int} _{X}C,}andx∈clX⁡C{\displaystyle x\in \operatorname {cl} _{X}C}then the open line segment joiningx{\displaystyle x}andy{\displaystyle y}belongs to the interior ofC;{\displaystyle C;}that is,{tx+(1−t)y:0<t<1}⊆intX⁡C.{\displaystyle \{tx+(1-t)y:0<t<1\}\subseteq \operatorname {int} _{X}C.}[37][38][proof 3] IfN⊆X{\displaystyle N\subseteq X}is any balanced neighborhood of the origin inX{\displaystyle X}thenIntX⁡N⊆B1N=⋃0<|a|<1aN⊆N{\textstyle \operatorname {Int} _{X}N\subseteq B_{1}N=\bigcup _{0<|a|<1}aN\subseteq N}whereB1{\displaystyle B_{1}}is the set of all scalarsa{\displaystyle a}such that|a|<1.{\displaystyle |a|<1.} Ifx{\displaystyle x}belongs to the interior of a convex setS⊆X{\displaystyle S\subseteq X}andy∈clX⁡S,{\displaystyle y\in \operatorname {cl} _{X}S,}then the half-open line segment[x,y):={tx+(1−t)y:0<t≤1}⊆IntX⁡ifx≠y{\displaystyle [x,y):=\{tx+(1-t)y:0<t\leq 1\}\subseteq \operatorname {Int} _{X}{\text{ if }}x\neq y}and[37][x,x)=∅ifx=y.{\displaystyle [x,x)=\varnothing {\text{ if }}x=y.}IfN{\displaystyle N}is abalancedneighborhood of0{\displaystyle 0}inX{\displaystyle X}andB1:={a∈K:|a|<1},{\displaystyle B_{1}:=\{a\in \mathbb {K} :|a|<1\},}then by considering intersections of the formN∩Rx{\displaystyle N\cap \mathbb {R} x}(which are convexsymmetricneighborhoods of0{\displaystyle 0}in the real TVSRx{\displaystyle \mathbb {R} x}) it follows that:Int⁡N=[0,1)Int⁡N=(−1,1)N=B1N,{\displaystyle \operatorname {Int} N=[0,1)\operatorname {Int} N=(-1,1)N=B_{1}N,}and furthermore, ifx∈Int⁡Nandr:=sup{r>0:[0,r)x⊆N}{\displaystyle x\in \operatorname {Int} N{\text{ and }}r:=\sup\{r>0:[0,r)x\subseteq N\}}thenr>1and[0,r)x⊆Int⁡N,{\displaystyle r>1{\text{ and 
}}[0,r)x\subseteq \operatorname {Int} N,}and ifr≠∞{\displaystyle r\neq \infty }thenrx∈cl⁡N∖Int⁡N.{\displaystyle rx\in \operatorname {cl} N\setminus \operatorname {Int} N.} A topological vector spaceX{\displaystyle X}is Hausdorff if and only if{0}{\displaystyle \{0\}}is a closed subset ofX,{\displaystyle X,}or equivalently, if and only if{0}=clX⁡{0}.{\displaystyle \{0\}=\operatorname {cl} _{X}\{0\}.}Because{0}{\displaystyle \{0\}}is a vector subspace ofX,{\displaystyle X,}the same is true of its closureclX⁡{0},{\displaystyle \operatorname {cl} _{X}\{0\},}which is referred to asthe closure of the origininX.{\displaystyle X.}This vector space satisfiesclX⁡{0}=⋂N∈N(0)N{\displaystyle \operatorname {cl} _{X}\{0\}=\bigcap _{N\in {\mathcal {N}}(0)}N}so that in particular, every neighborhood of the origin inX{\displaystyle X}contains the vector spaceclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}as a subset. Thesubspace topologyonclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}is always thetrivial topology, which in particular implies that the topological vector spaceclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}is acompact space(even if its dimension is non-zero or even infinite) and consequently also abounded subsetofX.{\displaystyle X.}In fact, a vector subspace of a TVS is bounded if and only if it is contained in the closure of{0}.{\displaystyle \{0\}.}[14]Every subset ofclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}also carries the trivial topology and so is itself a compact, and thus also complete,subspace(see footnote for a proof).[proof 4]In particular, ifX{\displaystyle X}is not Hausdorff then there exist subsets that are bothcompact and completebutnot closedinX{\displaystyle X};[39]for instance, this will be true of any non-empty proper subset ofclX⁡{0}.{\displaystyle \operatorname {cl} _{X}\{0\}.} IfS⊆X{\displaystyle S\subseteq X}is compact, thenclX⁡S=S+clX⁡{0}{\displaystyle \operatorname {cl} _{X}S=S+\operatorname {cl} _{X}\{0\}}and this set is
compact. Thus the closure of a compact subset of a TVS is compact (said differently, all compact sets arerelatively compact),[40]which is not guaranteed for arbitrary non-Hausdorfftopological spaces.[note 6] For every subsetS⊆X,{\displaystyle S\subseteq X,}S+clX⁡{0}⊆clX⁡S{\displaystyle S+\operatorname {cl} _{X}\{0\}\subseteq \operatorname {cl} _{X}S}and consequently, ifS⊆X{\displaystyle S\subseteq X}is open or closed inX{\displaystyle X}thenS+clX⁡{0}=S{\displaystyle S+\operatorname {cl} _{X}\{0\}=S}[proof 5](so that any such open or closed subsetS{\displaystyle S}can be described as a"tube"whose vertical side is the vector spaceclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}). IfM{\displaystyle M}is a vector subspace of a TVSX{\displaystyle X}thenX/M{\displaystyle X/M}is Hausdorff if and only ifM{\displaystyle M}is closed inX.{\displaystyle X.}Moreover, thequotient mapq:X→X/clX⁡{0}{\displaystyle q:X\to X/\operatorname {cl} _{X}\{0\}}is always aclosed maponto the (necessarily) Hausdorff TVS.[44] Every vector subspace ofX{\displaystyle X}that is an algebraic complement ofclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}(that is, a vector subspaceH{\displaystyle H}that satisfies{0}=H∩clX⁡{0}{\displaystyle \{0\}=H\cap \operatorname {cl} _{X}\{0\}}andX=H+clX⁡{0}{\displaystyle X=H+\operatorname {cl} _{X}\{0\}}) is atopological complementofclX⁡{0}.{\displaystyle \operatorname {cl} _{X}\{0\}.}Consequently, ifH{\displaystyle H}is an algebraic complement ofclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}inX{\displaystyle X}then the addition mapH×clX⁡{0}→X,{\displaystyle H\times \operatorname {cl} _{X}\{0\}\to X,}defined by(h,n)↦h+n{\displaystyle (h,n)\mapsto h+n}is a TVS-isomorphism, whereH{\displaystyle H}is necessarily Hausdorff andclX⁡{0}{\displaystyle \operatorname {cl} _{X}\{0\}}has theindiscrete topology.[45]Moreover, ifC{\displaystyle C}is a
HausdorffcompletionofH{\displaystyle H}thenC×clX⁡{0}{\displaystyle C\times \operatorname {cl} _{X}\{0\}}is a completion ofX≅H×clX⁡{0}.{\displaystyle X\cong H\times \operatorname {cl} _{X}\{0\}.}[41] Compact and totally bounded sets A subset of a TVS is compact if and only if it is complete andtotally bounded.[39]Thus, in acomplete topological vector space, a closed and totally bounded subset is compact.[39]A subsetS{\displaystyle S}of a TVSX{\displaystyle X}istotally boundedif and only ifclX⁡S{\displaystyle \operatorname {cl} _{X}S}is totally bounded,[42][43]if and only if its image under the canonical quotient mapX→X/clX⁡({0}){\displaystyle X\to X/\operatorname {cl} _{X}(\{0\})}is totally bounded.[41] Every relatively compact set is totally bounded[39]and the closure of a totally bounded set is totally bounded.[39]The image of a totally bounded set under a uniformly continuous map (such as a continuous linear map for instance) is totally bounded.[39]IfS{\displaystyle S}is a subset of a TVSX{\displaystyle X}such that every sequence inS{\displaystyle S}has a cluster point inS{\displaystyle S}thenS{\displaystyle S}is totally bounded.[41] IfK{\displaystyle K}is a compact subset of a TVSX{\displaystyle X}andU{\displaystyle U}is an open subset ofX{\displaystyle X}containingK,{\displaystyle K,}then there exists a neighborhoodN{\displaystyle N}of 0 such thatK+N⊆U.{\displaystyle K+N\subseteq U.}[46] Closure and closed set The closure of any convex (respectively, any balanced, any absorbing) subset of any TVS has this same property. In particular, the closure of any convex, balanced, and absorbing subset is abarrel. The closure of a vector subspace of a TVS is a vector subspace. Every finite dimensional vector subspace of a Hausdorff TVS is closed. 
The sum of a closed vector subspace and a finite-dimensional vector subspace is closed.[6]IfM{\displaystyle M}is a vector subspace ofX{\displaystyle X}andN{\displaystyle N}is a closed neighborhood of the origin inX{\displaystyle X}such thatM∩N{\displaystyle M\cap N}is closed inX{\displaystyle X}thenM{\displaystyle M}is closed inX.{\displaystyle X.}[46]The sum of a compact set and a closed set is closed. However, the sum of two closed subsets may fail to be closed[6](see this footnote[note 7]for examples). IfS⊆X{\displaystyle S\subseteq X}anda{\displaystyle a}is a scalar thenaclX⁡S⊆clX⁡(aS),{\displaystyle a\operatorname {cl} _{X}S\subseteq \operatorname {cl} _{X}(aS),}where ifX{\displaystyle X}is Hausdorff,a≠0,orS=∅{\displaystyle a\neq 0,{\text{ or }}S=\varnothing }then equality holds:clX⁡(aS)=aclX⁡S.{\displaystyle \operatorname {cl} _{X}(aS)=a\operatorname {cl} _{X}S.}In particular, every non-zero scalar multiple of a closed set is closed. IfS⊆X{\displaystyle S\subseteq X}and ifA{\displaystyle A}is a set of scalars such that neithercl⁡Snorcl⁡A{\displaystyle \operatorname {cl} S{\text{ nor }}\operatorname {cl} A}contains zero then[47](cl⁡A)(clX⁡S)=clX⁡(AS).{\displaystyle \left(\operatorname {cl} A\right)\left(\operatorname {cl} _{X}S\right)=\operatorname {cl} _{X}(AS).} IfS⊆XandS+S⊆2clX⁡S{\displaystyle S\subseteq X{\text{ and }}S+S\subseteq 2\operatorname {cl} _{X}S}thenclX⁡S{\displaystyle \operatorname {cl} _{X}S}is convex.[47] IfR,S⊆X{\displaystyle R,S\subseteq X}then[6]clX⁡(R)+clX⁡(S)⊆clX⁡(R+S)andclX⁡[clX⁡(R)+clX⁡(S)]=clX⁡(R+S){\displaystyle \operatorname {cl} _{X}(R)+\operatorname {cl} _{X}(S)~\subseteq ~\operatorname {cl} _{X}(R+S)~{\text{ and }}~\operatorname {cl} _{X}\left[\operatorname {cl} _{X}(R)+\operatorname {cl} _{X}(S)\right]~=~\operatorname {cl} _{X}(R+S)}and so consequently, ifR+S{\displaystyle R+S}is closed then so isclX⁡(R)+clX⁡(S).{\displaystyle \operatorname {cl} _{X}(R)+\operatorname {cl} _{X}(S).}[47] IfX{\displaystyle X}is a real TVS
andS⊆X,{\displaystyle S\subseteq X,}then⋂r>1rS⊆clX⁡S{\displaystyle \bigcap _{r>1}rS\subseteq \operatorname {cl} _{X}S}where the left hand side is independent of the topology onX;{\displaystyle X;}moreover, ifS{\displaystyle S}is a convex neighborhood of the origin then equality holds. For any subsetS⊆X,{\displaystyle S\subseteq X,}clX⁡S=⋂N∈N(S+N){\displaystyle \operatorname {cl} _{X}S~=~\bigcap _{N\in {\mathcal {N}}}(S+N)}whereN{\displaystyle {\mathcal {N}}}is any neighborhood basis at the origin forX.{\displaystyle X.}[48]However,clX⁡S⊇⋂{U:S⊆U,Uis open inX}{\displaystyle \operatorname {cl} _{X}S~\supseteq ~\bigcap \{U:S\subseteq U,U{\text{ is open in }}X\}}and it is possible for this containment to be proper[49](for example, ifX=R{\displaystyle X=\mathbb {R} }andS{\displaystyle S}is the rational numbers). It follows thatclX⁡U⊆U+U{\displaystyle \operatorname {cl} _{X}U\subseteq U+U}for every neighborhoodU{\displaystyle U}of the origin inX.{\displaystyle X.}[50] Closed hulls In a locally convex space, convex hulls of bounded sets are bounded.
This is not true for TVSs in general.[14] IfR,S⊆X{\displaystyle R,S\subseteq X}and the closed convex hull of one of the setsS{\displaystyle S}orR{\displaystyle R}is compact then[51]clX⁡(co⁡(R+S))=clX⁡(co⁡R)+clX⁡(co⁡S).{\displaystyle \operatorname {cl} _{X}(\operatorname {co} (R+S))~=~\operatorname {cl} _{X}(\operatorname {co} R)+\operatorname {cl} _{X}(\operatorname {co} S).}IfR,S⊆X{\displaystyle R,S\subseteq X}each have a closed convex hull that is compact (that is,clX⁡(co⁡R){\displaystyle \operatorname {cl} _{X}(\operatorname {co} R)}andclX⁡(co⁡S){\displaystyle \operatorname {cl} _{X}(\operatorname {co} S)}are compact) then[51]clX⁡(co⁡(R∪S))=co⁡[clX⁡(co⁡R)∪clX⁡(co⁡S)].{\displaystyle \operatorname {cl} _{X}(\operatorname {co} (R\cup S))~=~\operatorname {co} \left[\operatorname {cl} _{X}(\operatorname {co} R)\cup \operatorname {cl} _{X}(\operatorname {co} S)\right].} Hulls and compactness In a general TVS, the closed convex hull of a compact set mayfailto be compact. The balanced hull of a compact (respectively,totally bounded) set has that same property.[6]The convex hull of a finite union of compactconvexsets is again compact and convex.[6] Meager, nowhere dense, and Baire Adiskin a TVS is notnowhere denseif and only if its closure is a neighborhood of the origin.[9]A vector subspace of a TVS that is closed but not open isnowhere dense.[9] SupposeX{\displaystyle X}is a TVS that does not carry theindiscrete topology. 
ThenX{\displaystyle X}is aBaire spaceif and only ifX{\displaystyle X}has no balanced absorbing nowhere dense subset.[9] A TVSX{\displaystyle X}is a Baire space if and only ifX{\displaystyle X}isnonmeager, which happens if and only if there does not exist anowhere densesetD{\displaystyle D}such thatX=⋃n∈NnD.{\textstyle X=\bigcup _{n\in \mathbb {N} }nD.}[9]Everynonmeagerlocally convex TVS is abarrelled space.[9] Important algebraic facts and common misconceptions IfS⊆X{\displaystyle S\subseteq X}then2S⊆S+S{\displaystyle 2S\subseteq S+S}; ifS{\displaystyle S}is convex then equality holds. For an example where equality doesnothold, letx{\displaystyle x}be non-zero and setS={−x,x};{\displaystyle S=\{-x,x\};}S={x,2x}{\displaystyle S=\{x,2x\}}also works. A subsetC{\displaystyle C}is convex if and only if(s+t)C=sC+tC{\displaystyle (s+t)C=sC+tC}for all positive reals>0andt>0,{\displaystyle s>0{\text{ and }}t>0,}[29]or equivalently, if and only iftC+(1−t)C⊆C{\displaystyle tC+(1-t)C\subseteq C}for all0≤t≤1.{\displaystyle 0\leq t\leq 1.}[52] Theconvex balanced hullof a setS⊆X{\displaystyle S\subseteq X}is equal to the convex hull of thebalanced hullofS;{\displaystyle S;}that is, it is equal toco⁡(bal⁡S).{\displaystyle \operatorname {co} (\operatorname {bal} S).}But in general,bal⁡(co⁡S)⊆cobal⁡S=co⁡(bal⁡S),{\displaystyle \operatorname {bal} (\operatorname {co} S)~\subseteq ~\operatorname {cobal} S~=~\operatorname {co} (\operatorname {bal} S),}where the inclusion might be strict since thebalanced hullof a convex set need not be convex (counter-examples exist even inR2{\displaystyle \mathbb {R} ^{2}}). 
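The inclusion 2S ⊆ S + S above, and its failure to be an equality for the non-convex set S = {−x, x}, can be checked directly on finite sets of reals (a toy computation with x = 1):

```python
# 2S versus S + S for finite subsets of R.  For S = {-1, 1} the sum set
# S + S = {-2, 0, 2} strictly contains 2S = {-2, 2}: the extra point
# 0 = (-1) + 1 is a sum of two *different* elements of S.

def scale(S, a):
    return {a * s for s in S}

def sumset(S, T):
    return {s + t for s in S for t in T}

S = {-1, 1}
assert scale(S, 2) == {-2, 2}
assert sumset(S, S) == {-2, 0, 2}
assert scale(S, 2) < sumset(S, S)        # strict inclusion: 2S != S + S
```

The other example from the text behaves the same way: for S = {x, 2x} with x = 1, the sum x + 2x = 3 lies in S + S = {2, 3, 4} but not in 2S = {2, 4}.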
IfR,S⊆X{\displaystyle R,S\subseteq X}anda{\displaystyle a}is a scalar then[6]a(R+S)=aR+aS,andco⁡(R+S)=co⁡R+co⁡S,andco⁡(aS)=aco⁡S.{\displaystyle a(R+S)=aR+aS,~{\text{ and }}~\operatorname {co} (R+S)=\operatorname {co} R+\operatorname {co} S,~{\text{ and }}~\operatorname {co} (aS)=a\operatorname {co} S.}IfR,S⊆X{\displaystyle R,S\subseteq X}are convex non-empty disjoint sets andx∉R∪S,{\displaystyle x\not \in R\cup S,}thenS∩co⁡(R∪{x})=∅{\displaystyle S\cap \operatorname {co} (R\cup \{x\})=\varnothing }orR∩co⁡(S∪{x})=∅.{\displaystyle R\cap \operatorname {co} (S\cup \{x\})=\varnothing .} In any non-trivial vector spaceX,{\displaystyle X,}there exist two disjoint non-empty convex subsets whose union isX.{\displaystyle X.} Other properties Every TVS topology can be generated by afamilyofF-seminorms.[53] IfP(x){\displaystyle P(x)}is some unarypredicate(a true or false statement dependent onx∈X{\displaystyle x\in X}) then for anyz∈X,{\displaystyle z\in X,}z+{x∈X:P(x)}={x∈X:P(x−z)}.{\displaystyle z+\{x\in X:P(x)\}=\{x\in X:P(x-z)\}.}[proof 6]So for example, ifP(x){\displaystyle P(x)}denotes "‖x‖<1{\displaystyle \|x\|<1}" then for anyz∈X,{\displaystyle z\in X,}z+{x∈X:‖x‖<1}={x∈X:‖x−z‖<1}.{\displaystyle z+\{x\in X:\|x\|<1\}=\{x\in X:\|x-z\|<1\}.}Similarly, ifs≠0{\displaystyle s\neq 0}is a scalar thens{x∈X:P(x)}={x∈X:P(1sx)}.{\displaystyle s\{x\in X:P(x)\}=\left\{x\in X:P\left({\tfrac {1}{s}}x\right)\right\}.}The elementsx∈X{\displaystyle x\in X}of these sets must range over a vector space (that is, overX{\displaystyle X}) rather than just a subset, or else these equalities are no longer guaranteed; similarly,z{\displaystyle z}must belong to this vector space (that is,z∈X{\displaystyle z\in X}). In the following table, the color of each cell indicates whether or not a given property of subsets ofX{\displaystyle X}(indicated by the column name, "convex" for instance) is preserved under the set operator (indicated by the row's name, "closure" for instance).
If in every TVS, a property is preserved under the indicated set operator then that cell will be colored green; otherwise, it will be colored red. So for instance, since the union of two absorbing sets is again absorbing, the cell in row "R∪S{\displaystyle R\cup S}" and column "Absorbing" is colored green. But since the arbitrary intersection of absorbing sets need not be absorbing, the cell in row "Arbitrary intersections (of at least 1 set)" and column "Absorbing" is colored red. If a cell is not colored then that information has yet to be filled in.
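The translation identity z + {x ∈ X : P(x)} = {x ∈ X : P(x − z)} discussed above can be illustrated on a sampled grid of ℝ, with P(x) meaning "|x| < 1". The grid spacing and the value z = 0.5 are arbitrary choices for this sketch.

```python
# Verify z + {x : |x| < 1} = {x : |x - z| < 1} on a fine grid of [-3, 3].
z = 0.5
grid = [i / 100 for i in range(-300, 301)]
lhs = {round(z + x, 10) for x in grid if abs(x) < 1}        # translate the ball
rhs = {round(x, 10) for x in grid if abs(x - z) < 1}        # ball around z
assert lhs == rhs
```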
https://en.wikipedia.org/wiki/Topological_vector_space
In the mathematical study of functional analysis, the Banach–Mazur distance is a way to define a distance on the set Q(n) of n-dimensional normed spaces. With this distance, the set of isometry classes of n-dimensional normed spaces becomes a compact metric space, called the Banach–Mazur compactum. If X and Y are two finite-dimensional normed spaces with the same dimension, let GL(X, Y) denote the collection of all linear isomorphisms T : X → Y. Denote by ‖T‖ the operator norm of such a linear map, the maximum factor by which it "lengthens" vectors. The Banach–Mazur distance between X and Y is defined by δ(X, Y) = log( inf{ ‖T‖ ‖T⁻¹‖ : T ∈ GL(X, Y) } ). We have δ(X, Y) = 0 if and only if the spaces X and Y are isometrically isomorphic. Many authors prefer to work with the multiplicative Banach–Mazur distance d(X, Y) := e^{δ(X, Y)} = inf{ ‖T‖ ‖T⁻¹‖ : T ∈ GL(X, Y) }, for which d(X, Z) ≤ d(X, Y) d(Y, Z) and d(X, X) = 1. F. John's theorem on the maximal ellipsoid contained in a convex body gives the estimate d(X, ℓ²ₙ) ≤ √n for every X ∈ Q(n), where ℓ²ₙ denotes ℝⁿ with the Euclidean norm (see the article on Lᵖ spaces).
From this it follows that d(X, Y) ≤ n for all X, Y ∈ Q(n). However, for the classical spaces, this upper bound for the diameter of Q(n) is far from being approached. For example, the distance between ℓ¹ₙ and ℓ∞ₙ is (only) of order n^{1/2} (up to a multiplicative constant independent of the dimension n). A major achievement in the direction of estimating the diameter of Q(n) is due to E. Gluskin, who proved in 1981 that the (multiplicative) diameter of the Banach–Mazur compactum is bounded below by cn, for some universal c > 0. Gluskin's method introduces a class of random symmetric polytopes P(ω) in ℝⁿ, and the normed spaces X(ω) having P(ω) as unit ball (the vector space is ℝⁿ and the norm is the gauge of P(ω)). The proof consists in showing that the required estimate holds with large probability for two independent copies of the normed space X(ω). Q(2) is an absolute extensor.[2] On the other hand, Q(2) is not homeomorphic to the Hilbert cube.
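Any single isomorphism T gives the upper bound d(X, Y) ≤ ‖T‖ ‖T⁻¹‖. As a numerical sketch (an illustration, not part of the standard treatment), taking T to be the identity map of ℝⁿ viewed as a map ℓ¹ₙ → ℓ²ₙ gives ‖T‖ = 1 and ‖T⁻¹‖ = √n, so d(ℓ¹ₙ, ℓ²ₙ) ≤ √n, matching John's estimate. The operator norms are estimated below by random sampling, which only ever undershoots the true suprema.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((100_000, n))       # random nonzero test vectors

# ratio ‖x‖₂ / ‖x‖₁ estimates ‖Id : ℓ¹ → ℓ²‖, whose exact value is 1
ratio_12 = np.linalg.norm(X, axis=1) / np.abs(X).sum(axis=1)
# ratio ‖x‖₁ / ‖x‖₂ estimates ‖Id : ℓ² → ℓ¹‖, whose exact value is √n
ratio_21 = np.abs(X).sum(axis=1) / np.linalg.norm(X, axis=1)

assert ratio_12.max() <= 1 + 1e-12          # ‖x‖₂ ≤ ‖x‖₁ always
assert ratio_21.max() <= np.sqrt(n) + 1e-12 # Cauchy–Schwarz bound
print(ratio_12.max() * ratio_21.max())      # close to √n = 2 for n = 4
```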
https://en.wikipedia.org/wiki/Banach%E2%80%93Mazur_compactum
In functional analysis and related areas of mathematics, a continuous linear operator or continuous linear mapping is a continuous linear transformation between topological vector spaces. An operator between two normed spaces is a bounded linear operator if and only if it is a continuous linear operator. Suppose that F : X → Y is a linear operator between two topological vector spaces (TVSs). The following are equivalent: If Y is locally convex then this list may be extended to include: If X and Y are both Hausdorff locally convex spaces then this list may be extended to include: If X is a sequential space (such as a pseudometrizable space) then this list may be extended to include: If X is pseudometrizable or metrizable (such as a normed or Banach space) then we may add to this list: If Y is a seminormable space (such as a normed space) then this list may be extended to include: If X and Y are both normed or seminormed spaces (with both seminorms denoted by ‖⋅‖) then this list may be extended to include: If X and Y are Hausdorff locally convex spaces with Y finite-dimensional then this list may be extended to include: Throughout, F : X → Y is a linear map between topological vector spaces (TVSs).

Bounded subset

The notion of a "bounded set" for a topological vector space is that of being a von Neumann bounded set. If the space happens to also be a normed space (or a seminormed space) then a subset S is von Neumann bounded if and only if it is norm bounded, meaning that sup_{s∈S} ‖s‖ < ∞. A subset of a normed (or seminormed) space is called bounded if it is norm-bounded (or equivalently, von Neumann bounded).
For example, the scalar field (R{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) with theabsolute value|⋅|{\displaystyle |\cdot |}is a normed space, so a subsetS{\displaystyle S}is bounded if and only ifsups∈S|s|{\displaystyle \sup _{s\in S}|s|}is finite, which happens if and only ifS{\displaystyle S}is contained in some open (or closed) ball centered at the origin (zero). Any translation, scalar multiple, and subset of a bounded set is again bounded. Function bounded on a set IfS⊆X{\displaystyle S\subseteq X}is a set thenF:X→Y{\displaystyle F:X\to Y}is said to bebounded onS{\displaystyle S}ifF(S){\displaystyle F(S)}is abounded subsetofY,{\displaystyle Y,}which if(Y,‖⋅‖){\displaystyle (Y,\|\cdot \|)}is a normed (or seminormed) space happens if and only ifsups∈S‖F(s)‖<∞.{\displaystyle \sup _{s\in S}\|F(s)\|<\infty .}A linear mapF{\displaystyle F}is bounded on a setS{\displaystyle S}if and only if it is bounded onx+S:={x+s:s∈S}{\displaystyle x+S:=\{x+s:s\in S\}}for everyx∈X{\displaystyle x\in X}(becauseF(x+S)=F(x)+F(S){\displaystyle F(x+S)=F(x)+F(S)}and any translation of a bounded set is again bounded) if and only if it is bounded oncS:={cs:s∈S}{\displaystyle cS:=\{cs:s\in S\}}for every non-zero scalarc≠0{\displaystyle c\neq 0}(becauseF(cS)=cF(S){\displaystyle F(cS)=cF(S)}and any scalar multiple of a bounded set is again bounded). 
Consequently, if (X, ‖⋅‖) is a normed or seminormed space, then a linear map F : X → Y is bounded on some (equivalently, on every) non-degenerate open or closed ball (not necessarily centered at the origin, and of any radius) if and only if it is bounded on the closed unit ball centered at the origin, {x ∈ X : ‖x‖ ≤ 1}.

Bounded linear maps

By definition, a linear map F : X → Y between TVSs is said to be bounded and is called a bounded linear operator if for every (von Neumann) bounded subset B ⊆ X of its domain, F(B) is a bounded subset of its codomain; or said more briefly, if it is bounded on every bounded subset of its domain. When the domain X is a normed (or seminormed) space then it suffices to check this condition for the open or closed unit ball centered at the origin. Explicitly, if B₁ denotes this ball then F : X → Y is a bounded linear operator if and only if F(B₁) is a bounded subset of Y; if Y is also a (semi)normed space then this happens if and only if the operator norm ‖F‖ := sup_{‖x‖≤1} ‖F(x)‖ is finite.
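For Euclidean norms the operator norm ‖F‖ = sup_{‖x‖≤1} ‖Fx‖ of a matrix equals its largest singular value, and the bound ‖Fx‖ ≤ ‖F‖ ‖x‖ shows that F maps bounded sets to bounded sets. A small numpy sketch (the specific random matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 5))           # a linear map R^5 -> R^3
op_norm = np.linalg.norm(F, 2)            # operator norm = largest singular value

# Check ‖Fx‖ ≤ ‖F‖·‖x‖ on many random vectors: images of bounded sets are bounded.
X = rng.standard_normal((10_000, 5))
images = (F @ X.T).T
assert np.all(np.linalg.norm(images, axis=1)
              <= op_norm * np.linalg.norm(X, axis=1) + 1e-9)
```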
Everysequentially continuouslinear operator is bounded.[5] Function bounded on a neighborhood and local boundedness In contrast, a mapF:X→Y{\displaystyle F:X\to Y}is said to bebounded on a neighborhood ofa pointx∈X{\displaystyle x\in X}orlocally bounded atx{\displaystyle x}if there exists aneighborhoodU{\displaystyle U}of this point inX{\displaystyle X}such thatF(U){\displaystyle F(U)}is abounded subsetofY.{\displaystyle Y.}It is "bounded on a neighborhood" (of some point) if there existssomepointx{\displaystyle x}in its domain at which it is locally bounded, in which case this linear mapF{\displaystyle F}is necessarily locally bounded ateverypoint of its domain. The term "locally bounded" is sometimes used to refer to a map that is locally bounded at every point of its domain, but some functional analysis authors define "locally bounded" to instead be a synonym of "bounded linear operator", which are related butnotequivalent concepts. For this reason, this article will avoid the term "locally bounded" and instead say "locally bounded at every point" (there is no disagreement about the definition of "locally boundedat a point"). A linear map is "bounded on a neighborhood" (of some point) if and only if it is locally bounded at every point of its domain, in which case it is necessarilycontinuous[2](even if its domain is not anormed space) and thus alsobounded(because a continuous linear operator is always abounded linear operator).[6] For any linear map, if it isbounded on a neighborhoodthen it is continuous,[2][7]and if it is continuous then it isbounded.[6]The converse statements are not true in general but they are both true when the linear map's domain is anormed space. Examples and additional details are now given below. The next example shows that it is possible for a linear map to becontinuous(and thus also bounded) but not bounded on any neighborhood. 
In particular, it demonstrates that being "bounded on a neighborhood" is not always synonymous with being "bounded".

Example: a continuous and bounded linear map that is not bounded on any neighborhood. If Id : X → X is the identity map on some locally convex topological vector space then this linear map is always continuous (indeed, even a TVS-isomorphism) and bounded, but Id is bounded on a neighborhood if and only if there exists a bounded neighborhood of the origin in X, which is equivalent to X being a seminormable space (which, if X is Hausdorff, is the same as being a normable space). This shows that it is possible for a linear map to be continuous but not bounded on any neighborhood. Indeed, this example shows that every locally convex space that is not seminormable has a linear TVS-automorphism that is not bounded on any neighborhood of any point. Thus although every linear map that is bounded on a neighborhood is necessarily continuous, the converse is not guaranteed in general.

To summarize the discussion below: for a linear map on a normed (or seminormed) space, being continuous, being bounded, and being bounded on a neighborhood are all equivalent. A linear map whose domain or codomain is normable (or seminormable) is continuous if and only if it is bounded on a neighborhood. And a bounded linear operator valued in a locally convex space will be continuous if its domain is (pseudo)metrizable[2] or bornological.[6]

Guaranteeing that "continuous" implies "bounded on a neighborhood"

A TVS is said to be locally bounded if there exists a neighborhood that is also a bounded set.[8] For example, every normed or seminormed space is a locally bounded TVS since the unit ball centered at the origin is a bounded neighborhood of the origin.
If B is a bounded neighborhood of the origin in a (locally bounded) TVS then its image under any continuous linear map will be a bounded set (so this map is thus bounded on this neighborhood B). Consequently, a linear map from a locally bounded TVS into any other TVS is continuous if and only if it is bounded on a neighborhood. Moreover, any TVS with this property must be a locally bounded TVS. Explicitly, if X is a TVS such that every continuous linear map (into any TVS) whose domain is X is necessarily bounded on a neighborhood, then X must be a locally bounded TVS (because the identity function X → X is always a continuous linear map). Any linear map from a TVS into a locally bounded TVS (such as any linear functional) is continuous if and only if it is bounded on a neighborhood.[8] Conversely, if Y is a TVS such that every continuous linear map (from any TVS) with codomain Y is necessarily bounded on a neighborhood, then Y must be a locally bounded TVS.[8] In particular, a linear functional on an arbitrary TVS is continuous if and only if it is bounded on a neighborhood.[8] Thus when the domain or the codomain of a linear map is normable or seminormable, continuity is equivalent to being bounded on a neighborhood.

Guaranteeing that "bounded" implies "continuous"

A continuous linear operator is always a bounded linear operator.[6] But importantly, in the most general setting of a linear operator between arbitrary topological vector spaces, it is possible for a linear operator to be bounded but not continuous.
A linear map whose domain ispseudometrizable(such as anynormed space) isboundedif and only if it is continuous.[2]The same is true of a linear map from abornological spaceinto alocally convex space.[6] Guaranteeing that "bounded" implies "bounded on a neighborhood" In general, without additional information about either the linear map or its domain or codomain, the map being "bounded" is not equivalent to it being "bounded on a neighborhood". IfF:X→Y{\displaystyle F:X\to Y}is a bounded linear operator from anormed spaceX{\displaystyle X}into some TVS thenF:X→Y{\displaystyle F:X\to Y}is necessarily continuous; this is because any open ballB{\displaystyle B}centered at the origin inX{\displaystyle X}is both a bounded subset (which implies thatF(B){\displaystyle F(B)}is bounded sinceF{\displaystyle F}is a bounded linear map) and a neighborhood of the origin inX,{\displaystyle X,}so thatF{\displaystyle F}is thus bounded on this neighborhoodB{\displaystyle B}of the origin, which (as mentioned above) guarantees continuity. Every linear functional on atopological vector space(TVS) is a linear operator so all of the properties described above for continuous linear operators apply to them. However, because of their specialized nature, we can say even more about continuous linear functionals than we can about more general continuous linear operators. 
LetX{\displaystyle X}be atopological vector space(TVS) over the fieldF{\displaystyle \mathbb {F} }(X{\displaystyle X}need not beHausdorfforlocally convex) and letf:X→F{\displaystyle f:X\to \mathbb {F} }be alinear functionalonX.{\displaystyle X.}The following are equivalent:[1] IfX{\displaystyle X}andY{\displaystyle Y}are complex vector spaces then this list may be extended to include: If the domainX{\displaystyle X}is asequential spacethen this list may be extended to include: If the domainX{\displaystyle X}ismetrizable or pseudometrizable(for example, aFréchet spaceor anormed space) then this list may be extended to include: If the domainX{\displaystyle X}is abornological space(for example, apseudometrizable TVS) andY{\displaystyle Y}islocally convexthen this list may be extended to include: and if in additionX{\displaystyle X}is a vector space over thereal numbers(which in particular, implies thatf{\displaystyle f}is real-valued) then this list may be extended to include: IfX{\displaystyle X}is complex then either all three off,{\displaystyle f,}Re⁡f,{\displaystyle \operatorname {Re} f,}andIm⁡f{\displaystyle \operatorname {Im} f}arecontinuous(respectively,bounded), or else all three arediscontinuous(respectively, unbounded). Every linear map whose domain is a finite-dimensional Hausdorfftopological vector space(TVS) is continuous. This is not true if the finite-dimensional TVS is not Hausdorff. Every (constant) mapX→Y{\displaystyle X\to Y}between TVS that is identically equal to zero is a linear map that is continuous, bounded, and bounded on the neighborhoodX{\displaystyle X}of the origin. In particular, every TVS has a non-emptycontinuous dual space(although it is possible for the constant zero map to be its only continuous linear functional). SupposeX{\displaystyle X}is any Hausdorff TVS. 
Theneverylinear functionalonX{\displaystyle X}is necessarily continuous if and only if every vector subspace ofX{\displaystyle X}is closed.[12]Every linear functional onX{\displaystyle X}is necessarily a bounded linear functional if and only if everybounded subsetofX{\displaystyle X}is contained in a finite-dimensional vector subspace.[13] Alocally convexmetrizable topological vector spaceisnormableif and only if every bounded linear functional on it is continuous. A continuous linear operator mapsbounded setsinto bounded sets. The proof uses the facts that the translation of an open set in a linear topological space is again an open set, and the equalityF−1(D)+x=F−1(D+F(x)){\displaystyle F^{-1}(D)+x=F^{-1}(D+F(x))}for any subsetD{\displaystyle D}ofY{\displaystyle Y}and anyx∈X,{\displaystyle x\in X,}which is true due to theadditivityofF.{\displaystyle F.} IfX{\displaystyle X}is a complexnormed spaceandf{\displaystyle f}is a linear functional onX,{\displaystyle X,}then‖f‖=‖Re⁡f‖{\displaystyle \|f\|=\|\operatorname {Re} f\|}[14](where in particular, one side is infinite if and only if the other side is infinite). 
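The equality ‖f‖ = ‖Re f‖ can be made concrete on X = ℂⁿ with the Euclidean norm: multiplying a unit vector x by a suitable phase turns f(x) into |f(x)| without changing ‖x‖, so the suprema of |f| and of Re f over the unit sphere coincide. A randomized sketch (the functional f(x) = Σ aₖxₖ and the sample size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # f(x) = sum(a_k x_k)

# Random points on the complex unit sphere of C^n.
X = rng.standard_normal((5_000, n)) + 1j * rng.standard_normal((5_000, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)

vals = X @ a
phases = np.exp(-1j * np.angle(vals))     # unimodular corrections
rotated = X * phases[:, None]             # still unit vectors

# After the phase rotation, f takes the real value |f(x)| at each sample,
# so sup Re f and sup |f| agree on the samples.
assert np.allclose((rotated @ a).imag, 0, atol=1e-9)
assert abs(np.real(rotated @ a).max() - np.abs(vals).max()) < 1e-9
```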
Every non-trivial continuous linear functional on a TVSX{\displaystyle X}is anopen map.[1]Iff{\displaystyle f}is a linear functional on a real vector spaceX{\displaystyle X}and ifp{\displaystyle p}is a seminorm onX,{\displaystyle X,}then|f|≤p{\displaystyle |f|\leq p}if and only iff≤p.{\displaystyle f\leq p.}[1] Iff:X→F{\displaystyle f:X\to \mathbb {F} }is a linear functional andU⊆X{\displaystyle U\subseteq X}is a non-empty subset, then by defining the setsf(U):={f(u):u∈U}and|f(U)|:={|f(u)|:u∈U},{\displaystyle f(U):=\{f(u):u\in U\}\quad {\text{ and }}\quad |f(U)|:=\{|f(u)|:u\in U\},}the supremumsupu∈U|f(u)|{\displaystyle \,\sup _{u\in U}|f(u)|\,}can be written more succinctly assup|f(U)|{\displaystyle \,\sup |f(U)|\,}becausesup|f(U)|=sup{|f(u)|:u∈U}=supu∈U|f(u)|.{\displaystyle \sup |f(U)|~=~\sup\{|f(u)|:u\in U\}~=~\sup _{u\in U}|f(u)|.}Ifs{\displaystyle s}is a scalar thensup|f(sU)|=|s|sup|f(U)|{\displaystyle \sup |f(sU)|~=~|s|\sup |f(U)|}so that ifr>0{\displaystyle r>0}is a real number andB≤r:={c∈F:|c|≤r}{\displaystyle B_{\leq r}:=\{c\in \mathbb {F} :|c|\leq r\}}is the closed ball of radiusr{\displaystyle r}centered at the origin then the following are equivalent:
https://en.wikipedia.org/wiki/Continuous_linear_operator
Inoperator theory, abounded operatorT:X→Ybetweennormed vector spacesXandYis said to be acontractionif itsoperator norm||T|| ≤ 1. This notion is a special case of the concept of acontraction mapping, but every bounded operator becomes a contraction after suitable scaling. The analysis of contractions provides insight into the structure of operators, or a family of operators. The theory of contractions onHilbert spaceis largely due toBéla Szőkefalvi-NagyandCiprian Foias. IfTis a contraction acting on aHilbert spaceH{\displaystyle {\mathcal {H}}}, the following basic objects associated withTcan be defined. Thedefect operatorsofTare the operatorsDT= (1 −T*T)½andDT*= (1 −TT*)½. The square root is thepositive semidefinite onegiven by thespectral theorem. Thedefect spacesDT{\displaystyle {\mathcal {D}}_{T}}andDT∗{\displaystyle {\mathcal {D}}_{T*}}are the closure of the ranges Ran(DT) and Ran(DT*) respectively. The positive operatorDTinduces an inner product onH{\displaystyle {\mathcal {H}}}. The inner product space can be identified naturally with Ran(DT). A similar statement holds forDT∗{\displaystyle {\mathcal {D}}_{T*}}. Thedefect indicesofTare the pair The defect operators and the defect indices are a measure of the non-unitarity ofT. A contractionTon a Hilbert space can be canonically decomposed into an orthogonal direct sum whereUis a unitary operator and Γ iscompletely non-unitaryin the sense that it has no non-zeroreducing subspaceson which its restriction is unitary. IfU= 0,Tis said to be acompletely non-unitary contraction. A special case of this decomposition is theWold decompositionfor anisometry, where Γ is a proper isometry. Contractions on Hilbert spaces can be viewed as the operator analogs of cos θ and are calledoperator anglesin some contexts. The explicit description of contractions leads to (operator-)parametrizations of positive and unitary matrices. 
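For a matrix contraction the defect operators D_T = (1 − T*T)^{1/2} and D_{T*} = (1 − TT*)^{1/2} can be computed directly; the helper `psd_sqrt` and the specific random matrix below are illustrative choices, not part of the text. The check uses the classical intertwining relation T D_T = D_{T*} T.

```python
import numpy as np

def psd_sqrt(A):
    """Positive semidefinite square root via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3))
T = M / (2 * np.linalg.norm(M, 2))        # ‖T‖ = 1/2 < 1, so T is a contraction

I = np.eye(3)
D_T = psd_sqrt(I - T.conj().T @ T)        # defect operator of T
D_Ts = psd_sqrt(I - T @ T.conj().T)       # defect operator of T*

assert np.allclose(T @ D_T, D_Ts @ T, atol=1e-10)   # T D_T = D_{T*} T
# A strict contraction has invertible defect operators (full defect indices).
assert np.linalg.matrix_rank(D_T) == 3
```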
Sz.-Nagy's dilation theorem, proved in 1953, states that for any contractionTon a Hilbert spaceH, there is aunitary operatorUon a larger Hilbert spaceK⊇Hsuch that ifPis the orthogonal projection ofKontoHthenTn=PUnPfor alln> 0. The operatorUis called adilationofTand is uniquely determined ifUis minimal, i.e.Kis the smallest closed subspace invariant underUandU* containingH. In fact define[1] the orthogonal direct sum of countably many copies ofH. LetVbe the isometry onH{\displaystyle {\mathcal {H}}}defined by Let Define a unitaryWonK{\displaystyle {\mathcal {K}}}by Wis then a unitary dilation ofTwithHconsidered as the first component ofH⊂K{\displaystyle {\mathcal {H}}\subset {\mathcal {K}}}. The minimal dilationUis obtained by taking the restriction ofWto the closed subspace generated by powers ofWapplied toH. There is an alternative proof of Sz.-Nagy's dilation theorem, which allows significant generalization.[2] LetGbe a group,U(g) a unitary representation ofGon a Hilbert spaceKandPan orthogonal projection onto a closed subspaceH=PKofK. The operator-valued function with values in operators onKsatisfies the positive-definiteness condition where Moreover, Conversely, every operator-valued positive-definite function arises in this way. Recall that every (continuous) scalar-valued positive-definite function on a topological group induces an inner product and group representation φ(g) = 〈Ugv,v〉 whereUgis a (strongly continuous) unitary representation (seeBochner's theorem). Replacingv, a rank-1 projection, by a general projection gives the operator-valued statement. In fact the construction is identical; this is sketched below. 
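Sz.-Nagy's minimal dilation requires the infinite construction above to handle all powers n. As a finite-dimensional illustration (an addition, not from the text), the 2×2 operator block U = [[T, D_{T*}], [D_T, −T*]], often attributed to Halmos, is already unitary and compresses to T, i.e. it gives Tⁿ = PUⁿP for the single power n = 1.

```python
import numpy as np

def psd_sqrt(A):
    """Positive semidefinite square root via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

rng = np.random.default_rng(4)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
T = M / (2 * np.linalg.norm(M, 2))            # a strict contraction on C^2

I = np.eye(2)
D_T = psd_sqrt(I - T.conj().T @ T)
D_Ts = psd_sqrt(I - T @ T.conj().T)
U = np.block([[T, D_Ts], [D_T, -T.conj().T]])  # one-step unitary dilation

assert np.allclose(U.conj().T @ U, np.eye(4), atol=1e-10)   # U is unitary
assert np.allclose(U[:2, :2], T)                            # P U P restricted to H is T
```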
Let 𝓗 be the space of functions on G of finite support with values in H, with inner product ⟨f₁, f₂⟩ = Σ_{g,h∈G} ⟨Φ(h⁻¹g) f₁(g), f₂(h)⟩. G acts unitarily on 𝓗 by (U(g)f)(x) = f(g⁻¹x). Moreover, H can be identified with a closed subspace of 𝓗 using the isometric embedding sending v in H to f_v with f_v(g) = δ_{g,1} v. If P is the projection of 𝓗 onto H, then Φ(g) = PU(g)P, using the above identification. When G is a separable topological group, Φ is continuous in the strong (or weak) operator topology if and only if U is. In this case functions supported on a countable dense subgroup of G are dense in 𝓗, so that 𝓗 is separable. When G = Z any contraction operator T defines such a function Φ through Φ(0) = I, Φ(n) = Tⁿ and Φ(−n) = (T*)ⁿ for n > 0. The above construction then yields a minimal unitary dilation. The same method can be applied to prove a second dilation theorem of Sz.-Nagy for a one-parameter strongly continuous contraction semigroup T(t) (t ≥ 0) on a Hilbert space H. Cooper (1947) had previously proved the result for one-parameter semigroups of isometries.[3] The theorem states that there is a larger Hilbert space K containing H and a unitary representation U(t) of R such that T(t) = PU(t)P for t ≥ 0, and the translates U(t)H generate K. In fact T(t) defines a continuous operator-valued positive-definite function Φ on R through Φ(0) = I, Φ(t) = T(t) and Φ(−t) = T(t)* for t > 0. Φ is positive-definite on cyclic subgroups of R, by the argument for Z, and hence on R itself by continuity. The previous construction yields a minimal unitary representation U(t) and projection P. The Hille–Yosida theorem assigns a closed unbounded operator A to every contractive one-parameter semigroup T(t) through Aξ = lim_{t→0⁺} (1/t)(T(t)ξ − ξ), where the domain of A consists of all ξ for which this limit exists. A is called the generator of the semigroup and satisfies (d/dt)T(t)ξ = A T(t)ξ on its domain. When A is a self-adjoint operator in the sense of the spectral theorem, T(t) = e^{tA}, and this notation is used more generally in semigroup theory.
The cogenerator of the semigroup is the contraction defined by T = (A + I)(A − I)⁻¹. A can be recovered from T using the formula A = (T + I)(T − I)⁻¹. In particular a dilation of T on K ⊇ H immediately gives a dilation of the semigroup.[4] Let T be a totally non-unitary contraction on H. Then the minimal unitary dilation U of T on K ⊇ H is unitarily equivalent to a direct sum of copies of the bilateral shift operator, i.e. multiplication by z on L²(S¹).[5] If P is the orthogonal projection onto H then for f in L∞ = L∞(S¹) the operator f(T) can be defined by f(T) = P f(U) P. Let H∞ be the space of bounded holomorphic functions on the unit disk D. Any such function has boundary values in L∞ and is uniquely determined by these, so that there is an embedding H∞ ⊂ L∞. For f in H∞, f(T) can be defined without reference to the unitary dilation. In fact if f(z) = Σ_{n≥0} aₙzⁿ for |z| < 1, then for r < 1 the function f_r(z) = f(rz) is holomorphic on |z| < 1/r. In that case f_r(T) is defined by the holomorphic functional calculus and f(T) can be defined by f(T) = lim_{r→1} f_r(T). The map sending f to f(T) defines an algebra homomorphism of H∞ into bounded operators on H. This map has the following continuity property: if a uniformly bounded sequence fₙ tends almost everywhere to f, then fₙ(T) tends to f(T) in the strong operator topology. For t ≥ 0, let e_t be the inner function e_t(z) = exp(t(z + 1)/(z − 1)). If T is the cogenerator of a one-parameter semigroup of completely non-unitary contractions T(t), then T(t) = e_t(T) for t ≥ 0. A completely non-unitary contraction T is said to belong to the class C₀ if and only if f(T) = 0 for some non-zero f in H∞. In this case the set of such f forms an ideal in H∞. It has the form φ ⋅ H∞ where φ is an inner function, i.e. such that |φ| = 1 on S¹: φ is uniquely determined up to multiplication by a complex number of modulus 1 and is called the minimal function of T. It has properties analogous to the minimal polynomial of a matrix. The minimal function φ admits a canonical factorization φ(z) = c B(z) e^{−P(z)}, where |c| = 1, B(z) is a Blaschke product with zeros λᵢ satisfying Σᵢ (1 − |λᵢ|) < ∞, and P(z) is holomorphic with non-negative real part in D.
By the Herglotz representation theorem, P(z) = ∫ (e^{it} + z)/(e^{it} − z) dμ(t) for some non-negative finite measure μ on the circle; in this case, if non-zero, μ must be singular with respect to Lebesgue measure. In the above decomposition of φ, either of the two factors can be absent. The minimal function φ determines the spectrum of T. Within the unit disk, the spectral values are the zeros of φ. There are at most countably many such λᵢ, all eigenvalues of T, the zeros of B(z). A point of the unit circle does not lie in the spectrum of T if and only if φ has a holomorphic continuation to a neighborhood of that point. φ reduces to a Blaschke product exactly when H equals the closure of the direct sum (not necessarily orthogonal) of the generalized eigenspaces.[6] Two contractions T₁ and T₂ are said to be quasi-similar when there are bounded operators A, B with trivial kernel and dense range such that AT₁ = T₂A and BT₂ = T₁B. The following properties of a contraction T are preserved under quasi-similarity: Two quasi-similar C₀ contractions have the same minimal function and hence the same spectrum. The classification theorem for C₀ contractions states that two multiplicity-free C₀ contractions are quasi-similar if and only if they have the same minimal function (up to a scalar multiple).[7] A model for multiplicity-free C₀ contractions with minimal function φ is given by taking H = H² ⊖ φH², where H² is the Hardy space of the circle, and letting T be multiplication by z.[8] Such operators are called Jordan blocks and denoted S(φ). As a generalization of Beurling's theorem, the commutant of such an operator consists exactly of operators ψ(T) with ψ in H∞, i.e. multiplication operators on H² corresponding to functions in H∞. A C₀ contraction operator T is multiplicity-free if and only if it is quasi-similar to a Jordan block (necessarily the one corresponding to its minimal function). Examples: if S is defined on an orthonormal basis (eᵢ) by Seᵢ = λᵢeᵢ, with the λᵢ's distinct, of modulus less than 1, and such that Σᵢ (1 − |λᵢ|) < ∞, then S, and hence any contraction T quasi-similar to S, is C₀ and multiplicity-free.
Hence H is the closure of the direct sum of the λᵢ-eigenspaces of T, each having multiplicity one. This can also be seen directly using the definition of quasi-similarity. Classification theorem for C₀ contractions: every C₀ contraction is canonically quasi-similar to a direct sum of Jordan blocks. In fact every C₀ contraction is quasi-similar to a unique operator of the form S(φ₁) ⊕ S(φ₂) ⊕ S(φ₃) ⊕ ⋯, where the φₙ are uniquely determined inner functions with each φ_{n+1} dividing φₙ, and with φ₁ the minimal function of S and hence of T.[10]
https://en.wikipedia.org/wiki/Contraction_(operator_theory)
In mathematics, a normed vector space or normed space is a vector space over the real or complex numbers on which a norm is defined.[1] A norm is a generalization of the intuitive notion of "length" in the physical world. If V is a vector space over K, where K is a field equal to ℝ or to ℂ, then a norm on V is a map V → ℝ, typically denoted by ‖⋅‖, satisfying the following four axioms: If V is a real or complex vector space as above, and ‖⋅‖ is a norm on V, then the ordered pair (V, ‖⋅‖) is called a normed vector space. If it is clear from context which norm is intended, then it is common to denote the normed vector space simply by V. A norm induces a distance, called its (norm) induced metric, by the formula d(x, y) = ‖y − x‖, which makes any normed vector space into a metric space and a topological vector space. If this metric space is complete then the normed space is a Banach space. Every normed vector space can be "uniquely extended" to a Banach space, which makes normed spaces intimately related to Banach spaces. Every Banach space is a normed space, but the converse is not true. For example, the set of the finite sequences of real numbers can be normed with the Euclidean norm, but it is not complete for this norm. An inner product space is a normed vector space whose norm is the square root of the inner product of a vector and itself. The Euclidean norm of a Euclidean vector space is a special case that allows defining Euclidean distance by the formula d(A, B) = ‖AB→‖. The study of normed spaces and Banach spaces is a fundamental part of functional analysis, a major subfield of mathematics.
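The norm axioms and the induced metric d(x, y) = ‖y − x‖ can be spot-checked numerically for the Euclidean norm on ℝⁿ; this is only a sanity-check sketch on random vectors.

```python
import numpy as np

rng = np.random.default_rng(5)
x, y = rng.standard_normal(4), rng.standard_normal(4)
norm = np.linalg.norm

assert norm(x + y) <= norm(x) + norm(y) + 1e-12      # triangle inequality
assert np.isclose(norm(-2.5 * x), 2.5 * norm(x))     # absolute homogeneity
d = lambda u, v: norm(v - u)                         # induced metric
assert np.isclose(d(x, y), d(y, x)) and d(x, x) == 0.0
```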
Anormed vector spaceis avector spaceequipped with anorm. Aseminormed vector spaceis a vector space equipped with aseminorm. A usefulvariation of the triangle inequalityis‖x−y‖≥|‖x‖−‖y‖|{\displaystyle \|x-y\|\geq |\|x\|-\|y\||}for any vectorsx{\displaystyle x}andy.{\displaystyle y.} This also shows that a vector norm is a (uniformly)continuous function. Property 3 depends on a choice of norm|α|{\displaystyle |\alpha |}on the field of scalars. When the scalar field isR{\displaystyle \mathbb {R} }(or more generally a subset ofC{\displaystyle \mathbb {C} }), this is usually taken to be the ordinaryabsolute value, but other choices are possible. For example, for a vector space overQ{\displaystyle \mathbb {Q} }one could take|α|{\displaystyle |\alpha |}to be thep{\displaystyle p}-adic absolute value. If(V,‖⋅‖){\displaystyle (V,\|\,\cdot \,\|)}is a normed vector space, the norm‖⋅‖{\displaystyle \|\,\cdot \,\|}induces ametric(a notion ofdistance) and therefore atopologyonV.{\displaystyle V.}This metric is defined in the natural way: the distance between two vectorsu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }is given by‖u−v‖.{\displaystyle \|\mathbf {u} -\mathbf {v} \|.}This topology is precisely the weakest topology which makes‖⋅‖{\displaystyle \|\,\cdot \,\|}continuous and which is compatible with the linear structure ofV{\displaystyle V}in the following sense: Similarly, for any seminormed vector space we can define the distance between two vectorsu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }as‖u−v‖.{\displaystyle \|\mathbf {u} -\mathbf {v} \|.}This turns the seminormed space into apseudometric space(notice this is weaker than a metric) and allows the definition of notions such ascontinuityandconvergence. To put it more abstractly every seminormed vector space is atopological vector spaceand thus carries atopological structurewhich is induced by the semi-norm. Of special interest arecompletenormed spaces, which are known asBanach spaces. 
Every normed vector space V sits as a dense subspace inside some Banach space; this Banach space is essentially uniquely determined by V and is called the completion of V.

Two norms on the same vector space are called equivalent if they define the same topology. On a finite-dimensional vector space (but not on infinite-dimensional vector spaces) all norms are equivalent, although the resulting metric spaces need not be the same.[2] Since any Euclidean space is complete, we can thus conclude that all finite-dimensional normed vector spaces are Banach spaces.

A normed vector space V is locally compact if and only if the unit ball B = {x : ‖x‖ ≤ 1} is compact, which is the case if and only if V is finite-dimensional; this is a consequence of Riesz's lemma. (In fact, a more general result is true: a topological vector space is locally compact if and only if it is finite-dimensional. The point here is that the topology is not assumed to come from a norm.)

The topology of a seminormed vector space has many nice properties. Given a neighbourhood system 𝒩(0) around 0, we can construct all other neighbourhood systems as 𝒩(x) = x + 𝒩(0) := {x + N : N ∈ 𝒩(0)} with x + N := {x + n : n ∈ N}. Moreover, there exists a neighbourhood basis for the origin consisting of absorbing and convex sets. As this property is very useful in functional analysis, generalizations of normed vector spaces with this property are studied under the name locally convex spaces.
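Norm equivalence on ℝⁿ can be observed directly: for the ℓ¹, ℓ², and ℓ∞ norms the standard chain ‖v‖∞ ≤ ‖v‖₂ ≤ ‖v‖₁ ≤ n‖v‖∞ holds, which is one way to see that all three induce the same topology. A quick randomized Python check (illustrative only; the constants shown are the standard ones, not the sharpest possible in every direction):

```python
import math
import random

def norm_1(v):   return sum(abs(x) for x in v)
def norm_2(v):   return math.sqrt(sum(x * x for x in v))
def norm_inf(v): return max(abs(x) for x in v)

random.seed(0)
n = 5
for _ in range(1000):
    v = [random.uniform(-10, 10) for _ in range(n)]
    # standard equivalence chain on R^n:
    #   ||v||_inf <= ||v||_2 <= ||v||_1 <= n * ||v||_inf
    assert norm_inf(v) <= norm_2(v) + 1e-12
    assert norm_2(v) <= norm_1(v) + 1e-12
    assert norm_1(v) <= n * norm_inf(v) + 1e-12
```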
A norm (or seminorm) ‖⋅‖ on a topological vector space (X, τ) is continuous if and only if the topology τ_{‖⋅‖} that ‖⋅‖ induces on X is coarser than τ (meaning, τ_{‖⋅‖} ⊆ τ), which happens if and only if there exists some open ball B in (X, ‖⋅‖) (such as {x ∈ X : ‖x‖ < 1}) that is open in (X, τ) (said differently, such that B ∈ τ).

A topological vector space (X, τ) is called normable if there exists a norm ‖⋅‖ on X such that the canonical metric (x, y) ↦ ‖y − x‖ induces the topology τ on X. The following theorem is due to Kolmogorov:[3]

Kolmogorov's normability criterion: A Hausdorff topological vector space is normable if and only if there exists a convex, von Neumann bounded neighborhood of 0 ∈ X.

A product of a family of normable spaces is normable if and only if only finitely many of the spaces are non-trivial (that is, ≠ {0}).[3] Furthermore, the quotient of a normable space X by a closed vector subspace C is normable, and if in addition X's topology is given by a norm ‖⋅‖, then the map X/C → ℝ given by x + C ↦ inf_{c ∈ C} ‖x + c‖ is a well-defined norm on X/C that induces the quotient topology on X/C.[4]

If X is a Hausdorff locally convex topological vector space, then the following are equivalent: Furthermore, X is finite-dimensional if and only if X′_σ is normable
(here X′_σ denotes the continuous dual space X′ endowed with the weak-* topology).

Even if a metrizable topological vector space has a topology that is defined by a family of norms, it may nevertheless fail to be a normable space (meaning that its topology cannot be defined by any single norm). An example of such a space is the Fréchet space C∞(K), whose definition can be found in the article on spaces of test functions and distributions: its topology τ is defined by a countable family of norms, but there does not exist any norm ‖⋅‖ on C∞(K) such that the topology this norm induces is equal to τ. In fact, the topology of a locally convex space X can be defined by a family of norms on X if and only if there exists at least one continuous norm on X.[6]

The most important maps between two normed vector spaces are the continuous linear maps. Together with these maps, normed vector spaces form a category. The norm is a continuous function on its vector space. All linear maps between finite-dimensional vector spaces are also continuous.

An isometry between two normed vector spaces is a linear map f which preserves the norm (meaning ‖f(v)‖ = ‖v‖ for all vectors v). Isometries are always continuous and injective.
A surjective isometry between the normed vector spaces V and W is called an isometric isomorphism, and V and W are called isometrically isomorphic. Isometrically isomorphic normed vector spaces are identical for all practical purposes.

When speaking of normed vector spaces, we augment the notion of dual space to take the norm into account. The dual V′ of a normed vector space V is the space of all continuous linear maps from V to the base field (the complexes or the reals); such linear maps are called "functionals". The norm of a functional φ is defined as the supremum of |φ(v)| where v ranges over all unit vectors (that is, vectors of norm 1) in V. This turns V′ into a normed vector space. An important theorem about continuous linear functionals on normed vector spaces is the Hahn–Banach theorem.

The definition of many normed spaces (in particular, Banach spaces) involves a seminorm defined on a vector space, and then the normed space is defined as the quotient space by the subspace of elements of seminorm zero. For instance, with the L^p spaces, the function defined by ‖f‖_p = (∫ |f(x)|^p dx)^{1/p} is a seminorm on the vector space of all functions on which the Lebesgue integral on the right hand side is defined and finite. However, the seminorm is equal to zero for any function supported on a set of Lebesgue measure zero. These functions form a subspace which we "quotient out", making them equivalent to the zero function.
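For a concrete finite-dimensional instance of the dual norm, consider the functional φ(v) = Σᵢ vᵢwᵢ on ℝⁿ equipped with the ℓ¹ norm; the supremum of |φ(v)| over the ℓ¹ unit ball is ‖w‖∞ (attained at a standard basis vector). A Python sketch (the function name `functional_norm_on_l1` is ours):

```python
import random

def functional_norm_on_l1(w):
    """Norm of phi(v) = sum v_i w_i as a functional on (R^n, ||.||_1).
    The supremum over the l1 unit ball is attained at a signed standard
    basis vector, giving the l-infinity norm of w."""
    return max(abs(x) for x in w)

w = [2.0, -7.0, 3.0]

# sanity check: |phi(v)| never exceeds the claimed norm on l1-unit vectors
random.seed(1)
for _ in range(200):
    v = [random.uniform(-1, 1) for _ in w]
    s = sum(abs(t) for t in v)
    v = [t / s for t in v]                      # now ||v||_1 == 1
    phi_v = sum(a * b for a, b in zip(v, w))
    assert abs(phi_v) <= functional_norm_on_l1(w) + 1e-12
```

This is the finite-dimensional shadow of the ℓ¹–ℓ∞ duality.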
Given n seminormed spaces (X_i, q_i) with seminorms q_i : X_i → ℝ, denote the product space by X := ∏_{i=1}^n X_i, where vector addition is defined as (x_1, …, x_n) + (y_1, …, y_n) := (x_1 + y_1, …, x_n + y_n) and scalar multiplication is defined as α(x_1, …, x_n) := (αx_1, …, αx_n).

Define a new function q : X → ℝ by q(x_1, …, x_n) := ∑_{i=1}^n q_i(x_i), which is a seminorm on X. The function q is a norm if and only if all q_i are norms. More generally, for each real p ≥ 1 the map q : X → ℝ defined by q(x_1, …, x_n) := (∑_{i=1}^n q_i(x_i)^p)^{1/p} is a seminorm. For each p this defines the same topology on X.

A straightforward argument involving elementary linear algebra shows that the only finite-dimensional seminormed spaces are those arising as the product space of a normed space and a space with trivial seminorm. Consequently, many of the more interesting examples and applications of seminormed spaces occur for infinite-dimensional vector spaces.
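A toy Python sketch of the product seminorm q = q₁ + q₂ (the component seminorms below are our own illustrative choices): q₂ ignores one coordinate and so has a nontrivial kernel, which makes q a seminorm but not a norm, matching the statement above.

```python
# q1 is the absolute value on R (a genuine norm); q2 on R^2 only sees the
# first coordinate, so it is a seminorm with nontrivial kernel.
def q1(x):
    return abs(x)

def q2(v):
    return abs(v[0])

def q(pair):
    # product seminorm: sum of the component seminorms
    x, v = pair
    return q1(x) + q2(v)

# q vanishes on the nonzero vector (0, (0, 5)), so q is not a norm:
assert q((0.0, (0.0, 5.0))) == 0.0

# but it still satisfies the triangle inequality, here on a sample pair:
a, b = (1.0, (2.0, 3.0)), (-4.0, (1.0, -1.0))
s = (a[0] + b[0], (a[1][0] + b[1][0], a[1][1] + b[1][1]))
assert q(s) <= q(a) + q(b)
```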
https://en.wikipedia.org/wiki/Normed_space
Inmathematics,operator theoryis the study oflinear operatorsonfunction spaces, beginning withdifferential operatorsandintegral operators. The operators may be presented abstractly by their characteristics, such asbounded linear operatorsorclosed operators, and consideration may be given tononlinear operators. The study, which depends heavily on thetopologyof function spaces, is a branch offunctional analysis. If a collection of operators forms analgebra over a field, then it is anoperator algebra. The description of operator algebras is part of operator theory. Single operator theory deals with the properties and classification of operators, considered one at a time. For example, the classification ofnormal operatorsin terms of theirspectrafalls into this category. Thespectral theoremis any of a number of results aboutlinear operatorsor aboutmatrices.[1]In broad terms the spectraltheoremprovides conditions under which anoperatoror a matrix can bediagonalized(that is, represented as adiagonal matrixin some basis). This concept of diagonalization is relatively straightforward for operators onfinite-dimensionalspaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class oflinear operatorsthat can be modelled bymultiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement aboutcommutativeC*-algebras. See alsospectral theoryfor a historical perspective. Examples of operators to which the spectral theorem applies areself-adjoint operatorsor more generallynormal operatorsonHilbert spaces. The spectral theorem also provides acanonicaldecomposition, called thespectral decomposition,eigenvalue decomposition, oreigendecomposition, of the underlying vector space on which the operator acts. 
A normal operator on a complex Hilbert space H is a continuous linear operator N : H → H that commutes with its hermitian adjoint N*, that is: NN* = N*N.[2]

Normal operators are important because the spectral theorem holds for them. Today, the class of normal operators is well understood. Examples of normal operators are unitary operators, Hermitian (self-adjoint) operators, and positive operators.

The spectral theorem extends to a more general class of matrices. Let A be an operator on a finite-dimensional inner product space. A is said to be normal if A*A = AA*. One can show that A is normal if and only if it is unitarily diagonalizable: By the Schur decomposition, we have A = UTU*, where U is unitary and T upper triangular. Since A is normal, T*T = TT*. Therefore, T must be diagonal, since normal upper triangular matrices are diagonal. The converse is obvious.

In other words, A is normal if and only if there exists a unitary matrix U such that A = UDU*, where D is a diagonal matrix. Then, the entries of the diagonal of D are the eigenvalues of A. The column vectors of U are the eigenvectors of A and they are orthonormal. Unlike the Hermitian case, the entries of D need not be real.

The polar decomposition of any bounded linear operator A between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator.[3]

The polar decomposition for matrices generalizes as follows: if A is a bounded linear operator then there is a unique factorization of A as a product A = UP where U is a partial isometry, P is a non-negative self-adjoint operator and the initial space of U is the closure of the range of P.
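The finite-dimensional statement can be verified numerically. The sketch below (using NumPy; the particular matrix is our own example) checks that a normal, non-Hermitian matrix is unitarily diagonalizable with non-real eigenvalues:

```python
import numpy as np

# a normal (in fact unitary) matrix that is not Hermitian:
A = np.array([[0, -1],
              [1,  0]], dtype=complex)
assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # A is normal

# For a normal matrix with distinct eigenvalues, the unit eigenvectors
# returned by np.linalg.eig are orthogonal, so U is unitary.
eigvals, U = np.linalg.eig(A)
D = np.diag(eigvals)

assert np.allclose(U @ U.conj().T, np.eye(2))        # U is unitary
assert np.allclose(A, U @ D @ U.conj().T)            # A = U D U*
# the eigenvalues are +-i: normal but not Hermitian, so not real
assert np.allclose(sorted(eigvals, key=lambda z: z.imag), [-1j, 1j])
```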
The operatorUmust be weakened to a partial isometry, rather than unitary, because of the following issues. IfAis theone-sided shiftonl2(N), then |A| = (A*A)1/2=I. So ifA=U|A|,Umust beA, which is not unitary. The existence of a polar decomposition is a consequence ofDouglas' lemma: Lemma—IfA,Bare bounded operators on a Hilbert spaceH, andA*A≤B*B, then there exists a contractionCsuch thatA=CB. Furthermore,Cis unique ifKer(B*) ⊂Ker(C). The operatorCcan be defined byC(Bh) =Ah, extended by continuity to the closure ofRan(B), and by zero on the orthogonal complement ofRan(B). The operatorCis well-defined sinceA*A≤B*BimpliesKer(B) ⊂ Ker(A). The lemma then follows. In particular, ifA*A=B*B, thenCis a partial isometry, which is unique ifKer(B*) ⊂ Ker(C).In general, for any bounded operatorA,A∗A=(A∗A)12(A∗A)12,{\displaystyle A^{*}A=(A^{*}A)^{\frac {1}{2}}(A^{*}A)^{\frac {1}{2}},}where (A*A)1/2is the unique positive square root ofA*Agiven by the usualfunctional calculus. So by the lemma, we haveA=U(A∗A)12{\displaystyle A=U(A^{*}A)^{\frac {1}{2}}}for some partial isometryU, which is unique if Ker(A) ⊂ Ker(U). (NoteKer(A) = Ker(A*A) = Ker(B) = Ker(B*), whereB=B*= (A*A)1/2.) TakePto be (A*A)1/2and one obtains the polar decompositionA=UP. Notice that an analogous argument can be used to showA = P'U', whereP'is positive andU'a partial isometry. WhenHis finite dimensional,Ucan be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version ofsingular value decomposition. By property of thecontinuous functional calculus, |A| is in theC*-algebragenerated byA. A similar but weaker statement holds for the partial isometry: the polar partUis in thevon Neumann algebragenerated byA. IfAis invertible,Uwill be in theC*-algebragenerated byAas well. 
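For matrices, the polar decomposition can be computed from the singular value decomposition: A = WΣV* gives U = WV* and P = VΣV* = (A*A)^{1/2}. A NumPy sketch (the helper name `polar` is ours; the example matrix is invertible, so U comes out genuinely unitary):

```python
import numpy as np

def polar(A):
    """Polar decomposition A = U P via the SVD, for a square matrix A."""
    W, s, Vh = np.linalg.svd(A)
    U = W @ Vh                            # unitary factor
    P = Vh.conj().T @ np.diag(s) @ Vh     # P = (A* A)^{1/2}, positive
    return U, P

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
U, P = polar(A)

assert np.allclose(A, U @ P)                      # A = U P
assert np.allclose(U @ U.conj().T, np.eye(2))     # U unitary
assert np.allclose(P, P.conj().T)                 # P self-adjoint
assert all(np.linalg.eigvalsh(P) >= -1e-12)       # P non-negative
```

For a singular A the same formulas still give A = UP, but U is then only determined up to its action off the closure of the range of P, matching the partial-isometry statement above.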
Many operators that are studied are operators on Hilbert spaces ofholomorphic functions, and the study of the operator is intimately linked to questions in function theory. For example,Beurling's theoremdescribes theinvariant subspacesof the unilateral shift in terms of inner functions, which are bounded holomorphic functions on the unit disk with unimodular boundary values almost everywhere on the circle. Beurling interpreted the unilateral shift as multiplication by the independent variable on theHardy space.[4]The success in studying multiplication operators, and more generallyToeplitz operators(which are multiplication, followed by projection onto the Hardy space) has inspired the study of similar questions on other spaces, such as theBergman space. The theory ofoperator algebrasbringsalgebrasof operators such asC*-algebrasto the fore. A C*-algebra,A, is aBanach algebraover the field ofcomplex numbers, together with amap* :A→A. One writesx*for the image of an elementxofA. The map * has the following properties:[5] Remark.The first three identities say thatAis a*-algebra. The last identity is called theC* identityand is equivalent to:‖xx∗‖=‖x‖2,{\displaystyle \|xx^{*}\|=\|x\|^{2},} The C*-identity is a very strong requirement. For instance, together with thespectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure:‖x‖2=‖x∗x‖=sup{|λ|:x∗x−λ1is not invertible}.{\displaystyle \|x\|^{2}=\|x^{*}x\|=\sup\{|\lambda |:x^{*}x-\lambda \,1{\text{ is not invertible}}\}.}
https://en.wikipedia.org/wiki/Operator_theory
In the mathematical field of functional analysis there are several standard topologies which are given to the algebra B(X) of bounded linear operators on a Banach space X.

Let (T_n)_{n∈ℕ} be a sequence of linear operators on the Banach space X. Consider the statement that (T_n)_{n∈ℕ} converges to some operator T on X. This could have several different meanings:

There are many topologies that can be defined on B(X) besides the ones used above; most are at first only defined when X = H is a Hilbert space, even though in many cases there are appropriate generalisations. The topologies listed below are all locally convex, which implies that they are defined by a family of seminorms.

In analysis, a topology is called strong if it has many open sets and weak if it has few open sets, so that the corresponding modes of convergence are, respectively, strong and weak. (In topology proper, these terms can suggest the opposite meaning, so strong and weak are replaced with, respectively, fine and coarse.) The diagram on the right is a summary of the relations, with the arrows pointing from strong to weak.

If H is a Hilbert space, the space B(H) of bounded operators on H has a (unique) predual B(H)_*, consisting of the trace class operators, whose dual is B(H). The seminorm p_w(x) for w positive in the predual is defined to be B(w, x*x)^{1/2}.

If B is a vector space of linear maps on the vector space A, then σ(A, B) is defined to be the weakest topology on A such that all elements of B are continuous.

The continuous linear functionals on B(H) for the weak, strong, and strong* (operator) topologies are the same, and are the finite linear combinations of the linear functionals (xh_1, h_2) for h_1, h_2 ∈ H. The continuous linear functionals on B(H) for the ultraweak, ultrastrong, ultrastrong* and Arens–Mackey topologies are the same, and are the elements of the predual B(H)_*.
By definition, the continuous linear functionals in the norm topology are the same as those in the weak Banach space topology. This dual is a rather large space with many pathological elements.

On norm bounded sets of B(H), the weak (operator) and ultraweak topologies coincide. This can be seen via, for instance, the Banach–Alaoglu theorem. For essentially the same reason, the ultrastrong topology is the same as the strong topology on any (norm) bounded subset of B(H). The same is true for the Arens–Mackey topology, the ultrastrong*, and the strong* topology.

In locally convex spaces, the closure of convex sets can be characterized by the continuous linear functionals. Therefore, for a convex subset K of B(H), the conditions that K be closed in the ultrastrong*, ultrastrong, and ultraweak topologies are all equivalent, and are also equivalent to the condition that for all r > 0, K has closed intersection with the closed ball of radius r in the strong*, strong, or weak (operator) topologies.

The norm topology is metrizable and the others are not; in fact they fail to be first-countable. However, when H is separable, all the topologies above are metrizable when restricted to the unit ball (or to any norm-bounded subset).

The most commonly used topologies are the norm, strong, and weak operator topologies. The weak operator topology is useful for compactness arguments, because the unit ball is compact by the Banach–Alaoglu theorem. The norm topology is fundamental because it makes B(H) into a Banach space, but it is too strong for many purposes; for example, B(H) is not separable in this topology. The strong operator topology could be the most commonly used.

The ultraweak and ultrastrong topologies are better behaved than the weak and strong operator topologies, but their definitions are more complicated, so they are usually not used unless their better properties are really needed. For example, the dual space of B(H) in the weak or strong operator topology is too small to have much analytic content.
The adjoint map is not continuous in the strong operator and ultrastrong topologies, while the strong* and ultrastrong* topologies are modifications so that the adjoint becomes continuous. They are not used very often. The Arens–Mackey topology and the weak Banach space topology are relatively rarely used. To summarize, the three essential topologies onB(H)are the norm, ultrastrong, and ultraweak topologies. The weak and strong operator topologies are widely used as convenient approximations to the ultraweak and ultrastrong topologies. The other topologies are relatively obscure.
https://en.wikipedia.org/wiki/Topologies_on_the_set_of_operators_on_a_Hilbert_space
In mathematics, more specifically functional analysis and operator theory, the notion of unbounded operator provides an abstract framework for dealing with differential operators, unbounded observables in quantum mechanics, and other cases. The term "unbounded operator" can be misleading, since "unbounded" should here be understood as "not necessarily bounded".

In contrast to bounded operators, unbounded operators on a given space do not form an algebra, nor even a linear space, because each one is defined on its own domain. The term "operator" often means "bounded linear operator", but in the context of this article it means "unbounded operator", with the reservations made above.

The theory of unbounded operators developed in the late 1920s and early 1930s as part of developing a rigorous mathematical framework for quantum mechanics.[1] The theory's development is due to John von Neumann[2] and Marshall Stone.[3] Von Neumann introduced using graphs to analyze unbounded operators in 1932.[4]

Let X, Y be Banach spaces. An unbounded operator (or simply operator) T : D(T) → Y is a linear map T from a linear subspace D(T) ⊆ X, the domain of T, to the space Y.[5] Contrary to the usual convention, T may not be defined on the whole space X.

An operator T is said to be closed if its graph Γ(T) is a closed set.[6] (Here, the graph Γ(T) is a linear subspace of the direct sum X ⊕ Y, defined as the set of all pairs (x, Tx), where x runs over the domain of T.) Explicitly, this means that for every sequence {x_n} of points from the domain of T such that x_n → x and Tx_n → y, it holds that x belongs to the domain of T and Tx = y.[6] The closedness can also be formulated in terms of the graph norm: an operator T is closed if and only if its domain D(T) is a complete space with respect to the norm ‖x‖_T = (‖x‖² + ‖Tx‖²)^{1/2}.[7]

An operator T is said to be densely defined if its domain is dense in X.[5] This also includes operators defined on the entire space X, since the whole space is dense in itself. The denseness of the domain is necessary and sufficient for the existence of the adjoint (if X and Y are Hilbert spaces) and the transpose; see the sections below.
If T : D(T) → Y is closed, densely defined and continuous on its domain, then its domain is all of X.[nb 1]

A densely defined symmetric operator T on a Hilbert space H is called bounded from below if T + a is a positive operator for some real number a. That is, ⟨Tx|x⟩ ≥ −a‖x‖² for all x in the domain of T (or alternatively ⟨Tx|x⟩ ≥ a‖x‖² since a is arbitrary).[8] If both T and −T are bounded from below then T is bounded.[8]

Let C([0, 1]) denote the space of continuous functions on the unit interval, and let C¹([0, 1]) denote the space of continuously differentiable functions. We equip C([0, 1]) with the supremum norm ‖⋅‖∞, making it a Banach space. Define the classical differentiation operator d/dx : C¹([0, 1]) → C([0, 1]) by the usual formula (d/dx f)(x) = f′(x) for x ∈ [0, 1]. Every differentiable function is continuous, so C¹([0, 1]) ⊆ C([0, 1]). We claim that d/dx : C([0, 1]) → C([0, 1]) is a well-defined unbounded operator, with domain C¹([0, 1]). For this, we need to show that d/dx is linear and then exhibit some {f_n}_n ⊂ C¹([0, 1]) such that ‖f_n‖∞ = 1 and sup_n ‖(d/dx) f_n‖∞ = +∞.

The operator is linear, since a linear combination af + bg of two continuously differentiable functions f, g is also continuously differentiable, and (d/dx)(af + bg) = a(d/dx)f + b(d/dx)g.

The operator is not bounded. For example, the functions f_n(x) = sin(2πnx) satisfy ‖f_n‖∞ = 1, but ‖(d/dx)f_n‖∞ = 2πn → ∞ as n → ∞.

The operator is densely defined, and closed.

The same operator can be treated as an operator Z → Z for many choices of Banach space Z and not be bounded between any of them. At the same time, it can be bounded as an operator X → Y for other pairs of Banach spaces X, Y, and also as an operator Z → Z for some topological vector spaces Z. As an example, let I ⊂ ℝ be an open interval and consider the operator d/dx on suitable spaces of functions on I.

The adjoint of an unbounded operator can be defined in two equivalent ways.
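The unboundedness of d/dx can be checked numerically with the standard witnesses f_n(x) = sin(2πnx), whose sup norm is 1 while their derivatives have sup norm 2πn. A sampling-based Python sketch (illustrative only; the sup norm is approximated on a grid):

```python
import math

def sup_norm(f, samples=10001):
    """Approximate sup norm of f on [0, 1] by sampling a uniform grid."""
    return max(abs(f(i / (samples - 1))) for i in range(samples))

for n in (1, 10, 100):
    f  = lambda x, n=n: math.sin(2 * math.pi * n * x)          # ||f_n|| = 1
    df = lambda x, n=n: 2 * math.pi * n * math.cos(2 * math.pi * n * x)

    # the functions stay bounded by 1 ...
    assert abs(sup_norm(f) - 1.0) < 1e-3
    # ... while their derivatives grow without bound, like 2*pi*n
    assert abs(sup_norm(df) - 2 * math.pi * n) < 1e-2 * n
```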
LetT:D(T)⊆H1→H2{\displaystyle T:D(T)\subseteq H_{1}\to H_{2}}be an unbounded operator between Hilbert spaces. First, it can be defined in a way analogous to how one defines the adjoint of a bounded operator. Namely, the adjointT∗:D(T∗)⊆H2→H1{\displaystyle T^{*}:D\left(T^{*}\right)\subseteq H_{2}\to H_{1}}ofTis defined as an operator with the property:⟨Tx∣y⟩2=⟨x∣T∗y⟩1,x∈D(T).{\displaystyle \langle Tx\mid y\rangle _{2}=\left\langle x\mid T^{*}y\right\rangle _{1},\qquad x\in D(T).}More precisely,T∗y{\displaystyle T^{*}y}is defined in the following way. Ify∈H2{\displaystyle y\in H_{2}}is such thatx↦⟨Tx∣y⟩{\displaystyle x\mapsto \langle Tx\mid y\rangle }is a continuous linear functional on the domain ofT, theny{\displaystyle y}is declared to be an element ofD(T∗),{\displaystyle D\left(T^{*}\right),}and after extending the linear functional to the whole space via theHahn–Banach theorem, it is possible to find somez{\displaystyle z}inH1{\displaystyle H_{1}}such that⟨Tx∣y⟩2=⟨x∣z⟩1,x∈D(T),{\displaystyle \langle Tx\mid y\rangle _{2}=\langle x\mid z\rangle _{1},\qquad x\in D(T),}sinceRiesz representation theoremallows the continuous dual of the Hilbert spaceH1{\displaystyle H_{1}}to be identified with the set of linear functionals given by the inner product. This vectorz{\displaystyle z}is uniquely determined byy{\displaystyle y}if and only if the linear functionalx↦⟨Tx∣y⟩{\displaystyle x\mapsto \langle Tx\mid y\rangle }is densely defined; or equivalently, ifTis densely defined. Finally, lettingT∗y=z{\displaystyle T^{*}y=z}completes the construction ofT∗,{\displaystyle T^{*},}which is necessarily a linear map. The adjointT∗y{\displaystyle T^{*}y}exists if and only ifTis densely defined. By definition, the domain ofT∗{\displaystyle T^{*}}consists of elementsy{\displaystyle y}inH2{\displaystyle H_{2}}such thatx↦⟨Tx∣y⟩{\displaystyle x\mapsto \langle Tx\mid y\rangle }is continuous on the domain ofT. 
Consequently, the domain ofT∗{\displaystyle T^{*}}could be anything; it could be trivial (that is, contains only zero).[9]It may happen that the domain ofT∗{\displaystyle T^{*}}is a closedhyperplaneandT∗{\displaystyle T^{*}}vanishes everywhere on the domain.[10][11]Thus, boundedness ofT∗{\displaystyle T^{*}}on its domain does not imply boundedness ofT. On the other hand, ifT∗{\displaystyle T^{*}}is defined on the whole space thenTis bounded on its domain and therefore can be extended by continuity to a bounded operator on the whole space.[nb 2]If the domain ofT∗{\displaystyle T^{*}}is dense, then it has its adjointT∗∗.{\displaystyle T^{**}.}[12]A closed densely defined operatorTis bounded if and only ifT∗{\displaystyle T^{*}}is bounded.[nb 3] The other equivalent definition of the adjoint can be obtained by noticing a general fact. Define a linear operatorJ{\displaystyle J}as follows:[12]{J:H1⊕H2→H2⊕H1J(x⊕y)=−y⊕x{\displaystyle {\begin{cases}J:H_{1}\oplus H_{2}\to H_{2}\oplus H_{1}\\J(x\oplus y)=-y\oplus x\end{cases}}}SinceJ{\displaystyle J}is an isometric surjection, it is unitary. Hence:J(Γ(T))⊥{\displaystyle J(\Gamma (T))^{\bot }}is the graph of some operatorS{\displaystyle S}if and only ifTis densely defined.[13]A simple calculation shows that this "some"S{\displaystyle S}satisfies:⟨Tx∣y⟩2=⟨x∣Sy⟩1,{\displaystyle \langle Tx\mid y\rangle _{2}=\langle x\mid Sy\rangle _{1},}for everyxin the domain ofT. ThusS{\displaystyle S}is the adjoint ofT. It follows immediately from the above definition that the adjointT∗{\displaystyle T^{*}}is closed.[12]In particular, a self-adjoint operator (meaningT=T∗{\displaystyle T=T^{*}}) is closed. An operatorTis closed and densely defined if and only ifT∗∗=T.{\displaystyle T^{**}=T.}[nb 4] Some well-known properties for bounded operators generalize to closed densely defined operators. The kernel of a closed operator is closed. 
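The graph characterization of the adjoint can be tested in finite dimensions, where T* is the conjugate transpose and all graphs are subspaces of H₁ ⊕ H₂. The NumPy sketch below (our own example matrix, real for simplicity) checks that the graph of T* is orthogonal to J(Γ(T)) with J(x ⊕ y) = −y ⊕ x:

```python
import numpy as np

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# graph of T in R^2 (+) R^2, spanned by the columns of [I; T]
G = np.vstack([np.eye(2), T])                     # columns span Gamma(T)

# J(x (+) y) = -y (+) x, as a 4x4 block matrix
J = np.block([[np.zeros((2, 2)), -np.eye(2)],
              [np.eye(2),         np.zeros((2, 2))]])
JG = J @ G                                        # columns span J(Gamma(T))

# graph of the adjoint T* = T^T in the real case
G_adj = np.vstack([np.eye(2), T.T])

# Gamma(T*) is orthogonal to J(Gamma(T)); by dimension count it is
# exactly the orthogonal complement
assert np.allclose(G_adj.T @ JG, np.zeros((2, 2)))
```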
Moreover, the kernel of a closed densely defined operatorT:H1→H2{\displaystyle T:H_{1}\to H_{2}}coincides with the orthogonal complement of the range of the adjoint. That is,[14]ker⁡(T)=ran⁡(T∗)⊥.{\displaystyle \operatorname {ker} (T)=\operatorname {ran} (T^{*})^{\bot }.}von Neumann's theoremstates thatT∗T{\displaystyle T^{*}T}andTT∗{\displaystyle TT^{*}}are self-adjoint, and thatI+T∗T{\displaystyle I+T^{*}T}andI+TT∗{\displaystyle I+TT^{*}}both have bounded inverses.[15]IfT∗{\displaystyle T^{*}}has trivial kernel,Thas dense range (by the above identity.) Moreover: In contrast to the bounded case, it is not necessary that(TS)∗=S∗T∗,{\displaystyle (TS)^{*}=S^{*}T^{*},}since, for example, it is even possible that(TS)∗{\displaystyle (TS)^{*}}does not exist.[citation needed]This is, however, the case if, for example,Tis bounded.[16] A densely defined, closed operatorTis callednormalif it satisfies the following equivalent conditions:[17] Every self-adjoint operator is normal. LetT:B1→B2{\displaystyle T:B_{1}\to B_{2}}be an operator between Banach spaces. Then thetranspose(ordual)tT:B2∗→B1∗{\displaystyle {}^{t}T:{B_{2}}^{*}\to {B_{1}}^{*}}ofT{\displaystyle T}is the linear operator satisfying:⟨Tx,y′⟩=⟨x,(tT)y′⟩{\displaystyle \langle Tx,y'\rangle =\langle x,\left({}^{t}T\right)y'\rangle }for allx∈B1{\displaystyle x\in B_{1}}andy∈B2∗.{\displaystyle y\in B_{2}^{*}.}Here, we used the notation:⟨x,x′⟩=x′(x).{\displaystyle \langle x,x'\rangle =x'(x).}[18] The necessary and sufficient condition for the transpose ofT{\displaystyle T}to exist is thatT{\displaystyle T}is densely defined (for essentially the same reason as to adjoints, as discussed above.) 
For any Hilbert spaceH,{\displaystyle H,}there is the anti-linear isomorphism:J:H∗→H{\displaystyle J:H^{*}\to H}given byJf=y{\displaystyle Jf=y}wheref(x)=⟨x∣y⟩H,(x∈H).{\displaystyle f(x)=\langle x\mid y\rangle _{H},(x\in H).}Through this isomorphism, the transposetT{\displaystyle {}^{t}T}relates to the adjointT∗{\displaystyle T^{*}}in the following way:[19]T∗=J1(tT)J2−1,{\displaystyle T^{*}=J_{1}\left({}^{t}T\right)J_{2}^{-1},}whereJj:Hj∗→Hj{\displaystyle J_{j}:H_{j}^{*}\to H_{j}}. (For the finite-dimensional case, this corresponds to the fact that the adjoint of a matrix is its conjugate transpose.) Note that this gives the definition of adjoint in terms of a transpose. Closed linear operators are a class oflinear operatorsonBanach spaces. They are more general thanbounded operators, and therefore not necessarilycontinuous, but they still retain nice enough properties that one can define thespectrumand (with certain assumptions)functional calculusfor such operators. Many important linear operators which fail to be bounded turn out to be closed, such as thederivativeand a large class ofdifferential operators. LetX,Ybe twoBanach spaces. Alinear operatorA:D(A) ⊆X→Yisclosedif for everysequence{xn}inD(A)convergingtoxinXsuch thatAxn→y∈Yasn→ ∞one hasx∈D(A)andAx=y. Equivalently,Ais closed if itsgraphisclosedin thedirect sumX⊕Y. Given a linear operatorA, not necessarily closed, if the closure of its graph inX⊕Yhappens to be the graph of some operator, that operator is called theclosureofA, and we say thatAisclosable. Denote the closure ofAbyA. It follows thatAis therestrictionofAtoD(A). Acore(oressential domain) of a closable operator is asubsetCofD(A)such that the closure of the restriction ofAtoCisA. Consider thederivativeoperatorA=⁠d/dx⁠whereX=Y=C([a,b])is the Banach space of allcontinuous functionson aninterval[a,b]. 
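In finite dimensions the defining identity ⟨Tx | y⟩ = ⟨x | T*y⟩, with T* the conjugate transpose, can be checked directly. A NumPy sketch with random example data (the inner product is taken linear in its first slot, matching the convention above):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
A_star = A.conj().T                       # conjugate transpose = adjoint

x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# <u, v> linear in the first slot: np.vdot conjugates its first argument,
# so we pass v first
inner = lambda u, v: np.vdot(v, u)

# defining property of the adjoint: <A x, y> = <x, A* y>
assert np.isclose(inner(A @ x, y), inner(x, A_star @ y))
```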
If one takes its domain D(A) to be C1([a,b]), then A is a closed operator which is not bounded.[20] On the other hand, if D(A) = C∞([a,b]), then A will no longer be closed, but it will be closable, with the closure being its extension defined on C1([a,b]). An operator T on a Hilbert space is symmetric if and only if for each x and y in the domain of T we have ⟨Tx∣y⟩=⟨x∣Ty⟩ {\displaystyle \langle Tx\mid y\rangle =\langle x\mid Ty\rangle }. A densely defined operator T is symmetric if and only if it agrees with its adjoint T∗ restricted to the domain of T, in other words when T∗ is an extension of T.[21] In general, if T is densely defined and symmetric, the domain of the adjoint T∗ need not equal the domain of T. If T is symmetric and the domain of T and the domain of the adjoint coincide, then we say that T is self-adjoint.[22] Note that, when T is self-adjoint, the existence of the adjoint implies that T is densely defined and, since T∗ is necessarily closed, T is closed. A densely defined operator T is symmetric if the subspace Γ(T) (defined in a previous section) is orthogonal to its image J(Γ(T)) under J (where J(x,y):=(y,-x)).[nb 6] Equivalently, an operator T is self-adjoint if it is densely defined, closed, symmetric, and satisfies the fourth condition: both operators T–i, T+i are surjective, that is, map the domain of T onto the whole space H. In other words: for every x in H there exist y and z in the domain of T such that Ty–iy=x and Tz+iz=x.[23] An operator T is self-adjoint if the two subspaces Γ(T), J(Γ(T)) are orthogonal and their sum is the whole space H⊕H. {\displaystyle H\oplus H.}[12] This approach does not cover non-densely defined closed operators. Non-densely defined symmetric operators can be defined directly or via graphs, but not via adjoint operators. A symmetric operator is often studied via its Cayley transform.
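The unboundedness of the derivative operator A = d/dx on C([a,b]) discussed above can be made concrete on [0,1]: the functions fₙ(t) = tⁿ all have sup-norm 1 while their derivatives have sup-norm n, so no constant C can satisfy ‖Af‖ ≤ C‖f‖. A rough numerical sketch (the grid-based sup norm is an approximation, and the function family is one illustrative choice):

```python
def sup_norm(f, a=0.0, b=1.0, steps=10_000):
    # approximate the sup-norm on [a, b] by sampling a uniform grid
    return max(abs(f(a + (b - a) * k / steps)) for k in range(steps + 1))

for n in (1, 5, 50):
    f = lambda t, n=n: t ** n              # ||f||_sup = 1 on [0, 1]
    df = lambda t, n=n: n * t ** (n - 1)   # ||Af||_sup = n, unbounded in n
    assert abs(sup_norm(f) - 1.0) < 1e-9
    assert abs(sup_norm(df) - n) < 1e-6
```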
An operator T on a complex Hilbert space is symmetric if and only if the number ⟨Tx∣x⟩ {\displaystyle \langle Tx\mid x\rangle } is real for all x in the domain of T.[21] A densely defined closed symmetric operator T is self-adjoint if and only if T∗ is symmetric.[24] It may happen that T∗ is not symmetric.[25][26] A densely defined operator T is called positive[8] (or nonnegative[27]) if its quadratic form is nonnegative, that is, ⟨Tx∣x⟩≥0 {\displaystyle \langle Tx\mid x\rangle \geq 0} for all x in the domain of T. Such an operator is necessarily symmetric. The operator T∗T is self-adjoint[28] and positive[8] for every densely defined, closed T. The spectral theorem applies to self-adjoint operators[29] and moreover, to normal operators,[30][31] but not to densely defined, closed operators in general, since in this case the spectrum can be empty.[32][33] A symmetric operator defined everywhere is closed, therefore bounded;[6] this is the Hellinger–Toeplitz theorem.[34] By definition, an operator T is an extension of an operator S if Γ(S) ⊆ Γ(T).[35] An equivalent direct definition: for every x in the domain of S, x belongs to the domain of T and Sx=Tx.[5][35] Note that an everywhere defined extension exists for every operator, which is a purely algebraic fact explained at Discontinuous linear map § General existence theorem and based on the axiom of choice. If the given operator is not bounded, then the extension is a discontinuous linear map. It is of little use since it cannot preserve important properties of the given operator (see below), and usually is highly non-unique. An operator T is called closable if it satisfies the following equivalent conditions:[6][35][36] Not all operators are closable.[37] A closable operator T has the least closed extension T¯ {\displaystyle {\overline {T}}} called the closure of T. The closure of the graph of T is equal to the graph of T¯. {\displaystyle {\overline {T}}.}[6][35] Other, non-minimal closed extensions may exist.[25][26] A densely defined operator T is closable if and only if T∗ is densely defined.
In this case T¯=T∗∗ {\displaystyle {\overline {T}}=T^{**}} and (T¯)∗=T∗. {\displaystyle ({\overline {T}})^{*}=T^{*}.}[12][38] If S is densely defined and T is an extension of S, then S∗ is an extension of T∗.[39] Every symmetric operator is closable.[40] A symmetric operator is called maximal symmetric if it has no symmetric extensions, except for itself.[21] Every self-adjoint operator is maximal symmetric.[21] The converse is false.[41] An operator is called essentially self-adjoint if its closure is self-adjoint.[40] An operator is essentially self-adjoint if and only if it has one and only one self-adjoint extension.[24] A symmetric operator may have more than one self-adjoint extension, and even a continuum of them.[26] A densely defined, symmetric operator T is essentially self-adjoint if and only if both operators T–i, T+i have dense range.[42] Let T be a densely defined operator. Denoting the relation "T is an extension of S" by S⊂T (a conventional abbreviation for Γ(S) ⊆ Γ(T)), one has the following.[43] The class of self-adjoint operators is especially important in mathematical physics. Every self-adjoint operator is densely defined, closed and symmetric. The converse holds for bounded operators but fails in general. Self-adjointness is substantially more restrictive than these three properties. The famous spectral theorem holds for self-adjoint operators. In combination with Stone's theorem on one-parameter unitary groups it shows that self-adjoint operators are precisely the infinitesimal generators of strongly continuous one-parameter unitary groups, see Self-adjoint operator § Self-adjoint extensions in quantum mechanics. Such unitary groups are especially important for describing time evolution in classical and quantum mechanics. This article incorporates material from Closed operator on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
https://en.wikipedia.org/wiki/Unbounded_operator
Inmathematics, more specifically infunctional analysis, aBanach space(/ˈbɑː.nʌx/,Polish pronunciation:[ˈba.nax]) is acompletenormed vector space. Thus, a Banach space is a vector space with ametricthat allows the computation ofvector lengthand distance between vectors and is complete in the sense that aCauchy sequenceof vectors always converges to a well-definedlimitthat is within the space. Banach spaces are named after the Polish mathematicianStefan Banach, who introduced this concept and studied it systematically in 1920–1922 along withHans HahnandEduard Helly.[1]Maurice René Fréchetwas the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space".[2]Banach spaces originally grew out of the study offunction spacesbyHilbert,Fréchet, andRieszearlier in the century. Banach spaces play a central role in functional analysis. In other areas ofanalysis, the spaces under study are often Banach spaces. ABanach spaceis acompletenormed space(X,‖⋅‖).{\displaystyle (X,\|{\cdot }\|).}A normed space is a pair[note 1](X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}consisting of avector spaceX{\displaystyle X}over a scalar fieldK{\displaystyle \mathbb {K} }(whereK{\displaystyle \mathbb {K} }is commonlyR{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) together with a distinguished[note 2]norm‖⋅‖:X→R.{\displaystyle \|{\cdot }\|:X\to \mathbb {R} .}Like all norms, this norm induces atranslation invariant[note 3]distance function, called thecanonicalor(norm) induced metric, defined for all vectorsx,y∈X{\displaystyle x,y\in X}by[note 4]d(x,y):=‖y−x‖=‖x−y‖.{\displaystyle d(x,y):=\|y-x\|=\|x-y\|.}This makesX{\displaystyle X}into ametric space(X,d).{\displaystyle (X,d).}A sequencex1,x2,…{\displaystyle x_{1},x_{2},\ldots }is calledCauchy in(X,d){\displaystyle (X,d)}ord{\displaystyle d}-Cauchyor‖⋅‖{\displaystyle \|{\cdot }\|}-Cauchyif for every realr>0,{\displaystyle r>0,}there exists some indexN{\displaystyle N}such 
thatd(xn,xm)=‖xn−xm‖<r{\displaystyle d(x_{n},x_{m})=\|x_{n}-x_{m}\|<r}wheneverm{\displaystyle m}andn{\displaystyle n}are greater thanN.{\displaystyle N.}The normed space(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is called aBanach spaceand the canonical metricd{\displaystyle d}is called acomplete metricif(X,d){\displaystyle (X,d)}is acomplete metric space, which by definition means for everyCauchy sequencex1,x2,…{\displaystyle x_{1},x_{2},\ldots }in(X,d),{\displaystyle (X,d),}there exists somex∈X{\displaystyle x\in X}such thatlimn→∞xn=xin(X,d),{\displaystyle \lim _{n\to \infty }x_{n}=x\;{\text{ in }}(X,d),}where because‖xn−x‖=d(xn,x),{\displaystyle \|x_{n}-x\|=d(x_{n},x),}this sequence's convergence tox{\displaystyle x}can equivalently be expressed aslimn→∞‖xn−x‖=0inR.{\displaystyle \lim _{n\to \infty }\|x_{n}-x\|=0\;{\text{ in }}\mathbb {R} .} The norm‖⋅‖{\displaystyle \|{\cdot }\|}of a normed space(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is called acomplete normif(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is a Banach space. For any normed space(X,‖⋅‖),{\displaystyle (X,\|{\cdot }\|),}there exists anL-semi-inner product⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }onX{\displaystyle X}such that‖x‖=⟨x,x⟩{\textstyle \|x\|={\sqrt {\langle x,x\rangle }}}for allx∈X.{\displaystyle x\in X.}[3]In general, there may be infinitely many L-semi-inner products that satisfy this condition and the proof of the existence of L-semi-inner products relies on the non-constructiveHahn–Banach theorem[3]. L-semi-inner products are a generalization ofinner products, which are what fundamentally distinguishHilbert spacesfrom all other Banach spaces. This shows that all normed spaces (and hence all Banach spaces) can be considered as being generalizations of (pre-)Hilbert spaces. The vector space structure allows one to relate the behavior of Cauchy sequences to that of convergingseries of vectors. 
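One way to see what completeness adds: in the (incomplete) normed space of rational numbers, Newton's iteration for √2 produces a Cauchy sequence with no rational limit, while in the complete space R the same sequence converges. A sketch using exact rational arithmetic (the starting point and iteration count are arbitrary choices):

```python
from fractions import Fraction

# Newton's iteration x_{n+1} = (x_n + 2/x_n) / 2 for sqrt(2); every
# iterate is rational, so the whole sequence lives inside Q.
x = Fraction(2)
iterates = [x]
for _ in range(6):
    x = (x + 2 / x) / 2
    iterates.append(x)

# The sequence is Cauchy: consecutive terms get extremely close...
assert abs(iterates[-1] - iterates[-2]) < Fraction(1, 10**10)

# ...but the limit is sqrt(2), which is not rational: Q is not complete,
# while the same sequence converges in the Banach space R.
assert abs(float(iterates[-1]) - 2 ** 0.5) < 1e-12
```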
A normed spaceX{\displaystyle X}is a Banach space if and only if eachabsolutely convergentseries inX{\displaystyle X}converges to a value that lies withinX,{\displaystyle X,}[4]symbolically∑n=1∞‖vn‖<∞⟹∑n=1∞vnconverges inX.{\displaystyle \sum _{n=1}^{\infty }\|v_{n}\|<\infty \implies \sum _{n=1}^{\infty }v_{n}{\text{ converges in }}X.} The canonical metricd{\displaystyle d}of a normed space(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}induces the usualmetric topologyτd{\displaystyle \tau _{d}}onX,{\displaystyle X,}which is referred to as thecanonicalornorm inducedtopology. Every normed space is automatically assumed to carry thisHausdorfftopology, unless indicated otherwise. With this topology, every Banach space is aBaire space, although there exist normed spaces that are Baire but not Banach.[5]The norm‖⋅‖:X→R{\displaystyle \|{\cdot }\|:X\to \mathbb {R} }is always acontinuous functionwith respect to the topology that it induces. The open and closed balls of radiusr>0{\displaystyle r>0}centered at a pointx∈X{\displaystyle x\in X}are, respectively, the setsBr(x):={z∈X∣‖z−x‖<r}andCr(x):={z∈X∣‖z−x‖≤r}.{\displaystyle B_{r}(x):=\{z\in X\mid \|z-x\|<r\}\qquad {\text{ and }}\qquad C_{r}(x):=\{z\in X\mid \|z-x\|\leq r\}.}Any such ball is aconvexandbounded subsetofX,{\displaystyle X,}but acompactball/neighborhoodexists if and only ifX{\displaystyle X}isfinite-dimensional. In particular, no infinite–dimensional normed space can belocally compactor have theHeine–Borel property. 
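The absolute-convergence criterion stated at the start of this passage can be illustrated in the complete space R² with the max-norm: once the norms are summable, the partial sums are Cauchy and hence converge. A numerical sketch with an arbitrarily chosen absolutely convergent series:

```python
def norm(v):
    # max-norm on R^2
    return max(abs(c) for c in v)

def v(n):
    return (1 / 2 ** n, (-1) ** n / n ** 2)

# absolutely convergent: the sum of norms is finite (below 1 + pi^2/6)
assert sum(norm(v(n)) for n in range(1, 10_000)) < 2.7

def partial(N):
    # N-th partial sum of the series, componentwise
    return (sum(1 / 2 ** n for n in range(1, N)),
            sum((-1) ** n / n ** 2 for n in range(1, N)))

# distant partial sums are close: the sequence is Cauchy, so it
# converges in the complete space R^2
s1, s2 = partial(1_000), partial(2_000)
assert norm((s1[0] - s2[0], s1[1] - s2[1])) < 1e-5
```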
Ifx0{\displaystyle x_{0}}is a vector ands≠0{\displaystyle s\neq 0}is a scalar, thenx0+sBr(x)=B|s|r(x0+sx)andx0+sCr(x)=C|s|r(x0+sx).{\displaystyle x_{0}+s\,B_{r}(x)=B_{|s|r}(x_{0}+sx)\qquad {\text{ and }}\qquad x_{0}+s\,C_{r}(x)=C_{|s|r}(x_{0}+sx).}Usings=1{\displaystyle s=1}shows that the norm-induced topology istranslation invariant, which means that for anyx∈X{\displaystyle x\in X}andS⊆X,{\displaystyle S\subseteq X,}the subsetS{\displaystyle S}isopen(respectively,closed) inX{\displaystyle X}if and only if its translationx+S:={x+s∣s∈S}{\displaystyle x+S:=\{x+s\mid s\in S\}}is open (respectively, closed). Consequently, the norm induced topology is completely determined by anyneighbourhood basisat the origin. Some common neighborhood bases at the origin include{Br(0)∣r>0},{Cr(0)∣r>0},{Brn(0)∣n∈N},and{Crn(0)∣n∈N},{\displaystyle \{B_{r}(0)\mid r>0\},\qquad \{C_{r}(0)\mid r>0\},\qquad \{B_{r_{n}}(0)\mid n\in \mathbb {N} \},\qquad {\text{ and }}\qquad \{C_{r_{n}}(0)\mid n\in \mathbb {N} \},}wherer1,r2,…{\displaystyle r_{1},r_{2},\ldots }can be any sequence of positive real numbers that converges to0{\displaystyle 0}inR{\displaystyle \mathbb {R} }(common choices arern:=1n{\displaystyle r_{n}:={\tfrac {1}{n}}}orrn:=1/2n{\displaystyle r_{n}:=1/2^{n}}). So, for example, any open subsetU{\displaystyle U}ofX{\displaystyle X}can be written as a unionU=⋃x∈IBrx(x)=⋃x∈Ix+Brx(0)=⋃x∈Ix+rxB1(0){\displaystyle U=\bigcup _{x\in I}B_{r_{x}}(x)=\bigcup _{x\in I}x+B_{r_{x}}(0)=\bigcup _{x\in I}x+r_{x}\,B_{1}(0)}indexed by some subsetI⊆U,{\displaystyle I\subseteq U,}where eachrx{\displaystyle r_{x}}may be chosen from the aforementioned sequencer1,r2,….{\displaystyle r_{1},r_{2},\ldots .}(The open balls can also be replaced with closed balls, although the indexing setI{\displaystyle I}and radiirx{\displaystyle r_{x}}may then also need to be replaced). 
Additionally,I{\displaystyle I}can always be chosen to becountableifX{\displaystyle X}is aseparable space, which by definition means thatX{\displaystyle X}contains some countabledense subset. All finite–dimensional normed spaces are separable Banach spaces and any two Banach spaces of the same finite dimension are linearly homeomorphic. Every separable infinite–dimensionalHilbert spaceis linearly isometrically isomorphic to the separable Hilbertsequence spaceℓ2(N){\displaystyle \ell ^{2}(\mathbb {N} )}with its usual norm‖⋅‖2.{\displaystyle \|{\cdot }\|_{2}.} TheAnderson–Kadec theoremstates that every infinite–dimensional separableFréchet spaceishomeomorphicto theproduct space∏i∈NR{\textstyle \prod _{i\in \mathbb {N} }\mathbb {R} }of countably many copies ofR{\displaystyle \mathbb {R} }(this homeomorphism need not be alinear map).[6][7]Thus all infinite–dimensional separable Fréchet spaces are homeomorphic to each other (or said differently, their topology is uniqueup toa homeomorphism). Since every Banach space is a Fréchet space, this is also true of all infinite–dimensional separable Banach spaces, includingℓ2(N).{\displaystyle \ell ^{2}(\mathbb {N} ).}In fact,ℓ2(N){\displaystyle \ell ^{2}(\mathbb {N} )}is evenhomeomorphicto its ownunitsphere{x∈ℓ2(N)∣‖x‖2=1},{\displaystyle \{x\in \ell ^{2}(\mathbb {N} )\mid \|x\|_{2}=1\},}which stands in sharp contrast to finite–dimensional spaces (theEuclidean planeR2{\displaystyle \mathbb {R} ^{2}}is not homeomorphic to theunit circle, for instance). 
This pattern in homeomorphism classes extends to generalizations of metrizable (locally Euclidean) topological manifolds known as metric Banach manifolds, which are metric spaces that are, around every point, locally homeomorphic to some open subset of a given Banach space (metric Hilbert manifolds and metric Fréchet manifolds are defined similarly).[7] For example, every open subset U {\displaystyle U} of a Banach space X {\displaystyle X} is canonically a metric Banach manifold modeled on X {\displaystyle X} since the inclusion map U→X {\displaystyle U\to X} is an open local homeomorphism. Using Hilbert space microbundles, David Henderson showed[8] in 1969 that every metric manifold modeled on a separable infinite–dimensional Banach (or Fréchet) space can be topologically embedded as an open subset of ℓ2(N) {\displaystyle \ell ^{2}(\mathbb {N} )} and, consequently, also admits a unique smooth structure making it into a C∞ {\displaystyle C^{\infty }} Hilbert manifold. There is a compact subset S {\displaystyle S} of ℓ2(N) {\displaystyle \ell ^{2}(\mathbb {N} )} whose convex hull co⁡(S) {\displaystyle \operatorname {co} (S)} is not closed and thus also not compact.[note 5][9] However, as in all Banach spaces, the closed convex hull co¯S {\displaystyle {\overline {\operatorname {co} }}S} of this (and every other) compact subset will be compact.[10] In a normed space that is not complete, it is in general not guaranteed that co¯S {\displaystyle {\overline {\operatorname {co} }}S} will be compact whenever S {\displaystyle S} is; an example[note 5] can even be found in a (non-complete) pre-Hilbert vector subspace of ℓ2(N). {\displaystyle \ell ^{2}(\mathbb {N} ).} This norm-induced topology also makes (X,τd) {\displaystyle (X,\tau _{d})} into what is known as a topological vector space (TVS), which by definition is a vector space endowed with a topology making the operations of addition and scalar multiplication continuous.
It is emphasized that the TVS (X,τd) {\displaystyle (X,\tau _{d})} is only a vector space together with a certain type of topology; that is to say, when considered as a TVS, it is not associated with any particular norm or metric (both of which are "forgotten"). This Hausdorff TVS (X,τd) {\displaystyle (X,\tau _{d})} is even locally convex because the set of all open balls centered at the origin forms a neighbourhood basis at the origin consisting of convex balanced open sets. This TVS is also normable, which by definition refers to any TVS whose topology is induced by some (possibly unknown) norm. Normable TVSs are characterized by being Hausdorff and having a bounded convex neighborhood of the origin. All Banach spaces are barrelled spaces, which means that every barrel is a neighborhood of the origin (all closed balls centered at the origin are barrels, for example) and guarantees that the Banach–Steinhaus theorem holds. The open mapping theorem implies that when τ1 {\displaystyle \tau _{1}} and τ2 {\displaystyle \tau _{2}} are topologies on X {\displaystyle X} that make both (X,τ1) {\displaystyle (X,\tau _{1})} and (X,τ2) {\displaystyle (X,\tau _{2})} into complete metrizable TVSs (for example, Banach or Fréchet spaces), if one topology is finer or coarser than the other, then they must be equal (that is, if τ1⊆τ2 {\displaystyle \tau _{1}\subseteq \tau _{2}} or τ2⊆τ1 {\displaystyle \tau _{2}\subseteq \tau _{1}} then τ1=τ2 {\displaystyle \tau _{1}=\tau _{2}}).[11] So, for example, if (X,p) {\displaystyle (X,p)} and (X,q) {\displaystyle (X,q)} are Banach spaces with topologies τp {\displaystyle \tau _{p}} and τq, {\displaystyle \tau _{q},} and if one of these spaces has some open ball that is also an open subset of the other space (or, equivalently, if one of p:(X,τq)→R {\displaystyle p:(X,\tau _{q})\to \mathbb {R} } or q:(X,τp)→R {\displaystyle q:(X,\tau _{p})\to \mathbb {R} } is continuous), then their topologies are identical and the norms p {\displaystyle p} and q {\displaystyle q} are equivalent.
Two norms,p{\displaystyle p}andq,{\displaystyle q,}on a vector spaceX{\displaystyle X}are said to beequivalentif they induce the same topology;[12]this happens if and only if there exist real numbersc,C>0{\displaystyle c,C>0}such thatcq(x)≤p(x)≤Cq(x){\textstyle c\,q(x)\leq p(x)\leq C\,q(x)}for allx∈X.{\displaystyle x\in X.}Ifp{\displaystyle p}andq{\displaystyle q}are two equivalent norms on a vector spaceX{\displaystyle X}then(X,p){\displaystyle (X,p)}is a Banach space if and only if(X,q){\displaystyle (X,q)}is a Banach space. See this footnote for an example of a continuous norm on a Banach space that isnotequivalent to that Banach space's given norm.[note 6][12]All norms on a finite-dimensional vector space are equivalent and every finite-dimensional normed space is a Banach space.[13] A metricD{\displaystyle D}on a vector spaceX{\displaystyle X}is induced by a norm onX{\displaystyle X}if and only ifD{\displaystyle D}istranslation invariant[note 3]andabsolutely homogeneous, which means thatD(sx,sy)=|s|D(x,y){\displaystyle D(sx,sy)=|s|D(x,y)}for all scalarss{\displaystyle s}and allx,y∈X,{\displaystyle x,y\in X,}in which case the function‖x‖:=D(x,0){\displaystyle \|x\|:=D(x,0)}defines a norm onX{\displaystyle X}and the canonical metric induced by‖⋅‖{\displaystyle \|{\cdot }\|}is equal toD.{\displaystyle D.} Suppose that(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is a normed space and thatτ{\displaystyle \tau }is the norm topology induced onX.{\displaystyle X.}Suppose thatD{\displaystyle D}isanymetriconX{\displaystyle X}such that the topology thatD{\displaystyle D}induces onX{\displaystyle X}is equal toτ.{\displaystyle \tau .}IfD{\displaystyle D}istranslation invariant[note 3]then(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is a Banach space if and only if(X,D){\displaystyle (X,D)}is a complete metric space.[14]IfD{\displaystyle D}isnottranslation invariant, then it may be possible for(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}to be a Banach space but for(X,D){\displaystyle 
(X,D)}tonotbe a complete metric space[15](see this footnote[note 7]for an example). In contrast, a theorem of Klee,[16][17][note 8]which also applies to allmetrizable topological vector spaces, implies that if there existsany[note 9]complete metricD{\displaystyle D}onX{\displaystyle X}that induces the norm topologyτ{\displaystyle \tau }onX,{\displaystyle X,}then(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}is a Banach space. AFréchet spaceis alocally convex topological vector spacewhose topology is induced by some translation-invariant complete metric. Every Banach space is a Fréchet space but not conversely; indeed, there even exist Fréchet spaces on which no norm is a continuous function (such as thespace of real sequencesRN=∏i∈NR{\textstyle \mathbb {R} ^{\mathbb {N} }=\prod _{i\in \mathbb {N} }\mathbb {R} }with theproduct topology). However, the topology of every Fréchet space is induced by somecountablefamily of real-valued (necessarily continuous) maps calledseminorms, which are generalizations ofnorms. It is even possible for a Fréchet space to have a topology that is induced by a countable family ofnorms(such norms would necessarily be continuous)[note 10][18]but to not be a Banach/normable spacebecause its topology can not be defined by anysinglenorm. An example of such a space is theFréchet spaceC∞(K),{\displaystyle C^{\infty }(K),}whose definition can be found in the article onspaces of test functions and distributions. There is another notion of completeness besides metric completeness and that is the notion of acomplete topological vector space(TVS) or TVS-completeness, which uses the theory ofuniform spaces. 
Specifically, the notion of TVS-completeness uses a unique translation-invariantuniformity, called thecanonical uniformity, that dependsonlyon vector subtraction and the topologyτ{\displaystyle \tau }that the vector space is endowed with, and so in particular, this notion of TVS completeness is independent of whatever norm induced the topologyτ{\displaystyle \tau }(and even applies to TVSs that arenoteven metrizable). Every Banach space is a complete TVS. Moreover, a normed space is a Banach space (that is, its norm-induced metric is complete) if and only if it is complete as a topological vector space. If(X,τ){\displaystyle (X,\tau )}is ametrizable topological vector space(such as any norm induced topology, for example), then(X,τ){\displaystyle (X,\tau )}is a complete TVS if and only if it is asequentiallycomplete TVS, meaning that it is enough to check that every Cauchysequencein(X,τ){\displaystyle (X,\tau )}converges in(X,τ){\displaystyle (X,\tau )}to some point ofX{\displaystyle X}(that is, there is no need to consider the more general notion of arbitrary Cauchynets). If(X,τ){\displaystyle (X,\tau )}is a topological vector space whose topology is induced bysome(possibly unknown) norm (such spaces are callednormable), then(X,τ){\displaystyle (X,\tau )}is a complete topological vector space if and only ifX{\displaystyle X}may be assigned anorm‖⋅‖{\displaystyle \|{\cdot }\|}that induces onX{\displaystyle X}the topologyτ{\displaystyle \tau }and also makes(X,‖⋅‖){\displaystyle (X,\|{\cdot }\|)}into a Banach space. AHausdorfflocally convex topological vector spaceX{\displaystyle X}isnormableif and only if itsstrong dual spaceXb′{\displaystyle X'_{b}}is normable,[19]in which caseXb′{\displaystyle X'_{b}}is a Banach space (Xb′{\displaystyle X'_{b}}denotes thestrong dual spaceofX,{\displaystyle X,}whose topology is a generalization of thedual norm-induced topology on thecontinuous dual spaceX′{\displaystyle X'}; see this footnote[note 11]for more details). 
IfX{\displaystyle X}is ametrizablelocally convex TVS, thenX{\displaystyle X}is normable if and only ifXb′{\displaystyle X'_{b}}is aFréchet–Urysohn space.[20]This shows that in the category oflocally convex TVSs, Banach spaces are exactly those complete spaces that are bothmetrizableand have metrizablestrong dual spaces. Every normed space can beisometricallyembedded onto a dense vector subspace of a Banach space, where this Banach space is called acompletionof the normed space. This Hausdorff completion is unique up toisometricisomorphism. More precisely, for every normed spaceX,{\displaystyle X,}there exists a Banach spaceY{\displaystyle Y}and a mappingT:X→Y{\displaystyle T:X\to Y}such thatT{\displaystyle T}is anisometric mappingandT(X){\displaystyle T(X)}is dense inY.{\displaystyle Y.}IfZ{\displaystyle Z}is another Banach space such that there is an isometric isomorphism fromX{\displaystyle X}onto a dense subset ofZ,{\displaystyle Z,}thenZ{\displaystyle Z}is isometrically isomorphic toY.{\displaystyle Y.}The Banach spaceY{\displaystyle Y}is the Hausdorffcompletionof the normed spaceX.{\displaystyle X.}The underlying metric space forY{\displaystyle Y}is the same as the metric completion ofX,{\displaystyle X,}with the vector space operations extended fromX{\displaystyle X}toY.{\displaystyle Y.}The completion ofX{\displaystyle X}is sometimes denoted byX^.{\displaystyle {\widehat {X}}.} IfX{\displaystyle X}andY{\displaystyle Y}are normed spaces over the sameground fieldK,{\displaystyle \mathbb {K} ,}the set of allcontinuousK{\displaystyle \mathbb {K} }-linear mapsT:X→Y{\displaystyle T:X\to Y}is denoted byB(X,Y).{\displaystyle B(X,Y).}In infinite-dimensional spaces, not all linear maps are continuous. 
A linear mapping from a normed spaceX{\displaystyle X}to another normed space is continuous if and only if it isboundedon the closedunit ballofX.{\displaystyle X.}Thus, the vector spaceB(X,Y){\displaystyle B(X,Y)}can be given theoperator norm‖T‖=sup{‖Tx‖Y∣x∈X,‖x‖X≤1}.{\displaystyle \|T\|=\sup\{\|Tx\|_{Y}\mid x\in X,\ \|x\|_{X}\leq 1\}.} ForY{\displaystyle Y}a Banach space, the spaceB(X,Y){\displaystyle B(X,Y)}is a Banach space with respect to this norm. In categorical contexts, it is sometimes convenient to restrict thefunction spacebetween two Banach spaces to only theshort maps; in that case the spaceB(X,Y){\displaystyle B(X,Y)}reappears as a naturalbifunctor.[21] IfX{\displaystyle X}is a Banach space, the spaceB(X)=B(X,X){\displaystyle B(X)=B(X,X)}forms a unitalBanach algebra; the multiplication operation is given by the composition of linear maps. IfX{\displaystyle X}andY{\displaystyle Y}are normed spaces, they areisomorphic normed spacesif there exists a linear bijectionT:X→Y{\displaystyle T:X\to Y}such thatT{\displaystyle T}and its inverseT−1{\displaystyle T^{-1}}are continuous. If one of the two spacesX{\displaystyle X}orY{\displaystyle Y}is complete (orreflexive,separable, etc.) then so is the other space. Two normed spacesX{\displaystyle X}andY{\displaystyle Y}areisometrically isomorphicif in addition,T{\displaystyle T}is anisometry, that is,‖T(x)‖=‖x‖{\displaystyle \|T(x)\|=\|x\|}for everyx{\displaystyle x}inX.{\displaystyle X.}TheBanach–Mazur distanced(X,Y){\displaystyle d(X,Y)}between two isomorphic but not isometric spacesX{\displaystyle X}andY{\displaystyle Y}gives a measure of how much the two spacesX{\displaystyle X}andY{\displaystyle Y}differ. Everycontinuous linear operatoris abounded linear operatorand if dealing only with normed spaces then the converse is also true. That is, alinear operatorbetween two normed spaces isboundedif and only if it is acontinuous function. 
So in particular, because the scalar field (which isR{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) is a normed space, alinear functionalon a normed space is abounded linear functionalif and only if it is acontinuous linear functional. This allows for continuity-related results (like those below) to be applied to Banach spaces. Although boundedness is the same as continuity for linear maps between normed spaces, the term "bounded" is more commonly used when dealing primarily with Banach spaces. Iff:X→R{\displaystyle f:X\to \mathbb {R} }is asubadditive function(such as a norm, asublinear function, or real linear functional), then[22]f{\displaystyle f}iscontinuous at the originif and only iff{\displaystyle f}isuniformly continuouson all ofX{\displaystyle X}; and if in additionf(0)=0{\displaystyle f(0)=0}thenf{\displaystyle f}is continuous if and only if itsabsolute value|f|:X→[0,∞){\displaystyle |f|:X\to [0,\infty )}is continuous, which happens if and only if{x∈X∣|f(x)|<1}{\displaystyle \{x\in X\mid |f(x)|<1\}}is an open subset ofX.{\displaystyle X.}[22][note 12]And very importantly for applying theHahn–Banach theorem, a linear functionalf{\displaystyle f}is continuous if and only if this is true of itsreal partRe⁡f{\displaystyle \operatorname {Re} f}and moreover,‖Re⁡f‖=‖f‖{\displaystyle \|\operatorname {Re} f\|=\|f\|}andthe real partRe⁡f{\displaystyle \operatorname {Re} f}completely determinesf,{\displaystyle f,}which is why the Hahn–Banach theorem is often stated only for real linear functionals. Also, a linear functionalf{\displaystyle f}onX{\displaystyle X}is continuous if and only if theseminorm|f|{\displaystyle |f|}is continuous, which happens if and only if there exists a continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }such that|f|≤p{\displaystyle |f|\leq p}; this last statement involving the linear functionalf{\displaystyle f}and seminormp{\displaystyle p}is encountered in many versions of the Hahn–Banach theorem. 
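The operator norm defined above admits a closed form in simple finite-dimensional cases: when both R³ and R² carry the max-norm, the induced norm of a matrix is its maximum absolute row sum, and the supremum over the unit ball is attained at the ±1 sign vectors (the extreme points of the max-norm ball). A sketch with arbitrary matrix entries:

```python
from itertools import product

T = [[1.0, -2.0, 0.5],
     [0.0,  3.0, -1.0]]

def apply(M, x):
    # matrix-vector product
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

# closed form for the max-norm-induced operator norm: max absolute row sum
row_sum_norm = max(sum(abs(a) for a in row) for row in T)

# brute-force supremum of ||Tx||_max over the extreme points of the unit ball
brute = max(max(abs(c) for c in apply(T, x))
            for x in product((-1.0, 1.0), repeat=3))
assert abs(row_sum_norm - brute) < 1e-12
```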
The Cartesian productX×Y{\displaystyle X\times Y}of two normed spaces is not canonically equipped with a norm. However, several equivalent norms are commonly used,[23]such as‖(x,y)‖1=‖x‖+‖y‖,‖(x,y)‖∞=max(‖x‖,‖y‖){\displaystyle \|(x,y)\|_{1}=\|x\|+\|y\|,\qquad \|(x,y)\|_{\infty }=\max(\|x\|,\|y\|)}which correspond (respectively) to thecoproductandproductin the category of Banach spaces and short maps (discussed above).[21]For finite (co)products, these norms give rise to isomorphic normed spaces, and the productX×Y{\displaystyle X\times Y}(or the direct sumX⊕Y{\displaystyle X\oplus Y}) is complete if and only if the two factors are complete. IfM{\displaystyle M}is aclosedlinear subspaceof a normed spaceX,{\displaystyle X,}there is a natural norm on the quotient spaceX/M,{\displaystyle X/M,}‖x+M‖=infm∈M‖x+m‖.{\displaystyle \|x+M\|=\inf \limits _{m\in M}\|x+m\|.} The quotientX/M{\displaystyle X/M}is a Banach space whenX{\displaystyle X}is complete.[24]The quotient map fromX{\displaystyle X}ontoX/M,{\displaystyle X/M,}sendingx∈X{\displaystyle x\in X}to its classx+M,{\displaystyle x+M,}is linear, onto, and of norm1,{\displaystyle 1,}except whenM=X,{\displaystyle M=X,}in which case the quotient is the null space. 
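The quotient norm just defined has a concrete geometric meaning in the Euclidean plane: for M a line through the origin, ‖x + M‖ is the distance from x to that line. A small sketch comparing the orthogonal-projection formula with a sampled infimum (all values are illustrative):

```python
import math

u = (1.0, 1.0)                      # M = span{u}, a closed subspace of R^2
x = (3.0, -1.0)

def norm(v):
    return math.hypot(*v)

# orthogonal projection of x onto M; ||x - proj|| is the distance to M,
# which equals the quotient norm ||x + M|| = inf_m ||x + m||
t = (x[0] * u[0] + x[1] * u[1]) / (u[0] ** 2 + u[1] ** 2)
quotient_norm = norm((x[0] - t * u[0], x[1] - t * u[1]))

# sampling m = s*u over a grid never beats the projection distance
sampled = min(norm((x[0] + s * u[0], x[1] + s * u[1]))
              for s in (k / 100 for k in range(-500, 501)))
assert quotient_norm <= sampled + 1e-9
assert abs(quotient_norm - sampled) < 1e-2
```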
The closed linear subspaceM{\displaystyle M}ofX{\displaystyle X}is said to be acomplemented subspaceofX{\displaystyle X}ifM{\displaystyle M}is therangeof asurjectivebounded linearprojectionP:X→M.{\displaystyle P:X\to M.}In this case, the spaceX{\displaystyle X}is isomorphic to the direct sum ofM{\displaystyle M}andker⁡P,{\displaystyle \ker P,}the kernel of the projectionP.{\displaystyle P.} Suppose thatX{\displaystyle X}andY{\displaystyle Y}are Banach spaces and thatT∈B(X,Y).{\displaystyle T\in B(X,Y).}There exists a canonical factorization ofT{\displaystyle T}as[24]T=T1∘π,T:X⟶πX/ker⁡T⟶T1Y{\displaystyle T=T_{1}\circ \pi ,\quad T:X{\overset {\pi }{{}\longrightarrow {}}}X/\ker T{\overset {T_{1}}{{}\longrightarrow {}}}Y}where the first mapπ{\displaystyle \pi }is the quotient map, and the second mapT1{\displaystyle T_{1}}sends every classx+ker⁡T{\displaystyle x+\ker T}in the quotient to the imageT(x){\displaystyle T(x)}inY.{\displaystyle Y.}This is well defined because all elements in the same class have the same image. The mappingT1{\displaystyle T_{1}}is a linear bijection fromX/ker⁡T{\displaystyle X/\ker T}onto the rangeT(X),{\displaystyle T(X),}whose inverse need not be bounded. 
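A finite-dimensional sketch of this factorization: for a 2×3 matrix T with one-dimensional kernel, every member of a class x + ker T has the same image, so the induced map T₁ on the quotient is well defined. The matrix below is an arbitrary example:

```python
def T(x):
    # a linear map R^3 -> R^2 given by the matrix [[1, 2, 0], [0, 1, -1]]
    return (x[0] + 2 * x[1], x[1] - x[2])

k = (-2.0, 1.0, 1.0)   # spans ker T
assert T(k) == (0.0, 0.0)

x = (1.0, 2.0, -3.0)
for t in (-2.5, 0.0, 4.0):
    # every representative of the class x + ker T has the same image,
    # so T1(x + ker T) := T(x) is well defined on the quotient
    shifted = tuple(xi + t * ki for xi, ki in zip(x, k))
    assert T(shifted) == T(x)
```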
Basic examples[25] of Banach spaces include: the Lp spaces Lp {\displaystyle L^{p}} and their special cases, the sequence spaces ℓp {\displaystyle \ell ^{p}} that consist of scalar sequences indexed by natural numbers N {\displaystyle \mathbb {N} }; among them, the space ℓ1 {\displaystyle \ell ^{1}} of absolutely summable sequences and the space ℓ2 {\displaystyle \ell ^{2}} of square summable sequences; the space c0 {\displaystyle c_{0}} of sequences tending to zero and the space ℓ∞ {\displaystyle \ell ^{\infty }} of bounded sequences; the space C(K) {\displaystyle C(K)} of continuous scalar functions on a compact Hausdorff space K, {\displaystyle K,} equipped with the max norm, ‖f‖C(K)=max{|f(x)|∣x∈K},f∈C(K). {\displaystyle \|f\|_{C(K)}=\max\{|f(x)|\mid x\in K\},\quad f\in C(K).} According to the Banach–Mazur theorem, every Banach space is isometrically isomorphic to a subspace of some C(K). {\displaystyle C(K).}[26] For every separable Banach space X, {\displaystyle X,} there is a closed subspace M {\displaystyle M} of ℓ1 {\displaystyle \ell ^{1}} such that X≅ℓ1/M. {\displaystyle X\cong \ell ^{1}/M.}[27] Any Hilbert space serves as an example of a Banach space. A Hilbert space H {\displaystyle H} over K=R,C {\displaystyle \mathbb {K} =\mathbb {R} ,\mathbb {C} } is complete for a norm of the form ‖x‖H=⟨x,x⟩, {\displaystyle \|x\|_{H}={\sqrt {\langle x,x\rangle }},} where ⟨⋅,⋅⟩:H×H→K {\displaystyle \langle \cdot ,\cdot \rangle :H\times H\to \mathbb {K} } is the inner product, linear in its first argument, that satisfies the following: ⟨y,x⟩=⟨x,y⟩¯ for all x,y∈H; ⟨x,x⟩≥0 for all x∈H; ⟨x,x⟩=0 if and only if x=0. {\displaystyle {\begin{aligned}\langle y,x\rangle &={\overline {\langle x,y\rangle }},\quad {\text{ for all }}x,y\in H\\\langle x,x\rangle &\geq 0,\quad {\text{ for all }}x\in H\\\langle x,x\rangle =0{\text{ if and only if }}x&=0.\end{aligned}}} For example, the space L2 {\displaystyle L^{2}} is a Hilbert space.
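The inclusions among these sequence spaces are strict; for instance, the sequence xₙ = 1/n lies in ℓ² but not in ℓ¹. A numerical sketch of the two behaviors (bounded partial sums of squares versus logarithmically growing ℓ¹ partial sums):

```python
import math

def l1_partial(N):
    # partial sums of sum |x_n| with x_n = 1/n; these grow like log N
    return sum(1 / n for n in range(1, N + 1))

def l2_partial_sq(N):
    # partial sums of sum |x_n|^2; these stay below pi^2/6, so x is in l2
    return sum(1 / n ** 2 for n in range(1, N + 1))

assert l2_partial_sq(100_000) < math.pi ** 2 / 6   # bounded: x in l2
assert l1_partial(100_000) > l1_partial(10_000) + 2  # keeps growing: x not in l1
```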
The Hardy spaces and the Sobolev spaces are examples of Banach spaces that are related to $L^p$ spaces and have additional structure. They are important in different branches of analysis, harmonic analysis and partial differential equations among others.

A Banach algebra is a Banach space $A$ over $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$, together with a structure of algebra over $\mathbb{K}$, such that the product map $A \times A \ni (a, b) \mapsto ab \in A$ is continuous. An equivalent norm on $A$ can be found so that $\|ab\| \leq \|a\|\,\|b\|$ for all $a, b \in A$.

If $X$ is a normed space and $\mathbb{K}$ the underlying field (either the reals or the complex numbers), the continuous dual space is the space of continuous linear maps from $X$ into $\mathbb{K}$, or continuous linear functionals. The notation for the continuous dual is $X' = B(X, \mathbb{K})$ in this article.[28] Since $\mathbb{K}$ is a Banach space (using the absolute value as norm), the dual $X'$ is a Banach space, for every normed space $X$. The Dixmier–Ng theorem characterizes the dual spaces of Banach spaces.

The main tool for proving the existence of continuous linear functionals is the Hahn–Banach theorem.
Hahn–Banach theorem — Let $X$ be a vector space over the field $\mathbb{K} = \mathbb{R}, \mathbb{C}$. Let further $Y \subseteq X$ be a linear subspace, $p : X \to \mathbb{R}$ a sublinear function, and $f : Y \to \mathbb{K}$ a linear functional satisfying $\operatorname{Re}(f(y)) \leq p(y)$ for all $y \in Y$. Then, there exists a linear functional $F : X \to \mathbb{K}$ so that
$$F\big\vert_Y = f, \quad \text{and} \quad \operatorname{Re}(F(x)) \leq p(x) \ \text{ for all } x \in X.$$

In particular, every continuous linear functional on a subspace of a normed space can be continuously extended to the whole space, without increasing the norm of the functional.[29] An important special case is the following: for every vector $x$ in a normed space $X$, there exists a continuous linear functional $f$ on $X$ such that
$$f(x) = \|x\|_X, \quad \|f\|_{X'} \leq 1.$$
When $x$ is not equal to the $\mathbf{0}$ vector, the functional $f$ must have norm one, and is called a norming functional for $x$.

The Hahn–Banach separation theorem states that two disjoint non-empty convex sets in a real Banach space, one of them open, can be separated by a closed affine hyperplane. The open convex set lies strictly on one side of the hyperplane, the second convex set lies on the other side but may touch the hyperplane.[30]

A subset $S$ in a Banach space $X$ is total if the linear span of $S$ is dense in $X$. The subset $S$ is total in $X$ if and only if the only continuous linear functional that vanishes on $S$ is the $\mathbf{0}$ functional: this equivalence follows from the Hahn–Banach theorem.
If $X$ is the direct sum of two closed linear subspaces $M$ and $N$, then the dual $X'$ of $X$ is isomorphic to the direct sum of the duals of $M$ and $N$.[31] If $M$ is a closed linear subspace in $X$, one can associate the orthogonal of $M$ in the dual,
$$M^{\bot} = \{x' \in X' \mid x'(m) = 0 \text{ for all } m \in M\}.$$
The orthogonal $M^{\bot}$ is a closed linear subspace of the dual. The dual of $M$ is isometrically isomorphic to $X'/M^{\bot}$. The dual of $X/M$ is isometrically isomorphic to $M^{\bot}$.[32]

The dual of a separable Banach space need not be separable, but:

Theorem[33] — Let $X$ be a normed space. If $X'$ is separable, then $X$ is separable.

When $X'$ is separable, the above criterion for totality can be used for proving the existence of a countable total subset in $X$.

The weak topology on a Banach space $X$ is the coarsest topology on $X$ for which all elements $x'$ in the continuous dual space $X'$ are continuous. The norm topology is therefore finer than the weak topology. It follows from the Hahn–Banach separation theorem that the weak topology is Hausdorff, and that a norm-closed convex subset of a Banach space is also weakly closed.[34] A norm-continuous linear map between two Banach spaces $X$ and $Y$ is also weakly continuous, that is, continuous from the weak topology of $X$ to that of $Y$.[35]

If $X$ is infinite-dimensional, there exist linear maps which are not continuous.
The space $X^*$ of all linear maps from $X$ to the underlying field $\mathbb{K}$ (this space $X^*$ is called the algebraic dual space, to distinguish it from $X'$) also induces a topology on $X$ which is finer than the weak topology, and much less used in functional analysis.

On a dual space $X'$, there is a topology weaker than the weak topology of $X'$, called the weak* topology. It is the coarsest topology on $X'$ for which all evaluation maps $x' \in X' \mapsto x'(x)$, where $x$ ranges over $X$, are continuous. Its importance comes from the Banach–Alaoglu theorem.

Banach–Alaoglu theorem — Let $X$ be a normed vector space. Then the closed unit ball $B' = \{x' \in X' \mid \|x'\| \leq 1\}$ of the dual space is compact in the weak* topology.

The Banach–Alaoglu theorem can be proved using Tychonoff's theorem about infinite products of compact Hausdorff spaces. When $X$ is separable, the unit ball $B'$ of the dual is a metrizable compact in the weak* topology.[36]

The dual of $c_0$ is isometrically isomorphic to $\ell^1$: for every bounded linear functional $f$ on $c_0$, there is a unique element $y = \{y_n\} \in \ell^1$ such that
$$f(x) = \sum_{n \in \mathbb{N}} x_n y_n, \qquad x = \{x_n\} \in c_0, \quad \text{and} \quad \|f\|_{(c_0)'} = \|y\|_{\ell^1}.$$

The dual of $\ell^1$ is isometrically isomorphic to $\ell^\infty$.
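The duality $(c_0)' \cong \ell^1$ can be checked on finite data. In the sketch below (illustrative values; finite truncations stand in for sequences), a summable $y$ acts on $x \in c_0$ by $f(x) = \sum_n x_n y_n$, the bound $|f(x)| \leq \|y\|_1$ holds on the unit ball, and it is attained at the sign sequence of $y$, which is finitely supported and hence lies in $c_0$:

```python
# Sketch of (c_0)' = l^1 on a finitely supported y (illustrative data).
y = [0.5, -0.25, 0.125, -0.0625]               # an element of l^1

def f(x):
    """The functional induced by y: f(x) = sum x_n y_n."""
    return sum(a * b for a, b in zip(x, y))

y_l1 = sum(abs(t) for t in y)                  # ||y||_1 = 0.9375

# |f(x)| <= ||y||_1 for x in the unit ball of c_0 ...
x = [1.0, 0.3, -0.7, 0.0]
assert abs(f(x)) <= y_l1 + 1e-12

# ... and the bound is attained on the sign sequence of y.
signs = [1.0 if t >= 0 else -1.0 for t in y]
assert abs(f(signs) - y_l1) < 1e-12
```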
The dual of the Lebesgue space $L^p([0,1])$ is isometrically isomorphic to $L^q([0,1])$ when $1 \leq p < \infty$ and $\frac{1}{p} + \frac{1}{q} = 1$.

For every vector $y$ in a Hilbert space $H$, the mapping
$$x \in H \mapsto f_y(x) = \langle x, y\rangle$$
defines a continuous linear functional $f_y$ on $H$. The Riesz representation theorem states that every continuous linear functional on $H$ is of the form $f_y$ for a uniquely defined vector $y$ in $H$. The mapping $y \in H \mapsto f_y$ is an antilinear isometric bijection from $H$ onto its dual $H'$. When the scalars are real, this map is an isometric isomorphism.

When $K$ is a compact Hausdorff topological space, the dual $M(K)$ of $C(K)$ is the space of Radon measures in the sense of Bourbaki.[37] The subset $P(K)$ of $M(K)$ consisting of non-negative measures of mass 1 (probability measures) is a convex w*-closed subset of the unit ball of $M(K)$. The extreme points of $P(K)$ are the Dirac measures on $K$. The set of Dirac measures on $K$, equipped with the w*-topology, is homeomorphic to $K$.

Banach–Stone Theorem — If $K$ and $L$ are compact Hausdorff spaces and if $C(K)$ and $C(L)$ are isometrically isomorphic, then the topological spaces $K$ and $L$ are homeomorphic.[38][39]

The result has been extended by Amir[40] and Cambern[41] to the case when the multiplicative Banach–Mazur distance between $C(K)$ and $C(L)$ is $< 2$. The theorem is no longer true when the distance is $= 2$.[42]
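The $L^p$–$L^q$ duality above rests on Hölder's inequality, $\left|\sum_n x_n y_n\right| \leq \|x\|_p \|y\|_q$ when $1/p + 1/q = 1$. A quick numerical check on toy finite vectors (all values illustrative):

```python
# Numerical sanity check of Hoelder's inequality with conjugate exponents.
def norm(v, p):
    """Discrete p-norm of a finite vector."""
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

p = 3.0
q = p / (p - 1)                       # conjugate exponent: 1/p + 1/q = 1
assert abs(1 / p + 1 / q - 1) < 1e-12

x = [0.2, -1.1, 0.7]
y = [1.5, 0.4, -0.9]
pairing = sum(a * b for a, b in zip(x, y))

# |<x, y>| <= ||x||_p * ||y||_q
assert abs(pairing) <= norm(x, p) * norm(y, q) + 1e-12
```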
In the commutative Banach algebra $C(K)$, the maximal ideals are precisely the kernels of Dirac measures on $K$,
$$I_x = \ker \delta_x = \{f \in C(K) \mid f(x) = 0\}, \quad x \in K.$$

More generally, by the Gelfand–Mazur theorem, the maximal ideals of a unital commutative Banach algebra can be identified with its characters — not merely as sets but as topological spaces: the former with the hull-kernel topology and the latter with the w*-topology. In this identification, the maximal ideal space can be viewed as a w*-compact subset of the unit ball in the dual $A'$.

Theorem — If $K$ is a compact Hausdorff space, then the maximal ideal space $\Xi$ of the Banach algebra $C(K)$ is homeomorphic to $K$.[38]

Not every unital commutative Banach algebra is of the form $C(K)$ for some compact Hausdorff space $K$. However, this statement holds if one places $C(K)$ in the smaller category of commutative C*-algebras. Gelfand's representation theorem for commutative C*-algebras states that every commutative unital C*-algebra $A$ is isometrically isomorphic to a $C(K)$ space.[43] The Hausdorff compact space $K$ here is again the maximal ideal space, also called the spectrum of $A$ in the C*-algebra context.
If $X$ is a normed space, the (continuous) dual $X''$ of the dual $X'$ is called the bidual, or second dual, of $X$. For every normed space $X$, there is a natural map
$$F_X : X \to X'', \qquad F_X(x)(f) = f(x) \quad \text{for all } x \in X \text{ and all } f \in X'.$$
This defines $F_X(x)$ as a continuous linear functional on $X'$, that is, an element of $X''$. The map $F_X : x \mapsto F_X(x)$ is a linear map from $X$ to $X''$. As a consequence of the existence of a norming functional $f$ for every $x \in X$, this map $F_X$ is isometric, thus injective.

For example, the dual of $X = c_0$ is identified with $\ell^1$, and the dual of $\ell^1$ is identified with $\ell^\infty$, the space of bounded scalar sequences. Under these identifications, $F_X$ is the inclusion map from $c_0$ to $\ell^\infty$. It is indeed isometric, but not onto.

If $F_X$ is surjective, then the normed space $X$ is called reflexive (see below). Being the dual of a normed space, the bidual $X''$ is complete; therefore, every reflexive normed space is a Banach space.

Using the isometric embedding $F_X$, it is customary to consider a normed space $X$ as a subset of its bidual. When $X$ is a Banach space, it is viewed as a closed linear subspace of $X''$. If $X$ is not reflexive, the unit ball of $X$ is a proper subset of the unit ball of $X''$. The Goldstine theorem states that the unit ball of a normed space is weakly*-dense in the unit ball of the bidual.
In other words, for every $x''$ in the bidual, there exists a net $(x_i)_{i \in I}$ in $X$ so that
$$\sup_{i \in I} \|x_i\| \leq \|x''\|, \qquad x''(f) = \lim_i f(x_i), \quad f \in X'.$$
The net may be replaced by a weakly*-convergent sequence when the dual $X'$ is separable. On the other hand, elements of the bidual of $\ell^1$ that are not in $\ell^1$ cannot be weak*-limits of sequences in $\ell^1$, since $\ell^1$ is weakly sequentially complete.

Here are the main general results about Banach spaces that go back to the time of Banach's book (Banach (1932)) and are related to the Baire category theorem. According to this theorem, a complete metric space (such as a Banach space, a Fréchet space or an F-space) cannot be equal to a union of countably many closed subsets with empty interiors. Therefore, a Banach space cannot be the union of countably many closed subspaces, unless it is already equal to one of them; a Banach space with a countable Hamel basis is finite-dimensional.

Banach–Steinhaus Theorem — Let $X$ be a Banach space and $Y$ be a normed vector space. Suppose that $F$ is a collection of continuous linear operators from $X$ to $Y$. The uniform boundedness principle states that if for all $x$ in $X$ we have $\sup_{T \in F} \|T(x)\|_Y < \infty$, then $\sup_{T \in F} \|T\|_{B(X,Y)} < \infty$.

The Banach–Steinhaus theorem is not limited to Banach spaces.
It can be extended for example to the case where $X$ is a Fréchet space, provided the conclusion is modified as follows: under the same hypothesis, there exists a neighborhood $U$ of $\mathbf{0}$ in $X$ such that all $T$ in $F$ are uniformly bounded on $U$,
$$\sup_{T \in F} \sup_{x \in U} \|T(x)\|_Y < \infty.$$

The Open Mapping Theorem — Let $X$ and $Y$ be Banach spaces and $T : X \to Y$ be a surjective continuous linear operator; then $T$ is an open map.

Corollary — Every one-to-one bounded linear operator from a Banach space onto a Banach space is an isomorphism.

The First Isomorphism Theorem for Banach spaces — Suppose that $X$ and $Y$ are Banach spaces and that $T \in B(X, Y)$. Suppose further that the range of $T$ is closed in $Y$. Then $X/\ker T$ is isomorphic to $T(X)$.

This result is a direct consequence of the preceding Banach isomorphism theorem and of the canonical factorization of bounded linear maps.

Corollary — If a Banach space $X$ is the internal direct sum of closed subspaces $M_1, \ldots, M_n$, then $X$ is isomorphic to $M_1 \oplus \cdots \oplus M_n$.

This is another consequence of Banach's isomorphism theorem, applied to the continuous bijection from $M_1 \oplus \cdots \oplus M_n$ onto $X$ sending $(m_1, \ldots, m_n)$ to the sum $m_1 + \cdots + m_n$.

The Closed Graph Theorem — Let $T : X \to Y$ be a linear mapping between Banach spaces. The graph of $T$ is closed in $X \times Y$ if and only if $T$ is continuous.
The normed space $X$ is called reflexive when the natural map
$$F_X : X \to X'', \qquad F_X(x)(f) = f(x) \quad \text{for all } x \in X \text{ and all } f \in X'$$
is surjective. Reflexive normed spaces are Banach spaces.

Theorem — If $X$ is a reflexive Banach space, every closed subspace of $X$ and every quotient space of $X$ are reflexive.

This is a consequence of the Hahn–Banach theorem. Further, by the open mapping theorem, if there is a bounded linear operator from the Banach space $X$ onto the Banach space $Y$, then $Y$ is reflexive.

Theorem — If $X$ is a Banach space, then $X$ is reflexive if and only if $X'$ is reflexive.

Corollary — Let $X$ be a reflexive Banach space. Then $X$ is separable if and only if $X'$ is separable.

Indeed, if the dual $Y'$ of a Banach space $Y$ is separable, then $Y$ is separable. If $X$ is reflexive and separable, then the dual of $X'$ is separable, so $X'$ is separable.

Theorem — Suppose that $X_1, \ldots, X_n$ are normed spaces and that $X = X_1 \oplus \cdots \oplus X_n$. Then $X$ is reflexive if and only if each $X_j$ is reflexive.

Hilbert spaces are reflexive. The $L^p$ spaces are reflexive when $1 < p < \infty$. More generally, uniformly convex spaces are reflexive, by the Milman–Pettis theorem. The spaces $c_0, \ell^1, L^1([0,1]), C([0,1])$ are not reflexive.
In these examples of non-reflexive spaces $X$, the bidual $X''$ is "much larger" than $X$. Namely, under the natural isometric embedding of $X$ into $X''$ given by the Hahn–Banach theorem, the quotient $X''/X$ is infinite-dimensional, and even nonseparable. However, Robert C. James has constructed an example[44] of a non-reflexive space, usually called "the James space" and denoted by $J$,[45] such that the quotient $J''/J$ is one-dimensional. Furthermore, this space $J$ is isometrically isomorphic to its bidual.

Theorem — A Banach space $X$ is reflexive if and only if its unit ball is compact in the weak topology.

When $X$ is reflexive, it follows that all closed and bounded convex subsets of $X$ are weakly compact. In a Hilbert space $H$, the weak compactness of the unit ball is very often used in the following way: every bounded sequence in $H$ has weakly convergent subsequences.

Weak compactness of the unit ball provides a tool for finding solutions in reflexive spaces to certain optimization problems. For example, every convex continuous function on the unit ball $B$ of a reflexive space attains its minimum at some point in $B$. As a special case of the preceding result, when $X$ is a reflexive space over $\mathbb{R}$, every continuous linear functional $f$ in $X'$ attains its maximum $\|f\|$ on the unit ball of $X$. The following theorem of Robert C. James provides a converse statement.

James' Theorem — For a Banach space the following two properties are equivalent: the space is reflexive, and every continuous linear functional on the space attains its supremum on the closed unit ball.

The theorem can be extended to give a characterization of weakly compact convex sets. On every non-reflexive Banach space $X$, there exist continuous linear functionals that are not norm-attaining.
However, the Bishop–Phelps theorem[46] states that norm-attaining functionals are norm dense in the dual $X'$ of $X$.

A sequence $\{x_n\}$ in a Banach space $X$ is weakly convergent to a vector $x \in X$ if $\{f(x_n)\}$ converges to $f(x)$ for every continuous linear functional $f$ in the dual $X'$. The sequence $\{x_n\}$ is a weakly Cauchy sequence if $\{f(x_n)\}$ converges to a scalar limit $L(f)$ for every $f$ in $X'$. A sequence $\{f_n\}$ in the dual $X'$ is weakly* convergent to a functional $f \in X'$ if $f_n(x)$ converges to $f(x)$ for every $x$ in $X$. Weakly Cauchy sequences, weakly convergent and weakly* convergent sequences are norm bounded, as a consequence of the Banach–Steinhaus theorem.

When the sequence $\{x_n\}$ in $X$ is a weakly Cauchy sequence, the limit $L$ above defines a bounded linear functional on the dual $X'$, that is, an element $L$ of the bidual of $X$, and $L$ is the limit of $\{x_n\}$ in the weak*-topology of the bidual. The Banach space $X$ is weakly sequentially complete if every weakly Cauchy sequence is weakly convergent in $X$. It follows from the preceding discussion that reflexive spaces are weakly sequentially complete.

Theorem[47] — For every measure $\mu$, the space $L^1(\mu)$ is weakly sequentially complete.

An orthonormal sequence in a Hilbert space is a simple example of a weakly convergent sequence, with limit equal to the $\mathbf{0}$ vector.
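A minimal numerical illustration of this last point: pairing the standard unit vectors $e_n$ of $\ell^2$ against a fixed square-summable $y$ gives $\langle e_n, y\rangle = y_n \to 0$, although $\|e_n\| = 1$ for every $n$, so the convergence is weak rather than in norm. The choice $y_n = 1/(n+1)$ below is purely illustrative:

```python
# Weak convergence of an orthonormal sequence, sketched in l^2.
# <e_n, y> = y_n for the standard unit vectors e_n; the pairings tend
# to 0 even though every e_n has norm exactly 1.
N = 1000
y = [1.0 / (n + 1) for n in range(N)]   # a square-summable sequence

pairings = [y[n] for n in range(N)]     # <e_n, y> = y_n

assert all(abs(t) <= 1.0 for t in pairings)
assert abs(pairings[-1]) < 1e-2         # tail pairings are small ...
# ... yet ||e_n|| = 1 for all n, so e_n does not converge in norm.
```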
The unit vector basis of $\ell^p$ for $1 < p < \infty$, or of $c_0$, is another example of a weakly null sequence, that is, a sequence that converges weakly to $\mathbf{0}$. For every weakly null sequence in a Banach space, there exists a sequence of convex combinations of vectors from the given sequence that is norm-converging to $\mathbf{0}$.[48]

The unit vector basis of $\ell^1$ is not weakly Cauchy. Weakly Cauchy sequences in $\ell^1$ are weakly convergent, since $L^1$-spaces are weakly sequentially complete. Actually, weakly convergent sequences in $\ell^1$ are norm convergent.[49] This means that $\ell^1$ satisfies Schur's property. Weakly Cauchy sequences and the $\ell^1$ basis are the opposite cases of the dichotomy established in the following deep result of H. P. Rosenthal.[50]

Theorem[51] — Let $\{x_n\}_{n \in \mathbb{N}}$ be a bounded sequence in a Banach space. Either $\{x_n\}_{n \in \mathbb{N}}$ has a weakly Cauchy subsequence, or it admits a subsequence equivalent to the standard unit vector basis of $\ell^1$.

A complement to this result is due to Odell and Rosenthal (1975).

Theorem[52] — Let $X$ be a separable Banach space. The following are equivalent: $X$ does not contain a closed subspace isomorphic to $\ell^1$; and every element of the bidual $X''$ is the weak*-limit of a sequence in $X$.

By the Goldstine theorem, every element of the unit ball $B''$ of $X''$ is the weak*-limit of a net in the unit ball of $X$. When $X$ does not contain $\ell^1$, every element of $B''$ is the weak*-limit of a sequence in the unit ball of $X$.[53]

When the Banach space $X$ is separable, the unit ball of the dual $X'$, equipped with the weak*-topology, is a metrizable compact space $K$,[36] and every element $x''$ in the bidual $X''$ defines a bounded function on $K$:
$$x' \in K \mapsto x''(x'), \quad |x''(x')| \leq \|x''\|.$$
This function is continuous for the compact topology of $K$ if and only if $x''$ is actually in $X$, considered as a subset of $X''$. Assume in addition for the rest of the paragraph that $X$ does not contain $\ell^1$. By the preceding result of Odell and Rosenthal, the function $x''$ is the pointwise limit on $K$ of a sequence $\{x_n\} \subseteq X$ of continuous functions on $K$; it is therefore a first Baire class function on $K$. The unit ball of the bidual is a pointwise compact subset of the first Baire class on $K$.[54]

When $X$ is separable, the unit ball of the dual is weak*-compact by the Banach–Alaoglu theorem and metrizable for the weak* topology,[36] hence every bounded sequence in the dual has weakly* convergent subsequences. This applies to separable reflexive spaces, but more is true in this case, as stated below.
The weak topology of a Banach space $X$ is metrizable if and only if $X$ is finite-dimensional.[55] If the dual $X'$ is separable, the weak topology of the unit ball of $X$ is metrizable. This applies in particular to separable reflexive Banach spaces. Although the weak topology of the unit ball is not metrizable in general, one can characterize weak compactness using sequences.

Eberlein–Šmulian theorem[56] — A set $A$ in a Banach space is relatively weakly compact if and only if every sequence $\{a_n\}$ in $A$ has a weakly convergent subsequence.

A Banach space $X$ is reflexive if and only if each bounded sequence in $X$ has a weakly convergent subsequence.[57] A weakly compact subset $A$ in $\ell^1$ is norm-compact. Indeed, every sequence in $A$ has weakly convergent subsequences by Eberlein–Šmulian, which are norm convergent by the Schur property of $\ell^1$.

A way to classify Banach spaces is through the probabilistic notion of type and cotype; these two measure how far a Banach space is from a Hilbert space.

A Schauder basis in a Banach space $X$ is a sequence $\{e_n\}_{n \geq 0}$ of vectors in $X$ with the property that for every vector $x \in X$, there exist uniquely defined scalars $\{x_n\}_{n \geq 0}$ depending on $x$, such that
$$x = \sum_{n=0}^{\infty} x_n e_n, \quad \text{that is,} \quad x = \lim_n P_n(x), \qquad P_n(x) := \sum_{k=0}^{n} x_k e_k.$$
Banach spaces with a Schauder basis are necessarily separable, because the countable set of finite linear combinations with rational coefficients (say) is dense.
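The partial-sum projections $P_n$ can be sketched in the simplest case, the unit vector basis of $\ell^1$, with illustrative coefficients $x_n = 2^{-n}$; the errors $\|x - P_n(x)\|_1$ then decrease monotonically to $0$:

```python
# Partial sums P_n(x) for the unit vector basis of l^1 (a sketch on a
# finite truncation with illustrative coefficients x_n = 2^{-n}).
x = [2.0 ** (-n) for n in range(30)]

def tail_norm(n):
    """||x - P_n(x)||_1: the l^1 mass of the coordinates beyond index n."""
    return sum(x[n + 1:])

errs = [tail_norm(n) for n in range(29)]

# The approximation error decreases strictly and becomes negligible,
# illustrating x = lim_n P_n(x) in norm.
assert all(errs[i + 1] < errs[i] for i in range(len(errs) - 1))
assert errs[-1] < 1e-7
```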
It follows from the Banach–Steinhaus theorem that the linear mappings $\{P_n\}$ are uniformly bounded by some constant $C$. Let $\{e_n^*\}$ denote the coordinate functionals which assign to every $x$ in $X$ the coordinate $x_n$ of $x$ in the above expansion. They are called biorthogonal functionals. When the basis vectors have norm $1$, the coordinate functionals $\{e_n^*\}$ have norm $\leq 2C$ in the dual of $X$.

Most classical separable spaces have explicit bases. The Haar system $\{h_n\}$ is a basis for $L^p([0,1])$ when $1 \leq p < \infty$. The trigonometric system is a basis in $L^p(\mathbf{T})$ when $1 < p < \infty$. The Schauder system is a basis in the space $C([0,1])$.[58] The question of whether the disk algebra $A(\mathbf{D})$ has a basis[59] remained open for more than forty years, until Bočkarev showed in 1974 that $A(\mathbf{D})$ admits a basis constructed from the Franklin system.[60]

Since every vector $x$ in a Banach space $X$ with a basis is the limit of $P_n(x)$, with $P_n$ of finite rank and uniformly bounded, the space $X$ satisfies the bounded approximation property. The first example by Enflo of a space failing the approximation property was at the same time the first example of a separable Banach space without a Schauder basis.[61]

Robert C. James characterized reflexivity in Banach spaces with a basis: the space $X$ with a Schauder basis is reflexive if and only if the basis is both shrinking and boundedly complete.[62] In this case, the biorthogonal functionals form a basis of the dual of $X$.

Let $X$ and $Y$ be two $\mathbb{K}$-vector spaces. The tensor product $X \otimes Y$ of $X$ and $Y$ is a $\mathbb{K}$-vector space $Z$ with a bilinear mapping $T : X \times Y \to Z$ which has the following universal property: for every bilinear mapping $T_1 : X \times Y \to Z_1$ into a $\mathbb{K}$-vector space $Z_1$, there is a unique linear mapping $f : Z \to Z_1$ such that $T_1 = f \circ T$.

The image under $T$ of a couple $(x, y)$ in $X \times Y$ is denoted by $x \otimes y$, and called a simple tensor. Every element $z$ in $X \otimes Y$ is a finite sum of such simple tensors. There are various norms that can be placed on the tensor product of the underlying vector spaces, amongst others the projective cross norm and injective cross norm introduced by A. Grothendieck in 1955.[63]

In general, the tensor product of complete spaces is not complete again.
When working with Banach spaces, it is customary to say that the projective tensor product[64] of two Banach spaces $X$ and $Y$ is the completion $X \widehat{\otimes}_\pi Y$ of the algebraic tensor product $X \otimes Y$ equipped with the projective tensor norm, and similarly for the injective tensor product[65] $X \widehat{\otimes}_\varepsilon Y$. Grothendieck proved in particular that[66]
$$\begin{aligned}C(K) \widehat{\otimes}_\varepsilon Y &\simeq C(K, Y),\\ L^1([0,1]) \widehat{\otimes}_\pi Y &\simeq L^1([0,1], Y),\end{aligned}$$
where $K$ is a compact Hausdorff space, $C(K, Y)$ the Banach space of continuous functions from $K$ to $Y$ and $L^1([0,1], Y)$ the space of Bochner-measurable and integrable functions from $[0,1]$ to $Y$, and where the isomorphisms are isometric. The two isomorphisms above are the respective extensions of the map sending the tensor $f \otimes y$ to the vector-valued function $s \in K \mapsto f(s)y \in Y$.

Let $X$ be a Banach space. The tensor product $X' \widehat{\otimes}_\varepsilon X$ is identified isometrically with the closure in $B(X)$ of the set of finite rank operators. When $X$ has the approximation property, this closure coincides with the space of compact operators on $X$.

For every Banach space $Y$, there is a natural norm $1$ linear map
$$Y \widehat{\otimes}_\pi X \to Y \widehat{\otimes}_\varepsilon X$$
obtained by extending the identity map of the algebraic tensor product.
Grothendieck related the approximation problem to the question of whether this map is one-to-one when $Y$ is the dual of $X$. Precisely, for every Banach space $X$, the map
$$X' \widehat{\otimes}_\pi X \longrightarrow X' \widehat{\otimes}_\varepsilon X$$
is one-to-one if and only if $X$ has the approximation property.[67]

Grothendieck conjectured that $X \widehat{\otimes}_\pi Y$ and $X \widehat{\otimes}_\varepsilon Y$ must be different whenever $X$ and $Y$ are infinite-dimensional Banach spaces. This was disproved by Gilles Pisier in 1983.[68] Pisier constructed an infinite-dimensional Banach space $X$ such that $X \widehat{\otimes}_\pi X$ and $X \widehat{\otimes}_\varepsilon X$ are equal. Furthermore, just as in Enflo's example, this space $X$ is a "hand-made" space that fails to have the approximation property. On the other hand, Szankowski proved that the classical space $B(\ell^2)$ does not have the approximation property.[69]

A necessary and sufficient condition for the norm of a Banach space $X$ to be associated to an inner product is the parallelogram identity:

Parallelogram identity — for all $x, y \in X$:
$$\|x + y\|^2 + \|x - y\|^2 = 2\left(\|x\|^2 + \|y\|^2\right).$$

It follows, for example, that the Lebesgue space $L^p([0,1])$ is a Hilbert space only when $p = 2$. If this identity is satisfied, the associated inner product is given by the polarization identity.
In the case of real scalars, this gives:⟨x,y⟩=14(‖x+y‖2−‖x−y‖2).{\displaystyle \langle x,y\rangle ={\tfrac {1}{4}}(\|x+y\|^{2}-\|x-y\|^{2}).} For complex scalars, defining theinner productso as to beC{\displaystyle \mathbb {C} }-linear inx,{\displaystyle x,}antilineariny,{\displaystyle y,}the polarization identity gives:⟨x,y⟩=14(‖x+y‖2−‖x−y‖2+i(‖x+iy‖2−‖x−iy‖2)).{\displaystyle \langle x,y\rangle ={\tfrac {1}{4}}\left(\|x+y\|^{2}-\|x-y\|^{2}+i(\|x+iy\|^{2}-\|x-iy\|^{2})\right).} To see that the parallelogram law is sufficient, one observes in the real case that⟨x,y⟩{\displaystyle \langle x,y\rangle }is symmetric, and in the complex case, that it satisfies theHermitian symmetryproperty and⟨ix,y⟩=i⟨x,y⟩.{\displaystyle \langle ix,y\rangle =i\langle x,y\rangle .}The parallelogram law implies that⟨x,y⟩{\displaystyle \langle x,y\rangle }is additive inx.{\displaystyle x.}It follows that it is linear over the rationals, thus linear by continuity. Several characterizations of spaces isomorphic (rather than isometric) to Hilbert spaces are available. The parallelogram law can be extended to more than two vectors, and weakened by the introduction of a two-sided inequality with a constantc≥1{\displaystyle c\geq 1}: Kwapień proved that ifc−2∑k=1n‖xk‖2≤Ave±⁡‖∑k=1n±xk‖2≤c2∑k=1n‖xk‖2{\displaystyle c^{-2}\sum _{k=1}^{n}\|x_{k}\|^{2}\leq \operatorname {Ave} _{\pm }\left\|\sum _{k=1}^{n}\pm x_{k}\right\|^{2}\leq c^{2}\sum _{k=1}^{n}\|x_{k}\|^{2}}for every integern{\displaystyle n}and all families of vectors{x1,…,xn}⊆X,{\displaystyle \{x_{1},\ldots ,x_{n}\}\subseteq X,}then the Banach spaceX{\displaystyle X}is isomorphic to a Hilbert space.[70]Here,Ave±{\displaystyle \operatorname {Ave} _{\pm }}denotes the average over the2n{\displaystyle 2^{n}}possible choices of signs±1.{\displaystyle \pm 1.}In the same article, Kwapień proved that the validity of a Banach-valuedParseval's theoremfor the Fourier transform characterizes Banach spaces isomorphic to Hilbert spaces. 
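The identities above are easy to check numerically. The sketch below (illustrative, with arbitrarily chosen vectors) verifies the parallelogram and real polarization identities for the Euclidean norm, checks the sign-averaged generalization appearing in Kwapień's condition with c = 1, and shows the 1-norm on R² violating the parallelogram identity, so that norm comes from no inner product.

```python
import numpy as np
from itertools import product

# Numeric checks (illustrative) of the parallelogram and polarization
# identities for the Euclidean norm, and of their failure for the 1-norm.
rng = np.random.default_rng(1)
x, y = rng.standard_normal(5), rng.standard_normal(5)
nrm = np.linalg.norm                       # Euclidean norm (from the dot product)

lhs = nrm(x + y)**2 + nrm(x - y)**2
rhs = 2 * (nrm(x)**2 + nrm(y)**2)
assert np.isclose(lhs, rhs)                # parallelogram identity

inner = 0.25 * (nrm(x + y)**2 - nrm(x - y)**2)
assert np.isclose(inner, x @ y)            # polarization recovers <x, y>

# Generalized parallelogram law (Kwapień's condition with c = 1):
# averaging ||sum of ±x_k||^2 over all sign choices gives sum of ||x_k||^2.
xs = rng.standard_normal((3, 5))
ave = np.mean([nrm(np.array(e) @ xs)**2 for e in product([1, -1], repeat=3)])
assert np.isclose(ave, sum(nrm(v)**2 for v in xs))

# The 1-norm on R^2 violates the parallelogram identity (8 vs 4):
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
p = lambda v: np.abs(v).sum()
assert p(e1 + e2)**2 + p(e1 - e2)**2 != 2 * (p(e1)**2 + p(e2)**2)
```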
Lindenstrauss and Tzafriri proved that a Banach space in which every closed linear subspace is complemented (that is, is the range of a bounded linear projection) is isomorphic to a Hilbert space.[71]The proof rests uponDvoretzky's theoremabout Euclidean sections of high-dimensional centrally symmetric convex bodies. In other words, Dvoretzky's theorem states that for every integern,{\displaystyle n,}any finite-dimensional normed space, with dimension sufficiently large compared ton,{\displaystyle n,}contains subspaces nearly isometric to then{\displaystyle n}-dimensional Euclidean space. The next result gives the solution of the so-calledhomogeneous space problem. An infinite-dimensional Banach spaceX{\displaystyle X}is said to behomogeneousif it is isomorphic to all its infinite-dimensional closed subspaces. A Banach space isomorphic toℓ2{\displaystyle \ell ^{2}}is homogeneous, and Banach asked for the converse.[72] Theorem[73]—A Banach space isomorphic to all its infinite-dimensional closed subspaces is isomorphic to a separable Hilbert space. An infinite-dimensional Banach space ishereditarily indecomposablewhen no subspace of it can be isomorphic to the direct sum of two infinite-dimensional Banach spaces. TheGowersdichotomy theorem[73]asserts that every infinite-dimensional Banach spaceX{\displaystyle X}contains, either a subspaceY{\displaystyle Y}withunconditional basis, or a hereditarily indecomposable subspaceZ,{\displaystyle Z,}and in particular,Z{\displaystyle Z}is not isomorphic to its closed hyperplanes.[74]IfX{\displaystyle X}is homogeneous, it must therefore have an unconditional basis. 
It follows then from the partial solution obtained by Komorowski and Tomczak–Jaegermann, for spaces with an unconditional basis,[75] that X is isomorphic to ℓ². If T : X → Y is an isometry from the Banach space X onto the Banach space Y (where both X and Y are vector spaces over ℝ), then the Mazur–Ulam theorem states that T must be an affine transformation. In particular, if T(0_X) = 0_Y, that is, if T maps the zero of X to the zero of Y, then T must be linear. This result implies that the metric of a Banach space, and more generally of a normed space, completely captures its linear structure. Finite-dimensional Banach spaces are homeomorphic as topological spaces if and only if they have the same dimension as real vector spaces. The Anderson–Kadec theorem (1965–66) states[76] that any two infinite-dimensional separable Banach spaces are homeomorphic as topological spaces. Kadec's theorem was extended by Toruńczyk, who proved[77] that any two Banach spaces are homeomorphic if and only if they have the same density character, the minimum cardinality of a dense subset. When two compact Hausdorff spaces K₁ and K₂ are homeomorphic, the Banach spaces C(K₁) and C(K₂) are isometric. Conversely, when K₁ is not homeomorphic to K₂, the (multiplicative) Banach–Mazur distance between C(K₁) and C(K₂) must be greater than or equal to 2; see the results by Amir and Cambern above. Although uncountable compact metric spaces can have different homeomorphism types, one has the following result due to Milutin:[78] Theorem[79] — Let K be an uncountable compact metric space.
Then C(K) is isomorphic to C([0,1]). The situation is different for countably infinite compact Hausdorff spaces. Every countably infinite compact K is homeomorphic to some closed interval of ordinal numbers ⟨1, α⟩ = {γ ∣ 1 ≤ γ ≤ α} equipped with the order topology, where α is a countably infinite ordinal.[80] The Banach space C(K) is then isometric to C(⟨1, α⟩). When α, β are two countably infinite ordinals with α ≤ β, the spaces C(⟨1, α⟩) and C(⟨1, β⟩) are isomorphic if and only if β < α^ω.[81] For example, the Banach spaces C(⟨1, ω⟩), C(⟨1, ω^ω⟩), C(⟨1, ω^(ω²)⟩), C(⟨1, ω^(ω³)⟩), …, C(⟨1, ω^(ω^ω)⟩), … are mutually non-isomorphic. Several concepts of a derivative may be defined on a Banach space. See the articles on the Fréchet derivative and the Gateaux derivative for details. The Fréchet derivative allows for an extension of the concept of a total derivative to Banach spaces. The Gateaux derivative allows for an extension of a directional derivative to locally convex topological vector spaces. Fréchet differentiability is a stronger condition than Gateaux differentiability. The quasi-derivative is another generalization of directional derivative that implies a stronger condition than Gateaux differentiability, but a weaker condition than Fréchet differentiability.
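In finite dimensions the two derivative notions coincide for differentiable maps. The minimal sketch below (illustrative; the map f and direction h are arbitrarily chosen) compares a finite-difference Gateaux directional derivative of f : R² → R with the Fréchet derivative, here the gradient, applied to the same direction.

```python
import numpy as np

# Illustrative sketch: for a Fréchet-differentiable f, the Gateaux
# (directional) derivative in direction h equals Df(a) applied to h.
f = lambda v: v[0]**2 * v[1]
grad = lambda v: np.array([2 * v[0] * v[1], v[0]**2])   # exact gradient

a = np.array([1.0, 2.0])
h = np.array([0.3, -0.7])

t = 1e-6
gateaux = (f(a + t * h) - f(a)) / t      # directional derivative at a
frechet = grad(a) @ h                    # linear map Df(a) applied to h
print(gateaux, frechet)                  # agree up to O(t)
```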
Several important spaces in functional analysis, for instance the space of all infinitely often differentiable functionsR→R,{\displaystyle \mathbb {R} \to \mathbb {R} ,}or the space of alldistributionsonR,{\displaystyle \mathbb {R} ,}are complete but are not normed vector spaces and hence not Banach spaces. InFréchet spacesone still has a completemetric, whileLF-spacesare completeuniformvector spaces arising as limits of Fréchet spaces.
https://en.wikipedia.org/wiki/Banach_space
In mathematics, a contraction mapping, or contraction or contractor, on a metric space (M, d) is a function f from M to itself, with the property that there is some real number 0 ≤ k < 1 such that for all x and y in M, d(f(x), f(y)) ≤ k · d(x, y). The smallest such value of k is called the Lipschitz constant of f. Contractive maps are sometimes called Lipschitzian maps. If the above condition is instead satisfied for k ≤ 1, then the mapping is said to be a non-expansive map. More generally, the idea of a contractive mapping can be defined for maps between metric spaces. Thus, if (M, d) and (N, d′) are two metric spaces, then f : M → N is a contractive mapping if there is a constant 0 ≤ k < 1 such that d′(f(x), f(y)) ≤ k · d(x, y) for all x and y in M. Every contraction mapping is Lipschitz continuous and hence uniformly continuous (for a Lipschitz continuous function, the constant k is no longer necessarily less than 1). A contraction mapping has at most one fixed point. Moreover, the Banach fixed-point theorem states that every contraction mapping on a non-empty complete metric space has a unique fixed point, and that for any x in M the iterated function sequence x, f(x), f(f(x)), f(f(f(x))), ... converges to the fixed point. This concept is very useful for iterated function systems where contraction mappings are often used. Banach's fixed-point theorem is also applied in proving the existence of solutions of ordinary differential equations, and is used in one proof of the inverse function theorem.[1] Contraction mappings play an important role in dynamic programming problems.[2][3] A non-expansive mapping with k = 1 can be generalized to a firmly non-expansive mapping in a Hilbert space H if the following holds for all x and y in H: ‖f(x) − f(y)‖² ≤ ⟨x − y, f(x) − f(y)⟩, where ⟨·,·⟩ denotes the inner product of H. This is a special case of α-averaged nonexpansive operators with α = 1/2.[4] A firmly non-expansive mapping is always non-expansive, via the Cauchy–Schwarz inequality.
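The fixed-point iteration of the Banach theorem can be sketched directly. In the example below (illustrative, with an arbitrarily chosen map), f(x) = cos(x) is a contraction on [0, 1], since |f′(x)| = |sin x| ≤ sin 1 < 1 there and cos maps [0, 1] into itself, so the iterates converge to the unique fixed point.

```python
import math

# Sketch of the Banach fixed-point iteration for the contraction
# f(x) = cos(x) on [0, 1]: iterate x, f(x), f(f(x)), ... to the fixed point.
def fixed_point(f, x, tol=1e-12, max_iter=10_000):
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

z = fixed_point(math.cos, 0.5)
print(z)                          # ~0.7390851332 (the Dottie number)
assert abs(math.cos(z) - z) < 1e-10
```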
The class of firmly non-expansive maps is closed under convex combinations, but not compositions.[5] This class includes proximal mappings of proper, convex, lower-semicontinuous functions, hence it also includes orthogonal projections onto non-empty closed convex sets. The class of firmly nonexpansive operators is equal to the set of resolvents of maximally monotone operators.[6] Surprisingly, while iterating non-expansive maps has no guarantee of finding a fixed point (e.g. multiplication by −1), firm non-expansiveness is sufficient to guarantee global convergence to a fixed point, provided a fixed point exists. More precisely, if Fix f := {x ∈ H | f(x) = x} ≠ ∅, then for any initial point x₀ ∈ H, iterating (∀n ∈ ℕ) xₙ₊₁ = f(xₙ) yields convergence to a fixed point xₙ → z ∈ Fix f. This convergence might be weak in an infinite-dimensional setting.[5] A subcontraction map or subcontractor is a map f on a metric space (M, d) such that d(f(x), f(y)) ≤ d(x, y) and d(f(f(x)), f(x)) < d(f(x), x) unless x = f(x). If the image of a subcontractor f is compact, then f has a fixed point.[7] In a locally convex space (E, P) with topology given by a set P of seminorms, one can define for any p ∈ P a p-contraction as a map f such that there is some k_p < 1 with p(f(x) − f(y)) ≤ k_p · p(x − y). If f is a p-contraction for all p ∈ P and (E, P) is sequentially complete, then f has a fixed point, given as the limit of any sequence xₙ₊₁ = f(xₙ), and if (E, P) is Hausdorff, then the fixed point is unique.[8]
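The global convergence of iterates of firmly non-expansive maps can be sketched numerically. In the example below (illustrative; the two convex sets are arbitrarily chosen), orthogonal projections onto a ball and a halfspace are firmly non-expansive, and so is their average, since the class is closed under convex combinations; iterating the average converges to a fixed point lying in the intersection of the two sets.

```python
import numpy as np

# Illustrative sketch: iterating the average of two orthogonal
# projections (a firmly non-expansive map) converges to a fixed point.
def proj_ball(z, r=1.0):                 # projection onto {||z|| <= r}
    n = np.linalg.norm(z)
    return z if n <= r else r * z / n

def proj_halfspace(z, c=0.5):            # projection onto {z[0] >= c}
    w = z.copy()
    w[0] = max(w[0], c)
    return w

T = lambda z: 0.5 * (proj_ball(z) + proj_halfspace(z))

z = np.array([3.0, 2.0])
for _ in range(500):
    z = T(z)

print(z, np.linalg.norm(T(z) - z))       # fixed point, residual ~0
assert np.linalg.norm(z) <= 1.0 + 1e-6 and z[0] >= 0.5 - 1e-6
```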
https://en.wikipedia.org/wiki/Contraction_mapping
In functional analysis, the Hahn–Banach theorem is a central result that allows the extension of bounded linear functionals defined on a vector subspace of some vector space to the whole space. The theorem also shows that there are enough continuous linear functionals defined on every normed vector space to make the study of the dual space interesting. Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem, and has numerous uses in convex geometry. The theorem is named for the mathematicians Hans Hahn and Stefan Banach, who proved it independently in the late 1920s. The special case of the theorem for the space C[a, b] of continuous functions on an interval was proved earlier (in 1912) by Eduard Helly,[1] and a more general extension theorem, the M. Riesz extension theorem, from which the Hahn–Banach theorem can be derived, was proved in 1923 by Marcel Riesz.[2] The first Hahn–Banach theorem was proved by Eduard Helly in 1912, who showed that certain linear functionals defined on a subspace of a certain type of normed space (ℂ^ℕ) had an extension of the same norm. Helly did this through the technique of first proving that a one-dimensional extension exists (where the linear functional has its domain extended by one dimension) and then using induction. In 1927, Hahn defined general Banach spaces and used Helly's technique to prove a norm-preserving version of the Hahn–Banach theorem for Banach spaces (where a bounded linear functional on a subspace has a bounded linear extension of the same norm to the whole space). In 1929, Banach, who was unaware of Hahn's result, generalized it by replacing the norm-preserving version with the dominated extension version that uses sublinear functions. Whereas Helly's proof used mathematical induction, Hahn and Banach both used transfinite induction.[3] The Hahn–Banach theorem arose from attempts to solve infinite systems of linear equations.
This is needed to solve problems such as themoment problem, whereby given all the potentialmoments of a functionone must determine if a function having these moments exists, and, if so, find it in terms of those moments. Another such problem is theFourier cosine seriesproblem, whereby given all the potential Fourier cosine coefficients one must determine if a function having those coefficients exists, and, again, find it if so. Riesz and Helly solved the problem for certain classes of spaces (such asLp([0,1]){\displaystyle L^{p}([0,1])}andC([a,b]){\displaystyle C([a,b])}) where they discovered that the existence of a solution was equivalent to the existence and continuity of certain linear functionals. In effect, they needed to solve the following problem:[3] IfX{\displaystyle X}happens to be areflexive spacethen to solve the vector problem, it suffices to solve the following dual problem:[3] Riesz went on to defineLp([0,1]){\displaystyle L^{p}([0,1])}space(1<p<∞{\displaystyle 1<p<\infty }) in 1910 and theℓp{\displaystyle \ell ^{p}}spaces in 1913. While investigating these spaces he proved a special case of the Hahn–Banach theorem. Helly also proved a special case of the Hahn–Banach theorem in 1912. In 1910, Riesz solved the functional problem for some specific spaces and in 1912, Helly solved it for a more general class of spaces. It wasn't until 1932 that Banach, in one of the first important applications of the Hahn–Banach theorem, solved the general functional problem. 
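In finite dimensions the flavor of the functional problem can be explored numerically. The sketch below (illustrative; the data are randomly generated and consistent by construction) interpolates f(x_i) = c_i in R^n with the Euclidean norm by a minimum-norm representer w, and then checks the characteristic inequality |Σ s_i c_i| ≤ K ‖Σ s_i x_i‖ with K = ‖w‖, which is self-dual for the Euclidean norm.

```python
import numpy as np

# Illustrative finite-dimensional check of the functional problem:
# f(x) = <w, x> with f(x_i) = c_i exists for consistent data, and then
# K = ||w|| satisfies |sum s_i c_i| <= K ||sum s_i x_i|| for all s.
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 5))             # rows are the vectors x_i
w_true = rng.standard_normal(5)
c = X @ w_true                              # consistent data c_i = f(x_i)

w, *_ = np.linalg.lstsq(X, c, rcond=None)   # minimum-norm solution of Xw = c
K = np.linalg.norm(w)

for _ in range(100):                        # random finite scalar families s
    s = rng.standard_normal(3)
    lhs = abs(s @ c)                        # |sum s_i c_i|
    rhs = K * np.linalg.norm(s @ X)         # K * ||sum s_i x_i||
    assert lhs <= rhs + 1e-9
```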
The following theorem states the general functional problem and characterizes its solution.[3] Theorem[3](The functional problem)—Let(xi)i∈I{\displaystyle \left(x_{i}\right)_{i\in I}}be vectors in arealorcomplexnormed spaceX{\displaystyle X}and let(ci)i∈I{\displaystyle \left(c_{i}\right)_{i\in I}}be scalars alsoindexed byI≠∅.{\displaystyle I\neq \varnothing .} There exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such thatf(xi)=ci{\displaystyle f\left(x_{i}\right)=c_{i}}for alli∈I{\displaystyle i\in I}if and only if there exists aK>0{\displaystyle K>0}such that for any choice of scalars(si)i∈I{\displaystyle \left(s_{i}\right)_{i\in I}}where all but finitely manysi{\displaystyle s_{i}}are0,{\displaystyle 0,}the following holds:|∑i∈Isici|≤K‖∑i∈Isixi‖.{\displaystyle \left|\sum _{i\in I}s_{i}c_{i}\right|\leq K\left\|\sum _{i\in I}s_{i}x_{i}\right\|.} The Hahn–Banach theorem can be deduced from the above theorem.[3]IfX{\displaystyle X}isreflexivethen this theorem solves the vector problem. A real-valued functionf:M→R{\displaystyle f:M\to \mathbb {R} }defined on a subsetM{\displaystyle M}ofX{\displaystyle X}is said to bedominated (above) bya functionp:X→R{\displaystyle p:X\to \mathbb {R} }iff(m)≤p(m){\displaystyle f(m)\leq p(m)}for everym∈M.{\displaystyle m\in M.}For this reason, the following version of the Hahn–Banach theorem is calledthe dominatedextensiontheorem. 
Hahn–Banach dominated extension theorem(for real linear functionals)[4][5][6]—Ifp:X→R{\displaystyle p:X\to \mathbb {R} }is asublinear function(such as anormorseminormfor example) defined on a real vector spaceX{\displaystyle X}then anylinear functionaldefined on a vector subspace ofX{\displaystyle X}that isdominated abovebyp{\displaystyle p}has at least onelinear extensionto all ofX{\displaystyle X}that is also dominated above byp.{\displaystyle p.} Explicitly, ifp:X→R{\displaystyle p:X\to \mathbb {R} }is asublinear function, which by definition means that it satisfiesp(x+y)≤p(x)+p(y)andp(tx)=tp(x)for allx,y∈Xand all realt≥0,{\displaystyle p(x+y)\leq p(x)+p(y)\quad {\text{ and }}\quad p(tx)=tp(x)\qquad {\text{ for all }}\;x,y\in X\;{\text{ and all real }}\;t\geq 0,}and iff:M→R{\displaystyle f:M\to \mathbb {R} }is a linear functional defined on a vector subspaceM{\displaystyle M}ofX{\displaystyle X}such thatf(m)≤p(m)for allm∈M{\displaystyle f(m)\leq p(m)\quad {\text{ for all }}m\in M}then there exists a linear functionalF:X→R{\displaystyle F:X\to \mathbb {R} }such thatF(m)=f(m)for allm∈M,{\displaystyle F(m)=f(m)\quad {\text{ for all }}m\in M,}F(x)≤p(x)for allx∈X.{\displaystyle F(x)\leq p(x)\quad ~\;\,{\text{ for all }}x\in X.}Moreover, ifp{\displaystyle p}is aseminormthen|F(x)|≤p(x){\displaystyle |F(x)|\leq p(x)}necessarily holds for allx∈X.{\displaystyle x\in X.} The theorem remains true if the requirements onp{\displaystyle p}are relaxed to require only thatp{\displaystyle p}be aconvex function:[7][8]p(tx+(1−t)y)≤tp(x)+(1−t)p(y)for all0<t<1andx,y∈X.{\displaystyle p(tx+(1-t)y)\leq tp(x)+(1-t)p(y)\qquad {\text{ for all }}0<t<1{\text{ and }}x,y\in X.}A functionp:X→R{\displaystyle p:X\to \mathbb {R} }is convex and satisfiesp(0)≤0{\displaystyle p(0)\leq 0}if and only ifp(ax+by)≤ap(x)+bp(y){\displaystyle p(ax+by)\leq ap(x)+bp(y)}for all vectorsx,y∈X{\displaystyle x,y\in X}and all non-negative reala,b≥0{\displaystyle a,b\geq 0}such thata+b≤1.{\displaystyle a+b\leq 
1.} Every sublinear function is a convex function. On the other hand, if p : X → ℝ is convex with p(0) ≥ 0, then the function defined by p₀(x) := inf_{t>0} p(tx)/t is positively homogeneous (because for all x and r > 0 one has p₀(rx) = inf_{t>0} p(trx)/t = r · inf_{t>0} p(trx)/(tr) = r · inf_{τ>0} p(τx)/τ = r · p₀(x)), hence, being convex, it is sublinear. It is also bounded above by p, that is p₀ ≤ p, and satisfies F ≤ p₀ for every linear functional F ≤ p. So the extension of the Hahn–Banach theorem to convex functionals does not have a much larger content than the classical one stated for sublinear functionals. If F : X → ℝ is linear then F ≤ p if and only if[4] −p(−x) ≤ F(x) ≤ p(x) for all x ∈ X, which is the (equivalent) conclusion that some authors[4] write instead of F ≤ p. It follows that if p : X → ℝ is also symmetric, meaning that p(−x) = p(x) holds for all x ∈ X, then F ≤ p if and only if |F| ≤ p. Every norm is a seminorm and both are symmetric balanced sublinear functions. A sublinear function is a seminorm if and only if it is a balanced function. On a real vector space (although not on a complex vector space), a sublinear function is a seminorm if and only if it is symmetric. The identity function ℝ → ℝ on X := ℝ is an example of a sublinear function that is not a seminorm.
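The sublinear minorant p₀(x) = inf_{t>0} p(tx)/t can be computed numerically for a concrete convex p. The sketch below (illustrative; the choice of p is an assumption for the example) uses p(x) = ‖x‖₂² + ‖x‖₁ on R², for which p(tx)/t = t‖x‖₂² + ‖x‖₁ decreases to ‖x‖₁ as t → 0⁺, so p₀ = ‖x‖₁.

```python
import numpy as np

# Illustrative sketch: for the convex p(x) = ||x||_2^2 + ||x||_1 (with
# p(0) = 0), the sublinear minorant p0(x) = inf_{t>0} p(t x)/t is ||x||_1.
p = lambda v: np.dot(v, v) + np.abs(v).sum()

def p0(v, ts=np.logspace(-8, 2, 2001)):
    return min(p(t * v) / t for t in ts)   # grid search over t > 0

x = np.array([1.5, -2.0])
print(p0(x), np.abs(x).sum())              # both ~3.5
# positive homogeneity: p0(r x) = r p0(x)
assert abs(p0(3 * x) - 3 * p0(x)) < 1e-5
```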
The dominated extension theorem for real linear functionals implies the following alternative statement of the Hahn–Banach theorem that can be applied to linear functionals on real or complex vector spaces. Hahn–Banach theorem[3][9] — Suppose p : X → ℝ is a seminorm on a vector space X over the field 𝐊, which is either ℝ or ℂ. If f : M → 𝐊 is a linear functional on a vector subspace M such that |f(m)| ≤ p(m) for all m ∈ M, then there exists a linear functional F : X → 𝐊 such that F(m) = f(m) for all m ∈ M, and |F(x)| ≤ p(x) for all x ∈ X. The theorem remains true if the requirements on p are relaxed to require only that for all x, y ∈ X and all scalars a and b satisfying |a| + |b| ≤ 1,[8] p(ax + by) ≤ |a| p(x) + |b| p(y). This condition holds if and only if p is a convex and balanced function satisfying p(0) ≤ 0, or equivalently, if and only if it is convex, satisfies p(0) ≤ 0, and p(ux) ≤ p(x) for all x ∈ X and all unit length scalars u. A complex-valued functional F is said to be dominated by p if |F(x)| ≤ p(x) for all x in the domain of F. With this terminology, the above statements of the Hahn–Banach theorem can be restated more succinctly: Proof The following observations allow the Hahn–Banach theorem for real vector spaces to be applied to (complex-valued) linear functionals on complex vector
spaces. Every linear functionalF:X→C{\displaystyle F:X\to \mathbb {C} }on a complex vector space iscompletely determinedby itsreal partRe⁡F:X→R{\displaystyle \;\operatorname {Re} F:X\to \mathbb {R} \;}through the formula[6][proof 1]F(x)=Re⁡F(x)−iRe⁡F(ix)for allx∈X{\displaystyle F(x)\;=\;\operatorname {Re} F(x)-i\operatorname {Re} F(ix)\qquad {\text{ for all }}x\in X}and moreover, if‖⋅‖{\displaystyle \|\cdot \|}is anormonX{\displaystyle X}then theirdual normsare equal:‖F‖=‖Re⁡F‖.{\displaystyle \|F\|=\|\operatorname {Re} F\|.}[10]In particular, a linear functional onX{\displaystyle X}extends another one defined onM⊆X{\displaystyle M\subseteq X}if and only if their real parts are equal onM{\displaystyle M}(in other words, a linear functionalF{\displaystyle F}extendsf{\displaystyle f}if and only ifRe⁡F{\displaystyle \operatorname {Re} F}extendsRe⁡f{\displaystyle \operatorname {Re} f}). The real part of a linear functional onX{\displaystyle X}is always areal-linear functional(meaning that it is linear whenX{\displaystyle X}is considered as a real vector space) and ifR:X→R{\displaystyle R:X\to \mathbb {R} }is a real-linear functional on a complex vector space thenx↦R(x)−iR(ix){\displaystyle x\mapsto R(x)-iR(ix)}defines the unique linear functional onX{\displaystyle X}whose real part isR.{\displaystyle R.} IfF{\displaystyle F}is a linear functional on a (complex or real) vector spaceX{\displaystyle X}and ifp:X→R{\displaystyle p:X\to \mathbb {R} }is a seminorm then[6][proof 2]|F|≤pif and only ifRe⁡F≤p.{\displaystyle |F|\,\leq \,p\quad {\text{ if and only if }}\quad \operatorname {Re} F\,\leq \,p.}Stated in simpler language, a linear functional isdominatedby a seminormp{\displaystyle p}if and only if itsreal part is dominated abovebyp.{\displaystyle p.} Supposep:X→R{\displaystyle p:X\to \mathbb {R} }is a seminorm on a complex vector spaceX{\displaystyle X}and letf:M→C{\displaystyle f:M\to \mathbb {C} }be a linear functional defined on a vector subspaceM{\displaystyle 
M}ofX{\displaystyle X}that satisfies|f|≤p{\displaystyle |f|\leq p}onM.{\displaystyle M.}ConsiderX{\displaystyle X}as a real vector space and apply theHahn–Banach theorem for real vector spacesto thereal-linear functionalRe⁡f:M→R{\displaystyle \;\operatorname {Re} f:M\to \mathbb {R} \;}to obtain a real-linear extensionR:X→R{\displaystyle R:X\to \mathbb {R} }that is also dominated above byp,{\displaystyle p,}so that it satisfiesR≤p{\displaystyle R\leq p}onX{\displaystyle X}andR=Re⁡f{\displaystyle R=\operatorname {Re} f}onM.{\displaystyle M.}The mapF:X→C{\displaystyle F:X\to \mathbb {C} }defined byF(x)=R(x)−iR(ix){\displaystyle F(x)\;=\;R(x)-iR(ix)}is a linear functional onX{\displaystyle X}that extendsf{\displaystyle f}(because their real parts agree onM{\displaystyle M}) and satisfies|F|≤p{\displaystyle |F|\leq p}onX{\displaystyle X}(becauseRe⁡F≤p{\displaystyle \operatorname {Re} F\leq p}andp{\displaystyle p}is a seminorm).◼{\displaystyle \blacksquare } The proof above shows that whenp{\displaystyle p}is a seminorm then there is a one-to-one correspondence between dominated linear extensions off:M→C{\displaystyle f:M\to \mathbb {C} }and dominated real-linear extensions ofRe⁡f:M→R;{\displaystyle \operatorname {Re} f:M\to \mathbb {R} ;}the proof even gives a formula for explicitly constructing a linear extension off{\displaystyle f}from any given real-linear extension of its real part. Continuity A linear functionalF{\displaystyle F}on atopological vector spaceiscontinuousif and only if this is true of its real partRe⁡F;{\displaystyle \operatorname {Re} F;}if the domain is a normed space then‖F‖=‖Re⁡F‖{\displaystyle \|F\|=\|\operatorname {Re} F\|}(where one side is infinite if and only if the other side is infinite).[10]AssumeX{\displaystyle X}is atopological vector spaceandp:X→R{\displaystyle p:X\to \mathbb {R} }issublinear function. 
Ifp{\displaystyle p}is acontinuoussublinear function that dominates a linear functionalF{\displaystyle F}thenF{\displaystyle F}is necessarily continuous.[6]Moreover, a linear functionalF{\displaystyle F}is continuous if and only if itsabsolute value|F|{\displaystyle |F|}(which is aseminormthat dominatesF{\displaystyle F}) is continuous.[6]In particular, a linear functional is continuous if and only if it is dominated by some continuous sublinear function. TheHahn–Banach theorem for real vector spacesultimately follows from Helly's initial result for the special case where the linear functional is extended fromM{\displaystyle M}to a larger vector space in whichM{\displaystyle M}hascodimension1.{\displaystyle 1.}[3] Lemma[6](One–dimensional dominated extension theorem)—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be asublinear functionon a real vector spaceX,{\displaystyle X,}letf:M→R{\displaystyle f:M\to \mathbb {R} }alinear functionalon apropervector subspaceM⊊X{\displaystyle M\subsetneq X}such thatf≤p{\displaystyle f\leq p}onM{\displaystyle M}(meaningf(m)≤p(m){\displaystyle f(m)\leq p(m)}for allm∈M{\displaystyle m\in M}), and letx∈X{\displaystyle x\in X}be a vectornotinM{\displaystyle M}(soM⊕Rx=span⁡{M,x}{\displaystyle M\oplus \mathbb {R} x=\operatorname {span} \{M,x\}}). There exists a linear extensionF:M⊕Rx→R{\displaystyle F:M\oplus \mathbb {R} x\to \mathbb {R} }off{\displaystyle f}such thatF≤p{\displaystyle F\leq p}onM⊕Rx.{\displaystyle M\oplus \mathbb {R} x.} Given any real numberb,{\displaystyle b,}the mapFb:M⊕Rx→R{\displaystyle F_{b}:M\oplus \mathbb {R} x\to \mathbb {R} }defined byFb(m+rx)=f(m)+rb{\displaystyle F_{b}(m+rx)=f(m)+rb}is always a linear extension off{\displaystyle f}toM⊕Rx{\displaystyle M\oplus \mathbb {R} x}[note 1]but it might not satisfyFb≤p.{\displaystyle F_{b}\leq p.}It will be shown thatb{\displaystyle b}can always be chosen so as to guarantee thatFb≤p,{\displaystyle F_{b}\leq p,}which will complete the proof. 
Ifm,n∈M{\displaystyle m,n\in M}thenf(m)−f(n)=f(m−n)≤p(m−n)=p(m+x−x−n)≤p(m+x)+p(−x−n){\displaystyle f(m)-f(n)=f(m-n)\leq p(m-n)=p(m+x-x-n)\leq p(m+x)+p(-x-n)}which implies−p(−n−x)−f(n)≤p(m+x)−f(m).{\displaystyle -p(-n-x)-f(n)~\leq ~p(m+x)-f(m).}So definea=supn∈M[−p(−n−x)−f(n)]andc=infm∈M[p(m+x)−f(m)]{\displaystyle a=\sup _{n\in M}[-p(-n-x)-f(n)]\qquad {\text{ and }}\qquad c=\inf _{m\in M}[p(m+x)-f(m)]}wherea≤c{\displaystyle a\leq c}are real numbers. To guaranteeFb≤p,{\displaystyle F_{b}\leq p,}it suffices thata≤b≤c{\displaystyle a\leq b\leq c}(in fact, this is also necessary[note 2]) because thenb{\displaystyle b}satisfies "the decisive inequality"[6]−p(−n−x)−f(n)≤b≤p(m+x)−f(m)for allm,n∈M.{\displaystyle -p(-n-x)-f(n)~\leq ~b~\leq ~p(m+x)-f(m)\qquad {\text{ for all }}\;m,n\in M.} To see thatf(m)+rb≤p(m+rx){\displaystyle f(m)+rb\leq p(m+rx)}follows,[note 3]assumer≠0{\displaystyle r\neq 0}and substitute1rm{\displaystyle {\tfrac {1}{r}}m}in for bothm{\displaystyle m}andn{\displaystyle n}to obtain−p(−1rm−x)−1rf(m)≤b≤p(1rm+x)−1rf(m).{\displaystyle -p\left(-{\tfrac {1}{r}}m-x\right)-{\tfrac {1}{r}}f\left(m\right)~\leq ~b~\leq ~p\left({\tfrac {1}{r}}m+x\right)-{\tfrac {1}{r}}f\left(m\right).}Ifr>0{\displaystyle r>0}(respectively, ifr<0{\displaystyle r<0}) then the right (respectively, the left) hand side equals1r[p(m+rx)−f(m)]{\displaystyle {\tfrac {1}{r}}\left[p(m+rx)-f(m)\right]}so that multiplying byr{\displaystyle r}givesrb≤p(m+rx)−f(m).{\displaystyle rb\leq p(m+rx)-f(m).}◼{\displaystyle \blacksquare } This lemma remains true ifp:X→R{\displaystyle p:X\to \mathbb {R} }is merely aconvex functioninstead of a sublinear function.[7][8] Assume thatp{\displaystyle p}is convex, which means thatp(ty+(1−t)z)≤tp(y)+(1−t)p(z){\displaystyle p(ty+(1-t)z)\leq tp(y)+(1-t)p(z)}for all0≤t≤1{\displaystyle 0\leq t\leq 1}andy,z∈X.{\displaystyle y,z\in X.}LetM,{\displaystyle M,}f:M→R,{\displaystyle f:M\to \mathbb {R} ,}andx∈X∖M{\displaystyle x\in X\setminus M}be as inthe lemma's statement. 
Given anym,n∈M{\displaystyle m,n\in M}and any positive realr,s>0,{\displaystyle r,s>0,}the positive real numberst:=sr+s{\displaystyle t:={\tfrac {s}{r+s}}}andrr+s=1−t{\displaystyle {\tfrac {r}{r+s}}=1-t}sum to1{\displaystyle 1}so that the convexity ofp{\displaystyle p}onX{\displaystyle X}guaranteesp(sr+sm+rr+sn)=p(sr+s(m−rx)+rr+s(n+sx))≤sr+sp(m−rx)+rr+sp(n+sx){\displaystyle {\begin{alignedat}{9}p\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)~&=~p{\big (}{\tfrac {s}{r+s}}(m-rx)&&+{\tfrac {r}{r+s}}(n+sx){\big )}&&\\&\leq ~{\tfrac {s}{r+s}}\;p(m-rx)&&+{\tfrac {r}{r+s}}\;p(n+sx)&&\\\end{alignedat}}}and hencesf(m)+rf(n)=(r+s)f(sr+sm+rr+sn)by linearity off≤(r+s)p(sr+sm+rr+sn)f≤ponM≤sp(m−rx)+rp(n+sx){\displaystyle {\begin{alignedat}{9}sf(m)+rf(n)~&=~(r+s)\;f\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)&&\qquad {\text{ by linearity of }}f\\&\leq ~(r+s)\;p\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)&&\qquad f\leq p{\text{ on }}M\\&\leq ~sp(m-rx)+rp(n+sx)\\\end{alignedat}}}thus proving that−sp(m−rx)+sf(m)≤rp(n+sx)−rf(n),{\displaystyle -sp(m-rx)+sf(m)~\leq ~rp(n+sx)-rf(n),}which after multiplying both sides by1rs{\displaystyle {\tfrac {1}{rs}}}becomes1r[−p(m−rx)+f(m)]≤1s[p(n+sx)−f(n)].{\displaystyle {\tfrac {1}{r}}[-p(m-rx)+f(m)]~\leq ~{\tfrac {1}{s}}[p(n+sx)-f(n)].}This implies that the values defined bya=supr>0m∈M1r[−p(m−rx)+f(m)]andc=infs>0n∈M1s[p(n+sx)−f(n)]{\displaystyle a=\sup _{\stackrel {m\in M}{r>0}}{\tfrac {1}{r}}[-p(m-rx)+f(m)]\qquad {\text{ and }}\qquad c=\inf _{\stackrel {n\in M}{s>0}}{\tfrac {1}{s}}[p(n+sx)-f(n)]}are real numbers that satisfya≤c.{\displaystyle a\leq c.}As in the above proof of theone–dimensional dominated extension theoremabove, for any realb∈R{\displaystyle b\in \mathbb {R} }defineFb:M⊕Rx→R{\displaystyle F_{b}:M\oplus \mathbb {R} x\to \mathbb {R} }byFb(m+rx)=f(m)+rb.{\displaystyle F_{b}(m+rx)=f(m)+rb.}It can be verified that ifa≤b≤c{\displaystyle a\leq b\leq c}thenFb≤p{\displaystyle F_{b}\leq p}whererb≤p(m+rx)−f(m){\displaystyle 
rb\leq p(m+rx)-f(m)}follows fromb≤c{\displaystyle b\leq c}whenr>0{\displaystyle r>0}(respectively, follows froma≤b{\displaystyle a\leq b}whenr<0{\displaystyle r<0}).◼{\displaystyle \blacksquare } Thelemma aboveis the key step in deducing the dominated extension theorem fromZorn's lemma. The set of all possible dominated linear extensions off{\displaystyle f}is partially ordered by extension of each other, so there is a maximal extensionF.{\displaystyle F.}By the codimension-1 result, ifF{\displaystyle F}is not defined on all ofX,{\displaystyle X,}then it can be further extended. ThusF{\displaystyle F}must be defined everywhere, as claimed.◼{\displaystyle \blacksquare } WhenM{\displaystyle M}has countable codimension, using induction and the lemma completes the proof of the Hahn–Banach theorem. The standard proof of the general case usesZorn's lemmaalthough the strictly weakerultrafilter lemma[11](which is equivalent to thecompactness theoremand to theBoolean prime ideal theorem) may be used instead. Hahn–Banach can also be proved usingTychonoff's theoremforcompactHausdorff spaces[12](which is also equivalent to the ultrafilter lemma). TheMizar projecthas completely formalized and automatically checked the proof of the Hahn–Banach theorem in the HAHNBAN file.[13] The Hahn–Banach theorem can be used to guarantee the existence ofcontinuous linear extensionsofcontinuous linear functionals. Hahn–Banach continuous extension theorem[14]—Every continuous linear functionalf{\displaystyle f}defined on a vector subspaceM{\displaystyle M}of a (real or complex)locally convextopological vector spaceX{\displaystyle X}has a continuous linear extensionF{\displaystyle F}to all ofX.{\displaystyle X.}If in additionX{\displaystyle X}is anormed space, then this extension can be chosen so that itsdual normis equal to that off.{\displaystyle f.} Incategory-theoreticterms, the underlying field of the vector space is aninjective objectin the category of locally convex vector spaces.
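In finite dimensions the norm-equality clause can be seen directly. The sketch below (an illustration under simple assumptions, not the general proof) takes f(t, 0) = 2t on the subspace M = span{(1, 0)} of R² with the Euclidean norm; every linear extension then has the form F_b(x, y) = 2x + by, and only b = 0 preserves the dual norm:

```python
import math

# f is defined on M = span{(1, 0)} in R^2 by f(t, 0) = 2t, so ||f|| = 2.
# Every linear extension of f to R^2 has the form F_b(x, y) = 2x + b*y;
# its dual norm w.r.t. the Euclidean norm is ||(2, b)||_2 = sqrt(4 + b^2).
def extension_norm(b):
    return math.hypot(2.0, b)

norm_f = 2.0
assert extension_norm(0.0) == norm_f          # b = 0 is norm-preserving
for b in (0.5, -1.0, 3.0):
    assert extension_norm(b) > norm_f         # every other extension inflates the norm
```

In an arbitrary normed space no such explicit formula is available; the theorem is what guarantees that a norm-preserving choice of extension always exists.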
On anormed(orseminormed) space, a linear extensionF{\displaystyle F}of abounded linear functionalf{\displaystyle f}is said to benorm-preservingif it has the samedual normas the original functional:‖F‖=‖f‖.{\displaystyle \|F\|=\|f\|.}Because of this terminology, the second part ofthe above theoremis sometimes referred to as the "norm-preserving" version of the Hahn–Banach theorem.[15]Explicitly: Norm-preserving Hahn–Banach continuous extension theorem[15]—Every continuous linear functionalf{\displaystyle f}defined on a vector subspaceM{\displaystyle M}of a (real or complex) normed spaceX{\displaystyle X}has a continuous linear extensionF{\displaystyle F}to all ofX{\displaystyle X}that satisfies‖f‖=‖F‖.{\displaystyle \|f\|=\|F\|.} The following observations allow thecontinuous extension theoremto be deduced from theHahn–Banach theorem.[16] The absolute value of a linear functional is always a seminorm. A linear functionalF{\displaystyle F}on atopological vector spaceX{\displaystyle X}is continuous if and only if its absolute value|F|{\displaystyle |F|}is continuous, which happens if and only if there exists a continuous seminormp{\displaystyle p}onX{\displaystyle X}such that|F|≤p{\displaystyle |F|\leq p}on the domain ofF.{\displaystyle F.}[17]IfX{\displaystyle X}is a locally convex space then this statement remains true when the linear functionalF{\displaystyle F}is defined on apropervector subspace ofX.{\displaystyle X.} Letf{\displaystyle f}be a continuous linear functional defined on a vector subspaceM{\displaystyle M}of alocally convex topological vector spaceX.{\displaystyle X.}BecauseX{\displaystyle X}is locally convex, there exists a continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }onX{\displaystyle X}thatdominatesf{\displaystyle f}(meaning that|f(m)|≤p(m){\displaystyle |f(m)|\leq p(m)}for allm∈M{\displaystyle m\in M}). 
By theHahn–Banach theorem, there exists a linear extension off{\displaystyle f}toX,{\displaystyle X,}call itF,{\displaystyle F,}that satisfies|F|≤p{\displaystyle |F|\leq p}onX.{\displaystyle X.}This linear functionalF{\displaystyle F}is continuous since|F|≤p{\displaystyle |F|\leq p}andp{\displaystyle p}is a continuous seminorm. Proof for normed spaces A linear functionalf{\displaystyle f}on anormed spaceiscontinuousif and only if it isbounded, which means that itsdual norm‖f‖=sup{|f(m)|:‖m‖≤1,m∈domain⁡f}{\displaystyle \|f\|=\sup\{|f(m)|:\|m\|\leq 1,m\in \operatorname {domain} f\}}is finite, in which case|f(m)|≤‖f‖‖m‖{\displaystyle |f(m)|\leq \|f\|\|m\|}holds for every pointm{\displaystyle m}in its domain. Moreover, ifc≥0{\displaystyle c\geq 0}is such that|f(m)|≤c‖m‖{\displaystyle |f(m)|\leq c\|m\|}for allm{\displaystyle m}in the functional's domain, then necessarily‖f‖≤c.{\displaystyle \|f\|\leq c.}IfF{\displaystyle F}is a linear extension of a linear functionalf{\displaystyle f}then their dual norms always satisfy‖f‖≤‖F‖{\displaystyle \|f\|\leq \|F\|}[proof 3]so that equality‖f‖=‖F‖{\displaystyle \|f\|=\|F\|}is equivalent to‖F‖≤‖f‖,{\displaystyle \|F\|\leq \|f\|,}which holds if and only if|F(x)|≤‖f‖‖x‖{\displaystyle |F(x)|\leq \|f\|\|x\|}for every pointx{\displaystyle x}in the extension's domain. 
This can be restated in terms of the function‖f‖‖⋅‖:X→R{\displaystyle \|f\|\,\|\cdot \|:X\to \mathbb {R} }defined byx↦‖f‖‖x‖,{\displaystyle x\mapsto \|f\|\,\|x\|,}which is always aseminorm:[note 4] Applying theHahn–Banach theoremtof{\displaystyle f}with this seminorm‖f‖‖⋅‖{\displaystyle \|f\|\,\|\cdot \|}thus produces a dominated linear extension whose norm is (necessarily) equal to that off,{\displaystyle f,}which proves the theorem: Letf{\displaystyle f}be a continuous linear functional defined on a vector subspaceM{\displaystyle M}of a normed spaceX.{\displaystyle X.}Then the functionp:X→R{\displaystyle p:X\to \mathbb {R} }defined byp(x)=‖f‖‖x‖{\displaystyle p(x)=\|f\|\,\|x\|}is a seminorm onX{\displaystyle X}thatdominatesf,{\displaystyle f,}meaning that|f(m)|≤p(m){\displaystyle |f(m)|\leq p(m)}holds for everym∈M.{\displaystyle m\in M.}By theHahn–Banach theorem, there exists a linear functionalF{\displaystyle F}onX{\displaystyle X}that extendsf{\displaystyle f}(which guarantees‖f‖≤‖F‖{\displaystyle \|f\|\leq \|F\|}) and that is also dominated byp,{\displaystyle p,}meaning that|F(x)|≤p(x){\displaystyle |F(x)|\leq p(x)}for everyx∈X.{\displaystyle x\in X.}The fact that‖f‖{\displaystyle \|f\|}is a real number such that|F(x)|≤‖f‖‖x‖{\displaystyle |F(x)|\leq \|f\|\|x\|}for everyx∈X,{\displaystyle x\in X,}guarantees‖F‖≤‖f‖.{\displaystyle \|F\|\leq \|f\|.}Since‖F‖=‖f‖{\displaystyle \|F\|=\|f\|}is finite, the linear functionalF{\displaystyle F}is bounded and thus continuous. Thecontinuous extension theoremmight fail if thetopological vector space(TVS)X{\displaystyle X}is notlocally convex. 
For example, for0<p<1,{\displaystyle 0<p<1,}theLebesgue spaceLp([0,1]){\displaystyle L^{p}([0,1])}is acompletemetrizable TVS(anF-space) that isnotlocally convex (in fact, its only convex open subsets are itselfLp([0,1]){\displaystyle L^{p}([0,1])}and the empty set) and the only continuous linear functional onLp([0,1]){\displaystyle L^{p}([0,1])}is the constant0{\displaystyle 0}function (Rudin 1991, §1.47). SinceLp([0,1]){\displaystyle L^{p}([0,1])}is Hausdorff, every finite-dimensional vector subspaceM⊆Lp([0,1]){\displaystyle M\subseteq L^{p}([0,1])}islinearly homeomorphictoEuclidean spaceRdim⁡M{\displaystyle \mathbb {R} ^{\dim M}}orCdim⁡M{\displaystyle \mathbb {C} ^{\dim M}}(byF. Riesz's theorem) and so every non-zero linear functionalf{\displaystyle f}onM{\displaystyle M}is continuous but none has a continuous linear extension to all ofLp([0,1]).{\displaystyle L^{p}([0,1]).}However, it is possible for a TVSX{\displaystyle X}to not be locally convex but nevertheless have enough continuous linear functionals that itscontinuous dual spaceX∗{\displaystyle X^{*}}separates points; for such a TVS, a continuous linear functional defined on a vector subspacemighthave a continuous linear extension to the whole space. If theTVSX{\displaystyle X}is notlocally convexthen there might not exist any continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }defined onX{\displaystyle X}(not just onM{\displaystyle M}) that dominatesf,{\displaystyle f,}in which case the Hahn–Banach theorem can not be applied as it was inthe above proofof the continuous extension theorem. 
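The geometry behind this failure can be glimpsed already in the two-dimensional analogue of the ℓ^p formula: for 0 < p < 1 the expression (Σ|x_i|^p)^{1/p} violates the triangle inequality, so its "unit ball" is not convex. A small numerical check:

```python
def p_norm(x, p):
    # For 0 < p < 1 this formula defines only a quasi-norm:
    # the triangle inequality fails, and the "unit ball" is not convex.
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

p = 0.5
x, y = [1.0, 0.0], [0.0, 1.0]
s = [a + b for a, b in zip(x, y)]
assert p_norm(x, p) == 1.0 and p_norm(y, p) == 1.0
assert p_norm(s, p) > p_norm(x, p) + p_norm(y, p)   # 4.0 > 2.0
```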
However, the proof's argument can be generalized to give a characterization of when a continuous linear functional has a continuous linear extension: IfX{\displaystyle X}is any TVS (not necessarily locally convex), then a continuous linear functionalf{\displaystyle f}defined on a vector subspaceM{\displaystyle M}has a continuous linear extensionF{\displaystyle F}to all ofX{\displaystyle X}if and only if there exists some continuous seminormp{\displaystyle p}onX{\displaystyle X}thatdominatesf.{\displaystyle f.}Specifically, if given a continuous linear extensionF{\displaystyle F}thenp:=|F|{\displaystyle p:=|F|}is a continuous seminorm onX{\displaystyle X}that dominatesf;{\displaystyle f;}and conversely, if given a continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }onX{\displaystyle X}that dominatesf{\displaystyle f}then any dominated linear extension off{\displaystyle f}toX{\displaystyle X}(the existence of which is guaranteed by the Hahn–Banach theorem) will be a continuous linear extension. The key element of the Hahn–Banach theorem is fundamentally a result about the separation of two convex sets:{−p(−x−n)−f(n):n∈M},{\displaystyle \{-p(-x-n)-f(n):n\in M\},}and{p(m+x)−f(m):m∈M}.{\displaystyle \{p(m+x)-f(m):m\in M\}.}This sort of argument appears widely inconvex geometry,[18]optimization theory, andeconomics. Lemmas to this end derived from the original Hahn–Banach theorem are known as theHahn–Banach separation theorems.[19][20]They are generalizations of thehyperplane separation theorem, which states that two disjoint nonempty convex subsets of a finite-dimensional spaceRn{\displaystyle \mathbb {R} ^{n}}can be separated by someaffine hyperplane, which is afiber(level set) of the formf−1(s)={x:f(x)=s}{\displaystyle f^{-1}(s)=\{x:f(x)=s\}}wheref≠0{\displaystyle f\neq 0}is a non-zero linear functional ands{\displaystyle s}is a scalar. 
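A finite-dimensional illustration of such a separation (a sketch with sampled sets, not part of any proof): two disjoint disks in R² are separated by the coordinate functional f(x, y) = x, and any s with sup f(A) < s < inf f(B) yields a separating hyperplane f^{-1}(s).

```python
import random

def f(p):
    # the separating linear functional f(x, y) = x
    return p[0]

def sample_disk(cx, cy, r, rng):
    # rejection-sample a point from the closed disk of radius r centred at (cx, cy)
    while True:
        x, y = rng.uniform(-r, r), rng.uniform(-r, r)
        if x * x + y * y <= r * r:
            return (cx + x, cy + y)

rng = random.Random(0)
A = [sample_disk(0.0, 0.0, 1.0, rng) for _ in range(200)]
B = [sample_disk(4.0, 0.0, 1.0, rng) for _ in range(200)]
# sup f(A) <= 1 < 3 <= inf f(B), so the hyperplane f^{-1}(2) separates A and B
assert max(f(p) for p in A) <= 1.0 < 3.0 <= min(f(p) for p in B)
```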
Theorem[19]—LetA{\displaystyle A}andB{\displaystyle B}be non-empty convex subsets of a reallocally convex topological vector spaceX.{\displaystyle X.}IfInt⁡A≠∅{\displaystyle \operatorname {Int} A\neq \varnothing }andB∩Int⁡A=∅{\displaystyle B\cap \operatorname {Int} A=\varnothing }then there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such thatsupf(A)≤inff(B){\displaystyle \sup f(A)\leq \inf f(B)}andf(a)<inff(B){\displaystyle f(a)<\inf f(B)}for alla∈Int⁡A{\displaystyle a\in \operatorname {Int} A}(such anf{\displaystyle f}is necessarily non-zero). When the convex sets have additional properties, such as beingopenorcompactfor example, then the conclusion can be substantially strengthened: Theorem[3][21]—LetA{\displaystyle A}andB{\displaystyle B}be convex non-empty disjoint subsets of a realtopological vector spaceX.{\displaystyle X.} IfX{\displaystyle X}is complex (rather than real) then the same claims hold, but for thereal partoff.{\displaystyle f.} The following important corollary is known as theGeometric Hahn–Banach theoremorMazur's theorem(also known asAscoli–Mazur theorem[22]). It follows from the first bullet above and the convexity ofM.{\displaystyle M.} Theorem (Mazur)[23]—LetM{\displaystyle M}be a vector subspace of the topological vector spaceX{\displaystyle X}and supposeK{\displaystyle K}is a non-empty convex open subset ofX{\displaystyle X}withK∩M=∅.{\displaystyle K\cap M=\varnothing .}Then there is a closedhyperplane(codimension-1 vector subspace)N⊆X{\displaystyle N\subseteq X}that containsM,{\displaystyle M,}but remains disjoint fromK.{\displaystyle K.} Mazur's theorem clarifies that vector subspaces (even those that are not closed) can be characterized by linear functionals.
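Mazur's theorem is easy to picture in R². In the sketch below (illustrative only), M is the x-axis and K an open disk missing it; the x-axis itself, the zero set of the functional (x, y) ↦ y, is a closed hyperplane containing M and disjoint from K:

```python
# M = span{(1, 0)}; K = open unit disk centred at (0, 2), disjoint from M.
# N = {(x, y) : y = 0} is a closed hyperplane with M contained in N and N ∩ K empty.
def in_K(p):
    x, y = p
    return x * x + (y - 2.0) ** 2 < 1.0

def in_N(p):
    return p[1] == 0.0

for t in (-3.0, 0.0, 0.5, 5.0):
    m = (t, 0.0)              # an arbitrary point of M (hence of N)
    assert in_N(m) and not in_K(m)
assert in_K((0.0, 2.0))       # K is non-empty
```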
Corollary[24](Separation of a subspace and an open convex set)—LetM{\displaystyle M}be a vector subspace of alocally convex topological vector spaceX,{\displaystyle X,}andU{\displaystyle U}be a non-empty open convex subset disjoint fromM.{\displaystyle M.}Then there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such thatf(m)=0{\displaystyle f(m)=0}for allm∈M{\displaystyle m\in M}andRe⁡f>0{\displaystyle \operatorname {Re} f>0}onU.{\displaystyle U.} Since points are triviallyconvex, geometric Hahn–Banach implies that functionals can detect theboundaryof a set. In particular, letX{\displaystyle X}be a real topological vector space andA⊆X{\displaystyle A\subseteq X}be convex withInt⁡A≠∅.{\displaystyle \operatorname {Int} A\neq \varnothing .}Ifa0∈A∖Int⁡A{\displaystyle a_{0}\in A\setminus \operatorname {Int} A}then there is a functional that is vanishing ata0,{\displaystyle a_{0},}but supported on the interior ofA.{\displaystyle A.}[19] Call a normed spaceX{\displaystyle X}smoothif at each pointx{\displaystyle x}in its unit ball there exists a unique closed supporting hyperplane to the unit ball atx.{\displaystyle x.}Köthe showed in 1983 that a normed space is smooth at a pointx{\displaystyle x}if and only if the norm isGateaux differentiableat that point.[3] LetU{\displaystyle U}be a convexbalancedneighborhood of the origin in alocally convextopological vector spaceX{\displaystyle X}and supposex∈X{\displaystyle x\in X}is not an element ofU.{\displaystyle U.}Then there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such that[3]sup|f(U)|≤|f(x)|.{\displaystyle \sup |f(U)|\leq |f(x)|.} The Hahn–Banach theorem is the first sign of an important philosophy infunctional analysis: to understand a space, one should understand itscontinuous functionals.
For example, linear subspaces are characterized by functionals: ifXis a normed vector space with linear subspaceM(not necessarily closed) and ifz{\displaystyle z}is an element ofXnot in theclosureofM, then there exists a continuous linear mapf:X→K{\displaystyle f:X\to \mathbf {K} }withf(m)=0{\displaystyle f(m)=0}for allm∈M,{\displaystyle m\in M,}f(z)=1,{\displaystyle f(z)=1,}and‖f‖=dist⁡(z,M)−1.{\displaystyle \|f\|=\operatorname {dist} (z,M)^{-1}.}(To see this, note thatdist⁡(⋅,M){\displaystyle \operatorname {dist} (\cdot ,M)}is a sublinear function.) Moreover, ifz{\displaystyle z}is an element ofX, then there exists a continuous linear mapf:X→K{\displaystyle f:X\to \mathbf {K} }such thatf(z)=‖z‖{\displaystyle f(z)=\|z\|}and‖f‖≤1.{\displaystyle \|f\|\leq 1.}This implies that thenatural injectionJ{\displaystyle J}from a normed spaceXinto itsdouble dualX∗∗{\displaystyle X^{**}}is isometric. That last result also suggests that the Hahn–Banach theorem can often be used to locate a "nicer" topology in which to work. For example, many results in functional analysis assume that a space isHausdorfforlocally convex. However, supposeXis a topological vector space, not necessarily Hausdorff orlocally convex, but with a nonempty, proper, convex, open setM. Then geometric Hahn–Banach implies that there is a hyperplane separatingMfrom any other point. In particular, there must exist a nonzero functional onX— that is, thecontinuous dual spaceX∗{\displaystyle X^{*}}is non-trivial.[3][25]ConsideringXwith theweak topologyinduced byX∗,{\displaystyle X^{*},}thenXbecomes locally convex; by the second bullet of geometric Hahn–Banach, the weak topology on this new spaceseparates points. ThusXwith this weak topology becomesHausdorff. This sometimes allows some results from locally convex topological vector spaces to be applied to non-Hausdorff and non-locally convex spaces.
Suppose that we wish to solve the linear differential equationPu=f{\displaystyle Pu=f}foru,{\displaystyle u,}withf{\displaystyle f}given in some Banach spaceX. If we have control on the size ofu{\displaystyle u}in terms of‖f‖X{\displaystyle \|f\|_{X}}and we can think ofu{\displaystyle u}as a bounded linear functional on some suitable space of test functionsg,{\displaystyle g,}then we can viewf{\displaystyle f}as a linear functional by adjunction:(f,g)=(u,P∗g).{\displaystyle (f,g)=(u,P^{*}g).}At first, this functional is only defined on the image ofP,{\displaystyle P,}but using the Hahn–Banach theorem, we can try to extend it to the entire codomainX. The resulting functional is often defined to be aweak solution to the equation. Theorem[26]—A real Banach space isreflexiveif and only if every pair of non-empty disjoint closed convex subsets, one of which is bounded, can be strictly separated by a hyperplane. To illustrate an actual application of the Hahn–Banach theorem, we will now prove a result that follows almost entirely from the Hahn–Banach theorem. Proposition—SupposeX{\displaystyle X}is a Hausdorff locally convex TVS over the fieldK{\displaystyle \mathbf {K} }andY{\displaystyle Y}is a vector subspace ofX{\displaystyle X}that isTVS–isomorphictoKI{\displaystyle \mathbf {K} ^{I}}for some setI.{\displaystyle I.}ThenY{\displaystyle Y}is a closed andcomplementedvector subspace ofX.{\displaystyle X.} SinceKI{\displaystyle \mathbf {K} ^{I}}is a complete TVS so isY,{\displaystyle Y,}and since any complete subset of a Hausdorff TVS is closed,Y{\displaystyle Y}is a closed subset ofX.{\displaystyle X.}Letf=(fi)i∈I:Y→KI{\displaystyle f=\left(f_{i}\right)_{i\in I}:Y\to \mathbf {K} ^{I}}be a TVS isomorphism, so that eachfi:Y→K{\displaystyle f_{i}:Y\to \mathbf {K} }is a continuous surjective linear functional. 
By the Hahn–Banach theorem, we may extend eachfi{\displaystyle f_{i}}to a continuous linear functionalFi:X→K{\displaystyle F_{i}:X\to \mathbf {K} }onX.{\displaystyle X.}LetF:=(Fi)i∈I:X→KI{\displaystyle F:=\left(F_{i}\right)_{i\in I}:X\to \mathbf {K} ^{I}}soF{\displaystyle F}is a continuous linear surjection such that its restriction toY{\displaystyle Y}isF|Y=(Fi|Y)i∈I=(fi)i∈I=f.{\displaystyle F{\big \vert }_{Y}=\left(F_{i}{\big \vert }_{Y}\right)_{i\in I}=\left(f_{i}\right)_{i\in I}=f.}LetP:=f−1∘F:X→Y,{\displaystyle P:=f^{-1}\circ F:X\to Y,}which is a continuous linear map whose restriction toY{\displaystyle Y}isP|Y=f−1∘F|Y=f−1∘f=1Y,{\displaystyle P{\big \vert }_{Y}=f^{-1}\circ F{\big \vert }_{Y}=f^{-1}\circ f=\mathbf {1} _{Y},}where1Y{\displaystyle \mathbb {1} _{Y}}denotes theidentity maponY.{\displaystyle Y.}This shows thatP{\displaystyle P}is a continuouslinear projectionontoY{\displaystyle Y}(that is,P∘P=P{\displaystyle P\circ P=P}). ThusY{\displaystyle Y}is complemented inX{\displaystyle X}andX=Y⊕ker⁡P{\displaystyle X=Y\oplus \ker P}in the category of TVSs.◼{\displaystyle \blacksquare } The above result may be used to show that every closed vector subspace ofRN{\displaystyle \mathbb {R} ^{\mathbb {N} }}is complemented because any such space is either finite dimensional or else TVS–isomorphic toRN.{\displaystyle \mathbb {R} ^{\mathbb {N} }.} General template There are now many other versions of the Hahn–Banach theorem. 
The general template for the various versions of the Hahn–Banach theorem presented in this article is as follows: Theorem[3]—IfD{\displaystyle D}is anabsorbingdiskin a real or complex vector spaceX{\displaystyle X}and iff{\displaystyle f}is a linear functional defined on a vector subspaceM{\displaystyle M}ofX{\displaystyle X}such that|f|≤1{\displaystyle |f|\leq 1}onM∩D,{\displaystyle M\cap D,}then there exists a linear functionalF{\displaystyle F}onX{\displaystyle X}extendingf{\displaystyle f}such that|F|≤1{\displaystyle |F|\leq 1}onD.{\displaystyle D.} Hahn–Banach theorem for seminorms[27][28]—Ifp:M→R{\displaystyle p:M\to \mathbb {R} }is aseminormdefined on a vector subspaceM{\displaystyle M}ofX,{\displaystyle X,}and ifq:X→R{\displaystyle q:X\to \mathbb {R} }is a seminorm onX{\displaystyle X}such thatp≤q|M,{\displaystyle p\leq q{\big \vert }_{M},}then there exists a seminormP:X→R{\displaystyle P:X\to \mathbb {R} }onX{\displaystyle X}such thatP|M=p{\displaystyle P{\big \vert }_{M}=p}onM{\displaystyle M}andP≤q{\displaystyle P\leq q}onX.{\displaystyle X.} LetS{\displaystyle S}be the convex hull of{m∈M:p(m)≤1}∪{x∈X:q(x)≤1}.{\displaystyle \{m\in M:p(m)\leq 1\}\cup \{x\in X:q(x)\leq 1\}.}BecauseS{\displaystyle S}is anabsorbingdiskinX,{\displaystyle X,}itsMinkowski functionalP{\displaystyle P}is a seminorm. Thenp=P{\displaystyle p=P}onM{\displaystyle M}andP≤q{\displaystyle P\leq q}onX.{\displaystyle X.} So for example, suppose thatf{\displaystyle f}is abounded linear functionaldefined on a vector subspaceM{\displaystyle M}of anormed spaceX,{\displaystyle X,}so itsoperator norm‖f‖{\displaystyle \|f\|}is a non-negative real number.
Then the linear functional'sabsolute valuep:=|f|{\displaystyle p:=|f|}is a seminorm onM{\displaystyle M}and the mapq:X→R{\displaystyle q:X\to \mathbb {R} }defined byq(x)=‖f‖‖x‖{\displaystyle q(x)=\|f\|\,\|x\|}is a seminorm onX{\displaystyle X}that satisfiesp≤q|M{\displaystyle p\leq q{\big \vert }_{M}}onM.{\displaystyle M.}TheHahn–Banach theorem for seminormsguarantees the existence of a seminormP:X→R{\displaystyle P:X\to \mathbb {R} }that is equal to|f|{\displaystyle |f|}onM{\displaystyle M}(sinceP|M=p=|f|{\displaystyle P{\big \vert }_{M}=p=|f|}) and is bounded above byP(x)≤‖f‖‖x‖{\displaystyle P(x)\leq \|f\|\,\|x\|}everywhere onX{\displaystyle X}(sinceP≤q{\displaystyle P\leq q}). Hahn–Banach sandwich theorem[3]—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be a sublinear function on a real vector spaceX,{\displaystyle X,}letS⊆X{\displaystyle S\subseteq X}be any subset ofX,{\displaystyle X,}and letf:S→R{\displaystyle f:S\to \mathbb {R} }beanymap. If there exist positive real numbersa{\displaystyle a}andb{\displaystyle b}such that0≥infs∈S[p(s−ax−by)−f(s)−af(x)−bf(y)]for allx,y∈S,{\displaystyle 0\geq \inf _{s\in S}[p(s-ax-by)-f(s)-af(x)-bf(y)]\qquad {\text{ for all }}x,y\in S,}then there exists a linear functionalF:X→R{\displaystyle F:X\to \mathbb {R} }onX{\displaystyle X}such thatF≤p{\displaystyle F\leq p}onX{\displaystyle X}andf≤F≤p{\displaystyle f\leq F\leq p}onS.{\displaystyle S.} Theorem[3](Andenaes, 1970)—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be a sublinear function on a real vector spaceX,{\displaystyle X,}letf:M→R{\displaystyle f:M\to \mathbb {R} }be a linear functional on a vector subspaceM{\displaystyle M}ofX{\displaystyle X}such thatf≤p{\displaystyle f\leq p}onM,{\displaystyle M,}and letS⊆X{\displaystyle S\subseteq X}be any subset ofX.{\displaystyle X.}Then there exists a linear functionalF:X→R{\displaystyle F:X\to \mathbb {R} }onX{\displaystyle X}that extendsf,{\displaystyle f,}satisfiesF≤p{\displaystyle F\leq p}onX,{\displaystyle X,}and is (pointwise) 
maximal onS{\displaystyle S}in the following sense: ifF^:X→R{\displaystyle {\widehat {F}}:X\to \mathbb {R} }is a linear functional onX{\displaystyle X}that extendsf{\displaystyle f}and satisfiesF^≤p{\displaystyle {\widehat {F}}\leq p}onX,{\displaystyle X,}thenF≤F^{\displaystyle F\leq {\widehat {F}}}onS{\displaystyle S}impliesF=F^{\displaystyle F={\widehat {F}}}onS.{\displaystyle S.} IfS={s}{\displaystyle S=\{s\}}is a singleton set (wheres∈X{\displaystyle s\in X}is some vector) and ifF:X→R{\displaystyle F:X\to \mathbb {R} }is such a maximal dominated linear extension off:M→R,{\displaystyle f:M\to \mathbb {R} ,}thenF(s)=infm∈M[f(m)+p(s−m)].{\displaystyle F(s)=\inf _{m\in M}[f(m)+p(s-m)].}[3] Vector–valued Hahn–Banach theorem[3]—IfX{\displaystyle X}andY{\displaystyle Y}are vector spaces over the same field and iff:M→Y{\displaystyle f:M\to Y}is a linear map defined on a vector subspaceM{\displaystyle M}ofX,{\displaystyle X,}then there exists a linear mapF:X→Y{\displaystyle F:X\to Y}that extendsf.{\displaystyle f.} A setΓ{\displaystyle \Gamma }of mapsX→X{\displaystyle X\to X}iscommutative(with respect tofunction composition∘{\displaystyle \,\circ \,}) ifF∘G=G∘F{\displaystyle F\circ G=G\circ F}for allF,G∈Γ.{\displaystyle F,G\in \Gamma .}Say that a functionf{\displaystyle f}defined on a subsetM{\displaystyle M}ofX{\displaystyle X}isΓ{\displaystyle \Gamma }-invariantifL(M)⊆M{\displaystyle L(M)\subseteq M}andf∘L=f{\displaystyle f\circ L=f}onM{\displaystyle M}for everyL∈Γ.{\displaystyle L\in \Gamma .} An invariant Hahn–Banach theorem[29]—SupposeΓ{\displaystyle \Gamma }is acommutative setof continuous linear maps from anormed spaceX{\displaystyle X}into itself and letf{\displaystyle f}be a continuous linear functional defined on some vector subspaceM{\displaystyle M}ofX{\displaystyle X}that isΓ{\displaystyle \Gamma }-invariant, which means thatL(M)⊆M{\displaystyle L(M)\subseteq M}andf∘L=f{\displaystyle f\circ L=f}onM{\displaystyle M}for everyL∈Γ.{\displaystyle L\in \Gamma 
.}Thenf{\displaystyle f}has a continuous linear extensionF{\displaystyle F}to all ofX{\displaystyle X}that has the sameoperator norm‖f‖=‖F‖{\displaystyle \|f\|=\|F\|}and is alsoΓ{\displaystyle \Gamma }-invariant, meaning thatF∘L=F{\displaystyle F\circ L=F}onX{\displaystyle X}for everyL∈Γ.{\displaystyle L\in \Gamma .} This theorem may be summarized: The following theorem of Mazur–Orlicz (1953) is equivalent to the Hahn–Banach theorem. Mazur–Orlicz theorem[3]—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be asublinear functionon a real or complex vector spaceX,{\displaystyle X,}letT{\displaystyle T}be any set, and letR:T→R{\displaystyle R:T\to \mathbb {R} }andv:T→X{\displaystyle v:T\to X}be any maps. The following statements are equivalent: The following theorem characterizes whenanyscalar function onX{\displaystyle X}(not necessarily linear) has a continuous linear extension to all ofX.{\displaystyle X.} Theorem(The extension principle[30])—Letf{\displaystyle f}be a scalar-valued function on a subsetS{\displaystyle S}of atopological vector spaceX.{\displaystyle X.}Then there exists a continuous linear functionalF{\displaystyle F}onX{\displaystyle X}extendingf{\displaystyle f}if and only if there exists a continuous seminormp{\displaystyle p}onX{\displaystyle X}such that|∑i=1naif(si)|≤p(∑i=1naisi){\displaystyle \left|\sum _{i=1}^{n}a_{i}f(s_{i})\right|\leq p\left(\sum _{i=1}^{n}a_{i}s_{i}\right)}for all positive integersn{\displaystyle n}and all finite sequencesa1,…,an{\displaystyle a_{1},\ldots ,a_{n}}of scalars and elementss1,…,sn{\displaystyle s_{1},\ldots ,s_{n}}ofS.{\displaystyle S.} LetXbe a topological vector space. A vector subspaceMofXhasthe extension propertyif any continuous linear functional onMcan be extended to a continuous linear functional onX, and we say thatXhas theHahn–Banach extension property(HBEP) if every vector subspace ofXhas the extension property.[31] The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP.
For completemetrizable topological vector spacesthere is a converse, due to Kalton: every complete metrizable TVS with the Hahn–Banach extension property is locally convex.[31]On the other hand, a vector spaceXof uncountable dimension, endowed with thefinest vector topology, is a topological vector space with the Hahn–Banach extension property that is neither locally convex nor metrizable.[31] A vector subspaceMof a TVSXhasthe separation propertyif for every elementxofXsuch thatx∉M,{\displaystyle x\not \in M,}there exists a continuous linear functionalf{\displaystyle f}onXsuch thatf(x)≠0{\displaystyle f(x)\neq 0}andf(m)=0{\displaystyle f(m)=0}for allm∈M.{\displaystyle m\in M.}Clearly, the continuous dual space of a TVSXseparates points onXif and only if{0}{\displaystyle \{0\}}has the separation property. In 1992, Kakol proved that for any infinite-dimensional vector spaceX, there exist TVS-topologies onXthat do not have the HBEP despite having enough continuous linear functionals for the continuous dual space to separate points onX. However, ifXis a TVS theneveryvector subspace ofXhas the extension property if and only ifeveryvector subspace ofXhas the separation property.[31] The proof of theHahn–Banach theorem for real vector spaces(HB) commonly usesZorn's lemma, which in the axiomatic framework ofZermelo–Fraenkel set theory(ZF) is equivalent to theaxiom of choice(AC). It was discovered byŁośandRyll-Nardzewski[12]and independently byLuxemburg[11]thatHBcan be proved using theultrafilter lemma(UL), which is equivalent (underZF) to theBoolean prime ideal theorem(BPI).BPIis strictly weaker than the axiom of choice and it was later shown thatHBis strictly weaker thanBPI.[32] Theultrafilter lemmais equivalent (underZF) to theBanach–Alaoglu theorem,[33]which is another foundational theorem infunctional analysis.
Although the Banach–Alaoglu theorem impliesHB,[34]it is not equivalent to it (said differently, the Banach–Alaoglu theorem is strictly stronger thanHB). However,HBis equivalent toa certain weakened version of the Banach–Alaoglu theoremfor normed spaces.[35]The Hahn–Banach theorem is also equivalent to the following statement:[36] (∗): On everyBoolean algebraB{\displaystyle B}there exists a "probability charge", that is, a non-constant finitely additive map fromB{\displaystyle B}into[0,1].{\displaystyle [0,1].} (BPIis equivalent to the statement that there are always non-constant probability charges which take only the values 0 and 1.) InZF, the Hahn–Banach theorem suffices to derive the existence of a non-Lebesgue measurable set.[37]Moreover, the Hahn–Banach theorem implies theBanach–Tarski paradox.[38] ForseparableBanach spaces, D. K. Brown and S. G. Simpson proved that the Hahn–Banach theorem follows from WKL0, a weak subsystem ofsecond-order arithmeticthat takes a form ofKőnig's lemmarestricted to binary trees as an axiom. In fact, they prove that under a weak set of assumptions, the two are equivalent, an example ofreverse mathematics.[39][40]
https://en.wikipedia.org/wiki/Hahn-Banach_theorem
Inmathematics, anormed vector spaceornormed spaceis avector spaceover therealorcomplexnumbers on which anormis defined.[1]A norm is a generalization of the intuitive notion of "length" in the physical world. IfV{\displaystyle V}is a vector space overK{\displaystyle K}, whereK{\displaystyle K}is a field equal toR{\displaystyle \mathbb {R} }or toC{\displaystyle \mathbb {C} }, then a norm onV{\displaystyle V}is a mapV→R{\displaystyle V\to \mathbb {R} }, typically denoted by‖⋅‖{\displaystyle \lVert \cdot \rVert }, satisfying the following four axioms: non-negativity:‖x‖≥0{\displaystyle \lVert x\rVert \geq 0}for every vectorx{\displaystyle x}; positive definiteness:‖x‖=0{\displaystyle \lVert x\rVert =0}if and only ifx=0{\displaystyle x=0}; absolute homogeneity:‖αx‖=|α|‖x‖{\displaystyle \lVert \alpha x\rVert =|\alpha |\lVert x\rVert }for every scalarα{\displaystyle \alpha }and every vectorx{\displaystyle x}; and the triangle inequality:‖x+y‖≤‖x‖+‖y‖{\displaystyle \lVert x+y\rVert \leq \lVert x\rVert +\lVert y\rVert }for all vectorsx{\displaystyle x}andy{\displaystyle y}. IfV{\displaystyle V}is a real or complex vector space as above, and‖⋅‖{\displaystyle \lVert \cdot \rVert }is a norm onV{\displaystyle V}, then the ordered pair(V,‖⋅‖){\displaystyle (V,\lVert \cdot \rVert )}is called a normed vector space. If it is clear from context which norm is intended, then it is common to denote the normed vector space simply byV{\displaystyle V}. A norm induces adistance, called its(norm) induced metric, by the formulad(x,y)=‖y−x‖,{\displaystyle d(x,y)=\|y-x\|,}which makes any normed vector space into ametric spaceand atopological vector space. If this metric space iscompletethen the normed space is aBanach space. Every normed vector space can be "uniquely extended" to a Banach space, which makes normed spaces intimately related to Banach spaces. Every Banach space is a normed space but the converse is not true. For example, the set of thefinite sequencesof real numbers can be normed with theEuclidean norm, but it is not complete for this norm. Aninner product spaceis a normed vector space whose norm is the square root of the inner product of a vector and itself. TheEuclidean normof aEuclidean vector spaceis a special case that allows definingEuclidean distanceby the formulad(A,B)=‖AB→‖.{\displaystyle d(A,B)=\|{\overrightarrow {AB}}\|.} The study of normed spaces and Banach spaces is a fundamental part offunctional analysis, a major subfield of mathematics.
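The norm axioms can be spot-checked numerically for a concrete norm; the snippet below samples the Euclidean norm on R³ (a spot check on a few vectors, of course not a proof):

```python
import math

def norm(v):
    # Euclidean norm on R^3
    return math.sqrt(sum(t * t for t in v))

u, v = (1.0, -2.0, 2.0), (3.0, 0.0, -4.0)
assert norm((0.0, 0.0, 0.0)) == 0.0                      # definiteness at 0
assert norm(u) == 3.0 and norm(v) == 5.0                 # non-negativity
a = -2.5
au = tuple(a * t for t in u)
assert abs(norm(au) - abs(a) * norm(u)) < 1e-12          # absolute homogeneity
s = tuple(x + y for x, y in zip(u, v))
assert norm(s) <= norm(u) + norm(v)                      # triangle inequality
```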
Anormed vector spaceis avector spaceequipped with anorm. Aseminormed vector spaceis a vector space equipped with aseminorm. A usefulvariation of the triangle inequalityis‖x−y‖≥|‖x‖−‖y‖|{\displaystyle \|x-y\|\geq |\|x\|-\|y\||}for any vectorsx{\displaystyle x}andy.{\displaystyle y.} This also shows that a vector norm is a (uniformly)continuous function. Property 3 depends on a choice of norm|α|{\displaystyle |\alpha |}on the field of scalars. When the scalar field isR{\displaystyle \mathbb {R} }(or more generally a subset ofC{\displaystyle \mathbb {C} }), this is usually taken to be the ordinaryabsolute value, but other choices are possible. For example, for a vector space overQ{\displaystyle \mathbb {Q} }one could take|α|{\displaystyle |\alpha |}to be thep{\displaystyle p}-adic absolute value. If(V,‖⋅‖){\displaystyle (V,\|\,\cdot \,\|)}is a normed vector space, the norm‖⋅‖{\displaystyle \|\,\cdot \,\|}induces ametric(a notion ofdistance) and therefore atopologyonV.{\displaystyle V.}This metric is defined in the natural way: the distance between two vectorsu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }is given by‖u−v‖.{\displaystyle \|\mathbf {u} -\mathbf {v} \|.}This topology is precisely the weakest topology which makes‖⋅‖{\displaystyle \|\,\cdot \,\|}continuous and which is compatible with the linear structure ofV{\displaystyle V}in the following sense: Similarly, for any seminormed vector space we can define the distance between two vectorsu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }as‖u−v‖.{\displaystyle \|\mathbf {u} -\mathbf {v} \|.}This turns the seminormed space into apseudometric space(notice this is weaker than a metric) and allows the definition of notions such ascontinuityandconvergence. To put it more abstractly every seminormed vector space is atopological vector spaceand thus carries atopological structurewhich is induced by the semi-norm. Of special interest arecompletenormed spaces, which are known asBanach spaces. 
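The variation of the triangle inequality above is exactly the statement that the norm is 1-Lipschitz with respect to its own induced metric, which is why it is (uniformly) continuous. A quick randomized check of the inequality:

```python
import math, random

def norm(v):
    return math.sqrt(sum(t * t for t in v))

rng = random.Random(1)
for _ in range(1000):
    x = [rng.uniform(-10, 10) for _ in range(3)]
    y = [rng.uniform(-10, 10) for _ in range(3)]
    diff = [a - b for a, b in zip(x, y)]
    # | ||x|| - ||y|| | <= ||x - y||, so the norm is 1-Lipschitz
    assert abs(norm(x) - norm(y)) <= norm(diff) + 1e-12
```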
Every normed vector spaceV{\displaystyle V}sits as a dense subspace inside some Banach space; this Banach space is essentially uniquely defined byV{\displaystyle V}and is called thecompletionofV.{\displaystyle V.}

Two norms on the same vector space are calledequivalentif they define the sametopology. On a finite-dimensional vector space (but not on infinite-dimensional vector spaces), all norms are equivalent, although the resulting metric spaces need not be the same.[2]Since any Euclidean space is complete, it follows that all finite-dimensional normed vector spaces are Banach spaces.

A normed vector spaceV{\displaystyle V}islocally compactif and only if the unit ballB={x:‖x‖≤1}{\displaystyle B=\{x:\|x\|\leq 1\}}iscompact, which is the case if and only ifV{\displaystyle V}is finite-dimensional; this is a consequence ofRiesz's lemma. (In fact, a more general result is true: a topological vector space is locally compact if and only if it is finite-dimensional. The point here is that we don't assume the topology comes from a norm.)

The topology of a seminormed vector space has many nice properties. Given aneighbourhood systemN(0){\displaystyle {\mathcal {N}}(0)}around 0 we can construct all other neighbourhood systems asN(x)=x+N(0):={x+N:N∈N(0)}{\displaystyle {\mathcal {N}}(x)=x+{\mathcal {N}}(0):=\{x+N:N\in {\mathcal {N}}(0)\}}withx+N:={x+n:n∈N}.{\displaystyle x+N:=\{x+n:n\in N\}.} Moreover, there exists aneighbourhood basisfor the origin consisting ofabsorbingandconvex sets. As this property is very useful infunctional analysis, generalizations of normed vector spaces with this property are studied under the namelocally convex spaces.
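For instance (a Python sketch, not from the article), on R^5 the taxicab and maximum norms bound each other by constant factors, which is exactly what equivalence of norms means:

```python
import random

def norm1(v):
    """Taxicab norm."""
    return sum(abs(c) for c in v)

def norm_inf(v):
    """Maximum norm."""
    return max(abs(c) for c in v)

n = 5
random.seed(2)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    # ||x||_inf <= ||x||_1 <= n * ||x||_inf: each norm is bounded by a
    # constant multiple of the other, so they induce the same topology.
    assert norm_inf(x) <= norm1(x) <= n * norm_inf(x) + 1e-9
```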
A norm (orseminorm)‖⋅‖{\displaystyle \|\cdot \|}on a topological vector space(X,τ){\displaystyle (X,\tau )}is continuous if and only if the topologyτ‖⋅‖{\displaystyle \tau _{\|\cdot \|}}that‖⋅‖{\displaystyle \|\cdot \|}induces onX{\displaystyle X}iscoarserthanτ{\displaystyle \tau }(meaning,τ‖⋅‖⊆τ{\displaystyle \tau _{\|\cdot \|}\subseteq \tau }), which happens if and only if there exists some open ballB{\displaystyle B}in(X,‖⋅‖){\displaystyle (X,\|\cdot \|)}(such as{x∈X:‖x‖<1}{\displaystyle \{x\in X:\|x\|<1\}}, for example) that is open in(X,τ){\displaystyle (X,\tau )}(said differently, such thatB∈τ{\displaystyle B\in \tau }).

Atopological vector space(X,τ){\displaystyle (X,\tau )}is callednormableif there exists a norm‖⋅‖{\displaystyle \|\cdot \|}onX{\displaystyle X}such that the canonical metric(x,y)↦‖y−x‖{\displaystyle (x,y)\mapsto \|y-x\|}induces the topologyτ{\displaystyle \tau }onX.{\displaystyle X.}The following theorem is due toKolmogorov:[3]

Kolmogorov's normability criterion: A Hausdorff topological vector space is normable if and only if there exists a convex,von Neumann boundedneighborhood of0∈X.{\displaystyle 0\in X.}

A product of a family of normable spaces is normable if and only if only finitely many of the spaces are non-trivial (that is,≠{0}{\displaystyle \neq \{0\}}).[3]Furthermore, the quotient of a normable spaceX{\displaystyle X}by a closed vector subspaceC{\displaystyle C}is normable, and if in additionX{\displaystyle X}'s topology is given by a norm‖⋅‖{\displaystyle \|\,\cdot \,\|}then the mapX/C→R{\displaystyle X/C\to \mathbb {R} }given byx+C↦infc∈C‖x+c‖{\textstyle x+C\mapsto \inf _{c\in C}\|x+c\|}is a well-defined norm onX/C{\displaystyle X/C}that induces thequotient topologyonX/C.{\displaystyle X/C.}[4]

IfX{\displaystyle X}is a Hausdorfflocally convextopological vector spacethen the following are equivalent: Furthermore,X{\displaystyle X}is finite-dimensional if and only ifXσ′{\displaystyle X_{\sigma }^{\prime }}is normable
(hereXσ′{\displaystyle X_{\sigma }^{\prime }}denotesX′{\displaystyle X^{\prime }}endowed with theweak-* topology).

Even if a metrizable topological vector space has a topology that is defined by a family of norms, it may nevertheless fail to be anormable space(meaning that its topology cannot be defined by anysinglenorm). An example of such a space is theFréchet spaceC∞(K),{\displaystyle C^{\infty }(K),}whose definition can be found in the article onspaces of test functions and distributions: its topologyτ{\displaystyle \tau }is defined by a countable family of norms, but there does not exist any norm‖⋅‖{\displaystyle \|\cdot \|}onC∞(K){\displaystyle C^{\infty }(K)}such that the topology this norm induces is equal toτ.{\displaystyle \tau .}In fact, the topology of alocally convex spaceX{\displaystyle X}can be defined by a family ofnormsonX{\displaystyle X}if and only if there existsat least onecontinuous norm onX.{\displaystyle X.}[6]

The most important maps between two normed vector spaces are thecontinuouslinear maps. Together with these maps, normed vector spaces form acategory. The norm is a continuous function on its vector space. All linear maps between finite-dimensional vector spaces are also continuous.

Anisometrybetween two normed vector spaces is a linear mapf{\displaystyle f}which preserves the norm (meaning‖f(v)‖=‖v‖{\displaystyle \|f(\mathbf {v} )\|=\|\mathbf {v} \|}for all vectorsv{\displaystyle \mathbf {v} }). Isometries are always continuous andinjective.
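The quotient norm x + C ↦ inf over c in C of ‖x + c‖, mentioned above for a closed subspace C, can be computed in closed form when C is a line in the Euclidean plane: the infimum over the coset is the distance from x to C, attained at the orthogonal projection. A Python sketch with illustrative vectors:

```python
import math

def quotient_norm(x, d):
    """inf over the coset x + C for C = span{d} in (R^2, Euclidean norm).

    Minimizing ||x + t d||^2 over t gives t = -(x . d) / (d . d); the
    infimum is the distance from x to the line C.
    """
    t = -(x[0] * d[0] + x[1] * d[1]) / (d[0] ** 2 + d[1] ** 2)
    return math.hypot(x[0] + t * d[0], x[1] + t * d[1])

# Coset of x = (3, 1) modulo the diagonal C = span{(1, 1)}: the quotient
# norm is the distance |3 - 1| / sqrt(2) = sqrt(2).
assert math.isclose(quotient_norm((3, 1), (1, 1)), math.sqrt(2))
# Vectors lying in C belong to the zero coset and get quotient norm 0:
assert quotient_norm((2, 2), (1, 1)) < 1e-12
```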
Asurjectiveisometry between the normed vector spacesV{\displaystyle V}andW{\displaystyle W}is called anisometric isomorphism, andV{\displaystyle V}andW{\displaystyle W}are calledisometrically isomorphic. Isometrically isomorphic normed vector spaces are identical for all practical purposes. When speaking of normed vector spaces, we augment the notion ofdual spaceto take the norm into account. The dualV′{\displaystyle V^{\prime }}of a normed vector spaceV{\displaystyle V}is the space of allcontinuouslinear maps fromV{\displaystyle V}to the base field (the complexes or the reals) — such linear maps are called "functionals". The norm of a functionalφ{\displaystyle \varphi }is defined as thesupremumof|φ(v)|{\displaystyle |\varphi (\mathbf {v} )|}wherev{\displaystyle \mathbf {v} }ranges over all unit vectors (that is, vectors of norm1{\displaystyle 1}) inV.{\displaystyle V.}This turnsV′{\displaystyle V^{\prime }}into a normed vector space. An important theorem about continuous linear functionals on normed vector spaces is theHahn–Banach theorem. The definition of many normed spaces (in particular,Banach spaces) involves a seminorm defined on a vector space and then the normed space is defined as thequotient spaceby the subspace of elements of seminorm zero. For instance, with theLp{\displaystyle L^{p}}spaces, the function defined by‖f‖p=(∫|f(x)|pdx)1/p{\displaystyle \|f\|_{p}=\left(\int |f(x)|^{p}\;dx\right)^{1/p}}is a seminorm on the vector space of all functions on which theLebesgue integralon the right hand side is defined and finite. However, the seminorm is equal to zero for any functionsupportedon a set ofLebesgue measurezero. These functions form a subspace which we "quotient out", making them equivalent to the zero function. 
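The dual norm just defined can be sampled numerically. For φ(v) = a · v on Euclidean R^3, the Cauchy–Schwarz inequality gives ‖φ‖ = ‖a‖ (a Python sketch, not from the article; the coefficient vector is illustrative):

```python
import math
import random

# phi(v) = a . v is a continuous linear functional on Euclidean R^3;
# by Cauchy-Schwarz its dual (operator) norm is the Euclidean norm of a.
a = (2.0, -1.0, 2.0)
phi = lambda v: sum(p * q for p, q in zip(a, v))
a_norm = math.sqrt(sum(c * c for c in a))       # ||a|| = 3.0

random.seed(3)
best = 0.0
for _ in range(10000):
    v = [random.gauss(0, 1) for _ in range(3)]
    r = math.sqrt(sum(c * c for c in v))
    u = [c / r for c in v]                      # a random unit vector
    best = max(best, abs(phi(u)))

# The supremum over unit vectors never exceeds ||a|| ...
assert best <= a_norm + 1e-12
# ... and the bound is attained at u = a / ||a||:
assert math.isclose(abs(phi([c / a_norm for c in a])), a_norm)
```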
Givenn{\displaystyle n}seminormed spaces(Xi,qi){\displaystyle \left(X_{i},q_{i}\right)}with seminormsqi:Xi→R,{\displaystyle q_{i}:X_{i}\to \mathbb {R} ,}denote theproduct spacebyX:=∏i=1nXi{\displaystyle X:=\prod _{i=1}^{n}X_{i}}with vector addition defined as(x1,…,xn)+(y1,…,yn):=(x1+y1,…,xn+yn){\displaystyle \left(x_{1},\ldots ,x_{n}\right)+\left(y_{1},\ldots ,y_{n}\right):=\left(x_{1}+y_{1},\ldots ,x_{n}+y_{n}\right)}and scalar multiplication defined asα(x1,…,xn):=(αx1,…,αxn).{\displaystyle \alpha \left(x_{1},\ldots ,x_{n}\right):=\left(\alpha x_{1},\ldots ,\alpha x_{n}\right).}

Define a new functionq:X→R{\displaystyle q:X\to \mathbb {R} }byq(x1,…,xn):=∑i=1nqi(xi),{\displaystyle q\left(x_{1},\ldots ,x_{n}\right):=\sum _{i=1}^{n}q_{i}\left(x_{i}\right),}which is a seminorm onX.{\displaystyle X.}The functionq{\displaystyle q}is a norm if and only if allqi{\displaystyle q_{i}}are norms.

More generally, for each realp≥1{\displaystyle p\geq 1}the mapq:X→R{\displaystyle q:X\to \mathbb {R} }defined byq(x1,…,xn):=(∑i=1nqi(xi)p)1p{\displaystyle q\left(x_{1},\ldots ,x_{n}\right):=\left(\sum _{i=1}^{n}q_{i}\left(x_{i}\right)^{p}\right)^{\frac {1}{p}}}is a seminorm. These seminorms are equivalent for all suchp,{\displaystyle p,}and so define the same topology onX.{\displaystyle X.}

A straightforward argument involving elementary linear algebra shows that the only finite-dimensional seminormed spaces are those arising as the product space of a normed space and a space with trivial seminorm. Consequently, many of the more interesting examples and applications of seminormed spaces occur for infinite-dimensional vector spaces.
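A toy instance of this construction (the factor spaces are illustrative, not from the article): take X1 = R^2 with the seminorm q1(x) = |x[0]|, which vanishes on the second coordinate axis and so is not a norm, and X2 = R with the absolute value:

```python
# X1 = R^2 with seminorm q1(x) = |x[0]| (a seminorm but not a norm);
# X2 = R with the absolute value (a norm).
q1 = lambda x: abs(x[0])
q2 = abs

def q(x1, x2, p=1):
    """The p-combination of the factor seminorms on the product X1 x X2."""
    return (q1(x1) ** p + q2(x2) ** p) ** (1 / p)

# q is a seminorm on the product but not a norm: it vanishes on a nonzero
# vector, because q1 already does.
assert q((0.0, 5.0), 0.0) == 0.0
# Different p give different values but equivalent seminorms:
assert q((3.0, 0.0), 4.0, p=1) == 7.0
assert q((3.0, 0.0), 4.0, p=2) == 5.0
```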
https://en.wikipedia.org/wiki/Normed_vector_space
Inmathematics, alinear form(also known as alinear functional,[1]aone-form, or acovector) is alinear map[nb 1]from avector spaceto itsfieldofscalars(often, thereal numbersor thecomplex numbers). IfVis a vector space over a fieldk, the set of all linear functionals fromVtokis itself a vector space overkwith addition and scalar multiplication definedpointwise. This space is called thedual spaceofV, or sometimes thealgebraic dual space, when atopological dual spaceis also considered. It is often denotedHom(V,k),[2]or, when the fieldkis understood,V∗{\displaystyle V^{*}};[3]other notations are also used, such asV′{\displaystyle V'},[4][5]V#{\displaystyle V^{\#}}orV∨.{\displaystyle V^{\vee }.}[2]When vectors are represented bycolumn vectors(as is common when abasisis fixed), then linear functionals are represented asrow vectors, and their values on specific vectors are given bymatrix products(with the row vector on the left). The constantzero function, mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) issurjective(that is, its range is all ofk). Suppose that vectors in the real coordinate spaceRn{\displaystyle \mathbb {R} ^{n}}are represented as column vectorsx=[x1⋮xn].{\displaystyle \mathbf {x} ={\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.} For each row vectora=[a1⋯an]{\displaystyle \mathbf {a} ={\begin{bmatrix}a_{1}&\cdots &a_{n}\end{bmatrix}}}there is a linear functionalfa{\displaystyle f_{\mathbf {a} }}defined byfa(x)=a1x1+⋯+anxn,{\displaystyle f_{\mathbf {a} }(\mathbf {x} )=a_{1}x_{1}+\cdots +a_{n}x_{n},}and each linear functional can be expressed in this form. 
This can be interpreted as either the matrix product or the dot product of the row vectora{\displaystyle \mathbf {a} }and the column vectorx{\displaystyle \mathbf {x} }:fa(x)=a⋅x=[a1⋯an][x1⋮xn].{\displaystyle f_{\mathbf {a} }(\mathbf {x} )=\mathbf {a} \cdot \mathbf {x} ={\begin{bmatrix}a_{1}&\cdots &a_{n}\end{bmatrix}}{\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}.} Thetracetr⁡(A){\displaystyle \operatorname {tr} (A)}of a square matrixA{\displaystyle A}is the sum of all elements on itsmain diagonal. Matrices can be multiplied by scalars and two matrices of the same dimension can be added together; these operations make avector spacefrom the set of alln×n{\displaystyle n\times n}matrices. The trace is a linear functional on this space becausetr⁡(sA)=str⁡(A){\displaystyle \operatorname {tr} (sA)=s\operatorname {tr} (A)}andtr⁡(A+B)=tr⁡(A)+tr⁡(B){\displaystyle \operatorname {tr} (A+B)=\operatorname {tr} (A)+\operatorname {tr} (B)}for all scalarss{\displaystyle s}and alln×n{\displaystyle n\times n}matricesAandB.{\displaystyle A{\text{ and }}B.} Linear functionals first appeared infunctional analysis, the study ofvector spaces of functions. A typical example of a linear functional isintegration: the linear transformation defined by theRiemann integralI(f)=∫abf(x)dx{\displaystyle I(f)=\int _{a}^{b}f(x)\,dx}is a linear functional from the vector spaceC[a,b]{\displaystyle C[a,b]}of continuous functions on the interval[a,b]{\displaystyle [a,b]}to the real numbers. 
The linearity ofI{\displaystyle I}follows from the standard facts about the integral:I(f+g)=∫ab[f(x)+g(x)]dx=∫abf(x)dx+∫abg(x)dx=I(f)+I(g)I(αf)=∫abαf(x)dx=α∫abf(x)dx=αI(f).{\displaystyle {\begin{aligned}I(f+g)&=\int _{a}^{b}[f(x)+g(x)]\,dx=\int _{a}^{b}f(x)\,dx+\int _{a}^{b}g(x)\,dx=I(f)+I(g)\\I(\alpha f)&=\int _{a}^{b}\alpha f(x)\,dx=\alpha \int _{a}^{b}f(x)\,dx=\alpha I(f).\end{aligned}}} LetPn{\displaystyle P_{n}}denote the vector space of real-valued polynomial functions of degree≤n{\displaystyle \leq n}defined on an interval[a,b].{\displaystyle [a,b].}Ifc∈[a,b],{\displaystyle c\in [a,b],}then letevc:Pn→R{\displaystyle \operatorname {ev} _{c}:P_{n}\to \mathbb {R} }be theevaluation functionalevc⁡f=f(c).{\displaystyle \operatorname {ev} _{c}f=f(c).}The mappingf↦f(c){\displaystyle f\mapsto f(c)}is linear since(f+g)(c)=f(c)+g(c)(αf)(c)=αf(c).{\displaystyle {\begin{aligned}(f+g)(c)&=f(c)+g(c)\\(\alpha f)(c)&=\alpha f(c).\end{aligned}}} Ifx0,…,xn{\displaystyle x_{0},\ldots ,x_{n}}aren+1{\displaystyle n+1}distinct points in[a,b],{\displaystyle [a,b],}then the evaluation functionalsevxi,{\displaystyle \operatorname {ev} _{x_{i}},}i=0,…,n{\displaystyle i=0,\ldots ,n}form abasisof the dual space ofPn{\displaystyle P_{n}}(Lax (1996)proves this last fact usingLagrange interpolation). A functionf{\displaystyle f}having theequation of a linef(x)=a+rx{\displaystyle f(x)=a+rx}witha≠0{\displaystyle a\neq 0}(for example,f(x)=1+2x{\displaystyle f(x)=1+2x}) isnota linear functional onR{\displaystyle \mathbb {R} }, since it is notlinear.[nb 2]It is, however,affine-linear. In finite dimensions, a linear functional can be visualized in terms of itslevel sets, the sets of vectors which map to a given value. In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallelhyperplanes. 
This method of visualizing linear functionals is sometimes introduced ingeneral relativitytexts, such asGravitationbyMisner, Thorne & Wheeler (1973). Ifx0,…,xn{\displaystyle x_{0},\ldots ,x_{n}}aren+1{\displaystyle n+1}distinct points in[a,b], then the linear functionalsevxi:f↦f(xi){\displaystyle \operatorname {ev} _{x_{i}}:f\mapsto f\left(x_{i}\right)}defined above form abasisof the dual space ofPn, the space of polynomials of degree≤n.{\displaystyle \leq n.}The integration functionalIis also a linear functional onPn, and so can be expressed as a linear combination of these basis elements. In symbols, there are coefficientsa0,…,an{\displaystyle a_{0},\ldots ,a_{n}}for whichI(f)=a0f(x0)+a1f(x1)+⋯+anf(xn){\displaystyle I(f)=a_{0}f(x_{0})+a_{1}f(x_{1})+\dots +a_{n}f(x_{n})}for allf∈Pn.{\displaystyle f\in P_{n}.}This forms the foundation of the theory ofnumerical quadrature.[6] Linear functionals are particularly important inquantum mechanics. Quantum mechanical systems are represented byHilbert spaces, which areanti–isomorphicto their own dual spaces. A state of a quantum mechanical system can be identified with a linear functional. For more information seebra–ket notation. In the theory ofgeneralized functions, certain kinds of generalized functions calleddistributionscan be realized as linear functionals on spaces oftest functions. Every non-degeneratebilinear formon a finite-dimensional vector spaceVinduces anisomorphismV→V∗:v↦v∗such thatv∗(w):=⟨v,w⟩∀w∈V,{\displaystyle v^{*}(w):=\langle v,w\rangle \quad \forall w\in V,} where the bilinear form onVis denoted⟨⋅,⋅⟩{\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle }(for instance, inEuclidean space,⟨v,w⟩=v⋅w{\displaystyle \langle v,w\rangle =v\cdot w}is thedot productofvandw). 
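The quadrature expansion I(f) = a_0 f(x_0) + ⋯ + a_n f(x_n) described above can be computed concretely: requiring exactness on a monomial basis of P_n yields a linear system for the weights a_i. A Python sketch with exact rational arithmetic (the nodes 0, 1/2, 1 are chosen for illustration):

```python
from fractions import Fraction

def solve3(A, b):
    """Solve a 3x3 linear system exactly by Gauss-Jordan elimination."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    for col in range(3):
        piv = next(r for r in range(col, 3) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]              # partial pivoting
        p = M[col][col]
        M[col] = [v / p for v in M[col]]             # scale pivot row to 1
        for r in range(3):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    return [row[-1] for row in M]

# Expand I(f) = integral of f over [0, 1] on P_2 in the evaluation
# functionals at x_0 = 0, x_1 = 1/2, x_2 = 1: exactness on the monomials
# 1, t, t^2 forces  sum_i a_i * x_i**k = 1/(k + 1)  for k = 0, 1, 2.
xs = [Fraction(0), Fraction(1, 2), Fraction(1)]
rows = [[x ** k for x in xs] for k in range(3)]
moments = [Fraction(1, k + 1) for k in range(3)]
weights = solve3(rows, moments)
# The weights come out to 1/6, 2/3, 1/6: Simpson's rule on [0, 1].
```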
The inverse isomorphism isV∗→V:v∗↦v, wherevis the unique element ofVsuch that⟨v,w⟩=v∗(w){\displaystyle \langle v,w\rangle =v^{*}(w)}for allw∈V.{\displaystyle w\in V.} The above defined vectorv∗∈V∗is said to be thedual vectorofv∈V.{\displaystyle v\in V.} In an infinite dimensionalHilbert space, analogous results hold by theRiesz representation theorem. There is a mappingV↦V∗fromVinto itscontinuous dual spaceV∗. Let the vector spaceVhave a basise1,e2,…,en{\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\dots ,\mathbf {e} _{n}}, not necessarilyorthogonal. Then thedual spaceV∗{\displaystyle V^{*}}has a basisω~1,ω~2,…,ω~n{\displaystyle {\tilde {\omega }}^{1},{\tilde {\omega }}^{2},\dots ,{\tilde {\omega }}^{n}}called thedual basisdefined by the special property thatω~i(ej)={1ifi=j0ifi≠j.{\displaystyle {\tilde {\omega }}^{i}(\mathbf {e} _{j})={\begin{cases}1&{\text{if}}\ i=j\\0&{\text{if}}\ i\neq j.\end{cases}}} Or, more succinctly,ω~i(ej)=δij{\displaystyle {\tilde {\omega }}^{i}(\mathbf {e} _{j})=\delta _{ij}} whereδij{\displaystyle \delta _{ij}}is theKronecker delta. Here the superscripts of the basis functionals are not exponents but are insteadcontravariantindices. A linear functionalu~{\displaystyle {\tilde {u}}}belonging to the dual spaceV~{\displaystyle {\tilde {V}}}can be expressed as alinear combinationof basis functionals, with coefficients ("components")ui,u~=∑i=1nuiω~i.{\displaystyle {\tilde {u}}=\sum _{i=1}^{n}u_{i}\,{\tilde {\omega }}^{i}.} Then, applying the functionalu~{\displaystyle {\tilde {u}}}to a basis vectorej{\displaystyle \mathbf {e} _{j}}yieldsu~(ej)=∑i=1n(uiω~i)ej=∑iui[ω~i(ej)]{\displaystyle {\tilde {u}}(\mathbf {e} _{j})=\sum _{i=1}^{n}\left(u_{i}\,{\tilde {\omega }}^{i}\right)\mathbf {e} _{j}=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left(\mathbf {e} _{j}\right)\right]} due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals. 
Thenu~(ej)=∑iui[ω~i(ej)]=∑iuiδij=uj.{\displaystyle {\begin{aligned}{\tilde {u}}({\mathbf {e} }_{j})&=\sum _{i}u_{i}\left[{\tilde {\omega }}^{i}\left({\mathbf {e} }_{j}\right)\right]\\&=\sum _{i}u_{i}{\delta }_{ij}\\&=u_{j}.\end{aligned}}} So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector. When the spaceVcarries aninner product, then it is possible to write explicitly a formula for the dual basis of a given basis. LetVhave (not necessarily orthogonal) basise1,…,en.{\displaystyle \mathbf {e} _{1},\dots ,\mathbf {e} _{n}.}In three dimensions (n= 3), the dual basis can be written explicitlyω~i(v)=12⟨∑j=13∑k=13εijk(ej×ek)e1⋅e2×e3,v⟩,{\displaystyle {\tilde {\omega }}^{i}(\mathbf {v} )={\frac {1}{2}}\left\langle {\frac {\sum _{j=1}^{3}\sum _{k=1}^{3}\varepsilon ^{ijk}\,(\mathbf {e} _{j}\times \mathbf {e} _{k})}{\mathbf {e} _{1}\cdot \mathbf {e} _{2}\times \mathbf {e} _{3}}},\mathbf {v} \right\rangle ,}fori=1,2,3,{\displaystyle i=1,2,3,}whereεis theLevi-Civita symboland⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }the inner product (ordot product) onV. In higher dimensions, this generalizes as followsω~i(v)=⟨∑1≤i2<i3<⋯<in≤nεii2…in(⋆ei2∧⋯∧ein)⋆(e1∧⋯∧en),v⟩,{\displaystyle {\tilde {\omega }}^{i}(\mathbf {v} )=\left\langle {\frac {\sum _{1\leq i_{2}<i_{3}<\dots <i_{n}\leq n}\varepsilon ^{ii_{2}\dots i_{n}}(\star \mathbf {e} _{i_{2}}\wedge \cdots \wedge \mathbf {e} _{i_{n}})}{\star (\mathbf {e} _{1}\wedge \cdots \wedge \mathbf {e} _{n})}},\mathbf {v} \right\rangle ,}where⋆{\displaystyle \star }is theHodge star operator. Modulesover aringare generalizations of vector spaces, which removes the restriction that coefficients belong to afield. Given a moduleMover a ringR, a linear form onMis a linear map fromMtoR, where the latter is considered as a module over itself. The space of linear forms is always denotedHomk(V,k), whetherkis a field or not. It is aright moduleifVis a left module. 
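The three-dimensional dual-basis formula above reduces, for (i, j, k) a cyclic permutation of (1, 2, 3), to ω^i = (e_j × e_k) / (e_1 · e_2 × e_3), and its defining property ω^i(e_j) = δ_ij is easy to verify (a Python sketch; the basis vectors are illustrative):

```python
def cross(a, b):
    """Cross product in R^3."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

# A non-orthogonal basis of R^3 (chosen for illustration):
e1, e2, e3 = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, 1.0, 1.0)

vol = dot(e1, cross(e2, e3))   # e1 . (e2 x e3) != 0 since this is a basis

# Dual basis functionals, represented by the vectors w^i = (e_j x e_k) / vol:
w = [tuple(c / vol for c in cross(u, v))
     for u, v in ((e2, e3), (e3, e1), (e1, e2))]

# Defining property w^i(e_j) = delta_ij:
for i, wi in enumerate(w):
    for j, ej in enumerate((e1, e2, e3)):
        assert abs(dot(wi, ej) - (1.0 if i == j else 0.0)) < 1e-12
```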
The existence of "enough" linear forms on a module is equivalent toprojectivity.[8] Dual Basis Lemma—AnR-moduleMisprojectiveif and only if there exists a subsetA⊂M{\displaystyle A\subset M}and linear forms{fa∣a∈A}{\displaystyle \{f_{a}\mid a\in A\}}such that, for everyx∈M,{\displaystyle x\in M,}only finitely manyfa(x){\displaystyle f_{a}(x)}are nonzero, andx=∑a∈Afa(x)a{\displaystyle x=\sum _{a\in A}{f_{a}(x)a}} Suppose thatX{\displaystyle X}is a vector space overC.{\displaystyle \mathbb {C} .}Restricting scalar multiplication toR{\displaystyle \mathbb {R} }gives rise to a real vector space[9]XR{\displaystyle X_{\mathbb {R} }}called therealificationofX.{\displaystyle X.}Any vector spaceX{\displaystyle X}overC{\displaystyle \mathbb {C} }is also a vector space overR,{\displaystyle \mathbb {R} ,}endowed with acomplex structure; that is, there exists a realvector subspaceXR{\displaystyle X_{\mathbb {R} }}such that we can (formally) writeX=XR⊕XRi{\displaystyle X=X_{\mathbb {R} }\oplus X_{\mathbb {R} }i}asR{\displaystyle \mathbb {R} }-vector spaces. Every linear functional onX{\displaystyle X}is complex-valued while every linear functional onXR{\displaystyle X_{\mathbb {R} }}is real-valued. 
Ifdim⁡X≠0{\displaystyle \dim X\neq 0}then a linear functional on either one ofX{\displaystyle X}orXR{\displaystyle X_{\mathbb {R} }}is non-trivial (meaning not identically0{\displaystyle 0}) if and only if it is surjective (because ifφ(x)≠0{\displaystyle \varphi (x)\neq 0}then for any scalars,{\displaystyle s,}φ((s/φ(x))x)=s{\displaystyle \varphi \left((s/\varphi (x))x\right)=s}), where theimageof a linear functional onX{\displaystyle X}isC{\displaystyle \mathbb {C} }while the image of a linear functional onXR{\displaystyle X_{\mathbb {R} }}isR.{\displaystyle \mathbb {R} .}Consequently, the only function onX{\displaystyle X}that is both a linear functional onX{\displaystyle X}and a linear function onXR{\displaystyle X_{\mathbb {R} }}is the trivial functional; in other words,X#∩XR#={0},{\displaystyle X^{\#}\cap X_{\mathbb {R} }^{\#}=\{0\},}where⋅#{\displaystyle \,{\cdot }^{\#}}denotes the space'salgebraic dual space. However, everyC{\displaystyle \mathbb {C} }-linear functional onX{\displaystyle X}is anR{\displaystyle \mathbb {R} }-linearoperator(meaning that it isadditiveandhomogeneous overR{\displaystyle \mathbb {R} }), but unless it is identically0,{\displaystyle 0,}it is not anR{\displaystyle \mathbb {R} }-linearfunctionalonX{\displaystyle X}because its range (which isC{\displaystyle \mathbb {C} }) is 2-dimensional overR.{\displaystyle \mathbb {R} .}Conversely, a non-zeroR{\displaystyle \mathbb {R} }-linear functional has range too small to be aC{\displaystyle \mathbb {C} }-linear functional as well. 
Ifφ∈X#{\displaystyle \varphi \in X^{\#}}then denote itsreal partbyφR:=Re⁡φ{\displaystyle \varphi _{\mathbb {R} }:=\operatorname {Re} \varphi }and itsimaginary partbyφi:=Im⁡φ.{\displaystyle \varphi _{i}:=\operatorname {Im} \varphi .}ThenφR:X→R{\displaystyle \varphi _{\mathbb {R} }:X\to \mathbb {R} }andφi:X→R{\displaystyle \varphi _{i}:X\to \mathbb {R} }are linear functionals onXR{\displaystyle X_{\mathbb {R} }}andφ=φR+iφi.{\displaystyle \varphi =\varphi _{\mathbb {R} }+i\varphi _{i}.}The fact thatz=Re⁡z−iRe⁡(iz)=Im⁡(iz)+iIm⁡z{\displaystyle z=\operatorname {Re} z-i\operatorname {Re} (iz)=\operatorname {Im} (iz)+i\operatorname {Im} z}for allz∈C{\displaystyle z\in \mathbb {C} }implies that for allx∈X,{\displaystyle x\in X,}[9]φ(x)=φR(x)−iφR(ix)=φi(ix)+iφi(x){\displaystyle {\begin{alignedat}{4}\varphi (x)&=\varphi _{\mathbb {R} }(x)-i\varphi _{\mathbb {R} }(ix)\\&=\varphi _{i}(ix)+i\varphi _{i}(x)\\\end{alignedat}}}and consequently, thatφi(x)=−φR(ix){\displaystyle \varphi _{i}(x)=-\varphi _{\mathbb {R} }(ix)}andφR(x)=φi(ix).{\displaystyle \varphi _{\mathbb {R} }(x)=\varphi _{i}(ix).}[10] The assignmentφ↦φR{\displaystyle \varphi \mapsto \varphi _{\mathbb {R} }}defines abijective[10]R{\displaystyle \mathbb {R} }-linear operatorX#→XR#{\displaystyle X^{\#}\to X_{\mathbb {R} }^{\#}}whose inverse is the mapL∙:XR#→X#{\displaystyle L_{\bullet }:X_{\mathbb {R} }^{\#}\to X^{\#}}defined by the assignmentg↦Lg{\displaystyle g\mapsto L_{g}}that sendsg:XR→R{\displaystyle g:X_{\mathbb {R} }\to \mathbb {R} }to the linear functionalLg:X→C{\displaystyle L_{g}:X\to \mathbb {C} }defined byLg(x):=g(x)−ig(ix)for allx∈X.{\displaystyle L_{g}(x):=g(x)-ig(ix)\quad {\text{ for all }}x\in X.}The real part ofLg{\displaystyle L_{g}}isg{\displaystyle g}and the bijectionL∙:XR#→X#{\displaystyle L_{\bullet }:X_{\mathbb {R} }^{\#}\to X^{\#}}is anR{\displaystyle \mathbb {R} }-linear operator, meaning thatLg+h=Lg+Lh{\displaystyle L_{g+h}=L_{g}+L_{h}}andLrg=rLg{\displaystyle L_{rg}=rL_{g}}for 
allr∈R{\displaystyle r\in \mathbb {R} }andg,h∈XR#.{\displaystyle g,h\in X_{\mathbb {R} }^{\#}.}[10]Similarly for the imaginary part, the assignmentφ↦φi{\displaystyle \varphi \mapsto \varphi _{i}}induces anR{\displaystyle \mathbb {R} }-linear bijectionX#→XR#{\displaystyle X^{\#}\to X_{\mathbb {R} }^{\#}}whose inverse is the mapXR#→X#{\displaystyle X_{\mathbb {R} }^{\#}\to X^{\#}}defined by sendingI∈XR#{\displaystyle I\in X_{\mathbb {R} }^{\#}}to the linear functional onX{\displaystyle X}defined byx↦I(ix)+iI(x).{\displaystyle x\mapsto I(ix)+iI(x).} This relationship was discovered byHenry Löwigin 1934 (although it is usually credited to F. Murray),[11]and can be generalized to arbitraryfinite extensions of a fieldin the natural way. It has many important consequences, some of which will now be described. Supposeφ:X→C{\displaystyle \varphi :X\to \mathbb {C} }is a linear functional onX{\displaystyle X}with real partφR:=Re⁡φ{\displaystyle \varphi _{\mathbb {R} }:=\operatorname {Re} \varphi }and imaginary partφi:=Im⁡φ.{\displaystyle \varphi _{i}:=\operatorname {Im} \varphi .} Thenφ=0{\displaystyle \varphi =0}if and only ifφR=0{\displaystyle \varphi _{\mathbb {R} }=0}if and only ifφi=0.{\displaystyle \varphi _{i}=0.} Assume thatX{\displaystyle X}is atopological vector space. Thenφ{\displaystyle \varphi }is continuous if and only if its real partφR{\displaystyle \varphi _{\mathbb {R} }}is continuous, if and only ifφ{\displaystyle \varphi }'s imaginary partφi{\displaystyle \varphi _{i}}is continuous. That is, either all three ofφ,φR,{\displaystyle \varphi ,\varphi _{\mathbb {R} },}andφi{\displaystyle \varphi _{i}}are continuous or none are continuous. This remains true if the word "continuous" is replaced with the word "bounded". 
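The correspondence φ ↦ Re φ and its inverse g ↦ L_g, with L_g(x) = g(x) − i g(ix), can be checked numerically (a Python sketch on X = C^2; the coefficients of φ are illustrative):

```python
import cmath
import random

# A C-linear functional on X = C^2, phi(x) = a . x:
a = (2 + 1j, -1 + 3j)
phi = lambda x: a[0] * x[0] + a[1] * x[1]

g = lambda x: phi(x).real                            # g = Re(phi), R-linear on X_R
L = lambda x: g(x) - 1j * g((1j * x[0], 1j * x[1]))  # L_g(x) = g(x) - i g(ix)

random.seed(4)
for _ in range(1000):
    x = (complex(random.gauss(0, 1), random.gauss(0, 1)),
         complex(random.gauss(0, 1), random.gauss(0, 1)))
    # L_g recovers phi from its real part alone:
    assert cmath.isclose(L(x), phi(x), rel_tol=1e-9, abs_tol=1e-9)
```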
In particular,φ∈X′{\displaystyle \varphi \in X^{\prime }}if and only ifφR∈XR′{\displaystyle \varphi _{\mathbb {R} }\in X_{\mathbb {R} }^{\prime }}where the prime denotes the space'scontinuous dual space.[9] LetB⊆X.{\displaystyle B\subseteq X.}IfuB⊆B{\displaystyle uB\subseteq B}for all scalarsu∈C{\displaystyle u\in \mathbb {C} }ofunit length(meaning|u|=1{\displaystyle |u|=1}) then[proof 1][12]supb∈B|φ(b)|=supb∈B|φR(b)|.{\displaystyle \sup _{b\in B}|\varphi (b)|=\sup _{b\in B}\left|\varphi _{\mathbb {R} }(b)\right|.}Similarly, ifφi:=Im⁡φ:X→R{\displaystyle \varphi _{i}:=\operatorname {Im} \varphi :X\to \mathbb {R} }denotes the complex part ofφ{\displaystyle \varphi }theniB⊆B{\displaystyle iB\subseteq B}impliessupb∈B|φR(b)|=supb∈B|φi(b)|.{\displaystyle \sup _{b\in B}\left|\varphi _{\mathbb {R} }(b)\right|=\sup _{b\in B}\left|\varphi _{i}(b)\right|.}IfX{\displaystyle X}is anormed spacewith norm‖⋅‖{\displaystyle \|\cdot \|}and ifB={x∈X:‖x‖≤1}{\displaystyle B=\{x\in X:\|x\|\leq 1\}}is the closed unit ball then thesupremumsabove are theoperator norms(defined in the usual way) ofφ,φR,{\displaystyle \varphi ,\varphi _{\mathbb {R} },}andφi{\displaystyle \varphi _{i}}so that[12]‖φ‖=‖φR‖=‖φi‖.{\displaystyle \|\varphi \|=\left\|\varphi _{\mathbb {R} }\right\|=\left\|\varphi _{i}\right\|.}This conclusion extends to the analogous statement forpolarsofbalanced setsin generaltopological vector spaces. Below, allvector spacesare over either thereal numbersR{\displaystyle \mathbb {R} }or thecomplex numbersC.{\displaystyle \mathbb {C} .} IfV{\displaystyle V}is atopological vector space, the space ofcontinuouslinear functionals — thecontinuous dual— is often simply called the dual space. IfV{\displaystyle V}is aBanach space, then so is its (continuous) dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called thealgebraic dual space. 
In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in infinite dimensions the continuous dual is a proper subspace of the algebraic dual.

A linear functionalfon a (not necessarilylocally convex)topological vector spaceXis continuous if and only if there exists a continuous seminormponXsuch that|f|≤p.{\displaystyle |f|\leq p.}[13]

Continuous linear functionals have nice properties foranalysis: a linear functional is continuous if and only if itskernelis closed,[14]and a non-trivial continuous linear functional is anopen map, even if the (topological) vector space is not complete.[15]

A vector subspaceM{\displaystyle M}ofX{\displaystyle X}is calledmaximalifM⊊X{\displaystyle M\subsetneq X}(meaningM⊆X{\displaystyle M\subseteq X}andM≠X{\displaystyle M\neq X}) and there does not exist a vector subspaceN{\displaystyle N}ofX{\displaystyle X}such thatM⊊N⊊X.{\displaystyle M\subsetneq N\subsetneq X.}A vector subspaceM{\displaystyle M}ofX{\displaystyle X}is maximal if and only if it is the kernel of some non-trivial linear functional onX{\displaystyle X}(that is,M=ker⁡f{\displaystyle M=\ker f}for some linear functionalf{\displaystyle f}onX{\displaystyle X}that is not identically0). Anaffine hyperplaneinX{\displaystyle X}is a translate of a maximal vector subspace.
By linearity, a subsetH{\displaystyle H}ofX{\displaystyle X}is an affine hyperplane if and only if there exists some non-trivial linear functionalf{\displaystyle f}onX{\displaystyle X}such thatH=f−1(1)={x∈X:f(x)=1}.{\displaystyle H=f^{-1}(1)=\{x\in X:f(x)=1\}.}[11]Iff{\displaystyle f}is a linear functional ands≠0{\displaystyle s\neq 0}is a scalar thenf−1(s)=s(f−1(1))=(1sf)−1(1).{\displaystyle f^{-1}(s)=s\left(f^{-1}(1)\right)=\left({\frac {1}{s}}f\right)^{-1}(1).}This equality can be used to relate different level sets off.{\displaystyle f.}Moreover, iff≠0{\displaystyle f\neq 0}then the kernel off{\displaystyle f}can be reconstructed from the affine hyperplaneH:=f−1(1){\displaystyle H:=f^{-1}(1)}byker⁡f=H−H.{\displaystyle \ker f=H-H.}

Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other). This fact can be generalized to the following theorem.

Theorem[16][17]—Iff,g1,…,gn{\displaystyle f,g_{1},\ldots ,g_{n}}are linear functionals onX, then the following are equivalent:

Iffis a non-trivial linear functional onXwith kernelN,x∈X{\displaystyle x\in X}satisfiesf(x)=1,{\displaystyle f(x)=1,}andUis abalancedsubset ofX, thenN∩(x+U)=∅{\displaystyle N\cap (x+U)=\varnothing }if and only if|f(u)|<1{\displaystyle |f(u)|<1}for allu∈U.{\displaystyle u\in U.}[15]

Any (algebraic) linear functional on avector subspacecan be extended to the whole space; for example, the evaluation functionals described above can be extended to the vector space of polynomials on all ofR.{\displaystyle \mathbb {R} .}However, this extension cannot always be done while keeping the linear functional continuous. The Hahn–Banach family of theorems gives conditions under which this extension can be done. For example, Hahn–Banach dominated extension theorem[18](Rudin 1991, Th.
3.2)—Ifp:X→R{\displaystyle p:X\to \mathbb {R} }is asublinear function, andf:M→R{\displaystyle f:M\to \mathbb {R} }is alinear functionalon alinear subspaceM⊆X{\displaystyle M\subseteq X}which is dominated byponM, then there exists a linear extensionF:X→R{\displaystyle F:X\to \mathbb {R} }offto the whole spaceXthat is dominated byp, i.e., there exists a linear functionalFsuch thatF(m)=f(m){\displaystyle F(m)=f(m)}for allm∈M,{\displaystyle m\in M,}and|F(x)|≤p(x){\displaystyle |F(x)|\leq p(x)}for allx∈X.{\displaystyle x\in X.}

LetXbe atopological vector space(TVS) withcontinuous dual spaceX′.{\displaystyle X'.} For any subsetHofX′,{\displaystyle X',}the following are equivalent:[19]

IfHis an equicontinuous subset ofX′{\displaystyle X'}then the following sets are also equicontinuous: theweak-*closure, thebalanced hull, theconvex hull, and theconvex balanced hull.[19]Moreover,Alaoglu's theoremimplies that the weak-* closure of an equicontinuous subset ofX′{\displaystyle X'}is weak-* compact (and thus that every equicontinuous subset is weak-* relatively compact).[20][19]
https://en.wikipedia.org/wiki/Linear_functional
Inmathematics, afunctionf{\displaystyle f}issuperadditiveiff(x+y)≥f(x)+f(y){\displaystyle f(x+y)\geq f(x)+f(y)}for allx{\displaystyle x}andy{\displaystyle y}in thedomainoff.{\displaystyle f.} Similarly, asequencea1,a2,…{\displaystyle a_{1},a_{2},\ldots }is calledsuperadditiveif it satisfies theinequalityan+m≥an+am{\displaystyle a_{n+m}\geq a_{n}+a_{m}}for allm{\displaystyle m}andn.{\displaystyle n.} The term "superadditive" is also applied to functions from aboolean algebrato the real numbers whereP(X∨Y)≥P(X)+P(Y),{\displaystyle P(X\lor Y)\geq P(X)+P(Y),}such aslower probabilities. Iff{\displaystyle f}is a superadditive function whose domain contains0,{\displaystyle 0,}thenf(0)≤0.{\displaystyle f(0)\leq 0.}To see this, simply setx=0{\displaystyle x=0}andy=0{\displaystyle y=0}in the defining inequality. The negative of a superadditive function issubadditive. The major reason for the use of superadditive sequences is the followinglemmadue toMichael Fekete.[4] The analogue of Fekete's lemma holds forsubadditivefunctions as well. There are extensions of Fekete's lemma that do not require the definition of superadditivity above to hold for allm{\displaystyle m}andn.{\displaystyle n.}There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some kind of both superadditivity and subadditivity is present. A good exposition of this topic may be found in Steele (1997).[5][6] Notes This article incorporates material from Superadditivity onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
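Fekete's lemma can be illustrated numerically. The sequence aₙ = ⌊n√2⌋ below is my own toy example of a superadditive sequence (since ⌊x⌋ + ⌊y⌋ ≤ ⌊x + y⌋); Fekete's lemma then says aₙ/n converges to supₙ aₙ/n, which here is √2.

```python
import math

# Toy superadditive sequence (assumed example): a_n = floor(n * sqrt(2)).
def a(n):
    return math.floor(n * math.sqrt(2))

# Check superadditivity a_{n+m} >= a_n + a_m on a range of pairs.
assert all(a(n + m) >= a(n) + a(m) for n in range(1, 60) for m in range(1, 60))

# Fekete's lemma: a_n / n converges to sup a_n / n = sqrt(2), from below.
ratios = [a(n) / n for n in (10, 100, 10_000, 1_000_000)]
assert all(r <= math.sqrt(2) for r in ratios)
assert abs(ratios[-1] - math.sqrt(2)) < 1e-5
```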
https://en.wikipedia.org/wiki/Superadditivity
Āryabhata's sine tableis a set of twenty-four numbers given in the astronomical treatiseĀryabhatiyacomposed by the fifth centuryIndian mathematicianand astronomerĀryabhata(476–550 CE), for the computation of thehalf-chordsof a certain set of arcs of a circle. The set of numbers appears in verse 12 in Chapter 1Dasagitikaof Aryabhatiya and is the first table of sines.[1][2]It is not a table in the modern sense of a mathematical table; that is, it is not a set of numbers arranged into rows and columns.[3][4][5]Āryabhaṭa's table is also not a set of values of the trigonometric sine function in a conventional sense; it is a table of thefirst differencesof the values oftrigonometric sinesexpressed inarcminutes, and because of this the table is also referred to asĀryabhaṭa's table of sine-differences.[6][7] Āryabhaṭa's table was the first sine table ever constructed in thehistory of mathematics.[8]The now lost tables ofHipparchus(c. 190 BC – c. 120 BC) andMenelaus(c. 70–140 CE) andthose ofPtolemy(c. AD 90 – c. 168) were all tables ofchordsand not of half-chords.[8]Āryabhaṭa's table remained as the standard sine table of ancient India. There were continuous attempts to improve the accuracy of this table. These endeavors culminated in the eventual discovery of thepower series expansionsof the sine and cosine functions byMadhava of Sangamagrama(c. 1350 – c. 1425), the founder of theKerala school of astronomy and mathematics, and the tabulation of asine table by Madhavawith values accurate to seven or eight decimal places. Some historians of mathematics have argued that the sine table given in Āryabhaṭiya was an adaptation of earlier such tables constructed by mathematicians and astronomers of ancient Greece.[9]David Pingree, one of America's foremost historians of the exact sciences in antiquity, was an exponent of such a view. Assuming this hypothesis,G. J. 
Toomer[10][11][12] writes, "Hardly any documentation exists for the earliest arrival of Greek astronomical models in India, or for that matter what those models would have looked like. So it is very difficult to ascertain the extent to which what has come down to us represents transmitted knowledge, and what is original with Indian scientists. ... The truth is probably a tangled mixture of both."[13] The values encoded in Āryabhaṭa's Sanskrit verse can be decoded using the numerical scheme explained in Āryabhaṭīya, and the decoded numbers are listed in the table below. In the table, the angle measures relevant to Āryabhaṭa's sine table are listed in the second column. The third column contains the list of numbers contained in the Sanskrit verse given above in Devanagari script. For the convenience of users unable to read Devanagari, these word-numerals are reproduced in the fourth column in ISO 15919 transliteration. The next column contains these numbers in Hindu-Arabic numerals. Āryabhaṭa's numbers are the first differences in the values of sines. The corresponding value of sine (or more precisely, of jya) can be obtained by summing up the differences up to that difference. Thus the value of jya corresponding to 18° 45′ is the sum 225 + 224 + 222 + 219 + 215 = 1105. For assessing the accuracy of Āryabhaṭa's computations, the modern values of jyas are given in the last column of the table. In the Indian mathematical tradition, the sine (or jya) of an angle is not a ratio of numbers; it is the length of a certain line segment, a certain half-chord. The radius of the base circle is the basic parameter for the construction of such tables. Historically, several tables have been constructed using different values for this parameter. Āryabhaṭa chose the number 3438 as the value of the radius of the base circle for the computation of his sine table. The rationale of the choice of this parameter is the idea of measuring the circumference of a circle in angle measures.
In astronomical computations distances are measured in degrees, minutes, seconds, etc. In this measure, the circumference of a circle is 360° = (60 × 360) minutes = 21600 minutes. The radius of the circle whose circumference measures 21600 minutes is 21600/2π minutes. Computing this using the value π = 3.1416 known to Aryabhata, one gets the radius of the circle as approximately 3438 minutes. Āryabhaṭa's sine table is based on this value for the radius of the base circle. It has not yet been established who was the first to use this value for the base radius, but Aryabhatiya is the earliest surviving text containing a reference to this basic constant.[14] The second section of Āryabhaṭiya, titled Ganitapāda, contains a stanza indicating a method for the computation of the sine table. There are several ambiguities in correctly interpreting the meaning of this verse. For example, the following is a translation of the verse given by Katz wherein the words in square brackets are insertions of the translator and not translations of texts in the verse.[14] This may be referring to the fact that the second derivative of the sine function is equal to the negative of the sine function.
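The construction described above can be replayed in a few lines, using the modern sine function with R = 3438 and a step of 225 arcminutes. (Āryabhaṭa's actual tabulated values deviate from simple rounding at a few later entries, but the first five differences and the 18° 45′ running sum quoted above come out exactly.)

```python
import math

# Recompute the first sine differences from the article's setup:
# R = 3438, arcs in steps of 225 arcminutes (3 deg 45 min), and
# jya(theta) = R * sin(theta) rounded to the nearest integer.
R = 3438
step = 225 / 60  # 225 arcminutes expressed in degrees

jya = [round(R * math.sin(math.radians(n * step))) for n in range(6)]
diffs = [b - a for a, b in zip(jya, jya[1:])]

# First five differences as quoted in the article.
assert diffs == [225, 224, 222, 219, 215]

# jya(18 deg 45 min) is the running sum of the differences: 1105.
assert sum(diffs) == 1105 == jya[5]
```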
https://en.wikipedia.org/wiki/Aryabhata%27s_sine_table
Brahmagupta's interpolation formula is a second-order polynomial interpolation formula developed by the Indian mathematician and astronomer Brahmagupta (598–668 CE) in the early 7th century CE. The Sanskrit couplet describing the formula can be found in the supplementary part of Khandakadyaka, a work of Brahmagupta completed in 665 CE.[1] The same couplet appears in Brahmagupta's earlier Dhyana-graha-adhikara, which was probably written "near the beginning of the second quarter of the 7th century CE, if not earlier."[1] Brahmagupta was one of the first to describe and use an interpolation formula using second-order differences.[2][3] Brahmagupta's interpolation formula is equivalent to the modern-day second-order Newton–Stirling interpolation formula. Mathematicians prior to Brahmagupta used a simple linear interpolation formula. The linear interpolation formula to compute f(a) is For the computation of f(a), Brahmagupta replaces Dr with another expression which gives more accurate values and which amounts to using a second-order interpolation formula. In Brahmagupta's terminology the difference Dr is the gatakhanda, meaning past difference or the difference that was crossed over; the difference Dr+1 is the bhogyakhanda, which is the difference yet to come. Vikala is the amount in minutes by which the interval has been covered at the point where we want to interpolate; in the present notation it is a − xr. The new expression which replaces fr+1 − fr is called the sphuta-bhogyakhanda. The description of the sphuta-bhogyakhanda is contained in the following Sanskrit couplet (Dhyana-Graha-Upadesa-Adhyaya, 17; Khandaka Khadyaka, IX, 8):[1] [clarification needed (text needed)] This has been translated using Bhattolpala's (10th century CE) commentary as follows:[1][4] This formula was originally stated for the computation of the values of the sine function for which the common interval in the underlying base table was 900 minutes or 15 degrees. So the reference to 900 is in fact a reference to the common interval h.
Brahmagupta's method of computation of the sphuta-bhogyakhanda can be formulated in modern notation as follows: The ± sign is to be taken according to whether ⁠1/2⁠(Dr + Dr+1) is less than or greater than Dr+1, or equivalently, according to whether Dr < Dr+1 or Dr > Dr+1. Brahmagupta's expression can be put in the following form: This correction factor yields the following approximate value for f(a): This is Stirling's interpolation formula truncated at the second-order differences.[5][6] It is not known how Brahmagupta arrived at his interpolation formula.[1] Brahmagupta gave a separate formula for the case where the values of the independent variable are not equally spaced.
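The rule can be sketched in the equivalent Newton–Stirling form f(xr + t·h) ≈ fr + t·(Dr + Dr+1)/2 + t²·(Dr+1 − Dr)/2, where t = (a − xr)/h. The function and variable names below are mine; the base sine table with h = 900 minutes and R = 3438 follows the article.

```python
import math

# Second-order (Newton-Stirling) form of the interpolation rule above;
# names are mine, the h = 900 minute base table follows the article.
def brahmagupta(f_r, D_r, D_r1, t):
    return f_r + t * (D_r + D_r1) / 2 + t * t * (D_r1 - D_r) / 2

R, h = 3438, 900  # base table at 0, 900, 1800, ... arcminutes
table = [round(R * math.sin(math.radians(n * h / 60))) for n in range(7)]

# Interpolate jya at a = 1200 minutes (20 degrees); x_r = 900, so t = 1/3.
f_r, D_r, D_r1 = table[1], table[1] - table[0], table[2] - table[1]
t = (1200 - 900) / h
second_order = brahmagupta(f_r, D_r, D_r1, t)
linear = f_r + t * D_r1  # plain linear interpolation, for comparison

true = R * math.sin(math.radians(20))
# The second-order estimate beats linear interpolation by a wide margin.
assert abs(second_order - true) < abs(linear - true)
assert abs(second_order - true) < 3.5
```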
https://en.wikipedia.org/wiki/Brahmagupta%27s_interpolation_formula
Inmathematics, theEisenstein integers(named afterGotthold Eisenstein), occasionally also known[1]asEulerian integers(afterLeonhard Euler), are thecomplex numbersof the form whereaandbareintegersand is aprimitive(hence non-real)cube root of unity. The Eisenstein integers form atriangular latticein thecomplex plane, in contrast with theGaussian integers, which form asquare latticein the complex plane. The Eisenstein integers are acountably infinite set. The Eisenstein integers form acommutative ringofalgebraic integersin thealgebraic number fieldQ(ω)– the thirdcyclotomic field. To see that the Eisenstein integers are algebraic integers note that eachz=a+bωis a root of themonic polynomial In particular,ωsatisfies the equation The product of two Eisenstein integersa+bωandc+dωis given explicitly by The 2-norm of an Eisenstein integer is just itssquared modulus, and is given by which is clearly a positive ordinary (rational) integer. Also, thecomplex conjugateofωsatisfies Thegroup of unitsin this ring is thecyclic groupformed by the sixthroots of unityin the complex plane:{±1, ±ω, ±ω2}, the Eisenstein integers of norm1. The ring of Eisenstein integers forms aEuclidean domainwhose normNis given by the square modulus, as above: Adivision algorithm, applied to any dividendαand divisorβ≠ 0, gives a quotientκand a remainderρsmaller than the divisor, satisfying: Here,α,β,κ,ρare all Eisenstein integers. This algorithm implies theEuclidean algorithm, which provesEuclid's lemmaand theunique factorizationof Eisenstein integers into Eisenstein primes. One division algorithm is as follows. First perform the division in the field of complex numbers, and write the quotient in terms ofω: for rationala,b∈Q. Then obtain the Eisenstein integer quotient by rounding the rational coefficients to the nearest integer: Here⌊x⌉{\displaystyle \lfloor x\rceil }may denote any of the standardrounding-to-integer functions. 
The reason this satisfies N(ρ) < N(β), while the analogous procedure fails for most other quadratic integer rings, is as follows. A fundamental domain for the ideal Z[ω]β = Zβ + Zωβ, acting by translations on the complex plane, is the 60°–120° rhombus with vertices 0, β, ωβ, β + ωβ. Any Eisenstein integer α lies inside one of the translates of this parallelogram, and the quotient κ is one of its vertices. The norm of the remainder is the squared distance from α to this vertex, but the maximum possible distance in our algorithm is only (√3/2)|β|, so |ρ| ≤ (√3/2)|β| < |β|. (The size of ρ could be slightly decreased by taking κ to be the closest corner.) If x and y are Eisenstein integers, we say that x divides y if there is some Eisenstein integer z such that y = zx. A non-unit Eisenstein integer x is said to be an Eisenstein prime if its only non-unit divisors are of the form ux, where u is any of the six units. They are the corresponding concept to the Gaussian primes in the Gaussian integers. There are two types of Eisenstein prime: the ordinary primes congruent to 2 modulo 3, which remain prime in Z[ω], and the elements whose norm is 3 or a natural prime congruent to 1 modulo 3. In the second type, factors of 3, 1 − ω and 1 − ω² are associates: 1 − ω = (−ω)(1 − ω²), so it is regarded as a special type in some books.[2][3] The first few Eisenstein primes of the form 3n − 1 are: 2, 5, 11, 17, 23, 29, 41, 47, 53, … Natural primes that are congruent to 0 or 1 modulo 3 are not Eisenstein primes:[4] they admit nontrivial factorizations in Z[ω]. For example: 7 = (3 + ω)(2 − ω). In general, if a natural prime p is 1 modulo 3 and can therefore be written as p = a² − ab + b², then it factorizes over Z[ω] as p = (a + bω)(a + bω²). Some non-real Eisenstein primes are 2 + ω, 3 + ω, 4 + ω, 5 + 2ω, 6 + ω, 7 + ω. Up to conjugacy and unit multiples, the primes listed above, together with 2 and 5, are all the Eisenstein primes of absolute value not exceeding 7.
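The division algorithm described above can be sketched directly on integer pairs (a, b) representing a + bω, using ω² = −1 − ω; the helper names below are mine.

```python
# Eisenstein integers as pairs (a, b) meaning a + b*omega; helper names
# are mine, the algorithm follows the division procedure in the text.

def mul(x, y):
    (a, b), (c, d) = x, y
    # (a + b w)(c + d w) = (ac - bd) + (ad + bc - bd) w, using w^2 = -1 - w
    return (a * c - b * d, a * d + b * c - b * d)

def norm(x):
    a, b = x
    return a * a - a * b + b * b  # N(a + b w) = a^2 - ab + b^2

def divmod_eisenstein(alpha, beta):
    (a, b), (c, d) = alpha, beta
    n = norm(beta)
    # alpha * conj(beta) / N(beta), with conj(c + d w) = (c - d) - d w
    x = (a * (c - d) + b * d) / n
    y = (b * c - a * d) / n
    kappa = (round(x), round(y))  # round each coordinate to nearest integer
    t = mul(kappa, beta)
    rho = (a - t[0], b - t[1])
    return kappa, rho

alpha, beta = (7, 3), (2, -1)
kappa, rho = divmod_eisenstein(alpha, beta)
prod = mul(kappa, beta)
# alpha = kappa * beta + rho, with N(rho) < N(beta), as the text guarantees.
assert (prod[0] + rho[0], prod[1] + rho[1]) == alpha
assert norm(rho) < norm(beta)
for alpha in [(5, 0), (0, 9), (-4, 7), (12, -5)]:
    _, r = divmod_eisenstein(alpha, beta)
    assert norm(r) < norm(beta)
```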
As of October 2023[update], the largest known real Eisenstein prime is thetenth-largest known prime10223 × 231172165+ 1, discovered by Péter Szabolcs andPrimeGrid.[5]With one exception,[clarification needed]all larger known primes areMersenne primes, discovered byGIMPS. Real Eisenstein primes are congruent to2 mod 3, and all Mersenne primes greater than3are congruent to1 mod 3; thus no Mersenne prime is an Eisenstein prime. The sum of the reciprocals of all Eisenstein integers excluding0raised to the fourth power is0:[6]∑z∈E∖{0}1z4=G4(e2πi3)=0{\displaystyle \sum _{z\in \mathbf {E} \setminus \{0\}}{\frac {1}{z^{4}}}=G_{4}\left(e^{\frac {2\pi i}{3}}\right)=0}soe2πi/3{\displaystyle e^{2\pi i/3}}is a root ofj-invariant. In generalGk(e2πi3)=0{\displaystyle G_{k}\left(e^{\frac {2\pi i}{3}}\right)=0}if and only ifk≢0(mod6){\displaystyle k\not \equiv 0{\pmod {6}}}.[7] The sum of the reciprocals of all Eisenstein integers excluding0raised to the sixth power can be expressed in terms of thegamma function:∑z∈E∖{0}1z6=G6(e2πi3)=Γ(1/3)188960π6{\displaystyle \sum _{z\in \mathbf {E} \setminus \{0\}}{\frac {1}{z^{6}}}=G_{6}\left(e^{\frac {2\pi i}{3}}\right)={\frac {\Gamma (1/3)^{18}}{8960\pi ^{6}}}}whereEare the Eisenstein integers andG6is theEisenstein seriesof weight 6.[8] Thequotientof the complex planeCby thelatticecontaining all Eisenstein integers is acomplex torusof real dimension2. This is one of two tori with maximalsymmetryamong all such complex tori.[9]This torus can be obtained by identifying each of the three pairs of opposite edges of a regular hexagon. The other maximally symmetric torus is the quotient of the complex plane by the additive lattice ofGaussian integers, and can be obtained by identifying each of the two pairs of opposite sides of a square fundamental domain, such as[0, 1] × [0, 1].
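The vanishing of the fourth-power reciprocal sum quoted above can be checked numerically. The truncation |a|, |b| ≤ N below is my own choice; the partial sum only approximates 0, with a tail that decays like 1/N².

```python
import cmath

# Partial sum of 1/z^4 over nonzero Eisenstein integers z = a + b*omega,
# truncated to |a|, |b| <= N (an assumed truncation; the exact sum is 0).
omega = cmath.exp(2j * cmath.pi / 3)
N = 80
total = sum(
    1 / (a + b * omega) ** 4
    for a in range(-N, N + 1)
    for b in range(-N, N + 1)
    if (a, b) != (0, 0)
)
assert abs(total) < 1e-2  # close to the exact value G4(omega) = 0
```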
https://en.wikipedia.org/wiki/Eisenstein_integer
In the mathematical field ofcomplex analysis,elliptic functionsare special kinds ofmeromorphicfunctions, that satisfy two periodicity conditions. They are named elliptic functions because they come fromelliptic integrals. Those integrals are in turn named elliptic because they first were encountered for the calculation of the arc length of anellipse. Important elliptic functions areJacobi elliptic functionsand theWeierstrass℘{\displaystyle \wp }-function. Further development of this theory led tohyperelliptic functionsandmodular forms. Ameromorphic functionis called an elliptic function, if there are twoR{\displaystyle \mathbb {R} }-linear independentcomplex numbersω1,ω2∈C{\displaystyle \omega _{1},\omega _{2}\in \mathbb {C} }such that So elliptic functions have two periods and are thereforedoubly periodic functions. Iff{\displaystyle f}is an elliptic function with periodsω1,ω2{\displaystyle \omega _{1},\omega _{2}}it also holds that for every linear combinationγ=mω1+nω2{\displaystyle \gamma =m\omega _{1}+n\omega _{2}}withm,n∈Z{\displaystyle m,n\in \mathbb {Z} }. Theabelian group is called theperiod lattice. Theparallelogramgenerated byω1{\displaystyle \omega _{1}}andω2{\displaystyle \omega _{2}} is afundamental domainofΛ{\displaystyle \Lambda }actingonC{\displaystyle \mathbb {C} }. Geometrically the complex plane is tiled with parallelograms. Everything that happens in one fundamental domain repeats in all the others. For that reason we can view elliptic function as functions with thequotient groupC/Λ{\displaystyle \mathbb {C} /\Lambda }as their domain. This quotient group, called anelliptic curve, can be visualised as a parallelogram where opposite sides are identified, whichtopologicallyis atorus.[1] The following three theorems are known asLiouville's theorems (1847). 
A holomorphic elliptic function is constant.[2] This is the original form ofLiouville's theoremand can be derived from it.[3]A holomorphic elliptic function is bounded since it takes on all of its values on the fundamental domain which is compact. So it is constant by Liouville's theorem. Every elliptic function has finitely many poles inC/Λ{\displaystyle \mathbb {C} /\Lambda }and the sum of itsresiduesis zero.[4] This theorem implies that there is no elliptic function not equal to zero with exactly one pole of order one or exactly one zero of order one in the fundamental domain. A non-constant elliptic function takes on every value the same number of times inC/Λ{\displaystyle \mathbb {C} /\Lambda }counted with multiplicity.[5] One of the most important elliptic functions is the Weierstrass℘{\displaystyle \wp }-function. For a given period latticeΛ{\displaystyle \Lambda }it is defined by It is constructed in such a way that it has a pole of order two at every lattice point. The term−1λ2{\displaystyle -{\frac {1}{\lambda ^{2}}}}is there to make the series convergent. ℘{\displaystyle \wp }is an even elliptic function; that is,℘(−z)=℘(z){\displaystyle \wp (-z)=\wp (z)}.[6] Its derivative is an odd function, i.e.℘′(−z)=−℘′(z).{\displaystyle \wp '(-z)=-\wp '(z).}[6] One of the main results of the theory of elliptic functions is the following: Every elliptic function with respect to a given period latticeΛ{\displaystyle \Lambda }can be expressed as a rational function in terms of℘{\displaystyle \wp }and℘′{\displaystyle \wp '}.[7] The℘{\displaystyle \wp }-function satisfies thedifferential equation whereg2{\displaystyle g_{2}}andg3{\displaystyle g_{3}}are constants that depend onΛ{\displaystyle \Lambda }. 
More precisely, g₂(ω₁, ω₂) = 60G₄(ω₁, ω₂) and g₃(ω₁, ω₂) = 140G₆(ω₁, ω₂), where G₄ and G₆ are the so-called Eisenstein series.[8] In algebraic language, the field of elliptic functions is isomorphic to the field ℂ(X)[Y]/(Y² − 4X³ + g₂X + g₃), where the isomorphism maps ℘ to X and ℘′ to Y. The relation to elliptic integrals has mainly a historical background. Elliptic integrals had been studied by Legendre, whose work was taken on by Niels Henrik Abel and Carl Gustav Jacobi. Abel discovered elliptic functions by taking the inverse function φ of the elliptic integral function α(x) = ∫₀ˣ dt/√((1 − c²t²)(1 + e²t²)), with x = φ(α).[9] Additionally he defined the functions[10] and After continuation to the complex plane they turned out to be doubly periodic and are known as Abel elliptic functions. Jacobi elliptic functions are similarly obtained as inverse functions of elliptic integrals. Jacobi considered the integral function ξ(x) = ∫₀ˣ dt/√((1 − t²)(1 − k²t²)) and inverted it: x = sn(ξ). sn stands for sinus amplitudinis and is the name of the new function.[11] He then introduced the functions cosinus amplitudinis and delta amplitudinis, which are defined as follows: Only by taking this step could Jacobi prove his general transformation formula of elliptic integrals in 1827.[12] Shortly after the development of infinitesimal calculus the theory of elliptic functions was started by the Italian mathematician Giulio di Fagnano and the Swiss mathematician Leonhard Euler.
When they tried to calculate the arc length of a lemniscate they encountered problems involving integrals that contained the square root of polynomials of degree 3 and 4.[13] It was clear that those so-called elliptic integrals could not be solved using elementary functions. Fagnano observed an algebraic relation between elliptic integrals, which he published in 1750.[13] Euler immediately generalized Fagnano's results and posed his algebraic addition theorem for elliptic integrals.[13] Except for a comment by Landen[14] his ideas were not pursued until 1786, when Legendre published his paper Mémoires sur les intégrations par arcs d'ellipse.[15] Legendre subsequently studied elliptic integrals and called them elliptic functions. Legendre introduced a three-fold classification – three kinds – which was a crucial simplification of the rather complicated theory at that time. Other important works of Legendre are: Mémoire sur les transcendantes elliptiques (1792),[16] Exercices de calcul intégral (1811–1817),[17] Traité des fonctions elliptiques (1825–1832).[18] Legendre's work was mostly left untouched by mathematicians until 1826. Subsequently, Niels Henrik Abel and Carl Gustav Jacobi resumed the investigations and quickly discovered new results. At first they inverted the elliptic integral function. Following a suggestion of Jacobi in 1829 these inverse functions are now called elliptic functions. One of Jacobi's most important works is Fundamenta nova theoriae functionum ellipticarum, which was published in 1829.[19] The addition theorem Euler found was posed and proved in its general form by Abel in 1829. In those days the theory of elliptic functions and the theory of doubly periodic functions were considered to be different theories. They were brought together by Briot and Bouquet in 1856.[20] Gauss discovered many of the properties of elliptic functions 30 years earlier but never published anything on the subject.[21]
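The differential equation for ℘ stated earlier can be spot-checked numerically straight from the defining lattice sums. The brute-force truncation below is my own choice (and is far from an efficient way to evaluate ℘); the lattice is the square one generated by 1 and i.

```python
# Numerical sanity check of (wp')^2 = 4 wp^3 - g2 wp - g3 for the square
# period lattice omega1 = 1, omega2 = i; truncation |m|, |n| <= N is mine.
N = 60
lattice = [m + n * 1j for m in range(-N, N + 1)
           for n in range(-N, N + 1) if (m, n) != (0, 0)]

def wp(z):
    # Weierstrass p-function: 1/z^2 plus the regularized lattice sum
    return 1 / z**2 + sum(1 / (z - w) ** 2 - 1 / w**2 for w in lattice)

def wp_prime(z):
    return -2 / z**3 - 2 * sum(1 / (z - w) ** 3 for w in lattice)

g2 = 60 * sum(w ** -4 for w in lattice)   # 60 * G4
g3 = 140 * sum(w ** -6 for w in lattice)  # 140 * G6 (0 for this lattice)

z = 0.31 + 0.22j
p = wp(z)
lhs = wp_prime(z) ** 2
rhs = 4 * p ** 3 - g2 * p - g3

# Symmetric truncation makes the partial sums accurate enough that the
# relative residual of the differential equation is small.
assert abs(lhs - rhs) / abs(lhs) < 0.05
assert abs(p - wp(-z)) < 1e-9  # wp is even (exact for the symmetric sum)
```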
https://en.wikipedia.org/wiki/Elliptic_function
In mathematics, Abel elliptic functions are a special kind of elliptic functions, established by the Norwegian mathematician Niels Henrik Abel. He published his paper "Recherches sur les Fonctions elliptiques" in Crelle's Journal in 1827.[1] It was the first published work on elliptic functions.[2] Abel's work on elliptic functions also influenced Jacobi's studies of elliptic functions, whose book Fundamenta nova theoriae functionum ellipticarum, published in 1829, became the standard work on elliptic functions.[3] Abel started from the elliptic integrals, which had been studied in great detail by Adrien-Marie Legendre. He began his research in 1823 when he was still a student. In particular he viewed them as complex functions, a theory which at that time was still in its infancy. In the following years Abel continued to explore these functions. He also tried to generalize them to functions with even more periods, but seemed to be in no hurry to publish his results. At the beginning of the year 1827, however, he wrote up his first long presentation of his discoveries, Recherches sur les fonctions elliptiques.[4] At the end of the same year he became aware of Carl Gustav Jacobi and his work on new transformations of elliptic integrals. Abel then finished a second part of his article on elliptic functions and showed in an appendix how the transformation results of Jacobi would easily follow.[5][3] When he subsequently saw the next publication by Jacobi, in which elliptic functions were used to prove his results without referring to Abel, the Norwegian mathematician found himself in a priority dispute with Jacobi. He finished several new articles about related issues, now for the first time dating them, but died less than a year later, in 1829.[6] In the meantime Jacobi completed his great work on elliptic functions, Fundamenta nova theoriae functionum ellipticarum, which appeared the same year as a book.
It ended up defining what would become the standard form of elliptic functions in the years that followed.[6] Consider the elliptic integral of the first kind in the following symmetric form:[7] α(x) = ∫₀ˣ dt/√((1 − c²t²)(1 + e²t²)). α is an odd increasing function on the interval [−1/c, 1/c] with the maximum:[2] α(1/c) = ω/2. That means α is invertible: there exists a function φ such that x = φ(α(x)), which is well-defined on the interval [−ω/2, ω/2]. Like the function α, it depends on the parameters c and e, which can be expressed by writing φ(u; e, c). Since α is an odd function, φ is also an odd function, which means φ(−u) = −φ(u). By taking the derivative with respect to u one gets: φ′(u) = √((1 − c²φ²(u))(1 + e²φ²(u))), which is an even function, i.e., φ′(−u) = φ′(u). Abel introduced the new functions f(u) = √(1 − c²φ²(u)) and F(u) = √(1 + e²φ²(u)). Thereby it holds that[2] φ′(u) = f(u)F(u). φ, f and F are the functions known as Abel elliptic functions. They can be continued using the addition theorems. For example, adding ±ω/2 one gets: φ can be continued onto purely imaginary numbers by introducing the substitution t → it. One gets xi = φ(βi), where β(x) = ∫₀ˣ dt/√((1 + c²t²)(1 − e²t²)). β is an increasing function on the interval [−1/e, 1/e]; its maximum β(1/e) plays the role for the imaginary axis that ω/2 plays for the real axis.[8] That means φ, f and F are known along the real and imaginary axes.
Using the addition theorems again they can be extended onto the complex plane. For example, for α ∈ [−ω/2, ω/2] this yields: The periodicity of φ, f and F can be shown by applying the addition theorems multiple times. All three functions are doubly periodic, which means they have two ℝ-linearly independent periods in the complex plane:[9] The poles of the functions φ(α), f(α) and F(α) are at[10] Abel's elliptic functions can be expressed by the Jacobi elliptic functions, which do not depend on the parameters c and e but on a modulus k: where k = ie/c. For the functions φ, f and F the following addition theorems hold:[8] φ(α + β) = (φ(α)f(β)F(β) + φ(β)f(α)F(α))/R, f(α + β) = (f(α)f(β) − c²φ(α)φ(β)F(α)F(β))/R, F(α + β) = (F(α)F(β) + e²φ(α)φ(β)f(α)f(β))/R, where R = 1 + c²e²φ²(α)φ²(β). These follow from the addition theorems for elliptic integrals that Euler had already proven.[8]
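The addition theorem for φ can be spot-checked numerically in the lemniscatic case c = e = 1, where ω/2 = α(1) ≈ 1.311. The quadrature (Simpson's rule) and root-finding (bisection) choices below are mine.

```python
import math

def alpha(x, n=2000):
    # alpha(x) = integral from 0 to x of dt / sqrt((1 - t^2)(1 + t^2)),
    # the symmetric elliptic integral above with c = e = 1 (Simpson's rule)
    h = x / n
    def g(t):
        return 1 / math.sqrt((1 - t * t) * (1 + t * t))
    return (g(0) + g(x)
            + 4 * sum(g((2 * k - 1) * h) for k in range(1, n // 2 + 1))
            + 2 * sum(g(2 * k * h) for k in range(1, n // 2))) * h / 3

def phi(u):
    # invert alpha by bisection: phi(alpha(x)) = x, for x in [0, 0.999]
    lo, hi = 0.0, 0.999
    for _ in range(50):
        mid = (lo + hi) / 2
        if alpha(mid) < u:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

u, v = 0.3, 0.4
pu, pv, puv = phi(u), phi(v), phi(u + v)
fu, Fu = math.sqrt(1 - pu * pu), math.sqrt(1 + pu * pu)  # f(u), F(u)
fv, Fv = math.sqrt(1 - pv * pv), math.sqrt(1 + pv * pv)  # f(v), F(v)
R = 1 + pu * pu * pv * pv  # R with c = e = 1
rhs = (pu * fv * Fv + pv * fu * Fu) / R
assert abs(puv - rhs) < 1e-6  # phi(u + v) matches the addition theorem
```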
https://en.wikipedia.org/wiki/Abel_elliptic_functions
Inmathematics, theJacobi elliptic functionsare a set of basicelliptic functions. They are found in the description of themotion of a pendulum, as well as in the design of electronicelliptic filters. Whiletrigonometric functionsare defined with reference to a circle, the Jacobi elliptic functions are a generalization which refer to otherconic sections, the ellipse in particular. The relation to trigonometric functions is contained in the notation, for example, by the matching notationsn{\displaystyle \operatorname {sn} }forsin{\displaystyle \sin }. The Jacobi elliptic functions are used more often in practical problems than theWeierstrass elliptic functionsas they do not require notions of complex analysis to be defined and/or understood. They were introduced byCarl Gustav Jakob Jacobi(1829).Carl Friedrich Gausshad already studied special Jacobi elliptic functions in 1797, thelemniscate elliptic functionsin particular,[1]but his work was published much later. There are twelve Jacobi elliptic functions denoted bypq⁡(u,m){\displaystyle \operatorname {pq} (u,m)}, wherep{\displaystyle \mathrm {p} }andq{\displaystyle \mathrm {q} }are any of the lettersc{\displaystyle \mathrm {c} },s{\displaystyle \mathrm {s} },n{\displaystyle \mathrm {n} }, andd{\displaystyle \mathrm {d} }. (Functions of the formpp⁡(u,m){\displaystyle \operatorname {pp} (u,m)}are trivially set to unity for notational completeness.)u{\displaystyle u}is the argument, andm{\displaystyle m}is the parameter, both of which may be complex. In fact, the Jacobi elliptic functions aremeromorphicin bothu{\displaystyle u}andm{\displaystyle m}.[2]The distribution of the zeros and poles in theu{\displaystyle u}-plane is well-known. 
However, questions of the distribution of the zeros and poles in them{\displaystyle m}-plane remain to be investigated.[2] In the complex plane of the argumentu{\displaystyle u}, the twelve functions form a repeating lattice of simplepoles and zeroes.[3]Depending on the function, one repeating parallelogram, or unit cell, will have sides of length2K{\displaystyle 2K}or4K{\displaystyle 4K}on the real axis, and2K′{\displaystyle 2K'}or4K′{\displaystyle 4K'}on the imaginary axis, whereK=K(m){\displaystyle K=K(m)}andK′=K(1−m){\displaystyle K'=K(1-m)}are known as thequarter periodswithK(⋅){\displaystyle K(\cdot )}being theelliptic integralof the first kind. The nature of the unit cell can be determined by inspecting the "auxiliary rectangle" (generally a parallelogram), which is a rectangle formed by the origin(0,0){\displaystyle (0,0)}at one corner, and(K,K′){\displaystyle (K,K')}as the diagonally opposite corner. As in the diagram, the four corners of the auxiliary rectangle are nameds{\displaystyle \mathrm {s} },c{\displaystyle \mathrm {c} },d{\displaystyle \mathrm {d} }, andn{\displaystyle \mathrm {n} }, going counter-clockwise from the origin. The functionpq⁡(u,m){\displaystyle \operatorname {pq} (u,m)}will have a zero at thep{\displaystyle \mathrm {p} }corner and a pole at theq{\displaystyle \mathrm {q} }corner. The twelve functions correspond to the twelve ways of arranging these poles and zeroes in the corners of the rectangle. When the argumentu{\displaystyle u}and parameterm{\displaystyle m}are real, with0<m<1{\displaystyle 0<m<1},K{\displaystyle K}andK′{\displaystyle K'}will be real and the auxiliary parallelogram will in fact be a rectangle, and the Jacobi elliptic functions will all be real valued on the real line. Since the Jacobi elliptic functions are doubly periodic inu{\displaystyle u}, they factor through atorus– in effect, their domain can be taken to be a torus, just as cosine and sine are in effect defined on a circle. 
Instead of having only one circle, we now have the product of two circles, one real and the other imaginary. The complex plane can be replaced by acomplex torus. The circumference of the first circle is4K{\displaystyle 4K}and the second4K′{\displaystyle 4K'}, whereK{\displaystyle K}andK′{\displaystyle K'}are thequarter periods. Each function has two zeroes and two poles at opposite positions on the torus. Among the points0{\displaystyle 0},K{\displaystyle K},K+iK′{\displaystyle K+iK'},iK′{\displaystyle iK'}there is one zero and one pole. The Jacobi elliptic functions are then doubly periodic, meromorphic functions satisfying the following properties: The elliptic functions can be given in a variety of notations, which can make the subject unnecessarily confusing. Elliptic functions are functions of two variables. The first variable might be given in terms of theamplitudeφ{\displaystyle \varphi }, or more commonly, in terms ofu{\displaystyle u}given below. The second variable might be given in terms of theparameterm{\displaystyle m}, or as theelliptic modulusk{\displaystyle k}, wherek2=m{\displaystyle k^{2}=m}, or in terms of themodular angleα{\displaystyle \alpha }, wherem=sin2⁡α{\displaystyle m=\sin ^{2}\alpha }. The complements ofk{\displaystyle k}andm{\displaystyle m}are defined asm′=1−m{\displaystyle m'=1-m}andk′=m′{\textstyle k'={\sqrt {m'}}}. These four terms are used below without comment to simplify various expressions. The twelve Jacobi elliptic functions are generally written aspq⁡(u,m){\displaystyle \operatorname {pq} (u,m)}wherep{\displaystyle \mathrm {p} }andq{\displaystyle \mathrm {q} }are any of the lettersc{\displaystyle \mathrm {c} },s{\displaystyle \mathrm {s} },n{\displaystyle \mathrm {n} }, andd{\displaystyle \mathrm {d} }. Functions of the formpp⁡(u,m){\displaystyle \operatorname {pp} (u,m)}are trivially set to unity for notational completeness. 
The “major” functions are generally taken to be cn⁡(u,m){\displaystyle \operatorname {cn} (u,m)}, sn⁡(u,m){\displaystyle \operatorname {sn} (u,m)} and dn⁡(u,m){\displaystyle \operatorname {dn} (u,m)}, from which all other functions can be derived, and expressions are often written solely in terms of these three functions; however, various symmetries and generalizations are often most conveniently expressed using the full set. (This notation is due to Gudermann and Glaisher and is not Jacobi's original notation.) Throughout this article, pq⁡(u,t2)=pq⁡(u;t){\displaystyle \operatorname {pq} (u,t^{2})=\operatorname {pq} (u;t)}. The functions are notationally related to each other by the multiplication rule: (arguments suppressed) from which other commonly used relationships can be derived: The multiplication rule follows immediately from the identification of the elliptic functions with the Neville theta functions.[5] Also note that: There is a definition relating the elliptic functions to the inverse of the incomplete elliptic integral of the first kind F{\displaystyle F}. These functions take the parameters u{\displaystyle u} and m{\displaystyle m} as inputs. The φ{\displaystyle \varphi } that satisfies is called the Jacobi amplitude: In this framework, the elliptic sine sn u (Latin: sinus amplitudinis) is given by and the elliptic cosine cn u (Latin: cosinus amplitudinis) is given by and the delta amplitude dn u (Latin: delta amplitudinis)[note 1] In the above, the value m{\displaystyle m} is a free parameter, usually taken to be real such that 0≤m≤1{\displaystyle 0\leq m\leq 1} (but can be complex in general), and so the elliptic functions can be thought of as being given by two variables, u{\displaystyle u} and the parameter m{\displaystyle m}. The remaining nine elliptic functions are easily built from the above three (sn{\displaystyle \operatorname {sn} }, cn{\displaystyle \operatorname {cn} }, dn{\displaystyle \operatorname {dn} }), and are given in a section below.
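Both the inverse-integral definition and the multiplication rule can be exercised numerically. A minimal sketch, assuming SciPy: `ellipkinc` evaluates u = F(φ, m), `ellipj` inverts it (its fourth return value is the amplitude), and the rule pq = pn/qn builds all twelve functions from sn, cn, dn:

```python
import numpy as np
from scipy.special import ellipj, ellipkinc

m, phi = 0.5, 0.9
u = ellipkinc(phi, m)            # u = F(phi, m)
sn, cn, dn, am = ellipj(u, m)    # am(F(phi, m), m) = phi

# sn = sin(am), cn = cos(am), dn = sqrt(1 - m sin^2(am))
err_am = abs(am - phi)
err_sn = abs(sn - np.sin(phi))
err_dn = abs(dn - np.sqrt(1 - m * np.sin(phi) ** 2))

# The twelve functions as pq = pn / qn (with nn = 1); pp-type functions are unity.
v = {'s': sn, 'c': cn, 'd': dn, 'n': 1.0}
pq = lambda p, q: v[p] / v[q]

# multiplication rule pq = pr * rq, e.g. cs = cd * ds
err_mult = abs(pq('c', 's') - pq('c', 'd') * pq('d', 's'))
print(err_am, err_sn, err_dn, err_mult)
```

The dictionary-based builder mirrors the notational scheme: reversing the letters of any pq gives its multiplicative inverse automatically.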
Note that whenφ=π/2{\displaystyle \varphi =\pi /2}, thatu{\displaystyle u}then equals thequarter periodK{\displaystyle K}. In the most general setting,am⁡(u,m){\displaystyle \operatorname {am} (u,m)}is amultivalued function(inu{\displaystyle u}) with infinitely manylogarithmic branch points(the branches differ by integer multiples of2π{\displaystyle 2\pi }), namely the points2sK(m)+(4t+1)K(1−m)i{\displaystyle 2sK(m)+(4t+1)K(1-m)i}and2sK(m)+(4t+3)K(1−m)i{\displaystyle 2sK(m)+(4t+3)K(1-m)i}wheres,t∈Z{\displaystyle s,t\in \mathbb {Z} }.[6]This multivalued function can be made single-valued by cutting the complex plane along the line segments joining these branch points (the cutting can be done in non-equivalent ways, giving non-equivalent single-valued functions), thus makingam⁡(u,m){\displaystyle \operatorname {am} (u,m)}analyticeverywhere except on thebranch cuts. In contrast,sin⁡am⁡(u,m){\displaystyle \sin \operatorname {am} (u,m)}and other elliptic functions have no branch points, give consistent values for every branch ofam{\displaystyle \operatorname {am} }, and aremeromorphicin the whole complex plane. Since every elliptic function is meromorphic in the whole complex plane (by definition),am⁡(u,m){\displaystyle \operatorname {am} (u,m)}(when considered as a single-valued function) is not an elliptic function. However, a particular cutting foram⁡(u,m){\displaystyle \operatorname {am} (u,m)}can be made in theu{\displaystyle u}-plane by line segments from2sK(m)+(4t+1)K(1−m)i{\displaystyle 2sK(m)+(4t+1)K(1-m)i}to2sK(m)+(4t+3)K(1−m)i{\displaystyle 2sK(m)+(4t+3)K(1-m)i}withs,t∈Z{\displaystyle s,t\in \mathbb {Z} }; then it only remains to defineam⁡(u,m){\displaystyle \operatorname {am} (u,m)}at the branch cuts by continuity from some direction. 
Thenam⁡(u,m){\displaystyle \operatorname {am} (u,m)}becomes single-valued and singly-periodic inu{\displaystyle u}with the minimal period4iK(1−m){\displaystyle 4iK(1-m)}and it has singularities at the logarithmic branch points mentioned above. Ifm∈R{\displaystyle m\in \mathbb {R} }andm≤1{\displaystyle m\leq 1},am⁡(u,m){\displaystyle \operatorname {am} (u,m)}is continuous inu{\displaystyle u}on the real line. Whenm>1{\displaystyle m>1}, the branch cuts ofam⁡(u,m){\displaystyle \operatorname {am} (u,m)}in theu{\displaystyle u}-plane cross the real line at2(2s+1)K(1/m)/m{\displaystyle 2(2s+1)K(1/m)/{\sqrt {m}}}fors∈Z{\displaystyle s\in \mathbb {Z} }; therefore form>1{\displaystyle m>1},am⁡(u,m){\displaystyle \operatorname {am} (u,m)}is not continuous inu{\displaystyle u}on the real line and jumps by2π{\displaystyle 2\pi }on the discontinuities. But definingam⁡(u,m){\displaystyle \operatorname {am} (u,m)}this way gives rise to very complicated branch cuts in them{\displaystyle m}-plane (nottheu{\displaystyle u}-plane); they have not been fully described as of yet. Let be theincomplete elliptic integral of the second kindwith parameterm{\displaystyle m}. Then theJacobi epsilonfunction can be defined as foru∈R{\displaystyle u\in \mathbb {R} }and0<m<1{\displaystyle 0<m<1}and byanalytic continuationin each of the variables otherwise: the Jacobi epsilon function is meromorphic in the whole complex plane (in bothu{\displaystyle u}andm{\displaystyle m}). Alternatively, throughout both theu{\displaystyle u}-plane andm{\displaystyle m}-plane,[7] E{\displaystyle {\mathcal {E}}}is well-defined in this way because allresiduesoft↦dn⁡(t,m)2{\displaystyle t\mapsto \operatorname {dn} (t,m)^{2}}are zero, so the integral is path-independent. 
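For real u and 0 < m < 1 this definition can be checked directly, since there the epsilon function equals E(am(u, m), m). A sketch assuming SciPy (`ellipeinc` is the incomplete integral of the second kind); a central difference confirms that the derivative of the epsilon function is dn²(u, m):

```python
from scipy.special import ellipj, ellipeinc

m = 0.4

def jacobi_eps(u, m):
    """Jacobi epsilon for real u and 0 < m < 1: E(am(u, m), m)."""
    return ellipeinc(ellipj(u, m)[3], m)   # ellipj's 4th output is the amplitude

u, h = 0.8, 1e-6
deriv = (jacobi_eps(u + h, m) - jacobi_eps(u - h, m)) / (2 * h)  # d(eps)/du
dn2 = ellipj(u, m)[2] ** 2                                       # dn(u, m)^2
print(deriv, dn2)
```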
So the Jacobi epsilon relates the incomplete elliptic integral of the first kind to the incomplete elliptic integral of the second kind: The Jacobi epsilon function is not an elliptic function, but it appears when differentiating the Jacobi elliptic functions with respect to the parameter. TheJacobi znfunction is defined by It is a singly periodic function which is meromorphic inu{\displaystyle u}, but not inm{\displaystyle m}(due to the branch cuts ofE{\displaystyle E}andK{\displaystyle K}). Its minimal period inu{\displaystyle u}is2K(m){\displaystyle 2K(m)}. It is related to theJacobi zeta functionbyZ(φ,m)=zn⁡(F(φ,m),m).{\displaystyle Z(\varphi ,m)=\operatorname {zn} (F(\varphi ,m),m).} Historically, the Jacobi elliptic functions were first defined by using the amplitude. In more modern texts on elliptic functions, the Jacobi elliptic functions are defined by other means, for example by ratios of theta functions (see below), and the amplitude is ignored. In modern terms, the relation to elliptic integrals would be expressed bysn⁡(F(φ,m),m)=sin⁡φ{\displaystyle \operatorname {sn} (F(\varphi ,m),m)=\sin \varphi }(orcn⁡(F(φ,m),m)=cos⁡φ{\displaystyle \operatorname {cn} (F(\varphi ,m),m)=\cos \varphi }) instead ofam⁡(F(φ,m),m)=φ{\displaystyle \operatorname {am} (F(\varphi ,m),m)=\varphi }. cos⁡φ,sin⁡φ{\displaystyle \cos \varphi ,\sin \varphi }are defined on the unit circle, with radiusr= 1 and angleφ={\displaystyle \varphi =}arc length of the unit circle measured from the positivex-axis. Similarly, Jacobi elliptic functions are defined on the unit ellipse,[citation needed]witha= 1. Let then: For each angleφ{\displaystyle \varphi }the parameter (the incomplete elliptic integral of the first kind) is computed. On the unit circle (a=b=1{\displaystyle a=b=1}),u{\displaystyle u}would be an arc length. 
However, the relation ofu{\displaystyle u}to thearc length of an ellipseis more complicated.[8] LetP=(x,y)=(rcos⁡φ,rsin⁡φ){\displaystyle P=(x,y)=(r\cos \varphi ,r\sin \varphi )}be a point on the ellipse, and letP′=(x′,y′)=(cos⁡φ,sin⁡φ){\displaystyle P'=(x',y')=(\cos \varphi ,\sin \varphi )}be the point where the unit circle intersects the line betweenP{\displaystyle P}and the originO{\displaystyle O}. Then the familiar relations from the unit circle: read for the ellipse: So the projections of the intersection pointP′{\displaystyle P'}of the lineOP{\displaystyle OP}with the unit circle on thex- andy-axes are simplycn⁡(u,m){\displaystyle \operatorname {cn} (u,m)}andsn⁡(u,m){\displaystyle \operatorname {sn} (u,m)}. These projections may be interpreted as 'definition as trigonometry'. In short: For thex{\displaystyle x}andy{\displaystyle y}value of the pointP{\displaystyle P}withu{\displaystyle u}and parameterm{\displaystyle m}we get, after inserting the relation: into:x=r(φ,m)cos⁡(φ),y=r(φ,m)sin⁡(φ){\displaystyle x=r(\varphi ,m)\cos(\varphi ),y=r(\varphi ,m)\sin(\varphi )}that: The latter relations for thex- andy-coordinates of points on the unit ellipse may be considered as generalization of the relationsx=cos⁡φ,y=sin⁡φ{\displaystyle x=\cos \varphi ,y=\sin \varphi }for the coordinates of points on the unit circle. The following table summarizes the expressions for all Jacobi elliptic functions pq(u,m) in the variables (x,y,r) and (φ,dn) withr=x2+y2{\textstyle r={\sqrt {x^{2}+y^{2}}}} Equivalently, Jacobi's elliptic functions can be defined in terms of thetheta functions.[9]Withz,τ∈C{\displaystyle z,\tau \in \mathbb {C} }such thatIm⁡τ>0{\displaystyle \operatorname {Im} \tau >0}, let and letθ2(τ)=θ2(0|τ){\displaystyle \theta _{2}(\tau )=\theta _{2}(0|\tau )},θ3(τ)=θ3(0|τ){\displaystyle \theta _{3}(\tau )=\theta _{3}(0|\tau )},θ4(τ)=θ4(0|τ){\displaystyle \theta _{4}(\tau )=\theta _{4}(0|\tau )}. 
Then withK=K(m){\displaystyle K=K(m)},K′=K(1−m){\displaystyle K'=K(1-m)},ζ=πu/(2K){\displaystyle \zeta =\pi u/(2K)}andτ=iK′/K{\displaystyle \tau =iK'/K}, The Jacobi zn function can be expressed by theta functions as well: where′{\displaystyle '}denotes the partial derivative with respect to the first variable. In fact, the definition of the Jacobi elliptic functions in Whittaker & Watson is stated a little bit differently than the one given above (but it's equivalent to it) and relies on modular inversion:The functionλ{\displaystyle \lambda }, defined by assumes every value inC−{0,1}{\displaystyle \mathbb {C} -\{0,1\}}once and only once[10]in whereH{\displaystyle \mathbb {H} }is the upper half-plane in the complex plane,∂F1{\displaystyle \partial F_{1}}is the boundary ofF1{\displaystyle F_{1}}and In this way, eachm=defλ(τ)∈C−{0,1}{\displaystyle m\,{\overset {\text{def}}{=}}\,\lambda (\tau )\in \mathbb {C} -\{0,1\}}can be associated withone and only oneτ{\displaystyle \tau }. Then Whittaker & Watson define the Jacobi elliptic functions by whereζ=u/θ3(τ)2{\displaystyle \zeta =u/\theta _{3}(\tau )^{2}}. In the book, they place an additional restriction onm{\displaystyle m}(thatm∉(−∞,0)∪(1,∞){\displaystyle m\notin (-\infty ,0)\cup (1,\infty )}), but it is in fact not a necessary restriction (see the Cox reference). Also, ifm=0{\displaystyle m=0}orm=1{\displaystyle m=1}, the Jacobi elliptic functions degenerate to non-elliptic functions which is described below. The Jacobi elliptic functions can be defined very simply using theNeville theta functions:[11] Simplifications of complicated products of the Jacobi elliptic functions are often made easier using these identities. The Jacobi imaginary transformations relate various functions of the imaginary variablei uor, equivalently, relations between various values of themparameter. In terms of the major functions:[12]: 506 Using the multiplication rule, all other functions may be expressed in terms of the above three. 
The transformations may be generally written as pq⁡(u,m)=γpqpq′⁡(iu,1−m){\displaystyle \operatorname {pq} (u,m)=\gamma _{\operatorname {pq} }\operatorname {pq} '(i\,u,1\!-\!m)}. The following table gives the γpqpq′⁡(iu,1−m){\displaystyle \gamma _{\operatorname {pq} }\operatorname {pq} '(i\,u,1\!-\!m)} for the specified pq(u,m).[11] (The arguments (iu,1−m){\displaystyle (i\,u,1\!-\!m)} are suppressed.) Since the hyperbolic trigonometric functions are proportional to the circular trigonometric functions with imaginary arguments, it follows that the Jacobi functions will yield the hyperbolic functions for m=1.[5]: 249 In the figure, the Jacobi curve has degenerated to two vertical lines at x = 1 and x = −1. The Jacobi real transformations[5]: 308 yield expressions for the elliptic functions in terms of alternate values of m. The transformations may be generally written as pq⁡(u,m)=γpqpq′⁡(ku,1/m){\displaystyle \operatorname {pq} (u,m)=\gamma _{\operatorname {pq} }\operatorname {pq} '(k\,u,1/m)}. The following table gives the γpqpq′⁡(ku,1/m){\displaystyle \gamma _{\operatorname {pq} }\operatorname {pq} '(k\,u,1/m)} for the specified pq(u,m).[11] (The arguments (ku,1/m){\displaystyle (k\,u,1/m)} are suppressed.) Jacobi's real and imaginary transformations can be combined in various ways to yield three more simple transformations.[5]: 214 The real and imaginary transformations are two transformations in a group (D3 or anharmonic group) of six transformations. If is the transformation for the m parameter in the real transformation, and is the transformation of m in the imaginary transformation, then the other transformations can be built up by successive application of these two basic transformations, yielding only three more possibilities: These five transformations, along with the identity transformation (μU(m) = m), yield the six-element group.
With regard to the Jacobi elliptic functions, the general transformation can be expressed using just three functions: wherei= U, I, IR, R, RI, or RIR, identifying the transformation, γiis a multiplication factor common to these three functions, and the prime indicates the transformed function. The other nine transformed functions can be built up from the above three. The reason the cs, ns, ds functions were chosen to represent the transformation is that the other functions will be ratios of these three (except for their inverses) and the multiplication factors will cancel. The following table lists the multiplication factors for the three ps functions, the transformedm's, and the transformed function names for each of the six transformations.[5]: 214(As usual,k2=m, 1 −k2=k12=m′ and the arguments (γiu,μi(m){\displaystyle \gamma _{i}u,\mu _{i}(m)}) are suppressed) Thus, for example, we may build the following table for the RIR transformation.[11]The transformation is generally writtenpq⁡(u,m)=γpqpq′⁡(k′u,−m/m′){\displaystyle \operatorname {pq} (u,m)=\gamma _{\operatorname {pq} }\,\operatorname {pq'} (k'\,u,-m/m')}(The arguments(k′u,−m/m′){\displaystyle (k'\,u,-m/m')}are suppressed) The value of the Jacobi transformations is that any set of Jacobi elliptic functions with any real-valued parametermcan be converted into another set for which0<m≤1/2{\displaystyle 0<m\leq 1/2}and, for real values ofu, the function values will be real.[5]: p. 215 In the following, the second variable is suppressed and is equal tom{\displaystyle m}: where both identities are valid for allu,v,m∈C{\displaystyle u,v,m\in \mathbb {C} }such that both sides are well-defined. With we have where all the identities are valid for allu,m∈C{\displaystyle u,m\in \mathbb {C} }such that both sides are well-defined. Introducing complex numbers, our ellipse has an associated hyperbola: from applying Jacobi's imaginary transformation[11]to the elliptic functions in the above equation forxandy. 
It follows that we can putx=dn⁡(u,1−m),y=sn⁡(u,1−m){\displaystyle x=\operatorname {dn} (u,1-m),y=\operatorname {sn} (u,1-m)}. So our ellipse has a dual ellipse with m replaced by 1-m. This leads to the complex torus mentioned in the Introduction.[13]Generally, m may be a complex number, but when m is real and m<0, the curve is an ellipse with major axis in the x direction. At m=0 the curve is a circle, and for 0<m<1, the curve is an ellipse with major axis in the y direction. Atm= 1, the curve degenerates into two vertical lines atx= ±1. Form> 1, the curve is a hyperbola. Whenmis complex but not real,xoryor both are complex and the curve cannot be described on a realx-ydiagram. Reversing the order of the two letters of the function name results in the reciprocals of the three functions above: Similarly, the ratios of the three primary functions correspond to the first letter of the numerator followed by the first letter of the denominator: More compactly, we have where p and q are any of the letters s, c, d. In the complex plane of the argumentu, the Jacobi elliptic functions form a repeating pattern of poles (and zeroes). The residues of the poles all have the same absolute value, differing only in sign. Each function pq(u,m) has an "inverse function" (in the multiplicative sense) qp(u,m) in which the positions of the poles and zeroes are exchanged. The periods of repetition are generally different in the real and imaginary directions, hence the use of the term "doubly periodic" to describe them. For the Jacobi amplitude and the Jacobi epsilon function: whereE(m){\displaystyle E(m)}is thecomplete elliptic integral of the second kindwith parameterm{\displaystyle m}. The double periodicity of the Jacobi elliptic functions may be expressed as: whereαandβare any pair of integers.K(⋅) is the complete elliptic integral of the first kind, also known as thequarter period. 
The power of negative unity (γ) is given in the following table: When the factor (−1)γis equal to −1, the equation expresses quasi-periodicity. When it is equal to unity, it expresses full periodicity. It can be seen, for example, that for the entries containing only α when α is even, full periodicity is expressed by the above equation, and the function has full periods of 4K(m) and 2iK(1 −m). Likewise, functions with entries containing onlyβhave full periods of 2K(m) and 4iK(1 −m), while those with α + β have full periods of 4K(m) and 4iK(1 −m). In the diagram on the right, which plots one repeating unit for each function, indicating phase along with the location of poles and zeroes, a number of regularities can be noted: The inverse of each function is opposite the diagonal, and has the same size unit cell, with poles and zeroes exchanged. The pole and zero arrangement in the auxiliary rectangle formed by (0,0), (K,0), (0,K′) and (K,K′) are in accordance with the description of the pole and zero placement described in the introduction above. Also, the size of the white ovals indicating poles are a rough measure of the absolute value of the residue for that pole. The residues of the poles closest to the origin in the figure (i.e. in the auxiliary rectangle) are listed in the following table: When applicable, poles displaced above by 2Kor displaced to the right by 2K′ have the same value but with signs reversed, while those diagonally opposite have the same value. Note that poles and zeroes on the left and lower edges are considered part of the unit cell, while those on the upper and right edges are not. 
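These (quasi-)periods are easy to confirm on the real axis. In the sketch below (assuming SciPy), sn and cn flip sign under a shift by 2K while dn is fully periodic with period 2K, and sn recovers its value after a full period 4K:

```python
from scipy.special import ellipj, ellipk

m = 0.5
K = ellipk(m)
u = 0.4

sn1, cn1, dn1, _ = ellipj(u, m)
sn2, cn2, dn2, _ = ellipj(u + 2 * K, m)   # quasi-period: sn, cn change sign
sn4, _, _, _ = ellipj(u + 4 * K, m)       # full period of sn

print(sn2 + sn1, cn2 + cn1, dn2 - dn1, sn4 - sn1)
```

The imaginary periods 2iK′ and 4iK′ cannot be probed this way, since SciPy's `ellipj` accepts only real arguments.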
The information about poles can in fact be used tocharacterizethe Jacobi elliptic functions:[14] The functionu↦sn⁡(u,m){\displaystyle u\mapsto \operatorname {sn} (u,m)}is the unique elliptic function having simple poles at2rK+(2s+1)iK′{\displaystyle 2rK+(2s+1)iK'}(withr,s∈Z{\displaystyle r,s\in \mathbb {Z} }) with residues(−1)r/m{\displaystyle (-1)^{r}/{\sqrt {m}}}taking the value0{\displaystyle 0}at0{\displaystyle 0}. The functionu↦cn⁡(u,m){\displaystyle u\mapsto \operatorname {cn} (u,m)}is the unique elliptic function having simple poles at2rK+(2s+1)iK′{\displaystyle 2rK+(2s+1)iK'}(withr,s∈Z{\displaystyle r,s\in \mathbb {Z} }) with residues(−1)r+s−1i/m{\displaystyle (-1)^{r+s-1}i/{\sqrt {m}}}taking the value1{\displaystyle 1}at0{\displaystyle 0}. The functionu↦dn⁡(u,m){\displaystyle u\mapsto \operatorname {dn} (u,m)}is the unique elliptic function having simple poles at2rK+(2s+1)iK′{\displaystyle 2rK+(2s+1)iK'}(withr,s∈Z{\displaystyle r,s\in \mathbb {Z} }) with residues(−1)s−1i{\displaystyle (-1)^{s-1}i}taking the value1{\displaystyle 1}at0{\displaystyle 0}. Settingm=−1{\displaystyle m=-1}gives thelemniscate elliptic functionssl{\displaystyle \operatorname {sl} }andcl{\displaystyle \operatorname {cl} }: Whenm=0{\displaystyle m=0}orm=1{\displaystyle m=1}, the Jacobi elliptic functions are reduced to non-elliptic functions: For the Jacobi amplitude,am⁡(u,0)=u{\displaystyle \operatorname {am} (u,0)=u}andam⁡(u,1)=gd⁡u{\displaystyle \operatorname {am} (u,1)=\operatorname {gd} u}wheregd{\displaystyle \operatorname {gd} }is theGudermannian function. In general if neither of p,q is d thenpq⁡(u,1)=pq⁡(gd⁡(u),0){\displaystyle \operatorname {pq} (u,1)=\operatorname {pq} (\operatorname {gd} (u),0)}. 
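The degenerate cases can be verified numerically (a sketch assuming SciPy): at m = 0 the functions reduce to sin, cos and 1, and at m = 1 to tanh and sech, with sin(am(u, 1)) = sin(gd u) = tanh u:

```python
import math
from scipy.special import ellipj

u = 0.9

sn0, cn0, dn0, _ = ellipj(u, 0.0)   # m = 0: circular functions
sn1, cn1, dn1, _ = ellipj(u, 1.0)   # m = 1: hyperbolic functions

gd = math.asin(math.tanh(u))        # Gudermannian gd(u), so sin(gd(u)) = tanh(u)

errs = (sn0 - math.sin(u), cn0 - math.cos(u), dn0 - 1.0,
        sn1 - math.tanh(u), cn1 - 1 / math.cosh(u), sn1 - math.sin(gd))
print(errs)
```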
sn⁡(u2,m)=±1−cn⁡(u,m)1+dn⁡(u,m){\displaystyle \operatorname {sn} \left({\frac {u}{2}},m\right)=\pm {\sqrt {\frac {1-\operatorname {cn} (u,m)}{1+\operatorname {dn} (u,m)}}}}cn⁡(u2,m)=±cn⁡(u,m)+dn⁡(u,m)1+dn⁡(u,m){\displaystyle \operatorname {cn} \left({\frac {u}{2}},m\right)=\pm {\sqrt {\frac {\operatorname {cn} (u,m)+\operatorname {dn} (u,m)}{1+\operatorname {dn} (u,m)}}}}dn⁡(u2,m)=±m′+dn⁡(u,m)+mcn⁡(u,m)1+dn⁡(u,m){\displaystyle \operatorname {dn} \left({\frac {u}{2}},m\right)=\pm {\sqrt {\frac {m'+\operatorname {dn} (u,m)+m\operatorname {cn} (u,m)}{1+\operatorname {dn} (u,m)}}}} Half K formula sn⁡[12K(k);k]=21+k+1−k{\displaystyle \operatorname {sn} \left[{\tfrac {1}{2}}K(k);k\right]={\frac {\sqrt {2}}{{\sqrt {1+k}}+{\sqrt {1-k}}}}} cn⁡[12K(k);k]=21−k241+k+1−k{\displaystyle \operatorname {cn} \left[{\tfrac {1}{2}}K(k);k\right]={\frac {{\sqrt {2}}\,{\sqrt[{4}]{1-k^{2}}}}{{\sqrt {1+k}}+{\sqrt {1-k}}}}} dn⁡[12K(k);k]=1−k24{\displaystyle \operatorname {dn} \left[{\tfrac {1}{2}}K(k);k\right]={\sqrt[{4}]{1-k^{2}}}} Third K formula To get x3, we take the tangent of twice the arctangent of the modulus. Also this equation leads to the sn-value of the third of K: These equations lead to the other values of the Jacobi functions: Fifth K formula The following equation has the following solution: To get the sn-values, we put the solution x into the following expressions: Relations between squares of the functions can be derived from two basic relationships (Arguments (u,m) suppressed):cn2+sn2=1{\displaystyle \operatorname {cn} ^{2}+\operatorname {sn} ^{2}=1}cn2+m′sn2=dn2{\displaystyle \operatorname {cn} ^{2}+m'\operatorname {sn} ^{2}=\operatorname {dn} ^{2}}where m + m' = 1.
Multiplying by any function of the formnqyields more general equations: cq2+sq2=nq2{\displaystyle \operatorname {cq} ^{2}+\operatorname {sq} ^{2}=\operatorname {nq} ^{2}}cq2⁡+m′sq2=dq2{\displaystyle \operatorname {cq} ^{2}{}+m'\operatorname {sq} ^{2}=\operatorname {dq} ^{2}} Withq=d, these correspond trigonometrically to the equations for the unit circle (x2+y2=r2{\displaystyle x^{2}+y^{2}=r^{2}}) and the unit ellipse (x2+m′y2=1{\displaystyle x^{2}{}+m'y^{2}=1}), withx=cd,y=sdandr=nd. Using the multiplication rule, other relationships may be derived. For example: −dn2⁡+m′=−mcn2=msn2−m{\displaystyle -\operatorname {dn} ^{2}{}+m'=-m\operatorname {cn} ^{2}=m\operatorname {sn} ^{2}-m} −m′nd2⁡+m′=−mm′sd2=mcd2−m{\displaystyle -m'\operatorname {nd} ^{2}{}+m'=-mm'\operatorname {sd} ^{2}=m\operatorname {cd} ^{2}-m} m′sc2⁡+m′=m′nc2=dc2−m{\displaystyle m'\operatorname {sc} ^{2}{}+m'=m'\operatorname {nc} ^{2}=\operatorname {dc} ^{2}-m} cs2⁡+m′=ds2=ns2−m{\displaystyle \operatorname {cs} ^{2}{}+m'=\operatorname {ds} ^{2}=\operatorname {ns} ^{2}-m} The functions satisfy the two square relations (dependence onmsuppressed)cn2⁡(u)+sn2⁡(u)=1,{\displaystyle \operatorname {cn} ^{2}(u)+\operatorname {sn} ^{2}(u)=1,\,} dn2⁡(u)+msn2⁡(u)=1.{\displaystyle \operatorname {dn} ^{2}(u)+m\operatorname {sn} ^{2}(u)=1.\,} From this we see that (cn, sn, dn) parametrizes anelliptic curvewhich is the intersection of the twoquadricsdefined by the above two equations. 
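The square relations, the unit-ellipse picture with x = cd and y = sd, and the half-K value above can all be spot-checked in a few lines (a sketch assuming SciPy, with m = k²):

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.8
m = k ** 2
mp = 1 - m                      # the complement m'
u = 1.3
sn, cn, dn, _ = ellipj(u, m)

r1 = cn**2 + sn**2 - 1          # cn^2 + sn^2 = 1
r2 = cn**2 + mp * sn**2 - dn**2 # cn^2 + m' sn^2 = dn^2
x, y = cn / dn, sn / dn         # x = cd, y = sd
r3 = x**2 + mp * y**2 - 1       # point (x, y) lies on the unit ellipse x^2 + m' y^2 = 1

# half-K value: sn(K/2; k) = sqrt(2) / (sqrt(1+k) + sqrt(1-k))
K = ellipk(m)
snK2 = ellipj(K / 2, m)[0]
r4 = snK2 - np.sqrt(2) / (np.sqrt(1 + k) + np.sqrt(1 - k))
print(r1, r2, r3, r4)
```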
We now may define a group law for points on this curve by the addition formulas for the Jacobi functions[3] cn⁡(x+y)=cn⁡(x)cn⁡(y)−sn⁡(x)sn⁡(y)dn⁡(x)dn⁡(y)1−msn2⁡(x)sn2⁡(y),sn⁡(x+y)=sn⁡(x)cn⁡(y)dn⁡(y)+sn⁡(y)cn⁡(x)dn⁡(x)1−msn2⁡(x)sn2⁡(y),dn⁡(x+y)=dn⁡(x)dn⁡(y)−msn⁡(x)sn⁡(y)cn⁡(x)cn⁡(y)1−msn2⁡(x)sn2⁡(y).{\displaystyle {\begin{aligned}\operatorname {cn} (x+y)&={\operatorname {cn} (x)\operatorname {cn} (y)-\operatorname {sn} (x)\operatorname {sn} (y)\operatorname {dn} (x)\operatorname {dn} (y) \over {1-m\operatorname {sn} ^{2}(x)\operatorname {sn} ^{2}(y)}},\\[8pt]\operatorname {sn} (x+y)&={\operatorname {sn} (x)\operatorname {cn} (y)\operatorname {dn} (y)+\operatorname {sn} (y)\operatorname {cn} (x)\operatorname {dn} (x) \over {1-m\operatorname {sn} ^{2}(x)\operatorname {sn} ^{2}(y)}},\\[8pt]\operatorname {dn} (x+y)&={\operatorname {dn} (x)\operatorname {dn} (y)-m\operatorname {sn} (x)\operatorname {sn} (y)\operatorname {cn} (x)\operatorname {cn} (y) \over {1-m\operatorname {sn} ^{2}(x)\operatorname {sn} ^{2}(y)}}.\end{aligned}}} The Jacobi epsilon and zn functions satisfy a quasi-addition theorem:E(x+y,m)=E(x,m)+E(y,m)−msn⁡(x,m)sn⁡(y,m)sn⁡(x+y,m),zn⁡(x+y,m)=zn⁡(x,m)+zn⁡(y,m)−msn⁡(x,m)sn⁡(y,m)sn⁡(x+y,m).{\displaystyle {\begin{aligned}{\mathcal {E}}(x+y,m)&={\mathcal {E}}(x,m)+{\mathcal {E}}(y,m)-m\operatorname {sn} (x,m)\operatorname {sn} (y,m)\operatorname {sn} (x+y,m),\\\operatorname {zn} (x+y,m)&=\operatorname {zn} (x,m)+\operatorname {zn} (y,m)-m\operatorname {sn} (x,m)\operatorname {sn} (y,m)\operatorname {sn} (x+y,m).\end{aligned}}} Double angle formulae can be easily derived from the above equations by settingx=y.[3]Half angle formulae[11][3]are all of the form: pq⁡(12u,m)2=fp/fq{\displaystyle \operatorname {pq} ({\tfrac {1}{2}}u,m)^{2}=f_{\mathrm {p} }/f_{\mathrm {q} }} where:fc=cn⁡(u,m)+dn⁡(u,m){\displaystyle f_{\mathrm {c} }=\operatorname {cn} (u,m)+\operatorname {dn} (u,m)}fs=1−cn⁡(u,m){\displaystyle f_{\mathrm {s} }=1-\operatorname {cn} 
(u,m)}fn=1+dn⁡(u,m){\displaystyle f_{\mathrm {n} }=1+\operatorname {dn} (u,m)}fd=(1+dn⁡(u,m))−m(1−cn⁡(u,m)){\displaystyle f_{\mathrm {d} }=(1+\operatorname {dn} (u,m))-m(1-\operatorname {cn} (u,m))} Thederivativesof the three basic Jacobi elliptic functions (with respect to the first variable, withm{\displaystyle m}fixed) are:ddzsn⁡(z)=cn⁡(z)dn⁡(z),{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {sn} (z)=\operatorname {cn} (z)\operatorname {dn} (z),}ddzcn⁡(z)=−sn⁡(z)dn⁡(z),{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {cn} (z)=-\operatorname {sn} (z)\operatorname {dn} (z),}ddzdn⁡(z)=−msn⁡(z)cn⁡(z).{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {dn} (z)=-m\operatorname {sn} (z)\operatorname {cn} (z).} These can be used to derive the derivatives of all other functions as shown in the table below (arguments (u,m) suppressed): Also With theaddition theorems aboveand for a givenmwith 0 <m< 1 the major functions are therefore solutions to the following nonlinearordinary differential equations: The function which exactly solves thependulum differential equation, with initial angleθ0{\displaystyle \theta _{0}}and zero initial angular velocity is wherem=sin⁡(θ0/2)2{\displaystyle m=\sin(\theta _{0}/2)^{2}},c>0{\displaystyle c>0}andt∈R{\displaystyle t\in \mathbb {R} }. With the first argumentz{\displaystyle z}fixed, the derivatives with respect to the second variablem{\displaystyle m}are as follows: Let thenomebeq=exp⁡(−πK′(m)/K(m))=eiπτ{\displaystyle q=\exp(-\pi K'(m)/K(m))=e^{i\pi \tau }},Im⁡(τ)>0{\displaystyle \operatorname {Im} (\tau )>0},m=k2{\displaystyle m=k^{2}}and letv=πu/(2K(m)){\displaystyle v=\pi u/(2K(m))}. 
Then the functions have expansions as Lambert series when |Im⁡(u/K)|<Im⁡(iK′/K).{\displaystyle \left|\operatorname {Im} (u/K)\right|<\operatorname {Im} (iK'/K).} Bivariate power series expansions have been published by Schett.[15] The theta function ratios provide an efficient way of computing the Jacobi elliptic functions. There is an alternative method, based on the arithmetic-geometric mean and Landen's transformations:[6] Initialize where 0<m<1{\displaystyle 0<m<1}. Define where n≥1{\displaystyle n\geq 1}. Then define for u∈R{\displaystyle u\in \mathbb {R} } and a fixed N∈N{\displaystyle N\in \mathbb {N} }. If for n≥1{\displaystyle n\geq 1}, then as N→∞{\displaystyle N\to \infty }. This is notable for its rapid convergence. It is then trivial to compute all Jacobi elliptic functions from the Jacobi amplitude am{\displaystyle \operatorname {am} } on the real line.[note 2] In conjunction with the addition theorems for elliptic functions (which hold for complex numbers in general) and the Jacobi transformations, the method of computation described above can be used to compute all Jacobi elliptic functions in the whole complex plane. Another method of fast computation of the Jacobi elliptic functions via the arithmetic–geometric mean, avoiding the computation of the Jacobi amplitude, is due to Herbert E. Salzer:[16] Let Set Then as N→∞{\displaystyle N\to \infty }. Yet another rapidly converging method for computing the Jacobi elliptic sine function, found in the literature, is shown below.[17] Let: Then set: Then: The Jacobi elliptic functions can be expanded in terms of the hyperbolic functions.
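The arithmetic–geometric mean computation of the amplitude can be sketched in a few lines. This follows the classical descending Landen recursion (the concrete sequences aₙ, bₙ, cₙ and the choice N = 12 are implementation details here, not taken from the text): build the AGM sequences, set φ_N = 2^N a_N u, then recur φ_{n−1} = (φ_n + arcsin((c_n/a_n) sin φ_n))/2 down to φ₀ = am(u, m). SciPy is used only for the cross-check:

```python
import math
from scipy.special import ellipj

def am_agm(u, m, N=12):
    """Jacobi amplitude via the AGM / descending Landen transformation.
    A sketch for real u and 0 < m < 1."""
    a, b, c = [1.0], [math.sqrt(1 - m)], [math.sqrt(m)]
    for n in range(N):                     # AGM sequences a_n, b_n, c_n
        a.append((a[n] + b[n]) / 2)
        b.append(math.sqrt(a[n] * b[n]))
        c.append((a[n] - b[n]) / 2)
    phi = 2 ** N * a[N] * u                # phi_N
    for n in range(N, 0, -1):              # descend: phi_{n-1} from phi_n
        phi = (phi + math.asin(c[n] / a[n] * math.sin(phi))) / 2
    return phi

u, m = 0.7, 0.6
approx = am_agm(u, m)
exact = ellipj(u, m)[3]   # SciPy's amplitude, for comparison
print(approx, exact)
```

Since cₙ shrinks quadratically, a dozen iterations already exhaust double precision, which is the "notable rapid convergence" mentioned above.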
Whenm{\displaystyle m}is close to unity, such thatm′2{\displaystyle m'^{2}}and higher powers ofm′{\displaystyle m'}can be neglected, we have:[18][19] For the Jacobi amplitude,am⁡(u,m)≈gd⁡(u)+14m′(sinh⁡(u)cosh⁡(u)−u)sech⁡(u).{\displaystyle \operatorname {am} (u,m)\approx \operatorname {gd} (u)+{\frac {1}{4}}m'(\sinh(u)\cosh(u)-u)\operatorname {sech} (u).} Assuming real numbersa,p{\displaystyle a,p}with0<a<p{\displaystyle 0<a<p}and thenomeq=eπiτ{\displaystyle q=e^{\pi i\tau }},Im⁡(τ)>0{\displaystyle \operatorname {Im} (\tau )>0}withelliptic modulusk(τ)=1−k′(τ)2=(ϑ10(0;τ)/ϑ00(0;τ))2{\textstyle k(\tau )={\sqrt {1-k'(\tau )^{2}}}=(\vartheta _{10}(0;\tau )/\vartheta _{00}(0;\tau ))^{2}}. IfK[τ]=K(k(τ)){\displaystyle K[\tau ]=K(k(\tau ))}, whereK(x)=π/2⋅2F1(1/2,1/2;1;x2){\displaystyle K(x)=\pi /2\cdot {}_{2}F_{1}(1/2,1/2;1;x^{2})}is thecomplete elliptic integral of the first kind, then holds the followingcontinued fraction expansion[20] Known continued fractions involvingsn(t),cn(t){\displaystyle {\textrm {sn}}(t),{\textrm {cn}}(t)}anddn(t){\displaystyle {\textrm {dn}}(t)}with elliptic modulusk{\displaystyle k}are Forz∈C{\displaystyle z\in \mathbb {C} },|k|<1{\displaystyle |k|<1}:[21]pg. 374 Forz∈C∖{0}{\displaystyle z\in \mathbb {C} \setminus \{0\}},|k|<1{\displaystyle |k|<1}:[21]pg. 375 Forz∈C∖{0}{\displaystyle z\in \mathbb {C} \setminus \{0\}},|k|<1{\displaystyle |k|<1}:[22]pg. 220 Forz∈C∖{0}{\displaystyle z\in \mathbb {C} \setminus \{0\}},|k|<1{\displaystyle |k|<1}:[21]pg. 374 Forz∈C{\displaystyle z\in \mathbb {C} },|k|<1{\displaystyle |k|<1}:[21]pg. 375 The inverses of the Jacobi elliptic functions can be defined similarly to theinverse trigonometric functions; ifx=sn⁡(ξ,m){\displaystyle x=\operatorname {sn} (\xi ,m)},ξ=arcsn⁡(x,m){\displaystyle \xi =\operatorname {arcsn} (x,m)}. 
They can be represented as elliptic integrals,[23][24][25]and power series representations have been found.[26][3] ThePeirce quincuncial projectionis amap projectionbased on Jacobian elliptic functions.
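On the principal real branch the inverse functions reduce to the incomplete integral itself: arcsn(x, m) = F(arcsin x, m), and likewise arccn(x, m) = F(arccos x, m). A short sketch assuming SciPy:

```python
import math
from scipy.special import ellipj, ellipkinc

def arcsn(x, m):
    # inverse of sn on [0, K]: arcsn(x, m) = F(arcsin x, m)
    return ellipkinc(math.asin(x), m)

def arccn(x, m):
    # inverse of cn on [0, K]: arccn(x, m) = F(arccos x, m)
    return ellipkinc(math.acos(x), m)

m, x = 0.3, 0.6
r1 = ellipj(arcsn(x, m), m)[0] - x   # sn(arcsn(x, m), m) = x
r2 = ellipj(arccn(x, m), m)[1] - x   # cn(arccn(x, m), m) = x
print(r1, r2)
```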
https://en.wikipedia.org/wiki/Jacobi_elliptic_functions
Inmathematics, theWeierstrass elliptic functionsareelliptic functionsthat take a particularly simple form. They are named forKarl Weierstrass. This class of functions is also referred to as℘-functionsand they are usually denoted by the symbol ℘, a uniquely fancyscriptp. They play an important role in the theory of elliptic functions, i.e.,meromorphic functionsthat aredoubly periodic. A ℘-function together with its derivative can be used to parameterizeelliptic curvesand they generate the field of elliptic functions with respect to a given period lattice. Symbol for Weierstrass℘{\displaystyle \wp }-function Acubicof the formCg2,g3C={(x,y)∈C2:y2=4x3−g2x−g3}{\displaystyle C_{g_{2},g_{3}}^{\mathbb {C} }=\{(x,y)\in \mathbb {C} ^{2}:y^{2}=4x^{3}-g_{2}x-g_{3}\}}, whereg2,g3∈C{\displaystyle g_{2},g_{3}\in \mathbb {C} }are complex numbers withg23−27g32≠0{\displaystyle g_{2}^{3}-27g_{3}^{2}\neq 0}, cannot berationally parameterized.[1]Yet one still wants to find a way to parameterize it. For thequadricK={(x,y)∈R2:x2+y2=1}{\displaystyle K=\left\{(x,y)\in \mathbb {R} ^{2}:x^{2}+y^{2}=1\right\}}; theunit circle, there exists a (non-rational) parameterization using the sine function and its derivative the cosine function:ψ:R/2πZ→K,t↦(sin⁡t,cos⁡t).{\displaystyle \psi :\mathbb {R} /2\pi \mathbb {Z} \to K,\quad t\mapsto (\sin t,\cos t).}Because of the periodicity of the sine and cosineR/2πZ{\displaystyle \mathbb {R} /2\pi \mathbb {Z} }is chosen to be the domain, so the function is bijective. In a similar way one can get a parameterization ofCg2,g3C{\displaystyle C_{g_{2},g_{3}}^{\mathbb {C} }}by means of the doubly periodic℘{\displaystyle \wp }-function (see in the section "Relation to elliptic curves"). This parameterization has the domainC/Λ{\displaystyle \mathbb {C} /\Lambda }, which is topologically equivalent to atorus.[2] There is another analogy to the trigonometric functions. 
Consider the integral function {\displaystyle a(x)=\int _{0}^{x}{\frac {dy}{\sqrt {1-y^{2}}}}.} It can be simplified by substituting {\displaystyle y=\sin t} and {\displaystyle s=\arcsin x}: {\displaystyle a(x)=\int _{0}^{s}dt=s=\arcsin x.} That means {\displaystyle a^{-1}(x)=\sin x}. So the sine function is the inverse function of an integral function.[3]

Elliptic functions are the inverse functions of elliptic integrals. In particular, let: {\displaystyle u(z)=\int _{z}^{\infty }{\frac {ds}{\sqrt {4s^{3}-g_{2}s-g_{3}}}}.} Then the extension of {\displaystyle u^{-1}} to the complex plane equals the {\displaystyle \wp }-function.[4] This invertibility is used in complex analysis to provide solutions of certain nonlinear differential equations satisfying the Painlevé property, i.e., those equations that admit poles as their only movable singularities.[5]

Let {\displaystyle \omega _{1},\omega _{2}\in \mathbb {C} } be two complex numbers that are linearly independent over {\displaystyle \mathbb {R} } and let {\displaystyle \Lambda :=\mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}:=\{m\omega _{1}+n\omega _{2}:m,n\in \mathbb {Z} \}} be the period lattice generated by those numbers. Then the {\displaystyle \wp }-function is defined as follows: {\displaystyle \wp (z):={\frac {1}{z^{2}}}+\sum _{0\neq \lambda \in \Lambda }\left({\frac {1}{(z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right).} This series converges locally uniformly absolutely in the complex torus {\displaystyle \mathbb {C} /\Lambda }.

It is common to use {\displaystyle 1} and {\displaystyle \tau } in the upper half-plane {\displaystyle \mathbb {H} :=\{z\in \mathbb {C} :\operatorname {Im} (z)>0\}} as generators of the lattice. Dividing by {\textstyle \omega _{1}} maps the lattice {\displaystyle \mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}} isomorphically onto the lattice {\displaystyle \mathbb {Z} +\mathbb {Z} \tau } with {\textstyle \tau ={\tfrac {\omega _{2}}{\omega _{1}}}}.
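The arcsine computation above is easy to check numerically. The following is a small sketch (plain midpoint quadrature, standard library only), not part of the article:

```python
import math

def a(x, n=100_000):
    # integrate 1/sqrt(1 - y^2) from 0 to x by the midpoint rule,
    # which conveniently avoids evaluating the integrand at the endpoints
    h = x / n
    return sum(h / math.sqrt(1.0 - ((k + 0.5) * h) ** 2) for k in range(n))

x = 0.5
print(a(x), math.asin(x))               # both ≈ 0.5235987... = π/6
assert abs(a(x) - math.asin(x)) < 1e-6  # a(x) = arcsin x
assert abs(math.sin(a(x)) - x) < 1e-6   # sin inverts the integral function
```

The two assertions are exactly the statements in the text: the integral equals arcsin x, and applying sin recovers x.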
Because {\displaystyle -\tau } can be substituted for {\displaystyle \tau }, without loss of generality we can assume {\displaystyle \tau \in \mathbb {H} }, and then define {\displaystyle \wp (z,\tau ):=\wp (z,1,\tau )}. With that definition, we have {\displaystyle \wp (z,\omega _{1},\omega _{2})=\omega _{1}^{-2}\wp (z/\omega _{1},\omega _{2}/\omega _{1})}.

Let {\displaystyle r:=\min\{|\lambda |:0\neq \lambda \in \Lambda \}}. Then for {\displaystyle 0<|z|<r} the {\displaystyle \wp }-function has the Laurent expansion {\displaystyle \wp (z)={\frac {1}{z^{2}}}+\sum _{n=1}^{\infty }(2n+1)G_{2n+2}z^{2n}} where {\displaystyle G_{n}=\sum _{0\neq \lambda \in \Lambda }\lambda ^{-n}} for {\displaystyle n\geq 3} are the so-called Eisenstein series.[6]

Set {\displaystyle g_{2}=60G_{4}} and {\displaystyle g_{3}=140G_{6}}. Then the {\displaystyle \wp }-function satisfies the differential equation[6] {\displaystyle \wp '^{2}(z)=4\wp ^{3}(z)-g_{2}\wp (z)-g_{3}.} This relation can be verified by forming a linear combination of powers of {\displaystyle \wp } and {\displaystyle \wp '} to eliminate the pole at {\displaystyle z=0}. This yields an entire elliptic function, which has to be constant by Liouville's theorem.[6]

The coefficients of the above differential equation, {\displaystyle g_{2}} and {\displaystyle g_{3}}, are known as the invariants. Because they depend on the lattice {\displaystyle \Lambda }, they can be viewed as functions in {\displaystyle \omega _{1}} and {\displaystyle \omega _{2}}. The series expansion suggests that {\displaystyle g_{2}} and {\displaystyle g_{3}} are homogeneous functions of degree {\displaystyle -4} and {\displaystyle -6}.
That is,[7] {\displaystyle g_{2}(\lambda \omega _{1},\lambda \omega _{2})=\lambda ^{-4}g_{2}(\omega _{1},\omega _{2})} and {\displaystyle g_{3}(\lambda \omega _{1},\lambda \omega _{2})=\lambda ^{-6}g_{3}(\omega _{1},\omega _{2})} for {\displaystyle \lambda \neq 0}.

If {\displaystyle \omega _{1}} and {\displaystyle \omega _{2}} are chosen in such a way that {\displaystyle \operatorname {Im} \left({\tfrac {\omega _{2}}{\omega _{1}}}\right)>0}, {\displaystyle g_{2}} and {\displaystyle g_{3}} can be interpreted as functions on the upper half-plane {\displaystyle \mathbb {H} :=\{z\in \mathbb {C} :\operatorname {Im} (z)>0\}}. Let {\displaystyle \tau ={\tfrac {\omega _{2}}{\omega _{1}}}}. One has:[8] {\displaystyle g_{2}(1,\tau )=\omega _{1}^{4}g_{2}(\omega _{1},\omega _{2}),} {\displaystyle g_{3}(1,\tau )=\omega _{1}^{6}g_{3}(\omega _{1},\omega _{2}).} That means g2 and g3 are only scaled by doing this. Set {\displaystyle g_{2}(\tau ):=g_{2}(1,\tau )} and {\displaystyle g_{3}(\tau ):=g_{3}(1,\tau ).}

As functions of {\displaystyle \tau \in \mathbb {H} }, {\displaystyle g_{2}} and {\displaystyle g_{3}} are so-called modular forms. The Fourier series for {\displaystyle g_{2}} and {\displaystyle g_{3}} are given as follows:[9] {\displaystyle g_{2}(\tau )={\frac {4}{3}}\pi ^{4}\left[1+240\sum _{k=1}^{\infty }\sigma _{3}(k)q^{2k}\right]} {\displaystyle g_{3}(\tau )={\frac {8}{27}}\pi ^{6}\left[1-504\sum _{k=1}^{\infty }\sigma _{5}(k)q^{2k}\right]} where {\displaystyle \sigma _{m}(k):=\sum _{d\mid {k}}d^{m}} is the divisor function and {\displaystyle q=e^{\pi i\tau }} is the nome.
The modular discriminant {\displaystyle \Delta } is defined as the discriminant of the characteristic polynomial of the differential equation {\displaystyle \wp '^{2}(z)=4\wp ^{3}(z)-g_{2}\wp (z)-g_{3}} as follows: {\displaystyle \Delta =g_{2}^{3}-27g_{3}^{2}.} The discriminant is a modular form of weight {\displaystyle 12}. That is, under the action of the modular group, it transforms as {\displaystyle \Delta \left({\frac {a\tau +b}{c\tau +d}}\right)=\left(c\tau +d\right)^{12}\Delta (\tau )} where {\displaystyle a,b,c,d\in \mathbb {Z} } with {\displaystyle ad-bc=1}.[10]

Note that {\displaystyle \Delta =(2\pi )^{12}\eta ^{24}} where {\displaystyle \eta } is the Dedekind eta function.[11] For the Fourier coefficients of {\displaystyle \Delta }, see the Ramanujan tau function.

{\displaystyle e_{1}}, {\displaystyle e_{2}} and {\displaystyle e_{3}} are usually used to denote the values of the {\displaystyle \wp }-function at the half-periods: {\displaystyle e_{1}\equiv \wp \left({\frac {\omega _{1}}{2}}\right)}, {\displaystyle e_{2}\equiv \wp \left({\frac {\omega _{2}}{2}}\right)}, {\displaystyle e_{3}\equiv \wp \left({\frac {\omega _{1}+\omega _{2}}{2}}\right)}. They are pairwise distinct and depend only on the lattice {\displaystyle \Lambda }, not on its generators.[12]

{\displaystyle e_{1}}, {\displaystyle e_{2}} and {\displaystyle e_{3}} are the roots of the cubic polynomial {\displaystyle 4\wp (z)^{3}-g_{2}\wp (z)-g_{3}} and are related by the equation {\displaystyle e_{1}+e_{2}+e_{3}=0.} Because those roots are distinct, the discriminant {\displaystyle \Delta } does not vanish on the upper half-plane.[13] Now we can rewrite the differential equation: {\displaystyle \wp '^{2}(z)=4(\wp (z)-e_{1})(\wp (z)-e_{2})(\wp (z)-e_{3}).} That means the half-periods are zeros of {\displaystyle \wp '}.
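The two expressions for the discriminant — Δ = g₂³ − 27g₃² with the invariants taken from their Fourier series, and Δ = (2π)¹² η²⁴ — can be compared numerically. A small self-contained sketch; the test point τ = 0.1 + i is an arbitrary choice, and η is computed from its standard q-product η(τ) = q^{1/12} Π (1 − q^{2n}) with the nome q = e^{πiτ}:

```python
import cmath, math

def g2_g3(tau, terms=40):
    # Fourier expansions of the invariants, nome q = e^{πiτ}
    q = cmath.exp(1j * math.pi * tau)
    sigma = lambda m, k: sum(d**m for d in range(1, k + 1) if k % d == 0)
    s3 = sum(sigma(3, k) * q**(2*k) for k in range(1, terms))
    s5 = sum(sigma(5, k) * q**(2*k) for k in range(1, terms))
    return (4/3) * math.pi**4 * (1 + 240*s3), (8/27) * math.pi**6 * (1 - 504*s5)

def delta_via_eta(tau, terms=40):
    # Δ = (2π)^12 η^24 with the Dedekind eta q-product
    q = cmath.exp(1j * math.pi * tau)
    eta = q**(1/12)
    for n in range(1, terms):
        eta *= 1 - q**(2*n)
    return (2*math.pi)**12 * eta**24

tau = 0.1 + 1.0j
g2, g3 = g2_g3(tau)
delta = g2**3 - 27*g3**2
assert abs(delta - delta_via_eta(tau)) / abs(delta) < 1e-10
```

Since |q| < 1 the series converge geometrically, so a few dozen terms already give close to machine precision.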
The invariants {\displaystyle g_{2}} and {\displaystyle g_{3}} can be expressed in terms of these constants in the following way:[14] {\displaystyle g_{2}=-4(e_{1}e_{2}+e_{1}e_{3}+e_{2}e_{3})} and {\displaystyle g_{3}=4e_{1}e_{2}e_{3}.} {\displaystyle e_{1}}, {\displaystyle e_{2}} and {\displaystyle e_{3}} are related to the modular lambda function: {\displaystyle \lambda (\tau )={\frac {e_{3}-e_{2}}{e_{1}-e_{2}}},\quad \tau ={\frac {\omega _{2}}{\omega _{1}}}.}

For numerical work, it is often convenient to calculate the Weierstrass elliptic function in terms of Jacobi's elliptic functions. The basic relations are:[15] {\displaystyle \wp (z)=e_{3}+{\frac {e_{1}-e_{3}}{\operatorname {sn} ^{2}w}}=e_{2}+(e_{1}-e_{3}){\frac {\operatorname {dn} ^{2}w}{\operatorname {sn} ^{2}w}}=e_{1}+(e_{1}-e_{3}){\frac {\operatorname {cn} ^{2}w}{\operatorname {sn} ^{2}w}}} where {\displaystyle e_{1},e_{2}} and {\displaystyle e_{3}} are the three roots described above, the modulus k of the Jacobi functions equals {\displaystyle k={\sqrt {\frac {e_{2}-e_{3}}{e_{1}-e_{3}}}}} and their argument w equals {\displaystyle w=z{\sqrt {e_{1}-e_{3}}}.}

The function {\displaystyle \wp (z,\tau )=\wp (z,1,\omega _{2}/\omega _{1})} can be represented by Jacobi's theta functions: {\displaystyle \wp (z,\tau )=\left(\pi \theta _{2}(0,q)\theta _{3}(0,q){\frac {\theta _{4}(\pi z,q)}{\theta _{1}(\pi z,q)}}\right)^{2}-{\frac {\pi ^{2}}{3}}\left(\theta _{2}^{4}(0,q)+\theta _{3}^{4}(0,q)\right)} where {\displaystyle q=e^{\pi i\tau }} is the nome and {\displaystyle \tau } is the period ratio {\displaystyle (\tau \in \mathbb {H} )}.[16] This also provides a very rapid algorithm for computing {\displaystyle \wp (z,\tau )}.
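The theta representation can be cross-checked against the differential equation ℘′² = 4℘³ − g₂℘ − g₃, with the invariants taken from their Fourier series. A self-contained sketch, standard library only; the θ-series are truncated, ℘′ is taken by a central difference, and the test points τ and z are arbitrary choices:

```python
import cmath, math

tau = 0.2 + 1.1j
q = cmath.exp(1j * math.pi * tau)          # the nome

def theta(j, z, terms=20):
    # Jacobi theta functions θ1..θ4 as rapidly converging q-series
    if j == 1:
        return 2 * sum((-1)**n * q**((n + .5)**2) * cmath.sin((2*n + 1)*z) for n in range(terms))
    if j == 2:
        return 2 * sum(q**((n + .5)**2) * cmath.cos((2*n + 1)*z) for n in range(terms))
    if j == 3:
        return 1 + 2 * sum(q**(n*n) * cmath.cos(2*n*z) for n in range(1, terms))
    return 1 + 2 * sum((-1)**n * q**(n*n) * cmath.cos(2*n*z) for n in range(1, terms))

def wp(z):
    # ℘(z, τ) via the theta-function representation quoted above
    t2, t3 = theta(2, 0), theta(3, 0)
    return (math.pi * t2 * t3 * theta(4, math.pi*z) / theta(1, math.pi*z))**2 \
           - math.pi**2 / 3 * (t2**4 + t3**4)

# invariants from their Fourier series, same nome q
sigma = lambda m, k: sum(d**m for d in range(1, k + 1) if k % d == 0)
g2 = (4/3)*math.pi**4 * (1 + 240*sum(sigma(3, k)*q**(2*k) for k in range(1, 30)))
g3 = (8/27)*math.pi**6 * (1 - 504*sum(sigma(5, k)*q**(2*k) for k in range(1, 30)))

z, h = 0.31 + 0.24j, 1e-5
wp_prime = (wp(z + h) - wp(z - h)) / (2*h)   # central difference
assert abs(wp_prime**2 - (4*wp(z)**3 - g2*wp(z) - g3)) < 1e-4
```

With |q| well below 1, a handful of θ-series terms already dominates the finite-difference error, which is why this representation is described as a very rapid algorithm.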
Consider the embedding of the cubic curve in the complex projective plane, where {\displaystyle O} is a point lying on the line at infinity {\displaystyle \mathbb {P} _{1}(\mathbb {C} )}. For this cubic there exists no rational parameterization if {\displaystyle \Delta \neq 0}.[1] In this case it is also called an elliptic curve. Nevertheless, there is a parameterization in homogeneous coordinates that uses the {\displaystyle \wp }-function and its derivative {\displaystyle \wp '}:[17] {\displaystyle \varphi :\mathbb {C} /\Lambda \to {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} },\quad z\mapsto {\begin{cases}\left[\wp (z):\wp '(z):1\right]&z\notin \Lambda \\O&z\in \Lambda \end{cases}}}

Now the map {\displaystyle \varphi } is bijective and parameterizes the elliptic curve {\displaystyle {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }}. {\displaystyle \mathbb {C} /\Lambda } is an abelian group and a topological space, equipped with the quotient topology.

It can be shown that every Weierstrass cubic is given in such a way. That is to say that for every pair {\displaystyle g_{2},g_{3}\in \mathbb {C} } with {\displaystyle \Delta =g_{2}^{3}-27g_{3}^{2}\neq 0} there exists a lattice {\displaystyle \mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}} such that {\displaystyle g_{2}=g_{2}(\omega _{1},\omega _{2})} and {\displaystyle g_{3}=g_{3}(\omega _{1},\omega _{2})}.[18]

The statement that elliptic curves over {\displaystyle \mathbb {Q} } can be parameterized over {\displaystyle \mathbb {Q} } is known as the modularity theorem. This is an important theorem in number theory. It was part of Andrew Wiles' proof (1995) of Fermat's Last Theorem.

Let {\displaystyle z,w\in \mathbb {C} } be such that {\displaystyle z,w,z+w,z-w\notin \Lambda }.
Then one has:[19] {\displaystyle \wp (z+w)={\frac {1}{4}}\left[{\frac {\wp '(z)-\wp '(w)}{\wp (z)-\wp (w)}}\right]^{2}-\wp (z)-\wp (w).} As well as the duplication formula:[19] {\displaystyle \wp (2z)={\frac {1}{4}}\left[{\frac {\wp ''(z)}{\wp '(z)}}\right]^{2}-2\wp (z).}

These formulas also have a geometric interpretation, if one looks at the elliptic curve {\displaystyle {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }} together with the mapping {\displaystyle {\varphi }:\mathbb {C} /\Lambda \to {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }} as in the previous section. The group structure of {\displaystyle (\mathbb {C} /\Lambda ,+)} translates to the curve {\displaystyle {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }} and can be geometrically interpreted there: the sum of three pairwise different points {\displaystyle a,b,c\in {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }} is zero if and only if they lie on the same line in {\displaystyle \mathbb {P} _{\mathbb {C} }^{2}}.[20] This is equivalent to: {\displaystyle \det \left({\begin{array}{rrr}1&\wp (u+v)&-\wp '(u+v)\\1&\wp (v)&\wp '(v)\\1&\wp (u)&\wp '(u)\\\end{array}}\right)=0,} where {\displaystyle \wp (u)=a}, {\displaystyle \wp (v)=b} and {\displaystyle u,v\notin \Lambda }.[21]

The Weierstrass elliptic function is usually written with a rather special, lower case script letter ℘, which was Weierstrass's own notation introduced in his lectures of 1862–1863.[footnote 1] It should not be confused with the normal mathematical script letters P: 𝒫 and 𝓅. In computing, the letter ℘ is available as \wp in TeX. In Unicode the code point is U+2118 ℘ SCRIPT CAPITAL P (&weierp;, &wp;), with the more correct alias weierstrass elliptic function.[footnote 2] In HTML, it can be escaped as &weierp;.
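The addition theorem can be verified numerically, here computing ℘ through the theta-function representation quoted earlier (θ-series truncated, ℘′ by a central difference; the period ratio and the test points z, w are arbitrary choices):

```python
import cmath, math

tau = 1.2j
q = cmath.exp(1j * math.pi * tau)           # the nome (real here, since τ is purely imaginary)

def th(j, z, N=15):
    # Jacobi theta functions θ1..θ4 as q-series
    if j == 1: return 2*sum((-1)**n * q**((n+.5)**2) * cmath.sin((2*n+1)*z) for n in range(N))
    if j == 2: return 2*sum(q**((n+.5)**2) * cmath.cos((2*n+1)*z) for n in range(N))
    if j == 3: return 1 + 2*sum(q**(n*n) * cmath.cos(2*n*z) for n in range(1, N))
    return 1 + 2*sum((-1)**n * q**(n*n) * cmath.cos(2*n*z) for n in range(1, N))

def wp(z):
    return (math.pi*th(2, 0)*th(3, 0)*th(4, math.pi*z)/th(1, math.pi*z))**2 \
           - math.pi**2/3*(th(2, 0)**4 + th(3, 0)**4)

def wp_prime(z, h=1e-6):
    return (wp(z + h) - wp(z - h)) / (2*h)

z, w = 0.23 + 0.17j, 0.41 - 0.06j
lhs = wp(z + w)
rhs = 0.25*((wp_prime(z) - wp_prime(w))/(wp(z) - wp(w)))**2 - wp(z) - wp(w)
assert abs(lhs - rhs) < 1e-4                # the addition theorem
```

The duplication formula can be checked the same way by letting w tend to z, which is exactly how it arises from the addition theorem.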
https://en.wikipedia.org/wiki/Weierstrass_elliptic_function
The Lee conformal world in a tetrahedron is a polyhedral, conformal map projection that projects the globe onto a tetrahedron using Dixon elliptic functions. It is conformal everywhere except for the four singularities at the vertices of the polyhedron. Because of the nature of polyhedra, this map projection can be tessellated infinitely in the plane. It was developed by Laurence Patrick Lee in 1965.[1]

Coordinates from a spherical datum can be transformed into Lee conformal projection coordinates with the following formulas,[1] where {\textstyle \lambda } is the longitude, {\textstyle \varphi } is the latitude, and sm and cm are Dixon elliptic functions. Since there is no elementary expression for these functions, Lee suggests using the 28th-degree MacLaurin series.[1]

Lee, L. P. (1976). Conformal Projections Based on Elliptic Functions. Cartographica Monographs. Vol. 16. Toronto: B. V. Gutsell, York University. ISBN 0-919870-16-3. Supplement No. 1 to The Canadian Cartographer 13.
https://en.wikipedia.org/wiki/Lee_conformal_world_in_a_tetrahedron
In complex analysis, a Schwarz–Christoffel mapping is a conformal map of the upper half-plane or the complex unit disk onto the interior of a simple polygon. Such a map is guaranteed to exist by the Riemann mapping theorem (stated by Bernhard Riemann in 1851); the Schwarz–Christoffel formula provides an explicit construction. These mappings were introduced independently by Elwin Christoffel in 1867 and Hermann Schwarz in 1869. Schwarz–Christoffel mappings are used in potential theory and some of its applications, including minimal surfaces, hyperbolic art, and fluid dynamics.

Consider a polygon in the complex plane. The Riemann mapping theorem implies that there is a biholomorphic mapping f from the upper half-plane to the interior of the polygon. The function f maps the real axis to the edges of the polygon. If the polygon has interior angles {\displaystyle \alpha ,\beta ,\gamma ,\ldots }, then this mapping is given by {\displaystyle f(\zeta )=\int ^{\zeta }{\frac {K}{(w-a)^{1-(\alpha /\pi )}(w-b)^{1-(\beta /\pi )}(w-c)^{1-(\gamma /\pi )}\cdots }}\,dw} where {\displaystyle K} is a constant, and {\displaystyle a<b<c<\cdots } are the values, along the real axis of the {\displaystyle \zeta } plane, of points corresponding to the vertices of the polygon in the {\displaystyle z} plane. A transformation of this form is called a Schwarz–Christoffel mapping.

The integral can be simplified by mapping the point at infinity of the {\displaystyle \zeta } plane to one of the vertices of the {\displaystyle z} plane polygon. By doing this, the first factor in the formula becomes constant and so can be absorbed into the constant {\displaystyle K}. Conventionally, the point at infinity would be mapped to the vertex with angle {\displaystyle \alpha }.

In practice, to find a mapping to a specific polygon one needs to find the {\displaystyle a<b<c<\cdots } values which generate the correct polygon side lengths. This requires solving a set of nonlinear equations, and in most cases can only be done numerically.[1]

Consider a semi-infinite strip in the z plane.
This may be regarded as a limiting form of a triangle with vertices P = 0, Q = πi, and R (with R real), as R tends to infinity. Now α = 0 and β = γ = π⁄2 in the limit. Suppose we are looking for the mapping f with f(−1) = Q, f(1) = P, and f(∞) = R. Then f is given by {\displaystyle f(\zeta )=\int ^{\zeta }{\frac {K}{(w-1)^{1/2}(w+1)^{1/2}}}\,dw.} Evaluation of this integral yields {\displaystyle f(\zeta )=K\operatorname {arcosh} \zeta +C,} where C is a (complex) constant of integration. Requiring that f(−1) = Q and f(1) = P gives C = 0 and K = 1. Hence the Schwarz–Christoffel mapping is given by {\displaystyle f(\zeta )=\operatorname {arcosh} \zeta .} This transformation is sketched below.

A mapping to a plane triangle with interior angles {\displaystyle \pi a,\,\pi b} and {\displaystyle \pi (1-a-b)} is given by {\displaystyle z=f(\zeta )=\int ^{\zeta }{\frac {dw}{(w-1)^{1-a}(w+1)^{1-b}}},} which can be expressed in terms of hypergeometric functions or incomplete beta functions.

The upper half-plane is mapped to a triangle with circular arcs for edges by the Schwarz triangle map.

The upper half-plane is mapped to the square by an elliptic integral, where F is the incomplete elliptic integral of the first kind.

An analogue of SC mapping that works also for multiply-connected domains is presented in: Case, James (2008), "Breakthrough in Conformal Mapping" (PDF), SIAM News, 41 (1).
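The strip example can be checked numerically. With K = 1 and C = 0 the evaluated map is the principal branch of arcosh (Python's cmath.acosh), and its values on the real axis should trace the boundary of the strip 0 ≤ Im z ≤ π; a quick sketch:

```python
import cmath, math

f = cmath.acosh     # the strip map f(ζ) = arcosh ζ, principal branch

# the two finite vertices are hit exactly
assert abs(f(1)) < 1e-12                    # f(1)  = P = 0
assert abs(f(-1) - 1j*math.pi) < 1e-12      # f(-1) = Q = πi

# ζ > 1 maps to the bottom edge (Im z = 0), ζ < −1 to the top edge (Im z = π),
# and −1 < ζ < 1 to the left edge (Re z = 0, 0 < Im z < π)
for t in (1.5, 3.0, 10.0):
    assert abs(f(t).imag) < 1e-12
    assert abs(f(-t).imag - math.pi) < 1e-12
for t in (-0.9, 0.0, 0.9):
    z = f(complex(t, 0.0))
    assert abs(z.real) < 1e-9 and 0 < z.imag < math.pi
```

Note that the check for ζ < −1 relies on cmath.acosh approaching its branch cut from above, which is the side corresponding to the upper half-plane.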
https://en.wikipedia.org/wiki/Schwarz%E2%80%93Christoffel_mapping
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation {\displaystyle i^{2}=-1}; every complex number can be expressed in the form {\displaystyle a+bi}, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number {\displaystyle a+bi}, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols {\displaystyle \mathbb {C} } or C. Despite the historical nomenclature, "imaginary" complex numbers have a mathematical existence as firm as that of the real numbers, and they are fundamental tools in the scientific description of the natural world.[1][2]

Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation {\displaystyle (x+1)^{2}=-9} has no real solution, because the square of a real number cannot be negative, but has the two nonreal complex solutions {\displaystyle -1+3i} and {\displaystyle -1-3i}.

Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule {\displaystyle i^{2}=-1} along with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field with the real numbers as a subfield. Because of these properties, {\displaystyle a+bi=a+ib}, and which form is written depends upon convention and style considerations.

The complex numbers also form a real vector space of dimension two, with {\displaystyle \{1,i\}} as a standard basis.
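The example equation is easy to verify with Python's built-in complex type (1j is Python's spelling of i):

```python
# The two nonreal solutions of (x + 1)² = −9
for x in (-1 + 3j, -1 - 3j):
    assert (x + 1) ** 2 == -9   # (±3i)² = −9

assert 1j ** 2 == -1            # the defining property i² = −1
```

Both checks are exact here, since only integer arithmetic on the real and imaginary parts is involved.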
This standard basis makes the complex numbers a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely some geometric objects and operations can be expressed in terms of complex numbers. For example, the real numbers form the real line, which is pictured as the horizontal axis of the complex plane, while real multiples of {\displaystyle i} are the vertical axis. A complex number can also be defined by its geometric polar coordinates: the radius is called the absolute value of the complex number, while the angle from the positive real axis is called the argument of the complex number. The complex numbers of absolute value one form the unit circle. Adding a fixed complex number to all complex numbers defines a translation in the complex plane, and multiplying by a fixed complex number is a similarity centered at the origin (dilating by the absolute value, and rotating by the argument). The operation of complex conjugation is the reflection symmetry with respect to the real axis.

The complex numbers form a rich structure that is simultaneously an algebraically closed field, a commutative algebra over the reals, and a Euclidean vector space of dimension two.

A complex number is an expression of the form a + bi, where a and b are real numbers, and i is an abstract symbol, the so-called imaginary unit, whose meaning will be explained further below. For example, 2 + 3i is a complex number.[3]

For a complex number a + bi, the real number a is called its real part, and the real number b (not the complex number bi) is its imaginary part.[4][5] The real part of a complex number z is denoted Re(z), {\displaystyle {\mathcal {Re}}(z)}, or {\displaystyle {\mathfrak {R}}(z)}; the imaginary part is Im(z), {\displaystyle {\mathcal {Im}}(z)}, or {\displaystyle {\mathfrak {I}}(z)}: for example, {\textstyle \operatorname {Re} (2+3i)=2}, {\displaystyle \operatorname {Im} (2+3i)=3}.
A complex number z can be identified with the ordered pair of real numbers {\displaystyle (\Re (z),\Im (z))}, which may be interpreted as coordinates of a point in a Euclidean plane with standard coordinates, which is then called the complex plane or Argand diagram.[6][7][a] The horizontal axis is generally used to display the real part, with increasing values to the right, and the imaginary part marks the vertical axis, with increasing values upwards.

A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0. A purely imaginary number bi is a complex number 0 + bi, whose real part is zero. It is common to write a + 0i = a, 0 + bi = bi, and a + (−b)i = a − bi; for example, 3 + (−4)i = 3 − 4i.

The set of all complex numbers is denoted by {\displaystyle \mathbb {C} } (blackboard bold) or C (upright bold). In some disciplines such as electromagnetism and electrical engineering, j is used instead of i, as i frequently represents electric current,[8][9] and complex numbers are written as a + bj or a + jb.

Two complex numbers {\displaystyle a=x+yi} and {\displaystyle b=u+vi} are added by separately adding their real and imaginary parts. That is to say: {\displaystyle a+b=(x+yi)+(u+vi)=(x+u)+(y+v)i.} Similarly, subtraction can be performed as {\displaystyle a-b=(x+yi)-(u+vi)=(x-u)+(y-v)i.}

The addition can be geometrically visualized as follows: the sum of two complex numbers a and b, interpreted as points in the complex plane, is the point obtained by building a parallelogram from the three vertices O and the points of the arrows labeled a and b (provided that they are not on a line). Equivalently, calling these points A, B, respectively, and the fourth point of the parallelogram X, the triangles OAB and XBA are congruent.
The product of two complex numbers is computed as follows: {\displaystyle (x+yi)(u+vi)=(xu-yv)+(xv+yu)i.} For example, {\displaystyle (3+2i)(4-i)=3\cdot 4-(2\cdot (-1))+(3\cdot (-1)+2\cdot 4)i=14+5i.} In particular, this includes as a special case the fundamental formula {\displaystyle i^{2}=-1.} This formula distinguishes the complex number i from any real number, since the square of any (negative or positive) real number is always a non-negative real number.

With this definition of multiplication and addition, familiar rules for the arithmetic of rational or real numbers continue to hold for complex numbers. More precisely, the distributive property and the commutative properties (of addition and multiplication) hold. Therefore, the complex numbers form an algebraic structure known as a field, the same way as the rational or real numbers do.[10]

The complex conjugate of the complex number z = x + yi is defined as {\displaystyle {\overline {z}}=x-yi.}[11] It is also denoted by some authors by {\displaystyle z^{*}}. Geometrically, {\displaystyle {\overline {z}}} is the "reflection" of z about the real axis. Conjugating twice gives the original complex number: {\displaystyle {\overline {\overline {z}}}=z.} A complex number is real if and only if it equals its own conjugate. The unary operation of taking the complex conjugate of a complex number cannot be expressed by applying only the basic operations of addition, subtraction, multiplication and division.

For any complex number z = x + yi, the product {\displaystyle z\cdot {\overline {z}}=x^{2}+y^{2}} is a non-negative real number. This makes it possible to define the absolute value (or modulus or magnitude) of z to be the square root[12] {\displaystyle |z|={\sqrt {x^{2}+y^{2}}}.} By Pythagoras' theorem, {\displaystyle |z|} is the distance from the origin to the point representing the complex number z in the complex plane. In particular, the circle of radius one around the origin consists precisely of the numbers z such that {\displaystyle |z|=1}.
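The worked product, conjugation, and the relation z·z̄ = |z|² can all be confirmed directly with Python's complex type:

```python
import math

z = 3 + 2j
w = 4 - 1j

# multiplication rule (x+yi)(u+vi) = (xu−yv) + (xv+yu)i
assert z * w == 14 + 5j

# conjugation: reflection about the real axis
assert z.conjugate() == 3 - 2j

# z·z̄ = x² + y² is real and non-negative, and equals |z|²
assert z * z.conjugate() == 13 + 0j
assert math.isclose(abs(z) ** 2, 13)
```

The last comparison uses math.isclose because abs(z) is a floating-point square root, so squaring it need not return exactly 13.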
If {\displaystyle z=x=x+0i} is a real number, then {\displaystyle |z|=|x|}: its absolute value as a complex number and as a real number are equal.

Using the conjugate, the reciprocal of a nonzero complex number {\displaystyle z=x+yi} can be computed to be {\displaystyle {\frac {1}{z}}={\frac {\bar {z}}{z{\bar {z}}}}={\frac {\bar {z}}{|z|^{2}}}={\frac {x-yi}{x^{2}+y^{2}}}={\frac {x}{x^{2}+y^{2}}}-{\frac {y}{x^{2}+y^{2}}}i.} More generally, the division of an arbitrary complex number {\displaystyle w=u+vi} by a non-zero complex number {\displaystyle z=x+yi} equals {\displaystyle {\frac {w}{z}}={\frac {w{\bar {z}}}{|z|^{2}}}={\frac {(u+vi)(x-iy)}{x^{2}+y^{2}}}={\frac {ux+vy}{x^{2}+y^{2}}}+{\frac {vx-uy}{x^{2}+y^{2}}}i.} This process is sometimes called "rationalization" of the denominator (although the denominator in the final expression may be an irrational real number), because it resembles the method to remove roots from simple expressions in a denominator.[13][14]

The argument of z (sometimes called the "phase" φ)[7] is the angle of the radius Oz with the positive real axis, and is written as arg z, expressed in radians in this article. The angle is defined only up to adding integer multiples of {\displaystyle 2\pi }, since a rotation by {\displaystyle 2\pi } (or 360°) around the origin leaves all points in the complex plane unchanged. One possible choice to uniquely specify the argument is to require it to be within the interval {\displaystyle (-\pi ,\pi ]}, which is referred to as the principal value.[15] The argument can be computed from the rectangular form x + yi by means of the arctan (inverse tangent) function.[16]

For any complex number z, with absolute value {\displaystyle r=|z|} and argument {\displaystyle \varphi }, the equation {\displaystyle z=r(\cos \varphi +i\sin \varphi )} holds. This identity is referred to as the polar form of z.
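The division-by-conjugate recipe and the principal value of the argument can be checked against Python's built-ins (the sample values w and z are arbitrary):

```python
import cmath, math

w = 5 + 3j
z = 1 - 2j

# division by multiplying with the conjugate of the denominator
quotient = w * z.conjugate() / abs(z) ** 2
assert abs(quotient - w / z) < 1e-12       # matches built-in complex division

# the argument (phase), principal value in (−π, π]
assert math.isclose(cmath.phase(1j), math.pi / 2)
assert math.isclose(cmath.phase(-1 + 0j), math.pi)
```

cmath.phase uses atan2 on the rectangular form, which is the robust version of the arctan computation mentioned in the text.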
It is sometimes abbreviated as {\textstyle z=r\operatorname {\mathrm {cis} } \varphi }. In electronics, one represents a phasor with amplitude r and phase φ in angle notation:[17] {\displaystyle z=r\angle \varphi .}

If two complex numbers are given in polar form, i.e., z1 = r1(cos φ1 + i sin φ1) and z2 = r2(cos φ2 + i sin φ2), the product and division can be computed as {\displaystyle z_{1}z_{2}=r_{1}r_{2}(\cos(\varphi _{1}+\varphi _{2})+i\sin(\varphi _{1}+\varphi _{2})).} {\displaystyle {\frac {z_{1}}{z_{2}}}={\frac {r_{1}}{r_{2}}}\left(\cos(\varphi _{1}-\varphi _{2})+i\sin(\varphi _{1}-\varphi _{2})\right),{\text{if }}z_{2}\neq 0.} (These are a consequence of the trigonometric identities for the sine and cosine function.) In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product.

The picture at the right illustrates the multiplication of {\displaystyle (2+i)(3+i)=5+5i.} Because the real and imaginary part of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radian). On the other hand, the angles at the origin of the red and blue triangles are arctan(1/3) and arctan(1/2), respectively, and the argument of the product is their sum. Thus, the formula {\displaystyle {\frac {\pi }{4}}=\arctan \left({\frac {1}{2}}\right)+\arctan \left({\frac {1}{3}}\right)} holds.
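The rule "arguments add under multiplication" and the resulting arctan identity can be confirmed numerically:

```python
import cmath, math

z1, z2 = 2 + 1j, 3 + 1j
product = z1 * z2
assert product == 5 + 5j

# the arguments add (no wrap-around past π occurs here)
assert math.isclose(cmath.phase(product), cmath.phase(z1) + cmath.phase(z2))

# which is exactly the identity π/4 = arctan(1/2) + arctan(1/3)
assert math.isclose(math.pi / 4, math.atan(1/2) + math.atan(1/3))
```

Since arg is only defined up to multiples of 2π, the phase comparison would need a wrap-around correction for arguments whose sum leaves (−π, π]; these small positive angles are safely inside it.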
As the arctan function can be approximated highly efficiently, formulas like this – known as Machin-like formulas – are used for high-precision approximations of π:[18] {\displaystyle {\frac {\pi }{4}}=4\arctan \left({\frac {1}{5}}\right)-\arctan \left({\frac {1}{239}}\right)}

The n-th power of a complex number can be computed using de Moivre's formula, which is obtained by repeatedly applying the above formula for the product: {\displaystyle z^{n}=\underbrace {z\cdot \dots \cdot z} _{n{\text{ factors}}}=(r(\cos \varphi +i\sin \varphi ))^{n}=r^{n}\,(\cos n\varphi +i\sin n\varphi ).} For example, the first few powers of the imaginary unit i are {\displaystyle i,i^{2}=-1,i^{3}=-i,i^{4}=1,i^{5}=i,\dots }.

The n nth roots of a complex number z are given by {\displaystyle z^{1/n}={\sqrt[{n}]{r}}\left(\cos \left({\frac {\varphi +2k\pi }{n}}\right)+i\sin \left({\frac {\varphi +2k\pi }{n}}\right)\right)} for 0 ≤ k ≤ n − 1. (Here {\displaystyle {\sqrt[{n}]{r}}} is the usual (positive) nth root of the positive real number r.) Because sine and cosine are periodic, other integer values of k do not give other values. For any {\displaystyle z\neq 0}, there are, in particular, n distinct complex n-th roots. For example, there are 4 fourth roots of 1, namely 1, i, −1, and −i.

In general there is no natural way of distinguishing one particular complex nth root of a complex number. (This is in contrast to the roots of a positive real number x, which has a unique positive real n-th root, which is therefore commonly referred to as the n-th root of x.) One refers to this situation by saying that the nth root is an n-valued function of z.
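The nth-root formula translates directly into code; running it for the fourth roots of 1 reproduces the list in the text (up to floating-point rounding):

```python
import cmath, math

def nth_roots(z, n):
    # z^{1/n} via the polar formula, one root for each k = 0 .. n−1
    r, phi = abs(z), cmath.phase(z)
    return [r**(1/n) * cmath.exp(1j * (phi + 2*k*math.pi) / n) for k in range(n)]

roots = nth_roots(1, 4)
for root in roots:
    assert abs(root**4 - 1) < 1e-12        # each one really is a fourth root of 1
expected = [1, 1j, -1, -1j]
assert all(abs(r - e) < 1e-12 for r, e in zip(roots, expected))
```

Using cmath.exp(iθ) = cos θ + i sin θ here is the same polar form, written via Euler's formula.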
The fundamental theorem of algebra, of Carl Friedrich Gauss and Jean le Rond d'Alembert, states that for any complex numbers (called coefficients) a0, ..., an, the equation {\displaystyle a_{n}z^{n}+\dotsb +a_{1}z+a_{0}=0} has at least one complex solution z, provided that at least one of the higher coefficients a1, ..., an is nonzero.[19] This property does not hold for the field of rational numbers {\displaystyle \mathbb {Q} } (the polynomial x² − 2 does not have a rational root, because √2 is not a rational number) nor the real numbers {\displaystyle \mathbb {R} } (the polynomial x² + 4 does not have a real root, because the square of a real number x is never negative, so x² + 4 ≥ 4 for every real x).

Because of this fact, {\displaystyle \mathbb {C} } is called an algebraically closed field. It is a cornerstone of various applications of complex numbers, as is detailed further below. There are various proofs of this theorem, by either analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one real root.

The solution in radicals (without trigonometric functions) of a general cubic equation, when all three of its roots are real numbers, contains the square roots of negative numbers, a situation that cannot be rectified by factoring aided by the rational root test, if the cubic is irreducible; this is the so-called casus irreducibilis ("irreducible case"). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545 in his Ars Magna,[20] though his understanding was rudimentary; moreover, he later described complex numbers as being "as subtle as they are useless".[21] Cardano did use imaginary numbers, but described using them as "mental torture."[22] This was prior to the use of the graphical complex plane.
Cardano and other Italian mathematicians, notably Scipione del Ferro, in the 1500s created an algorithm for solving cubic equations which generally had one real solution and two solutions containing an imaginary number. Because they ignored the answers with the imaginary numbers, Cardano found them useless.[23]

Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root.

Many mathematicians contributed to the development of complex numbers. The rules for addition, subtraction, multiplication, and root extraction of complex numbers were developed by the Italian mathematician Rafael Bombelli.[24] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.[25]

The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his Stereometrica he considered, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term {\displaystyle {\sqrt {81-144}}} in his calculations, which today would simplify to {\displaystyle {\sqrt {-63}}=3i{\sqrt {7}}}.[b] Negative quantities were not conceived of in Hellenistic mathematics and Hero merely replaced the negative value by its positive, {\displaystyle {\sqrt {144-81}}=3{\sqrt {7}}.}[27]

The impetus to study complex numbers as a topic in itself first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (Niccolò Fontana Tartaglia and Gerolamo Cardano).
It was soon realized (but proved much later)[28] that these formulas, even if one were interested only in real solutions, sometimes required the manipulation of square roots of negative numbers. In fact, it was proved later that the use of complex numbers is unavoidable when all three roots are real and distinct.[c] However, the general formula can still be used in this case, with some care to deal with the ambiguity resulting from the existence of three cube roots of a nonzero complex number. Rafael Bombelli was the first to explicitly address these seemingly paradoxical solutions of cubic equations, and he developed the rules of complex arithmetic in trying to resolve these issues. The term "imaginary" for these quantities was coined by René Descartes in 1637, who was at pains to stress their unreal nature:[29] ... sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine. [... quelquefois seulement imaginaires c'est-à-dire que l'on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu'il n'y a quelquefois aucune quantité qui corresponde à celle qu'on imagine.] A further source of confusion was that the equation $\sqrt{-1}^2 = \sqrt{-1}\sqrt{-1} = -1$ seemed to be capriciously inconsistent with the algebraic identity $\sqrt{a}\sqrt{b} = \sqrt{ab}$, which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity in the case when both a and b are negative, and the related identity $\frac{1}{\sqrt{a}} = \sqrt{\frac{1}{a}}$, even bedeviled Leonhard Euler.
This difficulty eventually led to the convention of using the special symboliin place of−1{\displaystyle {\sqrt {-1}}}to guard against this mistake.[30][31]Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book,Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout. In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730Abraham de Moivrenoted that the identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be re-expressed by the followingde Moivre's formula: (cos⁡θ+isin⁡θ)n=cos⁡nθ+isin⁡nθ.{\displaystyle (\cos \theta +i\sin \theta )^{n}=\cos n\theta +i\sin n\theta .} In 1748, Euler went further and obtainedEuler's formulaofcomplex analysis:[32] eiθ=cos⁡θ+isin⁡θ{\displaystyle e^{i\theta }=\cos \theta +i\sin \theta } by formally manipulating complexpower seriesand observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities. The idea of a complex number as a point in the complex plane was first described byDanish–NorwegianmathematicianCaspar Wesselin 1799,[33]although it had been anticipated as early as 1685 inWallis'sA Treatise of Algebra.[34] Wessel's memoir appeared in the Proceedings of theCopenhagen Academybut went largely unnoticed. 
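Both de Moivre's formula and Euler's formula are easy to check numerically with Python's standard cmath module; the values of θ and n below are arbitrary samples chosen for illustration:

```python
import cmath, math

theta, n = 0.7, 5

# Euler's formula: e^{i*theta} = cos(theta) + i*sin(theta)
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
assert abs(lhs - rhs) < 1e-12

# de Moivre's formula: (cos t + i sin t)^n = cos(nt) + i sin(nt)
assert abs(rhs ** n - cmath.exp(1j * n * theta)) < 1e-12
```

This is exactly the observation that reduced trigonometric identities to exponential ones.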
In 1806Jean-Robert Argandindependently issued a pamphlet on complex numbers and provided a rigorous proof of thefundamental theorem of algebra.[35]Carl Friedrich Gausshad earlier published an essentiallytopologicalproof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1".[36]It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane,[37]largely establishing modern notation and terminology:[38] If one formerly contemplated this subject from a false point of view and therefore found a mysterious darkness, this is in large part attributable to clumsy terminology. Had one not called +1, −1,−1{\displaystyle {\sqrt {-1}}}positive, negative, or imaginary (or even impossible) units, but instead, say, direct, inverse, or lateral units, then there could scarcely have been talk of such darkness. In the beginning of the 19th century, other mathematicians discovered independently the geometrical representation of the complex numbers: Buée,[39][40]Mourey,[41]Warren,[42][43][44]Françaisand his brother,Bellavitis.[45][46] The English mathematicianG.H. Hardyremarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way" although mathematicians such as NorwegianNiels Henrik AbelandCarl Gustav Jacob Jacobiwere necessarily using them routinely before Gauss published his 1831 treatise.[47] Augustin-Louis CauchyandBernhard Riemanntogether brought the fundamental ideas ofcomplex analysisto a high state of completion, commencing around 1825 in Cauchy's case. The common terms used in the theory are chiefly due to the founders. 
Argand called $\cos\varphi + i\sin\varphi$ the direction factor, and $r = \sqrt{a^2 + b^2}$ the modulus;[d][48] Cauchy (1821) called $\cos\varphi + i\sin\varphi$ the reduced form (l'expression réduite)[49] and apparently introduced the term argument; Gauss used i for $\sqrt{-1}$,[e] introduced the term complex number for a + bi,[f] and called $a^2 + b^2$ the norm.[g] The expression direction coefficient, often used for $\cos\varphi + i\sin\varphi$, is due to Hankel (1867),[53] and absolute value, for modulus, is due to Weierstrass. Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others. Important work (including a systematization) in complex multivariate calculus began at the start of the 20th century; important results were achieved by Wilhelm Wirtinger in 1927. While the above low-level definitions, including the addition and multiplication, accurately describe the complex numbers, there are other, equivalent approaches that reveal the abstract algebraic structure of the complex numbers more immediately. One approach to $\mathbb{C}$ is via polynomials, i.e., expressions of the form $p(X) = a_n X^n + \dotsb + a_1 X + a_0$, where the coefficients a0, ..., an are real numbers. The set of all such polynomials is denoted by $\mathbb{R}[X]$. Since sums and products of polynomials are again polynomials, this set $\mathbb{R}[X]$ forms a commutative ring, called the polynomial ring (over the reals). To every such polynomial p, one may assign the complex number $p(i) = a_n i^n + \dotsb + a_1 i + a_0$, i.e., the value obtained by setting X = i. This defines a function $\mathbb{R}[X] \to \mathbb{C}$, $p \mapsto p(i)$. This function is surjective, since every complex number can be obtained in such a way: the evaluation of a linear polynomial a + bX at X = i is a + bi.
However, the evaluation of polynomialX2+1{\displaystyle X^{2}+1}atiis 0, sincei2+1=0.{\displaystyle i^{2}+1=0.}This polynomial isirreducible, i.e., cannot be written as a product of two linear polynomials. Basic facts ofabstract algebrathen imply that thekernelof the above map is anidealgenerated by this polynomial, and that the quotient by this ideal is a field, and that there is anisomorphism between the quotient ring andC{\displaystyle \mathbb {C} }. Some authors take this as the definition ofC{\displaystyle \mathbb {C} }.[54] Accepting thatC{\displaystyle \mathbb {C} }is algebraically closed, because it is analgebraic extensionofR{\displaystyle \mathbb {R} }in this approach,C{\displaystyle \mathbb {C} }is therefore thealgebraic closureofR.{\displaystyle \mathbb {R} .} Complex numbersa+bican also be represented by2 × 2matricesthat have the form(a−bba).{\displaystyle {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}.}Here the entriesaandbare real numbers. As the sum and product of two such matrices is again of this form, these matrices form asubringof the ring of2 × 2matrices. A simple computation shows that the mapa+ib↦(a−bba){\displaystyle a+ib\mapsto {\begin{pmatrix}a&-b\\b&\;\;a\end{pmatrix}}}is aring isomorphismfrom the field of complex numbers to the ring of these matrices, proving that these matrices form a field. This isomorphism associates the square of the absolute value of a complex number with thedeterminantof the corresponding matrix, and the conjugate of a complex number with thetransposeof the matrix. The geometric description of the multiplication of complex numbers can also be expressed in terms ofrotation matricesby using this correspondence between complex numbers and such matrices. The action of the matrix on a vector(x,y)corresponds to the multiplication ofx+iybya+ib. 
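The ring isomorphism described above can be sketched in a few lines of NumPy; the helper name `mat` is an illustrative assumption:

```python
import numpy as np

def mat(z: complex) -> np.ndarray:
    """2x2 real matrix representing the complex number z = a + bi."""
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 3 + 4j, 1 - 2j

# Multiplication of complex numbers corresponds to matrix multiplication.
assert np.allclose(mat(z) @ mat(w), mat(z * w))

# The determinant equals the squared absolute value |z|^2.
assert np.isclose(np.linalg.det(mat(z)), abs(z) ** 2)

# The transpose corresponds to complex conjugation.
assert np.allclose(mat(z).T, mat(z.conjugate()))
```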
In particular, if the determinant is1, there is a real numbertsuch that the matrix has the form (cos⁡t−sin⁡tsin⁡tcos⁡t).{\displaystyle {\begin{pmatrix}\cos t&-\sin t\\\sin t&\;\;\cos t\end{pmatrix}}.}In this case, the action of the matrix on vectors and the multiplication by the complex numbercos⁡t+isin⁡t{\displaystyle \cos t+i\sin t}are both therotationof the anglet. The study of functions of a complex variable is known ascomplex analysisand has enormous practical use inapplied mathematicsas well as in other branches of mathematics. Often, the most natural proofs for statements inreal analysisor evennumber theoryemploy techniques from complex analysis (seeprime number theoremfor an example). Unlike real functions, which are commonly represented as two-dimensional graphs,complex functionshave four-dimensional graphs and may usefully be illustrated by color-coding athree-dimensional graphto suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane. The notions ofconvergent seriesandcontinuous functionsin (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said toconvergeif and only if its real and imaginary parts do. This is equivalent to the(ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view,C{\displaystyle \mathbb {C} }, endowed with themetricd⁡(z1,z2)=|z1−z2|{\displaystyle \operatorname {d} (z_{1},z_{2})=|z_{1}-z_{2}|}is a completemetric space, which notably includes thetriangle inequality|z1+z2|≤|z1|+|z2|{\displaystyle |z_{1}+z_{2}|\leq |z_{1}|+|z_{2}|}for any two complex numbersz1andz2. 
Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp z, also written e^z, is defined as the infinite series, which can be shown to converge for any z: $\exp z := 1 + z + \frac{z^2}{2\cdot 1} + \frac{z^3}{3\cdot 2\cdot 1} + \cdots = \sum_{n=0}^{\infty}\frac{z^n}{n!}$. For example, exp(1) is Euler's number e ≈ 2.718. Euler's formula states: $\exp(i\varphi) = \cos\varphi + i\sin\varphi$ for any real number φ. This formula is a quick consequence of general basic facts about convergent power series and the definitions of the involved functions as power series. As a special case, this includes Euler's identity $\exp(i\pi) = -1$. For any positive real number t, there is a unique real number x such that exp(x) = t. This leads to the definition of the natural logarithm as the inverse $\ln\colon \mathbb{R}^+ \to \mathbb{R};\ x \mapsto \ln x$ of the exponential function. The situation is different for complex numbers, since $\exp(z + 2\pi i) = \exp z$ by the functional equation and Euler's identity. For example, $e^{i\pi} = e^{3i\pi} = -1$, so both iπ and 3iπ are possible values for the complex logarithm of −1. In general, given any non-zero complex number w, any number z solving the equation $\exp z = w$ is called a complex logarithm of w, denoted $\log w$. It can be shown that these numbers satisfy $z = \log w = \ln|w| + i\arg w$, where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π, π].
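The power-series definition of exp, and the multivaluedness of log, can be sketched numerically; `exp_series` is a hypothetical helper that truncates the series at a fixed number of terms:

```python
import cmath

def exp_series(z: complex, terms: int = 40) -> complex:
    """Partial sum of the power series sum_{n} z^n / n!."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # z^(n+1)/(n+1)! from z^n/n!
    return total

z = 0.5 + 1.2j
assert abs(exp_series(z) - cmath.exp(z)) < 1e-12

# The complex logarithm is multivalued: branches differ by multiples of 2*pi*i.
w = -1 + 0j
principal = cmath.log(w)     # i*pi; imaginary part restricted to (-pi, pi]
assert abs(cmath.exp(principal + 2j * cmath.pi) - w) < 1e-12
```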
This leads to the complex logarithm being a bijective function taking values in the strip $\mathbb{R} + i\,(-\pi, \pi]$ (denoted $S_0$ in the above illustration): $\ln\colon \mathbb{C}^\times \to \mathbb{R} + i\,(-\pi, \pi]$. If $z \in \mathbb{C}\setminus(-\mathbb{R}_{\geq 0})$ is not a non-positive real number (that is, z is positive or non-real), the resulting principal value of the complex logarithm is obtained with −π < φ < π. It is an analytic function outside the negative real numbers, but it cannot be extended to a function that is continuous at any negative real number $z \in -\mathbb{R}^+$, where the principal value is ln z = ln(−z) + iπ.[h] Complex exponentiation z^ω is defined as $z^\omega = \exp(\omega \ln z)$, and is multi-valued, except when ω is an integer. For ω = 1/n, for some natural number n, this recovers the non-uniqueness of nth roots mentioned above. If z > 0 is real (and ω an arbitrary complex number), one has a preferred choice of ln z, the real logarithm, which can be used to define a preferred exponential function. Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy $a^{bc} = \left(a^b\right)^c$. Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right. The series defining the real trigonometric functions sine and cosine, as well as the hyperbolic functions sinh and cosh, also carry over to complex arguments without change.
For the other trigonometric and hyperbolic functions, such as tangent, things are slightly more complicated, as the defining series do not converge for all complex values. Therefore, one must define them either in terms of sine, cosine and the exponential, or, equivalently, by using the method of analytic continuation. A function $f\colon \mathbb{C} \to \mathbb{C}$ is called holomorphic or complex differentiable at a point $z_0$ if the limit $\lim_{h\to 0}\frac{f(z_0 + h) - f(z_0)}{h}$ exists (in which case it is denoted by $f'(z_0)$). This mimics the definition for real differentiable functions, except that all quantities are complex numbers. Loosely speaking, the freedom of approaching $z_0$ in different directions imposes a much stronger condition than being (real) differentiable. For example, the function $f(z) = \bar{z}$ is differentiable as a function $\mathbb{R}^2 \to \mathbb{R}^2$, but is not complex differentiable. A real differentiable function is complex differentiable if and only if it satisfies the Cauchy–Riemann equations, which are sometimes abbreviated as $\frac{\partial f}{\partial \bar{z}} = 0$. Complex analysis shows some features not apparent in real analysis. For example, the identity theorem asserts that two holomorphic functions f and g agree if they agree on an arbitrarily small open subset of $\mathbb{C}$. Meromorphic functions, functions that can locally be written as $f(z)/(z - z_0)^n$ with a holomorphic function f, still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/z) at z = 0. Complex numbers have applications in many scientific areas, including signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some of these applications are described below. Complex conjugation is also employed in inversive geometry, a branch of geometry studying reflections more general than ones about a line.
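The claim that complex differentiability constrains the direction of approach can be probed numerically. This is a rough sketch using a finite step h rather than a true limit; `diff_quotient` is an illustrative helper:

```python
def diff_quotient(f, z0: complex, h: complex) -> complex:
    """Difference quotient (f(z0 + h) - f(z0)) / h for a small complex step h."""
    return (f(z0 + h) - f(z0)) / h

z0, eps = 1 + 1j, 1e-6

# f(z) = z^2 is holomorphic: the quotient is the same from every direction.
along_real = diff_quotient(lambda z: z * z, z0, eps)
along_imag = diff_quotient(lambda z: z * z, z0, eps * 1j)
assert abs(along_real - along_imag) < 1e-4      # both approach 2*z0

# f(z) = conj(z) is real differentiable but NOT complex differentiable:
# the quotient is +1 along the real axis and -1 along the imaginary axis.
assert abs(diff_quotient(lambda z: z.conjugate(), z0, eps) - 1) < 1e-9
assert abs(diff_quotient(lambda z: z.conjugate(), z0, eps * 1j) + 1) < 1e-9
```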
In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when the maximum power transfer theorem is applied. Three non-collinear points u, v, w in the plane determine the shape of the triangle {u, v, w}. Locating the points in the complex plane, this shape may be expressed by complex arithmetic as $S(u,v,w) = \frac{u-w}{u-v}$. The shape S of a triangle remains the same when the complex plane is transformed by translation or dilation (by an affine transformation), corresponding to the intuitive notion of shape and describing similarity. Thus each triangle {u, v, w} is in a similarity class of triangles with the same shape.[55] The Mandelbrot set is a popular example of a fractal formed on the complex plane. It is defined by plotting every location c where iterating the map $f_c(z) = z^2 + c$ starting from z = 0 does not diverge. Julia sets are defined by the same iteration, except that c is held constant and the starting point z varies. Every triangle has a unique Steiner inellipse – an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem:[56][57] Denote the triangle's vertices in the complex plane as a = xA + yAi, b = xB + yBi, and c = xC + yCi. Write the cubic equation $(x-a)(x-b)(x-c) = 0$, take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse. As mentioned above, any nonconstant polynomial equation (with complex coefficients) has a solution in $\mathbb{C}$. A fortiori, the same is true if the equation has rational coefficients.
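The Mandelbrot membership rule described above can be sketched directly; the escape radius 2 is the standard bound, and the iteration cap is an arbitrary choice for illustration:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Crude membership test: c is kept if z -> z^2 + c stays bounded from z = 0."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:           # once |z| > 2 the orbit is known to escape
            return False
    return True

assert in_mandelbrot(0j)           # the origin never escapes
assert in_mandelbrot(-1 + 0j)      # period-2 orbit: 0 -> -1 -> 0 -> ...
assert not in_mandelbrot(1 + 0j)   # 0 -> 1 -> 2 -> 5 -> ... diverges
```

A Julia set test would fix c and vary the starting z instead.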
The roots of such equations are calledalgebraic numbers– they are a principal object of study inalgebraic number theory. Compared toQ¯{\displaystyle {\overline {\mathbb {Q} }}}, the algebraic closure ofQ{\displaystyle \mathbb {Q} }, which also contains all algebraic numbers,C{\displaystyle \mathbb {C} }has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery offield theoryto thenumber fieldcontainingroots of unity, it can be shown that it is not possible to construct a regularnonagonusing only compass and straightedge– a purely geometric problem. Another example is theGaussian integers; that is, numbers of the formx+iy, wherexandyare integers, which can be used to classifysums of squares. Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, theRiemann zeta functionζ(s)is related to the distribution ofprime numbers. In applied fields, complex numbers are often used to compute certain real-valuedimproper integrals, by means of complex-valued functions. Several methods exist to do this; seemethods of contour integration. Indifferential equations, it is common to first find all complex rootsrof thecharacteristic equationof alinear differential equationor equation system and then attempt to solve the system in terms of base functions of the formf(t) =ert. Likewise, indifference equations, the complex rootsrof the characteristic equation of the difference equation system are used, to attempt to solve the system in terms of base functions of the formf(t) =rt. 
Since $\mathbb{C}$ is algebraically closed, any non-empty complex square matrix has at least one (complex) eigenvalue. By comparison, real matrices do not always have real eigenvalues; for example, rotation matrices (for rotations of the plane by angles other than 0° or 180°) leave no direction fixed, and therefore do not have any real eigenvalue. The existence of (complex) eigenvalues, and the ensuing existence of an eigendecomposition, is a useful tool for computing matrix powers and matrix exponentials. Complex numbers often generalize concepts originally conceived in the real numbers. For example, the conjugate transpose generalizes the transpose, hermitian matrices generalize symmetric matrices, and unitary matrices generalize orthogonal matrices. In control theory, systems are often transformed from the time domain to the complex frequency domain using the Laplace transform. The system's zeros and poles are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane. In the root locus method, it is important whether zeros and poles are in the left or right half plane, that is, have real part greater than or less than zero. If a linear, time-invariant (LTI) system has poles that are in the right half plane, it will be unstable; if all its poles are in the left half plane, it will be stable; and if it has poles on the imaginary axis, it will have marginal stability. If a system has zeros in the right half plane, it is a nonminimum phase system. Complex numbers are used in signal analysis and other fields for a convenient description of periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value |z| of the corresponding z is the amplitude and the argument arg z is the phase.
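The rotation-matrix claim above is easy to verify with NumPy: the eigenvalues are the complex conjugate pair $e^{\pm i\theta}$, never real for a rotation other than 0° or 180° (θ = π/3 below is an arbitrary sample):

```python
import numpy as np

theta = np.pi / 3   # a 60 degree rotation of the plane
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigvals = np.linalg.eigvals(R)

# No real eigenvalue exists, but the complex ones are e^{+/- i*theta}.
assert all(abs(lam.imag) > 1e-9 for lam in eigvals)
assert np.allclose(sorted(eigvals, key=lambda l: l.imag),
                   [np.exp(-1j * theta), np.exp(1j * theta)])
```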
IfFourier analysisis employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex-valued functions of the form x(t)=Re⁡{X(t)}{\displaystyle x(t)=\operatorname {Re} \{X(t)\}} and X(t)=Aeiωt=aeiϕeiωt=aei(ωt+ϕ){\displaystyle X(t)=Ae^{i\omega t}=ae^{i\phi }e^{i\omega t}=ae^{i(\omega t+\phi )}} where ω represents theangular frequencyand the complex numberAencodes the phase and amplitude as explained above. This use is also extended intodigital signal processinganddigital image processing, which use digital versions of Fourier analysis (andwaveletanalysis) to transmit,compress, restore, and otherwise processdigitalaudiosignals, still images, andvideosignals. Another example, relevant to the two side bands ofamplitude modulationof AM radio, is: cos⁡((ω+α)t)+cos⁡((ω−α)t)=Re⁡(ei(ω+α)t+ei(ω−α)t)=Re⁡((eiαt+e−iαt)⋅eiωt)=Re⁡(2cos⁡(αt)⋅eiωt)=2cos⁡(αt)⋅Re⁡(eiωt)=2cos⁡(αt)⋅cos⁡(ωt).{\displaystyle {\begin{aligned}\cos((\omega +\alpha )t)+\cos \left((\omega -\alpha )t\right)&=\operatorname {Re} \left(e^{i(\omega +\alpha )t}+e^{i(\omega -\alpha )t}\right)\\&=\operatorname {Re} \left(\left(e^{i\alpha t}+e^{-i\alpha t}\right)\cdot e^{i\omega t}\right)\\&=\operatorname {Re} \left(2\cos(\alpha t)\cdot e^{i\omega t}\right)\\&=2\cos(\alpha t)\cdot \operatorname {Re} \left(e^{i\omega t}\right)\\&=2\cos(\alpha t)\cdot \cos \left(\omega t\right).\end{aligned}}} Inelectrical engineering, theFourier transformis used to analyze varyingelectric currentsandvoltages. The treatment ofresistors,capacitors, andinductorscan then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called theimpedance. This approach is calledphasorcalculus. 
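The sideband identity above can be checked numerically on a sampled time grid; the frequencies ω and α are arbitrary sample values:

```python
import numpy as np

omega, alpha = 5.0, 0.8
t = np.linspace(0, 10, 1000)

# Sum of the two sidebands ...
sidebands = np.cos((omega + alpha) * t) + np.cos((omega - alpha) * t)

# ... equals the modulated carrier 2*cos(alpha*t)*cos(omega*t),
# obtained above by factoring e^{i*omega*t} out of the complex form.
carrier = 2 * np.cos(alpha * t) * np.cos(omega * t)
assert np.allclose(sidebands, carrier)

# The same computation via the real part of complex exponentials:
complex_form = np.real(np.exp(1j * (omega + alpha) * t)
                       + np.exp(1j * (omega - alpha) * t))
assert np.allclose(sidebands, complex_form)
```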
In electrical engineering, the imaginary unit is denoted byj, to avoid confusion withI, which is generally in use to denote electric current, or, more particularly,i, which is generally in use to denote instantaneous electric current. Because the voltage in an AC circuit is oscillating, it can be represented as V(t)=V0ejωt=V0(cos⁡ωt+jsin⁡ωt),{\displaystyle V(t)=V_{0}e^{j\omega t}=V_{0}\left(\cos \omega t+j\sin \omega t\right),} To obtain the measurable quantity, the real part is taken: v(t)=Re⁡(V)=Re⁡[V0ejωt]=V0cos⁡ωt.{\displaystyle v(t)=\operatorname {Re} (V)=\operatorname {Re} \left[V_{0}e^{j\omega t}\right]=V_{0}\cos \omega t.} The complex-valued signalV(t)is called theanalyticrepresentation of the real-valued, measurable signalv(t).[58] Influid dynamics, complex functions are used to describepotential flow in two dimensions. The complex number field is intrinsic to themathematical formulations of quantum mechanics, where complexHilbert spacesprovide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – theSchrödinger equationand Heisenberg'smatrix mechanics– make use of complex numbers. Inspecial relativityandgeneral relativity, some formulas for the metric onspacetimebecome simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer standard in classical relativity, but isused in an essential wayinquantum field theory.) Complex numbers are essential tospinors, which are a generalization of thetensorsused in relativity. 
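A minimal sketch of phasor calculus, assuming illustrative component values for a series RLC circuit (the function name and values are this example's, not the article's):

```python
import math

def series_rlc_impedance(R: float, L: float, C: float, omega: float) -> complex:
    """Complex impedance of a series RLC circuit at angular frequency omega.

    Resistor: R, inductor: j*omega*L, capacitor: 1/(j*omega*C)."""
    return R + 1j * omega * L + 1 / (1j * omega * C)

# Assumed example values: 100 ohm, 10 mH, 1 uF.
R, L, C = 100.0, 10e-3, 1e-6
omega0 = 1 / math.sqrt(L * C)        # resonant angular frequency

# At resonance the inductive and capacitive parts cancel: Z is purely real.
Z = series_rlc_impedance(R, L, C, omega0)
assert abs(Z.imag) < 1e-9 and abs(Z.real - R) < 1e-9
```

Treating all three components as one complex number is exactly what makes the algebra of AC circuits look like Ohm's law.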
The fieldC{\displaystyle \mathbb {C} }has the following three properties: It can be shown that any field having these properties isisomorphic(as a field) toC.{\displaystyle \mathbb {C} .}For example, thealgebraic closureof the fieldQp{\displaystyle \mathbb {Q} _{p}}of thep-adic numberalso satisfies these three properties, so these two fields are isomorphic (as fields, but not as topological fields).[59]Also,C{\displaystyle \mathbb {C} }is isomorphic to the field of complexPuiseux series. However, specifying an isomorphism requires theaxiom of choice. Another consequence of this algebraic characterization is thatC{\displaystyle \mathbb {C} }contains many proper subfields that are isomorphic toC{\displaystyle \mathbb {C} }. The preceding characterization ofC{\displaystyle \mathbb {C} }describes only the algebraic aspects ofC.{\displaystyle \mathbb {C} .}That is to say, the properties ofnearnessandcontinuity, which matter in areas such asanalysisandtopology, are not dealt with. The following description ofC{\displaystyle \mathbb {C} }as atopological field(that is, a field that is equipped with atopology, which allows the notion of convergence) does take into account the topological properties.C{\displaystyle \mathbb {C} }contains a subsetP(namely the set of positive real numbers) of nonzero elements satisfying the following three conditions: Moreover,C{\displaystyle \mathbb {C} }has a nontrivialinvolutiveautomorphismx↦x*(namely the complex conjugation), such thatx x*is inPfor any nonzeroxinC.{\displaystyle \mathbb {C} .} Any fieldFwith these properties can be endowed with a topology by taking the setsB(x,p) = {y|p− (y−x)(y−x)* ∈P}as abase, wherexranges over the field andpranges overP. 
With this topologyFis isomorphic as atopologicalfield toC.{\displaystyle \mathbb {C} .} The onlyconnectedlocally compacttopological fieldsareR{\displaystyle \mathbb {R} }andC.{\displaystyle \mathbb {C} .}This gives another characterization ofC{\displaystyle \mathbb {C} }as a topological field, becauseC{\displaystyle \mathbb {C} }can be distinguished fromR{\displaystyle \mathbb {R} }because the nonzero complex numbers areconnected, while the nonzero real numbers are not.[60] The process of extending the fieldR{\displaystyle \mathbb {R} }of reals toC{\displaystyle \mathbb {C} }is an instance of theCayley–Dickson construction. Applying this construction iteratively toC{\displaystyle \mathbb {C} }then yields thequaternions, theoctonions,[61]thesedenions, and thetrigintaduonions. This construction turns out to diminish the structural properties of the involved number systems. Unlike the reals,C{\displaystyle \mathbb {C} }is not anordered field, that is to say, it is not possible to define a relationz1<z2that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any element is necessarily positive, soi2= −1precludes the existence of anorderingonC.{\displaystyle \mathbb {C} .}[62]Passing fromC{\displaystyle \mathbb {C} }to the quaternionsH{\displaystyle \mathbb {H} }loses commutativity, while the octonions (additionally to not being commutative) fail to be associative. The reals, complex numbers, quaternions and octonions are allnormed division algebrasoverR{\displaystyle \mathbb {R} }. ByHurwitz's theoremthey are the only ones; thesedenions, the next step in the Cayley–Dickson construction, fail to have this structure. The Cayley–Dickson construction is closely related to theregular representationofC,{\displaystyle \mathbb {C} ,}thought of as anR{\displaystyle \mathbb {R} }-algebra(anR{\displaystyle \mathbb {R} }-vector space with a multiplication), with respect to the basis(1,i). 
This means the following: theR{\displaystyle \mathbb {R} }-linear mapC→Cz↦wz{\displaystyle {\begin{aligned}\mathbb {C} &\rightarrow \mathbb {C} \\z&\mapsto wz\end{aligned}}}for some fixed complex numberwcan be represented by a2 × 2matrix (once a basis has been chosen). With respect to the basis(1,i), this matrix is(Re⁡(w)−Im⁡(w)Im⁡(w)Re⁡(w)),{\displaystyle {\begin{pmatrix}\operatorname {Re} (w)&-\operatorname {Im} (w)\\\operatorname {Im} (w)&\operatorname {Re} (w)\end{pmatrix}},}that is, the one mentioned in the section on matrix representation of complex numbers above. While this is alinear representationofC{\displaystyle \mathbb {C} }in the 2 × 2 real matrices, it is not the only one. Any matrixJ=(pqr−p),p2+qr+1=0{\displaystyle J={\begin{pmatrix}p&q\\r&-p\end{pmatrix}},\quad p^{2}+qr+1=0}has the property that its square is the negative of the identity matrix:J2= −I. Then{z=aI+bJ:a,b∈R}{\displaystyle \{z=aI+bJ:a,b\in \mathbb {R} \}}is also isomorphic to the fieldC,{\displaystyle \mathbb {C} ,}and gives an alternative complex structure onR2.{\displaystyle \mathbb {R} ^{2}.}This is generalized by the notion of alinear complex structure. Hypercomplex numbersalso generalizeR,{\displaystyle \mathbb {R} ,}C,{\displaystyle \mathbb {C} ,}H,{\displaystyle \mathbb {H} ,}andO.{\displaystyle \mathbb {O} .}For example, this notion contains thesplit-complex numbers, which are elements of the ringR[x]/(x2−1){\displaystyle \mathbb {R} [x]/(x^{2}-1)}(as opposed toR[x]/(x2+1){\displaystyle \mathbb {R} [x]/(x^{2}+1)}for complex numbers). In this ring, the equationa2= 1has four solutions. The fieldR{\displaystyle \mathbb {R} }is the completion ofQ,{\displaystyle \mathbb {Q} ,}the field ofrational numbers, with respect to the usualabsolute valuemetric. Other choices ofmetricsonQ{\displaystyle \mathbb {Q} }lead to the fieldsQp{\displaystyle \mathbb {Q} _{p}}ofp-adic numbers(for anyprime numberp), which are thereby analogous toR{\displaystyle \mathbb {R} }. 
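The family of matrices J above can be checked numerically; p and q are arbitrary sample values with r solved from the constraint p² + qr + 1 = 0, and `embed` is an illustrative helper:

```python
import numpy as np

# Any real matrix J = [[p, q], [r, -p]] with p^2 + q*r + 1 = 0 squares to -I,
# giving an alternative complex structure on R^2.
p, q = 2.0, 1.0
r = -(p * p + 1) / q
J = np.array([[p, q], [r, -p]])

I = np.eye(2)
assert np.allclose(J @ J, -I)

def embed(a: float, b: float) -> np.ndarray:
    """Represent a + b*i as the matrix a*I + b*J."""
    return a * I + b * J

# a*I + b*J multiplies exactly like the complex number a + b*i.
z, w = (1.0, 2.0), (3.0, -1.0)   # (a, b) pairs standing for a + b*i
prod = (z[0] * w[0] - z[1] * w[1], z[0] * w[1] + z[1] * w[0])
assert np.allclose(embed(*z) @ embed(*w), embed(*prod))
```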
There are no other nontrivial ways of completing $\mathbb{Q}$ than $\mathbb{R}$ and $\mathbb{Q}_p$, by Ostrowski's theorem. The algebraic closures $\overline{\mathbb{Q}_p}$ of $\mathbb{Q}_p$ still carry a norm, but (unlike $\mathbb{C}$) are not complete with respect to it. The completion $\mathbb{C}_p$ of $\overline{\mathbb{Q}_p}$ turns out to be algebraically closed. By analogy, this field is called the field of p-adic complex numbers. The fields $\mathbb{R}$, $\mathbb{Q}_p$, and their finite field extensions, including $\mathbb{C}$, are called local fields.
https://en.wikipedia.org/wiki/Complex_number
In mathematics, Euler's identity[note 1] (also known as Euler's equation) is the equality $e^{i\pi} + 1 = 0$, where e is Euler's number (the base of natural logarithms), i is the imaginary unit satisfying i² = −1, and π is the ratio of a circle's circumference to its diameter. Euler's identity is named after the Swiss mathematician Leonhard Euler. It is a special case of Euler's formula $e^{ix} = \cos x + i\sin x$ when evaluated for x = π. Euler's identity is considered an exemplar of mathematical beauty, as it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used in a proof[3][4] that π is transcendental, which implies the impossibility of squaring the circle. Euler's identity is often cited as an example of deep mathematical beauty.[5] Three of the basic arithmetic operations occur exactly once each: addition, multiplication, and exponentiation. The identity also links five fundamental mathematical constants:[6] the numbers 0, 1, π, e, and i. The equation is often given in the form of an expression set equal to zero, which is common practice in several areas of mathematics. Stanford University mathematics professor Keith Devlin has said, "like a Shakespearean sonnet that captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler's equation reaches down into the very depths of existence".[7] Paul Nahin, a professor emeritus at the University of New Hampshire who wrote a book dedicated to Euler's formula and its applications in Fourier analysis, said Euler's identity is "of exquisite beauty".[8] Mathematics writer Constance Reid has said that Euler's identity is "the most famous formula in all mathematics".[9] Benjamin Peirce, a 19th-century American philosopher, mathematician, and professor at Harvard University, after proving Euler's identity during a lecture, said that it "is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth".[10] A 1990 poll of readers by The Mathematical Intelligencer named Euler's
identity the "most beautiful theorem in mathematics".[11] In a 2004 poll of readers by Physics World, Euler's identity tied with Maxwell's equations (of electromagnetism) as the "greatest equation ever".[12] At least three books in popular mathematics have been published about Euler's identity.

Euler's identity asserts that {\displaystyle e^{i\pi }} is equal to −1. The expression {\displaystyle e^{i\pi }} is a special case of the expression {\displaystyle e^{z}}, where z is any complex number. In general, {\displaystyle e^{z}} is defined for complex z by extending one of the definitions of the exponential function from real exponents to complex exponents. For example, one common definition is {\displaystyle e^{z}=\lim _{n\to \infty }\left(1+{\tfrac {z}{n}}\right)^{n}.} Euler's identity therefore states that the limit, as n approaches infinity, of {\displaystyle (1+{\tfrac {i\pi }{n}})^{n}} is equal to −1. This limit is illustrated in the animation to the right.

Euler's identity is a special case of Euler's formula, which states that for any real number x, {\displaystyle e^{ix}=\cos x+i\sin x,} where the inputs of the trigonometric functions sine and cosine are given in radians. In particular, when x = π, {\displaystyle e^{i\pi }=\cos \pi +i\sin \pi .} Since {\displaystyle \cos \pi =-1} and {\displaystyle \sin \pi =0,} it follows that {\displaystyle e^{i\pi }=-1,} which yields Euler's identity: {\displaystyle e^{i\pi }+1=0.}

Any complex number {\displaystyle z=x+iy} can be represented by the point {\displaystyle (x,y)} on the complex plane. This point can also be represented in polar coordinates as {\displaystyle (r,\theta )}, where r is the absolute value of z (distance from the origin), and {\displaystyle \theta } is the argument of z (angle counterclockwise from the positive x-axis). By the definitions of sine and cosine, this point has cartesian coordinates of {\displaystyle (r\cos \theta ,r\sin \theta )}, implying that {\displaystyle z=r(\cos \theta +i\sin \theta )}. According to Euler's formula, this is equivalent to saying {\displaystyle z=re^{i\theta }}. Euler's identity says that {\displaystyle -1=e^{i\pi }}.
Since {\displaystyle e^{i\pi }} is {\displaystyle re^{i\theta }} for r = 1 and θ = π, this can be interpreted as a fact about the number −1 on the complex plane: its distance from the origin is 1, and its angle from the positive x-axis is π radians.

Additionally, when any complex number z is multiplied by {\displaystyle e^{i\theta }}, it has the effect of rotating z counterclockwise by an angle of θ on the complex plane. Since multiplication by −1 reflects a point across the origin, Euler's identity can be interpreted as saying that rotating any point π radians around the origin has the same effect as reflecting the point across the origin. Similarly, setting θ equal to 2π yields the related equation {\displaystyle e^{2\pi i}=1,} which can be interpreted as saying that rotating any point by one turn around the origin returns it to its original position.

Euler's identity is also a special case of the more general identity that the nth roots of unity, for n > 1, add up to 0: {\displaystyle \sum _{k=0}^{n-1}e^{2\pi ik/n}=0.} Euler's identity is the case where n = 2.

A similar identity also applies to the quaternion exponential: let {i, j, k} be the basis quaternions; then {\displaystyle e^{i\pi }=e^{j\pi }=e^{k\pi }=-1.} More generally, let q be a quaternion with a zero real part and a norm equal to 1; that is, {\displaystyle q=ai+bj+ck,} with {\displaystyle a^{2}+b^{2}+c^{2}=1.} Then one has {\displaystyle e^{q\pi }+1=0.} The same formula applies to octonions with a zero real part and a norm equal to 1. These formulas are a direct generalization of Euler's identity, since i and −i are the only complex numbers with a zero real part and a norm (absolute value) equal to 1.
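These statements can be verified numerically. The sketch below (plain Python using only the standard library; the values n = 10⁶ for the limit and n = 5 for the roots of unity are arbitrary illustrative choices) checks the identity itself, the limit definition, and the roots-of-unity generalization:

```python
import cmath

# Euler's identity: e^{i*pi} + 1 is zero up to floating-point rounding.
identity_residual = cmath.exp(1j * cmath.pi) + 1

# Limit definition: (1 + i*pi/n)^n approaches -1 as n grows.
limit_approx = (1 + 1j * cmath.pi / 10**6) ** 10**6

# The nth roots of unity e^{2*pi*i*k/n} sum to 0 for any n > 1 (here n = 5).
roots_sum = sum(cmath.exp(2j * cmath.pi * k / 5) for k in range(5))

# Multiplying by e^{i*pi} rotates a point by pi radians, i.e. reflects it
# through the origin: 3 + 4j goes to (approximately) -3 - 4j.
reflected = (3 + 4j) * cmath.exp(1j * cmath.pi)
```

Each residual is zero up to floating-point rounding, except the finite-n limit approximation, whose error shrinks like π²/(2n).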
Euler's identity is a direct result of Euler's formula, published in his monumental 1748 work of mathematical analysis, Introductio in analysin infinitorum,[16] but it is questionable whether the particular concept of linking five fundamental constants in a compact form can be attributed to Euler himself, as he may never have expressed it.[17]

Robin Wilson writes:[18]

We've seen how [Euler's identity] can easily be deduced from results of Johann Bernoulli and Roger Cotes, but that neither of them seem to have done so. Even Euler does not seem to have written it down explicitly—and certainly it doesn't appear in any of his publications—though he must surely have realized that it follows immediately from his identity [i.e. Euler's formula], e^{ix} = cos x + i sin x. Moreover, it seems to be unknown who first stated the result explicitly.
https://en.wikipedia.org/wiki/Euler%27s_identity
In integral calculus, Euler's formula for complex numbers may be used to evaluate integrals involving trigonometric functions. Using Euler's formula, any trigonometric function may be written in terms of the complex exponential functions {\displaystyle e^{ix}} and {\displaystyle e^{-ix}} and then integrated. This technique is often simpler and faster than using trigonometric identities or integration by parts, and is sufficiently powerful to integrate any rational expression involving trigonometric functions.[1]

Euler's formula states that[2] {\displaystyle e^{ix}=\cos x+i\sin x.} Substituting −x for x gives the equation {\displaystyle e^{-ix}=\cos x-i\sin x,} because cosine is an even function and sine is odd. These two equations can be solved for the sine and cosine to give {\displaystyle \cos x={\tfrac {e^{ix}+e^{-ix}}{2}}} and {\displaystyle \sin x={\tfrac {e^{ix}-e^{-ix}}{2i}}.}

Consider the integral {\displaystyle \int \cos ^{2}x\,dx.} The standard approach to this integral is to use a half-angle formula to simplify the integrand. We can use Euler's identity instead: {\displaystyle \cos ^{2}x=\left({\tfrac {e^{ix}+e^{-ix}}{2}}\right)^{2}={\tfrac {e^{2ix}+2+e^{-2ix}}{4}}.} At this point, it would be possible to change back to real numbers using the formula e^{2ix} + e^{−2ix} = 2 cos 2x. Alternatively, we can integrate the complex exponentials and not change back to trigonometric functions until the end.

As a second example, consider an integral of higher powers of sine and cosine. Such an integral would be extremely tedious to solve using trigonometric identities, but using Euler's identity it becomes relatively painless: after expanding the powers of the complex exponentials, we can either integrate directly, or we can first change the integrand to 2 cos 6x − 4 cos 4x + 2 cos 2x and continue from there. Either method gives the same result.

In addition to Euler's identity, it can be helpful to make judicious use of the real parts of complex expressions. For example, when the integrand contains cos x: since cos x is the real part of e^{ix}, the integral equals the real part of a corresponding integral of complex exponentials, which is easy to evaluate directly; taking its real part at the end gives the result. In general, this technique may be used to evaluate any fractions involving trigonometric functions.
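As a numerical sanity check of these exponential forms (a sketch in plain Python; the midpoint rule and the interval [0, π] are illustrative choices, not part of the article):

```python
import cmath
import math

def cos_from_exp(x: float) -> float:
    """cos x recovered from Euler's formula: (e^{ix} + e^{-ix}) / 2."""
    return ((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2).real

# Expanding cos^2 x = (e^{2ix} + 2 + e^{-2ix}) / 4 = (1 + cos 2x) / 2
# predicts that the integral of cos^2 x over [0, pi] is exactly pi/2.
# A midpoint-rule sum reproduces that value.
n = 100_000
h = math.pi / n
integral = sum(cos_from_exp((k + 0.5) * h) ** 2 for k in range(n)) * h
```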
For example, consider the integral of a rational expression in sin x and cos x. Using Euler's identity, the integrand becomes a function of {\displaystyle e^{ix}}, and if we now make the substitution {\displaystyle u=e^{ix}}, the result is the integral of a rational function of u. One may proceed using partial fraction decomposition.
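To see the substitution at work on a concrete case (a hypothetical example chosen here, not necessarily the one elided above): for the integrand 1/(a + cos x) with a > 1, writing cos x = (u + u⁻¹)/2 with u = e^{ix} leads, via partial fractions or residues, to the closed form 2π/√(a² − 1) for the integral over one full period. A numerical check in plain Python:

```python
import math

a = 2.0                  # any a > 1 works; 2.0 is an arbitrary choice
n = 200_000
h = 2 * math.pi / n

# Midpoint-rule approximation of the integral of 1/(a + cos x) over [0, 2*pi].
numeric = sum(1.0 / (a + math.cos((k + 0.5) * h)) for k in range(n)) * h

# Closed form obtained via u = e^{ix} and partial fractions.
closed_form = 2 * math.pi / math.sqrt(a * a - 1)
```

For a smooth periodic integrand like this one, the midpoint rule converges extremely fast, so the two values agree to near machine precision.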
https://en.wikipedia.org/wiki/Integration_using_Euler%27s_formula
The history of Lorentz transformations comprises the development of linear transformations forming the Lorentz group or Poincaré group preserving the Lorentz interval {\displaystyle -x_{0}^{2}+\cdots +x_{n}^{2}} and the Minkowski inner product {\displaystyle -x_{0}y_{0}+\cdots +x_{n}y_{n}}.

In mathematics, transformations equivalent to what was later known as Lorentz transformations in various dimensions were discussed in the 19th century in relation to the theory of quadratic forms, hyperbolic geometry, Möbius geometry, and sphere geometry, which is connected to the fact that the group of motions in hyperbolic space, the Möbius group or projective special linear group, and the Laguerre group are isomorphic to the Lorentz group.

In physics, Lorentz transformations became known at the beginning of the 20th century, when it was discovered that they exhibit the symmetry of Maxwell's equations. Subsequently, they became fundamental to all of physics, because they formed the basis of special relativity, in which they exhibit the symmetry of Minkowski spacetime, making the speed of light invariant between different inertial frames. They relate the spacetime coordinates of two arbitrary inertial frames of reference with constant relative speed v. In one frame, the position of an event is given by x, y, z and time t, while in the other frame the same event has coordinates x′, y′, z′ and t′.

Using the coefficients of a symmetric matrix A, the associated bilinear form, and a linear transformation in terms of a transformation matrix g, the Lorentz transformation is given if the condition {\displaystyle g^{\mathrm {T} }\cdot A\cdot g=A} is satisfied, so that the quadratic and bilinear forms are preserved. It forms an indefinite orthogonal group called the Lorentz group O(1,n), while the case det g = +1 forms the restricted Lorentz group SO(1,n).
The quadratic form becomes the Lorentz interval in terms of an indefinite quadratic form of Minkowski space (being a special case of pseudo-Euclidean space), and the associated bilinear form becomes the Minkowski inner product.[1][2] Long before the advent of special relativity it was used in topics such as the Cayley–Klein metric, the hyperboloid model and other models of hyperbolic geometry, computations of elliptic functions and integrals, transformation of indefinite quadratic forms, squeeze mappings of the hyperbola, group theory, Möbius transformations, spherical wave transformation, transformation of the Sine-Gordon equation, biquaternion algebra, split-complex numbers, Clifford algebra, and others.

In special relativity, Lorentz transformations exhibit the symmetry of Minkowski spacetime by using a constant c as the speed of light, and a parameter v as the relative velocity between two inertial reference frames. Using the above conditions, the Lorentz transformation in 3+1 dimensions assumes the form: {\displaystyle x'=\gamma (x-vt),\quad y'=y,\quad z'=z,\quad t'=\gamma \left(t-{\tfrac {vx}{c^{2}}}\right),\quad \gamma ={\tfrac {1}{\sqrt {1-v^{2}/c^{2}}}}.}

In physics, analogous transformations were introduced by Voigt (1887), related to an incompressible medium, and by Heaviside (1888), Thomson (1889), Searle (1896) and Lorentz (1892, 1895), who analyzed Maxwell's equations. They were completed by Larmor (1897, 1900) and Lorentz (1899, 1904), and brought into their modern form by Poincaré (1905), who gave the transformation the name of Lorentz.[3] Eventually, Einstein (1905) showed in his development of special relativity that the transformations follow from the principle of relativity and constant light speed alone, by modifying the traditional concepts of space and time, without requiring a mechanical aether, in contradistinction to Lorentz and Poincaré.[4] Minkowski (1907–1908) used them to argue that space and time are inseparably connected as spacetime.
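The defining property, invariance of the quadratic form, can be checked directly for such a boost (a minimal sketch with c = 1 and arbitrary sample values):

```python
import math

def boost(t: float, x: float, v: float, c: float = 1.0):
    """One-dimensional Lorentz boost with relative speed v."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return g * (t - v * x / c**2), g * (x - v * t)

t, x = 2.0, 1.5
tp, xp = boost(t, x, v=0.6)

# The Lorentz interval -t^2 + x^2 is the same in both frames.
interval_before = -t**2 + x**2
interval_after = -tp**2 + xp**2
```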
Regarding special representations of the Lorentz transformations:Minkowski (1907–1908)andSommerfeld (1909)used imaginary trigonometric functions,Frank (1909)andVarićak (1910)usedhyperbolic functions,Bateman and Cunningham (1909–1910)usedspherical wave transformations,Herglotz (1909–10)used Möbius transformations,Plummer (1910)andGruner (1921)used trigonometric Lorentz boosts,Ignatowski (1910)derived the transformations without light speed postulate,Noether (1910) and Klein (1910)as wellConway (1911) and Silberstein (1911)used Biquaternions,Ignatowski (1910/11), Herglotz (1911), and othersused vector transformations valid in arbitrary directions,Borel (1913–14)used Cayley–Hermite parameter, Woldemar Voigt(1887)[R 1]developed a transformation in connection with theDoppler effectand an incompressible medium, being in modern notation:[5][6] If the right-hand sides of his equations are multiplied by γ they are the modern Lorentz transformation. In Voigt's theory the speed of light is invariant, but his transformations mix up a relativistic boost together with a rescaling of space-time. Optical phenomena in free space arescale,conformal, andLorentz invariant, so the combination is invariant too.[6]For instance, Lorentz transformations can be extended by using factorl{\displaystyle l}:[R 2] l=1/γ gives the Voigt transformation,l=1 the Lorentz transformation. But scale transformations are not a symmetry of all the laws of nature, only of electromagnetism, so these transformations cannot be used to formulate aprinciple of relativityin general. It was demonstrated by Poincaré and Einstein that one has to setl=1 in order to make the above transformation symmetric and to form a group as required by the relativity principle, therefore the Lorentz transformation is the only viable choice. Voigt sent his 1887 paper to Lorentz in 1908,[7]and that was acknowledged in 1909: In a paper "Über das Doppler'sche Princip", published in 1887 (Gött. Nachrichten, p. 
41) and which to my regret has escaped my notice all these years, Voigt has applied to equations of the form (7) (§ 3 of this book) [namelyΔΨ−1c2∂2Ψ∂t2=0{\displaystyle \Delta \Psi -{\tfrac {1}{c^{2}}}{\tfrac {\partial ^{2}\Psi }{\partial t^{2}}}=0}] a transformation equivalent to the formulae (287) and (288) [namelyx′=γl(x−vt),y′=ly,z′=lz,t′=γl(t−vc2x){\displaystyle x^{\prime }=\gamma l\left(x-vt\right),\ y^{\prime }=ly,\ z^{\prime }=lz,\ t^{\prime }=\gamma l\left(t-{\tfrac {v}{c^{2}}}x\right)}]. The idea of the transformations used above (and in § 44) might therefore have been borrowed from Voigt and the proof that it does not alter the form of the equations for thefreeether is contained in his paper.[R 3] AlsoHermann Minkowskisaid in 1908 that the transformations which play the main role in the principle of relativity were first examined by Voigt in 1887. Voigt responded in the same paper by saying that his theory was based on an elastic theory of light, not an electromagnetic one. However, he concluded that some results were actually the same.[R 4] In 1888,Oliver Heaviside[R 5]investigated the properties ofcharges in motionaccording to Maxwell's electrodynamics. 
He calculated, among other things, anisotropies in the electric field of moving bodies represented by this formula:[8] Consequently,Joseph John Thomson(1889)[R 6]found a way to substantially simplify calculations concerning moving charges by using the following mathematical transformation (like other authors such as Lorentz or Larmor, also Thomson implicitly used theGalilean transformationz-vtin his equation[9]): Thereby,inhomogeneous electromagnetic wave equationsare transformed into aPoisson equation.[9]Eventually,George Frederick Charles Searle[R 7]noted in (1896) that Heaviside's expression leads to a deformation of electric fields which he called "Heaviside-Ellipsoid" ofaxial ratio In order to explain theaberration of lightand the result of theFizeau experimentin accordance withMaxwell's equations, Lorentz in 1892 developed a model ("Lorentz ether theory") in which the aether is completely motionless, and the speed of light in the aether is constant in all directions. In order to calculate the optics of moving bodies, Lorentz introduced the following quantities to transform from the aether system into a moving system (it's unknown whether he was influenced by Voigt, Heaviside, and Thomson)[R 8][10] wherex*is theGalilean transformationx-vt. Except the additional γ in the time transformation, this is the complete Lorentz transformation.[10]Whiletis the "true" time for observers resting in the aether,t′is an auxiliary variable only for calculating processes for moving systems. It is also important that Lorentz and later also Larmor formulated this transformation in two steps. At first an implicit Galilean transformation, and later the expansion into the "fictitious" electromagnetic system with the aid of the Lorentz transformation. 
In order to explain the negative result of theMichelson–Morley experiment, he (1892b)[R 9]introduced the additional hypothesis that also intermolecular forces are affected in a similar way and introducedlength contractionin his theory (without proof as he admitted). The same hypothesis had been made previously byGeorge FitzGeraldin 1889 based on Heaviside's work. While length contraction was a real physical effect for Lorentz, he considered the time transformation only as a heuristic working hypothesis and a mathematical stipulation. In 1895, Lorentz further elaborated on his theory and introduced the "theorem of corresponding states". This theorem states that a moving observer (relative to the ether) in his "fictitious" field makes the same observations as a resting observers in his "real" field for velocities to first order inv/c. Lorentz showed that the dimensions of electrostatic systems in the ether and a moving frame are connected by this transformation:[R 10] For solving optical problems Lorentz used the following transformation, in which the modified time variable was called "local time" (German:Ortszeit) by him:[R 11] With this concept Lorentz could explain theDoppler effect, theaberration of light, and theFizeau experiment.[11] In 1897, Larmor extended the work of Lorentz and derived the following transformation[R 12] Larmor noted that if it is assumed that the constitution of molecules is electrical then the FitzGerald–Lorentz contraction is a consequence of this transformation, explaining theMichelson–Morley experiment. 
It's notable that Larmor was the first who recognized that some sort oftime dilationis a consequence of this transformation as well, because "individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio 1/γ".[12][13]Larmor wrote his electrodynamical equations and transformations neglecting terms of higher order than(v/c)2– when his 1897 paper was reprinted in 1929, Larmor added the following comment in which he described how they can be made valid to all orders ofv/c:[R 13] Nothing need be neglected: the transformation isexactifv/c2is replaced byεv/c2in the equations and also in the change following fromttot′, as is worked out inAether and Matter(1900), p. 168, and as Lorentz found it to be in 1904, thereby stimulating the modern schemes of intrinsic relational relativity. In line with that comment, in his book Aether and Matter published in 1900, Larmor used a modified local timet″=t′-εvx′/c2instead of the 1897 expressiont′=t-vx/c2by replacingv/c2withεv/c2, so thatt″is now identical to the one given by Lorentz in 1892, which he combined with a Galilean transformation for thex′, y′, z′, t′coordinates:[R 14] Larmor knew that the Michelson–Morley experiment was accurate enough to detect an effect of motion depending on the factor(v/c)2, and so he sought the transformations which were "accurate to second order" (as he put it). Thus he wrote the final transformations (wherex′=x-vtandt″as given above) as:[R 15] by which he arrived at the complete Lorentz transformation. Larmor showed that Maxwell's equations were invariant under this two-step transformation, "to second order inv/c" – it was later shown by Lorentz (1904) and Poincaré (1905) that they are indeed invariant under this transformation to all orders inv/c. Larmor gave credit to Lorentz in two papers published in 1904, in which he used the term "Lorentz transformation" for Lorentz's first order transformations of coordinates and field configurations: p. 
583: [..] Lorentz's transformation for passing from the field of activity of a stationary electrodynamic material system to that of one moving with uniform velocity of translation through the aether.p. 585: [..] the Lorentz transformation has shown us what is not so immediately obvious [..][R 16]p. 622: [..] the transformation first developed by Lorentz: namely, each point in space is to have its own origin from which time is measured, its "local time" in Lorentz's phraseology, and then the values of the electric and magnetic vectors [..] at all points in the aether between the molecules in the system at rest, are the same as those of the vectors [..] at the corresponding points in the convected system at the same local times.[R 17] Also Lorentz extended his theorem of corresponding states in 1899. First he wrote a transformation equivalent to the one from 1892 (again,x* must be replaced byx-vt):[R 18] Then he introduced a factor ε of which he said he has no means of determining it, and modified his transformation as follows (where the above value oft′has to be inserted):[R 19] This is equivalent to the complete Lorentz transformation when solved forx″andt″and with ε=1. Like Larmor, Lorentz noticed in 1899[R 20]also some sort of time dilation effect in relation to the frequency of oscillating electrons"that inSthe time of vibrations bekεtimes as great as inS0", whereS0is the aether frame.[14] In 1904 he rewrote the equations in the following form by settingl=1/ε (again,x* must be replaced byx-vt):[R 21] Under the assumption thatl=1whenv=0, he demonstrated thatl=1must be the case at all velocities, therefore length contraction can only arise in the line of motion. So by setting the factorlto unity, Lorentz's transformations now assumed the same form as Larmor's and are now completed. Unlike Larmor, who restricted himself to show the covariance of Maxwell's equations to second order, Lorentz tried to widen its covariance to all orders inv/c. 
He also derived the correct formulas for the velocity dependence ofelectromagnetic mass, and concluded that the transformation formulas must apply to all forces of nature, not only electrical ones.[R 22]However, he didn't achieve full covariance of the transformation equations for charge density and velocity.[15]When the 1904 paper was reprinted in 1913, Lorentz therefore added the following remark:[16] One will notice that in this work the transformation equations of Einstein’s Relativity Theory have not quite been attained. [..] On this circumstance depends the clumsiness of many of the further considerations in this work. Lorentz's 1904 transformation was cited and used byAlfred Buchererin July 1904:[R 23] or byWilhelm Wienin July 1904:[R 24] or byEmil Cohnin November 1904 (setting the speed of light to unity):[R 25] or byRichard Gansin February 1905:[R 26] Neither Lorentz or Larmor gave a clear physical interpretation of the origin of local time. However,Henri Poincaréin 1900 commented on the origin of Lorentz's "wonderful invention" of local time.[17]He remarked that it arose when clocks in a moving reference frame are synchronised by exchanging signals which are assumed to travel with the same speedc{\displaystyle c}in both directions, which lead to what is nowadays calledrelativity of simultaneity, although Poincaré's calculation does not involve length contraction or time dilation.[R 27]In order to synchronise the clocks here on Earth (thex*, t* frame) a light signal from one clock (at the origin) is sent to another (atx*), and is sent back. It's supposed that the Earth is moving with speedvin thex-direction (=x*-direction) in some rest system (x, t) (i.e.theluminiferous aethersystem for Lorentz and Larmor). The time of flight outwards is and the time of flight back is The elapsed time on the clock when the signal is returned isδta+δtband the timet*=(δta+δtb)/2is ascribed to the moment when the light signal reached the distant clock. 
In the rest frame the timet=δtais ascribed to that same instant. Some algebra gives the relation between the different time coordinates ascribed to the moment of reflection. Thus identical to Lorentz (1892). By dropping the factor γ2under the assumption thatv2c2≪1{\displaystyle {\tfrac {v^{2}}{c^{2}}}\ll 1}, Poincaré gave the resultt*=t-vx*/c2, which is the form used by Lorentz in 1895. Similar physical interpretations of local time were later given byEmil Cohn(1904)[R 28]andMax Abraham(1905).[R 29] On June 5, 1905 (published June 9) Poincaré formulated transformation equations which are algebraically equivalent to those of Larmor and Lorentz and gave them the modern form:[R 30] Apparently Poincaré was unaware of Larmor's contributions, because he only mentioned Lorentz and therefore used for the first time the name "Lorentz transformation".[18][19]Poincaré set the speed of light to unity, pointed out the group characteristics of the transformation by settingl=1, and modified/corrected Lorentz's derivation of the equations of electrodynamics in some details in order to fully satisfy the principle of relativity,i.e.making them fully Lorentz covariant.[20] In July 1905 (published in January 1906)[R 31]Poincaré showed in detail how the transformations and electrodynamic equations are a consequence of theprinciple of least action; he demonstrated in more detail the group characteristics of the transformation, which he calledLorentz group, and he showed that the combinationx2+y2+z2-t2is invariant. He noticed that the Lorentz transformation is merely a rotation in four-dimensional space about the origin by introducingct−1{\displaystyle ct{\sqrt {-1}}}as a fourth imaginary coordinate, and he used an early form offour-vectors. 
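Poincaré's clock-synchronisation argument above can be retraced numerically (a sketch with c = 1 and illustrative values for v and the clock distance, written here as xs for x*):

```python
c, v, xs = 1.0, 0.6, 2.0           # xs plays the role of x* in the text
g2 = 1.0 / (1.0 - v * v / c**2)    # gamma squared

dt_out = xs / (c - v)              # light chases the receding clock
dt_back = xs / (c + v)             # and returns against the motion
t_star = (dt_out + dt_back) / 2    # time ascribed to the reflection event
t = dt_out                         # rest-frame time of that same event

# Local time in the Lorentz (1892) form: t* = t - gamma^2 * v * xs / c^2.
local_time = t - g2 * v * xs / c**2
```

Dropping the γ² factor, as Poincaré did for v² ≪ c², recovers the 1895 form t* = t − vx*/c².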
He also formulated the velocity addition formula, which he had already derived in unpublished letters to Lorentz from May 1905:[R 32] On June 30, 1905 (published September 1905) Einstein published what is now calledspecial relativityand gave a new derivation of the transformation, which was based only on the principle of relativity and the principle of the constancy of the speed of light. While Lorentz considered "local time" to be a mathematical stipulation device for explaining the Michelson-Morley experiment, Einstein showed that the coordinates given by the Lorentz transformation were in fact the inertial coordinates of relatively moving frames of reference. For quantities of first order inv/cthis was also done by Poincaré in 1900, while Einstein derived the complete transformation by this method. Unlike Lorentz and Poincaré who still distinguished between real time in the aether and apparent time for moving observers, Einstein showed that the transformations applied to the kinematics of moving frames.[21][22][23] The notation for this transformation is equivalent to Poincaré's of 1905, except that Einstein didn't set the speed of light to unity:[R 33] Einstein also defined the velocity addition formula:[R 34] and the light aberration formula:[R 35] The work on the principle of relativity by Lorentz, Einstein,Planck, together with Poincaré's four-dimensional approach, were further elaborated and combined with thehyperboloid modelbyHermann Minkowskiin 1907 and 1908.[R 36][R 37]Minkowski particularly reformulated electrodynamics in a four-dimensional way (Minkowski spacetime).[24]For instance, he wrotex, y, z, itin the formx1, x2, x3, x4. 
By defining ψ as the angle of rotation around the z-axis, the Lorentz transformation assumes the form (with c = 1).[R 38] Even though Minkowski used the imaginary angle iψ, he for once[R 38] directly used the hyperbolic tangent (tangens hyperbolicus) in the equation for velocity. Minkowski's expression can also be written as ψ = atanh(q) and was later called rapidity. He also wrote the Lorentz transformation in matrix form.[R 39] As a graphical representation of the Lorentz transformation he introduced the Minkowski diagram, which became a standard tool in textbooks and research articles on relativity.[R 40]

Using an imaginary rapidity, as Minkowski had, Arnold Sommerfeld (1909) formulated the Lorentz boost and the relativistic velocity addition in terms of trigonometric functions and the spherical law of cosines.[R 41] Hyperbolic functions were used by Philipp Frank (1909), who derived the Lorentz transformation using ψ as rapidity.[R 42]

In line with Sophus Lie's (1871) research on the relation between sphere transformations with an imaginary radius coordinate and 4D conformal transformations, it was pointed out by Bateman and Cunningham (1909–1910) that by setting u = ict as the imaginary fourth coordinate one can produce spacetime conformal transformations. Not only the quadratic form {\displaystyle \lambda \left(dx^{2}+dy^{2}+dz^{2}+du^{2}\right)}, but also Maxwell's equations are covariant with respect to these transformations, irrespective of the choice of λ. These variants of conformal or Lie sphere transformations were called spherical wave transformations by Bateman.[R 43][R 44] However, this covariance is restricted to certain areas such as electrodynamics, whereas the totality of natural laws in inertial frames is covariant under the Lorentz group.[R 45] In particular, by setting λ = 1 the Lorentz group SO(1,3) can be seen as a 10-parameter subgroup of the 15-parameter spacetime conformal group Con(1,3).
Bateman (1910–12)[25]also alluded to the identity between theLaguerre inversionand the Lorentz transformations. In general, the isomorphism between the Laguerre group and the Lorentz group was pointed out byÉlie Cartan(1912, 1915–55),[R 46]Henri Poincaré(1912–21)[R 47]and others. FollowingFelix Klein(1889–1897) and Fricke & Klein (1897) concerning the Cayley absolute, hyperbolic motion and its transformation,Gustav Herglotz(1909–10) classified the one-parameter Lorentz transformations as loxodromic, hyperbolic, parabolic and elliptic. The general case (on the left) and the hyperbolic case equivalent to Lorentz transformations or squeeze mappings are as follows:[R 48] FollowingSommerfeld (1909), hyperbolic functions were used byVladimir Varićakin several papers starting from 1910, who represented the equations of special relativity on the basis ofhyperbolic geometryin terms of Weierstrass coordinates. For instance, by settingl=ctandv/c=tanh(u)withuas rapidity he wrote the Lorentz transformation:[R 49] and showed the relation of rapidity to theGudermannian functionand theangle of parallelism:[R 49] He also related the velocity addition to thehyperbolic law of cosines:[R 50] Subsequently, other authors such asE. T. Whittaker(1910) orAlfred Robb(1911, who coined the name rapidity) used similar expressions, which are still used in modern textbooks. 
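The rapidity representation makes boost composition additive. The sketch below (plain Python, c = 1, arbitrary sub-light speeds) checks that adding rapidities reproduces Einstein's velocity addition:

```python
import math

u, v = 0.6, 0.7                        # arbitrary sub-light speeds, c = 1

# Rapidities psi = atanh(speed) simply add under composition of boosts.
psi_total = math.atanh(u) + math.atanh(v)
combined = math.tanh(psi_total)

# Einstein's velocity addition gives the same composite speed,
# which always stays below c.
direct = (u + v) / (1 + u * v)
```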
Henry Crozier Keating Plummer (1910) defined the Lorentz boost in terms of trigonometric functions.[R 51]

While earlier derivations and formulations of the Lorentz transformation relied from the outset on optics, electrodynamics, or the invariance of the speed of light, Vladimir Ignatowski (1910) showed that it is possible to use the principle of relativity (and related group theoretical principles) alone in order to derive the transformation between two inertial frames, containing an undetermined constant n.[R 52][R 53] The variable n can be seen as a space-time constant whose value has to be determined by experiment or taken from a known physical law such as electrodynamics. For that purpose, Ignatowski used the above-mentioned Heaviside ellipsoid representing a contraction of electrostatic fields by x/γ in the direction of motion. It can be seen that this is only consistent with Ignatowski's transformation when n = 1/c², resulting in p = γ and the Lorentz transformation. With n = 0, no length changes arise and the Galilean transformation follows. Ignatowski's method was further developed and improved by Philipp Frank and Hermann Rothe (1911, 1912),[R 54] with various authors developing similar methods in subsequent years.[26]

Felix Klein (1908) described Cayley's (1854) 4D quaternion multiplications as "Drehstreckungen" (orthogonal substitutions in terms of rotations leaving invariant a quadratic form up to a factor), and pointed out that the modern principle of relativity as provided by Minkowski is essentially only the consequent application of such Drehstreckungen, even though he didn't provide details.[R 55]

In an appendix to Klein's and Sommerfeld's "Theory of the top" (1910), Fritz Noether showed how to formulate hyperbolic rotations using biquaternions with {\displaystyle \omega ={\sqrt {-1}}}, which he also related to the speed of light by setting ω² = −c².
He concluded that this is the principal ingredient for a rational representation of the group of Lorentz transformations:[R 56] Besides citing quaternion related standard works byArthur Cayley(1854), Noether referred to the entries in Klein's encyclopedia byEduard Study(1899) and the French version byÉlie Cartan(1908).[27]Cartan's version contains a description of Study'sdual numbers, Clifford's biquaternions (including the choiceω=−1{\displaystyle \omega ={\sqrt {-1}}}for hyperbolic geometry), and Clifford algebra, with references to Stephanos (1883), Buchheim (1884–85), Vahlen (1901–02) and others. Citing Noether, Klein himself published in August 1910 the following quaternion substitutions forming the group of Lorentz transformations:[R 57] or in March 1911[R 58] Arthur W. Conwayin February 1911 explicitly formulated quaternionic Lorentz transformations of various electromagnetic quantities in terms of velocity λ:[R 59] AlsoLudwik Silbersteinin November 1911[R 60]as well as in 1914,[28]formulated the Lorentz transformation in terms of velocityv: Silberstein cites Cayley (1854, 1855) and Study's encyclopedia entry (in the extended French version of Cartan in 1908), as well as the appendix of Klein's and Sommerfeld's book. 
Vladimir Ignatowski (1910, published 1911) showed how to reformulate the Lorentz transformation in order to allow for arbitrary velocities and coordinates.[R 61] Gustav Herglotz (1911)[R 62] also showed how to formulate the transformation so as to allow for arbitrary velocities and coordinates v = (vx, vy, vz) and r = (x, y, z). This was simplified using vector notation by Ludwik Silberstein (1911 on the left, 1914 on the right).[R 63] Equivalent formulas were also given by Wolfgang Pauli (1921),[29] with Erwin Madelung (1922) providing the matrix form.[30] These formulas were called "general Lorentz transformation without rotation" by Christian Møller (1952),[31] who in addition gave an even more general Lorentz transformation in which the Cartesian axes have different orientations, using a rotation operator 𝔇. In this case, v′ = (v′x, v′y, v′z) is not equal to −v = (−vx, −vy, −vz); instead the relation v′ = −𝔇v holds. Émile Borel (1913) started by demonstrating Euclidean motions using Euler–Rodrigues parameters in three dimensions, and Cayley's (1846) parameters in four dimensions. He then demonstrated the connection to indefinite quadratic forms expressing hyperbolic motions and Lorentz transformations, in three dimensions[R 64] and in four dimensions.[R 65] In order to simplify the graphical representation of Minkowski space, Paul Gruner (1921), with the aid of Josef Sauter, developed what are now called Loedel diagrams, using the following relations:[R 66] In another paper Gruner used the alternative relations.[R 67]
https://en.wikipedia.org/wiki/History_of_Lorentz_transformations
In mathematics and physics, many topics are named in honor of Swiss mathematician Leonhard Euler (1707–1783), who made many important discoveries and innovations. Many of these items named after Euler include their own unique function, equation, formula, identity, number (single or sequence), or other mathematical entity. Many of these entities have been given simple yet ambiguous names such as Euler's function, Euler's equation, and Euler's formula. Euler's work touched upon so many fields that he is often the earliest written reference on a given matter. In an effort to avoid naming everything after Euler, some discoveries and theorems are attributed to the first person to have proved them after Euler.[1][2] Usually, Euler's equation refers to one of (or a set of) differential equations (DEs). It is customary to classify them into ODEs and PDEs. Otherwise, Euler's equation may refer to a non-differential equation.
https://en.wikipedia.org/wiki/List_of_topics_named_after_Leonhard_Euler
In mathematics, the Pythagorean theorem or Pythagoras' theorem is a fundamental relation in Euclidean geometry between the three sides of a right triangle. It states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides. The theorem can be written as an equation relating the lengths of the sides a, b and the hypotenuse c, sometimes called the Pythagorean equation:[1] a² + b² = c². The theorem is named for the Greek philosopher Pythagoras, born around 570 BC. The theorem has been proved numerous times by many different methods – possibly the most for any mathematical theorem. The proofs are diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years. When Euclidean space is represented by a Cartesian coordinate system in analytic geometry, Euclidean distance satisfies the Pythagorean relation: the squared distance between two points equals the sum of squares of the difference in each coordinate between the points. The theorem can be generalized in various ways: to higher-dimensional spaces, to spaces that are not Euclidean, to objects that are not right triangles, and to objects that are not triangles at all but n-dimensional solids. In one rearrangement proof, two squares are used whose sides have a measure of a + b and which each contain four right triangles whose sides are a, b and c, with the hypotenuse being c. In the square on the right side, the triangles are placed such that the corners of the square correspond to the corners of the right angle in the triangles, forming a square in the center whose sides are length c. Each outer square has an area of (a + b)² as well as 2ab + c², with 2ab representing the total area of the four triangles. Within the big square on the left side, the four triangles are moved to form two similar rectangles with sides of length a and b.
These rectangles in their new position have now delineated two new squares: one with side length a formed in the bottom-left corner, and another with side length b formed in the top-right corner. In this new position, the left side now has a square of area (a + b)² as well as 2ab + a² + b². Since both squares have the area (a + b)², it follows that the other expressions for the area must also equal each other, so that 2ab + c² = 2ab + a² + b². With the area of the four triangles removed from both sides of the equation, what remains is a² + b² = c².[2] In another proof, the rectangles in the second box can also be placed such that both have one corner that corresponds to consecutive corners of the square. In this way they again form two squares, this time in consecutive corners, with areas a² and b², which again leads to a second square with the area 2ab + a² + b². English mathematician Sir Thomas Heath gives this proof in his commentary on Proposition I.47 in Euclid's Elements, and mentions the proposals of German mathematicians Carl Anton Bretschneider and Hermann Hankel that Pythagoras may have known this proof.
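The two area tallies behind this rearrangement can be checked numerically. A minimal sketch (the function name and the 3-4-5 example are illustrative, not from the source): both dissections of the same (a + b)-sided square must account for the same total area.

```python
import math

def rearrangement_identity(a, b):
    """For a right triangle with legs a, b and hypotenuse c, the two
    dissections of the (a+b) x (a+b) square give the same total area:
    (a+b)^2 == 2ab + c^2   (four triangles plus the central c-square)
    (a+b)^2 == 2ab + a^2 + b^2   (two rectangles plus the a- and b-squares)
    """
    c = math.hypot(a, b)
    outer = (a + b) ** 2
    return (math.isclose(outer, 2*a*b + c**2),
            math.isclose(outer, 2*a*b + a**2 + b**2))

print(rearrangement_identity(3, 4))  # both dissections fill the outer square
```

Subtracting the shared 2ab from the two right-hand sides is exactly the cancellation step described above.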
Heath himself favors a different proposal for a Pythagorean proof, but acknowledges from the outset of his discussion "that the Greek literature which we possess belonging to the first five centuries after Pythagoras contains no statement specifying this or any other particular great geometric discovery to him."[3] Recent scholarship has cast increasing doubt on any sort of role for Pythagoras as a creator of mathematics, although debate about this continues.[4] The theorem can be proved algebraically using four copies of the same triangle arranged symmetrically around a square with side c, as shown in the lower part of the diagram.[5] This results in a larger square, with side a + b and area (a + b)². The four triangles and the square with side c must have the same area as the larger square, giving (a + b)² = 4(ab/2) + c², which simplifies to a² + b² = c². A similar proof uses four copies of a right triangle with sides a, b and c, arranged inside a square with side c as in the top half of the diagram.[6] The triangles are similar with area ab/2, while the small square has side b − a and area (b − a)². The area of the large square is therefore (b − a)² + 4(ab/2). But this is a square with side c and area c², so c² = (b − a)² + 2ab = a² + b². This theorem may have more known proofs than any other (the law of quadratic reciprocity being another contender for that distinction); the book The Pythagorean Proposition contains 370 proofs.[7] This proof is based on the proportionality of the sides of three similar triangles, that is, upon the fact that the ratio of any two corresponding sides of similar triangles is the same regardless of the size of the triangles. Let ABC represent a right triangle, with the right angle located at C, as shown on the figure. Draw the altitude from point C, and call H its intersection with the side AB. Point H divides the length of the hypotenuse c into parts d and e.
The new triangle, ACH, is similar to triangle ABC, because they both have a right angle (by definition of the altitude), and they share the angle at A, meaning that the third angle will be the same in both triangles as well, marked as θ in the figure. By similar reasoning, the triangle CBH is also similar to ABC. The proof of similarity of the triangles requires the triangle postulate: the sum of the angles in a triangle is two right angles, which is equivalent to the parallel postulate. Similarity of the triangles leads to the equality of ratios of corresponding sides: a/c = e/a and b/c = d/b. The first result equates the cosines of the angles θ, whereas the second result equates their sines. These ratios can be written as a² = c·e and b² = c·d. Summing these two equalities results in a² + b² = c·e + c·d = c·(d + e), which, after simplification, demonstrates the Pythagorean theorem: a² + b² = c². The role of this proof in history is the subject of much speculation. The underlying question is why Euclid did not use this proof, but invented another. One conjecture is that the proof by similar triangles involved a theory of proportions, a topic not discussed until later in the Elements, and that the theory of proportions needed further development at that time.[8] Albert Einstein gave a proof by dissection in which the pieces do not need to be moved.[9] Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and two similar shapes that each include one of the two legs instead of the hypotenuse (see Similar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. The dissection consists of dropping a perpendicular from the vertex of the right angle of the triangle to the hypotenuse, thus splitting the whole triangle into two parts. Those two parts have the same shape as the original right triangle, and have the legs of the original triangle as their hypotenuses, and the sum of their areas is that of the original triangle.
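The similar-triangle relations a² = c·e and b² = c·d can be verified with coordinates, computing the foot of the altitude directly rather than assuming the result. A sketch (coordinate placement and names are illustrative):

```python
import math

def check_similar_triangle_proof(a, b):
    """Right angle at C = (0, 0), legs along the axes: A = (b, 0), B = (0, a).
    H is the foot of the altitude from C onto the hypotenuse AB, splitting
    c into d (next to leg b) and e (next to leg a).  Similarity gives
    a^2 = c*e and b^2 = c*d, which sum to c^2."""
    A, B = (b, 0.0), (0.0, a)
    c = math.dist(A, B)
    ax, ay = A
    bx, by = B
    # Orthogonal projection of C = (0,0) onto the line through A and B.
    t = (-ax * (bx - ax) - ay * (by - ay)) / c**2
    H = (ax + t * (bx - ax), ay + t * (by - ay))
    d = math.dist(A, H)
    e = math.dist(B, H)
    return (math.isclose(a**2, c * e),
            math.isclose(b**2, c * d),
            math.isclose(d + e, c))

print(check_similar_triangle_proof(3, 4))
```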
Because the ratio of the area of a right triangle to the square of its hypotenuse is the same for similar triangles, the relationship between the areas of the three triangles holds for the squares of the sides of the large triangle as well. In outline, here is how the proof in Euclid's Elements proceeds. The large square is divided into a left and a right rectangle. A triangle is constructed that has half the area of the left rectangle. Then another triangle is constructed that has half the area of the square on the left-most side. These two triangles are shown to be congruent, proving this square has the same area as the left rectangle. This argument is followed by a similar version for the right rectangle and the remaining square. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the areas of the other two squares. The details follow. Let A, B, C be the vertices of a right triangle, with a right angle at A. Drop a perpendicular from A to the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs. For the formal proof, four elementary lemmata are required. Each top square is then related to a triangle congruent with another triangle, related in turn to one of the two rectangles making up the lower square.[10] This proof, which appears in Euclid's Elements as Proposition 47 of Book 1, demonstrates that the area of the square on the hypotenuse is the sum of the areas of the other two squares.[12][13] This is quite distinct from the proof by similarity of triangles, which is conjectured to be the proof that Pythagoras used.[14][15] Another proof by rearrangement is given by the middle animation. A large square is formed with area c², from four identical right triangles with sides a, b and c, fitted around a small central square.
Then two rectangles are formed with sides a and b by moving the triangles. Combining the smaller square with these rectangles produces two squares of areas a² and b², which together must have the same area as the initial large square.[16] The third, rightmost image also gives a proof. The upper two squares are divided, as shown by the blue and green shading, into pieces that when rearranged can be made to fit in the lower square on the hypotenuse – or conversely, the large square can be divided as shown into pieces that fill the other two. This way of cutting one figure into pieces and rearranging them to get another figure is called dissection. It shows that the area of the large square equals that of the two smaller ones.[17] As shown in the accompanying animation, area-preserving shear mappings and translations can transform the squares on the sides adjacent to the right angle onto the square on the hypotenuse, together covering it exactly.[18] Each shear leaves the base and height unchanged, thus leaving the area unchanged too. The translations also leave the area unchanged, as they do not alter the shapes at all. Each square is first sheared into a parallelogram, and then into a rectangle which can be translated onto one section of the square on the hypotenuse. A related proof was published by U.S. President James A. Garfield before he was elected president, while he was a U.S. Representative.[19][20][21] Instead of a square it uses a trapezoid, which can be constructed from the square in the second of the above proofs by bisecting along a diagonal of the inner square, to give the trapezoid as shown in the diagram. The area of the trapezoid can be calculated to be half the area of the square, that is ½(a + b)². The inner square is similarly halved, and there are only two triangles, so the proof proceeds as above except for a factor of ½, which is removed by multiplying by two to give the result.
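Garfield's argument equates two ways of measuring the trapezoid's area: the trapezoid formula and the sum of its three triangular pieces. A quick numeric sketch (function name and sample legs are illustrative):

```python
import math

def garfield_trapezoid(a, b):
    """Garfield's trapezoid has parallel sides a and b and height a + b,
    so its area is (a + b)(a + b)/2.  It also decomposes into two right
    triangles of area ab/2 each and one isosceles right triangle of area
    c^2/2; equating the two area expressions yields a^2 + b^2 = c^2."""
    c = math.hypot(a, b)
    area_formula = 0.5 * (a + b) * (a + b)
    area_pieces = 2 * (0.5 * a * b) + 0.5 * c**2
    return math.isclose(area_formula, area_pieces)

print(garfield_trapezoid(3, 4))
```

Expanding ½(a + b)² = ab + ½c² and multiplying by two gives the theorem, matching the "factor of ½" remark above.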
One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employing calculus.[22][23][24] The triangle ABC is a right triangle, as shown in the upper part of the diagram, with BC the hypotenuse. At the same time the triangle lengths are measured as shown, with the hypotenuse of length y, the side AC of length x and the side AB of length a, as seen in the lower diagram part. If x is increased by a small amount dx by extending the side AC slightly to D, then y also increases by dy. These form two sides of a triangle, CDE, which (with E chosen so CE is perpendicular to the hypotenuse) is a right triangle approximately similar to ABC. Therefore, the ratios of their sides must be the same, that is: dy/dx = x/y. This can be rewritten as y dy = x dx, which is a differential equation that can be solved by direct integration, giving y² = x² + C. The constant can be deduced from x = 0, y = a to give the equation y² = x² + a². This is more of an intuitive proof than a formal one: it can be made more rigorous if proper limits are used in place of dx and dy. The converse of the theorem is also true:[25] given a triangle with sides of length a, b, and c, if a² + b² = c², then the angle between sides a and b is a right angle. For any three positive real numbers a, b, and c such that a² + b² = c², there exists a triangle with sides a, b and c as a consequence of the converse of the triangle inequality. This converse appears in Euclid's Elements (Book I, Proposition 48): "If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right."[26] It can be proved using the law of cosines or as follows: let ABC be a triangle with side lengths a, b, and c, with a² + b² = c². Construct a second triangle with sides of length a and b containing a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has length c = √(a² + b²), the same as the hypotenuse of the first triangle.
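The differential relation dy/dx = x/y from the calculus proof can also be integrated numerically instead of analytically; starting from y(0) = a, the integrated value should approach √(x² + a²). A sketch using a simple forward-Euler step (step count and names are illustrative):

```python
import math

def integrate_hypotenuse(a, x_end, steps=100000):
    """Numerically integrate dy/dx = x/y (the similar-triangle relation
    from the calculus proof) from y(0) = a up to x_end; the result should
    match the Pythagorean value sqrt(x_end^2 + a^2)."""
    y = a
    x = 0.0
    dx = x_end / steps
    for _ in range(steps):
        y += (x / y) * dx   # forward-Euler step of y dy = x dx
        x += dx
    return y

print(integrate_hypotenuse(3.0, 4.0), math.hypot(4.0, 3.0))
```

The small residual difference is the discretization error of the Euler steps, the numerical analogue of "proper limits in place of dx and dy".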
Since both triangles' sides are the same lengths a, b and c, the triangles are congruent and must have the same angles. Therefore, the angle between the sides of lengths a and b in the original triangle is a right angle. The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proved without assuming the Pythagorean theorem.[27][28] A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows. Let c be chosen to be the longest of the three sides and a + b > c (otherwise there is no triangle according to the triangle inequality). The following statements apply:[29] if a² + b² = c², the triangle is right; if a² + b² > c², it is acute; and if a² + b² < c², it is obtuse. Edsger W. Dijkstra has stated this proposition about acute, right, and obtuse triangles in this language: sgn(α + β − γ) = sgn(a² + b² − c²), where α is the angle opposite to side a, β is the angle opposite to side b, γ is the angle opposite to side c, and sgn is the sign function.[30] A Pythagorean triple has three positive integers a, b, and c, such that a² + b² = c². In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths.[1] Such a triple is commonly written (a, b, c). Some well-known examples are (3, 4, 5) and (5, 12, 13). A primitive Pythagorean triple is one in which a, b and c are coprime (the greatest common divisor of a, b and c is 1). There are many formulas for generating Pythagorean triples. Of these, Euclid's formula is the most well-known: given arbitrary positive integers m and n with m > n, the formula states that the integers a = m² − n², b = 2mn, c = m² + n² form a Pythagorean triple. Now consider a right triangle with sides a, b, c and altitude d (a line from the right angle, perpendicular to the hypotenuse c).
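Euclid's formula can be turned into a small generator of primitive triples. Note the standard primitivity refinement, which the formula statement above does not spell out: the triple is primitive exactly when m and n are coprime and not both odd. A sketch (function name and the bound are illustrative):

```python
import math

def primitive_triples(limit):
    """Generate primitive Pythagorean triples (a, b, c) with c <= limit via
    Euclid's formula a = m^2 - n^2, b = 2mn, c = m^2 + n^2, restricted to
    m > n > 0 with m, n coprime and of opposite parity (primitivity)."""
    triples = []
    m = 2
    while m * m + 1 <= limit:
        for n in range(1, m):
            if (m - n) % 2 == 1 and math.gcd(m, n) == 1:
                a, b, c = m*m - n*n, 2*m*n, m*m + n*n
                if c <= limit:
                    triples.append((min(a, b), max(a, b), c))
        m += 1
    return sorted(triples)

print(primitive_triples(30))
```

Dropping the coprimality/parity filter would still yield valid triples such as (6, 8, 10), just not primitive ones.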
The Pythagorean theorem gives a² + b² = c², while the inverse Pythagorean theorem relates the two legs a, b to the altitude d:[31] 1/a² + 1/b² = 1/d². The equation can be transformed to 1/(xz)² + 1/(yz)² = 1/(xy)², where x² + y² = z² for any non-zero real x, y, z. If a, b, d are to be integers, the smallest solution a > b > d is 20, 15, 12, obtained from the smallest Pythagorean triple 3, 4, 5. The reciprocal Pythagorean theorem is a special case of the optic equation 1/p + 1/q = 1/r in which the denominators are squares, and also of the case of a heptagonal triangle whose sides p, q, r are square numbers. One of the consequences of the Pythagorean theorem is that line segments whose lengths are incommensurable (so the ratio of which is not a rational number) can be constructed using a straightedge and compass. Pythagoras' theorem enables construction of incommensurable lengths because the hypotenuse of a triangle is related to the sides by the square root operation. The figure on the right shows how to construct line segments whose lengths are in the ratio of the square root of any positive integer.[32] Each triangle has a side (labeled "1") that is the chosen unit for measurement. In each right triangle, Pythagoras' theorem establishes the length of the hypotenuse in terms of this unit. If a hypotenuse is related to the unit by the square root of a positive integer that is not a perfect square, it is a realization of a length incommensurable with the unit, such as √2, √3, √5. For more detail, see Quadratic irrational. Incommensurable lengths conflicted with the Pythagorean school's concept of numbers as only whole numbers. The Pythagorean school dealt with proportions by comparison of integer multiples of a common subunit.[33] According to one legend, Hippasus of Metapontum (ca. 470 BC)
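The inverse (reciprocal) Pythagorean relation is easy to verify once the altitude is expressed as d = ab/c, which follows from equating the two area formulas ½ab = ½cd. A sketch using the all-integer case mentioned above:

```python
import math

def altitude_from_legs(a, b):
    """Altitude to the hypotenuse of a right triangle: d = ab/c (from
    equating the areas ab/2 and cd/2).  The inverse Pythagorean theorem
    then states 1/a^2 + 1/b^2 = 1/d^2."""
    c = math.hypot(a, b)
    d = a * b / c
    assert math.isclose(1/a**2 + 1/b**2, 1/d**2)
    return d

print(altitude_from_legs(15, 20))  # the smallest all-integer case gives d = 12
```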
was drowned at sea for making known the existence of the irrational or incommensurable.[34] A careful discussion of Hippasus's contributions is found in Fritz.[35] For any complex number z = x + iy, the absolute value or modulus is given by r = |z| = √(x² + y²). So the three quantities r, x and y are related by the Pythagorean equation r² = x² + y². Note that r is defined to be a positive number or zero, but x and y can be negative as well as positive. Geometrically, r is the distance of z from zero, or the origin O, in the complex plane. This can be generalised to find the distance between two points, z₁ and z₂ say. The required distance is given by |z₁ − z₂|, so again the quantities are related by a version of the Pythagorean equation. The distance formula in Cartesian coordinates is derived from the Pythagorean theorem.[36] If (x₁, y₁) and (x₂, y₂) are points in the plane, then the distance between them, also called the Euclidean distance, is given by √((x₁ − x₂)² + (y₁ − y₂)²). More generally, in Euclidean n-space, the Euclidean distance between two points A = (a₁, a₂, …, aₙ) and B = (b₁, b₂, …, bₙ) is defined, by generalization of the Pythagorean theorem, as √((a₁ − b₁)² + (a₂ − b₂)² + ⋯ + (aₙ − bₙ)²). If instead of Euclidean distance, the square of this value (the squared Euclidean distance, or SED) is used, the resulting equation avoids square roots and is simply a sum of the SED of the coordinates. The squared form is a smooth, convex function of both points, and is widely used in optimization theory and statistics, forming the basis of least squares. If Cartesian coordinates are not used, for example, if polar coordinates are used in two dimensions or, in more general terms, if curvilinear coordinates are used, the formulas expressing the Euclidean distance are more complicated than the Pythagorean theorem, but can be derived from it. A typical example where the straight-line distance between two points is converted to curvilinear coordinates can be found in the applications of Legendre polynomials in physics.
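The n-dimensional distance formula is a one-line generalization of the theorem; a minimal sketch (function name is illustrative):

```python
import math

def euclidean_distance(p, q):
    """n-dimensional Euclidean distance: by repeated application of the
    Pythagorean theorem, the squared distance is the sum of the squared
    coordinate differences."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

print(euclidean_distance((0, 0), (3, 4)))        # the 3-4-5 right triangle
print(euclidean_distance((1, 2, 2), (0, 0, 0)))  # a 3D example
```

Omitting the final `sqrt` gives the squared Euclidean distance (SED) mentioned above, which is cheaper to compute and preserves distance ordering.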
The formulas can be discovered by using Pythagoras' theorem with the equations relating the curvilinear coordinates to Cartesian coordinates. For example, the polar coordinates (r, θ) can be introduced as x = r cos θ, y = r sin θ. Then two points with locations (r₁, θ₁) and (r₂, θ₂) are separated by a distance s with s² = (r₁ cos θ₁ − r₂ cos θ₂)² + (r₁ sin θ₁ − r₂ sin θ₂)². Performing the squares and combining terms, the Pythagorean formula for distance in Cartesian coordinates produces the separation in polar coordinates as s² = r₁² + r₂² − 2r₁r₂ cos(θ₁ − θ₂), using the trigonometric product-to-sum formulas. This formula is the law of cosines, sometimes called the generalized Pythagorean theorem.[37] From this result, for the case where the radii to the two locations are at right angles, the enclosed angle is Δθ = π/2, and the form corresponding to Pythagoras' theorem is regained: s² = r₁² + r₂². The Pythagorean theorem, valid for right triangles, therefore is a special case of the more general law of cosines, valid for arbitrary triangles. In a right triangle with sides a, b and hypotenuse c, trigonometry determines the sine and cosine of the angle θ between side a and the hypotenuse as sin θ = b/c and cos θ = a/c. From that it follows that cos²θ + sin²θ = (a² + b²)/c² = 1, where the last step applies Pythagoras' theorem. This relation between sine and cosine is sometimes called the fundamental Pythagorean trigonometric identity.[38] In similar triangles, the ratios of the sides are the same regardless of the size of the triangles, and depend upon the angles. Consequently, in the figure, the triangle with hypotenuse of unit size has opposite side of size sin θ and adjacent side of size cos θ in units of the hypotenuse. The Pythagorean theorem relates the cross product and dot product in a similar way:[39] ‖a × b‖² + (a · b)² = ‖a‖²‖b‖². This can be seen from the definitions of the cross product and dot product, as a × b = ‖a‖‖b‖ sin θ n and a · b = ‖a‖‖b‖ cos θ, with n a unit vector normal to both a and b. The relationship follows from these definitions and the Pythagorean trigonometric identity. This can also be used to define the cross product.
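The cross-product/dot-product relation (Lagrange's identity in three dimensions) can be checked directly from the component definitions; a sketch with hand-rolled vector helpers:

```python
import math

def cross(u, v):
    """Component definition of the 3D cross product."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def pythagorean_cross_dot(u, v):
    """||u x v||^2 + (u . v)^2 == ||u||^2 ||v||^2, the Pythagorean relation
    between cross and dot products (sin^2 + cos^2 = 1 in disguise)."""
    w = cross(u, v)
    lhs = dot(w, w) + dot(u, v) ** 2
    rhs = dot(u, u) * dot(v, v)
    return math.isclose(lhs, rhs)

print(pythagorean_cross_dot((1, 2, 3), (4, 5, 6)))
```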
By rearranging, the equation ‖a × b‖² = ‖a‖²‖b‖² − (a · b)² is obtained. This can be considered as a condition on the cross product and so part of its definition, for example in seven dimensions.[40][41] If the first four of the Euclidean geometry axioms are assumed to be true, then the Pythagorean theorem is equivalent to the fifth. That is, Euclid's fifth postulate implies the Pythagorean theorem and vice versa. The Pythagorean theorem generalizes beyond the areas of squares on the three sides to any similar figures. This was known by Hippocrates of Chios in the 5th century BC,[42] and was included by Euclid in his Elements:[43] if one erects similar figures (see Euclidean geometry) with corresponding sides on the sides of a right triangle, then the sum of the areas of the ones on the two smaller sides equals the area of the one on the larger side. This extension assumes that the sides of the original triangle are the corresponding sides of the three congruent figures (so the common ratios of sides between the similar figures are a : b : c).[44] While Euclid's proof only applied to convex polygons, the theorem also applies to concave polygons and even to similar figures that have curved boundaries (but still with part of a figure's boundary being the side of the original triangle).[44] The basic idea behind this generalization is that the area of a plane figure is proportional to the square of any linear dimension, and in particular is proportional to the square of the length of any side. Thus, if similar figures with areas A, B and C are erected on sides with corresponding lengths a, b and c, then A/a² = B/b² = C/c², so A + B = (C/c²)(a² + b²). But, by the Pythagorean theorem, a² + b² = c², so A + B = C. Conversely, if we can prove that A + B = C for three similar figures without using the Pythagorean theorem, then we can work backwards to construct a proof of the theorem.
For example, the starting center triangle can be replicated and used as a triangle C on its hypotenuse, and two similar right triangles (A and B) constructed on the other two sides, formed by dividing the central triangle by its altitude. The sum of the areas of the two smaller triangles therefore is that of the third, thus A + B = C, and reversing the above logic leads to the Pythagorean theorem a² + b² = c². (See also Einstein's proof by dissection without rearrangement.) The Pythagorean theorem is a special case of the more general theorem relating the lengths of sides in any triangle, the law of cosines, which states that a² + b² − 2ab cos θ = c², where θ is the angle between sides a and b.[45] When θ is π/2 radians or 90°, then cos θ = 0, and the formula reduces to the usual Pythagorean theorem. At any selected angle of a general triangle of sides a, b, c, inscribe an isosceles triangle such that the equal angles at its base θ are the same as the selected angle. Suppose the selected angle θ is opposite the side labeled c. Inscribing the isosceles triangle forms triangle CAD with angle θ opposite side b and with side r along c. A second triangle is formed with angle θ opposite side a and a side with length s along c, as shown in the figure. Thābit ibn Qurra stated that the sides of the three triangles were related as a² + b² = c(r + s).[47][48] As the angle θ approaches π/2, the base of the isosceles triangle narrows, and lengths r and s overlap less and less. When θ = π/2, ADB becomes a right triangle, r + s = c, and the original Pythagorean theorem is regained. One proof observes that triangle ABC has the same angles as triangle CAD, but in opposite order. (The two triangles share the angle at vertex A, both contain the angle θ, and so also have the same third angle by the triangle postulate.)
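The law-of-cosines specialization is worth seeing numerically: at θ = π/2 the cosine term vanishes and the Pythagorean value reappears. A sketch (function name is illustrative):

```python
import math

def third_side(a, b, theta):
    """Law of cosines: c^2 = a^2 + b^2 - 2ab cos(theta), which reduces to
    the Pythagorean theorem when theta = pi/2 (cos(theta) = 0)."""
    return math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(theta))

print(third_side(3, 4, math.pi / 2))  # right angle: recovers c = 5
print(third_side(3, 4, math.pi / 3))  # acute angle: c shorter than 5
```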
Consequently, ABC is similar to the reflection of CAD, the triangle DAC in the lower panel. Taking the ratio of sides opposite and adjacent to θ gives c/b = b/r, that is, b² = cr. Likewise, for the reflection of the other triangle, c/a = a/s, that is, a² = cs. Clearing fractions and adding these two relations gives a² + b² = c(r + s), the required result. The theorem remains valid if the angle θ is obtuse, in which case the lengths r and s are non-overlapping. Pappus's area theorem is a further generalization that applies to triangles that are not right triangles, using parallelograms on the three sides in place of squares (squares are a special case, of course). The upper figure shows that for a scalene triangle, the area of the parallelogram on the longest side is the sum of the areas of the parallelograms on the other two sides, provided the parallelogram on the long side is constructed as indicated (the dimensions labeled with arrows are the same, and determine the sides of the bottom parallelogram). This replacement of squares with parallelograms bears a clear resemblance to the original Pythagoras' theorem, and was considered a generalization by Pappus of Alexandria in the 4th century AD.[49][50] The lower figure shows the elements of the proof. Focus on the left side of the figure. The left green parallelogram has the same area as the left, blue portion of the bottom parallelogram because both have the same base b and height h. However, the left green parallelogram also has the same area as the left green parallelogram of the upper figure, because they have the same base (the upper left side of the triangle) and the same height normal to that side of the triangle. Repeating the argument for the right side of the figure, the bottom parallelogram has the same area as the sum of the two green parallelograms. In terms of solid geometry, Pythagoras' theorem can be applied to three dimensions as follows. Consider the cuboid shown in the figure. The length of the face diagonal AC is found from Pythagoras' theorem as AC² = AB² + BC², where these three sides form a right triangle.
Using diagonal AC and the horizontal edge CD, the length of the body diagonal AD is then found by a second application of Pythagoras' theorem as AD² = AC² + CD², or, doing it all in one step, AD² = AB² + BC² + CD². This result is the three-dimensional expression for the magnitude of a vector v (the diagonal AD) in terms of its orthogonal components {vₖ} (the three mutually perpendicular sides): ‖v‖² = v₁² + v₂² + v₃². This one-step formulation may be viewed as a generalization of Pythagoras' theorem to higher dimensions. However, this result is really just the repeated application of the original Pythagoras' theorem to a succession of right triangles in a sequence of orthogonal planes. A substantial generalization of the Pythagorean theorem to three dimensions is de Gua's theorem, named for Jean Paul de Gua de Malves: if a tetrahedron has a right-angle corner (like a corner of a cube), then the square of the area of the face opposite the right-angle corner is the sum of the squares of the areas of the other three faces. This result can be generalized as in the "n-dimensional Pythagorean theorem":[51] let x₁, x₂, …, xₙ be orthogonal vectors in Rⁿ, and consider the n-dimensional simplex S with vertices 0, x₁, …, xₙ. (Think of the (n − 1)-dimensional simplex with vertices x₁, …, xₙ, not including the origin, as the "hypotenuse" of S and the remaining (n − 1)-dimensional faces of S as its "legs".) Then the square of the volume of the hypotenuse of S is the sum of the squares of the volumes of the n legs. This statement is illustrated in three dimensions by the tetrahedron in the figure. The "hypotenuse" is the base of the tetrahedron at the back of the figure, and the "legs" are the three sides emanating from the vertex in the foreground. As the depth of the base from the vertex increases, the area of the "legs" increases, while that of the base is fixed.
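De Gua's theorem is easy to check for a corner tetrahedron with legs along the coordinate axes: the three right-triangle faces have areas pq/2, qr/2, rp/2, and the oblique face's area follows from a cross product. A sketch (names and sample legs are illustrative):

```python
import math

def de_gua(p, q, r):
    """de Gua's theorem for a tetrahedron with its right-angle corner at
    the origin and legs p, q, r along the axes: the squared area of the
    oblique face equals the sum of the squared areas of the three
    right-triangle faces."""
    legs_sq = (0.5*p*q)**2 + (0.5*q*r)**2 + (0.5*r*p)**2
    # Oblique face has vertices (p,0,0), (0,q,0), (0,0,r); its area is
    # half the magnitude of the cross product of two edge vectors.
    ux, uy, uz = -p, q, 0.0
    vx, vy, vz = -p, 0.0, r
    cx = uy*vz - uz*vy
    cy = uz*vx - ux*vz
    cz = ux*vy - uy*vx
    hyp_area = 0.5 * math.sqrt(cx*cx + cy*cy + cz*cz)
    return math.isclose(hyp_area**2, legs_sq)

print(de_gua(1.0, 2.0, 3.0))
```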
The theorem suggests that when this depth is at the value creating a right vertex, the generalization of Pythagoras' theorem applies. In a different wording:[52] given an n-rectangular n-dimensional simplex, the square of the (n − 1)-content of the facet opposing the right vertex will equal the sum of the squares of the (n − 1)-contents of the remaining facets. The Pythagorean theorem can be generalized to inner product spaces,[53] which are generalizations of the familiar 2-dimensional and 3-dimensional Euclidean spaces. For example, a function may be considered as a vector with infinitely many components in an inner product space, as in functional analysis.[54] In an inner product space, the concept of perpendicularity is replaced by the concept of orthogonality: two vectors v and w are orthogonal if their inner product ⟨v, w⟩ is zero. The inner product is a generalization of the dot product of vectors. The dot product is called the standard inner product or the Euclidean inner product. However, other inner products are possible.[55] The concept of length is replaced by the concept of the norm ‖v‖ of a vector v, defined as:[56] ‖v‖ = √⟨v, v⟩. In an inner-product space, the Pythagorean theorem states that for any two orthogonal vectors v and w we have ‖v + w‖² = ‖v‖² + ‖w‖². Here the vectors v and w are akin to the sides of a right triangle with hypotenuse given by the vector sum v + w. This form of the Pythagorean theorem is a consequence of the properties of the inner product: ‖v + w‖² = ⟨v + w, v + w⟩ = ⟨v, v⟩ + ⟨w, w⟩ + ⟨v, w⟩ + ⟨w, v⟩ = ‖v‖² + ‖w‖², where ⟨v, w⟩ = ⟨w, v⟩ = 0 because of orthogonality. A further generalization of the Pythagorean theorem in an inner product space to non-orthogonal vectors is the parallelogram law:[56] 2‖v‖² + 2‖w‖² = ‖v + w‖² + ‖v − w‖², which says that twice the sum of the squares of the lengths of the sides of a parallelogram is the sum of the squares of the lengths of the diagonals.
Any norm that satisfies this equality isipso factoa norm corresponding to an inner product.[56] The Pythagorean identity can be extended to sums of more than two orthogonal vectors. Ifv1,v2, ...,vnare pairwise-orthogonal vectors in an inner-product space, then application of the Pythagorean theorem to successive pairs of these vectors (as described for 3-dimensions in the section onsolid geometry) results in the equation[57] Another generalization of the Pythagorean theorem applies toLebesgue-measurablesets of objects in any number of dimensions. Specifically, the square of the measure of anm-dimensional set of objects in one or more parallelm-dimensionalflatsinn-dimensionalEuclidean spaceis equal to the sum of the squares of the measures of theorthogonalprojections of the object(s) onto allm-dimensional coordinate subspaces.[58] In mathematical terms: where: The Pythagorean theorem is derived from theaxiomsofEuclidean geometry, and in fact, were the Pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be Euclidean. More precisely, the Pythagorean theoremimplies, and is implied by, Euclid's Parallel (Fifth) Postulate.[59][60]Thus, right triangles in anon-Euclidean geometry[61]do not satisfy the Pythagorean theorem. For example, inspherical geometry, all three sides of the right triangle (saya,b, andc) bounding an octant of the unit sphere have length equal toπ/2, and all its angles are right angles, which violates the Pythagorean theorem becausea2+b2=2c2>c2{\displaystyle a^{2}+b^{2}=2c^{2}>c^{2}}. Here two cases of non-Euclidean geometry are considered—spherical geometryandhyperbolic plane geometry; in each case, as in the Euclidean case for non-right triangles, the result replacing the Pythagorean theorem follows from the appropriate law of cosines. 
However, the Pythagorean theorem remains true in hyperbolic geometry and elliptic geometry if the condition that the triangle be right is replaced with the condition that two of the angles sum to the third, sayA+B=C. The sides are then related as follows: the sum of the areas of the circles with diametersaandbequals the area of the circle with diameterc.[62] For any righttriangle on a sphereof radiusR(for example, ifγin the figure is a right angle), with sidesa,b,c,the relation between the sides takes the form:[63] This equation can be derived as a special case of thespherical law of cosinesthat applies to all spherical triangles: For infinitesimal triangles on the sphere (or equivalently, for finite spherical triangles on a sphere of infinite radius), the spherical relation between the sides of a right triangle reduces to the Euclidean form of the Pythagorean theorem. To see how, assume we have a spherical triangle of fixed side lengthsa,b,andcon a sphere with expanding radiusR. AsRapproaches infinity the quantitiesa/R,b/R,andc/Rtend to zero and the spherical Pythagorean identity reduces to1=1,{\displaystyle 1=1,}so we must look at itsasymptotic expansion. TheMaclaurin seriesfor the cosine function can be written ascos⁡x=1−12x2+O(x4){\textstyle \cos x=1-{\tfrac {1}{2}}x^{2}+O{\left(x^{4}\right)}}with the remainder term inbig O notation. Lettingx=c/R{\displaystyle x=c/R}be a side of the triangle, and treating the expression as an asymptotic expansion in terms ofRfor a fixedc, and likewise foraandb. Substituting the asymptotic expansion for each of the cosines into the spherical relation for a right triangle yields Subtracting 1 and then negating each side, Multiplying through by2R2,the asymptotic expansion forcin terms of fixeda,band variableRis The Euclidean Pythagorean relationshipc2=a2+b2{\textstyle c^{2}=a^{2}+b^{2}}is recovered in the limit, as the remainder vanishes when the radiusRapproaches infinity. 
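The limiting behaviour can be watched directly: solving the spherical relation cos(c/R) = cos(a/R) cos(b/R) for c and letting R grow recovers the Euclidean hypotenuse. A small sketch (the function name is ours):

```python
import math

def spherical_hypotenuse(a, b, R):
    # right triangle on a sphere of radius R: cos(c/R) = cos(a/R) cos(b/R)
    return R * math.acos(math.cos(a / R) * math.cos(b / R))

a, b = 3.0, 4.0
euclid = math.hypot(a, b)                    # 5.0, the flat-space answer
errors = [abs(spherical_hypotenuse(a, b, R) - euclid)
          for R in (10.0, 100.0, 1e4)]
assert errors[0] > errors[1] > errors[2]     # error shrinks as R grows
assert errors[-1] < 1e-6
```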
For practical computation in spherical trigonometry with small right triangles, cosines can be replaced with sines using the double-angle identitycos⁡2θ=1−2sin2⁡θ{\displaystyle \cos {2\theta }=1-2\sin ^{2}{\theta }}to avoidloss of significance. Then the spherical Pythagorean theorem can alternately be written as In ahyperbolicspace with uniformGaussian curvature−1/R2, for a righttrianglewith legsa,b, and hypotenusec, the relation between the sides takes the form:[64] where cosh is thehyperbolic cosine. This formula is a special form of thehyperbolic law of cosinesthat applies to all hyperbolic triangles:[65] with γ the angle at the vertex opposite the sidec. By using theMaclaurin seriesfor the hyperbolic cosine,coshx≈ 1 +x2/2, it can be shown that as a hyperbolic triangle becomes very small (that is, asa,b, andcall approach zero), the hyperbolic relation for a right triangle approaches the form of Pythagoras' theorem. For small right triangles(a,b<<R), the hyperbolic cosines can be eliminated to avoidloss of significance, giving For any uniform curvatureK(positive, zero, or negative), in very small right triangles (|K|a2, |K|b2<< 1) with hypotenusec, it can be shown that The Pythagorean theorem applies toinfinitesimaltriangles seen indifferential geometry. In three dimensional space, the distance between two infinitesimally separated points satisfies withdsthe element of distance and (dx,dy,dz) the components of the vector separating the two points. Such a space is called aEuclidean space. However, inRiemannian geometry, a generalization of this expression useful for general coordinates (not just Cartesian) and general spaces (not just Euclidean) takes the form:[66] which is called themetric tensor. (Sometimes, by abuse of language, the same term is applied to the set of coefficientsgij.) It may be a function of position, and often describescurved space. A simple example is Euclidean (flat) space expressed incurvilinear coordinates. 
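For curvature −1 (that is, R = 1) the hyperbolic relation above reads cosh c = cosh a · cosh b, and the small-triangle limit can be checked numerically (an illustrative sketch, not from the article):

```python
import math

def hyperbolic_hypotenuse(a, b):
    # right triangle in the hyperbolic plane of Gaussian curvature -1:
    # cosh(c) = cosh(a) * cosh(b)
    return math.acosh(math.cosh(a) * math.cosh(b))

# for a sizeable triangle the hyperbolic hypotenuse exceeds the Euclidean one
assert hyperbolic_hypotenuse(3.0, 4.0) > math.hypot(3.0, 4.0)

# for small triangles, Pythagoras' theorem is recovered
a, b = 3e-3, 4e-3
assert math.isclose(hyperbolic_hypotenuse(a, b), math.hypot(a, b),
                    rel_tol=1e-5)
```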
For example, inpolar coordinates: There is debate whether the Pythagorean theorem was discovered once, or many times in many places, and the date of first discovery is uncertain, as is the date of the first proof. Historians ofMesopotamianmathematics have concluded that the Pythagorean rule was in widespread use during theOld Babylonian period(20th to 16th centuries BC), over a thousand years beforePythagoraswas born.[68][69][70][71]The history of the theorem can be divided into four parts: knowledge ofPythagorean triples, knowledge of the relationship among the sides of a right triangle, knowledge of the relationships among adjacent angles, and proofs of the theorem within somedeductive system. Writtenc.1800BC, theEgyptianMiddle KingdomBerlin Papyrus 6619includes a problem whose solution is the Pythagorean triple 6:8:10, but the problem does not mention a triangle. The Mesopotamian tabletPlimpton 322, written nearLarsaalsoc.1800BC, contains many entries closely related to Pythagorean triples.[72] InIndia, theBaudhayanaShulba Sutra, the dates of which are given variously as between the 8th and 5th century BC,[73]contains a list of Pythagorean triples and a statement of the Pythagorean theorem, both in the special case of theisoscelesright triangleand in the general case, as does theApastambaShulba Sutra(c.600 BC).[a] ByzantineNeoplatonicphilosopher and mathematicianProclus, writing in the fifth century AD, states two arithmetic rules, "one of them attributed toPlato, the other to Pythagoras",[76]for generating special Pythagorean triples. The rule attributed to Pythagoras (c.570– c.495 BC) starts from anodd numberand produces a triple with leg and hypotenuse differing by one unit; the rule attributed to Plato (428/427 or 424/423 – 348/347 BC) starts from an even number and produces a triple with leg and hypotenuse differing by two units. According toThomas L. 
Heath(1861–1940), no specific attribution of the theorem to Pythagoras exists in the surviving Greek literature from the five centuries after Pythagoras lived.[77]However, when authors such asPlutarchandCiceroattributed the theorem to Pythagoras, they did so in a way which suggests that the attribution was widely known and undoubted.[78][79]ClassicistKurt von Fritzwrote, "Whether this formula is rightly attributed to Pythagoras personally ... one can safely assume that it belongs to the very oldest period ofPythagorean mathematics."[35]Around 300 BC, in Euclid'sElements, the oldest extantaxiomatic proofof the theorem is presented.[80] With contents known much earlier, but in surviving texts dating from roughly the 1st century BC, theChinesetextZhoubi Suanjing(周髀算经), (The Arithmetical Classic of theGnomonand the Circular Paths of Heaven) gives a reasoning for the Pythagorean theorem for the (3, 4, 5) triangle — in China it is called the "Gougu theorem" (勾股定理).[81][82]During theHan Dynasty(202 BC to 220 AD), Pythagorean triples appear inThe Nine Chapters on the Mathematical Art,[83]together with a mention of right triangles.[84]Some believe the theorem arose first inChinain the 11th century BC,[85]where it is alternatively known as the "Shang Gao theorem" (商高定理),[86]named after theDuke of Zhou'sastronomer and mathematician, whose reasoning composed most of what was in theZhoubi Suanjing.[87]
https://en.wikipedia.org/wiki/Pythagorean_theorem#Non-Euclidean_geometry
The numbereis amathematical constantapproximately equal to 2.71828 that is thebaseof thenatural logarithmandexponential function. It is sometimes calledEuler's number, after the Swiss mathematicianLeonhard Euler, though this can invite confusion withEuler numbers, or withEuler's constant, a different constant typically denotedγ{\displaystyle \gamma }. Alternatively,ecan be calledNapier's constantafterJohn Napier.[2][3]The Swiss mathematicianJacob Bernoullidiscovered the constant while studying compound interest.[4][5] The numbereis of great importance in mathematics,[6]alongside 0, 1,π, andi. All five appear in one formulation ofEuler's identityeiπ+1=0{\displaystyle e^{i\pi }+1=0}and play important and recurring roles across mathematics.[7][8]Like the constantπ,eisirrational, meaning that it cannot be represented as a ratio of integers, and moreover it istranscendental, meaning that it is not a root of any non-zeropolynomialwith rational coefficients.[3]To 30 decimal places, the value ofeis:[1] The numbereis thelimitlimn→∞(1+1n)n,{\displaystyle \lim _{n\to \infty }\left(1+{\frac {1}{n}}\right)^{n},}an expression that arises in the computation ofcompound interest. It is the sum of the infiniteseriese=∑n=0∞1n!=1+11+11⋅2+11⋅2⋅3+⋯.{\displaystyle e=\sum \limits _{n=0}^{\infty }{\frac {1}{n!}}=1+{\frac {1}{1}}+{\frac {1}{1\cdot 2}}+{\frac {1}{1\cdot 2\cdot 3}}+\cdots .} It is the unique positive numberasuch that the graph of the functiony=axhas aslopeof 1 atx= 0. 
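The limit and the series both converge to e, though at very different speeds; the comparison below is a quick numerical illustration:

```python
import math

# the limit (1 + 1/n)^n converges slowly (error is roughly e / 2n)
approx_limit = (1 + 1 / 1_000_000) ** 1_000_000
assert abs(approx_limit - math.e) < 1e-5

# the series sum 1/n! converges extremely fast
approx_series = sum(1 / math.factorial(n) for n in range(20))
assert abs(approx_series - math.e) < 1e-12
```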
One hase=exp⁡(1),{\displaystyle e=\exp(1),}whereexp{\displaystyle \exp }is the (natural)exponential function, the unique function that equals its ownderivativeand satisfies the equationexp⁡(0)=1.{\displaystyle \exp(0)=1.}Since the exponential function is commonly denoted asx↦ex,{\displaystyle x\mapsto e^{x},}one has alsoe=e1.{\displaystyle e=e^{1}.} Thelogarithmof basebcan be defined as theinverse functionof the functionx↦bx.{\displaystyle x\mapsto b^{x}.}Sinceb=b1,{\displaystyle b=b^{1},}one haslogb⁡b=1.{\displaystyle \log _{b}b=1.}The equatione=e1{\displaystyle e=e^{1}}implies therefore thateis the base of the natural logarithm. The numberecan also be characterized in terms of anintegral:[9]∫1edxx=1.{\displaystyle \int _{1}^{e}{\frac {dx}{x}}=1.} For other characterizations, see§ Representations. The first references to the constant were published in 1618 in the table of an appendix of a work on logarithms byJohn Napier. However, this did not contain the constant itself, but simply a list oflogarithms to the basee{\displaystyle e}. It is assumed that the table was written byWilliam Oughtred. In 1661,Christiaan Huygensstudied how to compute logarithms by geometrical methods and calculated a quantity that, in retrospect, is the base-10 logarithm ofe, but he did not recognizeeitself as a quantity of interest.[5][10] The constant itself was introduced byJacob Bernoulliin 1683, for solving the problem ofcontinuous compoundingof interest.[11][12]In his solution, the constanteoccurs as thelimitlimn→∞(1+1n)n,{\displaystyle \lim _{n\to \infty }\left(1+{\frac {1}{n}}\right)^{n},}wherenrepresents the number of intervals in a year on which the compound interest is evaluated (for example,n=12{\displaystyle n=12}for monthly compounding). 
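The integral characterization above, that the area under 1/x from 1 to e equals 1, can be checked with a simple midpoint rule (the helper name and step count are our own choices):

```python
import math

def integral_of_reciprocal(upper, steps=200_000):
    # midpoint rule for the integral of 1/x over [1, upper]
    h = (upper - 1.0) / steps
    return sum(h / (1.0 + (k + 0.5) * h) for k in range(steps))

assert abs(integral_of_reciprocal(math.e) - 1.0) < 1e-9   # defines e
assert abs(integral_of_reciprocal(2.0) - math.log(2.0)) < 1e-9
```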
The first symbol used for this constant was the letter b, by Gottfried Leibniz in letters to Christiaan Huygens in 1690 and 1691.[13] Leonhard Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons,[14] and in a letter to Christian Goldbach on 25 November 1731.[15][16] The first appearance of e in a printed publication was in Euler's Mechanica (1736).[17] It is unknown why Euler chose the letter e.[18] Although some researchers used the letter c in the subsequent years, the letter e was more common and eventually became standard.[2] Euler proved that e is the sum of the infinite series e = \sum_{n=0}^{\infty} \frac{1}{n!} = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots, where n! is the factorial of n.[5] The equivalence of the two characterizations using the limit and the infinite series can be proved via the binomial theorem.[19] Jacob Bernoulli discovered this constant in 1683, while studying a question about compound interest:[5] An account starts with $1.00 and pays 100 percent interest per year. If the interest is credited once, at the end of the year, the value of the account at year-end will be $2.00. What happens if the interest is computed and credited more frequently during the year? If the interest is credited twice in the year, the interest rate for each 6 months will be 50%, so the initial $1 is multiplied by 1.5 twice, yielding $1.00 × 1.5² = $2.25 at the end of the year. Compounding quarterly yields $1.00 × 1.25⁴ = $2.44140625, and compounding monthly yields $1.00 × (1 + 1/12)¹² = $2.613035....
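The year-end balances quoted above can be reproduced in a couple of lines:

```python
import math

def year_end_value(n):
    # $1.00 at 100% annual interest, credited n times during the year
    return (1 + 1 / n) ** n

assert math.isclose(year_end_value(1), 2.0)
assert math.isclose(year_end_value(2), 2.25)
assert math.isclose(year_end_value(4), 2.44140625)
assert abs(year_end_value(12) - 2.613035) < 2e-6   # quoted values are
assert abs(year_end_value(365) - 2.714567) < 2e-6  # truncated decimals
```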
If there are n compounding intervals, the interest for each interval will be 100%/n and the value at the end of the year will be $1.00 × (1 + 1/n)ⁿ.[20][21] Bernoulli noticed that this sequence approaches a limit (the force of interest) with larger n and, thus, smaller compounding intervals.[5] Compounding weekly (n = 52) yields $2.692596..., while compounding daily (n = 365) yields $2.714567... (approximately two cents more). The limit as n grows large is the number that came to be known as e. That is, with continuous compounding, the account value will reach $2.718281828... More generally, an account that starts at $1 and offers an annual interest rate of R will, after t years, yield e^{Rt} dollars with continuous compounding. Here, R is the decimal equivalent of the rate of interest expressed as a percentage, so for 5% interest, R = 5/100 = 0.05.[20][21] The number e itself also has applications in probability theory, in a way that is not obviously related to exponential growth. Suppose that a gambler plays a slot machine that pays out with a probability of one in n and plays it n times. As n increases, the probability that the gambler will lose all n bets approaches 1/e, which is approximately 36.79%. For n = 20, this is already 1/2.789509... (approximately 35.85%). This is an example of a Bernoulli trial process. Each time the gambler plays the slots, there is a one in n chance of winning. Playing n times is modeled by the binomial distribution, which is closely related to the binomial theorem and Pascal's triangle. The probability of winning k times out of n trials is:[22] In particular, the probability of winning zero times (k = 0) is The limit of the above expression, as n tends to infinity, is precisely 1/e. Exponential growth is a process that increases quantity over time at an ever-increasing rate.
It occurs when the instantaneousrate of change(that is, thederivative) of a quantity with respect to time isproportionalto the quantity itself.[21]Described as a function, a quantity undergoing exponential growth is anexponential functionof time, that is, the variable representing time is the exponent (in contrast to other types of growth, such asquadratic growth). If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoingexponential decayinstead. The law of exponential growth can be written in different but mathematically equivalent forms, by using a differentbase, for which the numbereis a common and convenient choice:x(t)=x0⋅ekt=x0⋅et/τ.{\displaystyle x(t)=x_{0}\cdot e^{kt}=x_{0}\cdot e^{t/\tau }.}Here,x0{\displaystyle x_{0}}denotes the initial value of the quantityx,kis the growth constant, andτ{\displaystyle \tau }is the time it takes the quantity to grow by a factor ofe. The normal distribution with zero mean and unit standard deviation is known as thestandard normal distribution,[23]given by theprobability density functionϕ(x)=12πe−12x2.{\displaystyle \phi (x)={\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}x^{2}}.} The constraint of unit standard deviation (and thus also unit variance) results in the⁠1/2⁠in the exponent, and the constraint of unit total area under the curveϕ(x){\displaystyle \phi (x)}results in the factor1/2π{\displaystyle \textstyle 1/{\sqrt {2\pi }}}. This function is symmetric aroundx= 0, where it attains its maximum value1/2π{\displaystyle \textstyle 1/{\sqrt {2\pi }}}, and hasinflection pointsatx= ±1. Another application ofe, also discovered in part by Jacob Bernoulli along withPierre Remond de Montmort, is in the problem ofderangements, also known as thehat check problem:[24]nguests are invited to a party and, at the door, the guests all check their hats with the butler, who in turn places the hats intonboxes, each labelled with the name of one guest. 
But the butler has not asked the identities of the guests, and so puts the hats into boxes selected at random. The problem of de Montmort is to find the probability thatnoneof the hats gets put into the right box. This probability, denoted bypn{\displaystyle p_{n}\!}, is: Asntends to infinity,pnapproaches1/e. Furthermore, the number of ways the hats can be placed into the boxes so that none of the hats are in the right box isn!/e,roundedto the nearest integer, for every positiven.[25] The maximum value ofxx{\displaystyle {\sqrt[{x}]{x}}}occurs atx=e{\displaystyle x=e}. Equivalently, for any value of the baseb> 1, it is the case that the maximum value ofx−1logb⁡x{\displaystyle x^{-1}\log _{b}x}occurs atx=e{\displaystyle x=e}(Steiner's problem, discussedbelow). This is useful in the problem of a stick of lengthLthat is broken intonequal parts. The value ofnthat maximizes the product of the lengths is then either[26] The quantityx−1logb⁡x{\displaystyle x^{-1}\log _{b}x}is also a measure ofinformationgleaned from an event occurring with probability1/x{\displaystyle 1/x}(approximately36.8%{\displaystyle 36.8\%}whenx=e{\displaystyle x=e}), so that essentially the same optimal division appears in optimal planning problems like thesecretary problem. The numbereoccurs naturally in connection with many problems involvingasymptotics. 
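The derangement counts can be generated with the standard recurrence !n = (n − 1)(!(n − 1) + !(n − 2)) and compared against n!/e (an illustrative sketch; the function name is ours):

```python
import math

def derangements(n):
    # !n = (n - 1) * (!(n - 1) + !(n - 2)), with !0 = 1 and !1 = 0
    a, b = 1, 0
    if n == 0:
        return a
    for m in range(2, n + 1):
        a, b = b, (m - 1) * (a + b)
    return b

# the number of derangements is n!/e rounded to the nearest integer
for n in range(1, 15):
    assert derangements(n) == round(math.factorial(n) / math.e)

# the probability that no hat lands in the right box tends to 1/e
p10 = derangements(10) / math.factorial(10)
assert abs(p10 - 1 / math.e) < 1e-7
```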
An example is Stirling's formula for the asymptotics of the factorial function, in which both the numbers e and π appear:[27] n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}. As a consequence,[27] e = \lim_{n\to\infty} \frac{n}{\sqrt[n]{n!}}. The principal motivation for introducing the number e, particularly in calculus, is to perform differential and integral calculus with exponential functions and logarithms.[28] A general exponential function y = a^x has a derivative, given by a limit: The parenthesized limit on the right is independent of the variable x. Its value turns out to be the logarithm of a to base e. Thus, when the value of a is set to e, this limit is equal to 1, and so one arrives at the following simple identity: Consequently, the exponential function with base e is particularly suited to doing calculus. Choosing e (as opposed to some other number) as the base of the exponential function makes calculations involving the derivatives much simpler. Another motivation comes from considering the derivative of the base-a logarithm (i.e., log_a x),[28] for x > 0: where the substitution u = h/x was made. The base-a logarithm of e is 1, if a equals e. So, symbolically, The logarithm with this special base is called the natural logarithm, and is usually denoted as ln; it behaves well under differentiation since there is no undetermined limit to carry through the calculations. Thus, there are two ways of selecting such special numbers a. One way is to set the derivative of the exponential function a^x equal to a^x, and solve for a. The other way is to set the derivative of the base-a logarithm to 1/x and solve for a. In each case, one arrives at a convenient choice of base for doing calculus. It turns out that these two solutions for a are actually the same: the number e.
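Both asymptotic statements involving Stirling's formula above can be probed numerically, with the caveat that the convergence of n/(n!)^(1/n) to e is slow:

```python
import math

def stirling(n):
    # Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# the ratio n! / stirling(n) tends to 1 (deviation is about 1/(12n))
assert abs(math.factorial(100) / stirling(100) - 1) < 1e-3

# consequence: n / (n!)^(1/n) -> e; the nth root is taken via logarithms
n = 2000
nth_root = math.exp(math.log(math.factorial(n)) / n)
assert abs(n / nth_root - math.e) < 0.01
```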
TheTaylor seriesfor the exponential function can be deduced from the facts that the exponential function is its own derivative and that it equals 1 when evaluated at 0:[29]ex=∑n=0∞xnn!.{\displaystyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}.}Settingx=1{\displaystyle x=1}recovers the definition ofeas the sum of an infinite series. The natural logarithm function can be defined as the integral from 1 tox{\displaystyle x}of1/t{\displaystyle 1/t}, and the exponential function can then be defined as the inverse function of the natural logarithm. The numbereis the value of the exponential function evaluated atx=1{\displaystyle x=1}, or equivalently, the number whose natural logarithm is 1. It follows thateis the unique positive real number such that∫1e1tdt=1.{\displaystyle \int _{1}^{e}{\frac {1}{t}}\,dt=1.} Becauseexis the unique function (up tomultiplication by a constantK) that is equal to its ownderivative, ddxKex=Kex,{\displaystyle {\frac {d}{dx}}Ke^{x}=Ke^{x},} it is therefore its ownantiderivativeas well:[30] ∫Kexdx=Kex+C.{\displaystyle \int Ke^{x}\,dx=Ke^{x}+C.} Equivalently, the family of functions y(x)=Kex{\displaystyle y(x)=Ke^{x}} whereKis any real or complex number, is the full solution to thedifferential equation y′=y.{\displaystyle y'=y.} The numbereis the unique real number such that(1+1x)x<e<(1+1x)x+1{\displaystyle \left(1+{\frac {1}{x}}\right)^{x}<e<\left(1+{\frac {1}{x}}\right)^{x+1}}for all positivex.[31] Also, we have the inequalityex≥x+1{\displaystyle e^{x}\geq x+1}for all realx, with equality if and only ifx= 0. Furthermore,eis the unique base of the exponential for which the inequalityax≥x+ 1holds for allx.[32]This is a limiting case ofBernoulli's inequality. Steiner's problemasks to find theglobal maximumfor the function f(x)=x1x.{\displaystyle f(x)=x^{\frac {1}{x}}.} This maximum occurs precisely atx=e. (One can check that the derivative oflnf(x)is zero only for this value ofx.) 
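The two-sided bound, the tangent-line inequality, and Steiner's maximum are all easy to spot-check numerically:

```python
import math

# (1 + 1/x)^x < e < (1 + 1/x)^(x+1) for every positive x
for x in (0.5, 1.0, 7.0, 1000.0):
    assert (1 + 1 / x) ** x < math.e < (1 + 1 / x) ** (x + 1)

# e^x >= x + 1 for all real x, with equality only at x = 0
for x in (-5.0, -0.1, 0.0, 0.1, 5.0):
    assert math.exp(x) >= x + 1

# Steiner's problem: x^(1/x) attains its global maximum at x = e
f = lambda x: x ** (1 / x)
assert f(math.e) > max(f(x) for x in (1.0, 2.0, 3.0, 10.0))
```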
Similarly,x= 1/eis where theglobal minimumoccurs for the function f(x)=xx.{\displaystyle f(x)=x^{x}.} The infinitetetration converges if and only ifx∈ [(1/e)e,e1/e] ≈ [0.06599, 1.4447],[33][34]shown by a theorem ofLeonhard Euler.[35][36][37] The real numbereisirrational.Eulerproved this by showing that itssimple continued fractionexpansion does not terminate.[38](See alsoFourier'sproof thateis irrational.) Furthermore, by theLindemann–Weierstrass theorem,eistranscendental, meaning that it is not a solution of any non-zero polynomial equation with rational coefficients. It was the first number to be proved transcendental without having been specifically constructed for this purpose (compare withLiouville number); the proof was given byCharles Hermitein 1873.[39]The numbereis one of only a few transcendental numbers for which the exactirrationality exponentis known (given byμ(e)=2{\displaystyle \mu (e)=2}).[40] Anunsolved problemthus far is the question of whether or not the numberseandπarealgebraically independent. This would be resolved bySchanuel's conjecture– a currently unproven generalization of the Lindemann–Weierstrass theorem.[41][42] It is conjectured thateisnormal, meaning that wheneis expressed in anybasethe possible digits in that base are uniformly distributed (occur with equal probability in any sequence of given length).[43] Inalgebraic geometry, aperiodis a number that can be expressed as an integral of analgebraic functionover an algebraicdomain. 
The constantπis a period, but it is conjectured thateis not.[44] Theexponential functionexmay be written as aTaylor series[45][29] ex=1+x1!+x22!+x33!+⋯=∑n=0∞xnn!.{\displaystyle e^{x}=1+{x \over 1!}+{x^{2} \over 2!}+{x^{3} \over 3!}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}.} Because this series isconvergentfor everycomplexvalue ofx, it is commonly used to extend the definition ofexto the complex numbers.[46]This, with the Taylor series forsinandcosx, allows one to deriveEuler's formula: eix=cos⁡x+isin⁡x,{\displaystyle e^{ix}=\cos x+i\sin x,} which holds for every complexx.[46]The special case withx=πisEuler's identity: eiπ+1=0,{\displaystyle e^{i\pi }+1=0,}which is considered to be an exemplar ofmathematical beautyas it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used ina proofthatπistranscendental, which implies the impossibility ofsquaring the circle.[47][48]Moreover, the identity implies that, in theprincipal branchof the logarithm,[46] ln⁡(−1)=iπ.{\displaystyle \ln(-1)=i\pi .} Furthermore, using the laws for exponentiation, (cos⁡x+isin⁡x)n=(eix)n=einx=cos⁡nx+isin⁡nx{\displaystyle (\cos x+i\sin x)^{n}=\left(e^{ix}\right)^{n}=e^{inx}=\cos nx+i\sin nx} for any integern, which isde Moivre's formula.[49] The expressions ofcosxandsinxin terms of theexponential functioncan be deduced from the Taylor series:[46]cos⁡x=eix+e−ix2,sin⁡x=eix−e−ix2i.{\displaystyle \cos x={\frac {e^{ix}+e^{-ix}}{2}},\qquad \sin x={\frac {e^{ix}-e^{-ix}}{2i}}.} The expressioncos⁡x+isin⁡x{\textstyle \cos x+i\sin x}is sometimes abbreviated ascis(x).[49] The numberecan be represented in a variety of ways: as aninfinite series, aninfinite product, acontinued fraction, or alimit of a sequence. 
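Euler's formula, Euler's identity, and de Moivre's formula can all be exercised with the complex exponential from the standard library:

```python
import cmath
import math

x = 0.7  # arbitrary real number
assert cmath.isclose(cmath.exp(1j * x), complex(math.cos(x), math.sin(x)))

# Euler's identity: e^{i*pi} + 1 = 0, up to floating-point rounding
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-15

# de Moivre: (cos x + i sin x)^n = cos nx + i sin nx
n = 5
lhs = cmath.exp(1j * x) ** n
rhs = complex(math.cos(n * x), math.sin(n * x))
assert cmath.isclose(lhs, rhs)
```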
In addition to the limit and the series given above, there is also thesimple continued fraction which written out looks like The following infinite product evaluates toe:[26]e=21(43)1/2(6⋅85⋅7)1/4(10⋅12⋅14⋅169⋅11⋅13⋅15)1/8⋯.{\displaystyle e={\frac {2}{1}}\left({\frac {4}{3}}\right)^{1/2}\left({\frac {6\cdot 8}{5\cdot 7}}\right)^{1/4}\left({\frac {10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}}\right)^{1/8}\cdots .} Many other series, sequence, continued fraction, and infinite product representations ofehave been proved. In addition to exact analytical expressions for representation ofe, there are stochastic techniques for estimatinge. One such approach begins with an infinite sequence of independent random variablesX1,X2..., drawn from theuniform distributionon [0, 1]. LetVbe the least numbernsuch that the sum of the firstnobservations exceeds 1: Then theexpected valueofVise:E(V) =e.[52][53] The number of known digits ofehas increased substantially since the introduction of the computer, due both to increasing performance of computers and to algorithmic improvements.[54][55] Since around 2010, the proliferation of modern high-speeddesktop computershas made it feasible for amateurs to compute trillions of digits ofewithin acceptable amounts of time. On Dec 24, 2023, a record-setting calculation was made by Jordan Ranous, givingeto 35,000,000,000,000 digits.[63] One way to compute the digits ofeis with the series[64]e=∑k=0∞1k!.{\displaystyle e=\sum _{k=0}^{\infty }{\frac {1}{k!}}.} A faster method involves two recursive functionsp(a,b){\displaystyle p(a,b)}andq(a,b){\displaystyle q(a,b)}. 
The functions are defined as(p(a,b)q(a,b))={(1b),ifb=a+1,(p(a,m)q(m,b)+p(m,b)q(a,m)q(m,b)),otherwise, wherem=⌊(a+b)/2⌋.{\displaystyle {\binom {p(a,b)}{q(a,b)}}={\begin{cases}{\binom {1}{b}},&{\text{if }}b=a+1{\text{,}}\\{\binom {p(a,m)q(m,b)+p(m,b)}{q(a,m)q(m,b)}},&{\text{otherwise, where }}m=\lfloor (a+b)/2\rfloor .\end{cases}}} The expression1+p(0,n)q(0,n){\displaystyle 1+{\frac {p(0,n)}{q(0,n)}}}produces thenth partial sum of the series above. This method usesbinary splittingto computeewith fewer single-digit arithmetic operations and thus reducedbit complexity. Combining this withfast Fourier transform-based methods of multiplying integers makes computing the digits very fast.[64] During the emergence ofinternet culture, individuals and organizations sometimes paid homage to the numbere. In an early example, thecomputer scientistDonald Knuthlet the version numbers of his programMetafontapproache. The versions are 2, 2.7, 2.71, 2.718, and so forth.[65] In another instance, theIPOfiling forGooglein 2004, rather than a typical round-number amount of money, the company announced its intention to raise 2,718,281,828USD, which isebilliondollarsroundedto the nearest dollar.[66] Google was also responsible for a billboard[67]that appeared in the heart ofSilicon Valley, and later inCambridge, Massachusetts;Seattle, Washington; andAustin, Texas. It read "{first 10-digit prime found in consecutive digits ofe}.com". The first 10-digit prime ineis 7427466391, which starts at the 99th digit.[68]Solving this problem and visiting the advertised (now defunct) website led to an even more difficult problem to solve, which consisted of finding the fifth term in the sequence 7182818284, 8182845904, 8747135266, 7427466391. It turned out that the sequence consisted of 10-digit numbers found in consecutive digits ofewhose digits summed to 49. 
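The recursion can be implemented directly; with exact rational arithmetic one can confirm that 1 + p(0, n)/q(0, n) equals the nth partial sum of the factorial series (a sketch using arbitrary-precision Fractions rather than the optimized big-integer arithmetic a record computation would use):

```python
import math
from fractions import Fraction

def p_q(a, b):
    # binary splitting for the partial sums of sum 1/k!
    if b == a + 1:
        return 1, b
    m = (a + b) // 2
    p_left, q_left = p_q(a, m)
    p_right, q_right = p_q(m, b)
    return p_left * q_right + p_right, q_left * q_right

n = 32
p, q = p_q(0, n)
e_approx = 1 + Fraction(p, q)
partial_sum = sum(Fraction(1, math.factorial(k)) for k in range(n + 1))
assert e_approx == partial_sum                  # exact agreement
assert abs(float(e_approx) - math.e) < 1e-15    # 32 terms are plenty
```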
The fifth term in the sequence is 5966290435, which starts at the 127th digit.[69]Solving this second problem finally led to aGoogle Labswebpage where the visitor was invited to submit a résumé.[70] The last release of the officialPython 2interpreter has version number 2.7.18, a reference toe.[71]
https://en.wikipedia.org/wiki/E_(mathematical_constant)
Ingeometry, theequal incircles theoremderives from a JapaneseSangaku, and pertains to the following construction: a series of rays are drawn from a given point to a given line such that the inscribed circles of the triangles formed by adjacent rays and the base line are equal. In the illustration the equal blue circles define the spacing between the rays, as described. The theorem states that the incircles of the triangles formed (starting from any given ray) by every other ray, every third ray, etc. and the base line are also equal. The case of every other ray is illustrated above by the green circles, which are all equal. From the fact that the theorem does not depend on the angle of the initial ray, it can be seen that the theorem properly belongs toanalysis, rather than geometry, and must relate to a continuous scaling function which defines the spacing of the rays. In fact, this function is thehyperbolic sine. The theorem is a direct corollary of the following lemma: Suppose that thenth ray makes an angleγn{\displaystyle \gamma _{n}}with the normal to the baseline. Ifγn{\displaystyle \gamma _{n}}is parameterized according to the equation,tan⁡γn=sinh⁡θn{\displaystyle \tan \gamma _{n}=\sinh \theta _{n}}, then values ofθn=a+nb{\displaystyle \theta _{n}=a+nb}, wherea{\displaystyle a}andb{\displaystyle b}are real constants, define a sequence of rays that satisfy the condition of equal incircles, and furthermore any sequence of rays satisfying the condition can be produced by suitable choice of the constantsa{\displaystyle a}andb{\displaystyle b}. In the diagram, lines PS and PT are adjacent rays making anglesγn{\displaystyle \gamma _{n}}andγn+1{\displaystyle \gamma _{n+1}}with line PR, which is perpendicular to the baseline, RST. Line QXOY is parallel to the baseline and passes through O, the center of the incircle of△{\displaystyle \triangle }PST, which is tangent to the rays at W and Z. 
Also, line PQ has lengthh−r{\displaystyle h-r}, and line QR has lengthr{\displaystyle r}, the radius of the incircle. Then△{\displaystyle \triangle }OWX is similar to△{\displaystyle \triangle }PQX and△{\displaystyle \triangle }OZY is similar to△{\displaystyle \triangle }PQY, and from XY = XO + OY we get This relation on a set of angles,{γm}{\displaystyle \{\gamma _{m}\}}, expresses the condition of equal incircles. To prove the lemma, we settan⁡γn=sinh⁡(a+nb){\displaystyle \tan \gamma _{n}=\sinh(a+nb)}, which givessec⁡γn=cosh⁡(a+nb){\displaystyle \sec \gamma _{n}=\cosh(a+nb)}. Usinga+(n+1)b=(a+nb)+b{\displaystyle a+(n+1)b=(a+nb)+b}, we apply the addition rules forsinh{\displaystyle \sinh }andcosh{\displaystyle \cosh }, and verify that the equal incircles relation is satisfied by setting This gives an expression for the parameterb{\displaystyle b}in terms of the geometric measures,h{\displaystyle h}andr{\displaystyle r}. With this definition ofb{\displaystyle b}we then obtain an expression for the radii,rN{\displaystyle r_{N}}, of the incircles formed by taking everyNth ray as the sides of the triangles
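The lemma lends itself to a direct numerical check: place the apex at height h above the baseline, let ray n meet the baseline at x = h·tan γₙ = h·sinh(a + nb), and compute each incircle radius as area divided by semiperimeter. Triangles formed from every ray, every second ray, and every third ray then each have equal incircles (an illustrative sketch; the parameter values are arbitrary):

```python
import math

def incircle_radius(h, x1, x2):
    # triangle with apex (0, h) and base vertices (x1, 0), (x2, 0)
    base = abs(x2 - x1)
    s1 = math.hypot(x1, h)          # slanted side to (x1, 0)
    s2 = math.hypot(x2, h)
    area = 0.5 * h * base
    return area / ((base + s1 + s2) / 2)

h, a, b = 2.0, -0.7, 0.4            # apex height and lemma parameters
xs = [h * math.sinh(a + n * b) for n in range(9)]  # tan(gamma_n) = sinh(a+nb)

for step in (1, 2, 3):
    radii = [incircle_radius(h, xs[i], xs[i + step])
             for i in range(0, len(xs) - step, step)]
    # for each fixed step, all the incircles are equal
    assert all(math.isclose(r, radii[0]) for r in radii)
```

Note that for a fixed step N the angle parameters still form an arithmetic sequence (with common difference Nb), which is exactly why the theorem follows from the lemma.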
https://en.wikipedia.org/wiki/Equal_incircles_theorem
The hyperbolastic functions, also known as hyperbolastic growth models, are mathematical functions that are used in medical statistical modeling. These models were originally developed to capture the growth dynamics of multicellular tumor spheres, and were introduced in 2005 by Mohammad Tabatabai, David Williams, and Zoran Bursac.[1] The precision of hyperbolastic functions in modeling real-world problems is due in part to the flexibility of their point of inflection.[1][2] These functions can be used in a wide variety of modeling problems such as tumor growth, stem cell proliferation, pharmacokinetics, cancer growth, sigmoid activation functions in neural networks, and epidemiological disease progression or regression.[1][3][4] The hyperbolastic functions can model both growth and decay curves until the population reaches its carrying capacity. Due to their flexibility, these models have diverse applications in the medical field, with the ability to capture disease progression with an intervening treatment. As the figures indicate, hyperbolastic functions can fit a sigmoidal curve in which the slowest rate occurs at the early and late stages.[5] In addition to presenting sigmoidal shapes, they can also accommodate biphasic situations in which medical interventions slow or reverse disease progression; when the effect of the treatment vanishes, the disease begins the second phase of its progression until it reaches its horizontal asymptote. One of the main characteristics of these functions is that they can not only fit sigmoidal shapes, but can also model biphasic growth patterns that other classical sigmoidal curves cannot adequately model.
This distinguishing feature has advantageous applications in various fields including medicine, biology, economics, engineering, agronomy, and computer-aided system theory.[6][7][8][9][10] The hyperbolastic rate equation of type I, denoted H1, is given by

dP(x)dx=P(x)M(M−P(x))(δ+θ1+x2),{\displaystyle {\frac {dP(x)}{dx}}={\frac {P(x)}{M}}\left(M-P\left(x\right)\right)\left(\delta +{\frac {\theta }{\sqrt {1+x^{2}}}}\right),}

where x{\displaystyle x} is any real number and P(x){\displaystyle P\left(x\right)} is the population size at x{\displaystyle x}. The parameter M{\displaystyle M} represents the carrying capacity, and the parameters δ{\displaystyle \delta } and θ{\displaystyle \theta } jointly represent the growth rate. The parameter θ{\displaystyle \theta } gives the distance from a symmetric sigmoidal curve. Solving the hyperbolastic rate equation of type I for P(x){\displaystyle P\left(x\right)} gives

P(x)=M1+αe−δx−θarsinh⁡(x),{\displaystyle P(x)={\frac {M}{1+\alpha e^{-\delta x-\theta \operatorname {arsinh} (x)}}},}

where arsinh{\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine function. If one desires to use the initial condition P(x0)=P0{\displaystyle P\left(x_{0}\right)=P_{0}}, then α{\displaystyle \alpha } can be expressed as

If x0=0{\displaystyle x_{0}=0}, then α{\displaystyle \alpha } reduces to

In the event that a vertical shift is needed to give a better model fit, one can add the shift parameter ζ{\displaystyle \zeta }, which would result in the following formula

The hyperbolastic function of type I generalizes the logistic function: if the parameter θ=0{\displaystyle \theta =0}, then it becomes a logistic function. This function P(x){\displaystyle P(x)} is a hyperbolastic function of type I.
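The closed-form H1 solution can be sketched directly. In the snippet below the helper name and parameter values are mine and purely illustrative; with x0 = 0 the initial condition fixes α = (M − P0)/P0, and setting θ = 0 recovers the ordinary logistic function, as stated above.

```python
import math

def hyperbolastic_h1(x, M, delta, theta, P0):
    """Hyperbolastic function of type I with initial condition P(0) = P0:
    P(x) = M / (1 + alpha * exp(-delta*x - theta*arsinh(x)))."""
    alpha = (M - P0) / P0          # follows from P(0) = M / (1 + alpha)
    return M / (1 + alpha * math.exp(-delta * x - theta * math.asinh(x)))

M, delta, theta, P0 = 100.0, 0.3, 0.5, 5.0

# The initial condition is satisfied.
assert abs(hyperbolastic_h1(0.0, M, delta, theta, P0) - P0) < 1e-9
# The curve approaches the carrying capacity M for large x.
assert abs(hyperbolastic_h1(60.0, M, delta, theta, P0) - M) < 1e-4
# With theta = 0, H1 reduces to the ordinary logistic function.
for x in (0.0, 1.0, 5.0, 10.0):
    logistic = M / (1 + (M - P0) / P0 * math.exp(-delta * x))
    assert abs(hyperbolastic_h1(x, M, delta, 0.0, P0) - logistic) < 1e-9
```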
The standard hyperbolastic function of type I is

The hyperbolastic rate equation of type II, denoted by H2, is defined as

where tanh{\displaystyle \tanh } is the hyperbolic tangent function, M{\displaystyle M} is the carrying capacity, and both δ{\displaystyle \delta } and γ>0{\displaystyle \gamma >0} jointly determine the growth rate. In addition, the parameter γ{\displaystyle \gamma } represents acceleration in the time course. Solving the hyperbolastic rate function of type II for P(x){\displaystyle P\left(x\right)} gives

If one desires to use the initial condition P(x0)=P0,{\displaystyle P(x_{0})=P_{0},} then α{\displaystyle \alpha } can be expressed as

If x0=0{\displaystyle x_{0}=0}, then α{\displaystyle \alpha } reduces to

Similarly, in the event that a vertical shift is needed to give a better fit, one can use the following formula

The standard hyperbolastic function of type II is defined as

The hyperbolastic rate equation of type III is denoted by H3 and has the form

where t{\displaystyle t} > 0. The parameter M{\displaystyle M} represents the carrying capacity, and the parameters δ,{\displaystyle \delta ,} γ,{\displaystyle \gamma ,} and θ{\displaystyle \theta } jointly determine the growth rate. The parameter γ{\displaystyle \gamma } represents acceleration of the time scale, while the size of θ{\displaystyle \theta } represents distance from a symmetric sigmoidal curve. The solution to the differential equation of type III is

With the initial condition P(t0)=P0{\displaystyle P\left(t_{0}\right)=P_{0}} we can express α{\displaystyle \alpha } as

The hyperbolastic distribution of type III is a three-parameter family of continuous probability distributions with scale parameters δ{\displaystyle \delta } > 0 and θ{\displaystyle \theta } ≥ 0 and parameter γ{\displaystyle \gamma } as the shape parameter.
When the parameter θ{\displaystyle \theta } = 0, the hyperbolastic distribution of type III reduces to the Weibull distribution.[11] The hyperbolastic cumulative distribution function of type III is given by

and its corresponding probability density function is

The hazard function h{\displaystyle h} (or failure rate) is given by

The survival function S{\displaystyle S} is given by

The standard hyperbolastic cumulative distribution function of type III is defined as

and its corresponding probability density function is

If one desires to calculate the point x{\displaystyle x} where the population reaches a percentage of its carrying capacity M{\displaystyle M}, then one can solve the equation

for x{\displaystyle x}, where 0<k<1{\displaystyle 0<k<1}. For instance, the half point can be found by setting k=12{\displaystyle k={\frac {1}{2}}}. According to stem cell researchers at the McGowan Institute for Regenerative Medicine at the University of Pittsburgh, "a newer model [called the hyperbolastic type III or] H3 is a differential equation that also describes the cell growth.
This model allows for much more variation and has been proven to better predict growth."[12] The hyperbolastic growth models H1, H2, and H3 have been applied to analyze the growth of solid Ehrlich carcinoma under a variety of treatments.[13] In animal science,[14] the hyperbolastic functions have been used for modeling broiler chicken growth.[15][16] The hyperbolastic model of type III was used to determine the size of the recovering wound.[17] In the area of wound healing, the hyperbolastic models accurately represent the time course of healing.[18] Such functions have been used to investigate variations in the healing velocity among different kinds of wounds and at different stages in the healing process, taking into consideration factors such as trace elements, growth factors, diabetic wounds, and nutrition.[19][20] Another application of hyperbolastic functions is in the area of the stochastic diffusion process,[21] whose mean function is a hyperbolastic curve. The main characteristics of the process are studied and the maximum likelihood estimation for the parameters of the process is considered.[22] To this end, the firefly metaheuristic optimization algorithm is applied after bounding the parametric space by a stagewise procedure. Some examples based on simulated sample paths and real data illustrate this development.
A sample path of a diffusion process models the trajectory of a particle embedded in a flowing fluid and subjected to random displacements due to collisions with other particles, which is called Brownian motion.[23][24][25][26][27] The hyperbolastic function of type III was used to model the proliferation of both adult mesenchymal and embryonic stem cells,[28][29][30][31] and the hyperbolastic mixed model of type II has been used in modeling cervical cancer data.[32] Hyperbolastic curves can be an important tool in analyzing cellular growth, the fitting of biological curves, the growth of phytoplankton, and instantaneous maturity rate.[33][34][35][36] In forest ecology and management, the hyperbolastic models have been applied to model the relationship between diameter at breast height (DBH) and height.[37] The multivariable hyperbolastic model of type III has been used to analyze the growth dynamics of phytoplankton, taking into consideration the concentration of nutrients.[38] Hyperbolastic regressions are statistical models that utilize standard hyperbolastic functions to model a dichotomous or multinomial outcome variable. The purpose of hyperbolastic regression is to predict an outcome using a set of explanatory (independent) variables. These types of regressions are routinely used in many areas including the medical, public health, dental, and biomedical sciences, as well as the social, behavioral, and engineering sciences. For instance, binary regression analysis has been used to predict endoscopic lesions in iron deficiency anemia.[39] In addition, binary regression was applied to differentiate between malignant and benign adnexal masses prior to surgery.[40] Let Y{\displaystyle Y} be a binary outcome variable which can assume one of two mutually exclusive values, success or failure.
If we code success as Y=1{\displaystyle Y=1} and failure as Y=0{\displaystyle Y=0}, then for parameter θ≥−1{\displaystyle \theta \geq -1}, the hyperbolastic success probability of type I for a sample of size n{\displaystyle n}, as a function of the parameter θ{\displaystyle \theta } and the parameter vector β=(β0,β1,…,βp){\displaystyle {\boldsymbol {\beta }}=(\beta _{0},\beta _{1},\ldots ,\beta _{p})}, given a p{\displaystyle p}-dimensional vector of explanatory variables xi=(xi1,xi2,…,xip)T{\displaystyle \mathbf {x} _{i}=(x_{i1},\ x_{i2},\ldots ,\ x_{ip})^{T}}, where i=1,2,…,n{\displaystyle i=1,2,\ldots ,n}, is given by

The odds of success is the ratio of the probability of success to the probability of failure. For binary hyperbolastic regression of type I, the odds of success is denoted by OddsH1{\displaystyle Odds_{H1}} and expressed by the equation

The logarithm of OddsH1{\displaystyle Odds_{H1}} is called the logit of binary hyperbolastic regression of type I. The logit transformation is denoted by LH1{\displaystyle L_{H1}} and can be written as

The Shannon information for the random variable Y{\displaystyle Y} is defined as

where the base of the logarithm b>0{\displaystyle b>0} and b≠1{\displaystyle b\neq 1}. For a binary outcome, b{\displaystyle b} is equal to 2{\displaystyle 2}. For the binary hyperbolastic regression of type I, the information I(y){\displaystyle I(y)} is given by

where Z=β0+∑s=1pβsxs{\displaystyle Z=\beta _{0}+\sum _{s=1}^{p}\beta _{s}x_{s}}, and xs{\displaystyle x_{s}} is the sth{\displaystyle s^{th}} input data. For a random sample of binary outcomes of size n{\displaystyle n}, the average empirical information for hyperbolastic H1 can be estimated by

where Zi=β0+∑s=1pβsxis{\displaystyle Z_{i}=\beta _{0}+\sum _{s=1}^{p}\beta _{s}x_{is}}, and xis{\displaystyle x_{is}} is the sth{\displaystyle s^{th}} input data for the ith{\displaystyle i^{th}} observation. Information entropy measures the loss of information in a transmitted message or signal.
In machine learning applications, it is the number of bits necessary to transmit a randomly selected event from a probability distribution. For a discrete random variable Y{\displaystyle Y}, the information entropy H{\displaystyle H} is defined as

where P(y){\displaystyle P(y)} is the probability mass function for the random variable Y{\displaystyle Y}. The information entropy is the mathematical expectation of I(y){\displaystyle I(y)} with respect to the probability mass function P(y){\displaystyle P(y)}. Information entropy has many applications in machine learning and artificial intelligence, such as classification modeling and decision trees. For the hyperbolastic H1, the entropy H{\displaystyle H} is equal to

The estimated average entropy for hyperbolastic H1 is denoted by H¯{\displaystyle {\bar {H}}} and is given by

The binary cross-entropy compares the observed y∈{0,1}{\displaystyle y\in \{0,1\}} with the predicted probabilities. The average binary cross-entropy for hyperbolastic H1 is denoted by C¯{\displaystyle {\overline {C}}} and is equal to

The hyperbolastic regression of type II is an alternative method for the analysis of binary data with robust properties. For the binary outcome variable Y{\displaystyle Y}, the hyperbolastic success probability of type II is a function of a p{\displaystyle p}-dimensional vector of explanatory variables xi{\displaystyle \mathbf {x} _{i}} given by

For the binary hyperbolastic regression of type II, the odds of success is denoted by OddsH2{\displaystyle Odds_{H2}} and is defined as

The logit transformation LH2{\displaystyle L_{H2}} is given by

For the binary hyperbolastic regression H2, the Shannon information I(y){\displaystyle I(y)} is given by

where Z=β0+∑s=1pβsxs{\displaystyle Z=\beta _{0}+\sum _{s=1}^{p}\beta _{s}x_{s}}, and xs{\displaystyle x_{s}} is the sth{\displaystyle s^{th}} input data.
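The hyperbolastic success-probability formulas themselves are given in displayed equations not reproduced here, but the average binary cross-entropy can be sketched generically. In the snippet below the function name and data are mine and purely illustrative; in practice the predicted probabilities p would come from a fitted hyperbolastic H1 or H2 model π(x; β), and the base-2 logarithm follows the convention b = 2 stated above for binary outcomes.

```python
import math

def average_binary_cross_entropy(y, p):
    """Average binary cross-entropy between observed outcomes y in {0, 1}
    and predicted success probabilities p, using base-2 logarithms."""
    n = len(y)
    return -sum(yi * math.log2(pi) + (1 - yi) * math.log2(1 - pi)
                for yi, pi in zip(y, p)) / n

# Illustrative observed outcomes and predicted probabilities.
y = [1, 0, 1, 1, 0]
p = [0.9, 0.2, 0.8, 0.7, 0.1]
c = average_binary_cross_entropy(y, p)
assert c > 0
# Confident correct predictions drive the cross-entropy toward zero.
assert average_binary_cross_entropy([1, 0], [0.999999, 0.000001]) < 1e-4
```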
For a random sample of binary outcomes of size n{\displaystyle n}, the average empirical information for hyperbolastic H2 is estimated by

where Zi=β0+∑s=1pβsxis{\displaystyle Z_{i}=\beta _{0}+\sum _{s=1}^{p}\beta _{s}x_{is}}, and xis{\displaystyle x_{is}} is the sth{\displaystyle s^{th}} input data for the ith{\displaystyle i^{th}} observation. For the hyperbolastic H2, the information entropy H{\displaystyle H} is equal to

and the estimated average entropy H¯{\displaystyle {\bar {H}}} for hyperbolastic H2 is

The average binary cross-entropy C¯{\displaystyle {\overline {C}}} for hyperbolastic H2 is

The estimate of the parameter vector β{\displaystyle {\boldsymbol {\beta }}} can be obtained by maximizing the log-likelihood function

where π(xi;β){\displaystyle \pi (\mathbf {x} _{i};{\boldsymbol {\beta }})} is defined according to whichever of the two types of hyperbolastic functions is used. The generalization of binary hyperbolastic regression to multinomial hyperbolastic regression has a response variable yi{\displaystyle y_{i}} for individual i{\displaystyle i} with k{\displaystyle k} categories (i.e. yi∈{1,2,…,k}{\displaystyle y_{i}\in \{1,2,\ldots ,k\}}). When k=2{\displaystyle k=2}, this model reduces to binary hyperbolastic regression. For each i=1,2,…,n{\displaystyle i=1,2,\ldots ,n}, we form k{\displaystyle k} indicator variables yij{\displaystyle y_{ij}} where

meaning that yij=1{\displaystyle y_{ij}=1} whenever the ith{\displaystyle i^{th}} response is in category j{\displaystyle j} and 0{\displaystyle 0} otherwise. Define the parameter vector βj=(βj0,βj1,…,βjp){\displaystyle {\boldsymbol {\beta }}_{j}=(\beta _{j0},\beta _{j1},\ldots ,\beta _{jp})} in a p+1{\displaystyle p+1}-dimensional Euclidean space and β=(β1,…,βk−1)T{\displaystyle {\boldsymbol {\beta }}=({\boldsymbol {\beta }}_{1},\ldots ,{\boldsymbol {\beta }}_{k-1})^{T}}.
Using category 1 as a reference and π1(xi;β){\displaystyle \pi _{1}(\mathbf {x} _{i};{\boldsymbol {\beta }})} as its corresponding probability function, the multinomial hyperbolastic regression of type I probabilities are defined as

and for j=2,…,k{\displaystyle j=2,\ldots ,k},

Similarly, for the multinomial hyperbolastic regression of type II we have

and for j=2,…,k{\displaystyle j=2,\ldots ,k},

where ηs(xi;β)=βs0+∑l=1pβslxil{\displaystyle \eta _{s}(\mathbf {x} _{i};{\boldsymbol {\beta }})=\beta _{s0}+\sum _{l=1}^{p}\beta _{sl}x_{il}} with s=2,…,k{\displaystyle s=2,\dots ,k} and i=1,…,n{\displaystyle i=1,\dots ,n}. The choice of πi(xi;β){\displaystyle \pi _{i}(\mathbf {x_{i}} ;{\boldsymbol {\beta }})} depends on the choice of hyperbolastic H1 or H2. For the multiclass case (j=1,2,…,k){\displaystyle (j=1,2,\dots ,k)}, the Shannon information Ij{\displaystyle I_{j}} is

For a random sample of size n{\displaystyle n}, the empirical multiclass information can be estimated by

For a discrete random variable Y{\displaystyle Y}, the multiclass information entropy is defined as

where P(y){\displaystyle P(y)} is the probability mass function for the multiclass random variable Y{\displaystyle Y}. For the hyperbolastic H1 or H2, the multiclass entropy H{\displaystyle H} is equal to

The estimated average multiclass entropy H¯{\displaystyle {\overline {H}}} is equal to

Multiclass cross-entropy compares the observed multiclass output with the predicted probabilities. For a random sample of multiclass outcomes of size n{\displaystyle n}, the average multiclass cross-entropy C¯{\displaystyle {\overline {C}}} for hyperbolastic H1 or H2 can be estimated by

The log-odds of membership in category j{\displaystyle j} versus the reference category 1, denoted by oj(xi;β){\displaystyle o_{j}(\mathbf {x} _{i};{\boldsymbol {\beta }})}, is equal to

where j=2,…,k{\displaystyle j=2,\ldots ,k} and i=1,…,n{\displaystyle i=1,\ldots ,n}.
The estimated parameter matrix β^{\displaystyle {\hat {\boldsymbol {\beta }}}} of multinomial hyperbolastic regression is obtained by maximizing the log-likelihood function. The maximum likelihood estimate of the parameter matrix β{\displaystyle {\boldsymbol {\beta }}} is
https://en.wikipedia.org/wiki/Hyperbolastic_functions
In mathematics, the inverse hyperbolic functions are inverses of the hyperbolic functions, analogous to the inverse circular functions. There are six in common use: inverse hyperbolic sine, inverse hyperbolic cosine, inverse hyperbolic tangent, inverse hyperbolic cosecant, inverse hyperbolic secant, and inverse hyperbolic cotangent. They are commonly denoted by the symbols for the hyperbolic functions, prefixed with arc- or ar-, or with a superscript −1{\displaystyle {-1}} (for example arcsinh, arsinh, or sinh−1{\displaystyle \sinh ^{-1}}). For a given value of a hyperbolic function, the inverse hyperbolic function provides the corresponding hyperbolic angle measure, for example arsinh⁡(sinh⁡a)=a{\displaystyle \operatorname {arsinh} (\sinh a)=a} and sinh⁡(arsinh⁡x)=x.{\displaystyle \sinh(\operatorname {arsinh} x)=x.} Hyperbolic angle measure is the length of an arc of a unit hyperbola x2−y2=1{\displaystyle x^{2}-y^{2}=1} as measured in the Lorentzian plane (not the length of a hyperbolic arc in the Euclidean plane), and twice the area of the corresponding hyperbolic sector. This is analogous to the way circular angle measure is the arc length of an arc of the unit circle in the Euclidean plane, or twice the area of the corresponding circular sector. Alternatively, hyperbolic angle is the area of a sector of the hyperbola xy=1.{\displaystyle xy=1.} Some authors call the inverse hyperbolic functions hyperbolic area functions.[1] Hyperbolic functions occur in the calculation of angles and distances in hyperbolic geometry. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equation is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity. The earliest and most widely adopted symbols use the prefix arc- (that is: arcsinh, arccosh, arctanh, arcsech, arccsch, arccoth), by analogy with the inverse circular functions (arcsin, etc.).
For a unit hyperbola ("Lorentzian circle") in the Lorentzian plane (pseudo-Euclidean plane of signature (1, 1))[2] or in the hyperbolic number plane,[3] the hyperbolic angle measure (argument to the hyperbolic functions) is indeed the arc length of a hyperbolic arc. Also common is the notation sinh−1,{\displaystyle \sinh ^{-1},} cosh−1,{\displaystyle \cosh ^{-1},} etc.,[4][5] although care must be taken to avoid misinterpretations of the superscript −1 as an exponent. The standard convention is that sinh−1⁡x{\displaystyle \sinh ^{-1}x} or sinh−1⁡(x){\displaystyle \sinh ^{-1}(x)} means the inverse function while (sinh⁡x)−1{\displaystyle (\sinh x)^{-1}} or sinh⁡(x)−1{\displaystyle \sinh(x)^{-1}} means the reciprocal 1/sinh⁡x.{\displaystyle 1/\sinh x.} Especially inconsistent is the conventional use of positive integer superscripts to indicate an exponent rather than function composition, e.g. sinh2⁡x{\displaystyle \sinh ^{2}x} conventionally means (sinh⁡x)2{\displaystyle (\sinh x)^{2}} and not sinh⁡(sinh⁡x).{\displaystyle \sinh(\sinh x).} Because the argument of hyperbolic functions is not the arc length of a hyperbolic arc in the Euclidean plane, some authors have condemned the prefix arc-, arguing that the prefix ar- (for area) or arg- (for argument) should be preferred.[6] Following this recommendation, the ISO 80000-2 standard abbreviations use the prefix ar- (that is: arsinh, arcosh, artanh, arsech, arcsch, arcoth). In computer programming languages, inverse circular and hyperbolic functions are often named with the shorter prefix a- (asinh, etc.). This article consistently adopts the prefix ar- for convenience. Since the hyperbolic functions are quadratic rational functions of the exponential function exp⁡x,{\displaystyle \exp x,} they may be solved using the quadratic formula and then written in terms of the natural logarithm. For complex arguments, the inverse circular and hyperbolic functions, the square root, and the natural logarithm are all multi-valued functions.
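Solving the quadratic in e^θ as described above yields the standard logarithmic forms. The sketch below (function names are mine; Python's library functions use the `a-` prefix mentioned above) checks these forms against the standard library on their real domains.

```python
import math

# Standard logarithmic forms of the inverse hyperbolic functions,
# obtained by solving a quadratic in e^theta.
def arsinh(x):
    return math.log(x + math.sqrt(x * x + 1))

def arcosh(x):                      # defined for x >= 1
    return math.log(x + math.sqrt(x * x - 1))

def artanh(x):                      # defined for |x| < 1
    return 0.5 * math.log((1 + x) / (1 - x))

for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert math.isclose(arsinh(x), math.asinh(x), abs_tol=1e-12)
for x in (1.0, 1.5, 10.0):
    assert math.isclose(arcosh(x), math.acosh(x), abs_tol=1e-12)
for x in (-0.9, 0.0, 0.9):
    assert math.isclose(artanh(x), math.atanh(x), abs_tol=1e-12)
```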
These formulas can be derived in terms of the derivatives of hyperbolic functions. For example, if x=sinh⁡θ{\displaystyle x=\sinh \theta }, then dx/dθ=cosh⁡θ=1+x2,{\textstyle dx/d\theta =\cosh \theta ={\sqrt {1+x^{2}}},} so

Expansion series can be obtained for the above functions:

An asymptotic expansion for arsinh is given by

As functions of a complex variable, inverse hyperbolic functions are multivalued functions that are analytic except at a finite number of points. For such a function, it is common to define a principal value, which is a single-valued analytic function that coincides with one specific branch of the multivalued function over a domain consisting of the complex plane from which a finite number of arcs (usually half lines or line segments) have been removed. These arcs are called branch cuts. The principal value of the multifunction is chosen at a particular point, and values elsewhere in the domain of definition are defined to agree with those found by analytic continuation. For example, for the square root, the principal value is defined as the square root that has a positive real part. This defines a single-valued analytic function, which is defined everywhere except for non-positive real values of the variable (where the two square roots have a zero real part). This principal value of the square root function is denoted x{\displaystyle {\sqrt {x}}} in what follows. Similarly, the principal value of the logarithm, denoted Log{\displaystyle \operatorname {Log} } in what follows, is defined as the value for which the imaginary part has the smallest absolute value. It is defined everywhere except for non-positive real values of the variable, for which two different values of the logarithm reach the minimum. For all inverse hyperbolic functions, the principal value may be defined in terms of the principal values of the square root and the logarithm function.
However, in some cases, the formulas of § Definitions in terms of logarithms do not give a correct principal value, since they give a domain of definition that is too small and, in one case, non-connected. The principal value of the inverse hyperbolic sine is given by

The argument of the square root is a non-positive real number if and only if z belongs to one of the intervals [i, +i∞) and (−i∞, −i] of the imaginary axis. If the argument of the logarithm is real, then it is positive. Thus this formula defines a principal value for arsinh, with branch cuts [i, +i∞) and (−i∞, −i]. This is optimal, as the branch cuts must connect the singular points i and −i to infinity. The formula for the inverse hyperbolic cosine given in § Inverse hyperbolic cosine is not convenient, since, similarly to the principal values of the logarithm and the square root, the principal value of arcosh would not be defined for imaginary z. Thus the square root has to be factorized, leading to

The principal values of the square roots are both defined, except if z belongs to the real interval (−∞, 1]. If the argument of the logarithm is real, then z is real and has the same sign. Thus, the above formula defines a principal value of arcosh outside the real interval (−∞, 1], which is thus the unique branch cut. The formulas given in § Definitions in terms of logarithms suggest definitions of the principal values of the inverse hyperbolic tangent and cotangent. In these formulas, the argument of the logarithm is real if and only if z is real. For artanh, this argument is in the real interval (−∞, 0] if z belongs either to (−∞, −1] or to [1, ∞). For arcoth, the argument of the logarithm is in (−∞, 0] if and only if z belongs to the real interval [−1, 1]. Therefore, these formulas define convenient principal values, for which the branch cuts are (−∞, −1] and [1, ∞) for the inverse hyperbolic tangent, and [−1, 1] for the inverse hyperbolic cotangent.
In view of a better numerical evaluation near the branch cuts, some authors[citation needed] use the following definitions of the principal values, although the second one introduces a removable singularity at z = 0.

The two definitions of artanh{\displaystyle \operatorname {artanh} } differ for real values of z{\displaystyle z} with z>1{\displaystyle z>1}. Those of arcoth{\displaystyle \operatorname {arcoth} } differ for real values of z{\displaystyle z} with z∈[0,1){\displaystyle z\in [0,1)}. For the inverse hyperbolic cosecant, the principal value is defined as

It is defined except when the arguments of the logarithm and the square root are non-positive real numbers. The principal value of the square root is thus defined outside the interval [−i, i] of the imaginary line. If the argument of the logarithm is real, then z is a non-zero real number, and this implies that the argument of the logarithm is positive. Thus, the principal value is defined by the above formula outside the branch cut, consisting of the interval [−i, i] of the imaginary line. (At z = 0, there is a singular point that is included in the branch cut.) Here, as in the case of the inverse hyperbolic cosine, we have to factorize the square root. This gives the principal value

If the argument of a square root is real, then z is real, and it follows that both principal values of square roots are defined, except if z is real and belongs to one of the intervals (−∞, 0] and [1, +∞). If the argument of the logarithm is real and negative, then z is also real and negative. It follows that the principal value of arsech is well defined by the above formula outside two branch cuts, the real intervals (−∞, 0] and [1, +∞). For z = 0, there is a singular point that is included in one of the branch cuts. In the following graphical representation of the principal values of the inverse hyperbolic functions, the branch cuts appear as discontinuities of the color.
The fact that the branch cuts appear as discontinuities in their entirety shows that these principal values cannot be extended into analytic functions defined over larger domains. In other words, the branch cuts defined above are minimal.
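The principal value of arsinh described above can be checked numerically. Python's `cmath` module follows the C99 branch-cut conventions, so off the cuts [i, +i∞) and (−i∞, −i] the logarithmic formula with principal square root and logarithm should agree with `cmath.asinh`; the helper name below is mine, and the sample points are chosen away from the cuts.

```python
import cmath

def arsinh_principal(z):
    """Principal value of arsinh via the principal square root and
    logarithm; valid off the branch cuts [i, +i*inf) and (-i*inf, -i]."""
    return cmath.log(z + cmath.sqrt(z * z + 1))

# Off the branch cuts, the formula agrees with the library function.
for z in (1 + 1j, -0.5 - 0.3j, 3 + 0j, -2 + 0.25j):
    assert cmath.isclose(arsinh_principal(z), cmath.asinh(z), abs_tol=1e-12)

# Round trip: sinh(arsinh(z)) recovers z.
for z in (1 + 1j, -0.5 - 0.3j):
    assert cmath.isclose(cmath.sinh(arsinh_principal(z)), z, abs_tol=1e-12)
```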
https://en.wikipedia.org/wiki/Inverse_hyperbolic_function
The following is a list of integrals (antiderivative functions) of hyperbolic functions. For a complete list of integral functions, see the list of integrals. In all formulas the constant a is assumed to be nonzero, and C denotes the constant of integration.
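Entries in such a list can be spot-checked numerically. The sketch below (helper name mine, midpoint rule chosen for simplicity) verifies two standard antiderivatives, ∫ sinh(ax) dx = cosh(ax)/a + C and ∫ cosh(ax) dx = sinh(ax)/a + C, against direct quadrature.

```python
import math

def midpoint_integral(f, lo, hi, n=100_000):
    """Simple midpoint-rule quadrature of f over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

a = 0.7

# Check: integral of sinh(a x) over [0, 2] equals (cosh(2a) - cosh(0)) / a.
exact_sinh = (math.cosh(a * 2) - math.cosh(0.0)) / a
numeric_sinh = midpoint_integral(lambda x: math.sinh(a * x), 0.0, 2.0)
assert abs(numeric_sinh - exact_sinh) < 1e-8

# Check: integral of cosh(a x) over [0, 2] equals (sinh(2a) - sinh(0)) / a.
exact_cosh = (math.sinh(a * 2) - math.sinh(0.0)) / a
numeric_cosh = midpoint_integral(lambda x: math.cosh(a * x), 0.0, 2.0)
assert abs(numeric_cosh - exact_cosh) < 1e-8
```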
https://en.wikipedia.org/wiki/List_of_integrals_of_hyperbolic_functions
In mathematics, Poinsot's spirals are two spirals represented by the polar equations

r=acsch⁡(nθ)andr=asech⁡(nθ),{\displaystyle r=a\operatorname {csch} (n\theta )\quad {\text{and}}\quad r=a\operatorname {sech} (n\theta ),}

where csch is the hyperbolic cosecant and sech is the hyperbolic secant.[1] They are named after the French mathematician Louis Poinsot.
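Assuming the polar forms r = a·csch(nθ) and r = a·sech(nθ) as commonly given for these curves, sample points can be generated by the usual polar-to-Cartesian conversion; the function name and parameter values below are mine and purely illustrative.

```python
import math

def poinsot_points(a, n, thetas, kind="csch"):
    """Cartesian sample points of a Poinsot spiral r = a*csch(n*theta)
    or r = a*sech(n*theta), assuming those standard polar forms."""
    pts = []
    for t in thetas:
        if kind == "csch":
            r = a / math.sinh(n * t)   # csch = 1/sinh; requires t != 0
        else:
            r = a / math.cosh(n * t)   # sech = 1/cosh
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts

thetas = [0.5 + 0.25 * k for k in range(10)]
csch_pts = poinsot_points(1.0, 2, thetas, "csch")
sech_pts = poinsot_points(1.0, 2, thetas, "sech")

# Both spirals wind toward the origin as theta grows (r -> 0).
assert math.hypot(*csch_pts[-1]) < math.hypot(*csch_pts[0])
assert math.hypot(*sech_pts[-1]) < math.hypot(*sech_pts[0])
```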
https://en.wikipedia.org/wiki/Poinsot%27s_spirals
In mathematics, an elliptic Gauss sum is an analog of a Gauss sum depending on an elliptic curve with complex multiplication. The quadratic residue symbol in a Gauss sum is replaced by a higher residue symbol, such as a cubic or quartic residue symbol, and the exponential function in a Gauss sum is replaced by an elliptic function. They were introduced by Eisenstein (1850), at least in the lemniscate case when the elliptic curve has complex multiplication by i, but seem to have been forgotten or ignored until the paper (Pinch 1988). (Lemmermeyer 2000, 9.3) gives the following example of an elliptic Gauss sum, for the case of an elliptic curve with complex multiplication by i,

where
https://en.wikipedia.org/wiki/Elliptic_Gauss_sum
In mathematics, the lemniscate constant ϖ is a transcendental mathematical constant that is the ratio of the perimeter of Bernoulli's lemniscate to its diameter, analogous to the definition of π for the circle.[1] Equivalently, the perimeter of the lemniscate (x2+y2)2=x2−y2{\displaystyle (x^{2}+y^{2})^{2}=x^{2}-y^{2}} is 2ϖ. The lemniscate constant is closely related to the lemniscate elliptic functions and is approximately equal to 2.62205755.[2] It also appears in the evaluation of the gamma and beta functions at certain rational values. The symbol ϖ is a cursive variant of π known as variant pi, represented in Unicode by the character U+03D6 ϖ GREEK PI SYMBOL. Sometimes the quantities 2ϖ or ϖ/2 are referred to as the lemniscate constant.[3][4] Gauss's constant, denoted by G, is equal to ϖ/π ≈ 0.8346268[5] and named after Carl Friedrich Gauss, who calculated it via the arithmetic–geometric mean as 1/M(1,2){\displaystyle 1/M{\bigl (}1,{\sqrt {2}}{\bigr )}}.[6] By 1799, Gauss had two proofs of the theorem that M(1,2)=π/ϖ{\displaystyle M{\bigl (}1,{\sqrt {2}}{\bigr )}=\pi /\varpi } where ϖ{\displaystyle \varpi } is the lemniscate constant.[7] John Todd named two more lemniscate constants, the first lemniscate constant A = ϖ/2 ≈ 1.3110287771 and the second lemniscate constant B = π/(2ϖ) ≈ 0.5990701173.[8][9][10] The lemniscate constant ϖ{\displaystyle \varpi } and Todd's first lemniscate constant A{\displaystyle A} were proven transcendental by Carl Ludwig Siegel in 1932 and later by Theodor Schneider in 1937, and Todd's second lemniscate constant B{\displaystyle B} and Gauss's constant G{\displaystyle G} were proven transcendental by Theodor Schneider in 1941.[8][11][12] In 1975, Gregory Chudnovsky proved that the set {π,ϖ}{\displaystyle \{\pi ,\varpi \}} is algebraically independent over Q{\displaystyle \mathbb {Q} }, which implies that A{\displaystyle A} and B{\displaystyle B} are algebraically independent as well.[13][14] But the set {π,M(1,1/2),M′(1,1/2)}{\displaystyle {\bigl \{}\pi ,M{\bigl (}1,1/{\sqrt {2}}{\bigr )},M'{\bigl (}1,1/{\sqrt {2}}{\bigr )}{\bigr
\}}}(where the prime denotes thederivativewith respect to the second variable) is not algebraically independent overQ{\displaystyle \mathbb {Q} }.[15]In 1996,Yuri Nesterenkoproved that the set{π,ϖ,eπ}{\displaystyle \{\pi ,\varpi ,e^{\pi }\}}is algebraically independent overQ{\displaystyle \mathbb {Q} }.[16] As of 2025 over 2 trillion digits of this constant have been calculated usingy-cruncher.[17] Usually,ϖ{\displaystyle \varpi }is defined by the first equality below, but it has many equivalent forms:[18] ϖ=2∫01dt1−t4=2∫0∞dt1+t4=∫01dtt−t3=∫1∞dtt3−t=4∫0∞(1+t44−t)dt=22∫011−t44dt=3∫011−t4dt=2K(i)=12B(14,12)=122B(14,14)=Γ(1/4)222π=2−24ζ(3/4)2ζ(1/4)2=2.62205755429211981046483958989111941…,{\displaystyle {\begin{aligned}\varpi &=2\int _{0}^{1}{\frac {\mathrm {d} t}{\sqrt {1-t^{4}}}}={\sqrt {2}}\int _{0}^{\infty }{\frac {\mathrm {d} t}{\sqrt {1+t^{4}}}}=\int _{0}^{1}{\frac {\mathrm {d} t}{\sqrt {t-t^{3}}}}=\int _{1}^{\infty }{\frac {\mathrm {d} t}{\sqrt {t^{3}-t}}}\\[6mu]&=4\int _{0}^{\infty }{\Bigl (}{\sqrt[{4}]{1+t^{4}}}-t{\Bigr )}\,\mathrm {d} t=2{\sqrt {2}}\int _{0}^{1}{\sqrt[{4}]{1-t^{4}}}\mathop {\mathrm {d} t} =3\int _{0}^{1}{\sqrt {1-t^{4}}}\,\mathrm {d} t\\[2mu]&=2K(i)={\tfrac {1}{2}}\mathrm {B} {\bigl (}{\tfrac {1}{4}},{\tfrac {1}{2}}{\bigr )}={\tfrac {1}{2{\sqrt {2}}}}\mathrm {B} {\bigl (}{\tfrac {1}{4}},{\tfrac {1}{4}}{\bigr )}={\frac {\Gamma (1/4)^{2}}{2{\sqrt {2\pi }}}}={\frac {2-{\sqrt {2}}}{4}}{\frac {\zeta (3/4)^{2}}{\zeta (1/4)^{2}}}\\[5mu]&=2.62205\;75542\;92119\;81046\;48395\;89891\;11941\ldots ,\end{aligned}}} whereKis thecomplete elliptic integral of the first kindwith modulusk,Βis thebeta function,Γis thegamma functionandζis theRiemann zeta function. 
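Several of the equivalent expressions above can be cross-checked numerically: the integral form 3∫₀¹√(1−t⁴) dt (a variant with a bounded integrand), the gamma-function form Γ(1/4)²/(2√(2π)), and Gauss's AGM relation M(1,√2) = π/ϖ. The helper names below are mine, and the midpoint rule is just one simple quadrature choice.

```python
import math

def agm(a, b, iterations=40):
    """Arithmetic–geometric mean M(a, b)."""
    for _ in range(iterations):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def integral_form(n=200_000):
    """3 * integral_0^1 sqrt(1 - t^4) dt by the midpoint rule; this
    variant of the defining integral has a bounded integrand."""
    h = 1.0 / n
    return 3 * h * sum(math.sqrt(1 - ((k + 0.5) * h) ** 4) for k in range(n))

varpi_gamma = math.gamma(0.25) ** 2 / (2 * math.sqrt(2 * math.pi))
varpi_agm = math.pi / agm(1.0, math.sqrt(2.0))

# All three agree with the tabulated digits 2.62205755429211981...
assert abs(varpi_gamma - 2.62205755429211981) < 1e-12
assert abs(varpi_agm - varpi_gamma) < 1e-10
assert abs(integral_form() - varpi_gamma) < 1e-6
```

The AGM iteration converges quadratically, so far fewer than 40 iterations suffice in double precision; the extra iterations are harmless once the two arguments coincide.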
The lemniscate constant can also be computed by thearithmetic–geometric meanM{\displaystyle M}, ϖ=πM(1,2).{\displaystyle \varpi ={\frac {\pi }{M{\bigl (}1,{\sqrt {2}}{\bigr )}}}.} Gauss's constant is typically defined as thereciprocalof thearithmetic–geometric meanof 1 and thesquare root of 2, after his calculation ofM(1,2){\displaystyle M{\bigl (}1,{\sqrt {2}}{\bigr )}}published in 1800:[19]G=1M(1,2){\displaystyle G={\frac {1}{M{\bigl (}1,{\sqrt {2}}{\bigr )}}}}John Todd's lemniscate constants may be given in terms of thebeta functionB:A=ϖ2=14B(14,12),B=π2ϖ=14B(12,34).{\displaystyle {\begin{aligned}A&={\frac {\varpi }{2}}={\tfrac {1}{4}}\mathrm {B} {\bigl (}{\tfrac {1}{4}},{\tfrac {1}{2}}{\bigr )},\\[3mu]B&={\frac {\pi }{2\varpi }}={\tfrac {1}{4}}\mathrm {B} {\bigl (}{\tfrac {1}{2}},{\tfrac {3}{4}}{\bigr )}.\end{aligned}}} β′(0)=log⁡ϖπ{\displaystyle \beta '(0)=\log {\frac {\varpi }{\sqrt {\pi }}}} which is analogous to ζ′(0)=log⁡12π{\displaystyle \zeta '(0)=\log {\frac {1}{\sqrt {2\pi }}}} whereβ{\displaystyle \beta }is theDirichlet beta functionandζ{\displaystyle \zeta }is theRiemann zeta function.[20] Analogously to theLeibniz formula for π,β(1)=∑n=1∞χ(n)n=π4,{\displaystyle \beta (1)=\sum _{n=1}^{\infty }{\frac {\chi (n)}{n}}={\frac {\pi }{4}},}we have[21][22][23][24][25]L(E,1)=∑n=1∞ν(n)n=ϖ4{\displaystyle L(E,1)=\sum _{n=1}^{\infty }{\frac {\nu (n)}{n}}={\frac {\varpi }{4}}}whereL{\displaystyle L}is theL-functionof theelliptic curveE:y2=x3−x{\displaystyle E:\,y^{2}=x^{3}-x}overQ{\displaystyle \mathbb {Q} }; this means thatν{\displaystyle \nu }is themultiplicative functiongiven byν(pn)={p−Np,p∈P,n=10,p=2,n≥2ν(p)ν(pn−1)−pν(pn−2),p∈P∖{2},n≥2{\displaystyle \nu (p^{n})={\begin{cases}p-{\mathcal {N}}_{p},&p\in \mathbb {P} ,\,n=1\\[5mu]0,&p=2,\,n\geq 2\\[5mu]\nu (p)\nu (p^{n-1})-p\nu (p^{n-2}),&p\in \mathbb {P} \setminus \{2\},\,n\geq 2\end{cases}}}whereNp{\displaystyle {\mathcal {N}}_{p}}is the number of solutions of the congruencea3−a≡b2(mod⁡p),p∈P{\displaystyle 
a^{3}-a\equiv b^{2}\,(\operatorname {mod} p),\quad p\in \mathbb {P} }in variablesa,b{\displaystyle a,b}that are non-negative integers (P{\displaystyle \mathbb {P} }is the set of all primes). Equivalently,ν{\displaystyle \nu }is given byF(τ)=η(4τ)2η(8τ)2=∑n=1∞ν(n)qn,q=e2πiτ{\displaystyle F(\tau )=\eta (4\tau )^{2}\eta (8\tau )^{2}=\sum _{n=1}^{\infty }\nu (n)q^{n},\quad q=e^{2\pi i\tau }}whereτ∈C{\displaystyle \tau \in \mathbb {C} }such thatℑ⁡τ>0{\displaystyle \operatorname {\Im } \tau >0}andη{\displaystyle \eta }is theeta function.[26][27][28]The above result can be equivalently written as∑n=1∞ν(n)ne−2πn/32=ϖ8{\displaystyle \sum _{n=1}^{\infty }{\frac {\nu (n)}{n}}e^{-2\pi n/{\sqrt {32}}}={\frac {\varpi }{8}}}(the number32{\displaystyle 32}is theconductorofE{\displaystyle E}) and also tells us that theBSD conjectureis true for the aboveE{\displaystyle E}.[29]The first few values ofν{\displaystyle \nu }are given by the following table; if1≤n≤113{\displaystyle 1\leq n\leq 113}such thatn{\displaystyle n}doesn't appear in the table, thenν(n)=0{\displaystyle \nu (n)=0}:nν(n)nν(n)1153145−261−109−365−1213673−617281925−185−429−10891037−297184110101−2456109649−7113−14{\displaystyle {\begin{array}{r|r|r|r}n&\nu (n)&n&\nu (n)\\\hline 1&1&53&14\\5&-2&61&-10\\9&-3&65&-12\\13&6&73&-6\\17&2&81&9\\25&-1&85&-4\\29&-10&89&10\\37&-2&97&18\\41&10&101&-2\\45&6&109&6\\49&-7&113&-14\\\end{array}}} LetΔ{\displaystyle \Delta }be the minimal weight level1{\displaystyle 1}new form. Then[30]Δ(i)=164(ϖπ)12.{\displaystyle \Delta (i)={\frac {1}{64}}\left({\frac {\varpi }{\pi }}\right)^{12}.}Theq{\displaystyle q}-coefficient ofΔ{\displaystyle \Delta }is theRamanujan tau function. 
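The values ν(p) = p − 𝒩_p in the table can be reproduced directly by counting solutions of the congruence a³ − a ≡ b² (mod p); a brute-force sketch (the function name is illustrative):

```python
def nu(p):
    """nu(p) = p - N_p for prime p, where N_p counts pairs (a, b) with
    0 <= a, b < p and a^3 - a congruent to b^2 modulo p."""
    n_p = sum(1 for a in range(p) for b in range(p)
              if (a ** 3 - a) % p == (b * b) % p)
    return p - n_p
```

For example, `nu(5)` gives −2 and `nu(13)` gives 6, matching the table, while primes p ≡ 3 (mod 4) such as 7 and 11 give 0 (and so do not appear in the table).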
Viète's formulaforπcan be written: 2π=12⋅12+1212⋅12+1212+1212⋯{\displaystyle {\frac {2}{\pi }}={\sqrt {\frac {1}{2}}}\cdot {\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {\frac {1}{2}}}}}\cdot {\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {\frac {1}{2}}}}}}}\cdots } An analogous formula forϖis:[31] 2ϖ=12⋅12+12/12⋅12+12/12+12/12⋯{\displaystyle {\frac {2}{\varpi }}={\sqrt {\frac {1}{2}}}\cdot {\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\bigg /}\!{\sqrt {\frac {1}{2}}}}}\cdot {\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\Bigg /}\!{\sqrt {{\frac {1}{2}}+{\frac {1}{2}}{\bigg /}\!{\sqrt {\frac {1}{2}}}}}}}\cdots } TheWallis productforπis: π2=∏n=1∞(1+1n)(−1)n+1=∏n=1∞(2n2n−1⋅2n2n+1)=(21⋅23)(43⋅45)(65⋅67)⋯{\displaystyle {\frac {\pi }{2}}=\prod _{n=1}^{\infty }\left(1+{\frac {1}{n}}\right)^{(-1)^{n+1}}=\prod _{n=1}^{\infty }\left({\frac {2n}{2n-1}}\cdot {\frac {2n}{2n+1}}\right)={\biggl (}{\frac {2}{1}}\cdot {\frac {2}{3}}{\biggr )}{\biggl (}{\frac {4}{3}}\cdot {\frac {4}{5}}{\biggr )}{\biggl (}{\frac {6}{5}}\cdot {\frac {6}{7}}{\biggr )}\cdots } An analogous formula forϖis:[32] ϖ2=∏n=1∞(1+12n)(−1)n+1=∏n=1∞(4n−14n−2⋅4n4n+1)=(32⋅45)(76⋅89)(1110⋅1213)⋯{\displaystyle {\frac {\varpi }{2}}=\prod _{n=1}^{\infty }\left(1+{\frac {1}{2n}}\right)^{(-1)^{n+1}}=\prod _{n=1}^{\infty }\left({\frac {4n-1}{4n-2}}\cdot {\frac {4n}{4n+1}}\right)={\biggl (}{\frac {3}{2}}\cdot {\frac {4}{5}}{\biggr )}{\biggl (}{\frac {7}{6}}\cdot {\frac {8}{9}}{\biggr )}{\biggl (}{\frac {11}{10}}\cdot {\frac {12}{13}}{\biggr )}\cdots } A related result for Gauss's constant (G=ϖ/π{\displaystyle G=\varpi /\pi }) is:[33] ϖπ=∏n=1∞(4n−14n⋅4n+24n+1)=(34⋅65)(78⋅109)(1112⋅1413)⋯{\displaystyle {\frac {\varpi }{\pi }}=\prod _{n=1}^{\infty }\left({\frac {4n-1}{4n}}\cdot {\frac {4n+2}{4n+1}}\right)={\biggl (}{\frac {3}{4}}\cdot {\frac {6}{5}}{\biggr )}{\biggl (}{\frac {7}{8}}\cdot {\frac {10}{9}}{\biggr )}{\biggl (}{\frac {11}{12}}\cdot {\frac {14}{13}}{\biggr )}\cdots } An infinite series discovered by 
Gauss is:[34] ϖπ=∑n=0∞(−1)n∏k=1n(2k−1)2(2k)2=1−1222+12⋅3222⋅42−12⋅32⋅5222⋅42⋅62+⋯{\displaystyle {\frac {\varpi }{\pi }}=\sum _{n=0}^{\infty }(-1)^{n}\prod _{k=1}^{n}{\frac {(2k-1)^{2}}{(2k)^{2}}}=1-{\frac {1^{2}}{2^{2}}}+{\frac {1^{2}\cdot 3^{2}}{2^{2}\cdot 4^{2}}}-{\frac {1^{2}\cdot 3^{2}\cdot 5^{2}}{2^{2}\cdot 4^{2}\cdot 6^{2}}}+\cdots } TheMachin formulaforπis14π=4arctan⁡15−arctan⁡1239,{\textstyle {\tfrac {1}{4}}\pi =4\arctan {\tfrac {1}{5}}-\arctan {\tfrac {1}{239}},}and several similar formulas forπcan be developed using trigonometric angle sum identities, e.g. Euler's formula14π=arctan⁡12+arctan⁡13{\textstyle {\tfrac {1}{4}}\pi =\arctan {\tfrac {1}{2}}+\arctan {\tfrac {1}{3}}}. Analogous formulas can be developed forϖ, including the following found by Gauss:12ϖ=2arcsl⁡12+arcsl⁡723{\displaystyle {\tfrac {1}{2}}\varpi =2\operatorname {arcsl} {\tfrac {1}{2}}+\operatorname {arcsl} {\tfrac {7}{23}}}, wherearcsl{\displaystyle \operatorname {arcsl} }is thelemniscate arcsine.[35] The lemniscate constant can be rapidly computed by the series[36][37] wherepn=12(3n2−n){\displaystyle p_{n}={\tfrac {1}{2}}(3n^{2}-n)}(these are thegeneralized pentagonal numbers). Also[38] In a spirit similar to that of theBasel problem, whereZ[i]{\displaystyle \mathbb {Z} [i]}are theGaussian integersandG4{\displaystyle G_{4}}is theEisenstein seriesof weight⁠4{\displaystyle 4}⁠(seeLemniscate elliptic functions § Hurwitz numbersfor a more general result).[39] A related result is whereσ3{\displaystyle \sigma _{3}}is thesum of positive divisors function.[40] In 1842,Malmstenfound whereγ{\displaystyle \gamma }isEuler's constantandβ(s){\displaystyle \beta (s)}is the Dirichlet-Beta function. 
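Both the Wallis-type product for ϖ/2 and Gauss's alternating series for ϖ/π converge slowly (the error after N factors or terms decays roughly like 1/N), but each is easy to check numerically; a sketch:

```python
# Wallis-type product for varpi/2: prod of (4n-1)/(4n-2) * 4n/(4n+1)
prod = 1.0
for n in range(1, 100_001):
    prod *= (4 * n - 1) / (4 * n - 2) * (4 * n) / (4 * n + 1)
varpi_est = 2 * prod  # ~2.62205 (partial product, converging from below)

# Gauss's alternating series for varpi/pi
total, term, sign = 0.0, 1.0, 1.0  # term = prod_{k<=n} ((2k-1)/(2k))^2
for n in range(200_000):
    total += sign * term
    term *= ((2 * n + 1) / (2 * n + 2)) ** 2
    sign = -sign
# total ~ 0.8346268, in agreement with Gauss's constant varpi/pi
```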
The lemniscate constant is given by the rapidly converging series ϖ=π324e−π3(∑n=−∞∞(−1)ne−2nπ(3n+1))2.{\displaystyle \varpi =\pi {\sqrt[{4}]{32}}e^{-{\frac {\pi }{3}}}{\biggl (}\sum _{n=-\infty }^{\infty }(-1)^{n}e^{-2n\pi (3n+1)}{\biggr )}^{2}.} The constant is also given by theinfinite product Also[41] A (generalized)continued fractionforπisπ2=1+11+1⋅21+2⋅31+3⋅41+⋱{\displaystyle {\frac {\pi }{2}}=1+{\cfrac {1}{1+{\cfrac {1\cdot 2}{1+{\cfrac {2\cdot 3}{1+{\cfrac {3\cdot 4}{1+\ddots }}}}}}}}}An analogous formula forϖis[9]ϖ2=1+12+2⋅32+4⋅52+6⋅72+⋱{\displaystyle {\frac {\varpi }{2}}=1+{\cfrac {1}{2+{\cfrac {2\cdot 3}{2+{\cfrac {4\cdot 5}{2+{\cfrac {6\cdot 7}{2+\ddots }}}}}}}}} DefineBrouncker's continued fractionby[42]b(s)=s+122s+322s+522s+⋱,s>0.{\displaystyle b(s)=s+{\cfrac {1^{2}}{2s+{\cfrac {3^{2}}{2s+{\cfrac {5^{2}}{2s+\ddots }}}}}},\quad s>0.}Letn≥0{\displaystyle n\geq 0}except for the first equality wheren≥1{\displaystyle n\geq 1}. Then[43][44]b(4n)=(4n+1)∏k=1n(4k−1)2(4k−3)(4k+1)πϖ2b(4n+1)=(2n+1)∏k=1n(2k)2(2k−1)(2k+1)4πb(4n+2)=(4n+1)∏k=1n(4k−3)(4k+1)(4k−1)2ϖ2πb(4n+3)=(2n+1)∏k=1n(2k−1)(2k+1)(2k)2π.{\displaystyle {\begin{aligned}b(4n)&=(4n+1)\prod _{k=1}^{n}{\frac {(4k-1)^{2}}{(4k-3)(4k+1)}}{\frac {\pi }{\varpi ^{2}}}\\b(4n+1)&=(2n+1)\prod _{k=1}^{n}{\frac {(2k)^{2}}{(2k-1)(2k+1)}}{\frac {4}{\pi }}\\b(4n+2)&=(4n+1)\prod _{k=1}^{n}{\frac {(4k-3)(4k+1)}{(4k-1)^{2}}}{\frac {\varpi ^{2}}{\pi }}\\b(4n+3)&=(2n+1)\prod _{k=1}^{n}{\frac {(2k-1)(2k+1)}{(2k)^{2}}}\,\pi .\end{aligned}}}For example,b(1)=4π,b(2)=ϖ2π,b(3)=π,b(4)=9πϖ2.{\displaystyle {\begin{aligned}b(1)&={\frac {4}{\pi }},&b(2)&={\frac {\varpi ^{2}}{\pi }},&b(3)&=\pi ,&b(4)&={\frac {9\pi }{\varpi ^{2}}}.\end{aligned}}} In fact, the values ofb(1){\displaystyle b(1)}andb(2){\displaystyle b(2)}, coupled with the functional equationb(s+2)=(s+1)2b(s),{\displaystyle b(s+2)={\frac {(s+1)^{2}}{b(s)}},}determine the values ofb(n){\displaystyle b(n)}for alln{\displaystyle n}. 
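Brouncker's continued fraction b(s) can be evaluated by truncating at some depth and folding the fraction from the bottom up; a sketch that can be used to check b(1) = 4/π, b(2) = ϖ²/π, and the functional equation b(s + 2) = (s + 1)²/b(s) (the depth and function name are illustrative; convergence is slow, roughly O(1/depth)):

```python
def brouncker(s, depth=100_000):
    """Evaluate b(s) = s + 1^2/(2s + 3^2/(2s + 5^2/(2s + ...))), truncated."""
    val = 2 * s
    for k in range(depth, 0, -1):
        val = 2 * s + (2 * k + 1) ** 2 / val
    return s + 1.0 / val
```

For example, `brouncker(1)` approximates 4/π ≈ 1.2732 and `brouncker(3)` approximates π, consistent with the table of values above.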
Simple continued fractions for the lemniscate constant and related constants include[45][46]ϖ=[2,1,1,1,1,1,4,1,2,…],2ϖ=[5,4,10,2,1,2,3,29,…],ϖ2=[1,3,4,1,1,1,5,2,…],ϖπ=[0,1,5,21,3,4,14,…].{\displaystyle {\begin{aligned}\varpi &=[2,1,1,1,1,1,4,1,2,\ldots ],\\[8mu]2\varpi &=[5,4,10,2,1,2,3,29,\ldots ],\\[5mu]{\frac {\varpi }{2}}&=[1,3,4,1,1,1,5,2,\ldots ],\\[2mu]{\frac {\varpi }{\pi }}&=[0,1,5,21,3,4,14,\ldots ].\end{aligned}}} The lemniscate constantϖis related to the area under the curvex4+y4=1{\displaystyle x^{4}+y^{4}=1}. Definingπn:=B(1n,1n){\displaystyle \pi _{n}\mathrel {:=} \mathrm {B} {\bigl (}{\tfrac {1}{n}},{\tfrac {1}{n}}{\bigr )}}, twice the area in the positive quadrant under the curvexn+yn=1{\displaystyle x^{n}+y^{n}=1}is2∫011−xnndx=1nπn.{\textstyle 2\int _{0}^{1}{\sqrt[{n}]{1-x^{n}}}\mathop {\mathrm {d} x} ={\tfrac {1}{n}}\pi _{n}.}In the quartic case,14π4=12ϖ.{\displaystyle {\tfrac {1}{4}}\pi _{4}={\tfrac {1}{\sqrt {2}}}\varpi .} In 1842, Malmsten discovered that[47] ∫01log⁡(−log⁡x)1+x2dx=π2log⁡πϖ2.{\displaystyle \int _{0}^{1}{\frac {\log(-\log x)}{1+x^{2}}}\,dx={\frac {\pi }{2}}\log {\frac {\pi }{\varpi {\sqrt {2}}}}.} Furthermore,∫0∞tanh⁡xxe−xdx=log⁡ϖ2π{\displaystyle \int _{0}^{\infty }{\frac {\tanh x}{x}}e^{-x}\,dx=\log {\frac {\varpi ^{2}}{\pi }}} and[48] ∫0∞e−x4dx=2ϖ2π4,analogous to∫0∞e−x2dx=π2,{\displaystyle \int _{0}^{\infty }e^{-x^{4}}\,dx={\frac {\sqrt {2\varpi {\sqrt {2\pi }}}}{4}},\quad {\text{analogous to}}\,\int _{0}^{\infty }e^{-x^{2}}\,dx={\frac {\sqrt {\pi }}{2}},}a form ofGaussian integral. 
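The simple continued fraction expansions listed above can be recovered from the decimal values by the usual floor-and-reciprocate algorithm; double-precision floating point is reliable only for roughly the first dozen partial quotients. A sketch:

```python
import math

def cf_terms(x, n):
    """First n simple continued fraction terms of x (floor and reciprocate)."""
    terms = []
    for _ in range(n):
        a = math.floor(x)
        terms.append(a)
        x = 1.0 / (x - a)
    return terms

print(cf_terms(2.6220575542921198, 9))  # [2, 1, 1, 1, 1, 1, 4, 1, 2]
```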
The lemniscate constant appears in the evaluation of the integrals πϖ=∫0π2sin⁡(x)dx=∫0π2cos⁡(x)dx{\displaystyle {\frac {\pi }{\varpi }}=\int _{0}^{\frac {\pi }{2}}{\sqrt {\sin(x)}}\,dx=\int _{0}^{\frac {\pi }{2}}{\sqrt {\cos(x)}}\,dx} ϖπ=∫0∞dxcosh⁡(πx){\displaystyle {\frac {\varpi }{\pi }}=\int _{0}^{\infty }{\frac {dx}{\sqrt {\cosh(\pi x)}}}} John Todd's lemniscate constants are defined by integrals:[8] A=∫01dx1−x4{\displaystyle A=\int _{0}^{1}{\frac {dx}{\sqrt {1-x^{4}}}}} B=∫01x2dx1−x4{\displaystyle B=\int _{0}^{1}{\frac {x^{2}\,dx}{\sqrt {1-x^{4}}}}} The lemniscate constant satisfies the equation[49] πϖ=2∫01x2dx1−x4{\displaystyle {\frac {\pi }{\varpi }}=2\int _{0}^{1}{\frac {x^{2}\,dx}{\sqrt {1-x^{4}}}}} Euler discovered in 1738 that for the rectangular elastica (first and second lemniscate constants)[50][49] arclength⋅height=A⋅B=∫01dx1−x4⋅∫01x2dx1−x4=ϖ2⋅π2ϖ=π4{\displaystyle {\textrm {arc}}\ {\textrm {length}}\cdot {\textrm {height}}=A\cdot B=\int _{0}^{1}{\frac {\mathrm {d} x}{\sqrt {1-x^{4}}}}\cdot \int _{0}^{1}{\frac {x^{2}\mathop {\mathrm {d} x} }{\sqrt {1-x^{4}}}}={\frac {\varpi }{2}}\cdot {\frac {\pi }{2\varpi }}={\frac {\pi }{4}}} Now considering the circumferenceC{\displaystyle C}of the ellipse with axes2{\displaystyle {\sqrt {2}}}and1{\displaystyle 1}, satisfying2x2+4y2=1{\displaystyle 2x^{2}+4y^{2}=1}, Stirling noted that[51] C2=∫01dx1−x4+∫01x2dx1−x4{\displaystyle {\frac {C}{2}}=\int _{0}^{1}{\frac {dx}{\sqrt {1-x^{4}}}}+\int _{0}^{1}{\frac {x^{2}\,dx}{\sqrt {1-x^{4}}}}} Hence the full circumference is C=πϖ+ϖ=3.820197789…{\displaystyle C={\frac {\pi }{\varpi }}+\varpi =3.820197789\ldots } This is also the arc length of thesinecurve on half a period:[52] C=∫0π1+cos2⁡(x)dx{\displaystyle C=\int _{0}^{\pi }{\sqrt {1+\cos ^{2}(x)}}\,dx} Analogously to2π=limn→∞|(2n)!B2n|12n{\displaystyle 2\pi =\lim _{n\to \infty }\left|{\frac {(2n)!}{\mathrm {B} _{2n}}}\right|^{\frac {1}{2n}}}whereBn{\displaystyle \mathrm {B} _{n}}areBernoulli numbers, we 
have2ϖ=limn→∞((4n)!H4n)14n{\displaystyle 2\varpi =\lim _{n\to \infty }\left({\frac {(4n)!}{\mathrm {H} _{4n}}}\right)^{\frac {1}{4n}}}whereHn{\displaystyle \mathrm {H} _{n}}areHurwitz numbers.
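The circumference value C = π/ϖ + ϖ above equals the arc length of the sine curve over half a period, and since the integrand √(1 + cos²x) is smooth, simple quadrature confirms it quickly. A sketch using composite Simpson's rule (the subinterval count is illustrative):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# Arc length of sin(x) over [0, pi]: should equal pi/varpi + varpi
C = simpson(lambda x: math.sqrt(1 + math.cos(x) ** 2), 0, math.pi)
# C ~ 3.820197789
```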
https://en.wikipedia.org/wiki/Lemniscate_constant
The Peirce quincuncial projection is the conformal map projection from the sphere to an unfolded square dihedron, developed by Charles Sanders Peirce in 1879.[1] Each octant projects onto an isosceles right triangle, and these are arranged into a square. The name quincuncial refers to this arrangement: the north pole at the center and quarters of the south pole in the corners form a quincunx pattern like the pips on the five face of a traditional die. The projection has the distinctive property that it forms a seamless square tiling of the plane, conformal except at four singular points along the equator. Typically the projection is square and oriented such that the north pole lies at the center, but an oblique aspect in a rectangle was proposed by Émile Guyou in 1887, and a transverse aspect was proposed by Oscar S. Adams in 1925. The projection has seen use in digital photography for portraying spherical panoramas. The maturation of complex analysis led to general techniques for conformal mapping, where points of a flat surface are handled as numbers on the complex plane. While working at the United States Coast and Geodetic Survey, the American philosopher Charles Sanders Peirce published his projection in 1879,[2] having been inspired by H. A. Schwarz's 1869 conformal transformation of a circle onto a polygon of n sides (known as the Schwarz–Christoffel mapping). In the normal aspect, Peirce's projection presents the Northern Hemisphere in a square; the Southern Hemisphere is split into four isosceles triangles symmetrically surrounding the first one, akin to star-like projections. In effect, the whole map is a square, inspiring Peirce to call his projection quincuncial, after the arrangement of five items in a quincunx.
After Peirce presented his projection, two other cartographers developed similar projections of the hemisphere (or the whole sphere, after a suitable rearrangement) on a square: Guyou in 1887 and Adams in 1925.[3][4]The three projections aretransversalversions of each other (see related projections below). The Peirce quincuncial projection is "formed by transforming thestereographic projectionwith a pole at infinity, by means of an elliptic function".[5]The Peirce quincuncial is really a projection of the hemisphere, but its tessellation properties (see below) permit its use for the entire sphere. The projection maps the interior of a circle onto the interior of a square by means of theSchwarz–Christoffel mapping, as follows:[4] sd⁡(2w,12)=2r{\displaystyle \operatorname {sd} \left({\sqrt {2}}w,{\frac {1}{\sqrt {2}}}\right)={\sqrt {2}}\,r} where Anelliptic integralof the first kind can be used to solve forw. The comma notation used forsd(u,k)means that⁠12{\displaystyle {\tfrac {1}{\sqrt {2}}}}⁠is themodulusfor the elliptic function ratio, as opposed to theparameter[which would be writtensd(u|m)] or theamplitude[which would be writtensd(u\α)]. The mapping has a scale factor of 1/2 at the center, like the generating stereographic projection. Note that:sd⁡(2w,12)=2sl⁡(w){\displaystyle \operatorname {sd} \left({\sqrt {2}}w,{\frac {1}{\sqrt {2}}}\right)={\sqrt {2}}\operatorname {sl} \left(w\right)}is thelemniscaticsine function (seeLemniscate elliptic functions). According to Peirce, his projection has the following properties (Peirce, 1879): The projectiontessellatesthe plane; i.e., repeated copies can completely cover (tile) an arbitrary area, each copy's features exactly matching those of its neighbors. (See the example to the right). 
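The lemniscatic sine sl appearing in the projection formula can be computed without special-function libraries: it satisfies (sl′)² = 1 − sl⁴, hence the initial value problem sl″ = −2 sl³ with sl(0) = 0, sl′(0) = 1, and it reaches its first maximum sl(ϖ/2) = 1, where ϖ ≈ 2.62205755 is the lemniscate constant. A minimal fourth-order Runge–Kutta sketch (step count and function name are illustrative):

```python
def sl(w, steps=20_000):
    """Lemniscatic sine via RK4 on  y'' = -2*y**3,  y(0) = 0, y'(0) = 1."""
    h = w / steps
    y, v = 0.0, 1.0
    f = lambda y, v: (v, -2.0 * y ** 3)  # (y', v') for the first-order system
    for _ in range(steps):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

varpi = 2.6220575542921198
print(sl(varpi / 2))  # ~1.0, the maximum of the lemniscatic sine
```

Likewise `sl(varpi)` returns approximately 0, reflecting that sl has period 2ϖ with zeros at integer multiples of ϖ.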
Furthermore, the four triangles of the second hemisphere of the Peirce quincuncial projection can be rearranged as another square that is placed next to the square that corresponds to the first hemisphere, resulting in a rectangle with an aspect ratio of 2:1; this arrangement is equivalent to the transverse aspect of the Guyou hemisphere-in-a-square projection.[6] Like many other projections based upon complex numbers, the Peirce quincuncial has rarely been used for geographic purposes. One of the few recorded cases is a 1946 U.S. Coast and Geodetic Survey world map of air routes.[6] It has been used recently to present spherical panoramas for practical as well as aesthetic purposes, where it can present the entire sphere with most areas being recognizable.[7] In transverse aspect, one hemisphere becomes the Adams hemisphere-in-a-square projection (the pole is placed at the corner of the square). Its four singularities are at the North Pole, the South Pole, on the equator at 25°W, and on the equator at 155°E, in the Arctic, Atlantic, and Pacific oceans, and in Antarctica.[8] That great circle divides the traditional Western and Eastern hemispheres. In oblique aspect (rotated 45 degrees), one hemisphere becomes the Guyou hemisphere-in-a-square projection (the pole is placed in the middle of the edge of the square). Its four singularities are at 45 degrees north and south latitude on the great circle composed of the 20°W and 160°E meridians, in the Atlantic and Pacific oceans.[8] That great circle divides the traditional western and eastern hemispheres.
https://en.wikipedia.org/wiki/Peirce_quincuncial_projection
Levi ben Gershon(1288 – 20 April 1344), better known by hisGraecizedname asGersonides, or by his Latinized nameMagister Leo Hebraeus,[1]or inHebrewby the abbreviation of first letters asRaLBaG,[2]was a medieval French Jewishphilosopher, Talmudist,mathematician,physicianandastronomer/astrologer. He was born atBagnolsinLanguedoc,France. According toAbraham Zacutoand others, he was the son ofGerson ben Solomon Catalan. As in the case of the othermedievalJewish philosophers, little is known of his life. His family had been distinguished for piety and exegetical skill in Talmud, but though he was known in the Jewish community by commentaries on certain books of theBible, he never seems to have accepted any rabbinical post. It has been suggested[by whom?]that the uniqueness of his opinions may have put obstacles in the way of his advancement to a higher position or office. He is known to have been atAvignonand Orange during his life, and is believed to have died in 1344, thoughZacutoasserts that he died atPerpignanin 1370. Gersonides is known for his unorthodox views and rigidAristotelianism, which eventually led him to rationalize many of the miracles in the Bible. His commentary on the Bible was sharply criticized by the most prominent scholars, such asAbarbanel,Chisdai Crescas, andRivash, the latter accusing him ofheresyand almost banning his works.[3] Part of his writings consist of commentaries on the portions ofAristotlethen known, or rather of commentaries on the commentaries ofAverroes. Some of these are printed in the earlyLatineditions of Aristotle's works. His most important treatise, that by which he has a place in the history of philosophy, is entitledSefer Milhamot Ha-Shem, ("The Wars of the Lord"), and occupied twelve years in composition (1317–1329). A portion of it, containing an elaborate survey ofastronomyas known to theArabs, was translated into Latin in 1342 at the request ofPope Clement VI. 
The Wars of the Lordis modeled after the plan of the great work of Jewish philosophy, theGuide for the PerplexedofMaimonides. It may be regarded as a criticism of some elements of Maimonides'syncretismof Aristotelianism and rabbinic Jewish thought. Ralbag's treatise strictly adhered to Aristotelian thought.[4]TheWars of the Lordreview: Gersonides was also the author of commentaries on thePentateuch,Joshua,Judges,I & II Samuel,I & II Kings,Proverbs,Job,Ecclesiastes,Song of Songs,Ruth,Esther,Daniel,Ezra-Nehemiah, andChronicles. He makes reference to a commentary onIsaiah, but it is not extant. In contrast to thetheologyheld by other Jewish thinkers, Jewish theologianLouis Jacobsargues, Gersonides held that God does not have complete foreknowledge of human acts. "Gersonides, bothered by the old question of how God'sforeknowledgeis compatible with human freedom, holds that what God knows beforehand is all the choices open to each individual. God does not know, however, which choice the individual, in his freedom, will make."[5] Another neoclassical Jewish proponent of self-limited omniscience wasAbraham ibn Daud. "Whereas the earlier Jewish philosophers extended theomniscienceof God to include the free acts of man, and had argued that human freedom of decision was not affected by God's foreknowledge of its results, Ibn Daud, evidently followingAlexander of Aphrodisias, excludes human action from divine foreknowledge. God, he holds, limited his omniscience even as He limited His omnipotence in regard to human acts".[6] RabbiYeshayahu Horowitzexplained the apparent paradox of his position by citing the old question, "Can God create a rock so heavy that He cannot pick it up?" He said that we cannot accept free choice as a creation of God's, and simultaneously question its logical compatibility with omnipotence. See further discussion inFree will in Jewish thought. 
Gersonides posits that people's souls are composed of two parts: a material, or human, intellect; and an acquired, or agent, intellect. The material intellect is inherent in every person, and gives people the capacity to understand and learn. This material intellect is mortal, and dies with the body. However, he also posits that the soul also has an acquired intellect. This survives death, and can contain the accumulated knowledge that the person acquired during his lifetime. For Gersonides,Seymour Feldmanpoints out, Man is immortal insofar as he attains the intellectual perfection that is open to him. This means that man becomes immortal only if and to the extent that he acquires knowledge of what he can in principle know, e.g. mathematics and the natural sciences. This knowledge survives his bodily death and constitutes his immortality.[8] Gersonides was the author of the following Talmudic and halakhic works: Only the first work is extant.[9] Gersonides was the first to make a number of major mathematical and scientific advances, though since he wrote only in Hebrew and few of his writings were translated to other languages, his influence on non-Jewish thought was limited.[10] Gersonides wroteMaaseh Hoshevin 1321 dealing with arithmetical operations including extraction ofsquareandcube roots, various algebraic identities, certain sums including sums of consecutive integers, squares, and cubes, binomial coefficients, and simple combinatorial identities. The work is notable for its early use of proof bymathematical induction, and pioneering work in combinatorics.[11]The title Maaseh Hoshev literally means the Work of the thinker, but it is also a pun on a biblical phrase meaning "clever work". Maaseh Hoshev is sometimes mistakenly referred to as Sefer Hamispar (The Book of Number), which is an earlier and less sophisticated work byRabbi Abraham ben Meir ibn Ezra(1090–1167). 
In 1342, Gersonides wrote On Sines, Chords and Arcs, which examined trigonometry, in particular proving the sine law for plane triangles and giving five-figure sine tables.[10] One year later, at the request of the bishop of Meaux, he wrote The Harmony of Numbers, in which he considers a problem of Philippe de Vitry involving so-called harmonic numbers, which have the form 2^m·3^n. The problem was to characterize all pairs of harmonic numbers differing by 1. Gersonides proved that there are only four such pairs: (1,2), (2,3), (3,4) and (8,9).[12] He is also credited with inventing the Jacob's staff,[13] an instrument to measure the angular distance between celestial objects. It is described as consisting …of a staff of 4.5 feet (1.4 m) long and about one inch (2.5 cm) wide, with six or seven perforated tablets which could slide along the staff, each tablet being an integral fraction of the staff length to facilitate calculation, used to measure the distance between stars or planets, and the altitudes and diameters of the Sun, Moon and stars.[14]
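Gersonides' theorem on harmonic numbers is easy to check by brute force well beyond any plausible pair; a sketch that enumerates the numbers of the form 2^m·3^n up to a bound and collects the pairs differing by 1 (the bound and function name are illustrative):

```python
def harmonic_pairs(limit):
    """All pairs (h, h+1) of harmonic numbers 2^m * 3^n up to limit."""
    harmonic = set()
    p2 = 1
    while p2 <= limit:
        p23 = p2
        while p23 <= limit:
            harmonic.add(p23)
            p23 *= 3
        p2 *= 2
    return sorted((h, h + 1) for h in harmonic if h + 1 in harmonic)

print(harmonic_pairs(10 ** 9))  # [(1, 2), (2, 3), (3, 4), (8, 9)]
```

The search finds exactly the four pairs Gersonides proved to be the only ones (his proof, of course, covers all harmonic numbers, not just those below a bound).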
Gersonides believed thatastrologywas real, and developed a naturalistic, non-supernatural explanation of how it works.Julius Guttmanexplained that for Gersonides, astrology was: founded on the metaphysical doctrine of the dependence of all earthly occurrences upon the heavenly world. The general connection imparted to the prophet by the active intellect is the general order of the astrological constellation. The constellation under which a man is born determines his nature and fate, and constellations as well determine the life span of nations. …The active intellect knows the astrological order, from the most general form of the constellations to their last specification, which in turn contains all of the conditions of occurrence of a particular event. Thus, when a prophet deals with the destiny of a particular person or human group, he receives from the active intellect a knowledge of the order of the constellations, and with sufficient precision to enable him to predict its fate in full detail. …This astrological determinism has only one limitation. The free will of man could shatter the course of action ordained for him by the stars; prophecy could therefore predict the future on the basis of astrological determination only insofar as the free will of man does not break through the determined course of things.[15] Gersonides appears to be the only astronomer before modern times to have surmised that the fixed stars are much further away than the planets. Whereas all other astronomers put the fixed stars on a rotating sphere just beyond the outer planets, Gersonides estimated the distance to the fixed stars to be no less than 159,651,513,380,944 earth radii,[16]or about 100,000 lightyears in modern units. 
Using data he collected from his own observations, Gersonides refuted Ptolemy's model in what the notable physicistYuval Ne'emanhas considered as "one of the most important insights in the history of science, generally missed in telling the story of the transition from epicyclic corrections to the geocentric model toCopernicus' heliocentric model". Ne'eman argued that after Gersonides reviewed Ptolemy's model with its epicycles he realized that it could be checked, by measuring the changes in the apparent brightnesses of Mars and looking for cyclical changes along the conjectured epicycles. These thus ceased being dogma, they were a theory that had to be experimentally verified, "à la Popper". Gersonides developed tools for these measurements, essentially pinholes and thecamera obscura. The results of his observations did not fit Ptolemy's model at all. Concluding that the model was inadequate, Gersonides tried (unsuccessfully) to improve on it. That challenge was finally answered, of course, byCopernicusandKeplerthree centuries later, but Gersonides was the first to falsify the Alexandrian dogma - the first known instance of modern falsification philosophy. Gersonides also showed that Ptolemy's model for the lunar orbit, though reproducing correctly the evolution of the Moon's position, fails completely in predicting the apparent sizes of the Moon in its motion. Unfortunately, there is no evidence that the findings influenced later generations of astronomers, even though Gersonides' writings were translated and available.[17] Gersonides is an important character in the novelThe Dream of ScipiobyIain Pears, where he is depicted as the mentor of the protagonist Olivier de Noyen, a non-Jewish poet and intellectual. A (fictional) encounter between Gersonides andPope Clement VIatAvignonduring theBlack Deathis a major element in the book's plot.
https://en.wikipedia.org/wiki/Gersonides
Inspherical trigonometry, thehalf side formularelates the angles and lengths of the sides ofspherical triangles, which are triangles drawn on thesurface of a sphereand so have curved sides and do not obey the formulas forplane triangles.[1] For a triangle△ABC{\displaystyle \triangle ABC}on a sphere, the half-side formula is[2]tan⁡12a=−cos⁡(S)cos⁡(S−A)cos⁡(S−B)cos⁡(S−C){\displaystyle {\begin{aligned}\tan {\tfrac {1}{2}}a&={\sqrt {\frac {-\cos(S)\,\cos(S-A)}{\cos(S-B)\,\cos(S-C)}}}\end{aligned}}} wherea, b, care the angular lengths (measure ofcentral angle, arc lengths normalized to asphere of unit radius) of the sides opposite anglesA, B, Crespectively, andS=12(A+B+C){\displaystyle S={\tfrac {1}{2}}(A+B+C)}is half the sum of the angles. Two more formulas can be obtained forb{\displaystyle b}andc{\displaystyle c}by permuting the labelsA,B,C.{\displaystyle A,B,C.} The polar dual relationship for a spherical triangle is thehalf-angle formula, tan⁡12A=sin⁡(s−b)sin⁡(s−c)sin⁡(s)sin⁡(s−a){\displaystyle {\begin{aligned}\tan {\tfrac {1}{2}}A&={\sqrt {\frac {\sin(s-b)\,\sin(s-c)}{\sin(s)\,\sin(s-a)}}}\end{aligned}}} where semiperimeters=12(a+b+c){\displaystyle s={\tfrac {1}{2}}(a+b+c)}is half the sum of the sides. Again, two more formulas can be obtained by permuting the labelsA,B,C.{\displaystyle A,B,C.} The same relationships can be written as rational equations of half-tangents (tangents of half-angles). 
Ifta=tan⁡12a,{\displaystyle t_{a}=\tan {\tfrac {1}{2}}a,}tb=tan⁡12b,{\displaystyle t_{b}=\tan {\tfrac {1}{2}}b,}tc=tan⁡12c,{\displaystyle t_{c}=\tan {\tfrac {1}{2}}c,}tA=tan⁡12A,{\displaystyle t_{A}=\tan {\tfrac {1}{2}}A,}tB=tan⁡12B,{\displaystyle t_{B}=\tan {\tfrac {1}{2}}B,}andtC=tan⁡12C,{\displaystyle t_{C}=\tan {\tfrac {1}{2}}C,}then the half-side formula is equivalent to: ta2=(tBtC+tCtA+tAtB−1)(−tBtC+tCtA+tAtB+1)(tBtC−tCtA+tAtB+1)(tBtC+tCtA−tAtB+1).{\displaystyle {\begin{aligned}t_{a}^{2}&={\frac {{\bigl (}t_{B}t_{C}+t_{C}t_{A}+t_{A}t_{B}-1{\bigr )}{\bigl (}{-t_{B}t_{C}+t_{C}t_{A}+t_{A}t_{B}+1}{\bigr )}}{{\bigl (}t_{B}t_{C}-t_{C}t_{A}+t_{A}t_{B}+1{\bigr )}{\bigl (}t_{B}t_{C}+t_{C}t_{A}-t_{A}t_{B}+1{\bigr )}}}.\end{aligned}}} and the half-angle formula is equivalent to: tA2=(ta−tb+tc+tatbtc)(ta+tb−tc+tatbtc)(ta+tb+tc−tatbtc)(−ta+tb+tc+tatbtc).{\displaystyle {\begin{aligned}t_{A}^{2}&={\frac {{\bigl (}t_{a}-t_{b}+t_{c}+t_{a}t_{b}t_{c}{\bigr )}{\bigl (}t_{a}+t_{b}-t_{c}+t_{a}t_{b}t_{c}{\bigr )}}{{\bigl (}t_{a}+t_{b}+t_{c}-t_{a}t_{b}t_{c}{\bigr )}{\bigl (}{-t_{a}+t_{b}+t_{c}+t_{a}t_{b}t_{c}}{\bigr )}}}.\end{aligned}}}
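Both formulas can be sanity-checked numerically: pick the angles of a valid spherical triangle, recover a side from the half-side formula, and confirm the standard spherical law of cosines for angles, cos A = −cos B cos C + sin B sin C cos a, as an independent check. A sketch with illustrative angles A = 80°, B = 70°, C = 60° (angle sum 210° > 180°, so the triangle exists):

```python
import math

A, B, C = (math.radians(d) for d in (80, 70, 60))
S = (A + B + C) / 2  # half the sum of the angles

def half_side(X, Y, Z):
    """Angular length of the side opposite angle X via the half-side formula."""
    t = math.sqrt(-math.cos(S) * math.cos(S - X)
                  / (math.cos(S - Y) * math.cos(S - Z)))
    return 2 * math.atan(t)

a, b, c = half_side(A, B, C), half_side(B, C, A), half_side(C, A, B)

# Spherical law of cosines for angles: cos A = -cos B cos C + sin B sin C cos a
check = -math.cos(B) * math.cos(C) + math.sin(B) * math.sin(C) * math.cos(a)
# check agrees with cos(A) to machine precision
```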
https://en.wikipedia.org/wiki/Half-side_formula
Spherical trigonometryis the branch ofspherical geometrythat deals with the metrical relationships between thesidesandanglesofspherical triangles, traditionally expressed usingtrigonometric functions. On thesphere,geodesicsaregreat circles. Spherical trigonometry is of great importance for calculations inastronomy,geodesy, andnavigation. The origins of spherical trigonometry inGreek mathematicsand the major developments in Islamic mathematics are discussed fully inHistory of trigonometryandMathematics in medieval Islam. The subject came to fruition in Early Modern times with important developments byJohn Napier,Delambreand others, and attained an essentially complete form by the end of the nineteenth century with the publication of Todhunter's textbookSpherical trigonometry for the use of colleges and Schools.[1]Since then, significant developments have been the application of vector methods, quaternion methods, and the use of numerical methods. Aspherical polygonis apolygonon the surface of the sphere. Its sides arearcsofgreat circles—the spherical geometry equivalent ofline segmentsinplane geometry. Such polygons may have any number of sides greater than 1. Two-sided spherical polygons—lunes, also calleddigonsorbi-angles—are bounded by two great-circle arcs: a familiar example is the curved outward-facing surface of a segment of an orange. Three arcs serve to define a spherical triangle, the principal subject of this article. Polygons with higher numbers of sides (4-sided spherical quadrilaterals, 5-sided spherical pentagons, etc.) are defined in similar manner. Analogously to their plane counterparts, spherical polygons with more than 3 sides can always be treated as the composition of spherical triangles. One spherical polygon with interesting properties is thepentagramma mirificum, a 5-sided sphericalstar polygonwith a right angle at every vertex. 
From this point in the article, discussion will be restricted to spherical triangles, referred to simply astriangles. In particular, the sum of the angles of a spherical triangle is strictly greater than the sum of the angles of a triangle defined on the Euclidean plane, which is always exactlyπradians. Thepolar triangleassociated with a triangle△ABCis defined as follows. Consider the great circle that contains the sideBC. This great circle is defined by the intersection of a diametral plane with the surface. Draw the normal to that plane at the centre: it intersects the surface at two points and the point that is on the same side of the plane asAis (conventionally) termed the pole ofAand it is denoted byA'. The pointsB'andC'are defined similarly. The triangle△A'B'C'is the polar triangle corresponding to triangle△ABC. The angles and sides of the polar triangle are given by (Todhunter,[1]Art.27)A′=π−a,B′=π−b,C′=π−c,a′=π−A,b′=π−B,c′=π−C.{\displaystyle {\begin{alignedat}{3}A'&=\pi -a,&\qquad B'&=\pi -b,&\qquad C'&=\pi -c,\\a'&=\pi -A,&b'&=\pi -B,&c'&=\pi -C.\end{alignedat}}}Therefore, if any identity is proved for△ABCthen we can immediately derive a second identity by applying the first identity to the polar triangle by making the above substitutions. This is how the supplemental cosine equations are derived from the cosine equations. Similarly, the identities for a quadrantal triangle can be derived from those for a right-angled triangle. The polar triangle of a polar triangle is the original triangle. If the3 × 3matrixMhas the positionsA,B, andCas its columns then the rows of the matrix inverseM−1, if normalized to unit length, are the positionsA′,B′, andC′. In particular, when△A′B′C′is the polar triangle of△ABCthen△ABCis the polar triangle of△A′B′C′. 
The cosine rule is the fundamental identity of spherical trigonometry: all other identities, including the sine rule, may be derived from the cosine rule:cos⁡a=cos⁡bcos⁡c+sin⁡bsin⁡ccos⁡A,cos⁡b=cos⁡ccos⁡a+sin⁡csin⁡acos⁡B,cos⁡c=cos⁡acos⁡b+sin⁡asin⁡bcos⁡C.{\displaystyle {\begin{aligned}\cos a&=\cos b\cos c+\sin b\sin c\cos A,\\[2pt]\cos b&=\cos c\cos a+\sin c\sin a\cos B,\\[2pt]\cos c&=\cos a\cos b+\sin a\sin b\cos C.\end{aligned}}} These identities generalize the cosine rule of planetrigonometry, to which they are asymptotically equivalent in the limit of small interior angles. (On the unit sphere, ifa,b,c→0{\displaystyle a,b,c\rightarrow 0}setsin⁡a≈a{\displaystyle \sin a\approx a}andcos⁡a≈1−a22{\displaystyle \cos a\approx 1-{\frac {a^{2}}{2}}}etc.; seeSpherical law of cosines.) The sphericallaw of sinesis given by the formulasin⁡Asin⁡a=sin⁡Bsin⁡b=sin⁡Csin⁡c.{\displaystyle {\frac {\sin A}{\sin a}}={\frac {\sin B}{\sin b}}={\frac {\sin C}{\sin c}}.}These identities approximate the sine rule of planetrigonometrywhen the sides are much smaller than the radius of the sphere. The spherical cosine formulae were originally proved by elementary geometry and the planar cosine rule (Todhunter,[1]Art.37). He also gives a derivation using simple coordinate geometry and the planar cosine rule (Art.60). The approach outlined here uses simplervectormethods. (These methods are also discussed atSpherical law of cosines.) Consider three unit vectorsOA→,OB→,OC→drawn from the origin to the vertices of the triangle (on the unit sphere). The arcBCsubtends an angle of magnitudeaat the centre and thereforeOB→·OC→= cosa. Introduce a Cartesian basis withOA→along thez-axis andOB→in thexz-plane making an anglecwith thez-axis. The vectorOC→projects toONin thexy-plane and the angle betweenONand thex-axis isA. 
Therefore, the three vectors have components: OA→:(0,0,1)OB→:(sin⁡c,0,cos⁡c)OC→:(sin⁡bcos⁡A,sin⁡bsin⁡A,cos⁡b).{\displaystyle {\begin{aligned}{\vec {OA}}:&\quad (0,\,0,\,1)\\{\vec {OB}}:&\quad (\sin c,\,0,\,\cos c)\\{\vec {OC}}:&\quad (\sin b\cos A,\,\sin b\sin A,\,\cos b).\end{aligned}}} The scalar productOB→· OC→in terms of the components isOB→⋅OC→=sin⁡csin⁡bcos⁡A+cos⁡ccos⁡b.{\displaystyle {\vec {OB}}\cdot {\vec {OC}}=\sin c\sin b\cos A+\cos c\cos b.}Equating the two expressions for the scalar product givescos⁡a=cos⁡bcos⁡c+sin⁡bsin⁡ccos⁡A.{\displaystyle \cos a=\cos b\cos c+\sin b\sin c\cos A.}This equation can be re-arranged to give explicit expressions for the angle in terms of the sides:cos⁡A=cos⁡a−cos⁡bcos⁡csin⁡bsin⁡c.{\displaystyle \cos A={\frac {\cos a-\cos b\cos c}{\sin b\sin c}}.} The other cosine rules are obtained by cyclic permutations. This derivation is given in Todhunter,[1](Art.40). From the identitysin2⁡A=1−cos2⁡A{\displaystyle \sin ^{2}A=1-\cos ^{2}A}and the explicit expression forcosAgiven immediately abovesin2⁡A=1−(cos⁡a−cos⁡bcos⁡csin⁡bsin⁡c)2=(1−cos2⁡b)(1−cos2⁡c)−(cos⁡a−cos⁡bcos⁡c)2sin2bsin2csin⁡Asin⁡a=1−cos2a−cos2b−cos2c+2cos⁡acos⁡bcos⁡csin⁡asin⁡bsin⁡c.{\displaystyle {\begin{aligned}\sin ^{2}A&=1-\left({\frac {\cos a-\cos b\cos c}{\sin b\sin c}}\right)^{2}\\[5pt]&={\frac {(1-\cos ^{2}b)(1-\cos ^{2}c)-(\cos a-\cos b\cos c)^{2}}{\sin ^{2}\!b\,\sin ^{2}\!c}}\\[5pt]{\frac {\sin A}{\sin a}}&={\frac {\sqrt {1-\cos ^{2}\!a-\cos ^{2}\!b-\cos ^{2}\!c+2\cos a\cos b\cos c}}{\sin a\sin b\sin c}}.\end{aligned}}}Since the right hand side is invariant under a cyclic permutation ofa,b, andcthe spherical sine rule follows immediately. There are many ways of deriving the fundamental cosine and sine rules and the other rules developed in the following sections. For example, Todhunter[1]gives two proofs of the cosine rule (Articles 37 and 60) and two proofs of the sine rule (Articles 40 and 42). 
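The coordinate construction above translates directly into a short numerical check. The values of $b$, $c$, and $A$ below are arbitrary assumptions; the point is that the dot product of the component vectors reproduces the cosine rule:

```python
import math

b, c, A = 0.7, 1.1, 0.9   # two sides and the included angle (radians), arbitrary values
# components in the basis described above: OA along z, OB in the xz-plane
OB = (math.sin(c), 0.0, math.cos(c))
OC = (math.sin(b)*math.cos(A), math.sin(b)*math.sin(A), math.cos(b))
dot = sum(x*y for x, y in zip(OB, OC))
a = math.acos(dot)        # the arc BC subtends angle a at the centre
# equating the two expressions for the scalar product gives the cosine rule
assert math.isclose(math.cos(a), math.cos(b)*math.cos(c) + math.sin(b)*math.sin(c)*math.cos(A))
```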
The page on Spherical law of cosines gives four different proofs of the cosine rule. Text books on geodesy[2] and spherical astronomy[3] give different proofs, and the online resources of MathWorld provide yet more.[4] There are even more exotic derivations, such as that of Banerjee,[5] who derives the formulae using the linear algebra of projection matrices and also quotes methods in differential geometry and the group theory of rotations. The derivation of the cosine rule presented above has the merits of simplicity and directness, and the derivation of the sine rule emphasises the fact that no separate proof is required other than the cosine rule. However, the above geometry may be used to give an independent proof of the sine rule. The scalar triple product $\vec{OA}\cdot(\vec{OB}\times\vec{OC})$ evaluates to $\sin b\sin c\sin A$ in the basis shown. Similarly, in a basis oriented with the z-axis along $\vec{OB}$, the triple product $\vec{OB}\cdot(\vec{OC}\times\vec{OA})$ evaluates to $\sin c\sin a\sin B$. Therefore, the invariance of the triple product under cyclic permutations gives $\sin b\sin A = \sin a\sin B$, which is the first of the sine rules. See the curved variations of the law of sines for details of this derivation.
When any three of the differentialsda,db,dc,dA,dB,dCare known, the following equations, which are found by differentiating the cosine rule and using the sine rule, can be used to calculate the other three by elimination:[6] da=cos⁡Cdb+cos⁡Bdc+sin⁡bsin⁡CdA,db=cos⁡Adc+cos⁡Cda+sin⁡csin⁡AdB,dc=cos⁡Bda+cos⁡Adb+sin⁡asin⁡BdC.{\displaystyle {\begin{aligned}da=\cos C\ db+\cos B\ dc+\sin b\ \sin C\ dA,\\db=\cos A\ dc+\cos C\ da+\sin c\ \sin A\ dB,\\dc=\cos B\ da+\cos A\ db+\sin a\ \sin B\ dC.\\\end{aligned}}} Applying the cosine rules to the polar triangle gives (Todhunter,[1]Art.47),i.e.replacingAbyπ–a,abyπ–Aetc.,cos⁡A=−cos⁡Bcos⁡C+sin⁡Bsin⁡Ccos⁡a,cos⁡B=−cos⁡Ccos⁡A+sin⁡Csin⁡Acos⁡b,cos⁡C=−cos⁡Acos⁡B+sin⁡Asin⁡Bcos⁡c.{\displaystyle {\begin{aligned}\cos A&=-\cos B\,\cos C+\sin B\,\sin C\,\cos a,\\\cos B&=-\cos C\,\cos A+\sin C\,\sin A\,\cos b,\\\cos C&=-\cos A\,\cos B+\sin A\,\sin B\,\cos c.\end{aligned}}} The six parts of a triangle may be written in cyclic order as (aCbAcB). The cotangent, or four-part, formulae relate two sides and two angles forming fourconsecutiveparts around the triangle, for example (aCbA) orBaCb). In such a set there are inner and outer parts: for example in the set (BaCb) the inner angle isC, the inner side isa, the outer angle isB, the outer side isb. 
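Both the supplementary cosine rules and the differential relations above lend themselves to a quick numerical test. The sketch below uses arbitrary example values: it solves a triangle from two sides and the included angle, checks the first supplementary cosine rule exactly, and checks the first differential relation against a small finite perturbation.

```python
import math

def solve_sas(b, c, A):
    """Solve a spherical triangle from two sides and the included angle."""
    a = math.acos(math.cos(b)*math.cos(c) + math.sin(b)*math.sin(c)*math.cos(A))
    B = math.acos((math.cos(b) - math.cos(c)*math.cos(a)) / (math.sin(c)*math.sin(a)))
    C = math.acos((math.cos(c) - math.cos(a)*math.cos(b)) / (math.sin(a)*math.sin(b)))
    return a, B, C

b, c, A = 0.7, 1.1, 0.9            # arbitrary example triangle (radians)
a, B, C = solve_sas(b, c, A)

# supplementary cosine rule (cosine rule applied to the polar triangle)
assert math.isclose(math.cos(A), -math.cos(B)*math.cos(C) + math.sin(B)*math.sin(C)*math.cos(a))

# differential relation: da = cos C db + cos B dc + sin b sin C dA
h = 1e-6
db, dc, dA = 2*h, -h, 3*h
a2, _, _ = solve_sas(b + db, c + dc, A + dA)
da = math.cos(C)*db + math.cos(B)*dc + math.sin(b)*math.sin(C)*dA
assert abs((a2 - a) - da) < 1e-9   # agreement to second order in the perturbation
```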
The cotangent rule may be written as (Todhunter,[1] Art.44)

$$\cos(\text{inner side})\,\cos(\text{inner angle}) = \cot(\text{outer side})\,\sin(\text{inner side}) - \cot(\text{outer angle})\,\sin(\text{inner angle}),$$

and the six possible equations are (with the relevant set shown at right):

$$\begin{aligned}
&\text{(CT1)}\quad \cos b\,\cos C = \cot a\,\sin b - \cot A\,\sin C &&(aCbA)\\
&\text{(CT2)}\quad \cos b\,\cos A = \cot c\,\sin b - \cot C\,\sin A &&(CbAc)\\
&\text{(CT3)}\quad \cos c\,\cos A = \cot b\,\sin c - \cot B\,\sin A &&(bAcB)\\
&\text{(CT4)}\quad \cos c\,\cos B = \cot a\,\sin c - \cot A\,\sin B &&(AcBa)\\
&\text{(CT5)}\quad \cos a\,\cos B = \cot c\,\sin a - \cot C\,\sin B &&(cBaC)\\
&\text{(CT6)}\quad \cos a\,\cos C = \cot b\,\sin a - \cot B\,\sin C &&(BaCb)
\end{aligned}$$

To prove the first formula start from the first cosine rule and on the right-hand side substitute for $\cos c$ from the third cosine rule:

$$\begin{aligned}
\cos a &= \cos b\cos c + \sin b\sin c\cos A\\
&= \cos b\,(\cos a\cos b + \sin a\sin b\cos C) + \sin b\sin C\sin a\cot A\\
\cos a\sin^2 b &= \cos b\sin a\sin b\cos C + \sin b\sin C\sin a\cot A.
\end{aligned}$$

The result follows on dividing by $\sin a\sin b$. Similar techniques with the other two cosine rules give CT3 and CT5. The other three equations follow by applying rules 1, 3 and 5 to the polar triangle.

With $2s = a+b+c$ and $2S = A+B+C$:

$$\begin{aligned}
\sin\tfrac{1}{2}A &= \sqrt{\frac{\sin(s-b)\sin(s-c)}{\sin b\sin c}} &\qquad \sin\tfrac{1}{2}a &= \sqrt{\frac{-\cos S\cos(S-A)}{\sin B\sin C}}\\[1ex]
\cos\tfrac{1}{2}A &= \sqrt{\frac{\sin s\sin(s-a)}{\sin b\sin c}} &\qquad \cos\tfrac{1}{2}a &= \sqrt{\frac{\cos(S-B)\cos(S-C)}{\sin B\sin C}}\\[1ex]
\tan\tfrac{1}{2}A &= \sqrt{\frac{\sin(s-b)\sin(s-c)}{\sin s\sin(s-a)}} &\qquad \tan\tfrac{1}{2}a &= \sqrt{\frac{-\cos S\cos(S-A)}{\cos(S-B)\cos(S-C)}}
\end{aligned}$$

Another twelve identities follow by cyclic permutation.

The proof (Todhunter,[1] Art.49) of the first formula starts from the identity $2\sin^2\tfrac{A}{2} = 1-\cos A$, using the cosine rule to express $A$ in terms of the sides and replacing the sum of two cosines by a product. (See sum-to-product identities.) The second formula starts from the identity $2\cos^2\tfrac{A}{2} = 1+\cos A$, the third is a quotient, and the remainder follow by applying the results to the polar triangle.
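The half-angle formulas can be tested directly from three sides. The side lengths below are arbitrary assumptions forming a valid spherical triangle; the angle $A$ is recovered from the cosine rule and then all three half-angle formulas are checked against it:

```python
import math

a, b, c = 1.0, 0.8, 0.9            # arbitrary valid spherical triangle (radians)
s = (a + b + c) / 2
A = math.acos((math.cos(a) - math.cos(b)*math.cos(c)) / (math.sin(b)*math.sin(c)))

assert math.isclose(math.sin(A/2),
                    math.sqrt(math.sin(s-b)*math.sin(s-c) / (math.sin(b)*math.sin(c))))
assert math.isclose(math.cos(A/2),
                    math.sqrt(math.sin(s)*math.sin(s-a) / (math.sin(b)*math.sin(c))))
assert math.isclose(math.tan(A/2),
                    math.sqrt(math.sin(s-b)*math.sin(s-c) / (math.sin(s)*math.sin(s-a))))
```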
The Delambre analogies (also called Gauss analogies) were published independently by Delambre, Gauss, and Mollweide in 1807–1809.[7] sin⁡12(A+B)cos⁡12C=cos⁡12(a−b)cos⁡12csin⁡12(A−B)cos⁡12C=sin⁡12(a−b)sin⁡12ccos⁡12(A+B)sin⁡12C=cos⁡12(a+b)cos⁡12ccos⁡12(A−B)sin⁡12C=sin⁡12(a+b)sin⁡12c{\displaystyle {\begin{aligned}{\frac {\sin {\tfrac {1}{2}}(A+B)}{\cos {\tfrac {1}{2}}C}}={\frac {\cos {\tfrac {1}{2}}(a-b)}{\cos {\tfrac {1}{2}}c}}&\qquad \qquad &{\frac {\sin {\tfrac {1}{2}}(A-B)}{\cos {\tfrac {1}{2}}C}}={\frac {\sin {\tfrac {1}{2}}(a-b)}{\sin {\tfrac {1}{2}}c}}\\[2ex]{\frac {\cos {\tfrac {1}{2}}(A+B)}{\sin {\tfrac {1}{2}}C}}={\frac {\cos {\tfrac {1}{2}}(a+b)}{\cos {\tfrac {1}{2}}c}}&\qquad &{\frac {\cos {\tfrac {1}{2}}(A-B)}{\sin {\tfrac {1}{2}}C}}={\frac {\sin {\tfrac {1}{2}}(a+b)}{\sin {\tfrac {1}{2}}c}}\end{aligned}}}Another eight identities follow by cyclic permutation. Proved by expanding the numerators and using the half angle formulae. (Todhunter,[1]Art.54 and Delambre[8]) tan⁡12(A+B)=cos⁡12(a−b)cos⁡12(a+b)cot⁡12Ctan⁡12(a+b)=cos⁡12(A−B)cos⁡12(A+B)tan⁡12ctan⁡12(A−B)=sin⁡12(a−b)sin⁡12(a+b)cot⁡12Ctan⁡12(a−b)=sin⁡12(A−B)sin⁡12(A+B)tan⁡12c{\displaystyle {\begin{aligned}\tan {\tfrac {1}{2}}(A+B)={\frac {\cos {\tfrac {1}{2}}(a-b)}{\cos {\tfrac {1}{2}}(a+b)}}\cot {\tfrac {1}{2}}C&\qquad &\tan {\tfrac {1}{2}}(a+b)={\frac {\cos {\tfrac {1}{2}}(A-B)}{\cos {\tfrac {1}{2}}(A+B)}}\tan {\tfrac {1}{2}}c\\[2ex]\tan {\tfrac {1}{2}}(A-B)={\frac {\sin {\tfrac {1}{2}}(a-b)}{\sin {\tfrac {1}{2}}(a+b)}}\cot {\tfrac {1}{2}}C&\qquad &\tan {\tfrac {1}{2}}(a-b)={\frac {\sin {\tfrac {1}{2}}(A-B)}{\sin {\tfrac {1}{2}}(A+B)}}\tan {\tfrac {1}{2}}c\end{aligned}}} Another eight identities follow by cyclic permutation. These identities follow by division of the Delambre formulae. 
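A numerical sketch of the first Delambre analogy and the first Napier analogy, using an arbitrary example triangle (the sides are assumptions; the angles are recovered with the cosine rule):

```python
import math

a, b, c = 1.0, 0.8, 0.9   # arbitrary spherical triangle (radians)

def angle(x, y, z):       # spherical cosine rule solved for the angle opposite side x
    return math.acos((math.cos(x) - math.cos(y)*math.cos(z)) / (math.sin(y)*math.sin(z)))

A, B, C = angle(a, b, c), angle(b, c, a), angle(c, a, b)

# first Delambre analogy: sin((A+B)/2)/cos(C/2) = cos((a-b)/2)/cos(c/2)
assert math.isclose(math.sin((A+B)/2) / math.cos(C/2),
                    math.cos((a-b)/2) / math.cos(c/2))
# first Napier analogy: tan((A+B)/2) = [cos((a-b)/2)/cos((a+b)/2)] * cot(C/2)
assert math.isclose(math.tan((A+B)/2),
                    math.cos((a-b)/2) / math.cos((a+b)/2) / math.tan(C/2))
```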
(Todhunter,[1] Art.52)

Taking quotients of these yields the law of tangents, first stated by the Persian mathematician Nasir al-Din al-Tusi (1201–1274),

$$\frac{\tan\tfrac{1}{2}(A-B)}{\tan\tfrac{1}{2}(A+B)} = \frac{\tan\tfrac{1}{2}(a-b)}{\tan\tfrac{1}{2}(a+b)}.$$

When one of the angles, say C, of a spherical triangle is equal to π/2 the various identities given above are considerably simplified. There are ten identities relating three elements chosen from the set a, b, c, A, and B. Napier[9] provided an elegant mnemonic aid for the ten independent equations: the mnemonic is called Napier's circle or Napier's pentagon (when the circle in the above figure, right, is replaced by a pentagon). First, write the six parts of the triangle (three vertex angles, three arc angles for the sides) in the order they occur around any circuit of the triangle: for the triangle shown above left, going clockwise starting with a gives aCbAcB. Next replace the parts that are not adjacent to C (that is A, c, and B) by their complements and then delete the angle C from the list. The remaining parts can then be drawn as five ordered, equal slices of a pentagram, or circle, as shown in the above figure (right). For any choice of three contiguous parts, one (the middle part) will be adjacent to two parts and opposite the other two parts. The ten Napier's Rules are given by

- sine of the middle part = the product of the tangents of the adjacent parts
- sine of the middle part = the product of the cosines of the opposite parts

The key for remembering which trigonometric function goes with which part is to look at the first vowel of the kind of part: middle parts take the sine, adjacent parts take the tangent, and opposite parts take the cosine.
For an example, starting with the sector containingawe have:sin⁡a=tan⁡(π2−B)tan⁡b=cos⁡(π2−c)cos⁡(π2−A)=cot⁡Btan⁡b=sin⁡csin⁡A.{\displaystyle {\begin{aligned}\sin a&=\tan({\tfrac {\pi }{2}}-B)\,\tan b\\[2pt]&=\cos({\tfrac {\pi }{2}}-c)\,\cos({\tfrac {\pi }{2}}-A)\\[2pt]&=\cot B\,\tan b\\[4pt]&=\sin c\,\sin A.\end{aligned}}}The full set of rules for the right spherical triangle is (Todhunter,[1]Art.62)(R1)cos⁡c=cos⁡acos⁡b,(R6)tan⁡b=cos⁡Atan⁡c,(R2)sin⁡a=sin⁡Asin⁡c,(R7)tan⁡a=cos⁡Btan⁡c,(R3)sin⁡b=sin⁡Bsin⁡c,(R8)cos⁡A=sin⁡Bcos⁡a,(R4)tan⁡a=tan⁡Asin⁡b,(R9)cos⁡B=sin⁡Acos⁡b,(R5)tan⁡b=tan⁡Bsin⁡a,(R10)cos⁡c=cot⁡Acot⁡B.{\displaystyle {\begin{alignedat}{4}&{\text{(R1)}}&\qquad \cos c&=\cos a\,\cos b,&\qquad \qquad &{\text{(R6)}}&\qquad \tan b&=\cos A\,\tan c,\\&{\text{(R2)}}&\sin a&=\sin A\,\sin c,&&{\text{(R7)}}&\tan a&=\cos B\,\tan c,\\&{\text{(R3)}}&\sin b&=\sin B\,\sin c,&&{\text{(R8)}}&\cos A&=\sin B\,\cos a,\\&{\text{(R4)}}&\tan a&=\tan A\,\sin b,&&{\text{(R9)}}&\cos B&=\sin A\,\cos b,\\&{\text{(R5)}}&\tan b&=\tan B\,\sin a,&&{\text{(R10)}}&\cos c&=\cot A\,\cot B.\end{alignedat}}} A quadrantal spherical triangle is defined to be a spherical triangle in which one of the sides subtends an angle ofπ/2 radians at the centre of the sphere: on the unit sphere the side has lengthπ/2. In the case that the sidechas lengthπ/2 on the unit sphere the equations governing the remaining sides and angles may be obtained by applying the rules for the right spherical triangle of the previous section to the polar triangle△A'B'C'with sidesa', b', c'such thatA'=π−a,a'=π−Aetc. 
The results are:(Q1)cos⁡C=−cos⁡Acos⁡B,(Q6)tan⁡B=−cos⁡atan⁡C,(Q2)sin⁡A=sin⁡asin⁡C,(Q7)tan⁡A=−cos⁡btan⁡C,(Q3)sin⁡B=sin⁡bsin⁡C,(Q8)cos⁡a=sin⁡bcos⁡A,(Q4)tan⁡A=tan⁡asin⁡B,(Q9)cos⁡b=sin⁡acos⁡B,(Q5)tan⁡B=tan⁡bsin⁡A,(Q10)cos⁡C=−cot⁡acot⁡b.{\displaystyle {\begin{alignedat}{4}&{\text{(Q1)}}&\qquad \cos C&=-\cos A\,\cos B,&\qquad \qquad &{\text{(Q6)}}&\qquad \tan B&=-\cos a\,\tan C,\\&{\text{(Q2)}}&\sin A&=\sin a\,\sin C,&&{\text{(Q7)}}&\tan A&=-\cos b\,\tan C,\\&{\text{(Q3)}}&\sin B&=\sin b\,\sin C,&&{\text{(Q8)}}&\cos a&=\sin b\,\cos A,\\&{\text{(Q4)}}&\tan A&=\tan a\,\sin B,&&{\text{(Q9)}}&\cos b&=\sin a\,\cos B,\\&{\text{(Q5)}}&\tan B&=\tan b\,\sin A,&&{\text{(Q10)}}&\cos C&=-\cot a\,\cot b.\end{alignedat}}} Substituting the second cosine rule into the first and simplifying gives:cos⁡a=(cos⁡acos⁡c+sin⁡asin⁡ccos⁡B)cos⁡c+sin⁡bsin⁡ccos⁡Acos⁡asin2⁡c=sin⁡acos⁡csin⁡ccos⁡B+sin⁡bsin⁡ccos⁡A{\displaystyle {\begin{aligned}\cos a&=(\cos a\,\cos c+\sin a\,\sin c\,\cos B)\cos c+\sin b\,\sin c\,\cos A\\[4pt]\cos a\,\sin ^{2}c&=\sin a\,\cos c\,\sin c\,\cos B+\sin b\,\sin c\,\cos A\end{aligned}}}Cancelling the factor ofsincgivescos⁡asin⁡c=sin⁡acos⁡ccos⁡B+sin⁡bcos⁡A{\displaystyle \cos a\sin c=\sin a\,\cos c\,\cos B+\sin b\,\cos A} Similar substitutions in the other cosine and supplementary cosine formulae give a large variety of 5-part rules. They are rarely used. 
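Several of the right-triangle rules, and the five-part rule just derived, can be verified numerically. The leg lengths and triangle sides below are arbitrary assumptions:

```python
import math

# right spherical triangle: C = pi/2, legs a and b chosen arbitrarily
a, b = 0.6, 0.8
c = math.acos(math.cos(a) * math.cos(b))        # R1
A = math.atan(math.tan(a) / math.sin(b))        # from R4: tan a = tan A sin b
B = math.atan(math.tan(b) / math.sin(a))        # from R5: tan b = tan B sin a
assert math.isclose(math.sin(a), math.sin(A)*math.sin(c))          # R2
assert math.isclose(math.tan(b), math.cos(A)*math.tan(c))          # R6
assert math.isclose(math.cos(c), 1/(math.tan(A)*math.tan(B)))      # R10

# five-part rule on a general (non-right) triangle with sides x, y, z
x, y, z = 1.0, 0.8, 0.9
X = math.acos((math.cos(x) - math.cos(y)*math.cos(z)) / (math.sin(y)*math.sin(z)))  # angle A
Y = math.acos((math.cos(y) - math.cos(z)*math.cos(x)) / (math.sin(z)*math.sin(x)))  # angle B
assert math.isclose(math.cos(x)*math.sin(z),
                    math.sin(x)*math.cos(z)*math.cos(Y) + math.sin(y)*math.cos(X))
```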
Multiplying the first cosine rule bycosAgivescos⁡acos⁡A=cos⁡bcos⁡ccos⁡A+sin⁡bsin⁡c−sin⁡bsin⁡csin2⁡A.{\displaystyle \cos a\cos A=\cos b\,\cos c\,\cos A+\sin b\,\sin c-\sin b\,\sin c\,\sin ^{2}A.}Similarly multiplying the first supplementary cosine rule bycosayieldscos⁡acos⁡A=−cos⁡Bcos⁡Ccos⁡a+sin⁡Bsin⁡C−sin⁡Bsin⁡Csin2⁡a.{\displaystyle \cos a\cos A=-\cos B\,\cos C\,\cos a+\sin B\,\sin C-\sin B\,\sin C\,\sin ^{2}a.}Subtracting the two and noting that it follows from the sine rules thatsin⁡bsin⁡csin2⁡A=sin⁡Bsin⁡Csin2⁡a{\displaystyle \sin b\,\sin c\,\sin ^{2}A=\sin B\,\sin C\,\sin ^{2}a}produces Cagnoli's equationsin⁡bsin⁡c+cos⁡bcos⁡ccos⁡A=sin⁡Bsin⁡C−cos⁡Bcos⁡Ccos⁡a{\displaystyle \sin b\,\sin c+\cos b\,\cos c\,\cos A=\sin B\,\sin C-\cos B\,\cos C\,\cos a}which is a relation between the six parts of the spherical triangle.[10] The solution of triangles is the principal purpose of spherical trigonometry: given three, four or five elements of the triangle, determine the others. The case of five given elements is trivial, requiring only a single application of the sine rule. For four given elements there is one non-trivial case, which is discussed below. For three given elements there are six cases: three sides, two sides and an included or opposite angle, two angles and an included or opposite side, or three angles. (The last case has no analogue in planar trigonometry.) No single method solves all cases. The figure below shows the seven non-trivial cases: in each case the given sides are marked with a cross-bar and the given angles with an arc. (The given elements are also listed below the triangle). In the summary notation here such as ASA, A refers to a given angle and S refers to a given side, and the sequence of A's and S's in the notation refers to the corresponding sequence in the triangle. The solution methods listed here are not the only possible choices: many others are possible. 
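Cagnoli's equation is easy to confirm numerically; the sides below are arbitrary assumptions and the angles are recovered with the cosine rule:

```python
import math

a, b, c = 1.0, 0.8, 0.9   # arbitrary spherical triangle (radians)

def angle(x, y, z):       # cosine rule solved for the angle opposite side x
    return math.acos((math.cos(x) - math.cos(y)*math.cos(z)) / (math.sin(y)*math.sin(z)))

A, B, C = angle(a, b, c), angle(b, c, a), angle(c, a, b)
lhs = math.sin(b)*math.sin(c) + math.cos(b)*math.cos(c)*math.cos(A)
rhs = math.sin(B)*math.sin(C) - math.cos(B)*math.cos(C)*math.cos(a)
assert math.isclose(lhs, rhs)   # Cagnoli's equation
```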
In general it is better to choose methods that avoid taking an inverse sine because of the possible ambiguity between an angle and its supplement. The use of half-angle formulae is often advisable because half-angles will be less thanπ/2 and therefore free from ambiguity. There is a full discussion in Todhunter. The articleSolution of triangles#Solving spherical trianglespresents variants on these methods with a slightly different notation. There is a full discussion of the solution of oblique triangles in Todhunter.[1]: Chap. VISee also the discussion in Ross.[11]Nasir al-Din al-Tusiwas the first to list the six distinct cases (2–7 in the diagram) of a right triangle in spherical trigonometry.[12] Another approach is to split the triangle into two right-angled triangles. For example, take the Case 3 example whereb,c, andBare given. Construct the great circle fromAthat is normal to the sideBCat the pointD. Use Napier's rules to solve the triangle△ABD: usecandBto find the sidesADandBDand the angle∠BAD. Then use Napier's rules to solve the triangle△ACD: that is useADandbto find the sideDCand the anglesCand∠DAC. The angleAand sideafollow by addition. Not all of the rules obtained are numerically robust in extreme examples, for example when an angle approaches zero orπ. Problems and solutions may have to be examined carefully, particularly when writing code to solve an arbitrary triangle. Consider anN-sided spherical polygon and letAndenote then-th interior angle. 
The area of such a polygon on the unit sphere is given by (Todhunter,[1] Art.99)

$$E_N = \left(\sum_{n=1}^{N} A_n\right) - (N-2)\pi.$$

For the case of a spherical triangle with angles $A$, $B$, and $C$ this reduces to Girard's theorem,

$$E = E_3 = A + B + C - \pi,$$

where $E$ is the amount by which the sum of the angles exceeds $\pi$ radians, called the spherical excess of the triangle. This theorem is named after its author, Albert Girard.[13] An earlier proof was derived, but not published, by the English mathematician Thomas Harriot. On a sphere of radius $R$ both of the above area expressions are multiplied by $R^2$. The definition of the excess is independent of the radius of the sphere. The converse result may be written as

$$A + B + C = \pi + \frac{4\pi \times \text{Area of triangle}}{\text{Area of the sphere}}.$$

Since the area of a triangle cannot be negative, the spherical excess is always positive. It is not necessarily small, because the sum of the angles may attain $5\pi$ ($3\pi$ for proper angles). For example, an octant of a sphere is a spherical triangle with three right angles, so that the excess is $\pi/2$. In practical applications it is often small: for example, the triangles of geodetic survey typically have a spherical excess much less than 1' of arc.[14] On the Earth the excess of an equilateral triangle with sides 21.3 km (and area 393 km²) is approximately 1 arc second. There are many formulae for the excess.
For example, Todhunter,[1](Art.101—103) gives ten examples including that ofL'Huilier:tan⁡14E=tan⁡12stan⁡12(s−a)tan⁡12(s−b)tan⁡12(s−c){\displaystyle \tan {\tfrac {1}{4}}E={\sqrt {\tan {\tfrac {1}{2}}s\,\tan {\tfrac {1}{2}}(s-a)\,\tan {\tfrac {1}{2}}(s-b)\,\tan {\tfrac {1}{2}}(s-c)}}}wheres=12(a+b+c){\displaystyle s={\tfrac {1}{2}}(a+b+c)}. This formula is reminiscent ofHeron's formulafor planar triangles. Because some triangles are badly characterized by their edges (e.g., ifa=b≈12c{\textstyle a=b\approx {\frac {1}{2}}c}), it is often better to use the formula for the excess in terms of two edges and their included angletan⁡12E=tan⁡12atan⁡12bsin⁡C1+tan⁡12atan⁡12bcos⁡C.{\displaystyle \tan {\tfrac {1}{2}}E={\frac {\tan {\frac {1}{2}}a\tan {\frac {1}{2}}b\sin C}{1+\tan {\frac {1}{2}}a\tan {\frac {1}{2}}b\cos C}}.} When triangle△ABCis a right triangle with right angle atC, thencosC= 0andsinC= 1, so this reduces totan⁡12E=tan⁡12atan⁡12b.{\displaystyle \tan {\tfrac {1}{2}}E=\tan {\tfrac {1}{2}}a\tan {\tfrac {1}{2}}b.} Angle deficitis defined similarly forhyperbolic geometry. The spherical excess of a spherical quadrangle bounded by the equator, the two meridians of longitudesλ1{\displaystyle \lambda _{1}}andλ2,{\displaystyle \lambda _{2},}and the great-circle arc between two points with longitude and latitude(λ1,φ1){\displaystyle (\lambda _{1},\varphi _{1})}and(λ2,φ2){\displaystyle (\lambda _{2},\varphi _{2})}istan⁡12E4=sin⁡12(φ2+φ1)cos⁡12(φ2−φ1)tan⁡12(λ2−λ1).{\displaystyle \tan {\tfrac {1}{2}}E_{4}={\frac {\sin {\tfrac {1}{2}}(\varphi _{2}+\varphi _{1})}{\cos {\tfrac {1}{2}}(\varphi _{2}-\varphi _{1})}}\tan {\tfrac {1}{2}}(\lambda _{2}-\lambda _{1}).} This result is obtained from one of Napier's analogies. 
In the limit where $\varphi_1$, $\varphi_2$, and $\lambda_2 - \lambda_1$ are all small, this reduces to the familiar trapezoidal area, $E_4 \approx \tfrac{1}{2}(\varphi_2 + \varphi_1)(\lambda_2 - \lambda_1)$. The area of a polygon can be calculated from individual quadrangles of the above type, from (analogously) individual triangles bounded by a segment of the polygon and two meridians,[15] by a line integral with Green's theorem,[16] or via an equal-area projection as commonly done in GIS. The other algorithms can still be used with the side lengths calculated using a great-circle distance formula.
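As a closing numerical sketch (all input values are arbitrary assumptions), the excess formulas above agree with one another: Girard's theorem, L'Huilier's formula, the two-sides-and-included-angle form, and the quadrangle formula with its trapezoidal limit.

```python
import math

a, b, c = 1.0, 0.8, 0.9   # arbitrary spherical triangle (sides in radians)
s = (a + b + c) / 2

def angle(x, y, z):       # cosine rule solved for the angle opposite side x
    return math.acos((math.cos(x) - math.cos(y)*math.cos(z)) / (math.sin(y)*math.sin(z)))

A, B, C = angle(a, b, c), angle(b, c, a), angle(c, a, b)
E = A + B + C - math.pi   # Girard's spherical excess

# L'Huilier's formula reproduces the excess from the sides alone
lhuilier = 4*math.atan(math.sqrt(math.tan(s/2) * math.tan((s-a)/2)
                                 * math.tan((s-b)/2) * math.tan((s-c)/2)))
assert math.isclose(E, lhuilier)

# excess from two sides and their included angle
t = math.tan(a/2) * math.tan(b/2)
assert math.isclose(math.tan(E/2), t*math.sin(C) / (1 + t*math.cos(C)))

# quadrangle excess vs. its small-angle trapezoidal limit (arbitrary small values)
lam1, lam2 = 0.0, 0.01
phi1, phi2 = 0.02, 0.03
E4 = 2*math.atan(math.sin((phi2 + phi1)/2) / math.cos((phi2 - phi1)/2)
                 * math.tan((lam2 - lam1)/2))
approx = 0.5*(phi2 + phi1)*(lam2 - lam1)
assert math.isclose(E4, approx, rel_tol=1e-3)
```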
https://en.wikipedia.org/wiki/Spherical_triangles
Intrigonometry, thelaw of cosines(also known as thecosine formulaorcosine rule) relates the lengths of the sides of atriangleto thecosineof one of itsangles. For a triangle with sides⁠a{\displaystyle a}⁠,⁠b{\displaystyle b}⁠, and⁠c{\displaystyle c}⁠, opposite respective angles⁠α{\displaystyle \alpha }⁠,⁠β{\displaystyle \beta }⁠, and⁠γ{\displaystyle \gamma }⁠(see Fig. 1), the law of cosines states: c2=a2+b2−2abcos⁡γ,a2=b2+c2−2bccos⁡α,b2=a2+c2−2accos⁡β.{\displaystyle {\begin{aligned}c^{2}&=a^{2}+b^{2}-2ab\cos \gamma ,\\[3mu]a^{2}&=b^{2}+c^{2}-2bc\cos \alpha ,\\[3mu]b^{2}&=a^{2}+c^{2}-2ac\cos \beta .\end{aligned}}} The law of cosines generalizes thePythagorean theorem, which holds only forright triangles: if⁠γ{\displaystyle \gamma }⁠is aright anglethen⁠cos⁡γ=0{\displaystyle \cos \gamma =0}⁠, and the law of cosinesreduces to⁠c2=a2+b2{\displaystyle c^{2}=a^{2}+b^{2}}⁠. The law of cosines is useful forsolving a trianglewhen all three sides or two sides and their included angle are given. The theorem is used insolution of triangles, i.e., to find (see Figure 3): These formulas produce highround-off errorsinfloating pointcalculations if the triangle is very acute, i.e., ifcis small relative toaandborγis small compared to 1. It is even possible to obtain a result slightly greater than one for the cosine of an angle. The third formula shown is the result of solving forain thequadratic equationa2− 2abcosγ+b2−c2= 0. This equation can have 2, 1, or 0 positive solutions corresponding to the number of possible triangles given the data. It will have two positive solutions ifbsinγ<c<b, only one positive solution ifc=bsinγ, and no solution ifc<bsinγ. These different cases are also explained by theside-side-angle congruence ambiguity. Book II of Euclid'sElements, compiled c. 
300 BC from material up to a century or two older, contains a geometric theorem corresponding to the law of cosines but expressed in the contemporary language of rectangle areas; Hellenistic trigonometry developed later, and sine and cosine per se first appeared centuries afterward in India. The cases ofobtuse triangles and acute triangles(corresponding to the two cases of negative or positive cosine) are treated separately, in Propositions II.12 and II.13:[1] Proposition 12.In obtuse-angled triangles the square on the side subtending the obtuse angle is greater than the squares on the sides containing the obtuse angle by twice the rectangle contained by one of the sides about the obtuse angle, namely that on which the perpendicular falls, and the straight line cut off outside by the perpendicular towards the obtuse angle. Proposition 13 contains an analogous statement for acute triangles. In his (now-lost and only preserved through fragmentary quotations) commentary,Heron of Alexandriaprovided proofs of theconversesof both II.12 and II.13.[2] Using notation as in Fig. 2, Euclid's statement of proposition II.12 can be represented more concisely (though anachronistically) by the formula AB2=CA2+CB2+2(CA)(CH).{\displaystyle AB^{2}=CA^{2}+CB^{2}+2(CA)(CH).} To transform this into the familiar expression for the law of cosines, substitute⁠AB=c{\displaystyle AB=c}⁠,⁠CA=b{\displaystyle CA=b}⁠,⁠CB=a{\displaystyle CB=a}⁠, andCH=acos⁡(π−γ){\displaystyle CH=a\cos(\pi -\gamma )\ }=−acos⁡γ{\displaystyle \!{}=-a\cos \gamma }. 
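The correspondence between Euclid's Proposition II.12 and the modern law of cosines can be checked with numbers. The 3-4-5 triangle below and the obtuse angle are arbitrary test values:

```python
import math

# law of cosines on a 3-4-5 right triangle: gamma = pi/2 recovers Pythagoras
a, b, gamma = 3.0, 4.0, math.pi/2
c = math.sqrt(a*a + b*b - 2*a*b*math.cos(gamma))
assert math.isclose(c, 5.0)

# Euclid II.12 (obtuse case): AB^2 = CA^2 + CB^2 + 2*CA*CH, with CB = a, CA = b,
# and CH = -a*cos(gamma)
a, b, gamma = 2.0, 3.0, 2.0        # arbitrary obtuse angle (radians)
c2 = a*a + b*b - 2*a*b*math.cos(gamma)
CH = -a*math.cos(gamma)             # positive, since cos(gamma) < 0 for obtuse gamma
assert CH > 0
assert math.isclose(c2, b*b + a*a + 2*b*CH)
```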
Proposition II.13 was not used in Euclid's time for thesolution of triangles, but later it was used that way in the course of solving astronomical problems byal-Bīrūnī(11th century) andJohannes de Muris(14th century).[3]Something equivalent to thespherical law of cosineswas used (but not stated in general) byal-Khwārizmī(9th century),al-Battānī(9th century), andNīlakaṇṭha(15th century).[4] The 13th century Persian mathematicianNaṣīr al-Dīn al-Ṭūsī, in hisKitāb al-Shakl al-qattāʴ(Book on the Complete Quadrilateral, c. 1250), systematically described how to solve triangles from various combinations of given data. Given two sides and their included angle in a scalene triangle, he proposed finding the third side by dropping a perpendicular from the vertex of one of the unknown angles to the opposite base, reducing the problem to finding the legs of one right triangle from a known angle and hypotenuse using thelaw of sinesand then finding the hypotenuse of another right triangle from two known sides by thePythagorean theorem.[5] About two centuries later, another Persian mathematician,Jamshīd al-Kāshī, who computed the most accurate trigonometric tables of his era, also described the solution of triangles from various combinations of given data in hisMiftāḥ al-ḥisāb(Key of Arithmetic, 1427), and repeated essentially al-Ṭūsī's method, now consolidated into one formula and including more explicit details, as follows:[6] Another caseis when two sides and the angle between them are known and the rest are unknown. We multiply one of the sides by the sine of the [known] angle one time and by the sine of its complement the other time converted and we subtract the second result from the other side if the angle is acute and add it if the angle is obtuse. We then square the result and add to it the square of the first result. We take the square root of the sum to get the remaining side.... 
Using modern algebraic notation and conventions this might be written c=(b−acos⁡γ)2+(asin⁡γ)2{\displaystyle c={\sqrt {(b-a\cos \gamma )^{2}+(a\sin \gamma )^{2}}}} when⁠γ{\displaystyle \gamma }⁠is acute or c=(b+a|cos⁡γ|)2+(asin⁡γ)2{\displaystyle c={\sqrt {\left(b+a\left|\cos \gamma \right|\right)^{2}+\left(a\sin \gamma \right)^{2}}}} when⁠γ{\displaystyle \gamma }⁠is obtuse. (When⁠γ{\displaystyle \gamma }⁠is obtuse, the modern convention is that⁠cos⁡γ{\displaystyle \cos \gamma }⁠is negative andcos⁡(π−γ)=−cos⁡γ{\displaystyle \cos(\pi -\gamma )=-\cos \gamma }is positive; historically sines and cosines were considered to be line segments with non-negative lengths.) By squaring both sides,expandingthe squared binomial, and then applying the Pythagorean trigonometric identity⁠cos2⁡γ+sin2⁡γ=1{\displaystyle \cos ^{2}\gamma +\sin ^{2}\gamma =1}⁠, we obtain the familiar law of cosines: c2=b2−2bacos⁡γ+a2cos2⁡γ+a2sin2⁡γ=a2+b2−2abcos⁡γ.{\displaystyle {\begin{aligned}c^{2}&=b^{2}-2ba\cos \gamma +a^{2}\cos ^{2}\gamma +a^{2}\sin ^{2}\gamma \\[5mu]&=a^{2}+b^{2}-2ab\cos \gamma .\end{aligned}}} InFrance, the law of cosines is sometimes referred to as thethéorème d'Al-Kashi.[8][9] The same method used by al-Ṭūsī appeared in Europe as early as the 15th century, inRegiomontanus'sDe triangulis omnimodis(On Triangles of All Kinds, 1464), a comprehensive survey of plane and spherical trigonometry known at the time.[10] The theorem was first written using algebraic notation byFrançois Viètein the 16th century. At the beginning of the 19th century, modern algebraic notation allowed the law of cosines to be written in its current symbolic form.[11] Euclidproved this theorem by applying thePythagorean theoremto each of the two right triangles in Fig. 2 (AHBandCHB). 
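Al-Kāshī's two-case prescription and the consolidated modern formula give the same third side; a quick numerical sketch (the function names and test values are illustrative, not from any source):

```python
import math

def third_side_alkashi(a, b, gamma):
    """al-Kashi's two-case rule: project a onto b, then apply Pythagoras."""
    if gamma <= math.pi / 2:   # acute (or right) included angle
        base = b - a * math.cos(gamma)
    else:                      # obtuse: the projection adds to b
        base = b + a * abs(math.cos(gamma))
    return math.hypot(base, a * math.sin(gamma))

def third_side_cosine_law(a, b, gamma):
    """The modern single formula c^2 = a^2 + b^2 - 2ab cos(gamma)."""
    return math.sqrt(a*a + b*b - 2*a*b*math.cos(gamma))

# The two agree for both an acute and an obtuse included angle.
print(third_side_alkashi(3, 5, math.radians(60)))
print(third_side_cosine_law(3, 5, math.radians(60)))
```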
Usingato denote the line segmentCB,bto denote the line segmentAC,cto denote the line segmentAB,dto denote the line segmentCHandhfor the heightBH, triangleAHBgives usc2=(b+d)2+h2,{\displaystyle c^{2}=(b+d)^{2}+h^{2},} and triangleCHBgivesd2+h2=a2.{\displaystyle d^{2}+h^{2}=a^{2}.} Expandingthe first equation givesc2=b2+2bd+d2+h2.{\displaystyle c^{2}=b^{2}+2bd+d^{2}+h^{2}.} Substituting the second equation into this, the following can be obtained:c2=a2+b2+2bd.{\displaystyle c^{2}=a^{2}+b^{2}+2bd.} This is Euclid's Proposition 12 from Book 2 of theElements.[12]To transform it into the modern form of the law of cosines, note thatd=acos⁡(π−γ)=−acos⁡γ.{\displaystyle d=a\cos(\pi -\gamma )=-a\cos \gamma .} Euclid's proof of his Proposition 13 proceeds along the same lines as his proof of Proposition 12: he applies the Pythagorean theorem to both right triangles formed by dropping the perpendicular onto one of the sides enclosing the angleγand uses the square of a difference to simplify. Using more trigonometry, the law of cosines can be deduced by using the Pythagorean theorem only once. In fact, by using the right triangle on the left hand side of Fig. 6 it can be shown that: c2=(b−acos⁡γ)2+(asin⁡γ)2=b2−2abcos⁡γ+a2cos2⁡γ+a2sin2⁡γ=b2+a2−2abcos⁡γ,{\displaystyle {\begin{aligned}c^{2}&=(b-a\cos \gamma )^{2}+(a\sin \gamma )^{2}\\&=b^{2}-2ab\cos \gamma +a^{2}\cos ^{2}\gamma +a^{2}\sin ^{2}\gamma \\&=b^{2}+a^{2}-2ab\cos \gamma ,\end{aligned}}} using thetrigonometric identity⁠cos2⁡γ+sin2⁡γ=1{\displaystyle \cos ^{2}\gamma +\sin ^{2}\gamma =1}⁠. This proof needs a slight modification ifb<acos(γ). In this case, the right triangle to which the Pythagorean theorem is applied movesoutsidethe triangleABC. The only effect this has on the calculation is that the quantityb−acos(γ)is replaced byacos(γ) −b.As this quantity enters the calculation only through its square, the rest of the proof is unaffected. 
However, this problem only occurs whenβis obtuse, and may be avoided by reflecting the triangle about the bisector ofγ. Referring to Fig. 6 it is worth noting that if the angle opposite sideaisαthen:tan⁡α=asin⁡γb−acos⁡γ.{\displaystyle \tan \alpha ={\frac {a\sin \gamma }{b-a\cos \gamma }}.} This is useful for direct calculation of a second angle when two sides and an included angle are given. Thealtitudethrough vertexCis a segment perpendicular to sidec. The distance from the foot of the altitude to vertexAplus the distance from the foot of the altitude to vertexBis equal to the length of sidec(see Fig. 5). Each of these distances can be written as one of the other sides multiplied by the cosine of the adjacent angle,[13]c=acos⁡β+bcos⁡α.{\displaystyle c=a\cos \beta +b\cos \alpha .} (This is still true ifαorβis obtuse, in which case the perpendicular falls outside the triangle.) Multiplying both sides bycyieldsc2=accos⁡β+bccos⁡α.{\displaystyle c^{2}=ac\cos \beta +bc\cos \alpha .} The same steps work just as well when treating either of the other sides as the base of the triangle:a2=accos⁡β+abcos⁡γ,b2=bccos⁡α+abcos⁡γ.{\displaystyle {\begin{aligned}a^{2}&=ac\cos \beta +ab\cos \gamma ,\\[3mu]b^{2}&=bc\cos \alpha +ab\cos \gamma .\end{aligned}}} Taking the equation for⁠c2{\displaystyle c^{2}}⁠and subtracting the equations for⁠b2{\displaystyle b^{2}}⁠and⁠a2{\displaystyle a^{2}}⁠,c2−a2−b2=accos⁡β+bccos⁡α−accos⁡β−bccos⁡α−2abcos⁡γc2=a2+b2−2abcos⁡γ.{\displaystyle {\begin{aligned}c^{2}-a^{2}-b^{2}&={\color {BlueGreen}{\cancel {\color {Black}ac\cos \beta }}}+{\color {Peach}{\cancel {\color {Black}bc\cos \alpha }}}-{\color {BlueGreen}{\cancel {\color {Black}ac\cos \beta }}}-{\color {Peach}{\cancel {\color {Black}bc\cos \alpha }}}-2ab\cos \gamma \\c^{2}&=a^{2}+b^{2}-2ab\cos \gamma .\end{aligned}}} This proof is independent of thePythagorean theorem, insofar as it is based only on the right-triangle definition of cosine and obtains squared side lengths algebraically. 
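The projection identity c = a cos β + b cos α can be checked numerically for a concrete triangle; a short sketch with illustrative side lengths, recovering the angles from the sides via the law of cosines:

```python
import math

# Illustrative triangle with sides a, b, c.
a, b, c = 4.0, 5.0, 6.0
alpha = math.acos((b*b + c*c - a*a) / (2*b*c))  # angle opposite a
beta  = math.acos((a*a + c*c - b*b) / (2*a*c))  # angle opposite b

# Side c equals the sum of the projections of a and b onto c.
print(a * math.cos(beta) + b * math.cos(alpha))  # should equal c = 6.0
```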
Other proofs typically invoke the Pythagorean theorem explicitly, and are more geometric, treatingacosγas a label for the length of a certain line segment.[13] Unlike many proofs, this one handles the cases of obtuse and acute anglesγin a unified fashion. Consider a triangle with sides of lengtha,b,c, whereθis the measurement of the angle opposite the side of lengthc. This triangle can be placed on theCartesian coordinate systemwith sideaaligned along thexaxis and angleθplaced at the origin, by plotting the components of the 3 points of the triangle as shown in Fig. 4:A=(bcos⁡θ,bsin⁡θ),B=(a,0),andC=(0,0).{\displaystyle A=(b\cos \theta ,b\sin \theta ),B=(a,0),{\text{ and }}C=(0,0).} By thedistance formula,[14] c=(a−bcos⁡θ)2+(0−bsin⁡θ)2.{\displaystyle c={\sqrt {(a-b\cos \theta )^{2}+(0-b\sin \theta )^{2}}}.} Squaring both sides and simplifyingc2=(a−bcos⁡θ)2+(−bsin⁡θ)2=a2−2abcos⁡θ+b2cos2⁡θ+b2sin2⁡θ=a2+b2(sin2⁡θ+cos2⁡θ)−2abcos⁡θ=a2+b2−2abcos⁡θ.{\displaystyle {\begin{aligned}c^{2}&=(a-b\cos \theta )^{2}+(-b\sin \theta )^{2}\\&=a^{2}-2ab\cos \theta +b^{2}\cos ^{2}\theta +b^{2}\sin ^{2}\theta \\&=a^{2}+b^{2}(\sin ^{2}\theta +\cos ^{2}\theta )-2ab\cos \theta \\&=a^{2}+b^{2}-2ab\cos \theta .\end{aligned}}} An advantage of this proof is that it does not require the consideration of separate cases depending on whether the angleγis acute, right, or obtuse. However, the cases treated separately inElementsII.12–13 and later by al-Ṭūsī, al-Kāshī, and others could themselves be combined by using concepts of signed lengths and areas and a concept of signed cosine, without needing a full Cartesian coordinate system. Referring to the diagram, triangleABCwith sidesAB=c,BC=aandAC=bis drawn inside its circumcircle as shown. TriangleABDis constructed congruent to triangleABCwithAD=BCandBD=AC. Perpendiculars fromDandCmeet baseABatEandFrespectively. 
Then:BF=AE=BCcos⁡B^=acos⁡B^⇒DC=EF=AB−2BF=c−2acos⁡B^.{\displaystyle {\begin{aligned}&BF=AE=BC\cos {\hat {B}}=a\cos {\hat {B}}\\\Rightarrow \ &DC=EF=AB-2BF=c-2a\cos {\hat {B}}.\end{aligned}}} Now the law of cosines is rendered by a straightforward application ofPtolemy's theoremtocyclic quadrilateralABCD:AD×BC+AB×DC=AC×BD⇒a2+c(c−2acos⁡B^)=b2⇒a2+c2−2accos⁡B^=b2.{\displaystyle {\begin{aligned}&AD\times BC+AB\times DC=AC\times BD\\\Rightarrow \ &a^{2}+c(c-2a\cos {\hat {B}})=b^{2}\\\Rightarrow \ &a^{2}+c^{2}-2ac\cos {\hat {B}}=b^{2}.\end{aligned}}} Plainly if angleBisright, thenABCDis a rectangle and application of Ptolemy's theorem yields thePythagorean theorem:a2+c2=b2.{\displaystyle a^{2}+c^{2}=b^{2}.} One can also prove the law of cosines by calculatingareas. The change of sign as the angleγbecomes obtuse makes a case distinction necessary. Recall that Acute case.Figure 7a shows aheptagoncut into smaller pieces (in two different ways) to yield a proof of the law of cosines. The various pieces are The equality of areas on the left and on the right givesa2+b2=c2+2abcos⁡γ.{\displaystyle a^{2}+b^{2}=c^{2}+2ab\cos \gamma .} Obtuse case.Figure 7b cuts ahexagonin two different ways into smaller pieces, yielding a proof of the law of cosines in the case that the angleγis obtuse. We have The equality of areas on the left and on the right givesa2+b2−2abcos⁡(γ)=c2.{\displaystyle a^{2}+b^{2}-2ab\cos(\gamma )=c^{2}.} The rigorous proof will have to include proofs that various shapes arecongruentand therefore have equal area. This will use the theory ofcongruent triangles. Using thegeometry of the circle, it is possible to give a moregeometricproof than using thePythagorean theoremalone.Algebraicmanipulations (in particular thebinomial theorem) are avoided. Case of acute angleγ, wherea> 2bcosγ.Drop theperpendicularfromAontoa=BC, creating a line segment of lengthbcosγ. Duplicate theright triangleto form theisosceles triangleACP. 
Construct thecirclewith centerAand radiusb, and itstangenth=BHthroughB. The tangenthforms a right angle with the radiusb(Euclid'sElements: Book 3, Proposition 18; or seehere), so the yellow triangle in Figure 8 is right. Apply thePythagorean theoremto obtainc2=b2+h2.{\displaystyle c^{2}=b^{2}+h^{2}.} Then use thetangent secant theorem(Euclid'sElements: Book 3, Proposition 36), which says that the square on the tangent through a pointBoutside the circle is equal to the product of the two lines segments (fromB) created by anysecantof the circle throughB. In the present case:BH2=BC·BP, orh2=a(a−2bcos⁡γ).{\displaystyle h^{2}=a(a-2b\cos \gamma ).} Substituting into the previous equation gives the law of cosines:c2=b2+a(a−2bcos⁡γ).{\displaystyle c^{2}=b^{2}+a(a-2b\cos \gamma ).} Note thath2is thepowerof the pointBwith respect to the circle. The use of the Pythagorean theorem and the tangent secant theorem can be replaced by a single application of thepower of a point theorem. Case of acute angleγ, wherea< 2bcosγ.Drop theperpendicularfromAontoa=BC, creating a line segment of lengthbcosγ. Duplicate theright triangleto form theisosceles triangleACP. Construct thecirclewith centerAand radiusb, and achordthroughBperpendicular toc=AB,half of which ish=BH.Apply thePythagorean theoremto obtainb2=c2+h2.{\displaystyle b^{2}=c^{2}+h^{2}.} Now use thechord theorem(Euclid'sElements: Book 3, Proposition 35), which says that if two chords intersect, the product of the two line segments obtained on one chord is equal to the product of the two line segments obtained on the other chord. In the present case:BH2=BC·BP,orh2=a(2bcos⁡γ−a).{\displaystyle h^{2}=a(2b\cos \gamma -a).} Substituting into the previous equation gives the law of cosines:b2=c2+a(2bcos⁡γ−a).{\displaystyle b^{2}=c^{2}+a(2b\cos \gamma -a).} Note that the power of the pointBwith respect to the circle has the negative value−h2. 
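The tangent-secant step above, h² = a(a − 2b cos γ), can be checked against the Pythagorean computation h² = c² − b²; a small numeric sketch with illustrative values chosen so that a > 2b cos γ:

```python
import math

# With AB = c, AC = b (the radius), BC = a, the tangent length h satisfies
# both h^2 = c^2 - b^2 (tangent perpendicular to radius, then Pythagoras)
# and h^2 = a(a - 2b cos(gamma)) (tangent-secant theorem).
a, b, gamma = 7.0, 3.0, math.radians(50)   # chosen so a > 2b cos(gamma)
c = math.sqrt(a*a + b*b - 2*a*b*math.cos(gamma))

h_sq_pyth   = c*c - b*b
h_sq_secant = a * (a - 2*b*math.cos(gamma))
print(h_sq_pyth, h_sq_secant)
```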
Case of obtuse angleγ.This proof uses the power of a point theorem directly, without the auxiliary triangles obtained by constructing a tangent or a chord. Construct a circle with centerBand radiusa(see Figure 9), which intersects thesecantthroughAandCinCandK. Thepowerof the pointAwith respect to the circle is equal to bothAB2−BC2andAC·AK. Therefore,c2−a2=b(b+2acos⁡(π−γ))=b(b−2acos⁡γ),{\displaystyle {\begin{aligned}c^{2}-a^{2}&{}=b(b+2a\cos(\pi -\gamma ))\\&{}=b(b-2a\cos \gamma ),\end{aligned}}} which is the law of cosines. Using algebraic measures for line segments (allowingnegative numbersas lengths of segments) the case of obtuse angle (CK> 0) and acute angle (CK< 0) can be treated simultaneously. The law of cosines can be proven algebraically from thelaw of sinesand a few standard trigonometric identities.[15]To start, three angles of a triangle sum to astraight angle(α+β+γ=π{\displaystyle \alpha +\beta +\gamma =\pi }radians). Thus by the angle sum identities for sine and cosine, sin⁡γ=−sin⁡(π−γ)=−sin⁡(α+β)=sin⁡αcos⁡β+cos⁡αsin⁡β,cos⁡γ=−cos⁡(π−γ)=−cos⁡(α+β)=sin⁡αsin⁡β−cos⁡αcos⁡β.{\displaystyle {\begin{alignedat}{3}\sin \gamma &={\phantom {-}}\sin(\pi -\gamma )&&={\phantom {-}}\sin(\alpha +\beta )&&=\sin \alpha \,\cos \beta +\cos \alpha \,\sin \beta ,\\[5mu]\cos \gamma &=-\cos(\pi -\gamma )&&=-\cos(\alpha +\beta )&&=\sin \alpha \,\sin \beta -\cos \alpha \,\cos \beta .\end{alignedat}}} Squaring the first of these identities, then substitutingcos⁡αcos⁡β={\displaystyle \cos \alpha \,\cos \beta ={}}sin⁡αsin⁡β−cos⁡γ{\displaystyle \sin \alpha \,\sin \beta -\cos \gamma }from the second, and finally replacingcos2⁡α+sin2⁡α={\displaystyle \cos ^{2}\alpha +\sin ^{2}\alpha ={}}cos2⁡β+sin2⁡β=1,{\displaystyle \cos ^{2}\beta +\sin ^{2}\beta =1,}thePythagorean trigonometric identity, we have: 
sin2⁡γ=(sin⁡αcos⁡β+cos⁡αsin⁡β)2=sin2⁡αcos2⁡β+2sin⁡αsin⁡βcos⁡αcos⁡β+cos2⁡αsin2⁡β=sin2⁡αcos2⁡β+2sin⁡αsin⁡β(sin⁡αsin⁡β−cos⁡γ)+cos2⁡αsin2⁡β=sin2⁡α(cos2⁡β+sin2⁡β)+sin2⁡β(cos2⁡α+sin2⁡α)−2sin⁡αsin⁡βcos⁡γ=sin2⁡α+sin2⁡β−2sin⁡αsin⁡βcos⁡γ.{\displaystyle {\begin{aligned}\sin ^{2}\gamma &=(\sin \alpha \,\cos \beta +\cos \alpha \,\sin \beta )^{2}\\[3mu]&=\sin ^{2}\alpha \,\cos ^{2}\beta +2\sin \alpha \,\sin \beta \,\cos \alpha \,\cos \beta +\cos ^{2}\alpha \,\sin ^{2}\beta \\[3mu]&=\sin ^{2}\alpha \,\cos ^{2}\beta +2\sin \alpha \,\sin \beta (\sin \alpha \,\sin \beta -\cos \gamma )+\cos ^{2}\alpha \,\sin ^{2}\beta \\[3mu]&=\sin ^{2}\alpha (\cos ^{2}\beta +\sin ^{2}\beta )+\sin ^{2}\beta (\cos ^{2}\alpha +\sin ^{2}\alpha )-2\sin \alpha \,\sin \beta \,\cos \gamma \\[3mu]&=\sin ^{2}\alpha +\sin ^{2}\beta -2\sin \alpha \,\sin \beta \,\cos \gamma .\end{aligned}}} The law of sines holds thatasin⁡αβ=bsin⁡β=csin⁡γβ=k,{\displaystyle {\frac {a}{\sin \alpha {\vphantom {\beta }}}}={\frac {b}{\sin \beta }}={\frac {c}{\sin \gamma {\vphantom {\beta }}}}=k,} so to prove the law of cosines, we multiply both sides of our previous identity by⁠k2{\displaystyle k^{2}}⁠: sin2⁡γc2sin2⁡γ=sin2⁡αa2sin2⁡α+sin2⁡βb2sin2⁡β−2sin⁡αsin⁡βcos⁡γabsin⁡αsin⁡βsin2c2=a2+b2−2abcos⁡γ.{\displaystyle {\begin{aligned}\sin ^{2}\gamma {\frac {c^{2}}{\sin ^{2}\gamma }}&=\sin ^{2}\alpha {\frac {a^{2}}{\sin ^{2}\alpha }}+\sin ^{2}\beta {\frac {b^{2}}{\sin ^{2}\beta }}-2\sin \alpha \,\sin \beta \,\cos \gamma {\frac {ab}{\sin \alpha \,\sin \beta {\vphantom {\sin ^{2}}}}}\\[10mu]c^{2}&=a^{2}+b^{2}-2ab\cos \gamma .\end{aligned}}} This concludes the proof. 
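The key squared-sine identity at the heart of this proof can be spot-checked numerically; a sketch with illustrative angles summing to π:

```python
import math

# Check sin^2(gamma) = sin^2(alpha) + sin^2(beta)
#                      - 2 sin(alpha) sin(beta) cos(gamma),
# which holds whenever alpha + beta + gamma = pi.
alpha, beta = math.radians(35), math.radians(80)
gamma = math.pi - alpha - beta

lhs = math.sin(gamma) ** 2
rhs = (math.sin(alpha) ** 2 + math.sin(beta) ** 2
       - 2 * math.sin(alpha) * math.sin(beta) * math.cos(gamma))
print(lhs, rhs)
```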
Denote CB→=a→,CA→=b→,AB→=c→{\displaystyle {\overrightarrow {CB}}={\vec {a}},\ {\overrightarrow {CA}}={\vec {b}},\ {\overrightarrow {AB}}={\vec {c}}} Therefore,c→=a→−b→{\displaystyle {\vec {c}}={\vec {a}}-{\vec {b}}} Taking thedot productof each side with itself:c→⋅c→=(a→−b→)⋅(a→−b→)‖c→‖2=‖a→‖2+‖b→‖2−2a→⋅b→{\displaystyle {\begin{aligned}{\vec {c}}\cdot {\vec {c}}&=({\vec {a}}-{\vec {b}})\cdot ({\vec {a}}-{\vec {b}})\\\Vert {\vec {c}}\Vert ^{2}&=\Vert {\vec {a}}\Vert ^{2}+\Vert {\vec {b}}\Vert ^{2}-2\,{\vec {a}}\cdot {\vec {b}}\end{aligned}}} Using the identity u→⋅v→=‖u→‖‖v→‖cos⁡∠(u→,v→){\displaystyle {\vec {u}}\cdot {\vec {v}}=\Vert {\vec {u}}\Vert \,\Vert {\vec {v}}\Vert \cos \angle ({\vec {u}},\ {\vec {v}})} leads to ‖c→‖2=‖a→‖2+‖b→‖2−2‖a→‖‖b→‖cos⁡∠(a→,b→){\displaystyle \Vert {\vec {c}}\Vert ^{2}=\Vert {\vec {a}}\Vert ^{2}+{\Vert {\vec {b}}\Vert }^{2}-2\,\Vert {\vec {a}}\Vert \!\;\Vert {\vec {b}}\Vert \cos \angle ({\vec {a}},\ {\vec {b}})} The result follows. Whena=b, i.e., when the triangle isisosceleswith the two sides incident to the angleγequal, the law of cosines simplifies significantly. Namely, becausea2+b2= 2a2= 2ab, the law of cosines becomescos⁡γ=1−c22a2{\displaystyle \cos \gamma =1-{\frac {c^{2}}{2a^{2}}}} orc2=2a2(1−cos⁡γ).{\displaystyle c^{2}=2a^{2}(1-\cos \gamma ).} Given an arbitrarytetrahedronwhose four faces have areasA,B,C, andD, withdihedral angle⁠φab{\displaystyle \varphi _{ab}}⁠between facesAandB, etc., a higher-dimensional analogue of the law of cosines is:[16]A2=B2+C2+D2−2(BCcos⁡φbc+CDcos⁡φcd+DBcos⁡φdb).{\displaystyle A^{2}=B^{2}+C^{2}+D^{2}-2\left(BC\cos \varphi _{bc}+CD\cos \varphi _{cd}+DB\cos \varphi _{db}\right).} When the angle,γ, is small and the adjacent sides,aandb, are of similar length, the right hand side of the standard form of the law of cosines is subject tocatastrophic cancellationin numerical approximations. 
In situations where this is an important concern, a mathematically equivalent version of the law of cosines, similar to thehaversine formula, can prove useful:c2=(a−b)2+4absin2⁡(γ2)=(a−b)2+4abhaversin⁡(γ).{\displaystyle {\begin{aligned}c^{2}&=(a-b)^{2}+4ab\sin ^{2}\left({\frac {\gamma }{2}}\right)\\&=(a-b)^{2}+4ab\operatorname {haversin} (\gamma ).\end{aligned}}} In the limit of an infinitesimal angle, the law of cosines degenerates into thecircular arc lengthformula,c=aγ. As in Euclidean geometry, one can use the law of cosines to determine the anglesA,B,Cfrom the knowledge of the sidesa,b,c. In contrast to Euclidean geometry, the reverse is also possible in both non-Euclidean models: the anglesA,B,Cdetermine the sidesa,b,c. A triangle is defined by three pointsu,v, andwon the unit sphere, and the arcs ofgreat circlesconnecting those points. If these great circles make anglesA,B, andCwith opposite sidesa,b,cthen thespherical law of cosinesasserts that all of the following relationships hold: cos⁡a=cos⁡bcos⁡c+sin⁡bsin⁡ccos⁡Acos⁡A=−cos⁡Bcos⁡C+sin⁡Bsin⁡Ccos⁡acos⁡a=cos⁡A+cos⁡Bcos⁡Csin⁡Bsin⁡C.{\displaystyle {\begin{aligned}\cos a&=\cos b\cos c+\sin b\sin c\cos A\\\cos A&=-\cos B\cos C+\sin B\sin C\cos a\\\cos a&={\frac {\cos A+\cos B\cos C}{\sin B\sin C}}.\end{aligned}}} Inhyperbolic geometry, a pair of equations are collectively known as thehyperbolic law of cosines. The first iscosh⁡a=cosh⁡bcosh⁡c−sinh⁡bsinh⁡ccos⁡A{\displaystyle \cosh a=\cosh b\cosh c-\sinh b\sinh c\cos A} wheresinhandcoshare thehyperbolic sine and cosine, and the second iscos⁡A=−cos⁡Bcos⁡C+sin⁡Bsin⁡Ccosh⁡a.{\displaystyle \cos A=-\cos B\cos C+\sin B\sin C\cosh a.} The length of the sides can be computed by: cosh⁡a=cos⁡A+cos⁡Bcos⁡Csin⁡Bsin⁡C.{\displaystyle \cosh a={\frac {\cos A+\cos B\cos C}{\sin B\sin C}}.} The law of cosines can be generalized to allpolyhedraby considering any polyhedron with vector sides and invoking thedivergence Theorem.[17]
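The cancellation problem and the haversine-style remedy above are easy to see numerically; a sketch with illustrative values where γ is tiny and a = b:

```python
import math

def c_standard(a, b, gamma):
    """Law of cosines in its standard form."""
    return math.sqrt(a*a + b*b - 2*a*b*math.cos(gamma))

def c_stable(a, b, gamma):
    """Haversine-style rearrangement: avoids catastrophic cancellation
    when gamma is small and a is close to b."""
    s = math.sin(gamma / 2)
    return math.sqrt((a - b)**2 + 4*a*b*s*s)

a, b, gamma = 1.0, 1.0, 1e-8
print(c_stable(a, b, gamma))    # close to the arc-length limit c = a*gamma
print(c_standard(a, b, gamma))  # loses nearly all precision (may return 0.0)
```

For ordinary, well-conditioned inputs the two forms agree to machine precision; the difference only matters in the near-degenerate regime.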
https://en.wikipedia.org/wiki/Law_of_cosines
In trigonometry, the law of tangents or tangent rule[1] is a statement about the relationship between the tangents of two angles of a triangle and the lengths of the opposing sides. In Figure 1, a, b, and c are the lengths of the three sides of the triangle, and α, β, and γ are the angles opposite those three respective sides. The law of tangents states that a−ba+b=tan⁡12(α−β)tan⁡12(α+β).{\displaystyle {\frac {a-b}{a+b}}={\frac {\tan {\tfrac {1}{2}}(\alpha -\beta )}{\tan {\tfrac {1}{2}}(\alpha +\beta )}}.} The law of tangents, although not as commonly known as the law of sines or the law of cosines, is equivalent to the law of sines, and can be used in any case where two sides and the included angle, or two angles and a side, are known. To prove the law of tangents one can start with the law of sines, where⁠d{\displaystyle d}⁠is the diameter of the circumcircle, so that⁠a=dsin⁡α{\displaystyle a=d\sin \alpha }⁠and⁠b=dsin⁡β{\displaystyle b=d\sin \beta }⁠. It follows that a−ba+b=sin⁡α−sin⁡βsin⁡α+sin⁡β.{\displaystyle {\frac {a-b}{a+b}}={\frac {\sin \alpha -\sin \beta }{\sin \alpha +\sin \beta }}.} Using the trigonometric identity, the factor formula for sines specifically, sin⁡α±sin⁡β=2sin⁡12(α±β)cos⁡12(α∓β),{\displaystyle \sin \alpha \pm \sin \beta =2\sin {\tfrac {1}{2}}(\alpha \pm \beta )\,\cos {\tfrac {1}{2}}(\alpha \mp \beta ),} we get the stated result. As an alternative to using the identity for the sum or difference of two sines, one may cite the trigonometric identity tan⁡12(α±β)=sin⁡α±sin⁡βcos⁡α+cos⁡β{\displaystyle \tan {\tfrac {1}{2}}(\alpha \pm \beta )={\frac {\sin \alpha \pm \sin \beta }{\cos \alpha +\cos \beta }}} (see tangent half-angle formula). The law of tangents can be used to compute the angles of a triangle in which two sides a and b and the enclosed angle γ are given. From tan⁡12(α−β)=a−ba+bcot⁡12γ{\displaystyle \tan {\tfrac {1}{2}}(\alpha -\beta )={\frac {a-b}{a+b}}\cot {\tfrac {1}{2}}\gamma } compute the angle difference α−β=Δ; use that to calculate β= (180° −γ−Δ)/2 and then α=β+Δ. Once an angle opposite a known side is computed, the remaining side c can be computed using the law of sines. In the time before electronic calculators were available, this method was preferable to an application of the law of cosines c=√(a²+b²− 2ab cosγ), as this latter law necessitated an additional lookup in a logarithm table in order to compute the square root. In modern times the law of tangents may have better numerical properties than the law of cosines: if γ is small, and a and b are almost equal, then an application of the law of cosines leads to a subtraction of almost equal values, incurring catastrophic cancellation. On a sphere of unit radius, the sides of the triangle are arcs of great circles.
Accordingly, their lengths can be expressed in radians or any other units of angular measure. Let A, B, C be the angles at the three vertices of the triangle and let a, b, c be the respective lengths of the opposite sides. The spherical law of tangents says[2] tan⁡12(A−B)tan⁡12(A+B)=tan⁡12(a−b)tan⁡12(a+b).{\displaystyle {\frac {\tan {\tfrac {1}{2}}(A-B)}{\tan {\tfrac {1}{2}}(A+B)}}={\frac {\tan {\tfrac {1}{2}}(a-b)}{\tan {\tfrac {1}{2}}(a+b)}}.} The law of tangents was discovered by the Persian mathematician Abu al-Wafa in the 10th century.[3] Ibn Muʿādh al-Jayyānī also described the law of tangents for planar triangles in the 11th century.[4] The law of tangents for spherical triangles was described in the 13th century by the Persian mathematician Nasir al-Din al-Tusi (1201–1274), who also presented the law of sines for plane triangles in his five-volume work Treatise on the Quadrilateral.[4][5] A generalization of the law of tangents holds for a cyclic quadrilateral◻ABCD.{\displaystyle \square ABCD.}Denote the lengths of sides |AB|=a, |BC|=b, |CD|=c, and |DA|=d, and the angle measures ∠DAB=α and ∠ABC=β. Then:[6] This formula reduces to the law of tangents for a triangle when c=0{\displaystyle c=0}.
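Returning to the planar case, the procedure described earlier (find Δ = α − β from the law of tangents, recover the angles, then get the third side from the law of sines) can be sketched as follows; the function name and test values are illustrative:

```python
import math

def solve_sas_by_tangents(a, b, gamma):
    """Given sides a, b and included angle gamma (radians), return
    (alpha, beta, c) using the law of tangents and the law of sines."""
    half_sum = (math.pi - gamma) / 2          # (alpha + beta)/2
    # Law of tangents: tan((alpha-beta)/2) = (a-b)/(a+b) * tan((alpha+beta)/2)
    half_diff = math.atan((a - b) / (a + b) * math.tan(half_sum))
    alpha = half_sum + half_diff
    beta = half_sum - half_diff
    c = a * math.sin(gamma) / math.sin(alpha)  # law of sines
    return alpha, beta, c

alpha, beta, c = solve_sas_by_tangents(4.0, 3.0, math.radians(50))
print(math.degrees(alpha), math.degrees(beta), c)
```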
https://en.wikipedia.org/wiki/Law_of_tangents
Intrigonometry, thelaw of cotangentsis a relationship among the lengths of the sides of atriangleand thecotangentsof the halves of the three angles.[1][2] Just as three quantities whose equality is expressed by thelaw of sinesare equal to the diameter of thecircumscribed circleof the triangle (or to its reciprocal, depending on how the law is expressed), so also the law of cotangents relates the radius of theinscribed circleof atriangle(theinradius) to its sides and angles. Using the usual notations for a triangle (see the figure at the upper right), wherea, b, care the lengths of the three sides,A, B, Care the vertices opposite those three respective sides,α, β, γare the corresponding angles at those vertices,sis thesemiperimeter, that is,s=⁠a+b+c/2⁠, andris the radius of the inscribed circle, the law ofcotangentsstates thatcot⁡12αs−a=cot⁡12βs−b=cot⁡12γs−c=1r,{\displaystyle {\frac {\cot {\frac {1}{2}}\alpha }{s-a}}={\frac {\cot {\frac {1}{2}}\beta }{s-b}}={\frac {\cot {\frac {1}{2}}\gamma }{s-c}}={\frac {1}{r}},} and furthermore that the inradius is given byr=(s−a)(s−b)(s−c)s.{\displaystyle r={\sqrt {\frac {(s-a)(s-b)(s-c)}{s}}}\,.} In the upper figure, the points of tangency of the incircle with the sides of the triangle break the perimeter into 6 segments, in 3 pairs. In each pair the segments are of equal length. For example, the 2 segments adjacent to vertexAare equal. If we pick one segment from each pair, their sum will be thesemiperimeters. An example of this is the segments shown in color in the figure. The two segments making up the red line add up toa, so the blue segment must be of lengths−a. Obviously, the other five segments must also have lengthss−a,s−b, ors−c, as shown in the lower figure. By inspection of the figure, using the definition of the cotangent function, we havecot⁡α2=s−ar{\displaystyle \cot {\frac {\alpha }{2}}={\frac {s-a}{r}}\,}and similarly for the other two angles, proving the first assertion. 
For the second one—the inradius formula—we start from thegeneral addition formula:cot⁡(u+v+w)=cot⁡u+cot⁡v+cot⁡w−cot⁡ucot⁡vcot⁡w1−cot⁡ucot⁡v−cot⁡vcot⁡w−cot⁡wcot⁡u.{\displaystyle \cot(u+v+w)={\frac {\cot u+\cot v+\cot w-\cot u\cot v\cot w}{1-\cot u\cot v-\cot v\cot w-\cot w\cot u}}.} Applying tocot⁡(12α+12β+12γ)=cot⁡π2=0,{\displaystyle \cot \left({\tfrac {1}{2}}\alpha +{\tfrac {1}{2}}\beta +{\tfrac {1}{2}}\gamma \right)=\cot {\tfrac {\pi }{2}}=0,}we obtain: cot⁡α2cot⁡β2cot⁡γ2=cot⁡α2+cot⁡β2+cot⁡γ2.{\displaystyle \cot {\frac {\alpha }{2}}\cot {\frac {\beta }{2}}\cot {\frac {\gamma }{2}}=\cot {\frac {\alpha }{2}}+\cot {\frac {\beta }{2}}+\cot {\frac {\gamma }{2}}.}(This is also thetriple cotangent identity.) Substituting the values obtained in the first part, we get:(s−a)r(s−b)r(s−c)r=s−ar+s−br+s−cr=3s−2sr=sr{\displaystyle {\begin{aligned}{\frac {(s-a)}{r}}{\frac {(s-b)}{r}}{\frac {(s-c)}{r}}&={\frac {s-a}{r}}+{\frac {s-b}{r}}+{\frac {s-c}{r}}\\[2pt]&={\frac {3s-2s}{r}}\\[2pt]&={\frac {s}{r}}\end{aligned}}}Multiplying through by⁠r3/s⁠gives the value ofr2, proving the second assertion. A number of other results can be derived from the law of cotangents. S=r(s−a)+r(s−b)+r(s−c)=r(3s−(a+b+c))=r(3s−2s)=rs{\displaystyle {\begin{aligned}S&=r(s-a)+r(s-b)+r(s-c)\\&=r{\bigl (}3s-(a+b+c){\bigr )}\\&=r(3s-2s)\\&=rs\end{aligned}}}This gives the resultS=s(s−a)(s−b)(s−c){\displaystyle S={\sqrt {s(s-a)(s-b)(s-c)}}}as required. sin⁡12(α−β)sin⁡12(α+β)=cot⁡12β−cot⁡12αcot⁡12β+cot⁡12α=a−b2s−a−b.{\displaystyle {\frac {\sin {\tfrac {1}{2}}(\alpha -\beta )}{\sin {\frac {1}{2}}(\alpha +\beta )}}={\frac {\cot {\frac {1}{2}}\beta -\cot {\tfrac {1}{2}}\alpha }{\cot {\frac {1}{2}}\beta +\cot {\tfrac {1}{2}}\alpha }}={\frac {a-b}{2s-a-b}}.}This gives the resulta−bc=sin⁡12(α−β)cos⁡12γ{\displaystyle {\frac {a-b}{c}}={\dfrac {\sin {\frac {1}{2}}(\alpha -\beta )}{\cos {\frac {1}{2}}\gamma }}}as required. 
cos⁡12(α−β)cos⁡12(α+β)=cot⁡12αcot⁡12β+1cot⁡12αcot⁡12β−1=cot⁡12α+cot⁡12β+2cot⁡12γcot⁡12α+cot⁡12β=4s−a−b−2c2s−a−b.{\displaystyle {\begin{aligned}{\frac {\cos {\tfrac {1}{2}}(\alpha -\beta )}{\cos {\tfrac {1}{2}}(\alpha +\beta )}}&={\frac {\cot {\tfrac {1}{2}}\alpha \,\cot {\tfrac {1}{2}}\beta +1}{\cot {\tfrac {1}{2}}\alpha \,\cot {\tfrac {1}{2}}\beta -1}}\\[4pt]&={\frac {\cot {\tfrac {1}{2}}\alpha +\cot {\tfrac {1}{2}}\beta +2\cot {\tfrac {1}{2}}\gamma }{\cot {\tfrac {1}{2}}\alpha +\cot {\tfrac {1}{2}}\beta }}\\[4pt]&={\frac {4s-a-b-2c}{2s-a-b}}.\end{aligned}}} Here, an extra step is required to transform a product into a sum, according to the sum/product formula. This gives the result b+ac=cos⁡12(α−β)sin⁡12γ{\displaystyle {\frac {b+a}{c}}={\dfrac {\cos {\tfrac {1}{2}}(\alpha -\beta )}{\sin {\tfrac {1}{2}}\gamma }}}as required. The law of cotangents is not as common or well established in trigonometry as thelaws of sines,cosines, ortangents, so the same name is sometimes applied to other triangle identities involving cotangents. For example: The sum of the cotangents of two angles equals the ratio of the side between them to thealtitudethrough the third vertex:[3] The law of cosines can be expressed in terms of the cotangent instead of the cosine, which brings the triangle's areaS{\displaystyle S}into the identity:[4] Because the three angles of a triangle sum toπ,{\displaystyle \pi ,}the sum of the pairwise products of their cotangents is one:[5] The law of cotangents may also refer to thecotangent rulein spherical trigonometry.
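Both assertions of the law of cotangents (the common ratio 1/r and the inradius formula) can be checked together numerically; a sketch with illustrative side lengths:

```python
import math

a, b, c = 5.0, 6.0, 7.0
s = (a + b + c) / 2                              # semiperimeter
r = math.sqrt((s - a) * (s - b) * (s - c) / s)   # inradius formula

# Angle alpha (opposite side a) recovered from the law of cosines.
alpha = math.acos((b*b + c*c - a*a) / (2*b*c))

# Law of cotangents: cot(alpha/2) / (s - a) should equal 1/r.
cot_half = 1 / math.tan(alpha / 2)
print(cot_half / (s - a), 1 / r)
```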
https://en.wikipedia.org/wiki/Law_of_cotangents
In trigonometry, Mollweide's formula is a pair of relationships between sides and angles in a triangle.[1][2] A variant in more geometrical style was first published by Isaac Newton in 1707 and then by Friedrich Wilhelm von Oppel in 1746. Thomas Simpson published the now-standard expression in 1748. Karl Mollweide republished the same result in 1808 without citing those predecessors.[3] It can be used to check the consistency of solutions of triangles.[4] Let a, b, and c be the lengths of the three sides of a triangle. Let α, β, and γ be the measures of the angles opposite those three sides respectively. Mollweide's formulas are a+bc=cos⁡12(α−β)sin⁡12γanda−bc=sin⁡12(α−β)cos⁡12γ.{\displaystyle {\frac {a+b}{c}}={\frac {\cos {\tfrac {1}{2}}(\alpha -\beta )}{\sin {\tfrac {1}{2}}\gamma }}\quad {\text{and}}\quad {\frac {a-b}{c}}={\frac {\sin {\tfrac {1}{2}}(\alpha -\beta )}{\cos {\tfrac {1}{2}}\gamma }}.} Because in a planar triangle12γ=12π−12(α+β),{\displaystyle {\tfrac {1}{2}}\gamma ={\tfrac {1}{2}}\pi -{\tfrac {1}{2}}(\alpha +\beta ),}these identities can alternately be written in a form in which they are more clearly a limiting case of Napier's analogies for spherical triangles (this was the form used by Von Oppel), a+bc=cos⁡12(α−β)cos⁡12(α+β)anda−bc=sin⁡12(α−β)sin⁡12(α+β).{\displaystyle {\frac {a+b}{c}}={\frac {\cos {\tfrac {1}{2}}(\alpha -\beta )}{\cos {\tfrac {1}{2}}(\alpha +\beta )}}\quad {\text{and}}\quad {\frac {a-b}{c}}={\frac {\sin {\tfrac {1}{2}}(\alpha -\beta )}{\sin {\tfrac {1}{2}}(\alpha +\beta )}}.} Dividing one by the other to eliminate c results in the law of tangents, a−ba+b=tan⁡12(α−β)tan⁡12(α+β).{\displaystyle {\frac {a-b}{a+b}}={\frac {\tan {\tfrac {1}{2}}(\alpha -\beta )}{\tan {\tfrac {1}{2}}(\alpha +\beta )}}.} Mollweide's formula can also be written in terms of half-angle tangents alone. Multiplying the respective sides of these identities gives one half-angle tangent in terms of the three sides, which becomes the law of cotangents after taking the square root, tan⁡12α=(s−b)(s−c)s(s−a),{\displaystyle \tan {\tfrac {1}{2}}\alpha ={\sqrt {\frac {(s-b)(s-c)}{s(s-a)}}},} where s=12(a+b+c){\textstyle s={\tfrac {1}{2}}(a+b+c)} is the semiperimeter. The identities can also be proven equivalent to the law of sines and law of cosines. In spherical trigonometry, the law of cosines and derived identities such as Napier's analogies have precise duals swapping central angles measuring the sides and dihedral angles at the vertices. In the infinitesimal limit, the law of cosines for sides reduces to the planar law of cosines and two of Napier's analogies reduce to Mollweide's formulas above.
But the law of cosines for angles degenerates to0=0.{\displaystyle 0=0.}By dividing squared side length by the spherical excessE,{\displaystyle E,}we obtain a non-vanishing ratio, the spherical trigonometry relation: In the infinitesimal limit, as the half-angle tangents of spherical sides reduce to lengths of planar sides, the half-angle tangent of spherical excess reduces to twice the areaA{\displaystyle A}of a planar triangle, so on the plane this is: and likewise fora{\displaystyle a}andb.{\displaystyle b.} As corollaries (multiplying or dividing the above formula in terms ofa{\displaystyle a}andb{\displaystyle b}) we obtain two dual statements to Mollweide's formulas. The first expresses the area in terms of two sides and the included angle, and the other is the law of sines: We can alternately express the second formula in a form closer to one of Mollweide's formulas (again the law of tangents): A generalization of Mollweide's formula holds for acyclic quadrilateral◻ABCD.{\displaystyle \square ABCD.}Denote the lengths of sides|AB|=a,{\displaystyle |AB|=a,}|BC|=b,{\displaystyle |BC|=b,}|CD|=c,{\displaystyle |CD|=c,}and|DA|=d{\displaystyle |DA|=d}and angle measures∠DAB=α,{\displaystyle \angle {DAB}=\alpha ,}∠ABC=β,{\displaystyle \angle {ABC}=\beta ,}∠BCD=γ,{\displaystyle \angle {BCD}=\gamma ,}and∠CDA=δ.{\displaystyle \angle {CDA}=\delta .}IfE{\displaystyle E}is the point of intersection of the diagonals, denote∠CED=θ.{\displaystyle \angle {CED}=\theta .}Then:[5] Several variant formulas can be constructed by substituting based on the cyclic quadrilateral identities, As rational relationships in terms of half-angle tangents of two adjacent angles, these formulas can be written: A triangle may be regarded as a quadrilateral with one side of length zero. 
From this perspective, asd{\displaystyle d}approaches zero, a cyclic quadrilateral converges into a triangle△A′B′C′,{\displaystyle \triangle A'B'C',}and the formulas above simplify to the analogous triangle formulas. Relabeling to match the convention for triangles, in the limita′=b,{\displaystyle a'=b,}b′=c,{\displaystyle b'=c,}c′=a,{\displaystyle c'=a,}α′=α+δ−π=π−θ,{\displaystyle \alpha '=\alpha +\delta -\pi =\pi -\theta ,}β′=β,{\displaystyle \beta '=\beta ,}andγ′=γ.{\displaystyle \gamma '=\gamma .}
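As a consistency check of a solved triangle, the standard Mollweide pair, (a + b)/c = cos ½(α − β)/sin ½γ and (a − b)/c = sin ½(α − β)/cos ½γ, can be evaluated numerically; a sketch with illustrative side lengths:

```python
import math

# Solve a triangle from its sides, then verify both Mollweide identities.
a, b, c = 7.0, 5.0, 4.0
alpha = math.acos((b*b + c*c - a*a) / (2*b*c))
beta  = math.acos((a*a + c*c - b*b) / (2*a*c))
gamma = math.pi - alpha - beta

lhs1, rhs1 = (a + b) / c, math.cos((alpha - beta)/2) / math.sin(gamma/2)
lhs2, rhs2 = (a - b) / c, math.sin((alpha - beta)/2) / math.cos(gamma/2)
print(lhs1, rhs1)
print(lhs2, rhs2)
```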
https://en.wikipedia.org/wiki/Mollweide%27s_formula
Solution of triangles(Latin:solutio triangulorum) is the maintrigonometricproblem of finding the characteristics of atriangle(angles and lengths of sides), when some of these are known. The triangle can be located on aplaneor on asphere. Applications requiring triangle solutions includegeodesy,astronomy,construction, andnavigation. A general form triangle has six main characteristics (see picture): three linear (side lengthsa, b, c) and three angular (α, β, γ). The classical plane trigonometry problem is to specify three of the six characteristics and determine the other three. A triangle can be uniquely determined in this sense when given any of the following:[1][2] For all cases in the plane, at least one of the side lengths must be specified. If only the angles are given, the side lengths cannot be determined, because anysimilartriangle is a solution. The standard method of solving the problem is to use fundamental relations. There are other (sometimes practically useful) universal relations: thelaw of cotangentsandMollweide's formula. Let three side lengthsa, b, cbe specified. To find the anglesα, β, thelaw of cosinescan be used:[3]α=arccos⁡b2+c2−a22bcβ=arccos⁡a2+c2−b22ac.{\displaystyle {\begin{aligned}\alpha &=\arccos {\frac {b^{2}+c^{2}-a^{2}}{2bc}}\\[4pt]\beta &=\arccos {\frac {a^{2}+c^{2}-b^{2}}{2ac}}.\end{aligned}}} Then angleγ= 180° −α−β. Some sources recommend to find angleβfrom thelaw of sinesbut (as Note 1 above states) there is a risk of confusing an acute angle value with an obtuse one. Another method of calculating the angles from known sides is to apply thelaw of cotangents. Area usingHeron's formula:A=s(s−a)(s−b)(s−c){\displaystyle A={\sqrt {s(s-a)(s-b)(s-c)}}}wheres=a+b+c2{\displaystyle s={\frac {a+b+c}{2}}} Heron's formula without using the semiperimeter:A=(a+b+c)(b+c−a)(a+c−b)(a+b−c)4{\displaystyle A={\frac {\sqrt {(a+b+c)(b+c-a)(a+c-b)(a+b-c)}}{4}}} Here the lengths of sidesa, band the angleγbetween these sides are known. 
The third side can be determined from the law of cosines:[4] {\displaystyle c={\sqrt {a^{2}+b^{2}-2ab\cos \gamma }}.} Now we use the law of cosines to find the second angle: {\displaystyle \alpha =\arccos {\frac {b^{2}+c^{2}-a^{2}}{2bc}}.} Finally, β = 180° − α − γ. This case is not always solvable; a solution is guaranteed to be unique only if the side length adjacent to the angle is shorter than the other side length. Assume that two sides b, c and the angle β are known. The equation for the angle γ can be implied from the law of sines:[5] {\displaystyle \sin \gamma ={\frac {c}{b}}\sin \beta .} We denote further D = (c/b) sin β (the equation's right side). There are four possible cases: Once γ is obtained, the third angle α = 180° − β − γ. The third side can then be found from the law of sines: {\displaystyle a=b\ {\frac {\sin \alpha }{\sin \beta }}} or from the law of cosines: {\displaystyle a=c\cos \beta \pm {\sqrt {b^{2}-c^{2}\sin ^{2}\beta }}} The known characteristics are the side c and the angles α, β. The third angle γ = 180° − α − β. Two unknown sides can be calculated from the law of sines:[6] {\displaystyle {\begin{aligned}a&=c\ {\frac {\sin \alpha }{\sin \gamma }}=c\ {\frac {\sin \alpha }{\sin(\alpha +\beta )}}\\[4pt]b&=c\ {\frac {\sin \beta }{\sin \gamma }}=c\ {\frac {\sin \beta }{\sin(\alpha +\beta )}}\end{aligned}}} The procedure for solving an AAS triangle is the same as that for an ASA triangle: first, find the third angle by using the angle sum property of a triangle, then find the other two sides using the law of sines. In many cases, triangles can be solved given three pieces of information, some of which are the lengths of the triangle's medians, altitudes, or angle bisectors.
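The ambiguous SSA case above can be sketched in Python. This is a minimal helper (the function name, argument order, and return format are my own): it returns zero, one, or two solutions depending on D = (c/b) sin β and on whether b < c.

```python
import math

def solve_ssa(b, c, beta_deg):
    """Solve the SSA case: sides b, c and angle beta (opposite b) known.

    Returns a list of solutions (alpha, beta, gamma, a) in degrees,
    following the case analysis in the text.
    """
    beta = math.radians(beta_deg)
    D = (c / b) * math.sin(beta)          # = sin(gamma) by the law of sines
    if D > 1:
        return []                          # no triangle exists
    gammas = [math.asin(D)]
    # Second candidate (obtuse gamma) only in the ambiguous case b < c
    if D < 1 and b < c:
        gammas.append(math.pi - gammas[0])
    solutions = []
    for gamma in gammas:
        alpha = math.pi - beta - gamma     # angle sum
        if alpha <= 0:
            continue
        a = b * math.sin(alpha) / math.sin(beta)   # law of sines
        solutions.append((math.degrees(alpha), beta_deg,
                          math.degrees(gamma), a))
    return solutions
```

For example, b = 6, c = 10, β = 30° gives two valid triangles, while b = 2, c = 10, β = 30° gives none.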
Posamentier and Lehmann[7] list the results for the question of solvability using no higher than square roots (i.e., constructibility) for each of the 95 distinct cases; 63 of these are constructible. The general spherical triangle is fully determined by three of its six characteristics (3 sides and 3 angles). The lengths of the sides a, b, c of a spherical triangle are their central angles, measured in angular units rather than linear units. (On a unit sphere, the angle (in radians) and the length around the sphere are numerically the same. On other spheres, the angle (in radians) is equal to the length around the sphere divided by the radius.) Spherical geometry differs from planar Euclidean geometry, so the solution of spherical triangles is built on different rules. For example, the sum of the three angles α + β + γ depends on the size of the triangle. In addition, there are no unequal similar triangles, so the problem of constructing a triangle with three specified angles has a unique solution. The basic relations used to solve a problem are similar to those of the planar case: see the spherical law of cosines and the spherical law of sines. Among other relationships that may be useful are the half-side formula and Napier's analogies:[8] {\displaystyle {\begin{aligned}\tan {\tfrac {1}{2}}c\,\cos {\tfrac {1}{2}}(\alpha -\beta )&=\tan {\tfrac {1}{2}}(a+b)\cos {\tfrac {1}{2}}(\alpha +\beta )\\\tan {\tfrac {1}{2}}c\,\sin {\tfrac {1}{2}}(\alpha -\beta )&=\tan {\tfrac {1}{2}}(a-b)\sin {\tfrac {1}{2}}(\alpha +\beta )\\\cot {\tfrac {1}{2}}\gamma \,\cos {\tfrac {1}{2}}(a-b)&=\tan {\tfrac {1}{2}}(\alpha +\beta )\cos {\tfrac {1}{2}}(a+b)\\\cot {\tfrac {1}{2}}\gamma \,\sin {\tfrac {1}{2}}(a-b)&=\tan {\tfrac {1}{2}}(\alpha -\beta )\sin {\tfrac {1}{2}}(a+b).\end{aligned}}} Known: the sides a, b, c (in angular units).
The triangle's angles are computed using the spherical law of cosines: {\displaystyle {\begin{aligned}\alpha &=\arccos {\frac {\cos a-\cos b\,\cos c}{\sin b\,\sin c}},\\[4pt]\beta &=\arccos {\frac {\cos b-\cos c\,\cos a}{\sin c\,\sin a}},\\[4pt]\gamma &=\arccos {\frac {\cos c-\cos a\,\cos b}{\sin a\,\sin b}}.\end{aligned}}} Known: the sides a, b and the angle γ between them. The side c can be found from the spherical law of cosines: {\displaystyle c=\arccos \left(\cos a\cos b+\sin a\sin b\cos \gamma \right).} The angles α, β can be calculated as above, or by using Napier's analogies: {\displaystyle {\begin{aligned}\alpha &=\arctan {\frac {2\sin a}{\tan {\frac {1}{2}}\gamma \,\sin(b+a)+\cot {\frac {1}{2}}\gamma \,\sin(b-a)}},\\[4pt]\beta &=\arctan {\frac {2\sin b}{\tan {\frac {1}{2}}\gamma \,\sin(a+b)+\cot {\frac {1}{2}}\gamma \,\sin(a-b)}}.\end{aligned}}} This problem arises in the navigation problem of finding the great circle between two points on the earth specified by their latitude and longitude; in this application, it is important to use formulas which are not susceptible to round-off errors.
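The spherical SSS formulas above translate directly into code. A minimal Python sketch (function name mine), taking the sides as central angles in radians:

```python
import math

def spherical_sss_angles(a, b, c):
    """Angles of a spherical triangle from its three sides (radians),
    via the spherical law of cosines, cycling the roles of the sides."""
    def angle(x, y, z):
        # Angle opposite side x, given the other two sides y, z
        return math.acos((math.cos(x) - math.cos(y) * math.cos(z))
                         / (math.sin(y) * math.sin(z)))
    return angle(a, b, c), angle(b, c, a), angle(c, a, b)
```

For the octant triangle (all sides π/2) every angle is π/2, so the angle sum 3π/2 exceeds π, illustrating the size-dependent angle sum noted above.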
For this purpose, the following formulas (which may be derived using vector algebra) can be used: {\displaystyle {\begin{aligned}c&=\arctan {\frac {\sqrt {(\sin a\cos b-\cos a\sin b\cos \gamma )^{2}+(\sin b\sin \gamma )^{2}}}{\cos a\cos b+\sin a\sin b\cos \gamma }},\\[4pt]\alpha &=\arctan {\frac {\sin a\sin \gamma }{\sin b\cos a-\cos b\sin a\cos \gamma }},\\[4pt]\beta &=\arctan {\frac {\sin b\sin \gamma }{\sin a\cos b-\cos a\sin b\cos \gamma }},\end{aligned}}} where the signs of the numerators and denominators in these expressions should be used to determine the quadrant of the arctangent. This problem is not solvable in all cases; a solution is guaranteed to be unique only if the side length adjacent to the angle is shorter than the other side length. Known: the sides b, c and the angle β not between them. A solution exists if the following condition holds: {\displaystyle b>\arcsin \left(\sin c\,\sin \beta \right).} The angle γ can be found from the spherical law of sines: {\displaystyle \gamma =\arcsin {\frac {\sin c\,\sin \beta }{\sin b}}.} As for the plane case, if b < c then there are two solutions: γ and 180° − γ. We can find other characteristics by using Napier's analogies: {\displaystyle {\begin{aligned}a&=2\arctan \left[\tan {\tfrac {1}{2}}(b-c)\ {\frac {\sin {\tfrac {1}{2}}(\beta +\gamma )}{\sin {\tfrac {1}{2}}(\beta -\gamma )}}\right],\\[4pt]\alpha &=2\operatorname {arccot} \left[\tan {\tfrac {1}{2}}(\beta -\gamma )\ {\frac {\sin {\tfrac {1}{2}}(b+c)}{\sin {\tfrac {1}{2}}(b-c)}}\right].\end{aligned}}} Known: the side c and the angles α, β.
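Since the text stresses using the signs of numerator and denominator to pick the quadrant, the natural implementation of the round-off-resistant SAS formulas uses atan2, which does exactly that. A sketch (function name and argument order are my own; all quantities in radians):

```python
import math

def spherical_sas(a, b, gamma):
    """Side c and angles alpha, beta of a spherical triangle from sides
    a, b and included angle gamma, using the atan2 (quadrant-aware)
    form of the vector-algebra formulas above."""
    # hypot computes the square root of the sum of squares robustly
    num_c = math.hypot(
        math.sin(a)*math.cos(b) - math.cos(a)*math.sin(b)*math.cos(gamma),
        math.sin(b)*math.sin(gamma))
    den_c = math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(gamma)
    c = math.atan2(num_c, den_c)
    alpha = math.atan2(
        math.sin(a)*math.sin(gamma),
        math.sin(b)*math.cos(a) - math.cos(b)*math.sin(a)*math.cos(gamma))
    beta = math.atan2(
        math.sin(b)*math.sin(gamma),
        math.sin(a)*math.cos(b) - math.cos(a)*math.sin(b)*math.cos(gamma))
    return c, alpha, beta
```

For the octant triangle (a = b = γ = π/2) this returns c = α = β = π/2, consistent with the SSS check above.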
First we determine the angle γ using the spherical law of cosines for angles: {\displaystyle \gamma =\arccos \left(\sin \alpha \sin \beta \cos c-\cos \alpha \cos \beta \right).} We can find the two unknown sides from the spherical law of cosines (using the calculated angle γ): {\displaystyle {\begin{aligned}a&=\arccos {\frac {\cos \alpha +\cos \beta \cos \gamma }{\sin \beta \sin \gamma }},\\[4pt]b&=\arccos {\frac {\cos \beta +\cos \alpha \cos \gamma }{\sin \alpha \sin \gamma }},\end{aligned}}} or by using Napier's analogies: {\displaystyle {\begin{aligned}a&=\arctan {\frac {2\sin \alpha }{\cot {\frac {1}{2}}c\,\sin(\beta +\alpha )+\tan {\frac {1}{2}}c\,\sin(\beta -\alpha )}},\\[4pt]b&=\arctan {\frac {2\sin \beta }{\cot {\frac {1}{2}}c\,\sin(\alpha +\beta )+\tan {\frac {1}{2}}c\,\sin(\alpha -\beta )}}.\end{aligned}}} Known: the side a and the angles α, β. The side b can be found from the spherical law of sines: {\displaystyle b=\arcsin {\frac {\sin a\,\sin \beta }{\sin \alpha }}.} If the angle for the side a is acute and α > β, another solution exists: {\displaystyle b=\pi -\arcsin {\frac {\sin a\,\sin \beta }{\sin \alpha }}.} We can find other characteristics by using Napier's analogies: {\displaystyle {\begin{aligned}c&=2\arctan \left[\tan {\tfrac {1}{2}}(a-b)\ {\frac {\sin {\tfrac {1}{2}}(\alpha +\beta )}{\sin {\tfrac {1}{2}}(\alpha -\beta )}}\right],\\[4pt]\gamma &=2\operatorname {arccot} \left[\tan {\tfrac {1}{2}}(\alpha -\beta )\ {\frac {\sin {\tfrac {1}{2}}(a+b)}{\sin {\tfrac {1}{2}}(a-b)}}\right].\end{aligned}}} Known: the angles α, β, γ.
From the spherical law of cosines for angles we infer: {\displaystyle {\begin{aligned}a&=\arccos {\frac {\cos \alpha +\cos \beta \cos \gamma }{\sin \beta \sin \gamma }},\\[4pt]b&=\arccos {\frac {\cos \beta +\cos \gamma \cos \alpha }{\sin \gamma \sin \alpha }},\\[4pt]c&=\arccos {\frac {\cos \gamma +\cos \alpha \cos \beta }{\sin \alpha \sin \beta }}.\end{aligned}}} The above algorithms become much simpler if one of the angles of a triangle (for example, the angle C) is a right angle. Such a spherical triangle is fully defined by two of its elements, and the other three can be calculated using Napier's pentagon or the following relations. If one wants to measure the distance d from shore to a remote ship via triangulation, one marks on the shore two points with a known distance ℓ between them (the baseline). Let α, β be the angles between the baseline and the direction to the ship. From the formulae above (ASA case, assuming planar geometry) one can compute the distance as the triangle height: {\displaystyle d={\frac {\sin \alpha \,\sin \beta }{\sin(\alpha +\beta )}}\ell ={\frac {\tan \alpha \,\tan \beta }{\tan \alpha +\tan \beta }}\ell .} For the spherical case, one can first compute the length of the side from the point at α to the ship (i.e. the side opposite to β) via the ASA formula {\displaystyle \tan b={\frac {2\sin \beta }{\cot {\frac {1}{2}}\ell \,\sin(\alpha +\beta )+\tan {\frac {1}{2}}\ell \,\sin(\alpha -\beta )}},} and insert this into the AAS formula for the right subtriangle that contains the angle α and the sides b and d: {\displaystyle \sin d=\sin b\sin \alpha ={\frac {\tan b}{\sqrt {1+\tan ^{2}b}}}\sin \alpha .} (The planar formula is actually the first term of the Taylor expansion in powers of ℓ of the spherical solution for d.) This method is used in cabotage.
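The planar shore-to-ship formula above is easy to check numerically; a minimal sketch (function name and degree-based interface are my own):

```python
import math

def distance_to_ship(alpha_deg, baseline, beta_deg=None):
    """Planar triangulation distance from a baseline of length `baseline`
    to a target, given the two baseline angles in degrees:
    d = sin(alpha) sin(beta) / sin(alpha + beta) * l, as in the text."""
    if beta_deg is None:
        beta_deg = alpha_deg  # symmetric sighting
    alpha = math.radians(alpha_deg)
    beta = math.radians(beta_deg)
    return math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta) * baseline
```

With α = β = 45° over a 100 m baseline, the ship sits at the apex of an isosceles right triangle, so d = 50 m.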
The angles α, β are defined by observation of familiar landmarks from the ship. As another example, if one wants to measure the height h of a mountain or a high building, the angles α, β from two ground points to the top are specified. Let ℓ be the distance between these points. From the same ASA case formulas we obtain: {\displaystyle h={\frac {\sin \alpha \,\sin \beta }{\sin(\beta -\alpha )}}\ell ={\frac {\tan \alpha \,\tan \beta }{\tan \beta -\tan \alpha }}\ell .} To calculate the distance between two points on the globe, we consider the spherical triangle ABC, where C is the North Pole. Some characteristics are: {\displaystyle {\begin{aligned}a&=90^{\circ }-\lambda _{B},\\b&=90^{\circ }-\lambda _{A},\\\gamma &=L_{A}-L_{B},\end{aligned}}} where λ and L denote latitude and longitude. If two sides and the included angle are given, we obtain from the formulas above {\displaystyle {\overline {AB}}=R\arccos \left[\sin \lambda _{A}\sin \lambda _{B}+\cos \lambda _{A}\cos \lambda _{B}\cos(L_{A}-L_{B})\right].} Here R is the Earth's radius.
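The two-point distance formula above can be sketched directly in Python. The clamp guarding acos against floating-point values slightly outside [−1, 1], the function name, and the default mean Earth radius of 6371 km are my additions:

```python
import math

def great_circle_distance(lat_a, lon_a, lat_b, lon_b, radius=6371.0):
    """Great-circle distance between two points given in degrees,
    via the spherical-law-of-cosines formula in the text.
    Returns the distance in the same units as `radius` (km by default)."""
    la, lb = math.radians(lat_a), math.radians(lat_b)
    dlon = math.radians(lon_a - lon_b)
    cos_central = (math.sin(la) * math.sin(lb)
                   + math.cos(la) * math.cos(lb) * math.cos(dlon))
    # Clamp against rounding before acos
    central = math.acos(min(1.0, max(-1.0, cos_central)))
    return radius * central
```

From the North Pole to a point on the equator the central angle is 90°, giving a quarter of the Earth's circumference (about 10,008 km).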
https://en.wikipedia.org/wiki/Solution_of_triangles
Surveying or land surveying is the technique, profession, art, and science of determining the terrestrial two-dimensional or three-dimensional positions of points and the distances and angles between them. These points are usually on the surface of the Earth, and they are often used to establish maps and boundaries for ownership, locations such as the designated positions of structural components for construction or the surface location of subsurface features, or other purposes required by government or civil law, such as property sales.[1] A professional in land surveying is called a land surveyor. Surveyors work with elements of geodesy, geometry, trigonometry, regression analysis, physics, engineering, metrology, programming languages, and the law. They use equipment such as total stations, robotic total stations, theodolites, GNSS receivers, retroreflectors, 3D scanners, lidar sensors, radios, inclinometers, handheld tablets, optical and digital levels, subsurface locators, drones, GIS, and surveying software. Surveying has been an element in the development of the human environment since the beginning of recorded history. It is used in the planning and execution of most forms of construction. It is also used in transportation, communications, mapping, and the definition of legal boundaries for land ownership. It is an important tool for research in many other scientific disciplines. The International Federation of Surveyors defines the function of surveying as follows:[2] A surveyor is a professional person with the academic qualifications and technical expertise to conduct one, or more, of the following activities; Surveying has occurred since humans built the first large structures. In ancient Egypt, a rope stretcher would use simple geometry to re-establish boundaries after the annual floods of the Nile River. The almost perfect squareness and north–south orientation of the Great Pyramid of Giza, built c. 2700 BC, affirm the Egyptians' command of surveying.
The groma instrument may have originated in Mesopotamia (early 1st millennium BC).[3] The prehistoric monument at Stonehenge (c. 2500 BC) was set out by prehistoric surveyors using peg and rope geometry.[4] The mathematician Liu Hui described ways of measuring distant objects in his work Haidao Suanjing or The Sea Island Mathematical Manual, published in 263 AD. The Romans recognized land surveying as a profession. They established the basic measurements under which the Roman Empire was divided, such as a tax register of conquered lands (300 AD).[5] Roman surveyors were known as Gromatici. In medieval Europe, beating the bounds maintained the boundaries of a village or parish. This was the practice of gathering a group of residents and walking around the parish or village to establish a communal memory of the boundaries. Young boys were included to ensure the memory lasted as long as possible. In England, William the Conqueror commissioned the Domesday Book in 1086. It recorded the names of all the land owners, the area of land they owned, the quality of the land, and specific information of the area's content and inhabitants. It did not include maps showing exact locations. Abel Foullon described a plane table in 1551, but it is thought that the instrument was in use earlier, as his description is of a developed instrument. Gunter's chain was introduced in 1620 by English mathematician Edmund Gunter. It enabled plots of land to be accurately surveyed and plotted for legal and commercial purposes. Leonard Digges described a theodolite that measured horizontal angles in his book A geometric practice named Pantometria (1571). Joshua Habermel (Erasmus Habermehl) created a theodolite with a compass and tripod in 1576. Jonathan Sisson was the first to incorporate a telescope on a theodolite, in 1725.[6] In the 18th century, modern techniques and instruments for surveying began to be used. Jesse Ramsden introduced the first precision theodolite in 1787.
It was an instrument for measuring angles in the horizontal and vertical planes. He created his great theodolite using an accurate dividing engine of his own design. Ramsden's theodolite represented a great step forward in the instrument's accuracy. William Gascoigne invented an instrument that used a telescope with an installed crosshair as a target device, in 1640. James Watt developed an optical meter for the measuring of distance in 1771; it measured the parallactic angle from which the distance to a point could be deduced. Dutch mathematician Willebrord Snellius (a.k.a. Snel van Royen) introduced the modern systematic use of triangulation. In 1615 he surveyed the distance from Alkmaar to Breda, approximately 72 miles (116 km). He underestimated this distance by 3.5%. The survey was a chain of quadrangles containing 33 triangles in all. Snell showed how planar formulae could be corrected to allow for the curvature of the Earth. He also showed how to resect, or calculate, the position of a point inside a triangle using the angles cast between the vertices at the unknown point. These could be measured more accurately than bearings of the vertices, which depended on a compass. His work established the idea of surveying a primary network of control points, and locating subsidiary points inside the primary network later. Between 1733 and 1740, Jacques Cassini and his son César undertook the first triangulation of France. They included a re-surveying of the meridian arc, leading to the publication in 1745 of the first map of France constructed on rigorous principles. By this time triangulation methods were well established for local map-making. It was only towards the end of the 18th century that detailed triangulation network surveys mapped whole countries. In 1784, a team from General William Roy's Ordnance Survey of Great Britain began the Principal Triangulation of Britain. The first Ramsden theodolite was built for this survey. The survey was finally completed in 1853.
The Great Trigonometric Survey of India began in 1801. The Indian survey had an enormous scientific impact. It was responsible for one of the first accurate measurements of a section of an arc of longitude, and for measurements of the geodesic anomaly. It named and mapped Mount Everest and the other Himalayan peaks. Surveying became a professional occupation in high demand at the turn of the 19th century with the onset of the Industrial Revolution. The profession developed more accurate instruments to aid its work. Industrial infrastructure projects used surveyors to lay out canals, roads and rail. In the US, the Land Ordinance of 1785 created the Public Land Survey System. It formed the basis for dividing the western territories into sections to allow the sale of land. The PLSS divided states into township grids which were further divided into sections and fractions of sections.[1] Napoleon Bonaparte founded continental Europe's first cadastre in 1808. This gathered data on the number of parcels of land, their value, land usage, and names. This system soon spread around Europe. Robert Torrens introduced the Torrens system in South Australia in 1858. Torrens intended to simplify land transactions and provide reliable titles via a centralized register of land. The Torrens system was adopted in several other nations of the English-speaking world. Surveying became increasingly important with the arrival of railroads in the 1800s. Surveying was necessary so that railroads could plan technologically and financially viable routes. At the beginning of the century, surveyors had improved the older chains and ropes, but they still faced the problem of accurate measurement of long distances. Trevor Lloyd Wadley developed the Tellurometer during the 1950s.
It measures long distances using two microwave transmitter/receivers.[7] During the late 1950s Geodimeter introduced electronic distance measurement (EDM) equipment.[8] EDM units use a multi-frequency phase shift of light waves to find a distance.[9] These instruments eliminated the need for days or weeks of chain measurement by measuring between points kilometers apart in one go. Advances in electronics allowed miniaturization of EDM. In the 1970s the first instruments combining angle and distance measurement appeared, becoming known as total stations. Manufacturers added more equipment by degrees, bringing improvements in accuracy and speed of measurement. Major advances include tilt compensators, data recorders and on-board calculation programs. The first satellite positioning system was the US Navy TRANSIT system. The first successful launch took place in 1960. The system's main purpose was to provide position information to Polaris missile submarines. Surveyors found they could use field receivers to determine the location of a point. Sparse satellite cover and large equipment made observations laborious and inaccurate. The main use was establishing benchmarks in remote locations. The US Air Force launched the first prototype satellites of the Global Positioning System (GPS) in 1978. GPS used a larger constellation of satellites and improved signal transmission, thus improving accuracy. Early GPS observations required several hours of observations by a static receiver to reach survey accuracy requirements. Later improvements to both satellites and receivers allowed for Real Time Kinematic (RTK) surveying. RTK surveys provide high-accuracy measurements by using a fixed base station and a second roving antenna. The position of the roving antenna can be tracked. The theodolite, total station and RTK GPS survey remain the primary methods in use. Remote sensing and satellite imagery continue to improve and become cheaper, allowing more commonplace use.
Prominent new technologies include three-dimensional (3D) scanning and lidar-based topographical surveys. UAV technology along with photogrammetric image processing is also appearing. The main surveying instruments in use around the world are the theodolite, measuring tape, total station, 3D scanners, GPS/GNSS, level and rod. Most instruments screw onto a tripod when in use. Tape measures are often used for measurement of smaller distances. 3D scanners and various forms of aerial imagery are also used. The theodolite is an instrument for the measurement of angles. It uses two separate circles, protractors or alidades to measure angles in the horizontal and the vertical plane. A telescope mounted on trunnions is aligned vertically with the target object. The whole upper section rotates for horizontal alignment. The vertical circle measures the angle that the telescope makes against the vertical, known as the zenith angle. The horizontal circle uses an upper and lower plate. When beginning the survey, the surveyor points the instrument in a known direction (bearing), and clamps the lower plate in place. The instrument can then rotate to measure the bearing to other objects. If no bearing is known or direct angle measurement is wanted, the instrument can be set to zero during the initial sight. It will then read the angle between the initial object, the theodolite itself, and the item that the telescope aligns with. The gyrotheodolite is a form of theodolite that uses a gyroscope to orient itself in the absence of reference marks. It is used in underground applications. The total station is a development of the theodolite with an electronic distance measurement device (EDM). A total station can be used for leveling when set to the horizontal plane. Since their introduction, total stations have shifted from optical-mechanical to fully electronic devices.[10] Modern top-of-the-line total stations no longer need a reflector or prism to return the light pulses used for distance measurements.
They are fully robotic, and can even e-mail point data to a remote computer and connect to satellite positioning systems, such as the Global Positioning System. Real Time Kinematic GPS systems have significantly increased the speed of surveying; they are now horizontally accurate to within 1 cm ± 1 ppm in real time, while vertical accuracy is currently about half that, to within 2 cm ± 2 ppm.[11] GPS surveying differs from other GPS uses in the equipment and methods used. Static GPS uses two receivers placed in position for a considerable length of time. The long span of time lets the receiver compare measurements as the satellites orbit. The changes as the satellites orbit also provide the measurement network with well-conditioned geometry. This produces an accurate baseline that can be over 20 km long. RTK surveying uses one static antenna and one roving antenna. The static antenna tracks changes in the satellite positions and atmospheric conditions. The surveyor uses the roving antenna to measure the points needed for the survey. The two antennas use a radio link that allows the static antenna to send corrections to the roving antenna. The roving antenna then applies those corrections to the GPS signals it is receiving to calculate its own position. RTK surveying covers smaller distances than static methods. This is because divergent conditions further away from the base reduce accuracy. Surveying instruments have characteristics that make them suitable for certain uses. Theodolites and levels are often used by constructors rather than surveyors in first world countries. The constructor can perform simple survey tasks using a relatively cheap instrument. Total stations are workhorses for many professional surveyors because they are versatile and reliable in all conditions. The productivity improvements from a GPS on large scale surveys make them popular for major infrastructure or data gathering projects.
One-person robotic-guided total stations allow surveyors to measure without extra workers to aim the telescope or record data. A fast but expensive way to measure large areas is with a helicopter, using a GPS to record the location of the helicopter and a laser scanner to measure the ground. To increase precision, surveyors place beacons on the ground (about 20 km (12 mi) apart). This method reaches precisions between 5–40 cm (depending on flight height).[12] Surveyors use ancillary equipment such as tripods and instrument stands; staves and beacons used for sighting purposes; PPE; vegetation clearing equipment; digging implements for finding survey markers buried over time; hammers for placement of markers in various surfaces and structures; and portable radios for communication over long lines of sight. Land surveyors, construction professionals, geomatics engineers and civil engineers using total stations, GPS, 3D scanners, and other data collectors use land surveying software to increase efficiency, accuracy, and productivity. Land surveying software is a staple of contemporary land surveying.[13] Typically, much if not all of the drafting and some of the designing for plans and plats of the surveyed property is done by the surveyor, and nearly everyone working in the area of drafting today (2021) utilizes CAD software and hardware, both on PC and, increasingly, in newer generation data collectors in the field.[14] Other computer platforms and tools commonly used today by surveyors are offered online by the U.S. Federal Government and other governments' survey agencies, such as the National Geodetic Survey and the CORS network, to get automated corrections and conversions for collected GPS data, and the data coordinate systems themselves. Surveyors determine the position of objects by measuring angles and distances. The factors that can affect the accuracy of their observations are also measured.
They then use this data to create vectors, bearings, coordinates, elevations, areas, volumes, plans and maps. Measurements are often split into horizontal and vertical components to simplify calculation. GPS and astronomic measurements also need measurement of a time component. Before EDM (electronic distance measurement) laser devices, distances were measured using a variety of means. In pre-colonial America, Natives would use the "bow shot" as a distance reference ("as far as an arrow can be slung out of a bow", or "flights of a Cherokee long bow").[15] Europeans used chains with links of a known length, such as a Gunter's chain, or measuring tapes made of steel or invar. To measure horizontal distances, these chains or tapes were pulled taut to reduce sagging and slack. The distance had to be adjusted for heat expansion. Attempts to hold the measuring instrument level would also be made. When measuring up a slope, the surveyor might have to "break" (break chain) the measurement: use an increment less than the total length of the chain. Perambulators, or measuring wheels, were used to measure longer distances but not to a high level of accuracy. Tacheometry is the science of measuring distances by measuring the angle between two ends of an object with a known size. It was sometimes used before the invention of EDM where rough ground made chain measurement impractical. Historically, horizontal angles were measured by using a compass to provide a magnetic bearing or azimuth. Later, more precise scribed discs improved angular resolution. Mounting telescopes with reticles atop the disc allowed more precise sighting (see theodolite). Levels and calibrated circles allowed the measurement of vertical angles. Verniers allowed measurement to a fraction of a degree, such as with a turn-of-the-century transit. The plane table provided a graphical method of recording and measuring angles, which reduced the amount of mathematics required.
In 1829 Francis Ronalds invented a reflecting instrument for recording angles graphically by modifying the octant.[16] By observing the bearing from every vertex in a figure, a surveyor can measure around the figure. The final observation will be between the two points first observed, except with a 180° difference. This is called a close. If the first and last bearings are different, this shows the error in the survey, called the angular misclose. The surveyor can use this information to prove that the work meets the expected standards. The simplest method for measuring height is with an altimeter using air pressure to find the height. When more precise measurements are needed, means like precise levels (also known as differential leveling) are used. In precise leveling, a series of measurements between two points is taken using an instrument and a measuring rod. Differences in height between the measurements are added and subtracted in a series to get the net difference in elevation between the two endpoints. With the Global Positioning System (GPS), elevation can be measured with satellite receivers. Usually, GPS is somewhat less accurate than traditional precise leveling, but may be similar over long distances. When using an optical level, the endpoint may be out of the effective range of the instrument. There may be obstructions or large changes of elevation between the endpoints. In these situations, extra setups are needed. Turning is a term used when referring to moving the level to take an elevation shot from a different location. To "turn" the level, one must first take a reading and record the elevation of the point the rod is located on. While the rod is being kept in exactly the same location, the level is moved to a new location where the rod is still visible. A reading is taken from the new location of the level and the height difference is used to find the new elevation of the level gun, which is why this method is referred to as differential levelling.
This is repeated until the series of measurements is completed. The level must be horizontal to get a valid measurement. Because of this, if the horizontal crosshair of the instrument is lower than the base of the rod, the surveyor will not be able to sight the rod and get a reading. The rod can usually be raised up to 25 feet (7.6 m) high, allowing the level to be set much higher than the base of the rod. The primary way of determining one's position on the Earth's surface when no known positions are nearby is by astronomic observations. Observations to the Sun, Moon and stars could all be made using navigational techniques. Once the instrument's position and bearing to a star is determined, the bearing can be transferred to a reference point on Earth. The point can then be used as a base for further observations. Survey-accurate astronomic positions were difficult to observe and calculate, and so tended to be a base off which many other measurements were made. Since the advent of the GPS system, astronomic observations are rare, as GPS allows adequate positions to be determined over most of the surface of the Earth. Few survey positions are derived from first principles. Instead, most survey points are measured relative to previously measured points. This forms a reference or control network where each point can be used by a surveyor to determine their own position when beginning a new survey. Survey points are usually marked on the earth's surface by objects ranging from small nails driven into the ground to large beacons that can be seen from long distances. The surveyors can set up their instruments in this position and measure to nearby objects. Sometimes a tall, distinctive feature such as a steeple or radio aerial has its position calculated as a reference point that angles can be measured against. Triangulation is a method of horizontal location favoured in the days before EDM and GPS measurement.
It can determine distances, elevations and directions between distant objects. Since the early days of surveying, this was the primary method of determining accurate positions of objects for topographic maps of large areas. A surveyor first needs to know the horizontal distance between two of the objects, known as the baseline. Then the heights, distances and angular position of other objects can be derived, as long as they are visible from one of the original objects. High-accuracy transits or theodolites were used, and angle measurements were repeated for increased accuracy. See also Triangulation in three dimensions. Offsetting is an alternate method of determining the position of objects, and was often used to measure imprecise features such as riverbanks. The surveyor would mark and measure two known positions on the ground roughly parallel to the feature, and mark out a baseline between them. At regular intervals, a distance was measured at right angles from the first line to the feature. The measurements could then be plotted on a plan or map, and the points at the ends of the offset lines could be joined to show the feature. Traversing is a common method of surveying smaller areas. The surveyors start from an old reference mark or known position and place a network of reference marks covering the survey area. They then measure bearings and distances between the reference marks, and to the target features. Most traverses form a loop pattern or link between two prior reference marks so the surveyor can check their measurements. Many surveys do not calculate positions on the surface of the Earth, but instead measure the relative positions of objects. However, often the surveyed items need to be compared to outside data, such as boundary lines or previous surveys' objects. The oldest way of describing a position is via latitude and longitude, and often a height above sea level.
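The baseline-and-angles computation behind triangulation can be sketched with the law of sines. The baseline length and observed angles below are invented for illustration; the point is that measuring one distance and two angles fixes the distances to a remote point without taping them.

```python
import math

# Illustrative triangulation: a measured baseline AB and the angles observed
# at A and B toward a distant point C determine the distances AC and BC.

def triangulate(baseline, angle_a_deg, angle_b_deg):
    """Return (AC, BC) for a triangle with known side AB = baseline."""
    angle_c = math.radians(180.0 - angle_a_deg - angle_b_deg)
    # Law of sines: AB / sin C = BC / sin A = AC / sin B
    common = baseline / math.sin(angle_c)
    ac = common * math.sin(math.radians(angle_b_deg))
    bc = common * math.sin(math.radians(angle_a_deg))
    return ac, bc

ac, bc = triangulate(500.0, 60.0, 70.0)  # 500 m baseline, angles at A and B
print(round(ac, 1), round(bc, 1))  # → 613.3 565.3
```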
As the surveying profession grew, it created Cartesian coordinate systems to simplify the mathematics for surveys over small parts of the Earth. The simplest coordinate systems assume that the Earth is flat and measure from an arbitrary point, known as a 'datum' (singular form of data). The coordinate system allows easy calculation of the distances and direction between objects over small areas. Large areas distort due to the Earth's curvature. North is often defined as true north at the datum. For larger regions, it is necessary to model the shape of the Earth using an ellipsoid or a geoid. Many countries have created coordinate grids customized to lessen error in their area of the Earth. A basic tenet of surveying is that no measurement is perfect, and that there will always be a small amount of error.[17] There are three classes of survey errors: gross errors (blunders), systematic errors, and random errors. Surveyors avoid these errors by calibrating their equipment, using consistent methods, and by good design of their reference network. Repeated measurements can be averaged and any outlier measurements discarded. Independent checks, like measuring a point from two or more locations or using two different methods, are used, and errors can be detected by comparing the results of two or more measurements, thus utilizing redundancy. Once the surveyor has calculated the level of the errors in his or her work, it is adjusted. This is the process of distributing the error between all measurements. Each observation is weighted according to how much of the total error it is likely to have caused, and part of that error is allocated to it in a proportional way. The most common methods of adjustment are the Bowditch method, also known as the compass rule, and the principle of least squares method. The surveyor must be able to distinguish between accuracy and precision. In the United States, surveyors and civil engineers use units of feet wherein a survey foot breaks down into 10ths and 100ths.
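The compass-rule (Bowditch) adjustment mentioned above can be sketched in a few lines: the loop misclosure is distributed among the traverse legs in proportion to each leg's length. The traverse figures below are made up for illustration.

```python
import math

# Minimal sketch of the Bowditch (compass-rule) adjustment: distribute the
# loop misclosure across the legs in proportion to leg length.

def compass_rule(legs):
    """legs: list of (delta_easting, delta_northing) tuples, one per leg.
    Returns adjusted legs whose loop closes exactly."""
    lengths = [math.hypot(de, dn) for de, dn in legs]
    total = sum(lengths)
    mis_e = sum(de for de, _ in legs)  # easting misclosure of the loop
    mis_n = sum(dn for _, dn in legs)  # northing misclosure of the loop
    return [
        (de - mis_e * length / total, dn - mis_n * length / total)
        for (de, dn), length in zip(legs, lengths)
    ]

legs = [(100.02, 0.01), (-0.03, 80.01), (-99.97, -80.04)]  # nearly closed loop
adjusted = compass_rule(legs)
closure = (sum(de for de, _ in adjusted), sum(dn for _, dn in adjusted))
print(max(abs(c) for c in closure) < 1e-9)  # → True
```

A least-squares adjustment would instead weight each observation by its estimated error, at the cost of more computation.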
Many deed descriptions containing distances are often expressed using these units (125.25 ft). On the subject of accuracy, surveyors are often held to a standard of one one-hundredth of a foot, about 1/8 inch. Calculation and mapping tolerances are much smaller, wherein achieving near-perfect closures is desired. Though tolerances will vary from project to project, in the field and in day-to-day usage, precision beyond a hundredth of a foot is often impractical. Local organisations or regulatory bodies class specializations of surveying in different ways. Based on the considerations and true shape of the Earth, surveying is broadly classified into two types. Plane surveying assumes the Earth is flat. Curvature and the spheroidal shape of the Earth are neglected. In this type of surveying all triangles formed by joining survey lines are considered as plane triangles. It is employed for small survey works where errors due to the Earth's shape are too small to matter.[18] In geodetic surveying the curvature of the Earth is taken into account while calculating reduced levels, angles, bearings and distances. This type of surveying is usually employed for large survey works. Survey works up to 100 square miles (260 square kilometres) are treated as plane, and beyond that are treated as geodetic.[19] In geodetic surveying, necessary corrections are applied to reduced levels, bearings and other observations.[18] The basic principles of surveying have changed little over the ages, but the tools used by surveyors have evolved. Engineering, especially civil engineering, often needs surveyors. Surveyors help determine the placement of roads, railways, reservoirs, dams, pipelines, retaining walls, bridges, and buildings. They establish the boundaries of legal descriptions and political divisions. They also provide advice and data for geographical information systems (GIS) that record land features and boundaries.
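A quick way to see why small survey works can be treated as plane: the departure of a spherical Earth from its tangent plane over a distance d is about d²/2R, a standard small-angle approximation. The short calculation below assumes a mean Earth radius of 6371 km; the distances are arbitrary.

```python
# Departure of a spherical surface from its tangent plane over distance d,
# small-angle approximation c ≈ d^2 / (2R). Mean Earth radius assumed.

R = 6371000.0  # metres

def curvature_drop(distance_m):
    return distance_m ** 2 / (2 * R)

for d_km in (1, 10, 100):
    print(d_km, "km ->", round(curvature_drop(d_km * 1000), 3), "m")
```

Over 1 km the drop is about 8 cm, which disappears into ordinary field tolerances; over 100 km it is hundreds of metres, which is why large works go geodetic.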
Surveyors must have a thorough knowledge of algebra, basic calculus, geometry, and trigonometry. They must also know the laws that deal with surveys, real property, and contracts. Most jurisdictions recognize three different levels of qualification. Related professions include cartographers, hydrographers, geodesists, photogrammetrists, and topographers, as well as civil engineers and geomatics engineers. Licensing requirements vary with jurisdiction, and are commonly consistent within national borders. Prospective surveyors usually have to receive a degree in surveying, followed by a detailed examination of their knowledge of surveying law and principles specific to the region they wish to practice in, and undergo a period of on-the-job training or portfolio building before they are awarded a license to practise. Licensed surveyors usually receive a post-nominal, which varies depending on where they qualified. The system has replaced older apprenticeship systems. A licensed land surveyor is generally required to sign and seal all plans. The state dictates the format, showing their name and registration number. In many jurisdictions, surveyors must mark their registration number on survey monuments when setting boundary corners. Monuments take the form of capped iron rods, concrete monuments, or nails with washers. Most countries' governments regulate at least some forms of surveying. Their survey agencies establish regulations and standards. Standards control accuracy, surveying credentials, monumentation of boundaries and maintenance of geodetic networks. Many nations devolve this authority to regional entities or states/provinces. Cadastral surveys tend to be the most regulated because of the permanence of the work. Lot boundaries established by cadastral surveys may stand for hundreds of years without modification. Most jurisdictions also have a form of professional institution representing local surveyors.
These institutes often endorse or license potential surveyors, as well as set and enforce ethical standards. The largest institution is the International Federation of Surveyors (abbreviated FIG, for French: Fédération Internationale des Géomètres). It represents the survey industry worldwide. Most English-speaking countries consider building surveying a distinct profession, with its own professional associations and licensing requirements. A building surveyor can provide technical building advice on existing buildings, new buildings, design, and compliance with regulations such as planning and building control. A building surveyor normally acts on behalf of his or her client, ensuring that their vested interests remain protected. The Royal Institution of Chartered Surveyors (RICS) is a world-recognised governing body for those working within the built environment.[20] One of the primary roles of the land surveyor is to determine the boundary of real property on the ground. The surveyor must determine where the adjoining landowners wish to put the boundary. The boundary is established in legal documents and plans prepared by attorneys, engineers, and land surveyors. The surveyor then puts monuments on the corners of the new boundary. They might also find or resurvey the corners of the property monumented by prior surveys. Cadastral land surveyors are licensed by governments. The cadastral survey branch of the Bureau of Land Management (BLM) conducts most cadastral surveys in the United States.[21] They consult with the Forest Service, National Park Service, Army Corps of Engineers, Bureau of Indian Affairs, Fish and Wildlife Service, Bureau of Reclamation, and others. The BLM used to be known as the United States General Land Office (GLO). In states organized per the Public Land Survey System (PLSS), surveyors must carry out BLM cadastral surveys under that system. Cadastral surveyors often have to work around changes to the earth that obliterate or damage boundary monuments.
When this happens, they must consider evidence that is not recorded on the title deed. This is known as extrinsic evidence.[22] Quantity surveying is a profession that deals with the costs and contracts of construction projects. A quantity surveyor is an expert in estimating the costs of materials, labor, and time needed for a project, as well as managing the financial and legal aspects of the project. A quantity surveyor can work for either the client or the contractor, and can be involved in different stages of the project, from planning to completion. Quantity surveyors are also known as Chartered Surveyors in the UK. Some U.S. Presidents were land surveyors. George Washington and Abraham Lincoln surveyed colonial or frontier territories early in their careers, prior to serving in office. Ferdinand Rudolph Hassler is considered the "father" of geodetic surveying in the U.S.[23] David T. Abercrombie practiced land surveying before starting an outfitter store of excursion goods. The business would later turn into the Abercrombie & Fitch lifestyle clothing store. Percy Harrison Fawcett was a British surveyor who explored the jungles of South America attempting to find the Lost City of Z. His biography and expeditions were recounted in the book The Lost City of Z and were later adapted for film. Inō Tadataka produced the first map of Japan using modern surveying techniques starting in 1800, at the age of 55.
https://en.wikipedia.org/wiki/Surveying
Aristarchus's inequality (after the Greek astronomer and mathematician Aristarchus of Samos; c. 310 – c. 230 BCE) is a law of trigonometry which states that if \(\alpha\) and \(\beta\) are acute angles (i.e. between 0 and a right angle) and \(\beta <\alpha\) then

\[ \frac{\sin \alpha }{\sin \beta }<\frac{\alpha }{\beta }<\frac{\tan \alpha }{\tan \beta }. \]

Ptolemy used the first of these inequalities while constructing his table of chords.[1]

The proof is a consequence of the more widely known inequalities \(0<\sin \alpha <\alpha <\tan \alpha\) for \(0<\alpha <\pi /2\).

Using these inequalities we can first prove that \(\frac{\sin \alpha }{\sin \beta }<\frac{\alpha }{\beta }\). We first note that the inequality is equivalent to \(\frac{\sin \alpha }{\alpha }<\frac{\sin \beta }{\beta }\), which itself can be rewritten as \(\frac{\sin \alpha -\sin \beta }{\alpha -\beta }<\frac{\sin \beta }{\beta }\). We now want to show that \(\frac{\sin \alpha -\sin \beta }{\alpha -\beta }<\cos \beta <\frac{\sin \beta }{\beta }\). The second inequality is simply \(\beta <\tan \beta\). The first one is true because

\[ \frac{\sin \alpha -\sin \beta }{\alpha -\beta } = \cos \left(\frac{\alpha +\beta }{2}\right)\,\frac{\sin \left(\frac{\alpha -\beta }{2}\right)}{\frac{\alpha -\beta }{2}} < \cos \left(\frac{\alpha +\beta }{2}\right) < \cos \beta , \]

using \(\sin x<x\) and the fact that the cosine is decreasing on \((0,\pi /2)\).

Now we want to show the second inequality, i.e. that \(\frac{\alpha }{\beta }<\frac{\tan \alpha }{\tan \beta }\). We first note that, due to the initial inequalities, for every angle \(0<\theta <\alpha\) we have

\[ \theta <\tan \theta =\frac{\sin \theta }{\cos \theta }<\frac{\sin \theta }{\cos \alpha }. \]

Consequently, using that \(0<\alpha -\beta <\alpha\) in the previous inequality (replacing \(\theta\) by \(\alpha -\beta\)) we obtain

\[ (\alpha -\beta )\cos \alpha <\sin(\alpha -\beta )=\sin \alpha \cos \beta -\cos \alpha \sin \beta , \]

that is, \(\cos \alpha \,(\alpha -\beta +\sin \beta )<\sin \alpha \cos \beta\). Since \(\sin \beta <\beta\) and \(\beta <\alpha\), the two sides of \(\alpha -\beta +\sin \beta \geq \frac{\alpha }{\beta }\sin \beta\) differ by \((\beta -\sin \beta )\left(\frac{\alpha }{\beta }-1\right)\geq 0\), so that bound holds. We conclude that

\[ \frac{\alpha }{\beta }\,\sin \beta \cos \alpha <\sin \alpha \cos \beta ,\qquad \text{i.e.}\qquad \frac{\alpha }{\beta }<\frac{\tan \alpha }{\tan \beta }. \]
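The inequality is easy to spot-check numerically; the angle values below are arbitrary points in \((0, \pi/2)\).

```python
import math

# Numerical sanity check of Aristarchus's inequality:
# sin(a)/sin(b) < a/b < tan(a)/tan(b) for 0 < b < a < pi/2.

def check(a, b):
    """True iff the double inequality holds for acute angles b < a."""
    assert 0 < b < a < math.pi / 2
    return math.sin(a) / math.sin(b) < a / b < math.tan(a) / math.tan(b)

print(all(check(a, b) for a in (0.3, 0.8, 1.4) for b in (0.1, 0.25)))  # → True
```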
https://en.wikipedia.org/wiki/Aristarchus%27s_inequality
This article is a summary ofdifferentiation rules, that is, rules for computing thederivativeof afunctionincalculus. Unless otherwise stated, all functions are functions ofreal numbers(R{\textstyle \mathbb {R} }) that return real values, although, more generally, the formulas below apply wherever they arewell defined,[1][2]including the case ofcomplex numbers(C{\textstyle \mathbb {C} }).[3] For any value ofc{\textstyle c}, wherec∈R{\textstyle c\in \mathbb {R} }, iff(x){\textstyle f(x)}is the constant function given byf(x)=c{\textstyle f(x)=c}, thendfdx=0{\textstyle {\frac {df}{dx}}=0}.[4] Letc∈R{\textstyle c\in \mathbb {R} }andf(x)=c{\textstyle f(x)=c}. By the definition of the derivative:f′(x)=limh→0f(x+h)−f(x)h=limh→0(c)−(c)h=limh→00h=limh→00=0.{\displaystyle {\begin{aligned}f'(x)&=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}\\&=\lim _{h\to 0}{\frac {(c)-(c)}{h}}\\&=\lim _{h\to 0}{\frac {0}{h}}\\&=\lim _{h\to 0}0\\&=0.\end{aligned}}} This computation shows that the derivative of any constant function is 0. Thederivativeof the function at a point is the slope of the linetangentto the curve at the point. Theslopeof the constant function is 0, because thetangent lineto the constant function is horizontal and its angle is 0. In other words, the value of the constant function,y{\textstyle y}, will not change as the value ofx{\textstyle x}increases or decreases. For any functionsf{\textstyle f}andg{\textstyle g}and any real numbersa{\textstyle a}andb{\textstyle b}, the derivative of the functionh(x)=af(x)+bg(x){\textstyle h(x)=af(x)+bg(x)}with respect tox{\textstyle x}ish′(x)=af′(x)+bg′(x){\textstyle h'(x)=af'(x)+bg'(x)}. 
InLeibniz's notation, this formula is written as:d(af+bg)dx=adfdx+bdgdx.{\displaystyle {\frac {d(af+bg)}{dx}}=a{\frac {df}{dx}}+b{\frac {dg}{dx}}.} Special cases include: (af)′=af′,{\displaystyle (af)'=af',} (f+g)′=f′+g′,{\displaystyle (f+g)'=f'+g',} (f−g)′=f′−g′.{\displaystyle (f-g)'=f'-g'.} For the functionsf{\textstyle f}andg{\textstyle g}, the derivative of the functionh(x)=f(x)g(x){\textstyle h(x)=f(x)g(x)}with respect tox{\textstyle x}is:h′(x)=(fg)′(x)=f′(x)g(x)+f(x)g′(x).{\displaystyle h'(x)=(fg)'(x)=f'(x)g(x)+f(x)g'(x).} In Leibniz's notation, this formula is written:d(fg)dx=gdfdx+fdgdx.{\displaystyle {\frac {d(fg)}{dx}}=g{\frac {df}{dx}}+f{\frac {dg}{dx}}.} The derivative of the functionh(x)=f(g(x)){\textstyle h(x)=f(g(x))}is:h′(x)=f′(g(x))⋅g′(x).{\displaystyle h'(x)=f'(g(x))\cdot g'(x).} In Leibniz's notation, this formula is written as:ddxh(x)=ddzf(z)|z=g(x)⋅ddxg(x),{\displaystyle {\frac {d}{dx}}h(x)=\left.{\frac {d}{dz}}f(z)\right|_{z=g(x)}\cdot {\frac {d}{dx}}g(x),}often abridged to:dh(x)dx=df(g(x))dg(x)⋅dg(x)dx.{\displaystyle {\frac {dh(x)}{dx}}={\frac {df(g(x))}{dg(x)}}\cdot {\frac {dg(x)}{dx}}.} Focusing on the notion of maps, and the differential being a mapD{\textstyle {\text{D}}}, this formula is written in a more concise way as:[D(f∘g)]x=[Df]g(x)⋅[Dg]x.{\displaystyle [{\text{D}}(f\circ g)]_{x}=[{\text{D}}f]_{g(x)}\cdot [{\text{D}}g]_{x}.} If the functionf{\textstyle f}has aninverse functiong{\textstyle g}, meaning thatg(f(x))=x{\textstyle g(f(x))=x}andf(g(y))=y{\textstyle f(g(y))=y}, then:g′=1f′∘g.{\displaystyle g'={\frac {1}{f'\circ g}}.} In Leibniz notation, this formula is written as:dxdy=1dydx.{\displaystyle {\frac {dx}{dy}}={\frac {1}{\frac {dy}{dx}}}.} Iff(x)=xr{\textstyle f(x)=x^{r}}, for any real numberr≠0{\textstyle r\neq 0}, then:f′(x)=rxr−1.{\displaystyle f'(x)=rx^{r-1}.} Whenr=1{\textstyle r=1}, this formula becomes the special case that, iff(x)=x{\textstyle f(x)=x}, thenf′(x)=1{\textstyle f'(x)=1}. 
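The product and chain rules above can be spot-checked with a central finite difference. Any smooth pair of functions would do; sin and x³ below are arbitrary choices.

```python
import math

# Spot-check of the product and chain rules via a central finite difference.

def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 0.7

# Product rule: (f·g)' = f'·g + f·g'  with f = sin, g = x^3
numeric = deriv(lambda x: math.sin(x) * x**3, x0)
exact = math.cos(x0) * x0**3 + math.sin(x0) * 3 * x0**2
print(abs(numeric - exact) < 1e-6)  # → True

# Chain rule: (f∘g)' = f'(g(x))·g'(x)  with the same f and g
numeric = deriv(lambda x: math.sin(x**3), x0)
exact = math.cos(x0**3) * 3 * x0**2
print(abs(numeric - exact) < 1e-6)  # → True
```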
Combining the power rule with the sum and constant multiple rules permits the computation of the derivative of any polynomial. The derivative ofh(x)=1f(x){\textstyle h(x)={\frac {1}{f(x)}}}for any (nonvanishing) functionf{\textstyle f}is:h′(x)=−f′(x)(f(x))2,{\displaystyle h'(x)=-{\frac {f'(x)}{(f(x))^{2}}},}whereverf{\textstyle f}is nonzero. In Leibniz's notation, this formula is written:d(1f)dx=−1f2dfdx.{\displaystyle {\frac {d\left({\frac {1}{f}}\right)}{dx}}=-{\frac {1}{f^{2}}}{\frac {df}{dx}}.} The reciprocal rule can be derived either from the quotient rule or from the combination of power rule and chain rule. Iff{\textstyle f}andg{\textstyle g}are functions, then:(fg)′=f′g−g′fg2,{\displaystyle \left({\frac {f}{g}}\right)'={\frac {f'g-g'f}{g^{2}}},}whereverg{\textstyle g}is nonzero. This can be derived from the product rule and the reciprocal rule. The elementary power rule generalizes considerably. The most general power rule is thefunctional power rule: for any functionsf{\textstyle f}andg{\textstyle g},(fg)′=(egln⁡f)′=fg(f′gf+g′ln⁡f),{\displaystyle (f^{g})'=\left(e^{g\ln f}\right)'=f^{g}\left(f'{g \over f}+g'\ln f\right),\quad }wherever both sides are well defined. Special cases: ddx(cax)=acaxln⁡c,c>0.{\displaystyle {\frac {d}{dx}}\left(c^{ax}\right)={ac^{ax}\ln c},\qquad c>0.}The equation above is true for allc{\displaystyle c}, but the derivative forc<0{\displaystyle c<0}yields a complex number. ddx(eax)=aeax.{\displaystyle {\frac {d}{dx}}\left(e^{ax}\right)=ae^{ax}.} ddx(logc⁡x)=1xln⁡c,c>1.{\displaystyle {\frac {d}{dx}}\left(\log _{c}x\right)={1 \over x\ln c},\qquad c>1.}The equation above is also true for allc{\textstyle c}but yields a complex number ifc<0{\textstyle c<0}. 
ddx(ln⁡x)=1x,x>0.{\displaystyle {\frac {d}{dx}}\left(\ln x\right)={1 \over x},\qquad x>0.} ddx(ln⁡|x|)=1x,x≠0.{\displaystyle {\frac {d}{dx}}\left(\ln |x|\right)={1 \over x},\qquad x\neq 0.} ddx(W(x))=1x+eW(x),x>−1e,{\displaystyle {\frac {d}{dx}}\left(W(x)\right)={1 \over {x+e^{W(x)}}},\qquad x>-{1 \over e},}whereW(x){\textstyle W(x)}is theLambert W function. ddx(xx)=xx(1+ln⁡x).{\displaystyle {\frac {d}{dx}}\left(x^{x}\right)=x^{x}(1+\ln x).} ddx(f(x)g(x))=g(x)f(x)g(x)−1dfdx+f(x)g(x)ln⁡(f(x))dgdx,iff(x)>0anddfdxanddgdxexist.{\displaystyle {\frac {d}{dx}}\left(f(x)^{g(x)}\right)=g(x)f(x)^{g(x)-1}{\frac {df}{dx}}+f(x)^{g(x)}\ln {(f(x))}{\frac {dg}{dx}},\qquad {\text{if }}f(x)>0{\text{ and }}{\frac {df}{dx}}{\text{ and }}{\frac {dg}{dx}}{\text{ exist.}}} ddx(f1(x)f2(x)(...)fn(x))=[∑k=1n∂∂xk(f1(x1)f2(x2)(...)fn(xn))]|x1=x2=...=xn=x,iffi<n(x)>0anddfidxexists.{\displaystyle {\frac {d}{dx}}\left(f_{1}(x)^{f_{2}(x)^{\left(...\right)^{f_{n}(x)}}}\right)=\left[\sum \limits _{k=1}^{n}{\frac {\partial }{\partial x_{k}}}\left(f_{1}(x_{1})^{f_{2}(x_{2})^{\left(...\right)^{f_{n}(x_{n})}}}\right)\right]{\biggr \vert }_{x_{1}=x_{2}=...=x_{n}=x},\qquad {\text{ if }}f_{i<n}(x)>0{\text{ and }}{\frac {df_{i}}{dx}}{\text{ exists.}}} Thelogarithmic derivativeis another way of stating the rule for differentiating thelogarithmof a function (using the chain rule):(ln⁡f)′=f′f,{\displaystyle (\ln f)'={\frac {f'}{f}},}whereverf{\textstyle f}is positive. Logarithmic differentiationis a technique which uses logarithms and its differentiation rules to simplify certain expressions before actually applying the derivative.[citation needed] Logarithms can be used to remove exponents, convert products into sums, and convert division into subtraction—each of which may lead to a simplified expression for taking derivatives. 
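As an illustration of the logarithmic-derivative technique, the formula listed above for \(x^x\), namely \(\frac{d}{dx}(x^x) = x^x(1 + \ln x)\) (from \(\ln f = x \ln x\), so \(f'/f = 1 + \ln x\)), can be checked against a finite difference at an arbitrary point:

```python
import math

# Check d/dx (x^x) = x^x (1 + ln x), obtained by logarithmic differentiation,
# against a central finite difference at an arbitrary point x0 > 0.

def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 1.5
numeric = deriv(lambda x: x**x, x0)
formula = x0**x0 * (1 + math.log(x0))
print(abs(numeric - formula) < 1e-6)  # → True
```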
The derivatives in the table above are for when the range of the inverse secant is[0,π]{\textstyle [0,\pi ]}and when the range of the inverse cosecant is[−π2,π2]{\textstyle \left[-{\frac {\pi }{2}},{\frac {\pi }{2}}\right]}. It is common to additionally define aninverse tangent function with two arguments,arctan⁡(y,x){\textstyle \arctan(y,x)}. Its value lies in the range[−π,π]{\textstyle [-\pi ,\pi ]}and reflects the quadrant of the point(x,y){\textstyle (x,y)}. For the first and fourth quadrant (i.e.,x>0{\displaystyle x>0}), one hasarctan⁡(y,x>0)=arctan⁡(yx){\textstyle \arctan(y,x>0)=\arctan({\frac {y}{x}})}. Its partial derivatives are:∂arctan⁡(y,x)∂y=xx2+y2and∂arctan⁡(y,x)∂x=−yx2+y2.{\displaystyle {\frac {\partial \arctan(y,x)}{\partial y}}={\frac {x}{x^{2}+y^{2}}}\qquad {\text{and}}\qquad {\frac {\partial \arctan(y,x)}{\partial x}}={\frac {-y}{x^{2}+y^{2}}}.} Γ(x)=∫0∞tx−1e−tdt{\displaystyle \Gamma (x)=\int _{0}^{\infty }t^{x-1}e^{-t}\,dt}Γ′(x)=∫0∞tx−1e−tln⁡tdt=Γ(x)(∑n=1∞(ln⁡(1+1n)−1x+n)−1x)=Γ(x)ψ(x),{\displaystyle {\begin{aligned}\Gamma '(x)&=\int _{0}^{\infty }t^{x-1}e^{-t}\ln t\,dt\\&=\Gamma (x)\left(\sum _{n=1}^{\infty }\left(\ln \left(1+{\dfrac {1}{n}}\right)-{\dfrac {1}{x+n}}\right)-{\dfrac {1}{x}}\right)\\&=\Gamma (x)\psi (x),\end{aligned}}}withψ(x){\textstyle \psi (x)}being thedigamma function, expressed by the parenthesized expression to the right ofΓ(x){\textstyle \Gamma (x)}in the line above. 
ζ(x)=∑n=1∞1nx{\displaystyle \zeta (x)=\sum _{n=1}^{\infty }{\frac {1}{n^{x}}}}ζ′(x)=−∑n=1∞ln⁡nnx=−ln⁡22x−ln⁡33x−ln⁡44x−⋯=−∑pprimep−xln⁡p(1−p−x)2∏qprime,q≠p11−q−x{\displaystyle {\begin{aligned}\zeta '(x)&=-\sum _{n=1}^{\infty }{\frac {\ln n}{n^{x}}}=-{\frac {\ln 2}{2^{x}}}-{\frac {\ln 3}{3^{x}}}-{\frac {\ln 4}{4^{x}}}-\cdots \\&=-\sum _{p{\text{ prime}}}{\frac {p^{-x}\ln p}{(1-p^{-x})^{2}}}\prod _{q{\text{ prime}},q\neq p}{\frac {1}{1-q^{-x}}}\end{aligned}}} Suppose that it is required to differentiate with respect tox{\textstyle x}the function:F(x)=∫a(x)b(x)f(x,t)dt,{\displaystyle F(x)=\int _{a(x)}^{b(x)}f(x,t)\,dt,} where the functionsf(x,t){\textstyle f(x,t)}and∂∂xf(x,t){\textstyle {\frac {\partial }{\partial x}}\,f(x,t)}are both continuous in botht{\textstyle t}andx{\textstyle x}in some region of the(t,x){\textstyle (t,x)}plane, includinga(x)≤t≤b(x){\textstyle a(x)\leq t\leq b(x)}, wherex0≤x≤x1{\textstyle x_{0}\leq x\leq x_{1}}, and the functionsa(x){\textstyle a(x)}andb(x){\textstyle b(x)}are both continuous and both have continuous derivatives forx0≤x≤x1{\textstyle x_{0}\leq x\leq x_{1}}. Then, forx0≤x≤x1{\textstyle \,x_{0}\leq x\leq x_{1}}:F′(x)=f(x,b(x))b′(x)−f(x,a(x))a′(x)+∫a(x)b(x)∂∂xf(x,t)dt.{\displaystyle F'(x)=f(x,b(x))\,b'(x)-f(x,a(x))\,a'(x)+\int _{a(x)}^{b(x)}{\frac {\partial }{\partial x}}\,f(x,t)\;dt\,.} This formula is the general form of theLeibniz integral ruleand can be derived using thefundamental theorem of calculus. 
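The Leibniz integral rule can be exercised numerically on a simple case, \(F(x) = \int_0^x x\,t\,dt = x^3/2\): here \(a(x) = 0\), \(b(x) = x\), so two of the three terms are nonzero and the rule gives \(x^2 + x^2/2 = 3x^2/2\). A sketch with hand-rolled trapezoid integration and an invented test point:

```python
# Numeric check of the Leibniz integral rule on F(x) = ∫_0^x (x·t) dt = x^3/2.
# Rule-based derivative: f(x, b(x))·b'(x) + ∫_0^x ∂/∂x (x·t) dt = x^2 + x^2/2.

def integrate(f, a, b, n=1000):
    """Composite trapezoid rule on [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

x0 = 1.3
F = lambda x: integrate(lambda t: x * t, 0.0, x)
numeric = (F(x0 + 1e-5) - F(x0 - 1e-5)) / 2e-5          # finite-difference F'
rule = x0**2 + integrate(lambda t: t, 0.0, x0)          # boundary term + ∂f/∂x term
print(abs(numeric - rule) < 1e-4)  # → True
```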
Some rules exist for computing then{\textstyle n}th derivative of functions, wheren{\textstyle n}is a positive integer, including: Iff{\textstyle f}andg{\textstyle g}aren{\textstyle n}-times differentiable, then:dndxn[f(g(x))]=n!∑{km}f(r)(g(x))∏m=1n1km!(g(m)(x))km,{\displaystyle {\frac {d^{n}}{dx^{n}}}[f(g(x))]=n!\sum _{\{k_{m}\}}f^{(r)}(g(x))\prod _{m=1}^{n}{\frac {1}{k_{m}!}}\left(g^{(m)}(x)\right)^{k_{m}},}wherer=∑m=1n−1km{\textstyle r=\sum _{m=1}^{n-1}k_{m}}and the set{km}{\textstyle \{k_{m}\}}consists of all non-negative integer solutions of theDiophantine equation∑m=1nmkm=n{\textstyle \sum _{m=1}^{n}mk_{m}=n}. Iff{\textstyle f}andg{\textstyle g}aren{\textstyle n}-times differentiable, then:dndxn[f(x)g(x)]=∑k=0n(nk)dn−kdxn−kf(x)dkdxkg(x).{\displaystyle {\frac {d^{n}}{dx^{n}}}[f(x)g(x)]=\sum _{k=0}^{n}{\binom {n}{k}}{\frac {d^{n-k}}{dx^{n-k}}}f(x){\frac {d^{k}}{dx^{k}}}g(x).} These rules are given in many books, both on elementary and advanced calculus, in pure and applied mathematics. Those in this article (in addition to the above references) can be found in:
https://en.wikipedia.org/wiki/Table_of_derivatives#Derivatives_of_trigonometric_functions
In mathematics, the values of thetrigonometric functionscan be expressed approximately, as incos⁡(π/4)≈0.707{\displaystyle \cos(\pi /4)\approx 0.707}, or exactly, as incos⁡(π/4)=2/2{\displaystyle \cos(\pi /4)={\sqrt {2}}/2}. Whiletrigonometric tablescontain many approximate values, the exact values for certain angles can be expressed by a combination of arithmetic operations andsquare roots. The angles with trigonometric values that are expressible in this way are exactly those that can be constructed with acompass and straight edge, and the values are calledconstructible numbers. The trigonometric functions of angles that are multiples of 15°, 18°, or 22.5° have simple algebraic values. These values are listed in the following table for angles from 0° to 45°[1][2](seebelowfor proofs). In the table below, the label "Undefined" represents a ratio1:0.{\displaystyle 1:0.}If the codomain of the trigonometric functions is taken to be thereal numbersthese entries areundefined, whereas if the codomain is taken to be theprojectively extended real numbers, these entries take the value∞{\displaystyle \infty }(seedivision by zero). For angles outside of this range, trigonometric values can be found by applyingreflection and shift identitiessuch as Atrigonometric numberis a number that can be expressed as thesine or cosineof arationalmultiple ofπradians.[3]Sincesin⁡(x)=cos⁡(x−π/2),{\displaystyle \sin(x)=\cos(x-\pi /2),}the case of a sine can be omitted from this definition. Therefore any trigonometric number can be written ascos⁡(2πk/n){\displaystyle \cos(2\pi k/n)}, wherekandnare integers. This number can be thought of as the real part of thecomplex numbercos⁡(2πk/n)+isin⁡(2πk/n){\displaystyle \cos(2\pi k/n)+i\sin(2\pi k/n)}.De Moivre's formulashows that numbers of this form areroots of unity: Since the root of unity is arootof the polynomialxn− 1, it isalgebraic. 
Since the trigonometric number is the average of the root of unity and itscomplex conjugate, and algebraic numbers are closed under arithmetic operations, every trigonometric number is algebraic.[3]The minimal polynomials of trigonometric numbers can beexplicitly enumerated.[4]In contrast, by theLindemann–Weierstrass theorem, the sine or cosine of any non-zero algebraic number is always transcendental.[5] The real part of any root of unity is a trigonometric number. ByNiven's theorem, the only rational trigonometric numbers are 0, 1, −1, 1/2, and −1/2.[6] An angle can be constructed with a compass and straightedge if and only if its sine (or equivalently cosine) can be expressed by a combination of arithmetic operations and square roots applied to integers.[7]Additionally, an angle that is a rational multiple ofπ{\displaystyle \pi }radians is constructible if and only if, when it is expressed asaπ/b{\displaystyle a\pi /b}radians, whereaandbarerelatively primeintegers, theprime factorizationof the denominator,b, is the product of somepower of twoand any number of distinctFermat primes(a Fermat prime is a prime number one greater than a power of two).[8] Thus, for example,2π/15=24∘{\displaystyle 2\pi /15=24^{\circ }}is a constructible angle because 15 is the product of the Fermat primes 3 and 5. Similarlyπ/12=15∘{\displaystyle \pi /12=15^{\circ }}is a constructible angle because 12 is a power of two (4) times a Fermat prime (3). Butπ/9=20∘{\displaystyle \pi /9=20^{\circ }}is not a constructible angle, since9=3⋅3{\displaystyle 9=3\cdot 3}is not the product ofdistinctFermat primes as it contains 3 as a factor twice, and neither isπ/7≈25.714∘{\displaystyle \pi /7\approx 25.714^{\circ }}, since 7 is not a Fermat prime.[9] It results from the above characterisation that an angle of an integer number of degrees is constructible if and only if this number of degrees is a multiple of3. 
From areflection identity,cos⁡(45∘)=sin⁡(90∘−45∘)=sin⁡(45∘){\displaystyle \cos(45^{\circ })=\sin(90^{\circ }-45^{\circ })=\sin(45^{\circ })}. Substituting into thePythagorean trigonometric identitysin⁡(45∘)2+cos⁡(45∘)2=1{\displaystyle \sin(45^{\circ })^{2}+\cos(45^{\circ })^{2}=1}, one obtains theminimal polynomial2sin⁡(45∘)2−1=0{\displaystyle 2\sin(45^{\circ })^{2}-1=0}. Taking the positive root, one findssin⁡(45∘)=cos⁡(45∘)=1/2=2/2{\displaystyle \sin(45^{\circ })=\cos(45^{\circ })=1/{\sqrt {2}}={\sqrt {2}}/2}. A geometric way of deriving the sine or cosine of 45° is by considering an isosceles right triangle with leg length 1. Since two of the angles in an isosceles triangle are equal, if the remaining angle is 90° for a right triangle, then the two equal angles are each 45°. Then by the Pythagorean theorem, the length of the hypotenuse of such a triangle is2{\displaystyle {\sqrt {2}}}. Scaling the triangle so that its hypotenuse has a length of 1 divides the lengths by2{\displaystyle {\sqrt {2}}}, giving the same value for the sine or cosine of 45° given above. The values of sine and cosine of 30 and 60 degrees are derived by analysis of theequilateral triangle. In an equilateral triangle, the 3 angles are equal and sum to 180°, therefore each corner angle is 60°. Bisecting one corner, thespecial right trianglewith angles 30-60-90 is obtained. By symmetry, the bisected side is half of the side of the equilateral triangle, so one concludessin⁡(30∘)=1/2{\displaystyle \sin(30^{\circ })=1/2}. The Pythagorean and reflection identities then givesin⁡(60∘)=cos⁡(30∘)=1−(1/2)2=3/2{\displaystyle \sin(60^{\circ })=\cos(30^{\circ })={\sqrt {1-(1/2)^{2}}}={\sqrt {3}}/2}. 
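The closed forms derived above from the right isosceles and equilateral triangles agree with floating-point sine and cosine:

```python
import math

# The triangle-derived exact values for 30°, 45°, and 60° check out numerically.

checks = [
    (math.sin(math.radians(45)), math.sqrt(2) / 2),
    (math.sin(math.radians(30)), 1 / 2),
    (math.cos(math.radians(30)), math.sqrt(3) / 2),
    (math.sin(math.radians(60)), math.sqrt(3) / 2),
]
print(all(math.isclose(a, b) for a, b in checks))  # → True
```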
The value ofsin⁡(18∘){\displaystyle \sin(18^{\circ })}may be derived using themultiple angle formulasfor sine and cosine.[10]By the double angle formula for sine: By the triple angle formula for cosine: Since sin(36°) = cos(54°), we equate these two expressions and cancel a factor of cos(18°): This quadratic equation has only one positive root: The Pythagorean identity then givescos⁡(18∘){\displaystyle \cos(18^{\circ })}, and the double and triple angle formulas give sine and cosine of 36°, 54°, and 72°. Thencos⁡(36∘)=(5+1)/4=φ/2{\displaystyle \cos(36^{\circ })=({\sqrt {5}}+1)/4=\varphi /2}, where⁠φ{\displaystyle \varphi }⁠is thegolden ratio. The sines and cosines of all other angles between 0 and 90° that are multiples of 3° can be derived from the angles described above and thesum and difference formulas. Specifically,[11] For example, since24∘=60∘−36∘{\displaystyle 24^{\circ }=60^{\circ }-36^{\circ }}, its cosine can be derived by the cosine difference formula: If the denominator,b, is multiplied by additional factors of 2, the sine and cosine can be derived with thehalf-angle formulas. For example, 22.5° (π/8 rad) is half of 45°, so its sine and cosine are:[12] Repeated application of the half-angle formulas leads tonested radicals, specifically nestedsquare roots of 2of the form2±⋯{\displaystyle {\sqrt {2\pm \cdots }}}. In general, the sine and cosine of most angles of the formβ/2n{\displaystyle \beta /2^{n}}can be expressed using nested square roots of 2 in terms ofβ{\displaystyle \beta }. 
Specifically, if one can write an angle asα=π(12−∑i=1k∏j=1ibj2i+1)=π(12−b14−b1b28−b1b2b316−…−b1b2…bk2k+1){\displaystyle \alpha =\pi \left({\frac {1}{2}}-\sum _{i=1}^{k}{\frac {\prod _{j=1}^{i}b_{j}}{2^{i+1}}}\right)=\pi \left({\frac {1}{2}}-{\frac {b_{1}}{4}}-{\frac {b_{1}b_{2}}{8}}-{\frac {b_{1}b_{2}b_{3}}{16}}-\ldots -{\frac {b_{1}b_{2}\ldots b_{k}}{2^{k+1}}}\right)}wherebk∈[−2,2]{\displaystyle b_{k}\in [-2,2]}andbi{\displaystyle b_{i}}is -1, 0, or 1 fori<k{\displaystyle i<k}, then[13]cos⁡(α)=b122+b22+b32+…+bk−12+2sin⁡(πbk4){\displaystyle \cos(\alpha )={\frac {b_{1}}{2}}{\sqrt {2+b_{2}{\sqrt {2+b_{3}{\sqrt {2+\ldots +b_{k-1}{\sqrt {2+2\sin \left({\frac {\pi b_{k}}{4}}\right)}}}}}}}}}and ifb1≠0{\displaystyle b_{1}\neq 0}then[13]sin⁡(α)=122−b22+b32+b42+…+bk−12+2sin⁡(πbk4){\displaystyle \sin(\alpha )={\frac {1}{2}}{\sqrt {2-b_{2}{\sqrt {2+b_{3}{\sqrt {2+b_{4}{\sqrt {2+\ldots +b_{k-1}{\sqrt {2+2\sin \left({\frac {\pi b_{k}}{4}}\right)}}}}}}}}}}}For example,13π32=π(12−14+18+116−132){\displaystyle {\frac {13\pi }{32}}=\pi \left({\frac {1}{2}}-{\frac {1}{4}}+{\frac {1}{8}}+{\frac {1}{16}}-{\frac {1}{32}}\right)}, so one has(b1,b2,b3,b4)=(1,−1,1,−1){\displaystyle (b_{1},b_{2},b_{3},b_{4})=(1,-1,1,-1)}and obtains:cos⁡(13π32)=122−2+2+2sin⁡(−π4)=122−2+2−2{\displaystyle \cos \left({\frac {13\pi }{32}}\right)={\frac {1}{2}}{\sqrt {2-{\sqrt {2+{\sqrt {2+2\sin \left({\frac {-\pi }{4}}\right)}}}}}}={\frac {1}{2}}{\sqrt {2-{\sqrt {2+{\sqrt {2-{\sqrt {2}}}}}}}}}sin⁡(13π32)=122+2+2−2{\displaystyle \sin \left({\frac {13\pi }{32}}\right)={\frac {1}{2}}{\sqrt {2+{\sqrt {2+{\sqrt {2-{\sqrt {2}}}}}}}}} Since 17 is a Fermat prime, a regular17-gonis constructible, which means that the sines and cosines of angles such as2π/17{\displaystyle 2\pi /17}radians can be expressed in terms of square roots. 
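The quadratic-derived values and the nested radicals above can be verified in floating point:

```python
import math

# Numerical check of sin(18°) = (√5 − 1)/4, cos(36°) = (√5 + 1)/4 = φ/2,
# and the nested square roots of 2 obtained for 13π/32.

phi = (1 + math.sqrt(5)) / 2
print(math.isclose(math.sin(math.radians(18)), (math.sqrt(5) - 1) / 4))  # → True
print(math.isclose(math.cos(math.radians(36)), phi / 2))                 # → True

cos_val = 0.5 * math.sqrt(2 - math.sqrt(2 + math.sqrt(2 - math.sqrt(2))))
sin_val = 0.5 * math.sqrt(2 + math.sqrt(2 + math.sqrt(2 - math.sqrt(2))))
print(math.isclose(cos_val, math.cos(13 * math.pi / 32)))  # → True
print(math.isclose(sin_val, math.sin(13 * math.pi / 32)))  # → True
```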
In particular, in 1796,Carl Friedrich Gaussshowed that:[14][15] The sines and cosines of other constructible angles of the formk2nπ17{\displaystyle {\frac {k2^{n}\pi }{17}}}(for integersk,n{\displaystyle k,n}) can be derived from this one. As discussed in§ Constructibility, only certain angles that are rational multiples ofπ{\displaystyle \pi }radians have trigonometric values that can be expressed with square roots. The angle 1°, beingπ/180=π/(22⋅32⋅5){\displaystyle \pi /180=\pi /(2^{2}\cdot 3^{2}\cdot 5)}radians, has a repeated factor of 3 in the denominator and thereforesin⁡(1∘){\displaystyle \sin(1^{\circ })}cannot be expressed using only square roots. A related question is whether it can be expressed using cube roots. The following two approaches can be used, but both result in an expression that involves thecube root of a complex number. Using the triple-angle identity, we can identifysin⁡(1∘){\displaystyle \sin(1^{\circ })}as a root of a cubic polynomial:sin⁡(3∘)=−4x3+3x{\displaystyle \sin(3^{\circ })=-4x^{3}+3x}. The three roots of this polynomial aresin⁡(1∘){\displaystyle \sin(1^{\circ })},sin⁡(59∘){\displaystyle \sin(59^{\circ })}, and−sin⁡(61∘){\displaystyle -\sin(61^{\circ })}. Sincesin⁡(3∘){\displaystyle \sin(3^{\circ })}is constructible, an expression for it could be plugged intoCardano's formulato yield an expression forsin⁡(1∘){\displaystyle \sin(1^{\circ })}. However, since all three roots of the cubic are real, this is an instance ofcasus irreducibilis, and the expression would require taking the cube root of a complex number.[16][17] Alternatively, byDe Moivre's formula: Taking cube roots and adding or subtracting the equations, we have:[17]
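Whatever form the cube-root expressions take, the De Moivre route can be illustrated numerically: the principal cube root of cos 3° + i sin 3° has argument 1°, so its imaginary part is exactly sin 1°. A sketch (Python's complex power takes the principal branch):

```python
import cmath
import math

z = cmath.exp(1j * math.radians(3))     # cos 3° + i sin 3°
w = z ** (1 / 3)                        # principal cube root: cos 1° + i sin 1°
sin1 = w.imag

assert abs(sin1 - math.sin(math.radians(1))) < 1e-15
# sin 1° is indeed a root of the cubic 3x − 4x³ = sin 3° discussed above
assert abs(3 * sin1 - 4 * sin1**3 - math.sin(math.radians(3))) < 1e-15
```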
https://en.wikipedia.org/wiki/Exact_trigonometric_values
Theexternal secantfunction (abbreviatedexsecant, symbolizedexsec) is atrigonometric functiondefined in terms of thesecantfunction: exsec⁡θ=sec⁡θ−1=1cos⁡θ−1.{\displaystyle \operatorname {exsec} \theta =\sec \theta -1={\frac {1}{\cos \theta }}-1.} It was introduced in 1855 by Americancivil engineerCharles Haslett, who used it in conjunction with the existingversinefunction,vers⁡θ=1−cos⁡θ,{\displaystyle \operatorname {vers} \theta =1-\cos \theta ,}for designing and measuringcircularsections ofrailroadtrack.[3]It was adopted bysurveyorsand civil engineers in the United States for railroad androad design, and since the early 20th century has sometimes been briefly mentioned in American trigonometry textbooks and general-purpose engineering manuals.[4]For completeness, a few books also defined acoexsecantorexcosecantfunction (symbolizedcoexsecorexcsc),coexsec⁡θ={\displaystyle \operatorname {coexsec} \theta ={}}csc⁡θ−1,{\displaystyle \csc \theta -1,}the exsecant of thecomplementary angle,[5][6]though it was not used in practice. While the exsecant has occasionally found other applications, today it is obscure and mainly of historical interest.[7] As aline segment, an external secant of acirclehas one endpoint on the circumference, and then extends radially outward. The length of this segment is the radius of the circle times the trigonometric exsecant of the central angle between the segment's inner endpoint and thepoint of tangencyfor a line through the outer endpoint andtangentto the circle. The wordsecantcomes from Latin for "to cut", and a generalsecant line"cuts" a circle, intersecting it twice; this concept dates to antiquity and can be found in Book 3 ofEuclid'sElements, as used e.g. in theintersecting secants theorem. 
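In code, the defining identities are one-liners; a sketch in Python (the function names are illustrative, not a standard library API):

```python
import math

def exsec(theta):
    """External secant: sec θ − 1."""
    return 1 / math.cos(theta) - 1

def coexsec(theta):
    """Excosecant: csc θ − 1, the exsecant of the complementary angle."""
    return 1 / math.sin(theta) - 1

# sec 60° = 2 and csc 30° = 2, so both "excess" functions equal 1 there
assert abs(exsec(math.radians(60)) - 1) < 1e-12
assert abs(coexsec(math.radians(30)) - 1) < 1e-12
# complementary-angle relation: coexsec θ = exsec(π/2 − θ)
assert abs(coexsec(0.4) - exsec(math.pi / 2 - 0.4)) < 1e-12
```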
18th century sources inLatincalledanynon-tangentialline segment external to a circle with one endpoint on the circumference asecans exterior.[8] The trigonometricsecant, named byThomas Fincke(1583), is more specifically based on a line segment with one endpoint at the center of a circle and the other endpoint outside the circle; the circle divides this segment into a radius and an external secant. The external secant segment was used byGalileo Galilei(1632) under the namesecant.[9] In the 19th century, mostrailroadtracks were constructed out ofarcs of circles, calledsimple curves.[10]Surveyorsandcivil engineersworking for the railroad needed to make many repetitive trigonometrical calculations to measure and plan circular sections of track. In surveying, and more generally in practical geometry, tables of both "natural" trigonometric functions and theircommon logarithmswere used, depending on the specific calculation. Using logarithms converts expensive multiplication of multi-digit numbers to cheaper addition, and logarithmic versions of trigonometric tables further saved labor by reducing the number of necessary table lookups.[11] Theexternal secantorexternal distanceof a curved track section is the shortest distance between the track and the intersection of the tangent lines from the ends of the arc, which equals the radius times the trigonometric exsecant of half thecentral anglesubtended by the arc,Rexsec⁡12Δ.{\displaystyle R\operatorname {exsec} {\tfrac {1}{2}}\Delta .}[12]By comparison, theversed sineof a curved track section is the furthest distance from thelongchord(the line segment between endpoints) to the track[13]– cf.Sagitta– which equals the radius times the trigonometric versine of half the central angle,Rvers⁡12Δ.{\displaystyle R\operatorname {vers} {\tfrac {1}{2}}\Delta .}These are both natural quantities to measure or calculate when surveying circular arcs, which must subsequently be multiplied or divided by other quantities. 
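For a concrete illustration, the two track quantities can be computed side by side; the radius and central angle below are made-up example values, not taken from the text:

```python
import math

def external_distance(R, delta):
    """E = R · exsec(Δ/2): curve-to-tangent-intersection distance."""
    return R * (1 / math.cos(delta / 2) - 1)

def middle_ordinate(R, delta):
    """M = R · vers(Δ/2): long-chord-to-curve distance (the sagitta)."""
    return R * (1 - math.cos(delta / 2))

R = 1000.0                    # hypothetical radius (e.g. in feet)
delta = math.radians(20)      # hypothetical central angle of the arc

E = external_distance(R, delta)
M = middle_ordinate(R, delta)
# The two are linked by E = M · sec(Δ/2), since exsec θ = vers θ · sec θ
assert abs(E - M / math.cos(delta / 2)) < 1e-9
```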
Charles Haslett (1855) found that directly looking up the logarithm of the exsecant and versine saved significant effort and produced more accurate results compared to calculating the same quantity from values found in previously available trigonometric tables.[3]The same idea was adopted by other authors, such as Searles (1880).[14]By 1913 Haslett's approach was so widely adopted in the American railroad industry that, in that context, "tables of external secants and versed sines [were] more common than [were] tables of secants".[15] In the late-19th and 20th century, railroads began using arcs of anEuler spiralas atrack transition curvebetween straight or circular sections of differing curvature. These spiral curves can be approximately calculated using exsecants and versines.[15][16] Solving the same types of problems is required when surveying circular sections ofcanals[17]and roads, and the exsecant was still used in mid-20th century books about road surveying.[18] The exsecant has sometimes been used for other applications, such asbeam theory[19]anddepth soundingwith a wire.[20] In recent years, the availability ofcalculatorsandcomputershas removed the need for trigonometric tables of specialized functions such as this one.[21]Exsecant is generally not directly built into calculators or computing environments (though it has sometimes been included insoftware libraries),[22]and calculations in general are much cheaper than in the past, no longer requiring tedious manual labor. Naïvely evaluating the expressions1−cos⁡θ{\displaystyle 1-\cos \theta }(versine) andsec⁡θ−1{\displaystyle \sec \theta -1}(exsecant) is problematic for small angles wheresec⁡θ≈cos⁡θ≈1.{\displaystyle \sec \theta \approx \cos \theta \approx 1.}Computing the difference between two approximately equal quantities results incatastrophic cancellation: because most of the digits of each quantity are the same, they cancel in the subtraction, yielding a lower-precision result. 
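The cancellation is easy to demonstrate: for a tiny angle, the naïve difference loses essentially all of its significant digits, while the product form exsec θ = tan θ tan ½θ stays accurate. A sketch:

```python
import math

def exsec_naive(theta):
    return 1 / math.cos(theta) - 1                 # subtractive cancellation near 0

def exsec_stable(theta):
    return math.tan(theta) * math.tan(theta / 2)   # product form, well conditioned

theta = 1e-8
true_value = theta**2 / 2        # leading Taylor term, ≈ 5e-17 for this θ

assert abs(exsec_stable(theta) - true_value) < 1e-24   # ~16 correct digits
assert abs(exsec_naive(theta) - true_value) > 1e-18    # most digits lost
```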
For example, the secant of 1° is approximately 1.000152, with the leading several digits wasted on zeros, while the common logarithm of the exsecant of 1° is approximately −3.817220,[23] all of whose digits are meaningful. If the logarithm of the exsecant is calculated by looking up the secant in a six-place trigonometric table and then subtracting 1, the difference sec 1° − 1 ≈ 0.000152 has only 3 significant digits, and after computing the logarithm only three digits are correct, log(sec 1° − 1) ≈ −3.818156.[24] For even smaller angles the loss of precision is worse. If a table or computer implementation of the exsecant function is not available, the exsecant can be accurately computed as exsec θ = tan θ tan ½θ, or using the versine, exsec θ = vers θ sec θ, which can itself be computed as vers θ = 2(sin ½θ)² = sin θ tan ½θ; Haslett used these identities to compute his 1855 exsecant and versine tables.[25][26] For a sufficiently small angle, a circular arc is approximately shaped like a parabola, and the versine and exsecant are approximately equal to each other and both proportional to the square of the arclength.[27] The inverse of the exsecant function, which might be symbolized arcexsec,[6] is well defined if its argument y ≥ 0 or y ≤ −2 and can be expressed in terms of other inverse trigonometric functions (using radians for the angle):

arcexsec y = arcsec(y + 1) = { arctan(√(y² + 2y)) if y ≥ 0;  undefined if −2 < y < 0;  π − arctan(√(y² + 2y)) if y ≤ −2 };

the arctangent expression is well behaved for small angles.[28] While historical uses of the exsecant did not explicitly involve calculus, its derivative and antiderivative (for x in radians) are:[29]

d/dx exsec x = tan x sec x,   ∫ exsec x dx = ln|sec x + tan x| − x + C,

where ln is the natural logarithm. See also Integral of the secant function. The exsecant of twice an angle is:[6]

exsec 2θ = 2 sin²θ / (1 − 2 sin²θ).

Van Sickle, Jenna (2011). "The history of one definition: Teaching trigonometry in the US before 1900". International Journal for the History of Mathematics Education. 6 (2): 55–70. Review: Poor, Henry Varnum, ed. (1856-03-22). "Practical Book of Reference, and Engineer's Field Book. By Charles Haslett". American Railroad Journal (Review). Second Quarto Series. XII (12): 184. Whole No. 1040, Vol. XXIX. Zucker, Ruth (1964). "4.3.147: Elementary Transcendental Functions - Circular functions". In Abramowitz, Milton; Stegun, Irene A. (eds.). Handbook of Mathematical Functions. Washington, D.C.: National Bureau of Standards. p. 78. LCCN 64-60036. van Haecht, Joannes (1784). "Articulus III: De secantibus circuli: Corollarium III: [109]". Geometria elementaria et practica: quam in usum auditorum (in Latin). Lovanii, e typographia academica. p. 24, foldout. Finocchiaro, Maurice A. (2003).
"Physical-Mathematical Reasoning: Galileo on the Extruding Power of Terrestrial Rotation".Synthese.134(1–2, Logic and Mathematical Reasoning):217–244.doi:10.1023/A:1022143816001.JSTOR20117331. Searles, William Henry; Ives, Howard Chapin (1915) [1880].Field Engineering: A Handbook of the Theory and Practice of Railway Surveying, Location and Construction(17th ed.). New York:John Wiley & Sons. Meyer, Carl F. (1969) [1949].Route Surveying and Design(4th ed.). Scranton, PA: International Textbook Co. "MIT/GNU Scheme – Scheme Arithmetic"(MIT/GNU Schemesource code). v. 12.1.Massachusetts Institute of Technology. 2023-09-01.exsecfunction,arith.scmlines 61–63. Retrieved2024-04-01. Review:"Field Manual for Railroad Engineers. By J. C. Nagle".The Engineer(Review).84: 540. 1897-12-03. "MIT/GNU Scheme – Scheme Arithmetic"(MIT/GNU Schemesource code). v. 12.1.Massachusetts Institute of Technology. 2023-09-01.aexsecfunction,arith.scmlines 65–71. Retrieved2024-04-01.
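A direct implementation of the piecewise inverse given earlier, with the same domain restriction (the name arcexsec follows the symbol suggested in the text; it is not a standard library function):

```python
import math

def arcexsec(y):
    """Inverse exsecant via arcsec(y + 1); defined for y >= 0 or y <= -2."""
    if y >= 0:
        return math.atan(math.sqrt(y * y + 2 * y))
    if y <= -2:
        return math.pi - math.atan(math.sqrt(y * y + 2 * y))
    raise ValueError("arcexsec is undefined for -2 < y < 0")

# Round trip against exsec on the principal branch [0, π/2)
for x in (0.1, 0.5, 1.0, 1.5):
    y = 1 / math.cos(x) - 1
    assert abs(arcexsec(y) - x) < 1e-9
```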
https://en.wikipedia.org/wiki/Exsecant
Inspherical trigonometry, thelaw of cosines(also called thecosine rule for sides[1]) is a theorem relating the sides and angles ofspherical triangles, analogous to the ordinarylaw of cosinesfrom planetrigonometry. Given a unit sphere, a "spherical triangle" on the surface of the sphere is defined by thegreat circlesconnecting three pointsu,v, andwon the sphere (shown at right). If the lengths of these three sides area(fromutov),b(fromutow), andc(fromvtow), and the angle of the corner oppositecisC, then the (first) spherical law of cosines states:[2][1] cos⁡c=cos⁡acos⁡b+sin⁡asin⁡bcos⁡C{\displaystyle \cos c=\cos a\cos b+\sin a\sin b\cos C\,} Since this is a unit sphere, the lengthsa,b, andcare simply equal to the angles (inradians) subtended by those sides from the center of the sphere. (For a non-unit sphere, the lengths are the subtended angles times the radius, and the formula still holds ifa,bandcare reinterpreted as the subtended angles). As a special case, forC=⁠π/2⁠, thencosC= 0, and one obtains the spherical analogue of thePythagorean theorem: cos⁡c=cos⁡acos⁡b{\displaystyle \cos c=\cos a\cos b\,} If the law of cosines is used to solve forc, the necessity of inverting the cosine magnifiesrounding errorswhencis small. In this case, the alternative formulation of thelaw of haversinesis preferable.[3] A variation on the law of cosines, the second spherical law of cosines,[4](also called thecosine rule for angles[1]) states: cos⁡C=−cos⁡Acos⁡B+sin⁡Asin⁡Bcos⁡c{\displaystyle \cos C=-\cos A\cos B+\sin A\sin B\cos c\,} whereAandBare the angles of the corners opposite to sidesaandb, respectively. It can be obtained from consideration of aspherical triangle dualto the given one. Letu,v, andwdenote theunit vectorsfrom the center of the sphere to those corners of the triangle. 
The angles and distances do not change if the coordinate system is rotated, so we can rotate the coordinate system so thatu{\displaystyle \mathbf {u} }is at thenorth poleandv{\displaystyle \mathbf {v} }is somewhere on theprime meridian(longitude of 0). With this rotation, thespherical coordinatesforv{\displaystyle \mathbf {v} }are(r,θ,ϕ)=(1,a,0),{\displaystyle (r,\theta ,\phi )=(1,a,0),}whereθis the angle measured from the north pole not from the equator, and the spherical coordinates forw{\displaystyle \mathbf {w} }are(r,θ,ϕ)=(1,b,C).{\displaystyle (r,\theta ,\phi )=(1,b,C).}The Cartesian coordinates forv{\displaystyle \mathbf {v} }are(x,y,z)=(sin⁡a,0,cos⁡a){\displaystyle (x,y,z)=(\sin a,0,\cos a)}and the Cartesian coordinates forw{\displaystyle \mathbf {w} }are(x,y,z)=(sin⁡bcos⁡C,sin⁡bsin⁡C,cos⁡b).{\displaystyle (x,y,z)=(\sin b\cos C,\sin b\sin C,\cos b).}The value ofcos⁡c{\displaystyle \cos c}is the dot product of the two Cartesian vectors, which issin⁡asin⁡bcos⁡C+cos⁡acos⁡b.{\displaystyle \sin a\sin b\cos C+\cos a\cos b.} Letu,v, andwdenote theunit vectorsfrom the center of the sphere to those corners of the triangle. We haveu·u= 1,v·w= cosc,u·v= cosa, andu·w= cosb. 
The vectorsu×vandu×whave lengthssinaandsinbrespectively and the angle between them isC, sosin⁡asin⁡bcos⁡C=(u×v)⋅(u×w)=(u⋅u)(v⋅w)−(u⋅w)(v⋅u)=cos⁡c−cos⁡acos⁡b{\displaystyle {\begin{aligned}\sin a\sin b\cos C&=({\mathbf {u}}\times {\mathbf {v}})\cdot ({\mathbf {u}}\times {\mathbf {w}})\\&=({\mathbf {u}}\cdot {\mathbf {u}})({\mathbf {v}}\cdot {\mathbf {w}})-({\mathbf {u}}\cdot {\mathbf {w}})({\mathbf {v}}\cdot {\mathbf {u}})\\&=\cos c-\cos a\cos b\end{aligned}}} usingcross products,dot products, and theBinet–Cauchy identity(p×q)⋅(r×s)=(p⋅r)(q⋅s)−(p⋅s)(q⋅r).{\displaystyle ({\mathbf {p}}\times {\mathbf {q}})\cdot ({\mathbf {r}}\times {\mathbf {s}})=({\mathbf {p}}\cdot {\mathbf {r}})({\mathbf {q}}\cdot {\mathbf {s}})-({\mathbf {p}}\cdot {\mathbf {s}})({\mathbf {q}}\cdot {\mathbf {r}}).} The following proof relies on the concept ofquaternionsand is based on a proof given in Brand:[5]Letu,v, andwdenote theunit vectorsfrom the center of the unit sphere to those corners of the triangle. We define the quaternionu= (0,u) = 0 +uxi+uyj+uzk. The quaternionuis used to represent a rotation by 180° around the axis indicated by the vectoru. We note that using−uas the axis of rotation gives the same result, and that the rotation is its own inverse. We also definev= (0,v)andw= (0,w). We compute theproductof quaternions, which also gives the composition of the corresponding rotations: where(f,g)represents the real (scalar) and imaginary (vector) parts of a quaternion,ais the angle betweenuandv, andw′ = (u×v) / |u×v|is the axis of the rotation that movesutovalong a great circle. Similarly we define: The quaternionsq,r, andsare used to represent rotations with axes of rotationw′,u′, andv′, respectively, and angles of rotation2a,2b, and2c, respectively. (Because these are double angles, each ofq,r, andsrepresents two applications of the rotation implied by an edge of the spherical triangle.) 
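Both the statement of the law and the Binet–Cauchy step can be checked numerically; a sketch with plain Python tuples for vectors (the side lengths and angle are arbitrary test values):

```python
import math

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

# Unit vectors to the three corners, built as in the coordinate proof:
# u at the north pole, v at longitude 0, w at longitude C.
a, b, C = 0.7, 1.1, 0.9
u = (0.0, 0.0, 1.0)
v = (math.sin(a), 0.0, math.cos(a))
w = (math.sin(b) * math.cos(C), math.sin(b) * math.sin(C), math.cos(b))

cos_c = dot(v, w)   # cosine of the third side

# Law of cosines: cos c = cos a cos b + sin a sin b cos C
assert abs(cos_c - (math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(C))) < 1e-12

# Binet–Cauchy: (u×v)·(u×w) = (u·u)(v·w) − (u·w)(v·u)
lhs = dot(cross(u, v), cross(u, w))
rhs = dot(u, u) * dot(v, w) - dot(u, w) * dot(v, u)
assert abs(lhs - rhs) < 1e-12
```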
From the definitions, it follows that which tells us that the composition of these rotations is the identity transformation. In particular,rq=s−1gives us Expanding the left-hand side, we obtain Equating the real parts on both sides of the identity, we obtain Becauseu′is parallel tov×w,w′is parallel tou×v= −v×u, andCis the angle betweenv×wandv×u, it follows thatu′⋅w′=−cos⁡C{\displaystyle {\mathbf {u}}'\cdot {\mathbf {w}}'=-\cos C}. Thus, The first and second spherical laws of cosines can be rearranged to put the sides (a,b,c) and angles (A,B,C) on opposite sides of the equations:cos⁡C=cos⁡c−cos⁡acos⁡bsin⁡asin⁡bcos⁡c=cos⁡C+cos⁡Acos⁡Bsin⁡Asin⁡B{\displaystyle {\begin{aligned}\cos C&={\frac {\cos c-\cos a\cos b}{\sin a\sin b}}\\\cos c&={\frac {\cos C+\cos A\cos B}{\sin A\sin B}}\\\end{aligned}}} Forsmallspherical triangles, i.e. for smalla,b, andc, the spherical law of cosines is approximately the same as the ordinary planar law of cosines,c2≈a2+b2−2abcos⁡C.{\displaystyle c^{2}\approx a^{2}+b^{2}-2ab\cos C\,.} To prove this, we will use thesmall-angle approximationobtained from theMaclaurin seriesfor the cosine and sine functions:cos⁡a=1−a22+O(a4)sin⁡a=a+O(a3){\displaystyle {\begin{aligned}\cos a&=1-{\frac {a^{2}}{2}}+O\left(a^{4}\right)\\\sin a&=a+O\left(a^{3}\right)\end{aligned}}} Substituting these expressions into the spherical law of cosines nets: 1−c22+O(c4)=1−a22−b22+a2b24+O(a4)+O(b4)+cos⁡(C)(ab+O(a3b)+O(ab3)+O(a3b3)){\displaystyle 1-{\frac {c^{2}}{2}}+O\left(c^{4}\right)=1-{\frac {a^{2}}{2}}-{\frac {b^{2}}{2}}+{\frac {a^{2}b^{2}}{4}}+O\left(a^{4}\right)+O\left(b^{4}\right)+\cos(C)\left(ab+O\left(a^{3}b\right)+O\left(ab^{3}\right)+O\left(a^{3}b^{3}\right)\right)} or after simplifying: c2=a2+b2−2abcos⁡C+O(c4)+O(a4)+O(b4)+O(a2b2)+O(a3b)+O(ab3)+O(a3b3).{\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C+O\left(c^{4}\right)+O\left(a^{4}\right)+O\left(b^{4}\right)+O\left(a^{2}b^{2}\right)+O\left(a^{3}b\right)+O\left(ab^{3}\right)+O\left(a^{3}b^{3}\right).} Thebig Oterms 
foraandbare dominated byO(a4) +O(b4)asaandbget small, so we can write this last expression as: c2=a2+b2−2abcos⁡C+O(a4)+O(b4)+O(c4).{\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C+O\left(a^{4}\right)+O\left(b^{4}\right)+O\left(c^{4}\right).} Various trigonometric equations equivalent to the spherical law of cosines were used in the course of solving astronomical problems by medieval Islamic astronomersal-Khwārizmī(9th century) andal-Battānī(c. 900), Indian astronomerNīlakaṇṭha(15th century), and Austrian astronomerGeorg von Peuerbach(15th century) but none of them treated it as a general method for solving spherical triangles.[6]For example, al-Khwārizmī calculated the azimuth⁠α{\displaystyle \alpha }⁠of the Sun in terms of its altitude⁠h{\displaystyle h}⁠, terrestrial latitude⁠ϕ{\displaystyle \phi }⁠, and ortive amplitude⁠ψ{\displaystyle \psi }⁠(angular distance between due East and the Sun's rising place on the horizon) as⁠cos⁡α=(sin⁡ψ−tan⁡ϕsin⁡h)/cos⁡h{\displaystyle \cos \alpha =(\sin \psi -\tan \phi \sin h)/\cos h}⁠.[7](SeeHorizontal coordinate system.) The spherical law of cosines appeared as an independent trigonometrical identity for solving spherical triangles in Peuerbach's studentRegiomontanus'sDe triangulis omnimodis(unfinished at Regiomontanus's death in 1476, published posthumously 1533), a foundational work for European trigonometry and astronomy which comprehensively described how to solve plane and spherical triangles. Regiomontanus used nearly the modern form, but written in terms of theversine,⁠vers⁡x=1−cos⁡x{\displaystyle \operatorname {vers} x=1-\cos x}⁠, rather than the cosine,[8] Mathematical historians have speculated that Regiomontanus may have adapted the result from specific astronomical examples in al-Battānī'sKitāb az-Zīj aṣ-Ṣābi’, which was published in Latin translation annotated by Regiomontanus in 1537.
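The planar limit described above can be observed by shrinking the triangle; the scale factors below are arbitrary test values:

```python
import math

def spherical_c(a, b, C):
    """Third side from the spherical law of cosines."""
    return math.acos(math.cos(a)*math.cos(b) + math.sin(a)*math.sin(b)*math.cos(C))

def planar_c(a, b, C):
    """Third side from the planar law of cosines."""
    return math.sqrt(a*a + b*b - 2*a*b*math.cos(C))

C = 1.0
for h in (1e-1, 1e-2, 1e-3):
    a, b = 2 * h, 3 * h
    # c² agrees to fourth order in the sides, so c agrees to roughly third order
    assert abs(spherical_c(a, b, C) - planar_c(a, b, C)) < 10 * h**3
```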
https://en.wikipedia.org/wiki/Spherical_law_of_cosines
The following is a list of integrals (antiderivative functions) of trigonometric functions. For antiderivatives involving both exponential and trigonometric functions, see List of integrals of exponential functions. For a complete list of antiderivative functions, see Lists of integrals. For the special antiderivatives involving trigonometric functions, see Trigonometric integral.[1] Generally, if f(x) is a trigonometric function and g(x) is its derivative, then ∫ a g(nx) dx = (a/n) f(nx) + C; for instance, with f = sin and g = cos,

∫ a cos nx dx = (a/n) sin nx + C.

In all formulas the constant a is assumed to be nonzero, and C denotes the constant of integration. An integral that is a rational function of the sine and cosine can be evaluated using Bioche's rules. Using the beta function B(a, b) one can write Using the modified Struve functions L_α(x) and modified Bessel functions I_α(x) one can write
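The basic pattern can be spot-checked numerically, here for ∫ a cos nx dx = (a/n) sin nx + C, comparing the closed form against a midpoint-rule quadrature (all constants are arbitrary test values):

```python
import math

a, n = 3.0, 5.0
F = lambda x: (a / n) * math.sin(n * x)   # claimed antiderivative of a·cos(nx)

upper = 1.2
N = 100_000
h = upper / N
# Midpoint-rule estimate of the definite integral of a·cos(nx) over [0, upper]
quad = sum(a * math.cos(n * (i + 0.5) * h) for i in range(N)) * h

assert abs(quad - (F(upper) - F(0.0))) < 1e-8
```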
https://en.wikipedia.org/wiki/List_of_integrals_of_trigonometric_functions
Intrigonometry, it is common to usemnemonicsto help remembertrigonometric identitiesand the relationships between the varioustrigonometric functions. Thesine,cosine, andtangentratios in a right triangle can be remembered by representing them as strings of letters, for instance SOH-CAH-TOA in English: One way to remember the letters is to sound them out phonetically (i.e./ˌsoʊkəˈtoʊə/SOH-kə-TOH-ə, similar toKrakatoa).[1] Another method is to expand the letters into a sentence, such as "Some Old Horses Chew Apples Happily Throughout Old Age", "Some Old Hippy Caught Another Hippy Tripping On Acid", or "Studying Our Homework Can Always Help To Obtain Achievement". The order may be switched, as in "Tommy On A Ship Of His Caught A Herring" (tangent, sine, cosine) or "The Old Army Colonel And His Son Often Hiccup" (tangent, cosine, sine) or "Come And Have Some Oranges Help To Overcome Amnesia" (cosine, sine, tangent).[2][3]Communities in Chinese circles may choose to remember it as TOA-CAH-SOH, which also means 'big-footed woman' (Chinese:大腳嫂;Pe̍h-ōe-jī:tōa-kha-só) inHokkien.[citation needed] An alternate way to remember the letters for Sin, Cos, and Tan is to memorize the syllables Oh, Ah, Oh-Ah (i.e./oʊəˈoʊ.ə/) for O/H, A/H, O/A.[4]Longer mnemonics for these letters include "Oscar Has A Hold On Angie" and "Oscar Had A Heap of Apples."[2] All StudentsTakeCalculus is amnemonicfor the sign of eachtrigonometric functionsin eachquadrantof the plane. The letters ASTC signify which of the trigonometric functions are positive, starting in the top right 1st quadrant and movingcounterclockwisethrough quadrants 2 to 4.[5] Other mnemonics include: Other easy-to-remember mnemonics are theACTSandCASTlaws. These have the disadvantages of not going sequentially from quadrants 1 to 4 and not reinforcing the numbering convention of the quadrants. 
Sines and cosines of common angles 0°, 30°, 45°, 60° and 90° follow the patternn2{\displaystyle {\frac {\sqrt {n}}{2}}}withn = 0, 1, ..., 4for sine andn = 4, 3, ..., 0for cosine, respectively:[9] Another mnemonic permits all of the basic identities to be read off quickly. The hexagonal chart can be constructed with a little thought:[10] Starting at any vertex of the resulting hexagon: Aside from the last bullet, the specific values for each identity are summarized in this table:
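The √n/2 pattern for the common angles is easy to verify:

```python
import math

# sin runs n = 0..4 and cos runs n = 4..0 over the angles 0°, 30°, 45°, 60°, 90°
for k, angle in enumerate((0, 30, 45, 60, 90)):
    assert abs(math.sin(math.radians(angle)) - math.sqrt(k) / 2) < 1e-15
    assert abs(math.cos(math.radians(angle)) - math.sqrt(4 - k) / 2) < 1e-15
```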
https://en.wikipedia.org/wiki/Mnemonics_in_trigonometry
Pentagramma mirificum(Latinfor "miraculous pentagram") is astar polygonon asphere, composed of fivegreat circlearcs, all of whoseinternal anglesareright angles. This shape was described byJohn Napierin his 1614 bookMirifici Logarithmorum Canonis Descriptio(Description of the Admirable Table of Logarithms) along withrulesthat link the values oftrigonometric functionsof five parts of arightspherical triangle(two angles and three sides). The properties ofpentagramma mirificumwere studied, among others, byCarl Friedrich Gauss.[1] On a sphere, both the angles and the sides of a triangle (arcs of great circles) are measured as angles. There are five right angles, each measuringπ/2,{\displaystyle \pi /2,}atA{\displaystyle A},B{\displaystyle B},C{\displaystyle C},D{\displaystyle D}, andE.{\displaystyle E.} There are ten arcs, each measuringπ/2:{\displaystyle \pi /2:}PC{\displaystyle PC},PE{\displaystyle PE},QD{\displaystyle QD},QA{\displaystyle QA},RE{\displaystyle RE},RB{\displaystyle RB},SA{\displaystyle SA},SC{\displaystyle SC},TB{\displaystyle TB}, andTD.{\displaystyle TD.} In the spherical pentagonPQRST{\displaystyle PQRST}, every vertex is the pole of the opposite side. For instance, pointP{\displaystyle P}is the pole of equatorRS{\displaystyle RS}, pointQ{\displaystyle Q}— the pole of equatorST{\displaystyle ST}, etc. At each vertex of pentagonPQRST{\displaystyle PQRST}, theexternal angleis equal in measure to the opposite side. For instance,∠APT=∠BPQ=RS,∠BQP=∠CQR=ST,{\displaystyle \angle APT=\angle BPQ=RS,\;\angle BQP=\angle CQR=ST,}etc. Napier's circlesof spherical trianglesAPT{\displaystyle APT},BQP{\displaystyle BQP},CRQ{\displaystyle CRQ},DSR{\displaystyle DSR}, andETS{\displaystyle ETS}arerotationsof one another. 
Gauss introduced the notation The following identities hold, allowing the determination of any three of the above quantities from the two remaining ones:[2] Gauss proved the following "beautiful equality" (schöne Gleichung):[2] It is satisfied, for instance, by numbers(α,β,γ,δ,ε)=(9,2/3,2,5,1/3){\displaystyle (\alpha ,\beta ,\gamma ,\delta ,\varepsilon )=(9,2/3,2,5,1/3)}, whose productαβγδε{\displaystyle \alpha \beta \gamma \delta \varepsilon }is equal to20{\displaystyle 20}. Proof of the first part of the equality: Proof of the second part of the equality: From Gauss comes also the formula[2] (1+iα)(1+iβ)(1+iγ)(1+iδ)(1+iε)=αβγδεeiAPQRST,{\displaystyle (1+i{\sqrt {^{^{\!}}\alpha }})(1+i{\sqrt {\beta }})(1+i{\sqrt {^{^{\!}}\gamma }})(1+i{\sqrt {\delta }})(1+i{\sqrt {^{^{\!}}\varepsilon }})=\alpha \beta \gamma \delta \varepsilon e^{iA_{PQRST}},}whereAPQRST=2π−(|PQ⌢|+|QR⌢|+|RS⌢|+|ST⌢|+|TP⌢|){\displaystyle A_{PQRST}=2\pi -(|{\overset {\frown }{PQ}}|+|{\overset {\frown }{QR}}|+|{\overset {\frown }{RS}}|+|{\overset {\frown }{ST}}|+|{\overset {\frown }{TP}}|)}is the area of pentagonPQRST{\displaystyle PQRST}. The image of spherical pentagonPQRST{\displaystyle PQRST}in thegnomonic projection(a projection from the centre of the sphere) onto any plane tangent to the sphere is a rectilinear pentagon. Its five verticesP′Q′R′S′T′{\displaystyle P'Q'R'S'T'}unambiguously determineaconic section; in this case — anellipse. Gauss showed that the altitudes of pentagramP′Q′R′S′T′{\displaystyle P'Q'R'S'T'}(lines passing through vertices and perpendicular to opposite sides) cross in one pointO′{\displaystyle O'}, which is the image of the point of tangency of the plane to sphere. 
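The example values can be checked with exact rational arithmetic. The "beautiful equality" itself is elided above; it is commonly cited as 3 + α + β + γ + δ + ε = αβγδε = √((1+α)(1+β)(1+γ)(1+δ)(1+ε)), and that statement is supplied here as an assumption, not quoted from the text. A sketch using Python's fractions module:

```python
from fractions import Fraction as Fr

vals = [Fr(9), Fr(2, 3), Fr(2), Fr(5), Fr(1, 3)]   # Gauss's example numbers

prod = Fr(1)
for v in vals:
    prod *= v
assert prod == 20                  # αβγδε = 20, as stated in the text

# Commonly cited form of the schöne Gleichung (assumption, see lead-in):
assert 3 + sum(vals) == prod
grow = Fr(1)
for v in vals:
    grow *= 1 + v
assert grow == prod**2             # so sqrt(Π(1 + α_k)) equals αβγδε exactly
```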
Arthur Cayleyobserved that, if we set the origin of aCartesian coordinate systemin pointO′{\displaystyle O'}, then the coordinates of verticesP′Q′R′S′T′{\displaystyle P'Q'R'S'T'}:(x1,y1),…,{\displaystyle (x_{1},y_{1}),\ldots ,}(x5,y5){\displaystyle (x_{5},y_{5})}satisfy the equalitiesx1x4+y1y4={\displaystyle x_{1}x_{4}+y_{1}y_{4}=}x2x5+y2y5={\displaystyle x_{2}x_{5}+y_{2}y_{5}=}x3x1+y3y1={\displaystyle x_{3}x_{1}+y_{3}y_{1}=}x4x2+y4y2={\displaystyle x_{4}x_{2}+y_{4}y_{2}=}x5x3+y5y3=−ρ2{\displaystyle x_{5}x_{3}+y_{5}y_{3}=-\rho ^{2}}, whereρ{\displaystyle \rho }is the length of the radius of the sphere.[3]
https://en.wikipedia.org/wiki/Pentagramma_mirificum
Inmathematics, thePythagorean theoremorPythagoras' theoremis a fundamental relation inEuclidean geometrybetween the three sides of aright triangle. It states that the area of thesquarewhose side is thehypotenuse(the side opposite theright angle) is equal to the sum of the areas of the squares on the other two sides. Thetheoremcan be written as anequationrelating the lengths of the sidesa,band the hypotenusec, sometimes called thePythagorean equation:[1] The theorem is named for theGreekphilosopherPythagoras, born around 570 BC. The theorem has beenprovednumerous times by many different methods – possibly the most for any mathematical theorem. The proofs are diverse, including bothgeometricproofs andalgebraicproofs, with some dating back thousands of years. WhenEuclidean spaceis represented by aCartesian coordinate systeminanalytic geometry,Euclidean distancesatisfies the Pythagorean relation: the squared distance between two points equals the sum of squares of the difference in each coordinate between the points. The theorem can begeneralizedin various ways: tohigher-dimensional spaces, tospaces that are not Euclidean, to objects that are not right triangles, and to objects that are not triangles at all butn-dimensionalsolids. In one rearrangement proof, two squares are used whose sides have a measure ofa+b{\displaystyle a+b}and which contain four right triangles whose sides area,bandc, with the hypotenuse beingc. In the square on the right side, the triangles are placed such that the corners of the square correspond to the corners of the right angle in the triangles, forming a square in the center whose sides are lengthc. Each outer square has an area of(a+b)2{\displaystyle (a+b)^{2}}as well as2ab+c2{\displaystyle 2ab+c^{2}}, with2ab{\displaystyle 2ab}representing the total area of the four triangles. Within the big square on the left side, the four triangles are moved to form two similar rectangles with sides of lengthaandb. 
These rectangles in their new position have now delineated two new squares: one of side length a in the bottom-left corner, and another of side length b in the top-right corner. In this new position, the left-hand square has an area of (a + b)² as well as 2ab + a² + b². Since both large squares have area (a + b)², it follows that their other area expressions are also equal, so that 2ab + c² = 2ab + a² + b². With the area of the four triangles removed from both sides of the equation, what remains is a² + b² = c².[2] In another proof, the rectangles in the second box can also be placed such that both have one corner corresponding to consecutive corners of the square. In this way they again form two squares, this time in consecutive corners, with areas a² and b², which again leads to a second square with area 2ab + a² + b². English mathematician Sir Thomas Heath gives this proof in his commentary on Proposition I.47 in Euclid's Elements, and mentions the proposals of German mathematicians Carl Anton Bretschneider and Hermann Hankel that Pythagoras may have known this proof.
Heath himself favors a different proposal for a Pythagorean proof, but acknowledges from the outset of his discussion "that the Greek literature which we possess belonging to the first five centuries after Pythagoras contains no statement specifying this or any other particular great geometric discovery to him."[3]Recent scholarship has cast increasing doubt on any sort of role for Pythagoras as a creator of mathematics, although debate about this continues.[4] The theorem can be proved algebraically using four copies of the same triangle arranged symmetrically around a square with sidec, as shown in the lower part of the diagram.[5]This results in a larger square, with sidea+band area(a+b)2. The four triangles and the square sidecmust have the same area as the larger square, giving A similar proof uses four copies of a right triangle with sidesa,bandc, arranged inside a square with sidecas in the top half of the diagram.[6]The triangles are similar with area12ab{\displaystyle {\tfrac {1}{2}}ab}, while the small square has sideb−aand area(b−a)2. The area of the large square is therefore But this is a square with sidecand areac2, so This theorem may have more known proofs than any other (thelawofquadratic reciprocitybeing another contender for that distinction); the bookThe Pythagorean Propositioncontains 370 proofs.[7] This proof is based on theproportionalityof the sides of threesimilartriangles, that is, upon the fact that theratioof any two corresponding sides of similar triangles is the same regardless of the size of the triangles. LetABCrepresent a right triangle, with theright anglelocated atC, as shown on the figure. Draw thealtitudefrom pointC, and callHits intersection with the sideAB. PointHdivides the length of the hypotenusecinto partsdande. 
The new triangle,ACH,issimilarto triangleABC, because they both have a right angle (by definition of the altitude), and they share the angle atA, meaning that the third angle will be the same in both triangles as well, marked asθin the figure. By a similar reasoning, the triangleCBHis also similar toABC. The proof of similarity of the triangles requires thetriangle postulate: The sum of the angles in a triangle is two right angles, and is equivalent to theparallel postulate. Similarity of the triangles leads to the equality of ratios of corresponding sides: The first result equates thecosinesof the anglesθ, whereas the second result equates theirsines. These ratios can be written as Summing these two equalities results in which, after simplification, demonstrates the Pythagorean theorem: The role of this proof in history is the subject of much speculation. The underlying question is why Euclid did not use this proof, but invented another. Oneconjectureis that the proof by similar triangles involved a theory of proportions, a topic not discussed until later in theElements, and that the theory of proportions needed further development at that time.[8] Albert Einsteingave a proof by dissection in which the pieces do not need to be moved.[9]Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and twosimilarshapes that each include one of two legs instead of the hypotenuse (seeSimilar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. The dissection consists of dropping a perpendicular from the vertex of the right angle of the triangle to the hypotenuse, thus splitting the whole triangle into two parts. Those two parts have the same shape as the original right triangle, and have the legs of the original triangle as their hypotenuses, and the sum of their areas is that of the original triangle. 
Because the ratio of the area of a right triangle to the square of its hypotenuse is the same for similar triangles, the relationship between the areas of the three triangles holds for the squares of the sides of the large triangle as well. In outline, here is how the proof inEuclid'sElementsproceeds. The large square is divided into a left and right rectangle. A triangle is constructed that has half the area of the left rectangle. Then another triangle is constructed that has half the area of the square on the left-most side. These two triangles are shown to becongruent, proving this square has the same area as the left rectangle. This argument is followed by a similar version for the right rectangle and the remaining square. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the area of the other two squares. The details follow. LetA,B,Cbe theverticesof a right triangle, with a right angle atA. Drop a perpendicular fromAto the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs. For the formal proof, we require four elementarylemmata: Next, each top square is related to a triangle congruent with another triangle related in turn to one of two rectangles making up the lower square.[10] The proof is as follows: This proof, which appears in Euclid'sElementsas that of Proposition 47 in Book 1, demonstrates that the area of the square on the hypotenuse is the sum of the areas of the other two squares.[12][13]This is quite distinct from the proof by similarity of triangles, which is conjectured to be the proof that Pythagoras used.[14][15] Another by rearrangement is given by the middle animation. A large square is formed with areac2, from four identical right triangles with sidesa,bandc, fitted around a small central square. 
Then two rectangles are formed with sidesaandbby moving the triangles. Combining the smaller square with these rectangles produces two squares of areasa2andb2, which must have the same area as the initial large square.[16] The third, rightmost image also gives a proof. The upper two squares are divided as shown by the blue and green shading, into pieces that when rearranged can be made to fit in the lower square on the hypotenuse – or conversely the large square can be divided as shown into pieces that fill the other two. This way of cutting one figure into pieces and rearranging them to get another figure is calleddissection. This shows the area of the large square equals that of the two smaller ones.[17] As shown in the accompanying animation, area-preservingshear mappingsand translations can transform the squares on the sides adjacent to the right-angle onto the square on the hypotenuse, together covering it exactly.[18]Each shear leaves the base and height unchanged, thus leaving the area unchanged too. The translations also leave the area unchanged, as they do not alter the shapes at all. Each square is first sheared into a parallelogram, and then into a rectangle which can be translated onto one section of the square on the hypotenuse. A relatedproof by U.S. President James A. Garfieldwas published before he was elected president; while he was aU.S. Representative.[19][20][21]Instead of a square it uses atrapezoid, which can be constructed from the square in the second of the above proofs by bisecting along a diagonal of the inner square, to give the trapezoid as shown in the diagram. Thearea of the trapezoidcan be calculated to be half the area of the square, that is The inner square is similarly halved, and there are only two triangles so the proof proceeds as above except for a factor of12{\displaystyle {\frac {1}{2}}}, which is removed by multiplying by two to give the result. 
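The arithmetic of Garfield's trapezoid argument can be spot-checked numerically. The sketch below (function name mine, not from the source) compares the trapezoid's area with the combined area of the three triangles that tile it: two copies of the right triangle plus half of the square on the hypotenuse.

```python
# Numeric sketch of Garfield's trapezoid proof: a trapezoid with parallel
# sides a and b and height a + b is tiled by two copies of the right
# triangle and half of the square on the hypotenuse.
def garfield_check(a: float, b: float) -> bool:
    c_squared = a**2 + b**2
    trapezoid_area = 0.5 * (a + b) * (a + b)
    pieces_area = 2 * (0.5 * a * b) + 0.5 * c_squared
    return abs(trapezoid_area - pieces_area) < 1e-9

print(garfield_check(3, 4))  # → True
```

The check succeeds for any positive a and b, since both expressions expand to ab + (a² + b²)/2.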
One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employingcalculus.[22][23][24] The triangleABCis a right triangle, as shown in the upper part of the diagram, withBCthe hypotenuse. At the same time the triangle lengths are measured as shown, with the hypotenuse of lengthy, the sideACof lengthxand the sideABof lengtha, as seen in the lower diagram part. Ifxis increased by a small amountdxby extending the sideACslightly toD, thenyalso increases bydy. These form two sides of a triangle,CDE, which (withEchosen soCEis perpendicular to the hypotenuse) is a right triangle approximately similar toABC. Therefore, the ratios of their sides must be the same, that is: This can be rewritten asydy=xdx{\displaystyle y\,dy=x\,dx}, which is adifferential equationthat can be solved by direct integration: giving The constant can be deduced fromx= 0,y=ato give the equation This is more of an intuitive proof than a formal one: it can be made more rigorous if proper limits are used in place ofdxanddy. Theconverseof the theorem is also true:[25] Given a triangle with sides of lengtha,b, andc, ifa2+b2=c2,then the angle between sidesaandbis aright angle. For any three positivereal numbersa,b, andcsuch thata2+b2=c2, there exists a triangle with sidesa,bandcas a consequence of theconverse of the triangle inequality. This converse appears in Euclid'sElements(Book I, Proposition 48): "If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right."[26] It can be proved using thelaw of cosinesor as follows: LetABCbe a triangle with side lengthsa,b, andc, witha2+b2=c2.Construct a second triangle with sides of lengthaandbcontaining a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has lengthc=√a2+b2, the same as the hypotenuse of the first triangle. 
Since both triangles' sides are the same lengths a, b and c, the triangles are congruent and must have the same angles. Therefore, the angle between the sides of lengths a and b in the original triangle is a right angle. The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proved without assuming the Pythagorean theorem.[27][28] A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows. Let c be chosen to be the longest of the three sides and a + b > c (otherwise there is no triangle according to the triangle inequality). The following statements apply:[29] Edsger W. Dijkstra has stated this proposition about acute, right, and obtuse triangles in this language: sgn(α + β − γ) = sgn(a² + b² − c²), where α is the angle opposite to side a, β is the angle opposite to side b, γ is the angle opposite to side c, and sgn is the sign function.[30] A Pythagorean triple has three positive integers a, b, and c, such that a² + b² = c². In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths.[1] Such a triple is commonly written (a, b, c). Some well-known examples are (3, 4, 5) and (5, 12, 13). A primitive Pythagorean triple is one in which a, b and c are coprime (the greatest common divisor of a, b and c is 1). The following is a list of primitive Pythagorean triples with values less than 100: There are many formulas for generating Pythagorean triples. Of these, Euclid's formula is the most well-known: given arbitrary positive integers m and n with m > n, the formula states that the integers a = m² − n², b = 2mn, c = m² + n² form a Pythagorean triple. Consider a right triangle with sides a, b, c and altitude d (a line from the right angle and perpendicular to the hypotenuse c).
The Pythagorean theorem gives a² + b² = c², while the inverse Pythagorean theorem relates the two legs a, b to the altitude d by 1/a² + 1/b² = 1/d².[31] The equation can be transformed to 1/(xz)² + 1/(yz)² = 1/(xy)², where x² + y² = z² for any non-zero real x, y, z. If a, b, d are to be integers, the smallest solution with a > b > d is then (a, b, d) = (20, 15, 12), using the smallest Pythagorean triple 3, 4, 5. The reciprocal Pythagorean theorem is a special case of the optic equation where the denominators are squares, and it also holds for a heptagonal triangle whose sides p, q, r are square numbers. One of the consequences of the Pythagorean theorem is that line segments whose lengths are incommensurable (so that their ratio is not a rational number) can be constructed using a straightedge and compass. Pythagoras' theorem enables construction of incommensurable lengths because the hypotenuse of a triangle is related to the sides by the square root operation. The figure on the right shows how to construct line segments whose lengths are in the ratio of the square root of any positive integer.[32] Each triangle has a side (labeled "1") that is the chosen unit for measurement. In each right triangle, Pythagoras' theorem establishes the length of the hypotenuse in terms of this unit. If a hypotenuse is related to the unit by the square root of a positive integer that is not a perfect square, it is a realization of a length incommensurable with the unit, such as √2, √3, √5. For more detail, see Quadratic irrational. Incommensurable lengths conflicted with the Pythagorean school's concept of numbers as only whole numbers. The Pythagorean school dealt with proportions by comparison of integer multiples of a common subunit.[33] According to one legend, Hippasus of Metapontum (ca. 470 B.C.)
was drowned at sea for making known the existence of the irrational or incommensurable.[34]A careful discussion of Hippasus's contributions is found inFritz.[35] For anycomplex number theabsolute valueor modulus is given by So the three quantities,r,xandyare related by the Pythagorean equation, Note thatris defined to be a positive number or zero butxandycan be negative as well as positive. Geometricallyris the distance of thezfrom zero or the originOin thecomplex plane. This can be generalised to find the distance between two points,z1andz2say. The required distance is given by so again they are related by a version of the Pythagorean equation, The distance formula inCartesian coordinatesis derived from the Pythagorean theorem.[36]If(x1,y1)and(x2,y2)are points in the plane, then the distance between them, also called theEuclidean distance, is given by More generally, inEuclideann-space, the Euclidean distance between two points,A=(a1,a2,…,an){\displaystyle A\,=\,(a_{1},a_{2},\dots ,a_{n})}andB=(b1,b2,…,bn){\displaystyle B\,=\,(b_{1},b_{2},\dots ,b_{n})}, is defined, by generalization of the Pythagorean theorem, as: If instead of Euclidean distance, the square of this value (thesquared Euclidean distance, or SED) is used, the resulting equation avoids square roots and is simply a sum of the SED of the coordinates: The squared form is a smooth,convex functionof both points, and is widely used inoptimization theoryandstatistics, forming the basis ofleast squares. If Cartesian coordinates are not used, for example, ifpolar coordinatesare used in two dimensions or, in more general terms, ifcurvilinear coordinatesare used, the formulas expressing the Euclidean distance are more complicated than the Pythagorean theorem, but can be derived from it. A typical example where the straight-line distance between two points is converted to curvilinear coordinates can be found in theapplications of Legendre polynomials in physics. 
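The Cartesian distance formulas above generalize directly to n dimensions. A minimal sketch (function name mine):

```python
import math

# Euclidean distance in n dimensions, obtained by repeated application of
# the Pythagorean theorem to the coordinate differences.
def euclidean_distance(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

print(euclidean_distance((0, 0), (3, 4)))        # → 5.0
print(euclidean_distance((1, 2, 2), (0, 0, 0)))  # → 3.0
```

Summing the squared differences before the single square root is also exactly the squared Euclidean distance mentioned above when the `math.sqrt` call is dropped.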
The formulas can be discovered by using Pythagoras' theorem with the equations relating the curvilinear coordinates to Cartesian coordinates. For example, the polar coordinates(r,θ)can be introduced as: Then two points with locations(r1,θ1)and(r2,θ2)are separated by a distances: Performing the squares and combining terms, the Pythagorean formula for distance in Cartesian coordinates produces the separation in polar coordinates as: using the trigonometricproduct-to-sum formulas. This formula is thelaw of cosines, sometimes called the generalized Pythagorean theorem.[37]From this result, for the case where the radii to the two locations are at right angles, the enclosed angleΔθ=π/2,and the form corresponding to Pythagoras' theorem is regained:s2=r12+r22.{\displaystyle s^{2}=r_{1}^{2}+r_{2}^{2}.}The Pythagorean theorem, valid for right triangles, therefore is a special case of the more general law of cosines, valid for arbitrary triangles. In a right triangle with sidesa,band hypotenusec,trigonometrydetermines thesineandcosineof the angleθbetween sideaand the hypotenuse as: From that it follows: where the last step applies Pythagoras' theorem. This relation between sine and cosine is sometimes called the fundamental Pythagorean trigonometric identity.[38]In similar triangles, the ratios of the sides are the same regardless of the size of the triangles, and depend upon the angles. Consequently, in the figure, the triangle with hypotenuse of unit size has opposite side of sizesinθand adjacent side of sizecosθin units of the hypotenuse. The Pythagorean theorem relates thecross productanddot productin a similar way:[39] This can be seen from the definitions of the cross product and dot product, as withnaunit vectornormal to bothaandb. The relationship follows from these definitions and the Pythagorean trigonometric identity. This can also be used to define the cross product. 
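The relation between the cross and dot products just described, |a × b|² + (a · b)² = |a|² |b|², can be verified numerically. The sketch below hand-rolls both products for 3-vectors (helper names mine):

```python
def cross(u, v):
    # 3-D cross product, component by component
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = (1.0, 2.0, 3.0)
b = (-2.0, 0.5, 4.0)
lhs = dot(cross(a, b), cross(a, b)) + dot(a, b) ** 2
rhs = dot(a, a) * dot(b, b)  # |a|^2 |b|^2
print(abs(lhs - rhs) < 1e-9)  # → True
```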
By rearranging the following equation is obtained This can be considered as a condition on the cross product and so part of its definition, for example inseven dimensions.[40][41] If the first four of theEuclidean geometry axiomsare assumed to be true then the Pythagorean theorem is equivalent to the fifth. That is,Euclid's fifth postulateimplies the Pythagorean theorem and vice-versa. The Pythagorean theorem generalizes beyond the areas of squares on the three sides to anysimilar figures. This was known byHippocrates of Chiosin the 5th century BC,[42]and was included byEuclidin hisElements:[43] If one erects similar figures (seeEuclidean geometry) with corresponding sides on the sides of a right triangle, then the sum of the areas of the ones on the two smaller sides equals the area of the one on the larger side. This extension assumes that the sides of the original triangle are the corresponding sides of the three congruent figures (so the common ratios of sides between the similar figures area:b:c).[44]While Euclid's proof only applied to convex polygons, the theorem also applies to concave polygons and even to similar figures that have curved boundaries (but still with part of a figure's boundary being the side of the original triangle).[44] The basic idea behind this generalization is that the area of a plane figure isproportionalto the square of any linear dimension, and in particular is proportional to the square of the length of any side. Thus, if similar figures with areasA,BandCare erected on sides with corresponding lengthsa,bandcthen: But, by the Pythagorean theorem,a2+b2=c2, soA+B=C. Conversely, if we can prove thatA+B=Cfor three similar figures without using the Pythagorean theorem, then we can work backwards to construct a proof of the theorem. 
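The proportionality argument above can be made concrete: erect semicircles instead of squares on the sides of a 3-4-5 right triangle, and the two smaller areas still sum to the largest, since each area scales with the square of its side. A small sketch (names mine):

```python
import math

# Area of a semicircle whose diameter is the given side; each such area is
# proportional to the side squared, so A + B = C follows from a^2 + b^2 = c^2.
def semicircle_area(side):
    return 0.5 * math.pi * (side / 2) ** 2

a, b, c = 3.0, 4.0, 5.0
print(abs(semicircle_area(a) + semicircle_area(b) - semicircle_area(c)) < 1e-9)  # → True
```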
For example, the starting center triangle can be replicated and used as a triangle C on its hypotenuse, and two similar right triangles (A and B) constructed on the other two sides, formed by dividing the central triangle by its altitude. The sum of the areas of the two smaller triangles therefore is that of the third, thus A + B = C and reversing the above logic leads to the Pythagorean theorem a² + b² = c². (See also Einstein's proof by dissection without rearrangement.) The Pythagorean theorem is a special case of the more general theorem relating the lengths of sides in any triangle, the law of cosines, which states that a² + b² − 2ab cos θ = c², where θ is the angle between sides a and b.[45] When θ is π/2 radians or 90°, then cos θ = 0, and the formula reduces to the usual Pythagorean theorem. At any selected angle of a general triangle of sides a, b, c, inscribe an isosceles triangle such that the equal angles at its base θ are the same as the selected angle. Suppose the selected angle θ is opposite the side labeled c. Inscribing the isosceles triangle forms triangle CAD with angle θ opposite side b and with side r along c. A second triangle is formed with angle θ opposite side a and a side with length s along c, as shown in the figure. Thābit ibn Qurra stated that the sides of the three triangles were related as a² + b² = c(r + s).[47][48] As the angle θ approaches π/2, the base of the isosceles triangle narrows, and lengths r and s overlap less and less. When θ = π/2, ADB becomes a right triangle, r + s = c, and the original Pythagorean theorem is regained. One proof observes that triangle ABC has the same angles as triangle CAD, but in opposite order. (The two triangles share the angle at vertex A, both contain the angle θ, and so also have the same third angle by the triangle postulate.)
Consequently, ABC is similar to the reflection of CAD, the triangle DAC in the lower panel. Taking the ratio of sides opposite and adjacent to θ gives c/b = b/r, so that b² = cr. Likewise, for the reflection of the other triangle, c/a = a/s, so that a² = cs. Adding these two relations gives a² + b² = c(r + s), the required result. The theorem remains valid if the angle θ is obtuse, so that the lengths r and s are non-overlapping. Pappus's area theorem is a further generalization that applies to triangles that are not right triangles, using parallelograms on the three sides in place of squares (squares are a special case, of course). The upper figure shows that for a scalene triangle, the area of the parallelogram on the longest side is the sum of the areas of the parallelograms on the other two sides, provided the parallelogram on the long side is constructed as indicated (the dimensions labeled with arrows are the same, and determine the sides of the bottom parallelogram). This replacement of squares with parallelograms bears a clear resemblance to the original Pythagoras' theorem, and was considered a generalization by Pappus of Alexandria in the fourth century AD.[49][50] The lower figure shows the elements of the proof. Focus on the left side of the figure. The left green parallelogram has the same area as the left, blue portion of the bottom parallelogram because both have the same base b and height h. However, the left green parallelogram also has the same area as the left green parallelogram of the upper figure, because they have the same base (the upper left side of the triangle) and the same height normal to that side of the triangle. Repeating the argument for the right side of the figure, the bottom parallelogram has the same area as the sum of the two green parallelograms. In terms of solid geometry, Pythagoras' theorem can be applied to three dimensions as follows. Consider the cuboid shown in the figure. The length of the face diagonal AC is found from Pythagoras' theorem as AC² = AB² + BC², where these three sides form a right triangle.
Using diagonalACand the horizontal edgeCD, the length ofbody diagonalADthen is found by a second application of Pythagoras' theorem as: or, doing it all in one step: This result is the three-dimensional expression for the magnitude of a vectorv(the diagonal AD) in terms of its orthogonal components{vk}(the three mutually perpendicular sides): This one-step formulation may be viewed as a generalization of Pythagoras' theorem to higher dimensions. However, this result is really just the repeated application of the original Pythagoras' theorem to a succession of right triangles in a sequence of orthogonal planes. A substantial generalization of the Pythagorean theorem to three dimensions isde Gua's theorem, named forJean Paul de Gua de Malves: If atetrahedronhas a right angle corner (like a corner of acube), then the square of the area of the face opposite the right angle corner is the sum of the squares of the areas of the other three faces. This result can be generalized as in the "n-dimensional Pythagorean theorem":[51] Letx1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}}be orthogonal vectors inRn. Consider then-dimensional simplexSwith vertices0,x1,…,xn{\displaystyle 0,x_{1},\ldots ,x_{n}}. (Think of the(n− 1)-dimensional simplex with verticesx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}not including the origin as the "hypotenuse" ofSand the remaining(n− 1)-dimensional faces ofSas its "legs".) Then the square of the volume of the hypotenuse ofSis the sum of the squares of the volumes of thenlegs. This statement is illustrated in three dimensions by the tetrahedron in the figure. The "hypotenuse" is the base of the tetrahedron at the back of the figure, and the "legs" are the three sides emanating from the vertex in the foreground. As the depth of the base from the vertex increases, the area of the "legs" increases, while that of the base is fixed. 
The theorem suggests that when this depth is at the value creating a right vertex, the generalization of Pythagoras' theorem applies. In a different wording:[52] Given ann-rectangularn-dimensional simplex, the square of the(n− 1)-content of thefacetopposing the right vertex will equal the sum of the squares of the(n− 1)-contents of the remaining facets. The Pythagorean theorem can be generalized toinner product spaces,[53]which are generalizations of the familiar 2-dimensional and 3-dimensionalEuclidean spaces. For example, afunctionmay be considered as avectorwith infinitely many components in an inner product space, as infunctional analysis.[54] In an inner product space, the concept ofperpendicularityis replaced by the concept oforthogonality: two vectorsvandware orthogonal if their inner product⟨v,w⟩{\displaystyle \langle \mathbf {v} ,\mathbf {w} \rangle }is zero. Theinner productis a generalization of thedot productof vectors. The dot product is called thestandardinner product or theEuclideaninner product. However, other inner products are possible.[55] The concept of length is replaced by the concept of thenorm‖v‖ of a vectorv, defined as:[56] In an inner-product space, thePythagorean theoremstates that for any two orthogonal vectorsvandwwe have Here the vectorsvandware akin to the sides of a right triangle with hypotenuse given by thevector sumv+w. This form of the Pythagorean theorem is a consequence of theproperties of the inner product: where⟨v,w⟩=⟨w,v⟩=0{\displaystyle \langle \mathbf {v,\ w} \rangle =\langle \mathbf {w,\ v} \rangle =0}because of orthogonality. A further generalization of the Pythagorean theorem in an inner product space to non-orthogonal vectors is theparallelogram law:[56] which says that twice the sum of the squares of the lengths of the sides of a parallelogram is the sum of the squares of the lengths of the diagonals. 
Any norm that satisfies this equality isipso factoa norm corresponding to an inner product.[56] The Pythagorean identity can be extended to sums of more than two orthogonal vectors. Ifv1,v2, ...,vnare pairwise-orthogonal vectors in an inner-product space, then application of the Pythagorean theorem to successive pairs of these vectors (as described for 3-dimensions in the section onsolid geometry) results in the equation[57] Another generalization of the Pythagorean theorem applies toLebesgue-measurablesets of objects in any number of dimensions. Specifically, the square of the measure of anm-dimensional set of objects in one or more parallelm-dimensionalflatsinn-dimensionalEuclidean spaceis equal to the sum of the squares of the measures of theorthogonalprojections of the object(s) onto allm-dimensional coordinate subspaces.[58] In mathematical terms: where: The Pythagorean theorem is derived from theaxiomsofEuclidean geometry, and in fact, were the Pythagorean theorem to fail for some right triangle, then the plane in which this triangle is contained cannot be Euclidean. More precisely, the Pythagorean theoremimplies, and is implied by, Euclid's Parallel (Fifth) Postulate.[59][60]Thus, right triangles in anon-Euclidean geometry[61]do not satisfy the Pythagorean theorem. For example, inspherical geometry, all three sides of the right triangle (saya,b, andc) bounding an octant of the unit sphere have length equal toπ/2, and all its angles are right angles, which violates the Pythagorean theorem becausea2+b2=2c2>c2{\displaystyle a^{2}+b^{2}=2c^{2}>c^{2}}. Here two cases of non-Euclidean geometry are considered—spherical geometryandhyperbolic plane geometry; in each case, as in the Euclidean case for non-right triangles, the result replacing the Pythagorean theorem follows from the appropriate law of cosines. 
However, the Pythagorean theorem remains true in hyperbolic geometry and elliptic geometry if the condition that the triangle be right is replaced with the condition that two of the angles sum to the third, sayA+B=C. The sides are then related as follows: the sum of the areas of the circles with diametersaandbequals the area of the circle with diameterc.[62] For any righttriangle on a sphereof radiusR(for example, ifγin the figure is a right angle), with sidesa,b,c,the relation between the sides takes the form:[63] This equation can be derived as a special case of thespherical law of cosinesthat applies to all spherical triangles: For infinitesimal triangles on the sphere (or equivalently, for finite spherical triangles on a sphere of infinite radius), the spherical relation between the sides of a right triangle reduces to the Euclidean form of the Pythagorean theorem. To see how, assume we have a spherical triangle of fixed side lengthsa,b,andcon a sphere with expanding radiusR. AsRapproaches infinity the quantitiesa/R,b/R,andc/Rtend to zero and the spherical Pythagorean identity reduces to1=1,{\displaystyle 1=1,}so we must look at itsasymptotic expansion. TheMaclaurin seriesfor the cosine function can be written ascos⁡x=1−12x2+O(x4){\textstyle \cos x=1-{\tfrac {1}{2}}x^{2}+O{\left(x^{4}\right)}}with the remainder term inbig O notation. Lettingx=c/R{\displaystyle x=c/R}be a side of the triangle, and treating the expression as an asymptotic expansion in terms ofRfor a fixedc, and likewise foraandb. Substituting the asymptotic expansion for each of the cosines into the spherical relation for a right triangle yields Subtracting 1 and then negating each side, Multiplying through by2R2,the asymptotic expansion forcin terms of fixeda,band variableRis The Euclidean Pythagorean relationshipc2=a2+b2{\textstyle c^{2}=a^{2}+b^{2}}is recovered in the limit, as the remainder vanishes when the radiusRapproaches infinity. 
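The limiting behaviour just described can be checked numerically. The sketch below (function names mine) solves the spherical relation cos(c/R) = cos(a/R) cos(b/R), and its hyperbolic counterpart cosh(c/R) = cosh(a/R) cosh(b/R) treated in the hyperbolic case below, for c, and confirms that both approach the Euclidean value √(a² + b²) as R grows.

```python
import math

# Hypotenuse of a right triangle with legs a, b on a sphere of radius R,
# and in a hyperbolic plane of curvature -1/R^2.
def spherical_c(a, b, R):
    return R * math.acos(math.cos(a / R) * math.cos(b / R))

def hyperbolic_c(a, b, R):
    return R * math.acosh(math.cosh(a / R) * math.cosh(b / R))

a, b = 3.0, 4.0
# Positive curvature shortens the hypotenuse, negative curvature lengthens it:
print(spherical_c(a, b, 10.0) < 5.0 < hyperbolic_c(a, b, 10.0))  # → True
# Both tend to the Euclidean hypotenuse as the curvature vanishes:
print(abs(spherical_c(a, b, 1e4) - 5.0) < 1e-5)   # → True
print(abs(hyperbolic_c(a, b, 1e4) - 5.0) < 1e-5)  # → True
```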
For practical computation in spherical trigonometry with small right triangles, cosines can be replaced with sines using the double-angle identitycos⁡2θ=1−2sin2⁡θ{\displaystyle \cos {2\theta }=1-2\sin ^{2}{\theta }}to avoidloss of significance. Then the spherical Pythagorean theorem can alternately be written as In ahyperbolicspace with uniformGaussian curvature−1/R2, for a righttrianglewith legsa,b, and hypotenusec, the relation between the sides takes the form:[64] where cosh is thehyperbolic cosine. This formula is a special form of thehyperbolic law of cosinesthat applies to all hyperbolic triangles:[65] with γ the angle at the vertex opposite the sidec. By using theMaclaurin seriesfor the hyperbolic cosine,coshx≈ 1 +x2/2, it can be shown that as a hyperbolic triangle becomes very small (that is, asa,b, andcall approach zero), the hyperbolic relation for a right triangle approaches the form of Pythagoras' theorem. For small right triangles(a,b<<R), the hyperbolic cosines can be eliminated to avoidloss of significance, giving For any uniform curvatureK(positive, zero, or negative), in very small right triangles (|K|a2, |K|b2<< 1) with hypotenusec, it can be shown that The Pythagorean theorem applies toinfinitesimaltriangles seen indifferential geometry. In three dimensional space, the distance between two infinitesimally separated points satisfies withdsthe element of distance and (dx,dy,dz) the components of the vector separating the two points. Such a space is called aEuclidean space. However, inRiemannian geometry, a generalization of this expression useful for general coordinates (not just Cartesian) and general spaces (not just Euclidean) takes the form:[66] which is called themetric tensor. (Sometimes, by abuse of language, the same term is applied to the set of coefficientsgij.) It may be a function of position, and often describescurved space. A simple example is Euclidean (flat) space expressed incurvilinear coordinates. 
For example, in polar coordinates:

{\displaystyle ds^{2}=dr^{2}+r^{2}\,d\theta ^{2}.}

There is debate whether the Pythagorean theorem was discovered once, or many times in many places, and the date of first discovery is uncertain, as is the date of the first proof. Historians of Mesopotamian mathematics have concluded that the Pythagorean rule was in widespread use during the Old Babylonian period (20th to 16th centuries BC), over a thousand years before Pythagoras was born.[68][69][70][71] The history of the theorem can be divided into four parts: knowledge of Pythagorean triples, knowledge of the relationship among the sides of a right triangle, knowledge of the relationships among adjacent angles, and proofs of the theorem within some deductive system.

Written c. 1800 BC, the Egyptian Middle Kingdom Berlin Papyrus 6619 includes a problem whose solution is the Pythagorean triple 6:8:10, but the problem does not mention a triangle. The Mesopotamian tablet Plimpton 322, written near Larsa also c. 1800 BC, contains many entries closely related to Pythagorean triples.[72]

In India, the Baudhayana Shulba Sutra, the dates of which are given variously as between the 8th and 5th century BC,[73] contains a list of Pythagorean triples and a statement of the Pythagorean theorem, both in the special case of the isosceles right triangle and in the general case, as does the Apastamba Shulba Sutra (c. 600 BC).[a]

Byzantine Neoplatonic philosopher and mathematician Proclus, writing in the fifth century AD, states two arithmetic rules, "one of them attributed to Plato, the other to Pythagoras",[76] for generating special Pythagorean triples. The rule attributed to Pythagoras (c. 570 – c. 495 BC) starts from an odd number and produces a triple with leg and hypotenuse differing by one unit; the rule attributed to Plato (428/427 or 424/423 – 348/347 BC) starts from an even number and produces a triple with leg and hypotenuse differing by two units. According to Thomas L. Heath (1861–1940), no specific attribution of the theorem to Pythagoras exists in the surviving Greek literature from the five centuries after Pythagoras lived.[77] However, when authors such as Plutarch and Cicero attributed the theorem to Pythagoras, they did so in a way which suggests that the attribution was widely known and undoubted.[78][79] Classicist Kurt von Fritz wrote, "Whether this formula is rightly attributed to Pythagoras personally ... one can safely assume that it belongs to the very oldest period of Pythagorean mathematics."[35] Around 300 BC, in Euclid's Elements, the oldest extant axiomatic proof of the theorem is presented.[80]

With contents known much earlier, but in surviving texts dating from roughly the 1st century BC, the Chinese text Zhoubi Suanjing (周髀算经) (The Arithmetical Classic of the Gnomon and the Circular Paths of Heaven) gives a reasoning for the Pythagorean theorem for the (3, 4, 5) triangle — in China it is called the "Gougu theorem" (勾股定理).[81][82] During the Han Dynasty (202 BC to 220 AD), Pythagorean triples appear in The Nine Chapters on the Mathematical Art,[83] together with a mention of right triangles.[84] Some believe the theorem arose first in China in the 11th century BC,[85] where it is alternatively known as the "Shang Gao theorem" (商高定理),[86] named after the Duke of Zhou's astronomer and mathematician, whose reasoning composed most of what was in the Zhoubi Suanjing.[87]
https://en.wikipedia.org/wiki/Pythagorean_theorem
In trigonometry, tangent half-angle formulas relate the tangent of half of an angle to trigonometric functions of the entire angle.[1] The tangent of half an angle is the stereographic projection of the circle through the point at angle π radians onto the line through the angles ±π/2. Tangent half-angle formulae include

{\displaystyle \tan {\tfrac {1}{2}}(\eta \pm \theta )={\frac {\tan {\tfrac {1}{2}}\eta \pm \tan {\tfrac {1}{2}}\theta }{1\mp \tan {\tfrac {1}{2}}\eta \,\tan {\tfrac {1}{2}}\theta }}={\frac {\sin \eta \pm \sin \theta }{\cos \eta +\cos \theta }}=-{\frac {\cos \eta -\cos \theta }{\sin \eta \mp \sin \theta }}\,,}

with simpler formulae when η is known to be 0, π/2, π, or 3π/2, because sin(η) and cos(η) can then be replaced by simple constants. In the reverse direction, the formulae include

{\displaystyle {\begin{aligned}\sin \alpha &={\frac {2\tan {\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}\\[7pt]\cos \alpha &={\frac {1-\tan ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}\\[7pt]\tan \alpha &={\frac {2\tan {\tfrac {1}{2}}\alpha }{1-\tan ^{2}{\tfrac {1}{2}}\alpha }}\,.\end{aligned}}}

Using the angle addition and subtraction formulae for both the sine and cosine, one obtains

{\displaystyle {\begin{aligned}\sin(a+b)+\sin(a-b)&=2\sin a\cos b\\[15mu]\cos(a+b)+\cos(a-b)&=2\cos a\cos b\,.\end{aligned}}}

Setting a = ½(η + θ) and b = ½(η − θ) and substituting yields

{\displaystyle {\begin{aligned}\sin \eta +\sin \theta &=2\sin {\tfrac {1}{2}}(\eta +\theta )\,\cos {\tfrac {1}{2}}(\eta -\theta )\\[15mu]\cos \eta +\cos \theta &=2\cos {\tfrac {1}{2}}(\eta +\theta )\,\cos {\tfrac {1}{2}}(\eta -\theta )\,.\end{aligned}}}

Dividing the sum of sines by the sum of cosines gives

{\displaystyle {\frac {\sin \eta +\sin \theta }{\cos \eta +\cos \theta }}=\tan {\tfrac {1}{2}}(\eta +\theta )\,.}

Also, a similar calculation starting with sin(a + b) − sin(a − b) and cos(a + b) − cos(a − b) gives

{\displaystyle -{\frac {\cos \eta -\cos \theta }{\sin \eta -\sin \theta }}=\tan {\tfrac {1}{2}}(\eta +\theta )\,.}

Furthermore, using double-angle formulae and the Pythagorean identity 1 + tan²α = 1/cos²α gives

{\displaystyle \sin \alpha =2\sin {\tfrac {1}{2}}\alpha \cos {\tfrac {1}{2}}\alpha ={\frac {2\sin {\tfrac {1}{2}}\alpha \,\cos {\tfrac {1}{2}}\alpha {\Big /}\cos ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}={\frac {2\tan {\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}}

{\displaystyle \cos \alpha =\cos ^{2}{\tfrac {1}{2}}\alpha -\sin ^{2}{\tfrac {1}{2}}\alpha ={\frac {\left(\cos ^{2}{\tfrac {1}{2}}\alpha -\sin ^{2}{\tfrac {1}{2}}\alpha \right){\Big /}\cos ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}={\frac {1-\tan ^{2}{\tfrac {1}{2}}\alpha }{1+\tan ^{2}{\tfrac {1}{2}}\alpha }}\,.}

Taking the quotient of the formulae for sine and cosine yields

{\displaystyle \tan \alpha ={\frac {2\tan {\tfrac {1}{2}}\alpha }{1-\tan ^{2}{\tfrac {1}{2}}\alpha }}\,.}

Applying the formulae derived above to the rhombus figure on the right, it is readily shown that

{\displaystyle \tan {\tfrac {1}{2}}(a+b)={\frac {\sin {\tfrac {1}{2}}(a+b)}{\cos {\tfrac {1}{2}}(a+b)}}={\frac {\sin a+\sin b}{\cos a+\cos b}}.}

In the unit circle, application of the above shows that t = tan ½φ. By similarity of triangles,

{\displaystyle {\frac {t}{\sin \varphi }}={\frac {1}{1+\cos \varphi }}.}

It follows that

{\displaystyle t={\frac {\sin \varphi }{1+\cos \varphi }}={\frac {\sin \varphi (1-\cos \varphi )}{(1+\cos \varphi )(1-\cos \varphi )}}={\frac {1-\cos \varphi }{\sin \varphi }}.}

In various applications of trigonometry, it is useful to rewrite the trigonometric functions (such as sine and cosine) in terms of rational functions of a new variable t. These identities are known collectively as the tangent half-angle formulae because of the definition of t. They can be useful in calculus for converting rational functions in sine and cosine to functions of t in order to find their antiderivatives.

Geometrically, the construction goes like this: for any point (cos φ, sin φ) on the unit circle, draw the line passing through it and the point (−1, 0). This line crosses the y-axis at some point y = t. One can show using simple geometry that t = tan(φ/2). The equation for the drawn line is y = (1 + x)t. The equation for the intersection of the line and circle is then a quadratic equation involving t. The two solutions to this equation are (−1, 0) and (cos φ, sin φ). This allows us to write the latter as rational functions of t (solutions are given below).

The parameter t represents the stereographic projection of the point (cos φ, sin φ) onto the y-axis with the center of projection at (−1, 0). Thus, the tangent half-angle formulae give conversions between the stereographic coordinate t on the unit circle and the standard angular coordinate φ.
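The two expressions for t and the rational parametrization of the circle can be spot-checked numerically (a short Python sketch, stdlib only):

```python
import math

# t = tan(phi/2) should equal both sin(phi)/(1 + cos(phi)) and
# (1 - cos(phi))/sin(phi), and (cos phi, sin phi) should be recovered
# as rational functions of t.
phi = 1.1
t = math.tan(phi / 2)
print(t, math.sin(phi) / (1 + math.cos(phi)), (1 - math.cos(phi)) / math.sin(phi))
print(math.cos(phi), (1 - t * t) / (1 + t * t))
print(math.sin(phi), 2 * t / (1 + t * t))
```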
Then we have

{\displaystyle {\begin{aligned}&\sin \varphi ={\frac {2t}{1+t^{2}}},&&\cos \varphi ={\frac {1-t^{2}}{1+t^{2}}},\\[8pt]&\tan \varphi ={\frac {2t}{1-t^{2}}},&&\cot \varphi ={\frac {1-t^{2}}{2t}},\\[8pt]&\sec \varphi ={\frac {1+t^{2}}{1-t^{2}}},&&\csc \varphi ={\frac {1+t^{2}}{2t}},\end{aligned}}}

and

{\displaystyle e^{i\varphi }={\frac {1+it}{1-it}},\qquad e^{-i\varphi }={\frac {1-it}{1+it}}.}

Both this expression of e^{iφ} and the expression t = tan(φ/2) can be solved for φ. Equating these gives the arctangent in terms of the natural logarithm:

{\displaystyle \arctan t={\frac {-i}{2}}\ln {\frac {1+it}{1-it}}.}

In calculus, the tangent half-angle substitution is used to find antiderivatives of rational functions of sin φ and cos φ. Differentiating t = tan ½φ gives

{\displaystyle {\frac {dt}{d\varphi }}={\tfrac {1}{2}}\sec ^{2}{\tfrac {1}{2}}\varphi ={\tfrac {1}{2}}(1+\tan ^{2}{\tfrac {1}{2}}\varphi )={\tfrac {1}{2}}(1+t^{2})}

and thus

{\displaystyle d\varphi ={{2\,dt} \over {1+t^{2}}}.}

One can play an entirely analogous game with the hyperbolic functions. A point on (the right branch of) a hyperbola is given by (cosh ψ, sinh ψ).
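Before turning to the hyperbolic case: the substitution t = tan(φ/2), dφ = 2 dt/(1 + t²) turns a trigonometric integrand into a rational one. A small numerical check (our own helper, illustrative only): for ∫₀^{π/2} dφ/(1 + cos φ), the substitution gives 1 + cos φ = 2/(1 + t²), so the integrand becomes exactly 1 on t ∈ [0, 1], and the integral equals 1.

```python
import math

def riemann(f, a, b, n=20000):
    """Midpoint Riemann sum of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Original integral in phi
I_phi = riemann(lambda phi: 1 / (1 + math.cos(phi)), 0.0, math.pi / 2)

# After t = tan(phi/2): integrand is exactly 1, limits 0 .. tan(pi/4) = 1
I_t = riemann(lambda t: 1.0, 0.0, 1.0)

print(I_phi, I_t)
```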
Projecting this onto the y-axis from the center (−1, 0) gives the following:

{\displaystyle t=\tanh {\tfrac {1}{2}}\psi ={\frac {\sinh \psi }{\cosh \psi +1}}={\frac {\cosh \psi -1}{\sinh \psi }}}

with the identities

{\displaystyle {\begin{aligned}&\sinh \psi ={\frac {2t}{1-t^{2}}},&&\cosh \psi ={\frac {1+t^{2}}{1-t^{2}}},\\[8pt]&\tanh \psi ={\frac {2t}{1+t^{2}}},&&\coth \psi ={\frac {1+t^{2}}{2t}},\\[8pt]&\operatorname {sech} \,\psi ={\frac {1-t^{2}}{1+t^{2}}},&&\operatorname {csch} \,\psi ={\frac {1-t^{2}}{2t}},\end{aligned}}}

and

{\displaystyle e^{\psi }={\frac {1+t}{1-t}},\qquad e^{-\psi }={\frac {1-t}{1+t}}.}

Finding ψ in terms of t leads to the following relationship between the inverse hyperbolic tangent artanh and the natural logarithm:

{\displaystyle 2\operatorname {artanh} t=\ln {\frac {1+t}{1-t}}.}

The hyperbolic tangent half-angle substitution in calculus uses

{\displaystyle d\psi ={{2\,dt} \over {1-t^{2}}}\,.}

Comparing the hyperbolic identities to the circular ones, one notices that they involve the same functions of t, just permuted. If we identify the parameter t in both cases, we arrive at a relationship between the circular functions and the hyperbolic ones. That is, if

{\displaystyle t=\tan {\tfrac {1}{2}}\varphi =\tanh {\tfrac {1}{2}}\psi }

then

{\displaystyle \varphi =2\arctan {\bigl (}\tanh {\tfrac {1}{2}}\psi \,{\bigr )}\equiv \operatorname {gd} \psi ,}

where gd(ψ) is the Gudermannian function. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers. The above descriptions of the tangent half-angle formulae (projection of the unit circle and of the standard hyperbola onto the y-axis) give a geometric interpretation of this function.
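The Gudermannian relation is easy to verify against its other classical form, gd ψ = 2 arctan(eᵠ) − π/2 (a stdlib sketch; `gd` is our own helper, not a library function):

```python
import math

def gd(psi):
    """Gudermannian function via the shared half-angle parameter t."""
    return 2 * math.atan(math.tanh(psi / 2))

# Compare with the equivalent form 2*atan(exp(psi)) - pi/2
for psi in (-2.0, 0.0, 0.7, 3.0):
    print(gd(psi), 2 * math.atan(math.exp(psi)) - math.pi / 2)
```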
Starting with a Pythagorean triangle with side lengths a, b, and c that are positive integers and satisfy a² + b² = c², it follows immediately that each interior angle of the triangle has rational values for sine and cosine, because these are just ratios of side lengths. Thus each of these angles has a rational value for its half-angle tangent, using tan(φ/2) = sin φ / (1 + cos φ). The reverse is also true. If there are two positive angles that sum to 90°, each with a rational half-angle tangent, and the third angle is a right angle, then a triangle with these interior angles can be scaled to a Pythagorean triangle. If the third angle is not required to be a right angle, but is the angle that makes the three positive angles sum to 180°, then the third angle will necessarily have a rational number for its half-angle tangent when the first two do (using the angle addition and subtraction formulas for tangents) and the triangle can be scaled to a Heronian triangle.

Generally, if K is a subfield of the complex numbers, then tan(φ/2) ∈ K ∪ {∞} implies that {sin φ, cos φ, tan φ, sec φ, csc φ, cot φ} ⊆ K ∪ {∞}.
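For the (3, 4, 5) triangle this rationality can be computed exactly with Python's `Fraction` (a sketch; the variable names are ours):

```python
from fractions import Fraction

# For the 3-4-5 right triangle, the acute angles A and B have rational sine
# and cosine, so tan(A/2) = sin A / (1 + cos A) is rational as well.
a, b, c = 3, 4, 5
sinA, cosA = Fraction(a, c), Fraction(b, c)
sinB, cosB = Fraction(b, c), Fraction(a, c)

tA = sinA / (1 + cosA)   # tan(A/2) = 1/3
tB = sinB / (1 + cosB)   # tan(B/2) = 1/2
print(tA, tB)

# A/2 + B/2 = 45 degrees, so the tangent addition formula should give 1.
print((tA + tB) / (1 - tA * tB))
```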
https://en.wikipedia.org/wiki/Tangent_half-angle_formula
In mathematics, the values of the trigonometric functions can be expressed approximately, as in cos(π/4) ≈ 0.707, or exactly, as in cos(π/4) = √2/2. While trigonometric tables contain many approximate values, the exact values for certain angles can be expressed by a combination of arithmetic operations and square roots. The angles with trigonometric values that are expressible in this way are exactly those that can be constructed with a compass and straight edge, and the values are called constructible numbers.

The trigonometric functions of angles that are multiples of 15°, 18°, or 22.5° have simple algebraic values. These values are listed in the following table for angles from 0° to 45°[1][2] (see below for proofs). In the table below, the label "Undefined" represents a ratio 1:0. If the codomain of the trigonometric functions is taken to be the real numbers these entries are undefined, whereas if the codomain is taken to be the projectively extended real numbers, these entries take the value ∞ (see division by zero). For angles outside of this range, trigonometric values can be found by applying reflection and shift identities such as sin(θ) = sin(180° − θ) = −sin(θ − 180°).

A trigonometric number is a number that can be expressed as the sine or cosine of a rational multiple of π radians.[3] Since sin(x) = cos(x − π/2), the case of a sine can be omitted from this definition. Therefore any trigonometric number can be written as cos(2πk/n), where k and n are integers. This number can be thought of as the real part of the complex number cos(2πk/n) + i sin(2πk/n). De Moivre's formula shows that numbers of this form are roots of unity:

{\displaystyle {\bigl (}\cos(2\pi k/n)+i\sin(2\pi k/n){\bigr )}^{n}=\cos(2\pi k)+i\sin(2\pi k)=1.}

Since such a root of unity is a root of the polynomial xⁿ − 1, it is algebraic.
Since the trigonometric number is the average of the root of unity and its complex conjugate, and algebraic numbers are closed under arithmetic operations, every trigonometric number is algebraic.[3] The minimal polynomials of trigonometric numbers can be explicitly enumerated.[4] In contrast, by the Lindemann–Weierstrass theorem, the sine or cosine of any non-zero algebraic number is always transcendental.[5]

The real part of any root of unity is a trigonometric number. By Niven's theorem, the only rational trigonometric numbers are 0, 1, −1, 1/2, and −1/2.[6]

An angle can be constructed with a compass and straightedge if and only if its sine (or equivalently cosine) can be expressed by a combination of arithmetic operations and square roots applied to integers.[7] Additionally, an angle that is a rational multiple of π radians is constructible if and only if, when it is expressed as aπ/b radians, where a and b are relatively prime integers, the prime factorization of the denominator, b, is the product of some power of two and any number of distinct Fermat primes (a Fermat prime is a prime number one greater than a power of two).[8]

Thus, for example, 2π/15 = 24° is a constructible angle because 15 is the product of the Fermat primes 3 and 5. Similarly π/12 = 15° is a constructible angle because 12 is a power of two (4) times a Fermat prime (3). But π/9 = 20° is not a constructible angle, since 9 = 3⋅3 is not the product of distinct Fermat primes, as it contains 3 as a factor twice; and neither is π/7 ≈ 25.714°, since 7 is not a Fermat prime.[9]

It results from the above characterisation that an angle of an integer number of degrees is constructible if and only if this number of degrees is a multiple of 3.
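The denominator test is mechanical enough to code directly. The sketch below (our own helper; it relies on the fact that 3, 5, 17, 257, and 65537 are the only known Fermat primes) checks whether aπ/b in lowest terms is constructible:

```python
FERMAT_PRIMES = {3, 5, 17, 257, 65537}  # the only known Fermat primes

def constructible(b):
    """Is the angle a*pi/b (with gcd(a, b) = 1) constructible?
    True iff b is a power of two times a product of distinct Fermat primes."""
    while b % 2 == 0:
        b //= 2
    for p in FERMAT_PRIMES:
        if b % p == 0:
            b //= p
            if b % p == 0:       # a repeated Fermat prime factor is not allowed
                return False
    return b == 1

# 15 = 3*5 works, 9 = 3*3 repeats a prime, 7 is not a Fermat prime
print([b for b in range(1, 20) if constructible(b)])
```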
From a reflection identity, cos(45°) = sin(90° − 45°) = sin(45°). Substituting into the Pythagorean trigonometric identity sin²(45°) + cos²(45°) = 1, one obtains the minimal polynomial 2 sin²(45°) − 1 = 0. Taking the positive root, one finds sin(45°) = cos(45°) = 1/√2 = √2/2.

A geometric way of deriving the sine or cosine of 45° is by considering an isosceles right triangle with leg length 1. Since two of the angles in an isosceles triangle are equal, if the remaining angle is 90° for a right triangle, then the two equal angles are each 45°. Then by the Pythagorean theorem, the length of the hypotenuse of such a triangle is √2. Scaling the triangle so that its hypotenuse has a length of 1 divides the lengths by √2, giving the same value for the sine or cosine of 45° given above.

The values of sine and cosine of 30 and 60 degrees are derived by analysis of the equilateral triangle. In an equilateral triangle, the 3 angles are equal and sum to 180°, therefore each corner angle is 60°. Bisecting one corner, the special right triangle with angles 30-60-90 is obtained. By symmetry, the bisected side is half of the side of the equilateral triangle, so one concludes sin(30°) = 1/2. The Pythagorean and reflection identities then give sin(60°) = cos(30°) = √(1 − (1/2)²) = √3/2.
The value of sin(18°) may be derived using the multiple angle formulas for sine and cosine.[10] By the double angle formula for sine:

sin(36°) = 2 sin(18°) cos(18°).

By the triple angle formula for cosine:

cos(54°) = 4 cos³(18°) − 3 cos(18°).

Since sin(36°) = cos(54°), we equate these two expressions and cancel a factor of cos(18°):

2 sin(18°) = 4 cos²(18°) − 3 = 1 − 4 sin²(18°).

This quadratic equation has only one positive root:

sin(18°) = (√5 − 1)/4.

The Pythagorean identity then gives cos(18°), and the double and triple angle formulas give sine and cosine of 36°, 54°, and 72°. Then cos(36°) = (√5 + 1)/4 = φ/2, where φ is the golden ratio.

The sines and cosines of all other angles between 0 and 90° that are multiples of 3° can be derived from the angles described above and the sum and difference formulas.[11] For example, since 24° = 60° − 36°, its cosine can be derived by the cosine difference formula:

cos(24°) = cos(60°) cos(36°) + sin(60°) sin(36°).

If the denominator, b, is multiplied by additional factors of 2, the sine and cosine can be derived with the half-angle formulas. For example, 22.5° (π/8 rad) is half of 45°, so its sine and cosine are:[12]

sin(22.5°) = √(2 − √2)/2,  cos(22.5°) = √(2 + √2)/2.

Repeated application of the half-angle formulas leads to nested radicals, specifically nested square roots of 2 of the form √(2 ± ⋯). In general, the sine and cosine of most angles of the form β/2ⁿ can be expressed using nested square roots of 2 in terms of β.
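These closed forms are easy to confirm numerically (a stdlib sketch):

```python
import math

# Positive root of 4x^2 + 2x - 1 = 0, the quadratic satisfied by sin(18 deg)
s18 = (math.sqrt(5) - 1) / 4
print(s18, math.sin(math.radians(18)))

# cos(36 deg) equals half the golden ratio
phi = (1 + math.sqrt(5)) / 2
print(math.cos(math.radians(36)), phi / 2)

# Half-angle formula: sin(22.5 deg) = sqrt(2 - sqrt(2)) / 2
print(math.sqrt(2 - math.sqrt(2)) / 2, math.sin(math.radians(22.5)))
```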
Specifically, if one can write an angle as

{\displaystyle \alpha =\pi \left({\frac {1}{2}}-\sum _{i=1}^{k}{\frac {\prod _{j=1}^{i}b_{j}}{2^{i+1}}}\right)=\pi \left({\frac {1}{2}}-{\frac {b_{1}}{4}}-{\frac {b_{1}b_{2}}{8}}-{\frac {b_{1}b_{2}b_{3}}{16}}-\ldots -{\frac {b_{1}b_{2}\ldots b_{k}}{2^{k+1}}}\right)}

where b_k ∈ [−2, 2] and b_i is −1, 0, or 1 for i < k, then[13]

{\displaystyle \cos(\alpha )={\frac {b_{1}}{2}}{\sqrt {2+b_{2}{\sqrt {2+b_{3}{\sqrt {2+\ldots +b_{k-1}{\sqrt {2+2\sin \left({\frac {\pi b_{k}}{4}}\right)}}}}}}}}}

and if b₁ ≠ 0 then[13]

{\displaystyle \sin(\alpha )={\frac {1}{2}}{\sqrt {2-b_{2}{\sqrt {2+b_{3}{\sqrt {2+b_{4}{\sqrt {2+\ldots +b_{k-1}{\sqrt {2+2\sin \left({\frac {\pi b_{k}}{4}}\right)}}}}}}}}}}}

For example, 13π/32 = π(1/2 − 1/4 + 1/8 + 1/16 − 1/32), so one has (b₁, b₂, b₃, b₄) = (1, −1, 1, −1) and obtains:

{\displaystyle \cos \left({\frac {13\pi }{32}}\right)={\frac {1}{2}}{\sqrt {2-{\sqrt {2+{\sqrt {2+2\sin \left({\frac {-\pi }{4}}\right)}}}}}}={\frac {1}{2}}{\sqrt {2-{\sqrt {2+{\sqrt {2-{\sqrt {2}}}}}}}}}

{\displaystyle \sin \left({\frac {13\pi }{32}}\right)={\frac {1}{2}}{\sqrt {2+{\sqrt {2+{\sqrt {2-{\sqrt {2}}}}}}}}}

Since 17 is a Fermat prime, a regular 17-gon is constructible, which means that the sines and cosines of angles such as 2π/17 radians can be expressed in terms of square roots.
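The 13π/32 example checks out numerically (a short sketch):

```python
import math

s2 = math.sqrt
theta = 13 * math.pi / 32

# Nested-radical expressions from the worked example above
cos_val = 0.5 * s2(2 - s2(2 + s2(2 - s2(2))))
sin_val = 0.5 * s2(2 + s2(2 + s2(2 - s2(2))))

print(cos_val, math.cos(theta))
print(sin_val, math.sin(theta))
```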
In particular, in 1796, Carl Friedrich Gauss showed that:[14][15]

{\displaystyle 16\cos {\frac {2\pi }{17}}=-1+{\sqrt {17}}+{\sqrt {34-2{\sqrt {17}}}}+2{\sqrt {17+3{\sqrt {17}}-{\sqrt {34-2{\sqrt {17}}}}-2{\sqrt {34+2{\sqrt {17}}}}}}.}

The sines and cosines of other constructible angles of the form k·2ⁿπ/17 (for integers k, n) can be derived from this one.

As discussed in § Constructibility, only certain angles that are rational multiples of π radians have trigonometric values that can be expressed with square roots. The angle 1°, being π/180 = π/(2²·3²·5) radians, has a repeated factor of 3 in the denominator and therefore sin(1°) cannot be expressed using only square roots. A related question is whether it can be expressed using cube roots. The following two approaches can be used, but both result in an expression that involves the cube root of a complex number.

Using the triple-angle identity, we can identify sin(1°) as a root x of the cubic equation sin(3°) = 3x − 4x³. The three roots of this polynomial are sin(1°), sin(59°), and −sin(61°). Since sin(3°) is constructible, an expression for it could be plugged into Cardano's formula to yield an expression for sin(1°). However, since all three roots of the cubic are real, this is an instance of casus irreducibilis, and the expression would require taking the cube root of a complex number.[16][17]

Alternatively, by De Moivre's formula:

(cos(1°) ± i sin(1°))³ = cos(3°) ± i sin(3°).

Taking cube roots and adding or subtracting the equations, we have:[17]

{\displaystyle \sin(1^{\circ })={\frac {1}{2i}}\left({\sqrt[{3}]{\cos(3^{\circ })+i\sin(3^{\circ })}}-{\sqrt[{3}]{\cos(3^{\circ })-i\sin(3^{\circ })}}\right).}
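The cube root of a complex number is something Python's `cmath` handles directly (taking the principal branch), so sin(1°) can be recovered numerically from De Moivre's formula (a sketch):

```python
import cmath
import math

# De Moivre: (cos 1deg + i sin 1deg)^3 = cos 3deg + i sin 3deg, so the
# principal cube root of e^(i * 3deg) recovers cos(1deg) + i sin(1deg).
z = cmath.exp(1j * math.radians(3))
w = z ** (1 / 3)            # principal cube root of a complex number

print(w.imag, math.sin(math.radians(1)))
print(w.real, math.cos(math.radians(1)))
```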
https://en.wikipedia.org/wiki/Trigonometric_number
Trigonometry (from Ancient Greek τρίγωνον (trígōnon) 'triangle' and μέτρον (métron) 'measure')[1] is a branch of mathematics concerned with relationships between angles and side lengths of triangles. In particular, the trigonometric functions relate the angles of a right triangle with ratios of its side lengths. The field emerged in the Hellenistic world during the 3rd century BC from applications of geometry to astronomical studies.[2] The Greeks focused on the calculation of chords, while mathematicians in India created the earliest-known tables of values for trigonometric ratios (also called trigonometric functions) such as sine.[3]

Throughout history, trigonometry has been applied in areas such as geodesy, surveying, celestial mechanics, and navigation.[4]

Trigonometry is known for its many identities. These trigonometric identities[5] are commonly used for rewriting trigonometric expressions with the aim to simplify an expression, to find a more useful form of an expression, or to solve an equation.[6]

Sumerian astronomers studied angle measure, using a division of circles into 360 degrees.[8] They, and later the Babylonians, studied the ratios of the sides of similar triangles and discovered some properties of these ratios but did not turn that into a systematic method for finding sides and angles of triangles. The ancient Nubians used a similar method.[9]

In the 3rd century BC, Hellenistic mathematicians such as Euclid and Archimedes studied the properties of chords and inscribed angles in circles, and they proved theorems that are equivalent to modern trigonometric formulae, although they presented them geometrically rather than algebraically.
In 140 BC, Hipparchus (from Nicaea, Asia Minor) gave the first tables of chords, analogous to modern tables of sine values, and used them to solve problems in trigonometry and spherical trigonometry.[10] In the 2nd century AD, the Greco-Egyptian astronomer Ptolemy (from Alexandria, Egypt) constructed detailed trigonometric tables (Ptolemy's table of chords) in Book 1, chapter 11 of his Almagest.[11] Ptolemy used chord length to define his trigonometric functions, a minor difference from the sine convention we use today.[12] (The value we call sin(θ) can be found by looking up the chord length for twice the angle of interest (2θ) in Ptolemy's table, and then dividing that value by two.) Centuries passed before more detailed tables were produced, and Ptolemy's treatise remained in use for performing trigonometric calculations in astronomy throughout the next 1200 years in the medieval Byzantine, Islamic, and, later, Western European worlds.

The modern definition of the sine is first attested in the Surya Siddhanta, and its properties were further documented in the 5th century (AD) by Indian mathematician and astronomer Aryabhata.[13] These Greek and Indian works were translated and expanded by medieval Islamic mathematicians.
In 830 AD, Persian mathematician Habash al-Hasib al-Marwazi produced the first table of cotangents.[14][15] By the 10th century AD, in the work of Persian mathematician Abū al-Wafā' al-Būzjānī, all six trigonometric functions were used.[16] Abu al-Wafa had sine tables in 0.25° increments, to 8 decimal places of accuracy, and accurate tables of tangent values.[16] He also made important innovations in spherical trigonometry.[17][18][19] The Persian polymath Nasir al-Din al-Tusi has been described as the creator of trigonometry as a mathematical discipline in its own right.[20][21][22] He was the first to treat trigonometry as a mathematical discipline independent from astronomy, and he developed spherical trigonometry into its present form.[15] He listed the six distinct cases of a right-angled triangle in spherical trigonometry, and in his On the Sector Figure, he stated the law of sines for plane and spherical triangles, discovered the law of tangents for spherical triangles, and provided proofs for both these laws.[23] Knowledge of trigonometric functions and methods reached Western Europe via Latin translations of Ptolemy's Greek Almagest as well as the works of Persian and Arab astronomers such as Al Battani and Nasir al-Din al-Tusi.[24] One of the earliest works on trigonometry by a northern European mathematician is De Triangulis by the 15th-century German mathematician Regiomontanus, who was encouraged to write, and provided with a copy of the Almagest, by the Byzantine Greek scholar cardinal Basilios Bessarion, with whom he lived for several years.[25] At the same time, another translation of the Almagest from Greek into Latin was completed by the Cretan George of Trebizond.[26] Trigonometry was still so little known in 16th-century northern Europe that Nicolaus Copernicus devoted two chapters of De revolutionibus orbium coelestium to explain its basic concepts.
Driven by the demands of navigation and the growing need for accurate maps of large geographic areas, trigonometry grew into a major branch of mathematics.[27] Bartholomaeus Pitiscus was the first to use the word, publishing his Trigonometria in 1595.[28] Gemma Frisius described for the first time the method of triangulation still used today in surveying. It was Leonhard Euler who fully incorporated complex numbers into trigonometry. The works of the Scottish mathematicians James Gregory in the 17th century and Colin Maclaurin in the 18th century were influential in the development of trigonometric series.[29] Also in the 18th century, Brook Taylor defined the general Taylor series.[30]

Trigonometric ratios are the ratios between edges of a right triangle. These ratios depend only on one acute angle of the right triangle, since any two right triangles with the same acute angle are similar.[31]

So, these ratios define functions of this angle that are called trigonometric functions. Explicitly, they are defined below as functions of the known angle A, where a, b and h refer to the lengths of the sides in the accompanying figure. In the following definitions, the hypotenuse is the side opposite to the 90-degree angle in a right triangle; it is the longest side of the triangle and one of the two sides adjacent to angle A. The adjacent leg is the other side that is adjacent to angle A. The opposite side is the side that is opposite to angle A. The terms perpendicular and base are sometimes used for the opposite and adjacent sides respectively. See below under Mnemonics.
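The ratio definitions can be checked against the library trigonometric functions for a concrete right triangle (a sketch; the 3-4-5 side lengths are example values, not from the figure):

```python
import math

# Right-triangle ratios for acute angle A: opposite a = 3, adjacent b = 4,
# hypotenuse h = 5 (example values forming a right triangle).
a, b, h = 3.0, 4.0, 5.0
sinA, cosA, tanA = a / h, b / h, a / b

A = math.atan2(a, b)   # the angle whose opposite/adjacent ratio is a/b
print(sinA, math.sin(A))
print(cosA, math.cos(A))
print(tanA, math.tan(A))
```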
The reciprocals of these ratios are named the cosecant (csc), secant (sec), and cotangent (cot), respectively:

The cosine, cotangent, and cosecant are so named because they are respectively the sine, tangent, and secant of the complementary angle, abbreviated to "co-".[32]

With these functions, one can answer virtually all questions about arbitrary triangles by using the law of sines and the law of cosines.[33] These laws can be used to compute the remaining angles and sides of any triangle as soon as two sides and their included angle, or two angles and a side, or three sides are known.

A common use of mnemonics is to remember facts and relationships in trigonometry. For example, the sine, cosine, and tangent ratios in a right triangle can be remembered by representing them and their corresponding sides as strings of letters. For instance, a mnemonic is SOH-CAH-TOA:[34]

One way to remember the letters is to sound them out phonetically (i.e. /ˌsoʊkəˈtoʊə/ SOH-kə-TOH-ə, similar to Krakatoa).[35] Another method is to expand the letters into a sentence, such as "Some Old Hippie Caught Another Hippie Trippin' On Acid".[36]

Trigonometric ratios can also be represented using the unit circle, which is the circle of radius 1 centered at the origin in the plane.[37] In this setting, the terminal side of an angle A placed in standard position will intersect the unit circle in a point (x, y), where x = cos A and y = sin A.[37] This representation allows for the calculation of commonly found trigonometric values, such as those in the following table:[38]

Using the unit circle, one can extend the definitions of trigonometric ratios to all positive and negative arguments[39] (see trigonometric function). The following table summarizes the properties of the graphs of the six main trigonometric functions:[40][41]

Because the six main trigonometric functions are periodic, they are not injective (one-to-one), and thus are not invertible.
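The side-angle-side case mentioned above can be solved directly with the two laws (a sketch; the side lengths and angle are example values):

```python
import math

# Given two sides and the included angle (SAS), solve the triangle:
# law of cosines for the third side, then law of sines for a second angle.
b, c, A = 5.0, 7.0, math.radians(49.0)   # example values

a = math.sqrt(b * b + c * c - 2 * b * c * math.cos(A))   # law of cosines
B = math.asin(b * math.sin(A) / a)   # law of sines; valid since b < c, so B is acute
C = math.pi - A - B                  # angles of a triangle sum to pi

print(a, math.degrees(B), math.degrees(C))
```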
By restricting the domain of a trigonometric function, however, they can be made invertible.[42]: 48ff  The names of the inverse trigonometric functions, together with their domains and range, can be found in the following table:[42]: 48ff [43]: 521ff

When considered as functions of a real variable, the trigonometric ratios can be represented by an infinite series. For instance, sine and cosine have the following representations:[44]

sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯
cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯

With these definitions the trigonometric functions can be defined for complex numbers.[45] When extended as functions of real or complex variables, the following formula holds for the complex exponential:

e^{ix} = cos x + i sin x.

This complex exponential function, written in terms of trigonometric functions, is particularly useful.[46][47]

Trigonometric functions were among the earliest uses for mathematical tables.[48] Such tables were incorporated into mathematics textbooks and students were taught to look up values and how to interpolate between the values listed to get higher accuracy.[49] Slide rules had special scales for trigonometric functions.[50]

Scientific calculators have buttons for calculating the main trigonometric functions (sin, cos, tan, and sometimes cis and their inverses).[51] Most allow a choice of angle measurement methods: degrees, radians, and sometimes gradians. Most computer programming languages provide function libraries that include the trigonometric functions.[52] The floating point unit hardware incorporated into the microprocessor chips used in most personal computers has built-in instructions for calculating trigonometric functions.[53]

In addition to the six ratios listed earlier, there are additional trigonometric functions that were historically important, though seldom used today.
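The series and the complex exponential can both be checked numerically (a sketch; `sin_series` and `cos_series` are our own partial-sum helpers):

```python
import cmath
import math

def sin_series(x, terms=10):
    """Partial Maclaurin sum: x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def cos_series(x, terms=10):
    """Partial Maclaurin sum: 1 - x^2/2! + x^4/4! - ..."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

x = 1.2
print(sin_series(x), math.sin(x))
print(cos_series(x), math.cos(x))

# Euler's formula ties the complex exponential to sine and cosine
print(cmath.exp(1j * x), complex(math.cos(x), math.sin(x)))
```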
These include the chord (crd(θ) = 2 sin(θ/2)), the versine (versin(θ) = 1 − cos(θ) = 2 sin²(θ/2)), which appeared in the earliest tables,[54] the coversine (coversin(θ) = 1 − sin(θ) = versin(π/2 − θ)), the haversine (haversin(θ) = 1/2 versin(θ) = sin²(θ/2)),[55] the exsecant (exsec(θ) = sec(θ) − 1), and the excosecant (excsc(θ) = exsec(π/2 − θ) = csc(θ) − 1). See List of trigonometric identities for more relations between these functions. For centuries, spherical trigonometry has been used for locating solar, lunar, and stellar positions,[56] predicting eclipses, and describing the orbits of the planets.[57] In modern times, the technique of triangulation is used in astronomy to measure the distance to nearby stars,[58] as well as in satellite navigation systems.[19] Historically, trigonometry has been used for locating latitudes and longitudes of sailing vessels, plotting courses, and calculating distances during navigation.[59] Trigonometry is still used in navigation through such means as the Global Positioning System and artificial intelligence for autonomous vehicles.[60] In land surveying, trigonometry is used in the calculation of lengths, areas, and relative angles between objects.[61] On a larger scale, trigonometry is used in geography to measure distances between landmarks.[62] The sine and cosine functions are fundamental to the theory of periodic functions,[63] such as those that describe sound and light waves. Fourier discovered that every continuous, periodic function could be described as an infinite sum of trigonometric functions. Even non-periodic functions can be represented as an integral of sines and cosines through the Fourier transform. This has applications to quantum mechanics[64] and communications,[65] among other fields.
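The historical functions listed above are all simple combinations of sine and cosine, so their defining identities can be checked numerically. A sketch, with the definitions taken directly from the text:

```python
import math

# The historical trigonometric functions, defined via sine and cosine
# exactly as stated above.
def crd(t):      return 2 * math.sin(t / 2)      # chord
def versin(t):   return 1 - math.cos(t)          # versine
def coversin(t): return 1 - math.sin(t)          # coversine
def haversin(t): return versin(t) / 2            # haversine
def exsec(t):    return 1 / math.cos(t) - 1      # exsecant
def excsc(t):    return 1 / math.sin(t) - 1      # excosecant
```

Evaluating at an arbitrary angle confirms, for instance, that versin(θ) = 2 sin²(θ/2) and that coversin(θ) = versin(π/2 − θ).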
Trigonometry is useful in many physical sciences,[66] including acoustics[67] and optics.[67] In these areas, trigonometric functions are used to describe sound and light waves, and to solve boundary- and transmission-related problems.[68] Other fields that use trigonometry or trigonometric functions include music theory,[69] geodesy, audio synthesis,[70] architecture,[71] electronics,[69] biology,[72] medical imaging (CT scans and ultrasound),[73] chemistry,[74] number theory (and hence cryptology),[75] seismology,[67] meteorology,[76] oceanography,[77] image compression,[78] phonetics,[79] economics,[80] electrical engineering, mechanical engineering, civil engineering,[69] computer graphics,[81] cartography,[69] crystallography,[82] and game development.[81] Trigonometry has been noted for its many identities, that is, equations that are true for all possible inputs.[83] Identities involving only angles are known as trigonometric identities. Other equations, known as triangle identities,[84] relate both the sides and angles of a given triangle. In the following identities, A, B, and C are the angles of a triangle and a, b, and c are the lengths of the sides of the triangle opposite the respective angles (as shown in the diagram).
The law of sines (also known as the "sine rule") for an arbitrary triangle states:[85]

a/sin A = b/sin B = c/sin C = 2R = abc/(2Δ),

where Δ is the area of the triangle and R is the radius of the circumscribed circle of the triangle. The law of cosines (known as the cosine formula, or the "cos rule") is an extension of the Pythagorean theorem to arbitrary triangles:[85]

c² = a² + b² − 2ab cos C,

or equivalently:

cos C = (a² + b² − c²) / (2ab).

The law of tangents, developed by François Viète, is an alternative to the law of cosines when solving for the unknown edges of a triangle, providing simpler computations when using trigonometric tables.[86] It is given by:

(a − b)/(a + b) = tan((A − B)/2) / tan((A + B)/2).

Given two sides a and b and the angle between the sides C, the area of the triangle is given by half the product of the lengths of two sides and the sine of the angle between the two sides:[85]

Δ = (1/2) ab sin C.

The following trigonometric identities are related to the Pythagorean theorem and hold for any value:[87]

sin²A + cos²A = 1,  tan²A + 1 = sec²A,  1 + cot²A = csc²A.

The second and third equations are derived from dividing the first equation by cos²A and sin²A, respectively. Euler's formula, which states that e^(ix) = cos x + i sin x, produces the following analytical identities for sine, cosine, and tangent in terms of e and the imaginary unit i:

sin x = (e^(ix) − e^(−ix)) / (2i),  cos x = (e^(ix) + e^(−ix)) / 2,  tan x = (e^(ix) − e^(−ix)) / (i(e^(ix) + e^(−ix))).

Other commonly used trigonometric identities include the half-angle identities, the angle sum and difference identities, and the product-to-sum identities.[31]
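These triangle identities can be checked against one another on a concrete triangle. A sketch using a 5-6-7 triangle, with the angles recovered from the law of cosines:

```python
import math

# Sample triangle with sides 5, 6, 7; angles via the law of cosines.
a, b, c = 5.0, 6.0, 7.0
A = math.acos((b * b + c * c - a * a) / (2 * b * c))
B = math.acos((a * a + c * c - b * b) / (2 * a * c))
C = math.pi - A - B

area = 0.5 * a * b * math.sin(C)   # area = (1/2) ab sin C
R = a / (2 * math.sin(A))          # circumradius from the law of sines
```

The three ratios a/sin A, b/sin B, c/sin C all equal 2R, and the area agrees with Heron's formula, which for sides 5, 6, 7 gives √216 = 6√6.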
https://en.wikipedia.org/wiki/Trigonometry
Amongst the lay public of non-mathematicians and non-scientists, trigonometry is known chiefly for its application to measurement problems, yet it is also often used in ways that are far more subtle, such as its place in the theory of music; still other uses are more technical, such as in number theory. The mathematical topics of Fourier series and Fourier transforms rely heavily on knowledge of trigonometric functions and find application in a number of areas, including statistics. In Chapter XI of The Age of Reason, the American revolutionary and Enlightenment thinker Thomas Paine wrote:[1] From 1802 until 1871, the Great Trigonometrical Survey was a project to survey the Indian subcontinent with high precision. Starting from the coastal baseline, mathematicians and geographers triangulated vast distances across the country. One of the key achievements was measuring the height of the Himalayan mountains and determining that Mount Everest is the highest point on Earth.[2] For the 25 years preceding the invention of the logarithm in 1614, prosthaphaeresis was the only known generally applicable way of approximating products quickly. It used the identities for the trigonometric functions of sums and differences of angles in terms of the products of trigonometric functions of those angles.[3] Scientific fields that make use of trigonometry include: That these fields involve trigonometry does not mean knowledge of trigonometry is needed in order to learn anything about them. It does mean that some things in these fields cannot be understood without trigonometry. For example, a professor of music may perhaps know nothing of mathematics, but would probably know that Pythagoras was the earliest known contributor to the mathematical theory of music. In some of the fields of endeavor listed above it is easy to imagine how trigonometry could be used.
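Prosthaphaeresis can be illustrated with the product-to-sum identity cos a · cos b = [cos(a − b) + cos(a + b)]/2: a product is reduced to two table lookups, an addition, a subtraction, and a halving. A minimal sketch for factors in (0, 1], where arccos plays the role of the reverse table lookup (the function name is invented for the example):

```python
import math

def prosthaphaeresis_product(x, y):
    """Multiply x and y (both in (0, 1]) using only cosine 'lookups',
    addition/subtraction, and halving, via
        cos(a) cos(b) = [cos(a - b) + cos(a + b)] / 2."""
    a = math.acos(x)   # reverse lookup: the angle whose cosine is x
    b = math.acos(y)
    return (math.cos(a - b) + math.cos(a + b)) / 2

p = prosthaphaeresis_product(0.37, 0.82)
```

In practice the historical method handled arbitrary magnitudes by shifting decimal points before and after, since the identity itself only needs values in the cosine's range.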
For example, in navigation and land surveying, the occasions for the use of trigonometry are in at least some cases simple enough that they can be described in a beginning trigonometry textbook. In the case of music theory, the application of trigonometry is related to work begun by Pythagoras, who observed that the sounds made by plucking two strings of different lengths are consonant if both lengths are small integer multiples of a common length.[4] The resemblance between the shape of a vibrating string and the graph of the sine function is no mere coincidence. In oceanography, the resemblance between the shapes of some waves and the graph of the sine function is also not coincidental. In some other fields, among them climatology, biology, and economics, there are seasonal periodicities. The study of these often involves the periodic nature of the sine and cosine functions. Many fields make use of trigonometry in more advanced ways than can be discussed in a single article. Often those involve what are called Fourier series, after the 18th- and 19th-century French mathematician and physicist Joseph Fourier. Fourier series have a surprisingly diverse array of applications in many scientific fields, in particular in all of the phenomena involving seasonal periodicities mentioned above, in wave motion, and hence in the study of radiation, of acoustics, of seismology, of the modulation of radio waves in electronics, and of electric power engineering.[5] A Fourier series is a sum of this form:

□ + □ cos x + □ sin x + □ cos 2x + □ sin 2x + □ cos 3x + □ sin 3x + ⋯,

where each of the squares (□) is a different number, and one is adding infinitely many terms. Fourier used these for studying heat flow and diffusion (diffusion is the process whereby, when you drop a sugar cube into a gallon of water, the sugar gradually spreads through the water, a pollutant spreads through the air, or any dissolved substance spreads through any fluid).
Fourier series are also applicable to subjects whose connection with wave motion is far from obvious. One ubiquitous example is digital compression, whereby images, audio, and video data are compressed into a much smaller size which makes their transmission feasible over telephone, internet, and broadcast networks. Another example, mentioned above, is diffusion. Among others are: the geometry of numbers, isoperimetric problems, recurrence of random walks, quadratic reciprocity, the central limit theorem, and Heisenberg's inequality. A more abstract concept than Fourier series is the idea of the Fourier transform. Fourier transforms involve integrals rather than sums, and are used in a similarly diverse array of scientific fields. Many natural laws are expressed by relating rates of change of quantities to the quantities themselves. For example: the rate of population change is sometimes jointly proportional to (1) the present population and (2) the amount by which the present population falls short of the carrying capacity. This kind of relationship is called a differential equation. If, given this information, one tries to express population as a function of time, one is trying to "solve" the differential equation. Fourier transforms may be used to convert some differential equations to algebraic equations for which methods of solving them are known. Fourier transforms have many uses. In almost any scientific context in which the words spectrum, harmonic, or resonance are encountered, Fourier transforms or Fourier series are nearby. Intelligence quotients are sometimes held to be distributed according to a bell-shaped curve.[6] About 40% of the area under the curve is in the interval from 100 to 120; correspondingly, about 40% of the population scores between 100 and 120 on IQ tests. Nearly 9% of the area under the curve is in the interval from 120 to 140; correspondingly, about 9% of the population scores between 120 and 140 on IQ tests, etc.
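The quoted areas under the bell-shaped curve can be reproduced numerically. A sketch assuming the common IQ convention of mean 100 and standard deviation 15 (an assumption, not stated in the text), with the cumulative distribution written via the error function:

```python
import math

def normal_cdf(x, mu=100.0, sigma=15.0):
    """Cumulative distribution of a normal (bell-shaped) curve,
    expressed with the error function erf."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

share_100_120 = normal_cdf(120) - normal_cdf(100)   # roughly 40%
share_120_140 = normal_cdf(140) - normal_cdf(120)   # roughly 9%
```

With these parameters the first interval covers about 41% of the area and the second about 8.7%, matching the figures quoted above.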
Similarly, many other things are distributed according to the "bell-shaped curve", including measurement errors in many physical measurements. Why the ubiquity of the "bell-shaped curve"? There is a theoretical reason for this, and it involves Fourier transforms and hence trigonometric functions. That is one of a variety of applications of Fourier transforms to statistics. Trigonometric functions are also applied when statisticians study seasonal periodicities, which are often represented by Fourier series. There is a hint of a connection between trigonometry and number theory. Loosely speaking, one could say that number theory deals with qualitative properties rather than quantitative properties of numbers. Consider the fractions with denominator 42: 1/42, 2/42, 3/42, …, 41/42. Discard the ones that are not in lowest terms; keep only those that are in lowest terms: 1/42, 5/42, 11/42, 13/42, …, 41/42. Then bring in trigonometry: add up the values of cos(2πk/42) over the numerators k that remain. The value of the sum is −1, because 42 has an odd number of prime factors and none of them is repeated: 42 = 2 × 3 × 7. (If there had been an even number of non-repeated factors then the sum would have been 1; if there had been any repeated prime factors (e.g., 60 = 2 × 2 × 3 × 5) then the sum would have been 0; the sum is the Möbius function evaluated at 42.) This hints at the possibility of applying Fourier analysis to number theory. Various types of equations can be solved using trigonometry. For example, a linear difference equation or linear differential equation with constant coefficients has solutions expressed in terms of the eigenvalues of its characteristic equation; if some of the eigenvalues are complex, the complex terms can be replaced by trigonometric functions of real terms, showing that the dynamic variable exhibits oscillations. Similarly, cubic equations with three real solutions have an algebraic solution that is unhelpful in that it contains cube roots of complex numbers; again an alternative solution exists in terms of trigonometric functions of real terms.
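The cosine sum described above is a Ramanujan sum, and it can be computed for any denominator. A sketch that recovers the Möbius function from cosines alone (the function name is invented for the example):

```python
import math

def mobius_via_cosines(n):
    """Sum cos(2*pi*k/n) over the numerators k of the lowest-terms
    fractions k/n with 0 < k < n.  The result is the Moebius function
    mu(n): the imaginary parts of e^(2*pi*i*k/n) cancel in pairs."""
    total = sum(math.cos(2 * math.pi * k / n)
                for k in range(1, n) if math.gcd(k, n) == 1)
    return round(total)

# 42 = 2*3*7 (odd number of distinct primes), 60 has a repeated factor,
# 35 = 5*7 (even number of distinct primes).
values = [mobius_via_cosines(n) for n in (42, 60, 35)]
```

The three cases give −1, 0, and 1 respectively, exactly as the parenthetical rule in the text predicts.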
https://en.wikipedia.org/wiki/Uses_of_trigonometry
The versine or versed sine is a trigonometric function found in some of the earliest (Sanskrit Aryabhatia,[1] Section I) trigonometric tables. The versine of an angle is 1 minus its cosine. There are several related functions, most notably the coversine and haversine. The latter, half a versine, is of particular importance in the haversine formula of navigation. The versine[3][4][5][6][7] or versed sine[8][9][10][11][12] is a trigonometric function already appearing in some of the earliest trigonometric tables. It is symbolized in formulas using the abbreviations versin, sinver,[13][14] vers, or siv.[15][16] In Latin, it is known as the sinus versus (flipped sine), versinus, versus, or sagitta (arrow).[17] Expressed in terms of the common trigonometric functions sine, cosine, and tangent, the versine is equal to

versin θ = 1 − cos θ = 2 sin²(θ/2) = sin θ · tan(θ/2).

There are several related functions corresponding to the versine: Special tables were also made of half of the versed sine, because of its particular use in the haversine formula used historically in navigation:

hav θ = sin²(θ/2) = (1 − cos θ)/2.

The ordinary sine function (see note on etymology) was sometimes historically called the sinus rectus ("straight sine"), to contrast it with the versed sine (sinus versus).[31] The meaning of these terms is apparent if one looks at the functions in the original context for their definition, a unit circle: For a vertical chord AB of the unit circle, the sine of the angle θ (representing half of the subtended angle Δ) is the distance AC (half of the chord). On the other hand, the versed sine of θ is the distance CD from the center of the chord to the center of the arc. Thus, the sum of cos(θ) (equal to the length of line OC) and versin(θ) (equal to the length of line CD) is the radius OD (with length 1).
Illustrated this way, the sine is vertical (rectus, literally "straight") while the versine is horizontal (versus, literally "turned against, out-of-place"); both are distances from C to the circle. This figure also illustrates the reason why the versine was sometimes called the sagitta, Latin for arrow.[17][30] If the arc ADB of the double-angle Δ = 2θ is viewed as a "bow" and the chord AB as its "string", then the versine CD is clearly the "arrow shaft". In further keeping with the interpretation of the sine as "vertical" and the versed sine as "horizontal", sagitta is also an obsolete synonym for the abscissa (the horizontal axis of a graph).[30] In 1821, Cauchy used the terms sinus versus (siv) for the versine and cosinus versus (cosiv) for the coversine.[15][16][nb 1] As θ goes to zero, versin(θ) is the difference between two nearly equal quantities, so a user of a trigonometric table for the cosine alone would need a very high accuracy to obtain the versine in order to avoid catastrophic cancellation, making separate tables for the latter convenient.[12] Even with a calculator or computer, round-off errors make it advisable to use the sin² formula for small θ. Another historical advantage of the versine is that it is always non-negative, so its logarithm is defined everywhere except for the angles (θ = 0, 2π, …) where it is zero; thus, one could use logarithmic tables for multiplications in formulas involving versines. In fact, the earliest surviving table of sine (half-chord) values (as opposed to the chords tabulated by Ptolemy and other Greek authors), calculated from the Surya Siddhantha of India dated back to the 3rd century BC, was a table of values for the sine and versed sine (in 3.75° increments from 0 to 90°).[31] The versine appears as an intermediate step in the application of the half-angle formula sin²(θ/2) = 1/2 versin(θ), derived by Ptolemy, that was used to construct such tables.
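The catastrophic cancellation mentioned above is easy to demonstrate in double-precision arithmetic, where the naive 1 − cos θ loses every significant digit for sufficiently small θ while the sin² form stays accurate. A minimal sketch:

```python
import math

# For small angles, 1 - cos(theta) suffers catastrophic cancellation,
# while the equivalent 2*sin(theta/2)**2 does not.
theta = 1e-8
naive = 1 - math.cos(theta)             # cos(theta) rounds to exactly 1.0
stable = 2 * math.sin(theta / 2) ** 2   # the sin^2 formula from the text

# True value is approximately theta**2 / 2 = 5e-17.
```

Here cos(10⁻⁸) is closer to 1 than to any other representable double, so the subtraction returns exactly zero; the stable form retains nearly full precision.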
The haversine, in particular, was important in navigation because it appears in the haversine formula, which is used to reasonably accurately compute distances on an astronomic spheroid (see issues with the Earth's radius vs. sphere) given angular positions (e.g., longitude and latitude). One could also use sin²(θ/2) directly, but having a table of the haversine removed the need to compute squares and square roots.[12] An early utilization by José de Mendoza y Ríos of what later would be called haversines is documented in 1801.[14][32] The first known English equivalent to a table of haversines was published by James Andrew in 1805, under the name "Squares of Natural Semi-Chords".[33][34][17] In 1835, the term haversine (notated naturally as hav. or base-10 logarithmically as log. haversine or log. havers.) was coined[35] by James Inman[14][36][37] in the third edition of his work Navigation and Nautical Astronomy: For the Use of British Seamen to simplify the calculation of distances between two points on the surface of the Earth using spherical trigonometry for applications in navigation.[3][35] Inman also used the terms nat. versine and nat. vers. for versines.[3] Other highly regarded tables of haversines were those of Richard Farley in 1856[33][38] and John Caulfield Hannyngton in 1876.[33][39] The haversine continues to be used in navigation and has found new applications in recent decades, as in Bruce D. Stark's method for clearing lunar distances utilizing Gaussian logarithms since 1995[40][41] or in a more compact method for sight reduction since 2014.[29] While the usage of the versine, coversine, and haversine, as well as their inverse functions, can be traced back centuries, the names for the other five cofunctions appear to be of much younger origin.
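The haversine formula itself, applied to a sphere, can be sketched as follows. The 6371 km mean Earth radius is an assumed constant for the example; as the text notes, the real Earth is a spheroid, so this is only an approximation:

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in km between two points given as
    (latitude, longitude) in degrees, on a sphere of the given radius."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # hav(d/R) = hav(dphi) + cos(p1) cos(p2) hav(dlmb)
    hav = (math.sin(dphi / 2) ** 2
           + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(hav))

# Two points on the equator, 90 degrees of longitude apart:
# a quarter of the equatorial circumference.
d = haversine_distance(0.0, 0.0, 0.0, 90.0)
```

The use of hav and its inverse keeps the intermediate quantity well-conditioned even for nearly antipodal or nearly coincident points, which is exactly the numerical advantage discussed above.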
One period (0 < θ < 2π) of a versine or, more commonly, a haversine waveform is also commonly used in signal processing and control theory as the shape of a pulse or a window function (including the Hann, Hann–Poisson, and Tukey windows), because it smoothly (continuous in value and slope) "turns on" from zero to one (for haversine) and back to zero.[nb 2] In these applications, it is named the Hann function or raised-cosine filter. The functions are circular rotations of each other. Inverse functions like the arcversine (arcversin, arcvers,[8] avers,[43][44] aver), arcvercosine (arcvercosin, arcvercos, avercos, avcs), arccoversine (arccoversin, arccovers,[8] acovers,[43][44] acvs), arccovercosine (arccovercosin, arccovercos, acovercos, acvc), archaversine (archaversin, archav, haversin−1,[45] invhav,[46][47][48] ahav,[43][44] ahvs, ahv, hav−1[49][50]), archavercosine (archavercosin, archavercos, ahvc), archacoversine (archacoversin, ahcv), or archacovercosine (archacovercosin, archacovercos, ahcc) exist as well: These functions can be extended into the complex plane.[42][19][24] Maclaurin series:[24] When the versine v is small in comparison to the radius r, it may be approximated from the half-chord length L (the distance AC shown above) by the formula[51]

v ≈ L²/(2r).

Alternatively, if the versine is small and the versine, radius, and half-chord length are known, they may be used to estimate the arc length s (AD in the figure above) by the formula

s ≈ L + v²/r.

This formula was known to the Chinese mathematician Shen Kuo, and a more accurate formula also involving the sagitta was developed two centuries later by Guo Shoujing.[52] A more accurate approximation used in engineering[53] is

v ≈ s^(3/2) L^(1/2) / (8r).

The term versine is also sometimes used to describe deviations from straightness in an arbitrary planar curve, of which the above circle is a special case.
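The small-sagitta approximation v ≈ L²/(2r) can be checked against the exact circle geometry, where the versine of a half-chord L on a circle of radius r is r − √(r² − L²). A minimal sketch:

```python
import math

# Checking v ~ L^2 / (2r) against the exact relation for a circle.
r = 1000.0   # radius
L = 50.0     # half-chord, small compared to r

v_exact = r - math.sqrt(r * r - L * L)   # exact versine (sagitta)
v_approx = L * L / (2 * r)               # the approximation from the text
```

With L/r = 1/20 the two values already agree to about one part in a thousand, and the agreement improves quadratically as the chord shrinks.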
Given a chord between two points on a curve, the perpendicular distance v from the chord to the curve (usually at the chord midpoint) is called a versine measurement. For a straight line, the versine of any chord is zero, so this measurement characterizes the straightness of the curve. In the limit as the chord length L goes to zero, the ratio 8v/L² goes to the instantaneous curvature. This usage is especially common in rail transport, where it describes measurements of the straightness of the rail tracks[54] and it is the basis of the Hallade method for rail surveying. The term sagitta (often abbreviated sag) is used similarly in optics, for describing the surfaces of lenses and mirrors.
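The limit ratio 8v/L² can be illustrated on a circle, where the true curvature is 1/r. A sketch with a moderate chord (here L is the full chord length, matching the ratio as stated above):

```python
import math

# Estimating curvature from a versine measurement via the ratio 8v/L^2,
# for a chord of a circle of known radius (true curvature = 1/r).
r = 500.0   # circle radius
L = 10.0    # full chord length
v = r - math.sqrt(r * r - (L / 2) ** 2)   # exact mid-chord offset (versine)

curvature_estimate = 8 * v / (L * L)      # should approach 1/r
```

Even with a 10-unit chord on a 500-unit-radius circle, the estimate matches 1/r to a few parts in a hundred thousand, which is why track engineers can work with practical chord lengths rather than infinitesimal ones.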
https://en.wikipedia.org/wiki/Versine
https://en.wikipedia.org/wiki/Haversine
Mādhava of Sangamagrāma (Mādhavan)[4] (c. 1340 – c. 1425) was an Indian mathematician and astronomer who is considered to be the founder of the Kerala school of astronomy and mathematics in the Late Middle Ages. Madhava made pioneering contributions to the study of infinite series, calculus, trigonometry, geometry, and algebra. He was the first to use infinite series approximations for a range of trigonometric functions, which has been called the "decisive step onward from the finite procedures of ancient mathematics to treat their limit-passage to infinity".[1] Little is known about Madhava's life with certainty. However, from scattered references to Madhava found in diverse manuscripts, historians of the Kerala school have pieced together information about the mathematician. In a manuscript preserved in the Oriental Institute, Baroda, Madhava has been referred to as Mādhavan vēṇvārōhādīnām karttā ... Mādhavan Ilaññippaḷḷi Emprān.[4] It has been noted that the epithet 'Emprān' refers to the Emprāntiri community, to which Madhava might have belonged.[5] The term "Ilaññippaḷḷi" has been identified as a reference to the residence of Madhava. This is corroborated by Madhava himself. In his short work on the moon's positions titled Veṇvāroha, Madhava says that he was born in a house named bakuḷādhiṣṭhita . . . vihāra.[6] This is clearly Sanskrit for Ilaññippaḷḷi. Ilaññi is the Malayalam name of the evergreen tree Mimusops elengi, and the Sanskrit name for the same is Bakuḷa. Paḷḷi is a term for village. The Sanskrit house name bakuḷādhiṣṭhita . . . 
vihāra has also been interpreted as a reference to the Malayalam house name Iraññi ninna ppaḷḷi, and some historians have tried to identify it with one of two currently existing houses with the names Iriññanavaḷḷi and Iriññārapaḷḷi, both of which are located near Irinjalakuda town in central Kerala.[6] This identification is far-fetched because both names have neither phonetic similarity nor semantic equivalence to the word "Ilaññippaḷḷi".[5] Most of the writers of astronomical and mathematical works who lived after Madhava's period have referred to Madhava as "Sangamagrama Madhava", and as such it is important that the real import of the word "Sangamagrama" be made clear. The general view among many scholars is that Sangamagrama is the town of Irinjalakuda, some 70 kilometers south of the Nila river and about 70 kilometers north of Cochin.[5] It seems that there is not much concrete ground for this belief except perhaps the fact that the presiding deity of an early medieval temple in the town, the Koodalmanikyam Temple, is worshipped as Sangameswara, meaning the Lord of the Samgama, and so Samgamagrama can be interpreted as the village of Samgameswara. But there are several places in Karnataka with samgama or its equivalent kūḍala in their names and with a temple dedicated to Samgamḗsvara, the lord of the confluence. (Kudalasangama in Bagalkot district is one such place with a celebrated temple dedicated to the Lord of the Samgama.)[5] There is a small town on the southern banks of the Nila river, around 10 kilometers upstream from Tirunavaya, called Kūḍallūr. The exact literal Sanskrit translation of this place name is Samgamagrama: kūṭal in Malayalam means a confluence (which in Sanskrit is samgama) and ūr means a village (which in Sanskrit is grama). Also, the place is at the confluence of the Nila river and its most important tributary, namely, the Kunti river. (There is no confluence of rivers near Irinjalakuda.)
Incidentally, there is a still-existing Nambudiri (Malayali Brahmin) family by name Kūtallūr Mana a few kilometers away from the Kudallur village. The family has its origins in Kudallur village itself, and for many generations it hosted a great Gurukulam specialising in Vedanga.[5] That the only available manuscript of Sphuṭacandrāpti, a book authored by Madhava, was obtained from the manuscript collection of Kūtallūr Mana might strengthen the conjecture that Madhava had some association with Kūtallūr Mana.[7] Thus the most plausible possibility is that the forefathers of Madhava migrated from the Tulu land or thereabouts to settle in Kudallur village, situated on the southern banks of the Nila river not far from Tirunavaya, a generation or two before his birth, and lived in a house known as Ilaññippaḷḷi, whose present identity is unknown.[5] There is also no definite evidence to pinpoint the period during which Madhava flourished. In his Venvaroha, Madhava gives a date in 1400 CE as the epoch. Madhava's pupil Parameshvara Nambudiri, the only known direct pupil of Madhava, is known to have completed his seminal work Drigganita in 1430, and Parameshvara's dates have been determined as c. 1360–1455. From such circumstantial evidence historians have assigned the dates c. 1340 – c. 1425 to Madhava. Although there is some evidence of mathematical work in Kerala prior to Madhava (e.g., Sadratnamala, c. 1300, a set of fragmentary results[8]), it is clear from citations that Madhava provided the creative impulse for the development of a rich mathematical tradition in medieval Kerala. However, all but a couple of Madhava's original works have been lost. He is referred to in the work of subsequent Kerala mathematicians, particularly in Nilakantha Somayaji's Tantrasangraha (c. 1500), as the source for several infinite series expansions, including sin θ and arctan θ.
The 16th-century text Mahajyānayana prakāra (Method of Computing Great Sines) cites Madhava as the source for several series derivations for π. In Jyeṣṭhadeva's Yuktibhāṣā (c. 1530),[9] written in Malayalam, these series are presented with proofs in terms of the Taylor series expansions for functions like 1/(1 + x²), with x = tan θ, etc. Thus, what is explicitly Madhava's work is a source of some debate. The Yukti-dipika (also called the Tantrasangraha-vyakhya), possibly composed by Sankara Variar, a student of Jyeṣṭhadeva, presents several versions of the series expansions for sin θ, cos θ, and arctan θ, as well as some products with radius and arclength, most versions of which appear in the Yuktibhāṣā. For those that do not, Rajagopal and Rangachari have argued, quoting extensively from the original Sanskrit,[1] that since some of these have been attributed by Nilakantha to Madhava, some of the other forms might also be the work of Madhava. Others have speculated that the early text Karanapaddhati (c. 1375–1475) or the Mahajyānayana prakāra was written by Madhava, but this is unlikely.[3] Karanapaddhati, along with the even earlier Keralite mathematics text Sadratnamala, as well as the Tantrasangraha and Yuktibhāṣā, were considered in an 1834 article by C. M. Whish, which was the first to draw attention to their priority over Newton in discovering the fluxion (Newton's name for differentials).[8] In the mid-20th century, the Russian scholar Jushkevich revisited the legacy of Madhava,[10] and a comprehensive look at the Kerala school was provided by Sarma in 1972.[11] There are several known astronomers who preceded Madhava, including Kūṭalur Kizhār (2nd century),[12] Vararuci (4th century), and Śaṅkaranārāyaṇa (866 AD). It is possible that other unknown figures preceded him. However, we have a clearer record of the tradition after Madhava. Parameshvara was a direct disciple. According to a palm leaf manuscript of a Malayalam commentary on the Surya Siddhanta, Parameswara's son Damodara (c.
1400–1500) had Nilakantha Somayaji as one of his disciples. Jyeshtadeva was a disciple of Nilakantha. Achyutha Pisharadi of Trikkantiyur is mentioned as a disciple of Jyeṣṭhadeva, and the grammarian Melpathur Narayana Bhattathiri as his disciple.[9] If we consider mathematics as a progression from finite processes of algebra to considerations of the infinite, then the first steps towards this transition typically come with infinite series expansions. It is this transition to the infinite series that is attributed to Madhava. In Europe, the first such series were developed by James Gregory in 1667. Madhava's work is notable for the series, but what is truly remarkable is his estimate of an error term (or correction term).[13] This implies that he understood very well the limit nature of the infinite series. Thus, Madhava may have invented the ideas underlying infinite series expansions of functions, power series, trigonometric series, and rational approximations of infinite series.[14] However, as stated above, which results are precisely Madhava's and which are those of his successors is difficult to determine. The following presents a summary of results that have been attributed to Madhava by various scholars. Among his many contributions, he discovered infinite series for the trigonometric functions of sine, cosine, and arctangent, and many methods for calculating the circumference of a circle. One of Madhava's series is known from the text Yuktibhāṣā, which contains the derivation and proof of the power series for the inverse tangent, discovered by Madhava.[15] In the text, Jyeṣṭhadeva describes the series in the following manner: The first term is the product of the given sine and radius of the desired arc divided by the cosine of the arc. The succeeding terms are obtained by a process of iteration when the first term is repeatedly multiplied by the square of the sine and divided by the square of the cosine. All the terms are then divided by the odd numbers 1, 3, 5, ....
The arc is obtained by adding and subtracting respectively the terms of odd rank and those of even rank. It is laid down that the sine of the arc or that of its complement, whichever is the smaller, should be taken here as the given sine. Otherwise the terms obtained by this above iteration will not tend to the vanishing magnitude.[16] This yields rθ = r·(sin θ/cos θ) − (r/3)·(sin³θ/cos³θ) + (r/5)·(sin⁵θ/cos⁵θ) − ⋯, or equivalently θ = tan θ − tan³θ/3 + tan⁵θ/5 − ⋯. This series is Gregory's series (named after James Gregory, who rediscovered it three centuries after Madhava). Even if we consider this particular series as the work of Jyeṣṭhadeva, it would pre-date Gregory by a century, and certainly other infinite series of a similar nature had been worked out by Madhava. Today, it is referred to as the Madhava–Gregory–Leibniz series.[16][17] Madhava composed an accurate table of sines. Marking a quarter circle at twenty-four equal intervals, he gave the lengths of the half-chord (sines) corresponding to each of them; these values are accurate to the seventh decimal place. It is believed that he may have computed these values based on the series expansions sin x = x − x³/3! + x⁵/5! − ⋯ and cos x = 1 − x²/2! + x⁴/4! − ⋯.[18] Madhava's work on the value of the mathematical constant π is cited in the Mahajyānayana prakāra ("Methods for the great sines").[citation needed] While some scholars such as Sarma[9] feel that this book may have been composed by Madhava himself, it is more likely the work of a 16th-century successor.[18] This text attributes most of the expansions to Madhava, and gives the following infinite series expansion of π, now known as the Madhava–Leibniz series:[19][20] π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯, which he obtained from the power-series expansion of the arc-tangent function. However, what is most impressive is that he also gave a correction term Rn for the error after computing the sum up to n terms,[18] the three attributed forms being Rn = 1/(4n), or Rn = n/(4n² + 1), or Rn = (n² + 1)/(4n³ + 5n), where the third correction leads to highly accurate computations of π.
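The effect of these correction terms can be checked numerically. The sketch below (not Madhava's own procedure) uses the three forms of Rn attributed to Madhava in the literature, added with sign (−1)^n to the n-term partial sum of the series for π/4:

```python
import math
from fractions import Fraction

def leibniz_partial_sum(n):
    # s(n) = 1 - 1/3 + 1/5 - ... +- 1/(2n-1), the first n terms of the series for pi/4
    return sum(Fraction((-1) ** (k - 1), 2 * k - 1) for k in range(1, n + 1))

# The three correction terms attributed to Madhava (applied with sign (-1)^n).
corrections = [
    lambda n: Fraction(1, 4 * n),
    lambda n: Fraction(n, 4 * n * n + 1),
    lambda n: Fraction(n * n + 1, 4 * n ** 3 + 5 * n),
]

n = 10
s = leibniz_partial_sum(n)
approximations = [4 * float(s + (-1) ** n * R(n)) for R in corrections]
errors = [abs(a - math.pi) for a in approximations]
# Each correction improves on the previous one, and all three beat the raw partial sum.
```

With only ten terms of the slowly converging series, the third correction already gives π to about seven decimal places, which illustrates the claim above.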
It has long been speculated how Madhava found these correction terms.[21] They are the first three convergents of a finite continued fraction which, when combined with Madhava's original series evaluated to n terms, yields about 3n/2 correct digits. He also gave a more rapidly converging series by transforming the original infinite series of π, obtaining the infinite series π = √12 (1 − 1/(3·3) + 1/(5·3²) − 1/(7·3³) + ⋯). By using the first 21 terms to compute an approximation of π, he obtained a value correct to 11 decimal places (3.14159265359).[22] The value 3.1415926535898, correct to 13 decimals, is sometimes attributed to Madhava,[23] but may be due to one of his followers. These were the most accurate approximations of π given since the 5th century (see History of numerical approximations of π). The text Sadratnamala appears to give the astonishingly accurate value of π = 3.14159265358979324 (correct to 17 decimal places). Based on this, R. Gupta has suggested that this text was also composed by Madhava.[3][22] Madhava also carried out investigations into other series for arc lengths and the associated approximations to rational fractions of π.[3] Madhava developed the power series expansions of some trigonometric functions, which were further developed by his successors at the Kerala school of astronomy and mathematics.[24] (Certain ideas of calculus were known to earlier mathematicians.) Madhava also extended some results found in earlier works, including those of Bhāskara II.[24] However, the Kerala mathematicians did not combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, or turn calculus into the powerful problem-solving tool we have today.[25] K. V.
Sarma has identified Madhava as the author of several works.[26][27] The Kerala school of astronomy and mathematics, founded by Madhava, flourished between the 14th and 16th centuries, and included among its members Parameshvara, Neelakanta Somayaji, Jyeshtadeva, Achyuta Pisharati, Melpathur Narayana Bhattathiri and Achyuta Panikkar. The group is known for its series expansions of the three trigonometric functions sine, cosine and arctangent; proofs of their results were later given in the Yuktibhasa.[8][24][25] The group also did much other work in astronomy: more pages are devoted to astronomical computations than to purely mathematical results.[9] The Kerala school also contributed to linguistics (the relation between language and mathematics is an ancient Indian tradition; see Kātyāyana). The ayurvedic and poetic traditions of Kerala can be traced back to this school. The famous poem Narayaniyam was composed by Narayana Bhattathiri. Madhava has been called "the greatest mathematician-astronomer of medieval India",[3] and "some of his discoveries in this field show him to have possessed extraordinary intuition".[29] O'Connor and Robertson state that a fair assessment of Madhava is that he took the decisive step towards modern classical analysis.[18] The Kerala school was well known in the 15th and 16th centuries, the period of the first contact with European navigators on the Malabar Coast. At the time, the port of Muziris, near Sangamagrama, was a major center for maritime trade, and a number of Jesuit missionaries and traders were active in this region. Given the fame of the Kerala school, and the interest shown by some of the Jesuit groups during this period in local scholarship, some scholars, including G. Joseph of the University of Manchester, have suggested[30] that the writings of the Kerala school may have also been transmitted to Europe around this time, which was still about a century before Newton.[31] However, there is no direct evidence by way of relevant manuscripts that such a transmission actually took place.[31] According to David Bressoud, "there is no evidence that the Indian work of series was known beyond India, or even outside of Kerala, until the nineteenth century."[32]
https://en.wikipedia.org/wiki/Madhava_of_Sangamagrama
Madhava's correction term is a mathematical expression attributed to Madhava of Sangamagrama (c. 1340 – c. 1425), the founder of the Kerala school of astronomy and mathematics, that can be used to give a better approximation to the value of the mathematical constant π (pi) than the partial sum approximation obtained by truncating the Madhava–Leibniz infinite series for π. The Madhava–Leibniz infinite series for π is π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯. Taking the partial sum of the first n terms, we have the following approximation to π: π/4 ≈ 1 − 1/3 + 1/5 − ⋯ + (−1)^(n−1)/(2n − 1). Denoting the Madhava correction term by F(n), we have the following better approximation to π: π/4 ≈ 1 − 1/3 + 1/5 − ⋯ + (−1)^(n−1)/(2n − 1) + (−1)^n F(n). Three different expressions have been attributed to Madhava as possible values of F(n), namely F₁(n) = 1/(4n), F₂(n) = n/(4n² + 1), and F₃(n) = (n² + 1)/(4n³ + 5n). In the extant writings of the mathematicians of the Kerala school there are some indications regarding how the correction terms F₁(n) and F₂(n) were obtained, but there are no indications on how the expression F₃(n) was obtained. This has led to a lot of speculative work on how the formulas might have been derived.
The expressions for F₂(n) and F₃(n) are given explicitly in the Yuktibhasha, a major treatise on mathematics and astronomy authored by the Indian astronomer Jyesthadeva of the Kerala school of mathematics around 1530, but that for F₁(n) appears there only as a step in the argument leading to the derivation of F₂(n).[1][2] The Yuktidipika–Laghuvivrthi commentary of Tantrasangraha, a treatise written by Nilakantha Somayaji, an astronomer/mathematician belonging to the Kerala school of astronomy and mathematics, and completed in 1501, presents the second correction term in the following verses (Chapter 2: Verses 271–274).[3][1] In modern notation (where d is the diameter of the circle), the verses state that the circumference is approximately 4d − 4d/3 + 4d/5 − ⋯ ± 4d/p ∓ 4d·((p + 1)/2)/((p + 1)² + 1). If we set p = 2n − 1, the last term in the right-hand side of the above equation reduces to 4dF₂(n). The same commentary also gives the correction term F₃(n) in the following verses (Chapter 2: Verses 295–296).[3] In modern notation, they state that the circumference is approximately 4d − 4d/3 + 4d/5 − ⋯ ± 4d/p ∓ 4d·m/(((p + 1)/2)(4m + 1)), where the "multiplier" m = 1 + ((p + 1)/2)². If we set p = 2n − 1, the last term in the right-hand side of the above equation reduces to 4dF₃(n). Let sᵢ(n) denote the approximation to π/4 obtained by adding the correction term Fᵢ(n) to the partial sum of the first n terms. Then, writing p = 2n + 1, explicit bounds are known for the errors |π/4 − sᵢ(n)|,[2][4] and a table of the corresponding errors in computing π for a few selected values of n shows F₃(n) to be the most accurate of the three corrections.
It has been noted that the correction terms F₁(n), F₂(n), F₃(n) are the first three convergents of a continued fraction expression:[3] the function f(n) that renders the equation exact can be expressed as the infinite continued fraction[1] f(n) = 1/(4n + 2²/(4n + 4²/(4n + 6²/(4n + ⋯)))), and the first three convergents of this infinite continued fraction are precisely the correction terms of Madhava. In a paper published in 1990, a group of three Japanese researchers proposed an ingenious method by which Madhava might have obtained the three correction terms. Their proposal was based on two assumptions: Madhava used 355/113 as the value of π, and he used the Euclidean algorithm for division.[5][6] Writing the remainder after n terms as 1/S(n) and taking π = 355/113, compute the values S(n), express each as a fraction with 1 as numerator, and finally ignore the fractional parts in the denominator to obtain the approximation S(n) ≈ 4n, which is the correction term F₁(n) talked about earlier. The fractions that were ignored can then be expressed with 1 as numerator, with the fractional parts in the denominators ignored, to obtain the next approximation. Two such steps yield the next two approximations to S(n), exactly the same as the correction terms F₂(n) and F₃(n) attributed to Madhava.
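The continued fraction above is written here in a form reconstructed so that its convergents match the attributed terms; under that assumption, the first three convergents can be verified in exact rational arithmetic, and a deep truncation can be compared with the true remainder of the Madhava–Leibniz series:

```python
import math
from fractions import Fraction

def madhava_cf(n, depth):
    # Truncate f(n) = 1/(4n + 2^2/(4n + 4^2/(4n + 6^2/(...)))) after `depth` levels,
    # evaluating bottom-up in exact rational arithmetic.
    x = Fraction(0)
    for j in range(depth, 0, -1):
        num = 1 if j == 1 else (2 * (j - 1)) ** 2
        x = Fraction(num) / (4 * n + x)
    return x

n = 10
# The first three convergents reproduce F1, F2, F3 exactly.
F1, F2, F3 = (madhava_cf(n, d) for d in (1, 2, 3))

# A deep truncation approximates the true remainder pi/4 - s(n) of the series.
s = sum(Fraction((-1) ** (k - 1), 2 * k - 1) for k in range(1, n + 1))
true_remainder = math.pi / 4 - float(s)   # positive here, since n is even
deep = float(madhava_cf(n, 30))
```

The exact-arithmetic checks confirm the convergent identities algebraically, independent of floating-point rounding.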
https://en.wikipedia.org/wiki/Madhava%27s_correction_term
In mathematics, the Laurent series of a complex function f(z) is a representation of that function as a power series which includes terms of negative degree. It may be used to express complex functions in cases where a Taylor series expansion cannot be applied. The Laurent series was named after and first published by Pierre Alphonse Laurent in 1843. Karl Weierstrass had previously described it in a paper written in 1841 but not published until 1894.[1] The Laurent series for a complex function f(z) about an arbitrary point c is given by[2][3] f(z) = Σ_{n=−∞}^{∞} a_n (z − c)^n, where the coefficients a_n are defined by a contour integral that generalizes Cauchy's integral formula: a_n = (1/(2πi)) ∮_γ f(z)/(z − c)^{n+1} dz. The path of integration γ is counterclockwise around a Jordan curve enclosing c and lying in an annulus A in which f(z) is holomorphic (analytic). The expansion for f(z) will then be valid anywhere inside the annulus. The annulus is shown in red in the figure on the right, along with an example of a suitable path of integration labeled γ. When γ is defined as the circle |z − c| = ϱ, where r < ϱ < R, this amounts to computing the complex Fourier coefficients of the restriction of f to γ.[4] The fact that these integrals are unchanged by a deformation of the contour γ is an immediate consequence of Green's theorem. One may also obtain the Laurent series for a complex function f(z) at z = ∞. However, this is the same as when R → ∞.
In practice, the above integral formula may not offer the most practical method for computing the coefficients a_n for a given function f(z); instead, one often pieces together the Laurent series by combining known Taylor expansions. Because the Laurent expansion of a function is unique whenever it exists, any expression of this form that equals the given function f(z) in some annulus must actually be the Laurent expansion of f(z). Laurent series with complex coefficients are an important tool in complex analysis, especially to investigate the behavior of functions near singularities. Consider for instance the function f(x) = e^{−1/x²} with f(0) = 0. As a real function, it is infinitely differentiable everywhere; as a complex function, however, it is not differentiable at x = 0. The Laurent series of f(x) is obtained via the power series representation e^{−1/x²} = Σ_{n=0}^{∞} (−1)^n x^{−2n}/n!, which converges to f(x) for all x ∈ ℂ except at the singularity x = 0. The graph on the right shows f(x) in black and its Laurent approximations Σ_{n=0}^{N} (−1)^n x^{−2n}/n! for N ∈ ℕ⁺. As N → ∞, the approximation becomes exact for all (complex) numbers x except at the singularity x = 0. More generally, Laurent series can be used to express holomorphic functions defined on an annulus, much as power series are used to express holomorphic functions defined on a disc. Suppose Σ_{n=−∞}^{∞} a_n (z − c)^n is a given Laurent series with complex coefficients a_n and a complex center c.
Then there exists a unique inner radius r and outer radius R such that the series converges on the annulus r < |z − c| < R and diverges outside its closure. It is possible that r may be zero or R may be infinite; at the other extreme, it is not necessarily true that r is less than R. These radii can be computed by taking the limit superior of the coefficients a_n: r = limsup_{n→∞} |a_{−n}|^{1/n}, 1/R = limsup_{n→∞} |a_n|^{1/n}. When r = 0, the coefficient a_{−1} of the Laurent expansion is called the residue of f(z) at the singularity c.[6] For example, the function f(z) = e^z/z + e^{1/z} is holomorphic everywhere except at z = 0. The Laurent expansion about c = 0 can then be obtained from the power series representation: f(z) = ⋯ + (1/3!) z^{−3} + (1/2!) z^{−2} + 2z^{−1} + 2 + (1/2!) z + (1/3!) z² + (1/4!) z³ + ⋯, hence the residue is given by a_{−1} = 2. Conversely, for a holomorphic function f(z) defined on the annulus A = {z : r < |z − c| < R}, there always exists a unique Laurent series with center c which converges (at least on A) to f(z).
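The coefficient formula from the definition can be checked numerically for this example. The sketch below approximates the contour integral over the unit circle by an equally spaced sample average (the sample count 256 is an arbitrary choice) and recovers the residue a₋₁ = 2:

```python
import cmath

def laurent_coefficient(f, n, radius=1.0, samples=256):
    # a_n = (1/(2*pi*i)) * integral of f(z)/z^(n+1) dz over |z| = radius.
    # With z = radius*e^(i*theta) and dz = i*z*dtheta, the integrand becomes
    # f(z)*z^(-n)/(2*pi), so the trapezoidal rule reduces to a plain average;
    # for periodic analytic integrands it converges extremely fast.
    total = 0j
    for k in range(samples):
        z = radius * cmath.exp(2j * cmath.pi * k / samples)
        total += f(z) * z ** (-n)
    return total / samples

f = lambda z: cmath.exp(z) / z + cmath.exp(1 / z)
a_minus_1 = laurent_coefficient(f, -1)   # residue of f at 0
a_0 = laurent_coefficient(f, 0)          # constant term, 2 from the expansion above
```

Both computed values agree with the expansion given in the text, since e^z/z and e^{1/z} each contribute 1 to the coefficients of z^{−1} and z^0.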
For example, consider the following rational function, along with its partial fraction expansion: f(z) = 1/((z − 1)(z − 2i)) = ((1 + 2i)/5)(1/(z − 1) − 1/(z − 2i)). This function has singularities at z = 1 and z = 2i, where the denominator is zero and the expression is therefore undefined. A Taylor series about z = 0 (which yields a power series) will only converge in a disc of radius 1, since it "hits" the singularity at z = 1. However, there are three possible Laurent expansions about 0, depending on the modulus of z: one valid on |z| < 1, one on the annulus 1 < |z| < 2, and one on |z| > 2. Suppose a function f(z) holomorphic on the annulus r < |z − c| < R has two Laurent series: f(z) = Σ_{n=−∞}^{∞} a_n (z − c)^n = Σ_{n=−∞}^{∞} b_n (z − c)^n. Multiply both sides by (z − c)^{−k−1}, where k is an arbitrary integer, and integrate on a path γ inside the annulus: ∮_γ Σ_{n=−∞}^{∞} a_n (z − c)^{n−k−1} dz = ∮_γ Σ_{n=−∞}^{∞} b_n (z − c)^{n−k−1} dz. The series converges uniformly on r + ε ≤ |z − c| ≤ R − ε, where ε is a positive number small enough for γ to be contained in the constricted closed annulus, so the integration and summation can be interchanged. Substituting the identity ∮_γ (z − c)^{n−k−1} dz = 2πi δ_{nk} into the summation yields a_k = b_k. Hence the Laurent series is unique. A Laurent polynomial is a Laurent series in which only finitely many coefficients are non-zero. Laurent polynomials differ from ordinary polynomials in that they may have terms of negative degree.
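The expansion valid on the middle annulus 1 < |z| < 2 can be built from the partial fractions by expanding 1/(z − 1) in powers of 1/z and −1/(z − 2i) in powers of z. The sketch below (truncation length and test point chosen arbitrarily) compares the truncated expansion against direct evaluation:

```python
# Laurent expansion of f(z) = 1/((z-1)(z-2i)) on the annulus 1 < |z| < 2:
# f(z) = (1+2i)/5 * ( sum_{k>=0} z^(-1-k)  +  sum_{k>=0} z^k / (2i)^(k+1) ),
# using 1/(z-1) = sum z^(-1-k) for |z| > 1 and
# -1/(z-2i) = 1/(2i-z) = sum z^k/(2i)^(k+1) for |z| < 2.
A = (1 + 2j) / 5

def laurent_middle(z, terms=100):
    inner = sum(z ** (-1 - k) for k in range(terms))             # principal part
    outer = sum(z ** k / (2j) ** (k + 1) for k in range(terms))  # regular part
    return A * (inner + outer)

z = 1.5 + 0.3j   # an arbitrary point inside the annulus
exact = 1 / ((z - 1) * (z - 2j))
approx = laurent_middle(z)
```

Both geometric series converge on the annulus because |1/z| < 1 and |z/2i| < 1 there, so the truncation error decays geometrically.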
The principal part of a Laurent series is the series of terms with negative degree, that is, Σ_{k=−∞}^{−1} a_k (z − c)^k. If the principal part of f is a finite sum, then f has a pole at c of order equal to (negative) the degree of the highest term; on the other hand, if f has an essential singularity at c, the principal part is an infinite sum (meaning it has infinitely many non-zero terms). If the inner radius of convergence of the Laurent series for f is 0, then f has an essential singularity at c if and only if the principal part is an infinite sum, and has a pole otherwise. If the inner radius of convergence is positive, f may have infinitely many negative terms but still be regular at c, as in the example above, in which case it is represented by a different Laurent series in a disk about c. Laurent series with only finitely many negative terms are well-behaved (they are a power series divided by z^k, and can be analyzed similarly), while Laurent series with infinitely many negative terms have complicated behavior on the inner circle of convergence. Laurent series cannot in general be multiplied. Algebraically, the expression for the terms of the product may involve infinite sums which need not converge (one cannot take the convolution of integer sequences). Geometrically, the two Laurent series may have non-overlapping annuli of convergence. Two Laurent series with only finitely many negative terms can be multiplied: algebraically, the sums are all finite; geometrically, these have poles at c and inner radius of convergence 0, so they both converge on an overlapping annulus. Thus when defining formal Laurent series, one requires Laurent series with only finitely many negative terms.
Similarly, the sum of two convergent Laurent series need not converge, though it is always defined formally; but the sum of two bounded-below Laurent series (or any Laurent series on a punctured disk) has a non-empty annulus of convergence. Also, for a field F, with the sum and multiplication defined above, formal Laurent series form a field F((x)), which is also the field of fractions of the ring F[[x]] of formal power series.
https://en.wikipedia.org/wiki/Laurent_series
In mathematics, Puiseux series are a generalization of power series that allow for negative and fractional exponents of the indeterminate. For example, x^{−1/2} + x^{1/3} + x^{5/6} is a Puiseux series in the indeterminate x. Puiseux series were first introduced by Isaac Newton in 1676[1] and rediscovered by Victor Puiseux in 1850.[2] The definition of a Puiseux series includes that the denominators of the exponents must be bounded. So, by reducing exponents to a common denominator n, a Puiseux series becomes a Laurent series in an nth root of the indeterminate. For example, the example above is a Laurent series in x^{1/6}. Because a complex number has n nth roots, a convergent Puiseux series typically defines n functions in a neighborhood of 0. Puiseux's theorem, sometimes also called the Newton–Puiseux theorem, asserts that, given a polynomial equation P(x, y) = 0 with complex coefficients, its solutions in y, viewed as functions of x, may be expanded as Puiseux series in x that are convergent in some neighbourhood of 0. In other words, every branch of an algebraic curve may be locally described by a Puiseux series in x (or in x − x₀ when considering branches above a neighborhood of x₀ ≠ 0). Using modern terminology, Puiseux's theorem asserts that the set of Puiseux series over an algebraically closed field of characteristic 0 is itself an algebraically closed field, called the field of Puiseux series. It is the algebraic closure of the field of formal Laurent series, which itself is the field of fractions of the ring of formal power series. If K is a field (such as the complex numbers), a Puiseux series with coefficients in K is an expression of the form Σ_{k=k₀}^{∞} c_k T^{k/n}, where n is a positive integer and k₀ is an integer. In other words, Puiseux series differ from Laurent series in that they allow for fractional exponents of the indeterminate, as long as these fractional exponents have bounded denominator (here n).
Just as with Laurent series, Puiseux series allow for negative exponents of the indeterminate as long as these negative exponents are bounded below (here by k₀). Addition and multiplication are as expected: one may define them by first rewriting the exponents over some common denominator N and then performing the operation in the corresponding field of formal Laurent series of T^{1/N}. The Puiseux series with coefficients in K form a field, which is the union of fields of formal Laurent series in T^{1/n} (considered as an indeterminate). This yields an alternative definition of the field of Puiseux series in terms of a direct limit. For every positive integer n, let T_n be an indeterminate (meant to represent T^{1/n}), and K((T_n)) be the field of formal Laurent series in T_n. If m divides n, the mapping T_m ↦ (T_n)^{n/m} induces a field homomorphism K((T_m)) → K((T_n)), and these homomorphisms form a direct system that has the field of Puiseux series as a direct limit. The fact that every field homomorphism is injective shows that this direct limit can be identified with the above union, and that the two definitions are equivalent (up to an isomorphism). A nonzero Puiseux series f can be uniquely written as f = Σ_{k=k₀}^{∞} c_k T^{k/n} with c_{k₀} ≠ 0. The valuation v(f) of f is the smallest exponent k₀/n for the natural order of the rational numbers, and the corresponding coefficient c_{k₀} is called the initial coefficient or valuation coefficient of f. The valuation of the zero series is +∞. The function v is a valuation and makes the Puiseux series a valued field, with the additive group ℚ of the rational numbers as its valuation group.
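The common-denominator reduction can be sketched concretely (the example series is an arbitrary choice, with exponent denominators 2, 3 and 6): representing a Puiseux series as a map from rational exponents to coefficients, rewriting over the least common denominator N turns it into a Laurent series in t = x^{1/N}:

```python
from fractions import Fraction
from math import lcm

def to_laurent(puiseux):
    # puiseux: dict mapping rational exponents (Fraction) to coefficients.
    # Returns (N, laurent), where laurent maps integer exponents of t = x^(1/N).
    N = lcm(*(e.denominator for e in puiseux))
    return N, {int(e * N): c for e, c in puiseux.items()}

# x^(-1/2) + 2*x^(1/3) + x^(5/6): common denominator of 2, 3, 6 is 6
series = {Fraction(-1, 2): 1, Fraction(1, 3): 2, Fraction(5, 6): 1}
N, laurent = to_laurent(series)
# N == 6, and the result is the Laurent series t^(-3) + 2*t^2 + t^5 in t = x^(1/6)
```

Addition and multiplication of two Puiseux series can then be carried out on the integer-exponent representations over a shared N, exactly as the text describes.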
As for every valued field, the valuation defines an ultrametric distance by the formula d(f, g) = exp(−v(f − g)). For this distance, the field of Puiseux series is a metric space. The notation f = Σ_{k=k₀}^{∞} c_k T^{k/n} expresses that a Puiseux series is the limit of its partial sums. However, the field of Puiseux series is not complete; see below § Levi–Civita field. The Puiseux series provided by the Newton–Puiseux theorem are convergent in the sense that there is a neighborhood of zero in which they are convergent (0 excluded if the valuation is negative). More precisely, let f = Σ_{k=k₀}^{∞} c_k T^{k/n} be a Puiseux series with complex coefficients. There is a real number r, called the radius of convergence, such that the series converges if T is substituted for a nonzero complex number t of absolute value less than r, and r is the largest number with this property. A Puiseux series is convergent if it has a nonzero radius of convergence. Because a nonzero complex number has n nth roots, some care must be taken for the substitution: a specific nth root of t, say x, must be chosen. Then the substitution consists of replacing T^{k/n} by x^k for every k. The existence of the radius of convergence results from the similar existence for a power series, applied to T^{−k₀/n} f, considered as a power series in T^{1/n}. It is a part of the Newton–Puiseux theorem that the provided Puiseux series have a positive radius of convergence, and thus define a (multivalued) analytic function in some neighborhood of zero (zero itself possibly excluded). If the base field K is ordered, then the field of Puiseux series over K is also naturally ("lexicographically") ordered as follows: a non-zero Puiseux series f is declared positive whenever its valuation coefficient is so.
Essentially, this means that any positive rational power of the indeterminate T is made positive, but smaller than any positive element in the base field K. If the base field K is endowed with a valuation w, then we can construct a different valuation on the field of Puiseux series over K by letting the valuation ŵ(f) be ω·v + w(c_k), where v = k/n is the previously defined valuation (c_k is the first non-zero coefficient) and ω is infinitely large (in other words, the value group of ŵ is ℚ × Γ ordered lexicographically, where Γ is the value group of w). Essentially, this means that the previously defined valuation v is corrected by an infinitesimal amount to take into account the valuation w given on the base field. As early as 1671,[3] Isaac Newton implicitly used Puiseux series and proved the following theorem for approximating with series the roots of algebraic equations whose coefficients are functions that are themselves approximated with series or polynomials. For this purpose, he introduced the Newton polygon, which remains a fundamental tool in this context. Newton worked with truncated series, and it was only in 1850 that Victor Puiseux[2] introduced the concept of (non-truncated) Puiseux series and proved the theorem that is now known as Puiseux's theorem or the Newton–Puiseux theorem.[4] The theorem asserts that, given an algebraic equation whose coefficients are polynomials or, more generally, Puiseux series over a field of characteristic zero, every solution of the equation can be expressed as a Puiseux series.
Moreover, the proof provides an algorithm for computing these Puiseux series, and, when working over the complex numbers, the resulting series are convergent. In modern terminology, the theorem can be restated as: the field of Puiseux series over an algebraically closed field of characteristic zero, and the field of convergent Puiseux series over the complex numbers, are both algebraically closed. Let P(y) = a₀(x) + a₁(x)y + ⋯ + a_d(x)y^d be a polynomial whose nonzero coefficients a_i(x) are polynomials, power series, or even Puiseux series in x. In this section, the valuation v(a_i) of a_i is the lowest exponent of x in a_i. (Most of what follows applies more generally to coefficients in any valued ring.) For computing the Puiseux series that are roots of P (that is, solutions of the functional equation P(y) = 0), the first thing to do is to compute the valuation of the roots. This is the role of the Newton polygon. Consider, in a Cartesian plane, the points of coordinates (i, v(a_i)). The Newton polygon of P is the lower convex hull of these points. That is, the edges of the Newton polygon are the line segments joining two of these points such that none of the points lies below the line supporting the segment ("below" being, as usual, relative to the value of the second coordinate). Given a Puiseux series y₀ of valuation v₀, the valuation of P(y₀) is at least the minimum of the numbers iv₀ + v(a_i), and is equal to this minimum if this minimum is reached for only one i. So, for y₀ to be a root of P, the minimum must be reached at least twice.
That is, there must be two values i₁ and i₂ of i such that i₁v₀ + v(a_{i₁}) = i₂v₀ + v(a_{i₂}), and iv₀ + v(a_i) ≥ i₁v₀ + v(a_{i₁}) for every i. That is, (i₁, v(a_{i₁})) and (i₂, v(a_{i₂})) must belong to an edge of the Newton polygon, and v₀ = −(v(a_{i₁}) − v(a_{i₂}))/(i₁ − i₂) must be the opposite of the slope of this edge. This is a rational number as soon as all valuations v(a_i) are rational numbers, and this is the reason for introducing rational exponents in Puiseux series. In summary, the valuation of a root of P must be the opposite of a slope of an edge of the Newton polygon. The initial coefficient of a Puiseux series solution of P(y) = 0 can easily be deduced. Let c_i be the initial coefficient of a_i(x), that is, the coefficient of x^{v(a_i)} in a_i(x). Let −v₀ be a slope of the Newton polygon, and γx^{v₀} be the initial term of a corresponding Puiseux series solution of P(y) = 0. If no cancellation occurred, then the initial coefficient of P(y) would be ∑_{i∈I} c_i γ^i, where I is the set of the indices i such that (i, v(a_i)) belongs to the edge of slope −v₀ of the Newton polygon. So, for having a root, the initial coefficient γ must be a nonzero root of the polynomial χ(x) = ∑_{i∈I} c_i x^i (this notation will be used in the next section).
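The whole computation of candidate initial terms can be sketched in a few lines; the polynomial P(y) = y³ − xy + x³ below is a hypothetical example chosen for illustration, not taken from the article:

```python
from fractions import Fraction as F

# points (i, v(a_i)) for the example P(y) = y^3 - x*y + x^3:
# a_0 = x^3 (valuation 3), a_1 = -x (valuation 1), a_3 = 1 (valuation 0)
points = [(0, F(3)), (1, F(1)), (3, F(0))]

def lower_hull(pts):
    """Lower convex hull of the points (monotone-chain scan, lower part only)."""
    pts = sorted(pts)
    hull = []
    for p in pts:
        # pop the last point while it lies on or above the chord hull[-2] -> p
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

hull = lower_hull(points)
# candidate valuations of the roots are the negated slopes of the edges
valuations = [-(y2 - y1) / (x2 - x1) for (x1, y1), (x2, y2) in zip(hull, hull[1:])]
print(valuations)  # [2, 1/2]: one root of valuation 2, two of valuation 1/2

# For the edge of slope -1/2 (through (1,1) and (3,0)), the initial
# coefficients are c_1 = -1 and c_3 = 1, so chi(g) = -g + g^3, whose nonzero
# roots +1 and -1 give the initial terms x^(1/2) and -x^(1/2).
chi_roots = [g for g in (-2, -1, 1, 2) if -g + g**3 == 0]
print(chi_roots)  # [-1, 1]
```

The edge lengths (1 and 2) also match the number of roots of each valuation, as discussed further below.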
In summary, the Newton polygon allows an easy computation of all possible initial terms of Puiseux series that are solutions of P(y) = 0. The proof of the Newton–Puiseux theorem consists of starting from these initial terms and computing recursively the next terms of the Puiseux series solutions. Let us suppose that the first term γx^{v₀} of a Puiseux series solution of P(y) = 0 has been computed by the method of the preceding section. It remains to compute z = y − γx^{v₀}. For this, we set y₀ = γx^{v₀}, and write the Taylor expansion of P at y₀ in powers of z = y − y₀: This is a polynomial in z whose coefficients are Puiseux series in x. One may apply to it the method of the Newton polygon, and iterate to get the terms of the Puiseux series, one after the other. But some care is required for ensuring that v(z) > v₀, and for showing that one gets a Puiseux series, that is, that the denominators of the exponents of x remain bounded. Differentiation with respect to y does not change the valuation in x of the coefficients; that is, and the equality occurs if and only if χ^{(j)}(γ) ≠ 0, where χ(x) is the polynomial of the preceding section.
If m is the multiplicity of γ as a root of χ, it results that the inequality is an equality for j = m. The terms such that j > m can be forgotten as far as valuations are concerned, since v(z) > v₀ and j > m imply This means that, for iterating the method of the Newton polygon, one can and one must consider only the part of the Newton polygon whose first coordinates belong to the interval [0, m]. Two cases have to be considered separately and are the subject of the next subsections: the so-called ramified case, where m > 1, and the regular case, where m = 1. The way of applying recursively the method of the Newton polygon has been described above. As each application of the method may increase, in the ramified case, the denominators of the exponents (valuations), it remains to prove that one reaches the regular case after a finite number of iterations (otherwise the denominators of the exponents of the resulting series would not be bounded, and this series would not be a Puiseux series). Along the way, it will also be proved that one gets exactly as many Puiseux series solutions as expected, namely the degree of P(y) in y. Without loss of generality, one can suppose that P(0) ≠ 0, that is, a₀ ≠ 0. Indeed, each factor y of P(y) provides a solution that is the zero Puiseux series, and such factors can be factored out. As the characteristic is supposed to be zero, one can also suppose that P(y) is a square-free polynomial, that is, that the solutions of P(y) = 0 are all different. Indeed, the square-free factorization uses only the operations of the field of coefficients for factoring P(y) into square-free factors that can be solved separately.
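As a concrete sketch of the recursive computation in the regular case (the example P(y) = y² − x² − x³ and the truncation order are our choices, not the article's), one can compute the solution y = x(1 + x)^{1/2} = x + x²/2 − x³/8 + ⋯ in truncated power-series arithmetic, using a Newton-style iteration on u = y/x, which satisfies u² = 1 + x:

```python
from fractions import Fraction as F

N = 8  # truncation order: all series are taken modulo x^N

def mul(a, b):
    # product of truncated power series (lists of N coefficients)
    c = [F(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv(a):
    # inverse of a series with nonzero constant term, modulo x^N
    b = [F(0)] * N
    b[0] = 1 / a[0]
    for k in range(1, N):
        b[k] = -sum(a[j] * b[k - j] for j in range(1, k + 1)) / a[0]
    return b

# solve u^2 = 1 + x by the iteration u <- (u + (1 + x)/u)/2, starting at u = 1
one_plus_x = [F(1), F(1)] + [F(0)] * (N - 2)
u = [F(1)] + [F(0)] * (N - 1)
for _ in range(4):  # each step doubles the number of correct coefficients
    u = [(ui + wi) / 2 for ui, wi in zip(u, mul(one_plus_x, inv(u)))]

# y = x * u solves y^2 = x^2 + x^3: y = x + x^2/2 - x^3/8 + x^4/16 - ...
y = [F(0)] + u[: N - 1]
print(y[:5])
```

Since this example stays in the regular case, all exponents are integers and no new denominators appear, exactly as the argument above requires.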
(The hypothesis of characteristic zero is needed, since, in characteristic p, the square-free decomposition can provide irreducible factors, such as y^p − x, that have multiple roots over an algebraic extension.) In this context, one defines the length of an edge of a Newton polygon as the difference of the abscissas of its end points. The length of a polygon is the sum of the lengths of its edges. With the hypothesis P(0) ≠ 0, the length of the Newton polygon of P is its degree in y, that is, the number of its roots. The length of an edge of the Newton polygon is the number of roots of a given valuation. This number equals the degree of the previously defined polynomial χ(x). The ramified case thus corresponds to two (or more) solutions that have the same initial term(s). As these solutions must be distinct (square-free hypothesis), they must be distinguished after a finite number of iterations. That is, one eventually gets a polynomial χ(x) that is square free, and the computation can continue as in the regular case for each root of χ(x). As the iteration of the regular case does not increase the denominators of the exponents, this shows that the method provides all solutions as Puiseux series, that is, that the field of Puiseux series over the complex numbers is an algebraically closed field that contains the univariate polynomial ring with complex coefficients. The Newton–Puiseux theorem is not valid over fields of positive characteristic. For example, the equation X² − X = T^{−1} has solutions and (one readily checks on the first few terms that the sum and product of these two series are 1 and −T^{−1} respectively; this is valid whenever the base field K has characteristic different from 2).
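The two series themselves are not shown above; as a sketch (the expansion below is computed from the quadratic formula, not quoted from the article), one of them can be checked numerically against the exact root X = (1 + √(1 + 4/T))/2:

```python
import math

# X^2 - X = T^(-1) has exact roots X = (1 ± sqrt(1 + 4/T))/2. Expanding the
# "+" root for small T gives (computed here, an assumption of this sketch):
#   X = T^(-1/2) + 1/2 + (1/8) T^(1/2) - (1/128) T^(3/2) + ...
def exact_root(t):
    return (1 + math.sqrt(1 + 4 / t)) / 2

def truncated_series(t):
    u = math.sqrt(t)  # u stands for T^(1/2)
    return 1 / u + 0.5 + u / 8 - u**3 / 128

t = 1e-4
err = abs(exact_root(t) - truncated_series(t))
print(err)  # of order T^(5/2): the truncated expansion matches the exact root

# the two roots sum to 1 and multiply to -T^(-1), as stated in the text
s = math.sqrt(1 + 4 / t)
assert abs(((1 + s) / 2 + (1 - s) / 2) - 1) < 1e-12
assert abs(((1 + s) / 2) * ((1 - s) / 2) + 1 / t) < 1e-6
```

Note the powers of 2 in the denominators 8 and 128, which is what the next paragraph alludes to.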
As the powers of 2 in the denominators of the coefficients of the previous example might lead one to believe, the statement of the theorem is not true in positive characteristic. The example of the Artin–Schreier equation X^p − X = T^{−1} shows this: reasoning with valuations shows that X should have valuation −1/p, and if we rewrite it as X = T^{−1/p} + X₁, then and one shows similarly that X₁ should have valuation −1/p², and, proceeding in that way, one obtains the series Since this series makes no sense as a Puiseux series (because the exponents have unbounded denominators), the original equation has no solution. However, such Eisenstein equations are essentially the only ones not to have a solution, because, if K is algebraically closed of characteristic p > 0, then the field of Puiseux series over K is the perfect closure of the maximal tamely ramified extension of K((T)).[4] Similarly to the case of algebraic closure, there is an analogous theorem for real closure: if K is a real closed field, then the field of Puiseux series over K is the real closure of the field of formal Laurent series over K.[5] (This implies the former theorem, since any algebraically closed field of characteristic zero is the unique quadratic extension of some real-closed field.)
There is also an analogous result for p-adic closure: if K is a p-adically closed field with respect to a valuation w, then the field of Puiseux series over K is also p-adically closed.[6] Let X be an algebraic curve[7] given by an affine equation F(x, y) = 0 over an algebraically closed field K of characteristic zero, and consider a point p on X, which we can assume to be (0, 0). We also assume that X is not the coordinate axis x = 0. Then a Puiseux expansion of (the y coordinate of) X at p is a Puiseux series f having positive valuation such that F(x, f(x)) = 0. More precisely, let us define the branches of X at p to be the points q of the normalization Y of X which map to p. For each such q, there is a local coordinate t of Y at q (which is a smooth point) such that the coordinates x and y can be expressed as formal power series of t, say x = tⁿ + ⋯ (since K is algebraically closed, we can assume the valuation coefficient to be 1) and y = ct^k + ⋯: then there is a unique Puiseux series of the form f = cT^{k/n} + ⋯ (a power series in T^{1/n}), such that y(t) = f(x(t)) (the latter expression is meaningful since x(t)^{1/n} = t + ⋯ is a well-defined power series in t).
This is a Puiseux expansion of X at p which is said to be associated to the branch given by q (or simply, the Puiseux expansion of that branch of X), and each Puiseux expansion of X at p is given in this manner for a unique branch of X at p.[8][9] This existence of a formal parametrization of the branches of an algebraic curve or function is also referred to as Puiseux's theorem: it has arguably the same mathematical content as the fact that the field of Puiseux series is algebraically closed, and is a historically more accurate description of the original author's statement.[10] For example, the curve y² = x³ + x² (whose normalization is a line with coordinate t and map t ↦ (t² − 1, t³ − t)) has two branches at the double point (0, 0), corresponding to the points t = +1 and t = −1 on the normalization, whose Puiseux expansions are y = x + x²/2 − x³/8 + ⋯ and y = −x − x²/2 + x³/8 + ⋯ respectively (here, both are power series because the x coordinate is étale at the corresponding points in the normalization). At the smooth point (−1, 0) (which is t = 0 in the normalization), it has a single branch, given by the Puiseux expansion y = −(x + 1)^{1/2} + (x + 1)^{3/2} (the x coordinate ramifies at this point, so the expansion is not a power series). The curve y² = x³ (whose normalization is again a line with coordinate t and map t ↦ (t², t³)), on the other hand, has a single branch at the cusp point (0, 0), whose Puiseux expansion is y = x^{3/2}.
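The two parametrizations above can be verified directly; a quick numerical check (the sample values of t are arbitrary):

```python
# t -> (t^2 - 1, t^3 - t) parametrizes the nodal cubic y^2 = x^3 + x^2
for t in (0.3, -1.7, 2.5):
    x, y = t * t - 1, t**3 - t
    assert abs(y * y - (x**3 + x * x)) < 1e-9

# t -> (t^2, t^3) parametrizes the cuspidal cubic y^2 = x^3; on the branch
# with t >= 0 the Puiseux expansion y = x^(3/2) holds exactly
for t in (0.0, 0.5, 2.0):
    x, y = t * t, t**3
    assert abs(y - x**1.5) < 1e-12

print("both parametrizations satisfy their curve equations")
```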
When K = ℂ is the field of complex numbers, the Puiseux expansion of an algebraic curve (as defined above) is convergent in the sense that, for a given choice of n-th root of x, it converges for small enough |x|, hence defines an analytic parametrization of each branch of X in the neighborhood of p (more precisely, the parametrization is by the n-th root of x). The field of Puiseux series is not complete as a metric space. Its completion, called the Levi-Civita field, can be described as follows: it is the field of formal expressions of the form f = ∑_e c_e T^e, where the support of the coefficients (that is, the set of e such that c_e ≠ 0) is the range of an increasing sequence of rational numbers that either is finite or tends to +∞. In other words, such series admit exponents of unbounded denominators, provided there are finitely many terms of exponent less than A for any given bound A. For example, ∑_{k=1}^{+∞} T^{k+1/k} is not a Puiseux series, but it is the limit of a Cauchy sequence of Puiseux series; in particular, it is the limit of ∑_{k=1}^{N} T^{k+1/k} as N → +∞. However, even this completion is still not "maximally complete", in the sense that it admits non-trivial extensions which are valued fields having the same value group and residue field,[11][12] hence the opportunity of completing it even more. Hahn series are a further (larger) generalization of Puiseux series, introduced by Hans Hahn in the course of the proof of his embedding theorem in 1907, and then studied by him in his approach to Hilbert's seventeenth problem.
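With the ultrametric d(f, g) = exp(−v(f − g)) from earlier, one can check (a sketch; the valuation formula is derived here, not quoted) that the partial sums S_N = ∑_{k=1}^{N} T^{k+1/k} form a Cauchy sequence: for M > N the difference S_M − S_N has valuation (N + 1) + 1/(N + 1), so the distances shrink to 0:

```python
import math
from fractions import Fraction as F

def dist_partial_sums(n, m):
    # smallest exponent surviving in S_n - S_m is (k + 1/k) with k = min+1
    k = min(n, m) + 1
    v = F(k) + F(1, k)
    return math.exp(-float(v))

dists = [dist_partial_sums(n, n + 1) for n in (1, 3, 5, 10)]
print([round(x, 8) for x in dists])  # strictly decreasing toward 0
assert all(a > b for a, b in zip(dists, dists[1:]))
```

The limit itself has exponents with unbounded denominators, which is exactly why it lies in the Levi-Civita field but not in the field of Puiseux series.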
In a Hahn series, instead of requiring the exponents to have bounded denominator, they are required to form a well-ordered subset of the value group (usually ℚ or ℝ). These were later further generalized by Anatoly Maltsev and Bernhard Neumann to a non-commutative setting (they are therefore sometimes known as Hahn–Mal'cev–Neumann series). Using Hahn series, it is possible to give a description of the algebraic closure of the field of power series in positive characteristic which is somewhat analogous to the field of Puiseux series.[13]
https://en.wikipedia.org/wiki/Puiseux_series
Madhava's correction term is a mathematical expression attributed to Madhava of Sangamagrama (c. 1340 – c. 1425), the founder of the Kerala school of astronomy and mathematics, that can be used to give a better approximation to the value of the mathematical constant π (pi) than the partial sum approximation obtained by truncating the Madhava–Leibniz infinite series for π. The Madhava–Leibniz infinite series for π is π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯. Taking the partial sum of the first n terms, we have the following approximation to π: π/4 ≈ 1 − 1/3 + 1/5 − ⋯ + (−1)^{n−1}/(2n − 1). Denoting the Madhava correction term by F(n), we have the following better approximation to π: π/4 ≈ 1 − 1/3 + 1/5 − ⋯ + (−1)^{n−1}/(2n − 1) + (−1)^n F(n). Three different expressions have been attributed to Madhava as possible values of F(n), namely F₁(n) = 1/(4n), F₂(n) = n/(4n² + 1), and F₃(n) = (n² + 1)/(4n³ + 5n). In the extant writings of the mathematicians of the Kerala school there are some indications regarding how the correction terms F₁(n) and F₂(n) were obtained, but there are no indications of how the expression F₃(n) was obtained. This has led to much speculative work on how the formulas might have been derived.
The expressions for F₂(n) and F₃(n) are given explicitly in the Yuktibhasha, a major treatise on mathematics and astronomy authored by the Indian astronomer Jyesthadeva of the Kerala school of mathematics around 1530, but that for F₁(n) appears there only as a step in the argument leading to the derivation of F₂(n).[1][2] The Yuktidipika–Laghuvivrthi commentary of the Tantrasangraha, a treatise written by Nilakantha Somayaji, an astronomer/mathematician belonging to the Kerala school of astronomy and mathematics, and completed in 1501, presents the second correction term in the following verses (Chapter 2: Verses 271–274):[3][1] English translation of the verses:[3] In modern notation this can be stated as follows (where d is the diameter of the circle): If we set p = 2n − 1, the last term in the right-hand side of the above equation reduces to 4dF₂(n). The same commentary also gives the correction term F₃(n) in the following verses (Chapter 2: Verses 295–296): English translation of the verses:[3] In modern notation, this can be stated as follows: where the "multiplier" m = 1 + ((p + 1)/2)². If we set p = 2n − 1, the last term in the right-hand side of the above equation reduces to 4dF₃(n). Let Then, writing p = 2n + 1, the errors |π/4 − s_i(n)| have the following bounds:[2][4] The errors in using these approximations in computing the value of π are The following table gives the values of these errors for a few selected values of n.
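The error behavior can be reproduced numerically; the sketch below (using the three correction terms quoted earlier; n = 50 is an arbitrary choice) shows each successive correction improving the approximation to π:

```python
import math

def partial_sum(n):
    # 4 * (1 - 1/3 + 1/5 - ...), first n terms of the Madhava-Leibniz series
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n))

F1 = lambda n: 1 / (4 * n)
F2 = lambda n: n / (4 * n**2 + 1)
F3 = lambda n: (n**2 + 1) / (4 * n**3 + 5 * n)

n = 50
s = partial_sum(n)
sign = (-1) ** n  # the correction carries the sign of the first omitted term
errors = [abs(math.pi - s)] + [
    abs(math.pi - (s + sign * 4 * F(n))) for F in (F1, F2, F3)
]
print(errors)  # each correction term gives a markedly smaller error
assert errors[0] > errors[1] > errors[2] > errors[3]
```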
It has been noted that the correction terms F₁(n), F₂(n), F₃(n) are the first three convergents of the following continued fraction expressions:[3] The function f(n) that renders the equation exact can be expressed in the following form:[1] The first three convergents of this infinite continued fraction are precisely the correction terms of Madhava. Also, this function f(n) has the following property: In a paper published in 1990, a group of three Japanese researchers proposed an ingenious method by which Madhava might have obtained the three correction terms. Their proposal was based on two assumptions: Madhava used 355/113 as the value of π, and he used the Euclidean algorithm for division.[5][6] Writing and taking π = 355/113, compute the values S(n), express them as fractions with 1 as numerator, and finally ignore the fractional parts in the denominator to obtain approximations: This suggests the following first approximation to S(n): which is the correction term F₁(n) discussed earlier. The fractions that were ignored can then be expressed with 1 as numerator, with the fractional parts in the denominators ignored, to obtain the next approximation. Two such steps are: This yields the next two approximations to S(n), exactly the same as the correction terms F₂(n) and F₃(n) attributed to Madhava.
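The first three convergents can be checked in exact arithmetic. The continued fraction itself is not reproduced above; the partial numerators (2k)² used below are an assumption of this sketch, chosen because they are consistent with the three correction terms:

```python
from fractions import Fraction as Fr

def convergent(n, depth):
    # evaluate 1/(4n + 2^2/(4n + 4^2/(4n + ...))), truncated at `depth` levels
    acc = Fr(4 * n)
    for k in range(depth - 1, 0, -1):
        acc = 4 * n + Fr((2 * k) ** 2) / acc
    return 1 / acc

n = 5  # arbitrary test value
assert convergent(n, 1) == Fr(1, 4 * n)                    # F1(n)
assert convergent(n, 2) == Fr(n, 4 * n**2 + 1)             # F2(n)
assert convergent(n, 3) == Fr(n**2 + 1, 4 * n**3 + 5 * n)  # F3(n)
print([convergent(n, d) for d in (1, 2, 3)])
```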
https://en.wikipedia.org/wiki/Madhava%27s_value_of_%CF%80
The table of chords, created by the Greek astronomer, geometer, and geographer Ptolemy in Egypt during the 2nd century AD, is a trigonometric table in Book I, chapter 11 of Ptolemy's Almagest,[1] a treatise on mathematical astronomy. It is essentially equivalent to a table of values of the sine function. It was the earliest trigonometric table extensive enough for many practical purposes, including those of astronomy (an earlier table of chords by Hipparchus gave chords only for arcs that were multiples of 7+1/2° = π/24 radians).[2] Since the 8th and 9th centuries, the sine and other trigonometric functions have been used in Islamic mathematics and astronomy, reforming the production of sine tables.[3] Khwarizmi and Habash al-Hasib later produced a set of trigonometric tables. A chord of a circle is a line segment whose endpoints are on the circle. Ptolemy used a circle whose diameter is 120 parts. He tabulated the length of a chord whose endpoints are separated by an arc of n degrees, for n ranging from 1/2 to 180 by increments of 1/2. In modern notation, the length of the chord corresponding to an arc of θ degrees is chord(θ) = 120 sin(θ/2), with θ measured in degrees. As θ goes from 0 to 180, the chord of a θ° arc goes from 0 to 120. For tiny arcs, the chord is to the arc angle in degrees as π is to 3, or, more precisely, the ratio can be made as close as desired to π/3 ≈ 1.04719755 by making θ small enough. Thus, for the arc of 1/2°, the chord length is slightly more than the arc angle in degrees. As the arc increases, the ratio of the chord to the arc decreases. When the arc reaches 60°, the chord length is exactly equal to the number of degrees in the arc, i.e. chord 60° = 60. For arcs of more than 60°, the chord is less than the arc, until an arc of 180° is reached, when the chord is only 120. The fractional parts of chord lengths were expressed in sexagesimal (base 60) numerals.
For example, where the length of a chord subtended by a 112° arc is reported to be 99,29,5, it has a length of 99 + 29/60 + 5/60² ≈ 99.4847, rounded to the nearest 1/60².[1] After the columns for the arc and the chord, a third column is labeled "sixtieths". For an arc of θ°, the entry in the "sixtieths" column is (chord((θ + 1/2)°) − chord(θ°))/30. This is the average number of sixtieths of a unit that must be added to chord(θ°) each time the angle increases by one minute of arc, between the entry for θ° and that for (θ + 1/2)°. Thus, it is used for linear interpolation. Glowatzki and Göttsche showed that Ptolemy must have calculated chords to five sexagesimal places in order to achieve the degree of accuracy found in the "sixtieths" column.[4][5] Chapter 10 of Book I of the Almagest presents geometric theorems used for computing chords. Ptolemy used geometric reasoning based on Proposition 10 of Book XIII of Euclid's Elements to find the chords of 72° and 36°. That Proposition states that if an equilateral pentagon is inscribed in a circle, then the area of the square on the side of the pentagon equals the sum of the areas of the squares on the sides of the hexagon and the decagon inscribed in the same circle. He used Ptolemy's theorem on quadrilaterals inscribed in a circle to derive formulas for the chord of a half-arc, the chord of the sum of two arcs, and the chord of a difference of two arcs. The theorem states that for a quadrilateral inscribed in a circle, the product of the lengths of the diagonals equals the sum of the products of the two pairs of lengths of opposite sides. The derivations of trigonometric identities rely on a cyclic quadrilateral in which one side is a diameter of the circle. To find the chords of arcs of 1° and 1/2° he used approximations based on Aristarchus's inequality. The inequality states that for arcs α and β, if 0 < β < α < 90°, then sin α / sin β < α/β < tan α / tan β. Ptolemy showed that for arcs of 1° and 1/2°, the approximations correctly give the first two sexagesimal places after the integer part. Gerald J.
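In modern terms the table entries can be recomputed directly; a sketch (the rounding convention is ours) using chord(θ) = 120 sin(θ/2):

```python
import math

def chord(theta_deg):
    # chord of a theta-degree arc in a circle of diameter 120
    return 120 * math.sin(math.radians(theta_deg / 2))

def to_sexagesimal(x):
    # integer part plus two sexagesimal fractional places (nearest 1/3600)
    total = round(x * 3600)
    d, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return d, m, s

print(to_sexagesimal(chord(60)))   # (60, 0, 0): chord 60 degrees is exactly 60
print(to_sexagesimal(chord(112)))  # (99, 29, 4) by this rounding; the table
                                   # records 99;29,5, one unit high in the
                                   # last place, as discussed below
```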
Toomer, in his translation of the Almagest, gives seven entries where some manuscripts have scribal errors, changing one "digit" (one letter, see below). Glenn Elert has made a comparison between Ptolemy's values and the true values (120 times the sine of half the angle) and has found that the root mean square error is 0.000136. But much of this is simply due to rounding off to the nearest 1/3600, since this equals 0.0002777... There are nevertheless many entries where the last "digit" is off by 1 (too high or too low) from the best rounded value. Ptolemy's values are often too high by 1 in the last place, and more so towards the higher angles. The largest errors are about 0.0004, which still corresponds to an error of only 1 in the last sexagesimal digit.[6] Lengths of arcs of the circle, in degrees, and the integer parts of chord lengths, were expressed in a base 10 numeral system that used 21 of the letters of the Greek alphabet with the meanings given in the following table, a symbol, "∠′", that means 1/2, and a raised circle "○" that fills a blank space (effectively representing zero). Three of the letters, labeled "archaic" in the table below, had not been in use in the Greek language for some centuries before the Almagest was written, but were still in use as numerals and musical notes. Thus, for example, an arc of 143+1/2° is expressed as ρμγ∠′. (As the table only reaches 180°, the Greek numerals for 200 and above are not used.) The fractional parts of chord lengths required great accuracy, and were given in sexagesimal notation in two columns in the table: the first column gives an integer multiple of 1/60, in the range 0–59, the second an integer multiple of 1/60² = 1/3600, also in the range 0–59.
Thus, in Heiberg's edition of the Almagest, with the table of chords on pages 48–63, the beginning of the table, corresponding to arcs from 1/2° to 7+1/2°, looks like this: Later in the table, one can see the base-10 nature of the numerals expressing the integer parts of the arc and the chord length. Thus an arc of 85° is written as πε (π for 80 and ε for 5) and not broken down into 60 + 25. The corresponding chord length is 81 plus a fractional part. The integer part begins with πα, likewise not broken into 60 + 21. But the fractional part, 4/60 + 15/60², is written as δ, for 4, in the 1/60 column, followed by ιε, for 15, in the 1/60² column. The table has 45 lines on each of eight pages, for a total of 360 lines.
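The numeral examples above can be decoded mechanically; a sketch with a partial value map (only the letters needed for the examples in the text, a subset of the full 21-letter system):

```python
# values of some Greek alphabetic numerals used in the table
GREEK = {'α': 1, 'β': 2, 'γ': 3, 'δ': 4, 'ε': 5, 'ι': 10, 'κ': 20,
         'λ': 30, 'μ': 40, 'ν': 50, 'π': 80, 'ρ': 100}

def greek_value(s):
    # the trailing symbol "∠′" denotes one half; other letters add their values
    half = s.endswith('∠′')
    if half:
        s = s[:-2]
    return sum(GREEK[c] for c in s) + (0.5 if half else 0)

print(greek_value('ρμγ∠′'))  # 143.5, the arc 143+1/2° from the text
print(greek_value('πε'))     # 85, written additively as 80 + 5
print(greek_value('πα'))     # 81, the integer part of the chord of 85°
```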
https://en.wikipedia.org/wiki/Ptolemy%27s_table_of_chords
In geometry, a simplex (plural: simplexes or simplices) is a generalization of the notion of a triangle or tetrahedron to arbitrary dimensions. The simplex is so-named because it represents the simplest possible polytope in any given dimension. For example, a 0-dimensional simplex is a point, a 1-dimensional simplex is a line segment, a 2-dimensional simplex is a triangle, a 3-dimensional simplex is a tetrahedron, and a 4-dimensional simplex is a 5-cell. Specifically, a k-simplex is a k-dimensional polytope that is the convex hull of its k + 1 vertices. More formally, suppose the k + 1 points u₀, …, u_k are affinely independent, which means that the k vectors u₁ − u₀, …, u_k − u₀ are linearly independent. Then, the simplex determined by them is the set of points C = {θ₀u₀ + ⋯ + θ_k u_k | ∑_{i=0}^{k} θ_i = 1 and θ_i ≥ 0 for i = 0, …, k}. A regular simplex[1] is a simplex that is also a regular polytope. A regular k-simplex may be constructed from a regular (k − 1)-simplex by connecting a new vertex to all original vertices by the common edge length. The standard simplex or probability simplex[2] is the (k − 1)-dimensional simplex whose vertices are the k standard unit vectors in R^k, or in other words {x ∈ R^k : x₀ + ⋯ + x_{k−1} = 1, x_i ≥ 0 for i = 0, …, k − 1}. In topology and combinatorics, it is common to "glue together" simplices to form a simplicial complex. The geometric simplex and simplicial complex should not be confused with the abstract simplicial complex, in which a simplex is simply a finite set and the complex is a family of such sets that is closed under taking subsets. The concept of a simplex was known to William Kingdon Clifford, who wrote about these shapes in 1886 but called them "prime confines". Henri Poincaré, writing about algebraic topology in 1900, called them "generalized tetrahedra".
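The defining conditions above (affine independence and convex combinations) can be checked numerically; a sketch in which the triangle and the weights are arbitrary choices:

```python
import numpy as np

# vertices of a 2-simplex (triangle) in the plane; any affinely
# independent triple works
U = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

# affine independence: the k difference vectors u_i - u_0 must be
# linearly independent, i.e. have full rank
assert np.linalg.matrix_rank(U[1:] - U[0]) == 2

# a point of the simplex is theta_0 u_0 + ... + theta_k u_k with all
# theta_i >= 0 and sum theta_i = 1
theta = np.array([0.2, 0.5, 0.3])
assert theta.min() >= 0 and abs(theta.sum() - 1) < 1e-12
p = theta @ U
print(p)  # the point (0.5, 0.3), inside the triangle
```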
In 1902, Pieter Hendrik Schoute described the concept first with the Latin superlative simplicissimum ("simplest") and then with the same Latin adjective in the normal form simplex ("simple").[3] The regular simplex family is the first of three regular polytope families, labeled by Donald Coxeter as αₙ, the other two being the cross-polytope family, labeled as βₙ, and the hypercubes, labeled as γₙ. A fourth family, the tessellation of n-dimensional space by infinitely many hypercubes, he labeled as δₙ.[4] The convex hull of any nonempty subset of the n + 1 points that define an n-simplex is called a face of the simplex. Faces are simplices themselves. In particular, the convex hull of a subset of size m + 1 (of the n + 1 defining points) is an m-simplex, called an m-face of the n-simplex. The 0-faces (i.e., the defining points themselves as sets of size 1) are called the vertices (singular: vertex), the 1-faces are called the edges, the (n − 1)-faces are called the facets, and the sole n-face is the whole n-simplex itself. In general, the number of m-faces is equal to the binomial coefficient C(n + 1, m + 1).[5] Consequently, the number of m-faces of an n-simplex may be found in column (m + 1) of row (n + 1) of Pascal's triangle. A simplex A is a coface of a simplex B if B is a face of A. Face and facet can have different meanings when describing types of simplices in a simplicial complex. The extended f-vector for an n-simplex can be computed by (1,1)^{n+1}, like the coefficients of polynomial products. For example, a 7-simplex is (1,1)⁸ = (1,2,1)⁴ = (1,4,6,4,1)² = (1,8,28,56,70,56,28,8,1). The number of 1-faces (edges) of the n-simplex is the n-th triangle number, the number of 2-faces is the (n − 1)th tetrahedron number, the number of 3-faces is the (n − 2)th 5-cell number, and so on. An n-simplex is the polytope with the fewest vertices that requires n dimensions. Consider a line segment AB as a shape in a 1-dimensional space (the 1-dimensional space is the line in which the segment lies).
One can place a new point C somewhere off the line. The new shape, triangle ABC, requires two dimensions; it cannot fit in the original 1-dimensional space. The triangle is the 2-simplex, a simple shape that requires two dimensions. Consider a triangle ABC, a shape in a 2-dimensional space (the plane in which the triangle resides). One can place a new point D somewhere off the plane. The new shape, tetrahedron ABCD, requires three dimensions; it cannot fit in the original 2-dimensional space. The tetrahedron is the 3-simplex, a simple shape that requires three dimensions. Consider tetrahedron ABCD, a shape in a 3-dimensional space (the 3-space in which the tetrahedron lies). One can place a new point E somewhere outside the 3-space. The new shape ABCDE, called a 5-cell, requires four dimensions and is called the 4-simplex; it cannot fit in the original 3-dimensional space. (It also cannot be visualized easily.) This idea can be generalized, that is, adding a single new point outside the currently occupied space, which requires going to the next higher dimension to hold the new shape. This idea can also be worked backward: the line segment we started with is a simple shape that requires a 1-dimensional space to hold it; the line segment is the 1-simplex. The line segment itself was formed by starting with a single point in 0-dimensional space (this initial point is the 0-simplex) and adding a second point, which required the increase to 1-dimensional space. More formally, an (n + 1)-simplex can be constructed as a join (∨ operator) of an n-simplex and a point, ( ). An (m + n + 1)-simplex can be constructed as a join of an m-simplex and an n-simplex. The two simplices are oriented to be completely normal to each other, with translation in a direction orthogonal to both of them. A 1-simplex is the join of two points: ( ) ∨ ( ) = 2 ⋅ ( ). A general 2-simplex (scalene triangle) is the join of three points: ( ) ∨ ( ) ∨ ( ).
Anisosceles triangleis the join of a 1-simplex and a point:{ } ∨ ( ). Anequilateral triangleis 3 ⋅ ( ) or {3}. A general 3-simplex is the join of 4 points:( ) ∨ ( ) ∨ ( ) ∨ ( ). A 3-simplex with mirror symmetry can be expressed as the join of an edge and two points:{ } ∨ ( ) ∨ ( ). A 3-simplex with triangular symmetry can be expressed as the join of an equilateral triangle and 1 point:3.( )∨( )or{3}∨( ). Aregular tetrahedronis4 ⋅ ( )or {3,3} and so on. In some conventions,[7]the empty set is defined to be a (−1)-simplex. The definition of the simplex above still makes sense ifn= −1. This convention is more common in applications to algebraic topology (such assimplicial homology) than to the study of polytopes. ThesePetrie polygons(skew orthogonal projections) show all the vertices of the regular simplex on acircle, and all vertex pairs connected by edges. Thestandardn-simplex(orunitn-simplex) is the subset ofRn+1given by The simplexΔnlies in theaffine hyperplaneobtained by removing the restrictionti≥ 0in the above definition. Then+ 1vertices of the standardn-simplex are the pointsei∈Rn+1, where Astandard simplexis an example of a0/1-polytope, with all coordinates as 0 or 1. It can also be seen onefacetof a regular(n+ 1)-orthoplex. There is a canonical map from the standardn-simplex to an arbitraryn-simplex with vertices (v0, ...,vn) given by The coefficientstiare called thebarycentric coordinatesof a point in then-simplex. Such a general simplex is often called anaffinen-simplex, to emphasize that the canonical map is anaffine transformation. It is also sometimes called anoriented affinen-simplexto emphasize that the canonical map may beorientation preservingor reversing. 
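The canonical map from the standard n-simplex to an arbitrary n-simplex with vertices (v0, ..., vn) can be sketched as follows: the barycentric coordinates t_i weight the vertices. This is a minimal illustration with a hypothetical function name:

```python
def from_barycentric(t, vertices):
    """Canonical (affine) map: send barycentric coordinates t_0..t_n
    (nonnegative, summing to 1) to the point sum_i t_i * v_i."""
    assert abs(sum(t) - 1.0) < 1e-9, "barycentric coordinates must sum to 1"
    dim = len(vertices[0])
    return [sum(ti * v[d] for ti, v in zip(t, vertices)) for d in range(dim)]
```

Equal weights give the centroid; a weight of 1 on a single vertex recovers that vertex.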
More generally, there is a canonical map from the standard (n − 1)-simplex (with n vertices) onto any polytope with n vertices, given by the same equation (modifying indexing): These are known as generalized barycentric coordinates, and express every polytope as the image of a simplex: Δ^{n−1} ↠ P.

A commonly used function from R^n to the interior of the standard (n − 1)-simplex is the softmax function, or normalized exponential function; this generalizes the standard logistic function.

An alternative coordinate system is given by taking the indefinite sum: This yields the alternative presentation by order, namely as nondecreasing n-tuples between 0 and 1: Geometrically, this is an n-dimensional subset of R^n (maximal dimension, codimension 0) rather than of R^{n+1} (codimension 1). The facets, which on the standard simplex correspond to one coordinate vanishing, t_i = 0, here correspond to successive coordinates being equal, s_i = s_{i+1}, while the interior corresponds to the inequalities becoming strict (increasing sequences).

A key distinction between these presentations is the behavior under permuting coordinates: the standard simplex is stabilized by permuting coordinates, while permuting the coordinates of the "ordered simplex" does not leave it invariant, as permuting an ordered sequence generally makes it unordered. Indeed, the ordered simplex is a (closed) fundamental domain for the action of the symmetric group on the n-cube, meaning that the orbit of the ordered simplex under the n! elements of the symmetric group divides the n-cube into n! mostly disjoint simplices (disjoint except for boundaries), showing that this simplex has volume 1/n!. Alternatively, the volume can be computed by an iterated integral, whose successive integrands are 1, x, x²/2, x³/3!, ..., xⁿ/n!.
A further property of this presentation is that it uses the order but not addition, and thus can be defined in any dimension over any ordered set, and for example can be used to define an infinite-dimensional simplex without issues of convergence of sums.

Especially in numerical applications of probability theory, a projection onto the standard simplex is of interest. Given p, possibly with coordinates that are negative or in excess of 1, the closest point t on the simplex has coordinates t_i = max{p_i + Δ, 0}, where Δ is chosen such that Σ_i max{p_i + Δ, 0} = 1.

Δ can be easily calculated from sorting the coordinates of p.[8] The sorting approach takes O(n log n) complexity, which can be improved to O(n) complexity via median-finding algorithms.[9] Projecting onto the simplex is computationally similar to projecting onto the ℓ₁ ball. See also integer programming.

Finally, a simple variant is to replace "summing to 1" with "summing to at most 1"; this raises the dimension by 1, so to simplify notation, the indexing changes: This yields an n-simplex as a corner of the n-cube, and is a standard orthogonal simplex. This is the simplex used in the simplex method, which is based at the origin, and locally models a vertex on a polytope with n facets.

One way to write down a regular n-simplex in R^n is to choose two points to be the first two vertices, choose a third point to make an equilateral triangle, choose a fourth point to make a regular tetrahedron, and so on. Each step requires satisfying equations that ensure that each newly chosen vertex, together with the previously chosen vertices, forms a regular simplex. There are several sets of equations that can be written down and used for this purpose.
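The sort-based projection onto the standard simplex described above can be sketched as follows (a minimal O(n log n) version; the function name is illustrative):

```python
def project_to_simplex(p):
    """Euclidean projection of p onto the standard simplex:
    t_i = max(p_i + delta, 0), with delta chosen so the t_i sum to 1.
    delta is found by scanning the coordinates in decreasing order."""
    u = sorted(p, reverse=True)
    rho, css = 0, 0.0       # index and cumulative sum of the active set
    cumsum = 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        # u_i stays positive after the shift (1 - cumsum) / i
        if ui + (1.0 - cumsum) / i > 0:
            rho, css = i, cumsum
    delta = (1.0 - css) / rho
    return [max(pi + delta, 0.0) for pi in p]
```

Coordinates pushed below zero by the shift Δ are clamped to zero; the remaining ones are shifted so the result sums to 1.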
These include the equality of all the distances between vertices; the equality of all the distances from vertices to the center of the simplex; the fact that the angle subtended through the new vertex by any two previously chosen vertices isπ/3{\displaystyle \pi /3}; and the fact that the angle subtended through the center of the simplex by any two vertices isarccos⁡(−1/n){\displaystyle \arccos(-1/n)}. It is also possible to directly write down a particular regularn-simplex inRnwhich can then be translated, rotated, and scaled as desired. One way to do this is as follows. Denote thebasis vectorsofRnbye1throughen. Begin with the standard(n− 1)-simplex which is the convex hull of the basis vectors. By adding an additional vertex, these become a face of a regularn-simplex. The additional vertex must lie on the line perpendicular to the barycenter of the standard simplex, so it has the form(α/n, ...,α/n)for somereal numberα. Since the squared distance between two basis vectors is 2, in order for the additional vertex to form a regularn-simplex, the squared distance between it and any of the basis vectors must also be 2. This yields aquadratic equationforα. Solving this equation shows that there are two choices for the additional vertex: Either of these, together with the standard basis vectors, yields a regularn-simplex. The above regularn-simplex is not centered on the origin. It can be translated to the origin by subtracting the mean of its vertices. By rescaling, it can be given unit side length. This results in the simplex whose vertices are: for1≤i≤n{\displaystyle 1\leq i\leq n}, and Note that there are two sets of vertices described here. One set uses+{\displaystyle +}in each calculation. The other set uses−{\displaystyle -}in each calculation. This simplex is inscribed in a hypersphere of radiusn/(2(n+1)){\displaystyle {\sqrt {n/(2(n+1))}}}. A different rescaling produces a simplex that is inscribed in a unit hypersphere. 
When this is done, its vertices are where1≤i≤n{\displaystyle 1\leq i\leq n}, and The side length of this simplex is2(n+1)/n{\textstyle {\sqrt {2(n+1)/n}}}. A highly symmetric way to construct a regularn-simplex is to use a representation of thecyclic groupZn+1byorthogonal matrices. This is ann×northogonal matrixQsuch thatQn+1=Iis theidentity matrix, but no lower power ofQis. Applying powers of thismatrixto an appropriate vectorvwill produce the vertices of a regularn-simplex. To carry this out, first observe that for any orthogonal matrixQ, there is a choice of basis in whichQis a block diagonal matrix where eachQiis orthogonal and either2 × 2or1 × 1. In order forQto have ordern+ 1, all of these matrices must have orderdividingn+ 1. Therefore eachQiis either a1 × 1matrix whose only entry is1or, ifnisodd,−1; or it is a2 × 2matrix of the form where eachωiis anintegerbetween zero andninclusive. A sufficient condition for the orbit of a point to be a regular simplex is that the matricesQiform a basis for the non-trivial irreducible real representations ofZn+1, and the vector being rotated is not stabilized by any of them. In practical terms, forneventhis means that every matrixQiis2 × 2, there is an equality of sets and, for everyQi, the entries ofvupon whichQiacts are not both zero. For example, whenn= 4, one possible matrix is Applying this to the vector(1, 0, 1, 0)results in the simplex whose vertices are each of which has distance √5 from the others. Whennis odd, the condition means that exactly one of the diagonal blocks is1 × 1, equal to−1, and acts upon a non-zero entry ofv; while the remaining diagonal blocks, sayQ1, ...,Q(n− 1) / 2, are2 × 2, there is an equality of sets and each diagonal block acts upon a pair of entries ofvwhich are not both zero. So, for example, whenn= 3, the matrix can be For the vector(1, 0, 1/√2), the resulting simplex has vertices each of which has distance 2 from the others. 
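As a concrete sanity check on regularity, one can use the embedding mentioned earlier: the convex hull of the n + 1 standard basis vectors of R^{n+1} is a regular n-simplex with side length √2. A minimal sketch (helper names are illustrative):

```python
from itertools import combinations
from math import sqrt

def regular_simplex_vertices(n):
    """A regular n-simplex in R^(n+1): the n+1 standard basis vectors
    (all pairwise distances equal sqrt(2))."""
    return [[1.0 if i == j else 0.0 for j in range(n + 1)] for i in range(n + 1)]

def pairwise_distances(verts):
    return [sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
            for u, v in combinations(verts, 2)]

def circumradius(verts):
    """Distance from the centroid to any vertex."""
    k = len(verts)
    centroid = [sum(v[d] for v in verts) / k for d in range(len(verts[0]))]
    return sqrt(sum((a - c) ** 2 for a, c in zip(verts[0], centroid)))
```

Dividing the circumradius by the side length recovers √(n/(2(n + 1))), the circumscribed-hypersphere radius quoted above for unit side length.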
Thevolumeof ann-simplex inn-dimensional space with vertices(v0, ...,vn)is where each column of then×ndeterminantis avectorthat points from vertexv0to another vertexvk.[10]This formula is particularly useful whenv0{\displaystyle v_{0}}is the origin. The expression employs aGram determinantand works even when then-simplex's vertices are in a Euclidean space with more thanndimensions, e.g., a triangle inR3{\displaystyle \mathbf {R} ^{3}}. A more symmetric way to compute the volume of ann-simplex inRn{\displaystyle \mathbf {R} ^{n}}is Another common way of computing the volume of the simplex is via theCayley–Menger determinant, which works even when the n-simplex's vertices are in a Euclidean space with more than n dimensions.[11] Without the1/n!it is the formula for the volume of ann-parallelotope. This can be understood as follows: Assume thatPis ann-parallelotope constructed on a basis(v0,e1,…,en){\displaystyle (v_{0},e_{1},\ldots ,e_{n})}ofRn{\displaystyle \mathbf {R} ^{n}}. Given apermutationσ{\displaystyle \sigma }of{1,2,…,n}{\displaystyle \{1,2,\ldots ,n\}}, call a list of verticesv0,v1,…,vn{\displaystyle v_{0},\ v_{1},\ldots ,v_{n}}an-path if (so there aren!n-paths andvn{\displaystyle v_{n}}does not depend on the permutation). The following assertions hold: IfPis the unitn-hypercube, then the union of then-simplexes formed by the convex hull of eachn-path isP, and these simplexes are congruent and pairwise non-overlapping.[12]In particular, the volume of such a simplex is IfPis a general parallelotope, the same assertions hold except that it is no longer true, in dimension > 2, that the simplexes need to be pairwise congruent; yet their volumes remain equal, because then-parallelotope is the image of the unitn-hypercube by thelinear isomorphismthat sends the canonical basis ofRn{\displaystyle \mathbf {R} ^{n}}toe1,…,en{\displaystyle e_{1},\ldots ,e_{n}}. 
As previously, this implies that the volume of a simplex coming from an n-path is: Conversely, given an n-simplex (v₀, v₁, v₂, ..., vₙ) of R^n, it can be supposed that the vectors e₁ = v₁ − v₀, e₂ = v₂ − v₁, ..., eₙ = vₙ − vₙ₋₁ form a basis of R^n. Considering the parallelotope constructed from v₀ and e₁, ..., eₙ, one sees that the previous formula is valid for every simplex.

Finally, the formula at the beginning of this section is obtained by observing that From this formula, it follows immediately that the volume under a standard n-simplex (i.e. between the origin and the simplex in R^{n+1}) is

The volume of a regular n-simplex with unit side length is as can be seen by multiplying the previous formula by x^{n+1}, to get the volume under the n-simplex as a function of its vertex distance x from the origin, differentiating with respect to x, at x = 1/√2 (where the n-simplex side length is 1), and normalizing by the length dx/√(n + 1) of the increment, (dx/(n + 1), ..., dx/(n + 1)), along the normal vector.

Any two (n − 1)-dimensional faces of a regular n-dimensional simplex are themselves regular (n − 1)-dimensional simplices, and they have the same dihedral angle of cos⁻¹(1/n).[13][14]

This can be seen by noting that the center of the standard simplex is (1/(n + 1), ..., 1/(n + 1)), and the centers of its faces are coordinate permutations of (0, 1/n, ..., 1/n). Then, by symmetry, the vector pointing from (1/(n + 1), ..., 1/(n + 1)) to (0, 1/n, ..., 1/n) is perpendicular to the faces.
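Stepping back to the determinant formula at the start of this section: the volume of an n-simplex in R^n is |det(v₁ − v₀, ..., vₙ − v₀)| / n!. A minimal exact-arithmetic sketch (function names are illustrative):

```python
from fractions import Fraction
from math import factorial

def det(m):
    """Determinant by Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign = len(m), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    result = Fraction(sign)
    for i in range(n):
        result *= m[i][i]
    return result

def simplex_volume(vertices):
    """Volume of an n-simplex in R^n: |det(v_1 - v_0, ..., v_n - v_0)| / n!."""
    v0 = vertices[0]
    n = len(v0)
    mat = [[vertices[k + 1][i] - v0[i] for i in range(n)] for k in range(n)]
    return abs(det(mat)) / factorial(n)
```

The corner simplex of the unit cube gives 1/n!, matching the volume of the ordered simplex discussed earlier.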
So the vectors normal to the faces are permutations of(−n,1,…,1){\displaystyle (-n,1,\dots ,1)}, from which the dihedral angles are calculated. An "orthogonal corner" means here that there is a vertex at which all adjacent edges are pairwise orthogonal. It immediately follows that all adjacentfacesare pairwise orthogonal. Such simplices are generalizations of right triangles and for them there exists ann-dimensional version of thePythagorean theorem: The sum of the squared(n− 1)-dimensional volumes of the facets adjacent to the orthogonal corner equals the squared(n− 1)-dimensional volume of the facet opposite of the orthogonal corner. whereA1…An{\displaystyle A_{1}\ldots A_{n}}are facets being pairwise orthogonal to each other but not orthogonal toA0{\displaystyle A_{0}}, which is the facet opposite the orthogonal corner.[15] For a 2-simplex, the theorem is thePythagorean theoremfor triangles with a right angle and for a 3-simplex it isde Gua's theoremfor a tetrahedron with an orthogonal corner. TheHasse diagramof the face lattice of ann-simplex is isomorphic to the graph of the(n+ 1)-hypercube's edges, with the hypercube's vertices mapping to each of then-simplex's elements, including the entire simplex and the null polytope as the extreme points of the lattice (mapped to two opposite vertices on the hypercube). This fact may be used to efficiently enumerate the simplex's face lattice, since more general face lattice enumeration algorithms are more computationally expensive. Then-simplex is also thevertex figureof the(n+ 1)-hypercube. It is also thefacetof the(n+ 1)-orthoplex. Topologically, ann-simplex isequivalentto ann-ball. Everyn-simplex is ann-dimensionalmanifold with corners. In probability theory, the points of the standardn-simplex in(n+ 1)-space form the space of possible probability distributions on a finite set consisting ofn+ 1possible outcomes. 
The correspondence is as follows: For each distribution described as an ordered (n + 1)-tuple of probabilities whose sum is (necessarily) 1, we associate the point of the simplex whose barycentric coordinates are precisely those probabilities. That is, the kth vertex of the simplex is assigned to have the kth probability of the (n + 1)-tuple as its barycentric coefficient. This correspondence is an affine homeomorphism.

Aitchison geometry is a natural way to construct an inner product space from the standard simplex Δ^{D−1}. It defines the following operations on simplices and real numbers:

Since all simplices are self-dual, they can form a series of compounds;

In algebraic topology, simplices are used as building blocks to construct an interesting class of topological spaces called simplicial complexes. These spaces are built from simplices glued together in a combinatorial fashion. Simplicial complexes are used to define a certain kind of homology called simplicial homology.

A finite set of k-simplexes embedded in an open subset of R^n is called an affine k-chain. The simplexes in a chain need not be unique; they may occur with multiplicity. Rather than using standard set notation to denote an affine chain, it is instead the standard practice to use plus signs to separate each member in the set. If some of the simplexes have the opposite orientation, these are prefixed by a minus sign. If some of the simplexes occur in the set more than once, these are prefixed with an integer count. Thus, an affine chain takes the symbolic form of a sum with integer coefficients.

Note that each facet of an n-simplex is an affine (n − 1)-simplex, and thus the boundary of an n-simplex is an affine (n − 1)-chain.
Thus, if we denote one positively oriented affine simplex as with thevj{\displaystyle v_{j}}denoting the vertices, then the boundary∂σ{\displaystyle \partial \sigma }ofσis the chain It follows from this expression, and the linearity of the boundary operator, that the boundary of the boundary of a simplex is zero: Likewise, the boundary of the boundary of a chain is zero:∂2ρ=0{\displaystyle \partial ^{2}\rho =0}. More generally, a simplex (and a chain) can be embedded into amanifoldby means of smooth, differentiable mapf:Rn→M{\displaystyle f:\mathbf {R} ^{n}\to M}. In this case, both the summation convention for denoting the set, and the boundary operation commute with theembedding. That is, where theai{\displaystyle a_{i}}are the integers denoting orientation and multiplicity. For the boundary operator∂{\displaystyle \partial }, one has: whereρis a chain. The boundary operation commutes with the mapping because, in the end, the chain is defined as a set and little more, and the set operation always commutes with themap operation(by definition of a map). Acontinuous mapf:σ→X{\displaystyle f:\sigma \to X}to atopological spaceXis frequently referred to as asingularn-simplex. (A map is generally called "singular" if it fails to have some desirable property such as continuity and, in this case, the term is meant to reflect to the fact that the continuous map need not be an embedding.)[16] Since classicalalgebraic geometryallows one to talk about polynomial equations but not inequalities, thealgebraic standard n-simplexis commonly defined as the subset of affine(n+ 1)-dimensional space, where all coordinates sum up to 1 (thus leaving out the inequality part). 
The algebraic description of this set is

\[ \Delta^n := \left\{ x \in \mathbb{A}^{n+1} \;\middle|\; \sum_{i=1}^{n+1} x_i = 1 \right\}, \]

which equals the scheme-theoretic description \( \Delta_n(R) = \operatorname{Spec}(R[\Delta^n]) \) with

\[ R[\Delta^n] := R[x_1, \ldots, x_{n+1}] \left/ \left( 1 - \sum x_i \right) \right. \]

the ring of regular functions on the algebraic n-simplex (for any ring R).

By using the same definitions as for the classical n-simplex, the n-simplices for different dimensions n assemble into one simplicial object, while the rings R[Δ^n] assemble into one cosimplicial object R[Δ^•] (in the category of schemes resp. rings, since the face and degeneracy maps are all polynomial).

The algebraic n-simplices are used in higher K-theory and in the definition of higher Chow groups.
https://en.wikipedia.org/wiki/Simplex
Inmathematics, thecross productorvector product(occasionallydirected area product, to emphasize its geometric significance) is abinary operationon twovectorsin a three-dimensionalorientedEuclidean vector space(named hereE{\displaystyle E}), and is denoted by the symbol×{\displaystyle \times }. Given twolinearly independent vectorsaandb, the cross product,a×b(read "a cross b"), is a vector that isperpendicularto bothaandb,[1]and thusnormalto the plane containing them. It has many applications in mathematics,physics,engineering, andcomputer programming. It should not be confused with thedot product(projection product). The magnitude of the cross product equals the area of aparallelogramwith the vectors for sides; in particular, the magnitude of the product of two perpendicular vectors is the product of their lengths. Theunitsof the cross-product are the product of the units of each vector. If two vectors areparallelor areanti-parallel(that is, they are linearly dependent), or if either one has zero length, then their cross product is zero.[2] The cross product isanticommutative(that is,a×b= −b×a) and isdistributiveover addition, that is,a× (b+c) =a×b+a×c.[1]The spaceE{\displaystyle E}together with the cross product is analgebra over the real numbers, which is neithercommutativenorassociative, but is aLie algebrawith the cross product being theLie bracket. Like the dot product, it depends on themetricofEuclidean space, but unlike the dot product, it also depends on a choice oforientation(or "handedness") of the space (it is why an oriented space is needed). The resultant vector is invariant of rotation of basis. Due to the dependence onhandedness, the cross product is said to be apseudovector. In connection with the cross product, theexterior productof vectors can be used in arbitrary dimensions (with abivectoror2-formresult) and is independent of the orientation of the space. 
The product can be generalized in various ways, using the orientation and metric structure just as for the traditional 3-dimensional cross product; one can, inndimensions, take the product ofn− 1vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions.[3]Thecross-product in seven dimensionshas undesirable properties (e.g. itfailsto satisfy theJacobi identity), so it is not used in mathematical physics to represent quantities such as multi-dimensionalspace-time.[4](See§ Generalizationsbelow for other dimensions.) The cross product of two vectorsaandbis defined only in three-dimensional space and is denoted bya×b. Inphysicsandapplied mathematics, the wedge notationa∧bis often used (in conjunction with the namevector product),[5][6][7]although in pure mathematics such notation is usually reserved for just the exterior product, an abstraction of the vector product tondimensions. The cross producta×bis defined as a vectorcthat isperpendicular(orthogonal) to bothaandb, with a direction given by theright-hand rule[1]and a magnitude equal to the area of theparallelogramthat the vectors span.[2] The cross product is defined by the formula[8][9] where If the vectorsaandbare parallel (that is, the angleθbetween them is either 0° or 180°), by the above formula, the cross product ofaandbis thezero vector0. The direction of the vectorndepends on the chosen orientation of the space. Conventionally, it is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction ofaand the middle finger in the direction ofb. Then, the vectornis coming out of the thumb (see the adjacent picture). Using this rule implies that the cross product isanti-commutative; that is,b×a= −(a×b). 
By pointing the forefinger towardbfirst, and then pointing the middle finger towarda, the thumb will be forced in the opposite direction, reversing the sign of the product vector. As the cross product operator depends on the orientation of the space, in general the cross product of two vectors is not a "true" vector, but apseudovector. See§ Handednessfor more detail. In 1842,William Rowan Hamiltonfirst described the algebra ofquaternionsand the non-commutative Hamilton product. In particular, when the Hamilton product of two vectors (that is, pure quaternions with zero scalar part) is performed, it results in a quaternion with a scalar and vector part. The scalar and vector part of this Hamilton product corresponds to the negative of dot product and cross product of the two vectors. In 1881,Josiah Willard Gibbs,[10]and independentlyOliver Heaviside, introduced the notation for both the dot product and the cross product using a period (a⋅b) and an "×" (a×b), respectively, to denote them.[11] In 1877, to emphasize the fact that the result of a dot product is ascalarwhile the result of a cross product is avector,William Kingdon Cliffordcoined the alternative namesscalar productandvector productfor the two operations.[11]These alternative names are still widely used in the literature. Both the cross notation (a×b) and the namecross productwere possibly inspired by the fact that eachscalar componentofa×bis computed by multiplying non-corresponding components ofaandb. Conversely, a dot producta⋅binvolves multiplications between corresponding components ofaandb. As explainedbelow, the cross product can be expressed in the form of a determinant of a special3 × 3matrix. According toSarrus's rule, this involves multiplications between matrix elements identified by crossed diagonals. 
If(i,j,k){\displaystyle (\mathbf {\color {blue}{i}} ,\mathbf {\color {red}{j}} ,\mathbf {\color {green}{k}} )}is a positively oriented orthonormal basis, the basis vectors satisfy the following equalities[1] Amnemonicfor these formulas is that they can be deduced from any other of them by acyclic permutationof the basis vectors. This mnemonic applies also to many formulas given in this article. Theanticommutativityof the cross product, implies that The anticommutativity of the cross product (and the obvious lack of linear independence) also implies that These equalities, together with thedistributivityandlinearityof the cross product (though neither follows easily from the definition given above), are sufficient to determine the cross product of any two vectorsaandb. Each vector can be defined as the sum of three orthogonal components parallel to the standard basis vectors: Their cross producta×bcan be expanded using distributivity: This can be interpreted as the decomposition ofa×binto the sum of nine simpler cross products involving vectors aligned withi,j, ork. Each one of these nine cross products operates on two vectors that are easy to handle as they are either parallel or orthogonal to each other. From this decomposition, by using the above-mentionedequalitiesand collecting similar terms, we obtain: meaning that the threescalar componentsof the resulting vectors=s1i+s2j+s3k=a×bare Usingcolumn vectors, we can represent the same result as follows: The cross product can also be expressed as theformaldeterminant:[note 1][1] This determinant can be computed usingSarrus's ruleorcofactor expansion. Using Sarrus's rule, it expands to which gives the components of the resulting vector directly. The latter formula avoids having to change the orientation of the space when we inverse an orthonormal basis. 
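The scalar-component formulas just derived can be sketched directly (a minimal version; function names are illustrative):

```python
def cross(a, b):
    """a x b from the cofactor expansion of the formal determinant
    with rows (i, j, k), (a1, a2, a3), (b1, b2, b3)."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))
```

The result is perpendicular to both inputs, and the cross product of a vector with itself is the zero vector.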
The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides (see Figure 1):[1]

\[ \left\| \mathbf{a} \times \mathbf{b} \right\| = \left\| \mathbf{a} \right\| \left\| \mathbf{b} \right\| \left| \sin \theta \right|. \]

Indeed, one can also compute the volume V of a parallelepiped having a, b and c as edges by using a combination of a cross product and a dot product, called the scalar triple product (see Figure 2): Since the result of the scalar triple product may be negative, the volume of the parallelepiped is given by its absolute value:

Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of perpendicularity in the same way that the dot product is a measure of parallelism. Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The dot product of two unit vectors behaves just oppositely: it is zero when the unit vectors are perpendicular and 1 if the unit vectors are parallel.

Unit vectors enable two convenient identities: the dot product of two unit vectors yields the cosine (which may be positive or negative) of the angle between the two unit vectors. The magnitude of the cross product of the two unit vectors yields the sine (which will always be positive).

If the cross product of two vectors is the zero vector (that is, a × b = 0), then either one or both of the inputs is the zero vector (a = 0 or b = 0), or else they are parallel or antiparallel (a ∥ b) so that the sine of the angle between them is zero (θ = 0° or θ = 180° and sin θ = 0).
The self cross product of a vector is the zero vector: The cross product isanticommutative, distributiveover addition, and compatible with scalar multiplication so that It is notassociative, but satisfies theJacobi identity: Distributivity, linearity and Jacobi identity show that theR3vector spacetogether with vector addition and the cross product forms aLie algebra, the Lie algebra of the realorthogonal groupin 3 dimensions,SO(3). The cross product does not obey thecancellation law; that is,a×b=a×cwitha≠0does not implyb=c, but only that: This can be the case wherebandccancel, but additionally whereaandb−care parallel; that is, they are related by a scale factort, leading to: for some scalart. If, in addition toa×b=a×canda≠0as above, it is the case thata⋅b=a⋅cthen Asb−ccannot be simultaneously parallel (for the cross product to be0) and perpendicular (for the dot product to be 0) toa, it must be the case thatbandccancel:b=c. From the geometrical definition, the cross product is invariant under properrotationsabout the axis defined bya×b. In formulae: More generally, the cross product obeys the following identity undermatrixtransformations: whereM{\displaystyle M}is a 3-by-3matrixand(M−1)T{\displaystyle \left(M^{-1}\right)^{\mathrm {T} }}is thetransposeof theinverseandcof{\displaystyle \operatorname {cof} }is the cofactor matrix. It can be readily seen how this formula reduces to the former one ifM{\displaystyle M}is a rotation matrix. IfM{\displaystyle M}is a 3-by-3 symmetric matrix applied to a generic cross producta×b{\displaystyle \mathbf {a} \times \mathbf {b} }, the following relation holds true: The cross product of two vectors lies in thenull spaceof the2 × 3matrix with the vectors as rows: For the sum of two cross products, the following identity holds: Theproduct ruleof differential calculus applies to any bilinear operation, and therefore also to the cross product: whereaandbare vectors that depend on the real variablet. 
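The Jacobi identity stated above, which makes R³ with the cross product a Lie algebra, can be checked numerically. A minimal sketch with illustrative names:

```python
def cross(a, b):
    """Component formula for the 3-dimensional cross product."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def jacobi(a, b, c):
    """a x (b x c) + b x (c x a) + c x (a x b); identically zero,
    as required of a Lie bracket."""
    terms = [cross(a, cross(b, c)), cross(b, cross(c, a)), cross(c, cross(a, b))]
    return [sum(t[i] for t in terms) for i in range(3)]
```

With integer inputs the arithmetic is exact, so the identity holds exactly rather than up to rounding.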
The cross product is used in both forms of the triple product. Thescalar triple productof three vectors is defined as It is the signed volume of theparallelepipedwith edgesa,bandcand as such the vectors can be used in any order that's aneven permutationof the above ordering. The following therefore are equal: Thevector triple productis the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula Themnemonic"BAC minus CAB" is used to remember the order of the vectors in the right hand member. This formula is used inphysicsto simplify vector calculations. A special case, regardinggradientsand useful invector calculus, is where ∇2is thevector Laplacianoperator. Other identities relate the cross product to the scalar triple product: whereIis the identity matrix. The cross product and the dot product are related by: The right-hand side is theGram determinantofaandb, the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angleθbetween the two vectors, as: the above given relationship can be rewritten as follows: Invoking thePythagorean trigonometric identityone obtains: which is the magnitude of the cross product expressed in terms ofθ, equal to the area of the parallelogram defined byaandb(seedefinitionabove). The combination of this requirement and the property that the cross product be orthogonal to its constituentsaandbprovides an alternative definition of the cross product.[13] Given two vectorsaandcwitha≠0, the equationa×b=cadmits solutions forbif and only ifais orthogonal toc(that is, ifa⋅c= 0). In that case, there exists an infinite family of solutions forb, which are wheretis an arbitrary constant. 
This can be derived using the triple product expansion: c × a = (a × b) × a = (a ⋅ a)b − (a ⋅ b)a. Rearranging to solve for b gives b = (c × a)/(a ⋅ a) + ((a ⋅ b)/(a ⋅ a))a. The coefficient of the last term can be simplified to just the arbitrary constant t to yield the result shown above. The relation |a × b|2 = |a|2|b|2 − (a ⋅ b)2 can be compared with another relation involving the right-hand side, namely Lagrange's identity expressed as Σi<j (aibj − ajbi)2 = |a|2|b|2 − (a ⋅ b)2, where a and b may be n-dimensional vectors. This also shows that the Riemannian volume form for surfaces is exactly the surface element from vector calculus. In the case where n = 3, combining these two equations results in the expression for the magnitude of the cross product in terms of its components:[15] |a × b|2 = (a2b3 − a3b2)2 + (a3b1 − a1b3)2 + (a1b2 − a2b1)2. The same result is found directly using the components of the cross product found from a × b = (a2b3 − a3b2, a3b1 − a1b3, a1b2 − a2b1). In R3, Lagrange's equation is a special case of the multiplicativity |vw| = |v||w| of the norm in the quaternion algebra. It is a special case of another formula, also sometimes called Lagrange's identity, which is the three dimensional case of the Binet–Cauchy identity:[16][17] (a × b) ⋅ (c × d) = (a ⋅ c)(b ⋅ d) − (a ⋅ d)(b ⋅ c). If a = c and b = d, this simplifies to the formula above. The cross product conveniently describes the infinitesimal generators of rotations in R3. Specifically, if n is a unit vector in R3 and R(φ, n) denotes a rotation about the axis through the origin specified by n, with angle φ (measured in radians, counterclockwise when viewed from the tip of n), then (d/dφ)R(φ, n)x |φ=0 = n × x for every vector x in R3. The cross product with n therefore describes the infinitesimal generator of the rotations about n. These infinitesimal generators form the Lie algebra so(3) of the rotation group SO(3), and we obtain the result that the Lie algebra R3 with cross product is isomorphic to the Lie algebra so(3).
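Lagrange's identity and the Binet–Cauchy identity can likewise be confirmed with exact integer arithmetic (helper names illustrative):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a, b, c, d = (1, 2, 3), (4, 0, -1), (2, 5, 7), (-1, 1, 2)

# Lagrange's identity in R^3: |a x b|^2 = |a|^2 |b|^2 - (a.b)^2
ab = cross(a, b)
assert dot(ab, ab) == dot(a, a)*dot(b, b) - dot(a, b)**2

# Binet-Cauchy identity: (a x b).(c x d) = (a.c)(b.d) - (a.d)(b.c)
assert dot(cross(a, b), cross(c, d)) == \
       dot(a, c)*dot(b, d) - dot(a, d)*dot(b, c)
```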
The vector cross product also can be expressed as the product of askew-symmetric matrixand a vector:[16]a×b=[a]×b=[0−a3a2a30−a1−a2a10][b1b2b3]a×b=[b]×Ta=[0b3−b2−b30b1b2−b10][a1a2a3],{\displaystyle {\begin{aligned}\mathbf {a} \times \mathbf {b} =[\mathbf {a} ]_{\times }\mathbf {b} &={\begin{bmatrix}\,0&\!-a_{3}&\,\,a_{2}\\\,\,a_{3}&0&\!-a_{1}\\-a_{2}&\,\,a_{1}&\,0\end{bmatrix}}{\begin{bmatrix}b_{1}\\b_{2}\\b_{3}\end{bmatrix}}\\\mathbf {a} \times \mathbf {b} ={[\mathbf {b} ]_{\times }}^{\mathrm {\!\!T} }\mathbf {a} &={\begin{bmatrix}\,0&\,\,b_{3}&\!-b_{2}\\-b_{3}&0&\,\,b_{1}\\\,\,b_{2}&\!-b_{1}&\,0\end{bmatrix}}{\begin{bmatrix}a_{1}\\a_{2}\\a_{3}\end{bmatrix}},\end{aligned}}}where superscriptTrefers to thetransposeoperation, and [a]×is defined by:[a]×=def[0−a3a2a30−a1−a2a10].{\displaystyle [\mathbf {a} ]_{\times }{\stackrel {\rm {def}}{=}}{\begin{bmatrix}\,\,0&\!-a_{3}&\,\,\,a_{2}\\\,\,\,a_{3}&0&\!-a_{1}\\\!-a_{2}&\,\,a_{1}&\,\,0\end{bmatrix}}.} The columns [a]×,iof the skew-symmetric matrix for a vectoracan be also obtained by calculating the cross product withunit vectors. That is,[a]×,i=a×e^i,i∈{1,2,3}{\displaystyle [\mathbf {a} ]_{\times ,i}=\mathbf {a} \times \mathbf {{\hat {e}}_{i}} ,\;i\in \{1,2,3\}}or[a]×=∑i=13(a×e^i)⊗e^i,{\displaystyle [\mathbf {a} ]_{\times }=\sum _{i=1}^{3}\left(\mathbf {a} \times \mathbf {{\hat {e}}_{i}} \right)\otimes \mathbf {{\hat {e}}_{i}} ,}where⊗{\displaystyle \otimes }is theouter productoperator. 
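The skew-symmetric matrix representation and its column property can be checked directly; a minimal sketch with matrices as nested lists (helper names illustrative):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def skew(a):
    # [a]_x, the skew-symmetric matrix with [a]_x b = a x b.
    return [[0, -a[2], a[1]],
            [a[2], 0, -a[0]],
            [-a[1], a[0], 0]]

def matvec(M, v):
    return tuple(sum(M[i][j]*v[j] for j in range(3)) for i in range(3))

a, b = (1, 2, 3), (4, 0, -1)
assert matvec(skew(a), b) == cross(a, b)

# Column i of [a]_x is a x e_i, as stated above.
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
for i in range(3):
    assert tuple(skew(a)[r][i] for r in range(3)) == cross(a, e[i])
```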
Also, ifais itself expressed as a cross product:a=c×d{\displaystyle \mathbf {a} =\mathbf {c} \times \mathbf {d} }then[a]×=dcT−cdT.{\displaystyle [\mathbf {a} ]_{\times }=\mathbf {d} \mathbf {c} ^{\mathrm {T} }-\mathbf {c} \mathbf {d} ^{\mathrm {T} }.} Evaluation of the cross product givesa=c×d=(c2d3−c3d2c3d1−c1d3c1d2−c2d1){\displaystyle \mathbf {a} =\mathbf {c} \times \mathbf {d} ={\begin{pmatrix}c_{2}d_{3}-c_{3}d_{2}\\c_{3}d_{1}-c_{1}d_{3}\\c_{1}d_{2}-c_{2}d_{1}\end{pmatrix}}}Hence, the left hand side equals[a]×=[0c2d1−c1d2c3d1−c1d3c1d2−c2d10c3d2−c2d3c1d3−c3d1c2d3−c3d20]{\displaystyle [\mathbf {a} ]_{\times }={\begin{bmatrix}0&c_{2}d_{1}-c_{1}d_{2}&c_{3}d_{1}-c_{1}d_{3}\\c_{1}d_{2}-c_{2}d_{1}&0&c_{3}d_{2}-c_{2}d_{3}\\c_{1}d_{3}-c_{3}d_{1}&c_{2}d_{3}-c_{3}d_{2}&0\end{bmatrix}}}Now, for the right hand side,cdT=[c1d1c1d2c1d3c2d1c2d2c2d3c3d1c3d2c3d3]{\displaystyle \mathbf {c} \mathbf {d} ^{\mathrm {T} }={\begin{bmatrix}c_{1}d_{1}&c_{1}d_{2}&c_{1}d_{3}\\c_{2}d_{1}&c_{2}d_{2}&c_{2}d_{3}\\c_{3}d_{1}&c_{3}d_{2}&c_{3}d_{3}\end{bmatrix}}}And its transpose isdcT=[c1d1c2d1c3d1c1d2c2d2c3d2c1d3c2d3c3d3]{\displaystyle \mathbf {d} \mathbf {c} ^{\mathrm {T} }={\begin{bmatrix}c_{1}d_{1}&c_{2}d_{1}&c_{3}d_{1}\\c_{1}d_{2}&c_{2}d_{2}&c_{3}d_{2}\\c_{1}d_{3}&c_{2}d_{3}&c_{3}d_{3}\end{bmatrix}}}Evaluation of the right hand side givesdcT−cdT=[0c2d1−c1d2c3d1−c1d3c1d2−c2d10c3d2−c2d3c1d3−c3d1c2d3−c3d20]{\displaystyle \mathbf {d} \mathbf {c} ^{\mathrm {T} }-\mathbf {c} \mathbf {d} ^{\mathrm {T} }={\begin{bmatrix}0&c_{2}d_{1}-c_{1}d_{2}&c_{3}d_{1}-c_{1}d_{3}\\c_{1}d_{2}-c_{2}d_{1}&0&c_{3}d_{2}-c_{2}d_{3}\\c_{1}d_{3}-c_{3}d_{1}&c_{2}d_{3}-c_{3}d_{2}&0\end{bmatrix}}}Comparison shows that the left hand side equals the right hand side. This result can be generalized to higher dimensions usinggeometric algebra. 
In particular in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is equivalent to the grade-1 part of the product of a bivector and vector.[18]In three dimensions bivectors aredualto vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to vectors.[18] This notation is also often much easier to work with, for example, inepipolar geometry. From the general properties of the cross product follows immediately that[a]×a=0{\displaystyle [\mathbf {a} ]_{\times }\,\mathbf {a} =\mathbf {0} }andaT[a]×=0{\displaystyle \mathbf {a} ^{\mathrm {T} }\,[\mathbf {a} ]_{\times }=\mathbf {0} }and from fact that [a]×is skew-symmetric it follows thatbT[a]×b=0.{\displaystyle \mathbf {b} ^{\mathrm {T} }\,[\mathbf {a} ]_{\times }\,\mathbf {b} =0.} The above-mentioned triple product expansion (bac–cab rule) can be easily proven using this notation. As mentioned above, the Lie algebraR3with cross product is isomorphic to the Lie algebraso(3), whose elements can be identified with the 3×3 skew-symmetric matrices. The mapa→ [a]×provides an isomorphism betweenR3andso(3). Under this map, the cross product of 3-vectors corresponds to thecommutatorof 3x3 skew-symmetric matrices. C1=[0000010−10],C2=[00−1000100],C3=[010−100000]{\displaystyle \mathbf {C} _{1}={\begin{bmatrix}0&0&0\\0&0&1\\0&-1&0\end{bmatrix}},\quad \mathbf {C} _{2}={\begin{bmatrix}0&0&-1\\0&0&0\\1&0&0\end{bmatrix}},\quad \mathbf {C} _{3}={\begin{bmatrix}0&1&0\\-1&0&0\\0&0&0\end{bmatrix}}} These matrices share the following properties: Theorthogonal projection matrixof a vectorv≠0{\displaystyle \mathbf {v} \neq \mathbf {0} }is given byPv=v(vTv)−1vT{\displaystyle \mathbf {P} _{\mathbf {v} }=\mathbf {v} \left(\mathbf {v} ^{\textrm {T}}\mathbf {v} \right)^{-1}\mathbf {v} ^{T}}. 
The projection matrix onto the orthogonal complement is given by Pv⊥ = I − Pv, where I is the identity matrix. For the special case of v = ei, it can be verified that Pe1⊥=[000010001],Pe2⊥=[100000001],Pe3⊥=[100010000]{\displaystyle \mathbf {P} _{\mathbf {e} _{1}}^{^{\perp }}={\begin{bmatrix}0&0&0\\0&1&0\\0&0&1\end{bmatrix}},\quad \mathbf {P} _{\mathbf {e} _{2}}^{^{\perp }}={\begin{bmatrix}1&0&0\\0&0&0\\0&0&1\end{bmatrix}},\quad \mathbf {P} _{\mathbf {e} _{3}}^{^{\perp }}={\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}}} For other properties of orthogonal projection matrices, see projection (linear algebra). The cross product can alternatively be defined in terms of the Levi-Civita tensor Eijk and a dot product ηmi, which are useful in converting vector notation for tensor applications: c = a × b with components cm = Σ ηmi Eijk aj bk (summing over i, j, k), where the indices i, j, k correspond to vector components. This characterization of the cross product is often expressed more compactly using the Einstein summation convention as cm = ηmi Eijk aj bk, in which repeated indices are summed over the values 1 to 3. In a positively-oriented orthonormal basis ηmi = δmi (the Kronecker delta) and Eijk = εijk (the Levi-Civita symbol). In that case, this representation is another form of the skew-symmetric representation of the cross product: ([a]×)ik = εijk aj. In classical mechanics, representing the cross product by using the Levi-Civita symbol can make mechanical symmetries obvious when physical systems are isotropic.
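In an orthonormal, right-handed basis the index formula reduces to a sum over the Levi-Civita symbol, which can be evaluated directly (function names illustrative):

```python
def levi_civita(i, j, k):
    # epsilon_{ijk} for 0-based indices: sign of the permutation, 0 on repeats.
    if (i, j, k) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        return 1
    if (i, j, k) in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
        return -1
    return 0

def cross_levi(a, b):
    # (a x b)_i = sum_{j,k} epsilon_{ijk} a_j b_k in a right-handed
    # orthonormal basis, where eta reduces to the Kronecker delta.
    return tuple(sum(levi_civita(i, j, k)*a[j]*b[k]
                     for j in range(3) for k in range(3))
                 for i in range(3))

assert cross_levi((1, 0, 0), (0, 1, 0)) == (0, 0, 1)
```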
(An example: consider a particle in a Hooke's law potential in three-space, free to oscillate in three dimensions; none of these dimensions are "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the abovementioned Levi-Civita representation).[citation needed] The word "xyzzy" can be used to remember the definition of the cross product. If a = b × c, where a = axi + ayj + azk, b = bxi + byj + bzk and c = cxi + cyj + czk, then: ax = bycz − bzcy, ay = bzcx − bxcz, az = bxcy − bycx. The second and third equations can be obtained from the first by simply vertically rotating the subscripts, x → y → z → x. The problem, of course, is how to remember the first equation, and two options are available for this purpose: either to remember the relevant two diagonals of Sarrus's scheme (those containing i), or to remember the xyzzy sequence. Since the first diagonal in Sarrus's scheme is just the main diagonal of the above-mentioned 3×3 matrix, the first three letters of the word xyzzy can be very easily remembered. Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. This may be helpful for remembering the correct cross product formula. If the equation a = b × c is written with the three vectors as columns of components, then: to obtain the formula for ax we simply drop the bx and cx from the formula, and take the next two components down, ax = bycz − bzcy. When doing this for ay the next two elements down should "wrap around" the matrix so that after the z component comes the x component: the next two components are z and x (in that order), giving ay = bzcx − bxcz, while for az the next two components are x and y, giving az = bxcy − bycx. For ax then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left and simply multiply by the element that the cross points to in the right-hand matrix.
We then subtract the next element down on the left, multiplied by the element that the cross points to here as well. This results in our formula for ax: ax = bycz − bzcy. We can do this in the same way for ay and az to construct their associated formulas. The cross product has applications in various contexts. For example, it is used in computational geometry, physics and engineering. A non-exhaustive list of examples follows. The cross product appears in the calculation of the distance of two skew lines (lines not in the same plane) from each other in three-dimensional space. The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics. For example, the winding of a polygon (clockwise or anticlockwise) about a point within the polygon can be calculated by triangulating the polygon (like spoking a wheel) and summing the angles (between the spokes), using the cross product to keep track of the sign of each angle. In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by three points p1 = (x1, y1), p2 = (x2, y2) and p3 = (x3, y3). It corresponds to the direction (upward or downward) of the cross product of the two coplanar vectors defined by the two pairs of points (p1, p2) and (p1, p3). The sign of the acute angle is the sign of the expression P = (x2 − x1)(y3 − y1) − (y2 − y1)(x3 − x1), which is the signed length of the cross product of the two vectors. To use the cross product, simply extend the 2D vectors p1, p2, p3 to coplanar 3D vectors by setting zk = 0 for each of them.
In the "right-handed" coordinate system, if the result P is 0, the points are collinear; if it is positive, the three points constitute a positive angle of rotation around p1 from p2 to p3, otherwise a negative angle. From another point of view, the sign of P tells whether p3 lies to the left or to the right of the line p1p2. The cross product is used in calculating the volume of a polyhedron such as a tetrahedron or parallelepiped. The angular momentum L of a particle about a given origin is defined as L = r × p, where r is the position vector of the particle relative to the origin and p is the linear momentum of the particle. In the same way, the moment M of a force FB applied at point B around point A is given as M = rAB × FB, where rAB is the vector from A to B. In mechanics the moment of a force is also called torque and written as τ. Since position r, linear momentum p and force F are all true vectors, both the angular momentum L and the moment of a force M are pseudovectors or axial vectors. The cross product frequently appears in the description of rigid motions. Two points P and Q on a rigid body can be related by vP = vQ + ω × (rP − rQ), where r is a point's position, v is its velocity and ω is the body's angular velocity. Since position r and velocity v are true vectors, the angular velocity ω is a pseudovector or axial vector. The cross product is used to describe the Lorentz force experienced by a moving electric charge qe: F = qe(E + v × B). Since velocity v, force F and electric field E are all true vectors, the magnetic field B is a pseudovector. In vector calculus, the cross product is used to define the formula for the vector operator curl.
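The planar orientation test just described can be implemented in a few lines (the function name `orientation` is illustrative):

```python
def orientation(p1, p2, p3):
    """Sign of the z-component of (p2 - p1) x (p3 - p1) in right-handed
    plane coordinates: > 0 for a left turn, < 0 for a right turn,
    0 when the three points are collinear."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1)*(y3 - y1) - (y2 - y1)*(x3 - x1)

assert orientation((0, 0), (1, 0), (0, 1)) > 0   # left turn
assert orientation((0, 0), (1, 0), (1, -1)) < 0  # right turn
assert orientation((0, 0), (1, 1), (2, 2)) == 0  # collinear
```

With integer coordinates this predicate is exact, which is why it is a standard building block in computational geometry.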
The trick of rewriting a cross product in terms of a matrix multiplication appears frequently inepipolarand multi-view geometry, in particular when deriving matching constraints. The cross product can be defined in terms of the exterior product. It can be generalized to anexternal productin other than three dimensions.[19]This generalization allows a natural geometric interpretation of the cross product. Inexterior algebrathe exterior product of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectorsaandb, one can view the bivectora∧bas the oriented parallelogram spanned byaandb. The cross product is then obtained by taking theHodge starof the bivectora∧b, mapping2-vectorsto vectors: This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. In ad-dimensional space, Hodge star takes ak-vector to a (d–k)-vector; thus only ind =3 dimensions is the result an element of dimension one (3–2 = 1), i.e. a vector. For example, ind =4 dimensions, the cross product of two vectors has dimension 4–2 = 2, giving a bivector. Thus, only in three dimensions does cross product define an algebra structure to multiply vectors. When physics laws are written as equations, it is possible to make an arbitrary choice of the coordinate system, including handedness. One should be careful to never write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of twopolar vectors, one must take into account that the result is anaxial vector. Therefore, for consistency, the other side must also be an axial vector.[citation needed]More generally, the result of a cross product may be either a polar vector or an axial vector, depending on the type of its operands (polar vectors or axial vectors). 
Namely, polar vectors and axial vectors are interrelated in the following ways under application of the cross product: the cross product of two polar vectors or of two axial vectors is an axial vector, while the cross product of a polar vector and an axial vector (in either order) is a polar vector. Or symbolically: polar × polar = axial, axial × axial = axial, polar × axial = axial × polar = polar. Because the cross product may also be a polar vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a polar vector and the other one is an axial vector (e.g., the cross product of a polar vector with the cross product of two polar vectors). For instance, a vector triple product involving three polar vectors is a polar vector. A handedness-free approach is possible using exterior algebra. Let (i, j, k) be an orthonormal basis. The vectors i, j and k do not depend on the orientation of the space. They can even be defined in the absence of any orientation. They therefore cannot be axial vectors. But if i and j are polar vectors, then k is an axial vector for i × j = k or j × i = k. This is a paradox. "Axial" and "polar" are physical qualifiers for physical vectors; that is, vectors which represent physical quantities such as the velocity or the magnetic field. The vectors i, j and k are mathematical vectors, neither axial nor polar. In mathematics, the cross product of two vectors is a vector. There is no contradiction. There are several ways to generalize the cross product to higher dimensions. The cross product can be seen as one of the simplest Lie products, and is thus generalized by Lie algebras, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called Lie theory. For example, the Heisenberg algebra gives another Lie algebra structure on R3. In the basis {x, y, z}, the product is [x, y] = z, [x, z] = [y, z] = 0. The cross product can also be described in terms of quaternions.
In general, if a vector[a1,a2,a3]is represented as the quaterniona1i+a2j+a3k, the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors. A cross product for 7-dimensional vectors can be obtained in the same way by using theoctonionsinstead of the quaternions. The nonexistence of nontrivial vector-valued cross products of two vectors in other dimensions is related to the result fromHurwitz's theoremthat the onlynormed division algebrasare the ones with dimension 1, 2, 4, and 8. In general dimension, there is no direct analogue of the binary cross product that yields specifically a vector. There is however the exterior product, which has similar properties, except that the exterior product of two vectors is now a2-vectorinstead of an ordinary vector. As mentioned above, the cross product can be interpreted as the exterior product in three dimensions by using theHodge staroperator to map 2-vectors to vectors. The Hodge dual of the exterior product yields an(n− 2)-vector, which is a natural generalization of the cross product in any number of dimensions. The exterior product and dot product can be combined (through summation) to form thegeometric productin geometric algebra. As mentioned above, the cross product can be interpreted in three dimensions as the Hodge dual of the exterior product. In any finitendimensions, the Hodge dual of the exterior product ofn− 1vectors is a vector. So, instead of a binary operation, in arbitrary finite dimensions, the cross product is generalized as the Hodge dual of the exterior product of some givenn− 1vectors. 
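The quaternion recipe is easy to check numerically. In the sketch below (helper names illustrative) a quaternion is a `(w, x, y, z)` tuple and vectors are embedded as pure quaternions:

```python
def qmul(p, q):
    # Hamilton product of quaternions (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

a, b = (1, 2, 3), (4, 0, -1)
prod = qmul((0, *a), (0, *b))

# Real part is the negative of the dot product: -(a . b).
assert prod[0] == -(1*4 + 2*0 + 3*(-1))

# Imaginary (vector) part is the cross product a x b.
assert prod[1:] == (2*(-1) - 3*0, 3*4 - 1*(-1), 1*0 - 2*4)
```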
This generalization is calledexternal product.[20] Interpreting the three-dimensionalvector spaceof the algebra as the2-vector(not the 1-vector)subalgebraof the three-dimensional geometric algebra, wherei=e2e3{\displaystyle \mathbf {i} =\mathbf {e_{2}} \mathbf {e_{3}} },j=e1e3{\displaystyle \mathbf {j} =\mathbf {e_{1}} \mathbf {e_{3}} }, andk=e1e2{\displaystyle \mathbf {k} =\mathbf {e_{1}} \mathbf {e_{2}} }, the cross product corresponds exactly to thecommutator productin geometric algebra and both use the same symbol×{\displaystyle \times }. The commutator product is defined for 2-vectorsA{\displaystyle A}andB{\displaystyle B}in geometric algebra as: whereAB{\displaystyle AB}is the geometric product.[21] The commutator product could be generalised to arbitrarymultivectorsin three dimensions, which results in a multivector consisting of only elements ofgrades1 (1-vectors/true vectors) and 2 (2-vectors/pseudovectors). While the commutator product of two 1-vectors is indeed the same as the exterior product and yields a 2-vector, the commutator of a 1-vector and a 2-vector yields a true vector, corresponding instead to theleft and right contractionsin geometric algebra. The commutator product of two 2-vectors has no corresponding equivalent product, which is why the commutator product is defined in the first place for 2-vectors. Furthermore, the commutator triple product of three 2-vectors is the same as thevector triple productof the same three pseudovectors in vector algebra. However, the commutator triple product of three 1-vectors in geometric algebra is instead thenegativeof thevector triple productof the same three true vectors in vector algebra. Generalizations to higher dimensions is provided by the same commutator product of 2-vectors in higher-dimensional geometric algebras, but the 2-vectors are no longer pseudovectors. 
Just as the commutator product/cross product of 2-vectors in three dimensionscorrespond to the simplest Lie algebra, the 2-vector subalgebras of higher dimensional geometric algebra equipped with the commutator product also correspond to the Lie algebras.[22]Also as in three dimensions, the commutator product could be further generalised to arbitrary multivectors. In the context ofmultilinear algebra, the cross product can be seen as the (1,2)-tensor (amixed tensor, specifically abilinear map) obtained from the 3-dimensionalvolume form,[note 2]a (0,3)-tensor, byraising an index. In detail, the 3-dimensional volume form defines a productV×V×V→R,{\displaystyle V\times V\times V\to \mathbf {R} ,}by taking the determinant of the matrix given by these 3 vectors. Byduality, this is equivalent to a functionV×V→V∗,{\displaystyle V\times V\to V^{*},}(fixing any two inputs gives a functionV→R{\displaystyle V\to \mathbf {R} }by evaluating on the third input) and in the presence of aninner product(such as the dot product; more generally, a non-degenerate bilinear form), we have an isomorphismV→V∗,{\displaystyle V\to V^{*},}and thus this yields a mapV×V→V,{\displaystyle V\times V\to V,}which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a (1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index". 
Translating the above algebra into geometry, the function "volume of the parallelepiped defined by (a, b, −)" (where the first two vectors are fixed and the last is an input), which defines a function V → R, can be represented uniquely as the dot product with a vector: this vector is the cross product a × b. From this perspective, the cross product is defined by the scalar triple product, Vol(a, b, c) = (a × b) ⋅ c. In the same way, in higher dimensions one may define generalized cross products by raising indices of the n-dimensional volume form, which is a (0, n)-tensor. The most direct generalizations of the cross product are to define either: a (n − 1)-ary product taking n − 1 vectors in Rn to a vector in Rn, or a binary product taking values in the 2-vectors. These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and parity. The (n − 1)-ary product can be described as follows: given n − 1 vectors v1, …, vn−1 in Rn, define their generalized cross product vn = v1 × ⋯ × vn−1 as: perpendicular to each vi, with magnitude the volume of the parallelotope defined by the vi, and oriented so that (v1, …, vn) is positively oriented. This is the unique multilinear, alternating product which evaluates to e1 × ⋯ × en−1 = en, e2 × ⋯ × en = e1, and so forth for cyclic permutations of indices. In coordinates, one can give a formula for this (n − 1)-ary analogue of the cross product in Rn as the formal determinant whose first n − 1 rows are the components of v1, …, vn−1 and whose last row is the basis vectors e1, …, en. This formula is identical in structure to the determinant formula for the normal cross product in R3 except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors (v1, …, vn−1, v1 × ⋯ × vn−1) have a positive orientation with respect to (e1, …, en).
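This (n − 1)-ary construction can be sketched in a few lines by evaluating the formal determinant with the basis row last (helper names illustrative; not optimized):

```python
def det(M):
    # Determinant by Laplace expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def generalized_cross(*vectors):
    # (n-1)-ary cross product in R^n: component i is the determinant of
    # the matrix whose rows are v1, ..., v_{n-1} followed by e_i.
    n = len(vectors) + 1
    assert all(len(v) == n for v in vectors)
    return tuple(det([list(v) for v in vectors] +
                     [[1 if j == i else 0 for j in range(n)]])
                 for i in range(n))

# n = 3 reduces to the ordinary binary cross product ...
assert generalized_cross((1, 0, 0), (0, 1, 0)) == (0, 0, 1)
# ... and in R^4 the ternary product of e1, e2, e3 is e4.
assert generalized_cross((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)) == (0, 0, 0, 1)
```

Replacing the basis row by e_i and taking the full determinant is exactly the cofactor expansion along the last row, sign included.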
If n is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that n is even, however, the distinction must be kept. This (n − 1)-ary form enjoys many of the same properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate independent way as the Hodge dual of the wedge product of the arguments. Moreover, the product [v1, …, vn] := v1 × ⋯ × vn satisfies the Filippov identity, and so it endows Rn+1 with a structure of n-Lie algebra (see Proposition 1 of [23]). In 1773, Joseph-Louis Lagrange used the component form of both the dot and cross products in order to study the tetrahedron in three dimensions.[24][note 3] In 1843, William Rowan Hamilton introduced the quaternion product, and with it the terms vector and scalar. Given two quaternions [0, u] and [0, v], where u and v are vectors in R3, their quaternion product can be summarized as [−u ⋅ v, u × v]. James Clerk Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other reasons quaternions for a time were an essential part of physics education. In 1844, Hermann Grassmann published a geometric algebra not tied to dimension two or three. Grassmann developed several products, including a cross product represented then by [uv].[25] (See also: exterior algebra.) In 1853, Augustin-Louis Cauchy, a contemporary of Grassmann, published a paper on algebraic keys which were used to solve equations and had the same multiplication properties as the cross product.[26][27] In 1878, William Kingdon Clifford, known for a precursor to the Clifford algebra named in his honor, published Elements of Dynamic, in which the term vector product is attested.
In the book, this product of two vectors is defined to have magnitude equal to the area of the parallelogram of which they are two sides, and direction perpendicular to their plane.[28] In lecture notes from 1881, Gibbs represented the cross product by u × v and called it the skew product.[29][30] In 1901, Gibbs's student Edwin Bidwell Wilson edited and extended these lecture notes into the textbook Vector Analysis. Wilson kept the term skew product, but observed that the alternative terms cross product[note 4] and vector product were more frequent.[31] In 1908, Cesare Burali-Forti and Roberto Marcolongo introduced the vector product notation u ∧ v.[25] This is used in France and other areas to this day, as the symbol × is already used to denote multiplication and the Cartesian product.[citation needed]
https://en.wikipedia.org/wiki/Cross_product
Inmathematics, theseven-dimensional cross productis abilinear operationonvectorsinseven-dimensional Euclidean space. It assigns to any two vectorsa,bin⁠R7{\displaystyle \mathbb {R} ^{7}}⁠a vectora×balso in⁠R7{\displaystyle \mathbb {R} ^{7}}⁠.[1]Like thecross productin three dimensions, the seven-dimensional product isanticommutativeanda×bis orthogonal both toaand tob. Unlike in three dimensions, it does not satisfy theJacobi identity, and while the three-dimensional cross product is unique up to a sign, there are many seven-dimensional cross products. The seven-dimensional cross product has the same relationship to theoctonionsas the three-dimensional product does to thequaternions. The seven-dimensional cross product is one way of generalizing the cross product to other than three dimensions, and it is the only other bilinear product of two vectors that is vector-valued, orthogonal, and has the same magnitude as in the 3D case.[2]In other dimensions there are vector-valued products of three or more vectors that satisfy these conditions, and binary products withbivectorresults. The product can be given by a multiplication table, such as the one here. This table, due to Cayley,[3][4]gives the product of orthonormal basis vectorseiandejfor eachi,jfrom 1 to 7. For example, from the table The table can be used to calculate the product of any two vectors. For example, to calculate thee1component ofx×ythe basis vectors that multiply to producee1can be picked out to give This can be repeated for the other six components. There are 480 such tables for any given set of orthogonal basis vectors, one for each of the products satisfying the definition such that each entry in the table can be expressed in terms of a single element of the basis.[5]This table can be summarized by the relation[4] whereεijk{\displaystyle \varepsilon _{ijk}}is theLevi-Civita symbol, a completely antisymmetric tensor with a positive value +1 whenijk= 123, 145, 176, 246, 257, 347, 365. 
The top left 3 × 3 corner of this table gives the cross product in three dimensions. A cross product on aEuclidean spaceVis abilinear mapfromV×VtoV, mapping vectorsxandyinVto another vectorx×yalso inV, wherex×yhas the properties[1][6] where (x·y) is the Euclideandot productand |x| is theEuclidean norm. The first property states that the product is perpendicular to its arguments, while the second property gives the magnitude of the product. An equivalent expression in terms of theangleθbetween the vectors[7]is[8] which is the area of theparallelogramin the plane ofxandywith the two vectors as sides.[9]A third statement of the magnitude condition is ifx×x= 0 is assumed as a separate axiom.[10] Given the properties of bilinearity, orthogonality and magnitude, a nonzero cross product exists only in three and seven dimensions.[2][11][10]This can be shown by postulating the properties required for the cross product, then deducing an equation which is only satisfied when the dimension is 0, 1, 3 or 7. In zero dimensions there is only the zero vector, while in one dimension all vectors are parallel, so in both these cases the product must be identically zero. The restriction to 0, 1, 3 and 7 dimensions is related toHurwitz's theorem, thatnormed division algebrasare only possible in 1, 2, 4 and 8 dimensions. The cross product is formed from the product of the normed division algebra by restricting it to the 0, 1, 3, or 7 imaginary dimensions of the algebra, giving nonzero products in only three and seven dimensions.[12] In contrast to the three-dimensional cross product, which is unique (apart from sign), there are many possible binary cross products in seven dimensions. 
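The Cayley table above (ε positive for ijk = 123, 145, 176, 246, 257, 347, 365) can be turned into code, and the defining orthogonality and magnitude properties checked numerically; a sketch with illustrative names, using the article's 1-based indices:

```python
def cross7(x, y):
    # Seven-dimensional cross product from the structure constants
    # eps_{ijk} = +1 for ijk = 123, 145, 176, 246, 257, 347, 365,
    # extended by total antisymmetry; (x cross y)_k = eps_{ijk} x_i y_j.
    pos = [(1, 2, 3), (1, 4, 5), (1, 7, 6), (2, 4, 6),
           (2, 5, 7), (3, 4, 7), (3, 6, 5)]
    eps = {}
    for (i, j, k) in pos:
        for (a, b, c), s in [((i, j, k), 1), ((j, k, i), 1), ((k, i, j), 1),
                             ((j, i, k), -1), ((i, k, j), -1), ((k, j, i), -1)]:
            eps[(a, b, c)] = s
    z = [0]*7
    for (i, j, k), s in eps.items():
        z[k-1] += s * x[i-1] * y[j-1]
    return tuple(z)

def dot(a, b):
    return sum(p*q for p, q in zip(a, b))

x = (1, 2, 0, -1, 3, 0, 2)
y = (0, 1, 4, 2, -1, 1, 0)
v = cross7(x, y)

# The product is orthogonal to both arguments ...
assert dot(v, x) == 0 and dot(v, y) == 0
# ... and has the 3D magnitude: |x x y|^2 = |x|^2 |y|^2 - (x.y)^2.
assert dot(v, v) == dot(x, x)*dot(y, y) - dot(x, y)**2
```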
One way to see this is to note that given any pair of vectorsxandy∈R7{\displaystyle \in \mathbb {R} ^{7}}and any vectorvof magnitude |v| = |x||y| sinθin the five-dimensional space perpendicular to the plane spanned byxandy, it is possible to find a cross product with a multiplication table (and an associated set of basis vectors) such thatx×y=v. Unlike in three dimensions,x×y=a×bdoes not imply thataandblie in the same plane asxandy.[11] Further properties follow from the definition, including the following identities: Other properties follow only in the three-dimensional case, and are not satisfied by the seven-dimensional cross product, notably, Thanks to the Jacobi Identity, the three-dimensional cross product givesR3{\displaystyle \mathbb {R} ^{3}}the structure of aLie algebra, which is isomorphic toso(3){\displaystyle {\mathfrak {so}}(3)}, theLie algebra of the 3d rotation group. Because the Jacobi identity fails in seven dimensions, the seven-dimensional cross product does not giveR7{\displaystyle \mathbb {R} ^{7}}the structure of a Lie algebra. To define a particular cross product, anorthonormal basis{ej} may be selected and a multiplication table provided that determines all the products{ei×ej}. One possible multiplication table is described in theMultiplication table section, but it is not unique.[5]Unlike three dimensions, there are many tables because every pair of unit vectors is perpendicular to five other unit vectors, allowing many choices for each cross product. Once we have established a multiplication table, it is then applied to general vectorsxandyby expressingxandyin terms of the basis and expandingx×ythrough bilinearity. Usinge1toe7for the basis vectors a different multiplication table from the one in the Introduction, leading to a different cross product, is given with anticommutativity by[11] More compactly this rule can be written as withi= 1, ..., 7modulo7 and the indicesi,i+ 1andi+ 3allowed to permute evenly. 
Together with anticommutativity this generates the product. This rule directly produces the two diagonals immediately adjacent to the diagonal of zeros in the table. Also, from an identity in the subsection onconsequences, which produces diagonals further out, and so on. Theejcomponent of cross productx×yis given by selecting all occurrences ofejin the table and collecting the corresponding components ofxfrom the left column and ofyfrom the top row. The result is: As the cross product is bilinear, the operatorx× –can be written as a matrix, which takes the form[citation needed] The cross product is then given by Two different multiplication tables have been used in this article, and there are more.[5][13]These multiplication tables are characterized by theFano plane,[14][15]and these are shown in the figure for the two tables used here: at top, the one described by Sabinin, Sbitneva, and Shestakov, and at bottom that described by Lounesto. The numbers under the Fano diagrams (the set of lines in the diagram) indicate a set of indices for seven independent products in each case, interpreted asijk→ei×ej=ek. The multiplication table is recovered from the Fano diagram by following either the straight line connecting any three points, or the circle in the center, with a sign as given by the arrows. For example, the first row of multiplications resulting ine1inthe above listingis obtained by following the three paths connected toe1in the lower Fano diagram: the circular pathe2×e4, the diagonal pathe3×e7, and the edge pathe6×e1=e5rearranged usingone of the above identitiesas: or also obtained directly from the diagram with the rule that any two unit vectors on a straight line are connected by multiplication to the third unit vector on that straight line with signs according to the arrows (sign of the permutation that orders the unit vectors). 
It can be seen that both multiplication rules follow from the same Fano diagram by simply renaming the unit vectors, and changing the sense of the center unit vector. Considering all possible permutations of the basis there are 480 multiplication tables and so 480 cross products like this.[15] The product can also be calculated using geometric algebra of a seven-dimensional vector space with a positive-definite quadratic form. The product starts with the exterior product, a bivector-valued product of two vectors: This is bilinear, alternating, has the desired magnitude, but is not vector-valued. The vector, and so the cross product, comes from the contraction of this bivector with a trivector. In three dimensions, up to a scale factor there is only one trivector, the pseudoscalar of the space, and a product of the above bivector and one of the two unit trivectors gives the vector result, the dual of the bivector. A similar calculation is done in seven dimensions, except that since trivectors form a 35-dimensional space there are many trivectors that could be used, though not just any trivector will do. The trivector that gives the same product as the above coordinate transform is This is combined with the exterior product to give the cross product where ⌟ is the left contraction operator from geometric algebra.[11][16] Just as the 3-dimensional cross product can be expressed in terms of the quaternions, the 7-dimensional cross product can be expressed in terms of the octonions. After identifying R7 with the imaginary octonions (the orthogonal complement of the real part of O), the cross product is given in terms of octonion multiplication by Conversely, suppose V is a 7-dimensional Euclidean space with a given cross product.
Then one can define a bilinear multiplication on R ⊕ V as follows: The space R ⊕ V with this multiplication is then isomorphic to the octonions.[17] The cross product only exists in three and seven dimensions as one can always define a multiplication on a space of one higher dimension as above, and this space can be shown to be a normed division algebra. By Hurwitz's theorem such algebras only exist in one, two, four, and eight dimensions, so the cross product must be in zero, one, three or seven dimensions. The products in zero and one dimensions are trivial, so non-trivial cross products only exist in three and seven dimensions.[18][19] The failure of the 7-dimensional cross product to satisfy the Jacobi identity is related to the nonassociativity of the octonions. In fact, where [x, y, z] is the associator. In three dimensions the cross product is invariant under the action of the rotation group, SO(3), so the cross product of x and y after they are rotated is the image of x × y under the rotation. But this invariance is not true in seven dimensions; that is, the cross product is not invariant under the group of rotations in seven dimensions, SO(7). Instead it is invariant under the exceptional Lie group G2, a subgroup of SO(7).[11][17] Nonzero binary cross products exist only in three and seven dimensions. Further products are possible when lifting the restriction that it must be a binary product.[20][21] We require the product to be multi-linear, alternating, vector-valued, and orthogonal to each of the input vectors ai. The orthogonality requirement implies that in n dimensions, no more than n − 1 vectors can be used. The magnitude of the product should equal the volume of the parallelotope with the vectors as edges, which can be calculated using the Gram determinant. The conditions are The Gram determinant is the squared volume of the parallelotope with a1, ..., ak as edges.
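The doubling construction on R ⊕ V can be sketched concretely: writing elements as pairs (a, x) with a ∈ R and x ∈ R7, the multiplication (a, x)(b, y) = (ab − x·y, ay + bx + x×y) should be norm-multiplicative, the hallmark of a normed division algebra. A hedged check (Python with NumPy; the 7D cross product is rebuilt from the mod-7 rule used earlier in this article):

```python
import numpy as np

# Rebuild a 7D cross product from e_i x e_{i+1} = e_{i+3} (indices mod 7,
# even permutations, anticommutativity), then extend it to R + R^7.
table = np.zeros((7, 7, 7))
for i in range(7):
    a, b, c = i, (i + 1) % 7, (i + 3) % 7
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        table[p, q, r], table[q, p, r] = 1.0, -1.0

def cross7(x, y):
    return np.einsum('i,j,ijk->k', x, y, table)

def mult(u, v):
    """(a, x)(b, y) = (ab - x.y,  a y + b x + x x y)  on R + R^7."""
    a, x = u[0], u[1:]
    b, y = v[0], v[1:]
    return np.concatenate(([a * b - x @ y], a * y + b * x + cross7(x, y)))

rng = np.random.default_rng(2)
u, v = rng.standard_normal((2, 8))
# Norm multiplicativity |uv| = |u||v|, as for the octonions:
assert abs(np.linalg.norm(mult(u, v)) -
           np.linalg.norm(u) * np.linalg.norm(v)) < 1e-9
```

This is only a numerical illustration of the isomorphism with the octonions stated in the text, not a proof.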
With these conditions a non-trivial cross product only exists: One version of the product of three vectors in eight dimensions is given bya×b×c=(a∧b∧c)⌟(w−ve8){\displaystyle \mathbf {a} \times \mathbf {b} \times \mathbf {c} =(\mathbf {a} \wedge \mathbf {b} \wedge \mathbf {c} )~\lrcorner ~(\mathbf {w} -\mathbf {ve} _{8})}wherevis the same trivector as used in seven dimensions,⌟{\displaystyle \lrcorner }is again the left contraction, andw= −ve12...7is a 4-vector. There are also trivial products. Asnoted already, a binary product only exists in 7, 3, 1 and 0 dimensions, the last two being identically zero. A further trivial 'product' arises in even dimensions, which takes a single vector and produces a vector of the same magnitude orthogonal to it through the left contraction with a suitable bivector. In two dimensions this is a rotation through a right angle. As a further generalization, we can loosen the requirements of multilinearity and magnitude, and consider a general continuous functionVd→V(whereVisRnendowed with the Euclidean inner product andd≥ 2), which is only required to satisfy the following two properties: Under these requirements, the cross product only exists (I) forn= 3,d= 2, (II) forn= 7,d= 2, (III) forn= 8,d= 3, and (IV) for anyd=n− 1.[1][20] In another direction, vector product algebras have been defined over an arbitraryfield, and for any field not of characteristic 2 they must have dimension 0, 1, 3, or 7. In fact this result has been generalized still further, e.g. by working over any commutative ring in which 2 iscancellable, meaning that 2x = 2y implies x = y.[22]
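The case d = n − 1, which exists in every dimension, can be computed by cofactor expansion: the product of n − 1 vectors is the vector whose dot product with any w equals det(w, a1, ..., an−1). A sketch (Python with NumPy) verifying orthogonality and the Gram-determinant magnitude condition stated above:

```python
import numpy as np

def cross_n(*vectors):
    """Product of n-1 vectors in R^n, component j given by the signed
    (n-1)x(n-1) minor obtained by deleting column j."""
    vs = np.array(vectors)
    k, n = vs.shape
    assert k == n - 1
    return np.array([(-1) ** j * np.linalg.det(np.delete(vs, j, axis=1))
                     for j in range(n)])

rng = np.random.default_rng(3)

# Sanity check: in three dimensions this reduces to the ordinary cross product.
a, b = rng.standard_normal((2, 3))
assert np.allclose(cross_n(a, b), np.cross(a, b))

# In five dimensions, the product of four vectors:
vs = rng.standard_normal((4, 5))
p = cross_n(*vs)
assert all(abs(p @ v) < 1e-9 for v in vs)     # orthogonal to every input
gram = np.linalg.det(vs @ vs.T)               # squared parallelotope volume
assert abs(p @ p - gram) < 1e-8
```

The binary case (d = 2), by contrast, only passes these checks in dimensions three and seven, as the article explains.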
https://en.wikipedia.org/wiki/Seven-dimensional_cross_product
Inmathematics, in particularabstract algebra, agraded ringis aringsuch that the underlyingadditive groupis adirect sum of abelian groupsRi{\displaystyle R_{i}}such that⁠RiRj⊆Ri+j{\displaystyle R_{i}R_{j}\subseteq R_{i+j}}⁠. Theindex setis usually the set of nonnegativeintegersor the set of integers, but can be anymonoid. The direct sum decomposition is usually referred to asgradationorgrading. Agraded moduleis defined similarly (see below for the precise definition). It generalizesgraded vector spaces. A graded module that is also a graded ring is called agraded algebra. A graded ring could also be viewed as a graded⁠Z{\displaystyle \mathbb {Z} }⁠-algebra. Theassociativityis not important (in fact not used at all) in the definition of a graded ring; hence, the notion applies tonon-associative algebrasas well; e.g., one can consider agraded Lie algebra. Generally, the index set of a graded ring is assumed to be the set of nonnegative integers, unless otherwise explicitly specified. This is the case in this article. A graded ring is aringthat is decomposed into adirect sum ofadditive groups, such that for all nonnegative integersm{\displaystyle m}and⁠n{\displaystyle n}⁠. A nonzero element ofRn{\displaystyle R_{n}}is said to behomogeneousofdegree⁠n{\displaystyle n}⁠. By definition of a direct sum, every nonzero elementa{\displaystyle a}ofR{\displaystyle R}can be uniquely written as a suma=a0+a1+⋯+an{\displaystyle a=a_{0}+a_{1}+\cdots +a_{n}}where eachai{\displaystyle a_{i}}is either 0 or homogeneous of degree⁠i{\displaystyle i}⁠. The nonzeroai{\displaystyle a_{i}}are thehomogeneous componentsof⁠a{\displaystyle a}⁠. Some basic properties are: AnidealI⊆R{\displaystyle I\subseteq R}ishomogeneous, if for every⁠a∈I{\displaystyle a\in I}⁠, the homogeneous components ofa{\displaystyle a}also belong to⁠I{\displaystyle I}⁠. (Equivalently, if it is a graded submodule of⁠R{\displaystyle R}⁠; see§ Graded module.) 
The intersection of a homogeneous ideal I with Rn is an R0-submodule of Rn called the homogeneous part of degree n of I. A homogeneous ideal is the direct sum of its homogeneous parts. If I is a two-sided homogeneous ideal in R, then R/I is also a graded ring, decomposed as where In is the homogeneous part of degree n of I. The corresponding idea in module theory is that of a graded module, namely a left module M over a graded ring R such that and for every i and j. Examples: A morphism f : N → M of graded modules, called a graded morphism or graded homomorphism, is a homomorphism of the underlying modules that respects grading; i.e., f(Ni) ⊆ Mi. A graded submodule is a submodule that is a graded module in its own right and such that the set-theoretic inclusion is a morphism of graded modules. Explicitly, a graded module N is a graded submodule of M if and only if it is a submodule of M and satisfies Ni = N ∩ Mi. The kernel and the image of a morphism of graded modules are graded submodules. Remark: To give a graded morphism from a graded ring to another graded ring with the image lying in the center is the same as to give the structure of a graded algebra to the latter ring. Given a graded module M, the ℓ-twist of M is a graded module defined by M(ℓ)n = Mn+ℓ (cf. Serre's twisting sheaf in algebraic geometry). Let M and N be graded modules. If f : M → N is a morphism of modules, then f is said to have degree d if f(Mn) ⊆ Nn+d. An exterior derivative of differential forms in differential geometry is an example of such a morphism having degree 1.
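The prototypical graded ring is a polynomial ring graded by total degree. The sketch below (Python with SymPy, an illustrative choice) decomposes an element of k[x, y] into its homogeneous components and checks the grading axiom Rm·Rn ⊆ Rm+n:

```python
import sympy as sp

x, y = sp.symbols('x y')

def homogeneous_components(f):
    """Decompose f in k[x, y] into homogeneous components, keyed by degree."""
    comps = {}
    for monom, coeff in sp.Poly(f, x, y).terms():
        comps.setdefault(sum(monom), []).append(coeff * x**monom[0] * y**monom[1])
    return {d: sp.Add(*terms) for d, terms in comps.items()}

f = 3 + x*y - 2*x**2 + y**3
comps = homogeneous_components(f)
assert comps[2] == x*y - 2*x**2 and comps[0] == 3
# Every element is, uniquely, the sum of its homogeneous components:
assert sp.expand(sum(comps.values()) - f) == 0
# R_m . R_n is contained in R_{m+n}: a product of homogeneous elements of
# degrees 2 and 3 is homogeneous of degree 5.
g = homogeneous_components(sp.expand(comps[2] * comps[3]))
assert list(g) == [5]
```

The same decomposition underlies the notion of homogeneous ideal: an ideal is homogeneous exactly when it contains each homogeneous component of each of its elements.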
Given a graded moduleMover a commutative graded ringR, one can associate theformal power series⁠P(M,t)∈Z[[t]]{\displaystyle P(M,t)\in \mathbb {Z} [\![t]\!]}⁠: (assumingℓ(Mn){\displaystyle \ell (M_{n})}are finite.) It is called theHilbert–Poincaré seriesofM. A graded module is said to be finitely generated if the underlying module isfinitely generated. The generators may be taken to be homogeneous (by replacing the generators by their homogeneous parts.) SupposeRis apolynomial ring⁠k[x0,…,xn]{\displaystyle k[x_{0},\dots ,x_{n}]}⁠,ka field, andMa finitely generated graded module over it. Then the functionn↦dimk⁡Mn{\displaystyle n\mapsto \dim _{k}M_{n}}is called the Hilbert function ofM. The function coincides with theinteger-valued polynomialfor largencalled theHilbert polynomialofM. Anassociative algebraAover a ringRis agraded algebraif it is graded as a ring. In the usual case where the ringRis not graded (in particular ifRis a field), it is given the trivial grading (every element ofRis of degree 0). Thus,R⊆A0{\displaystyle R\subseteq A_{0}}and the graded piecesAi{\displaystyle A_{i}}areR-modules. In the case where the ringRis also a graded ring, then one requires that In other words, we requireAto be a graded left module overR. Examples of graded algebras are common in mathematics: Graded algebras are much used incommutative algebraandalgebraic geometry,homological algebra, andalgebraic topology. One example is the close relationship betweenhomogeneous polynomialsandprojective varieties(cf.Homogeneous coordinate ring.) The above definitions have been generalized to rings graded using anymonoidGas an index set. AG-graded ringRis a ring with a direct sum decomposition such that Elements ofRthat lie insideRi{\displaystyle R_{i}}for somei∈G{\displaystyle i\in G}are said to behomogeneousofgradei. 
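For M = R = k[x1, ..., xd] itself, the Hilbert function can be computed by counting monomials, and it agrees with the Hilbert polynomial C(n + d − 1, d − 1) for all n ≥ 0 (not only for large n). A small check (Python, standard library only):

```python
from math import comb
from itertools import product

def hilbert_function(num_vars, n):
    """dim_k of the degree-n graded piece of k[x_1, ..., x_num_vars]:
    the number of monomials of total degree exactly n."""
    return sum(1 for e in product(range(n + 1), repeat=num_vars) if sum(e) == n)

# For R = k[x, y, z] the Hilbert function equals the Hilbert
# polynomial binom(n + 2, 2) at every n >= 0:
for n in range(8):
    assert hilbert_function(3, n) == comb(n + 2, 2)
```

For a general finitely generated graded module the Hilbert function only eventually coincides with a polynomial, as the text states; the free module case shown here is the simplest instance.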
The previously defined notion of "graded ring" now becomes the same thing as anN{\displaystyle \mathbb {N} }-graded ring, whereN{\displaystyle \mathbb {N} }is the monoid ofnatural numbersunder addition. The definitions for graded modules and algebras can also be extended this way replacing the indexing setN{\displaystyle \mathbb {N} }with any monoidG. Remarks: Examples: Some graded rings (or algebras) are endowed with ananticommutativestructure. This notion requires ahomomorphismof the monoid of the gradation into the additive monoid ofZ/2Z{\displaystyle \mathbb {Z} /2\mathbb {Z} }, the field with two elements. Specifically, asigned monoidconsists of a pair(Γ,ε){\displaystyle (\Gamma ,\varepsilon )}whereΓ{\displaystyle \Gamma }is a monoid andε:Γ→Z/2Z{\displaystyle \varepsilon \colon \Gamma \to \mathbb {Z} /2\mathbb {Z} }is a homomorphism of additive monoids. AnanticommutativeΓ{\displaystyle \Gamma }-graded ringis a ringAgraded with respect toΓ{\displaystyle \Gamma }such that: for all homogeneous elementsxandy. Intuitively, a gradedmonoidis the subset of a graded ring,⨁n∈N0Rn{\textstyle \bigoplus _{n\in \mathbb {N} _{0}}R_{n}}, generated by theRn{\displaystyle R_{n}}'s, without using the additive part. That is, the set of elements of the graded monoid is⋃n∈N0Rn{\displaystyle \bigcup _{n\in \mathbb {N} _{0}}R_{n}}. Formally, a graded monoid[1]is a monoid(M,⋅){\displaystyle (M,\cdot )}, with a gradation functionϕ:M→N0{\displaystyle \phi :M\to \mathbb {N} _{0}}such thatϕ(m⋅m′)=ϕ(m)+ϕ(m′){\displaystyle \phi (m\cdot m')=\phi (m)+\phi (m')}. Note that the gradation of1M{\displaystyle 1_{M}}is necessarily 0. Some authors request furthermore thatϕ(m)≠0{\displaystyle \phi (m)\neq 0}whenmis not the identity. Assuming the gradations of non-identity elements are non-zero, the number of elements of gradationnis at mostgn{\displaystyle g^{n}}wheregis thecardinalityof agenerating setGof the monoid. 
Therefore the number of elements of gradation n or less is at most n + 1 (for g = 1) or (g^(n+1) − 1)/(g − 1) otherwise. Indeed, each such element is the product of at most n elements of G, and only (g^(n+1) − 1)/(g − 1) such products exist. Similarly, the identity element cannot be written as the product of two non-identity elements. That is, there is no unit divisor in such a graded monoid. These notions allow us to extend the notion of power series ring. Instead of the indexing family being N, the indexing family could be any graded monoid, assuming that the number of elements of degree n is finite, for each integer n. More formally, let (K, +K, ×K) be an arbitrary semiring and (R, ·, ϕ) a graded monoid. Then K⟨⟨R⟩⟩ denotes the semiring of power series with coefficients in K indexed by R. Its elements are functions from R to K. The sum of two elements s, s′ ∈ K⟨⟨R⟩⟩ is defined pointwise; it is the function sending m ∈ R to s(m) +K s′(m), and the product is the function sending m ∈ R to the infinite sum Σp·q=m s(p) ×K s′(q). This sum is correctly defined (i.e., finite) because, for each m, there are only a finite number of pairs (p, q) such that pq = m. In formal language theory, given an alphabet A, the free monoid of words over A can be considered as a graded monoid, where the gradation of a word is its length.
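The free-monoid example can be made concrete: words over an alphabet A form a graded monoid under concatenation, with gradation the word length. The sketch below (plain Python) checks the gradation law and the geometric-series count from the text:

```python
from itertools import product

# Free monoid over an alphabet A, graded by word length:
# phi(w . w') = phi(w) + phi(w').
A = ('a', 'b', 'c')          # g = 3 generators
g = len(A)

def phi(word):
    return len(word)

w1, w2 = 'ab', 'cab'
assert phi(w1 + w2) == phi(w1) + phi(w2)

# Elements of gradation exactly n number g^n; of gradation <= n they
# number (g^(n+1) - 1)/(g - 1), the bound stated above.
n = 4
words_up_to_n = [''.join(t) for k in range(n + 1) for t in product(A, repeat=k)]
assert len(words_up_to_n) == (g**(n + 1) - 1) // (g - 1)
```

In the free monoid the bound is attained with equality; a general graded monoid with g generators can only have fewer elements in each gradation.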
https://en.wikipedia.org/wiki/Graded_algebra
On adifferentiable manifold, theexterior derivativeextends the concept of thedifferentialof a function todifferential formsof higher degree. The exterior derivative was first described in its current form byÉlie Cartanin 1899. The resulting calculus, known asexterior calculus, allows for a natural, metric-independent generalization ofStokes' theorem,Gauss's theorem, andGreen's theoremfrom vector calculus. If a differentialk-form is thought of as measuring thefluxthrough an infinitesimalk-parallelotopeat each point of the manifold, then its exterior derivative can be thought of as measuring the net flux through the boundary of a(k+ 1)-parallelotope at each point. The exterior derivative of adifferential formof degreek(also differentialk-form, or justk-form for brevity here) is a differential form of degreek+ 1. Iffis asmooth function(a0-form), then the exterior derivative offis thedifferentialoff. That is,dfis the unique1-formsuch that for every smoothvector fieldX,df(X) =dXf, wheredXfis thedirectional derivativeoffin the direction ofX. The exterior product of differential forms (denoted with the same symbol∧) is defined as theirpointwiseexterior product. There are a variety of equivalent definitions of the exterior derivative of a generalk-form. The exterior derivative is defined to be the uniqueℝ-linear mapping fromk-forms to(k+ 1)-forms that has the following properties: Iff{\displaystyle f}andg{\displaystyle g}are two0{\displaystyle 0}-forms (functions), then from the third property for the quantityd(f∧g){\displaystyle d(f\wedge g)}, which is simplyd(fg){\displaystyle d(fg)}, the familiar product ruled(fg)=gdf+fdg{\displaystyle d(fg)=g\,df+f\,dg}is recovered. The third property can be generalised, for instance, ifα{\displaystyle \alpha }is ak{\displaystyle k}-form,β{\displaystyle \beta }is anl{\displaystyle l}-form andγ{\displaystyle \gamma }is anm{\displaystyle m}-form, then Alternatively, one can work entirely in alocal coordinate system(x1, ...,xn). 
The coordinate differentialsdx1, ...,dxnform a basis of the space of one-forms, each associated with a coordinate. Given amulti-indexI= (i1, ...,ik)with1 ≤ip≤nfor1 ≤p≤k(and denotingdxi1∧ ... ∧dxikwithdxI), the exterior derivative of a (simple)k-form overℝnis defined as (using theEinstein summation convention). The definition of the exterior derivative is extendedlinearlyto a generalk-form (which is expressible as a linear combination of basic simplek{\displaystyle k}-forms) where each of the components of the multi-indexIrun over all the values in{1, ...,n}. Note that wheneverjequals one of the components of the multi-indexIthendxj∧dxI= 0(seeExterior product). The definition of the exterior derivative in local coordinates follows from the precedingdefinition in terms of axioms. Indeed, with thek-formφas defined above, Here, we have interpretedgas a0-form, and then applied the properties of the exterior derivative. This result extends directly to the generalk-formωas In particular, for a1-formω, the components ofdωinlocal coordinatesare Caution: There are two conventions regarding the meaning ofdxi1∧⋯∧dxik{\displaystyle dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}}. Most current authors[citation needed]have the convention that while in older texts like Kobayashi and Nomizu or Helgason Alternatively, an explicit formula can be given[1]for the exterior derivative of ak-formω, when paired withk+ 1arbitrary smoothvector fieldsV0,V1, ...,Vk: where[Vi,Vj]denotes theLie bracketand a hat denotes the omission of that element: In particular, whenωis a1-form we have thatdω(X,Y) =dX(ω(Y)) −dY(ω(X)) −ω([X,Y]). Note:With the conventions of e.g., Kobayashi–Nomizu and Helgason the formula differs by a factor of⁠1/k+ 1⁠: Example 1.Considerσ=udx1∧dx2over a1-form basisdx1, ...,dxnfor a scalar fieldu. The exterior derivative is: The last formula, where summation starts ati= 3, follows easily from the properties of theexterior product. Namely,dxi∧dxi= 0. 
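The coordinate definition can be implemented directly: represent a k-form as a map from sorted index tuples to coefficient functions, differentiate each coefficient, and sort dxj into place with the appropriate sign. A hedged sketch (Python with SymPy; the dictionary representation is an illustrative encoding, not a standard API), which also exhibits d² = 0:

```python
import sympy as sp

x1, x2, x3 = coords = sp.symbols('x1 x2 x3')

def d(form):
    """Exterior derivative of a form given as {sorted index tuple: coefficient};
    a 0-form is {(): f}.  d(g dx_I) = sum_j (dg/dx_j) dx_j ^ dx_I."""
    out = {}
    for I, g in form.items():
        for j, xj in enumerate(coords):
            dg = sp.diff(g, xj)
            if dg == 0 or j in I:
                continue                         # dx_j ^ dx_j = 0
            sign = (-1) ** sum(i < j for i in I)  # sort dx_j into position
            J = tuple(sorted(I + (j,)))
            out[J] = sp.simplify(out.get(J, 0) + sign * dg)
    return {J: c for J, c in out.items() if c != 0}

# d of a 0-form is its differential:
f = x1**2 * x3 + sp.sin(x2)
df = d({(): f})
assert df[(0,)] == 2*x1*x3 and df[(2,)] == x1**2

# d^2 = 0: mixed partials cancel pairwise under the alternating sign.
assert d(df) == {}
```

The cancellation in the last assertion is exactly the symmetry of second partial derivatives meeting the antisymmetry of the wedge product.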
Example 2. Let σ = u dx + v dy be a 1-form defined over ℝ2. By applying the above formula to each term (consider x1 = x and x2 = y) we have the sum If M is a compact smooth orientable n-dimensional manifold with boundary, and ω is an (n − 1)-form on M, then the generalized form of Stokes' theorem states that Intuitively, if one thinks of M as being divided into infinitesimal regions, and one adds the flux through the boundaries of all the regions, the interior boundaries all cancel out, leaving the total flux through the boundary of M. A k-form ω is called closed if dω = 0; closed forms are the kernel of d. ω is called exact if ω = dα for some (k − 1)-form α; exact forms are the image of d. Because d2 = 0, every exact form is closed. The Poincaré lemma states that in a contractible region, the converse is true. Because the exterior derivative d has the property that d2 = 0, it can be used as the differential (coboundary) to define de Rham cohomology on a manifold. The k-th de Rham cohomology (group) is the vector space of closed k-forms modulo the exact k-forms; as noted in the previous section, the Poincaré lemma states that these vector spaces are trivial for a contractible region, for k > 0. For smooth manifolds, integration of forms gives a natural homomorphism from the de Rham cohomology to the singular cohomology over ℝ. The theorem of de Rham shows that this map is actually an isomorphism, a far-reaching generalization of the Poincaré lemma. As suggested by the generalized Stokes' theorem, the exterior derivative is the "dual" of the boundary map on singular simplices. The exterior derivative is natural in the technical sense: if f : M → N is a smooth map and Ωk is the contravariant smooth functor that assigns to each manifold the space of k-forms on the manifold, then the following diagram commutes so d(f∗ω) = f∗dω, where f∗ denotes the pullback of f. This follows from the fact that f∗ω(·) is, by definition, ω(f∗(·)), f∗ being the pushforward of f. Thus d is a natural transformation from Ωk to Ωk+1.
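Stokes' theorem for σ = u dx + v dy on the unit square is Green's theorem: the double integral of dσ = (∂v/∂x − ∂u/∂y) dx∧dy equals the counterclockwise line integral of σ around the boundary. A symbolic check (Python with SymPy; the particular u, v are an arbitrary illustrative choice):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# sigma = u dx + v dy on R^2;  d sigma = (dv/dx - du/dy) dx ^ dy.
u = -y**2 + x*y
v = x**3

# Integral of d sigma over the unit square:
dsigma = sp.integrate(sp.integrate(sp.diff(v, x) - sp.diff(u, y),
                                   (x, 0, 1)), (y, 0, 1))

# Line integral of sigma counterclockwise around the boundary:
bottom = sp.integrate(u.subs({x: t, y: 0}), (t, 0, 1))   # dx part, y = 0
right  = sp.integrate(v.subs({x: 1, y: t}), (t, 0, 1))   # dy part, x = 1
top    = sp.integrate(-u.subs({x: t, y: 1}), (t, 0, 1))  # reversed orientation
left   = sp.integrate(-v.subs({x: 0, y: t}), (t, 0, 1))  # reversed orientation

assert sp.simplify(dsigma - (bottom + right + top + left)) == 0
```

Both sides evaluate to 3/2 for this choice of u and v; the interior cancellations described in the intuition above are what make the identity hold for every smooth σ.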
Mostvector calculusoperators are special cases of, or have close relationships to, the notion of exterior differentiation. Asmooth functionf:M→ ℝon a real differentiable manifoldMis a0-form. The exterior derivative of this0-form is the1-formdf. When an inner product⟨·,·⟩is defined, thegradient∇fof a functionfis defined as the unique vector inVsuch that its inner product with any element ofVis the directional derivative offalong the vector, that is such that That is, where♯denotes themusical isomorphism♯:V∗→Vmentioned earlier that is induced by the inner product. The1-formdfis a section of thecotangent bundle, that gives a local linear approximation tofin the cotangent space at each point. A vector fieldV= (v1,v2, ...,vn)onℝnhas a corresponding(n− 1)-form wheredxi^{\displaystyle {\widehat {dx^{i}}}}denotes the omission of that element. (For instance, whenn= 3, i.e. in three-dimensional space, the2-formωVis locally thescalar triple productwithV.) The integral ofωVover a hypersurface is thefluxofVover that hypersurface. The exterior derivative of this(n− 1)-form is then-form A vector fieldVonℝnalso has a corresponding1-form Locally,ηVis thedot productwithV. The integral ofηValong a path is theworkdone against−Valong that path. Whenn= 3, in three-dimensional space, the exterior derivative of the1-formηVis the2-form The standardvector calculusoperators can be generalized for anypseudo-Riemannian manifold, and written in coordinate-free notation as follows: where⋆is theHodge star operator,♭and♯are themusical isomorphisms,fis ascalar fieldandFis avector field. Note that the expression forcurlrequires♯to act on⋆d(F♭), which is a form of degreen− 2. A natural generalization of♯tok-forms of arbitrary degree allows this expression to make sense for anyn.
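The coordinate-free formulas above imply the classical identities curl ∘ grad = 0 and div ∘ curl = 0: both are instances of d² = 0 read through the musical isomorphisms and the Hodge star. A sketch in ℝ³ (Python with SymPy; the test fields are arbitrary illustrative choices):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):            # f -> (df)#
    return [sp.diff(f, v) for v in (x, y, z)]

def curl(F):            # F -> (* d(F_flat))#
    Fx, Fy, Fz = F
    return [sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y)]

def div(F):             # F -> * d(* F_flat)
    return sum(sp.diff(c, v) for c, v in zip(F, (x, y, z)))

# d^2 = 0 becomes curl(grad f) = 0 and div(curl F) = 0:
f = x**2 * sp.sin(y) * z
F = [x*y*z, sp.exp(x)*z, y**3]
assert all(sp.simplify(c) == 0 for c in curl(grad(f)))
assert sp.simplify(div(curl(F))) == 0
```

Here grad applies d to a 0-form, curl applies d to a 1-form and dualizes the resulting 2-form, and div dualizes twice around a d; composing any two adjacent operators threads a d² through the (invertible) star and musical maps, hence the zeros.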
https://en.wikipedia.org/wiki/Exterior_derivative
Differential geometryis amathematicaldiscipline that studies thegeometryof smooth shapes and smooth spaces, otherwise known assmooth manifolds. It uses the techniques ofsingle variable calculus,vector calculus,linear algebraandmultilinear algebra. The field has its origins in the study ofspherical geometryas far back asantiquity. It also relates toastronomy, thegeodesyof theEarth, and later the study ofhyperbolic geometrybyLobachevsky. The simplest examples of smooth spaces are theplane and space curvesandsurfacesin the three-dimensionalEuclidean space, and the study of these shapes formed the basis for development of modern differential geometry during the 18th and 19th centuries. Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures ondifferentiable manifolds. A geometric structure is one which defines some notion of size, distance, shape, volume, or other rigidifying structure. For example, inRiemannian geometrydistances and angles are specified, insymplectic geometryvolumes may be computed, inconformal geometryonly angles are specified, and ingauge theorycertainfieldsare given over the space. Differential geometry is closely related to, and is sometimes taken to include,differential topology, which concerns itself with properties of differentiable manifolds that do not rely on any additional geometric structure (see that article for more discussion on the distinction between the two subjects). Differential geometry is also related to the geometric aspects of the theory ofdifferential equations, otherwise known asgeometric analysis. Differential geometry finds applications throughout mathematics and thenatural sciences. Most prominently the language of differential geometry was used byAlbert Einsteinin histheory of general relativity, and subsequently byphysicistsin the development ofquantum field theoryand thestandard model of particle physics. 
Outside of physics, differential geometry finds applications inchemistry,economics,engineering,control theory,computer graphicsandcomputer vision, and recently inmachine learning. The history and development of differential geometry as a subject begins at least as far back asclassical antiquity. It is intimately linked to the development of geometry more generally, of the notion of space and shape, and oftopology, especially the study ofmanifolds. In this section we focus primarily on the history of the application ofinfinitesimalmethods to geometry, and later to the ideas oftangent spaces, and eventually the development of the modern formalism of the subject in terms oftensorsandtensor fields. The study of differential geometry, or at least the study of the geometry of smooth shapes, can be traced back at least toclassical antiquity. In particular, much was known about the geometry of theEarth, aspherical geometry, in the time of theancient Greekmathematicians. Famously,Eratosthenescalculated thecircumferenceof the Earth around 200 BC, and around 150 ADPtolemyin hisGeographyintroduced thestereographic projectionfor the purposes of mapping the shape of the Earth.[1]Implicitly throughout this time principles that form the foundation of differential geometry and calculus were used ingeodesy, although in a much simplified form. Namely, as far back asEuclid'sElementsit was understood that a straight line could be defined by its property of providing the shortest distance between two points, and applying this same principle to the surface of theEarthleads to the conclusion thatgreat circles, which are only locally similar to straight lines in a flat plane, provide the shortest path between two points on the Earth's surface. Indeed, the measurements of distance along suchgeodesicpaths by Eratosthenes and others can be considered a rudimentary measure ofarclengthof curves, a concept which did not see a rigorous definition in terms of calculus until the 1600s. 
Around this time there were only minimal overt applications of the theory ofinfinitesimalsto the study of geometry, a precursor to the modern calculus-based study of the subject. InEuclid'sElementsthe notion oftangencyof a line to a circle is discussed, andArchimedesapplied themethod of exhaustionto compute the areas of smooth shapes such as thecircle, and the volumes of smooth three-dimensional solids such as the sphere, cones, and cylinders.[1] There was little development in the theory of differential geometry between antiquity and the beginning of theRenaissance. Before the development of calculus byNewtonandLeibniz, the most significant development in the understanding of differential geometry came fromGerardus Mercator's development of theMercator projectionas a way of mapping the Earth. Mercator had an understanding of the advantages and pitfalls of his map design, and in particular was aware of theconformalnature of his projection, as well as the difference betweenpraga, the lines of shortest distance on the Earth, and thedirectio, the straight line paths on his map. Mercator noted that the praga wereoblique curvaturin this projection.[1]This fact reflects the lack of ametric-preserving mapof the Earth's surface onto a flat plane, a consequence of the laterTheorema EgregiumofGauss. The first systematic or rigorous treatment of geometry using the theory of infinitesimals and notions fromcalculusbegan around the 1600s when calculus was first developed byGottfried LeibnizandIsaac Newton. At this time, the recent work ofRené Descartesintroducinganalytic coordinatesto geometry allowed geometric shapes of increasing complexity to be described rigorously. In particular around this timePierre de Fermat, Newton, and Leibniz began the study ofplane curvesand the investigation of concepts such as points ofinflectionand circles ofosculation, which aid in the measurement ofcurvature. 
Indeed, already in hisfirst paperon the foundations of calculus, Leibniz notes that the infinitesimal conditiond2y=0{\displaystyle d^{2}y=0}indicates the existence of an inflection point. Shortly after this time theBernoulli brothers,JacobandJohannmade important early contributions to the use of infinitesimals to study geometry. In lectures by Johann Bernoulli at the time, later collated byL'Hopitalintothe first textbook on differential calculus, the tangents to plane curves of various types are computed using the conditiondy=0{\displaystyle dy=0}, and similarly points of inflection are calculated.[1]At this same time theorthogonalitybetween the osculating circles of a plane curve and the tangent directions is realised, and the first analytical formula for the radius of an osculating circle, essentially the first analytical formula for the notion ofcurvature, is written down. In the wake of the development of analytic geometry and plane curves,Alexis Clairautbegan the study ofspace curvesat just the age of 16.[2][1]In his book Clairaut introduced the notion of tangent andsubtangentdirections to space curves in relation to the directions which lie along asurfaceon which the space curve lies. Thus Clairaut demonstrated an implicit understanding of thetangent spaceof a surface and studied this idea using calculus for the first time. Importantly Clairaut introduced the terminology ofcurvatureanddouble curvature, essentially the notion ofprincipal curvatureslater studied by Gauss and others. 
Around this same time, Leonhard Euler, originally a student of Johann Bernoulli, provided many significant contributions not just to the development of geometry, but to mathematics more broadly.[3] In regards to differential geometry, Euler studied the notion of a geodesic on a surface, deriving the first analytical geodesic equation, and later introduced the first set of intrinsic coordinate systems on a surface, beginning the theory of intrinsic geometry upon which modern geometric ideas are based.[1] Around this time Euler's study of mechanics in the Mechanica led to the realization that a mass traveling along a surface not under the effect of any force would traverse a geodesic path, an early precursor to the important foundational ideas of Einstein's general relativity, and also to the Euler–Lagrange equations and the first theory of the calculus of variations, which underpins in modern differential geometry many techniques in symplectic geometry and geometric analysis. This theory was used by Lagrange, a co-developer of the calculus of variations, to derive the first differential equation describing a minimal surface in terms of the Euler–Lagrange equation. In 1760 Euler proved a theorem expressing the curvature of a space curve on a surface in terms of the principal curvatures, known as Euler's theorem. Later in the 1700s, the new French school led by Gaspard Monge began to make contributions to differential geometry. Monge made important contributions to the theory of plane curves, surfaces, and studied surfaces of revolution and envelopes of plane curves and space curves.
Several students of Monge made contributions to this same theory, and for example Charles Dupin provided a new interpretation of Euler's theorem in terms of the principal curvatures, which is the modern form of the equation.[1] The field of differential geometry became an area of study considered in its own right, distinct from the broader idea of analytic geometry, in the 1800s, primarily through the foundational work of Carl Friedrich Gauss and Bernhard Riemann, and also in the important contributions of Nikolai Lobachevsky on hyperbolic geometry and non-Euclidean geometry, and throughout the same period the development of projective geometry. Dubbed the single most important work in the history of differential geometry,[4] in 1827 Gauss produced the Disquisitiones generales circa superficies curvas detailing the general theory of curved surfaces.[5][4][6] On the strength of this work and his subsequent papers and unpublished notes on the theory of surfaces, Gauss has been dubbed the inventor of non-Euclidean geometry and the inventor of intrinsic differential geometry.[6] In his fundamental paper Gauss introduced the Gauss map, Gaussian curvature, and the first and second fundamental forms, proved the Theorema Egregium showing the intrinsic nature of the Gaussian curvature, and studied geodesics, computing the area of a geodesic triangle in various non-Euclidean geometries on surfaces. At this time Gauss was already of the opinion that the standard paradigm of Euclidean geometry should be discarded, and was in possession of private manuscripts on non-Euclidean geometry which informed his study of geodesic triangles.[6][7] Around this same time János Bolyai and Lobachevsky independently discovered hyperbolic geometry and thus demonstrated the existence of consistent geometries outside Euclid's paradigm.
Concrete models of hyperbolic geometry were produced byEugenio Beltramilater in the 1860s, andFelix Kleincoined the term non-Euclidean geometry in 1871, and through theErlangen programput Euclidean and non-Euclidean geometries on the same footing.[8]Implicitly, thespherical geometryof the Earth that had been studied since antiquity was a non-Euclidean geometry, anelliptic geometry. The development of intrinsic differential geometry in the language of Gauss was spurred on by his student,Bernhard Riemannin hisHabilitationsschrift,On the hypotheses which lie at the foundation of geometry.[9]In this work Riemann introduced the notion of aRiemannian metricand theRiemannian curvature tensorfor the first time, and began the systematic study of differential geometry in higher dimensions. This intrinsic point of view in terms of the Riemannian metric, denoted byds2{\displaystyle ds^{2}}by Riemann, was the development of an idea of Gauss's about the linear elementds{\displaystyle ds}of a surface. At this time Riemann began to introduce the systematic use oflinear algebraandmultilinear algebrainto the subject, making great use of the theory ofquadratic formsin his investigation of metrics and curvature. 
At this time Riemann did not yet develop the modern notion of a manifold, as even the notion of atopological spacehad not been encountered, but he did propose that it might be possible to investigate or measure the properties of the metric ofspacetimethrough the analysis of masses within spacetime, linking with the earlier observation of Euler that masses under the effect of no forces would travel along geodesics on surfaces, and predicting Einstein's fundamental observation of theequivalence principlea full 60 years before it appeared in the scientific literature.[6][4] In the wake of Riemann's new description, the focus of techniques used to study differential geometry shifted from the ad hoc and extrinsic methods of the study of curves and surfaces to a more systematic approach in terms oftensor calculusand Klein's Erlangen program, and progress increased in the field. The notion of groups of transformations was developed bySophus LieandJean Gaston Darboux, leading to important results in the theory ofLie groupsandsymplectic geometry. The notion of differential calculus on curved spaces was studied byElwin Christoffel, who introduced theChristoffel symbolswhich describe thecovariant derivativein 1868, and by others includingEugenio Beltramiwho studied many analytic questions on manifolds.[10]In 1899Luigi Bianchiproduced hisLectures on differential geometrywhich studied differential geometry from Riemann's perspective, and a year laterTullio Levi-CivitaandGregorio Ricci-Curbastroproduced their textbook systematically developing the theory ofabsolute differential calculusandtensor calculus.[11][4]It was in this language that differential geometry was used by Einstein in the development of general relativity andpseudo-Riemannian geometry. 
The subject of modern differential geometry emerged from the early 1900s in response to the foundational contributions of many mathematicians, including importantly the work of Henri Poincaré on the foundations of topology.[12] At the start of the 1900s there was a major movement within mathematics to formalise the foundational aspects of the subject to avoid crises of rigour and accuracy, known as Hilbert's program. As part of this broader movement, the notion of a topological space was distilled by Felix Hausdorff in 1914, and by 1942 there were many different notions of manifold of a combinatorial and differential-geometric nature.[12] Interest in the subject was also focused by the emergence of Einstein's theory of general relativity and the importance of the Einstein field equations. Einstein's theory popularised the tensor calculus of Ricci and Levi-Civita and introduced the notation g{\displaystyle g} for a Riemannian metric and Γ{\displaystyle \Gamma } for the Christoffel symbols, both coming from G in Gravitation. Élie Cartan helped reformulate the foundations of the differential geometry of smooth manifolds in terms of exterior calculus and the theory of moving frames, leading in the world of physics to Einstein–Cartan theory.[13][4] Following this early development, many mathematicians contributed to the development of the modern theory, including Jean-Louis Koszul who introduced connections on vector bundles, Shiing-Shen Chern who introduced characteristic classes to the subject and began the study of complex manifolds, Sir William Vallance Douglas Hodge and Georges de Rham who expanded understanding of differential forms, Charles Ehresmann who introduced the theory of fibre bundles and Ehresmann connections, and others.[13][4] Of particular importance was Hermann Weyl who made important contributions to the foundations of general relativity, introduced the Weyl tensor providing insight into conformal geometry, and first defined the notion of a gauge leading to the development of gauge theory in
physics andmathematics. In the middle and late 20th century differential geometry as a subject expanded in scope and developed links to other areas of mathematics and physics. The development ofgauge theoryandYang–Mills theoryin physics brought bundles and connections into focus, leading to developments ingauge theory. Many analytical results were investigated including the proof of theAtiyah–Singer index theorem. The development ofcomplex geometrywas spurred on by parallel results inalgebraic geometry, and results in the geometry and global analysis of complex manifolds were proven byShing-Tung Yauand others. In the latter half of the 20th century new analytic techniques were developed in regards to curvature flows such as theRicci flow, which culminated inGrigori Perelman's proof of thePoincaré conjecture. During this same period primarily due to the influence ofMichael Atiyah, new links betweentheoretical physicsand differential geometry were formed. Techniques from the study of theYang–Mills equationsandgauge theorywere used by mathematicians to develop new invariants of smooth manifolds. Physicists such asEdward Witten, the only physicist to be awarded aFields medal, made new impacts in mathematics by usingtopological quantum field theoryandstring theoryto make predictions and provide frameworks for new rigorous mathematics, which has resulted for example in the conjecturalmirror symmetryand theSeiberg–Witten invariants. Riemannian geometry studiesRiemannian manifolds,smooth manifoldswith aRiemannian metric. This is a concept of distance expressed by means of asmoothpositive definitesymmetric bilinear formdefined on the tangent space at each point. Riemannian geometry generalizesEuclidean geometryto spaces that are not necessarily flat, though they still resemble Euclidean space at each point infinitesimally, i.e. in thefirst order of approximation. 
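The curvature quantities described here can be computed concretely for an embedded surface. Below is a small numerical sketch, not taken from the article: the parametrization, sample point, and step sizes are illustrative choices. It estimates the Gaussian curvature of a sphere of radius R from central-difference approximations of the first and second fundamental forms, using the classical formula K = (LN − M²)/(EG − F²).

```python
import math

# Numerical sketch (illustrative choices throughout): estimate the Gaussian
# curvature K = (LN - M^2)/(EG - F^2) of a sphere of radius R at one point,
# using central-difference approximations of the fundamental forms.

R = 2.0

def r(u, v):
    # sphere parametrization: u = polar angle, v = azimuth
    return (R * math.sin(u) * math.cos(v),
            R * math.sin(u) * math.sin(v),
            R * math.cos(u))

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

u0, v0, h = 0.9, 0.4, 1e-4

# first and second partial derivatives by central differences
ru  = scale(sub(r(u0 + h, v0), r(u0 - h, v0)), 1 / (2 * h))
rv  = scale(sub(r(u0, v0 + h), r(u0, v0 - h)), 1 / (2 * h))
ruu = scale(add(sub(r(u0 + h, v0), scale(r(u0, v0), 2)), r(u0 - h, v0)), 1 / h**2)
rvv = scale(add(sub(r(u0, v0 + h), scale(r(u0, v0), 2)), r(u0, v0 - h)), 1 / h**2)
ruv = scale(sub(sub(r(u0 + h, v0 + h), r(u0 + h, v0 - h)),
                sub(r(u0 - h, v0 + h), r(u0 - h, v0 - h))), 1 / (4 * h**2))

# unit normal from the cross product of the tangent vectors
n = cross(ru, rv)
n = scale(n, 1 / math.sqrt(dot(n, n)))

E, F, G = dot(ru, ru), dot(ru, rv), dot(rv, rv)   # first fundamental form
L, M, N = dot(ruu, n), dot(ruv, n), dot(rvv, n)   # second fundamental form

K = (L * N - M * M) / (E * G - F * F)
print(K)  # approximately 1/R^2 = 0.25, the constant curvature of the sphere
```

The result is independent of the chosen point, reflecting the constant curvature of the sphere; by the Theorema Egregium it could equally be computed from the first fundamental form alone, without reference to the embedding.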
Various concepts based on length, such as the arc length of curves, area of plane regions, and volume of solids, all possess natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended to the notion of a covariant derivative of a tensor. Many concepts of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined locally, i.e. for small neighborhoods of points. Any two regular curves are locally isometric. However, the Theorema Egregium of Carl Friedrich Gauss showed that for surfaces, the existence of a local isometry imposes that the Gaussian curvatures at the corresponding points must be the same. In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant. These are the closest analogues to the "ordinary" plane and space considered in Euclidean and non-Euclidean geometry. Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite. A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. Finsler geometry has Finsler manifolds as the main object of study. This is a differential manifold with a Finsler metric, that is, a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M{\displaystyle M} is a function F:TM→[0,∞){\displaystyle F:\mathrm {T} M\to [0,\infty )} such that: (1) F(x,my)=mF(x,y){\displaystyle F(x,my)=mF(x,y)} for all (x,y){\displaystyle (x,y)} in TM{\displaystyle \mathrm {T} M} and all m≥0{\displaystyle m\geq 0}; (2) F{\displaystyle F} is infinitely differentiable on TM∖{0}{\displaystyle \mathrm {T} M\setminus \{0\}}; and (3) the vertical Hessian of F2{\displaystyle F^{2}} is positive definite. Symplectic geometry is the study of symplectic manifolds.
Analmost symplectic manifoldis a differentiable manifold equipped with a smoothly varyingnon-degenerateskew-symmetricbilinear formon each tangent space, i.e., a nondegenerate 2-formω, called thesymplectic form. A symplectic manifold is an almost symplectic manifold for which the symplectic formωis closed:dω= 0. A diffeomorphism between two symplectic manifolds which preserves the symplectic form is called asymplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces, so symplectic manifolds necessarily have even dimension. In dimension 2, a symplectic manifold is just a surface endowed with an area form and a symplectomorphism is an area-preserving diffeomorphism. Thephase spaceof a mechanical system is a symplectic manifold and they made an implicit appearance already in the work ofJoseph Louis Lagrangeonanalytical mechanicsand later inCarl Gustav Jacobi's andWilliam Rowan Hamilton'sformulations of classical mechanics. By contrast with Riemannian geometry, where thecurvatureprovides a local invariant of Riemannian manifolds,Darboux's theoremstates that all symplectic manifolds are locally isomorphic. The only invariants of a symplectic manifold are global in nature and topological aspects play a prominent role in symplectic geometry. The first result in symplectic topology is probably thePoincaré–Birkhoff theorem, conjectured byHenri Poincaréand then proved byG.D. Birkhoffin 1912. It claims that if an area preserving map of anannulustwists each boundary component in opposite directions, then the map has at least two fixed points.[14] Contact geometrydeals with certain manifolds of odd dimension. It is close to symplectic geometry and like the latter, it originated in questions of classical mechanics. 
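The even-dimension claim above has a simple linear-algebra explanation: for a skew-symmetric matrix A of size n, det(A) = det(Aᵀ) = det(−A) = (−1)ⁿ det(A), so in odd dimension det(A) = 0 and the form is degenerate. A small sketch (the sample matrices are arbitrary choices for illustration):

```python
# Sketch (matrices chosen arbitrarily): a skew-symmetric bilinear form on an
# odd-dimensional space is always degenerate, while the standard symplectic
# form on R^4 is non-degenerate.

def det(m):
    # cofactor expansion along the first row (fine for small matrices)
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j, a in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * a * det(minor)
    return total

# arbitrary skew-symmetric 3x3 matrix: A^T = -A
A = [[0.0,  2.0, -1.0],
     [-2.0, 0.0,  3.0],
     [1.0, -3.0,  0.0]]

# matrix of the standard symplectic form on R^4:
# omega((x, p), (y, q)) = <x, q> - <p, y>
J = [[0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [-1.0, 0.0, 0.0, 0.0],
     [0.0, -1.0, 0.0, 0.0]]

print(det(A))  # 0.0: degenerate, so no symplectic form in odd dimension
print(det(J))  # 1.0: non-degenerate
```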
A contact structure on a (2n + 1)-dimensional manifold M is given by a smooth hyperplane field H in the tangent bundle that is as far as possible from being associated with the level sets of a differentiable function on M (the technical term is "completely nonintegrable tangent hyperplane distribution"). Near each point p, a hyperplane distribution is determined by a nowhere vanishing 1-form α{\displaystyle \alpha }, which is unique up to multiplication by a nowhere vanishing function: H=ker⁡α⊂TM.{\displaystyle H=\ker \alpha \subset \mathrm {T} M.} A local 1-form on M is a contact form if the restriction of its exterior derivative to H is a non-degenerate two-form and thus induces a symplectic structure on Hp at each point. If the distribution H can be defined by a global one-form α{\displaystyle \alpha } then this form is contact if and only if the top-dimensional form α∧(dα)n{\displaystyle \alpha \wedge (d\alpha )^{n}} is a volume form on M, i.e. does not vanish anywhere. A contact analogue of the Darboux theorem holds: all contact structures on an odd-dimensional manifold are locally isomorphic and can be brought to a certain local normal form by a suitable choice of the coordinate system. Complex differential geometry is the study of complex manifolds. An almost complex manifold is a real manifold M{\displaystyle M}, endowed with a tensor of type (1, 1), i.e. a vector bundle endomorphism (called an almost complex structure) J:TM→TM{\displaystyle J:\mathrm {T} M\to \mathrm {T} M} such that J2=−1.{\displaystyle J^{2}=-1.} It follows from this definition that an almost complex manifold is even-dimensional. An almost complex manifold is called complex if NJ=0{\displaystyle N_{J}=0}, where NJ{\displaystyle N_{J}} is a tensor of type (2, 1) related to J{\displaystyle J}, called the Nijenhuis tensor (or sometimes the torsion). An almost complex manifold is complex if and only if it admits a holomorphic coordinate atlas.
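As a concrete instance of the contact condition (the textbook n = 1 case on R³, not a computation carried out in this article): take α = dz − y dx. Then

```latex
% Standard contact form on R^3 with coordinates (x, y, z): \alpha = dz - y\,dx.
% Check the volume-form condition \alpha \wedge (d\alpha)^n \neq 0 for n = 1:
d\alpha = -\,dy \wedge dx = dx \wedge dy,
\qquad
\alpha \wedge d\alpha
  = (dz - y\,dx) \wedge dx \wedge dy
  = dz \wedge dx \wedge dy
  = dx \wedge dy \wedge dz \neq 0,
```

so α is a contact form on all of R³; its kernel, the contact hyperplane field H, is spanned at each point by ∂/∂y and ∂/∂x + y ∂/∂z.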
An almost Hermitian structure is given by an almost complex structure J, along with a Riemannian metric g, satisfying the compatibility condition g(JX,JY)=g(X,Y).{\displaystyle g(JX,JY)=g(X,Y).} An almost Hermitian structure defines naturally a differential two-form ω(X,Y):=g(JX,Y).{\displaystyle \omega (X,Y):=g(JX,Y).} The following two conditions are equivalent: (1) dω=0{\displaystyle d\omega =0} and NJ=0{\displaystyle N_{J}=0}; (2) ∇J=0,{\displaystyle \nabla J=0,} where ∇{\displaystyle \nabla } is the Levi-Civita connection of g{\displaystyle g}. In this case, (J,g){\displaystyle (J,g)} is called a Kähler structure, and a Kähler manifold is a manifold endowed with a Kähler structure. In particular, a Kähler manifold is both a complex and a symplectic manifold. A large class of Kähler manifolds (the class of Hodge manifolds) is given by all the smooth complex projective varieties. CR geometry is the study of the intrinsic geometry of boundaries of domains in complex manifolds. Conformal geometry is the study of the set of angle-preserving (conformal) transformations on a space. Differential topology is the study of global geometric invariants without a metric or symplectic form. Differential topology starts from the natural operations such as the Lie derivative of natural vector bundles and the de Rham differential of forms. Besides Lie algebroids, Courant algebroids also start playing a more important role. A Lie group is a group in the category of smooth manifolds. Besides its algebraic properties, it also enjoys differential geometric properties. The most obvious construction is that of a Lie algebra, which is the tangent space at the unit element endowed with the Lie bracket between left-invariant vector fields. Besides the structure theory there is also the wide field of representation theory. Geometric analysis is a mathematical discipline where tools from differential equations, especially elliptic partial differential equations, are used to establish new results in differential geometry and differential topology.
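The compatibility condition between an almost complex structure and a Riemannian metric, g(JX, JY) = g(X, Y), can be checked mechanically in the simplest case: R² with the standard complex structure (rotation by 90 degrees) and the Euclidean metric. A tiny sketch, with arbitrary sample vectors:

```python
# Sketch on R^2 with the standard complex structure J and the Euclidean
# metric g: verify the compatibility condition g(JX, JY) = g(X, Y) and that
# the fundamental two-form omega(X, Y) = g(JX, Y) is skew-symmetric.
# The sample vectors are arbitrary choices for illustration.

def J(v):
    x, y = v
    return (-y, x)  # multiplication by i on R^2 ~ C; J^2 = -id

def g(u, v):
    return u[0] * v[0] + u[1] * v[1]  # Euclidean metric

def omega(u, v):
    return g(J(u), v)  # the associated fundamental two-form

X, Y = (1.0, 2.0), (-3.0, 0.5)

assert J(J(X)) == (-X[0], -X[1])     # almost complex structure: J^2 = -id
assert g(J(X), J(Y)) == g(X, Y)      # compatibility: J is a g-isometry
assert omega(X, Y) == -omega(Y, X)   # omega is skew-symmetric
print(omega(X, Y))
```

On R² the form ω is the standard area form, the first hint of the relation between Kähler and symplectic geometry stated above.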
Gauge theory is the study of connections on vector bundles and principal bundles, and arises out of problems inmathematical physicsand physicalgauge theorieswhich underpin thestandard model of particle physics. Gauge theory is concerned with the study of differential equations for connections on bundles, and the resulting geometricmoduli spacesof solutions to these equations as well as the invariants that may be derived from them. These equations often arise as theEuler–Lagrange equationsdescribing the equations of motion of certain physical systems inquantum field theory, and so their study is of considerable interest in physics. The apparatus ofvector bundles,principal bundles, andconnectionson bundles plays an extraordinarily important role in modern differential geometry. A smooth manifold always carries a natural vector bundle, thetangent bundle. Loosely speaking, this structure by itself is sufficient only for developing analysis on the manifold, while doing geometry requires, in addition, some way to relate the tangent spaces at different points, i.e. a notion ofparallel transport. An important example is provided byaffine connections. For a surface inR3, tangent planes at different points can be identified using a natural path-wise parallelism induced by the ambient Euclidean space, which has a well-known standard definition of metric and parallelism. InRiemannian geometry, theLevi-Civita connectionserves a similar purpose. More generally, differential geometers consider spaces with a vector bundle and an arbitrary affine connection which is not defined in terms of a metric. In physics, the manifold may bespacetimeand the bundles and connections are related to various physical fields. From the beginning and through the middle of the 19th century, differential geometry was studied from theextrinsicpoint of view: curves and surfaces were considered as lying in a Euclidean space of higher dimension (for example a surface in anambient spaceof three dimensions). 
The simplest results are those in the differential geometry of curves and differential geometry of surfaces. Starting with the work ofRiemann, theintrinsicpoint of view was developed, in which one cannot speak of moving "outside" the geometric object because it is considered to be given in a free-standing way. The fundamental result here is Gauss'stheorema egregium, to the effect thatGaussian curvatureis an intrinsic invariant. The intrinsic point of view is more flexible. For example, it is useful in relativity where space-time cannot naturally be taken as extrinsic. However, there is a price to pay in technical complexity: the intrinsic definitions of curvature andconnectionsbecome much less visually intuitive. These two points of view can be reconciled, i.e. the extrinsic geometry can be considered as a structure additional to the intrinsic one. (See theNash embedding theorem.) In the formalism ofgeometric calculusboth extrinsic and intrinsic geometry of a manifold can be characterized by a singlebivector-valued one-form called theshape operator.[15] Below are some examples of how differential geometry is applied to other fields of science and mathematics.
https://en.wikipedia.org/wiki/Differential_geometry
Inmathematics(particularlymultivariable calculus), avolume integral(∭) is anintegralover a3-dimensionaldomain; that is, it is a special case ofmultiple integrals. Volume integrals are especially important inphysicsfor many applications, for example, to calculatefluxdensities, or to calculate mass from a corresponding density function. Often the volume integral is represented in terms of a differential volume elementdV=dxdydz{\displaystyle dV=dx\,dy\,dz}.∭Df(x,y,z)dV.{\displaystyle \iiint _{D}f(x,y,z)\,dV.}It can also mean atriple integralwithin a regionD⊂R3{\displaystyle D\subset \mathbb {R} ^{3}}of afunctionf(x,y,z),{\displaystyle f(x,y,z),}and is usually written as:∭Df(x,y,z)dxdydz.{\displaystyle \iiint _{D}f(x,y,z)\,dx\,dy\,dz.}A volume integral incylindrical coordinatesis∭Df(ρ,φ,z)ρdρdφdz,{\displaystyle \iiint _{D}f(\rho ,\varphi ,z)\rho \,d\rho \,d\varphi \,dz,}and a volume integral inspherical coordinates(using the ISO convention for angles withφ{\displaystyle \varphi }as the azimuth andθ{\displaystyle \theta }measured from the polar axis (see more onconventions)) has the form∭Df(r,θ,φ)r2sin⁡θdrdθdφ.{\displaystyle \iiint _{D}f(r,\theta ,\varphi )r^{2}\sin \theta \,dr\,d\theta \,d\varphi .}The triple integral can be transformed from Cartesian coordinates to any arbitrary coordinate system using theJacobian matrix and determinant. Suppose we have a transformation of coordinates from(x,y,z)↦(u,v,w){\displaystyle (x,y,z)\mapsto (u,v,w)}. 
We can represent the integral as the following.∭Df(x,y,z)dxdydz=∭Df(u,v,w)|∂(x,y,z)∂(u,v,w)|dudvdw{\displaystyle \iiint _{D}f(x,y,z)\,dx\,dy\,dz=\iiint _{D}f(u,v,w)\left|{\frac {\partial (x,y,z)}{\partial (u,v,w)}}\right|\,du\,dv\,dw}Where we define the Jacobian determinant to be.J=∂(x,y,z)∂(u,v,w)=|∂x∂u∂x∂v∂x∂w∂y∂u∂y∂v∂y∂w∂z∂u∂z∂v∂z∂w|{\displaystyle \mathbf {J} ={\frac {\partial (x,y,z)}{\partial (u,v,w)}}={\begin{vmatrix}{\frac {\partial x}{\partial u}}&{\frac {\partial x}{\partial v}}&{\frac {\partial x}{\partial w}}\\{\frac {\partial y}{\partial u}}&{\frac {\partial y}{\partial v}}&{\frac {\partial y}{\partial w}}\\{\frac {\partial z}{\partial u}}&{\frac {\partial z}{\partial v}}&{\frac {\partial z}{\partial w}}\\\end{vmatrix}}} Integrating the equationf(x,y,z)=1{\displaystyle f(x,y,z)=1}over a unit cube yields the following result:∫01∫01∫011dxdydz=∫01∫01(1−0)dydz=∫01(1−0)dz=1−0=1{\displaystyle \int _{0}^{1}\int _{0}^{1}\int _{0}^{1}1\,dx\,dy\,dz=\int _{0}^{1}\int _{0}^{1}(1-0)\,dy\,dz=\int _{0}^{1}\left(1-0\right)dz=1-0=1} So the volume of the unit cube is 1 as expected. This is rather trivial however, and a volume integral is far more powerful. For instance if we have a scalar density function on the unit cube then the volume integral will give the total mass of the cube. For example for density function:{f:R3→Rf:(x,y,z)↦x+y+z{\displaystyle {\begin{cases}f:\mathbb {R} ^{3}\to \mathbb {R} \\f:(x,y,z)\mapsto x+y+z\end{cases}}}the total mass of the cube is:∫01∫01∫01(x+y+z)dxdydz=∫01∫01(12+y+z)dydz=∫01(1+z)dz=32{\displaystyle \int _{0}^{1}\int _{0}^{1}\int _{0}^{1}(x+y+z)\,dx\,dy\,dz=\int _{0}^{1}\int _{0}^{1}\left({\frac {1}{2}}+y+z\right)dy\,dz=\int _{0}^{1}(1+z)\,dz={\frac {3}{2}}}
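Both computations above can be checked numerically. The sketch below uses arbitrary numerical parameters (grid size, sample point, step size): it first reproduces the cube mass 3/2 with a midpoint Riemann sum, then finite-differences the spherical-coordinate map to recover the Jacobian factor r² sin θ.

```python
import math

# Sketch with arbitrary numerical parameters: (1) midpoint Riemann sum for the
# volume integral of f(x, y, z) = x + y + z over the unit cube; (2) a
# finite-difference check that the Jacobian determinant of the spherical
# coordinate map equals r^2 sin(theta).

def volume_integral(f, n=20):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
                total += f(x, y, z) * h ** 3  # f(cell midpoint) * cell volume
    return total

mass = volume_integral(lambda x, y, z: x + y + z)
print(mass)  # approximately 3/2 (midpoint rule is exact for linear integrands)

def to_cartesian(r, theta, phi):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

p, h = (1.3, 0.7, 2.1), 1e-6   # arbitrary (r, theta, phi) and step size

# Jacobian matrix d(x, y, z)/d(r, theta, phi) by central differences
jac = [[0.0] * 3 for _ in range(3)]
for i in range(3):
    step = [0.0, 0.0, 0.0]
    step[i] = h
    plus = to_cartesian(*(a + b for a, b in zip(p, step)))
    minus = to_cartesian(*(a - b for a, b in zip(p, step)))
    for j in range(3):
        jac[j][i] = (plus[j] - minus[j]) / (2 * h)

numeric = det3(jac)
analytic = p[0] ** 2 * math.sin(p[1])
print(numeric, analytic)  # the two agree closely
```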
https://en.wikipedia.org/wiki/Volume_integral
In mathematics, the concept of a measure is a generalization and formalization of geometrical measures (length, area, volume) and other common notions, such as magnitude, mass, and probability of events. These seemingly distinct concepts have many similarities and can often be treated together in a single mathematical context. Measures are foundational in probability theory and integration theory, and can be generalized to assume negative values, as with electrical charge. Far-reaching generalizations of measure (such as spectral measures and projection-valued measures) are widely used in quantum physics and physics in general. The intuition behind this concept dates back to ancient Greece, when Archimedes tried to calculate the area of a circle.[1][2] But it was not until the late 19th and early 20th centuries that measure theory became a branch of mathematics. The foundations of modern measure theory were laid in the works of Émile Borel, Henri Lebesgue, Nikolai Luzin, Johann Radon, Constantin Carathéodory, and Maurice Fréchet, among others. Let X{\displaystyle X} be a set and Σ{\displaystyle \Sigma } a σ-algebra over X.{\displaystyle X.} A set function μ{\displaystyle \mu } from Σ{\displaystyle \Sigma } to the extended real number line is called a measure if the following conditions hold: (1) non-negativity: μ(E)≥0{\displaystyle \mu (E)\geq 0} for all E∈Σ{\displaystyle E\in \Sigma }; (2) μ(∅)=0{\displaystyle \mu (\varnothing )=0}; and (3) countable additivity: for every countable collection {Ek}k=1∞{\displaystyle \{E_{k}\}_{k=1}^{\infty }} of pairwise disjoint sets in Σ,{\displaystyle \Sigma ,} μ(⋃k=1∞Ek)=∑k=1∞μ(Ek).{\displaystyle \mu \left(\bigcup _{k=1}^{\infty }E_{k}\right)=\sum _{k=1}^{\infty }\mu (E_{k}).} If at least one set E{\displaystyle E} has finite measure, then the requirement μ(∅)=0{\displaystyle \mu (\varnothing )=0} is met automatically due to countable additivity: μ(E)=μ(E∪∅)=μ(E)+μ(∅),{\displaystyle \mu (E)=\mu (E\cup \varnothing )=\mu (E)+\mu (\varnothing ),} and therefore μ(∅)=0.{\displaystyle \mu (\varnothing )=0.} If the condition of non-negativity is dropped, and μ{\displaystyle \mu } takes on at most one of the values of ±∞,{\displaystyle \pm \infty ,} then μ{\displaystyle \mu } is called a signed measure. The pair (X,Σ){\displaystyle (X,\Sigma )} is called a measurable space, and the members of Σ{\displaystyle \Sigma } are called measurable sets. A triple (X,Σ,μ){\displaystyle (X,\Sigma ,\mu )} is called a measure space.
Aprobability measureis a measure with total measure one – that is,μ(X)=1.{\displaystyle \mu (X)=1.}Aprobability spaceis a measure space with a probability measure. For measure spaces that are alsotopological spacesvarious compatibility conditions can be placed for the measure and the topology. Most measures met in practice inanalysis(and in many cases also inprobability theory) areRadon measures. Radon measures have an alternative definition in terms of linear functionals on thelocally convex topological vector spaceofcontinuous functionswithcompact support. This approach is taken byBourbaki(2004) and a number of other sources. For more details, see the article onRadon measures. Some important measures are listed here. Other 'named' measures used in various theories include:Borel measure,Jordan measure,ergodic measure,Gaussian measure,Baire measure,Radon measure,Young measure, andLoeb measure. In physics an example of a measure is spatial distribution ofmass(see for example,gravity potential), or another non-negativeextensive property,conserved(seeconservation lawfor a list of these) or not. Negative values lead to signed measures, see "generalizations" below. Measure theory is used in machine learning. One example is the Flow Induced Probability Measure in GFlowNet.[3] Letμ{\displaystyle \mu }be a measure. 
IfE1{\displaystyle E_{1}}andE2{\displaystyle E_{2}}are measurable sets withE1⊆E2{\displaystyle E_{1}\subseteq E_{2}}thenμ(E1)≤μ(E2).{\displaystyle \mu (E_{1})\leq \mu (E_{2}).} For anycountablesequenceE1,E2,E3,…{\displaystyle E_{1},E_{2},E_{3},\ldots }of (not necessarily disjoint) measurable setsEn{\displaystyle E_{n}}inΣ:{\displaystyle \Sigma :}μ(⋃i=1∞Ei)≤∑i=1∞μ(Ei).{\displaystyle \mu \left(\bigcup _{i=1}^{\infty }E_{i}\right)\leq \sum _{i=1}^{\infty }\mu (E_{i}).} IfE1,E2,E3,…{\displaystyle E_{1},E_{2},E_{3},\ldots }are measurable sets that are increasing (meaning thatE1⊆E2⊆E3⊆…{\displaystyle E_{1}\subseteq E_{2}\subseteq E_{3}\subseteq \ldots }) then theunionof the setsEn{\displaystyle E_{n}}is measurable andμ(⋃i=1∞Ei)=limi→∞μ(Ei)=supi≥1μ(Ei).{\displaystyle \mu \left(\bigcup _{i=1}^{\infty }E_{i}\right)~=~\lim _{i\to \infty }\mu (E_{i})=\sup _{i\geq 1}\mu (E_{i}).} IfE1,E2,E3,…{\displaystyle E_{1},E_{2},E_{3},\ldots }are measurable sets that are decreasing (meaning thatE1⊇E2⊇E3⊇…{\displaystyle E_{1}\supseteq E_{2}\supseteq E_{3}\supseteq \ldots }) then theintersectionof the setsEn{\displaystyle E_{n}}is measurable; furthermore, if at least one of theEn{\displaystyle E_{n}}has finite measure thenμ(⋂i=1∞Ei)=limi→∞μ(Ei)=infi≥1μ(Ei).{\displaystyle \mu \left(\bigcap _{i=1}^{\infty }E_{i}\right)=\lim _{i\to \infty }\mu (E_{i})=\inf _{i\geq 1}\mu (E_{i}).} This property is false without the assumption that at least one of theEn{\displaystyle E_{n}}has finite measure. For instance, for eachn∈N,{\displaystyle n\in \mathbb {N} ,}letEn=[n,∞)⊆R,{\displaystyle E_{n}=[n,\infty )\subseteq \mathbb {R} ,}which all have infinite Lebesgue measure, but the intersection is empty. A measurable setX{\displaystyle X}is called anull setifμ(X)=0.{\displaystyle \mu (X)=0.}A subset of a null set is called anegligible set. A negligible set need not be measurable, but every measurable negligible set is automatically a null set. 
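The monotonicity, additivity, and countable subadditivity properties above can be checked mechanically for a measure on a finite set, where every measure is a weighted counting measure. A small sketch (the weights are arbitrary illustrative choices):

```python
# Sketch: a weighted counting measure on the finite space X = {a, b, c, d},
# used to illustrate monotonicity, additivity on disjoint sets, and
# subadditivity on overlapping sets.  The weights are arbitrary choices.

weights = {"a": 0.5, "b": 2.0, "c": 1.0, "d": 0.25}

def mu(E):
    # measure of a subset E of X: sum of the point weights
    return sum(weights[x] for x in E)

E1 = {"a", "b"}
E2 = {"a", "b", "c"}
assert E1 <= E2 and mu(E1) <= mu(E2)   # monotonicity

D1, D2 = {"a"}, {"c", "d"}             # disjoint sets
assert mu(D1 | D2) == mu(D1) + mu(D2)  # additivity

O1, O2 = {"a", "b"}, {"b", "c"}        # overlapping sets: b counted twice
assert mu(O1 | O2) <= mu(O1) + mu(O2)  # subadditivity
print(mu(E2))
```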
A measure is called complete if every negligible set is measurable. A measure can be extended to a complete one by considering the σ-algebra of subsets Y{\displaystyle Y} which differ by a negligible set from a measurable set X,{\displaystyle X,} that is, such that the symmetric difference of X{\displaystyle X} and Y{\displaystyle Y} is contained in a null set. One defines μ(Y){\displaystyle \mu (Y)} to equal μ(X).{\displaystyle \mu (X).} If f:X→[0,+∞]{\displaystyle f:X\to [0,+\infty ]} is (Σ,B([0,+∞])){\displaystyle (\Sigma ,{\cal {B}}([0,+\infty ]))}-measurable, then μ{x∈X:f(x)≥t}=μ{x∈X:f(x)>t}{\displaystyle \mu \{x\in X:f(x)\geq t\}=\mu \{x\in X:f(x)>t\}} for almost all t∈[−∞,∞].{\displaystyle t\in [-\infty ,\infty ].}[4] This property is used in connection with the Lebesgue integral. Both F(t):=μ{x∈X:f(x)>t}{\displaystyle F(t):=\mu \{x\in X:f(x)>t\}} and G(t):=μ{x∈X:f(x)≥t}{\displaystyle G(t):=\mu \{x\in X:f(x)\geq t\}} are monotonically non-increasing functions of t,{\displaystyle t,} so both of them have at most countably many discontinuities and thus they are continuous almost everywhere, relative to the Lebesgue measure. If t<0{\displaystyle t<0} then {x∈X:f(x)≥t}=X={x∈X:f(x)>t},{\displaystyle \{x\in X:f(x)\geq t\}=X=\{x\in X:f(x)>t\},} so that F(t)=G(t),{\displaystyle F(t)=G(t),} as desired. If t{\displaystyle t} is such that μ{x∈X:f(x)>t}=+∞{\displaystyle \mu \{x\in X:f(x)>t\}=+\infty } then monotonicity implies μ{x∈X:f(x)≥t}=+∞,{\displaystyle \mu \{x\in X:f(x)\geq t\}=+\infty ,} so that F(t)=G(t),{\displaystyle F(t)=G(t),} as required. If μ{x∈X:f(x)>t}=+∞{\displaystyle \mu \{x\in X:f(x)>t\}=+\infty } for all t{\displaystyle t} then we are done, so assume otherwise. Then there is a unique t0∈{−∞}∪[0,+∞){\displaystyle t_{0}\in \{-\infty \}\cup [0,+\infty )} such that F{\displaystyle F} is infinite to the left of t0{\displaystyle t_{0}} (which can only happen when t0≥0{\displaystyle t_{0}\geq 0}) and finite to the right.
Arguing as above,μ{x∈X:f(x)≥t}=+∞{\displaystyle \mu \{x\in X:f(x)\geq t\}=+\infty }whent<t0.{\displaystyle t<t_{0}.}Similarly, ift0≥0{\displaystyle t_{0}\geq 0}andF(t0)=+∞{\displaystyle F\left(t_{0}\right)=+\infty }thenF(t0)=G(t0).{\displaystyle F\left(t_{0}\right)=G\left(t_{0}\right).} Fort>t0,{\displaystyle t>t_{0},}lettn{\displaystyle t_{n}}be a monotonically non-decreasing sequence converging tot.{\displaystyle t.}The monotonically non-increasing sequences{x∈X:f(x)>tn}{\displaystyle \{x\in X:f(x)>t_{n}\}}of members ofΣ{\displaystyle \Sigma }has at least one finitelyμ{\displaystyle \mu }-measurable component, and{x∈X:f(x)≥t}=⋂n{x∈X:f(x)>tn}.{\displaystyle \{x\in X:f(x)\geq t\}=\bigcap _{n}\{x\in X:f(x)>t_{n}\}.}Continuity from above guarantees thatμ{x∈X:f(x)≥t}=limtn↑tμ{x∈X:f(x)>tn}.{\displaystyle \mu \{x\in X:f(x)\geq t\}=\lim _{t_{n}\uparrow t}\mu \{x\in X:f(x)>t_{n}\}.}The right-hand sidelimtn↑tF(tn){\displaystyle \lim _{t_{n}\uparrow t}F\left(t_{n}\right)}then equalsF(t)=μ{x∈X:f(x)>t}{\displaystyle F(t)=\mu \{x\in X:f(x)>t\}}ift{\displaystyle t}is a point of continuity ofF.{\displaystyle F.}SinceF{\displaystyle F}is continuous almost everywhere, this completes the proof. Measures are required to be countably additive. However, the condition can be strengthened as follows. For any setI{\displaystyle I}and any set of nonnegativeri,i∈I{\displaystyle r_{i},i\in I}define:∑i∈Iri=sup{∑i∈Jri:|J|<∞,J⊆I}.{\displaystyle \sum _{i\in I}r_{i}=\sup \left\lbrace \sum _{i\in J}r_{i}:|J|<\infty ,J\subseteq I\right\rbrace .}That is, we define the sum of theri{\displaystyle r_{i}}to be the supremum of all the sums of finitely many of them. 
A measureμ{\displaystyle \mu }onΣ{\displaystyle \Sigma }isκ{\displaystyle \kappa }-additive if for anyλ<κ{\displaystyle \lambda <\kappa }and any family of disjoint setsXα,α<λ{\displaystyle X_{\alpha },\alpha <\lambda }the following hold:⋃α∈λXα∈Σ{\displaystyle \bigcup _{\alpha \in \lambda }X_{\alpha }\in \Sigma }μ(⋃α∈λXα)=∑α∈λμ(Xα).{\displaystyle \mu \left(\bigcup _{\alpha \in \lambda }X_{\alpha }\right)=\sum _{\alpha \in \lambda }\mu \left(X_{\alpha }\right).}The second condition is equivalent to the statement that theidealof null sets isκ{\displaystyle \kappa }-complete. A measure space(X,Σ,μ){\displaystyle (X,\Sigma ,\mu )}is called finite ifμ(X){\displaystyle \mu (X)}is a finite real number (rather than∞{\displaystyle \infty }). Nonzero finite measures are analogous toprobability measuresin the sense that any finite measureμ{\displaystyle \mu }is proportional to the probability measure1μ(X)μ.{\displaystyle {\frac {1}{\mu (X)}}\mu .}A measureμ{\displaystyle \mu }is calledσ-finiteifX{\displaystyle X}can be decomposed into a countable union of measurable sets of finite measure. Analogously, a set in a measure space is said to have aσ-finite measureif it is a countable union of sets with finite measure. For example, thereal numberswith the standardLebesgue measureare σ-finite but not finite. Consider theclosed intervals[k,k+1]{\displaystyle [k,k+1]}for allintegersk;{\displaystyle k;}there are countably many such intervals, each has measure 1, and their union is the entire real line. Alternatively, consider thereal numberswith thecounting measure, which assigns to each finite set of reals the number of points in the set. This measure space is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. 
The σ-finite measure spaces have some very convenient properties; σ-finiteness can be compared in this respect to theLindelöf propertyof topological spaces.[original research?]They can be also thought of as a vague generalization of the idea that a measure space may have 'uncountable measure'. LetX{\displaystyle X}be a set, letA{\displaystyle {\cal {A}}}be a sigma-algebra onX,{\displaystyle X,}and letμ{\displaystyle \mu }be a measure onA.{\displaystyle {\cal {A}}.}We sayμ{\displaystyle \mu }issemifiniteto mean that for allA∈μpre{+∞},{\displaystyle A\in \mu ^{\text{pre}}\{+\infty \},}P(A)∩μpre(R>0)≠∅.{\displaystyle {\cal {P}}(A)\cap \mu ^{\text{pre}}(\mathbb {R} _{>0})\neq \emptyset .}[5] Semifinite measures generalize sigma-finite measures, in such a way that some big theorems of measure theory that hold for sigma-finite but not arbitrary measures can be extended with little modification to hold for semifinite measures. (To-do: add examples of such theorems; cf. the talk page.) The zero measure is sigma-finite and thus semifinite. In addition, the zero measure is clearly less than or equal toμ.{\displaystyle \mu .}It can be shown there is a greatest measure with these two properties: Theorem (semifinite part)[9]—For any measureμ{\displaystyle \mu }onA,{\displaystyle {\cal {A}},}there exists, among semifinite measures onA{\displaystyle {\cal {A}}}that are less than or equal toμ,{\displaystyle \mu ,}agreatestelementμsf.{\displaystyle \mu _{\text{sf}}.} We say thesemifinite partofμ{\displaystyle \mu }to mean the semifinite measureμsf{\displaystyle \mu _{\text{sf}}}defined in the above theorem. We give some nice, explicit formulas, which some authors may take as definition, for the semifinite part: Sinceμsf{\displaystyle \mu _{\text{sf}}}is semifinite, it follows that ifμ=μsf{\displaystyle \mu =\mu _{\text{sf}}}thenμ{\displaystyle \mu }is semifinite. 
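The explicit formula for the semifinite part, μ_sf(A) = sup{μ(B) : B ⊆ A, μ(B) < ∞}, can be computed by brute force on a finite space. The following is a toy sketch (the space, sigma-algebra, and atom weights are hypothetical choices; the sigma-algebra is the full power set):

```python
import math
from itertools import chain, combinations

# Toy measure space: X = {1, 2, 3}, sigma-algebra = power set.
# Atom 2 carries infinite mass, so mu is not semifinite on sets containing it.
weights = {1: 2.0, 2: math.inf, 3: 5.0}

def mu(A):
    """Measure of a subset A of X: sum of its atom weights."""
    return sum(weights[x] for x in A)

def subsets(A):
    """All subsets of A."""
    A = list(A)
    return chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))

def mu_sf(A):
    """Semifinite part: sup of mu(B) over B ⊆ A with mu(B) finite."""
    return max(mu(B) for B in subsets(A) if math.isfinite(mu(B)))

print(mu_sf({1, 2}))     # the infinite atom contributes nothing
print(mu_sf({1, 2, 3}))  # 2 + 5 from the finite atoms
```

Note that μ_sf assigns measure zero to the infinite atom, which is exactly what makes it semifinite while remaining below μ.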
It is also evident that ifμ{\displaystyle \mu }is semifinite thenμ=μsf.{\displaystyle \mu =\mu _{\text{sf}}.} Every0−∞{\displaystyle 0-\infty }measurethat is not the zero measure is not semifinite. (Here, we say0−∞{\displaystyle 0-\infty }measureto mean a measure whose range lies in{0,+∞}{\displaystyle \{0,+\infty \}}:(∀A∈A)(μ(A)∈{0,+∞}).{\displaystyle (\forall A\in {\cal {A}})(\mu (A)\in \{0,+\infty \}).}) Below we give examples of0−∞{\displaystyle 0-\infty }measures that are not zero measures. Measures that are not semifinite are very wild when restricted to certain sets.[Note 1]Every measure is, in a sense, semifinite once its0−∞{\displaystyle 0-\infty }part (the wild part) is taken away. Theorem (Luther decomposition)[14][15]—For any measureμ{\displaystyle \mu }onA,{\displaystyle {\cal {A}},}there exists a0−∞{\displaystyle 0-\infty }measureξ{\displaystyle \xi }onA{\displaystyle {\cal {A}}}such thatμ=ν+ξ{\displaystyle \mu =\nu +\xi }for some semifinite measureν{\displaystyle \nu }onA.{\displaystyle {\cal {A}}.}In fact, among such measuresξ,{\displaystyle \xi ,}there exists aleastmeasureμ0−∞.{\displaystyle \mu _{0-\infty }.}Also, we haveμ=μsf+μ0−∞.{\displaystyle \mu =\mu _{\text{sf}}+\mu _{0-\infty }.} We say the0−∞{\displaystyle \mathbf {0-\infty } }partofμ{\displaystyle \mu }to mean the measureμ0−∞{\displaystyle \mu _{0-\infty }}defined in the above theorem. Here is an explicit formula forμ0−∞{\displaystyle \mu _{0-\infty }}:μ0−∞=(sup{μ(B)−μsf(B):B∈P(A)∩μsfpre(R≥0)})A∈A.{\displaystyle \mu _{0-\infty }=(\sup\{\mu (B)-\mu _{\text{sf}}(B):B\in {\cal {P}}(A)\cap \mu _{\text{sf}}^{\text{pre}}(\mathbb {R} _{\geq 0})\})_{A\in {\cal {A}}}.} Localizable measures are a special case of semifinite measures and a generalization of sigma-finite measures. 
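The explicit formula for the 0–∞ part, and the Luther decomposition μ = μ_sf + μ_0−∞, can likewise be checked by brute force on a finite toy space (the same kind of hypothetical example as before; everything is recomputed here so the sketch is self-contained):

```python
import math
from itertools import chain, combinations

# Toy measure space X = {1, 2, 3} with an infinite atom at 2.
weights = {1: 2.0, 2: math.inf, 3: 5.0}

def mu(A):
    return sum(weights[x] for x in A)

def subsets(A):
    A = list(A)
    return chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))

def mu_sf(A):
    """Semifinite part: sup of mu(B) over B ⊆ A with mu(B) finite."""
    return max(mu(B) for B in subsets(A) if math.isfinite(mu(B)))

def mu_0inf(A):
    """0-infinity part: sup of mu(B) - mu_sf(B) over B ⊆ A with mu_sf(B) finite."""
    return max(mu(B) - mu_sf(B) for B in subsets(A) if math.isfinite(mu_sf(B)))

# Luther decomposition mu = mu_sf + mu_0inf holds on every measurable set:
print(all(mu(A) == mu_sf(A) + mu_0inf(A) for A in map(set, subsets({1, 2, 3}))))
```

Here μ_0−∞ is a 0–∞ measure (it takes only the values 0 and +∞ on this space), carrying exactly the "wild part" of μ.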
LetX{\displaystyle X}be a set, letA{\displaystyle {\cal {A}}}be a sigma-algebra onX,{\displaystyle X,}and letμ{\displaystyle \mu }be a measure onA.{\displaystyle {\cal {A}}.} A measure is said to be s-finite if it is a countable sum of finite measures. S-finite measures are more general than sigma-finite ones and have applications in the theory ofstochastic processes. If theaxiom of choiceis assumed to be true, it can be proved that not all subsets ofEuclidean spaceareLebesgue measurable; examples of such sets include theVitali set, and the non-measurable sets postulated by theHausdorff paradoxand theBanach–Tarski paradox. For certain purposes, it is useful to have a "measure" whose values are not restricted to the non-negative reals or infinity. For instance, a countably additiveset functionwith values in the (signed) real numbers is called asigned measure, while such a function with values in thecomplex numbersis called acomplex measure. Observe, however, that complex measure is necessarily of finitevariation, hence complex measures includefinite signed measuresbut not, for example, theLebesgue measure. Measures that take values inBanach spaceshave been studied extensively.[22]A measure that takes values in the set of self-adjoint projections on aHilbert spaceis called aprojection-valued measure; these are used infunctional analysisfor thespectral theorem. When it is necessary to distinguish the usual measures which take non-negative values from generalizations, the termpositive measureis used. Positive measures are closed underconical combinationbut not generallinear combination, while signed measures are the linear closure of positive measures. More generally seemeasure theory in topological vector spaces. Another generalization is thefinitely additive measure, also known as acontent. This is the same as a measure except that instead of requiringcountableadditivity we require onlyfiniteadditivity. Historically, this definition was used first. 
It turns out that in general, finitely additive measures are connected with notions such as Banach limits, the dual of L∞{\displaystyle L^{\infty }} and the Stone–Čech compactification. All these are linked in one way or another to the axiom of choice. Contents remain useful in certain technical problems in geometric measure theory; this is the theory of Banach measures. A charge is a generalization in both directions: it is a finitely additive, signed measure.[23] (Cf. ba space for information about bounded charges, where we say a charge is bounded to mean its range is a bounded subset of R.)
https://en.wikipedia.org/wiki/Measure_(mathematics)
Aproduct integralis anyproduct-based counterpart of the usualsum-basedintegralofcalculus. The product integral was developed by the mathematicianVito Volterrain 1887 to solve systems oflinear differential equations.[1][2] The classicalRiemann integralof afunctionf:[a,b]→R{\displaystyle f:[a,b]\to \mathbb {R} }can be defined by the relation where thelimitis taken over allpartitionsof theinterval[a,b]{\displaystyle [a,b]}whosenormsapproach zero. Product integrals are similar, but take thelimitof aproductinstead of thelimitof asum. They can be thought of as "continuous" versions of "discrete"products. They are defined as For the case off:[a,b]→R{\displaystyle f:[a,b]\to \mathbb {R} }, the product integral reduces exactly to the case ofLebesgue integration, that is, to classical calculus. Thus, the interesting cases arise for functionsf:[a,b]→A{\displaystyle f:[a,b]\to A}whereA{\displaystyle A}is either somecommutative algebra, such as a finite-dimensionalmatrix field, or ifA{\displaystyle A}is anon-commutative algebra. The theories for these two cases, the commutative and non-commutative cases, have little in common. The non-commutative case is far more complicated; it requires properpath-orderingto make the integral well-defined. For the commutative case, three distinct definitions are commonplace in the literature, referred to as Type-I, Type-II orgeometric, and type-III orbigeometric.[3][4][5]Such integrals have found use inepidemiology(theKaplan–Meier estimator) and stochasticpopulation dynamics. The geometric integral, together with the geometric derivative, is useful inimage analysis[6]and in the study of growth/decay phenomena (e.g., ineconomic growth,bacterial growth, andradioactive decay).[7][8]Thebigeometric integral, together with the bigeometric derivative, is useful in some applications offractals,[9][10][11][12]and in the theory ofelasticityin economics.[3][5][13] The non-commutative case commonly arises inquantum mechanicsandquantum field theory. 
The integrand is generally an operator belonging to somenon-commutative algebra. In this case, one must be careful to establish apath-orderingwhile integrating. A typical result is theordered exponential. TheMagnus expansionprovides one technique for computing the Volterra integral. Examples include theDyson expansion, the integrals that occur in theoperator product expansionand theWilson line, a product integral over a gauge field. TheWilson loopis the trace of a Wilson line. The product integral also occurs incontrol theory, as thePeano–Baker seriesdescribing state transitions inlinear systemswritten in amaster equationtype form. The Volterra product integral is most useful when applied to matrix-valued functions or functions with values in aBanach algebra. When applied to scalars belonging to a non-commutative field, to matrixes, and to operators,i.e.to mathematical objects that don't commute, the Volterra integral splits in two definitions.[14] Theleft product integralis With this notation of left products (i.e. normal products applied from left) Theright product integral With this notation of right products (i.e. applied from right) Where1{\displaystyle \mathbb {1} }is the identity matrix and D is a partition of the interval [a,b] in the Riemann sense,i.e.the limit is over the maximum interval in the partition. Note how in this casetime orderingbecomes evident in the definitions. TheMagnus expansionprovides a technique for computing the product integral. It defines a continuous-time version of theBaker–Campbell–Hausdorff formula. The product integral satisfies a collection of properties defining a one-parametercontinuous group; these are stated in two articles showing applications: theDyson seriesand thePeano–Baker series. The commutative case is vastly simpler, and, as a result, a large variety of distinct notations and definitions have appeared. Three distinct styles are popular in the literature. 
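As a concrete check of the left product integral, the following numpy sketch (with the constant integrand A and step count chosen arbitrarily) forms the left product of factors (1 + A dt). For a constant rotation generator the limit is the matrix exponential, known in closed form:

```python
import numpy as np

# Left product integral of a constant matrix integrand A over [0, T]:
# the product (I + A*dt)...(I + A*dt), factors applied from the left,
# converges to the matrix exponential exp(T*A).  Here A generates a
# rotation, so exp(T*A) = [[cos T, sin T], [-sin T, cos T]] exactly.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
T, N = 1.0, 50_000
dt = T / N

P = np.eye(2)
for _ in range(N):
    P = (np.eye(2) + A * dt) @ P   # left product: new factor from the left

exact = np.array([[np.cos(T), np.sin(T)], [-np.sin(T), np.cos(T)]])
print(np.max(np.abs(P - exact)))   # small discretization error, O(dt)
```

For this commuting (single-matrix) case the left and right products agree; the distinction only matters when A(t) at different times fails to commute.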
This subsection adopts the product∏{\displaystyle \textstyle \prod }notation for product integration instead of the integral∫{\displaystyle \textstyle \int }(usually modified by a superimposed times symbol or letter P) favoured byVolterraand others. An arbitrary classification of types is adopted to impose some order in the field. When the function to be integrated is valued in the real numbers, then the theory reduces exactly to the theory ofLebesgue integration. The type I product integral corresponds toVolterra's original definition.[2][15][16]The following relationship exists forscalar functionsf:[a,b]→R{\displaystyle f:[a,b]\to \mathbb {R} }: which is called thegeometric integral. The logarithm is well-defined ifftakes values in the real or complex numbers, or ifftakes values in a commutative field of commutingtrace-classoperators. This definition of the product integral is thecontinuousanalog of thediscreteproductoperator∏i=ab{\displaystyle \textstyle \prod _{i=a}^{b}}(withi,a,b∈Z{\displaystyle i,a,b\in \mathbb {Z} }) and themultiplicativeanalog to the (normal/standard/additive)integral∫abdx{\displaystyle \textstyle \int _{a}^{b}dx}(withx∈[a,b]{\displaystyle x\in [a,b]}): It is very useful instochastics, where thelog-likelihood(i.e. thelogarithmof a product integral ofindependentrandom variables) equals theintegralof thelogarithmof these (infinitesimallymany)random variables: The type III product integral is called thebigeometric integral. For the commutative case, the following results hold for the type II product integral (the geometric integral). The geometric integral (type II above) plays a central role in thegeometric calculus,[3][4][17]which is a multiplicative calculus. The inverse of the geometric integral, which is thegeometric derivative, denotedf∗(x){\displaystyle f^{*}(x)}, is defined using the following relationship: Thus, the following can be concluded: whereXis arandom variablewithprobability distributionF(x). 
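The identity behind the geometric integral, ∏ f(x)^dx = exp(∫ ln f(x) dx), can be verified numerically. The following is a minimal sketch with an arbitrarily chosen integrand f(x) = x on [1, 2], for which the exact value is exp(2 ln 2 − 1) = 4/e:

```python
import math

# Geometric (type II) product integral of f(x) = x over [1, 2]:
# the products f(x_1)^dx * ... * f(x_N)^dx converge to
# exp( integral of ln f ) = exp(2*ln 2 - 1) = 4/e.
a, b, N = 1.0, 2.0, 100_000
dx = (b - a) / N

prod = 1.0
for i in range(N):
    x = a + (i + 0.5) * dx          # midpoint tag in each subinterval
    prod *= x ** dx

exact = 4.0 / math.e
print(prod, exact)
```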
Compare with the standard law of large numbers: When the integrand takes values in the real numbers, the product integrals become easy to work with by using simple functions. Just as in the case of the Lebesgue version of (classical) integrals, one can compute product integrals by approximating them with the product integrals of simple functions. The case of Type II geometric integrals reduces to exactly the case of classical Lebesgue integration. Because simple functions generalize step functions, in what follows we will only consider the special case of simple functions that are step functions. This will also make it easier to compare the Lebesgue definition with the Riemann definition. Given a step function f:[a,b]→R{\displaystyle f:[a,b]\to \mathbb {R} } with a tagged partition, one approximation of the "Riemann definition" of the type I product integral is given by[18] The (type I) product integral was defined to be, roughly speaking, the limit of these products by Ludwig Schlesinger in a 1931 article.[which?] Another approximation of the "Riemann definition" of the type I product integral is defined as When f{\displaystyle f} is a constant function, the limit of the first type of approximation is equal to the second type of approximation.[19] Notice that in general, for a step function, the value of the second type of approximation doesn't depend on the partition, as long as the partition is a refinement of the partition defining the step function, whereas the value of the first type of approximation does depend on the fineness of the partition, even when it is a refinement of the partition defining the step function. It turns out that[20] for any product-integrable function f{\displaystyle f}, the limit of the first type of approximation equals the limit of the second type of approximation.

Since, for step functions, the value of the second type of approximation doesn't depend on the fineness of the partition for partitions "fine enough", it makes sense to define[21]the "Lebesgue (type I) product integral" of a step function as wherey0<a=s1<y1<⋯<yn−1<sn<yn=b{\displaystyle y_{0}<a=s_{1}<y_{1}<\dots <y_{n-1}<s_{n}<y_{n}=b}is the tagged partition corresponding to the step functionf{\displaystyle f}. (In contrast, the corresponding quantity would not be unambiguously defined using the first type of approximation.) This generalizes to arbitrarymeasure spacesreadily. IfX{\displaystyle X}is a measure space withmeasureμ{\displaystyle \mu }, then for any product-integrable simple functionf(x)=∑k=1nakIAk(x){\displaystyle f(x)=\sum _{k=1}^{n}a_{k}I_{A_{k}}(x)}(i.e. aconical combinationof theindicator functionsfor somedisjointmeasurable setsA1,A2,…,An⊆X{\displaystyle A_{1},A_{2},\dots ,A_{n}\subseteq X}), its type I product integral is defined to be sinceak{\displaystyle a_{k}}is the value off{\displaystyle f}at any point ofAk{\displaystyle A_{k}}. In the special case whereX=R{\displaystyle X=\mathbb {R} },μ{\displaystyle \mu }isLebesgue measure, and all of the measurable setsAk{\displaystyle A_{k}}areintervals, one can verify that this is equal to the definition given above for that special case. Analogous tothe theory of Lebesgue (classical) integrals, the Type I product integral of any product-integrable functionf{\displaystyle f}can be written as the limit of an increasingsequenceof Volterra product integrals of product-integrable simple functions. Takinglogarithmsof both sides of the above definition, one gets that for any product-integrable simple functionf{\displaystyle f}: where we used the definition of integral for simple functions. 
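The two viewpoints can be compared numerically. The following is a hedged sketch (the step function and partition size are arbitrary choices): the "Lebesgue" type I value of a step function is ∏ exp(a_k μ(A_k)), and the Riemann-style products ∏(1 + f(x_i) Δx) converge to the same number:

```python
import math

# Step function on [0, 1]: f = 2 on [0, 0.5), f = 5 on [0.5, 1].
# Lebesgue-style type I value: exp(2*0.5) * exp(5*0.5) = exp(3.5).
def f(x):
    return 2.0 if x < 0.5 else 5.0

lebesgue = math.exp(2.0 * 0.5) * math.exp(5.0 * 0.5)

# Riemann-style first-type approximation: product of (1 + f(x_i)*dx).
N = 100_000
dx = 1.0 / N
riemann = 1.0
for i in range(N):
    riemann *= 1.0 + f((i + 0.5) * dx) * dx

print(lebesgue, riemann)   # agree as dx -> 0
```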
Moreover, becausecontinuous functionslikeexp{\displaystyle \exp }can be interchanged with limits, and the product integral of any product-integrable functionf{\displaystyle f}is equal to the limit of product integrals of simple functions, it follows that the relationship holds generally foranyproduct-integrablef{\displaystyle f}. This clearly generalizes the property mentioned above. The Type I integral is multiplicative as aset function,[22]which can be shown using the above property. More specifically, given a product-integrable functionf{\displaystyle f}one can define a set functionVf{\displaystyle {\cal {V}}_{f}}by defining, for every measurable setB⊆X{\displaystyle B\subseteq X}, whereIB(x){\displaystyle I_{B}(x)}denotes theindicator functionofB{\displaystyle B}. Then for any twodisjointmeasurable setsB1,B2{\displaystyle B_{1},B_{2}}one has This property can be contrasted with measures, which aresigma-additiveset functions. However, the Type I integral isnotmultiplicativeas afunctional. Given two product-integrable functionsf,g{\displaystyle f,g}, and a measurable setA{\displaystyle A}, it is generally the case that IfX{\displaystyle X}is a measure space with measureμ{\displaystyle \mu }, then for any product-integrable simple functionf(x)=∑k=1nakIAk(x){\displaystyle f(x)=\sum _{k=1}^{n}a_{k}I_{A_{k}}(x)}(i.e. aconical combinationof theindicator functionsfor some disjoint measurable setsA1,A2,…,An⊆X{\displaystyle A_{1},A_{2},\dots ,A_{n}\subseteq X}), its type II product integral is defined to be This can be seen to generalize the definition given above. Taking logarithms of both sides, we see that for any product-integrable simple functionf{\displaystyle f}: where the definition of the Lebesgue integral for simple functions was used. This observation, analogous to the one already made for Type II integrals above, allows one to entirely reduce the "Lebesgue theory of type II geometric integrals" to the Lebesgue theory of (classical) integrals. 
In other words, because continuous functions likeexp{\displaystyle \exp }andln{\displaystyle \ln }can be interchanged with limits, and the product integral of any product-integrable functionf{\displaystyle f}is equal to the limit of some increasing sequence of product integrals of simple functions, it follows that the relationship holds generally foranyproduct-integrablef{\displaystyle f}. This generalizes the property of geometric integrals mentioned above.
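The multiplicativity claims for the Type I integral made above can be illustrated with constant integrands, for which V_f(B) = exp(∫_B f dμ) has a closed form. A minimal sketch (the constants and set sizes are arbitrary):

```python
import math

# V_f(B) = exp(integral of f over B).  For a constant integrand c on a
# set of measure `length`, this is exp(c * length).
def V(c, length):
    return math.exp(c * length)

# Multiplicative as a set function: disjoint B1, B2 of measures 0.3 and 0.7
# satisfy V_f(B1 ∪ B2) = V_f(B1) * V_f(B2)  (here f = 2 on [0, 1]):
set_multiplicative = math.isclose(V(2.0, 1.0), V(2.0, 0.3) * V(2.0, 0.7))

# NOT multiplicative as a functional: with f = 1 and g = 2 on [0, 1],
# V_{fg}(A) = exp(2) differs from V_f(A) * V_g(A) = exp(3):
functional_multiplicative = math.isclose(V(1.0 * 2.0, 1.0), V(1.0, 1.0) * V(2.0, 1.0))

print(set_multiplicative, functional_multiplicative)
```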
https://en.wikipedia.org/wiki/Product_integral
Inmathematics, tables oftrigonometric functionsare useful in a number of areas. Before the existence ofpocket calculators,trigonometric tableswere essential fornavigation,scienceandengineering. The calculation ofmathematical tableswas an important area of study, which led to the development of thefirst mechanical computing devices. Modern computers and pocket calculators now generate trigonometric function values on demand, using special libraries of mathematical code. Often, these libraries use pre-calculated tables internally, and compute the required value by using an appropriateinterpolationmethod. Interpolation of simple look-up tables of trigonometric functions is still used incomputer graphics, where only modest accuracy may be required and speed is often paramount. Another important application of trigonometric tables and generation schemes is forfast Fourier transform(FFT) algorithms, where the same trigonometric function values (calledtwiddle factors) must be evaluated many times in a given transform, especially in the common case where many transforms of the same size are computed. In this case, calling generic library routines every time is unacceptably slow. One option is to call the library routines once, to build up a table of those trigonometric values that will be needed, but this requires significant memory to store the table. The other possibility, since aregular sequenceof values is required, is to use a recurrence formula to compute the trigonometric values on the fly. Significant research has been devoted to finding accurate, stable recurrence schemes in order to preserve the accuracy of the FFT (which is very sensitive to trigonometric errors). A trigonometry table is essentially a reference chart that presents the values of sine, cosine, tangent, and other trigonometric functions for various angles. 
These angles are usually arranged across the top row of the table, while the different trigonometric functions are labeled in the first column on the left. To locate the value of a specific trigonometric function at a certain angle, you would find the row for the function and follow it across to the column under the desired angle.[1] Modern computers and calculators use a variety of techniques to provide trigonometric function values on demand for arbitrary angles (Kantabutra, 1996). One common method, especially on higher-end processors withfloating-pointunits, is to combine apolynomialorrationalapproximation(such asChebyshev approximation, best uniform approximation,Padé approximation, and typically for higher or variable precisions,TaylorandLaurent series) with range reduction and a table lookup — they first look up the closest angle in a small table, and then use the polynomial to compute the correction. Maintaining precision while performing such interpolation is nontrivial, but methods likeGal's accurate tables, Cody and Waite range reduction, and Payne and Hanek radian reduction algorithms can be used for this purpose. On simpler devices that lack ahardware multiplier, there is an algorithm calledCORDIC(as well as related techniques) that is more efficient, since it uses onlyshiftsand additions. All of these methods are commonly implemented inhardwarefor performance reasons. The particular polynomial used to approximate a trigonometric function is generated ahead of time using some approximation of aminimax approximation algorithm. Forvery high precisioncalculations, when series-expansion convergence becomes too slow, trigonometric functions can be approximated by thearithmetic-geometric mean, which itself approximates the trigonometric function by the (complex)elliptic integral(Brent, 1976). Trigonometric functions of angles that arerationalmultiples of 2π arealgebraic numbers. 
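The table-lookup-plus-polynomial-correction scheme described above can be sketched as follows. This is a hedged toy implementation, not any particular library's method: the table size, polynomial degrees, and the crude modulo range reduction are arbitrary illustrative choices (a production routine would use something like Cody–Waite or Payne–Hanek reduction):

```python
import math

# Small table of sin/cos at 256 equally spaced angles, plus a short
# Taylor correction via sin(a + r) = sin(a)cos(r) + cos(a)sin(r).
TABLE_SIZE = 256
STEP = 2 * math.pi / TABLE_SIZE
SIN_TAB = [math.sin(k * STEP) for k in range(TABLE_SIZE)]
COS_TAB = [math.cos(k * STEP) for k in range(TABLE_SIZE)]

def fast_sin(x):
    x = x % (2 * math.pi)          # crude range reduction (illustrative only)
    k0 = round(x / STEP)           # nearest tabulated angle
    r = x - k0 * STEP              # small remainder, |r| <= STEP/2
    sin_r = r - r**3 / 6           # short Taylor polynomials for the
    cos_r = 1 - r**2 / 2           # correction (degrees chosen arbitrarily)
    k = k0 % TABLE_SIZE
    return SIN_TAB[k] * cos_r + COS_TAB[k] * sin_r

worst = max(abs(fast_sin(t) - math.sin(t))
            for t in (i * 0.001 for i in range(7000)))
print(worst)   # roughly 1e-9 with this table size and polynomial degree
```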
The values fora/b·2πcan be found by applyingde Moivre's identityforn = ato abthroot of unity, which is also a root of the polynomialxb- 1in thecomplex plane. For example, the cosine and sine of 2π ⋅ 5/37 are therealandimaginary parts, respectively, of the 5th power of the 37th root of unity cos(2π/37) + sin(2π/37)i, which is a root of thedegree-37 polynomialx37− 1. For this case, aroot-finding algorithmsuch asNewton's methodis much simpler than the arithmetic-geometric mean algorithms above while converging at a similar asymptotic rate. The latter algorithms are required fortranscendentaltrigonometric constants, however. Historically, the earliest method by which trigonometric tables were computed, and probably the most common until the advent of computers, was to repeatedly apply the half-angle and angle-additiontrigonometric identitiesstarting from a known value (such as sin(π/2) = 1, cos(π/2) = 0). This method was used by the ancient astronomerPtolemy, who derived them in theAlmagest, a treatise onastronomy. In modern form, the identities he derived are stated as follows (with signs determined by the quadrant in whichxlies): These were used to constructPtolemy's table of chords, which was applied to astronomical problems. Various other permutations on these identities are possible: for example, some early trigonometric tables used not sine and cosine, but sine andversine. A quick, but inaccurate, algorithm for calculating a table ofNapproximationssnforsin(2πn/N) andcnforcos(2πn/N) is: forn= 0,...,N− 1, whered= 2π/N. This is simply theEuler methodfor integrating thedifferential equation: with initial conditionss(0) = 0 andc(0) = 1, whose analytical solution iss= sin(t) andc= cos(t). Unfortunately, this is not a useful algorithm for generating sine tables because it has a significant error, proportional to 1/N. For example, forN= 256 the maximum error in the sine values is ~0.061 (s202= −1.0368 instead of −0.9757). 
ForN= 1024, the maximum error in the sine values is ~0.015 (s803= −0.99321 instead of −0.97832), about 4 times smaller. If the sine and cosine values obtained were to be plotted, this algorithm would draw alogarithmic spiralrather than a circle. A simple recurrence formula to generate trigonometric tables is based onEuler's formulaand the relation: This leads to the following recurrence to compute trigonometric valuessnandcnas above: forn= 0, ...,N− 1, wherewr= cos(2π/N) andwi= sin(2π/N). These two starting trigonometric values are usually computed using existing library functions (but could also be found e.g. by employingNewton's methodin the complex plane to solve for the primitiverootofzN− 1). This method would produce anexacttable in exact arithmetic, but has errors in finite-precisionfloating-pointarithmetic. In fact, the errors grow as O(εN) (in both the worst and average cases), where ε is the floating-point precision. A significant improvement is to use the following modification to the above, a trick (due to Singleton[2]) often used to generate trigonometric values for FFT implementations: where α = 2 sin2(π/N) and β = sin(2π/N). The errors of this method are much smaller, O(ε √N) on average and O(εN) in the worst case, but this is still large enough to substantially degrade the accuracy of FFTs of large sizes.
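Both recurrences above are easy to compare directly. The sketch below runs the naive Euler recurrence and Singleton's modified one for N = 256 and measures the worst sine error of each; the Euler error should land near the ~0.061 figure quoted in the text, while the modified recurrence stays near machine precision:

```python
import math

N = 256
d = 2 * math.pi / N
alpha = 2 * math.sin(math.pi / N) ** 2   # Singleton's coefficients
beta = math.sin(2 * math.pi / N)

def max_error(step):
    """Largest |s_n - sin(2*pi*n/N)| produced by a table recurrence."""
    s, c, err = 0.0, 1.0, 0.0
    for n in range(N):
        err = max(err, abs(s - math.sin(2 * math.pi * n / N)))
        s, c = step(s, c)                # tuple RHS uses the old (s, c)
    return err

euler_err = max_error(lambda s, c: (s + d * c, c - d * s))
singleton_err = max_error(lambda s, c: (s + (beta * c - alpha * s),
                                        c - (alpha * c + beta * s)))
print(euler_err)       # ~0.061 for N = 256, matching the text
print(singleton_err)   # near machine precision
```

Singleton's step is an exact rotation in exact arithmetic, since 1 − α = cos(2π/N) and β = sin(2π/N); only floating-point rounding remains.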
https://en.wikipedia.org/wiki/Generating_trigonometric_tables
Inmathematics, aNewtonian series, named afterIsaac Newton, is a sum over asequencean{\displaystyle a_{n}}written in the form where is thebinomial coefficientand(s)n{\displaystyle (s)_{n}}is thefalling factorial. Newtonian series often appear in relations of the form seen inumbral calculus. The generalizedbinomial theoremgives A proof for this identity can be obtained by showing that it satisfies the differential equation Thedigamma function: TheStirling numbers of the second kindare given by the finite sum This formula is a special case of thekthforward differenceof themonomialxnevaluated atx= 0: A related identity forms the basis of theNörlund–Rice integral: whereΓ(x){\displaystyle \Gamma (x)}is theGamma functionandB(x,y){\displaystyle B(x,y)}is theBeta function. Thetrigonometric functionshaveumbralidentities: and The umbral nature of these identities is a bit more clear by writing them in terms of thefalling factorial(s)n{\displaystyle (s)_{n}}. The first few terms of the sin series are which can be recognized as resembling theTaylor seriesfor sinx, with (s)nstanding in the place ofxn. Inanalytic number theoryit is of interest to sum whereBare theBernoulli numbers. Employing the generating function its Borel sum can be evaluated as The general relation gives the Newton series whereζ{\displaystyle \zeta }is theHurwitz zeta functionandBk(x){\displaystyle B_{k}(x)}theBernoulli polynomial. The series does not converge, the identity holds formally. Another identity is1Γ(x)=∑k=0∞(x−ak)∑j=0k(−1)k−jΓ(a+j)(kj),{\displaystyle {\frac {1}{\Gamma (x)}}=\sum _{k=0}^{\infty }{x-a \choose k}\sum _{j=0}^{k}{\frac {(-1)^{k-j}}{\Gamma (a+j)}}{k \choose j},}which converges forx>a{\displaystyle x>a}. This follows from the general form of a Newton series for equidistant nodes (when it exists, i.e. is convergent)
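The finite sum for the Stirling numbers of the second kind given above, the kth forward difference of xⁿ at 0 divided by k!, is directly computable. A minimal sketch:

```python
from math import comb, factorial

# S(n, k) = (1/k!) * sum_{j=0}^{k} (-1)^(k-j) * C(k, j) * j^n,
# i.e. the k-th forward difference of x^n at x = 0, divided by k!.
def stirling2(n, k):
    total = sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1))
    return total // factorial(k)   # the sum is always divisible by k!

print([stirling2(4, k) for k in range(5)])  # row n = 4 of the Stirling triangle
```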
https://en.wikipedia.org/wiki/Table_of_Newtonian_series
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in v̂ (pronounced "v-hat"). The term normalized vector is sometimes used as a synonym for unit vector. The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e., û = u/‖u‖, where ‖u‖ is the norm (or length) of u and u = (u₁, u₂, ..., uₙ).[1][2] The proof is the following: ‖û‖ = √((u₁/√(u₁² + ... + uₙ²))² + ... + (uₙ/√(u₁² + ... + uₙ²))²) = √((u₁² + ... + uₙ²)/(u₁² + ... + uₙ²)) = √1 = 1. A unit vector is often used to represent directions, such as normal directions. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors. Unit vectors may be used to represent the axes of a Cartesian coordinate system. For instance, the standard unit vectors in the direction of the x, y, and z axes of a three dimensional Cartesian coordinate system are x̂ = (1, 0, 0), ŷ = (0, 1, 0), ẑ = (0, 0, 1). They form a set of mutually orthogonal unit vectors, typically referred to as a standard basis in linear algebra. They are often denoted using common vector notation (e.g., x or x⃗) rather than standard unit vector notation (e.g., x̂). In most contexts it can be assumed that x, y, and z (or x⃗, y⃗, and z⃗) are versors of a 3-D Cartesian coordinate system.
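Normalization of a non-zero vector is a one-line computation; the following minimal sketch divides by the Euclidean norm and confirms the result has length 1:

```python
import math

# Normalize a non-zero vector u: divide each component by ||u||.
def normalize(u):
    norm = math.sqrt(sum(x * x for x in u))
    return [x / norm for x in u]

v = normalize([3.0, 4.0])      # a 3-4-5 triangle, so ||u|| = 5
print(v)                       # components 3/5 and 4/5
print(math.hypot(*v))          # length of the normalized vector: 1
```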
The notations (î,ĵ,k̂), (x̂1,x̂2,x̂3), (êx,êy,êz), or (ê1,ê2,ê3), with or withouthat, are also used,[1]particularly in contexts wherei,j,kmight lead to confusion with another quantity (for instance withindexsymbols such asi,j,k, which are used to identify an element of a set or array or sequence of variables). When a unit vector in space is expressed inCartesian notationas a linear combination ofx,y,z, its three scalar components can be referred to asdirection cosines. The value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe theorientation(angular position) of a straight line, segment of straight line, oriented axis, or segment of oriented axis (vector). The threeorthogonalunit vectors appropriate to cylindrical symmetry are: They are related to the Cartesian basisx^{\displaystyle {\hat {x}}},y^{\displaystyle {\hat {y}}},z^{\displaystyle {\hat {z}}}by: The vectorsρ^{\displaystyle {\boldsymbol {\hat {\rho }}}}andφ^{\displaystyle {\boldsymbol {\hat {\varphi }}}}are functions ofφ,{\displaystyle \varphi ,}and arenotconstant in direction. When differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. The derivatives with respect toφ{\displaystyle \varphi }are: The unit vectors appropriate to spherical symmetry are:r^{\displaystyle \mathbf {\hat {r}} }, the direction in which the radial distance from the origin increases;φ^{\displaystyle {\boldsymbol {\hat {\varphi }}}}, the direction in which the angle in thex-yplane counterclockwise from the positivex-axis is increasing; andθ^{\displaystyle {\boldsymbol {\hat {\theta }}}}, the direction in which the angle from the positivezaxis is increasing. To minimize redundancy of representations, the polar angleθ{\displaystyle \theta }is usually taken to lie between zero and 180 degrees. 
It is especially important to note the context of any ordered triplet written inspherical coordinates, as the roles ofφ^{\displaystyle {\boldsymbol {\hat {\varphi }}}}andθ^{\displaystyle {\boldsymbol {\hat {\theta }}}}are often reversed. Here, the American "physics" convention[3]is used. This leaves theazimuthal angleφ{\displaystyle \varphi }defined the same as in cylindrical coordinates. TheCartesianrelations are: The spherical unit vectors depend on bothφ{\displaystyle \varphi }andθ{\displaystyle \theta }, and hence there are 5 possible non-zero derivatives. For a more complete description, seeJacobian matrix and determinant. The non-zero derivatives are: Common themes of unit vectors occur throughoutphysicsandgeometry:[4] A normal vectorn^{\displaystyle \mathbf {\hat {n}} }to the plane containing and defined by the radial position vectorrr^{\displaystyle r\mathbf {\hat {r}} }and angular tangential direction of rotationθθ^{\displaystyle \theta {\boldsymbol {\hat {\theta }}}}is necessary so that the vector equations of angular motion hold. In terms ofpolar coordinates;n^=r^×θ^{\displaystyle \mathbf {\hat {n}} =\mathbf {\hat {r}} \times {\boldsymbol {\hat {\theta }}}} One unit vectore^∥{\displaystyle \mathbf {\hat {e}} _{\parallel }}aligned parallel to a principal direction (red line), and a perpendicular unit vectore^⊥{\displaystyle \mathbf {\hat {e}} _{\bot }}is in any radial direction relative to the principal line. Unit vector at acute deviation angleφ(including 0 orπ/2 rad) relative to a principal direction. In general, a coordinate system may be uniquely specified using a number oflinearly independentunit vectorse^n{\displaystyle \mathbf {\hat {e}} _{n}}[1](the actual number being equal to the degrees of freedom of the space). For ordinary 3-space, these vectors may be denotede^1,e^2,e^3{\displaystyle \mathbf {\hat {e}} _{1},\mathbf {\hat {e}} _{2},\mathbf {\hat {e}} _{3}}. 
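The spherical unit vectors in the physics convention above can be written out in Cartesian components and checked for orthonormality and right-handedness. A minimal numpy sketch (the test angles are arbitrary):

```python
import numpy as np

# Spherical unit vectors, physics convention: theta = polar angle from +z,
# phi = azimuth in the x-y plane.
theta, phi = 0.7, 2.1   # arbitrary test angles

r_hat     = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
theta_hat = np.array([np.cos(theta) * np.cos(phi),
                      np.cos(theta) * np.sin(phi),
                      -np.sin(theta)])
phi_hat   = np.array([-np.sin(phi), np.cos(phi), 0.0])

basis = np.vstack([r_hat, theta_hat, phi_hat])
ortho = np.allclose(basis @ basis.T, np.eye(3))          # orthonormal basis
handed = np.allclose(np.cross(theta_hat, phi_hat), r_hat)  # right-handed
print(ortho, handed)
```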
It is nearly always convenient to define the system to be orthonormal andright-handed: whereδij{\displaystyle \delta _{ij}}is theKronecker delta(which is 1 fori=j, and 0 otherwise) andεijk{\displaystyle \varepsilon _{ijk}}is theLevi-Civita symbol(which is 1 for permutations ordered asijk, and −1 for permutations ordered askji). A unit vector inR3{\displaystyle \mathbb {R} ^{3}}was called aright versorbyW. R. Hamilton, as he developed hisquaternionsH⊂R4{\displaystyle \mathbb {H} \subset \mathbb {R} ^{4}}. In fact, he was the originator of the termvector, as every quaternionq=s+v{\displaystyle q=s+v}has a scalar partsand a vector partv. Ifvis a unit vector inR3{\displaystyle \mathbb {R} ^{3}}, then the square ofvin quaternions is −1. Thus byEuler's formula,exp⁡(θv)=cos⁡θ+vsin⁡θ{\displaystyle \exp(\theta v)=\cos \theta +v\sin \theta }is aversorin the3-sphere. Whenθis aright angle, the versor is a right versor: its scalar part is zero and its vector partvis a unit vector inR3{\displaystyle \mathbb {R} ^{3}}. Thus the right versors extend the notion ofimaginary unitsfound in thecomplex plane, where the right versors now range over the2-sphereS2⊂R3⊂H{\displaystyle \mathbb {S} ^{2}\subset \mathbb {R} ^{3}\subset \mathbb {H} }rather than the pair{i, −i} in the complex plane. By extension, aright quaternionis a real multiple of a right versor.
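Hamilton's right-versor facts can be verified with a quaternion product implemented from its (scalar, vector) decomposition. A sketch (the specific unit vector is an arbitrary choice):

```python
import numpy as np

# Quaternion multiplication in (scalar, vector) form:
# (s1, v1)(s2, v2) = (s1 s2 - v1.v2,  s1 v2 + s2 v1 + v1 x v2).
def qmul(q1, q2):
    s1, v1 = q1
    s2, v2 = q2
    return (s1*s2 - np.dot(v1, v2), s1*v2 + s2*v1 + np.cross(v1, v2))

v = np.array([1.0, 2.0, 2.0]) / 3.0        # a unit vector in R^3
assert np.isclose(np.dot(v, v), 1.0)

# The square of a unit vector, viewed as a pure quaternion, is -1.
s, vec = qmul((0.0, v), (0.0, v))
assert np.isclose(s, -1.0) and np.allclose(vec, 0.0)

# Euler's formula exp(theta v) = cos(theta) + v sin(theta); at theta = pi/2
# this is a right versor: zero scalar part and a unit vector part.
theta = np.pi / 2
versor = (np.cos(theta), np.sin(theta) * v)
assert np.isclose(versor[0], 0.0)
assert np.isclose(np.linalg.norm(versor[1]), 1.0)
```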
https://en.wikipedia.org/wiki/Unit_vector
An anti-aliasing filter (AAF) is a filter used before a signal sampler to restrict the bandwidth of a signal to satisfy the Nyquist–Shannon sampling theorem over the band of interest. Since the theorem states that unambiguous reconstruction of the signal from its samples is possible when the power of frequencies above the Nyquist frequency is zero, a brick wall filter is an idealized but impractical AAF.[a] A practical AAF makes a trade-off between reduced bandwidth and increased aliasing. A practical anti-aliasing filter will typically permit some aliasing to occur, or attenuate or otherwise distort some in-band frequencies close to the Nyquist limit. For this reason, many practical systems sample at a rate higher than would be theoretically required by a perfect AAF in order to ensure that all frequencies of interest can be reconstructed, a practice called oversampling. In the case of optical image sampling, as by image sensors in digital cameras, the anti-aliasing filter is also known as an optical low-pass filter (OLPF), blur filter, or AA filter. The mathematics of sampling in two spatial dimensions is similar to the mathematics of time-domain sampling, but the filter implementation technologies are different. The typical implementation in digital cameras is two layers of birefringent material such as lithium niobate, which spreads each optical point into a cluster of four points.[1] The choice of spot separation for such a filter involves a trade-off among sharpness, aliasing, and fill factor (the ratio of the active refracting area of a microlens array to the total contiguous area occupied by the array).
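The effect an AAF is meant to prevent is easy to demonstrate. In this sketch (the tone and rate values are illustrative, not from the article), a 7 kHz tone sampled at 10 kHz violates the Nyquist criterion and shows up at the alias frequency |7000 − 10000| = 3000 Hz; a brick-wall filter below 5 kHz would have removed it before sampling.

```python
import numpy as np

fs, f0, n = 10_000, 7_000, 1_000      # sample rate, tone frequency, samples
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)        # sampled without an anti-aliasing filter

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1/fs)
peak = freqs[np.argmax(spectrum)]

assert peak == 3000.0                 # the 7 kHz tone appears at 3 kHz
```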
In a monochrome or three-CCD or Foveon X3 camera, the microlens array alone, if near 100% effective, can provide a significant anti-aliasing function,[2] while in color filter array (e.g. Bayer filter) cameras, an additional filter is generally needed to reduce aliasing to an acceptable level.[3][4][5] Alternative implementations include the Pentax K-3's anti-aliasing filter, which applies small vibrations to the sensor element.[6] Anti-aliasing filters are used at the input of an analog-to-digital converter. Similar filters are used as reconstruction filters at the output of a digital-to-analog converter. In the latter case, the filter prevents imaging, the reverse process of aliasing where in-band frequencies are mirrored out of band. With oversampling, a higher intermediate digital sample rate is used, so that a nearly ideal digital filter can sharply cut off aliasing near the original low Nyquist frequency and give better phase response, while a much simpler analog filter can stop frequencies above the new higher Nyquist frequency. Because analog filters have relatively high cost and limited performance, relaxing the demands on the analog filter can greatly reduce both aliasing and cost. Furthermore, because some noise is averaged out, the higher sampling rate can moderately improve signal-to-noise ratio. A signal may be intentionally sampled at a higher rate to reduce the requirements and distortion of the anti-alias filter. For example, compare CD audio with high-resolution audio. CD audio filters the signal to a passband edge of 20 kHz, with a stopband Nyquist frequency of 22.05 kHz and sample rate of 44.1 kHz. The narrow 2.05 kHz transition band requires a compromise between filter complexity and performance.
High-resolution audio uses a higher sample rate, providing both a higher passband edge and a larger transition band, which allows better filter performance with reduced aliasing, reduced attenuation of higher audio frequencies, and reduced time- and phase-domain signal distortion.[7][8][9][10] Often, an anti-aliasing filter is a low-pass filter; this is not a requirement, however. Generalizations of the Nyquist–Shannon sampling theorem allow sampling of other band-limited passband signals instead of baseband signals. For signals that are bandwidth limited, but not centered at zero, a band-pass filter can be used as an anti-aliasing filter. For example, this could be done with a single-sideband modulated or frequency modulated signal. If one desired to sample an FM radio broadcast centered at 87.9 MHz and bandlimited to a 200 kHz band, then an appropriate anti-alias filter would be centered on 87.9 MHz with 200 kHz bandwidth (or passband of 87.8 MHz to 88.0 MHz), and the sampling rate would be no less than 400 kHz, but should also satisfy other constraints to prevent aliasing. It is very important to avoid input signal overload when using an anti-aliasing filter. If the signal is strong enough, it can cause clipping at the analog-to-digital converter, even after filtering. When distortion due to clipping occurs after the anti-aliasing filter, it can create components outside the passband of the anti-aliasing filter; these components can then alias, causing the reproduction of other non-harmonically related frequencies.[11]
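The "other constraints" on the sample rate in the FM example are the standard band-pass (undersampling) criterion, stated here as an assumption rather than taken from the article: a band [f_low, f_high] can be sampled alias-free at rate fs if 2·f_high/n ≤ fs ≤ 2·f_low/(n − 1) for some integer n. A sketch:

```python
import math

f_low, f_high = 87.8e6, 88.0e6        # the FM passband from the text

def valid(fs):
    # Smallest Nyquist-zone index n satisfying 2*f_high/n <= fs.
    n = math.ceil(2 * f_high / fs)
    return fs >= 2 * f_high / n and (n == 1 or fs <= 2 * f_low / (n - 1))

assert valid(400e3)       # the 400 kHz rate quoted in the text works (n = 440)
assert not valid(300e3)   # an arbitrary lower rate generally does not
assert valid(200e6)       # ordinary oversampling (fs > 2*f_high) is always valid
```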
https://en.wikipedia.org/wiki/Anti-aliasing_filter
In mathematics, a Borwein integral is an integral whose unusual properties were first presented by mathematicians David Borwein and Jonathan Borwein in 2001.[1] Borwein integrals involve products of sinc(ax), where the sinc function is given by sinc(x) = sin(x)/x for x not equal to 0, and sinc(0) = 1.[1][2] These integrals are remarkable for exhibiting apparent patterns that eventually break down. The following is an example:

∫₀^∞ sinc(x) dx = π/2,  ∫₀^∞ sinc(x) sinc(x/3) dx = π/2.

This pattern continues up to

∫₀^∞ sinc(x) sinc(x/3) ⋯ sinc(x/13) dx = π/2.

At the next step the pattern fails:

∫₀^∞ sinc(x) sinc(x/3) ⋯ sinc(x/15) dx ≈ π/2 − 2.31×10⁻¹¹.

In general, similar integrals have value π/2 whenever the numbers 3, 5, 7, … are replaced by positive real numbers such that the sum of their reciprocals is less than 1. In the example above, 1/3 + 1/5 + … + 1/13 < 1, but 1/3 + 1/5 + … + 1/15 > 1. With the inclusion of the additional factor 2cos(x), the pattern holds up over a longer series,[3]

∫₀^∞ 2cos(x) sinc(x) sinc(x/3) ⋯ sinc(x/111) dx = π/2,

but

∫₀^∞ 2cos(x) sinc(x) sinc(x/3) ⋯ sinc(x/113) dx < π/2.

In this case, 1/3 + 1/5 + … + 1/111 < 2, but 1/3 + 1/5 + … + 1/113 > 2. The exact answer can be calculated using the general formula provided in the next section; fully expanded, it is a fraction involving two 2736-digit integers. The reason the original and the extended series break down has been demonstrated with an intuitive mathematical explanation.[4][5] In particular, a random walk reformulation with a causality argument sheds light on the pattern breaking and opens the way for a number of generalizations.[6] Given a sequence of nonzero real numbers a₀, a₁, a₂, …, a general formula for the integral can be given.[1] To state the formula, one will need to consider sums involving the aₖ.
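The reciprocal-sum thresholds quoted above can be verified exactly with rational arithmetic. A short Python sketch:

```python
from fractions import Fraction

# 1/3, 1/5, ..., 1/15 -- the reciprocals appearing in the basic example.
recips = [Fraction(1, 2*k + 1) for k in range(1, 8)]
assert sum(recips[:6]) < 1      # 1/3 + ... + 1/13 < 1 -> pattern still holds
assert sum(recips[:7]) > 1      # 1/3 + ... + 1/15 > 1 -> pattern breaks

# With the extra 2*cos(x) factor the threshold moves to 2.
recips2 = [Fraction(1, 2*k + 1) for k in range(1, 57)]   # 1/3, ..., 1/113
assert sum(recips2[:55]) < 2    # up to 1/111: pattern holds
assert sum(recips2[:56]) > 2    # adding 1/113: pattern breaks
```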
In particular, ifγ=(γ1,γ2,…,γn)∈{±1}n{\displaystyle \gamma =(\gamma _{1},\gamma _{2},\ldots ,\gamma _{n})\in \{\pm 1\}^{n}}is ann{\displaystyle n}-tuple where each entry is±1{\displaystyle \pm 1}, then we writebγ=a0+γ1a1+γ2a2+⋯+γnan{\displaystyle b_{\gamma }=a_{0}+\gamma _{1}a_{1}+\gamma _{2}a_{2}+\cdots +\gamma _{n}a_{n}}, which is a kind of alternating sum of the first fewak{\displaystyle a_{k}}, and we setεγ=γ1γ2⋯γn{\displaystyle \varepsilon _{\gamma }=\gamma _{1}\gamma _{2}\cdots \gamma _{n}}, which is either±1{\displaystyle \pm 1}. With this notation, the value for the above integral is where In the case whena0>|a1|+|a2|+⋯+|an|{\displaystyle a_{0}>|a_{1}|+|a_{2}|+\cdots +|a_{n}|}, we haveCn=1{\displaystyle C_{n}=1}. Furthermore, if there is ann{\displaystyle n}such that for eachk=0,…,n−1{\displaystyle k=0,\ldots ,n-1}we have0<an<2ak{\displaystyle 0<a_{n}<2a_{k}}anda1+a2+⋯+an−1<a0<a1+a2+⋯+an−1+an{\displaystyle a_{1}+a_{2}+\cdots +a_{n-1}<a_{0}<a_{1}+a_{2}+\cdots +a_{n-1}+a_{n}}, which means thatn{\displaystyle n}is the first value when the partial sum of the firstn{\displaystyle n}elements of the sequence exceeda0{\displaystyle a_{0}}, thenCk=1{\displaystyle C_{k}=1}for eachk=0,…,n−1{\displaystyle k=0,\ldots ,n-1}but The first example is the case whenak=12k+1{\displaystyle a_{k}={\frac {1}{2k+1}}}. Note that ifn=7{\displaystyle n=7}thena7=115{\displaystyle a_{7}={\frac {1}{15}}}and13+15+17+19+111+113≈0.955{\displaystyle {\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{7}}+{\frac {1}{9}}+{\frac {1}{11}}+{\frac {1}{13}}\approx 0.955}but13+15+17+19+111+113+115≈1.02{\displaystyle {\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{7}}+{\frac {1}{9}}+{\frac {1}{11}}+{\frac {1}{13}}+{\frac {1}{15}}\approx 1.02}, so becausea0=1{\displaystyle a_{0}=1}, we get that which remains true if we remove any of the products, but that which is equal to the value given previously. 
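The closed form above can be implemented directly. This sketch uses exact rational arithmetic, with the integral over [0, ∞) taken to be (π/(2a₀))·Cₙ, where Cₙ is the signed sum over sign tuples γ, mirroring the b_γ and ε_γ notation:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def C(a):
    # C_n = [ sum over gamma in {+-1}^n of eps_gamma * b_gamma^n * sgn(b_gamma) ]
    #       / ( 2^n * n! * a_1 * a_2 * ... * a_n ),  with a = (a_0, a_1, ..., a_n).
    a0, rest = a[0], a[1:]
    n = len(rest)
    total = Fraction(0)
    for gamma in product((1, -1), repeat=n):
        b = a0 + sum(g * ak for g, ak in zip(gamma, rest))
        eps = 1
        for g in gamma:
            eps *= g
        sgn = (b > 0) - (b < 0)
        total += eps * b**n * sgn
    denom = Fraction(2**n * factorial(n))
    for ak in rest:
        denom *= ak
    return total / denom

a = [Fraction(1)] + [Fraction(1, 2*k + 1) for k in range(1, 8)]

assert C(a[:7]) == 1   # factors up to sinc(x/13): the integral is exactly pi/2
assert C(a) < 1        # adding sinc(x/15): the integral drops below pi/2
```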
An exact integration method that is efficient for evaluating Borwein-like integrals is discussed here.[7]This integration method works by reformulating integration in terms of a series of differentiations and it yields intuition into the unusual behavior of the Borwein integrals. The Integration by Differentiation method is applicable to general integrals, including Fourier and Laplace transforms. It is used in the integration engine of Maple since 2019. The Integration by Differentiation method is independent of the Feynman method that also uses differentiation to integrate. While the integral becomes less thanπ2{\displaystyle {\frac {\pi }{2}}}whenn{\displaystyle n}exceeds 6, it never becomes much less, and in fact Borwein and Bailey[8]have shown where we can pull the limit out of the integral thanks to thedominated convergence theorem. Similarly, while becomes less thanπ2{\displaystyle {\frac {\pi }{2}}}whenn{\displaystyle n}exceeds 55, we have Furthermore, using theWeierstrass factorizations one can show and with a change of variables obtain[9] and[8][10] Schmuland[11]has given appealing probabilistic formulations of the infinite product Borwein integrals. For example, consider the random harmonic series where one flips independent fair coins to choose the signs. This series convergesalmost surely, that is, with probability 1. Theprobability density functionof the result is a well-defined function, and value of this function at 2 is close to 1/8. However, it is closer to Schmuland's explanation is that this quantity is1/π{\displaystyle 1/\pi }times
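Schmuland's observation can be reproduced by simulation. This Monte Carlo sketch truncates the random harmonic series and estimates its density near 2 (close to 1/8) and near 0 (close to 1/4); the truncation depth, sample count, and the asserted bounds are all illustrative choices, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n_terms, chunk, n_chunks = 400, 20_000, 5
weights = 1.0 / np.arange(1, n_terms + 1)

# Draw 100,000 samples of sum(+-1/n), n = 1..400, with fair random signs.
parts = []
for _ in range(n_chunks):
    signs = rng.integers(0, 2, size=(chunk, n_terms)).astype(np.float64) * 2 - 1
    parts.append(signs @ weights)
sums = np.concatenate(parts)

h = 0.1   # half-width of the density-estimation window
density_at_2 = np.mean(np.abs(sums - 2.0) < h) / (2 * h)
density_at_0 = np.mean(np.abs(sums) < h) / (2 * h)

assert 0.06 < density_at_2 < 0.20   # should be close to 1/8
assert 0.18 < density_at_0 < 0.32   # should be close to 1/4
```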
https://en.wikipedia.org/wiki/Borwein_integral
Inmathematics, there are severalintegralsknown as theDirichlet integral, after the German mathematicianPeter Gustav Lejeune Dirichlet, one of which is theimproper integralof thesinc functionover the positive real number line. ∫0∞sin⁡xxdx=π2.{\displaystyle \int _{0}^{\infty }{\frac {\sin x}{x}}\,dx={\frac {\pi }{2}}.} This integral is notabsolutely convergent, meaning|sin⁡xx|{\displaystyle \left|{\frac {\sin x}{x}}\right|}has infinite Lebesgue or Riemann improper integral over the positive real line, so the sinc function is notLebesgue integrableover the positive real line. The sinc function is, however, integrable in the sense of the improperRiemann integralor the generalized Riemann orHenstock–Kurzweil integral.[1][2]This can be seen by usingDirichlet's test for improper integrals. It is a good illustration of special techniques for evaluating definite integrals, particularly when it is not useful to directly apply thefundamental theorem of calculusdue to the lack of an elementaryantiderivativefor the integrand, as thesine integral, an antiderivative of the sinc function, is not anelementary function. In this case, the improper definite integral can be determined in several ways: the Laplace transform, double integration, differentiating under the integral sign, contour integration, and the Dirichlet kernel. But since the integrand is an even function, the domain of integration can be extended to the negative real number line as well. Letf(t){\displaystyle f(t)}be a function defined whenevert≥0.{\displaystyle t\geq 0.}Then itsLaplace transformis given byL{f(t)}=F(s)=∫0∞e−stf(t)dt,{\displaystyle {\mathcal {L}}\{f(t)\}=F(s)=\int _{0}^{\infty }e^{-st}f(t)\,dt,}if the integral exists.[3] A property of theLaplace transform useful for evaluating improper integralsisL[f(t)t]=∫s∞F(u)du,{\displaystyle {\mathcal {L}}\left[{\frac {f(t)}{t}}\right]=\int _{s}^{\infty }F(u)\,du,}providedlimt→0f(t)t{\displaystyle \lim _{t\to 0}{\frac {f(t)}{t}}}exists. 
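Before walking through the analytic derivations, the value can be confirmed symbolically. A sketch using SymPy, which can evaluate this improper integral directly (internally via the sine integral Si):

```python
import sympy as sp

x = sp.symbols('x')
result = sp.integrate(sp.sin(x) / x, (x, 0, sp.oo))
assert result == sp.pi / 2
```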
In what follows, one needs the resultL{sin⁡t}=1s2+1,{\displaystyle {\mathcal {L}}\{\sin t\}={\frac {1}{s^{2}+1}},}which is the Laplace transform of the functionsin⁡t{\displaystyle \sin t}(see the section 'Differentiating under the integral sign' for a derivation) as well as a version ofAbel's theorem(a consequence of thefinal value theorem for the Laplace transform). Therefore,∫0∞sin⁡ttdt=lims→0∫0∞e−stsin⁡ttdt=lims→0L[sin⁡tt]=lims→0∫s∞duu2+1=lims→0arctan⁡u|s∞=lims→0[π2−arctan⁡(s)]=π2.{\displaystyle {\begin{aligned}\int _{0}^{\infty }{\frac {\sin t}{t}}\,dt&=\lim _{s\to 0}\int _{0}^{\infty }e^{-st}{\frac {\sin t}{t}}\,dt=\lim _{s\to 0}{\mathcal {L}}\left[{\frac {\sin t}{t}}\right]\\[6pt]&=\lim _{s\to 0}\int _{s}^{\infty }{\frac {du}{u^{2}+1}}=\lim _{s\to 0}\arctan u{\Biggr |}_{s}^{\infty }\\[6pt]&=\lim _{s\to 0}\left[{\frac {\pi }{2}}-\arctan(s)\right]={\frac {\pi }{2}}.\end{aligned}}} Evaluating the Dirichlet integral using the Laplace transform is equivalent to calculating the same double definite integral by changing theorder of integration, namely,(I1=∫0∞∫0∞e−stsin⁡tdtds)=(I2=∫0∞∫0∞e−stsin⁡tdsdt),{\displaystyle \left(I_{1}=\int _{0}^{\infty }\int _{0}^{\infty }e^{-st}\sin t\,dt\,ds\right)=\left(I_{2}=\int _{0}^{\infty }\int _{0}^{\infty }e^{-st}\sin t\,ds\,dt\right),}(I1=∫0∞1s2+1ds=π2)=(I2=∫0∞sin⁡ttdt),provideds>0.{\displaystyle \left(I_{1}=\int _{0}^{\infty }{\frac {1}{s^{2}+1}}\,ds={\frac {\pi }{2}}\right)=\left(I_{2}=\int _{0}^{\infty }{\frac {\sin t}{t}}\,dt\right),{\text{ provided }}s>0.}The change of order is justified by the fact that for alls>0{\displaystyle s>0}, the integral is absolutely convergent. 
First rewrite the integral as a function of the additional variables,{\displaystyle s,}namely, the Laplace transform ofsin⁡tt.{\displaystyle {\frac {\sin t}{t}}.}So letf(s)=∫0∞e−stsin⁡ttdt.{\displaystyle f(s)=\int _{0}^{\infty }e^{-st}{\frac {\sin t}{t}}\,dt.} In order to evaluate the Dirichlet integral, we need to determinef(0).{\displaystyle f(0).}The continuity off{\displaystyle f}can be justified by applying thedominated convergence theoremafter integration by parts. Differentiate with respect tos>0{\displaystyle s>0}and apply theLeibniz rule for differentiating under the integral signto obtaindfds=dds∫0∞e−stsin⁡ttdt=∫0∞∂∂se−stsin⁡ttdt=−∫0∞e−stsin⁡tdt.{\displaystyle {\begin{aligned}{\frac {df}{ds}}&={\frac {d}{ds}}\int _{0}^{\infty }e^{-st}{\frac {\sin t}{t}}\,dt=\int _{0}^{\infty }{\frac {\partial }{\partial s}}e^{-st}{\frac {\sin t}{t}}\,dt\\[6pt]&=-\int _{0}^{\infty }e^{-st}\sin t\,dt.\end{aligned}}} Now, using Euler's formulaeit=cos⁡t+isin⁡t,{\displaystyle e^{it}=\cos t+i\sin t,}one can express the sine function in terms of complex exponentials:sin⁡t=12i(eit−e−it).{\displaystyle \sin t={\frac {1}{2i}}\left(e^{it}-e^{-it}\right).} Therefore,dfds=−∫0∞e−stsin⁡tdt=−∫0∞e−steit−e−it2idt=−12i∫0∞[e−t(s−i)−e−t(s+i)]dt=−12i[−1s−ie−t(s−i)−−1s+ie−t(s+i)]0∞=−12i[0−(−1s−i+1s+i)]=−12i(1s−i−1s+i)=−12i(s+i−(s−i)s2+1)=−1s2+1.{\displaystyle {\begin{aligned}{\frac {df}{ds}}&=-\int _{0}^{\infty }e^{-st}\sin t\,dt=-\int _{0}^{\infty }e^{-st}{\frac {e^{it}-e^{-it}}{2i}}dt\\[6pt]&=-{\frac {1}{2i}}\int _{0}^{\infty }\left[e^{-t(s-i)}-e^{-t(s+i)}\right]dt\\[6pt]&=-{\frac {1}{2i}}\left[{\frac {-1}{s-i}}e^{-t(s-i)}-{\frac {-1}{s+i}}e^{-t(s+i)}\right]_{0}^{\infty }\\[6pt]&=-{\frac {1}{2i}}\left[0-\left({\frac {-1}{s-i}}+{\frac {1}{s+i}}\right)\right]=-{\frac {1}{2i}}\left({\frac {1}{s-i}}-{\frac {1}{s+i}}\right)\\[6pt]&=-{\frac {1}{2i}}\left({\frac {s+i-(s-i)}{s^{2}+1}}\right)=-{\frac {1}{s^{2}+1}}.\end{aligned}}} Integrating with respect tos{\displaystyle 
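The closed form f(s) = π/2 − arctan s obtained from this derivation can be spot-checked numerically. A sketch using SciPy's quad; np.sinc is used so the integrand is well defined at t = 0 (sin(t)/t = np.sinc(t/π)):

```python
import numpy as np
from scipy.integrate import quad

def f(s):
    # f(s) = integral_0^oo e^{-s t} sin(t)/t dt
    val, _ = quad(lambda t: np.exp(-s * t) * np.sinc(t / np.pi), 0, np.inf)
    return val

for s in (0.5, 1.0, 3.0):
    assert abs(f(s) - (np.pi / 2 - np.arctan(s))) < 1e-8
```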
s}givesf(s)=∫−dss2+1=A−arctan⁡s,{\displaystyle f(s)=\int {\frac {-ds}{s^{2}+1}}=A-\arctan s,} whereA{\displaystyle A}is a constant of integration to be determined. Sincelims→∞f(s)=0,{\displaystyle \lim _{s\to \infty }f(s)=0,}A=lims→∞arctan⁡s=π2,{\displaystyle A=\lim _{s\to \infty }\arctan s={\frac {\pi }{2}},}using the principal value. This means that fors>0{\displaystyle s>0}f(s)=π2−arctan⁡s.{\displaystyle f(s)={\frac {\pi }{2}}-\arctan s.} Finally, by continuity ats=0,{\displaystyle s=0,}we havef(0)=π2−arctan⁡(0)=π2,{\displaystyle f(0)={\frac {\pi }{2}}-\arctan(0)={\frac {\pi }{2}},}as before. Considerf(z)=eizz.{\displaystyle f(z)={\frac {e^{iz}}{z}}.} As a function of the complex variablez,{\displaystyle z,}it has a simple pole at the origin, which prevents the application ofJordan's lemma, whose other hypotheses are satisfied. Define then a new function[4]g(z)=eizz+iε.{\displaystyle g(z)={\frac {e^{iz}}{z+i\varepsilon }}.} The pole has been moved to the negative imaginary axis, sog(z){\displaystyle g(z)}can be integrated along the semicircleγ{\displaystyle \gamma }of radiusR{\displaystyle R}centered atz=0{\displaystyle z=0}extending in the positive imaginary direction, and closed along the real axis. One then takes the limitε→0.{\displaystyle \varepsilon \to 0.} The complex integral is zero by theresidue theorem, as there are no poles inside the integration pathγ{\displaystyle \gamma }:0=∫γg(z)dz=∫−RReixx+iεdx+∫0πei(Reiθ+θ)Reiθ+iεiRdθ.{\displaystyle 0=\int _{\gamma }g(z)\,dz=\int _{-R}^{R}{\frac {e^{ix}}{x+i\varepsilon }}\,dx+\int _{0}^{\pi }{\frac {e^{i(Re^{i\theta }+\theta )}}{Re^{i\theta }+i\varepsilon }}iR\,d\theta .} The second term vanishes asR{\displaystyle R}goes to infinity. 
As for the first integral, one can use one version of theSokhotski–Plemelj theoremfor integrals over the real line: for acomplex-valued functionfdefined and continuously differentiable on the real line and real constantsa{\displaystyle a}andb{\displaystyle b}witha<0<b{\displaystyle a<0<b}one findslimε→0+∫abf(x)x±iεdx=∓iπf(0)+P∫abf(x)xdx,{\displaystyle \lim _{\varepsilon \to 0^{+}}\int _{a}^{b}{\frac {f(x)}{x\pm i\varepsilon }}\,dx=\mp i\pi f(0)+{\mathcal {P}}\int _{a}^{b}{\frac {f(x)}{x}}\,dx,} whereP{\displaystyle {\mathcal {P}}}denotes theCauchy principal value. Back to the above original calculation, one can write0=P∫eixxdx−πi.{\displaystyle 0={\mathcal {P}}\int {\frac {e^{ix}}{x}}\,dx-\pi i.} By taking the imaginary part on both sides and noting that the functionsin⁡(x)/x{\displaystyle \sin(x)/x}is even, we get∫−∞+∞sin⁡(x)xdx=2∫0+∞sin⁡(x)xdx.{\displaystyle \int _{-\infty }^{+\infty }{\frac {\sin(x)}{x}}\,dx=2\int _{0}^{+\infty }{\frac {\sin(x)}{x}}\,dx.} Finally,limε→0∫ε∞sin⁡(x)xdx=∫0∞sin⁡(x)xdx=π2.{\displaystyle \lim _{\varepsilon \to 0}\int _{\varepsilon }^{\infty }{\frac {\sin(x)}{x}}\,dx=\int _{0}^{\infty }{\frac {\sin(x)}{x}}\,dx={\frac {\pi }{2}}.} Alternatively, choose as the integration contour forf{\displaystyle f}the union of upper half-plane semicircles of radiiε{\displaystyle \varepsilon }andR{\displaystyle R}together with two segments of the real line that connect them. 
On one hand the contour integral is zero, independently ofε{\displaystyle \varepsilon }andR;{\displaystyle R;}on the other hand, asε→0{\displaystyle \varepsilon \to 0}andR→∞{\displaystyle R\to \infty }the integral's imaginary part converges to2I+ℑ(ln⁡0−ln⁡(πi))=2I−π{\displaystyle 2I+\Im {\big (}\ln 0-\ln(\pi i){\big )}=2I-\pi }(hereln⁡z{\displaystyle \ln z}is any branch of logarithm on upper half-plane), leading toI=π2.{\displaystyle I={\frac {\pi }{2}}.} Consider the well-known formula for theDirichlet kernel:[5]Dn(x)=1+2∑k=1ncos⁡(2kx)=sin⁡[(2n+1)x]sin⁡(x).{\displaystyle D_{n}(x)=1+2\sum _{k=1}^{n}\cos(2kx)={\frac {\sin[(2n+1)x]}{\sin(x)}}.} It immediately follows that:∫0π2Dn(x)dx=π2.{\displaystyle \int _{0}^{\frac {\pi }{2}}D_{n}(x)\,dx={\frac {\pi }{2}}.} Definef(x)={1x−1sin⁡(x)x≠00x=0{\displaystyle f(x)={\begin{cases}{\frac {1}{x}}-{\frac {1}{\sin(x)}}&x\neq 0\\[6pt]0&x=0\end{cases}}} Clearly,f{\displaystyle f}is continuous whenx∈(0,π/2];{\displaystyle x\in (0,\pi /2];}to see its continuity at 0 applyL'Hopital's Rule:limx→0sin⁡(x)−xxsin⁡(x)=limx→0cos⁡(x)−1sin⁡(x)+xcos⁡(x)=limx→0−sin⁡(x)2cos⁡(x)−xsin⁡(x)=0.{\displaystyle \lim _{x\to 0}{\frac {\sin(x)-x}{x\sin(x)}}=\lim _{x\to 0}{\frac {\cos(x)-1}{\sin(x)+x\cos(x)}}=\lim _{x\to 0}{\frac {-\sin(x)}{2\cos(x)-x\sin(x)}}=0.} Hence,f{\displaystyle f}fulfills the requirements of theRiemann-Lebesgue Lemma. This means:limλ→∞∫0π/2f(x)sin⁡(λx)dx=0⟹limλ→∞∫0π/2sin⁡(λx)xdx=limλ→∞∫0π/2sin⁡(λx)sin⁡(x)dx.{\displaystyle \lim _{\lambda \to \infty }\int _{0}^{\pi /2}f(x)\sin(\lambda x)dx=0\quad \Longrightarrow \quad \lim _{\lambda \to \infty }\int _{0}^{\pi /2}{\frac {\sin(\lambda x)}{x}}dx=\lim _{\lambda \to \infty }\int _{0}^{\pi /2}{\frac {\sin(\lambda x)}{\sin(x)}}dx.} (The form of the Riemann-Lebesgue Lemma used here is proven in the article cited.) 
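Both Dirichlet-kernel facts used in this argument are easy to verify numerically: the closed form of Dₙ and the value of its integral over [0, π/2]. A sketch (n = 5 is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

def D(n, x):
    # Dirichlet kernel as the cosine sum: D_n(x) = 1 + 2 sum_{k=1}^n cos(2 k x).
    return 1 + 2 * sum(np.cos(2 * k * x) for k in range(1, n + 1))

n = 5
xs = np.linspace(0.1, 1.4, 50)     # avoid the removable point x = 0
assert np.allclose(D(n, xs), np.sin((2*n + 1) * xs) / np.sin(xs))

val, _ = quad(lambda x: D(n, x), 0, np.pi / 2)
assert abs(val - np.pi / 2) < 1e-9
```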
We would like to compute:∫0∞sin⁡(t)tdt=limλ→∞∫0λπ2sin⁡(t)tdt=limλ→∞∫0π2sin⁡(λx)xdx=limλ→∞∫0π2sin⁡(λx)sin⁡(x)dx=limn→∞∫0π2sin⁡((2n+1)x)sin⁡(x)dx=limn→∞∫0π2Dn(x)dx=π2{\displaystyle {\begin{aligned}\int _{0}^{\infty }{\frac {\sin(t)}{t}}dt=&\lim _{\lambda \to \infty }\int _{0}^{\lambda {\frac {\pi }{2}}}{\frac {\sin(t)}{t}}dt\\[6pt]=&\lim _{\lambda \to \infty }\int _{0}^{\frac {\pi }{2}}{\frac {\sin(\lambda x)}{x}}dx\\[6pt]=&\lim _{\lambda \to \infty }\int _{0}^{\frac {\pi }{2}}{\frac {\sin(\lambda x)}{\sin(x)}}dx\\[6pt]=&\lim _{n\to \infty }\int _{0}^{\frac {\pi }{2}}{\frac {\sin((2n+1)x)}{\sin(x)}}dx\\[6pt]=&\lim _{n\to \infty }\int _{0}^{\frac {\pi }{2}}D_{n}(x)dx={\frac {\pi }{2}}\end{aligned}}} However, we must justify switching the real limit inλ{\displaystyle \lambda }to the integral limit inn,{\displaystyle n,}which will follow from showing that the limit does exist. Usingintegration by parts, we have:∫absin⁡(x)xdx=∫abd(1−cos⁡(x))xdx=1−cos⁡(x)x|ab+∫ab1−cos⁡(x)x2dx{\displaystyle \int _{a}^{b}{\frac {\sin(x)}{x}}dx=\int _{a}^{b}{\frac {d(1-\cos(x))}{x}}dx=\left.{\frac {1-\cos(x)}{x}}\right|_{a}^{b}+\int _{a}^{b}{\frac {1-\cos(x)}{x^{2}}}dx} Now, asa→0{\displaystyle a\to 0}andb→∞{\displaystyle b\to \infty }the term on the left converges with no problem. See thelist of limits of trigonometric functions. We now show that∫−∞∞1−cos⁡(x)x2dx{\displaystyle \int _{-\infty }^{\infty }{\frac {1-\cos(x)}{x^{2}}}dx}is absolutely integrable, which implies that the limit exists.[6] First, we seek to bound the integral near the origin. 
Using the Taylor-series expansion of the cosine about zero,1−cos⁡(x)=1−∑k≥0(−1)(k+1)x2k2k!=∑k≥1(−1)(k+1)x2k2k!.{\displaystyle 1-\cos(x)=1-\sum _{k\geq 0}{\frac {{(-1)^{(k+1)}}x^{2k}}{2k!}}=\sum _{k\geq 1}{\frac {{(-1)^{(k+1)}}x^{2k}}{2k!}}.} Therefore,|1−cos⁡(x)x2|=|−∑k≥0x2k2(k+1)!|≤∑k≥0|x|kk!=e|x|.{\displaystyle \left|{\frac {1-\cos(x)}{x^{2}}}\right|=\left|-\sum _{k\geq 0}{\frac {x^{2k}}{2(k+1)!}}\right|\leq \sum _{k\geq 0}{\frac {|x|^{k}}{k!}}=e^{|x|}.} Splitting the integral into pieces, we have∫−∞∞|1−cos⁡(x)x2|dx≤∫−∞−ε2x2dx+∫−εεe|x|dx+∫ε∞2x2dx≤K,{\displaystyle \int _{-\infty }^{\infty }\left|{\frac {1-\cos(x)}{x^{2}}}\right|dx\leq \int _{-\infty }^{-\varepsilon }{\frac {2}{x^{2}}}dx+\int _{-\varepsilon }^{\varepsilon }e^{|x|}dx+\int _{\varepsilon }^{\infty }{\frac {2}{x^{2}}}dx\leq K,} for some constantK>0.{\displaystyle K>0.}This shows that the integral is absolutely integrable, which implies the original integral exists, and switching fromλ{\displaystyle \lambda }ton{\displaystyle n}was in fact justified, and the proof is complete.
https://en.wikipedia.org/wiki/Dirichlet_integral
Lanczos filteringandLanczos resamplingare two applications of a certain mathematical formula. It can be used as alow-pass filteror used to smoothlyinterpolatethe value of adigital signalbetween itssamples. In the latter case, it maps each sample of the given signal to a translated and scaled copy of theLanczos kernel, which is asinc functionwindowedby the central lobe of a second, longer, sinc function. The sum of these translated and scaled kernels is then evaluated at the desired points. Lanczos resampling is typically used to increase thesampling rateof a digital signal, or to shift it by a fraction of the sampling interval. It is often used also formultivariate interpolation, for example toresizeorrotateadigital image. It has been considered the "best compromise" among several simple filters for this purpose.[1] The filter was invented byClaude Duchon, who named it afterCornelius Lanczosdue to Duchon's use ofSigma approximationin constructing the filter, a technique created by Lanczos.[2] The effect of each input sample on the interpolated values is defined by the filter'sreconstruction kernelL(x), called the Lanczos kernel. It is the normalizedsincfunctionsinc(x),windowed(multiplied) by theLanczos window,orsinc window, which is the central lobe of a horizontally stretched sinc functionsinc(x/a)for−a≤x≤a. Equivalently, The parameterais a positive integer, typically 2 or 3, which determines the size of the kernel. The Lanczos kernel has2a− 1lobes: a positive one at the center, anda− 1alternating negative and positive lobes on each side. Given a one-dimensional signal with samplessi, for integer values ofi, the valueS(x)interpolated at an arbitrary real argumentxis obtained by the discreteconvolutionof those samples with the Lanczos kernel:[3] whereais the filter size parameter, and⌊x⌋{\displaystyle \lfloor x\rfloor }is thefloor function. The bounds of this sum are such that the kernel is zero outside of them. 
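The kernel and the one-dimensional interpolation formula can be sketched as follows (the sample values are illustrative; np.sinc is the normalized sinc, sin(πx)/(πx)):

```python
import numpy as np

def lanczos_kernel(x, a):
    # L(x) = sinc(x) * sinc(x/a) for |x| < a, and 0 outside.
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_interpolate(s, x, a=3):
    # S(x) = sum of s[i] * L(x - i) over the 2a samples with nonzero weight.
    i0 = int(np.floor(x))
    total = 0.0
    for i in range(i0 - a + 1, i0 + a + 1):
        if 0 <= i < len(s):
            total += s[i] * lanczos_kernel(x - i, a)
    return total

s = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0, 64.0, 81.0])

# The kernel is 1 at x = 0 and 0 at all other integers, so the interpolant
# reproduces the samples exactly at integer arguments.
assert np.isclose(lanczos_interpolate(s, 5.0), s[5])
```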
As long as the parameterais a positive integer, the Lanczos kernel iscontinuouseverywhere, and itsderivativeis defined and continuous everywhere (even atx= ±a, where both sinc functions go to zero). Therefore, the reconstructed signalS(x)too will be continuous, with continuous derivative. The Lanczos kernel is zero at every integer argumentx, except atx= 0, where it has value 1. Therefore, the reconstructed signal exactly interpolates the given samples: we will haveS(x) =sifor every integer argumentx=i. Lanczos resampling is one form of a general method developed by Lanczos to counteract theGibbs phenomenonby multiplying coefficients of a truncatedFourier seriesbysinc(πk/m){\displaystyle \mathrm {sinc} (\pi k/m)}, wherek{\displaystyle k}is the coefficient index andm{\displaystyle m}is how many coefficients we're keeping.[4][5]The same reasoning applies in the case of truncated functions if we wish to remove Gibbs oscillations in their spectrum. Lanczos filter's kernel in two dimensions is The theoretically optimal reconstruction filter forband-limited signalsis thesinc filter, which has infinitesupport. The Lanczos filter is one of many practical (finitely supported) approximations of the sinc filter. Each interpolated value is the weighted sum of2aconsecutive input samples. Thus, by varying the2aparameter one may trade computation speed for improved frequency response. The parameter also allows one to choose between a smoother interpolation or a preservation of sharp transients in the data. For image processing, the trade-off is between the reduction ofaliasingartefacts and the preservation of sharp edges. Also as with any such processing, there are no results for the borders of the image. Increasing the length of the kernel increases the cropping of the edges of the image. 
The Lanczos filter has been compared with other interpolation methods for discrete signals, particularly other windowed versions of the sinc filter.TurkowskiandGabrielclaimed that the Lanczos filter (witha= 2) is the "best compromise in terms of reduction of aliasing, sharpness, and minimal ringing", compared with truncated sinc and theBartlett,cosine-, andHann-windowedsinc, for decimation and interpolation of 2-dimensional image data.[1]According toJim Blinn, the Lanczos kernel (witha= 3) "keeps low frequencies and rejects high frequencies better than any (achievable) filter we've seen so far."[6] Lanczos interpolation is a popular filter for "upscaling" videos in various media utilities, such asAviSynth[7]andFFmpeg.[8] Since the kernel assumes negative values fora> 1, the interpolated signal can be negative even if all samples are positive. More generally, the range of values of the interpolated signal may be wider than the range spanned by the discrete sample values. In particular, there may beringing artifactsjust before and after abrupt changes in the sample values, which may lead toclipping artifacts. However, these effects are reduced compared to the (non-windowed) sinc filter. Fora= 2 (a three-lobed kernel) the ringing is < 1%. When using the Lanczos filter for image resampling, the ringing effect will create light and dark halos along any strong edges. While these bands may be visually annoying, they help increase theperceived sharpness, and therefore provide a form ofedge enhancement. This may improve the subjective quality of the image, given the special role of edge sharpness invision.[9] In some applications, the low-end clipping artifacts can be ameliorated by transforming the data to a logarithmic domain prior to filtering. In this case the interpolated values will be a weighted geometric mean, rather than an arithmetic mean, of the input samples. The Lanczos kernel does not have thepartition of unityproperty. 
That is, the sum U(x) = Σ_{i∈ℤ} L(x − i) of all integer-translated copies of the kernel is not always 1. Therefore, the Lanczos interpolation of a discrete signal with constant samples does not yield a constant function. This defect is most evident when a = 1. Also, for a = 1 the interpolated signal has zero derivative at every integer argument. This is rather academic, since using a single-lobe kernel (a = 1) loses all the benefits of the Lanczos approach and provides a poor filter. There are many better single-lobe, bell-shaped windowing functions. The partition of unity can be restored by normalization: for 0 ≤ x < 1, dividing each kernel value L(x − i) by U(x) makes the interpolation weights sum to exactly 1.
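The failure of the partition of unity, and the normalization that repairs it, can be illustrated numerically. A sketch (a = 2 and x = 0.5 are arbitrary choices):

```python
import numpy as np

def lanczos_kernel(x, a):
    # L(x) = sinc(x) * sinc(x/a) for |x| < a, and 0 outside.
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

a = 2
x = 0.5
U = sum(lanczos_kernel(x - i, a) for i in range(-5, 6))

# The raw translates do not sum to 1 (U(0.5) is about 1.019 for a = 2) ...
assert abs(U - 1.0) > 0.01

# ... but dividing each weight by U(x) restores the partition of unity.
weights = [lanczos_kernel(x - i, a) / U for i in range(-5, 6)]
assert np.isclose(sum(weights), 1.0)
```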
https://en.wikipedia.org/wiki/Lanczos_resampling
In functional analysis, the Shannon wavelet (or sinc wavelet) is a decomposition defined by signal analysis with ideal bandpass filters. The Shannon wavelet may be either real or complex. The Shannon wavelet is not well localized (noncompact) in the time domain, but its Fourier transform is band-limited (has compact support). Hence the Shannon wavelet has poor time localization but good frequency localization. These characteristics are in stark contrast to those of the Haar wavelet. The Haar and sinc systems are Fourier duals of each other. The sinc function is the starting point for the definition of the Shannon wavelet. First, we define the scaling function to be the sinc function:

ϕ^(Sha)(t) := sin(πt)/(πt) = sinc(t).

We then define its dilated and translated instances

ϕₖⁿ(t) := 2^(n/2) ϕ^(Sha)(2ⁿt − k),

where the parameters n and k give the dilation and the translation of the wavelet, respectively.
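The two closed forms quoted in this section for the real Shannon mother wavelet can be checked against each other numerically. A sketch (np.sinc is the normalized sinc, sin(πx)/(πx)):

```python
import numpy as np

# psi(t) = [sin(pi(t - 1/2)) - sin(2 pi(t - 1/2))] / (pi (t - 1/2))
#        = sinc(t - 1/2) - 2 sinc(2 (t - 1/2))
t = np.linspace(-5, 5, 1001)
t = t[np.abs(t - 0.5) > 1e-9]     # avoid the removable point t = 1/2

u = t - 0.5
direct   = (np.sin(np.pi * u) - np.sin(2 * np.pi * u)) / (np.pi * u)
via_sinc = np.sinc(u) - 2 * np.sinc(2 * u)

assert np.allclose(direct, via_sinc)
```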
Then we can derive the Fourier transform of the scaling function:

$$\Phi^{\text{(Sha)}}(\omega) = \frac{1}{2\pi}\,\Pi\!\left(\frac{\omega}{2\pi}\right) = \begin{cases}\dfrac{1}{2\pi}, & \text{if } |\omega| \le \pi,\\[4pt] 0, & \text{otherwise,}\end{cases}$$

where the (normalised) gate function is defined by

$$\Pi(x) := \begin{cases}1, & \text{if } |x| \le 1/2,\\ 0, & \text{otherwise.}\end{cases}$$

For the dilated and translated instances of the scaling function:

$$\Phi_k^n(\omega) = \frac{2^{-n/2}}{2\pi}\, e^{-i\omega(k+1)/2^{n}}\,\Pi\!\left(\frac{\omega}{2^{n+1}\pi}\right).$$

Using $\Phi^{\text{(Sha)}}$ and a multiresolution approximation, we can derive the Fourier transform of the mother wavelet:

$$\Psi^{\text{(Sha)}}(\omega) = \frac{1}{2\pi}\, e^{-i\omega}\left(\Pi\!\left(\frac{\omega}{\pi}-\frac{3}{2}\right)+\Pi\!\left(\frac{\omega}{\pi}+\frac{3}{2}\right)\right)$$

and of its dilated and translated instances:

$$\Psi_k^n(\omega) = \frac{2^{-n/2}}{2\pi}\, e^{-i\omega(k+1)/2^{n}}\left(\Pi\!\left(\frac{\omega}{2^{n}\pi}-\frac{3}{2}\right)+\Pi\!\left(\frac{\omega}{2^{n}\pi}+\frac{3}{2}\right)\right).$$

The Shannon mother wavelet and the family of its dilated and translated instances are then obtained by the inverse Fourier transform:

$$\psi^{\text{(Sha)}}(t) = \frac{\sin \pi(t-1/2) - \sin 2\pi(t-1/2)}{\pi(t-1/2)} = \operatorname{sinc}\!\left(t-\frac{1}{2}\right) - 2\operatorname{sinc}\!\left(2\left(t-\frac{1}{2}\right)\right),$$

$$\psi_k^n(t) = 2^{n/2}\,\psi^{\text{(Sha)}}(2^n t - k).$$

The system is orthonormal:

$$\langle \psi_k^n(t), \psi_h^m(t)\rangle = \delta^{nm}\delta_{hk} = \begin{cases}1, & \text{if } h=k \text{ and } n=m,\\ 0, & \text{otherwise,}\end{cases}$$

$$\langle \phi_k^0(t), \phi_h^0(t)\rangle = \delta^{kh}, \qquad \langle \phi_k^0(t), \psi_h^m(t)\rangle = 0.$$

Suppose $f(x)\in L_2(\mathbb{R})$ is such that $\operatorname{supp} \operatorname{FT}\{f\} \subset [-\pi,\pi]$ and, for any dilation and translation parameters $n, k$,

$$\left|\int_{-\infty}^{\infty} f(t)\,\phi_k^0(t)\,dt\right| < \infty, \qquad \left|\int_{-\infty}^{\infty} f(t)\,\psi_k^n(t)\,dt\right| < \infty.$$

Then

$$f(t) = \sum_{k=-\infty}^{\infty} \alpha_k\,\phi_k^0(t)$$

is uniformly convergent, where $\alpha_k = f(k)$.

The real Shannon wavelet thus has the band-limited Fourier transform $\Psi^{\text{(Sha)}}$ given above, expressed through the (normalised) gate function $\Pi$, and its analytical expression is the sinc-difference form $\psi^{\text{(Sha)}}(t) = \operatorname{sinc}(t-\tfrac12) - 2\operatorname{sinc}(2(t-\tfrac12))$, where sinc is the usual sinc function that appears in the Shannon sampling theorem. This wavelet belongs to the $C^\infty$ class of differentiability, but it decreases slowly at infinity and has no bounded support, since band-limited signals cannot be time-limited. The scaling function for the Shannon MRA (or Sinc-MRA) is the sample (sinc) function given above. The complex continuous Shannon wavelet is defined analogously, with a complex modulation of the sinc function.
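As a quick numerical sanity check on the reconstruction above, this small sketch (pure Python; the function names are illustrative) verifies that the sinc-difference form of the real Shannon mother wavelet agrees with its closed trigonometric form away from the removable singularity at t = 1/2.

```python
import math

def nsinc(x):
    """Normalized sinc: sin(pi x)/(pi x), with nsinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def psi_sinc(t):
    """Shannon mother wavelet as a difference of sinc functions."""
    u = t - 0.5
    return nsinc(u) - 2.0 * nsinc(2.0 * u)

def psi_trig(t):
    """The same wavelet in closed trigonometric form (valid for t != 1/2)."""
    u = t - 0.5
    return (math.sin(math.pi * u) - math.sin(2.0 * math.pi * u)) / (math.pi * u)
```

At t = 1/2 the sinc form gives the limiting value psi(1/2) = 1 − 2 = −1, where the trigonometric form is formally 0/0.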
https://en.wikipedia.org/wiki/Shannon_wavelet
In signal processing, a sinc filter can refer either to a sinc-in-time filter, whose impulse response is a sinc function and whose frequency response is rectangular, or to a sinc-in-frequency filter, whose impulse response is rectangular and whose frequency response is a sinc function. Naming the filter after the domain in which it resembles a sinc avoids confusion; when the domain is unspecified, sinc-in-time is usually meant, or the correct domain must be inferred from context.

Sinc-in-time is an ideal filter that removes all frequency components above a given cutoff frequency without attenuating lower frequencies, and has linear phase response. It may thus be considered a brick-wall filter or rectangular filter. Its impulse response is a sinc function in the time domain:

$$\frac{\sin(\pi t)}{\pi t},$$

while its frequency response is a rectangular function:

$$H(f) = \operatorname{rect}\!\left(\frac{f}{2B}\right),$$

where $B$ (representing its bandwidth) is an arbitrary cutoff frequency. Its impulse response is given by the inverse Fourier transform of its frequency response:

$$h(t) = 2B\,\operatorname{sinc}(2Bt),$$

where sinc is the normalized sinc function.

An idealized electronic filter with full transmission in the pass band, complete attenuation in the stop band, and abrupt transitions is known colloquially as a "brick-wall filter" (in reference to the shape of the transfer function). The sinc-in-time filter is a brick-wall low-pass filter, from which brick-wall band-pass filters and high-pass filters are easily constructed.
The lowpass filter with brick-wall cutoff at frequency $B_L$ has impulse response and transfer function given by:

$$h_{\mathrm{lp}}(t) = 2B_L\,\operatorname{sinc}(2B_L t), \qquad H_{\mathrm{lp}}(f) = \operatorname{rect}\!\left(\frac{f}{2B_L}\right).$$

The band-pass filter with lower band edge $B_L$ and upper band edge $B_H$ is just the difference of two such sinc-in-time filters (since the filters are zero phase, their magnitude responses subtract directly):[1]

$$h_{\mathrm{bp}}(t) = 2B_H\,\operatorname{sinc}(2B_H t) - 2B_L\,\operatorname{sinc}(2B_L t).$$

The high-pass filter with lower band edge $B_H$ is just a transparent filter minus a sinc-in-time filter, which makes it clear that the Dirac delta function is the limit of a narrow-in-time sinc-in-time filter:

$$h_{\mathrm{hp}}(t) = \delta(t) - 2B_H\,\operatorname{sinc}(2B_H t).$$

As the sinc-in-time filter has an infinite impulse response in both positive and negative time directions, it is non-causal and has infinite delay (its compact support in the frequency domain forces its time response to lack compact support, meaning it is ever-lasting) and infinite order (the response cannot be expressed as a linear differential equation with a finite sum). However, it is used in conceptual demonstrations and proofs, such as the sampling theorem and the Whittaker–Shannon interpolation formula.

Sinc-in-time filters must be approximated for real-world (non-abstract) applications, typically by windowing and truncating an ideal sinc-in-time filter kernel, though doing so reduces its ideal properties.[2] The same applies to other brick-wall filters built from sinc-in-time filters.

The sinc filter is not bounded-input–bounded-output (BIBO) stable; that is, a bounded input can produce an unbounded output, because the integral of the absolute value of the sinc function is infinite. One bounded input that produces an unbounded output is sgn(sinc(t)); another is sin(2πBt)u(t), a sine wave starting at time 0 at the cutoff frequency.

The simplest implementation of a sinc-in-frequency filter uses a boxcar impulse response to produce a simple moving average (if the sum is divided by the number of samples), also known as an accumulate-and-dump filter (if the samples are simply summed without division).
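Since the ideal sinc-in-time kernel must be truncated and windowed in practice, the following sketch (illustrative names, not from a specific library; a Hann window is used as one common choice) builds an approximate brick-wall low-pass FIR and then a band-pass filter as the difference of two low-pass designs, as described above.

```python
import cmath, math

def lowpass_taps(cutoff, num_taps):
    """Hann-windowed, truncated sinc low-pass FIR.
    cutoff is a fraction of the sample rate (0 < cutoff < 0.5); num_taps should be odd."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2.0                       # center the sinc on the middle tap
        ideal = 2.0 * cutoff if x == 0.0 else math.sin(2.0 * math.pi * cutoff * x) / (math.pi * x)
        window = 0.5 - 0.5 * math.cos(2.0 * math.pi * n / m)   # Hann window
        taps.append(ideal * window)
    return taps

def bandpass_taps(low, high, num_taps):
    """Band-pass as the difference of two low-pass designs (zero-phase subtraction)."""
    lo, hi = lowpass_taps(low, num_taps), lowpass_taps(high, num_taps)
    return [b - a for a, b in zip(lo, hi)]

def gain(taps, f):
    """Magnitude of the filter's frequency response at normalized frequency f."""
    return abs(sum(t * cmath.exp(-2j * math.pi * f * n) for n, t in enumerate(taps)))
```

With 101 taps and cutoff 0.1, the passband gain is close to 1 and the deep stop band is strongly attenuated, illustrating how windowing trades the ideal brick-wall shape for a realizable filter.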
It can be modeled as an FIR filter with all $N$ coefficients equal. It is sometimes cascaded to produce higher-order moving averages (see Finite impulse response § Moving average example and cascaded integrator–comb filter).

This filter can be used for crude but fast and easy downsampling (a.k.a. decimation) by a factor of $N$. The simplicity of the filter is offset by its mediocre low-pass capabilities: the stop band contains periodic lobes of gradually decreasing height between the nulls at multiples of $f_S/N$. The first lobe is −11.3 dB for a 4-sample moving average, −12.8 dB for an 8-sample moving average, and −13.1 dB for a 16-sample moving average. An $N$-sample filter sampled at $f_S$ will alias all non-fully-attenuated signal components lying above $\frac{f_S}{2N}$ into the baseband ranging from DC to $\frac{f_S}{2N}$.

A group-averaging filter processing $N$ samples has $N/2$ transmission zeroes evenly spaced by $f_S/N$, with the lowest zero at $f_S/N$ and the highest at $f_S/2$ (the Nyquist frequency). Above the Nyquist frequency, the frequency response is mirrored, and it then repeats periodically above $f_S$ forever.

The magnitude of the frequency response is useful when one wants to know how much frequencies are attenuated. Though the sinc function oscillates between negative and positive values, negative values of the frequency response simply correspond to a 180-degree phase shift.

An inverse sinc filter may be used for equalization in the digital domain (e.g. an FIR filter) or the analog domain (e.g. an op-amp filter) to counteract the undesired attenuation in the frequency band of interest and provide a flat frequency response.[3]

See Window function § Rectangular window for application of the sinc kernel as the simplest windowing function.
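The quoted first-lobe levels can be reproduced from the closed-form magnitude response $|\sin(\pi f N)/(N \sin \pi f)|$ of an $N$-sample moving average; this short sketch (illustrative names) scans the first stop-band lobe numerically.

```python
import math

def moving_average_gain(N, f):
    """|H(f)| of an N-sample moving average at normalized frequency f = freq/f_S."""
    if abs(math.sin(math.pi * f)) < 1e-15:   # f at an integer: unity gain
        return 1.0
    return abs(math.sin(math.pi * f * N) / (N * math.sin(math.pi * f)))

def first_lobe_db(N, steps=20000):
    """Peak level (dB) of the first stop-band lobe, between the nulls at f_S/N and 2 f_S/N."""
    peak = 0.0
    for i in range(1, steps):
        f = (1.0 / N) * (1.0 + i / steps)
        peak = max(peak, moving_average_gain(N, f))
    return 20.0 * math.log10(peak)
```

The scan gives roughly −11.3 dB for N = 4 and approaches the asymptotic sinc sidelobe level of about −13.3 dB as N grows, consistent with the figures cited above.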
https://en.wikipedia.org/wiki/Sinc_filter
In numerical analysis and applied mathematics, sinc numerical methods are numerical techniques[1] for finding approximate solutions of partial differential equations and integral equations based on translates of the sinc function and the cardinal function C(f, h), an expansion of f defined by

$$C(f,h)(x) = \sum_{k=-\infty}^{\infty} f(kh)\,\operatorname{sinc}\!\left(\frac{x-kh}{h}\right),$$

where the step size is $h>0$ and the sinc function is defined by

$$\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}.$$

Sinc approximation methods excel for problems whose solutions may have singularities, infinite domains, or boundary layers. The truncated sinc expansion of f is the finite series

$$C_N(f,h)(x) = \sum_{k=-N}^{N} f(kh)\,\operatorname{sinc}\!\left(\frac{x-kh}{h}\right).$$

Sinc expansions can be used to approximate every operation of calculus. In the standard setup of sinc numerical methods, the errors (in big O notation) are known to be $O\!\left(e^{-c\sqrt{n}}\right)$ for some $c>0$, where n is the number of nodes or bases used in the method. Sugihara[2] has shown, however, that the errors in sinc numerical methods based on the double exponential transformation are $O\!\left(e^{-\frac{kn}{\ln n}}\right)$ for some $k>0$, in a setup that is meaningful both theoretically and practically, and that these rates are best possible in a certain mathematical sense.
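A minimal sketch of the truncated cardinal expansion (illustrative function names; a Gaussian is used as the test function because its spectrum decays fast enough for a small step size to give high accuracy):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def cardinal(f, h, N, x):
    """Truncated cardinal expansion C_N(f,h)(x) = sum_{k=-N}^{N} f(kh) sinc((x - kh)/h)."""
    return sum(f(k * h) * sinc((x - k * h) / h) for k in range(-N, N + 1))
```

At the sample points x = kh the expansion reproduces f exactly (all but one sinc term vanish); between samples the error is governed by the truncation and the decay of f's Fourier transform beyond pi/h.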
https://en.wikipedia.org/wiki/Sinc_numerical_methods
The trigonometric functions (especially sine and cosine) for complex square matrices occur in solutions of second-order systems of differential equations.[1] They are defined by the same Taylor series that hold for the trigonometric functions of complex numbers:[2]

$$\sin X = X - \frac{X^3}{3!} + \frac{X^5}{5!} - \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} X^{2n+1},$$

$$\cos X = I - \frac{X^2}{2!} + \frac{X^4}{4!} - \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} X^{2n},$$

with $X^n$ being the nth power of the matrix X, and I being the identity matrix of appropriate dimensions. Equivalently, they can be defined using the matrix exponential along with the matrix equivalent of Euler's formula, $e^{iX} = \cos X + i \sin X$, yielding

$$\sin X = \frac{e^{iX} - e^{-iX}}{2i}, \qquad \cos X = \frac{e^{iX} + e^{-iX}}{2}.$$

For example, taking X to be a standard Pauli matrix, say $\sigma_1 = \begin{pmatrix}0&1\\1&0\end{pmatrix}$ (so that $\sigma_1^2 = I$), one has

$$\sin \sigma_1 = (\sin 1)\,\sigma_1, \qquad \cos \sigma_1 = (\cos 1)\,I,$$

as well as, for the cardinal sine function,

$$\operatorname{sinc} \sigma_1 = (\sin 1)\,I.$$

The analog of the Pythagorean trigonometric identity holds:[2]

$$\sin^2 X + \cos^2 X = I.$$

If X is a diagonal matrix, sin X and cos X are also diagonal matrices, with $(\sin X)_{nn} = \sin(X_{nn})$ and $(\cos X)_{nn} = \cos(X_{nn})$; that is, they can be calculated by simply taking the sines or cosines of the matrix's diagonal entries.

The analogs of the trigonometric addition formulas are true if and only if $XY = YX$:[2]

$$\sin(X+Y) = \sin X \cos Y + \cos X \sin Y, \qquad \cos(X+Y) = \cos X \cos Y - \sin X \sin Y.$$

The tangent, as well as inverse trigonometric functions, hyperbolic and inverse hyperbolic functions, have also been defined for matrices,[3] for example

$$\tan X = \sin X\,(\cos X)^{-1}, \qquad \sinh X = \frac{e^{X} - e^{-X}}{2},$$

and so on.
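The series definitions can be evaluated directly for small matrices. This pure-Python sketch (illustrative helper names; a truncated Taylor series, adequate only for modest-norm matrices) computes sin X and cos X, and can be used to check the Pauli-matrix example and the Pythagorean identity numerically.

```python
import math

def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mscale(c, A):
    return [[c * x for x in row] for row in A]

def ident(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def mat_sin_cos(X, terms=25):
    """sin X and cos X from their Taylor series (sufficient for modest-norm X)."""
    n = len(X)
    C = ident(n)                  # k = 0 term of cos
    S = [row[:] for row in X]     # k = 1 term of sin
    P = [row[:] for row in X]     # running power X^k
    fact = 1.0
    for k in range(2, 2 * terms):
        P = mmul(P, X)
        fact *= k
        coef = (-1.0) ** (k // 2) / fact    # (-1)^m / k! with m = k // 2
        if k % 2 == 0:
            C = madd(C, mscale(coef, P))    # even powers feed cos
        else:
            S = madd(S, mscale(coef, P))    # odd powers feed sin
    return S, C
```

For X = sigma_1 the result is sin(1)·sigma_1 and cos(1)·I, as derived above.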
https://en.wikipedia.org/wiki/Trigonometric_functions_of_matrices
The Whittaker–Shannon interpolation formula or sinc interpolation is a method to construct a continuous-time bandlimited function from a sequence of real numbers. The formula dates back to the works of E. Borel in 1898 and E. T. Whittaker in 1915, and was cited from works of J. M. Whittaker in 1935 and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called Shannon's interpolation formula and Whittaker's interpolation formula. E. T. Whittaker, who published it in 1915, called it the Cardinal series.

Given a sequence of real numbers, x[n] = x(nT), the continuous function

$$x(t) = \sum_{n=-\infty}^{\infty} x[n]\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right)$$

(where "sinc" denotes the normalized sinc function) has a Fourier transform, X(f), whose non-zero values are confined to the region $|f| \le \frac{1}{2T}$. When the parameter T has units of seconds, the bandlimit, 1/(2T), has units of cycles/sec (hertz). When the x[n] sequence represents time samples, at interval T, of a continuous function, the quantity fs = 1/T is known as the sample rate, and fs/2 is the corresponding Nyquist frequency. When the sampled function has a bandlimit, B, less than the Nyquist frequency, x(t) is a perfect reconstruction of the original function. (See Sampling theorem.) Otherwise, the frequency components above the Nyquist frequency "fold" into the sub-Nyquist region of X(f), resulting in distortion. (See Aliasing.)

The interpolation formula is derived in the Nyquist–Shannon sampling theorem article, which points out that it can also be expressed as the convolution of an infinite impulse train with a sinc function:

$$x(t) = \left(\sum_{n=-\infty}^{\infty} x[n]\,\delta(t - nT)\right) * \operatorname{sinc}\!\left(\frac{t}{T}\right).$$

This is equivalent to filtering the impulse train with an ideal (brick-wall) low-pass filter with gain of 1 (or 0 dB) in the passband. If the sample rate is sufficiently high, this means that the baseband image (the original signal before sampling) is passed unchanged and the other images are removed by the brick-wall filter.
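A direct, truncated evaluation of the interpolation sum is straightforward (illustrative names; the truncation half-width trades run time against the slow 1/n decay of the tail):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(sample, t, T=1.0, half_width=3000):
    """Truncated Whittaker-Shannon sum: x(t) ~= sum_n sample(n) * sinc((t - nT)/T),
    taken over n within half_width samples of t."""
    n0 = round(t / T)
    return sum(sample(n) * sinc(t / T - n) for n in range(n0 - half_width, n0 + half_width + 1))
```

Sampling a cosine at 0.15 cycles per sample (below the Nyquist frequency of 0.5) and evaluating the sum at an off-grid instant reconstructs the original waveform to within the truncation error.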
The interpolation formula always converges absolutely and locally uniformly as long as

$$\sum_{n \in \mathbb{Z}} \left|\frac{x[n]}{1 + |n|}\right| < \infty.$$

By the Hölder inequality this is satisfied if the sequence $(x[n])_{n\in\mathbb{Z}}$ belongs to any of the $\ell^p(\mathbb{Z},\mathbb{C})$ spaces with 1 ≤ p < ∞, that is,

$$\sum_{n \in \mathbb{Z}} |x[n]|^p < \infty.$$

This condition is sufficient, but not necessary. For example, the sum will generally converge if the sample sequence comes from sampling almost any stationary process, in which case the sample sequence is not square summable and is not in any $\ell^p(\mathbb{Z},\mathbb{C})$ space.

If x[n] is an infinite sequence of samples of a sample function of a wide-sense stationary process, then it is not a member of any $\ell^p$ or $L^p$ space, with probability 1; that is, the infinite sum of samples raised to a power p does not have a finite expected value. Nevertheless, the interpolation formula converges with probability 1. Convergence can readily be shown by computing the variances of truncated terms of the summation, and showing that the variance can be made arbitrarily small by choosing a sufficient number of terms. If the process mean is nonzero, then pairs of terms need to be considered to also show that the expected value of the truncated terms converges to zero.

Since a random process does not have a Fourier transform, the condition under which the sum converges to the original function must also be different. A stationary random process does have an autocorrelation function and hence a spectral density according to the Wiener–Khinchin theorem. A suitable condition for convergence to a sample function from the process is that the spectral density of the process be zero at all frequencies equal to and above half the sample rate.
https://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon_interpolation_formula
The Winkel tripel projection (Winkel III), a modified azimuthal[1] map projection of the world, is one of three projections proposed by German cartographer Oswald Winkel (7 January 1874 – 18 July 1953) in 1921. The projection is the arithmetic mean of the equirectangular projection and the Aitoff projection:[2]

$$x = \frac{1}{2}\left(\lambda\cos\varphi_1 + \frac{2\cos\varphi\,\sin\frac{\lambda}{2}}{\operatorname{sinc}\alpha}\right), \qquad y = \frac{1}{2}\left(\varphi + \frac{\sin\varphi}{\operatorname{sinc}\alpha}\right),$$

where λ is the longitude relative to the central meridian of the projection, φ is the latitude, φ1 is the standard parallel for the equirectangular projection, sinc is the unnormalized cardinal sine function, and

$$\alpha = \arccos\!\left(\cos\varphi\,\cos\frac{\lambda}{2}\right).$$

In his proposal, Winkel set

$$\varphi_1 = \arccos\frac{2}{\pi}.$$

The name tripel (German for 'triple') refers to Winkel's goal of minimizing three kinds of distortion: area, direction, and distance.[3]

A closed-form inverse mapping does not exist, and computing the inverse numerically requires the use of iterative methods.[4]

David M. Goldberg and J. Richard Gott III showed that the Winkel tripel fares better against several other projections analyzed against their measures of distortion, producing minimal distance, Tissot indicatrix ellipticity and area errors, and the least skew of any of the projections they studied.[5] By a different metric, Capek's "Q", the Winkel tripel ranked ninth among a hundred map projections of the world, behind the common Eckert IV and Robinson projections.[6]

In 1998, the Winkel tripel projection replaced the Robinson projection as the standard projection for world maps made by the National Geographic Society.[3] Many educational institutes and textbooks soon followed National Geographic's example in adopting the projection, and most still use it.[7][8]
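The forward mapping translates directly into code; a minimal sketch (radians in, unit-sphere projection coordinates out; the function name is illustrative):

```python
import math

PHI1 = math.acos(2.0 / math.pi)   # Winkel's standard parallel, about 50.46 degrees

def unnormalized_sinc(x):
    """Unnormalized cardinal sine: sin(x)/x, with value 1 at x = 0."""
    return 1.0 if x == 0.0 else math.sin(x) / x

def winkel_tripel(phi, lam):
    """Map latitude phi and longitude lam (radians) to (x, y): the arithmetic mean
    of the equirectangular and Aitoff projections."""
    alpha = math.acos(math.cos(phi) * math.cos(lam / 2.0))
    s = unnormalized_sinc(alpha)
    x = 0.5 * (lam * math.cos(PHI1) + 2.0 * math.cos(phi) * math.sin(lam / 2.0) / s)
    y = 0.5 * (phi + math.sin(phi) / s)
    return x, y
```

Spot checks: the origin maps to (0, 0); the point (0°, 180°) on the equator maps to x = (π + 2)/2, the mean of the equirectangular value π·(2/π) = 2 and the Aitoff value π; and the north pole maps to y = π/2 (the pole is a line in x, not a point).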
https://en.wikipedia.org/wiki/Winkel_tripel_projection
Adiscrete cosine transform(DCT) expresses a finite sequence ofdata pointsin terms of a sum ofcosinefunctions oscillating at differentfrequencies. The DCT, first proposed byNasir Ahmedin 1972, is a widely used transformation technique insignal processinganddata compression. It is used in mostdigital media, includingdigital images(such asJPEGandHEIF),digital video(such asMPEGandH.26x),digital audio(such asDolby Digital,MP3andAAC),digital television(such asSDTV,HDTVandVOD),digital radio(such asAAC+andDAB+), andspeech coding(such asAAC-LD,SirenandOpus). DCTs are also important to numerous other applications inscience and engineering, such asdigital signal processing,telecommunicationdevices, reducingnetwork bandwidthusage, andspectral methodsfor the numerical solution ofpartial differential equations. A DCT is aFourier-related transformsimilar to thediscrete Fourier transform(DFT), but using onlyreal numbers. The DCTs are generally related toFourier seriescoefficients of a periodically and symmetrically extended sequence whereas DFTs are related to Fourier series coefficients of only periodically extended sequences. DCTs are equivalent to DFTs of roughly twice the length, operating on real data withevensymmetry (since the Fourier transform of a real and even function is real and even), whereas in some variants the input or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. The most common variant of discrete cosine transform is the type-II DCT, which is often called simplythe DCT. This was the original DCT as first proposed by Ahmed. Its inverse, the type-III DCT, is correspondingly often called simplythe inverse DCTorthe IDCT. Two related transforms are thediscrete sine transform(DST), which is equivalent to a DFT of real andodd functions, and themodified discrete cosine transform(MDCT), which is based on a DCT of overlapping data. 
Multidimensional DCTs (MD DCTs) have been developed to extend the concept of the DCT to multidimensional signals. A variety of fast algorithms have been developed to reduce the computational complexity of implementing the DCT. One of these is the integer DCT (IntDCT),[1] an integer approximation of the standard DCT,[2]: ix, xiii, 1, 141–304 used in several ISO/IEC and ITU-T international standards.[1][2]

DCT compression, also known as block compression, compresses data in sets of discrete DCT blocks.[3] DCT block sizes include 8x8 pixels for the standard DCT, and varied integer DCT sizes between 4x4 and 32x32 pixels.[1][4] The DCT has a strong energy compaction property,[5][6] capable of achieving high quality at high data compression ratios.[7][8] However, blocky compression artifacts can appear when heavy DCT compression is applied.

The DCT was first conceived by Nasir Ahmed while working at Kansas State University. The concept was proposed to the National Science Foundation in 1972. The DCT was originally intended for image compression.[9][1] Ahmed developed a practical DCT algorithm with his PhD students T. Raj Natarajan and K. R. Rao at the University of Texas at Arlington in 1973.[9] They presented their results in a January 1974 paper, titled Discrete Cosine Transform.[5][6][10] It described what is now called the type-II DCT (DCT-II),[2]: 51 as well as the type-III inverse DCT (IDCT).[5]

Since its introduction in 1974, there has been significant research on the DCT.[10] In 1977, Wen-Hsiung Chen published a paper with C. Harrison Smith and Stanley C. Fralick presenting a fast DCT algorithm.[11][10] Further developments include a 1978 paper by M. J. Narasimha and A. M. Peterson, and a 1984 paper by B. G.
Lee.[10]These research papers, along with the original 1974 Ahmed paper and the 1977 Chen paper, were cited by theJoint Photographic Experts Groupas the basis forJPEG's lossy image compression algorithm in 1992.[10][12] Thediscrete sine transform(DST) was derived from the DCT, by replacing theNeumann conditionatx=0with aDirichlet condition.[2]:35-36The DST was described in the 1974 DCT paper by Ahmed, Natarajan and Rao.[5]A type-I DST (DST-I) was later described byAnil K. Jainin 1976, and a type-II DST (DST-II) was then described by H.B. Kekra and J.K. Solanka in 1978.[13] In 1975, John A. Roese and Guner S. Robinson adapted the DCT forinter-framemotion-compensatedvideo coding. They experimented with the DCT and thefast Fourier transform(FFT), developing inter-frame hybrid coders for both, and found that the DCT is the most efficient due to its reduced complexity, capable of compressing image data down to 0.25-bitperpixelfor avideotelephonescene with image quality comparable to anintra-frame coderrequiring 2-bit per pixel.[14][15]In 1979,Anil K. Jainand Jaswant R. Jain further developed motion-compensated DCT video compression,[16][17]also called block motion compensation.[17]This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981.[17]Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards.[18][19] A DCT variant, themodified discrete cosine transform(MDCT), was developed by John P. Princen, A.W. Johnson and Alan B. 
Bradley at theUniversity of Surreyin 1987,[20]following earlier work by Princen and Bradley in 1986.[21]The MDCT is used in most modernaudio compressionformats, such asDolby Digital(AC-3),[22][23]MP3(which uses a hybrid DCT-FFTalgorithm),[24]Advanced Audio Coding(AAC),[25]andVorbis(Ogg).[26] Nasir Ahmed also developed a lossless DCT algorithm with Giridhar Mandyam and Neeraj Magotra at theUniversity of New Mexicoin 1995. This allows the DCT technique to be used forlossless compressionof images. It is a modification of the original DCT algorithm, and incorporates elements of inverse DCT anddelta modulation. It is a more effective lossless compression algorithm thanentropy coding.[27]Lossless DCT is also known as LDCT.[28] The DCT is the most widely used transformation technique insignal processing,[29]and by far the most widely used linear transform indata compression.[30]Uncompresseddigital mediaas well aslossless compressionhave highmemoryandbandwidthrequirements, which is significantly reduced by the DCTlossy compressiontechnique,[7][8]capable of achievingdata compression ratiosfrom 8:1 to 14:1 for near-studio-quality,[7]up to 100:1 for acceptable-quality content.[8]DCT compression standards are used in digital media technologies, such asdigital images,digital photos,[31][32]digital video,[18][33]streaming media,[34]digital television,streaming television,video on demand(VOD),[8]digital cinema,[22]high-definition video(HD video), andhigh-definition television(HDTV).[7][35] The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy compression, because it has a strongenergy compactionproperty.[5][6]In typical applications, most of the signal information tends to be concentrated in a few low-frequency components of the DCT. For strongly correlatedMarkov processes, the DCT can approach the compaction efficiency of theKarhunen-Loève transform(which is optimal in the decorrelation sense). 
As explained below, this stems from the boundary conditions implicit in the cosine functions. DCTs are widely employed in solvingpartial differential equationsbyspectral methods, where the different variants of the DCT correspond to slightly different even and odd boundary conditions at the two ends of the array. DCTs are closely related toChebyshev polynomials, and fast DCT algorithms (below) are used inChebyshev approximationof arbitrary functions by series of Chebyshev polynomials, for example inClenshaw–Curtis quadrature. The DCT is widely used in many applications, which include the following. The DCT-II is an important image compression technique. It is used in image compression standards such asJPEG, andvideo compressionstandards such asH.26x,MJPEG,MPEG,DV,TheoraandDaala. There, the two-dimensional DCT-II ofN×N{\displaystyle N\times N}blocks are computed and the results arequantizedandentropy coded. In this case,N{\displaystyle N}is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8 × 8 transform coefficient array in which the(0,0){\displaystyle (0,0)}element (top-left) is the DC (zero-frequency) component and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies. The integer DCT, an integer approximation of the DCT,[2][1]is used inAdvanced Video Coding(AVC),[52][1]introduced in 2003, andHigh Efficiency Video Coding(HEVC),[4][1]introduced in 2013. The integer DCT is also used in theHigh Efficiency Image Format(HEIF), which uses a subset of theHEVCvideo coding format for coding still images.[4]AVC uses 4 x 4 and 8 x 8 blocks. 
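Following the row-column procedure described above, a small sketch (unscaled DCT-II, illustrative names) transforms an 8 × 8 block; for a smooth gradient block the energy concentrates in the first row and column of coefficients, with the DC term at the top-left.

```python
import math

def dct2_1d(v):
    """Unscaled 1-D DCT-II: X_k = sum_n v_n cos(pi (n + 1/2) k / N)."""
    N = len(v)
    return [sum(v[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def dct2_block(block):
    """2-D DCT-II of a square block: 1-D DCT on each row, then on each column."""
    rows = [dct2_1d(r) for r in block]
    cols = [dct2_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]        # transpose back to [u][v] layout

# A smooth 8x8 gradient block, as might come from a featureless image region.
block = [[float(i + j) for j in range(8)] for i in range(8)]
coef = dct2_block(block)
```

Because the gradient i + j is separable into a constant plus a ramp in each direction, every coefficient outside the first row and first column vanishes: a small, extreme illustration of the energy compaction that makes the DCT effective for compression.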
HEVC and HEIF use varied block sizes between 4 x 4 and 32 x 32pixels.[4][1]As of 2019[update], AVC is by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC which is used by 43% of developers.[43] Multidimensional DCTs (MD DCTs) have several applications, mainly 3-D DCTs such as the 3-D DCT-II, which has several new applications like Hyperspectral Imaging coding systems,[85]variable temporal length 3-D DCT coding,[86]video codingalgorithms,[87]adaptive video coding[88]and 3-D Compression.[89]Due to enhancement in the hardware, software and introduction of several fast algorithms, the necessity of using MD DCTs is rapidly increasing.DCT-IVhas gained popularity for its applications in fast implementation of real-valued polyphase filtering banks,[90]lapped orthogonal transform[91][92]and cosine-modulated wavelet bases.[93] DCT plays an important role indigital signal processingspecificallydata compression. The DCT is widely implemented indigital signal processors(DSP), as well as digital signal processing software. Many companies have developed DSPs based on DCT technology. DCTs are widely used for applications such asencoding, decoding, video, audio,multiplexing, control signals,signaling, andanalog-to-digital conversion. DCTs are also commonly used forhigh-definition television(HDTV) encoder/decoderchips.[1] A common issue with DCT compression indigital mediaare blockycompression artifacts,[94]caused by DCT blocks.[3]In a DCT algorithm, an image (or frame in an image sequence) is divided into square blocks which are processed independently from each other, then the DCT blocks is taken within each block and the resulting DCT coefficients arequantized. 
This process can cause blocking artifacts, primarily at highdata compression ratios.[94]This can also cause themosquito noiseeffect, commonly found indigital video.[95] DCT blocks are often used inglitch art.[3]The artistRosa Menkmanmakes use of DCT-based compression artifacts in her glitch art,[96]particularly the DCT blocks found in mostdigital mediaformats such asJPEGdigital images andMP3audio.[3]Another example isJpegsby German photographerThomas Ruff, which uses intentionalJPEGartifacts as the basis of the picture's style.[97][98] Like any Fourier-related transform, DCTs express a function or a signal in terms of a sum ofsinusoidswith differentfrequenciesandamplitudes. Like the DFT, a DCT operates on a function at a finite number ofdiscrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form ofcomplex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DCT implies differentboundary conditionsfrom the DFT or other related transforms. The Fourier-related transforms that operate on a function over a finitedomain, such as the DFT or DCT or aFourier series, can be thought of as implicitly defining anextensionof that function outside the domain. That is, once you write a functionf(x){\displaystyle f(x)}as a sum of sinusoids, you can evaluate that sum at anyx{\displaystyle x}, even forx{\displaystyle x}where the originalf(x){\displaystyle f(x)}was not specified. The DFT, like the Fourier series, implies aperiodicextension of the original function. A DCT, like acosine transform, implies anevenextension of the original function. However, because DCTs operate onfinite,discretesequences, two issues arise that do not apply for the continuous cosine transform. First, one has to specify whether the function is even or odd atboththe left and right boundaries of the domain (i.e. 
the min-nand max-nboundaries in the definitions below, respectively). Second, one has to specify aroundwhat pointthe function is even or odd. In particular, consider a sequenceabcdof four equally spaced data points, and say that we specify an evenleftboundary. There are two sensible possibilities: either the data are even about the samplea, in which case the even extension isdcbabcd, or the data are even about the pointhalfwaybetweenaand the previous point, in which case the even extension isdcbaabcd(ais repeated). Each boundary can be either even or odd (2 choices per boundary) and can be symmetric about a data point or the point halfway between two data points (2 choices per boundary), for a total of 2 × 2 × 2 × 2 = 16 possibilities. These choices lead to all the standard variations of DCTs and alsodiscrete sine transforms(DSTs). Half of these possibilities, those where theleftboundary is even, correspond to the 8 types of DCT; the other half are the 8 types of DST. These different boundary conditions strongly affect the applications of the transform and lead to uniquely useful properties for the various DCT types. Most directly, when using Fourier-related transforms to solvepartial differential equationsbyspectral methods, the boundary conditions are directly specified as a part of the problem being solved. Or, for the MDCT (based on the type-IV DCT), the boundary conditions are intimately involved in the MDCT's critical property of time-domain aliasing cancellation. In a more subtle fashion, the boundary conditions are responsible for theenergy compactificationproperties that make DCTs useful for image and audio compression, because the boundaries affect the rate of convergence of any Fourier-like series. In particular, it is well known that anydiscontinuitiesin a function reduce therate of convergenceof the Fourier series so that more sinusoids are needed to represent the function with a given accuracy. 
The same principle governs the usefulness of the DFT and other transforms for signal compression; the smoother a function is, the fewer terms in its DFT or DCT are required to represent it accurately, and the more it can be compressed.[a] However, the implicit periodicity of the DFT means that discontinuities usually occur at the boundaries: any random segment of a signal is unlikely to have the same value at both the left and right boundaries.[b] In contrast, a DCT where both boundaries are even always yields a continuous extension at the boundaries (although the slope is generally discontinuous). This is why DCTs, and in particular DCTs of types I, II, V, and VI (the types that have two even boundaries), generally perform better for signal compression than DFTs and DSTs. In practice, a type-II DCT is usually preferred for such applications, in part for reasons of computational convenience.

Formally, the discrete cosine transform is a linear, invertible function $f:\mathbb{R}^N \to \mathbb{R}^N$ (where $\mathbb{R}$ denotes the set of real numbers), or equivalently an invertible $N \times N$ square matrix. There are several variants of the DCT with slightly modified definitions. The $N$ real numbers $x_0, \ldots, x_{N-1}$ are transformed into the $N$ real numbers $X_0, \ldots, X_{N-1}$ according to one of the formulas; for the DCT-I:

$$X_k = \frac{1}{2}\left(x_0 + (-1)^k x_{N-1}\right) + \sum_{n=1}^{N-2} x_n \cos\!\left(\frac{\pi n k}{N-1}\right), \qquad k = 0, \ldots, N-1.$$

Some authors further multiply the $x_0$ and $x_{N-1}$ terms by $\sqrt{2}$ and correspondingly multiply the $X_0$ and $X_{N-1}$ terms by $1/\sqrt{2}$, which, if one further multiplies by an overall scale factor of $\sqrt{\tfrac{2}{N-1}}$, makes the DCT-I matrix orthogonal but breaks the direct correspondence with a real-even DFT.
The DCT-I is exactly equivalent (up to an overall scale factor of 2) to a DFT of 2(N−1){\displaystyle 2(N-1)}real numbers with even symmetry. For example, a DCT-I of N=5{\displaystyle N=5}real numbers abcde{\displaystyle a\ b\ c\ d\ e}is exactly equivalent to a DFT of eight real numbers abcdedcb{\displaystyle a\ b\ c\ d\ e\ d\ c\ b}(even symmetry), divided by two. (In contrast, DCT types II–IV involve a half-sample shift in the equivalent DFT.) Note, however, that the DCT-I is not defined for N{\displaystyle N}less than 2, while all other DCT types are defined for any positive N{\displaystyle N}. Thus, the DCT-I corresponds to the boundary conditions: xn{\displaystyle x_{n}}is even around n=0{\displaystyle n=0}and even around n=N−1{\displaystyle n=N-1}; similarly for Xk{\displaystyle X_{k}}. The DCT-II is probably the most commonly used form, and is often simply referred to as the DCT.[5][6] This transform is exactly equivalent (up to an overall scale factor of 2) to a DFT of 4N{\displaystyle 4N}real inputs of even symmetry, where the even-indexed elements are zero. That is, it is half of the DFT of the 4N{\displaystyle 4N}inputs yn,{\displaystyle y_{n},}where y2n=0{\displaystyle y_{2n}=0}, y2n+1=xn{\displaystyle y_{2n+1}=x_{n}}for 0≤n<N{\displaystyle 0\leq n<N}, y2N=0{\displaystyle y_{2N}=0}, and y4N−n=yn{\displaystyle y_{4N-n}=y_{n}}for 0<n<2N{\displaystyle 0<n<2N}. A DCT-II can also be computed from a 2N{\displaystyle 2N}-point DFT followed by a multiplication by half-shift phase factors, as demonstrated by Makhoul.[citation needed] Some authors further multiply the X0{\displaystyle X_{0}}term by 1/N{\displaystyle 1/{\sqrt {N\,}}\,}and multiply the rest of the matrix by an overall scale factor of 2/N{\textstyle {\sqrt {{2}/{N}}}}(see below for the corresponding change in DCT-III). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input.
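The DCT-I/DFT equivalence stated above is easy to check numerically. The following sketch (naive transforms written straight from the definitions, with arbitrary sample values) mirrors a b c d e into an eight-point sequence and compares:

```python
import cmath
import math

x = [1.0, 2.0, -1.0, 0.5, 3.0]     # a b c d e  (N = 5)
N = len(x)
mirrored = x + x[-2:0:-1]          # a b c d e d c b, length 2(N-1) = 8

# DFT of the even-symmetric eight-point sequence (result is purely real)
M = len(mirrored)
dft = [sum(mirrored[n] * cmath.exp(-2j * math.pi * n * k / M) for n in range(M))
       for k in range(M)]

# DCT-I: X_k = (x_0 + (-1)^k x_{N-1})/2 + sum_{n=1}^{N-2} x_n cos(pi n k/(N-1))
dct1 = [(x[0] + (-1) ** k * x[N - 1]) / 2
        + sum(x[n] * math.cos(math.pi * n * k / (N - 1))
              for n in range(1, N - 1))
        for k in range(N)]

# The DFT outputs for k = 0..N-1 equal exactly twice the DCT-I outputs
errs = [abs(dft[k] - 2 * dct1[k]) for k in range(N)]
```

Here the factor of two is the "overall scale factor" mentioned in the text; with other normalization conventions it moves elsewhere.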
This is the normalization used by Matlab.[99]In many applications, such as JPEG, the scaling is arbitrary because scale factors can be combined with a subsequent computational step (e.g. the quantization step in JPEG[100]), and a scaling can be chosen that allows the DCT to be computed with fewer multiplications.[101][102] The DCT-II implies the boundary conditions: xn{\displaystyle x_{n}}is even around n=−1/2{\displaystyle n=-1/2}and even around n=N−1/2{\displaystyle n=N-1/2\,}; Xk{\displaystyle X_{k}}is even around k=0{\displaystyle k=0}and odd around k=N{\displaystyle k=N}. Because it is the inverse of DCT-II up to a scale factor (see below), this form is sometimes simply referred to as "the inverse DCT" ("IDCT").[6] Some authors divide the x0{\displaystyle x_{0}}term by 2{\displaystyle {\sqrt {2}}}instead of by 2 (resulting in an overall x0/2{\displaystyle x_{0}/{\sqrt {2}}}term) and multiply the resulting matrix by an overall scale factor of 2/N{\textstyle {\sqrt {2/N}}}(see above for the corresponding change in DCT-II), so that the DCT-II and DCT-III are transposes of one another. This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output.
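The inverse relationship between DCT-II and DCT-III can be confirmed directly from the unnormalized definitions (a sketch; as noted in the text, scaling conventions vary between treatments): applying DCT-III after DCT-II and multiplying by 2/N recovers the input.

```python
import math

def dct2(x):
    # Unnormalized DCT-II: X_k = sum_n x_n cos(pi*(n+1/2)*k/N)
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def dct3(X):
    # Unnormalized DCT-III (note the halved X_0 term):
    # x_n = X_0/2 + sum_{k=1}^{N-1} X_k cos(pi*k*(n+1/2)/N)
    N = len(X)
    return [X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (n + 0.5) / N)
                           for k in range(1, N))
            for n in range(N)]

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
roundtrip = [(2 / len(x)) * v for v in dct3(dct2(x))]
errs = [abs(a - b) for a, b in zip(roundtrip, x)]
```

With the orthonormalized (√2-adjusted) conventions described above, the 2/N factor disappears and the two matrices become exact transposes/inverses of each other.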
The DCT-III implies the boundary conditions: xn{\displaystyle x_{n}}is even around n=0{\displaystyle n=0}and odd around n=N;{\displaystyle n=N;}Xk{\displaystyle X_{k}}is even around k=−1/2{\displaystyle k=-1/2}and even around k=N−1/2.{\displaystyle k=N-1/2.} The DCT-IV matrix becomes orthogonal (and thus, being clearly symmetric, its own inverse) if one further multiplies by an overall scale factor of 2/N.{\textstyle {\sqrt {2/N}}.} A variant of the DCT-IV, where data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT).[103] The DCT-IV implies the boundary conditions: xn{\displaystyle x_{n}}is even around n=−1/2{\displaystyle n=-1/2}and odd around n=N−1/2;{\displaystyle n=N-1/2;}similarly for Xk.{\displaystyle X_{k}.} DCTs of types I–IV treat both boundaries consistently regarding the point of symmetry: they are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. By contrast, DCTs of types V–VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary. In other words, DCT types I–IV are equivalent to real-even DFTs of even order (regardless of whether N{\displaystyle N}is even or odd), since the corresponding DFT is of length 2(N−1){\displaystyle 2(N-1)}(for DCT-I) or 4N{\displaystyle 4N}(for DCT-II & III) or 8N{\displaystyle 8N}(for DCT-IV). The four additional types of discrete cosine transform[104]correspond essentially to real-even DFTs of logically odd order, which have factors of N±1/2{\displaystyle N\pm {1}/{2}}in the denominators of the cosine arguments. However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g. the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below.
(The trivial real-even array, a length-one DFT (odd length) of a single number a, corresponds to a DCT-V of length N=1.{\displaystyle N=1.}) Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N − 1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N, and vice versa.[6] As with the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by 2/N{\textstyle {\sqrt {2/N}}}so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix orthogonal. Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension. For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2D DCT-II is given by the formula (omitting normalization and other scale factors, as above): The 3-D DCT-II is simply the extension of the 2-D DCT-II to three-dimensional space and can be calculated by the formula The inverse of the 3-D DCT-II is the 3-D DCT-III and can be computed from the formula given by Technically, computing a two-, three- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order (i.e. interleaving/combining the algorithms for the different dimensions). Owing to the rapid growth in the applications based on the 3-D DCT, several fast algorithms have been developed for the computation of the 3-D DCT-II.
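The separability can be checked with a small sketch (naive unnormalized DCT-II on a 4×4 block of arbitrary values): transforming the rows and then the columns gives the same result as the direct 2-D double-sum formula.

```python
import math

def dct2_1d(x):
    # Unnormalized 1-D DCT-II
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

A = [[1.0, 2.0, 0.0, -1.0],
     [3.0, 1.0, 2.0,  0.0],
     [0.0, 4.0, 1.0,  2.0],
     [2.0, 0.0, 3.0,  1.0]]
N1, N2 = len(A), len(A[0])

# Row-column: 1-D DCT-II along each row, then along each column
rows = [dct2_1d(r) for r in A]
cols = [dct2_1d([rows[i][k2] for i in range(N1)]) for k2 in range(N2)]
rowcol = [[cols[k2][k1] for k2 in range(N2)] for k1 in range(N1)]

# Direct 2-D double sum over both indices at once
direct = [[sum(A[n1][n2]
               * math.cos(math.pi * (n1 + 0.5) * k1 / N1)
               * math.cos(math.pi * (n2 + 0.5) * k2 / N2)
               for n1 in range(N1) for n2 in range(N2))
           for k2 in range(N2)] for k1 in range(N1)]

err = max(abs(rowcol[i][j] - direct[i][j])
          for i in range(N1) for j in range(N2))
```

The same separability is what the row-column algorithm exploits, and it extends unchanged to three or more dimensions.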
Vector-radix algorithms are applied to the M-D DCT to reduce the computational complexity and to increase the computational speed. To compute the 3-D DCT-II efficiently, a fast algorithm, the Vector-Radix Decimation in Frequency (VR DIF) algorithm, was developed. In order to apply the VR DIF algorithm, the input data must be formulated and rearranged as follows.[105][106]The transform size N × N × N is assumed to be a power of 2. The adjacent figure shows the four stages involved in calculating the 3-D DCT-II using the VR DIF algorithm. The first stage is the 3-D reordering using the index mapping illustrated by the above equations. The second stage is the butterfly calculation. Each butterfly calculates eight points together as shown in the figure just below, where c(φi)=cos⁡(φi){\displaystyle c(\varphi _{i})=\cos(\varphi _{i})}. The original 3-D DCT-II can now be written as where φi=π2N(4Ni+1),andi=1,2,3.{\displaystyle \varphi _{i}={\frac {\pi }{2N}}(4N_{i}+1),{\text{ and }}i=1,2,3.} If the even and odd parts of k1,k2{\displaystyle k_{1},k_{2}}and k3{\displaystyle k_{3}}are considered, the general formula for the calculation of the 3-D DCT-II can be expressed as where The whole 3-D DCT calculation needs [log2⁡N]{\displaystyle ~[\log _{2}N]~}stages, and each stage involves 18N3{\displaystyle ~{\tfrac {1}{8}}\ N^{3}~}butterflies. The whole 3-D DCT requires [18N3log2⁡N]{\displaystyle ~\left[{\tfrac {1}{8}}\ N^{3}\log _{2}N\right]~}butterflies to be computed. Each butterfly requires seven real multiplications (including trivial multiplications) and 24 real additions (including trivial additions). Therefore, the total number of real multiplications needed for this stage is [78N3log2⁡N],{\displaystyle ~\left[{\tfrac {7}{8}}\ N^{3}\ \log _{2}N\right]~,}and the total number of real additions, i.e.
including the post-additions (recursive additions), which can be calculated directly after the butterfly stage or after the bit-reverse stage, is given by[106][32N3log2⁡N]⏟Real+[32N3log2⁡N−3N3+3N2]⏟Recursive=[92N3log2⁡N−3N3+3N2].{\displaystyle ~\underbrace {\left[{\frac {3}{2}}N^{3}\log _{2}N\right]} _{\text{Real}}+\underbrace {\left[{\frac {3}{2}}N^{3}\log _{2}N-3N^{3}+3N^{2}\right]} _{\text{Recursive}}=\left[{\frac {9}{2}}N^{3}\log _{2}N-3N^{3}+3N^{2}\right]~.} The conventional method of calculating the MD-DCT-II uses a Row-Column-Frame (RCF) approach, which is computationally complex and less productive on most advanced recent hardware platforms. The VR DIF algorithm requires considerably fewer multiplications than the RCF algorithm. The numbers of multiplications and additions involved in the RCF approach are given by [32N3log2⁡N]{\displaystyle ~\left[{\frac {3}{2}}N^{3}\log _{2}N\right]~}and [92N3log2⁡N−3N3+3N2],{\displaystyle ~\left[{\frac {9}{2}}N^{3}\log _{2}N-3N^{3}+3N^{2}\right]~,}respectively. From Table 1, it can be seen that the total number of multiplications associated with the 3-D DCT VR algorithm is less than that associated with the RCF approach by more than 40%. In addition, the RCF approach involves matrix transposition and more indexing and data swapping than the new VR algorithm. This makes the 3-D DCT VR algorithm more efficient and better suited for 3-D applications that involve the 3-D DCT-II, such as video compression and other 3-D image processing applications. The main consideration in choosing a fast algorithm is to avoid computational and structural complexities.
As the technology of computers and DSPs advances, the execution time of arithmetic operations (multiplications and additions) is becoming very fast, and regular computational structure becomes the most important factor.[107]Therefore, although the above proposed 3-D VR algorithm does not achieve the theoretical lower bound on the number of multiplications,[108]it has a simpler computational structure as compared to other 3-D DCT algorithms. It can be implemented in place using a single butterfly and possesses the properties of the Cooley–Tukey FFT algorithm in 3-D. Hence, the 3-D VR presents a good choice for reducing arithmetic operations in the calculation of the 3-D DCT-II, while keeping the simple structure that characterizes butterfly-style Cooley–Tukey FFT algorithms. The image to the right shows a combination of horizontal and vertical frequencies for an 8 × 8 (N1=N2=8){\displaystyle (~N_{1}=N_{2}=8~)}two-dimensional DCT. Each step from left to right and top to bottom is an increase in frequency by 1/2 cycle. For example, moving right one from the top-left square yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically. The source data (8×8) is transformed to a linear combination of these 64 frequency squares. The M-D DCT-IV is just the extension of the 1-D DCT-IV onto an M-dimensional domain. The 2-D DCT-IV of a matrix or an image is given by We can compute the M-D DCT-IV using the regular row-column method, or we can use the polynomial transform method[109]for fast and efficient computation. The main idea of this algorithm is to use the polynomial transform to convert the multidimensional DCT into a series of 1-D DCTs directly. The M-D DCT-IV also has several applications in various fields.
Although the direct application of these formulas would require O(N2){\displaystyle ~{\mathcal {O}}(N^{2})~}operations, it is possible to compute the same thing with only O(Nlog⁡N){\displaystyle ~{\mathcal {O}}(N\log N)~}complexity by factorizing the computation similarly to the fast Fourier transform (FFT). One can also compute DCTs via FFTs combined with O(N){\displaystyle ~{\mathcal {O}}(N)~}pre- and post-processing steps. In general, O(Nlog⁡N){\displaystyle ~{\mathcal {O}}(N\log N)~}methods to compute DCTs are known as fast cosine transform (FCT) algorithms. The most efficient algorithms, in principle, are usually those that are specialized directly for the DCT, as opposed to using an ordinary FFT plus O(N){\displaystyle ~{\mathcal {O}}(N)~}extra operations (see below for an exception). However, even "specialized" DCT algorithms (including all of those that achieve the lowest known arithmetic counts, at least for power-of-two sizes) are typically closely related to FFT algorithms – since DCTs are essentially DFTs of real-even data, one can design a fast DCT algorithm by taking an FFT and eliminating the redundant operations due to this symmetry. This can even be done automatically (Frigo & Johnson 2005). Algorithms based on the Cooley–Tukey FFT algorithm are most common, but any other FFT algorithm is also applicable. For example, the Winograd FFT algorithm leads to minimal-multiplication algorithms for the DFT, albeit generally at the cost of more additions, and a similar algorithm was proposed by Feig & Winograd (1992a) for the DCT. Because the algorithms for DFTs, DCTs, and similar transforms are all so closely related, any improvement in algorithms for one transform will theoretically lead to immediate gains for the other transforms as well (Duhamel & Vetterli 1990).
While DCT algorithms that employ an unmodified FFT often have some theoretical overhead compared to the best specialized DCT algorithms, the former also have a distinct advantage: highly optimized FFT programs are widely available. Thus, in practice, it is often easier to obtain high performance for general lengths N with FFT-based algorithms.[c]Specialized DCT algorithms, on the other hand, see widespread use for transforms of small, fixed sizes such as the 8 × 8 DCT-II used in JPEG compression, or the small DCTs (or MDCTs) typically used in audio compression. (Reduced code size may also be a reason to use a specialized DCT for embedded-device applications.) In fact, even the DCT algorithms using an ordinary FFT are sometimes equivalent to pruning the redundant operations from a larger FFT of real-symmetric data, and they can even be optimal from the perspective of arithmetic counts. For example, a type-II DCT is equivalent to a DFT of size 4N{\displaystyle ~4N~}with real-even symmetry whose even-indexed elements are zero. One of the most common methods for computing this via an FFT (e.g. the method used in FFTPACK and FFTW) was described by Narasimha & Peterson (1978) and Makhoul (1980), and this method in hindsight can be seen as one step of a radix-4 decimation-in-time Cooley–Tukey algorithm applied to the "logical" real-even DFT corresponding to the DCT-II.[d]Because the even-indexed elements are zero, this radix-4 step is exactly the same as a split-radix step. If the subsequent size-N{\displaystyle ~N~}real-data FFT is also performed by a real-data split-radix algorithm (as in Sorensen et al. (1987)), then the resulting algorithm actually matches what was long the lowest published arithmetic count for the power-of-two DCT-II (2Nlog2⁡N−N+2{\displaystyle ~2N\log _{2}N-N+2~}real-arithmetic operations[e]).
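The 4N construction mentioned above is easy to verify numerically. The following sketch uses a direct O(N²) DFT purely to illustrate the equivalence (it is not a fast algorithm):

```python
import cmath
import math

x = [2.0, -1.0, 0.5, 3.0]          # N = 4
N = len(x)

# Build the length-4N sequence: zeros at even indices,
# y[2n+1] = x[n] for 0 <= n < N, and mirror symmetry y[4N - n] = y[n].
y = [0.0] * (4 * N)
for n in range(N):
    y[2 * n + 1] = x[n]
    y[4 * N - (2 * n + 1)] = x[n]

dft = [sum(y[n] * cmath.exp(-2j * math.pi * n * k / (4 * N))
           for n in range(4 * N)) for k in range(4 * N)]

# Unnormalized DCT-II of the original N points
dct2 = [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        for k in range(N)]

# The first N DFT outputs are real and equal twice the DCT-II outputs
errs = [abs(dft[k] - 2 * dct2[k]) for k in range(N)]
```

A fast implementation would, as described in the text, prune the redundant operations from this real-symmetric DFT rather than evaluate it directly.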
A recent reduction in the operation count to 179Nlog2⁡N+O(N){\displaystyle ~{\tfrac {17}{9}}N\log _{2}N+{\mathcal {O}}(N)}also uses a real-data FFT.[110]So there is nothing intrinsically bad about computing the DCT via an FFT from an arithmetic perspective – it is sometimes merely a question of whether the corresponding FFT algorithm is optimal. (As a practical matter, the function-call overhead in invoking a separate FFT routine might be significant for small N,{\displaystyle ~N~,}but this is an implementation rather than an algorithmic question, since it can be solved by unrolling or inlining.) As an example of the inverse transform, consider an 8×8 grayscale image of the capital letter A: each basis function is multiplied by its coefficient, and then this product is added to the final image.
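The same reconstruction idea can be sketched in one dimension (unnormalized DCT-II analysis with the matching 2/N-scaled synthesis weights; the sample values are arbitrary): each cosine basis function is scaled by its coefficient and accumulated into the output.

```python
import math

N = 8
x = [52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0]  # arbitrary sample block

# Forward (analysis): unnormalized DCT-II coefficients
X = [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
     for k in range(N)]

# Inverse (synthesis): accumulate coefficient * basis function.
# The DC term gets weight 1/N, the rest 2/N (DCT-III synthesis).
recon = [0.0] * N
for k in range(N):
    weight = X[k] / N if k == 0 else 2 * X[k] / N
    for n in range(N):
        recon[n] += weight * math.cos(math.pi * k * (n + 0.5) / N)

err = max(abs(a - b) for a, b in zip(recon, x))
```

In the 2-D 8×8 case the accumulation runs over the 64 frequency squares instead of 8 cosines, but the principle is identical; JPEG-style compression simply quantizes or drops the small high-frequency coefficients before this synthesis.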
https://en.wikipedia.org/wiki/Discrete_cosine_transform
In mathematics, the logarithmic integral function or integral logarithm li(x) is a special function. It is relevant in problems of physics and has number theoretic significance. In particular, according to the prime number theorem, it is a very good approximation to the prime-counting function, which is defined as the number of prime numbers less than or equal to a given value x. The logarithmic integral has an integral representation defined for all positive real numbers x ≠ 1 by the definite integral Here, ln denotes the natural logarithm. The function 1/(ln t) has a singularity at t = 1, and the integral for x > 1 is interpreted as a Cauchy principal value, The offset logarithmic integral or Eulerian logarithmic integral is defined as As such, the integral representation has the advantage of avoiding the singularity in the domain of integration. Equivalently, The function li(x) has a single positive zero; it occurs at x ≈ 1.45136 92348 83381 05028 39684 85892 02744 94930...OEIS:A070769; this number is known as the Ramanujan–Soldner constant. li⁡(Li−1(0))=li(2){\displaystyle \operatorname {li} ({\text{Li}}^{-1}(0))={\text{li}}(2)}≈ 1.045163 780117 492784 844588 889194 613136 522615 578151...OEIS:A069284 This is −(Γ(0,−ln⁡2)+iπ){\displaystyle -(\Gamma (0,-\ln 2)+i\,\pi )}where Γ(a,x){\displaystyle \Gamma (a,x)}is the incomplete gamma function. It must be understood as the Cauchy principal value of the function. The function li(x) is related to the exponential integral Ei(x) via the equation which is valid for x > 0. This identity provides a series representation of li(x) as where γ ≈ 0.57721 56649 01532 ...OEIS:A001620is the Euler–Mascheroni constant. A more rapidly convergent series by Ramanujan[1]is The asymptotic behavior for x→∞{\displaystyle x\to \infty }is where O{\displaystyle O}is the big O notation.
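For x > 1 the relation li(x) = Ei(ln x) yields the rapidly convergent series li(x) = γ + ln(ln x) + Σ_{n≥1} (ln x)^n/(n · n!), which is enough for a short numerical sketch (pure Python; the Euler–Mascheroni constant is hard-coded to double precision):

```python
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def li(x):
    # li(x) = gamma + ln(ln x) + sum_{n>=1} (ln x)^n / (n * n!), valid for x > 1
    u = math.log(x)
    total = GAMMA + math.log(u)
    term = 1.0
    for n in range(1, 120):
        term *= u / n          # term is now u^n / n!
        total += term / n
    return total

print(li(2))                   # ~1.0451637801174927
print(li(1.45136923488338105)) # ~0: the Ramanujan–Soldner constant is li's zero
```

The series converges for any x > 1 but becomes slow for very large arguments; for those, the asymptotic expansion discussed below is the practical tool.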
The full asymptotic expansion is or This gives the following more accurate asymptotic behaviour: As an asymptotic expansion, this series is not convergent: it is a reasonable approximation only if the series is truncated at a finite number of terms, and only large values of x are employed. This expansion follows directly from the asymptotic expansion for the exponential integral. This implies, e.g., that we can bracket li as: for all ln⁡x≥11{\displaystyle \ln x\geq 11}. The logarithmic integral is important in number theory, appearing in estimates of the number of prime numbers less than a given value. For example, the prime number theorem states that: where π(x){\displaystyle \pi (x)}denotes the number of primes smaller than or equal to x{\displaystyle x}. Assuming the Riemann hypothesis, we get the even stronger:[2] In fact, the Riemann hypothesis is equivalent to the statement that: For small x{\displaystyle x}, li⁡(x)>π(x){\displaystyle \operatorname {li} (x)>\pi (x)}, but the difference changes sign an infinite number of times as x{\displaystyle x}increases, and the first time that this happens is somewhere between 10^19 and 1.4×10^316.
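A quick numerical spot-check of the prime number theorem estimate (pure Python; a simple sieve for π(x), and the series representation of li noted above): at x = 10^6 one finds π(x) = 78498 while li(x) ≈ 78627.5, so li overshoots, as expected for small x.

```python
import math

def li(x):
    # Series for x > 1: li(x) = gamma + ln(ln x) + sum_{n>=1} (ln x)^n/(n*n!)
    gamma = 0.5772156649015329
    u = math.log(x)
    total, term = gamma + math.log(u), 1.0
    for n in range(1, 200):
        term *= u / n
        total += term / n
    return total

def prime_count(limit):
    # Sieve of Eratosthenes; returns pi(limit)
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return sum(sieve)

pi_x = prime_count(10**6)   # 78498
li_x = li(10**6)            # ~78627.5, an overestimate of pi(x) at this scale
```

The relative error here is already below 0.2%, illustrating how good the approximation is even at modest x.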
https://en.wikipedia.org/wiki/Logarithmic_integral
In mathematics, physics and engineering, the sinc function (/ˈsɪŋk/ SINK), denoted by sinc(x), has two forms, normalized and unnormalized.[1] In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 by sinc⁡(x)=sin⁡xx.{\displaystyle \operatorname {sinc} (x)={\frac {\sin x}{x}}.} Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x).[2] In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by sinc⁡(x)=sin⁡(πx)πx.{\displaystyle \operatorname {sinc} (x)={\frac {\sin(\pi x)}{\pi x}}.} In either case, the value at x = 0 is defined to be the limiting value sinc⁡(0):=limx→0sin⁡(ax)ax=1{\displaystyle \operatorname {sinc} (0):=\lim _{x\to 0}{\frac {\sin(ax)}{ax}}=1}for all real a ≠ 0 (the limit can be proven using the squeeze theorem). The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of π). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of x. The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal. The only difference between the two definitions is in the scaling of the independent variable (the x axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function. The function has also been called the cardinal sine or sine cardinal function.[3][4]The term sinc was introduced by Philip M.
Woodward in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own",[5]and in his 1953 book Probability and Information Theory, with Applications to Radar.[6][7]The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's formula) for the zeroth-order spherical Bessel function of the first kind. The zero crossings of the unnormalized sinc are at non-zero integer multiples of π, while zero crossings of the normalized sinc occur at non-zero integers. The local maxima and minima of the unnormalized sinc correspond to its intersections with the cosine function. That is, sin(ξ)/ξ = cos(ξ) for all points ξ where the derivative of sin(x)/x is zero and thus a local extremum is reached. This follows from the derivative of the sinc function: ddxsinc⁡(x)={cos⁡(x)−sinc⁡(x)x,x≠00,x=0.{\displaystyle {\frac {d}{dx}}\operatorname {sinc} (x)={\begin{cases}{\dfrac {\cos(x)-\operatorname {sinc} (x)}{x}},&x\neq 0\\0,&x=0\end{cases}}.} The first few terms of the infinite series for the x coordinate of the n-th extremum with positive x coordinate are[citation needed]xn=q−q−1−23q−3−1315q−5−146105q−7−⋯,{\displaystyle x_{n}=q-q^{-1}-{\frac {2}{3}}q^{-3}-{\frac {13}{15}}q^{-5}-{\frac {146}{105}}q^{-7}-\cdots ,}where q=(n+12)π,{\displaystyle q=\left(n+{\frac {1}{2}}\right)\pi ,}and where odd n lead to a local minimum, and even n to a local maximum. Because of symmetry around the y axis, there exist extrema with x coordinates −xn. In addition, there is an absolute maximum at ξ0 = (0, 1).
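The series for the extrema can be checked against the cosine-intersection property. A short numerical sketch for n = 1, the first positive extremum (a local minimum near x ≈ 4.4934):

```python
import math

# First positive extremum (n = 1, odd, so a local minimum) from the
# series above, truncated after the q^-7 term.
n = 1
q = (n + 0.5) * math.pi
x1 = q - 1 / q - (2 / 3) * q**-3 - (13 / 15) * q**-5 - (146 / 105) * q**-7

def sinc(x):
    # Unnormalized sinc
    return math.sin(x) / x

# At an extremum, sinc(x) = cos(x): the cosine-intersection property
gap = abs(sinc(x1) - math.cos(x1))

# And the point is indeed a local minimum of the unnormalized sinc
is_min = sinc(x1) < sinc(x1 - 0.05) and sinc(x1) < sinc(x1 + 0.05)
```

Even with only five terms of the series, the computed extremum satisfies the intersection property to a few parts in a million.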
The normalized sinc function has a simple representation as theinfinite product:sin⁡(πx)πx=∏n=1∞(1−x2n2){\displaystyle {\frac {\sin(\pi x)}{\pi x}}=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{n^{2}}}\right)} and is related to thegamma functionΓ(x)throughEuler's reflection formula:sin⁡(πx)πx=1Γ(1+x)Γ(1−x).{\displaystyle {\frac {\sin(\pi x)}{\pi x}}={\frac {1}{\Gamma (1+x)\Gamma (1-x)}}.} Eulerdiscovered[8]thatsin⁡(x)x=∏n=1∞cos⁡(x2n),{\displaystyle {\frac {\sin(x)}{x}}=\prod _{n=1}^{\infty }\cos \left({\frac {x}{2^{n}}}\right),}and because of the product-to-sum identity[9] ∏n=1kcos⁡(x2n)=12k−1∑n=12k−1cos⁡(n−1/22k−1x),∀k≥1,{\displaystyle \prod _{n=1}^{k}\cos \left({\frac {x}{2^{n}}}\right)={\frac {1}{2^{k-1}}}\sum _{n=1}^{2^{k-1}}\cos \left({\frac {n-1/2}{2^{k-1}}}x\right),\quad \forall k\geq 1,}Euler's product can be recast as a sumsin⁡(x)x=limN→∞1N∑n=1Ncos⁡(n−1/2Nx).{\displaystyle {\frac {\sin(x)}{x}}=\lim _{N\to \infty }{\frac {1}{N}}\sum _{n=1}^{N}\cos \left({\frac {n-1/2}{N}}x\right).} Thecontinuous Fourier transformof the normalized sinc (to ordinary frequency) isrect(f):∫−∞∞sinc⁡(t)e−i2πftdt=rect⁡(f),{\displaystyle \int _{-\infty }^{\infty }\operatorname {sinc} (t)\,e^{-i2\pi ft}\,dt=\operatorname {rect} (f),}where therectangular functionis 1 for argument between −⁠1/2⁠and⁠1/2⁠, and zero otherwise. This corresponds to the fact that thesinc filteris the ideal (brick-wall, meaning rectangularfrequency response)low-pass filter. 
This Fourier integral, including the special case∫−∞∞sin⁡(πx)πxdx=rect⁡(0)=1{\displaystyle \int _{-\infty }^{\infty }{\frac {\sin(\pi x)}{\pi x}}\,dx=\operatorname {rect} (0)=1}is animproper integral(seeDirichlet integral) and not a convergentLebesgue integral, as∫−∞∞|sin⁡(πx)πx|dx=+∞.{\displaystyle \int _{-\infty }^{\infty }\left|{\frac {\sin(\pi x)}{\pi x}}\right|\,dx=+\infty .} The normalized sinc function has properties that make it ideal in relationship tointerpolationofsampledbandlimitedfunctions: Other properties of the two sinc functions include: The normalized sinc function can be used as anascent delta function, meaning that the followingweak limitholds: lima→0sin⁡(πxa)πx=lima→01asinc⁡(xa)=δ(x).{\displaystyle \lim _{a\to 0}{\frac {\sin \left({\frac {\pi x}{a}}\right)}{\pi x}}=\lim _{a\to 0}{\frac {1}{a}}\operatorname {sinc} \left({\frac {x}{a}}\right)=\delta (x).} This is not an ordinary limit, since the left side does not converge. Rather, it means that lima→0∫−∞∞1asinc⁡(xa)φ(x)dx=φ(0){\displaystyle \lim _{a\to 0}\int _{-\infty }^{\infty }{\frac {1}{a}}\operatorname {sinc} \left({\frac {x}{a}}\right)\varphi (x)\,dx=\varphi (0)} for everySchwartz function, as can be seen from theFourier inversion theorem. In the above expression, asa→ 0, the number of oscillations per unit length of the sinc function approaches infinity. Nevertheless, the expression always oscillates inside an envelope of±⁠1/πx⁠, regardless of the value ofa. This complicates the informal picture ofδ(x)as being zero for allxexcept at the pointx= 0, and illustrates the problem of thinking of the delta function as a function rather than as a distribution. A similar situation is found in theGibbs phenomenon. 
We can also make an immediate connection with the standard Dirac representation ofδ(x){\displaystyle \delta (x)}by writingb=1/a{\displaystyle b=1/a}and limb→∞sin⁡(bπx)πx=limb→∞12π∫−bπbπeikxdk=12π∫−∞∞eikxdk=δ(x),{\displaystyle \lim _{b\to \infty }{\frac {\sin \left(b\pi x\right)}{\pi x}}=\lim _{b\to \infty }{\frac {1}{2\pi }}\int _{-b\pi }^{b\pi }e^{ikx}dk={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ikx}dk=\delta (x),} which makes clear the recovery of the delta as an infinite bandwidth limit of the integral. All sums in this section refer to the unnormalized sinc function. The sum ofsinc(n)over integernfrom 1 to∞equals⁠π− 1/2⁠: ∑n=1∞sinc⁡(n)=sinc⁡(1)+sinc⁡(2)+sinc⁡(3)+sinc⁡(4)+⋯=π−12.{\displaystyle \sum _{n=1}^{\infty }\operatorname {sinc} (n)=\operatorname {sinc} (1)+\operatorname {sinc} (2)+\operatorname {sinc} (3)+\operatorname {sinc} (4)+\cdots ={\frac {\pi -1}{2}}.} The sum of the squares also equals⁠π− 1/2⁠:[10][11] ∑n=1∞sinc2⁡(n)=sinc2⁡(1)+sinc2⁡(2)+sinc2⁡(3)+sinc2⁡(4)+⋯=π−12.{\displaystyle \sum _{n=1}^{\infty }\operatorname {sinc} ^{2}(n)=\operatorname {sinc} ^{2}(1)+\operatorname {sinc} ^{2}(2)+\operatorname {sinc} ^{2}(3)+\operatorname {sinc} ^{2}(4)+\cdots ={\frac {\pi -1}{2}}.} When the signs of theaddendsalternate and begin with +, the sum equals⁠1/2⁠:∑n=1∞(−1)n+1sinc⁡(n)=sinc⁡(1)−sinc⁡(2)+sinc⁡(3)−sinc⁡(4)+⋯=12.{\displaystyle \sum _{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} (n)=\operatorname {sinc} (1)-\operatorname {sinc} (2)+\operatorname {sinc} (3)-\operatorname {sinc} (4)+\cdots ={\frac {1}{2}}.} The alternating sums of the squares and cubes also equal⁠1/2⁠:[12]∑n=1∞(−1)n+1sinc2⁡(n)=sinc2⁡(1)−sinc2⁡(2)+sinc2⁡(3)−sinc2⁡(4)+⋯=12,{\displaystyle \sum _{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} ^{2}(n)=\operatorname {sinc} ^{2}(1)-\operatorname {sinc} ^{2}(2)+\operatorname {sinc} ^{2}(3)-\operatorname {sinc} ^{2}(4)+\cdots ={\frac {1}{2}},} ∑n=1∞(−1)n+1sinc3⁡(n)=sinc3⁡(1)−sinc3⁡(2)+sinc3⁡(3)−sinc3⁡(4)+⋯=12.{\displaystyle \sum 
_{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} ^{3}(n)=\operatorname {sinc} ^{3}(1)-\operatorname {sinc} ^{3}(2)+\operatorname {sinc} ^{3}(3)-\operatorname {sinc} ^{3}(4)+\cdots ={\frac {1}{2}}.} TheTaylor seriesof the unnormalizedsincfunction can be obtained from that of the sine (which also yields its value of 1 atx= 0):sin⁡xx=∑n=0∞(−1)nx2n(2n+1)!=1−x23!+x45!−x67!+⋯{\displaystyle {\frac {\sin x}{x}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n}}{(2n+1)!}}=1-{\frac {x^{2}}{3!}}+{\frac {x^{4}}{5!}}-{\frac {x^{6}}{7!}}+\cdots } The series converges for allx. The normalized version follows easily:sin⁡πxπx=1−π2x23!+π4x45!−π6x67!+⋯{\displaystyle {\frac {\sin \pi x}{\pi x}}=1-{\frac {\pi ^{2}x^{2}}{3!}}+{\frac {\pi ^{4}x^{4}}{5!}}-{\frac {\pi ^{6}x^{6}}{7!}}+\cdots } Eulerfamously compared this series to the expansion of the infinite product form to solve theBasel problem. The product of 1-D sinc functions readily provides amultivariatesinc function for the square Cartesian grid (lattice):sincC(x,y) = sinc(x) sinc(y), whoseFourier transformis theindicator functionof a square in the frequency space (i.e., the brick wall defined in 2-D space). The sinc function for a non-Cartesianlattice(e.g.,hexagonal lattice) is a function whoseFourier transformis theindicator functionof theBrillouin zoneof that lattice. For example, the sinc function for the hexagonal lattice is a function whoseFourier transformis theindicator functionof the unit hexagon in the frequency space. For a non-Cartesian lattice this function can not be obtained by a simple tensor product. However, the explicit formula for the sinc function for thehexagonal,body-centered cubic,face-centered cubicand other higher-dimensional lattices can be explicitly derived[13]using the geometric properties of Brillouin zones and their connection tozonotopes. 
For example, ahexagonal latticecan be generated by the (integer)linear spanof the vectorsu1=[1232]andu2=[12−32].{\displaystyle \mathbf {u} _{1}={\begin{bmatrix}{\frac {1}{2}}\\{\frac {\sqrt {3}}{2}}\end{bmatrix}}\quad {\text{and}}\quad \mathbf {u} _{2}={\begin{bmatrix}{\frac {1}{2}}\\-{\frac {\sqrt {3}}{2}}\end{bmatrix}}.} Denotingξ1=23u1,ξ2=23u2,ξ3=−23(u1+u2),x=[xy],{\displaystyle {\boldsymbol {\xi }}_{1}={\tfrac {2}{3}}\mathbf {u} _{1},\quad {\boldsymbol {\xi }}_{2}={\tfrac {2}{3}}\mathbf {u} _{2},\quad {\boldsymbol {\xi }}_{3}=-{\tfrac {2}{3}}(\mathbf {u} _{1}+\mathbf {u} _{2}),\quad \mathbf {x} ={\begin{bmatrix}x\\y\end{bmatrix}},}one can derive[13]the sinc function for this hexagonal lattice assincH⁡(x)=13(cos⁡(πξ1⋅x)sinc⁡(ξ2⋅x)sinc⁡(ξ3⋅x)+cos⁡(πξ2⋅x)sinc⁡(ξ3⋅x)sinc⁡(ξ1⋅x)+cos⁡(πξ3⋅x)sinc⁡(ξ1⋅x)sinc⁡(ξ2⋅x)).{\displaystyle {\begin{aligned}\operatorname {sinc} _{\text{H}}(\mathbf {x} )={\tfrac {1}{3}}{\big (}&\cos \left(\pi {\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\\&{}+\cos \left(\pi {\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\\&{}+\cos \left(\pi {\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right){\big )}.\end{aligned}}} This construction can be used to designLanczos windowfor general multidimensional lattices.[13] Some authors, by analogy, define the hyperbolic sine cardinal function.[14][15][16]
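Returning to the one-dimensional sums listed earlier, the absolutely convergent sum of squares lends itself to a direct numerical spot-check (the tail beyond n = N is positive and bounded by 1/N, so 2×10^5 terms pin down about five decimal places):

```python
import math

# Sum of sinc^2(n) over positive integers, for the unnormalized sinc:
# the exact value is (pi - 1)/2.
target = (math.pi - 1) / 2
partial = sum(math.sin(n) ** 2 / n ** 2 for n in range(1, 200001))
gap = target - partial   # the omitted tail: positive and below 1/200000
```

The slowly and conditionally convergent sums (such as Σ sinc(n) itself) are much less pleasant to verify this way, which is one reason the closed forms are valuable.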
https://en.wikipedia.org/wiki/Tanc_function
In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the unit hyperbola. Also, similarly to how the derivatives of sin(t) and cos(t) are cos(t) and −sin(t) respectively, the derivatives of sinh(t) and cosh(t) are cosh(t) and sinh(t) respectively. Hyperbolic functions are used to express the angle of parallelism in hyperbolic geometry. They are used to express Lorentz boosts as hyperbolic rotations in special relativity. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equation is important in many areas of physics, including electromagnetic theory, heat transfer, and fluid dynamics. The basic hyperbolic functions are the hyperbolic sine (sinh) and the hyperbolic cosine (cosh),[1] from which are derived the hyperbolic tangent (tanh), cotangent (coth), secant (sech), and cosecant (csch),[4] corresponding to the derived trigonometric functions. The inverse hyperbolic functions are arsinh, arcosh, artanh, arcoth, arsech, and arcsch. The hyperbolic functions take a real argument called a hyperbolic angle. The magnitude of a hyperbolic angle is the area of its hyperbolic sector with respect to the hyperbola xy = 1. The hyperbolic functions may be defined in terms of the legs of a right triangle covering this sector. In complex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine are entire functions. As a result, the other hyperbolic functions are meromorphic in the whole complex plane. By the Lindemann–Weierstrass theorem, the hyperbolic functions have a transcendental value for every non-zero algebraic value of the argument.[12] The first known calculation of a hyperbolic trigonometry problem is attributed to Gerardus Mercator when issuing the Mercator map projection circa 1566. 
It requires tabulating solutions to a transcendental equation involving hyperbolic functions.[13] The first to suggest a similarity between the sector of the circle and that of the hyperbola was Isaac Newton in his 1687 Principia Mathematica.[14] Roger Cotes suggested modifying the trigonometric functions using the imaginary unit i=−1{\displaystyle i={\sqrt {-1}}} to obtain an oblate spheroid from a prolate one.[14] Hyperbolic functions were formally introduced in 1757 by Vincenzo Riccati.[14][13][15] Riccati used Sc. and Cc. (sinus/cosinus circulare) to refer to circular functions and Sh. and Ch. (sinus/cosinus hyperbolico) to refer to hyperbolic functions.[14] As early as 1759, Daviet de Foncenex showed the interchangeability of the trigonometric and hyperbolic functions using the imaginary unit and extended de Moivre's formula to hyperbolic functions.[15][14] During the 1760s, Johann Heinrich Lambert systematized the use of these functions and provided exponential expressions in various publications.[14][15] Lambert credited Riccati for the terminology and names of the functions, but altered the abbreviations to those used today.[15][16] There are various equivalent ways to define the hyperbolic functions. In terms of the exponential function:[1][4]sinh⁡x=ex−e−x2,cosh⁡x=ex+e−x2.{\displaystyle \sinh x={\frac {e^{x}-e^{-x}}{2}},\qquad \cosh x={\frac {e^{x}+e^{-x}}{2}}.} The hyperbolic functions may also be defined as solutions of differential equations: the hyperbolic sine and cosine are the solution (s, c) of the systemc′(x)=s(x),s′(x)=c(x),{\displaystyle {\begin{aligned}c'(x)&=s(x),\\s'(x)&=c(x),\\\end{aligned}}}with the initial conditionss(0)=0,c(0)=1.{\displaystyle s(0)=0,c(0)=1.}The initial conditions make the solution unique; without them any pair of functions(aex+be−x,aex−be−x){\displaystyle (ae^{x}+be^{-x},ae^{x}-be^{-x})}would be a solution. sinh(x) and cosh(x) are also the unique solutions of the equation f″(x) = f(x), such that f(0) = 1, f′(0) = 0 for the hyperbolic cosine, and f(0) = 0, f′(0) = 1 for the hyperbolic sine. 
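The differential-equation characterization is easy to check numerically. The following Python sketch approximates f″ by a central difference and confirms that cosh and sinh each satisfy f″ = f to within discretization error (the sample points, step size, and tolerance are arbitrary choices):

```python
import math

def d2(f, x, h=1e-4):
    # Central-difference approximation of the second derivative f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# cosh solves f'' = f with f(0) = 1, f'(0) = 0; sinh with f(0) = 0, f'(0) = 1.
pts = [-2.0, -0.5, 0.0, 1.0, 2.5]
cosh_err = max(abs(d2(math.cosh, x) - math.cosh(x)) for x in pts)
sinh_err = max(abs(d2(math.sinh, x) - math.sinh(x)) for x in pts)
```

The initial conditions themselves hold exactly: cosh(0) = 1 and sinh(0) = 0.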
Hyperbolic functions may also be deduced from trigonometric functions with complex arguments:sinh⁡x=−isin⁡(ix),cosh⁡x=cos⁡(ix),{\displaystyle \sinh x=-i\sin(ix),\qquad \cosh x=\cos(ix),}where i is the imaginary unit with i² = −1. The above definitions are related to the exponential definitions via Euler's formula (see § Hyperbolic functions for complex numbers below). It can be shown that the area under the curve of the hyperbolic cosine (over a finite interval) is always equal to the arc length corresponding to that interval:[17]area=∫abcosh⁡xdx=∫ab1+(ddxcosh⁡x)2dx=arc length.{\displaystyle {\text{area}}=\int _{a}^{b}\cosh x\,dx=\int _{a}^{b}{\sqrt {1+\left({\frac {d}{dx}}\cosh x\right)^{2}}}\,dx={\text{arc length.}}} The hyperbolic tangent is the (unique) solution to the differential equation f′ = 1 − f², with f(0) = 0.[18][19] The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule[20] states that one can convert any trigonometric identity (up to but not including sinhs or implied sinhs of 4th degree) for θ{\displaystyle \theta }, 2θ{\displaystyle 2\theta }, 3θ{\displaystyle 3\theta } or θ{\displaystyle \theta } and φ{\displaystyle \varphi } into a hyperbolic identity by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term containing a product of two sinhs. Odd and even functions:sinh⁡(−x)=−sinh⁡xcosh⁡(−x)=cosh⁡x{\displaystyle {\begin{aligned}\sinh(-x)&=-\sinh x\\\cosh(-x)&=\cosh x\end{aligned}}} Hence:tanh⁡(−x)=−tanh⁡xcoth⁡(−x)=−coth⁡xsech⁡(−x)=sech⁡xcsch⁡(−x)=−csch⁡x{\displaystyle {\begin{aligned}\tanh(-x)&=-\tanh x\\\coth(-x)&=-\coth x\\\operatorname {sech} (-x)&=\operatorname {sech} x\\\operatorname {csch} (-x)&=-\operatorname {csch} x\end{aligned}}} Thus, cosh x and sech x are even functions; the others are odd functions. 
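The area/arc-length property of cosh can be verified by quadrature. A minimal sketch using the composite trapezoidal rule (the interval and step count are arbitrary); both integrals should agree with the closed form sinh(b) − sinh(a):

```python
import math

def trapezoid(f, a, b, n=20000):
    # Composite trapezoidal rule for the integral of f over [a, b].
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

a, b = -1.0, 2.0
area = trapezoid(math.cosh, a, b)                                     # area under cosh
arc = trapezoid(lambda x: math.sqrt(1.0 + math.sinh(x) ** 2), a, b)   # arc length of cosh
exact = math.sinh(b) - math.sinh(a)
```

The agreement is no surprise: the arc-length integrand sqrt(1 + sinh²x) simplifies to cosh x pointwise.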
arsech⁡x=arcosh⁡(1x)arcsch⁡x=arsinh⁡(1x)arcoth⁡x=artanh⁡(1x){\displaystyle {\begin{aligned}\operatorname {arsech} x&=\operatorname {arcosh} \left({\frac {1}{x}}\right)\\\operatorname {arcsch} x&=\operatorname {arsinh} \left({\frac {1}{x}}\right)\\\operatorname {arcoth} x&=\operatorname {artanh} \left({\frac {1}{x}}\right)\end{aligned}}} Hyperbolic sine and cosine satisfy:cosh⁡x+sinh⁡x=excosh⁡x−sinh⁡x=e−x{\displaystyle {\begin{aligned}\cosh x+\sinh x&=e^{x}\\\cosh x-\sinh x&=e^{-x}\end{aligned}}} which are analogous toEuler's formula, and cosh2⁡x−sinh2⁡x=1{\displaystyle \cosh ^{2}x-\sinh ^{2}x=1} which is analogous to thePythagorean trigonometric identity. One also hassech2⁡x=1−tanh2⁡xcsch2⁡x=coth2⁡x−1{\displaystyle {\begin{aligned}\operatorname {sech} ^{2}x&=1-\tanh ^{2}x\\\operatorname {csch} ^{2}x&=\coth ^{2}x-1\end{aligned}}} for the other functions. sinh⁡(x+y)=sinh⁡xcosh⁡y+cosh⁡xsinh⁡ycosh⁡(x+y)=cosh⁡xcosh⁡y+sinh⁡xsinh⁡ytanh⁡(x+y)=tanh⁡x+tanh⁡y1+tanh⁡xtanh⁡y{\displaystyle {\begin{aligned}\sinh(x+y)&=\sinh x\cosh y+\cosh x\sinh y\\\cosh(x+y)&=\cosh x\cosh y+\sinh x\sinh y\\\tanh(x+y)&={\frac {\tanh x+\tanh y}{1+\tanh x\tanh y}}\\\end{aligned}}}particularlycosh⁡(2x)=sinh2⁡x+cosh2⁡x=2sinh2⁡x+1=2cosh2⁡x−1sinh⁡(2x)=2sinh⁡xcosh⁡xtanh⁡(2x)=2tanh⁡x1+tanh2⁡x{\displaystyle {\begin{aligned}\cosh(2x)&=\sinh ^{2}{x}+\cosh ^{2}{x}=2\sinh ^{2}x+1=2\cosh ^{2}x-1\\\sinh(2x)&=2\sinh x\cosh x\\\tanh(2x)&={\frac {2\tanh x}{1+\tanh ^{2}x}}\\\end{aligned}}} Also:sinh⁡x+sinh⁡y=2sinh⁡(x+y2)cosh⁡(x−y2)cosh⁡x+cosh⁡y=2cosh⁡(x+y2)cosh⁡(x−y2){\displaystyle {\begin{aligned}\sinh x+\sinh y&=2\sinh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\cosh x+\cosh y&=2\cosh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} sinh⁡(x−y)=sinh⁡xcosh⁡y−cosh⁡xsinh⁡ycosh⁡(x−y)=cosh⁡xcosh⁡y−sinh⁡xsinh⁡ytanh⁡(x−y)=tanh⁡x−tanh⁡y1−tanh⁡xtanh⁡y{\displaystyle {\begin{aligned}\sinh(x-y)&=\sinh x\cosh y-\cosh x\sinh y\\\cosh(x-y)&=\cosh x\cosh y-\sinh x\sinh 
y\\\tanh(x-y)&={\frac {\tanh x-\tanh y}{1-\tanh x\tanh y}}\\\end{aligned}}} Also:[21]sinh⁡x−sinh⁡y=2cosh⁡(x+y2)sinh⁡(x−y2)cosh⁡x−cosh⁡y=2sinh⁡(x+y2)sinh⁡(x−y2){\displaystyle {\begin{aligned}\sinh x-\sinh y&=2\cosh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\cosh x-\cosh y&=2\sinh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} sinh⁡(x2)=sinh⁡x2(cosh⁡x+1)=sgn⁡xcosh⁡x−12cosh⁡(x2)=cosh⁡x+12tanh⁡(x2)=sinh⁡xcosh⁡x+1=sgn⁡xcosh⁡x−1cosh⁡x+1=ex−1ex+1{\displaystyle {\begin{aligned}\sinh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\sqrt {2(\cosh x+1)}}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{2}}}\\[6px]\cosh \left({\frac {x}{2}}\right)&={\sqrt {\frac {\cosh x+1}{2}}}\\[6px]\tanh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\cosh x+1}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{\cosh x+1}}}={\frac {e^{x}-1}{e^{x}+1}}\end{aligned}}} wheresgnis thesign function. Ifx≠ 0, then[22] tanh⁡(x2)=cosh⁡x−1sinh⁡x=coth⁡x−csch⁡x{\displaystyle \tanh \left({\frac {x}{2}}\right)={\frac {\cosh x-1}{\sinh x}}=\coth x-\operatorname {csch} x} sinh2⁡x=12(cosh⁡2x−1)cosh2⁡x=12(cosh⁡2x+1){\displaystyle {\begin{aligned}\sinh ^{2}x&={\tfrac {1}{2}}(\cosh 2x-1)\\\cosh ^{2}x&={\tfrac {1}{2}}(\cosh 2x+1)\end{aligned}}} The following inequality is useful in statistics:[23]cosh⁡(t)≤et2/2.{\displaystyle \operatorname {cosh} (t)\leq e^{t^{2}/2}.} It can be proved by comparing the Taylor series of the two functions term by term. 
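The identities above lend themselves to quick numerical spot checks. The sketch below verifies the addition formula for sinh, the half-argument formula for tanh, and the statistical bound cosh t ≤ exp(t²/2) at a few arbitrary sample points:

```python
import math

pairs = [(-1.3, 0.7), (0.5, 2.2), (1.1, -0.4)]

# Addition formula: sinh(x + y) = sinh x cosh y + cosh x sinh y
add_err = max(abs(math.sinh(x + y)
                  - (math.sinh(x) * math.cosh(y) + math.cosh(x) * math.sinh(y)))
              for x, y in pairs)

# Half-argument formula: tanh(x/2) = sinh x / (cosh x + 1)
half_err = max(abs(math.tanh(x / 2) - math.sinh(x) / (math.cosh(x) + 1))
               for x in [-3.0, -0.2, 0.9, 2.5])

# Bound used in statistics: cosh t <= exp(t^2 / 2)
bound_ok = all(math.cosh(t) <= math.exp(t * t / 2) for t in [-4.0, -1.0, 0.0, 0.5, 3.0])
```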
arsinh⁡(x)=ln⁡(x+x2+1)arcosh⁡(x)=ln⁡(x+x2−1)x≥1artanh⁡(x)=12ln⁡(1+x1−x)|x|<1arcoth⁡(x)=12ln⁡(x+1x−1)|x|>1arsech⁡(x)=ln⁡(1x+1x2−1)=ln⁡(1+1−x2x)0<x≤1arcsch⁡(x)=ln⁡(1x+1x2+1)x≠0{\displaystyle {\begin{aligned}\operatorname {arsinh} (x)&=\ln \left(x+{\sqrt {x^{2}+1}}\right)\\\operatorname {arcosh} (x)&=\ln \left(x+{\sqrt {x^{2}-1}}\right)&&x\geq 1\\\operatorname {artanh} (x)&={\frac {1}{2}}\ln \left({\frac {1+x}{1-x}}\right)&&|x|<1\\\operatorname {arcoth} (x)&={\frac {1}{2}}\ln \left({\frac {x+1}{x-1}}\right)&&|x|>1\\\operatorname {arsech} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}-1}}\right)=\ln \left({\frac {1+{\sqrt {1-x^{2}}}}{x}}\right)&&0<x\leq 1\\\operatorname {arcsch} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}+1}}\right)&&x\neq 0\end{aligned}}} ddxsinh⁡x=cosh⁡xddxcosh⁡x=sinh⁡xddxtanh⁡x=1−tanh2⁡x=sech2⁡x=1cosh2⁡xddxcoth⁡x=1−coth2⁡x=−csch2⁡x=−1sinh2⁡xx≠0ddxsech⁡x=−tanh⁡xsech⁡xddxcsch⁡x=−coth⁡xcsch⁡xx≠0{\displaystyle {\begin{aligned}{\frac {d}{dx}}\sinh x&=\cosh x\\{\frac {d}{dx}}\cosh x&=\sinh x\\{\frac {d}{dx}}\tanh x&=1-\tanh ^{2}x=\operatorname {sech} ^{2}x={\frac {1}{\cosh ^{2}x}}\\{\frac {d}{dx}}\coth x&=1-\coth ^{2}x=-\operatorname {csch} ^{2}x=-{\frac {1}{\sinh ^{2}x}}&&x\neq 0\\{\frac {d}{dx}}\operatorname {sech} x&=-\tanh x\operatorname {sech} x\\{\frac {d}{dx}}\operatorname {csch} x&=-\coth x\operatorname {csch} x&&x\neq 0\end{aligned}}}ddxarsinh⁡x=1x2+1ddxarcosh⁡x=1x2−11<xddxartanh⁡x=11−x2|x|<1ddxarcoth⁡x=11−x21<|x|ddxarsech⁡x=−1x1−x20<x<1ddxarcsch⁡x=−1|x|1+x2x≠0{\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arsinh} x&={\frac {1}{\sqrt {x^{2}+1}}}\\{\frac {d}{dx}}\operatorname {arcosh} x&={\frac {1}{\sqrt {x^{2}-1}}}&&1<x\\{\frac {d}{dx}}\operatorname {artanh} x&={\frac {1}{1-x^{2}}}&&|x|<1\\{\frac {d}{dx}}\operatorname {arcoth} x&={\frac {1}{1-x^{2}}}&&1<|x|\\{\frac {d}{dx}}\operatorname {arsech} x&=-{\frac {1}{x{\sqrt {1-x^{2}}}}}&&0<x<1\\{\frac {d}{dx}}\operatorname {arcsch} x&=-{\frac {1}{|x|{\sqrt 
{1+x^{2}}}}}&&x\neq 0\end{aligned}}} Each of the functionssinhandcoshis equal to itssecond derivative, that is:d2dx2sinh⁡x=sinh⁡x{\displaystyle {\frac {d^{2}}{dx^{2}}}\sinh x=\sinh x}d2dx2cosh⁡x=cosh⁡x.{\displaystyle {\frac {d^{2}}{dx^{2}}}\cosh x=\cosh x\,.} All functions with this property arelinear combinationsofsinhandcosh, in particular theexponential functionsex{\displaystyle e^{x}}ande−x{\displaystyle e^{-x}}.[24] ∫sinh⁡(ax)dx=a−1cosh⁡(ax)+C∫cosh⁡(ax)dx=a−1sinh⁡(ax)+C∫tanh⁡(ax)dx=a−1ln⁡(cosh⁡(ax))+C∫coth⁡(ax)dx=a−1ln⁡|sinh⁡(ax)|+C∫sech⁡(ax)dx=a−1arctan⁡(sinh⁡(ax))+C∫csch⁡(ax)dx=a−1ln⁡|tanh⁡(ax2)|+C=a−1ln⁡|coth⁡(ax)−csch⁡(ax)|+C=−a−1arcoth⁡(cosh⁡(ax))+C{\displaystyle {\begin{aligned}\int \sinh(ax)\,dx&=a^{-1}\cosh(ax)+C\\\int \cosh(ax)\,dx&=a^{-1}\sinh(ax)+C\\\int \tanh(ax)\,dx&=a^{-1}\ln(\cosh(ax))+C\\\int \coth(ax)\,dx&=a^{-1}\ln \left|\sinh(ax)\right|+C\\\int \operatorname {sech} (ax)\,dx&=a^{-1}\arctan(\sinh(ax))+C\\\int \operatorname {csch} (ax)\,dx&=a^{-1}\ln \left|\tanh \left({\frac {ax}{2}}\right)\right|+C=a^{-1}\ln \left|\coth \left(ax\right)-\operatorname {csch} \left(ax\right)\right|+C=-a^{-1}\operatorname {arcoth} \left(\cosh \left(ax\right)\right)+C\end{aligned}}} The following integrals can be proved usinghyperbolic substitution:∫1a2+u2du=arsinh⁡(ua)+C∫1u2−a2du=sgn⁡uarcosh⁡|ua|+C∫1a2−u2du=a−1artanh⁡(ua)+Cu2<a2∫1a2−u2du=a−1arcoth⁡(ua)+Cu2>a2∫1ua2−u2du=−a−1arsech⁡|ua|+C∫1ua2+u2du=−a−1arcsch⁡|ua|+C{\displaystyle {\begin{aligned}\int {{\frac {1}{\sqrt {a^{2}+u^{2}}}}\,du}&=\operatorname {arsinh} \left({\frac {u}{a}}\right)+C\\\int {{\frac {1}{\sqrt {u^{2}-a^{2}}}}\,du}&=\operatorname {sgn} {u}\operatorname {arcosh} \left|{\frac {u}{a}}\right|+C\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {artanh} \left({\frac {u}{a}}\right)+C&&u^{2}<a^{2}\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {arcoth} \left({\frac {u}{a}}\right)+C&&u^{2}>a^{2}\\\int {{\frac {1}{u{\sqrt {a^{2}-u^{2}}}}}\,du}&=-a^{-1}\operatorname {arsech} \left|{\frac 
{u}{a}}\right|+C\\\int {{\frac {1}{u{\sqrt {a^{2}+u^{2}}}}}\,du}&=-a^{-1}\operatorname {arcsch} \left|{\frac {u}{a}}\right|+C\end{aligned}}} whereCis theconstant of integration. It is possible to express explicitly theTaylor seriesat zero (or theLaurent series, if the function is not defined at zero) of the above functions. sinh⁡x=x+x33!+x55!+x77!+⋯=∑n=0∞x2n+1(2n+1)!{\displaystyle \sinh x=x+{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}+{\frac {x^{7}}{7!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{(2n+1)!}}}This series isconvergentfor everycomplexvalue ofx. Since the functionsinhxisodd, only odd exponents forxoccur in its Taylor series. cosh⁡x=1+x22!+x44!+x66!+⋯=∑n=0∞x2n(2n)!{\displaystyle \cosh x=1+{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}+{\frac {x^{6}}{6!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}}This series isconvergentfor everycomplexvalue ofx. Since the functioncoshxiseven, only even exponents forxoccur in its Taylor series. The sum of the sinh and cosh series is theinfinite seriesexpression of theexponential function. 
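Both the logarithmic formulas for the inverse functions and the Taylor series above can be checked against the standard library. A sketch (sample points and term counts are arbitrary; `math.asinh` serves as the reference implementation):

```python
import math

def sinh_series(x, terms=25):
    # Partial sum of sum_{n>=0} x^(2n+1) / (2n+1)!
    return sum(x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def cosh_series(x, terms=25):
    # Partial sum of sum_{n>=0} x^(2n) / (2n)!
    return sum(x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

def arsinh_log(x):
    # arsinh(x) = ln(x + sqrt(x^2 + 1))
    return math.log(x + math.sqrt(x * x + 1.0))

xs = [-3.0, -1.0, 0.5, 2.0]
series_err = max(max(abs(sinh_series(x) - math.sinh(x)),
                     abs(cosh_series(x) - math.cosh(x))) for x in xs)
log_err = max(abs(arsinh_log(x) - math.asinh(x)) for x in [-5.0, -0.3, 0.0, 1.7, 10.0])
```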
The following series are followed by a description of a subset of their domain of convergence, where the series is convergent and its sum equals the function.tanh⁡x=x−x33+2x515−17x7315+⋯=∑n=1∞22n(22n−1)B2nx2n−1(2n)!,|x|<π2coth⁡x=x−1+x3−x345+2x5945+⋯=∑n=0∞22nB2nx2n−1(2n)!,0<|x|<πsech⁡x=1−x22+5x424−61x6720+⋯=∑n=0∞E2nx2n(2n)!,|x|<π2csch⁡x=x−1−x6+7x3360−31x515120+⋯=∑n=0∞2(1−22n−1)B2nx2n−1(2n)!,0<|x|<π{\displaystyle {\begin{aligned}\tanh x&=x-{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}-{\frac {17x^{7}}{315}}+\cdots =\sum _{n=1}^{\infty }{\frac {2^{2n}(2^{2n}-1)B_{2n}x^{2n-1}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\coth x&=x^{-1}+{\frac {x}{3}}-{\frac {x^{3}}{45}}+{\frac {2x^{5}}{945}}+\cdots =\sum _{n=0}^{\infty }{\frac {2^{2n}B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \\\operatorname {sech} x&=1-{\frac {x^{2}}{2}}+{\frac {5x^{4}}{24}}-{\frac {61x^{6}}{720}}+\cdots =\sum _{n=0}^{\infty }{\frac {E_{2n}x^{2n}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\operatorname {csch} x&=x^{-1}-{\frac {x}{6}}+{\frac {7x^{3}}{360}}-{\frac {31x^{5}}{15120}}+\cdots =\sum _{n=0}^{\infty }{\frac {2(1-2^{2n-1})B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \end{aligned}}} where B2n denotes the 2n-th Bernoulli number and E2n the 2n-th Euler number. The hyperbolic functions represent an expansion of trigonometry beyond the circular functions. Both types depend on an argument, either circular angle or hyperbolic angle. Since the area of a circular sector with radius r and angle u (in radians) is r²u/2, it will be equal to u when r = √2. In the diagram, such a circle is tangent to the hyperbola xy = 1 at (1,1). The yellow sector depicts an area and angle magnitude. Similarly, the yellow and red regions together depict a hyperbolic sector with area corresponding to hyperbolic angle magnitude. The legs of the two right triangles with hypotenuse on the ray defining the angles are of length √2 times the circular and hyperbolic functions. 
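The leading terms of the tanh and coth expansions can be sanity-checked against the library functions for small arguments; the tolerances below reflect the size of the first omitted term at the sample points:

```python
import math

def tanh_poly(x):
    # First four terms: x - x^3/3 + 2x^5/15 - 17x^7/315, valid for |x| < pi/2.
    return x - x**3 / 3 + 2 * x**5 / 15 - 17 * x**7 / 315

def coth_poly(x):
    # First four terms: 1/x + x/3 - x^3/45 + 2x^5/945, valid for 0 < |x| < pi.
    return 1 / x + x / 3 - x**3 / 45 + 2 * x**5 / 945

xs = [-0.3, 0.1, 0.25]
tanh_err = max(abs(tanh_poly(x) - math.tanh(x)) for x in xs)
coth_err = max(abs(coth_poly(x) - math.cosh(x) / math.sinh(x)) for x in xs)
```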
The hyperbolic angle is aninvariant measurewith respect to thesqueeze mapping, just as the circular angle is invariant under rotation.[25] TheGudermannian functiongives a direct relationship between the circular functions and the hyperbolic functions that does not involve complex numbers. The graph of the functionacosh(x/a)is thecatenary, the curve formed by a uniform flexible chain, hanging freely between two fixed points under uniform gravity. The decomposition of the exponential function in itseven and odd partsgives the identitiesex=cosh⁡x+sinh⁡x,{\displaystyle e^{x}=\cosh x+\sinh x,}ande−x=cosh⁡x−sinh⁡x.{\displaystyle e^{-x}=\cosh x-\sinh x.}Combined withEuler's formulaeix=cos⁡x+isin⁡x,{\displaystyle e^{ix}=\cos x+i\sin x,}this givesex+iy=(cosh⁡x+sinh⁡x)(cos⁡y+isin⁡y){\displaystyle e^{x+iy}=(\cosh x+\sinh x)(\cos y+i\sin y)}for thegeneral complex exponential function. Additionally,ex=1+tanh⁡x1−tanh⁡x=1+tanh⁡x21−tanh⁡x2{\displaystyle e^{x}={\sqrt {\frac {1+\tanh x}{1-\tanh x}}}={\frac {1+\tanh {\frac {x}{2}}}{1-\tanh {\frac {x}{2}}}}} Since theexponential functioncan be defined for anycomplexargument, we can also extend the definitions of the hyperbolic functions to complex arguments. The functionssinhzandcoshzare thenholomorphic. 
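The decomposition and the tanh half-argument expression for the exponential can be confirmed directly (sample points are arbitrary):

```python
import math

def exp_via_tanh(x):
    # e^x = (1 + tanh(x/2)) / (1 - tanh(x/2))
    t = math.tanh(x / 2.0)
    return (1.0 + t) / (1.0 - t)

xs = [-2.0, -0.5, 0.0, 1.0, 3.0]
tanh_err = max(abs(exp_via_tanh(x) - math.exp(x)) for x in xs)

# e^x = cosh x + sinh x (even/odd decomposition of the exponential)
parts_err = max(abs((math.cosh(x) + math.sinh(x)) - math.exp(x)) for x in xs)
```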
Relationships to ordinary trigonometric functions are given byEuler's formulafor complex numbers:eix=cos⁡x+isin⁡xe−ix=cos⁡x−isin⁡x{\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x\\e^{-ix}&=\cos x-i\sin x\end{aligned}}}so:cosh⁡(ix)=12(eix+e−ix)=cos⁡xsinh⁡(ix)=12(eix−e−ix)=isin⁡xcosh⁡(x+iy)=cosh⁡(x)cos⁡(y)+isinh⁡(x)sin⁡(y)sinh⁡(x+iy)=sinh⁡(x)cos⁡(y)+icosh⁡(x)sin⁡(y)tanh⁡(ix)=itan⁡xcosh⁡x=cos⁡(ix)sinh⁡x=−isin⁡(ix)tanh⁡x=−itan⁡(ix){\displaystyle {\begin{aligned}\cosh(ix)&={\frac {1}{2}}\left(e^{ix}+e^{-ix}\right)=\cos x\\\sinh(ix)&={\frac {1}{2}}\left(e^{ix}-e^{-ix}\right)=i\sin x\\\cosh(x+iy)&=\cosh(x)\cos(y)+i\sinh(x)\sin(y)\\\sinh(x+iy)&=\sinh(x)\cos(y)+i\cosh(x)\sin(y)\\\tanh(ix)&=i\tan x\\\cosh x&=\cos(ix)\\\sinh x&=-i\sin(ix)\\\tanh x&=-i\tan(ix)\end{aligned}}} Thus, hyperbolic functions areperiodicwith respect to the imaginary component, with period2πi{\displaystyle 2\pi i}(πi{\displaystyle \pi i}for hyperbolic tangent and cotangent).
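These complex-argument relations, and the stated periodicity, can be exercised with `cmath` (the point z is an arbitrary choice):

```python
import cmath
import math

x, y = 0.8, 2.3
z = complex(x, y)

# cosh(ix) = cos x and sinh(ix) = i sin x
e1 = abs(cmath.cosh(1j * x) - math.cos(x))
e2 = abs(cmath.sinh(1j * x) - 1j * math.sin(x))

# sinh(x + iy) = sinh x cos y + i cosh x sin y
e3 = abs(cmath.sinh(z) - (math.sinh(x) * math.cos(y) + 1j * math.cosh(x) * math.sin(y)))

# Periodicity: period 2*pi*i for cosh, pi*i for tanh
e4 = abs(cmath.cosh(z + 2j * math.pi) - cmath.cosh(z))
e5 = abs(cmath.tanh(z + 1j * math.pi) - cmath.tanh(z))
```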
https://en.wikipedia.org/wiki/Tanhc_function
In mathematics, physics and engineering, the sinc function (/ˈsɪŋk/ SINK), denoted by sinc(x), has two forms, normalized and unnormalized.[1] In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 bysinc⁡(x)=sin⁡xx.{\displaystyle \operatorname {sinc} (x)={\frac {\sin x}{x}}.} Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x).[2] In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 bysinc⁡(x)=sin⁡(πx)πx.{\displaystyle \operatorname {sinc} (x)={\frac {\sin(\pi x)}{\pi x}}.} In either case, the value at x = 0 is defined to be the limiting valuesinc⁡(0):=limx→0sin⁡(ax)ax=1{\displaystyle \operatorname {sinc} (0):=\lim _{x\to 0}{\frac {\sin(ax)}{ax}}=1}for all real a ≠ 0 (the limit can be proven using the squeeze theorem). The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of π). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of x. The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal. The only difference between the two definitions is in the scaling of the independent variable (the x axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function. The function has also been called the cardinal sine or sine cardinal function.[3][4] The term sinc was introduced by Philip M. 
Woodward in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own",[5] and in his 1953 book Probability and Information Theory, with Applications to Radar.[6][7] The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's formula) for the zeroth-order spherical Bessel function of the first kind. The zero crossings of the unnormalized sinc are at non-zero integer multiples of π, while zero crossings of the normalized sinc occur at non-zero integers. The local maxima and minima of the unnormalized sinc correspond to its intersections with the cosine function. That is, sin(ξ)/ξ = cos(ξ) for all points ξ where the derivative of sin(x)/x is zero and thus a local extremum is reached. This follows from the derivative of the sinc function:ddxsinc⁡(x)={cos⁡(x)−sinc⁡(x)x,x≠00,x=0.{\displaystyle {\frac {d}{dx}}\operatorname {sinc} (x)={\begin{cases}{\dfrac {\cos(x)-\operatorname {sinc} (x)}{x}},&x\neq 0\\0,&x=0\end{cases}}.} The first few terms of the infinite series for the x coordinate of the n-th extremum with positive x coordinate are[citation needed]xn=q−q−1−23q−3−1315q−5−146105q−7−⋯,{\displaystyle x_{n}=q-q^{-1}-{\frac {2}{3}}q^{-3}-{\frac {13}{15}}q^{-5}-{\frac {146}{105}}q^{-7}-\cdots ,}whereq=(n+12)π,{\displaystyle q=\left(n+{\frac {1}{2}}\right)\pi ,}and where odd n lead to a local minimum, and even n to a local maximum. Because of symmetry around the y axis, there exist extrema with x coordinates −xn. In addition, there is an absolute maximum at ξ0 = (0, 1). 
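The extremum series can be checked numerically: at each extremum the derivative formula forces cos(x) = sin(x)/x (equivalently tan x = x). A sketch, with a loose tolerance since the truncated series is only asymptotic:

```python
import math

def extremum_x(n):
    # Leading terms of the series for the n-th positive extremum of sin(x)/x.
    q = (n + 0.5) * math.pi
    return q - 1 / q - (2 / 3) * q**-3 - (13 / 15) * q**-5 - (146 / 105) * q**-7

# At an extremum, cos(x) - sin(x)/x should vanish.
residual = max(abs(math.cos(x) - math.sin(x) / x)
               for x in (extremum_x(n) for n in range(1, 8)))
```

The residual is largest for n = 1 (the first extremum near x ≈ 4.493) and shrinks rapidly as n grows.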
The normalized sinc function has a simple representation as theinfinite product:sin⁡(πx)πx=∏n=1∞(1−x2n2){\displaystyle {\frac {\sin(\pi x)}{\pi x}}=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{n^{2}}}\right)} and is related to thegamma functionΓ(x)throughEuler's reflection formula:sin⁡(πx)πx=1Γ(1+x)Γ(1−x).{\displaystyle {\frac {\sin(\pi x)}{\pi x}}={\frac {1}{\Gamma (1+x)\Gamma (1-x)}}.} Eulerdiscovered[8]thatsin⁡(x)x=∏n=1∞cos⁡(x2n),{\displaystyle {\frac {\sin(x)}{x}}=\prod _{n=1}^{\infty }\cos \left({\frac {x}{2^{n}}}\right),}and because of the product-to-sum identity[9] ∏n=1kcos⁡(x2n)=12k−1∑n=12k−1cos⁡(n−1/22k−1x),∀k≥1,{\displaystyle \prod _{n=1}^{k}\cos \left({\frac {x}{2^{n}}}\right)={\frac {1}{2^{k-1}}}\sum _{n=1}^{2^{k-1}}\cos \left({\frac {n-1/2}{2^{k-1}}}x\right),\quad \forall k\geq 1,}Euler's product can be recast as a sumsin⁡(x)x=limN→∞1N∑n=1Ncos⁡(n−1/2Nx).{\displaystyle {\frac {\sin(x)}{x}}=\lim _{N\to \infty }{\frac {1}{N}}\sum _{n=1}^{N}\cos \left({\frac {n-1/2}{N}}x\right).} Thecontinuous Fourier transformof the normalized sinc (to ordinary frequency) isrect(f):∫−∞∞sinc⁡(t)e−i2πftdt=rect⁡(f),{\displaystyle \int _{-\infty }^{\infty }\operatorname {sinc} (t)\,e^{-i2\pi ft}\,dt=\operatorname {rect} (f),}where therectangular functionis 1 for argument between −⁠1/2⁠and⁠1/2⁠, and zero otherwise. This corresponds to the fact that thesinc filteris the ideal (brick-wall, meaning rectangularfrequency response)low-pass filter. 
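Both the infinite product and the reflection-formula identity are easy to test. The truncated product converges only slowly (the tail contributes on the order of x²/n_max), so its tolerance is loose, while the gamma-function form should agree to near machine precision:

```python
import math

def sinc_product(x, n_max=20000):
    # Truncated infinite product prod_{n=1}^{n_max} (1 - x^2 / n^2).
    p = 1.0
    for n in range(1, n_max + 1):
        p *= 1.0 - (x * x) / (n * n)
    return p

def sinc_reflection(x):
    # Euler's reflection formula: sin(pi x)/(pi x) = 1 / (Gamma(1+x) Gamma(1-x)).
    return 1.0 / (math.gamma(1.0 + x) * math.gamma(1.0 - x))

x = 0.37  # arbitrary non-integer sample point
exact = math.sin(math.pi * x) / (math.pi * x)
```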
This Fourier integral, including the special case∫−∞∞sin⁡(πx)πxdx=rect⁡(0)=1{\displaystyle \int _{-\infty }^{\infty }{\frac {\sin(\pi x)}{\pi x}}\,dx=\operatorname {rect} (0)=1}is animproper integral(seeDirichlet integral) and not a convergentLebesgue integral, as∫−∞∞|sin⁡(πx)πx|dx=+∞.{\displaystyle \int _{-\infty }^{\infty }\left|{\frac {\sin(\pi x)}{\pi x}}\right|\,dx=+\infty .} The normalized sinc function has properties that make it ideal in relationship tointerpolationofsampledbandlimitedfunctions: Other properties of the two sinc functions include: The normalized sinc function can be used as anascent delta function, meaning that the followingweak limitholds: lima→0sin⁡(πxa)πx=lima→01asinc⁡(xa)=δ(x).{\displaystyle \lim _{a\to 0}{\frac {\sin \left({\frac {\pi x}{a}}\right)}{\pi x}}=\lim _{a\to 0}{\frac {1}{a}}\operatorname {sinc} \left({\frac {x}{a}}\right)=\delta (x).} This is not an ordinary limit, since the left side does not converge. Rather, it means that lima→0∫−∞∞1asinc⁡(xa)φ(x)dx=φ(0){\displaystyle \lim _{a\to 0}\int _{-\infty }^{\infty }{\frac {1}{a}}\operatorname {sinc} \left({\frac {x}{a}}\right)\varphi (x)\,dx=\varphi (0)} for everySchwartz function, as can be seen from theFourier inversion theorem. In the above expression, asa→ 0, the number of oscillations per unit length of the sinc function approaches infinity. Nevertheless, the expression always oscillates inside an envelope of±⁠1/πx⁠, regardless of the value ofa. This complicates the informal picture ofδ(x)as being zero for allxexcept at the pointx= 0, and illustrates the problem of thinking of the delta function as a function rather than as a distribution. A similar situation is found in theGibbs phenomenon. 
We can also make an immediate connection with the standard Dirac representation ofδ(x){\displaystyle \delta (x)}by writingb=1/a{\displaystyle b=1/a}and limb→∞sin⁡(bπx)πx=limb→∞12π∫−bπbπeikxdk=12π∫−∞∞eikxdk=δ(x),{\displaystyle \lim _{b\to \infty }{\frac {\sin \left(b\pi x\right)}{\pi x}}=\lim _{b\to \infty }{\frac {1}{2\pi }}\int _{-b\pi }^{b\pi }e^{ikx}dk={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ikx}dk=\delta (x),} which makes clear the recovery of the delta as an infinite bandwidth limit of the integral. All sums in this section refer to the unnormalized sinc function. The sum ofsinc(n)over integernfrom 1 to∞equals⁠π− 1/2⁠: ∑n=1∞sinc⁡(n)=sinc⁡(1)+sinc⁡(2)+sinc⁡(3)+sinc⁡(4)+⋯=π−12.{\displaystyle \sum _{n=1}^{\infty }\operatorname {sinc} (n)=\operatorname {sinc} (1)+\operatorname {sinc} (2)+\operatorname {sinc} (3)+\operatorname {sinc} (4)+\cdots ={\frac {\pi -1}{2}}.} The sum of the squares also equals⁠π− 1/2⁠:[10][11] ∑n=1∞sinc2⁡(n)=sinc2⁡(1)+sinc2⁡(2)+sinc2⁡(3)+sinc2⁡(4)+⋯=π−12.{\displaystyle \sum _{n=1}^{\infty }\operatorname {sinc} ^{2}(n)=\operatorname {sinc} ^{2}(1)+\operatorname {sinc} ^{2}(2)+\operatorname {sinc} ^{2}(3)+\operatorname {sinc} ^{2}(4)+\cdots ={\frac {\pi -1}{2}}.} When the signs of theaddendsalternate and begin with +, the sum equals⁠1/2⁠:∑n=1∞(−1)n+1sinc⁡(n)=sinc⁡(1)−sinc⁡(2)+sinc⁡(3)−sinc⁡(4)+⋯=12.{\displaystyle \sum _{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} (n)=\operatorname {sinc} (1)-\operatorname {sinc} (2)+\operatorname {sinc} (3)-\operatorname {sinc} (4)+\cdots ={\frac {1}{2}}.} The alternating sums of the squares and cubes also equal⁠1/2⁠:[12]∑n=1∞(−1)n+1sinc2⁡(n)=sinc2⁡(1)−sinc2⁡(2)+sinc2⁡(3)−sinc2⁡(4)+⋯=12,{\displaystyle \sum _{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} ^{2}(n)=\operatorname {sinc} ^{2}(1)-\operatorname {sinc} ^{2}(2)+\operatorname {sinc} ^{2}(3)-\operatorname {sinc} ^{2}(4)+\cdots ={\frac {1}{2}},} ∑n=1∞(−1)n+1sinc3⁡(n)=sinc3⁡(1)−sinc3⁡(2)+sinc3⁡(3)−sinc3⁡(4)+⋯=12.{\displaystyle \sum 
_{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} ^{3}(n)=\operatorname {sinc} ^{3}(1)-\operatorname {sinc} ^{3}(2)+\operatorname {sinc} ^{3}(3)-\operatorname {sinc} ^{3}(4)+\cdots ={\frac {1}{2}}.} TheTaylor seriesof the unnormalizedsincfunction can be obtained from that of the sine (which also yields its value of 1 atx= 0):sin⁡xx=∑n=0∞(−1)nx2n(2n+1)!=1−x23!+x45!−x67!+⋯{\displaystyle {\frac {\sin x}{x}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n}}{(2n+1)!}}=1-{\frac {x^{2}}{3!}}+{\frac {x^{4}}{5!}}-{\frac {x^{6}}{7!}}+\cdots } The series converges for allx. The normalized version follows easily:sin⁡πxπx=1−π2x23!+π4x45!−π6x67!+⋯{\displaystyle {\frac {\sin \pi x}{\pi x}}=1-{\frac {\pi ^{2}x^{2}}{3!}}+{\frac {\pi ^{4}x^{4}}{5!}}-{\frac {\pi ^{6}x^{6}}{7!}}+\cdots } Eulerfamously compared this series to the expansion of the infinite product form to solve theBasel problem. The product of 1-D sinc functions readily provides amultivariatesinc function for the square Cartesian grid (lattice):sincC(x,y) = sinc(x) sinc(y), whoseFourier transformis theindicator functionof a square in the frequency space (i.e., the brick wall defined in 2-D space). The sinc function for a non-Cartesianlattice(e.g.,hexagonal lattice) is a function whoseFourier transformis theindicator functionof theBrillouin zoneof that lattice. For example, the sinc function for the hexagonal lattice is a function whoseFourier transformis theindicator functionof the unit hexagon in the frequency space. For a non-Cartesian lattice this function can not be obtained by a simple tensor product. However, the explicit formula for the sinc function for thehexagonal,body-centered cubic,face-centered cubicand other higher-dimensional lattices can be explicitly derived[13]using the geometric properties of Brillouin zones and their connection tozonotopes. 
For example, ahexagonal latticecan be generated by the (integer)linear spanof the vectorsu1=[1232]andu2=[12−32].{\displaystyle \mathbf {u} _{1}={\begin{bmatrix}{\frac {1}{2}}\\{\frac {\sqrt {3}}{2}}\end{bmatrix}}\quad {\text{and}}\quad \mathbf {u} _{2}={\begin{bmatrix}{\frac {1}{2}}\\-{\frac {\sqrt {3}}{2}}\end{bmatrix}}.} Denotingξ1=23u1,ξ2=23u2,ξ3=−23(u1+u2),x=[xy],{\displaystyle {\boldsymbol {\xi }}_{1}={\tfrac {2}{3}}\mathbf {u} _{1},\quad {\boldsymbol {\xi }}_{2}={\tfrac {2}{3}}\mathbf {u} _{2},\quad {\boldsymbol {\xi }}_{3}=-{\tfrac {2}{3}}(\mathbf {u} _{1}+\mathbf {u} _{2}),\quad \mathbf {x} ={\begin{bmatrix}x\\y\end{bmatrix}},}one can derive[13]the sinc function for this hexagonal lattice assincH⁡(x)=13(cos⁡(πξ1⋅x)sinc⁡(ξ2⋅x)sinc⁡(ξ3⋅x)+cos⁡(πξ2⋅x)sinc⁡(ξ3⋅x)sinc⁡(ξ1⋅x)+cos⁡(πξ3⋅x)sinc⁡(ξ1⋅x)sinc⁡(ξ2⋅x)).{\displaystyle {\begin{aligned}\operatorname {sinc} _{\text{H}}(\mathbf {x} )={\tfrac {1}{3}}{\big (}&\cos \left(\pi {\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\\&{}+\cos \left(\pi {\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\\&{}+\cos \left(\pi {\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right){\big )}.\end{aligned}}} This construction can be used to designLanczos windowfor general multidimensional lattices.[13] Some authors, by analogy, define the hyperbolic sine cardinal function.[14][15][16]
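For concreteness, the hexagonal formula can be evaluated directly. The sketch below assumes sinc denotes the normalized sinc, sin(πt)/(πt); with that convention sincH equals 1 at the origin and vanishes at the nonzero lattice points, as an interpolation kernel should:

```python
import math

def nsinc(t):
    # Normalized sinc: sin(pi t) / (pi t), with value 1 at t = 0.
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

u1 = (0.5, math.sqrt(3.0) / 2.0)
u2 = (0.5, -math.sqrt(3.0) / 2.0)
# xi_1 = (2/3) u1, xi_2 = (2/3) u2, xi_3 = -(2/3)(u1 + u2)
xi = [(2.0 / 3.0 * u1[0], 2.0 / 3.0 * u1[1]),
      (2.0 / 3.0 * u2[0], 2.0 / 3.0 * u2[1]),
      (-2.0 / 3.0 * (u1[0] + u2[0]), -2.0 / 3.0 * (u1[1] + u2[1]))]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def sinc_hex(x):
    # (1/3) * sum over cyclic (i, j, k) of cos(pi xi_i.x) sinc(xi_j.x) sinc(xi_k.x)
    total = 0.0
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        total += (math.cos(math.pi * dot(xi[i], x))
                  * nsinc(dot(xi[j], x)) * nsinc(dot(xi[k], x)))
    return total / 3.0
```

Evaluating at the origin gives 1, and at the lattice points u1 and u1 + u2 the three terms cancel exactly.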
https://en.wikipedia.org/wiki/Sinhc_function
Inmathematics,hyperbolic functionsare analogues of the ordinarytrigonometric functions, but defined using thehyperbolarather than thecircle. Just as the points(cost, sint)form acircle with a unit radius, the points(cosht, sinht)form the right half of theunit hyperbola. Also, similarly to how the derivatives ofsin(t)andcos(t)arecos(t)and–sin(t)respectively, the derivatives ofsinh(t)andcosh(t)arecosh(t)andsinh(t)respectively. Hyperbolic functions are used to express theangle of parallelisminhyperbolic geometry. They are used to expressLorentz boostsashyperbolic rotationsinspecial relativity. They also occur in the solutions of many lineardifferential equations(such as the equation defining acatenary),cubic equations, andLaplace's equationinCartesian coordinates.Laplace's equationsare important in many areas ofphysics, includingelectromagnetic theory,heat transfer, andfluid dynamics. The basic hyperbolic functions are:[1] from which are derived:[4] corresponding to the derived trigonometric functions. Theinverse hyperbolic functionsare: The hyperbolic functions take arealargumentcalled ahyperbolic angle. The magnitude of a hyperbolic angle is theareaof itshyperbolic sectortoxy= 1. The hyperbolic functions may be defined in terms of thelegs of a right trianglecovering this sector. Incomplex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine areentire functions. As a result, the other hyperbolic functions aremeromorphicin the whole complex plane. ByLindemann–Weierstrass theorem, the hyperbolic functions have atranscendental valuefor every non-zeroalgebraic valueof the argument.[12] The first known calculation of a hyperbolic trigonometry problem is attributed toGerardus Mercatorwhen issuing theMercator map projectioncirca 1566. 
It requires tabulating solutions to a transcendental equation involving hyperbolic functions.[13] The first to suggest a similarity between the sector of the circle and that of the hyperbola was Isaac Newton in his 1687 Principia Mathematica.[14] Roger Cotes suggested modifying the trigonometric functions using the imaginary unit i=−1{\displaystyle i={\sqrt {-1}}} to obtain an oblate spheroid from a prolate one.[14] Hyperbolic functions were formally introduced in 1757 by Vincenzo Riccati.[14][13][15] Riccati used Sc. and Cc. (sinus/cosinus circulare) to refer to circular functions and Sh. and Ch. (sinus/cosinus hyperbolico) to refer to hyperbolic functions.[14] As early as 1759, Daviet de Foncenex showed the interchangeability of the trigonometric and hyperbolic functions using the imaginary unit and extended de Moivre's formula to hyperbolic functions.[15][14] During the 1760s, Johann Heinrich Lambert systematized the use of these functions and provided exponential expressions in various publications.[14][15] Lambert credited Riccati for the terminology and names of the functions, but altered the abbreviations to those used today.[15][16] There are various equivalent ways to define the hyperbolic functions. In terms of the exponential function:[1][4] sinh x = (ex − e−x)/2 and cosh x = (ex + e−x)/2. The hyperbolic functions may be defined as solutions of differential equations: the hyperbolic sine and cosine are the solution (s, c) of the system c′(x)=s(x),s′(x)=c(x),{\displaystyle {\begin{aligned}c'(x)&=s(x),\\s'(x)&=c(x),\\\end{aligned}}} with the initial conditions s(0)=0,c(0)=1.{\displaystyle s(0)=0,c(0)=1.} The initial conditions make the solution unique; without them any pair of functions (aex+be−x,aex−be−x){\displaystyle (ae^{x}+be^{-x},ae^{x}-be^{-x})} would be a solution. sinh(x) and cosh(x) are also the unique solutions of the equation f″(x) = f(x), such that f(0) = 1, f′(0) = 0 for the hyperbolic cosine, and f(0) = 0, f′(0) = 1 for the hyperbolic sine.
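The exponential characterization is easy to exercise numerically. A short Python sketch building the six hyperbolic functions from sinh x = (e^x − e^(−x))/2 and cosh x = (e^x + e^(−x))/2 and comparing them with the standard library:

```python
import math

# The six hyperbolic functions built from the exponential definitions.
def sinh(x): return (math.exp(x) - math.exp(-x)) / 2
def cosh(x): return (math.exp(x) + math.exp(-x)) / 2
def tanh(x): return sinh(x) / cosh(x)
def coth(x): return cosh(x) / sinh(x)    # x != 0
def sech(x): return 1 / cosh(x)
def csch(x): return 1 / sinh(x)          # x != 0

# Agreement with the standard library's implementations.
for x in (-2.0, -0.5, 0.3, 1.7):
    assert math.isclose(sinh(x), math.sinh(x))
    assert math.isclose(cosh(x), math.cosh(x))
    assert math.isclose(tanh(x), math.tanh(x))
```

(For large |x| a naive (e^x − e^(−x))/2 overflows earlier than a careful library implementation; the sketch is for illustration, not production use.)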
Hyperbolic functions may also be deduced from trigonometric functions with complex arguments: where i is the imaginary unit with i2= −1. The above definitions are related to the exponential definitions via Euler's formula (see § Hyperbolic functions for complex numbers below). It can be shown that the area under the curve of the hyperbolic cosine (over a finite interval) is always equal to the arc length corresponding to that interval:[17]area=∫abcosh⁡xdx=∫ab1+(ddxcosh⁡x)2dx=arc length.{\displaystyle {\text{area}}=\int _{a}^{b}\cosh x\,dx=\int _{a}^{b}{\sqrt {1+\left({\frac {d}{dx}}\cosh x\right)^{2}}}\,dx={\text{arc length.}}} The hyperbolic tangent is the (unique) solution to the differential equation f′ = 1 − f2, with f(0) = 0.[18][19] The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule[20] states that one can convert any trigonometric identity (up to but not including sinhs or implied sinhs of 4th degree) for θ{\displaystyle \theta }, 2θ{\displaystyle 2\theta }, 3θ{\displaystyle 3\theta } or θ{\displaystyle \theta } and φ{\displaystyle \varphi } into a hyperbolic identity, by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term containing a product of two sinhs. Odd and even functions:sinh⁡(−x)=−sinh⁡xcosh⁡(−x)=cosh⁡x{\displaystyle {\begin{aligned}\sinh(-x)&=-\sinh x\\\cosh(-x)&=\cosh x\end{aligned}}} Hence:tanh⁡(−x)=−tanh⁡xcoth⁡(−x)=−coth⁡xsech⁡(−x)=sech⁡xcsch⁡(−x)=−csch⁡x{\displaystyle {\begin{aligned}\tanh(-x)&=-\tanh x\\\coth(-x)&=-\coth x\\\operatorname {sech} (-x)&=\operatorname {sech} x\\\operatorname {csch} (-x)&=-\operatorname {csch} x\end{aligned}}} Thus, cosh x and sech x are even functions; the others are odd functions.
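The area-equals-arc-length property of the hyperbolic cosine stated above can be checked by numerical integration; a small sketch using composite Simpson's rule:

```python
import math

# Composite Simpson's rule on [a, b] with n (even) subintervals.
def simpson(f, a, b, n=1000):
    h = (b - a) / n
    total = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                              for i in range(1, n))
    return total * h / 3

a, b = -1.0, 2.0
area = simpson(math.cosh, a, b)                                  # ∫ cosh x dx
arc = simpson(lambda x: math.sqrt(1 + math.sinh(x) ** 2), a, b)  # arc length

# Both equal sinh(b) - sinh(a), since sqrt(1 + sinh^2 x) = cosh x.
print(area, arc)   # both ≈ 4.802062
```

The agreement is immediate because the arc-length integrand simplifies to cosh x itself; the numerical check just makes that concrete.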
arsech⁡x=arcosh⁡(1x)arcsch⁡x=arsinh⁡(1x)arcoth⁡x=artanh⁡(1x){\displaystyle {\begin{aligned}\operatorname {arsech} x&=\operatorname {arcosh} \left({\frac {1}{x}}\right)\\\operatorname {arcsch} x&=\operatorname {arsinh} \left({\frac {1}{x}}\right)\\\operatorname {arcoth} x&=\operatorname {artanh} \left({\frac {1}{x}}\right)\end{aligned}}} Hyperbolic sine and cosine satisfy:cosh⁡x+sinh⁡x=excosh⁡x−sinh⁡x=e−x{\displaystyle {\begin{aligned}\cosh x+\sinh x&=e^{x}\\\cosh x-\sinh x&=e^{-x}\end{aligned}}} which are analogous toEuler's formula, and cosh2⁡x−sinh2⁡x=1{\displaystyle \cosh ^{2}x-\sinh ^{2}x=1} which is analogous to thePythagorean trigonometric identity. One also hassech2⁡x=1−tanh2⁡xcsch2⁡x=coth2⁡x−1{\displaystyle {\begin{aligned}\operatorname {sech} ^{2}x&=1-\tanh ^{2}x\\\operatorname {csch} ^{2}x&=\coth ^{2}x-1\end{aligned}}} for the other functions. sinh⁡(x+y)=sinh⁡xcosh⁡y+cosh⁡xsinh⁡ycosh⁡(x+y)=cosh⁡xcosh⁡y+sinh⁡xsinh⁡ytanh⁡(x+y)=tanh⁡x+tanh⁡y1+tanh⁡xtanh⁡y{\displaystyle {\begin{aligned}\sinh(x+y)&=\sinh x\cosh y+\cosh x\sinh y\\\cosh(x+y)&=\cosh x\cosh y+\sinh x\sinh y\\\tanh(x+y)&={\frac {\tanh x+\tanh y}{1+\tanh x\tanh y}}\\\end{aligned}}}particularlycosh⁡(2x)=sinh2⁡x+cosh2⁡x=2sinh2⁡x+1=2cosh2⁡x−1sinh⁡(2x)=2sinh⁡xcosh⁡xtanh⁡(2x)=2tanh⁡x1+tanh2⁡x{\displaystyle {\begin{aligned}\cosh(2x)&=\sinh ^{2}{x}+\cosh ^{2}{x}=2\sinh ^{2}x+1=2\cosh ^{2}x-1\\\sinh(2x)&=2\sinh x\cosh x\\\tanh(2x)&={\frac {2\tanh x}{1+\tanh ^{2}x}}\\\end{aligned}}} Also:sinh⁡x+sinh⁡y=2sinh⁡(x+y2)cosh⁡(x−y2)cosh⁡x+cosh⁡y=2cosh⁡(x+y2)cosh⁡(x−y2){\displaystyle {\begin{aligned}\sinh x+\sinh y&=2\sinh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\cosh x+\cosh y&=2\cosh \left({\frac {x+y}{2}}\right)\cosh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} sinh⁡(x−y)=sinh⁡xcosh⁡y−cosh⁡xsinh⁡ycosh⁡(x−y)=cosh⁡xcosh⁡y−sinh⁡xsinh⁡ytanh⁡(x−y)=tanh⁡x−tanh⁡y1−tanh⁡xtanh⁡y{\displaystyle {\begin{aligned}\sinh(x-y)&=\sinh x\cosh y-\cosh x\sinh y\\\cosh(x-y)&=\cosh x\cosh y-\sinh x\sinh 
y\\\tanh(x-y)&={\frac {\tanh x-\tanh y}{1-\tanh x\tanh y}}\\\end{aligned}}} Also:[21]sinh⁡x−sinh⁡y=2cosh⁡(x+y2)sinh⁡(x−y2)cosh⁡x−cosh⁡y=2sinh⁡(x+y2)sinh⁡(x−y2){\displaystyle {\begin{aligned}\sinh x-\sinh y&=2\cosh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\cosh x-\cosh y&=2\sinh \left({\frac {x+y}{2}}\right)\sinh \left({\frac {x-y}{2}}\right)\\\end{aligned}}} sinh⁡(x2)=sinh⁡x2(cosh⁡x+1)=sgn⁡xcosh⁡x−12cosh⁡(x2)=cosh⁡x+12tanh⁡(x2)=sinh⁡xcosh⁡x+1=sgn⁡xcosh⁡x−1cosh⁡x+1=ex−1ex+1{\displaystyle {\begin{aligned}\sinh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\sqrt {2(\cosh x+1)}}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{2}}}\\[6px]\cosh \left({\frac {x}{2}}\right)&={\sqrt {\frac {\cosh x+1}{2}}}\\[6px]\tanh \left({\frac {x}{2}}\right)&={\frac {\sinh x}{\cosh x+1}}&&=\operatorname {sgn} x\,{\sqrt {\frac {\cosh x-1}{\cosh x+1}}}={\frac {e^{x}-1}{e^{x}+1}}\end{aligned}}} wheresgnis thesign function. Ifx≠ 0, then[22] tanh⁡(x2)=cosh⁡x−1sinh⁡x=coth⁡x−csch⁡x{\displaystyle \tanh \left({\frac {x}{2}}\right)={\frac {\cosh x-1}{\sinh x}}=\coth x-\operatorname {csch} x} sinh2⁡x=12(cosh⁡2x−1)cosh2⁡x=12(cosh⁡2x+1){\displaystyle {\begin{aligned}\sinh ^{2}x&={\tfrac {1}{2}}(\cosh 2x-1)\\\cosh ^{2}x&={\tfrac {1}{2}}(\cosh 2x+1)\end{aligned}}} The following inequality is useful in statistics:[23]cosh⁡(t)≤et2/2.{\displaystyle \operatorname {cosh} (t)\leq e^{t^{2}/2}.} It can be proved by comparing the Taylor series of the two functions term by term. 
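The identities above lend themselves to spot-checking at random points; a brief Python sketch covering the addition formulas, the fundamental identity, the half-argument formula for tanh, and the cosh(t) ≤ e^(t²/2) bound:

```python
import math
import random

random.seed(1)
for _ in range(1000):
    x = random.uniform(-3, 3)
    y = random.uniform(-3, 3)
    # Addition formulas (abs_tol guards the case where x + y is near 0).
    assert math.isclose(math.sinh(x + y),
                        math.sinh(x) * math.cosh(y) + math.cosh(x) * math.sinh(y),
                        abs_tol=1e-12)
    assert math.isclose(math.cosh(x + y),
                        math.cosh(x) * math.cosh(y) + math.sinh(x) * math.sinh(y),
                        abs_tol=1e-12)
    # Analogue of the Pythagorean identity.
    assert math.isclose(math.cosh(x) ** 2 - math.sinh(x) ** 2, 1.0)
    # Half-argument formula: tanh(x/2) = (e^x - 1)/(e^x + 1).
    assert math.isclose(math.tanh(x / 2),
                        (math.exp(x) - 1) / (math.exp(x) + 1),
                        abs_tol=1e-12)
    # The inequality used in statistics, with a float-rounding margin.
    assert math.cosh(x) <= math.exp(x * x / 2) * (1 + 1e-12)

print("all identities verified")
```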
arsinh⁡(x)=ln⁡(x+x2+1)arcosh⁡(x)=ln⁡(x+x2−1)x≥1artanh⁡(x)=12ln⁡(1+x1−x)|x|<1arcoth⁡(x)=12ln⁡(x+1x−1)|x|>1arsech⁡(x)=ln⁡(1x+1x2−1)=ln⁡(1+1−x2x)0<x≤1arcsch⁡(x)=ln⁡(1x+1x2+1)x≠0{\displaystyle {\begin{aligned}\operatorname {arsinh} (x)&=\ln \left(x+{\sqrt {x^{2}+1}}\right)\\\operatorname {arcosh} (x)&=\ln \left(x+{\sqrt {x^{2}-1}}\right)&&x\geq 1\\\operatorname {artanh} (x)&={\frac {1}{2}}\ln \left({\frac {1+x}{1-x}}\right)&&|x|<1\\\operatorname {arcoth} (x)&={\frac {1}{2}}\ln \left({\frac {x+1}{x-1}}\right)&&|x|>1\\\operatorname {arsech} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}-1}}\right)=\ln \left({\frac {1+{\sqrt {1-x^{2}}}}{x}}\right)&&0<x\leq 1\\\operatorname {arcsch} (x)&=\ln \left({\frac {1}{x}}+{\sqrt {{\frac {1}{x^{2}}}+1}}\right)&&x\neq 0\end{aligned}}} ddxsinh⁡x=cosh⁡xddxcosh⁡x=sinh⁡xddxtanh⁡x=1−tanh2⁡x=sech2⁡x=1cosh2⁡xddxcoth⁡x=1−coth2⁡x=−csch2⁡x=−1sinh2⁡xx≠0ddxsech⁡x=−tanh⁡xsech⁡xddxcsch⁡x=−coth⁡xcsch⁡xx≠0{\displaystyle {\begin{aligned}{\frac {d}{dx}}\sinh x&=\cosh x\\{\frac {d}{dx}}\cosh x&=\sinh x\\{\frac {d}{dx}}\tanh x&=1-\tanh ^{2}x=\operatorname {sech} ^{2}x={\frac {1}{\cosh ^{2}x}}\\{\frac {d}{dx}}\coth x&=1-\coth ^{2}x=-\operatorname {csch} ^{2}x=-{\frac {1}{\sinh ^{2}x}}&&x\neq 0\\{\frac {d}{dx}}\operatorname {sech} x&=-\tanh x\operatorname {sech} x\\{\frac {d}{dx}}\operatorname {csch} x&=-\coth x\operatorname {csch} x&&x\neq 0\end{aligned}}}ddxarsinh⁡x=1x2+1ddxarcosh⁡x=1x2−11<xddxartanh⁡x=11−x2|x|<1ddxarcoth⁡x=11−x21<|x|ddxarsech⁡x=−1x1−x20<x<1ddxarcsch⁡x=−1|x|1+x2x≠0{\displaystyle {\begin{aligned}{\frac {d}{dx}}\operatorname {arsinh} x&={\frac {1}{\sqrt {x^{2}+1}}}\\{\frac {d}{dx}}\operatorname {arcosh} x&={\frac {1}{\sqrt {x^{2}-1}}}&&1<x\\{\frac {d}{dx}}\operatorname {artanh} x&={\frac {1}{1-x^{2}}}&&|x|<1\\{\frac {d}{dx}}\operatorname {arcoth} x&={\frac {1}{1-x^{2}}}&&1<|x|\\{\frac {d}{dx}}\operatorname {arsech} x&=-{\frac {1}{x{\sqrt {1-x^{2}}}}}&&0<x<1\\{\frac {d}{dx}}\operatorname {arcsch} x&=-{\frac {1}{|x|{\sqrt 
{1+x^{2}}}}}&&x\neq 0\end{aligned}}} Each of the functionssinhandcoshis equal to itssecond derivative, that is:d2dx2sinh⁡x=sinh⁡x{\displaystyle {\frac {d^{2}}{dx^{2}}}\sinh x=\sinh x}d2dx2cosh⁡x=cosh⁡x.{\displaystyle {\frac {d^{2}}{dx^{2}}}\cosh x=\cosh x\,.} All functions with this property arelinear combinationsofsinhandcosh, in particular theexponential functionsex{\displaystyle e^{x}}ande−x{\displaystyle e^{-x}}.[24] ∫sinh⁡(ax)dx=a−1cosh⁡(ax)+C∫cosh⁡(ax)dx=a−1sinh⁡(ax)+C∫tanh⁡(ax)dx=a−1ln⁡(cosh⁡(ax))+C∫coth⁡(ax)dx=a−1ln⁡|sinh⁡(ax)|+C∫sech⁡(ax)dx=a−1arctan⁡(sinh⁡(ax))+C∫csch⁡(ax)dx=a−1ln⁡|tanh⁡(ax2)|+C=a−1ln⁡|coth⁡(ax)−csch⁡(ax)|+C=−a−1arcoth⁡(cosh⁡(ax))+C{\displaystyle {\begin{aligned}\int \sinh(ax)\,dx&=a^{-1}\cosh(ax)+C\\\int \cosh(ax)\,dx&=a^{-1}\sinh(ax)+C\\\int \tanh(ax)\,dx&=a^{-1}\ln(\cosh(ax))+C\\\int \coth(ax)\,dx&=a^{-1}\ln \left|\sinh(ax)\right|+C\\\int \operatorname {sech} (ax)\,dx&=a^{-1}\arctan(\sinh(ax))+C\\\int \operatorname {csch} (ax)\,dx&=a^{-1}\ln \left|\tanh \left({\frac {ax}{2}}\right)\right|+C=a^{-1}\ln \left|\coth \left(ax\right)-\operatorname {csch} \left(ax\right)\right|+C=-a^{-1}\operatorname {arcoth} \left(\cosh \left(ax\right)\right)+C\end{aligned}}} The following integrals can be proved usinghyperbolic substitution:∫1a2+u2du=arsinh⁡(ua)+C∫1u2−a2du=sgn⁡uarcosh⁡|ua|+C∫1a2−u2du=a−1artanh⁡(ua)+Cu2<a2∫1a2−u2du=a−1arcoth⁡(ua)+Cu2>a2∫1ua2−u2du=−a−1arsech⁡|ua|+C∫1ua2+u2du=−a−1arcsch⁡|ua|+C{\displaystyle {\begin{aligned}\int {{\frac {1}{\sqrt {a^{2}+u^{2}}}}\,du}&=\operatorname {arsinh} \left({\frac {u}{a}}\right)+C\\\int {{\frac {1}{\sqrt {u^{2}-a^{2}}}}\,du}&=\operatorname {sgn} {u}\operatorname {arcosh} \left|{\frac {u}{a}}\right|+C\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {artanh} \left({\frac {u}{a}}\right)+C&&u^{2}<a^{2}\\\int {\frac {1}{a^{2}-u^{2}}}\,du&=a^{-1}\operatorname {arcoth} \left({\frac {u}{a}}\right)+C&&u^{2}>a^{2}\\\int {{\frac {1}{u{\sqrt {a^{2}-u^{2}}}}}\,du}&=-a^{-1}\operatorname {arsech} \left|{\frac 
{u}{a}}\right|+C\\\int {{\frac {1}{u{\sqrt {a^{2}+u^{2}}}}}\,du}&=-a^{-1}\operatorname {arcsch} \left|{\frac {u}{a}}\right|+C\end{aligned}}} whereCis theconstant of integration. It is possible to express explicitly theTaylor seriesat zero (or theLaurent series, if the function is not defined at zero) of the above functions. sinh⁡x=x+x33!+x55!+x77!+⋯=∑n=0∞x2n+1(2n+1)!{\displaystyle \sinh x=x+{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}+{\frac {x^{7}}{7!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{(2n+1)!}}}This series isconvergentfor everycomplexvalue ofx. Since the functionsinhxisodd, only odd exponents forxoccur in its Taylor series. cosh⁡x=1+x22!+x44!+x66!+⋯=∑n=0∞x2n(2n)!{\displaystyle \cosh x=1+{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}+{\frac {x^{6}}{6!}}+\cdots =\sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}}This series isconvergentfor everycomplexvalue ofx. Since the functioncoshxiseven, only even exponents forxoccur in its Taylor series. The sum of the sinh and cosh series is theinfinite seriesexpression of theexponential function. 
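The logarithmic expressions for the inverse functions, the derivative formulas, and the hyperbolic-substitution integrals above can all be validated numerically; a compact sketch:

```python
import math

# Logarithmic closed forms of the inverse hyperbolic functions.
def arsinh(x): return math.log(x + math.sqrt(x * x + 1))
def arcosh(x): return math.log(x + math.sqrt(x * x - 1))   # x >= 1
def artanh(x): return 0.5 * math.log((1 + x) / (1 - x))    # |x| < 1

assert math.isclose(arsinh(2.5), math.asinh(2.5))
assert math.isclose(arcosh(3.0), math.acosh(3.0))
assert math.isclose(artanh(0.7), math.atanh(0.7))

# d/dx arsinh x = 1/sqrt(x^2 + 1), checked by a central difference at x = 1.
h = 1e-6
numeric = (arsinh(1.0 + h) - arsinh(1.0 - h)) / (2 * h)
assert abs(numeric - 1 / math.sqrt(2)) < 1e-9

# A hyperbolic-substitution integral: int_0^b du/sqrt(a^2 + u^2) = arsinh(b/a).
def simpson(f, lo, hi, n=2000):
    step = (hi - lo) / n
    acc = f(lo) + f(hi) + sum((4 if i % 2 else 2) * f(lo + i * step)
                              for i in range(1, n))
    return acc * step / 3

a, b = 2.0, 3.0
assert abs(simpson(lambda u: 1 / math.sqrt(a * a + u * u), 0.0, b)
           - arsinh(b / a)) < 1e-10
print("inverse-function formulas verified")
```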
The following series are followed by a description of a subset of theirdomain of convergence, where the series is convergent and its sum equals the function.tanh⁡x=x−x33+2x515−17x7315+⋯=∑n=1∞22n(22n−1)B2nx2n−1(2n)!,|x|<π2coth⁡x=x−1+x3−x345+2x5945+⋯=∑n=0∞22nB2nx2n−1(2n)!,0<|x|<πsech⁡x=1−x22+5x424−61x6720+⋯=∑n=0∞E2nx2n(2n)!,|x|<π2csch⁡x=x−1−x6+7x3360−31x515120+⋯=∑n=0∞2(1−22n−1)B2nx2n−1(2n)!,0<|x|<π{\displaystyle {\begin{aligned}\tanh x&=x-{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}-{\frac {17x^{7}}{315}}+\cdots =\sum _{n=1}^{\infty }{\frac {2^{2n}(2^{2n}-1)B_{2n}x^{2n-1}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\coth x&=x^{-1}+{\frac {x}{3}}-{\frac {x^{3}}{45}}+{\frac {2x^{5}}{945}}+\cdots =\sum _{n=0}^{\infty }{\frac {2^{2n}B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \\\operatorname {sech} x&=1-{\frac {x^{2}}{2}}+{\frac {5x^{4}}{24}}-{\frac {61x^{6}}{720}}+\cdots =\sum _{n=0}^{\infty }{\frac {E_{2n}x^{2n}}{(2n)!}},\qquad \left|x\right|<{\frac {\pi }{2}}\\\operatorname {csch} x&=x^{-1}-{\frac {x}{6}}+{\frac {7x^{3}}{360}}-{\frac {31x^{5}}{15120}}+\cdots =\sum _{n=0}^{\infty }{\frac {2(1-2^{2n-1})B_{2n}x^{2n-1}}{(2n)!}},\qquad 0<\left|x\right|<\pi \end{aligned}}} where: The following expansions are valid in the whole complex plane: The hyperbolic functions represent an expansion oftrigonometrybeyond thecircular functions. Both types depend on anargument, eithercircular angleorhyperbolic angle. Since thearea of a circular sectorwith radiusrand angleu(in radians) isr2u/2, it will be equal touwhenr=√2. In the diagram, such a circle is tangent to the hyperbolaxy= 1 at (1,1). The yellow sector depicts an area and angle magnitude. Similarly, the yellow and red regions together depict ahyperbolic sectorwith area corresponding to hyperbolic angle magnitude. The legs of the tworight triangleswith hypotenuse on the ray defining the angles are of length√2times the circular and hyperbolic functions. 
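The Taylor series for sinh and cosh given above converge quickly for moderate arguments; a brief Python check, which also confirms that their sum reproduces the exponential series:

```python
import math

# Partial sums of the Taylor series at zero, truncated after `terms` terms.
def sinh_series(x, terms=20):
    return sum(x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def cosh_series(x, terms=20):
    return sum(x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

x = 1.5
assert math.isclose(sinh_series(x), math.sinh(x))   # odd powers only
assert math.isclose(cosh_series(x), math.cosh(x))   # even powers only

# Their sum is the series of the exponential function: sinh x + cosh x = e^x.
assert math.isclose(sinh_series(x) + cosh_series(x), math.exp(x))
```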
The hyperbolic angle is aninvariant measurewith respect to thesqueeze mapping, just as the circular angle is invariant under rotation.[25] TheGudermannian functiongives a direct relationship between the circular functions and the hyperbolic functions that does not involve complex numbers. The graph of the functionacosh(x/a)is thecatenary, the curve formed by a uniform flexible chain, hanging freely between two fixed points under uniform gravity. The decomposition of the exponential function in itseven and odd partsgives the identitiesex=cosh⁡x+sinh⁡x,{\displaystyle e^{x}=\cosh x+\sinh x,}ande−x=cosh⁡x−sinh⁡x.{\displaystyle e^{-x}=\cosh x-\sinh x.}Combined withEuler's formulaeix=cos⁡x+isin⁡x,{\displaystyle e^{ix}=\cos x+i\sin x,}this givesex+iy=(cosh⁡x+sinh⁡x)(cos⁡y+isin⁡y){\displaystyle e^{x+iy}=(\cosh x+\sinh x)(\cos y+i\sin y)}for thegeneral complex exponential function. Additionally,ex=1+tanh⁡x1−tanh⁡x=1+tanh⁡x21−tanh⁡x2{\displaystyle e^{x}={\sqrt {\frac {1+\tanh x}{1-\tanh x}}}={\frac {1+\tanh {\frac {x}{2}}}{1-\tanh {\frac {x}{2}}}}} Since theexponential functioncan be defined for anycomplexargument, we can also extend the definitions of the hyperbolic functions to complex arguments. The functionssinhzandcoshzare thenholomorphic. 
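The even/odd decomposition of the exponential and the tanh expressions for e^x noted above can be confirmed directly:

```python
import math

# exp decomposes into its even part (cosh) and odd part (sinh),
# and e^x can be recovered from tanh(x/2) or tanh(x).
for x in (-1.3, 0.0, 0.7, 2.4):
    assert math.isclose(math.exp(x), math.cosh(x) + math.sinh(x))
    assert math.isclose(math.exp(-x), math.cosh(x) - math.sinh(x))
    t = math.tanh(x / 2)
    assert math.isclose(math.exp(x), (1 + t) / (1 - t))
    assert math.isclose(math.exp(x),
                        math.sqrt((1 + math.tanh(x)) / (1 - math.tanh(x))))
```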
Relationships to ordinary trigonometric functions are given byEuler's formulafor complex numbers:eix=cos⁡x+isin⁡xe−ix=cos⁡x−isin⁡x{\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x\\e^{-ix}&=\cos x-i\sin x\end{aligned}}}so:cosh⁡(ix)=12(eix+e−ix)=cos⁡xsinh⁡(ix)=12(eix−e−ix)=isin⁡xcosh⁡(x+iy)=cosh⁡(x)cos⁡(y)+isinh⁡(x)sin⁡(y)sinh⁡(x+iy)=sinh⁡(x)cos⁡(y)+icosh⁡(x)sin⁡(y)tanh⁡(ix)=itan⁡xcosh⁡x=cos⁡(ix)sinh⁡x=−isin⁡(ix)tanh⁡x=−itan⁡(ix){\displaystyle {\begin{aligned}\cosh(ix)&={\frac {1}{2}}\left(e^{ix}+e^{-ix}\right)=\cos x\\\sinh(ix)&={\frac {1}{2}}\left(e^{ix}-e^{-ix}\right)=i\sin x\\\cosh(x+iy)&=\cosh(x)\cos(y)+i\sinh(x)\sin(y)\\\sinh(x+iy)&=\sinh(x)\cos(y)+i\cosh(x)\sin(y)\\\tanh(ix)&=i\tan x\\\cosh x&=\cos(ix)\\\sinh x&=-i\sin(ix)\\\tanh x&=-i\tan(ix)\end{aligned}}} Thus, hyperbolic functions areperiodicwith respect to the imaginary component, with period2πi{\displaystyle 2\pi i}(πi{\displaystyle \pi i}for hyperbolic tangent and cotangent).
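These complex-argument relations can be exercised with Python's cmath module; a short sketch checking sinh(ix) = i sin x, cosh(ix) = cos x, the real/imaginary decomposition, and the stated periods:

```python
import cmath
import math

x = 0.8
assert cmath.isclose(cmath.sinh(1j * x), 1j * math.sin(x))
assert cmath.isclose(cmath.cosh(1j * x), math.cos(x))
assert cmath.isclose(cmath.tanh(1j * x), 1j * math.tan(x))

# sinh(x + iy) = sinh x cos y + i cosh x sin y
z = 0.3 + 0.4j
assert cmath.isclose(cmath.sinh(z),
                     math.sinh(z.real) * math.cos(z.imag)
                     + 1j * math.cosh(z.real) * math.sin(z.imag))

# Periodicity in the imaginary direction: 2*pi*i (pi*i for tanh).
assert cmath.isclose(cmath.sinh(z + 2j * math.pi), cmath.sinh(z))
assert cmath.isclose(cmath.cosh(z + 2j * math.pi), cmath.cosh(z))
assert cmath.isclose(cmath.tanh(z + 1j * math.pi), cmath.tanh(z))
```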
https://en.wikipedia.org/wiki/Coshc_function
A crest point on a wave is the highest point of the wave. A crest is a point on a surface wave where the displacement of the medium is at a maximum. A trough is the opposite of a crest, so the minimum or lowest point of the wave. When the crests and troughs of two sine waves of equal amplitude and frequency intersect or collide, while being in phase with each other, the result is called constructive interference and the magnitudes double (above and below the line). When in antiphase – 180° out of phase – the result is destructive interference: the resulting wave is the undisturbed line having zero amplitude.
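The constructive and destructive cases described above can be illustrated by superposing two sampled sine waves; a small Python sketch (the amplitude, frequency, and sampling grid are arbitrary illustrative choices):

```python
import math

# Superpose two sine waves of equal amplitude A and frequency f.
A, f = 1.0, 2.0
ts = [i / 1000 for i in range(1000)]   # one second, 1000 samples

# In phase: crests align with crests, troughs with troughs.
in_phase = [A * math.sin(2 * math.pi * f * t)
            + A * math.sin(2 * math.pi * f * t) for t in ts]

# Antiphase (180° out of phase): each crest meets a trough.
antiphase = [A * math.sin(2 * math.pi * f * t)
             + A * math.sin(2 * math.pi * f * t + math.pi) for t in ts]

print(max(in_phase))                   # ≈ 2A: amplitudes double
print(max(abs(v) for v in antiphase))  # ≈ 0: complete cancellation
```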
https://en.wikipedia.org/wiki/Crest_(physics)
Inmathematics, theexponential functionis the uniquereal functionwhich mapszerotooneand has aderivativeeverywhere equal to its value. The exponential of a variable⁠x{\displaystyle x}⁠is denoted⁠exp⁡x{\displaystyle \exp x}⁠or⁠ex{\displaystyle e^{x}}⁠, with the two notations used interchangeably. It is calledexponentialbecause its argument can be seen as anexponentto which a constantnumbere≈ 2.718, the base, is raised. There are several other definitions of the exponential function, which are all equivalent although being of very different nature. The exponential function converts sums to products: it maps theadditive identity0to themultiplicative identity1, and the exponential of a sum is equal to the product of separate exponentials,⁠exp⁡(x+y)=exp⁡x⋅exp⁡y{\displaystyle \exp(x+y)=\exp x\cdot \exp y}⁠. Itsinverse function, thenatural logarithm,⁠ln{\displaystyle \ln }⁠or⁠log{\displaystyle \log }⁠, converts products to sums:⁠ln⁡(x⋅y)=ln⁡x+ln⁡y{\displaystyle \ln(x\cdot y)=\ln x+\ln y}⁠. The exponential function is occasionally called thenatural exponential function, matching the namenatural logarithm, for distinguishing it from some other functions that are also commonly calledexponential functions. These functions include the functions of the form⁠f(x)=bx{\displaystyle f(x)=b^{x}}⁠, which isexponentiationwith a fixed base⁠b{\displaystyle b}⁠. More generally, and especially in applications, functions of the general form⁠f(x)=abx{\displaystyle f(x)=ab^{x}}⁠are also called exponential functions. Theygrowordecayexponentially in that the rate that⁠f(x){\displaystyle f(x)}⁠changes when⁠x{\displaystyle x}⁠is increased isproportionalto the current value of⁠f(x){\displaystyle f(x)}⁠. The exponential function can be generalized to acceptcomplex numbersas arguments. 
This reveals relations between multiplication of complex numbers, rotations in thecomplex plane, andtrigonometry.Euler's formula⁠exp⁡iθ=cos⁡θ+isin⁡θ{\displaystyle \exp i\theta =\cos \theta +i\sin \theta }⁠expresses and summarizes these relations. The exponential function can be even further generalized to accept other types of arguments, such asmatricesand elements ofLie algebras. Thegraphofy=ex{\displaystyle y=e^{x}}is upward-sloping, and increases faster than every power of⁠x{\displaystyle x}⁠.[1]The graph always lies above thex-axis, but becomes arbitrarily close to it for large negativex; thus, thex-axis is a horizontalasymptote. The equationddxex=ex{\displaystyle {\tfrac {d}{dx}}e^{x}=e^{x}}means that theslopeof thetangentto the graph at each point is equal to its height (itsy-coordinate) at that point. There are several equivalent definitions of the exponential function, although of very different nature. One of the simplest definitions is: Theexponential functionis theuniquedifferentiable functionthat equals itsderivative, and takes the value1for the value0of its variable. This "conceptual" definition requires a uniqueness proof and an existence proof, but it allows an easy derivation of the main properties of the exponential function. Uniqueness:If⁠f(x){\displaystyle f(x)}⁠and⁠g(x){\displaystyle g(x)}⁠are two functions satisfying the above definition, then the derivative of⁠f/g{\displaystyle f/g}⁠is zero everywhere because of thequotient rule. It follows that⁠f/g{\displaystyle f/g}⁠is constant; this constant is1since⁠f(0)=g(0)=1{\displaystyle f(0)=g(0)=1}⁠. Existenceis proved in each of the two following sections. The exponential function is theinverse functionof thenatural logarithm.Theinverse function theoremimplies that the natural logarithm has an inverse function, that satisfies the above definition. This is a first proof of existence. 
Therefore, one has ln⁡(exp⁡x)=x{\displaystyle \ln(\exp x)=x} and exp⁡(ln⁡y)=y{\displaystyle \exp(\ln y)=y} for every real number x{\displaystyle x} and every positive real number y.{\displaystyle y.} The exponential function is the sum of the power series[2][3]exp⁡(x)=1+x+x22!+x33!+⋯=∑n=0∞xnn!,{\displaystyle {\begin{aligned}\exp(x)&=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+\cdots \\&=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}},\end{aligned}}} where n!{\displaystyle n!} is the factorial of n (the product of the first n positive integers). This series is absolutely convergent for every x{\displaystyle x} per the ratio test. So, the derivative of the sum can be computed by term-by-term differentiation, and this shows that the sum of the series satisfies the above definition. This is a second existence proof, and it shows, as a byproduct, that the exponential function is defined for every x{\displaystyle x}, and is everywhere the sum of its Maclaurin series. The exponential satisfies the functional equation exp⁡(x+y)=exp⁡(x)⋅exp⁡(y).{\displaystyle \exp(x+y)=\exp(x)\cdot \exp(y).} This results from the uniqueness and the fact that the function f(x)=exp⁡(x+y)/exp⁡(y){\displaystyle f(x)=\exp(x+y)/\exp(y)} satisfies the above definition. It can be proved that a function that satisfies this functional equation has the form x↦exp⁡(cx){\displaystyle x\mapsto \exp(cx)} if it is either continuous or monotonic. It is thus differentiable, and equals the exponential function if its derivative at 0 is 1. The exponential function is the limit, as the integer n goes to infinity,[4][3]exp⁡(x)=limn→+∞(1+xn)n.{\displaystyle \exp(x)=\lim _{n\to +\infty }\left(1+{\frac {x}{n}}\right)^{n}.} By continuity of the logarithm, this can be proved by taking logarithms and proving x=limn→∞ln⁡(1+xn)n=limn→∞nln⁡(1+xn),{\displaystyle x=\lim _{n\to \infty }\ln \left(1+{\frac {x}{n}}\right)^{n}=\lim _{n\to \infty }n\ln \left(1+{\frac {x}{n}}\right),} for example with Taylor's theorem. Reciprocal: The functional equation implies exe−x=1{\displaystyle e^{x}e^{-x}=1}.
Therefore ex≠0{\displaystyle e^{x}\neq 0} for every x{\displaystyle x} and 1ex=e−x.{\displaystyle {\frac {1}{e^{x}}}=e^{-x}.} Positiveness: ex>0{\displaystyle e^{x}>0} for every real number x{\displaystyle x}. This results from the intermediate value theorem, since e0=1{\displaystyle e^{0}=1} and, if one had ex<0{\displaystyle e^{x}<0} for some x{\displaystyle x}, there would be a y{\displaystyle y} between 0{\displaystyle 0} and x{\displaystyle x} such that ey=0{\displaystyle e^{y}=0}. Since the exponential function equals its derivative, this implies that the exponential function is monotonically increasing. Extension of exponentiation to positive real bases: Let b be a positive real number. The exponential function and the natural logarithm being inverses of each other, one has b=exp⁡(ln⁡b).{\displaystyle b=\exp(\ln b).} If n is an integer, the functional equation of the logarithm implies bn=exp⁡(ln⁡bn)=exp⁡(nln⁡b).{\displaystyle b^{n}=\exp(\ln b^{n})=\exp(n\ln b).} Since the right-most expression is defined if n is any real number, this allows defining bx{\displaystyle b^{x}} for every positive real number b and every real number x: bx=exp⁡(xln⁡b).{\displaystyle b^{x}=\exp(x\ln b).} In particular, if b is Euler's number e=exp⁡(1),{\displaystyle e=\exp(1),} one has ln⁡e=1{\displaystyle \ln e=1} (inverse function) and thus ex=exp⁡(x).{\displaystyle e^{x}=\exp(x).} This shows the equivalence of the two notations for the exponential function. A function is commonly called an exponential function—with an indefinite article—if it has the form x↦bx{\displaystyle x\mapsto b^{x}}, that is, if it is obtained from exponentiation by fixing the base and letting the exponent vary. More generally and especially in applied contexts, the term exponential function is commonly used for functions of the form f(x)=abx{\displaystyle f(x)=ab^{x}}.
This may be motivated by the fact that, if the values of the function representquantities, a change ofmeasurement unitchanges the value of⁠a{\displaystyle a}⁠, and so, it is nonsensical to impose⁠a=1{\displaystyle a=1}⁠. These most general exponential functions are thedifferentiable functionsthat satisfy the following equivalent characterizations. Thebaseof an exponential function is thebaseof theexponentiationthat appears in it when written as⁠x→abx{\displaystyle x\to ab^{x}}⁠, namely⁠b{\displaystyle b}⁠.[6]The base is⁠ek{\displaystyle e^{k}}⁠in the second characterization,exp⁡f′(x)f(x){\textstyle \exp {\frac {f'(x)}{f(x)}}}in the third one, and(f(x+d)f(x))1/d{\textstyle \left({\frac {f(x+d)}{f(x)}}\right)^{1/d}}in the last one. The last characterization is important inempirical sciences, as allowing a directexperimentaltest whether a function is an exponential function. Exponentialgrowthorexponential decay—where the variable change isproportionalto the variable value—are thus modeled with exponential functions. Examples are unlimited population growth leading toMalthusian catastrophe,continuously compounded interest, andradioactive decay. If the modeling function has the form⁠x↦aekx,{\displaystyle x\mapsto ae^{kx},}⁠or, equivalently, is a solution of the differential equation⁠y′=ky{\displaystyle y'=ky}⁠, the constant⁠k{\displaystyle k}⁠is called, depending on the context, thedecay constant,disintegration constant,[7]rate constant,[8]ortransformation constant.[9] For proving the equivalence of the above properties, one can proceed as follows. 
The first two characterizations are equivalent, since, if b=ek{\displaystyle b=e^{k}} and k=ln⁡b{\displaystyle k=\ln b}, one has ekx=(ek)x=bx.{\displaystyle e^{kx}=(e^{k})^{x}=b^{x}.} The basic properties of the exponential function (derivative and functional equation) immediately imply the third and the last conditions. Suppose that the third condition is satisfied, and let k{\displaystyle k} be the constant value of f′(x)/f(x).{\displaystyle f'(x)/f(x).} Since ∂ekx∂x=kekx,{\textstyle {\frac {\partial e^{kx}}{\partial x}}=ke^{kx},} the quotient rule implies that ∂∂xf(x)ekx=0,{\displaystyle {\frac {\partial }{\partial x}}\,{\frac {f(x)}{e^{kx}}}=0,} and thus that there is a constant a{\displaystyle a} such that f(x)=aekx.{\displaystyle f(x)=ae^{kx}.} If the last condition is satisfied, let φ(d)=f(x+d)/f(x),{\textstyle \varphi (d)=f(x+d)/f(x),} which is independent of x{\displaystyle x}. Using φ(0)=1{\displaystyle \varphi (0)=1}, one gets f(x+d)−f(x)d=f(x)φ(d)−φ(0)d.{\displaystyle {\frac {f(x+d)-f(x)}{d}}=f(x)\,{\frac {\varphi (d)-\varphi (0)}{d}}.} Taking the limit as d{\displaystyle d} tends to zero, one gets that the third condition is satisfied with k=φ′(0){\displaystyle k=\varphi '(0)}. It follows that f(x)=aekx{\displaystyle f(x)=ae^{kx}} for some a,{\displaystyle a,} and φ(d)=ekd.{\displaystyle \varphi (d)=e^{kd}.} As a byproduct, one gets that (f(x+d)f(x))1/d=ek{\displaystyle \left({\frac {f(x+d)}{f(x)}}\right)^{1/d}=e^{k}} is independent of both x{\displaystyle x} and d{\displaystyle d}. The earliest occurrence of the exponential function was in Jacob Bernoulli's study of compound interest in 1683.[10] It is this study that led Bernoulli to consider the number limn→∞(1+1n)n{\displaystyle \lim _{n\to \infty }\left(1+{\frac {1}{n}}\right)^{n}} now known as Euler's number and denoted e{\displaystyle e}. The exponential function is involved as follows in the computation of continuously compounded interest.
If a principal amount of 1 earns interest at an annual rate of x compounded monthly, then the interest earned each month is x/12 times the current value, so each month the total value is multiplied by (1 + x/12), and the value at the end of the year is (1 + x/12)12. If instead interest is compounded daily, this becomes (1 + x/365)365. Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function, exp⁡x=limn→∞(1+xn)n{\displaystyle \exp x=\lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}} first given by Leonhard Euler.[4] Exponential functions occur very often in solutions of differential equations, and can themselves be defined as solutions of differential equations. Indeed, the exponential function is a solution of the simplest possible differential equation, namely y′=y{\displaystyle y'=y}. Every other exponential function, of the form y=abx{\displaystyle y=ab^{x}}, is a solution of the differential equation y′=ky{\displaystyle y'=ky}, and every solution of this differential equation has this form. The solutions of an equation of the form y′+ky=f(x){\displaystyle y'+ky=f(x)} involve exponential functions in a more sophisticated way, since they have the form y=ce−kx+e−kx∫f(x)ekxdx,{\displaystyle y=ce^{-kx}+e^{-kx}\int f(x)e^{kx}dx,} where c{\displaystyle c} is an arbitrary constant and the integral denotes any antiderivative of its argument. More generally, the solutions of every linear differential equation with constant coefficients can be expressed in terms of exponential functions and, when they are not homogeneous, antiderivatives. This also holds for systems of linear differential equations with constant coefficients. The exponential function can be naturally extended to a complex function, which is a function with the complex numbers as domain and codomain, such that its restriction to the reals is the above-defined exponential function, called the real exponential function in what follows.
This function is also called the exponential function, and also denoted ez{\displaystyle e^{z}} or exp⁡(z){\displaystyle \exp(z)}. To distinguish the complex case from the real one, the extended function is also called the complex exponential function or simply the complex exponential. Most of the definitions of the exponential function can be used verbatim for defining the complex exponential function, and the proof of their equivalence is the same as in the real case. The complex exponential is the unique complex function that equals its complex derivative and takes the value 1{\displaystyle 1} for the argument 0{\displaystyle 0}: dezdz=ezande0=1.{\displaystyle {\frac {de^{z}}{dz}}=e^{z}\quad {\text{and}}\quad e^{0}=1.} The complex exponential function is the sum of the series ez=∑k=0∞zkk!.{\displaystyle e^{z}=\sum _{k=0}^{\infty }{\frac {z^{k}}{k!}}.} This series is absolutely convergent for every complex number z{\displaystyle z}. So, the complex exponential function is an entire function. The complex exponential function is the limit ez=limn→∞(1+zn)n{\displaystyle e^{z}=\lim _{n\to \infty }\left(1+{\frac {z}{n}}\right)^{n}} The functional equation ew+z=ewez{\displaystyle e^{w+z}=e^{w}e^{z}} holds for all complex numbers w{\displaystyle w} and z{\displaystyle z}. The complex exponential is the unique continuous function that satisfies this functional equation and has the value 1{\displaystyle 1} for z=0{\displaystyle z=0}. The complex logarithm is a right-inverse function of the complex exponential: elog⁡z=z.{\displaystyle e^{\log z}=z.} However, since the complex logarithm is a multivalued function, one has log⁡ez={z+2ikπ∣k∈Z},{\displaystyle \log e^{z}=\{z+2ik\pi \mid k\in \mathbb {Z} \},} and it is difficult to define the complex exponential from the complex logarithm. Conversely, the complex logarithm is usually defined in terms of the complex exponential.
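The series definition converges for complex arguments just as in the real case; a short sketch comparing partial sums with cmath.exp and checking the functional equation:

```python
import cmath

# Partial sums of e^z = sum z^k / k! for a complex argument.
def exp_series(z, terms=40):
    total, term = 0 + 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= z / (k + 1)   # z^{k+1}/(k+1)! from z^k/k!
    return total

z = 1.0 + 2.0j
w = -0.5 + 0.3j
assert cmath.isclose(exp_series(z), cmath.exp(z))

# The functional equation e^{w+z} = e^w e^z for complex arguments.
assert cmath.isclose(cmath.exp(w + z), cmath.exp(w) * cmath.exp(z))
```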
The complex exponential has the following properties: 1ez=e−z{\displaystyle {\frac {1}{e^{z}}}=e^{-z}} and ez≠0 for every z∈C.{\displaystyle e^{z}\neq 0\quad {\text{for every }}z\in \mathbb {C} .} It is a periodic function with period 2iπ{\displaystyle 2i\pi }; that is, ez+2ikπ=ez for every k∈Z.{\displaystyle e^{z+2ik\pi }=e^{z}\quad {\text{for every }}k\in \mathbb {Z} .} This results from Euler's identity eiπ=−1{\displaystyle e^{i\pi }=-1} and the functional equation. The complex conjugate of the complex exponential is ez¯=ez¯.{\displaystyle {\overline {e^{z}}}=e^{\overline {z}}.} Its modulus is |ez|=eℜ(z),{\displaystyle |e^{z}|=e^{\Re (z)},} where ℜ(z){\displaystyle \Re (z)} denotes the real part of z{\displaystyle z}. The complex exponential and the trigonometric functions are strongly related by Euler's formula: eit=cos⁡(t)+isin⁡(t).{\displaystyle e^{it}=\cos(t)+i\sin(t).} This formula provides the decomposition of the complex exponential into real and imaginary parts: ex+iy=excos⁡y+iexsin⁡y.{\displaystyle e^{x+iy}=e^{x}\,\cos y+ie^{x}\,\sin y.} The trigonometric functions can be expressed in terms of complex exponentials: cos⁡x=eix+e−ix2sin⁡x=eix−e−ix2itan⁡x=i1−e2ix1+e2ix{\displaystyle {\begin{aligned}\cos x&={\frac {e^{ix}+e^{-ix}}{2}}\\\sin x&={\frac {e^{ix}-e^{-ix}}{2i}}\\\tan x&=i\,{\frac {1-e^{2ix}}{1+e^{2ix}}}\end{aligned}}} In these formulas, x,y,t{\displaystyle x,y,t} are commonly interpreted as real variables, but the formulas remain valid if the variables are interpreted as complex variables. These formulas may be used to define trigonometric functions of a complex variable.[11] Considering the complex exponential function as a function involving four real variables: v+iw=exp⁡(x+iy){\displaystyle v+iw=\exp(x+iy)} the graph of the exponential function is a two-dimensional surface curving through four dimensions. Starting with a color-coded portion of the xy{\displaystyle xy} domain, the following are depictions of the graph as variously projected into two or three dimensions.
The second image shows how the domain complex plane is mapped into the range complex plane:

The third and fourth images show how the graph in the second image extends into one of the other two dimensions not shown in the second image.

The third image shows the graph extended along the real $x$ axis. It shows that the graph is a surface of revolution about the $x$ axis of the graph of the real exponential function, producing a horn or funnel shape.

The fourth image shows the graph extended along the imaginary $y$ axis. It shows that the graph's surface for positive and negative $y$ values doesn't really meet along the negative real $v$ axis, but instead forms a spiral surface about the $y$ axis. Because its $y$ values have been extended to $\pm 2\pi$, this image also better depicts the $2\pi$ periodicity in the imaginary $y$ value.

The power series definition of the exponential function makes sense for square matrices (for which the function is called the matrix exponential) and more generally in any unital Banach algebra $B$. In this setting, $e^0 = 1$, and $e^x$ is invertible with inverse $e^{-x}$ for any $x$ in $B$. If $xy = yx$, then $e^{x+y} = e^x e^y$, but this identity can fail for noncommuting $x$ and $y$.

Some alternative definitions lead to the same function. For instance, $e^x$ can be defined as
$$\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n.$$
Or $e^x$ can be defined as $f_x(1)$, where $f_x\colon \mathbb{R} \to B$ is the solution to the differential equation $df_x/dt\,(t) = x f_x(t)$, with initial condition $f_x(0) = 1$; it follows that $f_x(t) = e^{tx}$ for every $t$ in $\mathbb{R}$.

Given a Lie group $G$ and its associated Lie algebra $\mathfrak{g}$, the exponential map is a map $\mathfrak{g} \to G$ satisfying similar properties. In fact, since $\mathbb{R}$ is the Lie algebra of the Lie group of all positive real numbers under multiplication, the ordinary exponential function for real arguments is a special case of the Lie algebra situation.
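The failure of $e^{x+y} = e^x e^y$ for noncommuting elements can be demonstrated with $2\times 2$ matrices. The sketch below (the helper names `mat_mul`, `mat_exp`, etc. are illustrative) evaluates the power series directly and compares $\exp(x+y)$ with $\exp(x)\exp(y)$ for two noncommuting nilpotent matrices:

```python
def mat_mul(a, b):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_exp(a, terms=30):
    """Matrix exponential via the truncated power series sum a^k / k!."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity = a^0 / 0!
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_mul(term, a)
        term = [[term[i][j] / k for j in range(2)] for i in range(2)]
        result = mat_add(result, term)
    return result

x = [[0.0, 1.0], [0.0, 0.0]]
y = [[0.0, 0.0], [1.0, 0.0]]
# x and y do not commute: xy != yx, so the two results differ.
lhs = mat_exp(mat_add(x, y))            # exp(x + y): entries cosh(1), sinh(1)
rhs = mat_mul(mat_exp(x), mat_exp(y))   # exp(x) exp(y): [[2, 1], [1, 1]]
print(lhs)
print(rhs)
```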
Similarly, since the Lie group $GL(n,\mathbb{R})$ of invertible $n\times n$ matrices has as Lie algebra $M(n,\mathbb{R})$, the space of all $n\times n$ matrices, the exponential function for square matrices is a special case of the Lie algebra exponential map. The identity $\exp(x+y) = \exp(x)\exp(y)$ can fail for Lie algebra elements $x$ and $y$ that do not commute; the Baker–Campbell–Hausdorff formula supplies the necessary correction terms.

The function $e^z$ is a transcendental function, which means that it is not a root of a polynomial over the ring of the rational fractions $\mathbb{C}(z)$. If $a_1, \ldots, a_n$ are distinct complex numbers, then $e^{a_1 z}, \ldots, e^{a_n z}$ are linearly independent over $\mathbb{C}(z)$, and hence $e^z$ is transcendental over $\mathbb{C}(z)$.

The Taylor series definition above is generally efficient for computing (an approximation of) $e^x$. However, when computing near the argument $x = 0$, the result will be close to 1, and computing the value of the difference $e^x - 1$ with floating-point arithmetic may lead to the loss of (possibly all) significant figures, producing a large relative error, possibly even a meaningless result.

Following a proposal by William Kahan, it may thus be useful to have a dedicated routine, often called expm1, which computes $e^x - 1$ directly, bypassing the computation of $e^x$. For example, one may use the Taylor series:
$$e^x - 1 = x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots + \frac{x^n}{n!} + \cdots.$$

This was first implemented in 1979 in the Hewlett-Packard HP-41C calculator, and provided by several calculators,[12][13] operating systems (for example Berkeley UNIX 4.3BSD[14]), computer algebra systems, and programming languages (for example C99).[15]

In addition to base $e$, the IEEE 754-2008 standard defines similar exponential functions near 0 for base 2 and 10: $2^x - 1$ and $10^x - 1$.
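The loss of significance that motivates expm1 is easy to observe in Python, whose standard `math` module provides `math.expm1`. For a tiny argument, the naive computation `exp(x) - 1` cancels away almost all significant digits, while the dedicated routine keeps full precision:

```python
import math

x = 1e-12

naive = math.exp(x) - 1    # exp(x) rounds to a double near 1; subtracting 1
                           # leaves only a few correct digits
accurate = math.expm1(x)   # computes e^x - 1 directly, no cancellation

print(naive)     # close to 1e-12 but with a large relative error
print(accurate)  # correct to essentially full double precision
```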
A similar approach has been used for the logarithm; see log1p.

An identity in terms of the hyperbolic tangent,
$$\operatorname{expm1}(x) = e^x - 1 = \frac{2\tanh(x/2)}{1 - \tanh(x/2)},$$
gives a high-precision value for small values of $x$ on systems that do not implement expm1(x).

The exponential function can also be computed with continued fractions. A continued fraction for $e^x$ can be obtained via an identity of Euler:
$$e^x = 1 + \cfrac{x}{1 - \cfrac{x}{x+2 - \cfrac{2x}{x+3 - \cfrac{3x}{x+4 - \ddots}}}}$$

The following generalized continued fraction for $e^z$ converges more quickly:[16]
$$e^z = 1 + \cfrac{2z}{2 - z + \cfrac{z^2}{6 + \cfrac{z^2}{10 + \cfrac{z^2}{14 + \ddots}}}}$$
or, by applying the substitution $z = x/y$:
$$e^{\frac{x}{y}} = 1 + \cfrac{2x}{2y - x + \cfrac{x^2}{6y + \cfrac{x^2}{10y + \cfrac{x^2}{14y + \ddots}}}}$$
with a special case for $z = 2$:
$$e^2 = 1 + \cfrac{4}{0 + \cfrac{2^2}{6 + \cfrac{2^2}{10 + \cfrac{2^2}{14 + \ddots}}}} = 7 + \cfrac{2}{5 + \cfrac{1}{7 + \cfrac{1}{9 + \cfrac{1}{11 + \ddots}}}}$$

This formula also converges, though more slowly, for $z > 2$. For example:
$$e^3 = 1 + \cfrac{6}{-1 + \cfrac{3^2}{6 + \cfrac{3^2}{10 + \cfrac{3^2}{14 + \ddots}}}} = 13 + \cfrac{54}{7 + \cfrac{9}{14 + \cfrac{9}{18 + \cfrac{9}{22 + \ddots}}}}$$
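The generalized continued fraction above can be evaluated by truncating it at a finite depth and working from the innermost denominator outward. A minimal sketch (the function name `exp_cf` and the fixed truncation depth are assumptions for illustration; the partial denominators $6, 10, 14, \ldots$ are $4k+2$ for $k \ge 1$):

```python
import math

def exp_cf(z, depth=12):
    """Approximate e^z from the generalized continued fraction
    e^z = 1 + 2z / (2 - z + z^2/(6 + z^2/(10 + z^2/(14 + ...)))),
    truncated at `depth` partial denominators, evaluated bottom-up."""
    b = 4 * depth + 2                 # innermost partial denominator
    for k in range(depth - 1, 0, -1):
        b = 4 * k + 2 + z * z / b     # fold in denominators 4k+2
    return 1 + 2 * z / (2 - z + z * z / b)

# The fraction converges very quickly for moderate z.
print(exp_cf(1.0), math.e)
print(exp_cf(2.0), math.exp(2.0))   # the z = 2 special case
```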
https://en.wikipedia.org/wiki/Complex_exponential