Least-squares adjustment is a model for the solution of an overdetermined system of equations based on the principle of least squares of observation residuals. It is used extensively in the disciplines of surveying, geodesy, and photogrammetry (collectively, the field of geomatics). There are three forms of least squares adjustment: parametric, in which the observations are an explicit function of the parameters, $h(X) = Y$; conditional, in which a condition equation $g(Y) = 0$ involves only the observations; and combined, in which both appear implicitly in $f(X, Y) = 0$. Clearly, the parametric and conditional adjustments correspond to the more general combined case when $f(X, Y) = h(X) - Y$ and $f(X, Y) = g(Y)$, respectively. Yet the special cases warrant simpler solutions, as detailed below. Often in the literature, $Y$ may be denoted $L$.

The equalities above only hold for the estimated parameters $\hat{X}$ and observations $\hat{Y}$, thus $f(\hat{X}, \hat{Y}) = 0$. In contrast, measured observations $\tilde{Y}$ and approximate parameters $\tilde{X}$ produce a nonzero misclosure: $\tilde{w} = f(\tilde{X}, \tilde{Y})$. One can proceed to a Taylor series expansion of the equations, which yields the Jacobians or design matrices: the first, $A = \partial f/\partial X$, and the second, $B = \partial f/\partial Y$. The linearized model then reads

$$\tilde{w} + A\hat{x} + B\hat{y} = 0,$$

where $\hat{x} = \hat{X} - \tilde{X}$ are estimated parameter corrections to the a priori values, and $\hat{y} = \hat{Y} - \tilde{Y}$ are post-fit observation residuals.

In the parametric adjustment, the second design matrix is an identity, $B = -I$, and the misclosure vector can be interpreted as the pre-fit residuals, $\tilde{y} = \tilde{w} = h(\tilde{X}) - \tilde{Y}$, so the system simplifies to

$$A\hat{x} = \hat{y} - \tilde{y},$$

which is in the form of ordinary least squares.
In the conditional adjustment, the first design matrix is null, $A = 0$. For the more general cases, Lagrange multipliers are introduced to relate the two Jacobian matrices and to transform the constrained least squares problem into an unconstrained one (albeit a larger one). In any case, their manipulation leads to the $\hat{X}$ and $\hat{Y}$ vectors as well as the respective a posteriori covariance matrices of the parameters and observations. Given the matrices and vectors above, their solution is found via standard least-squares methods, e.g., forming the normal matrix and applying Cholesky decomposition, applying the QR factorization directly to the Jacobian matrix, iterative methods for very large systems, etc. If rank deficiency is encountered, it can often be rectified by the inclusion of additional equations imposing constraints on the parameters and/or observations, leading to constrained least squares.
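As a concrete illustration of the parametric case, the following sketch solves $A\hat{x} = -\tilde{w}$ via the normal equations with a Cholesky factorization, as described above. The straight-line data and starting values are an invented toy example, not from the source.

```python
import numpy as np

# Parametric adjustment sketch: fit a line h(X) = X[0] + X[1]*t to
# four invented observations, solving the normal equations by Cholesky.
t = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = np.array([1.1, 2.9, 5.2, 6.8])        # measured observations Y~

X0 = np.array([0.0, 0.0])                      # approximate parameters X~
A = np.column_stack([np.ones_like(t), t])      # design matrix A = dh/dX
w = A @ X0 - y_obs                             # misclosure w~ = h(X~) - Y~

# Normal equations (A^T A) x^ = -A^T w~, solved through N = L L^T
N = A.T @ A
L = np.linalg.cholesky(N)
x_hat = np.linalg.solve(L.T, np.linalg.solve(L, -A.T @ w))

X_hat = X0 + x_hat                             # adjusted parameters X^
y_hat = A @ X_hat - y_obs                      # post-fit residuals y^
```

For larger or ill-conditioned systems, applying QR factorization directly to `A` (e.g. `np.linalg.lstsq`) avoids squaring the condition number in the normal matrix.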
https://en.wikipedia.org/wiki/Least-squares_adjustment
In statistics, best linear unbiased prediction (BLUP) is used in linear mixed models for the estimation of random effects. BLUP was derived by Charles Roy Henderson in 1950, but the term "best linear unbiased predictor" (or "prediction") seems not to have been used until 1962.[1] "Best linear unbiased predictions" (BLUPs) of random effects are similar to best linear unbiased estimates (BLUEs) (see the Gauss–Markov theorem) of fixed effects. The distinction arises because it is conventional to talk about estimating fixed effects but about predicting random effects; the two terms are otherwise equivalent. (This is a bit strange since the random effects have already been "realized"; they already exist. The use of the term "prediction" may be because in the field of animal breeding in which Henderson worked, the random effects were usually genetic merit, which could be used to predict the quality of offspring; Robinson,[1] page 28.) However, the equations for the "fixed" effects and for the random effects are different.

In practice, it is often the case that the parameters associated with the random effect(s) term(s) are unknown; these parameters are the variances of the random effects and residuals. Typically the parameters are estimated and plugged into the predictor, leading to the empirical best linear unbiased predictor (EBLUP). Notice that by simply plugging the estimated parameters into the predictor, additional variability is unaccounted for, leading to overly optimistic prediction variances for the EBLUP.[citation needed] Best linear unbiased predictions are similar to empirical Bayes estimates of random effects in linear mixed models, except that in the latter case, where weights depend on unknown values of components of variance, these unknown variances are replaced by sample-based estimates.
Suppose that the model for observations $\{Y_j;\ j = 1, \ldots, n\}$ is written as

$$Y_j = \mu + x_j'\beta + \xi_j + \varepsilon_j,$$

where $\mu$ is the mean of all observations $Y$, and $\xi_j$ and $\varepsilon_j$ represent the random effect and observation error for observation $j$; suppose they are uncorrelated and have known variances $\sigma_\xi^2$ and $\sigma_\varepsilon^2$, respectively. Further, $x_j$ is a vector of independent variables for the $j$th observation and $\beta$ is a vector of regression parameters. The BLUP problem of providing an estimate of the observation-error-free value for the $k$th observation,

$$\tilde{Y}_k = \mu + x_k'\beta + \xi_k,$$

can be formulated as requiring that the coefficients of a linear predictor, defined as

$$\hat{Y}_k = \sum_{j=1}^{n} c_j Y_j,$$

should be chosen so as to minimise the variance of the prediction error, $\operatorname{Var}(\hat{Y}_k - \tilde{Y}_k)$, subject to the condition that the predictor is unbiased, $\operatorname{E}(\hat{Y}_k - \tilde{Y}_k) = 0$. In contrast to the case of best linear unbiased estimation, the "quantity to be estimated", $\tilde{Y}_k$, not only has a contribution from a random element, but one of the observed quantities, specifically $Y_k$ (which contributes to $\hat{Y}_k$), also has a contribution from this same random element. In contrast to BLUE, BLUP takes into account known or estimated variances.[2]

Henderson explored breeding from a statistical point of view. His work assisted the development of the selection index (SI) and estimated breeding value (EBV). These statistical methods influenced the artificial insemination stud rankings used in the United States. These early statistical methods are confused with the BLUP now common in livestock breeding. The actual term BLUP originated out of work at the University of Guelph in Canada by Daniel Sorensen and Brian Kennedy, in which they extended Henderson's results to a model that includes several cycles of selection.[3] This model was popularized by the University of Guelph in the dairy industry under the name BLUP.
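For intuition, the simplest case of the model above (no covariates $x_j$, known mean and variances) gives the familiar shrinkage form of the BLUP: the predicted random effect is the raw deviation $Y_j - \mu$ scaled by $\sigma_\xi^2/(\sigma_\xi^2 + \sigma_\varepsilon^2)$. A minimal sketch, with all numbers invented for illustration:

```python
import numpy as np

# BLUP sketch for the simplified model Y_j = mu + xi_j + eps_j with known
# mean and variances (a special case of the model in the text, without the
# regression term x_j' beta). The BLUP of xi_j shrinks Y_j - mu toward zero.
sigma2_xi, sigma2_eps = 4.0, 1.0
mu = 10.0
y = np.array([12.0, 9.0, 10.5, 7.0])

shrink = sigma2_xi / (sigma2_xi + sigma2_eps)   # shrinkage factor, 0.8 here
xi_blup = shrink * (y - mu)                     # BLUP of each random effect

# Predicted observation-error-free values mu + xi_j lie between the raw
# observations and the overall mean.
y_blup = mu + xi_blup
```

Replacing the known variances with sample-based estimates in `shrink` would give the EBLUP described above, at the cost of understating the prediction variance.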
Further work by the university showed BLUP's superiority over EBV and SI, leading to it becoming the primary genetic predictor.[citation needed] There is thus confusion between the BLUP model popularized above and the best linear unbiased prediction statistical method, which was too theoretical for general use. The model was supplied for use on computers to farmers. In Canada, all dairies report nationally. The genetics in Canada were shared, making it the largest genetic pool and thus a source of improvements. This and BLUP drove a rapid increase in Holstein cattle quality.
https://en.wikipedia.org/wiki/Best_linear_unbiased_prediction
In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude or length of the vector. This norm can be defined as the square root of the inner product of a vector with itself.

A seminorm satisfies the first two properties of a norm but may be zero for vectors other than the origin.[1] A vector space with a specified norm is called a normed vector space. In a similar manner, a vector space with a seminorm is called a seminormed vector space. The term pseudonorm has been used for several related meanings. It may be a synonym of "seminorm".[1] It can also refer to a norm that can take infinite values,[2] or to certain functions parametrised by a directed set.[3]

Given a vector space $X$ over a subfield $F$ of the complex numbers $\mathbb{C}$, a norm on $X$ is a real-valued function $p : X \to \mathbb{R}$ with the following properties, where $|s|$ denotes the usual absolute value of a scalar $s$:[4]

1. Subadditivity (triangle inequality): $p(x + y) \le p(x) + p(y)$ for all $x, y \in X$.
2. Absolute homogeneity: $p(sx) = |s|\,p(x)$ for all $x \in X$ and all scalars $s$.
3. Positive definiteness: for all $x \in X$, if $p(x) = 0$ then $x = 0$.

A seminorm on $X$ is a function $p : X \to \mathbb{R}$ that has properties (1.) and (2.),[6] so that in particular, every norm is also a seminorm (and thus also a sublinear functional). However, there exist seminorms that are not norms. Properties (1.) and (2.) imply that if $p$ is a norm (or more generally, a seminorm) then $p(0) = 0$ and that $p$ also has the following property:

Non-negativity: $p(x) \ge 0$ for all $x \in X$.

Some authors include non-negativity as part of the definition of "norm", although this is not necessary.
Although this article defined "positive" to be a synonym of "positive definite", some authors instead define "positive" to be a synonym of "non-negative";[7] these definitions are not equivalent.

Suppose that $p$ and $q$ are two norms (or seminorms) on a vector space $X$. Then $p$ and $q$ are called equivalent if there exist two positive real constants $c$ and $C$ such that for every vector $x \in X$, $cq(x) \le p(x) \le Cq(x)$. The relation "$p$ is equivalent to $q$" is reflexive, symmetric ($cq \le p \le Cq$ implies $\tfrac{1}{C}p \le q \le \tfrac{1}{c}p$), and transitive, and thus defines an equivalence relation on the set of all norms on $X$. The norms $p$ and $q$ are equivalent if and only if they induce the same topology on $X$.[8] Any two norms on a finite-dimensional space are equivalent, but this does not extend to infinite-dimensional spaces.[8]

If a norm $p : X \to \mathbb{R}$ is given on a vector space $X$, then the norm of a vector $z \in X$ is usually denoted by enclosing it within double vertical lines: $\|z\| = p(z)$, as proposed by Stefan Banach in his doctoral thesis of 1920. Such notation is also sometimes used if $p$ is only a seminorm. For the length of a vector in Euclidean space (which is an example of a norm, as explained below), the notation $|x|$ with single vertical lines is also widespread.
Every (real or complex) vector space admits a norm: if $x_\bullet = (x_i)_{i \in I}$ is a Hamel basis for a vector space $X$, then the real-valued map that sends $x = \sum_{i \in I} s_i x_i \in X$ (where all but finitely many of the scalars $s_i$ are $0$) to $\sum_{i \in I} |s_i|$ is a norm on $X$.[9] There are also a large number of norms that exhibit additional properties that make them useful for specific problems.

The absolute value $|x|$ is a norm on the vector space formed by the real or complex numbers. The complex numbers form a one-dimensional vector space over themselves and a two-dimensional vector space over the reals; the absolute value is a norm for both of these structures. Any norm $p$ on a one-dimensional vector space $X$ is equivalent (up to scaling) to the absolute value norm, meaning that there is a norm-preserving isomorphism of vector spaces $f : \mathbb{F} \to X$, where $\mathbb{F}$ is either $\mathbb{R}$ or $\mathbb{C}$, and norm-preserving means that $|x| = p(f(x))$. This isomorphism is given by sending $1 \in \mathbb{F}$ to a vector of norm $1$, which exists since such a vector is obtained by multiplying any non-zero vector by the inverse of its norm.

On the $n$-dimensional Euclidean space $\mathbb{R}^n$, the intuitive notion of length of the vector $\boldsymbol{x} = (x_1, x_2, \ldots, x_n)$ is captured by the formula[10]

$$\|\boldsymbol{x}\|_2 := \sqrt{x_1^2 + \cdots + x_n^2}.$$

This is the Euclidean norm, which gives the ordinary distance from the origin to the point $\boldsymbol{x}$, a consequence of the Pythagorean theorem.
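The definition of the Euclidean norm as the square root of a vector's inner product with itself can be checked directly; the vector below is an arbitrary example:

```python
import numpy as np

# Euclidean norm as sqrt of the inner product of a vector with itself,
# compared against NumPy's built-in 2-norm.
x = np.array([3.0, 4.0, 12.0])
norm_from_inner = np.sqrt(x @ x)   # sqrt(x . x) = sqrt(9 + 16 + 144) = 13
assert np.isclose(norm_from_inner, np.linalg.norm(x))
```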
This operation may also be referred to as "SRSS", an acronym for the square root of the sum of squares.[11] The Euclidean norm is by far the most commonly used norm on $\mathbb{R}^n$,[10] but there are other norms on this vector space, as will be shown below. However, all these norms are equivalent in the sense that they all define the same topology on finite-dimensional spaces.

The inner product of two vectors of a Euclidean vector space is the dot product of their coordinate vectors over an orthonormal basis. Hence, the Euclidean norm can be written in a coordinate-free way as

$$\|\boldsymbol{x}\| := \sqrt{\boldsymbol{x} \cdot \boldsymbol{x}}.$$

The Euclidean norm is also called the quadratic norm, $L^2$ norm,[12] $\ell^2$ norm, 2-norm, or square norm; see $L^p$ space. It defines a distance function called the Euclidean length, $L^2$ distance, or $\ell^2$ distance. The set of vectors in $\mathbb{R}^{n+1}$ whose Euclidean norm is a given positive constant forms an $n$-sphere.

The Euclidean norm of a complex number is its absolute value (also called the modulus), if the complex plane is identified with the Euclidean plane $\mathbb{R}^2$. This identification of the complex number $x + iy$ as a vector in the Euclidean plane makes the quantity $\sqrt{x^2 + y^2}$ (as first suggested by Euler) the Euclidean norm associated with the complex number. For $z = x + iy$, the norm can also be written as $\sqrt{\bar{z}z}$, where $\bar{z}$ is the complex conjugate of $z$.

There are exactly four Euclidean Hurwitz algebras over the real numbers.
These are the real numbers $\mathbb{R}$, the complex numbers $\mathbb{C}$, the quaternions $\mathbb{H}$, and lastly the octonions $\mathbb{O}$, where the dimensions of these spaces over the real numbers are $1, 2, 4,$ and $8$, respectively. The canonical norms on $\mathbb{R}$ and $\mathbb{C}$ are their absolute value functions, as discussed previously. The canonical norm on the quaternions $\mathbb{H}$ is defined by

$$\|q\| = \sqrt{qq^*} = \sqrt{q^*q} = \sqrt{a^2 + b^2 + c^2 + d^2}$$

for every quaternion $q = a + b\,\mathbf{i} + c\,\mathbf{j} + d\,\mathbf{k}$ in $\mathbb{H}$. This is the same as the Euclidean norm on $\mathbb{H}$ considered as the vector space $\mathbb{R}^4$. Similarly, the canonical norm on the octonions is just the Euclidean norm on $\mathbb{R}^8$.

On an $n$-dimensional complex space $\mathbb{C}^n$, the most common norm is

$$\|\boldsymbol{z}\| := \sqrt{|z_1|^2 + \cdots + |z_n|^2} = \sqrt{z_1\bar{z}_1 + \cdots + z_n\bar{z}_n}.$$

In this case, the norm can be expressed as the square root of the inner product of the vector with itself: $\|\boldsymbol{x}\| := \sqrt{\boldsymbol{x}^H \boldsymbol{x}}$, where $\boldsymbol{x}$ is represented as a column vector $[x_1\ x_2\ \dots\ x_n]^{\mathrm{T}}$ and $\boldsymbol{x}^H$ denotes its conjugate transpose. This formula is valid for any inner product space, including Euclidean and complex spaces. For complex spaces, the inner product is equivalent to the complex dot product.
Hence the formula in this case can also be written using the following notation:

$$\|\boldsymbol{x}\| := \sqrt{\boldsymbol{x} \cdot \boldsymbol{x}}.$$

The taxicab norm is defined by

$$\|\boldsymbol{x}\|_1 := \sum_{i=1}^{n} |x_i|.$$

The name relates to the distance a taxi has to drive in a rectangular street grid (like that of the New York borough of Manhattan) to get from the origin to the point $x$. The set of vectors whose 1-norm is a given constant forms the surface of a cross polytope, which has dimension equal to the dimension of the vector space minus 1. The taxicab norm is also called the $\ell^1$ norm. The distance derived from this norm is called the Manhattan distance or $\ell^1$ distance. The 1-norm is simply the sum of the absolute values of the entries. In contrast, $\sum_{i=1}^{n} x_i$ is not a norm because it may yield negative results.

Let $p \ge 1$ be a real number. The $p$-norm (also called the $\ell^p$-norm) of a vector $\mathbf{x} = (x_1, \ldots, x_n)$ is[10]

$$\|\mathbf{x}\|_p := \Bigl(\sum_{i=1}^{n} |x_i|^p\Bigr)^{1/p}.$$

For $p = 1$ we get the taxicab norm, for $p = 2$ we get the Euclidean norm, and as $p$ approaches $\infty$ the $p$-norm approaches the infinity norm or maximum norm:

$$\|\mathbf{x}\|_\infty := \max_i |x_i|.$$

The $p$-norm is related to the generalized mean or power mean.
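The limiting behavior of the $p$-norm toward the maximum norm can be seen numerically; the sample vector below is arbitrary:

```python
import numpy as np

# p-norms of a sample vector, showing convergence to the maximum norm
# as p grows.
def p_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([1.0, -2.0, 3.0])
taxicab = p_norm(x, 1)          # 1 + 2 + 3 = 6
euclid  = p_norm(x, 2)          # sqrt(1 + 4 + 9) = sqrt(14)
big_p   = p_norm(x, 100)        # already indistinguishable from max|x_i| = 3
maximum = np.max(np.abs(x))     # infinity norm
```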
For $p = 2$, the $\|\cdot\|_2$-norm is even induced by a canonical inner product $\langle \cdot, \cdot \rangle$, meaning that $\|\mathbf{x}\|_2 = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}$ for all vectors $\mathbf{x}$. This inner product can be expressed in terms of the norm by using the polarization identity. On $\ell^2$, this inner product is the Euclidean inner product defined by

$$\langle (x_n)_n, (y_n)_n \rangle_{\ell^2} = \sum_n \overline{x_n}\, y_n,$$

while for the space $L^2(X, \mu)$ associated with a measure space $(X, \Sigma, \mu)$, which consists of all square-integrable functions, this inner product is

$$\langle f, g \rangle_{L^2} = \int_X \overline{f(x)}\, g(x)\, \mathrm{d}x.$$

This definition is still of some interest for $0 < p < 1$, but the resulting function does not define a norm,[13] because it violates the triangle inequality. What is true for this case of $0 < p < 1$, even in the measurable analog, is that the corresponding $L^p$ class is a vector space, and it is also true that the function

$$\int_X |f(x) - g(x)|^p \, \mathrm{d}\mu$$

(without the $p$th root) defines a distance that makes $L^p(X)$ into a complete metric topological vector space. These spaces are of great interest in functional analysis, probability theory, and harmonic analysis. However, aside from trivial cases, this topological vector space is not locally convex, and has no continuous non-zero linear forms. Thus the topological dual space contains only the zero functional.
The partial derivative of the $p$-norm is given by

$$\frac{\partial}{\partial x_k}\|\mathbf{x}\|_p = \frac{x_k\, |x_k|^{p-2}}{\|\mathbf{x}\|_p^{p-1}}.$$

The derivative with respect to $\mathbf{x}$, therefore, is

$$\frac{\partial \|\mathbf{x}\|_p}{\partial \mathbf{x}} = \frac{\mathbf{x} \circ |\mathbf{x}|^{p-2}}{\|\mathbf{x}\|_p^{p-1}},$$

where $\circ$ denotes the Hadamard product and $|\cdot|$ is used for the absolute value of each component of the vector. For the special case of $p = 2$, this becomes

$$\frac{\partial}{\partial x_k}\|\mathbf{x}\|_2 = \frac{x_k}{\|\mathbf{x}\|_2}, \qquad \text{or} \qquad \frac{\partial}{\partial \mathbf{x}}\|\mathbf{x}\|_2 = \frac{\mathbf{x}}{\|\mathbf{x}\|_2}.$$

If $\mathbf{x}$ is some vector such that $\mathbf{x} = (x_1, x_2, \ldots, x_n)$, then

$$\|\mathbf{x}\|_\infty := \max(|x_1|, \ldots, |x_n|).$$

The set of vectors whose infinity norm is a given constant, $c$, forms the surface of a hypercube with edge length $2c$.

The energy norm[14] of a vector $\boldsymbol{x} = (x_1, x_2, \ldots, x_n) \in \mathbb{R}^n$ is defined in terms of a symmetric positive definite matrix $A \in \mathbb{R}^{n \times n}$ as

$$\|\boldsymbol{x}\|_A := \sqrt{\boldsymbol{x}^T A\, \boldsymbol{x}}.$$

It is clear that if $A$ is the identity matrix, this norm corresponds to the Euclidean norm. If $A$ is diagonal, this norm is also called a weighted norm.
The energy norm is induced by the inner product given by $\langle \boldsymbol{x}, \boldsymbol{y} \rangle_A := \boldsymbol{x}^T A\, \boldsymbol{y}$ for $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}^n$. In general, the value of the norm depends on the spectrum of $A$: for a vector $\boldsymbol{x}$ with a Euclidean norm of one, the value of $\|\boldsymbol{x}\|_A$ is bounded from below and above by the square roots of the smallest and largest eigenvalues of $A$ respectively, where the bounds are attained if $\boldsymbol{x}$ coincides with the corresponding (normalized) eigenvectors. Based on the symmetric matrix square root $A^{1/2}$, the energy norm of a vector can be written in terms of the standard Euclidean norm as

$$\|\boldsymbol{x}\|_A = \|A^{1/2}\boldsymbol{x}\|_2.$$

In probability and functional analysis, the zero norm induces a complete metric topology for the space of measurable functions and for the F-space of sequences with F-norm $(x_n) \mapsto \sum_n 2^{-n} x_n/(1 + x_n)$.[15] Here by F-norm we mean some real-valued function $\|\cdot\|$ on an F-space with distance $d$, such that $\|x\| = d(x, 0)$. The F-norm described above is not a norm in the usual sense because it lacks the required homogeneity property.

In metric geometry, the discrete metric takes the value one for distinct points and zero otherwise. When applied coordinate-wise to the elements of a vector space, the discrete distance defines the Hamming distance, which is important in coding and information theory.
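The factored form of the energy norm can be checked numerically. The sketch below uses a Cholesky factor $L$ (with $A = LL^T$) in place of the symmetric square root $A^{1/2}$, which gives the same value since $\|L^T\boldsymbol{x}\|_2^2 = \boldsymbol{x}^T L L^T \boldsymbol{x} = \boldsymbol{x}^T A \boldsymbol{x}$; the matrix and vector are invented for illustration:

```python
import numpy as np

# Energy norm ||x||_A = sqrt(x^T A x) for a symmetric positive definite A,
# checked against a factored form based on the Cholesky factor A = L L^T.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

energy = np.sqrt(x @ A @ x)          # sqrt(x^T A x) = sqrt(20) here
L = np.linalg.cholesky(A)            # A = L L^T
factored = np.linalg.norm(L.T @ x)   # ||L^T x||_2 equals ||x||_A
```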
In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in the non-zero point; indeed, the distance from zero remains one as its non-zero argument approaches zero. However, the discrete distance of a number from zero does satisfy the other properties of a norm, namely the triangle inequality and positive definiteness. When applied component-wise to vectors, the discrete distance from zero behaves like a non-homogeneous "norm", which counts the number of non-zero components in its vector argument; again, this non-homogeneous "norm" is discontinuous.

In signal processing and statistics, David Donoho referred to the zero "norm" with quotation marks. Following Donoho's notation, the zero "norm" of $x$ is simply the number of non-zero coordinates of $x$, or the Hamming distance of the vector from zero. When this "norm" is localized to a bounded set, it is the limit of $p$-norms as $p$ approaches 0. Of course, the zero "norm" is not truly a norm, because it is not positive homogeneous. Indeed, it is not even an F-norm in the sense described above, since it is discontinuous, jointly and severally, with respect to the scalar argument in scalar–vector multiplication and with respect to its vector argument. Abusing terminology, some engineers[who?] omit Donoho's quotation marks and inappropriately call the number-of-non-zeros function the $L^0$ norm, echoing the notation for the Lebesgue space of measurable functions.
The generalization of the above norms to an infinite number of components leads to the $\ell^p$ and $L^p$ spaces for $p \ge 1$, with norms

$$\|x\|_p = \Bigl(\sum_{i \in \mathbb{N}} |x_i|^p\Bigr)^{1/p} \quad \text{and} \quad \|f\|_{p,X} = \Bigl(\int_X |f(x)|^p \, \mathrm{d}x\Bigr)^{1/p}$$

for complex-valued sequences and functions on $X \subseteq \mathbb{R}^n$ respectively, which can be further generalized (see Haar measure). These norms are also valid in the limit as $p \to +\infty$, giving a supremum norm, and are called $\ell^\infty$ and $L^\infty$. Any inner product induces in a natural way the norm $\|x\| := \sqrt{\langle x, x \rangle}$. Other examples of infinite-dimensional normed vector spaces can be found in the Banach space article. Generally, these norms do not give the same topologies. For example, an infinite-dimensional $\ell^p$ space gives a strictly finer topology than an infinite-dimensional $\ell^q$ space when $p < q$.

Other norms on $\mathbb{R}^n$ can be constructed by combining the above; for example,

$$\|x\| := 2|x_1| + \sqrt{3|x_2|^2 + \max(|x_3|, 2|x_4|)^2}$$

is a norm on $\mathbb{R}^4$. For any norm and any injective linear transformation $A$ we can define a new norm of $x$, equal to $\|Ax\|$. In 2D, with $A$ a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. Each $A$ applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size, and orientation.
In 3D, this is similar but different for the 1-norm (octahedra) and the maximum norm (prisms with parallelogram base). There are examples of norms that are not defined by "entrywise" formulas. For instance, the Minkowski functional of a centrally symmetric convex body in $\mathbb{R}^n$ (centered at zero) defines a norm on $\mathbb{R}^n$ (see § Classification of seminorms: absolutely convex absorbing sets below). All the above formulas also yield norms on $\mathbb{C}^n$ without modification. There are also norms on spaces of matrices (with real or complex entries), the so-called matrix norms.

Let $E$ be a finite extension of a field $k$ of inseparable degree $p^\mu$, and let $k$ have algebraic closure $K$. If the distinct embeddings of $E$ are $\{\sigma_j\}_j$, then the Galois-theoretic norm of an element $\alpha \in E$ is the value $\bigl(\prod_j \sigma_j(\alpha)\bigr)^{p^\mu}$. As that function is homogeneous of degree $[E : k]$, the Galois-theoretic norm is not a norm in the sense of this article. However, the $[E : k]$-th root of the norm (assuming that concept makes sense) is a norm.[16]

The concept of norm $N(z)$ in composition algebras does not share the usual properties of a norm, since null vectors are allowed. A composition algebra $(A, {}^*, N)$ consists of an algebra over a field $A$, an involution ${}^*$, and a quadratic form $N(z) = zz^*$ called the "norm".
The characteristic feature of composition algebras is the homomorphism property of $N$: for the product $wz$ of two elements $w$ and $z$ of the composition algebra, its norm satisfies $N(wz) = N(w)N(z)$. In the case of the division algebras $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$, and $\mathbb{O}$, the composition algebra norm is the square of the norm discussed above. In those cases the norm is a definite quadratic form. In the split algebras the norm is an isotropic quadratic form.

For any norm $p : X \to \mathbb{R}$ on a vector space $X$, the reverse triangle inequality holds:

$$p(x \pm y) \ge |p(x) - p(y)| \quad \text{for all } x, y \in X.$$

If $u : X \to Y$ is a continuous linear map between normed spaces, then the norm of $u$ and the norm of the transpose of $u$ are equal.[17] For the $L^p$ norms, we have Hölder's inequality[18]

$$|\langle x, y \rangle| \le \|x\|_p \|y\|_q, \qquad \frac{1}{p} + \frac{1}{q} = 1.$$

A special case of this is the Cauchy–Schwarz inequality:[18]

$$|\langle x, y \rangle| \le \|x\|_2 \|y\|_2.$$

Every norm is a seminorm and thus satisfies all properties of the latter. In turn, every seminorm is a sublinear function and thus satisfies all properties of the latter. In particular, every norm is a convex function.

The concept of unit circle (the set of all vectors of norm 1) differs between norms: for the 1-norm, the unit circle is a square oriented as a diamond; for the 2-norm (Euclidean norm), it is the well-known unit circle; while for the infinity norm, it is an axis-aligned square. For any $p$-norm, it is a superellipse with congruent axes (see the accompanying illustration).
Due to the definition of the norm, the unit circle must beconvexand centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, andp≥1{\displaystyle p\geq 1}for ap{\displaystyle p}-norm). In terms of the vector space, the seminorm defines atopologyon the space, and this is aHausdorfftopology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm. The topology thus defined (by either a norm or a seminorm) can be understood either in terms of sequences or open sets. Asequenceof vectors{vn}{\displaystyle \{v_{n}\}}is said toconvergein norm tov,{\displaystyle v,}if‖vn−v‖→0{\displaystyle \left\|v_{n}-v\right\|\to 0}asn→∞.{\displaystyle n\to \infty .}Equivalently, the topology consists of all sets that can be represented as a union of openballs. If(X,‖⋅‖){\displaystyle (X,\|\cdot \|)}is a normed space then[19]‖x−y‖=‖x−z‖+‖z−y‖for allx,y∈Xandz∈[x,y].{\displaystyle \|x-y\|=\|x-z\|+\|z-y\|{\text{ for all }}x,y\in X{\text{ and }}z\in [x,y].} Two norms‖⋅‖α{\displaystyle \|\cdot \|_{\alpha }}and‖⋅‖β{\displaystyle \|\cdot \|_{\beta }}on a vector spaceX{\displaystyle X}are calledequivalentif they induce the same topology,[8]which happens if and only if there exist positive real numbersC{\displaystyle C}andD{\displaystyle D}such that for allx∈X{\displaystyle x\in X}C‖x‖α≤‖x‖β≤D‖x‖α.{\displaystyle C\|x\|_{\alpha }\leq \|x\|_{\beta }\leq D\|x\|_{\alpha }.}For instance, ifp>r≥1{\displaystyle p>r\geq 1}onCn,{\displaystyle \mathbb {C} ^{n},}then[20]‖x‖p≤‖x‖r≤n(1/r−1/p)‖x‖p.{\displaystyle \|x\|_{p}\leq \|x\|_{r}\leq n^{(1/r-1/p)}\|x\|_{p}.} In particular,‖x‖2≤‖x‖1≤n‖x‖2{\displaystyle \|x\|_{2}\leq \|x\|_{1}\leq {\sqrt {n}}\|x\|_{2}}‖x‖∞≤‖x‖2≤n‖x‖∞{\displaystyle \|x\|_{\infty }\leq \|x\|_{2}\leq {\sqrt {n}}\|x\|_{\infty }}‖x‖∞≤‖x‖1≤n‖x‖∞,{\displaystyle \|x\|_{\infty }\leq \|x\|_{1}\leq n\|x\|_{\infty },}That is,‖x‖∞≤‖x‖2≤‖x‖1≤n‖x‖2≤n‖x‖∞.{\displaystyle \|x\|_{\infty }\leq 
\|x\|_{2}\leq \|x\|_{1}\leq {\sqrt {n}}\|x\|_{2}\leq n\|x\|_{\infty }.}If the vector space is a finite-dimensional real or complex one, all norms are equivalent. On the other hand, in the case of infinite-dimensional vector spaces, not all norms are equivalent. Equivalent norms define the same notions of continuity and convergence and for many purposes do not need to be distinguished. More precisely, the uniform structures defined by equivalent norms on the vector space are uniformly isomorphic. All seminorms on a vector spaceX{\displaystyle X}can be classified in terms ofabsolutely convexabsorbing subsetsA{\displaystyle A}ofX.{\displaystyle X.}To each such subset corresponds a seminormpA{\displaystyle p_{A}}called thegaugeofA,{\displaystyle A,}defined aspA(x):=inf{r∈R:r>0,x∈rA}{\displaystyle p_{A}(x):=\inf\{r\in \mathbb {R} :r>0,x\in rA\}}whereinf{\displaystyle \inf _{}}is theinfimum, with the property that{x∈X:pA(x)<1}⊆A⊆{x∈X:pA(x)≤1}.{\displaystyle \left\{x\in X:p_{A}(x)<1\right\}~\subseteq ~A~\subseteq ~\left\{x\in X:p_{A}(x)\leq 1\right\}.}Anylocally convex topological vector spacehas alocal basisconsisting of absolutely convex sets. A common method to construct such a basis is to use a family(p){\displaystyle (p)}of seminormsp{\displaystyle p}thatseparates points: the collection of all finite intersections of sets{p<1/n}{\displaystyle \{p<1/n\}}turns the space into alocally convex topological vector spaceso that everyp{\displaystyle p}is continuous. Such a method is used to designweak and weak* topologies.
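As a numerical sanity check, the Hölder inequality quoted earlier and the chain of equivalence inequalities above can be verified directly. A minimal sketch, assuming NumPy; the vectors are arbitrary demonstration values and the exponent pair p = 3, q = 3/2 is just one conjugate choice:

```python
import numpy as np

def p_norm(v, p):
    """The l^p norm of a vector, for 1 <= p < infinity."""
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal(n)          # arbitrary vectors for the demo
y = rng.standard_normal(n)

# Hölder's inequality: |<x, y>| <= ||x||_p ||y||_q with 1/p + 1/q = 1,
# and its special case p = q = 2, the Cauchy–Schwarz inequality.
p, q = 3.0, 1.5
assert abs(np.dot(x, y)) <= p_norm(x, p) * p_norm(y, q)
assert abs(np.dot(x, y)) <= p_norm(x, 2) * p_norm(y, 2)

# Equivalence chain on R^n:
# ||x||_inf <= ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2 <= n ||x||_inf
norm_1, norm_2 = p_norm(x, 1), p_norm(x, 2)
norm_inf = np.max(np.abs(x))
assert norm_inf <= norm_2 <= norm_1 <= np.sqrt(n) * norm_2 <= n * norm_inf
```

The assertions hold for every choice of x and y, which is exactly the content of the inequalities; the random seed only fixes one concrete instance.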
https://en.wikipedia.org/wiki/L2_norm
Least-squares spectral analysis(LSSA) is a method of estimating afrequency spectrumbased on aleast-squaresfit ofsinusoidsto data samples, similar toFourier analysis.[1][2]Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in the long and gapped records; LSSA mitigates such problems.[3]Unlike in Fourier analysis, data need not be equally spaced to use LSSA. Developed in 1969[4]and 1971,[5]LSSA is also known as theVaníček methodand theGauss-Vaniček methodafterPetr Vaníček,[6][7]and as theLomb method[3]or theLomb–Scargle periodogram,[2][8]based on the simplifications first by Nicholas R. Lomb[9]and then by Jeffrey D. Scargle.[10] The close connections betweenFourier analysis, theperiodogram, and theleast-squaresfitting of sinusoids have been known for a long time.[11]However, most developments are restricted to complete data sets of equally spaced samples. In 1963, Freek J. M. Barning ofMathematisch Centrum, Amsterdam, handled unequally spaced data by similar techniques,[12]including both a periodogram analysis equivalent to what nowadays is called the Lomb method and least-squares fitting of selected frequencies of sinusoids determined from such periodograms — and connected by a procedure known today as thematching pursuitwith post-back fitting[13]or the orthogonal matching pursuit.[14] Petr Vaníček, a Canadiangeophysicistandgeodesistof theUniversity of New Brunswick, proposed in 1969 also the matching-pursuit approach for equally and unequally spaced data, which he called "successive spectral analysis" and the result a "least-squares periodogram".[4]He generalized this method to account for any systematic components beyond a simple mean, such as a "predicted linear (quadratic, exponential, ...) secular trend of unknown magnitude", and applied it to a variety of samples, in 1971.[5] Vaníček's strictly least-squares method was then simplified in 1976 by Nicholas R. 
Lomb of theUniversity of Sydney, who pointed out its close connection toperiodogramanalysis.[9]Subsequently, the definition of a periodogram of unequally spaced data was modified and analyzed by Jeffrey D. Scargle ofNASA Ames Research Center,[10]who showed that, with minor changes, it becomes identical to Lomb's least-squares formula for fitting individual sinusoid frequencies. Scargle states that his paper "does not introduce a new detection technique, but instead studies the reliability and efficiency of detection with the most commonly used technique, the periodogram, in the case where the observation times areunevenly spaced," and further points out regarding least-squares fitting of sinusoids compared to periodogram analysis, that his paper "establishes, apparently for the first time, that (with the proposed modifications) these two methods are exactly equivalent."[10] Press[3]summarizes the development this way: A completely different method of spectral analysis for unevenly sampled data, one that mitigates these difficulties and has some other very desirable properties, was developed by Lomb, based in part on earlier work by Barning and Vanicek, and additionally elaborated by Scargle. In 1989, Michael J. Korenberg ofQueen's Universityin Kingston, Ontario, developed the "fast orthogonal search" method of more quickly finding a near-optimal decomposition of spectra or other problems,[15]similar to the technique that later became known as the orthogonal matching pursuit. In the Vaníček method, a discrete data set is approximated by a weighted sum of sinusoids of progressively determined frequencies using a standardlinear regressionorleast-squaresfit.[16]The frequencies are chosen using a method similar to Barning's, but going further in optimizing the choice of each successive new frequency by picking the frequency that minimizes the residual after least-squares fitting (equivalent to the fitting technique now known asmatching pursuitwith pre-backfitting[13]). 
The number of sinusoids must be less than or equal to the number of data samples (counting sines and cosines of the same frequency as separate sinusoids). A data vectorΦis represented as a weighted sum of sinusoidal basis functions, tabulated in a matrixAby evaluating each function at the sample times, with weight vectorx:Φ≈Ax,{\displaystyle \Phi \approx A\mathbf {x} ,}where the weight vectorxis chosen to minimize the sum of squared errors in approximatingΦ. The solution forxis closed-form, using standardlinear regression:[17]x=(ATA)−1ATΦ.{\displaystyle \mathbf {x} =\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}\Phi .}Here the matrixAcan be based on any set of functions that are mutually independent (not necessarily orthogonal) when evaluated at the sample times; functions used for spectral analysis are typically sines and cosines evenly distributed over the frequency range of interest. If we choose too many frequencies in a too-narrow frequency range, the functions will be insufficiently independent, the matrix ill-conditioned, and the resulting spectrum meaningless.[17] When the basis functions inAare orthogonal (that is, not correlated, meaning the columns have zero pair-wisedot products), the matrixATAis diagonal; when the columns all have the same power (sum of squares of elements), then that matrix is anidentity matrixtimes a constant, so the inversion is trivial. The latter is the case when the sample times are equally spaced and the sinusoids are chosen as sines and cosines equally spaced in pairs on the frequency interval 0 to a half cycle per sample (spaced by 1/N cycles per sample, omitting the sine phases at 0 and at the maximum frequency, where they are identically zero).
This case is known as thediscrete Fourier transform, slightly rewritten in terms of measurements and coefficients.[17] In 1976, seeking to lower the computational burden of the Vaníček method (no longer an issue today), Lomb[9]proposed using the above simplification in general, except for pair-wise correlations between sine and cosine bases of the same frequency, since the correlations between pairs of sinusoids are often small, at least when they are not tightly spaced. This formulation is essentially that of the traditionalperiodogrambut adapted for use with unevenly spaced samples. The vectorxis a reasonably good estimate of an underlying spectrum, but since we ignore any correlations,Axis no longer a good approximation to the signal, and the method is no longer a least-squares method, yet it continues to be referred to as such in the literature. Rather than just taking dot products of the data with sine and cosine waveforms directly, Scargle modified the standard periodogram formula so as to find a time delayτ{\displaystyle \tau }first, such that this pair of sinusoids would be mutually orthogonal at the sample timestj{\displaystyle t_{j}}, and also adjusted for the potentially unequal powers of these two basis functions, to obtain a better estimate of the power at a frequency.[3][10]This procedure made his modified periodogram method exactly equivalent to Lomb's method.
The time delayτ{\displaystyle \tau }is by definition given bytan⁡2ωτ=∑jsin⁡2ωtj∑jcos⁡2ωtj.{\displaystyle \tan 2\omega \tau ={\frac {\sum _{j}\sin 2\omega t_{j}}{\sum _{j}\cos 2\omega t_{j}}}.}Then the periodogram at frequencyω{\displaystyle \omega }is estimated as:Px(ω)=12([∑jXjcos⁡ω(tj−τ)]2∑jcos2⁡ω(tj−τ)+[∑jXjsin⁡ω(tj−τ)]2∑jsin2⁡ω(tj−τ)),{\displaystyle P_{x}(\omega )={\frac {1}{2}}\left({\frac {\left[\sum _{j}X_{j}\cos \omega (t_{j}-\tau )\right]^{2}}{\sum _{j}\cos ^{2}\omega (t_{j}-\tau )}}+{\frac {\left[\sum _{j}X_{j}\sin \omega (t_{j}-\tau )\right]^{2}}{\sum _{j}\sin ^{2}\omega (t_{j}-\tau )}}\right),}which, as Scargle reports, has the same statistical distribution as the periodogram in the evenly sampled case.[10] At any individual frequencyω{\displaystyle \omega }, this method gives the same power as does a least-squares fit to sinusoids of that frequency, of the formy(t)=acos⁡ωt+bsin⁡ωt.{\displaystyle y(t)=a\cos \omega t+b\sin \omega t.}In practice, it is always difficult to judge whether a given Lomb peak is significant or not, especially when the nature of the noise is unknown; for example, a false-alarm spectral peak in the Lomb periodogram analysis of a noisy periodic signal may result from noise in turbulence data.[19]Fourier methods can also report false spectral peaks when analyzing patched-up or otherwise edited data.[7] The standard Lomb–Scargle periodogram is only valid for a model with a zero mean. Commonly, this is handled by subtracting the mean of the data before calculating the periodogram. However, this is an inaccurate assumption when the mean of the model (the fitted sinusoids) is non-zero. ThegeneralizedLomb–Scargle periodogram removes this assumption and explicitly solves for the mean. In this case, the function fitted isy(t)=acos⁡ωt+bsin⁡ωt+c.{\displaystyle y(t)=a\cos \omega t+b\sin \omega t+c.}The generalized Lomb–Scargle periodogram has also been referred to in the literature as afloating mean periodogram.[21] Michael Korenberg ofQueen's UniversityinKingston, Ontario, developed a method for choosing a sparse set of components from an over-complete set, such as sinusoidal components for spectral analysis, called the fast orthogonal search (FOS). Mathematically, FOS uses a slightly modifiedCholesky decompositionin a mean-square error reduction (MSER) process, implemented as asparse matrixinversion.[15][22]As with the other LSSA methods, FOS avoids the major shortcoming of discrete Fourier analysis, so it can accurately identify embedded periodicities and excel with unequally spaced data.
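A direct implementation of the τ-shifted periodogram is short. The sketch below (NumPy; the sample times, frequency grid, and test signal are invented demo values, and no fast-evaluation tricks are used) follows the standard Lomb–Scargle definitions: τ from tan 2ωτ = Σⱼ sin 2ωtⱼ / Σⱼ cos 2ωtⱼ, then the two normalized quadratic terms:

```python
import numpy as np

def lomb_scargle(t, y, omegas):
    """Classical Lomb–Scargle periodogram; the data mean is subtracted first."""
    y = y - np.mean(y)
    power = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        # Time delay tau: tan(2 w tau) = sum(sin 2wt) / sum(cos 2wt)
        tau = np.arctan2(np.sum(np.sin(2.0 * w * t)),
                         np.sum(np.cos(2.0 * w * t))) / (2.0 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power[i] = 0.5 * (np.dot(y, c) ** 2 / np.dot(c, c) +
                          np.dot(y, s) ** 2 / np.dot(s, s))
    return power

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 20.0, 100))     # unevenly spaced sample times
w_true = 2.0 * np.pi * 0.5                   # demo angular frequency
y = np.sin(w_true * t)

omegas = 2.0 * np.pi * np.linspace(0.1, 1.0, 46)
power = lomb_scargle(t, y, omegas)
assert np.isclose(omegas[np.argmax(power)], w_true, rtol=0.05)
```

On this noise-free, unevenly sampled sinusoid the peak lands at the true frequency; assessing the significance of a peak in noisy data is a separate question, as the text notes.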
The fast orthogonal search method was also applied to other problems, such asnonlinear system identification. Palmer has developed a method for finding the best-fit function to any chosen number of harmonics, allowing more freedom to find non-sinusoidal harmonic functions.[23]His is a fast (FFT-based) technique forweighted least-squares analysison arbitrarily spaced data with non-uniform standard errors. Source code that implements this technique is available.[24]Because data are often not sampled at uniformly spaced discrete times, this method "grids" the data by sparsely filling a time series array at the sample times. All intervening grid points receive zero statistical weight, equivalent to having infinite error bars at times between samples. The most useful feature of LSSA is enabling incomplete records to bespectrallyanalyzed — without the need tomanipulatedata or to invent otherwise non-existent data. Magnitudesin the LSSAspectrumdepict the contribution of a frequency or period to thevarianceof thetime series.[4]Generally, spectral magnitudes thus defined enable the output's straightforwardsignificance levelregime.[25]Alternatively, spectral magnitudes in the Vaníček spectrum can also be expressed indB.[26]Note that spectral magnitudes in the Vaníček spectrum followβ-distribution.[27] Inverse transformation of Vaníček's LSSA is possible, as is most easily seen by writing the forward transform as a matrix; the matrix inverse (when the matrix is not singular) or pseudo-inverse will then be an inverse transformation; the inverse will exactly match the original data if the chosen sinusoids are mutually independent at the sample points and their number is equal to the number of data points.[17]No such inverse procedure is known for the periodogram method. The LSSA can be implemented in less than a page ofMATLABcode.[28]In essence:[16] "to compute the least-squares spectrum we must computemspectral values ... 
which involves performing the least-squares approximationmtimes, each time to get [the spectral power] for a different frequency" I.e., for each frequency in a desired set of frequencies,sineandcosinefunctions are evaluated at the times corresponding to the data samples, anddot productsof the datavectorwith the sinusoid vectors are taken and appropriately normalized; following the method known as Lomb/Scargle periodogram, a time shift is calculated for each frequency to orthogonalize the sine and cosine components before the dot product;[17]finally, a power is computed from those twoamplitudecomponents. This same process implements adiscrete Fourier transformwhen the data are uniformly spaced in time and the frequencies chosen correspond to integer numbers of cycles over the finite data record. This method treats each sinusoidal component independently, or out of context, even though they may not be orthogonal to data points; it is Vaníček's original method. In addition, it is possible to perform a full simultaneous or in-context least-squares fit by solving a matrix equation and partitioning the total data variance between the specified sinusoid frequencies.[17]Such a matrix least-squares solution is natively available in MATLAB as thebackslashoperator.[29] Furthermore, the simultaneous or in-context method, as opposed to the independent or out-of-context version (as well as the periodogram version due to Lomb), cannot fit more components (sines and cosines) than there are data samples, so that:[17] "...serious repercussions can also arise if the selected frequencies result in some of the Fourier components (trig functions) becoming nearly linearly dependent with each other, thereby producing an ill-conditioned or near singular N. 
To avoid such ill conditioning it becomes necessary to either select a different set of frequencies to be estimated (e.g., equally spaced frequencies) or simply neglect the correlations in N (i.e., the off-diagonal blocks) and estimate the inverse least squares transform separately for the individual frequencies..." Lomb's periodogram method, on the other hand, can use an arbitrarily high number, or density, of frequency components, as in a standardperiodogram; that is, the frequency domain can be over-sampled by an arbitrary factor.[3]However, as mentioned above, one should keep in mind that Lomb's simplification, by diverging from the least-squares criterion, opened his technique up to grave sources of error, resulting even in false spectral peaks.[19] In Fourier analysis, such as theFourier transformand thediscrete Fourier transform, the sinusoids fitted to data are all mutually orthogonal, so there is no distinction between the simple out-of-context dot-product-based projection onto basis functions and an in-context simultaneous least-squares fit; that is, no matrix inversion is required to least-squares partition the variance between orthogonal sinusoids of different frequencies.[30]In the past, Fourier analysis was for many the method of choice, thanks to its processing-efficientfast Fourier transformimplementation, when complete data records with equally spaced samples were available; the Fourier family of techniques was used to analyze gapped records as well, which, however, required manipulating and even inventing non-existent data just to be able to run a Fourier-based algorithm.
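The orthogonality point can be seen concretely: with equally spaced samples and sinusoids at integer numbers of cycles over the record, the out-of-context dot-product projection and the in-context simultaneous least-squares fit return identical coefficients. A NumPy sketch (the cycle counts and amplitudes are arbitrary demo values; `np.linalg.lstsq` plays the role of a simultaneous matrix least-squares solve):

```python
import numpy as np

# Equally spaced samples, integer cycle counts over the record: the sinusoid
# columns are mutually orthogonal, so the per-column dot-product projection
# (out-of-context) agrees with a simultaneous least-squares fit (in-context).
N = 64
t = np.arange(N)
ks = [3, 7]                                  # arbitrary integer cycle counts
cols = []
for k in ks:
    cols.append(np.cos(2.0 * np.pi * k * t / N))
    cols.append(np.sin(2.0 * np.pi * k * t / N))
A = np.column_stack(cols)

# Demo signal built from two of the basis sinusoids.
y = 1.5 * A[:, 0] - 0.5 * A[:, 3]

x_in = np.linalg.lstsq(A, y, rcond=None)[0]      # simultaneous (in-context) fit
x_out = (A.T @ y) / np.sum(A * A, axis=0)        # independent projections

assert np.allclose(x_in, x_out)
assert np.allclose(x_in, [1.5, 0.0, 0.0, -0.5])
```

With unevenly spaced samples the columns are no longer orthogonal, the two coefficient vectors differ, and only the simultaneous solve remains a true least-squares fit.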
https://en.wikipedia.org/wiki/Least-squares_spectral_analysis
Inlinear algebraandfunctional analysis, aprojectionis alinear transformationP{\displaystyle P}from avector spaceto itself (anendomorphism) such thatP∘P=P{\displaystyle P\circ P=P}. That is, wheneverP{\displaystyle P}is applied twice to any vector, it gives the same result as if it were applied once (i.e.P{\displaystyle P}isidempotent). It leaves itsimageunchanged.[1]This definition of "projection" formalizes and generalizes the idea ofgraphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection onpointsin the object. Aprojectionon a vector spaceV{\displaystyle V}is a linear operatorP:V→V{\displaystyle P\colon V\to V}such thatP2=P{\displaystyle P^{2}=P}. WhenV{\displaystyle V}has aninner productand iscomplete, i.e. whenV{\displaystyle V}is aHilbert space, the concept oforthogonalitycan be used. A projectionP{\displaystyle P}on a Hilbert spaceV{\displaystyle V}is called anorthogonal projectionif it satisfies⟨Px,y⟩=⟨x,Py⟩{\displaystyle \langle P\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,P\mathbf {y} \rangle }for allx,y∈V{\displaystyle \mathbf {x} ,\mathbf {y} \in V}. A projection on a Hilbert space that is not orthogonal is called anoblique projection. Theeigenvaluesof a projection matrix must be 0 or 1. For example, the function which maps the point(x,y,z){\displaystyle (x,y,z)}in three-dimensional spaceR3{\displaystyle \mathbb {R} ^{3}}to the point(x,y,0){\displaystyle (x,y,0)}is an orthogonal projection onto thexy-plane. 
This function is represented by the matrixP=[100010000].{\displaystyle P={\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}}.} The action of this matrix on an arbitraryvectorisP[xyz]=[xy0].{\displaystyle P{\begin{bmatrix}x\\y\\z\end{bmatrix}}={\begin{bmatrix}x\\y\\0\end{bmatrix}}.} To see thatP{\displaystyle P}is indeed a projection, i.e.,P=P2{\displaystyle P=P^{2}}, we computeP2[xyz]=P[xy0]=[xy0]=P[xyz].{\displaystyle P^{2}{\begin{bmatrix}x\\y\\z\end{bmatrix}}=P{\begin{bmatrix}x\\y\\0\end{bmatrix}}={\begin{bmatrix}x\\y\\0\end{bmatrix}}=P{\begin{bmatrix}x\\y\\z\end{bmatrix}}.} Observing thatPT=P{\displaystyle P^{\mathrm {T} }=P}shows that the projection is an orthogonal projection. A simple example of a non-orthogonal (oblique) projection isP=[00α1].{\displaystyle P={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}.} Viamatrix multiplication, one sees thatP2=[00α1][00α1]=[00α1]=P.{\displaystyle P^{2}={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}{\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}=P.}showing thatP{\displaystyle P}is indeed a projection. The projectionP{\displaystyle P}is orthogonalif and only ifα=0{\displaystyle \alpha =0}because only thenPT=P.{\displaystyle P^{\mathrm {T} }=P.} By definition, a projectionP{\displaystyle P}isidempotent(i.e.P2=P{\displaystyle P^{2}=P}). Every projection is anopen maponto its image, meaning that it maps eachopen setin thedomainto an open set in thesubspace topologyof theimage.[citation needed]That is, for any vectorx{\displaystyle \mathbf {x} }and any ballBx{\displaystyle B_{\mathbf {x} }}(with positive radius) centered onx{\displaystyle \mathbf {x} }, there exists a ballBPx{\displaystyle B_{P\mathbf {x} }}(with positive radius) centered onPx{\displaystyle P\mathbf {x} }that is wholly contained in the imageP(Bx){\displaystyle P(B_{\mathbf {x} })}. LetW{\displaystyle W}be a finite-dimensional vector space andP{\displaystyle P}be a projection onW{\displaystyle W}. 
Suppose thesubspacesU{\displaystyle U}andV{\displaystyle V}are theimageandkernelofP{\displaystyle P}respectively. ThenP{\displaystyle P}has the following properties: The image and kernel of a projection arecomplementary, as areP{\displaystyle P}andQ=I−P{\displaystyle Q=I-P}. The operatorQ{\displaystyle Q}is also a projection as the image and kernel ofP{\displaystyle P}become the kernel and image ofQ{\displaystyle Q}and vice versa. We sayP{\displaystyle P}is a projection alongV{\displaystyle V}ontoU{\displaystyle U}(kernel/image) andQ{\displaystyle Q}is a projection alongU{\displaystyle U}ontoV{\displaystyle V}. In infinite-dimensional vector spaces, thespectrumof a projection is contained in{0,1}{\displaystyle \{0,1\}}as(λI−P)−1=1λI+1λ(λ−1)P.{\displaystyle (\lambda I-P)^{-1}={\frac {1}{\lambda }}I+{\frac {1}{\lambda (\lambda -1)}}P.}Only 0 or 1 can be aneigenvalueof a projection. This implies that an orthogonal projectionP{\displaystyle P}is always apositive semi-definite matrix. In general, the correspondingeigenspacesare (respectively) the kernel and range of the projection. Decomposition of a vector space into direct sums is not unique. Therefore, given a subspaceV{\displaystyle V}, there may be many projections whose range (or kernel) isV{\displaystyle V}. If a projection is nontrivial it hasminimal polynomialx2−x=x(x−1){\displaystyle x^{2}-x=x(x-1)}, which factors into distinct linear factors, and thusP{\displaystyle P}isdiagonalizable. The product of projections is not in general a projection, even if they are orthogonal. If two projectionscommutethen their product is a projection, but theconverseis false: the product of two non-commuting projections may be a projection. If two orthogonal projections commute then their product is an orthogonal projection. 
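The example matrices from earlier in this section can be checked numerically, together with the complement Q = I − P just described (a NumPy sketch; α = 2 is an arbitrary nonzero value):

```python
import numpy as np

# Orthogonal projection onto the xy-plane, and its complement Q = I - P.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
Q = np.eye(3) - P
assert np.allclose(P @ P, P)            # idempotent
assert np.allclose(P.T, P)              # symmetric: an orthogonal projection
assert np.allclose(Q @ Q, Q)            # the complement is also a projection
assert np.allclose(P @ Q, 0.0)          # the image of Q lies in the kernel of P

# Oblique example: idempotent but not symmetric for alpha != 0.
alpha = 2.0
R = np.array([[0.0, 0.0],
              [alpha, 1.0]])
assert np.allclose(R @ R, R)
assert not np.allclose(R.T, R)
```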
If the product of two orthogonal projections is an orthogonal projection, then the two orthogonal projections commute (more generally: two self-adjointendomorphismscommute if and only if their product is self-adjoint). When the vector spaceW{\displaystyle W}has aninner productand is complete (is aHilbert space) the concept oforthogonalitycan be used. Anorthogonal projectionis a projection for which the rangeU{\displaystyle U}and the kernelV{\displaystyle V}areorthogonal subspaces. Thus, for everyx{\displaystyle \mathbf {x} }andy{\displaystyle \mathbf {y} }inW{\displaystyle W},⟨Px,(y−Py)⟩=⟨(x−Px),Py⟩=0{\displaystyle \langle P\mathbf {x} ,(\mathbf {y} -P\mathbf {y} )\rangle =\langle (\mathbf {x} -P\mathbf {x} ),P\mathbf {y} \rangle =0}. Equivalently:⟨x,Py⟩=⟨Px,Py⟩=⟨Px,y⟩.{\displaystyle \langle \mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,\mathbf {y} \rangle .} A projection is orthogonal if and only if it isself-adjoint. Using the self-adjoint and idempotent properties ofP{\displaystyle P}, for anyx{\displaystyle \mathbf {x} }andy{\displaystyle \mathbf {y} }inW{\displaystyle W}we havePx∈U{\displaystyle P\mathbf {x} \in U},y−Py∈V{\displaystyle \mathbf {y} -P\mathbf {y} \in V}, and⟨Px,y−Py⟩=⟨x,(P−P2)y⟩=0{\displaystyle \langle P\mathbf {x} ,\mathbf {y} -P\mathbf {y} \rangle =\langle \mathbf {x} ,\left(P-P^{2}\right)\mathbf {y} \rangle =0}where⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }is the inner product associated withW{\displaystyle W}. 
Therefore,P{\displaystyle P}andI−P{\displaystyle I-P}are orthogonal projections.[3]The other direction, namely that ifP{\displaystyle P}is orthogonal then it is self-adjoint, follows from the implication from⟨(x−Px),Py⟩=⟨Px,(y−Py)⟩=0{\displaystyle \langle (\mathbf {x} -P\mathbf {x} ),P\mathbf {y} \rangle =\langle P\mathbf {x} ,(\mathbf {y} -P\mathbf {y} )\rangle =0}to⟨x,Py⟩=⟨Px,Py⟩=⟨Px,y⟩=⟨x,P∗y⟩{\displaystyle \langle \mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,P^{*}\mathbf {y} \rangle }for everyx{\displaystyle x}andy{\displaystyle y}inW{\displaystyle W}; thusP=P∗{\displaystyle P=P^{*}}. The existence of an orthogonal projection onto a closed subspace follows from theHilbert projection theorem. An orthogonal projection is abounded operator. This is because for everyv{\displaystyle \mathbf {v} }in the vector space we have, by theCauchy–Schwarz inequality:‖Pv‖2=⟨Pv,Pv⟩=⟨Pv,v⟩≤‖Pv‖⋅‖v‖{\displaystyle \left\|P\mathbf {v} \right\|^{2}=\langle P\mathbf {v} ,P\mathbf {v} \rangle =\langle P\mathbf {v} ,\mathbf {v} \rangle \leq \left\|P\mathbf {v} \right\|\cdot \left\|\mathbf {v} \right\|}Thus‖Pv‖≤‖v‖{\displaystyle \left\|P\mathbf {v} \right\|\leq \left\|\mathbf {v} \right\|}. For finite-dimensional complex or real vector spaces, thestandard inner productcan be substituted for⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }. A simple case occurs when the orthogonal projection is onto a line. Ifu{\displaystyle \mathbf {u} }is aunit vectoron the line, then the projection is given by theouter productPu=uuT.{\displaystyle P_{\mathbf {u} }=\mathbf {u} \mathbf {u} ^{\mathsf {T}}.}(Ifu{\displaystyle \mathbf {u} }is complex-valued, the transpose in the above equation is replaced by a Hermitian transpose). 
This operator leavesuinvariant, and it annihilates all vectors orthogonal tou{\displaystyle \mathbf {u} }, proving that it is indeed the orthogonal projection onto the line containingu.[4]A simple way to see this is to consider an arbitrary vectorx{\displaystyle \mathbf {x} }as the sum of a component on the line (i.e. the projected vector we seek) and another perpendicular to it,x=x∥+x⊥{\displaystyle \mathbf {x} =\mathbf {x} _{\parallel }+\mathbf {x} _{\perp }}. Applying projection, we getPux=uuTx∥+uuTx⊥=u(sgn⁡(uTx∥)‖x∥‖)+u⋅0=x∥{\displaystyle P_{\mathbf {u} }\mathbf {x} =\mathbf {u} \mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\parallel }+\mathbf {u} \mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\perp }=\mathbf {u} \left(\operatorname {sgn} \left(\mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\parallel }\right)\left\|\mathbf {x} _{\parallel }\right\|\right)+\mathbf {u} \cdot \mathbf {0} =\mathbf {x} _{\parallel }}by the properties of thedot productof parallel and perpendicular vectors. This formula can be generalized to orthogonal projections on a subspace of arbitrarydimension. Letu1,…,uk{\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}}be anorthonormal basisof the subspaceU{\displaystyle U}, with the assumption that the integerk≥1{\displaystyle k\geq 1}, and letA{\displaystyle A}denote then×k{\displaystyle n\times k}matrix whose columns areu1,…,uk{\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}}, i.e.,A=[u1⋯uk]{\displaystyle A={\begin{bmatrix}\mathbf {u} _{1}&\cdots &\mathbf {u} _{k}\end{bmatrix}}}. Then the projection is given by:[5]PA=AAT{\displaystyle P_{A}=AA^{\mathsf {T}}}which can be rewritten asPA=∑i⟨ui,⋅⟩ui.{\displaystyle P_{A}=\sum _{i}\langle \mathbf {u} _{i},\cdot \rangle \mathbf {u} _{i}.} The matrixAT{\displaystyle A^{\mathsf {T}}}is thepartial isometrythat vanishes on theorthogonal complementofU{\displaystyle U}, andA{\displaystyle A}is the isometry that embedsU{\displaystyle U}into the underlying vector space. 
The range ofPA{\displaystyle P_{A}}is therefore thefinal spaceofA{\displaystyle A}. It is also clear thatAAT{\displaystyle AA^{\mathsf {T}}}is the identity operator onU{\displaystyle U}. The orthonormality condition can also be dropped. Ifu1,…,uk{\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}}is a (not necessarily orthonormal)basiswithk≥1{\displaystyle k\geq 1}, andA{\displaystyle A}is the matrix with these vectors as columns, then the projection is:[6][7]PA=A(ATA)−1AT.{\displaystyle P_{A}=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}.} The matrixA{\displaystyle A}still embedsU{\displaystyle U}into the underlying vector space but is no longer an isometry in general. The matrix(ATA)−1{\displaystyle \left(A^{\mathsf {T}}A\right)^{-1}}is a "normalizing factor" that recovers the norm. For example, therank-1 operatoruuT{\displaystyle \mathbf {u} \mathbf {u} ^{\mathsf {T}}}is not a projection if‖u‖≠1.{\displaystyle \left\|\mathbf {u} \right\|\neq 1.}After dividing byuTu=‖u‖2,{\displaystyle \mathbf {u} ^{\mathsf {T}}\mathbf {u} =\left\|\mathbf {u} \right\|^{2},}we obtain the projectionu(uTu)−1uT{\displaystyle \mathbf {u} \left(\mathbf {u} ^{\mathsf {T}}\mathbf {u} \right)^{-1}\mathbf {u} ^{\mathsf {T}}}onto the subspace spanned byu{\displaystyle u}. In the general case, we can have an arbitrarypositive definitematrixD{\displaystyle D}defining an inner product⟨x,y⟩D=y†Dx{\displaystyle \langle x,y\rangle _{D}=y^{\dagger }Dx}, and the projectionPA{\displaystyle P_{A}}is given byPAx=argminy∈range⁡(A)⁡‖x−y‖D2{\textstyle P_{A}x=\operatorname {argmin} _{y\in \operatorname {range} (A)}\left\|x-y\right\|_{D}^{2}}. ThenPA=A(ATDA)−1ATD.{\displaystyle P_{A}=A\left(A^{\mathsf {T}}DA\right)^{-1}A^{\mathsf {T}}D.} When the range space of the projection is generated by aframe(i.e. the number of generators is greater than its dimension), the formula for the projection takes the form:PA=AA+{\displaystyle P_{A}=AA^{+}}. 
HereA+{\displaystyle A^{+}}stands for theMoore–Penrose pseudoinverse. This is just one of many ways to construct the projection operator. If[AB]{\displaystyle {\begin{bmatrix}A&B\end{bmatrix}}}is a non-singular matrix andATB=0{\displaystyle A^{\mathsf {T}}B=0}(i.e.,B{\displaystyle B}is thenull spacematrix ofA{\displaystyle A}),[8]the following holds:I=[AB][AB]−1[ATBT]−1[ATBT]=[AB]([ATBT][AB])−1[ATBT]=[AB][ATAOOBTB]−1[ATBT]=A(ATA)−1AT+B(BTB)−1BT{\displaystyle {\begin{aligned}I&={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}A&B\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\&={\begin{bmatrix}A&B\end{bmatrix}}\left({\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}{\begin{bmatrix}A&B\end{bmatrix}}\right)^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\&={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}A^{\mathsf {T}}A&O\\O&B^{\mathsf {T}}B\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\[4pt]&=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}+B\left(B^{\mathsf {T}}B\right)^{-1}B^{\mathsf {T}}\end{aligned}}} If the orthogonal condition is enhanced toATWB=ATWTB=0{\displaystyle A^{\mathsf {T}}WB=A^{\mathsf {T}}W^{\mathsf {T}}B=0}withW{\displaystyle W}non-singular, the following holds:I=[AB][(ATWA)−1AT(BTWB)−1BT]W.{\displaystyle I={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}\left(A^{\mathsf {T}}WA\right)^{-1}A^{\mathsf {T}}\\\left(B^{\mathsf {T}}WB\right)^{-1}B^{\mathsf {T}}\end{bmatrix}}W.} All these formulas also hold for complex inner product spaces, provided that theconjugate transposeis used instead of the transpose. Further details on sums of projectors can be found in Banerjee and Roy (2014).[9]Also see Banerjee (2004)[10]for application of sums of projectors in basicspherical trigonometry. The termoblique projectionsis sometimes used to refer to non-orthogonal projections. 
These projections are also used to represent spatial figures in two-dimensional drawings (seeoblique projection), though not as frequently as orthogonal projections. Whereas calculating the fitted value of anordinary least squaresregression requires an orthogonal projection, calculating the fitted value of aninstrumental variables regressionrequires an oblique projection. A projection is defined by its kernel and the basis vectors used to characterize its range (which is a complement of the kernel). When these basis vectors are orthogonal to the kernel, then the projection is an orthogonal projection. When these basis vectors are not orthogonal to the kernel, the projection is an oblique projection, or just a projection. LetP:V→V{\displaystyle P\colon V\to V}be a linear operator such thatP2=P{\displaystyle P^{2}=P}and assume thatP{\displaystyle P}is not the zero operator. Let the vectorsu1,…,uk{\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}}form a basis for the range ofP{\displaystyle P}, and assemble these vectors in then×k{\displaystyle n\times k}matrixA{\displaystyle A}. Thenk≥1{\displaystyle k\geq 1}, otherwisek=0{\displaystyle k=0}andP{\displaystyle P}is the zero operator. The range and the kernel are complementary spaces, so the kernel has dimensionn−k{\displaystyle n-k}. It follows that theorthogonal complementof the kernel has dimensionk{\displaystyle k}. Letv1,…,vk{\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{k}}form a basis for the orthogonal complement of the kernel of the projection, and assemble these vectors in the matrixB{\displaystyle B}. Then the projectionP{\displaystyle P}(with the conditionk≥1{\displaystyle k\geq 1}) is given byP=A(BTA)−1BT.{\displaystyle P=A\left(B^{\mathsf {T}}A\right)^{-1}B^{\mathsf {T}}.} This expression generalizes the formula for orthogonal projections given above.[11][12]A standard proof of this expression is the following. 
For any vectorx{\displaystyle \mathbf {x} }in the vector spaceV{\displaystyle V}, we can decomposex=x1+x2{\displaystyle \mathbf {x} =\mathbf {x} _{1}+\mathbf {x} _{2}}, where vectorx1=P(x){\displaystyle \mathbf {x} _{1}=P(\mathbf {x} )}is in the image ofP{\displaystyle P}, and vectorx2=x−P(x).{\displaystyle \mathbf {x} _{2}=\mathbf {x} -P(\mathbf {x} ).}SoP(x2)=P(x)−P2(x)=0{\displaystyle P(\mathbf {x} _{2})=P(\mathbf {x} )-P^{2}(\mathbf {x} )=\mathbf {0} }, and thenx2{\displaystyle \mathbf {x} _{2}}is in the kernel ofP{\displaystyle P}, which is the null space ofA.{\displaystyle A.}In other words, the vectorx1{\displaystyle \mathbf {x} _{1}}is in the column space ofA,{\displaystyle A,}sox1=Aw{\displaystyle \mathbf {x} _{1}=A\mathbf {w} }for somek{\displaystyle k}dimension vectorw{\displaystyle \mathbf {w} }and the vectorx2{\displaystyle \mathbf {x} _{2}}satisfiesBTx2=0{\displaystyle B^{\mathsf {T}}\mathbf {x} _{2}=\mathbf {0} }by the construction ofB{\displaystyle B}. Put these conditions together, and we find a vectorw{\displaystyle \mathbf {w} }so thatBT(x−Aw)=0{\displaystyle B^{\mathsf {T}}(\mathbf {x} -A\mathbf {w} )=\mathbf {0} }. Since matricesA{\displaystyle A}andB{\displaystyle B}are of full rankk{\displaystyle k}by their construction, thek×k{\displaystyle k\times k}-matrixBTA{\displaystyle B^{\mathsf {T}}A}is invertible. So the equationBT(x−Aw)=0{\displaystyle B^{\mathsf {T}}(\mathbf {x} -A\mathbf {w} )=\mathbf {0} }gives the vectorw=(BTA)−1BTx.{\displaystyle \mathbf {w} =(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}\mathbf {x} .}In this way,Px=x1=Aw=A(BTA)−1BTx{\displaystyle P\mathbf {x} =\mathbf {x} _{1}=A\mathbf {w} =A(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}\mathbf {x} }for any vectorx∈V{\displaystyle \mathbf {x} \in V}and henceP=A(BTA)−1BT{\displaystyle P=A(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}}. 
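The formula P = A(BᵀA)⁻¹Bᵀ can be illustrated with a small concrete example. Here A and B are illustrative bases chosen so that the resulting projection is oblique (its range is not orthogonal to its kernel):

```python
import numpy as np

# Oblique projection P = A (B^T A)^{-1} B^T.
# A: basis of the desired range; B: basis of the orthogonal
# complement of the desired kernel. Values are illustrative.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # range: a 2-dim subspace of R^3
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # kernel is span{e3}, so B spans span{e1, e2}

P = A @ np.linalg.inv(B.T @ A) @ B.T

assert np.allclose(P @ P, P)        # idempotent: P is a projection
assert np.allclose(P @ A, A)        # acts as the identity on its range
assert not np.allclose(P, P.T)      # not symmetric: the projection is oblique
```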
In the case thatP{\displaystyle P}is an orthogonal projection, we can takeA=B{\displaystyle A=B}, and it follows thatP=A(ATA)−1AT{\displaystyle P=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}}. By using this formula, one can easily check thatP=PT{\displaystyle P=P^{\mathsf {T}}}. In general, if the vector space is over complex number field, one then uses theHermitian transposeA∗{\displaystyle A^{*}}and has the formulaP=A(A∗A)−1A∗{\displaystyle P=A\left(A^{*}A\right)^{-1}A^{*}}. Recall that one can express theMoore–Penrose inverseof the matrixA{\displaystyle A}byA+=(A∗A)−1A∗{\displaystyle A^{+}=(A^{*}A)^{-1}A^{*}}sinceA{\displaystyle A}has full column rank, soP=AA+{\displaystyle P=AA^{+}}. I−P{\displaystyle I-P}is also an oblique projection. The singular values ofP{\displaystyle P}andI−P{\displaystyle I-P}can be computed by anorthonormal basisofA{\displaystyle A}. LetQA{\displaystyle Q_{A}}be an orthonormal basis ofA{\displaystyle A}and letQA⊥{\displaystyle Q_{A}^{\perp }}be theorthogonal complementofQA{\displaystyle Q_{A}}. Denote the singular values of the matrixQATA(BTA)−1BTQA⊥{\displaystyle Q_{A}^{T}A(B^{T}A)^{-1}B^{T}Q_{A}^{\perp }}by the positive valuesγ1≥γ2≥…≥γk{\displaystyle \gamma _{1}\geq \gamma _{2}\geq \ldots \geq \gamma _{k}}. With this, the singular values forP{\displaystyle P}are:[13]σi={1+γi21≤i≤k0otherwise{\displaystyle \sigma _{i}={\begin{cases}{\sqrt {1+\gamma _{i}^{2}}}&1\leq i\leq k\\0&{\text{otherwise}}\end{cases}}}and the singular values forI−P{\displaystyle I-P}areσi={1+γi21≤i≤k1k+1≤i≤n−k0otherwise{\displaystyle \sigma _{i}={\begin{cases}{\sqrt {1+\gamma _{i}^{2}}}&1\leq i\leq k\\1&k+1\leq i\leq n-k\\0&{\text{otherwise}}\end{cases}}}This implies that the largest singular values ofP{\displaystyle P}andI−P{\displaystyle I-P}are equal, and thus that thematrix normof the oblique projections are the same. 
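The equality of the largest singular values of P and I − P, and hence of their spectral norms, can be verified numerically on a randomly generated oblique projector (illustrative data; any A, B with BᵀA invertible give an idempotent P):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random oblique projector: P = A (B^T A)^{-1} B^T is idempotent
# whenever B^T A is invertible, which holds almost surely here.
n, k = 6, 2
A = rng.standard_normal((n, k))
B = rng.standard_normal((n, k))
P = A @ np.linalg.inv(B.T @ A) @ B.T

s_P  = np.linalg.svd(P, compute_uv=False)           # sorted descending
s_IP = np.linalg.svd(np.eye(n) - P, compute_uv=False)

assert np.allclose(P @ P, P)
assert np.allclose(s_P[0], s_IP[0])                 # equal spectral norms
assert np.isclose(s_P[0], np.linalg.norm(P, 2))
```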
However, the condition number satisfies the relation κ(I − P) = σ₁/1 ≥ σ₁/σ_k = κ(P), and the two are therefore not necessarily equal. Let V be a vector space (in this case a plane) spanned by mutually orthogonal vectors u_1, u_2, …, u_p, and let y be a vector. One can define the projection of y onto V as proj_V y = ((y · u_i)/(u_i · u_i)) u_i, where repeated indices are summed over (Einstein summation convention). The vector y can be written as an orthogonal sum y = proj_V y + z. The projection proj_V y is sometimes denoted ŷ. A theorem of linear algebra states that the norm of this z is the smallest distance (the orthogonal distance) from y to V; this fact is commonly used in areas such as machine learning. Any projection P = P² on a vector space of dimension d over a field is a diagonalizable matrix, since its minimal polynomial divides x² − x, which splits into distinct linear factors. Thus there exists a basis in which P has the form P = I_r ⊕ 0_{d−r}, where r is the rank of P. Here I_r is the identity matrix of size r, 0_{d−r} is the zero matrix of size d − r, and ⊕ is the direct sum operator.
If the vector space is complex and equipped with an inner product, then there is an orthonormal basis in which the matrix of P is the direct sum[14] P = I_m ⊕ 0_s ⊕ [1 σ₁; 0 0] ⊕ ⋯ ⊕ [1 σ_k; 0 0] of 2×2 upper-triangular blocks, where σ₁ ≥ σ₂ ≥ ⋯ ≥ σ_k > 0. The integers k, s, m and the real numbers σ_i are uniquely determined, and 2k + s + m = d. The factor I_m ⊕ 0_s corresponds to the maximal invariant subspace on which P acts as an orthogonal projection (so that P itself is orthogonal if and only if k = 0), and the σ_i-blocks correspond to the oblique components. When the underlying vector space X is a (not necessarily finite-dimensional) normed vector space, analytic questions, irrelevant in the finite-dimensional case, need to be considered. Assume now X is a Banach space. Many of the algebraic results discussed above survive the passage to this context. A given direct sum decomposition of X into complementary subspaces still specifies a projection, and vice versa. If X is the direct sum X = U ⊕ V, then the operator defined by P(u + v) = u is still a projection with range U and kernel V, and it is clear that P² = P. Conversely, if P is a projection on X, i.e. P² = P, then it is easily verified that (1 − P)² = (1 − P). In other words, 1 − P is also a projection. The relation P² = P implies 1 = P + (1 − P), and X is the direct sum rg(P) ⊕ rg(1 − P). However, in contrast to the finite-dimensional case, projections need not be continuous in general.
If a subspaceU{\displaystyle U}ofX{\displaystyle X}is not closed in the norm topology, then the projection ontoU{\displaystyle U}is not continuous. In other words, the range of a continuous projectionP{\displaystyle P}must be a closed subspace. Furthermore, the kernel of a continuous projection (in fact, a continuous linear operator in general) is closed. Thus acontinuousprojectionP{\displaystyle P}gives a decomposition ofX{\displaystyle X}into two complementaryclosedsubspaces:X=rg⁡(P)⊕ker⁡(P)=ker⁡(1−P)⊕ker⁡(P){\displaystyle X=\operatorname {rg} (P)\oplus \ker(P)=\ker(1-P)\oplus \ker(P)}. The converse holds also, with an additional assumption. SupposeU{\displaystyle U}is a closed subspace ofX{\displaystyle X}. If there exists a closed subspaceV{\displaystyle V}such thatX=U⊕V, then the projectionP{\displaystyle P}with rangeU{\displaystyle U}and kernelV{\displaystyle V}is continuous. This follows from theclosed graph theorem. Supposexn→xandPxn→y. One needs to show thatPx=y{\displaystyle Px=y}. SinceU{\displaystyle U}is closed and{Pxn} ⊂U,ylies inU{\displaystyle U}, i.e.Py=y. Also,xn−Pxn= (I−P)xn→x−y. BecauseV{\displaystyle V}is closed and{(I−P)xn} ⊂V, we havex−y∈V{\displaystyle x-y\in V}, i.e.P(x−y)=Px−Py=Px−y=0{\displaystyle P(x-y)=Px-Py=Px-y=0}, which proves the claim. The above argument makes use of the assumption that bothU{\displaystyle U}andV{\displaystyle V}are closed. In general, given a closed subspaceU{\displaystyle U}, there need not exist a complementary closed subspaceV{\displaystyle V}, although forHilbert spacesthis can always be done by taking theorthogonal complement. For Banach spaces, a one-dimensional subspace always has a closed complementary subspace. This is an immediate consequence ofHahn–Banach theorem. LetU{\displaystyle U}be the linear span ofu{\displaystyle u}. By Hahn–Banach, there exists a boundedlinear functionalφ{\displaystyle \varphi }such thatφ(u) = 1. 
The operatorP(x)=φ(x)u{\displaystyle P(x)=\varphi (x)u}satisfiesP2=P{\displaystyle P^{2}=P}, i.e. it is a projection. Boundedness ofφ{\displaystyle \varphi }implies continuity ofP{\displaystyle P}and thereforeker⁡(P)=rg⁡(I−P){\displaystyle \ker(P)=\operatorname {rg} (I-P)}is a closed complementary subspace ofU{\displaystyle U}. Projections (orthogonal and otherwise) play a major role inalgorithmsfor certain linear algebra problems: As stated above, projections are a special case of idempotents. Analytically, orthogonal projections are non-commutative generalizations ofcharacteristic functions. Idempotents are used in classifying, for instance,semisimple algebras, whilemeasure theorybegins with considering characteristic functions ofmeasurable sets. Therefore, as one can imagine, projections are very often encountered in the context ofoperator algebras. In particular, avon Neumann algebrais generated by its completelatticeof projections. More generally, given a map between normed vector spacesT:V→W,{\displaystyle T\colon V\to W,}one can analogously ask for this map to be an isometry on the orthogonal complement of the kernel: that(ker⁡T)⊥→W{\displaystyle (\ker T)^{\perp }\to W}be an isometry (comparePartial isometry); in particular it must beonto. The case of an orthogonal projection is whenWis a subspace ofV.InRiemannian geometry, this is used in the definition of aRiemannian submersion.
https://en.wikipedia.org/wiki/Orthogonal_projection
Proximal gradient (forward–backward splitting) methods for learning is an area of research in optimization and statistical learning theory which studies algorithms for a general class of convex regularization problems where the regularization penalty may not be differentiable. One such example is ℓ₁ regularization (also known as Lasso) of the form min_{w ∈ ℝ^d} (1/n) Σ_{i=1}^n (y_i − ⟨w, x_i⟩)² + λ‖w‖₁. Proximal gradient methods offer a general framework for solving regularization problems from statistical learning theory with penalties that are tailored to a specific problem application.[1][2] Such customized penalties can help to induce certain structure in problem solutions, such as sparsity (in the case of lasso) or group structure (in the case of group lasso). Proximal gradient methods are applicable in a wide variety of scenarios for solving convex optimization problems of the form min_{x ∈ H} F(x) + R(x), where F is convex and differentiable with Lipschitz continuous gradient, R is a convex, lower semicontinuous function which is possibly nondifferentiable, and H is some set, typically a Hilbert space. The usual criterion that x minimizes F(x) + R(x) if and only if ∇(F + R)(x) = 0 in the convex, differentiable setting is now replaced by 0 ∈ ∂(F + R)(x), where ∂φ denotes the subdifferential of a real-valued, convex function φ. Given a convex function φ : H → ℝ, an important operator to consider is its proximal operator prox_φ : H → H defined by prox_φ(u) = argmin_{x ∈ H} φ(x) + (1/2)‖u − x‖₂², which is well defined because of the strict convexity of the squared ℓ₂ norm.
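The proximal operator can be explored numerically. A minimal sketch: approximate prox_φ(x) = argmin_u φ(u) + ½(u − x)² for the scalar function φ(u) = |u| by brute-force grid minimization, and compare it against the soft-thresholding operator, which is the known closed form for the prox of the absolute value (the grid and test points are illustrative):

```python
import numpy as np

# Numerical prox: argmin_u phi(u) + 0.5 * (u - x)^2 on a dense grid.
def prox_numeric(phi, x, grid=np.linspace(-5, 5, 200001)):
    return grid[np.argmin(phi(grid) + 0.5 * (grid - x) ** 2)]

def soft_threshold(x, gamma=1.0):
    # Closed-form prox of gamma * |.|
    return np.sign(x) * np.maximum(abs(x) - gamma, 0.0)

for x in (-2.3, -0.4, 0.0, 0.7, 3.1):
    assert abs(prox_numeric(np.abs, x) - soft_threshold(x)) < 1e-3
```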
The proximal operator can be seen as a generalization of a projection.[1][3][4] The proximity operator is important because x* is a minimizer of the problem min_{x ∈ H} F(x) + R(x) if and only if x* = prox_{γR}(x* − γ∇F(x*)) for any γ > 0. One important technique related to proximal gradient methods is the Moreau decomposition, which decomposes the identity operator as the sum of two proximity operators.[1] Namely, let φ : X → ℝ be a lower semicontinuous, convex function on a vector space X. Its Fenchel conjugate φ* : X → ℝ is defined as the function φ*(u) = sup_x ⟨x, u⟩ − φ(x). The general form of Moreau's decomposition states that for any x ∈ X and any γ > 0, x = prox_{γφ}(x) + γ prox_{γ⁻¹φ*}(x/γ), which for γ = 1 implies that x = prox_φ(x) + prox_{φ*}(x).[1][3] The Moreau decomposition can be seen to be a generalization of the usual orthogonal decomposition of a vector space, analogous with the fact that proximity operators are generalizations of projections.[1] In certain situations it may be easier to compute the proximity operator for the conjugate φ* instead of the function φ, and therefore the Moreau decomposition can be applied. This is the case for group lasso.
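The Moreau decomposition with γ = 1 can be checked concretely for φ = ‖·‖₁: its Fenchel conjugate is the indicator of the ℓ∞ unit ball, whose prox is the projection onto that ball, i.e. a componentwise clip (the test vector is illustrative):

```python
import numpy as np

def prox_l1(x, gamma=1.0):          # soft thresholding: prox of gamma * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def prox_l1_conjugate(x):           # projection onto {u : ||u||_inf <= 1}
    return np.clip(x, -1.0, 1.0)

x = np.array([-3.0, -0.5, 0.2, 1.7])

# Moreau decomposition with gamma = 1: x = prox_phi(x) + prox_phi*(x).
assert np.allclose(prox_l1(x) + prox_l1_conjugate(x), x)
```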
Consider the regularized empirical risk minimization problem with square loss and with the ℓ₁ norm as the regularization penalty: min_{w ∈ ℝ^d} (1/n) Σ_{i=1}^n (y_i − ⟨w, x_i⟩)² + λ‖w‖₁, where x_i ∈ ℝ^d and y_i ∈ ℝ. The ℓ₁ regularization problem is sometimes referred to as lasso (least absolute shrinkage and selection operator).[5] Such ℓ₁ regularization problems are interesting because they induce sparse solutions, that is, solutions w to the minimization problem have relatively few nonzero components. Lasso can be seen to be a convex relaxation of the non-convex problem min_{w ∈ ℝ^d} (1/n) Σ_{i=1}^n (y_i − ⟨w, x_i⟩)² + λ‖w‖₀, where ‖w‖₀ denotes the ℓ₀ "norm", which is the number of nonzero entries of the vector w. Sparse solutions are of particular interest in learning theory for interpretability of results: a sparse solution can identify a small number of important factors.[5] For simplicity we restrict our attention to the problem where λ = 1. To solve the problem we consider our objective function in two parts: a convex, differentiable term F(w) = (1/n) Σ_{i=1}^n (y_i − ⟨w, x_i⟩)² and a convex function R(w) = ‖w‖₁. Note that R is not strictly convex. Let us compute the proximity operator for R(w).
First we find an alternative characterization of the proximity operator prox_R(x) as follows: u = prox_R(x) ⟺ 0 ∈ ∂(R(u) + (1/2)‖u − x‖₂²) ⟺ 0 ∈ ∂R(u) + u − x ⟺ x − u ∈ ∂R(u). For R(w) = ‖w‖₁ it is easy to compute ∂R(w): the i-th entry of ∂R(w) is precisely 1 if w_i > 0, −1 if w_i < 0, and the interval [−1, 1] if w_i = 0. Using the recharacterization of the proximity operator given above, for the choice of R(w) = ‖w‖₁ and γ > 0, prox_{γR}(x) is defined entrywise by (prox_{γR}(x))_i = x_i − γ if x_i > γ; 0 if |x_i| ≤ γ; and x_i + γ if x_i < −γ, which is known as the soft thresholding operator S_γ(x) = prox_{γ‖·‖₁}(x).[1][6] To finally solve the lasso problem we consider the fixed point equation shown earlier: w* = prox_{γR}(w* − γ∇F(w*)). Given that we have computed the form of the proximity operator explicitly, we can define a standard fixed point iteration procedure. Namely, fix some initial w⁰ ∈ ℝ^d, and for k = 1, 2, … define w^k = S_γ(w^{k−1} − γ∇F(w^{k−1})). Note here the effective trade-off between the empirical error term F(w) and the regularization penalty R(w). This fixed point method has decoupled the effect of the two different convex functions which comprise the objective function into a gradient descent step (w^{k−1} − γ∇F(w^{k−1})) and a soft thresholding step (via S_γ).
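The fixed point iteration just described (often called ISTA) can be sketched end to end on synthetic data. Everything below, including the data, step size choice, and tolerances, is illustrative:

```python
import numpy as np

# ISTA for lasso: w <- S_{gamma*lam}(w - gamma * grad F(w)),
# with F(w) = (1/n) * ||Xw - y||^2 and synthetic noiseless data.
rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]                  # sparse ground truth
y = X @ w_true

lam = 0.1
L = 2.0 * np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of grad F
gamma = 1.0 / L                                # safe constant step size

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

w = np.zeros(d)
for _ in range(2000):
    grad = 2.0 / n * X.T @ (X @ w - y)                 # gradient step on F
    w = soft_threshold(w - gamma * grad, gamma * lam)  # prox step on lam*||.||_1

# The iterate ends up sparse and close to the ground truth.
assert np.all(np.abs(w - w_true) < 0.5)
assert np.linalg.norm(X @ w - y) / np.linalg.norm(y) < 0.2
```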
Convergence of this fixed point scheme is well-studied in the literature[1][6]and is guaranteed under appropriate choice of step sizeγ{\displaystyle \gamma }and loss function (such as the square loss taken here).Accelerated methodswere introduced by Nesterov in 1983 which improve the rate of convergence under certain regularity assumptions onF{\displaystyle F}.[7]Such methods have been studied extensively in previous years.[8]For more general learning problems where the proximity operator cannot be computed explicitly for some regularization termR{\displaystyle R}, such fixed point schemes can still be carried out using approximations to both the gradient and the proximity operator.[4][9] There have been numerous developments within the past decade inconvex optimizationtechniques which have influenced the application of proximal gradient methods in statistical learning theory. Here we survey a few important topics which can greatly improve practical algorithmic performance of these methods.[2][10] In the fixed point iteration scheme one can allow variable step sizeγk{\displaystyle \gamma _{k}}instead of a constantγ{\displaystyle \gamma }. Numerous adaptive step size schemes have been proposed throughout the literature.[1][4][11][12]Applications of these schemes[2][13]suggest that these can offer substantial improvement in number of iterations required for fixed point convergence. Elastic net regularizationoffers an alternative to pureℓ1{\displaystyle \ell _{1}}regularization. The problem of lasso (ℓ1{\displaystyle \ell _{1}}) regularization involves the penalty termR(w)=‖w‖1{\displaystyle R(w)=\|w\|_{1}}, which is not strictly convex. Hence, solutions tominwF(w)+R(w),{\displaystyle \min _{w}F(w)+R(w),}whereF{\displaystyle F}is some empirical loss function, need not be unique. This is often avoided by the inclusion of an additional strictly convex term, such as anℓ2{\displaystyle \ell _{2}}norm regularization penalty. 
For example, one can consider the problem min_{w ∈ ℝ^d} (1/n) Σ_{i=1}^n (y_i − ⟨w, x_i⟩)² + λ((1 − μ)‖w‖₁ + μ‖w‖₂²), where x_i ∈ ℝ^d and y_i ∈ ℝ. For 0 < μ ≤ 1 the penalty term λ((1 − μ)‖w‖₁ + μ‖w‖₂²) is now strictly convex, and hence the minimization problem admits a unique solution. It has been observed that for sufficiently small μ > 0, the additional penalty term μ‖w‖₂² acts as a preconditioner and can substantially improve convergence while not adversely affecting the sparsity of solutions.[2][14] Proximal gradient methods provide a general framework which is applicable to a wide variety of problems in statistical learning theory. Certain problems in learning can often involve data which has additional structure that is known a priori. In the past several years there have been new developments which incorporate information about group structure to provide methods which are tailored to different applications. Here we survey a few such methods. Group lasso is a generalization of the lasso method when features are grouped into disjoint blocks.[15] Suppose the features are grouped into blocks {w₁, …, w_G}. Here we take as a regularization penalty R(w) = λ Σ_{g=1}^G ‖w_g‖₂, which is the sum of the ℓ₂ norms on the corresponding feature vectors for the different groups. A similar proximity operator analysis as above can be used to compute the proximity operator for this penalty. Where the lasso penalty has a proximity operator which is soft thresholding on each individual component, the proximity operator for the group lasso is soft thresholding on each group. For the group w_g, the proximity operator of λγ(Σ_{g=1}^G ‖w_g‖₂) is given by (prox(w))_g = (1 − λγ/‖w_g‖₂)₊ w_g, where w_g is the g-th group and (·)₊ denotes the positive part.
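The group soft-thresholding operator can be written in a few lines: each block is shrunk toward zero according to its ℓ₂ norm, and blocks whose norm falls below the threshold are zeroed entirely. Group sizes and values below are illustrative:

```python
import numpy as np

# Blockwise soft thresholding: prox(w_g) = max(0, 1 - t / ||w_g||_2) * w_g,
# with threshold t = lam * gamma.
def prox_group_lasso(w, groups, t):
    out = np.zeros_like(w)
    for g in groups:                 # g is an index array for one block
        norm = np.linalg.norm(w[g])
        if norm > t:
            out[g] = (1.0 - t / norm) * w[g]
    return out

w = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]

shrunk = prox_group_lasso(w, groups, t=1.0)
assert np.allclose(shrunk[:2], [3.0 * 0.8, 4.0 * 0.8])  # ||(3,4)||_2 = 5
assert np.allclose(shrunk[2:], 0.0)                     # small group zeroed
```

Unlike the componentwise lasso prox, an entire group survives or vanishes together, which is what induces group-level sparsity.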
In contrast to lasso, the derivation of the proximity operator for group lasso relies on theMoreau decomposition. Here the proximity operator of the conjugate of the group lasso penalty becomes a projection onto theballof adual norm.[2] In contrast to the group lasso problem, where features are grouped into disjoint blocks, it may be the case that grouped features are overlapping or have a nested structure. Such generalizations of group lasso have been considered in a variety of contexts.[16][17][18][19]For overlapping groups one common approach is known aslatent group lassowhich introduces latent variables to account for overlap.[20][21]Nested group structures are studied inhierarchical structure predictionand withdirected acyclic graphs.[18]
https://en.wikipedia.org/wiki/Proximal_gradient_methods_for_learning
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function)[1] is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century.[2] In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s.[3] In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. Leonard J. Savage argued that when using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the circumstances been known and the decision that was in fact taken before they were known. The use of a quadratic loss function is common, for example when using least squares techniques.
It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is λ(x) = C(t − x)² for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL).[1] Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. The quadratic loss assigns more importance to outliers than to the true data due to its square nature, so alternatives like the Huber, log-cosh and SMAE losses are used when the data has many large outliers. In statistics and decision theory, a frequently used loss function is the 0-1 loss function, written in Iverson bracket notation as L(ŷ, y) = [ŷ ≠ y]; i.e. it evaluates to 1 when ŷ ≠ y, and 0 otherwise. In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation.
In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization, a problem that Ragnar Frisch highlighted in his Nobel Prize lecture.[4] The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences.[5][6] In particular, Andranik Tangian showed that the most usable objective functions, quadratic and additive, are determined by a few indifference points. He used this property in models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers.[7][8] Among other things, he constructed objective functions to optimally distribute budgets for 16 Westphalian universities[9] and the European subsidies for equalizing unemployment rates among 271 German regions.[10] In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms. We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, P_θ, of the observed data, X. This is also referred to as the risk function[11][12][13][14] of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by: R(θ, δ) = E_θ[L(θ, δ(X))] = ∫ L(θ, δ(x)) dP_θ(x). Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, E_θ is the expectation over all population values of X, dP_θ is a probability measure over the event space of X (parametrized by θ), and the integral is evaluated over the entire support of X.
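The frequentist risk R(θ, δ) = E_θ[L(θ, δ(X))] can be estimated by Monte Carlo for a simple illustrative setup: squared-error loss, the sample mean as the decision rule, and X drawn as n i.i.d. N(θ, 1) observations, for which the exact risk is 1/n:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.5, 10, 200000          # illustrative parameters

# reps independent datasets of n observations each, X_ij ~ N(theta, 1).
X = rng.normal(theta, 1.0, size=(reps, n))
delta = X.mean(axis=1)                    # decision rule: sample mean
risk = np.mean((delta - theta) ** 2)      # Monte Carlo risk under squared loss

# The exact risk of the sample mean is Var(X_1)/n = 1/n.
assert abs(risk - 1.0 / n) < 5e-3
```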
In a Bayesian approach, the expectation is calculated using the prior distribution π* of the parameter θ, where m(x) is known as the predictive likelihood in which θ has been "integrated out", π*(θ | x) is the posterior distribution, and the order of integration has been changed. One then should choose the action a* which minimises this expected loss, which is referred to as the Bayes risk. In the latter equation, the integrand inside dx is known as the posterior risk, and minimising it with respect to decision a also minimizes the overall Bayes risk. This optimal decision, a*, is known as the Bayes (decision) rule: it minimises the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance though, the Bayes rule reflects consideration of loss outcomes under different states of nature, θ. In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. A decision rule makes a choice using an optimality criterion. Some commonly used criteria are: Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem.
Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.[15] A common example involves estimating "location". Under typical statistical assumptions, themeanor average is the statistic for estimating location that minimizes the expected loss experienced under thesquared-errorloss function, while themedianis the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent isrisk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. Forrisk-averseorrisk-lovingagents, loss is measured as the negative of autility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for examplemortalityormorbidityin the field ofpublic healthorsafety engineering. For mostoptimization algorithms, it is desirable to have a loss function that is globallycontinuousanddifferentiable. Two very commonly used loss functions are thesquared loss,L(a)=a2{\displaystyle L(a)=a^{2}}, and theabsolute loss,L(a)=|a|{\displaystyle L(a)=|a|}. However the absolute loss has the disadvantage that it is not differentiable ata=0{\displaystyle a=0}. The squared loss has the disadvantage that it has the tendency to be dominated byoutliers—when summing over a set ofa{\displaystyle a}'s (as in∑i=1nL(ai){\textstyle \sum _{i=1}^{n}L(a_{i})}), the final sum tends to be the result of a few particularly largea-values, rather than an expression of the averagea-value. The choice of a loss function is not arbitrary. 
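The mean-vs-median statement can be checked empirically by evaluating both risks on a grid of candidate location estimates (the sample below is illustrative; note the outlier at 10 pulls the squared-loss minimizer but not the absolute-loss one):

```python
import numpy as np

data = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
grid = np.linspace(0, 12, 120001)

# Average loss of each candidate estimate c over the sample.
sq_risk  = ((data[:, None] - grid[None, :]) ** 2).mean(axis=0)
abs_risk = np.abs(data[:, None] - grid[None, :]).mean(axis=0)

# Squared loss is minimized by the mean, absolute loss by the median.
assert np.isclose(grid[np.argmin(sq_risk)], data.mean(), atol=1e-3)
assert np.isclose(grid[np.argmin(abs_risk)], np.median(data), atol=1e-3)
```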
This choice is very restrictive, and sometimes the loss function may be characterized by its desirable properties.[16] Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others. W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than the classical smooth, continuous, symmetric, differentiable cases.[17]
https://en.wikipedia.org/wiki/Quadratic_loss_function
In mathematics, the root mean square (abbreviated RMS or rms) of a set of values is the square root of the set's mean square.[1] Given a set x_i, its RMS is denoted as either x_RMS or RMS_x. The RMS is also known as the quadratic mean (denoted M_2),[2][3] a special case of the generalized mean. The RMS of a continuous function is denoted f_RMS and can be defined in terms of an integral of the square of the function. In estimation theory, the root-mean-square deviation of an estimator measures how far the estimator strays from the data.

The RMS value of a set of values (or a continuous-time waveform) is the square root of the arithmetic mean of the squares of the values, or of the square of the function that defines the continuous waveform. In the case of a set of n values {x_1, x_2, ..., x_n}, the RMS is

x_RMS = sqrt( (x_1^2 + x_2^2 + ... + x_n^2) / n ).

The corresponding formula for a continuous function (or waveform) f(t) defined over the interval T_1 <= t <= T_2 is

f_RMS = sqrt( (1/(T_2 - T_1)) ∫_{T_1}^{T_2} [f(t)]^2 dt ),

and the RMS for a function over all time is the limit of this expression as the interval grows without bound. The RMS over all time of a periodic function is equal to the RMS of one period of the function. The RMS value of a continuous function or signal can be approximated by taking the RMS of a sample consisting of equally spaced observations. Additionally, the RMS value of various waveforms can also be determined without calculus, as shown by Cartwright.[4]

In the case of the RMS statistic of a random process, the expected value is used instead of the mean. If the waveform is a pure sine wave, the relationships between amplitudes (peak-to-peak, peak) and RMS are fixed and known, as they are for any continuous periodic wave. However, this is not true for an arbitrary waveform, which may not be periodic or continuous.
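The discrete formula above translates directly into code. A minimal sketch (the sample values are arbitrary):

```python
import math

def rms(values):
    """Root mean square: the square root of the arithmetic mean of the squares."""
    return math.sqrt(sum(v * v for v in values) / len(values))

# For {1, 2, 3, 4, 5}: sqrt((1 + 4 + 9 + 16 + 25) / 5) = sqrt(11)
print(rms([1, 2, 3, 4, 5]))  # ≈ 3.3166
```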
For a zero-mean sine wave, the relationship between RMS and peak-to-peak amplitude is

peak-to-peak = 2√2 × RMS ≈ 2.828 × RMS.

For other waveforms, the relationships are not the same as they are for sine waves. For example, for either a triangular or sawtooth wave,

peak-to-peak = 2√3 × RMS ≈ 3.464 × RMS.

Waveforms made by summing known simple waveforms have an RMS value that is the root of the sum of squares of the component RMS values, if the component waveforms are orthogonal (that is, if the average of the product of one simple waveform with another is zero for all pairs other than a waveform times itself).[5] Alternatively, for waveforms that are perfectly positively correlated, or "in phase" with each other, their RMS values sum directly.

The RMS of an alternating electric current equals the value of the constant direct current that would dissipate the same power in a resistive load.[1] A special case of RMS of waveform combinations is[6]

RMS_total = sqrt( V_DC^2 + RMS_AC^2 ),

where V_DC refers to the direct current (or average) component of the signal, and RMS_AC is the alternating current component of the signal.

Electrical engineers often need to know the power, P, dissipated by an electrical resistance, R. It is easy to do the calculation when there is a constant current, I, through the resistance. For a load of R ohms, power is given by

P = I^2 R.

However, if the current is a time-varying function, I(t), this formula must be extended to reflect the fact that the current (and thus the instantaneous power) varies over time. If the function is periodic (such as household AC power), it is still meaningful to discuss the average power dissipated over time, which is calculated by taking the average power dissipation:

P_avg = R × I_RMS^2.

So, the RMS value, I_RMS, of the function I(t) is the constant current that yields the same power dissipation as the time-averaged power dissipation of the current I(t).
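Two of the relationships above can be verified by sampling one period of a sine wave: the RMS of a zero-mean sine equals its peak divided by √2, and adding an orthogonal DC component combines in quadrature as RMS_total = sqrt(V_DC^2 + RMS_AC^2). A sketch with made-up amplitudes:

```python
import math

N = 10_000   # samples over exactly one period
peak = 5.0   # hypothetical peak amplitude
dc = 2.0     # hypothetical DC offset

samples = [peak * math.sin(2 * math.pi * n / N) for n in range(N)]
rms_ac = math.sqrt(sum(s * s for s in samples) / N)
print(rms_ac, peak / math.sqrt(2))  # both ≈ 3.5355

# DC plus orthogonal AC component combine as a root-sum-of-squares:
combined = [dc + s for s in samples]
rms_total = math.sqrt(sum(s * s for s in combined) / N)
print(rms_total, math.sqrt(dc**2 + rms_ac**2))  # both ≈ 4.0620
```

The agreement is exact up to floating-point rounding because a sine and a constant are orthogonal over a whole period.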
Average power can also be found using the same method in the case of a time-varying voltage, V(t), with RMS value V_RMS:

P_avg = V_RMS^2 / R.

This equation can be used for any periodic waveform, such as a sinusoidal or sawtooth waveform, allowing us to calculate the mean power delivered into a specified load. By taking the square root of both these equations and multiplying them together, the power is found to be

P_avg = V_RMS × I_RMS.

Both derivations depend on voltage and current being proportional (that is, the load, R, is purely resistive). Reactive loads (that is, loads capable of not just dissipating energy but also storing it) are discussed under the topic of AC power.

In the common case of alternating current, when I(t) is a sinusoidal current, as is approximately true for mains power, the RMS value is easy to calculate from the continuous-case equation above. If I_p is defined to be the peak current, then

I_RMS = sqrt( (1/T) ∫_0^T (I_p sin(ωt))^2 dt ),

where t is time and ω is the angular frequency (ω = 2π/T, where T is the period of the wave). Since I_p is a positive constant that was to be squared within the integral,

I_RMS = I_p sqrt( (1/T) ∫_0^T sin^2(ωt) dt ).

Using the trigonometric identity sin^2(ωt) = (1 − cos(2ωt))/2 to eliminate the squaring of the trig function, and noting that since the interval is a whole number of complete cycles (per the definition of RMS) the sine terms arising from integrating cos(2ωt) cancel out, this leaves

I_RMS = I_p / √2.

A similar analysis leads to the analogous equation for sinusoidal voltage,

V_RMS = V_p / √2,

where I_p represents the peak current and V_p represents the peak voltage.

Because of their usefulness in carrying out power calculations, listed voltages for power outlets (for example, 120 V in the US, or 230 V in Europe) are almost always quoted in RMS values, and not peak values. Peak values can be calculated from RMS values from the above formula, which implies V_p = V_RMS × √2, assuming the source is a pure sine wave. Thus the peak value of the mains voltage in the USA is about 120 × √2, or about 170 volts. The peak-to-peak voltage, being double this, is about 340 volts.
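The mains arithmetic above is easy to reproduce. A sketch using the US figures from the text (the 10-ohm load resistance is a made-up example value):

```python
import math

v_rms = 120.0                  # US mains voltage, quoted as RMS
v_peak = v_rms * math.sqrt(2)  # ≈ 169.7 V, "about 170 volts"
v_pp = 2 * v_peak              # ≈ 339.4 V, "about 340 volts"

# Average power into a purely resistive load, e.g. a hypothetical R = 10 ohms:
R = 10.0
i_rms = v_rms / R              # 12 A
p_avg = v_rms * i_rms          # 1440 W, equal to V_RMS^2 / R
print(v_peak, v_pp, p_avg)
```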
A similar calculation indicates that the peak mains voltage in Europe is about 325 volts, and the peak-to-peak mains voltage about 650 volts.

RMS quantities such as electric current are usually calculated over one cycle. However, for some purposes the RMS current over a longer period is required when calculating transmission power losses. The same principle applies, and (for example) a current of 10 amps used for 12 hours of each 24-hour day represents an average current of 5 amps, but an RMS current of 7.07 amps, in the long term.

The term RMS power is sometimes erroneously used (e.g., in the audio industry) as a synonym for mean power or average power (which is proportional to the square of the RMS voltage or RMS current in a resistive load). For a discussion of audio power measurements and their shortcomings, see Audio power.

In the physics of gas molecules, the root-mean-square speed is defined as the square root of the average squared speed. The RMS speed of an ideal gas is calculated using the following equation:

v_RMS = sqrt( 3RT / M ),

where R represents the gas constant, 8.314 J/(mol·K), T is the temperature of the gas in kelvins, and M is the molar mass of the gas in kilograms per mole. In physics, speed is defined as the scalar magnitude of velocity. For a stationary gas, the average speed of its molecules can be on the order of thousands of km/h, even though the average velocity of its molecules is zero.

When two data sets are compared, for instance one set from theoretical prediction and the other from actual measurement of some physical variable, the RMS of the pairwise differences of the two data sets can serve as a measure of how far on average the error is from 0. The mean of the absolute values of the pairwise differences could be a useful measure of the variability of the differences. However, the RMS of the differences is usually the preferred measure, probably due to mathematical convention and compatibility with other formulae.
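The long-term RMS current example above (10 A for 12 of every 24 hours) can be checked directly, and it shows why transmission loss must be computed from the RMS rather than the average current:

```python
import math

# Hourly current values: 10 A for 12 hours, then 0 A for 12 hours.
hours = [10.0] * 12 + [0.0] * 12

i_avg = sum(hours) / len(hours)                            # 5.0 A
i_rms = math.sqrt(sum(i * i for i in hours) / len(hours))  # sqrt(50) ≈ 7.07 A
print(i_avg, i_rms)

# Resistive loss scales with I_RMS^2 (R_line is a made-up line resistance):
R_line = 1.0
print(i_rms**2 * R_line)  # 50 W, double the 25 W the average current would suggest
```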
RMS is used in audio engineering to measure signal volume, particularly in the case of audio processing. The alternative volume measurement is peak volume: in analog, the signal V_pp; in digital, the dB peak below clipping given the encoding format. Signal RMS in this context is often used as the comparator signal for compression, which produces a "smoothing" effect in compression by responding more slowly to sharp transients like those on drums.[7] RMS is also used as a mastering metric to compare against other time-average units such as LUFS.[8]

The RMS can be computed in the frequency domain, using Parseval's theorem. For a sampled signal x[n] = x(t = nT), where T is the sampling period,

Σ_n x^2[n] = (1/N) Σ_m |X[m]|^2,

where X[m] = DFT{x[n]} and N is the sample size, that is, the number of observations in the sample and DFT coefficients. In this case, the RMS computed in the time domain is the same as in the frequency domain:

RMS{x[n]} = sqrt( (1/N) Σ_n x^2[n] ) = sqrt( (1/N^2) Σ_m |X[m]|^2 ).

The standard deviation σ_x = (x − x̄)_rms of a population or a waveform x is the RMS deviation of x from its arithmetic mean x̄. They are related to the RMS value of x by[9]

RMS(x)^2 = x̄^2 + σ_x^2.

From this it is clear that the RMS value is always greater than or equal to the absolute value of the average, in that the RMS includes the squared deviation (error) as well.

Physical scientists often use the term root mean square as a synonym for standard deviation when it can be assumed the input signal has zero mean, that is, referring to the square root of the mean squared deviation of a signal from a given baseline or fit.[10][11] This is useful for electrical engineers in calculating the "AC only" RMS of a signal. Standard deviation being the RMS of a signal's variation about the mean, rather than about 0, the DC component is removed (that is, RMS(signal) = stdev(signal) if the mean signal is 0).
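Both identities above, the Parseval equivalence of time- and frequency-domain RMS, and RMS^2 = mean^2 + σ^2, can be verified on a toy signal with a naive DFT (the signal values are arbitrary):

```python
import cmath
import math

x = [1.0, 2.0, 3.0, 4.0]  # hypothetical sampled signal
N = len(x)

# Time-domain RMS
rms_time = math.sqrt(sum(v * v for v in x) / N)

# Naive DFT, then Parseval: sum |X[m]|^2 = N * sum x[n]^2
X = [sum(x[n] * cmath.exp(-2j * math.pi * m * n / N) for n in range(N))
     for m in range(N)]
rms_freq = math.sqrt(sum(abs(Xm) ** 2 for Xm in X) / N**2)
print(rms_time, rms_freq)  # equal up to rounding

# RMS, mean, and population standard deviation: RMS^2 = mean^2 + sigma^2
mean = sum(x) / N
sigma = math.sqrt(sum((v - mean) ** 2 for v in x) / N)
print(rms_time**2, mean**2 + sigma**2)  # both 7.5 for this signal
```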
https://en.wikipedia.org/wiki/Root_mean_square
Squared deviations from the mean (SDM) result from squaring deviations. In probability theory and statistics, the definition of variance is either the expected value of the SDM (when considering a theoretical distribution) or its average value (for actual experimental data). Computations for analysis of variance involve the partitioning of a sum of SDM. An understanding of the computations involved is greatly enhanced by a study of the statistical value E(X^2). For a random variable X with mean μ and variance σ^2,

E(X^2) = μ^2 + σ^2.

(Its derivation is shown here.) Therefore,

E(ΣX^2) = nμ^2 + nσ^2.

From the above, the following can be derived:

E((ΣX)^2) = n^2μ^2 + nσ^2.

The sum of squared deviations needed to calculate sample variance (before deciding whether to divide by n or n − 1) is most easily calculated as

S = ΣX^2 − (ΣX)^2 / n.

From the two derived expectations above, the expected value of this sum is

E(S) = nμ^2 + nσ^2 − (n^2μ^2 + nσ^2)/n,

which implies

E(S) = (n − 1)σ^2.

This effectively proves the use of the divisor n − 1 in the calculation of an unbiased sample estimate of σ^2.

In the situation where data is available for k different treatment groups having size n_i, where i varies from 1 to k, it is assumed that the expected mean of each group is μ + T_i, and the variance of each treatment group is unchanged from the population variance σ^2. Under the null hypothesis that the treatments have no effect, each of the T_i will be zero. It is now possible to calculate three sums of squares:

Individual: I = ΣX^2
Treatments: T = Σ_i (group total)^2 / n_i
Combination: C = (grand total)^2 / n

Under the null hypothesis that the treatments cause no differences and all the T_i are zero, the expectations simplify to

E(I) = nσ^2 + nμ^2, E(T) = kσ^2 + nμ^2, E(C) = σ^2 + nμ^2.

Under the null hypothesis, the difference of any pair of I, T, and C does not contain any dependency on μ, only σ^2:

E(I − C) = (n − 1)σ^2 (total squared deviations),
E(T − C) = (k − 1)σ^2 (treatment sum of squares),
E(I − T) = (n − k)σ^2 (residual sum of squares).

The constants (n − 1), (k − 1), and (n − k) are normally referred to as the number of degrees of freedom.

In a very simple example, 5 observations arise from two treatments. The first treatment gives three values 1, 2, and 3, and the second treatment gives two values 4 and 6.
Giving I = 1 + 4 + 9 + 16 + 36 = 66, T = 6^2/3 + 10^2/2 = 62, and C = 16^2/5 = 51.2, so the total squared deviations are I − C = 14.8 with 4 degrees of freedom, the treatment sum of squares is T − C = 10.8 with 1 degree of freedom, and the residual sum of squares is I − T = 4 with 3 degrees of freedom.
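The three sums of squares for this example can be computed directly from the definitions above:

```python
# ANOVA-style sums of squares for the example in the text:
# treatment 1 gives {1, 2, 3}, treatment 2 gives {4, 6}.
groups = [[1, 2, 3], [4, 6]]
all_values = [v for g in groups for v in g]
n = len(all_values)  # 5 observations
k = len(groups)      # 2 treatments

I = sum(v * v for v in all_values)             # individual: 66
T = sum(sum(g) ** 2 / len(g) for g in groups)  # treatments: 62.0
C = sum(all_values) ** 2 / n                   # combination: 51.2

print("total:    ", I - C, "df:", n - 1)  # 14.8 with 4 df
print("treatment:", T - C, "df:", k - 1)  # 10.8 with 1 df
print("residual: ", I - T, "df:", n - k)  # 4.0 with 3 df
```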
https://en.wikipedia.org/wiki/Squared_deviations_from_the_mean
A neural circuit is a population of neurons interconnected by synapses to carry out a specific function when activated.[1] Multiple neural circuits interconnect with one another to form large-scale brain networks.[2] Neural circuits have inspired the design of artificial neural networks, though there are significant differences.

Early treatments of neural networks can be found in Herbert Spencer's Principles of Psychology, 3rd edition (1872), Theodor Meynert's Psychiatry (1884), William James' Principles of Psychology (1890), and Sigmund Freud's Project for a Scientific Psychology (composed 1895).[3] The first rule of neuronal learning was described by Hebb in 1949, in the Hebbian theory. Thus, Hebbian pairing of pre-synaptic and post-synaptic activity can substantially alter the dynamic characteristics of the synaptic connection and therefore either facilitate or inhibit signal transmission. In 1959, the neuroscientists Warren Sturgis McCulloch and Walter Pitts published the first works on the processing of neural networks.[4] They showed theoretically that networks of artificial neurons could implement logical, arithmetic, and symbolic functions. Simplified models of biological neurons were set up, now usually called perceptrons or artificial neurons. These simple models accounted for neural summation (i.e., potentials at the post-synaptic membrane will summate in the cell body). Later models also provided for excitatory and inhibitory synaptic transmission.

The connections between neurons in the brain are much more complex than those of the artificial neurons used in the connectionist neural computing models of artificial neural networks. The basic kinds of connections between neurons are synapses: both chemical and electrical synapses. The establishment of synapses enables the connection of neurons into millions of overlapping and interlinking neural circuits.
Presynaptic proteins called neurexins are central to this process.[5] One principle by which neurons work is neural summation: potentials at the postsynaptic membrane sum up in the cell body. If the depolarization of the neuron at the axon hillock goes above threshold, an action potential will occur that travels down the axon to the terminal endings to transmit a signal to other neurons. Excitatory and inhibitory synaptic transmission is realized mostly by excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs).

On the electrophysiological level, there are various phenomena which alter the response characteristics of individual synapses (called synaptic plasticity) and individual neurons (intrinsic plasticity). These are often divided into short-term plasticity and long-term plasticity. Long-term synaptic plasticity is often contended to be the most likely memory substrate. Usually, the term "neuroplasticity" refers to changes in the brain that are caused by activity or experience.

Connections display temporal and spatial characteristics. Temporal characteristics refer to the continuously modified activity-dependent efficacy of synaptic transmission, called spike-timing-dependent plasticity. It has been observed in several studies that the synaptic efficacy of this transmission can undergo short-term increase (called facilitation) or decrease (depression) according to the activity of the presynaptic neuron. The induction of long-term changes in synaptic efficacy, by long-term potentiation (LTP) or depression (LTD), depends strongly on the relative timing of the onset of the excitatory postsynaptic potential and the postsynaptic action potential. LTP is induced by a series of action potentials which cause a variety of biochemical responses. Eventually, the reactions cause the expression of new receptors on the cellular membranes of the postsynaptic neurons or increase the efficacy of the existing receptors through phosphorylation.
Backpropagating action potentials cannot occur because, after an action potential travels down a given segment of the axon, the m gates on voltage-gated sodium channels close, thus blocking any transient opening of the h gate from causing a change in the intracellular sodium ion (Na+) concentration, and preventing the generation of an action potential back towards the cell body. In some cells, however, neural backpropagation does occur through the dendritic branching and may have important effects on synaptic plasticity and computation.

A neuron in the brain requires a single signal to a neuromuscular junction to stimulate contraction of the postsynaptic muscle cell. In the spinal cord, however, at least 75 afferent neurons are required to produce firing. This picture is further complicated by variation in time constant between neurons, as some cells can experience their EPSPs over a wider period of time than others. While synaptic depression has been particularly widely observed in synapses in the developing brain, it has been speculated that it changes to facilitation in adult brains.

Neural connections are built and maintained primarily by glia. Astrocytes, a type of glial cell, have been implicated for their influence on synaptogenesis. The presence of astrocytes in rat retinal ganglion cell (RGC) cultures increased synaptic growth, suggesting that they play a role in the process. Through signals between synapses and astrocytes, the number of synapses is regulated as neuronal circuits develop. Additionally, astrocytes release proteins to maintain homeostatic plasticity for the entire circuit and the synapse itself.[6]

Early life adversity (ELA) during critical periods of development can influence circuitry. People exposed to several adverse life events undergo changes in connectivity that shape fear perception and cognition. Compared with those without ELA, amygdala volume is lower, which is associated with possible issues in emotional control.
Also, stress in youth can irreversibly modify previously existing connections between the hippocampus, medial prefrontal cortex (mPFC), and orbitofrontal cortex (OFC). The interactions between these brain regions are critical to proper cognitive function. ELA poses a risk to normal working memory, learning memory, and other executive functions by reconstructing circuitry.[7]

An example of a neural circuit is the trisynaptic circuit in the hippocampus. Another is the Papez circuit linking the hypothalamus to the limbic lobe. There are several neural circuits in the cortico-basal ganglia-thalamo-cortical loop. These circuits carry information between the cortex, basal ganglia, thalamus, and back to the cortex. The largest structure within the basal ganglia, the striatum, is seen as having its own internal microcircuitry.[8]

Neural circuits in the spinal cord called central pattern generators are responsible for controlling motor instructions involved in rhythmic behaviours. Rhythmic behaviours include walking, urination, and ejaculation. The central pattern generators are made up of different groups of spinal interneurons.[9]

There are four principal types of neural circuits that are responsible for a broad scope of neural functions: the diverging circuit, the converging circuit, the reverberating circuit, and the parallel after-discharge circuit.[10] Circuits can also be classified as forms of feedforward excitation, feedforward inhibition, lateral inhibition, and mutual inhibition. Diverging and converging circuits are types of feedforward excitation. Feedforward excitation refers to the method of travel taken by neuronal signals. It involves a downstream transfer of information.[11]

In a diverging circuit, one neuron synapses with a number of postsynaptic cells. Each of these may synapse with many more, making it possible for one neuron to stimulate up to thousands of cells.
This is exemplified in the way that thousands of muscle fibers can be stimulated from the initial input from a single motor neuron.[10] In a converging circuit, inputs from many sources converge into one output, affecting just one neuron or a neuron pool. This type of circuit is exemplified in the respiratory center of the brainstem, which responds to a number of inputs from different sources by giving out an appropriate breathing pattern.[10]

A reverberating circuit produces a repetitive output. In a signalling procedure from one neuron to another in a linear sequence, one of the neurons may send a signal back to the initiating neuron. Each time the first neuron fires, a neuron further down the sequence fires again, sending a signal back to the source. This restimulates the first neuron and also allows the path of transmission to continue to its output. The resulting repetitive pattern stops only if one or more of the synapses fail, or if an inhibitory feed from another source causes it to stop. This type of reverberating circuit is found in the respiratory center, which sends signals to the respiratory muscles, causing inhalation. When the circuit is interrupted by an inhibitory signal, the muscles relax, causing exhalation. This type of circuit may play a part in epileptic seizures.[10]

In a parallel after-discharge circuit, a neuron inputs to several chains of neurons. Each chain is made up of a different number of neurons, but their signals converge onto one output neuron. Each synapse in the circuit acts to delay the signal by about 0.5 msec, so that the more synapses there are, the longer the delay to the output neuron. After the input has stopped, the output will go on firing for some time. This type of circuit does not have a feedback loop as the reverberating circuit does. Continued firing after the stimulus has stopped is called after-discharge.
This circuit type is found in the reflex arcs of certain reflexes.[10]

Different neuroimaging techniques have been developed to investigate the activity of neural circuits and networks. The use of "brain scanners" or functional neuroimaging to investigate the structure or function of the brain is common, either as simply a way of better assessing brain injury with high-resolution pictures, or by examining the relative activations of different brain areas. Such technologies may include functional magnetic resonance imaging (fMRI), brain positron emission tomography (brain PET), and computed axial tomography (CAT) scans. Functional neuroimaging uses specific brain imaging technologies to take scans of the brain, usually while a person is doing a particular task, in an attempt to understand how the activation of particular brain areas is related to the task. Functional neuroimaging relies especially on fMRI, which measures hemodynamic activity (using BOLD-contrast imaging) closely linked to neural activity, as well as on PET and electroencephalography (EEG).

Connectionist models serve as a test platform for different hypotheses of representation, information processing, and signal transmission. Lesioning studies in such models, e.g. artificial neural networks, where parts of the nodes are deliberately destroyed to see how the network performs, can also yield important insights into the working of several cell assemblies. Similarly, simulations of dysfunctional neurotransmitters in neurological conditions (e.g., dopamine in the basal ganglia of Parkinson's patients) can yield insights into the underlying mechanisms for patterns of cognitive deficits observed in the particular patient group. Predictions from these models can be tested in patients or via pharmacological manipulations, and these studies can in turn be used to inform the models, making the process iterative.
The modern balance between the connectionist approach and the single-cell approach in neurobiology has been achieved through a lengthy discussion. In 1972, Barlow announced the single neuron revolution: "our perceptions are caused by the activity of a rather small number of neurons selected from a very large population of predominantly silent cells."[12] This approach was stimulated by the idea of the grandmother cell, put forward two years earlier. Barlow formulated "five dogmas" of neuron doctrine. Recent studies of the 'grandmother cell' and sparse coding phenomena develop and modify these ideas.[13] The single-cell experiments used intracranial electrodes in the medial temporal lobe (the hippocampus and surrounding cortex). Modern developments in concentration of measure theory (stochastic separation theorems), with applications to artificial neural networks, give a mathematical background to the unexpected effectiveness of small neural ensembles in the high-dimensional brain.[14]

Disruptions to neural circuitry caused by changes in neurons and neural networks can lead to the pathogenesis of mental illnesses and neurodegenerative diseases. Modifications to the basal ganglia are often associated with diseases such as Parkinson's disease.[15] They include the elimination of dendritic spines in dopaminergic neurons of the substantia nigra and in medium spiny neurons of the striatum, both located in the basal ganglia. Methods like calcium imaging have identified the dopamine receptors D1 and D2 as involved in the regulation of dendritic spine loss and formation. The loss of dendritic spines in neurons negatively impacts synaptic plasticity, learning, memory development, and overall cognitive function.[16]

In early stages of Alzheimer's disease and in individuals with mild cognitive impairment, synaptic removal and alterations to typical dendritic spine structure have been observed. Abnormalities in dendritic morphology include damage to neurites and spine loss.
This can extend to the axon and trigger a progressive shrinking process. Variations in the expression levels of Alzheimer's disease-related proteins, including β-secretase, γ-secretase, and amyloid plaques, also alter dendritic spine density. Closer proximity to these proteins further contributes to dendritic dissimilarities.[16]
https://en.wikipedia.org/wiki/Neural_circuit
Neural backpropagation is the phenomenon in which, after the action potential of a neuron creates a voltage spike down the axon (normal propagation), another impulse is generated from the soma and propagates towards the apical portions of the dendritic arbor or dendrites (from which much of the original input current originated). In addition to active backpropagation of the action potential, there is also passive electrotonic spread. While there is ample evidence to prove the existence of backpropagating action potentials, the function of such action potentials and the extent to which they invade the most distal dendrites remain highly controversial.

When graded excitatory postsynaptic potentials (EPSPs) depolarize the soma to spike threshold at the axon hillock, the axon first experiences a propagating impulse through the electrical properties of its voltage-gated sodium and voltage-gated potassium channels. An action potential occurs in the axon first, as research illustrates that sodium channels at the dendrites exhibit a higher threshold than those on the membrane of the axon (Rapp et al., 1996). Moreover, the higher threshold of the voltage-gated sodium channels on the dendritic membranes helps prevent synaptic input from triggering an action potential there. Instead, only when the soma depolarizes enough from accumulating graded potentials and fires an axonal action potential will these channels be activated to propagate a signal traveling backwards (Rapp et al., 1996). Generally, EPSPs from synaptic activation are not large enough to activate the dendritic voltage-gated calcium channels (usually on the order of a couple of milliamperes each), so backpropagation is typically believed to happen only when the cell is activated to fire an action potential. These sodium channels on the dendrites are abundant in certain types of neurons, especially mitral and pyramidal cells, and quickly inactivate.
Initially, it was thought that an action potential could only travel down the axon in one direction (towards the axon terminal, where it ultimately signals the release of neurotransmitters). However, recent research has provided evidence for the existence of backwards-propagating action potentials (Staley 2004). To elaborate, neural backpropagation can occur in one of two ways. First, during the initiation of an axonal action potential, the cell body, or soma, can become depolarized as well. This depolarization can spread through the cell body towards the dendritic tree, where there are voltage-gated sodium channels. The depolarization of these voltage-gated sodium channels can then result in the propagation of a dendritic action potential. Such backpropagation is sometimes referred to as an echo of the forward-propagating action potential (Staley 2004). It has also been shown that an action potential initiated in the axon can create a retrograde signal that travels in the opposite direction (Hausser 2000). This impulse travels up the axon, eventually causing the cell body to become depolarized and thus triggering the dendritic voltage-gated calcium channels. As described in the first process, the triggering of dendritic voltage-gated calcium channels leads to the propagation of a dendritic action potential.

It is important to note that the strength of backpropagating action potentials varies greatly between different neuronal types (Hausser 2000). Some types of neuronal cells show little to no decrease in the amplitude of action potentials as they invade and travel through the dendritic tree, while other neuronal cell types, such as cerebellar Purkinje neurons, exhibit very little action potential backpropagation (Stuart 1997). Additionally, there are other neuronal cell types that manifest varying degrees of amplitude decrement during backpropagation.
It is thought that this is because each neuronal cell type contains varying numbers of the voltage-gated channels required to propagate a dendritic action potential. Generally, synaptic signals that are received by the dendrite are combined in the soma in order to generate an action potential that is then transmitted down the axon toward the next synaptic contact. Thus, the backpropagation of action potentials poses a threat of initiating an uncontrolled positive feedback loop between the soma and the dendrites. For example, as an action potential is triggered, its dendritic echo could enter the dendrite and potentially trigger a second action potential. If left unchecked, an endless cycle of action potentials triggered by their own echo would be created. In order to prevent such a cycle, most neurons have a relatively high density of A-type K+ channels.

A-type K+ channels belong to the superfamily of voltage-gated ion channels and are transmembrane channels that help maintain the cell's membrane potential (Cai 2007). Typically, they play a crucial role in returning the cell to its resting membrane potential following an action potential by allowing an inhibitory current of K+ ions to quickly flow out of the neuron. The presence of these channels in such high density in the dendrites explains the dendrites' inability to initiate an action potential, even during synaptic input. Additionally, the presence of these channels provides a mechanism by which the neuron can suppress and regulate the backpropagation of action potentials through the dendrite (Vetter 2000). Pharmacological antagonists of these channels increased the frequency of backpropagating action potentials, which demonstrates their importance in keeping the cell from excessive firing (Waters et al., 2004). Results have indicated a linear increase in the density of A-type channels with increasing distance into the dendrite away from the soma.
The increase in the density of A-type channels results in a dampening of the backpropagating action potential as it travels into the dendrite. Essentially, inhibition occurs because the A-type channels facilitate the outflow of K+ ions in order to maintain the membrane potential below threshold levels (Cai 2007). Such inhibition limits EPSPs and protects the neuron from entering a never-ending positive feedback loop between the soma and the dendrites.

Since the 1950s, evidence has existed that neurons in the central nervous system generate an action potential, or voltage spike, that travels both through the axon to signal the next neuron and backpropagates through the dendrites, sending a retrograde signal to its presynaptic signaling neurons. This current decays significantly with travel length along the dendrites, so effects are predicted to be more significant for neurons whose synapses are near the postsynaptic cell body, with magnitude depending mainly on sodium-channel density in the dendrite. It is also dependent on the shape of the dendritic tree and, more importantly, on the rate of signal currents to the neuron. On average, a backpropagating spike loses about half its voltage after traveling nearly 500 micrometres.

Backpropagation occurs actively in the neocortex, hippocampus, substantia nigra, and spinal cord, while in the cerebellum it occurs relatively passively. This is consistent with observations that synaptic plasticity is much more apparent in areas like the hippocampus, which controls spatial memory, than in the cerebellum, which controls more unconscious and vegetative functions. The backpropagating current also causes a voltage change that increases the concentration of Ca2+ in the dendrites, an event which coincides with certain models of synaptic plasticity.
This change also affects future integration of signals, leading to at least a short-term response difference between the presynaptic signals and the postsynaptic spike.[1] While many questions have yet to be answered regarding neural backpropagation, a number of hypotheses exist regarding its function. Some proposed functions include involvement in synaptic plasticity, involvement in dendrodendritic inhibition, boosting synaptic responses, resetting membrane potential, retrograde actions at synapses, and conditional axonal output. Backpropagation is believed to help form LTP (long-term potentiation) and Hebbian plasticity at hippocampal synapses. Since artificial LTP induction, using microelectrode stimulation, voltage clamp, etc., requires the postsynaptic cell to be slightly depolarized when EPSPs are elicited, backpropagation can serve as the means of depolarization of the postsynaptic cell. Backpropagating action potentials can induce long-term potentiation by behaving as a signal that informs the presynaptic cell that the postsynaptic cell has fired. Moreover, spike-timing-dependent plasticity is known as the narrow time frame within which coincidental firing of both the pre- and postsynaptic neurons will induce plasticity. Neural backpropagation occurs in this window to interact with NMDA receptors at the apical dendrites by assisting in the removal of the voltage-sensitive Mg2+ block (Waters et al., 2004). This process permits the large influx of calcium which provokes a cascade of events that causes potentiation. Current literature also suggests that backpropagating action potentials are responsible for the release of retrograde neurotransmitters and trophic factors which contribute to the short-term and long-term efficacy between two neurons. Since the backpropagating action potentials essentially exhibit a copy of the neuron's axonal firing pattern, they help establish synchrony between the pre- and postsynaptic neurons (Waters et al., 2004). 
Importantly, backpropagating action potentials are necessary for the release of brain-derived neurotrophic factor (BDNF). BDNF is an essential component for inducing synaptic plasticity and development (Kuczewski N., Porcher C., Ferrand N., 2008). Moreover, backpropagating action potentials have been shown to induce BDNF-dependent phosphorylation of cyclic AMP response element-binding protein (CREB), which is known to be a major component in synaptic plasticity and memory formation (Kuczewski N., Porcher C., Lessmann V., et al. 2008). While a backpropagating action potential can presumably cause changes in the weight of the presynaptic connections, there is no simple mechanism for an error signal to propagate through multiple layers of neurons, as in the computer backpropagation algorithm. However, simple linear topologies have shown that effective computation is possible through signal backpropagation in this biological sense.[2]
https://en.wikipedia.org/wiki/Neural_backpropagation
Backpropagation through time (BPTT) is a gradient-based technique for training certain types of recurrent neural networks, such as Elman networks. The algorithm was independently derived by numerous researchers.[1][2][3] The training data for a recurrent neural network is an ordered sequence of k input-output pairs, ⟨a_0, y_0⟩, ⟨a_1, y_1⟩, ⟨a_2, y_2⟩, ..., ⟨a_{k−1}, y_{k−1}⟩. An initial value must be specified for the hidden state x_0, typically chosen to be a zero vector. BPTT begins by unfolding a recurrent neural network in time. The unfolded network contains k inputs and outputs, but every copy of the network shares the same parameters. Then, the backpropagation algorithm is used to find the gradient of the loss function with respect to all the network parameters. Consider an example of a neural network that contains a recurrent layer f and a feedforward layer g. There are different ways to define the training cost, but the aggregated cost is always the average of the costs of each of the time steps. The cost of each time step can be computed separately. The figure above shows how the cost at time t+3 can be computed, by unfolding the recurrent layer f for three time steps and adding the feedforward layer g. Each instance of f in the unfolded network shares the same parameters. Thus, the weight updates in each instance (f_1, f_2, f_3) are summed together. 
In a truncated version of BPTT, where the training data contains n input-output pairs, the network is unfolded for only k time steps at a time. BPTT tends to be significantly faster for training recurrent neural networks than general-purpose optimization techniques such as evolutionary optimization.[4] BPTT has difficulty with local optima. With recurrent neural networks, local optima are a much more significant problem than with feed-forward neural networks.[5] The recurrent feedback in such networks tends to create chaotic responses in the error surface, which causes local optima to occur frequently, and in poor locations on the error surface.
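As a concrete sketch of the unfold-and-sum procedure, the following NumPy implementation computes BPTT gradients for a single-layer Elman-style network. The quadratic loss, tanh nonlinearity, and all names here are illustrative assumptions, not the article's own pseudocode:

```python
import numpy as np

def bptt_gradients(Wx, Wh, Wy, xs, ys, h0):
    """BPTT for an Elman RNN: h_t = tanh(Wx x_t + Wh h_{t-1}),
    o_t = Wy h_t, cost = mean over time of 0.5 * ||o_t - y_t||^2.
    Gradients of the shared weights are summed over all unfolded
    copies of the recurrent layer."""
    k = len(xs)
    hs, os = [h0], []
    for t in range(k):                      # forward: unfold k steps
        hs.append(np.tanh(Wx @ xs[t] + Wh @ hs[-1]))
        os.append(Wy @ hs[-1])
    dWx, dWh, dWy = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(Wy)
    dh_next = np.zeros_like(h0)
    for t in reversed(range(k)):            # backward through time
        do = (os[t] - ys[t]) / k            # d cost / d o_t
        dWy += np.outer(do, hs[t + 1])
        dh = Wy.T @ do + dh_next            # from output and from step t+1
        dz = (1.0 - hs[t + 1] ** 2) * dh    # through tanh
        dWx += np.outer(dz, xs[t])
        dWh += np.outer(dz, hs[t])
        dh_next = Wh.T @ dz                 # propagate to step t-1
    return dWx, dWh, dWy
```

Because every unfolded copy of the recurrent layer shares Wx and Wh, their gradient contributions from each time step are accumulated, exactly as described above for the instances f_1, f_2, f_3.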
https://en.wikipedia.org/wiki/Backpropagation_through_time
Backpropagation through structure (BPTS) is a gradient-based technique for training recursive neural networks, proposed in a 1996 paper written by Christoph Goller and Andreas Küchler.[1]
https://en.wikipedia.org/wiki/Backpropagation_through_structure
In neuroscience and machine learning, three-factor learning is the combination of Hebbian plasticity with a third modulatory factor to stabilise and enhance synaptic learning.[1] This third factor can represent various signals such as reward, punishment, error, surprise, or novelty, often implemented through neuromodulators.[2] Three-factor learning introduces the concept of eligibility traces, which flag synapses for potential modification pending the arrival of the third factor. This helps temporal credit assignment by bridging the gap between rapid neuronal firing and the slower behavioral timescales on which learning occurs.[3] The biological basis for three-factor learning rules has been supported by experimental evidence.[4][2] This approach addresses the instability of classical Hebbian learning by minimizing autocorrelation and maximizing cross-correlation between inputs.[1]
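As a rough illustration, such a rule can be sketched as a Hebbian coincidence term that is stored in a decaying eligibility trace rather than applied directly, with the actual weight change gated by the modulatory signal. This is a minimal sketch: the decay constant, learning rate, and outer-product form are assumptions, not a specific published rule:

```python
import numpy as np

def three_factor_update(w, pre, post, modulator, trace, tau=0.9, lr=0.1):
    """One step of a three-factor learning rule.
    The Hebbian coincidence pre*post does not change the weight
    directly; it is accumulated in a decaying eligibility trace, and
    the weight only changes when the third (modulatory) factor arrives."""
    trace = tau * trace + np.outer(post, pre)   # flag eligible synapses
    w = w + lr * modulator * trace              # gated weight change
    return w, trace
```

With modulator equal to 0, coincident pre/post activity only flags synapses; the flagged synapses change only when a nonzero third factor later arrives, which is what lets credit bridge the delay between neuronal firing and a behavioral outcome.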
https://en.wikipedia.org/wiki/Three-factor_learning
This is a list of numerical analysis topics. Error analysis (mathematics). Numerical linear algebra — study of numerical algorithms for linear algebra problems. Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix. Interpolation — construct a function going through some given data points. Polynomial interpolation — interpolation by polynomials. Spline interpolation — interpolation by piecewise polynomials. Trigonometric interpolation — interpolation by trigonometric polynomials. Approximation theory. Root-finding algorithm — algorithms for solving the equation f(x) = 0. Mathematical optimization — algorithm for finding maxima or minima of a given function. Linear programming (also treats integer programming) — objective function and constraints are linear. Convex optimization. Nonlinear programming — the most general optimization problem in the usual framework. Optimal control. Infinite-dimensional optimization. Numerical integration — the numerical evaluation of an integral. Numerical methods for ordinary differential equations — the numerical solution of ordinary differential equations (ODEs). Numerical partial differential equations — the numerical solution of partial differential equations (PDEs). Finite difference method — based on approximating differential operators with difference operators. Finite element method — based on a discretization of the space of solutions. Gradient discretisation method — based on both the discretization of the solution and of its gradient. For a large list of software, see the list of numerical-analysis software.
https://en.wikipedia.org/wiki/List_of_numerical_analysis_topics#Eigenvalue_algorithms
Cluster analysis, or clustering, is the data-analysis task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some specific sense defined by the analyst) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Cluster analysis refers to a family of algorithms and tasks rather than one specific algorithm. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem. The appropriate clustering algorithm and parameter settings (including parameters such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multi-objective optimization that involves trial and error. It is often necessary to modify data preprocessing and model parameters until the result achieves the desired properties. Besides the term clustering, there are a number of terms with similar meanings, including automatic classification, numerical taxonomy, botryology (from Greek: βότρυς 'grape'), typological analysis, and community detection. The subtle differences are often in the use of the results: while in data mining the resulting groups are the matter of interest, in automatic classification the resulting discriminative power is of interest. 
Cluster analysis originated in anthropology with Driver and Kroeber in 1932,[1] was introduced to psychology by Joseph Zubin in 1938[2] and Robert Tryon in 1939,[3] and was famously used by Cattell beginning in 1943[4] for trait theory classification in personality psychology. The notion of a "cluster" cannot be precisely defined, which is one of the reasons why there are so many clustering algorithms.[5] There is a common denominator: a group of data objects. However, different researchers employ different cluster models, and for each of these cluster models again different algorithms can be given. The notion of a cluster, as found by different algorithms, varies significantly in its properties. Understanding these "cluster models" is key to understanding the differences between the various algorithms. Typical cluster models include connectivity-based, centroid-based, distribution-based, density-based, and grid-based models, discussed in turn below. A "clustering" is essentially a set of such clusters, usually containing all objects in the data set. Additionally, it may specify the relationship of the clusters to each other, for example, a hierarchy of clusters embedded in each other. Clusterings can be roughly distinguished as hard (each object either belongs to a cluster or does not) or soft (each object belongs to each cluster to some degree), with finer distinctions also possible. As listed above, clustering algorithms can be categorized based on their cluster model. The following overview will only list the most prominent examples of clustering algorithms, as there are possibly over 100 published clustering algorithms. Not all provide models for their clusters and can thus not easily be categorized. An overview of algorithms explained in Wikipedia can be found in the list of statistics algorithms. 
There is no objectively "correct" clustering algorithm; as it has been noted, "clustering is in the eye of the beholder."[5] In fact, an axiomatic approach to clustering demonstrates that it is impossible for any clustering method to meet three fundamental properties simultaneously: scale invariance (results remain unchanged under proportional scaling of distances), richness (all possible partitions of the data can be achieved), and consistency between distances and the clustering structure.[7] The most appropriate clustering algorithm for a particular problem often needs to be chosen experimentally, unless there is a mathematical reason to prefer one cluster model over another. An algorithm that is designed for one kind of model will generally fail on a data set that contains a radically different kind of model.[5] For example, k-means cannot find non-convex clusters.[5] Most traditional clustering methods assume the clusters exhibit a spherical, elliptical or convex shape.[8] Connectivity-based clustering, also known as hierarchical clustering, is based on the core idea of objects being more related to nearby objects than to objects farther away. These algorithms connect "objects" to form "clusters" based on their distance. A cluster can be described largely by the maximum distance needed to connect parts of the cluster. At different distances, different clusters will form, which can be represented using a dendrogram, which explains where the common name "hierarchical clustering" comes from: these algorithms do not provide a single partitioning of the data set, but instead provide an extensive hierarchy of clusters that merge with each other at certain distances. In a dendrogram, the y-axis marks the distance at which the clusters merge, while the objects are placed along the x-axis such that the clusters don't mix. Connectivity-based clustering is a whole family of methods that differ by the way distances are computed. 
Apart from the usual choice of distance functions, the user also needs to decide on the linkage criterion (since a cluster consists of multiple objects, there are multiple candidates to compute the distance) to use. Popular choices are known as single-linkage clustering (the minimum of object distances), complete-linkage clustering (the maximum of object distances), and UPGMA or WPGMA ("Unweighted or Weighted Pair Group Method with Arithmetic Mean", also known as average-linkage clustering). Furthermore, hierarchical clustering can be agglomerative (starting with single elements and aggregating them into clusters) or divisive (starting with the complete data set and dividing it into partitions). These methods will not produce a unique partitioning of the data set, but a hierarchy from which the user still needs to choose appropriate clusters. They are not very robust towards outliers, which will either show up as additional clusters or even cause other clusters to merge (known as the "chaining phenomenon", in particular with single-linkage clustering). In the general case, the complexity is O(n³) for agglomerative clustering and O(2^(n−1)) for divisive clustering,[9] which makes them too slow for large data sets. For some special cases, optimal efficient methods (of complexity O(n²)) are known: SLINK[10] for single-linkage and CLINK[11] for complete-linkage clustering. In centroid-based clustering, each cluster is represented by a central vector, which is not necessarily a member of the data set. When the number of clusters is fixed to k, k-means clustering gives a formal definition as an optimization problem: find the k cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster centers are minimized. The optimization problem itself is known to be NP-hard, and thus the common approach is to search only for approximate solutions. 
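The agglomerative scheme with a pluggable linkage criterion can be sketched as follows. This is a deliberately naive version matching the O(n³) general-case complexity quoted above; the function and parameter names are illustrative:

```python
import numpy as np

def agglomerative(points, num_clusters, linkage=min):
    """Naive agglomerative clustering: start with singleton clusters
    and repeatedly merge the two clusters with the smallest linkage
    distance. linkage=min gives single linkage; linkage=max gives
    complete linkage."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):  # linkage distance between two clusters
        return linkage(np.linalg.norm(points[i] - points[j])
                       for i in a for j in b)

    while len(clusters) > num_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: dist(clusters[p[0]], clusters[p[1]]))
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters
```

Stopping at a target number of clusters is one way to "cut" the hierarchy; recording each merge distance instead would yield the dendrogram described above.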
A particularly well-known approximate method is Lloyd's algorithm,[12] often just referred to as the "k-means algorithm" (although another algorithm introduced this name). It does however only find a local optimum, and is commonly run multiple times with different random initializations. Variations of k-means often include such optimizations as choosing the best of multiple runs, but also restricting the centroids to members of the data set (k-medoids), choosing medians (k-medians clustering), choosing the initial centers less randomly (k-means++) or allowing a fuzzy cluster assignment (fuzzy c-means). Most k-means-type algorithms require the number of clusters, k, to be specified in advance, which is considered to be one of the biggest drawbacks of these algorithms. Furthermore, the algorithms prefer clusters of approximately similar size, as they will always assign an object to the nearest centroid, often yielding improperly cut borders of clusters. This happens primarily because the algorithm optimizes cluster centers, not cluster borders. The steps involved in the centroid-based clustering algorithm are: assign each object to its nearest cluster center, recompute each center as the mean of its assigned objects, and repeat until the assignments no longer change. K-means has a number of interesting theoretical properties. First, it partitions the data space into a structure known as a Voronoi diagram. Second, it is conceptually close to nearest-neighbor classification, and as such is popular in machine learning. Third, it can be seen as a variation of model-based clustering, and Lloyd's algorithm as a variation of the expectation-maximization algorithm for this model, discussed below. Centroid-based clustering problems such as k-means and k-medoids are special cases of the uncapacitated, metric facility location problem, a canonical problem in the operations research and computational geometry communities. In a basic facility location problem (of which there are numerous variants that model more elaborate settings), the task is to find the best warehouse locations to optimally service a given set of consumers. 
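Lloyd's algorithm itself can be sketched in a few lines of NumPy. This is a minimal illustrative version; initializing centers by sampling data points is one common choice among many:

```python
import numpy as np

def lloyd_kmeans(X, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate between assigning each point to
    its nearest center and moving each center to the mean of its
    assigned points. Finds only a local optimum, so in practice it is
    restarted with several random initializations."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                      # assignment step
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):                  # converged
            break
        centers = new                                  # update step
    return centers, labels
```

The assignment step implicitly carves the space into the Voronoi cells of the current centers, which is the first theoretical property mentioned above.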
One may view "warehouses" as cluster centroids and "consumer locations" as the data to be clustered. This makes it possible to apply the well-developed algorithmic solutions from the facility location literature to the presently considered centroid-based clustering problem. The clustering framework most closely related to statistics is model-based clustering, which is based on distribution models. This approach models the data as arising from a mixture of probability distributions. It has the advantages of providing principled statistical answers to questions such as how many clusters there are, what clustering method or model to use, and how to detect and deal with outliers. While the theoretical foundation of these methods is excellent, they suffer from overfitting unless constraints are put on the model complexity. A more complex model will usually be able to explain the data better, which makes choosing the appropriate model complexity inherently difficult. Standard model-based clustering methods include more parsimonious models based on the eigenvalue decomposition of the covariance matrices, that provide a balance between overfitting and fidelity to the data. One prominent method is known as Gaussian mixture models (using the expectation-maximization algorithm). Here, the data set is usually modeled with a fixed (to avoid overfitting) number of Gaussian distributions that are initialized randomly and whose parameters are iteratively optimized to better fit the data set. This will converge to a local optimum, so multiple runs may produce different results. In order to obtain a hard clustering, objects are often then assigned to the Gaussian distribution they most likely belong to; for soft clusterings, this is not necessary. Distribution-based clustering produces complex models for clusters that can capture correlation and dependence between attributes. 
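A Gaussian-mixture EM fit can be sketched as follows, assuming spherical (single-variance) components as a crude constraint against the overfitting discussed above. The deterministic farthest-point initialization is an illustrative choice, not part of the standard method:

```python
import numpy as np

def gmm_em(X, k, iters=50):
    """EM for a spherical Gaussian mixture (a simplified sketch).
    E-step: soft responsibilities of each component for each point.
    M-step: reestimate weights, means, and variances. Converges to a
    local optimum of the likelihood."""
    n, d = X.shape
    mu = [X[0]]                      # farthest-point init (illustrative)
    for _ in range(k - 1):
        d2 = np.min([((X - m) ** 2).sum(1) for m in mu], axis=0)
        mu.append(X[np.argmax(d2)])
    mu = np.array(mu)
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: log density of each point under each component
        logp = (-0.5 * ((X[:, None] - mu[None]) ** 2).sum(-1) / var
                - 0.5 * d * np.log(2 * np.pi * var) + np.log(pi))
        r = np.exp(logp - logp.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)         # responsibilities
        # M-step: reestimate parameters from the soft assignment
        nk = r.sum(0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r * ((X[:, None] - mu[None]) ** 2).sum(-1)).sum(0) / (d * nk)
    return pi, mu, var, r
```

Taking the argmax over the returned responsibilities gives the hard clustering mentioned above; keeping them as-is gives the soft clustering.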
However, these algorithms put an extra burden on the user: for many real data sets, there may be no concisely defined mathematical model (e.g. assuming Gaussian distributions is a rather strong assumption on the data). In density-based clustering,[13] clusters are defined as areas of higher density than the remainder of the data set. Objects in sparse areas – that are required to separate clusters – are usually considered to be noise and border points. The most popular[14] density-based clustering method is DBSCAN.[15] In contrast to many newer methods, it features a well-defined cluster model called "density-reachability". Similar to linkage-based clustering, it is based on connecting points within certain distance thresholds. However, it only connects points that satisfy a density criterion, in the original variant defined as a minimum number of other objects within this radius. A cluster consists of all density-connected objects (which can form a cluster of an arbitrary shape, in contrast to many other methods) plus all objects that are within these objects' range. Another interesting property of DBSCAN is that its complexity is fairly low – it requires a linear number of range queries on the database – and that it will discover essentially the same results (it is deterministic for core and noise points, but not for border points) in each run; therefore, there is no need to run it multiple times. OPTICS[16] is a generalization of DBSCAN that removes the need to choose an appropriate value for the range parameter ε, and produces a hierarchical result related to that of linkage clustering. DeLi-Clu,[17] Density-Link-Clustering, combines ideas from single-linkage clustering and OPTICS, eliminating the ε parameter entirely and offering performance improvements over OPTICS by using an R-tree index. The key drawback of DBSCAN and OPTICS is that they expect some kind of density drop to detect cluster borders. 
On data sets with, for example, overlapping Gaussian distributions – a common use case in artificial data – the cluster borders produced by these algorithms will often look arbitrary, because the cluster density decreases continuously. On a data set consisting of mixtures of Gaussians, these algorithms are nearly always outperformed by methods such as EM clustering that are able to precisely model this kind of data. Mean-shift is a clustering approach where each object is moved to the densest area in its vicinity, based on kernel density estimation. Eventually, objects converge to local maxima of density. Similar to k-means clustering, these "density attractors" can serve as representatives for the data set, but mean-shift can detect arbitrary-shaped clusters similar to DBSCAN. Due to the expensive iterative procedure and density estimation, mean-shift is usually slower than DBSCAN or k-means. Besides that, the applicability of the mean-shift algorithm to multidimensional data is hindered by the unsmooth behaviour of the kernel density estimate, which results in over-fragmentation of cluster tails.[17] The grid-based technique is used for a multi-dimensional data set.[18] In this technique, we create a grid structure, and the comparison is performed on grids (also known as cells). The grid-based technique is fast and has low computational complexity. There are two types of grid-based clustering methods: STING and CLIQUE. The steps involved in the grid-based clustering algorithm are: divide the data space into a finite number of cells, compute the density of each cell, discard cells below a density threshold, and form clusters from contiguous groups of dense cells. In recent years, considerable effort has been put into improving the performance of existing algorithms.[19][20] Among them are CLARANS[21] and BIRCH.[22] With the recent need to process larger and larger data sets (also known as big data), the willingness to trade semantic meaning of the generated clusters for performance has been increasing. 
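The DBSCAN procedure described earlier (core points, density-reachability, noise) can be sketched as follows. The parameter names follow the usual ε/minPts convention, and the brute-force distance matrix is an illustrative simplification of the range queries:

```python
import numpy as np

def dbscan(X, eps=0.5, min_pts=3):
    """Minimal DBSCAN sketch: a point is a core point if at least
    min_pts points (itself included) lie within radius eps; a cluster
    is the set of density-connected core points plus any border
    points in a core point's range. Noise keeps the label -1."""
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    core = [len(nb) >= min_pts for nb in neighbors]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        labels[i] = cluster
        stack = [i]                       # expand one cluster
        while stack:
            j = stack.pop()
            for q in neighbors[j]:
                if labels[q] == -1:
                    labels[q] = cluster   # core or border point
                    if core[q]:
                        stack.append(q)   # only cores keep expanding
        cluster += 1
    return labels
```

Because expansion follows density-reachable core points in any order, core and noise labels come out the same in every run, matching the determinism property noted above.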
This led to the development of pre-clustering methods such as canopy clustering, which can process huge data sets efficiently, but the resulting "clusters" are merely a rough pre-partitioning of the data set, to then analyze the partitions with existing slower methods such as k-means clustering. For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces. This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering that also looks for arbitrarily rotated ("correlated") subspace clusters that can be modeled by giving a correlation of their attributes.[23] Examples for such clustering algorithms are CLIQUE[24] and SUBCLU.[25] Ideas from density-based clustering methods (in particular the DBSCAN/OPTICS family of algorithms) have been adapted to subspace clustering (HiSC,[26] hierarchical subspace clustering, and DiSH[27]) and correlation clustering (HiCO,[28] hierarchical correlation clustering, 4C[29] using "correlation connectivity", and ERiC[30] exploring hierarchical density-based correlation clusters). Several different clustering systems based on mutual information have been proposed. 
One is Marina Meilă's variation of information metric;[31] another provides hierarchical clustering.[32] Using genetic algorithms, a wide range of different fit functions can be optimized, including mutual information.[33] Also belief propagation, a recent development in computer science and statistical physics, has led to the creation of new types of clustering algorithms.[34] Evaluation (or "validation") of clustering results is as difficult as the clustering itself.[35] Popular approaches involve "internal" evaluation, where the clustering is summarized to a single quality score; "external" evaluation, where the clustering is compared to an existing "ground truth" classification; "manual" evaluation by a human expert; and "indirect" evaluation by evaluating the utility of the clustering in its intended application.[36] Internal evaluation measures suffer from the problem that they represent functions that themselves can be seen as a clustering objective. For example, one could cluster the data set by the silhouette coefficient, except that there is no known efficient algorithm for this. By using such an internal measure for evaluation, one rather compares the similarity of the optimization problems,[36] and not necessarily how useful the clustering is. External evaluation has similar problems: if we have such "ground truth" labels, then we would not need to cluster; and in practical applications we usually do not have such labels. On the other hand, the labels only reflect one possible partitioning of the data set, which does not imply that there does not exist a different, and maybe even better, clustering. Neither of these approaches can therefore ultimately judge the actual quality of a clustering; this needs human evaluation,[36] which is highly subjective. 
Nevertheless, such statistics can be quite informative in identifying bad clusterings,[37] but one should not dismiss subjective human evaluation.[37] When a clustering result is evaluated based on the data that was clustered itself, this is called internal evaluation. These methods usually assign the best score to the algorithm that produces clusters with high similarity within a cluster and low similarity between clusters. One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications.[38] Additionally, this evaluation is biased towards algorithms that use the same cluster model. For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering. Therefore, the internal evaluation measures are best suited to get some insight into situations where one algorithm performs better than another, but this does not imply that one algorithm produces more valid results than another.[5] Validity as measured by such an index depends on the claim that this kind of structure exists in the data set. An algorithm designed for some kind of model has no chance if the data set contains a radically different set of models, or if the evaluation measures a radically different criterion.[5] For example, k-means clustering can only find convex clusters, and many evaluation indexes assume convex clusters. On a data set with non-convex clusters, neither the use of k-means nor of an evaluation criterion that assumes convexity is sound. 
More than a dozen internal evaluation measures exist, usually based on the intuition that items in the same cluster should be more similar than items in different clusters.[39]: 115–121 For example, the following methods can be used to assess the quality of clustering algorithms based on internal criteria. The Davies–Bouldin index can be calculated by the following formula: DB = (1/n) Σ_{i=1..n} max_{j≠i} ((σ_i + σ_j) / d(c_i, c_j)), where n is the number of clusters, c_i is the centroid of cluster i, σ_i is the average distance of all elements in cluster i to centroid c_i, and d(c_i, c_j) is the distance between centroids c_i and c_j. Since algorithms that produce clusters with low intra-cluster distances (high intra-cluster similarity) and high inter-cluster distances (low inter-cluster similarity) will have a low Davies–Bouldin index, the clustering algorithm that produces a collection of clusters with the smallest Davies–Bouldin index is considered the best algorithm based on this criterion. The Dunn index aims to identify dense and well-separated clusters. It is defined as the ratio between the minimal inter-cluster distance and the maximal intra-cluster distance. For each cluster partition, the Dunn index can be calculated by the following formula:[40] D = min_{i≠j} d(i,j) / max_k d′(k), where d(i,j) represents the distance between clusters i and j, and d′(k) measures the intra-cluster distance of cluster k. The inter-cluster distance d(i,j) between two clusters may be any number of distance measures, such as the distance between the centroids of the clusters. Similarly, the intra-cluster distance d′(k) may be measured in a variety of ways, such as the maximal distance between any pair of elements in cluster k. 
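The Davies–Bouldin formula transcribes almost directly into NumPy (an illustrative sketch; σ_i is taken as the mean distance to the centroid, as defined above):

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: DB = (1/n) * sum_i max_{j != i}
    (sigma_i + sigma_j) / d(c_i, c_j). Lower is better."""
    ids = np.unique(labels)
    c = np.array([X[labels == i].mean(axis=0) for i in ids])       # centroids
    s = np.array([np.linalg.norm(X[labels == i] - c[m], axis=1).mean()
                  for m, i in enumerate(ids)])                     # sigma_i
    n = len(ids)
    return float(np.mean([max((s[i] + s[j]) / np.linalg.norm(c[i] - c[j])
                              for j in range(n) if j != i)
                          for i in range(n)]))
```

A tight, well-separated partition yields a far smaller value than an arbitrary one, which is what makes the index usable for comparing clusterings of the same data.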
Since internal criteria seek clusters with high intra-cluster similarity and low inter-cluster similarity, algorithms that produce clusters with a high Dunn index are more desirable. The silhouette coefficient contrasts the average distance to elements in the same cluster with the average distance to elements in other clusters. Objects with a high silhouette value are considered well clustered; objects with a low value may be outliers. This index works well with k-means clustering, and is also used to determine the optimal number of clusters.[41] In external evaluation, clustering results are evaluated based on data that was not used for clustering, such as known class labels and external benchmarks. Such benchmarks consist of a set of pre-classified items, and these sets are often created by (expert) humans. Thus, the benchmark sets can be thought of as a gold standard for evaluation.[35] These types of evaluation methods measure how close the clustering is to the predetermined benchmark classes. However, it has recently been discussed whether this is adequate for real data, or only for synthetic data sets with a factual ground truth, since classes can contain internal structure, the attributes present may not allow separation of clusters, or the classes may contain anomalies.[42] Additionally, from a knowledge discovery point of view, the reproduction of known knowledge may not necessarily be the intended result.[42] In the special scenario of constrained clustering, where meta information (such as class labels) is used already in the clustering process, the hold-out of information for evaluation purposes is non-trivial.[43] A number of measures are adapted from variants used to evaluate classification tasks. 
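The silhouette coefficient can be computed directly from its definition (a minimal sketch, assuming every cluster has at least two members so the within-cluster average is defined):

```python
import numpy as np

def silhouette_score(X, labels):
    """Mean silhouette: for each point, s = (b - a) / max(a, b),
    where a is the mean distance to the other members of its own
    cluster and b is the smallest mean distance to another cluster."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    ids = np.unique(labels)
    idx = np.arange(len(X))
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        a = d[i, same & (idx != i)].mean()            # own-cluster distance
        b = min(d[i, labels == c].mean()              # nearest other cluster
                for c in ids if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Sweeping k and picking the clustering with the highest mean silhouette is the standard use of this index for choosing the number of clusters.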
In place of counting the number of times a class was correctly assigned to a single data point (known as true positives), such pair counting metrics assess whether each pair of data points that is truly in the same cluster is predicted to be in the same cluster.[35] As with internal evaluation, several external evaluation measures exist,[39]: 125–129 for example: Purity is a measure of the extent to which clusters contain a single class.[38] Its calculation can be thought of as follows: for each cluster, count the number of data points from the most common class in said cluster; then take the sum over all clusters and divide by the total number of data points. Formally, given some set of clusters $M$ and some set of classes $D$, both partitioning $N$ data points, purity can be defined as: ${\frac {1}{N}}\sum _{m\in M}\max _{d\in D}|m\cap d|$. This measure doesn't penalize having many clusters, and more clusters will make it easier to produce a high purity; a purity score of 1 is always possible by putting each data point in its own cluster. Purity also doesn't work well for imbalanced data, where even poorly performing clustering algorithms will give a high purity value. For example, if a size 1000 dataset consists of two classes, one containing 999 points and the other containing 1 point, then every possible partition will have a purity of at least 99.9%. The Rand index[44] computes how similar the clusters (returned by the clustering algorithm) are to the benchmark classifications. It can be computed using the following formula: $RI={\frac {TP+TN}{TP+FP+FN+TN}}$, where $TP$ is the number of true positives, $TN$ is the number of true negatives, $FP$ is the number of false positives, and $FN$ is the number of false negatives. The instances being counted here are the number of correct pairwise assignments.
That is, $TP$ is the number of pairs of points that are clustered together in the predicted partition and in the ground truth partition, $FP$ is the number of pairs of points that are clustered together in the predicted partition but not in the ground truth partition, etc. If the dataset is of size $N$, then $TP+TN+FP+FN={\binom {N}{2}}$. One issue with the Rand index is that false positives and false negatives are equally weighted. This may be an undesirable characteristic for some clustering applications. The F-measure addresses this concern,[citation needed] as does the chance-corrected adjusted Rand index. The F-measure can be used to balance the contribution of false negatives by weighting recall through a parameter $\beta \geq 0$. Let precision and recall (both external evaluation measures in themselves) be defined as follows: $P={\frac {TP}{TP+FP}}$ and $R={\frac {TP}{TP+FN}}$, where $P$ is the precision rate and $R$ is the recall rate. We can calculate the F-measure by using the following formula:[38] $F_{\beta }={\frac {(\beta ^{2}+1)\cdot P\cdot R}{\beta ^{2}\cdot P+R}}$. When $\beta =0$, $F_{0}=P$; in other words, recall has no impact on the F-measure when $\beta =0$, and increasing $\beta $ allocates an increasing amount of weight to recall in the final F-measure. Also, $TN$ is not taken into account and can vary from 0 upward without bound. The Jaccard index is used to quantify the similarity between two datasets. The Jaccard index takes on a value between 0 and 1: an index of 1 means that the two datasets are identical, and an index of 0 indicates that the datasets have no common elements.
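The purity, Rand-index, and F-measure computations described above can be sketched in plain Python, with clusterings given as label lists over the same items. Pairs are enumerated directly, which is O(N²) and only meant to be illustrative.

```python
from itertools import combinations
from collections import Counter

def purity(pred, truth):
    """Fraction of points belonging to the majority class of their cluster."""
    clusters = {}
    for p, t in zip(pred, truth):
        clusters.setdefault(p, []).append(t)
    return sum(max(Counter(ts).values())
               for ts in clusters.values()) / len(pred)

def pair_counts(pred, truth):
    """Pairwise TP/FP/FN/TN between predicted and ground-truth partitions."""
    tp = fp = fn = tn = 0
    for i, j in combinations(range(len(pred)), 2):
        same_pred = pred[i] == pred[j]
        same_true = truth[i] == truth[j]
        if same_pred and same_true:
            tp += 1
        elif same_pred:
            fp += 1
        elif same_true:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

def rand_index(pred, truth):
    tp, fp, fn, tn = pair_counts(pred, truth)
    return (tp + tn) / (tp + fp + fn + tn)

def f_measure(pred, truth, beta=1.0):
    tp, fp, fn, _ = pair_counts(pred, truth)
    p, r = tp / (tp + fp), tp / (tp + fn)
    return (beta ** 2 + 1) * p * r / (beta ** 2 * p + r)
```

Note that `f_measure` never reads TN, illustrating the remark above that TN does not influence the F-measure.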
The Jaccard index is defined by the following formula: $J(A,B)={\frac {|A\cap B|}{|A\cup B|}}={\frac {TP}{TP+FP+FN}}$. This is simply the number of unique elements common to both sets divided by the total number of unique elements in both sets. Note that $TN$ is not taken into account. The Dice symmetric measure doubles the weight on $TP$ while still ignoring $TN$: $DSC={\frac {2TP}{2TP+FP+FN}}$. The Fowlkes–Mallows index[45] computes the similarity between the clusters returned by the clustering algorithm and the benchmark classifications. The higher the value of the Fowlkes–Mallows index, the more similar the clusters and the benchmark classifications are. It can be computed using the following formula: $FM={\sqrt {{\frac {TP}{TP+FP}}\cdot {\frac {TP}{TP+FN}}}}$, where $TP$ is the number of true positives, $FP$ is the number of false positives, and $FN$ is the number of false negatives. The $FM$ index is the geometric mean of the precision and recall $P$ and $R$, and is thus also known as the G-measure, while the F-measure is their harmonic mean.[46][47] Moreover, precision and recall are also known as Wallace's indices $B^{I}$ and $B^{II}$.[48] Chance-normalized versions of recall, precision and G-measure correspond to Informedness, Markedness and Matthews Correlation, and relate strongly to Kappa.[49] The Chi index[50] is an external validation index that measures the clustering results by applying the chi-squared statistic. This index scores positively the fact that the labels are as sparse as possible across the clusters, i.e., that each cluster has as few different labels as possible. The higher the value of the Chi index, the greater the relationship between the resulting clusters and the labels used.
The mutual information is an information-theoretic measure of how much information is shared between a clustering and a ground-truth classification, and it can detect a non-linear similarity between two clusterings. Normalized mutual information is a family of corrected-for-chance variants of this that has a reduced bias for varying cluster numbers.[35] A confusion matrix can be used to quickly visualize the results of a classification (or clustering) algorithm. It shows how different a cluster is from the gold standard cluster. The validity measure (short v-measure) is a combined metric for homogeneity and completeness of the clusters.[51] To measure cluster tendency is to measure to what degree clusters exist in the data to be clustered, and this may be performed as an initial test, before attempting clustering. One way to do this is to compare the data against random data. On average, random data should not have clusters.[verification needed]
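The mutual information between two labellings mentioned above can be computed directly from the joint label counts. The sketch below reports values in nats, and normalizes by the geometric mean of the two entropies, which is one of several common normalizations for NMI.

```python
import math
from collections import Counter

def mutual_information(pred, truth):
    """I(pred; truth) in nats, from the joint distribution of labels."""
    n = len(pred)
    joint, pa, pb = Counter(zip(pred, truth)), Counter(pred), Counter(truth)
    return sum((c / n) * math.log(n * c / (pa[a] * pb[b]))
               for (a, b), c in joint.items())

def entropy(labels):
    """Shannon entropy of a labelling, in nats."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def nmi(pred, truth):
    """Normalized mutual information (geometric-mean normalization)."""
    return mutual_information(pred, truth) / math.sqrt(
        entropy(pred) * entropy(truth))
```

Identical partitions give NMI 1, and statistically independent partitions give mutual information 0.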
https://en.wikipedia.org/wiki/Clustering_(statistics)
In geometry, the hyperplane separation theorem is a theorem about disjoint convex sets in n-dimensional Euclidean space. There are several rather similar versions. In one version of the theorem, if both these sets are closed and at least one of them is compact, then there is a hyperplane in between them and even two parallel hyperplanes in between them separated by a gap. In another version, if both disjoint convex sets are open, then there is a hyperplane in between them, but not necessarily any gap. An axis which is orthogonal to a separating hyperplane is a separating axis, because the orthogonal projections of the convex bodies onto the axis are disjoint. The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces. A related result is the supporting hyperplane theorem. In the context of support-vector machines, the optimally separating hyperplane or maximum-margin hyperplane is a hyperplane which separates two convex hulls of points and is equidistant from the two.[1][2][3] Hyperplane separation theorem[4] — Let $A$ and $B$ be two disjoint nonempty convex subsets of $\mathbb {R} ^{n}$. Then there exist a nonzero vector $v$ and a real number $c$ such that $\langle x,v\rangle \geq c$ and $\langle y,v\rangle \leq c$ for all $x$ in $A$ and $y$ in $B$; i.e., the hyperplane $\langle \cdot ,v\rangle =c$, with $v$ the normal vector, separates $A$ and $B$. If both sets are closed, and at least one of them is compact, then the separation can be strict, that is, $\langle x,v\rangle >c_{1}$ and $\langle y,v\rangle <c_{2}$ for some $c_{1}>c_{2}$. In all cases, assume $A,B$ to be disjoint, nonempty, and convex subsets of $\mathbb {R} ^{n}$.
The summary of the results is as follows: The number of dimensions must be finite. In infinite-dimensional spaces there are examples of two closed, convex, disjoint sets which cannot be separated by a closed hyperplane (a hyperplane where a continuous linear functional equals some constant) even in the weak sense where the inequalities are not strict.[5] Here, the compactness in the hypothesis cannot be relaxed; see an example in the section Counterexamples and uniqueness. This version of the separation theorem does generalize to infinite dimensions; the generalization is more commonly known as the Hahn–Banach separation theorem. The proof is based on the following lemma: Lemma — Let $A$ and $B$ be two disjoint closed subsets of $\mathbb {R} ^{n}$, and assume $A$ is compact. Then there exist points $a_{0}\in A$ and $b_{0}\in B$ minimizing the distance $\|a-b\|$ over $a\in A$ and $b\in B$. Let $a\in A$ and $b\in B$ be any pair of points, and let $r_{1}=\|b-a\|$. Since $A$ is compact, it is contained in some ball centered on $a$; let the radius of this ball be $r_{2}$. Let $S=B\cap {\overline {B_{r_{1}+r_{2}}(a)}}$ be the intersection of $B$ with a closed ball of radius $r_{1}+r_{2}$ around $a$. Then $S$ is compact and nonempty because it contains $b$. Since the distance function is continuous, there exist points $a_{0}$ and $b_{0}$ whose distance $\|a_{0}-b_{0}\|$ is the minimum over all pairs of points in $A\times S$. It remains to show that $a_{0}$ and $b_{0}$ in fact have the minimum distance over all pairs of points in $A\times B$.
Suppose for contradiction that there exist pointsa′{\displaystyle a'}andb′{\displaystyle b'}such that‖a′−b′‖<‖a0−b0‖{\displaystyle \|a'-b'\|<\|a_{0}-b_{0}\|}. Then in particular,‖a′−b′‖<r1{\displaystyle \|a'-b'\|<r_{1}}, and by the triangle inequality,‖a−b′‖≤‖a′−b′‖+‖a−a′‖<r1+r2{\displaystyle \|a-b'\|\leq \|a'-b'\|+\|a-a'\|<r_{1}+r_{2}}. Thereforeb′{\displaystyle b'}is contained inS{\displaystyle S}, which contradicts the fact thata0{\displaystyle a_{0}}andb0{\displaystyle b_{0}}had minimum distance overA×S{\displaystyle A\times S}.◻{\displaystyle \square } We first prove the second case. (See the diagram.) WLOG,A{\displaystyle A}is compact. By the lemma, there exist pointsa0∈A{\displaystyle a_{0}\in A}andb0∈B{\displaystyle b_{0}\in B}of minimum distance to each other. SinceA{\displaystyle A}andB{\displaystyle B}are disjoint, we havea0≠b0{\displaystyle a_{0}\neq b_{0}}. Now, construct two hyperplanesLA,LB{\displaystyle L_{A},L_{B}}perpendicular to line segment[a0,b0]{\displaystyle [a_{0},b_{0}]}, withLA{\displaystyle L_{A}}acrossa0{\displaystyle a_{0}}andLB{\displaystyle L_{B}}acrossb0{\displaystyle b_{0}}. We claim that neitherA{\displaystyle A}norB{\displaystyle B}enters the space betweenLA,LB{\displaystyle L_{A},L_{B}}, and thus the perpendicular hyperplanes to(a0,b0){\displaystyle (a_{0},b_{0})}satisfy the requirement of the theorem. Algebraically, the hyperplanesLA,LB{\displaystyle L_{A},L_{B}}are defined by the vectorv:=b0−a0{\displaystyle v:=b_{0}-a_{0}}, and two constantscA:=⟨v,a0⟩<cB:=⟨v,b0⟩{\displaystyle c_{A}:=\langle v,a_{0}\rangle <c_{B}:=\langle v,b_{0}\rangle }, such thatLA={x:⟨v,x⟩=cA},LB={x:⟨v,x⟩=cB}{\displaystyle L_{A}=\{x:\langle v,x\rangle =c_{A}\},L_{B}=\{x:\langle v,x\rangle =c_{B}\}}. Our claim is that∀a∈A,⟨v,a⟩≤cA{\displaystyle \forall a\in A,\langle v,a\rangle \leq c_{A}}and∀b∈B,⟨v,b⟩≥cB{\displaystyle \forall b\in B,\langle v,b\rangle \geq c_{B}}. 
Suppose there is somea∈A{\displaystyle a\in A}such that⟨v,a⟩>cA{\displaystyle \langle v,a\rangle >c_{A}}, then leta′{\displaystyle a'}be the foot of perpendicular fromb0{\displaystyle b_{0}}to the line segment[a0,a]{\displaystyle [a_{0},a]}. SinceA{\displaystyle A}is convex,a′{\displaystyle a'}is insideA{\displaystyle A}, and by planar geometry,a′{\displaystyle a'}is closer tob0{\displaystyle b_{0}}thana0{\displaystyle a_{0}}, contradiction. Similar argument applies toB{\displaystyle B}. Now for the first case. Approach bothA,B{\displaystyle A,B}from the inside byA1⊆A2⊆⋯⊆A{\displaystyle A_{1}\subseteq A_{2}\subseteq \cdots \subseteq A}andB1⊆B2⊆⋯⊆B{\displaystyle B_{1}\subseteq B_{2}\subseteq \cdots \subseteq B}, such that eachAk,Bk{\displaystyle A_{k},B_{k}}is closed and compact, and the unions are the relative interiorsrelint(A),relint(B){\displaystyle \mathrm {relint} (A),\mathrm {relint} (B)}. (Seerelative interiorpage for details.) Now by the second case, for each pairAk,Bk{\displaystyle A_{k},B_{k}}there exists some unit vectorvk{\displaystyle v_{k}}and real numberck{\displaystyle c_{k}}, such that⟨vk,Ak⟩<ck<⟨vk,Bk⟩{\displaystyle \langle v_{k},A_{k}\rangle <c_{k}<\langle v_{k},B_{k}\rangle }. Since the unit sphere is compact, we can take a convergent subsequence, so thatvk→v{\displaystyle v_{k}\to v}. LetcA:=supa∈A⟨v,a⟩,cB:=infb∈B⟨v,b⟩{\displaystyle c_{A}:=\sup _{a\in A}\langle v,a\rangle ,c_{B}:=\inf _{b\in B}\langle v,b\rangle }. We claim thatcA≤cB{\displaystyle c_{A}\leq c_{B}}, thus separatingA,B{\displaystyle A,B}. Assume not, then there exists somea∈A,b∈B{\displaystyle a\in A,b\in B}such that⟨v,a⟩>⟨v,b⟩{\displaystyle \langle v,a\rangle >\langle v,b\rangle }, then sincevk→v{\displaystyle v_{k}\to v}, for large enoughk{\displaystyle k}, we have⟨vk,a⟩>⟨vk,b⟩{\displaystyle \langle v_{k},a\rangle >\langle v_{k},b\rangle }, contradiction. 
Since a separating hyperplane cannot intersect the interiors of open convex sets, we have a corollary: Separation theorem I — Let $A$ and $B$ be two disjoint nonempty convex sets. If $A$ is open, then there exist a nonzero vector $v$ and real number $c$ such that $\langle x,v\rangle <c\leq \langle y,v\rangle $ for all $x$ in $A$ and $y$ in $B$. If both sets are open, then there exist a nonzero vector $v$ and real number $c$ such that $\langle x,v\rangle <c<\langle y,v\rangle $ for all $x$ in $A$ and $y$ in $B$. If the sets $A,B$ have possible intersections, but their relative interiors are disjoint, then the proof of the first case still applies with no change, thus yielding: Separation theorem II — Let $A$ and $B$ be two nonempty convex subsets of $\mathbb {R} ^{n}$ with disjoint relative interiors. Then there exist a nonzero vector $v$ and a real number $c$ such that $\langle x,v\rangle \leq c\leq \langle y,v\rangle $ for all $x$ in $A$ and $y$ in $B$. In particular, we have the supporting hyperplane theorem. Supporting hyperplane theorem — If $A$ is a convex set in $\mathbb {R} ^{n}$, and $a_{0}$ is a point on the boundary of $A$, then there exists a supporting hyperplane of $A$ containing $a_{0}$. If the affine span of $A$ is not all of $\mathbb {R} ^{n}$, then extend the affine span to a supporting hyperplane. Else, $\mathrm {relint} (A)=\mathrm {int} (A)$ is disjoint from $\mathrm {relint} (\{a_{0}\})=\{a_{0}\}$, so apply the above theorem. Note that the existence of a hyperplane that only "separates" two convex sets in the weak sense of both inequalities being non-strict obviously does not imply that the two sets are disjoint. Both sets could have points located on the hyperplane.
If one of A or B is not convex, then there are many possible counterexamples. For example, A and B could be concentric circles. A more subtle counterexample is one in which A and B are both closed but neither one is compact. For example, if A is a closed half plane and B is bounded by one arm of a hyperbola, then there is no strictly separating hyperplane. (Although, by an instance of the second theorem, there is a hyperplane that separates their interiors.) Another type of counterexample has A compact and B open. For example, A can be a closed square and B can be an open square that touches A. In the first version of the theorem, evidently the separating hyperplane is never unique. In the second version, it may or may not be unique. Technically a separating axis is never unique because it can be translated; in the second version of the theorem, a separating axis can be unique up to translation. The horn angle provides a good counterexample to many hyperplane separations. For example, in $\mathbb {R} ^{2}$, the unit disk is disjoint from the open interval $((1,0),(1,1))$, but the only line separating them contains the entirety of $((1,0),(1,1))$. This shows that if $A$ is closed and $B$ is relatively open, then there does not necessarily exist a separation that is strict for $B$. However, if $A$ is a closed polytope then such a separation exists.[6] Farkas' lemma and related results can be understood as hyperplane separation theorems when the convex bodies are defined by finitely many linear inequalities. More results may be found.[6] In collision detection, the hyperplane separation theorem is usually used in the following form: Separating axis theorem — Two closed convex objects are disjoint if there exists a line ("separating axis") onto which the two objects' projections are disjoint. Regardless of dimensionality, the separating axis is always a line.
For example, in 3D, the space is separated by planes, but the separating axis is perpendicular to the separating plane. The separating axis theorem can be applied for fast collision detection between polygon meshes. Each face's normal or other feature direction is used as a separating axis. Note that this yields possible separating axes, not separating lines/planes. In 3D, using face normals alone will fail to separate some edge-on-edge non-colliding cases. Additional axes, consisting of the cross products of pairs of edges, one taken from each object, are required.[7] For increased efficiency, parallel axes may be calculated as a single axis.
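A minimal 2-D version of this test for convex polygons can be sketched as follows. Only the edge normals of both shapes are checked, which is sufficient in 2-D; the cross-product axes mentioned above are needed only in 3-D. This is an illustrative sketch, not a production collision routine.

```python
def project(points, axis):
    """Interval of a convex polygon's projection onto an axis."""
    dots = [x * axis[0] + y * axis[1] for x, y in points]
    return min(dots), max(dots)

def sat_disjoint(poly_a, poly_b):
    """Separating-axis test for two convex polygons given as lists of
    (x, y) vertices in order: returns True iff some edge normal is a
    separating axis, i.e. the polygons do not overlap."""
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            axis = (-(y2 - y1), x2 - x1)      # edge normal (unnormalized)
            a_min, a_max = project(poly_a, axis)
            b_min, b_max = project(poly_b, axis)
            if a_max < b_min or b_max < a_min:
                return True                   # projections are disjoint
    return False
```

The axis need not be normalized, since only the ordering of the projected intervals matters for the disjointness test.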
https://en.wikipedia.org/wiki/Hyperplane_separation_theorem
Kirchberger's theoremis a theorem indiscrete geometry, onlinear separability. The two-dimensional version of the theorem states that, if a finite set of red and blue points in theEuclidean planehas the property that, for every four points, there exists a line separating the red and blue points within those four, then there exists a single line separating all the red points from all the blue points. Donald Watson phrases this result more colorfully, with a farmyard analogy: If sheep and goats are grazing in a field and for every four animals there exists a line separating the sheep from the goats then there exists such a line for all the animals.[1] More generally, for finitely many red and blue points ind{\displaystyle d}-dimensionalEuclidean space, if the red and blue points in every subset ofd+2{\displaystyle d+2}of the points are linearly separable, then all the red points and all the blue points are linearly separable. Another equivalent way of stating the result is that, if theconvex hullsof finitely many red and blue points have a nonempty intersection, then there exists a subset ofd+2{\displaystyle d+2}points for which the convex hulls of the red and blue points in the subsets also intersect.[2][3] The theorem is named after German mathematician Paul Kirchberger, a student ofDavid Hilbertat theUniversity of Göttingenwho proved it in his 1902 dissertation,[4]and published it in 1903 inMathematische Annalen,[5]as an auxiliary theorem used in his analysis ofChebyshev approximation. 
A report of Hilbert on the dissertation states that some of Kirchberger's auxiliary theorems in this part of his dissertation were known toHermann Minkowskibut unpublished; it is not clear whether this statement applies to the result now known as Kirchberger's theorem.[6] Since Kirchberger's work, other proofs of Kirchberger's theorem have been published, including simple proofs based onHelly's theoremon intersections ofconvex sets,[7]based onCarathéodory's theoremon membership inconvex hulls,[2]or based on principles related toRadon's theoremon intersections of convex hulls.[3]However, Helly's theorem, Carathéodory's theorem, and Radon's theorem all postdate Kirchberger's theorem. A strengthened version of Kirchberger's theorem fixes one of the given points, and only considers subsets ofd+2{\displaystyle d+2}points that include the fixed point. If the red and blue points in each of these subsets are linearly separable, then all the red points and all the blue points are linearly separable.[1]The theorem also holds if the red points and blue points formcompact setsthat are not necessarily finite.[3] By usingstereographic projection, Kirchberger's theorem can be used to prove a similar result for circular or spherical separability: if every five points of finitely many red and blue points in the plane can have their red and blue points separated by a circle, or everyd+3{\displaystyle d+3}points in higher dimensions can have their red and blue points separated by ahypersphere, then all the red and blue points can be separated in the same way.[8]
https://en.wikipedia.org/wiki/Kirchberger%27s_theorem
In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function that can decide whether or not an input, represented by a vector of numbers, belongs to some specific class.[1] It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The artificial neuron was introduced in 1943 by Warren McCulloch and Walter Pitts in A logical calculus of the ideas immanent in nervous activity.[5] In 1957, Frank Rosenblatt was at the Cornell Aeronautical Laboratory. He simulated the perceptron on an IBM 704.[6][7] Later, he obtained funding from the Information Systems Branch of the United States Office of Naval Research and the Rome Air Development Center to build a custom-made computer, the Mark I Perceptron. It was first publicly demonstrated on 23 June 1960.[8] The machine was "part of a previously secret four-year NPIC [the US' National Photographic Interpretation Center] effort from 1963 through 1966 to develop this algorithm into a useful tool for photo-interpreters".[9] Rosenblatt described the details of the perceptron in a 1958 paper.[10] His organization of a perceptron is constructed of three kinds of cells ("units"): AI, AII, R, which stand for "projection", "association" and "response". He presented at the first international symposium on AI, Mechanisation of Thought Processes, which took place in November 1958.[11] Rosenblatt's project was funded under Contract Nonr-401(40) "Cognitive Systems Research Program", which lasted from 1959 to 1970,[12] and Contract Nonr-2381(00) "Project PARA" ("PARA" means "Perceiving and Recognition Automata"), which lasted from 1957[6] to 1963.[13] In 1959, the Institute for Defense Analysis awarded his group a $10,000 contract.
By September 1961, the ONR had awarded a further $153,000 worth of contracts, with $108,000 committed for 1962.[14] The ONR research manager, Marvin Denicoff, stated that ONR, instead of ARPA, funded the Perceptron project, because the project was unlikely to produce technological results in the near or medium term. Funding from ARPA went up to the order of millions of dollars, while funding from ONR was on the order of 10,000 dollars. Meanwhile, the head of IPTO at ARPA, J.C.R. Licklider, was interested in 'self-organizing', 'adaptive' and other biologically-inspired methods in the 1950s; but by the mid-1960s he was openly critical of these, including the perceptron. Instead he strongly favored the logical AI approach of Simon and Newell.[15] The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the Mark I Perceptron with the project name "Project PARA",[16] designed for image recognition. The machine is currently in the Smithsonian National Museum of American History.[17] The Mark I Perceptron had three layers. One version was implemented as follows: Rosenblatt called this three-layered perceptron network the alpha-perceptron, to distinguish it from other perceptron models he experimented with.[8] The S-units are connected to the A-units randomly (according to a table of random numbers) via a plugboard (see photo), to "eliminate any particular intentional bias in the perceptron". The connection weights are fixed, not learned.
Rosenblatt was adamant about the random connections, as he believed the retina was randomly connected to the visual cortex, and he wanted his perceptron machine to resemble human visual perception.[18] The A-units are connected to the R-units, with adjustable weights encoded in potentiometers, and weight updates during learning were performed by electric motors.[2]: 193 The hardware details are in an operators' manual.[16] In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."[19] The Photo Division of the Central Intelligence Agency, from 1960 to 1964, studied the use of the Mark I Perceptron machine for recognizing militarily interesting silhouetted targets (such as planes and ships) in aerial photos.[20][21] Rosenblatt described his experiments with many variants of the Perceptron machine in a book, Principles of Neurodynamics (1962). The book is a published version of the 1961 report.[22] Among the variants are: The machine was shipped from Cornell to the Smithsonian in 1967, under a government transfer administered by the Office of Naval Research.[9] Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multilayer perceptron) had greater processing power than perceptrons with one layer (also called a single-layer perceptron).
Single-layer perceptrons are only capable of learning linearly separable patterns.[23] For a classification task with some step activation function, a single node will have a single line dividing the data points forming the patterns. More nodes can create more dividing lines, but those lines must somehow be combined to form more complex classifications. A second layer of perceptrons, or even linear nodes, is sufficient to solve many otherwise non-separable problems. In 1969, a famous book entitled Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for these classes of network to learn an XOR function. It is often incorrectly believed that they also conjectured that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on Perceptrons (book) for more information.) Nevertheless, the often-miscited Minsky and Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s.[23][verification needed] This text was reprinted in 1987 as "Perceptrons - Expanded Edition", where some errors in the original text are shown and corrected. Rosenblatt continued working on perceptrons despite diminishing funding. The last attempt was Tobermory, built between 1961 and 1967 for speech recognition.[24] It occupied an entire room.[25] It had 4 layers with 12,000 weights implemented by toroidal magnetic cores. By the time of its completion, simulation on digital computers had become faster than purpose-built perceptron machines.[26] He died in a boating accident in 1971.
The kernel perceptron algorithm was already introduced in 1964 by Aizerman et al.[27] Margin bounds guarantees were given for the Perceptron algorithm in the general non-separable case first by Freund and Schapire (1998),[1] and more recently by Mohri and Rostamizadeh (2013), who extend previous results and give new and more favorable L1 bounds.[28][29] The perceptron is a simplified model of a biological neuron. While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons.[30] The solution spaces of decision boundaries for all binary functions and learning behaviors are studied in [31]. In the modern sense, the perceptron is an algorithm for learning a binary classifier called a threshold function: a function that maps its input $\mathbf {x} $ (a real-valued vector) to an output value $f(\mathbf {x} )$ (a single binary value): $f(\mathbf {x} )=h(\mathbf {w} \cdot \mathbf {x} +b)$, where $h$ is the Heaviside step function (an input of $>0$ outputs 1; otherwise the output is 0), $\mathbf {w} $ is a vector of real-valued weights, $\mathbf {w} \cdot \mathbf {x} $ is the dot product $\sum _{i=1}^{m}w_{i}x_{i}$, where $m$ is the number of inputs to the perceptron, and $b$ is the bias. The bias shifts the decision boundary away from the origin and does not depend on any input value.
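The threshold function above can be written out directly. The AND-gate weights in the comment are only an illustration of one choice of parameters, not part of the definition.

```python
def heaviside(z):
    """Step activation: 1 for strictly positive input, else 0."""
    return 1 if z > 0 else 0

def perceptron_predict(w, x, b):
    """f(x) = h(w . x + b), the threshold unit defined above."""
    return heaviside(sum(wi * xi for wi, xi in zip(w, x)) + b)

# With w = (1, 1) and b = -1.5 the unit computes logical AND:
# it fires only when both inputs are 1.
```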
Equivalently, since $\mathbf {w} \cdot \mathbf {x} +b=(\mathbf {w} ,b)\cdot (\mathbf {x} ,1)$, we can add the bias term $b$ as another weight $w_{m+1}$ and add a coordinate $1$ to each input $\mathbf {x} $, and then write it as a linear classifier that passes through the origin: $f(\mathbf {x} )=h(\mathbf {w} \cdot \mathbf {x} )$. The binary value of $f(\mathbf {x} )$ (0 or 1) is used to perform binary classification on $\mathbf {x} $ as either a positive or a negative instance. Spatially, the bias shifts the position (though not the orientation) of the planar decision boundary. In the context of neural networks, a perceptron is an artificial neuron using the Heaviside step function as the activation function. The perceptron algorithm is also termed the single-layer perceptron, to distinguish it from a multilayer perceptron, which is a misnomer for a more complicated neural network. As a linear classifier, the single-layer perceptron is the simplest feedforward neural network. From an information theory point of view, a single perceptron with $K$ inputs has a capacity of $2K$ bits of information.[32] This result is due to Thomas Cover.[33] Specifically, let $T(N,K)$ be the number of ways to linearly separate $N$ points in $K$ dimensions; then $T(N,K)=2^{N}$ if $K\geq N$, and $T(N,K)=2\sum _{k=0}^{K-1}{\binom {N-1}{k}}$ if $K<N$. When $K$ is large, $T(N,K)/2^{N}$ is very close to one when $N\leq 2K$, but very close to zero when $N>2K$. In words, one perceptron unit can almost certainly memorize a random assignment of binary labels on $N$ points when $N\leq 2K$, but almost certainly not when $N>2K$.
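Cover's counting function T(N, K) from the passage above can be evaluated exactly. The sketch below also illustrates the N = 2K "capacity" point, where exactly half of all 2^N labellings are linearly separable.

```python
from math import comb

def t(n, k):
    """Number of linearly separable binary labellings of n points in
    general position in k dimensions (Cover's function, as above)."""
    if k >= n:
        return 2 ** n
    return 2 * sum(comb(n - 1, i) for i in range(k))
```

At N = 2K the ratio T(N, K)/2^N is exactly 1/2, which marks the sharp transition between "almost certainly memorizable" and "almost certainly not" described above.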
When operating on only binary inputs, a perceptron is called alinearly separable Boolean function, or threshold Boolean function. The sequence of numbers of threshold Boolean functions on n inputs isOEISA000609. The value is only known exactly up ton=9{\displaystyle n=9}case, but the order of magnitude is known quite exactly: it has upper bound2n2−nlog2⁡n+O(n){\displaystyle 2^{n^{2}-n\log _{2}n+O(n)}}and lower bound2n2−nlog2⁡n−O(n){\displaystyle 2^{n^{2}-n\log _{2}n-O(n)}}.[34] Any Boolean linear threshold function can be implemented with only integer weights. Furthermore, the number of bits necessary and sufficient for representing a single integer weight parameter isΘ(nln⁡n){\displaystyle \Theta (n\ln n)}.[34] A single perceptron can learn to classify any half-space. It cannot solve any linearly nonseparable vectors, such as the Booleanexclusive-orproblem (the famous "XOR problem"). A perceptron network withone hidden layercan learn to classify any compact subset arbitrarily closely. Similarly, it can also approximate anycompactly-supportedcontinuous functionarbitrarily closely. This is essentially a special case of thetheorems by George Cybenko and Kurt Hornik. Perceptrons(Minsky and Papert, 1969) studied the kind of perceptron networks necessary to learn various Boolean functions. Consider a perceptron network withn{\displaystyle n}input units, one hidden layer, and one output, similar to the Mark I Perceptron machine. It computes a Boolean function of typef:2n→2{\displaystyle f:2^{n}\to 2}. They call a functionconjunctively local of orderk{\displaystyle k}, iff there exists a perceptron network such that each unit in the hidden layer connects to at mostk{\displaystyle k}input units. Theorem. (Theorem 3.1.1): The parity function is conjunctively local of ordern{\displaystyle n}. Theorem. (Section 5.5): The connectedness function is conjunctively local of orderΩ(n1/2){\displaystyle \Omega (n^{1/2})}. 
Below is an example of a learning algorithm for a single-layer perceptron with a single output unit. For a single-layer perceptron with multiple output units, since the weights of one output unit are completely separate from all the others', the same algorithm can be run for each output unit. Formultilayer perceptrons, where a hidden layer exists, more sophisticated algorithms such asbackpropagationmust be used. If the activation function or the underlying process being modeled by the perceptron isnonlinear, alternative learning algorithms such as thedelta rulecan be used as long as the activation function isdifferentiable. Nonetheless, the learning algorithm described in the steps below will often work, even for multilayer perceptrons with nonlinear activation functions. When multiple perceptrons are combined in an artificial neural network, each output neuron operates independently of all the others; thus, learning each output can be considered in isolation. We first define some variables: We show the values of the features as follows: To represent the weights: To show the time-dependence ofw{\displaystyle \mathbf {w} }, we use: The algorithm updates the weights after every training sample in step 2b. A single perceptron is alinear classifier. It can only reach a stable state if all input vectors are classified correctly. In case the training setDisnotlinearly separable, i.e. if the positive examples cannot be separated from the negative examples by a hyperplane, then the algorithm would not converge since there is no solution. Hence, if linear separability of the training set is not known a priori, one of the training variants below should be used. Detailed analysis and extensions to the convergence theorem are in Chapter 11 ofPerceptrons(1969). 
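The update steps described above can be sketched as follows, using the bias-augmented form and 0/1 targets. The dataset (logical AND, which is linearly separable) and the function names are illustrative choices:

```python
# Perceptron learning rule: after each training sample, update
# w <- w + r * (y - f(x)) * x on the bias-augmented input.

def train_perceptron(samples, dim, r=1.0, max_epochs=100):
    w = [0.0] * (dim + 1)              # last slot is the bias weight
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in samples:
            xa = list(x) + [1]         # absorb the bias as an extra input
            pred = 1 if sum(wi * xi for wi, xi in zip(w, xa)) > 0 else 0
            if pred != y:              # update only on a mistake
                w = [wi + r * (y - pred) * xi for wi, xi in zip(w, xa)]
                mistakes += 1
        if mistakes == 0:              # a full clean pass: converged
            return w
    return w

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_data, dim=2)
```

Because AND is linearly separable, the convergence theorem guarantees this loop reaches a clean pass; on XOR it would cycle until `max_epochs` runs out.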
Linear separability is testable in timemin(O(nd/2),O(d2n),O(nd−1ln⁡n)){\displaystyle \min(O(n^{d/2}),O(d^{2n}),O(n^{d-1}\ln n))}, wheren{\displaystyle n}is the number of data points, andd{\displaystyle d}is the dimension of each point.[35] If the training setislinearly separable, then the perceptron is guaranteed to converge after making finitely many mistakes.[36]The theorem is proved by Rosenblatt et al. Perceptron convergence theorem—Given a datasetD{\textstyle D}, such thatmax(x,y)∈D‖x‖2=R{\textstyle \max _{(x,y)\in D}\|x\|_{2}=R}, and it is linearly separable by some unit vectorw∗{\textstyle w^{*}}, with marginγ{\textstyle \gamma }:γ:=min(x,y)∈Dy(w∗⋅x){\displaystyle \gamma :=\min _{(x,y)\in D}y(w^{*}\cdot x)} Then the perceptron 0-1 learning algorithm converges after making at most(R/γ)2{\textstyle (R/\gamma )^{2}}mistakes, for any learning rate, and any method of sampling from the dataset. The following simple proof is due to Novikoff (1962). The idea of the proof is that the weight vector is always adjusted by a bounded amount in a direction with which it has a negativedot product, and thus can be bounded above byO(√t), wheretis the number of changes to the weight vector. However, it can also be bounded below byO(t)because if there exists an (unknown) satisfactory weight vector, then every change makes progress in this (unknown) direction by a positive amount that depends only on the input vector. Suppose at stept{\textstyle t}, the perceptron with weightwt{\textstyle w_{t}}makes a mistake on data point(x,y){\textstyle (x,y)}, then it updates towt+1=wt+r(y−fwt(x))x{\textstyle w_{t+1}=w_{t}+r(y-f_{w_{t}}(x))x}. Ify=0{\textstyle y=0}, the argument is symmetric, so we omit it. WLOG,y=1{\textstyle y=1}, thenfwt(x)=0{\textstyle f_{w_{t}}(x)=0},fw∗(x)=1{\textstyle f_{w^{*}}(x)=1}, andwt+1=wt+rx{\textstyle w_{t+1}=w_{t}+rx}. 
By assumption, we have separation with margin: w*·x ≥ γ. Thus, w*·w_{t+1} − w*·w_t = w*·(rx) ≥ rγ. Also, ‖w_{t+1}‖₂² − ‖w_t‖₂² = ‖w_t + rx‖₂² − ‖w_t‖₂² = 2r(w_t·x) + r²‖x‖₂², and since the perceptron made a mistake, w_t·x ≤ 0, so ‖w_{t+1}‖₂² − ‖w_t‖₂² ≤ r²‖x‖₂² ≤ r²R². Since we started with w₀ = 0, after making N mistakes, ‖w‖₂ ≤ √(N r² R²), but also ‖w‖₂ ≥ w·w* ≥ N r γ. Combining the two, we have N ≤ (R/γ)². While the perceptron algorithm is guaranteed to converge on some solution in the case of a linearly separable training set, it may still pick any solution, and problems may admit many solutions of varying quality.[37] The perceptron of optimal stability, nowadays better known as the linear support-vector machine, was designed to solve this problem (Krauth and Mézard, 1987).[38] When the dataset is not linearly separable, there is no way for a single perceptron to converge. However, we still have[39] Perceptron cycling theorem — If the dataset D has only finitely many points, then there exists an upper bound M such that, for any starting weight vector w₀, every weight vector w_t has norm bounded by ‖w_t‖ ≤ ‖w₀‖ + M. This was proved first by Bradley Efron.[40] Consider a dataset where the x are drawn from {−1, +1}ⁿ, that is, the vertices of an n-dimensional hypercube centered at the origin, and y = θ(x_i) for some fixed coordinate i.
That is, all data points with positivexi{\displaystyle x_{i}}havey=1{\displaystyle y=1}, and vice versa. By the perceptron convergence theorem, a perceptron would converge after making at mostn{\displaystyle n}mistakes. If we were to write a logical program to perform the same task, each positive example shows that one of the coordinates is the right one, and each negative example shows that itscomplementis a positive example. By collecting all the known positive examples, we eventually eliminate all but one coordinate, at which point the dataset is learned.[41] This bound is asymptotically tight in terms of the worst-case. In the worst-case, the first presented example is entirely new, and givesn{\displaystyle n}bits of information, but each subsequent example would differ minimally from previous examples, and gives 1 bit each. Aftern+1{\displaystyle n+1}examples, there are2n{\displaystyle 2n}bits of information, which is sufficient for the perceptron (with2n{\displaystyle 2n}bits of information).[32] However, it is not tight in terms of expectation if the examples are presented uniformly at random, since the first would given{\displaystyle n}bits, the secondn/2{\displaystyle n/2}bits, and so on, takingO(ln⁡n){\displaystyle O(\ln n)}examples in total.[41] The pocket algorithm with ratchet (Gallant, 1990) solves the stability problem of perceptron learning by keeping the best solution seen so far "in its pocket". The pocket algorithm then returns the solution in the pocket, rather than the last solution. It can be used also for non-separable data sets, where the aim is to find a perceptron with a small number of misclassifications. However, these solutions appear purely stochastically and hence the pocket algorithm neither approaches them gradually in the course of learning, nor are they guaranteed to show up within a given number of learning steps. 
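The pocket-with-ratchet idea can be sketched as follows. The function names and the XOR dataset are illustrative; XOR is not linearly separable, so the plain perceptron would cycle forever here, while the pocket keeps the best weight vector encountered:

```python
# Pocket algorithm with ratchet: run ordinary perceptron updates, but keep
# ("pocket") the weights with the fewest training misclassifications seen so
# far, and return the pocketed vector rather than the last iterate.

def errors(w, samples):
    return sum(1 for x, y in samples
               if (1 if sum(wi * xi for wi, xi in zip(w, list(x) + [1])) > 0 else 0) != y)

def pocket_train(samples, dim, r=1.0, epochs=50):
    w = [0.0] * (dim + 1)
    best_w, best_err = list(w), errors(w, samples)
    for _ in range(epochs):
        for x, y in samples:
            xa = list(x) + [1]
            pred = 1 if sum(wi * xi for wi, xi in zip(w, xa)) > 0 else 0
            if pred != y:
                w = [wi + r * (y - pred) * xi for wi, xi in zip(w, xa)]
                e = errors(w, samples)
                if e < best_err:          # ratchet: keep only strict improvements
                    best_w, best_err = list(w), e
    return best_w, best_err

xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
best_w, best_err = pocket_train(xor_data, dim=2)
```

As the text notes, good pocketed solutions appear stochastically: nothing guarantees `best_err` reaches the true minimum within a given number of steps, only that it never gets worse.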
The Maxover algorithm (Wendemuth, 1995) is"robust"in the sense that it will converge regardless of (prior) knowledge of linear separability of the data set.[42]In the linearly separable case, it will solve the training problem – if desired, even with optimal stability (maximum marginbetween the classes). For non-separable data sets, it will return a solution with a computable small number of misclassifications.[43]In all cases, the algorithm gradually approaches the solution in the course of learning, without memorizing previous states and without stochastic jumps. Convergence is to global optimality for separable data sets and to local optimality for non-separable data sets. The Voted Perceptron (Freund and Schapire, 1999), is a variant using multiple weighted perceptrons. The algorithm starts a new perceptron every time an example is wrongly classified, initializing the weights vector with the final weights of the last perceptron. Each perceptron will also be given another weight corresponding to how many examples do they correctly classify before wrongly classifying one, and at the end the output will be a weighted vote on all perceptrons. In separable problems, perceptron training can also aim at finding the largest separating margin between the classes. The so-called perceptron of optimal stability can be determined by means of iterative training and optimization schemes, such as the Min-Over algorithm (Krauth and Mezard, 1987)[38]or the AdaTron (Anlauf and Biehl, 1989)).[44]AdaTron uses the fact that the corresponding quadratic optimization problem is convex. The perceptron of optimal stability, together with thekernel trick, are the conceptual foundations of thesupport-vector machine. Theα{\displaystyle \alpha }-perceptron further used a pre-processing layer of fixed random weights, with thresholded output units. This enabled the perceptron to classifyanaloguepatterns, by projecting them into abinary space. 
In fact, for a projection space of sufficiently high dimension, patterns can become linearly separable. Another way to solve nonlinear problems without using multiple layers is to use higher order networks (sigma-pi unit). In this type of network, each element in the input vector is extended with each pairwise combination of multiplied inputs (second order). This can be extended to ann-order network. It should be kept in mind, however, that the best classifier is not necessarily that which classifies all the training data perfectly. Indeed, if we had the prior constraint that the data come from equi-variant Gaussian distributions, the linear separation in the input space is optimal, and the nonlinear solution isoverfitted. Other linear classification algorithms includeWinnow,support-vector machine, andlogistic regression. Like most other techniques for training linear classifiers, the perceptron generalizes naturally tomulticlass classification. Here, the inputx{\displaystyle x}and the outputy{\displaystyle y}are drawn from arbitrary sets. A feature representation functionf(x,y){\displaystyle f(x,y)}maps each possible input/output pair to a finite-dimensional real-valued feature vector. As before, the feature vector is multiplied by a weight vectorw{\displaystyle w}, but now the resulting score is used to choose among many possible outputs: Learning again iterates over the examples, predicting an output for each, leaving the weights unchanged when the predicted output matches the target, and changing them when it does not. The update becomes: This multiclass feedback formulation reduces to the original perceptron whenx{\displaystyle x}is a real-valued vector,y{\displaystyle y}is chosen from{0,1}{\displaystyle \{0,1\}}, andf(x,y)=yx{\displaystyle f(x,y)=yx}. 
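The multiclass formulation above can be sketched as follows. Each candidate output y is scored by f(x, y)·w and the argmax is predicted; on a mistake the update is w ← w + f(x, y) − f(x, ŷ). The block feature map below (one copy of x per class) is an illustrative choice of f(x, y), not the only one:

```python
# Sketch of the multiclass perceptron: score each candidate output, predict
# the argmax, and on a mistake move weight mass from the predicted class's
# feature block to the target class's block.

def features(x, y, n_classes):
    m = len(x)
    out = [0.0] * (m * n_classes)
    out[y * m:(y + 1) * m] = x           # place x in class y's block
    return out

def msp_predict(w, x, n_classes):
    scores = [sum(wi * fi for wi, fi in zip(w, features(x, y, n_classes)))
              for y in range(n_classes)]
    return max(range(n_classes), key=lambda y: scores[y])

def msp_train(samples, m, n_classes, epochs=20):
    w = [0.0] * (m * n_classes)
    for _ in range(epochs):
        for x, y in samples:
            y_hat = msp_predict(w, x, n_classes)
            if y_hat != y:               # update only on a mistake
                fy = features(x, y, n_classes)
                fh = features(x, y_hat, n_classes)
                w = [wi + a - b for wi, a, b in zip(w, fy, fh)]
    return w

samples = [((1, 0), 0), ((0, 1), 1), ((-1, 0), 2)]
w = msp_train(samples, m=2, n_classes=3)
```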
For certain problems, input/output representations and features can be chosen so thatargmaxyf(x,y)⋅w{\displaystyle \mathrm {argmax} _{y}f(x,y)\cdot w}can be found efficiently even thoughy{\displaystyle y}is chosen from a very large or even infinite set. Since 2002, perceptron training has become popular in the field ofnatural language processingfor such tasks aspart-of-speech taggingandsyntactic parsing(Collins, 2002). It has also been applied to large-scale machine learning problems in adistributed computingsetting.[45]
https://en.wikipedia.org/wiki/Perceptron
Theaverage absolute deviation(AAD) of a data set is theaverageof theabsolutedeviationsfrom acentral point. It is asummary statisticofstatistical dispersionor variability. In the general form, the central point can be amean,median,mode, or the result of any other measure of central tendency or any reference value related to the given data set. AAD includes themean absolute deviationand themedian absolute deviation(both abbreviated asMAD). Several measures ofstatistical dispersionare defined in terms of the absolute deviation. The term "average absolute deviation" does not uniquely identify a measure ofstatistical dispersion, as there are several measures that can be used to measure absolute deviations, and there are several measures ofcentral tendencythat can be used as well. Thus, to uniquely identify the absolute deviation it is necessary to specify both the measure of deviation and the measure of central tendency. The statistical literature has not yet adopted a standard notation, as both the mean absolute deviation around the mean and the median absolute deviation around the median have been denoted by their initials "MAD" in the literature, which may lead to confusion, since they generally have values considerably different from each other. The mean absolute deviation of a setX= {x1,x2, …,xn} is1n∑i=1n|xi−m(X)|.{\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}|x_{i}-m(X)|.} The choice of measure of central tendency,m(X){\displaystyle m(X)}, has a marked effect on the value of the mean deviation. For example, for the data set {2, 2, 3, 4, 14}: Themean absolute deviation(MAD), also referred to as the "mean deviation" or sometimes "average absolute deviation", is the mean of the data's absolute deviations around the data's mean: the average (absolute) distance from the mean. "Average absolute deviation" can refer to either this usage, or to the general form with respect to a specified central point (see above). 
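For the example data set {2, 2, 3, 4, 14} mentioned above, the effect of the choice of central point can be checked directly. A minimal sketch; the helper name is illustrative:

```python
# Average absolute deviation of {2, 2, 3, 4, 14} around three central points.
def avg_abs_dev(data, center):
    return sum(abs(x - center) for x in data) / len(data)

data = [2, 2, 3, 4, 14]
mean = sum(data) / len(data)           # 5.0
median = sorted(data)[len(data) // 2]  # 3 (middle of an odd-length sample)
mode = 2                               # most frequent value

print(avg_abs_dev(data, mean))    # 3.6
print(avg_abs_dev(data, median))  # 2.8
print(avg_abs_dev(data, mode))    # 3.0
```

The median gives the smallest mean absolute deviation here, consistent with the minimization property discussed later in the article.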
MAD has been proposed to be used in place of standard deviation since it corresponds better to real life.[1] Because the MAD is a simpler measure of variability than the standard deviation, it can be useful in school teaching.[2][3] This method's forecast accuracy is very closely related to the mean squared error (MSE) method, which is just the average squared error of the forecasts. Although these methods are very closely related, MAD is more commonly used because it is easier both to compute (avoiding the need for squaring)[4] and to understand.[5] For the normal distribution, the ratio of the mean absolute deviation from the mean to the standard deviation is √(2/π) = 0.79788456…. Thus, if X is a normally distributed random variable with expected value 0, then (see Geary (1935)):[6] w = E|X| / √(E(X²)) = √(2/π). In other words, for a normal distribution, the mean absolute deviation is about 0.8 times the standard deviation. However, in-sample measurements deliver values of the ratio of mean absolute deviation to standard deviation for a given Gaussian sample of size n within the bounds w_n ∈ [0, 1], with a bias for small n.[7] The mean absolute deviation from the mean is less than or equal to the standard deviation; one way of proving this relies on Jensen's inequality.
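The ratio √(2/π) ≈ 0.798 can be checked numerically by sampling from a standard normal distribution. A sketch; the sample size and seed are arbitrary choices:

```python
# Numerical check that, for a normal distribution, E|X| / sqrt(E[X^2])
# is approximately sqrt(2/pi).
import math
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

mean_abs = sum(abs(x) for x in xs) / len(xs)       # estimates E|X|
rms = math.sqrt(sum(x * x for x in xs) / len(xs))  # estimates sqrt(E[X^2])
ratio = mean_abs / rms

print(math.sqrt(2 / math.pi))  # 0.7978845608...
# the sampled ratio should agree to about two decimal places
```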
Jensen's inequality is φ(E[Y]) ≤ E[φ(Y)], where φ is a convex function; applying it with Y = |X − μ| and φ(y) = y² gives: (E|X − μ|)² ≤ E(|X − μ|²), that is, (E|X − μ|)² ≤ Var(X). Since both sides are positive, and the square root is a monotonically increasing function on the positive domain: E(|X − μ|) ≤ √(Var(X)). For a general case of this statement, see Hölder's inequality. The median is the point about which the mean deviation is minimized. The MAD about the median offers a direct measure of the scale of a random variable around its median: D_med = E|X − median|. This is the maximum likelihood estimator of the scale parameter b of the Laplace distribution. Since the median minimizes the average absolute distance, we have D_med ≤ D_mean. The mean absolute deviation from the median is less than or equal to the mean absolute deviation from the mean. In fact, the mean absolute deviation from the median is always less than or equal to the mean absolute deviation from any other fixed number.
By using the general dispersion function, Habib (2011) defined MAD about median asDmed=E|X−median|=2Cov⁡(X,IO){\displaystyle D_{\text{med}}=E|X-{\text{median}}|=2\operatorname {Cov} (X,I_{O})}where the indicator function isIO:={1ifx>median,0otherwise.{\displaystyle \mathbf {I} _{O}:={\begin{cases}1&{\text{if }}x>{\text{median}},\\0&{\text{otherwise}}.\end{cases}}} This representation allows for obtaining MAD median correlation coefficients.[citation needed] While in principle the mean or any other central point could be taken as the central point for the median absolute deviation, most often themedianvalue is taken instead. Themedian absolute deviation(also MAD) is themedianof the absolute deviation from themedian. It is arobust estimator of dispersion. For the example {2, 2, 3, 4, 14}: 3 is the median, so the absolute deviations from the median are {1, 1, 0, 1, 11} (reordered as {0, 1, 1, 1, 11}) with a median of 1, in this case unaffected by the value of the outlier 14, so the median absolute deviation is 1. For a symmetric distribution, the median absolute deviation is equal to half theinterquartile range. Themaximum absolute deviationaround an arbitrary point is the maximum of the absolute deviations of a sample from that point. While not strictly a measure of central tendency, the maximum absolute deviation can be found using the formula for the average absolute deviation as above withm(X)=max(X){\displaystyle m(X)=\max(X)}, wheremax(X){\displaystyle \max(X)}is thesample maximum. The measures of statistical dispersion derived from absolute deviation characterize various measures of central tendency asminimizingdispersion: The median is the measure of central tendency most associated with the absolute deviation. Some location parameters can be compared as follows: The mean absolute deviation of a sample is abiased estimatorof the mean absolute deviation of the population. 
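The median-absolute-deviation computation in the example above can be sketched as follows; the helper name is illustrative:

```python
# Median absolute deviation (MAD) of {2, 2, 3, 4, 14}: the median of the
# absolute deviations from the median. The outlier 14 leaves it unchanged.
def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

data = [2, 2, 3, 4, 14]
m = median(data)                          # 3
mad = median([abs(x - m) for x in data])  # median of {0, 1, 1, 1, 11}
print(mad)  # 1
```

Replacing 14 with an even larger value would not change `mad` at all, which is what makes this estimator of dispersion robust.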
In order for the absolute deviation to be an unbiased estimator, the expected value (average) of all the sample absolute deviations must equal the population absolute deviation. However, it does not. For the population 1,2,3 both the population absolute deviation about the median and the population absolute deviation about the mean are 2/3. The average of all the sample absolute deviations about the mean of size 3 that can be drawn from the population is 44/81, while the average of all the sample absolute deviations about the median is 4/9. Therefore, the absolute deviation is a biased estimator. However, this argument is based on the notion of mean-unbiasedness. Each measure of location has its own form of unbiasedness (see entry onbiased estimator). The relevant form of unbiasedness here is median unbiasedness.
https://en.wikipedia.org/wiki/Average_absolute_deviation
In statistics, the mean signed difference (MSD),[1] also known as mean signed deviation, mean signed error, or mean bias error,[2] is a sample statistic that summarizes how well a set of estimates θ̂_i match the quantities θ_i that they are supposed to estimate. It is one of a number of statistics that can be used to assess an estimation procedure, and it would often be used in conjunction with a sample version of the mean square error. For example, suppose a linear regression model has been estimated over a sample of data, and is then used to extrapolate predictions of the dependent variable out of sample after the out-of-sample data points have become available. Then θ_i would be the i-th out-of-sample value of the dependent variable, and θ̂_i would be its predicted value. The mean signed deviation is the average value of θ̂_i − θ_i. The mean signed difference is derived from a set of n pairs (θ̂_i, θ_i), where θ̂_i is an estimate of the parameter θ in a case where it is known that θ = θ_i. In many applications, all the quantities θ_i will share a common value. When applied to forecasting in a time series analysis context, a forecasting procedure might be evaluated using the mean signed difference, with θ̂_i being the predicted value of a series at a given lead time and θ_i being the value of the series eventually observed for that time-point. The mean signed difference is defined to be MSD(θ̂) = (1/n) Σ_{i=1}^{n} (θ̂_i − θ_i). The mean signed difference is often useful when the estimates θ̂_i are biased from the true values θ_i in a certain direction.
If the estimator that produces the θ̂_i values is unbiased, then MSD(θ̂) = 0 in expectation. However, if the estimates θ̂_i are produced by a biased estimator, then the mean signed difference is a useful tool to understand the direction of the estimator's bias.
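The statistic can be sketched directly; the data below are made up for illustration:

```python
# Mean signed difference: MSD = (1/n) * sum(theta_hat_i - theta_i).
# Unlike the mean absolute or squared error, it keeps the sign, so it
# reveals the direction of any systematic bias.
def mean_signed_difference(estimates, truths):
    return sum(e - t for e, t in zip(estimates, truths)) / len(truths)

truths = [10.0, 12.0, 11.0, 13.0]
estimates = [10.5, 12.5, 11.5, 13.5]  # each forecast runs 0.5 high
print(mean_signed_difference(estimates, truths))  # 0.5
```

A mix of over- and under-predictions of equal size would cancel to zero here, which is why the MSD is typically reported alongside the mean squared error rather than instead of it.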
https://en.wikipedia.org/wiki/Mean_signed_deviation
Instatisticsandoptimization,errorsandresidualsare two closely related and easily confused measures of thedeviationof anobserved valueof anelementof astatistical samplefrom its "true value" (not necessarily observable). Theerrorof anobservationis the deviation of the observed value from the true value of a quantity of interest (for example, apopulation mean). Theresidualis the difference between the observed value and theestimatedvalue of the quantity of interest (for example, asample mean). The distinction is most important inregression analysis, where the concepts are sometimes called theregression errorsandregression residualsand where they lead to the concept ofstudentized residuals. Ineconometrics, "errors" are also calleddisturbances.[1][2][3] Suppose there is a series of observations from aunivariate distributionand we want to estimate themeanof that distribution (the so-calledlocation model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean. Astatistical error(ordisturbance) is the amount by which an observation differs from itsexpected value, the latter being based on the wholepopulationfrom which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being themeanof the entire population, is typically unobservable, and hence the statistical error cannot be observed either. Aresidual(or fitting deviation), on the other hand, is an observableestimateof the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample ofnpeople. Thesample meancould serve as a good estimator of thepopulationmean. 
Then we have: Note that, because of the definition of the sample mean, the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarilynotindependent. The statistical errors, on the other hand, are independent, and their sum within the random sample isalmost surelynot zero. One can standardize statistical errors (especially of anormal distribution) in az-score(or "standard score"), and standardize residuals in at-statistic, or more generallystudentized residuals. If we assume a normally distributed population with mean μ andstandard deviationσ, and choose individuals independently, then we have and thesample mean is a random variable distributed such that: Thestatistical errorsare then withexpectedvalues of zero,[4]whereas theresidualsare The sum of squares of thestatistical errors, divided byσ2, has achi-squared distributionwithndegrees of freedom: However, this quantity is not observable as the population mean is unknown. The sum of squares of theresiduals, on the other hand, is observable. The quotient of that sum by σ2has a chi-squared distribution with onlyn− 1 degrees of freedom: This difference betweennandn− 1 degrees of freedom results inBessel's correctionfor the estimation ofsample varianceof a population with unknown mean and unknown variance. No correction is necessary if the population mean is known. It is remarkable that thesum of squares of the residualsand the sample mean can be shown to be independent of each other, using, e.g.Basu's theorem. 
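The two facts above, that residuals from the sample mean necessarily sum to zero and that the residual sum of squares therefore has only n − 1 degrees of freedom, can be illustrated numerically. A small sketch; the height data are made up:

```python
# Residuals from the sample mean sum to zero; Bessel's correction divides
# the residual sum of squares by n - 1 rather than n to compensate for the
# lost degree of freedom.
data = [1.80, 1.70, 1.75, 1.82, 1.68]
n = len(data)
mean = sum(data) / n

residuals = [x - mean for x in data]
print(sum(residuals))  # ~0, up to floating-point rounding

var_biased = sum(r * r for r in residuals) / n          # divides by n
var_unbiased = sum(r * r for r in residuals) / (n - 1)  # Bessel's correction
```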
That fact, and the normal and chi-squared distributions given above form the basis of calculations involving the t-statistic: whereX¯n−μ0{\displaystyle {\overline {X}}_{n}-\mu _{0}}represents the errors,Sn{\displaystyle S_{n}}represents the sample standard deviation for a sample of sizen, and unknownσ, and the denominator termSn/n{\displaystyle S_{n}/{\sqrt {n}}}accounts for the standard deviation of the errors according to:[5] Var⁡(X¯n)=σ2n{\displaystyle \operatorname {Var} \left({\overline {X}}_{n}\right)={\frac {\sigma ^{2}}{n}}} Theprobability distributionsof the numerator and the denominator separately depend on the value of the unobservable population standard deviationσ, butσappears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not knowσ, we know the probability distribution of this quotient: it has aStudent's t-distributionwithn− 1 degrees of freedom. We can therefore use this quotient to find aconfidence intervalforμ. This t-statistic can be interpreted as "the number of standard errors away from the regression line."[6] Inregression analysis, the distinction betweenerrorsandresidualsis subtle and important, and leads to the concept ofstudentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from thefittedfunction are the residuals. If the linear model is applicable, a scatterplot of residuals plotted against the independent variable should be random about zero with no trend to the residuals.[5]If the data exhibit a trend, the regression model is likely incorrect; for example, the true function may be a quadratic or higher order polynomial. 
If they are random, or have no trend, but "fan out" - they exhibit a phenomenon calledheteroscedasticity. If all of the residuals are equal, or do not fan out, they exhibithomoscedasticity. However, a terminological difference arises in the expressionmean squared error(MSE). The mean squared error of a regression is a number computed from the sum of squares of the computedresiduals, and not of the unobservableerrors. If that sum of squares is divided byn, the number of observations, the result is the mean of the squared residuals. Since this is abiasedestimate of the variance of the unobserved errors, the bias is removed by dividing the sum of the squared residuals bydf=n−p− 1, instead ofn, wheredfis the number ofdegrees of freedom(nminus the number of parameters (excluding the intercept) p being estimated - 1). This forms an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.[7] Another method to calculate the mean square of error when analyzing the variance of linear regression using a technique like that used inANOVA(they are the same because ANOVA is a type of regression), the sum of squares of the residuals (aka sum of squares of the error) is divided by the degrees of freedom (where the degrees of freedom equaln−p− 1, wherepis the number of parameters estimated in the model (one for each variable in the regression equation, not including the intercept)). One can then also calculate the mean square of the model by dividing the sum of squares of the model minus the degrees of freedom, which is just the number of parameters. 
Then the F value can be calculated by dividing the mean square of the model by the mean square of the error, and we can then determine significance (which is why you want the mean squares to begin with.).[8] However, because of the behavior of the process of regression, thedistributionsof residuals at different data points (of the input variable) may varyeven ifthe errors themselves are identically distributed. Concretely, in alinear regressionwhere the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will behigherthan the variability of residuals at the ends of the domain:[9]linear regressions fit endpoints better than the middle. This is also reflected in theinfluence functionsof various data points on theregression coefficients: endpoints have more influence. Thus to compare residuals at different inputs, one needs to adjust the residuals by the expected variability ofresiduals,which is calledstudentizing. This is particularly important in the case of detectingoutliers, where the case in question is somehow different from the others in a dataset. For example, a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain. The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observableprediction errors: Themean squared error(MSE) refers to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated). Theroot mean square error(RMSE) is the square-root of MSE. Thesum of squares of errors(SSE) is the MSE multiplied by the sample size. Sum of squares of residuals(SSR) is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. 
This is the basis for the least squares estimate, where the regression coefficients are chosen such that the SSR is minimal (i.e. its derivative is zero). Likewise, the sum of absolute errors (SAE) is the sum of the absolute values of the residuals, which is minimized in the least absolute deviations approach to regression. The mean error (ME) is the bias. The mean residual (MR) is always zero for least-squares estimators.
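Several of the facts above can be checked in a few lines: the mean residual of a least-squares fit is zero, perturbing the coefficients raises the SSR, and the leverages behind studentizing are larger at the endpoints of the x-domain. A minimal sketch with assumed synthetic data:

```python
import numpy as np

# Closed-form simple linear regression: the coefficients zero the
# derivative of the SSR (normal equations), the mean residual (MR)
# is exactly zero, and any perturbation of the fit increases the SSR.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 41)
y = 1.0 - 2.0 * x + rng.normal(0.0, 0.3, x.size)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
res = y - (b0 + b1 * x)

ssr = np.sum(res**2)
ssr_perturbed = np.sum((y - (b0 + 0.05) - b1 * x)**2)

# Leverages (diagonal of the hat matrix): larger at the endpoints, so
# residuals there have smaller variance, Var(e_i) = sigma^2 (1 - h_ii);
# studentizing divides each residual by an estimate of that spread.
X = np.column_stack([np.ones_like(x), x])
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)

assert abs(res.mean()) < 1e-10               # MR = 0 for least squares
assert ssr < ssr_perturbed                   # perturbing raises the SSR
assert h[0] > h[20] and h[-1] > h[20]        # endpoints have more leverage
```

The leverage sum equals the number of fitted parameters (here 2), a useful sanity check on the hat matrix.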
https://en.wikipedia.org/wiki/Errors_and_residuals
Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.

A subset C ⊆ X of some vector space X is convex if it satisfies any of several equivalent conditions. Throughout, f : X → [−∞, ∞] will be a map valued in the extended real numbers [−∞, ∞] = ℝ ∪ {±∞} with a domain, domain f = X, that is a convex subset of some vector space. The map f : X → [−∞, ∞] is a convex function if

f(rx + (1 − r)y) ≤ r f(x) + (1 − r) f(y)    (Convexity ≤)

holds for any real 0 < r < 1 and any x, y ∈ X with x ≠ y. If this remains true of f when the defining inequality (Convexity ≤) is replaced by the strict inequality

f(rx + (1 − r)y) < r f(x) + (1 − r) f(y)    (Convexity <)

then f is called strictly convex.[1]

Convex functions are related to convex sets. Specifically, the function f is convex if and only if its epigraph is a convex set.[2] The epigraphs of extended real-valued functions play a role in convex analysis that is analogous to the role played by graphs of real-valued functions in real analysis. Specifically, the epigraph of an extended real-valued function provides geometric intuition that can be used to help formulate or prove conjectures.
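The defining inequality can be spot-checked numerically. The sketch below uses two assumed example functions, f(x) = x² (convex) and g(x) = −x² (not convex), and samples random pairs and mixing weights:

```python
import numpy as np

# Monte-Carlo spot-check of the defining inequality
#     f(r*x + (1 - r)*y) <= r*f(x) + (1 - r)*f(y)
# for f(x) = x**2 (convex) and g(x) = -x**2 (not convex).
f = lambda v: v**2
g = lambda v: -v**2
rng = np.random.default_rng(2)

convex_ok, concave_ok = True, True
for _ in range(1000):
    x, y = rng.uniform(-5.0, 5.0, 2)
    r = rng.uniform(0.01, 0.99)
    m = r * x + (1 - r) * y
    if f(m) > r * f(x) + (1 - r) * f(y) + 1e-12:
        convex_ok = False
    if g(m) > r * g(x) + (1 - r) * g(y) + 1e-12:
        concave_ok = False

assert convex_ok         # x**2 satisfies the inequality everywhere
assert not concave_ok    # -x**2 violates it for some sampled triple
```

For g the gap works out to r(1 − r)(x − y)², which is strictly positive whenever x ≠ y, so a violation is found almost immediately.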
The domain of a function f : X → [−∞, ∞] is denoted by domain f, while its effective domain is the set[2]

dom f = {x ∈ X : f(x) < ∞}.

The function f : X → [−∞, ∞] is called proper if dom f ≠ ∅ and f(x) > −∞ for all x ∈ domain f.[2] Alternatively, this means that there exists some x in the domain of f at which f(x) ∈ ℝ and f is also never equal to −∞. In words, a function is proper if its domain is not empty, it never takes on the value −∞, and it also is not identically equal to +∞. If f : ℝⁿ → [−∞, ∞] is a proper convex function then there exist some vector b ∈ ℝⁿ and some r ∈ ℝ such that

f(x) ≥ x · b − r    for every x,

where x · b denotes the dot product of these vectors.
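For a differentiable convex function on ℝ, a tangent line supplies such an affine minorant explicitly. A small check for the assumed example f(x) = x², with b = 2x₀ and r = x₀²:

```python
import numpy as np

# Affine minorant of a proper convex function: f(x) >= x*b - r.
# For f(x) = x**2 the tangent at x0 gives b = 2*x0 and r = x0**2,
# since x**2 - (2*x0*x - x0**2) = (x - x0)**2 >= 0.
x0 = 1.5
b, r = 2.0 * x0, x0**2

xs = np.linspace(-10.0, 10.0, 2001)
assert np.all(xs**2 >= xs * b - r - 1e-9)   # minorant on the whole grid
```

The bound is tight exactly at x₀, where the tangent touches the graph.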
The convex conjugate of an extended real-valued function f : X → [−∞, ∞] (not necessarily convex) is the function f* : X* → [−∞, ∞] from the (continuous) dual space X* of X, defined by[3]

f*(x*) := sup { ⟨x*, x⟩ − f(x) : x ∈ X },

where the brackets ⟨·, ·⟩ denote the canonical duality ⟨x*, z⟩ := x*(z). The biconjugate of f is the map f** = (f*)* : X → [−∞, ∞] defined by f**(x) := sup_{z* ∈ X*} {⟨x, z*⟩ − f*(z*)} for every x ∈ X. If Func(X; Y) denotes the set of Y-valued functions on X, then the map Func(X; [−∞, ∞]) → Func(X*; [−∞, ∞]) defined by f ↦ f* is called the Legendre–Fenchel transform.

If f : X → [−∞, ∞] and x ∈ X then the subdifferential set is

∂f(x) := {x* ∈ X* : f(y) ≥ f(x) + ⟨x*, y − x⟩ for all y ∈ X}.

For example, in the important special case where f = ‖·‖ is a norm on X, it can be shown[proof 1] that if 0 ≠ x ∈ X then this definition reduces down to

∂f(x) = {x* ∈ X* : ⟨x*, x⟩ = ‖x‖ and ‖x*‖ = 1}.

For any x ∈ X and x* ∈ X*, f(x) + f*(x*) ≥ ⟨x*, x⟩, which is called the Fenchel–Young inequality.
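For the assumed example f(x) = x²/2 on ℝ, whose conjugate is f*(p) = p²/2, the Fenchel–Young inequality reduces to (x − p)²/2 ≥ 0, which the sketch below verifies on random samples:

```python
import numpy as np

# Fenchel-Young inequality f(x) + f*(p) >= <p, x> for f(x) = x**2/2,
# whose convex conjugate is f*(p) = p**2/2.  The gap equals
# (x - p)**2 / 2, so equality holds exactly when p = f'(x) = x,
# i.e. when p is the (unique) subgradient of f at x.
rng = np.random.default_rng(3)
xs = rng.normal(size=200)
ps = rng.normal(size=200)

gap = xs**2 / 2 + ps**2 / 2 - ps * xs

assert np.all(gap >= -1e-12)                 # inequality everywhere
assert np.allclose(gap, (xs - ps)**2 / 2)    # closed form of the gap
```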
This inequality is an equality (i.e. f(x) + f*(x*) = ⟨x*, x⟩) if and only if x* ∈ ∂f(x). It is in this way that the subdifferential set ∂f(x) is directly related to the convex conjugate f*(x*).

The biconjugate of a function f : X → [−∞, ∞] is the conjugate of the conjugate, typically written as f** : X → [−∞, ∞]. The biconjugate is useful for showing when strong or weak duality hold (via the perturbation function). For any x ∈ X, the inequality f**(x) ≤ f(x) follows from the Fenchel–Young inequality. For proper functions, f = f** if and only if f is convex and lower semi-continuous, by the Fenchel–Moreau theorem.[3][4]

A convex minimization (primal) problem is one of the form

find inf_{x ∈ M} f(x)

for a convex function f and a convex subset M. In optimization theory, the duality principle states that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. In general, given two dual pairs of separated locally convex spaces (X, X*) and (Y, Y*), and given the function f : X → [−∞, ∞], we can define the primal problem as finding x such that

inf_{x ∈ X} f(x)

is attained. If there are constraint conditions, these can be built into the function f by letting f = f + I_constraints, where I is the indicator function.
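The inequality f** ≤ f, and the fact that the biconjugate is the closed convex hull, can be seen by conjugating twice on a grid. The sketch below uses an assumed non-convex "double well" example; the grid bounds and resolutions are arbitrary choices:

```python
import numpy as np

# Discrete Legendre-Fenchel transform on a grid: conjugating twice
# yields an approximation of f**, the closed convex hull of f, which
# satisfies f** <= f pointwise.
xs = np.linspace(-3.0, 3.0, 601)
f = np.minimum((xs - 1)**2, (xs + 1)**2)     # non-convex "double well"

def conjugate(vals, grid, duals):
    # g(p) = sup over the grid of (p*x - vals(x)), for each dual p
    return np.max(duals[:, None] * grid[None, :] - vals[None, :], axis=1)

ps = np.linspace(-10.0, 10.0, 801)
f_star = conjugate(f, xs, ps)
f_bistar = conjugate(f_star, ps, xs)

assert np.all(f_bistar <= f + 1e-9)          # f** <= f everywhere
# At x = 0 the hull bridges the two wells: f(0) = 1 but f**(0) = 0.
assert np.isclose(f[300], 1.0) and f_bistar[300] < 1e-6
```

The convex hull replaces the bump between the two minima at x = ±1 by the flat segment joining them, which is exactly where f** drops strictly below f.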
Then let F : X × Y → [−∞, ∞] be a perturbation function such that F(x, 0) = f(x).[5]

The dual problem with respect to the chosen perturbation function is given by

sup_{y* ∈ Y*} −F*(0, y*),

where F* is the convex conjugate in both variables of F.

The duality gap is the difference of the right and left hand sides of the inequality[6][5][7]

sup_{y* ∈ Y*} −F*(0, y*) ≤ inf_{x ∈ X} F(x, 0).

This principle is the same as weak duality. If the two sides are equal to each other, then the problem is said to satisfy strong duality. There are many conditions for strong duality to hold; Slater's condition for a convex optimization problem is one example.

For a convex minimization problem with inequality constraints,

minimize f(x) subject to g_i(x) ≤ 0 for i = 1, …, m,

the Lagrangian dual problem is

sup_u inf_x L(x, u) subject to u_i ≥ 0 for i = 1, …, m,

where the objective function L(x, u) is the Lagrange dual function defined as follows:

L(x, u) = f(x) + Σ_{j=1}^{m} u_j g_j(x).
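A one-dimensional example of Lagrangian duality (an assumed toy program, not from the text): minimize x² subject to 1 − x ≤ 0. Here the dual function has the closed form g(u) = u − u²/4, and the duality gap is zero:

```python
import numpy as np

# Lagrangian duality for the toy convex program
#     minimize x**2   subject to 1 - x <= 0.
# L(x, u) = x**2 + u*(1 - x); minimizing over x (at x = u/2) gives the
# dual function g(u) = u - u**2/4.  Slater's condition holds, so the
# dual optimum equals the primal optimum, which is 1 (attained at x = 1).
g = lambda u: u - u**2 / 4.0

us = np.linspace(0.0, 10.0, 100001)          # sample of feasible u >= 0
dual_opt = np.max(g(us))
primal_opt = 1.0

assert np.all(g(us) <= primal_opt + 1e-12)   # weak duality
assert abs(dual_opt - primal_opt) < 1e-6     # strong duality here
```

The dual maximum is reached at u = 2, the Lagrange multiplier of the active constraint.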
https://en.wikipedia.org/wiki/Convex_analysis
In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as the Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). The convex conjugate is widely used for constructing the dual problem in optimization theory, thus generalizing Lagrangian duality.

Let X be a real topological vector space and let X* be the dual space to X. Denote by ⟨·, ·⟩ : X* × X → ℝ the canonical dual pairing, which is defined by ⟨x*, x⟩ ↦ x*(x). For a function f : X → ℝ ∪ {−∞, +∞} taking values on the extended real number line, its convex conjugate is the function f* : X* → ℝ ∪ {−∞, +∞} whose value at x* ∈ X* is defined to be the supremum

f*(x*) := sup { ⟨x*, x⟩ − f(x) : x ∈ X },

or, equivalently, in terms of the infimum

f*(x*) := − inf { f(x) − ⟨x*, x⟩ : x ∈ X }.

This definition can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes.[1]

For more examples, see § Table of selected convex conjugates. The convex conjugate and Legendre transform of the exponential function agree except that the domain of the convex conjugate is strictly larger, as the Legendre transform is only defined for positive real numbers. See this article for an example. Let F denote a cumulative distribution function of a random variable X.
Then (integrating by parts),

f(x) := ∫_{−∞}^{x} F(u) du = E[max(0, x − X)] = x − E[min(x, X)]

has the convex conjugate

f*(p) = ∫_0^p F⁻¹(q) dq = (p − 1)F⁻¹(p) + E[min(F⁻¹(p), X)] = pF⁻¹(p) − E[max(0, F⁻¹(p) − X)].

A particular interpretation has the transform

f^inc(x) := arg sup_t { t·x − ∫_0^1 max{t − f(u), 0} du },

as this is a nondecreasing rearrangement of the initial function f; in particular, f^inc = f for f nondecreasing.

The convex conjugate of a closed convex function is again a closed convex function. The convex conjugate of a polyhedral convex function (a convex function with polyhedral epigraph) is again a polyhedral convex function.

Declare that f ≤ g if and only if f(x) ≤ g(x) for all x. Then convex conjugation is order-reversing, which by definition means that if f ≤ g then f* ≥ g*. For a family of functions (f_α)_α it follows from the fact that supremums may be interchanged that

(inf_α f_α)* = sup_α f_α*,

and from the max–min inequality that

(sup_α f_α)* ≤ inf_α f_α*.

The convex conjugate of a function is always lower semi-continuous. The biconjugate f** (the convex conjugate of the convex conjugate) is also the closed convex hull, i.e.
the largest lower semi-continuous convex function with f** ≤ f. For proper functions f, f = f** if and only if f is convex and lower semi-continuous.

For any function f and its convex conjugate f*, Fenchel's inequality (also known as the Fenchel–Young inequality) holds for every x ∈ X and p ∈ X*:

⟨p, x⟩ ≤ f(x) + f*(p).

Furthermore, the equality holds only when p ∈ ∂f(x). The proof follows from the definition of the convex conjugate: f*(p) = sup_{x̃} {⟨p, x̃⟩ − f(x̃)} ≥ ⟨p, x⟩ − f(x).

For two functions f₀ and f₁ and a number 0 ≤ λ ≤ 1, the convexity relation

((1 − λ)f₀ + λf₁)* ≤ (1 − λ)f₀* + λf₁*

holds. The * operation is a convex mapping itself.

The infimal convolution (or epi-sum) of two functions f and g is defined as

(f □ g)(x) = inf { f(x − y) + g(y) : y }.

Let f₁, …, f_m be proper, convex and lower semicontinuous functions on ℝⁿ. Then the infimal convolution is convex and lower semicontinuous (but not necessarily proper),[2] and satisfies

(f₁ □ ⋯ □ f_m)* = f₁* + ⋯ + f_m*.

The infimal convolution of two functions has a geometric interpretation: the (strict) epigraph of the infimal convolution of two functions is the Minkowski sum of the (strict) epigraphs of those functions.[3]

If the function f is differentiable, then its derivative is the maximizing argument in the computation of the convex conjugate: x* = f′(x), and hence

f*(f′(x)) = ⟨f′(x), x⟩ − f(x).

If, for some γ > 0, g(x) = α + βx + γ·f(λx + δ), then

g*(y) = −α − δ(y − β)/λ + γ·f*((y − β)/(γλ)).

Let A : X → Y be a bounded linear operator.
For any convex function f on X,

(A f)* = f* A*,

where

(A f)(y) = inf { f(x) : x ∈ X, Ax = y }

is the preimage of f with respect to A and A* is the adjoint operator of A.[4]

A closed convex function f is symmetric with respect to a given set G of orthogonal linear transformations if and only if its convex conjugate f* is symmetric with respect to G.

The following table provides Legendre transforms for many common functions as well as a few useful properties.[5]
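The epi-sum and its conjugate identity can be verified numerically for an assumed pair of quadratics, f(x) = x²/2 and g(x) = x², whose infimal convolution has the closed form x²/3:

```python
import numpy as np

# Infimal convolution (epi-sum) of f(x) = x**2/2 and g(x) = x**2 on a
# grid.  Minimizing over y reproduces the closed form (f [] g)(x) =
# x**2/3, and the identity (f [] g)* = f* + g* holds for quadratics:
# f*(p) = p**2/2, g*(p) = p**2/4, and the conjugate of x**2/3 is 3*p**2/4.
ys = np.linspace(-50.0, 50.0, 200001)

def infconv(x):
    return np.min((x - ys)**2 / 2 + ys**2)

for x in [-3.0, 0.0, 1.0, 4.5]:
    assert abs(infconv(x) - x**2 / 3) < 1e-4   # matches the closed form

for p in [-2.0, 0.5, 3.0]:
    assert abs((p**2 / 2 + p**2 / 4) - 3 * p**2 / 4) < 1e-12
```

The inner minimum is attained at y = x/3, which splits x between the two epigraphs in the Minkowski-sum picture described above.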
https://en.wikipedia.org/wiki/Convex_conjugate
In geometry, a convex curve is a plane curve that has a supporting line through each of its points. There are many other equivalent definitions of these curves, going back to Archimedes. Examples of convex curves include the convex polygons, the boundaries of convex sets, and the graphs of convex functions. Important subclasses of convex curves include the closed convex curves (the boundaries of bounded convex sets), the smooth curves that are convex, and the strictly convex curves, which have the additional property that each supporting line passes through a unique point of the curve.

Bounded convex curves have a well-defined length, which can be obtained by approximating them with polygons, or from the average length of their projections onto a line. The maximum number of grid points that can belong to a single curve is controlled by its length. The points at which a convex curve has a unique supporting line are dense within the curve, and the distance of these lines from the origin defines a continuous support function. A smooth simple closed curve is convex if and only if its curvature has a consistent sign, which happens if and only if its total curvature equals its total absolute curvature.

Archimedes, in his On the Sphere and Cylinder, defines convex arcs as the plane curves that lie on one side of the line through their two endpoints, and for which all chords touch the same side of the curve.[1] This may have been the first formal definition of any notion of convexity, although convex polygons and convex polyhedra were already long known before Archimedes.[2] For the next two millennia, there was little study of convexity:[2] its in-depth investigation began again only in the 19th century,[3] when Augustin-Louis Cauchy and others began using mathematical analysis instead of algebraic methods to put calculus on a more rigorous footing.[1][2]

Many other equivalent definitions for the convex curves are possible, as detailed below.
Convex curves have also been defined by their supporting lines, by the sets they form boundaries of, and by their intersections with lines. In order to distinguish closed convex curves from curves that are not closed, the closed convex curves have sometimes also been called convex loops, and convex curves that are not closed have also been called convex arcs.[4]

A plane curve is the image of any continuous function from an interval to the Euclidean plane. Intuitively, it is a set of points that could be traced out by a moving point. More specifically, smooth curves generally at least require that the function from the interval to the plane be continuously differentiable, and in some contexts are defined to require higher derivatives. The function parameterizing a smooth curve is often assumed to be regular, meaning that its derivative stays away from zero; intuitively, the moving point never slows to a halt or reverses direction. Each interior point of a smooth curve has a tangent line. If, in addition, the second derivative exists everywhere, then each of these points has a well-defined curvature.[5]

A plane curve is closed if the two endpoints of the interval are mapped to the same point in the plane, and it is simple if no other two points coincide.[5] Less commonly, a simple plane curve may be said to be open if it is topologically equivalent to a line, neither having an endpoint nor forming any limiting point that does not belong to it, and dividing the plane into two unbounded regions.[6] However, this terminology is ambiguous, as other sources refer to a curve with two distinct endpoints as an open curve.[7] Here, we use the topological-line meaning of an open curve.

A supporting line is a line containing at least one point of the curve, for which the curve is contained in one of the two half-planes bounded by the line.
A plane curve is called convex if it has a supporting line through each of its points.[8][9] For example, the graph of a convex function has a supporting line below the graph through each of its points. More strongly, at the points where the function has a derivative, there is exactly one supporting line, the tangent line.[10]

Supporting lines and tangent lines are not the same thing,[11] but for convex curves, every tangent line is a supporting line.[8] At a point of a curve where a tangent line exists, there can only be one supporting line, the tangent line.[12] Therefore, a smooth curve is convex if it lies on one side of each of its tangent lines. This may be used as an equivalent definition of convexity for smooth curves, or more generally for piecewise smooth curves.[13][a]

A convex curve may be alternatively defined as a connected subset of the boundary of a convex set in the Euclidean plane.[8][9] Not every convex set has a connected boundary,[b] but when it does, the whole boundary is an example of a convex curve. When a bounded convex set in the plane is not a line segment, its boundary forms a simple closed convex curve.[16] By the Jordan curve theorem, a simple closed curve divides the plane into interior and exterior regions, and another equivalent definition of a closed convex curve is that it is a simple closed curve whose union with its interior is a convex set.[9][17] Examples of open and unbounded convex curves include the graphs of convex functions. Again, these are boundaries of convex sets, the epigraphs of the same functions.[18]

This definition is equivalent to the definition of convex curves from support lines. Every convex curve, defined as a curve with a support line through each point, is a subset of the boundary of its own convex hull.
Every connected subset of the boundary of a convex set has a support line through each of its points.[8][9][19]

For a convex curve, every line in the plane intersects the curve in one of four ways: its intersection can be the empty set, a single point, a pair of points, or an interval. In the cases where a closed curve intersects in a single point or an interval, the line is a supporting line. This can be used as an alternative definition of the convex curves: they are the Jordan curves (connected simple curves) for which every intersection with a line has one of these four types. This definition can be used to generalize convex curves from the Euclidean plane to certain other linear spaces such as the real projective plane. In these spaces, like in the Euclidean plane, any curve with only these restricted line intersections has a supporting line for each point.[20]

The strictly convex curves again have many equivalent definitions. They are the convex curves that do not contain any line segments.[21] They are the curves for which every intersection of the curve with a line consists of at most two points.[20] They are the curves that can be formed as a connected subset of the boundary of a strictly convex set.[22] Here, a set is strictly convex if every point of its boundary is an extreme point of the set, the unique maximizer of some linear function.[23] As the boundaries of strictly convex sets, these are the curves that lie in convex position, meaning that none of their points can be a convex combination of any other subset of its points.[24]

Closed strictly convex curves can be defined as the simple closed curves that are locally equivalent (under an appropriate coordinate transformation) to the graphs of strictly convex functions.
This means that, at each point of the curve, there is a neighborhood of the point and a system of Cartesian coordinates within that neighborhood such that, within that neighborhood, the curve coincides with the graph of a strictly convex function.[25][c]

Smooth closed convex curves with an axis of symmetry, such as an ellipse or Moss's egg, may sometimes be called ovals.[28] However, the same word has also been used to describe the sets for which each point has a unique line disjoint from the rest of the set, especially in the context of ovals in finite projective geometry. In Euclidean geometry these are the smooth strictly convex closed curves, without any requirement of symmetry.[20]

Every bounded convex curve is a rectifiable curve, meaning that it has a well-defined finite arc length, and can be approximated in length by a sequence of inscribed polygonal chains. For closed convex curves, the length may be given by a form of the Crofton formula as π times the average length of its projections onto lines.[8] It is also possible to approximate the area of the convex hull of a convex curve by a sequence of inscribed convex polygons. For any integer n, the most accurate approximating n-gon has the property that each vertex has a supporting line parallel to the line through its two neighboring vertices.[29] As Archimedes already knew, if two convex curves have the same endpoints, and one of the two curves lies between the other and the line through their endpoints, then the inner curve is shorter than the outer one.[2]

According to Newton's theorem about ovals, the area cut off from an infinitely differentiable convex curve by a line cannot be an algebraic function of the coefficients of the line.[30]

It is not possible for a strictly convex curve to pass through many points of the integer lattice.
If the curve has length L, then according to a theorem of Vojtěch Jarník, the number of lattice points that it can pass through is at most

(3/∛(2π)) L^(2/3) + O(L^(1/3)).

Because this estimate uses big O notation, it is accurate only in the limiting case of large lengths. Neither the leading constant nor the exponent in the error term can be improved.[31]

A convex curve can have at most a countable set of singular points, where it has more than one supporting line. All of the remaining points must be non-singular, and the unique supporting line at these points is necessarily a tangent line. This implies that the non-singular points form a dense set in the curve.[10][32] It is also possible to construct convex curves for which the singular points are dense.[19]

A closed strictly convex curve has a continuous support function, mapping each direction of supporting lines to their signed distance from the origin. It is an example of a hedgehog, a type of curve determined as the envelope of a system of lines with a continuous support function. The hedgehogs also include non-convex curves, such as the astroid, and even self-crossing curves, but the smooth strictly convex curves are the only hedgehogs that have no singular points.[33]

It is impossible for a convex curve to have three parallel tangent lines. More strongly, a smooth closed curve is convex if and only if it does not have three parallel tangent lines. In one direction, the middle of any three parallel tangent lines would separate the points of tangency of the other two lines, so it could not be a line of support. There could be no other line of support through its point of tangency, so a curve tangent to these three lines could not be convex. In the other direction, a non-convex smooth closed curve has at least one point with no support line.
The tangent line through that point, and the two tangent supporting lines parallel to it, form a set of three parallel tangent lines.[13][d]

According to the four-vertex theorem, every smooth closed curve has at least four vertices, points that are local minima or local maxima of curvature.[36] The original proof of the theorem, by Syamadas Mukhopadhyaya in 1909, considered only convex curves;[37] it was later extended to all smooth closed curves.[36]

Curvature can be used to characterize the smooth closed curves that are convex.[13] The curvature depends in a trivial way on the parameterization of the curve: if a regular parameterization of a curve is reversed, the same set of points results, but its curvature is negated.[5] A smooth simple closed curve, with a regular parameterization, is convex if and only if its curvature has a consistent sign: always non-negative, or always non-positive.[13][e] Every smooth simple closed curve with strictly positive (or strictly negative) curvature is strictly convex, but some strictly convex curves can have points with curvature zero.[39]

The total absolute curvature of a smooth convex curve, ∫|κ(s)| ds, is at most 2π. It is exactly 2π for closed convex curves, equalling the total curvature of these curves, and of any simple closed curve. For convex curves, the equality of total absolute curvature and total curvature follows from the fact that the curvature has a consistent sign. For closed curves that are not convex, the total absolute curvature is always greater than 2π, and its excess can be used as a measure of how far from convex the curve is. More generally, by Fenchel's theorem, the total absolute curvature of a closed smooth space curve is at least 2π, with equality only for convex plane curves.[40][41]

By the Alexandrov theorem, a non-smooth convex curve has a second derivative, and therefore a well-defined curvature, almost everywhere.
This means that the subset of points without a second derivative has measure zero in the curve. However, in other senses, the set of points with a second derivative can be small. In particular, for the graphs of generic non-smooth convex functions, it is a meager set, that is, a countable union of nowhere dense sets.[42]

The boundary of any convex polygon forms a convex curve (one that is a piecewise linear curve, and not strictly convex). A polygon that is inscribed in any strictly convex curve, with its vertices in order along the curve, must be a convex polygon.[43]

The inscribed square problem is the problem of proving that every simple closed curve in the plane contains the four corners of a square. Although still unsolved in general, its solved cases include the convex curves.[44] In connection with this problem, related problems of finding inscribed quadrilaterals have been studied for convex curves. A scaled and rotated copy of any rectangle or trapezoid can be inscribed in any given closed convex curve. When the curve is smooth, a scaled and rotated copy of any cyclic quadrilateral can be inscribed in it. However, the assumption of smoothness is necessary for this result, because some right kites cannot be inscribed in some obtuse isosceles triangles.[45][46] Regular polygons with more than four sides cannot be inscribed in all closed convex curves, because the curve formed by a semicircle and its diameter does not contain any of these polygons.[47]
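The curvature characterization of convexity described above can be checked numerically for an assumed example, the ellipse (2 cos t, sin t): its signed curvature keeps one sign, and integrating κ ds over one loop gives total curvature 2π:

```python
import numpy as np

# Signed curvature of the ellipse (2*cos t, sin t), a smooth closed
# convex curve: kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2) keeps a
# consistent (here positive) sign, and the total curvature, the
# integral of kappa ds, equals 2*pi.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
x1, y1 = -2.0 * np.sin(t), np.cos(t)         # first derivatives
x2, y2 = -2.0 * np.cos(t), -np.sin(t)        # second derivatives

speed = np.hypot(x1, y1)
kappa = (x1 * y2 - y1 * x2) / speed**3

integrand = kappa * speed                    # kappa ds = kappa * |c'(t)| dt
total = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

assert np.all(kappa > 0)                     # consistent sign -> convex
assert abs(total - 2.0 * np.pi) < 1e-6       # total curvature is 2*pi
```

For this ellipse the numerator x′y″ − y′x″ is identically 2, so positivity of κ is exact rather than a numerical accident.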
https://en.wikipedia.org/wiki/Convex_curve
In mathematics, specifically in Riemannian geometry, geodesic convexity is a natural generalization of convexity for sets and functions to Riemannian manifolds. It is common to drop the prefix "geodesic" and refer simply to "convexity" of a set or function. Let (M, g) be a Riemannian manifold.
https://en.wikipedia.org/wiki/Geodesic_convexity
In functional analysis, the Hahn–Banach theorem is a central result that allows the extension of bounded linear functionals defined on a vector subspace of some vector space to the whole space. The theorem also shows that there are sufficient continuous linear functionals defined on every normed vector space in order to study the dual space. Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem, and has numerous uses in convex geometry.

The theorem is named for the mathematicians Hans Hahn and Stefan Banach, who proved it independently in the late 1920s. The special case of the theorem for the space C[a, b] of continuous functions on an interval was proved earlier (in 1912) by Eduard Helly,[1] and a more general extension theorem, the M. Riesz extension theorem, from which the Hahn–Banach theorem can be derived, was proved in 1923 by Marcel Riesz.[2]

The first Hahn–Banach theorem was proved by Eduard Helly in 1912, who showed that certain linear functionals defined on a subspace of a certain type of normed space (ℂ^ℕ) had an extension of the same norm. Helly did this through the technique of first proving that a one-dimensional extension exists (where the linear functional has its domain extended by one dimension) and then using induction. In 1927, Hahn defined general Banach spaces and used Helly's technique to prove a norm-preserving version of the Hahn–Banach theorem for Banach spaces (where a bounded linear functional on a subspace has a bounded linear extension of the same norm to the whole space). In 1929, Banach, who was unaware of Hahn's result, generalized it by replacing the norm-preserving version with the dominated extension version that uses sublinear functions. Whereas Helly's proof used mathematical induction, Hahn and Banach both used transfinite induction.[3]

The Hahn–Banach theorem arose from attempts to solve infinite systems of linear equations.
This is needed to solve problems such as the moment problem, whereby given all the potential moments of a function one must determine if a function having these moments exists, and, if so, find it in terms of those moments. Another such problem is the Fourier cosine series problem, whereby given all the potential Fourier cosine coefficients one must determine if a function having those coefficients exists, and, again, find it if so.

Riesz and Helly solved the problem for certain classes of spaces (such as L^p([0, 1]) and C([a, b])), where they discovered that the existence of a solution was equivalent to the existence and continuity of certain linear functionals. In effect, they needed to solve the following problem:[3] If X happens to be a reflexive space, then to solve the vector problem, it suffices to solve the following dual problem:[3]

Riesz went on to define the L^p([0, 1]) spaces (1 < p < ∞) in 1910 and the ℓ^p spaces in 1913. While investigating these spaces he proved a special case of the Hahn–Banach theorem. Helly also proved a special case of the Hahn–Banach theorem in 1912. In 1910, Riesz solved the functional problem for some specific spaces, and in 1912, Helly solved it for a more general class of spaces. It was not until 1932 that Banach, in one of the first important applications of the Hahn–Banach theorem, solved the general functional problem.
The following theorem states the general functional problem and characterizes its solution.[3]

Theorem[3] (The functional problem) — Let (x_i)_{i∈I} be vectors in a real or complex normed space X and let (c_i)_{i∈I} be scalars also indexed by I ≠ ∅. There exists a continuous linear functional f on X such that f(x_i) = c_i for all i ∈ I if and only if there exists a K > 0 such that for any choice of scalars (s_i)_{i∈I} where all but finitely many s_i are 0, the following holds:

|Σ_{i∈I} s_i c_i| ≤ K ‖Σ_{i∈I} s_i x_i‖.

The Hahn–Banach theorem can be deduced from the above theorem.[3] If X is reflexive, then this theorem solves the vector problem.

A real-valued function f : M → ℝ defined on a subset M of X is said to be dominated (above) by a function p : X → ℝ if f(m) ≤ p(m) for every m ∈ M. For this reason, the following version of the Hahn–Banach theorem is called the dominated extension theorem.
Hahn–Banach dominated extension theorem(for real linear functionals)[4][5][6]—Ifp:X→R{\displaystyle p:X\to \mathbb {R} }is asublinear function(such as anormorseminormfor example) defined on a real vector spaceX{\displaystyle X}then anylinear functionaldefined on a vector subspace ofX{\displaystyle X}that isdominated abovebyp{\displaystyle p}has at least onelinear extensionto all ofX{\displaystyle X}that is also dominated above byp.{\displaystyle p.} Explicitly, ifp:X→R{\displaystyle p:X\to \mathbb {R} }is asublinear function, which by definition means that it satisfiesp(x+y)≤p(x)+p(y)andp(tx)=tp(x)for allx,y∈Xand all realt≥0,{\displaystyle p(x+y)\leq p(x)+p(y)\quad {\text{ and }}\quad p(tx)=tp(x)\qquad {\text{ for all }}\;x,y\in X\;{\text{ and all real }}\;t\geq 0,}and iff:M→R{\displaystyle f:M\to \mathbb {R} }is a linear functional defined on a vector subspaceM{\displaystyle M}ofX{\displaystyle X}such thatf(m)≤p(m)for allm∈M{\displaystyle f(m)\leq p(m)\quad {\text{ for all }}m\in M}then there exists a linear functionalF:X→R{\displaystyle F:X\to \mathbb {R} }such thatF(m)=f(m)for allm∈M,{\displaystyle F(m)=f(m)\quad {\text{ for all }}m\in M,}F(x)≤p(x)for allx∈X.{\displaystyle F(x)\leq p(x)\quad ~\;\,{\text{ for all }}x\in X.}Moreover, ifp{\displaystyle p}is aseminormthen|F(x)|≤p(x){\displaystyle |F(x)|\leq p(x)}necessarily holds for allx∈X.{\displaystyle x\in X.} The theorem remains true if the requirements onp{\displaystyle p}are relaxed to require only thatp{\displaystyle p}be aconvex function:[7][8]p(tx+(1−t)y)≤tp(x)+(1−t)p(y)for all0<t<1andx,y∈X.{\displaystyle p(tx+(1-t)y)\leq tp(x)+(1-t)p(y)\qquad {\text{ for all }}0<t<1{\text{ and }}x,y\in X.}A functionp:X→R{\displaystyle p:X\to \mathbb {R} }is convex and satisfiesp(0)≤0{\displaystyle p(0)\leq 0}if and only ifp(ax+by)≤ap(x)+bp(y){\displaystyle p(ax+by)\leq ap(x)+bp(y)}for all vectorsx,y∈X{\displaystyle x,y\in X}and all non-negative reala,b≥0{\displaystyle a,b\geq 0}such thata+b≤1.{\displaystyle a+b\leq 
1.}Everysublinear functionis a convex function. On the other hand, ifp:X→R{\displaystyle p:X\to \mathbb {R} }is convex withp(0)≥0,{\displaystyle p(0)\geq 0,}then the function defined byp0(x)=definft>0p(tx)t{\displaystyle p_{0}(x)\;{\stackrel {\scriptscriptstyle {\text{def}}}{=}}\;\inf _{t>0}{\frac {p(tx)}{t}}}ispositively homogeneous(because for allx{\displaystyle x}andr>0{\displaystyle r>0}one hasp0(rx)=inft>0p(trx)t=rinft>0p(trx)tr=rinfτ>0p(τx)τ=rp0(x){\displaystyle p_{0}(rx)=\inf _{t>0}{\frac {p(trx)}{t}}=r\inf _{t>0}{\frac {p(trx)}{tr}}=r\inf _{\tau >0}{\frac {p(\tau x)}{\tau }}=rp_{0}(x)}), hence, being convex, it is sublinear. It is also bounded above byp0≤p,{\displaystyle p_{0}\leq p,}and satisfiesF≤p0{\displaystyle F\leq p_{0}}for every linear functionalF≤p.{\displaystyle F\leq p.}So the extension of the Hahn–Banach theorem to convex functionals does not have a much larger content than the classical one stated for sublinear functionals. IfF:X→R{\displaystyle F:X\to \mathbb {R} }is linear thenF≤p{\displaystyle F\leq p}if and only if[4]−p(−x)≤F(x)≤p(x)for allx∈X,{\displaystyle -p(-x)\leq F(x)\leq p(x)\quad {\text{ for all }}x\in X,}which is the (equivalent) conclusion that some authors[4]write instead ofF≤p.{\displaystyle F\leq p.}It follows that ifp:X→R{\displaystyle p:X\to \mathbb {R} }is alsosymmetric, meaning thatp(−x)=p(x){\displaystyle p(-x)=p(x)}holds for allx∈X,{\displaystyle x\in X,}thenF≤p{\displaystyle F\leq p}if and only if|F|≤p.{\displaystyle |F|\leq p.}Everynormis aseminormand both are symmetricbalancedsublinear functions. A sublinear function is a seminorm if and only if it is abalanced function. On a real vector space (although not on a complex vector space), a sublinear function is a seminorm if and only if it is symmetric. Theidentity functionR→R{\displaystyle \mathbb {R} \to \mathbb {R} }onX:=R{\displaystyle X:=\mathbb {R} }is an example of a sublinear function that is not a seminorm.
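The construction of p0 from a convex p can be illustrated numerically. In the following Python sketch (a hypothetical one-dimensional example, not part of the standard treatment), p(x) = x² + |x| is convex with p(0) = 0, and the infimum defining p0 is approximated on a grid of values t > 0; the result is the sublinear function |x|, which is bounded above by p and positively homogeneous.

```python
def p(x):
    # A convex function with p(0) = 0 that is not positively homogeneous.
    return x * x + abs(x)

def p0(x, ts=None):
    # Numerically approximate p0(x) = inf_{t>0} p(t x)/t on a grid of t.
    if ts is None:
        ts = [10 ** (k / 10) for k in range(-60, 11)]  # t from 1e-6 to 10
    return min(p(t * x) / t for t in ts)

x = 2.0
assert abs(p0(x) - abs(x)) < 1e-4         # here p0 recovers |x|
assert p0(x) <= p(x) + 1e-12              # p0 <= p
r = 3.0
assert abs(p0(r * x) - r * p0(x)) < 1e-3  # positive homogeneity (approx.)
```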
The dominated extension theorem for real linear functionals implies the following alternative statement of the Hahn–Banach theorem that can be applied to linear functionals on real or complex vector spaces. Hahn–Banach theorem[3][9]—Supposep:X→R{\displaystyle p:X\to \mathbb {R} }is aseminormon a vector spaceX{\displaystyle X}over the fieldK,{\displaystyle \mathbf {K} ,}which is eitherR{\displaystyle \mathbb {R} }orC.{\displaystyle \mathbb {C} .}Iff:M→K{\displaystyle f:M\to \mathbf {K} }is a linear functional on a vector subspaceM{\displaystyle M}such that|f(m)|≤p(m)for allm∈M,{\displaystyle |f(m)|\leq p(m)\quad {\text{ for all }}m\in M,}then there exists a linear functionalF:X→K{\displaystyle F:X\to \mathbf {K} }such thatF(m)=f(m)for allm∈M,{\displaystyle F(m)=f(m)\quad \;{\text{ for all }}m\in M,}|F(x)|≤p(x)for allx∈X.{\displaystyle |F(x)|\leq p(x)\quad \;\,{\text{ for all }}x\in X.} The theorem remains true if the requirements onp{\displaystyle p}are relaxed to require only that for allx,y∈X{\displaystyle x,y\in X}and all scalarsa{\displaystyle a}andb{\displaystyle b}satisfying|a|+|b|≤1,{\displaystyle |a|+|b|\leq 1,}[8]p(ax+by)≤|a|p(x)+|b|p(y).{\displaystyle p(ax+by)\leq |a|p(x)+|b|p(y).}This condition holds if and only ifp{\displaystyle p}is aconvexandbalanced functionsatisfyingp(0)≤0,{\displaystyle p(0)\leq 0,}or equivalently, if and only if it is convex, satisfiesp(0)≤0,{\displaystyle p(0)\leq 0,}andp(ux)≤p(x){\displaystyle p(ux)\leq p(x)}for allx∈X{\displaystyle x\in X}and allunit lengthscalarsu.{\displaystyle u.} A complex-valued functionalF{\displaystyle F}is said to bedominated byp{\displaystyle p}if|F(x)|≤p(x){\displaystyle |F(x)|\leq p(x)}for allx{\displaystyle x}in the domain ofF.{\displaystyle F.}With this terminology, the above statements of the Hahn–Banach theorem can be restated more succinctly: Proof The following observations allow theHahn–Banach theorem for real vector spacesto be applied to (complex-valued) linear functionals on complex vector
spaces. Every linear functionalF:X→C{\displaystyle F:X\to \mathbb {C} }on a complex vector space iscompletely determinedby itsreal partRe⁡F:X→R{\displaystyle \;\operatorname {Re} F:X\to \mathbb {R} \;}through the formula[6][proof 1]F(x)=Re⁡F(x)−iRe⁡F(ix)for allx∈X{\displaystyle F(x)\;=\;\operatorname {Re} F(x)-i\operatorname {Re} F(ix)\qquad {\text{ for all }}x\in X}and moreover, if‖⋅‖{\displaystyle \|\cdot \|}is anormonX{\displaystyle X}then theirdual normsare equal:‖F‖=‖Re⁡F‖.{\displaystyle \|F\|=\|\operatorname {Re} F\|.}[10]In particular, a linear functional onX{\displaystyle X}extends another one defined onM⊆X{\displaystyle M\subseteq X}if and only if their real parts are equal onM{\displaystyle M}(in other words, a linear functionalF{\displaystyle F}extendsf{\displaystyle f}if and only ifRe⁡F{\displaystyle \operatorname {Re} F}extendsRe⁡f{\displaystyle \operatorname {Re} f}). The real part of a linear functional onX{\displaystyle X}is always areal-linear functional(meaning that it is linear whenX{\displaystyle X}is considered as a real vector space) and ifR:X→R{\displaystyle R:X\to \mathbb {R} }is a real-linear functional on a complex vector space thenx↦R(x)−iR(ix){\displaystyle x\mapsto R(x)-iR(ix)}defines the unique linear functional onX{\displaystyle X}whose real part isR.{\displaystyle R.} IfF{\displaystyle F}is a linear functional on a (complex or real) vector spaceX{\displaystyle X}and ifp:X→R{\displaystyle p:X\to \mathbb {R} }is a seminorm then[6][proof 2]|F|≤pif and only ifRe⁡F≤p.{\displaystyle |F|\,\leq \,p\quad {\text{ if and only if }}\quad \operatorname {Re} F\,\leq \,p.}Stated in simpler language, a linear functional isdominatedby a seminormp{\displaystyle p}if and only if itsreal part is dominated abovebyp.{\displaystyle p.} Supposep:X→R{\displaystyle p:X\to \mathbb {R} }is a seminorm on a complex vector spaceX{\displaystyle X}and letf:M→C{\displaystyle f:M\to \mathbb {C} }be a linear functional defined on a vector subspaceM{\displaystyle 
M}ofX{\displaystyle X}that satisfies|f|≤p{\displaystyle |f|\leq p}onM.{\displaystyle M.}ConsiderX{\displaystyle X}as a real vector space and apply theHahn–Banach theorem for real vector spacesto thereal-linear functionalRe⁡f:M→R{\displaystyle \;\operatorname {Re} f:M\to \mathbb {R} \;}to obtain a real-linear extensionR:X→R{\displaystyle R:X\to \mathbb {R} }that is also dominated above byp,{\displaystyle p,}so that it satisfiesR≤p{\displaystyle R\leq p}onX{\displaystyle X}andR=Re⁡f{\displaystyle R=\operatorname {Re} f}onM.{\displaystyle M.}The mapF:X→C{\displaystyle F:X\to \mathbb {C} }defined byF(x)=R(x)−iR(ix){\displaystyle F(x)\;=\;R(x)-iR(ix)}is a linear functional onX{\displaystyle X}that extendsf{\displaystyle f}(because their real parts agree onM{\displaystyle M}) and satisfies|F|≤p{\displaystyle |F|\leq p}onX{\displaystyle X}(becauseRe⁡F≤p{\displaystyle \operatorname {Re} F\leq p}andp{\displaystyle p}is a seminorm).◼{\displaystyle \blacksquare } The proof above shows that whenp{\displaystyle p}is a seminorm then there is a one-to-one correspondence between dominated linear extensions off:M→C{\displaystyle f:M\to \mathbb {C} }and dominated real-linear extensions ofRe⁡f:M→R;{\displaystyle \operatorname {Re} f:M\to \mathbb {R} ;}the proof even gives a formula for explicitly constructing a linear extension off{\displaystyle f}from any given real-linear extension of its real part. Continuity A linear functionalF{\displaystyle F}on atopological vector spaceiscontinuousif and only if this is true of its real partRe⁡F;{\displaystyle \operatorname {Re} F;}if the domain is a normed space then‖F‖=‖Re⁡F‖{\displaystyle \|F\|=\|\operatorname {Re} F\|}(where one side is infinite if and only if the other side is infinite).[10]AssumeX{\displaystyle X}is atopological vector spaceandp:X→R{\displaystyle p:X\to \mathbb {R} }is asublinear function.
Ifp{\displaystyle p}is acontinuoussublinear function that dominates a linear functionalF{\displaystyle F}thenF{\displaystyle F}is necessarily continuous.[6]Moreover, a linear functionalF{\displaystyle F}is continuous if and only if itsabsolute value|F|{\displaystyle |F|}(which is aseminormthat dominatesF{\displaystyle F}) is continuous.[6]In particular, a linear functional is continuous if and only if it is dominated by some continuous sublinear function. TheHahn–Banach theorem for real vector spacesultimately follows from Helly's initial result for the special case where the linear functional is extended fromM{\displaystyle M}to a larger vector space in whichM{\displaystyle M}hascodimension1.{\displaystyle 1.}[3] Lemma[6](One–dimensional dominated extension theorem)—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be asublinear functionon a real vector spaceX,{\displaystyle X,}letf:M→R{\displaystyle f:M\to \mathbb {R} }be alinear functionalon apropervector subspaceM⊊X{\displaystyle M\subsetneq X}such thatf≤p{\displaystyle f\leq p}onM{\displaystyle M}(meaningf(m)≤p(m){\displaystyle f(m)\leq p(m)}for allm∈M{\displaystyle m\in M}), and letx∈X{\displaystyle x\in X}be a vectornotinM{\displaystyle M}(soM⊕Rx=span⁡{M,x}{\displaystyle M\oplus \mathbb {R} x=\operatorname {span} \{M,x\}}). There exists a linear extensionF:M⊕Rx→R{\displaystyle F:M\oplus \mathbb {R} x\to \mathbb {R} }off{\displaystyle f}such thatF≤p{\displaystyle F\leq p}onM⊕Rx.{\displaystyle M\oplus \mathbb {R} x.} Given any real numberb,{\displaystyle b,}the mapFb:M⊕Rx→R{\displaystyle F_{b}:M\oplus \mathbb {R} x\to \mathbb {R} }defined byFb(m+rx)=f(m)+rb{\displaystyle F_{b}(m+rx)=f(m)+rb}is always a linear extension off{\displaystyle f}toM⊕Rx{\displaystyle M\oplus \mathbb {R} x}[note 1]but it might not satisfyFb≤p.{\displaystyle F_{b}\leq p.}It will be shown thatb{\displaystyle b}can always be chosen so as to guarantee thatFb≤p,{\displaystyle F_{b}\leq p,}which will complete the proof.
Ifm,n∈M{\displaystyle m,n\in M}thenf(m)−f(n)=f(m−n)≤p(m−n)=p(m+x−x−n)≤p(m+x)+p(−x−n){\displaystyle f(m)-f(n)=f(m-n)\leq p(m-n)=p(m+x-x-n)\leq p(m+x)+p(-x-n)}which implies−p(−n−x)−f(n)≤p(m+x)−f(m).{\displaystyle -p(-n-x)-f(n)~\leq ~p(m+x)-f(m).}So definea=supn∈M[−p(−n−x)−f(n)]andc=infm∈M[p(m+x)−f(m)]{\displaystyle a=\sup _{n\in M}[-p(-n-x)-f(n)]\qquad {\text{ and }}\qquad c=\inf _{m\in M}[p(m+x)-f(m)]}wherea≤c{\displaystyle a\leq c}are real numbers. To guaranteeFb≤p,{\displaystyle F_{b}\leq p,}it suffices thata≤b≤c{\displaystyle a\leq b\leq c}(in fact, this is also necessary[note 2]) because thenb{\displaystyle b}satisfies "the decisive inequality"[6]−p(−n−x)−f(n)≤b≤p(m+x)−f(m)for allm,n∈M.{\displaystyle -p(-n-x)-f(n)~\leq ~b~\leq ~p(m+x)-f(m)\qquad {\text{ for all }}\;m,n\in M.} To see thatf(m)+rb≤p(m+rx){\displaystyle f(m)+rb\leq p(m+rx)}follows,[note 3]assumer≠0{\displaystyle r\neq 0}and substitute1rm{\displaystyle {\tfrac {1}{r}}m}in for bothm{\displaystyle m}andn{\displaystyle n}to obtain−p(−1rm−x)−1rf(m)≤b≤p(1rm+x)−1rf(m).{\displaystyle -p\left(-{\tfrac {1}{r}}m-x\right)-{\tfrac {1}{r}}f\left(m\right)~\leq ~b~\leq ~p\left({\tfrac {1}{r}}m+x\right)-{\tfrac {1}{r}}f\left(m\right).}Ifr>0{\displaystyle r>0}(respectively, ifr<0{\displaystyle r<0}) then the right (respectively, the left) hand side equals1r[p(m+rx)−f(m)]{\displaystyle {\tfrac {1}{r}}\left[p(m+rx)-f(m)\right]}so that multiplying byr{\displaystyle r}givesrb≤p(m+rx)−f(m).{\displaystyle rb\leq p(m+rx)-f(m).}◼{\displaystyle \blacksquare } This lemma remains true ifp:X→R{\displaystyle p:X\to \mathbb {R} }is merely aconvex functioninstead of a sublinear function.[7][8] Assume thatp{\displaystyle p}is convex, which means thatp(ty+(1−t)z)≤tp(y)+(1−t)p(z){\displaystyle p(ty+(1-t)z)\leq tp(y)+(1-t)p(z)}for all0≤t≤1{\displaystyle 0\leq t\leq 1}andy,z∈X.{\displaystyle y,z\in X.}LetM,{\displaystyle M,}f:M→R,{\displaystyle f:M\to \mathbb {R} ,}andx∈X∖M{\displaystyle x\in X\setminus M}be as inthe lemma's statement. 
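The quantities a and c from this proof can be computed numerically in a concrete case. The following Python sketch (hypothetical data: X = ℝ², M the first coordinate axis, f(m, 0) = 2m, p = 3‖·‖, and x = (0, 1)) approximates a and c on a grid, confirms the decisive inequality a ≤ c, and checks that the resulting extension F_b is dominated by p.

```python
import math
import random

# Concrete instance of the lemma in X = R^2 (a hypothetical example):
#   M = {(m, 0)}, f(m, 0) = 2m, p(v) = 3*||v||, x = (0, 1).
def p(v):
    return 3.0 * math.hypot(*v)

def f(m):          # f on M, identified with the real coordinate m
    return 2.0 * m

grid = [i / 100 for i in range(-1000, 1001)]   # sample points of M
a = max(-p((-n, -1.0)) - f(n) for n in grid)   # sup_n [-p(-n-x) - f(n)]
c = min(p((m, 1.0)) - f(m) for m in grid)      # inf_m [ p(m+x) - f(m)]
assert a <= c                                  # the decisive inequality

b = (a + c) / 2                                # any b in [a, c] works
random.seed(1)
for _ in range(500):
    m, r = random.uniform(-5, 5), random.uniform(-5, 5)
    Fb = f(m) + r * b                          # F_b(m + r x)
    assert Fb <= p((m, r)) + 1e-9              # F_b <= p on M + R x
```

In this example a = -√5 and c = √5 (up to grid error), so the admissible interval [a, c] is far from degenerate.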
Given anym,n∈M{\displaystyle m,n\in M}and any positive realr,s>0,{\displaystyle r,s>0,}the positive real numberst:=sr+s{\displaystyle t:={\tfrac {s}{r+s}}}andrr+s=1−t{\displaystyle {\tfrac {r}{r+s}}=1-t}sum to1{\displaystyle 1}so that the convexity ofp{\displaystyle p}onX{\displaystyle X}guaranteesp(sr+sm+rr+sn)=p(sr+s(m−rx)+rr+s(n+sx))≤sr+sp(m−rx)+rr+sp(n+sx){\displaystyle {\begin{alignedat}{9}p\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)~&=~p{\big (}{\tfrac {s}{r+s}}(m-rx)&&+{\tfrac {r}{r+s}}(n+sx){\big )}&&\\&\leq ~{\tfrac {s}{r+s}}\;p(m-rx)&&+{\tfrac {r}{r+s}}\;p(n+sx)&&\\\end{alignedat}}}and hencesf(m)+rf(n)=(r+s)f(sr+sm+rr+sn)by linearity off≤(r+s)p(sr+sm+rr+sn)f≤ponM≤sp(m−rx)+rp(n+sx){\displaystyle {\begin{alignedat}{9}sf(m)+rf(n)~&=~(r+s)\;f\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)&&\qquad {\text{ by linearity of }}f\\&\leq ~(r+s)\;p\left({\tfrac {s}{r+s}}m+{\tfrac {r}{r+s}}n\right)&&\qquad f\leq p{\text{ on }}M\\&\leq ~sp(m-rx)+rp(n+sx)\\\end{alignedat}}}thus proving that−sp(m−rx)+sf(m)≤rp(n+sx)−rf(n),{\displaystyle -sp(m-rx)+sf(m)~\leq ~rp(n+sx)-rf(n),}which after multiplying both sides by1rs{\displaystyle {\tfrac {1}{rs}}}becomes1r[−p(m−rx)+f(m)]≤1s[p(n+sx)−f(n)].{\displaystyle {\tfrac {1}{r}}[-p(m-rx)+f(m)]~\leq ~{\tfrac {1}{s}}[p(n+sx)-f(n)].}This implies that the values defined bya=supr>0m∈M1r[−p(m−rx)+f(m)]andc=infs>0n∈M1s[p(n+sx)−f(n)]{\displaystyle a=\sup _{\stackrel {m\in M}{r>0}}{\tfrac {1}{r}}[-p(m-rx)+f(m)]\qquad {\text{ and }}\qquad c=\inf _{\stackrel {n\in M}{s>0}}{\tfrac {1}{s}}[p(n+sx)-f(n)]}are real numbers that satisfya≤c.{\displaystyle a\leq c.}As in the above proof of theone–dimensional dominated extension theoremabove, for any realb∈R{\displaystyle b\in \mathbb {R} }defineFb:M⊕Rx→R{\displaystyle F_{b}:M\oplus \mathbb {R} x\to \mathbb {R} }byFb(m+rx)=f(m)+rb.{\displaystyle F_{b}(m+rx)=f(m)+rb.}It can be verified that ifa≤b≤c{\displaystyle a\leq b\leq c}thenFb≤p{\displaystyle F_{b}\leq p}whererb≤p(m+rx)−f(m){\displaystyle 
rb\leq p(m+rx)-f(m)}follows fromb≤c{\displaystyle b\leq c}whenr>0{\displaystyle r>0}(respectively, follows froma≤b{\displaystyle a\leq b}whenr<0{\displaystyle r<0}).◼{\displaystyle \blacksquare } Thelemma aboveis the key step in deducing the dominated extension theorem fromZorn's lemma. The set of all possible dominated linear extensions off{\displaystyle f}is partially ordered by extension of each other, so there is a maximal extensionF.{\displaystyle F.}By the codimension-1 result, ifF{\displaystyle F}is not defined on all ofX,{\displaystyle X,}then it can be further extended. ThusF{\displaystyle F}must be defined everywhere, as claimed.◼{\displaystyle \blacksquare } WhenM{\displaystyle M}has countable codimension, induction together with the lemma completes the proof of the Hahn–Banach theorem. The standard proof of the general case usesZorn's lemmaalthough the strictly weakerultrafilter lemma[11](which is equivalent to thecompactness theoremand to theBoolean prime ideal theorem) may be used instead. Hahn–Banach can also be proved usingTychonoff's theoremforcompactHausdorff spaces[12](which is also equivalent to the ultrafilter lemma). TheMizar projecthas completely formalized and automatically checked the proof of the Hahn–Banach theorem in the HAHNBAN file.[13] The Hahn–Banach theorem can be used to guarantee the existence ofcontinuous linear extensionsofcontinuous linear functionals. Hahn–Banach continuous extension theorem[14]—Every continuous linear functionalf{\displaystyle f}defined on a vector subspaceM{\displaystyle M}of a (real or complex)locally convextopological vector spaceX{\displaystyle X}has a continuous linear extensionF{\displaystyle F}to all ofX.{\displaystyle X.}If in additionX{\displaystyle X}is anormed space, then this extension can be chosen so that itsdual normis equal to that off.{\displaystyle f.} Incategory-theoreticterms, the underlying field of the vector space is aninjective objectin the category of locally convex vector spaces.
On anormed(orseminormed) space, a linear extensionF{\displaystyle F}of abounded linear functionalf{\displaystyle f}is said to benorm-preservingif it has the samedual normas the original functional:‖F‖=‖f‖.{\displaystyle \|F\|=\|f\|.}Because of this terminology, the second part ofthe above theoremis sometimes referred to as the "norm-preserving" version of the Hahn–Banach theorem.[15]Explicitly: Norm-preserving Hahn–Banach continuous extension theorem[15]—Every continuous linear functionalf{\displaystyle f}defined on a vector subspaceM{\displaystyle M}of a (real or complex) normed spaceX{\displaystyle X}has a continuous linear extensionF{\displaystyle F}to all ofX{\displaystyle X}that satisfies‖f‖=‖F‖.{\displaystyle \|f\|=\|F\|.} The following observations allow thecontinuous extension theoremto be deduced from theHahn–Banach theorem.[16] The absolute value of a linear functional is always a seminorm. A linear functionalF{\displaystyle F}on atopological vector spaceX{\displaystyle X}is continuous if and only if its absolute value|F|{\displaystyle |F|}is continuous, which happens if and only if there exists a continuous seminormp{\displaystyle p}onX{\displaystyle X}such that|F|≤p{\displaystyle |F|\leq p}on the domain ofF.{\displaystyle F.}[17]IfX{\displaystyle X}is a locally convex space then this statement remains true when the linear functionalF{\displaystyle F}is defined on apropervector subspace ofX.{\displaystyle X.} Letf{\displaystyle f}be a continuous linear functional defined on a vector subspaceM{\displaystyle M}of alocally convex topological vector spaceX.{\displaystyle X.}BecauseX{\displaystyle X}is locally convex, there exists a continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }onX{\displaystyle X}thatdominatesf{\displaystyle f}(meaning that|f(m)|≤p(m){\displaystyle |f(m)|\leq p(m)}for allm∈M{\displaystyle m\in M}). 
By theHahn–Banach theorem, there exists a linear extension off{\displaystyle f}toX,{\displaystyle X,}call itF,{\displaystyle F,}that satisfies|F|≤p{\displaystyle |F|\leq p}onX.{\displaystyle X.}This linear functionalF{\displaystyle F}is continuous since|F|≤p{\displaystyle |F|\leq p}andp{\displaystyle p}is a continuous seminorm. Proof for normed spaces A linear functionalf{\displaystyle f}on anormed spaceiscontinuousif and only if it isbounded, which means that itsdual norm‖f‖=sup{|f(m)|:‖m‖≤1,m∈domain⁡f}{\displaystyle \|f\|=\sup\{|f(m)|:\|m\|\leq 1,m\in \operatorname {domain} f\}}is finite, in which case|f(m)|≤‖f‖‖m‖{\displaystyle |f(m)|\leq \|f\|\|m\|}holds for every pointm{\displaystyle m}in its domain. Moreover, ifc≥0{\displaystyle c\geq 0}is such that|f(m)|≤c‖m‖{\displaystyle |f(m)|\leq c\|m\|}for allm{\displaystyle m}in the functional's domain, then necessarily‖f‖≤c.{\displaystyle \|f\|\leq c.}IfF{\displaystyle F}is a linear extension of a linear functionalf{\displaystyle f}then their dual norms always satisfy‖f‖≤‖F‖{\displaystyle \|f\|\leq \|F\|}[proof 3]so that equality‖f‖=‖F‖{\displaystyle \|f\|=\|F\|}is equivalent to‖F‖≤‖f‖,{\displaystyle \|F\|\leq \|f\|,}which holds if and only if|F(x)|≤‖f‖‖x‖{\displaystyle |F(x)|\leq \|f\|\|x\|}for every pointx{\displaystyle x}in the extension's domain. 
This can be restated in terms of the function‖f‖‖⋅‖:X→R{\displaystyle \|f\|\,\|\cdot \|:X\to \mathbb {R} }defined byx↦‖f‖‖x‖,{\displaystyle x\mapsto \|f\|\,\|x\|,}which is always aseminorm:[note 4] Applying theHahn–Banach theoremtof{\displaystyle f}with this seminorm‖f‖‖⋅‖{\displaystyle \|f\|\,\|\cdot \|}thus produces a dominated linear extension whose norm is (necessarily) equal to that off,{\displaystyle f,}which proves the theorem: Letf{\displaystyle f}be a continuous linear functional defined on a vector subspaceM{\displaystyle M}of a normed spaceX.{\displaystyle X.}Then the functionp:X→R{\displaystyle p:X\to \mathbb {R} }defined byp(x)=‖f‖‖x‖{\displaystyle p(x)=\|f\|\,\|x\|}is a seminorm onX{\displaystyle X}thatdominatesf,{\displaystyle f,}meaning that|f(m)|≤p(m){\displaystyle |f(m)|\leq p(m)}holds for everym∈M.{\displaystyle m\in M.}By theHahn–Banach theorem, there exists a linear functionalF{\displaystyle F}onX{\displaystyle X}that extendsf{\displaystyle f}(which guarantees‖f‖≤‖F‖{\displaystyle \|f\|\leq \|F\|}) and that is also dominated byp,{\displaystyle p,}meaning that|F(x)|≤p(x){\displaystyle |F(x)|\leq p(x)}for everyx∈X.{\displaystyle x\in X.}The fact that‖f‖{\displaystyle \|f\|}is a real number such that|F(x)|≤‖f‖‖x‖{\displaystyle |F(x)|\leq \|f\|\|x\|}for everyx∈X,{\displaystyle x\in X,}guarantees‖F‖≤‖f‖.{\displaystyle \|F\|\leq \|f\|.}Since‖F‖=‖f‖{\displaystyle \|F\|=\|f\|}is finite, the linear functionalF{\displaystyle F}is bounded and thus continuous. Thecontinuous extension theoremmight fail if thetopological vector space(TVS)X{\displaystyle X}is notlocally convex. 
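In a Euclidean space the norm-preserving extension can be written down explicitly, which makes the equality ‖F‖ = ‖f‖ easy to verify. The following Python sketch is a hypothetical example special to Hilbert-space geometry (where the extension happens to be unique): f is defined on M = span{u} and extended by the inner product against a suitable multiple of u.

```python
import math

# Hedged illustration in the Euclidean plane: on M = span{u} define
# f(t*u) = t * alpha.  One norm-preserving extension (special to
# Hilbert-space geometry) is F(x) = alpha * <x, u> / <u, u>.
u = (3.0, 4.0)
alpha = 10.0                       # f(u) = alpha, so f(t u) = t * alpha
norm_u = math.hypot(*u)            # ||u|| = 5

def F(x):
    return alpha * (x[0] * u[0] + x[1] * u[1]) / (norm_u ** 2)

# F extends f: F(t u) = t * alpha
for t in (-2.0, 0.5, 3.0):
    assert abs(F((t * u[0], t * u[1])) - t * alpha) < 1e-9

# ||f|| = |alpha| / ||u|| = 2, and ||F|| is the Euclidean norm of the
# representing vector alpha * u / ||u||^2, which is also 2.
norm_f = abs(alpha) / norm_u
norm_F = math.hypot(alpha * u[0] / norm_u ** 2, alpha * u[1] / norm_u ** 2)
assert abs(norm_f - norm_F) < 1e-9
```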
For example, for0<p<1,{\displaystyle 0<p<1,}theLebesgue spaceLp([0,1]){\displaystyle L^{p}([0,1])}is acompletemetrizable TVS(anF-space) that isnotlocally convex (in fact, its only convex open subsets areLp([0,1]){\displaystyle L^{p}([0,1])}itself and the empty set) and the only continuous linear functional onLp([0,1]){\displaystyle L^{p}([0,1])}is the constant0{\displaystyle 0}function (Rudin 1991, §1.47). SinceLp([0,1]){\displaystyle L^{p}([0,1])}is Hausdorff, every finite-dimensional vector subspaceM⊆Lp([0,1]){\displaystyle M\subseteq L^{p}([0,1])}islinearly homeomorphictoEuclidean spaceRdim⁡M{\displaystyle \mathbb {R} ^{\dim M}}orCdim⁡M{\displaystyle \mathbb {C} ^{\dim M}}(byF. Riesz's theorem) and so every non-zero linear functionalf{\displaystyle f}onM{\displaystyle M}is continuous but none has a continuous linear extension to all ofLp([0,1]).{\displaystyle L^{p}([0,1]).}However, it is possible for a TVSX{\displaystyle X}to not be locally convex but nevertheless have enough continuous linear functionals that itscontinuous dual spaceX∗{\displaystyle X^{*}}separates points; for such a TVS, a continuous linear functional defined on a vector subspacemighthave a continuous linear extension to the whole space. If theTVSX{\displaystyle X}is notlocally convexthen there might not exist any continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }defined onX{\displaystyle X}(not just onM{\displaystyle M}) that dominatesf,{\displaystyle f,}in which case the Hahn–Banach theorem cannot be applied as it was inthe above proofof the continuous extension theorem.
However, the proof's argument can be generalized to give a characterization of when a continuous linear functional has a continuous linear extension: IfX{\displaystyle X}is any TVS (not necessarily locally convex), then a continuous linear functionalf{\displaystyle f}defined on a vector subspaceM{\displaystyle M}has a continuous linear extensionF{\displaystyle F}to all ofX{\displaystyle X}if and only if there exists some continuous seminormp{\displaystyle p}onX{\displaystyle X}thatdominatesf.{\displaystyle f.}Specifically, if given a continuous linear extensionF{\displaystyle F}thenp:=|F|{\displaystyle p:=|F|}is a continuous seminorm onX{\displaystyle X}that dominatesf;{\displaystyle f;}and conversely, if given a continuous seminormp:X→R{\displaystyle p:X\to \mathbb {R} }onX{\displaystyle X}that dominatesf{\displaystyle f}then any dominated linear extension off{\displaystyle f}toX{\displaystyle X}(the existence of which is guaranteed by the Hahn–Banach theorem) will be a continuous linear extension. The key element of the Hahn–Banach theorem is fundamentally a result about the separation of two convex sets:{−p(−x−n)−f(n):n∈M},{\displaystyle \{-p(-x-n)-f(n):n\in M\},}and{p(m+x)−f(m):m∈M}.{\displaystyle \{p(m+x)-f(m):m\in M\}.}This sort of argument appears widely inconvex geometry,[18]optimization theory, andeconomics. Lemmas to this end derived from the original Hahn–Banach theorem are known as theHahn–Banach separation theorems.[19][20]They are generalizations of thehyperplane separation theorem, which states that two disjoint nonempty convex subsets of a finite-dimensional spaceRn{\displaystyle \mathbb {R} ^{n}}can be separated by someaffine hyperplane, which is afiber(level set) of the formf−1(s)={x:f(x)=s}{\displaystyle f^{-1}(s)=\{x:f(x)=s\}}wheref≠0{\displaystyle f\neq 0}is a non-zero linear functional ands{\displaystyle s}is a scalar. 
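The separation of convex sets asserted by these theorems can be seen numerically. In the following Python sketch (hypothetical data: two disjoint closed disks in ℝ²), the linear functional f(x) = ⟨x, d⟩, with d the vector joining the centers, satisfies sup f(A) ≤ inf f(B) on random samples from the two disks.

```python
import math
import random

# Two disjoint closed disks in R^2 (hypothetical data): the functional
# f(x) = <x, d> with d = cb - ca separates them.
ca, ra = (0.0, 0.0), 1.0
cb, rb = (4.0, 3.0), 2.0
d = (cb[0] - ca[0], cb[1] - ca[1])
assert math.hypot(*d) > ra + rb          # the disks are disjoint

def f(x):
    return x[0] * d[0] + x[1] * d[1]

random.seed(2)
def sample(center, radius):
    # Uniform random point of the closed disk of the given center/radius.
    t, r = random.uniform(0, 2 * math.pi), radius * math.sqrt(random.random())
    return (center[0] + r * math.cos(t), center[1] + r * math.sin(t))

sup_A = max(f(sample(ca, ra)) for _ in range(2000))
inf_B = min(f(sample(cb, rb)) for _ in range(2000))
assert sup_A <= inf_B                    # sup f(A) <= inf f(B)
```

Here the exact values are sup f(A) = 5 and inf f(B) = 15, so the separation is strict.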
Theorem[19]—LetA{\displaystyle A}andB{\displaystyle B}be non-empty convex subsets of a reallocally convex topological vector spaceX.{\displaystyle X.}IfInt⁡A≠∅{\displaystyle \operatorname {Int} A\neq \varnothing }andB∩Int⁡A=∅{\displaystyle B\cap \operatorname {Int} A=\varnothing }then there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such thatsupf(A)≤inff(B){\displaystyle \sup f(A)\leq \inf f(B)}andf(a)<inff(B){\displaystyle f(a)<\inf f(B)}for alla∈Int⁡A{\displaystyle a\in \operatorname {Int} A}(such anf{\displaystyle f}is necessarily non-zero). When the convex sets have additional properties, such as beingopenorcompactfor example, then the conclusion can be substantially strengthened: Theorem[3][21]—LetA{\displaystyle A}andB{\displaystyle B}be convex non-empty disjoint subsets of a realtopological vector spaceX.{\displaystyle X.} IfX{\displaystyle X}is complex (rather than real) then the same claims hold, but for thereal partoff.{\displaystyle f.} The following important corollary is known as theGeometric Hahn–Banach theoremorMazur's theorem(also known asAscoli–Mazur theorem[22]). It follows from the first bullet above and the convexity ofM.{\displaystyle M.} Theorem (Mazur)[23]—LetM{\displaystyle M}be a vector subspace of the topological vector spaceX{\displaystyle X}and supposeK{\displaystyle K}is a non-empty convex open subset ofX{\displaystyle X}withK∩M=∅.{\displaystyle K\cap M=\varnothing .}Then there is a closedhyperplane(codimension-1 vector subspace)N⊆X{\displaystyle N\subseteq X}that containsM,{\displaystyle M,}but remains disjoint fromK.{\displaystyle K.} Mazur's theorem clarifies that vector subspaces (even those that are not closed) can be characterized by linear functionals.
Corollary[24](Separation of a subspace and an open convex set)—LetM{\displaystyle M}be a vector subspace of alocally convex topological vector spaceX,{\displaystyle X,}andU{\displaystyle U}be a non-empty open convex subset disjoint fromM.{\displaystyle M.}Then there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such thatf(m)=0{\displaystyle f(m)=0}for allm∈M{\displaystyle m\in M}andRe⁡f>0{\displaystyle \operatorname {Re} f>0}onU.{\displaystyle U.} Since points are triviallyconvex, geometric Hahn–Banach implies that functionals can detect theboundaryof a set. In particular, letX{\displaystyle X}be a real topological vector space andA⊆X{\displaystyle A\subseteq X}be convex withInt⁡A≠∅.{\displaystyle \operatorname {Int} A\neq \varnothing .}Ifa0∈A∖Int⁡A{\displaystyle a_{0}\in A\setminus \operatorname {Int} A}then there is a functional vanishing ata0,{\displaystyle a_{0},}but supported on the interior ofA.{\displaystyle A.}[19] Call a normed spaceX{\displaystyle X}smoothif at each pointx{\displaystyle x}in its unit ball there exists a unique closed hyperplane supporting the unit ball atx.{\displaystyle x.}Köthe showed in 1983 that a normed space is smooth at a pointx{\displaystyle x}if and only if the norm isGateaux differentiableat that point.[3] LetU{\displaystyle U}be a convexbalancedneighborhood of the origin in alocally convextopological vector spaceX{\displaystyle X}and supposex∈X{\displaystyle x\in X}is not an element ofU.{\displaystyle U.}Then there exists a continuous linear functionalf{\displaystyle f}onX{\displaystyle X}such that[3]sup|f(U)|≤|f(x)|.{\displaystyle \sup |f(U)|\leq |f(x)|.} The Hahn–Banach theorem is the first sign of an important philosophy infunctional analysis: to understand a space, one should understand itscontinuous functionals.
For example, linear subspaces are characterized by functionals: ifXis a normed vector space with linear subspaceM(not necessarily closed) and ifz{\displaystyle z}is an element ofXnot in theclosureofM, then there exists a continuous linear mapf:X→K{\displaystyle f:X\to \mathbf {K} }withf(m)=0{\displaystyle f(m)=0}for allm∈M,{\displaystyle m\in M,}f(z)=1,{\displaystyle f(z)=1,}and‖f‖=dist⁡(z,M)−1.{\displaystyle \|f\|=\operatorname {dist} (z,M)^{-1}.}(To see this, note thatdist⁡(⋅,M){\displaystyle \operatorname {dist} (\cdot ,M)}is a sublinear function.) Moreover, ifz{\displaystyle z}is an element ofX, then there exists a continuous linear mapf:X→K{\displaystyle f:X\to \mathbf {K} }such thatf(z)=‖z‖{\displaystyle f(z)=\|z\|}and‖f‖≤1.{\displaystyle \|f\|\leq 1.}This implies that thenatural injectionJ{\displaystyle J}from a normed spaceXinto itsdouble dualX∗∗{\displaystyle X^{**}}is isometric. That last result also suggests that the Hahn–Banach theorem can often be used to locate a "nicer" topology in which to work. For example, many results in functional analysis assume that a space isHausdorfforlocally convex. However, supposeXis a topological vector space, not necessarily Hausdorff orlocally convex, but with a nonempty, proper, convex, open setM. Then geometric Hahn–Banach implies that there is a hyperplane separatingMfrom any other point. In particular, there must exist a nonzero functional onX— that is, thecontinuous dual spaceX∗{\displaystyle X^{*}}is non-trivial.[3][25]ConsideringXwith theweak topologyinduced byX∗,{\displaystyle X^{*},}Xbecomes locally convex; by the second bullet of geometric Hahn–Banach, the weak topology on this new spaceseparates points. ThusXwith this weak topology becomesHausdorff. This sometimes allows some results from locally convex topological vector spaces to be applied to non-Hausdorff and non-locally convex spaces. The Hahn–Banach theorem is often useful when one wishes to apply the method ofa priori estimates.
Suppose that we wish to solve the linear differential equationPu=f{\displaystyle Pu=f}foru,{\displaystyle u,}withf{\displaystyle f}given in some Banach spaceX. If we have control on the size ofu{\displaystyle u}in terms of‖f‖X{\displaystyle \|f\|_{X}}and we can think ofu{\displaystyle u}as a bounded linear functional on some suitable space of test functionsg,{\displaystyle g,}then we can viewf{\displaystyle f}as a linear functional by adjunction:(f,g)=(u,P∗g).{\displaystyle (f,g)=(u,P^{*}g).}At first, this functional is only defined on the image ofP,{\displaystyle P,}but using the Hahn–Banach theorem, we can try to extend it to the entire codomainX. The resulting functional is often defined to be aweak solution to the equation. Theorem[26]—A real Banach space isreflexiveif and only if every pair of non-empty disjoint closed convex subsets, one of which is bounded, can be strictly separated by a hyperplane. To illustrate an actual application of the Hahn–Banach theorem, we will now prove a result that follows almost entirely from the Hahn–Banach theorem. Proposition—SupposeX{\displaystyle X}is a Hausdorff locally convex TVS over the fieldK{\displaystyle \mathbf {K} }andY{\displaystyle Y}is a vector subspace ofX{\displaystyle X}that isTVS–isomorphictoKI{\displaystyle \mathbf {K} ^{I}}for some setI.{\displaystyle I.}ThenY{\displaystyle Y}is a closed andcomplementedvector subspace ofX.{\displaystyle X.} SinceKI{\displaystyle \mathbf {K} ^{I}}is a complete TVS so isY,{\displaystyle Y,}and since any complete subset of a Hausdorff TVS is closed,Y{\displaystyle Y}is a closed subset ofX.{\displaystyle X.}Letf=(fi)i∈I:Y→KI{\displaystyle f=\left(f_{i}\right)_{i\in I}:Y\to \mathbf {K} ^{I}}be a TVS isomorphism, so that eachfi:Y→K{\displaystyle f_{i}:Y\to \mathbf {K} }is a continuous surjective linear functional. 
By the Hahn–Banach theorem, we may extend eachfi{\displaystyle f_{i}}to a continuous linear functionalFi:X→K{\displaystyle F_{i}:X\to \mathbf {K} }onX.{\displaystyle X.}LetF:=(Fi)i∈I:X→KI{\displaystyle F:=\left(F_{i}\right)_{i\in I}:X\to \mathbf {K} ^{I}}soF{\displaystyle F}is a continuous linear surjection such that its restriction toY{\displaystyle Y}isF|Y=(Fi|Y)i∈I=(fi)i∈I=f.{\displaystyle F{\big \vert }_{Y}=\left(F_{i}{\big \vert }_{Y}\right)_{i\in I}=\left(f_{i}\right)_{i\in I}=f.}LetP:=f−1∘F:X→Y,{\displaystyle P:=f^{-1}\circ F:X\to Y,}which is a continuous linear map whose restriction toY{\displaystyle Y}isP|Y=f−1∘F|Y=f−1∘f=1Y,{\displaystyle P{\big \vert }_{Y}=f^{-1}\circ F{\big \vert }_{Y}=f^{-1}\circ f=\mathbf {1} _{Y},}where1Y{\displaystyle \mathbb {1} _{Y}}denotes theidentity maponY.{\displaystyle Y.}This shows thatP{\displaystyle P}is a continuouslinear projectionontoY{\displaystyle Y}(that is,P∘P=P{\displaystyle P\circ P=P}). ThusY{\displaystyle Y}is complemented inX{\displaystyle X}andX=Y⊕ker⁡P{\displaystyle X=Y\oplus \ker P}in the category of TVSs.◼{\displaystyle \blacksquare } The above result may be used to show that every closed vector subspace ofRN{\displaystyle \mathbb {R} ^{\mathbb {N} }}is complemented because any such space is either finite dimensional or else TVS–isomorphic toRN.{\displaystyle \mathbb {R} ^{\mathbb {N} }.} General template There are now many other versions of the Hahn–Banach theorem. 
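The projection constructed in the preceding proof can be sanity-checked in finite dimensions. The sketch below is our own illustration (the subspace Y = span{v} in R^3 and all numbers are arbitrary choices, not from the article): it builds an extension F of the identification f of Y with R, forms P = f^{-1} ∘ F, and verifies that P is an idempotent continuous linear projection restricting to the identity on Y.

```python
# Finite-dimensional illustration of the projection built in the proof:
# Y = span{v} in R^3, f(t*v) = t identifies Y with R, and F extends f.
# (The vector v and the test points are our own illustrative choices.)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = (1.0, 2.0, 2.0)                      # Y = span{v}
w = tuple(c / dot(v, v) for c in v)      # F(x) = <w, x> extends f, F(v) = 1

def P(x):
    """Projection of R^3 onto Y: P = f^{-1} o F, i.e. P(x) = F(x) * v."""
    t = dot(w, x)
    return tuple(t * c for c in v)

x = (3.0, -1.0, 4.0)
Px = P(x)

# P is idempotent (P o P = P) and restricts to the identity on Y.
assert all(abs(a - b) < 1e-9 for a, b in zip(P(Px), Px))
assert all(abs(a - b) < 1e-9 for a, b in zip(P(v), v))
```

In the article's notation this realizes X = Y ⊕ ker P for this toy example; the general TVS argument replaces the inner product by Hahn–Banach extensions of the coordinate functionals.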
The general template for the various versions of the Hahn–Banach theorem presented in this article is as follows: Theorem[3]—If D{\displaystyle D} is an absorbing disk in a real or complex vector space X{\displaystyle X} and if f{\displaystyle f} is a linear functional defined on a vector subspace M{\displaystyle M} of X{\displaystyle X} such that |f|≤1{\displaystyle |f|\leq 1} on M∩D,{\displaystyle M\cap D,} then there exists a linear functional F{\displaystyle F} on X{\displaystyle X} extending f{\displaystyle f} such that |F|≤1{\displaystyle |F|\leq 1} on D.{\displaystyle D.} Hahn–Banach theorem for seminorms[27][28]—If p:M→R{\displaystyle p:M\to \mathbb {R} } is a seminorm defined on a vector subspace M{\displaystyle M} of X,{\displaystyle X,} and if q:X→R{\displaystyle q:X\to \mathbb {R} } is a seminorm on X{\displaystyle X} such that p≤q|M,{\displaystyle p\leq q{\big \vert }_{M},} then there exists a seminorm P:X→R{\displaystyle P:X\to \mathbb {R} } on X{\displaystyle X} such that P|M=p{\displaystyle P{\big \vert }_{M}=p} on M{\displaystyle M} and P≤q{\displaystyle P\leq q} on X.{\displaystyle X.} Let S{\displaystyle S} be the convex hull of {m∈M:p(m)≤1}∪{x∈X:q(x)≤1}.{\displaystyle \{m\in M:p(m)\leq 1\}\cup \{x\in X:q(x)\leq 1\}.} Because S{\displaystyle S} is an absorbing disk in X,{\displaystyle X,} its Minkowski functional P{\displaystyle P} is a seminorm. Then p=P{\displaystyle p=P} on M{\displaystyle M} and P≤q{\displaystyle P\leq q} on X.{\displaystyle X.} So for example, suppose that f{\displaystyle f} is a bounded linear functional defined on a vector subspace M{\displaystyle M} of a normed space X,{\displaystyle X,} so its operator norm ‖f‖{\displaystyle \|f\|} is a non-negative real number.
Then the linear functional'sabsolute valuep:=|f|{\displaystyle p:=|f|}is a seminorm onM{\displaystyle M}and the mapq:X→R{\displaystyle q:X\to \mathbb {R} }defined byq(x)=‖f‖‖x‖{\displaystyle q(x)=\|f\|\,\|x\|}is a seminorm onX{\displaystyle X}that satisfiesp≤q|M{\displaystyle p\leq q{\big \vert }_{M}}onM.{\displaystyle M.}TheHahn–Banach theorem for seminormsguarantees the existence of a seminormP:X→R{\displaystyle P:X\to \mathbb {R} }that is equal to|f|{\displaystyle |f|}onM{\displaystyle M}(sinceP|M=p=|f|{\displaystyle P{\big \vert }_{M}=p=|f|}) and is bounded above byP(x)≤‖f‖‖x‖{\displaystyle P(x)\leq \|f\|\,\|x\|}everywhere onX{\displaystyle X}(sinceP≤q{\displaystyle P\leq q}). Hahn–Banach sandwich theorem[3]—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be a sublinear function on a real vector spaceX,{\displaystyle X,}letS⊆X{\displaystyle S\subseteq X}be any subset ofX,{\displaystyle X,}and letf:S→R{\displaystyle f:S\to \mathbb {R} }beanymap. If there exist positive real numbersa{\displaystyle a}andb{\displaystyle b}such that0≥infs∈S[p(s−ax−by)−f(s)−af(x)−bf(y)]for allx,y∈S,{\displaystyle 0\geq \inf _{s\in S}[p(s-ax-by)-f(s)-af(x)-bf(y)]\qquad {\text{ for all }}x,y\in S,}then there exists a linear functionalF:X→R{\displaystyle F:X\to \mathbb {R} }onX{\displaystyle X}such thatF≤p{\displaystyle F\leq p}onX{\displaystyle X}andf≤F≤p{\displaystyle f\leq F\leq p}onS.{\displaystyle S.} Theorem[3](Andenaes, 1970)—Letp:X→R{\displaystyle p:X\to \mathbb {R} }be a sublinear function on a real vector spaceX,{\displaystyle X,}letf:M→R{\displaystyle f:M\to \mathbb {R} }be a linear functional on a vector subspaceM{\displaystyle M}ofX{\displaystyle X}such thatf≤p{\displaystyle f\leq p}onM,{\displaystyle M,}and letS⊆X{\displaystyle S\subseteq X}be any subset ofX.{\displaystyle X.}Then there exists a linear functionalF:X→R{\displaystyle F:X\to \mathbb {R} }onX{\displaystyle X}that extendsf,{\displaystyle f,}satisfiesF≤p{\displaystyle F\leq p}onX,{\displaystyle X,}and is (pointwise) 
maximal on S{\displaystyle S} in the following sense: if F^:X→R{\displaystyle {\widehat {F}}:X\to \mathbb {R} } is a linear functional on X{\displaystyle X} that extends f{\displaystyle f} and satisfies F^≤p{\displaystyle {\widehat {F}}\leq p} on X,{\displaystyle X,} then F≤F^{\displaystyle F\leq {\widehat {F}}} on S{\displaystyle S} implies F=F^{\displaystyle F={\widehat {F}}} on S.{\displaystyle S.} If S={s}{\displaystyle S=\{s\}} is a singleton set (where s∈X{\displaystyle s\in X} is some vector) and if F:X→R{\displaystyle F:X\to \mathbb {R} } is such a maximal dominated linear extension of f:M→R,{\displaystyle f:M\to \mathbb {R} ,} then F(s)=infm∈M[f(m)+p(s−m)].{\displaystyle F(s)=\inf _{m\in M}[f(m)+p(s-m)].}[3] Vector–valued Hahn–Banach theorem[3]—If X{\displaystyle X} and Y{\displaystyle Y} are vector spaces over the same field and if f:M→Y{\displaystyle f:M\to Y} is a linear map defined on a vector subspace M{\displaystyle M} of X,{\displaystyle X,} then there exists a linear map F:X→Y{\displaystyle F:X\to Y} that extends f.{\displaystyle f.} A set Γ{\displaystyle \Gamma } of maps X→X{\displaystyle X\to X} is commutative (with respect to function composition ∘{\displaystyle \,\circ \,}) if F∘G=G∘F{\displaystyle F\circ G=G\circ F} for all F,G∈Γ.{\displaystyle F,G\in \Gamma .} Say that a function f{\displaystyle f} defined on a subset M{\displaystyle M} of X{\displaystyle X} is Γ{\displaystyle \Gamma }-invariant if L(M)⊆M{\displaystyle L(M)\subseteq M} and f∘L=f{\displaystyle f\circ L=f} on M{\displaystyle M} for every L∈Γ.{\displaystyle L\in \Gamma .} An invariant Hahn–Banach theorem[29]—Suppose Γ{\displaystyle \Gamma } is a commutative set of continuous linear maps from a normed space X{\displaystyle X} into itself and let f{\displaystyle f} be a continuous linear functional defined on some vector subspace M{\displaystyle M} of X{\displaystyle X} that is Γ{\displaystyle \Gamma }-invariant, which means that L(M)⊆M{\displaystyle L(M)\subseteq M} and f∘L=f{\displaystyle f\circ L=f} on M{\displaystyle M} for every L∈Γ.{\displaystyle L\in \Gamma
.}Then f{\displaystyle f} has a continuous linear extension F{\displaystyle F} to all of X{\displaystyle X} that has the same operator norm ‖f‖=‖F‖{\displaystyle \|f\|=\|F\|} and is also Γ{\displaystyle \Gamma }-invariant, meaning that F∘L=F{\displaystyle F\circ L=F} on X{\displaystyle X} for every L∈Γ.{\displaystyle L\in \Gamma .} The following theorem of Mazur–Orlicz (1953) is equivalent to the Hahn–Banach theorem. Mazur–Orlicz theorem[3]—Let p:X→R{\displaystyle p:X\to \mathbb {R} } be a sublinear function on a real or complex vector space X,{\displaystyle X,} let T{\displaystyle T} be any set, and let R:T→R{\displaystyle R:T\to \mathbb {R} } and v:T→X{\displaystyle v:T\to X} be any maps. The following statements are equivalent: The following theorem characterizes when any scalar function on X{\displaystyle X} (not necessarily linear) has a continuous linear extension to all of X.{\displaystyle X.} Theorem (The extension principle[30])—Let f{\displaystyle f} be a scalar-valued function on a subset S{\displaystyle S} of a topological vector space X.{\displaystyle X.} Then there exists a continuous linear functional F{\displaystyle F} on X{\displaystyle X} extending f{\displaystyle f} if and only if there exists a continuous seminorm p{\displaystyle p} on X{\displaystyle X} such that |∑i=1naif(si)|≤p(∑i=1naisi){\displaystyle \left|\sum _{i=1}^{n}a_{i}f(s_{i})\right|\leq p\left(\sum _{i=1}^{n}a_{i}s_{i}\right)} for all positive integers n{\displaystyle n} and all finite sequences a1,…,an{\displaystyle a_{1},\ldots ,a_{n}} of scalars and elements s1,…,sn{\displaystyle s_{1},\ldots ,s_{n}} of S.{\displaystyle S.} Let X be a topological vector space. A vector subspace M of X has the extension property if any continuous linear functional on M can be extended to a continuous linear functional on X, and we say that X has the Hahn–Banach extension property (HBEP) if every vector subspace of X has the extension property.[31] The Hahn–Banach theorem guarantees that every Hausdorff locally convex space has the HBEP.
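The norm-preserving conclusion ‖f‖ = ‖F‖ of the invariant theorem can be sanity-checked in a Euclidean space, where an extension with the same norm is available explicitly. The sketch below is our own finite-dimensional illustration (the subspace M = span{v} and all numbers are arbitrary choices): a functional on M written via a vector u ∈ M extends by the same inner-product formula to all of R^3 without increasing its operator norm.

```python
import math
import random

# Norm-preserving extension in (R^3, Euclidean norm), as a finite-dimensional
# sanity check: M = span{v} with f(m) = <u, m> on M, so ||f|| = ||u||.
# F(x) = <u, x> extends f to all of R^3 with the same norm (Cauchy-Schwarz).
# (v, u and the sampling below are our own illustrative choices.)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

v = (0.6, 0.8, 0.0)            # unit vector spanning M
u = tuple(2.0 * c for c in v)  # Riesz vector of f, so ||f|| = ||u|| = 2

def F(x):
    return dot(u, x)

# F agrees with f on M, satisfies |F(x)| <= 2 * ||x|| everywhere,
# and attains the bound along v, so ||F|| = ||f|| = 2.
assert abs(F(v) - 2.0) < 1e-12
random.seed(0)
for _ in range(1000):
    x = tuple(random.uniform(-1, 1) for _ in range(3))
    assert abs(F(x)) <= 2.0 * norm(x) + 1e-12
```

In infinite dimensions no inner product is available in general; the Hahn–Banach theorem is exactly what replaces this explicit construction.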
For complete metrizable topological vector spaces there is a converse, due to Kalton: every complete metrizable TVS with the Hahn–Banach extension property is locally convex.[31] On the other hand, a vector space X of uncountable dimension, endowed with the finest vector topology, is a topological vector space with the Hahn–Banach extension property that is neither locally convex nor metrizable.[31] A vector subspace M of a TVS X has the separation property if for every element x of X such that x∉M,{\displaystyle x\not \in M,} there exists a continuous linear functional f{\displaystyle f} on X such that f(x)≠0{\displaystyle f(x)\neq 0} and f(m)=0{\displaystyle f(m)=0} for all m∈M.{\displaystyle m\in M.} Clearly, the continuous dual space of a TVS X separates points on X if and only if {0}{\displaystyle \{0\}} has the separation property. In 1992, Kakol proved that for any infinite-dimensional vector space X, there exist TVS topologies on X that do not have the HBEP despite having enough continuous linear functionals for the continuous dual space to separate points on X. However, if X is a TVS then every vector subspace of X has the extension property if and only if every vector subspace of X has the separation property.[31] The proof of the Hahn–Banach theorem for real vector spaces (HB) commonly uses Zorn's lemma, which in the axiomatic framework of Zermelo–Fraenkel set theory (ZF) is equivalent to the axiom of choice (AC). It was discovered by Łoś and Ryll-Nardzewski[12] and independently by Luxemburg[11] that HB can be proved using the ultrafilter lemma (UL), which is equivalent (under ZF) to the Boolean prime ideal theorem (BPI). BPI is strictly weaker than the axiom of choice and it was later shown that HB is strictly weaker than BPI.[32] The ultrafilter lemma is equivalent (under ZF) to the Banach–Alaoglu theorem,[33] which is another foundational theorem in functional analysis.
Although the Banach–Alaoglu theorem implies HB,[34] it is not equivalent to it (said differently, the Banach–Alaoglu theorem is strictly stronger than HB). However, HB is equivalent to a certain weakened version of the Banach–Alaoglu theorem for normed spaces.[35] The Hahn–Banach theorem is also equivalent to the following statement:[36] (BPI is equivalent to the statement that there are always non-constant probability charges which take only the values 0 and 1.) In ZF, the Hahn–Banach theorem suffices to derive the existence of a non-Lebesgue measurable set.[37] Moreover, the Hahn–Banach theorem implies the Banach–Tarski paradox.[38] For separable Banach spaces, D. K. Brown and S. G. Simpson proved that the Hahn–Banach theorem follows from WKL0, a weak subsystem of second-order arithmetic that takes a form of Kőnig's lemma restricted to binary trees as an axiom. In fact, they prove that under a weak set of assumptions, the two are equivalent, an example of reverse mathematics.[39][40]
https://en.wikipedia.org/wiki/Hahn%E2%80%93Banach_theorem
In mathematics, the Hermite–Hadamard inequality, named after Charles Hermite and Jacques Hadamard and sometimes also called Hadamard's inequality, states that if a function f : [a, b] → R is convex, then the following chain of inequalities holds: f(a+b2)≤1b−a∫abf(x)dx≤f(a)+f(b)2.{\displaystyle f\left({\frac {a+b}{2}}\right)\leq {\frac {1}{b-a}}\int _{a}^{b}f(x)\,dx\leq {\frac {f(a)+f(b)}{2}}.} The inequality has been generalized to higher dimensions: if Ω⊂Rn{\displaystyle \Omega \subset \mathbb {R} ^{n}} is a bounded, convex domain and f:Ω→R{\displaystyle f:\Omega \rightarrow \mathbb {R} } is a positive convex function, then a corresponding inequality holds, where cn{\displaystyle c_{n}} is a constant depending only on the dimension.
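The one-dimensional chain f((a+b)/2) ≤ (1/(b−a))∫f ≤ (f(a)+f(b))/2 is easy to check numerically; the sketch below uses the convex function f(x) = e^x on [0, 1] (an arbitrary illustrative choice) with a midpoint-rule approximation of the integral.

```python
import math

# Numerical check of the Hermite-Hadamard chain
#   f((a+b)/2) <= (1/(b-a)) * integral_a^b f <= (f(a) + f(b)) / 2
# for the convex function f(x) = exp(x) on [0, 1] (illustrative choice).

f = math.exp
a, b = 0.0, 1.0

n = 100_000
h = (b - a) / n
avg = sum(f(a + (i + 0.5) * h) for i in range(n)) * h / (b - a)  # mean value of f

left = f((a + b) / 2)        # midpoint value, about 1.6487
right = (f(a) + f(b)) / 2    # endpoint average, about 1.8591

assert left <= avg <= right  # here avg is close to e - 1, about 1.7183
```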
https://en.wikipedia.org/wiki/Hermite%E2%80%93Hadamard_inequality
In vector calculus, an invex function is a differentiable function f{\displaystyle f} from Rn{\displaystyle \mathbb {R} ^{n}} to R{\displaystyle \mathbb {R} } for which there exists a vector-valued function η{\displaystyle \eta } such that f(x)−f(u)≥η(x,u)⋅∇f(u){\displaystyle f(x)-f(u)\geq \eta (x,u)\cdot \nabla f(u)} for all x and u. Invex functions were introduced by Hanson as a generalization of convex functions.[1] Ben-Israel and Mond provided a simple proof that a function is invex if and only if every stationary point is a global minimum, a theorem first stated by Craven and Glover.[2][3] Hanson also showed that if the objective and the constraints of an optimization problem are invex with respect to the same function η(x,u){\displaystyle \eta (x,u)}, then the Karush–Kuhn–Tucker conditions are sufficient for a global minimum. A slight generalization of invex functions called Type I invex functions are the most general class of functions for which the Karush–Kuhn–Tucker conditions are necessary and sufficient for a global minimum.[4] Consider a mathematical program of the form minf(x)s.t.g(x)≤0{\displaystyle {\begin{array}{rl}\min &f(x)\\{\text{s.t.}}&g(x)\leq 0\end{array}}} where f:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } and g:Rn→Rm{\displaystyle g:\mathbb {R} ^{n}\to \mathbb {R} ^{m}} are differentiable functions. Let F={x∈Rn|g(x)≤0}{\displaystyle F=\{x\in \mathbb {R} ^{n}\;|\;g(x)\leq 0\}} denote the feasible region of this program. The function f{\displaystyle f} is a Type I objective function and the function g{\displaystyle g} is a Type I constraint function at x0{\displaystyle x_{0}} with respect to η{\displaystyle \eta } if there exists a vector-valued function η{\displaystyle \eta } defined on F{\displaystyle F} such that f(x)−f(x0)≥η(x)⋅∇f(x0){\displaystyle f(x)-f(x_{0})\geq \eta (x)\cdot \nabla {f(x_{0})}} and −g(x0)≥η(x)⋅∇g(x0){\displaystyle -g(x_{0})\geq \eta (x)\cdot \nabla {g(x_{0})}} for all x∈F{\displaystyle x\in {F}}.[5] Note that, unlike invexity, Type I invexity is defined relative to a point x0{\displaystyle x_{0}}.
Theorem (Theorem 2.1 in[4]): If f{\displaystyle f} and g{\displaystyle g} are Type I invex at a point x∗{\displaystyle x^{*}} with respect to η{\displaystyle \eta }, and the Karush–Kuhn–Tucker conditions are satisfied at x∗{\displaystyle x^{*}}, then x∗{\displaystyle x^{*}} is a global minimizer of f{\displaystyle f} over F{\displaystyle F}. Let E{\displaystyle E} from Rn{\displaystyle \mathbb {R} ^{n}} to Rn{\displaystyle \mathbb {R} ^{n}} and f{\displaystyle f} from M{\displaystyle \mathbb {M} } to R{\displaystyle \mathbb {R} } be an E{\displaystyle E}-differentiable function on a nonempty open set M⊂Rn{\displaystyle \mathbb {M} \subset \mathbb {R} ^{n}}. Then f{\displaystyle f} is said to be an E-invex function at u{\displaystyle u} if there exists a vector-valued function η{\displaystyle \eta } such that f(E(x))−f(E(u))≥∇f(E(u))⋅η(E(x),E(u)){\displaystyle f(E(x))-f(E(u))\geq \nabla f(E(u))\cdot \eta (E(x),E(u))} for all x{\displaystyle x} and u{\displaystyle u} in M{\displaystyle \mathbb {M} }. E-invex functions were introduced by Abdulaleem as a generalization of differentiable convex functions.[6] Let E:Rn→Rn{\displaystyle E:\mathbb {R} ^{n}\to \mathbb {R} ^{n}}, and M⊂Rn{\displaystyle M\subset \mathbb {R} ^{n}} be an open E-invex set.
A vector-valued pair(f,g){\displaystyle (f,g)}, wheref{\displaystyle f}andg{\displaystyle g}represent objective and constraint functions respectively, is said to beE-type Iwith respect to a vector-valued functionη:M×M→Rn{\displaystyle \eta :M\times M\to \mathbb {R} ^{n}}, atu∈M{\displaystyle u\in M}, if the following inequalities hold for allx∈FE={x∈Rn|g(E(x))≤0}{\displaystyle x\in F_{E}=\{x\in \mathbb {R} ^{n}\;|\;g(E(x))\leq 0\}}: fi(E(x))−fi(E(u))≥∇fi(E(u))⋅η(E(x),E(u)),{\displaystyle f_{i}(E(x))-f_{i}(E(u))\geq \nabla f_{i}(E(u))\cdot \eta (E(x),E(u)),} −gj(E(u))≥∇gj(E(u))⋅η(E(x),E(u)).{\displaystyle -g_{j}(E(u))\geq \nabla g_{j}(E(u))\cdot \eta (E(x),E(u)).} Iff{\displaystyle f}andg{\displaystyle g}are differentiable functions andE(x)=x{\displaystyle E(x)=x}(E{\displaystyle E}is an identity map), then the definition of E-type I functions[7]reduces to the definition of type I functions introduced by Rueda and Hanson.[8]
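Since every differentiable convex function is invex with the choice η(x, u) = x − u, the defining inequality f(x) − f(u) ≥ η(x, u) · ∇f(u) can be verified directly on a grid. The sketch below uses f(x) = x² in one dimension (an arbitrary illustrative choice, not an example from the article).

```python
# Grid check of the invexity inequality f(x) - f(u) >= eta(x, u) * f'(u)
# for f(x) = x**2 with eta(x, u) = x - u: every differentiable convex
# function is invex with this eta.  Purely illustrative.

def f(x):
    return x * x

def fprime(x):
    return 2 * x

def eta(x, u):
    return x - u

pts = [i / 10 for i in range(-30, 31)]
for x in pts:
    for u in pts:
        # here f(x) - f(u) - (x - u) * f'(u) = (x - u)**2 >= 0
        assert f(x) - f(u) >= eta(x, u) * fprime(u) - 1e-12
```

For a non-convex invex function the inequality still holds, but with a different (generally nonlinear) η; the grid check works the same way once η is specified.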
https://en.wikipedia.org/wiki/Invex_function
Inmathematics,Jensen's inequality, named after the Danish mathematicianJohan Jensen, relates the value of aconvex functionof anintegralto the integral of the convex function. It wasprovedby Jensen in 1906,[1]building on an earlier proof of the same inequality for doubly-differentiable functions byOtto Hölderin 1889.[2]Given its generality, theinequalityappears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation (or equivalently, the opposite inequality for concave transformations).[3] Jensen's inequality generalizes the statement that thesecant lineof a convex function liesabovethegraphof thefunction, which is Jensen's inequality for two points: the secant line consists of weighted means of the convex function (fort∈ [0,1]), while the graph of the function is the convex function of the weighted means, Thus, Jensen's inequality in this case is In the context ofprobability theory, it is generally stated in the following form: ifXis arandom variableandφis a convex function, then The difference between the two sides of the inequality,E⁡[φ(X)]−φ(E⁡[X]){\displaystyle \operatorname {E} \left[\varphi (X)\right]-\varphi \left(\operatorname {E} [X]\right)}, is called theJensen gap.[4] The classical form of Jensen's inequality involves several numbers and weights. The inequality can be stated quite generally using either the language ofmeasure theoryor (equivalently) probability. In the probabilistic setting, the inequality can be further generalized to itsfull strength. 
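In concrete terms, the finite weighted form of the inequality, φ(Σᵢ aᵢxᵢ) ≤ Σᵢ aᵢφ(xᵢ) for a convex φ and nonnegative weights aᵢ summing to one, together with its AM–GM consequence for the concave function log, can be checked numerically. The data below are arbitrary illustrative choices.

```python
import math
import random

# Numerical check of the finite form of Jensen's inequality for the convex
# function phi(x) = exp(x), plus the AM-GM inequality obtained from the
# concave function log with equal weights.  Data are arbitrary.

random.seed(1)
x = [random.uniform(0.1, 5.0) for _ in range(6)]
raw = [random.random() for _ in range(6)]
a = [r / sum(raw) for r in raw]          # nonnegative weights summing to 1

mean = sum(ai * xi for ai, xi in zip(a, x))
assert math.exp(mean) <= sum(ai * math.exp(xi) for ai, xi in zip(a, x)) + 1e-12

# Equal weights and phi = log (concave, so the inequality reverses)
# give the arithmetic-mean / geometric-mean inequality.
am = sum(x) / len(x)
gm = math.prod(x) ** (1 / len(x))
assert gm <= am + 1e-12
```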
For a realconvex functionφ{\displaystyle \varphi }, numbersx1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}}in its domain, and positive weightsai{\displaystyle a_{i}}, Jensen's inequality can be stated as: and the inequality is reversed ifφ{\displaystyle \varphi }isconcave, which is Equality holds if and only ifx1=x2=⋯=xn{\displaystyle x_{1}=x_{2}=\cdots =x_{n}}orφ{\displaystyle \varphi }is linear on a domain containingx1,x2,⋯,xn{\displaystyle x_{1},x_{2},\cdots ,x_{n}}. As a particular case, if the weightsai{\displaystyle a_{i}}are all equal, then (1) and (2) become For instance, the functionlog(x)isconcave, so substitutingφ(x)=log⁡(x){\displaystyle \varphi (x)=\log(x)}in the previous formula (4) establishes the (logarithm of the) familiararithmetic-mean/geometric-mean inequality: log(∑i=1nxin)≥∑i=1nlog(xi)n{\displaystyle \log \!\left({\frac {\sum _{i=1}^{n}x_{i}}{n}}\right)\geq {\frac {\sum _{i=1}^{n}\log \!\left(x_{i}\right)}{n}}}exp(log(∑i=1nxin))≥exp(∑i=1nlog(xi)n){\displaystyle \exp \!\left(\log \!\left({\frac {\sum _{i=1}^{n}x_{i}}{n}}\right)\right)\geq \exp \!\left({\frac {\sum _{i=1}^{n}\log \!\left(x_{i}\right)}{n}}\right)}x1+x2+⋯+xnn≥x1⋅x2⋯xnn{\displaystyle {\frac {x_{1}+x_{2}+\cdots +x_{n}}{n}}\geq {\sqrt[{n}]{x_{1}\cdot x_{2}\cdots x_{n}}}} A common application hasxas a function of another variable (or set of variables)t, that is,xi=g(ti){\displaystyle x_{i}=g(t_{i})}. All of this carries directly over to the general continuous case: the weightsaiare replaced by a non-negative integrable functionf(x), such as a probability distribution, and the summations are replaced by integrals. Let(Ω,A,μ){\displaystyle (\Omega ,A,\mu )}be aprobability space. Letf:Ω→R{\displaystyle f:\Omega \to \mathbb {R} }be aμ{\displaystyle \mu }-measurable function andφ:R→R{\displaystyle \varphi :\mathbb {R} \to \mathbb {R} }be convex. 
Then:[5]φ(∫Ωfdμ)≤∫Ωφ∘fdμ{\displaystyle \varphi \left(\int _{\Omega }f\,\mathrm {d} \mu \right)\leq \int _{\Omega }\varphi \circ f\,\mathrm {d} \mu } In real analysis, we may require an estimate on wherea,b∈R{\displaystyle a,b\in \mathbb {R} }, andf:[a,b]→R{\displaystyle f\colon [a,b]\to \mathbb {R} }is a non-negative Lebesgue-integrablefunction. In this case, the Lebesgue measure of[a,b]{\displaystyle [a,b]}need not be unity. However, by integration by substitution, the interval can be rescaled so that it has measure unity. Then Jensen's inequality can be applied to get[6] The same result can be equivalently stated in aprobability theorysetting, by a simple change of notation. Let(Ω,F,P){\displaystyle (\Omega ,{\mathfrak {F}},\operatorname {P} )}be aprobability space,Xanintegrablereal-valuedrandom variableandφ{\displaystyle \varphi }aconvex function. Then[7]φ(E⁡[X])≤E⁡[φ(X)].{\displaystyle \varphi {\big (}\operatorname {E} [X]{\big )}\leq \operatorname {E} [\varphi (X)].} In this probability setting, the measureμis intended as a probabilityP{\displaystyle \operatorname {P} }, the integral with respect toμas anexpected valueE{\displaystyle \operatorname {E} }, and the functionf{\displaystyle f}as arandom variableX. Note that the equality holds if and only ifφ{\displaystyle \varphi }is a linear function on some convex setA{\displaystyle A}such thatP(X∈A)=1{\displaystyle P(X\in A)=1}(which follows by inspecting the measure-theoretical proof below). More generally, letTbe a realtopological vector space, andXaT-valuedintegrablerandom variable. In this general setting,integrablemeans that there exists an elementE⁡[X]{\displaystyle \operatorname {E} [X]}inT, such that for any elementzin thedual spaceofT:E⁡|⟨z,X⟩|<∞{\displaystyle \operatorname {E} |\langle z,X\rangle |<\infty }, and⟨z,E⁡[X]⟩=E⁡[⟨z,X⟩]{\displaystyle \langle z,\operatorname {E} [X]\rangle =\operatorname {E} [\langle z,X\rangle ]}. 
Then, for any measurable convex functionφand any sub-σ-algebraG{\displaystyle {\mathfrak {G}}}ofF{\displaystyle {\mathfrak {F}}}: HereE⁡[⋅∣G]{\displaystyle \operatorname {E} [\cdot \mid {\mathfrak {G}}]}stands for theexpectation conditionedto the σ-algebraG{\displaystyle {\mathfrak {G}}}. This general statement reduces to the previous ones when the topological vector spaceTis thereal axis, andG{\displaystyle {\mathfrak {G}}}is the trivialσ-algebra{∅, Ω}(where∅is theempty set, andΩis thesample space).[8] LetXbe a one-dimensional random variable with meanμ{\displaystyle \mu }and varianceσ2≥0{\displaystyle \sigma ^{2}\geq 0}. Letφ(x){\displaystyle \varphi (x)}be a twice differentiable function, and define the function Then[9] In particular, whenφ(x){\displaystyle \varphi (x)}is convex, thenφ″(x)≥0{\displaystyle \varphi ''(x)\geq 0}, and the standard form of Jensen's inequality immediately follows for the case whereφ(x){\displaystyle \varphi (x)}is additionally assumed to be twice differentiable. Jensen's inequality can be proved in several ways, and three different proofs corresponding to the different statements above will be offered. Before embarking on these mathematical derivations, however, it is worth analyzing an intuitive graphical argument based on the probabilistic case whereXis a real number (see figure). Assuming a hypothetical distribution ofXvalues, one can immediately identify the position ofE⁡[X]{\displaystyle \operatorname {E} [X]}and its imageφ(E⁡[X]){\displaystyle \varphi (\operatorname {E} [X])}in the graph. Noticing that for convex mappingsY=φ(x)of somexvalues the corresponding distribution ofYvalues is increasingly "stretched up" for increasing values ofX, it is easy to see that the distribution ofYis broader in the interval corresponding toX>X0and narrower inX<X0for anyX0; in particular, this is also true forX0=E⁡[X]{\displaystyle X_{0}=\operatorname {E} [X]}. 
Consequently, in this picture the expectation ofYwill always shift upwards with respect to the position ofφ(E⁡[X]){\displaystyle \varphi (\operatorname {E} [X])}. A similar reasoning holds if the distribution ofXcovers a decreasing portion of the convex function, or both a decreasing and an increasing portion of it. This "proves" the inequality, i.e. with equality whenφ(X)is not strictly convex, e.g. when it is a straight line, or whenXfollows adegenerate distribution(i.e. is a constant). The proofs below formalize this intuitive notion. Ifλ1andλ2are two arbitrary nonnegative real numbers such thatλ1+λ2= 1then convexity ofφimplies This can be generalized: ifλ1, ...,λnare nonnegative real numbers such thatλ1+ ... +λn= 1, then for anyx1, ...,xn. Thefinite formof the Jensen's inequality can be proved byinduction: by convexity hypotheses, the statement is true forn= 2. Suppose the statement is true for somen, so for anyλ1, ...,λnsuch thatλ1+ ... +λn= 1. One needs to prove it forn+ 1. At least one of theλiis strictly smaller than1{\displaystyle 1}, sayλn+1; therefore by convexity inequality: Sinceλ1+ ... +λn+λn+1= 1, applying the inductive hypothesis gives therefore We deduce the inequality is true forn+ 1, by induction it follows that the result is also true for all integerngreater than 2. In order to obtain the general inequality from this finite form, one needs to use a density argument. The finite form can be rewritten as: whereμnis a measure given by an arbitraryconvex combinationofDirac deltas: Since convex functions arecontinuous, and since convex combinations of Dirac deltas areweaklydensein the set of probability measures (as could be easily verified), the general statement is obtained simply by a limiting procedure. Letg{\displaystyle g}be a real-valuedμ{\displaystyle \mu }-integrable function on a probability spaceΩ{\displaystyle \Omega }, and letφ{\displaystyle \varphi }be a convex function on the real numbers. 
Since φ{\displaystyle \varphi } is convex, at each real number x{\displaystyle x} we have a nonempty set of subderivatives, which may be thought of as lines touching the graph of φ{\displaystyle \varphi } at x{\displaystyle x} but lying below the graph of φ{\displaystyle \varphi } at all points (support lines of the graph). Now, if we define because of the existence of subderivatives for convex functions, we may choose a{\displaystyle a} and b{\displaystyle b} such that for all real x{\displaystyle x} and But then we have that for almost all ω∈Ω{\displaystyle \omega \in \Omega }. Since we have a probability measure, the integral is monotone with μ(Ω)=1{\displaystyle \mu (\Omega )=1} so that as desired. Let X be an integrable random variable that takes values in a real topological vector space T. Since φ:T→R{\displaystyle \varphi :T\to \mathbb {R} } is convex, for any x,y∈T{\displaystyle x,y\in T}, the quantity is decreasing as θ approaches 0+. In particular, the subdifferential of φ{\displaystyle \varphi } evaluated at x in the direction y is well-defined by It can be shown that the subdifferential is linear in y (establishing this linearity is not immediate and in fact requires the Hahn–Banach theorem) and, since the infimum taken in the right-hand side of the previous formula is smaller than the value of the same term for θ = 1, one gets In particular, for an arbitrary sub-σ-algebra G{\displaystyle {\mathfrak {G}}} we can evaluate the last inequality when x=E⁡[X∣G],y=X−E⁡[X∣G]{\displaystyle x=\operatorname {E} [X\mid {\mathfrak {G}}],\,y=X-\operatorname {E} [X\mid {\mathfrak {G}}]} to obtain Now, if we take the expectation conditioned to G{\displaystyle {\mathfrak {G}}} on both sides of the previous expression, we get the result since: by the linearity of the subdifferential in the y variable, and the following well-known property of the conditional expectation: Suppose Ω is a measurable subset of the real line and f(x) is a non-negative function such that In probabilistic language, f is a probability
density function. Then Jensen's inequality becomes the following statement about convex integrals: Ifgis any real-valued measurable function andφ{\textstyle \varphi }is convex over the range ofg, then Ifg(x) =x, then this form of the inequality reduces to a commonly used special case: This is applied inVariational Bayesian methods. Ifg(x) =x2n, andXis a random variable, thengis convex as and so In particular, if some even moment2nofXis finite,Xhas a finite mean. An extension of this argument showsXhas finite moments of every orderl∈N{\displaystyle l\in \mathbb {N} }dividingn. LetΩ = {x1, ...xn},and takeμto be thecounting measureonΩ, then the general form reduces to a statement about sums: provided thatλi≥ 0and There is also an infinite discrete form. Jensen's inequality is of particular importance in statistical physics when the convex function is an exponential, giving: where theexpected valuesare with respect to someprobability distributionin therandom variableX. Proof: Letφ(x)=ex{\displaystyle \varphi (x)=e^{x}}inφ(E⁡[X])≤E⁡[φ(X)].{\displaystyle \varphi \left(\operatorname {E} [X]\right)\leq \operatorname {E} \left[\varphi (X)\right].} Ifp(x)is the true probability density forX, andq(x)is another density, then applying Jensen's inequality for the random variableY(X) =q(X)/p(X)and the convex functionφ(y) = −log(y)gives Therefore: a result calledGibbs' inequality. It shows that the average message length is minimised when codes are assigned on the basis of the true probabilitiesprather than any other distributionq. The quantity that is non-negative is called theKullback–Leibler divergenceofqfromp, whereD(p(x)‖q(x))=∫p(x)log⁡(p(x)q(x))dx{\displaystyle D(p(x)\|q(x))=\int p(x)\log \left({\frac {p(x)}{q(x)}}\right)dx}. Since−log(x)is a strictly convex function forx> 0, it follows that equality holds whenp(x)equalsq(x)almost everywhere. 
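Gibbs' inequality for discrete distributions can be sketched numerically: the Kullback–Leibler divergence is nonnegative and vanishes when the two distributions coincide. The two distributions below are arbitrary illustrative choices.

```python
import math

# Gibbs' inequality via Jensen: for discrete densities p and q,
#   D(p || q) = sum_i p_i * log(p_i / q_i) >= 0,  with equality iff p = q.
# The distributions below are arbitrary examples over three outcomes.

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.5, 0.3]

assert kl(p, q) > 0          # distinct distributions: strictly positive
assert abs(kl(p, p)) < 1e-12 # identical distributions: zero divergence
```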
IfLis a convex function andG{\displaystyle {\mathfrak {G}}}a sub-sigma-algebra, then, from the conditional version of Jensen's inequality, we get So if δ(X) is someestimatorof an unobserved parameter θ given a vector of observablesX; and ifT(X) is asufficient statisticfor θ; then an improved estimator, in the sense of having a smaller expected lossL, can be obtained by calculating the expected value of δ with respect to θ, taken over all possible vectors of observationsXcompatible with the same value ofT(X) as that observed. Further, because T is a sufficient statistic,δ1(X){\displaystyle \delta _{1}(X)}does not depend on θ, hence, becomes a statistic. This result is known as theRao–Blackwell theorem. The relation betweenrisk aversionanddeclining marginal utilityfor scalar outcomes can be stated formally with Jensen's inequality: risk aversion can be stated as preferring a certain outcomeu(E[x]){\displaystyle u(E[x])}to a fair gamble with potentially larger but uncertain outcome ofu(x){\displaystyle u(x)}: u(E[x])>E[u(x)]{\displaystyle u(E[x])>E[u(x)]}. But this is simply Jensen's inequality for aconcaveu(x){\displaystyle u(x)}: autility functionthat exhibits declining marginal utility.[11] Beyond its classical formulation for real numbers and convex functions, Jensen’s inequality has been extended to the realm of operator theory. In this non‐commutative setting the inequality is expressed in terms of operator convex functions—that is, functions defined on an interval I that satisfy for every pair of self‐adjoint operators x and y (with spectra in I) and every scalarλ∈[0,1]{\displaystyle \lambda \in [0,1]}. Hansen and Pedersen[12]established a definitive version of this inequality by considering genuine non‐commutative convex combinations. 
In particular, if one has an n‑tuple of bounded self‐adjoint operatorsx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}with spectra in I and an n‑tuple of operatorsa1,…,an{\displaystyle a_{1},\dots ,a_{n}}satisfying then the following operator Jensen inequality holds: This result shows that the convex transformation “respects” non-commutative convex combinations, thereby extending the classical inequality to operators without the need for additional restrictions on the interval of definition.[12]A closely related extension is given by the Jensen trace inequality. For a continuous convex function f defined on I, if one considers self‐adjoint matricesx1,…,xn{\displaystyle x_{1},\dots ,x_{n}}(with spectra in I) and matricesa1,…,an{\displaystyle a_{1},\dots ,a_{n}}satisfying∑i=1nai∗ai=I{\displaystyle \sum _{i=1}^{n}a_{i}^{*}a_{i}=I}, then one has This inequality naturally extends toC*-algebrasequipped with a finite trace and is particularly useful in applications ranging from quantum statistical mechanics to information theory. Furthermore, contractive versions of these operator inequalities are available when one only assumes∑i=1naitai≤I{\displaystyle \sum _{i=1}^{n}a_{i}^{t}a_{i}\leq I}, provided that additional conditions such asf(0)≤0{\displaystyle f(0)\leq 0}(when 0 ∈ I) are imposed. Extensions to continuous fields of operators and to settings involving conditional expectations on C-algebras further illustrate the broad applicability of these generalizations.
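As a minimal numerical sanity check of the Jensen trace inequality, take f(t) = t² and the contraction choice a₁ = a₂ = I/√2 (so that Σᵢ aᵢ*aᵢ = I), for which Σᵢ aᵢ*xᵢaᵢ reduces to (x₁ + x₂)/2. The symmetric 2×2 matrices and the hand-rolled matrix helpers below are our own illustrative choices, not taken from the sources cited above.

```python
# Sanity check of the Jensen trace inequality
#   Tr f(sum_i a_i* x_i a_i) <= Tr sum_i a_i* f(x_i) a_i
# with f(t) = t**2 and a_1 = a_2 = I / sqrt(2), so the left side becomes
# Tr(((x1 + x2)/2)**2) and the right side Tr((x1**2 + x2**2)/2).
# The symmetric matrices below are arbitrary; this illustrates, not proves.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

x1 = [[2.0, 1.0], [1.0, 0.0]]    # symmetric
x2 = [[-1.0, 3.0], [3.0, 5.0]]   # symmetric

# sum_i a_i* x_i a_i = (x1 + x2) / 2 for this choice of a_i
m = [[(x1[i][j] + x2[i][j]) / 2 for j in range(2)] for i in range(2)]

lhs = trace(mat_mul(m, m))                                   # Tr f(m)
rhs = (trace(mat_mul(x1, x1)) + trace(mat_mul(x2, x2))) / 2  # Tr of the mean of squares

assert lhs <= rhs + 1e-12
```

For this pair the gap is Tr((x₁ − x₂)²)/4 ≥ 0, which is the matrix analogue of the variance argument in the scalar case.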
https://en.wikipedia.org/wiki/Jensen%27s_inequality
K-convex functions, first introduced byScarf,[1]are a special weakening of the concept ofconvex functionwhich is crucial in the proof of theoptimalityof the(s,S){\displaystyle (s,S)}policy ininventory control theory. The policy is characterized by two numberssandS,S≥s{\displaystyle S\geq s}, such that when the inventory level falls below levels, an order is issued for a quantity that brings the inventory up to levelS, and nothing is ordered otherwise. Gallego and Sethi[2]have generalized the concept ofK-convexity to higher dimensional Euclidean spaces. Two equivalent definitions are as follows: LetKbe a non-negative real number. A functiong:R→R{\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} }isK-convex if for anyu,z≥0,{\displaystyle u,z\geq 0,}andb>0{\displaystyle b>0}. A functiong:R→R{\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} }isK-convex if for allx≤y,λ∈[0,1]{\displaystyle x\leq y,\lambda \in [0,1]}, whereλ¯=1−λ{\displaystyle {\bar {\lambda }}=1-\lambda }. This definition admits a simple geometric interpretation related to the concept of visibility.[3]Leta≥0{\displaystyle a\geq 0}. A point(x,f(x)){\displaystyle (x,f(x))}is said to be visible from(y,f(y)+a){\displaystyle (y,f(y)+a)}if all intermediate points(λx+λ¯y,f(λx+λ¯y)),0≤λ≤1{\displaystyle (\lambda x+{\bar {\lambda }}y,f(\lambda x+{\bar {\lambda }}y)),0\leq \lambda \leq 1}lie below the line segment joining these two points. Then the geometric characterization ofK-convexity can be obtained as: It is sufficient to prove that the above definitions can be transformed into each other. This can be seen by using the transformation [4] Ifg:R→R{\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} }isK-convex, then it isL-convex for anyL≥K{\displaystyle L\geq K}. In particular, ifg{\displaystyle g}is convex, then it is alsoK-convex for anyK≥0{\displaystyle K\geq 0}.
Ifg1{\displaystyle g_{1}}isK-convex andg2{\displaystyle g_{2}}isL-convex, then forα≥0,β≥0,g=αg1+βg2{\displaystyle \alpha \geq 0,\beta \geq 0,\;g=\alpha g_{1}+\beta g_{2}}is(αK+βL){\displaystyle (\alpha K+\beta L)}-convex. Ifg{\displaystyle g}isK-convex andξ{\displaystyle \xi }is a random variable such thatE|g(x−ξ)|<∞{\displaystyle E|g(x-\xi )|<\infty }for allx{\displaystyle x}, thenEg(x−ξ){\displaystyle Eg(x-\xi )}is alsoK-convex. Ifg:R→R{\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} }isK-convex, the restriction ofg{\displaystyle g}to any convex setD⊂R{\displaystyle D\subset \mathbb {R} }isK-convex. Ifg:R→R{\displaystyle g:\mathbb {R} \rightarrow \mathbb {R} }is a continuousK-convex function andg(y)→∞{\displaystyle g(y)\rightarrow \infty }as|y|→∞{\displaystyle |y|\rightarrow \infty }, then there exist scalarss{\displaystyle s}andS{\displaystyle S}withs≤S{\displaystyle s\leq S}such that
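The second (geometric) definition lends itself to a direct numerical spot check. The sketch below is an added illustration: it assumes the form g(λx + λ̄y) ≤ λg(x) + λ̄(g(y) + K) for x ≤ y, attaching the penalty K to the value at the larger point, and it samples a grid rather than proving anything.

```python
def is_K_convex(g, K, xs, lambdas, tol=1e-9):
    """Sampled check of K-convexity (second definition): for x <= y and
    lam in [0, 1], g(lam*x + (1-lam)*y) <= lam*g(x) + (1-lam)*(g(y) + K)."""
    for x in xs:
        for y in xs:
            if x > y:
                continue
            for lam in lambdas:
                lhs = g(lam * x + (1 - lam) * y)
                rhs = lam * g(x) + (1 - lam) * (g(y) + K)
                if lhs > rhs + tol:
                    return False
    return True

grid = [-2.0, -1.0, -0.5, 0.0, 0.7, 1.3, 2.0]
lams = [0.0, 0.25, 0.5, 0.75, 1.0]

# A convex function is 0-convex (hence K-convex for every K >= 0) ...
convex_case = is_K_convex(lambda x: x * x, 0.0, grid, lams)
# ... while a concave function fails even the K = 0 test.
concave_case = is_K_convex(lambda x: -abs(x), 0.0, grid, lams)
```

This mirrors the property stated above that every convex g is K-convex for any K ≥ 0.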
https://en.wikipedia.org/wiki/K-convex_function
Inmathematics,Kachurovskii's theoremis a theorem relating theconvexityof a function on aBanach spaceto themonotonicityof itsFréchet derivative. LetKbe aconvex subsetof a Banach spaceVand letf:K→R∪ {+∞} be anextended real-valued functionthat is Fréchet differentiable with derivative df(x) :V→Rat each pointxinK. (In fact, df(x) is an element of thecontinuous dual spaceV∗.) Then the following are equivalent:
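On the real line the theorem's content is concrete: f is convex exactly when its derivative is a monotone operator, i.e. (f′(x) − f′(y))(x − y) ≥ 0 for all x, y. A sampled spot check of that condition (an added illustration, not a proof; function names are ours):

```python
def derivative_is_monotone(fprime, xs, tol=1e-12):
    """Real-line version of the monotonicity condition in
    Kachurovskii's theorem: (f'(x) - f'(y)) * (x - y) >= 0 on samples."""
    return all((fprime(x) - fprime(y)) * (x - y) >= -tol
               for x in xs for y in xs)

grid = [i / 2 for i in range(-6, 7)]   # -3.0 .. 3.0

# f(x) = x^2 is convex, so f'(x) = 2x is monotone.
convex_case = derivative_is_monotone(lambda x: 2 * x, grid)
# f(x) = x^3 - x is not convex, and f'(x) = 3x^2 - 1 fails the check.
nonconvex_case = derivative_is_monotone(lambda x: 3 * x * x - 1, grid)
```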
https://en.wikipedia.org/wiki/Kachurovskii%27s_theorem
Inmathematics, amonotonic function(ormonotone function) is afunctionbetweenordered setsthat preserves or reverses the givenorder.[1][2][3]This concept first arose incalculus, and was later generalized to the more abstract setting oforder theory. Incalculus, a functionf{\displaystyle f}defined on asubsetof thereal numberswith real values is calledmonotonicif it is either entirely non-decreasing, or entirely non-increasing.[2]That is, as per Fig. 1, a function that increases monotonically does not exclusively have to increase, it simply must not decrease. A function is termedmonotonically increasing(alsoincreasingornon-decreasing)[3]if for allx{\displaystyle x}andy{\displaystyle y}such thatx≤y{\displaystyle x\leq y}one hasf(x)≤f(y){\displaystyle f\!\left(x\right)\leq f\!\left(y\right)}, sof{\displaystyle f}preserves the order (see Figure 1). Likewise, a function is calledmonotonically decreasing(alsodecreasingornon-increasing)[3]if, wheneverx≤y{\displaystyle x\leq y}, thenf(x)≥f(y){\displaystyle f\!\left(x\right)\geq f\!\left(y\right)}, so itreversesthe order (see Figure 2). If the order≤{\displaystyle \leq }in the definition of monotonicity is replaced by the strict order<{\displaystyle <}, one obtains a stronger requirement. A function with this property is calledstrictly increasing(alsoincreasing).[3][4]Again, by inverting the order symbol, one finds a corresponding concept calledstrictly decreasing(alsodecreasing).[3][4]A function with either property is calledstrictly monotone. Functions that are strictly monotone areone-to-one(because forx{\displaystyle x}not equal toy{\displaystyle y}, eitherx<y{\displaystyle x<y}orx>y{\displaystyle x>y}and so, by monotonicity, eitherf(x)<f(y){\displaystyle f\!\left(x\right)<f\!\left(y\right)}orf(x)>f(y){\displaystyle f\!\left(x\right)>f\!\left(y\right)}, thusf(x)≠f(y){\displaystyle f\!\left(x\right)\neq f\!\left(y\right)}.) 
To avoid ambiguity, the termsweakly monotone,weakly increasingandweakly decreasingare often used to refer to non-strict monotonicity. The terms "non-decreasing" and "non-increasing" should not be confused with the (much weaker) negative qualifications "not decreasing" and "not increasing". For example, the non-monotonic function shown in figure 3 first falls, then rises, then falls again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing. A functionf{\displaystyle f}is said to beabsolutely monotonicover an interval(a,b){\displaystyle \left(a,b\right)}if the derivatives of all orders off{\displaystyle f}arenonnegativeor allnonpositiveat all points on the interval. All strictly monotonic functions areinvertiblebecause they are guaranteed to have a one-to-one mapping from their range to their domain. However, functions that are only weakly monotone are not invertible because they are constant on some interval (and therefore are not one-to-one). A function may be strictly monotonic over a limited range of values and thus have an inverse on that range even though it is not strictly monotonic everywhere. For example, ify=g(x){\displaystyle y=g(x)}is strictly increasing on the range[a,b]{\displaystyle [a,b]}, then it has an inversex=h(y){\displaystyle x=h(y)}on the range[g(a),g(b)]{\displaystyle [g(a),g(b)]}. The termmonotonicis sometimes used in place ofstrictly monotonic, so a source may state that all monotonic functions are invertible when they really mean that all strictly monotonic functions are invertible. The termmonotonic transformation(ormonotone transformation) may also cause confusion because it refers to a transformation by a strictly increasing function.
This is the case in economics with respect to the ordinal properties of autility functionbeing preserved across a monotonic transform (see alsomonotone preferences).[5]In this context, the term "monotonic transformation" refers to a positive monotonic transformation and is intended to distinguish it from a "negative monotonic transformation," which reverses the order of the numbers.[6] The following properties are true for a monotonic functionf:R→R{\displaystyle f\colon \mathbb {R} \to \mathbb {R} }: These properties are the reason why monotonic functions are useful in technical work inanalysis. Other important properties of these functions include: An important application of monotonic functions is inprobability theory. IfX{\displaystyle X}is arandom variable, itscumulative distribution functionFX(x)=Prob(X≤x){\displaystyle F_{X}\!\left(x\right)={\text{Prob}}\!\left(X\leq x\right)}is a monotonically increasing function. A function isunimodalif it is monotonically increasing up to some point (themode) and then monotonically decreasing. Whenf{\displaystyle f}is astrictly monotonicfunction, thenf{\displaystyle f}isinjectiveon its domain, and ifT{\displaystyle T}is therangeoff{\displaystyle f}, then there is aninverse functiononT{\displaystyle T}forf{\displaystyle f}. In contrast, each constant function is monotonic, but not injective,[7]and hence cannot have an inverse. The graphic shows six monotonic functions. Their simplest forms are shown in the plot area and the expressions used to create them are shown on they-axis. 
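The weak/strict distinction discussed above translates directly into code. A minimal sketch (illustrative, not from the article) that distinguishes the two on sampled values, including a step-like CDF, which is only weakly increasing:

```python
def is_non_decreasing(seq):
    """Weak (non-strict) monotonicity: every adjacent pair has a <= b."""
    return all(a <= b for a, b in zip(seq, seq[1:]))

def is_strictly_increasing(seq):
    """Strict monotonicity: every adjacent pair has a < b."""
    return all(a < b for a, b in zip(seq, seq[1:]))

# x^3 is strictly increasing on any grid ...
cubic = [x ** 3 for x in range(-3, 4)]
# ... while a CDF-like sequence has flat stretches, so it is monotone
# (non-decreasing) without being strictly increasing.
cdf = [0.0, 0.0, 0.25, 0.5, 0.5, 0.9, 1.0]
```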
A mapf:X→Y{\displaystyle f:X\to Y}is said to bemonotoneif each of itsfibersisconnected; that is, for each elementy∈Y,{\displaystyle y\in Y,}the (possibly empty) setf−1(y){\displaystyle f^{-1}(y)}is a connectedsubspaceofX.{\displaystyle X.} Infunctional analysison atopological vector spaceX{\displaystyle X}, a (possibly non-linear) operatorT:X→X∗{\displaystyle T:X\rightarrow X^{*}}is said to be amonotone operatorif (Tu−Tv,u−v)≥0∀u,v∈X.{\displaystyle (Tu-Tv,u-v)\geq 0\quad \forall u,v\in X.}Kachurovskii's theoremshows thatconvex functionsonBanach spaceshave monotonic operators as their derivatives. A subsetG{\displaystyle G}ofX×X∗{\displaystyle X\times X^{*}}is said to be amonotone setif for every pair[u1,w1]{\displaystyle [u_{1},w_{1}]}and[u2,w2]{\displaystyle [u_{2},w_{2}]}inG{\displaystyle G}, (w1−w2,u1−u2)≥0.{\displaystyle (w_{1}-w_{2},u_{1}-u_{2})\geq 0.}G{\displaystyle G}is said to bemaximal monotoneif it is maximal among all monotone sets in the sense of set inclusion. The graph of a monotone operatorG(T){\displaystyle G(T)}is a monotone set. A monotone operator is said to bemaximal monotoneif its graph is amaximal monotone set. Order theory deals with arbitrarypartially ordered setsandpreordered setsas a generalization of real numbers. The above definition of monotonicity is relevant in these cases as well. However, the terms "increasing" and "decreasing" are avoided, since their conventional pictorial representation does not apply to orders that are nottotal. Furthermore, thestrictrelations<{\displaystyle <}and>{\displaystyle >}are of little use in many non-total orders and hence no additional terminology is introduced for them. Letting≤{\displaystyle \leq }denote the partial order relation of any partially ordered set, amonotonefunction, also calledisotone, ororder-preserving, satisfies the property x≤y⟹f(x)≤f(y){\displaystyle x\leq y\implies f(x)\leq f(y)} for allxandyin its domain. The composite of two monotone mappings is also monotone. 
Thedualnotion is often calledantitone,anti-monotone, ororder-reversing. Hence, an antitone functionfsatisfies the property x≤y⟹f(y)≤f(x),{\displaystyle x\leq y\implies f(y)\leq f(x),} for allxandyin its domain. Aconstant functionis both monotone and antitone; conversely, iffis both monotone and antitone, and if the domain offis alattice, thenfmust be constant. Monotone functions are central in order theory. They appear in most articles on the subject and examples from special applications are found in these places. Some notable special monotone functions areorder embeddings(functions for whichx≤y{\displaystyle x\leq y}if and only iff(x)≤f(y)){\displaystyle f(x)\leq f(y))}andorder isomorphisms(surjectiveorder embeddings). In the context ofsearch algorithmsmonotonicity (also called consistency) is a condition applied toheuristic functions. A heuristich(n){\displaystyle h(n)}is monotonic if, for every nodenand every successorn'ofngenerated by any actiona, the estimated cost of reaching the goal fromnis no greater than the step cost of getting ton'plus the estimated cost of reaching the goal fromn', h(n)≤c(n,a,n′)+h(n′).{\displaystyle h(n)\leq c\left(n,a,n'\right)+h\left(n'\right).} This is a form oftriangle inequality, withn,n', and the goalGnclosest ton. Because every monotonic heuristic is alsoadmissible, monotonicity is a stricter requirement than admissibility. Someheuristic algorithmssuch asA*can be provenoptimalprovided that the heuristic they use is monotonic.[8] InBoolean algebra, a monotonic function is one such that for allaiandbiin{0,1}, ifa1≤b1,a2≤b2, ...,an≤bn(i.e. the Cartesian product{0, 1}nis orderedcoordinatewise), thenf(a1, ...,an) ≤ f(b1, ...,bn). In other words, a Boolean function is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false. 
Graphically, this means that ann-ary Boolean function is monotonic when its representation as ann-cubelabelled with truth values has no upward edge fromtruetofalse. (This labelledHasse diagramis thedualof the function's labelledVenn diagram, which is the more common representation forn≤ 3.) The monotonic Boolean functions are precisely those that can be defined by an expression combining the inputs (which may appear more than once) using only the operatorsandandor(in particularnotis forbidden). For instance "at least two ofa,b,chold" is a monotonic function ofa,b,c, since it can be written for instance as ((aandb) or (aandc) or (bandc)). The number of such functions onnvariables is known as theDedekind numberofn. SAT solving, generally anNP-hardtask, can be achieved efficiently when all involved functions and predicates are monotonic and Boolean.[9]
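For small n the Boolean definition can be checked exhaustively. The sketch below (added for illustration) verifies that the "at least two of a, b, c" majority function from the text is monotone while negation, being forbidden in monotone expressions, is not:

```python
from itertools import product

def is_monotone_boolean(f, n):
    """Exhaustive check over {0,1}^n: a <= b coordinatewise
    must imply f(a) <= f(b)."""
    points = list(product((0, 1), repeat=n))
    for a in points:
        for b in points:
            if all(x <= y for x, y in zip(a, b)) and f(*a) > f(*b):
                return False
    return True

def majority3(a, b, c):
    # "(a and b) or (a and c) or (b and c)" -- uses only and/or
    return int(a + b + c >= 2)

def negation(a):
    # 'not' flips the output from true to false, so it cannot be monotone
    return 1 - a
```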
https://en.wikipedia.org/wiki/Monotone_operator
Inmathematics,Karamata's inequality,[1]named afterJovan Karamata,[2]also known as themajorization inequality, is a theorem inelementary algebrafor convex and concave real-valued functions, defined on an interval of the real line. It generalizes the discrete form ofJensen's inequality, and generalizes in turn to the concept ofSchur-convex functions. LetIbe anintervalof thereal lineand letfdenote a real-valued,convex functiondefined onI. Ifx1, …,xnandy1, …,ynare numbers inIsuch that(x1, …,xn)majorizes(y1, …,yn), then Here majorization means thatx1, …,xnandy1, …,ynsatisfy and we have the inequalities and the equality Iffis astrictly convex function, then the inequality (1) holds with equality if and only if we havexi=yifor alli∈ {1, …,n}. x⪯wy{\displaystyle x\preceq _{w}y}if and only if∑g(xi)≤∑g(yi){\displaystyle \sum g\left(x_{i}\right)\leq \sum g\left(y_{i}\right)}for any continuous increasing convex functiong:R→R{\displaystyle g:\mathbb {R} \to \mathbb {R} }.[3] The finite form ofJensen's inequalityis a special case of this result. Consider the real numbersx1, …,xn∈Iand let denote theirarithmetic mean. Then(x1, …,xn)majorizes then-tuple(a,a, …,a), since the arithmetic mean of theilargest numbers of(x1, …,xn)is at least as large as the arithmetic meanaof all thennumbers, for everyi∈ {1, …,n− 1}. By Karamata's inequality (1) for the convex functionf, Dividing byngives Jensen's inequality. The sign is reversed iffis concave. We may assume that the numbers are in decreasing order as specified in (2). Ifxi=yifor alli∈ {1, …,n}, then the inequality (1) holds with equality, hence we may assume in the following thatxi≠yifor at least onei. Ifxi=yifor ani∈ {1, …,n}, then the inequality (1) and the majorization properties (3) and (4) are not affected if we removexiandyi. Hence we may assume thatxi≠yifor alli∈ {1, …,n}.
It is aproperty of convex functionsthat for two numbersx≠yin the intervalItheslope of thesecant linethrough the points(x,f(x))and(y,f(y))of thegraphoffis amonotonically non-decreasingfunction inxforyfixed (andvice versa). This implies that for alli∈ {1, …,n− 1}. DefineA0=B0= 0and for alli∈ {1, …,n}. By the majorization property (3),Ai≥Bifor alli∈ {1, …,n− 1}and by (4),An=Bn. Hence, which proves Karamata's inequality (1). To discuss the case of equality in (1), note thatx1>y1by (3) and our assumptionxi≠yifor alli∈ {1, …,n− 1}. Letibe the smallest index such that(xi,yi) ≠ (xi+1,yi+1), which exists due to (4). ThenAi>Bi. Iffis strictly convex, then there is strict inequality in (6), meaning thatci+1<ci. Hence there is a strictly positive term in the sum on the right hand side of (7) and equality in (1) cannot hold. If the convex functionfis non-decreasing, thencn≥ 0. The relaxed condition (5) means thatAn≥Bn, which is enough to conclude thatcn(An−Bn) ≥ 0in the last step of (7). If the functionfis strictly convex and non-decreasing, thencn> 0. It only remains to discuss the caseAn>Bn. However, then there is a strictly positive term on the right hand side of (7) and equality in (1) cannot hold.
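Both the majorization preorder and the inequality itself are easy to verify on concrete numbers. A small sketch (an added illustration, not part of the proof above; names are ours):

```python
def majorizes(x, y, tol=1e-9):
    """(x_1..x_n) majorizes (y_1..y_n): equal total sums, and every
    prefix sum of the decreasingly sorted x dominates that of y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if len(xs) != len(ys) or abs(sum(xs) - sum(ys)) > tol:
        return False
    px = py = 0.0
    for a, b in zip(xs[:-1], ys[:-1]):
        px, py = px + a, py + b
        if px < py - tol:
            return False
    return True

x, y = [4.0, 2.0, 0.0], [3.0, 2.0, 1.0]   # x majorizes y (sums both 6)
f = lambda t: t * t                        # convex

karamata_lhs = sum(map(f, x))              # 16 + 4 + 0 = 20
karamata_rhs = sum(map(f, y))              # 9 + 4 + 1 = 14
```

Karamata's inequality then asserts exactly that the left-hand sum dominates the right-hand one.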
https://en.wikipedia.org/wiki/Karamata%27s_inequality
Inmathematics, afunctionfislogarithmically convexorsuperconvex[1]iflog∘f{\displaystyle {\log }\circ f}, thecompositionof thelogarithmwithf, is itself aconvex function. LetXbe aconvex subsetof arealvector space, and letf:X→Rbe a function takingnon-negativevalues. Thenfis: Here we interpretlog⁡0{\displaystyle \log 0}as−∞{\displaystyle -\infty }. Explicitly,fis logarithmically convex if and only if, for allx1,x2∈Xand allt∈ [0, 1], the two following equivalent conditions hold: Similarly,fis strictly logarithmically convex if and only if, in the above two expressions, strict inequality holds for allt∈ (0, 1). The above definition permitsfto be zero, but iffis logarithmically convex and vanishes anywhere inX, then it vanishes everywhere in the interior ofX. Iffis a differentiable function defined on an intervalI⊆R, thenfis logarithmically convex if and only if the following condition holds for allxandyinI: This is equivalent to the condition that, wheneverxandyare inIandx>y, Moreover,fis strictly logarithmically convex if and only if these inequalities are always strict. Iffis twice differentiable, then it is logarithmically convex if and only if, for allxinI, If the inequality is always strict, thenfis strictly logarithmically convex. However, the converse is false: It is possible thatfis strictly logarithmically convex and that, for somex, we havef″(x)f(x)=f′(x)2{\displaystyle f''(x)f(x)=f'(x)^{2}}. For example, iff(x)=exp⁡(x4){\displaystyle f(x)=\exp(x^{4})}, thenfis strictly logarithmically convex, butf″(0)f(0)=0=f′(0)2{\displaystyle f''(0)f(0)=0=f'(0)^{2}}. 
Furthermore,f:I→(0,∞){\displaystyle f\colon I\to (0,\infty )}is logarithmically convex if and only ifeαxf(x){\displaystyle e^{\alpha x}f(x)}is convex for allα∈R{\displaystyle \alpha \in \mathbb {R} }.[2][3] Iff1,…,fn{\displaystyle f_{1},\ldots ,f_{n}}are logarithmically convex, and ifw1,…,wn{\displaystyle w_{1},\ldots ,w_{n}}are non-negative real numbers, thenf1w1⋯fnwn{\displaystyle f_{1}^{w_{1}}\cdots f_{n}^{w_{n}}}is logarithmically convex. If{fi}i∈I{\displaystyle \{f_{i}\}_{i\in I}}is any family of logarithmically convex functions, theng=supi∈Ifi{\displaystyle g=\sup _{i\in I}f_{i}}is logarithmically convex. Iff:X→I⊆R{\displaystyle f\colon X\to I\subseteq \mathbf {R} }is convex andg:I→R≥0{\displaystyle g\colon I\to \mathbf {R} _{\geq 0}}is logarithmically convex and non-decreasing, theng∘f{\displaystyle g\circ f}is logarithmically convex. A logarithmically convex functionfis a convex function since it is thecompositeof theincreasingconvex functionexp{\displaystyle \exp }and the functionlog∘f{\displaystyle \log \circ f}, which is by definition convex. However, being logarithmically convex is a strictly stronger property than being convex. For example, the squaring functionf(x)=x2{\displaystyle f(x)=x^{2}}is convex, but its logarithmlog⁡f(x)=2log⁡|x|{\displaystyle \log f(x)=2\log |x|}is not. Therefore the squaring function is not logarithmically convex. This article incorporates material from logarithmically convex function onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
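The multiplicative form of the definition, f(tx + (1−t)y) ≤ f(x)^t f(y)^(1−t), can be spot-checked on a grid. The sketch below (an added illustration) confirms that exp is log-convex while the squaring function, convex as it is, fails the stronger log-convexity inequality on positive arguments:

```python
import math

def log_convex_on_grid(f, xs, lam=0.5):
    """Sampled check of f(lam*x + (1-lam)*y) <= f(x)**lam * f(y)**(1-lam)
    for a positive function f; a spot check, not a proof."""
    for x in xs:
        for y in xs:
            mid = f(lam * x + (1 - lam) * y)
            bound = f(x) ** lam * f(y) ** (1 - lam)
            if mid > bound * (1 + 1e-9):   # small relative slack for floats
                return False
    return True

grid = [0.5, 1.0, 1.5, 2.0, 3.0]
exp_case = log_convex_on_grid(math.exp, grid)           # log(e^x) = x, convex
square_case = log_convex_on_grid(lambda x: x * x, grid) # 2*log(x) is concave
```

For the square, x = 1, y = 2 already violates the bound: ((1+2)/2)^2 = 2.25 exceeds 1·2 = 2.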
https://en.wikipedia.org/wiki/Logarithmically_convex_function
Inconvex analysisand thecalculus of variations, both branches ofmathematics, apseudoconvex functionis afunctionthat behaves like aconvex functionwith respect to finding itslocal minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positivedirectional derivative. The property must hold in all of the function domain, and not only for nearby points. Consider adifferentiablefunctionf:X⊆Rn→R{\displaystyle f:X\subseteq \mathbb {R} ^{n}\rightarrow \mathbb {R} }, defined on a (nonempty)convexopen setX{\displaystyle X}of the finite-dimensionalEuclidean spaceRn{\displaystyle \mathbb {R} ^{n}}. This function is said to bepseudoconvexif the following property holds:[1] Equivalently: Here∇f{\displaystyle \nabla f}is thegradientoff{\displaystyle f}, defined by:∇f=(∂f∂x1,…,∂f∂xn).{\displaystyle \nabla f=\left({\frac {\partial f}{\partial x_{1}}},\dots ,{\frac {\partial f}{\partial x_{n}}}\right).} Note that the definition may also be stated in terms of thedirectional derivativeoff{\displaystyle f}, in the direction given by the vectorv=y−x{\displaystyle v=y-x}. This is because, asf{\displaystyle f}is differentiable, this directional derivative is given by: Every convex function is pseudoconvex, but the converse is not true. For example, the functionf(x)=x+x3{\displaystyle f(x)=x+x^{3}}is pseudoconvex but not convex. Similarly, any pseudoconvex function isquasiconvex; but the converse is not true, since the functionf(x)=x3{\displaystyle f(x)=x^{3}}is quasiconvex but not pseudoconvex. This can be summarized schematically as: To see thatf(x)=x3{\displaystyle f(x)=x^{3}}is not pseudoconvex, consider its derivative atx=0{\displaystyle x=0}:f′(0)=0{\displaystyle f^{\prime }(0)=0}. Then, iff(x)=x3{\displaystyle f(x)=x^{3}}was pseudoconvex, we should have: In particular it should be true fory=−1{\displaystyle y=-1}. But it is not, as:f(−1)=(−1)3=−1<f(0)=0{\displaystyle f(-1)=(-1)^{3}=-1<f(0)=0}. 
For any differentiable function, we have theFermat's theoremnecessary condition of optimality, which states that: iff{\displaystyle f}has a local minimum atx∗{\displaystyle x^{*}}in anopendomain, thenx∗{\displaystyle x^{*}}must be astationary pointoff{\displaystyle f}(that is:∇f(x∗)=0{\displaystyle \nabla f(x^{*})=0}). Pseudoconvexity is of great interest in the area ofoptimization, because the converse is also true for any pseudoconvex function. That is:[2]ifx∗{\displaystyle x^{*}}is astationary pointof a pseudoconvex functionf{\displaystyle f}, thenf{\displaystyle f}has a global minimum atx∗{\displaystyle x^{*}}. Note also that the result guarantees a global minimum (not only local). This last result is also true for a convex function, but it is not true for a quasiconvex function. Consider for example the quasiconvex function: This function is not pseudoconvex, but it is quasiconvex. Also, the pointx=0{\displaystyle x=0}is a critical point off{\displaystyle f}, asf′(0)=0{\displaystyle f^{\prime }(0)=0}. However,f{\displaystyle f}does not have a global minimum atx=0{\displaystyle x=0}(not even a local minimum). Finally, note that a pseudoconvex function may not have any critical point. Take for example the pseudoconvex function:f(x)=x3+x{\displaystyle f(x)=x^{3}+x}, whose derivative is always positive:f′(x)=3x2+1>0,∀x∈R{\displaystyle f^{\prime }(x)=3x^{2}+1>0,\,\forall \,x\in \mathbb {R} }. An example of a function that is pseudoconvex, but not convex, is:f(x)=x2x2+k,k>0.{\displaystyle f(x)={\frac {x^{2}}{x^{2}+k}},\,k>0.}The figure shows this function for the case wherek=0.2{\displaystyle k=0.2}. This example may be generalized to two variables as: The previous example may be modified to obtain a function that is not convex, nor pseudoconvex, but is quasiconvex: The figure shows this function for the case wherek=0.5,p=0.6{\displaystyle k=0.5,p=0.6}. 
As can be seen, this function is not convex because of the concavity, and it is not pseudoconvex because it is not differentiable atx=0{\displaystyle x=0}. The notion of pseudoconvexity can be generalized to nondifferentiable functions as follows.[3]Given any functionf:X→R{\displaystyle f:X\rightarrow \mathbb {R} }, we can define the upperDini derivativeoff{\displaystyle f}by: whereuis anyunit vector. The function is said to be pseudoconvex if it is increasing in any direction where the upper Dini derivative is positive. More precisely, this is characterized in terms of thesubdifferential∂f{\displaystyle \partial f}as follows: where[x,y]{\displaystyle [x,y]}denotes the line segment adjoiningxandy. Apseudoconcave functionis a function whose negative is pseudoconvex. Apseudolinear functionis a function that is both pseudoconvex and pseudoconcave.[4]For example,linear–fractional programshave pseudolinearobjective functionsandlinear–inequality constraints. These properties allow fractional-linear problems to be solved by a variant of thesimplex algorithm(ofGeorge B. Dantzig).[5][6][7] Given a vector-valued functionη{\displaystyle \eta }, there is a more general notion ofη{\displaystyle \eta }-pseudoconvexity[8][9]andη{\displaystyle \eta }-pseudolinearity; wherein classical pseudoconvexity and pseudolinearity pertain to the case whenη(x,y)=y−x{\displaystyle \eta (x,y)=y-x}.
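The defining implication can be tested on a one-dimensional grid. The sketch below (illustrative, names ours) reproduces the article's two examples: x + x³ passes, while x³ fails precisely at the stationary point 0, as computed above:

```python
def is_pseudoconvex_1d(f, fprime, xs, tol=1e-9):
    """Sampled check of the definition: whenever f'(x)*(y - x) >= 0,
    we must have f(y) >= f(x)."""
    for x in xs:
        for y in xs:
            if fprime(x) * (y - x) >= -tol and f(y) < f(x) - tol:
                return False
    return True

grid = [i / 4 for i in range(-8, 9)]   # [-2, 2] in steps of 0.25, incl. 0

# f(x) = x + x^3: pseudoconvex (but not convex); f' > 0 everywhere.
plus_case = is_pseudoconvex_1d(lambda x: x + x ** 3,
                               lambda x: 1 + 3 * x * x, grid)
# f(x) = x^3: fails at x = 0, y = -1, since f'(0) = 0 but f(-1) < f(0).
cube_case = is_pseudoconvex_1d(lambda x: x ** 3,
                               lambda x: 3 * x * x, grid)
```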
https://en.wikipedia.org/wiki/Pseudoconvex_function
Inmathematics, aquasiconvex functionis areal-valuedfunctiondefined on anintervalor on aconvex subsetof a realvector spacesuch that theinverse imageof any set of the form(−∞,a){\displaystyle (-\infty ,a)}is aconvex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to bequasiconcave. Quasiconvexity is a more general property than convexity in that allconvex functionsare also quasiconvex, but not all quasiconvex functions are convex.Univariateunimodalfunctions are quasiconvex or quasiconcave, however this is not necessarily the case for functions with multiplearguments. For example, the 2-dimensionalRosenbrock functionis unimodal but not quasiconvex and functions withstar-convexsublevel sets can be unimodal without being quasiconvex. A functionf:S→R{\displaystyle f:S\to \mathbb {R} }defined on a convex subsetS{\displaystyle S}of a real vector space is quasiconvex if for allx,y∈S{\displaystyle x,y\in S}andλ∈[0,1]{\displaystyle \lambda \in [0,1]}we have In words, iff{\displaystyle f}is such that it is always true that a point directly between two other points does not give a higher value of the function than both of the other points do, thenf{\displaystyle f}is quasiconvex. Note that the pointsx{\displaystyle x}andy{\displaystyle y}, and the point directly between them, can be points on a line or more generally points inn-dimensional space. An alternative way (see introduction) of defining a quasi-convex functionf(x){\displaystyle f(x)}is to require that each sublevel setSα(f)={x∣f(x)≤α}{\displaystyle S_{\alpha }(f)=\{x\mid f(x)\leq \alpha \}}is a convex set. If furthermore for allx≠y{\displaystyle x\neq y}andλ∈(0,1){\displaystyle \lambda \in (0,1)}, thenf{\displaystyle f}isstrictly quasiconvex. That is, strict quasiconvexity requires that a point directly between two other points must give a lower value of the function than one of the other points does. 
Aquasiconcave functionis a function whose negative is quasiconvex, and astrictly quasiconcave functionis a function whose negative is strictly quasiconvex. Equivalently a functionf{\displaystyle f}is quasiconcave if and strictly quasiconcave if A (strictly) quasiconvex function has (strictly) convexlower contour sets, while a (strictly) quasiconcave function has (strictly) convexupper contour sets. A function that is both quasiconvex and quasiconcave isquasilinear. A particular case of quasi-concavity, ifS⊂R{\displaystyle S\subset \mathbb {R} }, isunimodality, in which there is a locally maximal value. Quasiconvex functions have applications inmathematical analysis, inmathematical optimization, and ingame theoryandeconomics. Innonlinear optimization, quasiconvex programming studiesiterative methodsthat converge to a minimum (if one exists) for quasiconvex functions. Quasiconvex programming is a generalization ofconvex programming.[1]Quasiconvex programming is used in the solution of "surrogate"dual problems, whose biduals provide quasiconvex closures of the primal problem, which therefore provide tighter bounds than do the convex closures provided by Lagrangiandual problems.[2]Intheory, quasiconvex programming and convex programming problems can be solved in reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated);[3]however, such theoretically "efficient" methods use "divergent-series"step size rules, which were first developed for classicalsubgradient methods. Classical subgradient methods using divergent-series rules are much slower than modern methods of convex minimization, such as subgradient projection methods,bundle methodsof descent, and nonsmoothfilter methods. Inmicroeconomics, quasiconcaveutility functionsimply that consumers haveconvex preferences. 
Quasiconvex functions are important also ingame theory,industrial organization, andgeneral equilibrium theory, particularly for applications ofSion's minimax theorem. Generalizing aminimax theoremofJohn von Neumann, Sion's theorem is also used in the theory ofpartial differential equations.
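The defining condition, f(λx + (1−λ)y) ≤ max{f(x), f(y)}, is straightforward to spot-check. The sketch below (added for illustration) shows that |x| and the monotone x³ are quasiconvex while the concave −x², whose sublevel sets are not convex, is not:

```python
def is_quasiconvex_1d(f, xs, lambdas, tol=1e-12):
    """Sampled check: f(lam*x + (1-lam)*y) <= max(f(x), f(y))."""
    for x in xs:
        for y in xs:
            for lam in lambdas:
                if f(lam * x + (1 - lam) * y) > max(f(x), f(y)) + tol:
                    return False
    return True

grid = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
lams = [0.0, 0.25, 0.5, 0.75, 1.0]

abs_case = is_quasiconvex_1d(abs, grid, lams)                # convex, hence qc
cube_case = is_quasiconvex_1d(lambda x: x ** 3, grid, lams)  # monotone, qc
bump_case = is_quasiconvex_1d(lambda x: -x * x, grid, lams)  # quasiconcave
```

The failure for −x² occurs at x = −1, y = 1: the midpoint value 0 exceeds max(−1, −1).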
https://en.wikipedia.org/wiki/Quasiconvex_function
Inmathematics,subderivatives(orsubgradients)generalize thederivativeto convex functions which are not necessarilydifferentiable. The set of subderivatives at a point is called thesubdifferentialat that point.[1]Subderivatives arise inconvex analysis, the study ofconvex functions, often in connection toconvex optimization. Letf:I→R{\displaystyle f:I\to \mathbb {R} }be areal-valued convex function defined on anopen intervalof the real line. Such a function need not be differentiable at all points: For example, theabsolute valuefunctionf(x)=|x|{\displaystyle f(x)=|x|}is non-differentiable whenx=0{\displaystyle x=0}. However, for anyx0{\displaystyle x_{0}}in the domain of the function one can draw a line which goes through the point(x0,f(x0)){\displaystyle (x_{0},f(x_{0}))}and which is everywhere either touching or below the graph off. Theslopeof such a line is called asubderivative. Rigorously, asubderivativeof a convex functionf:I→R{\displaystyle f:I\to \mathbb {R} }at a pointx0{\displaystyle x_{0}}in the open intervalI{\displaystyle I}is a real numberc{\displaystyle c}such thatf(x)−f(x0)≥c(x−x0){\displaystyle f(x)-f(x_{0})\geq c(x-x_{0})}for allx∈I{\displaystyle x\in I}. By the converse of themean value theorem, thesetof subderivatives atx0{\displaystyle x_{0}}for a convex function is anonemptyclosed interval[a,b]{\displaystyle [a,b]}, wherea{\displaystyle a}andb{\displaystyle b}are theone-sided limitsa=limx→x0−f(x)−f(x0)x−x0,{\displaystyle a=\lim _{x\to x_{0}^{-}}{\frac {f(x)-f(x_{0})}{x-x_{0}}},}b=limx→x0+f(x)−f(x0)x−x0.{\displaystyle b=\lim _{x\to x_{0}^{+}}{\frac {f(x)-f(x_{0})}{x-x_{0}}}.}Theinterval[a,b]{\displaystyle [a,b]}of all subderivatives is called thesubdifferentialof the functionf{\displaystyle f}atx0{\displaystyle x_{0}}, denoted by∂f(x0){\displaystyle \partial f(x_{0})}.
Iff{\displaystyle f}is convex, then its subdifferential at any point is non-empty. Moreover, if its subdifferential atx0{\displaystyle x_{0}}contains exactly one subderivative, thenf{\displaystyle f}is differentiable atx0{\displaystyle x_{0}}and∂f(x0)={f′(x0)}{\displaystyle \partial f(x_{0})=\{f'(x_{0})\}}.[2] Consider the functionf(x)=|x|{\displaystyle f(x)=|x|}which is convex. Then, the subdifferential at the origin is theinterval[−1,1]{\displaystyle [-1,1]}. The subdifferential at any pointx0<0{\displaystyle x_{0}<0}is thesingleton set{−1}{\displaystyle \{-1\}}, while the subdifferential at any pointx0>0{\displaystyle x_{0}>0}is the singleton set{1}{\displaystyle \{1\}}. This is similar to thesign function, but is not single-valued at0{\displaystyle 0}, instead including all possible subderivatives. The concepts of subderivative and subdifferential can be generalized to functions of several variables. Iff:U→R{\displaystyle f:U\to \mathbb {R} }is a real-valued convex function defined on aconvexopen setin theEuclidean spaceRn{\displaystyle \mathbb {R} ^{n}}, a vectorv{\displaystyle v}in that space is called asubgradientatx0∈U{\displaystyle x_{0}\in U}if for anyx∈U{\displaystyle x\in U}one has that where the dot denotes thedot product. The set of all subgradients atx0{\displaystyle x_{0}}is called thesubdifferentialatx0{\displaystyle x_{0}}and is denoted∂f(x0){\displaystyle \partial f(x_{0})}. The subdifferential is always a nonempty convexcompact set. These concepts generalize further to convex functionsf:U→R{\displaystyle f:U\to \mathbb {R} }on aconvex setin alocally convex spaceV{\displaystyle V}. A functionalv∗{\displaystyle v^{*}}in thedual spaceV∗{\displaystyle V^{*}}is called asubgradientatx0{\displaystyle x_{0}}inU{\displaystyle U}if for allx∈U{\displaystyle x\in U}, The set of all subgradients atx0{\displaystyle x_{0}}is called the subdifferential atx0{\displaystyle x_{0}}and is again denoted∂f(x0){\displaystyle \partial f(x_{0})}. 
The subdifferential is always a convex closed set. It can be an empty set; consider for example an unbounded operator, which is convex, but has no subgradient. If f is continuous, the subdifferential is nonempty.

The subdifferential on convex functions was introduced by Jean Jacques Moreau and R. Tyrrell Rockafellar in the early 1960s. The generalized subdifferential for nonconvex functions was introduced by Francis H. Clarke and R. Tyrrell Rockafellar in the early 1980s.[4]
https://en.wikipedia.org/wiki/Subderivative
A cobweb plot, known also as a Lémeray diagram or Verhulst diagram, is a visual tool used in dynamical systems, a field of mathematics, to investigate the qualitative behaviour of one-dimensional iterated functions, such as the logistic map. The technique was introduced in the 1890s by E.-M. Lémeray.[1] Using a cobweb plot, it is possible to infer the long-term status of an initial condition under repeated application of a map.[2]

For a given iterated function f : ℝ → ℝ, the plot consists of a diagonal line (y = x) and a curve representing y = f(x). To plot the behaviour of a value x₀, apply the following steps: find the point on the function curve with x-coordinate x₀, that is, (x₀, f(x₀)); plot horizontally across from this point to the diagonal, reaching (f(x₀), f(x₀)); plot vertically from this point back to the function curve, reaching (f(x₀), f(f(x₀))); and repeat from the second step as required.

On the Lémeray diagram, a stable fixed point corresponds to a segment of the staircase with progressively decreasing stair lengths, or to an inward spiral, while an unstable fixed point corresponds to a segment of the staircase with growing stairs, or to an outward spiral. It follows from the definition of a fixed point that the staircases converge, and the spirals center, at a point where the diagonal y = x crosses the function graph. A period-2 orbit is represented by a rectangle, while greater period cycles produce further, more complex closed loops. A chaotic orbit would show a "filled-out" area, indicating an infinite number of non-repeating values.[2]
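The stepping procedure can be sketched without any plotting library by generating the sequence of vertices a cobweb plot would draw. This is an illustrative sketch; the function names and the choice of the logistic map with r = 2.5 (whose stable fixed point is 1 − 1/r = 0.6) are ours.

```python
# Cobweb construction for the logistic map f(x) = r*x*(1-x):
# alternate a vertical step to the curve with a horizontal step to y = x.

def logistic(r):
    return lambda x: r * x * (1.0 - x)

def cobweb_points(f, x0, n):
    """Return the (x, y) vertices a cobweb plot would draw for n iterations."""
    pts = [(x0, 0.0)]            # start on the x-axis at x0
    x = x0
    for _ in range(n):
        y = f(x)
        pts.append((x, y))       # vertical step: to the graph of f
        pts.append((y, y))       # horizontal step: to the diagonal y = x
        x = y
    return pts

f = logistic(2.5)
pts = cobweb_points(f, 0.2, 60)
# For r = 2.5 the staircase spirals into the stable fixed point 1 - 1/r = 0.6.
assert abs(pts[-1][0] - 0.6) < 1e-6
```

Feeding the vertex list to any line-plotting routine reproduces the familiar staircase/spiral picture.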
https://en.wikipedia.org/wiki/Cobweb_plot
Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel[1] and Haskell Curry,[2] and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions, and to remove any mention of variables, particularly in predicate logic. A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments.

Combinatory logic was originally intended as a 'pre-logic' that would clarify the role of quantified variables in logic, essentially by eliminating them. Another way of eliminating quantified variables is Quine's predicate functor logic. While the expressive power of combinatory logic typically exceeds that of first-order logic, the expressive power of predicate functor logic is identical to that of first-order logic (Quine 1960, 1966, 1976).

The original inventor of combinatory logic, Moses Schönfinkel, published nothing on combinatory logic after his original 1924 paper. Haskell Curry rediscovered the combinators while working as an instructor at Princeton University in late 1927.[3] In the late 1930s, Alonzo Church and his students at Princeton invented a rival formalism for functional abstraction, the lambda calculus, which proved more popular than combinatory logic. The upshot of these historical contingencies was that until theoretical computer science began taking an interest in combinatory logic in the 1960s and 1970s, nearly all work on the subject was by Haskell Curry and his students, or by Robert Feys in Belgium. Curry and Feys (1958), and Curry et al. (1972), survey the early history of combinatory logic.
For a more modern treatment of combinatory logic and the lambda calculus together, see the book by Barendregt,[4] which reviews the models Dana Scott devised for combinatory logic in the 1960s and 1970s.

In computer science, combinatory logic is used as a simplified model of computation, used in computability theory and proof theory. Despite its simplicity, combinatory logic captures many essential features of computation. Combinatory logic can be viewed as a variant of the lambda calculus, in which lambda expressions (representing functional abstraction) are replaced by a limited set of combinators, primitive functions without free variables. It is easy to transform lambda expressions into combinator expressions, and combinator reduction is much simpler than lambda reduction. Hence combinatory logic has been used to model some non-strict functional programming languages and hardware. The purest form of this view is the programming language Unlambda, whose sole primitives are the S and K combinators augmented with character input/output. Although not a practical programming language, Unlambda is of some theoretical interest.

Combinatory logic can be given a variety of interpretations. Many early papers by Curry showed how to translate axiom sets for conventional logic into combinatory logic equations.[5] Dana Scott in the 1960s and 1970s showed how to marry model theory and combinatory logic.

Lambda calculus is concerned with objects called lambda-terms, which can be represented by the following three forms of strings: a variable v; an abstraction λv.E₁; and an application (E₁E₂), where v is a variable name drawn from a predefined infinite set of variable names, and E₁ and E₂ are lambda-terms. Terms of the form λv.E₁ are called abstractions. The variable v is called the formal parameter of the abstraction, and E₁ is the body of the abstraction.
The term λv.E₁ represents the function which, applied to an argument, binds the formal parameter v to the argument and then computes the resulting value of E₁; that is, it returns E₁, with every occurrence of v replaced by the argument.

Terms of the form (E₁E₂) are called applications. Applications model function invocation or execution: the function represented by E₁ is to be invoked, with E₂ as its argument, and the result is computed. If E₁ (sometimes called the applicand) is an abstraction, the term may be reduced: E₂, the argument, may be substituted into the body of E₁ in place of the formal parameter of E₁, and the result is a new lambda term which is equivalent to the old one. If a lambda term contains no subterms of the form ((λv.E₁)E₂) then it cannot be reduced, and is said to be in normal form.

The expression E[v := a] represents the result of taking the term E and replacing all free occurrences of v in it with a. Thus we write

((λv.E)a) ⇒ E[v := a].

By convention, we take (abc) as shorthand for ((ab)c) (i.e., application is left associative).

The motivation for this definition of reduction is that it captures the essential behavior of all mathematical functions. For example, consider the function that computes the square of a number. We might write square = λx.(x ∗ x), using "∗" to indicate multiplication. x here is the formal parameter of the function. To evaluate the square for a particular argument, say 3, we insert it into the definition in place of the formal parameter: (λx.(x ∗ x)) 3 ⇒ 3 ∗ 3. To evaluate the resulting expression 3 ∗ 3, we would have to resort to our knowledge of multiplication and the number 3.
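The square example can be phrased directly in any language with anonymous functions; a Python sketch (the name `square` is from the example above):

```python
# An abstraction in Python: applying it binds the formal parameter x
# and substitutes the argument into the body, exactly as in beta reduction.

square = lambda x: x * x   # plays the role of the lambda term λx.(x ∗ x)

assert square(3) == 9      # substituting 3 for x gives 3 ∗ 3 = 9
```

The host language supplies the knowledge of multiplication and the number 3, just as the text describes.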
Since any computation is simply a composition of the evaluation of suitable functions on suitable primitive arguments, this simple substitution principle suffices to capture the essential mechanism of computation. Moreover, in lambda calculus, notions such as '3' and '∗' can be represented without any need for externally defined primitive operators or constants. It is possible to identify terms in lambda calculus which, when suitably interpreted, behave like the number 3 and like the multiplication operator, q.v. Church encoding.

Lambda calculus is known to be computationally equivalent in power to many other plausible models for computation (including Turing machines); that is, any calculation that can be accomplished in any of these other models can be expressed in lambda calculus, and vice versa. According to the Church–Turing thesis, both models can express any possible computation.

It is perhaps surprising that lambda-calculus can represent any conceivable computation using only the simple notions of function abstraction and application based on simple textual substitution of terms for variables. But even more remarkable is that abstraction is not even required. Combinatory logic is a model of computation equivalent to lambda calculus, but without abstraction. The advantage of this is that evaluating expressions in lambda calculus is quite complicated because the semantics of substitution must be specified with great care to avoid variable capture problems. In contrast, evaluating expressions in combinatory logic is much simpler, because there is no notion of substitution.

Since abstraction is the only way to manufacture functions in the lambda calculus, something must replace it in the combinatory calculus. Instead of abstraction, combinatory calculus provides a limited set of primitive functions out of which other functions may be built.
A combinatory term has one of the following forms: a variable x; a primitive function P; or an application (E₁E₂), where E₁ and E₂ are combinatory terms. The primitive functions are combinators, or functions that, when seen as lambda terms, contain no free variables.

To shorten the notations, a general convention is that (E₁E₂E₃...Eₙ), or even E₁E₂E₃...Eₙ, denotes the term (...((E₁E₂)E₃)...Eₙ). This is the same general convention (left-associativity) as for multiple application in lambda calculus.

In combinatory logic, each primitive combinator comes with a reduction rule of the form

(P x₁ ... xₙ) = E,

where E is a term mentioning only variables from the set {x₁ ... xₙ}. It is in this way that primitive combinators behave as functions.

The simplest example of a combinator is I, the identity combinator, defined by

(I x) = x

for all terms x. Another simple combinator is K, which manufactures constant functions: (K x) is the function which, for any argument, returns x, so we say

((K x) y) = x

for all terms x and y. Or, following the convention for multiple application,

(K x y) = x.

A third combinator is S, which is a generalized version of application:

(S x y z) = (x z (y z)).

S applies x to y after first substituting z into each of them. Or put another way, x is applied to y inside the environment z.

Given S and K, I itself is unnecessary, since it can be built from the other two:

((S K K) x) = (K x (K x)) = x

for any term x. Note that although ((S K K) x) = (I x) for any x, (S K K) itself is not equal to I. We say the terms are extensionally equal. Extensional equality captures the mathematical notion of the equality of functions: that two functions are equal if they always produce the same results for the same arguments. In contrast, the terms themselves, together with the reduction of primitive combinators, capture the notion of intensional equality of functions: that two functions are equal only if they have identical implementations up to the expansion of primitive combinators. There are many ways to implement an identity function; (S K K) and I are among these ways. (S K S) is yet another.
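The reduction rules above can be transcribed directly as curried functions. This is an illustrative sketch, not an efficient combinator evaluator; the variable names are ours.

```python
# The primitive combinators as curried Python lambdas, mirroring the rules:
#   (S x y z) = (x z (y z)),  (K x y) = x,  (I x) = x.

S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = S(K)(K)          # I is definable from S and K: ((S K K) x) = x

assert I(42) == 42
assert K(1)(2) == 1

add = lambda a: lambda b: a + b
assert S(add)(I)(3) == 6   # (S add I 3) = add(3)(I(3)) = 3 + 3
```

Note that the Python object `S(K)(K)` is a different closure from a directly written identity lambda, which mirrors the intensional/extensional distinction above: they are distinct terms that agree on all arguments.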
We will use the word equivalent to indicate extensional equality, reserving equal for identical combinatorial terms.

A more interesting combinator is the fixed point combinator or Y combinator, which can be used to implement recursion.

S and K can be composed to produce combinators that are extensionally equal to any lambda term, and therefore, by Church's thesis, to any computable function whatsoever. The proof is to present a transformation, T[ ], which converts an arbitrary lambda term into an equivalent combinator. T[ ] may be defined as follows:

1. T[x] ⇒ x
2. T[(E₁E₂)] ⇒ (T[E₁] T[E₂])
3. T[λx.x] ⇒ I
4. T[λx.E] ⇒ (K T[E]) (if x does not occur free in E)
5. T[λx.λy.E] ⇒ T[λx.T[λy.E]] (if x occurs free in E)
6. T[λx.(E₁E₂)] ⇒ (S T[λx.E₁] T[λx.E₂]) (if x occurs free in E₁ or E₂)

Note that T[ ] as given is not a well-typed mathematical function, but rather a term rewriter: although it eventually yields a combinator, the transformation may generate intermediary expressions that are neither lambda terms nor combinators, via rule (5).

This process is also known as abstraction elimination. This definition is exhaustive: any lambda expression will be subject to exactly one of these rules (see Summary of lambda calculus above). It is related to the process of bracket abstraction, which takes an expression E built from variables and application and produces a combinator expression [x]E in which the variable x is not free, such that [x]E x = E holds. A very simple algorithm for bracket abstraction is defined by induction on the structure of expressions as follows:[6]

[x] x := I
[x] E := (K E) (if x is not free in E)
[x] (E₁E₂) := (S ([x]E₁) ([x]E₂))

Bracket abstraction induces a translation from lambda terms to combinator expressions, by interpreting lambda-abstractions using the bracket abstraction algorithm.

For example, we will convert the lambda term λx.λy.(y x) to a combinatorial term, obtaining (S (K (S I)) (S (K K) I)). If we apply this combinatorial term to any two terms x and y (by feeding them in a queue-like fashion into the combinator 'from the right'), it reduces as follows:

(S (K (S I)) (S (K K) I) x y)
= (K (S I) x (S (K K) I x) y)
= (S I (S (K K) I x) y)
= (I y (S (K K) I x y))
= (y (S (K K) I x y))
= (y (K K x (I x) y))
= (y (K (I x) y))
= (y (I x))
= (y x)

The combinatory representation, (S (K (S I)) (S (K K) I)), is much longer than the representation as a lambda term, λx.λy.(y x). This is typical.
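The bracket-abstraction algorithm above is small enough to implement on a toy term representation. In this sketch (all names ours), variables and combinator constants are strings and an application (E₁E₂) is a 2-tuple; a fuel-bounded reducer then checks the defining property [x]E x = E.

```python
# Bracket abstraction [x]E and a small leftmost reducer for S, K, I terms.

def free_in(x, e):
    """Does variable x occur in term e?  Terms: strings, or 2-tuples (E1, E2)."""
    if isinstance(e, tuple):
        return free_in(x, e[0]) or free_in(x, e[1])
    return e == x

def bracket(x, e):
    """Return [x]e, a term not containing x, such that ([x]e) x = e."""
    if e == x:
        return 'I'
    if not free_in(x, e):
        return ('K', e)
    e1, e2 = e                     # e must be an application here
    return (('S', bracket(x, e1)), bracket(x, e2))

def step(e):
    """One leftmost reduction step; returns (term, changed)."""
    if not isinstance(e, tuple):
        return e, False
    f, a = e
    if f == 'I':                                  # (I a) -> a
        return a, True
    if isinstance(f, tuple):
        g, b = f
        if g == 'K':                              # (K b a) -> b
            return b, True
        if isinstance(g, tuple) and g[0] == 'S':  # (S x b a) -> ((x a) (b a))
            return ((g[1], a), (b, a)), True
    f2, ch = step(f)
    if ch:
        return (f2, a), True
    a2, ch = step(a)
    if ch:
        return (f, a2), True
    return e, False

def nf(e, fuel=10000):
    """Reduce to normal form, with a fuel bound (termination is undecidable)."""
    changed = True
    while changed and fuel > 0:
        e, changed = step(e)
        fuel -= 1
    return e

# [x](y x) applied to x reduces back to (y x):
assert nf((bracket('x', ('y', 'x')), 'x')) == ('y', 'x')
```

The fuel bound is a pragmatic concession: as the article notes later, whether a combinatory term has a normal form is undecidable in general.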
In general, the T[ ] construction may expand a lambda term of length n to a combinatorial term of length Θ(n³).[7]

The T[ ] transformation is motivated by a desire to eliminate abstraction. Two special cases, rules 3 and 4, are trivial: λx.x is clearly equivalent to I, and λx.E is clearly equivalent to (K T[E]) if x does not appear free in E.

The first two rules are also simple: variables convert to themselves, and applications, which are allowed in combinatory terms, are converted to combinators simply by converting the applicand and the argument to combinators.

It is rules 5 and 6 that are of interest. Rule 5 simply says that to convert a complex abstraction to a combinator, we must first convert its body to a combinator, and then eliminate the abstraction. Rule 6 actually eliminates the abstraction.

λx.(E₁E₂) is a function which takes an argument, say a, and substitutes it into the lambda term (E₁E₂) in place of x, yielding (E₁E₂)[x := a]. But substituting a into (E₁E₂) in place of x is just the same as substituting it into both E₁ and E₂, so

(E₁E₂)[x := a] = (E₁[x := a] E₂[x := a]).

By extensional equality,

λx.(E₁E₂) a = ((λx.E₁ a)(λx.E₂ a)) = (S λx.E₁ λx.E₂ a).

Therefore, to find a combinator equivalent to λx.(E₁E₂), it is sufficient to find a combinator equivalent to (S λx.E₁ λx.E₂), and (S T[λx.E₁] T[λx.E₂]) evidently fits the bill. E₁ and E₂ each contain strictly fewer applications than (E₁E₂), so the recursion must terminate in a lambda term with no applications at all: either a variable, or a term of the form λx.E.

The combinators generated by the T[ ] transformation can be made smaller if we take into account the η-reduction rule:

T[λx.(E x)] = T[E] (if x is not free in E).

λx.(E x) is the function which takes an argument, x, and applies the function E to it; this is extensionally equal to the function E itself. It is therefore sufficient to convert E to combinatorial form.

Taking this simplification into account, the example above becomes T[λx.λy.(y x)] = (S (K (S I)) K). This combinator is equivalent to the earlier, longer one: (S (K (S I)) K) = (S (K (S I)) (S (K K) I)).

Similarly, the original version of the T[ ] transformation transformed the identity function λf.λx.(f x) into (S (S (K S) (S (K K) I)) (K I)).
With the η-reduction rule, λf.λx.(f x) is transformed into I.

There are one-point bases from which every combinator can be composed extensionally equal to any lambda term. A simple example of such a basis is {X} where

X ≡ λx.((x S) K).

It is not difficult to verify that suitable self-applications of X yield terms extensionally equal to K and to S. Since {K, S} is a basis, it follows that {X} is a basis too. The Iota programming language uses X as its sole combinator. Other one-point bases exist; the simplest known one-point basis is a slight modification of S. In fact, there exist infinitely many such bases.[8]

In addition to S and K, Schönfinkel (1924) included two combinators which are now called B and C, with the following reductions:

(C x y z) = ((x z) y)
(B x y z) = (x (y z))

He also explains how they in turn can be expressed using only S and K, for example

B = (S (K S) K),
C = (S (S (K B) S) (K K)).

These combinators are extremely useful when translating predicate logic or lambda calculus into combinator expressions. They were also used by Curry, and much later by David Turner, whose name has been associated with their computational use. Using them, we can extend the rules for the transformation so that S is used only when the abstracted variable is free in both subterms:

T[λx.(E₁E₂)] ⇒ (C T[λx.E₁] T[E₂]) (if x is free in E₁ but not E₂)
T[λx.(E₁E₂)] ⇒ (B T[E₁] T[λx.E₂]) (if x is free in E₂ but not E₁)
T[λx.(E₁E₂)] ⇒ (S T[λx.E₁] T[λx.E₂]) (if x is free in both E₁ and E₂)

Using B and C combinators, the transformation of λx.λy.(y x) yields (C I). And indeed, (C I x y) does reduce to (y x):

(C I x y) = (I y x) = (y x).

The motivation here is that B and C are limited versions of S. Whereas S takes a value and substitutes it into both the applicand and its argument before performing the application, C performs the substitution only in the applicand, and B only in the argument.

The modern names for the combinators come from Haskell Curry's doctoral thesis of 1930 (see B, C, K, W System). In Schönfinkel's original paper, what we now call S, K, I, B and C were called S, C, I, Z, and T respectively.

The reduction in combinator size that results from the new transformation rules can also be achieved without introducing B and C, as demonstrated in Section 3.2 of Tromp (2008).

A distinction must be made between the CLK as described in this article and the CLI calculus. The distinction corresponds to that between the λK and the λI calculus.
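The claim that B and C are definable from S and K can be checked mechanically. The following sketch (continuing the earlier curried-lambda encoding, with our own variable names) builds B and C from S and K and tests their reduction behaviour.

```python
# B and C built from S and K:  B = S (K S) K,  C = S (S (K B) S) (K K).

S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = S(K)(K)

B = S(K(S))(K)              # (B x y z) = (x (y z))   -- composition
C = S(S(K(B))(S))(K(K))     # (C x y z) = ((x z) y)   -- argument swap

inc = lambda n: n + 1
dbl = lambda n: 2 * n
assert B(inc)(dbl)(10) == 21        # inc(dbl(10))

pair = lambda a: lambda b: (a, b)
assert C(pair)(1)(2) == (2, 1)      # (pair 2) 1
```

The expression given here for C is one that can be verified to satisfy the C reduction; Schönfinkel's own presentation may differ in form while being extensionally equal.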
Unlike the λK calculus, the λI calculus restricts abstractions to λx.E where x has at least one free occurrence in E. As a consequence, combinator K is not present in the λI calculus nor in the CLI calculus. The constants of CLI are: I, B, C and S, which form a basis from which all CLI terms can be composed (modulo equality). Every λI term can be converted into an equal CLI combinator according to rules similar to those presented above for the conversion of λK terms into CLK combinators. See chapter 9 in Barendregt (1984).

The conversion L[ ] from combinatorial terms to lambda terms is trivial: each primitive combinator is replaced by the lambda term given by its reduction rule (for example, L[I] = λx.x, L[K] = λx.λy.x, L[S] = λx.λy.λz.((x z)(y z))), and applications are translated componentwise, L[(E₁E₂)] = (L[E₁] L[E₂]). Note, however, that this transformation is not the inverse transformation of any of the versions of T[ ] that we have seen.

A normal form is any combinatory term in which the primitive combinators that occur, if any, are not applied to enough arguments to be simplified. It is undecidable whether a general combinatory term has a normal form; whether two combinatory terms are equivalent, etc. This can be shown in a similar way as for the corresponding problems for lambda terms.

The undecidable problems above (equivalence, existence of normal form, etc.) take as input syntactic representations of terms under a suitable encoding (e.g., Church encoding). One may also consider a toy trivial computation model where we "compute" properties of terms by means of combinators applied directly to the terms themselves as arguments, rather than to their syntactic representations. More precisely, let a predicate be a combinator that, when applied, returns either T or F (where T and F represent the conventional Church encodings of true and false, λx.λy.x and λx.λy.y, transformed into combinatory logic; the combinatory versions have T = K and F = (K I)). A predicate N is nontrivial if there are two arguments A and B such that N A = T and N B = F. A combinator N is complete if N M has a normal form for every argument M. An analogue of Rice's theorem for this toy model then says that every complete predicate is trivial. The proof of this theorem is rather simple.[9]

By reductio ad absurdum.
Suppose there is a complete non-trivial predicate, say N. Because N is supposed to be non-trivial, there are combinators A and B such that

(N A) = T and (N B) = F.

Define NEGATION ≡ λx.((N x) B A), which returns B when (N x) = T and A when (N x) = F; the fixed point theorem then gives

ABSURDUM = (NEGATION ABSURDUM)

for ABSURDUM ≡ (Y NEGATION), where Y is a fixed point combinator. Because N is supposed to be complete, either (N ABSURDUM) = T or (N ABSURDUM) = F. In the first case ABSURDUM = (NEGATION ABSURDUM) = B, so (N ABSURDUM) = (N B) = F; in the second case ABSURDUM = A, so (N ABSURDUM) = (N A) = T. Hence (N ABSURDUM) is neither T nor F, which contradicts the presupposition that N would be a complete non-trivial predicate. Q.E.D.

From this undefinability theorem it immediately follows that there is no complete predicate that can discriminate between terms that have a normal form and terms that do not have a normal form. It also follows that there is no complete predicate, say EQUAL, such that

(EQUAL A B) = T if A = B, and (EQUAL A B) = F if A ≠ B.

If EQUAL existed, then for all A, λx.(EQUAL x A) would have to be a complete non-trivial predicate.

However, note that it also immediately follows from this undefinability theorem that many properties of terms that are obviously decidable are not definable by complete predicates either: e.g., there is no predicate that could tell whether the first primitive function letter occurring in a term is a K. This shows that definability by predicates is not a reasonable model of decidability.

David Turner used his combinators to implement the SASL programming language. Kenneth E. Iverson used primitives based on Curry's combinators in his J programming language, a successor to APL. This enabled what Iverson called tacit programming, that is, programming in functional expressions containing no variables, along with powerful tools for working with such programs. It turns out that tacit programming is possible in any APL-like language with user-defined operators.[10]

The Curry–Howard isomorphism implies a connection between logic and programming: every proof of a theorem of intuitionistic logic corresponds to a reduction of a typed lambda term, and conversely. Moreover, theorems can be identified with function type signatures. Specifically, a typed combinatory logic corresponds to a Hilbert system in proof theory.
The K and S combinators correspond to the axioms

(AK): A → (B → A),
(AS): (A → (B → C)) → ((A → B) → (A → C)),

and function application corresponds to the detachment (modus ponens) rule

(MP): from A and A → B, infer B.

The calculus consisting of AK, AS, and MP is complete for the implicational fragment of intuitionistic logic, which can be seen as follows. Consider the set W of all deductively closed sets of formulas, ordered by inclusion. Then ⟨W, ⊆⟩ is an intuitionistic Kripke frame, and we define a model ⊩ in this frame by

X ⊩ A if and only if A ∈ X, for atomic formulas A.

This definition obeys the conditions on satisfaction of →: on one hand, if X ⊩ A → B, and Y ∈ W is such that Y ⊇ X and Y ⊩ A, then Y ⊩ B by modus ponens. On the other hand, if X ⊮ A → B, then X, A ⊬ B by the deduction theorem, thus the deductive closure of X ∪ {A} is an element Y ∈ W such that Y ⊇ X, Y ⊩ A, and Y ⊮ B.

Let A be any formula which is not provable in the calculus. Then A does not belong to the deductive closure X of the empty set, thus X ⊮ A, and A is not intuitionistically valid.
https://en.wikipedia.org/wiki/Combinatory_logic
In mathematics, a composition ring, introduced in (Adler 1962), is a commutative ring (R, 0, +, −, ·), possibly without an identity 1, together with an operation ∘ : R × R → R such that, for any three elements f, g, h ∈ R one has

(f + g) ∘ h = (f ∘ h) + (g ∘ h),
(f · g) ∘ h = (f ∘ h) · (g ∘ h),
(f ∘ g) ∘ h = f ∘ (g ∘ h).

It is not generally the case that f ∘ g = g ∘ f, nor is it generally the case that f ∘ (g + h) (or f ∘ (g · h)) has any algebraic relationship to f ∘ g and f ∘ h.

There are a few ways to make a commutative ring R into a composition ring without introducing anything new. More interesting examples can be formed by defining a composition on another ring constructed from R.

For a concrete example take the ring ℤ[x], considered as the ring of polynomial maps from the integers to itself. A ring endomorphism F of ℤ[x] is determined by the image under F of the variable x, which we denote by

f = F(x),

and this image f can be any element of ℤ[x]. Therefore, one may consider the elements f ∈ ℤ[x] as endomorphisms and assign ∘ : ℤ[x] × ℤ[x] → ℤ[x] accordingly. One easily verifies that ℤ[x] satisfies the above axioms. For example, one has

x² ∘ (x + 1) = (x + 1)² = x² + 2x + 1.

This example is isomorphic to the given example for R[X] with R equal to ℤ, and also to the subring of all functions ℤ → ℤ formed by the polynomial functions.
https://en.wikipedia.org/wiki/Composition_ring
In mathematics, a flow formalizes the idea of the motion of particles in a fluid. Flows are ubiquitous in science, including engineering and physics. The notion of flow is basic to the study of ordinary differential equations. Informally, a flow may be viewed as a continuous motion of points over time. More formally, a flow is a group action of the real numbers on a set.

The idea of a vector flow, that is, the flow determined by a vector field, occurs in the areas of differential topology, Riemannian geometry and Lie groups. Specific examples of vector flows include the geodesic flow, the Hamiltonian flow, the Ricci flow, the mean curvature flow, and Anosov flows. Flows may also be defined for systems of random variables and stochastic processes, and occur in the study of ergodic dynamical systems. The most celebrated of these is perhaps the Bernoulli flow.

A flow on a set X is a group action of the additive group of real numbers on X. More explicitly, a flow is a mapping

φ : X × ℝ → X

such that, for all x ∈ X and all real numbers s and t,

φ(x, 0) = x,
φ(φ(x, t), s) = φ(x, s + t).

It is customary to write φᵗ(x) instead of φ(x, t), so that the equations above can be expressed as φ⁰ = Id (the identity function) and φˢ ∘ φᵗ = φˢ⁺ᵗ (group law). Then, for all t ∈ ℝ, the mapping φᵗ : X → X is a bijection with inverse φ⁻ᵗ : X → X. This follows from the above definition, and the real parameter t may be taken as a generalized functional power, as in function iteration.

Flows are usually required to be compatible with structures furnished on the set X. In particular, if X is equipped with a topology, then φ is usually required to be continuous. If X is equipped with a differentiable structure, then φ is usually required to be differentiable. In these cases the flow forms a one-parameter group of homeomorphisms and diffeomorphisms, respectively.
In certain situations one might also consider local flows, which are defined only on some subset called the flow domain of φ. This is often the case with the flows of vector fields.

It is very common in many fields, including engineering, physics and the study of differential equations, to use a notation that makes the flow implicit. Thus, x(t) is written for φᵗ(x₀), and one might say that the variable x depends on the time t and the initial condition x = x₀. Examples are given below.

In the case of a flow of a vector field V on a smooth manifold X, the flow is often denoted in such a way that its generator is made explicit; for example, the flow of V may be written φᵗ_V.

Given x in X, the set {φ(x, t) : t ∈ ℝ} is called the orbit of x under φ. Informally, it may be regarded as the trajectory of a particle that was initially positioned at x. If the flow is generated by a vector field, then its orbits are the images of its integral curves.

Let f : ℝ → X be a time-dependent trajectory which is a bijective function. Then a flow can be defined by

φ(x, t) = f(t + f⁻¹(x)).

Let F : ℝⁿ → ℝⁿ be a (time-independent) vector field and x : ℝ → ℝⁿ the solution of the initial value problem

x′(t) = F(x(t)),  x(0) = x₀.

Then φ(x₀, t) = x(t) is the flow of the vector field F. It is a well-defined local flow provided that the vector field F : ℝⁿ → ℝⁿ is Lipschitz-continuous. Then φ : ℝⁿ × ℝ → ℝⁿ is also Lipschitz-continuous wherever defined. In general it may be hard to show that the flow φ is globally defined, but one simple criterion is that the vector field F is compactly supported.
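For a vector field whose flow is known in closed form, the group law can be checked directly. A minimal sketch, using the one-dimensional field F(x) = a·x (our choice of example), whose flow is φᵗ(x₀) = x₀·e^{at}:

```python
# The flow of F(x) = a*x and a direct check of the group law
# phi^s(phi^t(x0)) = phi^(s+t)(x0), up to floating-point error.

import math

a = 0.7  # an arbitrary growth rate

def phi(x0, t):
    """Closed-form flow of x' = a*x."""
    return x0 * math.exp(a * t)

x0 = 2.0
assert phi(x0, 0.0) == x0                                  # phi^0 = Id
assert abs(phi(phi(x0, 0.3), 0.5) - phi(x0, 0.8)) < 1e-12  # group law
assert abs(phi(phi(x0, 0.4), -0.4) - x0) < 1e-12           # phi^-t inverts phi^t
```

The same checks applied to a numerically integrated flow would hold only up to the integrator's discretization error.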
In the case of time-dependent vector fields F : ℝⁿ × ℝ → ℝⁿ, one denotes φ^{t,t₀}(x₀) = x(t + t₀), where x : ℝ → ℝⁿ is the solution of

x′(t) = F(x(t), t),  x(t₀) = x₀.

Then φ^{t,t₀}(x₀) is the time-dependent flow of F. It is not a "flow" by the definition above, but it can easily be seen as one by rearranging its arguments. Namely, the mapping

φ : (ℝⁿ × ℝ) × ℝ → ℝⁿ × ℝ,  φ((x₀, t₀), t) = (φ^{t,t₀}(x₀), t + t₀)

indeed satisfies the group law for the last variable:

φ^{t, s+t₀}(φ^{s,t₀}(x₀)) = φ^{t+s, t₀}(x₀).

One can see time-dependent flows of vector fields as special cases of time-independent ones by the following trick. Define

G(x, t) ≔ (F(x, t), 1),  y(t) ≔ (x(t + t₀), t + t₀).

Then y(t) is the solution of the "time-independent" initial value problem

y′(s) = G(y(s)),  y(0) = (x₀, t₀),

if and only if x(t) is the solution of the original time-dependent initial value problem. Furthermore, then the mapping φ is exactly the flow of the "time-independent" vector field G.

The flows of time-independent and time-dependent vector fields are defined on smooth manifolds exactly as they are defined on the Euclidean space ℝⁿ, and their local behavior is the same. However, the global topological structure of a smooth manifold is strongly manifest in what kind of global vector fields it can support, and flows of vector fields on smooth manifolds are indeed an important tool in differential topology. The bulk of studies in dynamical systems are conducted on smooth manifolds, which are thought of as "parameter spaces" in applications.

Formally: Let ℳ be a differentiable manifold.
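The autonomization trick above can be demonstrated numerically: integrating the time-dependent field directly, and integrating its time-independent extension G(x, s) = (F(x, s), 1) on the augmented state (x, s), produce the same trajectory. This is a sketch with our own helper names, using plain Euler steps for simplicity.

```python
# Time-dependent x' = F(x, t) versus its autonomous extension y' = G(y),
# integrated with identical Euler steps.

def euler_td(F, x0, t0, t1, n):
    """Euler for the time-dependent IVP x' = F(x, t), x(t0) = x0."""
    x, t, h = x0, t0, (t1 - t0) / n
    for _ in range(n):
        x += h * F(x, t)
        t += h
    return x

def euler_aut(G, y0, T, n):
    """Euler for the autonomous IVP y' = G(y), y(0) = y0, y = (x, s)."""
    y, h = list(y0), T / n
    for _ in range(n):
        dx, ds = G(y[0], y[1])
        y[0] += h * dx
        y[1] += h * ds
    return y

F = lambda x, t: t             # a simple time-dependent field: x' = t
G = lambda x, s: (F(x, s), 1)  # its time-independent extension

x_td = euler_td(F, 1.0, 0.0, 2.0, 1000)
x_aut, s = euler_aut(G, (1.0, 0.0), 2.0, 1000)
assert abs(x_td - x_aut) < 1e-12   # same trajectory, step for step
assert abs(s - 2.0) < 1e-12        # the extra component just tracks time
```

The extra state component plays the role of t + t₀ in the definition of y(t) above.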
Let T_p ℳ denote the tangent space of a point p ∈ ℳ. Let Tℳ be the complete tangent manifold; that is, Tℳ = ∪_{p∈ℳ} T_p ℳ. Let f : ℝ × ℳ → Tℳ be a time-dependent vector field on ℳ; that is, f is a smooth map such that for each t ∈ ℝ and p ∈ ℳ, one has f(t, p) ∈ T_p ℳ; that is, the map x ↦ f(t, x) maps each point to an element of its own tangent space. For a suitable interval I ⊆ ℝ containing 0, the flow of f is a function ϕ : I × ℳ → ℳ that satisfies

ϕ(0, x₀) = x₀ for all x₀ ∈ ℳ,
(d/dt)|_{t=t₀} ϕ(t, x₀) = f(t₀, ϕ(t₀, x₀)) for all x₀ ∈ ℳ, t₀ ∈ I.

Let Ω be a subdomain (bounded or not) of ℝⁿ (with n an integer). Denote by Γ its boundary (assumed smooth). Consider the following heat equation on Ω × (0, T), for T > 0:

uₜ − Δu = 0 in Ω × (0, T),
u = 0 on Γ × (0, T),

with the initial value condition u(0) = u⁰ in Ω.

The equation u = 0 on Γ × (0, T) corresponds to the homogeneous Dirichlet boundary condition. The mathematical setting for this problem can be the semigroup approach.
To use this tool, we introduce the unbounded operator Δ_D defined on L²(Ω) by its domain

D(Δ_D) = H²(Ω) ∩ H₀¹(Ω)

(see the classical Sobolev spaces with H^k(Ω) = W^{k,2}(Ω), and H₀¹(Ω) is the closure of the infinitely differentiable functions with compact support in Ω for the H¹(Ω) norm). For any v ∈ D(Δ_D), Δ_D v is the usual Laplacian of v.

With this operator, the heat equation becomes u′(t) = Δ_D u(t) and u(0) = u⁰. Thus, the flow corresponding to this equation is

φ(u⁰, t) = e^{tΔ_D} u⁰,

where exp(tΔ_D) is the (analytic) semigroup generated by Δ_D.

Again, let Ω be a subdomain (bounded or not) of ℝⁿ (with n an integer). We denote by Γ its boundary (assumed smooth). Consider the following wave equation on Ω × (0, T) (for T > 0):

uₜₜ − Δu = 0 in Ω × (0, T),
u = 0 on Γ × (0, T),

with the following initial conditions: u(0) = u^{1,0} in Ω and uₜ(0) = u^{2,0} in Ω.

Using the same semigroup approach as in the case of the heat equation above, we write the wave equation as a first-order-in-time partial differential equation by introducing the unbounded operator

𝒜 = ( 0  Id ; Δ_D  0 ),

with domain D(𝒜) = (H²(Ω) ∩ H₀¹(Ω)) × H₀¹(Ω) on H = H₀¹(Ω) × L²(Ω) (the operator Δ_D is defined in the previous example). We introduce the column vectors U = (u¹, u²)ᵀ (where u¹ = u and u² = uₜ) and U⁰ = (u^{1,0}, u^{2,0})ᵀ. With these notions, the wave equation becomes U′(t) = 𝒜U(t) and U(0) = U⁰. Thus, the flow corresponding to this equation is

φ(U⁰, t) = e^{t𝒜} U⁰,

where e^{t𝒜} is the (unitary) semigroup generated by 𝒜.

Ergodic dynamical systems, that is, systems exhibiting randomness, exhibit flows as well. The most celebrated of these is perhaps the Bernoulli flow.
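The Dirichlet heat semigroup exp(tΔ_D) described above can be approximated by finite differences. This is a rough numerical sketch on Ω = (0, 1), with our own helper names and an explicit Euler time stepper; it illustrates the contraction and positivity of the heat flow rather than any sharp analysis.

```python
# Explicit finite-difference approximation of u'(t) = Delta_D u(t)
# on (0, 1) with homogeneous Dirichlet boundary values u(0) = u(1) = 0.

def heat_step(u, h, dt):
    """One explicit Euler step of u' = u_xx; endpoints held at 0 (Dirichlet)."""
    v = u[:]
    for i in range(1, len(u) - 1):
        v[i] = u[i] + dt * (u[i - 1] - 2 * u[i] + u[i + 1]) / h**2
    return v

n = 50
h = 1.0 / n
dt = 0.4 * h * h                       # respects the stability bound dt <= h^2/2
u = [0.0] + [1.0] * (n - 1) + [0.0]    # initial datum: 1 on the interior

for _ in range(2000):
    u = heat_step(u, h, dt)

# The Dirichlet heat semigroup is a contraction: values decay toward 0,
# and nonnegativity is preserved (a discrete maximum principle).
assert max(u) < 1.0
assert all(x >= 0.0 for x in u)
```

With dt ≤ h²/2 each updated value is a convex combination of its neighbours, which is what makes the discrete maximum principle hold.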
The Ornstein isomorphism theorem states that, for any given entropy H, there exists a flow φ(x, t), called the Bernoulli flow, such that the flow at time t = 1, i.e. φ(x, 1), is a Bernoulli shift. Furthermore, this flow is unique, up to a constant rescaling of time. That is, if ψ(x, t) is another flow with the same entropy, then ψ(x, t) = φ(x, ct) for some constant c. The notion of uniqueness and isomorphism here is that of the isomorphism of dynamical systems. Many dynamical systems, including Sinai's billiards and Anosov flows, are isomorphic to Bernoulli shifts.
https://en.wikipedia.org/wiki/Flow_(mathematics)
In computer science, function composition is an act or mechanism to combine simple functions to build more complicated ones. Like the usual composition of functions in mathematics, the result of each function is passed as the argument of the next, and the result of the last one is the result of the whole. Programmers frequently apply functions to results of other functions, and almost all programming languages allow it. In some cases, the composition of functions is interesting as a function in its own right, to be used later. Such a function can always be defined, but languages with first-class functions make it easier. The ability to easily compose functions encourages factoring (breaking apart) functions for maintainability and code reuse. More generally, big systems might be built by composing whole programs. Narrowly speaking, function composition applies to functions that operate on a finite amount of data, each step sequentially processing it before handing it to the next. Functions that operate on potentially infinite data (a stream or other codata) are known as filters, and are instead connected in a pipeline, which is analogous to function composition and can execute concurrently. For example, suppose we have two functions f and g, as in z = f(y) and y = g(x). Composing them means we first compute y = g(x), and then use y to compute z = f(y). In the C language this can be written step by step, as y = g(x); z = f(y);. The steps can be combined if we don't give a name to the intermediate result: z = f(g(x));. Despite differences in length, these two implementations compute the same result. The second implementation requires only one line of code and is colloquially referred to as a "highly composed" form. Readability and hence maintainability is one advantage of highly composed forms, since they require fewer lines of code, minimizing a program's "surface area".[1] DeMarco and Lister empirically verify an inverse relationship between surface area and maintainability.[2] On the other hand, it may be possible to overuse highly composed forms.
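The two styles contrasted above can be sketched concretely (here in Python, with hypothetical functions g(x) = x + 1 and f(y) = 2y):

```python
def g(x):          # first step: y = g(x)
    return x + 1

def f(y):          # second step: z = f(y)
    return 2 * y

# Named-intermediate form: each result gets its own variable.
def foo_long(x):
    y = g(x)
    z = f(y)
    return z

# "Highly composed" form: the same computation on one line.
def foo_composed(x):
    return f(g(x))
```

Both forms return the same value for every input, e.g. foo_long(3) == foo_composed(3) == 8.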
A nesting of too many functions may have the opposite effect, making the code less maintainable. In a stack-based language, functional composition is even more natural: it is performed by concatenation, and is usually the primary method of program design. The above example becomes, in Forth, simply g f, which takes whatever was on the stack before, applies g, then f, and leaves the result on the stack. See postfix composition notation for the corresponding mathematical notation. Now suppose that the combination of calling f() on the result of g() is frequently useful, and we want to name it foo() so it can be used as a function in its own right. In most languages, we can define a new function implemented by composition; for example, in C, double foo(double x) { return f(g(x)); } (the long form with intermediates would work as well), and in Forth, : foo g f ;. In languages such as C, the only way to create a new function is to define it in the program source, which means that functions can't be composed at run time. An evaluation of an arbitrary composition of predefined functions, however, is possible. In functional programming languages, function composition can be naturally expressed as a higher-order function or operator. In other programming languages you can write your own mechanisms to perform function composition. In Haskell, the example foo = f ∘ g given above becomes foo = f . g, using the built-in composition operator (.), which can be read as f after g or g composed with f. The composition operator ∘ itself can be defined in Haskell using a lambda expression: (.) :: (b -> c) -> (a -> b) -> (a -> c) on the first line, and (.) f g = \x -> f (g x) on the second. The first line describes the type of (.): it takes a pair of functions, f, g, and returns a function (the lambda expression on the second line). Note that Haskell doesn't require specification of the exact input and output types of f and g; the a, b, c, and x are placeholders; only the relation between f and g matters (f must accept what g returns). This makes (.) a polymorphic operator.
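The two ideas in this passage, naming a composition as a function in its own right and composing predefined functions at run time via a higher-order operator, can be sketched (in Python rather than C or Forth; f and g are hypothetical):

```python
def g(x):
    return x + 1

def f(y):
    return 2 * y

# A named composition, defined once and used like any other function.
def foo(x):
    return f(g(x))

# A higher-order compose operator, analogous to Haskell's (.):
# compose(f, g) returns a NEW function computing f(g(x)).
def compose(f_outer, g_inner):
    return lambda x: f_outer(g_inner(x))

# Because compose builds functions as values, arbitrary compositions of
# predefined functions can be assembled at run time:
pipeline = compose(f, compose(g, g))   # computes f(g(g(x)))
```

Here foo(3) == 8 and pipeline(3) == f(g(g(3))) == 10; the pipeline was never written down as a named definition in the source.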
Variants of Lisp, especially Scheme, with their interchangeability of code and data together with the treatment of functions, lend themselves extremely well to a recursive definition of a variadic compositional operator. Many dialects of APL feature built-in function composition using the symbol ∘. This higher-order function extends function composition to dyadic application of the left-side function, such that A f∘g B is A f g B. Additionally, you can define function composition yourself; in dialects that do not support inline definition using braces, the traditional definition is available. Raku, like Haskell, has a built-in function composition operator; the main difference is that it is spelled as ∘ or o. Also like Haskell, you could define the operator yourself; in fact, in the Rakudo implementation the operator is defined in Raku code itself. Nim supports uniform function call syntax, which allows for arbitrary function composition through its method call syntax.[3] In Python, one way to define the composition for any group of functions is to use the reduce function (functools.reduce in Python 3). In JavaScript we can define it as a function which takes two functions f and g and produces a function. In C# we can define it as an extension method which takes Funcs f and g and produces a new Func. Languages like Ruby let you construct a binary operator yourself; moreover, native function composition operators were introduced in Ruby 2.6.[4] Notions of composition, including the principle of compositionality and composability, are so ubiquitous that numerous strands of research have separately evolved. The following is a sampling of the kind of research in which the notion of composition is central. Whole programs or systems can be treated as functions, which can be readily composed if their inputs and outputs are well-defined.[5] Pipelines allowing easy composition of filters were so successful that they became a design pattern of operating systems.
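The reduce-based Python definition referred to above can be sketched as follows (a common formulation; inc and double are hypothetical example functions):

```python
from functools import reduce

def compose(*funcs):
    """Compose any number of one-argument functions, rightmost applied first,
    so compose(f, g, h)(x) == f(g(h(x))). With no arguments it returns the
    identity function."""
    return reduce(lambda f, g: lambda x: f(g(x)), funcs, lambda x: x)

inc = lambda x: x + 1
double = lambda x: 2 * x
foo = compose(double, inc)   # foo(x) == double(inc(x))
```

Using the identity function as the initializer makes the operator truly variadic: compose() is the identity, and compose of a single function is that function.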
Imperative procedures with side effects violate referential transparency and therefore are not cleanly composable. However, if one considers the "state of the world" before and after running the code as its input and output, one gets a clean function. Composition of such functions corresponds to running the procedures one after the other. The monad formalism uses this idea to incorporate side effects and input/output (I/O) into functional languages.
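The "state of the world" view can be sketched directly: each side-effecting procedure is recast as a pure function from an old state to a new state, and running procedures in sequence becomes ordinary function composition (a minimal illustration; the state dictionary and the step functions are hypothetical):

```python
# Each "procedure" is recast as a pure function: old world-state -> new world-state.
def set_x(state):
    new = dict(state)              # never mutate the input state
    new["x"] = 42
    return new

def log_x(state):
    new = dict(state)
    new["log"] = new.get("log", []) + ["x is %d" % new["x"]]
    return new

def sequence(*steps):
    """Compose state-transforming steps; applying the result runs them in order."""
    def run(state):
        for step in steps:
            state = step(state)
        return state
    return run

program = sequence(set_x, log_x)   # composition == running one after the other
final = program({})
```

Because each step takes the whole state and returns a new one, the "effects" are visible in the result rather than hidden, which is the essence of the monadic treatment described above.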
https://en.wikipedia.org/wiki/Function_composition_(computer_science)
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events.[1] The term 'random variable' in its mathematical definition refers to neither randomness nor variability[2] but instead is a mathematical function in which the domain is a sample space of possible outcomes and the range is a measurable space of values. Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error.[1] However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup. In the formal mathematical language of measure theory, a random variable is defined as a measurable function from a probability measure space (called the sample space) to a measurable space. This allows consideration of the pushforward measure, which is called the distribution of the random variable; the distribution is thus a probability measure on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be independent. It is common to consider the special cases of discrete random variables and absolutely continuous random variables, corresponding to whether a random variable is valued in a countable subset or in an interval of real numbers. There are other important possibilities, especially in the theory of stochastic processes, wherein it is natural to consider random sequences or random functions. Sometimes a random variable is taken to be automatically valued in the real numbers, with more general random quantities instead being called random elements.
According toGeorge Mackey,Pafnuty Chebyshevwas the first person "to think systematically in terms of random variables".[3] Arandom variableX{\displaystyle X}is ameasurable functionX:Ω→E{\displaystyle X\colon \Omega \to E}from a sample spaceΩ{\displaystyle \Omega }as a set of possibleoutcomesto ameasurable spaceE{\displaystyle E}. The technical axiomatic definition requires the sample spaceΩ{\displaystyle \Omega }to belong to aprobability triple(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )}(see themeasure-theoretic definition). A random variable is often denoted by capitalRoman letterssuch asX,Y,Z,T{\displaystyle X,Y,Z,T}.[4] The probability thatX{\displaystyle X}takes on a value in a measurable setS⊆E{\displaystyle S\subseteq E}is written as In many cases,X{\displaystyle X}isreal-valued, i.e.E=R{\displaystyle E=\mathbb {R} }. In some contexts, the termrandom element(seeextensions) is used to denote a random variable not of this form. When theimage(or range) ofX{\displaystyle X}is finite orcountablyinfinite, the random variable is called adiscrete random variable[5]: 399and its distribution is adiscrete probability distribution, i.e. can be described by aprobability mass functionthat assigns a probability to each value in the image ofX{\displaystyle X}. If the image is uncountably infinite (usually aninterval) thenX{\displaystyle X}is called acontinuous random variable.[6][7]In the special case that it isabsolutely continuous, its distribution can be described by aprobability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous.[8] Any random variable can be described by itscumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value. 
The term "random variable" in statistics is traditionally limited to thereal-valuedcase (E=R{\displaystyle E=\mathbb {R} }). In this case, the structure of the real numbers makes it possible to define quantities such as theexpected valueandvarianceof a random variable, itscumulative distribution function, and themomentsof its distribution. However, the definition above is valid for anymeasurable spaceE{\displaystyle E}of values. Thus one can consider random elements of other setsE{\displaystyle E}, such as randomBoolean values,categorical values,complex numbers,vectors,matrices,sequences,trees,sets,shapes,manifolds, andfunctions. One may then specifically refer to arandom variable oftypeE{\displaystyle E}, or anE{\displaystyle E}-valued random variable. This more general concept of arandom elementis particularly useful in disciplines such asgraph theory,machine learning,natural language processing, and other fields indiscrete mathematicsandcomputer science, where one is often interested in modeling the random variation of non-numericaldata structures. In some cases, it is nonetheless convenient to represent each element ofE{\displaystyle E}, using one or more real numbers. In this case, a random element may optionally be represented as avector of real-valued random variables(all defined on the same underlying probability spaceΩ{\displaystyle \Omega }, which allows the different random variables tocovary). For example: If a random variableX:Ω→R{\displaystyle X\colon \Omega \to \mathbb {R} }defined on the probability space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},\operatorname {P} )}is given, we can ask questions like "How likely is it that the value ofX{\displaystyle X}is equal to 2?". This is the same as the probability of the event{ω:X(ω)=2}{\displaystyle \{\omega :X(\omega )=2\}\,\!}which is often written asP(X=2){\displaystyle P(X=2)\,\!}orpX(2){\displaystyle p_{X}(2)}for short. 
Recording all these probabilities of outputs of a random variableX{\displaystyle X}yields theprobability distributionofX{\displaystyle X}. The probability distribution "forgets" about the particular probability space used to defineX{\displaystyle X}and only records the probabilities of various output values ofX{\displaystyle X}. Such a probability distribution, ifX{\displaystyle X}is real-valued, can always be captured by itscumulative distribution function and sometimes also using aprobability density function,fX{\displaystyle f_{X}}. Inmeasure-theoreticterms, we use the random variableX{\displaystyle X}to "push-forward" the measureP{\displaystyle P}onΩ{\displaystyle \Omega }to a measurepX{\displaystyle p_{X}}onR{\displaystyle \mathbb {R} }. The measurepX{\displaystyle p_{X}}is called the "(probability) distribution ofX{\displaystyle X}" or the "law ofX{\displaystyle X}".[9]The densityfX=dpX/dμ{\displaystyle f_{X}=dp_{X}/d\mu }, theRadon–Nikodym derivativeofpX{\displaystyle p_{X}}with respect to some reference measureμ{\displaystyle \mu }onR{\displaystyle \mathbb {R} }(often, this reference measure is theLebesgue measurein the case of continuous random variables, or thecounting measurein the case of discrete random variables). The underlying probability spaceΩ{\displaystyle \Omega }is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such ascorrelation and dependenceorindependencebased on ajoint distributionof two or more random variables on the same probability space. In practice, one often disposes of the spaceΩ{\displaystyle \Omega }altogether and just puts a measure onR{\displaystyle \mathbb {R} }that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article onquantile functionsfor fuller development. Consider an experiment where a person is chosen at random. 
An example of a random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to their height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm. Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sumPMF⁡(0)+PMF⁡(2)+PMF⁡(4)+⋯{\displaystyle \operatorname {PMF} (0)+\operatorname {PMF} (2)+\operatorname {PMF} (4)+\cdots }. In examples such as these, thesample spaceis often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed. If{an},{bn}{\textstyle \{a_{n}\},\{b_{n}\}}are countable sets of real numbers,bn>0{\textstyle b_{n}>0}and∑nbn=1{\textstyle \sum _{n}b_{n}=1}, thenF=∑nbnδan(x){\textstyle F=\sum _{n}b_{n}\delta _{a_{n}}(x)}is a discrete distribution function. 
Hereδt(x)=0{\displaystyle \delta _{t}(x)=0}forx<t{\displaystyle x<t},δt(x)=1{\displaystyle \delta _{t}(x)=1}forx≥t{\displaystyle x\geq t}. Taking for instance an enumeration of all rational numbers as{an}{\displaystyle \{a_{n}\}}, one gets a discrete function that is not necessarily astep function(piecewiseconstant). The possible outcomes for one coin toss can be described by the sample spaceΩ={heads,tails}{\displaystyle \Omega =\{{\text{heads}},{\text{tails}}\}}. We can introduce a real-valued random variableY{\displaystyle Y}that models a $1 payoff for a successful bet on heads as follows:Y(ω)={1,ifω=heads,0,ifω=tails.{\displaystyle Y(\omega )={\begin{cases}1,&{\text{if }}\omega ={\text{heads}},\\[6pt]0,&{\text{if }}\omega ={\text{tails}}.\end{cases}}} If the coin is afair coin,Yhas aprobability mass functionfY{\displaystyle f_{Y}}given by:fY(y)={12,ify=1,12,ify=0,{\displaystyle f_{Y}(y)={\begin{cases}{\tfrac {1}{2}},&{\text{if }}y=1,\\[6pt]{\tfrac {1}{2}},&{\text{if }}y=0,\end{cases}}} A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbersn1andn2from {1, 2, 3, 4, 5, 6} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variableXgiven by the function that maps the pair to the sum:X((n1,n2))=n1+n2{\displaystyle X((n_{1},n_{2}))=n_{1}+n_{2}}and (if the dice arefair) has a probability mass functionfXgiven by:fX(S)=min(S−1,13−S)36,forS∈{2,3,4,5,6,7,8,9,10,11,12}{\displaystyle f_{X}(S)={\frac {\min(S-1,13-S)}{36}},{\text{ for }}S\in \{2,3,4,5,6,7,8,9,10,11,12\}} Formally, a continuous random variable is a random variable whosecumulative distribution functioniscontinuouseverywhere.[10]There are no "gaps", which would correspond to numbers which have a finite probability ofoccurring. 
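The two-dice mass function fX(S) = min(S − 1, 13 − S)/36 given above can be verified by brute-force enumeration of the 36 equally likely ordered pairs (a sketch in Python, using exact fractions):

```python
from fractions import Fraction
from collections import Counter

# Sample space: all ordered pairs (n1, n2) from two fair dice.
outcomes = [(n1, n2) for n1 in range(1, 7) for n2 in range(1, 7)]

# The random variable X maps each pair to its sum.
counts = Counter(n1 + n2 for n1, n2 in outcomes)
pmf = {s: Fraction(c, 36) for s, c in counts.items()}

# Closed form from the text: f_X(S) = min(S - 1, 13 - S) / 36.
closed_form = {s: Fraction(min(s - 1, 13 - s), 36) for s in range(2, 13)}
```

The enumerated distribution agrees with the closed form, and its probabilities sum to 1, as any probability mass function must.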
Instead, continuous random variablesalmost nevertake an exact prescribed valuec(formally,∀c∈R:Pr(X=c)=0{\textstyle \forall c\in \mathbb {R} :\;\Pr(X=c)=0}) but there is a positive probability that its value will lie in particularintervalswhich can bearbitrarily small. Continuous random variables usually admitprobability density functions(PDF), which characterize their CDF andprobability measures; such distributions are also calledabsolutely continuous; but some continuous distributions aresingular, or mixes of an absolutely continuous part and a singular part. An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case,X= the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to anyrangeof values. For example, the probability of choosing a number in [0, 180] is1⁄2. Instead of speaking of a probability mass function, we say that the probabilitydensityofXis 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set. 
More formally, given anyintervalI=[a,b]={x∈R:a≤x≤b}{\textstyle I=[a,b]=\{x\in \mathbb {R} :a\leq x\leq b\}}, a random variableXI∼U⁡(I)=U⁡[a,b]{\displaystyle X_{I}\sim \operatorname {U} (I)=\operatorname {U} [a,b]}is called a "continuous uniformrandom variable" (CURV) if the probability that it takes a value in asubintervaldepends only on the length of the subinterval. This implies that the probability ofXI{\displaystyle X_{I}}falling in any subinterval[c,d]⊆[a,b]{\displaystyle [c,d]\subseteq [a,b]}isproportionalto thelengthof the subinterval, that is, ifa≤c≤d≤b, one has Pr(XI∈[c,d])=d−cb−a{\displaystyle \Pr \left(X_{I}\in [c,d]\right)={\frac {d-c}{b-a}}} where the last equality results from theunitarity axiomof probability. Theprobability density functionof a CURVX∼U⁡[a,b]{\displaystyle X\sim \operatorname {U} [a,b]}is given by theindicator functionof its interval ofsupportnormalized by the interval's length:fX(x)={1b−a,a≤x≤b0,otherwise.{\displaystyle f_{X}(x)={\begin{cases}\displaystyle {1 \over b-a},&a\leq x\leq b\\0,&{\text{otherwise}}.\end{cases}}}Of particular interest is the uniform distribution on theunit interval[0,1]{\displaystyle [0,1]}. Samples of any desiredprobability distributionD{\displaystyle \operatorname {D} }can be generated by calculating thequantile functionofD{\displaystyle \operatorname {D} }on arandomly-generated numberdistributed uniformly on the unit interval. This exploitsproperties of cumulative distribution functions, which are a unifying framework for all random variables. 
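The last remark, that samples of any desired distribution can be generated by applying its quantile function to a uniform sample on the unit interval, can be sketched for an exponential target (the rate λ and the check point are arbitrary choices for illustration):

```python
import math
import random

def exp_quantile(u, lam):
    """Quantile (inverse CDF) of the exponential distribution with rate lam:
    solving u = 1 - exp(-lam * x) for x gives x = -ln(1 - u) / lam."""
    return -math.log(1.0 - u) / lam

def sample_exponential(lam, rng):
    # rng.random() is uniform on [0, 1); push it through the quantile function.
    return exp_quantile(rng.random(), lam)

rng = random.Random(0)
lam = 2.0
samples = [sample_exponential(lam, rng) for _ in range(100_000)]

# Empirical check against the exponential CDF F(x) = 1 - exp(-lam * x):
empirical = sum(s <= 0.5 for s in samples) / len(samples)
theoretical = 1.0 - math.exp(-lam * 0.5)
```

With 100,000 samples the empirical probability of the event {X ≤ 0.5} is close to the theoretical CDF value, illustrating the inverse-transform construction.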
Amixed random variableis a random variable whosecumulative distribution functionis neitherdiscretenoreverywhere-continuous.[10]It can be realized as a mixture of a discrete random variable and a continuous random variable; in which case theCDFwill be the weighted average of the CDFs of the component variables.[10] An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails,X= −1; otherwiseX= the value of the spinner as in the preceding example. There is a probability of1⁄2that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example. Most generally, every probability distribution on the real line is a mixture of discrete part, singular part, and an absolutely continuous part; seeLebesgue's decomposition theorem § Refinement. The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers). The most formal,axiomaticdefinition of a random variable involvesmeasure theory. Continuous random variables are defined in terms ofsetsof numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. theBanach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed asigma-algebrato constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, theBorel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite orcountably infinitenumber ofunionsand/orintersectionsof such intervals.[11] The measure-theoretic definition is as follows. Let(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}be aprobability spaceand(E,E){\displaystyle (E,{\mathcal {E}})}ameasurable space. 
Then an(E,E){\displaystyle (E,{\mathcal {E}})}-valued random variableis a measurable functionX:Ω→E{\displaystyle X\colon \Omega \to E}, which means that, for every subsetB∈E{\displaystyle B\in {\mathcal {E}}}, itspreimageisF{\displaystyle {\mathcal {F}}}-measurable;X−1(B)∈F{\displaystyle X^{-1}(B)\in {\mathcal {F}}}, whereX−1(B)={ω:X(ω)∈B}{\displaystyle X^{-1}(B)=\{\omega :X(\omega )\in B\}}.[12]This definition enables us to measure any subsetB∈E{\displaystyle B\in {\mathcal {E}}}in the target space by looking at its preimage, which by assumption is measurable. In more intuitive terms, a member ofΩ{\displaystyle \Omega }is a possible outcome, a member ofF{\displaystyle {\mathcal {F}}}is a measurable subset of possible outcomes, the functionP{\displaystyle P}gives the probability of each such measurable subset,E{\displaystyle E}represents the set of values that the random variable can take (such as the set of real numbers), and a member ofE{\displaystyle {\mathcal {E}}}is a "well-behaved" (measurable) subset ofE{\displaystyle E}(those for which the probability may be determined). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability. WhenE{\displaystyle E}is atopological space, then the most common choice for theσ-algebraE{\displaystyle {\mathcal {E}}}is theBorel σ-algebraB(E){\displaystyle {\mathcal {B}}(E)}, which is the σ-algebra generated by the collection of all open sets inE{\displaystyle E}. In such case the(E,E){\displaystyle (E,{\mathcal {E}})}-valued random variable is called anE{\displaystyle E}-valued random variable. Moreover, when the spaceE{\displaystyle E}is the real lineR{\displaystyle \mathbb {R} }, then such a real-valued random variable is called simply arandom variable. In this case the observation space is the set of real numbers. 
Recall that (Ω, F, P) is the probability space. For a real observation space, the function X : Ω → ℝ is a real-valued random variable if

{ω : X(ω) ≤ r} ∈ F for every r ∈ ℝ.

This definition is a special case of the above because the set {(−∞, r] : r ∈ ℝ} generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here measurability on this generating set follows from the fact that {ω : X(ω) ≤ r} = X⁻¹((−∞, r]). The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of expected value of a random variable, denoted E[X], and also called the first moment. In general, E[f(X)] is not equal to f(E[X]). Once the "average value" is known, one could then ask how far from this average value the values of X typically are, a question that is answered by the variance and standard deviation of a random variable. E[X] can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of X. Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables X, find a collection {f_i} of functions such that the expectation values E[f_i(X)] fully characterise the distribution of the random variable X. Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.).
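These notions can be illustrated with a small discrete example (a hypothetical three-point distribution, a minimal sketch): expectation and variance come straight from the mass function, and E[f(X)] differs from f(E[X]) in general:

```python
from fractions import Fraction

# A hypothetical discrete random variable: values with their probabilities.
pmf = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

def expect(f, pmf):
    """E[f(X)] = sum over values x of f(x) * P(X = x)."""
    return sum(f(x) * p for x, p in pmf.items())

mean = expect(lambda x: x, pmf)                        # E[X]
variance = expect(lambda x: (x - mean) ** 2, pmf)      # Var(X) = E[(X - E[X])^2]
second_moment = expect(lambda x: x ** 2, pmf)          # E[X^2], the second moment
```

Here E[X] = 1 and Var(X) = 1/2, while E[X²] = 3/2 ≠ (E[X])² = 1, a concrete instance of E[f(X)] ≠ f(E[X]).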
If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function f(X) = X of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a categorical random variable X that can take on the nominal values "red", "blue" or "green", the real-valued function [X = green] can be constructed; this uses the Iverson bracket, and has the value 1 if X has the value "green", 0 otherwise. Then, the expected value and other moments of this function can be determined. A new random variable Y can be defined by applying a real Borel measurable function g : ℝ → ℝ to the outcomes of a real-valued random variable X. That is, Y = g(X). The cumulative distribution function of Y is then

F_Y(y) = P(g(X) ≤ y).

If function g is invertible (i.e., h = g⁻¹ exists, where h is g's inverse function) and is either increasing or decreasing, then the previous relation can be extended to obtain

F_Y(y) = F_X(h(y)) if h is increasing, and F_Y(y) = 1 − F_X(h(y)) if h is decreasing (the latter for continuous X).

With the same hypotheses of invertibility of g, assuming also differentiability, the relation between the probability density functions can be found by differentiating both sides of the above expression with respect to y, in order to obtain[10]

f_Y(y) = f_X(h(y)) |dh(y)/dy|.

If there is no invertibility of g but each y admits at most a countable number of roots (i.e., a finite, or countably infinite, number of x_i such that y = g(x_i)) then the previous relation between the probability density functions can be generalized with

f_Y(y) = Σ_i f_X(x_i) |dx_i/dy|,

where x_i = g_i⁻¹(y), according to the inverse function theorem.
The formulas for densities do not demand g to be increasing. In the measure-theoretic, axiomatic approach to probability, given a random variable X on Ω and a Borel measurable function g : ℝ → ℝ, Y = g(X) is also a random variable on Ω, since the composition of measurable functions is also measurable. (However, this is not necessarily true if g is Lebesgue measurable.[citation needed]) The same procedure that allowed one to go from a probability space (Ω, P) to (ℝ, dF_X) can be used to obtain the distribution of Y. Let X be a real-valued, continuous random variable and let Y = X². If y < 0, then P(X² ≤ y) = 0, so F_Y(y) = 0. If y ≥ 0, then P(X² ≤ y) = P(−√y ≤ X ≤ √y), so

F_Y(y) = F_X(√y) − F_X(−√y).

Suppose X is a random variable with a cumulative distribution

F_X(x) = 1/(1 + e^{−x})^θ,

where θ > 0 is a fixed parameter. Consider the random variable Y = log(1 + e^{−X}). Then,

F_Y(y) = P(Y ≤ y) = P(log(1 + e^{−X}) ≤ y) = P(X ≥ −log(e^y − 1)).

The last expression can be calculated in terms of the cumulative distribution of X, so

F_Y(y) = 1 − F_X(−log(e^y − 1)) = 1 − 1/(1 + e^y − 1)^θ = 1 − e^{−yθ},

which is the cumulative distribution function (CDF) of an exponential distribution. Suppose X is a random variable with a standard normal distribution, whose density is

f_X(x) = (1/√(2π)) e^{−x²/2}.

Consider the random variable Y = X². We can find the density using the above formula for a change of variables. In this case the change is not monotonic, because every value of Y has two corresponding values of X (one positive and one negative). However, because of symmetry, both halves will transform identically, i.e.,

f_Y(y) = 2 f_X(g⁻¹(y)) |d g⁻¹(y)/dy|.

The inverse transformation is x = √y and its derivative is 1/(2√y). Then,

f_Y(y) = 2 · (1/√(2π)) e^{−y/2} · (1/(2√y)) = (1/√(2πy)) e^{−y/2}.

This is a chi-squared distribution with one degree of freedom.
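The change-of-variables result can be checked numerically (a sketch): plugging the standard normal density into the two-root formula f_Y(y) = (f_X(√y) + f_X(−√y))/(2√y) yields the chi-squared density with one degree of freedom, which should integrate to 1:

```python
import math

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def chi2_1_pdf(y):
    """Density of Y = X^2 via the two-root change-of-variables formula:
    f_Y(y) = (f_X(sqrt(y)) + f_X(-sqrt(y))) / (2 sqrt(y))."""
    r = math.sqrt(y)
    return (normal_pdf(r) + normal_pdf(-r)) / (2.0 * r)

# By the symmetry of the normal density this equals exp(-y/2) / sqrt(2 pi y);
# check at one point, and check the density integrates to ~1 over (0, 40]
# (midpoint rule, which avoids the integrable singularity at y = 0).
closed = math.exp(-0.5) / math.sqrt(2.0 * math.pi)
n = 200_000
h = 40.0 / n
total = sum(chi2_1_pdf((i + 0.5) * h) * h for i in range(n))
```

The pointwise agreement is exact up to floating point, and the numerical integral is close to 1 (the small deficit comes from the quadrature near the y^{-1/2} singularity at the origin).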
Suppose X is a random variable with a normal distribution, whose density is

f_X(x) = (1/√(2πσ²)) e^{−(x−μ)²/(2σ²)}.

Consider the random variable Y = X². We can find the density using the above formula for a change of variables. In this case the change is not monotonic, because every value of Y has two corresponding values of X (one positive and one negative). Unlike in the previous example, however, there is no symmetry here, and we have to compute the two distinct terms. The inverse transformation is x = ±√y and its derivative is ±1/(2√y). Then,

f_Y(y) = (1/(2√y)) (f_X(√y) + f_X(−√y)) = (1/(2√y)) (1/√(2πσ²)) (e^{−(√y−μ)²/(2σ²)} + e^{−(√y+μ)²/(2σ²)}).

This is a noncentral chi-squared distribution with one degree of freedom.
Finally, the two random variables X and Y are equal if they are equal as functions on their measurable space:

X(ω) = Y(ω) for all ω.

This notion is typically the least useful in probability theory because in practice and in theory, the underlying measure space of the experiment is rarely explicitly characterized or even characterizable.

Since we rarely explicitly construct the probability space underlying a random variable, the difference between these notions of equivalence is somewhat subtle. Essentially, two random variables considered in isolation are "practically equivalent" if they are equal in distribution, but once we relate them to other random variables defined on the same probability space, then they only remain "practically equivalent" if they are equal almost surely.

For example, consider the real random variables A, B, C, and D, all defined on the same probability space. Suppose that A and B are equal almost surely (A =_a.s. B), but A and C are only equal in distribution (A =_d C). Then A + D =_a.s. B + D, but in general A + D ≠ C + D (not even in distribution). Similarly, we have that the expectation values E(AD) = E(BD), but in general E(AD) ≠ E(CD). Therefore, two random variables that are equal in distribution (but not equal almost surely) can have different covariances with a third random variable.

A significant theme in mathematical statistics consists of obtaining convergence results for certain sequences of random variables; for instance the law of large numbers and the central limit theorem. There are various senses in which a sequence X_n of random variables can converge to a random variable X. These are explained in the article on convergence of random variables.
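The claim about covariances can be illustrated numerically. In this sketch, A and B coincide pointwise (so they are trivially equal almost surely), while C = −A has the same N(0, 1) distribution as A but differs from it almost surely; the sample size and tolerances are arbitrary.

```python
import random

# A and C are equal in distribution (both standard normal) but not
# almost surely equal; D lives on the same probability space as A.
# We check that E[A*D] and E[C*D] differ, as the text claims.
random.seed(0)
n = 100_000
xs = [random.gauss(0, 1) for _ in range(n)]

A = xs                 # A(omega) = X(omega)
C = [-x for x in xs]   # C(omega) = -X(omega): same N(0, 1) distribution
D = xs                 # D(omega) = X(omega)

E_AD = sum(a * d for a, d in zip(A, D)) / n   # approx E[X^2] = 1
E_CD = sum(c * d for c, d in zip(C, D)) / n   # approx -E[X^2] = -1

assert abs(E_AD - 1) < 0.05
assert abs(E_CD + 1) < 0.05
```

Although A and C are indistinguishable in isolation, their products with D have opposite expectations, so their covariances with D differ.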
https://en.wikipedia.org/wiki/Random_variable#Functions_of_random_variables
In engineering, functional decomposition is the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts.

This process of decomposition may be undertaken to gain insight into the identity of the constituent components, which may reflect individual physical processes of interest. Also, functional decomposition may result in a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e., independence or non-interaction). Interactions (situations in which one causal variable depends on the state of a second causal variable) between the components are critical to the function of the collection. All interactions may not be observable or measurable, but may possibly be deduced through repeated observation, synthesis, validation and verification of composite behavior.

Decomposition of a function into non-interacting components generally permits more economical representations of the function. Intuitively, this reduction in representation size is achieved simply because each variable depends only on a subset of the other variables. Thus, variable x₁ depends directly only on variable x₂, rather than depending on the entire set of variables. We would say that variable x₂ screens off variable x₁ from the rest of the world.

Practical examples of this phenomenon surround us. Consider the particular case of "northbound traffic on the West Side Highway." Let us assume this variable (x₁) takes on three possible values of {"moving slow", "moving deadly slow", "not moving at all"}.
Now, let's say the variable x₁ depends on two other variables: "weather", with values of {"sun", "rain", "snow"}, and "GW Bridge traffic", with values {"10 mph", "5 mph", "1 mph"}. The point here is that while there are certainly many secondary variables that affect the weather variable (e.g., a low pressure system over Canada, a butterfly flapping its wings in Japan, etc.) and the bridge traffic variable (e.g., an accident on I-95, a presidential motorcade, etc.), all these other secondary variables are not directly relevant to the West Side Highway traffic. All we need (hypothetically) in order to predict the West Side Highway traffic is the weather and the GW Bridge traffic, because these two variables screen off West Side Highway traffic from all other potential influences. That is, all other influences act through them.

Practical applications of functional decomposition are found in Bayesian networks, structural equation modeling, linear systems, and database systems.

Processes related to functional decomposition are prevalent throughout the fields of knowledge representation and machine learning. Hierarchical model induction techniques such as logic circuit minimization, decision trees, grammatical inference, hierarchical clustering, and quadtree decomposition are all examples of function decomposition.

Many statistical inference methods can be thought of as implementing a function decomposition process in the presence of noise; that is, where functional dependencies are only expected to hold approximately. Among such models are mixture models and the recently popular methods referred to as "causal decompositions" or Bayesian networks.

See database normalization.

In practical scientific applications, it is almost never possible to achieve perfect functional decomposition because of the incredible complexity of the systems under study. This complexity is manifested in the presence of "noise," which is just a designation for all the unwanted and untraceable influences on our observations.
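The economy of a screened-off representation can be sketched in code. All the probability numbers below are invented for illustration; the point is only that the joint over (weather, bridge, x₁) is assembled from small factors rather than stored as one monolithic table.

```python
# A toy factorization illustrating "screening off": the West Side Highway
# state x1 depends only on weather and GW Bridge traffic, so
# P(weather, bridge, x1) = P(weather) * P(bridge) * P(x1 | weather, bridge)
# needs far fewer numbers than a full joint table over every secondary
# variable (low pressure systems, accidents on I-95, and so on).

P_weather = {"sun": 0.6, "rain": 0.3, "snow": 0.1}
P_bridge = {"10mph": 0.5, "5mph": 0.3, "1mph": 0.2}

def p_x1_given(weather, bridge):
    # Hypothetical conditional distribution over the three traffic states.
    if bridge == "1mph" or weather == "snow":
        return {"slow": 0.1, "deadly slow": 0.3, "stopped": 0.6}
    if bridge == "5mph" or weather == "rain":
        return {"slow": 0.3, "deadly slow": 0.5, "stopped": 0.2}
    return {"slow": 0.7, "deadly slow": 0.2, "stopped": 0.1}

# The full joint, assembled from the factors; it must sum to 1.
joint = {
    (w, b, x): P_weather[w] * P_bridge[b] * p_x1_given(w, b)[x]
    for w in P_weather for b in P_bridge
    for x in ("slow", "deadly slow", "stopped")
}
assert abs(sum(joint.values()) - 1.0) < 1e-9
```

To predict x₁ one only needs `p_x1_given`, which is exactly the screening-off property described above.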
However, while perfect functional decomposition is usually impossible, the spirit lives on in a large number of statistical methods that are equipped to deal with noisy systems. When a natural or artificial system is intrinsically hierarchical, the joint distribution on system variables should provide evidence of this hierarchical structure. The task of an observer who seeks to understand the system is then to infer the hierarchical structure from observations of these variables. This is the notion behind the hierarchical decomposition of a joint distribution: the attempt to recover something of the intrinsic hierarchical structure which generated that joint distribution.

As an example, Bayesian network methods attempt to decompose a joint distribution along its causal fault lines, thus "cutting nature at its seams". The essential motivation behind these methods is again that within most systems (natural or artificial), relatively few components/events interact with one another directly on equal footing.[1] Rather, one observes pockets of dense connections (direct interactions) among small subsets of components, but only loose connections between these densely connected subsets. There is thus a notion of "causal proximity" in physical systems under which variables naturally precipitate into small clusters. Identifying these clusters and using them to represent the joint distribution provides the basis for great efficiency of storage (relative to the full joint distribution) as well as for potent inference algorithms.

Functional decomposition is also a design method intended to produce a non-implementation, architectural description of a computer program. The software architect first establishes a series of functions and types that accomplish the main processing problem of the computer program, decomposes each to reveal common functions and types, and finally derives modules from this activity.
Functional decomposition is used in the analysis of many signal processing systems, such as LTI systems. The input signal to an LTI system can be expressed as a function, f(t). Then f(t) can be decomposed into a linear combination of other functions, called component signals:

f(t) = a₁·g₁(t) + a₂·g₂(t) + a₃·g₃(t) + … + a_n·g_n(t)

Here, {g₁(t), g₂(t), g₃(t), …, g_n(t)} are the component signals. Note that {a₁, a₂, a₃, …, a_n} are constants. This decomposition aids in analysis, because now the output of the system can be expressed in terms of the components of the input. If we let T{} represent the effect of the system, then the output signal is T{f(t)}, which, by linearity, can be expressed as:

T{f(t)} = a₁·T{g₁(t)} + a₂·T{g₂(t)} + … + a_n·T{g_n(t)}

In other words, the system can be seen as acting separately on each of the components of the input signal. Commonly used examples of this type of decomposition are the Fourier series and the Fourier transform.

Functional decomposition in systems engineering refers to the process of defining a system in functional terms, then defining lower-level functions and sequencing relationships from these higher-level system functions.[2] The basic idea is to try to divide a system in such a way that each block of a block diagram can be described without an "and" or "or" in the description.

This exercise forces each part of the system to have a pure function. When a system is designed as pure functions, they can be reused, or replaced. A usual side effect is that the interfaces between blocks become simple and generic. Since the interfaces usually become simple, it is easier to replace a pure function with a related, similar function.

For example, say that one needs to make a stereo system. One might functionally decompose this into speakers, amplifier, a tape deck and a front panel. Later, when a different model needs an audio CD player, it can probably fit the same interfaces.
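The linearity property T{f} = Σ aᵢ·T{gᵢ} can be verified for a concrete discrete LTI system, namely convolution with a fixed impulse response. The kernel and component signals below are arbitrary examples.

```python
# A discrete LTI system (convolution with a fixed kernel) acting on a
# signal decomposed as f = a1*g1 + a2*g2. Linearity means
# T{f} = a1*T{g1} + a2*T{g2}, which we verify numerically.
def convolve(signal, kernel):
    n, m = len(signal), len(kernel)
    return [sum(kernel[j] * signal[i - j] for j in range(m) if 0 <= i - j < n)
            for i in range(n + m - 1)]

kernel = [0.5, 0.3, 0.2]   # impulse response of the system T
g1 = [1, 0, -1, 0, 1]      # component signals (arbitrary)
g2 = [0, 2, 0, -2, 0]
a1, a2 = 3.0, -1.5

f = [a1 * u + a2 * v for u, v in zip(g1, g2)]
lhs = convolve(f, kernel)                       # T{f}
rhs = [a1 * u + a2 * v
       for u, v in zip(convolve(g1, kernel), convolve(g2, kernel))]

assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))
```

Applying the system to the whole signal and applying it component-by-component give the same output, which is exactly why such decompositions (Fourier series, Fourier transform) are useful.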
https://en.wikipedia.org/wiki/Functional_decomposition
In mathematics, a functional equation[1][2] is, in the broadest meaning, an equation in which one or several functions appear as unknowns. So, differential equations and integral equations are functional equations. However, a more restricted meaning is often used, where a functional equation is an equation that relates several values of the same function. For example, the logarithm functions are essentially characterized by the logarithmic functional equation

log(xy) = log(x) + log(y).

If the domain of the unknown function is supposed to be the natural numbers, the function is generally viewed as a sequence, and, in this case, a functional equation (in the narrower meaning) is called a recurrence relation. Thus the term functional equation is used mainly for real functions and complex functions. Moreover, a smoothness condition is often assumed for the solutions, since without such a condition, most functional equations have very irregular solutions. For example, the gamma function is a function that satisfies the functional equation

f(x + 1) = x f(x)

and the initial value f(1) = 1. There are many functions that satisfy these conditions, but the gamma function is the unique one that is meromorphic in the whole complex plane and logarithmically convex for x real and positive (Bohr–Mollerup theorem).

One feature that all of the examples listed above have in common is that, in each case, two or more known functions (sometimes multiplication by a constant, sometimes addition of two variables, sometimes the identity function) are inside the argument of the unknown functions to be solved for.
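Both functional equations above can be checked numerically with the standard library; the sample points are arbitrary.

```python
import math

# The gamma function satisfies f(x + 1) = x * f(x) with f(1) = 1.
for x in (0.5, 1.0, 2.3, 7.0):
    assert math.isclose(math.gamma(x + 1), x * math.gamma(x), rel_tol=1e-12)
assert math.isclose(math.gamma(1.0), 1.0)

# The logarithmic functional equation log(xy) = log(x) + log(y).
assert math.isclose(math.log(6.0), math.log(2.0) + math.log(3.0))
```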
When it comes to asking for all solutions, it may be the case that conditions from mathematical analysis should be applied; for example, in the case of the Cauchy equation mentioned above, the solutions that are continuous functions are the 'reasonable' ones, while other solutions that are not likely to have practical application can be constructed (by using a Hamel basis for the real numbers as vector space over the rational numbers). The Bohr–Mollerup theorem is another well-known example.

The involutions are characterized by the functional equation f(f(x)) = x. These appear in Babbage's functional equation (1820).[3] Other involutions, and solutions of the equation, include f(x) = a − x and f(x) = a/x, as well as the family f(x) = (b − x)/(1 + cx), which includes the previous ones as special cases or limits.

One method of solving elementary functional equations is substitution. Some solutions to functional equations have exploited surjectivity, injectivity, oddness, and evenness. Some functional equations have been solved with the use of ansatzes and mathematical induction. Some classes of functional equations can be solved by computer-assisted techniques.[4]

In dynamic programming, a variety of successive approximation methods[5][6] are used to solve Bellman's functional equation, including methods based on fixed point iterations.
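Babbage's equation f(f(x)) = x is easy to test numerically for a few classical involutions; the constants a, b, c and the evaluation points below are arbitrary choices (avoiding the poles of each function).

```python
# Numerically checking Babbage's functional equation f(f(x)) = x.
involutions = [
    lambda x: -x,                           # negation
    lambda x: 5.0 - x,                      # f(x) = a - x, with a = 5
    lambda x: 1.0 / x,                      # f(x) = a / x, with a = 1 (x != 0)
    lambda x: (2.0 - x) / (1.0 + 3.0 * x),  # f(x) = (b - x)/(1 + cx),
                                            # with b = 2, c = 3
]
for f in involutions:
    for x in (0.7, 1.9, -2.4):
        assert abs(f(f(x)) - x) < 1e-9
```

Applying each function twice returns the starting point, which is the defining property of an involution.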
https://en.wikipedia.org/wiki/Functional_equation
Explainable AI (XAI), often overlapping with interpretable AI or explainable machine learning (XML), is a field of research within artificial intelligence (AI) that explores methods that provide humans with the ability of intellectual oversight over AI algorithms.[1][2] The main focus is on the reasoning behind the decisions or predictions made by the AI algorithms,[3] to make them more understandable and transparent.[4] This addresses users' requirement to assess safety and scrutinize the automated decision making in applications.[5] XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.[6][7]

XAI hopes to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason.[8] XAI may be an implementation of the social right to explanation.[9] Even if there is no such legal right or regulatory requirement, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions.[10] XAI aims to explain what has been done, what is being done, and what will be done next, and to unveil which information these actions are based on.[11] This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions.[12]

Machine learning (ML) algorithms used in AI can be categorized as white-box or black-box.[13] White-box models provide results that are understandable to experts in the domain. Black-box models, on the other hand, are extremely hard to explain and may not be understood even by domain experts.[14] XAI algorithms follow the three principles of transparency, interpretability, and explainability.
In summary, interpretability refers to the user's ability to understand model outputs, while model transparency includes simulatability (reproducibility of predictions), decomposability (intuitive explanations for parameters), and algorithmic transparency (explaining how algorithms work). Model functionality focuses on textual descriptions, visualization, and local explanations, which clarify specific outputs or instances rather than entire models. All these concepts aim to enhance the comprehensibility and usability of AI systems.[20] If algorithms fulfill these principles, they provide a basis for justifying decisions, tracking them and thereby verifying them, improving the algorithms, and exploring new facts.[21]

Sometimes it is also possible to achieve a high-accuracy result with white-box ML algorithms. These algorithms have an interpretable structure that can be used to explain predictions.[22] Concept Bottleneck Models, which use concept-level abstractions to explain model reasoning, are examples of this and can be applied in both image[23] and text[24] prediction tasks. This is especially important in domains like medicine, defense, finance, and law, where it is crucial to understand decisions and build trust in the algorithms.[11] Many researchers argue that, at least for supervised machine learning, the way forward is symbolic regression, where the algorithm searches the space of mathematical expressions to find the model that best fits a given dataset.[25][26][27]

AI systems optimize behavior to satisfy a mathematically specified goal system chosen by the system designers, such as the command "maximize the accuracy of assessing how positive film reviews are in the test dataset." The AI may learn useful general rules from the test set, such as "reviews containing the word 'horrible' are likely to be negative."
However, it may also learn inappropriate rules, such as "reviews containing 'Daniel Day-Lewis' are usually positive"; such rules may be undesirable if they are likely to fail to generalize outside the training set, or if people consider the rule to be "cheating" or "unfair." A human can audit rules in an XAI to get an idea of how likely the system is to generalize to future real-world data outside the test set.[28]

Cooperation between agents – in this case, algorithms and humans – depends on trust. If humans are to accept algorithmic prescriptions, they need to trust them. Incompleteness in formal trust criteria is a barrier to optimization. Transparency, interpretability, and explainability are intermediate goals on the road to these more comprehensive trust criteria.[29] This is particularly relevant in medicine,[30] especially with clinical decision support systems (CDSS), in which medical professionals should be able to understand how and why a machine-based decision was made in order to trust the decision and augment their decision-making process.[31]

AI systems sometimes learn undesirable tricks that do an optimal job of satisfying explicit pre-programmed goals on the training data but do not reflect the more nuanced implicit desires of the human system designers or the full complexity of the domain data. For example, a 2017 system tasked with image recognition learned to "cheat" by looking for a copyright tag that happened to be associated with horse pictures rather than learning how to tell if a horse was actually pictured.[7] In another 2017 system, a supervised learning AI tasked with grasping items in a virtual world learned to cheat by placing its manipulator between the object and the viewer in a way such that it falsely appeared to be grasping the object.[32][33]

One transparency project, the DARPA XAI program, aims to produce "glass box" models that are explainable to a "human-in-the-loop" without greatly sacrificing AI performance.
Human users of such a system can understand the AI's cognition (both in real-time and after the fact) and can determine whether to trust the AI.[34] Other applications of XAI are knowledge extraction from black-box models and model comparisons.[35] In the context of monitoring systems for ethical and socio-legal compliance, the term "glass box" is commonly used to refer to tools that track the inputs and outputs of the system in question, and provide value-based explanations for their behavior. These tools aim to ensure that the system operates in accordance with ethical and legal standards, and that its decision-making processes are transparent and accountable. The term "glass box" is often used in contrast to "black box" systems, which lack transparency and can be more difficult to monitor and regulate.[36] The term is also used to name a voice assistant that produces counterfactual statements as explanations.[37]

There is a subtle difference between the terms explainability and interpretability in the context of AI.[38]

Some explainability techniques don't involve understanding how the model works, and may work across various AI systems. Treating the model as a black box and analyzing how marginal changes to the inputs affect the result sometimes provides a sufficient explanation. Explainability is useful for ensuring that AI models are not making decisions based on irrelevant or otherwise unfair criteria. For classification and regression models, several popular techniques exist. For images, saliency maps highlight the parts of an image that most influenced the result.[43]

Expert or knowledge-based systems are software systems built by domain experts. They consist of a knowledge base encoding the domain knowledge, usually modeled as production rules, and the user can question the system against this knowledge base.
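One such black-box technique for classification and regression models, permutation feature importance, can be sketched in a few lines: shuffle one feature at a time and measure the drop in accuracy. The two-feature "model" and the data below are hypothetical stand-ins, not from any cited system.

```python
import random

# Permutation feature importance: if shuffling a feature's column hurts
# accuracy, the model relied on that feature; if not, it did not.
random.seed(0)

def model(x):
    # Hypothetical classifier: depends on feature 0, ignores feature 1.
    return 1 if x[0] > 0.5 else 0

data = [[random.random(), random.random()] for _ in range(1000)]
labels = [1 if x[0] > 0.5 else 0 for x in data]

def accuracy(rows):
    return sum(model(x) == y for x, y in zip(rows, labels)) / len(rows)

base = accuracy(data)
importances = []
for j in range(2):
    col = [x[j] for x in data]
    random.shuffle(col)
    permuted = [x[:j] + [col[i]] + x[j + 1:] for i, x in enumerate(data)]
    importances.append(base - accuracy(permuted))

# Feature 0 matters (large accuracy drop); feature 1 does not.
assert importances[0] > 0.2
assert abs(importances[1]) < 0.05
```

Because the technique only queries the model's inputs and outputs, it applies to any black-box predictor, which is exactly the appeal described above.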
In expert systems, explanations are given in language the user understands, by explaining the reasoning behind a problem-solving activity.[5]

However, these techniques are not very suitable for language models like generative pretrained transformers. Since these models generate language, they can provide an explanation, but it may not be reliable. Other techniques include attention analysis (examining how the model focuses on different parts of the input), probing methods (testing what information is captured in the model's representations), causal tracing (tracing the flow of information through the model) and circuit discovery (identifying specific subnetworks responsible for certain behaviors). Explainability research in this area overlaps significantly with interpretability and alignment research.[44]

Scholars sometimes use the term "mechanistic interpretability" to refer to the process of reverse-engineering artificial neural networks to understand their internal decision-making mechanisms and components, similar to how one might analyze a complex machine or computer program.[46]

Interpretability research often focuses on generative pretrained transformers. It is particularly relevant for AI safety and alignment, as it may enable identifying signs of undesired behaviors such as sycophancy, deceptiveness or bias, and help to better steer AI models.[47]

Studying the interpretability of the most advanced foundation models often involves searching for an automated way to identify "features" in generative pretrained transformers. In a neural network, a feature is a pattern of neuron activations that corresponds to a concept. A compute-intensive technique called "dictionary learning" makes it possible to identify features to some degree.
Enhancing the ability to identify and edit features is expected to significantly improve the safety of frontier AI models.[48][49]

For convolutional neural networks, DeepDream can generate images that strongly activate a particular neuron, providing a visual hint about what the neuron is trained to identify.[50]

During the 1970s to 1990s, symbolic reasoning systems, such as MYCIN,[51] GUIDON,[52] SOPHIE,[53] and PROTOS,[54][55] could represent, reason about, and explain their reasoning for diagnostic, instructional, or machine-learning (explanation-based learning) purposes. MYCIN, developed in the early 1970s as a research prototype for diagnosing bacteremia infections of the bloodstream, could explain[56] which of its hand-coded rules contributed to a diagnosis in a specific case. Research in intelligent tutoring systems resulted in developing systems such as SOPHIE that could act as an "articulate expert", explaining problem-solving strategy at a level the student could understand, so they would know what action to take next. For instance, SOPHIE could explain the qualitative reasoning behind its electronics troubleshooting, even though it ultimately relied on the SPICE circuit simulator. Similarly, GUIDON added tutorial rules to supplement MYCIN's domain-level rules so it could explain the strategy for medical diagnosis. Symbolic approaches to machine learning relying on explanation-based learning, such as PROTOS, made use of explicit representations of explanations expressed in a dedicated explanation language, both to explain their actions and to acquire new knowledge.[55]

In the 1980s through the early 1990s, truth maintenance systems (TMS) extended the capabilities of causal-reasoning, rule-based, and logic-based inference systems.[57]: 360–362 A TMS explicitly tracks alternate lines of reasoning, justifications for conclusions, and lines of reasoning that lead to contradictions, allowing future reasoning to avoid these dead ends.
To provide an explanation, a TMS traces reasoning from conclusions to assumptions through rule operations or logical inferences, allowing explanations to be generated from the reasoning traces. As an example, consider a rule-based problem solver with just a few rules about Socrates that concludes he has died from poison. Just by tracing through the dependency structure, the problem solver can construct the following explanation: "Socrates died because he was mortal and drank poison, and all mortals die when they drink poison. Socrates was mortal because he was a man and all men are mortal. Socrates drank poison because he held dissident beliefs, the government was conservative, and those holding conservative dissident beliefs under conservative governments must drink poison."[58]: 164–165

By the 1990s researchers began studying whether it is possible to meaningfully extract the non-hand-coded rules being generated by opaque trained neural networks.[59] Researchers creating clinical expert systems with neural network-powered decision support for clinicians sought to develop dynamic explanations that allow these technologies to be more trusted and trustworthy in practice.[9] In the 2010s, public concerns about racial and other bias in the use of AI for criminal sentencing decisions and findings of creditworthiness may have led to increased demand for transparent artificial intelligence.[7] As a result, many academics and organizations are developing tools to help detect bias in their systems.[60]

Marvin Minsky et al. raised the issue that AI can function as a form of surveillance, with the biases inherent in surveillance, suggesting HI (humanistic intelligence) as a way to create a more fair and balanced "human-in-the-loop" AI.[61]

Explainable AI has recently become a new topic of research in the context of modern deep learning.
Modern complex AI techniques, such as deep learning, are naturally opaque.[62] To address this issue, methods have been developed to make new models more explainable and interpretable.[63][17][16][64][65][66] This includes layerwise relevance propagation (LRP), a technique for determining which features in a particular input vector contribute most strongly to a neural network's output.[67][68] Other techniques explain some particular prediction made by a (nonlinear) black-box model, a goal referred to as "local interpretability".[69][70][71][72][73][74] Even today, the outputs of deep neural networks cannot be explained without such additional explanatory mechanisms, whether built into the network itself or supplied by external explanatory components.[75] There is also research on whether the concepts of local interpretability can be applied to a remote context, where a model is operated by a third party.[76][77]

There has been work on making glass-box models which are more transparent to inspection.[22][78] This includes decision trees,[79] Bayesian networks, sparse linear models,[80] and more.[81] The Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT) was established in 2018 to study transparency and explainability in the context of socio-technical systems, many of which include artificial intelligence.[82][83]

Some techniques allow visualisations of the inputs to which individual software neurons respond most strongly. Several groups found that neurons can be aggregated into circuits that perform human-comprehensible functions, some of which reliably arise across different networks trained independently.[84][85]

There are various techniques to extract compressed representations of the features of given inputs, which can then be analysed by standard clustering techniques.
Alternatively, networks can be trained to output linguistic explanations of their behaviour, which are then directly human-interpretable.[86] Model behaviour can also be explained with reference to training data, for example by evaluating which training inputs influenced a given behaviour the most.[87]

Explainable artificial intelligence (XAI) has been used in pain research, specifically in understanding the role of electrodermal activity for automated pain recognition. Comparisons of hand-crafted features and deep learning models in pain recognition highlight the insight that simple hand-crafted features can yield performance comparable to deep learning models, and that both traditional feature engineering and deep feature learning approaches rely on simple characteristics of the input time-series data.[88]

As regulators, official bodies, and general users come to depend on AI-based dynamic systems, clearer accountability will be required for automated decision-making processes to ensure trust and transparency. The first global conference exclusively dedicated to this emerging discipline was the 2017 International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI).[89] It has evolved over the years, with various workshops organised and co-located with many other international conferences, and it now has a dedicated global event, "The World Conference on eXplainable Artificial Intelligence", with its own proceedings.[90][91]

The European Union introduced a right to explanation in the General Data Protection Regulation (GDPR) to address potential problems stemming from the rising importance of algorithms. The implementation of the regulation began in 2018. However, the right to explanation in GDPR covers only the local aspect of interpretability.
In the United States, insurance companies are required to be able to explain their rate and coverage decisions.[92] In France, the Loi pour une République numérique (Digital Republic Act) grants subjects the right to request and receive information pertaining to the implementation of algorithms that process data about them.

Despite ongoing endeavors to enhance the explainability of AI models, several inherent limitations persist. By making an AI system more explainable, we also reveal more of its inner workings. For example, the explainability method of feature importance identifies features or variables that are most important in determining the model's output, while the influential samples method identifies the training samples that are most influential in determining the output, given a particular input.[93] Adversarial parties could take advantage of this knowledge. For example, competitor firms could replicate aspects of the original AI system in their own product, thus reducing competitive advantage.[94] An explainable AI system is also susceptible to being “gamed”, meaning influenced in a way that undermines its intended purpose. One study gives the example of a predictive policing system; in this case, those who could potentially “game” the system are the criminals subject to the system's decisions. In this study, developers of the system discussed the issue of criminal gangs looking to illegally obtain passports, and they expressed concerns that, if given an idea of what factors might trigger an alert in the passport application process, those gangs would be able to “send guinea pigs” to test those triggers, eventually finding a loophole that would allow them to “reliably get passports from under the noses of the authorities”.[95]

Many XAI approaches provide explanations in a generic form that does not take account of the diverse backgrounds and knowledge levels of users. This leads to challenges with accurate comprehension for all users.
Expert users can find the explanations oversimplified and lacking in depth, while beginners may struggle to understand them because they are complex. This limitation reduces the ability of XAI techniques to serve users with different levels of knowledge, which can affect users' trust. The quality of explanations can also differ among users, since they have different levels of expertise and operate in different situations and conditions.[96]

A fundamental barrier to making AI systems explainable is the technical complexity of such systems. End users often lack the coding knowledge required to understand software of any kind. Current methods used to explain AI are mainly technical ones, geared toward machine learning engineers for debugging purposes, rather than toward the end users who are ultimately affected by the system, causing “a gap between explainability in practice and the goal of transparency”.[93] Proposed solutions to address the issue of technical complexity include either promoting the coding education of the general public so technical explanations would be more accessible to end users, or providing explanations in layperson terms.[94]

The solution must avoid oversimplification. It is important to strike a balance between accuracy – how faithfully the explanation reflects the process of the AI system – and explainability – how well end users understand the process.
This is a difficult balance to strike, since the complexity of machine learning makes it difficult even for ML engineers to fully understand, let alone non-experts.[93] The goal of explainability to end users of AI systems is to increase trust in the systems, even to "address concerns about lack of 'fairness' and discriminatory effects".[94] However, even with a good understanding of an AI system, end users may not necessarily trust the system.[97] In one study, participants were presented with combinations of white-box and black-box explanations, and of static and interactive explanations of AI systems. While these explanations served to increase both their self-reported and objective understanding, they had no impact on the participants' level of trust, which remained skeptical.[98] This outcome was especially true for decisions that affected the end user in a significant way, such as graduate school admissions. Participants judged algorithms to be too inflexible and unforgiving in comparison to human decision-makers; instead of rigidly adhering to a set of rules, humans are able to consider exceptional cases as well as appeals to their initial decision.[98] For such decisions, explainability will not necessarily cause end users to accept the use of decision-making algorithms. We will need either to turn to other methods to increase trust and acceptance of decision-making algorithms, or to question the need to rely solely on AI for such impactful decisions in the first place. However, some emphasize that the purpose of explainability of artificial intelligence is not merely to increase users' trust in the system's decisions, but to calibrate users' trust to the correct level.[99] According to this principle, too much or too little user trust in the AI system will harm the overall performance of the human-system unit.
When trust is excessive, users are not critical of the system's possible mistakes; when users do not trust the system enough, they will not exhaust the benefits inherent in it. Some scholars have suggested that explainability in AI should be considered a goal secondary to AI effectiveness, and that encouraging the exclusive development of XAI may limit the functionality of AI more broadly.[100][101] Critiques of XAI rely on developed concepts of mechanistic and empiric reasoning from evidence-based medicine to suggest that AI technologies can be clinically validated even when their function cannot be understood by their operators.[100] Some researchers advocate the use of inherently interpretable machine learning models, rather than post-hoc explanations in which a second model is created to explain the first. This is partly because post-hoc models increase the complexity in a decision pathway and partly because it is often unclear how faithfully a post-hoc explanation can mimic the computations of an entirely separate model.[22] However, another view is that what matters is that the explanation accomplishes the task at hand, whether it is pre- or post-hoc: if a post-hoc explanation method helps a doctor diagnose cancer better, it is of secondary importance whether the explanation is correct or incorrect. The goals of XAI amount to a form of lossy compression that will become less effective as AI models grow in their number of parameters. Along with other factors, this leads to a theoretical limit on explainability.[102] Explainability has also been studied in social choice theory. Social choice theory aims at finding solutions to social decision problems that are based on well-established axioms. Ariel D. Procaccia[103] explains that these axioms can be used to construct convincing explanations of the solutions. This principle has been used to construct explanations in various subfields of social choice.
Cailloux and Endriss[104] present a method for explaining voting rules using the axioms that characterize them. They exemplify their method on the Borda voting rule. Peters, Procaccia, Psomas and Zhou[105] present an algorithm for explaining the outcomes of the Borda rule using O(m²) explanations, and prove that this is tight in the worst case. Yang, Hausladen, Peters, Pournaras, Fricker and Helbing[106] present an empirical study of explainability in participatory budgeting. They compared the greedy and the equal shares rules, and three types of explanations: mechanism explanation (a general explanation of how the aggregation rule works given the voting input), individual explanation (explaining how many voters had at least one approved project, or at least 10,000 CHF in approved projects), and group explanation (explaining how the budget is distributed among the districts and topics). They compared the perceived trustworthiness and fairness of greedy and equal shares, before and after the explanations. They found that, for equal shares (MES), mechanism explanation yields the highest increase in perceived fairness and trustworthiness, with group explanation second-highest. For greedy, mechanism explanation increases perceived trustworthiness but not fairness, whereas individual explanation increases both perceived fairness and trustworthiness, and group explanation decreases both. Nizri, Azaria and Hazon[107] present an algorithm for computing explanations for the Shapley value. Given a coalitional game, their algorithm decomposes it into sub-games, for which it is easy to generate verbal explanations based on the axioms characterizing the Shapley value. The payoff allocation for each sub-game is perceived as fair, so the Shapley-based payoff allocation for the given game should seem fair as well.
An experiment with 210 human subjects shows that, with their automatically generated explanations, subjects perceive Shapley-based payoff allocation as significantly fairer than with a general standard explanation.
https://en.wikipedia.org/wiki/Explainable_artificial_intelligence
In probability and statistics, a mixture distribution is the probability distribution of a random variable that is derived from a collection of other random variables as follows: first, a random variable is selected by chance from the collection according to given probabilities of selection, and then the value of the selected random variable is realized. The underlying random variables may be random real numbers, or they may be random vectors (each having the same dimension), in which case the mixture distribution is a multivariate distribution. In cases where each of the underlying random variables is continuous, the outcome variable will also be continuous and its probability density function is sometimes referred to as a mixture density. The cumulative distribution function (and the probability density function if it exists) can be expressed as a convex combination (i.e. a weighted sum, with non-negative weights that sum to 1) of other distribution functions and density functions. The individual distributions that are combined to form the mixture distribution are called the mixture components, and the probabilities (or weights) associated with each component are called the mixture weights. The number of components in a mixture distribution is often restricted to being finite, although in some cases the components may be countably infinite in number. More general cases (i.e. an uncountable set of component distributions), as well as the countable case, are treated under the title of compound distributions. A distinction needs to be made between a random variable whose distribution function or density is the sum of a set of components (i.e. a mixture distribution) and a random variable whose value is the sum of the values of two or more underlying random variables, in which case the distribution is given by the convolution operator. As an example, the sum of two jointly normally distributed random variables, each with different means, will still have a normal distribution.
On the other hand, a mixture density created as a mixture of two normal distributions with different means will have two peaks provided that the two means are far enough apart, showing that this distribution is radically different from a normal distribution. Mixture distributions arise in many contexts in the literature and arise naturally where a statistical population contains two or more subpopulations. They are also sometimes used as a means of representing non-normal distributions. Data analysis concerning statistical models involving mixture distributions is discussed under the title of mixture models, while the present article concentrates on simple probabilistic and statistical properties of mixture distributions and how these relate to properties of the underlying distributions. Given a finite set of probability density functions p₁(x), ..., pₙ(x), or corresponding cumulative distribution functions P₁(x), ..., Pₙ(x), and weights w₁, ..., wₙ such that wᵢ ≥ 0 and ∑wᵢ = 1, the mixture distribution can be represented by writing either the density, f, or the distribution function, F, as a sum (which in both cases is a convex combination): {\displaystyle F(x)=\sum _{i=1}^{n}\,w_{i}\,P_{i}(x),} {\displaystyle f(x)=\sum _{i=1}^{n}\,w_{i}\,p_{i}(x).} This type of mixture, being a finite sum, is called a finite mixture, and in applications, an unqualified reference to a "mixture density" usually means a finite mixture. The case of a countably infinite set of components is covered formally by allowing {\displaystyle n=\infty }. Where the set of component distributions is uncountable, the result is often called a compound probability distribution. The construction of such distributions has a formal similarity to that of mixture distributions, with either infinite summations or integrals replacing the finite summations used for finite mixtures. Consider a probability density function p(x; a) for a variable x, parameterized by a.
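Before turning to the compound (continuous-index) case, the finite convex combination f(x) = ∑ wᵢ pᵢ(x) just defined can be sketched directly. The component parameters and weights below are illustrative choices, not taken from the text:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, weights, components):
    """Convex combination of component densities: f(x) = sum_i w_i p_i(x)."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-12
    return sum(w * p(x) for w, p in zip(weights, components))

# Two-component normal mixture with (illustrative) weights 0.3 and 0.7.
components = [lambda x: normal_pdf(x, -1.0, 1.0),
              lambda x: normal_pdf(x, 2.0, 0.5)]
f = lambda x: mixture_pdf(x, [0.3, 0.7], components)
```

Because the weights are non-negative and sum to 1, f inherits non-negativity and unit total mass from its components, as noted below for general convex combinations.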
That is, for each value of a in some set A, p(x; a) is a probability density function with respect to x. Given a probability density function w (meaning that w is nonnegative and integrates to 1), the function {\displaystyle f(x)=\int _{A}\,w(a)\,p(x;a)\,da} is again a probability density function for x. A similar integral can be written for the cumulative distribution function. Note that the formulae here reduce to the case of a finite or infinite mixture if the density w is allowed to be a generalized function representing the "derivative" of the cumulative distribution function of a discrete distribution. The mixture components are often not arbitrary probability distributions, but instead are members of a parametric family (such as normal distributions), with different values for a parameter or parameters. In such cases, assuming that it exists, the density can be written in the form of a sum as {\displaystyle f(x;a_{1},\ldots ,a_{n})=\sum _{i=1}^{n}\,w_{i}\,p(x;a_{i})} for one parameter, or {\displaystyle f(x;a_{1},\ldots ,a_{n},b_{1},\ldots ,b_{n})=\sum _{i=1}^{n}\,w_{i}\,p(x;a_{i},b_{i})} for two parameters, and so forth. A general linear combination of probability density functions is not necessarily a probability density, since it may be negative or it may integrate to something other than 1. However, a convex combination of probability density functions preserves both of these properties (non-negativity and integrating to 1), and thus mixture densities are themselves probability density functions. Let X₁, ..., Xₙ denote random variables from the n component distributions, and let X denote a random variable from the mixture distribution.
Then, for any functionH(·)for whichE⁡[H(Xi)]{\displaystyle \operatorname {E} [H(X_{i})]}exists, and assuming that the component densitiespi(x)exist, E⁡[H(X)]=∫−∞∞H(x)∑i=1nwipi(x)dx=∑i=1nwi∫−∞∞pi(x)H(x)dx=∑i=1nwiE⁡[H(Xi)].{\displaystyle {\begin{aligned}\operatorname {E} [H(X)]&=\int _{-\infty }^{\infty }H(x)\sum _{i=1}^{n}w_{i}p_{i}(x)\,dx\\&=\sum _{i=1}^{n}w_{i}\int _{-\infty }^{\infty }p_{i}(x)H(x)\,dx=\sum _{i=1}^{n}w_{i}\operatorname {E} [H(X_{i})].\end{aligned}}} Thejth moment about zero (i.e. choosingH(x) =xj) is simply a weighted average of thej-th moments of the components. Moments about the meanH(x) = (x − μ)jinvolve a binomial expansion:[1] E⁡[(X−μ)j]=∑i=1nwiE⁡[(Xi−μi+μi−μ)j]=∑i=1nwi∑k=0j(jk)(μi−μ)j−kE⁡[(Xi−μi)k],{\displaystyle {\begin{aligned}\operatorname {E} \left[{\left(X-\mu \right)}^{j}\right]&=\sum _{i=1}^{n}w_{i}\operatorname {E} \left[{\left(X_{i}-\mu _{i}+\mu _{i}-\mu \right)}^{j}\right]\\&=\sum _{i=1}^{n}w_{i}\sum _{k=0}^{j}{\binom {j}{k}}{\left(\mu _{i}-\mu \right)}^{j-k}\operatorname {E} \left[{\left(X_{i}-\mu _{i}\right)}^{k}\right],\end{aligned}}} whereμidenotes the mean of thei-th component. 
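As a concrete check of the two-stage sampling definition and the weighted-average moment identity above, here is a minimal sketch; the weights and component parameters are made-up illustrations:

```python
import random

random.seed(0)
weights = [0.4, 0.6]
means = [0.0, 3.0]
sds = [1.0, 0.5]

# Closed-form mixture moments: mu = sum w_i mu_i,
# sigma^2 = sum w_i (sigma_i^2 + mu_i^2) - mu^2.
mu = sum(w * m for w, m in zip(weights, means))                     # 1.8 here
var = sum(w * (s * s + m * m) for w, m, s in zip(weights, means, sds)) - mu * mu  # 2.71 here

def sample_mixture():
    """Two-stage sampling: pick a component by its weight, then draw from it."""
    i = random.choices(range(len(weights)), weights=weights)[0]
    return random.gauss(means[i], sds[i])

draws = [sample_mixture() for _ in range(200_000)]
emp_mu = sum(draws) / len(draws)
emp_var = sum((x - emp_mu) ** 2 for x in draws) / len(draws)
```

The empirical mean and variance of the two-stage samples agree with the closed-form weighted formulas, illustrating that the j-th moment of the mixture is the weighted average of the component moments.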
In the case of a mixture of one-dimensional distributions with weightswi, meansμiand variancesσi2, the total mean and variance will be:E⁡[X]=μ=∑i=1nwiμi,{\displaystyle \operatorname {E} [X]=\mu =\sum _{i=1}^{n}w_{i}\mu _{i},}E⁡[(X−μ)2]=σ2=E⁡[X2]−μ2(standard variance reformulation)=(∑i=1nwiE⁡[Xi2])−μ2=∑i=1nwi(σi2+μi2)−μ2(σi2=E⁡[Xi2]−μi2⟹E⁡[Xi2]=σi2+μi2){\displaystyle {\begin{aligned}\operatorname {E} \left[(X-\mu )^{2}\right]&=\sigma ^{2}\\&=\operatorname {E} [X^{2}]-\mu ^{2}&({\text{standard variance reformulation}})\\&=\left(\sum _{i=1}^{n}w_{i}\operatorname {E} \left[X_{i}^{2}\right]\right)-\mu ^{2}\\&=\sum _{i=1}^{n}w_{i}(\sigma _{i}^{2}+\mu _{i}^{2})-\mu ^{2}&(\sigma _{i}^{2}=\operatorname {E} [X_{i}^{2}]-\mu _{i}^{2}\implies \operatorname {E} [X_{i}^{2}]=\sigma _{i}^{2}+\mu _{i}^{2})\end{aligned}}} These relations highlight the potential of mixture distributions to display non-trivial higher-order moments such asskewnessandkurtosis(fat tails) and multi-modality, even in the absence of such features within the components themselves. Marron and Wand (1992) give an illustrative account of the flexibility of this framework.[2] The question ofmultimodalityis simple for some cases, such as mixtures ofexponential distributions: all such mixtures areunimodal.[3]However, for the case of mixtures ofnormal distributions, it is a complex one. Conditions for the number of modes in a multivariate normal mixture are explored by Ray & Lindsay[4]extending earlier work on univariate[5][6]and multivariate[7]distributions. 
Here the problem of evaluation of the modes of anncomponent mixture in aDdimensional space is reduced to identification of critical points (local minima, maxima andsaddle points) on amanifoldreferred to as theridgeline surface, which is the image of the ridgeline functionx∗(α)=[∑i=1nαiΣi−1]−1×[∑i=1nαiΣi−1μi],{\displaystyle x^{*}(\alpha )=\left[\sum _{i=1}^{n}\alpha _{i}\Sigma _{i}^{-1}\right]^{-1}\times \left[\sum _{i=1}^{n}\alpha _{i}\Sigma _{i}^{-1}\mu _{i}\right],}whereα{\displaystyle \alpha }belongs to the(n−1){\displaystyle (n-1)}-dimensional standardsimplex:Sn={α∈Rn:αi∈[0,1],∑i=1nαi=1}{\displaystyle {\mathcal {S}}_{n}=\left\{\alpha \in \mathbb {R} ^{n}:\alpha _{i}\in [0,1],\sum _{i=1}^{n}\alpha _{i}=1\right\}}andΣi∈RD×D,μi∈RD{\displaystyle \Sigma _{i}\in \mathbb {R} ^{D\times D},\,\mu _{i}\in \mathbb {R} ^{D}}correspond to the covariance and mean of thei-th component. Ray & Lindsay[4]consider the case in whichn−1<D{\displaystyle n-1<D}showing a one-to-one correspondence of modes of the mixture and those on theridge elevation functionh(α)=q(x∗(α)){\displaystyle h(\alpha )=q(x^{*}(\alpha ))}thus one may identify the modes by solvingdh(α)dα=0{\displaystyle {\frac {dh(\alpha )}{d\alpha }}=0}with respect toα{\displaystyle \alpha }and determining the valuex∗(α){\displaystyle x^{*}(\alpha )}. Using graphical tools, the potential multi-modality of mixtures with number of componentsn∈{2,3}{\displaystyle n\in \{2,3\}}is demonstrated; in particular it is shown that the number of modes may exceedn{\displaystyle n}and that the modes may not be coincident with the component means. 
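In the univariate case the ridgeline function above collapses to a precision-weighted average of the component means; a minimal sketch (component values illustrative), which also shows that x*(α) recovers μᵢ at the vertices of the simplex:

```python
def ridgeline_1d(alpha, variances, means):
    """Univariate specialization of the ridgeline function
    x*(alpha) = [sum_i a_i Sigma_i^{-1}]^{-1} [sum_i a_i Sigma_i^{-1} mu_i]:
    with scalar covariances this is a precision-weighted average of the means."""
    prec = [a / v for a, v in zip(alpha, variances)]   # a_i / sigma_i^2
    return sum(p * m for p, m in zip(prec, means)) / sum(prec)
```

At a vertex of the simplex (α = eᵢ) the curve sits at the i-th mean, and for equal variances it traces the straight segment between the means; all critical points of the mixture density lie on the image of this curve, which is why mode-finding can be restricted to it.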
For two components they develop a graphical tool for analysis by instead solving the aforementioned differential with respect to the first mixing weightw1{\displaystyle w_{1}}(which also determines the second mixing weight throughw2=1−w1{\displaystyle w_{2}=1-w_{1}}) and expressing the solutions as a functionΠ(α),α∈[0,1]{\displaystyle \Pi (\alpha ),\,\alpha \in [0,1]}so that the number and location of modes for a given value ofw1{\displaystyle w_{1}}corresponds to the number of intersections of the graph on the lineΠ(α)=w1{\displaystyle \Pi (\alpha )=w_{1}}. This in turn can be related to the number of oscillations of the graph and therefore to solutions ofdΠ(α)dα=0{\displaystyle {\frac {d\Pi (\alpha )}{d\alpha }}=0}leading to an explicit solution for the case of a two component mixture withΣ1=Σ2=Σ{\displaystyle \Sigma _{1}=\Sigma _{2}=\Sigma }(sometimes called ahomoscedasticmixture) given by1−α(1−α)dM(μ1,μ2,Σ)2{\displaystyle 1-\alpha (1-\alpha )d_{M}(\mu _{1},\mu _{2},\Sigma )^{2}}wheredM(μ1,μ2,Σ)=(μ2−μ1)TΣ−1(μ2−μ1){\textstyle d_{M}(\mu _{1},\mu _{2},\Sigma )={\sqrt {(\mu _{2}-\mu _{1})^{\mathsf {T}}\Sigma ^{-1}(\mu _{2}-\mu _{1})}}}is theMahalanobis distancebetweenμ1{\displaystyle \mu _{1}}andμ2{\displaystyle \mu _{2}}. Since the above is quadratic it follows that in this instance there are at most two modes irrespective of the dimension or the weights. For normal mixtures with generaln>2{\displaystyle n>2}andD>1{\displaystyle D>1}, a lower bound for the maximum number of possible modes, and – conditionally on the assumption that the maximum number is finite – an upper bound are known. For those combinations ofn{\displaystyle n}andD{\displaystyle D}for which the maximum number is known, it matches the lower bound.[8] Simple examples can be given by a mixture of two normal distributions. (SeeMultimodal distribution#Mixture of two normal distributionsfor more details.) 
Given an equal (50/50) mixture of two normal distributions with the same standard deviation and different means (homoscedastic), the overall distribution will exhibit low kurtosis relative to a single normal distribution: the means of the subpopulations fall on the shoulders of the overall distribution. If sufficiently separated, namely by more than twice the (common) standard deviation, so |μ₁ − μ₂| > 2σ, these form a bimodal distribution; otherwise it simply has a wide peak.[9] The variation of the overall population will also be greater than the variation of the two subpopulations (due to spread from different means), and thus exhibits overdispersion relative to a normal distribution with fixed variation σ, though it will not be overdispersed relative to a normal distribution with variation equal to the variation of the overall population. Alternatively, given two subpopulations with the same mean and different standard deviations, the overall population will exhibit high kurtosis, with a sharper peak and heavier tails (and correspondingly shallower shoulders) than a single distribution. An example adapted from Hampel,[10] who credits John Tukey, constructs a mixture distribution F(x) for which the mean of i.i.d. observations from F(x) behaves "normally" except for exorbitantly large samples, even though the mean of F(x) does not exist. Mixture densities are complicated densities expressible in terms of simpler densities (the mixture components). They are used both because they provide a good model for certain data sets (where different subsets of the data exhibit different characteristics and can best be modeled separately), and because they can be more mathematically tractable, since the individual mixture components can be more easily studied than the overall mixture density.
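The 2σ separation rule for the 50/50 homoscedastic normal mixture can be checked numerically. The sketch below counts strict local maxima of the mixture density on a grid; the grid range and resolution are arbitrary choices:

```python
import math

def density(x, d, sigma=1.0):
    """Equal 50/50 mixture of N(-d/2, sigma^2) and N(+d/2, sigma^2) at x."""
    def phi(z):
        return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return 0.5 * (phi((x + d / 2) / sigma) + phi((x - d / 2) / sigma)) / sigma

def count_modes(d, lo=-6.0, hi=6.0, n=4001):
    """Count strict local maxima of the mixture density on an evenly spaced grid."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [density(x, d) for x in xs]
    return sum(1 for i in range(1, n - 1) if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])
```

With separation d = 3σ the count is 2 (bimodal); with d = 1.5σ it is 1 (a single wide peak), matching the |μ₁ − μ₂| > 2σ threshold.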
Mixture densities can be used to model a statistical population with subpopulations, where the mixture components are the densities of the subpopulations, and the weights are the proportions of each subpopulation in the overall population. Mixture densities can also be used to model experimental error or contamination: one assumes that most of the samples measure the desired phenomenon, with some samples coming from a different, erroneous distribution. Parametric statistics that assume no error often fail on such mixture densities; for example, statistics that assume normality often fail disastrously in the presence of even a few outliers, and instead one uses robust statistics. In meta-analysis of separate studies, study heterogeneity causes the distribution of results to be a mixture distribution, and leads to overdispersion of results relative to predicted error. For example, in a statistical survey, the margin of error (determined by sample size) predicts the sampling error and hence the dispersion of results on repeated surveys. The presence of study heterogeneity (studies having different sampling biases) increases the dispersion relative to the margin of error.
https://en.wikipedia.org/wiki/Mixture_density
In probability theory and statistics, a mixture is a probabilistic combination of two or more probability distributions.[1] The concept arises mostly in two contexts.
https://en.wikipedia.org/wiki/Mixture_(probability)
Subspace Gaussian mixture model (SGMM) is an acoustic modeling approach in which all phonetic states share a common Gaussian mixture model structure, and the means and mixture weights vary in a subspace of the total parameter space.[1]
https://en.wikipedia.org/wiki/Subspace_Gaussian_mixture_model
Inmathematics, theGiry monadis a construction that assigns to ameasurable spacea space ofprobability measuresover it, equipped with a canonicalsigma-algebra.[1][2][3][4][5]It is one of the main examples of aprobability monad. It is implicitly used inprobability theorywhenever one considersprobability measureswhich dependmeasurablyon a parameter (giving rise toMarkov kernels), or when one hasprobability measures over probability measures(such as inde Finetti's theorem). Like many iterable constructions, it has thecategory-theoreticstructure of amonad, on thecategory of measurable spaces. The Giry monad, like everymonad, consists of three structures:[6][7][8] Let(X,F){\displaystyle (X,{\mathcal {F}})}be ameasurable space. Denote byPX{\displaystyle PX}the set ofprobability measuresover(X,F){\displaystyle (X,{\mathcal {F}})}. We equip the setPX{\displaystyle PX}with asigma-algebraas follows. First of all, for every measurable setA∈F{\displaystyle A\in {\mathcal {F}}}, define the mapεA:PX→R{\displaystyle \varepsilon _{A}:PX\to \mathbb {R} }byp⟼p(A){\displaystyle p\longmapsto p(A)}. We then define the sigma algebraPF{\displaystyle {\mathcal {PF}}}onPX{\displaystyle PX}to be the smallest sigma-algebra which makes the mapsεA{\displaystyle \varepsilon _{A}}measurable, for allA∈F{\displaystyle A\in {\mathcal {F}}}(whereR{\displaystyle \mathbb {R} }is assumed equipped with theBorel sigma-algebra).[6] Equivalently,PF{\displaystyle {\mathcal {PF}}}can be defined as the smallest sigma-algebra onPX{\displaystyle PX}which makes the maps measurable for all bounded measurablef:X→R{\displaystyle f:X\to \mathbb {R} }.[9] The assignment(X,F)↦(PX,PF){\displaystyle (X,{\mathcal {F}})\mapsto (PX,{\mathcal {PF}})}is part of anendofunctoron thecategory of measurable spaces, usually denoted again byP{\displaystyle P}. Its action onmorphisms, i.e. onmeasurable maps, is via thepushforward of measures. 
Namely, given a measurable mapf:(X,F)→(Y,G){\displaystyle f:(X,{\mathcal {F}})\to (Y,{\mathcal {G}})}, one assigns tof{\displaystyle f}the mapf∗:(PX,PF)→(PY,PG){\displaystyle f_{*}:(PX,{\mathcal {PF}})\to (PY,{\mathcal {PG}})}defined by for allp∈PX{\displaystyle p\in PX}and all measurable setsB∈G{\displaystyle B\in {\mathcal {G}}}.[6] Given a measurable space(X,F){\displaystyle (X,{\mathcal {F}})}, the mapδ:(X,F)→(PX,PF){\displaystyle \delta :(X,{\mathcal {F}})\to (PX,{\mathcal {PF}})}maps an elementx∈X{\displaystyle x\in X}to theDirac measureδx∈PX{\displaystyle \delta _{x}\in PX}, defined on measurable subsetsA∈F{\displaystyle A\in {\mathcal {F}}}by[6] Letμ∈PPX{\displaystyle \mu \in PPX}, i.e. a probability measure over the probability measures over(X,F){\displaystyle (X,{\mathcal {F}})}. We define the probability measureEμ∈PX{\displaystyle {\mathcal {E}}\mu \in PX}by for all measurableA∈F{\displaystyle A\in {\mathcal {F}}}. This gives a measurable,naturalmapE:(PPX,PPF)→(PX,PF){\displaystyle {\mathcal {E}}:(PPX,{\mathcal {PPF}})\to (PX,{\mathcal {PF}})}.[6] Amixture distribution, or more generally acompound distribution, can be seen as an application of the mapE{\displaystyle {\mathcal {E}}}. Let's see this for the case of a finite mixture. Letp1,…,pn{\displaystyle p_{1},\dots ,p_{n}}be probability measures on(X,F){\displaystyle (X,{\mathcal {F}})}, and consider the probability measureq{\displaystyle q}given by the mixture for all measurableA∈F{\displaystyle A\in {\mathcal {F}}}, for some weightswi≥0{\displaystyle w_{i}\geq 0}satisfyingw1+⋯+wn=1{\displaystyle w_{1}+\dots +w_{n}=1}. We can view the mixtureq{\displaystyle q}as the averageq=Eμ{\displaystyle q={\mathcal {E}}\mu }, where the measure on measuresμ∈PPX{\displaystyle \mu \in PPX}, which in this case is discrete, is given by More generally, the mapE:PPX→PX{\displaystyle {\mathcal {E}}:PPX\to PX}can be seen as the most general, non-parametric way to form arbitrarymixtureorcompound distributions. 
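For finitely-supported distributions, the three pieces of structure (the functor action by pushforward, the unit δ, and the multiplication E) can be sketched concretely. This toy representation, with measures as `{outcome: probability}` dicts and a measure-over-measures as a list of (measure, weight) pairs, is an illustration only, not the genuine measure-theoretic construction:

```python
def dirac(x):
    """Unit of the monad: the Dirac measure concentrated at x."""
    return {x: 1.0}

def pushforward(f, p):
    """Functor action: the image measure of p under the map f."""
    out = {}
    for x, w in p.items():
        out[f(x)] = out.get(f(x), 0.0) + w
    return out

def average(mu):
    """Multiplication E: flatten a (discrete) measure over measures,
    given as (measure, weight) pairs, to its mixture/average."""
    out = {}
    for p, w in mu:
        for x, wx in p.items():
            out[x] = out.get(x, 0.0) + w * wx
    return out
```

Applying `average` to a discrete measure over measures produces exactly the finite mixture described above, and averaging a point mass at a measure p returns p itself, reflecting one of the monad laws.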
The triple(P,δ,E){\displaystyle (P,\delta ,{\mathcal {E}})}is called theGiry monad.[1][2][3][4][5] One of the properties of thesigma-algebraPF{\displaystyle {\mathcal {PF}}}is that givenmeasurable spaces(X,F){\displaystyle (X,{\mathcal {F}})}and(Y,G){\displaystyle (Y,{\mathcal {G}})}, we have abijective correspondencebetweenmeasurable functions(X,F)→(PY,PG){\displaystyle (X,{\mathcal {F}})\to (PY,{\mathcal {PG}})}andMarkov kernels(X,F)→(Y,G){\displaystyle (X,{\mathcal {F}})\to (Y,{\mathcal {G}})}. This allows to view a Markov kernel, equivalently, as a measurably parametrized probability measure.[10] In more detail, given a measurable functionf:(X,F)→(PY,PG){\displaystyle f:(X,{\mathcal {F}})\to (PY,{\mathcal {PG}})}, one can obtain the Markov kernelf♭:(X,F)→(Y,G){\displaystyle f^{\flat }:(X,{\mathcal {F}})\to (Y,{\mathcal {G}})}as follows, for everyx∈X{\displaystyle x\in X}and every measurableB∈G{\displaystyle B\in {\mathcal {G}}}(note thatf(x)∈PY{\displaystyle f(x)\in PY}is a probability measure). Conversely, given a Markov kernelk:(X,F)→(Y,G){\displaystyle k:(X,{\mathcal {F}})\to (Y,{\mathcal {G}})}, one can form the measurable functionk♯:(X,F)→(PY,PG){\displaystyle k^{\sharp }:(X,{\mathcal {F}})\to (PY,{\mathcal {PG}})}mappingx∈X{\displaystyle x\in X}to the probability measurek♯(x)∈PY{\displaystyle k^{\sharp }(x)\in PY}defined by for every measurableB∈G{\displaystyle B\in {\mathcal {G}}}. The two assignments are mutually inverse. From the point of view ofcategory theory, we can interpret this correspondence as anadjunction between thecategory of measurable spacesand thecategory of Markov kernels. 
In particular, the category of Markov kernels can be seen as theKleisli categoryof the Giry monad.[3][4][5] Givenmeasurable spaces(X,F){\displaystyle (X,{\mathcal {F}})}and(Y,G){\displaystyle (Y,{\mathcal {G}})}, one can form the measurable space(PX,PX)×(PY,PY)=(X×Y,F×G){\displaystyle (PX,{\mathcal {PX}})\times (PY,{\mathcal {PY}})=(X\times Y,{\mathcal {F}}\times {\mathcal {G}})}with theproduct sigma-algebra, which is theproductin thecategory of measurable spaces. Givenprobability measuresp∈PX{\displaystyle p\in PX}andq∈PY{\displaystyle q\in PY}, one can form theproduct measurep⊗q{\displaystyle p\otimes q}on(X×Y,F×G){\displaystyle (X\times Y,{\mathcal {F}}\times {\mathcal {G}})}. This gives anatural,measurable map usually denoted by∇{\displaystyle \nabla }or by⊗{\displaystyle \otimes }.[4] The map∇:PX×PY→P(X×Y){\displaystyle \nabla :PX\times PY\to P(X\times Y)}is in general not an isomorphism, since there are probability measures onX×Y{\displaystyle X\times Y}which are not product distributions, for example in case ofcorrelation. However, the maps∇:PX×PY→P(X×Y){\displaystyle \nabla :PX\times PY\to P(X\times Y)}and the isomorphism1≅P1{\displaystyle 1\cong P1}make the Giry monad amonoidal monad, and so in particular a commutativestrong monad.[4]
https://en.wikipedia.org/wiki/Giry_monad
Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm commonly used for machine playing of two-player combinatorial games (Tic-tac-toe, Chess, Connect 4, etc.). It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision.[1] During the Dartmouth Workshop, John McCarthy met Alex Bernstein of IBM, who was writing a chess program. McCarthy invented alpha–beta search and recommended it to him, but Bernstein was "unconvinced".[2] Allen Newell and Herbert A. Simon, who used what John McCarthy calls an "approximation"[3] in 1958, wrote that alpha–beta "appears to have been reinvented a number of times".[4] Arthur Samuel had an early version for a checkers simulation. Richards, Timothy Hart, Michael Levin and/or Daniel Edwards also invented alpha–beta independently in the United States.[5] McCarthy proposed similar ideas during the Dartmouth workshop in 1956 and suggested it to a group of his students including Alan Kotok at MIT in 1961.[6] Alexander Brudno independently conceived the alpha–beta algorithm, publishing his results in 1963.[7] Donald Knuth and Ronald W. Moore refined the algorithm in 1975.[8][9] Judea Pearl proved its optimality in terms of the expected running time for trees with randomly assigned leaf values in two papers.[10][11] The optimality of the randomized version of alpha–beta was shown by Michael Saks and Avi Wigderson in 1986.[12] A game tree can represent many two-player zero-sum games, such as chess, checkers, and reversi. Each node in the tree represents a possible situation in the game.
Each terminal node (outcome) of a branch is assigned a numeric score that determines the value of the outcome to the player with the next move.[13] The algorithm maintains two values, alpha and beta, which respectively represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of. Initially, alpha is negative infinity and beta is positive infinity, i.e. both players start with their worst possible score. Whenever the maximum score that the minimizing player (i.e. the "beta" player) is assured of becomes less than the minimum score that the maximizing player (i.e., the "alpha" player) is assured of (i.e. beta < alpha), the maximizing player need not consider further descendants of this node, as they will never be reached in the actual play. To illustrate this with a real-life example, suppose somebody is playing chess, and it is their turn. Move "A" will improve the player's position. The player continues to look for moves to make sure a better one hasn't been missed. Move "B" is also a good move, but the player then realizes that it will allow the opponent to force checkmate in two moves. Thus, other outcomes from playing move B no longer need to be considered since the opponent can force a win. The maximum score that the opponent could force after move "B" is negative infinity: a loss for the player. This is less than the minimum position that was previously found; move "A" does not result in a forced loss in two moves. The benefit of alpha–beta pruning lies in the fact that branches of the search tree can be eliminated.[13]This way, the search time can be limited to the 'more promising' subtree, and a deeper search can be performed in the same time. Like its predecessor, it belongs to thebranch and boundclass of algorithms. 
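The bookkeeping described above can be sketched as a short recursion over an explicit game tree. The tree representation, with leaves as numeric scores for the maximizing player and internal nodes as lists of children, is an illustrative choice:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Return the minimax value of `node`, pruning branches that cannot
    influence the final decision (whenever beta <= alpha)."""
    if not isinstance(node, list):          # terminal node: its score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:               # beta cutoff: MIN avoids this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:               # alpha cutoff: MAX avoids this branch
                break
        return value
```

On the tree `[[3, 5], [6, 9], [1, 2]]` (MAX to move, then MIN) the root value is 6, and the last leaf (2) is never examined: once the third MIN node returns a value of at most 1 with alpha already at 6, its remaining children are pruned.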
The optimization reduces the effective depth to slightly more than half that of simple minimax if the nodes are evaluated in an optimal or near-optimal order (best choice for the side on move ordered first at each node). With an (average or constant) branching factor of b, and a search depth of d plies, the maximum number of leaf node positions evaluated (when the move ordering is pessimal) is O(b^d), the same as a simple minimax search. If the move ordering for the search is optimal (meaning the best moves are always searched first), the number of leaf node positions evaluated is about O(b×1×b×1×...×b) for odd depth and O(b×1×b×1×...×1) for even depth, or O(b^(d/2)) = O(√(b^d)). In the latter case, where the ply of a search is even, the effective branching factor is reduced to its square root, or, equivalently, the search can go twice as deep with the same amount of computation.[14] The explanation of b×1×b×1×... is that all the first player's moves must be studied to find the best one, but for each, only the second player's best move is needed to refute all but the first (and best) first player move, since alpha–beta ensures no other second player moves need be considered.
When nodes are considered in a random order (i.e., the algorithm randomizes), asymptotically, the expected number of nodes evaluated in uniform trees with binary leaf values is Θ(((b − 1 + √(b² + 14b + 1))/4)^d).[12] For the same trees, when the values are assigned to the leaves independently of each other and, say, zero and one are both equally probable, the expected number of nodes evaluated is Θ((b/2)^d), which is much smaller than the work done by the randomized algorithm mentioned above, and is again optimal for such random trees.[10] When the leaf values are chosen independently of each other, but from the [0, 1] interval uniformly at random, the expected number of nodes evaluated increases to Θ(b^(d/log d)) in the d → ∞ limit,[11] which is again optimal for this kind of random tree. Note that the actual work for "small" values of d is better approximated using 0.925·d^0.747.[11][10]

A chess program that searches four plies with an average of 36 branches per node evaluates more than one million terminal nodes. An optimal alpha–beta prune would eliminate all but about 2,000 terminal nodes, a reduction of 99.8%.[13]

Normally during alpha–beta, the subtrees are temporarily dominated by either a first-player advantage (when many first-player moves are good, and at each search depth the first move checked by the first player is adequate, but all second-player responses are required to try to find a refutation), or vice versa. This advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. As the number of positions searched decreases exponentially with each move nearer the current position, it is worth spending considerable effort on sorting early moves.
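The figures in the chess example can be checked against the Knuth–Moore formula for the minimal (perfectly ordered) alpha–beta tree, which visits b^⌈d/2⌉ + b^⌊d/2⌋ − 1 leaves:

```python
import math

b, d = 36, 4                    # branching factor and depth from the example
worst = b ** d                  # pessimal ordering: every leaf is visited
best = b ** math.ceil(d / 2) + b ** (d // 2) - 1   # Knuth-Moore minimal tree
print(worst)                    # 1679616 -- "more than one million"
print(best)                     # 2591    -- "about 2,000"
print(round(100 * (1 - best / worst), 1))          # 99.8 (% reduction)
```

The exact minimal count, 2,591, matches the article's rounded "about 2,000" and its 99.8% reduction figure.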
An improved sort at any depth will exponentially reduce the total number of positions searched, but sorting all positions at depths near the root node is relatively cheap as there are so few of them. In practice, the move ordering is often determined by the results of earlier, smaller searches, such as through iterative deepening.

Additionally, this algorithm can be trivially modified to return an entire principal variation in addition to the score. Some more aggressive algorithms such as MTD(f) do not easily permit such a modification.

Implementations of depth-limited minimax with alpha–beta pruning[15] can often be delineated by whether they are "fail-soft" or "fail-hard". With fail-soft alpha–beta, the alphabeta function may return values (v) that exceed (v < α or v > β) the α and β bounds set by its function call arguments. In comparison, fail-hard alpha–beta limits its function return value to the inclusive range of α and β.

Further improvement can be achieved without sacrificing accuracy by using ordering heuristics to search earlier parts of the tree that are likely to force alpha–beta cutoffs. For example, in chess, moves that capture pieces may be examined before moves that do not, and moves that have scored highly in earlier passes through the game-tree analysis may be evaluated before others. Another common, and very cheap, heuristic is the killer heuristic, where the last move that caused a beta-cutoff at the same level in the tree search is always examined first. This idea can also be generalized into a set of refutation tables.

Alpha–beta search can be made even faster by considering only a narrow search window (generally determined by guesswork based on experience). This is known as an aspiration window. In the extreme case, the search is performed with alpha and beta equal, a technique known as zero-window search, null-window search, or scout search.
This is particularly useful for win/loss searches near the end of a game, where the extra depth gained from the narrow window and a simple win/loss evaluation function may lead to a conclusive result. If an aspiration search fails, it is straightforward to detect whether it failed high (the high edge of the window was too low) or low (the lower edge of the window was too high). This gives information about what window values might be useful in a re-search of the position.

Over time, other improvements have been suggested, and indeed the Falphabeta (fail-soft alpha–beta) idea of John Fishburn is nearly universal and is already incorporated above in a slightly modified form. Fishburn also suggested a combination of the killer heuristic and zero-window search under the name Lalphabeta ("last move with minimal window alpha–beta search").

Since the minimax algorithm and its variants are inherently depth-first, a strategy such as iterative deepening is usually used in conjunction with alpha–beta so that a reasonably good move can be returned even if the algorithm is interrupted before it has finished execution. Another advantage of using iterative deepening is that searches at shallower depths give move-ordering hints, as well as shallow alpha and beta estimates, both of which can help produce cutoffs for higher-depth searches much earlier than would otherwise be possible. Algorithms like SSS*, on the other hand, use the best-first strategy. This can potentially make them more time-efficient, but typically at a heavy cost in space-efficiency.[16]
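The fail-high/fail-low re-search logic described above can be sketched as follows; `search` stands in for a hypothetical fail-soft alpha–beta entry point, and the window size is an arbitrary illustrative value:

```python
def aspiration_search(search, guess, window=50):
    """`search(alpha, beta)` is a hypothetical fail-soft alpha-beta entry
    point; `guess` is the expected score, e.g. from the previous
    iterative-deepening pass."""
    alpha, beta = guess - window, guess + window
    while True:
        score = search(alpha, beta)
        if score <= alpha:            # fail low: lower edge was too high
            alpha = float("-inf")     # re-search with an open lower bound
        elif score >= beta:           # fail high: upper edge was too low
            beta = float("inf")       # re-search with an open upper bound
        else:
            return score              # true score fell inside the window
```

When the guess is good, the narrow window produces many cutoffs; when it is wrong, one extra re-search with a widened bound recovers the exact score.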
https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning
The expectiminimax algorithm is a variation of the minimax algorithm, for use in artificial intelligence systems that play two-player zero-sum games, such as backgammon, in which the outcome depends on a combination of the player's skill and chance elements such as dice rolls. In addition to "min" and "max" nodes of the traditional minimax tree, this variant has "chance" ("move by nature") nodes, which take the expected value of a random event occurring.[1] In game theory terms, an expectiminimax tree is the game tree of an extensive-form game of perfect, but incomplete, information.

In the traditional minimax method, the levels of the tree alternate from max to min until the depth limit of the tree has been reached. In an expectiminimax tree, the "chance" nodes are interleaved with the max and min nodes. Instead of taking the max or min of the utility values of their children, chance nodes take a weighted average, with the weight being the probability that child is reached.[1]

The interleaving depends on the game. Each "turn" of the game is evaluated as a "max" node (representing the AI player's turn), a "min" node (representing a potentially optimal opponent's turn), or a "chance" node (representing a random effect or player).[1]

For example, consider a game in which each round consists of a single die throw, and then decisions made by first the AI player, and then another intelligent opponent. The order of nodes in this game would alternate between "chance", "max" and then "min".[1]

The expectiminimax algorithm is a variant of the minimax algorithm and was first proposed by Donald Michie in 1966.[2] Note that for random nodes, there must be a known probability of reaching each child. (For most games of chance, child nodes will be equally weighted, which means the return value can simply be the average of all child values.)
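The recursion described above can be sketched in Python; the `kind`, `children`, and `evaluate` callbacks are hypothetical descriptions of the tree, with each child paired with the probability of reaching it:

```python
def expectiminimax(node, kind, children, evaluate):
    """`kind(node)` returns "max", "min", "chance", or "leaf";
    `children(node)` yields (child, probability) pairs (the probability
    is ignored at max/min nodes); `evaluate(node)` scores a leaf."""
    k = kind(node)
    if k == "leaf":
        return evaluate(node)
    values = [(expectiminimax(c, kind, children, evaluate), p)
              for c, p in children(node)]
    if k == "max":
        return max(v for v, _ in values)
    if k == "min":
        return min(v for v, _ in values)
    return sum(v * p for v, p in values)   # chance node: expected value
```

For instance, a chance node whose two equally likely children are a max node worth 5 and a leaf worth 2 evaluates to 0.5·5 + 0.5·2 = 3.5.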
Expectimax search is a variant described in Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability (2005) by Tom Everitt and Marcus Hutter.

Bruce Ballard was the first to develop a technique, called *-minimax, that enables alpha–beta pruning in expectiminimax trees.[3][4] The problem with integrating alpha–beta pruning into the expectiminimax algorithm is that the scores of a chance node's children may exceed the alpha or beta bound of its parent, even if the weighted value of each child does not. However, it is possible to bound the scores of a chance node's children, and therefore bound the score of the chance node itself.

If a standard iterative search is about to score the i-th child of a chance node with N equally likely children, that search has computed scores v₁, v₂, …, v_{i−1} for child nodes 1 through i − 1. Assuming a lowest possible score L and a highest possible score U for each unsearched child, the bounds of the chance node's score are as follows:

score ≤ (1/N) × ((v₁ + … + v_{i−1}) + v_i + U × (N − i))

score ≥ (1/N) × ((v₁ + … + v_{i−1}) + v_i + L × (N − i))

If an alpha and/or beta bound is given in scoring the chance node, these bounds can be used to cut off the search of the i-th child.
The above equations can be rearranged to find new alpha and beta values that will cut off the search if the child's score would cause the chance node to exceed its own alpha and beta bounds:

α_i = N × α − (v₁ + … + v_{i−1}) − U × (N − i)

β_i = N × β − (v₁ + … + v_{i−1}) − L × (N − i)

Expectiminimax can be extended with fail-hard alpha–beta pruning in this manner. This technique is one of a family of variants of algorithms which can bound the search of a chance node and its children based on collecting lower and upper bounds of the children during search. Other techniques which can offer performance benefits include probing each child with a heuristic to establish a min or max before performing a full search on each child.
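As a sketch, the rearranged bounds can be computed directly; the function name and interface here are hypothetical, with `prior` holding the scores of the already-searched children:

```python
def chance_child_bounds(alpha, beta, prior, N, L, U):
    """Search window for the i-th child of a chance node with N equally
    likely children, given `prior` = [v1, ..., v_{i-1}] and per-child
    value bounds [L, U]. The window is clipped to [L, U], since no child
    can score outside those bounds anyway."""
    i = len(prior) + 1
    s = sum(prior)
    a_i = N * alpha - s - U * (N - i)
    b_i = N * beta - s - L * (N - i)
    return max(a_i, L), min(b_i, U)
```

Sanity check: with N = 2, child values in [−1, 1], and a parent window of [−0.5, 0.5], once the first child scores 1 the second child's window shrinks to [−1, 0], because (1 + v)/2 ≤ 0.5 forces v ≤ 0.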
https://en.wikipedia.org/wiki/Expectiminimax
In game theory, an n-player game is a game which is well defined for any number of players. This is usually used in contrast to standard 2-player games that are only specified for two players. In defining n-player games, game theorists usually provide a definition that allows for any (finite) number of players.[1] The limiting case of n → ∞ is the subject of mean field game theory.[2]

Changing games from 2-player games to n-player games entails some concerns. For instance, the Prisoner's dilemma is a 2-player game. One might define an n-player Prisoner's Dilemma where a single defection results in everyone else getting the sucker's payoff. Alternatively, it might take a certain amount of defection before the cooperators receive the sucker's payoff. (One example of an n-player Prisoner's Dilemma is the Diner's dilemma.)

n-player games cannot be solved using minimax, the theorem that is the basis of tree searching for 2-player games. Other algorithms, like maxn, are required for traversing the game tree to optimize the score for a specific player.[3]
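A minimal maxn sketch in Python, assuming a hypothetical game interface: each leaf evaluates to a payoff vector with one entry per player, and the player to move picks the child whose vector maximizes that player's own component:

```python
def maxn(node, player, num_players, children, evaluate):
    """Returns a payoff vector (one score per player). `children(node)`
    yields successors; `evaluate(node)` returns a leaf's payoff vector."""
    kids = children(node)
    if not kids:
        return evaluate(node)
    nxt = (player + 1) % num_players      # turns rotate through the players
    return max((maxn(c, nxt, num_players, children, evaluate) for c in kids),
               key=lambda v: v[player])   # maximize own component only
```

With two players this reduces to minimax on zero-sum payoff vectors; with three or more, each interior node simply optimizes the moving player's own entry.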
https://en.wikipedia.org/wiki/Maxn_algorithm
Computer chess includes both hardware (dedicated computers) and software capable of playing chess. Computer chess provides opportunities for players to practice even in the absence of human opponents, and also provides opportunities for analysis, entertainment and training. Computer chess applications that play at the level of a chess grandmaster or higher are available on hardware from supercomputers to smartphones. Standalone chess-playing machines are also available. Stockfish, Leela Chess Zero, GNU Chess, Fruit, and other free open source applications are available for various platforms.

Computer chess applications, whether implemented in hardware or software, use different strategies than humans to choose their moves: they use heuristic methods to build, search and evaluate trees representing sequences of moves from the current position, and attempt to execute the best such sequence during play. Such trees are typically quite large, thousands to millions of nodes. The computational speed of modern computers, capable of processing tens of thousands to hundreds of thousands of nodes or more per second, along with extension and reduction heuristics that narrow the tree to mostly relevant nodes, makes such an approach effective.

The first chess machines capable of playing chess or reduced chess-like games were software programs running on digital computers early in the vacuum-tube computer age (1950s). The early programs played so poorly that even a beginner could defeat them. Within 40 years, in 1997, chess engines running on supercomputers or specialized hardware were capable of defeating even the best human players. By 2006, programs running on desktop PCs had attained the same capability. In 2006, Monty Newborn, Professor of Computer Science at McGill University, declared: "the science has been done".
Nevertheless, solving chess is not currently possible for modern computers due to the game's extremely large number of possible variations.[1] Computer chess was once considered the "Drosophila of AI", the edge of knowledge engineering. The field is now considered a scientifically completed paradigm, and playing chess is a mundane computing activity.[2]

In the past, stand-alone chess machines (usually microprocessors running software chess programs; occasionally specialized hardware) were sold. Today, chess engines may be installed as software on ordinary devices like smartphones and PCs,[3] either alone or alongside GUI programs such as Chessbase and the mobile apps for Chess.com and Lichess (both primarily websites).[4] Examples of free and open source engines include Stockfish[5] and Leela Chess Zero[6] (Lc0). Chess.com maintains its own proprietary engine named Torch.[7] Some chess engines, including Stockfish, have web versions made in languages like WebAssembly and JavaScript.[8] Most chess programs and sites offer the ability to analyze positions and games using chess engines, and some offer the ability to play against engines (which can be set to play at custom levels of strength) as though they were normal opponents.

Hardware requirements for chess engines are minimal, but performance will vary with processor speed and the memory needed to hold large transposition tables. Most modern chess engines, such as Stockfish, rely on efficiently updatable neural networks, tailored to be run exclusively on CPUs,[9][10] but Lc0 uses networks reliant on GPU performance.[11][12] Top engines such as Stockfish can be expected to beat the world's best players reliably, even when running on consumer-grade hardware.[13]

Perhaps the most common type of chess software are programs that simply play chess. A human player makes a move on the board, the AI calculates and plays a subsequent move, and the human and AI alternate turns until the game ends.
The chess engine, which calculates the moves, and the graphical user interface (GUI) are sometimes separate programs. Different engines can be connected to the GUI, permitting play against different styles of opponent. Engines often have a simple text command-line interface, while GUIs may offer a variety of piece sets, board styles, or even 3D or animated pieces. Because recent engines are so capable, engines or GUIs may offer some way of handicapping the engine's ability, to improve the odds for a win by the human player. Universal Chess Interface (UCI) engines such as Fritz or Rybka may have a built-in mechanism for reducing the Elo rating of the engine (via UCI's uci_limitstrength and uci_elo parameters). Some versions of Fritz have a Handicap and Fun mode for limiting the current engine or changing the percentage of mistakes it makes or changing its style. Fritz also has a Friend Mode where during the game it tries to match the level of the player.

Chess databases allow users to search through a large library of historical games, analyze them, check statistics, and formulate an opening repertoire. Chessbase (for PC) is a common program for these purposes amongst professional players, but there are alternatives such as Shane's Chess Information Database (Scid)[14] for Windows, Mac or Linux, Chess Assistant[15] for PC,[16] Gerhard Kalab's Chess PGN Master for Android[17] or Giordano Vicoli's Chess-Studio for iOS.[18]

Programs such as Playchess allow players to play against one another over the internet.

Chess training programs teach chess. Chessmaster had playthrough tutorials by IM Josh Waitzkin and GM Larry Christiansen. Stefan Meyer-Kahlen offers Shredder Chess Tutor based on the Step coursebooks of Rob Brunia and Cor Van Wijgerden. Former World Champion Magnus Carlsen's Play Magnus company released a Magnus Trainer app for Android and iOS. Chessbase has Fritz and Chesster for children.
Convekta provides a large number of training apps such as CT-ART and its Chess King line based on tutorials by GM Alexander Kalinin and Maxim Blokh. There is also software for handling chess problems.

After discovering refutation screening—the application of alpha–beta pruning to optimizing move evaluation—in 1957, a team at Carnegie Mellon University predicted that a computer would defeat the world human champion by 1967.[19] It did not anticipate the difficulty of determining the right order in which to evaluate moves. Researchers worked to improve programs' ability to identify killer heuristics, unusually high-scoring moves to reexamine when evaluating other branches, but into the 1970s most top chess players believed that computers would not soon be able to play at a Master level.[20] In 1968, International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years,[21] and in 1976 Senior Master and professor of psychology Eliot Hearst of Indiana University wrote that "the only way a current computer program could ever win a single game against a master player would be for the master, perhaps in a drunken stupor while playing 50 games simultaneously, to commit some once-in-a-year blunder".[20]

In the late 1970s chess programs began defeating highly skilled human players.[20] The year of Hearst's statement, Northwestern University's Chess 4.5 became the first computer to win a human tournament, at the Paul Masson American Chess Championship's Class B level. Levy won his bet in 1978 by beating Chess 4.7, but it achieved the first computer victory against a Master-class player at the tournament level by winning one of the six games.[21] In 1980, Belle began often defeating Masters. By 1982 two programs played at Master level and three were slightly weaker.[20]

The sudden improvement without a theoretical breakthrough was unexpected, as many did not expect that Belle's ability to examine 100,000 positions a second—about eight plies—would be sufficient.
The Spracklens, creators of the successful microcomputer program Sargon, estimated that 90% of the improvement came from faster evaluation speed and only 10% from improved evaluations. New Scientist stated in 1982 that computers "play terrible chess ... clumsy, inefficient, diffuse, and just plain ugly", but humans lost to them by making "horrible blunders, astonishing lapses, incomprehensible oversights, gross miscalculations, and the like" much more often than they realized; "in short, computers win primarily through their ability to find and exploit miscalculations in human initiatives".[20]

By 1982, microcomputer chess programs could evaluate up to 1,500 moves a second and were as strong as mainframe chess programs of five years earlier, able to defeat a majority of amateur players. While only able to look ahead one or two plies more than at their debut in the mid-1970s, doing so improved their play more than experts expected; seemingly minor improvements "appear to have allowed the crossing of a psychological threshold, after which a rich harvest of human error becomes accessible", New Scientist wrote.[20] While reviewing SPOC in 1984, BYTE wrote that "Computers—mainframes, minis, and micros—tend to play ugly, inelegant chess", but noted Robert Byrne's statement that "tactically they are freer from error than the average human player". The magazine described SPOC as a "state-of-the-art chess program" for the IBM PC with a "surprisingly high" level of play, and estimated its USCF rating as 1700 (Class B).[22]

At the 1982 North American Computer Chess Championship, Monroe Newborn predicted that a chess program could become world champion within five years; tournament director and International Master Michael Valvo predicted ten years; the Spracklens predicted 15; Ken Thompson predicted more than 20; and others predicted that it would never happen.
The most widely held opinion, however, was that it would occur around the year 2000.[23] In 1989, Levy was defeated by Deep Thought in an exhibition match. Deep Thought, however, was still considerably below World Championship level, as the reigning world champion, Garry Kasparov, demonstrated in two strong wins in 1989. It was not until a 1996 match with IBM's Deep Blue that Kasparov lost his first game to a computer at tournament time controls, in Deep Blue versus Kasparov, 1996, game 1. This game was, in fact, the first time a reigning world champion had lost to a computer using regular time controls. However, Kasparov regrouped to win three and draw two of the remaining five games of the match, for a convincing victory. In May 1997, an updated version of Deep Blue defeated Kasparov 3½–2½ in a return match. A documentary mainly about the confrontation was made in 2003, titled Game Over: Kasparov and the Machine.

With increasing processing power and improved evaluation functions, chess programs running on commercially available workstations began to rival top-flight players. In 1998, Rebel 10 defeated Viswanathan Anand, who at the time was ranked second in the world, by a score of 5–3. However, most of those games were not played at normal time controls. Out of the eight games, four were blitz games (five minutes plus five seconds Fischer delay for each move); these Rebel won 3–1. Two were semi-blitz games (fifteen minutes for each side) that Rebel won as well (1½–½). Finally, two games were played as regular tournament games (forty moves in two hours, one hour sudden death); here it was Anand who won ½–1½.[24] In fast games, computers played better than humans, but at classical time controls – at which a player's rating is determined – the advantage was not so clear.

In the early 2000s, commercially available programs such as Junior and Fritz were able to draw matches against former world champion Garry Kasparov and classical world champion Vladimir Kramnik.
In October 2002, Vladimir Kramnik and Deep Fritz competed in the eight-game Brains in Bahrain match, which ended in a draw. Kramnik won games 2 and 3 by "conventional" anti-computer tactics – playing conservatively for a long-term advantage the computer is not able to see in its game tree search. Fritz, however, won game 5 after a severe blunder by Kramnik. Game 6 was described by the tournament commentators as "spectacular". Kramnik, in a better position in the early middlegame, tried a piece sacrifice to achieve a strong tactical attack, a strategy known to be highly risky against computers, which are at their strongest defending against such attacks. True to form, Fritz found a watertight defense and Kramnik's attack petered out, leaving him in a bad position. Kramnik resigned the game, believing the position lost. However, post-game human and computer analysis has shown that the Fritz program was unlikely to have been able to force a win and Kramnik effectively sacrificed a drawn position. The final two games were draws. Given the circumstances, most commentators still rate Kramnik the stronger player in the match.[citation needed]

In January 2003, Kasparov played Junior, another chess computer program, in New York City. The match ended 3–3. In November 2003, Kasparov played X3D Fritz. The match ended 2–2.

In 2005, Hydra, a dedicated chess computer with custom hardware and sixty-four processors, and also winner of the 14th IPCCC in 2005, defeated seventh-ranked Michael Adams 5½–½ in a six-game match (though Adams' preparation was far less thorough than Kramnik's for the 2002 series).[25]

In November–December 2006, World Champion Vladimir Kramnik played Deep Fritz. This time the computer won; the match ended 2–4. Kramnik was able to view the computer's opening book. In the first five games Kramnik steered the game into a typical "anti-computer" positional contest. He lost one game (overlooking a mate in one), and drew the next four.
In the final game, in an attempt to draw the match, Kramnik played the more aggressive Sicilian Defence and was crushed. There was speculation that interest in human–computer chess competition would plummet as a result of the 2006 Kramnik–Deep Fritz match.[26] According to Newborn, for example, "the science is done".[27]

Human–computer chess matches showed the best computer systems overtaking human chess champions in the late 1990s. For the 40 years prior to that, the trend had been that the best machines gained about 40 points per year in Elo rating while the best humans only gained roughly 2 points per year.[28] The highest rating obtained by a computer in human competition was Deep Thought's USCF rating of 2551 in 1988, and FIDE no longer accepts human–computer results in its rating lists. Specialized machine-only Elo pools have been created for rating machines, but such numbers, while similar in appearance, are not directly comparable.[29] In 2016, the Swedish Chess Computer Association rated computer program Komodo at 3361.

Chess engines continue to improve. In 2009, chess engines running on slower hardware reached the grandmaster level. A mobile phone won a category 6 tournament with a performance rating of 2898: chess engine Hiarcs 13 running inside Pocket Fritz 4 on the mobile phone HTC Touch HD won the Copa Mercosur tournament in Buenos Aires, Argentina, with 9 wins and 1 draw on August 4–14, 2009.[30] Pocket Fritz 4 searches fewer than 20,000 positions per second.[31] This is in contrast to supercomputers such as Deep Blue that searched 200 million positions per second.

Advanced Chess is a form of chess developed in 1998 by Kasparov where a human plays against another human, and both have access to computers to enhance their strength. The resulting "advanced" player was argued by Kasparov to be stronger than a human or computer alone. This has been proven on numerous occasions, such as at Freestyle Chess events.
Players today are inclined to treat chess engines as analysis tools rather than opponents.[32] Chess grandmaster Andrew Soltis stated in 2016 that "The computers are just much too good" and that world champion Magnus Carlsen won't play computer chess because "he just loses all the time and there's nothing more depressing than losing without even being in the game."[33]

Since the era of mechanical machines that played rook and king endings and electrical machines that played other games like hex in the early years of the 20th century, scientists and theoreticians have sought to develop a procedural representation of how humans learn, remember, think and apply knowledge, and the game of chess, because of its daunting complexity, became the "Drosophila of artificial intelligence (AI)".[Note 1] The procedural resolution of complexity became synonymous with thinking, and early computers, even before the chess automaton era, were popularly referred to as "electronic brains". Several different schemata were devised, starting in the latter half of the 20th century, to represent knowledge and thinking, as applied to playing the game of chess (and other games like checkers).

Using "ends-and-means" heuristics, a human chess player can intuitively determine optimal outcomes and how to achieve them regardless of the number of moves necessary, but a computer must be systematic in its analysis. Most players agree that looking at least five moves ahead (ten plies) when necessary is required to play well. Normal tournament rules give each player an average of three minutes per move.
On average there are more than 30 legal moves per chess position, so a computer must examine a quadrillion possibilities to look ahead ten plies (five full moves); one that could examine a million positions a second would require more than 30 years.[20]

The earliest attempts at procedural representations of playing chess predated the digital electronic age, but it was the stored-program digital computer that gave scope to calculating such complexity. Claude Shannon, in 1949, laid out the principles of algorithmic solution of chess. In that paper, the game is represented by a "tree", or digital data structure of choices (branches) corresponding to moves. The nodes of the tree were positions on the board resulting from the choices of move. The impossibility of representing an entire game of chess by constructing a tree from first move to last was immediately apparent: there are an average of 36 moves per position in chess and an average game lasts about 35 moves to resignation (60–80 moves if played to checkmate, stalemate, or other draw). There are 400 positions possible after the first move by each player, about 200,000 after two moves each, and nearly 120 million after just 3 moves each.

So a limited lookahead (search) to some depth, followed by using domain-specific knowledge to evaluate the resulting terminal positions, was proposed. A kind of middle-ground position, given good moves by both sides, would result, and its evaluation would inform the player about the goodness or badness of the moves chosen. Searching and comparing operations on the tree were well suited to computer calculation; the representation of subtle chess knowledge in the evaluation function was not. The early chess programs suffered in both areas: searching the vast tree required computational resources far beyond those available, and what chess knowledge was useful and how it was to be encoded would take decades to discover.
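The arithmetic behind the 30-year figure above is easy to check: a quadrillion positions examined at a million positions per second comes to roughly 32 years:

```python
positions = 10 ** 15                  # ~a quadrillion ten-ply continuations
per_second = 10 ** 6                  # positions examined per second
seconds = positions / per_second
years = seconds / (365.25 * 24 * 3600)
print(round(years, 1))                # 31.7
```

This is why a fixed-depth exhaustive search is hopeless without pruning: each additional ply multiplies the time by the branching factor.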
The developers of a chess-playing computer system must decide on a number of fundamental implementation issues.

Adriaan de Groot interviewed a number of chess players of varying strengths, and concluded that both masters and beginners look at around forty to fifty positions before deciding which move to play. What makes the former much better players is that they use pattern recognition skills built from experience. This enables them to examine some lines in much greater depth than others by simply not considering moves they can assume to be poor. More evidence for this is the way that good human players find it much easier to recall positions from genuine chess games, breaking them down into a small number of recognizable sub-positions, rather than completely random arrangements of the same pieces. In contrast, poor players have the same level of recall for both.

The equivalent of this in computer chess are evaluation functions for leaf evaluation, which correspond to the human players' pattern recognition skills, and the use of machine learning techniques in training them, such as Texel tuning, stochastic gradient descent, and reinforcement learning, which corresponds to building experience in human players. This allows modern programs to examine some lines in much greater depth than others by using forward pruning and other selective heuristics to simply not consider moves the program assumes to be poor through their evaluation function, in the same way that human players do. The only fundamental difference between a computer program and a human in this sense is that a computer program can search much deeper than a human player could, allowing it to search more nodes and bypass the horizon effect to a much greater extent than is possible with human players.

Computer chess programs usually support a number of common de facto standards.
Nearly all of today's programs can read and write game moves as Portable Game Notation (PGN), and can read and write individual positions as Forsyth–Edwards Notation (FEN). Older chess programs often only understood long algebraic notation, but today users expect chess programs to understand standard algebraic chess notation. Starting in the late 1990s, programmers began to develop engines (with a command-line interface which calculates which moves are strongest in a position) separately from graphical user interfaces (GUIs), which provide the player with a chessboard they can see and pieces that can be moved. Engines communicate their moves to the GUI using a protocol such as the Chess Engine Communication Protocol (CECP) or the Universal Chess Interface (UCI). By dividing chess programs into these two pieces, developers can write only the user interface, or only the engine, without needing to write both parts of the program. (See also chess engine.) Developers have to decide whether to connect the engine to an opening book and/or endgame tablebases or leave this to the GUI. The data structure used to represent each chess position is key to the performance of move generation and position evaluation. Methods include pieces stored in an array ("mailbox" and "0x88"), piece positions stored in a list ("piece list"), collections of bit-sets for piece locations ("bitboards"), and Huffman-coded positions for compact long-term storage. Computer chess programs consider chess moves as a game tree. In theory, they examine all moves, then all counter-moves to those moves, then all moves countering them, and so on, where each individual move by one player is called a "ply". This evaluation continues until a certain maximum search depth is reached or the program determines that a final "leaf" position has been reached (e.g. checkmate).
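The bitboard representation mentioned above can be sketched in a few lines: one 64-bit integer per piece type and colour, with one bit per square. This is a minimal illustrative sketch, not taken from any particular engine; the function names are invented for the example.

```python
# Minimal bitboard sketch: one bit per square of a 64-bit integer.
# Squares are numbered 0 (a1) .. 63 (h8), so index = rank * 8 + file.

def square_index(file: int, rank: int) -> int:
    """Map 0-based file/rank coordinates to a 0..63 bit index."""
    return rank * 8 + file

def set_square(bb: int, sq: int) -> int:
    """Return the bitboard with the given square's bit switched on."""
    return bb | (1 << sq)

def occupied(bb: int, sq: int) -> bool:
    """Test whether a square's bit is set."""
    return (bb >> sq) & 1 == 1

def popcount(bb: int) -> int:
    """Number of pieces on this bitboard (population count)."""
    return bin(bb).count("1")

# Place the two white rooks of the initial position, a1 and h1:
white_rooks = 0
white_rooks = set_square(white_rooks, square_index(0, 0))  # a1
white_rooks = set_square(white_rooks, square_index(7, 0))  # h1
```

Real engines gain speed from this layout because move generation becomes bitwise arithmetic (shifts, masks, population counts) on whole boards at once rather than per-square loops.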
One particular type of search algorithm used in computer chess is minimax search, where at each ply the "best" move by the player to move is selected; one player is trying to maximize the score, the other to minimize it. By this alternating process, one particular terminal node whose evaluation represents the searched value of the position will be arrived at. Its value is backed up to the root, and that evaluation becomes the valuation of the position on the board. This search process is called minimax. A naive implementation of the minimax algorithm can only search to a small depth in a practical amount of time, so various methods have been devised to greatly speed the search for good moves. Alpha–beta pruning, a system of defining upper and lower bounds on possible search results and searching until the bounds coincide, is typically used to reduce the search space of the program. In addition, various selective search heuristics, such as quiescence search, forward pruning, search extensions and search reductions, are also used. These heuristics are triggered based on certain conditions in an attempt to weed out obviously bad moves (history moves) or to investigate interesting nodes (e.g. check extensions, passed pawns on the seventh rank, etc.). These selective search heuristics have to be used very carefully, however. If the program overextends, it wastes too much time looking at uninteresting positions. If too much is pruned or reduced, there is a risk of cutting out interesting nodes. Monte Carlo tree search (MCTS) is a heuristic search algorithm which expands the search tree based on random sampling of the search space. A version of Monte Carlo tree search commonly used in computer chess is PUCT, Predictor and Upper Confidence bounds applied to Trees. DeepMind's AlphaZero and Leela Chess Zero use MCTS instead of minimax.
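The minimax-with-alpha-beta scheme described above can be sketched on a toy tree. This is an illustration of the algorithm only: a "node" here is either a numeric leaf evaluation or a list of child nodes, whereas a real engine would generate children from a chess position.

```python
# Minimax with alpha-beta pruning on an explicit toy tree (sketch).

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):       # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # beta cutoff: the minimizer
                break                        # would never allow this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                # alpha cutoff
                break
        return value

# Classic two-ply example: the maximizer picks the branch whose
# minimum (the opponent's best reply) is largest.
tree = [[3, 5], [2, 9], [0, -1]]
best = alphabeta(tree, True)   # branch minima are 3, 2, -1 -> 3
```

Note how the second and third branches are cut off early: once their first leaf is already worse for the maximizer than the first branch's guaranteed 3, the remaining siblings need not be examined.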
Such engines use batching on graphics processing units in order to calculate their evaluation functions and policy (move selection), and therefore require a parallel search algorithm, as calculations on the GPU are inherently parallel. The minimax and alpha–beta pruning algorithms used in computer chess are inherently serial algorithms, so they would not work well with batching on the GPU. On the other hand, MCTS is a good alternative, because the random sampling used in Monte Carlo tree search lends itself well to parallel computing, which is why nearly all engines which support calculations on the GPU use MCTS instead of alpha–beta. Many other optimizations can be used to make chess-playing programs stronger. For example, transposition tables are used to record positions that have been previously evaluated, to save recalculation of them. Refutation tables record key moves that "refute" what appears to be a good move; these are typically tried first in variant positions (since a move that refutes one position is likely to refute another). The drawback is that transposition tables at deep ply depths can get quite large – tens to hundreds of millions of entries. IBM's Deep Blue, for example, used a transposition table of 500 million entries in 1996. Transposition tables that are too small can result in spending more time searching for non-existent entries due to thrashing than the time saved by entries found. Many chess engines use pondering, searching to deeper levels on the opponent's time, similar to human beings, to increase their playing strength. Of course, faster hardware and additional memory can improve chess program playing strength. Hyperthreaded architectures can improve performance modestly if the program is running on a single core or a small number of cores. Most modern programs are designed to take advantage of multiple cores to do parallel search.
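The transposition-table idea can be sketched as a bounded cache keyed by position. This is a simplified illustration: keys here are plain strings for clarity, whereas real engines key on 64-bit Zobrist hashes and store richer entries (bound type, best move); all names are invented for the example.

```python
# Transposition-table sketch: cache search results by position so that
# positions reached via different move orders are not re-searched.

from dataclasses import dataclass

@dataclass
class TTEntry:
    depth: int    # depth to which the stored score was searched
    score: int    # evaluation, in centipawns

class TranspositionTable:
    def __init__(self, max_entries=1_000_000):
        self.table = {}
        self.max_entries = max_entries   # bound memory, as real engines must

    def probe(self, key, depth):
        """Return a cached score only if it was searched at least this deep."""
        entry = self.table.get(key)
        if entry is not None and entry.depth >= depth:
            return entry.score
        return None

    def store(self, key, depth, score):
        if len(self.table) < self.max_entries:
            self.table[key] = TTEntry(depth, score)

tt = TranspositionTable()
tt.store("some-position", depth=6, score=35)
tt.probe("some-position", depth=4)   # reusable: stored search was deeper
tt.probe("some-position", depth=8)   # None: stored search too shallow
```

The depth check is the essential subtlety: a score searched to depth 6 may stand in for any shallower request, but never for a deeper one.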
Other programs are designed to run on a general purpose computer and allocate move generation, parallel search, or evaluation to dedicated processors or specialized co-processors. The first paper on chess search was by Claude Shannon in 1950.[34] He predicted the two main possible search strategies which would be used, which he labeled "Type A" and "Type B",[35] before anyone had programmed a computer to play chess. Type A programs would use a "brute force" approach, examining every possible position for a fixed number of moves using a pure naive minimax algorithm. Shannon believed this would be impractical for two reasons. First, with approximately thirty moves possible in a typical real-life position, he expected that searching the approximately 10^9 positions involved in looking three moves ahead for both sides (six plies) would take about sixteen minutes, even in the "very optimistic" case that the chess computer evaluated a million positions every second. (It took about forty years to achieve this speed.) A later search algorithm called alpha–beta pruning, a system of defining upper and lower bounds on possible search results and searching until the bounds coincide, reduced the branching factor of the game tree logarithmically, but it still was not feasible for chess programs at the time to tame the exponential explosion of the tree. Second, it ignored the problem of quiescence, trying to only evaluate a position that is at the end of an exchange of pieces or other important sequence of moves ('lines'). He expected that adapting Type A to cope with this would greatly increase the number of positions needing to be looked at and slow the program down still further.
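Shannon's figures are easy to check as back-of-envelope arithmetic: a branching factor of about thirty, six plies of exhaustive search, and a million evaluations per second.

```python
# Back-of-envelope check of Shannon's Type A estimate.

branching = 30
plies = 6
positions = branching ** plies       # 30^6 = 729,000,000, roughly 10^9
seconds = positions / 1_000_000      # at one million positions per second
minutes = seconds / 60               # ~12 minutes for 30^6; a round 10^9
                                     # positions would take ~16.7 minutes
```

A branching factor of 30 exactly gives about twelve minutes; Shannon's rounder figure of 10^9 positions gives the "about sixteen minutes" quoted in the text.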
This led naturally to what is referred to as "selective search" or "type B search", using chess knowledge (heuristics) to select a few presumably good moves from each position to search, and prune away the others without searching. Instead of wasting processing power examining bad or trivial moves, Shannon suggested that type B programs would use two improvements, enabling them to look further ahead ('deeper') at the most significant lines in a reasonable time. However, early attempts at selective search often resulted in the best move or moves being pruned away. As a result, little or no progress was made in the next 25 years, which were dominated by this first iteration of the selective search paradigm. The best program produced in this early period was Mac Hack VI in 1967; it played at about the same level as the average amateur (C class on the United States Chess Federation rating scale). Meanwhile, hardware continued to improve, and in 1974, brute force searching was implemented for the first time in the Northwestern University Chess 4.0 program. In this approach, all alternative moves at a node are searched, and none are pruned away. Its developers discovered that the time required to simply search all the moves was much less than the time required to apply knowledge-intensive heuristics to select just a few of them, and the benefit of not prematurely or inadvertently pruning away good moves resulted in substantially stronger performance. In the 1980s and 1990s, progress was finally made in the selective search paradigm, with the development of quiescence search, null move pruning, and other modern selective search heuristics. These heuristics made far fewer mistakes than earlier ones had, were found to be worth the extra time they saved because they allowed deeper searches, and were widely adopted by many engines.
While many modern programs do use alpha–beta search as a substrate for their search algorithm, the additional selective search heuristics used in modern programs mean that the program no longer does a "brute force" search. Instead they rely heavily on these selective search heuristics to extend lines the program considers good and to prune and reduce lines the program considers bad, to the point where most of the nodes on the search tree are pruned away, enabling modern programs to search very deep. In 2006, Rémi Coulom created Monte Carlo tree search, another kind of type B selective search. In 2007, an adaptation of Monte Carlo tree search called Upper Confidence bounds applied to Trees, or UCT for short, was created by Levente Kocsis and Csaba Szepesvári. In 2011, Chris Rosin developed a variation of UCT called Predictor + Upper Confidence bounds applied to Trees, or PUCT for short. PUCT was then used in AlphaZero in 2017, and later in Leela Chess Zero in 2018. In the 1970s, most chess programs ran on supercomputers like Control Data Cyber 176s or Cray-1s, indicating that during that developmental period for computer chess, processing power was the limiting factor in performance. Most chess programs struggled to search to a depth greater than 3 ply. It was not until the hardware chess machines of the 1980s that a relationship between processor speed and knowledge encoded in the evaluation function became apparent. It has been estimated that doubling the computer speed gains approximately fifty to seventy Elo points in playing strength (Levy & Newborn 1991:192). For most chess positions, computers cannot look ahead to all possible final positions. Instead, they must look ahead a few plies and compare the possible positions, known as leaves. The algorithm that evaluates leaves is termed the "evaluation function", and these algorithms are often vastly different between different chess programs.
Evaluation functions typically evaluate positions in hundredths of a pawn (called a centipawn), where by convention a positive evaluation favors White and a negative evaluation favors Black. However, some evaluation functions output win/draw/loss percentages instead of centipawns. Historically, handcrafted evaluation functions consider material value along with other factors affecting the strength of each side. When counting up the material for each side, typical values for pieces are 1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen. (See Chess piece relative value.) The king is sometimes given an arbitrarily high value such as 200 points (Shannon's paper) to ensure that a checkmate outweighs all other factors (Levy & Newborn 1991:45). In addition to points for pieces, most handcrafted evaluation functions take many factors into account, such as pawn structure, the fact that a pair of bishops is usually worth more, centralized pieces being worth more, and so on. The protection of kings is usually considered, as well as the phase of the game (opening, middle or endgame). Machine learning techniques such as Texel tuning, stochastic gradient descent, or reinforcement learning are usually used to optimise handcrafted evaluation functions. Most modern evaluation functions make use of neural networks. The most common evaluation function in use today is the efficiently updatable neural network, which is a shallow neural network whose inputs are piece-square tables. Piece-square tables are a set of 64 values corresponding to the squares of the chessboard, and there typically exists a piece-square table for every piece and colour, resulting in 12 piece-square tables and thus 768 inputs into the neural network. In addition, some engines use deep neural networks in their evaluation function. Neural networks are usually trained using some reinforcement learning algorithm, in conjunction with supervised learning or unsupervised learning.
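The material term of a handcrafted evaluation can be sketched directly from the conventional piece values above, expressed in centipawns. This is an illustration only, not any engine's actual evaluation function; it reads a FEN-style piece-placement string, with uppercase letters for White and lowercase for Black, and gives the king a value of 0 since both kings are always on the board.

```python
# Material-only evaluation sketch (centipawns, positive = White ahead).

PIECE_VALUES = {"p": 100, "n": 300, "b": 300, "r": 500, "q": 900, "k": 0}

def material_eval(placement: str) -> int:
    """Centipawn material balance from a FEN piece-placement field."""
    score = 0
    for ch in placement:
        value = PIECE_VALUES.get(ch.lower())
        if value is None:
            continue                  # skip digits and '/' rank separators
        score += value if ch.isupper() else -value
    return score

# White has an extra knight (Black's g8 knight is missing); every other
# piece is balanced, so the score is one knight: +300 centipawns.
material_eval("rnbqkb1r/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR")
```

A full handcrafted evaluation would add terms for pawn structure, king safety, mobility and so on on top of this material core, each with its own tuned weight.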
The output of the evaluation function is a single scalar, quantized in centipawns or other units, which is, in the case of handcrafted evaluation functions, a weighted summation of the various factors described, or in the case of neural network based evaluation functions, the output of the head of the neural network. The evaluation putatively represents or approximates the value of the subtree below the evaluated node as if it had been searched to termination, i.e. the end of the game. During the search, an evaluation is compared against evaluations of other leaves, eliminating nodes that represent bad or poor moves for either side, to yield a node which, by convergence, represents the value of the position with best play by both sides. Endgame play had long been one of the great weaknesses of chess programs because of the depth of search needed. Some otherwise master-level programs were unable to win in positions where even intermediate human players could force a win. To solve this problem, computers have been used to analyze some chess endgame positions completely, starting with king and pawn against king. Such endgame tablebases are generated in advance using a form of retrograde analysis, starting with positions where the final result is known (e.g., where one side has been mated) and seeing which other positions are one move away from them, then which are one move from those, etc. Ken Thompson was a pioneer in this area. The results of the computer analysis sometimes surprised people. In 1977 Thompson's Belle chess machine used the endgame tablebase for king and rook against king and queen and was able to draw that theoretically lost ending against several masters (see Philidor position#Queen versus rook). This was despite not following the usual strategy to delay defeat by keeping the defending king and rook close together for as long as possible.
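The retrograde-analysis idea behind tablebase generation can be sketched on a tiny abstract game graph, working backwards from positions whose result is known exactly as the text describes. The positions and moves here are invented for illustration, not real chess; a real generator enumerates millions of board positions and also records distance to mate.

```python
# Retrograde-analysis sketch: label positions win/loss for the side to
# move, starting from known terminal losses and propagating backwards.

def retrograde_solve(moves, terminal_losses):
    """moves: dict position -> list of successor positions.
    terminal_losses: positions already lost for the side to move.
    Returns dict position -> 'win' or 'loss' for the side to move."""
    result = {pos: "loss" for pos in terminal_losses}
    changed = True
    while changed:                       # iterate until no new labels
        changed = False
        for pos, succs in moves.items():
            if pos in result:
                continue
            # A position is won if some move reaches a lost position...
            if any(result.get(s) == "loss" for s in succs):
                result[pos] = "win"
                changed = True
            # ...and lost if every move reaches a won position.
            elif succs and all(result.get(s) == "win" for s in succs):
                result[pos] = "loss"
                changed = True
    return result

moves = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
solved = retrograde_solve(moves, terminal_losses={"d"})
```

Here "d" is a terminal loss, so "b" and "c" (which can move into it) are wins, and "a" (whose every move reaches a won position for the opponent) is a loss, mirroring how a real tablebase labels king-and-pawn endings outward from checkmate.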
Asked to explain the reasons behind some of the program's moves, Thompson was unable to do so beyond saying the program's database simply returned the best moves. Most grandmasters declined to play against the computer in the queen versus rook endgame, but Walter Browne accepted the challenge. A queen versus rook position was set up in which the queen can win in thirty moves, with perfect play. Browne was allowed 2½ hours to play fifty moves, otherwise a draw would be claimed under the fifty-move rule. After forty-five moves, Browne agreed to a draw, being unable to force checkmate or win the rook within the next five moves. In the final position, Browne was still seventeen moves away from checkmate, but not quite that far away from winning the rook. Browne studied the endgame, and played the computer again a week later in a different position in which the queen can win in thirty moves. This time, he captured the rook on the fiftieth move, giving him a winning position.[36][37] Other positions, long believed to be won, turned out to take more moves against perfect play to actually win than were allowed by chess's fifty-move rule. As a consequence, for some years the official FIDE rules of chess were changed to extend the number of moves allowed in these endings. After a while, the rule reverted to fifty moves in all positions: more such positions were discovered, complicating the rule still further, and the extension made no difference in human play, since humans could not play the positions perfectly anyway. Over the years, other endgame database formats have been released, including the Edward Tablebase, the De Koning Database and the Nalimov Tablebase, which is used by many chess programs such as Rybka, Shredder and Fritz.
Tablebases for all positions with six pieces are available.[38] Some seven-piece endgames have been analyzed by Marc Bourzutschky and Yakov Konoval.[39] Programmers using the Lomonosov supercomputers in Moscow have completed a chess tablebase for all endgames with seven pieces or fewer (trivial endgame positions are excluded, such as six white pieces versus a lone black king).[40][41] In all of these endgame databases it is assumed that castling is no longer possible. Many tablebases do not consider the fifty-move rule, under which a game where fifty moves pass without a capture or pawn move can be claimed to be a draw by either player. This results in the tablebase returning results such as "Forced mate in sixty-six moves" in some positions which would actually be drawn because of the fifty-move rule. One reason for this is that if the rules of chess were to be changed once more, giving more time to win such positions, it would not be necessary to regenerate all the tablebases. It is also very easy for the program using the tablebases to notice and take account of this 'feature', and in any case a program using an endgame tablebase will choose the move that leads to the quickest win (even if it would fall foul of the fifty-move rule with perfect play). If playing an opponent not using a tablebase, such a choice will give good chances of winning within fifty moves. The Nalimov tablebases, which use state-of-the-art compression techniques, require 7.05 GB of hard disk space for all five-piece endings. To cover all the six-piece endings requires approximately 1.2 TB. It is estimated that a seven-piece tablebase requires between 50 and 200 TB of storage space.[42] Endgame databases featured prominently in 1999, when Kasparov played an exhibition match on the Internet against the rest of the world.
A seven-piece queen and pawn endgame was reached, with the World Team fighting to salvage a draw. Eugene Nalimov helped by generating the six-piece ending tablebase in which both sides had two queens, which was used heavily to aid analysis by both sides. The most popular endgame tablebase is Syzygy, which is used by most top computer programs like Stockfish, Leela Chess Zero, and Komodo. It is also significantly smaller in size than other formats, with 7-piece tablebases taking only 18.4 TB.[43] For a current state-of-the-art chess engine like Stockfish, a tablebase provides only a very minor increase in playing strength (approximately 3 Elo points for six-man Syzygy as of Stockfish 15).[44] Chess engines, like human beings, may save processing time as well as select variations known to be strong by referencing an opening book stored in a database. Opening books cover the opening moves of a game to variable depth, depending on opening and variation, but usually to the first 10-12 moves (20-24 ply). In the early eras of computer chess, trusting variations studied in depth by human grandmasters for decades was superior to the weak performance of mid-20th-century engines. And even in the contemporary era, allowing computer engines to extensively analyze various openings at their leisure beforehand, and then simply consult the results when in a game, speeds up their play. In the 1990s, some theorists believed that chess engines of the day had much of their strength in memorized opening books and knowledge dedicated to known positions, and thus believed a valid anti-computer tactic would be to intentionally play some out-of-book moves in order to force the chess program to think for itself. This seems to have been a dubious assumption even then; Garry Kasparov tried it by using the non-standard Mieses Opening in Game 1 of the 1997 Deep Blue versus Garry Kasparov match, but lost.
This tactic became even weaker as time passed; the opening books stored in computer databases can be far more extensive than even the best-prepared humans', meaning computers will be well prepared for even rare variations and know the correct play. More generally, the play of engines even in fully unknown situations (as comes up in variants such as Chess960) is still exceptionally strong, so the lack of an opening book is not a major disadvantage for tactically sharp chess engines, which can discover strong moves accurately even in unfamiliar board variations. In contemporary engine tournaments, engines are often told to play positions from a variety of openings, including unbalanced ones, to reduce the draw rate and to add more variety to the games.[45] CEGT,[46] CSS,[47] SSDF,[48] WBEC,[49] REBEL,[50] FGRL,[51] and IPON[52] maintain rating lists allowing fans to compare the strength of engines. Various versions of Stockfish, Komodo, Leela Chess Zero, and Fat Fritz dominate the rating lists in the early 2020s. CCRL (Computer Chess Rating Lists) is an organisation that tests computer chess engines' strength by playing the programs against each other. CCRL was founded in 2006 to promote computer-computer competition and tabulate results on a rating list.[53] The organisation runs three different lists: 40/40 (40 minutes for every 40 moves played), 40/4 (4 minutes for every 40 moves played), and 40/4 FRC (same time control but Chess960).[Note 2] Pondering (or permanent brain) is switched off and timing is adjusted to the AMD64 X2 4600+ (2.4 GHz) CPU by using Crafty 19.17 BH as a benchmark. Generic, neutral opening books are used (as opposed to the engine's own book) up to a limit of 12 moves into the game, alongside 4- or 5-man tablebases.[53][54][55] The idea of creating a chess-playing machine dates back to the eighteenth century. Around 1769, the chess-playing automaton called The Turk, created by Hungarian inventor Farkas Kempelen, became famous before being exposed as a hoax.
Before the development of digital computing, serious trials based on automata, such as El Ajedrecista of 1912, built by Spanish engineer Leonardo Torres Quevedo, which played a king and rook versus king ending, were too complex and limited to be useful for playing full games of chess. The field of mechanical chess research languished until the advent of the digital computer in the 1950s. Since then, chess enthusiasts and computer engineers have built, with increasing degrees of seriousness and success, chess-playing machines and computer programs. One of the few chess grandmasters to devote himself seriously to computer chess was former World Chess Champion Mikhail Botvinnik, who wrote several works on the subject. Botvinnik's interest in computer chess began in the 1950s; he favoured chess algorithms based on Shannon's selective type B strategy, as he discussed with Max Euwe on Dutch television in 1958. Working with the relatively primitive hardware available in the Soviet Union in the early 1960s, Botvinnik had no choice but to investigate software move selection techniques; at the time only the most powerful computers could achieve much beyond a three-ply full-width search, and Botvinnik had no such machines. In 1965 Botvinnik was a consultant to the ITEP team, whose program won a correspondence computer chess match in 1967 against the Kotok-McCarthy program led by John McCarthy (see Kotok-McCarthy). Later he advised the team that created the chess program Kaissa at Moscow's Institute of Control Sciences. Botvinnik had his own ideas for modelling a chess master's mind. After publishing and discussing his early ideas on attack maps and trajectories at the Moscow Central Chess Club in 1966, he found Vladimir Butenko as a supporter and collaborator. Butenko first implemented the 15x15 vector attacks board representation on an M-20 computer, determining trajectories.
After Botvinnik introduced the concept of zones in 1970, Butenko refused further cooperation and began to write his own program, dubbed Eureka. In the 1970s and 1980s, leading a team including Boris Stilman, Alexander Yudin, Alexander Reznitskiy, Michael Tsfasman and Mikhail Chudakov, Botvinnik worked on his own project, 'Pioneer', an artificial-intelligence-based chess project. In the 1990s, Botvinnik, by then in his 80s, worked on a new project, 'CC Sapiens'. One developmental milestone occurred when the team from Northwestern University, which was responsible for the Chess series of programs and won the first three ACM Computer Chess Championships (1970–72), abandoned type B searching in 1973. The resulting program, Chess 4.0, won that year's championship, and its successors went on to come in second in both the 1974 ACM Championship and that year's inaugural World Computer Chess Championship, before winning the ACM Championship again in 1975, 1976 and 1977. The type A implementation turned out to be just as fast: in the time it used to take to decide which moves were worthy of being searched, it was possible just to search all of them. In fact, Chess 4.0 set the paradigm that essentially all modern chess programs still follow, and that had been successfully started by the Russian ITEP program in 1965. In 1978, an early rendition of Ken Thompson's hardware chess machine Belle entered and won the North American Computer Chess Championship over the dominant Northwestern University Chess 4.7. Technological advances by orders of magnitude in processing power have made the brute force approach far more incisive than was the case in the early years. The result is that a very solid, tactical AI player, aided by some limited positional knowledge built in by the evaluation function and pruning/extension rules, began to match the best players in the world.
It turned out to produce excellent results, at least in the field of chess, to let computers do what they do best (calculate) rather than coax them into imitating human thought processes and knowledge. In 1997 Deep Blue, a brute-force machine capable of examining 500 million nodes per second, defeated World Champion Garry Kasparov, marking the first time a computer had defeated a reigning world chess champion in standard time control. In 2016, NPR asked experts to characterize the playing style of computer chess engines. Murray Campbell of IBM stated that "Computers don't have any sense of aesthetics... They play what they think is the objectively best move in any position, even if it looks absurd, and they can play any move no matter how ugly it is." Grandmasters Andrew Soltis and Susan Polgar stated that computers are more likely to retreat than humans are.[33] While neural networks have been used in the evaluation functions of chess engines since the late 1980s, with programs such as NeuroChess, Morph, Blondie25, Giraffe, AlphaZero, and MuZero,[56][57][58][59][60] neural networks did not become widely adopted by chess engines until the arrival of efficiently updatable neural networks in the summer of 2020. Efficiently updatable neural networks were originally developed in computer shogi in 2018 by Yu Nasu,[61][62] and were first ported to a derivative of Stockfish called Stockfish NNUE on 31 May 2020,[63] and integrated into the official Stockfish engine on 6 August 2020,[64][65] before other chess programmers began to adopt neural networks into their engines. Some people, such as the Royal Society's Venki Ramakrishnan, believe that AlphaZero led to the widespread adoption of neural networks in chess engines.[66] However, AlphaZero influenced very few engines to begin using neural networks, and those tended to be new experimental engines such as Leela Chess Zero, which began specifically to replicate the AlphaZero paper.
The deep neural networks used in AlphaZero's evaluation function required expensive graphics processing units, which were not compatible with existing chess engines. The vast majority of chess engines only use central processing units, and computing and processing information on GPUs requires special libraries in the backend such as Nvidia's CUDA, which none of the engines had access to. Thus the vast majority of chess engines such as Komodo and Stockfish continued to use handcrafted evaluation functions until efficiently updatable neural networks were ported to computer chess from computer shogi in 2020; these did not require the use of GPUs or libraries like CUDA at all. Even then, the neural networks used in computer chess are fairly shallow, and the deep reinforcement learning methods pioneered by AlphaZero are still extremely rare in computer chess. These chess playing systems include custom hardware with approximate dates of introduction (excluding dedicated microcomputers): In the late 1970s to early 1990s, there was a competitive market for dedicated chess computers. This market changed in the mid-1990s when computers with dedicated processors could no longer compete with the fast processors in personal computers. Recently, some hobbyists have been using the Multi Emulator Super System to run the chess programs created for Fidelity or Hegener & Glaser's Mephisto computers on modern 64-bit operating systems such as Windows 10.[87] The author of Rebel, Ed Schröder, has also adapted three of the Hegener & Glaser Mephisto programs he wrote to work as UCI engines.[88] These programs can be run on MS-DOS, and can be run on 64-bit Windows 10 via emulators such as DOSBox or Qemu:[89] Well-known computer chess theorists include: The prospects of completely solving chess are generally considered to be rather remote.
It is widely conjectured that no computationally inexpensive method to solve chess exists even in the weak sense of determining with certainty the value of the initial position, and hence the idea of solving chess in the stronger sense of obtaining a practically usable description of a strategy for perfect play for either side seems unrealistic today. However, it has not been proven that no computationally cheap way of determining the best move in a chess position exists, nor even that a traditional alpha–beta searcher running on present-day computing hardware could not solve the initial position in an acceptable amount of time. The difficulty in proving the latter lies in the fact that, while the number of board positions that could happen in the course of a chess game is huge (on the order of at least 10^43[91] to 10^47), it is hard to rule out with mathematical certainty the possibility that the initial position allows either side to force a mate or a threefold repetition after relatively few moves, in which case the search tree might encompass only a very small subset of the set of possible positions. It has been mathematically proven that generalized chess (chess played with an arbitrarily large number of pieces on an arbitrarily large chessboard) is EXPTIME-complete,[92] meaning that determining the winning side in an arbitrary position of generalized chess provably takes exponential time in the worst case; however, this theoretical result gives no lower bound on the amount of work required to solve ordinary 8x8 chess. Martin Gardner's Minichess, played on a 5×5 board with approximately 10^18 possible board positions, has been solved; its game-theoretic value is 1/2 (i.e. a draw can be forced by either side), and the forcing strategy to achieve that result has been described. Progress has also been made from the other side: as of 2012, all endgames with 7 or fewer pieces (2 kings and up to 5 other pieces) have been solved.
A "chess engine" is software that calculates and orders which moves are the strongest to play in a given position. Engine authors focus on improving the play of their engines, often just importing the engine into a graphical user interface (GUI) developed by someone else. Engines communicate with the GUI by standardized protocols such as the nowadays ubiquitous Universal Chess Interface developed by Stefan Meyer-Kahlen and Franz Huber. There are others, like the Chess Engine Communication Protocol developed by Tim Mann for GNU Chess and Winboard. Chessbase has its own proprietary protocol, and at one time Millennium 2000 had another protocol used for ChessGenius. Engines designed for one operating system and protocol may be ported to other OSes or protocols. Chess engines are regularly matched against each other at dedicated chess engine tournaments. In 1997, the Internet Chess Club released its first Java client for playing chess online against other people inside one's web browser.[93] This was probably one of the first chess web apps. The Free Internet Chess Server followed soon after with a similar client.[94] In 2004, the International Correspondence Chess Federation opened up a web server to replace their email-based system.[95] Chess.com started offering Live Chess in 2007.[96] Chessbase/Playchess has long had a downloadable client, and added a web-based client in 2013.[97] Another popular web app is tactics training.
The now defunct Chess Tactics Server opened its site in 2006,[98] followed by Chesstempo the next year,[99] and Chess.com added its Tactics Trainer in 2008.[100] Chessbase added a tactics trainer web app in 2015.[101] Chessbase took their chess game database online in 1998.[102] Another early chess game database was Chess Lab, which started in 1999.[103] New In Chess had initially tried to compete with Chessbase by releasing a NICBase program for Windows 3.x, but eventually decided to give up on software and instead focus on their online database starting in 2002.[104] One could play against the engine Shredder online from 2006.[105] In 2015, Chessbase added a play Fritz web app,[106] as well as My Games for storing one's games.[107] Starting in 2007, Chess.com offered the content of the training program Chess Mentor to their customers online.[108] Top GMs such as Sam Shankland and Walter Browne have contributed lessons. The introduction of artificial intelligence transformed the game of chess, particularly at the elite levels. AI greatly influenced defensive strategies. It has the capacity to compute every potential move dispassionately, unlike human players, who are subject to emotional and psychological effects from factors such as stress or tiredness. As a result, many positions once considered indefensible are now recognized as defensible. After studying millions of games, chess engines produced new analyses and improved existing opening theory. These improvements led to the creation of new ideas and changed the way players think throughout all phases of the game.[109] In classical chess, elite players commonly initiate games by making 10 to 15 opening moves that align with established analyses or leading engine recommendations.[110] Unlike traditional over-the-board tournaments, where handheld metal detectors are employed to counter players' attempts at using electronic assistance, fair-play monitoring in online chess is much more challenging.
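One basic statistic that such monitoring can build on is the rate at which a player's moves coincide with an engine's top choice. Real detection systems are far more elaborate; the helper below is only an illustrative sketch with hypothetical names:

```python
def engine_match_rate(player_moves, engine_moves):
    """Fraction of positions where the player's move equals the engine's
    first choice for the same position. Sustained rates near 1.0 across
    many games are one (weak, on its own) statistical red flag."""
    if len(player_moves) != len(engine_moves):
        raise ValueError("move lists must align position-by-position")
    matches = sum(p == e for p, e in zip(player_moves, engine_moves))
    return matches / len(player_moves)
```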
During the 2020 European Online Chess Championship, which saw a record participation of nearly 4,000 players, over 80 participants were disqualified for cheating—most from beginner and youth categories.[111] The event underscored the growing need for advanced detection methods in online competitions. In response to these issues, chess platforms such as Chess.com developed AI-based statistical models which track improbable moves by a player and compare them to moves that could be made by an engine. Expert examination is conducted for all suspected cases, and the findings are published on a regular basis. FIDE introduced AI behavior-tracking technology to strengthen anti-cheating measures in online events.[112] AI-based detection systems use machine learning to track suspicious player actions across different games, measuring discrepancies between the moves actually played and the moves predicted from the available statistics. Players of unusually high skill, or those whose strategies imitate moves characteristic of automated chess systems, are flagged for review. Each case is examined by a human expert before any action is taken, to guarantee fairness and accuracy.[112] The Maia Chess project was begun in 2020 by the University of Toronto, Cornell University, and Microsoft Research. Maia Chess is a neural network constructed to imitate a human's manner of playing chess at a given skill level. Each Maia model was tested on 9 sets of 500,000 positions each, covering rating levels from 1100 to 1900. They perform best when predicting moves made by players at their targeted rating level, with lower Maias accurately predicting moves from lower-rated players (around 1100) and higher Maias doing the same for higher-rated players (around 1900). The primary goal of Maia is to develop an AI chess engine that imitates human decision-making rather than focusing on optimal moves.
Through personalization across different skill levels, Maia is able to simulate game styles typical for each level more accurately.[113][114] While considered something done more for entertainment than for serious play, people have discovered that large language models (LLMs) of the type created in 2018 and beyond, such as GPT-3, can be prompted into producing chess moves given proper language prompts. While inefficient compared to native chess engines, the fact that LLMs can track the board state at all beyond the opening, rather than simply recite chess-like phrases in a dreamlike state, was considered greatly surprising. LLM play has a number of quirks compared to engine play; for example, engines don't generally "care" how a board state was arrived at, but LLMs seem to produce different quality moves for a chess position reached via strong play compared to the same board state produced via a set of strange preceding moves (which will generally produce weaker and more random moves).[115] This article incorporates text by Chess Programming Wiki, available under the CC BY-SA 3.0 license.
https://en.wikipedia.org/wiki/Computer_chess
The horizon effect, also known as the horizon problem, is a problem in artificial intelligence whereby, in many games, the number of possible states or positions is immense and computers can only feasibly search a small portion of them, typically a few plies down the game tree. Thus, for a computer searching only a fixed number of plies, there is a possibility that it will make a poor long-term move. The drawbacks of the move are not "visible" because the computer does not search to the depth at which its evaluation function reveals the true evaluation of the line. The analogy is to peering into the distance on a sphere like the Earth: a threat may lie beneath the horizon and hence go unseen. When evaluating a large game tree using techniques such as minimax with alpha–beta pruning, search depth is limited for feasibility reasons. However, evaluating a partial tree may give a misleading result. When a significant change exists just over the horizon of the search depth, the computational device falls victim to the horizon effect. In 1973 Hans Berliner named this phenomenon, which he and other researchers had observed, the "Horizon Effect."[1] He split the effect into two: the Negative Horizon Effect "results in creating diversions which ineffectively delay an unavoidable consequence or make an unachievable one appear achievable." For the "largely overlooked" Positive Horizon Effect, "the program grabs much too soon at a consequence that can be imposed on an opponent at leisure, frequently in a more effective form." The horizon effect can be somewhat mitigated by quiescence search. This technique extends the effort and time spent searching board states left in volatile positions and allocates less effort to easier-to-assess board states. For example, "scoring" the worth of a chess position often involves a material value count, but this count is misleading if there are hanging pieces or an imminent checkmate.
A board state after the white queen has captured a protected black knight would appear to the naive material count to be advantageous to white, who is now up a knight, but is probably disastrous, as the queen will be taken in the exchange one ply later. A quiescence search may tell a search algorithm to play out the captures and checks before scoring leaf nodes with volatile positions. In chess, assume a situation where the computer only searches the game tree to six plies and from the current position determines that the queen is lost in the sixth ply; and suppose there is a move in the search depth where it may sacrifice a rook, so that the loss of the queen is pushed to the eighth ply. This is, of course, a worse move than sacrificing the queen, because it leads to losing both a queen and a rook. However, because the loss of the queen was pushed over the horizon of search, it is not discovered and evaluated by the search. Losing the rook seems to be better than losing the queen, so the sacrifice is returned as the best option, whereas delaying the sacrifice of the queen has in fact additionally weakened the computer's position. As another example, while some perpetual checks quickly trigger a threefold-repetition draw, others can involve a queen chasing a king around the board, varying the position each time, meaning the actual forced draw could be many moves into the future. If a quiescence search playing out the checks isn't done, then the AI might not detect the possibility, and the engine can blunder a winning position into a drawn one. In Go, the horizon effect is a major concern for writing an AI capable of even beginner-level play, and part of why alpha–beta search was a weak approach to Computer Go compared to later machine learning and pattern recognition approaches. It is a very common situation for certain stones to be "dead" yet require many moves to actually capture them if fought over.
The horizon effect may cause a naive algorithm to incorrectly assess the situation and believe that the stones are savable, by calculating a play that seems to keep the doomed stones alive as of the move at which the search tree stops. While the death of the group can indeed be delayed, it cannot be stopped, and contesting this will only allow more stones to be captured. A classic example that beginners learn is the Go ladder, but the same general idea occurs even in situations that aren't strictly ladders.[2]
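The quiescence search mitigation described above can be sketched in negamax form. This is a generic illustration, not any particular engine's code; evaluate(), captures(), and make() are assumed helpers supplied by the host program:

```python
def quiescence(pos, alpha, beta, evaluate, captures, make):
    """Search only 'noisy' moves (here: captures) past the nominal depth,
    so volatile positions are not scored by the static evaluation alone."""
    stand_pat = evaluate(pos)      # score if the side to move stops here
    if stand_pat >= beta:
        return beta                # fail-hard beta cutoff
    alpha = max(alpha, stand_pat)
    for move in captures(pos):     # extend only volatile continuations
        score = -quiescence(make(pos, move), -beta, -alpha,
                            evaluate, captures, make)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

Because the recursion only follows captures, it terminates quickly once the position goes quiet, at which point the static evaluation is finally trusted.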
https://en.wikipedia.org/wiki/Horizon_effect
The lesser of two evils principle, also referred to as the lesser evil principle and lesser-evilism, is the principle that when faced with selecting from two immoral options, the least immoral one should be chosen. The principle is most often invoked in reference to binary political choices under systems that make it impossible to express a sincere preference for one's favorite. The maxim existed already in Platonic philosophy.[1] In Nicomachean Ethics, Aristotle writes: "For the lesser evil can be seen in comparison with the greater evil as a good, since this lesser evil is preferable to the greater one, and whatever preferable is good". The modern formulation was popularized by Thomas à Kempis' devotional book The Imitation of Christ, written in the early 15th century. In part IV of his Ethics, Spinoza states the following maxim:[2] Proposition 65: "According to the guidance of reason, of two things which are good, we shall follow the greater good, and of two evils, follow the less." The concept of "lesser evil" voting (LEV) can be seen as a form of the minimax strategy ("minimize maximum loss") where voters, when faced with two or more candidates, identify the candidate they perceive as the most likely to do harm and vote for the opponent most likely to defeat that candidate, i.e., the "lesser evil." To do so, "voting should not be viewed as a form of personal self-expression or moral judgement directed in retaliation towards major party candidates who fail to reflect our values, or of a corrupt system designed to limit choices to those acceptable to corporate elites", but rather as an opportunity to reduce harm or loss.[3] Hannah Arendt argued that "Those who choose the lesser evil forget very quickly that they chose evil".
In contrast, Seyla Benhabib argues that politics would not exist without the necessity to choose between a greater and a lesser evil.[4] When limited to the two most likely candidates,[5] the "lesser evil" is the most likely "greater good",[6] for the "common good", as Pope Francis has said.[7] In 2012, Huffington Post columnist Sanford Jay Rosen stated that refusal to vote for the lesser of two evils became common practice for left-leaning voters in the United States due to their overwhelming disapproval of the United States government's support for the Vietnam War.[8] Rosen stated: "Beginning with the 1968 presidential election, I often have heard from liberals that they could not vote for the lesser of two evils. Some said they would not vote; some said they would vote for a third-party candidate. That mantra delivered us to Richard Nixon in 1972 until Watergate did him in. And it delivered us to George W. Bush and Dick Cheney in 2000 until they were termed out in 2009".[8] In the 2016 United States presidential election, the candidates of both major parties — Hillary Clinton (D) and Donald Trump (R) — had disapproval ratings close to 60% by August 2016.[9] Green Party candidate Jill Stein invoked this idea in her campaign, stating, "Don't vote for the lesser evil, fight for the greater good".[10] Green Party votes hurt Democratic chances in 2000 and 2016.[11][12][13] This sentiment was repeated for the next two election cycles, both of which were between Trump and Democratic candidates Joe Biden in 2020 and Kamala Harris in 2024.[14][15] Accordingly, the lesser evil principle should be applied to the two front-runners among many choices, after eliminating from consideration "minor party candidates (who) can be spoilers in elections by taking away enough votes from a major party candidate to influence the outcome without winning."[16] In his DarkHorse podcast, Bret Weinstein describes his Unity 2020 proposal for the 2020 presidential election as an option that, in case of failure, would not asymmetrically
weaken voters' second-best choice on a single political side, thereby avoiding the lesser evil paradox.[17] In elections between only two candidates where one is mildly unpopular and the other immensely unpopular, opponents of both candidates frequently advocate a vote for the mildly unpopular candidate. For example, in the second round of the 2002 French presidential election, graffiti in Paris told people to "vote for the crook, not the fascist". The "crook" in those scribbled public messages was Jacques Chirac of Rally for the Republic and the "fascist" was Jean-Marie Le Pen of the National Front. Chirac eventually won the second round, having garnered 82% of the vote.[18] "Between Scylla and Charybdis" is an idiom derived from Homer's Odyssey. In the story, Odysseus chose to go near Scylla as the lesser of two evils. He lost six of his companions, but if he had gone near Charybdis all would be doomed. Because of such stories, having to navigate between the two hazards eventually entered idiomatic use. An equivalent English seafaring phrase is "Between a rock and a hard place".[19] The Latin line incidit in scyllam cupiens vitare charybdim ("he runs into Scylla, wishing to avoid Charybdis") had earlier become proverbial, with a meaning much the same as jumping from the frying pan into the fire. Erasmus recorded it as an ancient proverb in his Adagia, although the earliest known instance is in the Alexandreis, a 12th-century Latin epic poem by Walter of Châtillon.[20]
https://en.wikipedia.org/wiki/Lesser_of_two_evils_principle
In voting systems, the Minimax Condorcet method is a single-winner ranked-choice voting method that always elects the majority (Condorcet) winner when one exists.[1] Minimax compares all candidates against each other in a round-robin tournament, then ranks candidates by their worst election result (the result where they would receive the fewest votes). The candidate with the largest (maximum) number of votes in their worst (minimum) matchup is declared the winner. Equivalently, the Minimax Condorcet method selects the candidate for whom the greatest pairwise score for another candidate against him or her is the least such score among all candidates. Imagine politicians compete like football teams in a round-robin tournament, where every team plays against every other team once. In each matchup, a candidate's score is equal to the number of voters who support them over their opponent. Minimax finds each team's (or candidate's) worst game – the one where they received the smallest number of points (votes). Each team's tournament score is equal to the number of points they got in their worst game. The first place in the tournament goes to the team with the best tournament score. Formally, let score(X, Y) denote the pairwise score for X against Y. Then the candidate W selected by minimax (the winner) is given by W = arg min_X max_Y score(Y, X), i.e., the candidate whose greatest pairwise score against them is least. When it is permitted to rank candidates equally, or not rank all candidates, three interpretations of the rule are possible. When voters must rank all the candidates, all three variants are equivalent. Let d(X, Y) be the number of voters ranking X over Y.
The variants define the score score(X, Y) for candidate X against Y as follows: under winning votes, score(X, Y) = d(X, Y) if d(X, Y) > d(Y, X), and 0 otherwise; under margins, score(X, Y) = d(X, Y) − d(Y, X); and under pairwise opposition, score(X, Y) = d(X, Y). When one of the first two variants is used, the method can be restated as: "Disregard the weakest pairwise defeat until one candidate is unbeaten." An "unbeaten" candidate is one whose maximum score against them is zero or negative. Minimax using winning votes or margins satisfies the Condorcet and the majority criterion, but not the Smith criterion, mutual majority criterion, or Condorcet loser criterion. When winning votes is used, minimax also satisfies the plurality criterion. Minimax fails independence of irrelevant alternatives, independence of clones, local independence of irrelevant alternatives, and independence of Smith-dominated alternatives.[citation needed] With the pairwise opposition variant (sometimes called MMPO), minimax only satisfies the majority-strength Condorcet criterion; a candidate with a relative majority over every other may not be elected. MMPO is a later-no-harm system and also satisfies the sincere favorite criterion. Nicolaus Tideman modified minimax to only drop edges that create Condorcet cycles, allowing his method to satisfy many of the above properties. Schulze's method similarly reduces to minimax when there are only three candidates. Suppose that Tennessee is holding an election on the location of its capital. The population is concentrated around four major cities. All voters want the capital to be as close to them as possible. The options are Memphis, Nashville, Chattanooga, and Knoxville. The preferences of each region's voters are: The results of the pairwise scores would be tabulated as follows: Result: In all three alternatives, Nashville has the lowest value and is elected winner. Assume three candidates A, B and C and voters with the following preferences: The results would be tabulated as follows: Result: With the winning votes and margins alternatives, the Condorcet winner A is declared Minimax winner.
However, using the pairwise opposition alternative, C is declared winner, since fewer voters strongly oppose him in his worst pairwise score against A than oppose A in his worst pairwise score against B. Assume four candidates A, B, C and D. Voters are allowed to not rank some candidates (denoted n/a in the table), and their ballots are then not taken into account for pairwise scores involving those candidates. The results would be tabulated as follows: Result: Each of the three alternatives gives a different winner.
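The rule and its variants can be computed directly from the matrix d(X, Y) of voters ranking X over Y. This toy sketch (not any standard library) selects the candidate whose strongest pairwise score against them is smallest:

```python
def minimax_winner(d, variant="winning_votes"):
    """d[x][y] = number of voters ranking x over y."""
    def score(y, x):  # strength of y's pairwise result against x
        if variant == "winning_votes":
            return d[y][x] if d[y][x] > d[x][y] else 0
        if variant == "margins":
            return d[y][x] - d[x][y]
        return d[y][x]            # pairwise opposition
    cands = list(d)
    def worst(x):                 # greatest score of any rival against x
        return max(score(y, x) for y in cands if y != x)
    return min(cands, key=worst)
```

With incomplete or tied rankings the three variants can disagree, which is exactly the situation the examples above illustrate.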
https://en.wikipedia.org/wiki/Minimax_Condorcet
In decision theory, regret aversion (or anticipated regret) describes how the human emotional response of regret can influence decision-making under uncertainty. When individuals make choices without complete information, they often experience regret if they later discover that a different choice would have produced a better outcome. This regret can be quantified as the difference in value between the actual decision made and what would have been the optimal decision in hindsight. Unlike traditional models that consider regret as merely a post-decision emotional response, the theory of regret aversion proposes that decision-makers actively anticipate potential future regret and incorporate this anticipation into their current decision-making process. This anticipation can lead individuals to make choices specifically designed to minimize the possibility of experiencing regret later, even if those choices are not optimal from a purely probabilistic expected-value perspective. Regret is a powerful negative emotion with significant social and reputational implications, playing a central role in how humans learn from experience and in the psychology of risk aversion. The conscious anticipation of regret creates a feedback loop that elevates regret from being simply an emotional reaction—often modeled as mere human behavior—into a key factor in rational choice behavior that can be formally modeled in decision theory. This anticipatory mechanism helps explain various observed decision patterns that deviate from standard expected utility theory, including status quo bias, inaction inertia, and the tendency to avoid decisions that might lead to easily imagined counterfactual scenarios where a better outcome would have occurred. Regret theory is a model in theoretical economics simultaneously developed in 1982 by Graham Loomes and Robert Sugden,[1] David E. Bell,[2] and Peter C. Fishburn.[3] Regret theory models choice under uncertainty taking into account the effect of anticipated regret.
Subsequently, several other authors improved upon it.[4] It incorporates a regret term in the utility function which depends negatively on the realized outcome and positively on the best alternative outcome given the resolution of uncertainty. This regret term is usually an increasing, continuous and non-negative function subtracted from the traditional utility index. These types of preferences always violate transitivity in the traditional sense,[5] although most satisfy a weaker version.[4] For independent lotteries, when regret is evaluated over the difference between utilities and then averaged over all combinations of outcomes, regret can still be transitive, but only for a specific form of regret functional. It has been shown that only the hyperbolic sine function maintains this property.[6] This form of regret inherits most of the desired features, such as respecting preferences under first-order stochastic dominance, risk aversion for logarithmic utilities, and the ability to explain the Allais paradox. Regret aversion is not only a theoretical economics model but also a cognitive bias: a decision may be made specifically to avoid regretting an alternative decision. Regret aversion can be seen as fear of either commission or omission: the prospect of committing to a failure, or of omitting an opportunity, that we seek to avoid.[7] Regret, feeling sadness or disappointment over something that has happened, can be rationalized for a certain decision, but it can also guide preferences and lead people astray. This contributes to the spread of disinformation, because things are not seen as one's personal responsibility. Several experiments over both incentivized and hypothetical choices attest to the magnitude of this effect.
Experiments in first price auctions show that by manipulating the feedback the participants expect to receive, significant differences in the average bids are observed.[8] In particular, "loser's regret" can be induced by revealing the winning bid to all participants in the auction, thus revealing to the losers whether they would have been able to make a profit and how much it could have been (a participant who has a valuation of $50, bids $30 and finds out the winning bid was $35 will also learn that he or she could have earned as much as $15 by bidding anything over $35). This in turn allows for the possibility of regret, and if bidders correctly anticipate this, they will tend to bid higher than in the case where no feedback on the winning bid is provided, in order to decrease the possibility of regret. In decisions over lotteries, experiments also provide supporting evidence of anticipated regret.[9][10][11] As in the case of first price auctions, differences in feedback over the resolution of the uncertainty can create the possibility of regret, and if this is anticipated, it may induce different preferences. For example, when faced with a choice between $40 with certainty and a coin toss that pays $100 if the outcome is guessed correctly and $0 otherwise, the certain payment alternative minimizes not only the risk but also the possibility of regret, since typically the coin will not be tossed (and thus the uncertainty not resolved), while if the coin toss is chosen, the outcome that pays $0 will induce regret. If the coin is tossed regardless of the chosen alternative, then the alternative payoff will always be known, and there is no choice that eliminates the possibility of regret. Anticipated regret tends to be overestimated for both choices and actions over which people perceive themselves to be responsible.[12][13] People are particularly likely to overestimate the regret they will feel when missing a desired outcome by a narrow margin.
In one study, commuters predicted they would experience greater regret if they missed a train by 1 minute than if they missed it by 5 minutes, but commuters who actually missed their train by 1 or 5 minutes experienced (equally) low amounts of regret. Commuters appeared to overestimate the regret they would feel when missing the train by a narrow margin, because they tended to underestimate the extent to which they would attribute missing the train to external causes (e.g., missing their wallet or spending less time in the shower).[12] Besides the traditional setting of choices over lotteries, regret aversion has been proposed as an explanation for the typically observed overbidding in first price auctions,[14] and the disposition effect,[15] among others. The minimax regret approach is to minimize the worst-case regret; it was originally presented by Leonard Savage in 1951.[16] The aim of this is to perform as closely as possible to the optimal course. Since the minimax criterion is applied here to the regret (difference or ratio of the payoffs) rather than to the payoff itself, it is not as pessimistic as the ordinary minimax approach. Similar approaches have been used in a variety of areas. One benefit of minimax regret (as opposed to expected regret) is that it is independent of the probabilities of the various outcomes: thus if regret can be accurately computed, one can reliably use minimax regret, whereas probabilities of outcomes are hard to estimate. This differs from the standard minimax approach in that it uses differences or ratios between outcomes, and thus requires interval or ratio measurements rather than merely the ordinal measurements (rankings) used in standard minimax. Suppose an investor has to choose between investing in stocks, bonds or the money market, and the total return depends on what happens to interest rates.
The following table shows some possible returns:

                 Rates rise   Static rates   Rates fall   Worst return
Stocks               −4            4             12           −4
Bonds                −2            3              8           −2
Money market          3            2              1            1
Best choice           3            4             12

The crude maximin choice based on returns would be to invest in the money market, ensuring a return of at least 1. However, if interest rates fell, then the regret associated with this choice would be large. This would be 11, which is the difference between the 12 which could have been received if the outcome had been known in advance and the 1 received. A mixed portfolio of about 11.1% in stocks and 88.9% in the money market would have ensured a return of at least 2.22; but, if interest rates fell, there would be a regret of about 9.78. The regret table for this example, constructed by subtracting actual returns from best returns, is as follows:

                 Rates rise   Static rates   Rates fall   Worst regret
Stocks                7            0              0            7
Bonds                 5            1              4            5
Money market          0            2             11           11

Therefore, using a minimax choice based on regret, the best course would be to invest in bonds, ensuring a regret of no worse than 5. A mixed investment portfolio would do even better: 61.1% invested in stocks, and 38.9% in the money market would produce a regret no worse than about 4.28. What follows is an illustration of how the concept of regret can be used to design a linear estimator. In this example, the problem is to construct a linear estimator of a finite-dimensional parameter vector x from its noisy linear measurement with known noise covariance structure. The loss of reconstruction of x is measured using the mean-squared error (MSE). The unknown parameter vector is known to lie in an ellipsoid E centered at zero. The regret is defined to be the difference between the MSE of the linear estimator that doesn't know the parameter x, and the MSE of the linear estimator that knows x. Also, since the estimator is restricted to be linear, zero MSE cannot be achieved even in the latter case. In this case, the solution of a convex optimization problem gives the optimal, minimax regret-minimizing linear estimator, as can be seen by the following argument.
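The regret computation can be sketched in a few lines, using return figures consistent with the example's prose (a money-market floor of 1, a money-market regret of 11 if rates fall, and a bond minimax regret of 5); the scenario order below is rise, static, fall:

```python
# Returns per asset under the three interest-rate scenarios (rise, static, fall).
returns = {
    "stocks":       [-4, 4, 12],
    "bonds":        [-2, 3,  8],
    "money market": [ 3, 2,  1],
}

# Best achievable return in each scenario, then regret = best - actual.
best = [max(r[i] for r in returns.values()) for i in range(3)]
regret = {a: [b - x for b, x in zip(best, r)] for a, r in returns.items()}

# Minimax regret: pick the asset whose worst-case regret is smallest.
minimax_choice = min(returns, key=lambda a: max(regret[a]))
```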
According to the assumptions, the observed vector y and the unknown deterministic parameter vector x are tied by the linear model y = Hx + w, where H is a known n×m matrix with full column rank m, and w is a zero-mean random vector with a known covariance matrix C_w. Let x̂ = Gy be a linear estimate of x from y, where G is some m×n matrix. The MSE of this estimator is given by MSE(G) = E[‖x̂ − x‖²] = x^T (I − GH)^T (I − GH) x + Tr(G C_w G^T). Since the MSE depends explicitly on x, it cannot be minimized directly. Instead, the concept of regret can be used in order to define a linear estimator with good MSE performance. To define the regret here, consider a linear estimator that knows the value of the parameter x, i.e., the matrix G can explicitly depend on x: x̂^o = G(x) y. The MSE of x̂^o is MSE^o = x^T (I − G(x)H)^T (I − G(x)H) x + Tr(G(x) C_w G(x)^T). To find the optimal G(x), MSE^o is differentiated with respect to G and the derivative is equated to zero, getting G(x) = x x^T H^T (H x x^T H^T + C_w)^{−1}. Then, using the Matrix Inversion Lemma, this simplifies to G(x) = (x x^T H^T C_w^{−1}) / (1 + x^T H^T C_w^{−1} H x). Substituting this G(x) back into MSE^o, one gets MSE^o = (x^T x) / (1 + x^T H^T C_w^{−1} H x). This is the smallest MSE achievable with a linear estimate that knows x. In practice this MSE cannot be achieved, but it serves as a bound on the optimal MSE. The regret of using the linear estimator specified by G is equal to R(x, G) = MSE(G) − MSE^o. The minimax regret approach here is to minimize the worst-case regret, i.e., sup_{x∈E} R(x, G). This will allow a performance as close as possible to the best achievable performance in the worst case of the parameter x.
Although this problem appears difficult, it is an instance of convex optimization, and in particular a numerical solution can be efficiently calculated.[17] Similar ideas can be used when x is random with uncertainty in the covariance matrix.[18][19] Camara, Hartline and Johnsen[20] study principal-agent problems. These are incomplete-information games between two players called Principal and Agent, whose payoffs depend on a state of nature known only by the Agent. The Principal commits to a policy, then the agent responds, and then the state of nature is revealed. They assume that the principal and agent interact repeatedly, and may learn over time from the state history, using reinforcement learning. They assume that the agent is driven by regret-aversion; in particular, the agent minimizes his counterfactual internal regret. Based on this assumption, they develop mechanisms that minimize the principal's regret. Collina, Roth and Shao[21] improve their mechanism both in running time and in the bounds for regret (as a function of the number of distinct states of nature).
https://en.wikipedia.org/wiki/Regret_(decision_theory)#Minimax_regret
In computer science, Monte Carlo tree search (MCTS) is a heuristic search algorithm for some kinds of decision processes, most notably those employed in software that plays board games. In that context MCTS is used to solve the game tree. MCTS was combined with neural networks in 2016[1] and has been used in multiple board games like Chess, Shogi,[2] Checkers, Backgammon, Contract Bridge, Go, Scrabble, and Clobber[3] as well as in turn-based-strategy video games (such as Total War: Rome II's implementation in the high-level campaign AI[4]) and applications outside of games.[5] The Monte Carlo method, which uses random sampling for deterministic problems which are difficult or impossible to solve using other approaches, dates back to the 1940s.[6] In his 1987 PhD thesis, Bruce Abramson combined minimax search with an expected-outcome model based on random game playouts to the end, instead of the usual static evaluation function. Abramson said the expected-outcome model "is shown to be precise, accurate, easily estimable, efficiently calculable, and domain-independent."[7] He experimented in depth with tic-tac-toe and then with machine-generated evaluation functions for Othello and chess. Such methods were then explored and successfully applied to heuristic search in the field of automated theorem proving by W. Ertel, J. Schumann and C. Suttner in 1989,[8][9][10] thus improving the exponential search times of uninformed search algorithms such as breadth-first search, depth-first search or iterative deepening. In 1992, B. Brügmann employed it for the first time in a Go-playing program.[11] In 2002, Chang et al.[12] proposed the idea of "recursive rolling out and backtracking" with "adaptive" sampling choices in their Adaptive Multi-stage Sampling (AMS) algorithm for the model of Markov decision processes.
AMS was the first work to explore the idea of UCB-based exploration and exploitation in constructing sampled/simulated (Monte Carlo) trees and was the main seed for UCT (Upper Confidence Trees).[13] In 2006, inspired by its predecessors,[15] Rémi Coulom described the application of the Monte Carlo method to game-tree search and coined the name Monte Carlo tree search,[16] L. Kocsis and Cs. Szepesvári developed the UCT (Upper Confidence bounds applied to Trees) algorithm,[17] and S. Gelly et al. implemented UCT in their program MoGo.[18] In 2008, MoGo achieved dan (master) level in 9×9 Go,[19] and the Fuego program began to win against strong amateur players in 9×9 Go.[20] In January 2012, the Zen program won 3:1 in a Go match on a 19×19 board with an amateur 2 dan player.[21] Google DeepMind developed the program AlphaGo, which in October 2015 became the first computer Go program to beat a professional human Go player without handicaps on a full-sized 19×19 board.[1][22][23] In March 2016, AlphaGo was awarded an honorary 9-dan (master) level in 19×19 Go for defeating Lee Sedol in a five-game match with a final score of four games to one.[24] AlphaGo represents a significant improvement over previous Go programs as well as a milestone in machine learning, as it uses Monte Carlo tree search with artificial neural networks (a deep learning method) for policy (move selection) and value, giving it efficiency far surpassing previous programs.[25] The MCTS algorithm has also been used in programs that play other board games (for example Hex,[26] Havannah,[27] Game of the Amazons,[28] and Arimaa[29]), real-time video games (for instance Ms. Pac-Man[30][31] and Fable Legends[32]), and nondeterministic games (such as skat,[33] poker,[34] Magic: The Gathering,[35] or Settlers of Catan[36]). The focus of MCTS is on the analysis of the most promising moves, expanding the search tree based on random sampling of the search space. The application of Monte Carlo tree search in games is based on many playouts, also called roll-outs.
In each playout, the game is played out to the very end by selecting moves at random. The final game result of each playout is then used to weight the nodes in the game tree so that better nodes are more likely to be chosen in future playouts. The most basic way to use playouts is to apply the same number of playouts after each legal move of the current player, then choose the move which led to the most victories.[11] The efficiency of this method, called Pure Monte Carlo Game Search, often increases with time as more playouts are assigned to the moves that have frequently resulted in the current player's victory according to previous playouts. Each round of Monte Carlo tree search consists of four steps:[37] This graph shows the steps involved in one decision, with each node showing the ratio of wins to total playouts from that point in the game tree for the player that the node represents.[38] In the Selection diagram, black is about to move. The root node shows there are 11 wins out of 21 playouts for white from this position so far. It complements the total of 10/21 black wins shown along the three black nodes under it, each of which represents a possible black move. Note that this graph does not follow the UCT algorithm described below. If white loses the simulation, all nodes along the selection increment their simulation count (the denominator), but among them only the black nodes are credited with wins (the numerator). If instead white wins, all nodes along the selection still increment their simulation count, but among them only the white nodes are credited with wins. In games where draws are possible, a draw causes the numerator for both black and white to be incremented by 0.5 and the denominator by 1. This ensures that during selection, each player's choices expand towards the most promising moves for that player, which mirrors the goal of each player to maximize the value of their move.
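Pure Monte Carlo Game Search as described above can be sketched in a few lines. This is a minimal illustration, not taken from the article: the `Nim` game and its interface (`legal_moves`, `play`, `winner`) are assumptions chosen to make the example self-contained.

```python
import random

# Pure Monte Carlo Game Search: for each legal move, play k random
# games to the very end and pick the move with the most wins for the
# player to move. The game of Nim below is an illustrative assumption.

class Nim:
    """Single-pile Nim: take 1-3 stones; whoever takes the last stone wins."""
    def __init__(self, stones, to_move=0):
        self.stones, self.to_move = stones, to_move

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def play(self, move):
        return Nim(self.stones - move, 1 - self.to_move)

    def winner(self):
        # The player who just moved took the last stone and wins.
        return None if self.stones > 0 else 1 - self.to_move

def random_playout(state):
    """Play random moves until the game ends; return the winner."""
    while state.winner() is None:
        state = state.play(random.choice(state.legal_moves()))
    return state.winner()

def pure_monte_carlo_move(state, k=200):
    """Apply the same number k of playouts after each legal move."""
    player = state.to_move
    best_move, best_wins = None, -1
    for move in state.legal_moves():
        child = state.play(move)
        wins = sum(random_playout(child) == player for _ in range(k))
        if wins > best_wins:
            best_move, best_wins = move, wins
    return best_move
```

With 5 stones, taking 1 stone leaves the opponent a losing position under perfect play, and the playout statistics reliably favor that move even though the playouts themselves are entirely random.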
Rounds of search are repeated as long as the time allotted to a move remains. Then the move with the most simulations made (i.e. the highest denominator) is chosen as the final answer. This basic procedure can be applied to any game whose positions necessarily have a finite number of moves and finite length. For each position, all feasible moves are determined: k random games are played out to the very end, and the scores are recorded. The move leading to the best score is chosen. Ties are broken by fair coin flips. Pure Monte Carlo Game Search results in strong play in several games with random elements, as in the game EinStein würfelt nicht!. It converges to optimal play (as k tends to infinity) in board-filling games with random turn order, for instance in the game Hex with random turn order.[39] DeepMind's AlphaZero replaces the simulation step with an evaluation based on a neural network.[2] The main difficulty in selecting child nodes is maintaining some balance between the exploitation of deep variants after moves with high average win rate and the exploration of moves with few simulations. The first formula for balancing exploitation and exploration in games, called UCT (Upper Confidence Bound 1 applied to trees), was introduced by Levente Kocsis and Csaba Szepesvári.[17] UCT is based on the UCB1 formula derived by Auer, Cesa-Bianchi, and Fischer[40] and the provably convergent AMS (Adaptive Multi-stage Sampling) algorithm first applied to multi-stage decision-making models (specifically, Markov decision processes) by Chang, Fu, Hu, and Marcus.[12] Kocsis and Szepesvári recommend choosing, in each node of the game tree, the move for which the expression {\displaystyle {\frac {w_{i}}{n_{i}}}+c{\sqrt {\frac {\ln N_{i}}{n_{i}}}}} has the highest value. In this formula, w_i stands for the number of wins for the node considered after the i-th move, n_i for the number of simulations for that node after the i-th move, N_i for the total number of simulations run by the parent node, and c for the exploration parameter (theoretically equal to √2, in practice usually chosen empirically). The first component of the formula above corresponds to exploitation; it is high for moves with a high average win ratio.
The second component corresponds to exploration; it is high for moves with few simulations. Most contemporary implementations of Monte Carlo tree search are based on some variant of UCT that traces its roots back to the AMS simulation optimization algorithm for estimating the value function in finite-horizon Markov decision processes (MDPs), introduced by Chang et al.[12] (2005) in Operations Research. (AMS was the first work to explore the idea of UCB-based exploration and exploitation in constructing sampled/simulated (Monte Carlo) trees and was the main seed for UCT.[13]) Although it has been proven that the evaluation of moves in Monte Carlo tree search converges to minimax when using UCT,[17][41] the basic version of Monte Carlo tree search converges only in so-called "Monte Carlo perfect" games.[42] However, Monte Carlo tree search does offer significant advantages over alpha–beta pruning and similar algorithms that minimize the search space. In particular, pure Monte Carlo tree search does not need an explicit evaluation function: simply implementing the game's mechanics (i.e. the generation of allowed moves in a given position and the game-end conditions) is sufficient to explore the search space. As such, Monte Carlo tree search can be employed in games without a developed theory, or in general game playing. The game tree in Monte Carlo tree search grows asymmetrically, as the method concentrates on the more promising subtrees. Thus, it achieves better results than classical algorithms in games with a high branching factor. A disadvantage is that in certain positions, there may be moves that look superficially strong but actually lead to a loss via a subtle line of play.
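The UCT expression above can be sketched directly. This is an illustrative fragment, not from the article; the representation of a child as a `(wins, visits)` pair is an assumption made for the example.

```python
import math

# UCT selection: each child i stores w_i wins out of n_i simulations;
# parent_visits is N_i, the parent's total simulation count.

def uct_score(wins, visits, parent_visits, c=math.sqrt(2)):
    """Exploitation term w_i/n_i plus exploration term c*sqrt(ln N_i / n_i)."""
    if visits == 0:
        return math.inf  # unvisited children are tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits, c=math.sqrt(2)):
    """Return the index of the (wins, visits) pair maximizing UCT."""
    return max(range(len(children)),
               key=lambda i: uct_score(*children[i], parent_visits, c))
```

A child with few visits gets a large exploration bonus, so a 7/10 child can outrank a 10/20 child even though simple win-rate ordering alone would already prefer it here; an unvisited child always wins the comparison.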
Such "trap states" require thorough analysis to be handled correctly, particularly when playing against an expert player; however, MCTS may not "see" such lines due to its policy of selective node expansion.[43][44] It is believed that this may have been part of the reason for AlphaGo's loss in its fourth game against Lee Sedol. In essence, the search attempts to prune sequences which are less relevant. In some cases, a play can lead to a very specific line of play which is significant, but which is overlooked when the tree is pruned, and this outcome is therefore "off the search radar".[45] Various modifications of the basic Monte Carlo tree search method have been proposed to shorten the search time. Some employ domain-specific expert knowledge, others do not. Monte Carlo tree search can use either light or heavy playouts. Light playouts consist of random moves, while heavy playouts apply various heuristics to influence the choice of moves. These heuristics may employ the results of previous playouts (e.g. the Last Good Reply heuristic[46]) or expert knowledge of a given game. For instance, in many Go-playing programs certain stone patterns in a portion of the board influence the probability of moving into that area.[18] Paradoxically, playing suboptimally in simulations sometimes makes a Monte Carlo tree search program play stronger overall.[47] Domain-specific knowledge may be employed when building the game tree to help the exploitation of some variants.
One such method assigns nonzero priors to the number of won and played simulations when creating each child node, leading to artificially raised or lowered average win rates that cause the node to be chosen more or less frequently, respectively, in the selection step.[48] A related method, called progressive bias, consists in adding to the UCB1 formula a {\displaystyle {\frac {b_{i}}{n_{i}}}} element, where b_i is a heuristic score of the i-th move.[37] The basic Monte Carlo tree search collects enough information to find the most promising moves only after many rounds; until then its moves are essentially random. This exploratory phase may be reduced significantly in a certain class of games using RAVE (Rapid Action Value Estimation).[48] In these games, permutations of a sequence of moves lead to the same position. Typically, they are board games in which a move involves placement of a piece or a stone on the board. In such games the value of each move is often only slightly influenced by other moves. In RAVE, for a given game tree node N, its child nodes C_i store not only the statistics of wins in playouts started in node N but also the statistics of wins in all playouts started in node N and below it, if they contain move i (also when the move was played in the tree, between node N and a playout). This way the contents of tree nodes are influenced not only by moves played immediately in a given position but also by the same moves played later. When using RAVE, the selection step selects the node for which the modified UCB1 formula {\displaystyle (1-\beta (n_{i},{\tilde {n}}_{i})){\frac {w_{i}}{n_{i}}}+\beta (n_{i},{\tilde {n}}_{i}){\frac {{\tilde {w}}_{i}}{{\tilde {n}}_{i}}}+c{\sqrt {\frac {\ln t}{n_{i}}}}} has the highest value.
In this formula, {\tilde {w}}_{i} and {\tilde {n}}_{i} stand for the number of won playouts containing move i and the number of all playouts containing move i, and the {\displaystyle \beta (n_{i},{\tilde {n}}_{i})} function should be close to one for relatively small n_i and {\tilde {n}}_{i}, and close to zero when they are relatively big. One of many formulas for {\displaystyle \beta (n_{i},{\tilde {n}}_{i})}, proposed by D. Silver,[49] says that in balanced positions one can take {\displaystyle \beta (n_{i},{\tilde {n}}_{i})={\frac {{\tilde {n}}_{i}}{n_{i}+{\tilde {n}}_{i}+4b^{2}n_{i}{\tilde {n}}_{i}}}}, where b is an empirically chosen constant. Heuristics used in Monte Carlo tree search often require many parameters. There are automated methods to tune the parameters to maximize the win rate.[50] Monte Carlo tree search can be concurrently executed by many threads or processes. There are several fundamentally different methods of its parallel execution:[51]
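The RAVE-modified score with Silver's β schedule can be sketched as follows. This is an illustration of the formula above, not a complete MCTS implementation; the variable names mirror the article's symbols, and the value of `b` is an arbitrary placeholder.

```python
import math

# RAVE-modified UCB1 score. wins/visits are the usual per-child
# statistics (w_i, n_i); amaf_wins/amaf_visits are the "all moves as
# first" statistics (w~_i, n~_i); b is Silver's empirically chosen
# constant (the default here is an arbitrary assumption).

def beta(visits, amaf_visits, b=0.1):
    """Silver's schedule: close to 1 for few simulations, close to 0 for many."""
    return amaf_visits / (visits + amaf_visits + 4 * b**2 * visits * amaf_visits)

def rave_score(wins, visits, amaf_wins, amaf_visits, parent_visits,
               c=math.sqrt(2), b=0.1):
    if visits == 0 or amaf_visits == 0:
        return math.inf  # moves with no statistics are tried first
    k = beta(visits, amaf_visits, b)
    return ((1 - k) * wins / visits           # ordinary UCT term, weight 1-beta
            + k * amaf_wins / amaf_visits     # RAVE term, weight beta
            + c * math.sqrt(math.log(parent_visits) / visits))
```

Early in the search β is near one and the cheap RAVE statistics dominate; as n_i grows, β decays toward zero and the score smoothly hands over to the ordinary per-move win rate.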
https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
Negamax search is a variant form of minimax search that relies on the zero-sum property of a two-player game. The algorithm relies on the fact that min(a, b) = −max(−b, −a) to simplify the implementation of the minimax algorithm. More precisely, the value of a position to player A in such a game is the negation of the value to player B. Thus, the player on move looks for a move that maximizes the negation of the value resulting from the move: this successor position must by definition have been valued by the opponent. This reasoning works regardless of whether A or B is on move, which means that a single procedure can be used to value both positions. This is a coding simplification over minimax, which requires that A select the move with the maximum-valued successor while B selects the move with the minimum-valued successor. Negamax should not be confused with negascout, an algorithm discovered in the 1980s to compute the minimax or negamax value quickly by clever use of alpha–beta pruning. Note that alpha–beta pruning is itself a way to compute the minimax or negamax value of a position quickly by avoiding the search of certain uninteresting positions. Most adversarial search engines are coded using some form of negamax search. Negamax operates on the same game trees as those used with the minimax search algorithm. Each node in the tree, including the root, is a game state (such as a game board configuration) of a two-player game. Transitions to child nodes represent moves available to the player who is about to play from a given node. The negamax search objective is to find the node score value for the player who is playing at the root node. The pseudocode below shows the negamax base algorithm,[1] with a configurable limit for the maximum search depth: The root node inherits its score from one of its immediate child nodes. The child node that ultimately sets the root node's best score also represents the best move to play.
Although the negamax function shown only returns the node's best score, practical negamax implementations will retain and return both the best move and the best score for the root node. Only the node's best score is essential for non-root nodes; a node's best move need not be retained or returned for non-root nodes. What can be confusing is how the heuristic value of the current node is calculated. In this implementation, this value is always calculated from the point of view of player A, whose color value is one. In other words, higher heuristic values always represent situations more favorable for player A. This is the same behavior as the normal minimax algorithm. The heuristic value is not necessarily the same as a node's return value due to value negation by negamax and the color parameter. The negamax node's return value is a heuristic score from the point of view of the node's current player. Negamax scores match minimax scores for nodes where player A is about to play, i.e. where player A is the maximizing player in the minimax equivalent. Negamax always searches for the maximum value at all its nodes. Hence for player B nodes, the minimax score is the negation of the negamax score; player B is the minimizing player in the minimax equivalent. Negamax can be implemented without the color parameter. In this case, the heuristic evaluation function must return values from the point of view of the node's current player rather than an absolute score. For example, the heuristic evaluation function in chess should return a positive value if the current node's player is black and black is winning. Algorithm optimizations for minimax are equally applicable to negamax. Alpha–beta pruning can decrease the number of nodes the negamax algorithm evaluates in a search tree in a manner similar to its use with the minimax algorithm.
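The base algorithm described above can be sketched as follows. This is an illustrative toy, not the article's pseudocode: a node is represented as either a number (a leaf whose heuristic value is from player A's point of view) or a list of child nodes, and `color` is +1 when player A is to move, −1 for player B.

```python
# Depth-limited negamax, base variant. The nested-list tree
# representation is an assumption made for this sketch.

def negamax(node, depth, color):
    if not isinstance(node, list) or depth == 0:
        # Leaf (or depth limit reached): in this toy representation the
        # node must then be a number holding its heuristic value, which
        # is negated so the score is from the player to move's view.
        return color * node
    # The opponent's best score, negated, is our score.
    return max(-negamax(child, depth - 1, -color) for child in node)
```

For the tree [[3, 5], [2, 9]] with player A at the root, B (as minimizer) would hold A to 3 in the left branch and 2 in the right, so A's negamax value is 3, matching the minimax result.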
The pseudocode for depth-limited negamax search with alpha–beta pruning follows:[1] Alpha (α) and beta (β) represent lower and upper bounds for child node values at a given tree depth. Negamax sets the arguments α and β for the root node to the lowest and highest values possible. Other search algorithms, such as negascout and MTD(f), may initialize α and β with alternate values to further improve tree search performance. When negamax encounters a child node value outside the alpha/beta range, the search cuts off, thereby pruning portions of the game tree from exploration. Cutoffs are implicit, based on the node return value. A node value found within the range of its initial α and β is the node's exact (or true) value. This value is identical to the result the negamax base algorithm would return, without cutoffs and without any α and β bounds. If a node's return value is out of range, then the value represents an upper (if value ≤ α) or lower (if value ≥ β) bound for the node's exact value. Alpha–beta pruning eventually discards any value-bound results; such values do not contribute to nor affect the negamax value at the root node. This pseudocode shows the fail-soft variation of alpha–beta pruning. Fail-soft never returns α or β directly as a node value, so a node value may lie outside the initial α and β range bounds set with a negamax function call. In contrast, fail-hard alpha–beta pruning always limits a node value to the range of α and β. This implementation also shows optional move ordering prior to the foreach loop that evaluates child nodes. Move ordering[2] is an optimization for alpha–beta pruning that attempts to guess the most probable child nodes that yield the node's score. The algorithm searches those child nodes first. With good guesses, alpha/beta cutoffs occur earlier and more frequently, pruning additional game tree branches and remaining child nodes from the search tree.
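The fail-soft variant described above can be sketched with the same toy nested-list tree used for the base algorithm (an illustrative assumption, not the article's pseudocode):

```python
# Fail-soft negamax with alpha-beta pruning. A node is a number (leaf
# value from player A's view) or a list of children; color is +1 for
# player A, -1 for player B.

INF = float("inf")

def negamax_ab(node, depth, alpha, beta, color):
    if not isinstance(node, list) or depth == 0:
        return color * node
    value = -INF
    # (Optional move ordering would sort the children here.)
    for child in node:
        value = max(value, -negamax_ab(child, depth - 1, -beta, -alpha, -color))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # cutoff: the opponent will avoid this line
    return value  # fail-soft: may fall outside the original [alpha, beta]
```

Note how the window is negated and swapped at each recursion: the caller's (α, β) becomes the child's (−β, −α), mirroring the negation of scores between the two players.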
Transposition tables selectively memoize the values of nodes in the game tree. Transposition refers to the fact that a given game board position can be reached in more than one way, with differing game move sequences. When negamax searches the game tree and encounters the same node multiple times, a transposition table can return a previously computed value of the node, skipping potentially lengthy and duplicate re-computation of the node's value. Negamax performance improves particularly for game trees with many paths that lead to a given node in common. The pseudocode that adds transposition table functions to negamax with alpha/beta pruning is given as follows:[1] Alpha/beta pruning and maximum search depth constraints in negamax can result in partial, inexact, and entirely skipped evaluation of nodes in a game tree. This complicates adding transposition table optimizations for negamax. It is insufficient to track only the node's value in the table, because the value may not be the node's true value. The code therefore must preserve and restore the relationship of the value with the alpha/beta parameters and the search depth for each transposition table entry. Transposition tables are typically lossy and will omit or overwrite previous values of certain game tree nodes. This is necessary since the number of nodes negamax visits often far exceeds the transposition table size. Lost or omitted table entries are non-critical and will not affect the negamax result. However, lost entries may require negamax to re-compute certain game tree node values more frequently, thus affecting performance.
https://en.wikipedia.org/wiki/Negamax
Principal variation search (sometimes equated with the practically identical NegaScout) is a negamax algorithm that can be faster than alpha–beta pruning. Like alpha–beta pruning, NegaScout is a directional search algorithm for computing the minimax value of a node in a tree. It dominates alpha–beta pruning in the sense that it will never examine a node that can be pruned by alpha–beta; however, it relies on accurate node ordering to capitalize on this advantage. NegaScout works best when there is a good move ordering. In practice, the move ordering is often determined by previous shallower searches. It produces more cutoffs than alpha–beta by assuming that the first explored node is the best; in other words, it supposes the first node is in the principal variation. Then, it can check whether that is true by searching the remaining nodes with a null window (also known as a scout window; when alpha and beta are equal), which is faster than searching with the regular alpha–beta window. If the proof fails, then the first node was not in the principal variation, and the search continues as normal alpha–beta. Hence, NegaScout works best when the move ordering is good. With a random move ordering, NegaScout will take more time than regular alpha–beta; although it will not explore any nodes alpha–beta did not, it will have to re-search many nodes. Alexander Reinefeld invented NegaScout several decades after the invention of alpha–beta pruning. He gives a proof of correctness of NegaScout in his book.[1] Another search algorithm called SSS* can theoretically result in fewer nodes searched. However, its original formulation has practical issues (in particular, it relies heavily on an OPEN list for storage), and nowadays most chess engines still use a form of NegaScout in their search. Most chess engines use a transposition table in which the relevant part of the search tree is stored.
This part of the tree has the same size as SSS*'s OPEN list would have.[2] A reformulation called MT-SSS* allowed it to be implemented as a series of null-window calls to alpha–beta (or NegaScout) that use a transposition table, so direct comparisons using game-playing programs could be made. It did not outperform NegaScout in practice. Yet another search algorithm, which does tend to do better than NegaScout in practice, is the best-first algorithm called MTD(f), although neither algorithm dominates the other: there are trees in which NegaScout searches fewer nodes than SSS* or MTD(f), and vice versa. NegaScout takes after SCOUT, invented by Judea Pearl in 1980, which was the first algorithm to outperform alpha–beta and to be proven asymptotically optimal.[3][4] Null windows, with β=α+1 in a negamax setting, were invented independently by J.P. Fishburn and used in an algorithm similar to SCOUT in an appendix to his Ph.D. thesis,[5] in a parallel alpha–beta algorithm,[6] and on the last subtree of a search tree root node.[7] Most of the moves are not acceptable for both players, so we do not need to fully search every node to get the exact score. The exact score is only needed for nodes in the principal variation (an optimal sequence of moves for both players), where it will propagate up to the root. In iterative deepening search, the previous iteration has already established a candidate for such a sequence, which is also commonly called the principal variation. For any non-leaf node in this principal variation, its children are reordered such that the next node from this principal variation is the first child. All other children are assumed to result in a worse or equal score for the current player (this assumption follows from the assumption that the current PV candidate is an actual PV).
To test this, we search the first move with a full window to establish an upper bound on the score of the other children, for which we conduct a zero-window search to test whether a move can be better. Since a zero-window search is much cheaper due to the higher frequency of beta cutoffs, this can save a lot of effort. If we find that a move can raise alpha, our assumption has been disproven for this move, and we do a re-search with the full window to get the exact score.[8][9]
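The full-window-then-null-window scheme just described can be sketched as follows, again using an illustrative nested-list tree (a number is a leaf value from player A's view; children are assumed ordered best-first, and the null window β=α+1 assumes integer scores):

```python
# Principal variation search (NegaScout-style). The first child is
# searched with the full window; the rest are first "scouted" with a
# null window and re-searched only if they can raise alpha.

INF = float("inf")

def pvs(node, depth, alpha, beta, color):
    if not isinstance(node, list) or depth == 0:
        return color * node
    for i, child in enumerate(node):
        if i == 0:
            # Presumed principal variation: search with the full window.
            score = -pvs(child, depth - 1, -beta, -alpha, -color)
        else:
            # Scout with a null window to prove the move inferior...
            score = -pvs(child, depth - 1, -alpha - 1, -alpha, -color)
            if alpha < score < beta:
                # ...and re-search with the full window if the proof fails.
                score = -pvs(child, depth - 1, -beta, -score, -color)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return alpha
```

When the move ordering is correct, every scout search fails low and no re-search occurs, which is exactly where the speed advantage over plain alpha–beta comes from.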
https://en.wikipedia.org/wiki/Negascout
In the mathematical area of game theory and of convex optimization, a minimax theorem is a theorem that claims that, under certain conditions on the sets X and Y and on the function f, {\displaystyle \max _{x\in X}\min _{y\in Y}f(x,y)=\min _{y\in Y}\max _{x\in X}f(x,y).}[1] It is always true that the left-hand side is at most the right-hand side (max–min inequality), but equality only holds under certain conditions identified by minimax theorems. The first theorem in this sense is von Neumann's minimax theorem about two-player zero-sum games, published in 1928,[2] which is considered the starting point of game theory. Von Neumann is quoted as saying "As far as I can see, there could be no theory of games ... without that theorem ... I thought there was nothing worth publishing until the Minimax Theorem was proved".[3] Since then, several generalizations and alternative versions of von Neumann's original theorem have appeared in the literature.[4][5] Von Neumann's original theorem[2] was motivated by game theory and applies to the case where X and Y are standard simplexes and f is the bilinear function f(x,y)=x^{\mathsf {T}}Ay given by a payoff matrix A. Under these assumptions, von Neumann proved that {\displaystyle \max _{x\in X}\min _{y\in Y}x^{\mathsf {T}}Ay=\min _{y\in Y}\max _{x\in X}x^{\mathsf {T}}Ay.} In the context of two-player zero-sum games, the sets X and Y correspond to the strategy sets of the first and second player, respectively, which consist of lotteries over their actions (so-called mixed strategies), and their payoffs are defined by the payoff matrix A. The function f(x,y) encodes the expected value of the payoff to the first player when the first player plays the strategy x and the second player plays the strategy y. Von Neumann's minimax theorem can be generalized to domains that are compact and convex, and to functions that are concave in their first argument and convex in their second argument (known as concave-convex functions). Formally, let X⊆R^n and Y⊆R^m be compact convex sets.
If f: X×Y→R is a continuous function that is concave-convex, i.e. f(·,y): X→R is concave for every fixed y∈Y and f(x,·): Y→R is convex for every fixed x∈X, then we have that {\displaystyle \max _{x\in X}\min _{y\in Y}f(x,y)=\min _{y\in Y}\max _{x\in X}f(x,y).} Sion's minimax theorem is a generalization of von Neumann's minimax theorem due to Maurice Sion,[6] relaxing the requirement that X and Y be standard simplexes and that f be bilinear. It states:[6][7] Let X be a convex subset of a linear topological space and let Y be a compact convex subset of a linear topological space. If f is a real-valued function on X×Y with f(x,·) lower semicontinuous and quasi-convex on Y for every x∈X, and f(·,y) upper semicontinuous and quasi-concave on X for every y∈Y, then we have that {\displaystyle \sup _{x\in X}\min _{y\in Y}f(x,y)=\min _{y\in Y}\sup _{x\in X}f(x,y).}
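As a concrete check of the minimax equality (a worked example added for illustration, not from the original text), take the matching-pennies game with payoff matrix A having rows (1, −1) and (−1, 1), and let X = Y be the probability simplex in R², writing x = (p, 1−p) and y = (q, 1−q):

```latex
f(x,y) = x^{\mathsf{T}} A y
       = \begin{pmatrix} p & 1-p \end{pmatrix}
         \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}
         \begin{pmatrix} q \\ 1-q \end{pmatrix}
       = (2p-1)(2q-1).
```

For fixed p, the minimum over q is −|2p−1|, which is maximized at p = 1/2 with value 0; symmetrically, for fixed q the maximum over p is |2q−1|, minimized at q = 1/2 with value 0. Hence max–min and min–max coincide at 0, as von Neumann's theorem guarantees, with the mixed strategy (1/2, 1/2) optimal for both players.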
https://en.wikipedia.org/wiki/Sion%27s_minimax_theorem
Tit for tat is an English saying meaning "equivalent retaliation". It is an alteration of tip for tap, "blow for blow",[1] first recorded in 1558.[2] It is also a highly effective strategy in game theory. An agent using this strategy will first cooperate, then subsequently replicate an opponent's previous action. If the opponent previously was cooperative, the agent is cooperative; if not, the agent is not. This is similar to reciprocal altruism in biology. Tit for tat has been very successfully used as a strategy for the iterated prisoner's dilemma. The strategy was first introduced by Anatol Rapoport in Robert Axelrod's two tournaments,[3] held around 1980. Notably, it was (on both occasions) both the simplest strategy and the most successful in direct competition. A few authors have extended the game-theoretical approach to other applications, such as finance; in that context the tit-for-tat strategy was shown to be associated with the trend-following strategy.[4] The success of the tit-for-tat strategy, which is largely cooperative despite its name emphasizing an adversarial nature, took many by surprise. Arrayed against strategies produced by various teams, it won in two competitions. After the first competition, new strategies formulated specifically to combat tit for tat failed due to their negative interactions with each other; a successful strategy other than tit for tat would have had to be formulated with both tit for tat and itself in mind. This result may help explain how groups of animals, especially human societies, have come to live in mostly or fully cooperative ways, rather than in the fiercely competitive and aggressive manner one might expect from individuals living in a natural state of constant conflict. This, and particularly its application to human society and politics, is the subject of Robert Axelrod's book The Evolution of Cooperation.
Moreover, the tit-for-tat strategy has been of beneficial use to social psychologists and sociologists in studying effective techniques to reduce conflict. Research has indicated that when individuals who have been in competition for a period of time no longer trust one another, the most effective competition reverser is the use of the tit-for-tat strategy. Individuals commonly engage in behavioral assimilation, a process in which they tend to match their own behaviors to those displayed by cooperating or competing group members. Therefore, if the tit-for-tat strategy begins with cooperation, then cooperation ensues. On the other hand, if the other party competes, then the tit-for-tat strategy will lead the alternate party to compete as well. Ultimately, each action by the other member is countered with a matching response, competition with competition and cooperation with cooperation. In the case of conflict resolution, the tit-for-tat strategy is effective for several reasons: the technique is recognized as clear, nice, provocable, and forgiving. Firstly, it is a clear and recognizable strategy; those using it quickly recognize its contingencies and adjust their behavior accordingly. Moreover, it is considered to be nice as it begins with cooperation and only defects in response to competition. The strategy is also provocable because it provides immediate retaliation for those who compete. Finally, it is forgiving as it immediately produces cooperation should the competitor make a cooperative move. The implications of the tit-for-tat strategy have been of relevance to conflict research, resolution and many aspects of applied social science.[5] Take for example the following infinitely repeated prisoner's dilemma game: The tit-for-tat strategy copies what the other player previously chose. If players cooperate by playing strategy (C,C) they cooperate forever.
Cooperation gives the following payoff (where δ is the discount factor): a geometric series over the discounted stream of cooperation payoffs. If a player deviates to defecting (D), then the next round they get punished, and play alternates between outcomes where player 1 cooperates and player 2 deviates, and vice versa; deviation gives a payoff that is the sum of two geometric series. Collaboration is expected if the payoff of deviation is no better than that of cooperation: in this example, the players continue cooperating if δ ≥ 3/4 and continue defecting if δ < 3/4. While Axelrod has empirically shown that the strategy is optimal in some cases of direct competition, two agents playing tit for tat remain vulnerable. A one-time, single-bit error in either player's interpretation of events can lead to an unending "death spiral": if one agent defects and the opponent cooperates, then both agents will end up alternating cooperation and defection, yielding a lower payoff than if both agents were to continually cooperate. This situation frequently arises in real-world conflicts, ranging from schoolyard fights to civil and regional wars. The reason for these issues is that tit for tat is not a subgame perfect equilibrium, except under knife-edge conditions on the discount rate.[6] While this sub-game is not directly reachable by two agents playing tit-for-tat strategies, a strategy must be a Nash equilibrium in all sub-games to be sub-game perfect. Further, this sub-game may be reached if any noise is allowed in the agents' signaling. A sub-game perfect variant of tit for tat known as "contrite tit for tat" may be created by employing a basic reputation mechanism.[7] A knife-edge equilibrium is an "equilibrium that exists only for exact values of the exogenous variables. If you vary the variables in even the slightest way, knife-edge equilibria disappear."[8] An equilibrium can be both a Nash equilibrium and a knife-edge equilibrium.
It is known as a knife-edge equilibrium because the equilibrium "rests precariously on" the exact value. For example, suppose X = 0. Then there is no profitable deviation from (Down, Left) or from (Up, Right). However, if the value of X deviates by any amount, no matter how small, then the equilibrium no longer stands: it becomes profitable to deviate to Up, for example, if X has a value of 0.000001 instead of 0. Thus, the equilibrium is very precarious. In this usage, "knife-edge conditions" refers to the fact that an equilibrium exists only rarely, when a specific condition is met and, for instance, X equals a specific value. Tit for two tats could be used to mitigate this problem; see the description below.[9] "Tit for tat with forgiveness" is a similar attempt to escape the death spiral: when the opponent defects, a player employing this strategy will occasionally cooperate on the next move anyway. The exact probability that a player will respond with cooperation depends on the line-up of opponents. Furthermore, the tit-for-tat strategy has not been proved optimal in situations short of total competition. For example, when the parties are friends it may be best for the friendship when a player cooperates at every step despite occasional deviations by the other player. Most situations in the real world are less competitive than the total competition in which the tit-for-tat strategy won its competition. Tit for tat is very different from grim trigger, in that it is forgiving in nature, as it immediately produces cooperation should the competitor choose to cooperate. Grim trigger, on the other hand, is the most unforgiving strategy, in the sense that even a single defection would make the player using grim trigger defect for the remainder of the game.[10] Tit for two tats is similar to tit for tat, but allows the opponent to defect from the agreed-upon strategy twice before the player retaliates.
This aspect makes the player using the tit for two tats strategy appear more "forgiving" to the opponent. In a tit for tat strategy, once an opponent defects, the tit for tat player immediately responds by defecting on the next move. This has the unfortunate consequence of causing two retaliatory strategies to defect against each other continuously, resulting in a poor outcome for both players. A tit for two tats player will let the first defection go unchallenged as a means to avoid the "death spiral" of the previous example. If the opponent defects twice in a row, the tit for two tats player will respond by defecting.

This strategy was put forward by Robert Axelrod during his second round of computer simulations at RAND. After analyzing the results of the first experiment, he determined that had a participant entered the tit for two tats strategy, it would have emerged with a higher cumulative score than any other program. As a result, he himself entered it with high expectations in the second tournament. Unfortunately, owing to the more aggressive nature of the programs entered in the second round, which were able to take advantage of its highly forgiving nature, tit for two tats did significantly worse (in the game-theory sense) than tit for tat.[11]

BitTorrent peers use a tit-for-tat strategy to optimize their download speed.[12] More specifically, most BitTorrent peers use a variant of tit for two tats which is called regular unchoking in BitTorrent terminology. BitTorrent peers have a limited number of upload slots to allocate to other peers. Consequently, when a peer's upload bandwidth is saturated, it will use a tit-for-tat strategy. Cooperation is achieved when upload bandwidth is exchanged for download bandwidth.
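A hedged sketch of the choking policy described here: the slot count, the ranking of peers by observed download rate, and the function names below are assumptions for illustration, not the actual BitTorrent implementation.

```python
import random

# Sketch of a choker: unchoke the peers that upload to us fastest
# (tit for tat), plus one randomly chosen choked peer (optimistic
# unchoke), giving non-cooperators a second chance.

def choose_unchoked(download_rate, regular_slots=3, optimistic=True, rng=random):
    """download_rate maps peer id -> bytes/s observed from that peer."""
    ranked = sorted(download_rate, key=download_rate.get, reverse=True)
    unchoked = set(ranked[:regular_slots])          # reward cooperators
    choked = [p for p in ranked if p not in unchoked]
    if optimistic and choked:
        unchoked.add(rng.choice(choked))            # probe a non-cooperator
    return unchoked

rates = {"a": 50, "b": 40, "c": 30, "d": 0, "e": 0}
unchoked = choose_unchoked(rates)
# "a", "b", "c" are always unchoked; one of "d"/"e" gets the optimistic slot.
```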
When a peer is not uploading in return for our own peer's uploads, the BitTorrent program will choke the connection with the uncooperative peer and allocate this upload slot to a hopefully more cooperative peer. Regular unchoking corresponds to always cooperating on the first move in the prisoner's dilemma. Periodically, a peer will allocate an upload slot to a randomly chosen uncooperative peer (unchoke). This is called optimistic unchoking. This behavior allows searching for more cooperative peers and gives a second chance to previously non-cooperating peers. The optimal threshold values of this strategy are still the subject of research.

Studies in the prosocial behaviour of animals have led many ethologists and evolutionary psychologists to apply tit-for-tat strategies to explain why altruism evolves in many animal communities. Evolutionary game theory, derived from the mathematical theories formalised by von Neumann and Morgenstern (1953), was first devised by Maynard Smith (1972) and explored further in bird behaviour by Robert Hinde. Their application of game theory to the evolution of animal strategies launched an entirely new way of analysing animal behaviour.

Reciprocal altruism works in animal communities where the cost to the benefactor in any transaction of food, mating rights, nesting or territory is less than the gains to the beneficiary. The theory also holds that the act of altruism should be reciprocated if the balance of needs reverses. Mechanisms to identify and punish "cheaters" who fail to reciprocate, in effect a form of tit for tat, are important to regulate reciprocal altruism. For example, tit-for-tat is suggested to be the mechanism of cooperative predator inspection behavior in guppies.

The tit-for-tat inability of either side to back away from conflict, for fear of being perceived as weak or as cooperating with the enemy, has been the cause of many prolonged conflicts throughout history.
However, the tit for tat strategy has also been detected by analysts in the spontaneous non-violent behaviour, called "live and let live", that arose during trench warfare in the First World War. Troops dug in only a few hundred feet from each other would evolve an unspoken understanding. If a sniper killed a soldier on one side, the other expected an equal retaliation. Conversely, if no one was killed for a time, the other side would acknowledge this implied "truce" and act accordingly. This created a "separate peace" between the trenches.[13]

During The Troubles the term was used to describe increasing eye-for-an-eye behaviour between the Irish Republicans and Ulster Unionists.[14] This can be seen with the Red Lion Pub bombing by the IRA being followed by the McGurk's Bar bombing, both targeting civilians. Specifically, the attacks and massacres would be structured around mutual killings within Unionist and Republican communities, both communities being generally uninterested in the violence.[15] This sectarian mentality led to the term "tit for tat bombings" entering the common lexicon of Northern Irish society.[16][17]
https://en.wikipedia.org/wiki/Tit_for_Tat
In decision theory and game theory, Wald's maximin model is a non-probabilistic decision-making model according to which decisions are ranked on the basis of their worst-case outcomes: the optimal decision is one with the least bad worst outcome. It is one of the most important models in robust decision making in general and robust optimization in particular. It is also known by a variety of other titles, such as Wald's maximin rule, Wald's maximin principle, Wald's maximin paradigm, and Wald's maximin criterion. Often 'minimax' is used instead of 'maximin'.

This model represents a 2-person game in which the max player plays first. In response, the second player selects the worst state in S(d), namely a state in S(d) that minimizes the payoff f(d, s) over s in S(d). In many applications the second player represents uncertainty. However, there are maximin models that are completely deterministic.

The above model is the classic format of Wald's maximin model. There is an equivalent mathematical programming (MP) format, where ℝ denotes the real line. As in game theory, the worst payoff associated with decision d, namely min over s in S(d) of f(d, s), is called the security level of decision d. The minimax version of the model is obtained by exchanging the positions of the max and min operations in the classic format; it likewise has an equivalent MP format.

Inspired by game theory, Abraham Wald developed this model[1][2][3] as an approach to scenarios in which there is only one player (the decision maker). Player 2 embodies a pessimistic attitude toward uncertainty. In Wald's maximin model, player 1 (the max player) plays first, and player 2 (the min player) knows player 1's decision when he selects his own.
This is a major simplification of the classic 2-person zero-sum game, in which the two players choose their strategies without knowing the other player's choice. The game of Wald's maximin model is also a 2-person zero-sum game, but the players choose sequentially.

With the establishment of modern decision theory in the 1950s, the model became a key ingredient in the formulation of non-probabilistic decision-making models in the face of severe uncertainty.[4][5] It is widely used in diverse fields such as decision theory, control theory, economics, statistics, robust optimization, operations research, and philosophy.[6][7]

One of the most famous examples of a maximin/minimax model is the problem max over d in ℝ of min over s in ℝ of (d² − s²), where ℝ denotes the real line. Formally we can set D = S(d) = ℝ and f(d, s) = d² − s². The optimal solution is the saddle point (x, y) = (0, 0).

There are many cases where it is convenient to 'organize' the maximin/minimax model as a 'table'. The convention is that the rows of the table represent the decisions and the columns represent the states.

Henri is going for a walk. The sun may shine, or it may rain. Should Henri carry an umbrella? Henri does not like carrying an umbrella, but he dislikes getting wet even more. His "payoff matrix", viewing this as a maximin game pitting Henri against Nature, lists a payoff for each combination of decision and weather. Appending a Worst Payoff column and a Best Worst Payoff column to the payoff table, we find that the worst case if Henri goes out without an umbrella is definitely worse than the (best) worst case when carrying an umbrella. Therefore, Henri takes his umbrella with him.

Over the years a variety of related models have been developed, primarily to moderate the pessimistic approach dictated by the worst-case orientation of the model.[4][5][8][9][10] For example, Savage's minimax regret model[11] is associated with the payoff regrets.
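The worst-payoff procedure described above can be sketched as a small table computation. Henri's numeric payoffs are not given in the text, so the values below are illustrative assumptions.

```python
# Maximin over a payoff table: compute each decision's security level
# (its worst payoff), then pick the decision with the best worst payoff.
# The numeric payoffs are assumed for illustration.

payoff = {
    "take umbrella":  {"rain": -1, "sunshine": -1},   # mild nuisance either way
    "leave umbrella": {"rain": -5, "sunshine":  0},   # getting wet is worst
}

def maximin(table):
    worst = {d: min(row.values()) for d, row in table.items()}  # security levels
    best = max(worst, key=worst.get)     # decision with the best worst case
    return best, worst

decision, security = maximin(payoff)
# decision == "take umbrella": its worst case (-1) beats leaving it (-5).
```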
The sets of states S(d), d ∈ D, need not represent uncertainty. They can represent (deterministic) variations in the value of a parameter.

Let D be a finite set representing possible locations of an 'undesirable' public facility (e.g. a garbage dump), and let S denote a finite set of locations in the neighborhood of the planned facility, representing existing dwellings. It might be desirable to build the facility so that its shortest distance from an existing dwelling is as large as possible. The maximin formulation of the problem is max over d in D of min over s in S of dist(d, s), where dist(d, s) denotes the distance of s from d. Note that in this problem S(d) does not vary with d. In cases where it is desirable to live close to the facility, the objective could instead be to minimize the maximum distance from the facility, yielding the minimax problem min over d in D of max over s in S of dist(d, s). These are generic facility location problems.

Experience has shown that the formulation of maximin models can be subtle, in the sense that problems that 'do not look like' maximin problems can be formulated as such. Consider the following problem: given a finite set X and a real-valued function g on X, find the largest subset of X such that g(x) ≤ 0 for every x in this subset. This problem, too, admits a maximin formulation in the MP format. Generic problems of this type appear in robustness analysis.[12][13]

It has been shown that the radius of stability model and info-gap's robustness model are simple instances of Wald's maximin model.[14]

Constraints can be incorporated explicitly in maximin models; a constrained maximin problem can be stated in the classic format, with an equivalent MP format. Such models are very useful in robust optimization.
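The facility-location formulations above can be sketched directly; the candidate sites and dwelling locations below are assumed example data on a line of integer positions.

```python
# Undesirable facility: maximize the distance to the nearest dwelling
# (maximin). Desirable facility: minimize the distance to the farthest
# dwelling (minimax). Locations are assumed example data.

D = [0, 5, 12]            # candidate sites for the facility
S = [2, 3, 8]             # existing dwellings

def dist(d, s):
    return abs(d - s)

# max over d in D of min over s in S of dist(d, s)
best_site = max(D, key=lambda d: min(dist(d, s) for s in S))
# min over d in D of max over s in S of dist(d, s)
closest_site = min(D, key=lambda d: max(dist(d, s) for s in S))
# best_site == 12 (nearest dwelling 4 away); closest_site == 5
# (farthest dwelling only 3 away).
```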
One of the 'weaknesses' of the maximin model is that the robustness it provides comes with a price.[10] By playing it safe, the maximin model tends to generate conservative decisions, whose price can be high. The following example illustrates this important feature of the model: suppose there are two options, x′ and x″, where S(x′) = S(x″) = [a, b].

There are no general-purpose algorithms for the solution of maximin problems. Some problems are very simple to solve; others are very difficult.[9][10][15][16] Consider the case where the state variable is an "index", for instance S(d) = {1, 2, …, k} for all d ∈ D. The associated maximin problem is then max over d in D of min over s = 1, …, k of f_s(d), where f_s(d) ≡ f(d, s). If d ∈ ℝⁿ, all the functions f_s, s = 1, 2, …, k, are linear, and D is specified by a system of linear constraints on d, then this problem is a linear programming problem that can be solved by linear programming algorithms such as the simplex algorithm.
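The index-state case above, with linear f_s, can be checked numerically. The three linear functions below are illustrative assumptions, and a grid search stands in for a proper linear-programming solution (via the epigraph trick, the same problem is the LP: maximize z subject to z ≤ f_s(d) for all s).

```python
# Brute-force solution of max_d min_s f_s(d) for assumed linear payoffs.
# The security level min_s f_s(d) is piecewise-linear and concave, so the
# optimum sits where two active lines cross.

f = [lambda d: 2 * d + 1,
     lambda d: 4 - d,
     lambda d: 0.5 * d + 2]

grid = [i / 100 for i in range(0, 301)]                 # d in [0, 3]
security = {d: min(fs(d) for fs in f) for d in grid}    # security levels
best_d = max(security, key=security.get)
best_value = security[best_d]
# The active lines 4 - d and 0.5*d + 2 cross at d = 4/3, value 8/3,
# so the grid search lands next to that point.
```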
https://en.wikipedia.org/wiki/Wald%27s_maximin_model
In statistical decision theory, where one is faced with making decisions in the presence of statistical knowledge, Γ-minimax inference is a minimax approach used to deal with partial prior information. It covers applications of Γ-minimax to statistical estimation, and contains Γ-minimax theory, used to pick applicable decision rules when given partial prior information about the distribution of an unknown parameter. The decision rule selected must be the one that minimizes the supremum of the payoff over the priors in Γ, with Bayes and regret risk prioritized in a frequentist approach, and posterior expected loss and regret prioritized in a Bayesian one. The Γ-minimax principle had been discussed and proposed before by Herbert Robbins[1][2] and I. J. Good[3] to deal with instances of partial prior information that can arise from the minimax approach pioneered by Abraham Wald.[4]
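A toy illustration of the Γ-minimax rule described above, in the frequentist (Bayes risk) flavor: choose the decision rule whose worst Bayes risk over the priors in Γ is smallest. All risks and priors below are assumed numbers, not from any real problem.

```python
# Gamma-minimax over a discrete toy problem: two decision rules, two
# parameter values, and a set Gamma of two candidate priors (the partial
# prior information). All numbers are assumed for illustration.

states = ["theta1", "theta2"]
risk = {                              # frequentist risk r(rule, theta)
    "rule_a": {"theta1": 1.0, "theta2": 4.0},
    "rule_b": {"theta1": 3.0, "theta2": 2.0},
}
Gamma = [
    {"theta1": 0.8, "theta2": 0.2},
    {"theta1": 0.3, "theta2": 0.7},
]

def bayes_risk(rule, prior):
    return sum(prior[t] * risk[rule][t] for t in states)

def gamma_minimax(rules):
    # Worst Bayes risk of each rule over Gamma, then the minimizing rule
    worst = {r: max(bayes_risk(r, p) for p in Gamma) for r in rules}
    return min(worst, key=worst.get), worst

rule, worst = gamma_minimax(risk)
# rule == "rule_b": its worst Bayes risk (2.8) beats rule_a's (3.1).
```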
https://en.wikipedia.org/wiki/Gamma-minimax_inference
Reversi Champion is a video game adaptation of the Othello board game. Playable in single-player or two-player modes, it was developed and published by Loriciels and released in 1984 for the Oric 1, Oric Atmos, and Sega SC-3000 computers.[1] An Amstrad CPC version followed in February 1986. While the Oric and SC-3000 versions offer relatively basic gameplay, the Amstrad CPC edition stands out for its extensive options and refined controls for placing pieces. All versions are notable for their range of difficulty levels.

Vincent Baillet, the original developer, created the game while still a high school student. He programmed early versions for programmable calculators and the ZX81, which competed in international Othello programming tournaments. Baillet later adapted the game for the Oric 1 at Loriciels' request, with the SC-3000 version released concurrently in 1984. A planned Rainbow 100 version was ultimately canceled. In 1986, Jacques Métois developed a distinct Amstrad CPC adaptation. Contemporary reviews praised the game's accessibility and polish, though some criticized the Amstrad CPC version's algorithm as suitable only for beginners. Other critiques noted a lack of originality, but overall, Reversi Champion received positive feedback.

Reversi Champion is a digital adaptation of the Othello board game. Two players compete on a game board of 64 squares, one using white pieces and the other black. The objective is to have the most pieces of one's color on the board by the end of the game. Players take turns placing a piece of their color, capturing opponent pieces by flanking them in a row, column, or diagonal, thereby flipping them to their color.[2][3][4][5] The game supports single-player mode against the computer or two-player mode,[4][6][7][8] with a demonstration mode featuring computer-versus-computer play also available.[2][4][7][8]

On the Oric 1 and Oric Atmos, the game board dominates the screen, displayed in black against a light blue background.
To the right, information such as whose turn it is, the difficulty level, and occasional computer hints are shown. On the Amstrad CPC, the board occupies the left half,[7][9] rendered in green against a purple background with orange and purple pieces. Players can customize these colors.[8] The top of the screen features an options menu,[7][9] while the right side displays the difficulty level, a log of recent moves, a field for entering the next move,[9] and a panel evaluating the computer's possible moves.[2][9]

In the Oric version, players select a square using the keyboard from a list of available moves displayed by the computer.[4][6] On the CPC, players move a cursor, depicted as an arrow,[7] across the board using a joystick,[3][7][9] arrow keys,[7][9] or an AMX mouse.[2][7] Alternatively, players can input square coordinates via the keyboard.[7][9] A list of possible moves is also accessible.[2][8][10]

The Oric version offers fifteen difficulty levels plus "Beginner" or "Expert" modes, with the computer's thinking time varying by level.[4][6] Players can undo moves,[4][6][11] request hints,[4][6][11] change levels mid-game,[4] or interrupt the computer's move calculation.[4][6] The CPC version provides six difficulty levels,[7][8][9][10][12][13] with thinking time also scaling with difficulty.[8] It includes 4,000 possible opening moves[3][7][12][13][14][15] and ten options, such as undoing moves,[7][8][9][12][15] swapping player pieces mid-game,[7][9][12][15] editing the board,[7][9] or saving a game for later.[7][8][9] These options open windows for user interaction.[2][7][9]

In the early 1980s, Vincent Baillet, the future programmer of the Oric and Sega SC-3000 versions, was an avid Othello player.[16] Inspired by the interactive Reversi Challenger board,[17] which performed impressively despite limited hardware,[16] he developed Othello programs for the HP-41C programmable calculator,[16][18] ZX81,[16][18][19] and possibly the TI-58 and Casio FX-502 calculators.[16]

During this period, L'Ordinateur Individuel hosted annual international Othello programming tournaments at SICOB. Baillet competed multiple times, initially with calculator programs and later with a ZX81 version,[16] gaining recognition in 1982[20][21] and 1983.[22][21] In 1982, among 200 competitors across three categories, he placed third in the "compiled programs" category with his ZX81 program,[20][21][16] written entirely in Zilog Z80 machine code.[16] The other categories were "interpreted programs" and "pocket computers". Still a high school student, Baillet refined his program the following year, enabling it to play against itself to optimize strategies.[16] The program assigned weights to variables for black and white pieces, adjusting them based on game outcomes to find an optimal balance.[16] A key challenge was the unreliable cassette storage medium.[16] In 1983, his ZX81 program, now a prototype for Reversi Champion, placed second among 96 competitors, including 20 in the compiled programs category.[21][22][23]

Baillet approached Laurent Weill of Loriciels to develop an Oric 1 version, securing a contract immediately.[16][18][19] To appeal to a broader audience, he introduced weaker difficulty levels, possibly incorporating randomness in lower-level computer responses.[16]

Drawing on articles from L'Ordinateur Individuel and Micro-Systèmes, as well as the book Chess Skill in Man and Machine, Baillet implemented a minimax algorithm with alpha-beta pruning and other optimization techniques.
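Minimax with alpha-beta pruning, the technique named above, can be sketched generically. The game interface below (children, evaluate, a nested-list test tree) is an assumed abstraction for illustration, not Baillet's actual Oric code.

```python
# Generic minimax with alpha-beta pruning: branches that cannot change
# the final value (alpha >= beta) are cut off without being searched.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:        # remaining siblings cannot matter
                break
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Tiny test tree: two minimizing branches under a maximizing root.
tree = [[3, 5], [2, 9]]
best = alphabeta(tree, 2, float("-inf"), float("inf"), True,
                 lambda n: n if isinstance(n, list) else [],
                 lambda n: n)
# best == 3; the 9 leaf is pruned because 2 already beats alpha = 3.
```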
The algorithm evaluates all possible moves and responses to a set depth.[16]

At Loriciels' request, Baillet also developed the Sega SC-3000 version, which closely resembled the Oric version and posed no significant development challenges.[16] Although the Oric version was ready earlier, Loriciels delayed its release to launch both versions simultaneously in 1984.[16] Baillet earned 10 francs per copy sold, with each retailing for approximately 100 francs.[16][note 1]

In 1982, Digital Equipment Corporation (DEC) released the Rainbow 100, a hybrid computer supporting VT100, CP/M, CP/M-86, and MS-DOS. DEC commissioned a port of Reversi Champion, which Baillet developed, but the project was abandoned, and he received no compensation.[16]

For the Amstrad CPC version, Loriciels did not involve Baillet, despite his expertise. Instead, Jacques Métois created a new adaptation, released in 1986, seemingly independent of the earlier versions.[16] Vincent Baillet estimated that approximately 1,000 copies were sold for each of the Oric and Sega SC-3000 versions.[16]

Contemporary reviews highlighted the game's strengths.
The Oric version's ease of use was praised by Théoric and Jeux et Stratégie,[4][11] while the CPC version's joystick controls, likened to a mouse, were lauded by CPC and Hebdogiciel.[3][9] The Oric version's presentation was described as "pleasant" and "polished" by Théoric and "very good" by Jeux et Stratégie.[4][11] The CPC version's interface and windowed options were commended by Amstar's Yannick Bourrée and Hebdogiciel.[2][9] CPC appreciated the CPC version's extensive options and clear manual,[3] as did Jeux et Stratégie for the Oric manual.[11] Amstrad Magazine's Frédéric Nardeau noted the CPC version's fast cassette loading.[7] Théoric gave a glowing review of the Oric adaptation,[4] and Jeux et Stratégie's Michel Brassinne called it "excellent", awarding it 5/5 for accessibility and overall quality.[6] Five issues later, Brassinne and Emmanuel Lazard, a French Othello champion, gave it 3/5, ranking it second among eight Othello programs.[11] For the CPC version, Micro V.O.'s Yves Huitric praised its varied openings and six difficulty levels.[13] Tilt found it "interesting" and "well-designed", rating it 5/6.[12][10] Hebdogiciel named it "Software of the Week" and the best Othello game,[9] a view echoed by Nardeau, who gave it 5/5 for interest and difficulty.[7]

The CPC version's algorithm was praised by Bourrée and Hebdogiciel for challenging amateurs early on,[2][9] with Tilt noting its suitability for beginners.[12][10] Bruno de la Boisserie, a former French Othello Federation secretary, rated all versions as medium to strong, depending on the difficulty, and fully compliant with Othello rules.[24][note 2] Brassinne noted the French Othello Federation's endorsement of the Oric version.[6] However, Lazard criticized the CPC algorithm for poor tactics (focusing on both central and edge squares, ineffective against skilled players) and delayed move evaluation.[11][14] He found it "pleasant" but inferior to the Oric version, limiting beginner progress.[14] Baillet also deemed the CPC version's level "weak".[16]

Other critiques included a lack of originality, rated 2/5 for Oric by Brassinne and 0/5 for CPC by Lazard.[6][14] Brassinne questioned the Oric version's hints, which failed to secure wins if followed consistently.[6] Baillet explained that hints used a shallower depth-first search, favoring the computer.[16] Tilt echoed this for the CPC, warning against blindly following hints.[8][10] They praised its speed but noted longer thinking times at higher levels.[12][8][10] Nardeau criticized the CPC version's lack of confirmation for the "New" option, risking unintended game resets.[7] Hebdogiciel found the CPC graphics lackluster,[9] while Tilt called them standard, rating them 4/6.[8] Conversely, Amstrad Magazine found them "clear and pleasant", awarding 4/5.[7]

In 1997, Jacques Métois, the CPC version's developer, drew inspiration from Reversi Champion to create Othello Windows for Microsoft Windows 16-bit and 32-bit systems.[25]
https://en.wikipedia.org/wiki/Reversi_Champion
Linear discriminant analysis (LDA), normal discriminant analysis (NDA), canonical variates analysis (CVA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier or, more commonly, for dimensionality reduction before later classification.

LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements.[2][3] However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. the class label).[4] Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method.

LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data.[5] LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made.
LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.[6][7]

Discriminant analysis is used when groups are known a priori (unlike in cluster analysis). Each case must have a score on one or more quantitative predictor measures, and a score on a group measure.[8] In simple terms, discriminant function analysis is classification: the act of distributing things into groups, classes or categories of the same type.

The original dichotomous discriminant analysis was developed by Sir Ronald Fisher in 1936.[9] It is different from an ANOVA or MANOVA, which is used to predict one (ANOVA) or multiple (MANOVA) continuous dependent variables by one or more independent categorical variables. Discriminant function analysis is useful in determining whether a set of variables is effective in predicting category membership.[10]

Consider a set of observations x (also called features, attributes, variables or measurements) for each sample of an object or event with known class y. This set of samples is called the training set in a supervised learning context. The classification problem is then to find a good predictor for the class y of any sample of the same distribution (not necessarily from the training set), given only an observation x.[11]: 338

LDA approaches the problem by assuming that the conditional probability density functions p(x | y = 0) and p(x | y = 1) are both normal distributions with mean and covariance parameters (μ₀, Σ₀) and (μ₁, Σ₁), respectively.
Under this assumption, the Bayes-optimal solution is to predict points as being from the second class if the log of the likelihood ratios is bigger than some threshold T. Without any further assumptions, the resulting classifier is referred to as quadratic discriminant analysis (QDA).

LDA instead makes the additional simplifying homoscedasticity assumption (i.e. that the class covariances are identical, so Σ₀ = Σ₁ = Σ) and that the covariances have full rank. In this case, several terms cancel, and the above decision criterion becomes a threshold on the dot product, w · x > c, for some threshold constant c, where w = Σ⁻¹(μ₁ − μ₀). This means that the criterion of an input x being in a class y is purely a function of this linear combination of the known observations.

It is often useful to see this conclusion in geometrical terms: the criterion of an input x being in a class y is purely a function of the projection of the multidimensional-space point x onto the vector w (thus, we only consider its direction). In other words, the observation belongs to y if the corresponding x is located on a certain side of a hyperplane perpendicular to w. The location of the plane is defined by the threshold c.

The assumptions of discriminant analysis are the same as those for MANOVA.
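The simplified decision rule above, w = Σ⁻¹(μ₁ − μ₀) with classification by w · x > c, can be sketched in two dimensions with pure Python. The means, shared covariance, and midpoint threshold below are illustrative assumptions (the midpoint choice corresponds to equal class priors).

```python
# Two-class LDA in 2-D: shared covariance Sigma, w = Sigma^-1 (mu1 - mu0),
# classify x as class 1 when w . x exceeds the midpoint threshold c.
# All numeric values are assumed example parameters.

mu0, mu1 = (0.0, 0.0), (3.0, 3.0)
sigma = [[2.0, 0.0],
         [0.0, 1.0]]              # shared (homoscedastic) covariance

def inv2(m):
    # Closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

w = matvec(inv2(sigma), (mu1[0] - mu0[0], mu1[1] - mu0[1]))
c = 0.5 * (dot(w, mu0) + dot(w, mu1))   # midpoint threshold, equal priors

def classify(x):
    return 1 if dot(w, x) > c else 0
# Points near mu1 land in class 1, points near mu0 in class 0.
```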
The analysis is quite sensitive to outliers, and the size of the smallest group must be larger than the number of predictor variables.[8] It has been suggested that discriminant analysis is relatively robust to slight violations of these assumptions,[12] and it has also been shown that discriminant analysis may still be reliable when using dichotomous variables (where multivariate normality is often violated).[13]

Discriminant analysis works by creating one or more linear combinations of predictors, creating a new latent variable for each function. These functions are called discriminant functions. The number of functions possible is either N_g − 1, where N_g is the number of groups, or p (the number of predictors), whichever is smaller. The first function created maximizes the differences between groups on that function. The second function maximizes differences on that function, but also must not be correlated with the previous function. This continues with subsequent functions, with the requirement that each new function not be correlated with any of the previous functions.

Given group j, with R_j sets of sample space, there is a discriminant rule such that if x ∈ R_j, then x ∈ j. Discriminant analysis then finds "good" regions of R_j to minimize classification error, therefore leading to a high percent correctly classified in the classification table.[14]

Each function is given a discriminant score[clarification needed] to determine how well it predicts group placement.
An eigenvalue in discriminant analysis is the characteristic root of each function.[clarification needed] It is an indication of how well that function differentiates the groups, where the larger the eigenvalue, the better the function differentiates.[8] This, however, should be interpreted with caution, as eigenvalues have no upper limit.[10][8] The eigenvalue can be viewed as a ratio of SS_between and SS_within, as in ANOVA when the dependent variable is the discriminant function and the groups are the levels of the IV[clarification needed].[10] This means that the largest eigenvalue is associated with the first function, the second largest with the second, and so on.

Some suggest the use of eigenvalues as effect size measures; however, this is generally not supported.[10] Instead, the canonical correlation is the preferred measure of effect size. It is similar to the eigenvalue, but is the square root of the ratio of SS_between and SS_total. It is the correlation between groups and the function.[10] Another popular measure of effect size is the percent of variance[clarification needed] for each function, calculated as (λ_x / Σλ_i) × 100, where λ_x is the eigenvalue for the function and Σλ_i is the sum of all eigenvalues. This tells us how strong the prediction is for that particular function compared to the others.[10] Percent correctly classified can also be analyzed as an effect size. The kappa value can describe this while correcting for chance agreement.[10] Kappa normalizes across all categories rather than being biased by significantly well- or poorly performing classes.[clarification needed][17]

Canonical discriminant analysis (CDA) finds axes (k − 1 canonical coordinates, k being the number of classes) that best separate the categories. These linear functions are uncorrelated and define, in effect, an optimal (k − 1)-dimensional space through the n-dimensional cloud of data that best separates (the projections in that space of) the k groups. See "Multiclass LDA" below for details.
Because LDA uses canonical variates, it was initially often referred to as the "method of canonical variates"[18] or canonical variates analysis (CVA).[19] The terms Fisher's linear discriminant and LDA are often used interchangeably, although Fisher's original article[2] actually describes a slightly different discriminant, which does not make some of the assumptions of LDA such as normally distributed classes or equal class covariances.

Suppose two classes of observations have means μ₀, μ₁ and covariances Σ₀, Σ₁. Then the linear combination of features wᵀx will have means wᵀμᵢ and variances wᵀΣᵢw for i = 0, 1. Fisher defined the separation between these two distributions to be the ratio of the variance between the classes to the variance within the classes, S = (w · μ₁ − w · μ₀)² / (wᵀ(Σ₀ + Σ₁)w). This measure is, in some sense, a measure of the signal-to-noise ratio for the class labelling. It can be shown that the maximum separation occurs when w ∝ (Σ₀ + Σ₁)⁻¹(μ₁ − μ₀). When the assumptions of LDA are satisfied, the above equation is equivalent to LDA.

Be sure to note that the vector w is the normal to the discriminant hyperplane. As an example, in a two-dimensional problem, the line that best divides the two groups is perpendicular to w. Generally, the data points to be discriminated are projected onto w; then the threshold that best separates the data is chosen from analysis of the one-dimensional distribution. There is no general rule for the threshold.
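Fisher's separation ratio above can be evaluated numerically. The means and covariances below are assumed example values; the comparison checks that the direction (Σ₀ + Σ₁)⁻¹(μ₁ − μ₀) scores at least as well as a few other candidate directions.

```python
# Fisher's separation S(w) = (w.(mu1 - mu0))^2 / (w^T (Sigma0 + Sigma1) w)
# in 2-D pure Python, with assumed example means and covariances.

mu0, mu1 = (0.0, 0.0), (2.0, 1.0)
sigma0 = [[1.0, 0.0], [0.0, 4.0]]
sigma1 = [[1.0, 0.0], [0.0, 4.0]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def matvec(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

within = [[sigma0[i][j] + sigma1[i][j] for j in range(2)] for i in range(2)]
diff = (mu1[0] - mu0[0], mu1[1] - mu0[1])

def separation(w):
    return dot(w, diff) ** 2 / dot(w, matvec(within, w))

# Fisher direction (Sigma0 + Sigma1)^-1 (mu1 - mu0); the within matrix is
# diagonal here, so the inverse is elementwise.
w_fisher = (diff[0] / within[0][0], diff[1] / within[1][1])
others = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
# separation(w_fisher) == 2.125, at least as large as any other candidate.
```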
However, if projections of points from both classes exhibit approximately the same distributions, a good choice would be the hyperplane between the projections of the two means, {\displaystyle {\vec {w}}\cdot {\vec {\mu }}_{0}} and {\displaystyle {\vec {w}}\cdot {\vec {\mu }}_{1}}. In this case the parameter c in the threshold condition {\displaystyle {\vec {w}}\cdot {\vec {x}}>c} can be found explicitly: {\displaystyle c={\vec {w}}\cdot {\tfrac {1}{2}}\left({\vec {\mu }}_{0}+{\vec {\mu }}_{1}\right).} Otsu's method is related to Fisher's linear discriminant, and was created to binarize the histogram of pixels in a grayscale image by optimally picking the black/white threshold that minimizes intra-class variance and maximizes inter-class variance within/between the grayscales assigned to the black and white pixel classes. In the case where there are more than two classes, the analysis used in the derivation of the Fisher discriminant can be extended to find a subspace which appears to contain all of the class variability.[20] This generalization is due to C. R. Rao.[21] Suppose that each of C classes has a mean {\displaystyle \mu _{i}} and the same covariance {\displaystyle \Sigma }. Then the scatter between class variability may be defined by the sample covariance of the class means, {\displaystyle \Sigma _{b}={\frac {1}{C}}\sum _{i=1}^{C}(\mu _{i}-\mu )(\mu _{i}-\mu )^{\mathrm {T} },} where {\displaystyle \mu } is the mean of the class means. The class separation in a direction {\displaystyle {\vec {w}}} in this case will be given by {\displaystyle S={\frac {{\vec {w}}^{\mathrm {T} }\Sigma _{b}{\vec {w}}}{{\vec {w}}^{\mathrm {T} }\Sigma {\vec {w}}}}.} This means that when {\displaystyle {\vec {w}}} is an eigenvector of {\displaystyle \Sigma ^{-1}\Sigma _{b}} the separation will be equal to the corresponding eigenvalue. If {\displaystyle \Sigma ^{-1}\Sigma _{b}} is diagonalizable, the variability between features will be contained in the subspace spanned by the eigenvectors corresponding to the C − 1 largest eigenvalues (since {\displaystyle \Sigma _{b}} is of rank C − 1 at most). These eigenvectors are primarily used in feature reduction, as in PCA.
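The multiclass construction can be sketched directly: form the between-class scatter from the class means and take the leading eigenvectors of Σ⁻¹Σb. The data below are invented for illustration (three classes in four dimensions, shared near-identity covariance):

```python
import numpy as np

rng = np.random.default_rng(0)
C, n_per_class, d = 3, 50, 4

# Invented class means in 4-D; shared unit covariance for the noise.
means = np.array([[0, 0, 0, 0], [5, 0, 0, 0], [0, 5, 0, 0]], dtype=float)
X = np.vstack([m + rng.normal(size=(n_per_class, d)) for m in means])
y = np.repeat(np.arange(C), n_per_class)

class_means = np.array([X[y == i].mean(axis=0) for i in range(C)])
overall = class_means.mean(axis=0)

# Between-class scatter: sample covariance of the class means.
dev = class_means - overall
Sigma_b = dev.T @ dev / C

# Shared within-class covariance (pooled over classes).
Sigma = sum(np.cov(X[y == i], rowvar=False) for i in range(C)) / C

# Eigen-decomposition of Sigma^{-1} Sigma_b; keep the C - 1 leading directions
# (Sigma_b has rank at most C - 1, so the remaining eigenvalues vanish).
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sigma, Sigma_b))
order = np.argsort(eigvals.real)[::-1]
W = eigvecs.real[:, order[: C - 1]]

Z = X @ W   # reduced (C - 1)-dimensional discriminant features
```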
The eigenvectors corresponding to the smaller eigenvalues will tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section. If classification is required, instead of dimension reduction, there are a number of alternative techniques available. For instance, the classes may be partitioned, and a standard Fisher discriminant or LDA used to classify each partition. A common example of this is "one against the rest", where the points from one class are put in one group and everything else in the other, and then LDA is applied. This results in C classifiers, whose results are combined. Another common method is pairwise classification, where a new classifier is created for each pair of classes (giving C(C − 1)/2 classifiers in total), with the individual classifiers combined to produce a final classification. The typical implementation of the LDA technique requires that all the samples are available in advance. However, there are situations where the entire data set is not available and the input data are observed as a stream. In this case, it is desirable for the LDA feature extraction to be able to update the computed LDA features by observing the new samples without running the algorithm on the whole data set. For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available.
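The "one against the rest" scheme can be sketched with the two-class Fisher direction as the base classifier. Everything here, including the data and the max-margin combination rule, is illustrative rather than canonical:

```python
import numpy as np

def fisher_binary(Xpos, Xneg):
    """Fisher direction and midpoint threshold for a two-class problem."""
    mu_p, mu_n = Xpos.mean(axis=0), Xneg.mean(axis=0)
    S = np.cov(Xpos, rowvar=False) + np.cov(Xneg, rowvar=False)
    w = np.linalg.solve(S, mu_p - mu_n)
    c = w @ (mu_p + mu_n) / 2.0
    return w, c

# Three made-up, well-separated classes in the plane.
base = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
classes = [base, base + [10.0, 0.0], base + [0.0, 10.0]]

# One classifier per class: that class against everything else (C classifiers).
models = []
for i, Xi in enumerate(classes):
    rest = np.vstack([Xj for j, Xj in enumerate(classes) if j != i])
    models.append(fisher_binary(Xi, rest))

def predict(x):
    # Combine the C binary scores by taking the largest margin w.x - c.
    scores = [w @ x - c for w, c in models]
    return int(np.argmax(scores))

predictions = [predict(x) for Xi in classes for x in Xi]
```

On well-separated data each point gets a positive margin only from its own classifier, so the argmax combination resolves cleanly; with overlapping classes the raw margins are not calibrated against each other, which is one known weakness of this scheme.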
An LDA feature extraction technique that can update the LDA features by simply observing new samples is an incremental LDA algorithm, and this idea has been extensively studied over the last two decades.[22] Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features.[23] In other work, Demir and Ozmehmet proposed online local learning algorithms for updating LDA features incrementally using error-correcting and Hebbian learning rules.[24] Later, Aliyari et al. derived fast incremental algorithms to update the LDA features by observing new samples.[22] In practice, the class means and covariances are not known. They can, however, be estimated from the training set. Either the maximum likelihood estimate or the maximum a posteriori estimate may be used in place of the exact value in the above equations. Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct. Another complication in applying LDA and Fisher's discriminant to real data occurs when the number of measurements of each sample (i.e., the dimensionality of each data vector) exceeds the number of samples in each class.[5] In this case, the covariance estimates do not have full rank, and so cannot be inverted. There are a number of ways to deal with this. One is to use a pseudo-inverse instead of the usual matrix inverse in the above formulae. However, better numeric stability may be achieved by first projecting the problem onto the subspace spanned by {\displaystyle \Sigma _{b}}.[25] Another strategy to deal with small sample size is to use a shrinkage estimator of the covariance matrix, which can be expressed mathematically as {\displaystyle \Sigma _{\lambda }=(1-\lambda )\Sigma +\lambda I,} where {\displaystyle I} is the identity matrix, and {\displaystyle \lambda } is the shrinkage intensity or regularisation parameter.
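A sketch of the shrinkage estimator in the small-sample regime (dimension p larger than sample size n), with invented data. The raw sample covariance is singular, while the shrunk version is invertible for any λ > 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 10                          # fewer samples than dimensions
X = rng.normal(size=(n, p))

Sigma_hat = np.cov(X, rowvar=False)   # rank-deficient: rank <= n - 1

def shrink(Sigma, lam):
    """Shrinkage estimator (1 - lambda) * Sigma + lambda * I."""
    return (1.0 - lam) * Sigma + lam * np.eye(len(Sigma))

Sigma_reg = shrink(Sigma_hat, lam=0.1)

# The raw estimate cannot be inverted; the regularised one can.
rank_raw = np.linalg.matrix_rank(Sigma_hat)
assert rank_raw < p
w = np.linalg.solve(Sigma_reg, np.ones(p))   # now a well-posed linear solve
```

Shrinking toward the identity lifts every eigenvalue by at least λ, which is what restores invertibility; choosing λ itself (e.g. by cross-validation) is a separate problem.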
This leads to the framework of regularized discriminant analysis[26] or shrinkage discriminant analysis.[27] Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via the kernel trick. Here, the original observations are effectively mapped into a higher-dimensional non-linear space. Linear classification in this non-linear space is then equivalent to non-linear classification in the original space. The most commonly used example of this is the kernel Fisher discriminant. LDA can be generalized to multiple discriminant analysis, where c becomes a categorical variable with N possible states, instead of only two. Analogously, if the class-conditional densities {\displaystyle p({\vec {x}}\mid c=i)} are normal with shared covariances, the sufficient statistic for {\displaystyle P(c\mid {\vec {x}})} is the set of values of N projections, which span the subspace generated by the N means, affine projected by the inverse covariance matrix. These projections can be found by solving a generalized eigenvalue problem, where the numerator is the covariance matrix formed by treating the means as the samples, and the denominator is the shared covariance matrix. See "Multiclass LDA" above for details. In addition to the examples given below, LDA is applied in positioning and product management. In bankruptcy prediction based on accounting ratios and other financial variables, linear discriminant analysis was the first statistical method applied to systematically explain which firms entered bankruptcy versus which survived. Despite limitations, including the known nonconformance of accounting ratios to the normal distribution assumptions of LDA, Edward Altman's 1968 model[28] is still a leading model in practical applications.[29][30][31] In computerised face recognition, each face is represented by a large number of pixel values.
Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are called Fisher faces, while those obtained using the related principal component analysis are called eigenfaces. In marketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data. Logistic regression and other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps: The main application of discriminant analysis in medicine is the assessment of the severity of a patient's condition and the prognosis of disease outcome. For example, during retrospective analysis, patients are divided into groups according to severity of disease: mild, moderate, and severe. The results of clinical and laboratory analyses are then studied to reveal which variables differ statistically between these groups. Using these variables, discriminant functions are built to classify disease severity in future patients. Additionally, linear discriminant analysis can help select more discriminative samples for data augmentation, improving classification performance.[32] In biology, similar principles are used to classify and define groups of different biological objects, for example, to define phage types of Salmonella enteritidis based on Fourier transform infrared spectra,[33] or to detect the animal source of Escherichia coli by studying its virulence factors,[34] etc. This method can also be used to separate the alteration zones.
For example, when different data from various zones are available, discriminant analysis can find the pattern within the data and classify the zones effectively.[35] Discriminant function analysis is very similar to logistic regression, and both can be used to answer the same research questions.[10] Logistic regression does not have as many assumptions and restrictions as discriminant analysis. However, when discriminant analysis' assumptions are met, it is more powerful than logistic regression.[36] Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal and homogeneity of variance/covariance holds, discriminant analysis is more accurate.[8] Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met.[9][8] Geometric anomalies in higher dimensions lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier.[37] An important case of these blessing of dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional, then each point can be separated from the rest of the sample by a linear inequality, with high probability, even for exponentially large samples.[38] These linear inequalities can be selected in the standard (Fisher's) form of the linear discriminant for a rich family of probability distributions.[39] In particular, such theorems are proven for log-concave distributions, including the multidimensional normal distribution (the proof is based on the concentration inequalities for log-concave measures[40]), and for product measures on a multidimensional cube (this is proven using Talagrand's concentration inequality for product probability spaces).
Data separability by classical linear discriminants simplifies the problem of error correction forartificial intelligencesystems in high dimension.[41]
https://en.wikipedia.org/wiki/Discriminant_function
In vector calculus, the curl, also known as rotor, is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point in the field is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation.[1] The curl of a field is formally defined as the circulation density at each point of the field. A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The corresponding form of the fundamental theorem of calculus is Stokes' theorem, which relates the surface integral of the curl of a vector field to the line integral of the vector field around the boundary curve. The notation curl F is more common in North America. In the rest of the world, particularly in 20th-century scientific literature, the alternative notation rot F is traditionally used, which comes from the "rate of rotation" that it represents. To avoid confusion, modern authors tend to use the cross-product notation with the del (nabla) operator, as in {\displaystyle \nabla \times \mathbf {F} },[2] which also reveals the relation between the curl (rotor), divergence, and gradient operators. Unlike the gradient and divergence, curl as formulated in vector calculus does not generalize simply to other dimensions; some generalizations are possible, but only in three dimensions is the geometrically defined curl of a vector field again a vector field. This deficiency is a direct consequence of the limitations of vector calculus; on the other hand, when expressed as an antisymmetric tensor field via the wedge operator of geometric calculus, the curl generalizes to all dimensions. The circumstance is similar to that attending the 3-dimensional cross product, and indeed the connection is reflected in the notation {\displaystyle \nabla \times } for the curl.
The name "curl" was first suggested by James Clerk Maxwell in 1871,[3] but the concept was apparently first used in the construction of an optical field theory by James MacCullagh in 1839.[4][5] The curl of a vector field F, denoted by curl F, or {\displaystyle \nabla \times \mathbf {F} }, or rot F, is an operator that maps C^k functions in R^3 to C^(k−1) functions in R^3; in particular, it maps continuously differentiable functions R^3 → R^3 to continuous functions R^3 → R^3. It can be defined in several ways, as mentioned below: One way to define the curl of a vector field at a point is implicitly through its components along various axes passing through the point: if {\displaystyle \mathbf {\hat {u}} } is any unit vector, the component of the curl of F along the direction {\displaystyle \mathbf {\hat {u}} } may be defined to be the limiting value of a closed line integral in a plane perpendicular to {\displaystyle \mathbf {\hat {u}} }, divided by the area enclosed, as the path of integration is contracted indefinitely around the point. More specifically, the curl is defined at a point p as[6][7] {\displaystyle (\nabla \times \mathbf {F} )(p)\cdot \mathbf {\hat {u}} \ {\overset {\underset {\mathrm {def} }{}}{{}={}}}\lim _{A\to 0}{\frac {1}{|A|}}\oint _{C(p)}\mathbf {F} \cdot \mathrm {d} \mathbf {r} } where the line integral is calculated along the boundary C of the area A containing the point p, |A| being the magnitude of the area. This equation defines the component of the curl of F along the direction {\displaystyle \mathbf {\hat {u}} }. The infinitesimal surfaces bounded by C have {\displaystyle \mathbf {\hat {u}} } as their normal. C is oriented via the right-hand rule. The above formula means that the component of the curl of a vector field along a certain axis is the infinitesimal area density of the circulation of the field in a plane perpendicular to that axis.
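The line-integral definition can be checked numerically. For the rotating field F = (y, −x, 0), the circulation around a small square in the xy-plane, divided by its area, approaches the z-component of the curl, which is −2. The discretization below is a sketch with arbitrarily chosen square size and step count:

```python
import numpy as np

def F(p):
    x, y, z = p
    return np.array([y, -x, 0.0])

def circulation_density(F, a=1e-3, n=100):
    """Line integral of F around a counterclockwise square of half-side a
    in the xy-plane centred at the origin, divided by the enclosed area."""
    t = np.linspace(-a, a, n, endpoint=False) + a / n   # midpoints of segments
    ds = 2 * a / n
    # Four edges of the square, traversed counterclockwise.
    edges = [
        (lambda s: (s, -a, 0.0),  np.array([1.0, 0.0, 0.0])),   # bottom, +x
        (lambda s: (a, s, 0.0),   np.array([0.0, 1.0, 0.0])),   # right,  +y
        (lambda s: (-s, a, 0.0),  np.array([-1.0, 0.0, 0.0])),  # top,    -x
        (lambda s: (-a, -s, 0.0), np.array([0.0, -1.0, 0.0])),  # left,   -y
    ]
    total = 0.0
    for path, direction in edges:
        for s in t:
            total += F(np.array(path(s))) @ direction * ds
    return total / (2 * a) ** 2

print(circulation_density(F))   # approaches -2, the z-component of curl F
```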
This formula does not a priori define a legitimate vector field, for the individual circulation densities with respect to various axes a priori need not relate to each other in the same way as the components of a vector do; that they do indeed relate to each other in this precise manner must be proven separately. To this definition fits naturally the Kelvin–Stokes theorem, as a global formula corresponding to the definition. It equates the surface integral of the curl of a vector field to the above line integral taken around the boundary of the surface. Another way one can define the curl vector of a function F at a point is explicitly as the limiting value of a vector-valued surface integral around a shell enclosing p divided by the volume enclosed, as the shell is contracted indefinitely around p. More specifically, the curl may be defined by the vector formula {\displaystyle (\nabla \times \mathbf {F} )(p){\overset {\underset {\mathrm {def} }{}}{{}={}}}\lim _{V\to 0}{\frac {1}{|V|}}\oint _{S}\mathbf {\hat {n}} \times \mathbf {F} \ \mathrm {d} S} where the surface integral is calculated along the boundary S of the volume V, |V| being the magnitude of the volume, and {\displaystyle \mathbf {\hat {n}} } pointing outward from the surface S perpendicularly at every point of S. In this formula, the cross product in the integrand measures the tangential component of F at each point on the surface S, and points along the surface at right angles to the tangential projection of F. Integrating this cross product over the whole surface results in a vector whose magnitude measures the overall circulation of F around S, and whose direction is at right angles to this circulation. The above formula says that the curl of a vector field at a point is the infinitesimal volume density of this "circulation vector" around the point.
To this definition fits naturally another global formula (similar to the Kelvin-Stokes theorem) which equates thevolume integralof the curl of a vector field to the above surface integral taken over the boundary of the volume. Whereas the above two definitions of the curl are coordinate free, there is another "easy to memorize" definition of the curl in curvilinearorthogonal coordinates, e.g. inCartesian coordinates,spherical,cylindrical, or evenellipticalorparabolic coordinates:(curl⁡F)1=1h2h3(∂(h3F3)∂u2−∂(h2F2)∂u3),(curl⁡F)2=1h3h1(∂(h1F1)∂u3−∂(h3F3)∂u1),(curl⁡F)3=1h1h2(∂(h2F2)∂u1−∂(h1F1)∂u2).{\displaystyle {\begin{aligned}&(\operatorname {curl} \mathbf {F} )_{1}={\frac {1}{h_{2}h_{3}}}\left({\frac {\partial (h_{3}F_{3})}{\partial u_{2}}}-{\frac {\partial (h_{2}F_{2})}{\partial u_{3}}}\right),\\[5pt]&(\operatorname {curl} \mathbf {F} )_{2}={\frac {1}{h_{3}h_{1}}}\left({\frac {\partial (h_{1}F_{1})}{\partial u_{3}}}-{\frac {\partial (h_{3}F_{3})}{\partial u_{1}}}\right),\\[5pt]&(\operatorname {curl} \mathbf {F} )_{3}={\frac {1}{h_{1}h_{2}}}\left({\frac {\partial (h_{2}F_{2})}{\partial u_{1}}}-{\frac {\partial (h_{1}F_{1})}{\partial u_{2}}}\right).\end{aligned}}} The equation for each component(curlF)kcan be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1 → 2, 2 → 3, and 3 → 1 (where the subscripts represent the relevant indices). If(x1,x2,x3)are theCartesian coordinatesand(u1,u2,u3)are the orthogonal coordinates, thenhi=(∂x1∂ui)2+(∂x2∂ui)2+(∂x3∂ui)2{\displaystyle h_{i}={\sqrt {\left({\frac {\partial x_{1}}{\partial u_{i}}}\right)^{2}+\left({\frac {\partial x_{2}}{\partial u_{i}}}\right)^{2}+\left({\frac {\partial x_{3}}{\partial u_{i}}}\right)^{2}}}}is the length of the coordinate vector corresponding toui. The remaining two components of curl result fromcyclic permutationofindices: 3,1,2 → 1,2,3 → 2,3,1. 
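As a concrete check of the scale-factor formula, take cylindrical coordinates (u1, u2, u3) = (r, φ, z) with h1 = 1, h2 = r, h3 = 1, and the rigid-rotation field with F1 = 0, F2 = r, F3 = 0 (i.e. F = r φ̂). The sketch below evaluates the three components symbolically with sympy, using the cyclic-index form of the formula:

```python
import sympy as sp

r, phi, z = sp.symbols('r phi z', positive=True)
u = (r, phi, z)
h = (1, r, 1)          # scale factors for cylindrical coordinates
F = (0, r, 0)          # rigid rotation: F_phi = r

def curl_component(i):
    """(curl F)_i from the orthogonal-coordinates formula, 0-indexed,
    with (j, k) the cyclic successors of i."""
    j, k = (i + 1) % 3, (i + 2) % 3
    return sp.simplify(
        (sp.diff(h[k] * F[k], u[j]) - sp.diff(h[j] * F[j], u[k]))
        / (h[j] * h[k])
    )

curl_F = [curl_component(i) for i in range(3)]
print(curl_F)   # [0, 0, 2]: twice the angular velocity, along the z-axis
```

The result matches the rotating-field example discussed later in Cartesian coordinates, where the curl of a unit-angular-velocity rotation also has magnitude 2.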
In practice, the two coordinate-free definitions described above are rarely used because in virtually all cases, the curl operator can be applied using some set of curvilinear coordinates, for which simpler representations have been derived. The notation {\displaystyle \nabla \times \mathbf {F} } has its origins in the similarities to the 3-dimensional cross product, and it is useful as a mnemonic in Cartesian coordinates if {\displaystyle \nabla } is taken as a vector differential operator del. Such notation involving operators is common in physics and algebra. Expanded in 3-dimensional Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), {\displaystyle \nabla \times \mathbf {F} } is, for F composed of {\displaystyle [F_{x},F_{y},F_{z}]} (where the subscripts indicate the components of the vector, not partial derivatives): {\displaystyle \nabla \times \mathbf {F} ={\begin{vmatrix}{\boldsymbol {\hat {\imath }}}&{\boldsymbol {\hat {\jmath }}}&{\boldsymbol {\hat {k}}}\\[5mu]{\dfrac {\partial }{\partial x}}&{\dfrac {\partial }{\partial y}}&{\dfrac {\partial }{\partial z}}\\[5mu]F_{x}&F_{y}&F_{z}\end{vmatrix}}} where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. This expands as follows:[8] {\displaystyle \nabla \times \mathbf {F} =\left({\frac {\partial F_{z}}{\partial y}}-{\frac {\partial F_{y}}{\partial z}}\right){\boldsymbol {\hat {\imath }}}+\left({\frac {\partial F_{x}}{\partial z}}-{\frac {\partial F_{z}}{\partial x}}\right){\boldsymbol {\hat {\jmath }}}+\left({\frac {\partial F_{y}}{\partial x}}-{\frac {\partial F_{x}}{\partial y}}\right){\boldsymbol {\hat {k}}}} Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes, but it inverts under reflection.
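The cofactor expansion of the determinant mnemonic gives exactly the three components above; a symbolic sketch with sympy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(Fx, Fy, Fz):
    """Cartesian curl from the cofactor expansion of the del-cross mnemonic."""
    return (
        sp.diff(Fz, y) - sp.diff(Fy, z),   # i-component
        sp.diff(Fx, z) - sp.diff(Fz, x),   # j-component
        sp.diff(Fy, x) - sp.diff(Fx, y),   # k-component
    )

print(curl(y, -x, 0))   # (0, 0, -2): the uniform-rotation example below

# The curl of a gradient field vanishes identically.
grad = [sp.diff(x * y * z, v) for v in (x, y, z)]
print(curl(*grad))      # (0, 0, 0)
```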
In a general coordinate system, the curl is given by[1](∇×F)k=1gεkℓm∇ℓFm{\displaystyle (\nabla \times \mathbf {F} )^{k}={\frac {1}{\sqrt {g}}}\varepsilon ^{k\ell m}\nabla _{\ell }F_{m}}whereεdenotes theLevi-Civita tensor,∇thecovariant derivative,g{\displaystyle g}is thedeterminantof themetric tensorand theEinstein summation conventionimplies that repeated indices are summed over. Due to the symmetry of theChristoffel symbolsparticipating in the covariant derivative, this expression reduces to thepartial derivative:(∇×F)=1gRkεkℓm∂ℓFm{\displaystyle (\nabla \times \mathbf {F} )={\frac {1}{\sqrt {g}}}\mathbf {R} _{k}\varepsilon ^{k\ell m}\partial _{\ell }F_{m}}whereRkare the local basis vectors. Equivalently, using theexterior derivative, the curl can be expressed as:∇×F=(⋆(dF♭))♯{\displaystyle \nabla \times \mathbf {F} =\left(\star {\big (}{\mathrm {d} }\mathbf {F} ^{\flat }{\big )}\right)^{\sharp }} Here♭and♯are themusical isomorphisms, and★is theHodge star operator. This formula shows how to calculate the curl ofFin any coordinate system, and how to extend the curl to anyorientedthree-dimensionalRiemannianmanifold. Since this depends on a choice of orientation, curl is achiraloperation. In other words, if the orientation is reversed, then the direction of the curl is also reversed. Suppose the vector field describes thevelocity fieldof afluid flow(such as a large tank ofliquidorgas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. 
The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point.[9]The curl of the vector field at any point is given by the rotation of an infinitesimal area in thexy-plane (forz-axis component of the curl),zx-plane (fory-axis component of the curl) andyz-plane (forx-axis component of the curl vector). This can be seen in the examples below. Thevector fieldF(x,y,z)=yı^−xȷ^{\displaystyle \mathbf {F} (x,y,z)=y{\boldsymbol {\hat {\imath }}}-x{\boldsymbol {\hat {\jmath }}}}can be decomposed asFx=y,Fy=−x,Fz=0.{\displaystyle F_{x}=y,F_{y}=-x,F_{z}=0.} Upon visual inspection, the field can be described as "rotating". If the vectors of the field were to represent a linearforceacting on objects present at that point, and an object were to be placed inside the field, the object would start to rotate clockwise around itself. This is true regardless of where the object is placed. Calculating the curl:∇×F=0ı^+0ȷ^+(∂∂x(−x)−∂∂yy)k^=−2k^{\displaystyle \nabla \times \mathbf {F} =0{\boldsymbol {\hat {\imath }}}+0{\boldsymbol {\hat {\jmath }}}+\left({\frac {\partial }{\partial x}}(-x)-{\frac {\partial }{\partial y}}y\right){\boldsymbol {\hat {k}}}=-2{\boldsymbol {\hat {k}}}} The resulting vector field describing the curl would at all points be pointing in the negativezdirection. The results of this equation align with what could have been predicted using theright-hand ruleusing aright-handed coordinate system. Being a uniform vector field, the object described before would have the same rotational intensity regardless of where it was placed. For the vector fieldF(x,y,z)=−x2ȷ^{\displaystyle \mathbf {F} (x,y,z)=-x^{2}{\boldsymbol {\hat {\jmath }}}} the curl is not as obvious from the graph. 
However, taking the object in the previous example, and placing it anywhere on the linex= 3, the force exerted on the right side would be slightly greater than the force exerted on the left, causing it to rotate clockwise. Using the right-hand rule, it can be predicted that the resulting curl would be straight in the negativezdirection. Inversely, if placed onx= −3, the object would rotate counterclockwise and the right-hand rule would result in a positivezdirection. Calculating the curl:∇×F=0ı^+0ȷ^+∂∂x(−x2)k^=−2xk^.{\displaystyle {\nabla }\times \mathbf {F} =0{\boldsymbol {\hat {\imath }}}+0{\boldsymbol {\hat {\jmath }}}+{\frac {\partial }{\partial x}}\left(-x^{2}\right){\boldsymbol {\hat {k}}}=-2x{\boldsymbol {\hat {k}}}.} The curl points in the negativezdirection whenxis positive and vice versa. In this field, the intensity of rotation would be greater as the object moves away from the planex= 0. In generalcurvilinear coordinates(not only in Cartesian coordinates), the curl of a cross product of vector fieldsvandFcan be shown to be∇×(v×F)=((∇⋅F)+F⋅∇)v−((∇⋅v)+v⋅∇)F.{\displaystyle \nabla \times \left(\mathbf {v\times F} \right)={\Big (}\left(\mathbf {\nabla \cdot F} \right)+\mathbf {F\cdot \nabla } {\Big )}\mathbf {v} -{\Big (}\left(\mathbf {\nabla \cdot v} \right)+\mathbf {v\cdot \nabla } {\Big )}\mathbf {F} \ .} Interchanging the vector fieldvand∇operator, we arrive at the cross product of a vector field with curl of a vector field:v×(∇×F)=∇F(v⋅F)−(v⋅∇)F,{\displaystyle \mathbf {v\ \times } \left(\mathbf {\nabla \times F} \right)=\nabla _{\mathbf {F} }\left(\mathbf {v\cdot F} \right)-\left(\mathbf {v\cdot \nabla } \right)\mathbf {F} \ ,}where∇Fis the Feynman subscript notation, which considers only the variation due to the vector fieldF(i.e., in this case,vis treated as being constant in space). Another example is the curl of a curl of a vector field. 
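Both examples can be checked numerically with a central-difference approximation of the curl; the step size h and the sample points below are arbitrary choices:

```python
import numpy as np

def curl_fd(F, p, h=1e-5):
    """Central-difference approximation of (curl F)(p) in Cartesian coordinates."""
    def d(i, j):
        # partial derivative of component i of F with respect to coordinate j
        e = np.zeros(3)
        e[j] = h
        return (F(p + e)[i] - F(p - e)[i]) / (2 * h)
    return np.array([d(2, 1) - d(1, 2),
                     d(0, 2) - d(2, 0),
                     d(1, 0) - d(0, 1)])

rotation = lambda p: np.array([p[1], -p[0], 0.0])       # F = y i - x j
shear    = lambda p: np.array([0.0, -p[0] ** 2, 0.0])   # F = -x^2 j

print(curl_fd(rotation, np.array([1.0, 2.0, 0.0])))  # (0, 0, -2) at every point
print(curl_fd(shear, np.array([3.0, 0.0, 0.0])))     # (0, 0, -6) = -2x at x = 3
```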
It can be shown that in general coordinates∇×(∇×F)=∇(∇⋅F)−∇2F,{\displaystyle \nabla \times \left(\mathbf {\nabla \times F} \right)=\mathbf {\nabla } (\mathbf {\nabla \cdot F} )-\nabla ^{2}\mathbf {F} \ ,}and this identity defines thevector LaplacianofF, symbolized as∇2F. The curl of thegradientofanyscalar fieldφis always thezero vectorfield∇×(∇φ)=0{\displaystyle \nabla \times (\nabla \varphi )={\boldsymbol {0}}}which follows from theantisymmetryin the definition of the curl, and thesymmetry of second derivatives. Thedivergenceof the curl of any vector field is equal to zero:∇⋅(∇×F)=0.{\displaystyle \nabla \cdot (\nabla \times \mathbf {F} )=0.} Ifφis a scalar valued function andFis a vector field, then∇×(φF)=∇φ×F+φ∇×F{\displaystyle \nabla \times (\varphi \mathbf {F} )=\nabla \varphi \times \mathbf {F} +\varphi \nabla \times \mathbf {F} } The vector calculus operations ofgrad, curl, anddivare most easily generalized in the context of differential forms, which involves a number of steps. In short, they correspond to the derivatives of 0-forms, 1-forms, and 2-forms, respectively. The geometric interpretation of curl as rotation corresponds to identifyingbivectors(2-vectors) in 3 dimensions with thespecial orthogonal Lie algebraso(3){\displaystyle {\mathfrak {so}}(3)}of infinitesimal rotations (in coordinates, skew-symmetric 3 × 3 matrices), while representing rotations by vectors corresponds to identifying 1-vectors (equivalently, 2-vectors) andso(3){\displaystyle {\mathfrak {so}}(3)},these all being 3-dimensional spaces. 
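The identities above (curl of curl, curl of gradient, divergence of curl, and the product rule) can be verified symbolically on a sample field; the particular F and φ below are arbitrary polynomial-and-sine choices:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def div(F):
    return sum(sp.diff(F[i], V[i]) for i in range(3))

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in V])

def vec_laplacian(F):
    return sp.Matrix([sum(sp.diff(F[i], v, 2) for v in V) for i in range(3)])

F = sp.Matrix([x**2 * y, y * z**3, sp.sin(x) * z])
phi = x * y + z**2

# curl(curl F) = grad(div F) - vector Laplacian of F
assert sp.simplify(curl(curl(F)) - (grad(div(F)) - vec_laplacian(F))) == sp.zeros(3, 1)

# curl(grad phi) = 0 and div(curl F) = 0
assert curl(grad(phi)) == sp.zeros(3, 1)
assert sp.simplify(div(curl(F))) == 0

# product rule: curl(phi F) = grad(phi) x F + phi curl F
assert sp.simplify(curl(phi * F) - (grad(phi).cross(F) + phi * curl(F))) == sp.zeros(3, 1)
```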
In 3 dimensions, a differential 0-form is a real-valued functionf(x,y,z){\displaystyle f(x,y,z)}; a differential 1-form is the following expression, where the coefficients are functions:a1dx+a2dy+a3dz;{\displaystyle a_{1}\,dx+a_{2}\,dy+a_{3}\,dz;}a differential 2-form is the formal sum, again with function coefficients:a12dx∧dy+a13dx∧dz+a23dy∧dz;{\displaystyle a_{12}\,dx\wedge dy+a_{13}\,dx\wedge dz+a_{23}\,dy\wedge dz;}and a differential 3-form is defined by a single term with one function as coefficient:a123dx∧dy∧dz.{\displaystyle a_{123}\,dx\wedge dy\wedge dz.}(Here thea-coefficients are real functions of three variables; thewedge products, e.g.dx∧dy{\displaystyle {\text{d}}x\wedge {\text{d}}y}, can be interpreted asoriented plane segments,dx∧dy=−dy∧dx{\displaystyle {\text{d}}x\wedge {\text{d}}y=-{\text{d}}y\wedge {\text{d}}x}, etc.) Theexterior derivativeof ak-form inR3is defined as the(k+ 1)-form from above—and inRnif, e.g.,ω(k)=∑1≤i1<i2<⋯<ik≤nai1,…,ikdxi1∧⋯∧dxik,{\displaystyle \omega ^{(k)}=\sum _{1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n}a_{i_{1},\ldots ,i_{k}}\,dx_{i_{1}}\wedge \cdots \wedge dx_{i_{k}},}then the exterior derivativedleads todω(k)=∑j=1i1<⋯<ikn∂ai1,…,ik∂xjdxj∧dxi1∧⋯∧dxik.{\displaystyle d\omega ^{(k)}=\sum _{\scriptstyle {j=1} \atop \scriptstyle {i_{1}<\cdots <i_{k}}}^{n}{\frac {\partial a_{i_{1},\ldots ,i_{k}}}{\partial x_{j}}}\,dx_{j}\wedge dx_{i_{1}}\wedge \cdots \wedge dx_{i_{k}}.} The exterior derivative of a 1-form is therefore a 2-form, and that of a 2-form is a 3-form. On the other hand, because of the interchangeability of mixed derivatives,∂2∂xi∂xj=∂2∂xj∂xi,{\displaystyle {\frac {\partial ^{2}}{\partial x_{i}\,\partial x_{j}}}={\frac {\partial ^{2}}{\partial x_{j}\,\partial x_{i}}},}and antisymmetry,dxi∧dxj=−dxj∧dxi{\displaystyle dx_{i}\wedge dx_{j}=-dx_{j}\wedge dx_{i}} the twofold application of the exterior derivative yields0{\displaystyle 0}(the zerok+2{\displaystyle k+2}-form). 
Thus, denoting the space ofk-forms byΩk(R3){\displaystyle \Omega ^{k}(\mathbb {R} ^{3})}and the exterior derivative bydone gets a sequence:0⟶dΩ0(R3)⟶dΩ1(R3)⟶dΩ2(R3)⟶dΩ3(R3)⟶d0.{\displaystyle 0\,{\overset {d}{\longrightarrow }}\;\Omega ^{0}\left(\mathbb {R} ^{3}\right)\,{\overset {d}{\longrightarrow }}\;\Omega ^{1}\left(\mathbb {R} ^{3}\right)\,{\overset {d}{\longrightarrow }}\;\Omega ^{2}\left(\mathbb {R} ^{3}\right)\,{\overset {d}{\longrightarrow }}\;\Omega ^{3}\left(\mathbb {R} ^{3}\right)\,{\overset {d}{\longrightarrow }}\,0.} HereΩk(Rn){\displaystyle \Omega ^{k}(\mathbb {R} ^{n})}is the space of sections of theexterior algebraΛk(Rn){\displaystyle \Lambda ^{k}(\mathbb {R} ^{n})}vector bundleoverRn, whose dimension is thebinomial coefficient(nk){\displaystyle {\binom {n}{k}}}; note thatΩk(R3)=0{\displaystyle \Omega ^{k}(\mathbb {R} ^{3})=0}fork>3{\displaystyle k>3}ork<0{\displaystyle k<0}. Writing only dimensions, one obtains a row ofPascal's triangle: 0→1→3→3→1→0;{\displaystyle 0\rightarrow 1\rightarrow 3\rightarrow 3\rightarrow 1\rightarrow 0;} the 1-dimensional fibers correspond to scalar fields, and the 3-dimensional fibers to vector fields, as described below. Modulo suitable identifications, the three nontrivial occurrences of the exterior derivative correspond to grad, curl, and div. Differential forms and the differential can be defined on any Euclidean space, or indeed any manifold, without any notion of a Riemannian metric. On aRiemannian manifold, or more generallypseudo-Riemannian manifold,k-forms can be identified withk-vectorfields (k-forms arek-covector fields, and a pseudo-Riemannian metric gives an isomorphism between vectors and covectors), and on anorientedvector space with anondegenerate form(an isomorphism between vectors and covectors), there is an isomorphism betweenk-vectors and(n−k)-vectors; in particular on (the tangent space of) an oriented pseudo-Riemannian manifold. 
Thus on an oriented pseudo-Riemannian manifold, one can interchangek-forms,k-vector fields,(n−k)-forms, and(n−k)-vector fields; this is known asHodge duality. Concretely, onR3this is given by: Thus, identifying 0-forms and 3-forms with scalar fields, and 1-forms and 2-forms with vector fields: On the other hand, the fact thatd2= 0corresponds to the identities∇×(∇f)=0{\displaystyle \nabla \times (\nabla f)=\mathbf {0} }for any scalar fieldf, and∇⋅(∇×v)=0{\displaystyle \nabla \cdot (\nabla \times \mathbf {v} )=0}for any vector fieldv. Grad and div generalize to all oriented pseudo-Riemannian manifolds, with the same geometric interpretation, because the spaces of 0-forms andn-forms at each point are always 1-dimensional and can be identified with scalar fields, while the spaces of 1-forms and(n− 1)-forms are always fiberwisen-dimensional and can be identified with vector fields. Curl does not generalize in this way to 4 or more dimensions (or down to 2 or fewer dimensions); in 4 dimensions the dimensions are so the curl of a 1-vector field (fiberwise 4-dimensional) is a2-vector field, which at each point belongs to 6-dimensional vector space, and so one hasω(2)=∑i<k=1,2,3,4ai,kdxi∧dxk,{\displaystyle \omega ^{(2)}=\sum _{i<k=1,2,3,4}a_{i,k}\,dx_{i}\wedge dx_{k},}which yields a sum of six independent terms, and cannot be identified with a 1-vector field. Nor can one meaningfully go from a 1-vector field to a 2-vector field to a 3-vector field (4 → 6 → 4), as taking the differential twice yields zero (d2= 0). Thus there is no curl function from vector fields to vector fields in other dimensions arising in this way. However, one can define a curl of a vector field as a2-vector fieldin general, as described below. 
2-vectors correspond to the exterior power Λ2V; in the presence of an inner product, in coordinates these are the skew-symmetric matrices, which are geometrically considered as the special orthogonal Lie algebra so{\displaystyle {\mathfrak {so}}}(V) of infinitesimal rotations. This has (n2) = ⁠1/2⁠n(n − 1) dimensions, and allows one to interpret the differential of a 1-vector field as its infinitesimal rotations. Only in 3 dimensions (or trivially in 0 dimensions) do we have n = ⁠1/2⁠n(n − 1), which is the most elegant and common case. In 2 dimensions the curl of a vector field is not a vector field but a function, as 2-dimensional rotations are given by an angle (a scalar – an orientation is required to choose whether one counts clockwise or counterclockwise rotations as positive); this is not the div, but is rather perpendicular to it. In 3 dimensions the curl of a vector field is a vector field as is familiar (in 1 and 0 dimensions the curl of a vector field is 0, because there are no non-trivial 2-vectors), while in 4 dimensions the curl of a vector field is, geometrically, at each point an element of the 6-dimensional Lie algebra so(4){\displaystyle {\mathfrak {so}}(4)}. The curl of a 3-dimensional vector field which only depends on 2 coordinates (say x and y) is simply a vertical vector field (in the z direction) whose magnitude is the curl of the 2-dimensional vector field, as in the examples on this page. Considering curl as a 2-vector field (an antisymmetric 2-tensor) has been used to generalize vector calculus and associated physics to higher dimensions.[10] In the case where the divergence of a vector field V is zero, a vector field W exists such that V = curl(W).[citation needed] This is why the magnetic field, characterized by zero divergence, can be expressed as the curl of a magnetic vector potential. If W is a vector field with curl(W) = V, then adding any gradient vector field grad(f) to W will result in another vector field W + grad(f) such that curl(W + grad(f)) = V as well.
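The remark about 3-dimensional fields depending only on two coordinates is easy to verify: for a field (P(x, y), Q(x, y), 0), only the z-component of the curl survives, and it equals the 2-dimensional scalar curl ∂Q/∂x − ∂P/∂y. A sympy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# a 3D vector field depending only on x and y, with no z-component
P = sp.Function('P')(x, y)
Q = sp.Function('Q')(x, y)
F = [P, Q, sp.S.Zero]

curl_F = [sp.diff(F[2], y) - sp.diff(F[1], z),
          sp.diff(F[0], z) - sp.diff(F[2], x),
          sp.diff(F[1], x) - sp.diff(F[0], y)]
print(curl_F)  # only the z-component survives: dQ/dx - dP/dy
```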
This can be summarized by saying that the inverse curl of a three-dimensional vector field can be obtained up to an unknownirrotational fieldwith theBiot–Savart law.
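For a concrete illustration, take the uniform divergence-free field V = (0, 0, B) (a uniform magnetic field, say). One admissible vector potential is W = (−By/2, Bx/2, 0), and adding the gradient of any function f leaves the curl unchanged; both the particular W and f here are our choices for the sketch, not canonical:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
B = sp.symbols('B')  # uniform field strength (illustrative)

def curl(w):
    return [sp.diff(w[2], y) - sp.diff(w[1], z),
            sp.diff(w[0], z) - sp.diff(w[2], x),
            sp.diff(w[1], x) - sp.diff(w[0], y)]

V = [sp.S.Zero, sp.S.Zero, B]          # uniform field, div V = 0
W = [-B*y/2, B*x/2, sp.S.Zero]         # one vector potential with curl W = V
f = sp.Function('f')(x, y, z)          # arbitrary gauge function
W2 = [W[i] + sp.diff(f, c) for i, c in enumerate((x, y, z))]  # W + grad f

print(curl(W))                                                   # recovers V
print([sp.simplify(a - b) for a, b in zip(curl(W2), curl(W))])   # same curl
```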
https://en.wikipedia.org/wiki/Curl_(mathematics)
Invector calculus,divergenceis avector operatorthat operates on avector field, producing ascalar fieldgiving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the outwardfluxof a vector field from an infinitesimal volume around a given point. As an example, consider air as it is heated or cooled. Thevelocityof the air at each point defines a vector field. While air is heated in a region, it expands in all directions, and thus the velocity field points outward from that region. The divergence of the velocity field in that region would thus have a positive value. While the air is cooled and thus contracting, the divergence of the velocity has a negative value. In physical terms, the divergence of a vector field is the extent to which the vector fieldfluxbehaves like asource or a sinkat a given point. It is a local measure of its "outgoingness" – the extent to which there are more of the field vectors exiting from an infinitesimal region of space than entering it. A point at which the flux is outgoing has positive divergence, and is often called a "source" of the field. A point at which the flux is directed inward has negative divergence, and is often called a "sink" of the field. The greater the flux of field through a small surface enclosing a given point, the greater the value of divergence at that point. A point at which there is zero flux through an enclosing surface has zero divergence. The divergence of a vector field is often illustrated using the simple example of thevelocity fieldof a fluid, a liquid or gas. A moving gas has avelocity, a speed and direction at each point, which can be represented by avector, so the velocity of the gas forms avector field. If a gas is heated, it will expand. This will cause a net motion of gas particles outward in all directions. 
Any closed surface in the gas will enclose gas which is expanding, so there will be an outward flux of gas through the surface. So the velocity field will have positive divergence everywhere. Similarly, if the gas is cooled, it will contract. There will be more room for gas particles in any volume, so the external pressure of the fluid will cause a net flow of gas volume inward through any closed surface. Therefore, the velocity field has negative divergence everywhere. In contrast, in a gas at a constant temperature and pressure, the net flux of gas out of any closed surface is zero. The gas may be moving, but the volume rate of gas flowing into any closed surface must equal the volume rate flowing out, so the net flux is zero. Thus the gas velocity has zero divergence everywhere. A field which has zero divergence everywhere is called solenoidal. If the gas is heated only at one point or small region, or a small tube is introduced which supplies a source of additional gas at one point, the gas there will expand, pushing fluid particles around it outward in all directions. This will cause an outward velocity field throughout the gas, centered on the heated point. Any closed surface enclosing the heated point will have a flux of gas particles passing out of it, so there is positive divergence at that point. However any closed surface not enclosing the point will have a constant density of gas inside, so just as many fluid particles are entering as leaving the volume, thus the net flux out of the volume is zero. Therefore, the divergence at any other point is zero. The divergence of a vector field F(x) at a point x0 is defined as the limit of the ratio of the surface integral of F out of the closed surface of a volume V enclosing x0 to the volume of V, as V shrinks to zero: div⁡F|x0 = lim|V|→0 (1/|V|) ∮S(V) F⋅n̂ dS{\displaystyle \left.\operatorname {div} \mathbf {F} \right|_{\mathbf {x} _{0}}=\lim _{|V|\to 0}{\frac {1}{|V|}}\oint _{S(V)}\mathbf {F} \cdot {\hat {\mathbf {n} }}\,dS,} where |V| is the volume of V, S(V) is the boundary of V, and n^{\displaystyle \mathbf {\hat {n}} } is the outward unit normal to that surface.
It can be shown that the above limit always converges to the same value for any sequence of volumes that containx0and approach zero volume. The result,divF, is a scalar function ofx. Since this definition is coordinate-free, it shows that the divergence is the same in anycoordinate system. However the above definition is not often used practically to calculate divergence; when the vector field is given in a coordinate system the coordinate definitions below are much simpler to use. A vector field with zero divergence everywhere is calledsolenoidal– in which case any closed surface has no net flux across it. In three-dimensional Cartesian coordinates, the divergence of acontinuously differentiablevector fieldF=Fxi+Fyj+Fzk{\displaystyle \mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} }is defined as thescalar-valued function: div⁡F=∇⋅F=(∂∂x,∂∂y,∂∂z)⋅(Fx,Fy,Fz)=∂Fx∂x+∂Fy∂y+∂Fz∂z.{\displaystyle \operatorname {div} \mathbf {F} =\nabla \cdot \mathbf {F} =\left({\frac {\partial }{\partial x}},{\frac {\partial }{\partial y}},{\frac {\partial }{\partial z}}\right)\cdot (F_{x},F_{y},F_{z})={\frac {\partial F_{x}}{\partial x}}+{\frac {\partial F_{y}}{\partial y}}+{\frac {\partial F_{z}}{\partial z}}.} Although expressed in terms of coordinates, the result is invariant underrotations, as the physical interpretation suggests. This is because the trace of theJacobian matrixof anN-dimensional vector fieldFinN-dimensional space is invariant under any invertible linear transformation[clarification needed]. The common notation for the divergence∇ ·Fis a convenient mnemonic, where the dot denotes an operation reminiscent of thedot product: take the components of the∇operator (seedel), apply them to the corresponding components ofF, and sum the results. Because applying an operator is different from multiplying the components, this is considered anabuse of notation. 
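The limit definition and the coordinate formula can be compared numerically: approximate the net outward flux of a sample field through a small cube by sampling the normal component at each face center, divide by the cube's volume, and compare with the sum of partial derivatives. The field and evaluation point below are arbitrary choices for the sketch:

```python
import numpy as np

def F(p):
    x, y, z = p
    return np.array([x**2 * y, y * z, np.sin(x) + z])

def div_F(p):  # analytic divergence: dFx/dx + dFy/dy + dFz/dz = 2xy + z + 1
    x, y, z = p
    return 2*x*y + z + 1

def flux_over_volume(F, p0, h=1e-3):
    """Net outward flux of F through a cube of side 2h about p0, divided by
    the cube's volume (one normal-component sample per face)."""
    total = 0.0
    for axis in range(3):
        for sign in (+1, -1):
            q = np.array(p0, float)
            q[axis] += sign * h
            total += sign * F(q)[axis] * (2*h)**2  # face area times normal component
    return total / (2*h)**3

p0 = (0.3, -1.2, 0.7)
approx = flux_over_volume(F, p0)
print(approx, div_F(p0))  # the two values agree to roughly O(h^2)
```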
For a vector expressed inlocalunitcylindrical coordinatesasF=erFr+eθFθ+ezFz,{\displaystyle \mathbf {F} =\mathbf {e} _{r}F_{r}+\mathbf {e} _{\theta }F_{\theta }+\mathbf {e} _{z}F_{z},}whereeais the unit vector in directiona, the divergence is[1]div⁡F=∇⋅F=1r∂∂r(rFr)+1r∂Fθ∂θ+∂Fz∂z.{\displaystyle \operatorname {div} \mathbf {F} =\nabla \cdot \mathbf {F} ={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(rF_{r}\right)+{\frac {1}{r}}{\frac {\partial F_{\theta }}{\partial \theta }}+{\frac {\partial F_{z}}{\partial z}}.} The use of local coordinates is vital for the validity of the expression. If we considerxthe position vector and the functionsr(x),θ(x), andz(x), which assign the correspondingglobalcylindrical coordinate to a vector, in generalr(F(x))≠Fr(x){\displaystyle r(\mathbf {F} (\mathbf {x} ))\neq F_{r}(\mathbf {x} )},θ(F(x))≠Fθ(x){\displaystyle \theta (\mathbf {F} (\mathbf {x} ))\neq F_{\theta }(\mathbf {x} )},andz(F(x))≠Fz(x){\displaystyle z(\mathbf {F} (\mathbf {x} ))\neq F_{z}(\mathbf {x} )}.In particular, if we consider the identity functionF(x) =x, we find that: θ(F(x))=θ≠Fθ(x)=0.{\displaystyle \theta (\mathbf {F} (\mathbf {x} ))=\theta \neq F_{\theta }(\mathbf {x} )=0.} Inspherical coordinates, withθthe angle with thezaxis andφthe rotation around thezaxis, andFagain written in local unit coordinates, the divergence is[2]div⁡F=∇⋅F=1r2∂∂r(r2Fr)+1rsin⁡θ∂∂θ(sin⁡θFθ)+1rsin⁡θ∂Fφ∂φ.{\displaystyle \operatorname {div} \mathbf {F} =\nabla \cdot \mathbf {F} ={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}F_{r}\right)+{\frac {1}{r\sin \theta }}{\frac {\partial }{\partial \theta }}\left(\sin \theta \,F_{\theta }\right)+{\frac {1}{r\sin \theta }}{\frac {\partial F_{\varphi }}{\partial \varphi }}.} LetAbe continuously differentiable second-ordertensor fielddefined as follows: A=[A11A12A13A21A22A23A31A32A33]{\displaystyle \mathbf {A} ={\begin{bmatrix}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\\A_{31}&A_{32}&A_{33}\end{bmatrix}}} the divergence in cartesian 
coordinate system is a first-order tensor field[3] and can be defined in two ways:[4] div⁡(A)=∂Aik∂xkei=Aik,kei{\displaystyle \operatorname {div} (\mathbf {A} )={\frac {\partial A_{ik}}{\partial x_{k}}}~\mathbf {e} _{i}=A_{ik,k}~\mathbf {e} _{i}={\begin{bmatrix}{\dfrac {\partial A_{11}}{\partial x_{1}}}+{\dfrac {\partial A_{12}}{\partial x_{2}}}+{\dfrac {\partial A_{13}}{\partial x_{3}}}\\{\dfrac {\partial A_{21}}{\partial x_{1}}}+{\dfrac {\partial A_{22}}{\partial x_{2}}}+{\dfrac {\partial A_{23}}{\partial x_{3}}}\\{\dfrac {\partial A_{31}}{\partial x_{1}}}+{\dfrac {\partial A_{32}}{\partial x_{2}}}+{\dfrac {\partial A_{33}}{\partial x_{3}}}\end{bmatrix}}} and[5][6][7] ∇⋅A=∂Aki∂xkei=Aki,kei{\displaystyle \nabla \cdot \mathbf {A} ={\frac {\partial A_{ki}}{\partial x_{k}}}~\mathbf {e} _{i}=A_{ki,k}~\mathbf {e} _{i}={\begin{bmatrix}{\dfrac {\partial A_{11}}{\partial x_{1}}}+{\dfrac {\partial A_{21}}{\partial x_{2}}}+{\dfrac {\partial A_{31}}{\partial x_{3}}}\\{\dfrac {\partial A_{12}}{\partial x_{1}}}+{\dfrac {\partial A_{22}}{\partial x_{2}}}+{\dfrac {\partial A_{32}}{\partial x_{3}}}\\{\dfrac {\partial A_{13}}{\partial x_{1}}}+{\dfrac {\partial A_{23}}{\partial x_{2}}}+{\dfrac {\partial A_{33}}{\partial x_{3}}}\\\end{bmatrix}}} We have div⁡(AT)=∇⋅A{\displaystyle \operatorname {div} {\left(\mathbf {A} ^{\mathsf {T}}\right)}=\nabla \cdot \mathbf {A} } If the tensor is symmetric, Aij=Aji, then div⁡(A)=∇⋅A{\displaystyle \operatorname {div} (\mathbf {A} )=\nabla \cdot \mathbf {A} }. Because of this, the two definitions (and the symbols div and ∇⋅{\displaystyle \nabla \cdot }) are often used interchangeably in the literature (especially in mechanics equations where tensor symmetry is assumed).
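The relation div(Aᵀ) = ∇⋅A between the two conventions is mechanical to check; a sympy sketch with a generic 3×3 tensor field (component names are ours):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
# generic second-order tensor field with arbitrary smooth components
A = sp.Matrix(3, 3, lambda i, k: sp.Function(f'A{i+1}{k+1}')(*X))

def div_rows(A):   # div(A): sum_k dA_ik/dx_k, contraction on the second index
    return sp.Matrix([sum(sp.diff(A[i, k], X[k]) for k in range(3)) for i in range(3)])

def nabla_dot(A):  # nabla . A: sum_k dA_ki/dx_k, contraction on the first index
    return sp.Matrix([sum(sp.diff(A[k, i], X[k]) for k in range(3)) for i in range(3)])

check = sp.simplify(div_rows(A.T) - nabla_dot(A))
print(check)  # zero vector: div(A^T) = nabla . A
```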
Expressions of∇⋅A{\displaystyle \nabla \cdot \mathbf {A} }in cylindrical and spherical coordinates are given in the articledel in cylindrical and spherical coordinates. UsingEinstein notationwe can consider the divergence ingeneral coordinates, which we write asx1, …,xi, …,xn, wherenis the number of dimensions of the domain. Here, the upper index refers to the number of the coordinate or component, sox2refers to the second component, and not the quantityxsquared. The index variableiis used to refer to an arbitrary component, such asxi. The divergence can then be written via theVoss-Weylformula,[8]as: div⁡(F)=1ρ∂(ρFi)∂xi,{\displaystyle \operatorname {div} (\mathbf {F} )={\frac {1}{\rho }}{\frac {\partial {\left(\rho \,F^{i}\right)}}{\partial x^{i}}},} whereρ{\displaystyle \rho }is the local coefficient of thevolume elementandFiare the components ofF=Fiei{\displaystyle \mathbf {F} =F^{i}\mathbf {e} _{i}}with respect to the localunnormalizedcovariant basis(sometimes written asei=∂x/∂xi{\displaystyle \mathbf {e} _{i}=\partial \mathbf {x} /\partial x^{i}}).The Einstein notation implies summation overi, since it appears as both an upper and lower index. The volume coefficientρis a function of position which depends on the coordinate system. In Cartesian, cylindrical and spherical coordinates, using the same conventions as before, we haveρ= 1,ρ=randρ=r2sinθ, respectively. The volume can also be expressed asρ=|detgab|{\textstyle \rho ={\sqrt {\left|\det g_{ab}\right|}}}, wheregabis themetric tensor. Thedeterminantappears because it provides the appropriate invariant definition of the volume, given a set of vectors. Since the determinant is a scalar quantity which doesn't depend on the indices, these can be suppressed, writingρ=|detg|{\textstyle \rho ={\sqrt {\left|\det g\right|}}}.The absolute value is taken in order to handle the general case where the determinant might be negative, such as in pseudo-Riemannian spaces. 
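As a consistency check, the Voss-Weyl formula with ρ = r² sin θ reproduces the spherical-coordinate divergence quoted earlier, once the normalized (unit-basis) components are converted to contravariant ones by dividing by √gᵢᵢ. A sympy sketch:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
# physical (unit-basis) components of F in spherical coordinates
Fr, Fth, Fph = [sp.Function(n)(r, th, ph) for n in ('Fr', 'Fth', 'Fph')]

rho = r**2 * sp.sin(th)  # volume coefficient for spherical coordinates

# contravariant components F^i = Fhat^i / sqrt(g_ii), g = diag(1, r^2, r^2 sin^2 th)
Fc = [Fr, Fth / r, Fph / (r * sp.sin(th))]

voss_weyl = sp.expand(sum(sp.diff(rho * Fc[i], c) for i, c in enumerate((r, th, ph))) / rho)

# the standard unit-basis spherical divergence formula
standard = sp.expand(
    sp.diff(r**2 * Fr, r) / r**2
    + sp.diff(sp.sin(th) * Fth, th) / (r * sp.sin(th))
    + sp.diff(Fph, ph) / (r * sp.sin(th))
)
print(sp.simplify(voss_weyl - standard))  # 0: the two expressions agree
```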
The reason for the square-root is a bit subtle: it effectively avoids double-counting as one goes from curved to Cartesian coordinates, and back. The volume (the determinant) can also be understood as the Jacobian of the transformation from Cartesian to curvilinear coordinates, which for n = 3 gives ρ=|∂(x,y,z)∂(x1,x2,x3)|{\textstyle \rho =\left|{\frac {\partial (x,y,z)}{\partial (x^{1},x^{2},x^{3})}}\right|}. Some conventions expect all local basis elements to be normalized to unit length, as was done in the previous sections. If we write e^i{\displaystyle {\hat {\mathbf {e} }}_{i}} for the normalized basis, and F^i{\displaystyle {\hat {F}}^{i}} for the components of F with respect to it, we have that F=Fiei=Fi‖ei‖ei‖ei‖=Figiie^i=F^ie^i,{\displaystyle \mathbf {F} =F^{i}\mathbf {e} _{i}=F^{i}\|{\mathbf {e} _{i}}\|{\frac {\mathbf {e} _{i}}{\|\mathbf {e} _{i}\|}}=F^{i}{\sqrt {g_{ii}}}\,{\hat {\mathbf {e} }}_{i}={\hat {F}}^{i}{\hat {\mathbf {e} }}_{i},} using one of the properties of the metric tensor. By dotting both sides of the last equality with the contravariant element e^i{\displaystyle {\hat {\mathbf {e} }}^{i}}, we can conclude that Fi=F^i/gii{\textstyle F^{i}={\hat {F}}^{i}/{\sqrt {g_{ii}}}}. After substituting, the formula becomes: div⁡(F)=1ρ∂(ρgiiF^i)∂xi=1detg∂(detggiiF^i)∂xi.{\displaystyle \operatorname {div} (\mathbf {F} )={\frac {1}{\rho }}{\frac {\partial \left({\frac {\rho }{\sqrt {g_{ii}}}}{\hat {F}}^{i}\right)}{\partial x^{i}}}={\frac {1}{\sqrt {\det g}}}{\frac {\partial \left({\sqrt {\frac {\det g}{g_{ii}}}}\,{\hat {F}}^{i}\right)}{\partial x^{i}}}.} See § In curvilinear coordinates for further discussion. The following properties can all be derived from the ordinary differentiation rules of calculus. Most importantly, the divergence is a linear operator: for all vector fields F and G and all real numbers a and b, div⁡(aF + bG) = a div⁡F + b div⁡G.
There is a product rule of the following type: if φ is a scalar-valued function and F is a vector field, then div⁡(φF) = grad⁡φ ⋅ F + φ div⁡F, or in more suggestive notation ∇⋅(φF) = (∇φ)⋅F + φ(∇⋅F). Another product rule for the cross product of two vector fields F and G in three dimensions involves the curl and reads as follows: div⁡(F×G) = (curl⁡F)⋅G − F⋅(curl⁡G), or ∇⋅(F×G) = (∇×F)⋅G − F⋅(∇×G). The Laplacian of a scalar field is the divergence of the field's gradient: Δφ = div⁡(grad⁡φ). The divergence of the curl of any vector field (in three dimensions) is equal to zero: ∇⋅(∇×F) = 0. If a vector field F with zero divergence is defined on a ball in R3, then there exists some vector field G on the ball with F = curl G. For regions in R3 more topologically complicated than this, the latter statement might be false (see Poincaré lemma). The degree of failure of the truth of the statement, measured by the homology of the chain complex, serves as a nice quantification of the complicatedness of the underlying region U. These are the beginnings and main motivations of de Rham cohomology. It can be shown that any stationary flux v(r) that is twice continuously differentiable in R3 and vanishes sufficiently fast for |r| → ∞ can be decomposed uniquely into an irrotational part E(r) and a source-free part B(r). Moreover, these parts are explicitly determined by the respective source densities (see above) and circulation densities (see the article Curl): For the irrotational part one has E(r) = −∇Φ(r), with Φ(r) = ∫R3 d3r′ div⁡v(r′) / (4π|r − r′|). The source-free part, B, can be similarly written: one only has to replace the scalar potential Φ(r) by a vector potential A(r) and the terms −∇Φ by +∇ × A, and the source density div v by the circulation density ∇ × v. This "decomposition theorem" is a by-product of the stationary case of electrodynamics. It is a special case of the more general Helmholtz decomposition, which works in dimensions greater than three as well. The divergence of a vector field can be defined in any finite number n of dimensions. In a Euclidean coordinate system with coordinates x1, x2, ..., xn, define div⁡F = ∇⋅F = ∂F1/∂x1 + ∂F2/∂x2 + ⋯ + ∂Fn/∂xn. In the 1D case, F reduces to a regular function, and the divergence reduces to the derivative.
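Linearity and the scalar product rule described above can both be checked symbolically for arbitrary smooth fields; a sympy sketch (function names arbitrary):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b = sp.symbols('a b')
phi = sp.Function('phi')(x, y, z)
F = [sp.Function(n)(x, y, z) for n in ('F1', 'F2', 'F3')]
G = [sp.Function(n)(x, y, z) for n in ('G1', 'G2', 'G3')]

def div(w):
    return sum(sp.diff(w[i], c) for i, c in enumerate((x, y, z)))

def grad(s):
    return [sp.diff(s, c) for c in (x, y, z)]

# linearity: div(aF + bG) = a div F + b div G
lin = sp.expand(div([a*F[i] + b*G[i] for i in range(3)]) - (a*div(F) + b*div(G)))

# product rule: div(phi F) = grad(phi) . F + phi div F
prod = sp.expand(div([phi*F[i] for i in range(3)])
                 - (sum(grad(phi)[i]*F[i] for i in range(3)) + phi*div(F)))
print(lin, prod)  # both identically zero
```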
For any n, the divergence is a linear operator, and it satisfies the "product rule" ∇⋅(φF) = (∇φ)⋅F + φ(∇⋅F) for any scalar-valued function φ. One can express the divergence as a particular case of the exterior derivative, which takes a 2-form to a 3-form in R3. Define the current two-form as j = F1 dy∧dz + F2 dz∧dx + F3 dx∧dy. It measures the amount of "stuff" flowing through a surface per unit time in a "stuff fluid" of density ρ = 1 dx∧dy∧dz moving with local velocity F. Its exterior derivative dj is then given by dj = (∂F1/∂x + ∂F2/∂y + ∂F3/∂z) dx∧dy∧dz = (div⁡F) ρ, where ∧{\displaystyle \wedge } is the wedge product. Thus, the divergence of the vector field F can be expressed as: div⁡F = ⋆d⋆(F♭). Here the superscript ♭ is one of the two musical isomorphisms, and ⋆ is the Hodge star operator. When the divergence is written in this way, the operator ⋆d⋆{\displaystyle {\star }d{\star }} is referred to as the codifferential. Working with the current two-form and the exterior derivative is usually easier than working with the vector field and divergence, because unlike the divergence, the exterior derivative commutes with a change of (curvilinear) coordinate system. The appropriate expression is more complicated in curvilinear coordinates. The divergence of a vector field extends naturally to any differentiable manifold of dimension n that has a volume form (or density) μ, e.g. a Riemannian or Lorentzian manifold. Generalising the construction of a two-form for a vector field on R3, on such a manifold a vector field X defines an (n − 1)-form j = iXμ obtained by contracting X with μ. The divergence is then the function defined by dj = (div⁡X) μ. The divergence can be defined in terms of the Lie derivative as LXμ = (div⁡X) μ. This means that the divergence measures the rate of expansion of a unit of volume (a volume element) as it flows with the vector field. On a pseudo-Riemannian manifold, the divergence with respect to the volume can be expressed in terms of the Levi-Civita connection ∇: div⁡X = ∇⋅X = ∇aXa, where the second expression is the contraction of the vector field valued 1-form ∇X with itself and the last expression is the traditional coordinate expression from Ricci calculus.
An equivalent expression without using a connection is div⁡(X) = (1/√|det⁡g|) ∂a(√|det⁡g| Xa), where g is the metric and ∂a{\displaystyle \partial _{a}} denotes the partial derivative with respect to coordinate xa. The square-root of the (absolute value of the determinant of the) metric appears because the divergence must be written with the correct conception of the volume. In curvilinear coordinates, the basis vectors are no longer orthonormal; the determinant encodes the correct idea of volume in this case. It appears twice here: once, so that the Xa{\displaystyle X^{a}} can be transformed into "flat space" (where coordinates are actually orthonormal), and once again so that ∂a{\displaystyle \partial _{a}} is also transformed into "flat space", so that finally, the "ordinary" divergence can be written with the "ordinary" concept of volume in flat space (i.e. unit volume, i.e. one, i.e. not written down). The square-root appears in the denominator, because the derivative transforms in the opposite way (contravariantly) to the vector (which is covariant). This idea of getting to a "flat coordinate system" where local computations can be done in a conventional way is called a vielbein. A different way to see this is to note that the divergence is the codifferential in disguise. That is, the divergence corresponds to the expression ⋆d⋆{\displaystyle \star d\star } with d{\displaystyle d} the differential and ⋆{\displaystyle \star } the Hodge star. The Hodge star, by its construction, causes the volume form to appear in all of the right places. Divergence can also be generalised to tensors. In Einstein notation, the divergence of a contravariant vector Fμ is given by div⁡F = ∇μFμ, where ∇μ denotes the covariant derivative. In this general setting, the correct formulation of the divergence is to recognize that it is a codifferential; the appropriate properties follow from there.
Equivalently, some authors define the divergence of a mixed tensor by using the musical isomorphism ♯: if T is a (p, q)-tensor (p for the contravariant vector and q for the covariant one), then we define the divergence of T to be the (p, q − 1)-tensor (div⁡T)(Y1, …, Yq−1) = trace( X ↦ ♯(∇T)(X, ⋅, Y1, …, Yq−1) ); that is, we take the trace over the first two covariant indices of the covariant derivative.[a] The ♯{\displaystyle \sharp } symbol refers to the musical isomorphism.
https://en.wikipedia.org/wiki/Divergence
Indifferential geometry, thefour-gradient(or4-gradient)∂{\displaystyle {\boldsymbol {\partial }}}is thefour-vectoranalogue of thegradient∇→{\displaystyle {\vec {\boldsymbol {\nabla }}}}fromvector calculus. Inspecial relativityand inquantum mechanics, the four-gradient is used to define the properties and relations between the various physical four-vectors andtensors. This article uses the(+ − − −)metric signature. SR and GR are abbreviations forspecial relativityandgeneral relativityrespectively. c{\displaystyle c}indicates thespeed of lightin vacuum. ημν=diag⁡[1,−1,−1,−1]{\displaystyle \eta _{\mu \nu }=\operatorname {diag} [1,-1,-1,-1]}is the flatspacetimemetricof SR. There are alternate ways of writing four-vector expressions in physics: The Latin tensor index ranges in{1, 2, 3},and represents a 3-space vector, e.g.Ai=(a1,a2,a3)=a→{\displaystyle A^{i}=\left(a^{1},a^{2},a^{3}\right)={\vec {\mathbf {a} }}}. The Greek tensor index ranges in{0, 1, 2, 3},and represents a 4-vector, e.g.Aμ=(a0,a1,a2,a3)=A{\displaystyle A^{\mu }=\left(a^{0},a^{1},a^{2},a^{3}\right)=\mathbf {A} }. In SR physics, one typically uses a concise blend, e.g.A=(a0,a→){\displaystyle \mathbf {A} =\left(a^{0},{\vec {\mathbf {a} }}\right)}, wherea0{\displaystyle a^{0}}represents the temporal component anda→{\displaystyle {\vec {\mathbf {a} }}}represents the spatial 3-component. Tensors in SR are typically 4D(m,n){\displaystyle (m,n)}-tensors, withm{\displaystyle m}upper indices andn{\displaystyle n}lower indices, with the 4D indicating 4 dimensions = the number of values each index can take. 
The tensor contraction used in theMinkowski metriccan go to either side (seeEinstein notation):[1]: 56, 151–152, 158–161A⋅B=AμημνBν=AνBν=AμBμ=∑μ=03aμbμ=a0b0−∑i=13aibi=a0b0−a→⋅b→{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{\mu }\eta _{\mu \nu }B^{\nu }=A_{\nu }B^{\nu }=A^{\mu }B_{\mu }=\sum _{\mu =0}^{3}a^{\mu }b_{\mu }=a^{0}b^{0}-\sum _{i=1}^{3}a^{i}b^{i}=a^{0}b^{0}-{\vec {\mathbf {a} }}\cdot {\vec {\mathbf {b} }}} The 4-gradient covariant components compactly written infour-vectorandRicci calculusnotation are:[2][3]: 16∂∂Xμ=(∂0,∂1,∂2,∂3)=(∂0,∂i)=(1c∂∂t,∇→)=(∂tc,∇→)=(∂tc,∂x,∂y,∂z)=∂μ=,μ{\displaystyle {\dfrac {\partial }{\partial X^{\mu }}}=\left(\partial _{0},\partial _{1},\partial _{2},\partial _{3}\right)=\left(\partial _{0},\partial _{i}\right)=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},{\vec {\nabla }}\right)=\left({\frac {\partial _{t}}{c}},{\vec {\nabla }}\right)=\left({\frac {\partial _{t}}{c}},\partial _{x},\partial _{y},\partial _{z}\right)=\partial _{\mu }={}_{,\mu }} Thecommain the last part above,μ{\displaystyle {}_{,\mu }}implies thepartial differentiationwith respect to 4-positionXμ{\displaystyle X^{\mu }}. The contravariant components are:[2][3]: 16∂=∂α=ηαβ∂β=(∂0,∂1,∂2,∂3)=(∂0,∂i)=(1c∂∂t,−∇→)=(∂tc,−∇→)=(∂tc,−∂x,−∂y,−∂z){\displaystyle {\boldsymbol {\partial }}=\partial ^{\alpha }=\eta ^{\alpha \beta }\partial _{\beta }=\left(\partial ^{0},\partial ^{1},\partial ^{2},\partial ^{3}\right)=\left(\partial ^{0},\partial ^{i}\right)=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},-{\vec {\nabla }}\right)=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)=\left({\frac {\partial _{t}}{c}},-\partial _{x},-\partial _{y},-\partial _{z}\right)} Alternative symbols to∂α{\displaystyle \partial _{\alpha }}are◻{\displaystyle \Box }andD(although◻{\displaystyle \Box }can also signify∂μ∂μ{\displaystyle \partial ^{\mu }\partial _{\mu }}as thed'Alembert operator). 
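Numerically, the Minkowski contraction reduces to a matrix sandwich A·B = Aᵀ η B; a small numpy sketch with arbitrary sample 4-vectors:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # (+,-,-,-) Minkowski metric of SR

def minkowski_dot(A, B):
    """A . B = A^mu eta_{mu nu} B^nu = a0*b0 - a.b"""
    return A @ eta @ B

A = np.array([2.0, 1.0, -3.0, 0.5])
B = np.array([1.0, 4.0, 0.0, 2.0])

lhs = minkowski_dot(A, B)
rhs = A[0]*B[0] - np.dot(A[1:], B[1:])  # a0*b0 minus the spatial dot product
print(lhs, rhs)  # -3.0 -3.0
```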
In GR, one must use the more general metric tensor gαβ{\displaystyle g^{\alpha \beta }} and the tensor covariant derivative ∇μ=;μ{\displaystyle \nabla _{\mu }={}_{;\mu }} (not to be confused with the vector 3-gradient ∇→{\displaystyle {\vec {\nabla }}}). The covariant derivative ∇ν{\displaystyle \nabla _{\nu }} incorporates the 4-gradient ∂ν{\displaystyle \partial _{\nu }} plus spacetime curvature effects via the Christoffel symbols Γμσν{\displaystyle \Gamma ^{\mu }{}_{\sigma \nu }}. The strong equivalence principle can be stated as:[4]: 184 "Any physical law which can be expressed in tensor notation in SR has exactly the same form in a locally inertial frame of a curved spacetime." The 4-gradient commas (,) in SR are simply changed to covariant derivative semi-colons (;) in GR, with the connection between the two using Christoffel symbols. This is known in relativity physics as the "comma to semi-colon rule". So, for example, if Tμν,μ=0{\displaystyle T^{\mu \nu }{}_{,\mu }=0} in SR, then Tμν;μ=0{\displaystyle T^{\mu \nu }{}_{;\mu }=0} in GR.
On a (1,0)-tensor or 4-vector this would be:[4]: 136–139∇βVα=∂βVα+VμΓαμβVα;β=Vα,β+VμΓαμβ{\displaystyle {\begin{aligned}\nabla _{\beta }V^{\alpha }&=\partial _{\beta }V^{\alpha }+V^{\mu }\Gamma ^{\alpha }{}_{\mu \beta }\\[0.1ex]V^{\alpha }{}_{;\beta }&=V^{\alpha }{}_{,\beta }+V^{\mu }\Gamma ^{\alpha }{}_{\mu \beta }\end{aligned}}} On a (2,0)-tensor this would be:∇νTμν=∂νTμν+ΓμσνTσν+ΓνσνTμσTμν;ν=Tμν,ν+ΓμσνTσν+ΓνσνTμσ{\displaystyle {\begin{aligned}\nabla _{\nu }T^{\mu \nu }&=\partial _{\nu }T^{\mu \nu }+\Gamma ^{\mu }{}_{\sigma \nu }T^{\sigma \nu }+\Gamma ^{\nu }{}_{\sigma \nu }T^{\mu \sigma }\\T^{\mu \nu }{}_{;\nu }&=T^{\mu \nu }{}_{,\nu }+\Gamma ^{\mu }{}_{\sigma \nu }T^{\sigma \nu }+\Gamma ^{\nu }{}_{\sigma \nu }T^{\mu \sigma }\end{aligned}}} The 4-gradient is used in a number of different ways inspecial relativity(SR): Throughout this article the formulas are all correct for the flat spacetimeMinkowski coordinatesof SR, but have to be modified for the more general curved space coordinates ofgeneral relativity(GR). Divergenceis avector operatorthat produces a signed scalar field giving the quantity of avector field'ssourceat each point. Note that in this metric signature [+,−,−,−] the 4-Gradient has a negative spatial component. It gets canceled when taking the 4D dot product since the Minkowski Metric is Diagonal[+1,−1,−1,−1]. 
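The covariant-derivative pattern ∂νVν + ΓνσνVσ can be exercised in a simple curved-coordinate setting: for the flat 2D plane in polar coordinates (our illustrative example, not from the article), it agrees with the familiar (1/√g) ∂ν(√g Vν) expression. A sympy sketch:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = (r, th)
g = sp.Matrix([[1, 0], [0, r**2]])  # 2D polar metric
ginv = g.inv()

# Christoffel symbols Gamma^m_{ij} = 1/2 g^{mk} (d_i g_{kj} + d_j g_{ki} - d_k g_{ij})
def gamma(m, i, j):
    return sp.Rational(1, 2) * sum(
        ginv[m, k] * (sp.diff(g[k, j], coords[i]) + sp.diff(g[k, i], coords[j])
                      - sp.diff(g[i, j], coords[k]))
        for k in range(2))

V = [sp.Function('Vr')(r, th), sp.Function('Vth')(r, th)]  # contravariant components

# covariant divergence: d_nu V^nu + Gamma^nu_{sigma nu} V^sigma
cov_div = sum(sp.diff(V[n], coords[n]) for n in range(2)) \
        + sum(gamma(n, s, n) * V[s] for n in range(2) for s in range(2))

sqrtg = sp.sqrt(g.det())  # = r
weyl = sp.expand(sum(sp.diff(sqrtg * V[n], coords[n]) for n in range(2)) / sqrtg)
print(sp.simplify(cov_div - weyl))  # 0: the two expressions agree
```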
The 4-divergence of the4-positionXμ=(ct,x→){\displaystyle X^{\mu }=\left(ct,{\vec {\mathbf {x} }}\right)}gives thedimensionofspacetime:∂⋅X=∂μημνXν=∂νXν=(∂tc,−∇→)⋅(ct,x→)=∂tc(ct)+∇→⋅x→=(∂tt)+(∂xx+∂yy+∂zz)=(1)+(3)=4{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {X} =\partial ^{\mu }\eta _{\mu \nu }X^{\nu }=\partial _{\nu }X^{\nu }=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\cdot (ct,{\vec {x}})={\frac {\partial _{t}}{c}}(ct)+{\vec {\nabla }}\cdot {\vec {x}}=(\partial _{t}t)+(\partial _{x}x+\partial _{y}y+\partial _{z}z)=(1)+(3)=4} The 4-divergence of the4-current densityJμ=(ρc,j→)=ρoUμ=ρoγ(c,u→)=(ρc,ρu→){\displaystyle J^{\mu }=\left(\rho c,{\vec {\mathbf {j} }}\right)=\rho _{o}U^{\mu }=\rho _{o}\gamma \left(c,{\vec {\mathbf {u} }}\right)=\left(\rho c,\rho {\vec {\mathbf {u} }}\right)}gives aconservation law– theconservation of charge:[1]: 103–107∂⋅J=∂μημνJν=∂νJν=(∂tc,−∇→)⋅(ρc,j→)=∂tc(ρc)+∇→⋅j→=∂tρ+∇→⋅j→=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {J} =\partial ^{\mu }\eta _{\mu \nu }J^{\nu }=\partial _{\nu }J^{\nu }=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\cdot (\rho c,{\vec {j}})={\frac {\partial _{t}}{c}}(\rho c)+{\vec {\nabla }}\cdot {\vec {j}}=\partial _{t}\rho +{\vec {\nabla }}\cdot {\vec {j}}=0} This means that the time rate of change of the charge density must equal the negative spatial divergence of the current density∂tρ=−∇→⋅j→{\displaystyle \partial _{t}\rho =-{\vec {\nabla }}\cdot {\vec {j}}}. In other words, the charge inside a box cannot just change arbitrarily, it must enter and leave the box via a current. This is acontinuity equation. 
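The first computation above, ∂·X = 4, is short enough to script directly (sympy, Cartesian Minkowski coordinates):

```python
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c', positive=True)
X = [c*t, x, y, z]                        # 4-position X^mu = (ct, x, y, z)
partial = [lambda f: sp.diff(f, t) / c,   # d_0 = (1/c) d/dt
           lambda f: sp.diff(f, x),
           lambda f: sp.diff(f, y),
           lambda f: sp.diff(f, z)]

four_div_X = sum(partial[mu](X[mu]) for mu in range(4))
print(four_div_X)  # 4, the dimension of spacetime
```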
The 4-divergence of the4-number flux(4-dust)Nμ=(nc,n→)=noUμ=noγ(c,u→)=(nc,nu→){\displaystyle N^{\mu }=\left(nc,{\vec {\mathbf {n} }}\right)=n_{o}U^{\mu }=n_{o}\gamma \left(c,{\vec {\mathbf {u} }}\right)=\left(nc,n{\vec {\mathbf {u} }}\right)}is used in particle conservation:[4]: 90–110∂⋅N=∂μημνNν=∂νNν=(∂tc,−∇→)⋅(nc,nu→)=∂tc(nc)+∇→⋅nu→=∂tn+∇→⋅nu→=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {N} =\partial ^{\mu }\eta _{\mu \nu }N^{\nu }=\partial _{\nu }N^{\nu }=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\cdot \left(nc,n{\vec {\mathbf {u} }}\right)={\frac {\partial _{t}}{c}}\left(nc\right)+{\vec {\nabla }}\cdot n{\vec {\mathbf {u} }}=\partial _{t}n+{\vec {\nabla }}\cdot n{\vec {\mathbf {u} }}=0} This is aconservation lawfor the particle number density, typically something like baryon number density. The 4-divergence of theelectromagnetic 4-potentialAμ=(ϕc,a→){\textstyle A^{\mu }=\left({\frac {\phi }{c}},{\vec {\mathbf {a} }}\right)}is used in theLorenz gauge condition:[1]: 105–107∂⋅A=∂μημνAν=∂νAν=(∂tc,−∇→)⋅(ϕc,a→)=∂tc(ϕc)+∇→⋅a→=∂tϕc2+∇→⋅a→=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {A} =\partial ^{\mu }\eta _{\mu \nu }A^{\nu }=\partial _{\nu }A^{\nu }=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\cdot \left({\frac {\phi }{c}},{\vec {a}}\right)={\frac {\partial _{t}}{c}}\left({\frac {\phi }{c}}\right)+{\vec {\nabla }}\cdot {\vec {a}}={\frac {\partial _{t}\phi }{c^{2}}}+{\vec {\nabla }}\cdot {\vec {a}}=0} This is the equivalent of aconservation lawfor the EM 4-potential. The 4-divergence of the transverse traceless 4D (2,0)-tensorhTTμν{\displaystyle h_{TT}^{\mu \nu }}representing gravitational radiation in the weak-field limit (i.e. freely propagating far from the source). The transverse condition∂⋅hTTμν=∂μhTTμν=0{\displaystyle {\boldsymbol {\partial }}\cdot h_{TT}^{\mu \nu }=\partial _{\mu }h_{TT}^{\mu \nu }=0}is the equivalent of a conservation equation for freely propagating gravitational waves. 
The 4-divergence of thestress–energy tensorTμν{\displaystyle T^{\mu \nu }}as the conservedNoether currentassociated withspacetimetranslations, gives four conservation laws in SR:[4]: 101–106 Theconservation of energy(temporal direction) and theconservation of linear momentum(3 separate spatial directions).∂⋅Tμν=∂νTμν=Tμν,ν=0μ=(0,0,0,0){\displaystyle {\boldsymbol {\partial }}\cdot T^{\mu \nu }=\partial _{\nu }T^{\mu \nu }=T^{\mu \nu }{}_{,\nu }=0^{\mu }=(0,0,0,0)} It is often written as:∂νTμν=Tμν,ν=0{\displaystyle \partial _{\nu }T^{\mu \nu }=T^{\mu \nu }{}_{,\nu }=0}where it is understood that the single zero is actually a 4-vector zero0μ=(0,0,0,0){\displaystyle 0^{\mu }=(0,0,0,0)}. When the conservation of the stress–energy tensor(∂νTμν=0μ{\displaystyle \partial _{\nu }T^{\mu \nu }=0^{\mu }})for aperfect fluidis combined with the conservation of particle number density (∂⋅N=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {N} =0}), both utilizing the 4-gradient, one can derive therelativistic Euler equations, which influid mechanicsandastrophysicsare a generalization of theEuler equationsthat account for the effects ofspecial relativity. These equations reduce to the classical Euler equations if the fluid 3-space velocity ismuch lessthan the speed of light, the pressure is much less than theenergy density, and the latter is dominated by the rest mass density. In flat spacetime and using Cartesian coordinates, if one combines this with the symmetry of the stress–energy tensor, one can show thatangular momentum(relativistic angular momentum) is also conserved:∂ν(xαTμν−xμTαν)=(xαTμν−xμTαν),ν=0αμ{\displaystyle \partial _{\nu }\left(x^{\alpha }T^{\mu \nu }-x^{\mu }T^{\alpha \nu }\right)=\left(x^{\alpha }T^{\mu \nu }-x^{\mu }T^{\alpha \nu }\right)_{,\nu }=0^{\alpha \mu }}where this zero is actually a (2,0)-tensor zero. TheJacobian matrixis thematrixof all first-orderpartial derivativesof avector-valued function. 
The 4-gradient∂μ{\displaystyle \partial ^{\mu }}acting on the4-positionXν{\displaystyle X^{\nu }}gives the SRMinkowski spacemetricημν{\displaystyle \eta ^{\mu \nu }}:[3]: 16∂[X]=∂μ[Xν]=Xν,μ=(∂tc,−∇→)[(ct,x→)]=(∂tc,−∂x,−∂y,−∂z)[(ct,x,y,z)],=[∂tcct∂tcx∂tcy∂tcz−∂xct−∂xx−∂xy−∂xz−∂yct−∂yx−∂yy−∂yz−∂zct−∂zx−∂zy−∂zz]=[10000−10000−10000−1]=diag⁡[1,−1,−1,−1]=ημν.{\displaystyle {\begin{aligned}{\boldsymbol {\partial }}[\mathbf {X} ]=\partial ^{\mu }[X^{\nu }]=X^{\nu _{,}\mu }&=\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)\left[\left(ct,{\vec {x}}\right)\right]=\left({\frac {\partial _{t}}{c}},-\partial _{x},-\partial _{y},-\partial _{z}\right)[(ct,x,y,z)],\\[3pt]&={\begin{bmatrix}{\frac {\partial _{t}}{c}}ct&{\frac {\partial _{t}}{c}}x&{\frac {\partial _{t}}{c}}y&{\frac {\partial _{t}}{c}}z\\-\partial _{x}ct&-\partial _{x}x&-\partial _{x}y&-\partial _{x}z\\-\partial _{y}ct&-\partial _{y}x&-\partial _{y}y&-\partial _{y}z\\-\partial _{z}ct&-\partial _{z}x&-\partial _{z}y&-\partial _{z}z\end{bmatrix}}={\begin{bmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix}}\\[3pt]&=\operatorname {diag} [1,-1,-1,-1]=\eta ^{\mu \nu }.\end{aligned}}} For the Minkowski metric, the components[ημμ]=1/[ημμ]{\displaystyle \left[\eta ^{\mu \mu }\right]=1/\left[\eta _{\mu \mu }\right]}(μ{\displaystyle \mu }not summed), with non-diagonal components all zero. For the Cartesian Minkowski Metric, this givesημν=ημν=diag⁡[1,−1,−1,−1]{\displaystyle \eta ^{\mu \nu }=\eta _{\mu \nu }=\operatorname {diag} [1,-1,-1,-1]}. Generally,ημν=δμν=diag⁡[1,1,1,1]{\displaystyle \eta _{\mu }^{\nu }=\delta _{\mu }^{\nu }=\operatorname {diag} [1,1,1,1]}, whereδμν{\displaystyle \delta _{\mu }^{\nu }}is the 4DKronecker delta. 
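This identity can be checked mechanically: differentiating each component of the 4-position with the contravariant 4-gradient reproduces diag[1, −1, −1, −1]. A short SymPy sketch, treating ct as a single coordinate so that (1/c)∂t acts as ∂/∂(ct):

```python
import sympy as sp

ct, x, y, z = sp.symbols('ct x y z')
X = [ct, x, y, z]  # 4-position X^nu = (ct, x, y, z)

# contravariant 4-gradient components: (d/d(ct), -d/dx, -d/dy, -d/dz)
signs = [1, -1, -1, -1]
eta = sp.Matrix(4, 4, lambda mu, nu: signs[mu]*sp.diff(X[nu], X[mu]))
print(eta == sp.diag(1, -1, -1, -1))  # -> True
```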
The Lorentz transformation is written in tensor form as[4]: 69Xμ′=Λνμ′Xν{\displaystyle X^{\mu '}=\Lambda _{\nu }^{~\mu '}X^{\nu }}and sinceΛνμ′{\displaystyle \Lambda _{\nu }^{\mu '}}are just constants, then∂Xμ′∂Xν=Λνμ′{\displaystyle {\dfrac {\partial X^{\mu '}}{\partial X^{\nu }}}=\Lambda _{\nu }^{\mu '}} Thus, by definition of the 4-gradient∂ν[Xμ′]=(∂∂Xν)[Xμ′]=∂Xμ′∂Xν=Λνμ′{\displaystyle \partial _{\nu }\left[X^{\mu '}\right]=\left({\dfrac {\partial }{\partial X^{\nu }}}\right)\left[X^{\mu '}\right]={\dfrac {\partial X^{\mu '}}{\partial X^{\nu }}}=\Lambda _{\nu }^{\mu '}} This identity is fundamental. Components of the 4-gradient transform according to the inverse of the components of 4-vectors. So the 4-gradient is the "archetypal" one-form. The scalar product of4-velocityUμ{\displaystyle U^{\mu }}with the 4-gradient gives thetotal derivativewith respect toproper timeddτ{\displaystyle {\frac {d}{d\tau }}}:[1]: 58–59U⋅∂=Uμημν∂ν=γ(c,u→)⋅(∂tc,−∇→)=γ(c∂tc+u→⋅∇→)=γ(∂t+dxdt∂x+dydt∂y+dzdt∂z)=γddt=ddτddτ=dXμdXμddτ=dXμdτddXμ=Uμ∂μ=U⋅∂{\displaystyle {\begin{aligned}\mathbf {U} \cdot {\boldsymbol {\partial }}&=U^{\mu }\eta _{\mu \nu }\partial ^{\nu }=\gamma \left(c,{\vec {u}}\right)\cdot \left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)=\gamma \left(c{\frac {\partial _{t}}{c}}+{\vec {u}}\cdot {\vec {\nabla }}\right)=\gamma \left(\partial _{t}+{\frac {dx}{dt}}\partial _{x}+{\frac {dy}{dt}}\partial _{y}+{\frac {dz}{dt}}\partial _{z}\right)=\gamma {\frac {d}{dt}}={\frac {d}{d\tau }}\\{\frac {d}{d\tau }}&={\frac {dX^{\mu }}{dX^{\mu }}}{\frac {d}{d\tau }}={\frac {dX^{\mu }}{d\tau }}{\frac {d}{dX^{\mu }}}=U^{\mu }\partial _{\mu }=\mathbf {U} \cdot {\boldsymbol {\partial }}\end{aligned}}} The fact thatU⋅∂{\displaystyle \mathbf {U} \cdot {\boldsymbol {\partial }}}is aLorentz scalar invariantshows that thetotal derivativewith respect toproper timeddτ{\displaystyle {\frac {d}{d\tau }}}is likewise a Lorentz scalar invariant. 
So, for example, the4-velocityUμ{\displaystyle U^{\mu }}is the derivative of the4-positionXμ{\displaystyle X^{\mu }}with respect to proper time:ddτX=(U⋅∂)X=U⋅∂[X]=Uα⋅ημν=Uαηανημν=Uαδαμ=Uμ=U{\displaystyle {\frac {d}{d\tau }}\mathbf {X} =(\mathbf {U} \cdot {\boldsymbol {\partial }})\mathbf {X} =\mathbf {U} \cdot {\boldsymbol {\partial }}[\mathbf {X} ]=U^{\alpha }\cdot \eta ^{\mu \nu }=U^{\alpha }\eta _{\alpha \nu }\eta ^{\mu \nu }=U^{\alpha }\delta _{\alpha }^{\mu }=U^{\mu }=\mathbf {U} }orddτX=γddtX=γddt(ct,x→)=γ(ddtct,ddtx→)=γ(c,u→)=U{\displaystyle {\frac {d}{d\tau }}\mathbf {X} =\gamma {\frac {d}{dt}}\mathbf {X} =\gamma {\frac {d}{dt}}\left(ct,{\vec {x}}\right)=\gamma \left({\frac {d}{dt}}ct,{\frac {d}{dt}}{\vec {x}}\right)=\gamma \left(c,{\vec {u}}\right)=\mathbf {U} } Another example, the4-accelerationAμ{\displaystyle A^{\mu }}is the proper-time derivative of the4-velocityUμ{\displaystyle U^{\mu }}:ddτU=(U⋅∂)U=U⋅∂[U]=Uαηαμ∂μ[Uν]=Uαηαμ[∂tcγc∂tcγu→−∇→γc−∇→γu→]=Uα[∂tcγc00∇→γu→]=γ(c∂tcγc,u→⋅∇γu→)=γ(c∂tγ,ddt[γu→])=γ(cγ˙,γ˙u→+γu→˙)=A{\displaystyle {\begin{aligned}{\frac {d}{d\tau }}\mathbf {U} &=(\mathbf {U} \cdot {\boldsymbol {\partial }})\mathbf {U} =\mathbf {U} \cdot {\boldsymbol {\partial }}[\mathbf {U} ]=U^{\alpha }\eta _{\alpha \mu }\partial ^{\mu }\left[U^{\nu }\right]\\&=U^{\alpha }\eta _{\alpha \mu }{\begin{bmatrix}{\frac {\partial _{t}}{c}}\gamma c&{\frac {\partial _{t}}{c}}\gamma {\vec {u}}\\-{\vec {\nabla }}\gamma c&-{\vec {\nabla }}\gamma {\vec {u}}\end{bmatrix}}=U^{\alpha }{\begin{bmatrix}\ {\frac {\partial _{t}}{c}}\gamma c&0\\0&{\vec {\nabla }}\gamma {\vec {u}}\end{bmatrix}}\\[3pt]&=\gamma \left(c{\frac {\partial _{t}}{c}}\gamma c,{\vec {u}}\cdot \nabla \gamma {\vec {u}}\right)=\gamma \left(c\partial _{t}\gamma ,{\frac {d}{dt}}\left[\gamma {\vec {u}}\right]\right)=\gamma \left(c{\dot {\gamma }},{\dot {\gamma }}{\vec {u}}+\gamma {\dot {\vec {u}}}\right)=\mathbf {A} \end{aligned}}} orddτU=γddt(γc,γu→)=γ(ddt[γc],ddt[γu→])=γ(cγ˙,γ˙u→+γu→˙)=A{\displaystyle 
{\frac {d}{d\tau }}\mathbf {U} =\gamma {\frac {d}{dt}}(\gamma c,\gamma {\vec {u}})=\gamma \left({\frac {d}{dt}}[\gamma c],{\frac {d}{dt}}[\gamma {\vec {u}}]\right)=\gamma (c{\dot {\gamma }},{\dot {\gamma }}{\vec {u}}+\gamma {\dot {\vec {u}}})=\mathbf {A} } The Faradayelectromagnetic tensorFμν{\displaystyle F^{\mu \nu }}is a mathematical object that describes the electromagnetic field inspacetimeof a physical system.[1]: 101–128[5]:314[3]: 17–18[6]: 29–30[7]: 4 Applying the 4-gradient to make an antisymmetric tensor, one gets:Fμν=∂μAν−∂νAμ=[0−Ex/c−Ey/c−Ez/cEx/c0−BzByEy/cBz0−BxEz/c−ByBx0]{\displaystyle F^{\mu \nu }=\partial ^{\mu }A^{\nu }-\partial ^{\nu }A^{\mu }={\begin{bmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}}where: By applying the 4-gradient again, and defining the4-current densityasJβ=J=(cρ,j→){\displaystyle J^{\beta }=\mathbf {J} =\left(c\rho ,{\vec {\mathbf {j} }}\right)}one can derive the tensor form of theMaxwell equations:∂αFαβ=μoJβ{\displaystyle \partial _{\alpha }F^{\alpha \beta }=\mu _{o}J^{\beta }}∂γFαβ+∂αFβγ+∂βFγα=0αβγ{\displaystyle \partial _{\gamma }F_{\alpha \beta }+\partial _{\alpha }F_{\beta \gamma }+\partial _{\beta }F_{\gamma \alpha }=0_{\alpha \beta \gamma }}where the second line is a version of theBianchi identity(Jacobi identity). Awavevectoris avectorwhich helps describe awave. 
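The antisymmetric construction Fμν = ∂μAν − ∂νAμ above can be verified symbolically. A minimal SymPy sketch (the potential components φ, a_x, a_y, a_z are arbitrary functions; only the antisymmetry and one electric-field entry are checked):

```python
import sympy as sp

t, x, y, z, c = sp.symbols('t x y z c', positive=True)
coords = (t, x, y, z)
phi = sp.Function('phi')(*coords)
ax_, ay_, az_ = [sp.Function(s)(*coords) for s in ('a_x', 'a_y', 'a_z')]
A = [phi/c, ax_, ay_, az_]  # A^mu = (phi/c, a)

# contravariant 4-gradient: d^mu = ((1/c) d_t, -d_x, -d_y, -d_z)
def d(mu, expr):
    return sp.diff(expr, t)/c if mu == 0 else -sp.diff(expr, coords[mu])

F = sp.Matrix(4, 4, lambda m, n: d(m, A[n]) - d(n, A[m]))

assert sp.simplify(F + F.T) == sp.zeros(4, 4)   # antisymmetry F^{mu nu} = -F^{nu mu}

# F^{01} = -Ex/c, with Ex = -d_x phi - d_t a_x
Ex = -sp.diff(phi, x) - sp.diff(ax_, t)
print(sp.simplify(F[0, 1] + Ex/c))  # -> 0
```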
Like any vector, it has a magnitude and direction, both of which are important: its magnitude is either the wavenumber or angular wavenumber of the wave (inversely proportional to the wavelength), and its direction is ordinarily the direction of wave propagation. The 4-wavevector Kμ{\displaystyle K^{\mu }} is the 4-gradient of the negative phase Φ{\displaystyle \Phi } (or the negative 4-gradient of the phase) of a wave in Minkowski space:[6]: 387 Kμ=K=(ωc,k→)=∂[−Φ]=−∂[Φ]{\displaystyle K^{\mu }=\mathbf {K} =\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)={\boldsymbol {\partial }}[-\Phi ]=-{\boldsymbol {\partial }}[\Phi ]} This is mathematically equivalent to the definition of the phase of a wave (or more specifically a plane wave): K⋅X=ωt−k→⋅x→=−Φ{\displaystyle \mathbf {K} \cdot \mathbf {X} =\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}=-\Phi } where the 4-position X=(ct,x→){\displaystyle \mathbf {X} =\left(ct,{\vec {\mathbf {x} }}\right)}, ω{\displaystyle \omega } is the temporal angular frequency, k→{\displaystyle {\vec {\mathbf {k} }}} is the spatial 3-space wavevector, and Φ{\displaystyle \Phi } is the Lorentz scalar invariant phase. 
∂[K⋅X]=∂[ωt−k→⋅x→]=(∂tc,−∇)[ωt−k→⋅x→]=(∂tc[ωt−k→⋅x→],−∇[ωt−k→⋅x→])=(∂tc[ωt],−∇[−k→⋅x→])=(ωc,k→)=K{\displaystyle \partial [\mathbf {K} \cdot \mathbf {X} ]=\partial \left[\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right]=\left({\frac {\partial _{t}}{c}},-\nabla \right)\left[\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right]=\left({\frac {\partial _{t}}{c}}\left[\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right],-\nabla \left[\omega t-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right]\right)=\left({\frac {\partial _{t}}{c}}[\omega t],-\nabla \left[-{\vec {\mathbf {k} }}\cdot {\vec {\mathbf {x} }}\right]\right)=\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)=\mathbf {K} }with the assumption that the plane waveω{\displaystyle \omega }andk→{\displaystyle {\vec {\mathbf {k} }}}are not explicit functions oft{\displaystyle t}orx→{\displaystyle {\vec {\mathbf {x} }}}. The explicit form of an SR plane waveΨn(X){\displaystyle \Psi _{n}(\mathbf {X} )}can be written as:[7]: 9 Ψn(X)=Ane−i(Kn⋅X)=Anei(Φn){\displaystyle \Psi _{n}(\mathbf {X} )=A_{n}e^{-i(\mathbf {K_{n}} \cdot \mathbf {X} )}=A_{n}e^{i(\Phi _{n})}}whereAn{\displaystyle A_{n}}is a (possiblycomplex) amplitude. 
A general waveΨ(X){\displaystyle \Psi (\mathbf {X} )}would be thesuperpositionof multiple plane waves:Ψ(X)=∑n[Ψn(X)]=∑n[Ane−i(Kn⋅X)]=∑n[Anei(Φn)]{\displaystyle \Psi (\mathbf {X} )=\sum _{n}[\Psi _{n}(\mathbf {X} )]=\sum _{n}\left[A_{n}e^{-i(\mathbf {K_{n}} \cdot \mathbf {X} )}\right]=\sum _{n}\left[A_{n}e^{i(\Phi _{n})}\right]} Again using the 4-gradient,∂[Ψ(X)]=∂[Ae−i(K⋅X)]=−iK[Ae−i(K⋅X)]=−iK[Ψ(X)]{\displaystyle \partial [\Psi (\mathbf {X} )]=\partial \left[Ae^{-i(\mathbf {K} \cdot \mathbf {X} )}\right]=-i\mathbf {K} \left[Ae^{-i(\mathbf {K} \cdot \mathbf {X} )}\right]=-i\mathbf {K} [\Psi (\mathbf {X} )]}or∂=−iK{\displaystyle {\boldsymbol {\partial }}=-i\mathbf {K} }which is the 4-gradient version ofcomplex-valuedplane waves In special relativity, electromagnetism and wave theory, the d'Alembert operator, also called the d'Alembertian or the wave operator, is the Laplace operator of Minkowski space. The operator is named after French mathematician and physicist Jean le Rond d'Alembert. The square of∂{\displaystyle {\boldsymbol {\partial }}}is the 4-Laplacian, which is called thed'Alembert operator:[5]:300[3]: 17‒18[6]: 41[7]: 4 ∂⋅∂=∂μ⋅∂ν=∂μημν∂ν=∂ν∂ν=1c2∂2∂t2−∇→2=(∂tc)2−∇→2.{\displaystyle {\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}=\partial ^{\mu }\cdot \partial ^{\nu }=\partial ^{\mu }\eta _{\mu \nu }\partial ^{\nu }=\partial _{\nu }\partial ^{\nu }={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-{\vec {\nabla }}^{2}=\left({\frac {\partial _{t}}{c}}\right)^{2}-{\vec {\nabla }}^{2}.} As it is thedot productof two 4-vectors, the d'Alembertian is aLorentz invariantscalar. Occasionally, in analogy with the 3-dimensional notation, the symbols◻{\displaystyle \Box }and◻2{\displaystyle \Box ^{2}}are used for the 4-gradient and d'Alembertian respectively. More commonly however, the symbol◻{\displaystyle \Box }is reserved for the d'Alembertian. 
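Both the relation ∂ = −iK and the action of the d'Alembertian on a plane wave can be verified componentwise. A SymPy sketch in 1+1 dimensions (an illustrative reduction), with the frequency chosen on the mass shell ω²/c² − k² = (m0c/ℏ)²:

```python
import sympy as sp

t, x, c, m0, hbar, kx, A = sp.symbols('t x c m_0 hbar k_x A', positive=True)

# on-shell frequency: omega^2/c^2 - kx^2 = (m0 c / hbar)^2
omega = c*sp.sqrt(kx**2 + (m0*c/hbar)**2)
Psi = A*sp.exp(-sp.I*(omega*t - kx*x))   # plane wave, phase K.X = omega t - kx x

# d^mu = ((1/c) d_t, -d_x) acting on Psi pulls down -i K^mu = -i (omega/c, kx)
assert sp.simplify(sp.diff(Psi, t)/c + sp.I*(omega/c)*Psi) == 0
assert sp.simplify(-sp.diff(Psi, x) + sp.I*kx*Psi) == 0

# applying it twice: the d'Alembertian gives -(K.K) Psi = -(m0 c/hbar)^2 Psi,
# i.e. this plane wave solves the Klein-Gordon equation
box_Psi = sp.diff(Psi, t, 2)/c**2 - sp.diff(Psi, x, 2)
print(sp.simplify(box_Psi + (m0*c/hbar)**2 * Psi))  # -> 0
```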
Some examples of the 4-gradient as used in the d'Alembertian follow: In theKlein–Gordonrelativistic quantum wave equation for spin-0 particles (ex.Higgs boson):[(∂⋅∂)+(m0cℏ)2]ψ=[(∂t2c2−∇→2)+(m0cℏ)2]ψ=0{\displaystyle \left[({\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }})+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =\left[\left({\frac {\partial _{t}^{2}}{c^{2}}}-{\vec {\nabla }}^{2}\right)+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} In thewave equationfor theelectromagnetic field(usingLorenz gauge(∂⋅A)=(∂μAμ)=0{\displaystyle ({\boldsymbol {\partial }}\cdot \mathbf {A} )=\left(\partial _{\mu }A^{\mu }\right)=0}): where: In thewave equationof agravitational wave(using a similarLorenz gauge(∂μhTTμν)=0{\displaystyle \left(\partial _{\mu }h_{TT}^{\mu \nu }\right)=0})[6]: 274–322(∂⋅∂)hTTμν=0{\displaystyle ({\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }})h_{TT}^{\mu \nu }=0}wherehTTμν{\displaystyle h_{TT}^{\mu \nu }}is the transverse traceless 2-tensor representing gravitational radiation in the weak-field limit (i.e. freely propagating far from the source). Further conditions onhTTμν{\displaystyle h_{TT}^{\mu \nu }}are: In the 4-dimensional version ofGreen's function:(∂⋅∂)G[X−X′]=δ(4)[X−X′]{\displaystyle ({\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }})G\left[\mathbf {X} -\mathbf {X'} \right]=\delta ^{(4)}\left[\mathbf {X} -\mathbf {X'} \right]}where the 4DDelta functionis:δ(4)[X]=1(2π)4∫d4Ke−i(K⋅X){\displaystyle \delta ^{(4)}[\mathbf {X} ]={\frac {1}{(2\pi )^{4}}}\int d^{4}\mathbf {K} e^{-i(\mathbf {K} \cdot \mathbf {X} )}} Invector calculus, thedivergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a result that relates the flow (that is,flux) of avector fieldthrough asurfaceto the behavior of the vector field inside the surface. 
More precisely, the divergence theorem states that the outwardfluxof a vector field through a closed surface is equal to thevolume integralof thedivergenceover the region inside the surface. Intuitively, it states thatthe sum of all sources minus the sum of all sinks gives the net flow out of a region. In vector calculus, and more generally differential geometry,Stokes' theorem(also called the generalized Stokes' theorem) is a statement about the integration of differential forms on manifolds, which both simplifies and generalizes several theorems from vector calculus. ∫Ωd4X(∂μVμ)=∮∂ΩdS(VμNμ){\displaystyle \int _{\Omega }d^{4}X\left(\partial _{\mu }V^{\mu }\right)=\oint _{\partial \Omega }dS\left(V^{\mu }N_{\mu }\right)}or∫Ωd4X(∂⋅V)=∮∂ΩdS(V⋅N){\displaystyle \int _{\Omega }d^{4}X\left({\boldsymbol {\partial }}\cdot \mathbf {V} \right)=\oint _{\partial \Omega }dS\left(\mathbf {V} \cdot \mathbf {N} \right)}where TheHamilton–Jacobi equation(HJE) is a formulation of classical mechanics, equivalent to other formulations such asNewton's laws of motion,Lagrangian mechanicsandHamiltonian mechanics. The Hamilton–Jacobi equation is particularly useful in identifying conserved quantities for mechanical systems, which may be possible even when the mechanical problem itself cannot be solved completely. The HJE is also the only formulation of mechanics in which the motion of a particle can be represented as a wave. 
In this sense, the HJE fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the 18th century) of finding an analogy between the propagation of light and the motion of a particle The generalized relativistic momentumPT{\displaystyle \mathbf {P_{T}} }of a particle can be written as[1]: 93–96PT=P+qA{\displaystyle \mathbf {P_{T}} =\mathbf {P} +q\mathbf {A} }whereP=(Ec,p→){\displaystyle \mathbf {P} =\left({\frac {E}{c}},{\vec {\mathbf {p} }}\right)}andA=(ϕc,a→){\displaystyle \mathbf {A} =\left({\frac {\phi }{c}},{\vec {\mathbf {a} }}\right)} This is essentially the 4-total momentumPT=(ETc,pT→){\displaystyle \mathbf {P_{T}} =\left({\frac {E_{T}}{c}},{\vec {\mathbf {p_{T}} }}\right)}of the system; atest particlein afieldusing theminimal couplingrule. There is the inherent momentum of the particleP{\displaystyle \mathbf {P} }, plus momentum due to interaction with the EM 4-vector potentialA{\displaystyle \mathbf {A} }via the particle chargeq{\displaystyle q}. The relativisticHamilton–Jacobi equationis obtained by setting the total momentum equal to the negative 4-gradient of theactionS{\displaystyle S}.PT=−∂[S]=(ETc,pT→)=(Hc,pT→)=−∂[S]=−(∂tc,−∇→)[S]{\displaystyle \mathbf {P_{T}} =-{\boldsymbol {\partial }}[S]=\left({\frac {E_{T}}{c}},{\vec {\mathbf {p_{T}} }}\right)=\left({\frac {H}{c}},{\vec {\mathbf {p_{T}} }}\right)=-{\boldsymbol {\partial }}[S]=-\left({\frac {\partial _{t}}{c}},-{\vec {\boldsymbol {\nabla }}}\right)[S]} The temporal component gives:ET=H=−∂t[S]{\displaystyle E_{T}=H=-\partial _{t}[S]} The spatial components give:pT→=∇→[S]{\displaystyle {\vec {\mathbf {p_{T}} }}={\vec {\boldsymbol {\nabla }}}[S]} whereH{\displaystyle H}is the Hamiltonian. 
This is actually related to the 4-wavevector being equal the negative 4-gradient of the phase from above.Kμ=K=(ωc,k→)=−∂[Φ]{\displaystyle K^{\mu }=\mathbf {K} =\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)=-{\boldsymbol {\partial }}[\Phi ]} To get the HJE, one first uses the Lorentz scalar invariant rule on the 4-momentum:P⋅P=(m0c)2{\displaystyle \mathbf {P} \cdot \mathbf {P} =(m_{0}c)^{2}} But from theminimal couplingrule:P=PT−qA{\displaystyle \mathbf {P} =\mathbf {P_{T}} -q\mathbf {A} } So:(PT−qA)⋅(PT−qA)=(PT−qA)2=(m0c)2⇒(−∂[S]−qA)2=(m0c)2{\displaystyle {\begin{aligned}\left(\mathbf {P_{T}} -q\mathbf {A} \right)\cdot \left(\mathbf {P_{T}} -q\mathbf {A} \right)=\left(\mathbf {P_{T}} -q\mathbf {A} \right)^{2}&=\left(m_{0}c\right)^{2}\\\Rightarrow \left(-{\boldsymbol {\partial }}[S]-q\mathbf {A} \right)^{2}&=\left(m_{0}c\right)^{2}\end{aligned}}} Breaking into the temporal and spatial components:(−∂t[S]c−qϕc)2−(∇[S]−qa)2=(m0c)2⇒(∇[S]−qa)2−1c2(−∂t[S]−qϕ)2+(m0c)2=0⇒(∇[S]−qa)2−1c2(∂t[S]+qϕ)2+(m0c)2=0{\displaystyle {\begin{aligned}&&\left(-{\frac {\partial _{t}[S]}{c}}-{\frac {q\phi }{c}}\right)^{2}-({\boldsymbol {\nabla }}[S]-q\mathbf {a} )^{2}&=(m_{0}c)^{2}\\&\Rightarrow &({\boldsymbol {\nabla }}[S]-q\mathbf {a} )^{2}-{\frac {1}{c^{2}}}(-\partial _{t}[S]-q\phi )^{2}+(m_{0}c)^{2}&=0\\&\Rightarrow &({\boldsymbol {\nabla }}[S]-q\mathbf {a} )^{2}-{\frac {1}{c^{2}}}(\partial _{t}[S]+q\phi )^{2}+(m_{0}c)^{2}&=0\end{aligned}}} where the final is the relativisticHamilton–Jacobi equation. The 4-gradient is connected withquantum mechanics. 
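For a free particle (setting q = 0, so the potential terms qφ and qa drop out), the Hamilton–Jacobi equation above is satisfied by the action S = −Et + p·x with the relativistic energy E² = p²c² + m0²c⁴. A SymPy sketch in one spatial dimension:

```python
import sympy as sp

t, x, c, m0, p = sp.symbols('t x c m_0 p', positive=True)

E = sp.sqrt(p**2*c**2 + m0**2*c**4)   # relativistic free-particle energy
S = -E*t + p*x                        # action, so that E = -d_t S and p = d_x S

# (grad S - q a)^2 - (1/c^2)(d_t S + q phi)^2 + (m0 c)^2  with q = 0:
hje = sp.diff(S, x)**2 - sp.diff(S, t)**2/c**2 + (m0*c)**2
print(sp.simplify(hje))  # -> 0
```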
The relation between the4-momentumP{\displaystyle \mathbf {P} }and the 4-gradient∂{\displaystyle {\boldsymbol {\partial }}}gives theSchrödinger QM relations.[7]: 3–5P=(Ec,p→)=iℏ∂=iℏ(∂tc,−∇→){\displaystyle \mathbf {P} =\left({\frac {E}{c}},{\vec {p}}\right)=i\hbar {\boldsymbol {\partial }}=i\hbar \left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)} The temporal component gives:E=iℏ∂t{\displaystyle E=i\hbar \partial _{t}} The spatial components give:p→=−iℏ∇→{\displaystyle {\vec {p}}=-i\hbar {\vec {\nabla }}} This can actually be composed of two separate steps. First:[1]: 82–84 P=(Ec,p→)=ℏK=ℏ(ωc,k→){\displaystyle \mathbf {P} =\left({\frac {E}{c}},{\vec {p}}\right)=\hbar \mathbf {K} =\hbar \left({\frac {\omega }{c}},{\vec {k}}\right)}which is the full 4-vector version of: The (temporal component)Planck–Einstein relationE=ℏω{\displaystyle E=\hbar \omega } The (spatial components)de Brogliematter waverelationp→=ℏk→{\displaystyle {\vec {p}}=\hbar {\vec {k}}} Second:[5]:300 K=(ωc,k→)=i∂=i(∂tc,−∇→){\displaystyle \mathbf {K} =\left({\frac {\omega }{c}},{\vec {k}}\right)=i{\boldsymbol {\partial }}=i\left({\frac {\partial _{t}}{c}},-{\vec {\nabla }}\right)}which is just the 4-gradient version of thewave equationforcomplex-valuedplane waves The temporal component gives:ω=i∂t{\displaystyle \omega =i\partial _{t}} The spatial components give:k→=−i∇→{\displaystyle {\vec {k}}=-i{\vec {\nabla }}} In quantum mechanics (physics), thecanonical commutation relationis the fundamental relation between canonical conjugate quantities (quantities which are related by definition such that one is the Fourier transform of another). 
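These operator relations can be checked on a complex plane wave, where E = iℏ∂t pulls down ℏω and p = −iℏ∇ pulls down ℏk. A minimal SymPy sketch in 1+1 dimensions:

```python
import sympy as sp

t, x, hbar, omega, kx = sp.symbols('t x hbar omega k_x', positive=True)
psi = sp.exp(-sp.I*(omega*t - kx*x))  # complex plane wave

# temporal component: E = i hbar d_t  recovers  hbar omega (Planck-Einstein)
assert sp.simplify(sp.I*hbar*sp.diff(psi, t) - hbar*omega*psi) == 0
# spatial component: p = -i hbar d_x  recovers  hbar kx (de Broglie)
assert sp.simplify(-sp.I*hbar*sp.diff(psi, x) - hbar*kx*psi) == 0
print("Planck-Einstein and de Broglie relations recovered")
```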
The 4-gradient is a component in several of the relativistic wave equations:[5]:300–309[3]: 25, 30–31, 55–69 In theKlein–Gordon relativistic quantum wave equationfor spin-0 particles (ex.Higgs boson):[7]: 5[(∂μ∂μ)+(m0cℏ)2]ψ=0{\displaystyle \left[\left(\partial ^{\mu }\partial _{\mu }\right)+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} In theDirac relativistic quantum wave equationfor spin-1/2 particles (ex.electrons):[7]: 130[iγμ∂μ−m0cℏ]ψ=0{\displaystyle \left[i\gamma ^{\mu }\partial _{\mu }-{\frac {m_{0}c}{\hbar }}\right]\psi =0} whereγμ{\displaystyle \gamma ^{\mu }}are theDirac gamma matricesandψ{\displaystyle \psi }is a relativisticwave function. ψ{\displaystyle \psi }isLorentz scalarfor the Klein–Gordon equation, and aspinorfor the Dirac equation. It is nice that the gamma matrices themselves refer back to the fundamental aspect of SR, the Minkowski metric:[7]: 130{γμ,γν}=γμγν+γνγμ=2ημνI4{\displaystyle \left\{\gamma ^{\mu },\gamma ^{\nu }\right\}=\gamma ^{\mu }\gamma ^{\nu }+\gamma ^{\nu }\gamma ^{\mu }=2\eta ^{\mu \nu }I_{4}} Conservation of 4-probability current density follows from the continuity equation:[7]: 6∂⋅J=∂tρ+∇→⋅j→=0{\displaystyle {\boldsymbol {\partial }}\cdot \mathbf {J} =\partial _{t}\rho +{\vec {\boldsymbol {\nabla }}}\cdot {\vec {\mathbf {j} }}=0} The4-probability current densityhas the relativistically covariant expression:[7]: 6Jprobμ=iℏ2m0(ψ∗∂μψ−ψ∂μψ∗){\displaystyle J_{\text{prob}}^{\mu }={\frac {i\hbar }{2m_{0}}}\left(\psi ^{*}\partial ^{\mu }\psi -\psi \partial ^{\mu }\psi ^{*}\right)} The4-charge current densityis just the charge (q) times the 4-probability current density:[7]: 8Jchargeμ=iℏq2m0(ψ∗∂μψ−ψ∂μψ∗){\displaystyle J_{\text{charge}}^{\mu }={\frac {i\hbar q}{2m_{0}}}\left(\psi ^{*}\partial ^{\mu }\psi -\psi \partial ^{\mu }\psi ^{*}\right)} Relativistic wave equationsuse 4-vectors in order to be covariant.[3][7] Start with the standard SR 4-vectors:[1] Note the following simple relations from the previous sections, where 
each 4-vector is related to another by aLorentz scalar: Now, just apply the standard Lorentz scalar product rule to each one:U⋅U=c2P⋅P=(m0c)2K⋅K=(m0cℏ)2∂⋅∂=(−im0cℏ)2=−(m0cℏ)2{\displaystyle {\begin{aligned}\mathbf {U} \cdot \mathbf {U} &=c^{2}\\\mathbf {P} \cdot \mathbf {P} &=(m_{0}c)^{2}\\\mathbf {K} \cdot \mathbf {K} &=\left({\frac {m_{0}c}{\hbar }}\right)^{2}\\{\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}&=\left({\frac {-im_{0}c}{\hbar }}\right)^{2}=-\left({\frac {m_{0}c}{\hbar }}\right)^{2}\end{aligned}}} The last equation (with the 4-gradient scalar product) is a fundamental quantum relation. When applied to a Lorentz scalar fieldψ{\displaystyle \psi }, one gets the Klein–Gordon equation, the most basic of the quantumrelativistic wave equations:[7]: 5–8[∂⋅∂+(m0cℏ)2]ψ=0{\displaystyle \left[{\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]\psi =0} TheSchrödinger equationis the low-velocitylimiting case(|v| ≪c) of theKlein–Gordon equation.[7]: 7–8 If the quantum relation is applied to a 4-vector fieldAμ{\displaystyle A^{\mu }}instead of a Lorentz scalar fieldψ{\displaystyle \psi }, then one gets theProca equation:[7]: 361[∂⋅∂+(m0cℏ)2]Aμ=0μ{\displaystyle \left[{\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}+\left({\frac {m_{0}c}{\hbar }}\right)^{2}\right]A^{\mu }=0^{\mu }} If the rest mass term is set to zero (light-like particles), then this gives the freeMaxwell equation:[∂⋅∂]Aμ=0μ{\displaystyle [{\boldsymbol {\partial }}\cdot {\boldsymbol {\partial }}]A^{\mu }=0^{\mu }} More complicated forms and interactions can be derived by using theminimal couplingrule: In modernelementaryparticle physics, one can define agauge covariant derivativewhich utilizes the extra RQM fields (internal particle spaces) now known to exist. 
The version known from classical EM (in natural units) is:[3]: 39Dμ=∂μ−igAμ{\displaystyle D^{\mu }=\partial ^{\mu }-igA^{\mu }} The full covariant derivative for thefundamental interactionsof theStandard Modelthat we are presently aware of (innatural units) is:[3]: 35–53 Dμ=∂μ−ig112YBμ−ig212τi⋅Wiμ−ig312λa⋅Gaμ{\displaystyle D^{\mu }=\partial ^{\mu }-ig_{1}{\frac {1}{2}}YB^{\mu }-ig_{2}{\frac {1}{2}}\tau _{i}\cdot W_{i}^{\mu }-ig_{3}{\frac {1}{2}}\lambda _{a}\cdot G_{a}^{\mu }}orD=∂−ig112YB−ig212τi⋅Wi−ig312λa⋅Ga{\displaystyle \mathbf {D} ={\boldsymbol {\partial }}-ig_{1}{\frac {1}{2}}Y\mathbf {B} -ig_{2}{\frac {1}{2}}{\boldsymbol {\tau }}_{i}\cdot \mathbf {W} _{i}-ig_{3}{\frac {1}{2}}{\boldsymbol {\lambda }}_{a}\cdot \mathbf {G} _{a}} where the scalar product summations (⋅{\displaystyle \cdot }) here refer to the internal spaces, not the tensor indices: Thecoupling constants(g1,g2,g3){\displaystyle (g_{1},g_{2},g_{3})}are arbitrary numbers that must be discovered from experiment. It is worth emphasizing that for thenon-abeliantransformations once thegi{\displaystyle g_{i}}are fixed for one representation, they are known for all representations. These internal particle spaces have been discovered empirically.[3]: 47 In three dimensions, the gradient operator maps a scalar field to a vector field such that the line integral between any two points in the vector field is equal to the difference between the scalar field at these two points. Based on this, it mayappearincorrectlythat the natural extension of the gradient to 4 dimensionsshouldbe:∂α=?(∂∂t,∇→),{\displaystyle \partial ^{\alpha }{\overset {?}{=}}\left({\frac {\partial }{\partial t}},{\vec {\nabla }}\right),}which isincorrect. However, a line integral involves the application of the vector dot product, and when this is extended to 4-dimensional spacetime, a change of sign is introduced to either the spatial co-ordinates or the time co-ordinate depending on the convention used. 
This is due to the non-Euclidean nature of spacetime. In this article, we place a negative sign on the spatial coordinates (the time-positive metric conventionημν=diag⁡[1,−1,−1,−1]{\displaystyle \eta ^{\mu \nu }=\operatorname {diag} [1,-1,-1,-1]}). The factor of (1/c) is to keep the correctunit dimensionality, [length]−1, for all components of the 4-vector and the (−1) is to keep the 4-gradientLorentz covariant. Adding these two corrections to the above expression gives thecorrectdefinition of 4-gradient:[1]: 55–56[3]: 16∂α=(1c∂∂t,−∇→){\displaystyle \partial ^{\alpha }=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},-{\vec {\nabla }}\right)} Regarding the use of scalars, 4-vectors and tensors in physics, various authors use slightly different notations for the same equations. For instance, some usem{\displaystyle m}for invariant rest mass, others usem0{\displaystyle m_{0}}for invariant rest mass and usem{\displaystyle m}for relativistic mass. Many authors set factors ofc{\displaystyle c}andℏ{\displaystyle \hbar }andG{\displaystyle G}to dimensionless unity. Others show some or all the constants. Some authors usev{\displaystyle v}for velocity, others useu{\displaystyle u}. Some useK{\displaystyle K}as a 4-wavevector (to pick an arbitrary example). Others usek{\displaystyle k}orK{\displaystyle \mathbf {K} }orkμ{\displaystyle k^{\mu }}orkμ{\displaystyle k_{\mu }}orKν{\displaystyle K^{\nu }}orN{\displaystyle N}, etc. Some write the 4-wavevector as(ωc,k){\displaystyle \left({\frac {\omega }{c}},\mathbf {k} \right)}, some as(k,ωc){\displaystyle \left(\mathbf {k} ,{\frac {\omega }{c}}\right)}or(k0,k){\displaystyle \left(k^{0},\mathbf {k} \right)}or(k0,k1,k2,k3){\displaystyle \left(k^{0},k^{1},k^{2},k^{3}\right)}or(k1,k2,k3,k4){\displaystyle \left(k^{1},k^{2},k^{3},k^{4}\right)}or(kt,kx,ky,kz){\displaystyle \left(k_{t},k_{x},k_{y},k_{z}\right)}or(k1,k2,k3,ik4){\displaystyle \left(k^{1},k^{2},k^{3},ik^{4}\right)}. 
Some authors make sure that the dimensional units match across the 4-vector; others do not. Some refer to the temporal component in the 4-vector name, others to the spatial component. Some mix conventions within a single book, using one and then later the other. Some use the metric (+ − − −), others use the metric (− + + +). Some avoid 4-vectors altogether and work with the old-style energy E and 3-space momentum vector p. All of these are merely notational styles, some clearer and more concise than others; the physics is the same as long as one convention is used consistently throughout the whole derivation.[7]: 2–4
https://en.wikipedia.org/wiki/Four-gradient
Inmathematics, theHessian matrix,Hessianor (less commonly)Hesse matrixis asquare matrixof second-orderpartial derivativesof a scalar-valuedfunction, orscalar field. It describes the localcurvatureof a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematicianLudwig Otto Hesseand later named after him. Hesse originally used the term "functional determinants". The Hessian is sometimes denoted by H or,∇∇{\displaystyle \nabla \nabla }or∇⊗∇{\displaystyle \nabla \otimes \nabla }orD2{\displaystyle D^{2}}. Supposef:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }is a function taking as input a vectorx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}and outputting a scalarf(x)∈R.{\displaystyle f(\mathbf {x} )\in \mathbb {R} .}If all second-orderpartial derivativesoff{\displaystyle f}exist, then the Hessian matrixH{\displaystyle \mathbf {H} }off{\displaystyle f}is a squaren×n{\displaystyle n\times n}matrix, usually defined and arranged asHf=[∂2f∂x12∂2f∂x1∂x2⋯∂2f∂x1∂xn∂2f∂x2∂x1∂2f∂x22⋯∂2f∂x2∂xn⋮⋮⋱⋮∂2f∂xn∂x1∂2f∂xn∂x2⋯∂2f∂xn2].{\displaystyle \mathbf {H} _{f}={\begin{bmatrix}{\dfrac {\partial ^{2}f}{\partial x_{1}^{2}}}&{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{1}\,\partial x_{n}}}\\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{2}^{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{2}\,\partial x_{n}}}\\[2.2ex]\vdots &\vdots &\ddots &\vdots \\[2.2ex]{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{1}}}&{\dfrac {\partial ^{2}f}{\partial x_{n}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}f}{\partial x_{n}^{2}}}\end{bmatrix}}.}That is, the entry of theith row and thejth column is(Hf)i,j=∂2f∂xi∂xj.{\displaystyle (\mathbf {H} _{f})_{i,j}={\frac {\partial ^{2}f}{\partial x_{i}\,\partial x_{j}}}.} If furthermore the second partial derivatives are all continuous, the Hessian matrix is asymmetric matrixby 
thesymmetry of second derivatives. Thedeterminantof the Hessian matrix is called theHessian determinant.[1] The Hessian matrix of a functionf{\displaystyle f}is theJacobian matrixof thegradientof the functionf{\displaystyle f}; that is:H(f(x))=J(∇f(x)).{\displaystyle \mathbf {H} (f(\mathbf {x} ))=\mathbf {J} (\nabla f(\mathbf {x} )).} Iff{\displaystyle f}is ahomogeneous polynomialin three variables, the equationf=0{\displaystyle f=0}is theimplicit equationof aplane projective curve. Theinflection pointsof the curve are exactly the non-singular points where the Hessian determinant is zero. It follows byBézout's theoremthat acubic plane curvehas at most 9 inflection points, since the Hessian determinant is a polynomial of degree 3. The Hessian matrix of aconvex functionispositive semi-definite. Refining this property allows us to test whether acritical pointx{\displaystyle x}is a local maximum, local minimum, or a saddle point, as follows: If the Hessian ispositive-definiteatx,{\displaystyle x,}thenf{\displaystyle f}attains an isolated local minimum atx.{\displaystyle x.}If the Hessian isnegative-definiteatx,{\displaystyle x,}thenf{\displaystyle f}attains an isolated local maximum atx.{\displaystyle x.}If the Hessian has both positive and negativeeigenvalues, thenx{\displaystyle x}is asaddle pointforf.{\displaystyle f.}Otherwise the test is inconclusive. This implies that at a local minimum the Hessian is positive-semidefinite, and at a local maximum the Hessian is negative-semidefinite. For positive-semidefinite and negative-semidefinite Hessians the test is inconclusive (a critical point where the Hessian is semidefinite but not definite may be a local extremum or a saddle point). However, more can be said from the point of view ofMorse theory. Thesecond-derivative testfor functions of one and two variables is simpler than the general case. 
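The identity H(f(x)) = J(∇f(x)) and the eigenvalue-based classification of critical points can both be illustrated in SymPy (the functions f and g below are arbitrary illustrative choices):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Hessian as the Jacobian of the gradient, for an arbitrary smooth function
f = x1**3 + 2*x1*x2**2 - x2**3
grad = sp.Matrix([f]).jacobian([x1, x2])   # 1x2 row of first partials
H = grad.jacobian([x1, x2])                # 2x2 matrix of second partials
assert H == sp.hessian(f, (x1, x2))        # matches the built-in Hessian
assert H == H.T                            # symmetric (continuous 2nd partials)

# second-derivative test: g(x1, x2) = x1^2 - x2^2 has a critical point at 0
g = x1**2 - x2**2
Hg = sp.hessian(g, (x1, x2)).subs({x1: 0, x2: 0})
eigs = list(Hg.eigenvals())                # eigenvalues 2 and -2
assert any(e > 0 for e in eigs) and any(e < 0 for e in eigs)
print("mixed eigenvalue signs -> saddle point")
```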
In one variable, the Hessian contains exactly one second derivative; if it is positive, thenx{\displaystyle x}is a local minimum, and if it is negative, thenx{\displaystyle x}is a local maximum; if it is zero, then the test is inconclusive. In two variables, thedeterminantcan be used, because the determinant is the product of the eigenvalues. If it is positive, then the eigenvalues are both positive, or both negative. If it is negative, then the two eigenvalues have different signs. If it is zero, then the second-derivative test is inconclusive. Equivalently, the second-order conditions that are sufficient for a local minimum or maximum can be expressed in terms of the sequence of principal (upper-leftmost)minors(determinants of sub-matrices) of the Hessian; these conditions are a special case of those given in the next section for bordered Hessians for constrained optimization—the case in which the number of constraints is zero. Specifically, the sufficient condition for a minimum is that all of these principal minors be positive, while the sufficient condition for a maximum is that the minors alternate in sign, with the1×1{\displaystyle 1\times 1}minor being negative. If thegradient(the vector of the partial derivatives) of a functionf{\displaystyle f}is zero at some pointx,{\displaystyle \mathbf {x} ,}thenf{\displaystyle f}has acritical point(orstationary point) atx.{\displaystyle \mathbf {x} .}Thedeterminantof the Hessian atx{\displaystyle \mathbf {x} }is called, in some contexts, adiscriminant. 
If this determinant is zero thenx{\displaystyle \mathbf {x} }is called adegenerate critical pointoff,{\displaystyle f,}or anon-Morse critical pointoff.{\displaystyle f.}Otherwise it is non-degenerate, and called aMorse critical pointoff.{\displaystyle f.} The Hessian matrix plays an important role inMorse theoryandcatastrophe theory, because itskernelandeigenvaluesallow classification of the critical points.[2][3][4] The determinant of the Hessian matrix, when evaluated at a critical point of a function, is equal to theGaussian curvatureof the function considered as a manifold. The eigenvalues of the Hessian at that point are the principal curvatures of the function, and the eigenvectors are the principal directions of curvature. (SeeGaussian curvature § Relation to principal curvatures.) Hessian matrices are used in large-scaleoptimizationproblems withinNewton-type methods because they are the coefficient of the quadratic term of a localTaylor expansionof a function. That is,y=f(x+Δx)≈f(x)+∇f(x)TΔx+12ΔxTH(x)Δx{\displaystyle y=f(\mathbf {x} +\Delta \mathbf {x} )\approx f(\mathbf {x} )+\nabla f(\mathbf {x} )^{\mathsf {T}}\Delta \mathbf {x} +{\frac {1}{2}}\,\Delta \mathbf {x} ^{\mathsf {T}}\mathbf {H} (\mathbf {x} )\,\Delta \mathbf {x} }where∇f{\displaystyle \nabla f}is thegradient(∂f∂x1,…,∂f∂xn).{\displaystyle \left({\frac {\partial f}{\partial x_{1}}},\ldots ,{\frac {\partial f}{\partial x_{n}}}\right).}Computing and storing the full Hessian matrix takesΘ(n2){\displaystyle \Theta \left(n^{2}\right)}memory, which is infeasible for high-dimensional functions such as theloss functionsofneural nets,conditional random fields, and otherstatistical modelswith large numbers of parameters. For such situations,truncated-Newtonandquasi-Newtonalgorithms have been developed. 
The latter family of algorithms use approximations to the Hessian; one of the most popular quasi-Newton algorithms isBFGS.[5] Such approximations may use the fact that an optimization algorithm uses the Hessian only as alinear operatorH(v),{\displaystyle \mathbf {H} (\mathbf {v} ),}and proceed by first noticing that the Hessian also appears in the local expansion of the gradient:∇f(x+Δx)=∇f(x)+H(x)Δx+O(‖Δx‖2){\displaystyle \nabla f(\mathbf {x} +\Delta \mathbf {x} )=\nabla f(\mathbf {x} )+\mathbf {H} (\mathbf {x} )\,\Delta \mathbf {x} +{\mathcal {O}}(\|\Delta \mathbf {x} \|^{2})} LettingΔx=rv{\displaystyle \Delta \mathbf {x} =r\mathbf {v} }for some scalarr,{\displaystyle r,}this givesH(x)Δx=H(x)rv=rH(x)v=∇f(x+rv)−∇f(x)+O(r2),{\displaystyle \mathbf {H} (\mathbf {x} )\,\Delta \mathbf {x} =\mathbf {H} (\mathbf {x} )r\mathbf {v} =r\mathbf {H} (\mathbf {x} )\mathbf {v} =\nabla f(\mathbf {x} +r\mathbf {v} )-\nabla f(\mathbf {x} )+{\mathcal {O}}(r^{2}),}that is,H(x)v=1r[∇f(x+rv)−∇f(x)]+O(r){\displaystyle \mathbf {H} (\mathbf {x} )\mathbf {v} ={\frac {1}{r}}\left[\nabla f(\mathbf {x} +r\mathbf {v} )-\nabla f(\mathbf {x} )\right]+{\mathcal {O}}(r)}so if the gradient is already computed, the approximate Hessian can be computed by a linear (in the size of the gradient) number of scalar operations. (While simple to program, this approximation scheme is not numerically stable sincer{\displaystyle r}has to be made small to prevent error due to theO(r){\displaystyle {\mathcal {O}}(r)}term, but decreasing it loses precision in the first term.[6]) Notably regarding Randomized Search Heuristics, theevolution strategy's covariance matrix adapts to the inverse of the Hessian matrix,up toa scalar factor and small random fluctuations. 
This result has been formally proven for a single-parent strategy and a static model, as the population size increases, relying on the quadratic approximation.[7] The Hessian matrix is commonly used for expressing image processing operators inimage processingandcomputer vision(see theLaplacian of Gaussian(LoG) blob detector,the determinant of Hessian (DoH) blob detectorandscale space). It can be used innormal modeanalysis to calculate the different molecular frequencies ininfrared spectroscopy.[8]It can also be used in local sensitivity and statistical diagnostics.[9] Abordered Hessianis used for the second-derivative test in certain constrained optimization problems. Given the functionf{\displaystyle f}considered previously, but adding a constraint functiong{\displaystyle g}such thatg(x)=c,{\displaystyle g(\mathbf {x} )=c,}the bordered Hessian is the Hessian of theLagrange functionΛ(x,λ)=f(x)+λ[g(x)−c]{\displaystyle \Lambda (\mathbf {x} ,\lambda )=f(\mathbf {x} )+\lambda [g(\mathbf {x} )-c]}:[10]H(Λ)=[∂2Λ∂λ2∂2Λ∂λ∂x(∂2Λ∂λ∂x)T∂2Λ∂x2]=[0∂g∂x1∂g∂x2⋯∂g∂xn∂g∂x1∂2Λ∂x12∂2Λ∂x1∂x2⋯∂2Λ∂x1∂xn∂g∂x2∂2Λ∂x2∂x1∂2Λ∂x22⋯∂2Λ∂x2∂xn⋮⋮⋮⋱⋮∂g∂xn∂2Λ∂xn∂x1∂2Λ∂xn∂x2⋯∂2Λ∂xn2]=[0∂g∂x(∂g∂x)T∂2Λ∂x2]{\displaystyle \mathbf {H} (\Lambda )={\begin{bmatrix}{\dfrac {\partial ^{2}\Lambda }{\partial \lambda ^{2}}}&{\dfrac {\partial ^{2}\Lambda }{\partial \lambda \partial \mathbf {x} }}\\\left({\dfrac {\partial ^{2}\Lambda }{\partial \lambda \partial \mathbf {x} }}\right)^{\mathsf {T}}&{\dfrac {\partial ^{2}\Lambda }{\partial \mathbf {x} ^{2}}}\end{bmatrix}}={\begin{bmatrix}0&{\dfrac {\partial g}{\partial x_{1}}}&{\dfrac {\partial g}{\partial x_{2}}}&\cdots &{\dfrac {\partial g}{\partial x_{n}}}\\[2.2ex]{\dfrac {\partial g}{\partial x_{1}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{1}^{2}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{1}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}\Lambda }{\partial x_{1}\,\partial x_{n}}}\\[2.2ex]{\dfrac {\partial g}{\partial x_{2}}}&{\dfrac {\partial ^{2}\Lambda 
}{\partial x_{2}\,\partial x_{1}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{2}^{2}}}&\cdots &{\dfrac {\partial ^{2}\Lambda }{\partial x_{2}\,\partial x_{n}}}\\[2.2ex]\vdots &\vdots &\vdots &\ddots &\vdots \\[2.2ex]{\dfrac {\partial g}{\partial x_{n}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{n}\,\partial x_{1}}}&{\dfrac {\partial ^{2}\Lambda }{\partial x_{n}\,\partial x_{2}}}&\cdots &{\dfrac {\partial ^{2}\Lambda }{\partial x_{n}^{2}}}\end{bmatrix}}={\begin{bmatrix}0&{\dfrac {\partial g}{\partial \mathbf {x} }}\\\left({\dfrac {\partial g}{\partial \mathbf {x} }}\right)^{\mathsf {T}}&{\dfrac {\partial ^{2}\Lambda }{\partial \mathbf {x} ^{2}}}\end{bmatrix}}} If there are, say,m{\displaystyle m}constraints then the zero in the upper-left corner is anm×m{\displaystyle m\times m}block of zeros, and there arem{\displaystyle m}border rows at the top andm{\displaystyle m}border columns at the left. The above rules stating that extrema are characterized (among critical points with a non-singular Hessian) by a positive-definite or negative-definite Hessian cannot apply here since a bordered Hessian can neither be negative-definite nor positive-definite, aszTHz=0{\displaystyle \mathbf {z} ^{\mathsf {T}}\mathbf {H} \mathbf {z} =0}ifz{\displaystyle \mathbf {z} }is any vector whose sole non-zero entry is its first. The second derivative test consists here of sign restrictions of the determinants of a certain set ofn−m{\displaystyle n-m}submatrices of the bordered Hessian.[11]Intuitively, them{\displaystyle m}constraints can be thought of as reducing the problem to one withn−m{\displaystyle n-m}free variables. (For example, the maximization off(x1,x2,x3){\displaystyle f\left(x_{1},x_{2},x_{3}\right)}subject to the constraintx1+x2+x3=1{\displaystyle x_{1}+x_{2}+x_{3}=1}can be reduced to the maximization off(x1,x2,1−x1−x2){\displaystyle f\left(x_{1},x_{2},1-x_{1}-x_{2}\right)}without constraint.) 
Specifically, sign conditions are imposed on the sequence of leading principal minors (determinants of upper-left-justified sub-matrices) of the bordered Hessian, for which the first2m{\displaystyle 2m}leading principal minors are neglected, the smallest minor consisting of the truncated first2m+1{\displaystyle 2m+1}rows and columns, the next consisting of the truncated first2m+2{\displaystyle 2m+2}rows and columns, and so on, with the last being the entire bordered Hessian; if2m+1{\displaystyle 2m+1}is larger thann+m,{\displaystyle n+m,}then the smallest leading principal minor is the Hessian itself.[12]There are thusn−m{\displaystyle n-m}minors to consider, each evaluated at the specific point being considered as acandidate maximum or minimum. A sufficient condition for a localmaximumis that these minors alternate in sign with the smallest one having the sign of(−1)m+1.{\displaystyle (-1)^{m+1}.}A sufficient condition for a localminimumis that all of these minors have the sign of(−1)m.{\displaystyle (-1)^{m}.}(In the unconstrained case ofm=0{\displaystyle m=0}these conditions coincide with the conditions for the unbordered Hessian to be negative definite or positive definite respectively). Iff{\displaystyle f}is instead avector fieldf:Rn→Rm,{\displaystyle \mathbf {f} :\mathbb {R} ^{n}\to \mathbb {R} ^{m},}that is,f(x)=(f1(x),f2(x),…,fm(x)),{\displaystyle \mathbf {f} (\mathbf {x} )=\left(f_{1}(\mathbf {x} ),f_{2}(\mathbf {x} ),\ldots ,f_{m}(\mathbf {x} )\right),}then the collection of second partial derivatives is not an×n{\displaystyle n\times n}matrix, but rather a third-ordertensor. 
This can be thought of as an array ofm{\displaystyle m}Hessian matrices, one for each component off{\displaystyle \mathbf {f} }:H(f)=(H(f1),H(f2),…,H(fm)).{\displaystyle \mathbf {H} (\mathbf {f} )=\left(\mathbf {H} (f_{1}),\mathbf {H} (f_{2}),\ldots ,\mathbf {H} (f_{m})\right).}This tensor degenerates to the usual Hessian matrix whenm=1.{\displaystyle m=1.} In the context ofseveral complex variables, the Hessian may be generalized. Supposef:Cn→C,{\displaystyle f\colon \mathbb {C} ^{n}\to \mathbb {C} ,}and writef(z1,…,zn).{\displaystyle f\left(z_{1},\ldots ,z_{n}\right).}IdentifyingCn{\displaystyle {\mathbb {C} }^{n}}withR2n{\displaystyle {\mathbb {R} }^{2n}}, the normal "real" Hessian is a2n×2n{\displaystyle 2n\times 2n}matrix. As the object of study in several complex variables areholomorphic functions, that is, solutions to the n-dimensionalCauchy–Riemann conditions, we usually look on the part of the Hessian that contains information invariant under holomorphic changes of coordinates. This "part" is the so-called complex Hessian, which is the matrix(∂2f∂zj∂z¯k)j,k.{\displaystyle \left({\frac {\partial ^{2}f}{\partial z_{j}\partial {\bar {z}}_{k}}}\right)_{j,k}.}Note that iff{\displaystyle f}is holomorphic, then its complex Hessian matrix is identically zero, so the complex Hessian is used to study smooth but not holomorphic functions, see for exampleLevi pseudoconvexity. When dealing with holomorphic functions, we could consider the Hessian matrix(∂2f∂zj∂zk)j,k.{\displaystyle \left({\frac {\partial ^{2}f}{\partial z_{j}\partial z_{k}}}\right)_{j,k}.} Let(M,g){\displaystyle (M,g)}be aRiemannian manifoldand∇{\displaystyle \nabla }itsLevi-Civita connection. Letf:M→R{\displaystyle f:M\to \mathbb {R} }be a smooth function. 
Define the Hessian tensor byHess⁡(f)∈Γ(T∗M⊗T∗M)byHess⁡(f):=∇∇f=∇df,{\displaystyle \operatorname {Hess} (f)\in \Gamma \left(T^{*}M\otimes T^{*}M\right)\quad {\text{ by }}\quad \operatorname {Hess} (f):=\nabla \nabla f=\nabla df,}where this takes advantage of the fact that the first covariant derivative of a function is the same as its ordinary differential. Choosing local coordinates{xi}{\displaystyle \left\{x^{i}\right\}}gives a local expression for the Hessian asHess⁡(f)=∇i∂jfdxi⊗dxj=(∂2f∂xi∂xj−Γijk∂f∂xk)dxi⊗dxj{\displaystyle \operatorname {Hess} (f)=\nabla _{i}\,\partial _{j}f\ dx^{i}\!\otimes \!dx^{j}=\left({\frac {\partial ^{2}f}{\partial x^{i}\partial x^{j}}}-\Gamma _{ij}^{k}{\frac {\partial f}{\partial x^{k}}}\right)dx^{i}\otimes dx^{j}}whereΓijk{\displaystyle \Gamma _{ij}^{k}}are theChristoffel symbolsof the connection. Other equivalent forms for the Hessian are given byHess⁡(f)(X,Y)=⟨∇Xgrad⁡f,Y⟩andHess⁡(f)(X,Y)=X(Yf)−df(∇XY).{\displaystyle \operatorname {Hess} (f)(X,Y)=\langle \nabla _{X}\operatorname {grad} f,Y\rangle \quad {\text{ and }}\quad \operatorname {Hess} (f)(X,Y)=X(Yf)-df(\nabla _{X}Y).}
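The gradient-difference trick for Hessian-vector products described earlier, H(x)v ≈ [∇f(x + rv) − ∇f(x)]/r, is easy to reproduce numerically. A sketch in plain Python, where the test function, the point, and the direction are my own illustrative choices (the O(r) error of the forward difference shows up in the tolerance):

```python
def grad(p):
    # f(x, y) = x^3 + x*y^2, so grad f = (3x^2 + y^2, 2xy)
    x, y = p
    return [3 * x * x + y * y, 2 * x * y]

def hessian_vec(grad, x, v, r=1e-6):
    """Approximate H(x) v from two gradient evaluations:
    H v ~ (grad(x + r v) - grad(x)) / r, with O(r) truncation error."""
    g0 = grad(x)
    g1 = grad([xi + r * vi for xi, vi in zip(x, v)])
    return [(a - b) / r for a, b in zip(g1, g0)]

hv = hessian_vec(grad, [1.0, 1.0], [1.0, 0.0])
# Exact Hessian at (1, 1) is [[6x, 2y], [2y, 2x]] = [[6, 2], [2, 2]],
# so the exact product with v = (1, 0) is (6, 2).
```

As the text notes, r cannot be made arbitrarily small in floating point: the O(r) term shrinks, but cancellation in the gradient difference grows.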
https://en.wikipedia.org/wiki/Hessian_matrix
In mathematics, a skew gradient of a harmonic function over a simply connected domain with two real dimensions is a vector field that is everywhere orthogonal to the gradient of the function and that has the same magnitude as the gradient. The skew gradient can be defined using complex analysis and the Cauchy–Riemann equations. Let f(z(x,y))=u(x,y)+iv(x,y){\displaystyle f(z(x,y))=u(x,y)+iv(x,y)} be a complex-valued analytic function, where u, v are real-valued scalar functions of the real variables x, y. A skew gradient is defined as ∇⊥u=(−∂u/∂y,∂u/∂x){\displaystyle \nabla ^{\perp }u=\left(-{\frac {\partial u}{\partial y}},{\frac {\partial u}{\partial x}}\right)}, and from the Cauchy–Riemann equations (∂u/∂x=∂v/∂y and ∂u/∂y=−∂v/∂x) it is derived that ∇⊥u=∇v{\displaystyle \nabla ^{\perp }u=\nabla v}. The skew gradient has two interesting properties: it is everywhere orthogonal to the gradient of u, and of the same length, i.e. ∇u⋅∇⊥u=0{\displaystyle \nabla u\cdot \nabla ^{\perp }u=0} and |∇u|=|∇⊥u|{\displaystyle |\nabla u|=|\nabla ^{\perp }u|}.
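These properties can be checked numerically. A minimal sketch, taking f(z) = z² so that u = x² − y² and v = 2xy (my own example):

```python
def u(x, y):  # real part of the analytic function f(z) = z^2
    return x * x - y * y

def v(x, y):  # imaginary part of f(z) = z^2, the harmonic conjugate of u
    return 2 * x * y

def grad(F, x, y, h=1e-6):
    """Central-difference gradient of F at (x, y)."""
    return [(F(x + h, y) - F(x - h, y)) / (2 * h),
            (F(x, y + h) - F(x, y - h)) / (2 * h)]

gu = grad(u, 1.0, 2.0)       # (2x, -2y) = (2, -4)
skew = [-gu[1], gu[0]]       # skew gradient: rotate grad u by 90 degrees
gv = grad(v, 1.0, 2.0)       # (2y, 2x) = (4, 2), equal to the skew gradient

dot = gu[0] * skew[0] + gu[1] * skew[1]   # orthogonality: dot product is 0
```

The equal-length property holds by construction here, since the skew gradient is just a permutation (with one sign flip) of the gradient's components.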
https://en.wikipedia.org/wiki/Skew_gradient
A spatial gradient is a gradient whose components are spatial derivatives, i.e., the rate of change of a given scalar physical quantity with respect to the position coordinates in physical space. Homogeneous regions have a spatial gradient vector norm equal to zero. When evaluated over vertical position (altitude or depth), it is called the vertical derivative or vertical gradient; the remainder is the horizontal gradient component, the vector projection of the full gradient onto the horizontal plane.
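Splitting a gradient into its vertical and horizontal components is a one-line projection. A minimal sketch (the gradient vector is an arbitrary illustration):

```python
import math

def split_gradient(g):
    """Split a 3-D gradient g = (gx, gy, gz) into its horizontal component
    (the vector projection onto the horizontal plane) and its vertical component."""
    horizontal = (g[0], g[1], 0.0)
    vertical = (0.0, 0.0, g[2])
    return horizontal, vertical

g = (3.0, 4.0, 12.0)             # gradient of some scalar field at a point
h, v = split_gradient(g)
h_norm = math.hypot(h[0], h[1])  # norm of the horizontal gradient: 5.0
```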
https://en.wikipedia.org/wiki/Spatial_gradient
Inmathematical optimizationtheory,dualityor theduality principleis the principle thatoptimization problemsmay be viewed from either of two perspectives, theprimal problemor thedual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal.[1]This fact is calledweak duality. In general, the optimal values of the primal and dual problems need not be equal. Their difference is called theduality gap. Forconvex optimizationproblems, the duality gap is zero under aconstraint qualificationcondition. This fact is calledstrong duality. Usually the term "dual problem" refers to theLagrangian dual problembut other dual problems are used – for example, theWolfe dual problemand theFenchel dual problem. The Lagrangian dual problem is obtained by forming theLagrangianof a minimization problem by using nonnegativeLagrange multipliersto add the constraints to the objective function, and then solving for the primal variable values that minimize the original objective function. This solution gives the primal variables as functions of the Lagrange multipliers, which are called dual variables, so that the new problem is to maximize the objective function with respect to the dual variables under the derived constraints on the dual variables (including at least the nonnegativity constraints). 
In general given two dual pairs of separated locally convex spaces (X,X∗){\displaystyle \left(X,X^{*}\right)} and (Y,Y∗){\displaystyle \left(Y,Y^{*}\right)} and the function f:X→R∪{+∞}{\displaystyle f:X\to \mathbb {R} \cup \{+\infty \}}, we can define the primal problem as finding x^{\displaystyle {\hat {x}}} such that f(x^)=infx∈Xf(x).{\displaystyle f({\hat {x}})=\inf _{x\in X}f(x).\,} In other words, if x^{\displaystyle {\hat {x}}} exists, f(x^){\displaystyle f({\hat {x}})} is the minimum of the function f{\displaystyle f} and the infimum (greatest lower bound) of the function is attained. If there are constraint conditions, these can be built into the function f{\displaystyle f} by letting f~=f+Iconstraints{\displaystyle {\tilde {f}}=f+I_{\mathrm {constraints} }} where Iconstraints{\displaystyle I_{\mathrm {constraints} }} is a suitable function on X{\displaystyle X} that has a minimum 0 on the constraints, and for which one can prove that infx∈Xf~(x)=infxconstrainedf(x){\displaystyle \inf _{x\in X}{\tilde {f}}(x)=\inf _{x\ \mathrm {constrained} }f(x)}. The latter condition is trivially, but not always conveniently, satisfied for the characteristic function (i.e. Iconstraints(x)=0{\displaystyle I_{\mathrm {constraints} }(x)=0} for x{\displaystyle x} satisfying the constraints and Iconstraints(x)=∞{\displaystyle I_{\mathrm {constraints} }(x)=\infty } otherwise). Then extend f~{\displaystyle {\tilde {f}}} to a perturbation function F:X×Y→R∪{+∞}{\displaystyle F:X\times Y\to \mathbb {R} \cup \{+\infty \}} such that F(x,0)=f~(x){\displaystyle F(x,0)={\tilde {f}}(x)}.[2] The duality gap is the difference of the right and left hand sides of the inequality supy∗∈Y∗(−F∗(0,y∗))≤infx∈XF(x,0),{\displaystyle \sup _{y^{*}\in Y^{*}}\left(-F^{*}\left(0,y^{*}\right)\right)\leq \inf _{x\in X}F(x,0),} where F∗{\displaystyle F^{*}} is the convex conjugate in both variables and sup{\displaystyle \sup } denotes the supremum (least upper bound).[2][3][4] The duality gap is the difference between the values of any primal solutions and any dual solutions.
If d∗{\displaystyle d^{*}} is the optimal dual value and p∗{\displaystyle p^{*}} is the optimal primal value, then the duality gap is equal to p∗−d∗{\displaystyle p^{*}-d^{*}}. This value is always greater than or equal to 0 (for minimization problems). The duality gap is zero if and only if strong duality holds. Otherwise the gap is strictly positive and weak duality holds.[5] In computational optimization, another "duality gap" is often reported, which is the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. This alternative "duality gap" quantifies the discrepancy between the value of a current feasible but suboptimal iterate for the primal problem and the value of the dual problem; the value of the dual problem is, under regularity conditions, equal to the value of the convex relaxation of the primal problem. The convex relaxation is the problem arising from replacing a non-convex feasible set with its closed convex hull and replacing a non-convex function with its convex closure, that is, the function whose epigraph is the closed convex hull of the epigraph of the original primal objective function.[6][7][8][9][10][11][12][13][14][15][16] Linear programming problems are optimization problems in which the objective function and the constraints are all linear. In the primal problem, the objective function is a linear combination of n variables. There are m constraints, each of which places an upper bound on a linear combination of the n variables. The goal is to maximize the value of the objective function subject to the constraints. A solution is a vector (a list) of n values that achieves the maximum value for the objective function. In the dual problem, the objective function is a linear combination of the m values that are the limits in the m constraints from the primal problem. There are n dual constraints, each of which places a lower bound on a linear combination of m dual variables.
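Weak duality for linear programs can be checked on a toy instance (all coefficients invented for illustration): maximize 3x1 + 2x2 subject to x1 + x2 ≤ 4 and x1 ≤ 3, x ≥ 0; its dual minimizes 4y1 + 3y2 subject to y1 + y2 ≥ 3 and y1 ≥ 2, y ≥ 0:

```python
# Primal: max c^T x  s.t.  A x <= b, x >= 0
# Dual:   min b^T y  s.t.  A^T y >= c, y >= 0
c = [3, 2]
A = [[1, 1],
     [1, 0]]
b = [4, 3]

def primal_feasible(x):
    return all(xi >= 0 for xi in x) and all(
        sum(A[i][j] * x[j] for j in range(2)) <= b[i] for i in range(2))

def dual_feasible(y):
    return all(yi >= 0 for yi in y) and all(
        sum(A[i][j] * y[i] for i in range(2)) >= c[j] for j in range(2))

x = [3, 1]   # primal optimum, value 3*3 + 2*1 = 11
y = [2, 1]   # dual optimum,   value 4*2 + 3*1 = 11
pval = sum(ci * xi for ci, xi in zip(c, x))
dval = sum(bi * yi for bi, yi in zip(b, y))
# Weak duality: every feasible primal value <= every feasible dual value.
# Here both optima are feasible and the values coincide, so the gap is zero.
```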
In the linear case, in the primal problem, from each sub-optimal point that satisfies all the constraints, there is a direction orsubspaceof directions to move that increases the objective function. Moving in any such direction is said to remove slack between thecandidate solutionand one or more constraints. Aninfeasiblevalue of the candidate solution is one that exceeds one or more of the constraints. In the dual problem, the dual vector multiplies the constraints that determine the positions of the constraints in the primal. Varying the dual vector in the dual problem is equivalent to revising the upper bounds in the primal problem. The lowest upper bound is sought. That is, the dual vector is minimized in order to remove slack between the candidate positions of the constraints and the actual optimum. An infeasible value of the dual vector is one that is too low. It sets the candidate positions of one or more of the constraints in a position that excludes the actual optimum. This intuition is made formal by the equations inLinear programming: Duality. Innonlinear programming, the constraints are not necessarily linear. Nonetheless, many of the same principles apply. To ensure that the global maximum of a non-linear problem can be identified easily, the problem formulation often requires that the functions be convex and have compact lower level sets. This is the significance of theKarush–Kuhn–Tucker conditions. They provide necessary conditions for identifying local optima of non-linear programming problems. There are additional conditions (constraint qualifications) that are necessary so that it will be possible to define the direction to anoptimalsolution. An optimal solution is one that is alocal optimum, but possibly not a global optimum. 
Motivation[17] Suppose we want to solve the following nonlinear programming problem: minimizef0(x)subject tofi(x)≤0,i∈{1,…,m}{\displaystyle {\begin{aligned}{\text{minimize }}&f_{0}(x)\\{\text{subject to }}&f_{i}(x)\leq 0,\ i\in \left\{1,\ldots ,m\right\}\\\end{aligned}}} The problem has constraints; we would like to convert it to a program without constraints. Theoretically, it is possible to do it by minimizing the function J(x){\displaystyle J(x)}, defined as J(x)=f0(x)+∑iI[fi(x)]{\displaystyle J(x)=f_{0}(x)+\sum _{i}I[f_{i}(x)]} where I{\displaystyle I} is an infinite step function: I[u]=0{\displaystyle I[u]=0} if u≤0{\displaystyle u\leq 0}, and I[u]=∞{\displaystyle I[u]=\infty } otherwise. But J(x){\displaystyle J(x)} is hard to minimize, as it is not continuous. It is possible to "approximate" I[u]{\displaystyle I[u]} by λu{\displaystyle \lambda u}, where λ{\displaystyle \lambda } is a positive constant. This yields a function known as the Lagrangian: L(x,λ)=f0(x)+∑iλifi(x){\displaystyle L(x,\lambda )=f_{0}(x)+\sum _{i}\lambda _{i}f_{i}(x)} Note that, for every x{\displaystyle x}, maxλ≥0L(x,λ)=J(x){\displaystyle \max _{\lambda \geq 0}L(x,\lambda )=J(x)}. Proof: if fi(x)≤0{\displaystyle f_{i}(x)\leq 0} for all i{\displaystyle i}, then every term λifi(x){\displaystyle \lambda _{i}f_{i}(x)} is non-positive, so the maximum is attained at λ=0{\displaystyle \lambda =0} and equals f0(x)=J(x){\displaystyle f_{0}(x)=J(x)}; if fi(x)>0{\displaystyle f_{i}(x)>0} for some i{\displaystyle i}, then letting λi→∞{\displaystyle \lambda _{i}\to \infty } drives L(x,λ){\displaystyle L(x,\lambda )} to ∞=J(x){\displaystyle \infty =J(x)}. Therefore, the original problem is equivalent to: minxmaxλ≥0L(x,λ){\displaystyle \min _{x}\max _{\lambda \geq 0}L(x,\lambda )}. By reversing the order of min and max, we get: maxλ≥0minxL(x,λ){\displaystyle \max _{\lambda \geq 0}\min _{x}L(x,\lambda )}. The dual function is the inner problem in the above formula: g(λ):=minxL(x,λ){\displaystyle g(\lambda ):=\min _{x}L(x,\lambda )}. The Lagrangian dual program is the program of maximizing g: maxλ≥0g(λ){\displaystyle \max _{\lambda \geq 0}g(\lambda )}. The optimal solution to the dual program is a lower bound for the optimal solution of the original (primal) program; this is the weak duality principle.
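The construction above can be traced end to end on a one-dimensional example of my own choosing: minimize x² subject to 1 − x ≤ 0. Then L(x, λ) = x² + λ(1 − x) is minimized over x at x = λ/2, giving the dual function g(λ) = λ − λ²/4:

```python
def g(lam):
    """Dual function for: minimize x^2 subject to 1 - x <= 0.
    L(x, lam) = x^2 + lam*(1 - x) is minimized at x = lam/2,
    hence g(lam) = lam - lam^2 / 4."""
    return lam - lam * lam / 4

# Scan the dual over a grid of multipliers lam >= 0.
grid = [i / 1000 for i in range(5001)]   # lam in [0, 5]
lam_star = max(grid, key=g)              # maximizer of g: lam = 2
d_star = g(lam_star)                     # dual optimum: 1
p_star = 1.0                             # primal optimum: x* = 1, f(x*) = 1

# Weak duality: g(lam) <= p_star for every lam >= 0 on the grid.
assert all(g(l) <= p_star + 1e-12 for l in grid)
x_star = lam_star / 2                    # recovers the primal solution x* = 1
```

Here the dual maximum equals the primal optimum, so the duality gap is zero, which is consistent with the problem being convex with a strictly feasible point (e.g. x = 2).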
If the primal problem is convex and bounded from below, and there exists a point in which all nonlinear constraints are strictly satisfied (Slater's condition), then the optimal solution to the dual program equals the optimal solution of the primal program; this is the strong duality principle. In this case, we can solve the primal program by finding an optimal solution λ* to the dual program, and then solving: minxL(x,λ∗){\displaystyle \min _{x}L(x,\lambda ^{*})}. Note that, to use either the weak or the strong duality principle, we need a way to compute g(λ). In general this may be hard, as we need to solve a different minimization problem for every λ. But for some classes of functions, it is possible to get an explicit formula for g. Solving the primal and dual programs together is often easier than solving only one of them. Examples are linear programming and quadratic programming. A better and more general approach to duality is provided by Fenchel's duality theorem.[18]: Sub.3.3.1 Another condition in which the min-max and max-min are equal is when the Lagrangian has a saddle point: (x∗, λ∗) is a saddle point of the Lagrange function L if and only if x∗ is an optimal solution to the primal, λ∗ is an optimal solution to the dual, and the optimal values in the indicated problems are equal to each other.[18]: Prop.3.2.2 Given a nonlinear programming problem in standard form with the domain D⊂Rn{\displaystyle {\mathcal {D}}\subset \mathbb {R} ^{n}} having non-empty interior, the Lagrangian function L:Rn×Rm×Rp→R{\displaystyle {\mathcal {L}}:\mathbb {R} ^{n}\times \mathbb {R} ^{m}\times \mathbb {R} ^{p}\to \mathbb {R} } is defined as L(x,λ,ν)=f0(x)+∑i=1mλifi(x)+∑j=1pνjhj(x).{\displaystyle {\mathcal {L}}(x,\lambda ,\nu )=f_{0}(x)+\sum _{i=1}^{m}\lambda _{i}f_{i}(x)+\sum _{j=1}^{p}\nu _{j}h_{j}(x).} The vectors λ{\displaystyle \lambda } and ν{\displaystyle \nu } are called the dual variables or Lagrange multiplier vectors associated with the problem.
The Lagrange dual function g:Rm×Rp→R{\displaystyle g:\mathbb {R} ^{m}\times \mathbb {R} ^{p}\to \mathbb {R} } is defined as g(λ,ν)=infx∈DL(x,λ,ν).{\displaystyle g(\lambda ,\nu )=\inf _{x\in {\mathcal {D}}}{\mathcal {L}}(x,\lambda ,\nu ).} The dual function g is concave, even when the initial problem is not convex, because it is a point-wise infimum of affine functions. The dual function yields lower bounds on the optimal value p∗{\displaystyle p^{*}} of the initial problem; for any λ≥0{\displaystyle \lambda \geq 0} and any ν{\displaystyle \nu } we have g(λ,ν)≤p∗{\displaystyle g(\lambda ,\nu )\leq p^{*}}. If a constraint qualification such as Slater's condition holds and the original problem is convex, then we have strong duality, i.e. d∗=maxλ≥0,νg(λ,ν)=inff0=p∗{\displaystyle d^{*}=\max _{\lambda \geq 0,\nu }g(\lambda ,\nu )=\inf f_{0}=p^{*}}. For a convex minimization problem with inequality constraints, the Lagrangian dual problem is maxλ≥0g(λ),{\displaystyle \max _{\lambda \geq 0}g(\lambda ),} where the objective function is the Lagrange dual function. Provided that the functions f{\displaystyle f} and g1,…,gm{\displaystyle g_{1},\ldots ,g_{m}} are continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem maxx,uf(x)+∑j=1mujgj(x)subject to∇f(x)+∑j=1muj∇gj(x)=0,u≥0{\displaystyle {\begin{aligned}&\max _{x,u}\ f(x)+\sum _{j=1}^{m}u_{j}g_{j}(x)\\&{\text{subject to }}\nabla f(x)+\sum _{j=1}^{m}u_{j}\,\nabla g_{j}(x)=0,\ u\geq 0\end{aligned}}} is called the Wolfe dual problem. This problem may be difficult to deal with computationally, because the objective function is not concave in the joint variables (u,x){\displaystyle (u,x)}. Also, the equality constraint ∇f(x)+∑j=1muj∇gj(x){\displaystyle \nabla f(x)+\sum _{j=1}^{m}u_{j}\,\nabla g_{j}(x)} is nonlinear in general, so the Wolfe dual problem is typically a nonconvex optimization problem. In any case, weak duality holds.[19] According to George Dantzig, the duality theorem for linear optimization was conjectured by John von Neumann immediately after Dantzig presented the linear programming problem. Von Neumann noted that he was using information from his game theory, and conjectured that the two-person zero-sum matrix game was equivalent to linear programming. Rigorous proofs were first published in 1948 by Albert W. Tucker and his group.
(Dantzig's foreword to Nering and Tucker, 1993) In support vector machines (SVMs), formulating the primal problem of SVMs as the dual problem can be used to implement the kernel trick, although the dual formulation can have higher time complexity than the primal in some cases.
https://en.wikipedia.org/wiki/Duality_(optimization)
In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a global maximum or minimum over the domain of the choice variables and a global minimum (maximum) over the multipliers. The Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.[1] The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951.[2] Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.[3][4] Consider the following nonlinear optimization problem in standard form: minimizef(x)subject togi(x)≤0,hj(x)=0{\displaystyle {\begin{aligned}&{\text{minimize }}f(\mathbf {x} )\\&{\text{subject to }}g_{i}(\mathbf {x} )\leq 0,\ h_{j}(\mathbf {x} )=0\end{aligned}}} where x∈X{\displaystyle \mathbf {x} \in \mathbf {X} } is the optimization variable chosen from a convex subset of Rn{\displaystyle \mathbb {R} ^{n}}, f{\displaystyle f} is the objective or utility function, gi(i=1,…,m){\displaystyle g_{i}\ (i=1,\ldots ,m)} are the inequality constraint functions and hj(j=1,…,ℓ){\displaystyle h_{j}\ (j=1,\ldots ,\ell )} are the equality constraint functions. The numbers of inequalities and equalities are denoted by m{\displaystyle m} and ℓ{\displaystyle \ell } respectively.
Corresponding to the constrained optimization problem one can form the Lagrangian function L(x,μ,λ)=f(x)+μ⊤g(x)+λ⊤h(x)=L(x,α)=f(x)+α⊤(g(x)h(x)){\displaystyle {\mathcal {L}}(\mathbf {x} ,\mathbf {\mu } ,\mathbf {\lambda } )=f(\mathbf {x} )+\mathbf {\mu } ^{\top }\mathbf {g} (\mathbf {x} )+\mathbf {\lambda } ^{\top }\mathbf {h} (\mathbf {x} )=L(\mathbf {x} ,\mathbf {\alpha } )=f(\mathbf {x} )+\mathbf {\alpha } ^{\top }{\begin{pmatrix}\mathbf {g} (\mathbf {x} )\\\mathbf {h} (\mathbf {x} )\end{pmatrix}}} where g(x)=[g1(x)⋮gi(x)⋮gm(x)],h(x)=[h1(x)⋮hj(x)⋮hℓ(x)],μ=[μ1⋮μi⋮μm],λ=[λ1⋮λj⋮λℓ]andα=[μλ].{\displaystyle \mathbf {g} \left(\mathbf {x} \right)={\begin{bmatrix}g_{1}\left(\mathbf {x} \right)\\\vdots \\g_{i}\left(\mathbf {x} \right)\\\vdots \\g_{m}\left(\mathbf {x} \right)\end{bmatrix}},\quad \mathbf {h} \left(\mathbf {x} \right)={\begin{bmatrix}h_{1}\left(\mathbf {x} \right)\\\vdots \\h_{j}\left(\mathbf {x} \right)\\\vdots \\h_{\ell }\left(\mathbf {x} \right)\end{bmatrix}},\quad \mathbf {\mu } ={\begin{bmatrix}\mu _{1}\\\vdots \\\mu _{i}\\\vdots \\\mu _{m}\\\end{bmatrix}},\quad \mathbf {\lambda } ={\begin{bmatrix}\lambda _{1}\\\vdots \\\lambda _{j}\\\vdots \\\lambda _{\ell }\end{bmatrix}}\quad {\text{and}}\quad \mathbf {\alpha } ={\begin{bmatrix}\mu \\\lambda \end{bmatrix}}.}TheKarush–Kuhn–Tucker theoremthen states the following. Theorem—(sufficiency) If(x∗,α∗){\displaystyle (\mathbf {x} ^{\ast },\mathbf {\alpha } ^{\ast })}is asaddle pointofL(x,α){\displaystyle L(\mathbf {x} ,\mathbf {\alpha } )}inx∈X{\displaystyle \mathbf {x} \in \mathbf {X} },μ≥0{\displaystyle \mathbf {\mu } \geq \mathbf {0} }, thenx∗{\displaystyle \mathbf {x} ^{\ast }}is an optimal vector for the above optimization problem. 
(necessity) Suppose thatf(x){\displaystyle f(\mathbf {x} )}andgi(x){\displaystyle g_{i}(\mathbf {x} )},i=1,…,m{\displaystyle i=1,\ldots ,m}, areconvexinX{\displaystyle \mathbf {X} }and that there existsx0∈relint⁡(X){\displaystyle \mathbf {x} _{0}\in \operatorname {relint} (\mathbf {X} )}such thatg(x0)<0{\displaystyle \mathbf {g} (\mathbf {x} _{0})<\mathbf {0} }(i.e.,Slater's conditionholds). Then with an optimal vectorx∗{\displaystyle \mathbf {x} ^{\ast }}for the above optimization problem there is associated a vectorα∗=[μ∗λ∗]{\displaystyle \mathbf {\alpha } ^{\ast }={\begin{bmatrix}\mu ^{*}\\\lambda ^{*}\end{bmatrix}}}satisfyingμ∗≥0{\displaystyle \mathbf {\mu } ^{*}\geq \mathbf {0} }such that(x∗,α∗){\displaystyle (\mathbf {x} ^{\ast },\mathbf {\alpha } ^{\ast })}is a saddle point ofL(x,α){\displaystyle L(\mathbf {x} ,\mathbf {\alpha } )}.[5] Since the idea of this approach is to find asupporting hyperplaneon the feasible setΓ={x∈X:gi(x)≤0,i=1,…,m}{\displaystyle \mathbf {\Gamma } =\left\{\mathbf {x} \in \mathbf {X} :g_{i}(\mathbf {x} )\leq 0,i=1,\ldots ,m\right\}}, the proof of the Karush–Kuhn–Tucker theorem makes use of thehyperplane separation theorem.[6] The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where aclosed-formsolution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.[7] Suppose that theobjective functionf:Rn→R{\displaystyle f\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} }and the constraint functionsgi:Rn→R{\displaystyle g_{i}\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} }andhj:Rn→R{\displaystyle h_{j}\colon \mathbb {R} ^{n}\rightarrow \mathbb {R} }havesubderivativesat a pointx∗∈Rn{\displaystyle x^{*}\in \mathbb {R} ^{n}}. 
If x∗{\displaystyle x^{*}} is a local optimum and the optimization problem satisfies some regularity conditions (see below), then there exist constants μi(i=1,…,m){\displaystyle \mu _{i}\ (i=1,\ldots ,m)} and λj(j=1,…,ℓ){\displaystyle \lambda _{j}\ (j=1,\ldots ,\ell )}, called KKT multipliers, such that the following four groups of conditions hold:[8] Stationarity (for minimization): 0=∇f(x∗)+∑i=1mμi∇gi(x∗)+∑j=1ℓλj∇hj(x∗).{\displaystyle 0=\nabla f(x^{*})+\sum _{i=1}^{m}\mu _{i}\nabla g_{i}(x^{*})+\sum _{j=1}^{\ell }\lambda _{j}\nabla h_{j}(x^{*}).} Primal feasibility: gi(x∗)≤0{\displaystyle g_{i}(x^{*})\leq 0} for i=1,…,m, and hj(x∗)=0{\displaystyle h_{j}(x^{*})=0} for j=1,…,ℓ. Dual feasibility: μi≥0{\displaystyle \mu _{i}\geq 0} for i=1,…,m. Complementary slackness: ∑i=1mμigi(x∗)=0.{\displaystyle \sum _{i=1}^{m}\mu _{i}g_{i}(x^{*})=0.} The last condition is sometimes written in the equivalent form: μigi(x∗)=0,fori=1,…,m.{\displaystyle \mu _{i}g_{i}(x^{*})=0,{\text{ for }}i=1,\ldots ,m.} In the particular case m=0{\displaystyle m=0}, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers. Theorem—(sufficiency) If there exists a solution x∗{\displaystyle x^{*}} to the primal problem, a solution (μ∗,λ∗){\displaystyle (\mu ^{*},\lambda ^{*})} to the dual problem, such that together they satisfy the KKT conditions, then the problem pair has strong duality, and x∗,(μ∗,λ∗){\displaystyle x^{*},(\mu ^{*},\lambda ^{*})} is a solution pair to the primal and dual problems. (necessity) If the problem pair has strong duality, then for any solution x∗{\displaystyle x^{*}} to the primal problem and any solution (μ∗,λ∗){\displaystyle (\mu ^{*},\lambda ^{*})} to the dual problem, the pair x∗,(μ∗,λ∗){\displaystyle x^{*},(\mu ^{*},\lambda ^{*})} must satisfy the KKT conditions.[9] First, for the x∗,(μ∗,λ∗){\displaystyle x^{*},(\mu ^{*},\lambda ^{*})} to satisfy the KKT conditions is equivalent to them being a Nash equilibrium. Fix (μ∗,λ∗){\displaystyle (\mu ^{*},\lambda ^{*})}, and vary x{\displaystyle x}: equilibrium is equivalent to primal stationarity. Fix x∗{\displaystyle x^{*}}, and vary (μ,λ){\displaystyle (\mu ,\lambda )}: equilibrium is equivalent to primal feasibility and complementary slackness. Sufficiency: the solution pair x∗,(μ∗,λ∗){\displaystyle x^{*},(\mu ^{*},\lambda ^{*})} satisfies the KKT conditions, thus is a Nash equilibrium, and therefore closes the duality gap.
Necessity: any solution pair x*, (μ*, λ*) must close the duality gap, thus they must constitute a Nash equilibrium (since neither side could do any better), and thus they satisfy the KKT conditions. The primal problem can be interpreted as moving a particle in the space of x and subjecting it to three kinds of force fields: Primal stationarity states that the "force" of ∂f(x*) is exactly balanced by a linear sum of the forces ∂h_j(x*) and ∂g_i(x*). Dual feasibility additionally states that all the ∂g_i(x*) forces must be one-sided, pointing inwards into the feasible set for x. Complementary slackness states that if g_i(x*) < 0, then the force coming from ∂g_i(x*) must be zero, i.e., μ_i(x*) = 0: since the particle is not on the boundary, the one-sided constraint force cannot activate. The necessary conditions can be written with Jacobian matrices of the constraint functions. Let g : R^n → R^m be defined as g(x) = (g_1(x), …, g_m(x))^⊤ and let h : R^n → R^ℓ be defined as h(x) = (h_1(x), …, h_ℓ(x))^⊤. Let μ = (μ_1, …, μ_m)^⊤ and λ = (λ_1, …, λ_ℓ)^⊤.
Then the necessary conditions can be written as follows. One can ask whether a minimizer point x* of the original, constrained optimization problem (assuming one exists) has to satisfy the above KKT conditions. This is similar to asking under what conditions the minimizer x* of a function f(x) in an unconstrained problem has to satisfy the condition ∇f(x*) = 0. For the constrained case the situation is more complicated, and one can state a variety of (increasingly complicated) "regularity" conditions under which a constrained minimizer also satisfies the KKT conditions. Some common examples of conditions that guarantee this are tabulated in the following, with the LICQ being the most frequently used one. The strict implications among these constraint qualifications can be shown. In practice, weaker constraint qualifications are preferred since they apply to a broader selection of problems. In some cases, the necessary conditions are also sufficient for optimality. In general, however, they are not, and additional information is required, such as the second-order sufficient conditions (SOSC). For smooth functions, SOSC involve the second derivatives, which explains the name. The necessary conditions are sufficient for optimality if the objective function f of a maximization problem is a differentiable concave function, the inequality constraints g_j are differentiable convex functions, the equality constraints h_i are affine functions, and Slater's condition holds.[11] Similarly, if the objective function f of a minimization problem is a differentiable convex function, the necessary conditions are also sufficient for optimality.
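The KKT conditions can be made concrete with a toy problem (an illustrative example, not from the text above): minimize f(x) = x² subject to g(x) = 1 − x ≤ 0, whose constrained minimum is x* = 1 with multiplier μ* = 2. A minimal sketch:

```python
# Toy problem (illustrative): minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
# The constrained minimum is x* = 1 with KKT multiplier mu* = 2.

def f_grad(x):          # gradient of the objective f(x) = x^2
    return 2.0 * x

def g(x):               # inequality constraint, feasible when g(x) <= 0
    return 1.0 - x

def g_grad(x):          # gradient of the constraint
    return -1.0

x_star, mu_star = 1.0, 2.0

stationarity = f_grad(x_star) + mu_star * g_grad(x_star)  # should equal 0
primal_feasible = g(x_star) <= 0
dual_feasible = mu_star >= 0
comp_slackness = mu_star * g(x_star)                      # should equal 0

print(stationarity, primal_feasible, dual_feasible, comp_slackness)
# 0.0 True True 0.0
```

All four condition groups (stationarity, primal feasibility, dual feasibility, complementary slackness) hold at (x*, μ*); perturbing μ* away from 2 breaks stationarity, which is one way to see that the multiplier is pinned down here.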
It was shown by Martin in 1985 that the broader class of functions in which the KKT conditions guarantee global optimality are the so-called Type 1 invex functions.[12][13] For smooth, non-linear optimization problems, a second-order sufficient condition is given as follows. The solution x*, λ*, μ* found in the above section is a constrained local minimum if, for the Lagrangian L, s^⊤ ∇²_xx L(x*, λ*, μ*) s ≥ 0, where s ≠ 0 is any vector satisfying ∇h_j(x*)^⊤ s = 0 for all j and ∇g_i(x*)^⊤ s = 0 for those active inequality constraints g_i(x) corresponding to strict complementarity (i.e., where μ_i > 0). The solution is a strict constrained local minimum if the inequality is also strict. If s^⊤ ∇²_xx L(x*, λ*, μ*) s = 0, the third-order Taylor expansion of the Lagrangian should be used to verify whether x* is a local minimum. The minimization of f(x1, x2) = (x2 − x1²)(x2 − 3x1²) is a good counter-example; see also Peano surface. Often in mathematical economics the KKT approach is used in theoretical models in order to obtain qualitative results. For example,[14] consider a firm that maximizes its sales revenue subject to a minimum profit constraint. Letting Q be the quantity of output produced (to be chosen), R(Q) be sales revenue with a positive first derivative and with a zero value at zero output, C(Q) be production costs with a positive first derivative and with a non-negative value at zero output, and G_min be the positive minimal acceptable level of profit, the problem is a meaningful one if the revenue function levels off so that it eventually is less steep than the cost function.
The problem expressed in the previously given minimization form is and the KKT conditions are SinceQ=0{\displaystyle Q=0}would violate the minimum profit constraint, we haveQ>0{\displaystyle Q>0}and hence the third condition implies that the first condition holds with equality. Solving that equality gives Because it was given thatdR/dQ{\displaystyle {\text{d}}R/{\text{d}}Q}anddC/dQ{\displaystyle {\text{d}}C/{\text{d}}Q}are strictly positive, this inequality along with the non-negativity condition onμ{\displaystyle \mu }guarantees thatμ{\displaystyle \mu }is positive and so the revenue-maximizing firm operates at a level of output at whichmarginal revenuedR/dQ{\displaystyle {\text{d}}R/{\text{d}}Q}is less thanmarginal costdC/dQ{\displaystyle {\text{d}}C/{\text{d}}Q}— a result that is of interest because it contrasts with the behavior of aprofit maximizingfirm, which operates at a level at which they are equal. If we reconsider the optimization problem as a maximization problem with constant inequality constraints: The value function is defined as so the domain ofV{\displaystyle V}is{a∈Rm∣for somex∈X,gi(x)≤ai,i∈{1,…,m}}.{\displaystyle \{a\in \mathbb {R} ^{m}\mid {\text{for some }}x\in X,g_{i}(x)\leq a_{i},i\in \{1,\ldots ,m\}\}.} Given this definition, each coefficientμi{\displaystyle \mu _{i}}is the rate at which the value function increases asai{\displaystyle a_{i}}increases. Thus if eachai{\displaystyle a_{i}}is interpreted as a resource constraint, the coefficients tell you how much increasing a resource will increase the optimum value of our functionf{\displaystyle f}. This interpretation is especially important in economics and is used, for instance, inutility maximization problems. With an extra multiplierμ0≥0{\displaystyle \mu _{0}\geq 0}, which may be zero (as long as(μ0,μ,λ)≠0{\displaystyle (\mu _{0},\mu ,\lambda )\neq 0}), in front of∇f(x∗){\displaystyle \nabla f(x^{*})}the KKT stationarity conditions turn into which are called theFritz John conditions. 
This optimality condition holds without constraint qualifications, and it is equivalent to the optimality condition KKT or (not-MFCQ). The KKT conditions belong to a wider class of first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives.
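The sales-revenue example discussed earlier can be checked numerically. The functional forms below are assumptions for illustration only: R(Q) = 10Q − 0.5Q², C(Q) = 2Q, and G_min = 31.5. The sketch searches a grid for the revenue-maximizing output satisfying the profit constraint and confirms that marginal revenue falls short of marginal cost there:

```python
# Illustrative specification (not from the article): R(Q) = 10Q - 0.5Q^2,
# C(Q) = 2Q, minimum acceptable profit G_min = 31.5.
def R(Q): return 10 * Q - 0.5 * Q**2          # revenue
def C(Q): return 2 * Q                         # cost
G_MIN = 31.5

# Grid search for the revenue maximizer subject to R(Q) - C(Q) >= G_min.
grid = [q / 1000 for q in range(1, 12001)]
feasible = [q for q in grid if R(q) - C(q) >= G_MIN]
q_star = max(feasible, key=R)

mr = 10 - q_star       # marginal revenue dR/dQ at the optimum
mc = 2.0               # marginal cost dC/dQ (constant here)
print(q_star, mr < mc)
# 9.0 True: the constrained optimum exceeds the profit-maximizing
# output (Q = 8 for these numbers), so MR < MC there
```

With these numbers the profit constraint binds at Q* = 9, where marginal revenue (1) is below marginal cost (2), matching the qualitative conclusion in the text that a revenue maximizer under a profit floor produces beyond the profit-maximizing output.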
https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions
Inmathematics,engineering,computer scienceandeconomics, anoptimization problemis theproblemof finding thebestsolution from allfeasible solutions. Optimization problems can be divided into two categories, depending on whether thevariablesarecontinuousordiscrete: In the context of an optimization problem, thesearch spacerefers to the set of all possible points or solutions that satisfy the problem's constraints, targets, or goals.[1]These points represent the feasible solutions that can be evaluated to find the optimal solution according to the objective function. The search space is often defined by the domain of the function being optimized, encompassing all valid inputs that meet the problem's requirements.[2] The search space can vary significantly in size and complexity depending on the problem. For example, in a continuous optimization problem, the search space might be a multidimensional real-valued domain defined by bounds or constraints. In a discrete optimization problem, such as combinatorial optimization, the search space could consist of a finite set of permutations, combinations, or configurations. In some contexts, the termsearch spacemay also refer to the optimization of the domain itself, such as determining the most appropriate set of variables or parameters to define the problem. Understanding and effectively navigating the search space is crucial for designing efficient algorithms, as it directly influences the computational complexity and the likelihood of finding an optimal solution. Thestandard formof acontinuousoptimization problem is[3]minimizexf(x)subjecttogi(x)≤0,i=1,…,mhj(x)=0,j=1,…,p{\displaystyle {\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&f(x)\\&\operatorname {subject\;to} &&g_{i}(x)\leq 0,\quad i=1,\dots ,m\\&&&h_{j}(x)=0,\quad j=1,\dots ,p\end{aligned}}}where Ifm=p= 0, the problem is an unconstrained optimization problem. By convention, the standard form defines aminimization problem. 
Amaximization problemcan be treated bynegatingthe objective function. Formally, acombinatorial optimizationproblemAis a quadruple[citation needed](I,f,m,g), where The goal is then to find for some instancexanoptimal solution, that is, a feasible solutionywithm(x,y)=g{m(x,y′):y′∈f(x)}.{\displaystyle m(x,y)=g\left\{m(x,y'):y'\in f(x)\right\}.} For each combinatorial optimization problem, there is a correspondingdecision problemthat asks whether there is a feasible solution for some particular measurem0. For example, if there is agraphGwhich contains verticesuandv, an optimization problem might be "find a path fromutovthat uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path fromutovthat uses 10 or fewer edges?" This problem can be answered with a simple 'yes' or 'no'. In the field ofapproximation algorithms, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem.[4]
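The fewest-edges optimization problem and its decision version can be sketched directly; the graph below is invented for illustration:

```python
from collections import deque

# Fewest-edges path between two vertices (the optimization problem) and the
# corresponding decision problem "is there a path of at most k edges?".
def fewest_edges(graph, u, v):
    """Breadth-first search: minimum number of edges on a u-v path, or None."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return dist[node]
        for nxt in graph[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return None  # no path exists

def decision(graph, u, v, k):
    """Decision version: is there a u-v path using at most k edges?"""
    d = fewest_edges(graph, u, v)
    return d is not None and d <= k

graph = {'u': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': ['v'], 'v': []}
print(fewest_edges(graph, 'u', 'v'))   # optimization answer: 3
print(decision(graph, 'u', 'v', 10))   # decision answer: True
```

Note the asymmetry the text describes: an algorithm for the optimization version answers every decision instance, while a single yes/no answer does not recover the optimum.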
https://en.wikipedia.org/wiki/Optimization_problem
Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems. Many interesting problems can be formulated as convex optimization problems of the form min_{x ∈ R^N} Σ_{i=1}^n f_i(x), where f_i : R^N → R, i = 1, …, n, are possibly non-differentiable convex functions. The lack of differentiability rules out conventional smooth optimization techniques like the steepest descent method and the conjugate gradient method, but proximal gradient methods can be used instead. Proximal gradient methods start with a splitting step, in which the functions f_1, …, f_n are used individually so as to yield an easily implementable algorithm. They are called proximal because each non-differentiable function among f_1, …, f_n is involved via its proximity operator. The iterative shrinkage-thresholding algorithm,[1] projected Landweber, projected gradient, alternating projections, the alternating-direction method of multipliers, and the alternating split Bregman method are special instances of proximal algorithms.[2] For the theory of proximal gradient methods from the perspective of and with applications to statistical learning theory, see proximal gradient methods for learning. One of the widely used convex optimization algorithms is projections onto convex sets (POCS). This algorithm is employed to recover or synthesize a signal satisfying several convex constraints simultaneously. Let f_i be the indicator function of a non-empty closed convex set C_i modeling a constraint. This reduces to the convex feasibility problem, which requires us to find a solution lying in the intersection of all the convex sets C_i.
In the POCS method, each set C_i is incorporated by its projection operator P_{C_i}, so in each iteration x is updated as x_{k+1} = P_{C_1} P_{C_2} ⋯ P_{C_n} x_k. However, beyond such problems projection operators are not appropriate, and more general operators are required to tackle them. Among the various generalizations of the notion of a convex projection operator that exist, proximal operators are best suited for other purposes. Special instances of proximal gradient methods are
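As an illustration of POCS and of a proximity operator, the sets and numbers below are made up for this sketch, not taken from the text:

```python
import math

# Illustrative POCS run: find a point in the intersection of
#   C1 = {x in R^2 : x1 + x2 >= 2}   (a half-plane)
#   C2 = {x in R^2 : ||x|| <= 2}     (a disk)
def proj_halfplane(p):
    """Euclidean projection onto C1."""
    x, y = p
    if x + y >= 2:
        return (x, y)
    d = (2 - (x + y)) / 2.0            # shift along the unit normal of x1 + x2 = 2
    return (x + d, y + d)

def proj_disk(p):
    """Euclidean projection onto C2."""
    x, y = p
    r = math.hypot(x, y)
    return (x, y) if r <= 2 else (2 * x / r, 2 * y / r)

p = (-3.0, 0.0)                        # start outside both sets
for _ in range(60):
    p = proj_disk(proj_halfplane(p))   # one POCS sweep

feasible = (p[0] + p[1] >= 2 - 1e-9) and (math.hypot(*p) <= 2 + 1e-9)
print(feasible)                        # True: the iterates reach C1 ∩ C2

# Proximity operators generalize such projections beyond indicator functions:
# for f(x) = lam * |x|, prox_f is the soft-thresholding map.
def soft_threshold(x, lam):
    return math.copysign(max(abs(x) - lam, 0.0), x)
```

Replacing each projection with the proximity operator of a general convex f_i is exactly the step that turns POCS-style alternation into a proximal algorithm.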
https://en.wikipedia.org/wiki/Proximal_gradient_method
Many problems in mathematical programming can be formulated as problems on convex sets or convex bodies. Six kinds of problems are particularly important:[1]: Sec.2 optimization, violation, validity, separation, membership and emptiness. Each of these problems has a strong (exact) variant and a weak (approximate) variant. In all problem descriptions, K denotes a compact and convex set in R^n. The strong variants of the problems are:[1]: 47 Closely related to the problems on convex sets is the following problem on a convex function f : R^n → R: From the definitions, it is clear that algorithms for some of the problems can be used to solve other problems in oracle-polynomial time: The solvability of a problem crucially depends on the nature of K and on the way K is represented. For example: Each of the above problems has a weak variant, in which the answer is given only approximately. To define the approximation, we define the following operations on convex sets:[1]: 6 Using these notions, the weak variants are:[1]: 50 Analogously to the strong variants, algorithms for some of the problems can be used to solve other problems in oracle-polynomial time: Some of these weak variants can be slightly strengthened.[1]: Rem.2.1.5(a) For example, WVAL with inputs c, t' = t + ε/2 and ε' = ε/2 does one of the following: Besides these trivial implications, there are highly non-trivial implications, whose proof relies on the ellipsoid method. Some of these implications require additional information about the convex body K. In particular, besides the number of dimensions n, the following information may be needed:[1]: 53 The following can be done in oracle-polynomial time:[1]: Sec.4 The following implications use the polar set of K, defined as K* := {y ∈ R^n : y^T x ≤ 1 for all x ∈ K}. Note that K** = K.
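For intuition, here is a minimal strong-separation oracle for the simplest possible K, the Euclidean unit ball (an illustrative sketch; real SSEP instances must handle arbitrary convex bodies given only by oracles):

```python
import math

# A strong separation oracle for K = {x : ||x|| <= 1}, the Euclidean unit ball.
def separation_oracle(y):
    """Return None if y is in K; otherwise a vector c with c.y > c.x for all x in K."""
    norm = math.sqrt(sum(v * v for v in y))
    if norm <= 1.0:
        return None                      # membership: y lies in K
    return [v / norm for v in y]         # c.y = norm > 1 >= c.x on K

print(separation_oracle([0.5, 0.5]))     # None: inside the ball
print(separation_oracle([3.0, 4.0]))     # [0.6, 0.8]: a separating hyperplane
```

The returned unit vector c defines the hyperplane c·x = 1, which the ball lies on one side of and the query point on the other; this is the kind of answer the ellipsoid method consumes at each step.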
Some of the above implications provably do not work without the additional information.[1]: Sec.4.5 Using the above basic problems, one can solve several geometric problems related to convex bodies. In particular, one can find an approximateJohn ellipsoidin oracle-polynomial time:[1]: Sec.4.6 These results imply that it is possible to approximate any norm by anellipsoidal norm. Specifically, suppose a normNis given by aweak norm oracle: for every vectorxinQnand every rationalε>0, it returns a rational numberrsuch that |N(x)-r|<ε. Suppose we also know a constantc1that gives a lower bound on the ratio of N(x) to the Euclidean norm,c1‖x‖≤N(x){\displaystyle c_{1}\|x\|\leq N(x)}Then we can compute in oracle-polynomial time a linear transformationTofRnsuch that, for allxinRn,‖Tx‖≤N(x)≤n(n+1)‖Tx‖{\displaystyle \|Tx\|\leq N(x)\leq {\sqrt {n(n+1)}}\|Tx\|}. It is also possible to approximate thediameterand thewidthofK: Some problems not yet solved (as of 1993) are whether it is possible to compute in polytime the volume, the center of gravity or the surface area of a convex body given by a separation oracle. Some binary operations on convex sets preserve the algorithmic properties of the various problems. In particular, given two convex setsKandL:[1]: Sec.4.7 In some cases, an oracle for a weak problem can be used to solve the corresponding strong problem. An algorithm for WMEM, given circumscribed radiusRand inscribe radiusrand interior pointa0, can solve the following slightly stronger membership problem (still weaker than SMEM): given a vectoryinQn, and a rationalε>0, either assert thatyinS(K,ε), or assert thatynot inK.The proof is elementary and uses a single call to the WMEM oracle.[1]: 108 Suppose now thatKis apolyhedron. Then, many oracles to weak problems can be used to solve the corresponding strong problems in oracle-polynomial time. 
The reductions require an upper bound on the representation complexity (facet complexityorvertex complexity) of the polyhedron:[1]: Sec. 6.3 The proofs use results onsimultaneous diophantine approximation. How essential is the additional information for the above reductions?[1]: 173 Using the previous results, it is possible to prove implications between strong variants. The following can be done in oracle-polynomial time for awell-described polyhedron- a polyhedron for which an upper bound on therepresentation complexityis known:[1]: Sec.6.4 So SSEP, SVIOL and SOPT are all polynomial-time equivalent. This equivalence, in particular, impliesKhachian's proof thatlinear programmingcan be solved in polynomial time,[1]: Thm.6.4.12since when a polyhedron is given by explicit linear inequalities, a SSEP oracle is trivial to implement. Moreover, a basic optimal dual solution can also be found in polytime.[1]: Thm.6.5.14 Note that the above theorems do not require an assumption of full-dimensionality or a lower bound on the volume. Other reductions cannot be made without additional information: Jain[5]extends one of the above theorems to convex sets that are not polyhedra and not well-described. He only requires a guarantee that the convex set contains at leastone point(not necessarily a vertex) with a bounded representation length. He proves that, under this assumption, SNEMPT can be solved (a point in the convex set can be found) in polytime.[5]: Thm.12Moreover, the representation length of the found point is at most P(n) times the given bound, where P is some polynomial function.[5]: Thm.13 Using the above basic problems, one can solve several geometric problems related to nonempty polytopes and polyhedra with a bound on the representation complexity, in oracle-polynomial time, given an oracle to SSEP, SVIOL or SOPT:[1]: Sec.6.5
https://en.wikipedia.org/wiki/Algorithmic_problems_on_convex_sets
InBayesian inference, theBernstein–von Mises theoremprovides the basis for using Bayesian credible sets for confidence statements inparametric models. It states that under some conditions, a posterior distribution converges intotal variation distanceto a multivariate normal distribution centered at the maximum likelihood estimatorθ^n{\displaystyle {\widehat {\theta }}_{n}}with covariance matrix given byn−1I(θ0)−1{\displaystyle n^{-1}{\mathcal {I}}(\theta _{0})^{-1}}, whereθ0{\displaystyle \theta _{0}}is the true population parameter andI(θ0){\displaystyle {\mathcal {I}}(\theta _{0})}is theFisher information matrixat the true population parameter value:[1] The Bernstein–von Mises theorem linksBayesian inferencewithfrequentist inference. It assumes there is some true probabilistic process that generates the observations, as in frequentism, and then studies the quality of Bayesian methods of recovering that process, and making uncertainty statements about that process. In particular, it states that asymptotically, many Bayesian credible sets of a certain credibility levelα{\displaystyle \alpha }will act as confidence sets of confidence levelα{\displaystyle \alpha }, which allows for the interpretation of Bayesian credible sets. Let(Pθ:θ∈Θ){\displaystyle (P_{\theta }\,:\,\theta \in \Theta )}be a well-specified statistical model, where the parameter spaceΘ{\displaystyle \Theta }is a subset ofRk{\displaystyle \mathbb {R} ^{k}}. Further, let dataX1,…,Xn∈X{\displaystyle X_{1},\ldots ,X_{n}\in {\mathcal {X}}}be independently and identically distributed fromPθ0{\displaystyle P_{\theta _{0}}}. 
Suppose that all of the following conditions hold: Then for any estimatorθ^n{\displaystyle {\widehat {\theta }}_{n}}satisfyingn(θ^n−θ0)→dN(0,I−1(θ0)){\displaystyle {\sqrt {n}}({\widehat {\theta }}_{n}-\theta _{0})\xrightarrow {d} {\mathcal {N}}(0,{\mathcal {I}}^{-1}(\theta _{0}))}, the posterior distributionΠn{\displaystyle \Pi _{n}}ofθ∣X1,…,Xn{\displaystyle \theta \mid X_{1},\ldots ,X_{n}}satisfies ||Πn−N(θ^n,1nI−1(θ0))||TV→Pθ00.{\displaystyle {\left|\left|\Pi _{n}-{\mathcal {N}}\left({\widehat {\theta }}_{n},{\frac {1}{n}}{\mathcal {I}}^{-1}({\theta _{0}})\right)\right|\right|}_{\mathrm {TV} }\xrightarrow {P_{\theta _{0}}} 0.} asn→∞{\displaystyle n\rightarrow \infty }. Under certain regularity conditions, themaximum likelihood estimatoris an asymptotically efficient estimator and can thus be used asθ^n{\displaystyle {\widehat {\theta }}_{n}}in the theorem statement. This then yields that the posterior distribution converges in total variation distance to the asymptotic distribution of the maximum likelihood estimator, which is commonly used to construct frequentist confidence sets. The most important implication of the Bernstein–von Mises theorem is that the Bayesian inference is asymptotically correct from a frequentist point of view. This means that for large amounts of data, one can use the posterior distribution to make, from a frequentist point of view, valid statements about estimation and uncertainty. The theorem is named afterRichard von MisesandS. N. Bernstein, although the first proper proof was given byJoseph L. Doobin 1949 for random variables with finiteprobability space.[2]LaterLucien Le Cam, his PhD studentLorraine Schwartz,David A. FreedmanandPersi Diaconisextended the proof under more general assumptions.[citation needed] In case of a misspecified model, the posterior distribution will also become asymptotically Gaussian with a correct mean, but not necessarily with the Fisher information as the variance. 
This implies that Bayesian credible sets of level α cannot be interpreted as confidence sets of level α.[3] In the case of nonparametric statistics, the Bernstein–von Mises theorem usually fails to hold, with the notable exception of the Dirichlet process. A remarkable result was found by Freedman in 1965: the Bernstein–von Mises theorem does not hold almost surely if the random variable has a countably infinite probability space; however, this depends on allowing a very broad range of possible priors. In practice, the priors typically used in research do have the desirable property, even with a countably infinite probability space. Different summary statistics such as the mode and mean may behave differently in the posterior distribution. In Freedman's examples, the posterior density and its mean can converge on the wrong result, but the posterior mode is consistent and will converge on the correct result.
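A small numerical illustration of the well-specified case (toy numbers, assuming Bernoulli observations with a uniform Beta(1, 1) prior): the posterior is Beta(1 + s, 1 + n − s), and its numerically approximated total variation distance to N(θ̂_n, θ̂_n(1 − θ̂_n)/n) shrinks as n grows:

```python
import math

# Bernoulli data, uniform Beta(1, 1) prior: compare the Beta posterior with the
# normal distribution centered at the MLE with variance (n * Fisher info)^(-1).
def beta_pdf(t, a, b):
    log_c = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_c + (a - 1) * math.log(t) + (b - 1) * math.log(1 - t))

def normal_pdf(t, mean, var):
    return math.exp(-(t - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def tv_distance(n, successes, steps=4000):
    """Midpoint-rule approximation of the total variation distance on (0, 1)."""
    mle = successes / n                  # maximum likelihood estimate
    var = mle * (1 - mle) / n            # 1/n times the inverse Fisher information
    dt = 1.0 / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        total += abs(beta_pdf(t, 1 + successes, 1 + n - successes)
                     - normal_pdf(t, mle, var)) * dt
    return 0.5 * total

tv_20 = tv_distance(20, 12)        # 60% successes, small sample
tv_2000 = tv_distance(2000, 1200)  # same proportion, large sample
print(tv_2000 < tv_20)             # True: the TV distance shrinks with n
```

This is only a sanity check of the convergence statement, not a proof; the grid approximation of the integral is the main numerical shortcut.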
https://en.wikipedia.org/wiki/Bernstein%E2%80%93von_Mises_theorem
Theprobability ofsuccess(POS)is a statistics concept commonly used in thepharmaceutical industryincluding byhealth authoritiesto supportdecision making. The probability of success is a concept closely related to conditional power andpredictive power. Conditional power is the probability of observing statistical significance given the observed data assuming the treatment effect parameter equals a specific value. Conditional power is often criticized for this assumption. If we know the exact value of the treatment effect, there is no need to do the experiment. To address this issue, we can consider conditional power in aBayesiansetting by considering the treatment effect parameter to be arandom variable. Taking theexpected valueof the conditional power with respect to theposterior distributionof the parameter gives the predictive power. Predictive power can also be calculated in afrequentistsetting. No matter how it is calculated, predictive power is a random variable since it is aconditional probabilityconditioned on randomly observed data. Both conditional power and predictive power usestatistical significanceas the success criterion. However, statistical significance is often not sufficient to define success. For example, ahealth authorityoften requires the magnitude of the treatment effect to be bigger than an effect which is merely statistically significant in order to support successful registration. In order to address this issue, we can extend conditional power and predictive power to the concept of probability of success. For probability of success, the success criterion is not restricted to statistical significance. It can be something else such as a clinical meaningful result. Traditional pilot trial design is typically done by controllingtype I errorrate and power for detecting a specific parameter value. The goal of a pilot trial such as a phase II trial is usually not to support registration. 
Therefore it doesn't make sense to control the type I error rate, especially a large type I error, as typically done in a phase II trial. A pilot trial usually provides evidence to support a Go/No-Go decision for a confirmatory trial, so it makes more sense to design such a trial based on PPOS. To support a No-Go decision, traditional methods require the PPOS to be small. However, the PPOS can be small just due to chance. To solve this issue, we can require the PPOS credible interval to be tight, so that the PPOS calculation is supported by sufficient information and the PPOS is not small merely by chance. Finding an optimal design is equivalent to finding the solution to the following two equations,[1] where PPOS1 and PPOS2 are user-defined cutoff values. The first equation ensures that the PPOS cutoff is small, so that not too many trials are prevented from entering the next stage (guarding against false negatives), yet not too small, so that not too many trials enter the next stage (guarding against false positives). The second equation ensures that the PPOS credible interval is tight enough that the PPOS calculation is supported by sufficient information, but not so tight that it demands too many resources. A traditional futility interim is designed based on beta spending. However, beta spending doesn't have an intuitive interpretation, so it is difficult to communicate to non-statistician colleagues. Since PPOS has an intuitive interpretation, it makes more sense to design a futility interim using PPOS. To declare futility, we require the PPOS to be small and the PPOS calculation to be supported by sufficient information. According to Tang (2015),[2] finding the optimal design is equivalent to solving the following two equations. A traditional efficacy interim is designed based on spending functions.
Since spending functions don't have an intuitive interpretation, they are difficult to communicate to non-statistician colleagues. In contrast, probability of success has an intuitive interpretation and hence can facilitate communication with non-statistician colleagues. Tang (2016)[3][4] proposes the use of the following criteria to support efficacy interim decision making: mCPOS > c1 and lCPOS > c2, where mCPOS is the median of CPOS with respect to the distribution of the parameter and lCPOS is the lower bound of the credible interval of CPOS. The first criterion ensures that the probability of success is large. The second criterion ensures that the credible interval of CPOS is tight, so that the CPOS calculation is supported by enough information and the probability of success is not large merely by chance. Finding the optimal design is equivalent to finding the solution to the following equations:
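The distinction between conditional and predictive power described earlier can be sketched for a one-sided z-test; the effect size, sample size, and posterior below are illustrative assumptions, not values from the text:

```python
import math
import random

# Conditional vs. predictive power for a one-sided z-test where the effect
# estimate satisfies delta_hat ~ N(delta, 1/n). (All numbers are toy values.)
def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def conditional_power(delta, n, z_alpha=1.645):
    """Probability of a significant result if the true effect equals delta."""
    return phi(delta * math.sqrt(n) - z_alpha)

def predictive_power(post_mean, post_sd, n, draws=100_000, seed=0):
    """Average conditional power over a normal posterior for the effect."""
    rng = random.Random(seed)
    total = sum(conditional_power(rng.gauss(post_mean, post_sd), n)
                for _ in range(draws))
    return total / draws

cp = conditional_power(0.3, 100)      # assumes the effect is exactly 0.3
pp = predictive_power(0.3, 0.1, 100)  # averages over posterior uncertainty
print(round(cp, 3), round(pp, 3))     # predictive power is lower than
                                      # conditional power at the posterior mean
```

Averaging over the posterior pulls the answer toward 0.5, which is why predictive power here comes out below the conditional power evaluated at the posterior mean.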
https://en.wikipedia.org/wiki/Probability_of_success
Bayesian epistemologyis a formal approach to various topics inepistemologythat has its roots inThomas Bayes' work in the field of probability theory.[1]One advantage of its formal method in contrast to traditional epistemology is that its concepts and theorems can be defined with a high degree of precision. It is based on the idea that beliefs can be interpreted assubjective probabilities. As such, they are subject to the laws ofprobability theory, which act as the norms ofrationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form ofDutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. Bayesians have applied these fundamental principles to various epistemological topics but Bayesianism does not cover all topics of traditional epistemology. The problem of confirmation in thephilosophy of science, for example, can be approached through the Bayesianprinciple of conditionalizationby holding that a piece of evidence confirms a theory if it raises the likelihood that this theory is true. Various proposals have been made to define the concept ofcoherencein terms of probability, usually in the sense that twopropositionscohere if the probability of their conjunction is higher than if they were neutrally related to each other. The Bayesian approach has also been fruitful in the field ofsocial epistemology, for example, concerning the problem oftestimonyor the problem of group belief. Bayesianism still faces various theoretical objections that have not been fully solved. 
Traditional epistemology and Bayesian epistemology are both forms of epistemology, but they differ in various respects, for example, concerning their methodology, their interpretation of belief, the role justification or confirmation plays in them and some of their research interests. Traditional epistemology focuses on topics such as the analysis of thenature of knowledge, usually in terms ofjustified true beliefs, thesources of knowledge, like perception or testimony, thestructure of a body of knowledge, for example in the form offoundationalismorcoherentism, and the problem ofphilosophical skepticismor the question of whether knowledge is possible at all.[2][3]These inquiries are usually based on epistemic intuitions and regard beliefs as either present or absent.[4]Bayesian epistemology, on the other hand, works by formalizing concepts and problems, which are often vague in the traditional approach. It thereby focuses more on mathematical intuitions and promises a higher degree of precision.[1][4]It sees belief as a continuous phenomenon that comes in various degrees, so-calledcredences.[5]Some Bayesians have even suggested that the regular notion of belief should be abandoned.[6]But there are also proposals to connect the two, for example, theLockean thesis, which defines belief as credence above a certain threshold.[7][8]Justificationplays a central role in traditional epistemology while Bayesians have focused on the related notions of confirmation and disconfirmation through evidence.[5]The notion of evidence is important for both approaches but only the traditional approach has been interested in studying the sources of evidence, like perception and memory. 
Bayesianism, on the other hand, has focused on the role of evidence for rationality: how someone's credence should be adjusted upon receiving new evidence.[5] There is an analogy between the Bayesian norms of rationality in terms of probabilistic laws and the traditional norms of rationality in terms of deductive consistency.[5][6] Certain traditional problems, like the topic of skepticism about our knowledge of the external world, are difficult to express in Bayesian terms.[5] Bayesian epistemology is based on only a few fundamental principles, which can be used to define various other notions and can be applied to many topics in epistemology.[5][4] At their core, these principles constitute constraints on how we should assign credences to propositions. They determine what an ideally rational agent would believe.[6] The basic principles can be divided into synchronic or static principles, which govern how credences are to be assigned at any moment, and diachronic or dynamic principles, which determine how the agent should change their beliefs upon receiving new evidence. 
The axioms of probability and the principal principle belong to the static principles, while the principle of conditionalization governs the dynamic aspects as a form of probabilistic inference.[6][4] The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs.[4] This test for determining irrationality has been referred to as the "pragmatic self-defeat test".[6] One important difference to traditional epistemology is that Bayesian epistemology focuses not on the notion of simple belief but on the notion of degrees of belief, so-called credences.[1] This approach tries to capture the idea of certainty:[4] we believe in all kinds of claims, but we are more certain about some, like that the earth is round, than about others, like that Plato was the author of the First Alcibiades. These degrees come in values between 0 and 1. A degree of 1 implies that a claim is completely accepted. A degree of 0, on the other hand, corresponds to full disbelief. This means that the claim is fully rejected and the person firmly believes the opposite claim. A degree of 0.5 corresponds to suspension of belief, meaning that the person has not yet made up their mind: they have no opinion either way and thus neither accept nor reject the claim. According to the Bayesian interpretation of probability, credences stand for subjective probabilities. Following Frank P. Ramsey, they are interpreted in terms of the willingness to bet money on a claim.[9][1][4] So having a credence of 0.8 (i.e. 80%) that your favorite soccer team will win the next game would mean being willing to bet up to four dollars for the chance to make one dollar profit. 
This account draws a tight connection between Bayesian epistemology and decision theory.[10][11] It might seem that betting behavior is only one special area and as such not suited for defining such a general notion as credences. But, as Ramsey argues, we bet all the time when betting is understood in the widest sense. For example, in going to the train station, we bet on the train being there on time; otherwise we would have stayed at home.[4] It follows from the interpretation of credence in terms of willingness to make bets that it would be irrational to ascribe a credence of 0 or 1 to any proposition, except for contradictions and tautologies.[6] The reason for this is that ascribing these extreme values would mean that one would be willing to bet anything, including one's life, even if the payoff was minimal.[1] Another negative side effect of such extreme credences is that they are permanently fixed and cannot be updated anymore upon acquiring new evidence. This central tenet of Bayesianism, that credences are interpreted as subjective probabilities and are therefore governed by the norms of probability, has been referred to as probabilism.[10] These norms express the nature of the credences of ideally rational agents.[4] They do not put demands on what credence we should have in any single given belief, for example, whether it will rain tomorrow. Instead, they constrain the system of beliefs as a whole.[4] For example, if your credence that it will rain tomorrow is 0.8, then your credence in the opposite proposition, i.e. that it will not rain tomorrow, should be 0.2, not 0.1 or 0.5. 
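The complement rule and Ramsey's betting reading of a credence can be sketched in a few lines of Python. The function names here are illustrative, not from any standard library:

```python
def complement_credence(p):
    """Probabilism: the credence in not-A must be 1 minus the credence in A."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("a credence must lie in [0, 1]")
    return 1.0 - p

def fair_betting_odds(credence):
    """Ramsey-style reading: a credence p corresponds to willingness to
    stake up to p units against a payoff of (1 - p) units, i.e. odds p : (1 - p)."""
    return credence / (1.0 - credence)

# A 0.8 credence that the team wins corresponds to 4:1 stakes:
# risking up to four dollars for the chance to make one dollar profit.
odds = fair_betting_odds(0.8)
not_rain = complement_credence(0.8)
```

The division by `1.0 - credence` also shows why extreme credences are problematic on this reading: at a credence of 1, the odds one would accept become unbounded.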
According to Stephan Hartmann and Jan Sprenger, the axioms of probability can be expressed through the following two laws: (1) P(A) = 1 for any tautology A; (2) for incompatible (mutually exclusive) propositions A and B, P(A ∨ B) = P(A) + P(B).[4] Another important Bayesian principle of degrees of belief is the principal principle due to David Lewis.[10] It states that our knowledge of objective probabilities should correspond to our subjective probabilities in the form of credences.[4][5] So if you know that the objective chance of a coin landing heads is 50%, then your credence that the coin will land heads should be 0.5. The axioms of probability together with the principal principle determine the static or synchronic aspect of rationality: what an agent's beliefs should be like when only considering one moment.[1] But rationality also involves a dynamic or diachronic aspect, which comes into play when changing one's credences upon being confronted with new evidence. This aspect is determined by the principle of conditionalization.[1][4] The principle of conditionalization governs how the agent's credence in a hypothesis should change upon receiving new evidence for or against this hypothesis.[6][10] As such, it expresses the dynamic aspect of how ideal rational agents would behave.[1] It is based on the notion of conditional probability, which is the measure of the probability that one event occurs given that another event has already occurred. The unconditional probability that A will occur is usually expressed as P(A), while the conditional probability that A will occur given that B has already occurred is written as P(A ∣ B). For example, the probability of flipping a coin two times and the coin landing heads two times is only 25%. 
But the conditional probability of this occurring given that the coin has landed heads on the first flip is 50%. The principle of conditionalization applies this idea to credences:[1] we should change our credence that the coin will land heads two times upon receiving evidence that it has already landed heads on the first flip. The probability assigned to the hypothesis before the event is called the prior probability.[12] The probability afterward is called the posterior probability. According to the simple principle of conditionalization, this can be expressed in the following way: P_posterior(H) = P_prior(H ∣ E) = P_prior(H ∧ E) / P_prior(E).[1][6] So the posterior probability that the hypothesis is true is equal to the conditional prior probability that the hypothesis is true relative to the evidence, which is equal to the prior probability that both the hypothesis and the evidence are true, divided by the prior probability that the evidence is true. The original expression of this principle, referred to as Bayes' theorem, can be directly deduced from this formulation.[6] The simple principle of conditionalization makes the assumption that our credence in the acquired evidence, i.e. its posterior probability, is 1, which is unrealistic. 
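The simple principle of conditionalization can be illustrated with the coin example above. This is a minimal sketch using exact fractions; the variable names are illustrative:

```python
from fractions import Fraction

def conditionalize(prior_h_and_e, prior_e):
    """Simple principle of conditionalization:
    P_posterior(H) = P_prior(H and E) / P_prior(E)."""
    if prior_e == 0:
        raise ValueError("cannot conditionalize on zero-probability evidence")
    return prior_h_and_e / prior_e

# Two fair coin flips: H = "both land heads", E = "first flip lands heads".
prior_h = Fraction(1, 4)        # P_prior(H): one of four equally likely outcomes
prior_h_and_e = Fraction(1, 4)  # H entails E, so P(H and E) = P(H)
prior_e = Fraction(1, 2)        # P_prior(E)

posterior_h = conditionalize(prior_h_and_e, prior_e)  # 1/4 divided by 1/2 = 1/2
```

The posterior credence of 1/2 matches the conditional probability stated in the text: learning that the first flip landed heads raises the credence in two heads from 25% to 50%.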
For example, scientists sometimes need to discard previously accepted evidence upon making new discoveries, which would be impossible if the corresponding credence was 1.[6] An alternative form of conditionalization, proposed by Richard Jeffrey, adjusts the formula to take the probability of the evidence into account:[13][14] P_posterior(H) = P_prior(H ∣ E) · P_posterior(E) + P_prior(H ∣ ¬E) · P_posterior(¬E).[6] A Dutch book is a series of bets that necessarily results in a loss.[15][16] An agent is vulnerable to a Dutch book if their credences violate the laws of probability.[4] This can happen either in synchronic cases, in which the conflict is between beliefs held at the same time, or in diachronic cases, in which the agent does not respond properly to new evidence.[6][16] In the simplest synchronic case, only two credences are involved: the credence in a proposition and in its negation.[17] The laws of probability hold that these two credences together should amount to 1, since either the proposition or its negation is true. Agents who violate this law are vulnerable to a synchronic Dutch book.[6] For example, given the proposition that it will rain tomorrow, suppose that an agent's degree of belief that it is true is 0.51 and the degree that it is false is also 0.51. In this case, the agent would be willing to accept two bets at $0.51 for the chance to win $1: one that it will rain and another that it will not rain. 
The two bets together cost $1.02, resulting in a loss of $0.02, no matter whether it rains or not.[17] The principle behind diachronic Dutch books is the same, but they are more complicated since they involve making bets before and after receiving new evidence and have to take into account that there is a loss in each case no matter how the evidence turns out.[17][16] There are different interpretations of what it means that an agent is vulnerable to a Dutch book. On the traditional interpretation, such a vulnerability reveals that the agent is irrational, since they would willingly engage in behavior that is not in their best self-interest.[6] One problem with this interpretation is that it assumes logical omniscience as a requirement for rationality, which is problematic especially in complicated diachronic cases. An alternative interpretation uses Dutch books as "a kind of heuristic for determining when one's degrees of belief have the potential to be pragmatically self-defeating".[6] This interpretation is compatible with holding a more realistic view of rationality in the face of human limitations.[16] Dutch books are closely related to the axioms of probability.[16] The Dutch book theorem holds that only credence assignments that do not follow the axioms of probability are vulnerable to Dutch books. 
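The synchronic Dutch book from the rain example can be checked numerically. In this sketch, a bet paying $1 on a proposition is assumed to be priced at the agent's credence times the stake; the function name is illustrative:

```python
def dutch_book_loss(credence_a, credence_not_a, stake=1.0):
    """Price of a bet paying `stake` on a proposition is credence * stake.
    Buying bets on both A and not-A, the bettor pays both prices but
    collects `stake` exactly once, whichever way A turns out.
    Returns the guaranteed net loss (positive means a sure loss)."""
    total_price = (credence_a + credence_not_a) * stake
    guaranteed_payout = stake  # exactly one of the two bets pays out
    return total_price - guaranteed_payout

# Incoherent credences from the text: 0.51 in rain and 0.51 in no rain.
loss = dutch_book_loss(0.51, 0.51)     # pays $1.02 for a certain $1.00
# Coherent credences summing to 1: no sure loss can be extracted this way.
coherent = dutch_book_loss(0.8, 0.2)
```

Whenever the two credences sum to more than 1 the returned loss is positive, mirroring the Dutch book theorem for this simplest synchronic case.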
The converse Dutch book theorem states that no credence assignment following these axioms is vulnerable to a Dutch book.[4][16] In the philosophy of science, confirmation refers to the relation between a piece of evidence and a hypothesis confirmed by it.[18] Confirmation theory is the study of confirmation and disconfirmation: how scientific hypotheses are supported or refuted by evidence.[19] Bayesian confirmation theory provides a model of confirmation based on the principle of conditionalization.[6][18] A piece of evidence confirms a theory if the conditional probability of that theory relative to the evidence is higher than the unconditional probability of the theory by itself.[18] Expressed formally: P(H ∣ E) > P(H).[6] If the evidence lowers the probability of the hypothesis, then it disconfirms it. Scientists are usually not just interested in whether a piece of evidence supports a theory but also in how much support it provides. There are different ways in which this degree can be determined.[18] The simplest version just measures the difference between the conditional probability of the hypothesis relative to the evidence and the unconditional probability of the hypothesis, i.e. the degree of support is P(H ∣ E) − P(H).[4] The problem with this measure is that it depends on how certain the theory already is prior to receiving the evidence. So if a scientist is already very certain that a theory is true, then one further piece of evidence will not affect her credence much, even if the evidence would be very strong.[6][4] There are other constraints for how an evidence measure should behave; for example, surprising evidence, i.e. evidence that had a low probability on its own, should provide more support.[4][18] Scientists are often faced with the problem of having to decide between two competing theories. 
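The qualitative criterion P(H ∣ E) > P(H) and the difference measure of support can be sketched as follows. The names are illustrative and the numbers are made-up credences, chosen to show how the measure depends on the prior:

```python
def confirms(p_h_given_e, p_h):
    """E confirms H iff P(H | E) > P(H); if E lowers P(H), it disconfirms H."""
    return p_h_given_e > p_h

def support_difference(p_h_given_e, p_h):
    """Difference measure of the degree of support: P(H | E) - P(H)."""
    return p_h_given_e - p_h

# The same kind of evidence adds little for an already-confident scientist...
weak = support_difference(0.99, 0.95)    # 0.04
# ...but much more for an initially doubtful one.
strong = support_difference(0.70, 0.30)  # 0.40
```

This makes the text's objection concrete: the measure is small whenever the prior is already close to 1, regardless of how strong the evidence is.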
In such cases, the interest is not so much in absolute confirmation, or how much a new piece of evidence would support this or that theory, but in relative confirmation, i.e. in which theory is supported more by the new evidence.[6] A well-known problem in confirmation theory is Carl Gustav Hempel's raven paradox.[20][19][18] Hempel starts by pointing out that seeing a black raven counts as evidence for the hypothesis that all ravens are black, while seeing a green apple is usually not taken to be evidence for or against this hypothesis. The paradox consists in the consideration that the hypothesis "all ravens are black" is logically equivalent to the hypothesis "if something is not black, then it is not a raven".[18] So since seeing a green apple counts as evidence for the second hypothesis, it should also count as evidence for the first one.[6] Bayesianism allows that seeing a green apple supports the raven hypothesis while explaining our initial intuition otherwise. This result is reached if we assume that seeing a green apple provides minimal but still positive support for the raven hypothesis, while spotting a black raven provides significantly more support.[6][18][20] Coherence plays a central role in various epistemological theories, for example, in the coherence theory of truth or in the coherence theory of justification.[21][22] It is often assumed that sets of beliefs are more likely to be true if they are coherent than otherwise.[1] For example, we would be more likely to trust a detective who can connect all the pieces of evidence into a coherent story. 
But there is no general agreement as to how coherence is to be defined.[1][4] Bayesianism has been applied to this field by suggesting precise definitions of coherence in terms of probability, which can then be employed to tackle other problems surrounding coherence.[4] One such definition was proposed by Tomoji Shogenji, who suggests that the coherence between two beliefs is equal to the probability of their conjunction divided by the product of the probabilities of each by itself, i.e. Coherence(A, B) = P(A ∧ B) / (P(A) · P(B)).[4][23] Intuitively, this measures how likely it is that the two beliefs are true at the same time, compared to how likely this would be if they were neutrally related to each other.[23] The coherence is high if the two beliefs are relevant to each other.[4] Coherence defined this way is relative to a credence assignment. This means that two propositions may have a high coherence for one agent and a low coherence for another agent due to the difference in the prior probabilities of the agents' credences.[4] Social epistemology studies the relevance of social factors for knowledge.[24] In the field of science, for example, this is relevant since individual scientists have to place their trust in some claimed discoveries of other scientists in order to progress.[1] The Bayesian approach can be applied to various topics in social epistemology. 
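Shogenji's coherence measure can be computed directly from the three probabilities it mentions. A minimal sketch, with illustrative names and example numbers:

```python
def shogenji_coherence(p_a, p_b, p_a_and_b):
    """Shogenji's measure: P(A and B) / (P(A) * P(B)).
    A value of 1 means the propositions are probabilistically independent
    ("neutrally related"); above 1 they support each other; below 1 they
    are in tension."""
    if p_a <= 0 or p_b <= 0:
        raise ValueError("the measure is undefined for zero-probability conjuncts")
    return p_a_and_b / (p_a * p_b)

# Independent propositions: the conjunction is exactly as likely as chance.
neutral = shogenji_coherence(0.5, 0.4, 0.5 * 0.4)
# Positively relevant propositions: the conjunction is likelier than chance.
high = shogenji_coherence(0.5, 0.4, 0.35)
```

Because the inputs are an agent's credences, the same pair of propositions can score differently for different agents, as the text notes.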
For example, probabilistic reasoning can be used in the field of testimony to evaluate how reliable a given report is.[6] In this way, it can be formally shown that witness reports that are probabilistically independent of each other provide more support than otherwise.[1] Another topic in social epistemology concerns the question of how to aggregate the beliefs of the individuals within a group to arrive at the belief of the group as a whole.[24] Bayesianism approaches this problem by aggregating the probability assignments of the different individuals.[6][1] In order to draw probabilistic inferences based on new evidence, it is necessary to already have a prior probability assigned to the proposition in question.[25] But this is not always the case: there are many propositions that the agent never considered and therefore lacks a credence for. This problem is usually solved by assigning a probability to the proposition in question in order to learn from the new evidence through conditionalization.[6][26] The problem of priors concerns the question of how this initial assignment should be done.[25] Subjective Bayesians hold that there are no or few constraints besides probabilistic coherence that determine how we assign the initial probabilities. The argument for this freedom in choosing the initial credence is that the credences will change as we acquire more evidence and will converge on the same value after enough steps no matter where we start.[6] Objective Bayesians, on the other hand, assert that there are various constraints that determine the initial assignment. 
One important constraint is the principle of indifference.[5][25] It states that the credences should be distributed equally among all the possible outcomes.[27][10] For example, suppose the agent wants to predict the color of balls drawn from an urn containing only red and black balls, without any information about the ratio of red to black balls.[6] Applied to this situation, the principle of indifference states that the agent should initially assume that the probability of drawing a red ball is 50%. This is due to symmetry considerations: it is the only assignment in which the prior probabilities are invariant under a change of labels.[6] While this approach works for some cases, it produces paradoxes in others. Another objection is that one should not assign prior probabilities based on initial ignorance.[6] The norms of rationality according to the standard definitions of Bayesian epistemology assume logical omniscience: the agent has to follow all the laws of probability exactly, for all her credences, in order to count as rational.[28][29] Whoever fails to do so is vulnerable to Dutch books and is therefore irrational. This is an unrealistic standard for human beings, as critics have pointed out.[6] The problem of old evidence concerns cases in which the agent does not know at the time of acquiring a piece of evidence that it confirms a hypothesis, but only learns about this supporting relation later.[6] Normally, the agent would increase her belief in the hypothesis after discovering this relation. But this is not allowed in Bayesian confirmation theory, since conditionalization can only happen upon a change of the probability of the evidential statement, which is not the case here.[6][30] For example, the observation of certain anomalies in the orbit of Mercury is evidence for the theory of general relativity. But this data had been obtained before the theory was formulated, thereby counting as old evidence.[30]
https://en.wikipedia.org/wiki/Bayesian_epistemology
In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. New samples are added to the sequence in two steps: first a new sample is proposed based on the previous sample, then the proposed sample is either added to the sequence or rejected depending on the value of the probability distribution at that point. The resulting sequence can be used to approximate the distribution (e.g. to generate a histogram) or to compute an integral (e.g. an expected value). Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods (e.g. adaptive rejection sampling) that can directly return independent samples from the distribution, and these are free from the problem of autocorrelated samples that is inherent in MCMC methods. The algorithm is named in part for Nicholas Metropolis, the first coauthor of a 1953 paper, entitled Equation of State Calculations by Fast Computing Machines, with Arianna W. Rosenbluth, Marshall Rosenbluth, Augusta H. Teller and Edward Teller. For many years the algorithm was known simply as the Metropolis algorithm.[1][2] The paper proposed the algorithm for the case of symmetrical proposal distributions, but in 1970, W. K. Hastings extended it to the more general case.[3] The generalized method was eventually identified by both names, although the first use of the term "Metropolis–Hastings algorithm" is unclear. Some controversy exists with regard to credit for development of the Metropolis algorithm. 
Metropolis, who was familiar with the computational aspects of the method, had coined the term "Monte Carlo" in an earlier article with Stanisław Ulam, and led the group in the Theoretical Division that designed and built the MANIAC I computer used in the experiments in 1952. However, prior to 2003 there was no detailed account of the algorithm's development. Shortly before his death, Marshall Rosenbluth attended a 2003 conference at LANL marking the 50th anniversary of the 1953 publication. At this conference, Rosenbluth described the algorithm and its development in a presentation titled "Genesis of the Monte Carlo Algorithm for Statistical Mechanics".[4] Further historical clarification is made by Gubernatis in a 2005 journal article[5] recounting the 50th anniversary conference. Rosenbluth makes it clear that he and his wife Arianna did the work, and that Metropolis played no role in the development other than providing computer time. This contradicts an account by Edward Teller, who states in his memoirs that the five authors of the 1953 article worked together for "days (and nights)".[6] In contrast, the detailed account by Rosenbluth credits Teller with a crucial but early suggestion to "take advantage of statistical mechanics and take ensemble averages instead of following detailed kinematics". This, says Rosenbluth, started him thinking about the generalized Monte Carlo approach – a topic which he says he had discussed often with John von Neumann. Arianna Rosenbluth recounted (to Gubernatis in 2003) that Augusta Teller started the computer work, but that Arianna herself took it over and wrote the code from scratch. In an oral history recorded shortly before his death,[7] Rosenbluth again credits Teller with posing the original problem, himself with solving it, and Arianna with programming the computer. 
The Metropolis–Hastings algorithm can draw samples from any probability distribution with probability density P(x), provided that we know a function f(x) proportional to the density P and the values of f(x) can be calculated. The requirement that f(x) must only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because it removes the need to calculate the density's normalization factor, which is often extremely difficult in practice. The Metropolis–Hastings algorithm generates a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution. These sample values are produced iteratively in such a way that the distribution of the next sample depends only on the current sample value, which makes the sequence of samples a Markov chain. Specifically, at each iteration, the algorithm proposes a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted, in which case the candidate value is used in the next iteration, or rejected, in which case the candidate value is discarded and the current value is reused in the next iteration. The probability of acceptance is determined by comparing the values of the function f(x) of the current and candidate sample values with respect to the desired distribution. The method used to propose new candidates is characterized by the probability distribution g(x ∣ y) (sometimes written Q(x ∣ y)) of a new proposed sample x given the previous sample y. This is called the proposal density, proposal function, or jumping distribution. 
A common choice for g(x ∣ y) is a Gaussian distribution centered at y, so that points closer to y are more likely to be visited next, making the sequence of samples into a Gaussian random walk. In the original paper by Metropolis et al. (1953), g(x ∣ y) was suggested to be a uniform distribution limited to some maximum distance from y. More complicated proposal functions are also possible, such as those of Hamiltonian Monte Carlo, Langevin Monte Carlo, or preconditioned Crank–Nicolson. For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm where the proposal function is symmetric, is described below. Let f(x) be a function that is proportional to the desired probability density function P(x) (a.k.a. the target distribution).[a] This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place. The number of iterations the algorithm spends at a specific point x is proportional to P(x). Note that the acceptance ratio α indicates how probable the new proposed sample is with respect to the current sample, according to the distribution whose density is P(x). If we attempt to move to a point that is more probable than the existing point (i.e. a point in a higher-density region of P(x), corresponding to α > 1 ≥ u), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the larger the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of P(x), while only occasionally visiting low-density regions. 
Intuitively, this is why this algorithm works and returns samples that follow the desired distribution with density P(x). Compared with an algorithm like adaptive rejection sampling[8] that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages, such as the autocorrelation between successive samples and the need to discard an initial burn-in period. On the other hand, most simple rejection sampling methods suffer from the "curse of dimensionality", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, does not have this problem to such a degree, and MCMC methods are thus often the only solutions available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from hierarchical Bayesian models and other high-dimensional statistical models used nowadays in many disciplines. In multivariate distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding the suitable jumping distribution to use can be difficult, as the different individual dimensions behave in very different ways, and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as Gibbs sampling, involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. 
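The Metropolis special case described above, with a symmetric Gaussian random-walk proposal, can be sketched in a few lines of Python. This is an illustrative implementation, not code from the original paper; `f` only needs to be proportional to the target density, since any normalization constant cancels in the acceptance ratio:

```python
import math
import random

def metropolis(f, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis sampler for an unnormalized 1-D density f."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        candidate = x + rng.gauss(0.0, step)  # symmetric Gaussian proposal
        alpha = f(candidate) / f(x)           # acceptance ratio (normalization cancels)
        if rng.random() < alpha:              # accept with probability min(1, alpha)
            x = candidate
        samples.append(x)                     # rejected moves repeat the current value
    return samples

# Unnormalized standard normal density: exp(-x^2 / 2).
target = lambda x: math.exp(-0.5 * x * x)
chain = metropolis(target, x0=0.0, n_samples=20000, step=1.0)
mean = sum(chain) / len(chain)                           # should be near 0
second_moment = sum(x * x for x in chain) / len(chain)   # should be near 1
```

Because the proposal is symmetric, the ratio g(x ∣ x′)/g(x′ ∣ x) cancels, and only f(candidate)/f(current) enters the acceptance decision; the more general Metropolis–Hastings acceptance ratio keeps that proposal-density correction.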
That way, the problem of sampling from a potentially high-dimensional space is reduced to a collection of problems of sampling from small dimensionality.[10] This is especially applicable when the multivariate distribution is composed of a set of individual random variables in which each variable is conditioned on only a small number of other variables, as is the case in most typical hierarchical models. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are the adaptive rejection sampling methods,[8] the adaptive rejection Metropolis sampling algorithm,[11] a simple one-dimensional Metropolis–Hastings step, or slice sampling. The purpose of the Metropolis–Hastings algorithm is to generate a collection of states according to a desired distribution P(x). To accomplish this, the algorithm uses a Markov process, which asymptotically reaches a unique stationary distribution π(x) such that π(x) = P(x).[12] A Markov process is uniquely defined by its transition probabilities P(x′ ∣ x), the probability of transitioning from any given state x to any other given state x′. It has a unique stationary distribution π(x) when the following two conditions are met:[12] existence of a stationary distribution, for which a sufficient condition is detailed balance, and uniqueness of that stationary distribution, which is guaranteed by ergodicity of the Markov process. The Metropolis–Hastings algorithm involves designing a Markov process (by constructing transition probabilities) that fulfills the two above conditions, such that its stationary distribution π(x) is chosen to be P(x). The derivation of the algorithm starts with the condition of detailed balance, P(x′ ∣ x) P(x) = P(x ∣ x′) P(x′), which is re-written as P(x′ ∣ x) / P(x ∣ x′) = P(x′) / P(x). The approach is to separate the transition into two sub-steps: the proposal and the acceptance-rejection. 
The proposal distribution g(x′ ∣ x) is the conditional probability of proposing a state x′ given x, and the acceptance distribution A(x′, x) is the probability of accepting the proposed state x′. The transition probability can be written as the product of the two: P(x′ ∣ x) = g(x′ ∣ x) A(x′, x). Inserting this relation into the previous equation, we have A(x′, x) / A(x, x′) = P(x′) g(x ∣ x′) / (P(x) g(x′ ∣ x)). The next step in the derivation is to choose an acceptance ratio that fulfills the condition above. One common choice is the Metropolis choice: A(x′, x) = min(1, P(x′) g(x ∣ x′) / (P(x) g(x′ ∣ x))). For this Metropolis acceptance ratio A, either A(x′, x) = 1 or A(x, x′) = 1 and, either way, the condition is satisfied. The Metropolis–Hastings algorithm can thus be written as follows: starting from an arbitrary state, repeatedly draw a candidate x′ from g(x′ ∣ x), accept it with probability A(x′, x), and otherwise keep the current state. Provided that specified conditions are met, the empirical distribution of saved states x₀, …, x_T will approach P(x). The number of iterations (T) required to effectively estimate P(x) depends on a number of factors, including the relationship between P(x) and the proposal distribution and the desired accuracy of estimation.[13] For distributions on discrete state spaces, it has to be of the order of the autocorrelation time of the Markov process.[14] It is important to notice that it is not clear, in a general problem, which distribution g(x′ ∣ x) one should use or the number of iterations necessary for proper estimation; both are free parameters of the method, which must be adjusted to the particular problem at hand. A common use of the Metropolis–Hastings algorithm is to compute an integral. Specifically, consider a space Ω ⊂ ℝ and a probability distribution P(x) over Ω, x ∈ Ω. 
Metropolis–Hastings can estimate an integral of the form ∫ΩA(x)P(x)dx,{\displaystyle \int _{\Omega }A(x)P(x)\,dx,} where A(x){\displaystyle A(x)} is a (measurable) function of interest. For example, consider a statistic E(x){\displaystyle E(x)} and its probability distribution P(E){\displaystyle P(E)}, which is a marginal distribution. Suppose that the goal is to estimate P(E){\displaystyle P(E)} for E{\displaystyle E} on the tail of P(E){\displaystyle P(E)}. Formally, P(E){\displaystyle P(E)} can be written as an expectation over P(x){\displaystyle P(x)} and, thus, estimating P(E){\displaystyle P(E)} can be accomplished by estimating the expected value of the indicator function AE(x)≡1E(x){\displaystyle A_{E}(x)\equiv \mathbf {1} _{E}(x)}, which is 1 when E(x)∈[E,E+ΔE]{\displaystyle E(x)\in [E,E+\Delta E]} and zero otherwise. Because E{\displaystyle E} is on the tail of P(E){\displaystyle P(E)}, the probability of drawing a state x{\displaystyle x} with E(x){\displaystyle E(x)} on the tail of P(E){\displaystyle P(E)} is proportional to P(E){\displaystyle P(E)}, which is small by definition. The Metropolis–Hastings algorithm can be used here to sample rare states more often and thus increase the number of samples used to estimate P(E){\displaystyle P(E)} on the tails. This can be done, e.g., by using a sampling distribution π(x){\displaystyle \pi (x)} to favor those states (e.g. π(x)∝eaE{\displaystyle \pi (x)\propto e^{aE}} with a>0{\displaystyle a>0}). Suppose that the most recent value sampled is xt{\displaystyle x_{t}}. To follow the Metropolis–Hastings algorithm, we next draw a new proposal state x′{\displaystyle x'} with probability density g(x′∣xt){\displaystyle g(x'\mid x_{t})} and calculate a value a=a1a2,{\displaystyle a=a_{1}a_{2},} where a1=P(x′)P(xt){\displaystyle a_{1}={\frac {P(x')}{P(x_{t})}}} is the probability (e.g., Bayesian posterior) ratio between the proposed sample x′{\displaystyle x'} and the previous sample xt{\displaystyle x_{t}}, and a2=g(xt∣x′)g(x′∣xt){\displaystyle a_{2}={\frac {g(x_{t}\mid x')}{g(x'\mid x_{t})}}} is the ratio of the proposal density in two directions (from xt{\displaystyle x_{t}} to x′{\displaystyle x'} and conversely). This is equal to 1 if the proposal density is symmetric.
Then the new state xt+1{\displaystyle x_{t+1}} is chosen according to the following rules: if a≥1{\displaystyle a\geq 1}, accept the proposal and set xt+1=x′{\displaystyle x_{t+1}=x'}; otherwise, accept it with probability a{\displaystyle a}, and on rejection set xt+1=xt{\displaystyle x_{t+1}=x_{t}}. The Markov chain is started from an arbitrary initial value x0{\displaystyle x_{0}}, and the algorithm is run for many iterations until this initial state is "forgotten". These samples, which are discarded, are known as burn-in. The remaining set of accepted values of x{\displaystyle x} represent a sample from the distribution P(x){\displaystyle P(x)}. The algorithm works best if the proposal density matches the shape of the target distribution P(x){\displaystyle P(x)}, from which direct sampling is difficult, that is g(x′∣xt)≈P(x′){\displaystyle g(x'\mid x_{t})\approx P(x')}. If a Gaussian proposal density g{\displaystyle g} is used, the variance parameter σ2{\displaystyle \sigma ^{2}} has to be tuned during the burn-in period. This is usually done by calculating the acceptance rate, which is the fraction of proposed samples that are accepted in a window of the last N{\displaystyle N} samples. The desired acceptance rate depends on the target distribution; however, it has been shown theoretically that the ideal acceptance rate for a one-dimensional Gaussian distribution is about 50%, decreasing to about 23% for an N{\displaystyle N}-dimensional Gaussian target distribution.[15] These guidelines can work well when sampling from sufficiently regular Bayesian posteriors, as they often follow a multivariate normal distribution, as can be established using the Bernstein–von Mises theorem.[16] If σ2{\displaystyle \sigma ^{2}} is too small, the chain will mix slowly (i.e., the acceptance rate will be high, but successive samples will move around the space slowly, and the chain will converge only slowly to P(x){\displaystyle P(x)}). On the other hand, if σ2{\displaystyle \sigma ^{2}} is too large, the acceptance rate will be very low because the proposals are likely to land in regions of much lower probability density, so a1{\displaystyle a_{1}} will be very small, and again the chain will converge very slowly.
One typically tunes the proposal distribution so that the algorithm accepts on the order of 30% of all samples, in line with the theoretical estimates mentioned in the previous paragraph. MCMC can be used to draw samples from the posterior distribution of a statistical model. The acceptance probability is given by: Pacc(θi→θ∗)=min(1,L(y|θ∗)P(θ∗)L(y|θi)P(θi)Q(θi|θ∗)Q(θ∗|θi)),{\displaystyle P_{acc}(\theta _{i}\to \theta ^{*})=\min \left(1,{\frac {{\mathcal {L}}(y|\theta ^{*})P(\theta ^{*})}{{\mathcal {L}}(y|\theta _{i})P(\theta _{i})}}{\frac {Q(\theta _{i}|\theta ^{*})}{Q(\theta ^{*}|\theta _{i})}}\right),} where L{\displaystyle {\mathcal {L}}} is the likelihood, P(θ){\displaystyle P(\theta )} the prior probability density and Q{\displaystyle Q} the (conditional) proposal probability.
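As a concrete illustration, the accept/reject rule described above can be sketched in a few lines of Python. The target here (an unnormalized standard normal density) and the proposal scale are illustrative assumptions, not part of the algorithm itself; because the Gaussian random-walk proposal is symmetric, the Hastings correction g(x∣x′)/g(x′∣x) equals 1 and the acceptance ratio reduces to P(x′)/P(x).

```python
import math
import random

def metropolis_hastings(log_p, x0, n_samples, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal.

    log_p is the log of an unnormalized target density; the normalizing
    constant cancels in the acceptance ratio.
    """
    rng = random.Random(seed)
    x = x0
    samples, accepted = [], 0
    for _ in range(n_samples):
        x_new = x + rng.gauss(0.0, proposal_scale)   # propose x'
        # accept with probability min(1, p(x')/p(x)), computed in log space
        if math.log(rng.random()) < log_p(x_new) - log_p(x):
            x, accepted = x_new, accepted + 1
        samples.append(x)                            # save current state
    return samples, accepted / n_samples

# target: unnormalized standard normal, log p(x) = -x^2 / 2
samples, acc_rate = metropolis_hastings(lambda x: -0.5 * x * x,
                                        x0=0.0, n_samples=50_000,
                                        proposal_scale=2.4)
```

With a proposal scale of roughly 2.4 times the target's standard deviation, the acceptance rate for this one-dimensional Gaussian target falls in the broad range the tuning guidelines above suggest; shrinking or inflating the scale reproduces the slow-mixing regimes described in the text.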
https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm
Fawkes is a facial image cloaking software created by the SAND (Security, Algorithms, Networking and Data) Laboratory of the University of Chicago.[1] It is a free tool that is available as a standalone executable.[2] The software creates small alterations in images using artificial intelligence to protect the images from being recognized and matched by facial recognition software.[3] The goal of the Fawkes program is to enable individuals to protect their own privacy from large-scale data collection. As of May 2022, Fawkes v1.0 had surpassed 840,000 downloads.[4] Eventually, the SAND Laboratory hopes to implement the software on a larger scale to combat unwarranted facial recognition software.[5] The Fawkes program was named after the fictional protagonist of the film and comic V for Vendetta, who drew inspiration from the historical figure Guy Fawkes.[6] The Fawkes proposal was initially presented at a USENIX Security conference in August 2020, where it received approval and was launched shortly after. The most recent version available for download, Fawkes v1.0, was released in April 2021 and was still being updated in 2022.[4] The founding team is led by Emily Wenger and Shawn Shan, PhD students at the University of Chicago. Additional support from Jiayun Zhang and Huiying Li, with faculty advisors Ben Zhao and Heather Zheng, contributed to the creation of the software.[7] The team cites nonconsensual data collection, specifically by companies such as Clearview AI, as the prime inspiration behind the creation of Fawkes.[8] The methods that Fawkes uses are similar to adversarial machine learning. This approach trains a facial recognition system on already-altered images. As a result, the system cannot match an altered image with the original, since it does not recognize them as the same image. Fawkes also uses data poisoning attacks, which change the data set used to train certain deep learning models.
Fawkes utilizes two types of data poisoning techniques: clean label attacks and model corruption attacks. The creators of Fawkes note that using Sybil images can increase the effectiveness of their software against recognition software products. Sybil images are images that do not match the person they are attributed to. This confuses the facial recognition software and leads to misidentification, which also improves the efficacy of image cloaking. Privacy-preserving machine learning uses techniques similar to the Fawkes software but opts for differentially private model training, which helps to keep information in the data set private.[3] Fawkes image cloaking can be used on images and apps that are used every day. However, the efficacy of the software wanes if the facial recognition software has access to both cloaked and uncloaked images of the same person. The image cloaking software has been tested on high-powered facial recognition software with varied results.[3] A similar facial cloaking software to Fawkes is called LowKey. LowKey also alters images on a visual level, but these alterations are much more noticeable compared to those of the Fawkes software.[2]
https://en.wikipedia.org/wiki/Fawkes_(image_cloaking_software)
In mathematics, a weak derivative is a generalization of the concept of the derivative of a function (strong derivative) for functions not assumed differentiable, but only integrable, i.e., to lie in the Lp space L1([a,b]){\displaystyle L^{1}([a,b])}. The method of integration by parts holds that for smooth functions u{\displaystyle u} and φ{\displaystyle \varphi } we have ∫abu(x)φ′(x)dx=[u(x)φ(x)]ab−∫abu′(x)φ(x)dx.{\displaystyle \int _{a}^{b}u(x)\varphi '(x)\,dx={\Big [}u(x)\varphi (x){\Big ]}_{a}^{b}-\int _{a}^{b}u'(x)\varphi (x)\,dx.} A function u′ being the weak derivative of u is essentially defined by the requirement that this equation must hold for all smooth functions φ{\displaystyle \varphi } vanishing at the boundary points (φ(a)=φ(b)=0{\displaystyle \varphi (a)=\varphi (b)=0}). Let u{\displaystyle u} be a function in the Lebesgue space L1([a,b]){\displaystyle L^{1}([a,b])}. We say that v{\displaystyle v} in L1([a,b]){\displaystyle L^{1}([a,b])} is a weak derivative of u{\displaystyle u} if ∫abu(x)φ′(x)dx=−∫abv(x)φ(x)dx{\displaystyle \int _{a}^{b}u(x)\varphi '(x)\,dx=-\int _{a}^{b}v(x)\varphi (x)\,dx} for all infinitely differentiable functions φ{\displaystyle \varphi } with φ(a)=φ(b)=0{\displaystyle \varphi (a)=\varphi (b)=0}.[1][2] Generalizing to n{\displaystyle n} dimensions, if u{\displaystyle u} and v{\displaystyle v} are in the space Lloc1(U){\displaystyle L_{\text{loc}}^{1}(U)} of locally integrable functions for some open set U⊂Rn{\displaystyle U\subset \mathbb {R} ^{n}}, and if α{\displaystyle \alpha } is a multi-index, we say that v{\displaystyle v} is the αth{\displaystyle \alpha ^{\text{th}}}-weak derivative of u{\displaystyle u} if ∫Uu(x)Dαφ(x)dx=(−1)|α|∫Uv(x)φ(x)dx{\displaystyle \int _{U}u(x)D^{\alpha }\varphi (x)\,dx=(-1)^{|\alpha |}\int _{U}v(x)\varphi (x)\,dx} for all φ∈Cc∞(U){\displaystyle \varphi \in C_{c}^{\infty }(U)}, that is, for all infinitely differentiable functions φ{\displaystyle \varphi } with compact support in U{\displaystyle U}.
Here Dαφ{\displaystyle D^{\alpha }\varphi } is defined as Dαφ=∂|α|φ∂x1α1⋯∂xnαn.{\displaystyle D^{\alpha }\varphi ={\frac {\partial ^{|\alpha |}\varphi }{\partial x_{1}^{\alpha _{1}}\cdots \partial x_{n}^{\alpha _{n}}}}.} If u{\displaystyle u} has a weak derivative, it is often written Dαu{\displaystyle D^{\alpha }u} since weak derivatives are unique (at least, up to a set of measure zero, see below).[3] If two functions are weak derivatives of the same function, they are equal except on a set with Lebesgue measure zero, i.e., they are equal almost everywhere. If we consider equivalence classes of functions such that two functions are equivalent if they are equal almost everywhere, then the weak derivative is unique. Also, if u is differentiable in the conventional sense then its weak derivative is identical (in the sense given above) to its conventional (strong) derivative. Thus the weak derivative is a generalization of the strong one. Furthermore, the classical rules for derivatives of sums and products of functions also hold for the weak derivative. This concept gives rise to the definition of weak solutions in Sobolev spaces, which are useful for problems of differential equations and in functional analysis.
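A standard worked example makes the definition concrete: the absolute value function u(x) = |x| is not differentiable at 0, yet it has a weak derivative on [−1,1], namely the sign function. For any smooth test function φ with φ(−1) = φ(1) = 0, integration by parts on each half-interval gives:

```latex
% Claim: v(x) = \operatorname{sign}(x) is the weak derivative of u(x) = |x| on [-1, 1].
\begin{aligned}
\int_{-1}^{1} |x|\,\varphi'(x)\,dx
  &= \int_{-1}^{0} (-x)\,\varphi'(x)\,dx + \int_{0}^{1} x\,\varphi'(x)\,dx \\
  &= \Big[-x\,\varphi(x)\Big]_{-1}^{0} + \int_{-1}^{0}\varphi(x)\,dx
   + \Big[x\,\varphi(x)\Big]_{0}^{1} - \int_{0}^{1}\varphi(x)\,dx \\
  &= -\int_{-1}^{1} \operatorname{sign}(x)\,\varphi(x)\,dx ,
\end{aligned}
% where the boundary terms vanish because \varphi(-1) = \varphi(1) = 0.
```

which is exactly the defining identity with v(x) = sign(x). By contrast, sign(x) itself has no weak derivative in L¹: its distributional derivative 2δ is not a function.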
https://en.wikipedia.org/wiki/Weak_derivative
Subgradient methods are convex optimization methods which use subderivatives. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent even when applied to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of gradient descent. Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks. In recent years, some interior-point methods have been suggested for convex minimization problems, but subgradient projection methods and related bundle methods of descent remain competitive. For convex minimization problems with a very large number of dimensions, subgradient-projection methods are suitable, because they require little storage. Subgradient projection methods are often applied to large-scale problems with decomposition techniques. Such decomposition methods often allow a simple distributed method for a problem. Let f:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } be a convex function with domain Rn.{\displaystyle \mathbb {R} ^{n}.} A classical subgradient method iterates x(k+1)=x(k)−αkg(k){\displaystyle x^{(k+1)}=x^{(k)}-\alpha _{k}g^{(k)}\ } where g(k){\displaystyle g^{(k)}} denotes any subgradient of f{\displaystyle f\ } at x(k),{\displaystyle x^{(k)},\ } and x(k){\displaystyle x^{(k)}} is the kth{\displaystyle k^{th}} iterate of x.{\displaystyle x.} If f{\displaystyle f\ } is differentiable, then its only subgradient is the gradient vector ∇f{\displaystyle \nabla f} itself.
It may happen that −g(k){\displaystyle -g^{(k)}} is not a descent direction for f{\displaystyle f\ } at x(k).{\displaystyle x^{(k)}.} We therefore keep track of the lowest objective function value found so far via the running record fbest{\displaystyle f_{\rm {best}}\ }, i.e. fbest(k)=min{fbest(k−1),f(x(k))}.{\displaystyle f_{\rm {best}}^{(k)}=\min\{f_{\rm {best}}^{(k-1)},f(x^{(k)})\}.} Many different types of step-size rules are used by subgradient methods. This article notes five classical step-size rules for which convergence proofs are known: For all five rules, the step-sizes are determined "off-line", before the method is iterated; the step-sizes do not depend on preceding iterations. This "off-line" property of subgradient methods differs from the "on-line" step-size rules used for descent methods for differentiable functions: many methods for minimizing differentiable functions satisfy Wolfe's sufficient conditions for convergence, where step-sizes typically depend on the current point and the current search direction. An extensive discussion of step-size rules for subgradient methods, including incremental versions, is given in the books by Bertsekas[1] and by Bertsekas, Nedic, and Ozdaglar.[2] For constant step-length and scaled subgradients having Euclidean norm equal to one, the subgradient method converges to an arbitrarily close approximation to the minimum value. These classical subgradient methods have poor performance and are no longer recommended for general use.[4][5] However, they are still used widely in specialized applications because they are simple and they can be easily adapted to take advantage of the special structure of the problem at hand. During the 1970s, Claude Lemaréchal and Phil Wolfe proposed "bundle methods" of descent for problems of convex minimization.[6] The meaning of the term "bundle methods" has changed significantly since that time.
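A minimal sketch of the classical subgradient iteration with the diminishing step size α_k = 1/√k, applied to the nonsmooth convex function f(x) = |x − 3| + |x + 1|; the objective and starting point are illustrative choices, not taken from the text. The running value f_best is tracked exactly as described above, since an individual subgradient step need not decrease f.

```python
import math

def f(x):
    # nonsmooth convex objective; its minimum value 4 is attained on [-1, 3]
    return abs(x - 3) + abs(x + 1)

def subgrad(x):
    # one valid subgradient of f at x; sign(0) = 0 is an allowed choice
    # at the kinks, since 0 lies in the subdifferential of |.| there
    sgn = lambda t: (t > 0) - (t < 0)
    return sgn(x - 3) + sgn(x + 1)

x = 10.0
f_best = f(x)
for k in range(1, 201):
    x -= (1.0 / math.sqrt(k)) * subgrad(x)   # subgradient step, diminishing size
    f_best = min(f_best, f(x))               # track the best value found so far
```

Once an iterate lands inside [−1, 3], the chosen subgradient is 0 and the method halts at an exact minimizer, so f_best reaches the minimum value 4; on harder problems convergence is only in the limit, which is why f_best rather than f(x^{(k)}) is reported.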
Modern versions and full convergence analysis were provided by Kiwiel.[7] Contemporary bundle methods often use "level control" rules for choosing step-sizes, developing techniques from the "subgradient-projection" method of Boris T. Polyak (1969). However, there are problems on which bundle methods offer little advantage over subgradient-projection methods.[4][5] One extension of the subgradient method is the projected subgradient method, which solves the constrained optimization problem minimize f(x){\displaystyle f(x)\ } subject to x∈C,{\displaystyle x\in {\mathcal {C}},} where C{\displaystyle {\mathcal {C}}} is a convex set. The projected subgradient method uses the iteration x(k+1)=P(x(k)−αkg(k)){\displaystyle x^{(k+1)}=P\left(x^{(k)}-\alpha _{k}g^{(k)}\right)} where P{\displaystyle P} is projection on C{\displaystyle {\mathcal {C}}} and g(k){\displaystyle g^{(k)}} is any subgradient of f{\displaystyle f\ } at x(k).{\displaystyle x^{(k)}.} The subgradient method can be extended to solve the inequality constrained problem minimize f0(x){\displaystyle f_{0}(x)\ } subject to fi(x)≤0,i=1,…,m,{\displaystyle f_{i}(x)\leq 0,\quad i=1,\ldots ,m,} where fi{\displaystyle f_{i}} are convex. The algorithm takes the same form as the unconstrained case x(k+1)=x(k)−αkg(k){\displaystyle x^{(k+1)}=x^{(k)}-\alpha _{k}g^{(k)}\ } where αk>0{\displaystyle \alpha _{k}>0} is a step size, and g(k){\displaystyle g^{(k)}} is a subgradient of the objective or one of the constraint functions at x.{\displaystyle x.\ } Take g(k)={∂f0(x)iffi(x)≤0∀i=1…m∂fj(x)for somejsuch thatfj(x)>0{\displaystyle g^{(k)}={\begin{cases}\partial f_{0}(x)&{\text{ if }}f_{i}(x)\leq 0\;\forall i=1\dots m\\\partial f_{j}(x)&{\text{ for some }}j{\text{ such that }}f_{j}(x)>0\end{cases}}} where ∂f{\displaystyle \partial f} denotes the subdifferential of f.{\displaystyle f.\ } If the current point is feasible, the algorithm uses an objective subgradient; if the current point is infeasible, the algorithm chooses a subgradient of any violated constraint.
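The projected subgradient iteration can be sketched on a toy problem: minimize f(x) = |x₁ − 3| + |x₂ − 3| over the box C = [0,1]², whose Euclidean projection is a per-coordinate clamp. The problem instance is an illustrative assumption; only the step-then-project pattern comes from the method described above.

```python
import math

def subgrad(x):
    # a subgradient of f(x) = |x[0] - 3| + |x[1] - 3| (componentwise sign)
    sgn = lambda t: (t > 0) - (t < 0)
    return [sgn(x[0] - 3), sgn(x[1] - 3)]

def project(x):
    # Euclidean projection onto the box C = [0, 1]^2: clamp each coordinate
    return [min(max(xi, 0.0), 1.0) for xi in x]

x = [0.0, 0.0]
f_best = float("inf")
for k in range(1, 101):
    alpha = 1.0 / math.sqrt(k)                              # diminishing step
    g = subgrad(x)
    x = project([xi - alpha * gi for xi, gi in zip(x, g)])  # step, then project
    f_best = min(f_best, abs(x[0] - 3) + abs(x[1] - 3))
```

The unconstrained minimizer (3, 3) lies outside C, so the projection is what makes the iterates settle at the constrained minimizer (1, 1) with objective value 4.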
https://en.wikipedia.org/wiki/Subgradient_method
In mathematics, the Clarke generalized derivatives are types of generalized derivatives that allow for the differentiation of nonsmooth functions. The Clarke derivatives were introduced by Francis Clarke in 1975.[1] For a locally Lipschitz continuous function f:Rn→R,{\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} ,} the Clarke generalized directional derivative of f{\displaystyle f} at x∈Rn{\displaystyle x\in \mathbb {R} ^{n}} in the direction v∈Rn{\displaystyle v\in \mathbb {R} ^{n}} is defined as f∘(x,v)=lim supy→x,h↓0f(y+hv)−f(y)h,{\displaystyle f^{\circ }(x,v)=\limsup _{y\rightarrow x,h\downarrow 0}{\frac {f(y+hv)-f(y)}{h}},} where lim sup{\displaystyle \limsup } denotes the limit superior. Then, using the above definition of f∘{\displaystyle f^{\circ }}, the Clarke generalized gradient of f{\displaystyle f} at x{\displaystyle x} (also called the Clarke subdifferential) is given as ∂∘f(x):={ξ∈Rn:⟨ξ,v⟩≤f∘(x,v),∀v∈Rn},{\displaystyle \partial ^{\circ }\!f(x):=\{\xi \in \mathbb {R} ^{n}:\langle \xi ,v\rangle \leq f^{\circ }(x,v),\forall v\in \mathbb {R} ^{n}\},} where ⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle } represents an inner product of vectors in Rn.{\displaystyle \mathbb {R} ^{n}.} Note that the Clarke generalized gradient is set-valued, that is, at each x∈Rn,{\displaystyle x\in \mathbb {R} ^{n},} the function value ∂∘f(x){\displaystyle \partial ^{\circ }\!f(x)} is a set. More generally, given a Banach space X{\displaystyle X} and a subset Y⊂X,{\displaystyle Y\subset X,} the Clarke generalized directional derivative and generalized gradients are defined as above for a locally Lipschitz continuous function f:Y→R.{\displaystyle f:Y\to \mathbb {R} .}
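A standard example makes these definitions concrete: for f(x) = |x| on ℝ (Lipschitz with constant 1), both the generalized directional derivative at 0 and the resulting generalized gradient can be computed directly.

```latex
% For f(x) = |x| at the point x = 0:
f^{\circ}(0, v) = \limsup_{y \to 0,\; h \downarrow 0} \frac{|y + hv| - |y|}{h} = |v|,
% (the triangle inequality gives the upper bound |v|, attained along y = 0),
% so the Clarke generalized gradient at 0 is
\partial^{\circ}\! f(0)
  = \{\, \xi \in \mathbb{R} : \xi v \le |v| \;\; \forall v \in \mathbb{R} \,\}
  = [-1, 1].
```

For this convex function the Clarke subdifferential coincides with the usual convex subdifferential; at every x ≠ 0 it collapses to the singleton {sign(x)}, the ordinary derivative.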
https://en.wikipedia.org/wiki/Clarke_generalized_derivative
In probability theory and statistics, the covariance function describes how much two random variables change together (their covariance) with varying spatial or temporal separation. For a random field or stochastic process Z(x) on a domain D, a covariance function C(x,y) gives the covariance of the values of the random field at the two locations x and y: C(x,y):=cov⁡(Z(x),Z(y))=E[(Z(x)−E[Z(x)])(Z(y)−E[Z(y)])].{\displaystyle C(x,y):=\operatorname {cov} (Z(x),Z(y))=\mathbb {E} {\Big [}{\big (}Z(x)-\mathbb {E} [Z(x)]{\big )}{\big (}Z(y)-\mathbb {E} [Z(y)]{\big )}{\Big ]}.\,} The same C(x,y) is called the autocovariance function in two instances: in time series (to denote exactly the same concept except that x and y refer to locations in time rather than in space), and in multivariate random fields (to refer to the covariance of a variable with itself, as opposed to the cross covariance between two different variables at different locations, Cov(Z(x1), Y(x2))).[1] For locations x1, x2, ..., xN ∈ D the variance of every linear combination can be computed as Var⁡(∑i=1NwiZ(xi))=∑i=1N∑j=1NwiwjC(xi,xj).{\displaystyle \operatorname {Var} \left(\sum _{i=1}^{N}w_{i}Z(x_{i})\right)=\sum _{i=1}^{N}\sum _{j=1}^{N}w_{i}w_{j}C(x_{i},x_{j}).} A function is a valid covariance function if and only if[2] this variance is non-negative for all possible choices of N and weights w1, ..., wN. A function with this property is called positive semidefinite. In the case of a weakly stationary random field, where C(xi,xj)=C(xi+h,xj+h){\displaystyle C(x_{i},x_{j})=C(x_{i}+h,x_{j}+h)} for any lag h, the covariance function can be represented by a one-parameter function which is called a covariogram and also a covariance function. Implicitly the C(xi, xj) can be computed from Cs(h) by: C(xi,xj)=Cs(xi−xj).{\displaystyle C(x_{i},x_{j})=C_{s}(x_{i}-x_{j}).} The positive definiteness of this single-argument version of the covariance function can be checked by Bochner's theorem.[2] For a given variance σ2{\displaystyle \sigma ^{2}}, a simple stationary parametric covariance function is the "exponential covariance function" C(d)=σ2exp⁡(−d/V),{\displaystyle C(d)=\sigma ^{2}\exp(-d/V),} where V is a scaling parameter (correlation length), and d = d(x,y) is the distance between two points. Sample paths of a Gaussian process with the exponential covariance function are not smooth.
The "squared exponential" (or "Gaussian") covariance function C(d)=σ2exp⁡(−d2/V2){\displaystyle C(d)=\sigma ^{2}\exp(-d^{2}/V^{2})} is a stationary covariance function with smooth sample paths. The Matérn covariance function and rational quadratic covariance function are two parametric families of stationary covariance functions. The Matérn family includes the exponential and squared exponential covariance functions as special cases.
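The positive-semidefiniteness requirement above can be checked numerically for the exponential covariance function. The sketch below builds the covariance matrix for a few one-dimensional locations and verifies that the variance w^T C w of random linear combinations is non-negative; the parameter values and locations are illustrative assumptions.

```python
import math
import random

def exp_cov(x, y, sigma2=2.0, V=1.5):
    # stationary exponential covariance: C(x, y) = sigma^2 * exp(-|x - y| / V)
    return sigma2 * math.exp(-abs(x - y) / V)

locations = [0.0, 0.3, 1.1, 2.5, 4.0]
n = len(locations)
# covariance matrix C[i][j] = C(x_i, x_j)
C = [[exp_cov(xi, xj) for xj in locations] for xi in locations]

rng = random.Random(0)
# the variance of sum_i w_i Z(x_i) is w^T C w; a valid covariance
# function must make this non-negative for every choice of weights
min_quad = min(
    sum(w[i] * C[i][j] * w[j] for i in range(n) for j in range(n))
    for w in ([rng.uniform(-1, 1) for _ in range(n)] for _ in range(200))
)
```

Random trials of course only probe the condition; the exponential family is in fact strictly positive definite for distinct locations, which Bochner's theorem establishes via its non-negative spectral density.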
https://en.wikipedia.org/wiki/Covariance_function
In mathematical analysis, integral equations are equations in which an unknown function appears under an integral sign.[1] In mathematical notation, integral equations may thus be expressed as being of the form: f(x1,x2,x3,…,xn;u(x1,x2,x3,…,xn);I1(u),I2(u),I3(u),…,Im(u))=0{\displaystyle f(x_{1},x_{2},x_{3},\ldots ,x_{n};u(x_{1},x_{2},x_{3},\ldots ,x_{n});I^{1}(u),I^{2}(u),I^{3}(u),\ldots ,I^{m}(u))=0} where Ii(u){\displaystyle I^{i}(u)} is an integral operator acting on u. Hence, integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals. A direct comparison can be seen with the mathematical form of the general integral equation above with the general form of a differential equation, which may be expressed as follows: f(x1,x2,x3,…,xn;u(x1,x2,x3,…,xn);D1(u),D2(u),D3(u),…,Dm(u))=0{\displaystyle f(x_{1},x_{2},x_{3},\ldots ,x_{n};u(x_{1},x_{2},x_{3},\ldots ,x_{n});D^{1}(u),D^{2}(u),D^{3}(u),\ldots ,D^{m}(u))=0} where Di(u){\displaystyle D^{i}(u)} may be viewed as a differential operator of order i.[1] Due to this close connection between differential and integral equations, one can often convert between the two. For example, one method of solving a boundary value problem is by converting the differential equation with its boundary conditions into an integral equation and solving the integral equation.[1] In addition, because one can convert between the two, differential equations in physics such as Maxwell's equations often have an analog integral and differential form.[2] See also, for example, Green's function and Fredholm theory. Various classification methods for integral equations exist.
A few standard classifications include distinctions between linear and nonlinear; homogeneous and inhomogeneous; Fredholm and Volterra; first kind, second kind, and third kind; and singular and regular integral equations.[1] These distinctions usually rest on some fundamental property such as the consideration of the linearity of the equation or the homogeneity of the equation.[1] These comments are made concrete through the following definitions and examples: Linear: An integral equation is linear if the unknown function u(x) and its integrals appear linearly in the equation.[1] Hence, an example of a linear equation would be:[1] u(x)=f(x)+λ∫α(x)β(x)K(x,t)⋅u(t)dt{\displaystyle u(x)=f(x)+\lambda \int _{\alpha (x)}^{\beta (x)}K(x,t)\cdot u(t)\,dt} As a note on naming convention: i) u(x) is called the unknown function, ii) f(x) is called a known function, iii) K(x,t) is a function of two variables and often called the kernel function, and iv) λ is an unknown factor or parameter, which plays the same role as the eigenvalue in linear algebra.[1] Nonlinear: An integral equation is nonlinear if the unknown function u(x) or any of its integrals appear nonlinearly in the equation.[1] Hence, examples of nonlinear equations would be the equation above if we replaced u(t) with u2(x),cos⁡(u(x)),oreu(x){\displaystyle u^{2}(x),\,\,\cos(u(x)),\,{\text{or }}\,e^{u(x)}}, such as: u(x)=f(x)+∫α(x)β(x)K(x,t)⋅u2(t)dt{\displaystyle u(x)=f(x)+\int _{\alpha (x)}^{\beta (x)}K(x,t)\cdot u^{2}(t)\,dt} Certain kinds of nonlinear integral equations have specific names;[3] named examples include the Urysohn and Hammerstein equations.[3] More information on the Hammerstein equation and different versions of the Hammerstein equation can be found in the Hammerstein section below.
First kind: An integral equation is called an integral equation of the first kind if the unknown function appears only under the integral sign.[3] An example would be: f(x)=∫abK(x,t)u(t)dt{\displaystyle f(x)=\int _{a}^{b}K(x,t)\,u(t)\,dt}.[3] Second kind: An integral equation is called an integral equation of the second kind if the unknown function also appears outside the integral.[3] Third kind: An integral equation is called an integral equation of the third kind if it is a linear integral equation of the following form:[3] g(t)u(t)+λ∫abK(t,x)u(x)dx=f(t){\displaystyle g(t)u(t)+\lambda \int _{a}^{b}K(t,x)u(x)\,dx=f(t)} where g(t) vanishes at least once in the interval [a,b][4][5] or where g(t) vanishes at a finite number of points in (a,b).[6] Fredholm: An integral equation is called a Fredholm integral equation if both of the limits of integration in all integrals are fixed and constant.[1] An example would be that the integral is taken over a fixed subset of Rn{\displaystyle \mathbb {R} ^{n}}.[3] Hence, the following two examples are Fredholm equations of the first and second kind, respectively:[1] f(x)=∫abK(x,t)u(t)dt{\displaystyle f(x)=\int _{a}^{b}K(x,t)\,u(t)\,dt} and u(x)=f(x)+λ∫abK(x,t)u(t)dt.{\displaystyle u(x)=f(x)+\lambda \int _{a}^{b}K(x,t)\,u(t)\,dt.} Note that we can express integral equations such as those above also using integral operator notation.[7] For example, we can define the Fredholm integral operator as: (Fy)(t):=∫t0TK(t,s)y(s)ds.{\displaystyle ({\mathcal {F}}y)(t):=\int _{t_{0}}^{T}K(t,s)\,y(s)\,ds.} Hence, the above Fredholm equation of the second kind may be written compactly as:[7] y(t)=g(t)+λ(Fy)(t).{\displaystyle y(t)=g(t)+\lambda ({\mathcal {F}}y)(t).} Volterra: An integral equation is called a Volterra integral equation if at least one of the limits of integration is a variable.[1] Hence, the integral is taken over a domain varying with the variable of integration.[3] Examples of Volterra equations would be the first- and second-kind equations:[1] f(x)=∫axK(x,t)u(t)dt{\displaystyle f(x)=\int _{a}^{x}K(x,t)\,u(t)\,dt} and u(x)=f(x)+λ∫axK(x,t)u(t)dt.{\displaystyle u(x)=f(x)+\lambda \int _{a}^{x}K(x,t)\,u(t)\,dt.} As with Fredholm equations, we can again adopt operator notation.
Thus, we can define the linear Volterra integral operatorV:C(I)→C(I){\displaystyle {\mathcal {V}}:C(I)\to C(I)}, as follows:[3](Vφ)(t):=∫t0tK(t,s)φ(s)ds{\displaystyle ({\mathcal {V}}\varphi )(t):=\int _{t_{0}}^{t}K(t,s)\,\varphi (s)\,ds}wheret∈I=[t0,T]{\displaystyle t\in I=[t_{0},T]}andK(t,s) is called the kernel and must be continuous on the intervalD:={(t,s):0≤s≤t≤T≤∞}{\displaystyle D:=\{(t,s):0\leq s\leq t\leq T\leq \infty \}}.[3]Hence, the Volterra integral equation of the first kind may be written as:[3](Vy)(t)=g(t){\displaystyle ({\mathcal {V}}y)(t)=g(t)}withg(0)=0{\displaystyle g(0)=0}. In addition, a linear Volterra integral equation of the second kind for an unknown functiony(t){\displaystyle y(t)}and a given continuous functiong(t){\displaystyle g(t)}on the intervalI{\displaystyle I}wheret∈I{\displaystyle t\in I}:y(t)=g(t)+(Vy)(t).{\displaystyle y(t)=g(t)+({\mathcal {V}}y)(t).}Volterra–Fredholm: In higher dimensions, integral equations such as Fredholm–Volterra integral equations (VFIE) exist.[3]A VFIE has the form:u(t,x)=g(t,x)+(Tu)(t,x){\displaystyle u(t,x)=g(t,x)+({\mathcal {T}}u)(t,x)}withx∈Ω{\displaystyle x\in \Omega }andΩ{\displaystyle \Omega }being a closed bounded region inRd{\displaystyle \mathbb {R} ^{d}}with piecewise smooth boundary.[3]The Fredholm-Volterra Integral OperatorT:C(I×Ω)→C(I×Ω){\displaystyle {\mathcal {T}}:C(I\times \Omega )\to C(I\times \Omega )}is defined as:[3] (Tu)(t,x):=∫0t∫ΩK(t,s,x,ξ)G(u(s,ξ))dξds.{\displaystyle ({\mathcal {T}}u)(t,x):=\int _{0}^{t}\int _{\Omega }K(t,s,x,\xi )\,G(u(s,\xi ))\,d\xi \,ds.}Note that while throughout this article, the bounds of the integral are usually written as intervals, this need not be the case.[7]In general, integral equations don't always need to be defined over an interval[a,b]=I{\displaystyle [a,b]=I}, but could also be defined over a curve or surface.[7] Homogeneous: An integral equation is called homogeneous if the known functionf{\displaystyle f}is identically zero.[1] Inhomogeneous: 
An integral equation is called inhomogeneous if the known function f{\displaystyle f} is nonzero.[1] Regular: An integral equation is called regular if the integrals used are all proper integrals.[7] Singular or weakly singular: An integral equation is called singular or weakly singular if the integral is an improper integral.[7] This could be either because at least one of the limits of integration is infinite or because the kernel becomes unbounded (infinite) at at least one point in the interval or domain over which the integral is taken.[1] Examples include:[1] F(λ)=∫−∞∞e−iλxu(x)dx{\displaystyle F(\lambda )=\int _{-\infty }^{\infty }e^{-i\lambda x}u(x)\,dx} L[u(x)]=∫0∞e−λxu(x)dx{\displaystyle L[u(x)]=\int _{0}^{\infty }e^{-\lambda x}u(x)\,dx} These two integral equations are the Fourier transform and the Laplace transform of u(x), respectively, with both being Fredholm equations of the first kind with kernel K(x,t)=e−iλx{\displaystyle K(x,t)=e^{-i\lambda x}} and K(x,t)=e−λx{\displaystyle K(x,t)=e^{-\lambda x}}, respectively.[1] Another example of a singular integral equation in which the kernel becomes unbounded is:[1] x2=∫0x1x−tu(t)dt.{\displaystyle x^{2}=\int _{0}^{x}{\frac {1}{\sqrt {x-t}}}\,u(t)\,dt.} This equation is a special form of the more general weakly singular Volterra integral equation of the first kind, called Abel's integral equation:[7] g(x)=∫axf(y)x−ydy{\displaystyle g(x)=\int _{a}^{x}{\frac {f(y)}{\sqrt {x-y}}}\,dy} Strongly singular: An integral equation is called strongly singular if the integral is defined by a special regularisation, for example, by the Cauchy principal value.[7] An integro-differential equation, as the name suggests, combines differential and integral operators into one equation.[1] There are many versions, including the Volterra integro-differential equation and delay-type equations as defined below.[3] For example, using the Volterra operator as defined above, the Volterra integro-differential equation may be written
as:[3]y′(t)=f(t,y(t))+(Vαy)(t){\displaystyle y'(t)=f(t,y(t))+(V_{\alpha }y)(t)}For delay problems, we can define the delay integral operator(Wθ,αy){\displaystyle ({\mathcal {W}}_{\theta ,\alpha }y)}as:[3](Wθ,αy)(t):=∫θ(t)t(t−s)−α⋅k2(t,s,y(s),y′(s))ds{\displaystyle ({\mathcal {W}}_{\theta ,\alpha }y)(t):=\int _{\theta (t)}^{t}(t-s)^{-\alpha }\cdot k_{2}(t,s,y(s),y'(s))\,ds}where the delay integro-differential equation may be expressed as:[3]y′(t)=f(t,y(t),y(θ(t)))+(Wθ,αy)(t).{\displaystyle y'(t)=f(t,y(t),y(\theta (t)))+({\mathcal {W}}_{\theta ,\alpha }y)(t).} The solution to a linear Volterra integral equation of the first kind, given by the equation:(Vy)(t)=g(t){\displaystyle ({\mathcal {V}}y)(t)=g(t)}can be described by the following uniqueness and existence theorem.[3]Recall that the Volterra integral operatorV:C(I)→C(I){\displaystyle {\mathcal {V}}:C(I)\to C(I)}, can be defined as follows:[3](Vφ)(t):=∫t0tK(t,s)φ(s)ds{\displaystyle ({\mathcal {V}}\varphi )(t):=\int _{t_{0}}^{t}K(t,s)\,\varphi (s)\,ds}wheret∈I=[t0,T]{\displaystyle t\in I=[t_{0},T]}andK(t,s) is called the kernel and must be continuous on the intervalD:={(t,s):0≤s≤t≤T≤∞}{\displaystyle D:=\{(t,s):0\leq s\leq t\leq T\leq \infty \}}.[3] Theorem—Assume thatK{\displaystyle K}satisfiesK∈C(D),∂K/∂t∈C(D){\displaystyle K\in C(D),\,\partial K/\partial t\in C(D)}and|K(t,t)|≥k0>0{\displaystyle \vert K(t,t)\vert \geq k_{0}>0}for somet∈I.{\displaystyle t\in I.}Then for anyg∈C1(I){\displaystyle g\in C^{1}(I)}withg(0)=0{\displaystyle g(0)=0}the integral equation above has a unique solution iny∈C(I){\displaystyle y\in C(I)}. The solution to a linear Volterra integral equation of the second kind, given by the equation:[3]y(t)=g(t)+(Vy)(t){\displaystyle y(t)=g(t)+({\mathcal {V}}y)(t)}can be described by the following uniqueness and existence theorem.[3] Theorem—LetK∈C(D){\displaystyle K\in C(D)}and letR{\displaystyle R}denote the resolvent Kernel associated withK{\displaystyle K}. 
Then, for anyg∈C(I){\displaystyle g\in C(I)}, the second-kind Volterra integral equation has a unique solutiony∈C(I){\displaystyle y\in C(I)}and this solution is given by:y(t)=g(t)+∫0tR(t,s)g(s)ds.{\displaystyle y(t)=g(t)+\int _{0}^{t}R(t,s)\,g(s)\,ds.} A Volterra Integral equation of the second kind can be expressed as follows:[3]u(t,x)=g(t,x)+∫0x∫0yK(x,ξ,y,η)u(ξ,η)dηdξ{\displaystyle u(t,x)=g(t,x)+\int _{0}^{x}\int _{0}^{y}K(x,\xi ,y,\eta )\,u(\xi ,\eta )\,d\eta \,d\xi }where(x,y)∈Ω:=[0,X]×[0,Y]{\displaystyle (x,y)\in \Omega :=[0,X]\times [0,Y]},g∈C(Ω){\displaystyle g\in C(\Omega )},K∈C(D2){\displaystyle K\in C(D_{2})}andD2:={(x,ξ,y,η):0≤ξ≤x≤X,0≤η≤y≤Y}{\displaystyle D_{2}:=\{(x,\xi ,y,\eta ):0\leq \xi \leq x\leq X,0\leq \eta \leq y\leq Y\}}.[3]This integral equation has a unique solutionu∈C(Ω){\displaystyle u\in C(\Omega )}given by:[3]u(t,x)=g(t,x)+∫0x∫0yR(x,ξ,y,η)g(ξ,η)dηdξ{\displaystyle u(t,x)=g(t,x)+\int _{0}^{x}\int _{0}^{y}R(x,\xi ,y,\eta )\,g(\xi ,\eta )\,d\eta \,d\xi }whereR{\displaystyle R}is the resolvent kernel ofK.[3] As defined above, a VFIE has the form:u(t,x)=g(t,x)+(Tu)(t,x){\displaystyle u(t,x)=g(t,x)+({\mathcal {T}}u)(t,x)}withx∈Ω{\displaystyle x\in \Omega }andΩ{\displaystyle \Omega }being a closed bounded region inRd{\displaystyle \mathbb {R} ^{d}}with piecewise smooth boundary.[3]The Fredholm–Volterrra Integral OperatorT:C(I×Ω)→C(I×Ω){\displaystyle {\mathcal {T}}:C(I\times \Omega )\to C(I\times \Omega )}is defined as:[3](Tu)(t,x):=∫0t∫ΩK(t,s,x,ξ)G(u(s,ξ))dξds.{\displaystyle ({\mathcal {T}}u)(t,x):=\int _{0}^{t}\int _{\Omega }K(t,s,x,\xi )\,G(u(s,\xi ))\,d\xi \,ds.}In the case where the KernelKmay be written asK(t,s,x,ξ)=k(t−s)H(x,ξ){\displaystyle K(t,s,x,\xi )=k(t-s)H(x,\xi )},Kis called the positive memory kernel.[3]With this in mind, we can now introduce the following theorem:[3] Theorem—If the linear VFIE given by:u(t,x)=g(t,x)+∫0t∫ΩK(t,s,x,ξ)G(u(s,ξ))dξds{\displaystyle u(t,x)=g(t,x)+\int _{0}^{t}\int _{\Omega }K(t,s,x,\xi )\,G(u(s,\xi 
))\,d\xi \,ds}with(t,x)∈I×Ω{\displaystyle (t,x)\in I\times \Omega }satisfies the following conditions: Then the VFIE has a unique solutionu∈C(I×Ω){\displaystyle u\in C(I\times \Omega )}given byu(t,x)=g(t,x)+∫0t∫ΩR(t,s,x,ξ)G(u(s,ξ))dξds{\displaystyle u(t,x)=g(t,x)+\int _{0}^{t}\int _{\Omega }R(t,s,x,\xi )\,G(u(s,\xi ))\,d\xi \,ds}whereR∈C(D×Ω2){\displaystyle R\in C(D\times \Omega ^{2})}is called the resolvent kernel and is given by the limit of the Neumann series for the kernelK{\displaystyle K}and solves the resolvent equations:R(t,s,x,ξ)=K(t,s,x,ξ)+∫0t∫ΩK(t,v,x,z)R(v,s,z,ξ)dzdv=K(t,s,x,ξ)+∫0t∫ΩR(t,v,x,z)K(v,s,z,ξ)dzdv{\displaystyle R(t,s,x,\xi )=K(t,s,x,\xi )+\int _{0}^{t}\int _{\Omega }K(t,v,x,z)R(v,s,z,\xi )\,dz\,dv=K(t,s,x,\xi )+\int _{0}^{t}\int _{\Omega }R(t,v,x,z)K(v,s,z,\xi )\,dz\,dv} A special type of Volterra equation which is used in various applications is defined as follows:[3]y(t)=g(t)+(Vαy)(t){\displaystyle y(t)=g(t)+(V_{\alpha }y)(t)}wheret∈I=[t0,T]{\displaystyle t\in I=[t_{0},T]}, the functiong(t) is continuous on the intervalI{\displaystyle I}, and the Volterra integral operator(Vαy){\displaystyle (V_{\alpha }y)}is given by:(Vαy)(t):=∫t0t(t−s)−α⋅k(t,s,y(s))ds{\displaystyle (V_{\alpha }y)(t):=\int _{t_{0}}^{t}(t-s)^{-\alpha }\cdot k(t,s,y(s))\,ds}with(0≤α<1){\displaystyle (0\leq \alpha <1)}.[3] In the following section, we give an example of how to convert an initial value problem (IVP) into an integral equation.
There are multiple motivations for doing so, among them being that integral equations can often be more readily solvable and are more suitable for proving existence and uniqueness theorems.[7] The following example was provided by Wazwaz on pages 1 and 2 in his book.[1]We examine the IVP given by the equation: u′(t)=2tu(t),t≥0{\displaystyle u'(t)=2tu(t),\,\,\,\,\,\,\,t\geq 0}and the initial condition: u(0)=1{\displaystyle u(0)=1} If we integrate both sides of the equation, we get: ∫0xu′(t)dt=∫0x2tu(t)dt{\displaystyle \int _{0}^{x}u'(t)\,dt=\int _{0}^{x}2tu(t)\,dt} and by thefundamental theorem of calculus, we obtain: u(x)−u(0)=∫0x2tu(t)dt{\displaystyle u(x)-u(0)=\int _{0}^{x}2tu(t)\,dt} Rearranging the equation above, we get the integral equation: u(x)=1+∫0x2tu(t)dt{\displaystyle u(x)=1+\int _{0}^{x}2tu(t)\,dt} which is a Volterra integral equation of the form: u(x)=f(x)+∫α(x)β(x)K(x,t)⋅u(t)dt{\displaystyle u(x)=f(x)+\int _{\alpha (x)}^{\beta (x)}K(x,t)\cdot u(t)\,dt} whereK(x,t) is called the kernel and equal to 2t, andf(x) = 1.[1] It is worth noting that integral equations often do not have an analytical solution, and must be solved numerically. An example of this is evaluating theelectric-field integral equation(EFIE) ormagnetic-field integral equation(MFIE) over an arbitrarily shaped object in an electromagnetic scattering problem. One numerical method requires discretizing the variables and replacing the integral by a quadrature rule. This yields a system ofnequations innunknowns; solving it gives the values of thenvariables. Certain homogeneous linear integral equations can be viewed as thecontinuum limitofeigenvalue equations. Usingindex notation, an eigenvalue equation can be written as∑jMi,jvj=λvi,{\displaystyle \sum _{j}M_{i,j}v_{j}=\lambda v_{i},}whereM= [Mi,j]is a matrix,vis one of its eigenvectors, andλis the associated eigenvalue.
Taking the continuum limit, i.e., replacing the discrete indicesiandjwith continuous variablesxandy, yields∫K(x,y)φ(y)dy=λφ(x),{\displaystyle \int K(x,y)\,\varphi (y)\,dy=\lambda \varphi (x),}where the sum overjhas been replaced by an integral overyand the matrixMand the vectorvhave been replaced by thekernelK(x,y)and theeigenfunctionφ(y). (The limits on the integral are fixed, analogously to the limits on the sum overj.) This gives a linear homogeneous Fredholm equation of the second type. In general,K(x,y)can be adistribution, rather than a function in the strict sense. If the distributionKhas support only at the pointx=y, then the integral equation reduces to adifferential eigenfunction equation. In general, Volterra and Fredholm integral equations can arise from a single differential equation, depending on which sort of conditions are applied at the boundary of the domain of its solution. A Wiener–Hopf integral equation has the form:y(t)=λx(t)+∫0∞k(t−s)x(s)ds,0≤t<∞.{\displaystyle y(t)=\lambda x(t)+\int _{0}^{\infty }k(t-s)\,x(s)\,ds,\qquad 0\leq t<\infty .}Originally, such equations were studied in connection with problems in radiative transfer, and more recently, they have been related to the solution of boundary integral equations for planar problems in which the boundary is only piecewise smooth.
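The quadrature approach described earlier (discretize the variables, replace the integral by a quadrature rule, and solve the resulting system of n equations in n unknowns) can be sketched for a Volterra equation of the second kind. This is a minimal illustration rather than a production solver; the helper name `solve_volterra2` and the choice of the trapezoidal rule are our own:

```python
import math

def solve_volterra2(g, K, T, n):
    """Trapezoidal discretization of y(t) = g(t) + int_0^t K(t, s) y(s) ds.

    Because the upper limit of integration is t, the discretized system is
    lower triangular and forward substitution solves it in O(n^2) work.
    """
    h = T / n
    t = [i * h for i in range(n + 1)]
    y = [g(t[0])]  # at t = 0 the integral vanishes
    for i in range(1, n + 1):
        acc = 0.5 * K(t[i], t[0]) * y[0]
        for j in range(1, i):
            acc += K(t[i], t[j]) * y[j]
        # y_i also appears inside the quadrature sum; move that term left.
        y.append((g(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i])))
    return t, y

# g = 1, K = 1: y(t) = 1 + int_0^t y(s) ds, with exact solution y(t) = exp(t).
t, y = solve_volterra2(lambda t: 1.0, lambda t, s: 1.0, T=1.0, n=200)
print(abs(y[-1] - math.e))
```

For this test problem the printed error shrinks at the O(h²) rate of the trapezoidal rule as n grows.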
A Hammerstein equation is a nonlinear first-kind Volterra integral equation of the form:[3]g(t)=∫0tK(t,s)G(s,y(s))ds.{\displaystyle g(t)=\int _{0}^{t}K(t,s)\,G(s,y(s))\,ds.}Under certain regularity conditions, the equation is equivalent to the implicit Volterra integral equation of the second-kind:[3]G(t,y(t))=g1(t)−∫0tK1(t,s)G(s,y(s))ds{\displaystyle G(t,y(t))=g_{1}(t)-\int _{0}^{t}K_{1}(t,s)\,G(s,y(s))\,ds}where:g1(t):=g′(t)K(t,t)andK1(t,s):=−1K(t,t)∂K(t,s)∂t.{\displaystyle g_{1}(t):={\frac {g'(t)}{K(t,t)}}\,\,\,\,\,\,\,{\text{and}}\,\,\,\,\,\,\,K_{1}(t,s):=-{\frac {1}{K(t,t)}}{\frac {\partial K(t,s)}{\partial t}}.}The equation may however also be expressed in operator form which motivates the definition of the following operator called the nonlinear Volterra-Hammerstein operator:[3](Hy)(t):=∫0tK(t,s)G(s,y(s))ds{\displaystyle ({\mathcal {H}}y)(t):=\int _{0}^{t}K(t,s)\,G(s,y(s))\,ds}HereG:I×R→R{\displaystyle G:I\times \mathbb {R} \to \mathbb {R} }is a smooth function while the kernelKmay be continuous, i.e. 
bounded, or weakly singular.[3]The corresponding second-kind equation, called the Volterra-Hammerstein integral equation of the second kind or simply the Hammerstein equation for short, can be expressed as:[3]y(t)=g(t)+(Hy)(t){\displaystyle y(t)=g(t)+({\mathcal {H}}y)(t)}In certain applications, the nonlinearity of the functionGmay be treated as being only semi-linear in the form of:[3]G(s,y)=y+H(s,y){\displaystyle G(s,y)=y+H(s,y)}In this case, we obtain the following semi-linear Volterra integral equation:[3]y(t)=g(t)+(Hy)(t)=g(t)+∫0tK(t,s)[y(s)+H(s,y(s))]ds{\displaystyle y(t)=g(t)+({\mathcal {H}}y)(t)=g(t)+\int _{0}^{t}K(t,s)[y(s)+H(s,y(s))]\,ds}In this form, we can state an existence and uniqueness theorem for the semi-linear Hammerstein integral equation.[3] Theorem—Suppose that the semi-linear Hammerstein equation has a unique solutiony∈C(I){\displaystyle y\in C(I)}and thatH:I×R→R{\displaystyle H:I\times \mathbb {R} \to \mathbb {R} }is a Lipschitz continuous function. Then the solution of this equation may be written in the form:y(t)=yl(t)+∫0tR(t,s)H(s,y(s))ds{\displaystyle y(t)=y_{l}(t)+\int _{0}^{t}R(t,s)\,H(s,y(s))\,ds}whereyl(t){\displaystyle y_{l}(t)}denotes the unique solution of the linear part of the equation above and is given by:yl(t)=g(t)+∫0tR(t,s)g(s)ds{\displaystyle y_{l}(t)=g(t)+\int _{0}^{t}R(t,s)\,g(s)\,ds}withR(t,s){\displaystyle R(t,s)}denoting the resolvent kernel. We can also write the Hammerstein equation using a different operator called the Niemytzki operator, or substitution operator,N{\displaystyle {\mathcal {N}}}defined as follows:[3](Nφ)(t):=G(t,φ(t)){\displaystyle ({\mathcal {N}}\varphi )(t):=G(t,\varphi (t))}More about this operator can be found on page 75 of the cited book.[3] Integral equations are important in many applications. Problems in which integral equations are encountered includeradiative transfer, and theoscillationof a string, membrane, or axle. Oscillation problems may also be solved asdifferential equations.
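The successive approximations (Picard iteration) that underlie existence proofs for Hammerstein-type equations can also be run numerically. The sketch below is illustrative only: the helper name `hammerstein_picard`, the trapezoidal rule, and the test problem are our own choices, not a prescribed method.

```python
import math

def hammerstein_picard(g, K, G, T, n, iters=40):
    """Picard iteration y_{k+1} = g + H(y_k) for the Hammerstein equation
    y(t) = g(t) + int_0^t K(t, s) G(s, y(s)) ds, with the integral
    approximated by the trapezoidal rule on a uniform grid."""
    h = T / n
    t = [i * h for i in range(n + 1)]
    y = [g(ti) for ti in t]  # start the iteration from y_0 = g
    for _ in range(iters):
        new = []
        for i in range(n + 1):
            vals = [K(t[i], t[j]) * G(t[j], y[j]) for j in range(i + 1)]
            integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if i else 0.0
            new.append(g(t[i]) + integral)
        y = new
    return t, y

# g = 0, K = 1, G(s, y) = 1 + y^2 is equivalent to y' = 1 + y^2, y(0) = 0,
# whose exact solution is y(t) = tan(t).
t, y = hammerstein_picard(lambda t: 0.0, lambda t, s: 1.0,
                          lambda s, v: 1.0 + v * v, T=0.5, n=200)
print(abs(y[-1] - math.tan(0.5)))
```

On [0, 0.5] the iteration map is a contraction (the Lipschitz constant of G times the interval length is below 1), so the iterates converge to the discrete fixed point.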
https://en.wikipedia.org/wiki/Integral_equation
In mathematics, and specifically inoperator theory, apositive-definite function on a grouprelates the notions of positivity, in the context ofHilbert spaces, and algebraicgroups. It can be viewed as a particular type ofpositive-definite kernelwhere the underlying set has the additional group structure. LetG{\displaystyle G}be a group,H{\displaystyle H}be a complex Hilbert space, andL(H){\displaystyle L(H)}be the bounded operators onH{\displaystyle H}. Apositive-definite functiononG{\displaystyle G}is a functionF:G→L(H){\displaystyle F:G\to L(H)}that satisfies∑s,t∈G⟨F(s−1t)h(t),h(s)⟩≥0{\displaystyle \sum _{s,t\in G}\langle F(s^{-1}t)h(t),h(s)\rangle \geq 0}for every functionh:G→H{\displaystyle h:G\to H}with finite support (h{\displaystyle h}takes non-zero values for only finitely manys{\displaystyle s}). In other words, a functionF:G→L(H){\displaystyle F:G\to L(H)}is said to be a positive-definite function if the kernelK:G×G→L(H){\displaystyle K:G\times G\to L(H)}defined byK(s,t)=F(s−1t){\displaystyle K(s,t)=F(s^{-1}t)}is a positive-definite kernel. Such a kernel isG{\displaystyle G}-symmetric, that is, it is invariant under leftG{\displaystyle G}-action:K(s,t)=K(rs,rt),∀r∈G{\displaystyle K(s,t)=K(rs,rt),\quad \forall r\in G}WhenG{\displaystyle G}is alocally compact group, the definition generalizes by integration over its left-invariantHaar measureμ{\displaystyle \mu }. A positive-definite function onG{\displaystyle G}is a continuous functionF:G→L(H){\displaystyle F:G\to L(H)}that satisfies∫s,t∈G⟨F(s−1t)h(t),h(s)⟩μ(ds)μ(dt)≥0,{\displaystyle \int _{s,t\in G}\langle F(s^{-1}t)h(t),h(s)\rangle \;\mu (ds)\mu (dt)\geq 0,}for every continuous functionh:G→H{\displaystyle h:G\to H}withcompact support. The constant functionF(g)=I{\displaystyle F(g)=I}, whereI{\displaystyle I}is the identity operator onH{\displaystyle H}, is positive-definite. LetG{\displaystyle G}be a finite abelian group andH{\displaystyle H}be the one-dimensional Hilbert spaceC{\displaystyle \mathbb {C} }. Anycharacterχ:G→C{\displaystyle \chi :G\to \mathbb {C} }is positive-definite.
(This is a special case ofunitary representation.) To show this, recall that a character of a finite groupG{\displaystyle G}is a homomorphism fromG{\displaystyle G}to the multiplicative group of norm-1 complex numbers. Then, for any functionh:G→C{\displaystyle h:G\to \mathbb {C} },∑s,t∈Gχ(s−1t)h(t)h(s)¯=∑s,t∈Gχ(s−1)h(t)χ(t)h(s)¯=∑sχ(s−1)h(s)¯∑th(t)χ(t)=|∑th(t)χ(t)|2≥0.{\displaystyle \sum _{s,t\in G}\chi (s^{-1}t)h(t){\overline {h(s)}}=\sum _{s,t\in G}\chi (s^{-1})h(t)\chi (t){\overline {h(s)}}=\sum _{s}\chi (s^{-1}){\overline {h(s)}}\sum _{t}h(t)\chi (t)=\left|\sum _{t}h(t)\chi (t)\right|^{2}\geq 0.}WhenG=Rn{\displaystyle G=\mathbb {R} ^{n}}with theLebesgue measure, andH=Cm{\displaystyle H=\mathbb {C} ^{m}}, a positive-definite function onG{\displaystyle G}is a continuous functionF:Rn→Cm×m{\displaystyle F:\mathbb {R} ^{n}\to \mathbb {C} ^{m\times m}}such that∫x,y∈Rnh(x)†F(x−y)h(y)dxdy≥0{\displaystyle \int _{x,y\in \mathbb {R} ^{n}}h(x)^{\dagger }F(x-y)h(y)\;dxdy\geq 0}for every continuous functionh:Rn→Cm{\displaystyle h:\mathbb {R} ^{n}\to \mathbb {C} ^{m}}with compact support. Aunitary representationis a unital homomorphismΦ:G→L(H){\displaystyle \Phi :G\to L(H)}whereΦ(s){\displaystyle \Phi (s)}is a unitary operator for alls{\displaystyle s}. For suchΦ{\displaystyle \Phi },Φ(s−1)=Φ(s)∗{\displaystyle \Phi (s^{-1})=\Phi (s)^{*}}. Positive-definite functions onG{\displaystyle G}are intimately related to unitary representations ofG{\displaystyle G}. Every unitary representation ofG{\displaystyle G}gives rise to a family of positive-definite functions. Conversely, given a positive-definite function, one can define a unitary representation ofG{\displaystyle G}in a natural way. LetΦ:G→L(H){\displaystyle \Phi :G\to L(H)}be a unitary representation ofG{\displaystyle G}. IfP∈L(H){\displaystyle P\in L(H)}is the projection onto a closed subspaceH′{\displaystyle H'}ofH{\displaystyle H}. 
ThenF(s)=PΦ(s){\displaystyle F(s)=P\Phi (s)}is a positive-definite function onG{\displaystyle G}with values inL(H′){\displaystyle L(H')}. This can be shown readily:∑s,t∈G⟨PΦ(s−1t)h(t),h(s)⟩=∑s,t∈G⟨Φ(t)h(t),Φ(s)h(s)⟩=‖∑tΦ(t)h(t)‖2≥0{\displaystyle \sum _{s,t\in G}\langle P\Phi (s^{-1}t)h(t),h(s)\rangle =\sum _{s,t\in G}\langle \Phi (t)h(t),\Phi (s)h(s)\rangle =\left\|\sum _{t}\Phi (t)h(t)\right\|^{2}\geq 0}for everyh:G→H′{\displaystyle h:G\to H'}with finite support. IfG{\displaystyle G}has a topology andΦ{\displaystyle \Phi }is weakly (resp. strongly) continuous, then clearly so isF{\displaystyle F}. On the other hand, consider now a positive-definite functionF{\displaystyle F}onG{\displaystyle G}. A unitary representation ofG{\displaystyle G}can be obtained as follows. LetC00(G,H){\displaystyle C_{00}(G,H)}be the family of functionsh:G→H{\displaystyle h:G\to H}with finite support. The corresponding positive kernelK(s,t)=F(s−1t){\displaystyle K(s,t)=F(s^{-1}t)}defines a (possibly degenerate) inner product onC00(G,H){\displaystyle C_{00}(G,H)}. Let the resulting Hilbert space be denoted byV{\displaystyle V}. We notice that the "matrix elements" satisfyK(s,t)=K(a−1s,a−1t){\displaystyle K(s,t)=K(a^{-1}s,a^{-1}t)}for alla,s,t{\displaystyle a,s,t}inG{\displaystyle G}. SoUah(s)=h(a−1s){\displaystyle U_{a}h(s)=h(a^{-1}s)}preserves the inner product onV{\displaystyle V}, i.e. it is unitary inL(V){\displaystyle L(V)}. It is clear that the mapΦ(a)=Ua{\displaystyle \Phi (a)=U_{a}}is a representation ofG{\displaystyle G}onV{\displaystyle V}. The unitary representation is unique, up to Hilbert space isomorphism, provided the following minimality condition holds: where⋁{\displaystyle \bigvee }denotes the closure of the linear span. IdentifyH{\displaystyle H}as elements (possibly equivalence classes) inV{\displaystyle V}, whose support consists of the identity elemente∈G{\displaystyle e\in G}, and letP{\displaystyle P}be the projection onto this subspace. Then we havePUaP=F(a){\displaystyle PU_{a}P=F(a)}for alla∈G{\displaystyle a\in G}. LetG{\displaystyle G}be the additive group of integersZ{\displaystyle \mathbb {Z} }.
The kernelK(n,m)=F(m−n){\displaystyle K(n,m)=F(m-n)}is called a kernel ofToeplitztype, by analogy withToeplitz matrices. IfF{\displaystyle F}is of the formF(n)=Tn{\displaystyle F(n)=T^{n}}whereT{\displaystyle T}is a bounded operator acting on some Hilbert space, then one can show that the kernelK(n,m){\displaystyle K(n,m)}is positive if and only ifT{\displaystyle T}is acontraction. By the discussion from the previous section, we have a unitary representation ofZ{\displaystyle \mathbb {Z} },Φ(n)=Un{\displaystyle \Phi (n)=U^{n}}for a unitary operatorU{\displaystyle U}. Moreover, the propertyPUaP=F(a){\displaystyle PU_{a}P=F(a)}now translates toPUnP=Tn{\displaystyle PU^{n}P=T^{n}}. This is preciselySz.-Nagy's dilation theoremand hints at an important dilation-theoretic characterization of positivity that leads to a parametrization of arbitrary positive-definite kernels.
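The character computation given earlier can be checked numerically for a cyclic group. In this sketch (the variable names `chi`, `h`, and `quad` are our own) the quadratic form for the kernel K(s, t) = χ(t − s) on Z₇ collapses to |Σₜ h(t)χ(t)|², and is therefore real and nonnegative:

```python
import cmath
import random

# Character chi_m of the cyclic group Z_n: chi_m(k) = exp(2*pi*i*m*k/n).
n, m = 7, 3
def chi(k):
    return cmath.exp(2j * cmath.pi * m * k / n)

# A random finitely supported h : Z_n -> C.
random.seed(0)
h = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

# Quadratic form sum_{s,t} chi(s^{-1} t) h(t) conj(h(s)); for the additive
# group Z_n, s^{-1} t is simply t - s.
quad = sum(chi(t - s) * h[t] * h[s].conjugate()
           for s in range(n) for t in range(n))
print(quad.real >= -1e-9, abs(quad.imag) < 1e-9)
```

Up to floating-point roundoff, `quad` equals the square modulus of Σₜ h(t)χ(t), matching the algebraic identity in the text.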
https://en.wikipedia.org/wiki/Positive-definite_function_on_a_group
Infunctional analysis, areproducing kernel Hilbert space(RKHS) is aHilbert spaceof functions in which point evaluation is a continuouslinear functional. Specifically, a Hilbert spaceH{\displaystyle H}of functions from a setX{\displaystyle X}(toR{\displaystyle \mathbb {R} }orC{\displaystyle \mathbb {C} }) is an RKHS if the point-evaluation functionalLx:H→C{\displaystyle L_{x}:H\to \mathbb {C} },Lx(f)=f(x){\displaystyle L_{x}(f)=f(x)}, is continuous for everyx∈X{\displaystyle x\in X}. Equivalently,H{\displaystyle H}is an RKHS if there exists a functionKx∈H{\displaystyle K_{x}\in H}such that, for allf∈H{\displaystyle f\in H},⟨f,Kx⟩=f(x).{\displaystyle \langle f,K_{x}\rangle =f(x).}The functionKx{\displaystyle K_{x}}is then called thereproducing kernel, and it reproduces the value off{\displaystyle f}atx{\displaystyle x}via the inner product. An immediate consequence of this property is that convergence in norm implies uniform convergence on any subset ofX{\displaystyle X}on which‖Kx‖{\displaystyle \|K_{x}\|}is bounded. However, the converse does not necessarily hold. Often the setX{\displaystyle X}carries a topology, and‖Kx‖{\displaystyle \|K_{x}\|}depends continuously onx∈X{\displaystyle x\in X}, in which case: convergence in norm implies uniform convergence on compact subsets ofX{\displaystyle X}. It is not entirely straightforward to construct natural examples of Hilbert spaces of functions which are not RKHSs in a non-trivial fashion.[1]Some examples, however, have been found.[2][3] While, formally,L2spacesare defined as Hilbert spaces of equivalence classes of functions, this definition can trivially be extended to a Hilbert space of functions by choosing a (total) function as a representative for each equivalence class. However, no choice of representatives can make this space an RKHS (K0{\displaystyle K_{0}}would need to be the non-existent Dirac delta function).
However, there are RKHSs in which the norm is anL2-norm, such as the space of band-limited functions (see the example below). An RKHS is associated with a kernel that reproduces every function in the space in the sense that for everyx{\displaystyle x}in the set on which the functions are defined, "evaluation atx{\displaystyle x}" can be performed by taking an inner product with a function determined by the kernel. Such areproducing kernelexists if and only if every evaluation functional is continuous. The reproducing kernel was first introduced in the 1907 work ofStanisław Zarembaconcerningboundary value problemsforharmonicandbiharmonic functions.James Mercersimultaneously examinedfunctionswhich satisfy the reproducing property in the theory ofintegral equations. The idea of the reproducing kernel remained untouched for nearly twenty years until it appeared in the dissertations ofGábor Szegő,Stefan Bergman, andSalomon Bochner. The subject was eventually systematically developed in the early 1950s byNachman Aronszajnand Stefan Bergman.[4] These spaces have wide applications, includingcomplex analysis,harmonic analysis, andquantum mechanics. Reproducing kernel Hilbert spaces are particularly important in the field ofstatistical learning theorybecause of the celebratedrepresenter theoremwhich states that every function in an RKHS that minimises an empirical risk functional can be written as alinear combinationof the kernel function evaluated at the training points. This is a practically useful result as it effectively simplifies theempirical risk minimizationproblem from an infinite dimensional to a finite dimensional optimization problem. For ease of understanding, we provide the framework for real-valued Hilbert spaces. 
The theory can be easily extended to spaces of complex-valued functions and hence include the many important examples of reproducing kernel Hilbert spaces that are spaces ofanalytic functions.[5] LetX{\displaystyle X}be an arbitrarysetandH{\displaystyle H}aHilbert spaceofreal-valued functionsonX{\displaystyle X}, equipped with pointwise addition and pointwise scalar multiplication. Theevaluationfunctional over the Hilbert space of functionsH{\displaystyle H}is a linear functional that evaluates each function at a pointx{\displaystyle x},Lx:f↦f(x)∀f∈H.{\displaystyle L_{x}:f\mapsto f(x)\quad \forall f\in H.}We say thatHis areproducing kernel Hilbert spaceif, for allx{\displaystyle x}inX{\displaystyle X},Lx{\displaystyle L_{x}}iscontinuousat everyf{\displaystyle f}inH{\displaystyle H}or, equivalently, ifLx{\displaystyle L_{x}}is abounded operatoronH{\displaystyle H}, i.e. there exists someMx>0{\displaystyle M_{x}>0}such that|Lx(f)|=|f(x)|≤Mx‖f‖H∀f∈H.(1){\displaystyle |L_{x}(f)|=|f(x)|\leq M_{x}\,\|f\|_{H}\qquad \forall f\in H.\qquad (1)}AlthoughMx<∞{\displaystyle M_{x}<\infty }is assumed for allx∈X{\displaystyle x\in X}, it might still be the case thatsupxMx=∞{\textstyle \sup _{x}M_{x}=\infty }. While property (1) is the weakest condition that ensures both the existence of an inner product and the evaluation of every function inH{\displaystyle H}at every point in the domain, it does not lend itself to easy application in practice. A more intuitive definition of the RKHS can be obtained by observing that this property guarantees that the evaluation functional can be represented by taking the inner product off{\displaystyle f}with a functionKx{\displaystyle K_{x}}inH{\displaystyle H}. This function is the so-calledreproducing kernel[citation needed]for the Hilbert spaceH{\displaystyle H}from which the RKHS takes its name.
More formally, theRiesz representation theoremimplies that for allx{\displaystyle x}inX{\displaystyle X}there exists a unique elementKx{\displaystyle K_{x}}ofH{\displaystyle H}with the reproducing property, SinceKx{\displaystyle K_{x}}is itself a function defined onX{\displaystyle X}with values in the fieldR{\displaystyle \mathbb {R} }(orC{\displaystyle \mathbb {C} }in the case of complex Hilbert spaces) and asKx{\displaystyle K_{x}}is inH{\displaystyle H}we have that whereKy∈H{\displaystyle K_{y}\in H}is the element inH{\displaystyle H}associated toLy{\displaystyle L_{y}}. This allows us to define the reproducing kernel ofH{\displaystyle H}as a functionK:X×X→R{\displaystyle K:X\times X\to \mathbb {R} }(orC{\displaystyle \mathbb {C} }in the complex case) by From this definition it is easy to see thatK:X×X→R{\displaystyle K:X\times X\to \mathbb {R} }(orC{\displaystyle \mathbb {C} }in the complex case) is both symmetric (resp. conjugate symmetric) andpositive definite, i.e. for everyn∈N,x1,…,xn∈X,andc1,…,cn∈R.{\displaystyle n\in \mathbb {N} ,x_{1},\dots ,x_{n}\in X,{\text{ and }}c_{1},\dots ,c_{n}\in \mathbb {R} .}[6]The Moore–Aronszajn theorem (see below) is a sort of converse to this: if a functionK{\displaystyle K}satisfies these conditions then there is a Hilbert space of functions onX{\displaystyle X}for which it is a reproducing kernel. The simplest example of a reproducing kernel Hilbert space is the spaceL2(X,μ){\displaystyle L^{2}(X,\mu )}whereX{\displaystyle X}is a set andμ{\displaystyle \mu }is thecounting measureonX{\displaystyle X}. Forx∈X{\displaystyle x\in X}, the reproducing kernelKx{\displaystyle K_{x}}is theindicator functionof the one point set{x}⊂X{\displaystyle \{x\}\subset X}. Nontrivial reproducing kernel Hilbert spaces often involveanalytic functions, as we now illustrate by example. Consider the Hilbert space ofbandlimitedcontinuous functionsH{\displaystyle H}. 
Fix somecutoff frequency0<a<∞{\displaystyle 0<a<\infty }and define the Hilbert space whereL2(R){\displaystyle L^{2}(\mathbb {R} )}is the set of square integrable functions, andF(ω)=∫−∞∞f(t)e−iωtdt{\textstyle F(\omega )=\int _{-\infty }^{\infty }f(t)e^{-i\omega t}\,dt}is theFourier transformoff{\displaystyle f}. As the inner product, we use Since this is a closed subspace ofL2(R){\displaystyle L^{2}(\mathbb {R} )}, it is a Hilbert space. Moreover, the elements ofH{\displaystyle H}are smooth functions onR{\displaystyle \mathbb {R} }that tend to zero at infinity, essentially by theRiemann-Lebesgue lemma. In fact, the elements ofH{\displaystyle H}are the restrictions toR{\displaystyle \mathbb {R} }of entireholomorphic functions, by thePaley–Wiener theorem. From theFourier inversion theorem, we have It then follows by theCauchy–Schwarz inequalityandPlancherel's theoremthat, for allx{\displaystyle x}, This inequality shows that the evaluation functional is bounded, proving thatH{\displaystyle H}is indeed a RKHS. The kernel functionKx{\displaystyle K_{x}}in this case is given by The Fourier transform ofKx(y){\displaystyle K_{x}(y)}defined above is given by which is a consequence of thetime-shifting property of the Fourier transform. Consequently, usingPlancherel's theorem, we have Thus we obtain the reproducing property of the kernel. Kx{\displaystyle K_{x}}in this case is the "bandlimited version" of theDirac delta function, and thatKx(y){\displaystyle K_{x}(y)}converges toδ(y−x){\displaystyle \delta (y-x)}in the weak sense as the cutoff frequencya{\displaystyle a}tends to infinity. We have seen how a reproducing kernel Hilbert space defines a reproducing kernel function that is both symmetric andpositive definite. The Moore–Aronszajn theorem goes in the other direction; it states that every symmetric, positive definite kernel defines a unique reproducing kernel Hilbert space. 
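As a numerical sanity check of the bandlimited example, the Gram matrix of the sinc reproducing kernel at distinct points must be positive definite, so a plain Cholesky factorization of it succeeds. The helpers `sinc_kernel` and `cholesky` below are our own minimal implementations, not a library API:

```python
import math
import random

def sinc_kernel(x, y, a=math.pi):
    """Reproducing kernel K(x, y) = sin(a(x - y)) / (pi (x - y)) of the
    space of L2 functions bandlimited to [-a, a]."""
    if x == y:
        return a / math.pi
    return math.sin(a * (x - y)) / (math.pi * (x - y))

def cholesky(M):
    """Textbook Cholesky factorization M = L L^T; the square root on the
    diagonal raises ValueError unless M is positive definite."""
    size = len(M)
    L = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(M[i][i] - s)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

random.seed(1)
pts = sorted(random.uniform(-5.0, 5.0) for _ in range(6))
gram = [[sinc_kernel(x, y) for y in pts] for x in pts]
L = cholesky(gram)  # succeeds: the Gram matrix is positive definite
print(all(L[i][i] > 0.0 for i in range(6)))
```

Strict positive definiteness of the sinc kernel at distinct points is what guarantees the factorization does not fail.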
The theorem first appeared in Aronszajn'sTheory of Reproducing Kernels, although he attributes it toE. H. Moore. Proof. For allxinX, defineKx=K(x, ⋅ ). LetH0be the linear span of {Kx:x∈X}. Define an inner product onH0by which impliesK(x,y)=⟨Kx,Ky⟩H0{\displaystyle K(x,y)=\left\langle K_{x},K_{y}\right\rangle _{H_{0}}}. The symmetry of this inner product follows from the symmetry ofKand the non-degeneracy follows from the fact thatKis positive definite. LetHbe thecompletionofH0with respect to this inner product. ThenHconsists of functions of the form Now we can check the reproducing property (2): To prove uniqueness, letGbe another Hilbert space of functions for whichKis a reproducing kernel. For everyxandyinX, (2) implies that By linearity,⟨⋅,⋅⟩H=⟨⋅,⋅⟩G{\displaystyle \langle \cdot ,\cdot \rangle _{H}=\langle \cdot ,\cdot \rangle _{G}}on the span of{Kx:x∈X}{\displaystyle \{K_{x}:x\in X\}}. ThenH⊂G{\displaystyle H\subset G}becauseGis complete and containsH0and hence contains its completion. Now we need to prove that every element ofGis inH. Letf{\displaystyle f}be an element ofG. SinceHis a closed subspace ofG, we can writef=fH+fH⊥{\displaystyle f=f_{H}+f_{H^{\bot }}}wherefH∈H{\displaystyle f_{H}\in H}andfH⊥∈H⊥{\displaystyle f_{H^{\bot }}\in H^{\bot }}. Now ifx∈X{\displaystyle x\in X}then, sinceKis a reproducing kernel ofGandH: where we have used the fact thatKx{\displaystyle K_{x}}belongs toHso that its inner product withfH⊥{\displaystyle f_{H^{\bot }}}inGis zero. This shows thatf=fH{\displaystyle f=f_{H}}inGand concludes the proof. We may characterize a symmetric positive definite kernelK{\displaystyle K}via the integral operator usingMercer's theoremand obtain an additional view of the RKHS. LetX{\displaystyle X}be a compact space equipped with a strictly positive finiteBorel measureμ{\displaystyle \mu }andK:X×X→R{\displaystyle K:X\times X\to \mathbb {R} }a continuous, symmetric, and positive definite function. 
Define the integral operatorTK:L2(X)→L2(X){\displaystyle T_{K}:L_{2}(X)\to L_{2}(X)}as whereL2(X){\displaystyle L_{2}(X)}is the space of square integrable functions with respect toμ{\displaystyle \mu }. Mercer's theorem states that the spectral decomposition of the integral operatorTK{\displaystyle T_{K}}ofK{\displaystyle K}yields a series representation ofK{\displaystyle K}in terms of the eigenvalues and eigenfunctions ofTK{\displaystyle T_{K}}. This then implies thatK{\displaystyle K}is a reproducing kernel so that the corresponding RKHS can be defined in terms of these eigenvalues and eigenfunctions. We provide the details below. Under these assumptionsTK{\displaystyle T_{K}}is a compact, continuous, self-adjoint, and positive operator. Thespectral theoremfor self-adjoint operators implies that there is an at most countable decreasing sequence(σi)i≥0{\displaystyle (\sigma _{i})_{i\geq 0}}such thatlimi→∞σi=0{\textstyle \lim _{i\to \infty }\sigma _{i}=0}andTKφi(x)=σiφi(x){\displaystyle T_{K}\varphi _{i}(x)=\sigma _{i}\varphi _{i}(x)}, where the{φi}{\displaystyle \{\varphi _{i}\}}form an orthonormal basis ofL2(X){\displaystyle L_{2}(X)}. By the positivity ofTK,σi>0{\displaystyle T_{K},\sigma _{i}>0}for alli.{\displaystyle i.}One can also show thatTK{\displaystyle T_{K}}maps continuously into the space of continuous functionsC(X){\displaystyle C(X)}and therefore we may choose continuous functions as the eigenvectors, that is,φi∈C(X){\displaystyle \varphi _{i}\in C(X)}for alli.{\displaystyle i.}Then by Mercer's theoremK{\displaystyle K}may be written in terms of the eigenvalues and continuous eigenfunctions as for allx,y∈X{\displaystyle x,y\in X}such that This above series representation is referred to as a Mercer kernel or Mercer representation ofK{\displaystyle K}. 
Furthermore, it can be shown that the RKHSH{\displaystyle H}ofK{\displaystyle K}is given by where the inner product ofH{\displaystyle H}is given by This representation of the RKHS has application in probability and statistics, for example to theKarhunen-Loève representationfor stochastic processes andkernel PCA. Afeature mapis a mapφ:X→F{\displaystyle \varphi \colon X\rightarrow F}, whereF{\displaystyle F}is a Hilbert space which we will call the feature space. The first sections presented the connection between bounded/continuous evaluation functionals, positive definite functions, and integral operators, and in this section we provide another representation of the RKHS in terms of feature maps. Every feature map defines a kernel via ClearlyK{\displaystyle K}is symmetric, and positive definiteness follows from the properties of the inner product inF{\displaystyle F}. Conversely, every positive definite function and corresponding reproducing kernel Hilbert space has infinitely many associated feature maps such that (3) holds. For example, we can trivially takeF=H{\displaystyle F=H}andφ(x)=Kx{\displaystyle \varphi (x)=K_{x}}for allx∈X{\displaystyle x\in X}. Then (3) is satisfied by the reproducing property. Another classical example of a feature map relates to the previous section regarding integral operators by takingF=ℓ2{\displaystyle F=\ell ^{2}}andφ(x)=(σiφi(x))i{\displaystyle \varphi (x)=({\sqrt {\sigma _{i}}}\varphi _{i}(x))_{i}}. This connection between kernels and feature maps provides us with a new way to understand positive definite functions and hence reproducing kernels as inner products inH{\displaystyle H}. Moreover, every feature map can naturally define a RKHS by means of the definition of a positive definite function. Lastly, feature maps allow us to construct function spaces that reveal another perspective on the RKHS.
Consider the linear space We can define a norm onHφ{\displaystyle H_{\varphi }}by It can be shown thatHφ{\displaystyle H_{\varphi }}is a RKHS with kernel defined byK(x,y)=⟨φ(x),φ(y)⟩F{\displaystyle K(x,y)=\langle \varphi (x),\varphi (y)\rangle _{F}}. This representation implies that the elements of the RKHS are inner products of elements in the feature space and can accordingly be seen as hyperplanes. This view of the RKHS is related to thekernel trickin machine learning.[7] Useful properties of RKHSs: The RKHSH{\displaystyle H}corresponding to this kernel is the dual space, consisting of functionsf(x)=⟨x,β⟩{\displaystyle f(x)=\langle x,\beta \rangle }satisfying‖f‖H2=‖β‖2{\displaystyle \|f\|_{H}^{2}=\|\beta \|^{2}}. These are another common class of kernels which satisfyK(x,y)=K(‖x−y‖){\displaystyle K(x,y)=K(\|x-y\|)}. Some examples include: We also provide examples ofBergman kernels. LetXbe finite and letHconsist of all complex-valued functions onX. Then an element ofHcan be represented as an array of complex numbers. If the usualinner productis used, thenKxis the function whose value is 1 atxand 0 everywhere else, andK(x,y){\displaystyle K(x,y)}can be thought of as an identity matrix since In this case,His isomorphic toCn{\displaystyle \mathbb {C} ^{n}}. The case ofX=D{\displaystyle X=\mathbb {D} }(whereD{\displaystyle \mathbb {D} }denotes theunit disc) is more sophisticated. Here theBergman spaceA2(D){\displaystyle A^{2}(\mathbb {D} )}is the space ofsquare-integrableholomorphic functionsonD{\displaystyle \mathbb {D} }. It can be shown that the reproducing kernel forA2(D){\displaystyle A^{2}(\mathbb {D} )}is Lastly, the space of band limited functions inL2(R){\displaystyle L^{2}(\mathbb {R} )}with bandwidth2a{\displaystyle 2a}is a RKHS with reproducing kernel In this section we extend the definition of the RKHS to spaces of vector-valued functions as this extension is particularly important inmulti-task learningandmanifold regularization. 
The main difference is that the reproducing kernelΓ{\displaystyle \Gamma }is a symmetric function that is now a positive semi-definitematrixfor everyx,y{\displaystyle x,y}inX{\displaystyle X}. More formally, we define a vector-valued RKHS (vvRKHS) as a Hilbert space of functionsf:X→RT{\displaystyle f:X\to \mathbb {R} ^{T}}such that for allc∈RT{\displaystyle c\in \mathbb {R} ^{T}}andx∈X{\displaystyle x\in X} and This second property parallels the reproducing property for the scalar-valued case. This definition can also be connected to integral operators, bounded evaluation functions, and feature maps as we saw for the scalar-valued RKHS. We can equivalently define the vvRKHS as a vector-valued Hilbert space with a bounded evaluation functional and show that this implies the existence of a unique reproducing kernel by the Riesz Representation theorem. Mercer's theorem can also be extended to address the vector-valued setting and we can therefore obtain a feature map view of the vvRKHS. Lastly, it can also be shown that the closure of the span of{Γxc:x∈X,c∈RT}{\displaystyle \{\Gamma _{x}c:x\in X,c\in \mathbb {R} ^{T}\}}coincides withH{\displaystyle H}, another property similar to the scalar-valued case. We can gain intuition for the vvRKHS by taking a component-wise perspective on these spaces. In particular, we find that every vvRKHS is isometricallyisomorphicto a scalar-valued RKHS on a particular input space. LetΛ={1,…,T}{\displaystyle \Lambda =\{1,\dots ,T\}}. Consider the spaceX×Λ{\displaystyle X\times \Lambda }and the corresponding reproducing kernel As noted above, the RKHS associated to this reproducing kernel is given by the closure of the span of{γ(x,t):x∈X,t∈Λ}{\displaystyle \{\gamma _{(x,t)}:x\in X,t\in \Lambda \}}whereγ(x,t)(y,s)=γ((x,t),(y,s)){\displaystyle \gamma _{(x,t)}(y,s)=\gamma ((x,t),(y,s))}for every set of pairs(x,t),(y,s)∈X×Λ{\displaystyle (x,t),(y,s)\in X\times \Lambda }. 
The connection to the scalar-valued RKHS can then be made by the fact that every matrix-valued kernel can be identified with a kernel of the form of (4) via Moreover, every kernel with the form of (4) defines a matrix-valued kernel with the above expression. Now letting the mapD:HΓ→Hγ{\displaystyle D:H_{\Gamma }\to H_{\gamma }}be defined as whereet{\displaystyle e_{t}}is thetth{\displaystyle t^{\text{th}}}component of the canonical basis forRT{\displaystyle \mathbb {R} ^{T}}, one can show thatD{\displaystyle D}is bijective and an isometry betweenHΓ{\displaystyle H_{\Gamma }}andHγ{\displaystyle H_{\gamma }}. While this view of the vvRKHS can be useful in multi-task learning, this isometry does not reduce the study of the vector-valued case to that of the scalar-valued case. In fact, this isometry procedure can make both the scalar-valued kernel and the input space too difficult to work with in practice as properties of the original kernels are often lost.[11][12][13] An important class of matrix-valued reproducing kernels areseparablekernels, which can be factorized as the product of a scalar-valued kernel and aT{\displaystyle T}-dimensional symmetric positive semi-definite matrix. In light of our previous discussion, these kernels are of the form for allx,y{\displaystyle x,y}inX{\displaystyle X}andt,s{\displaystyle t,s}inΛ{\displaystyle \Lambda }. As the scalar-valued kernel encodes dependencies between the inputs, we can observe that the matrix-valued kernel encodes dependencies among both the inputs and the outputs. We lastly remark that the above theory can be further extended to spaces of functions with values in function spaces, but obtaining kernels for these spaces is a more difficult task.[14] TheReLU functionis commonly defined asf(x)=max{0,x}{\displaystyle f(x)=\max\{0,x\}}and is a mainstay in the architecture of neural networks where it is used as an activation function.
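The separable matrix-valued kernel and its scalar-valued counterpart on X × Λ can be sketched numerically. The scalar kernel k, the output matrix A, and the inputs below are illustrative choices; the check confirms that the Gram matrix of the flattened scalar kernel is a Kronecker product, reflecting the isometry described above:

```python
import numpy as np

# Sketch of a separable matrix-valued kernel Gamma(x, y) = k(x, y) * A and the
# equivalent scalar-valued kernel gamma((x,t),(y,s)) = k(x, y) * A[t, s] on the
# product space X x {0, ..., T-1}.  k, A, and X are illustrative.
def k(x, y):                       # scalar Gaussian kernel on the inputs
    return np.exp(-0.5 * (x - y) ** 2)

A = np.array([[2.0, 0.5],          # symmetric positive semi-definite, T = 2
              [0.5, 1.0]])

def gamma(xt, ys):                 # scalar kernel on X x Lambda
    (x, t), (y, s) = xt, ys
    return k(x, y) * A[t, s]

# Over a finite input set, the Gram matrix of gamma is the Kronecker product
# of the scalar Gram matrix with A (with (input, task) index ordering).
X = np.array([0.0, 1.0, 2.5])
Kx = np.array([[k(a, b) for b in X] for a in X])
G_gamma = np.array([[gamma((a, t), (b, s)) for b in X for s in range(2)]
                    for a in X for t in range(2)])
assert np.allclose(G_gamma, np.kron(Kx, A))
```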
One can construct a ReLU-like nonlinear function using the theory of reproducing kernel Hilbert spaces. Below, we derive this construction and show how it implies the representation power of neural networks with ReLU activations. We will work with the Hilbert spaceH=L21(0)[0,∞){\displaystyle {\mathcal {H}}=L_{2}^{1}(0)[0,\infty )}of absolutely continuous functions withf(0)=0{\displaystyle f(0)=0}and square integrable (i.e.L2{\displaystyle L_{2}}) derivative. It has the inner product To construct the reproducing kernel it suffices to consider a dense subspace, so letf∈C1[0,∞){\displaystyle f\in C^{1}[0,\infty )}andf(0)=0{\displaystyle f(0)=0}. The Fundamental Theorem of Calculus then gives where andKy′(x)=G(x,y),Ky(0)=0{\displaystyle K_{y}'(x)=G(x,y),\ K_{y}(0)=0}i.e. This impliesKy=K(⋅,y){\displaystyle K_{y}=K(\cdot ,y)}reproducesf{\displaystyle f}. Moreover, the minimum function onX×X=[0,∞)×[0,∞){\displaystyle X\times X=[0,\infty )\times [0,\infty )}has the following representations with the ReLU function: Using this formulation, we can apply therepresenter theoremto the RKHS, which lets one argue for the optimality of using ReLU activations in neural network settings.[citation needed]
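The reproducing property of the kernel K(x,y) = min(x,y) in this space, and its relation to ReLU, can be verified numerically. The test function below is an illustrative choice satisfying f(0) = 0:

```python
import numpy as np

# Numerical sketch of the reproducing property in the space above, with inner
# product <f, g> = int_0^inf f'(x) g'(x) dx.  The kernel section
# K_y(x) = min(x, y) has derivative 1 on (0, y) and 0 beyond, so
# <f, K_y> = int_0^y f'(x) dx = f(y).  The test function f is illustrative.
def f(x):  return x * np.exp(-x)           # smooth, f(0) = 0
def fp(x): return (1.0 - x) * np.exp(-x)   # derivative of f

y = 1.7
xs = np.linspace(0.0, y, 100001)
vals = fp(xs)
inner = np.sum((vals[:-1] + vals[1:]) * np.diff(xs)) / 2.0   # trapezoid rule
assert abs(inner - f(y)) < 1e-8            # <f, K_y> equals f(y)

# The min kernel relates to ReLU via min(x, y) = x - max(0, x - y)
relu = lambda z: max(0.0, z)
for x in (0.3, 1.7, 4.0):
    assert abs(min(x, y) - (x - relu(x - y))) < 1e-12
```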
https://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space
Kernel methodsare a well-established tool to analyze the relationship between input data and the corresponding output of a function. Kernels encapsulate the properties of functions in acomputationally efficientway and allow algorithms to easily swap functions of varying complexity. In typicalmachine learningalgorithms, these functions produce a scalar output. Recent development of kernel methods for functions with vector-valued output is due, at least in part, to interest in simultaneously solving related problems. Kernels which capture the relationship between the problems allow them toborrow strengthfrom each other. Algorithms of this type includemulti-task learning(also called multi-output learning or vector-valued learning),transfer learning, and co-kriging.Multi-label classificationcan be interpreted as mapping inputs to (binary) coding vectors with length equal to the number of classes. InGaussian processes, kernels are calledcovariance functions. Multiple-output functions correspond to considering multiple processes. SeeBayesian interpretation of regularizationfor the connection between the two perspectives. The history of learning vector-valued functions is closely linked totransfer learning- storing knowledge gained while solving one problem and applying it to a different but related problem. The fundamental motivation for transfer learning in the field of machine learning was discussed in a NIPS-95 workshop on “Learning to Learn”, which focused on the need for lifelong machine learning methods that retain and reuse previously learned knowledge. 
Research on transfer learning has attracted much attention since 1995 under different names: learning to learn, lifelong learning, knowledge transfer, inductive transfer, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, metalearning, and incremental/cumulative learning.[1]Interest in learning vector-valued functions was particularly sparked by multitask learning, a framework which tries to learn multiple, possibly different tasks simultaneously. Much of the initial research in multitask learning in the machine learning community was algorithmic in nature, and applied to methods such as neural networks, decision trees andk-nearest neighbors in the 1990s.[2]The use of probabilistic models and Gaussian processes was pioneered and largely developed in the context of geostatistics, where prediction over vector-valued output data is known as cokriging.[3][4][5]Geostatistical approaches to multivariate modeling are mostly formulated around the linear model of coregionalization (LMC), a generative approach for developing valid covariance functions that has been used for multivariate regression and in statistics for computer emulation of expensive multivariate computer codes. The regularization and kernel theory literature for vector-valued functions followed in the 2000s.[6][7]While the Bayesian and regularization perspectives were developed independently, they are in fact closely related.[8] In this context, the supervised learning problem is to learn the functionf{\displaystyle f}which best predicts vector-valued outputsyi{\displaystyle \mathbf {y_{i}} }given inputs (data)xi{\displaystyle \mathbf {x_{i}} }.
In general, each component of (yi{\displaystyle \mathbf {y_{i}} }) could have different input data (xd,i{\displaystyle \mathbf {x_{d,i}} }) with different cardinality (p{\displaystyle p}) and even different input spaces (X{\displaystyle {\mathcal {X}}}).[8]Geostatistics literature calls this caseheterotopic, and usesisotopicto indicate that each component of the output vector has the same set of inputs.[9] Here, for simplicity in the notation, we assume the number and sample space of the data for each output are the same. Sources:[8][10][11] From the regularization perspective, the problem is to learnf∗{\displaystyle f_{*}}belonging to areproducing kernel Hilbert spaceof vector-valued functions (H{\displaystyle {\mathcal {H}}}). This is similar to the scalar case ofTikhonov regularization, with some extra care in the notation. withc¯=(K(X,X)+λN(I))−1y¯{\displaystyle {\bar {\mathbf {c} }}=(\mathbf {K} (\mathbf {X} ,\mathbf {X} )+\lambda N\mathbf {(} I))^{-1}{\bar {\mathbf {y} }}},wherec¯andy¯{\displaystyle {\bar {\mathbf {c} }}{\text{ and }}{\bar {\mathbf {y} }}}are the coefficients and output vectors concatenated to formND{\displaystyle ND}vectors andK(X,X)is anND×ND{\displaystyle \mathbf {K} (\mathbf {X} ,\mathbf {X} ){\text{ is an }}ND\times ND}matrix ofN×N{\displaystyle N\times N}blocks:(K(xi,xj))d,d′{\displaystyle (\mathbf {K} (\mathbf {x_{i}} ,\mathbf {x_{j}} ))_{d,d'}} f∗(x)=∑i=1Nk(xi,x)ci=kx⊺c{\displaystyle f_{*}(\mathbf {x} )=\sum \limits _{i=1}^{N}k(\mathbf {x_{i}} ,\mathbf {x} )c_{i}=\mathbf {k} _{\mathbf {x} }^{\intercal }\mathbf {c} } Solve forc{\displaystyle \mathbf {c} }by taking the derivative of the learning problem, setting it equal to zero, and substituting in the above expression forf∗{\displaystyle f_{*}}: whereKij=k(xi,xj)=ithelement ofkxj{\displaystyle \mathbf {K} _{ij}=k(\mathbf {x_{i}} ,\mathbf {x_{j}} )=i^{\text{th}}{\text{ element of }}\mathbf {k} _{\mathbf {x_{j}} }} †{\displaystyle ^{\dagger }}It is possible, though non-trivial, to
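The closed form above can be sketched in a few lines of code. The data, the scalar kernel, and the output-correlation matrix B below are illustrative choices; the block kernel matrix is assembled as a Kronecker product, a common way to realize a separable matrix-valued kernel:

```python
import numpy as np

# Minimal sketch of the closed form c_bar = (K(X, X) + lam*N*I)^{-1} y_bar
# for vector-valued Tikhonov regularization, using an illustrative separable
# kernel K(x, x') = k(x, x') * B.  Data, kernel, and B are made up for the demo.
rng = np.random.default_rng(0)
N, D, lam = 30, 2, 1e-3
X = np.sort(rng.uniform(0.0, 5.0, N))
Y = np.column_stack([np.sin(X), np.cos(X)]) + 0.05 * rng.standard_normal((N, D))

k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)
B = np.array([[1.0, 0.4],
              [0.4, 1.0]])                     # output-correlation matrix

K_big = np.kron(k(X, X), B)                    # ND x ND block kernel matrix
y_bar = Y.reshape(-1)                          # outputs stacked sample by sample
c_bar = np.linalg.solve(K_big + lam * N * np.eye(N * D), y_bar)

def f_star(x_new):                             # predictor sum_i K(x_i, x) c_i
    return np.kron(k(np.atleast_1d(x_new), X), B) @ c_bar

pred = f_star(2.0)                             # roughly [sin(2), cos(2)]
```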
show that a representer theorem also holds for Tikhonov regularization in the vector-valued setting.[8] Note, the matrix-valued kernelK{\displaystyle \mathbf {K} }can also be defined by a scalar kernelR{\displaystyle R}on the spaceX×{1,…,D}{\displaystyle {\mathcal {X}}\times \{1,\ldots ,D\}}. Anisometryexists between the Hilbert spaces associated with these two kernels: The estimator of the vector-valued regularization framework can also be derived from a Bayesian viewpoint using Gaussian process methods in the case of a finite dimensionalReproducing kernel Hilbert space. The derivation is similar to the scalar-valued caseBayesian interpretation of regularization. The vector-valued functionf{\displaystyle {\textbf {f}}}, consisting ofD{\displaystyle D}outputs{fd}d=1D{\displaystyle \left\{f_{d}\right\}_{d=1}^{D}}, is assumed to follow a Gaussian process: wherem:X→RD{\displaystyle {\textbf {m}}:{\mathcal {X}}\to {\textbf {R}}^{D}}is now a vector of the mean functions{md(x)}d=1D{\displaystyle \left\{m_{d}({\textbf {x}})\right\}_{d=1}^{D}}for the outputs andK{\displaystyle {\textbf {K}}}is a positive definite matrix-valued function with entry(K(x,x′))d,d′{\displaystyle ({\textbf {K}}({\textbf {x}},{\textbf {x}}'))_{d,d'}}corresponding to the covariance between the outputsfd(x){\displaystyle f_{d}({\textbf {x}})}andfd′(x′){\displaystyle f_{d'}({\textbf {x}}')}. For a set of inputsX{\displaystyle {\textbf {X}}}, the prior distribution over the vectorf(X){\displaystyle {\textbf {f}}({\textbf {X}})}is given byN(m(X),K(X,X)){\displaystyle {\mathcal {N}}({\textbf {m}}({\textbf {X}}),{\textbf {K}}({\textbf {X}},{\textbf {X}}))}, wherem(X){\displaystyle {\textbf {m}}({\textbf {X}})}is a vector that concatenates the mean vectors associated to the outputs andK(X,X){\displaystyle {\textbf {K}}({\textbf {X}},{\textbf {X}})}is a block-partitioned matrix. 
The distribution of the outputs is taken to be Gaussian: whereΣ∈RD×D{\displaystyle \Sigma \in {\mathcal {\textbf {R}}}^{D\times D}}is a diagonal matrix with elements{σd2}d=1D{\displaystyle \left\{\sigma _{d}^{2}\right\}_{d=1}^{D}}specifying the noise for each output. Using this form for the likelihood, the predictive distribution for a new vectorx∗{\displaystyle {\textbf {x}}_{*}}is: whereS{\displaystyle {\textbf {S}}}is the training data, andϕ{\displaystyle \phi }is a set of hyperparameters forK(x,x′){\displaystyle {\textbf {K}}({\textbf {x}},{\textbf {x}}')}andΣ{\displaystyle \Sigma }. Equations forf∗{\displaystyle {\textbf {f}}_{*}}andK∗{\displaystyle {\textbf {K}}_{*}}can then be obtained: whereΣ=Σ⊗IN,Kx∗∈RD×ND{\displaystyle {\boldsymbol {\Sigma }}=\Sigma \otimes {\textbf {I}}_{N},{\textbf {K}}_{{\textbf {x}}_{*}}\in {\mathcal {\textbf {R}}}^{D\times ND}}has entries(K(x∗,xj))d,d′{\displaystyle ({\textbf {K}}({\textbf {x}}_{*},{\textbf {x}}_{j}))_{d,d'}}forj=1,⋯,N{\displaystyle j=1,\cdots ,N}andd,d′=1,⋯,D{\displaystyle d,d'=1,\cdots ,D}. Note that the predictorf∗{\displaystyle {\textbf {f}}^{*}}is identical to the predictor derived in the regularization framework. For non-Gaussian likelihoods different methods such as Laplace approximation and variational methods are needed to approximate the estimators. A simple, but broadly applicable, class of multi-output kernels can be separated into the product of a kernel on the input-space and a kernel representing the correlations among the outputs:[8] In matrix form:K(x,x′)=k(x,x′)B{\displaystyle \mathbf {K} (\mathbf {x} ,\mathbf {x'} )=k(\mathbf {x} ,\mathbf {x'} )\mathbf {B} }whereB{\displaystyle \mathbf {B} }is aD×D{\displaystyle D\times D}symmetric and positive semi-definite matrix. Note, settingB{\displaystyle \mathbf {B} }to the identity matrix treats the outputs as unrelated and is equivalent to solving the scalar-output problems separately. 
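The remark that setting B to the identity is equivalent to solving the scalar-output problems separately can be confirmed numerically. The data below are illustrative; the stacked block system with B = I produces exactly the same coefficients as D independent scalar solves:

```python
import numpy as np

# Sketch: with a separable kernel K(x, x') = k(x, x') * B and B = I, the
# stacked multi-output solve decouples into independent per-output solves.
rng = np.random.default_rng(1)
N, D, noise = 20, 2, 0.1
X = rng.uniform(0.0, 3.0, N)
Y = np.column_stack([X ** 2, -X]) + noise * rng.standard_normal((N, D))

k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)
Kx = k(X, X)

# Joint solve with B = I (outputs stacked, block matrix via Kronecker product)
B = np.eye(D)
K_big = np.kron(Kx, B)
alpha_joint = np.linalg.solve(K_big + noise ** 2 * np.eye(N * D), Y.reshape(-1))

# Independent scalar solves, one per output component
alpha_indep = np.column_stack(
    [np.linalg.solve(Kx + noise ** 2 * np.eye(N), Y[:, d]) for d in range(D)]
)
assert np.allclose(alpha_joint.reshape(N, D), alpha_indep)
```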
For a slightly more general form, adding several of these kernels yieldssum of separable kernels(SoS kernels). Sources:[8][10][12][13][14] One way of obtainingkT{\displaystyle k_{T}}is to specify aregularizerwhich limits the complexity off{\displaystyle f}in a desirable way, and then derive the corresponding kernel. For certain regularizers, this kernel will turn out to be separable. Mixed-effect regularizer where: where1is aD×D{\displaystyle \mathbf {1} {\text{ is a }}D\times D}matrix with all entries equal to 1. This regularizer is a combination of limiting the complexity of each component of the estimator (fl{\displaystyle f_{l}}) and forcing each component of the estimator to be close to the mean of all the components. Settingω=0{\displaystyle \omega =0}treats all the components as independent and is the same as solving the scalar problems separately. Settingω=1{\displaystyle \omega =1}assumes all the components are explained by the same function. Cluster-based regularizer where: whereGl,q=ε1δlq+(ε2−ε1)Ml,q{\displaystyle \mathbf {G} _{l,q}=\varepsilon _{1}\delta _{lq}+(\varepsilon _{2}-\varepsilon _{1})\mathbf {M} _{l,q}} This regularizer divides the components intor{\displaystyle r}clusters and forces the components in each cluster to be similar. Graph regularizer whereMis aD×D{\displaystyle \mathbf {M} {\text{ is a }}D\times D}matrix of weights encoding the similarities between the components whereL=D−M{\displaystyle \mathbf {L} =\mathbf {D} -\mathbf {M} },Dl,q=δl,q(∑h=1DMl,h+Ml,q){\displaystyle \mathbf {D} _{l,q}=\delta _{l,q}(\sum \limits _{h=1}^{D}\mathbf {M} _{l,h}+\mathbf {M} _{l,q})} Note,L{\displaystyle \mathbf {L} }is the graphlaplacian. See also:graph kernel. 
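The graph regularizer above can be checked numerically: with a symmetric weight matrix M and D defined as in the text, the pairwise-similarity penalty equals the quadratic form induced by the Laplacian L = D − M. The weights and component vectors below are random illustrative data:

```python
import numpy as np

# Sketch of the graph regularizer: the penalty
#   (1/2) * sum_{l,q} M_lq ||f_l - f_q||^2 + sum_l M_ll ||f_l||^2
# equals sum_{l,q} L_lq <f_l, f_q> with L = D - M and
# D_lq = delta_lq * (sum_h M_lh + M_lq), as defined in the text.
rng = np.random.default_rng(2)
Dout = 4
M = rng.uniform(0.0, 1.0, (Dout, Dout))
M = (M + M.T) / 2.0                             # symmetric weight matrix
Dmat = np.diag(M.sum(axis=1) + np.diag(M))
L = Dmat - M                                    # graph Laplacian

F = rng.standard_normal((Dout, 5))              # one row per output component
penalty = 0.5 * sum(M[l, q] * np.sum((F[l] - F[q]) ** 2)
                    for l in range(Dout) for q in range(Dout)) \
          + sum(M[l, l] * np.sum(F[l] ** 2) for l in range(Dout))
assert np.isclose(penalty, np.trace(F.T @ L @ F))
```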
Several approaches to learningB{\displaystyle \mathbf {B} }from data have been proposed.[8]These include: performing a preliminary inference step to estimateB{\displaystyle \mathbf {B} }from the training data,[9]a proposal to learnB{\displaystyle \mathbf {B} }andf{\displaystyle \mathbf {f} }together based on the cluster regularizer,[15]and sparsity-based approaches which assume only a few of the features are needed.[16][17] In LMC, outputs are expressed as linear combinations of independent random functions such that the resulting covariance function (over all inputs and outputs) is a valid positive semidefinite function. AssumingD{\displaystyle D}outputs{fd(x)}d=1D{\displaystyle \left\{f_{d}({\textbf {x}})\right\}_{d=1}^{D}}withx∈Rp{\displaystyle {\textbf {x}}\in {\mathcal {\textbf {R}}}^{p}}, eachfd{\displaystyle f_{d}}is expressed as: wheread,q{\displaystyle a_{d,q}}are scalar coefficients and the independent functionsuq(x){\displaystyle u_{q}({\textbf {x}})}have zero mean and covariance cov[uq(x),uq′(x′)]=kq(x,x′){\displaystyle [u_{q}({\textbf {x}}),u_{q'}({\textbf {x}}')]=k_{q}({\textbf {x}},{\textbf {x}}')}ifq=q′{\displaystyle q=q'}and 0 otherwise. The cross covariance between any two functionsfd(x){\displaystyle f_{d}({\textbf {x}})}andfd′(x){\displaystyle f_{d'}({\textbf {x}})}can then be written as: where the functionsuqi(x){\displaystyle u_{q}^{i}({\textbf {x}})}, withq=1,⋯,Q{\displaystyle q=1,\cdots ,Q}andi=1,⋯,Rq{\displaystyle i=1,\cdots ,R_{q}}have zero mean and covariance cov[uqi(x),uq′i′(x)′]=kq(x,x′){\displaystyle [u_{q}^{i}({\textbf {x}}),u_{q'}^{i'}({\textbf {x}})']=k_{q}({\textbf {x}},{\textbf {x}}')}ifi=i′{\displaystyle i=i'}andq=q′{\displaystyle q=q'}. Butcov⁡[fd(x),fd′(x′)]{\displaystyle \operatorname {cov} [f_{d}({\textbf {x}}),f_{d'}({\textbf {x}}')]}is given by(K(x,x′))d,d′{\displaystyle ({\textbf {K}}({\textbf {x}},{\textbf {x}}'))_{d,d'}}. 
Thus the kernelK(x,x′){\displaystyle {\textbf {K}}({\textbf {x}},{\textbf {x}}')}can now be expressed as where eachBq∈RD×D{\displaystyle {\textbf {B}}_{q}\in {\mathcal {\textbf {R}}}^{D\times D}}is known as a coregionalization matrix. Therefore, the kernel derived from LMC is a sum of the products of two covariance functions, one that models the dependence between the outputs, independently of the input vectorx{\displaystyle {\textbf {x}}}(the coregionalization matrixBq{\displaystyle {\textbf {B}}_{q}}), and one that models the input dependence, independently of{fd(x)}d=1D{\displaystyle \left\{f_{d}({\textbf {x}})\right\}_{d=1}^{D}}(the covariance functionkq(x,x′){\displaystyle k_{q}({\textbf {x}},{\textbf {x}}')}). The ICM is a simplified version of the LMC, withQ=1{\displaystyle Q=1}. ICM assumes that the elementsbd,d′q{\displaystyle b_{d,d'}^{q}}of the coregionalization matrixBq{\displaystyle \mathbf {B} _{q}}can be written asbd,d′q=vd,d′bq{\displaystyle b_{d,d'}^{q}=v_{d,d'}b_{q}}, for some suitable coefficientsvd,d′{\displaystyle v_{d,d'}}. With this form forbd,d′q{\displaystyle b_{d,d'}^{q}}: where In this case, the coefficients and the kernel matrix for multiple outputs becomeK(x,x′)=k(x,x′)B{\displaystyle \mathbf {K} (\mathbf {x} ,\mathbf {x} ')=k(\mathbf {x} ,\mathbf {x} ')\mathbf {B} }. ICM is much more restrictive than the LMC since it assumes that each basic covariancekq(x,x′){\displaystyle k_{q}(\mathbf {x} ,\mathbf {x} ')}contributes equally to the construction of the autocovariances and cross covariances for the outputs. However, the computations required for the inference are greatly simplified. Another simplified version of the LMC is the semiparametric latent factor model (SLFM), which corresponds to settingRq=1{\displaystyle R_{q}=1}(instead ofQ=1{\displaystyle Q=1}as in ICM). Thus each latent functionuq{\displaystyle u_{q}}has its own covariance. While simple, the structure of separable kernels can be too limiting for some problems.
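An LMC kernel evaluated over a finite input set can be assembled as a sum of Kronecker products of coregionalization matrices with scalar Gram matrices. The two latent kernels and rank-1 coregionalization matrices below are illustrative; the resulting block matrix is positive semi-definite, as required of a valid covariance:

```python
import numpy as np

# Sketch of an LMC kernel K(x, x') = sum_q B_q * k_q(x, x') over a grid of
# inputs, built as a sum of Kronecker products.  Two latent Gaussian kernels
# with different lengthscales and rank-1 coregionalization matrices (D = 2).
X = np.linspace(0.0, 1.0, 5)
k1 = np.exp(-0.5 * ((X[:, None] - X[None, :]) / 0.3) ** 2)  # short lengthscale
k2 = np.exp(-0.5 * ((X[:, None] - X[None, :]) / 2.0) ** 2)  # long lengthscale

a1 = np.array([[1.0], [0.6]]); B1 = a1 @ a1.T   # coregionalization matrices
a2 = np.array([[0.2], [1.0]]); B2 = a2 @ a2.T

K_lmc = np.kron(k1, B1) + np.kron(k2, B2)       # ND x ND multi-output kernel
eigs = np.linalg.eigvalsh(K_lmc)
assert eigs.min() > -1e-10                      # PSD up to round-off
```

Setting Q = 1 (a single term in the sum) recovers the separable ICM form K(x,x′) = k(x,x′)B described above.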
Notable examples of non-separable kernels in theregularization literatureinclude: In theBayesian perspective, LMC produces a separable kernel because the output functions evaluated at a pointx{\displaystyle {\textbf {x}}}only depend on the values of the latent functions atx{\displaystyle {\textbf {x}}}. A non-trivial way to mix the latent functions is by convolving a base process with a smoothing kernel. If the base process is a Gaussian process, the convolved process is Gaussian as well. We can therefore exploit convolutions to construct covariance functions.[20]This method of producing non-separable kernels is known as process convolution. Process convolutions were introduced for multiple outputs in the machine learning community as "dependent Gaussian processes".[21] When implementing an algorithm using any of the kernels above, practical considerations of tuning the parameters and ensuring reasonable computation time must be considered. Approached from the regularization perspective, parameter tuning is similar to the scalar-valued case and can generally be accomplished withcross validation. Solving the required linear system is typically expensive in memory and time. If the kernel is separable, a coordinate transform can convertK(X,X){\displaystyle \mathbf {K} (\mathbf {X} ,\mathbf {X} )}to ablock-diagonal matrix, greatly reducing the computational burden by solving D independent subproblems (plus theeigendecompositionofB{\displaystyle \mathbf {B} }). In particular, for a least squares loss function (Tikhonov regularization), there exists a closed form solution forc¯{\displaystyle {\bar {\mathbf {c} }}}:[8][14] There are many works related to parameter estimation for Gaussian processes. Some methods such as maximization of the marginal likelihood (also known as evidence approximation, type II maximum likelihood, empirical Bayes), and least squares give point estimates of the parameter vectorϕ{\displaystyle \phi }. 
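The block-diagonalization trick mentioned above can be sketched concretely: eigendecomposing B rotates the stacked system into D independent N × N solves. All data below are illustrative:

```python
import numpy as np

# Sketch of the computational trick for separable kernels: with B = U S U^T,
# the ND x ND system (k(X,X) (x) B + lam*I) c = y decouples into D scalar
# N x N systems (s_d * Kx + lam*I) after rotating the outputs by U.
rng = np.random.default_rng(3)
N, D, lam = 15, 3, 0.5
X = rng.uniform(0.0, 2.0, N)
Kx = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
A = rng.standard_normal((D, D)); B = A @ A.T + np.eye(D)   # SPD output matrix
y = rng.standard_normal(N * D)                             # stacked outputs

# Direct solve of the full block system
c_direct = np.linalg.solve(np.kron(Kx, B) + lam * np.eye(N * D), y)

# Decoupled solve: rotate outputs into the eigenbasis of B
s, U = np.linalg.eigh(B)
Yt = y.reshape(N, D) @ U                                   # rotated outputs
Ct = np.column_stack([np.linalg.solve(s[d] * Kx + lam * np.eye(N), Yt[:, d])
                      for d in range(D)])
c_decoupled = (Ct @ U.T).reshape(-1)
assert np.allclose(c_direct, c_decoupled)
```

The direct solve costs O((ND)³), while the decoupled version costs O(D·N³) plus one D × D eigendecomposition.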
There are also works employing a full Bayesian inference by assigning priors toϕ{\displaystyle \phi }and computing the posterior distribution through a sampling procedure. For non-Gaussian likelihoods, there is no closed form solution for the posterior distribution or for the marginal likelihood. However, the marginal likelihood can be approximated under Laplace, variational Bayes, or expectation propagation (EP) approximation frameworks for multiple output classification and used to find estimates for the hyperparameters. The main computational problem in the Bayesian viewpoint is the same as the one appearing in regularization theory of inverting the matrix This step is necessary for computing the marginal likelihood and the predictive distribution. For most proposed approximation methods to reduce computation, the computational efficiency gained is independent of the particular method (e.g. LMC, process convolution) used to compute the multi-output covariance matrix. A summary of different methods for reducing computational complexity in multi-output Gaussian processes is presented in [8].
https://en.wikipedia.org/wiki/Kernel_methods_for_vector_output
Instatistics,kernel density estimation(KDE) is the application ofkernel smoothingforprobability density estimation, i.e., anon-parametricmethod toestimatetheprobability density functionof arandom variablebased onkernelsasweights. KDE answers a fundamental data smoothing problem where inferences about thepopulationare made based on a finite datasample. In some fields such assignal processingandeconometricsit is also termed theParzen–Rosenblatt windowmethod, afterEmanuel ParzenandMurray Rosenblatt, who are usually credited with independently creating it in its current form.[1][2]One of the famous applications of kernel density estimation is in estimating the class-conditionalmarginal densitiesof data when using anaive Bayes classifier, which can improve its prediction accuracy.[3] Let(x1,x2, ...,xn)beindependent and identically distributedsamples drawn from some univariate distribution with an unknowndensityfat any given pointx. We are interested in estimating the shape of this functionf. Itskernel density estimatorisf^h(x)=1n∑i=1nKh(x−xi)=1nh∑i=1nK(x−xih),{\displaystyle {\hat {f}}_{h}(x)={\frac {1}{n}}\sum _{i=1}^{n}K_{h}(x-x_{i})={\frac {1}{nh}}\sum _{i=1}^{n}K{\left({\frac {x-x_{i}}{h}}\right)},}whereKis thekernel— a non-negative function — andh> 0is asmoothingparameter called thebandwidthor simply width.[3]A kernel with subscripthis called thescaled kerneland defined asKh(x) =⁠1/h⁠K(⁠x/h⁠). Intuitively one wants to choosehas small as the data will allow; however, there is always a trade-off between the bias of the estimator and its variance. The choice of bandwidth is discussed in more detail below. A range ofkernel functionsare commonly used: uniform, triangular, biweight, triweight,Epanechnikov(parabolic), normal, and others. 
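The estimator defined above can be implemented directly. The sketch below uses a Gaussian kernel and simulated standard-normal data (illustrative choices), and checks that the estimate is a valid density close to the true one:

```python
import numpy as np

# Minimal univariate kernel density estimator, directly implementing
# f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h) with a Gaussian kernel K.
def kde(x, samples, h):
    u = (x - samples[:, None]) / h                 # shape (n, len(x))
    Ku = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return Ku.sum(axis=0) / (len(samples) * h)

rng = np.random.default_rng(0)
data = rng.standard_normal(2000)                   # samples from N(0, 1)
grid = np.linspace(-4.0, 4.0, 201)
dens = kde(grid, data, h=0.3)

# Sanity checks: the estimate integrates to ~1 and tracks the N(0,1) density
area = np.sum((dens[:-1] + dens[1:]) * np.diff(grid)) / 2.0
true = np.exp(-grid ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
assert abs(area - 1.0) < 0.02
assert np.max(np.abs(dens - true)) < 0.08
```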
The Epanechnikov kernel is optimal in a mean square error sense,[4]though the loss of efficiency is small for the kernels listed previously.[5]Due to its convenient mathematical properties, the normal kernel is often used, which meansK(x) =ϕ(x), whereϕis thestandard normaldensity function. The kernel density estimator then becomesf^h(x)=1nhσ12π∑i=1nexp⁡(−(x−xi)22h2σ2),{\displaystyle {\hat {f}}_{h}(x)={\frac {1}{nh\sigma }}{\frac {1}{\sqrt {2\pi }}}\sum _{i=1}^{n}\exp \left({\frac {-(x-x_{i})^{2}}{2h^{2}\sigma ^{2}}}\right),}whereσ{\displaystyle \sigma }is the standard deviation of the samplex→{\displaystyle {\vec {x}}}. The construction of a kernel density estimate finds interpretations in fields outside of density estimation.[6]For example, inthermodynamics, this is equivalent to the amount of heat generated whenheat kernels(the fundamental solution to theheat equation) are placed at each data point locationsxi. Similar methods are used to constructdiscrete Laplace operatorson point clouds formanifold learning(e.g.diffusion map). Kernel density estimates are closely related tohistograms, but can be endowed with properties such as smoothness or continuity by using a suitable kernel. The diagram below based on these 6 data points illustrates this relationship: For the histogram, first, the horizontal axis is divided into sub-intervals or bins which cover the range of the data: In this case, six bins each of width 2. Whenever a data point falls inside this interval, a box of height 1/12 is placed there. If more than one data point falls inside the same bin, the boxes are stacked on top of each other. For the kernel density estimate, normal kernels with a standard deviation of 1.5 (indicated by the red dashed lines) are placed on each of the data pointsxi. The kernels are summed to make the kernel density estimate (solid blue curve). 
The smoothness of the kernel density estimate (compared to the discreteness of the histogram) illustrates how kernel density estimates converge faster to the true underlying density for continuous random variables.[7] Thebandwidthof the kernel is afree parameterwhich exhibits a strong influence on the resulting estimate. To illustrate its effect, we take a simulatedrandom samplefrom the standardnormal distribution(plotted at the blue spikes in therug ploton the horizontal axis). The grey curve is the true density (a normal density with mean 0 and variance 1). In comparison, the red curve isundersmoothedsince it contains too many spurious data artifacts arising from using a bandwidthh= 0.05, which is too small. The green curve isoversmoothedsince using the bandwidthh= 2obscures much of the underlying structure. The black curve with a bandwidth ofh= 0.337 is considered to be optimally smoothed since its density estimate is close to the true density. An extreme situation is encountered in the limith→0{\displaystyle h\to 0}(no smoothing), where the estimate is a sum ofndelta functionscentered at the coordinates of analyzed samples. In the other extreme limith→∞{\displaystyle h\to \infty }the estimate retains the shape of the used kernel, centered on the mean of the samples (completely smooth). The most common optimality criterion used to select this parameter is the expectedL2risk function, also termed themean integrated squared error: MISE⁡(h)=E[∫(f^h(x)−f(x))2dx]{\displaystyle \operatorname {MISE} (h)=\operatorname {E} \!\left[\int \!{\left({\hat {f}}\!_{h}(x)-f(x)\right)}^{2}dx\right]} Under weak assumptions onfandK(fis the generally unknown real density function),[1][2] MISE⁡(h)=AMISE⁡(h)+o((nh)−1+h4){\displaystyle \operatorname {MISE} (h)=\operatorname {AMISE} (h)+{\mathcal {o}}{\left((nh)^{-1}+h^{4}\right)}} whereois thelittle o notation, andnthe sample size (as above). The AMISE is the asymptotic MISE, i.e.
the two leading terms, AMISE⁡(h)=R(K)nh+14m2(K)2h4R(f″){\displaystyle \operatorname {AMISE} (h)={\frac {R(K)}{nh}}+{\frac {1}{4}}m_{2}(K)^{2}h^{4}R(f'')} whereR(g)=∫g(x)2dx{\textstyle R(g)=\int g(x)^{2}\,dx}for a functiong,m2(K)=∫x2K(x)dx{\textstyle m_{2}(K)=\int x^{2}K(x)\,dx},f″{\displaystyle f''}is the second derivative off{\displaystyle f}, andK{\displaystyle K}is the kernel. The minimum of this AMISE is found by setting its derivative with respect toh{\displaystyle h}to zero, ∂∂hAMISE⁡(h)=−R(K)nh2+m2(K)2h3R(f″)=0{\displaystyle {\frac {\partial }{\partial h}}\operatorname {AMISE} (h)=-{\frac {R(K)}{nh^{2}}}+m_{2}(K)^{2}h^{3}R(f'')=0} or hAMISE=R(K)1/5m2(K)2/5R(f″)1/5n−1/5=Cn−1/5{\displaystyle h_{\operatorname {AMISE} }={\frac {R(K)^{1/5}}{m_{2}(K)^{2/5}R(f'')^{1/5}}}n^{-1/5}=Cn^{-1/5}} Neither the AMISE nor thehAMISEformulas can be used directly since they involve the unknown density functionf{\displaystyle f}or its second derivativef″{\displaystyle f''}. To overcome that difficulty, a variety of automatic, data-based methods have been developed to select the bandwidth. Several review studies have been undertaken to compare their efficacies,[8][9][10][11][12][13][14]with the general consensus that the plug-in selectors[6][15][16]andcross validationselectors[17][18][19]are the most useful over a wide range of data sets. Substituting any bandwidthhwhich has the same asymptotic ordern−1/5ashAMISEinto the AMISE gives thatAMISE(h) =O(n−4/5), whereOis thebigOnotation. It can be shown that, under weak assumptions, there cannot exist a non-parametric estimator that converges at a faster rate than the kernel estimator.[20]Note that then−4/5rate is slower than the typicaln−1convergence rate of parametric methods. If the bandwidth is not held fixed, but is varied depending upon the location of either the estimate (balloon estimator) or the samples (pointwise estimator), this produces a particularly powerful method termedadaptive or variable bandwidth kernel density estimation.
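The hAMISE formula can be evaluated for the one case where everything is known in closed form: a Gaussian kernel and a true N(0, σ²) density, where R(K) = 1/(2√π), m₂(K) = 1, and R(f″) = 3/(8√π σ⁵). This recovers the familiar 1.06 σ n^(−1/5) constant:

```python
import numpy as np

# Plugging a Gaussian kernel and a true N(0, sigma^2) density into
# h_AMISE = [R(K) / (m2(K)^2 * R(f'') * n)]^(1/5), using the closed-form
# constants R(K) = 1/(2*sqrt(pi)), m2(K) = 1, R(f'') = 3/(8*sqrt(pi)*sigma^5).
sigma, n = 1.5, 1000
R_K  = 1.0 / (2.0 * np.sqrt(np.pi))
m2_K = 1.0
R_f2 = 3.0 / (8.0 * np.sqrt(np.pi) * sigma ** 5)

h_amise = (R_K / (m2_K ** 2 * R_f2 * n)) ** 0.2

# Algebraically this is (4*sigma^5 / (3*n))^(1/5) ~= 1.06 * sigma * n^(-1/5)
assert abs(h_amise - (4.0 * sigma ** 5 / (3.0 * n)) ** 0.2) < 1e-12
assert abs(h_amise - 1.06 * sigma * n ** -0.2) < 0.005
```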
Bandwidth selection for kernel density estimation of heavy-tailed distributions is relatively difficult.[21] If Gaussian basis functions are used to approximateunivariatedata, and the underlying density being estimated is Gaussian, the optimal choice forh(that is, the bandwidth that minimises themean integrated squared error) is:[22] h=(4σ^53n)1/5≈1.06σ^n−1/5,{\displaystyle h={\left({\frac {4{\hat {\sigma }}^{5}}{3n}}\right)}^{1/5}\approx 1.06\,{\hat {\sigma }}\,n^{-1/5},} Anh{\displaystyle h}value is considered more robust when it improves the fit for long-tailed and skewed distributions or for bimodal mixture distributions. This is often done empirically by replacing thestandard deviationσ^{\displaystyle {\hat {\sigma }}}by the parameterA{\displaystyle A}below: A=min(σ^,IQR1.34){\displaystyle A=\min \left({\hat {\sigma }},{\frac {\mathrm {IQR} }{1.34}}\right)}whereIQRis the interquartile range. Another modification that will improve the model is to reduce the factor from 1.06 to 0.9. Then the final formula would be: h=0.9min(σ^,IQR1.34)n−1/5{\displaystyle h=0.9\,\min \left({\hat {\sigma }},{\frac {\mathrm {IQR} }{1.34}}\right)\,n^{-1/5}}wheren{\displaystyle n}is the sample size. This approximation is termed thenormal distribution approximation, Gaussian approximation, orSilverman's rule of thumb.[22]While this rule of thumb is easy to compute, it should be used with caution as it can yield widely inaccurate estimates when the density is not close to being normal. 
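Silverman's rule of thumb as stated above is a one-liner to implement; the data below are illustrative:

```python
import numpy as np

# Silverman's rule of thumb: h = 0.9 * min(sigma_hat, IQR/1.34) * n^(-1/5).
def silverman_bandwidth(x):
    x = np.asarray(x)
    q75, q25 = np.percentile(x, [75, 25])
    a = min(np.std(x, ddof=1), (q75 - q25) / 1.34)   # robust scale estimate
    return 0.9 * a * len(x) ** (-0.2)

rng = np.random.default_rng(0)
h = silverman_bandwidth(rng.standard_normal(500))
# For N(0,1) data, h is roughly 0.9 * 1 * 500^(-1/5), i.e. about 0.26
assert 0.2 < h < 0.35
```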
For example, when estimating the bimodalGaussian mixture model122πe−12(x−10)2+122πe−12(x+10)2{\displaystyle {\frac {1}{2{\sqrt {2\pi }}}}e^{-{\frac {1}{2}}(x-10)^{2}}+{\frac {1}{2{\sqrt {2\pi }}}}e^{-{\frac {1}{2}}(x+10)^{2}}}from a sample of 200 points, the figure on the right shows the true density and two kernel density estimates — one using the rule-of-thumb bandwidth, and the other using a solve-the-equation bandwidth.[6][16]The estimate based on the rule-of-thumb bandwidth is significantly oversmoothed. Given the sample(x1,x2, ...,xn), it is natural to estimate thecharacteristic functionφ(t) = E[eitX]asφ^(t)=1n∑j=1neitxj{\displaystyle {\hat {\varphi }}(t)={\frac {1}{n}}\sum _{j=1}^{n}e^{itx_{j}}}Knowing the characteristic function, it is possible to find the corresponding probability density function through theFourier transformformula. One difficulty with applying this inversion formula is that it leads to a diverging integral, since the estimateφ^(t){\displaystyle {\hat {\varphi }}(t)}is unreliable for larget's. To circumvent this problem, the estimatorφ^(t){\displaystyle {\hat {\varphi }}(t)}is multiplied by a damping functionψh(t) =ψ(ht), which is equal to 1 at the origin and then falls to 0 at infinity. The "bandwidth parameter"hcontrols how fast we try to dampen the functionφ^(t){\displaystyle {\hat {\varphi }}(t)}. In particular whenhis small, thenψh(t)will be approximately one for a large range oft's, which means thatφ^(t){\displaystyle {\hat {\varphi }}(t)}remains practically unaltered in the most important region oft's. The most common choice for functionψis either the uniform functionψ(t) =1{−1 ≤t≤ 1}, which effectively means truncating the interval of integration in the inversion formula to[−1/h, 1/h], or theGaussian functionψ(t) =e−πt2. 
Once the functionψhas been chosen, the inversion formula may be applied, and the density estimator will bef^(x)=12π∫−∞+∞φ^(t)ψh(t)e−itxdt=12π∫−∞+∞1n∑j=1neit(xj−x)ψ(ht)dt=1nh∑j=1n12π∫−∞+∞e−i(ht)x−xjhψ(ht)d(ht)=1nh∑j=1nK(x−xjh),{\displaystyle {\begin{aligned}{\hat {f}}(x)&={\frac {1}{2\pi }}\int _{-\infty }^{+\infty }{\hat {\varphi }}(t)\psi _{h}(t)e^{-itx}\,dt\\[1ex]&={\frac {1}{2\pi }}\int _{-\infty }^{+\infty }{\frac {1}{n}}\sum _{j=1}^{n}e^{it(x_{j}-x)}\psi (ht)\,dt\\[1ex]&={\frac {1}{nh}}\sum _{j=1}^{n}{\frac {1}{2\pi }}\int _{-\infty }^{+\infty }e^{-i(ht){\frac {x-x_{j}}{h}}}\psi (ht)\,d(ht)\\[1ex]&={\frac {1}{nh}}\sum _{j=1}^{n}K{\left({\frac {x-x_{j}}{h}}\right)},\end{aligned}}} whereKis theFourier transformof the damping functionψ. Thus the kernel density estimator coincides with the characteristic function density estimator. We can extend the definition of the (global) mode to a local sense and define the local modes: M={x:g(x)=0∣λ1(x)<0}{\displaystyle M=\{x:g(x)=0\mid \lambda _{1}(x)<0\}} Namely,M{\displaystyle M}is the collection of points for which the density function is locally maximized. A natural estimator ofM{\displaystyle M}is a plug-in from KDE, denotedMc{\displaystyle M_{c}},[23][24]whereg(x){\displaystyle g(x)}andλ1(x){\displaystyle \lambda _{1}(x)}are KDE versions ofg(x){\displaystyle g(x)}andλ1(x){\displaystyle \lambda _{1}(x)}. Under mild assumptions,Mc{\displaystyle M_{c}}is aconsistent estimatorofM{\displaystyle M}. Note that one can use the mean shift algorithm[25][26][27]to compute the estimatorMc{\displaystyle M_{c}}numerically. A non-exhaustive list of software implementations of kernel density estimators includes:
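The mean shift algorithm mentioned above can be sketched in a few lines: each iteration moves a point to the kernel-weighted average of the samples, which ascends the KDE toward a nearby local mode. The bimodal data below are illustrative:

```python
import numpy as np

# Sketch of the mean shift iteration for locating local modes of a
# Gaussian-kernel KDE: repeatedly replace x by the kernel-weighted mean of
# the samples until it settles near a mode.  Data are illustrative.
def mean_shift_mode(x0, samples, h, iters=200):
    x = x0
    for _ in range(iters):
        w = np.exp(-0.5 * ((x - samples) / h) ** 2)
        x = np.sum(w * samples) / np.sum(w)
    return x

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-5.0, 1.0, 300), rng.normal(5.0, 1.0, 300)])
m1 = mean_shift_mode(-4.0, data, h=1.0)   # converges toward the mode near -5
m2 = mean_shift_mode(4.0, data, h=1.0)    # converges toward the mode near +5
assert abs(m1 + 5.0) < 0.5 and abs(m2 - 5.0) < 0.5
```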
https://en.wikipedia.org/wiki/Kernel_density_estimation
Forcomputer science, instatistical learning theory, arepresenter theoremis any of several related results stating that a minimizerf∗{\displaystyle f^{*}}of a regularizedempirical risk functionaldefined over areproducing kernel Hilbert spacecan be represented as a finite linear combination of kernel products evaluated on the input points in the training set data. The following Representer Theorem and its proof are due toSchölkopf, Herbrich, and Smola:[1] Theorem:Consider a positive-definite real-valued kernelk:X×X→R{\displaystyle k:{\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} }on a non-empty setX{\displaystyle {\mathcal {X}}}with a corresponding reproducing kernel Hilbert spaceHk{\displaystyle H_{k}}. Let there be given a training sample(x1,y1),…,(xn,yn)∈X×R{\displaystyle (x_{1},y_{1}),\ldots ,(x_{n},y_{n})\in {\mathcal {X}}\times \mathbb {R} }, a strictly increasing real-valued functiong:[0,∞)→R{\displaystyle g\colon [0,\infty )\to \mathbb {R} }, and an arbitrary error functionE:(X×R2)n→R∪{∞}{\displaystyle E\colon ({\mathcal {X}}\times \mathbb {R} ^{2})^{n}\to \mathbb {R} \cup \lbrace \infty \rbrace }, which together define the following regularized empirical risk functional onHk{\displaystyle H_{k}}:f↦E((x1,y1,f(x1)),…,(xn,yn,f(xn)))+g(‖f‖).(∗){\displaystyle f\mapsto E\left((x_{1},y_{1},f(x_{1})),\ldots ,(x_{n},y_{n},f(x_{n}))\right)+g\left(\lVert f\rVert \right).\qquad (*)}Then, any minimizer of the empirical risk admits a representation of the form:f∗(⋅)=∑i=1nαik(⋅,xi),{\displaystyle f^{*}(\cdot )=\sum _{i=1}^{n}\alpha _{i}k(\cdot ,x_{i}),}whereαi∈R{\displaystyle \alpha _{i}\in \mathbb {R} }for all1≤i≤n{\displaystyle 1\leq i\leq n}. Proof:Define a mappingφ:x↦k(⋅,x){\displaystyle \varphi \colon x\mapsto k(\cdot ,x)}(so thatφ(x)=k(⋅,x){\displaystyle \varphi (x)=k(\cdot ,x)}is itself a mapX→R{\displaystyle {\mathcal {X}}\to \mathbb {R} }). Sincek{\displaystyle k}is a reproducing kernel, thenf(x)=⟨f,φ(x)⟩,{\displaystyle f(x)=\langle f,\varphi (x)\rangle ,}where⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }is the inner product onHk{\displaystyle H_{k}}. Given anyx1,…,xn{\displaystyle x_{1},\ldots ,x_{n}}, one can use orthogonal projection to decompose anyf∈Hk{\displaystyle f\in H_{k}}into a sum of two functions, one lying inspan⁡{φ(x1),…,φ(xn)}{\displaystyle \operatorname {span} \left\lbrace \varphi (x_{1}),\ldots ,\varphi (x_{n})\right\rbrace }, and the other lying in the orthogonal complement:f=∑i=1nαiφ(xi)+v,{\displaystyle f=\sum _{i=1}^{n}\alpha _{i}\varphi (x_{i})+v,}where⟨v,φ(xi)⟩=0{\displaystyle \langle v,\varphi (x_{i})\rangle =0}for alli{\displaystyle i}. The above orthogonal decomposition and thereproducing propertytogether show that applyingf{\displaystyle f}to any training pointxj{\displaystyle x_{j}}producesf(xj)=⟨∑i=1nαiφ(xi)+v,φ(xj)⟩=∑i=1nαik(xi,xj),{\displaystyle f(x_{j})=\left\langle \sum _{i=1}^{n}\alpha _{i}\varphi (x_{i})+v,\varphi (x_{j})\right\rangle =\sum _{i=1}^{n}\alpha _{i}k(x_{i},x_{j}),}which we observe is independent ofv{\displaystyle v}.
Consequently, the value of the error functionE{\displaystyle E}in (*) is likewise independent ofv{\displaystyle v}. For the second term (the regularization term), sincev{\displaystyle v}is orthogonal to∑i=1nαiφ(xi){\displaystyle \sum _{i=1}^{n}\alpha _{i}\varphi (x_{i})}andg{\displaystyle g}is strictly monotonic, we haveg(‖f‖)=g(‖∑i=1nαiφ(xi)+v‖)=g((‖∑i=1nαiφ(xi)‖2+‖v‖2)1/2)≥g(‖∑i=1nαiφ(xi)‖).{\displaystyle g(\lVert f\rVert )=g{\left(\left\lVert \sum _{i=1}^{n}\alpha _{i}\varphi (x_{i})+v\right\rVert \right)}=g{\left(\left(\left\lVert \sum _{i=1}^{n}\alpha _{i}\varphi (x_{i})\right\rVert ^{2}+\lVert v\rVert ^{2}\right)^{1/2}\right)}\geq g{\left(\left\lVert \sum _{i=1}^{n}\alpha _{i}\varphi (x_{i})\right\rVert \right)}.}Therefore, settingv=0{\displaystyle v=0}does not affect the first term of (*), while it strictly decreases the second term. Consequently, any minimizerf∗{\displaystyle f^{*}}in (*) must havev=0{\displaystyle v=0}, i.e., it must be of the formf∗(⋅)=∑i=1nαiφ(xi)=∑i=1nαik(⋅,xi),{\displaystyle f^{*}(\cdot )=\sum _{i=1}^{n}\alpha _{i}\varphi (x_{i})=\sum _{i=1}^{n}\alpha _{i}k(\cdot ,x_{i}),}which is the desired result. The Theorem stated above is a particular example of a family of results that are collectively referred to as "representer theorems"; here we describe several such. The first statement of a representer theorem was due to Kimeldorf and Wahba for the special case in whichE((x1,y1,f(x1)),…,(xn,yn,f(xn)))=1n∑i=1n(f(xi)−yi)2,g(‖f‖)=λ‖f‖2,{\displaystyle E\left((x_{1},y_{1},f(x_{1})),\ldots ,(x_{n},y_{n},f(x_{n}))\right)={\frac {1}{n}}\sum _{i=1}^{n}(f(x_{i})-y_{i})^{2},\quad g(\lVert f\rVert )=\lambda \lVert f\rVert ^{2},}forλ>0{\displaystyle \lambda >0}. Schölkopf, Herbrich, and Smola generalized this result by relaxing the assumption of the squared-loss cost and allowing the regularizer to be any strictly monotonically increasing functiong(⋅){\displaystyle g(\cdot )}of the Hilbert space norm. It is possible to generalize further by augmenting the regularized empirical risk functional through the addition of unpenalized offset terms. For example, Schölkopf, Herbrich, and Smola also consider the minimization(†){\displaystyle (\dagger )}; i.e., we consider functions of the formf~=f+h{\displaystyle {\tilde {f}}=f+h}, wheref∈Hk{\displaystyle f\in H_{k}}andh{\displaystyle h}is an unpenalized function lying in the span of a finite set of real-valued functions{ψp:X→R∣1≤p≤M}{\displaystyle \lbrace \psi _{p}\colon {\mathcal {X}}\to \mathbb {R} \mid 1\leq p\leq M\rbrace }.
Under the assumption that then×M{\displaystyle n\times M}matrix(ψp(xi))ip{\displaystyle \left(\psi _{p}(x_{i})\right)_{ip}}has rankM{\displaystyle M}, they show that the minimizerf~∗{\displaystyle {\tilde {f}}^{*}}in(†){\displaystyle (\dagger )}admits a representation of the formf~∗(⋅)=∑i=1nαik(⋅,xi)+∑p=1Mβpψp(⋅),{\displaystyle {\tilde {f}}^{*}(\cdot )=\sum _{i=1}^{n}\alpha _{i}k(\cdot ,x_{i})+\sum _{p=1}^{M}\beta _{p}\psi _{p}(\cdot ),}whereαi,βp∈R{\displaystyle \alpha _{i},\beta _{p}\in \mathbb {R} }and theβp{\displaystyle \beta _{p}}are all uniquely determined. The conditions under which a representer theorem exists were investigated by Argyriou, Micchelli, and Pontil, who proved the following: Theorem:LetX{\displaystyle {\mathcal {X}}}be a nonempty set,k{\displaystyle k}a positive-definite real-valued kernel onX×X{\displaystyle {\mathcal {X}}\times {\mathcal {X}}}with corresponding reproducing kernel Hilbert spaceHk{\displaystyle H_{k}}, and letR:Hk→R{\displaystyle R\colon H_{k}\to \mathbb {R} }be a differentiable regularization function. Then given a training sample(x1,y1),…,(xn,yn)∈X×R{\displaystyle (x_{1},y_{1}),\ldots ,(x_{n},y_{n})\in {\mathcal {X}}\times \mathbb {R} }and an arbitrary error functionE:(X×R2)n→R∪{∞}{\displaystyle E\colon ({\mathcal {X}}\times \mathbb {R} ^{2})^{n}\to \mathbb {R} \cup \lbrace \infty \rbrace }, a minimizer of the regularized empirical risk(‡){\displaystyle (\ddagger )}admits a representation of the formf∗(⋅)=∑i=1nαik(⋅,xi),{\displaystyle f^{*}(\cdot )=\sum _{i=1}^{n}\alpha _{i}k(\cdot ,x_{i}),}whereαi∈R{\displaystyle \alpha _{i}\in \mathbb {R} }for all1≤i≤n{\displaystyle 1\leq i\leq n}, if and only if there exists a nondecreasing functionh:[0,∞)→R{\displaystyle h\colon [0,\infty )\to \mathbb {R} }for whichR(f)=h(‖f‖)for allf∈Hk.{\displaystyle R(f)=h(\lVert f\rVert )\ {\text{for all }}f\in H_{k}.}Effectively, this result provides a necessary and sufficient condition on a differentiable regularizerR(⋅){\displaystyle R(\cdot )}under which the corresponding regularized empirical risk minimization(‡){\displaystyle (\ddagger )}will have a representer theorem. In particular, this shows that a broad class of regularized risk minimizations (much broader than those originally considered by Kimeldorf and Wahba) have representer theorems.
Representer theorems are useful from a practical standpoint because they dramatically simplify the regularizedempirical risk minimizationproblem(‡){\displaystyle (\ddagger )}. In most interesting applications, the search domainHk{\displaystyle H_{k}}for the minimization will be an infinite-dimensional subspace ofL2(X){\displaystyle L^{2}({\mathcal {X}})}, and therefore the search (as written) does not admit implementation on finite-memory and finite-precision computers. In contrast, the representation off∗(⋅){\displaystyle f^{*}(\cdot )}afforded by a representer theorem reduces the original (infinite-dimensional) minimization problem to a search for the optimaln{\displaystyle n}-dimensional vector of coefficientsα=(α1,…,αn)∈Rn{\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})\in \mathbb {R} ^{n}};α{\displaystyle \alpha }can then be obtained by applying any standard function minimization algorithm. Consequently, representer theorems provide the theoretical basis for the reduction of the general machine learning problem to algorithms that can actually be implemented on computers in practice. The following provides an example of how to solve for the minimizer whose existence is guaranteed by the representer theorem. This method works for any positive definite kernelK{\displaystyle K}, and allows us to transform a complicated (possibly infinite dimensional) optimization problem into a simple linear system that can be solved numerically. Assume that we are using a least squares error function and a regularization functiong(x)=λx2{\displaystyle g(x)=\lambda x^{2}}for someλ>0{\displaystyle \lambda >0}. By the representer theorem, the minimizer has the form for someα∗=(α1∗,…,αn∗)∈Rn{\displaystyle \alpha ^{*}=(\alpha _{1}^{*},\dots ,\alpha _{n}^{*})\in \mathbb {R} ^{n}}. Noting that we see thatα∗{\displaystyle \alpha ^{*}}has the form whereAij=k(xi,xj){\displaystyle A_{ij}=k(x_{i},x_{j})}andy=(y1,…,yn){\displaystyle y=(y_{1},\dots ,y_{n})}. 
This can be factored out and simplified (dropping the constant termy⊺y{\displaystyle y^{\intercal }y}) to minimizingα⊺(A⊺A+λA)α−2α⊺A⊺y{\displaystyle \alpha ^{\intercal }(A^{\intercal }A+\lambda A)\alpha -2\alpha ^{\intercal }A^{\intercal }y}overα{\displaystyle \alpha }. SinceA⊺A+λA{\displaystyle A^{\intercal }A+\lambda A}is positive definite, there is indeed a single global minimum for this expression. LetF(α)=α⊺(A⊺A+λA)α−2α⊺A⊺y{\displaystyle F(\alpha )=\alpha ^{\intercal }(A^{\intercal }A+\lambda A)\alpha -2\alpha ^{\intercal }A^{\intercal }y}and note thatF{\displaystyle F}is convex. Thenα∗{\displaystyle \alpha ^{*}}, the global minimum, can be found by setting∇αF=0{\displaystyle \nabla _{\alpha }F=0}, which gives(A⊺A+λA)α∗=A⊺y{\displaystyle (A^{\intercal }A+\lambda A)\alpha ^{*}=A^{\intercal }y}. Recalling thatA{\displaystyle A}is symmetric and that all positive definite matrices are invertible, we see thatα∗=(A+λI)−1y,{\displaystyle \alpha ^{*}=(A+\lambda I)^{-1}y,}so the minimizer may be found via a linear solve.
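This worked example can be sketched numerically. The Gaussian kernel, the data, and the value of λ below are illustrative assumptions; the linear solve for α and the representer-form predictor f*(x) = Σ_i α_i k(x_i, x) follow the derivation above.

```python
import numpy as np

def fit_kernel_ridge(X, y, k, lam):
    """Least-squares error with g(x) = lam * x^2: solve (A + lam*I) alpha = y,
    where A_ij = k(x_i, x_j) is the (symmetric, PSD) kernel matrix."""
    A = np.array([[k(xi, xj) for xj in X] for xi in X])
    alpha = np.linalg.solve(A + lam * np.eye(len(X)), y)
    return alpha, A

def predict(x, X, alpha, k):
    """Minimizer guaranteed by the representer theorem: f*(x) = sum_i alpha_i k(x_i, x)."""
    return sum(a * k(xi, x) for a, xi in zip(alpha, X))

# Gaussian (RBF) kernel: one admissible positive-definite choice (an assumption here).
k = lambda a, b: float(np.exp(-2.0 * (a - b) ** 2))

X = np.linspace(0.0, 3.0, 5)
y = np.sin(X)
alpha, A = fit_kernel_ridge(X, y, k, lam=1e-3)
fitted = np.array([predict(x, X, alpha, k) for x in X])
```

With a small λ the fitted values at the training points nearly interpolate the targets, while larger λ trades fit for a smaller Hilbert-space norm.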
https://en.wikipedia.org/wiki/Representer_theorem
Similarity learningis an area ofsupervised machine learninginartificial intelligence. It is closely related toregressionandclassification, but the goal is to learn asimilarity functionthat measures howsimilaror related two objects are. It has applications inranking, inrecommendation systems, visual identity tracking, face verification, and speaker verification. There are four common setups for similarity and metric distance learning. A common approach for learning similarity is to model the similarity function as abilinear form. For example, in the case of ranking similarity learning, one aims to learn a matrix W that parametrizes the similarity functionfW(x,z)=xTWz{\displaystyle f_{W}(x,z)=x^{T}Wz}. When data is abundant, a common approach is to learn asiamese network– a deep network model with parameter sharing. Similarity learning is closely related todistance metric learning. Metric learning is the task of learning a distance function over objects. Ametricordistance functionhas to obey four axioms:non-negativity,identity of indiscernibles,symmetryandsubadditivity(or the triangle inequality). In practice, metric learning algorithms ignore the condition of identity of indiscernibles and learn a pseudo-metric. When the objectsxi{\displaystyle x_{i}}are vectors inRd{\displaystyle R^{d}}, then any matrixW{\displaystyle W}in the symmetric positive semi-definite coneS+d{\displaystyle S_{+}^{d}}defines a distance pseudo-metric of the space of x through the formDW(x1,x2)2=(x1−x2)⊤W(x1−x2){\displaystyle D_{W}(x_{1},x_{2})^{2}=(x_{1}-x_{2})^{\top }W(x_{1}-x_{2})}. WhenW{\displaystyle W}is a symmetric positive definite matrix,DW{\displaystyle D_{W}}is a metric. 
Moreover, as any symmetric positive semi-definite matrixW∈S+d{\displaystyle W\in S_{+}^{d}}can be decomposed asW=L⊤L{\displaystyle W=L^{\top }L}whereL∈Re×d{\displaystyle L\in R^{e\times d}}ande≥rank(W){\displaystyle e\geq rank(W)}, the distance functionDW{\displaystyle D_{W}}can be rewritten equivalentlyDW(x1,x2)2=(x1−x2)⊤L⊤L(x1−x2)=‖L(x1−x2)‖22{\displaystyle D_{W}(x_{1},x_{2})^{2}=(x_{1}-x_{2})^{\top }L^{\top }L(x_{1}-x_{2})=\|L(x_{1}-x_{2})\|_{2}^{2}}. The distanceDW(x1,x2)2=‖x1′−x2′‖22{\displaystyle D_{W}(x_{1},x_{2})^{2}=\|x_{1}'-x_{2}'\|_{2}^{2}}corresponds to the Euclidean distance between the transformedfeature vectorsx1′=Lx1{\displaystyle x_{1}'=Lx_{1}}andx2′=Lx2{\displaystyle x_{2}'=Lx_{2}}. Many formulations for metric learning have been proposed.[4][5]Some well-known approaches for metric learning include learning from relative comparisons,[6]which is based on thetriplet loss,large margin nearest neighbor,[7]and information theoretic metric learning (ITML).[8] Instatistics, thecovariancematrix of the data is sometimes used to define a distance metric calledMahalanobis distance. Similarity learning is used in information retrieval forlearning to rank, in face verification or face identification,[9][10]and inrecommendation systems. Also, many machine learning approaches rely on some metric. This includesunsupervised learningsuch asclustering, which groups together close or similar objects. It also includes supervised approaches likeK-nearest neighbor algorithmwhich rely on labels of nearby objects to decide on the label of a new object. Metric learning has been proposed as a preprocessing step for many of these approaches.[11] Metric and similarity learning naively scale quadratically with the dimension of the input space, as one can easily see when the learned metric has a bilinear formfW(x,z)=xTWz{\displaystyle f_{W}(x,z)=x^{T}Wz}.
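The equivalence between the learned distance and a Euclidean distance on transformed features can be verified with a small sketch. The matrix W and the points below are illustrative; the factor L is obtained from a Cholesky decomposition, which is one valid choice of the factorization W = L⊤L.

```python
import numpy as np

# A symmetric positive definite W (hand-picked illustrative values).
W = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def d_W_sq(x1, x2, W):
    """Squared learned distance D_W(x1, x2)^2 = (x1 - x2)^T W (x1 - x2)."""
    d = x1 - x2
    return d @ W @ d

# np.linalg.cholesky gives lower-triangular C with W = C C^T; take L = C^T
# so that W = L^T L as in the decomposition above.
L = np.linalg.cholesky(W).T

x1 = np.array([1.0, 2.0])
x2 = np.array([-1.0, 0.5])

lhs = d_W_sq(x1, x2, W)                  # quadratic-form distance
rhs = np.sum((L @ (x1 - x2)) ** 2)       # Euclidean distance of L-transformed features
```

Both expressions evaluate to the same number, confirming that learning W amounts to learning a linear transformation L of the feature space.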
Scaling to higher dimensions can be achieved by enforcing a sparseness structure over the matrix model, as done with HDSL,[12]and with COMET.[13] For further information on this topic, see the surveys on metric and similarity learning by Bellet et al.[4]and Kulis.[5]
https://en.wikipedia.org/wiki/Similarity_learning
Cover's theoremis a statement incomputational learning theoryand is one of the primary theoretical motivations for the use of non-linearkernel methodsinmachine learningapplications. It is named after the information theoristThomas M. Cover, who stated it in 1965, referring to it as thecounting function theorem. Let the number of homogeneously linearly separable sets ofN{\displaystyle N}points ind{\displaystyle d}dimensions be defined as acountingfunctionC(N,d){\displaystyle C(N,d)}of the number of pointsN{\displaystyle N}and the dimensionalityd{\displaystyle d}. The theorem states thatC(N,d)=2∑k=0d−1(N−1k){\displaystyle C(N,d)=2\sum _{k=0}^{d-1}{\binom {N-1}{k}}}. It requires, as a necessary and sufficient condition, that the points are ingeneral position. Simply put, this means that the points should be as linearly independent (non-aligned) as possible. This condition is satisfied "with probability 1" oralmost surelyfor random point sets, while it may easily be violated for real data, since these are often structured along smaller-dimensionality manifolds within the data space. The functionC(N,d){\displaystyle C(N,d)}follows two different regimes depending on the relationship betweenN{\displaystyle N}andd{\displaystyle d}. A consequence of the theorem is that given a set of training data that is notlinearly separable, one can with high probability transform it into a training set that is linearly separable by projecting it into ahigher-dimensional spacevia somenon-linear transformation, or: A complex pattern-classification problem, cast in a high-dimensional space nonlinearly, is more likely to be linearly separable than in a low-dimensional space, provided that the space is not densely populated.
The theorem can be proved by induction, using the recursive relationC(N+1,d)=C(N,d)+C(N,d−1).{\displaystyle C(N+1,d)=C(N,d)+C(N,d-1).}To show that, with fixedN{\displaystyle N}, increasingd{\displaystyle d}may turn a set of points from non-separable to separable, adeterministic mappingmay be used: suppose there areN{\displaystyle N}points. Lift them onto the vertices of thesimplexin theN−1{\displaystyle N-1}dimensional real space. Since everypartitionof the samples into two sets is separable by alinear separator, the property follows. The 1965 paper contains multiple theorems. Theorem 6: LetX∪{y}={x1,x2,⋯,xN,y}{\textstyle X\cup \{y\}=\left\{x_{1},x_{2},\cdots ,x_{N},y\right\}}be inϕ{\textstyle \phi }-general position ind{\textstyle d}-space, whereϕ=(ϕ1,ϕ2,⋯,ϕd){\textstyle \phi =\left(\phi _{1},\phi _{2},\cdots ,\phi _{d}\right)}. Theny{\textstyle y}is ambiguous with respect toC(N,d−1){\textstyle C(N,d-1)}dichotomies ofX{\textstyle X}relative to the class of allϕ{\textstyle \phi }-surfaces. Corollary: If each of theϕ{\textstyle \phi }-separable dichotomies ofX{\textstyle X}has equal probability, then the probabilityA(N,d){\textstyle A(N,d)}thaty{\textstyle y}is ambiguous with respect to a randomϕ{\textstyle \phi }-separable dichotomy ofX{\textstyle X}isC(N,d−1)C(N,d){\displaystyle {\frac {C(N,d-1)}{C(N,d)}}}. IfN/d→β{\displaystyle N/d\to \beta }, then at the limit ofN→∞{\displaystyle N\to \infty }, this probability converges tolimNA(N,d)={1,0≤β≤21β−1,β≥2{\displaystyle \lim _{N}A(N,d)={\begin{cases}1,&0\leq \beta \leq 2\\{\frac {1}{\beta -1}},&\beta \geq 2\end{cases}}}. This can be interpreted as a bound on the memory capacity of a singleperceptron unit. Hered{\displaystyle d}is the number of input weights into the perceptron. The formula states that at the limit of larged{\displaystyle d}, the perceptron would almost certainly be able to memorize up to2d{\displaystyle 2d}binary labels, but almost certainly fail to memorize any more than that. (MacKay 2003, p.
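The counting function, its recursion, and the capacity interpretation can be checked directly; the sketch below uses arbitrary test ranges and is only an illustration.

```python
from math import comb

def C(N, d):
    """Cover's counting function: the number of homogeneously linearly
    separable dichotomies of N points in general position in d dimensions."""
    return 2 * sum(comb(N - 1, k) for k in range(d))

# The recursion C(N+1, d) = C(N, d) + C(N, d-1) used in the inductive proof.
ok = all(C(N + 1, d) == C(N, d) + C(N, d - 1)
         for N in range(1, 20) for d in range(1, 10))

# Two regimes: for N <= d every one of the 2^N dichotomies is separable,
# and at the capacity point N = 2d exactly half of them are.
frac_at_2d = C(20, 10) / 2 ** 20
```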
490)
https://en.wikipedia.org/wiki/Cover%27s_theorem
Instatistics, aquadratic classifieris astatistical classifierthat uses aquadraticdecision surfaceto separate measurements of two or more classes of objects or events. It is a more general version of thelinear classifier. Statistical classificationconsiders a set ofvectorsof observationsxof an object or event, each of which has a known typey. This set is referred to as thetraining set. The problem is then to determine, for a given new observation vector, what the best class should be. For a quadratic classifier, the correct solution is assumed to be quadratic in the measurements, soywill be decided based onxTAx+bTx+c{\displaystyle \mathbf {x^{T}Ax} +\mathbf {b^{T}x} +c} In the special case where each observation consists of two measurements, this means that the surfaces separating the classes will beconic sections(i.e., either aline, acircleorellipse, aparabolaor ahyperbola). In this sense, we can state that a quadratic model is a generalization of the linear model, and its use is justified by the desire to extend the classifier's ability to represent more complex separating surfaces. Quadratic discriminant analysis (QDA) is closely related tolinear discriminant analysis(LDA), where it is assumed that the measurements from each class arenormally distributed.[1]Unlike LDA however, in QDA there is no assumption that thecovarianceof each of the classes is identical.[2]When the normality assumption is true, the best possible test for the hypothesis that a given measurement is from a given class is thelikelihood ratio test. Suppose there are only two groups, with meansμ0,μ1{\displaystyle \mu _{0},\mu _{1}}and covariance matricesΣ0,Σ1{\displaystyle \Sigma _{0},\Sigma _{1}}corresponding toy=0{\displaystyle y=0}andy=1{\displaystyle y=1}respectively. 
Then the likelihood ratio is given byLikelihood ratio=2π|Σ1|−1exp⁡(−12(x−μ1)TΣ1−1(x−μ1))2π|Σ0|−1exp⁡(−12(x−μ0)TΣ0−1(x−μ0))<t{\displaystyle {\text{Likelihood ratio}}={\frac {{\sqrt {2\pi |\Sigma _{1}|}}^{-1}\exp \left(-{\frac {1}{2}}(\mathbf {x} -{\boldsymbol {\mu }}_{1})^{T}\Sigma _{1}^{-1}(\mathbf {x} -{\boldsymbol {\mu }}_{1})\right)}{{\sqrt {2\pi |\Sigma _{0}|}}^{-1}\exp \left(-{\frac {1}{2}}(\mathbf {x} -{\boldsymbol {\mu }}_{0})^{T}\Sigma _{0}^{-1}(\mathbf {x} -{\boldsymbol {\mu }}_{0})\right)}}<t}for some thresholdt{\displaystyle t}. After some rearrangement, it can be shown that the resulting separating surface between the classes is a quadratic. The sample estimates of the mean vector and variance-covariance matrices will substitute the population quantities in this formula. While QDA is the most commonly-used method for obtaining a classifier, other methods are also possible. One such method is to create a longer measurement vector from the old one by adding all pairwise products of individual measurements. For instance, the vector[x1,x2,x3]{\displaystyle [x_{1},\;x_{2},\;x_{3}]}would become[x1,x2,x3,x12,x1x2,x1x3,x22,x2x3,x32].{\displaystyle [x_{1},\;x_{2},\;x_{3},\;x_{1}^{2},\;x_{1}x_{2},\;x_{1}x_{3},\;x_{2}^{2},\;x_{2}x_{3},\;x_{3}^{2}].} Finding a quadratic classifier for the original measurements would then become the same as finding a linear classifier based on the expanded measurement vector. 
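The feature-expansion construction can be sketched as follows; the matrix A, vector b, and scalar c below are arbitrary illustrative values, chosen only to show that the quadratic score equals a linear score on the expanded measurement vector.

```python
import numpy as np

def quadratic_features(x):
    """Augment x with all pairwise products x_i * x_j (i <= j), so that a
    linear classifier on the result acts as a quadratic classifier on x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    pairs = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, pairs])

z = quadratic_features([1.0, 2.0, 3.0])
# z is ordered as [x1, x2, x3, x1^2, x1*x2, x1*x3, x2^2, x2*x3, x3^2]

# A quadratic score x^T A x + b^T x + c equals a linear score w^T z + c:
# weights are b for the linear part, A_ii for squares, 2*A_ij (i < j) for cross terms.
A = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.0],
              [0.0, 0.0, 1.0]])   # symmetric, illustrative
b = np.array([1.0, -1.0, 0.0])
c = 0.5
x = np.array([1.0, 2.0, 3.0])

quad = x @ A @ x + b @ x + c
w = np.array([b[0], b[1], b[2],
              A[0, 0], 2 * A[0, 1], 2 * A[0, 2],
              A[1, 1], 2 * A[1, 2],
              A[2, 2]])
linear = w @ z + c
```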
This observation has been used in extending neural network models;[3]the "circular" case, which corresponds to introducing only the sum of pure quadratic termsx12+x22+x32+⋯{\displaystyle \;x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+\cdots \;}with no mixed products (x1x2,x1x3,…{\displaystyle \;x_{1}x_{2},\;x_{1}x_{3},\;\ldots \;}), has been proven to be the optimal compromise between extending the classifier's representation power and controlling the risk of overfitting (Vapnik-Chervonenkis dimension).[4] For linear classifiers based only ondot products, these expanded measurements do not have to be actually computed, since the dot product in the higher-dimensional space is simply related to that in the original space. This is an example of the so-calledkernel trick, which can be applied to linear discriminant analysis as well as thesupport vector machine.
https://en.wikipedia.org/wiki/Quadratic_classifier
Inmachine learning,support vector machines(SVMs, alsosupport vector networks[1]) aresupervisedmax-marginmodels with associated learningalgorithmsthat analyze data forclassificationandregression analysis. Developed atAT&T Bell Laboratories,[1][2]SVMs are one of the most studied models, being based on statistical learning frameworks ofVC theoryproposed byVapnik(1982, 1995) andChervonenkis(1974). In addition to performinglinear classification, SVMs can efficiently perform non-linear classification using thekernel trick, representing the data only through a set of pairwise similarity comparisons between the original data points using a kernel function, which transforms them into coordinates in a higher-dimensionalfeature space. Thus, SVMs use the kernel trick to implicitly map their inputs into high-dimensional feature spaces, where linear classification can be performed.[3]Being max-margin models, SVMs are resilient to noisy data (e.g., misclassified examples). SVMs can also be used forregressiontasks, where the objective becomesϵ{\displaystyle \epsilon }-insensitive. The support vector clustering[4]algorithm, created byHava SiegelmannandVladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data.[citation needed]These data sets requireunsupervised learningapproaches, which attempt to find naturalclustering of the datainto groups, and then to map new data according to these clusters. The popularity of SVMs is likely due to their amenability to theoretical analysis, and their flexibility in being applied to a wide variety of tasks, includingstructured predictionproblems. It is not clear that SVMs have better predictive performance than other linear models, such aslogistic regressionandlinear regression.[5] Classifying datais a common task inmachine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class anewdata pointwill be in.
In the case of support vector machines, a data point is viewed as ap{\displaystyle p}-dimensional vector (a list ofp{\displaystyle p}numbers), and we want to know whether we can separate such points with a(p−1){\displaystyle (p-1)}-dimensionalhyperplane. This is called alinear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, ormargin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as themaximum-margin hyperplaneand the linear classifier it defines is known as amaximum-margin classifier; or equivalently, theperceptron of optimal stability.[6] More formally, a support vector machine constructs ahyperplaneor set of hyperplanes in a high or infinite-dimensional space, which can be used forclassification,regression, or other tasks like outliers detection.[7]Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (so-called functional margin), since in general the larger the margin, the lower thegeneralization errorof the classifier.[8]A lowergeneralization errormeans that the implementer is less likely to experienceoverfitting. Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are notlinearly separablein that space. For this reason, it was proposed[9]that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. 
To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure thatdot productsof pairs of input data vectors may be computed easily in terms of the variables in the original space, by defining them in terms of akernel functionk(x,y){\displaystyle k(x,y)}selected to suit the problem.[10]The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant, where such a set of vectors is an orthogonal (and thus minimal) set of vectors that defines a hyperplane. The vectors defining the hyperplanes can be chosen to be linear combinations with parametersαi{\displaystyle \alpha _{i}}of images offeature vectorsxi{\displaystyle x_{i}}that occur in the data base. With this choice of a hyperplane, the pointsx{\displaystyle x}in thefeature spacethat are mapped into the hyperplane are defined by the relation∑iαik(xi,x)=constant.{\displaystyle \textstyle \sum _{i}\alpha _{i}k(x_{i},x)={\text{constant}}.}Note that ifk(x,y){\displaystyle k(x,y)}becomes small asy{\displaystyle y}grows further away fromx{\displaystyle x}, each term in the sum measures the degree of closeness of the test pointx{\displaystyle x}to the corresponding data base pointxi{\displaystyle x_{i}}. In this way, the sum of kernels above can be used to measure the relative nearness of each test point to the data points originating in one or the other of the sets to be discriminated. Note the fact that the set of pointsx{\displaystyle x}mapped into any hyperplane can be quite convoluted as a result, allowing much more complex discrimination between sets that are not convex at all in the original space. SVMs can be used to solve various real-world problems: The original SVM algorithm was invented byVladimir N. VapnikandAlexey Ya. 
Chervonenkisin 1964.[citation needed]In 1992, Bernhard Boser,Isabelle GuyonandVladimir Vapniksuggested a way to create nonlinear classifiers by applying thekernel trickto maximum-margin hyperplanes.[9]The "soft margin" incarnation, as is commonly used in software packages, was proposed byCorinna Cortesand Vapnik in 1993 and published in 1995.[1] We are given a training dataset ofn{\displaystyle n}points of the form(x1,y1),…,(xn,yn),{\displaystyle (\mathbf {x} _{1},y_{1}),\ldots ,(\mathbf {x} _{n},y_{n}),}where theyi{\displaystyle y_{i}}are either 1 or −1, each indicating the class to which the pointxi{\displaystyle \mathbf {x} _{i}}belongs. Eachxi{\displaystyle \mathbf {x} _{i}}is ap{\displaystyle p}-dimensionalrealvector. We want to find the "maximum-margin hyperplane" that divides the group of pointsxi{\displaystyle \mathbf {x} _{i}}for whichyi=1{\displaystyle y_{i}=1}from the group of points for whichyi=−1{\displaystyle y_{i}=-1}, which is defined so that the distance between the hyperplane and the nearest pointxi{\displaystyle \mathbf {x} _{i}}from either group is maximized. Anyhyperplanecan be written as the set of pointsx{\displaystyle \mathbf {x} }satisfyingwTx−b=0,{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} -b=0,}wherew{\displaystyle \mathbf {w} }is the (not necessarily normalized)normal vectorto the hyperplane. This is much likeHesse normal form, except thatw{\displaystyle \mathbf {w} }is not necessarily a unit vector. The parameterb‖w‖{\displaystyle {\tfrac {b}{\|\mathbf {w} \|}}}determines the offset of the hyperplane from the origin along the normal vectorw{\displaystyle \mathbf {w} }. Warning: most of the literature on the subject defines the bias so thatwTx+b=0.{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} +b=0.} If the training data islinearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. 
The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. With a normalized or standardized dataset, these hyperplanes can be described by the equationswTx−b=1{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} -b=1}andwTx−b=−1.{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} -b=-1.}Geometrically, the distance between these two hyperplanes is2‖w‖{\displaystyle {\tfrac {2}{\|\mathbf {w} \|}}},[21]so to maximize the distance between the planes we want to minimize‖w‖{\displaystyle \|\mathbf {w} \|}. The distance is computed using thedistance from a point to a planeequation. We also have to prevent data points from falling into the margin, so we add the following constraint: for eachi{\displaystyle i}eitherwTxi−b≥1,ifyi=1,{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\geq 1\,,{\text{ if }}y_{i}=1,}orwTxi−b≤−1,ifyi=−1.{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\leq -1\,,{\text{ if }}y_{i}=-1.}These constraints state that each data point must lie on the correct side of the margin. This can be rewritten asyi(wTxi−b)≥1,for alli.{\displaystyle y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\geq 1,\quad {\text{for all }}i.}We can put this together to get the optimization problem: minimizew,b12‖w‖2subject toyi(w⊤xi−b)≥1∀i∈{1,…,n}{\displaystyle {\begin{aligned}&{\underset {\mathbf {w} ,\;b}{\operatorname {minimize} }}&&{\frac {1}{2}}\|\mathbf {w} \|^{2}\\&{\text{subject to}}&&y_{i}(\mathbf {w} ^{\top }\mathbf {x} _{i}-b)\geq 1\quad \forall i\in \{1,\dots ,n\}\end{aligned}}} Thew{\displaystyle \mathbf {w} }andb{\displaystyle b}that solve this problem determine the final classifier,x↦sgn⁡(wTx−b){\displaystyle \mathbf {x} \mapsto \operatorname {sgn}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} -b)}, wheresgn⁡(⋅){\displaystyle \operatorname {sgn}(\cdot )}is thesign function. An important consequence of this geometric description is that the max-margin hyperplane is completely determined by thosexi{\displaystyle \mathbf {x} _{i}}that lie nearest to it (explained below). Thesexi{\displaystyle \mathbf {x} _{i}}are calledsupport vectors.
To extend SVM to cases in which the data are not linearly separable, thehinge lossfunction is helpfulmax(0,1−yi(wTxi−b)).{\displaystyle \max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right).} Note thatyi{\displaystyle y_{i}}is thei-th target (i.e., in this case, 1 or −1), andwTxi−b{\displaystyle \mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b}is thei-th output. This function is zero if the constraint in(1)is satisfied, in other words, ifxi{\displaystyle \mathbf {x} _{i}}lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin. The goal of the optimization then is to minimize: ‖w‖2+C[1n∑i=1nmax(0,1−yi(wTxi−b))],{\displaystyle \lVert \mathbf {w} \rVert ^{2}+C\left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)\right],} where the parameterC>0{\displaystyle C>0}determines the trade-off between increasing the margin size and ensuring that thexi{\displaystyle \mathbf {x} _{i}}lie on the correct side of the margin (Note we can add a weight to either term in the equation above). By deconstructing the hinge loss, this optimization problem can be formulated into the following: minimizew,b,ζ‖w‖22+C∑i=1nζisubject toyi(w⊤xi−b)≥1−ζi,ζi≥0∀i∈{1,…,n}{\displaystyle {\begin{aligned}&{\underset {\mathbf {w} ,\;b,\;\mathbf {\zeta } }{\operatorname {minimize} }}&&\|\mathbf {w} \|_{2}^{2}+C\sum _{i=1}^{n}\zeta _{i}\\&{\text{subject to}}&&y_{i}(\mathbf {w} ^{\top }\mathbf {x} _{i}-b)\geq 1-\zeta _{i},\quad \zeta _{i}\geq 0\quad \forall i\in \{1,\dots ,n\}\end{aligned}}} Thus, for large values ofC{\displaystyle C}, it will behave similar to the hard-margin SVM, if the input data are linearly classifiable, but will still learn if a classification rule is viable or not. The original maximum-margin hyperplane algorithm proposed by Vapnik in 1963 constructed alinear classifier. 
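The hinge-loss objective above can be minimized with a simple full-batch subgradient method. This is a minimal sketch, not a production solver: the toy data, the value of C, the learning rate, and the epoch count are all illustrative assumptions.

```python
import numpy as np

def svm_subgradient(X, y, C=10.0, lr=0.01, epochs=500):
    """Minimize ||w||^2 + C * mean(max(0, 1 - y_i (w.x_i - b)))
    by full-batch subgradient descent."""
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for _ in range(epochs):
        margins = y * (X @ w - b)
        active = margins < 1                  # points violating the margin
        # Subgradient of the hinge term: d/dw = -y_i x_i, d/db = +y_i (active points only).
        gw = 2 * w - C * (y[active][:, None] * X[active]).sum(axis=0) / n
        gb = C * y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Tiny linearly separable example (values chosen for illustration).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = svm_subgradient(X, y)
preds = np.sign(X @ w - b)
```

With a large C this behaves close to the hard-margin solution on separable data, as the text notes; in practice one would decay the learning rate rather than keep it fixed.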
However, in 1992,Bernhard Boser,Isabelle GuyonandVladimir Vapniksuggested a way to create nonlinear classifiers by applying thekernel trick(originally proposed by Aizerman et al.[22]) to maximum-margin hyperplanes.[9]The kernel trick, wheredot productsare replaced by kernels, is easily derived in the dual representation of the SVM problem. This allows the algorithm to fit the maximum-margin hyperplane in a transformedfeature space. The transformation may be nonlinear and the transformed space high-dimensional; although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space. It is noteworthy that working in a higher-dimensional feature space increases thegeneralization errorof support vector machines, although given enough samples the algorithm still performs well.[23] Common kernel choices include the polynomial kernel and the Gaussianradial basis functionkernel. The kernel is related to the transformφ(xi){\displaystyle \varphi (\mathbf {x} _{i})}by the equationk(xi,xj)=φ(xi)⋅φ(xj){\displaystyle k(\mathbf {x} _{i},\mathbf {x} _{j})=\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j})}. The valuewis also in the transformed space, withw=∑iαiyiφ(xi){\textstyle \mathbf {w} =\sum _{i}\alpha _{i}y_{i}\varphi (\mathbf {x} _{i})}. Dot products withwfor classification can again be computed by the kernel trick, i.e.w⋅φ(x)=∑iαiyik(xi,x){\textstyle \mathbf {w} \cdot \varphi (\mathbf {x} )=\sum _{i}\alpha _{i}y_{i}k(\mathbf {x} _{i},\mathbf {x} )}. Computing the (soft-margin) SVM classifier amounts to minimizing an expression of the form[1n∑i=1nmax(0,1−yi(wTxi−b))]+λ‖w‖2.(2){\displaystyle \left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)\right]+\lambda \lVert \mathbf {w} \rVert ^{2}.\qquad (2)}We focus on the soft-margin classifier since, as noted above, choosing a sufficiently small value forλ{\displaystyle \lambda }yields the hard-margin classifier for linearly classifiable input data. The classical approach, which involves reducing(2)to aquadratic programmingproblem, is detailed below. Then, more recent approaches such as sub-gradient descent and coordinate descent will be discussed.
Minimizing(2)can be rewritten as a constrained optimization problem with a differentiable objective function in the following way. For eachi∈{1,…,n}{\displaystyle i\in \{1,\,\ldots ,\,n\}}we introduce a variableζi=max(0,1−yi(wTxi−b)){\displaystyle \zeta _{i}=\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)}. Note thatζi{\displaystyle \zeta _{i}}is the smallest nonnegative number satisfyingyi(wTxi−b)≥1−ζi.{\displaystyle y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\geq 1-\zeta _{i}.} Thus we can rewrite the optimization problem as follows minimize1n∑i=1nζi+λ‖w‖2subject toyi(wTxi−b)≥1−ζiandζi≥0,for alli.{\displaystyle {\begin{aligned}&{\text{minimize }}{\frac {1}{n}}\sum _{i=1}^{n}\zeta _{i}+\lambda \|\mathbf {w} \|^{2}\\[0.5ex]&{\text{subject to }}y_{i}\left(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b\right)\geq 1-\zeta _{i}\,{\text{ and }}\,\zeta _{i}\geq 0,\,{\text{for all }}i.\end{aligned}}} This is called theprimalproblem. By solving for theLagrangian dualof the above problem, one obtains the simplified problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(xiTxj)yjcj,subject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}&{\text{maximize}}\,\,f(c_{1}\ldots c_{n})=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(\mathbf {x} _{i}^{\mathsf {T}}\mathbf {x} _{j})y_{j}c_{j},\\&{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} This is called thedualproblem. Since the dual maximization problem is a quadratic function of theci{\displaystyle c_{i}}subject to linear constraints, it is efficiently solvable byquadratic programmingalgorithms. 
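The dual problem above can be handed to any constrained optimizer. This sketch uses scipy's general-purpose `minimize` (dedicated QP or SMO solvers are what is used in practice; the function names here are my own):

```python
import numpy as np
from scipy.optimize import minimize

def solve_svm_dual(X, y, lam=0.01):
    """Maximize sum(c) - 0.5 c^T G c subject to c.y = 0 and
    0 <= c_i <= 1/(2 n lam), where G_ij = y_i y_j x_i . x_j."""
    n = len(y)
    Z = y[:, None] * X
    G = Z @ Z.T

    def neg_dual(c):
        return -(c.sum() - 0.5 * c @ G @ c)

    cons = {"type": "eq", "fun": lambda c: c @ y}
    bnds = [(0.0, 1.0 / (2 * n * lam))] * n
    return minimize(neg_dual, np.zeros(n), bounds=bnds, constraints=cons).x

# Two separable points on the x-axis; the optimum is c = (0.5, 0.5).
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
c = solve_svm_dual(X, y)
w = (c * y) @ X   # w = sum_i c_i y_i x_i
print(w)          # approximately [1, 0]
```

Recovering w from the dual coefficients uses exactly the relation w = Σᵢ cᵢ yᵢ xᵢ given in the next paragraph.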
Here, the variablesci{\displaystyle c_{i}}are defined such that w=∑i=1nciyixi.{\displaystyle \mathbf {w} =\sum _{i=1}^{n}c_{i}y_{i}\mathbf {x} _{i}.} Moreover,ci=0{\displaystyle c_{i}=0}exactly whenxi{\displaystyle \mathbf {x} _{i}}lies on the correct side of the margin, and0<ci<(2nλ)−1{\displaystyle 0<c_{i}<(2n\lambda )^{-1}}whenxi{\displaystyle \mathbf {x} _{i}}lies on the margin's boundary. It follows thatw{\displaystyle \mathbf {w} }can be written as a linear combination of the support vectors. The offset,b{\displaystyle b}, can be recovered by finding anxi{\displaystyle \mathbf {x} _{i}}on the margin's boundary and solvingyi(wTxi−b)=1⟺b=wTxi−yi.{\displaystyle y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)=1\iff b=\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-y_{i}.} (Note thatyi−1=yi{\displaystyle y_{i}^{-1}=y_{i}}sinceyi=±1{\displaystyle y_{i}=\pm 1}.) Suppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data pointsφ(xi).{\displaystyle \varphi (\mathbf {x} _{i}).}Moreover, we are given a kernel functionk{\displaystyle k}which satisfiesk(xi,xj)=φ(xi)⋅φ(xj){\displaystyle k(\mathbf {x} _{i},\mathbf {x} _{j})=\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j})}. 
We know the classification vectorw{\displaystyle \mathbf {w} }in the transformed space satisfies w=∑i=1nciyiφ(xi),{\displaystyle \mathbf {w} =\sum _{i=1}^{n}c_{i}y_{i}\varphi (\mathbf {x} _{i}),} where, theci{\displaystyle c_{i}}are obtained by solving the optimization problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(φ(xi)⋅φ(xj))yjcj=∑i=1nci−12∑i=1n∑j=1nyicik(xi,xj)yjcjsubject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}{\text{maximize}}\,\,f(c_{1}\ldots c_{n})&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(\varphi (\mathbf {x} _{i})\cdot \varphi (\mathbf {x} _{j}))y_{j}c_{j}\\&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}k(\mathbf {x} _{i},\mathbf {x} _{j})y_{j}c_{j}\\{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}&=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} The coefficientsci{\displaystyle c_{i}}can be solved for using quadratic programming, as before. Again, we can find some indexi{\displaystyle i}such that0<ci<(2nλ)−1{\displaystyle 0<c_{i}<(2n\lambda )^{-1}}, so thatφ(xi){\displaystyle \varphi (\mathbf {x} _{i})}lies on the boundary of the margin in the transformed space, and then solve b=wTφ(xi)−yi=[∑j=1ncjyjφ(xj)⋅φ(xi)]−yi=[∑j=1ncjyjk(xj,xi)]−yi.{\displaystyle {\begin{aligned}b=\mathbf {w} ^{\mathsf {T}}\varphi (\mathbf {x} _{i})-y_{i}&=\left[\sum _{j=1}^{n}c_{j}y_{j}\varphi (\mathbf {x} _{j})\cdot \varphi (\mathbf {x} _{i})\right]-y_{i}\\&=\left[\sum _{j=1}^{n}c_{j}y_{j}k(\mathbf {x} _{j},\mathbf {x} _{i})\right]-y_{i}.\end{aligned}}} Finally, z↦sgn⁡(wTφ(z)−b)=sgn⁡([∑i=1nciyik(xi,z)]−b).{\displaystyle \mathbf {z} \mapsto \operatorname {sgn}(\mathbf {w} ^{\mathsf {T}}\varphi (\mathbf {z} )-b)=\operatorname {sgn} \left(\left[\sum _{i=1}^{n}c_{i}y_{i}k(\mathbf {x} _{i},\mathbf {z} )\right]-b\right).} Recent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent. 
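The final kernelized decision rule z ↦ sgn(Σᵢ cᵢ yᵢ k(xᵢ, z) − b) can be sketched as follows (an illustrative numpy version; the dual coefficients here are hypothetical values, not the result of an actual optimization):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian kernel k(a, b) = exp(-gamma ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def decision(z, X, y, c, b, kernel=rbf):
    """sgn( sum_i c_i y_i k(x_i, z) - b ), as in the text."""
    s = sum(ci * yi * kernel(xi, z) for ci, yi, xi in zip(c, y, X))
    return np.sign(s - b)

X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])
c = np.array([0.5, 0.5])   # hypothetical dual coefficients
print(decision(np.array([2.0, 0.0]), X, y, c, b=0.0))   # 1.0
```

Note that prediction never forms φ(x) explicitly: only kernel evaluations against the support vectors (those with cᵢ > 0) are needed.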
Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets—sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high. Sub-gradient descentalgorithms for the SVM work directly with the expression f(w,b)=[1n∑i=1nmax(0,1−yi(wTxi−b))]+λ‖w‖2.{\displaystyle f(\mathbf {w} ,b)=\left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} _{i}-b)\right)\right]+\lambda \|\mathbf {w} \|^{2}.} Note thatf{\displaystyle f}is aconvex functionofw{\displaystyle \mathbf {w} }andb{\displaystyle b}. As such, traditionalgradient descent(orSGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function'ssub-gradient. This approach has the advantage that, for certain implementations, the number of iterations does not scale withn{\displaystyle n}, the number of data points.[24] Coordinate descentalgorithms for the SVM work from the dual problem maximizef(c1…cn)=∑i=1nci−12∑i=1n∑j=1nyici(xi⋅xj)yjcj,subject to∑i=1nciyi=0,and0≤ci≤12nλfor alli.{\displaystyle {\begin{aligned}&{\text{maximize}}\,\,f(c_{1}\ldots c_{n})=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(x_{i}\cdot x_{j})y_{j}c_{j},\\&{\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.\end{aligned}}} For eachi∈{1,…,n}{\displaystyle i\in \{1,\,\ldots ,\,n\}}, iteratively, the coefficientci{\displaystyle c_{i}}is adjusted in the direction of∂f/∂ci{\displaystyle \partial f/\partial c_{i}}. Then, the resulting vector of coefficients(c1′,…,cn′){\displaystyle (c_{1}',\,\ldots ,\,c_{n}')}is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) 
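A minimal (batch) sub-gradient descent on f(w, b) above might look like this; it is a sketch, not one of the cited implementations, and the step size and iteration count are arbitrary:

```python
import numpy as np

def svm_subgradient(X, y, lam=0.01, lr=0.1, steps=500):
    """Batch sub-gradient descent on
    f(w, b) = (1/n) sum_i max(0, 1 - y_i (w.x_i - b)) + lam ||w||^2."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        viol = y * (X @ w - b) < 1          # points inside the hinge region
        # Sub-gradient: only violating points contribute to the hinge term.
        gw = 2 * lam * w - X[viol].T @ y[viol] / n
        gb = y[viol].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

X = np.array([[2.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, -1.0])
w, b = svm_subgradient(X, y)
print((np.sign(X @ w - b) == y).all())  # True
```

Stochastic variants such as Pegasos pick one sample per step and shrink the step size over time, which is what gives the iteration counts independent of n mentioned above.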
The process is then repeated until a near-optimal vector of coefficients is obtained. The resulting algorithm is extremely fast in practice, although few performance guarantees have been proven.[25] The soft-margin support vector machine described above is an example of anempirical risk minimization(ERM) algorithm for thehinge loss. Seen this way, support vector machines belong to a natural class of algorithms for statistical inference, and many of its unique features are due to the behavior of the hinge loss. This perspective can provide further insight into how and why SVMs work, and allow us to better analyze their statistical properties. In supervised learning, one is given a set of training examplesX1…Xn{\displaystyle X_{1}\ldots X_{n}}with labelsy1…yn{\displaystyle y_{1}\ldots y_{n}}, and wishes to predictyn+1{\displaystyle y_{n+1}}givenXn+1{\displaystyle X_{n+1}}. To do so one forms ahypothesis,f{\displaystyle f}, such thatf(Xn+1){\displaystyle f(X_{n+1})}is a "good" approximation ofyn+1{\displaystyle y_{n+1}}. A "good" approximation is usually defined with the help of aloss function,ℓ(y,z){\displaystyle \ell (y,z)}, which characterizes how badz{\displaystyle z}is as a prediction ofy{\displaystyle y}. We would then like to choose a hypothesis that minimizes theexpected risk: ε(f)=E[ℓ(yn+1,f(Xn+1))].{\displaystyle \varepsilon (f)=\mathbb {E} \left[\ell (y_{n+1},f(X_{n+1}))\right].} In most cases, we don't know the joint distribution ofXn+1,yn+1{\displaystyle X_{n+1},\,y_{n+1}}outright. 
In these cases, a common strategy is to choose the hypothesis that minimizes theempirical risk: ε^(f)=1n∑k=1nℓ(yk,f(Xk)).{\displaystyle {\hat {\varepsilon }}(f)={\frac {1}{n}}\sum _{k=1}^{n}\ell (y_{k},f(X_{k})).} Under certain assumptions about the sequence of random variablesXk,yk{\displaystyle X_{k},\,y_{k}}(for example, that they are generated by a finite Markov process), if the set of hypotheses being considered is small enough, the minimizer of the empirical risk will closely approximate the minimizer of the expected risk asn{\displaystyle n}grows large. This approach is calledempirical risk minimization,or ERM. In order for the minimization problem to have a well-defined solution, we have to place constraints on the setH{\displaystyle {\mathcal {H}}}of hypotheses being considered. IfH{\displaystyle {\mathcal {H}}}is anormed space(as is the case for SVM), a particularly effective technique is to consider only those hypothesesf{\displaystyle f}for which‖f‖H<k{\displaystyle \lVert f\rVert _{\mathcal {H}}<k}. This is equivalent to imposing aregularization penaltyR(f)=λk‖f‖H{\displaystyle {\mathcal {R}}(f)=\lambda _{k}\lVert f\rVert _{\mathcal {H}}}, and solving the new optimization problem f^=argminf∈Hε^(f)+R(f).{\displaystyle {\hat {f}}=\mathrm {arg} \min _{f\in {\mathcal {H}}}{\hat {\varepsilon }}(f)+{\mathcal {R}}(f).} This approach is calledTikhonov regularization. More generally,R(f){\displaystyle {\mathcal {R}}(f)}can be some measure of the complexity of the hypothesisf{\displaystyle f}, so that simpler hypotheses are preferred. 
Recall that the (soft-margin) SVM classifierw^,b:x↦sgn⁡(w^Tx−b){\displaystyle {\hat {\mathbf {w} }},b:\mathbf {x} \mapsto \operatorname {sgn}({\hat {\mathbf {w} }}^{\mathsf {T}}\mathbf {x} -b)}is chosen to minimize the following expression: [1n∑i=1nmax(0,1−yi(wTx−b))]+λ‖w‖2.{\displaystyle \left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}(\mathbf {w} ^{\mathsf {T}}\mathbf {x} -b)\right)\right]+\lambda \|\mathbf {w} \|^{2}.} In light of the above discussion, we see that the SVM technique is equivalent to empirical risk minimization with Tikhonov regularization, where in this case the loss function is thehinge loss ℓ(y,z)=max(0,1−yz).{\displaystyle \ell (y,z)=\max \left(0,1-yz\right).} From this perspective, SVM is closely related to other fundamentalclassification algorithmssuch asregularized least-squaresandlogistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with thesquare-loss,ℓsq(y,z)=(y−z)2{\displaystyle \ell _{sq}(y,z)=(y-z)^{2}}; logistic regression employs thelog-loss, ℓlog(y,z)=ln⁡(1+e−yz).{\displaystyle \ell _{\log }(y,z)=\ln(1+e^{-yz}).} The difference between the hinge loss and these other loss functions is best stated in terms oftarget functions -the function that minimizes expected risk for a given pair of random variablesX,y{\displaystyle X,\,y}. In particular, letyx{\displaystyle y_{x}}denotey{\displaystyle y}conditional on the event thatX=x{\displaystyle X=x}. 
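The three losses compared here differ most visibly on confidently correct predictions, which a short numeric sketch makes concrete (illustrative Python, not from the source):

```python
import numpy as np

def hinge(y, z):
    """max(0, 1 - yz): zero beyond the margin."""
    return np.maximum(0.0, 1.0 - y * z)

def square(y, z):
    """(y - z)^2: penalizes even confident correct predictions."""
    return (y - z) ** 2

def logloss(y, z):
    """ln(1 + e^{-yz}): small but nonzero for correct predictions."""
    return np.log(1.0 + np.exp(-y * z))

# A confidently correct prediction: y = 1, z = 3.
print(hinge(1, 3))              # 0.0 -- hinge ignores it entirely
print(square(1, 3))             # 4.0 -- square loss still penalizes it
print(round(logloss(1, 3), 4))  # 0.0486
```

This is the numerical face of the target-function discussion that follows: the hinge loss is indifferent to how far a point lies on the correct side, while the other two keep extracting distributional information.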
In the classification setting, we have: yx={1with probabilitypx−1with probability1−px{\displaystyle y_{x}={\begin{cases}1&{\text{with probability }}p_{x}\\-1&{\text{with probability }}1-p_{x}\end{cases}}} The optimal classifier is therefore: f∗(x)={1ifpx≥1/2−1otherwise{\displaystyle f^{*}(x)={\begin{cases}1&{\text{if }}p_{x}\geq 1/2\\-1&{\text{otherwise}}\end{cases}}} For the square-loss, the target function is the conditional expectation function,fsq(x)=E[yx]{\displaystyle f_{sq}(x)=\mathbb {E} \left[y_{x}\right]}; For the logistic loss, it's the logit function,flog(x)=ln⁡(px/(1−px)){\displaystyle f_{\log }(x)=\ln \left(p_{x}/({1-p_{x}})\right)}. While both of these target functions yield the correct classifier, assgn⁡(fsq)=sgn⁡(flog)=f∗{\displaystyle \operatorname {sgn}(f_{sq})=\operatorname {sgn}(f_{\log })=f^{*}}, they give us more information than we need. In fact, they give us enough information to completely describe the distribution ofyx{\displaystyle y_{x}}. On the other hand, one can check that the target function for the hinge loss isexactlyf∗{\displaystyle f^{*}}. Thus, in a sufficiently rich hypothesis space—or equivalently, for an appropriately chosen kernel—the SVM classifier will converge to the simplest function (in terms ofR{\displaystyle {\mathcal {R}}}) that correctly classifies the data. This extends the geometric interpretation of SVM—for linear classification, the empirical risk is minimized by any function whose margins lie between the support vectors, and the simplest of these is the max-margin classifier.[26] SVMs belong to a family of generalizedlinear classifiersand can be interpreted as an extension of theperceptron.[27]They can also be considered a special case ofTikhonov regularization. A special property is that they simultaneously minimize the empiricalclassification errorand maximize thegeometric margin; hence they are also known asmaximummargin classifiers. 
A comparison of the SVM to other classifiers has been made by Meyer, Leisch and Hornik.[28] The effectiveness of SVM depends on the selection of the kernel, the kernel's parameters, and the soft-margin parameter λ{\displaystyle \lambda }. A common choice is a Gaussian kernel, which has a single parameter γ{\displaystyle \gamma }. The best combination of λ{\displaystyle \lambda } and γ{\displaystyle \gamma } is often selected by a grid search with exponentially growing sequences of λ{\displaystyle \lambda } and γ{\displaystyle \gamma }, for example, λ∈{2−5,2−3,…,213,215}{\displaystyle \lambda \in \{2^{-5},2^{-3},\dots ,2^{13},2^{15}\}}; γ∈{2−15,2−13,…,21,23}{\displaystyle \gamma \in \{2^{-15},2^{-13},\dots ,2^{1},2^{3}\}}. Typically, each combination of parameter choices is checked using cross-validation, and the parameters with the best cross-validation accuracy are picked. Alternatively, recent work in Bayesian optimization can be used to select λ{\displaystyle \lambda } and γ{\displaystyle \gamma }, often requiring the evaluation of far fewer parameter combinations than grid search. The final model, which is used for testing and for classifying new data, is then trained on the whole training set using the selected parameters.[29] Potential drawbacks of the SVM include the following aspects: Multiclass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements.
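The grid-search-with-cross-validation procedure described above can be sketched end to end for a linear SVM; this toy version (my own helper functions, with a simple sub-gradient trainer standing in for a real solver) searches an exponentially spaced λ grid:

```python
import numpy as np

def train(X, y, lam, lr=0.01, steps=300):
    """Plain sub-gradient soft-margin SVM trainer (linear kernel)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        viol = y * (X @ w - b) < 1
        w -= lr * (2 * lam * w - X[viol].T @ y[viol] / len(y))
        b -= lr * (y[viol].sum() / len(y))
    return w, b

def cv_accuracy(X, y, lam, folds=2):
    """Mean held-out accuracy over simple interleaved folds."""
    idx = np.arange(len(y))
    scores = []
    for f in range(folds):
        test = idx % folds == f
        w, b = train(X[~test], y[~test], lam)
        scores.append((np.sign(X[test] @ w - b) == y[test]).mean())
    return np.mean(scores)

X = np.array([[2.0, 0.0], [3.0, 0.0], [-2.0, 0.0], [-3.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
grid = [2.0 ** k for k in range(-5, 6, 2)]   # exponentially growing sequence
best = max(grid, key=lambda lam: cv_accuracy(X, y, lam))
```

As the text notes, the model actually deployed would then be retrained on the full training set with the selected parameters.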
The dominant approach for doing so is to reduce the singlemulticlass probleminto multiplebinary classificationproblems.[30]Common methods for such reduction include:[30][31] Crammer and Singer proposed a multiclass SVM method which casts themulticlass classificationproblem into a single optimization problem, rather than decomposing it into multiple binary classification problems.[34]See also Lee, Lin and Wahba[35][36]and Van den Burg and Groenen.[37] Transductive support vector machines extend SVMs in that they could also treat partially labeled data insemi-supervised learningby following the principles oftransduction. Here, in addition to the training setD{\displaystyle {\mathcal {D}}}, the learner is also given a set D⋆={xi⋆∣xi⋆∈Rp}i=1k{\displaystyle {\mathcal {D}}^{\star }=\{\mathbf {x} _{i}^{\star }\mid \mathbf {x} _{i}^{\star }\in \mathbb {R} ^{p}\}_{i=1}^{k}} of test examples to be classified. Formally, a transductive support vector machine is defined by the following primal optimization problem:[38] Minimize (inw,b,y⋆{\displaystyle \mathbf {w} ,b,\mathbf {y} ^{\star }}) 12‖w‖2{\displaystyle {\frac {1}{2}}\|\mathbf {w} \|^{2}} subject to (for anyi=1,…,n{\displaystyle i=1,\dots ,n}and anyj=1,…,k{\displaystyle j=1,\dots ,k}) yi(w⋅xi−b)≥1,yj⋆(w⋅xj⋆−b)≥1,{\displaystyle {\begin{aligned}&y_{i}(\mathbf {w} \cdot \mathbf {x} _{i}-b)\geq 1,\\&y_{j}^{\star }(\mathbf {w} \cdot \mathbf {x} _{j}^{\star }-b)\geq 1,\end{aligned}}} and yj⋆∈{−1,1}.{\displaystyle y_{j}^{\star }\in \{-1,1\}.} Transductive support vector machines were introduced by Vladimir N. Vapnik in 1998. Structured support-vector machine is an extension of the traditional SVM model. 
While the SVM model is primarily designed for binary classification, multiclass classification, and regression tasks, structured SVM broadens its application to handle general structured output labels, for example parse trees, classification with taxonomies, sequence alignment and many more.[39] A version of SVM forregressionwas proposed in 1996 byVladimir N. Vapnik, Harris Drucker, Christopher J. C. Burges, Linda Kaufman and Alexander J. Smola.[40]This method is called support vector regression (SVR). The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data close to the model prediction. Another SVM version known asleast-squares support vector machine(LS-SVM) has been proposed by Suykens and Vandewalle.[41] Training the original SVR means solving[42] wherexi{\displaystyle x_{i}}is a training sample with target valueyi{\displaystyle y_{i}}. The inner product plus intercept⟨w,xi⟩+b{\displaystyle \langle w,x_{i}\rangle +b}is the prediction for that sample, andε{\displaystyle \varepsilon }is a free parameter that serves as a threshold: all predictions have to be within anε{\displaystyle \varepsilon }range of the true predictions. Slack variables are usually added into the above to allow for errors and to allow approximation in the case the above problem is infeasible. In 2011 it was shown by Polson and Scott that the SVM admits aBayesianinterpretation through the technique ofdata augmentation.[43]In this approach the SVM is viewed as agraphical model(where the parameters are connected via probability distributions). 
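The ε-insensitive behavior described for SVR (predictions within an ε tube of the target incur no cost) is captured by the following loss, sketched in Python (the function name is illustrative):

```python
import numpy as np

def eps_insensitive(y_true, y_pred, eps=0.1):
    """epsilon-insensitive loss: zero inside the tube, linear outside."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

print(eps_insensitive(1.0, 1.05))           # 0.0 (inside the tube)
print(round(eps_insensitive(1.0, 1.3), 4))  # 0.2 (0.3 beyond the target, minus eps)
```

This is why the SVR model depends only on a subset of the training data: points whose predictions fall inside the tube contribute nothing to the cost.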
This extended view allows the application ofBayesiantechniques to SVMs, such as flexible feature modeling, automatichyperparametertuning, andpredictive uncertainty quantification. Recently, a scalable version of the Bayesian SVM was developed byFlorian Wenzel, enabling the application of Bayesian SVMs tobig data.[44]Florian Wenzel developed two different versions, a variational inference (VI) scheme for the Bayesian kernel support vector machine (SVM) and a stochastic version (SVI) for the linear Bayesian SVM.[45] The parameters of the maximum-margin hyperplane are derived by solving the optimization. There exist several specialized algorithms for quickly solving thequadratic programming(QP) problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller, more manageable chunks. Another approach is to use aninterior-point methodthat usesNewton-like iterations to find a solution of theKarush–Kuhn–Tucker conditionsof the primal and dual problems.[46]Instead of solving a sequence of broken-down problems, this approach directly solves the problem altogether. To avoid solving a linear system involving the large kernel matrix, a low-rank approximation to the matrix is often used in the kernel trick. Another common method is Platt'ssequential minimal optimization(SMO) algorithm, which breaks the problem down into 2-dimensional sub-problems that are solved analytically, eliminating the need for a numerical optimization algorithm and matrix storage. This algorithm is conceptually simple, easy to implement, generally faster, and has better scaling properties for difficult SVM problems.[47] The special case of linear support vector machines can be solved more efficiently by the same kind of algorithms used to optimize its close cousin,logistic regression; this class of algorithms includessub-gradient descent(e.g., PEGASOS[48]) andcoordinate descent(e.g., LIBLINEAR[49]). LIBLINEAR has some attractive training-time properties. 
Each convergence iteration takes time linear in the time taken to read the training data, and the iterations also have a Q-linear convergence property, making the algorithm extremely fast. The general kernel SVMs can also be solved more efficiently using sub-gradient descent (e.g., P-packSVM[50]), especially when parallelization is allowed. Kernel SVMs are available in many machine-learning toolkits, including LIBSVM, MATLAB, SAS, SVMlight, kernlab, scikit-learn, Shogun, Weka, Shark, JKernelMachines, OpenCV and others. Preprocessing of the data (standardization) is highly recommended to improve classification accuracy.[51] There are several methods of standardization, such as min-max scaling, normalization by decimal scaling, and the Z-score.[52] Subtracting the mean and dividing by the standard deviation of each feature is the approach usually used for SVM.[53]
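The Z-score standardization recommended above (subtract the per-feature mean, divide by the per-feature standard deviation) is a one-liner; this numpy sketch verifies that the result has zero mean and unit spread:

```python
import numpy as np

def zscore(X):
    """Standardize each feature to zero mean and unit standard deviation."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Features on wildly different scales, as often arises in practice.
X = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
Z = zscore(X)
print(Z.mean(axis=0))  # approximately [0, 0]
print(Z.std(axis=0))   # [1, 1]
```

In a real pipeline the mean and standard deviation would be computed on the training set only and reused to transform test data, so that no test information leaks into training.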
https://en.wikipedia.org/wiki/Support_vector_machines
In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection. In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection[1] and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection. The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line.[further explanation needed] A necessary condition for two lines to intersect is that they are in the same plane, that is, that they are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see Skew lines § Testing for skewness. First we consider the intersection of two lines L1 and L2 in two-dimensional space, with line L1 being defined by two distinct points (x1, y1) and (x2, y2), and line L2 being defined by two distinct points (x3, y3) and (x4, y4).[2] The intersection P of lines L1 and L2 can be defined using determinants. The determinants can be written out as: When the two lines are parallel or coincident, the denominator is zero. The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments.
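The determinant formula for the intersection of the two infinite lines can be sketched directly (plain Python; the function name is illustrative):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4),
    via the determinant formula; None when parallel or coincident."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None                  # parallel or coincident: denominator is zero
    d1 = x1 * y2 - y1 * x2           # |x1 y1; x2 y2|
    d2 = x3 * y4 - y3 * x4           # |x3 y3; x4 y4|
    px = (d1 * (x3 - x4) - (x1 - x2) * d2) / denom
    py = (d1 * (y3 - y4) - (y1 - y2) * d2) / denom
    return px, py

# The diagonals of the unit-ish square cross at (1, 1).
print(line_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
```

As noted in the text, this treats the lines as infinitely long: the returned point may lie outside both defining segments.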
In order to find the position of the intersection in respect to the line segments, we can define linesL1andL2in terms of first degreeBézierparameters: (wheretanduare real numbers). The intersection point of the lines is found with one of the following values oftoru, where and with There will be an intersection if0 ≤t≤ 1and0 ≤u≤ 1. The intersection point falls within the first line segment if0 ≤t≤ 1, and it falls within the second line segment if0 ≤u≤ 1. These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point.[3] In the case where the two line segments share an x axis andx2=x1+1{\displaystyle x_{2}=x_{1}+1},t{\displaystyle t}andu{\displaystyle u}simplify tot=u=y1−y3y1−y2−y3+y4,{\displaystyle t=u={\frac {y_{1}-y_{3}}{y_{1}-y_{2}-y_{3}+y_{4}}},}with(Px,Py)=(x1+t,y1+t(y2−y1))or(Px,Py)=(x1+t,y3+t(y4−y3)).{\displaystyle (P_{x},P_{y})={\bigl (}x_{1}+t,\;y_{1}+t(y_{2}-y_{1}){\bigr )}\quad {\text{or}}\quad (P_{x},P_{y})={\bigl (}x_{1}+t,\;y_{3}+t(y_{4}-y_{3}){\bigr )}.} Thexandycoordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements. Suppose that two lines have the equationsy=ax+candy=bx+dwhereaandbare theslopes(gradients) of the lines and wherecanddare they-intercepts of the lines. At the point where the two lines intersect (if they do), bothycoordinates will be the same, hence the following equality: We can rearrange this expression in order to extract the value ofx, and so, To find theycoordinate, all we need to do is substitute the value ofxinto either one of the two line equations, for example, into the first: Hence, the point of intersection is Note that ifa=bthen the two lines areparalleland they do not intersect, unlessc=das well, in which case the lines are coincident and they intersect at every point. 
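The segment test using the first-degree Bézier parameters t and u can be sketched as follows (plain Python; for speed-critical code the 0 ≤ t, u ≤ 1 checks would be done on the numerators before dividing, as the text suggests):

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection of segments p1p2 and p3p4 using the Bezier
    parameters t and u; None if the segments do not cross."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None                  # parallel or coincident
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:  # point lies within both segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
print(segment_intersection((0, 0), (1, 1), (5, 5), (6, 7)))  # None
```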
By usinghomogeneous coordinates, the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple(x,y,w). The mapping from 3D to 2D coordinates is(x′,y′) = (⁠x/w⁠,⁠y/w⁠). We can convert 2D points to homogeneous coordinates by defining them as(x,y, 1). Assume that we want to find intersection of two infinite lines in 2-dimensional space, defined asa1x+b1y+c1= 0anda2x+b2y+c2= 0. We can represent these two lines inline coordinatesasU1= (a1,b1,c1)andU2= (a2,b2,c2). The intersectionP′of two lines is then simply given by[4] Ifcp= 0, the lines do not intersect. The intersection of two lines can be generalized to involve additional lines. The existence of and expression for then-line intersection problem are as follows. In two dimensions, more than two linesalmost certainlydo not intersect at a single point. To determine if they do and, if so, to find the intersection point, write theith equation (i= 1, …,n) as and stack these equations into matrix form as where theith row of then× 2matrixAis[ai1,ai2],wis the 2 × 1 vector[xy], and theith element of the column vectorbisbi. IfAhas independent columns, itsrankis 2. Then if and only if the rank of theaugmented matrix[A|b]is also 2, there exists a solution of the matrix equation and thus an intersection point of thenlines. The intersection point, if it exists, is given by whereAgis theMoore–Penrose generalized inverseofA(which has the form shown becauseAhas full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank ofAis only 1, then if the rank of the augmented matrix is 2 there is no solution but if its rank is 1 then all of the lines coincide with each other. The above approach can be readily extended to three dimensions. 
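In homogeneous coordinates the intersection of the two lines is simply the cross product of their line coordinates, which makes for a very short sketch (numpy-based, illustrative names):

```python
import numpy as np

def intersect_homogeneous(U1, U2):
    """Lines a x + b y + c = 0 given as (a, b, c); their intersection is
    the cross product, de-homogenized by the last coordinate."""
    x, y, w = np.cross(U1, U2)
    if w == 0:
        return None       # cp = 0: the lines do not intersect
    return x / w, y / w   # map (x, y, w) back to 2D as (x/w, y/w)

# x - y = 0 and x + y - 2 = 0 meet at (1, 1).
print(intersect_homogeneous([1.0, -1.0, 0.0], [1.0, 1.0, -2.0]))  # (1.0, 1.0)
```

Parallel lines give a cross product whose last coordinate is zero, i.e. a "point at infinity", which is exactly the cp = 0 case in the text.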
In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are calledskew lines. But if an intersection does exist it can be found, as follows. In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form Thus a set ofnlines can be represented by2nequations in the 3-dimensional coordinate vectorw: where nowAis2n× 3andbis2n× 1. As before there is a unique intersection point if and only ifAhas full column rank and the augmented matrix[A|b]does not, and the unique intersection if it exists is given by In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in aleast-squaressense. In the two-dimensional case, first, represent lineias a pointpion the line and aunitnormal vectorn̂i, perpendicular to that line. That is, ifx1andx2are points on line 1, then letp1=x1and let which is the unit vector along the line, rotated by a right angle. The distance from a pointxto the line(p,n̂)is given by And so the squared distance from a pointxto a line is The sum of squared distances to many lines is thecost function: This can be rearranged: To find the minimum, we differentiate with respect toxand set the result equal to the zero vector: so and so Whilen̂iis not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting thatn̂in̂iTis simply the symmetric matrix with all eigenvalues unity except for a zero eigenvalue in the direction along the line providing aseminormon the distance betweenpiand another point giving the distance to the line. In any number of dimensions, ifv̂iis a unit vectoralongtheith line, then whereIis theidentity matrix, and so[5] In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an originaiand a unit direction vectorn̂i. 
The square of the distance from a point p to one of the lines is given by Pythagoras: where (p−ai)Tn̂i is the projection of p−ai on line i. The sum of the squared distances to all the lines is: To minimize this expression, we differentiate it with respect to p, which results in: where I is the identity matrix. This is a matrix equation Sp=C, with solution p=S+C, where S+ is the pseudo-inverse of S. In spherical geometry, any two great circles intersect.[6] In hyperbolic geometry, given any line and any point, there are infinitely many lines through that point that do not intersect the given line.[6]
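The least-squares construction above, with each line given by an origin aᵢ and direction vector, can be sketched with numpy's pseudo-inverse (illustrative function and variable names):

```python
import numpy as np

def nearest_point_to_lines(a, d):
    """Least-squares point closest to the lines p = a_i + t d_i:
    solve S p = C with S = sum(I - v v^T), C = sum((I - v v^T) a_i)."""
    dim = a.shape[1]
    v = d / np.linalg.norm(d, axis=1, keepdims=True)  # unit direction vectors
    S = np.zeros((dim, dim))
    C = np.zeros(dim)
    for ai, vi in zip(a, v):
        M = np.eye(dim) - np.outer(vi, vi)  # projector orthogonal to line i
        S += M
        C += M @ ai
    return np.linalg.pinv(S) @ C            # p = S+ C

# Two lines in 2D crossing at (1, 1): the horizontal y = 1 and the vertical x = 1.
a = np.array([[0.0, 1.0], [1.0, 0.0]])
d = np.array([[1.0, 0.0], [0.0, 1.0]])
print(nearest_point_to_lines(a, d))  # [1. 1.]
```

When the lines actually intersect, the least-squares point is the intersection itself; otherwise it is the mutually closest point, which is the useful case for noisy data or skew lines in 3D.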
https://en.wikipedia.org/wiki/Line%E2%80%93line_intersection
Line fitting is the process of constructing a straight line that has the best fit to a series of data points. Several methods exist, considering:
https://en.wikipedia.org/wiki/Line_fitting